# Polynomial algorithms for p-dispersion problems in a 2d Pareto Front
Nicolas Dupin
###### Abstract
Since bi-objective optimization problems may have many best compromise solutions, this paper studies p-dispersion problems to select $p\geqslant 2$ representative points in the Pareto Front (PF). Four standard variants of p-dispersion are considered. A novel variant, denoted Max-Sum-Neighbor p-dispersion, is introduced for the specific case of a 2d PF. Firstly, it is proven that the $2$-dispersion and $3$-dispersion problems are solvable in $O(n)$ time in a 2d PF. Secondly, dynamic programming algorithms are designed for three p-dispersion variants, proving polynomial complexities in a 2d PF. The Max-Min p-dispersion problem is proven solvable in $O(pn\log n)$ time and $O(n)$ memory space. The Max-Sum-Min p-dispersion problem is proven solvable in $O(pn^{3})$ time and $O(pn^{2})$ space. The Max-Sum-Neighbor p-dispersion problem is proven solvable in $O(pn^{2})$ time and $O(pn)$ space. Complexity results and parallelization issues are discussed with regard to practical implementations.
Keywords: Optimization; Operational Research; Computational Geometry; Dynamic programming; p-dispersion problems; complexity; bi-objective optimization; Pareto front; Parallel Computing
## 1 Introduction
This paper is motivated by real-life applications of multi-objective optimization (MOO) [5, 25]. Many best compromise solutions may exist under Pareto dominance [11]. A Pareto Front (PF) denotes the projection of these solutions in the space of the objectives. This work aims to select $p$ solutions among $n\gg p$ non-dominated solutions, while maximizing the diversity of these $p$ solutions in the objective space. Firstly, such a problem occurs when selecting alternatives for decision makers; $p$ is small in such applications. Secondly, a similar problem occurs inside MOO meta-heuristics to archive diversified solutions of the PF; $p$ is larger in such applications [32, 34]. For such problems, one can use the hypervolume measure as in [2, 21], or clustering algorithms as studied in [6, 7, 8].
In this paper, we consider (discrete) p-dispersion problems, dispersing the selected points as much as possible, as in [12]. Four variants of discrete p-dispersion problems are defined in [13]. The Max-Min and Max-Sum p-dispersion problems, the most studied variants, are proven NP-hard in the general case [12, 16]. Although p-dispersion is mentioned to have relevant applications for MOO [13, 27], no specific study concerned p-dispersion in a PF to the best of our knowledge. This paper studies the four variants of p-dispersion problems in the case of a 2-dimensional (2d) PF. Another problem, denoted Max-Sum-Neighbor p-dispersion, is also introduced for a 2d PF. For the five p-dispersion problems, the cases $p=2$ and $p=3$ are proven to be solvable in $O(n)$ time. More generally, the Max-Min, Max-Sum-Min and Max-Sum-Neighbor p-dispersion problems are proven to be solvable in polynomial time in a 2d PF thanks to dynamic programming (DP) algorithms.
This paper is organized as follows. In Section 2, we formally describe the problem. In Section 3, we discuss related state-of-the-art elements to appreciate our contributions. In Section 4, intermediate results are presented, allowing us to define the Max-Sum-Neighbor p-dispersion problem. In Section 5, a DP algorithm is presented for the Max-Sum-Neighbor variant, with a complexity proven in $O(pn^{2})$ time and $O(pn)$ memory space. In Section 6, the previous DP algorithm is adapted for the Max-Min variant, with a complexity in $O(pn\log n)$ time and $O(n)$ space. In Section 7, a DP algorithm is presented for the Max-Sum-Min variant, with a complexity in $O(pn^{3})$ time and $O(pn^{2})$ space. In Section 8, practical applications are discussed. In Section 9, our contributions are summarized, discussing also future directions of research.
## 2 Problem statement and notation
In this paper, we suppose having a set $E$ of $n$ elements of a 2d PF, where the minimization of two objectives is considered (without loss of generality, transforming any objective to maximize $f$ into $-f$). Similarly to [7], this can be formalized: $E=\\{x_{1},\dots,x_{n}\\}$ is a set of $n$ elements of ${\mathbb{R}}^{2}$ such that for all $i\neq j$, $x_{i}\,\mathcal{I}\,x_{j}$, defining the binary relations $\mathcal{I},\prec,\preccurlyeq$ for all $y=(y^{1},y^{2}),z=(z^{1},z^{2})\in{\mathbb{R}}^{2}$ with:
$y\prec z\Longleftrightarrow y^{1}<z^{1}\mbox{ and }y^{2}>z^{2}$ (1)

$y\preccurlyeq z\Longleftrightarrow y\prec z\mbox{ or }y=z$ (2)

$y\,\mathcal{I}\,z\Longleftrightarrow y\prec z\mbox{ or }z\prec y$ (3)
To measure distances between points $x_{i},x_{j}\in E$, we consider $d_{ij}=d(x_{i},x_{j})^{\alpha}$ where $\alpha>0$ and $d$ is the Euclidean distance:
$\forall
y=(y^{1},y^{2}),z=(z^{1},z^{2})\in{\mathbb{R}}^{2},\phantom{2}d(y,z)=\sqrt{\left(y^{1}-z^{1}\right)^{2}+\left(y^{2}-z^{2}\right)^{2}}$
(4)
The p-dispersion problems select $p\geqslant 2$ out of $n$ given candidate
points, while maximizing a dispersion function $f$:
${\mathcal{P}}_{disp}(E,p)=\max_{(z_{1},z_{2},\dots,z_{p})\in
D_{p}}f(z_{1},z_{2},\dots,z_{p})$ (5)
where $D_{p}$ denotes the set of the $p$-tuples of pairwise distinct points of $E$:
$D_{p}=\\{(z_{1},z_{2},\dots,z_{p})\in E^{p}|\forall 1\leqslant i<j\leqslant
p,z_{i}\neq z_{j}\\}$ (6)
In the most standard p-dispersion problem, also denoted the Max-Min p-dispersion problem or p-dispersion-MM, the dispersion function is the minimum of the distances $d_{ij}$ between pairs of selected points. Formally, the Max-Min p-dispersion problem for $p\geqslant 2$ can be written as:
${\mathcal{P}}_{disp}^{MM}(E,p)=\max_{(z_{1},z_{2},\dots,z_{p})\in
D_{p}}\min_{i,j:1\leqslant i<j\leqslant p}d_{ij}$ (7)
Another known variant is the Max-Sum(-Sum) dispersion problem, denoted
p-dispersion-MS, with:
${\mathcal{P}}_{disp}^{MS}(E,p)=\max_{(z_{1},z_{2},\dots,z_{p})\in
D_{p}}\sum_{i=1}^{p-1}\sum_{j=i+1}^{p}d_{ij}$ (8)
We consider also in this paper the Max-Sum-Min p-dispersion problem, denoted
p-dispersion-MSM:
${\mathcal{P}}_{disp}^{MSM}(E,p)=\max_{(z_{1},z_{2},\dots,z_{p})\in
D_{p}}\sum_{i=1}^{p}\min_{j\neq i:1\leqslant j\leqslant p}d_{ij}$ (9)
Another specific variant of Max-Sum-Min p-dispersion in a 2d PF will be defined in Section 4, using specific properties of a 2d PF. Lastly, the Max-Min-Sum p-dispersion problem, denoted p-dispersion-MMS, is defined as follows:
${\mathcal{P}}_{disp}^{MMS}(E,p)=\max_{(z_{1},z_{2},\dots,z_{p})\in
D_{p}}\min_{i\in[\\![1,p]\\!]}\sum_{j\in[\\![1,p]\\!]-\\{i\\}}d_{ij}$ (10)
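To fix ideas, the following Python sketch (an illustration, not part of the original algorithms) evaluates the dispersion functions of (7), (8), (9) and (10) on an already selected tuple of points, with $\alpha=1$; the p-dispersion problems then maximize these values over all selections in $D_{p}$. The last entry anticipates the Max-Sum-Neighbor function (20) introduced in Section 4.

```python
import math
from itertools import combinations

def dispersion_measures(selection):
    """Evaluate the dispersion functions (7)-(10) and (20) on an already
    selected list of 2d points, with alpha = 1 (so d_ij is the Euclidean
    distance). The p-dispersion problems maximize these over D_p."""
    pts = sorted(selection)          # indexed as in a 2d PF (see Section 4)
    p = len(pts)
    d = math.dist
    pair_dists = [d(pts[i], pts[j]) for i, j in combinations(range(p), 2)]
    # min distance from each selected point to another selected point
    nearest = [min(d(pts[i], pts[j]) for j in range(p) if j != i)
               for i in range(p)]
    # sum of distances from each selected point to the other ones
    sums = [sum(d(pts[i], pts[j]) for j in range(p) if j != i)
            for i in range(p)]
    return {
        "Max-Min (7)": min(pair_dists),
        "Max-Sum (8)": sum(pair_dists),
        "Max-Sum-Min (9)": sum(nearest),
        "Max-Min-Sum (10)": min(sums),
        "Max-Sum-Neighbor (20)": sum(d(pts[i], pts[i + 1])
                                     for i in range(p - 1)),
    }

# example: dispersion_measures([(0, 10), (1, 9), (3, 7), (5, 5)])
```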
## 3 Related works
This section describes related works to appreciate the contributions of this
paper.
### 3.1 Complexity results for p-dispersion problems
The Max-Min and Max-Sum p-dispersion problems are NP-hard [12, 16]. In both cases, this is proven with a polynomial reduction from the maximum independent set problem [12, 16]. The Max-Min and Max-Sum p-dispersion problems remain NP-hard when distances fulfill the triangle inequality [12, 16]. The planar (2d) Max-Min p-dispersion problem is also NP-hard [33]; the NP-hardness of the planar Max-Sum p-dispersion problem is still an open question to the best of our knowledge. The one-dimensional (1d) cases of the Max-Min and Max-Sum p-dispersion problems are solvable in polynomial time, with a similar DP algorithm running in $O(\max\\{pn,n\log n\\})$ time [27, 33]. Complexity results have also been proven for the approximability of several variants of p-dispersion problems. Firstly, for all $\alpha>1$, computing an $\alpha$-approximation for the Max-Min p-dispersion problem is NP-hard in the general case [27]. In two specific cases, the 2d case and the case with triangle inequality, there exists a $2$-approximation for the Max-Min p-dispersion problem [27]. For all $\alpha<2$, computing an $\alpha$-approximation for the Max-Min p-dispersion problem with triangle inequality is NP-hard [27].
Although p-dispersion is mentioned to have relevant applications for MOO [13, 27], no specific study concerned p-dispersion in a PF to the best of our knowledge. We note that an affine 2d PF is a line in ${\mathbb{R}}^{2}$; such a case is equivalent to the 1d case. Hence, the Max-Min and Max-Sum p-dispersion problems are solvable in $O(\max\\{pn,n\log n\\})$ time in an affine 2d PF thanks to [27, 33]. General planar cases of p-dispersion problems can also be seen as specific cases of three-dimensional (3d) PFs, namely affine 3d PFs. Since the planar case of Max-Min p-dispersion is NP-hard, the Max-Min p-dispersion problem is also NP-hard in a 3d PF.
### 3.2 Exact methods to solve p-dispersion problems
The Max-Sum p-dispersion problem can be formulated as a quadratic optimization
problem, defining binary variables $z_{j}\in\\{0,1\\}$ with $z_{j}=1$ if and
only if the point $x_{j}$ is selected:
$\begin{array}[]{rlr}{\mathcal{P}}_{disp}^{MS}(E,p)=&\max\displaystyle\sum_{i=1}^{n}\sum_{j=1}^{n}d_{i,j}z_{i}z_{j}&(11.1)\\\ s.t:&\displaystyle\sum_{j=1}^{n}z_{j}=p&(11.2)\\\ &z_{j}\in\\{0,1\\}\hskip 19.91684pt\forall j\in[\\![1,n]\\!]&(11.3)\end{array}$ (11)
The linearization of the quadratic terms $z_{i}z_{j}$ in (11.1) leads to the Integer Linear Programming (ILP) formulation provided by [20]. Mathematical programming formulations are also used in exact Branch&Bound (B&B) algorithms iterating with the computations of upper and lower bounds, with a Lagrangian relaxation in [1] or with upper bounds computable in $O(n^{3})$ time in [26].
Similarly, the Max-Min p-dispersion problem can be modeled using non-linear optimization ([26]):

$\begin{array}[]{rlr}{\mathcal{P}}_{disp}^{MM}(E,p)=&\displaystyle\max_{d\geqslant 0}\;d&(12.1)\\\ s.t:&\displaystyle\sum_{j=1}^{n}z_{j}=p&(12.2)\\\ &d\,z_{i}z_{j}\leqslant d_{i,j}\hskip 19.91684pt\forall 1\leqslant i<j\leqslant n&(12.3)\\\ &z_{j}\in\\{0,1\\}\hskip 19.91684pt\forall j\in[\\![1,n]\\!]&(12.4)\end{array}$ (12)
The standard linearization of the constraints (12.3) leads to the Mixed Integer Linear Programming (MILP) formulation provided by [20]. Another MILP formulation was proposed to speed up the resolution, using straightforward B&B solving and implementing specific cuts in a Branch&Cut algorithm [28]. The decomposition schemes from [1] and [26] can also be extended to the Max-Min p-dispersion problem.
Similarly, MILP formulations were designed for the Max-Sum-Min and Max-Min-Sum p-dispersion variants [13]. Such variants were less studied; a recent work proposed a unified MILP formulation and a B&B algorithm for the four variants of p-dispersion problems [22].
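As an illustration of these formulations, the following sketch models (12) with a standard big-M linearization of (12.3), $d\leqslant d_{i,j}+M(2-z_{i}-z_{j})$, using the PuLP modeling library and its default CBC solver. This is a hedged example of the modeling idea only; the exact formulations of [20, 28] differ and are stronger.

```python
import math
from itertools import combinations
import pulp

def maxmin_dispersion_milp(points, p):
    """Sketch: big-M linearization of (12) for Max-Min p-dispersion,
    an illustration of the modeling idea, not the formulations of [20, 28]."""
    n = len(points)
    dist = {(i, j): math.dist(points[i], points[j])
            for i, j in combinations(range(n), 2)}
    M = max(dist.values())                       # big-M: largest distance

    prob = pulp.LpProblem("maxmin_p_dispersion", pulp.LpMaximize)
    z = [pulp.LpVariable(f"z{i}", cat=pulp.LpBinary) for i in range(n)]
    d = pulp.LpVariable("d", lowBound=0)
    prob += d                                    # objective (12.1)
    prob += pulp.lpSum(z) == p                   # constraint (12.2)
    for (i, j), dij in dist.items():             # linearized (12.3)
        prob += d <= dij + M * (2 - z[i] - z[j])
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return d.value(), [i for i in range(n) if z[i].value() > 0.5]
```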
### 3.3 Clustering/selecting points in Pareto frontiers
We summarize here results related to the selection or the clustering of points in a PF, with applications to MOO algorithms. Maximizing the quality of discrete representations of Pareto sets was studied with the hypervolume measure in the Hypervolume Subset Selection (HSS) problem [2, 29]. The HSS problem, maximizing the representativity of $k$ solutions among a PF of $n$ initial ones, is known to be NP-hard in dimension 3 (and greater dimensions) since [3]. An exact algorithm in $n^{O(\sqrt{k})}$ time and a polynomial-time approximation scheme for any constant dimension $d$ are also provided in [3]. The 2d case is solvable in polynomial time thanks to a DP algorithm with a complexity in $O(kn^{2})$ time and $O(kn)$ space provided in [2]. The time complexity of this DP algorithm was improved to $O(kn+n\log n)$ by [4] and to $O(k(n-k)+n\log n)$ by [21].
Some similar results were also proven for clustering problems. The k-median and k-medoids problems are known to be NP-hard in dimension 2 since [24]. The specific cases of a 2d PF were proven to be solvable in $O(n^{3})$ time with DP algorithms [9, 7]. The 1d cases are solvable in $O(nk)$ time [17]. K-means, one of the most famous unsupervised learning problems, is also NP-hard in 2d cases [23]. Its restriction to a 2d PF would also be solvable in $O(n^{3})$ time with a DP algorithm if a conjecture is proven [6]. The 1d case of k-means is also solvable by a DP algorithm, with a complexity in $O(kn)$ time using a memory space in $O(n)$ [15]. For k-means, k-medoids and k-median, significant differences in the complexities exist between the 2d PF and 1d cases.
Lastly, p-center problems present similar results. The discrete and continuous p-center problems are NP-hard in general; the discrete p-center problem in ${\mathbb{R}}^{2}$ with a Euclidean distance is also NP-hard [24]. The 1d case of the continuous p-center problem is solvable in $O(pn\log n)$ time and $O(n)$ space [18], which is the same complexity as for continuous p-center problems in a 2d PF [8]. The discrete p-center problem in a 2d PF is solvable in $O(pn\log^{2}n)$ time and $O(n)$ space [8]. For p-center problems, the complexity in a 2d PF is thus similar to the one of the 1d case, i.e. an affine 2d PF.
## 4 Intermediate results
This section presents intermediate results required for the following
developments. Firstly, Lemma 1 extends trivially the properties of $\leqslant$
and $<$ in ${\mathbb{R}}$:
###### Lemma 1
$\preccurlyeq$ is an order relation, and $\prec$ is a transitive relation:
$\forall x,y,z\in{\mathbb{R}}^{2},\phantom{3}x\prec
y\phantom{1}\mbox{and}\phantom{1}y\prec z\Longrightarrow x\prec z$ (13)
Lemma 1 implies an order among the points of $E$, allowing a reindexation in $O(n\log n)$ time:
###### Proposition 1 (Total order)
Points $(x_{i})$ can be indexed such that:
$\forall(i_{1},i_{2})\in[\\![1;n]\\!]^{2},\;\; i_{1}<i_{2}\Longrightarrow x_{i_{1}}\prec x_{i_{2}}$ (14)

$\forall(i_{1},i_{2})\in[\\![1;n]\\!]^{2},\;\; i_{1}\leqslant i_{2}\Longrightarrow x_{i_{1}}\preccurlyeq x_{i_{2}}$ (15)
This property is stronger than the property that $\preccurlyeq$ induces a total order in $E$. Furthermore, the complexity of the sorting reindexation is in $O(n\log n)$ time.
Proof: We reindex $E$ such that the first coordinate is increasing:
$\forall(i_{1},i_{2})\in[\\![1;n]\\!]^{2},i_{1}<i_{2}\Longrightarrow
x_{i_{1}}^{1}<x_{i_{2}}^{1}$
This sort has a time complexity in $O(n\log n)$. Let
$(i_{1},i_{2})\in[\\![1;n]\\!]^{2}$, with $i_{1}<i_{2}$. We have thus
$x_{i_{1}}^{1}<x_{i_{2}}^{1}$. Having $x_{i_{1}}\mathcal{I}x_{i_{2}}$ implies
$x_{i_{1}}^{2}>x_{i_{2}}^{2}$. $x_{i_{1}}^{1}<x_{i_{2}}^{1}$ and
$x_{i_{1}}^{2}>x_{i_{2}}^{2}$ is by definition $x_{i_{1}}\prec x_{i_{2}}$.
$\square$
Figure 1: Illustration of a 2d PF with 15 points $x_{1},\dots,x_{15}$ (axes $\emph{Obj}_{1}$ and $\emph{Obj}_{2}$) and the indexation implied by Proposition 1
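A minimal Python sketch of this reindexation, assuming the input points are given as (obj1, obj2) pairs with both objectives minimized:

```python
def reindex_pareto_front(points):
    """Proposition 1: sorting by increasing first objective yields indexes
    such that i < j implies x_i \prec x_j for the relation (1), in O(n log n).
    A minimal sketch; points are (obj1, obj2) pairs of a 2d PF."""
    E = sorted(points)               # increasing obj1
    # 2d-PF assumption check: obj2 is then strictly decreasing
    assert all(a[1] > b[1] for a, b in zip(E, E[1:])), "not a 2d Pareto front"
    return E
```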
The re-indexation of Proposition 1 implies a monotonicity of the distances in the 2d PF:
###### Proposition 2
We suppose that points $(x_{i})$ are sorted following Proposition 1.
$\forall(i_{1},i_{2},i_{3})\in[\\![1;n]\\!]^{3},\;\; i_{1}\leqslant i_{2}<i_{3}\Longrightarrow d_{i_{1},i_{2}}<d_{i_{1},i_{3}}$ (16)

$\forall(i_{1},i_{2},i_{3})\in[\\![1;n]\\!]^{3},\;\; i_{1}<i_{2}\leqslant i_{3}\Longrightarrow d_{i_{2},i_{3}}<d_{i_{1},i_{3}}$ (17)
Proof: We index $E$ following Proposition 1. Let us prove (16); the proof of (17) is similar. The case $i_{1}=i_{2}$ being trivial, let $i_{1}<i_{2}<i_{3}$. We note
$x_{i_{1}}=(x^{1}_{i_{1}},x^{2}_{i_{1}})$,
$x_{i_{2}}=(x^{1}_{i_{2}},x^{2}_{i_{2}})$ and
$x_{i_{3}}=(x^{1}_{i_{3}},x^{2}_{i_{3}})$ . (14) implies
$x^{1}_{i_{1}}<x^{1}_{i_{2}}<x^{1}_{i_{3}}$ and
$x^{2}_{i_{1}}>x^{2}_{i_{2}}>x^{2}_{i_{3}}$.
Hence, $(x^{1}_{i_{1}}-x^{1}_{i_{2}})^{2}<(x^{1}_{i_{1}}-x^{1}_{i_{3}})^{2}$
and $(x^{2}_{i_{1}}-x^{2}_{i_{2}})^{2}<(x^{2}_{i_{1}}-x^{2}_{i_{3}})^{2}$.
$d(x_{i_{1}},x_{i_{2}})^{2}={(x^{1}_{i_{1}}-x^{1}_{i_{2}})^{2}+(x^{2}_{i_{1}}-x^{2}_{i_{2}})^{2}}<d(x_{i_{1}},x_{i_{3}})^{2}$.
Having $\alpha>0$, it implies
$d(x_{i_{1}},x_{i_{2}})^{\alpha}<d(x_{i_{1}},x_{i_{3}})^{\alpha}$ which proves
(16). $\hfill\square$
Proposition 2 allows us to reformulate the Max-Min and Max-Sum-Min p-dispersion problems:
###### Proposition 3
Let $E=\\{x_{1},\dots,x_{n}\\}$ be a subset of $n$ points of ${\mathbb{R}}^{2}$, such that for all $i<j$, $x_{i}\prec x_{j}$. The Max-Min and Max-Sum-Min p-dispersion problems in $E$ can also be written as:

${\mathcal{P}}_{disp}^{MM}(E,p)=\max_{1\leqslant i_{1}<\dots<i_{p}\leqslant n}\;\min_{j\in[\\![1;p-1]\\!]}d_{i_{j},i_{j+1}}$ (18)

${\mathcal{P}}_{disp}^{MSM}(E,p)=\max_{1\leqslant i_{1}<\dots<i_{p}\leqslant n}\;\sum_{j=1}^{p}\min(d_{i_{j},i_{j-1}},d_{i_{j},i_{j+1}})$ (19)

with the conventions $d_{i_{1},i_{0}}=d_{i_{p},i_{p+1}}=+\infty$ in (19).
Proof: In the inner minimizations of problems (7) and (9), distances $d_{i_{j},i_{j^{\prime}}}$ with $j<j^{\prime}$ are considered. Using Proposition 2, such distances are minimized by the distances among consecutive selected points, $d_{i_{j},i_{j+1}}$ and $d_{i_{j^{\prime}-1},i_{j^{\prime}}}$, so that only distances among consecutive selected points remain. $\hfill\square$
Furthermore, Proposition 2 allows us to define a new variant of the Max-Sum-Min p-dispersion problem, denoted as the Max-Sum-Neighbor p-dispersion problem or p-dispersion-MSN:
###### Definition 1
Let $E=\\{x_{1},\dots,x_{n}\\}$ be a subset of $n$ points of ${\mathbb{R}}^{2}$, such that for all $i<j$, $x_{i}\prec x_{j}$. The Max-Sum-Neighbor p-dispersion problem, also denoted p-dispersion-MSN, is defined in the 2d PF $E$ summing only the distances between consecutive selected points:

${\mathcal{P}}_{disp}^{MSN}(E,p)=\max_{1\leqslant i_{1}<i_{2}<\dots<i_{p}\leqslant n}\sum_{j=1}^{p-1}d_{i_{j},i_{j+1}}$ (20)
The dispersion functions after the reformulations of Proposition 3 are illustrated in Table 1. With the order of Proposition 1 and the monotonicity of Proposition 2, p-dispersion-MSM is not a symmetric expression of the distances, inducing more importance to the extreme distances AB and CD. On the contrary, p-dispersion-MSN induces a symmetric expression. This motivated the introduction of p-dispersion-MSN in the specific case of a 2d PF.
Table 1: Illustration of Proposition 3 and of the differences between the dispersion functions, for the 4-dispersion problems with four selected points A, B, C and D such that $A\prec B\prec C\prec D$.

Dispersion type | Dispersion of A,B,C,D
---|---
p-dispersion-MM | $\min$(AB, BC, CD)
p-dispersion-MS | AB+AC+AD+BC+BD+CD
p-dispersion-MSN | AB+BC+CD
p-dispersion-MSM | AB+min(AB,BC)+min(BC,CD)+CD
p-dispersion-MMS | min(AB+AC+AD, AB+BC+BD, AC+BC+CD, AD+BD+CD)
Propositions 1 and 2 allow us to derive first polynomial complexity results, presented in Propositions 4, 6 and 7. A key element is that the extreme points of the 2d PF are natural candidates for p-dispersion problems, as analyzed in Propositions 4 and 5:
###### Proposition 4 (2-dispersion problems)
Let $E=\\{z_{1},\dots,z_{n}\\}$ be a subset of $n$ points of ${\mathbb{R}}^{2}$, such that for all $i\neq j$, $z_{i}{\mathcal{I}}z_{j}$. The 2-dispersion problems are solvable in $O(n)$ time using $O(1)$ additional memory space in the 2d PF $E$, considering the variants (7), (8), (9), (10) or (20). There is a unique optimal solution, selecting $z_{1}$ and $z_{n}$.

Proof: Considering the variants (7), (8), (9), (10) or (20), the same problem must be solved:

${\mathcal{P}}_{2}(E)=\max_{1\leqslant i<j\leqslant n}d(z_{i},z_{j})$ (21)
Indeed, ${\mathcal{P}}_{disp}^{MM}(E,2)={\mathcal{P}}_{disp}^{MS}(E,2)={\mathcal{P}}_{disp}^{MMS}(E,2)={\mathcal{P}}_{disp}^{MSN}(E,2)={\mathcal{P}}_{2}(E)$ and ${\mathcal{P}}_{disp}^{MSM}(E,2)=2{\mathcal{P}}_{2}(E)$. Using Proposition 2, it is trivial that there is a unique optimal solution to (21), selecting the two extreme points $z_{1}$ and $z_{n}$ after the re-indexation of Proposition 1, with an optimal cost $d_{1,n}$. The complexity, once $z_{1}$ and $z_{n}$ are computed, is in $O(1)$ time and additional space. Re-indexing the whole PF would induce a complexity in $O(n\log n)$ time. Actually, there is no need for a full re-indexation: computing the extreme points can be done in one traversal of $E$, inducing a complexity in $O(n)$ time, as sketched below. $\hfill\square$
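A minimal sketch of this one-traversal computation, assuming (obj1, obj2) pairs:

```python
def extreme_points(E):
    """Proposition 4: the extreme points of a 2d PF form the unique optimal
    2-dispersion selection; one traversal, O(n) time, O(1) extra space."""
    left = right = E[0]
    for x in E[1:]:
        if x[0] < left[0]:
            left = x                 # smallest obj1, i.e. x_1 after sorting
        if x[0] > right[0]:
            right = x                # largest obj1, i.e. x_n after sorting
    return left, right
```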
###### Proposition 5 (p-dispersion and extreme points)
Considering the variants (7), (8), (9), (10) or (20) and $p\geqslant 2$, there exists an optimal solution of p-dispersion selecting $x_{1}$ and $x_{n}$, once the points $(x_{i})$ are re-indexed as in Proposition 1. Reciprocally, any optimal solution contains the extreme points in the case of the Max-Sum and Max-Sum-Neighbor variants, contrary to the Max-Min, Max-Sum-Min and Max-Min-Sum variants, where counter-examples exist.
Proof: Considering the variants (7), (8), (9), (10) or (20), let $1\leqslant i_{1}<i_{2}<\dots<i_{p}\leqslant n$ the indexes defining an optimal solution. Considering the new indexes $i_{1}^{\prime}=1,i_{2}^{\prime}=i_{2},\dots,i_{p-1}^{\prime}=i_{p-1},i_{p}^{\prime}=n$, Proposition 2 implies $d_{i_{j_{1}},i_{j_{2}}}\leqslant d_{i^{\prime}_{j_{1}},i^{\prime}_{j_{2}}}$ for all pairs $j_{1}<j_{2}$. It implies that the points defined with the new indexes have at least the same dispersion as the original points. The new points thus define an optimal solution of the considered variant of p-dispersion.

In the case of the Max-Sum and Max-Sum-Neighbor p-dispersion problems, having an optimal solution with $1<i_{1}$ or $i_{p}<n$ induces furthermore $d_{i_{1},i_{2}}+d_{i_{p-1},i_{p}}<d_{i_{1}^{\prime},i_{2}^{\prime}}+d_{i_{p-1}^{\prime},i_{p}^{\prime}}$, so that the new indexes $i^{\prime}$ induce a strictly better solution, a contradiction. Hence, any optimal solution of the Max-Sum and Max-Sum-Neighbor p-dispersion problems contains the extreme points.

Lastly, a counter-example for the reciprocity is given with Max-Min 3-dispersion and the four points of a 2d PF $z_{1}=(0,10)$, $z_{2}=(1,9)$, $z_{3}=(3,7)$, $z_{4}=(5,5)$: both $\\{z_{1},z_{3},z_{4}\\}$ and $\\{z_{2},z_{3},z_{4}\\}$ are optimal with value $\sqrt{8}$, the latter without the extreme point $z_{1}$. $\hfill\square$
Remark: This property of p-dispersion is relevant for the selection of points of a PF for human decision makers. Indeed, it is natural to present the extreme points to evaluate the preferences of the decision maker. This is a justification to use p-dispersion for that goal instead of clustering measures like k-medoids, which would not furnish the extreme points [7].
Proposition 5 is useful to determine the complexity of the $3$-dispersion problems, and also to improve the complexity of the general enumeration of all the possibilities, which would run in $O(n^{p})$ time.
###### Proposition 6 (3-dispersion)
$3$-dispersion problems in a 2d PF are solvable in $O(n)$ time using $O(1)$
additional space.
Proof: Considering the variants (7), (8), (9), (10) or (20), we consider the two extreme points, which can be found in $O(n)$ time with one traversal of $E$ as in Proposition 4. Then, there are $n-2$ cases to enumerate for the last point; each cost computation of 3-dispersion being in $O(1)$ time, this last naive enumeration is in $O(n)$ time. The $3$-dispersion problems are thus solved in $O(n)$ time using $O(1)$ additional space, with two traversals of $E$. $\hfill\square$
###### Proposition 7 (p-dispersion)
The $p$-dispersion problems in a 2d PF are solvable in $O(n^{p-2})$ time using
$O(1)$ additional space.
Proof: Similarly to Proposition 6, once the extreme points are found in $O(n)$ time, the naive enumeration of the other $p-2$ selected points induces $\binom{n-2}{p-2}$ cost computations, each requiring $O(p)$ or $O(p^{2})$ time. $\hfill\square$
## 5 p-dispersion-MSN is polynomially solvable
Proposition 2 allows us to prove Bellman equations for the p-dispersion-MSN problem in a 2d PF, which is the key ingredient to design a DP algorithm:
###### Proposition 8 (Bellman equations for p-dispersion-MSN)
Defining $C_{k,i}^{MSN}$ as the optimal cost of $k$-dispersion-MSN among the
points indexed in $[\\![1,i]\\!]$ for all $k\in[\\![2,p]\\!]$ and
$i\in[\\![k,n]\\!]$, we have:
$\forall i\in[\\![2,n]\\!],\>\>\>C^{MSN}_{2,i}=d_{1,i}$ (22)

$\forall k\in[\\![3,p]\\!],\>\forall i\in[\\![k,n]\\!],\>\>\>C^{MSN}_{k,i}=\max_{j\in[\\![k-1,i-1]\\!]}(C^{MSN}_{k-1,j}+d_{j,i})$ (23)
Proof: (22) is given by Proposition 4. We suppose $k\geqslant 3$ and prove (23). Let $i\in[\\![k,n]\\!]$. Selecting for each $j\in[\\![k-1,i-1]\\!]$ an optimal solution of $(k-1)$-dispersion-MSN among the points indexed in $[\\![1,j]\\!]$, and adding the point $i$, makes a feasible solution for $k$-dispersion-MSN among the points indexed in $[\\![1,i]\\!]$ with a cost $C^{MSN}_{k-1,j}+d_{j,i}$. This last cost is at most the optimal $k$-dispersion cost, thus $C^{MSN}_{k,i}\geqslant C^{MSN}_{k-1,j}+d_{j,i}$ and:

$C^{MSN}_{k,i}\geqslant\max_{j\in[\\![k-1,i-1]\\!]}(C^{MSN}_{k-1,j}+d_{j,i})$ (24)

Reciprocally, let $j_{1},j_{2},\dots,j_{k-1},j_{k}$ indexes such that $1=j_{1}<j_{2}<\dots<j_{k-1}<j_{k}=i$ defines an optimal solution of $k$-dispersion-MSN among the points indexed in $[\\![1,i]\\!]$; its cost is $C^{MSN}_{k,i}$. Necessarily, $j_{1},j_{2},\dots,j_{k-1}$ defines an optimal solution of $(k-1)$-dispersion-MSN among the points indexed in $[\\![1,j_{k-1}]\\!]$; otherwise, a strictly better solution for $C^{MSN}_{k,i}$ could be constructed adding the index $i$. We have thus $C^{MSN}_{k,i}=C^{MSN}_{k-1,j_{k-1}}+d_{j_{k-1},i}$. Combined with (24), it proves $C^{MSN}_{k,i}=\max_{j\in[\\![k-1,i-1]\\!]}(C^{MSN}_{k-1,j}+d_{j,i})$. $\hfill\square$
This allows us to design a DP algorithm for p-dispersion-MSN in Algorithm 1. The first phase constructs by induction the matrix of optimal costs $C^{MSN}_{k,i}$ with $k$ increasing; $C^{MSN}_{p,n}$ is the optimal value of p-dispersion-MSN. Then, backtracking operations in the matrix $C^{MSN}_{k,i}$ return an optimal solution.
Algorithm 1: p-dispersion-MSN in a 2d PF with $p>3$
---
Input:
\- $n$ points of ${\mathbb{R}}^{2}$, $E=\\{x_{1},\dots,x_{n}\\}$ such that for all $i_{1}<i_{2}$, $x_{i_{1}}\prec x_{i_{2}}$ ;
\- an integer $p$ with $3<p\leqslant n$.
initialize the matrix $C$ with $C_{k,i}:=0$ for all $i\in[\\![1;n]\\!],k\in[\\![2;p-1]\\!]$
for $i=2$ to $n$:
$C_{2,i}:=d_{1,i}$
end for
for $k=3$ to $p-1$:
for $i=k$ to $n$:
$C_{k,i}:=\max_{j\in[\\![k-1,i-1]\\!]}(C_{k-1,j}+d_{j,i})$
end for
end for
$OPT:=\max_{j\in[\\![p-1,n-1]\\!]}(C_{p-1,j}+d_{j,n})$
$j:=\mbox{argmax}_{j\in[\\![p-1,n-1]\\!]}(C_{p-1,j}+d_{j,n})$
initialize $i:=j$ and ${\mathcal{I}}:=\\{1,j,n\\}$.
for $k=p-1$ to $3$ with increment $k\leftarrow k-1$:
$j:=\mbox{argmax}_{j\in[\\![k-1,i-1]\\!]}(C_{k-1,j}+d_{j,i})$
add $j$ in ${\mathcal{I}}$
$i:=j$
end for
return $OPT$ the optimal cost and the set of selected indexes ${\mathcal{I}}$.
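The following Python sketch illustrates Algorithm 1, assuming points given as (obj1, obj2) pairs of a 2d PF and $\alpha=1$; it is an illustrative transcription with 0-based indexes, not a reference implementation.

```python
import math

def max_sum_neighbor_dispersion(points, p):
    """Sketch of Algorithm 1 (p-dispersion-MSN) for p > 3, 0-based indexes.
    Complexity: O(p n^2) time and O(p n) space, as in Theorem 1."""
    E = sorted(points)                        # Proposition 1 reindexation
    n = len(E)
    d = lambda i, j: math.dist(E[i], E[j])

    # C[k][i]: optimal k-dispersion-MSN among points 0..i, for k in 2..p-1
    C = [[0.0] * n for _ in range(p)]
    for i in range(1, n):
        C[2][i] = d(0, i)                     # equation (22)
    for k in range(3, p):
        for i in range(k - 1, n):             # equation (23)
            C[k][i] = max(C[k - 1][j] + d(j, i) for j in range(k - 2, i))
    opt, j = max((C[p - 1][j] + d(j, n - 1), j) for j in range(p - 2, n - 1))

    selected, i = {0, j, n - 1}, j            # backtracking operations
    for k in range(p - 1, 2, -1):
        _, j = max((C[k - 1][j] + d(j, i), j) for j in range(k - 2, i))
        selected.add(j)
        i = j
    return opt, sorted(selected)

# example: max_sum_neighbor_dispersion([(i, 15 - i) for i in range(15)], 5)
```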
###### Theorem 1
Let $E=\\{x_{1},\dots,x_{n}\\}$ be a subset of $n$ points of ${\mathbb{R}}^{2}$, such that for all $i\neq j$, $x_{i}\,\mathcal{I}\,x_{j}$.
Max-Sum-Neighbor p-dispersion is solvable in polynomial time in the 2d PF $E$.
The cases $p=2,3$ are solvable with a complexity in $O(n)$ time using an
additional memory space in $O(1)$. Algorithm 1 solves the cases $p>3$ with a
complexity in $O(pn^{2})$ time and $O(pn)$ memory space.
Proof: The cases $p=2,3$ are given by Propositions 4 and 6, so that we suppose $p\geqslant 4$ and consider Algorithm 1. The induction formula (23) uses only the values $C_{k-1,j}$ of the previous line of the DP matrix. Hence, at the end of each loop in $k$, $C_{k,n}$ is the optimal value of $k$-dispersion-MSN among the $n$ points of $E$, and $OPT$ is the optimal cost of p-dispersion-MSN. The remaining operations after the computation of $OPT$ are a standard backtracking algorithm returning an optimal solution. This proves the validity of Algorithm 1 to solve $p$-dispersion-MSN to optimality. Let us analyze the complexity of Algorithm 1. Re-indexing $E$ following Proposition 1 has a time complexity in $O(n\log n)$. Computing the line $k=2$ of the DP matrix has a time complexity in $O(n)$. Computing $\max_{j\in[\\![k-1,i-1]\\!]}(C_{k-1,j}+d_{j,i})$ is in $O(i-k)$, and thus in $O(n)$, enumerating all the $i-k$ possibilities. It induces time complexities in $O(pn^{2})$ for the construction of the DP matrix and in $O(pn)$ for the backtracking operations. Finally, the time complexity is given by the construction of the DP matrix, in $O(pn^{2})$ time. The space complexity, storing the DP matrix $C$, is in $O(pn)$; the complexity is proven. $\hfill\square$
Remark: In Algorithm 1, the DP matrix $C$ is computed line by line, with the index $k$ increasing. The computation of line $k+1$ requires only the line $k$ and computations of distances using $O(1)$ additional memory space. If one wishes to compute only the optimal value $C_{p,n}$, it is possible to delete the line $k-1$ once the line $k$ is completed. Such an implementation stores at most $2n$ elements in memory, and has thus a complexity in $O(n)$ memory space. Theorem 1 states a complexity in $O(pn)$ memory space, as the backtracking operations computing an optimal solution, as written in Algorithm 1, use the whole DP matrix.
## 6 Max-Min p-dispersion is polynomially solvable
Proposition 2 also allows us to prove Bellman equations for the Max-Min p-dispersion problem in a 2d PF:
###### Proposition 9 (Bellman equations)
Let $E=\\{x_{1},\dots,x_{n}\\}$ be a subset of $n$ points of ${\mathbb{R}}^{2}$, such that for all $i<j$, $x_{i}\prec x_{j}$. Defining $C^{MM}_{k,i}$ as the optimal cost of Max-Min $k$-dispersion among the points indexed in $[\\![1,i]\\!]$ for all $k\in[\\![2,p]\\!]$ and $i\in[\\![k,n]\\!]$, we have the following relations:

$\forall i\in[\\![2,n]\\!],\>\>\>C_{2,i}^{MM}=d_{1,i}$ (25)

$\forall k\in[\\![3,p]\\!],\>\forall i\in[\\![k,n]\\!],\>\>\>C_{k,i}^{MM}=\max_{j\in[\\![k-1,i-1]\\!]}\min(C_{k-1,j}^{MM},d_{j,i})$ (26)
Proof: The proof is similar to the one of Proposition 8. $\hfill\square$
Remark: Similarly to Algorithm 1, the equations (26) allow us to design a DP algorithm with a complexity in $O(pn^{2})$ time and $O(pn)$ space. The following developments improve this complexity.
Algorithm 2: Computation of $\max_{j\in[\\![k-1,i-1]\\!]}\min(C_{k-1,j}^{MM},d_{j,i})$
---
input: indexes $k<i$
define $a:=k-1$, $b:=i-1$
while $b-a\geqslant 2$:
compute $j:=\left\lfloor\frac{a+b}{2}\right\rfloor$
if $C_{k-1,j}^{MM}-d_{j,i}>0$ then: $b:=j$
else: $a:=j$
end while
return $\max(\min(C_{k-1,a}^{MM},d_{a,i}),\min(C_{k-1,b}^{MM},d_{b,i}))$
###### Proposition 10
Let $k\in[\\![3,p]\\!]$ and $i\in[\\![k,n]\\!]$. Algorithm 2 computes $C_{k,i}^{MM}=\max_{j\in[\\![k-1,i-1]\\!]}\min(C_{k-1,j}^{MM},d_{j,i})$ with a time complexity in $O(\log(i+1-k))$, once the values $C_{k-1,j}^{MM}$ are computed.
Proof: Let $k\in[\\![3,p]\\!]$ and $i\in[\\![k,n]\\!]$. Reformulating Proposition 2, the application $j\in[\\![k-1,i-1]\\!]\mapsto d_{j,i}$ is strictly decreasing. The application $j\in[\\![k-1,i-1]\\!]\mapsto C_{k-1,j}^{MM}$ is increasing: any feasible solution of $(k-1)$-dispersion-MM among the $j$ first points is a feasible solution of $(k-1)$-dispersion-MM among the $j+1$ first points, so the optimal value $C_{k-1,j}^{MM}$ is increasing. Hence, $\varphi_{i,k}:j\in[\\![k-1,i-1]\\!]\mapsto C_{k-1,j}^{MM}-d_{j,i}$ is strictly increasing. Let $\psi_{i,k}:j\in[\\![k-1,i-1]\\!]\mapsto\min(C_{k-1,j}^{MM},d_{j,i})$. If $\varphi_{i,k}$ is negative on the whole domain, $\psi_{i,k}$ coincides with the increasing application $j\mapsto C_{k-1,j}^{MM}$ and reaches its maximum at $j=i-1$. Otherwise, let $\alpha=\min\\{j\in[\\![k-1,i-1]\\!],\varphi_{i,k}(j)\geqslant 0\\}$. For $j\geqslant\alpha$, $\psi_{i,k}(j)=d_{j,i}$, and $\psi_{i,k}$ is strictly decreasing for $j\geqslant\alpha$. For $j<\alpha$, $\psi_{i,k}(j)=C_{k-1,j}^{MM}$, and $\psi_{i,k}$ is increasing for $j<\alpha$. Hence, $\psi_{i,k}$ reaches its maximum at $j=\alpha$ or $j=\alpha-1$.

The computation of $\alpha$, as the minimal value of $j$ such that $\varphi_{i,k}(j)\geqslant 0$, can be done with the dichotomic search presented in Algorithm 2, whose final maximization covers both candidates, for a time complexity in $O(\log(i+1-k))$. $\hfill\square$
Proposition 10 allows us to improve the time complexity of the DP algorithm for the Max-Min p-dispersion problem. The following developments improve the space complexity. Similarly to the remark at the end of Section 5, the final cost $C_{p,n}^{MM}$ can be computed using a memory space in $O(n)$, deleting the line $k$ of the DP matrix once the line $k+1$ is fully computed. The point here is to provide backtracking algorithms that do not require storing the whole DP matrix $C_{k,i}^{MM}$, with complexities in at most $O(n)$ memory space and $O(pn\log n)$ time. Algorithms 3 and 3' compute optimal solutions of p-dispersion-MM knowing the optimal cost $OPT$, with greedy strategies:
Algorithm 3: Backtracking algorithm using $O(n)$ memory space
---
input:
\- $n$ points of a 2d PF, $E=\\{z_{1},\dots,z_{n}\\}$, sorted such that for all $i<j$, $z_{i}\prec z_{j}$ ;
\- $p\in{\mathbb{N}},p>2$;
\- $OPT$, the optimal cost of Max-Min $p$-dispersion;
initialize $M:=1$, $m:=1$, ${\mathcal{I}}:=\\{1,n\\}$.
for $k=2$ to $p-1$ with increment $k\leftarrow k+1$:
$M$ := the smallest index such that $d(z_{m},z_{M})\geqslant OPT$
add $M$ in ${\mathcal{I}}$
$m:=M$
end for
return ${\mathcal{I}}$
Algorithm 3': Backtracking algorithm using $O(n)$ memory space
---
input:
\- $n$ points of a 2d PF, $E=\\{z_{1},\dots,z_{n}\\}$, sorted such that for all $i<j$, $z_{i}\prec z_{j}$ ;
\- $p\in{\mathbb{N}},p>2$;
\- $OPT$, the optimal cost of Max-Min $p$-dispersion;
initialize $M:=n$, $m:=n$, ${\mathcal{I}}:=\\{1,n\\}$.
for $k=p-1$ to $2$ with increment $k\leftarrow k-1$:
$m$ := the biggest index such that $d(z_{m},z_{M})\geqslant OPT$
add $m$ in ${\mathcal{I}}$
$M:=m$
end for
return ${\mathcal{I}}$
###### Proposition 11
Let $p\in{\mathbb{N}},p\geqslant 3$. Let $E=\\{z_{1},\dots,z_{n}\\}$, sorted such that for all $i<j$, $z_{i}\prec z_{j}$. Once the optimal cost of the Max-Min p-dispersion problem is computed, Algorithms 3 and 3' compute an optimal solution in $O(p\log n)$ time using $O(1)$ additional memory space. Furthermore, let $j_{1}=1,j_{2},\dots,j_{p-1},j_{p}=n$ the indexes of an optimal solution, and let $1,i_{2},\dots,i_{p-1},n$ (resp. $1,i_{2}^{\prime},\dots,i_{p-1}^{\prime},n$) the indexes given by Algorithm 3 (resp. 3'). We have: for all $k\in[\\![2,p-1]\\!],i_{k}\leqslant j_{k}\leqslant i^{\prime}_{k}$. In other words, the indexes given by Algorithm 3 (resp. 3') are lower (resp. upper) bounds of the indexes of any optimal solution of Max-Min p-dispersion containing the extreme points $z_{1}$ and $z_{n}$.
Proof: We prove the result for Algorithm 3; the proof is similar for Algorithm 3'. Let $j_{1}=1,j_{2},\dots,j_{p-1},j_{p}=n$ the indexes of an optimal solution, and let $i_{1}=1,i_{2},\dots,i_{p-1},i_{p}=n$ the indexes given by Algorithm 3. Firstly, we prove by induction on $k$ that for all $k\in[\\![1,p-1]\\!]$, $i_{k}\leqslant j_{k}$. The case $k=1$ is given by $j_{1}=i_{1}=1$. We suppose $k>1$ and the induction hypothesis true for $k-1$, i.e. $i_{k-1}\leqslant j_{k-1}$. By definition, $i_{k}$ is the smallest index such that $d(z_{i_{k-1}},z_{i_{k}})\geqslant OPT$. Using Proposition 2 and $i_{k-1}\leqslant j_{k-1}$, we have $d(z_{i_{k-1}},z_{j_{k}})\geqslant d(z_{j_{k-1}},z_{j_{k}})\geqslant OPT$. Hence, $j_{k}$ fulfills the defining condition of $i_{k}$, so $i_{k}\leqslant j_{k}$, which terminates the induction proving that the indexes $i_{k}$ are lower bounds of the indexes $j_{k}$.

Let us prove that the indexes $i_{1}=1,i_{2},\dots,i_{p-1},i_{p}=n$ define an optimal solution. We have by construction $d(z_{i_{k-1}},z_{i_{k}})\geqslant OPT$ for all $k\in[\\![2,p-1]\\!]$; it remains to prove that $d(z_{n},z_{i_{p-1}})\geqslant OPT$. We have $i_{p-1}\leqslant j_{p-1}\leqslant n=j_{p}=i_{p}$. Using Proposition 2, $d(z_{n},z_{i_{p-1}})\geqslant d(z_{n},z_{j_{p-1}})$. The optimality implies $d(z_{n},z_{j_{p-1}})\geqslant OPT$, and thus $d(z_{n},z_{i_{p-1}})\geqslant OPT$. Algorithm 3 returns the optimal solution with the minimal indexes. Let us analyze the complexity: Algorithm 3 calls at most $p-2$ times the computation of the smallest index $i_{k}$ such that $d(z_{i_{k-1}},z_{i_{k}})\geqslant OPT$, which can be processed with a dichotomic search, as $j\mapsto d(z_{i_{k-1}},z_{j})$ is increasing by Proposition 2. Hence, Algorithm 3 runs in $O(p\log n)$ time. $\hfill\square$

Proposition 11 allows us to define in Algorithm 4 a valid DP algorithm for Max-Min p-dispersion running in $O(n)$ memory space. Theorem 2 summarizes the complexity results for Max-Min p-dispersion:
Algorithm 4: Max-Min p-dispersion in a 2d PF with $p>3$
---
Input:
\- $n$ points of ${\mathbb{R}}^{2}$, $E=\\{x_{1},\dots,x_{n}\\}$ such that for all $i_{1}<i_{2}$, $x_{i_{1}}\prec x_{i_{2}}$ ;
\- an integer $p$ with $3<p\leqslant n$.
re-index $E$ following the order of Proposition 1
initialize the line $2$ of the matrix $C$ with $C_{2,i}:=0$ for all $i\in[\\![1;n]\\!]$
for $i=2$ to $n$:
$C_{2,i}:=d_{1,i}$
end for
for $k=3$ to $p$ :
initialize the line $k$ of the matrix $C$ with $C_{k,i}:=0$ for all $i\in[\\![k;n]\\!]$
for $i=k$ to $n$:
$C_{k,i}:=\displaystyle\max_{j\in[\\![k-1,i-1]\\!]}\min(C_{k-1,j},d_{j,i})$ with Algorithm 2
end for
delete the line $k-1$ of the matrix $C$
end for
$OPT:=C_{p,n}$
return $OPT$ and a solution of Algorithm 3 (or Algorithm 3')
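The following Python sketch illustrates Algorithm 4 with Algorithms 2 and 3 embedded, assuming (obj1, obj2) pairs of a 2d PF, $\alpha=1$ and 0-based indexes; the small tolerance in the backtracking is an added guard against floating-point rounding.

```python
import math

def maxmin_dispersion(points, p):
    """Sketch of Algorithm 4 (Max-Min p-dispersion in a 2d PF), p >= 3.
    O(p n log n) time and O(n) space, as in Theorem 2."""
    E = sorted(points)                        # Proposition 1 reindexation
    n = len(E)
    d = lambda i, j: math.dist(E[i], E[j])

    prev = [d(0, i) for i in range(n)]        # line k = 2: C_{2,i} = d_{1,i}
    for k in range(3, p + 1):
        cur = [0.0] * n
        for i in range(k - 1, n):
            # Algorithm 2: phi(j) = prev[j] - d(j, i) is strictly increasing
            a, b = k - 2, i - 1
            while b - a >= 2:
                j = (a + b) // 2
                if prev[j] - d(j, i) > 0:
                    b = j
                else:
                    a = j
            cur[i] = max(min(prev[a], d(a, i)), min(prev[b], d(b, i)))
        prev = cur                            # only two lines stored: O(n)
    opt = prev[n - 1]

    # Algorithm 3: greedy backtracking with smallest indexes, O(p log n)
    selected, m = [0], 0
    for _ in range(p - 2):
        lo, hi = m + 1, n - 1                 # smallest M: d(E[m],E[M]) >= OPT
        while lo < hi:
            mid = (lo + hi) // 2
            if d(m, mid) >= opt - 1e-12:      # tolerance for rounding ties
                hi = mid
            else:
                lo = mid + 1
        m = lo
        selected.append(m)
    selected.append(n - 1)
    return opt, selected
```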
###### Theorem 2
Let $E=\\{x_{1},\dots,x_{n}\\}$ be a subset of $n$ points of ${\mathbb{R}}^{2}$, such that for all $i\neq j$, $x_{i}\mathcal{I}x_{j}$. The Max-Min p-dispersion
problem is polynomially solvable to optimality in the 2d PF $E$. The cases
$p=2,3$ are solvable with a complexity in $O(n)$ time using an additional
memory space in $O(1)$ . With $p>3$, Algorithm 4 solves the Max-Min
p-dispersion problem with a complexity in $O(pn\log n)$ time and $O(n)$ memory
space.
Proof: The cases $p=2,3$ are given by Propositions 4 and 6, so that we suppose $p\geqslant 4$ and consider Algorithm 4. The induction formula (26) of Proposition 9 uses only the values of the line $k-1$ in Algorithm 4. At the end of each loop in $k$, $C_{k,n}$ is the optimal value of Max-Min $k$-dispersion among the $n$ points of $E$, and the optimal cost is given by $C_{p,n}$. The validity of the backtracking procedures is proven in Proposition 11. Let us analyze the complexity of Algorithm 4. The space complexity is in $O(n)$, storing at most two lines of the DP matrix $C$. Sorting and indexing the elements of $E$ following Proposition 1 has a time complexity in $O(n\log n)$. Computing the line $k=2$ of the DP matrix has a time complexity in $O(n)$. Each other line is computed in $O(n\log n)$ time with Proposition 10, for a total computation of the DP matrix in $O(pn\log n)$ time. The backtracking operations run in $O(p\log n)$ time, so that the time complexity is given by the computation of the DP matrix, in $O(pn\log n)$ time. $\hfill\square$
## 7 Max-Sum-Min p-dispersion is polynomially solvable
Proposition 2 also allows us to design Bellman equations for the p-dispersion-MSM problem in a 2d PF:
###### Proposition 12 (Bellman equations)
Let $E=\\{x_{1},\dots,x_{n}\\}$ be a subset of $n$ points of ${\mathbb{R}}^{2}$, such that for all $i<j$, $x_{i}\prec x_{j}$. Let $p\in{\mathbb{N}},p\geqslant 4$. For all $k\in[\\![3,p]\\!]$ and $i<i^{\prime}$, we define $C_{k,i,i^{\prime}}^{MSM}$ as the optimal sum of the min-distance contributions of the first $k-1$ selected points, among the selections of $k$ points of $[\\![1,i^{\prime}]\\!]$ containing $x_{1}$ and ending with $x_{i}$ and then $x_{i^{\prime}}$; the contribution of the last point $x_{i^{\prime}}$ is counted later in the induction. We have the following relations:

$\forall i^{\prime}\in[\\![3,n]\\!],\>\forall i\in[\\![2,i^{\prime}-1]\\!],\>\>\>C_{3,i,i^{\prime}}^{MSM}=d_{1,i}+\min(d_{1,i},d_{i,i^{\prime}})$ (27)

$\forall k\in[\\![4,p]\\!],\>\forall i^{\prime}\in[\\![k,n]\\!],\>\forall i\in[\\![k-1,i^{\prime}-1]\\!],\>\>\>C_{k,i,i^{\prime}}^{MSM}=\max_{j\in[\\![k-2,i-1]\\!]}\left(C_{k-1,j,i}^{MSM}+\min(d_{j,i},d_{i,i^{\prime}})\right)$ (28)
Proof: For $k=3$, $C_{3,i,i^{\prime}}^{MSM}$ is not an optimization problem: with Proposition 5, the unique selection to consider is $1,i,i^{\prime}$. The contributions of $x_{1}$ and $x_{i}$ are $d_{1,i}$ and $\min(d_{1,i},d_{i,i^{\prime}})$, which gives (27). Let $k\in[\\![4,p]\\!]$, $i^{\prime}\in[\\![k,n]\\!]$ and $i\in[\\![k-1,i^{\prime}-1]\\!]$. Selecting for each $j\in[\\![k-2,i-1]\\!]$ an optimal selection realizing $C_{k-1,j,i}^{MSM}$, and adding the point $i^{\prime}$, the contribution of $x_{i}$ becomes $\min(d_{j,i},d_{i,i^{\prime}})$, making a feasible selection for $C_{k,i,i^{\prime}}^{MSM}$ with a value $C_{k-1,j,i}^{MSM}+\min(d_{j,i},d_{i,i^{\prime}})$. This last value is at most the optimal one, thus:

$C_{k,i,i^{\prime}}^{MSM}\geqslant\max_{j\in[\\![k-2,i-1]\\!]}\left(C_{k-1,j,i}^{MSM}+\min(d_{j,i},d_{i,i^{\prime}})\right)$ (29)

Reciprocally, let $j_{1},\dots,j_{k}$ indexes such that $1=j_{1}<j_{2}<\dots<j_{k-1}=i<j_{k}=i^{\prime}$ defines an optimal selection for $C_{k,i,i^{\prime}}^{MSM}$. Necessarily, $j_{1},j_{2},\dots,j_{k-1}$ defines an optimal selection for $C_{k-1,j_{k-2},i}^{MSM}$; otherwise, a strictly better selection for $C_{k,i,i^{\prime}}^{MSM}$ could be constructed adding the index $i^{\prime}$. We have thus $C_{k,i,i^{\prime}}^{MSM}=C_{k-1,j_{k-2},i}^{MSM}+\min(d_{j_{k-2},i},d_{i,i^{\prime}})$. Combined with (29), it proves (28). $\hfill\square$
Algorithm 5: Max-Sum-Min p-dispersion in a 2d PF with $p>5$
---
Input:
\- $n$ points of ${\mathbb{R}}^{2}$, $E=\\{x_{1},\dots,x_{n}\\}$ such that for all $i_{1}<i_{2}$, $x_{i_{1}}\prec x_{i_{2}}$ ;
\- an integer $p$ with $5<p\leqslant n$.
re-index $E$ following the order of Proposition 1
initialize the matrix $C$ with $C_{k,i,i^{\prime}}:=0$ for all $k\in[\\![3;p]\\!],i,i^{\prime}\in[\\![1;n]\\!]$
for $i^{\prime}=3$ to $n$:
for $i=2$ to $i^{\prime}-1$:
$C_{3,i,i^{\prime}}:=d_{1,i}+\min(d_{1,i},d_{i,i^{\prime}})$
end for
end for
for $k=4$ to $p$:
for $i^{\prime}=k$ to $n$:
for $i=k-1$ to $i^{\prime}-1$:
$C_{k,i,i^{\prime}}:=\max_{j\in[\\![k-2,i-1]\\!]}(C_{k-1,j,i}+\min(d_{j,i},d_{i,i^{\prime}}))$
end for
end for
end for
$OPT:=\max_{j\in[\\![p-1,n-1]\\!]}(C_{p,j,n}+d_{j,n})$
$j:=\mbox{argmax}_{j\in[\\![p-1,n-1]\\!]}(C_{p,j,n}+d_{j,n})$
initialize $i^{\prime}:=n$, $i:=j$ and ${\mathcal{I}}:=\\{1,j,n\\}$.
for $k=p$ to $4$ with increment $k\leftarrow k-1$:
$j:=\mbox{argmax}_{j\in[\\![k-2,i-1]\\!]}(C_{k-1,j,i}+\min(d_{j,i},d_{i,i^{\prime}}))$
add $j$ in ${\mathcal{I}}$
$i^{\prime}:=i$
$i:=j$
end for
return $OPT$ the optimal cost and the set of selected indexes ${\mathcal{I}}$.
###### Theorem 3
Let $E=\\{x_{1},\dots,x_{n}\\}$ be a subset of $n$ points of ${\mathbb{R}}^{2}$, such that for all $i\neq j$, $x_{i}\mathcal{I}x_{j}$. Max-Sum-Min p-dispersion
is polynomially solvable to optimality in the 2d PF $E$. The cases $p=2,3$ are
solvable with a complexity in $O(n)$ time using an additional memory space in
$O(1)$ . The cases $p=4$ (resp $p=5$) are solvable with a complexity in
$O(n^{2})$ (resp $O(n^{3})$) time using an additional memory space in $O(1)$.
In the cases $p>5$, Algorithm 5 solves Max-Sum-Min p-dispersion with a
complexity in $O(pn^{3})$ time and $O(pn^{2})$ memory space.
Proof: The cases $p=2,3,4,5$ are given by Propositions 4, 6 and 7, so that we suppose $p\geqslant 6$ and consider Algorithm 5. The proof of the validity of Algorithm 5 to compute the optimal value and an optimal solution of Max-Sum-Min p-dispersion is similar to Theorems 1 and 2. Algorithm 5 computes the values $C_{k,i,i^{\prime}}^{MSM}$ with $k$ increasing, requiring only the values $C_{k-1,j,i}^{MSM}$. By induction, it proves that for all $k$, $C_{k,i,i^{\prime}}^{MSM}$ has the desired optimal value at the end of the loop $k$. The optimal value of Max-Sum-Min p-dispersion in $E$ is then given by $\max_{j\in[\\![p-1,n-1]\\!]}(C_{p,j,n}^{MSM}+d_{j,n})$, adding the min-distance contribution of the last point $x_{n}$. The remaining operations define a standard backtracking algorithm. This proves the validity of Algorithm 5 to return an optimal solution of p-dispersion-MSM. Let us analyze the complexity. The space complexity is in $O(pn^{2})$, storing the DP matrix $C$. The time complexity is given by the construction of the matrix $C_{k,i,i^{\prime}}^{MSM}$, i.e. $O(pn^{2})$ entries, each computed in $O(n)$ time with a naive enumeration of $\max_{j\in[\\![k-2,i-1]\\!]}(C_{k-1,j,i}+\min(d_{j,i},d_{i,i^{\prime}}))$. Algorithm 5 has thus a time complexity in $O(pn^{3})$. $\hfill\square$
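The following Python sketch illustrates Algorithm 5, following the partial-cost convention of Proposition 12, assuming (obj1, obj2) pairs of a 2d PF, $\alpha=1$, $p\geqslant 4$ and 0-based indexes; it is an illustration only, not a reference implementation.

```python
import math

def max_sum_min_dispersion(points, p):
    """Sketch of Algorithm 5 (p-dispersion-MSM). C[k][i][ip] stores the sum
    of min-distance contributions of all selected points except the last one
    x_ip, for k points among 0..ip with x_i penultimate (0-based indexes).
    O(p n^3) time and O(p n^2) space, as in Theorem 3."""
    E = sorted(points)                        # Proposition 1 reindexation
    n = len(E)
    d = lambda i, j: math.dist(E[i], E[j])

    C = [[[0.0] * n for _ in range(n)] for _ in range(p + 1)]
    for ip in range(2, n):                    # base case (27), k = 3
        for i in range(1, ip):
            C[3][i][ip] = d(0, i) + min(d(0, i), d(i, ip))
    for k in range(4, p + 1):                 # induction (28)
        for ip in range(k - 1, n):
            for i in range(k - 2, ip):
                C[k][i][ip] = max(C[k - 1][j][i] + min(d(j, i), d(i, ip))
                                  for j in range(k - 3, i))
    # the contribution of the last point x_{n-1} is added at termination
    opt, i = max((C[p][j][n - 1] + d(j, n - 1), j)
                 for j in range(p - 2, n - 1))

    selected, ip = {0, i, n - 1}, n - 1       # backtracking operations
    for k in range(p, 3, -1):
        _, j = max((C[k - 1][j][i] + min(d(j, i), d(i, ip)), j)
                   for j in range(k - 3, i))
        selected.add(j)
        ip, i = i, j
    return opt, sorted(selected)
```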
## 8 Discussions
This section discusses some practical applications of Theorems 1, 2 and 3 and of Algorithms 1, 4 and 5.
### 8.1 Comparison of DP algorithms and complexity results
Max-Min p-dispersion was proven NP-hard in the planar (2d) case, and solvable in $O(\max\\{pn,n\log n\\})$ time in the 1d case (and thus for affine 2d PFs) [33]. Theorem 2 illustrates that the PF hypothesis is crucial for the complexity results: Max-Min p-dispersion is polynomial in a 2d PF thanks to Proposition 1, as a 2d PF can be projected on a 1d structure. The complexity in $O(pn\log n)$ time and $O(n)$ space for Max-Min p-dispersion in a 2d PF is not very different from the 1d case, contrary to the k-medoids and k-means problems [6, 7]. The differences in complexity from 2d PF to 1d cases are due to the triangle inequalities in the 2d PF cases, whereas 1d cases allow using the additivity of the distances.
The Bellman equations are similar for p-dispersion-MSN and p-dispersion-MM, changing a summation into an inner minimization. A generic implementation of Algorithm 1 for the p-dispersion-MSN and p-dispersion-MM problems is possible, with a complexity in $O(pn^{2})$ time and $O(pn)$ space, which may be efficient enough for practical applications. The improvements presented in Section 6 are only valid for p-dispersion-MM. On the contrary, the Bellman equations for p-dispersion-MSM in Proposition 12 require storing the previous indexes to compute an additive cost. The complexity in $O(pn^{3})$ time and $O(pn^{2})$ space is a limiting factor when $n$ is large. The complexity results thus induce a preference for the variant p-dispersion-MSN over the original p-dispersion-MSM for practical applications. This was also the preference illustrated in Table 1.
### 8.2 Equivalent solutions and hierarchic p-dispersion
Proposition 11 gives tight bounds on the indexes of the optimal solutions of Max-Min p-dispersion in a 2d PF. Many optimal solutions may exist for p-dispersion-MM. Having an optimal solution, one can identify the bottleneck distance and rearrange the other selected points without changing the Max-Min dispersion. Such situations occur when $p$ is large enough and when the points in the 2d PF are not well distributed. We note that the solutions of Algorithms 3 and 3' are very unbalanced, accumulating the slack in the last (resp. the first) distances between consecutive selected points. For a practical application, one seeks a selection of well-distributed points.
To solve this issue, a bi-objective hierarchic optimization can be considered, ranking the optimal solutions of Max-Min p-dispersion with the MSN dispersion function. The Bellman equations of Propositions 8 and 9 can be extended to design a DP algorithm for this hierarchic optimization. Indeed, we can define pairs of DP matrices $(C_{k,i}^{{}^{\prime}MM},C_{k,i}^{{}^{\prime}MSN})$ denoting the optimal costs with the lexicographic order, optimizing firstly the Max-Min $k$-dispersion, and then the $k$-dispersion-MSN among the points indexed in $[\\![1,i]\\!]$. The costs $C_{k,i}^{{}^{\prime}MM}$ are the same as in the mono-objective optimization of Max-Min p-dispersion. Such DP matrices are constructed in $O(pn^{2})$ time, enumerating for each computation $i,k$ the possible costs with an intermediate index $j$, and keeping the best current value for the lexicographic order. The backtracking algorithm uses similar computations, as in Algorithm 1, so that the hierarchic optimization also runs in $O(pn^{2})$ time and $O(pn)$ space. One may also speed up such cost computations reusing Algorithm 2 and Proposition 10, showing that the optimal and equivalent solutions leading to $C_{k,i}^{MM}$ form a plateau ending at $\alpha$ or $\alpha-1$.
If one wishes to keep an algorithm with a complexity in $O(pn\log n)$ time and $O(n)$ space, one may polish the solutions given by Algorithms 3 and 3' to get more evenly distributed distances among the consecutive selected points. One polishing procedure is a local search, re-optimizing with 3-dispersion-MM each triple of consecutive selected points. Each 3-dispersion-MM computation runs in $O(\log n)$ time using Proposition 10, as the points are already sorted. This induces a quick and efficient polishing local search.
### 8.3 Parallelization issues
The time complexity in $O(pn\log n)$ or $O(pn^{2})$ can be satisfactory for large-scale computations of p-dispersion problems. For a practical speed-up, Algorithms 1, 4 and 5 also have useful properties for a parallel implementation in multi-core or many-core environments. As mentioned in Theorems 1, 2 and 3, the construction of the DP matrix is the bottleneck in the time complexity, so it is crucial to parallelize this phase carefully. The backtracking algorithm is essentially sequential, but with a low complexity, so that a parallelization of this phase is not crucial. The initial sorting algorithm is significant in the computation times only for the Max-Min p-dispersion problem and small values of $p$. In this case, parallelization on General Purpose Graphical Processing Units (GPGPU) is available [31].
In the computation of each line of the DP matrix in Algorithms 1, 4 and 5, many operations are independent, using only the previous line. In Algorithm 1 (and for the hierarchic optimization discussed in Section 8.2), there are $O(n^{2})$ independent operations; in Algorithm 5, there are $O(n^{3})$ independent operations. After the specific improvements for Max-Min p-dispersion with Algorithms 2 and 4, there are $O(n)$ independent operations to compute each value $C_{k,i}^{MM}$ for $i\in[\\![k,n]\\!]$ in $O(\log i)$ time. These $O(n)$ operations shall be computed from the highest values of $i$, with $i=n$, to the lowest, with $i=k$, for a better load balancing following the results of the Longest Processing Time (LPT) heuristic and its approximation results [14]. In all cases, the parallel implementation is straightforward in a shared-memory environment like OpenMP, inducing only $p-3$ synchronizations, which is useful for the practical efficiency of the parallelization.
Lastly, we note that p-dispersion-MM and p-dispersion-MSN (and the hierarchic optimization discussed in Section 8.2) can be parallelized under GPGPU, with the enumerations running in $O(pn^{2})$ time. However, the dichotomic search in Algorithm 2 is not compatible with a GPGPU parallelization. For Max-Min p-dispersion, one may parallelize the $O(pn\log n)$ time version with OpenMP, or the $O(pn^{2})$ time and massively parallelized version under GPGPU. The practical efficiency becomes dependent on the characteristics of the GPU hardware. We note that the computation of the DP matrix line by line is a useful point for GPGPU parallelization, using less memory space in the GPU hardware, which may be a limiting factor.
### 8.4 Application to k-medoids/k-means clustering in a 2d PF
The exact DP algorithms for k-medoids and k-means in a 2d PF run in $O(n^{3})$ time [6, 7]. Such a complexity may be a bottleneck for practical applications dealing with large values of $n$. Classical heuristics for the k-means/k-medoids problems may be used in such cases, improving initial solutions with quick local search procedures [19, 30]. Such heuristics have no guarantee of optimality, and depend heavily on the initialization strategies [19, 30]. One may initialize these local searches with solutions of Max-Min p-dispersion, to have a quick initialization procedure using Algorithm 4. Actually, several initialization strategies are possible. Firstly, one can initialize the $k$ centroids using $k$-dispersion. Secondly, one can solve $(2k+1)$-dispersion and select the $k$ intermediate points following the order of Proposition 1, as sketched below. For both strategies, one can use the optimal DP algorithms, or also use solutions built by local search iterations of 3-dispersion as discussed in Section 8.2. Having many different initializations is useful for the final quality of solutions, implementing several local searches with different initializations in parallel environments (using Message Passing Interface protocols for a coarse-grained parallelization), as in [10].
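A hedged sketch of the second initialization strategy follows; `kmeans_init_by_dispersion` and `dispersion_select` are hypothetical names, the latter being an assumed helper returning the indexes of a p-point dispersion selection (e.g. the Max-Min DP sketched in Section 6), and scikit-learn's KMeans accepts an explicit array of initial centroids.

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_init_by_dispersion(points, k, dispersion_select):
    """Solve (2k+1)-dispersion on the sorted 2d PF, keep the k intermediate
    selected points as initial centroids, then run scikit-learn's k-means.
    dispersion_select(points, p) is an assumed helper returning indexes."""
    E = np.array(sorted(points))
    idx = sorted(dispersion_select(E.tolist(), 2 * k + 1))
    init = E[idx[1:-1:2]]             # the k intermediate selected points
    return KMeans(n_clusters=k, init=init, n_init=1).fit(E)
```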
## 9 Conclusion and perspectives
This paper examined the properties of four p-dispersion problems in a 2d PF using Euclidean distances. A novel variant, namely Max-Sum-Neighbor p-dispersion, was also designed specifically for a 2d PF. The cases $p=2$ and $p=3$ induce a complexity in $O(n)$ time with an additional memory space in $O(1)$. The cases $p=4$ and $p=5$ have respectively complexities in $O(n^{2})$ and $O(n^{3})$ time with an additional memory space in $O(1)$. Such results can be useful in the application to select a small number of representative solutions for a decision maker. This also offers the perspective of comparing the different variants of p-dispersion in large 2d PFs with small values of $p$, extending the comparisons from [13].
Three variants are proven to be solvable in polynomial time, designing similar DP algorithms. The Max-Sum-Min p-dispersion problem is solvable in $O(pn^{3})$ time and $O(pn^{2})$ memory space. The Max-Sum-Neighbor variant is proven solvable in $O(pn^{2})$ time and $O(pn)$ space. The standard Max-Min p-dispersion problem is solvable in $O(pn\log n)$ time and $O(n)$ space. Furthermore, the DP algorithms can be implemented with a quasi-linear speed-up using parallelization. These results offer practical perspectives for large 2d PFs, as demanded by population meta-heuristics in bi-objective optimization. Heuristic speed-ups and the quality of solutions were also discussed for such applications.
The efficiency of Max-Min p-dispersion in a 2d PF is also useful for an application to heuristics for k-means and k-medoids clustering in a 2d PF, where the exact DP algorithms have a complexity in $O(n^{3})$ time. Initialization strategies can use this work on p-dispersion problems, as discussed in this paper.
Max-Min p-dispersion is NP-hard in the planar case, and thus also in a 3d PF: there is no hope to extend the results of this paper to dimension 3. The perspective is then to design quick heuristics for PFs in dimension 3 and beyond.
## References
* [1] S. Ağca, B. Eksioglu, and J. Ghosh. Lagrangian solution of maximum dispersion problems. Naval Research Logistics, 47(2):97–114, 2000.
* [2] A. Auger, J. Bader, D. Brockhoff, and E. Zitzler. Investigating and exploiting the bias of the weighted hypervolume to articulate user preferences. In Proceedings of GECCO 2009, pages 563–570. ACM, 2009.
* [3] K. Bringmann, S. Cabello, and M. Emmerich. Maximum volume subset selection for anchored boxes. arXiv preprint arXiv:1803.00849, 2018.
* [4] K. Bringmann, T. Friedrich, and P. Klitzke. Two-dimensional subset selection for hypervolume and epsilon-indicator. In Annual Conference on Genetic and Evolutionary Computation, pages 589–596. ACM, 2014.
* [5] N. Dupin. Modélisation et résolution de grands problèmes stochastiques combinatoires: application à la gestion de production d’électricité. PhD thesis, Univ. Lille 1, 2015.
* [6] N. Dupin, F. Nielsen, and E. Talbi. Dynamic programming heuristic for k-means clustering among a 2-dimensional pareto frontier. 7th Internat. Conf. on Metaheuristics and Nature Inspired Computing, 2018.
* [7] N. Dupin, F. Nielsen, and E. Talbi. k-medoids clustering is solvable in polynomial time for a 2d Pareto front. In World Congress on Global Optimization, pages 790–799. Springer, 2019.
* [8] N. Dupin, F. Nielsen, and E. Talbi. Clustering a 2d Pareto Front: p-center problems are solvable in polynomial time. In International Conference on Optimization and Learning, pages 179–191. Springer, 2020.
* [9] N. Dupin and E. Talbi. Clustering in a 2-dimensional Pareto Front: p-median and p-center are solvable in polynomial time. arXiv preprint arXiv:1806.02098, 2018.
* [10] N. Dupin and E. Talbi. Parallel matheuristics for the discrete unit commitment problem with min-stop ramping constraints. International Transactions in Operational Research, 27(1):219–244, 2020.
* [11] M. Ehrgott and X. Gandibleux. Multiobjective combinatorial optimization - theory, methodology, and applications. In Multiple criteria optimization: State of the art annotated bibliographic surveys, pages 369–444. Springer, 2003.
* [12] E. Erkut. The discrete p-dispersion problem. European Journal of Operational Research, 46(1):48–60, 1990.
* [13] E. Erkut and S. Neuman. Comparison of four models for dispersing facilities. INFOR: Information Systems and Operational Research, 29(2):68–86, 1991.
* [14] R. Graham. Bounds on multiprocessing timing anomalies. SIAM Journal on Applied Mathematics, 17(2):416–429, 1969.
* [15] A. Grønlund, K. Larsen, A. Mathiasen, J. Nielsen, S. Schneider, and M. Song. Fast exact k-means, k-medians and Bregman divergence clustering in 1d. arXiv preprint arXiv:1701.07204, 2017.
* [16] P. Hansen and I. Moon. Dispersing facilities on a network. Cahiers du GERAD, 1995.
* [17] R. Hassin and A. Tamir. Improved complexity bounds for location problems on the real line. Operations Research Letters, 10(7):395–402, 1991.
* [18] A. Karmakar, S. Das, S. Nandy, and B. Bhattacharya. Some variations on constrained minimum enclosing circle problem. Journal of Combinatorial Optimization, 25(2):176–190, 2013.
* [19] S. Khan and A. Ahmad. Cluster center initialization algorithm for k-means clustering. Pattern Recognition Letters, 25(11):1293–1302, 2004.
* [20] M.J. Kuby. Programming models for facility dispersion: The p-dispersion and maxisum dispersion problems. Geographical Analysis, 19(4):315–329, 1987.
* [21] T. Kuhn, C. Fonseca, L. Paquete, S. Ruzika, M. Duarte, and J. Figueira. Hypervolume subset selection in two dimensions: Formulations and algorithms. Evolutionary Computation, 24(3):411–425, 2016.
* [22] T. Lei and R. Church. On the unified dispersion problem: Efficient formulations and exact algorithms. European Journal of Operational Research, 241(3):622–630, 2015.
* [23] M. Mahajan, P. Nimbhorkar, and K. Varadarajan. The planar k-means problem is NP-hard. Theoretical Computer Science, 442:13–21, 2012.
* [24] N. Megiddo and K. Supowit. On the complexity of some common geometric location problems. SIAM Journal on Computing, 13(1):182–196, 1984.
* [25] T. Peugeot, N. Dupin, M-J Sembely, and C. Dubecq. MBSE, PLM, MIP and Robust Optimization for System of Systems Management, Application to SCCOA French Air Defense Program. In Complex Systems Design & Management, pages 29–40. Springer, 2017.
* [26] D. Pisinger. Upper bounds and exact algorithms for p-dispersion problems. Computers & Operations Research, 33(5):1380–1398, 2006.
* [27] S. Ravi, D. Rosenkrantz, and G. Tayi. Heuristic and special case algorithms for dispersion problems. Operations Research, 42(2):299–310, 1994.
* [28] D. Sayah and S. Irnich. A new compact formulation for the discrete p-dispersion problem. European Journal of Operational Research, 256(1):62–67, 2017.
* [29] S. Sayın. Measuring the quality of discrete representations of efficient sets in multiple objective mathematical programming. Mathematical Programming, 87(3):543–560, 2000.
* [30] E. Schubert and P. Rousseeuw. Faster k-Medoids Clustering: Improving the PAM, CLARA, and CLARANS Algorithms. arXiv preprint arXiv:1810.05691, 2018.
* [31] E. Sintorn and U. Assarsson. Fast parallel GPU-sorting using a hybrid algorithm. Journal of Parallel and Distributed Computing, 68(10):1381–1388, 2008.
* [32] E. Talbi. Metaheuristics: from design to implementation, volume 74. Wiley, 2009.
* [33] D. Wang and Y. Kuo. A study on two geometric location problems. Information Processing Letters, 28(6):281–286, 1988.
* [34] E. Zio and R. Bazzo. A clustering procedure for reducing the number of representative solutions in the Pareto front of multiobjective optimization problems. European Journal of Operational Research, 210(3):624–634, 2011.
# On the quasicompactness of the moduli stack of logarithmic $G$-connections
over a curve
Andres Fernandez Herrero
###### Abstract
Fix a smooth projective curve over a field of characteristic zero and a finite
set of punctures. Let $G$ be a connected linear algebraic group. We prove that
the moduli of $G$-bundles with logarithmic connections having fixed residue
classes at the punctures is an algebraic stack of finite type.
###### Contents
1. 1 Introduction
2. 2 Preliminaries
1. 2.1 Notation
2. 2.2 Two representability lemmas
3. 2.3 Change of group for $G$-bundles
3. 3 The moduli stack of regular $G$-connections
1. 3.1 Regular $G$-connections
2. 3.2 Functoriality for regular $G$-connections
3. 3.3 The stack of regular $G$-connections
4. 4 The moduli stack of logarithmic $G$-connections
1. 4.1 Logarithmic $G$-connections and their residues
2. 4.2 Indecomposable example over elliptic curves
3. 4.3 Functoriality for logarithmic $G$-connections
4. 4.4 Stacks of logarithmic $G$-connections
## 1 Introduction
Fix a curve $C$ that is smooth projective and geometrically irreducible over a
field $k$ of characteristic $0$. Let $G$ be a connected linear algebraic group
over $k$. If $k=\mathbb{C}$, then the curve $C$ is a Riemann surface. The
Riemann-Hilbert correspondence establishes an equivalence between the groupoid
of regular $G$-connections on $C$ and the groupoid of homomorphisms from
topological fundamental group $\pi_{1}(C)$ into $G(\mathbb{C})$.
One would like to view the moduli stack $\text{Conn}_{G}(C)$ of regular
$G$-connections on $C$ as an algebraic avatar of the moduli of representations
of $\pi_{1}(C)$. Motivated by this, we show that $\text{Conn}_{G}(C)$ is an
algebraic stack of finite type for an arbitrary connected linear algebraic
group $G$ over any field of characteristic $0$.
More generally, the Riemann-Hilbert correspondence can be established for a
noncompact Riemann surface. Fix a finite set $\\{x_{i}\\}$ of $k$-points
$x_{i}\in C$. Set $D=\sum_{i}x_{i}$ to be the corresponding reduced divisor.
In this case one would like to study representations of the fundamental group
of $C\setminus D$. The corresponding objects considered in this context are
logarithmic connections with poles at $D$ [Del70]. This logarithmic version of
the Riemann-Hilbert correspondence gives an analytic description of the
monodromy around a puncture $x_{i}$, meaning the conjugacy class of the
element of $G(\mathbb{C})$ determined by a loop around $x_{i}$. Namely, the
monodromy around $x_{i}$ is related to the residue $O_{i}$ of the
corresponding logarithmic connection at $x_{i}$. Here $O_{i}$ is an adjoint
orbit in the Lie algebra of $G$.
The notion of logarithmic connection is algebraic; there is an algebraic stack
$\text{Conn}_{G}^{D}(C)$ parametrizing logarithmic $G$-connections on $C$ with
poles at $D$. In view of the Riemann-Hilbert correspondence, it is natural to
view this stack as an algebraic avatar of representations of the fundamental
group of $C\setminus D$. The main goal of this paper is to study some
geometric properties of $\text{Conn}_{G}^{D}(C)$. To this end, we prove the
following theorem.
###### Main Theorem (=Theorem 4.16).
Let $k$ be a field of characteristic $0$. Let $G$ be a connected linear
algebraic group over $k$. Let $\text{Conn}_{G}^{D,O_{i}}(C)$ denote the moduli
stack of $G$-logarithmic connections with poles at $D$ and fixed residue class
$O_{i}$ at each $x_{i}$. Then $\text{Conn}_{G}^{D,O_{i}}(C)$ is of finite type
over $k$.
The boundedness of a related moduli problem for $G=\text{GL}_{n}$ was proven
by Inaba, Iwasaki and Saito [IIS06]. In that paper they assume that the
logarithmic connection comes equipped with a compatible parabolic vector
bundle that is semistable with respect to a fixed set of weights.
If $D$ is not empty, we prove in Proposition 4.14 that
$\text{Conn}_{G}^{D}(C)$ is quasicompact if and only if $G$ is unipotent. This
shows that in most cases of interest it is necessary to restrict to a substack
of $\text{Conn}_{G}^{D}(C)$ in order to expect a bounded moduli problem.
Theorem 4.16 says that the substack $\text{Conn}^{D,O_{i}}_{G}(C)$ with fixed
residues is quasicompact. In a different direction, Nitsure [Nit93][Prop. 3.1]
proved that the substack of semistable connections inside
$\text{Conn}_{G}^{D}(C)$ is quasicompact when $G=\text{GL}_{n}$. He uses this
to construct a quasiprojective coarse moduli space for semistable logarithmic
connections [Nit93][Thm. 3.5].
The first part of this paper deals with the case when the set of punctures
$\\{x_{i}\\}$ is empty. Then the relevant geometric object is the stack
$\text{Conn}_{G}(C)$ of regular $G$-connections. We prove Proposition 3.7,
which is a special case of Theorem 4.16 above. The Harder-Narasimhan
filtration for vector bundles is our main tool to show quasicompactness. The
proof of Proposition 3.7 differs from that of the more general Theorem 4.16; it
yields better bounds for the Harder-Narasimhan type of the underlying bundle.
In the case when $G=\text{GL}_{n}$, Simpson also proved that
$\text{Conn}_{G}(C)$ is of finite type [Sim94a][Cor. 3.4]. This was one of the
main ingredients for his construction of the deRham moduli space in [Sim94a]
[Sim94b]. See pages 15-16 for a comparison of our approach with Simpson’s in
the case of $\text{GL}_{n}$.
The second part of the paper deals with the case when the set of punctures
$\\{x_{i}\\}$ is nonempty. This part is where we prove our main theorem
(Theorem 4.16). We also prove an auxiliary result that could be of independent
interest. This is Proposition 4.2. It gives an explicit description of the
cohomological obstruction for a $G$-bundle to admit a logarithmic connection
with some given set of residues. The proof of Proposition 4.2 is completely
algebraic and applies to families over an arbitrary base without any
assumptions on the characteristic of the ground field. It generalizes
[BDP18][Prop. 3.1], which was proven for the group $\text{GL}_{n}$ over
$\mathbb{C}$ using transcendental methods.
Acknowledgements: I would like to thank my advisor Nicolas Templier for his
help in improving the manuscript, and Indranil Biswas and Arideep Saha for
helpful comments. I acknowledge support by NSF grants DMS-1454893 and
DMS-2001071.
## 2 Preliminaries
### 2.1 Notation
We work over a fixed perfect ground field $k$. Some results will hold only
when the characteristic of $k$ is $0$; we will explicitly mention when this is
the case. Unless otherwise stated, all schemes will be understood to be
schemes over $k$. An undecorated product of $k$-schemes (e.g. $X\times S$)
should always be interpreted as a fiber product over $k$. We will sometimes
write $X_{S}$ instead of $X\times S$. If $R$ is a $k$-algebra and $S$ is a
$k$-scheme, we may use the notation $S_{R}$ to denote the fiber product
$S\times\text{Spec}(R)$. If a scheme $t$ is Spec of a field, we will write
$\kappa(t)$ for the corresponding coordinate field.
We fix once and for all a curve $C$ that is smooth, projective and
geometrically connected over $k$. We let $g$ denote the genus of $C$. We also
fix an ample line bundle $\mathcal{O}_{C}(1)$ on $C$. Choose a finite set
$\\{x_{i}\\}_{i\in I}$ of $k$-points in $C$. We will denote by
$q_{i}:x_{i}\rightarrow C$ the closed immersion of $x_{i}$ into $C$.
Let $G$ be a smooth connected linear algebraic group over $k$. Write
$\mathfrak{g}=\text{Lie}(G)$ for the Lie algebra of $G$. There is a
representation $Ad:G\longrightarrow\text{GL}(\mathfrak{g})$ called the adjoint
representation. For $G=\text{GL}_{n}$ it is given by matrix conjugation.
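Explicitly (a standard formula, recorded here only for illustration), for any $k$-algebra $R$ the adjoint action of $\text{GL}_{n}$ reads
$$Ad(g)\,X\;=\;g\,X\,g^{-1},\qquad g\in\text{GL}_{n}(R),\;X\in\mathfrak{gl}_{n}(R).$$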
$G$-bundles will play a prominent role in this paper. Let $X$ be a $k$-scheme. A
$G$-bundle over $X$ is a scheme $\pi:\mathcal{P}\rightarrow X$ equipped with a
right $G$-action. This action is required to make $\mathcal{P}$ a $G$-torsor
in the étale topology. This means that for some étale cover $T\rightarrow X$
the base-change $\mathcal{P}\times_{X}T$ is $G$-equivariantly isomorphic to
the trivial $G$-torsor $G_{T}$. An isomorphism between $G$-bundles is an
isomorphism of $X$-schemes that intertwines the $G$-actions.
We will often deal with quasicoherent sheaves on schemes. Let $X$ and $Y$ be
schemes and let $\mathcal{Q}$ be a quasicoherent sheaf on $Y$. If there is a
clear implicit choice of morphism $f:X\rightarrow Y$, we will write
$\mathcal{Q}|_{X}$ to denote the pullback $f^{*}\mathcal{Q}$ of the
quasicoherent sheaf $\mathcal{Q}$ by $f$. The same convention will be used for
pullbacks of $G$-bundles.
### 2.2 Two representability lemmas
In this subsection we state two technical lemmas. We first fix some notation.
Let $X,S$ be $k$-schemes. Suppose that we have a morphism $f:X\longrightarrow
S$.
###### Definition 2.1.
Let $\mathcal{E}$ be a vector bundle on $X$. Define
$\Gamma_{X/S}(\mathcal{E})$ to be the functor from $S$-schemes to sets given
as follows. For any $S$-scheme $g:T\longrightarrow S$, we set
$\Gamma_{X/S}(\mathcal{E})\,(T)\vcentcolon=H^{0}\left(X\times_{S}T,\,(\text{id}_{X}\times_{S}g)^{*}\,\mathcal{E}\,\right)$
###### Lemma 2.2.
Let $X\longrightarrow S$ be a proper flat morphism of finite presentation. Let
$\mathcal{E}$ be a vector bundle on $X$. The functor
$\Gamma_{X/S}(\mathcal{E})$ is represented by a scheme that is relatively
affine and of finite presentation over $S$.
###### Proof.
This is a special case of [Sta19, Tag 08K6]. In this case we set
$\mathcal{G}=\mathcal{O}_{X}$ and $\mathcal{F}=\mathcal{E}$. ∎
###### Lemma 2.3.
Let $f:X\longrightarrow S$ be a proper flat morphism of finite presentation.
Let $\mathcal{E}$ be a vector bundle on $X$. Fix a cohomology class $\sigma\in
H^{i}(X,\,\mathcal{E})$. Let $Z$ be the functor from $k$-schemes to sets
defined as follows. For any $k$-scheme $T$, we let
$Z(T)\vcentcolon=\;\left\\{\begin{matrix}\text{morphisms $g:T\rightarrow S$
such that}\\\
\text{$(\text{id}_{X}\times_{S}g)^{*}\sigma=0$}\end{matrix}\right\\}$
Then $Z$ is represented by a closed subscheme of $S$.
###### Proof.
After passing to a Zariski cover of $S$, we can assume that $S$ is affine. Let
$F^{\bullet}$ be the Grothendieck complex of $\mathcal{E}$ with respect to the
morphism $X\longrightarrow S$ [Sta19, Tag 0B91]. Choose a section
$\overline{v}\in F^{i}\,/\,F^{i-1}$ corresponding to the cohomology class
$\sigma\in H^{i}\left(X,\,\mathcal{E}\right)$. For any $k$-scheme $T$, we have
$Z(T)=\;\left\\{\begin{matrix}\text{morphisms $g:T\rightarrow S$ such that}\\\
\text{$g^{*}\overline{v}=0$}\end{matrix}\right\\}$
This functor is represented by the scheme theoretic support of the section
$\overline{v}$ of the coherent $\mathcal{O}_{S}$-sheaf $F^{i}\,/\,F^{i-1}$.
Hence it is a closed subscheme of $S$, as desired. ∎
### 2.3 Change of group for $G$-bundles
Let $\text{B}G$ denote the classifying stack of $G$. This is the pseudofunctor
from $k$-schemes to groupoids given as follows. For any $k$-scheme $S$,
$\text{B}G\,(S)\;\vcentcolon=\;\left\\{\begin{matrix}\text{groupoid of
$G$-bundles $\mathcal{P}$}\;\text{over $S$}\end{matrix}\right\\}$
Let $\text{Bun}_{G}(C)$ denote the moduli stack of $G$-bundles on $C$. This is
defined by $\text{Bun}_{G}(C)\,(S)\vcentcolon=\text{B}G(C\times S)$ for any
$k$-scheme $S$.
###### Proposition 2.4.
([Beh91][Prop. 4.4.4]). Suppose that $G$ is a connected reductive group over
$k$.
1. (i)
Let $\rho:G\longrightarrow\text{GL}_{n}$ be a faithful representation. Let
$\rho_{*}:\text{Bun}_{G}(C)\longrightarrow\text{Bun}_{\text{GL}_{n}}(C)$ be
the morphism induced by extension of structure groups. Then, $\rho_{*}$ is
schematic, affine and of finite type.
2. (ii)
$\text{Bun}_{G}(C)$ is an algebraic stack locally of finite type over $k$.
###### Proof.
1. (i)
Let $S$ be a $k$-scheme. Let
$\mathcal{P}:S\longrightarrow\text{Bun}_{\text{GL}_{n}}(C)$ be a
$\text{GL}_{n}$-bundle over $C\times S$. Set
$L\vcentcolon=\text{Bun}_{G}(C)\times_{\text{Bun}_{\text{GL}_{n}}(C)}S$. We
want to show that $L$ is represented by a scheme that is relatively affine and
of finite type over $S$.
Matsushima’s criterion [Ric77] implies that the homogeneous space
$\text{GL}_{n}\,/\,G$ is affine. Form the associated fiber bundle
$X\vcentcolon=\mathcal{P}\times^{\text{GL}_{n}}\left(\text{GL}_{n,\,C\times S}\,/\,G_{C\times S}\right)$. For any $S$-scheme $f:T\longrightarrow S$, we
have
$L(T)=\left\\{\;\text{sections of the morphism}\;X\times_{S}T\longrightarrow
C\times T\;\right\\}$
The action of $\text{GL}_{n}$ on the affine scheme $\text{GL}_{n}\,/\,G$
induces a representation of $\text{GL}_{n}$ on the coordinate ring
$\mathcal{O}_{\text{GL}_{n}\,/\,G}$ via translations. Since
$\text{GL}_{n}\,/\,G$ is of finite type over $k$, we can choose finitely many
generators for $\mathcal{O}_{\text{GL}_{n}\,/\,G}$ as a $k$-algebra. The first
theorem on page 24 of [Wat79] implies that there is a finite-dimensional
$k$-subrepresentation $V\subset\mathcal{O}_{\text{GL}_{n}\,/\,G}$ such that
$V$ contains the generators. The inclusion
$V\hookrightarrow\mathcal{O}_{\text{GL}_{n}\,/\,G}$ induces a
$\text{GL}_{n}$-equivariant ring homomorphism
$\text{Sym}(V)\rightarrow\mathcal{O}_{\text{GL}_{n}\,/\,G}$. Since $V$
contains a set of generators of $\mathcal{O}_{\text{GL}_{n}\,/\,G}$, the
homomorphism is surjective. Applying the functor $\text{Spec}(-)$ we get a
$\text{GL}_{n}$-equivariant closed immersion $\text{GL}_{n}/G\hookrightarrow
V^{*}$.
Let $\mathcal{E}$ denote the vector bundle on $C\times S$ given by the
associated bundle $\mathcal{P}\times^{\text{GL}_{n}}V^{*}_{C\times S}$. By
étale descent for morphisms [Sta19, Tag 040L], the morphism
$\text{GL}_{n}/G\hookrightarrow V^{*}$ induces a map
$h:X\longrightarrow\mathcal{E}$. It is a closed immersion of finite
presentation, because these properties can be checked étale locally [Sta19,
Tag 02L6] [Sta19, Tag 02L0].
By definition, the functor $\Gamma_{C\times S/S}(\mathcal{E})$ sends an
$S$-scheme $g:T\longrightarrow S$ to the set
$\begin{split}\Gamma_{C\times S/S}(\mathcal{E})\,(T)&\vcentcolon=H^{0}\left(C\times T,\,(\text{id}_{C}\times g)^{*}\mathcal{E}\,\right)\\ &=\left\{\;\text{sections of the morphism}\;\mathcal{E}_{C\times T}\longrightarrow C\times T\;\right\}\end{split}$
Let $s$ be a section of the morphism $X\times_{S}T\longrightarrow C\times T$.
We can compose with
$h\times_{S}\,\text{id}_{T}:X\times_{S}T\longrightarrow\mathcal{E}_{C\times
T}$ in order to obtain a section $(h\times_{S}\,\text{id}_{T})\circ s$ of the
map $\mathcal{E}_{C\times T}\longrightarrow C\times T$. This induces a
morphism of functors $\psi:L\rightarrow\Gamma_{C\times S/S}(\mathcal{E})$. By
Lemma 2.2, $\Gamma_{C\times S/S}(\mathcal{E})$ is represented by a scheme that
is relatively affine and of finite type over $S$. We shall prove that
$\psi:L\rightarrow\Gamma_{C\times S/S}(\mathcal{E})$ is a closed immersion,
which shows that $L$ is relatively affine and of finite type.
Let $g:T\longrightarrow S$ be an $S$-scheme. For our purposes we can assume
that $T$ is affine. Choose a morphism $T\longrightarrow\Gamma_{C\times
S/S}(\mathcal{E})$ represented by a section $s$ of $\mathcal{E}_{C\times
T}\longrightarrow C\times T$. We want to show that the fiber product
$L\times_{\Gamma_{C\times S/S}(\mathcal{E})}T$ is represented by a closed
subscheme of $T$. For any $k$-scheme $Y$, we have
$\displaystyle L\times_{\Gamma_{C\times
S/S}(\mathcal{E})}T\,(Y)=\left\\{\begin{matrix}\text{morphisms $j:Y\rightarrow
T$ such that}\\\ \;\;\text{$(\text{id}_{C}\times j)^{*}s:C\times
Y\longrightarrow\mathcal{E}_{C\times Y}$ \; \text{factors through} \;
$X\times_{S}Y$}\;\;\end{matrix}\right\\}$
Define $Z$ to be the fiber product
$$\begin{CD}
Z @>>> X\times_{S}T\\
@V{\iota}VV @VV{h\times_{S}\text{id}_{T}}V\\
C\times T @>{s}>> \mathcal{E}_{C\times T}
\end{CD}$$
The morphism $\iota:Z\hookrightarrow C\times T$ is a closed immersion of
finite presentation. For any $k$-scheme $Y$, we have
$L\times_{\Gamma_{C\times
S/S}(\mathcal{E})}T\,(Y)=\left\\{\begin{matrix}\text{morphisms $j:Y\rightarrow
T$ such that}\\\ \;\;\text{$(\text{id}_{C}\times j)^{*}\iota:Z_{C\times
Y}\hookrightarrow C\times Y$}\;\text{is an isomorphism}\end{matrix}\right\\}$
Let us denote by $\mathcal{J}$ the finitely presented sheaf of ideals
corresponding to the closed immersion $\iota:Z\hookrightarrow C\times T$. Let
$\varphi:\mathcal{J}\hookrightarrow\mathcal{O}_{C\times T}$ denote the
inclusion. We have
$\displaystyle L\times_{\Gamma_{C\times
S/S}(\mathcal{E})}T\,(Y)=\left\\{\begin{matrix}\text{maps $j:Y\rightarrow T$
such that the morphism}\\\ \;\;\text{$(\text{id}_{C}\times
j)^{*}\varphi:(\text{id}_{C}\times
j)^{*}\mathcal{J}\longrightarrow\mathcal{O}_{C\times Y}$}\;\text{is
identically $0$}\;\;\end{matrix}\right\\}$
There exists some positive integer $N$ such that the twist $\mathcal{J}(N)$ is
globally generated. This means that there is a surjective morphism
$\mathcal{O}^{\oplus l}_{C\times T}\twoheadrightarrow\mathcal{J}(N)$ for some
positive integer $l$. Let $\sigma$ denote the composition
$\sigma:\mathcal{O}^{\oplus l}_{C\times
T}\xtwoheadrightarrow{\;\;\;\;\;\;}\mathcal{J}(N)\xrightarrow{\;\varphi\,\otimes\,\text{id}_{\mathcal{O}_{C\times
T}(N)}\;}\mathcal{O}_{C\times T}(N)$
$\sigma$ can be viewed as a section of the vector bundle
$\text{Hom}\left(\mathcal{O}_{C\times T}^{\oplus l},\,\mathcal{O}_{C\times
T}(N)\right)\cong\mathcal{O}_{C\times T}^{\oplus l}(N)$. We have
$L\times_{\Gamma_{C\times S/S}(\mathcal{E})}T\,(Y)=\left\\{\text{ \, maps
$j:Y\rightarrow T$ such that $(\text{id}_{C}\times j)^{*}\sigma=0$
\,}\right\\}$
By Lemma 2.3, this functor is represented by a closed subscheme of $T$.
2. (ii)
By part $(i)$, it suffices to show that $\text{Bun}_{\text{GL}_{n}}(C)$ is an
algebraic stack that is locally of finite type over $k$. The stack
$\text{Bun}_{\text{GL}_{n}}(C)$ classifies rank $n$ vector bundles on $C$.
This is an algebraic stack locally of finite type over $k$, see [Neu09][Thm.
2.57], [Hei10][Example 1.14] or [LMB00][4.6.2.1].
∎
We now return to the general case when $G$ is a smooth connected linear
algebraic group. Let $U$ denote the unipotent radical of $G$. Recall that this
is the biggest smooth connected normal unipotent subgroup of $G$. The quotient
$G/U$ is a reductive linear algebraic group. Let $\rho_{G}:G\rightarrow G/U$
denote the quotient morphism.
###### Proposition 2.5.
Suppose that the characteristic of $k$ is $0$. Let
$(\rho_{G})_{*}:\text{Bun}_{G}(C)\rightarrow\text{Bun}_{G/U}(C)$ denote the
morphism induced by extension of structure groups.
1. (i)
For all $k$-schemes $T$ and morphisms $T\rightarrow\text{Bun}_{G/U}(C)$, the
fiber product $\text{Bun}_{G}(C)\times_{\text{Bun}_{G/U}(C)}T$ is an algebraic
stack of finite type over $T$.
2. (ii)
$\text{Bun}_{G}(C)$ is an algebraic stack locally of finite type over $k$.
3. (iii)
The morphism of algebraic stacks $(\rho_{G})_{*}$ is of finite type.
4. (iv)
If the group $G$ is unipotent, then $\text{Bun}_{G}(C)$ is an algebraic stack
of finite type over $k$.
###### Proof.
1. (i)
We will induct on the length $l(U)$ of the nilpotent series for the unipotent
group $U$. The base case is $l(U)=0$. Then $U$ is trivial and so
$(\rho_{G})_{*}$ is the identity. Hence $(i)$ holds trivially for the base
case.
Suppose that (i) holds whenever $l(U)\leq n$. Assume that $l(U)=n+1$. Let
$Z_{U}$ denote the neutral component of the center of $U$. The algebraic group
$Z_{U}$ is a smooth connected normal subgroup of $G$. Let $\rho_{z}$ denote
the quotient morphism $\rho_{z}:G\rightarrow G/Z_{U}$. The unipotent radical
of $G/Z_{U}$ is $U/Z_{U}$. The morphism $(\rho_{G})_{*}$ factors as
$(\rho_{G})_{*}:\text{Bun}_{G}(C)\xrightarrow{(\rho_{z})_{*}}\text{Bun}_{G/Z_{U}}(C)\xrightarrow{(\rho_{G/Z_{U}})_{*}}\text{Bun}_{G/U}(C)$
Note that $l(U/Z_{U})=n$ by construction. By the induction hypothesis, (i)
holds for the morphism $(\rho_{G/Z_{U}})_{*}$. Therefore it suffices to show
that the morphism $(\rho_{z})_{*}$ satisfies (i).
Let $T$ be a $k$-scheme. Choose a morphism
$T\rightarrow\text{Bun}_{G/Z_{U}}(C)$ represented by a $G/Z_{U}$-bundle
$\mathcal{P}$ on $C_{T}$. We want to show that the fiber product
$\text{Bun}_{G}(C)\times_{\text{Bun}_{G/Z_{U}}(C)}T$ is an algebraic stack of
finite type over $T$. After passing to a Zariski cover of $T$, we can assume
that $T$ is affine.
Let $(Z_{U})^{\mathcal{P}}$ denote the unipotent commutative group scheme over
$C_{T}$ obtained by twisting $Z_{U}$ by $\mathcal{P}$ via the natural
conjugation action of $G/Z_{U}$ on $Z_{U}$. Since the characteristic of $k$ is
$0$, the commutative unipotent group $Z_{U}$ is isomorphic to a product of
additive groups $(\mathbb{G}_{a})^{r}$. Furthermore $G/Z_{U}$ acts by linear
automorphisms on $Z_{U}\cong(\mathbb{G}_{a})^{r}$. By étale descent for vector
bundles, it follows that the twisted group $(Z_{U})^{\mathcal{P}}$ is the
total space of a vector bundle $\mathcal{V}$ on $C_{T}$.
By [Hof10][Prop. 3.1], there is an obstruction in
$H^{2}_{fppf}(C_{T},\mathcal{V})$ for lifting the $G/Z_{U}$-bundle
$\mathcal{P}$ to a $G$-bundle. Since $\mathcal{V}$ is a vector bundle, the
fppf cohomology group $H^{2}_{fppf}(C_{T},\mathcal{V})$ is the same as the
usual sheaf cohomology group $H^{2}(C_{T},\mathcal{V})$ in the small Zariski
site (cf. [Hof10][Remark 3.3]). Since the projective morphism
$C_{T}\rightarrow T$ has relative dimension $1$ and $T$ is affine, the Leray
spectral sequence implies that $H^{2}(C_{T},\mathcal{V})=0$. Now [Hof10][Prop.
3.1] shows that the fiber product
$\text{Bun}_{G}(C)\times_{\text{Bun}_{G/Z_{U}}(C)}T$ is isomorphic to the
stack $\text{Bun}_{(Z_{U})^{\mathcal{P}}}(C_{T})$ classifying
$(Z_{U})^{\mathcal{P}}$-bundles on $C_{T}$. We are reduced to showing that
$\text{Bun}_{(Z_{U})^{\mathcal{P}}}(C_{T})$ is an algebraic stack of finite
type over $T$.
Let $F^{\bullet}$ be the Grothendieck complex of $\mathcal{V}$ with respect to
the morphism $C_{T}\longrightarrow T$ [Sta19, Tag 0B91]. Since the relative
dimension of the projective morphism $C_{T}\rightarrow T$ is $1$, we can
choose $F^{\bullet}$ to be a two-term complex $\left[F^{0}\rightarrow
F^{1}\right]$, where $F^{0}$ and $F^{1}$ are finite free modules on the affine
scheme $T$. We will also denote by $F^{0}$ and $F^{1}$ the corresponding
(constant) vector group schemes over $T$. Note that $F^{0}$ acts on $F^{1}$
via the differential morphism $F^{0}\rightarrow F^{1}$. A similar reasoning as
in [Hof10][Remark 2.2] shows that $\text{Bun}_{(Z_{U})^{\mathcal{P}}}(C_{T})$
is isomorphic to the quotient stack $\left[F^{1}/\,F^{0}\right]$. This is an
algebraic stack of finite type over $T$, as desired.
2. (ii)
In order to show that $\text{Bun}_{G}(C)$ is an algebraic stack locally of
finite type over $k$, we are allowed to check after passing to a smooth cover
of the pseudofunctor $\text{Bun}_{G}(C)$. By Proposition 2.4 (ii),
$\text{Bun}_{G/U}(C)$ is an algebraic stack locally of finite type over $k$.
This implies that $\text{Bun}_{G/U}(C)$ admits an atlas
$A\rightarrow\text{Bun}_{G/U}(C)$ such that $A$ is locally of finite type over
$k$. Consider the fiber product:
$$\begin{CD}
A\times_{\text{Bun}_{G/U}(C)}\text{Bun}_{G}(C) @>>> \text{Bun}_{G}(C)\\
@VVV @VVV\\
A @>>> \text{Bun}_{G/U}(C)
\end{CD}$$
The morphism
$A\times_{\text{Bun}_{G/U}(C)}\text{Bun}_{G}(C)\rightarrow\text{Bun}_{G}(C)$
is a smooth cover of $\text{Bun}_{G}(C)$. By part (i),
$A\times_{\text{Bun}_{G/U}(C)}\text{Bun}_{G}(C)$ is an algebraic stack of
finite type over $A$. Since $A$ is locally of finite type over $k$, the fiber
product $A\times_{\text{Bun}_{G/U}(C)}\text{Bun}_{G}(C)$ is an algebraic stack
locally of finite type over $k$. As it is a smooth cover of
$\text{Bun}_{G}(C)$, it follows that $\text{Bun}_{G}(C)$ is an algebraic stack
locally of finite type over $k$, as desired.
3. (iii)
This is just a restatement of (i).
4. (iv)
If $G$ is unipotent, then $G/U$ is the trivial group. Therefore
$\text{Bun}_{G/U}(C)=\text{Spec}(k)$, and $(\rho_{G})_{*}$ is the structure morphism
of $\text{Bun}_{G}(C)$. By part (iii), it follows that $\text{Bun}_{G}(C)$ is
of finite type over $k$.
∎
## 3 The moduli stack of regular $G$-connections
### 3.1 Regular $G$-connections
There is a canonical left-invariant $\mathfrak{g}$-valued $1$-form on $G$
called the Maurer-Cartan form. It is defined as follows. Left translation
induces an isomorphism $T_{G}\cong\mathfrak{g}\otimes\mathcal{O}_{G}$ for the
tangent sheaf $T_{G}$ of $G$. This yields a chain of isomorphisms
$\Omega^{1}_{G/k}\otimes\mathfrak{g}=\text{Hom}_{\mathcal{O}_{G}}(T_{G},\mathcal{O}_{G})\otimes\mathfrak{g}\cong\text{Hom}_{k}(\mathfrak{g},\mathfrak{g})\otimes\mathcal{O}_{G}$.
The invariant $\mathfrak{g}$-valued 1-form on $G$ that corresponds to
$\text{id}_{\mathfrak{g}}\otimes 1$ under this isomorphism is the Maurer-
Cartan form. We denote it by $\omega\in
H^{0}\left(G,\,\Omega_{G/k}^{1}\otimes\mathfrak{g}\right)$.
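For instance, for $G=\text{GL}_{n}$ the left-translation trivialization identifies $\omega$ with the familiar matrix-valued $1$-form (a standard formula, recalled only for illustration)
$$\omega\;=\;g^{-1}\,dg\,\in\,H^{0}\left(\text{GL}_{n},\,\Omega^{1}_{\text{GL}_{n}/k}\otimes\mathfrak{gl}_{n}\right),$$
whose left-invariance follows from $(hg)^{-1}\,d(hg)=g^{-1}\,dg$ for any fixed $h\in\text{GL}_{n}$.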
Let $S$ be a $k$-scheme. The same construction can be applied to define the
relative Maurer-Cartan form $\omega_{S}\in
H^{0}\left(G_{S},\,\Omega^{1}_{G\times S/S}\otimes\mathfrak{g}\right)$ for the
group scheme $G_{S}$ over the base $S$. This can be used to define a
homomorphism $\phi:G_{C\times
S}\longrightarrow\text{Aff}\left(\Omega^{1}_{C\times
S/S}\otimes\mathfrak{g}\right)$ of $C\times S$-group schemes from $G_{C\times
S}$ to the group scheme of affine linear transformations of the locally free
$\mathcal{O}_{C\times S}$-module $\Omega^{1}_{C\times
S/S}\otimes\mathfrak{g}$. We describe this map in terms of functors of points.
Given a $C\times S$-scheme $U$ and an element $g\in G_{S}(U)$, we can pull
back the relative Maurer-Cartan form in order to obtain $g^{*}\omega_{S}\in
H^{0}\left(U,\,\Omega_{U/S}^{1}\otimes\mathfrak{g}\right)$. We define
$\phi(U)\,(g)\vcentcolon=Id_{\Omega^{1}_{U/S}}\underset{\mathcal{O}_{U}}{\otimes}Ad(g)\,+\,(g^{-1})^{*}\omega_{S}$.
The fact that this is a homomorphism follows from the left-invariance of the
Maurer-Cartan form.
Let $\mathcal{P}$ be a $G$-bundle on $C\times S$. Suppose that $X$ is a
relatively affine $C\times S$-scheme equipped with an action of $G_{C\times
S}$. The group scheme $G_{C\times S}$ acts diagonally on the product
$\mathcal{P}\times_{C\times S}X$. We define the associated bundle
$\mathcal{P}\times^{G}X$ to be the quotient $(\mathcal{P}\times_{C\times
S}X)\,/\,G_{C\times S}$. Étale descent for affine morphisms [Sta19, Tag 0245]
implies that $\mathcal{P}\times^{G}X$ is represented by a scheme. One such
example is the vector bundle $\mathcal{P}\times^{G}\mathfrak{g}_{C\times S}$
associated to the adjoint representation $Ad:G_{C\times
S}\longrightarrow\text{GL}(\mathfrak{g}_{C\times S})$. This is called the
adjoint bundle of $\mathcal{P}$, and will henceforth be denoted by
$Ad\,\mathcal{P}$.
We now recall the definition of $G$-connection on $\mathcal{P}$ relative to
$S$. The homomorphism $\phi$ induces an action of $G_{C\times S}$ on the total
space of $\Omega^{1}_{C\times S/S}\otimes\mathfrak{g}$ by affine linear
transformations. A $G$-connection on $\mathcal{P}$ relative to $S$ is defined
to be a section of the associated affine bundle
$\mathcal{P}\times^{G}\left(\Omega^{1}_{C\times
S/S}\otimes\mathfrak{g}\right)$. By construction the bundle of $G$-connections
$\mathcal{P}\times^{G}\left(\Omega^{1}_{C\times
S/S}\otimes\mathfrak{g}\right)$ is an étale torsor for the $C\times S$-sheaf
$Ad\,\mathcal{P}\otimes\Omega^{1}_{C\times S/S}$, which is viewed as an
abelian sheaf under addition. This torsor represents a cohomology class
$\gamma_{\mathcal{P}}\in H^{1}\left(C\times
S,\,Ad\,\mathcal{P}\otimes\Omega^{1}_{C\times S/S}\right)$ called the Atiyah
class of $\mathcal{P}$ relative to $S$. Here $H^{1}\left(C\times
S,\,Ad\,\mathcal{P}\otimes\Omega^{1}_{C\times S/S}\right)$ is the ordinary
sheaf cohomology group in the Zariski site. It coincides with the
corresponding étale cohomology group because
$Ad\,\mathcal{P}\otimes\Omega^{1}_{C\times S/S}$ is quasicoherent [Sta19, Tag
03P2].
We can think of the Atiyah class $\gamma_{\mathcal{P}}\in H^{1}\left(C\times
S,\,Ad\,\mathcal{P}\otimes\Omega^{1}_{C\times S/S}\right)$ as an element of
$\text{Ext}^{1}\left(\mathcal{O}_{C\times
S},\,\text{Ad}\,\mathcal{P}\otimes\Omega^{1}_{C\times S/S}\right)$. It
corresponds to an extension of $\mathcal{O}_{C\times S}$-modules
$0\longrightarrow Ad\,\mathcal{P}\otimes\Omega^{1}_{C\times
S/S}\longrightarrow At(\mathcal{P})\longrightarrow\mathcal{O}_{C\times
S}\longrightarrow 0$
We call $At(\mathcal{P})$ the Atiyah bundle. The short exact sequence above
will be referred to as the Atiyah sequence. By definition a $G$-connection on
the $G$-bundle $\mathcal{P}$ is the same as a $\mathcal{O}_{C\times S}$-linear
splitting of the Atiyah sequence. Such a splitting exists if and only if the
Atiyah class $\gamma_{\mathcal{P}}$ vanishes.
###### Remark 3.1.
In the literature (e.g. [Bis10]) the Atiyah sequence differs from the one we
have defined. It is usually the twist of the sequence above by
$\left(\Omega^{1}_{C\times S/S}\right)^{\vee}$.
###### Example 3.2.
Suppose that $S=\text{Spec}\,k$ and $G=\mathbb{G}_{m}$. A
$\mathbb{G}_{m}$-torsor is the same thing as a line bundle on $C$. Let
$\mathcal{L}$ be such a line bundle. By definition $\gamma_{\mathcal{L}}\in
H^{1}\left(C,\,Ad\,\mathcal{L}\otimes\Omega_{C/k}^{1}\right)$. Since
$\mathbb{G}_{m}$ is abelian, there is a canonical identification
$Ad\,\mathcal{L}\cong\mathcal{O}_{C}\otimes\mathfrak{gl}_{1}$. Hence
$\gamma_{\mathcal{L}}\in
H^{1}\left(C,\,\Omega_{C/k}^{1}\otimes\mathfrak{gl}_{1}\right)$. Serre duality
yields an isomorphism
$H^{1}\left(C,\,\Omega_{C/k}^{1}\otimes\mathfrak{gl}_{1}\right)\cong\left(H^{0}\left(C,\,\mathcal{O}_{C}\right)\otimes\mathfrak{gl}_{1}\right)^{\vee}$.
Since $C$ is geometrically irreducible and proper, there is a canonical
isomorphism of $k$-algebras $H^{0}\left(C,\,\mathcal{O}_{C}\right)\cong k$.
Under this identification we have
$\gamma_{\mathcal{L}}\in\mathfrak{gl}_{1}^{\vee}$. Finally, there is a
canonical isomorphism $\mathfrak{gl}_{1}\cong k$ obtained via the natural
inclusion $\mathbb{G}_{m}\hookrightarrow\mathbb{A}^{1}_{k}$. So we can view
$\gamma_{\mathcal{L}}$ as an element of $k$.
###### Lemma 3.3.
Let $\mathcal{L}$ be as in Example 3.2. Under the identification above, we
have $\gamma_{\mathcal{L}}=-\text{deg}\,\mathcal{L}$.
###### Proof.
This follows from a Čech cohomology computation similar to the one in [Ati57a]
Section 3. ∎
Part (i) in the following lemma is one of the directions in a theorem of
Weil. See [BR08] Section 7 for an exposition.
###### Lemma 3.4.
Suppose that the characteristic of $k$ is $0$. Let $\mathcal{E}$ be a vector
bundle on $C$ equipped with a connection.
1. (i)
Every direct summand $\mathcal{F}$ of $\mathcal{E}$ satisfies
$\text{deg}\,\mathcal{F}=0$. In particular $\mathcal{E}$ itself has degree
$0$.
2. (ii)
If $\mathcal{F}$ is a subbundle of $\mathcal{E}$ that is stable under the
connection, then $\text{deg}\,\mathcal{F}=0$.
###### Proof.
1. (i)
Suppose that $\mathcal{E}$ is a direct sum
$\mathcal{E}=\mathcal{F}\oplus\mathcal{G}$ of vector bundles. Let
$i:\mathcal{F}\hookrightarrow\mathcal{E}$ and
$\text{pr}:\mathcal{E}\rightarrow\mathcal{F}$ denote the inclusion and
projection morphisms induced by the direct sum decomposition.
We can think of the connection on $\mathcal{E}$ as a morphism of sheaves
$\theta:\mathcal{E}\rightarrow\mathcal{E}\otimes\Omega^{1}_{C/k}$ that
satisfies the Leibniz rule. Then, the composition
$\mathcal{F}\xrightarrow{i}\mathcal{E}\xrightarrow{\theta}\mathcal{E}\otimes\Omega^{1}_{C/k}\xrightarrow{\text{pr}\otimes\text{id}}\mathcal{F}\otimes\Omega^{1}_{C/k}$
also satisfies the Leibniz rule, so it is a connection on $\mathcal{F}$.
Suppose that the rank of $\mathcal{F}$ is $r$. Using the functoriality of
connections for the determinant morphism
$\text{det}:\text{GL}_{r}\rightarrow\mathbb{G}_{m}$, we conclude that the
determinant bundle $\text{det}\,\mathcal{F}$ admits a connection. This means
that $\gamma_{\text{det}\,\mathcal{F}}=0$. By Lemma 3.3, this is equivalent to
$-\text{deg}\,\mathcal{F}=0$ when viewed as an element of $k$. Since the
characteristic of $k$ is $0$, it follows that $\text{deg}\,\mathcal{F}=0$.
2. (ii)
Suppose that $\mathcal{F}\subset\mathcal{E}$ is stable under the connection
$\theta$. Then, we can restrict $\theta|_{\mathcal{F}}$ in order to obtain a
connection defined on $\mathcal{F}$. Now part (i) shows that
$\text{deg}\,\mathcal{F}=0$.
∎
### 3.2 Functoriality for regular $G$-connections
Connections are contravariant with respect to morphisms of base schemes. To
see why this is the case, let $f:T\longrightarrow S$ be a morphism of
$k$-schemes. Let $\mathcal{P}$ be a $G$-bundle on $C\times S$. By definition
the bundle of $G$-connections $(Id_{C}\times
f)^{*}\mathcal{P}\times^{G}\left(\Omega^{1}_{C\times
T/T}\otimes\mathfrak{g}\right)$ is the pullback of the
$Ad\,\mathcal{P}\otimes\Omega^{1}_{C\times S/S}$-torsor
$\mathcal{P}\times^{G}\left(\Omega^{1}_{C\times
S/S}\otimes\mathfrak{g}\right)$. This means that the Atiyah class behaves well
under pullbacks, meaning $\gamma_{(Id_{C}\times
f)^{*}\mathcal{P}}=(Id_{C}\times f)^{*}\gamma_{\mathcal{P}}$. Here we write
$(Id_{C}\times f)^{*}$ on the right-hand side to denote the natural map on
sheaf cohomology groups
$\displaystyle H^{1}\left(C\times
S,\,Ad\,\mathcal{P}\otimes\Omega^{1}_{C\times
S/S}\right)\,\longrightarrow\,H^{1}\left(C\times T,\,(Id_{C}\times
f)^{*}\left(Ad\,\mathcal{P}\otimes\Omega^{1}_{C\times
S/S}\right)\,\right)\,\xlongequal{\;\;}\,H^{1}\left(C\times
T,\,Ad\,(Id_{C}\times f)^{*}\mathcal{P}\otimes\Omega^{1}_{C\times T/T}\right)$
As a consequence, the Atiyah sequence for $(Id_{C}\times f)^{*}\mathcal{P}$ is
the $(Id_{C}\times f)$-pullback of the Atiyah sequence for $\mathcal{P}$. We
can therefore pullback connections by interpreting them as splittings of the
Atiyah sequence. For a $G$-connection $\theta$ on $\mathcal{P}$, we denote by
$f^{*}\theta$ the corresponding connection on $(Id_{C}\times
f)^{*}\mathcal{P}$.
On the other hand, connections are covariant with respect to morphisms of
structure groups. Fix a $k$-scheme $S$. Let $H$ be a linear algebraic group
over $k$ with Lie algebra $\mathfrak{h}$. Let $\varphi:G\longrightarrow H$ be
a homomorphism of algebraic groups. For any $G$-bundle $\mathcal{P}$ over
$C\times S$, there exists an associated $H$-bundle
$\varphi_{*}\mathcal{P}\vcentcolon=\mathcal{P}\times^{G}H$ obtained via
extension of structure group. Observe that there is an induced morphism of
tangent spaces at the identity
$\text{Lie}(\varphi):\mathfrak{g}\longrightarrow\mathfrak{h}$. By étale
descent for morphisms of quasicoherent sheaves [Sta19, Tag 023T], this yields
a vector bundle morphism $Ad\,\varphi:Ad\,\mathcal{P}\longrightarrow
Ad\,\varphi_{*}\mathcal{P}$. This induces the following map of sheaf
cohomology groups.
$\varphi^{1}_{*}:H^{1}\left(C\times
S,\,Ad\,\mathcal{P}\otimes\Omega^{1}_{C\times S/S}\right)\longrightarrow
H^{1}\left(C\times S,\,Ad\,\varphi_{*}\mathcal{P}\otimes\Omega^{1}_{C\times
S/S}\right)$
By construction
$\varphi_{*}^{1}\left(\gamma_{\mathcal{P}}\right)=\gamma_{\varphi_{*}\mathcal{P}}$.
Recall that the functoriality of $\text{Ext}^{1}(\mathcal{O}_{C\times S},-)$
can be described concretely in terms of pushouts [Sta19, Tag 010I]. This
description shows that there is a canonical commutative diagram of short exact
sequences between the extension defined by $\gamma_{\mathcal{P}}$ and the
extension defined by $\varphi_{*}^{1}(\gamma_{\mathcal{P}})$. Since
$\varphi_{*}^{1}\left(\gamma_{\mathcal{P}}\right)=\gamma_{\varphi_{*}\mathcal{P}}$,
there is a commutative diagram of Atiyah sequences
$$\begin{CD}
0 @>>> \text{Ad}\,\mathcal{P}\otimes\Omega^{1}_{C\times S/S} @>>> At(\mathcal{P}) @>>> \mathcal{O}_{C\times S} @>>> 0\\
@. @VV{Ad\,\varphi\,\otimes\,id}V @VV{At(\varphi)}V @| @.\\
0 @>>> \text{Ad}\,\varphi_{*}\mathcal{P}\otimes\Omega^{1}_{C\times S/S} @>>> At(\varphi_{*}\mathcal{P}) @>>> \mathcal{O}_{C\times S} @>>> 0
\end{CD}$$
We denote the middle vertical morphism by $\text{At}(\varphi)$. Let $\theta$
be a $G$-connection on $\mathcal{P}$, viewed as a splitting of the top row. We
compose $\theta$ with $At(\varphi)$ in order to define a $H$-connection
$\varphi_{*}\theta\vcentcolon=At(\varphi)\circ\theta$ on the $H$-bundle
$\varphi_{*}\mathcal{P}$.
### 3.3 The stack of regular $G$-connections
The contravariant functoriality for connections with respect to maps of base
schemes implies that $G$-connections form a pseudofunctor into groupoids.
This allows us to define the moduli stack of flat $G$-bundles over $C$.
###### Definition 3.5.
The moduli stack $\text{Conn}_{G}(C)$ of flat $G$-bundles over $C$ is the
pseudofunctor from $k$-schemes to groupoids defined as follows. For every
$k$-scheme $S$, we define
$\text{Conn}_{G}(C)\,(S)\vcentcolon=\;\left\\{\begin{matrix}\text{groupoid of
$G$-torsors}\;\mathcal{P}\text{ over }C\times S\\\ $+$\\\ \text{ a
$G$-connection $\theta$ on $\mathcal{P}$ relative to
$S$}\end{matrix}\right\\}$
The isomorphisms are required to be compatible with the connections.
There is a morphism of pseudofunctors
$Forget:\text{Conn}_{G}(C)\longrightarrow\text{Bun}_{G}(C)$ given by
forgetting the connection.
###### Proposition 3.6.
The morphism $Forget:\text{Conn}_{G}(C)\longrightarrow\text{Bun}_{G}(C)$ is
schematic, affine and of finite type. In particular $\text{Conn}_{G}(C)$ is an
algebraic stack that is locally of finite type over $k$.
###### Proof.
Let $Z$ be the subfunctor of $\text{Bun}_{G}(C)$ defined as follows. For all
$k$-schemes $S$, we set
$Z(S)\vcentcolon=\;\left\\{\begin{matrix}\text{groupoid of
$\mathcal{P}\in\text{Bun}_{G}(C)\,(S)$}\\\ \text{such that
$\gamma_{\mathcal{P}}=0$}\end{matrix}\right\\}$
We claim that $Z$ is a closed substack of $\text{Bun}_{G}(C)$. Choose a
$k$-scheme $S$ and a morphism $S\longrightarrow\text{Bun}_{G}(C)$ represented
by a $G$-bundle $\mathcal{P}$ on $C\times S$. We want to show that the
morphism $Z\times_{\text{Bun}_{G}(C)}S\xrightarrow{pr_{2}}S$ is a closed
immersion. The functor $Z\times_{\text{Bun}_{G}(C)}S$ can be described as
follows. For all $k$-schemes $T$, we have
$Z\times_{\text{Bun}_{G}(C)}S\,(T)=\;\left\\{\begin{matrix}\text{morphisms
$f:T\rightarrow S$ such that}\\\ \text{$(Id_{C}\times
f)^{*}\gamma_{\mathcal{P}}=0$}\end{matrix}\right\\}$
We are implicitly using the compatibility of the Atiyah class with pullbacks.
Lemma 2.3 applied to the proper flat morphism $C\times S\longrightarrow S$
implies that $Z\times_{\text{Bun}_{G}(C)}S$ is represented by a closed
subscheme of $S$. This concludes our proof of the claim.
Consider now the morphism
$Forget:\text{Conn}_{G}(C)\longrightarrow\text{Bun}_{G}(C)$. Since the
vanishing of the Atiyah class is a necessary condition for the existence of a
$G$-connection, the morphism $Forget$ factors through the closed immersion
$Z\hookrightarrow\text{Bun}_{G}(C)$. It suffices to show that
$\text{Conn}_{G}(C)\longrightarrow Z$ is schematic, affine and of finite type.
Let $S$ be a $k$-scheme and $\mathcal{P}:S\longrightarrow Z$ be a $G$-bundle
on $C\times S$ with $\gamma_{\mathcal{P}}=0$. By definition the fiber product
$\text{Conn}_{G}(C)\times_{Z}S$ is the functor from $S$-schemes
$g:T\longrightarrow S$ to sets given by
$\text{Conn}_{G}(C)\times_{Z}S\,(T)=\left\\{\;\text{$G$-connections
on}\;(\text{id}_{C}\times g)^{*}\mathcal{P}\;\text{relative to
$T$}\;\right\\}$
We want to show that $\text{Conn}_{G}(C)\times_{Z}S$ is represented by a
scheme that is relatively affine and of finite type over $S$. Since
$\gamma_{\mathcal{P}}=0$, the Atiyah sequence for $\mathcal{P}$ is split.
Choose a splitting $\theta$, i.e. a $G$-connection on $\mathcal{P}$ relative
to $S$. Any other $G$-connection on $\mathcal{P}$ is of the form $\theta+s$
where $s\in H^{0}\left(C\times S,\,Ad\,\mathcal{P}\otimes\Omega^{1}_{C\times
S/S}\,\right)$. This induces an isomorphism of presheaves of sets
$\psi:\Gamma_{C\times S/S}\left(\text{Ad}\,\mathcal{P}\otimes\Omega_{C\times
S/S}^{1}\right)\xrightarrow{\sim}\text{Conn}_{G}(C)\times_{Z}S$
To be precise, $\psi$ is defined as follows. For any $S$-scheme
$g:T\longrightarrow S$ and section $s\in H^{0}\left(C\times
T,\,(\text{id}_{C}\times g)^{*}\,Ad\,\mathcal{P}\otimes\Omega^{1}_{C\times
T/T}\,\right)$ we define
$\psi(T)\,(s)=(\text{id}_{C}\times g)^{*}\theta+s$
By Lemma 2.2, we know that $\Gamma_{C\times
S/S}\left(\text{Ad}\,\mathcal{P}\,\otimes\,\Omega_{C\times S/S}^{1}\right)$ is
represented by a scheme that is relatively affine and of finite type over $S$.
The proposition follows. ∎
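As a quick illustration (immediate from Example 3.2 and Lemma 3.3, stated here only for orientation): take $G=\mathbb{G}_{m}$, $S=\text{Spec}\,k$ and $\text{char}\,k=0$. A line bundle $\mathcal{L}$ lies in the closed substack $Z$ of the proof exactly when $\text{deg}\,\mathcal{L}=0$, and in that case the fiber of $Forget$ over $\mathcal{L}$ is a torsor under
$$H^{0}\left(C,\,Ad\,\mathcal{L}\otimes\Omega^{1}_{C/k}\right)\cong H^{0}\left(C,\,\Omega^{1}_{C/k}\right),$$
an affine space of dimension $g$.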
We now use the $Forget$ map to prove that the stack $\text{Conn}_{G}(C)$ is
quasicompact whenever $\text{char}\,k=0$.
###### Proposition 3.7.
Suppose that the characteristic of $k$ is $0$. The algebraic stack
$\text{Conn}_{G}(C)$ is of finite type over $k$.
###### Proof.
By Proposition 3.6, it suffices to show that the map
$Forget:\text{Conn}_{G}(C)\longrightarrow\text{Bun}_{G}(C)$ factors through an
open substack $W\subset\text{Bun}_{G}(C)$ that is of finite type over $k$. We
shall prove this.
Let $U$ be the unipotent radical of $G$. Denote by $\rho_{G}:G\rightarrow G/U$
the corresponding quotient morphism. Choose a faithful representation
$\rho:G/U\longrightarrow\text{GL}_{n}$ of the reductive group $G/U$. For any
$G$-bundle $\mathcal{P}$ we can form the associated $\text{GL}_{n}$-bundle
$(\rho\circ\rho_{G})_{*}\mathcal{P}$ via extension of structure group. This
construction defines a morphism
$(\rho\circ\rho_{G})_{*}:\text{Bun}_{G}(C)\longrightarrow\text{Bun}_{\text{GL}_{n}}(C)$.
For any $G$-connection $\theta$ on $\mathcal{P}$ we get an induced
$\text{GL}_{n}$-connection $(\rho\circ\rho_{G})_{*}\theta$ on
$\rho_{*}\mathcal{P}$. There is a commutative diagram
$$\begin{CD}
\text{Conn}_{G}(C) @>>> \text{Conn}_{G/U}(C) @>>> \text{Conn}_{\text{GL}_{n}}(C)\\
@V{Forget}VV @V{Forget}VV @V{Forget}VV\\
\text{Bun}_{G}(C) @>{(\rho_{G})_{*}}>> \text{Bun}_{G/U}(C) @>{\rho_{*}}>> \text{Bun}_{\text{GL}_{n}}(C)
\end{CD}$$
Propositions 2.4 and 2.5 show that the two horizontal morphisms at the bottom
of the diagram above are of finite type. Therefore the composition
$(\rho\circ\rho_{G})_{*}$ is of finite type. The claim will follow if we can
show that the forgetful map
$Forget:\text{Conn}_{\text{GL}_{n}}(C)\longrightarrow\text{Bun}_{\text{GL}_{n}}(C)$
factors through an open substack $W\subset\text{Bun}_{\text{GL}_{n}}(C)$ of
finite type over $k$.
For this purpose, we will use the Harder-Narasimhan stratification of
$\text{Bun}_{\text{GL}_{n}}$ [Sch15]. For any rational cocharacter
$\lambda\in\mathbb{Q}^{n}$, let $\text{Bun}_{\text{GL}_{n}}^{\leq\lambda}(C)$
be the quasicompact open substack of $\text{Bun}_{GL_{n}}(C)$ consisting of
the strata with Harder-Narasimhan polygon smaller than $\lambda$. We will show
that
$Forget:\text{Conn}_{\text{GL}_{n}}(C)\longrightarrow\text{Bun}_{\text{GL}_{n}}(C)$
factors through a finite union
$\bigcup_{j}\text{Bun}_{\text{GL}_{n}}^{\leq\lambda_{j}}(C)$ for some
$\lambda_{j}$. This can be checked at the level of field-valued points. Hence
it suffices to check that the set of Harder-Narasimhan polygons of geometric
points $\overline{t}\longrightarrow\text{Bun}_{\text{GL}_{n}}(C)$ with
nonempty fiber $\text{Conn}_{\text{GL}_{n}}(C)_{\overline{t}}$ is a finite
set.
Let $\mu=(\mu_{1}\geq\mu_{2}\geq...\geq\mu_{n})$ be a tuple of rational
numbers in $\frac{1}{n!}\mathbb{Z}^{n}$. Let
$\mathcal{E}:\overline{t}\longrightarrow\text{Bun}_{\text{GL}_{n}}(C)$ be a
vector bundle on $C\times\overline{t}$, where $\overline{t}$ is Spec of an
algebraically closed field. Suppose that the Harder-Narasimhan polygon of
$\mathcal{E}$ is given by $\mu$. Assume that the fiber
$\text{Conn}_{\text{GL}_{n}}(C)_{\overline{t}}$ is nonempty. This means that
$\mathcal{E}$ admits a connection. We claim that $\mu$ satisfies the following
two conditions
1. (a)
$\sum_{j=1}^{n}\mu_{j}=0$.
2. (b)
$\mu_{j}-\mu_{j+1}\leq\text{max}\\{2g-2,0\\}$ for all $1\leq j\leq n-1$.
The claim implies that
$-\left(\text{max}\{g-1,0\}\right)\cdot(n-1)\leq\mu_{n}\leq\mu_{1}\leq\left(\text{max}\{g-1,0\}\right)\cdot(n-1)$.
Indeed, condition (a) gives $\mu_{1}=\frac{1}{n}\sum_{j=1}^{n}(\mu_{1}-\mu_{j})$,
and condition (b) bounds each difference $\mu_{1}-\mu_{j}$ by
$(j-1)\cdot\text{max}\{2g-2,0\}$; the bound for $\mu_{n}$ is symmetric.
This shows that there are finitely many possibilities for
$\mu\in\frac{1}{n!}\mathbb{Z}^{n}$. It remains to prove the claim.
1. (a)
By Lemma 3.4, $\text{deg}\,\mathcal{E}=0$. This is equivalent to the condition
(a) in the claim above.
2. (b)
For the sake of contradiction, assume that
$\mu_{j}-\mu_{j+1}>\text{max}\{2g-2,0\}$ for some $1\leq j\leq n-1$. Suppose
that $\mathcal{E}$ has Harder-Narasimhan filtration
$0\subset\mathcal{F}_{1}\subset\mathcal{F}_{2}\subset...\subset\mathcal{F}_{l}=\mathcal{E}$
There is an index $1\leq k\leq l-1$ such that
$\mu\left(\mathcal{F}_{k}\,/\,\mathcal{F}_{k-1}\right)=\mu_{j}$. We have a
short exact sequence
$0\longrightarrow\mathcal{F}_{k}\longrightarrow\mathcal{E}\longrightarrow\mathcal{E}\,/\,\mathcal{F}_{k}\longrightarrow
0$
An application of Serre duality plus the definition of semistability and the
hypothesis $\mu_{j}-\mu_{j+1}>\text{max}\{2g-2,0\}$ implies that
$\text{Ext}^{1}\left(\mathcal{E}\,/\,\mathcal{F}_{k}\,,\,\mathcal{F}_{k}\right)=0$.
We conclude that
$\mathcal{E}=\mathcal{F}_{k}\,\oplus\,\mathcal{E}\,/\,\mathcal{F}_{k}$. But by
assumption we must have
$\text{deg}\,\mathcal{F}_{k}\,>\,\text{deg}\,\mathcal{E}\,>\,\text{deg}\,\mathcal{E}\,/\,\mathcal{F}_{k}$
Hence $\text{deg}\,\mathcal{F}_{k}\,>\,0$. This contradicts Lemma 3.4 (i),
because $\mathcal{F}_{k}$ is a direct summand of $\mathcal{E}$.
∎
We record another proof that $\text{Conn}_{\text{GL}_{n}}(C)$ is of finite
type as a consequence of Simpson’s work [Sim94a]. The stack
$\text{Conn}_{\text{GL}_{n}}(C)$ classifies rank $n$ vector bundles equipped
with a connection. These are $\Lambda$-modules of pure dimension $1$ on $C$,
where $\Lambda$ is the sheaf of differential operators on $C$ (using the
notation in [Sim94a][pg. 85-86]). If $\mathcal{E}$ is a vector bundle with a
connection, then $\text{deg}(\mathcal{E})=0$ by Lemma 3.4. If furthermore
$\mathcal{E}$ has rank $n$, then we can use Riemann-Roch to see that the
Hilbert polynomial of $\mathcal{E}$ is given by
$p(\mathcal{E},x)=\left(n\cdot\text{deg}\,\mathcal{O}_{C}(1)\right)x+n\cdot(1-g)$
In particular the slope of $\mathcal{E}$ with respect to $\mathcal{O}_{C}(1)$
is given by
$\widehat{\mu}(\mathcal{E})=\frac{1-g}{\text{deg}\,\mathcal{O}_{C}(1)}$.
Suppose that $\mathcal{G}\subset\mathcal{E}$ is a $\Lambda$-submodule. This
means that $\mathcal{G}$ is a subbundle that is stable under the connection.
Lemma 3.4 (ii) implies that $\text{deg}\,\mathcal{G}=0$. The same Riemann-Roch
computation can be applied to see that
$\widehat{\mu}(\mathcal{G})=\frac{1-g}{\text{deg}\,\mathcal{O}_{C}(1)}$. This
shows that $\mathcal{E}$ is a $\mu$-semistable $\Lambda$-module in the sense
of [Sim94a][§3]. It follows that $\text{Conn}_{\text{GL}_{n}}(C)$ classifies
$\mu$-semistable $\Lambda$-modules with fixed Hilbert polynomial
$P(x)=\left(n\cdot\text{deg}\,\mathcal{O}_{C}(1)\right)x+n\cdot(1-g)$. We can
therefore apply [Sim94a][Cor. 3.4] to conclude that the moduli stack
$\text{Conn}_{\text{GL}_{n}}(C)$ is quasicompact.
In fact [Sim94a][Lemma 3.3] provides a bound on the Harder-Narasimhan polygons
of points in the image of the morphism
$\text{Forget}:\text{Conn}_{\text{GL}_{n}}(C)\rightarrow\text{Bun}_{\text{GL}_{n}}(C)$.
We assume that the ample line bundle $\mathcal{O}_{C}(1)$ has degree $1$.
Using the notation of [Sim94a][Lemma 3.3], we have that
$\text{Gr}_{1}(\Lambda)$ is the tangent bundle of $C$. A Riemann-Roch
computation shows that we can take $m=4g-3$ in [Sim94a][Lemma 3.3]. Let
$\mathcal{E}\in\text{Forget}\left(\text{Conn}_{\text{GL}_{n}}(C)\right)$ be a
vector bundle with Harder-Narasimhan type
$\mu=(\mu_{1}\geq\mu_{2}\geq...\geq\mu_{n})$. We have
$\mu_{1}=\mu(\mathcal{F}_{1})=\widehat{\mu}(\mathcal{F}_{1})+(1-g)$, where
$\mathcal{F}_{1}$ is the first step in the Harder-Narasimhan filtration.
[Sim94a][Lemma 3.3] shows that
$\mu_{1}\leq\left(\text{max}\\{4g-3,0\\}\right)\cdot n$. Using this inequality
and the fact that $\sum_{j=1}^{n}\mu_{j}=0$, we get a bound on the Harder-
Narasimhan polygon of $\mathcal{E}$. Hence there are finitely many
possibilities for $\mu$. This bound on the Harder-Narasimhan type is weaker
than the one obtained in the proof of Proposition 3.7.
## 4 The moduli stack of logarithmic $G$-connections
### 4.1 Logarithmic $G$-connections and their residues
We start this section by recalling the necessary preliminaries on logarithmic
connections. Our initial exposition is adapted from the holomorphic case
considered in [BDPS17][2.2, 2.3].
Let $D=\sum_{i\in I}x_{i}$ be the reduced divisor of $C$ supported at the
$x_{i}$’s. There is a canonical inclusion of sheaves $\mathcal{O}_{C\times
S}(-D)\hookrightarrow\mathcal{O}_{C\times S}$ induced by the divisor $D$. The
logarithmic Atiyah sequence $\text{At}^{D}(\mathcal{P})$ with poles at $D$ is
defined by the following pullback diagram.
$$\begin{CD}
At^{D}(\mathcal{P}) @>>> \mathcal{O}_{C\times S}(-D)\\
@VVV @VVV\\
At(\mathcal{P}) @>>> \mathcal{O}_{C\times S}
\end{CD}$$
By definition, there is a commutative diagram of short exact sequences
$$\begin{CD}
0 @>>> \text{Ad}\,\mathcal{P}\otimes\Omega^{1}_{C\times S/S} @>>> At^{D}(\mathcal{P}) @>>> \mathcal{O}_{C\times S}(-D) @>>> 0\\
@. @| @VVV @VVV @.\\
0 @>>> \text{Ad}\,\mathcal{P}\otimes\Omega^{1}_{C\times S/S} @>>> At(\mathcal{P}) @>>> \mathcal{O}_{C\times S} @>>> 0
\end{CD}$$
The top row is called the logarithmic Atiyah sequence with poles at $D$. A
splitting of this sequence is called a logarithmic connection with poles at
$D$. By construction it makes sense to ask whether a logarithmic connection
$\theta$ of $\mathcal{P}$ extends to a $G$-connection of $\mathcal{P}$. This
just means that the composition $\mathcal{O}_{C\times
S}(-D)\xrightarrow{\theta}\text{At}^{D}(\mathcal{P})\rightarrow\text{At}(\mathcal{P})$
(uniquely) extends to $\mathcal{O}_{C\times S}$.
###### Example 4.1.
Suppose that $S=\text{Spec}\,k$ and $G=\mathbb{G}_{m}$. Let $\mathcal{L}$ be a
line bundle on $C$. If the divisor $D$ is nonzero, the logarithmic Atiyah
sequence for $\mathcal{L}$ splits. Therefore all line bundles admit
logarithmic connections.
We tensor the logarithmic Atiyah sequence with $\mathcal{O}_{C\times S}(D)$ to
obtain the following diagram
$$0\longrightarrow\text{Ad}\,\mathcal{P}\otimes\Omega^{1}_{C\times S/S}(D)\longrightarrow At^{D}(\mathcal{P})(D)\longrightarrow\mathcal{O}_{C\times S}\longrightarrow 0$$
Figure 1: Diagram 1
Let $i\in I$. Base-changing Diagram 1 to the closed subscheme $x_{i}\times S$,
we get a diagram with exact rows and columns, whose four rows are
$$0\longrightarrow\left(At^{D}(\mathcal{P})\,/\,At(\mathcal{P})\right)(D)\,|_{x_{i}\times S}\xrightarrow{\;\psi_{i}\;}\mathcal{O}_{x_{i}\times S}\longrightarrow 0$$
$$0\longrightarrow Ad(\mathcal{P})\otimes\Omega^{1}_{C\times S/S}(D)\,|_{x_{i}\times S}\longrightarrow At^{D}(\mathcal{P})(D)|_{x_{i}\times S}\longrightarrow\mathcal{O}_{x_{i}\times S}\longrightarrow 0$$
$$0\longrightarrow Ad(\mathcal{P})\otimes\Omega^{1}_{C\times S/S}(D)\,|_{x_{i}\times S}\longrightarrow At(\mathcal{P})(D)|_{x_{i}\times S}\longrightarrow\mathcal{O}_{x_{i}\times S}\longrightarrow 0$$
$$0\longrightarrow\left(At^{D}(\mathcal{P})\,/\,At(\mathcal{P})\right)(D)\,|_{x_{i}\times S}\xrightarrow{\;\psi_{i}\;}\mathcal{O}_{x_{i}\times S}\longrightarrow 0$$
The first two rows yield a canonical splitting of the base-change of Diagram 1
to $x_{i}\times S$
$$\begin{CD}
0 @>>> \text{Ad}\,\mathcal{P}\otimes\Omega^{1}_{C\times S/S}(D)\,|_{x_{i}\times S} @>>> At^{D}(\mathcal{P})(D)|_{x_{i}\times S} @>>> \mathcal{O}_{x_{i}\times S} @>>> 0\\
@. @| @VV{\sim}V @| @.\\
0 @>>> \text{Ad}\,\mathcal{P}\otimes\Omega^{1}_{C\times S/S}(D)\,|_{x_{i}\times S} @>>> \text{Ad}\,\mathcal{P}\otimes\Omega^{1}_{C\times S/S}(D)\,|_{x_{i}\times S}\oplus\mathcal{O}_{x_{i}\times S} @>>> \mathcal{O}_{x_{i}\times S} @>>> 0
\end{CD}$$
Figure 2: Diagram 2
Consider the stalk $\Omega^{1}_{C/k}(D)|_{\mathcal{O}_{C,x_{i}}}$. It is a
free $\mathcal{O}_{C,x_{i}}$-module of rank $1$. There is a canonical
generating element given as follows. Choose a uniformizer $z_{i}$ of the
discrete valuation ring $\mathcal{O}_{C,x_{i}}$. Then, the meromorphic
$1$-form $\frac{dz_{i}}{z_{i}}$ is a generator of
$\Omega^{1}_{C/k}(D)|_{\mathcal{O}_{C,x_{i}}}$ and does not depend on the
choice of uniformizer. This induces a canonical trivialization
$\Omega^{1}_{C/k}(D)|_{\mathcal{O}_{C,x_{i}}}\cong\mathcal{O}_{C,x_{i}}$. This
in turn yields an isomorphism $\Omega^{1}_{C\times S/S}(D)|_{x_{i}\times
S}\cong\mathcal{O}_{x_{i}\times S}$ after base-changing to $x_{i}\times S$.
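The independence of the choice of uniformizer admits the following one-line verification: if $w=u\,z_{i}$ is another uniformizer, with $u\in\mathcal{O}_{C,x_{i}}^{\times}$, then
$\frac{dw}{w}=\frac{du}{u}+\frac{dz_{i}}{z_{i}}$
and $\frac{du}{u}$ is a regular $1$-form, hence lies in $z_{i}\cdot\Omega^{1}_{C/k}(D)|_{\mathcal{O}_{C,x_{i}}}$. Therefore $\frac{dw}{w}$ and $\frac{dz_{i}}{z_{i}}$ agree modulo the maximal ideal and induce the same trivialization.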
This canonical isomorphism $\Omega^{1}_{C\times S/S}(D)|_{x_{i}\times
S}\cong\mathcal{O}_{x_{i}\times S}$ and the splitting in Diagram 2 can be used
to identify the $x_{i}\times S$ fiber of Diagram 1 with the following split
short exact sequence
$0\longrightarrow\text{Ad}\left(\mathcal{P}|_{x_{i}\times S}\right)\longrightarrow\text{Ad}\left(\mathcal{P}|_{x_{i}\times S}\right)\oplus\mathcal{O}_{x_{i}\times S}\longrightarrow\mathcal{O}_{x_{i}\times S}\longrightarrow 0$ (Diagram 3)
Let $\theta$ be a logarithmic connection on $\mathcal{P}$ with poles at $D$,
viewed as a splitting of the logarithmic Atiyah sequence. We can tensor with
$\mathcal{O}_{C\times S}(D)$ and pass to the $x_{i}\times S$ fiber to obtain a
splitting of Diagram 3. Such splittings are in natural correspondence with
global sections of the vector bundle $\text{Ad}\left(\mathcal{P}|_{x_{i}\times
S}\right)$. The section corresponding to the splitting induced by $\theta$ is
called the residue of $\theta$ at $x_{i}$, and will be denoted
$\text{Res}_{x_{i}}\theta\in H^{0}\left(x_{i}\times
S,\,\text{Ad}\left(\mathcal{P}|_{x_{i}\times S}\right)\,\right)$.
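For orientation, here is a sketch of what the residue looks like in local coordinates, in the illustrative special case $G=\text{GL}_{n}$ and $S=\text{Spec}\,k$ (this description is not used in what follows): if $z$ is a uniformizer at $x_{i}$ and the logarithmic connection is written locally as
$\theta=d+A(z)\,\frac{dz}{z},\qquad A(z)\in\mathfrak{gl}_{n}\otimes\mathcal{O}_{C,x_{i}},$
then $\text{Res}_{x_{i}}\theta$ is the endomorphism $A(0)$ of the fiber at $x_{i}$ of the corresponding rank $n$ vector bundle.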
Our next goal is to define a version of the logarithmic Atiyah sequence with
prescribed residues. Let $S$ be a $k$-scheme. Let $\mathcal{P}$ be a
$G$-bundle on $C\times S$. For each $i\in I$, fix a section $s_{i}\in
H^{0}\left(x_{i}\times S,\,Ad(\mathcal{P}|_{x_{i}\times S})\,\right)$. Set
$W_{i}$ to be the cokernel of the map $s_{i}\oplus
Id_{\mathcal{O}_{x_{i}\times S}}:\,\mathcal{O}_{x_{i}\times S}\longrightarrow
Ad(\mathcal{P}|_{x_{i}\times S})\oplus\mathcal{O}_{x_{i}\times S}$. We have
seen that there is an isomorphism
$At^{D}(\mathcal{P})(D)|_{x_{i}\times
S}\xrightarrow{\sim}\text{Ad}\left(\mathcal{P}|_{x_{i}\times
S}\right)\oplus\mathcal{O}_{x_{i}\times S}$
Define $At^{D,s_{i}}(\mathcal{P})$ to be the kernel of the composition
$\displaystyle At^{D}(\mathcal{P})(D)\xrightarrow{\underset{i\in
I}{\oplus}unit}\bigoplus_{i\in I}(q_{i}\times
id_{S})_{*}\left(At^{D}(\mathcal{P})(D)|_{x_{i}\times
S}\right)\xrightarrow{\sim}\bigoplus_{i\in I}(q_{i}\times
id_{S})_{*}\,\left(Ad(\mathcal{P}|_{x_{i}\times
S})\oplus\mathcal{O}_{x_{i}\times S}\right)\twoheadrightarrow\bigoplus_{i\in
I}(q_{i}\times id_{S})_{*}\,W_{i}$
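Concretely, the cokernel $W_{i}$ is identified with $Ad(\mathcal{P}|_{x_{i}\times S})$ by the map $(c,r)\mapsto c-r\cdot s_{i}$, which is surjective and kills exactly the image of $s_{i}\oplus Id_{\mathcal{O}_{x_{i}\times S}}$. Under this identification, a local section of $At^{D}(\mathcal{P})(D)$ lies in the kernel $At^{D,s_{i}}(\mathcal{P})$ precisely when its component $(c,r)$ at each $x_{i}\times S$ satisfies $c=r\cdot s_{i}$; this matches the stalkwise description used in the proof of Proposition 4.2 below.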
Consider the pullback diagram
$0\longrightarrow\text{Ad}\,\mathcal{P}\otimes\Omega^{1}_{C\times S/S}\longrightarrow At^{D,s_{i}}(\mathcal{P})\longrightarrow\mathcal{O}_{C\times S}\longrightarrow 0$
$0\longrightarrow\text{Ad}\,\mathcal{P}\otimes\Omega^{1}_{C\times S/S}(D)\longrightarrow At^{D}(\mathcal{P})(D)\longrightarrow\mathcal{O}_{C\times S}\longrightarrow 0$
Here the left and middle vertical maps are the natural inclusions and the right vertical map is the identity. (Diagram 4)
By construction, there is a bijection between logarithmic $G$-connections
$\theta$ on $\mathcal{P}$ such that $\text{Res}_{x_{i}}\theta=s_{i}$ for all
$i\in I$ and splittings of the top row in Diagram 4. We refer to this short
exact sequence as the logarithmic Atiyah sequence with prescribed residues.
$0\longrightarrow\text{Ad}\,\mathcal{P}\otimes\Omega^{1}_{C\times S/S}\longrightarrow At^{D,s_{i}}(\mathcal{P})\longrightarrow\mathcal{O}_{C\times S}\longrightarrow 0$ (Diagram 5)
We set $\gamma^{D,s_{i}}_{\mathcal{P}}\in H^{1}\left(C\times
S,\,Ad\,\mathcal{P}\otimes\Omega^{1}_{C\times S/S}\,\right)$ to be the
cohomology class associated to the extension in Diagram 5. It is the analogue
of the Atiyah class for logarithmic connections with prescribed residues. The
$G$-bundle $\mathcal{P}$ admits a logarithmic connection with prescribed residues $s_{i}$ if and only if $\gamma^{D,s_{i}}_{\mathcal{P}}=0$.
Proposition 4.2 gives an explicit description of $\gamma^{D,s_{i}}_{\mathcal{P}}$. In order to understand this description we
need some setup. For each $i\in I$, we have a short exact sequence
$0\longrightarrow\mathcal{O}_{C\times S}\longrightarrow\mathcal{O}_{C\times
S}(x_{i})\xrightarrow{unit}(q_{i}\times
id_{S})_{*}\left(\mathcal{O}_{C}(x_{i})|_{x_{i}\times S}\right)\longrightarrow
0$
We tensor this sequence with $\text{Ad}\,\mathcal{P}\otimes\Omega^{1}_{C\times
S/S}$ in order to obtain
$\displaystyle
0\longrightarrow\text{Ad}\,\mathcal{P}\otimes\Omega^{1}_{C\times
S/S}\longrightarrow\text{Ad}\,\mathcal{P}\otimes\Omega^{1}_{C\times
S/S}(x_{i})\longrightarrow(q_{i}\times
id_{S})_{*}\left(\text{Ad}\,\mathcal{P}|_{x_{i}\times
S}\otimes\Omega^{1}_{C\times S/S}(x_{i})|_{x_{i}\times
S}\right)\longrightarrow 0$
The canonical trivialization $\Omega^{1}_{C\times S/S}(x_{i})|_{x_{i}\times
S}\cong\mathcal{O}_{x_{i}\times S}$ described earlier can be used to rewrite the
short exact sequence above.
$0\longrightarrow\text{Ad}\,\mathcal{P}\otimes\Omega^{1}_{C\times
S/S}\longrightarrow\text{Ad}\,\mathcal{P}\otimes\Omega^{1}_{C\times
S/S}(x_{i})\longrightarrow(q_{i}\times
id_{S})_{*}\left(\text{Ad}\,\mathcal{P}|_{x_{i}\times S}\right)\longrightarrow
0$
Denote by $\delta_{i}:H^{0}\left(x_{i}\times
S,\,\text{Ad}\,\mathcal{P}|_{x_{i}\times S}\right)\longrightarrow
H^{1}\left(C\times S,\,\text{Ad}\,\mathcal{P}\otimes\Omega^{1}_{C\times
S/S}\right)$ the corresponding connecting homomorphism of sheaf cohomology
groups.
The following proposition is proven in [BDP18][Prop. 3.1] in the case when
$k=\mathbb{C}$, $G=\text{GL}_{n}$ and $S=\text{Spec}\,\mathbb{C}$. The
argument found there uses transcendental methods. We give an algebraic proof
of the analogous result in the case of an arbitrary base $S$, arbitrary ground
field $k$, and any smooth connected linear algebraic group $G$.
###### Proposition 4.2.
Fix a $k$-scheme $S$. Let $\mathcal{P}$ be a $G$-bundle over $C\times S$. Let
$s_{i}\in H^{0}\left(x_{i}\times S,\,\text{Ad}(\mathcal{P}|_{x_{i}\times
S})\,\right)$ be sections as above. Then, we have
$\gamma^{D,s_{i}}_{\mathcal{P}}=\gamma_{\mathcal{P}}-\sum_{i\in
I}\delta_{i}(s_{i})$.
###### Proof.
Consider the short exact sequence
$0\longrightarrow\mathcal{O}_{C\times S}\longrightarrow\mathcal{O}_{C\times
S}(D)\xrightarrow{unit}\bigoplus_{i\in I}(q_{i}\times
id_{S})_{*}\left(\mathcal{O}_{C}(x_{i})|_{x_{i}\times S}\right)\longrightarrow
0$
By tensoring with $\text{Ad}\,\mathcal{P}\,\otimes\,\Omega_{C\times S/S}^{1}$
and using the identifications $\Omega^{1}_{C\times S/S}(x_{i})|_{x_{i}\times
S}\cong\mathcal{O}_{x_{i}\times S}$ as described above, we obtain a short
exact sequence
$\displaystyle
0\longrightarrow\text{Ad}\,\mathcal{P}\,\otimes\,\Omega^{1}_{C\times
S/S}\,\xrightarrow{\;\;j\;\;}\,\text{Ad}\,\mathcal{P}\,\otimes\,\Omega^{1}_{C\times
S/S}(D)\,\xrightarrow{\;\;u\;\;}\,\bigoplus_{i\in I}(q_{i}\times
id_{S})_{*}\left(\text{Ad}\,\mathcal{P}|_{x_{i}\times S}\right)\longrightarrow
0$
The morphisms $j$ and $u$ above are labeled for future use. The construction
of the connecting homomorphism in Cech cohomology implies that the connecting
homomorphism for this short exact sequence is given by the sum
$\sum_{i\in I}\delta_{i}\,:\;\bigoplus_{i\in I}H^{0}\left(x_{i}\times
S,\,\text{Ad}\,\mathcal{P}|_{x_{i}\times S}\right)\longrightarrow
H^{1}\left(C\times S,\,\text{Ad}\,\mathcal{P}\otimes\Omega^{1}_{C\times
S/S}\right)$
We recall how to describe the extension corresponding to the cohomology class
$-\sum_{i\in I}\delta_{i}(s_{i})\in H^{1}\left(C\times
S,\,\text{Ad}\,\mathcal{P}\otimes\Omega^{1}_{C\times
S/S}\right)=\text{Ext}^{1}\left(\mathcal{O}_{C\times
S},\,\text{Ad}\,\mathcal{P}\otimes\Omega^{1}_{C\times S/S}\right)$. Define a
global section $v_{i}:\mathcal{O}_{C\times S}\longrightarrow(q_{i}\times
id_{S})_{*}\left(\text{Ad}\,\mathcal{P}|_{x_{i}\times S}\right)$ to be the
composition
$v_{i}:\;\mathcal{O}_{C\times S}\,\xrightarrow{unit}\,(q_{i}\times
id_{S})_{*}\,\mathcal{O}_{x_{i}\times S}\,\xrightarrow{(q_{i}\times
id_{S})_{*}s_{i}}\,(q_{i}\times
id_{S})_{*}\left(\text{Ad}\,\mathcal{P}|_{x_{i}\times S}\right)$
Let $\mathcal{F}$ be the $\mathcal{O}_{C\times S}$-sheaf given by the pullback
diagram
$\mathcal{F}\;\vcentcolon=\;\left(\text{Ad}\,\mathcal{P}\otimes\Omega^{1}_{C\times S/S}(D)\right)\times_{\bigoplus_{i\in I}(q_{i}\times id_{S})_{*}\left(\text{Ad}\,\mathcal{P}|_{x_{i}\times S}\right)}\mathcal{O}_{C\times S}$
where the two maps to the base of the fiber product are $u$ and $\left(\,-v_{i}\,\right)_{i}$.
This means that $\mathcal{F}$ is the kernel of the morphism
$\text{Ad}\,\mathcal{P}\otimes\Omega_{C\times
S/S}^{1}(D)\,\oplus\,\mathcal{O}_{C\times
S}\,\xrightarrow{\;\;\;u\,+\,\left(v_{i}\right)_{i}\;\;\;}\,\bigoplus_{i\in
I}\,(q_{i}\times id_{S})_{*}\left(\text{Ad}\,\mathcal{P}|_{x_{i}\times
S}\right)$
Here the maps $p_{l}$ for $l=1,2$ are the natural projections.
The morphism $(j,0):\text{Ad}\,\mathcal{P}\otimes\Omega_{C\times
S/S}^{1}\longrightarrow\text{Ad}\,\mathcal{P}\otimes\Omega_{C\times
S/S}^{1}(D)\,\oplus\,\mathcal{O}_{C\times S}$ factors through the subsheaf
$\mathcal{F}\subset\text{Ad}\,\mathcal{P}\otimes\Omega_{C\times
S/S}^{1}(D)\,\oplus\,\mathcal{O}_{C\times S}$. By construction
$(j,0):\text{Ad}\,\mathcal{P}\otimes\Omega_{C\times
S/S}^{1}\longrightarrow\mathcal{F}$ is a kernel for the morphism
$p_{2}:\mathcal{F}\longrightarrow\mathcal{O}_{C\times S}$. The extension
corresponding to the cohomology class $-\sum_{i\in I}\delta_{i}(s_{i})$ is
given by
$0\longrightarrow\,\text{Ad}\,\mathcal{P}\otimes\Omega_{C\times S/S}^{1}\,\xrightarrow{\;\;(j,0)\;\;}\,\mathcal{F}\,\xrightarrow{\;\;p_{2}\;\;}\,\mathcal{O}_{C\times S}\,\longrightarrow\,0$
We describe now the extension corresponding to the cohomology class
$\gamma_{\mathcal{P}}-\sum_{i\in I}\delta_{i}(s_{i})$. Suppose that the Atiyah
sequence for $\mathcal{P}$ is given by
$0\,\longrightarrow\,Ad\,\mathcal{P}\otimes\Omega^{1}_{C\times
S/S}\,\xrightarrow{\;\;\;\;b\;\;\;\;}\,At(\mathcal{P})\,\xrightarrow{\;\;\;\;p\;\;\;\;}\,\mathcal{O}_{C\times
S}\,\longrightarrow\,0$
Define $\mathcal{G}$ to be the kernel of the morphism
$p-p_{2}\,:\,\text{At}(\mathcal{P})\oplus\mathcal{F}\longrightarrow\mathcal{O}_{C\times
S}$. Consider the map
$(b,\,(j,0)\,):\text{Ad}\,\mathcal{P}\otimes\Omega_{C\times
S/S}^{1}\longrightarrow\text{At}(\mathcal{P})\oplus\mathcal{F}$. By
construction it factors through the subsheaf
$\mathcal{G}\subset\text{At}(\mathcal{P})\oplus\mathcal{F}$. Recall that in
the Yoneda group $\text{Ext}^{1}\left(\mathcal{O}_{C\times
S},\,\text{Ad}\,\mathcal{P}\otimes\Omega^{1}_{C\times S/S}\right)$ addition is
given by the Baer sum [Sta19, Tag 010I]. The Baer sum
$\gamma_{\mathcal{P}}-\sum_{i\in I}\delta_{i}(s_{i})$ is given by
$0\,\longrightarrow\,\text{Ad}\,\mathcal{P}\otimes\Omega_{C\times S/S}^{1}\,\xrightarrow{\;\;(b,\,0)\;\;}\,\mathcal{G}\,/\,\text{Im}(b,\,(j,0)\,)\,\xrightarrow{\;\;p\;\;}\,\mathcal{O}_{C\times S}\,\longrightarrow\,0$ (Diagram 6)
The proposition amounts to showing that the extension in Diagram 6 is
isomorphic to the logarithmic Atiyah sequence with prescribed residues in
Diagram 5. We will explicitly construct an isomorphism.
There is a canonical inclusion
$k:\text{At}(\mathcal{P})\hookrightarrow\text{At}(\mathcal{P})(D)$. Define the
map
$w:\text{At}(\mathcal{P})\,\oplus\,\text{Ad}\,\mathcal{P}\otimes\Omega_{C\times
S/S}^{1}(D)\longrightarrow\text{At}(\mathcal{P})(D)$ to be the composition
$\text{At}(\mathcal{P})\,\oplus\,\text{Ad}\,\mathcal{P}\otimes\Omega_{C\times S/S}^{1}(D)\,\xrightarrow{(k,\,-b(D))}\,\text{At}(\mathcal{P})(D)\oplus\text{At}(\mathcal{P})(D)\,\xrightarrow{\;\;+\;\;}\,\text{At}(\mathcal{P})(D)$
Let $a:\mathcal{G}\longrightarrow\text{At}(\mathcal{P})(D)$ denote the
composition
$\mathcal{G}\,\hookrightarrow\,\text{At}(\mathcal{P})\oplus\mathcal{F}\,\xrightarrow{\;\;id\oplus
p_{1}\;\;}\,\text{At}(\mathcal{P})\,\oplus\,\text{Ad}\,\mathcal{P}\otimes\Omega_{C\times
S/S}^{1}(D)\,\xrightarrow{\;\;\;w\;\;\;}\,\text{At}(\mathcal{P})(D)$
By construction $\text{Im}(b,\,(j,0)\,)$ is the kernel of $a$. We are left to
show the following two claims.
1. (C1)
The morphism $a$ factors through the subsheaf
$\text{At}^{D,s_{i}}(\mathcal{P})\subset\text{At}(\mathcal{P})(D)$.
2. (C2)
The induced map
$a:\mathcal{G}\,/\,\text{Im}(b,\,(j,0)\,)\,\longrightarrow\,\text{At}^{D,s_{i}}(\mathcal{P})$
yields an isomorphism of short exact sequences
$0\longrightarrow\text{Ad}\,\mathcal{P}\otimes\Omega^{1}_{C\times S/S}\,\xrightarrow{\;(b,0)\;}\,\mathcal{G}\,/\,\text{Im}(b,\,(j,0)\,)\,\xrightarrow{\;\;p\;\;}\,\mathcal{O}_{C\times S}\longrightarrow 0$
$0\longrightarrow\text{Ad}\,\mathcal{P}\otimes\Omega^{1}_{C\times S/S}\longrightarrow\text{At}^{D,s_{i}}(\mathcal{P})\longrightarrow\mathcal{O}_{C\times S}\longrightarrow 0$
Here the outer vertical maps are the identities and the middle vertical map is induced by $a$.
These two claims can be checked at the level of stalks. Let $x$ be a
topological point of $C\times S$.
Suppose first that $x$ does not belong to any of the divisors $x_{i}\times S$
for $i\in I$. Then, the stalk at $x$ of the logarithmic Atiyah sequence in
Diagram 5 becomes canonically isomorphic to the stalk of the regular Atiyah
sequence. Similarly, the stalk at $x$ of Diagram 6 becomes canonically
isomorphic to the regular Atiyah sequence. This is because all of the
operations we have performed above are trivial outside of the divisors
$x_{i}\times S$. Under these identifications, the induced morphism on stalks
$a_{x}$ in (C2) becomes the identity. This concludes the proof when $x$ is not
contained in any of the divisors $x_{i}\times S$.
Assume now that $x$ is contained in $x_{i}\times S$ for some $i\in I$. The
stalks at $x$ of the short exact sequences we have considered are just
extensions of finite free modules over the local ring $\mathcal{O}_{C\times
S,x}$. It follows that all of these extensions split. Choose a splitting of
the $x$-stalk of the regular Atiyah sequence
$0\longrightarrow\left(\text{Ad}\,\mathcal{P}\otimes\Omega^{1}_{C\times S/S}\right)_{x}\,\xrightarrow{\;b_{x}\;}\,At(\mathcal{P})_{x}\,\xrightarrow{\;p_{x}\;}\,\mathcal{O}_{C\times S,x}\longrightarrow 0$
$0\longrightarrow\left(\text{Ad}\,\mathcal{P}\otimes\Omega^{1}_{C\times S/S}\right)_{x}\longrightarrow\left(\text{Ad}\,\mathcal{P}\otimes\Omega^{1}_{C\times S/S}\right)_{x}\oplus\mathcal{O}_{C\times S,x}\longrightarrow\mathcal{O}_{C\times S,x}\longrightarrow 0$
Here the outer vertical maps are the identities and the middle vertical map is an isomorphism.
Such a splitting induces an identification
$\text{At}^{D}(\mathcal{P})(D)_{x}\cong\left(\text{Ad}\,\mathcal{P}\otimes\Omega^{1}_{C\times
S/S}(D)\right)_{x}\oplus\mathcal{O}_{C\times S,x}$. Furthermore, we can choose
the splitting so that this identification is compatible with Diagram 2 after
base-changing to the residue field of $x$. The residue isomorphism
$\Omega^{1}_{C/k}(D)|_{\mathcal{O}_{C,x_{i}}}\cong\mathcal{O}_{C,x_{i}}$
induces by base-change an isomorphism $\Omega^{1}_{C\times
S/S}(D)_{x}\cong\mathcal{O}_{C\times S,x}$. We can use this to identify the
stalks
$\text{At}(\mathcal{P})(D)_{x}\cong\text{Ad}(\mathcal{P})_{x}\oplus\mathcal{O}(D)_{x}$.
Let $\overline{s}_{i}\in\text{Ad}(\mathcal{P})_{\kappa(x)}$ be the restriction
of the section $s_{i}$ to the residue field of $x$. The submodule
$\text{At}^{D,s_{i}}(\mathcal{P})_{x}\subset\text{At}(\mathcal{P})(D)_{x}$
corresponds to
$\displaystyle\text{At}^{D,s_{i}}(\mathcal{P})_{x}=\left\\{\,(c,r)\in\text{Ad}(\mathcal{P})_{x}\oplus\mathcal{O}_{C\times
S,x}\;\mid\;c_{\kappa(x)}=r_{\kappa(x)}\,\overline{s}_{i}\,\right\\}\;\subset\;\text{Ad}(\mathcal{P})_{x}\oplus\mathcal{O}(D)_{x}$
Diagram 5 is identified with
$0\,\longrightarrow\text{Ad}\,\mathcal{P}(-D)_{x}\,\xrightarrow{\;\;\;\left(\,k(-D)_{x},\,0\,\right)\;\;\;}\,\text{At}^{D,s_{i}}(\mathcal{P})_{x}\,\xrightarrow{\;\;\;\;\text{pr}_{2}\;\;\;\;}\mathcal{O}_{C\times
S,x}\,\longrightarrow\,0$
By the construction of $\mathcal{F}$, the canonical isomorphism
$\Omega^{1}_{C\times S/S}(D)_{x}\cong\mathcal{O}_{C\times S,x}$ induces an
identification
$\mathcal{F}_{x}=\left\\{\,(c,r)\in\text{Ad}(\mathcal{P})_{x}\oplus\mathcal{O}_{C\times
S,x}\;\mid\;c_{\kappa(x)}=-r_{\kappa(x)}\,\overline{s}_{i}\,\right\\}$
The submodule
$\mathcal{G}_{x}\subset\text{At}(\mathcal{P})_{x}\oplus\mathcal{F}_{x}$ is
given by
$\displaystyle\mathcal{G}_{x}=\left\\{\,(c,r,d,r)\in\text{Ad}(\mathcal{P})(-D)_{x}\oplus\mathcal{O}_{C\times S,x}\oplus\text{Ad}(\mathcal{P})_{x}\oplus\mathcal{O}_{C\times S,x}\;\mid\;d_{\kappa(x)}=-r_{\kappa(x)}\,\overline{s}_{i}\,\right\\}$
The map $a_{x}:\mathcal{G}_{x}\longrightarrow\text{At}(\mathcal{P})(D)_{x}$
corresponds under the identification
$\text{At}(\mathcal{P})(D)_{x}\cong\text{Ad}(\mathcal{P})_{x}\oplus\mathcal{O}_{C\times
S,x}$ to the composition
$\displaystyle
a_{x}:\mathcal{G}_{x}\,\hookrightarrow\,\text{Ad}(\mathcal{P})(-D)_{x}\oplus\mathcal{O}_{C\times
S,x}\oplus\text{Ad}(\mathcal{P})_{x}\oplus\mathcal{O}_{C\times
S,x}\xrightarrow{\left(\,k(-D)_{x}\circ\text{pr}_{1}-\text{pr}_{3},\,\text{pr}_{2}\right)}\,\text{Ad}(\mathcal{P})_{x}\oplus\mathcal{O}_{C\times
S,x}$
In other words, $a_{x}(c,r,d,r)=(c-d,\,r)$. This implies that the image of
$a_{x}$ is the submodule $\text{At}^{D,s_{i}}(\mathcal{P})_{x}$. Part (C1) of
the claim follows. On the other hand the commutativity of the induced diagram
$0\longrightarrow\text{Ad}(\mathcal{P})(-D)_{x}\,\xrightarrow{\;(\text{id},\,0,0,0)\;}\,\mathcal{G}_{x}\,/\,\text{Im}(b,\,(j,0)\,)_{x}\,\xrightarrow{\;p_{x}\;}\,\mathcal{O}_{x}\longrightarrow 0$
$0\longrightarrow\text{Ad}(\mathcal{P})(-D)_{x}\,\xrightarrow{\;(\,k(-D)_{x},\,0\,)\;}\,\text{At}^{D,s_{i}}(\mathcal{P})_{x}\longrightarrow\mathcal{O}_{x}\longrightarrow 0$
Here the outer vertical maps are the identities and the middle vertical map is the isomorphism induced by $a_{x}$.
follows plainly from the concrete description of $a_{x}$ we have given. This
concludes the proof of the claims. ∎
###### Remark 4.3.
Using the notation of [BDP18][Prop. 3.1] when $G=\text{GL}_{n}$, we have
$\gamma_{\mathcal{P}}=-\phi^{0}_{E}$, $s_{i}=A(x_{i})$ and
$\delta_{i}=\gamma_{x_{i}}$.
###### Example 4.4.
Set $S=\text{Spec}\,k$ and $G=\mathbb{G}_{m}$. Let $\mathcal{L}$ be a line
bundle on $C$. The identifications explained in Example 3.2 induce canonical
isomorphisms
$H^{1}\left(C,\,Ad\,\mathcal{L}\otimes\Omega_{C/k}^{1}\right)\cong k$ and
$H^{0}\left(x_{i},\,Ad\,\mathcal{L}|_{x_{i}}\right)\cong k$. Under these
identifications we have that $\delta_{i}$ is the identity for all $i\in I$.
Lemma 3.3 and Proposition 4.2 imply that
$\gamma_{\mathcal{L}}^{D,s_{i}}=-\text{deg}\,\mathcal{L}-\sum_{i\in I}s_{i}$.
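In particular, a line bundle $\mathcal{L}$ admits a logarithmic connection with prescribed residues $s_{i}$ if and only if
$\sum_{i\in I}s_{i}=-\text{deg}\,\mathcal{L},$
the rank-one case of the residue formula that reappears for higher rank in Example 4.8 below.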
### 4.2 Indecomposable example over elliptic curves
Let $C$ be an elliptic curve over a field $k$ of characteristic not equal to
$2$. Let $x\in C$ be a $k$-point, and set $D=x$. We will work with $G=SL_{2}$.
Since the $k$-vector space $H^{1}(C,\mathcal{O}_{C})$ is one-dimensional,
there exists a unique nonsplit extension of vector bundles
$0\to\mathcal{O}_{C}\to\mathcal{E}\to\mathcal{O}_{C}\to 0$. Here $\mathcal{E}$
is a vector bundle of rank $2$, which we can think of as a $GL_{2}$-bundle. It
follows from [Ati57b][Lemma 16] that $\mathcal{E}$ is indecomposable as a
vector bundle.
There is a canonical trivialization of the determinant
$\text{det}(\mathcal{E})\xrightarrow{\sim}\text{det}(\mathcal{O}_{C})\otimes\text{det}(\mathcal{O}_{C})\xrightarrow{\sim}\mathcal{O}_{C}$.
This yields a reduction of structure group to $SL_{2}$. We denote the
corresponding $SL_{2}$-bundle by $\mathcal{P}$.
###### Lemma 4.5.
Suppose that the characteristic of $k$ is $0$.
1. (i)
$\mathcal{P}$ is $L$-indecomposable in the sense of [BBN05][Def. 2.1].
2. (ii)
Let $\text{Aut}(\mathcal{P})$ denote the automorphism group of $\mathcal{P}$
(as defined in [BBN05]). Then all tori contained in $\text{Aut}(\mathcal{P})$
are trivial.
3. (iii)
The Atiyah class $\gamma_{\mathcal{P}}$ is $0$.
###### Proof.
1. (i)
We use [BBN05][Prop. 2.4]. We have to check that $\mathcal{P}$ does not admit
a reduction of structure group to any proper Levi subgroup of $SL_{2}$. Such a
Levi reduction would induce a nontrivial direct sum decomposition of the
vector bundle $\mathcal{E}$, contradicting the fact that $\mathcal{E}$ is
indecomposable.
2. (ii)
Since the center $Z_{G}$ of $SL_{2}$ is finite, [BBN05][Defn. 2.1] implies
that any torus $T\subset\text{Aut}(\mathcal{P})$ is trivial.
3. (iii)
It follows from [BBN05][Remark 3.6] that the Atiyah class
$\gamma_{\mathcal{P}}$ vanishes.
∎
By definition, the extension bundle
$0\to\mathcal{O}_{C}\to\mathcal{E}\to\mathcal{O}_{C}\to 0$ comes from a
$\mathbb{G}_{a}$-bundle. The Cech cocycle of $\mathcal{E}$ is obtained by
choosing local splittings of the extension. This shows that the bundle
$\mathcal{P}$ admits a reduction of structure group to the unipotent radical
$U=\mathbb{G}_{a}$ of the Borel group of upper triangular matrices inside
$SL_{2}$.
Restrict the adjoint representation $\text{Ad}:SL_{2}\to
GL(\mathfrak{sl}_{2})$ to the subgroup $U$. As a $U$-representation,
$\mathfrak{sl}_{2}$ admits a filtration
$0\subset\mathfrak{u}\subset\mathfrak{u}\oplus\mathfrak{t}\subset\mathfrak{sl}_{2}$
where $\mathfrak{u}$ is the Lie algebra of $U$ and $\mathfrak{t}$ is the toral
Lie algebra of diagonal matrices inside $\mathfrak{sl}_{2}$. Note that $U$
acts trivially on $\mathfrak{u}$, because $U$ is commutative. Therefore the
associated line bundle $\mathcal{P}\times^{U}\mathfrak{u}$ is trivial. On the
other hand, a matrix computation shows that the $U$-representations
$\mathfrak{u}\oplus\mathfrak{t}$ and $\mathfrak{sl}_{2}/\mathfrak{u}$ are
isomorphic to the composition of the inclusion of $U\hookrightarrow SL_{2}$
and the standard representation $SL_{2}\to GL_{2}$ (here we are using the fact
that the characteristic of $k$ is not $2$). It follows that
$\mathcal{P}\times^{U}(\mathfrak{u}\oplus\mathfrak{t})\cong\mathcal{E}$ and
$\mathcal{P}\times^{U}(\mathfrak{sl}_{2}/\mathfrak{u})\cong\mathcal{E}$.
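For concreteness, here is the matrix computation in the standard basis $e,h,f$ of $\mathfrak{sl}_{2}$, with $\mathfrak{u}=k\cdot e$ and $u_{a}=\left(\begin{smallmatrix}1&a\\ 0&1\end{smallmatrix}\right)\in U$:
$\text{Ad}(u_{a})\,e=e,\qquad\text{Ad}(u_{a})\,h=h-2a\,e,\qquad\text{Ad}(u_{a})\,f=f+a\,h-a^{2}\,e.$
Thus $U$ acts on $\mathfrak{u}\oplus\mathfrak{t}$ in the basis $(e,h)$ through the matrices $\left(\begin{smallmatrix}1&-2a\\ 0&1\end{smallmatrix}\right)$, which become the standard matrices $\left(\begin{smallmatrix}1&a\\ 0&1\end{smallmatrix}\right)$ after replacing $e$ by $-2e$; this change of basis is where the hypothesis $\text{char}\,k\neq 2$ enters. The quotient $\mathfrak{sl}_{2}/\mathfrak{u}$ is handled by the same computation, using the images of $h$ and $f$.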
Therefore we have an induced filtration of $\text{Ad}(\mathcal{P})$
$0\subset\mathcal{O}_{C}\subset\mathcal{E}\subset\text{Ad}(\mathcal{P})$
with $\text{Ad}(\mathcal{P})/\mathcal{O}_{C}\cong\mathcal{E}$ and
$\text{Ad}(\mathcal{P})/\mathcal{E}\cong\mathcal{O}_{C}$. The restriction of
the filtration to $x$ yields a full flag of the $3$-dimensional fiber
$\text{Ad}(\mathcal{P})|_{x}$ by vector subspaces
$0\subset V_{1}\subset V_{2}\subset\text{Ad}(\mathcal{P})|_{x}$
Let $\delta:H^{0}(C,\text{Ad}(\mathcal{P})|_{x})\to
H^{1}(C,\text{Ad}(\mathcal{P})\otimes\Omega_{C}^{1})$ denote the boundary
morphism for the short exact sequence
$0\to\text{Ad}(\mathcal{P})\otimes\Omega_{C}^{1}\to\text{Ad}(\mathcal{P})\otimes\Omega_{C}^{1}(x)\to\text{Ad}(\mathcal{P})|_{x}\to
0$
as defined before Proposition 4.2. The following lemma gives an explicit
description of $\delta$.
###### Lemma 4.6.
With notation as above, we have:
1. (i)
The $k$-vector space of obstructions
$H^{1}(C,\text{Ad}(\mathcal{P})\otimes\Omega^{1})$ has dimension $1$.
2. (ii)
The boundary morphism $\delta:H^{0}(C,\text{Ad}(\mathcal{P})|_{x})\to
H^{1}(C,\text{Ad}(\mathcal{P})\otimes\Omega^{1})$ is surjective.
3. (iii)
$V_{2}$ is the kernel of $\delta$.
###### Proof.
Consider the exact sequence in cohomology
$\displaystyle H^{0}(C,\text{Ad}(\mathcal{P})\otimes\Omega^{1}_{C}(x))\to
H^{0}(C,\text{Ad}(\mathcal{P})|_{x})\xrightarrow{\delta}H^{1}(C,\text{Ad}(\mathcal{P})\otimes\Omega^{1}_{C})\to
H^{1}(C,\text{Ad}(\mathcal{P})\otimes\Omega^{1}_{C}(x))$
Since $C$ is an elliptic curve, we have $\Omega^{1}_{C}\cong\mathcal{O}_{C}$.
Using this identification plus an application of Serre duality yields the
following exact sequence
$\displaystyle H^{0}(C,\text{Ad}(\mathcal{P})(x))\to
H^{0}(C,\text{Ad}(\mathcal{P})|_{x})\xrightarrow{\delta}H^{0}(C,\text{Ad}(\mathcal{P})^{\vee})^{*}\to
H^{0}(C,\text{Ad}(\mathcal{P})^{\vee}(-x))^{*}$
Since the characteristic of $k$ is not $2$, the trace form on
$\mathfrak{sl}_{2}$ is nondegenerate. This shows that the adjoint
representation is self-dual, and hence
$\text{Ad}(\mathcal{P})\cong\text{Ad}(\mathcal{P})^{\vee}$. Therefore we can
rewrite
$H^{0}(C,\text{Ad}(\mathcal{P})(x))\to
H^{0}(C,\text{Ad}(\mathcal{P})|_{x})\xrightarrow{\delta}H^{0}(C,\text{Ad}(\mathcal{P}))^{*}\to
H^{0}(C,\text{Ad}(\mathcal{P})(-x))^{*}$ (1)
We now proceed with the proof of the lemma.
1. (i)
The discussion above implies that
$\text{dim}\,H^{1}(C,\text{Ad}(\mathcal{P})\otimes\Omega_{C}^{1})=\text{dim}\,H^{0}(C,\text{Ad}(\mathcal{P}))$.
We know that we have a filtration
$0\subset\mathcal{O}_{C}\subset\mathcal{E}\subset\text{Ad}(\mathcal{P})$ with
$\text{Ad}(\mathcal{P})/\mathcal{O}_{C}\cong\mathcal{E}$ and
$\text{Ad}(\mathcal{P})/\mathcal{E}\cong\mathcal{O}_{C}$. This can be used to
prove that
$0\to\mathcal{O}_{C}\to\text{Ad}(\mathcal{P})\to\mathcal{E}\to 0$
is the unique indecomposable extension of $\mathcal{E}$ by $\mathcal{O}_{C}$
as in [Ati57b][Lemma 16]. It follows by direct computation (or [Ati57b][Lemma
15(i)]) that $\text{dim}\,H^{0}(C,\text{Ad}(\mathcal{P}))=1$.
2. (ii)
We show that $H^{0}(C,\text{Ad}(\mathcal{P})(-x))=0$ in the exact sequence
(1). Equivalently, we need to prove that there are no nontrivial morphisms
$\mathcal{O}_{C}\to\text{Ad}(\mathcal{P})(-x)$. The subquotients of the
filtration
$0\subset\mathcal{O}_{C}\subset\mathcal{E}\subset\text{Ad}(\mathcal{P})$ are
all isomorphic to the (stable) line bundle $\mathcal{O}_{C}$ of slope $\mu=0$.
It follows that $\text{Ad}(\mathcal{P})$ is semistable of slope $0$. Therefore
$\text{Ad}(\mathcal{P})(-x)$ is a semistable vector bundle of negative slope
$\mu=-1$. A standard argument now shows that there are no nontrivial morphisms $\mathcal{O}_{C}\to\text{Ad}(\mathcal{P})(-x)$ from the semistable bundle
$\mathcal{O}_{C}$ of slope $0$ to the semistable bundle
$\text{Ad}(\mathcal{P})(-x)$ of negative slope (see e.g. [HL97][Lemma 1.3.3]).
3. (iii)
By the exact sequence (1), we need to prove that the image of the following
composition is $V_{2}$.
$H^{0}(C,\text{Ad}(\mathcal{P})(x))\to
H^{0}(C,\text{Ad}(\mathcal{P})(x)|_{x})\xrightarrow{\sim}H^{0}(C,\text{Ad}(\mathcal{P})|_{x})$
By dimension count, it suffices to show that
$\text{Im}(H^{0}(C,\text{Ad}(\mathcal{P})(x)))\subset V_{2}$. In other words,
we want to show that the following composition is trivial.
$\displaystyle H^{0}(C,\text{Ad}(\mathcal{P})(x))\to
H^{0}(C,\text{Ad}(\mathcal{P})(x)|_{x})\to
H^{0}(C,(\text{Ad}(\mathcal{P})/\mathcal{E})(x)|_{x})\xrightarrow{\sim}H^{0}(C,\text{Ad}(\mathcal{P})|_{x})/V_{2}$
Choose a global section $f$ in $H^{0}(C,\text{Ad}(\mathcal{P})(x))$. We need
to prove that the image of $f$ in $(\text{Ad}(\mathcal{P})/\mathcal{E})(x)|_{x}$ is $0$. Reducing modulo $\mathcal{E}$, $f$ yields a global section $\overline{f}$ of the quotient
$(\text{Ad}(\mathcal{P})/\mathcal{E})(x)\cong\mathcal{O}_{C}(x)$. A Riemann-
Roch computation shows that
$\text{dim}\,H^{0}(C,\mathcal{O}_{C}(x))=\text{dim}\,H^{0}(C,\mathcal{O}_{C})=1$.
This implies that the global section $\overline{f}$ must necessarily come from
the subsheaf $\text{Ad}(\mathcal{P})/\mathcal{E}\cong\mathcal{O}_{C}$ of
$(\text{Ad}(\mathcal{P})/\mathcal{E})(x)$. Therefore $\overline{f}$ becomes
$0$ when restricted to the fiber
$(\text{Ad}(\mathcal{P})/\mathcal{E})(x)|_{x}$, as desired.
∎
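For completeness, the Riemann-Roch computation invoked in part (iii) reads as follows. On the elliptic curve $C$ we have $g=1$ and $\Omega^{1}_{C}\cong\mathcal{O}_{C}$, so
$h^{0}(\mathcal{O}_{C}(x))-h^{1}(\mathcal{O}_{C}(x))=\text{deg}\,\mathcal{O}_{C}(x)+1-g=1$
while Serre duality gives $h^{1}(\mathcal{O}_{C}(x))=h^{0}(\mathcal{O}_{C}(-x))=0$. Hence $h^{0}(\mathcal{O}_{C}(x))=1=h^{0}(\mathcal{O}_{C})$.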
Now we use our previous discussion to characterize the residues that can arise
from a logarithmic connection on $\mathcal{P}$.
###### Proposition 4.7.
Suppose that the characteristic of $k$ is $0$. The $SL_{2}$-bundle
$\mathcal{P}$ admits a logarithmic connection with residue
$s\in\text{Ad}(\mathcal{P})|_{x}$ at $x$ if and only if $s$ belongs to the
vector subspace $V_{2}\subset\text{Ad}(\mathcal{P})|_{x}$.
###### Proof.
Let $s\in H^{0}(C,\text{Ad}(\mathcal{P})|_{x})$. Recall from Diagram 5 that
the $SL_{2}$-bundle $\mathcal{P}$ admits a logarithmic connection with residue
$s$ if and only if the cohomological obstruction $\gamma_{\mathcal{P}}^{D,s}$
vanishes. By Proposition 4.2, we have
$\gamma_{\mathcal{P}}^{D,s}=\gamma_{\mathcal{P}}-\delta(s)$. Lemma 4.5(iii)
shows that $\gamma_{\mathcal{P}}=0$. Hence
$\gamma_{\mathcal{P}}^{D,s}=-\delta(s)$. We now conclude by Lemma 4.6, which
says that $\delta(s)=0$ if and only if $s\in V_{2}$. ∎
We consider now the setup of [BDPS17][§1]. On [BDPS17][pg. 2] we have a
maximal torus $T\subset\text{Aut}(\mathcal{P})$ and an associated Levi
subgroup $H\subset G$. In our example of $SL_{2}$-bundle $\mathcal{P}$, Lemma
4.5(i) implies that $H=SL_{2}$ (the identity reduction to $SL_{2}$ satisfies
both conditions at the end of page 12 in [BDPS17]). Furthermore, it follows
from Lemma 4.5(ii) that $T\subset\text{Aut}(\mathcal{P})$ is trivial. In
particular every residue $s\in\text{Ad}(\mathcal{P})|_{x}$ is $T$-rigid.
Choose a residue $s\in\text{Ad}(\mathcal{P})|_{x}$ that does not lie in the
$2$-dimensional vector subspace $V_{2}$. Then [BDPS17][Thm. 1.1] is false for
the $SL_{2}$-bundle $\mathcal{P}$ and the residue $w_{x}=s$. Indeed, property
(1) in [BDPS17][Thm. 1.1] is not satisfied: by Proposition 4.7 the bundle
$\mathcal{P}$ does not admit a logarithmic connection with residue $s$ at $x$.
On the other hand property (2) of [BDPS17][Thm. 1.1] is satisfied. This is
because the only character $\chi$ of $H=SL_{2}$ is the trivial character,
which satisfies $d\chi=0$ and
$\text{deg}\,E_{H}(\chi)=\text{deg}\,\mathcal{O}_{C}=0$. This contradicts [BDPS17][Thm. 1.1], which asserts that properties (1) and (2) are equivalent. [BDPS17][Thm. 1.2] does not hold in this case either, for the same reason.
The above error originates from the use of [BDPS17][Cor. 3.1] during the
second half of the proof of [BDPS17][Prop. 5.1]. Let $S$ be the quotient
$H/T_{G}$ of $H$ by its maximal central torus $T_{G}$, and let $E_{S}$ be the
$S$-bundle obtained from $E_{H}$ via the morphism $H\to S$. The reduction of
structure group $E_{P}\subset E_{S}$ constructed in page 16 of [BDPS17] need
not be compatible with the residues $w_{x}^{S}$ obtained from $w_{x}$ under
the induced morphism of adjoint bundles $\text{Ad}(E_{H})\to\text{Ad}(E_{S})$.
However, the application of [BDPS17][Cor. 3.1] requires compatibility of the
reduction with the residues.
One way to repair [BDPS17][Thm. 1.1, 1.2] is to impose the above condition
that $w_{x_{i}}^{S}\in\text{Ad}(E_{P})|_{x_{i}}$ for all $i\in I$. Indeed the
proof of [BDPS17][Prop. 5.1] then goes through. As an example, consider our
$SL_{2}$-bundle $\mathcal{P}$ over an elliptic curve $C$. In this case we have
$S=SL_{2}$. The space of sections $H^{0}(C,\text{Ad}(\mathcal{P}))$ is one-
dimensional, and hence there is a unique nonzero adjoint section up to
scaling. This section is given by the inclusion
$\mathcal{O}_{C}\subset\text{Ad}(\mathcal{P})$ in the first step of the
filtration described above, so it does not vanish anywhere. The corresponding
parabolic subgroup $P$ constructed in [BDPS17][pg. 16] is a Borel subgroup of
$S$. The Borel reduction $E_{P}$ comes from the filtration
$0\subset\mathcal{O}_{C}\subset\mathcal{E}$, and the subspace
$\text{Ad}(E_{P})|_{x}$ of compatible residues coincides with the two-
dimensional subspace $V_{2}\subset\text{Ad}(\mathcal{P})|_{x}$. This is
consistent with our Proposition 4.7.
Another possible way to repair [BDPS17][Thm. 1.1, 1.2] would be to impose the
additional requirement that appears in [BDPS17][pg. 15] that $\beta=0$. In our
notation, this translates to the condition that
$\gamma_{E_{S}}^{D,w^{S}_{x_{i}}}=-\sum_{i\in I}\delta_{i}(w^{S}_{x_{i}})$ is
$0$ in $H^{1}(C,\text{Ad}(E_{S})\otimes\Omega_{C}^{1})$. Note that the bundle
$E_{S}$ is $L$-indecomposable by [BBN05][Thm. 3.2], and hence
$\gamma_{E_{S}}=0$.
### 4.3 Functoriality for logarithmic $G$-connections
Logarithmic connections enjoy the same functoriality properties as regular
connections. Let $f:T\longrightarrow S$ be a morphism of $k$-schemes. Let
$\mathcal{P}$ be a $G$-bundle on $C\times S$. The same reasoning as for
regular connections shows that the logarithmic Atiyah class
$\gamma^{D}_{\mathcal{P}}$ behaves well under pullback. Explicitly,
$\gamma^{D}_{(Id_{C}\times f)^{*}\mathcal{P}}=(Id_{C}\times
f)^{*}\gamma^{D}_{\mathcal{P}}$. Here we are abusing notation and writing
$(Id_{C}\times f)^{*}$ on the right-hand side to denote the natural map on the
corresponding sheaf cohomology groups. It follows that the logarithmic Atiyah
sequence for $(Id_{C}\times f)^{*}\mathcal{P}$ is the $(Id_{C}\times
f)$-pullback of the Atiyah sequence for $\mathcal{P}$. We can therefore
pullback logarithmic connections by interpreting them as splittings of the
logarithmic Atiyah sequence. Let $\theta$ be a logarithmic $G$-connection on
$\mathcal{P}$ with poles on $D$. We denote by $f^{*}\theta$ the corresponding
logarithmic connection on $(Id_{C}\times f)^{*}\mathcal{P}$. This construction
is compatible with taking residues. This implies that the obstruction classes
$\gamma^{D,s_{i}}_{\mathcal{P}}$ and the corresponding Atiyah sequences with
prescribed residues are compatible with pullback. We leave the precise
formulation of these statements to the interested reader.
Logarithmic connections have covariant functoriality with respect to morphisms
of structure groups. Fix a $k$-scheme $S$. Let $H$ be a linear algebraic group
over $k$ with Lie algebra $\mathfrak{h}$. Let $\varphi:G\longrightarrow H$ be
a homomorphism of algebraic groups. Let $\mathcal{P}$ be a $G$-bundle over
$C\times S$. Fix a set of sections $s_{i}\in H^{0}\left(x_{i}\times
S,\,Ad\,\mathcal{P}|_{x_{i}\times S}\right)$. Recall that there is a naturally
defined morphism of vector bundles $Ad\,\varphi:Ad\,\mathcal{P}\longrightarrow
Ad\,\varphi_{*}\mathcal{P}$. Set
$\varphi_{*}s_{i}\vcentcolon=Ad\,\varphi|_{x_{i}\times S}\,(s_{i})\in H^{0}\left(x_{i}\times S,\,Ad\,\varphi_{*}\mathcal{P}|_{x_{i}\times S}\right)$. The
description of $\gamma_{\mathcal{P}}^{D,s_{i}}$ in Proposition 4.2 and the
discussion in Subsection 3.2 imply that
$\gamma_{\varphi_{*}\mathcal{P}}^{D,\,\varphi_{*}s_{i}}=\varphi_{*}^{1}(\gamma_{\mathcal{P}}^{D,s_{i}})$.
Here recall that
$\varphi^{1}_{*}:H^{1}\left(C,\,Ad\,\mathcal{P}\otimes\Omega^{1}_{C\times
S/S}\right)\longrightarrow
H^{1}\left(C,\,Ad\,\varphi_{*}\mathcal{P}\otimes\Omega^{1}_{C\times
S/S}\right)$ is the map on cohomology groups induced by $Ad\,\varphi$. The
concrete description of $\varphi_{*}^{1}(\gamma_{\mathcal{P}}^{D,s_{i}})$ in
terms of pushouts [Sta19, Tag 010I] shows that there is a commutative diagram
of Atiyah sequences with prescribed residues
$0\longrightarrow\text{Ad}\,\mathcal{P}\otimes\Omega^{1}_{C\times S/S}\longrightarrow At^{D,s_{i}}(\mathcal{P})\longrightarrow\mathcal{O}_{C\times S}\longrightarrow 0$
$0\longrightarrow\text{Ad}\,\varphi_{*}\mathcal{P}\otimes\Omega^{1}_{C\times S/S}\longrightarrow At^{D,\,\varphi_{*}s_{i}}(\varphi_{*}\mathcal{P})\longrightarrow\mathcal{O}_{C\times S}\longrightarrow 0$
Here the left vertical map is $Ad\,\varphi\otimes id$, the middle vertical map is $At^{D,\,s_{i}}(\varphi)$, and the right vertical map is the identity.
We denote by $\text{At}^{D,\,s_{i}}(\varphi)$ the middle vertical arrow in the
diagram above. Let $\theta$ be a logarithmic $G$-connection on $\mathcal{P}$
with residue $s_{i}$ at each $x_{i}$, viewed as a splitting of the top row. We
compose it with $At^{D,\,s_{i}}(\varphi)$ in order to define a logarithmic
$H$-connection
$\varphi_{*}\theta\vcentcolon=At^{D,\,s_{i}}(\varphi)\circ\theta$ on the
$H$-bundle $\varphi_{*}\mathcal{P}$ with residue $\varphi_{*}s_{i}$ at
$x_{i}$.
###### Example 4.8.
Let $S=\text{Spec}\,k$ and $G=\text{GL}_{n}$. Consider the determinant
character $\text{det}:\text{GL}_{n}\longrightarrow\mathbb{G}_{m}$. The
corresponding Lie algebra map
$\text{Lie}(\text{det}):\mathfrak{gl}_{n}\longrightarrow\mathfrak{gl}_{1}$ is
given by taking the trace of a matrix. Let $\mathcal{E}$ be a vector bundle of
rank $n$ on $C$, viewed as a $\text{GL}_{n}$-bundle. Fix an endomorphism $s_{i}\in\text{End}(\mathcal{E}|_{x_{i}})$ for each $i\in I$. The corresponding element $\text{det}_{*}s_{i}\in\mathfrak{gl}_{1}$ is given by the trace $\text{tr}\,s_{i}$. Suppose that $\mathcal{E}$ admits a logarithmic
connection $\theta$ with residue $s_{i}$ at $x_{i}$. By functoriality, there
is an induced connection $\text{det}_{*}\,\theta$ on $\text{det}\,\mathcal{E}$
with residue $\text{tr}\,s_{i}$ at $x_{i}$. Therefore the Atiyah class with
prescribed residues $\gamma_{\text{det}\,\mathcal{E}}^{D,\,\text{tr}\,s_{i}}$
must vanish. It follows from Example 4.4 that
$\text{deg}\,\mathcal{E}=-\sum_{i\in I}\text{tr}\,s_{i}$.
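As a quick sanity check of the sign conventions (an immediate specialization, with a single puncture $D=x_{1}$ assumed only for illustration): a vector bundle $\mathcal{E}$ of degree $0$ can only carry a logarithmic connection with poles at $x_{1}$ whose residue has trace $0$. The same constraint on traces of residues reappears as condition (a) in the proof of Theorem 4.16 below.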
### 4.4 Stacks of logarithmic $G$-connections
We describe the stack of logarithmic $G$-connections with a fixed set of
poles. Let $D$ be a reduced divisor $D=\sum_{i\in I}x_{i}$, where $x_{i}\in
C(k)$.
###### Definition 4.9.
The moduli stack $\text{Conn}_{G}^{D}(C)$ is the pseudofunctor from
$k$-schemes $S$ to groupoids given as follows. For every $k$-scheme $S$, we
define
$\text{Conn}_{G}^{D}(C)\,(S)\;\vcentcolon=\;\left\{\,\text{the groupoid of pairs $(\mathcal{P},\theta)$ of a $G$-torsor $\mathcal{P}$ over $C\times S$ together with a logarithmic $G$-connection $\theta$ on $\mathcal{P}$ with poles at $D$}\,\right\}$
We will want to keep track of the residues of the logarithmic connection. In
order to do this we define another stack over $\text{Bun}_{G}(C)$.
For every $G$-bundle $\mathcal{P}$ on a $k$-scheme $S$, we can form the
adjoint bundle of $Ad\,\mathcal{P}$. This construction allows us to define a
vector bundle $\mathcal{V}$ over $\text{B}G$. The pullback of $\mathcal{V}$
under a map $\mathcal{P}:S\rightarrow\text{B}G$ is the associated bundle
$Ad\,\mathcal{P}$ on $S$. The total space of the vector bundle is represented
by the map of stacks
$\pi:\left[G\,\backslash\,\mathfrak{g}\right]\longrightarrow\left[G\,\backslash\,\text{Spec}\,k\right]=\text{B}G$
Here the action in the left-most quotient stack is given by the adjoint
representation.
For each $i\in I$, there is a morphism of algebraic stacks
$\psi_{i}:\text{Bun}_{G}(C)\longrightarrow\text{B}G$ defined by taking the
fiber at $x_{i}$. More precisely, for every $k$-scheme $S$ and every
$G$-bundle $\mathcal{P}$ over $C\times S$, set
$\psi_{i}(\mathcal{P})=\mathcal{P}|_{x_{i}\times S}$.
###### Definition 4.10.
The stack $\text{Bun}_{G}^{Ad,\,D}(C)$ is defined to be the fiber product
$\text{Bun}_{G}^{Ad,\,D}(C)\;\vcentcolon=\;\text{Bun}_{G}(C)\,\times_{\prod_{i\in I}\text{B}G}\,\prod_{i\in I}\left[G\,\backslash\,\mathfrak{g}\right]$
where the map $\text{Bun}_{G}(C)\rightarrow\prod_{i\in I}\text{B}G$ is $\prod_{i\in I}\psi_{i}$ and the map $\prod_{i\in I}\left[G\,\backslash\,\mathfrak{g}\right]\rightarrow\prod_{i\in I}\text{B}G$ is $\prod_{i\in I}\pi$.
The definition implies that $\text{Bun}_{G}^{Ad,\,D}(C)$ is an algebraic stack
that is locally of finite type over $k$. For each $k$-scheme $S$, the groupoid
$\text{Bun}_{G}^{Ad,\,D}(C)\,(S)$ is naturally equivalent to the groupoid of
$G$-bundles $\mathcal{P}$ on $C\times S$ along with a global section of the
associated bundle $Ad\left(\mathcal{P}|_{x_{i}\times S}\right)$ for each $i\in
I$. There is a natural map
$Forget^{D}:\text{Conn}^{D}_{G}(C)\longrightarrow\text{Bun}_{G}^{Ad,\,D}(C)$
given by forgetting the connection and remembering only the $G$-bundle and the
residue at each point $x_{i}$. We have the following result, analogous to
Proposition 3.6.
###### Proposition 4.11.
The map
$Forget^{D}:\text{Conn}^{D}_{G}(C)\longrightarrow\text{Bun}_{G}^{Ad,\,D}(C)$
is schematic, affine and of finite type. In particular,
$\text{Conn}^{D}_{G}(C)$ is an algebraic stack that is locally of finite type
over $k$.
###### Proof.
Let $S$ be a $k$-scheme. Let
$(\mathcal{P},s_{i}):\,S\longrightarrow\text{Bun}_{G}^{Ad,\,D}(C)$ be a
morphism. It consists of the data of a $G$-bundle $\mathcal{P}$ on $C\times S$
and a section $s_{i}\in H^{0}\left(x_{i}\times
S,\,Ad(\mathcal{P}|_{x_{i}\times S})\,\right)$ for all $i\in I$. We want to
show that the projection
$\text{Conn}^{D}_{G}(C)\times_{\text{Bun}_{G}^{Ad,\,D}(C)}\,S\,\longrightarrow\,S$
is schematic, affine and of finite type. There is a short exact sequence as in
Diagram 5 above.
$0\longrightarrow\text{Ad}\,\mathcal{P}\otimes\Omega^{1}_{C\times S/S}\longrightarrow At^{D,s_{i}}(\mathcal{P})\longrightarrow\mathcal{O}_{C\times S}\longrightarrow 0$
Let $\gamma_{\mathcal{P}}^{D,s_{i}}\in H^{1}\left(C\times S,\,Ad\,\mathcal{P}\otimes\Omega^{1}_{C\times S/S}\,\right)$ be the
corresponding cohomology class. Let $Z$ denote the subfunctor of $S$ defined
as follows. For any $k$-scheme $T$,
$Z(T)\;\vcentcolon=\;\left\{\,f:T\rightarrow S\;\text{ such that }\;(Id_{C}\times f)^{*}\,\gamma_{\mathcal{P}}^{D,s_{i}}=0\,\right\}$
Lemma 2.3 shows that $Z$ is represented by a closed subscheme of $S$. The
vanishing of $\gamma_{\mathcal{P}}^{D,s_{i}}$ is a necessary condition for the
existence of a logarithmic $G$-connection with residue $s_{i}$ at $x_{i}$.
Therefore, the map
$\text{Conn}^{D}_{G}(C)\times_{\text{Bun}_{G}^{Ad,\,D}(C)}\,S\longrightarrow
S$ factors through the closed subscheme $Z$. It suffices to show that the
morphism
$\text{Conn}^{D}_{G}(C)\times_{\text{Bun}_{G}^{Ad,\,D}(C)}\,S\longrightarrow
Z$ is schematic, affine and of finite type. The proof of this follows from
similar arguments as in Proposition 3.6, using Diagram 5 instead of the Atiyah
sequence. ∎
It is not necessarily true that the moduli stack of logarithmic connections
$\text{Conn}_{G}^{D}(C)$ is quasicompact, even when $\text{char}\,k=0$. We
illustrate this with an example of an unbounded family on $\mathbb{P}^{1}$.
###### Example 4.12.
Let $C=\mathbb{P}^{1}_{k}=\text{Proj}(k[x_{0},x_{1}])$. Set
$G=\mathbb{G}_{m}$. We choose a single puncture $\infty$ in
$\mathbb{P}^{1}_{k}$. For each $n\geq 1$, we equip the line bundle
$\mathcal{O}(-n)$ with a logarithmic connection as follows. There is a natural
inclusion
$\mathcal{O}(-n)\xrightarrow{x_{1}^{n}}\mathcal{O}_{\mathbb{P}^{1}_{k}}$
that identifies $\mathcal{O}(-n)$ with the ideal sheaf of the divisor
$n\infty$. We compose this with the universal derivation
$d:\mathcal{O}_{\mathbb{P}^{1}_{k}}\rightarrow\Omega_{\mathbb{P}^{1}_{k}/k}^{1}$
and the natural inclusion
$\Omega_{\mathbb{P}^{1}_{k}/k}^{1}\hookrightarrow\Omega_{\mathbb{P}^{1}_{k}/k}^{1}(\infty)$
in order to obtain a morphism
$\partial_{n}:\mathcal{O}(-n)\hookrightarrow\mathcal{O}_{\mathbb{P}^{1}_{k}}\xrightarrow{d}\Omega_{\mathbb{P}^{1}_{k}/k}^{1}\hookrightarrow\Omega_{\mathbb{P}^{1}_{k}/k}^{1}(\infty)$
This can be described in coordinates in the following way. Over the affine
open $\mathbb{A}^{1}_{\frac{x_{0}}{x_{1}}}$ we have
$\partial_{n}\left(f\cdot\frac{1}{x_{1}^{n}}\right)=df$
On the other hand, over the affine open $\mathbb{A}^{1}_{\frac{x_{1}}{x_{0}}}$
we have
$\partial_{n}\left(f\cdot\frac{1}{x_{0}^{n}}\right)=\left(\frac{x_{1}}{x_{0}}\right)^{n}\otimes
df+nf\left(\frac{x_{1}}{x_{0}}\right)^{n}\otimes\frac{d\left(\frac{x_{1}}{x_{0}}\right)}{\left(\frac{x_{1}}{x_{0}}\right)}$
This description shows that $\partial_{n}$ factors through the subsheaf $\mathcal{O}(-n)\otimes\Omega_{\mathbb{P}^{1}/k}^{1}(\infty)\subset\Omega_{\mathbb{P}^{1}/k}^{1}(\infty)$.
This way we obtain a logarithmic connection
$\partial_{n}:\mathcal{O}(-n)\rightarrow\mathcal{O}(-n)\otimes\Omega_{\mathbb{P}^{1}/k}^{1}(\infty)$
defined on $\mathcal{O}(-n)$. The residue at the pole is $n$.
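Here is the direct verification of this residue. In the coordinate $t=\frac{x_{1}}{x_{0}}$, a uniformizer at $\infty$, take $f=1$ in the second formula above and identify the local generator $x_{0}^{-n}$ of $\mathcal{O}(-n)$ with its image $t^{n}$ in $\mathcal{O}_{\mathbb{P}^{1}_{k}}$:
$\partial_{n}\left(\frac{1}{x_{0}^{n}}\right)=n\,\frac{1}{x_{0}^{n}}\otimes\frac{dt}{t}.$
The connection matrix with respect to this generator is therefore $n\,\frac{dt}{t}$, whose residue at $t=0$ is $n$.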
This example generalizes to the case when $G$ is an arbitrary nontrivial
connected reductive group.
###### Lemma 4.13.
Let $G$ be a nontrivial connected reductive group over $k$. Suppose that $D$
is nonempty. The stack of logarithmic connections $\text{Conn}_{G}^{D}(C)$ is
not of finite type over $k$.
###### Proof.
Since the property of being of finite type is stable under base-change, we can
replace the ground field $k$ with an algebraic closure. For the rest of the
proof we assume that $k$ is algebraically closed.
Let $T$ be a maximal torus in $G$. By assumption, $T$ is a nontrivial split
torus. We fix an identification $T\xrightarrow{\sim}\mathbb{G}_{m}^{n}$ for some $n\geq 1$. A $T$-bundle on $C$ is equivalent to an $n$-tuple $\vec{\mathcal{L}}=(\mathcal{L}_{j})_{j=1}^{n}$ of line bundles on $C$. For such an $n$-tuple $\vec{\mathcal{L}}$, we define an $n$-tuple of integers
$\vec{\text{deg}}(\vec{\mathcal{L}})\vcentcolon=(\text{deg}\,\mathcal{L}_{j})_{j=1}^{n}$.
The tuple $\vec{\text{deg}}(\vec{\mathcal{L}})$ can be interpreted more
canonically as a cocharacter in $X_{*}(T)$. This is the degree of the
$T$-bundle $\vec{\mathcal{L}}$ as in [Sch15][2.2.2].
Choose a Borel subgroup $B\subset G$ containing $T$. Let $\lambda\in X_{*}(T)$ be a
cocharacter of $T$ contained in the interior of the cone of $B$-dominant
cocharacters. Using the identification $T\xrightarrow{\sim}\mathbb{G}_{m}^{n}$
we think of $\lambda$ as a tuple of integers $(m_{j})_{j=1}^{n}$. Since $k$ is
algebraically closed, there exist line bundles $\mathcal{L}_{j}^{\lambda}$
such that $\text{deg}\,\mathcal{L}_{j}^{\lambda}=m_{j}$ for each $j$. We can
think of the tuple $\vec{\mathcal{L}}^{\lambda}=(\mathcal{L}_{j}^{\lambda})_{j=1}^{n}$
as a $T$-bundle.
Let $i\in I$. By Example 4.4, there is a canonical identification
$H^{0}(x_{i},\text{Ad}(\mathcal{L}_{j}^{\lambda}|_{x_{i}}))\cong k$. Set
$s_{i,j}=-\frac{1}{|I|}\text{deg}\,\mathcal{L}^{\lambda}_{j}$. Example 4.4
shows that the obstruction $\gamma_{\mathcal{L}_{j}^{\lambda}}^{D,s_{i,j}}$
satisfies $\gamma_{\mathcal{L}_{j}^{\lambda}}^{D,s_{i,j}}=0$. Therefore the
line bundle $\mathcal{L}_{j}^{\lambda}$ admits a logarithmic connection
$\theta_{j}^{\lambda}$ with residue $s_{i,j}$ at $x_{i}$. The pair of
$n$-tuples
$\left(\vec{\mathcal{L}}^{\lambda},\vec{\theta}^{\lambda}\right)\vcentcolon=\left(\mathcal{L}_{j}^{\lambda},\theta_{j}^{\lambda}\right)_{j=1}^{n}$
is a $T$-bundle equipped with a logarithmic connection with poles at $D$.
Let $\rho$ denote the inclusion $\rho:T\hookrightarrow G$. By functoriality of
logarithmic connections, there is an associated $G$-bundle with logarithmic
connection
$(\mathcal{G}^{\lambda},\nu^{\lambda})\vcentcolon=\rho_{*}\left(\,\left(\vec{\mathcal{L}}^{\lambda},\vec{\theta}^{\lambda}\right)\,\right)$
This way we get an infinite set of $G$-bundles
$\\{\mathcal{G}^{\lambda}\\}_{\lambda}$ indexed by the cocharacters
$\lambda\in X_{*}(T)$ that are contained in the interior of the $B$-dominant
cone. Each $\mathcal{G}^{\lambda}$ admits a logarithmic connection
$\nu^{\lambda}$ with poles at $D$. In order to conclude the proof of the lemma, we
shall show that the set $\\{\mathcal{G}^{\lambda}\\}_{\lambda}$ is unbounded
in $\text{Bun}_{G}(C)$.
For each $\lambda$, we have
$\mathcal{G}^{\lambda}=\rho_{*}(\vec{\mathcal{L}}^{\lambda})$. Let $\psi$
denote the inclusion $\psi:T\hookrightarrow B$. The $T$-bundle
$\vec{\mathcal{L}}^{\lambda}$ is semistable in the sense of [Sch15][2.2.3],
because $T$ does not have nontrivial proper parabolic subgroups. By
construction, $\lambda$ is the degree of
$\psi_{*}(\vec{\mathcal{L}}^{\lambda})$ as in [Sch15][2.2.2]. Since $\lambda$
is in the interior of the $B$-dominant cone, it follows that
$\psi_{*}(\vec{\mathcal{L}}^{\lambda})$ is the canonical reduction of
$\mathcal{G}^{\lambda}$ [Sch15][2.4.1]. Therefore the cocharacter $\lambda$ is
the Harder-Narasimhan type of $\mathcal{G}^{\lambda}$. Hence the $G$-bundles
$\\{\mathcal{G}^{\lambda}\\}_{\lambda}$ have distinct Harder-Narasimhan types.
Since the image of each Harder-Narasimhan stratum is constructible in
$\text{Bun}_{G}(C)$ [Sch15][Thm. 2.3.3(a)], it follows that the set
$\\{\mathcal{G}^{\lambda}\\}_{\lambda}$ must be unbounded. ∎
Let’s return to the case when $G$ is an arbitrary smooth connected linear
algebraic group. Using Lemma 4.13 we can prove the following.
###### Proposition 4.14.
Suppose that $D$ is nonempty and that the characteristic of $k$ is $0$. Then
the stack $\text{Conn}_{G}^{D}(C)$ is of finite type over $k$ if and only if
the group $G$ is unipotent.
###### Proof.
Assume that the group $G$ is unipotent. Consider the composition
$h_{G}:\text{Conn}^{D}_{G}(C)\xrightarrow{\text{Forget}^{D}}\text{Bun}_{G}^{Ad,D}(C)\rightarrow\text{Bun}_{G}(C)$
By definition, $\text{Bun}_{G}^{Ad,D}(C)\rightarrow\text{Bun}_{G}(C)$ is of
finite type. Proposition 4.11 shows that $\text{Forget}^{D}$ is also of finite
type. Therefore, the composition $h_{G}$ is of finite type. By Proposition 2.5
(iv), $\text{Bun}_{G}(C)$ is of finite type over $k$. We conclude that
$\text{Conn}_{G}^{D}(C)$ is of finite type over $k$.
Conversely, suppose that $G$ is not unipotent. Let $U$ denote the unipotent
radical of $G$. We have a short exact sequence of algebraic groups
$1\rightarrow U\rightarrow G\rightarrow\overline{G}\rightarrow 1$
where $\overline{G}$ is a nontrivial connected reductive group. Since the
characteristic of $k$ is $0$, the short exact sequence above admits a
splitting. Therefore we can view $G=U\rtimes\overline{G}$. Consider the chain
of morphisms
$\overline{G}\xhookrightarrow{\iota}U\rtimes\overline{G}=G\xtwoheadrightarrow{q}\overline{G}$
By functoriality of logarithmic connections, this induces a chain of morphisms
of stacks
$\text{Conn}^{D}_{\overline{G}}(C)\xrightarrow{\iota_{*}}\text{Conn}_{G}^{D}(C)\xrightarrow{q_{*}}\text{Conn}_{\overline{G}}^{D}(C)$
By definition, the composition $q_{*}\circ\iota_{*}$ is the identity. Hence
$\iota_{*}$ exhibits $\text{Conn}^{D}_{\overline{G}}(C)$ as a subfunctor of
$\text{Conn}^{D}_{G}(C)$. Assume for the sake of contradiction that
$\text{Conn}^{D}_{G}(C)$ is quasicompact. This would imply that the substack $\text{Conn}^{D}_{\overline{G}}(C)$ is quasicompact, thus contradicting Lemma 4.13. ∎
For each $i\in I$, choose an orbit $O_{i}$ for the adjoint action of $G$ on
its Lie algebra $\mathfrak{g}$. Each orbit $O_{i}$ is a smooth locally closed
subscheme of $\mathfrak{g}$. After quotienting by the action of $G$, we get a
locally closed substack
$\prod_{i\in I}\left[G\,\backslash\,O_{i}\right]\,\hookrightarrow\,\prod_{i\in
I}\left[G\,\backslash\,\mathfrak{g}\right]$
###### Definition 4.15.
$\text{Conn}_{G}^{D,O_{i}}(C)$ is defined to be the locally closed substack of
$\text{Conn}_{G}^{D}(C)$ given by the following cartesian diagram
$\text{Conn}_{G}^{D,O_{i}}(C)\;\vcentcolon=\;\text{Conn}^{D}_{G}(C)\,\times_{\text{Bun}_{G}^{Ad,\,D}(C)}\,\left(\prod_{i\in I}\left[G\,\backslash\,O_{i}\right]\times_{\prod_{i\in I}\left[G\,\backslash\,\mathfrak{g}\right]}\text{Bun}_{G}^{\text{Ad},D}(C)\right)$
where the map $\text{Conn}^{D}_{G}(C)\rightarrow\text{Bun}_{G}^{Ad,\,D}(C)$ is $Forget^{D}$.
The stack $\text{Conn}_{G}^{D,O_{i}}(C)$ parametrizes $G$-bundles with
logarithmic connections whose residue at $x_{i}$ lies in the orbit $O_{i}$.
###### Theorem 4.16.
Suppose that $\text{char}\,k=0$. The algebraic stack
$\text{Conn}_{G}^{D,O_{i}}(C)$ is of finite type over $k$.
###### Proof.
By Proposition 4.11, the map $Forget^{D}$ is of finite type. By definition the
map $\text{Bun}^{Ad,\,D}_{G}(C)\longrightarrow\text{Bun}_{G}(C)$ is of finite
type. Hence it suffices to show that the composition
$h_{G}:\text{Conn}_{G}^{D,O_{i}}(C)\hookrightarrow\text{Conn}_{G}^{D}(C)\xrightarrow{Forget^{D}}\text{Bun}_{G}^{Ad,\,D}(C)\longrightarrow\text{Bun}_{G}(C)$
factors through a quasicompact open substack of $\text{Bun}_{G}(C)$.
Let $U$ be the unipotent radical of $G$. We denote by $\rho_{G}:G\rightarrow
G/U$ the corresponding quotient morphism. Choose a faithful representation
$\rho:G/U\longrightarrow\text{GL}_{n}$ of the reductive group $G/U$. Let
$(\rho\circ\rho_{G})_{*}O_{i}$ denote the unique $\text{GL}_{n}$-orbit in
$\mathfrak{gl}_{n}$ containing the image $(\rho\circ\rho_{G})(O_{i})$. The
functoriality properties described in Subsection 4.3 imply that there is a
commutative diagram
$\text{Conn}_{G}^{D,O_{i}}(C)\;\longrightarrow\;\text{Conn}_{G/U}^{D,\,(\rho_{G})_{*}(O_{i})}(C)\;\longrightarrow\;\text{Conn}_{\text{GL}_{n}}^{D,\,(\rho\circ\rho_{G})_{*}(O_{i})}(C)$
$\text{Bun}_{G}(C)\;\xrightarrow{\;(\rho_{G})_{*}\;}\;\text{Bun}_{G/U}(C)\;\xrightarrow{\;\;\rho_{*}\;\;}\;\text{Bun}_{\text{GL}_{n}}(C)$
Here the vertical maps are $h_{G}$, $h_{G/U}$ and $h_{\text{GL}_{n}}$, respectively.
The bottom horizontal maps are of finite type by Propositions 2.4 and 2.5.
Therefore it suffices to show that
$\text{Conn}^{D,\,(\rho\circ\rho_{G})_{*}O_{i}}_{\text{GL}_{n}}(C)$ is
quasicompact. We shall establish that $h_{\text{GL}_{n}}$ factors through a
quasicompact open substack of $\text{Bun}_{\text{GL}_{n}}(C)$. Just as in the
proof of Proposition 3.7, it suffices to check that the set of Harder-
Narasimhan polygons of geometric points
$\overline{t}\longrightarrow\text{Bun}_{\text{GL}_{n}}(C)$ with nonempty fiber
$\text{Conn}^{D,O_{i}}_{\text{GL}_{n}}(C)_{\overline{t}}$ is a finite set.
Let $\mu=(\mu_{1}\geq\mu_{2}\geq...\geq\mu_{n})$ be a tuple of rational
numbers in $\frac{1}{n!}\mathbb{Z}$. Let
$\mathcal{E}:\overline{t}\longrightarrow\text{Bun}_{\text{GL}_{n}}(C)$ be a
vector bundle on $C\times\overline{t}$, where $\overline{t}$ is the spectrum of an algebraically closed field. Suppose that the Harder-Narasimhan polygon of
$\mathcal{E}$ is given by $\mu$. Assume that the fiber
$\text{Conn}^{D,O_{i}}_{\text{GL}_{n}}(C)_{\overline{t}}$ is nonempty. This
means that $\mathcal{E}$ admits a logarithmic connection $\theta$ with residue
$\text{Res}_{x_{i}}\theta$ in the conjugacy class
$O_{i}|_{\kappa(\overline{t})}$. Set
$s_{i}\vcentcolon=\text{Res}_{x_{i}}\theta$.
After choosing a trivialization of $\mathcal{E}|_{x_{i}\times\overline{t}}$,
the endomorphism $s_{i}$ is represented by a matrix in
$\mathfrak{gl}_{n}\otimes\kappa(\overline{t})$. Let
$(\lambda_{i}^{(l)})_{l=1}^{n}$ be the tuple of eigenvalues of $s_{i}$. Up to permutation, this tuple does not depend on the choice of trivialization of $\mathcal{E}|_{x_{i}\times\overline{t}}$.
Let $B(\\{1,2,...,n\\})$ denote the power set of $\\{1,2,...,n\\}$. Define the
set of $I$-tuples
$A\vcentcolon=\left\\{(J_{i})_{i\in I}\in
B(\\{1,2,...,n\\})^{I}\,\mid\,J_{i}\neq\emptyset\;\text{for all $i\in
I$}\;\;\,\text{and}\;\,\sum_{i\in I}\sum_{j\in
J_{i}}\lambda_{i}^{(j)}\in\mathbb{Z}\right\\}$
Set $M\vcentcolon=\underset{(J_{i})\in A}{\text{max}}\,\left\lvert\sum_{i\in
I}\sum_{j\in J_{i}}\lambda_{i}^{(j)}\right\rvert$. The quantity $M$ does not
depend on the choice of $\mathcal{E}$, because the eigenvalues are completely
determined by the conjugacy classes $(\rho\circ\rho_{G})_{*}(O_{i})$ of
matrices. We claim that $\mu$ satisfies the following two conditions
1. (a)
$\sum_{j=1}^{n}\mu_{j}=-\sum_{i\in I}\text{tr}\,s_{i}$.
2. (b)
$\mu_{1}\leq\left(\text{max}\\{4g-3-|I|,0\\}\right)\cdot n+M$.
The claim implies that there are finitely many possibilities for the Harder-
Narasimhan polygon $\mu\in\frac{1}{n!}\mathbb{Z}^{n}$. Hence we are reduced to
showing the claim.
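To illustrate the definitions of $A$ and $M$, consider the following hypothetical choice of data, not tied to any particular bundle: $n=2$, a single puncture $I=\{1\}$, and eigenvalues $\lambda_{1}^{(1)}=\frac{1}{2}$, $\lambda_{1}^{(2)}=-\frac{1}{2}$. The subsets $J_{1}=\{1\}$ and $J_{1}=\{2\}$ give sums $\pm\frac{1}{2}\notin\mathbb{Z}$, while $J_{1}=\{1,2\}$ gives $0\in\mathbb{Z}$. Hence $A$ consists of the single tuple $(\{1,2\})$, $M=0$, and condition (b) bounds $\mu_{1}$ by $\text{max}\{4g-4,0\}\cdot 2$ alone.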
1. (a)
This follows from the discussion in Example 4.8, because
$\sum_{j=1}^{n}\mu_{j}=\text{deg}\,\mathcal{E}$.
2. (b)
Set $m\vcentcolon=\text{max}\\{4g-3-|I|,0\\}$. Suppose that the Harder-
Narasimhan filtration of $\mathcal{E}$ is given by
$0\subset\mathcal{F}_{1}\subset\mathcal{F}_{2}\subset...\subset\mathcal{F}_{l}=\mathcal{E}$
We use the interpretation of logarithmic $\text{GL}_{n}$-connections as
$\Lambda$-modules, as in [Sim94a][§2]. Let
$\mathcal{D}_{C\times\overline{t}/\,\overline{t}}$ denote the usual sheaf of
rings of differential operators on $C\times\overline{t}$ [Sim94a][§2, pg. 85].
Define $\Lambda$ to be the subsheaf of rings of
$\mathcal{D}_{C\times\overline{t}/\,\overline{t}}$ generated by
$\Omega^{1}_{C\times\overline{t}/\overline{t}}(D)^{\vee}\subset
T_{C\times\overline{t}/\overline{t}}$ (see [Sim94a][§2, pg. 87]). Recall that
there is a filtration of $\mathcal{D}_{C\times\overline{t}/\,\overline{t}}$
given by the order of the differential operator. This induces a filtration on
the subsheaf $\Lambda\subset\mathcal{D}_{C\times\overline{t}/\,\overline{t}}$,
which endows $\Lambda$ with the structure of a sheaf of rings of differential operators as defined in [Sim94a][§2, pg. 77]. The first graded piece
$\text{Gr}_{1}\Lambda$ associated to the filtration is
$\text{Gr}_{1}\Lambda=\Omega_{C\times\overline{t}/\,\overline{t}}^{1}(D)^{\vee}$.
Observe that the twist
$\Omega_{C\times\overline{t}/\,\overline{t}}^{1}(D)^{\vee}(m)$ is generated by
global sections. This follows from a standard cohomology argument using Serre
duality. The proof of Lemma 3.3 in [Sim94a][§3] shows that
$\mu_{1}=\mu(\mathcal{F}_{1})\leq mn+\mu(\mathcal{G})$, where $\mathcal{G}$ is
the smallest subbundle containing $\mathcal{F}_{1}$ and preserved by the
logarithmic connection $\theta$. We will show that any subsheaf
$\mathcal{G}\subset\mathcal{E}$ preserved by the logarithmic connection
satisfies $\mu(\mathcal{G})\leq M$.
Restrict the logarithmic connection $\theta$ to the subbundle $\mathcal{G}$ in
order to obtain a logarithmic connection $\theta^{\mathcal{G}}$ on
$\mathcal{G}$. By assumption the residue
$s_{i}\in\text{End}(\mathcal{E}|_{x_{i}\times\overline{t}})$ preserves the
subspace $\mathcal{G}|_{x_{i}\times\overline{t}}$. The residue
$s_{i}^{\mathcal{G}}\vcentcolon=\text{Res}_{x_{i}}\,\theta^{\mathcal{G}}$ is
the restriction of $s_{i}$ to $\mathcal{G}|_{x_{i}\times\overline{t}}$. For
all $i\in I$, there is a nonempty subset $J_{i}\subset\\{1,2,...,n\\}$ such
that the eigenvalues of $s_{i}^{\mathcal{G}}$ are $(\lambda_{i}^{(j)})_{j\in
J_{i}}$.
The discussion in Example 4.8 applied to $\mathcal{G}$ shows that
$\text{deg}\,\mathcal{G}=-\sum_{i\in
I}\text{tr}\,s_{i}^{\mathcal{G}}=-\sum_{i\in I}\sum_{j\in
J_{i}}\lambda_{i}^{(j)}$
Since $\text{deg}\,\mathcal{G}\in\mathbb{Z}$, we have $(J_{i})_{i\in I}\in A$,
so by the definition of $M$ the absolute value of the right hand side is
bounded by $M$. Since $\text{rk}\,\mathcal{G}\geq 1$ and $M\geq 0$, it follows
that $\mu(\mathcal{G})=\text{deg}\,\mathcal{G}/\text{rk}\,\mathcal{G}\leq M$,
thus concluding the proof of the claim.
∎
## References
* [Ati57a] M. F. Atiyah. Complex analytic connections in fibre bundles. Trans. Amer. Math. Soc., 85:181–207, 1957.
* [Ati57b] M. F. Atiyah. Vector bundles over an elliptic curve. Proc. London Math. Soc. (3), 7:414–452, 1957.
* [BBN05] V. Balaji, Indranil Biswas, and D. S. Nagaraj. Krull-Schmidt reduction for principal bundles. J. Reine Angew. Math., 578:225–234, 2005.
* [BDP18] Indranil Biswas, Ananyo Dan, and Arjun Paul. Criterion for logarithmic connections with prescribed residues. Manuscripta Math., 155(1-2):77–88, 2018.
  * [BDPS17] Indranil Biswas, Ananyo Dan, Arjun Paul, and Arideep Saha. Logarithmic connections on principal bundles over a Riemann surface. International Journal of Mathematics, 28(12):1750088, Nov 2017.
  * [Beh91] Kai Achim Behrend. The Lefschetz trace formula for the moduli stack of principal bundles. 1991. Thesis (Ph.D.)–University of California, Berkeley.
* [Bis10] Indranil Biswas. The Atiyah bundle and connections on a principal bundle. Proc. Indian Acad. Sci. Math. Sci., 120(3):299–316, 2010.
* [BR08] Indranil Biswas and N. Raghavendra. The Atiyah-Weil criterion for holomorphic connections. Indian J. Pure Appl. Math., 39(1):3–47, 2008.
* [Del70] Pierre Deligne. Équations différentielles à points singuliers réguliers. Lecture Notes in Mathematics, Vol. 163. Springer-Verlag, Berlin-New York, 1970.
* [Hei10] Jochen Heinloth. Lectures on the moduli stack of vector bundles on a curve. In Affine flag manifolds and principal bundles, Trends Math., pages 123–153. Birkhäuser/Springer Basel AG, Basel, 2010.
  * [HL97] Daniel Huybrechts and Manfred Lehn. The geometry of moduli spaces of sheaves. Aspects of Mathematics, E31. Friedr. Vieweg & Sohn, Braunschweig, 1997.
* [Hof10] Norbert Hoffmann. On moduli stacks of $G$-bundles over a curve. In Affine flag manifolds and principal bundles, Trends Math., pages 155–163. Birkhäuser/Springer Basel AG, Basel, 2010.
* [IIS06] Michi-aki Inaba, Katsunori Iwasaki, and Masa-Hiko Saito. Moduli of stable parabolic connections, Riemann-Hilbert correspondence and geometry of Painlevé equation of type VI. I. Publ. Res. Inst. Math. Sci., 42(4):987–1089, 2006.
* [LMB00] Gérard Laumon and Laurent Moret-Bailly. Champs algébriques, volume 39 of Ergebnisse der Mathematik und ihrer Grenzgebiete. 3. Folge. A Series of Modern Surveys in Mathematics [Results in Mathematics and Related Areas. 3rd Series. A Series of Modern Surveys in Mathematics]. Springer-Verlag, Berlin, 2000.
  * [Neu09] Frank Neumann. Algebraic stacks and moduli of vector bundles. Publicações Matemáticas do IMPA. [IMPA Mathematical Publications]. Instituto Nacional de Matemática Pura e Aplicada (IMPA), Rio de Janeiro, 2009. 27º Colóquio Brasileiro de Matemática. [27th Brazilian Mathematics Colloquium].
* [Nit93] Nitin Nitsure. Moduli of semistable logarithmic connections. J. Amer. Math. Soc., 6(3):597–609, 1993.
* [Ric77] R. W. Richardson. Affine coset spaces of reductive algebraic groups. Bull. London Math. Soc., 9(1):38–41, 1977.
* [Sch15] Simon Schieder. The Harder-Narasimhan stratification of the moduli stack of $G$-bundles via Drinfeld’s compactifications. Selecta Math. (N.S.), 21(3):763–831, 2015.
* [Sim94a] Carlos T. Simpson. Moduli of representations of the fundamental group of a smooth projective variety. I. Inst. Hautes Études Sci. Publ. Math., (79):47–129, 1994.
  * [Sim94b] Carlos T. Simpson. Moduli of representations of the fundamental group of a smooth projective variety. II. Inst. Hautes Études Sci. Publ. Math., (80):5–79 (1995), 1994.
* [Sta19] The Stacks Project Authors. Stacks Project. https://stacks.math.columbia.edu, 2019.
* [Wat79] William C. Waterhouse. Introduction to affine group schemes, volume 66 of Graduate Texts in Mathematics. Springer-Verlag, New York-Berlin, 1979.
Department of Mathematics, Cornell University, 310 Malott Hall, Ithaca, New
York 14853, USA
E-mail address<EMAIL_ADDRESS>
|
2024-09-04T02:54:54.830041 | 2020-02-26T23:12:29 | 2002.11837 | {
"authors": "George C. Alexandropoulos, Md Atiqul Islam, and Besma Smida",
"full_text_license": null,
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"provenance": "arxiv-papers-0000.json.gz:25900",
"submitter": "Md Atiqul Islam",
"url": "https://arxiv.org/abs/2002.11837"
} | arxiv-papers | # Full Duplex Hybrid A/D Beamforming with Reduced Complexity Multi-Tap Analog
Cancellation
George C. Alexandropoulos1, Md Atiqul Islam2, and Besma Smida2 This work was
partially funded by the National Science Foundation CAREER award #1620902.
1Department of Informatics and Telecommunications, National and Kapodistrian
University of Athens, Greece
2Department of Electrical and Computer Engineering, University of Illinois at
Chicago, USA
emails<EMAIL_ADDRESS>{mislam23<EMAIL_ADDRESS>
###### Abstract
Although the hardware complexity of the analog self-interference canceller in
full duplex Multiple Input Multiple Output (MIMO) designs does not necessarily
scale with the number of transceiver antennas, exploiting the benefits of
analog cancellation in massive MIMO systems with hundreds of antenna elements
is still quite impractical. Hybrid Analog and Digital (A/D) beamforming
architectures have been lately considered as a candidate technology for
realizing massive MIMO transceivers with very large numbers of antenna
elements, but with far fewer Radio Frequency (RF) chains. In this
paper, we present a novel architecture for full duplex hybrid A/D beamforming
transceivers including multi-tap analog cancellation with reduced number of
taps and simple multiplexers for efficient signal routing among the
transceiver RF chains. Capitalizing on the proposed transceiver architecture,
we present a joint design of analog cancellation and A/D beamforming with the
objective to maximize the achievable full duplex rate performance.
Representative millimeter wave simulation results demonstrate the
effectiveness of the proposed architecture and algorithmic framework for
enabling simultaneous uplink and downlink communications with reduced
complexity analog self-interference cancellation.
###### Index Terms:
Analog cancellation, full duplex, hybrid beamforming, joint optimization,
multi-user communication, massive MIMO.
## I Introduction
In-band full duplex, referred to in short as Full Duplex (FD), is a candidate
technology for the Release $17$ of the fifth Generation (5G) New Radio (NR)
standard enabling simultaneous UpLink (UL) and DownLink (DL) communication
within the entire frequency band [1]. An FD radio can transmit and receive at
the same time and frequency resource units, consequently, it can double the
spectral efficiency achieved by a Half Duplex (HD) radio. Current wireless
systems exploit Multiple Input Multiple Output (MIMO) communication, where
increasing the number of Transmitter (TX) and Receiver (RX) antennas can
increase the spatial Degrees of Freedom (DoF), hence boosting rate
performance. Combining FD with MIMO operation can provide further spectral
efficiency gains [2, 3, 4, 5, 6, 7, 8].
FD radios suffer from Self Interference (SI), a term referring to the signal
transmitted by the FD radio TX that leaks to the FD radio RX. At the RX of the
FD radio, the SI power can be many times stronger than the power of the
received signal of interest. Consequently, SI can severely degrade the
reception of the signal of interest, and thus SI mitigation is required in
order to maximize the spectral efficiency gain of the FD operation. As the
number of antennas increases, mitigating SI becomes more challenging, since
more antennas naturally result in more SI components. Conventional SI
suppression techniques in Single-Input Single-Output (SISO) systems include
propagation domain isolation, analog domain suppression, and digital
cancellation [4, 9]. Although analog SI cancellation in FD MIMO systems can be
implemented through SISO replication, its hardware requirements scale with the
number of TX/RX antennas. The authors in [2, 5] presented spatial suppression
techniques that alleviate the need for analog SI cancellation, which was
replaced solely by digital TX/RX beamforming. In [10], a joint design of
multi-tap analog cancellation and TX/RX beamforming, where the number of taps
does not scale with the product of TX and RX antenna elements, was proposed.
The FD technology has been lately theoretically combined with Hybrid analog
and digital BeamForming (HBF) [11] to enable simultaneous UL and DL
communications in massive MIMO systems operating in the millimeter wave
frequency bands [12, 13, 14, 15]. These works mainly assigned the role of SI
mitigation to the hybrid beamformers and/or deployed analog SI cancellation
that scales with the number of TX/RX antennas.
In this paper, we present a novel hardware architecture for FD HBF systems
enabling the joint design of A/D TX/RX beamformers with reduced complexity
tap-based analog cancellation. The proposed analog canceller interconnects a
subset of the outputs of the TX Radio Frequency (RF) chains to a subset of the
inputs to the RX RF chains in order to ensure that the residual SI signal
after A/D TX precoding and analog RX combining remains below the noise floor.
Our indicative simulation results with the proposed architecture and an
example FD HBF algorithmic framework showcase a $1.7$ times rate improvement
over HD HBF communication.
Figure 1: The considered bidirectional communication system with the proposed
FD HBF architecture in the MIMO node $k$ including $N$-tap analog cancellation
and A/D TX/RX beamforming. The HD multi-antenna nodes $q$ and $m$ communicate
with node $k$ in the DL and UL directions, respectively. Each TX RF chain
consists of a Digital to Analog Converter (DAC), a mixer upconverting the
signal from BaseBand (BB) to RF, and a Power Amplifier (PA). A RX RF chain
consists of a Low Noise Amplifier (LNA), a mixer downconverting the signal
from RF to BB, and an Analog to Digital Converter (ADC). Upsampling and pulse
shaping are used to prepare the BB signal for DAC and RF transmission at the
TX side, whereas matched filtering and downsampling are used at the RX side
before BB processing of the received RF signal.
Notation: Vectors and matrices are denoted by boldface lowercase and boldface
capital letters, respectively. The transpose and Hermitian transpose of
$\mathbf{A}$ are denoted by $\mathbf{A}^{\rm T}$ and $\mathbf{A}^{\rm H}$,
respectively, and $\det(\mathbf{A})$ is the determinant of $\mathbf{A}$, while
$\mathbf{I}_{n}$ ($n\geq 2$) is the $n\times n$ identity matrix and
$\mathbf{0}_{n\times m}$ ($n,m\geq 2$) is a $n\times m$ matrix with zeros.
$\|\mathbf{A}\|_{\rm F}$ is the Frobenius norm of $\mathbf{A}$,
$\|\mathbf{a}\|$ stands for $\mathbf{a}$’s Euclidean norm, and ${\rm
diag}\\{\mathbf{a}\\}$ denotes a square diagonal matrix with $\mathbf{a}$’s
elements in its main diagonal. $[\mathbf{A}]_{i,j}$ represents $\mathbf{A}$’s
$(i,j)$-th element, while $[\mathbf{a}]_{i}$ denotes the $i$-th element of
$\mathbf{a}$. $\mathbb{R}$ and $\mathbb{C}$ represent the real and complex
number sets, respectively, and $|\cdot|$ denotes the amplitude of a complex
number.
## II System Model and Proposed Architecture
### II-A System Model
We consider the $3$-user bidirectional communication system in Fig. 1
comprising of a FD MIMO node $k$ equipped with $N_{k}$ TX and $M_{k}$ RX
antenna elements, and two HD multi-antenna nodes $q$ and $m$ having $M_{q}$
and $N_{m}$ antennas, respectively. It is assumed that node $k$ communicates
simultaneously (in the same time and frequency resources) with node $q$ in the
DL and node $m$ in the UL. All nodes are considered capable of performing
digital beamforming, which for simplicity we assume to be realized with linear
filters. Node $k$ is also capable of analog TX/RX beamforming using the
partially connected HBF architecture [11], as will be detailed in the sequel.
It is assumed that node $m$ makes use of the digital precoding matrix
$\mathbf{V}_{m}^{(\mathrm{BB})}\in\mbox{$\mathbb{C}$}^{N_{m}\times d_{m}}$ for
processing in BaseBand (BB) its unit power symbol vector
$\mathbf{s}_{m}\in\mbox{$\mathbb{C}$}^{d_{m}\times 1}$ (chosen in practice
from a discrete modulation set) before UL transmission. The dimension of
$\mathbf{s}_{m}$ satisfies $d_{m}\leq\min\\{M_{k}^{(\mathrm{RF})},N_{m}\\}$
with $M_{k}^{(\mathrm{RF})}$ denoting the number of RX RF chains at node $k$.
It holds $M_{k}^{(\mathrm{RF})}\leq M_{k}$, although in practical systems it
can be $M_{k}^{(\mathrm{RF})}\ll M_{k}$. The constraint for $d_{m}$ certifies
data decodability for the considered UL communication. It additionally holds
that
$\mathbb{E}\\{\|\mathbf{V}_{m}^{(\mathrm{BB})}\mathbf{s}_{m}\|^{2}\\}\leq{\rm
P}_{m}$, where ${\rm P}_{m}$ is the total TX power of node $m$. On the DL, the
reception node $q$ applies the digital combining matrix
$\mathbf{U}_{q}^{(\mathrm{BB})}\in\mbox{$\mathbb{C}$}^{N_{q}\times d_{k}}$ in
the BB received signal that includes the unit power symbol vector
$\mathbf{s}_{k}\in\mbox{$\mathbb{C}$}^{d_{k}\times 1}$ (again chosen from a
discrete modulation set) transmitted from node $k$ such that
$d_{k}\leq\min\\{M_{q},N_{k}^{(\mathrm{RF})}\\}$ with $N_{k}^{(\mathrm{RF})}$
($N_{k}^{(\mathrm{RF})}\leq N_{k}$, but practically it can be
$N_{k}^{(\mathrm{RF})}\ll N_{k}$) denoting the number of TX RF chains at node
$k$. Similarly, the latter constraint verifies the spatial DoF of the
effective $M_{q}\times N_{k}^{(\mathrm{RF})}$ DL MIMO channel between the TX
RF chains of node $k$ and the RX RF chains of node $q$. It is noted that each
antenna at node $q$ is connected to a dedicated RF chain.
### II-B Proposed FD HBF Hardware Architecture
The proposed FD HBF hardware architecture, comprising of $N$-tap analog
cancellation for the SI signal as well as A/D precoding and combining for the
outgoing and incoming signals, is adopted for the MIMO node $k$, as depicted
in the left part of Fig. 1. In contrast to the architecture of [10], which
considered only fully digital TX/RX beamforming, node $k$ is capable of HBF
through its partially connected beamforming architecture. As shown in the
figure, the analog canceller interconnects the $N_{k}^{(\mathrm{RF})}$ inputs
of the analog TX precoder to the $M_{k}^{(\mathrm{RF})}$ outputs of the analog
RX combiner. The complexity of the analog canceller expressed in the number of
taps $N$ is independent of the numbers $N_{k}$ and $M_{k}$ of the TX and RX
antennas, respectively, and as it will be shown next, it scales with the
product $\xi N_{k}^{(\mathrm{RF})}M_{k}^{(\mathrm{RF})}$ with $\xi<1$. This is
in contrast to [10] where the analog canceller interconnects the $N_{k}$ TX
antenna inputs to the $M_{k}$ RX antenna outputs.
#### II-B1 Partially Connected HBF
Each of the $N_{k}^{(\mathrm{RF})}$ TX RF chains of node $k$ is connected to a
separate subset of the available TX antenna elements. As shown in Fig. 1, the
$i$-th TX RF chain with $i=1,2,\ldots,N_{k}^{(\mathrm{RF})}$ is connected via
phase shifters to $N_{k}^{({\rm A})}$ TX antenna elements, each denoted as
${\rm TX}(i,j)$ $\forall$$j=1,2,\ldots,N_{k}^{({\rm A})}$. Clearly, it holds
$N_{k}=N_{k}^{(\mathrm{RF})}N_{k}^{({\rm A})}$ for the total number of TX
antennas at node $k$. Stacking the values of the $N_{k}^{({\rm A})}$ phase
shifters that connect each $i$-th TX RF chain with its antenna elements in a
complex-valued $N_{k}^{({\rm A})}\times 1$ vector $\mathbf{v}_{i}$, we can
formulate the complex-valued $N_{k}\times N_{k}^{(\mathrm{RF})}$ analog TX
precoder, as follows:
$\mathbf{V}_{k}^{(\mathrm{RF})}=\left[\begin{matrix}\mathbf{v}_{1}&\mathbf{0}_{N_{k}^{({\rm
A})}\times 1}&\cdots&\mathbf{0}_{N_{k}^{({\rm A})}\times 1}\\\
\mathbf{0}_{N_{k}^{({\rm A})}\times
1}&\mathbf{v}_{2}&\cdots&\mathbf{0}_{N_{k}^{({\rm A})}\times 1}\\\
\vdots&\vdots&\ddots&\vdots\\\ \mathbf{0}_{N_{k}^{({\rm A})}\times
1}&\mathbf{0}_{N_{k}^{({\rm A})}\times 1}&\cdots&\mathbf{v}_{N_{k}^{({\rm
RF})}}\end{matrix}\right].$ (1)
The elements of each $\mathbf{v}_{i}$ are assumed to have constant magnitude,
i.e., $|[\mathbf{v}_{i}]_{n}|^{2}=1/N_{k}^{({\rm A})}$
$\forall$$n=1,2,\ldots,N_{k}^{({\rm A})}$. We also assume that
$\mathbf{v}_{i}\in\mathbb{F}_{\rm TX}$
$\forall$$i=1,2,\ldots,N_{k}^{(\mathrm{RF})}$, which means that all analog TX
precoding vectors belong in a predefined beam codebook $\mathbb{F}_{\rm TX}$
including ${\rm card}(\mathbb{F}_{\rm TX})$ distinct vectors (or analog
beams). Apart from applying $\mathbf{V}_{k}^{(\mathrm{RF})}$ in the analog
domain to the signal before transmission, the symbol vector $\mathbf{s}_{k}$
is also processed in BB with the digital TX precoder
$\mathbf{V}_{k}^{(\mathrm{BB})}\in\mbox{$\mathbb{C}$}^{N_{k}^{(\mathrm{RF})}\times
d_{k}}$ (recall that $d_{k}\leq\min\\{M_{q},N_{k}^{(\mathrm{RF})}\\}$) before
entering into the $N_{k}^{(\mathrm{RF})}$ TX RF chains, as shown in Fig. 1.
Similar to the UL communication from node $m$ to $k$, we assume that the DL
transmission from node $k$ to $q$ is power limited according to
$\mathbb{E}\\{\|\mathbf{V}_{k}^{(\mathrm{RF})}\mathbf{V}_{k}^{(\mathrm{BB})}\mathbf{s}_{k}\|^{2}\\}\leq{\rm
P}_{k}$ with ${\rm P}_{k}$ being the total available TX power at node $k$.
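As a concrete illustration of the block-diagonal structure in (1), the
following minimal NumPy sketch (the sizes, the DFT-based beam codebook, and
the helper names are illustrative assumptions, not part of the system model)
assembles $\mathbf{V}_{k}^{(\mathrm{RF})}$ from per-RF-chain phase-shifter
vectors:

```python
import numpy as np

def dft_codebook(n_ant, n_beams):
    # Unit-modulus DFT beams; each column is a candidate analog vector in F_TX.
    k = np.arange(n_ant)[:, None] * np.arange(n_beams)[None, :]
    return np.exp(2j * np.pi * k / n_beams) / np.sqrt(n_ant)

def analog_precoder(beams):
    # Block-diagonal V_RF of Eq. (1): RF chain i feeds only its own subarray.
    n_rf, n_a = len(beams), beams[0].size
    V = np.zeros((n_rf * n_a, n_rf), dtype=complex)
    for i, v in enumerate(beams):
        V[i * n_a:(i + 1) * n_a, i] = v
    return V

F_tx = dft_codebook(n_ant=16, n_beams=16)   # card(F_TX) = 16 candidate beams
V_rf = analog_precoder([F_tx[:, 3], F_tx[:, 7], F_tx[:, 0], F_tx[:, 12]])
assert np.allclose(np.abs(V_rf[V_rf != 0])**2, 1 / 16)  # |[v_i]_n|^2 = 1/N_A
```

The analog RX combiner $\mathbf{U}_{k}^{(\mathrm{RF})}$ described next has the
same block-diagonal form and can be built with the same helper.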
The RX of node $k$ is composed of an analog combiner connecting the RX antenna
elements to the inputs of the RX RF chains, and a digital combiner that
processes the outputs of the RX RF chains in BB before signal decoding. In
particular, the $n$-th RX RF chain with $n=1,2,\ldots,M_{k}^{(\mathrm{RF})}$
is connected through phase shifters to $M_{k}^{({\rm A})}$ distinct RX
antennas; these phase shifters are denoted as ${\rm RX}(n,\ell)$
$\forall$$\ell=1,2,\ldots,M_{k}^{({\rm A})}$. It should hold that
$M_{k}=M_{k}^{(\mathrm{RF})}M_{k}^{({\rm A})}$ for the total number of RX
antennas at node $k$. We define the complex-valued $M_{k}\times
M_{k}^{(\mathrm{RF})}$ analog RX combiner $\mathbf{U}_{k}^{(\mathrm{RF})}$
having a similar block diagonal structure to (1). In particular,
$\mathbf{U}_{k}^{(\mathrm{RF})}$ contains $\mathbf{u}_{n}$’s with
$n=1,2,\ldots,M_{k}^{(\mathrm{RF})}$ in the diagonal, where each
$\mathbf{u}_{n}$ contains the constant magnitude values of the $M_{k}^{({\rm
A})}$ phase shifters (i.e., $|[\mathbf{u}_{n}]_{j}|^{2}=1/M_{k}^{({\rm A})}$
$\forall$$j=1,2,\ldots,M_{k}^{({\rm A})}$) connecting each $n$-th RX RF chain
with its antenna elements. We also assume that
$\mathbf{u}_{n}\in\mathbb{F}_{\rm RX}$ $\forall$$n$, i.e., all analog RX
combiners belong in a predefined beam codebook $\mathbb{F}_{\rm RX}$ having
${\rm card}(\mathbb{F}_{\rm RX})$ vectors. Finally,
$\mathbf{U}_{k}^{(\mathrm{BB})}\in\mbox{$\mathbb{C}$}^{M_{k}^{(\mathrm{RF})}\times
d_{m}}$ with $d_{m}\leq\min\\{M_{k}^{(\mathrm{RF})},N_{m}\\}$ represents the
digital RX combiner at node $k$.
#### II-B2 Multi-Tap Analog Cancellation
The analog canceller at node $k$ consists of $N$ taps with each tap connected
via a $N_{k}^{\mathrm{(RF)}}$-to-$1$ MUltipleXer (MUX) to all
$N_{k}^{\mathrm{(RF)}}$ outputs of the respective TX RF chains. A tap includes
a fixed delay, a variable phase shifter, and a variable attenuator [16, 10].
To route the cancellation signal to one of the adders located just before the
RX RF chains, the output of each tap is connected to a
$1$-to-$M_{k}^{\mathrm{(RF)}}$ DEMUltipleXer (DEMUX). There is a total of
$NM_{k}^{\mathrm{(RF)}}$ such adders and we use the notation “Adder ($i,j$)”
to label the adder that connects DEMUX $j$ to RX RF chain $i$, where
$i=1,2,\ldots,M_{k}^{\mathrm{(RF)}}$ and $j=1,2,\ldots,N$. The adders before
the RX RF chains can be implemented via power combiners or directional
couplers, while the analog RF MUXs/DEMUXs can be implemented with RF switches.
Clearly, the proposed analog canceller interconnects the outputs of some of
the available TX RF chains to the inputs of some of the RX RF chains, and in
contrast to [10], the size of each MUX/DEMUX depends on the number of TX/RX RF
chains and not on the number of TX/RX antennas. Similar to [10], we model in
BB the analog processing realized by the analog canceller as
$\mathbf{C}_{k}\triangleq\mathbf{L}_{3}\mathbf{L}_{2}\mathbf{L}_{1}\in\mbox{$\mathbb{C}$}^{M_{k}^{\mathrm{(RF)}}\times
N_{k}^{\mathrm{(RF)}}}$, where $\mathbf{L}_{1}\in\mbox{$\mathbb{R}$}^{N\times
N_{k}^{\mathrm{(RF)}}}$, $\mathbf{L}_{2}\in\mbox{$\mathbb{C}$}^{N\times N}$,
and $\mathbf{L}_{3}\in\mbox{$\mathbb{R}$}^{M_{k}^{\mathrm{(RF)}}\times N}$.
The elements $[\mathbf{L}_{1}]_{j,\ell}$ and $[\mathbf{L}_{3}]_{i,j}$ with
$j=1,2,\ldots,N$, $\ell=1,2,\ldots,N_{k}^{\mathrm{(RF)}}$, and
$i=1,2,\ldots,M_{k}^{\mathrm{(RF)}}$ take the binary values $0$ or $1$, and it
must hold that
$\sum_{\ell=1}^{N_{k}^{\mathrm{(RF)}}}[\mathbf{L}_{1}]_{j,\ell}=\sum_{i=1}^{M_{k}^{\mathrm{(RF)}}}[\mathbf{L}_{3}]_{i,j}=1\,\,\,\forall
j=1,2,\ldots,N.$ (2)
The $\mathbf{L}_{2}$ in $\mathbf{C}_{k}$ is a diagonal matrix whose complex
entries represent the attenuation and phase shift of the canceller taps; the
magnitude and phase of the element $[\mathbf{L}_{2}]_{i,i}$ with
$i=1,2,\ldots,N$ specify the attenuation and phase of the $i$-th tap. Recall
that the tap delays in each canceller tap are fixed, hence, we model the
effects of the $i$-th tap delay as a phase shift that is incorporated to the
phase of $[\mathbf{L}_{2}]_{i,i}$.
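The baseband model $\mathbf{C}_{k}=\mathbf{L}_{3}\mathbf{L}_{2}\mathbf{L}_{1}$
can be sketched as follows in NumPy; the random tap routing and tap values are
purely illustrative (the sizes match the simulation setup of Section IV):

```python
import numpy as np

rng = np.random.default_rng(0)
N_tx_rf, M_rx_rf, N_taps = 4, 2, 4   # illustrative sizes, as in Section IV

# L1 (N x N_TX_RF): MUX j taps exactly one TX RF chain output;
# L3 (M_RX_RF x N): DEMUX j feeds exactly one adder, so Eq. (2) holds.
L1 = np.zeros((N_taps, N_tx_rf))
L3 = np.zeros((M_rx_rf, N_taps))
for j in range(N_taps):
    L1[j, rng.integers(N_tx_rf)] = 1.0
    L3[rng.integers(M_rx_rf), j] = 1.0

# L2: diagonal tap attenuations/phases (fixed delays folded into the phases).
L2 = np.diag(rng.uniform(0.1, 1.0, N_taps)
             * np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, N_taps)))

C_k = L3 @ L2 @ L1   # baseband model of the canceller, M_RX_RF x N_TX_RF
assert np.allclose(L1.sum(axis=1), 1.0) and np.allclose(L3.sum(axis=0), 1.0)
```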
### II-C Received Signal Models
Using the previously described system configuration, the BB received signal
$\mathbf{y}_{q}\in\mbox{$\mathbb{C}$}^{M_{q}\times 1}$ at node $q$ in the DL
communication can be mathematically expressed as
$\mathbf{y}_{q}\triangleq\mathbf{H}_{q,k}\mathbf{V}_{k}^{(\mathrm{RF})}\mathbf{V}_{k}^{(\mathrm{BB})}\mathbf{s}_{k}+\mathbf{H}_{q,m}\mathbf{V}_{m}\mathbf{s}_{m}+\mathbf{n}_{q},$
(3)
where $\mathbf{H}_{q,k}\in\mbox{$\mathbb{C}$}^{M_{q}\times N_{k}}$ is the DL
channel gain matrix (i.e., between nodes $q$ and $k$),
$\mathbf{H}_{q,m}\in\mbox{$\mathbb{C}$}^{M_{q}\times N_{m}}$ denotes the
channel gain matrix for inter-node interference (i.e., between nodes $q$ and
$m$), and $\mathbf{n}_{q}\in\mbox{$\mathbb{C}$}^{M_{q}\times 1}$ represents
the Additive White Gaussian Noise (AWGN) at node $q$ with variance
$\sigma_{q}^{2}$. In the UL communication, the symbol vector
$\hat{\mathbf{s}}_{m}\in\mbox{$\mathbb{C}$}^{d_{m}\times 1}$ used for the
estimation of $\mathbf{s}_{m}$ at the FD HBF node $k$ is derived as
$\displaystyle\hat{\mathbf{s}}_{m}\triangleq$
$\displaystyle\left(\mathbf{U}_{k}^{(\mathrm{BB})}\right)^{\rm
H}\left(\left(\mathbf{U}_{k}^{(\mathrm{RF})}\right)^{\rm
H}\mathbf{H}_{k,k}\mathbf{V}_{k}^{(\mathrm{RF})}+\mathbf{C}_{k}\right)\mathbf{V}_{k}^{(\mathrm{BB})}\mathbf{s}_{k}$
$\displaystyle+\left(\mathbf{U}_{k}^{(\mathrm{BB})}\right)^{\rm
H}\left(\mathbf{U}_{k}^{(\mathrm{RF})}\right)^{\rm
H}\left(\mathbf{H}_{k,m}\mathbf{V}_{m}\mathbf{s}_{m}+\mathbf{n}_{k}\right),$
(4)
where $\mathbf{H}_{k,k}\in\mbox{$\mathbb{C}$}^{M_{k}\times N_{k}}$ denotes the
SI channel seen at the RX antennas of node $k$ due to its own DL transmission,
$\mathbf{H}_{k,m}\in\mbox{$\mathbb{C}$}^{M_{k}\times N_{m}}$ is the UL channel
gain matrix (i.e., between nodes $k$ and $m$), and
$\mathbf{n}_{k}\in\mbox{$\mathbb{C}$}^{M_{k}\times 1}$ denotes the received
AWGN at node $k$ with variance $\sigma_{k}^{2}$. The first term in (II-C)
describes the residual SI signal after analog cancellation and A/D TX/RX
beamforming, while its second term contains the A/D RX combined signal
transmitted from node $m$ plus AWGN. In contrast to [10], $\mathbf{C}_{k}$
needs to cancel the SI channel $(\mathbf{U}_{k}^{(\mathrm{RF})})^{\rm
H}\mathbf{H}_{k,k}\mathbf{V}_{k}^{(\mathrm{RF})}$, which is a matrix of
dimension $M_{k}^{\mathrm{(RF)}}\times N_{k}^{\mathrm{(RF)}}$ and not the
actual $M_{k}\times N_{k}$ SI channel $\mathbf{H}_{k,k}$.
## III Joint Design Problem Formulation
We focus on the FD HBF node $k$ in Fig. 1 and present a sum-rate optimization
framework for the joint design of $\mathbf{C}_{k}$,
$\mathbf{V}_{k}^{(\mathrm{RF})}$, $\mathbf{V}_{k}^{(\mathrm{BB})}$,
$\mathbf{U}_{k}^{(\mathrm{RF})}$, and $\mathbf{U}_{k}^{(\mathrm{BB})}$. Using
the notation
$\mathbf{V}_{k}\triangleq\mathbf{V}_{k}^{(\mathrm{RF})}\mathbf{V}_{k}^{(\mathrm{BB})}$
and assuming Gaussian signaling and capacity-achieving combining at node $q$,
the achievable DL rate that is a function of the A/D TX precoding matrices
$\mathbf{V}_{k}^{(\mathrm{RF})}$ and $\mathbf{V}_{k}^{(\mathrm{BB})}$ of node
$k$ as well as the digital TX precoder $\mathbf{V}_{m}$ of node $m$, is given
by
$\mathcal{R}_{\rm DL}=\log_{2}\left({\rm
det}\left(\mathbf{I}_{M_{q}}+\mathbf{H}_{q,k}\mathbf{V}_{k}\mathbf{V}_{k}^{\rm
H}\mathbf{H}_{q,k}^{\rm H}\mathbf{Q}_{q}^{-1}\right)\right),$ (5)
where $\mathbf{Q}_{q}\in\mathbb{C}^{M_{q}\times M_{q}}$ denotes the covariance
matrix of the Interference-plus-Noise (IpN) at node $q$ that is obtained as
$\mathbf{Q}_{q}\triangleq\mathbf{H}_{q,m}\mathbf{V}_{m}\mathbf{V}_{m}^{\rm
H}\mathbf{H}_{q,m}^{\rm H}+\sigma_{q}^{2}\mathbf{I}_{M_{q}}.$ (6)
We hereinafter assume that there is no inter-node interference between the HD
multi-antenna nodes $q$ and $m$ due to, for example, appropriate node
scheduling [7] for the FD operation at node $k$. The latter assumption
translates to setting the channel matrix between those involved nodes in (3)
as $\mathbf{H}_{q,m}=\mathbf{0}_{M_{q}\times N_{m}}$, which means that (6)
simplifies to $\mathbf{Q}_{q}=\sigma_{q}^{2}\mathbf{I}_{M_{q}}$.
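For reference, the DL rate in (5) with
$\mathbf{Q}_{q}=\sigma_{q}^{2}\mathbf{I}_{M_{q}}$ reduces to a standard
log-determinant and can be evaluated as in the short sketch below (the
function and variable names, sizes, and random channel are ours, for
illustration only):

```python
import numpy as np

def dl_rate(H_qk, V_k, sigma_q2):
    # Eq. (5) with Q_q = sigma_q^2 * I; V_k = V_RF @ V_BB is the overall precoder.
    M_q = H_qk.shape[0]
    HV = H_qk @ V_k
    _, logdet = np.linalg.slogdet(np.eye(M_q) + HV @ HV.conj().T / sigma_q2)
    return logdet / np.log(2)

# Illustrative usage with a random channel (M_q = 4, N_k = 64, d_k = 4).
rng = np.random.default_rng(1)
H = (rng.standard_normal((4, 64)) + 1j * rng.standard_normal((4, 64))) / np.sqrt(2)
V = rng.standard_normal((64, 4)) + 1j * rng.standard_normal((64, 4))
V *= np.sqrt(1.0 / np.trace(V @ V.conj().T).real)   # enforce (C1) with P_k = 1
print(dl_rate(H, V, sigma_q2=1e-3))
```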
For the computation of the achievable UL rate, we use the notation
$\mathbf{U}_{k}\triangleq\mathbf{U}_{k}^{(\mathrm{RF})}\mathbf{U}_{k}^{(\mathrm{BB})}$
to express this rate as a function of the A/D RX combiners
$\mathbf{U}_{k}^{(\mathrm{RF})}$ and $\mathbf{U}_{k}^{(\mathrm{BB})}$, the A/D
TX precoders $\mathbf{V}_{k}^{(\mathrm{RF})}$ and
$\mathbf{V}_{k}^{(\mathrm{BB})}$, and the analog cancellation matrix
$\mathbf{C}_{k}$ of node $k$ as well as of the digital TX precoder
$\mathbf{V}_{m}$ of node $m$. Using (II-C), the UL rate is given by
$\mathcal{R}_{\rm UL}=\log_{2}\left({\rm
det}\left(\mathbf{I}_{d_{m}}+\mathbf{U}_{k}^{\rm
H}\mathbf{H}_{k,m}\mathbf{V}_{m}\mathbf{V}_{m}^{\rm H}\mathbf{H}_{k,m}^{\rm
H}\mathbf{U}_{k}\mathbf{Q}_{k}^{-1}\right)\right),$ (7)
where $\mathbf{Q}_{k}\in\mathbb{C}^{d_{m}\times d_{m}}$ denotes the IpN
covariance matrix after A/D RX combining at node $k$, which can be expressed
as
$\begin{split}\mathbf{Q}_{k}\triangleq&\left(\mathbf{U}_{k}^{(\mathrm{BB})}\right)^{\rm
H}\tilde{\mathbf{H}}_{k,k}\mathbf{V}_{k}^{(\mathrm{BB})}\left(\mathbf{V}_{k}^{(\mathrm{BB})}\right)^{\rm
H}\tilde{\mathbf{H}}_{k,k}^{\rm H}\mathbf{U}_{k}^{(\mathrm{BB})}\\\
&+\sigma_{k}^{2}\left(\mathbf{U}_{k}^{(\mathrm{BB})}\right)^{\rm
H}\left(\mathbf{U}_{k}^{(\mathrm{RF})}\right)^{\rm
H}\mathbf{U}_{k}^{(\mathrm{RF})}\mathbf{U}_{k}^{(\mathrm{BB})}.\end{split}$
(8)
In the latter expression,
$\tilde{\mathbf{H}}_{k,k}\in\mbox{$\mathbb{C}$}^{M_{k}^{\mathrm{(RF)}}\times
N_{k}^{\mathrm{(RF)}}}$ denotes the effective SI channel after performing
analog TX/RX beamforming and analog cancellation, which is defined as
$\tilde{\mathbf{H}}_{k,k}\triangleq\left(\mathbf{U}_{k}^{(\mathrm{RF})}\right)^{\rm
H}\mathbf{H}_{k,k}\mathbf{V}_{k}^{(\mathrm{RF})}+\mathbf{C}_{k}.$ (9)
Using the expressions (5) and (7) for the achievable DL and UL rates,
respectively, the sum-rate optimization problem for the joint design of the
analog canceller and the A/D TX/RX beamformers is mathematically expressed as
$\begin{split}&\mathcal{OP}:\max_{\mathbf{C}_{k},\mathbf{V}_{k}^{(\mathrm{RF})},\mathbf{V}_{k}^{(\mathrm{BB})},\mathbf{U}_{k}^{(\mathrm{RF})},\mathbf{U}_{k}^{(\mathrm{BB})}}\mathcal{R}_{\rm
DL}+\mathcal{R}_{\rm UL}\\\ &\textrm{s.t.}\leavevmode\nobreak\
\leavevmode\nobreak\
\mathrm{tr}\\{\mathbf{V}_{k}^{(\mathrm{RF})}\mathbf{V}_{k}^{(\mathrm{BB})}\left(\mathbf{V}_{k}^{(\mathrm{BB})}\right)^{\rm
H}\left(\mathbf{V}_{k}^{(\mathrm{RF})}\right)^{\rm H}\\}\leq{\rm
P}_{k},\,\,({\rm C1})\\\
&\mathbf{C}_{k}=\mathbf{L}_{3}\mathbf{L}_{2}\mathbf{L}_{1}\,\,{\rm
with}\,\,\eqref{Eq:L_1_L_3}\,\,{\rm and}\,\,[\mathbf{L}_{2}]_{i,j}=0\,\,{\rm
for}\,\,i\neq j,\hskip 5.97527pt({\rm C2})\\\
&\left\|\left[\tilde{\mathbf{H}}_{k,k}\mathbf{V}_{k}^{(\mathrm{BB})}\right]_{(j,:)}\right\|^{2}\leq\rho_{\rm
A}\,\,\forall j=1,2,\ldots,M_{k}^{\mathrm{(RF)}},\hskip 4.97931pt({\rm C3})\\\
&\mathbf{u}_{j}\in\mathbb{F}_{\rm RX}\,\,\forall j\,\,{\rm
and}\,\,\mathbf{v}_{n}\in\mathbb{F}_{\rm TX}\,\,\forall
n=1,2,\ldots,N_{k}^{({\rm RF})},({\rm C4})\end{split}$
where constraint $({\rm C1})$ relates to the average TX power at node $k$ and
constraint $({\rm C2})$ refers to the hardware capabilities of the analog
canceller. Constraint $({\rm C3})$ imposes the threshold $\rho_{\rm
A}\in\mbox{$\mathbb{R}$}$ on the average power of the residual SI signal after
analog cancellation and analog TX/RX beamforming. Finally, constraint $({\rm
C4})$ refers to the predefined TX and RX beam codebooks. To tackle
$\mathcal{OP}$, which is a nonconvex problem with nonconvex constraints, we
adopt a decoupled approach similar to that of [10], which in this case requires
at most $\alpha_{\max}\triangleq N_{k}^{(\mathrm{RF})}-1$ iterations and yields
closed-form expressions for the design parameters. We first solve for
$\mathbf{C}_{k}$, $\mathbf{V}_{k}^{(\mathrm{RF})}$,
$\mathbf{V}_{k}^{(\mathrm{BB})}$, and $\mathbf{U}_{k}^{(\mathrm{RF})}$
maximizing the DL rate, and then find $\mathbf{U}_{k}^{(\mathrm{BB})}$
maximizing the UL rate. Specifically, we formulate the following optimization
subproblem for the design of $\mathbf{C}_{k}$,
$\mathbf{V}_{k}^{(\mathrm{RF})}$, $\mathbf{V}_{k}^{(\mathrm{BB})}$, and
$\mathbf{U}_{k}^{(\mathrm{RF})}$:
$\begin{split}\mathcal{OP}1:&\max_{\mathbf{C}_{k},\mathbf{V}_{k}^{(\mathrm{RF})},\mathbf{V}_{k}^{(\mathrm{BB})},\mathbf{U}_{k}^{(\mathrm{RF})}}\mathcal{R}_{\rm
DL}\leavevmode\nobreak\ \leavevmode\nobreak\ \textrm{s.t.}\leavevmode\nobreak\
\leavevmode\nobreak\ ({\rm C1}),\,({\rm C2}),\,\text{and}\,({\rm
C3}).\end{split}$
Algorithm 1 Digital TX Precoder Design
1:Input: ${\rm P}_{k}$, $\mathbf{V}_{k}^{(\mathrm{RF})}$ and
$\mathbf{U}_{k}^{(\mathrm{RF})}$ solving $\mathcal{OP}2$, $\mathbf{H}_{k,k}$,
and $\mathbf{H}_{q,k}$ as well as a realization of $\mathbf{C}_{k}$ for a
given $N$ satisfying constraint $({\rm C2})$.
2:Set
$\tilde{\mathbf{H}}_{k,k}=\left(\mathbf{U}_{k}^{(\mathrm{RF})}\right)^{\rm
H}\mathbf{H}_{k,k}\mathbf{V}_{k}^{(\mathrm{RF})}+\mathbf{C}_{k}$.
3:Obtain $\mathbf{D}_{k}$ with the $N_{k}^{\mathrm{(RF)}}$ right-singular
vectors of $\widetilde{\mathbf{H}}_{k,k}$ corresponding to the singular values
in descending order.
4:for $\alpha=\alpha_{\max},\alpha_{\max}-1,\ldots,2$ do
5: Set $\mathbf{F}_{k}=[\mathbf{D}_{k}]_{(:,N_{k}^{\mathrm{(RF)}}-\alpha+1:N_{k}^{\mathrm{(RF)}})}$.
6: Set $\mathbf{G}_{k}$ as the optimum precoding for the effective DL MIMO channel $\mathbf{H}_{q,k}\mathbf{V}_{k}^{(\mathrm{RF})}\mathbf{F}_{k}$ given ${\rm P}_{k}$.
7: if $\|[\widetilde{\mathbf{H}}_{k,k}\mathbf{F}_{k}\mathbf{G}_{k}]_{(i,:)}\|^{2}\leq\rho_{\rm A}$ $\forall i=1,\ldots,M_{k}^{\mathrm{(RF)}}$, then
8: Output $\mathbf{V}_{k}^{(\mathrm{BB})}=\mathbf{F}_{k}\mathbf{G}_{k}$ and stop the algorithm.
9: end if
10:end for
11:Set $\mathbf{F}_{k}=[\mathbf{D}_{k}]_{(:,N_{k}^{\mathrm{(RF)}})}$ and $\mathbf{G}_{k}={\rm P}_{k}^{1/2}$.
12:if $|[\widetilde{\mathbf{H}}_{k,k}\mathbf{F}_{k}\mathbf{G}_{k}]_{i}|^{2}\leq\rho_{\rm A}$ $\forall i=1,\ldots,M_{k}^{\mathrm{(RF)}}$, then
13: Output $\mathbf{V}_{k}^{(\mathrm{BB})}=\mathbf{F}_{k}\mathbf{G}_{k}$ and stop the algorithm.
14:else
15: Output that the $\mathbf{C}_{k}$ realization does not meet the residual SI constraint.
16:end if
For the solution of the latter problem we use an alternating optimization
approach. First, we find $\mathbf{V}_{k}^{(\mathrm{RF})}$ and
$\mathbf{U}_{k}^{(\mathrm{RF})}$ satisfying $({\rm C4})$ that boost the DL
rate while minimizing the SI signal before any other form of cancellation.
In particular, we perform the following exhaustive search:
$\begin{split}\mathcal{OP}2:&\max_{\mathbf{V}_{k}^{(\mathrm{RF})},\mathbf{U}_{k}^{(\mathrm{RF})}}\frac{\left\|\mathbf{H}_{q,k}\mathbf{V}_{k}^{(\mathrm{RF})}\right\|_{\rm
F}}{\left\|\left(\mathbf{U}_{k}^{(\mathrm{RF})}\right)^{\rm
H}\mathbf{H}_{k,k}\mathbf{V}_{k}^{(\mathrm{RF})}\right\|_{\rm
F}}\leavevmode\nobreak\ \leavevmode\nobreak\ \textrm{s.t.}\leavevmode\nobreak\
\leavevmode\nobreak\ ({\rm C4}).\end{split}$
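A direct, if brute-force, realization of $\mathcal{OP}2$ is sketched below,
reusing the illustrative `analog_precoder` helper from the sketch in Section
II-B1; to keep the example short it assumes the same beam index on every TX
chain (and likewise on RX), whereas the search in $\mathcal{OP}2$ ranges over
all per-chain combinations in $\mathbb{F}_{\rm TX}$ and $\mathbb{F}_{\rm RX}$:

```python
import numpy as np

def solve_op2(H_qk, H_kk, F_tx, F_rx, n_tx_rf, m_rx_rf):
    # Exhaustive search of OP2; for brevity the same beam index is reused on
    # every TX chain (and on every RX chain), a simplification of the search.
    best, best_val = None, -np.inf
    for bt in range(F_tx.shape[1]):
        V_rf = analog_precoder([F_tx[:, bt]] * n_tx_rf)  # helper from Sec. II-B1 sketch
        num = np.linalg.norm(H_qk @ V_rf)                # Frobenius norm
        for br in range(F_rx.shape[1]):
            U_rf = analog_precoder([F_rx[:, br]] * m_rx_rf)
            val = num / max(np.linalg.norm(U_rf.conj().T @ H_kk @ V_rf), 1e-12)
            if val > best_val:
                best_val, best = val, (V_rf, U_rf)
    return best
```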
In the sequel, given the solution of $\mathcal{OP}2$, the available number of
analog canceller taps $N$, and a realization of $\mathbf{C}_{k}$ satisfying
$({\rm C2})$, we seek
$\mathbf{V}_{k}^{(\mathrm{BB})}$ maximizing the DL rate while meeting $({\rm
C1})$ and $({\rm C3})$. The latter procedure is repeated for all allowable
realizations of $\mathbf{C}_{k}$ for the given $N$ in order to find the best
$\mathbf{V}_{k}^{(\mathrm{BB})}$ solving $\mathcal{OP}1$; this procedure is
summarized in Algorithm 1. The values for $\mathbf{C}_{k}$,
$\mathbf{V}_{k}^{(\mathrm{RF})}$, $\mathbf{V}_{k}^{(\mathrm{BB})}$, and
$\mathbf{U}_{k}^{(\mathrm{RF})}$ solving $\mathcal{OP}1$ are finally
substituted into the achievable UL rate expression $\mathcal{R}_{\rm UL}$ in
(7). The $\mathbf{U}_{k}^{(\mathrm{BB})}$ maximizing this point-to-point MIMO
rate is obtained in closed form using [17, Sec. 4.2].
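A compact NumPy sketch of the core of Algorithm 1 is given below; the optimum
precoding step is approximated here by equal-power eigen-beamforming
(water-filling could be substituted), and $\widetilde{\mathbf{H}}_{k,k}$ and
$\mathbf{H}_{q,k}\mathbf{V}_{k}^{(\mathrm{RF})}$ are assumed precomputed:

```python
import numpy as np

def algorithm1(H_tilde, H_eff, P_k, rho_A):
    # H_tilde: effective SI channel of Eq. (9); H_eff = H_qk @ V_RF.
    n_rf = H_tilde.shape[1]
    _, _, Vh = np.linalg.svd(H_tilde)        # rows ordered by descending singular values
    D = Vh.conj().T
    for alpha in range(n_rf - 1, 1, -1):     # alpha = alpha_max, ..., 2
        F = D[:, n_rf - alpha:]              # directions of weakest SI coupling
        _, _, Wh = np.linalg.svd(H_eff @ F, full_matrices=False)
        G = Wh.conj().T * np.sqrt(P_k / alpha)   # equal-power eigen-beamforming
        V_bb = F @ G
        if np.all(np.sum(np.abs(H_tilde @ V_bb) ** 2, axis=1) <= rho_A):
            return V_bb                      # constraint (C3) met: stop
    V_bb = D[:, [n_rf - 1]] * np.sqrt(P_k)   # fallback: single weakest direction
    if np.all(np.sum(np.abs(H_tilde @ V_bb) ** 2, axis=1) <= rho_A):
        return V_bb
    return None                              # this C_k realization fails (C3)
```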
## IV Simulation Results and Discussion
In this section, we investigate the performance of the considered $3$-user
bidirectional communication system for the case where
$N_{k}^{(\mathrm{RF})}=M_{q}=4$, $M_{k}^{(\mathrm{RF})}=2$, $N_{m}=1$, and
$N_{k}^{(\mathrm{A})}=M_{k}^{(\mathrm{A})}=\\{16,128\\}$. We have assumed that
the antennas at the FD HBF node $k$ are arranged in Uniform Linear Arrays
(ULAs) with $\lambda/2$ space separation between adjacent elements, where
$\lambda$ is the wavelength. The distance and angle between the TX and RX ULAs
at node $k$ were set respectively to $d=2\lambda$ and $\omega=\pi/6$ [12]. The
DL and UL channels were simulated as millimeter wave clustered channels, as
described in [13, eq. (2)] with pathloss $110$dB. In addition, the SI channel
was modeled as Rician [13, eq. (7)] with $K$-factor $35$dB and pathloss
$40$dB. The RX noise floors at both nodes $k$ and $q$ were set to $-110$dBm,
resulting in an effective dynamic range of $62$dB for $14$-bit ADCs with a
$10$dB peak-to-average power ratio. Hence, to avoid saturation, the residual
SI power after analog cancellation at the input of each RX RF chain needs to
be below $-47$dBm. Non-ideal $N$-tap analog cancellation has been considered
as in [10], and for the analog TX/RX beamformers, we have used beam codebooks
based on the discrete Fourier transform matrix.
The achievable FD rate as a function of the transmit powers of nodes $k$ and
$m$ is illustrated in Fig. 2 for $N=4$ taps for the proposed analog canceller,
which translates to $50\%$ reduction in the number of taps compared to a case
that connects all outputs of the TX RF chains to every input to the RX RF
chains. For the rate results, we averaged over $1000$ independent channel
realizations and calculated the FD rate with the proposed Algorithm 1, as well
as the achievable HD rate. It is shown in the figure that all rates increase
with increasing transmit power, and no rate saturation is exhibited for the
proposed FD HBF technique. The latter trend witnesses the effective
adaptability of Algorithm 1 to the SI conditions for both considered pairs of
$N_{k}^{(\mathrm{A})}$ and $M_{k}^{(\mathrm{A})}$. As an indicative example,
it is shown that for $40$dBm transmit powers, the proposed approach results in
a $52$bps/Hz achievable rate, which is around $1.7$ times more than the
achievable HD rate.
Figure 2: Average FD and HD rates vs the transmit powers in dBm for
$N_{k}^{(\mathrm{RF})}=M_{q}=N=4$, $M_{k}^{(\mathrm{RF})}=2$, $N_{m}=1$, and
different values for $N_{k}^{(\mathrm{A})}$ and $M_{k}^{(\mathrm{A})}$ at node
$k$.
## References
* [1] A. Sabharwal _et al._ , “In-band full-duplex wireless: Challenges and opportunities,” _IEEE J. Sel. Areas Commun._ , vol. 32, no. 9, pp. 1637–1652, Sep. 2014.
* [2] T. Riihonen _et al._ , “Mitigation of loopback self-interference in full-duplex MIMO relays,” _IEEE Trans. Signal Proces._ , vol. 59, no. 12, pp. 5983–5993, Dec. 2011.
* [3] D. Nguyen _et al._ , “Precoding for full duplex multiuser MIMO systems: Spectral and energy efficiency maximization,” _IEEE Trans. Signal Process._ , vol. 61, no. 13, pp. 4038–4050, Aug. 2013.
* [4] D. Bharadia and S. Katti, “Full duplex MIMO radios,” in _Proc. USENIX NSDI_ , Seattle, WA, 2-4 Apr. 2014, pp. 359–372.
* [5] E. Everett _et al._ , “SoftNull: Many-antenna full-duplex wireless via digital beamforming,” _IEEE Trans. Wireless Commun._ , vol. 12, no. 15, pp. 8077–8092, Dec. 2016.
* [6] H. H. M. Tam _et al._ , “Successive convex quadratic programming for quality-of-service management in full-duplex MU-MIMO multicell networks,” _IEEE Trans. Commun._ , vol. 64, no. 6, pp. 2340–2353, Jun. 2016.
* [7] G. C. Alexandropoulos _et al._ , “User scheduling and optimal power allocation for full-duplex cellular networks,” in _Proc. IEEE SPAWC_ , Edinburgh, UK, 3-6 Jul. 2016, pp. 1–6.
* [8] M. A. Islam _et al._ , “A unified beamforming and A/D self-interference cancellation design for full duplex MIMO radios,” in _Proc. IEEE PIMRC_ , Istanbul, Turkey, 8-11 Sep. 2019, pp. 1–6.
* [9] D. Korpi _et al._ , “Widely linear digital self-interference cancellation in direct-conversion full-duplex transceiver,” _IEEE J. Sel. Areas Commun._ , vol. 32, no. 9, pp. 1674–1687, Sep. 2014.
* [10] G. C. Alexandropoulos and M. Duarte, “Joint design of multi-tap analog cancellation and digital beamforming for reduced complexity full duplex MIMO systems,” in _Proc. IEEE ICC_ , Paris, France, May 2017, pp. 1–7.
  * [11] A. F. Molisch _et al._ , “Hybrid beamforming for massive MIMO: A survey,” _IEEE Commun. Mag._ , vol. 55, no. 9, pp. 134–141, Sep. 2017.
* [12] Z. Xiao _et al._ , “Full-duplex millimeter-wave communication,” _IEEE Wireless Commun._ , vol. 24, no. 6, pp. 136–143, Dec. 2017.
* [13] K. Satyanarayana _et al._ , “Hybrid beamforming design for full-duplex millimeter wave communication,” _IEEE Trans. Veh. Technol._ , vol. 68, no. 2, pp. 1394–1404, Dec. 2018.
* [14] Y. Cai _et al._ , “Robust joint hybrid transceiver design for millimeter wave full-duplex mimo relay systems,” _IEEE Trans. Wireless Commun._ , vol. 18, no. 2, pp. 1199–1215, Jan. 2019.
* [15] I. P. Roberts and S. Vishwanath, “Beamforming cancellation design for millimeter-wave full-duplex,” _[online] https://arxiv.org/pdf/1908.06505.pdf_ , 2019.
* [16] K. E. Kolodziej _et al._ , “Multitap RF canceller for in-band full-duplex wireless communications,” _IEEE Trans. Wireless Commun._ , vol. 15, no. 6, pp. 4321–4334, Jun. 2016.
* [17] G. C. Alexandropoulos and C. B. Papadias, “A reconfigurable iterative algorithm for the $K$-user MIMO interference channel,” _Signal Process. (Elsevier)_ , vol. 93, no. 12, pp. 3353–3362, Dec. 2013.
|
2024-09-04T02:54:54.840394 | 2020-02-27T00:25:10 | 2002.11855 | {
"authors": "Cody D. Schimming and Jorge Vi\\~nals",
"full_text_license": null,
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"provenance": "arxiv-papers-0000.json.gz:25901",
"submitter": "Cody Schimming",
"url": "https://arxiv.org/abs/2002.11855"
} | arxiv-papers | # Computational molecular field theory for nematic liquid crystals
Cody D. Schimming<EMAIL_ADDRESS>School of Physics and Astronomy,
University of Minnesota, Minneapolis, Minnesota 55455, USA Jorge Viñals
School of Physics and Astronomy, University of Minnesota, Minneapolis,
Minnesota 55455, USA
###### Abstract
Nematic liquid crystals exhibit configurations in which the underlying
ordering changes markedly on macroscopic length scales. Such structures
include topological defects in the nematic phase and tactoids within nematic-
isotropic coexistence. We discuss a computational study of inhomogeneous
configurations that is based on a field theory extension of the Maier-Saupe
molecular model of a uniaxial, nematic liquid crystal. A tensor order
parameter is defined as the second moment of an orientational probability
distribution, leading to a free energy that is not convex within the
isotropic-nematic coexistence region, and that goes to infinity if the
eigenvalues of the order parameter become non-physical. Computations of the
spatial profile of the order parameter are presented for an isotropic-nematic
interface in one dimension, a tactoid in two dimensions, and a nematic
disclination in two dimensions. We compare our results to those given by the
Landau de-Gennes free energy for the same configurations and discuss the
advantages of such a model over the latter.
## I Introduction
Liquid crystals represent an interesting opportunity to study a unique
interplay between topology, anisotropy, and elasticity in materials. The
entropy driven local ordering of rod-like molecules accounts for anisotropic
optical and transport properties even in homogeneous nematics. Furthermore,
external fields or topological defects can distort the local ordering of the
molecules giving rise to several elastic modes de Gennes (1975); Selinger
(2018). The ability to quantitatively model these complex features of liquid
crystals is imperative to address recent applications, including
electrokinetics of colloidal particles or biological materials Lazo _et al._
(2014); Peng _et al._ (2015, 2018), surface and texture generation and
actuation in nematic surfaces Mostajeran (2015); Babakhanova _et al._ (2018),
systems of living nematics Genkin _et al._ (2017), and stabilization of
liquid shells Hokmabad _et al._ (2019).
Liquid crystals generally belong to one of two main classes: Thermotropics are
short molecules that undergo ordering through changes in temperature, while
lyotropics are more complex molecules or assemblies of molecules in solvent
that order through changes in concentration. Thermotropics have been
extensively studied, both theoretically and experimentally, due to their
applications in displays de Gennes (1975); Yeh and Gu (2009). However, because
of their small characteristic length scale, the fine structure of defects and
two phase domains (commonly referred to as tactoids) are generally beyond the
resolution of standard optical techniques. On the other hand experimental
studies of defect core structures and tactoids have been recently undertaken
in so called lyotropic chromonic liquid crystals. These materials are composed
of disc-like molecules that stack to form rod-like structures Collings _et
al._ (2010, 2015). The characteristic length scale that determines the size of
defects and tactoid interfacial thickness in chromonics are thousands of times
larger than those in thermotropics, and hence are readily observable with
conventional optical techniques. Such experiments have revealed anisotropic
geometries of the order parameter near the core of defects, and “cusp-like”
features on the interface of tactoids Kim _et al._ (2013); Zhou _et al._
(2017).
To mathematically model a liquid crystal in its nematic phase, a unit vector
$\mathbf{n}$, the director, is typically defined to characterize the local
orientation of the molecules. Because the molecules are apolar, any model
involving $\mathbf{n}$ must be symmetric with respect to
$\mathbf{n}\to-\mathbf{n}$. Distorted nematic configurations are described by
three independent elastic modes: splay, twist, and bend. The energy cost of
each mode is associated with three elastic constants $K_{1}$, $K_{2}$, and
$K_{3}$ in the Oseen-Frank free energy Selinger (2018); Frank (1958). Models
and computations often assume that these constants are equal, though it has
been shown for chromonics that the values of all three constants are widely
different for the relevant range of temperatures and molecular concentrations
Zhou _et al._ (2014). Additionally, topological defects and tactoids lead to
large distortions of the underlying order. To model defected configurations
using the Oseen-Frank free energy either a short distance cutoff is
introduced, and the defect core treated separately, or a new variable
representing the degree of order of the molecules is added to the free energy
Leslie (1966); Ericksen (1991). This new variable also has the effect of
regularizing singularities at the core of defects. The method has recently
allowed the study of tactoids within the coexistence region Zhang _et al._
(2018).
Resolving the degree of orientational order and the orientation poses several
challenges computationally, however. The director is undefined both at the
core of defects and in the isotropic phase, and half-integer disclinations
(the stable line defects in liquid crystals) cannot be adequately described
computationally with a polar vector. Therefore, the model that is widely used
to describe either disclinations or tactoids is the phenomenological Landau-de
Gennes (LdG) free energy Meiboom _et al._ (1983); Golovaty _et al._ (2019);
Popa-Nita _et al._ (1997). In the LdG framework, the order parameter is
defined to be a traceless and symmetric tensor, $\mathbf{Q}$, typically
proportional to a macroscopic quantity, e.g. the magnetic susceptibility
Gramsbergen _et al._ (1986); Lubensky (1970). The free energy is then assumed
to be an analytic function in powers of $\mathbf{Q}$. To model spatial
inhomogeneity, an expansion in gradients of $\mathbf{Q}$ is typically added to
the free energy. Such an expansion in gradients can be mapped to the elastic
modes in the director $\mathbf{n}$ in the Oseen-Frank elastic energy Selinger
(2018).
The validity of the LdG free energy in regions of large variation of the order
is not well understood, and it has been shown that the simplest LdG elastic
expansions that capture differences in the Oseen-Frank constants result in
unbounded free energies Longa _et al._ (1987); Ball and Majumdar (2010).
Therefore, when working in the LdG framework, one must introduce more
computationally complex assumptions to bound the free energy. In this work, we
present an alternative field theoretic model of a nematic liquid crystal that
is based on a microscopic description, and that allows for anisotropic elastic
energy functionals that can capture the elasticity observed in chromonics. The
model presented here is a computational implementation of the model introduced
by Ball and Majumdar Ball and Majumdar (2010), which itself is a continuum
extension of the well known Maier-Saupe model for the nematic-isotropic phase
transition Maier and Saupe (1959). The Maier-Saupe model is a mean field
molecular theory in which the orientation of the molecules of the liquid
crystal is described by a probability distribution function, so that each
molecule interacts only with the average of its neighbors. Below, we define
$\mathbf{Q}$ microscopically, based on a probability distribution that is
allowed to vary spatially (as in the hypothesis of local equilibrium in
nonequilibrium thermodynamics). Our ultimate goal is to develop a
computationally viable implementation of the model for fully anisotropic
systems. We present below the results of several proof of concept computations
on various prototypical liquid crystal configurations, albeit in the one
elastic constant approximation. All our results are compared with those from
the LdG free energy for analogous configurations.
In Section II we briefly summarize the model as put forth in Ref. Ball and
Majumdar (2010) with minor adjustments to notation and conceptual
understanding. In Section III we present the computational implementation of
the model and derive the equations that are solved numerically. We also
briefly discuss the conventions used to compare to the LdG free energy. In
Section IV we compare the free energies of the model presented here with that
given by LdG and show that they are both non-convex. We then present
computational results from the model for a one dimensional nematic-isotropic
interface, a two-dimensional tactoid, and a two-dimensional disclination. All
of these are compared to results given by LdG. Finally, in Section V we
summarize and discuss the computational model and results, and discuss future
potential for the model.
## II Model
Following Ref. Ball and Majumdar (2010), we consider a tensor order parameter
defined over a small volume at $\mathbf{r}$
$\mathbf{Q}(\mathbf{r})=\int_{S^{2}}\big{(}\bm{\xi}\otimes\bm{\xi}-\frac{1}{3}\mathbf{I}\big{)}p(\bm{\xi};\mathbf{r})\,d\bm{\xi}$
(1)
where $\bm{\xi}$ is a unit vector in $S^{2}$, $\mathbf{I}$ is the identity
tensor, and $p(\bm{\xi};\mathbf{r})$ is the canonical probability distribution
of molecular orientation in local equilibrium at some temperature $T$ at
$\mathbf{r}$. Due to the symmetry of the molecules, $p(\bm{\xi};\mathbf{r})$
must have a vanishing first moment; hence, $\mathbf{Q}$ is defined as the
second moment of the orientational probability distribution. With this
definition, the order parameter is symmetric, traceless, and, most
importantly, has eigenvalues that are constrained to lie in the range
$-1/3\leq q\leq 2/3$. The situation where $q=-1/3,\,2/3$ represents perfect
ordering of the molecules (i.e. the variance of the distribution goes to
zero), and is therefore interpreted as unphysical. We note that Eq. (1) can be
generalized to biaxial molecules, that is, molecules that are microscopically
plate-like, by appropriately changing the domain of the probability
distribution to three Euler angles, and considering the second moment of the
extended probability distribution. Such a description may be useful in
studying similar defects and domains for biaxial molecules, as in Ref.
Chiccoli _et al._ (2019).
A mean field free energy functional of $\mathbf{Q}(\mathbf{r})$ is defined by
$F[\mathbf{Q}(\mathbf{r})]=H[\mathbf{Q}(\mathbf{r})]-T\Delta S$ (2)
where $H$ is the energy of a configuration, and $\Delta S$ its entropy
relative to the uniform distribution. The energy is chosen to be
$H[\mathbf{Q}(\mathbf{r})]=\int_{\Omega}\Big{(}-\alpha\operatorname{Tr}[\mathbf{Q}^{2}]+f_{e}(\mathbf{Q},\nabla\mathbf{Q})\Big{)}\,d\mathbf{r}$
(3)
where $\alpha$ is an interaction parameter, and $f_{e}$ is an elastic energy.
The term $-\alpha\operatorname{Tr}[\mathbf{Q}^{2}]$ originates from the Maier-
Saupe model, and incorporates an effective contact interaction that promotes
alignment Maier and Saupe (1959); Selinger (2016). In the spatially
homogeneous case $f_{e}=0$. The entropy is the usual Gibbs entropy
$\Delta
S=-nk_{B}\int_{\Omega}\bigg{(}\int_{S^{2}}p(\bm{\xi};\mathbf{r})\ln\Big{(}4\pi
p(\bm{\xi};\mathbf{r})\Big{)}\,d\bm{\xi}\bigg{)}\,d\mathbf{r}$ (4)
where $n$ is the number density of molecules. It should be noted that the
outer integral is on the physical domain of the system, and the inner integral
is on the unit sphere, the domain of the probability distribution. This model,
with these definitions, is equivalent to the Maier-Saupe model in the
spatially homogeneous case Maier and Saupe (1959). We extend the Maier-Saupe
treatment to spatially nonuniform configurations by minimization of Eq. (2)
subject to boundary conditions that lead to topological defects in the domain,
or two-phase configurations at coexistence. We then find configurations
$\mathbf{Q}(\mathbf{r})$ that are not uniform, and that minimize Eq. (2)
subject to the constraint (1).
Figure 1: Examples of the probability distribution, $p(\bm{\xi})$ of Eq. (5),
on the sphere spanned by $\bm{\xi}$ for (a) a uniaxial configuration and (b) a
biaxial configuration. Note that the probability distribution involves a
uniaxial molecule, but a biaxial order parameter can occur for a probability
distribution with biaxial second moment. Only northern hemispheres are
displayed since the probability distribution is symmetric about the equator
due to the symmetry of the molecules. For these plots, (a)
$\bm{\Lambda}=4\operatorname{diag}(-1,\,-1,\,0.5)$ and (b)
$\bm{\Lambda}=10\operatorname{diag}(-0.25,\,-1,\,0.25)$.
The entropy, Eq. (4), can be maximized, subject to the constraint (1), by
introducing a tensor of Lagrange multipliers, $\bm{\Lambda}(\mathbf{r})$, for
each component of the constraint Ball and Majumdar (2010); Katriel _et al._
(1986). The resulting probability that maximizes the entropy is given by
$\displaystyle p(\bm{\xi};\mathbf{r})$
$\displaystyle=\frac{\exp[\bm{\xi}^{T}\bm{\Lambda}(\mathbf{r})\bm{\xi}]}{Z[\bm{\Lambda}(\mathbf{r})]}$
(5) $\displaystyle Z[\bm{\Lambda}(\mathbf{r})]$
$\displaystyle=\int_{S^{2}}\exp[\bm{\xi}^{T}\bm{\Lambda}(\mathbf{r})\bm{\xi}]\,d\bm{\xi}$
(6)
where $Z$ can be interpreted as a single particle partition function. Fig. 1
shows graphical examples of the probability distribution on the unit sphere.
We mention that the single particle partition function can only be computed
numerically, and hence the minimization procedure described next has to be
carried out numerically in its entirety.
The minimization of $F$ in Eq. (2) with $p(\bm{\xi};\mathbf{r})$ given by Eqs.
(5) and (6) is therefore reformulated in terms of two tensor fields on the
domain, $\mathbf{Q}(\mathbf{r})$ and $\bm{\Lambda}(\mathbf{r})$ (from here on
the dependence on $\mathbf{r}$ will be dropped for brevity). $\bm{\Lambda}$
acts as an effective interaction field which mediates interactions among
molecules. Substituting Eq. (5) into the constraint, Eq. (1), leads to a
relation between $\mathbf{Q}$ and $\bm{\Lambda}$:
$\mathbf{Q}+\frac{1}{3}\mathbf{I}=\frac{\partial\ln
Z[\bm{\Lambda}]}{\partial\bm{\Lambda}}.$ (7)
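Since $Z[\bm{\Lambda}]$ has no closed form, both the partition function and
the constraint (7) must be evaluated by quadrature on $S^{2}$. The following
minimal NumPy sketch (the grid resolution and the test $\bm{\Lambda}$ are
illustrative choices of ours) computes $Z$ and the resulting $\mathbf{Q}$:

```python
import numpy as np

def partition_and_Q(Lam, n_theta=64, n_phi=128):
    # Evaluate Z[Lambda] of Eq. (6) and Q of Eq. (1) on a theta-phi grid of S^2.
    th = (np.arange(n_theta) + 0.5) * np.pi / n_theta
    ph = (np.arange(n_phi) + 0.5) * 2.0 * np.pi / n_phi
    T, P = np.meshgrid(th, ph, indexing="ij")
    xi = np.stack([np.sin(T) * np.cos(P), np.sin(T) * np.sin(P), np.cos(T)], -1)
    w = np.sin(T) * (np.pi / n_theta) * (2.0 * np.pi / n_phi)  # area weights
    e = np.exp(np.einsum("...i,ij,...j->...", xi, Lam, xi))
    Z = np.sum(w * e)
    M2 = np.einsum("tp,tp,tpi,tpj->ij", w, e, xi, xi) / Z      # <xi xi^T>
    return Z, M2 - np.eye(3) / 3.0

Z, Q = partition_and_Q(np.diag([-1.0, -1.0, 2.0]))  # uniaxial field along z
print(np.trace(Q))                                  # ~0: Q is traceless
```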
It has been shown that if the eigenvalues of $\mathbf{Q}$ approach the
endpoints of their physically admissible values, both $\bm{\Lambda}$ and the
free energy diverge. This feature is not present in the LdG theory, which can
lead to nonphysical configurations for certain choices of the elastic energy,
$f_{e}$, in Eq. (3) Ball and Majumdar (2010); Bauman and Phillips (1986).
The fields $\mathbf{Q}$ and $\bm{\Lambda}$ that minimize Eq. (2) and satisfy
Eq. (7) are the equilibrium configuration for a given set of boundary
conditions. In the next section we describe a computational implementation of
the model presented here.
## III Computational Method
### III.1 Molecular Theory
To find the configuration $\mathbf{Q}$ that minimizes the free energy of the
molecular field theory we numerically solve the differential equations $\delta
F/\delta\mathbf{Q}=0$. This, in principle, is a system of nine equations.
However, since $\mathbf{Q}$ is traceless and symmetric, there are only five
degrees of freedom. The eigenvalues of $\mathbf{Q}$ describe two degrees of
freedom since $\mathbf{Q}$ is traceless. The eigenvectors of $\mathbf{Q}$ form
an orthonormal frame (since $\mathbf{Q}$ is symmetric) which accounts for the
other three degrees of freedom: the first vector has two degrees of freedom
since it is a unit vector, the second vector has one degree of freedom since
it is a unit vector and must be orthogonal to the first vector, and the third
vector is determined from the other two vectors since it must be orthogonal to
both. The eigenvalues are related to the amount of order in the system, while
the eigenvector which corresponds to the largest eigenvalue is the director,
$\mathbf{n}$. This is illustrated in Fig. 1 which shows the probability
distribution for molecules with a director along the z-axis. Fig. 1a shows a
uniaxial configuration in which two of the eigenvalues are degenerate, leading
to arbitrary eigenvectors in the xy-plane. It is possible for the probability
distribution to be of the form in Fig. 1b in which the director is still along
the z-axis, but all three eigenvalues are distinct. In this case, we call the
probability distribution biaxial since it leads to a second moment,
$\mathbf{Q}$, that is biaxial. It is known that biaxiality of the order
parameter is important near defects and at interfaces in systems of uniaxial
molecules as modeled by the LdG free energy Pismen (1999); Popa-Nita _et al._
(1997); Mottram and Newton (2014). Despite the uniaxial character of the
molecules, Eq. (1), the molecular theory detailed here can accommodate biaxial
order.
Local biaxial order will be parametrized as
$\mathbf{Q}=S(\mathbf{n}\otimes\mathbf{n}-\frac{1}{3}\mathbf{I})+P(\mathbf{m}\otimes\mathbf{m}-\bm{\ell}\otimes\bm{\ell})$
(8)
where $\\{\mathbf{n},\mathbf{m},\bm{\ell}\\}$ are an orthonormal triad of
vectors. This representation explicitly includes the five degrees of freedom
of $\mathbf{Q}$, namely, three for the orthonormal set of vectors and two for
the amplitudes $S$ and $P$. In addition to $\mathbf{n}$ being the director,
$S$ represents the amount of uniaxial order, and $P$ the amount of biaxial
order. That is, $S=(3/2)\,q_{1}$ and $|P|=(1/2)\,(q_{2}-q_{3})$ where $q_{i}$
are the eigenvalues of $\mathbf{Q}$, and $q_{3}\leq q_{2}\leq q_{1}$.
Because we are primarily concerned with experiments in thin nematic films, we
further reduce the degrees of freedom of $\mathbf{Q}$ by only considering
spatial variation in at most two dimensions. If we write
$\mathbf{n}=(\cos\phi,\,\sin\phi,\,0)$,
$\mathbf{m}=(-\sin\phi,\,\cos\phi,\,0)$, and $\bm{\ell}=(0,\,0,\,1)$, where
$\phi$ is the angle the director makes with the x-axis, we need only one
degree of freedom to describe the eigenframe of $\mathbf{Q}$. We can then
further simplify the computations by transforming to the auxiliary variables
Sen and Sullivan (1987)
$\displaystyle\eta$ $\displaystyle=S-\frac{3}{2}(S-P)\sin^{2}\phi$
$\displaystyle\mu$ $\displaystyle=P+\frac{1}{2}(S-P)\sin^{2}\phi$ (9)
$\displaystyle\nu$ $\displaystyle=\frac{1}{2}(S-P)\sin 2\phi.$
This transformation is equivalent to expressing $\mathbf{Q}$ in terms of a new
basis for traceless, symmetric matrices. While we do this for ease of
computation, we can transform back to the original parametrization after
calculating the eigenvalues and eigenvectors of $\mathbf{Q}$. Although all of
our calculations are conducted with the set $\\{\eta,\mu,\nu\\}$, we will
present our results in terms of the more physically intuitive $S$, $P$, and
$\phi$.
The tensor order parameter in this representation is
$\mathbf{Q}=\begin{bmatrix}\frac{2}{3}\eta&\nu&0\\\
\nu&-\frac{1}{3}\eta+\mu&0\\\ 0&0&-\frac{1}{3}\eta-\mu\end{bmatrix}.$ (10)
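As a consistency check on this change of variables, one can verify numerically that the $\mathbf{Q}$ of Eq. (8), with the in-plane frame given above, coincides with the $\mathbf{Q}$ of Eq. (10) assembled from the auxiliary variables of Eq. (9). The following minimal Python sketch (ours; the paper's computations are in MATLAB) does exactly that:
```python
import numpy as np

def Q_from_frame(S, P, phi):
    """Eq. (8) with n, m in the xy-plane and l along z."""
    n = np.array([np.cos(phi), np.sin(phi), 0.0])
    m = np.array([-np.sin(phi), np.cos(phi), 0.0])
    l = np.array([0.0, 0.0, 1.0])
    return (S * (np.outer(n, n) - np.eye(3) / 3.0)
            + P * (np.outer(m, m) - np.outer(l, l)))

def Q_from_auxiliary(S, P, phi):
    """Eqs. (9) and (10)."""
    eta = S - 1.5 * (S - P) * np.sin(phi) ** 2
    mu = P + 0.5 * (S - P) * np.sin(phi) ** 2
    nu = 0.5 * (S - P) * np.sin(2.0 * phi)
    return np.array([[2.0 * eta / 3.0, nu, 0.0],
                     [nu, -eta / 3.0 + mu, 0.0],
                     [0.0, 0.0, -eta / 3.0 - mu]])

# the two parametrizations agree (and Q is traceless by construction)
assert np.allclose(Q_from_frame(0.6, 0.1, 0.7), Q_from_auxiliary(0.6, 0.1, 0.7))
```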
We can now substitute Eq. (10) into Eq. (1) to write the constraint in terms
of $\eta$, $\mu$, and $\nu$. Following the procedure of Section II, we
introduce three Lagrange multipliers $\Lambda_{1}$, $\Lambda_{2}$, and
$\Lambda_{3}$ corresponding to $\eta$, $\mu$ and $\nu$ respectively, and a
partition function
$Z[\Lambda_{1},\Lambda_{2},\Lambda_{3}]=\int_{S^{2}}\exp\bigg{[}\frac{3}{2}\Lambda_{1}\xi_{1}^{2}+\Lambda_{2}(\frac{1}{2}\xi_{1}^{2}+\xi_{2}^{2})+\Lambda_{3}\xi_{1}\xi_{2}\bigg{]}\,d\bm{\xi}$
(11)
while the relation from Eq. (7) manifests itself as the three equations
$\displaystyle\frac{\partial\ln Z}{\partial\Lambda_{1}}$
$\displaystyle=\eta+\frac{1}{2}$ $\displaystyle\frac{\partial\ln
Z}{\partial\Lambda_{2}}$ $\displaystyle=\mu+\frac{1}{2}$ (12)
$\displaystyle\frac{\partial\ln Z}{\partial\Lambda_{3}}$ $\displaystyle=\nu$
that implicitly relate the variables $\eta$, $\mu$, and $\nu$ to the Lagrange
multipliers. Note that since $Z[\Lambda_{1},\Lambda_{2},\Lambda_{3}]$ cannot
be obtained analytically, relation (12) can only be solved numerically. The
free energy, Eq. (2), is rewritten as
$F=\int_{\Omega}\Big{(}f_{b}(\eta,\mu,\nu,\Lambda_{1},\Lambda_{2},\Lambda_{3})+f_{e}(\eta,\mu,\nu,\nabla\eta,\nabla\mu,\nabla\nu)\Big{)}\,d\mathbf{r}$
(13)
where $f_{b}$ is a bulk free energy density that does not depend on gradients
of the fields. Written explicitly,
$f_{b}=-2\alpha\big{(}\frac{1}{3}\eta^{2}+\mu^{2}+\nu^{2}\big{)}+\\\
nk_{B}T\Big{(}\Lambda_{1}\big{(}\eta+\frac{1}{2}\big{)}+\Lambda_{2}\big{(}\mu+\frac{1}{2}\big{)}+\Lambda_{3}\nu+\ln(4\pi)-\ln
Z[\Lambda_{1},\Lambda_{2},\Lambda_{3}]\Big{)}.$ (14)
We will focus in this paper on an isotropic elastic energy
$f_{e}=L\partial_{k}Q_{ij}\partial_{k}Q_{ij}$ where repeated indices are
summed, and $L$ is the elastic constant. This is the ‘one constant
approximation’ so that mapping this elastic energy to the Oseen-Frank elastic
energy yields the same value for all three elastic constants Longa _et al._
(1987). Written in terms of the auxiliary variables we have
$f_{e}=2L\big{(}\frac{1}{3}|\nabla\eta|^{2}+|\nabla\mu|^{2}+|\nabla\nu|^{2}\big{)}.$
(15)
Before deriving the differential equations to be solved we redefine quantities
in a dimensionless way:
$\tilde{f_{b}}=\frac{f_{b}}{nk_{B}T},\quad\tilde{f_{e}}=\frac{f_{e}}{nk_{B}T},\quad\tilde{x}=\frac{x}{\xi_{MS}},\quad\tilde{L}=\frac{L}{\xi_{MS}^{2}nk_{B}T}$
(16)
where $\xi_{MS}$ is a length scale that is fixed implicitly by choosing the value of the
dimensionless parameter $\tilde{L}$. For the rest of the paper the
tildes are omitted for brevity.
To derive the equilibrium equations, we note that Eq. (12) relates $\eta$,
$\mu$, and $\nu$ as functions of $\\{\Lambda_{i}\\}$ through the unknown
single particle partition function. It has been shown that these relations are
invertible when $\eta$, $\mu$, and $\nu$ give physical eigenvalues of
$\mathbf{Q}$ Katriel _et al._ (1986). We can then regard $\Lambda_{1}$,
$\Lambda_{2}$, and $\Lambda_{3}$ as functions of $\eta$, $\mu$, and $\nu$ via
the inverse of Eq. (12). Although an analytic inverse does not exist we can
numerically invert this equation using a Newton-Raphson method. We create a
MATLAB scattered interpolant from values given by the Newton-Raphson method.
We select interpolant points from the values $0\leq S\leq 0.7$, $0\leq P\leq
0.1$, and $-\pi/2\leq\phi\leq\pi/2$ with $\Delta S=\Delta P=0.05$ and
$\Delta\phi=0.0245$. These values are then transformed to $\eta$, $\mu$, and
$\nu$ through Eqs. (9) and the Newton-Raphson method is run using these values
to find $\Lambda_{i}$ for the chosen interpolant points. The MATLAB scattered
interpolant is then created and used in the numerical minimization procedure.
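For concreteness, the inversion step can be sketched as follows. This is our own minimal Python analogue of the procedure (the product quadrature over the sphere and the grid sizes are illustrative choices, not the paper's): the moments of Eq. (12) and their Jacobian — the covariance of the sufficient statistics, since $\ln Z$ is convex in $\bm{\Lambda}$ — are evaluated by quadrature, and a Newton-Raphson iteration solves for $(\Lambda_{1},\Lambda_{2},\Lambda_{3})$ at given $(\eta,\mu,\nu)$ in the physical interior:
```python
import numpy as np

# product quadrature on the unit sphere: u = cos(theta) via Gauss-Legendre,
# phi uniform; the measure is du dphi
nu_q, nphi = 40, 80
u, wu = np.polynomial.legendre.leggauss(nu_q)
phi = 2.0 * np.pi * np.arange(nphi) / nphi
U, PHI = np.meshgrid(u, phi, indexing="ij")
W = np.outer(wu, np.full(nphi, 2.0 * np.pi / nphi))
s = np.sqrt(1.0 - U ** 2)
x1, x2 = s * np.cos(PHI), s * np.sin(PHI)

# sufficient statistics multiplying Lambda_1, Lambda_2, Lambda_3 in Eq. (11)
t = np.stack([1.5 * x1 ** 2, 0.5 * x1 ** 2 + x2 ** 2, x1 * x2])

def moments(lam):
    """First moments of t and their covariance under p ~ exp(lam . t)."""
    boltz = W * np.exp(np.tensordot(lam, t, axes=1))
    Z = boltz.sum()
    m = np.array([(ti * boltz).sum() for ti in t]) / Z
    C = np.array([[(ti * tj * boltz).sum() / Z for tj in t] for ti in t])
    return m, C - np.outer(m, m)

def invert(eta, mu, nu, tol=1e-10, maxit=50):
    """Newton-Raphson solution of Eq. (12) for (Lambda_1, Lambda_2, Lambda_3)."""
    target = np.array([eta + 0.5, mu + 0.5, nu])
    lam = np.zeros(3)
    for _ in range(maxit):
        m, J = moments(lam)
        step = np.linalg.solve(J, target - m)
        lam += step
        if np.max(np.abs(step)) < tol:
            return lam
    raise RuntimeError("Newton-Raphson did not converge")

print(invert(0.4, 0.0, 0.0))  # uniaxial input: Lambda_2 and Lambda_3 vanish
```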
The Euler-Lagrange equations are derived by taking the variations of Eqs. (14)
and (15) with respect to $\eta$, $\mu$, and $\nu$ while using Eqs. (12) to
simplify. The dimensionless equations are
$\displaystyle\frac{4}{3}L\nabla^{2}\eta=\Lambda_{1}-\frac{4}{3}\frac{\alpha}{nk_{B}T}\eta$
$\displaystyle 4L\nabla^{2}\mu=\Lambda_{2}-4\frac{\alpha}{nk_{B}T}\mu$ (17)
$\displaystyle 4L\nabla^{2}\nu=\Lambda_{3}-4\frac{\alpha}{nk_{B}T}\nu$
where, again, $\Lambda_{i}$ are numerically calculated as functions of $\eta$,
$\mu$, and $\nu$. Eqs. (17) are the central equations of this study and are
solved numerically in the following section for various cases of interest.
To numerically solve Eqs. (17) we use a finite differencing scheme. For one-
dimensional configurations, an implicit backward Euler method is used with 129
discrete points and time step $\Delta t=0.1\Delta x^{2}$. For two-dimensional
configurations a Gauss-Seidel relaxation method with $257^{2}$ discrete points
is used Press _et al._ (2002). We iterate until the calculated energy of a
configuration changes by less than $10^{-7}$ between successive iterations. We check that the
calculated energy of the initial condition is larger than the energy of the
final configuration. In all cases we use Dirichlet boundary conditions that
depend on the case being studied, as described in the relevant section. The
MATLAB code used for the numerical solutions can be found in Ref. Schimming
(2020).
### III.2 Landau-de Gennes Theory
Here, we summarize the conventions and notation used in the calculations to
compare the LdG free energy with the molecular field theory presented in the
previous section. The bulk energy density is of the form
$f_{LdG}=\frac{1}{2}a(T-T^{*})\operatorname{Tr}[\mathbf{Q}^{2}]-\frac{1}{3}B\operatorname{Tr}[\mathbf{Q}^{3}]+\frac{1}{4}C\big{(}\operatorname{Tr}[\mathbf{Q}^{2}]\big{)}^{2}$
(18)
where $a$, $B$, and $C$ are material parameters, and $T^{*}$ is the
temperature at which the isotropic phase loses its stability. We use the same
elastic free energy defined above when comparing to the molecular field theory
as well. For the sake of computation, we define the following dimensionless
quantities:
$\tilde{f}_{LdG}=\frac{f_{LdG}}{C},\quad\tilde{f_{e}}=\frac{f_{e}}{C},\quad\tilde{x}=\frac{x}{\xi_{LdG}},\quad\tilde{L}=\frac{L}{\xi_{LdG}^{2}C}$
(19)
which leaves $a(T-T^{*})/C$, $B/C$, and $\tilde{L}$ as dimensionless
parameters for the model. $\xi_{LdG}$ here is a length scale for the model
defined by the value of $\tilde{L}$ similar to $\xi_{MS}$ in Eq. (16). As
before, the tilde is subsequently dropped for brevity.
Computations are done using the same auxiliary variables defined in Eq. (9)
with the same finite difference scheme outlined above to solve the Euler-
Lagrange equations resulting from $f_{LdG}$.
## IV Results
Figure 2: Equilibrium value of the uniaxial order, $S$, versus the parameter
$\alpha/(nk_{B}T)$. At high $T$, the system is in an isotropic phase, while at
low $T$ the system is in a uniaxial nematic phase. A first order phase
transition occurs at $\alpha/(nk_{B}T)\approx 3.4049$.
### IV.1 Uniform Configuration and Bulk Free Energy
We first check our numerical method against known results for the
Maier-Saupe free energy. As mentioned above, this model should be equivalent
to the Maier-Saupe model in the case of a uniform system, $f_{e}=0$. In this
case, it has been shown that minimizers of the bulk free energy, Eq. (14),
will be uniaxial states Ball and Majumdar (2010). Thus, because we are
considering a uniform system, the choice of director is arbitrary. We choose
$\phi=0$ for this analysis so the auxiliary variables defined by Eq. (9) give
$\eta=S$, $\mu=P$, and $\nu=0$. Further, since we know the system will be
uniaxial we can take $\mu=P=0$. One can show that this implies
$\Lambda_{2}=\Lambda_{3}=0$ from Eq. (12).
Because the system is uniform, $S$ is constant, and hence $\nabla^{2}S=0$.
Defining $S_{N}$ as the value of $S$ in uniform equilibrium, we find, from Eq.
(17):
$\Lambda_{1}=\frac{4}{3}\frac{\alpha}{nk_{B}T}S_{N}$ (20)
which is a well-known result for the Maier-Saupe model when $\Lambda_{1}$ is
regarded as an effective interaction strength de Gennes (1975); Maier and
Saupe (1959); Selinger (2016). We then substitute Eq. (20) into Eq. (14) and
numerically minimize it to find the value of $S$ in equilibrium for a uniform
system. Fig. 2 shows $S_{N}$ as a function of $\alpha/(nk_{B}T)$. At high
temperatures, the equilibrium phase is isotropic with $S=0$. At low
temperatures a uniaxial nematic phase is stable with $S=S_{N}$. A first order
phase transition occurs at $\alpha/(nk_{B}T)\approx 3.4049$ with
$S_{N}=0.4281$. The diagram of Fig. 2 agrees with previous studies of the
Maier-Saupe model which has been used successfully to describe phase
transitions in experiments Selinger (2016).
We can further elucidate the nature of the molecular field theory by examining
the bulk free energy density, Eq. (14), restricted to a uniaxial
configuration. For a uniform, uniaxial system, the free energy density is
$f_{b}(S)=-\frac{2}{3}\frac{\alpha}{nk_{B}T}S^{2}+\Lambda_{1}\Big{(}S+\frac{1}{2}\Big{)}-\ln
Z[\Lambda_{1}]+\ln(4\pi)$ (21)
where $\Lambda_{1}$ is calculated as a function of $S$ through Eq. (12). This
function is plotted in Fig. 3 for three different values of
$\alpha/(nk_{B}T)$. As $\alpha/(nk_{B}T)$ increases we find that $f_{b}$
becomes non-convex, leading to a coexistence region in the phase diagram, and
a first order phase transition. It is well known that these features are also
present in the LdG free energy of Eq. (18) Gramsbergen _et al._ (1986). The
primary difference between LdG and the Maier-Saupe theory is that in the
latter $f_{b}$ diverges when $S=-1/2$ or $S=1$, that is, when the eigenvalues
leave the physical range. The non-convexity obtained agrees with similar plots
for the Maier-Saupe free energy in Ref. Selinger (2016).
Figure 3: Bulk free energy density as a function of the uniaxial order, $S$
for three values of the parameter $\alpha/(nk_{B}T)$. As $\alpha/(nk_{B}T)$
increases, the free energy becomes non-convex, leading to coexistence between
the isotropic and nematic phases.
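In the uniaxial case the self-consistency reduces to a single scalar relation, so Eq. (21) can be evaluated directly. The sketch below (our Python illustration, not the paper's MATLAB code) inverts Eq. (12) for $\Lambda_{1}$ by root bracketing and then minimizes $f_{b}(S)$; at $\alpha/(nk_{B}T)=3.4049$ the nematic minimum should sit near $S_{N}\approx 0.4281$ with $f_{b}(S_{N})\approx f_{b}(0)=0$, the coexistence condition of Fig. 2:
```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq, minimize_scalar

def lnZ(lam1):
    # uniaxial partition function: Z = 2*pi * int_{-1}^{1} exp((3/2) lam1 u^2) du
    val, _ = quad(lambda u: np.exp(1.5 * lam1 * u * u), -1.0, 1.0)
    return np.log(2.0 * np.pi * val)

def S_of_lambda1(lam1):
    # Eq. (12) with mu = nu = 0 reads d lnZ / d Lambda_1 = S + 1/2
    num, _ = quad(lambda u: 1.5 * u * u * np.exp(1.5 * lam1 * u * u), -1.0, 1.0)
    den, _ = quad(lambda u: np.exp(1.5 * lam1 * u * u), -1.0, 1.0)
    return num / den - 0.5

def f_b(S, alpha):
    # Eq. (21); alpha stands for the ratio alpha/(n k_B T)
    lam1 = brentq(lambda lam: S_of_lambda1(lam) - S, -60.0, 60.0)
    return (-(2.0 / 3.0) * alpha * S ** 2
            + lam1 * (S + 0.5) - lnZ(lam1) + np.log(4.0 * np.pi))

res = minimize_scalar(lambda S: f_b(S, 3.4049), bounds=(0.1, 0.9), method="bounded")
print(res.x, res.fun)  # expect S_N ~ 0.4281 and f_b(S_N) ~ f_b(0) = 0
```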
The non-convexity and similarity of the bulk free energy to LdG suggest that
there should exist stable interfacial configurations at coexistence as well as
stable solutions for topological defects in the nematic phase. In the
following three subsections we demonstrate just this and compare to results
given by LdG theory.
### IV.2 Planar Isotropic-Nematic Interface
We consider a one-dimensional configuration with a planar interface in which
the order parameter $\mathbf{Q}(\mathbf{r})=\mathbf{Q}(x)$. We solve Eqs. (17)
on a domain of size $\mathcal{L}=100\xi_{MS}$ with Dirichlet boundary
conditions where $S=S_{N}$ at $x=-50\xi_{MS}$ and $S=0$ at $x=50\xi_{MS}$. We
set $\alpha/(nk_{B}T)=3.4049$ and $S_{N}=0.4281$ so that the isotropic and
nematic bulk phases coexist. An important note is that since we are using the
“one-constant approximation” for the elastic free energy there are no
anisotropic effects, such as anchoring, in our analysis. It is known that
anisotropy changes the width of an interface for different director
orientations, however, because we are only considering isotropic terms here
the structure of the interfacial profile should not change if the angle of the
director in the nematic phase, $\phi$, is changed Popa-Nita _et al._ (1997).
Fig. 4 shows the equilibrium uniaxial order parameter $S$ for $\phi=0$. We
find a smooth, diffuse interface with $P=0$, that is, no biaxiality. We also
find that changing the angle of the director does not change the solution, as
expected. We can calculate the width of the interface by finding the points
where $S=0.1S_{N}$ and $S=0.9S_{N}$ and define them as $x_{1}$ and $x_{2}$
respectively. Then we define the width as $x_{1}-x_{2}$.
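On a discrete profile this width can be extracted by inverse interpolation. A minimal Python sketch (ours; the helper name and grid are illustrative), checked against the exact LdG profile of Eq. (22) quoted below:
```python
import numpy as np

def interface_width(x, S, S_N):
    """Return x1 - x2 with S(x1) = 0.1*S_N and S(x2) = 0.9*S_N, assuming S
    decreases monotonically across the interface as in Fig. 4."""
    # np.interp requires increasing abscissae, so interpolate x against the
    # reversed (increasing) order-parameter profile
    x1 = np.interp(0.1 * S_N, S[::-1], x[::-1])
    x2 = np.interp(0.9 * S_N, S[::-1], x[::-1])
    return x1 - x2

# check against the exact LdG profile of Eq. (22): width = 2*w*atanh(0.8)
x = np.linspace(-50.0, 50.0, 2001)
S_N, w = 0.4281, 3.0
S = 0.5 * S_N * (1.0 - np.tanh(x / w))
print(interface_width(x, S, S_N), 2.0 * w * np.arctanh(0.8))
```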
Figure 4: $S$ as a function of position for a one-dimensional interface.
Dirichlet boundary conditions maintain $S=S_{N}$ at the left boundary while
$S=0$ at the right boundary. $L=1$ for this configuration.
In order to compare with the LdG free energy, Eq. (18), we recall that the
interfacial profile for this configuration is known exactly
$S_{LdG}(x)=\frac{S_{N}}{2}\bigg{(}1-\tanh\Big{(}\frac{x}{w_{LdG}}\Big{)}\bigg{)}$
(22)
with
$w_{LdG}=\frac{6\sqrt{6}}{B/C}\sqrt{L}$ (23)
which sets the width of the interface. This implies that
$(x_{1}-x_{2})\propto\sqrt{L}$. One can similarly show that the bulk energy
contribution, i.e. the bulk contribution to the surface tension,
$\sigma\propto\sqrt{L}$.
With this in mind, we compare the scaling of the molecular field theory
solutions that we obtain with $\sqrt{L}$. To this end, we find the interface
widths and bulk surface tensions for solutions to Eqs. (17) for a variety of
values of $L$. The bulk surface tension is found by numerically integrating
the bulk free energy density, Eq. (14). Interface widths and bulk surface
tensions are plotted in Fig. 5 for both the molecular field theory and LdG. We
find both $(x_{1}-x_{2})\propto\sqrt{L}$ and $\sigma\propto\sqrt{L}$ for the
molecular field theory. Note that the LdG solution allows additional tuning
via the parameter $B/C$, which we have set to 9 in Fig. 5. In Fig. 5b the
discrepancy between the LdG solution and the molecular field theory
computations highlights that even if the widths of LdG interfaces are tuned to
be similar to those of the molecular field theory, the surface tensions cannot
be, and vice versa.
Figure 5: (a) Interface width and (b) bulk surface tension versus $\sqrt{L}$.
Dots represent the molecular field theory (MFT) computations while the solid
lines are derived from the analytical solution for LdG, Eq. (22), with
$B/C=9$. Both the interface width and excess free energy (i.e. surface
tension) scale linearly with the parameter $\sqrt{L}$, the same scaling
relationship as that of Landau-de Gennes.
We note that the similarity in bulk free energy landscape likely leads to the
similarity in solutions for LdG and the molecular field theory. Anisotropic
effects have yet to be analyzed for our model, for which it is known for LdG
there is nonzero biaxiality at interfaces Popa-Nita _et al._ (1997). This
will be the subject of a future study.
Figure 6: Plots of $S(x,y)$ for (a) a tactoid with $m=1$ director
configuration at the outer boundary and (b) tactoid with $m=-1/2$ director
configuration at the outer boundary. The radius in (a) is $R/\xi_{MS}=19.92\pm
0.2$ and the radius in (b) is $R/\xi_{MS}=4.59\pm 0.2$. The smaller size of
the $m=-1/2$ tactoid is due to the director distortion energy’s $m^{2}$
dependence. For both computations $L=1$.
### IV.3 Tactoids
We consider a two-dimensional square domain of size $\mathcal{L}=100\xi_{MS}$.
We set $S=S_{N}$, $P=0$, and $\phi=m\theta$ at the outer boundary, where
$\theta$ is the polar angle and $m$ is the winding number of $\phi$. We set
$\alpha/(nk_{B}T)=3.4049$ and $L=1$. As initial conditions we set $S=0$ within
a disc centered at the origin of radius $R=15\xi_{MS}$.
By “tactoid” we refer to a two-phase domain separated by an interface. In the
isotropic region $S=P=0$. We consider distorted boundary conditions to ensure
an interface forms in the simulation. Because the director can vary as a
function of position in two dimensions, the boundary conditions imposed will
change the size and shape of the object under consideration. Since we are only
considering isotropic gradients in the elastic free energy, there is no
anchoring term at the interface, i.e. there is not a difference in energy
based on the orientation of the molecules relative to the interface. Thus, we
expect the tactoids to be cylindrical. The topology of the boundary conditions
does impact the size of the tactoids, however. This is due to a balance
between two energies: the interfacial energy, which in two dimensions is
proportional to $R$, the radius of the tactoid, and the Oseen-Frank elastic energy in the
nematic region, which is proportional to
$m^{2}\ln(\mathcal{L}/R)$. Due to the symmetry of the molecules, half integer
$m$ is allowed and costs four times less director distortion energy than
integer $m$. Hence, we expect that tactoids with integer boundary conditions
should be approximately four times larger than those with half integer
boundary conditions.
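This factor of four also follows from a back-of-the-envelope estimate. Writing $\sigma$ for the line tension of the interface and $K$ for the single elastic constant (symbols introduced here only for this sketch), the two-dimensional energy of a tactoid of radius $R$ is
$E(R)\approx 2\pi\sigma R+\pi Km^{2}\ln(\mathcal{L}/R),\qquad\frac{dE}{dR}=2\pi\sigma-\frac{\pi Km^{2}}{R}=0\;\Rightarrow\;R^{*}=\frac{Km^{2}}{2\sigma},$
so $R^{*}(m=1)/R^{*}(m=-1/2)=4$, consistent with the measured ratio $19.92/4.59\approx 4.3$ reported below.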
In Fig. 6, we show equilibrium configurations for boundary conditions with
$m=1$ and $m=-1/2$. In both cases an isotropic region with $S=P=0$ is present
at the center of the computational domain. As expected, both configurations
are cylindrical in shape and we find that $R/\xi_{MS}=19.92\pm 0.2$ for the
$m=1$ configuration and $R/\xi_{MS}=4.59\pm 0.2$ for the $m=-1/2$
configuration. To find the radii we take a cut from the center of the tactoid
to the outer boundary and find the point where $S=0.5S_{N}$. It should be
noted that LdG, in the one-constant approximation in elastic energy, gives
similar results in terms of the size and shape of tactoids.
It is known for the LdG bulk free energy with anisotropic elastic free
energies that the shape of the tactoids also changes due to anchoring at the
interface Golovaty _et al._ (2019). Anisotropic effects on the shape of
tactoids in the molecular field theory will be the subject of a future study.
### IV.4 Nematic Disclinations
We consider next the case of disclination lines in thin films. We consider a
two-dimensional square of size $\mathcal{L}=10\xi_{MS}$. For all calculations
$L=1$ and $\alpha/(nk_{B}T)>3.4049$, so nematic ordering is energetically
advantageous. At the outer boundary we fix the system to be uniaxial ($P=0$)
and fix the director orientation, $\phi=(-1/2)\theta$. The initial
configuration is $S(r)=S_{N}\big{(}1-\exp(-r/2)\big{)}$ with $P=0$ everywhere.
In Fig. 7 we show the director profile, and the radial profile of equilibrium
$S$ and $P$ from the center of a disclination to the boundary of the domain
for the parameter $\alpha/(nk_{B}T)=4$. For the director, $\phi=-(1/2)\theta$
outside the core. Much like solutions for the LdG free energy, we see a
disclination core that is biaxial Meiboom _et al._ (1983); Schopohl and
Sluckin (1987). The biaxiality of the core was explained topologically by
Lyuksyutov, assuming a LdG bulk free energy Lyuksyutov (1978). Using this free
energy for analysis, one can define a “biaxial length” scale for the
disclinations, $R_{b}\approx\sqrt{K/(BS^{3})}$, where $K$ is on the order of
the Frank constants and $B$ is the parameter associated with the cubic term in
the LdG bulk energy, Eq. (18). For distances from the core smaller than
$R_{b}$, the elastic energy becomes comparable to the cubic term in the LdG
free energy and the system can remove the elastic singularity by becoming
biaxial, since a biaxial order parameter can remove the singularity. We note
that at the core, $S=P$ in both models. Using the parametrization from Eq.
(8), one can show that this is interpreted as a uniaxial order parameter, but
for a disc if $S>0$ or a rod aligned with the z-axis if $S<0$. For both
models, $S>0$ at the core. Thus, we interpret the biaxial solution as a
macroscopic “transformation” of rods far away from the core to discs at the
core. Microscopically, the probability distribution describing individual
molecules becomes more and more spread out in the x-y plane in an attempt to
alleviate the elastic energy singularity.
Figure 7: (a) Director profile and (b) radial plots of the uniaxial order $S$
and the biaxial order $P$ for a nematic disclination. The spatial extent of
biaxiality is on the order of the radius of the disclination core. Here,
$\alpha/(nk_{B}T)=4$ and $L=1$.
We emphasize that it is not obvious that the molecular field theory should
give biaxial core solutions for the disclinations since, by construction, the
model is markedly different from LdG. While LdG is an expansion of a
macroscopic order parameter, the model here is based on a microscopic
description. Because of this, it is difficult to quantitatively compare the
solutions for the disclinations given by the two models. While we note that
the spatial extent of the biaxiality for the disclinations is on the order of
the radius of the defects, there is not a cubic term in the free energy to
define a length such as $R_{b}$. Instead, this behavior is induced by the
single particle partition function which appears in Eq. (14) since the Maier-
Saupe energy is purely quadratic in $\mathbf{Q}$.
Another aspect of the disclinations that we can compare, at least
qualitatively, to the LdG model is the scaling of the radius of disclinations
with temperature. To find the radius, we take a cut from the center of the
disclination to the boundary and find the point where $S-P=S_{N}(1-e^{-1})$.
The results are plotted in Fig. 8. We show both the scaling for the molecular
field theory and for results given by LdG. It can be seen that the scaling is
similar for both models in a wide range of temperatures up to the coexistence
temperature, where the isotropic phase becomes energetically favorable.
We are currently investigating the effects of anisotropic elastic free
energies on disclinations. It is known that the director structure becomes
less symmetric away from the disclination core if the Frank constants for bend
and splay are not equal, and recent experiments have found anisotropic core
structures Zhou _et al._ (2017).
Figure 8: Radius of disclinations plotted as a function of temperature for (a)
the molecular field theory of section II and (b) the Landau-de Gennes model.
$T^{*}$ is the temperature where the isotropic phase loses its metastability,
while the dotted line on the plots indicate where coexistence between phases
is for the respective model. For the molecular field theory we use $L=1$, and
for Landau-de Gennes $L=1$ and $B/C=4$ for all simulations.
## V Conclusion
In this work, we have presented a computational implementation of the model of
reference Ball and Majumdar (2010). We show that the model can be interpreted
as replacing direct interactions between molecules via an effective
interaction field $\bm{\Lambda}$ in the mean field approximation. Further, we
investigate the similarity between the free energy of this molecular field
theory and the LdG free energy and compare solutions given by both for the
cases of interfaces, tactoids, and topological defects. We find qualitatively
similar results in all cases, which is noteworthy given that the two models
are constructed very differently.
This model allows for a more fundamental understanding of the underlying
microscopic and mesoscopic physics at play, and can serve as an alternative to
the LdG free energy when describing systems with inhomogeneous ordering. The
extension of the Maier-Saupe model to a field theory allows us to understand
not just the phase transition but also inhomogeneous configurations, and can
possibly be used to describe experiments like those of Refs. Zhou _et al._
(2017); Kim _et al._ (2013).
Moving forward, we are currently investigating the results of adding
anisotropy to the elastic free energy, which has been done to some extent for
the LdG model Golovaty _et al._ (2019). Importantly, however, one can
consider in this framework the values of the elastic constants for chromonics
that have been determined experimentally Zhou _et al._ (2014), while avoiding
boundedness issues in LdG theory when bend and splay constants are different.
Further, because of the microscopic nature of the model, one can, in
principle, use a more physically realistic Hamiltonian to describe the
molecular system, as opposed to the effective Maier-Saupe Hamiltonian that is
used here. One can also generalize the computations to more complex molecules,
such as plate-like molecules, by modifying Eq. (1).
###### Acknowledgements.
We are indebted to Shawn Walker and Sergij Shiyanovskii for useful
discussions. This research is supported by the National Science Foundation
under contract DMR-1838977, and by the Minnesota Supercomputing Institute.
## References
* de Gennes (1975) P. G. de Gennes, _The Physics of Liquid Crystals_ (Oxford University Press, 1975).
* Selinger (2018) J. V. Selinger, Interpretation of saddle-splay and the Oseen-Frank free energy in liquid crystals, Liquid Crystals Reviews 6, 129 (2018).
* Lazo _et al._ (2014) I. Lazo, C. Peng, J. Xiang, S. V. Shiyanovskii, and O. D. Lavrentovich, Liquid crystal-enabled electroosmosis through spatial charge separation in distorted regions as a novel mechanism of electrokinetics, Nat. Commun. 5, 5033 (2014).
* Peng _et al._ (2015) C. Peng, Y. Guo, C. Conklin, J. Viñals, S. V. Shiyanovskii, Q.-H. Wei, and O. D. Lavrentovich, Liquid crystals with patterned molecular orientation as an electrolytic active medium, Phys. Rev. E 92, 052502 (2015).
* Peng _et al._ (2018) C. Peng, T. Turiv, Y. Guo, Q.-H. Wei, and O. D. Lavrentovich, Sorting and separation of microparticles by surface properties using liquid crystal-enabled electro-osmosis, Liq. Cryst. 45, 1936 (2018).
* Mostajeran (2015) C. Mostajeran, Curvature generation in nematic surfaces, Phys. Rev. E 91, 062405 (2015).
* Babakhanova _et al._ (2018) G. Babakhanova, T. Turiv, Y. Guo, M. Hendrikx, Q.-H. Wei, A. P. Schenning, D. J. Broer, and O. D. Lavrentovich, Liquid crystal elastomer coatings with programmed response of surface profile, Nat. Commun. 9, 456 (2018).
* Genkin _et al._ (2017) M. M. Genkin, A. Sokolov, O. D. Lavrentovich, and I. S. Aranson, Topological defects in a living nematic ensnare swimming bacteria, Phys. Rev. X 7, 011029 (2017).
* Hokmabad _et al._ (2019) B. V. Hokmabad, K. A. Baldwin, C. Krüger, C. Bahr, and C. C. Maass, Topological stabilization and dynamics of self-propelling nematic shells, Phys. Rev. Lett. 123, 178003 (2019).
* Yeh and Gu (2009) P. Yeh and C. Gu, _Optics of Liquid Crystal Displays_ (Wiley, 2009).
* Collings _et al._ (2010) P. Collings, A. Dickinson, and E. Smith, Molecular aggregation and chromonic liquid crystals, Liquid Crystals 37, 701 (2010).
* Collings _et al._ (2015) P. J. Collings, J. N. Goldstein, E. J. Hamilton, B. R. Mercado, K. J. Nieser, and M. H. Regan, The nature of the assembly process in chromonic liquid crystals, Liquid Crystal Reviews 3, 1 (2015).
* Kim _et al._ (2013) Y. K. Kim, S. V. Shiyanovskii, and O. D. Lavrentovich, Morphogenesis of defects and tactoids during isotropic-nematic phase transition in self-assembled lyotropic chromonic liquid crystals, J. Phys.: Condens. Matter 25, 404202 (2013).
* Zhou _et al._ (2017) S. Zhou, S. V. Shiyanovskii, H.-S. Park, and O. D. Lavrentovich, Fine structure of the topological defect cores studied for disclinations in lyotropic chromonic liquid crystals, Nat. Commun. 8, 14974 (2017).
* Frank (1958) F. Frank, On the theory of liquid crystals, Discuss. Faraday Soc. 25, 19 (1958).
* Zhou _et al._ (2014) S. Zhou, K. Neupane, Y. A. Nastishin, A. R. Baldwin, S. V. Shiyanovskii, O. D. Lavrentovich, and S. Sprunt, Elasticity, viscosity, and orientational fluctuations of a lyotropic chromonic nematic liquid crystal disodium cromoglycate, Soft Matter 10, 6571 (2014).
* Leslie (1966) F. M. Leslie, Some constitutive equations for anisotropic fluids, J. Mech. Appl. Math. 19, 357 (1966).
* Ericksen (1991) J. L. Ericksen, Liquid crystals with variable degree of orientation, Arch. for Rational Mech. Anal. 113, 97 (1991).
* Zhang _et al._ (2018) C. Zhang, A. Acharya, N. J. Walkington, and O. D. Lavrentovich, Computational modelling of tactoid dynamics in chromonic liquid crystals, Liq. Cryst. 45, 1084 (2018).
* Meiboom _et al._ (1983) S. Meiboom, M. Sammon, and W. F. Brinkman, Lattice of disclinations: The structure of the blue phases of cholesteric liquid crystals, Phys. Rev. A 27, 438 (1983).
* Golovaty _et al._ (2019) D. Golovaty, Y.-K. Kim, O. D. Lavrentovich, M. Novack, and P. Sternberg, Phase transitions in nematics: textures with tactoids and disclinations, e-print arXiv:1902.06342v1 [cond-mat.soft] (2019).
* Popa-Nita _et al._ (1997) V. Popa-Nita, T. Sluckin, and A. Wheeler, Statics and kinematics at the nematic-isotropic interface: effects of biaxiality, Journal de Physique II 7, 1225 (1997).
* Gramsbergen _et al._ (1986) E. F. Gramsbergen, L. Longa, and W. H. de Jeu, Landau theory of the nematic-isotropic phase transition, Physics Reports 135, 195 (1986).
* Lubensky (1970) T. C. Lubensky, Molecular description of nematic liquid crystals, Phys. Rev. A 2, 2497 (1970).
* Longa _et al._ (1987) L. Longa, D. Monselesan, and H. R. Trebin, An extension of the Landau-Ginzburg-de Gennes theory for liquid crystals, Liq. Cryst. 2, 769 (1987).
* Ball and Majumdar (2010) J. M. Ball and A. Majumdar, Nematic liquid crystals: from Maier-Saupe to a continuum theory, Mol. liq. Cryst. 525, 1 (2010).
* Maier and Saupe (1959) W. Maier and A. Saupe, A simple molecular statistical theory of the nematic liquid-crystalline phase, I. Z. Naturf. 14, 882 (1959).
* Chiccoli _et al._ (2019) C. Chiccoli, L. R. Evangelista, P. Pasini, G. Skačej, R. T. de Souza, and C. Zannoni, Influence of boundary conditions on the order and defects of biaxial nematic droplets, Phys. Rev. E 100, 032702 (2019).
* Selinger (2016) J. V. Selinger, Liquid crystals, in _Introduction to the Theory of Soft Matter: From Ideal Gases to Liquid Crystals_ (Springer International Publishing, 2016) pp. 131–182.
* Katriel _et al._ (1986) J. Katriel, G. F. Kventsel, G. R. Luckhurst, and T. Sluckin, Free energies in the Landau and molecular field approaches, Liq. Cryst. 1, 337 (1986).
* Bauman and Phillips (1986) P. Bauman and D. Phillips, Regularity and the behavior of eigenvalues for minimizers of a constrained Q-tensor energy for liquid crystals, Calc. Var. 55, 81 (1986).
* Pismen (1999) L. M. Pismen, _Vortices in Nonlinear Fields_ (Oxford University Press, 1999).
* Mottram and Newton (2014) N. J. Mottram and C. J. Newton, Introduction to Q-tensor theory, e-print arXiv:1409.3542v2 [cond-mat.soft] (2014).
* Sen and Sullivan (1987) A. K. Sen and D. E. Sullivan, Landau-de Gennes theory of wetting and orientational transitions at a nematic-liquid–substrate interface, Phys. Rev. A 35, 1391 (1987).
* Press _et al._ (2002) W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery, _Numerical Recipes in C++ The Art of Scientific Computing_ , 2nd ed. (Cambridge University Press, 2002).
* Schimming (2020) C. D. Schimming, MATLAB code for the numerical solution of the Maier-Saupe field theory, http://hdl.handle.net/11299/211301 (2020).
* Schopohl and Sluckin (1987) N. Schopohl and T. Sluckin, Defect core structure in nematic liquid crystals, Phys. Rev. Lett. 59, 2582 (1987).
* Lyuksyutov (1978) I. F. Lyuksyutov, Topological instability of singularities at small distances in nematics, Zh. Eksp. Teor. Fiz. 75, 358 (1978).
|
2024-09-04T02:54:54.857771 | 2020-02-27T02:38:21 | 2002.11882 | {
"authors": "Ngoc Duy Nguyen, Thanh Thi Nguyen, Doug Creighton, Saeid Nahavandi",
"full_text_license": null,
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"provenance": "arxiv-papers-0000.json.gz:25902",
"submitter": "Thanh Thi Nguyen",
"url": "https://arxiv.org/abs/2002.11882"
} | arxiv-papers | # A Visual Communication Map for Multi-Agent Deep Reinforcement Learning
Ngoc Duy Nguyen, Thanh Thi Nguyen, Doug Creighton, Saeid Nahavandi Ngoc Duy
Nguyen, Doug Creighton and Saeid Nahavandi are with the Institute for
Intelligent Systems Research and Innovation, Deakin University, Waurn Ponds
Campus, Geelong, Victoria, Australia (e-mails: <EMAIL_ADDRESS>, <EMAIL_ADDRESS>, and [email protected]). Thanh Thi
Nguyen is with the School of Information Technology, Deakin University,
Burwood Campus, Melbourne, Victoria, Australia (e-mail:
[email protected]).
###### Abstract
Deep reinforcement learning has been applied successfully to solve various
real-world problems and the number of its applications in multi-agent
settings has been increasing. Multi-agent learning poses distinct challenges
in establishing a shared communication medium: agents draw on the knowledge
conveyed through the medium to determine subsequent actions in a distributed
manner. Ultimately, the goal is to leverage the cooperation of multiple agents
to achieve a designated objective efficiently. Recent studies typically
combine a specialized neural network with reinforcement learning to enable
communication between agents. This approach, however, limits the number of
agents or necessitates the homogeneity of the system. In this paper, we
propose a more scalable approach that not only deals with a great number of
agents but also enables collaboration between functionally dissimilar agents
and can be combined with any deep reinforcement learning method.
Specifically, we create a global communication
map to represent the status of each agent in the system visually. The visual
map and the environmental state are fed to a shared-parameter network to train
multiple agents concurrently. Finally, we select the _Asynchronous Advantage
Actor-Critic_ (A3C) algorithm to demonstrate our proposed scheme, namely
_Visual communication map for Multi-agent A3C_ (VMA3C). Simulation results
show that the use of a visual communication map improves the performance of A3C
regarding learning speed, reward achievement, and robustness in multi-agent
problems.
###### Index Terms:
reinforcement learning, deep learning, learning systems, multi-agent systems.
## I Introduction
Applications of deep reinforcement learning (RL) methods in multi-agent
environments have attracted much attention recently because of their
capability of solving various complex problems. Multi-agent learning concerns
the interaction among multiple distributed, self-operated agents that balance
a shared interest or collectively achieve a designated objective [1, 2]. A
vital form of such interaction involves the ability to communicate among
agents, which enables them to function as a team rather than as individuals
[3]. Examples of communication activities in multi-agent systems
include the scheduling problem in wireless sensor networks [4], strategic
planning in robot soccer [5], traffic junction problem [6], elevator control
[7], and multi-agent games [8]. Therefore, learning to communicate in multi-
agent systems has drawn a great deal of attention from the research community
in recent years [9, 10].
Figure 1: A traffic light can be used as a visual indicator to control the
level of cooperation between vehicles in a road intersection.
Deep RL [11] is known as a principled approach to deal with the _curse of
dimensionality_ in domains where the observation space and action space are
highly dimensional [12]. This is possible because deep RL approximates the
value function with neural networks, which typically include convolutional
layers to process raw graphical data directly. In this paper, we leverage the
potential of _Convolutional Neural Networks_ [13]
(ConvNets) to examine the effect of graphical features on the gradual
cooperation between interacting agents during training. To make it feasible,
we define a set of visual indicators so that each indicator represents the
current status of an agent. The visual indicators are globally visible to all
agents and act as a virtual communication medium. For this reason, we call a
set of visual indicators a visual communication map.
To clearly present the motivation of the study, we consider the following
example. Fig. 1 describes a scenario in a road intersection. Vehicles in the
intersection must follow the traffic light to make subsequent decisions and
avoid possible collisions. When the light turns red, the vehicles are notified
to stop and when the light turns green, the vehicles are allowed to move
forward. In this case, the traffic light acts as a virtual communication
medium among vehicles. Because the traffic light is visible to every vehicle
in the intersection, it can be used as a visual indicator to enable cooperation
between vehicles. Therefore, the drivers are only required to learn the
traffic light rules. The example implies that visual indicators can
facilitate cooperation in a multi-agent system.
Figure 2: Visual indicators are used to enable cooperation between multiple
agents.
By analyzing the previous example meticulously, we design a visual
communication map for multi-agent environments, as shown in Fig. 2. Initially,
we create a visual map that includes a set of visual indicators. Each
indicator represents the current status of an agent. The visual map is
combined with the environmental state to feed into a ConvNet. The output of
the ConvNet is connected to a fully-connected network followed by a control
layer where subsequent actions are predicted. Finally, each agent uses the
augmented knowledge of other agents via the visual communication map to learn
the optimal distributed policy on its own and, at the same time, maintain a
certain level of cooperation with other agents.
Deep RL has brought a great deal of success to many complex real-world problems
such as Atari games [12, 14, 15, 16], the game of Go [17], robotics control
tasks [18, 19, 20, 21], autonomous driving [22], and surgical robotics [23,
24]. As opposed to previous studies, we examine deep RL in multi-agent
domains. Multi-agent learning is more sophisticated than the single-agent
counterpart as the action space increases exponentially with the number of
agents. Furthermore, we face the _moving target problem_ [26, 27], which is a
primary difficulty in finding the optimal policy of every agent. In other
words, the rewarding signal of an agent depends on the decisions of other
agents, causing the policies being sought to become _non-stationary_ [27, 28].
The use of _experience replay_ memory even makes the situation worse, as a
large portion of transitions in the memory can become outdated [26, 29, 30].
For this reason, we choose the A3C [31] method, which is a policy-based
algorithm, to demonstrate our proposed scheme. There are different variants of
A3C such as [32, 33, 34], but we use the GPU-based A3C version [32] due to
various beneficial factors: high performance with multiple concurrent
actor-learners, efficiency within a short period of training, robustness over
a wide range of learning rates, and ease of implementation thanks to the many
open-source implementations available in Python.
Figure 3: A gameplay interface of Milk Factory.
Last but not least, we develop a game named _Milk Factory_ that is used to
benchmark our proposed scheme. Milk Factory simulates a task in a warehouse
where robots are used to automate the production chain. Fig. 3 describes the
gameplay interface of Milk Factory. There are two kinds of robots in the
factory: pick-up robots and mechanic robots. At first, a pick-up robot waits
for the milk bottle that is on the running conveyor belt and picks it when the
bottle is in front of the robot. The robot can only pick one milk bottle at a
time. It then brings the bottle to the box that is placed far away from the
conveyor belt. After placing the milk bottle in the box, the robot is free and
can go back to the conveyor belt to wait for another milk bottle. However, the
pick-up robot may stop working (e.g., due to battery depletion) during the
operation period. In this case, the mechanic robot goes to the pick-up robot’s
position and fixes the error. After fixing, the pick-up robot can continue to
operate. A pick-up robot is rewarded a score of 10 whenever it picks a milk
bottle or puts the bottle in the box. The mechanic robot is also given a score
of 10 whenever it fixes the pick-up robot successfully. Therefore, the game
requires the cooperation of two robots to maximize reward achievement.
Finally, the contributions of this paper are highlighted as follows:
* •
The study conducts an interesting investigation of the relationship between
visual indicators and interacting agents. Experimental results show that
visual indicators act as a communication bridge that enables cooperation
between multiple agents. Therefore, the study provides a practical method for
the development of multi-agent systems.
* •
We develop a multi-agent environment named Milk Factory. Milk Factory is
written in Python and follows the standard interface of the _Arcade Learning
Environment_ (ALE) [35, 36] and _OpenAI Gym_ [37] (a skeleton of this
interface is sketched after this list). Therefore, previous deep RL methods
that work with ALE and OpenAI Gym also work with Milk Factory without
significant modification. Furthermore, Milk Factory is configurable and can be
adapted to a wide range of research purposes.
* •
A3C is not a well-known method for multi-agent learning [8]. In this study,
however, we show that the combination of a visual communication map and A3C
works well on multi-agent problems.
* •
Although we use A3C to demonstrate our proposed scheme, it can be used with
any deep RL algorithm. Furthermore, the method is scalable in terms of the
number of agents and compatible with heterogeneous environments where agents
can perform different tasks.
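As an illustration of the interface mentioned in the second contribution, the skeleton below shows how an environment of this kind is typically exposed. This is a hypothetical sketch in the classic OpenAI Gym style (class name and method bodies are placeholders of ours, not the actual Milk Factory code):
```python
import gym
import numpy as np

class MilkFactoryLike(gym.Env):
    """Hypothetical Gym-style skeleton: one joint action per step for all
    agents, one rendered frame as the observation (old Gym reset/step API)."""

    def __init__(self, n_agents=2, n_actions=5):
        self.n_agents = n_agents
        self.action_space = gym.spaces.MultiDiscrete([n_actions] * n_agents)
        self.observation_space = gym.spaces.Box(0, 255, (84, 84, 3), np.uint8)

    def reset(self):
        return np.zeros((84, 84, 3), dtype=np.uint8)  # initial frame

    def step(self, actions):
        obs = np.zeros((84, 84, 3), dtype=np.uint8)   # next frame
        reward, done, info = 0.0, False, {}           # summed team reward
        return obs, reward, done, info
```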
The paper is organized as follows. Section II examines recent studies in
multi-agent domains. Section III presents the concept of a visual
communication map and the proposed scheme VMA3C. The performance evaluation is
conducted based on Milk Factory in Section IV. Finally, to conclude the paper,
we present potential extension directions and the future work in Section V.
## II Related Work
Since the first development of deep RL, namely _Deep Q-Network_ (DQN) [12],
there have been numerous variants of DQN such as _Double Q-Learning_ [30],
_Dueling Network Architecture_ [38], _Recurrent Deep Q-Learning_ [39], _Deep
Attention Recurrent Q-Network_ [40], and _Prioritized Experience Replay_ [29].
However, these approaches concentrate on approximating the value function,
which requires a large amount of memory to store historical transitions.
Furthermore, the experience replay is known to amplify the non-stationary
problem in multi-agent systems [26]. Therefore, a policy-based approach such
as A3C [31] and its variants, e.g. _UNsupervised REinforcement and Auxiliary
Learning_ (UNREAL) [41], are developed to hasten the learning speed by
allowing multiple actor-learners to be trained at the same time. These
methods are more efficient than the value-based methods as they do not require
an experience replay. Simulation results show that the policy-based methods
outperform the value-based ones concerning learning speed and total reward
achievement in the Atari domain [31]. Besides, _Deep Deterministic Policy
Gradient_ (DDPG) [42] and _Trust Region Policy Optimization_ (TRPO) [43] are
proposed to deal with continuous action domains. In the first case, DDPG
combines DQN with the actor-critic architecture to find the optimal policy.
TRPO, in contrast, is a policy gradient method that allows controlling the policy
improvement in every training step. In this paper, we select A3C as the
baseline method to demonstrate our proposed scheme because Milk Factory has a
discrete action space.
There are notable methods that use communication to control multiple agents.
For example, Foerster _et al._ [3] formulate two approaches: _Reinforced
Inter-Agent Learning_ (RIAL) and _Differentiable Inter-Agent Learning_ (DIAL). In the
first scheme, RIAL combines DQN with a recurrent network to independently
train every agent through a shared-parameter network. In contrast, DIAL
additionally sends messages across agents during the centralized learning.
These real-valued messages are mapped to a limited number of communication
actions. In this paper, we also adopt centralized learning by training
multiple agents at the same time, while the agents operate in a distributed
manner when deployed. This is feasible because the visual indicators can be
used as a virtual communication medium: agents are trained to learn
cooperation through the visual indicators. Because the channel is virtual,
there is no cost for exchanging messages between agents, and the scheme is
therefore energy efficient.
Sukhbaatar _et al._ [6] propose a communication model that allows multiple
agents to communicate before deciding subsequent actions. The authors create a
deep neural network that has an open access to a shared communication channel.
Agents receive aggregated transmissions from other agents via a continuous
vector embedded in the channel. Because the communication channel carries
continuous data, it is possible to combine the approach with a standard
single-agent RL method. However, the scheme is not scalable due to the
complexity of the neural network structure. In this study, we use the visual
communication map, which is decoupled from the network
configuration. Therefore, our proposed method is more robust and scalable.
Finally, Gupta _et al._ [8] examine a set of cooperative control tasks and
compare the performance of three different deep RL approaches: policy gradient,
temporal-difference error, and actor-critic methods. The authors combine
various disciplines to efficiently train multiple agents to learn a wide range
of control tasks from discrete to continuous action domains. Initially, the
agents are trained to learn a simple task and move to the harder ones
afterward. The knowledge of the previous task is accumulated over time rather
than acquiring new knowledge from scratch. This approach is called _curriculum
learning_ [44, 45]. The authors also show that the use of a shared-parameter
network is scalable in terms of the number of agents. However, the paper
examines only homogeneous domains where agents perform similar jobs. In our
study, we extend the investigation to heterogeneous systems where agents can
be in different task domains. Last but not least, although we evaluate our
proposed framework in Milk Factory, a fully observable problem, the presented
approach can be applied similarly to partially observable domains such as
[46].
## III Proposed Schemes
In this section, we first present preliminary terminology of RL and
the A3C method. We then introduce the use of a visual communication map with
A3C as well as the network architecture for multi-agent problems. Finally, we
propose two implementation approaches for the visual communication map.
### III-A Preliminary
#### III-A1 RL
Figure 4: The relationship between an agent and the environment in RL.
RL involves the interaction between an agent and the environment, as shown in
Fig. 4. At time-step $t$, the agent perceives an environmental state $s_{t}$.
It reacts with an action $a_{t}$ and the environment returns a reward $r_{t}$.
Over an episode, the agent obtains a sequence of transitions:
$S^{T}=\\{s_{0},a_{0},r_{1},s_{1},a_{1},r_{2},...,s_{T},a_{T},r_{T+1}\\}$,
where $T$ is the terminal state of the episode. The initial state $s_{0}$ is
generated randomly from the observation space $S$ and the terminal reward
$r_{T+1}$ is assigned to 0.
On the agent’s side, a policy $\pi$ is defined as a mapping function $f$ from
the observation space $S$ to the action space $A$, _i.e._ ,
$\pi=f:S\longrightarrow A$. In stochastic environments, it is convenient to
define the policy $\pi$ as a conditional probability distribution of action
$a_{t}$, given the state $s_{t}$, as follows:
$\pi(a_{t}|s_{t})=\left\\{\ p(a_{t}|s_{t})\ \bigg{|}\ \forall a_{t}\in
A,\forall s_{t}\in S\right\\}.$
In this definition, the next state $s_{t+1}$ is assumed to depend only on the
previous state $s_{t}$ and the corresponding action $a_{t}$. A problem that
satisfies this condition is called a _Markovian decision process_ (MDP). Milk
Factory is an MDP. In real-world problems, the agent only senses a limited
view of the surrounding environment. Therefore, it is more practical to
formulate the problem as a _partially observable MDP_ (POMDP) [47]. In this
paper, for conciseness, we only consider the MDP, but the same procedure can be
inferred for the POMDP.
On the other hand, we define the discounted return at time-step $t$ as
$R_{t}=r_{t+1}+\gamma r_{t+2}+\gamma^{2}r_{t+3}+...$, where $\gamma$ is a
discount factor so that $\gamma\in[0,1]$. The goal of RL is to maximize the
discounted return $R_{t}$.
Finally, to evaluate the _value_ of a state, we define the value function $V$
of a state $s_{t}$ under a policy $\pi$ as the expected value of the
discounted return $R_{t}$, _i.e._ ,
$V(s_{t})|_{\pi}=\mathbf{E}\left\\{R_{t}\middle|\pi\right\\}=\mathbf{E}\left\\{\sum_{i=0}^{T-t-1}\gamma^{i}r_{t+i+1}\middle|\pi\right\\}.$
(1)
#### III-A2 A3C
A3C is a policy-based method that uses the actor-critic architecture [48] and
the advantage function [49] to estimate the optimal policy $\pi^{*}$.
Furthermore, it enables concurrent learning by creating multiple
actor-learners that update the parameters of the neural network asynchronously.
particular, as shown in Fig. 5, the A3C’s network structure consists of two
essential sub-networks: one actor network $N^{a}(\theta)$ and one critic
network $N^{c}(\theta^{\prime})$, where $\theta$ and $\theta^{\prime}$ denote
$N^{a}$’s weight parameters and $N^{c}$’s weight parameters respectively. The
actor network estimates the policy $\pi(a_{t}|s_{t},\theta)$ of the agent
while the critic network evaluates the value function
$V(s_{t},\theta^{\prime})$ (1). Therefore, the A3C method optimizes two loss
functions: the actor loss function $L^{a}(\theta)$ and the critic loss
function $L^{c}(\theta^{\prime})$:
Figure 5: Actor-critic architecture in A3C.
$L^{a}(\theta)\simeq\log(\pi(a_{t}|s_{t},\theta))\left(A_{t}(\theta^{\prime})-V(s_{t},\theta^{\prime})\right)$
(2)
and
$L^{c}(\theta^{\prime})\simeq\frac{1}{2}\left(R_{t}(s_{t},\theta^{\prime})-V(s_{t},\theta^{\prime})\right)^{2},$
(3)
where $A_{t}$ is calculated by the following equation:
$A_{t}(\theta^{\prime})=\sum_{k=0}^{T_{max}-t-1}\gamma^{k}r_{t+k+1}+\gamma^{T_{max}-t}V(s_{T_{max}},\theta^{\prime}),$
(4)
where $T_{max}$ denotes the maximum number of observation steps. If
$T_{max}=T$, we have $V(s_{T_{max}},\theta^{\prime})=V(s_{T})=0$. Finally, to
enable the agent’s exploration during the training process, the entropy
regularization factor $H(s_{t},\theta)$ is added to the total loss.
Finally, the total loss is the summation of the actor loss, the critic loss,
and the entropy regularization:
$L^{total}=L^{a}+L^{c}+\beta H,$
where $\beta$ denotes the control parameter so that $\beta\in[0,1]$.
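As a concrete illustration of Eq. (4), $A_{t}$ can be computed for a whole observation segment with a single backward sweep. A minimal Python sketch (variable names are ours, not from the paper):
```python
def n_step_returns(rewards, v_last, gamma=0.99):
    """Compute A_t of Eq. (4) for t = 0..T_max-1.

    rewards : [r_1, ..., r_{T_max}] collected over one segment
    v_last  : V(s_{T_max}); pass 0.0 when s_{T_max} is terminal
    """
    returns, R = [], v_last
    for r in reversed(rewards):
        R = r + gamma * R
        returns.append(R)
    return returns[::-1]

print(n_step_returns([1.0, 0.0, 10.0], v_last=2.0, gamma=0.9))
# -> [10.558, 10.62, 11.8]
```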
### III-B VMA3C
#### III-B1 Multi-agent A3C
Figure 6: Multiple-actor-single-critic architecture in A3C.
In this section, we extend the use of A3C to multi-agent learning, as shown in
Fig. 6. The procedure can be applied to any standard deep RL methods. It is
even more straightforward to derive from a value-based method such as DQN
because its network has a single output while the A3C’s network includes two
outputs (actor-critic), as explained in the previous subsection.
We consider an MDP (or a POMDP) that consists of $N$ agents $\\{1,2,...,N\\}$.
Because A3C has two components in its network structure (actor and critic),
we create $N$ actor networks $N^{a}_{i}(\theta_{i})$ for $N$ agents ($i=1..N$)
while using a single critic network $N^{c}(\theta^{\prime})$. To efficiently
train multiple agents at the same time, we use a shared-parameter network for
$N$ actors, _i.e._ , $\theta_{1}=\theta_{2}=...=\theta_{N}=\theta$. Let
$\pi_{i}(a_{i,t}|s_{t})$ be a policy of the agent $i$, where $a_{i,t}\in
A_{i}$, $s_{t}\in S$, and $A_{i}$ is the action space of the agent $i$. We
consider that the intermediate reward $r_{t}$, given $s_{t}$ and
$a_{i,t}(i=1..N)$, is the summation of individual rewards, _i.e._ ,
$r_{t}=\sum_{i=1}^{N}r_{i,t}$. Because the state space $S$ is shared among all
agents and because we use a single critic network, the critic loss function
for a multi-agent scenario is the same as in equation (3), _i.e._ ,
$L^{c}(\theta^{\prime})\simeq\frac{1}{2}\left(R_{t}(s_{t},\theta^{\prime})-V(s_{t},\theta^{\prime})\right)^{2}$.
The actor loss $L^{a}_{i}(\theta)$ of the agent $i$ is also calculated as in
equation (2) as below:
$L^{a}_{i}(\theta)\simeq\log(\pi_{i}(a_{i,t}|s_{t},\theta))\left(A_{t}(\theta^{\prime})-V(s_{t},\theta^{\prime})\right).$
However, the entropy regularization $H$ in this case is the summation of $N$
individual factors, _i.e._ , $H=\sum_{i=1}^{N}H_{i}(s_{t},\theta)$.
Finally, we have the total loss for multi-agent problems using A3C as follows:
$L^{total}=\sum_{i=1}^{N}L_{i}^{a}+\alpha L^{c}+\beta H,$ (5)
where $\alpha$ ($\alpha\geq 1$) is added to balance the importance of the
critic among multiple actors.
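A compact PyTorch sketch of this multiple-actor-single-critic layout and of the total loss (5) is given below. It is our own illustration, with sign conventions chosen for gradient descent and an illustrative ConvNet trunk; it is not the authors' implementation:
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiAgentA3C(nn.Module):
    """Shared trunk, N actor heads, one critic head (Fig. 6)."""

    def __init__(self, n_agents, n_actions, in_channels=3, hidden=256):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(in_channels, 16, 8, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, 4, stride=2), nn.ReLU(),
            nn.Flatten(), nn.LazyLinear(hidden), nn.ReLU())
        self.actors = nn.ModuleList(
            nn.Linear(hidden, n_actions) for _ in range(n_agents))
        self.critic = nn.Linear(hidden, 1)

    def forward(self, state):  # state is S_t, i.e. s_t combined with M_t
        h = self.trunk(state)
        return [F.log_softmax(a(h), dim=-1) for a in self.actors], self.critic(h)

def vma3c_loss(log_pis, value, actions, returns, alpha=1.0, beta=0.01):
    """Total objective of Eq. (5), written so that minimizing it ascends
    log pi_i * advantage (Eq. (2)) and the policy entropies."""
    value = value.squeeze(-1)
    adv = returns - value                       # A_t - V(s_t), Eqs. (2)-(4)
    critic_loss = 0.5 * adv.pow(2).mean()       # Eq. (3)
    actor_loss = value.new_zeros(())
    entropy = value.new_zeros(())
    for i, log_pi in enumerate(log_pis):        # one actor head per agent
        chosen = log_pi.gather(1, actions[:, i:i + 1]).squeeze(-1)
        actor_loss = actor_loss - (chosen * adv.detach()).mean()
        entropy = entropy - (log_pi.exp() * log_pi).sum(-1).mean()
    return actor_loss + alpha * critic_loss - beta * entropy

net = MultiAgentA3C(n_agents=2, n_actions=5)
log_pis, value = net(torch.zeros(8, 3, 84, 84))  # a batch of S_t frames
```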
#### III-B2 Visual communication map
For each agent, we define a set of statuses
$S^{stat}_{i}=\\{c_{i1},c_{i2},...,c_{ij}\\}$, where $i=1..N$, $j\geq 0$, and
$c_{il}\neq c_{ip}(l\neq p)$. We also define a mapping function $F_{i}$ that
maps $S^{stat}_{i}$ to a set of graphical representations
$G=\\{g_{1},g_{2},...,g_{m}\\}$ so that
* •
$\forall x\in s_{t}\Rightarrow x\not\in g_{i}$,
* •
$\forall x\in g_{i}\Rightarrow x\not\in g_{j}(i\neq j$ and
$i,j\in\\{1,...,m\\})$,
where $m=\sum_{i=1}^{N}||S^{stat}_{i}||$. In other words, we have
$F_{i}:S^{stat}_{i}\rightarrow G$ $(i=1..N)$.
Let $C_{i}^{t}$ be the status of the agent $i$ at time-step $t$. We have
$C_{i}^{t}\in S^{stat}_{i}$. A visual communication map at time-step $t$ is
defined as a set of graphical representations of $N$ agents’ status at time-
step $t$:
$M_{t}=\\{F_{1}(C_{1}^{t}),F_{2}(C_{2}^{t}),...,F_{N}(C_{N}^{t})\\}$.
Finally, we define an operator $\oplus$ that combines the environmental state
$s_{t}$ with $M_{t}$ and write $S_{t}=s_{t}\oplus M_{t}$. Then, $S_{t}$
is fed to the neural network (instead of $s_{t}$). To maximize
learning efficiency, we define the operator $\oplus$ so as to satisfy the
following conditions:
###### Premise 1.
The environmental state $s_{t}$ is a strict subset of $S_{t}$, i.e.,
$s_{t}\subset S_{t}$.
###### Premise 2.
Every element of the visual communication map $M_{t}$ is a strict subset of
$S_{t}$, i.e., $F_{i}(C_{i}^{t})\subset S_{t}(i=1..N)$.
###### Premise 3.
Given $E$=$\\{s_{t},F_{1}(C_{1}^{t}),F_{2}(C_{2}^{t}),...,F_{N}(C_{N}^{t})\\}$
and an arbitrary $x$ so that $x\in S_{t}$, we have that $x$ exclusively
belongs to one element of $E$.
Premises 1 and 2 ensure that $S_{t}$ retains the full information of
the environmental state and the current status of all agents in the system.
Premise 3 is defined to maximize learning efficiency. We can infer the
following rules based on these conditions:
###### Lemma 1.
Let $s_{t}\neq\emptyset$ be the environmental state at time-step $t$ and
$G_{i}^{t}$ be the graphical representation of the status of agent $i$ at
time-step $t$, i.e., $G_{i}^{t}=F_{i}(C_{i}^{t})(i=1..N)$ so that $\exists
k\in\\{1,...,N\\}$ and $G_{k}^{t}\neq\emptyset$. The following operator
$S_{t}=s_{t}\oplus M_{t}$ $=$ $s_{t}\cup\left(\cup_{i=1}^{N}G_{i}^{t}\right)$
satisfies the three premises.
_Proof_ : Because $S_{t}=s_{t}\cup\left(\cup_{i}^{N}G_{i}^{t}\right)$, we have
$s_{t}\subseteq S_{t}$ and $G_{i}^{t}\subseteq S_{t}(i=1..N)$. However, the
_equal_ sign does not occur because $s_{t}\neq\emptyset$ and $\exists
k\in\\{1,...,N\\}$ so that $G_{k}^{t}\neq\emptyset$ and $s_{t}\neq
G_{i}^{t}(i=1..N)$. This implies that the operator satisfies premises 1 and 2.
Next, take an element $x\in S_{t}$ and assume, for contradiction, that $x$
belongs to at least two elements of $E$. There are two possible cases:
* •
If $x\in s_{t}$, then $\exists k\in\\{1,...,N\\}$ so that $x\in G_{k}^{t}$
and $G_{k}^{t}\neq\emptyset$. This contradicts the definition of the
visual communication map, _i.e._ , $x\not\in G_{k}^{t}(\forall x\in s_{t})$.
* •
If $x\not\in s_{t}$, then $\exists i,j\in\\{1,...,N\\}$ so that $i\neq
j$, $x\in G_{i}^{t}$, and $x\in G_{j}^{t}$, which also contradicts the
definition of the visual communication map.
This completes the proof. $\hfill\blacksquare$
###### Lemma 2.
Let $s_{t}\neq\emptyset$ be the environmental state at time-step $t$,
$G_{i}^{t}$ be the graphical representation of the status of agent $i$ at
time-step $t$, and $\Phi$ be an arbitrary 1-to-1 mapping function from set to
set. We assume that $G_{i}^{t}\neq\emptyset(i=1..N)$. The following operator
$S_{t}=s_{t}\oplus M_{t}$ $=$
$\Phi(s_{t})\cup\left(\cup_{i=1}^{N}\Phi(G_{i})\right)$ satisfies the three
premises.
_Proof_ : Since $\Phi:A\rightarrow B$ is a 1-to-1 mapping function from set
$A$ to set $B$, _i.e._ , $\forall x\in A,\exists!y\in B$ such that $y=\Phi(x)$
and vice versa, the problem reduces to Lemma 1, which has already been
proved. $\hfill\blacksquare$
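As a quick sanity check of the operator in Lemma 1, the following snippet
verifies the three premises on toy sets (the element names are hypothetical):

```python
# Toy check of S_t = s_t U (U_i G_i^t) against the three premises.
s_t = {"e1", "e2"}              # non-empty environmental state
G = [{"g1"}, {"g2"}]            # non-empty, pairwise-disjoint representations

S_t = s_t.union(*G)             # the operator of Lemma 1
E = [s_t] + G

assert s_t < S_t                                       # Premise 1: strict subset
assert all(g < S_t for g in G)                         # Premise 2: strict subsets
assert all(sum(x in e for e in E) == 1 for x in S_t)   # Premise 3: exclusivity
```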
Lemma 1 and Lemma 2 are important as they can be used to suggest different
implementation approaches for the visual communication map. In this paper, we
suggest the following approaches:
Figure 7: Visual aggregation by creating a holder.
1. 1.
We create a graphical holder (a mask) that embeds both the environmental state
and the visual communication map, as shown in Fig. 7. The communication map is
a graphical representation that contains the current status information of $N$
agents. The holder is used as $S_{t}$ and is fed to the neural network. This
approach is inferred from Lemma 1.
2. 2.
We create $N+1$ ConvNets: one for the environmental state and the other $N$
ConvNets for $N$ agents, as shown in Fig. 8. The output features of $N+1$
ConvNets are concatenated to form $S_{t}$. This approach is inferred from
Lemma 2.
Figure 8: Concatenating output features of $N+1$ ConvNets.
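As a concrete illustration of the first (holder) approach, the following NumPy
sketch embeds a grayscale frame and the agents' status patches into one canvas
with pairwise-disjoint regions, so Premises 1-3 hold by construction; the
layout is an illustrative choice, not the exact one used in Fig. 7:

```python
import numpy as np

def build_holder(frame, indicators):
    """Combine the frame s_t and the status patches M_t into one holder S_t.

    frame:      (H, W) grayscale game frame (the environmental state)
    indicators: list of (h, w) patches, one per agent status; assumed to
                fit side by side within the frame width
    """
    H, W = frame.shape
    strip_h = max(p.shape[0] for p in indicators)
    holder = np.zeros((H + strip_h, W), dtype=frame.dtype)
    holder[:H, :] = frame                 # region reserved for s_t
    x = 0
    for patch in indicators:              # disjoint regions for M_t
        h, w = patch.shape
        holder[H:H + h, x:x + w] = patch
        x += w
    return holder                         # S_t = s_t (+) M_t
```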
In this paper, we use the first method to benchmark our proposed scheme
because it is more efficient (using a single shared-parameter ConvNet). The
detailed algorithm is described in Algorithm 1.
Algorithm 1 Visual communication Map for Multi-agent A3C (VMA3C)
1:$T_{global}\leftarrow 0$ $\triangleright$ Global shared step counter
2:procedure VMA3C $\triangleright$ A learner procedure
3: $t\leftarrow 0$
4: repeat
5: $s\leftarrow s_{t}\oplus M_{t}$
6: $c\leftarrow 0$
7: Reset experience memory $E\leftarrow\\{\\}$
8: repeat
9: $a\leftarrow\\{a_{1,t},a_{2,t},...,a_{N,t}\\}$ $\triangleright$ Using
$\pi_{1},\pi_{2},...,\pi_{N}$
10: $r\leftarrow r_{t+1}=\sum_{i=1}^{N}r_{i,t+1}$
11: $s^{\prime}\leftarrow s_{t+1}\oplus M_{t+1}$
12: Save transition $(s,a,r,s^{\prime})$ to $E$
13: $s\leftarrow s^{\prime}$
14: $t\leftarrow t+1,T_{global}\leftarrow T_{global}+1,c\leftarrow c+1$
15: until $s=s_{T}\text{ or }c=T_{\text{max}}$
16: Retrieve all $(s_{j},a_{j},r_{j},s^{\prime}_{j})$ from $E$
17: Calculate $A_{t}$ based on (4)
18: Calculate gradients $\frac{\partial L^{total}}{\partial\theta}$,
$\frac{\partial L^{total}}{\partial\theta^{\prime}}$ based on (5)
19: Perform asynchronous update of $\theta$ and $\theta^{\prime}$
20: if $s=s_{T}$ then
21: $t\leftarrow 0$
22: until $T_{global}>T_{\text{end}}$
In a real-world application, the scheme does not scale if the number of visual
representations grows with the number of agents. Therefore, it is more
efficient to use the following _condensed_ rule to reduce the number of visual
representations.
Condensed rule: Let $c_{ij}$ be the $j$-th status of agent $i$. If there
exists $K=\\{k_{1},k_{2},...,k_{N-1}\\}$ so that the conditional probability
$P(c_{ij}|c_{mk_{p}})>0$ $(\forall m\in\\{1,2,...,N\\}-\\{i\\},k_{p}\in K)$
then $F_{i}(c_{ij})=\emptyset$.
For example, in the road intersection problem, we only need three visual
indicators (red, green, orange) to control all vehicles. However, this rule
requires extra information about the problem to estimate the conditional
probability $P(c_{ij}|c_{mk_{p}})$.
## IV Performance Evaluation
### IV-A Parameter settings
In this section, we describe the parameter settings that are used in Milk
Factory and the proposed scheme (VMA3C) for performance evaluation. The
algorithm is run on a computer with an octa-core processor and a GTX 1080Ti
graphics card. In Milk Factory, an episode terminates when the number of steps
reaches 200. The pick-up robot is rewarded a score of 10 whenever it
successfully picks a milk bottle or puts it into the box, while the
mechanic robot is rewarded a score of 10 if it successfully fixes the pick-up
robot. The error rate of the pick-up robot is represented as a random variable
_ER_ , _i.e._ , the probability of an error per action of the pick-up robot
equals $1\%$. Moreover, the pick-up robot has 5 decision actions: moving
up, moving down, moving left, moving right, and picking/dropping a milk
bottle. The mechanic robot also has 5 decision actions: moving up, moving
down, moving left, moving right, and fixing the pick-up robot. A history of 4
game frames is used as a single state to feed into the neural network.
Finally, based on the _condensed rule_ , we only map the status of the pick-up
robot to visual representations because the state of the mechanic robot
depends on the pick-up robot. The pick-up robot has two states: busy state
(bringing the milk bottle) and failed state (because of errors). A milk bottle
represents the visual indicator of a busy state and a question mark represents
the visual indicator of a failed state.
We compare the performance of the VMA3C method with the A3C method. We keep
the parameter settings of A3C same as the previous study [31] except the
following changes. The learning rate starts from 0.001 and anneals to 0 during
the course of training. The discount factor $\gamma$ is 0.99, the maximum
observation step $T_{max}$ is 5, and the encouraging (entropy) factor $\beta$
is 0.01. Moreover, we perform gradient clipping by using the
following formula [50]:
$\delta_{i}^{G}=\delta_{i}^{G}*\frac{\varpi}{\max(\omega,\varpi)},$
where $\varpi=40$, $\forall i:\delta_{i}^{G}\in\Delta_{G}$, and $\forall
j:\delta_{i,j}^{G}\in\delta_{i}^{G}$ we have:
$\omega=\sqrt{\sum_{i,j}|\delta_{i,j}^{G}|^{2}}.$
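This is the usual clip-by-global-norm rule: gradients are rescaled so that
their global L2 norm never exceeds $\varpi$. A minimal NumPy sketch, assuming
a list of per-layer gradient arrays:

```python
import numpy as np

def clip_by_global_norm(grads, max_norm=40.0):
    """Rescale gradients so that their global L2 norm is at most max_norm."""
    global_norm = np.sqrt(sum(np.sum(g ** 2) for g in grads))   # omega
    scale = max_norm / max(global_norm, max_norm)               # always <= 1
    return [g * scale for g in grads]
```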
Finally, we use the RMSProp [51] optimizer with a decay of 0.99 and an epsilon
of 0.1. The network of the VMA3C method includes 4 layers: 2 convolutional
layers (one has 16 filters of 8 $\times$ 8 with a stride of 4 followed by
another one that has 32 filters of 4 $\times$ 4 with a stride of 2), one
fully-connected layer with a size of 256 and ReLU activation functions, and
one output layer that includes a single critic and a set of $N$ actors
($N=2$ for two-agent setting and $N=3$ for three-agent setting). The critic is
followed by a linear activation function while each actor is followed by
a _softmax_ activation function.
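The following PyTorch sketch mirrors this configuration, assuming 84 $\times$
84 inputs with a 4-frame history; it is an illustrative reconstruction, not
the authors' code:

```python
import torch
import torch.nn as nn

class VMA3CNet(nn.Module):
    """Shared trunk with one critic head and N actor heads (sketch)."""
    def __init__(self, n_agents, n_actions, in_frames=4):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(in_frames, 16, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=4, stride=2), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * 9 * 9, 256), nn.ReLU(),  # assumes 84x84 input
        )
        self.critic = nn.Linear(256, 1)             # linear value head
        self.actors = nn.ModuleList(
            [nn.Linear(256, n_actions) for _ in range(n_agents)]
        )

    def forward(self, x):
        h = self.trunk(x)
        value = self.critic(h)
        policies = [torch.softmax(actor(h), dim=-1) for actor in self.actors]
        return value, policies
```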
Figure 9: The reward distribution of VMA3C in the two-agent setting of Milk
Factory.
We use A3C and VMA3C to train agents for 4 million steps. It takes
approximately 12 training hours for each variant. During the training process,
we create 40 checkpoints in different training steps. For each checkpoint, we
perform a 10,000-step evaluation and record the mean of total reward and its
standard error. We divide the performance evaluation into three stages.
Firstly, we compare the performance of VMA3C with A3C in the two-agent setting
of Milk Factory (one pickup robot and one mechanic robot). Subsequently, we
add an additional pick-up robot to examine the scalability of the proposed
scheme. Finally, we re-evaluate the previous two stages with different values
of _ER_ to examine the robustness of VMA3C in stochastic environments.
### IV-B Two-agent setting
In this setting, two robots are operated in Milk Factory: one pick-up robot
and one mechanic robot. We examine the cooperation between the robots to
maximize the total reward. Because the maximum number of episode steps is set
to 200, it is feasible to calculate the optimal score that the two robots can
achieve. In an ideal case, it takes 8 steps to pick a milk bottle, go to the
box, and put the bottle into the box. The robot optimally achieves a reward of
20 in each round. Therefore, the maximum reward that the robot can achieve is
$200*20/8=500$ without any errors. If an error occurs during the operation,
the total reward must be smaller because the robot misses the current milk on
the conveyor belt. Therefore, the reward of 500 is also the optimal total
reward of two robots. Fig. 9 describes the reward distribution of VMA3C during
the training. We can see that VMA3C takes only three training hours to
establish an optimal policy that approximately reaches a reward of 500. This
is possible because the visual map aids the mechanic robot in fixing the
pick-up robot immediately when an error occurs.
Figure 10: Comparison of the mean of total rewards between two methods (A3C
and VMA3C) in the two-agent setting of Milk Factory.
Fig. 10 shows the performance of two methods, A3C and VMA3C, regarding the
mean of total rewards. In the figure, A3C performs poorly without any
improvements after 12 hours of training. Furthermore, the pick-up robot gets
stuck after picking a milk bottle. The mechanic robot is located next to it,
as shown in the following video _https://youtu.be/J0qusnfyrr0_. As opposed to
A3C, VMA3C achieves an optimal policy after 3 hours of training. This
highlights the importance of the visual communication map in multi-agent
problems. The pick-
up robot in VMA3C is operated as expected: it picks a milk bottle, goes to the
box by the shortest path, puts the milk bottle into the box, and goes back to
the conveyor belt. Moreover, the mechanic robot fixes the pick-up robot if
necessary.
### IV-C Three-agent setting
Figure 11: The reward distribution of VMA3C in the three-agent setting of Milk
Factory.
To examine the scalability of VMA3C, we add a pick-up robot into the gameplay.
Therefore, we have two pick-up robots and one mechanic robot. We also add an
additional box and a milk bottle on the conveyor to increase the productivity.
Fig. 11 shows the reward distribution of the VMA3C method in the three-agent
setting. VMA3C can learn a near-optimal policy that reaches a reward of 900.
However, the results vary widely because the single mechanic robot cannot fix
two pick-up robots at the same time; in this case, the environment is more
stochastic than the previous one.
Figure 12: Comparison of the mean of total rewards between two methods (A3C
and VMA3C) in the three-agent setting of Milk Factory.
Fig. 12 compares the performance of VMA3C and A3C in the three-agent setting
of Milk Factory. The A3C method obtains a maximum reward of 300 after 12 hours
of training, while VMA3C achieves roughly 200% higher performance. As shown in
the following video _https://youtu.be/eoud2D0nW1k_, two pick-up robots in
VMA3C operate concurrently while the mechanic robot is located in the middle
of the screen.
### IV-D Robustness
Figure 13: The performance of A3C in Milk Factory with four different error
rates: $2\%$, $3\%$, $4\%$, and $5\%$. The left figure is conducted in the
two-agent setting and the right figure is conducted in the three-agent setting
of Milk Factory.
Figure 14: The performance of VMA3C in Milk Factory with four different error
rates: $2\%$, $3\%$, $4\%$, and $5\%$. The left figure is conducted in the
two-agent setting and the right figure is conducted in the three-agent setting
of Milk Factory.
In this subsection, we vary the values of _ER_ to examine the robustness of
the proposed method. We record the performance of both approaches by using
four different error rates: $2\%$, $3\%$, $4\%$, and $5\%$. Fig. 13 shows the
performance of A3C in the two-agent setting and the three-agent setting of
Milk Factory. In the two-agent setting, A3C performs poorly, as explained in
the previous subsection. We notice that the mean total reward increases
with the error rate. This implies that most of the total reward is contributed
by the mechanic robot, since a higher error rate gives it more opportunities
to earn fixing rewards. Finally, Fig. 14 presents the performance of VMA3C in the two
and three-agent setting of Milk Factory. We conclude that VMA3C is robust in
the stochastic environment because it still achieves a high reward regardless
of the error rate.
## V Conclusions
Communication between multiple agents is difficult to implement, especially
when agents are characterized by deep RL algorithms. This paper investigates
the use of a virtual communication channel, via visual indicators, to
establish a cooperative policy for multi-agent problems. The method is
practical and can be applied widely in real-world applications for several
reasons. Firstly, it can be used with any standard RL method. Secondly, the
proposed method is scalable in terms of the number of agents and the type of
agents. Finally, the method is robust in stochastic environments and plays a
vital role in solving the non-stationarity problem in multi-agent domains. By
using a visual communication map, the agents learn a cooperative policy on
their own to maximize the reward in a short period of training time. We
continue to work on the effect of visual indicators on multi-agent domains in
the following directions:
* •
The method requires human knowledge to reduce the number of visual indicators,
which prevents the automation of the approach. We suggest using a Bayesian
model to predict the dependency between agents to completely automate the
proposed method.
* •
The scheme is constructed from the A3C method and evaluated in a discrete
action environment. We will extend the use of visual communication map with
different single-agent deep RL algorithms and examine it in continuous action
domains.
## Acknowledgement
The authors wish to thank our colleagues in Institute for Intelligent Systems
Research and Innovation for their comments and helpful discussions.
## References
* [1] L. Panait and S. Luke, “Cooperative multi-agent learning: The state of the art,” _Autonomous Agents and Multi-Agent Systems_ , vol. 11, no. 3, pp. 387–434, 2005.
* [2] D. Bloembergen, K. Tuyls, D. Hennes, and M. Kaisers, “Evolutionary dynamics of multi-agent learning: A survey,” _Journal of Artificial Intelligence Research_ , vol. 53, pp. 659–697, 2015.
* [3] J. Foerster, I.A. Assael, N. de Freitas, and S. Whiteson, “Learning to communicate with deep multi-agent reinforcement learning,” In _Advances in Neural Information Processing Systems_ , pp. 2137–2145, 2016.
* [4] D. Fox, W. Burgard, H. Kruppa, and S. Thrun, “A probabilistic approach to collaborative multi-robot localization,” _Autonomous Robots_ , vol. 8, no. 3, pp. 325–344, 2000.
* [5] M. Riedmiller, T. Gabel, R. Hafner, and S. Lange, “Reinforcement learning for robot soccer,” _Autonomous Robots_ , vol. 27, no. 1, pp. 55–73, 2009.
* [6] S. Sukhbaatar and R. Fergus, “Learning multiagent communication with backpropagation,” In _Advances in Neural Information Processing Systems_ , pp. 2244–2252, 2016.
* [7] R. H. Crites and A. G. Barto, “Elevator group control using multiple reinforcement learning agents,” _Machine Learning_ , vol. 33, no. 2–3, pp. 235–262, 1998.
* [8] J. K. Gupta, M. Egorov, and M. Kochenderfer, “Cooperative multi-agent control using deep reinforcement learning,” In _International Conference on Autonomous Agents and Multiagent Systems_ , pp. 66–83, 2017.
* [9] T. Nguyen, N. D. Nguyen, and S. Nahavandi, “Deep reinforcement learning for multi-agent systems: A review of challenges, solutions and applications,” _arXiv preprint arXiv:1812.11794_ , 2018.
* [10] N. D. Nguyen, T. Nguyen, and S. Nahavandi, “System design perspective for human-level agents using deep reinforcement learning: A survey,” _IEEE Access_ , vol. 5, pp. 27091–27102, 2017.
* [11] R. S. Sutton and G. B. Andrew, _Introduction to Reinforcement Learning_. Cambridge: MIT press, 1998.
* [12] V. Mnih et al., “Human-level control through deep reinforcement learning,” _Nature_ , 2015.
* [13] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” In _Advances in Neural Information Processing Systems_ , 2012.
* [14] V. Mnih et al., “Playing atari with deep reinforcement learning,” _arXiv preprint arXiv:1312.5602_ , 2013.
* [15] T. Nguyen, N. D. Nguyen, and S. Nahavandi, “Multi-agent deep reinforcement learning with human strategies,” In _2019 IEEE International Conference on Industrial Technology (ICIT)_ , pp. 1357-1362. IEEE, 2019.
* [16] N. D. Nguyen, S. Nahavandi, and T. Nguyen, “A human mixed strategy approach to deep reinforcement learning,” In _2018 IEEE International Conference on Systems, Man, and Cybernetics (SMC)_ , pp. 4023-4028. IEEE, 2018.
* [17] D. Silver et al., “Mastering the game of Go with deep neural networks and tree search,” _Nature_ , vol. 529, no 7587, 2016.
* [18] T. de Bruin et al., “The importance of experience replay database composition in deep reinforcement learning,” _Deep Reinforcement Learning Workshop (NIPS)_ , 2015.
* [19] S. Gu et al., “Deep reinforcement learning for robotic manipulation with asynchronous off-policy updates,” In _International Conference on Robotics and Automation_ , 2017.
* [20] B. Thananjeyan et al., “Multilateral surgical pattern cutting in 2d orthotropic gauze with deep reinforcement learning policies for tensioning,” In _International Conference on Robotics and Automation_ , 2017.
* [21] T. G. Thuruthel et al., “Model-based reinforcement learning for closed-loop dynamic control of soft robotic manipulators,” _IEEE Transactions on Robotics_ , vol. 35, pp.124–134, 2018.
* [22] S. Shalev-Shwartz, S. Shammah, and A. Shashua, “Safe, multi-agent, reinforcement learning for autonomous driving,” _arXiv preprint arXiv:1610.03295_ , 2016.
* [23] T. Nguyen, N. D. Nguyen, F. Bello, and S. Nahavandi, “A new tensioning method using deep reinforcement learning for surgical pattern cutting,” In _2019 IEEE International Conference on Industrial Technology (ICIT)_ , pp. 1339-1344. IEEE, 2019.
* [24] N. D. Nguyen, T. Nguyen, S. Nahavandi, A. Bhatti, and G. Guest, “Manipulating soft tissues by deep reinforcement learning for autonomous robotic surgery,” In _2019 IEEE International Systems Conference (SysCon)_ , pp. 1-7. IEEE, 2019.
* [25] M. Egorov, “Multi-agent deep reinforcement learning,” _CS231n: Convolutional Neural Networks for Visual Recognition_ , 2016.
* [26] G. Palmer et al., “Lenient multi-agent deep reinforcement learning,” In _International Conference on Autonomous Agents and MultiAgent Systems_ , 2018.
* [27] L. Bu, B. Robert, and D.S. Bart, “A comprehensive survey of multiagent reinforcement learning,” _IEEE Transactions on Systems, Man, and Cybernetics_ , vol. 38, pp. 156–172, 2008.
* [28] K. Tuyls and G. Weiss, “Multiagent learning: Basics, challenges, and prospects,” _AI Magazine_ , vol. 33, 2012.
* [29] T. Schaul et al., “Prioritized experience replay,” _arXiv preprint arXiv:1511.05952_ , 2015.
* [30] H. van Hasselt, A. Guez, and D. Silver, “Deep reinforcement learning with double q-learning,” In _Conference on Artificial Intelligence_ , 2016.
* [31] V. Mnih et al., “Asynchronous methods for deep reinforcement learning,” _International Conference on Machine Learning_ , 2016.
* [32] M. Babaeizadeh et al., “Reinforcement learning through asynchronous advantage actor-critic on a gpu,” _arXiv preprint arXiv:1611.06256_ , 2016.
* [33] Y. Wu et al., “Scalable trust-region method for deep reinforcement learning using kronecker-factored approximation,” In _Advances in Neural Information Processing Systems_ , 2017.
* [34] B. Peng et al., “Adversarial advantage actor-critic model for task-completion dialogue policy learning,” In _International Conference on Acoustics, Speech and Signal Processing_ , 2018.
* [35] M. G Bellemare et al., “The arcade learning environment: An evaluation platform for general agents,” _Journal of Artificial Intelligence Research_ , vol. 47, pp. 253–279, 2013.
* [36] M. C. Machado et al., “Revisiting the arcade learning environment: Evaluation protocols and open problems for general agents,” _Journal of Artificial Intelligence Research_ , vol 61, pp. 523–562, 2018.
* [37] G. Brockman et al., “OpenAI Gym,” _arXiv preprint arXiv:1606.01540_ , 2016.
* [38] Z. Wang et al., “Dueling network architectures for deep reinforcement learning,” _arXiv preprint arXiv:1511.06581_ , 2015.
* [39] M. Hausknecht and P. Stone, “Deep recurrent q-learning for partially observable mdps,” In _AAAI Fall Symposium Series_ , 2015.
* [40] I. Sorokin et al., “Deep attention recurrent Q-network,” _arXiv preprint arXiv:1512.01693_ , 2015.
* [41] M. Jaderberg et al., “Reinforcement learning with unsupervised auxiliary tasks,” _arXiv preprint arXiv:1611.05397_ , 2016.
* [42] T. P. Lillicrap et al., “Continuous control with deep reinforcement learning,” _arXiv preprint arXiv:1509.02971_ , 2015.
* [43] J. Schulman et al., “Trust region policy optimization,” In _International Conference on Machine Learning_ , 2015.
* [44] Y. Bengio et al., “Curriculum learning,” _Proceedings of the 26th annual international conference on machine learning_ , 2009.
* [45] L. Jiang et al., “Self-paced curriculum learning,” In _AAAI Conference on Artificial Intelligence_ , 2015.
* [46] J. Z. Leibo et al., “Multi-agent reinforcement learning in sequential social dilemmas,” In _Conference on Autonomous Agents and MultiAgent Systems_ , 2017.
* [47] T. Jaakkola, S. P. Singh, and M. I. Jordan, “Reinforcement learning algorithm for partially observable Markov decision problems,” In _Advances in Neural Information Processing Systems_ , 1995.
* [48] J. Peters and S. Schaal, “Natural actor-critic,” _Neurocomputing_ , vol. 71, no 7–9, pp. 1180-1190, 2008.
* [49] R. S. Sutton et al., “Policy gradient methods for reinforcement learning with function approximation,” In _Advances in Neural Information Processing Systems_ , 2000.
* [50] R. Pascanu, T. Mikolov, and Y. Bengio, “On the difficulty of training recurrent neural networks,” In _International Conference on Machine Learning_ , 2013.
* [51] T. Tieleman and G. Hinton, “Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude,” _COURSERA: Neural Networks for Machine Learning_ , pp. 26–31, 2012.
|
2024-09-04T02:54:54.869523 | 2020-02-27T02:38:47 | 2002.11883 | {
"authors": "Ngoc Duy Nguyen, Thanh Thi Nguyen, Hai Nguyen, Doug Creighton, Saeid\n Nahavandi",
"full_text_license": null,
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"provenance": "arxiv-papers-0000.json.gz:25903",
"submitter": "Thanh Thi Nguyen",
"url": "https://arxiv.org/abs/2002.11883"
} | arxiv-papers | # Review, Analysis and Design of a Comprehensive Deep Reinforcement Learning
Framework
Ngoc Duy Nguyen, Thanh Thi Nguyen, Hai Nguyen, Doug Creighton, Saeid Nahavandi
Ngoc Duy Nguyen, Doug Creighton and Saeid Nahavandi are with the Institute for
Intelligent Systems Research and Innovation, Deakin University, Waurn Ponds
Campus, Geelong, Victoria, Australia (e-mails: <EMAIL_ADDRESS>,
<EMAIL_ADDRESS>, and [email protected]). Thanh Thi
Nguyen is with the School of Information Technology, Deakin University,
Burwood Campus, Melbourne, Victoria, Australia (e-mail:
[email protected]). Hai Nguyen is with Khoury College of Computer
Science, Northeastern University, Boston, USA (e-mail: <EMAIL_ADDRESS>).
###### Abstract
The integration of deep learning into reinforcement learning (RL) has enabled RL
to perform efficiently in high-dimensional environments. Deep RL methods have
been applied to solve many complex real-world problems in recent years.
However, development of a deep RL-based system is challenging because of
various issues such as the selection of a suitable deep RL algorithm, its
network configuration, training time, training methods, and so on. This paper
proposes a comprehensive software framework that not only plays a vital role
in designing a connect-the-dots deep RL architecture but also provides a
guideline to develop a realistic RL application in a short time span. We have
designed and developed a deep RL-based software framework that strictly
ensures flexibility, robustness, and scalability. By inheriting the proposed
architecture, software managers can foresee any challenges when designing a
deep RL-based system. As a result, they can expedite the design process and
actively control every stage of software development, which is especially
critical in agile development environments. To enforce generalization, the
proposed architecture does not depend on a specific RL algorithm, a network
configuration, the number of agents, or the type of agents. Using our
framework, software developers can develop and integrate new RL algorithms or
new types of agents, and can flexibly change network configuration or the
number of agents.
###### Index Terms:
reinforcement learning, deep learning, software architecture, learning
systems, multi-agent systems, human-machine interactions, framework.
## I Introduction
Recent developments in deep learning have been applied to solve various complex
real-world problems. Notably, its integration into reinforcement learning (RL)
has attracted a great deal of research attention. RL conducts a learning
procedure by allowing agents to directly interact with the environment. An RL
agent can imitate the human learning process to achieve a designated goal, _i.e._
, the agent conducts trial-and-error learning (exploration) and draws on
“experience” (exploitation) to improve its behaviors [1, 2]. Therefore, RL is
used in countless domains, such as IT resources management [3], cyber-security
[4], robotics [5, 6, 7, 8], surgical robotics [9, 10], control systems [11,
12], recommendation systems [13], bidding and advertising campaigns [14], and
video games [15, 16, 17, 18, 19]. However, traditional RL methods and dynamic
programming [20], which use a _bootstrapping_ mechanism to approximate the
objective function, cease to work in high-dimensional environments due to
memory and processing power limitations. This so-called issue, _the curse of
dimensionality_ , creates a major challenge in RL literature.
Figure 1: Using a UML sequence diagram to describe an RL problem.
Fig. 1 describes an RL problem by using a _Unified Modeling Language_ (UML)
[21] sequence diagram. Specifically, the problem includes two entities: a
decision maker and the environment. The environment can be an artificial
simulator or a wrapper of the real-world environment. While the environment is
a _passive_ entity, the decision maker is an _active_ entity that periodically
interacts with the environment. In the RL context, a decision maker and an
agent can be interchangeable, though they can be two identified objects from a
software design perspective.
At first, the decision maker perceives a state $s_{t}$ from the environment.
Then it uses its internal model to select the corresponding action $a_{t}$.
The environment interacts with the chosen action $a_{t}$ by sending a
numerical reward $r_{t+1}$ to the decision maker. The environment also brings
the decision maker to a new state $s_{t+1}$. Finally, the decision maker uses
the current transition $\vartheta=\\{a_{t},s_{t},r_{t+1},s_{t+1}\\}$ to update
its decision model. This process is iterated until $t$ equals $T$, where
$s_{T}$ denotes the terminal state of an episode. There are different methods
to develop a decision model, such as fuzzy logic [22], genetic algorithms [23,
24], or dynamic programming [25]. In this paper, however, we consider a deep
neural network as the decision model.
The previous diagram infers that RL is _online_ learning because the model is
updated with incoming data. However, RL can be performed offline via a _batch
learning_ [26] technique. In particular, the current transition $\vartheta$
can be stored in an _experience replay_ [27] and retrieved later to train the
decision model. Finally, the goal of an RL problem is to maximize the expected
sum of discounted reward $R_{t}$, _i.e._ ,
$R_{t}=r_{t+1}+\gamma r_{t+2}+\gamma^{2}r_{t+3}+...+\gamma^{T-t-1}r_{T},$
where $\gamma$ denotes the discount factor and $0<\gamma\leq 1$.
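As a minimal illustration of these two ingredients, the sketch below computes
$R_{t}$ backwards from a reward list and implements a deque-based experience
replay; both are generic textbook constructions, not tied to any particular
framework:

```python
import random
from collections import deque

def discounted_return(rewards, gamma=0.99):
    """R_t = r_{t+1} + gamma * r_{t+2} + ... computed from the back."""
    ret = 0.0
    for r in reversed(rewards):
        ret = r + gamma * ret
    return ret

class ReplayBuffer:
    """Store transitions (a_t, s_t, r_{t+1}, s_{t+1}) for batch learning."""
    def __init__(self, capacity=100_000):
        self.buffer = deque(maxlen=capacity)

    def push(self, transition):
        self.buffer.append(transition)

    def sample(self, batch_size):
        # Convert to a list so random.sample sees a plain sequence.
        return random.sample(list(self.buffer), batch_size)
```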
In 2015, Google DeepMind [28] announced a breakthrough in RL by combining it
with deep learning to create an intelligent agent that can beat a professional
human player in a series of 49 Atari games. The idea was to use a deep neural
network with convolutional layers [29] to directly process raw images (states)
of the game screen to estimate subsequent actions. The study is highly valued
because it opened a new era of RL with deep learning, which partially solves
the curse of dimensionality. In other words, deep learning is a great
complement for RL in a wide range of complicated applications. For instance,
Google DeepMind created a program, AlphaGo, which beat the Go grandmaster, Lee
Sedol, in a best-of-five tournament in 2016 [30]. AlphaGo is a technically
sophisticated AI based on Monte Carlo Tree Search [31], a hybrid network (policy
network and value network), and a self-taught training strategy [32]. Other
applications of deep RL can be found in self-driving cars [33, 34],
helicopters [35], or even NP-hard problems such as _Vehicle Routing Problem_
[36] and combinatorial graph optimization [37].
As stated above, deep RL is crucial owing to its appealing learning mechanism
and widespread applications in the real world. In this study, we further delve
into practical aspects of deep RL by analyzing challenges and solutions while
designing a deep RL-based system. Furthermore, we consider a real-world
scenario where multiple agents, multiple objectives, and human-machine
interactions are involved. Firstly, if we can take advantage of using multiple
agents to accomplish a designated task, we can shorten the _wall time_ ,
_i.e._ , the computational time to execute the assigned task. Depending on the
task, the agents can be _cooperative_ or _competitive_. In the cooperative
mode, agents work in _parallel_ or in _pipeline_ to achieve the task [38]. In
the case of competition, agents are scrambled, which basically raises the
_resource hunting_ problem [39]. However, in contrast to our imagination,
competitive learning can be fruitful. Specifically, the agent is trained
continually to place the opponent into a disadvantaged position and the agent
is improved over time. Because the opponent is also improved over time, this
phenomenon eventually results in the _Nash equilibrium_ [40]. Moreover,
competitive learning originates the self-taught strategy (_e.g._ AlphaGo) and
a series of techniques such as the _Actor-Critic architecture_ [41, 42],
opponent modeling [43, 44], and _Generative Adversarial Networks_ [45].
Finally, we notice the problem of _moving target_ [46, 47] in multi-agent
systems, which describes a scenario when the decision of an agent depends on
other agents, thus the optimal policy becomes _non-stationary_ [47, 48].
Secondly, a real-world objective is often complicated as it normally comprises
multiple sub-goals. It is straightforward if sub-goals are _non-
conflicting_ because they can be seen as a single composite objective. The
more difficult case is when there are _conflicting objectives_. One solution
is to convert a multi-objective problem into a single objective counterpart by
applying _scalarization_ , _e.g._ , a linear weighted sum of the
individual objectives [49] or non-linear methods [50]. These approaches are
categorized as _single-policy methods_. In contrast, the _multi-policy
methods_ [51] seek multiple optimal policies at the same time. Although the
number of multi-policy methods is restricted, it can be powerful. For
instance, the _Convex Hull Value Iteration_ algorithm [52] computes a set of
objective combinations to retrieve all deterministic optimal policies.
Finally, to benchmark a multi-objective method, we find or approximate a
boundary surface, namely the _Pareto front_ , which presents the maximum
attainable performance across different weights (if scalarization is used) [53]. Recent
studies have tried to integrate multi-objective mechanisms into deep RL [54,
55, 56].
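As a minimal sketch of the single-policy (scalarization) approach, a linear
weighted sum collapses $M$ objective rewards into one scalar; the weights are
a design choice:

```python
def scalarize(rewards, weights):
    """Linear scalarization: M objective rewards -> one scalar reward."""
    return sum(w * r for w, r in zip(weights, rewards))

# e.g., a 70/30 trade-off between two conflicting objectives
reward = scalarize([1.0, -0.2], [0.7, 0.3])
```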
TABLE I: Key RL terminologies

Term | Description | Pros | Cons
---|---|---|---
Model-free RL | The environment is a black box; agents mostly conduct a trial-and-error procedure to learn on their own, using rewards to update their decision models | The algorithm does not need a model of the environment | Requires a large number of samples
Model-based RL | Agents construct a model that simulates the environment and use it to generate future episodes; with the model, agents can estimate not only actions but also future states | Speeds up learning and improves sample efficiency | Having an accurate and useful model is often challenging
Temporal difference learning | Uses the TD error to estimate the value function; for example, in Q-learning, $Q(s_{t},a_{t})\leftarrow Q(s_{t},a_{t})+\beta(r_{t}+\gamma\max_{a}Q(s_{t+1},a)-Q(s_{t},a_{t}))$ | Fast convergence, as it does not need to wait until the episode ends | Estimates can be biased
Monte-Carlo method | Estimates the value function by averaging returns of the same state-action pairs across episodes: $Q(s_{t},a_{t})=\lim_{N\to\infty}\frac{1}{N}\sum_{i=1}^{N}Q(s_{t}^{i},a_{t}^{i})$ | The values are non-biased estimates | Slow convergence; estimates have high variance; has to wait until the episode ends to do updates
Continuous action space | The set of control actions is continuous | A policy-based method can be used | Cannot use a value-based method directly
Discrete action space | The set of control actions is discrete and finite | Both policy-based and value-based methods can be used | Intractable if the number of actions is large
Deterministic policy | The policy maps each state to a specific action | Reduces data sampling | Vulnerable to noise and stochastic environments
Stochastic policy | The policy maps each state to a probability distribution over actions | Better exploration | Requires a large number of samples
On-policy method | Improves the current policy that the agent is using to make decisions | Safer to explore | May get stuck in local minimum solutions
Off-policy method | Learns the optimal policy while samples are generated by a behavior policy | Often used with an experience replay (sample reuse) | Instability; might be unsafe because the agent is free to explore
Fully observable environment | All agents can observe the complete state of the environment | Easier to solve than partially observable environments | The number of states can be large
Partially observable environment | Each agent only observes a limited part of the environment | More practical in real-world applications | More difficult to solve, as agents need to remember past states
Last but not least, human-machine interaction is another key factor in a deep
RL-based system. A self-driving car, for example, should accept human
intervention in emergency cases [57, 58]. Therefore, it is critical to ensure
a certain level of safety while designing a hybrid system in which humans and
machines can work together. Due to its importance, Google DeepMind and OpenAI
have presented novel ways that have encouraged a number of inventions in
recent years [59]. For instance, Christiano _et al._ [60] propose a novel
scheme that accepts human feedback during the training process. However, the
method requires an operator to constantly observe the agent’s behavior, which
is an onerous and error-prone task. Recent work [61] provides a more practical
approach by introducing a behavioral control system. The system is used to
control multiple agents in real time via human dictation. Table I summarizes
key terminologies that are widely used in RL contexts.
In summary, the study contributes the following key factors:
* •
The paper presents an overall picture of contemporary deep RL. We briefly
overview state-of-the-art deep RL methods considering three key factors of a
real-world application such as multi-agent learning, multi-objective problems,
and human-machine interactions. Thereafter, the paper offers a checklist for
software managers, a guideline for software designers, and a technical
document for software programmers.
* •
We analyze challenges and difficulties while designing a deep RL-based system
and hence mitigate possible mistakes during the development process. In other
words, software designers can inherit the proposed design, foresee
difficulties, and eventually expedite the entire development procedure,
especially in agile software development.
* •
Finally, the source code of the proposed framework can be found in [62]. Based
on this template, RL beginners can prototype an RL method and develop an RL-
based application in a short time span. As a result, the paper helps bring
deep RL to a wider community.
The paper has the following sections. Section II conducts a brief survey of
state-of-the-art deep RL methods in different research directions. Section III
presents our proposed system architecture, which supports multiple agents,
multiple objectives, and human-machine interactions. Finally, we conclude the
paper in Section IV.
## II Literature Review
TABLE II: Key deep RL methods in literature

Method | Description and Advantage | Technical Requirements | Drawbacks | Implementation
---|---|---|---|---
_Value-based methods_ | | | |
DQN | Uses a deep convolutional network to directly process raw graphical data and approximate the action-value function | Experience replay; target network; Q-learning | Excessive memory usage; learning instability; only for discrete action spaces | [85], [86], [87], [88]
Double DQN | Mitigates DQN's maximization bias problem by using two separate networks: one for estimating the value, one for selecting the action | Double Q-learning | Inherits DQN's drawbacks | [88]
Prioritized Experience Replay | Prioritizes important transitions so that they are sampled more frequently; improves sample efficiency | Importance sampling | Inherits DQN's drawbacks; slower than non-prioritized experience replay (speed) | [85], [88]
Dueling Network | Separates the DQN architecture into two streams: one estimates the state-value function and one estimates the advantage of each action | Dueling network architecture; prioritized replay | Inherits DQN's drawbacks | [88]
Recurrent DQN | Integrates recurrency into DQN; extends the use of DQN to partially observable environments | Long Short-Term Memory | Inherits DQN's drawbacks | [88]
Attention Recurrent DQN | Highlights important regions of the environment during the training process | Attention mechanism; soft attention; hard attention | Inherits DQN's drawbacks | [68]
Rainbow | Combines different techniques in DQN variants to provide state-of-the-art performance on the Atari domain | Double Q-learning; prioritized replay; dueling network; multi-step learning; distributional RL; Noisy Net | Inherits DQN's drawbacks | [85]
_Policy-based methods_ | | | |
A3C/A2C | Uses the actor-critic architecture to directly estimate the agent policy; A3C enables concurrent learning by allowing multiple learners to operate at the same time | Multi-step learning; actor-critic model; advantage function; multi-threading | Policy updates exhibit high variance | [86], [87], [88]
UNREAL | Uses A3C and multiple unsupervised reward signals to improve learning efficiency in complicated environments | Unsupervised reward signals | Policy updates exhibit high variance | [89]
DDPG | Concurrently learns a deterministic policy and a Q-function in DQN's fashion | Deterministic policy gradient | Supports only continuous action spaces | [87], [88]
TRPO | Limits policy update variance by using the conjugate gradient to estimate the natural policy gradient; better than DDPG in terms of sample efficiency | Kullback-Leibler divergence; conjugate gradient; natural policy gradient | Computationally expensive; large batches of rollouts; hard to implement | [87], [88]
ACKTR | Inherits the A2C method; uses Kronecker-factored approximation to reduce the computational complexity of TRPO; outperforms TRPO and A2C | Kronecker-factored approximate curvature | Still complex | [87]
ACER | Integrates an experience replay into A3C; introduces a light-weight version of TRPO; outperforms TRPO and A3C | Importance weight truncation & bias correction; efficient TRPO | Excessive memory usage; still complex | [87], [88]
PPO | Simplifies the implementation of TRPO by using a clipped "surrogate" objective function; achieves the best performance in continuous control tasks | Clipped objective; adaptive KL penalty coefficient | Requires network tuning | [86], [87], [88]
### II-A Single-Agent Method
The first advent of deep RL, _Deep Q-Network_ (DQN) [28, 63], basically uses a
deep neural network to estimate values of state-action pairs via a _Q-value
function_ (_a.k.a._ , _action-value function_ or $Q(s,a)$). Thereafter, a
number of variants based on DQN were introduced to improve the original
algorithm. Typical extensions can be examined such as _Double DQN_ [64],
_Dueling Network_ [65], _Prioritized Experience Replay_ [66], _Recurrent DQN_
[67], _Attention Recurrent DQN_ [68], and an ensemble method named _Rainbow_
[69]. These approaches use an experience replay to store historical
transitions and retrieve them in batches to train the network, which mitigates
the correlation of the sequential data. Moreover, a separate _target network_
can be used to stabilize learning by keeping the bootstrap targets fixed
between periodic updates.
Instead of estimating the action-value function, we can directly approximate
the agent’s policy $\pi(s)$. This approach is known as the _policy gradient_
or _policy-based_ method. _Asynchronous Advantage Actor-Critic_ (A3C) [70] is
one of the first policy-based deep RL methods to appear in the literature. In
particular, A3C includes two networks: an actor network that is used to
estimate the agent policy $\pi(s)$ and a critic network that is used to
estimate the _state-value function_ $V(s)$. Additionally, to stabilize the
learning process, A3C uses the _advantage function_ , _i.e._ ,
$A(s,a)=Q(s,a)-V(s)$. There is a synchronous version of A3C, namely A2C [70],
which has the advantage of being simpler but with comparable or better
performance. A2C mitigates the risk that multiple learners overwrite one
another when updating the weights of the global networks.
There have been a great number of policy gradient methods since the
development of A3C. For instance, _UNsupervised REinforcement and Auxiliary
Learning_ (UNREAL) [71] uses multiple unsupervised pseudo-reward signals at
the same time to improve the learning efficiency in complicated environments.
Rather than estimating a stochastic policy, _Deterministic Policy Gradient_
[72] (DPG) finds a deterministic policy, which significantly reduces data
sampling. Moreover, _Deep Deterministic Policy Gradient_ [73] (DDPG) combines
DPG with DQN to enable the learning of a deterministic policy in a continuous
action space using the actor-critic architecture. The authors in [74] even
propose _Multi-agent DDPG_ (MADDPG), which employs DDPG in multi-agent
environments. To further stabilize the training process, the authors in [75]
introduce the _Trust Region Policy Optimization_ (TRPO) method, which
integrates the _Kullback–Leibler divergence_ [76] into the training procedure.
However, the implementation of the method is complicated. In 2017, Wu _et al._
[77] proposed _Actor-Critic using Kronecker-Factored Trust Region_ (ACKTR),
which applies Kronecker-factored approximation curvature into gradient update
steps. Additionally, the authors in [78] introduced an efficient off-policy
sampling method based on A3C and an experience replay, namely _Actor-Critic
with Experience Replay_ (ACER). To simplify the implementation of TRPO, ACKTR,
and ACER, _Proximal Policy Optimization_ (PPO) [79] is introduced by using a
clipped “surrogate” objective function together with stochastic gradient
ascent. Finally, some studies combine a policy-based and value-based method
such as [80, 81, 82] or an on-policy and off-policy method such as [83, 84].
Table II summarizes key deep RL methods and their reliable implementation
repositories. Based on specific application domains, software managers can
select a suitable deep RL method to act as a baseline for the target system.
### II-B Multi-Agent Method
In multi-agent learning, there are two widely used schemes in the literature:
_individual_ and _mutual_. In the first case, each agent in the system can be
considered as an independent decision maker and other agents as a part of the
environment. In this way, any deep RL methods in the previous subsection can
be used in multi-agent learning. For instance, Tampuu _et al._ [90] used DQN
to create an independent policy for each agent. The authors analyze the
behavioral convergence of the involved agents with respect to cooperation and
competition. Similarly, Leibo _et al._ [91] introduced a sequential social
dilemma, which basically uses DQN to analyze the agent’s strategy in Markov
games such as Prisoner’s Dilemma, Fruit Gathering, and Wolfpack. However, the
approach limits the number of agents because the computational complexity
increases with the number of policies. To overcome this obstacle, Nguyen _et
al._ developed a behavioral control system [61] in which homogeneous agents
can share the same policy. As a result, the method is robust and scalable.
Another problem in multi-agent learning is the use of an experience replay,
which amplifies the non-stationary problem that occurs due to asynchronous
data sampling of different agents [92, 93]. A lenient approach [94] can subdue
the problem by mapping transitions into decaying temperature values, which
basically controls the magnitude of updating different policies.
In the mutual scheme, agents can “speak” with each other via an established
communication channel. Moreover, agents are often trained in a centralized
manner but eventually operate in a decentralized fashion when deployed [95].
In other words, a multi-agent RL problem can be divided into two sub-problems:
a goal-directed problem and a communication problem. Specifically, Foerster
_et al._ [96] introduced two communication schemes based on the centralized-
decentralized rationale: _Reinforced Inter-Agent Learning_ (RIAL) and
_Differentiable Inter-Agent Learning_ (DIAL). While RIAL reinforces agents’
learning by sharing parameters, DIAL allows _inter-communication_ between
agents via a shared medium. Both methods, however, operate with a discrete
number of communication actions. As opposed to RIAL and DIAL, the authors in
[97] introduce a novel network architecture, namely _Communication Neural Net_
(CommNet), which enables communication by using a continuous vector. As a
result, the agents are trained to learn to communicate by backpropagation.
However, CommNet limits the number of agents due to the increase of
computational complexity. To make it scalable, Gupta _et al._ [98] introduced
a parameter sharing method that can handle a large number of agents. However,
the method only works with homogeneous systems. Finally, Nguyen _et al._ [61]
extended the Gupta’s study to heterogeneous systems by designing a behavioral
control system. For rigorous study, a complete survey on multi-agent RL can be
found in [99, 100, 101].
In summary, it is critical to address the following factors in multi-agent
learning because they have a great impact on the target software architecture:
* •
It is preferable to employ the centralized-decentralized rationale in a multi-
agent RL-based system because the training process is time-consuming and
computationally expensive. A working system may require hundreds to thousands
of training sessions by searching through the hyper-parameter space to find
the optimal solution.
* •
The communication between agents can be _realistic_ or _imaginary_. In
realistic communication, agents “speak” with each other using an established
communication protocol. However, there is no actual channel in imaginary
communication. The agents are trained to collaborate using a specialized
network architecture. For instance, OpenAI [102] proposes an actor-critic
architecture where the critic is augmented with other agents’ policy
information. As a result, the two methods can differentiate how to design an
RL-based system.
* •
A partially observable environment has a great impact on designing a multi-
agent system because each agent has its own unique perspective of the
environment. Therefore, it is important to first carefully examine the
environment and application type to avoid any malfunction in the design.
### II-C RL Challenges
In this subsection, we briefly review major challenges while designing a deep
RL-based system and corresponding solutions. To remain concise, the proposed
framework is not concerned with these techniques but it is straightforward to
extend the architecture to support them.
_Catastrophic forgetting_ is a problem that occurs in _continual learning_ and
_multi-task learning_ , _i.e._ , when another task is trained after learning the
first task by using the same neural network. In this case, the neural network
gradually forgets the knowledge of the first task to adopt the new one. One
solution is to use regularization [103, 104] or a dense neural network [105,
106]. However, these approaches are only feasible with a limited number of
tasks. Recent studies introduce more scalable approaches such as _Elastic
Weight Consolidation_ (EWC) [107] or _PathNet_ [108]. While EWC finds a
configuration of the network that yields the best performance in different
tasks, PathNet uses a “super” neural network to fulfill the knowledge of
different tasks in different paths.
_Policy distillation_ [109] or _transfer learning_ [110, 111] can be used to
train an agent to learn individual tasks and collectively transfer the
knowledge to a single network. Transfer learning is often used when the actual
experiment is expensive and intractable. In this case, the network is trained
with simulations and later is deployed into the target experiment. However, a
_negative transfer_ may occur when the performance of the learner is lower
than the trainer. The authors in [112] introduced _Hierarchical Prioritized
Experience Replay_ that uses high-level features of the task and selects
important data from the experience replay to mitigate the negative transfer.
One recent study [113] aligned learning in simulation with the actual
experiment to achieve comparable performance between the two.
Another obstacle in RL is dealing with _long-horizon_ environments with
_sparse rewards_. In such tasks, the agent hardly receives any reward and
easily gets stuck in local minimum solutions. One straightforward solution is
to use _reward shaping_ [114] that continuously instructs the agent to achieve
the objective. The problem can also be divided into a hierarchical tree of
sub-problems where the parent problem has a higher abstraction than the child
problem (_Hierarchical RL_) [115]. To encourage self-exploration, the authors
in [116] introduced intrinsic reward signals to reinforce the agent to make a
generic decision. State-of-the-art methods of _intrinsic motivation_ can be
found in [117, 118, 119]. Finally, Andrychowicz _et al._ [120] propose
_Hindsight Experience Replay_ that implicitly simulates _curriculum learning_
[121] by creating imagined trajectories in the experience replay with positive
rewards. In this way, the agent can learn from failures and automatically
generalize a solution in success cases.
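The following compact sketch conveys the hindsight relabeling idea behind
[120], assuming goal-conditioned transitions with comparable states and a
binary goal-reached reward; the details (goal sampling strategies, reward
shape) differ in the original algorithm:

```python
def hindsight_relabel(episode):
    """Relabel a failed episode with the goal it actually achieved.

    episode: list of (state, action, next_state, goal) tuples.
    Returns transitions in which the final achieved state replaces the
    original goal, so the stored trajectory receives positive reward.
    """
    achieved = episode[-1][2]                    # final next_state
    relabeled = []
    for state, action, next_state, _ in episode:
        reward = 1.0 if next_state == achieved else 0.0
        relabeled.append((state, action, reward, next_state, achieved))
    return relabeled
```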
Finally, a variety of RL-related techniques are proposed to make RL feasible
in large-scale applications. One approach is to augment the neural network
with a “memory” to enhance sample efficiency in complicated environments [122,
123]. Additionally, to enforce scalability, many distributed methods have been
proposed such as _Distributed Experience Replay_ [124], deep RL acceleration
[125], and distributed deep RL [126]. Finally, _imitation learning_ can be
used together with _inverse RL_ to speed up training by directly learning from
expert demonstrations and extracting the expert’s cost function [127].
### II-D Deep RL Framework
In this subsection, we discuss the latest deep RL frameworks in the
literature. We select the libraries based on different factors including
Python-based implementation, clear documentation, reliability, and active
community. Based on our analysis, software managers can select a suitable
framework depending on project requirements.
Figure 2: A “pyramid” software architecture.
* •
_Chainer_ – Chainer [88] is a powerful and flexible framework for neural
networks. The framework is currently supported by IBM, Intel, Microsoft, and
Nvidia. It provides an easy way to manipulate neural networks such as by
creating a customized network, visualizing a computational graph, and
supporting a debug mode. It also implements a variety of deep RL methods.
However, the Chainer’s architecture is complicated and requires a great effort
to develop a new deep RL method. The number of integrated environments is also
limited, _e.g._ , Atari [128], OpenAI Gym [129], and Mujoco [130].
* •
_Keras-RL_ – Keras-RL [131] is a friendly deep RL library, which is
recommended for deep RL beginners. However, the library provides a limited
number of deep RL methods and environments.
* •
_TensorForce_ – TensorForce [133] is an ambitious project that targets both
industrial applications and academic research. The library has the best
modular architecture we have reviewed so far. Therefore, it is convenient to
use the framework to integrate customized environments, modify network
configurations, and tweak deep RL algorithms. However, the framework has a
deep software stack (“pyramid” model) that includes many abstraction layers,
as shown in Fig. 2. This hinders novice readers from prototyping a new deep RL
method.
* •
_OpenAI Baselines_ – OpenAI Baselines [87] is a high-quality framework for
contemporary deep RL. In contrast to TensorForce, the library is suitable for
researchers who want to reproduce original results. However, OpenAI Baselines
is unstructured and lacks cohesion.
* •
_RLLib_ – RLLib [86] is a well-designed deep RL framework that provides a
means to deploy deep RL in distributed systems. However, this distributed
focus makes RLLib less friendly for RL beginners.
* •
_RLLab_ – RLLab [136] provides a diversity of deep RL methods including TRPO,
DDPG, Cross-Entropy Method, and Evolutionary Strategy. The library is friendly
to use but not straightforward to modify.
In summary, most deep RL frameworks focus on the performance of deep RL
methods. As a result, those frameworks sacrifice code legibility, which
restricts RL beginners from reading and modifying the code. In this paper, we
propose a comprehensive framework that has the following properties:
* •
Allow new users to prototype a deep RL method in a short period of time by
following a modular design. As opposed to TensorForce, we limit the number of
abstraction layers and avoid the pyramid structure.
* •
The framework is friendly with a simplified user interface. We provide an API
based on three key concepts: policy network, network configuration, and
learner.
* •
Enforce scalability and generalization while supporting multiple agents,
multiple objectives, and human-machine interactions.
* •
Finally, we introduce a concept of unification and transparency by creating
plugins. Plugins are gateways that extract learners from other libraries and
plug them into our proposed framework. In this way, users can interact with
different frameworks using the same interface.
## III Software Architecture
In this section, we examine core components towards designing a comprehensive
deep RL framework, which basically employs generality, flexibility, and
interoperability. We aim to support a broad range of RL-related applications
that involve multiple agents, multiple objectives, and human-agent
interaction. We use the following pseudocode to describe a function signature:
$\bullet\;\textbf{function\_name}([param1,param2,\dots])\rightarrow[return1,return2,\dots]\text{ or }\{return1,return2,\dots\}\text{ or }A$
where $\rightarrow$ denotes a return operation, $A$ is a scalar value, $[...]$
denotes an array, and $\\{...\\}$ denotes a list of possible values of a
single variable.
### III-A Environment
First, we create a unique interface for the _environment_ to establish a
communication channel between the framework and agents. To reduce complexity,
we place any human-related communication inside the environment. As a
result, human interactions can be seen as a part of the environment and are
hidden from the framework, _i.e._ , the environment provides two interfaces:
one for the framework and one for human, as shown in Fig. 3. While the
framework interface is often in programming level (functions), the human
interface has a higher abstraction mostly in human understanding forms such as
voice dictation, gesture recognition, or control system.
Figure 3: A conceptual model of the environment with a human interface.
Essentially, the environment’s framework interface should provide the
following functions (a minimal sketch is given after the list):
* •
clone(): the environment can duplicate itself. The function is useful when an
RL algorithm requires multiple learners at the same time (_e.g._ A3C).
* •
reset(): reset the environment to the initial state. The function must be
called after or before an episode.
* •
step($[a_{1},a_{2},...,a_{N}]$) $\rightarrow$ [$r_{1},r_{2},...,r_{M}$]:
executes $N$ specified actions of $N$ agents in the environment. The function
returns $M$ rewards, each of which represents an objective function.
* •
get_state() $\rightarrow$ [$s_{1},s_{2},...,s_{N}$]: retrieves the current
states of the environment. If the environment is a partially observable MDP,
the function returns $N$ states, each of which represents the current state
of an agent. However, if the environment is a fully observable MDP, we have
$s_{1}=s_{2}=...=s_{N}=s$.
* •
is_terminal() $\rightarrow$ {True, False}: checks if the episode is
terminated.
* •
get_number_of_objectives() $\rightarrow$ $M$: is a helper function that
indicates the number of objectives in the environment.
* •
get_number_of_agents() $\rightarrow$ $N$: is a helper function that indicates
the number of agents in the environment.
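To make this concrete, below is a minimal Python sketch of such an interface.
The class name `BaseEnvironment` and the default helper bodies are our own
illustrative assumptions; the abstract method names mirror the list above.

```python
from abc import ABC, abstractmethod

class BaseEnvironment(ABC):
    """Minimal sketch of the framework-facing environment interface."""

    @abstractmethod
    def clone(self):
        """Return an independent copy of the environment (e.g., for A3C)."""

    @abstractmethod
    def reset(self):
        """Reset the environment to its initial state before/after an episode."""

    @abstractmethod
    def step(self, actions):
        """Execute one action per agent; return one reward per objective."""

    @abstractmethod
    def get_state(self):
        """Return the current state(s), one per agent if partially observable."""

    @abstractmethod
    def is_terminal(self):
        """Return True if the current episode has terminated."""

    def get_number_of_objectives(self):
        return 1  # helper; override for multi-objective environments

    def get_number_of_agents(self):
        return 1  # helper; override for multi-agent environments
```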
Finally, it is important to consider the following questions while designing
an environment component as they have a great impact on subsequent design
stages:
* •
_Is it a simulator or a wrapper?_ In the case of a wrapper, the environment is
already developed and configured. Our duty is to develop a wrapper interface
that can compatibly interact with the framework. In contrast to the wrapper,
developing a simulator is complicated and requires expert knowledge. In real-
time applications, we may first develop a simulator in C/C++ (for better
performance) and then create a Python wrapper interface (for easier
integration). In this case, we need to develop both a simulator and a wrapper.
* •
_Is it stochastic or deterministic?_ Basically, a stochastic environment is
more challenging to implement than a deterministic one. There are potential
factors that are likely to contribute to the randomness of the environment.
For example, a company intends to open a bike rental service. _N_ bikes are
equally distributed over _M_ potential places. However, at a specific time,
place _A_ still has plenty of bikes because there are no customers. As a
result, bikes in place _A_ are delivered to other places where the demand is
higher. The company seeks to develop an algorithm that can balance the number
of bikes in each place over time. In this example, the bike rental service is
a stochastic environment. We can start by building a simple stochastic model
based on a Poisson distribution to represent the rental demand in each place
(a small sketch of such a model is given after this list) and may end up with
a complicated model based on a set of observable factors such as rush hour,
weather, weekend, festival, etc. Depending on the stochasticity of the model,
we can decide to use a model-based or model-free RL method.
* •
_Is it complete or incomplete?_ A complete environment provides, at any time,
sufficient information to construct a branch of possible moves in the future
(_e.g._ Chess or Go). The completeness can help to decide on an effective RL
method later. For instance, a complete environment can be solved with careful
planning rather than a trial-and-error approach.
* •
_Is it fully observable or partially observable?_ The observability of
environment is essential when designing a deep neural network. Partially
observable environments might require recurrent layers or an attention
mechanism to enhance the network’s capacity during the training. A self-
driving scenario is partially observable while a board game is fully
observable.
* •
_Is it continuous or discrete?_ As described in Table I, this factor is
important for determining the type of method used (policy-based or value-
based) and the network configuration (such as actor-critic architectures).
* •
_How many objectives does it have?_ Real-world applications often have
multiple objectives. If the importance weights between objectives can be
identified in the beginning, it is reasonable to use a single-policy RL
method. Alternatively, a multi-policy RL method can prioritize the importance
of an objective in real time.
* •
_How many agents does it have?_ A multi-agent RL-based system is much more
complicated than a single-agent counterpart. Therefore, it is essential to
analyze the following factors of a multi-agent system before delving into the
design: the number of agents, the type of agents, communication abilities,
cooperation strategies, and competitive potentials.
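Returning to the stochasticity question, the following is a small sketch of
the Poisson demand model suggested for the bike-rental example. The rates,
rush hours, and scaling factors are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_rental_demand(base_rates, hour, is_weekend):
    """Sample per-station rental demand from a Poisson model.

    `base_rates` holds the average demand of each station; the rate is
    (hypothetically) scaled up during rush hours and down on weekends.
    """
    rates = np.asarray(base_rates, dtype=float)
    if hour in (8, 9, 17, 18):   # assumed rush hours
        rates = rates * 1.5
    if is_weekend:
        rates = rates * 0.7
    return rng.poisson(rates)

# Example: 4 stations, 9am on a weekday.
print(sample_rental_demand([5.0, 2.0, 8.0, 3.0], hour=9, is_weekend=False))
```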
### III-B Network
The _neural network_ is also a key module of our proposed framework, which
includes a network configuration and a policy network, as illustrated in Fig.
4. A network configuration defines the deep neural network architecture
(_e.g._ CNN or LSTM), loss functions (_e.g._ Mean Square Error or Cross
Entropy Loss), and optimization methods (_e.g._ Adam or SGD). Depending on the
project’s requirements, a configuration can be divided into different
abstraction layers, where the lower abstraction layer is used as a mapping
layer for the higher abstraction layer. In the lowest abstraction level
(programming language), a configuration is implemented by a deep learning
library, such as Pytorch [134] (with dynamic graph) or TensorFlow (with static
graph). The next layer is to use a scripting language, such as _xml_ or _json_
, to describe the network configuration. This level is useful because it
provides a faster and easier way to configure a network setting. For those,
such as system analysts, who do not have much knowledge of implementation
details, a graphical user interface can assist them. However, there is a trade-off
here: the higher abstraction layer achieves better usability and productivity
but has a longer development cycle.
A policy network is a composite component that includes a number of network
configurations. However, the dependency between a policy network and a
configuration can be weak, _i.e._ , an _aggregation_ relationship. The policy
network’s objective is twofold. It provides a high-level interface that
maintains connectivity with other modules in the framework, and it initializes
the network, saves the network’s parameters into checkpoints, and restores the
network’s parameters from checkpoints. Finally, the neural network interface
should provide the following functions (a minimal sketch follows the list):
Figure 4: A neural network module includes a network configuration and a
policy network.
* •
create_network() $\rightarrow$ [$\theta_{1},\theta_{2},...,\theta_{K}$]:
instantiates a deep neural network by using a set of network configurations.
The function returns the network’s parameters (references)
$\theta_{1},\theta_{2},...,\theta_{K}$.
* •
save_model(): saves the current network’s parameters into a checkpoint file.
* •
load_model([chk]): restores the current network’s parameters from a specified
checkpoint file _chk_.
* •
predict([$s_{1},s_{2},...,s_{N}$]) $\rightarrow$ [$a_{1},a_{2},...,a_{N}$]:
given the current states of $N$ agents $s_{1},s_{2},...,s_{N}$, the
function uses the network to predict the next $N$ actions
$a_{1},a_{2},...,a_{N}$.
* •
train_network([data_dict]): trains the network by using the given data
dictionary. The data dictionary often includes the current states, current
actions, next states, terminal flags, and miscellaneous information (global
time step or objective weights) of $N$ agents.
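A minimal sketch of this interface follows; a single linear map stands in for
a real configuration-built deep network, so the method bodies are purely
illustrative.

```python
import numpy as np

class PolicyNetwork:
    """Sketch of the policy-network interface described above."""

    def __init__(self, state_dim, num_actions):
        # create_network(): here one linear map stands in for a deep
        # network built from one or more configuration objects.
        self.theta = np.zeros((state_dim, num_actions))

    def save_model(self, chk):
        np.save(chk, self.theta)  # checkpoint the parameters

    def load_model(self, chk):
        self.theta = np.load(chk)  # restore from a checkpoint file

    def predict(self, states):
        # One greedy action per agent from the stand-in value estimates.
        return [int(np.argmax(np.asarray(s) @ self.theta)) for s in states]

    def train_network(self, data_dict):
        # A real implementation would run one optimizer step on the
        # states/actions/rewards held in data_dict; omitted in this sketch.
        pass
```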
### III-C Learner
Figure 5: A high-level design of a learner module.
The last key module of our proposed framework is a _learner_ , as shown in
Fig. 5. While the environment module and the network module create the
application’s shell, the learner plays as an engine that allows the system to
operate properly. The three modules jointly create the backbone of the system.
In particular, the learner uses the environment module to generate episodes.
It manages the experience replay memory and defines the RL implementation
details, such as multi-step learning, multi-threading, or reward shaping. The
learner is often created together with a monitor. The monitor is used to
manage multiple learners (if multi-threading is used) and collect any data
from the learners during training, such as performance information for
debugging purposes and post-evaluation reports. Finally, the learner collects
necessary data, packs it into a dictionary, and sends it to the network module
for training.
Additionally, a _factory_ pattern [135] can be used to hide the operating
details between the monitor and the learners. As a result, the factory
component promotes higher abstraction and usability through a simplified user
API as below (a small usage sketch follows the list):
* •
create_learner([monitor_dict, learner_dict]) $\rightarrow$ obj: The factory
creates a learner by using the monitor’s data dictionary (batch size, the
number of epochs, report frequency, etc) and the learner’s data dictionary
(the number of threads, epsilon values, reward clipping thresholds, etc.).
* •
train(): trains the generated learner.
* •
evaluate(): evaluates the generated learner.
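A small self-contained sketch of how such a factory could be wired and used
is shown below; the class names, registry mechanism, and dictionary keys are
illustrative assumptions rather than the framework’s actual API.

```python
class AgentFactory:
    """Sketch of a factory hiding the monitor/learner wiring."""

    registry = {}  # maps a learner name to its class

    @classmethod
    def register(cls, name, learner_cls):
        cls.registry[name] = learner_cls

    @classmethod
    def create_learner(cls, monitor_dict, learner_dict):
        # Instantiate the requested learner from the two data dictionaries.
        return cls.registry[learner_dict["method"]](monitor_dict, learner_dict)


class QLearner:
    def __init__(self, monitor_dict, learner_dict):
        self.epochs = monitor_dict.get("num_of_epochs", 10)

    def train(self):
        print(f"training for {self.epochs} epochs")

    def evaluate(self):
        return 0.0  # placeholder post-evaluation score


AgentFactory.register("q_learning", QLearner)
learner = AgentFactory.create_learner({"num_of_epochs": 5},
                                      {"method": "q_learning"})
learner.train()            # -> training for 5 epochs
print(learner.evaluate())
```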
### III-D Plugin
There have been a great number of RL methods in the literature. Therefore, it
is impractical to implement all of them. However, we can reuse the
implementation from existing libraries such as TensorForce, OpenAI Baselines,
or RLLab. To enforce flexibility and interoperability, we introduce a concept
of unification by using plugins. A plugin is a piece of program that extracts
learners or network configurations from third party libraries and plugs them
into our framework. As a result, the integrated framework provides a unique
user API but supports a variety of RL methods. In this way, users do not need
to learn different libraries. The concept of unification is described in Fig.
6.
Figure 6: A unification of different RL libraries by using plugins.
A plugin can also act as a conversion program that converts the environment
interface of one library into the environment interface of another.
As a result, the proposed framework can work with any environments in third
party libraries and vice versa. Therefore, a plugin should include the
following functions (a minimal sketch follows the list):
* •
convert_environment([source_env]) $\rightarrow$ target_env: converts the
environment’s interface from the source library to the environment’s interface
defined in the target library.
* •
extract_learner([param_dict]) $\rightarrow$ learner: extracts the learner from
the target library.
* •
extract_configuration([param_dict]) $\rightarrow$ config: extracts the network
configuration from the target library.
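Below is a minimal sketch of a plugin skeleton; the delegation logic is a
placeholder (a real plugin would import and wrap, e.g., TensorForce), so all
names are hypothetical.

```python
class ThirdPartyPlugin:
    """Sketch of a plugin that adapts a third-party library."""

    def convert_environment(self, source_env):
        # Adapt method names so the source environment matches our
        # interface (reset/step/is_terminal, etc.).
        class TargetEnv:
            def reset(self):
                return source_env.reset()

            def step(self, actions):
                return source_env.step(actions)

            def is_terminal(self):
                return source_env.is_terminal()

        return TargetEnv()

    def extract_learner(self, param_dict):
        raise NotImplementedError("wrap the third-party agent here")

    def extract_configuration(self, param_dict):
        raise NotImplementedError("wrap the third-party network spec here")
```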
### III-E Overall Structure
Figure 7: A UML sequential diagram of the training process.
Figure 8: A UML sequential diagram of the evaluation process.
TABLE III: Demonstration codes of different use cases [62].
Use case | Description | Source Code
---|---|---
1\. How to inherit an existing learner? | Develop a Monte-Carlo learner that inherits the existing Q-Learning learner | fruit/learners/mc.py
2\. Develop a new environment | Create a Grid World [1] environment that follows the framework’s interface | fruit/envs/games/grid_world/
3\. Develop a multi-agent environment with human-agent interaction | Create a Tank Battle [61] game in which humans and AI agents can play together | fruit/envs/games/tank_battle/
4\. Multi-objective environment and multi-objective RL | Use a multi-objective learner (MO Q-Learning) to train an agent to play Mountain Car [51] | fruit/samples/basic/multi_objectives_test.py
5\. Multi-agent learner with human-agent interaction | Create a multi-agent RL method based on A3C [98] and apply it to Tank Battle | fruit/samples/tutorials/chapter_6.py
6\. How to use a plugin? | Develop a TensorForce plugin, extract the PPO learner, and train an agent to play Cart Pole [1] | fruit/plugins/quick_start.py
Putting everything together, we have a sequential diagram of the training
process, as described in Fig. 7. The workflow breaks down the training process
into smaller procedures. Firstly, the factory instantiates a specified learner
(or a plugin) and sends its reference to the monitor. The monitor clones the
learner into multiple learner threads. Each learner thread is run until the
number of epochs exceeds a predefined value, $K$. The second loop within the
learner thread is used to generate episodes. In each episode, a learner thread
perceives the current states of the environment and predicts next actions
using the policy network and its network configuration. The next actions are
applied to the environment. The environment returns next states and a terminal
flag. Finally, the policy network is trained every $L$ steps. There are
minor changes in the evaluation process, as shown in Fig. 8. First, the policy
network’s parameters are restored from a specified checkpoint file while
initializing the learner. Second, all training procedure calls are discarded
while generating episodes.
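The workflow of Figs. 7 and 8 can be summarized in a short sketch that
assumes the environment and network interfaces sketched in Sections III-A and
III-B; the flag `training` toggles between the two diagrams.

```python
def run_learner_thread(env, policy, K, L, training=True):
    """Sketch of one learner thread; training=False reproduces the
    evaluation workflow (no train_network calls)."""
    for epoch in range(K):          # outer loop over epochs
        env.reset()
        step, batch = 0, []
        while not env.is_terminal():  # inner loop: one episode
            states = env.get_state()
            actions = policy.predict(states)
            rewards = env.step(actions)
            batch.append((states, actions, rewards))
            step += 1
            if training and step % L == 0:  # train every L steps
                policy.train_network({"batch": batch})
                batch = []
```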
To enhance usability and reduce redundancy, it is advisable to implement the
framework in _Object-Oriented Programming_ (OOP). In this way, a new learner
(configuration) can be easily developed by inheriting existing learners
(configurations) in the framework, as shown in Fig. 9.
Figure 9: An inheritance relationship between learners and configurations.
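For instance, reusing the hypothetical `QLearner` from the factory sketch in
Section III-C, the Monte-Carlo learner of use case 1 in Table III could be
prototyped by overriding a single method:

```python
class MonteCarloLearner(QLearner):
    """Hypothetical learner inheriting the QLearner sketch above."""

    def train(self):
        # Override only the update rule: Monte-Carlo targets use full
        # episode returns instead of bootstrapped estimates.
        print(f"Monte-Carlo training for {self.epochs} epochs")
```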
## IV Conclusions
It is promising to see many successful applications of deep RL methods in
recent years. This paper has briefly reviewed recent advances in the RL
literature with respect to multi-agent learning, multi-objective learning, and
human-machine interactions. We also examine different deep RL libraries and
analyze their limitations. Finally, we propose a novel design of a deep RL
framework that exhibits great potential in terms of usability, flexibility,
and interoperability. We highlight important design considerations so that
software managers can avoid possible mistakes while designing an RL-based
application.
The proposed framework can be considered as a _template_ to design a real-
world RL-based application. Because the framework is developed in OOP, it is
beneficial to utilize OOP principles, such as inheritance, polymorphism, and
encapsulation to expedite the whole development process. We deliberately created
a “soft” software layer stack, where the number of modules is minimal while
maintaining a certain level of cohesion. As a result, the learning curve is
not steep. By providing a simplified API, the framework is suitable for novice
readers who are new to deep RL, especially software engineers. Finally, the
framework acts as a bridge to connect different RL communities around the
world.
Our ultimate goal is to build an educational software platform for deep RL.
The next development milestone includes three steps:
* •
Implement a variety of plugins for the proposed framework.
* •
Develop a GUI application that can configure the neural network, modify the
learner, and visualize the RL workflow.
* •
Complete documentation including tutorials and sample codes.
## Appendix
To keep the paper brief, we provide documentation of the proposed framework as
online materials [136]. These include an installation guide, code samples,
benchmark scores, tutorials, an API reference guide, a class diagram, and a
package diagram. Table III lists demonstration codes of different use cases
(codebase [62]).
## Acknowledgement
The authors wish to thank our colleagues in the Institute for Intelligent
Systems Research and Innovation for their comments and helpful discussions. We
truly appreciate Nguyen Chau, a principal IT product manager at Atlassian, who
shared his expertise in the field to eradicate any misunderstandings in this
paper. We also thank Dr. Thanh Nguyen, University of Chicago, for being an
active adviser in the design process. Finally, we extend our gratitude to the
RL community, which provided crucial feedback during the project’s beta testing.
## References
* [1] R. S. Sutton and G. B. Andrew, _Introduction to Reinforcement Learning_. Cambridge: MIT press, 1998.
* [2] N. D. Nguyen, T. Nguyen, and S. Nahavandi, “System design perspective for human-level agents using deep reinforcement learning: A survey,” _IEEE Access_ , vol. 5, pp. 27091–27102, 2017.
* [3] H. Mao, M. Alizadeh, I. Menache, and S. Kandula, “Resource management with deep reinforcement learning,” In _Proceedings of the 15th ACM Workshop on Hot Topics in Networks_ , pp. 50–56, 2016.
* [4] T. T. Nguyen, and V. J. Reddi, “Deep reinforcement learning for cyber security,” _arXiv preprint arXiv:1906.05799_ , 2019.
* [5] D. Fox, W. Burgard, H. Kruppa, and S. Thrun, “A probabilistic approach to collaborative multi-robot localization,” _Autonomous Robots_ , vol. 8, no. 3, pp. 325–344, 2000.
* [6] M. Riedmiller, T. Gabel, R. Hafner, and S. Lange, “Reinforcement learning for robot soccer,” _Autonomous Robots_ , vol. 27, no. 1, pp. 55–73, 2009.
* [7] K. Mülling, J. Kober, O. Kroemer, and J. Peters, “Learning to select and generalize striking movements in robot table tennis,” _International Journal of Robotics Research_ , vol. 32, no. 3, pp. 263–279, 2013.
* [8] T. G. Thuruthel et al., “Model-based reinforcement learning for closed-loop dynamic control of soft robotic manipulators,” _IEEE Transactions on Robotics_ , vol. 35, pp.124–134, 2018.
* [9] T. Nguyen, N. D. Nguyen, F. Bello, and S. Nahavandi, “A new tensioning method using deep reinforcement learning for surgical pattern cutting,” In _2019 IEEE International Conference on Industrial Technology (ICIT)_ , pp. 1339-1344. IEEE, 2019.
* [10] N. D. Nguyen, T. Nguyen, S. Nahavandi, A. Bhatti, and G. Guest, “Manipulating soft tissues by deep reinforcement learning for autonomous robotic surgery,” In _2019 IEEE International Systems Conference (SysCon)_ , pp. 1-7. IEEE, 2019.
* [11] R. H. Crites and A. G. Barto, “Elevator group control using multiple reinforcement learning agents,” _Machine Learning_ , vol. 33, no. 2–3, pp. 235–262, 1998.
* [12] I. Arel, C. Liu, T. Urbanik, and A. G. Kohls, “Reinforcement learning-based multi-agent system for network traffic signal control,” _IET Intelligent Transport Systems_ , vol 4, no. 2, pp. 128–135, 2010.
* [13] G. Zheng, F. Zhang, Z. Zheng, Y. Xiang, N. J. Yuan, X. Xie, and Z. Li, “DRN: A deep reinforcement learning framework for news recommendation,” In _Proceedings of the 2018 World Wide Web Conference_ , pp. 167–176, 2018.
* [14] J. Jin, C. Song, H. Li, K. Gai, J. Wang, and W. Zhang, “Real-time bidding with multi-agent reinforcement learning in display advertising.,” In _Proceedings of the 27th ACM International Conference on Information and Knowledge Management_ , pp. 2193–2201, 2018.
* [15] M. Campbell, A. J. Hoane, and F. H. Hsu, “Deep blue,” _Artificial Intelligence_ , vol. 134, no. 1–2, pp. 57–83, 2002.
* [16] G. Tesauro and G. R. Galperin, “On-line policy improvement using monte-carlo search,” in _Advances in Neural Information Processing Systems_ , pp. 1068–1074, 1997.
* [17] G. Tesauro, “Temporal difference learning and td-gammon,” _Communication_ , vol. 38, no. 3, pp. 58–68, 1995.
* [18] T. Nguyen, N. D. Nguyen, and S. Nahavandi, “Multi-agent deep reinforcement learning with human strategies,” In _2019 IEEE International Conference on Industrial Technology (ICIT)_ , pp. 1357-1362. IEEE, 2019.
* [19] N. D. Nguyen, S. Nahavandi, and T. Nguyen. “A human mixed strategy approach to deep reinforcement learning,” In _2018 IEEE International Conference on Systems, Man, and Cybernetics (SMC)_ , pp. 4023-4028. IEEE, 2018.
* [20] R. Bellman, _Dynamic Programming_. Princeton: Princeton University Press, 2010.
* [21] M. Fowler and K. Scott, _UML Distilled: A Brief Guide to the Standard Object Modeling Language_. Addison-Wesley Professional, 2004.
* [22] T. J. Ross, _Fuzzy Logic with Engineering Applications_. John Wiley & Sons, 2005.
* [23] M. Hausknecht, J. Lehman, R. Miikkulainen, and P. Stone, “A neuroevolution approach to general atari game playing,” IEEE Transactions on Computational Intelligence and AI in Games, vol. 6, no. 4, pp. 355–366, 2014.
* [24] T. Salimans, J. Ho, X. Chen, A. Sidor, and I. Sutskever, “Evolution strategies as a scalable alternative to reinforcement learning,” _arXiv preprint arXiv:1703.03864_ , 2017.
* [25] D. P. Bertsekas, _Dynamic Programming and Optimal Control_. Belmont, MA: Athena scientific, 1995.
* [26] J. Duchi and Y. Singer, “Efficient online and batch learning using forward backward splitting,” _Journal of Machine Learning Research_ , vol. 10, pp. 2899–2934, 2009.
* [27] S. Adam, L. Busoniu, and R. Babuska, “Experience replay for real-time reinforcement learning control,” _IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews)_ , vol. 42, no. 2, pp. 201–212, 2011.
* [28] V. Mnih et al., “Human-level control through deep reinforcement learning,” _Nature_ , 2015.
* [29] A. Krizhevsky, S. Ilya, and E. H. Geoffrey, “Imagenet classification with deep convolutional neural networks.” In _Advances in Neural Information Processing Systems_ , 2012.
* [30] D. Silver et al., “Mastering the game of Go with deep neural networks and tree search,” _Nature_ , vol. 529, no 7587, 2016.
* [31] C. B. Browne, E. Powley, D. Whitehouse, S. M. Lucas, P. I. Cowling, P. Rohlfshagen, S. Tavener, D. Perez, S. Samothrakis, and S. Colton, “A survey of monte carlo tree search methods,” _IEEE Transactions on Computational Intelligence and AI in games_ , vol. 4, no. 1, pp. 1–43, 2011.
* [32] G. Tesauro, “TD-Gammon, a self-teaching backgammon program, achieves master-level play,” _Neural Computation_ , vol. 6, no. 2, pp. 215–219, 1992.
* [33] A. E. Sallab, M. Abdou, E. Perot, and S. Yogamani, “Deep reinforcement learning framework for autonomous driving,” _Electronic Imaging_ , vol. 19, pp. 70–76, 2017.
* [34] S. S.-Shwartz, S. Shaked, and S. Amnon, “Safe, multi-agent, reinforcement learning for autonomous driving,” _arXiv preprint arXiv:1610.03295_ , 2016.
* [35] A. Y. Ng, A. Coates, M. Diel, V. Ganapathi, J. Schulte, B. Tse, E. Berger, and E. Liang, “Autonomous inverted helicopter flight via reinforcement learning,” _Experimental Robotics IX_ , pp. 363–372, 2006.
* [36] M. Nazari, A. Oroojlooy, L. Snyder, and M. Takac, “Reinforcement learning for solving the vehicle routing problem,” In _Advances in Neural Information Processing Systems_ , pp. 9839–9849, 2018.
* [37] I. Bello, H. Pham, Q. V. Le, M. Norouzi, and S. Bengio, “Neural combinatorial optimization with reinforcement learning,” _arXiv preprint arXiv:1611.09940_ , 2016.
* [38] L. Panait and S. Luke, “Cooperative multi-agent learning: The state of the art,” _Autonomous Agents and Multi-Agent Systems_ , vol. 11, no. 3, pp. 387–434, 2005.
* [39] J. Z. Leibo et al., “Multi-agent reinforcement learning in sequential social dilemmas,” In _Conference on Autonomous Agents and Multi-Agent Systems_ , 2017.
* [40] X. Wang and T. Sandholm, “Reinforcement learning to play an optimal Nash equilibrium in team Markov games,” In _Advances in Neural Information Processing Systems_ , pp. 1603–1610, 2003.
* [41] J. Peters and S. Stefan, “Natural actor-critic,” _Neurocomputing_ , vol. 71, no 7–9, pp. 1180-1190, 2008.
* [42] V. R. Konda and J. N. Tsitsiklis, “Actor-critic algorithms,” In _Advances in Neural Information Processing Systems_ , pp. 1008–1014, 2000.
* [43] H. He, J. Boyd-Graber, K. Kwok, and H. Daume III, “Opponent modeling in deep reinforcement learning,” In _International Conference on Machine Learning_ , pp. 1804–1813, 2016.
* [44] F. Southey, M. P. Bowling, B. Larson, C. Piccione, N. Burch, D. Billings, and C. Rayner, “Bayes’ bluff: Opponent modelling in poker,” _arXiv preprint arXiv:1207.1411_ , 2012.
* [45] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” In _Advances in Neural Information Processing Systems_ , pp. 2672–2680, 2014.
* [46] G. Palmer et al., “Lenient multi-agent deep reinforcement learning,” In _International Conference on Autonomous Agents and MultiAgent Systems_ , 2018.
* [47] L. Bu, B. Robert, and D.S. Bart, “A comprehensive survey of multiagent reinforcement learning,” _IEEE Transactions on Systems, Man, and Cybernetics_ , vol. 38, pp. 156–172, 2008.
* [48] K. Tuyls and W. Gerhar, “Multiagent learning: Basics, challenges, and prospects,” _AI Magazine_ , vol. 33, 2012.
* [49] S. Natarajan and P. Tadepalli, “Dynamic preferences in multi-criteria reinforcement learning,” In _International Conference on Machine Learning_ , pp. 601–608, 2005.
* [50] D. M. Roijers, P. Vamplew, S. Whiteson, and R. Dazeley, “A survey of multi-objective sequential decision-making,” _Journal of Artificial Intelligence Research_ , vol. 48, pp.67–113, 2013.
* [51] K. Van Moffaert, and A. Nowe, “Multi-objective reinforcement learning using sets of pareto dominating policies,” _The Journal of Machine Learning Research_ , vol. 15, no. 1, pp. 3483–3512, 2014.
* [52] L. Barrett and S. Narayanan, “Learning all optimal policies with multiple criteria,” In _International Conference on Machine Learning_ , pp. 41–47, 2008.
* [53] P. Vamplew, R. Dazeley, A. Berry, R. Issabekov, and E. Dekker, “Empirical evaluation methods for multiobjective reinforcement learning algorithms,” _Machine Learning_ , vol. 84, no. 1–2, pp. 51–80, 2011.
* [54] H. Mossalam, Y. M. Assael, D. M. Roijers, and S. Whiteson, “Multi-objective deep reinforcement learning,” _arXiv preprint arXiv:1610.02707_ , 2016.
* [55] H. Van Seijen, M. Fatemi, J. Romoff, R. Laroche, T. Barnes, and J. Tsang, “Hybrid reward architecture for reinforcement learning,” In _Advances in Neural Information Processing Systems_ , pp. 5392–5402, 2017.
* [56] T. T. Nguyen, “A multi-objective deep reinforcement learning framework,” _arXiv preprint arXiv:1803.02965_ , 2018.
* [57] S. Shalev-Shwartz, S. Shammah, and A. Shashua, “Safe, multi-agent, reinforcement learning for autonomous driving,” _arXiv preprint arXiv:1610.03295_ , 2016.
* [58] A. E. Sallab, M. Abdou, E. Perot, and S. Yogamani, “Deep reinforcement learning framework for autonomous driving,” _Electronic Imaging_ , pp. 70–76, 2017.
* [59] D. Amodei, C. Olah, J. Steinhardt, P. Christiano, J. Schulman, and D. Mane, “Concrete problems in AI safety,” _arXiv preprint arXiv:1606.06565_ , 2016.
* [60] P. F. Christiano, J. Leike, T. Brown, M. Martic, S. Legg, and D. Amodei, “Deep reinforcement learning from human preferences,” In _Advances in Neural Information Processing Systems_ , pp. 4302–4310, 2017.
* [61] N. D. Nguyen, T. T. Nguyen, S. Nahavandi, “Multi-agent behavioral control system using deep reinforcement learning,” _Neurocomputing_ , 2019.
* [62] N. D. Nguyen and T. T. Nguyen “Fruit-API,” https://github.com/garlicdevs/Fruit-API, 2019.
* [63] V. Mnih et al., “Playing atari with deep reinforcement learning,” _arXiv preprint arXiv:1312.5602_ , 2013.
* [64] H. V. Hasselt, G. Arthur, and S. David, “Deep reinforcement learning with double q-learning,” In _Conference on Artificial Intelligence_ , 2016.
* [65] Z. Wang et al., “Dueling network architectures for deep reinforcement learning,” _arXiv preprint arXiv:1511.06581_ , 2015.
* [66] T. Schaul et al., “Prioritized experience replay,” _arXiv preprint arXiv:1511.05952_ , 2015.
* [67] M. Hausknecht and S. Peter, “Deep recurrent q-learning for partially observable mdps,” In _AAAI Fall Symposium Series_ , 2015.
* [68] I. Sorokin et al., “Deep attention recurrent Q-network,” _arXiv preprint arXiv:1512.01693_ , 2015.
* [69] M. Hessel, J. Modayil, H. Van Hasselt, T. Schaul, G. Ostrovski, W. Dabney, D. Horgan, B. Piot, M. Azar, and D. Silver, “Rainbow: Combining improvements in deep reinforcement learning,” In _AAAI Conference on Artificial Intelligence_ , 2018.
* [70] V. Mnih et al., “Asynchronous methods for deep reinforcement learning,” _International Conference on Machine Learning_ , 2016.
* [71] M. Jaderberg et al., “Reinforcement learning with unsupervised auxiliary tasks,” _arXiv preprint arXiv:1611.05397_ , 2016.
* [72] D. Silver, G. Lever, N. Heess, T. Degris, D. Wierstra, and M. Riedmiller, “Deterministic policy gradient algorithms,” in _International Conference on Machine Learning_ , 2014.
* [73] T.P Lillicrap, J.Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, and D. Wierstra, “Continuous control with deep reinforcement learning,” _arXiv preprint arXiv:1509.02971_ , 2015.
* [74] R. Lowe, Y. Wu, A. Tamar, J. Harb, O.P. Abbeel, and I. Mordatch, “Multi-agent actor-critic for mixed cooperative-competitive environments,” In _Advances in Neural Information Processing Systems_ , pp. 6379–6390, 2017.
* [75] J. Schulman, S. Levine, P. Abbeel, M. Jordan, and P. Moritz, “Trust region policy optimization,” In _International Conference on Machine Learning_ , pp. 1889–1897, 2015.
* [76] T. Van Erven and P. Harremos, “Renyi divergence and Kullback-Leibler divergence,” _IEEE Transactions on Information Theory_ , vol. 60, no. 7, pp. 3797–3820, 2014.
* [77] Y. Wu, E. Mansimov, R.B. Grosse, S. Liao, and J. Ba, “Scalable trust-region method for deep reinforcement learning using kronecker-factored approximation,” In _Advances in Neural Information Processing Systems_ , pp. 5279–5288, 2017.
* [78] Z. Wang, V. Bapst, N. Heess, V. Mnih, R. Munos, K. Kavukcuoglu, and N. de Freitas, “Sample efficient actor-critic with experience replay,” _arXiv preprint arXiv:1611.01224_ , 2016.
* [79] J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov, “Proximal policy optimization algorithms,” _arXiv preprint arXiv:1707.06347_ , 2017.
* [80] O. Nachum, M. Norouzi, K. Xu, and D. Schuurmans, “Bridging the gap between value and policy based reinforcement learning,” In _Advances in Neural Information Processing Systems_ , pp. 2775–2785, 2017.
* [81] B. O’Donoghue, R. Munos, K. Kavukcuoglu, and V. Mnih, “Combining policy gradient and Q-learning,” _arXiv preprint arXiv:1611.01626_ , 2016.
* [82] J. Schulman, X. Chen, and P. Abbeel, “Equivalence between policy gradients and soft q-learning,” _arXiv preprint arXiv:1704.06440_ , 2017.
* [83] A. Gruslys, W. Dabney, M.G. Azar, B. Piot, M. Bellemare, and R. Munos, “The Reactor: A fast and sample-efficient actor-critic agent for reinforcement learning,” _arXiv preprint arXiv:1704.04651_ , 2017.
* [84] S. S. Gu, T. Lillicrap, R. E. Turner, Z. Ghahramani, B. Schölkopf, and S. Levine, “Interpolated policy gradient: Merging on-policy and off-policy gradient estimation for deep reinforcement learning,” In _Advances in Neural Information Processing Systems_ , pp. 3846–3855, 2017.
* [85] P. S. Castro, S. Moitra, C. Gelada, S. Kumar, and M. G. Bellemare, “Dopamine: A research framework for deep reinforcement learning,” _arXiv preprint arXiv:1812.06110_ , 2018.
* [86] E. Liang, R. Liaw, R. Nishihara, P. Moritz, R. Fox, J. Gonzalez, and I. Stoica, “Ray rllib: A composable and scalable reinforcement learning library,” _arXiv preprint arXiv:1712.09381_ , 2017.
* [87] C. Hesse, M. Plappert, A. Radford, J. Schulman, S. Sidor, and Y. Wu, “OpenAI baselines,” 2017.
* [88] S. Tokui, K. Oono, S. Hido, and J. Clayton, “Chainer: a next-generation open source framework for deep learning,” In _Proceedings of Workshop on Machine Learning Systems in Conference on Neural Information Processing Systems_ , pp. 1–6, 2015.
* [89] K. Miyoshi, 2017. Available: https://github.com/miyosuda/unreal.
* [90] A. Tampuu _et al._ , “Multiagent cooperation and competition with deep reinforcement learning,” _PloS One_ , vol. 12, no. 4, Apr. 2017.
* [91] J.Z. Leibo, v. Zambaldi, M. Lanctot, J. Marecki, and T. Graepel, “Multi-agent reinforcement learning in sequential social dilemmas,” In _Conference on Autonomous Agents and MultiAgent Systems_ , pp. 464–473, 2017.
* [92] L. Bu, B. Robert, and D.S. Bart, “A comprehensive survey of multiagent reinforcement learning,” _IEEE Transactions on Systems, Man, and Cybernetics_ , vol. 38, pp. 156–172, 2008.
* [93] K. Tuyls and W. Gerhar, “Multiagent learning: Basics, challenges, and prospects,” _AI Magazine_ , vol. 33, 2012.
* [94] G. Palmer, K. Tuyls, D. Bloembergen, and R. Savani, “Lenient multi-agent deep reinforcement learning,” In _International Conference on Autonomous Agents and MultiAgent Systems_ , pp. 443–451, 2018.
* [95] L. Kraemer and B. Banerjee, “Multi-agent reinforcement learning as a rehearsal for decentralized planning,” _Neurocomputing_ , pp. 82–94, 2016.
* [96] J. Foerster, Y. M. Assael, N. de Freitas, and S. Whiteson, “Learning to communicate with deep multi-agent reinforcement learning,” in _Advances in Neural Information Processing Systems_ , pp. 2137–2145, 2016.
* [97] S. Sukhbaatar and R. Fergus, “Learning multiagent communication with backpropagation,” in _Advances in Neural Information Processing Systems_ , pp. 2244–2252, 2016.
* [98] J. K. Gupta, M. Egorov, and M. Kochenderfer, “Cooperative multi-agent control using deep reinforcement learning,” In _International Conference on Autonomous Agents and Multiagent Systems_ , pp. 66–83, 2017.
* [99] T. Nguyen, N. D. Nguyen, and S. Nahavandi, “Deep reinforcement learning for multi-agent systems: A review of challenges, solutions and applications,” _arXiv preprint arXiv:1812.11794_ , 2018.
* [100] M. Egorov, “Multi-agent deep reinforcement learning,” _CS231n: Convolutional Neural Networks for Visual Recognition_ , 2016.
* [101] L. Bu, B. Robert, and D.S. Bart, “A comprehensive survey of multiagent reinforcement learning,” _IEEE Transactions on Systems, Man, and Cybernetics_ , vol. 38, pp. 156–172, 2008.
* [102] R. Lowe, Y. Wu, A. Tamar, J. Harb, O.P. Abbeel, and I. Mordatch, “Multi-agent actor-critic for mixed cooperative-competitive environments,” In _Advances in Neural Information Processing Systems_ , pp. 6379–6390, 2017.
* [103] F. Girosi, M. Jones, and T. Poggio, “Regularization theory and neural networks architectures,” _Neural Comput._ , vol. 7, no. 2, pp. 219–269, 1995.
* [104] I. J. Goodfellow, M. Mirza, D. Xiao, A. Courville, and Y. Bengio, “An empirical investigation of catastrophic forgetting in gradient-based neural networks,” _arXiv:1312.6211 [cs, stat]_ , Dec. 2013.
* [105] S. Thrun and L. Pratt, _Learning to Learn_. Boston, MA: Kluwer Academic Publishers, 1998.
* [106] A. A. Rusu _et al._ , “Progressive neural networks,” _arXiv:1606.04671 [cs]_ , 2016.
* [107] J. Kirkpatrick _et al._ , “Overcoming catastrophic forgetting in neural networks,” in _Proc. Nat. Acad. Sci._ , pp. 3521–3526, 2017.
* [108] C. Fernando _et al._ , “Pathnet: Evolution channels gradient descent in super neural networks,” _arXiv preprint arXiv:1701.087_ , 2017.
* [109] A. A. Rusu _et al._ , “Policy distillation,” _arXiv:1511.06295 [cs]_ , Nov. 2015.
* [110] H. Yin and S. J. Pan, “Knowledge transfer for deep reinforcement learning with hierarchical experience replay,” in _Proc. AAAI Conf. Artif. Intell._ , pp. 1640–1646, Jan. 2017.
* [111] E. Parisotto, J. L. Ba, and R. Salakhutdinov, “Actor-mimic: Deep multitask and transfer reinforcement learning,” _arXiv:1511.06342 [cs]_ , 2015.
* [112] H. Yin and S. J. Pan, “Knowledge transfer for deep reinforcement learning with hierarchical experience replay,” in _Proc. AAAI Conf. Artif. Intell._ , pp. 1640–1646, Jan. 2017.
* [113] M. Wulfmeier, I. Posner, and P. Abbeel, “Mutual alignment transfer learning,” _arXiv preprint arXiv:1707.07907_ , 2017.
* [114] M. Grzes, and D. Kudenko, “Online learning of shaping rewards in reinforcement learning,” _Neural Networks_ , vol. 23, no. 4, pp. 541–550, 2010.
* [115] A. G. Barto and S. Mahadevan, “Recent advances in hierarchical reinforcement learning,” _Discrete Event Dyn. Syst._ , vol. 13, no. 4, pp. 341–379, 2003.
* [116] T. D. .Kulkarni, K. R. Narasimhan, A. Saeedi, and J. B. Tenenbaum, “Hierarchical deep reinforcement learning: Integrating temporal abstraction and intrinsic motivation,” in _Adv. Neural Inf. Process. Syst._ , pp. 3675–3683, 2016.
* [117] Y. Burda, H. Edwards, D. Pathak, A. Storkey, T. Darrell, and A.A. Efros, “Large-scale study of curiosity-driven learning,” _arXiv preprint arXiv:1808.04355_ , 2018.
* [118] D. Pathak, P. Agrawal, A.A. Efros, and T. Darrell, “Curiosity-driven exploration by self-supervised prediction,” In _Conference on Computer Vision and Pattern Recognition Workshops_ , pp. 16–17, 2017.
* [119] G. Ostrovski, M. G. Bellemare, A. van den Oord, and R. Munos, “Count-based exploration with neural density models,” In _International Conference on Machine Learning_ , pp. 2721–2730, 2017.
* [120] M. Andrychowicz _et al._ , “Hindsight experience replay,” In _Advances in Neural Information Processing Systems_ , 2017.
* [121] Y. Bengio, J. Louradour, R. Collobert, and J. Weston, “Curriculum learning,” In _International Conference on Machine Learning_ , pp. 41–48, 2009.
* [122] A. Santoro, R. Faulkner, D. Raposo, J. Rae, M. Chrzanowski, T. Weber, D. Wierstra, O. Vinyals, R. Pascanu, and T. Lillicrap, “Relational recurrent neural networks,” In _Advances in Neural Information Processing Systems_ , pp. 7299–7310, 2018.
* [123] E. Parisotto and R. Salakhutdinov, “Neural map: Structured memory for deep reinforcement learning,” _arXiv preprint arXiv:1702.08360_ , 2017.
* [124] D. Horgan, J. Quan, D. Budden, G Barth-Maron, M. Hessel, H. Van Hasselt, and D. Silver, “Distributed prioritized experience replay,” _arXiv preprint arXiv:1803.00933_ , 2018.
* [125] A. Stooke and P. Abbeel, “Accelerated methods for deep reinforcement learning,” _arXiv preprint arXiv:1803.02811_ , 2018.
* [126] E. Liang, R. Liaw, P. Moritz, R. Nishihara, R. Fox, K. Goldberg, and I. Stoica, “Rllib: Abstractions for distributed reinforcement learning,” _arXiv preprint arXiv:1712.09381_ , 2017.
* [127] J. Ho and S. Ermon, “Generative adversarial imitation learning,” In _Advances in Neural Information Processing Systems_ , pp. 4565–4573, 2016.
* [128] M. G. Bellemare, Y. Naddaf, J. Veness, and M. Bowling, “The arcade learning environment: An evaluation platform for general agents,” _J. Artif. Intell. Res._ , vol. 47, pp. 253–279, 2013.
* [129] G. Brockman _et al._ , “OpenAI gym,” _arXiv:1606.01540 [cs]_ , 2016.
* [130] E. Todorov, T. Erez, and Y. Tassa, “Mujoco: A physics engine for model-based control,” In _International Conference on Intelligent Robots and Systems_ , pp. 5026–5033, 2012.
* [131] M. Plappert, “keras-rl,” https://github.com/matthiasplappert/keras-rl, 2016.
* [132] Y. Duan, X. Chen, R. Houthooft, J. Schulman, and P. Abbeel, “Benchmarking deep reinforcement learning for continuous control,” In _International Conference on Machine Learning_ , pp. 1329–1338, 2016.
* [133] M. Schaarschmidt, A. Kuhnle, and K. Fricke, “TensorForce: A TensorFlow library for applied reinforcement learning,” 2017.
* [134] A. Paszke _et al._ , “PyTorch: An imperative style, high-performance deep learning library,” In _Advances in Neural Information Processing Systems_ , 2019.
* [135] B. Ellis, J. Stylos, and B. Myers, “The factory pattern in API design: A usability evaluation,” In _International Conference on Software Engineering_ , 2007.
* [136] N. D. Nguyen and T. T. Nguyen, “FruitLAB,” http://fruitlab.org/, 2019.
# Max-Affine Spline Insights into Deep Generative Networks
Randall Balestriero Sebastien Paris Richard G. Baraniuk
###### Abstract
We connect a large class of Generative Deep Networks (GDNs) with spline
operators in order to derive their properties, limitations, and new
opportunities. By characterizing the latent space partition, dimension and
angularity of the generated manifold, we relate the manifold dimension and
approximation error to the sample size. The manifold-per-region affine
subspace defines a local coordinate basis; we provide necessary and sufficient
conditions relating those basis vectors with disentanglement. We also derive
the output probability density mapped onto the generated manifold in terms of
the latent space density, which enables the computation of key statistics such
as its Shannon entropy. This finding also enables the computation of the GDN
likelihood, which provides a new mechanism for model comparison as well as
providing a quality measure for (generated) samples under the learned
distribution. We demonstrate how low entropy and/or multimodal distributions
are not naturally modeled by DGNs and are a cause of training instabilities.
Generative Deep Networks, Machine Learning, Manifold, Dropout, Multimodal
Distributions
## 1 Introduction
Deep Generative Networks (DGNs), which map a low-dimensional latent variable
$\bm{z}$ to a higher-dimensional generated sample $\bm{x}$, have made enormous
leaps in capabilities in recent years. Popular DGNs include Generative
Adversarial Networks (GANs) (Goodfellow et al., 2014) and their variants
(Dziugaite et al., 2015; Zhao et al., 2016; Durugkar et al., 2016; Arjovsky et
al., 2017; Mao et al., 2017; Yang et al., 2019); Variational Autoencoders
(Kingma & Welling, 2013) and their variants (Fabius & van Amersfoort, 2014;
van den Oord et al., 2017; Higgins et al., 2017; Tomczak & Welling, 2017;
Davidson et al., 2018); and flow based models such as NICE (Dinh et al.,
2014), Normalizing Flow (Rezende & Mohamed, 2015), and their variants (Dinh et
al., 2016; Grathwohl et al., 2018; Kingma & Dhariwal, 2018).
While DGNs are easy to describe and analyze locally in terms of simple affine
operators and scalar nonlinearities, a general framework for their global
structure has remained elusive. In this paper, we take a step in the direction
of a better theoretical understanding of DGNs constructed using continuous,
piecewise affine nonlinearities by leveraging recent progress on max-affine
spline operators (MASOs) (Balestriero & Baraniuk, 2018b). Our main
contributions are as follows;
[C1] We characterize the piecewise-affine manifold structure of the generated
samples, including its intrinsic dimension (Section 3), which sheds new light
on the impact of techniques like dropout (Section 3.3) and provides
practionioners with sensible design principles for constructing a desired
manifold.
[C2] We characterize the local coordinates of the generated manifold and the
inverse mapping from data points $\bm{x}$ back to latent variables $\bm{z}$
(Section 4.1), which provides new necessary and sufficient conditions for
disentanglement, interpretability and new links between DGNs and adaptive
basis methods (Section 4.2). By characterizing the angles between adjacent
local affine regions, we demonstrate how weight sharing in a DGN heavily
constrains the curvature of the generated manifold despite the fact that the
DGN might be tremendously overparameterized (Section 4.3).
[C3] We provide a DGN input-output formula that enables us to derive the
analytical probability density on the generated manifold that is induced by
the latent space (Section 5.1). We use this result to derive Normalizing Flows
(NMFs) from first principles and highlight how the DGN design,
$\bm{x}\mapsto\bm{z}$ (most NMFs) versus $\bm{z}\mapsto\bm{x}$ (DGNs), allows
for either fast likelihood computation and slow sampling or vice versa
(Section 5.2). Finally, the Shannon entropy of the output density provides a
new lens through which to study the difficulty of generating multidimensional,
low-entropy distributions using DGNs (Section 5.3).
Reproducible code for the various experiments and figures is provided on
GitHub: https://github.com/RandallBalestriero/GAN.git.
## 2 Background
Deep Networks. A deep (neural) network (DN) is an operator $f_{\Theta}$ with
parameters $\Theta$ that maps the input $\bm{z}\in{\mathbb{R}}^{S}$ to the
output $\bm{x}\in{\mathbb{R}}^{D}$ by composing $L$ intermediate layer
mappings $f_{\ell}$, $\ell=1,\dots,L$, that combine affine and simple
nonlinear operators such as the fully connected operator (simply the affine
transformation defined by the weight matrix $\bm{W}_{\ell}$ and bias vector
$\bm{b}_{\ell}$), convolution operator (with circulant $\bm{W}_{\ell}$),
activation operator (applying a scalar nonlinearity such as the ubiquitous
ReLU), or pooling operator. Precise definitions of these operators can be
found in (Goodfellow et al., 2016). We will omit $\Theta$ for conciseness
unless it is needed.
We precisely define a layer $f_{\ell}$ as comprising a single nonlinear
operator composed with any preceding linear operators (if any) that lie
between it and the preceding nonlinear operator. Each layer $f_{\ell}$
transforms its input feature map $\bm{v}_{\ell-1}\in{\mathbb{R}}^{D_{\ell-1}}$
into an output feature map $\bm{v}_{\ell}\in{\mathbb{R}}^{D_{\ell}}$ with the
initializations $\bm{v}_{0}:=\bm{z}$, $D_{0}=S$, and
$\bm{v}_{L}:=\bm{x},D_{L}=D$. In this paper, we focus on DGNs, where $S<D$,
$\bm{z}$ is interpreted as a latent representation, and $\bm{x}$ is the
generated data, e.g., a time series or an image. The feature maps $\bm{v}_{\ell}$
can be viewed equivalently as signals, flattened column vectors, or tensors
depending on the context.
Max-Affine Spline Deep Networks. A $K$-dimensional max-affine spline operator
(MASO) concatenates $K$ independent max-affine spline (MAS) functions, with
each MAS formed from the point-wise maximum of $R$ affine mappings (Magnani &
Boyd, 2009; Hannah & Dunson, 2013). Given an input vector $\bm{u}$, the output
of a MASO is given by
$\displaystyle{\rm MASO}(\bm{u};\{\bm{A}_{r},\bm{b}_{r}\}_{r=1}^{R})=\max_{r=1,\dots,R}\bm{A}_{r}\bm{u}+\bm{b}_{r},$
(1)
where $\bm{A}_{r}\in\mathbb{R}^{D_{\ell}\times D_{\ell-1}}$ are the slopes and
$\bm{b}_{r}\in\mathbb{R}^{D_{\ell}}$ are the offset/bias parameters for all
$r$, and the maximum is taken coordinate-wise. Note that a MASO is a continuous
piecewise-affine (CPA) operator (Wang & Sun, 2005).
The key background result for this paper is that the layers of DNs (DGNs)
constructed from piecewise affine operators (e.g., convolution, ReLU, and max-
pooling) are MASOs Balestriero & Baraniuk (2018b, a); hence a DN (DGN) is a
composition of MASOs. For example, a layer comprising a fully connected
operator with weights $\bm{W}_{\ell}$ and biases $\bm{b}_{\ell}$ followed by a
ReLU activation operator has parameters
$R=2,\bm{A}_{1}=\bm{W}_{\ell},\bm{A}_{2}=\mathbf{0},\bm{b}_{1}=\bm{b}_{\ell},\bm{b}_{2}=\mathbf{0}$.
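As a quick numerical sanity check of this correspondence (with randomly drawn
layer parameters, which are purely illustrative), the following NumPy sketch
verifies that a fully connected layer followed by a ReLU coincides with the
MASO of (1) for $R=2$:

```python
import numpy as np

rng = np.random.default_rng(0)
D_in, D_out = 4, 3
W = rng.normal(size=(D_out, D_in))  # fully connected weights
b = rng.normal(size=D_out)          # fully connected biases
u = rng.normal(size=D_in)           # layer input

# Layer output: ReLU applied to the affine transformation.
relu_out = np.maximum(W @ u + b, 0.0)

# MASO of Eq. (1) with R=2: A1=W, b1=b and A2=0, b2=0.
slopes = [W, np.zeros_like(W)]
offsets = [b, np.zeros_like(b)]
maso_out = np.max([A @ u + c for A, c in zip(slopes, offsets)], axis=0)

assert np.allclose(relu_out, maso_out)
```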
The piecewise-affine spline interpretation provides a powerful global
geometric interpretation of a DN (DGN) in which it partitions its input space
${\mathbb{R}}^{L}$ into polyhedral regions (the set $\Omega$) and then assigns
a different, fixed affine transformation to each region. The partition regions
are built up over the layers via a subdivision process and are closely related
to Voronoi and power diagrams (Balestriero et al., 2019).
## 3 The Generated Manifold of a DGN
In this section we study the properties of the mapping
$\bm{G}_{\Theta}:\mathbb{R}^{S}\rightarrow\mathbb{R}^{D}$ of a deep generative
network (DGN) comprising $L$ piecewise-affine MASO layers.
### 3.1 Input Space Partition and Region Mapping
While our approach holds for arbitrary piecewise affine layers, for
concreteness of exposition, we will focus on nonlinearities with $R=2$ (e.g.,
ReLU, leaky ReLU, absolute value). In all such cases, the state of the
nonlinearity can be encoded as a value from $\\{\alpha,1\\}$ with $\alpha=0$
for ReLU, $\alpha=-1$ for absolute value and $\alpha>0$ for leaky-ReLU. At
layer $\ell$, observing an input $\bm{v}_{\ell-1}$ defines the state of the
layer nonlinearities and in turn defines the “piece” of the layer MASO used to
produce the output $\bm{v}_{\ell}$. We call the nonlinearity’s state its code
$\bm{q}_{\ell}(\bm{v}_{\ell-1})\in\{\alpha,1\}^{D_{\ell}}$, $\bm{v}_{\ell-1}\in\mathbb{R}^{D_{\ell-1}}$,
with
$\displaystyle[\bm{q}_{\ell}(\bm{v}_{\ell-1})]_{i}=\begin{cases}\alpha,&[\bm{W}_{\ell}\bm{v}_{\ell-1}+\bm{b}_{\ell}]_{i}\leq 0\\ 1,&[\bm{W}_{\ell}\bm{v}_{\ell-1}+\bm{b}_{\ell}]_{i}>0\end{cases}$ (2)
leading to the simple forward formula for the layer
$\displaystyle f_{\ell}(\bm{v}_{\ell-1})=\text{diag}(\bm{q}_{\ell}(\bm{v}_{\ell-1}))(\bm{W}_{\ell}\bm{v}_{\ell-1}+\bm{b}_{\ell}).$
(3)
We concatenate the per-layer codes into
$\bm{q}(\bm{z})=[\bm{q}_{1}(\bm{z})^{T},\dots,\bm{q}_{L}(\bm{z})^{T}]^{T}\in\{\alpha,1\}^{\sum_{\ell=1}^{L}D_{\ell}}$
with $\bm{z}\in\mathbb{R}^{S}$ the DGN input and
$\bm{q}_{\ell}(\bm{v}_{\ell-1})$ abbreviated as $\bm{q}_{\ell}(\bm{z})$.
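For concreteness, the following NumPy sketch computes the per-layer codes of
(2) and the forward pass of (3) for a toy DGN; the random weights are
illustrative assumptions:

```python
import numpy as np

def layer_code(W, b, v, alpha=0.0):
    """Per-unit code of Eq. (2): 1 where the pre-activation is positive,
    alpha otherwise (alpha=0: ReLU, alpha=-1: abs, alpha>0: leaky ReLU)."""
    return np.where(W @ v + b > 0.0, 1.0, alpha)

def network_code(weights, biases, z, alpha=0.0):
    """Concatenated code q(z) and output G(z) via the layer map of Eq. (3)."""
    codes, v = [], z
    for W, b in zip(weights, biases):
        q = layer_code(W, b, v, alpha)
        codes.append(q)
        v = q * (W @ v + b)  # diag(q_l) (W_l v + b_l)
    return np.concatenate(codes), v

rng = np.random.default_rng(0)
weights = [rng.normal(size=(8, 2)), rng.normal(size=(5, 8))]  # S=2, D=5
biases = [rng.normal(size=8), rng.normal(size=5)]
q, x = network_code(weights, biases, rng.normal(size=2))
print(q)  # identifies the latent space partition region containing z
```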
###### Definition 1.
A partition region $\omega_{\mathbf{k}}$ of the DGN input space partition
$\Omega$ is defined as the input space region for which the MASO states
$\bm{q}(\cdot)$ are identical
$\displaystyle\omega_{\mathbf{k}}=\left\{\bm{z}\in\mathbb{R}^{S}:\bm{q}(\bm{z})=\mathbf{k}\right\},\quad\forall\mathbf{k}\in\{\alpha,1\}^{\sum_{\ell=1}^{L}D_{\ell}},$ (4)
$\displaystyle\Omega=\left\{\omega_{\mathbf{k}},\mathbf{k}\in\{\alpha,1\}^{\sum_{\ell=1}^{L}D_{\ell}}\right\}\setminus\emptyset.$ (5)
Note that $\cup_{\omega\in\Omega}\;\omega=\mathbb{R}^{S}$ and
$\forall(\omega,\omega^{\prime})\in\Omega^{2},\omega\not=\omega^{\prime}:\omega^{\circ}\cap(\omega^{\prime})^{\circ}=\emptyset$,
with $(\cdot)^{\circ}$ the interior operator (Munkres, 2014).
Since $\bm{G}$ is formed from a composition of MASOs, it is itself a CPA with
a fixed affine mapping over each region $\omega\in\Omega$.
As a result, the generator $\bm{G}$ maps each $S$-dimensional convex latent
space region $\omega\in\Omega$ to the convex affine subspace
$\bm{G}(\omega)\subset\mathbb{R}^{D}$ as
$\displaystyle\forall\omega\in\Omega,\;\;\bm{G}(\omega)=\{\bm{A}_{\omega}\bm{z}+\bm{b}_{\omega},\bm{z}\in\omega\}\subseteq\mathbb{R}^{D},$
(6)
with $\bm{A}_{\omega}$ and $\bm{b}_{\omega}$ obtained by composing (3) and
distributing the terms; we will call this the generated manifold.
###### Proposition 1.
A DGN $\bm{G}$ comprised of MASO layers is a CPA operator with input space
partition $\Omega$ given by (5). Each region $\omega\in\Omega$ is a convex
polytope in $\mathbb{R}^{S}$ that is affinely mapped to the affine subspace
$\bm{G}(\omega)$, a convex polytope in $\mathbb{R}^{D}$ given by (6). (Proof
in Appendix C.1.)
Analytically, the form of (6) can be obtained directly by composing the per
layer mappings from (3), distributing and rearranging the terms into the slope
and bias terms of the affine mapping. Computationally, the affine parameters
$\bm{A}_{\omega},\bm{b}_{\omega}$ can be obtained efficiently in one of the
two following ways. On the one hand, if one possesses an input $\bm{z}$
belonging to the desired region $\omega$, then we simply perform
$\displaystyle\bm{A}_{\omega}=\nabla_{\bm{z}}\bm{G}(\bm{z}),\;\;\bm{b}_{\omega}=\bm{G}(\bm{z})-\bm{A}_{\omega}\bm{z}.$
(7)
On the other hand, in the case where one has access to the code
$\bm{q}(\omega)$ of the region (as opposed to a point in the region), one can
directly impose the nonlinearity states (defined by $\bm{q}(\omega)$) on the
DGN mapping. Once the nonlinearities are fixed, one can feed an arbitrary
input $\bm{z}\in\mathbb{R}^{S}$ and compute the affine parameters as in (7) on
the fixed DGN.
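The second procedure can be sketched by composing the per-layer affine maps
with frozen codes; the helpers `layer_code` and `network_code` (and the toy
`weights`/`biases`) are those of the previous sketch:

```python
import numpy as np

def region_affine_params(weights, biases, z, alpha=0.0):
    """Slope A_w and offset b_w of the region containing z, obtained by
    composing the frozen per-layer maps v -> diag(q_l)(W_l v + b_l)."""
    A, c, v = np.eye(len(z)), np.zeros(len(z)), z
    for W, b in zip(weights, biases):
        q = layer_code(W, b, v, alpha)  # freeze the nonlinearity states
        A = (q[:, None] * W) @ A        # diag(q) W A
        c = q * (W @ c + b)             # diag(q)(W c + b)
        v = q * (W @ v + b)
    return A, c

# Sanity check against Eq. (7): G(z) = A_w z + b_w inside the region.
z = np.random.default_rng(1).normal(size=2)
A_w, b_w = region_affine_params(weights, biases, z)
_, Gz = network_code(weights, biases, z)
assert np.allclose(Gz, A_w @ z + b_w)
```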
We can extend (6) to the entire domain of the generator via
$\displaystyle\bm{G}(\text{supp}(\bm{p}_{\bm{z}}))=\bigcup_{\omega\in\Omega}\bm{G}(\omega\cap\text{supp}(\bm{p}_{\bm{z}})),$
(8)
with $\bm{p}_{\bm{z}}$ the probability distribution on the latent space that
generates $\bm{z}$, and where $\bm{G}(\text{supp}(p_{\bm{z}}))$ denotes the
image of $\bm{G}$ by considering all inputs $\bm{z}\in\text{supp}(p_{\bm{z}})$
with nonzero probability, e.g., $\mathbb{R}^{S}$ if the latent distribution is
a Gaussian, and the $S$-dimensional hypercube if it is a standard Uniform
distribution. Thus, the generated manifold (8) combines the per-region affine
transformations of the input space partition per (6). With this formulation,
we now characterize the intrinsic dimension of the per-region and overall
manifold mapping of $\bm{G}$.
### 3.2 Generated Manifold Intrinsic Dimension
We now turn to the intrinsic dimension of the per-region affine subspaces
$\bm{G}(\omega)$ that comprise the generated manifold. In fact, as per (8),
their dimension depends not only on the latent dimension $S$ but also on the
per-layer parameters.
###### Lemma 1.
The intrinsic dimension of the affine subspace $\bm{G}(\omega)$ (recall (8))
has the following upper bound:
$\displaystyle\dim(\bm{G}(\omega))\leq\min\Big(S,\min_{\ell}\big({\rm rank}\left({\rm diag}(\bm{q}_{\ell}(\omega))\bm{W}_{\ell}\right)\big)\Big).$
(Proof in Appendix C.2.)
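The bound is straightforward to evaluate numerically; a small sketch with an
illustrative toy weight matrix:

```python
import numpy as np

def dim_upper_bound(weights, codes, S):
    """Upper bound of Lemma 1: min(S, min_l rank(diag(q_l) W_l))."""
    ranks = [np.linalg.matrix_rank(q[:, None] * W)  # diag(q) W
             for W, q in zip(weights, codes)]
    return min(S, min(ranks))

W = np.array([[1., 0.], [0., 1.], [1., 1.]])  # a 3x2 layer with S = 2
print(dim_upper_bound([W], [np.array([1., 0., 1.])], S=2))  # -> 2
print(dim_upper_bound([W], [np.array([0., 0., 1.])], S=2))  # -> 1 (ReLU 0s)
```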
We make three observations. First, we see that the choice of the nonlinearity
(i.e., the choice of $\alpha$) and/or the choice of the per-layer dimensions
(i.e., the “width” of the DGN) are the key elements controlling the upper
bound of $\dim(\bm{G})$. For example, in the case of ReLU ($\alpha=0$),
$\dim(\bm{G}(\omega))$ is directly impacted by the number of $0$s in the codes
$\bm{q}_{\ell}(\omega)$ of each layer in addition to the rank of
$\bm{W}_{\ell}$; this sensitivity does not occur when using other
nonlinearities ($\alpha\not=0$). Second, “bottleneck layers” directly impact
the dimension of the subspace and thus should also be carefully picked based
on the a priori knowledge of the target manifold intrinsic dimension. Third,
we obtain the following condition relating the per-region dimension to the
bijectivity and surjectivity of the mapping. The latter should be avoided in
DGNs, since it implies that multiple different latent vectors will generate
the same output sample.
###### Proposition 2.
A DGN is bijective on $\omega$ iff
$\dim(\bm{G}(\omega))=S,\forall\omega\in\Omega$ and surjective iff
$\exists\omega\in\Omega\text{ s.t. }\dim(\bm{G}(\omega))<S$. A DGN is
bijective on $\text{supp}(\bm{p}_{\bm{z}})$ iff it is bijective for each
region $\omega\in\Omega$ and
$\bm{G}(\omega)\cap\bm{G}(\omega^{\prime})=\emptyset,\forall\omega\not=\omega^{\prime}$.
(Proof in Appendix C.3.)
### 3.3 Application: Effect of Dropout/Dropconnect
Figure 1: Demonstration of a GAN DGN trained on a circle dataset. Each line is
the learned piecewise linear manifold generated by the DGN; each color
corresponds to a different realization of dropout noise. Dropout turns a DGN
into a finite ensemble of DGNs with the same or lower intrinsic dimension.
Noise techniques, such as dropout (Wager et al., 2013) and dropconnect (Wan et
al., 2013), alter the per-region affine mapping in a very particular way that
we now characterize.
Dropout and dropconnect techniques apply a multiplicative binary noise onto
the feature maps and/or the weights; the multiplicative noise
$\epsilon_{\ell}\in\{0,1\}^{D_{\ell}}$ is typically an iid Bernoulli random
variable (Isola et al., 2017). To characterize how this noise impacts the DGN
mapping, denote the generator $\bm{G}$ equipped with random
dropout/dropconnect by $\widetilde{\bm{G}}$, and the generator in which the
noise realization is fixed by
$\bm{G}(\bm{z}|\bm{\epsilon})=\bm{A}_{\omega}(\bm{\epsilon})\bm{z}+\bm{b}_{\omega}(\bm{\epsilon})$
where $\bm{\epsilon}$ concatenates the random variables of each layer. Given
the mapping form (recall (3)) with per layer parameters
$\bm{W}_{\ell},\bm{b}_{\ell}$, the presence of dropout noise leads to the
following noisy input-output mapping on a region $\omega$ (recall (6)) as
$\bm{G}(\bm{z}|\bm{\epsilon})=\left(\prod_{\ell=L}^{1}\text{diag}(\bm{q}_{\ell}(\omega)\odot\epsilon_{\ell})\bm{W}_{\ell}\right)\bm{z}+\sum_{\ell=1}^{L}\left(\prod_{\ell^{\prime}=L}^{\ell+1}\text{diag}(\bm{q}_{\ell^{\prime}}(\omega)\odot\epsilon_{\ell^{\prime}})\bm{W}_{\ell^{\prime}}\right)\bm{b}_{\ell},$
(9)
where $\odot$ is the Hadamard product and $\bm{z}\in\omega$. (See Appendix G
for the dropconnect formula.) Thus, the noisy generator actually combines all
the above mappings for each noise realization via
$\displaystyle\widetilde{\bm{G}}(\text{supp}(p_{\bm{z}}))=\bigcup_{\bm{\epsilon}\in\text{supp}(p_{\bm{\epsilon}})}\bm{G}(\text{supp}(p_{\bm{z}})|\bm{\epsilon}).$
(10)
[Figure 2 panels: Dropout 0.1, Dropout 0.3, Dropconnect 0.1, Dropconnect 0.3; y-axis: $\dim(\bm{G}(\omega))$]
Figure 2: Impact of dropout and dropconnect on the dimension of the noisy
generator affine subspaces $\bm{G}(\omega|\epsilon),\forall\omega$ (recall
(10)). We depict two “drop” probabilities $0.1$ and $0.3$ for a generator
$\bm{G}$ with $S=6$, $D=10$, $L=3$ and varying width $D_{1}=D_{2}$ ranging in
$\\{6,8,10,12,24,48\\}$ (x-axis); note that the architecture limits the
dimension to $S=6$. The boxplot represents the distribution of the per-region
affine subspace dimensions for $2000$ sampled regions over $2000$ different
noise realizations $\epsilon$. We make two observations. First, dropconnect
tends to preserve the latent dimension $S$ even when the width $D_{1},D_{2}$
is close to $S$. Second, the dropout-induced collection of generators tends to
have degenerate dimension (much smaller than $S$) until the width is twice the
latent space dimension $(D_{\ell}\geq 2S)$. As a result, while dropout turns a
generator into a collection of generators, those generators will have
degenerate dimension unless $\bm{G}$ is much wider than $S$.
###### Proposition 3.
Multiplicative binary dropout/dropconnect transforms the generator $\bm{G}$
into the union of generators given by (10), each with per-region dimension
between $0$ and $\max_{\omega}\text{dim}(\bm{G}(\omega))$. (Proof in Appendix
C.4.)
From the above, we see that the multiplicative binary noise does not make the
generator $\widetilde{\bm{G}}$ dense in its output space, but rather turns
$\bm{G}$ into a collection of piecewise linear generators, each corresponding
to a different realization of $\bm{\epsilon}$ as depicted in Fig. 1.
Furthermore, each noise realization produces a generator with a possibly
different per-region dimension, upper bounded by the dimension of the
original generator $\bm{G}$. Also, each induced generator has a possibly
different input space partition based on the noise realization. On the one
hand, this highlights a potential limitation of those techniques for narrow
models ($D_{\ell}\approx S$), for which the noisy generators will tend to be
degenerate (per-region dimension smaller than $S$), implying surjectivity
(recall Prop. 2). On the other hand, when used with wide DGNs ($D_{\ell}\gg
S$), many more of the noisy generators will maintain the same affine subspace
dimension as the original generator. The latter is crucial when $S$ is
picked a priori to match exactly the true intrinsic dimension. We illustrate
the above in Fig. 2.
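A small simulation in the spirit of Fig. 2 can be sketched as follows. This is a hypothetical setup, not the paper's experiment: for simplicity we freeze the ReLU codes to all-ones (a single linear region) and only sample the dropout masks, then record the rank of the resulting slope matrix, a special case of (9).

```python
import numpy as np

rng = np.random.default_rng(1)
S, D1, D2, D = 6, 10, 10, 10
W1, W2, W3 = (rng.standard_normal(s) for s in [(D1, S), (D2, D1), (D, D2)])
keep = 0.7                                        # "drop" probability 0.3

def noisy_region_dim():
    e1, e2 = rng.binomial(1, keep, D1), rng.binomial(1, keep, D2)
    # A_omega(eps) = W3 diag(eps2) W2 diag(eps1) W1, a special case of (9)
    # with all ReLU codes fixed to 1.
    A = W3 @ np.diag(e2) @ W2 @ np.diag(e1) @ W1
    return np.linalg.matrix_rank(A)

dims = [noisy_region_dim() for _ in range(2000)]
print("dimension histogram (0..S):", np.bincount(dims, minlength=S + 1)[: S + 1])
```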
### 3.4 Application: Optimal Dimension and Training Error Increase
We now emphasize how the DGN dimension
$\tilde{S}\triangleq\max_{\omega}\dim(\bm{G}(\omega))$ impacts the training
error and training sample recovery. We answer the following question: Can
a generator generate $N$ training samples from a continuous $S^{*}$-dimensional
manifold if $\tilde{S}<S^{*}$? Denote the empirical error
measuring the ability to generate the data samples by
$E^{*}_{N}=\min_{\Theta}\frac{1}{N}\sum_{n=1}^{N}\min_{\bm{z}}\|\bm{G}_{\Theta}(\bm{z})-\bm{x}_{n}\|$.
We now demonstrate and empirically validate that if $\tilde{S}<S^{*}$ then
$E^{*}_{N}$ increases with $N$ for any data manifold.
###### Theorem 1.
Given the true intrinsic dimension of an arbitrary manifold $S^{*}$, any
finite DGN with $\tilde{S}<S^{*}$ will have increasing error $E^{*}$ with the
dataset size $N$ as in
$\exists\ell\in\\{0,\dots,L\\}:D_{\ell}<S^{*}\implies\forall N>0,\exists
N^{\prime}>N:E^{*}_{N^{\prime}}>E^{*}_{N}$. (Proof in Appendix C.11.)
In general, it is clear that, for smooth manifolds, $E^{*}$ increases with $N$
since the DGN is piecewise linear. However, the above result extends this to
any manifold, even when the data manifold is as simple as a linear manifold
(translated subspace). We empirically validate this phenomenon in Fig. 3 for
the simple case of a linear manifold. (The experimental details are given in
Appendix F.)
[Figure 3 axes: y — minimum error $E^{*}$; x — DGN latent dimension $S$, with $S^{*}$ marked in red]
Figure 3: Error $E^{*}$ (y-axis) for a linear manifold with $S^{*}=5$, for
increasing dataset size $N\in\\{100,120,140,160,180,200,300,400,500,1000\\}$
(blue to green) for different latent space dimensions $S\in\\{1,2,3,4,5,6,7\\}$
(x-axis), which forces $\tilde{S}\leq S$; the $E^{*}=0$ line is depicted in black.
This demonstrates, as per Thm. 1, that whenever $\tilde{S}<S^{*}$ the
training error $E^{*}$ increases with the dataset size $N$, and is $0$
whenever $\tilde{S}\geq S^{*}$.
A direct implication of Thm. 1 is that there does not exist a
finite architecture with $D_{\ell}<D$ for some $\ell$ and parameters $\Theta$
such that a MASO DGN is bijective onto $\mathbb{R}^{D}$. The above
results are key to understanding the challenges and importance of DGN design,
starting with the width of the hidden layers and the latent dimension, in
conjunction with the choice of nonlinearities and constraints on the
$\bm{W}_{\ell}$ of all layers.
## 4 Manifold Local Coordinates and Curvature
We now turn to the study of the local coordinates of the affine mappings
comprising a DGN’s generated manifold. We then study the coupling between the
affine mappings of adjacent regions to characterize the curvature/angularity of
the generated manifold.
### 4.1 Local Coordinate Systems and Inverse Mapping
Recall from (8) that a DGN is a CPA operator. Inside region $\omega\in\Omega$,
points are mapped to the output space affine subspace which is itself governed
by a coordinate system or basis. For the remainder of this section we assume
that $\dim(\bm{G}(\omega))=S,\forall\omega\in\Omega$, and thus the columns of
$\bm{A}_{\omega}$ are linearly independent. For cases where
$\dim(\bm{G}(\omega))<S$, the following analysis also applies by
considering a lower-dimensional latent space $S^{\prime}<S$ and the
corresponding sub-network that only depends on the kept latent space
dimensions.
###### Lemma 2.
A basis for the affine subspace $\bm{G}(\omega)$ (recall (6)) is given by the
columns of $\bm{A}_{\omega}$.
In other words, the columns of $\bm{A}_{\omega}$ form the local coordinate
system, and each latent space dimension moves a point in this region by adding
to it the corresponding slope column. Before leveraging this result for latent
space characterization, we derive an inverse of the generator $\bm{G}$ that
maps any point from the generated manifold to the latent space. This inverse
is well-defined as long as the generator is injective, i.e., there are no
$\bm{z}_{1}\not=\bm{z}_{2}$ s.t.
$\bm{G}(\bm{z}_{1})=\bm{G}(\bm{z}_{2})$. Assuming injectivity, the inverse of
$\bm{G}$ on a region $\bm{G}(\omega)$ in the output space is obtained by
$\displaystyle\bm{G}_{\omega}^{-1}(\bm{x})=\left(\bm{A}^{T}_{\omega}\bm{A}_{\omega}\right)^{-1}\bm{A}_{\omega}^{T}(\bm{x}-\bm{b}_{\omega}),\forall\bm{x}\in\bm{G}(\omega),$
(11)
leading to
$\bm{G}_{\omega}^{-1}(\bm{G}(\omega))=\omega,\forall\omega\in\Omega$. Note
that the inverse $\left(\bm{A}^{T}_{\omega}\bm{A}_{\omega}\right)^{-1}$ is
well defined, as $\bm{A}_{\omega}$ has full column rank since we only consider a
generator with $\tilde{S}=S$. We can then simply combine the region-
conditioned inverses to obtain the overall generator inverse.
###### Lemma 3.
The inverse mapping of an injective DGN is the CPA operator mapping
$\bm{G}(\text{supp}(\bm{p}_{\bm{z}}))\mapsto\text{supp}(\bm{p}_{\bm{z}})$
given by
$\bm{G}^{-1}(\bm{x})=\sum_{\omega\in\Omega}\bm{G}_{\omega}^{-1}(\bm{x})\mathbbm{1}_{\\{\bm{x}\in\bm{G}(\omega)\\}}.$
(Proof in Appendix C.5.)
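As a sanity check, the region-conditioned inverse (11) can be exercised with a few lines of numpy. This is an illustrative sketch, not the paper's code; $A$ and $b$ below are random stand-ins for $\bm{A}_{\omega}$ and $\bm{b}_{\omega}$.

```python
import numpy as np

rng = np.random.default_rng(2)
S, D = 4, 12
A = rng.standard_normal((D, S))                  # slope of the region (rank S a.s.)
b = rng.standard_normal(D)

z = rng.standard_normal(S)
x = A @ z + b                                    # a point on the affine subspace G(omega)

z_rec = np.linalg.solve(A.T @ A, A.T @ (x - b))  # (A^T A)^{-1} A^T (x - b), eq. (11)
print(np.allclose(z_rec, z))                     # True: exact recovery on G(omega)
```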
### 4.2 Application: Adaptive Basis and Disentanglement
As mentioned above, the columns of $\bm{A}_{\omega}$ form a basis of the
affine subspace $\bm{G}(\omega)$. The latent vector $\bm{z}$ combines them to
obtain a point of the subspace, which is then shifted by the bias
$\bm{b}_{\omega}$. This process is performed locally for each region $\omega$,
in a manner similar to an “adaptive basis” (Donoho et al., 1994).
[Figure 4 layout: columns — initial vs. learned; rows — FC GAN, CONV GAN, FC VAE, CONV VAE]
Figure 4: Visualization of a single basis vector $[\bm{A}_{\omega}]_{.,k}$
at initialization and after learning, obtained from regions $\omega$
containing the digits $7,5,9$, and $0$ respectively, for GAN
and VAE models made of fully connected or convolutional layers (see Appendix B
for details). By visual inspection, one can observe that the depicted basis
vector encodes right rotation, cedilla extension, left rotation, and upward
translation respectively. In addition, we observe that the basis vectors are
smoother for VAE-based models, which is why they tend to generate blurred samples.
In this context, we aim to characterize the subspace basis in terms of
disentanglement, i.e., the alignment of the basis vectors with respect to each
other. While there is not a unique definition, a disentangled basis should
provide a “compact” and interpretable latent representation $\bm{z}$ for the
associated $\bm{x}=\bm{G}(\bm{z})$. In particular, it should ensure that a
small perturbation of dimension $d$ of $\bm{z}$ implies a transformation
independent from a small perturbation of dimension $d^{\prime}\not=d$
(Schmidhuber, 1992; Bengio et al., 2013). That is,
$\langle\bm{G}(\bm{z})-\bm{G}(\bm{z}+\epsilon\delta_{d}),\bm{G}(\bm{z})-\bm{G}(\bm{z}+\epsilon\delta_{d^{\prime}})\rangle\approx
0$ with $\delta_{d}$ a one-hot vector at position $d$ and length $S$ (Kim &
Mnih, 2018). A disentangled representation is thus considered most
informative, as each latent dimension implies a transformation that leaves the
others unchanged (Bryant & Yarnold, 1995). For example, rotating an object
should not alter its vertical or horizontal translation and vice versa.
###### Proposition 4.
A necessary condition for disentanglement is to have “near orthogonal” columns,
i.e., $\langle[\bm{A}_{\omega}]_{.,i},[\bm{A}_{\omega}]_{.,j}\rangle\approx
0,\forall i\not=j,\forall\omega\in\Omega$. (Proof in Appendix C.6.)
Table 1: Cosine similarity summed over pairwise different columns of $\bm{A}_{\omega}$. A value of $0$ means that the basis vectors are orthogonal, improving disentanglement (recall Prop. 4). For each setting, the first row reports the maximum of this quantity over $10000$ sampled regions $\omega$ and the second row its average; the std of those quantities is given over $8$ runs. We see that training increases disentanglement, and fully connected models offer increased disentanglement as compared to convolutional models.

 | FC GAN | CONV GAN | FC VAE | CONV VAE
---|---|---|---|---
init. (max) | 8.84 $\pm$ 0.07 | 3.2 $\pm$ 0.33 | 5.23 $\pm$ 0.29 | 3.5 $\pm$ 0.27
init. (mean) | 4.41 $\pm$ 0.26 | 1.84 $\pm$ 0.08 | 2.25 $\pm$ 0.08 | 1.74 $\pm$ 0.06
learned (max) | 1.36 $\pm$ 0.08 | 1.72 $\pm$ 0.07 | 1.32 $\pm$ 0.07 | 1.77 $\pm$ 0.11
learned (mean) | 0.9 $\pm$ 0.03 | 1.12 $\pm$ 0.03 | 0.89 $\pm$ 0.03 | 1.15 $\pm$ 0.03
Figure 4 visualizes one of the basis vectors of four different DGNs trained on
the MNIST dataset with $S=10$. This allows one to interpret the transformation
encoded by each dimension of the basis as well as to compare models, e.g., the
blurriness of VAE samples that is empirically observed across datasets
(Zhao et al., 2017; Huang et al., 2018). We also provide in Table 1 the value of
$\|Q_{\omega}-I\|_{2}$, with $Q_{\omega}\in[0,1]^{S\times S}$ the matrix of
cosine angles between the basis vectors of $\bm{A}_{\omega}$, for 10,000 regions
sampled randomly; we report the average over the regions and the
maximum. This process is performed over $8$ runs, and the mean and
standard deviation are reported in the table. We observe that there does not
seem to be a difference in the degree of disentanglement between GAN and
VAE models; however, the topology, fully connected vs. convolutional, plays an
important part, favoring the former. To visually control the quality of the
DGNs, randomly generated digits are given in Fig. 13 in the Appendix; we also
provide more background on disentanglement in Appendix E.
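A sketch of the Prop. 4 / Table 1 diagnostic follows; this is our illustrative reading of the statistic, and the random matrix below is merely a stand-in for a slope matrix $\bm{A}_{\omega}$.

```python
import numpy as np

def entanglement(A):
    """Sum of |cosine similarity| over pairwise different columns of A."""
    C = A / np.linalg.norm(A, axis=0, keepdims=True)   # unit-normalize basis vectors
    Q = np.abs(C.T @ C)
    return Q.sum() - np.trace(Q)                       # drop the diagonal self-pairs

rng = np.random.default_rng(3)
A = rng.standard_normal((28 * 28, 10))                 # e.g. D=784 pixels, S=10
print(entanglement(A))                                 # closer to 0 => more disentangled
```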
### 4.3 Generated Manifold Curvature
We now study the curvature or angularity of the generated mapping. That is,
whenever $\widetilde{S}<D$, the per-region affine subspaces of adjacent regions
are continuous and join at the region boundaries with a certain angle that
we now characterize.
###### Definition 2.
Two regions $\omega,\omega^{\prime}$ are adjacent whenever they share part of
their boundary as in
$\overline{\omega}\cap\overline{\omega^{\prime}}\not=\emptyset$.
The angle between adjacent affine subspaces is characterized by means of the
greatest principal angle (Afriat, 1957; Bjorck & Golub, 1973), denoted
$\theta$. Denote the per-region projection matrix of the DGN by
$\displaystyle
P(\bm{A}_{\omega})=\bm{A}_{\omega}(\bm{A}_{\omega}^{T}\bm{A}_{\omega})^{-1}\bm{A}_{\omega}^{T}$
(12)
where $\bm{A}_{\omega}^{T}\bm{A}_{\omega}\in\mathbb{R}^{S\times S}$ and
$P(\bm{A}_{\omega})\in\mathbb{R}^{D\times D}$. We now assume that
$\dim(\bm{G})=S$, ensuring that $\bm{A}_{\omega}^{T}\bm{A}_{\omega}$ is
invertible.222The derivation also applies if $\dim(\bm{G})<S$ by replacing
$\bm{A}_{\omega}$ with $\bm{A}_{\omega}^{\prime}$ (recall Lemma 2).
###### Theorem 2.
The angle between adjacent (recall Def. 2) region mappings
$\theta(\bm{G}(\omega),\bm{G}(\omega^{\prime}))$ is given by
$\displaystyle\sin\big{(}\theta(\bm{G}(\omega),\bm{G}(\omega^{\prime}))\big{)}=\|P(\bm{A}_{\omega})-P(\bm{A}_{\omega^{\prime}})\|_{2},$
(13)
$\forall\omega\in\Omega,\omega^{\prime}\in\text{adj}(\omega)$. (Proof in
Appendix C.7.)
Notice that two special cases of the above theorem emerge. When $S=1$, the
angle is given by the cosine similarity between the vectors $\bm{A}_{\omega}$
and $\bm{A}_{\omega^{\prime}}$ of adjacent regions. When $S=D-1$, the angle is
given by the cosine similarity between the normal vectors of the
$(D-1)$-dimensional subspaces spanned by $\bm{A}_{\omega}$ and
$\bm{A}_{\omega^{\prime}}$ respectively.
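The Thm. 2 angle is cheap to compute; below is a minimal numpy sketch, with random matrices standing in for the slopes of two adjacent regions.

```python
import numpy as np

def proj(A):
    """Orthogonal projector onto span(A), eq. (12)."""
    return A @ np.linalg.solve(A.T @ A, A.T)

rng = np.random.default_rng(4)
S, D = 3, 8
A, Ap = rng.standard_normal((D, S)), rng.standard_normal((D, S))

sin_theta = np.linalg.norm(proj(A) - proj(Ap), 2)      # spectral norm, eq. (13)
print("greatest principal angle (deg):",
      np.degrees(np.arcsin(min(sin_theta, 1.0))))      # clamp guards rounding
```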
We illustrate the angles in a simple case with $D=2$ and $S=1$ in Fig. 5. It can be
seen how a DGN with few parameters produces angles mainly at the points of
curvature of the manifold. We also provide many additional figures with
different training settings in Fig. 10 in the Appendix, as well as repetitions
of the same experiment with different random seeds.
Figure 5: Piecewise linear continuous 1-D manifold learned by a GAN DGN (in
black) from a collection of data points (in blue). The breakpoints between
adjacent regions are depicted by dots of color proportional to the angle. The
manifold starts at the green box and proceeds clockwise. Figure 11 in the
Appendix contains additional examples.
### 4.4 Application: Angle Distribution of a DGN with Random Weights
We can use the above result to study the distribution of angles of different
DGNs with random weights and study the impact of depth, width, as well as the
latent and output dimensions $S$ and $D$. Figure 6 summarizes the
distribution of angles for several different settings.
Two key observations emerge. First, the codes of adjacent regions
$\bm{q}(\omega),\bm{q}(\omega^{\prime})$ share a large number of their values
(see Appendix D for details), implying that most of the DGN parameters are
shared in the composition producing $\bm{A}_{\omega}$ and
$\bm{A}_{\omega^{\prime}}$. In turn, this weight sharing correlates adjacent
hyperplanes, such that their angles are much smaller than if they were picked
at random from one another. The random case (in blue in Fig. 6) strongly favors
large angles as opposed to those of DGNs. Second, the distribution moments
depend on the ratio $S/D$ rather than on those values taken independently. In
particular, as this ratio gets smaller, the angle distribution becomes
bimodal with an emergence of high angles. That is, the manifold is “flat”
overall except in some parts of the space where high angularity is present.
This effect is strengthened with wider DGNs. Notice that this small ratio
$S/D$ is the one encountered in practice, where it is common to have $S\approx
100$ and $D>800$.
Figure 6: Histograms of the largest principal angles for DGNs with two hidden
layers, $S=16$ and $D=17,D=32,D=64$ respectively and varying width $D_{\ell}$
on the y-axis. Three trends to observe: when the width becomes large, the
distribution becomes more bimodal and greatly favors near-$0$ angles; when the
output space dimension becomes large, there is an increase in the number of
angles near orthogonal; the amount of weight sharing between the parameters
$\bm{A}_{\omega}$ and $\bm{A}_{\omega^{\prime}}$ of adjacent regions $\omega$
and $\omega^{\prime}$ greatly constrains the angles to be small, as
opposed to the distribution of angles between random subspaces (Absil et al.,
2006) depicted in blue. Hence, despite the large number of regions, most will
be aligned with each other, leading to an overall well-behaved manifold.
Additional figures are available in Fig. 10 in the Appendix.
The above experiment demonstrates the impact of the width and latent space
dimension on the angularity of the DGN output manifold and how to pick the
architecture based on a priori knowledge of the target manifold. Under the
often-made assumption that the weights of an overparametrized DGN do not move
far from their initialization during training (Li & Liang, 2018), these
results also hint at the distribution of angles after training.
## 5 Density on the Generated Manifold
The study of DGNs would not be complete without considering that the latent
space is equipped with a density distribution $\bm{p}_{\bm{z}}$ from which
$\bm{z}$ is sampled, in turn leading to sampling of $\bm{G}(\bm{z})$. Thus, we
now study how this density is spread over the output space, covering the
generated manifold and highlighting some key properties such as density
concentration, entropy computation, and training instabilities.
### 5.1 Analytical Output Density
Given a distribution $\bm{p}_{\bm{z}}$ over the latent space, we can
explicitly compute the output distribution after the application of $\bm{G}$,
which leads to an intuitive result exploiting the piecewise affine property of
the generator.
###### Lemma 4.
Denote by $\sigma_{i}(\bm{A}_{\omega})$ the $i^{\rm th}$ singular value of
$\bm{A}_{\omega}$. Then, the volume of a region $\omega\in\Omega$ denoted by
$\mu(\omega)$ is related to the volume of $\bm{G}(\omega)$ by
$\displaystyle\mu(\bm{G}(\omega))=\sqrt{\det(\bm{A}_{\omega}^{T}\bm{A}_{\omega})}\mu(\omega)=\prod_{i:\sigma_{i}(\bm{A}_{\omega})>0}\sigma_{i}(\bm{A}_{\omega})\mu(\omega).$
(Proof in Appendix C.8.)
###### Theorem 3.
The generator probability density $p_{\bm{G}}(\bm{x})$ given $\bm{p}_{\bm{z}}$
and an injective generator $\bm{G}$ with per-region inverse
$\bm{G}^{-1}_{\omega}$ from (11) is given by
$\displaystyle
p_{\bm{G}}(\bm{x})=\sum_{\omega\in\Omega}\frac{\bm{p}_{\bm{z}}\left(\bm{G}^{-1}_{\omega}(\bm{x})\right)}{\sqrt{\det(\bm{A}_{\omega}^{T}\bm{A}_{\omega})}}\mathbbm{1}_{\\{\bm{x}\in\bm{G}(\omega)\\}}.$
(14)
(Proof in Appendix C.9.)
That is, the distribution obtained in the output space naturally corresponds
to a piecewise affine transformation of the original latent space
distribution, weighted by the change in volume of the per-region mappings.
From the analytical derivation of the generator density distribution, we
obtain its differential entropy, i.e., the Shannon entropy for continuous
distributions.
###### Corollary 1.
The differential entropy of the output distribution $\bm{p}_{\bm{G}}$ of the
DGN is given by
$\displaystyle
E(\bm{p}_{\bm{G}})=E(\bm{p}_{\bm{z}})+\sum_{\omega\in\Omega}P(\bm{z}\in\omega)\log(\sqrt{\det(\bm{A}_{\omega}^{T}\bm{A}_{\omega})}).$
(Proof in Appendix C.10.)
As a result, the differential entropy of the output distribution
$\bm{p}_{\bm{G}}$ corresponds to the differential entropy of the latent
distribution $\bm{p}_{\bm{z}}$ plus a convex combination of the per-region
volume changes. Two results emerge directly. First, it is possible to optimize
the latent distribution $\bm{p}_{\bm{z}}$ to better fit the target
distribution entropy, as has been done for example in (Ben-Yosef & Weinshall,
2018). Second, whenever this distribution is fixed, any gap between the latent
and output distribution entropies implies the need for large changes in volume
between $\omega$ and $\bm{G}(\omega)$.
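The correction term of Cor. 1 is an expectation over $\bm{p}_{\bm{z}}$ and can thus be estimated by Monte-Carlo sampling. Below is a sketch for a one-hidden-layer toy ReLU generator; the architecture is our own illustrative assumption, and degenerate regions (rank $<S$), where the full-rank assumption of the formula fails, are simply skipped.

```python
import numpy as np

rng = np.random.default_rng(5)
S, H, D = 2, 16, 5
W1, b1 = rng.standard_normal((H, S)), rng.standard_normal(H)
W2 = rng.standard_normal((D, H))

def slope_at(z):
    q = (W1 @ z + b1 > 0).astype(float)   # ReLU code of the region containing z
    return W2 @ np.diag(q) @ W1           # A_omega, recall (6)

# Cor. 1: E(p_G) - E(p_z) = E_z[ log sqrt(det(A_omega^T A_omega)) ].
vals = []
for _ in range(5000):
    A = slope_at(rng.standard_normal(S))
    sign, logdet = np.linalg.slogdet(A.T @ A)
    if sign > 0 and np.isfinite(logdet):  # skip degenerate (rank < S) regions
        vals.append(0.5 * logdet)
print("entropy-gap estimate:", np.mean(vals))
```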
Gaussian Case. We now demonstrate the use of the above derivation by
considering practical examples through which we are able to gain insights into
the DGN data modeling and generation. First, consider that the latent
distribution is set as $\bm{z}\sim\mathcal{N}(\mathbf{0},I)$; we obtain the
following result directly from Thm. 3.
###### Corollary 2.
The generator density distribution $p_{\bm{G}}(\bm{x})$ given
$\bm{z}\sim\mathcal{N}(\mathbf{0},I)$ is
$\displaystyle
p_{\bm{G}}(\bm{x})=\sum_{\omega\in\Omega}\frac{e^{-\frac{1}{2}(\bm{x}-\bm{b}_{\omega})^{T}(\bm{A}_{\omega}^{+})^{T}\bm{A}_{\omega}^{+}(\bm{x}-\bm{b}_{\omega})}}{\sqrt{(2\pi)^{S}\det(\bm{A}_{\omega}^{T}\bm{A}_{\omega})}}\mathbbm{1}_{\\{\bm{x}\in\bm{G}(\omega)\\}}.$
(Proof in Appendix C.12.)
The above formula is reminiscent of Kernel Density Estimation (KDE)
(Rosenblatt, 1956) and in particular adaptive KDE (Breiman et al., 1977),
where a partitioning of the data manifold is performed and different kernel
parameters are used on each cell ($\omega$ in our case).
Uniform Case. We now turn to the uniform latent distribution case. Consider
the following question: Suppose we start from a uniform distribution
$\bm{z}\sim\mathcal{U}(0,1)$ on the hypercube in $\mathbb{R}^{S}$; will the
samples be uniformly distributed on the manifold of $\bm{G}$?
###### Proposition 5.
Given a uniform latent distribution $\bm{z}\sim\mathcal{U}(0,1)$, the sampling
of the manifold $\bm{G}(\text{supp}(\bm{p}_{\bm{z}}))$ will be uniform iff
$\det(\bm{A}^{T}_{\omega}\bm{A}_{\omega})=c,\forall\omega:\omega\cap\text{supp}(\bm{p}_{\bm{z}})\not=\emptyset,c>0$.
### 5.2 Generative Deep Network Likelihood and Normalizing Flows
Note from Thm. 3 that we obtain an explicit density distribution. One
possibility for learning thus corresponds to minimizing the negative log-
likelihood (NLL) between the generator output distribution and the data.
Recall from Thm. 3 that
$\sqrt{\det{((\bm{A}^{+}_{\omega})^{T}\bm{A}^{+}_{\omega})}}=(\sqrt{\det{(\bm{A}_{\omega}^{T}\bm{A}_{\omega})}})^{-1}$;
thus we can write the log density from (14) over a sample $\bm{x}_{n}$ as
$\mathcal{L}(\bm{x}_{n})=\log(\bm{p}_{\bm{z}}(\bm{G}^{-1}(\bm{x}_{n})))+\log(\sqrt{\det(J(\bm{x}_{n}))})$,
where
$J(\bm{x}_{n})=J_{\bm{G}^{-1}}(\bm{x}_{n})^{T}J_{\bm{G}^{-1}}(\bm{x}_{n})$,
with $J$ the Jacobian operator. Learning the weights of the DGN by minimizing
the NLL given by $-\sum_{n=1}^{N}\mathcal{L}(\bm{x}_{n})$ corresponds to the
normalizing flow model. The practical difference between this formulation and
most NF models comes from having either a mapping from $\bm{x}\mapsto\bm{z}$
(NF) or from $\bm{z}\mapsto\bm{x}$ (DGN case). This change only impacts the
speed to either sample points or to compute the probability of observations.
In fact, the forward pass of a DGN is easily obtained, as opposed to its
inverse, which requires a search over the codes $\bm{q}$ and thus some
optimization. Thus, the DGN formulation will have inefficient training (slow
to compute the likelihood) but fast sampling, while NFs will have efficient
training but inefficient sampling.
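For intuition, inside a single known region with slope $A$ and offset $b$ (random stand-ins below for $\bm{A}_{\omega},\bm{b}_{\omega}$) and a standard Normal prior, the per-sample log density reduces to the following sketch.

```python
import numpy as np

rng = np.random.default_rng(6)
S, D = 3, 7
A, b = rng.standard_normal((D, S)), rng.standard_normal(D)

x = A @ rng.standard_normal(S) + b                     # a sample lying on G(omega)
z = np.linalg.solve(A.T @ A, A.T @ (x - b))            # G_omega^{-1}(x), eq. (11)

log_pz = -0.5 * (z @ z) - 0.5 * S * np.log(2 * np.pi)  # log N(z; 0, I)
_, logdet = np.linalg.slogdet(A.T @ A)
log_px = log_pz - 0.5 * logdet                         # log of (14) on this region
print(log_px)
```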
### 5.3 On the Difficulty of Generating Low entropy/Multimodal Distributions
We conclude this section with the study of the instabilities encountered when
training DGNs on multimodal densities or other atypical cases.
[Figure 7 panels: $\sigma_{1}=0,\sigma_{2}\in\\{1,2,3\\}$; $\sigma_{1}=1,\sigma_{2}\in\\{1,2,3\\}$; $\sigma_{1}=2,\sigma_{2}\in\\{1,2,3\\}$]
Figure 7: Distribution of
$\log(\sqrt{\det(\bm{A}_{\omega}^{T}\bm{A}_{\omega})})$ for $2000$ regions
$\omega$ with a DGN with $L=3,S=6,D=10$ and weights initialized with Xavier;
then, half of the weights’ coefficients (picked randomly) are rescaled by
$\sigma_{1}$ and the other half by $\sigma_{2}$. We observe that greater
variance of the weights increases the spread of the log-determinants and
increases the mean of the distribution.
Figure 8: Distribution of the log-determinant of the per-region mappings for
different true distributions: two Gaussians with increasing standard deviation
(left to right). We observe how the concentration and inter-mode distance
greatly impact the distribution of the log-determinants needed for the
generator to fit the distribution, which in turn increases the variance of the
weights $\bm{W}_{\ell}$.
Figure 9: Example of a GAN DGN trained on a mixture of $25$ Gaussians. On the
left is depicted the latent space per-region log-determinant (color coded),
with values ranging from $-6$ to $4$ in log-scale ($0.002$ to $55$ in linear
scale). In the middle are depicted the true samples (blue) and generated ones
(orange), and on the right are depicted points sampled from the generator
from regions with low determinant (green) and large determinant (red). We can
observe that this failure case (poor approximation of the true distribution)
is due to the continuity of MASO DGNs, which makes the generator
move continuously between modes while not being able to sufficiently reduce
the sampling probability $\bm{p}_{\bm{G}}$ in between the modes. Additional
examples are contained in Fig. 14.
We demonstrated in Thm. 3 and Cor. 1 that the product of the nonzero singular
values of $\bm{A}_{\omega}$ plays the central role in concentrating or
dispersing the density on $\bm{G}(\omega)$. Let us now consider a simple
mixture of Gaussians. It becomes clear that the standard deviation of the
modes and the inter-mode distances put constraints on the singular values of
the slope matrix $\bm{A}_{\omega}$. However, imposing a large variance in the
singular values of $\bm{A}_{\omega}$ across regions $\omega$ directly stresses
the parameters $\bm{W}_{\ell}$, as they compose the slope matrix. This is
highlighted in Fig. 7, where we depict the distribution of the log-determinants
for different bimodal distributions of the weights $\bm{W}_{\ell}$, showing the
correlation between the variance of those parameters and the variance of the
log-determinants over different regions.
We also illustrate the variation of the learned generator log-determinant
distribution across regions in Fig. 8, where we trained a GAN DGN on two
Gaussians with different scalings. This further highlights the importance of
the distribution of determinants reachable by a DGN, which depends
on the architecture and parameter space. In conclusion, for multimodal and
low-entropy distributions, the log-determinants required for the DGN to
approximate the true distribution go against some standard regularization
techniques such as Tikhonov regularization, which (recall Fig. 7) pushes the
generator output density $\bm{p}_{\bm{G}}$ to be more uniform with higher
entropy.
Supplementary Material
## Appendix A Extra Figures
All figures below have been compressed for upload to arXiv.
### A.1 Angles Histogram
[Figure 10 panels, $(S,D)$: $(2,3)$, $(8,9)$, $(2,4)$, $(8,16)$, $(2,8)$, $(8,32)$, $(4,5)$, $(16,17)$, $(4,8)$, $(16,32)$, $(4,16)$, $(16,64)$]
Figure 10: Reproduction of Fig. 6. Histograms of the largest principal angles
for DGNs with one hidden layer (first two rows) and two hidden layers (last
two rows). In each case the latent space dimension and width of the hidden
layers is in the top of the column. The observations reinforce the claims on
the role of width and $S$ versus $D$ dimensions.
### A.2 Angles Manifold
Figure 11: Reproduction of Fig. 5 for various DGN topologies. The columns
represent different widths $D_{\ell}\in\\{6,8,16,32\\}$ and the rows
correspond to repetitions of the learning for different random initializations
of the DGNs with consecutive seeds.
### A.3 More on MNIST Disentanglement
Figure 12: Randomly generated digits from the trained GAN (top) and trained
VAE (bottom) models for the experiment from Fig. 4. Each row represents a model
that was trained with a different random initialization (8 runs in total),
which produced the results in Table 1.
Figure 13: Randomly generated digits from the trained CONV GAN (top) and
trained CONV VAE (bottom) models for the experiment from Fig. 4. Each row
represents a model that was trained with a different random initialization (8
runs in total), which produced the results in Table 1.
### A.4 More on Determinant Figures
Figure 14: Reproduction of Fig. 8 for multiple standard deviations and
multiple random seeds. Each column represents a different standard deviation of
the two Gaussians, $\sigma\in\\{0.002,0.01,0.05,0.1,0.3,1,2\\}$, and each row is
a run with a different seed. As can be seen, in all cases (except in the
absence of convergence) the distribution of the determinants supports the claim
and relates to the entropy of the true distribution (blue points).
## Appendix B Architecture Details
We describe the models used below. Dense(T) represents a fully connected
layer with $T$ units (activation function not included). Conv2D(I, J, K)
represents $I$ filters of spatial shape $(J,K)$, and the input dilation and
padding follow the standard definitions. The encoder below is used for the VAE
models, and the discriminator below for the GAN models.
The FC GAN model means that the FC generator is used in conjunction with the
discriminator, the CONV GAN means that the CONV generator is used in
conjunction with the discriminator, and similarly for the VAE case.
FC generator | CONV generator | Encoder | Discriminator
---|---|---|---
Dense(256) | Dense(256) | Dense(512) | Dense(1024)
leaky ReLU | leaky ReLU | Dropout(0.3) | Dropout(0.3)
Dense(512) | Dense(8 * 6 * 6) | leaky ReLU | leaky ReLU
leaky ReLU | leaky ReLU | Dense(256) | Dense(512)
Dense(1024) | Reshape(8, 6, 6) | leaky ReLU | Dropout(0.3)
leaky ReLU | Conv2D(8, 3, 3, inputdilation=2, pad=same) | Dense(2*S) | leaky ReLU
Dense(28*28) | leaky ReLU | | Dense(256)
| Conv2D(8, 4, 4, inputdilation=3, pad=valid) | | Dropout(0.3)
| Reshape(28*28) | | leaky ReLU
| | | Dense(2)
All the training procedures employ the Adam optimizer with a learning rate of
$0.0001$, which stays constant until training completion. In all cases training
is done over $300$ epochs, an epoch consisting of viewing the entire image
training set once.
## Appendix C Proofs
### C.1 Proof of Proposition 1
###### Proof.
The result is a direct application of Corollary 3 in (Balestriero et al.,
2019) adapted to DGNs (rather than classification-based DNs). The input regions
are proven to be convex polytopes. Then, by linearity of the per-region
mapping, convexity is preserved, with the form given by (6). ∎
### C.2 Proof of Lemma 1
###### Proof.
First recall the standard result that
$\displaystyle\text{rank}(AB)\leq\min(\text{rank}(A),\text{rank}(B)),$
for any matrices $A\in\mathbb{R}^{N\times K}$ and $B\in\mathbb{R}^{K\times D}$
(see, for example, (Banerjee & Roy, 2014), Chapter 5). Noticing that
$\min(\min(a,b),\min(c,d))=\min(a,b,c,d)$ and unrolling the product of matrices
that make up $\bm{A}_{\omega}$ leads to the desired result. ∎
### C.3 Proof of Proposition 2
###### Proof.
First notice that there can only be two cases: the dimension of the affinely
mapped region $\bm{G}(\omega)$ is either $S$ or smaller than $S$. Let us first
consider the bijective case. For the DGN to be bijective on the region we need
a one-to-one mapping from $\omega$ to $\bm{G}(\omega)$. If the dimension of the
subspace $\bm{G}(\omega)$ is $S$, then the matrix $\bm{A}_{\omega}$ is
full-rank, with rank $S$. In turn, this means that the columns of the matrix
are linearly independent. This implies bijectivity on the region, as each point
in $\omega$ is mapped to a unique point in $\bm{G}(\omega)$ and vice versa. The
surjectivity is direct: if the dimension is smaller than $S$, then the matrix
$\bm{A}_{\omega}$ is not full-rank, and all the points in the region $\omega$
that lie in the kernel of the matrix (lifted with the bias $\bm{b}_{\omega}$)
will be mapped to the same output point. This means that there exist distinct
points in $\omega$ that are mapped to the same point in $\bm{G}(\omega)$, which
gives surjectivity.
For global bijectivity, we need an additional condition. To ensure that the
entire DGN preserves a one-to-one mapping, we need per-region bijectivity
coupled with the fact that the mappings of different regions do not intersect.
We now look at bijectivity between $\text{supp}(\bm{p}_{\bm{z}})$ and
$\bm{G}(\text{supp}(\bm{p}_{\bm{z}}))$. If the regions do not intersect after
the affine mappings, then there do not exist distinct latent vectors $\bm{z}$
and $\bm{z}^{\prime}$ that are mapped to the same output point. Because we also
have bijectivity between $\omega$ and $\bm{G}(\omega),\forall\omega$, each
point in $\text{supp}(\bm{p}_{\bm{z}})$ is mapped to a unique point in
$\bm{G}(\text{supp}(\bm{p}_{\bm{z}}))$, which gives global bijectivity.
∎
### C.4 Proof of Proposition 3
###### Proof.
The lower bound is obtained by taking the realization of the noise where
$\bm{\epsilon}=\mathbf{0}$; in that case the input space partition is the
entire space, as any input is mapped to the same VQ code. As such, the mapping
associated to this trivial partition has $\mathbf{0}$ slope (matrix filled
with zeros) and a possibly nonzero bias; the mapping is thus zero-dimensional
(any point is mapped to the same point). This shows that, in the mixture of
DGNs, one will have dimension $0$. For the upper bound, simply take the
trivial case $\bm{\epsilon}=\mathbf{1}$, which gives the result. ∎
### C.5 Proof of Lemma 3
###### Proof.
First, as we impose injectivity, we cannot have multiple regions of the input
space, say $\omega$ and $\omega^{\prime}$, such that
$\bm{G}(\omega)\cap\bm{G}(\omega^{\prime})\not=\emptyset$. Second, a region of
the input space is mapped to another region in the output space by means of
the affine transformation; thus, even though the ambient space dimension $D$
might be greater than $\dim(\bm{G})$, injectivity implies that points in
$\omega$ are mapped to at most one point in $\bm{G}(\omega)$. They are
affinely mapped, meaning that the inverse is given by removing the bias and
inverting the linear mapping, which is done by the pseudo-inverse. Recalling
the above result on surjectivity, we see that for the DGN to be injective the
per-region dimension must be $S$, showing existence of the pseudo-inverse. ∎
### C.6 Proof of Proposition 4
###### Proof.
The proof is straightforward from the definition of disentanglement used here.
Recall that we aim to have
$\langle\bm{G}(\bm{z})-\bm{G}(\bm{z}+\epsilon\delta_{d}),\bm{G}(\bm{z})-\bm{G}(\bm{z}+\epsilon\delta_{d^{\prime}})\rangle\approx
0$. Consider only transformations small enough that
$\bm{z}+\epsilon\delta_{d}$ and $\bm{z}+\epsilon\delta_{d^{\prime}}$ remain
in the same region $\omega$ that contains $\bm{z}$. Then it is clear that
for any positive constant $\epsilon$ fulfilling this condition, the above
disentanglement definition translates into
$\displaystyle\langle\bm{G}(\bm{z})-\bm{G}(\bm{z}+\epsilon\delta_{d}),\bm{G}(\bm{z})-\bm{G}(\bm{z}+\epsilon\delta_{d^{\prime}})\rangle\approx
0\iff\langle[\bm{A}_{\omega}]_{.,d},[\bm{A}_{\omega}]_{.,d^{\prime}}\rangle\approx
0.$
This gives a necessary condition, which is not sufficient, as this alone does
not guarantee that each dimension of the latent space only impacts a single
transformation of the output. But a disentangled representation must have
near-orthogonal columns for the slope matrices $\bm{A}_{\omega}$. ∎
### C.7 Proof of Theorem 2
###### Proof.
First, notice that
$P(\bm{A}_{\omega})=\bm{A}_{\omega}(\bm{A}_{\omega}^{T}\bm{A}_{\omega})^{-1}\bm{A}_{\omega}^{T}$
defines a projection matrix. In fact, we have that
$\displaystyle P(\bm{A}_{\omega})^{2}$
$\displaystyle=\bm{A}_{\omega}(\bm{A}_{\omega}^{T}\bm{A}_{\omega})^{-1}\bm{A}_{\omega}^{T}\bm{A}_{\omega}(\bm{A}_{\omega}^{T}\bm{A}_{\omega})^{-1}\bm{A}_{\omega}^{T}$
$\displaystyle=\bm{A}_{\omega}(\bm{A}_{\omega}^{T}\bm{A}_{\omega})^{-1}\bm{A}_{\omega}^{T}$
$\displaystyle=P(\bm{A}_{\omega})$
and we have that $(\bm{A}_{\omega}^{T}\bm{A}_{\omega})^{-1}$ is well defined,
as we assume injectivity ($\text{rank}(\bm{A}_{\omega})=S$), making the
$S\times S$ matrix $\bm{A}_{\omega}^{T}\bm{A}_{\omega}$ full rank. Now it is
clear that this projection matrix maps an arbitrary point
$\bm{x}\in\mathbb{R}^{D}$ to the affine subspace $\bm{G}(\omega)$ up to the
bias shift. As we are interested in the angle between two adjacent subspaces
$\bm{G}(\omega)$ and $\bm{G}(\omega^{\prime})$, it is also clear that the
biases (which do not change the angle) can be omitted. Hence the task
simplifies to finding the angle between $P(\bm{A}_{\omega})$ and
$P(\bm{A}_{\omega^{\prime}})$. This can be done by means of the greatest
principal angle (a proof can be found in Stewart (1973)), with the result being
$\sin\big{(}\theta(\bm{G}(\omega),\bm{G}(\omega^{\prime}))\big{)}=\|P(\bm{A}_{\omega})-P(\bm{A}_{\omega^{\prime}})\|_{2}$,
as desired. ∎
### C.8 Proof of Lemma 4
###### Proof.
In the special case of an affine transform of the coordinates given by a square
matrix $A\in\mathbb{R}^{D\times D}$, the well-known change-of-variables result
demonstrates that the change of volume is given by $|\det(A)|$ (see Theorem
7.26 in (Rudin, 2006)). However, in our case the mapping is a rectangular
matrix, as we span an affine subspace in the ambient space $\mathbb{R}^{D}$,
making $|\det(A)|$ undefined. By applying Sard’s theorem (Spivak, 2018), we
obtain that the change of volume from the region $\omega$ to the affine
subspace $\bm{G}(\omega)$ is given by $\sqrt{\det(A^{T}A)}$, which can be
written as follows, with $USV^{T}$ the SVD of the matrix $A$:
$\displaystyle\sqrt{\det(A^{T}A)}=\sqrt{\det((USV^{T})^{T}(USV^{T}))}=\sqrt{\det(VS^{T}SV^{T})}=\sqrt{\det(S^{T}S)}=\prod_{i:\sigma_{i}\not=0}\sigma_{i}(A).$
∎
### C.9 Proof of Theorem 3
###### Proof.
We perform the change of variables
$\bm{z}=(\bm{A}^{T}_{\omega}\bm{A}_{\omega})^{-1}\bm{A}_{\omega}^{T}(\bm{x}-\bm{b}_{\omega})=\bm{A}_{\omega}^{+}(\bm{x}-\bm{b}_{\omega})$;
also notice that $J_{\bm{G}^{-1}}(\bm{x})=\bm{A}_{\omega}^{+}$. First, we know
that $P_{\bm{G}(\bm{z})}(\bm{x}\in
w)=P_{\bm{z}}(\bm{z}\in\bm{G}^{-1}(w))=\int_{\bm{G}^{-1}(w)}p_{\bm{z}}(\bm{z})d\bm{z}$,
which is well defined based on our full-rank assumption. We then proceed by
$\displaystyle P_{\bm{G}}(\bm{x}\in w)=\sum_{\omega\in\Omega}\int_{\omega\cap w}p_{\bm{z}}(\bm{G}^{-1}(\bm{x}))\sqrt{\det(J_{\bm{G}^{-1}}(\bm{x})^{T}J_{\bm{G}^{-1}}(\bm{x}))}\,d\bm{x}$
$\displaystyle=\sum_{\omega\in\Omega}\int_{\omega\cap w}p_{\bm{z}}(\bm{G}^{-1}(\bm{x}))\sqrt{\det((\bm{A}_{\omega}^{+})^{T}\bm{A}_{\omega}^{+})}\,d\bm{x}$
$\displaystyle=\sum_{\omega\in\Omega}\int_{\omega\cap w}p_{\bm{z}}(\bm{G}^{-1}(\bm{x}))\Big{(}\prod_{i:\sigma_{i}(\bm{A}_{\omega}^{+})>0}\sigma_{i}(\bm{A}_{\omega}^{+})\Big{)}\,d\bm{x}$
$\displaystyle=\sum_{\omega\in\Omega}\int_{\omega\cap w}p_{\bm{z}}(\bm{G}^{-1}(\bm{x}))\Big{(}\prod_{i:\sigma_{i}(\bm{A}_{\omega})>0}\sigma_{i}(\bm{A}_{\omega})\Big{)}^{-1}d\bm{x}\quad\text{(Step 1)}$
$\displaystyle=\sum_{\omega\in\Omega}\int_{\omega\cap w}p_{\bm{z}}(\bm{G}^{-1}(\bm{x}))\frac{1}{\sqrt{\det(\bm{A}_{\omega}^{T}\bm{A}_{\omega})}}\,d\bm{x}.$
Let us now justify Step 1 by proving that
$\sigma_{i}(A^{+})=(\sigma_{i}(A))^{-1}$, where we lighten notation as
$A:=\bm{A}_{\omega}$ and $USV^{T}$ is the SVD of $A$:
$\displaystyle A^{+}=(A^{T}A)^{-1}A^{T}=((USV^{T})^{T}(USV^{T}))^{-1}(USV^{T})^{T}=(VS^{T}SV^{T})^{-1}VS^{T}U^{T}=V(S^{T}S)^{-1}S^{T}U^{T}\implies\sigma_{i}(A^{+})=(\sigma_{i}(A))^{-1}.$
With the above, it is direct to see that
$\sqrt{\det((\bm{A}_{\omega}^{+})^{T}\bm{A}_{\omega}^{+})}=\frac{1}{\sqrt{\det(\bm{A}_{\omega}^{T}\bm{A}_{\omega})}}$
as follows:
$\displaystyle\sqrt{\det((\bm{A}_{\omega}^{+})^{T}\bm{A}_{\omega}^{+})}=\prod_{i:\sigma_{i}\not=0}\sigma_{i}(\bm{A}_{\omega}^{+})=\prod_{i:\sigma_{i}\not=0}\sigma_{i}(\bm{A}_{\omega})^{-1}=\Big{(}\prod_{i:\sigma_{i}\not=0}\sigma_{i}(\bm{A}_{\omega})\Big{)}^{-1}=\frac{1}{\sqrt{\det(\bm{A}_{\omega}^{T}\bm{A}_{\omega})}},$
which gives the desired result. ∎
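A two-line numerical check of the Step 1 identity (our own sketch, with a random tall full-rank matrix standing in for $A$):

```python
import numpy as np
A = np.random.default_rng(8).standard_normal((7, 3))
# Nonzero singular values of pinv(A) are the reciprocals of those of A.
print(np.allclose(np.sort(np.linalg.svd(np.linalg.pinv(A), compute_uv=False)),
                  np.sort(1 / np.linalg.svd(A, compute_uv=False))))  # True
```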
### C.10 Proof of Cor. 1
###### Proof.
The derivation of the entropy consists in rewriting the entropy w.r.t. the
distribution in the output space of the DGN and performing the change of
coordinates leveraging the above result, to finally obtain the desired result
as follows:
$\displaystyle E(\bm{p}_{\bm{G}})=-\sum_{\omega\in\Omega}\int_{\bm{G}(\omega)}\bm{p}_{\bm{G}}(\bm{x})\log(\bm{p}_{\bm{G}}(\bm{x}))d\bm{x}$
$\displaystyle=-\sum_{\omega\in\Omega}\int_{\bm{G}(\omega)}p_{\bm{z}}(\bm{G}^{-1}(\bm{x}))\det(\bm{A}_{\omega}^{T}\bm{A}_{\omega})^{-\frac{1}{2}}\log\left(p_{\bm{z}}(\bm{G}^{-1}(\bm{x}))\det(\bm{A}_{\omega}^{T}\bm{A}_{\omega})^{-\frac{1}{2}}\right)d\bm{x}$
$\displaystyle=-\sum_{\omega\in\Omega}\int_{\bm{G}(\omega)}\det(\bm{A}_{\omega}^{T}\bm{A}_{\omega})^{-\frac{1}{2}}p_{\bm{z}}(\bm{G}^{-1}(\bm{x}))\log\left(p_{\bm{z}}(\bm{G}^{-1}(\bm{x}))\right)d\bm{x}$
$\displaystyle\quad-\sum_{\omega\in\Omega}\int_{\bm{G}(\omega)}\det(\bm{A}_{\omega}^{T}\bm{A}_{\omega})^{-\frac{1}{2}}p_{\bm{z}}(\bm{G}^{-1}(\bm{x}))\log\left(\det(\bm{A}_{\omega}^{T}\bm{A}_{\omega})^{-\frac{1}{2}}\right)d\bm{x}$
$\displaystyle=E(\bm{p}_{\bm{z}})+\frac{1}{2}\sum_{\omega\in\Omega}P(\bm{z}\in\omega)\log\left(\det(\bm{A}_{\omega}^{T}\bm{A}_{\omega})\right)\quad\text{(change of coordinates $\bm{z}=\bm{G}^{-1}(\bm{x})$)},$
which gives the desired result. For a complete review of integration on
manifolds, please see (Cover & Thomas, 2012). ∎
### C.11 Proof of Theorem 1
###### Proof.
For very small $N$ it is clear that, in general, even if $S<S^{*}$, the
memorization capacity of the generator will be such that it can fit through
those points. Just imagine a couple of points sampled from a $2$D linear
manifold: even though $S=1$, the DGN can go through those two points and thus
have $E^{*}=0$ for $N=2$. We now consider the case where $N$ is large enough.
Two cases have to be studied.
* •
Case $S<S^{*}$: if $S<S^{*}$, the generated manifold can never be dense in the
true linear manifold. This means that a newly introduced point will almost
surely not lie in the span of the current generated manifold. Thus,
$E^{*}(N+1)>E^{*}(N)$.
* •
Case $S\geq S^{*}$: in that case, it is clear that there always exists a setting
of the parameters $\Theta$ s.t. the DGN spans the linear manifold. For example,
if using ReLU, consider any weights for the first $L-1$ layers s.t. the ReLUs
are always “on”, and use the last layer’s affine mapping to rotate and
translate the affine subspace onto the true one. That is,
$E^{*}(N)=0,\forall N>0$.
The above demonstrates how, for the simplest target manifold (linear), a too-
narrow DN leading to $S<S^{*}$ will have a training error $E^{*}$ increasing
with $N$, or $0$ for any $N$ if $S\geq S^{*}$. ∎
### C.12 Proof of Corollary 2
###### Proof.
First, by applying the above results on the general density formula and
setting $\bm{p}_{\bm{z}}$ to a standard Normal distribution, we obtain
$\displaystyle\bm{p}_{\bm{G}}(\bm{x}\in w)=\sum_{\omega\in\Omega}\int_{\omega\cap w}\mathbbm{1}_{\bm{x}\in\bm{G}(\omega)}p_{\bm{z}}(\bm{G}^{-1}(\bm{x}))\det(\bm{A}_{\omega}^{T}\bm{A}_{\omega})^{-\frac{1}{2}}d\bm{x}$
$\displaystyle=\sum_{\omega\in\Omega}\int_{\omega\cap w}\mathbbm{1}_{\bm{x}\in\bm{G}(\omega)}\frac{1}{(2\pi)^{S/2}\sqrt{\det(\bm{A}_{\omega}^{T}\bm{A}_{\omega})}}e^{-\frac{1}{2}\|\bm{G}^{-1}(\bm{x})\|_{2}^{2}}d\bm{x}$
$\displaystyle=\sum_{\omega\in\Omega}\int_{\omega\cap w}\mathbbm{1}_{\bm{x}\in\bm{G}(\omega)}\frac{1}{(2\pi)^{S/2}\sqrt{\det(\bm{A}_{\omega}^{T}\bm{A}_{\omega})}}e^{-\frac{1}{2}(\bm{A}^{+}_{\omega}(\bm{x}-\bm{b}_{\omega}))^{T}(\bm{A}^{+}_{\omega}(\bm{x}-\bm{b}_{\omega}))}d\bm{x}$
$\displaystyle=\sum_{\omega\in\Omega}\int_{\omega\cap w}\mathbbm{1}_{\bm{x}\in\bm{G}(\omega)}\frac{1}{(2\pi)^{S/2}\sqrt{\det(\bm{A}_{\omega}^{T}\bm{A}_{\omega})}}e^{-\frac{1}{2}(\bm{x}-\bm{b}_{\omega})^{T}(\bm{A}^{+}_{\omega})^{T}\bm{A}^{+}_{\omega}(\bm{x}-\bm{b}_{\omega})}d\bm{x},$
giving the desired result. ∎
## Appendix D Codes of neighbour regions
Each code is equivalent to a system of inequalities that defines a region.
In fact, a code depends on the signs of the feature maps' pre-activations
(recall (3)). This defines a polytope in the input space and also in the output
space. Now, when traveling from a point $\bm{z}$ to a point $\bm{z}^{\prime}$
of a neighboring region (recall Def. 2), we ask how many entries of the code
change, i.e., what is the Hamming distance between $\bm{q}(\bm{z})$ and
$\bm{q}(\bm{z}^{\prime})$. As a neighboring region is defined as a region which
shares some of its boundary with another (their interiors are disjoint), the
degree of the face shared between the two regions determines the number of
changes in their corresponding codes. If two regions share an $S-1$ dimensional
face, then only $1$ value of the code changes. If they share, in general, an
$S-r$ dimensional face, then the code changes by $r$ values. As most adjacent
regions share a high-dimensional face, $r$ tends to be small and thus the codes
are similar. For details and an analytical study of the above, please see
(Lautensack & Zuyev, 2008).
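This can also be observed empirically; below is a toy sketch (our own illustrative setup, a single ReLU layer with "neighboring" regions approximated by small perturbations of $\bm{z}$).

```python
import numpy as np

rng = np.random.default_rng(7)
S, H = 4, 32
W, b = rng.standard_normal((H, S)), rng.standard_normal(H)

def code(z):
    return (W @ z + b > 0).astype(int)    # first-layer ReLU code q_1

z = rng.standard_normal(S)
for eps in [1e-3, 1e-2, 1e-1, 1.0]:
    dz = eps * rng.standard_normal(S)
    # Small perturbations stay in the same or a nearby region: few code flips.
    print(eps, "Hamming distance:", int(np.sum(code(z) != code(z + dz))))
```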
## Appendix E More on Disentangled Latent Representations
It has been argued that providing interpretable and practical generators lies
in the ability to learn a disentangled representation of an input
$\bm{x}=\bm{G}(\bm{z})$ (Schmidhuber, 1992; Bengio et al., 2013). The code
$\bm{z}$ should contain all the information present in $\bm{x}$ in a compact
and interpretable structure where distinct, informative factors of variation
are encoded by different dimensions. Such motivations originated from
(non-)linear independent component analysis, which focuses on recovering
independent factors from observed data (Comon, 1994; Hyvarinen & Morioka,
2016). In fact, even in recent GAN/VAE-based models, disentangled
representations are associated with independent transformations of the input,
such as pose, hair color, eye color, and so on, which should behave
independently from each other (Yim et al., 2015; Tran et al., 2017). For a more
in-depth review of learning disentangled representations, see (Locatello et
al., 2018).
## Appendix F Details on Training Procedure
The experiment aims at depicting the training error $E^{*}$ on the
training set for varying latent dimensions $S$ in the simple case of a linear
true data manifold. In order to prevent any unlucky optimization degeneracy,
we repeat the training procedure $30$ times, compute the error $E^{*}$ for each
epoch, and report the minimum over the $30$ trials and training epochs. We also
set a very large number of epochs: $2000$. Due to the large number of trials
and epochs, the reported results are not due to some random initialization
setting and convey the point of the result: even for such a simple data model
(linear manifold), if $S<S^{*}$ then the training error $E^{*}$ will increase
with $N$. Finally, the minimization over $\bm{z}$ is replaced by an autoencoder
with a very wide encoder s.t. it has the capacity to memorize, for each
training point, the optimum $\bm{z}$ that minimizes $E$. That is, we minimize
$\displaystyle\min_{\Theta}\min_{\Theta^{\prime}}\|\bm{G}_{\Theta}(E_{\Theta^{\prime}}(\bm{x}))-\bm{x}\|\approx\min_{\Theta}\min_{\bm{z}}\|\bm{G}_{\Theta}(\bm{z})-\bm{x}\|,$
for a large enough encoder network $E$. In our case, given that we used
$S^{*}=6$, we used an encoder with $D_{\ell}=256$ units and $L=3$.
## Appendix G More on Effect of Dropout/Dropconnect
As opposed to dropout, which applies the binary noise onto the feature
maps $\bm{v}_{\ell}$, dropconnect (Wan et al., 2013) applies the binary noise
onto the slope matrices $\bm{W}_{\ell}$, making the noisy mapping become
$\displaystyle\bm{G}(\bm{z})=\left(\prod_{\ell=L}^{1}\text{diag}(\bm{q}_{\ell})(\bm{W}_{\ell}\odot\bm{R}_{\ell})\right)\bm{z}+\sum_{\ell=1}^{L}\left(\prod_{\ell^{\prime}=L}^{\ell+1}\text{diag}(\bm{q}_{\ell^{\prime}})(\bm{W}_{\ell^{\prime}}\odot\bm{R}_{\ell^{\prime}})\right)\bm{b}_{\ell},$
where the binary noise matrices are denoted by $\bm{R}_{\ell}$. Despite this
change in how the noise is applied, the exact same result applies and Prop. 3
also holds. That is, the dropconnect-equipped DGN becomes a mixture of DGNs
with varying dimensions and parameters. Notice, however, that dropconnect is
less likely to reduce the ablated generator's dimension than dropout, due to
its application to each entry of the weight matrix rather than to an entire
row at a time, as depicted in Fig. 2.
## Appendix H Acknowledgments
This work was supported by NSF grants CCF-1911094, IIS-1838177, and
IIS-1730574; ONR grants N00014-18-12571 and N00014-17-1-2551; AFOSR grant
FA9550-18-1-0478; DARPA grant G001534-7500; and a Vannevar Bush Faculty
Fellowship, ONR grant N00014-18-1-2047, the Ken Kennedy Institute 2019/20 BP
Graduate Fellowship.
## References
* Absil et al. (2006) Absil, P.-A., Edelman, A., and Koev, P. On the largest principal angle between random subspaces. _Linear Algebra and its applications_ , 414(1):288–294, 2006.
* Afriat (1957) Afriat, S. N. Orthogonal and oblique projectors and the characteristics of pairs of vector spaces. In _Mathematical Proceedings of the Cambridge Philosophical Society_ , volume 53, pp. 800–816. Cambridge University Press, 1957.
* Andrés-Terré & Lió (2019) Andrés-Terré, H. and Lió, P. Perturbation theory approach to study the latent space degeneracy of variational autoencoders, 2019.
* Arjovsky & Bottou (2017) Arjovsky, M. and Bottou, L. Towards principled methods for training generative adversarial networks. _arXiv preprint arXiv:1701.04862_, 2017.
* Arjovsky et al. (2017) Arjovsky, M., Chintala, S., and Bottou, L. Wasserstein gan. _arXiv preprint arXiv:1701.07875_ , 2017.
* Bakır et al. (2004) Bakır, G. H., Weston, J., and Schölkopf, B. Learning to find pre-images. _Advances in neural information processing systems_ , 16:449–456, 2004.
* Balestriero & Baraniuk (2018a) Balestriero, R. and Baraniuk, R. Mad max: Affine spline insights into deep learning. _arXiv preprint arXiv:1805.06576_ , 2018a.
* Balestriero & Baraniuk (2018b) Balestriero, R. and Baraniuk, R. G. A spline theory of deep networks. In _Proc. Int. Conf. Mach. Learn._ , volume 80, pp. 374–383, Jul. 2018b.
* Balestriero et al. (2019) Balestriero, R., Cosentino, R., Aazhang, B., and Baraniuk, R. The geometry of deep networks: Power diagram subdivision. In _Advances in Neural Information Processing Systems 32_ , pp. 15806–15815. 2019.
* Ben-Yosef & Weinshall (2018) Ben-Yosef, M. and Weinshall, D. Gaussian mixture generative adversarial networks for diverse datasets, and the unsupervised clustering of images. _arXiv preprint arXiv:1808.10356_ , 2018.
* Bengio et al. (2013) Bengio, Y., Courville, A., and Vincent, P. Representation learning: A review and new perspectives. _IEEE Trans. Pattern Anal. Mach. Intell._ , 35(8):1798–1828, 2013.
* Berg et al. (2018) Berg, R. v. d., Hasenclever, L., Tomczak, J. M., and Welling, M. Sylvester normalizing flows for variational inference. _arXiv preprint arXiv:1803.05649_ , 2018.
* Biau et al. (2018) Biau, G., Cadre, B., Sangnier, M., and Tanielian, U. Some theoretical properties of gans. _arXiv preprint arXiv:1803.07819_ , 2018.
|
2024-09-04T02:54:54.899547 | 2020-02-27T06:51:19 | 2002.11942 | {
"authors": "Mirai Ikebuchi",
"full_text_license": null,
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"provenance": "arxiv-papers-0000.json.gz:25905",
"submitter": "Mirai Ikebuchi",
"url": "https://arxiv.org/abs/2002.11942"
} | arxiv-papers | # A Lower Bound of the Number of Rewrite Rules Obtained by Homological Methods
Mirai Ikebuchi Massachusetts Institute of Technology, Cambridge, MA, USA
<EMAIL_ADDRESS>
###### Abstract.
It is well-known that some equational theories such as groups or boolean
algebras can be defined by fewer equational axioms than the original axioms.
However, it is not easy to determine if a given set of axioms is the smallest
or not. Malbos and Mimram investigated a general method to find a lower bound
of the cardinality of the set of equational axioms (or rewrite rules) that is
equivalent to a given equational theory (or term rewriting system), using
homological algebra. Their method is an analog of Squier’s homology theory on
string rewriting systems. In this paper, we develop the homology theory for
term rewriting systems more and provide a better lower bound under a stronger
notion of equivalence than their equivalence. The author also implemented a
program to compute the lower bounds, and experimented with 64 complete TRSs.
###### Key words and phrases:
Term rewriting systems, Equational logic, Homological algebra
The author thanks Keisuke Nakano for comments that greatly improved the
manuscript and Aart Middeldorp for his suggestion about prime critical pairs.
## 1\. Introduction
The purpose of this paper is to find a lower bound of the number of axioms
that are equivalent to a given equational theory. For example, the theory of
groups is given by the following axioms:
$\begin{array}[]{lll}G_{1}.\
m(m(x_{1},x_{2}),x_{3})=m(x_{1},m(x_{2},x_{3})),&G_{2}.\
m(x_{1},e)=x_{1},&G_{3}.\ m(e,x_{1})=x_{1},\\\ G_{4}.\
m(i(x_{1}),x_{1})=e,&G_{5}.\ m(x_{1},i(x_{1}))=e.&\end{array}$ (1)
It is well-known that $G_{2}$ and $G_{5}$ can be derived from only
$\\{G_{1},G_{3},G_{4}\\}$. Moreover, the theory of groups can be given by two
axioms: the axiom
$m(x_{1},i(m(m(i(m(i(x_{2}),m(i(x_{1}),x_{3}))),x_{4}),i(m(x_{2},x_{4})))))=x_{3}$
together with $G_{4}$ is equivalent to the group axioms [11]. If we use the
new symbol $d$ for division instead of multiplication, a single axiom,
$d(x_{1},d(d(d(d(x_{1},x_{1}),x_{2}),x_{3}),d(d(d(x_{1},x_{1}),x_{1}),x_{3})))=x_{2},$
is equivalent to the group axioms [4]. However, no single axiom written in
symbols $m,i,e$ is equivalent to the group axioms. This is stated without
proof by Tarski [19] and published proofs are given by Neumann [11] and Kunen
[7]. Malbos and Mimram developed a general method to calculate a lower bound
of the number of axioms that are “Tietze-equivalent” to a given complete term
rewriting system (TRS) [9, Proposition 23]. We state the definition of Tietze
equivalence later (Definition 4), but roughly speaking, it is an equivalence
between equational theories (or TRSs) $(\Sigma_{1},R_{1})$,
$(\Sigma_{2},R_{2})$ where signatures $\Sigma_{1}$ and $\Sigma_{2}$ are not
necessarily equal to each other, while the usual equivalence between TRSs is
defined for two TRSs $(\Sigma,R_{1})$, $(\Sigma,R_{2})$ over the same
signature (specifically, by
${\xleftrightarrow{*}_{R_{1}}}={\xleftrightarrow{*}_{R_{2}}}$). For string
rewriting systems (SRSs), a work was given earlier by Squier [16], and Malbos
and Mimram’s work is an extension of Squier’s work. Squier provided a
rewriting view for “homology groups of monoids”, and proved the existence of
an SRS that does not have any equivalent SRSs that are finite and complete.
In this paper, we will develop Malbos and Mimram’s theory more, and show an
inequality which gives a better lower bound of the number of axioms with
respect to the usual equivalence between TRSs over the same signature. For the
theory of groups, our inequality gives that the number of axioms equivalent to
the group axioms is greater than or equal to 2, so we have another proof of
Tarski’s theorem above as a special case. Our lower bound is algorithmically
computable if a complete TRS is given.
We will first give the statement of our main theorem and some examples in
Section 2. Then, we will see Malbos-Mimram’s work briefly. The idea of their
work is to provide an algebraic structure to TRSs and extract information of
the TRSs, called homology groups, which are invariant under Tietze
equivalence. The basics of such algebraic tools are given in Section 3. We
will explain how resolutions of modules, a notion from abstract algebra, are related to rewriting systems (a connection not usually spelled out in textbooks), and we will see the idea of the construction of the homology groups of TRSs in Section 4. After that, in Section 5, we will prove our main theorem. In
Section 6, we show prime critical pairs are enough for our computation at the
matrix operation level and also at the abstract algebra level. In Section 7,
we study the number of rewrite rules from a different perspective: the minimum
of $\\#R-\\#\Sigma$ over all equivalent TRSs $(\Sigma,R)$ is called the
_deficiency_ in group theory and we show that deciding whether the deficiency
is less than a given integer or not is computationally impossible.
## 2\. Main Theorem
In this section, we will see our main theorem and some examples. Throughout
this paper, we assume that terms are over the set of variables
$\\{x_{1},x_{2},\dotsc\\}$ and all signatures we consider are unsorted. For a
signature $\Sigma$, let $T(\Sigma)$ denote the set of terms over the signature
$\Sigma$ and the set of variables $\\{x_{1},x_{2},\dotsc\\}$. Let $(\Sigma,R)$
be a TRS. The degree of $R$, denoted by $\deg(R)$, is defined by
$\deg(R)=\gcd\\{\\#_{i}l-\\#_{i}r\mid l\rightarrow r\in R,i=1,2,\dotsc\\}$
where $\\#_{i}t$ is the number of occurrences of $x_{i}$ in $t$ for $t\in
T(\Sigma)$ and we define $\gcd\\{0\\}=0$ for convenience. For example,
$\deg(\\{f(x_{1},x_{2},x_{2})\rightarrow x_{1},\
g(x_{1},x_{1},x_{1})\rightarrow e\\})=\gcd\\{0,2,3\\}=1$. Let
$(\Sigma,R=\\{l_{1}\rightarrow r_{1},\dotsc,l_{n}\rightarrow r_{n}\\})$ be a
TRS and $\operatorname{CP}(R)=\\{(t_{1},s_{1}),\dots,(t_{m},s_{m})\\}$ be the
set of the critical pairs of $R$. For any $i\in\\{1,\dots,m\\}$, let
$a_{i},b_{i}$ be the numbers in $\\{1,\dots,n\\}$ such that the critical pair
$(t_{i},s_{i})$ is obtained by $l_{a_{i}}\rightarrow r_{a_{i}}$ and
$l_{b_{i}}\rightarrow r_{b_{i}}$, that is, $t_{i}=r_{a_{i}}\sigma\leftarrow
l_{a_{i}}\sigma=C[l_{b_{i}}\sigma]\rightarrow C[r_{b_{i}}\sigma]=s_{i}$ for
some substitution $\sigma$ and single-hole context $C$. Suppose $R$ is
complete. We fix an arbitrary rewriting strategy and for a term $t$, let
$\operatorname{nr}_{j}(t)$ be the number of times $l_{j}\rightarrow r_{j}$ is
used to reduce $t$ into its $R$-normal form with respect to the strategy. To
state our main theorem, we introduce a matrix $D(R)$ and a number $e(R)$:
Suppose $d=\deg(R)$ is prime or $0$. If $d=0$, let $\mathfrak{R}$ be
$\mathbb{Z}$, and if $d$ is prime, let $\mathfrak{R}$ be
$\mathbb{Z}/d\mathbb{Z}$ (integers modulo $d$). For $1\leq i\leq m$, $1\leq
j\leq n$, let $D(R)_{ij}$ be the integer
$\operatorname{nr}_{j}(s_{i})-\operatorname{nr}_{j}(t_{i})+\delta(b_{i},j)-\delta(a_{i},j)$
where $\delta(x,y)$ is the Kronecker delta. The matrix $D(R)$ is then the $n\times m$ matrix whose $(j,i)$ entry is $D(R)_{ij}$; that is, its rows are indexed by the rules and its columns by the critical pairs. Recall that $\mathfrak{R}$ is $\mathbb{Z}$ or $\mathbb{Z}/p\mathbb{Z}$ for a prime $p$. If an $m\times n$ matrix $M$ over $\mathfrak{R}$ is of the form
$\left(\begin{matrix}e_{1}&0&\dots&\dots&\dots&\dots&\dots&0\\\
0&e_{2}&0&\dots&\dots&\dots&\dots&0\\\
\vdots&0&\ddots&0&\dots&\dots&\dots&\vdots\\\
\vdots&\vdots&0&e_{r}&0&\dots&\dots&\vdots\\\
\vdots&\vdots&\vdots&0&0&\dots&\dots&\vdots\\\
\vdots&\vdots&\vdots&\vdots&\vdots&\ddots&\dots&\vdots\\\
0&0&\dots&\dots&\dots&\dots&\dots&0\end{matrix}\right)$
and $e_{i}$ divides $e_{i+1}$ for every $1\leq i<r$, we say $M$ is in _Smith
normal form_. We call $e_{i}$s the _elementary divisors_. It is known that
every matrix over $\mathfrak{R}$ can be transformed into Smith normal form by
elementary row/column operations, that is, (1) switching a row/column with
another row/column, (2) multiplying each entry in a row/column by an
invertible element in $\mathfrak{R}$, and (3) adding a multiple of a row/column to another row/column [14, 9.4]. In general, the same fact holds for any principal ideal domain $\mathfrak{R}$. We define $e(R)$ as the number of invertible elements in the Smith normal form of the matrix $D(R)$ over $\mathfrak{R}$. (If $d=0$, the invertible elements in $\mathfrak{R}\cong\mathbb{Z}$ are $1$ and $-1$, and if $d$ is prime, any nonzero element in $\mathfrak{R}=\mathbb{Z}/d\mathbb{Z}$ is invertible; so $e(R)$ is equal to the rank of $D(R)$ if $d$ is prime.)
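Since $D(R)$ is an integer matrix, $e(R)$ is mechanically computable. The following is a minimal Python sketch of this step (it is not the homtrs implementation mentioned below; the function name `e_of_R` and the use of SymPy's `smith_normal_form`, available in recent SymPy versions, are choices made here only for illustration). For $d=0$ it counts the elementary divisors that are units in $\mathbb{Z}$; for prime $d$ it computes the rank of $D(R)$ over $\mathbb{Z}/d\mathbb{Z}$ by Gaussian elimination.

```python
from sympy import Matrix
from sympy.matrices.normalforms import smith_normal_form

def e_of_R(D, d):
    """e(R) for the matrix D = D(R), given as a list of rows, and d = deg(R)."""
    if d == 0:
        # Count the elementary divisors that are units (+1 or -1) in Z.
        snf = smith_normal_form(Matrix(D))
        return sum(1 for i in range(min(snf.shape)) if abs(snf[i, i]) == 1)
    # For prime d, every nonzero element of Z/dZ is invertible,
    # so e(R) is just the rank of D(R) over Z/dZ.
    A = [[x % d for x in row] for row in D]
    rank = 0
    for c in range(len(A[0])):
        piv = next((r for r in range(rank, len(A)) if A[r][c] != 0), None)
        if piv is None:
            continue
        A[rank], A[piv] = A[piv], A[rank]
        inv = pow(A[rank][c], -1, d)            # modular inverse (d prime)
        A[rank] = [x * inv % d for x in A[rank]]
        for r in range(len(A)):
            if r != rank and A[r][c] != 0:
                f = A[r][c]
                A[r] = [(x - f * y) % d for x, y in zip(A[r], A[rank])]
        rank += 1
    return rank
```

We now state the main theorem.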
###### Theorem 1.
Let $(\Sigma,R)$ be a complete TRS and suppose $d=\deg(R)$ is 0 or prime. For
any set of rules $R^{\prime}$ equivalent to $R$, i.e.,
$\xleftrightarrow{*}_{R^{\prime}}\,=\,\xleftrightarrow{*}_{R}$, we have
$\\#R^{\prime}\geq\\#R-e(R).$ (2)
We shall see some examples. Consider the signature $\Sigma=\\{{\sf
0}^{(0)},{\sf s}^{(1)},{\sf ave}^{(2)}\\}$ and the set $R$ of rules
$\begin{array}[]{lll}A_{1}.{\sf ave}({\sf 0},{\sf 0})\rightarrow{\sf
0},&A_{2}.{\sf ave}(x_{1},{\sf s}(x_{2}))\rightarrow{\sf ave}({\sf
s}(x_{1}),x_{2}),&A_{3}.{\sf ave}({\sf s}({\sf 0}),{\sf 0})\rightarrow{\sf
0},\\\ A_{4}.{\sf ave}({\sf s}({\sf s}({\sf 0})),{\sf 0})\rightarrow{\sf s}({\sf
0}),&A_{5}.{\sf ave}({\sf s}({\sf s}({\sf s}(x_{1}))),x_{2})\rightarrow{\sf
s}({\sf ave}({\sf s}(x_{1}),x_{2})).&\end{array}$
$R$ satisfies $\deg(R)=0$ and has one critical pair $C$, coming from the overlap of $A_{2}$ and $A_{5}$ on ${\sf ave}({\sf s}({\sf s}({\sf s}(x_{1}))),{\sf s}(x_{2}))$: one side rewrites by $A_{2}$ and then $A_{5}$, the other by $A_{5}$ and then $A_{2}$, and both sides reach the normal form ${\sf s}({\sf ave}({\sf s}({\sf s}(x_{1})),x_{2}))$. Since every rule is used the same number of times on both sides of $C$, the matrix $D(R)$ is the $5\times 1$ zero matrix. The zero matrix
is already in Smith normal form and $e(R)=0$. Thus, for any $R^{\prime}$
equivalent to $R$, $\\#R^{\prime}\geq\\#R=5$. This means there is no smaller
TRS equivalent to $R$. Also, Malbos-Mimram’s lower bound, denoted by
$s(H_{2}(\Sigma,R))$, is equal to 3, though we do not explain how to compute
it in this paper. (We will roughly describe $s(H_{2}(\Sigma,R))$ in Section
4.) As a generalization of this example, we have an interesting corollary of
our main theorem:
###### Corollary 2.
Let $(\Sigma,R)$ be a complete TRS. If for any critical pair $u\leftarrow
t\rightarrow v$, two rewriting paths $t\rightarrow
u\rightarrow\dots\rightarrow\hat{t}$ and $t\rightarrow
v\rightarrow\dots\rightarrow\hat{t}$ contain the same number of applications of $l\rightarrow r$ for each $l\rightarrow r\in R$, then there is no $R^{\prime}$ equivalent to
$R$ which satisfies $\\#R^{\prime}<\\#R$.
We compute the lower bound for the theory of groups, (1). A complete TRS $R$
for the theory of groups is given by
$\begin{array}[]{ll}G_{1}.\ m(m(x_{1},x_{2}),x_{3})\rightarrow
m(x_{1},m(x_{2},x_{3}))&G_{2}.\ m(e,x_{1})\rightarrow x_{1}\\\ G_{3}.\
m(x_{1},e)\rightarrow x_{1}&G_{4}.\ m(x_{1},i(x_{1}))\rightarrow e\\\ G_{5}.\
m(i(x_{1}),x_{1})\rightarrow e&G_{6}.\ m(i(x_{1}),m(x_{1},x_{2}))\rightarrow
x_{2}\\\ G_{7}.\ i(e)\rightarrow e&G_{8}.\ i(i(x_{1}))\rightarrow x_{1}\\\
G_{9}.\ m(x_{1},m(i(x_{1}),x_{2}))\rightarrow x_{2}&G_{10}.\
i(m(x_{1},x_{2}))\rightarrow m(i(x_{2}),i(x_{1})).\end{array}$
Since $\deg(R)=2$, we set $\mathfrak{R}=\mathbb{Z}/2\mathbb{Z}$. $R$ has 48
critical pairs and we get the $10\times 48$ matrix $D(R)$ given in Appendix A.
The author implemented a program which takes a complete TRS as input and
computes its critical pairs, the matrix $D(R)$, and $e(R)$. The program is
available at https://github.com/mir-ikbch/homtrs. The author checked
$e(R)=\operatorname{rank}(D(R))=8$ by the program, and also by MATLAB’s gfrank
function (https://www.mathworks.com/help/comm/ref/gfrank.html). Therefore we
have $\\#R-e(R)=2$. This provides a new proof that there is no single axiom
equivalent to the theory of groups.
Malbos-Mimram’s lower bound is given by $s(H_{2}(\Sigma,R))=0$ for this TRS. As another example, let $\Sigma=\\{-^{(1)},f^{(1)},+^{(2)},\cdot^{(2)}\\}$ and $R$ be
$\begin{array}[]{ll}A_{1}.\ -(-x_{1})\rightarrow x_{1},&A_{2}.\
-f(x_{1})\rightarrow f(-x_{1}),\\\ A_{3}.\
-(x_{1}+x_{2})\rightarrow(-x_{1})\cdot(-x_{2}),&A_{4}.\ -(x_{1}\cdot
x_{2})\rightarrow(-x_{1})+(-x_{2}).\end{array}$
Figure 1. The critical pairs of $R$
We have $\deg(R)=0$ and $R$ has four critical pairs (Figure 1). The
corresponding matrix $D(R)$ and its Smith normal form are computed as
$D(R)=\left(\begin{matrix}0&0&1&1\\\ 2&0&0&0\\\ 0&0&1&1\\\ 0&0&1&1\\\
\end{matrix}\right)\rightsquigarrow\left(\begin{matrix}0&0&1&1\\\ 2&0&0&0\\\
0&0&0&0\\\ 0&0&0&0\\\
\end{matrix}\right)\rightsquigarrow\left(\begin{matrix}0&0&1&0\\\ 2&0&0&0\\\
0&0&0&0\\\ 0&0&0&0\\\
\end{matrix}\right)\rightsquigarrow\left(\begin{matrix}1&0&0&0\\\ 0&2&0&0\\\
0&0&0&0\\\ 0&0&0&0\\\ \end{matrix}\right).$
Thus, $\\#R-e(R)=3$. This shows that $R$ does not have any equivalent TRS with 2 or fewer rules, and it is not difficult to see that $R$ has an equivalent TRS with 3 rules, $\\{A_{1},A_{2},A_{3}\\}$.
Malbos-Mimram’s lower bound for this TRS is given by $s(H_{2}(\Sigma,R))=1$.
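Assuming the `e_of_R` function from the sketch before Theorem 1, this computation can be checked mechanically:

```python
D = [[0, 0, 1, 1],
     [2, 0, 0, 0],
     [0, 0, 1, 1],
     [0, 0, 1, 1]]
print(e_of_R(D, 0))  # 1: the Smith normal form is diag(1, 2, 0, 0),
                     # so the lower bound is #R - e(R) = 4 - 1 = 3
```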
Although the equality in (2) is attained for the above three examples, it is not guaranteed in general that the equality is attained by some TRS $R^{\prime}$. For example, the TRS consisting of only the associativity rule $\\{f(f(x_{1},x_{2}),x_{3})\rightarrow f(x_{1},f(x_{2},x_{3}))\\}$ satisfies $\\#R-e(R)=0$, and it is obvious that no TRS with zero rules is equivalent to it.
Also, in Appendix B, Malbos-Mimram’s and our lower bounds for various examples
are given.
## 3\. Preliminaries on Algebra
In this section, we give a brief introduction to module theory, homological
algebra, and Squier’s theory of homological algebra for string rewriting
systems (SRSs) [16]. Even though Squier’s theory is not directly needed to
prove our theorem, it is helpful to understand the homology theory for TRSs,
which is more complicated than in the SRS case.
### 3.1. Modules and Homological Algebra
We give basic definitions and theorems on module theory and homological
algebra without proofs. For more details, readers are referred to [14, 13] for
example.
Modules are the generalization of vector spaces in which the set of scalars
form a ring, not necessarily a field. Let $\mathfrak{R}$ be a ring and $(M,+)$
be an abelian group. For a map $\cdot:\mathfrak{R}\times M\rightarrow M$,
$(M,+,\cdot)$ is a _left $\mathfrak{R}$-module_ if for all
$r,s\in\mathfrak{R}$ and $x,y\in M$, we have
$r\cdot(x+y)=r\cdot x+r\cdot y,\ (r+s)\cdot x=r\cdot x+s\cdot x,\ (rs)\cdot
x=r\cdot(s\cdot x)$
where $rs$ denotes the multiplication of $r$ and $s$ in $\mathfrak{R}$. We
call the map $\cdot$ _scalar multiplication_.
For a map $\cdot:M\times\mathfrak{R}\rightarrow M$, $(M,+,\cdot)$ is a _right
$\mathfrak{R}$-module_ if for any $r,s\in\mathfrak{R}$ and $x,y\in M$,
$(x+y)\cdot r=x\cdot r+y\cdot r,\ x\cdot(r+s)=x\cdot r+x\cdot s,\
x\cdot(sr)=(x\cdot s)\cdot r.$
If ring $\mathfrak{R}$ is commutative, we do not distinguish between left
$\mathfrak{R}$-modules and right $\mathfrak{R}$-modules and simply call them
$\mathfrak{R}$-modules.
Linear maps and isomorphisms of modules are also defined in the same way as
for vector spaces. For two left $\mathfrak{R}$-modules
$(M_{1},+_{1},\cdot_{1}),(M_{2},+_{2},\cdot_{2})$, a group homomorphism
$f:(M_{1},+_{1})\rightarrow(M_{2},+_{2})$ is an _$\mathfrak{R}$ -linear map_
if it satisfies $f(r\cdot_{1}x)=r\cdot_{2}f(x)$ for any $r\in\mathfrak{R}$ and
$x\in M_{1}$. An $\mathfrak{R}$-linear map $f$ is an _isomorphism_ if it is
bijective, and two modules are called _isomorphic_ if there exists an
isomorphism between them.
Any abelian group $(M,+)$ is a $\mathbb{Z}$-module under the scalar
multiplication $n\cdot x=\underbrace{x+\dotsb+x}_{n}$. For any ring
$\mathfrak{R}$, the direct product
$\mathfrak{R}^{n}=\underbrace{\mathfrak{R}\times\dotsb\times\mathfrak{R}}_{n}$
forms a left $\mathfrak{R}$-module under the scalar multiplication
$r\cdot(r_{1},\dotsc,r_{n})=(rr_{1},\dotsc,rr_{n})$. Let $\mathfrak{R}$ be a
ring and $X$ be a set. $\mathfrak{R}\underline{X}$ denotes the set of formal
linear combinations
$\sum_{x\in X}r_{x}\underline{x}\quad(r_{x}\in\mathfrak{R})$
where $r_{x}=0$ except for finitely many $x$s. The underline is added to
emphasize a distinction between $r\in\mathfrak{R}$ and $x\in X$.
$\mathfrak{R}\underline{X}$ forms a left $\mathfrak{R}$-module under the
addition and the scalar multiplication defined by
$\left(\sum_{x\in X}r_{x}\underline{x}\right)+\left(\sum_{x\in
X}s_{x}\underline{x}\right)=\sum_{x\in X}(r_{x}+s_{x})\underline{x},\quad
s\cdot\left(\sum_{x\in X}r_{x}\underline{x}\right)=\sum_{x\in
X}(sr_{x})\underline{x}.$
If $X$ is the empty set, $\mathfrak{R}\underline{X}$ is the left
$\mathfrak{R}$-module $\\{0\\}$ consisting of only the identity element. We
simply write $0$ for $\\{0\\}$. $\mathfrak{R}\underline{X}$ is called the
_free left $\mathfrak{R}$-module generated by $X$_. If $\\#X=n\in\mathbb{N}$,
$\mathfrak{R}\underline{X}$ can be identified with $\mathfrak{R}^{n}$.
A left $\mathfrak{R}$-module $M$ is said to be _free_ if $M$ is isomorphic to
$\mathfrak{R}\underline{X}$ for some $X$. Free modules have some similar
properties to vector spaces. If a left $\mathfrak{R}$-module $F$ is free, then
there exists a basis (i.e., a subset that is linearly independent and
generating) of $F$. If a free left $\mathfrak{R}$-module $F$ has a basis
$(v_{1},\dotsc,v_{n})$, any $\mathfrak{R}$-linear map $f:F\rightarrow M$ is
uniquely determined if the values $f(v_{1}),\dotsc,f(v_{n})$ are specified.
Suppose $F_{1}$, $F_{2}$ are free left $\mathfrak{R}$-modules and
$f:F_{1}\rightarrow F_{2}$ is an $\mathfrak{R}$-linear map. If $F_{1}$ has a
basis $(v_{1},\dotsc,v_{n})$ and $F_{2}$ has a basis $(w_{1},\dotsc,w_{m})$,
the matrix $(a_{ij})_{i=1,\dotsc,n,j=1,\dotsc,m}$ where $a_{ij}$s satisfy
$f(v_{i})=a_{i1}w_{1}+\dotsb+a_{im}w_{m}$ for any $i=1,\dotsc,n$ is called a
_matrix representation_ of $f$. We define submodules and quotient modules as
in linear algebra. Let $(M,+,\cdot)$ be a left (resp. right)
$\mathfrak{R}$-module. A subgroup $N$ of $(M,+)$ is a _submodule_ if for any
$x\in N$ and $r\in\mathfrak{R}$, the scalar multiplication $r\cdot x$ (resp.
$x\cdot r$) is in $N$.
For any submodule $N$, the quotient group $M/N$ is also an
$\mathfrak{R}$-module. $M/N$ is called the _quotient module_ of $M$ by $N$.
For submodules and quotient modules, the following basic theorem is known:
###### Theorem 3 (First isomorphism theorem).
[14, Theorem 8.8] Let $(M,+,\cdot),(M^{\prime},+^{\prime},\cdot^{\prime})$ be
left (or right) $\mathfrak{R}$-modules, and $f:M\rightarrow M^{\prime}$ be an
$\mathfrak{R}$-linear map.
1. (1)
The inverse image of $0$ by $f$, $\ker f=\\{x\in M\mid f(x)=0\\}$, is a
submodule of $M$.
2. (2)
The image of $M$ by $f$, $\operatorname{im}f=\\{f(x)\mid x\in M\\}$, is a
submodule of $M^{\prime}$.
3. (3)
The image $\operatorname{im}f$ is isomorphic to $M/\ker f$.
###### Theorem 4 (Third isomorphism theorem).
[14, Theorem 7.10] Let $M$ be a left (or right) $\mathfrak{R}$-module, $N$ be
a submodule of $M$, and $L$ be a submodule of $N$. Then $(M/L)/(N/L)$ is
isomorphic to $M/N$.
###### Theorem 5.
[14, Theorem 9.8] Let $\mathfrak{R}$ be $\mathbb{Z}$ or
$\mathbb{Z}/p\mathbb{Z}$ for some prime $p$. Every submodule of a free
$\mathfrak{R}$-module is free. Moreover, if an $\mathfrak{R}$-module $M$ is
isomorphic to $\mathfrak{R}^{n}$, then every submodule $N$ of $M$ is
isomorphic to $\mathfrak{R}^{m}$ for some $m\leq n$. (In general, this holds
for any principal ideal domain $\mathfrak{R}$.)
Let $M$ be a left $\mathfrak{R}$-module. For $S\subset M$, the set
$\mathfrak{R}S$ of all elements in $M$ of the form $\sum_{i=1}^{k}r_{i}s_{i}$
$(k\in\mathbb{Z}_{\geq 0},r_{i}\in\mathfrak{R},s_{i}\in S)$ is a submodule of
$M$. If $\mathfrak{R}S=M$, $S$ is called a _generating set_ of $M$ and the
elements of $S$ are called _generators_ of $M$. Let $S=\\{s_{i}\\}_{i\in I}$
be a generating set of $M$ for some indexing set $I$. For a set
$X=\\{x_{i}\\}_{i\in I}$, the linear map
$\epsilon:\mathfrak{R}\underline{X}\ni x_{i}\mapsto s_{i}\in M$ is a
surjection from the free module $\mathfrak{R}\underline{X}$. The elements of
$\ker\epsilon$, that is, elements $\sum_{x_{i}\in X}r_{i}\underline{x_{i}}$
satisfying $\epsilon(\sum_{x_{i}\in X}r_{i}\underline{x_{i}})=\sum_{x_{i}\in
X}r_{i}s_{i}=0$, are called _relations_ of $M$.
Now, we introduce one of the most important notions to develop the homology
theory of rewriting systems, free resolutions. We first start from the
following example. Let $M$ be the $\mathbb{Z}$-module defined by
$\mathbb{Z}\underline{\\{a,b,c,d,e\\}}/\mathbb{Z}\\{\underline{a}+\underline{b}+\underline{c}-\underline{d}-\underline{e},\
2\underline{b}-\underline{c},\
\underline{a}+2\underline{c}-\underline{b}-\underline{d}-\underline{e}\\}.$
We consider the $\mathbb{Z}$-linear map between free $\mathbb{Z}$-modules
$f_{0}:\mathbb{Z}^{3}\rightarrow\mathbb{Z}\underline{\\{a,b,c,d,e\\}}$ defined
by
$f_{0}(1,0,0)=\underline{a}+\underline{b}+\underline{c}-\underline{d}-\underline{e},\
f_{0}(0,1,0)=2\underline{b}-\underline{c},\
f_{0}(0,0,1)=\underline{a}+2\underline{c}-\underline{b}-\underline{d}-\underline{e}.$
We can see that the image of $f_{0}$ is the set of relations of $M$. In other
words, $\operatorname{im}f_{0}=\ker\epsilon$ for the linear map
$\epsilon:\mathbb{Z}\underline{\\{a,b,c,d,e\\}}\rightarrow M$ which maps each
element to its equivalence class. Then, we consider the “relations between
relations”, that is, triples $(n_{1},n_{2},n_{3})$ which satisfy
$f_{0}(n_{1},n_{2},n_{3})=n_{1}(\underline{a}+\underline{b}+\underline{c}-\underline{d}-\underline{e})+n_{2}(2\underline{b}-\underline{c})+n_{3}(\underline{a}+2\underline{c}-\underline{b}-\underline{d}-\underline{e})=0$,
or equivalently, elements of $\ker f_{0}$. We can check $\ker
f_{0}=\\{m(-1,1,1)\mid m\in\mathbb{Z}\\}$. This fact can be explained in terms
of rewriting systems. If we write relations in the form of rewrite rules
$A_{1}.\
\underline{a}+\underline{b}+\underline{c}\rightarrow\underline{d}+\underline{e},\
A_{2}.\ 2\underline{b}\rightarrow\underline{c},\ A_{3}.\
\underline{a}+2\underline{c}\rightarrow\underline{b}+\underline{d}+\underline{e},$
we see $\\{A_{1},A_{2},A_{3}\\}$ is a complete rewriting system (over the
signature
$\\{\underline{a},\underline{b},\underline{c},\underline{d},\underline{e},+\\}$)
with two joinable critical pairs: for instance, $\underline{a}+2\underline{b}+\underline{c}$ rewrites to $\underline{b}+\underline{d}+\underline{e}$ both by $A_{1}$ alone and by $A_{2}$ followed by $A_{3}$, and similarly for the overlap $\underline{a}+\underline{b}+2\underline{c}$.
We associate these critical pairs with an equality between formal sums
$A_{2}+A_{3}=A_{1}$, and it corresponds to
$f_{0}(-1,1,1)=\underbrace{-(\underline{a}+\underline{b}+\underline{c}-\underline{d}-\underline{e})}_{-A_{1}}+\underbrace{(2\underline{b}-\underline{c})}_{A_{2}}+\underbrace{(\underline{a}+2\underline{c}-\underline{b}-\underline{d}-\underline{e})}_{A_{3}}=0.$
In fact, this correspondence between critical pairs and “relations between
relations” is a key to the homology theory of TRSs.
We define a linear map $f_{1}:\mathbb{Z}\rightarrow\mathbb{Z}^{3}$ by
$f_{1}(1)=(-1,1,1)$ and then $f_{1}$ satisfies $\operatorname{im}f_{1}=\ker
f_{0}$. We can go further, that is, we can consider $\ker f_{1}$, but it
clearly turns out that $\ker f_{1}=0$.
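The computation of $\ker f_{0}$ is easy to check mechanically; for instance, the following SymPy sketch (where the matrix is just $f_{0}$ written in the basis $(\underline{a},\underline{b},\underline{c},\underline{d},\underline{e})$, one column per generator of $\mathbb{Z}^{3}$) returns a one-dimensional nullspace spanned by a multiple of $(-1,1,1)$:

```python
from sympy import Matrix

# Columns: f0(1,0,0), f0(0,1,0), f0(0,0,1) in the basis (a, b, c, d, e).
F0 = Matrix([[ 1,  0,  1],
             [ 1,  2, -1],
             [ 1, -1,  2],
             [-1,  0, -1],
             [-1,  0, -1]])
print(F0.nullspace())  # one basis vector, proportional to (-1, 1, 1)
```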
We encode the above information in the following diagram:
$\mathbb{Z}\xrightarrow{f_{1}}\mathbb{Z}^{3}\xrightarrow{f_{0}}\mathbb{Z}\underline{\\{a,b,c,d,e\\}}\xrightarrow{\epsilon}M$
(3)
where $\operatorname{im}f_{1}=\ker f_{0},\operatorname{im}f_{0}=\ker\epsilon$
and $\epsilon$ is surjective. Sequences of modules and linear maps with these
conditions are called free resolutions: A sequence of left
$\mathfrak{R}$-modules and $\mathfrak{R}$-linear maps
$\dotsb\xrightarrow{f_{i+1}}M_{i+1}\xrightarrow{f_{i}}M_{i}\xrightarrow{f_{i-1}}\dotsb$
is called an _exact sequence_ if $\operatorname{im}f_{i}=\ker f_{i-1}$ holds
for any $i$.
Let $M$ be a left $\mathfrak{R}$-module. For an infinite sequence of free modules
$F_{i}$ and linear maps $f_{i}:F_{i+1}\rightarrow F_{i}$,
$\epsilon:F_{0}\rightarrow M$, if the sequence
$\dotsb\xrightarrow{f_{1}}F_{1}\xrightarrow{f_{0}}F_{0}\xrightarrow{\epsilon}M$
is exact and $\epsilon$ is surjective, the sequence above is called a _free
resolution_ of $M$. If the sequence is finite, it is called a _partial free
resolution_.
(Exact sequences and free resolutions are defined for right
$\mathfrak{R}$-modules in the same way.) Notice that the exact sequence (3)
can be extended to the infinite exact sequence
$\dotsb\rightarrow 0\rightarrow\dotsb\rightarrow
0\rightarrow\mathbb{Z}\xrightarrow{f_{1}}\mathbb{Z}^{3}\xrightarrow{f_{0}}\mathbb{Z}\underline{\\{a,b,c,d,e\\}}\xrightarrow{\epsilon}M$
since $\ker f_{1}=0$. Thus, the sequence (3) is a free resolution of $M$.
As there are generally several rewriting systems equivalent to a given
equational theory, free resolutions of $M$ are not unique. However, from a (partial) free resolution we can extract information about $M$ that does not depend on the choice of the resolution. This information is called the homology groups. To define the homology groups, we introduce the tensor
product of modules. Let $N$ be a right $\mathfrak{R}$-module and $M$ be a left
$\mathfrak{R}$-module. Let $F(N\times M)$ be the free abelian group generated
by $N\times M$. The _tensor product_ of $N$ and $M$, denoted by
$N\otimes_{\mathfrak{R}}M$, is the quotient group of $F(N\times M)$ by the
subgroup generated by the elements of the form
$(x,y)+(x,y^{\prime})-(x,y+y^{\prime}),\
(x,y)+(x^{\prime},y)-(x+x^{\prime},y),\ (x\cdot r,y)-(x,r\cdot y)$
where $x,x^{\prime}\in N$, $y,y^{\prime}\in M$, $r\in\mathfrak{R}$. The equivalence
class of $(x,y)$ in $N\otimes_{\mathfrak{R}}M$ is written as $x\otimes y$.
For a right $\mathfrak{R}$-module $N$ and an $\mathfrak{R}$-linear map
$f:M\rightarrow M^{\prime}$ between left $\mathfrak{R}$-modules
$M,M^{\prime}$, we write $N\otimes f:N\otimes_{\mathfrak{R}}M\rightarrow
N\otimes_{\mathfrak{R}}M^{\prime}$ for the map $(N\otimes f)(a\otimes
x)=a\otimes f(x)$. $N\otimes f$ is known to be well-defined and be a group
homomorphism. Let
$\dotsb\xrightarrow{f_{1}}F_{1}\xrightarrow{f_{0}}F_{0}\xrightarrow{\epsilon}M$
be a free resolution of a left $\mathfrak{R}$-module $M$. For a right
$\mathfrak{R}$-module $N$, we consider the sequence
$\dotsb\xrightarrow{N\otimes
f_{1}}N\otimes_{\mathfrak{R}}F_{1}\xrightarrow{N\otimes
f_{0}}N\otimes_{\mathfrak{R}}F_{0}.$ (4)
Then, it can be shown that $\operatorname{im}(N\otimes
f_{i})\subset\ker(N\otimes f_{i-1})$ for any $i=1,2,\dotsc$. In general, a
sequence
$\dotsb\xrightarrow{f_{i+1}}M_{i+1}\xrightarrow{f_{i}}M_{i}\xrightarrow{f_{i-1}}\dotsb$
of left/right $\mathfrak{R}$-modules satisfying
$\operatorname{im}f_{i}\subset\ker f_{i-1}$ for any $i$ is called a chain
complex. The homology groups of a chain complex are defined to be the quotient
group of $\ker f_{i-1}$ by $\operatorname{im}f_{i}$: Let
$(C_{\bullet},f_{\bullet})$ denote the pair
$(\\{C_{i}\\}_{i=0,1,\dots},\\{f_{i}:C_{i+1}\rightarrow
C_{i}\\}_{i=0,1,\dots})$. For a chain complex
$\dotsb\xrightarrow{f_{i+1}}C_{i+1}\xrightarrow{f_{i}}C_{i}\xrightarrow{f_{i-1}}\dotsb$,
the abelian group $H_{j}(C_{\bullet},f_{\bullet})$ defined by
$H_{j}(C_{\bullet},f_{\bullet})=\ker f_{j-1}/\operatorname{im}f_{j}$
is called the _$j$-th homology group_ of the chain complex
$(C_{\bullet},f_{\bullet})$. The homology groups of the chain complex (4)
depend only on $M$, $N$, and $\mathfrak{R}$:
###### Theorem 6.
[13, Corollary 6.21] Let $M$ be a left $\mathfrak{R}$-module and $N$ be a
right $\mathfrak{R}$-module. For any two resolutions
$\dotsb\xrightarrow{f_{1}}F_{1}\xrightarrow{f_{0}}F_{0}\xrightarrow{\epsilon}M$,
$\dotsb\xrightarrow{f^{\prime}_{1}}F^{\prime}_{1}\xrightarrow{f^{\prime}_{0}}F^{\prime}_{0}\xrightarrow{\epsilon}M$,
we have a group isomorphism
$H_{j}(N\otimes_{\mathfrak{R}}F_{\bullet},N\otimes f_{\bullet})\cong
H_{j}(N\otimes_{\mathfrak{R}}F_{\bullet}^{\prime},N\otimes
f_{\bullet}^{\prime}).$
We end this subsection by giving some basic facts on exact sequences.
###### Proposition 7.
[14, Proposition 7.20 and 7.21]
1. (1)
$M_{1}\xrightarrow{f}M_{2}\rightarrow 0$ is exact if and only if $\operatorname{im}f=M_{2}$.
2. (2)
$0\rightarrow M_{1}\xrightarrow{f}M_{2}$ is exact if and only if $\ker f=0$.
3. (3)
If $M_{1}$ is a submodule of $M_{2}$, the sequence $0\rightarrow M_{1}\xrightarrow{\iota}M_{2}\xrightarrow{\pi}M_{2}/M_{1}\rightarrow 0$ is exact where $\iota$ is the inclusion map $\iota(x)=x$ and $\pi$ is the projection $\pi(x)=[x]$.
###### Proposition 8.
Suppose we have an exact sequence of $\mathfrak{R}$-modules $0\rightarrow
M_{1}\rightarrow M_{2}\rightarrow M_{3}\rightarrow 0$. If $M_{3}$ is free,
then $M_{2}$ is isomorphic to $M_{1}\times M_{3}$.
The proof is given by using [14, Proposition 7.22].
### 3.2. String Rewriting Systems and Homology Groups of Monoids
For an alphabet $\Sigma$, $\Sigma^{*}$ denotes the set of all strings of
symbols over $\Sigma$. The set $\Sigma^{*}$ forms a monoid under the operation
of concatenation with the empty string serving as the identity, and we call
$\Sigma^{*}$ the free monoid generated by $\Sigma$. For a string rewriting
system (SRS) $(\Sigma,R)$, we write $\mathcal{M}_{(\Sigma,R)}$ for the set
defined by $\mathcal{M}_{(\Sigma,R)}=\Sigma^{*}/{\xleftrightarrow{*}_{R}}$. We can see
$\mathcal{M}_{(\Sigma,R)}$ is a monoid under the operations $[u]\cdot[v]=[uv]$
where $[w]$ denotes the equivalence class of $w\in\Sigma^{*}$ with respect to
$\xleftrightarrow{*}_{R}$.
We say that two SRSs $(\Sigma_{1},R_{1}),(\Sigma_{2},R_{2})$ are _Tietze
equivalent_ if the monoids $\mathcal{M}_{(\Sigma_{1},R_{1})}$,
$\mathcal{M}_{(\Sigma_{2},R_{2})}$ are isomorphic. It is not difficult to show
that for any two SRSs $(\Sigma,R_{1}),(\Sigma,R_{2})$ with the same signature,
if $R_{1}$ and $R_{2}$ are equivalent (i.e.,
$\xleftrightarrow{*}_{R_{1}}\,=\,\xleftrightarrow{*}_{R_{2}}$), then
$(\Sigma,R_{1})$ and $(\Sigma,R_{2})$ are Tietze equivalent. Roughly speaking, the notion that two SRSs are Tietze equivalent means that the SRSs are equivalent but
their alphabets can be different. For example, let $\Sigma_{1}$ be
$\\{a,b,c\\}$ and $R_{1}$ be $\\{abb\rightarrow ab,\ ba\rightarrow c\\}$.
Then, $(\Sigma_{1},R_{1})$ is Tietze equivalent to $(\Sigma_{2},R_{2})$ where
$\Sigma_{2}=\\{a,b\\}$ and $R_{2}=\\{abb\rightarrow ab\\}$. Intuitively, since
$c$ is equivalent to $ba$ with respect to the congruence
$\xleftrightarrow{*}_{R_{1}}$, $c$ is redundant as long as we consider strings
modulo $\xleftrightarrow{*}_{R_{1}}$ and $(\Sigma_{2},R_{2})$ is the SRS made
by removing $c$ from $(\Sigma_{1},R_{1})$.
If a monoid $S$ is isomorphic to $\mathcal{M}_{(\Sigma,R)}$ for an SRS
$(\Sigma,R)$, we call $(\Sigma,R)$ a _presentation_ of the monoid $S$.
Let $S$ be a monoid and consider the free $\mathbb{Z}$-module
$\mathbb{Z}\underline{S}$. The module $\mathbb{Z}\underline{S}$ can be
equipped with a ring structure under the multiplication $\left(\sum_{w\in
S}n_{w}\underline{w}\right)\left(\sum_{w\in
S}m_{w}\underline{w}\right)=\sum_{w,v\in S}n_{w}m_{v}\underline{wv}$ where
$n_{w}m_{v}$ is the usual multiplication of integers and $wv$ is the
multiplication of the monoid $S$. $\mathbb{Z}\underline{S}$ as a ring is
called the _integral monoid ring_ of $S$. When we think of
$\mathbb{Z}\underline{S}$ as a ring, we write $\mathbb{Z}\langle S\rangle$
instead of $\mathbb{Z}\underline{S}$.
We consider $\mathbb{Z}\langle S\rangle$-modules. The group of integers
$\mathbb{Z}$ forms a left (resp. right) $\mathbb{Z}\langle S\rangle$-module
under the scalar multiplication $(\sum_{w\in S}n_{w}\underline{w})\cdot
m=\sum_{w\in S}n_{w}m$ (resp. $m\cdot(\sum_{w\in
S}n_{w}\underline{w})=\sum_{w\in S}n_{w}m$). Let
$\dotsb\xrightarrow{\partial_{1}}F_{1}\xrightarrow{\partial_{0}}F_{0}\xrightarrow{\epsilon}\mathbb{Z}$
be a free resolution of $\mathbb{Z}$ over the ring $\mathbb{Z}\langle
S\rangle$. The abelian group $H_{i}(S)$ is defined as the $i$-th homology
group of the chain complex $(\mathbb{Z}\otimes_{\mathbb{Z}\langle
S\rangle}F_{\bullet},\mathbb{Z}\otimes\partial_{\bullet})$, i.e.,
$H_{i}(S)=H_{i}(\mathbb{Z}\otimes_{\mathbb{Z}\langle
S\rangle}F_{\bullet},\mathbb{Z}\otimes\partial_{\bullet})=\ker\mathbb{Z}\otimes\partial_{i-1}/\operatorname{im}\mathbb{Z}\otimes\partial_{i}.$
If $S$ is isomorphic to $\mathcal{M}_{(\Sigma,R)}$ for some SRS $(\Sigma,R)$,
it is known that there is a free resolution in the form of
$\dotsb\rightarrow(\mathbb{Z}\langle
S\rangle)\underline{P}\xrightarrow{\partial_{2}}(\mathbb{Z}\langle
S\rangle)\underline{R}\xrightarrow{\partial_{1}}(\mathbb{Z}\langle
S\rangle)\underline{\Sigma}\xrightarrow{\partial_{0}}(\mathbb{Z}\langle
S\rangle)\underline{\\{\star\\}}\xrightarrow{\epsilon}\mathbb{Z}$
for some set $P$. Squier [16] showed that if the SRS $(\Sigma,R)$ is complete
and reduced (an SRS $(\Sigma,R)$ is _reduced_ if for each $l\rightarrow r\in R$, $r$ is normal w.r.t. $\rightarrow_{R}$ and there does not exist $l^{\prime}\rightarrow r^{\prime}\in R$ such that $l^{\prime}=ulv\neq l$ for some $u,v\in\Sigma^{*}$), there is $\partial_{2}:(\mathbb{Z}\langle
S\rangle)\underline{P}\rightarrow(\mathbb{Z}\langle S\rangle)\underline{R}$
for $P=(\text{the critical pairs of $R$})$ so that we can compute
$H_{2}(S)=\ker\partial_{1}/\operatorname{im}\partial_{2}$ explicitly. This is
an analog of the free-resolution example in Section 3.1, but we omit the details here. For an abelian group
$G$, let $s(G)$ denote the minimum number of generators of $G$ (i.e., the
minimum cardinality of the subset $A\subset G$ such that any element $x\in G$
can be written by $x=a_{1}+\dotsb+a_{k}-a_{k+1}-\dotsb-a_{m}$ for
$a_{1},\dotsc,a_{m}\in A$). Then, we have the following theorem:
###### Theorem 9.
[21] Let $(\Sigma,R)$ be an SRS and $S=\mathcal{M}_{(\Sigma,R)}$. Then
$\\#\Sigma\geq s(H_{1}(S))$, $\\#R\geq s(H_{2}(S))$.
To prove this theorem, we use the following lemma:
###### Lemma 10.
Let $X$ be a set. The group homomorphism $\mathbb{Z}\otimes_{\mathbb{Z}\langle
S\rangle}(\mathbb{Z}\langle
S\rangle)\underline{X}\rightarrow\mathbb{Z}\underline{X}$, $n\langle
w\rangle\underline{x}\mapsto n\underline{x}$ is an isomorphism.
This lemma is proved in a straightforward way.
###### Proof 3.1 (Proof of Theorem 9).
Since $\mathbb{Z}\otimes_{\mathbb{Z}\langle S\rangle}(\mathbb{Z}\langle
S\rangle)\underline{X}\cong\mathbb{Z}\underline{X}$ by the above lemma,
$s(\mathbb{Z}\otimes_{\mathbb{Z}\langle S\rangle}(\mathbb{Z}\langle
S\rangle)\underline{X})=s(\mathbb{Z}\underline{X})=\\#X$. For any set $Y$ and
group homomorphism
$f:\mathbb{Z}\underline{X}\rightarrow\mathbb{Z}\underline{Y}$, since $\ker f$
is a subgroup of $\mathbb{Z}\underline{X}$, we have $\\#X\geq s(\ker f)$. For
any subgroup $H$ of $\ker f$, $\ker f/H$ is generated by
$[x_{1}],\dotsc,[x_{k}]$ if $\ker f$ is generated by $x_{1},\dotsc,x_{k}$.
Thus $\\#\Sigma\geq
s(\ker\partial_{0}/\operatorname{im}\partial_{1})=s(H_{1}(S))$, $\\#R\geq
s(\ker\partial_{1}/\operatorname{im}\partial_{2})=s(H_{2}(S))$.
Note that $H_{i}(S)$ does not depend on the choice of presentation
$(\Sigma,R)$ by Theorem 6. Therefore, Theorem 9 can be restated as follows:
Let $(\Sigma,R)$ be an SRS. For any SRS $(\Sigma^{\prime},R^{\prime})$
isomorphic to $(\Sigma,R)$, the number of symbols $\\#\Sigma^{\prime}$ is
bounded below by $s(H_{1}(\mathcal{M}_{(\Sigma,R)}))$ and the number of rules
$\\#R^{\prime}$ is bounded below by $s(H_{2}(\mathcal{M}_{(\Sigma,R)}))$.
## 4\. An Overview of the Homology Theory of Algebraic Theories
In this section, we will briefly see the homology theory of algebraic
theories, which is the main tool to obtain our lower bounds.
We fix a signature $\Sigma$. Let $t=\langle t_{1},\dotsc,t_{n}\rangle$ be an $n$-tuple of terms and suppose that for each $t_{i}$, the set of variables in $t_{i}$ is included in $\left\\{x_{1},\dots,x_{m}\right\\}$. For an $m$-tuple of terms $s=\langle s_{1},\dotsc,s_{m}\rangle$, we define the composition of
$t$ and $s$ by
$t\circ s=\langle
t_{1}[s_{1}/x_{1},\dotsc,s_{m}/x_{m}],\dotsc,t_{n}[s_{1}/x_{1},\dotsc,s_{m}/x_{m}]\rangle$
where $t_{i}[s_{1}/x_{1},\dotsc,s_{m}/x_{m}]$ denotes the term obtained by
substituting $s_{j}$ for $x_{j}$ in $t_{i}$ for each $j=1,\dotsc,m$ in
parallel. (For example,
$f(x_{1},x_{2})[g(x_{2})/x_{1},g(x_{1})/x_{2}]=f(g(x_{2}),g(x_{1}))$.) By this
definition, we can think of any $m$-tuple $\langle s_{1},\dotsc,s_{m}\rangle$
of terms as a (parallel) substitution $\left\\{x_{1}\mapsto
s_{1},\dotsc,x_{m}\mapsto s_{m}\right\\}$. Recall that, for a TRS $R$, the
reduction relation $\rightarrow_{R}$ between terms is defined as
$t_{1}\rightarrow_{R}t_{2}\iff t_{1}=C[l\circ s],\ t_{2}=C[r\circ s]$ for some
single-hole context $C$, $m$-tuple $s$ of terms, and rewrite rule
$l\rightarrow r\in R$ whose variables are included in
$\\{x_{1},\dotsc,x_{m}\\}$. This definition suggests that the pair of a
context $C$ and an $m$-tuple of terms (or equivalently, substitution) $s$ is
useful to think about rewrite relations. Malbos and Mimram [9] called the pair
of a context and an $m$-tuple of terms a bicontext. For a bicontext $(C,t)$
and a rewrite rule $A$, we call the triple $(C,A,t)$ a rewriting step. The
pair of two rewriting steps $(\square,l_{1}\rightarrow
r_{1},s),(C,l_{2}\rightarrow r_{2},t)$ is called a critical pair if the pair
$(r_{1}\circ s,C[r_{2}\circ t])$ of terms is a critical pair in the usual
sense given by $l_{1}\rightarrow r_{1}$, $l_{2}\rightarrow r_{2}$.
The composition of two bicontexts $(C,t),(D,s)$ ($t=\langle
t_{1},\dotsc,t_{n}\rangle$, $s=\langle s_{1},\dotsc,s_{m}\rangle$) is defined
by
$(C,t)\circ(D,s)=(C[D\circ t],s\circ t)$
where $D\circ t=D[t_{1}/x_{1},\dotsc,t_{n}/x_{n}]$ and note that the order of
composition is reversed in the second component. We write $\mathbb{K}(n,m)$
($n,m\in\mathbb{N}$) for the set of bicontexts $(C,t)$ where $t=\langle
t_{1},\dotsc,t_{n}\rangle$ and each $t_{i}$ and $C$ have variables in
$\\{x_{1},\dotsc,x_{m}\\}$ (except $\square$ in $C$).
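As an illustration, parallel substitution (and hence the composition of tuples of terms) is easy to implement. The following Python sketch uses a tuple encoding of terms chosen here only for illustration: a variable $x_{i}$ is `("x", i)` and an application of $f$ is `("f", t1, ..., tn)`.

```python
def subst(t, s):
    """t ∘ s for a single term t: substitute s[i-1] for x_i, in parallel."""
    if t[0] == "x":
        return s[t[1] - 1]
    return (t[0], *(subst(arg, s) for arg in t[1:]))

def compose(t, s):
    """⟨t1,...,tn⟩ ∘ ⟨s1,...,sm⟩: componentwise parallel substitution."""
    return tuple(subst(ti, s) for ti in t)

# Example from the text: f(x1, x2)[g(x2)/x1, g(x1)/x2] = f(g(x2), g(x1)).
t = ("f", ("x", 1), ("x", 2))
s = (("g", ("x", 2)), ("g", ("x", 1)))
assert subst(t, s) == ("f", ("g", ("x", 2)), ("g", ("x", 1)))
```

The same mechanics (substituting the tuple component and plugging a term into the hole of a context) underlie the composition of bicontexts defined above.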
To apply homological algebra to TRSs, we construct an algebraic structure from
bicontexts. For two natural numbers $n,m$, we define
$\mathbb{Z}\langle\mathbb{K}\rangle(n,m)$ to be the free abelian group
generated by $\mathbb{K}(n,m)$ (i.e., any element in
$\mathbb{Z}\langle\mathbb{K}\rangle(n,m)$ is written in the form of formal sum
$\sum_{(C,t)\in\mathbb{K}(n,m)}\lambda_{(C,t)}(C,t)$ where each
$\lambda_{(C,t)}$ is in $\mathbb{Z}$ and is equal to $0$ except for finitely
many $(C,t)$s.) Then, the composition
$\circ:\mathbb{K}(n,m)\times\mathbb{K}(k,n)\rightarrow\mathbb{K}(k,m)$ can be
extended to
$\circ:\mathbb{Z}\langle\mathbb{K}\rangle(n,m)\times\mathbb{Z}\langle\mathbb{K}\rangle(k,n)\rightarrow\mathbb{Z}\langle\mathbb{K}\rangle(k,m)$
by
$\left(\sum_{(C,t)}\lambda_{(C,t)}(C,t)\right)\circ\left(\sum_{(D,s)}\mu_{(D,s)}(D,s)\right)=\sum_{(C,t)}\sum_{(D,s)}\lambda_{(C,t)}\mu_{(D,s)}((C,t)\circ(D,s)).$
This family of free abelian groups forms a structure called _ringoid_. Suppose
an abelian group $(\mathcal{R}(i,j),+_{i,j},0_{i,j})$ is defined for each
$i,j\in\mathbb{N}$. If for each $i,j,k\in\mathbb{N}$, a map
$\circ_{i,j,k}:\mathcal{R}(j,k)\times\mathcal{R}(i,j)\rightarrow\mathcal{R}(i,k)$
is defined and satisfies the following conditions, $\mathcal{R}$ is called a
_ringoid_ (also called a _small $\mathbf{Ab}$-enriched category_).
1. (1)
For each $i$, there exists an element $1_{i}\in\mathcal{R}(i,i)$ such that
$a\circ_{i,i,j}1_{i}=a$, $1_{i}\circ_{j,i,i}b=b$ ($j\in\mathbb{N}$,
$a\in\mathcal{R}(i,j)$, $b\in\mathcal{R}(j,i)$),
2. (2)
$(a\circ_{j,k,l}b)\circ_{i,j,l}c=a\circ_{i,k,l}(b\circ_{i,j,k}c)$
($a\in\mathcal{R}(k,l)$, $b\in\mathcal{R}(j,k)$, $c\in\mathcal{R}(i,j)$),
3. (3)
$(a+_{j,k}b)\circ_{i,j,k}c=a\circ_{i,j,k}c+_{i,k}b\circ_{i,j,k}c$
($a,b\in\mathcal{R}(j,k)$, $c\in\mathcal{R}(i,j)$),
4. (4)
$a\circ_{i,j,k}(b+_{i,j}c)=a\circ_{i,j,k}b+_{i,k}a\circ_{i,j,k}c$
($a\in\mathcal{R}(j,k)$, $b,c\in\mathcal{R}(i,j)$),
5. (5)
$a\circ_{i,j,k}0_{i,j}=0_{i,k}=0_{j,k}\circ_{i,j,k}b$ ($a\in\mathcal{R}(j,k)$,
$b\in\mathcal{R}(i,j)$).
We will omit subscripts of $+,\circ,0,1$ if there is no confusion. The notion
of modules over a ring is extended to modules over a ringoid. Let
$\mathcal{R}$ be a ringoid. Suppose that for each $i\in\mathbb{N}$, an abelian
group $(M(i),+_{i},0_{i})$ is defined. If there is a map
$\cdot_{i,j}:\mathcal{R}(i,j)\times M(i)\rightarrow M(j)$ satisfying the
following conditions, $M$ is called a _left $\mathcal{R}$-module_.
1. (1)
$(a\circ_{i,j,k}b)\cdot_{i,k}x=a\cdot_{j,k}(b\cdot_{i,j}x)$
($a\in\mathcal{R}(j,k)$, $b\in\mathcal{R}(i,j)$, $x\in M(i)$),
2. (2)
$1_{i}\cdot_{i,i}x=x$ ($x\in M(i))$),
3. (3)
$(a+_{i,j}b)\cdot_{i,j}x=(a\cdot_{i,j}x)+_{j}(b\cdot_{i,j}x)$
($a,b\in\mathcal{R}(i,j)$, $x\in M(i)$),
4. (4)
$a\cdot_{i,j}(x+_{i}y)=(a\cdot_{i,j}x)+_{j}(a\cdot_{i,j}y)$
($a\in\mathcal{R}(i,j)$, $x,y\in M(i)$),
5. (5)
$0_{i,j}\cdot_{i,j}x=0_{j}$ ($x\in M(i)$).
A _right $\mathcal{R}$-module_ $M$ is also defined with a map
$\cdot_{i,j}:M(i)\times\mathcal{R}(i,j)\rightarrow M(j)$ in the same manner
as for right modules over a ring.
An _$\mathcal{R}$ -linear map_ $f:M\rightarrow M^{\prime}$ between left
$\mathcal{R}$-modules $M,M^{\prime}$ is a collection of group homomorphisms
$f_{i}:M(i)\rightarrow M^{\prime}(i)$ ($i\in\mathbb{N}$) that satisfy
$f_{j}(a\cdot_{i,j}x)=a\cdot_{i,j}f_{i}(x)\quad(a\in\mathcal{R}(i,j),x\in
M(i)).$
Ringoids and modules over ringoids are originally defined in a category
theoretic way (cf. [10, 9]). (A ringoid is a small Ab-enriched category, and a
module over a ringoid is an additive functor.) Our definitions here are
obtained by unfolding the category theoretic terminology in the original
definitions so that those who are not familiar with category theory can
understand them more easily. Let $\mathcal{R}$ be a ringoid and $P$ be a
family of sets $P_{i}$ ($i\in\mathbb{N}$). The free left $\mathcal{R}$-module
generated by $P$, denoted by $\mathcal{R}\underline{P}$ is defined as follows.
For each $i\in\mathbb{N}$, $(\mathcal{R}\underline{P})(i)$ is the abelian
group of formal finite sums
$\sum_{p_{j}\in P_{j},\
j\in\mathbb{N}}a_{p_{j}}\underline{p_{j}},\quad(a_{p_{j}}\in\mathcal{R}(j,i))$
and for each $r\in\mathcal{R}(i,k)$,
$r\cdot\left(\sum_{p_{j}\in P_{j},\
j\in\mathbb{N}}a_{p_{j}}\underline{p_{j}}\right)=\sum_{p_{j}\in P_{j},\
j\in\mathbb{N}}(r\circ a_{p_{j}})\underline{p_{j}}.$
If a left $\mathcal{R}$-module $M$ is isomorphic to $\mathcal{R}\underline{P}$
for some $P$, we say that $M$ is free. For
$\mathbb{Z}\langle\mathbb{K}\rangle$, we write $C\underline{x}t$ for elements
of $((\mathbb{Z}\langle\mathbb{K}\rangle)\underline{P})(X)$ instead of
$(C,t)\underline{x}$, and $(D+C)\underline{x}t$ for
$D\underline{x}t+C\underline{x}t$.
The tensor product of two modules over a ringoid is also defined [9]. Let
$\mathcal{R}$ be a ringoid, $M_{1}$ be a right $\mathcal{R}$-module, and
$M_{2}$ be a left $\mathcal{R}$-module. For a family of groups $\\{G_{X}\mid
X\in P\\}$ for some indexing set $P$, its direct sum, denoted by
$\bigoplus_{X\in P}G_{X}$, is the subset of the direct product defined by
$\\{(g_{X})_{X\in P}\in\prod_{X\in P}G_{X}\mid\text{$g_{X}=0$ except for
finite $X$s}\\}$. The direct sum of groups also forms a group.
The tensor product $M_{1}\otimes_{\mathcal{R}}M_{2}$ is the quotient abelian
group of
$\bigoplus_{X\in\mathbb{N}}M_{1}(X)\otimes_{\mathcal{R}(X,X)}M_{2}(X)$ by the relations $(x\cdot a)\otimes y-x\otimes(a\cdot y)$ for all $a\in\mathcal{R}(Y,X)$, $x\in M_{1}(X)$, $y\in M_{2}(Y)$.
We define an equivalence between two TRSs $(\Sigma,R)$,
$(\Sigma^{\prime},R^{\prime})$, called _Tietze equivalence_. Two TRSs are
_Tietze equivalent_ if one is obtained from the other by applying a series of
_Tietze transformations_ defined as follows:
1. (1)
If $f^{(n)}$ is a symbol not in $\Sigma$ and $t\in T(\Sigma)$ has variables in
$\\{x_{1},\dots,x_{n}\\}$, then $(\Sigma,R)$ can be transformed into
$(\Sigma\cup\\{f\\},R\cup\\{t\rightarrow f(x_{1},\dots,x_{n})\\})$.
2. (2)
If $t\rightarrow f(x_{1},\dots,x_{n})\in R$, $t\in T(\Sigma\setminus\\{f\\})$,
and $f$ does not occur in any rule in $R^{\prime}=R\setminus\\{t\rightarrow
f(x_{1},\dots,x_{n})\\}$, then $(\Sigma,R)$ can be transformed into
$(\Sigma\setminus\\{f\\},R^{\prime})$.
3. (3)
If $t\xleftrightarrow{*}_{R}s$, then $(\Sigma,R)$ can be transformed into
$(\Sigma,R\cup\\{t\rightarrow s\\})$.
4. (4)
If $t\rightarrow s\in R$ and $t\xleftrightarrow{*}_{R^{\prime}}s$ for
$R^{\prime}=R\setminus\\{t\rightarrow s\\}$, then $(\Sigma,R)$ can be
transformed into $(\Sigma,R^{\prime})$.
We can see that any two TRSs $(\Sigma,R_{1})$,$(\Sigma,R_{2})$ are Tietze
equivalent if they are equivalent in the usual sense,
$\xleftrightarrow{*}_{R_{1}}\,=\,\xleftrightarrow{*}_{R_{2}}$. Tietze
equivalence was originally introduced in group theory [20, §11] and is also
defined for monoids [2, 7.2]. Consider the signature
$\Sigma=\\{+^{(2)},S^{(1)},0^{(0)}\\}$ and the set $R$ of four rules
$0+x\rightarrow x,\ x+0\rightarrow x,\ S(x)+y\rightarrow S(x+y),\
(x+y)+z\rightarrow x+(y+z).$
We can see $(\Sigma,R)$ is Tietze equivalent to $(\Sigma^{\prime},R^{\prime})$
where
$\Sigma^{\prime}=\\{+^{(2)},0^{(0)},1^{(0)}\\},\ R^{\prime}=\\{0+x\rightarrow
x,\ x+0\rightarrow x,\ (x+y)+z\rightarrow x+(y+z)\\}$
as follows:
$\displaystyle(\Sigma,R)$
$\displaystyle\xlongrightarrow{\rm(1)}(\Sigma\uplus\\{1^{(0)}\\},R\uplus\\{S(0)\rightarrow
1\\})$
$\displaystyle\xlongrightarrow{\rm(3)}(\Sigma\uplus\\{1^{(0)}\\},R\uplus\\{S(0)\rightarrow
1,\ 1+x\rightarrow S(x)\\})$
$\displaystyle\xlongrightarrow{\rm(4)}(\Sigma\uplus\\{1^{(0)}\\},R\uplus\\{1+x\rightarrow
S(x)\\})$
$\displaystyle\xlongrightarrow{\rm(4)}(\Sigma\uplus\\{1^{(0)}\\},R\uplus\\{1+x\rightarrow
S(x)\\}\setminus\\{S(x)+y\rightarrow S(x+y)\\})$
$\displaystyle\xlongrightarrow{\rm(2)}(\Sigma\uplus\\{1^{(0)}\\}\setminus\\{S^{(1)}\\},R\setminus\\{S(x)+y\rightarrow
S(x+y)\\})=(\Sigma^{\prime},R^{\prime}).$
Now, we outline Malbos-Mimram’s construction of the homology groups of TRSs.
Let $d=\deg(R)$.
1. (1)
We begin by defining a new ringoid from $\mathbb{Z}\langle\mathbb{K}\rangle$.
That ringoid, denoted by
$\overline{\mathbb{Z}\langle\mathbb{K}\rangle}^{(\Sigma,R)}$, depends only on
the Tietze equivalence class of $(\Sigma,R)$.
$\overline{\mathbb{Z}\langle\mathbb{K}\rangle}^{(\Sigma,R)}$ corresponds to
$\mathbb{Z}\langle\mathcal{M}_{(\Sigma,R)}\rangle$ in the case $(\Sigma,R)$ is
an SRS.
2. (2)
From this step, we write $\mathcal{R}$ for
$\overline{\mathbb{Z}\langle\mathbb{K}\rangle}^{(\Sigma,R)}$. It can be shown
that we have a partial free resolution
$\mathcal{R}\underline{\mathbf{P}_{3}}\xrightarrow{\partial_{2}}\mathcal{R}\underline{\mathbf{P}_{2}}\xrightarrow{\partial_{1}}\mathcal{R}\underline{\mathbf{P}_{1}}\xrightarrow{\partial_{0}}\mathcal{R}\underline{\mathbf{P}_{0}}\xrightarrow{\epsilon}\mathcal{Z}$
(5)
where every $\mathbf{P}_{i}$ is a family of sets $(\mathbf{P}_{i})_{j}$ given
by $(\mathbf{P}_{0})_{1}=\\{1\\}$, $(\mathbf{P}_{0})_{j}=\emptyset$ ($j\neq
1$), $(\mathbf{P}_{1})_{j}=\Sigma^{(j)}=\\{f\in\Sigma\mid\text{$f$ is of arity
$j$}\\}$, $(\mathbf{P}_{2})_{j}=\\{l\rightarrow r\in R\mid\text{$l$ is of
arity $j$}\\}$, $(\mathbf{P}_{3})_{j}=\\{((\square,A,s),(C,B,t))\text{ :
critical pair}\mid\text{one of $A,B$ is in $(\mathbf{P}_{2})_{j}$, and the
other is in }(\mathbf{P}_{2})_{k}\text{ for }k\leq j\\}$.
3. (3)
By taking the tensor product $\mathbb{Z}/d\mathbb{Z}\otimes_{\mathcal{R}}-$,
we have the chain complex
$\mathbb{Z}/d\mathbb{Z}\otimes_{\mathcal{R}}\mathcal{R}\underline{\mathbf{P}_{3}}\xrightarrow{\mathbb{Z}/d\mathbb{Z}\otimes\partial_{2}}\mathbb{Z}/d\mathbb{Z}\otimes_{\mathcal{R}}\mathcal{R}\underline{\mathbf{P}_{2}}\xrightarrow{\mathbb{Z}/d\mathbb{Z}\otimes\partial_{1}}\mathbb{Z}/d\mathbb{Z}\otimes_{\mathcal{R}}\mathcal{R}\underline{\mathbf{P}_{1}}\xrightarrow{\mathbb{Z}/d\mathbb{Z}\otimes\partial_{0}}\mathbb{Z}/d\mathbb{Z}\otimes_{\mathcal{R}}\mathcal{R}\underline{\mathbf{P}_{0}}$
(6)
where $\mathbb{Z}/d\mathbb{Z}$ above is the $\mathcal{R}$-module defined by
$(\mathbb{Z}/d\mathbb{Z})(i)=\mathbb{Z}/d\mathbb{Z}$ (the abelian group of integers modulo $d$) for each object $i$, and the scalar multiplication is given by
$(C,t)\cdot k=k$.
4. (4)
The homology groups can be defined by
$H_{i}(\Sigma,R)=\ker(\mathbb{Z}/d\mathbb{Z}\otimes\partial_{i-1})/\operatorname{im}(\mathbb{Z}/d\mathbb{Z}\otimes\partial_{i}).$
It is shown that the homology groups of TRS depend only on the Tietze
equivalence class of $(\Sigma,R)$. Thus, we have the following:
$\xleftrightarrow{*}_{R_{1}}\,=\,\xleftrightarrow{*}_{R_{2}}\implies
H_{i}(\Sigma,R_{1})\cong H_{i}(\Sigma,R_{2}).$
For step 1, we define the relations of
$\overline{\mathbb{Z}\langle\mathbb{K}\rangle}^{(\Sigma,R)}$. We identify
elements in $\mathbb{Z}\langle\mathbb{K}\rangle$ as follows. (a) For two
$m$-tuples $t=\langle t_{1},\dotsc,t_{m}\rangle,s=\langle
s_{1},\dotsc,s_{m}\rangle$ of terms, we identify $t$ and $s$ if
$t\xleftrightarrow{*}_{R}s$. (b) Similarly, for two single-hole contexts
$C,D$, we identify $C$ and $D$ if $C\xleftrightarrow{*}_{R}D$. For the last
identification, we introduce operator $\kappa_{i}$ which takes a term $t$ and
returns the formal sum of single-hole contexts $C_{1}+\dotsb+C_{m}$ where
$C_{j}$ $(j=1,\dotsc,m)$ is obtained by replacing the $j$-th occurrence of
$x_{i}$ with $\square$ in $t$, and $m$ is the number of the occurrences of
$x_{i}$ in $t$. For example, we have
$\displaystyle\kappa_{1}(f(g(x_{1},x_{2}),x_{1}))$
$\displaystyle=f(g(\square,x_{2}),x_{1})+f(g(x_{1},x_{2}),\square),$
$\displaystyle\kappa_{2}(f(g(x_{1},x_{2}),x_{1}))$
$\displaystyle=f(g(x_{1},\square),x_{1}),$ $\displaystyle\kappa_{2}(h(x_{1}))$
$\displaystyle=0.$
The definition of $\kappa_{i}$ can be stated inductively as follows:
$\displaystyle\kappa_{i}(x_{i})$ $\displaystyle=\square,\
\kappa_{i}(x_{j})=0\quad(j\neq i),$
$\displaystyle\kappa_{i}(f(t_{1},\dotsc,t_{n}))$
$\displaystyle=\sum_{k=1}^{n}f(t_{1},\dotsc,t_{k-1},\kappa_{i}(t_{k}),t_{k+1},\dotsc,t_{n}).$
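This inductive definition of $\kappa_{i}$ translates directly into code. Reusing the term encoding from the earlier sketch, with `("HOLE",)` playing the role of $\square$ (again an encoding chosen only for illustration):

```python
HOLE = ("HOLE",)

def kappa(i, t):
    """kappa_i(t): the formal sum (here, a list) of single-hole contexts,
    one for each occurrence of x_i in t."""
    if t == ("x", i):
        return [HOLE]
    if t[0] == "x":                    # a different variable: kappa_i(x_j) = 0
        return []
    sym, args = t[0], list(t[1:])
    contexts = []
    for k in range(len(args)):         # put the hole inside the k-th argument
        for ctx in kappa(i, args[k]):
            contexts.append((sym, *args[:k], ctx, *args[k + 1:]))
    return contexts

# kappa_1(f(g(x1, x2), x1)) = f(g(□, x2), x1) + f(g(x1, x2), □)
t = ("f", ("g", ("x", 1), ("x", 2)), ("x", 1))
assert kappa(1, t) == [("f", ("g", HOLE, ("x", 2)), ("x", 1)),
                       ("f", ("g", ("x", 1), ("x", 2)), HOLE)]
```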
Then, (c) we identify formal sums of bicontexts $(C_{1},t)+\dots+(C_{k},t)$
and $(D_{1},t)+\dots+(D_{l},t)$ if $\kappa_{i}(u)=C_{1}+\dots+C_{k}$,
$\kappa_{i}(v)=D_{1}+\dots+D_{l}$ for some positive integer $i$ and terms
$u,v$ such that $u\xleftrightarrow{*}_{R}v$.
$\overline{\mathbb{Z}\langle\mathbb{K}\rangle}^{(\Sigma,R)}$ is defined as the
quotient of $\mathbb{Z}\langle\mathbb{K}\rangle$ by the equivalence relation
generated by the identifications (a), (b), and (c).
We omit the definitions of the $\mathcal{R}$-linear maps $\epsilon,\partial_{i}$ ($i=0,1,2$) in step 2, but we describe the induced group homomorphisms $\mathbb{Z}/d\mathbb{Z}\otimes\partial_{i}:\mathbb{Z}/d\mathbb{Z}\otimes_{\mathcal{R}}\mathcal{R}\underline{\mathbf{P}_{i+1}}\rightarrow\mathbb{Z}/d\mathbb{Z}\otimes_{\mathcal{R}}\mathcal{R}\underline{\mathbf{P}_{i}}$. Let $\tilde{\partial}_{i}$ denote $\mathbb{Z}/d\mathbb{Z}\otimes\partial_{i}$ for simplicity. For $f^{(n)}\in\Sigma$, the homomorphism
$\tilde{\partial}_{0}:\mathbb{Z}/d\mathbb{Z}\otimes_{\mathcal{R}}\mathcal{R}\underline{\mathbf{P}_{1}}\rightarrow\mathbb{Z}/d\mathbb{Z}\otimes_{\mathcal{R}}\mathcal{R}\underline{\mathbf{P}_{0}}$
is given by
$\tilde{\partial}_{0}(\underline{f})=(n-1)\underline{1}.$
For a term $t$, we define $\varphi(t)$ as the linear combination of symbols
$\sum_{f\in\Sigma}n_{f}\underline{f}$ where $n_{f}$ is the number of
occurrences of $f$ in $t$. Using this, for $l\rightarrow r\in R$, the
homomorphism
homomorphism
$\tilde{\partial}_{1}:\mathbb{Z}/d\mathbb{Z}\otimes_{\mathcal{R}}\mathcal{R}\underline{\mathbf{P}_{2}}\rightarrow\mathbb{Z}/d\mathbb{Z}\otimes_{\mathcal{R}}\mathcal{R}\underline{\mathbf{P}_{1}}$
is given by
$\tilde{\partial}_{1}(\underline{l\rightarrow r})=\varphi(r)-\varphi(l).$
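As an illustration (our example, using the group signature $\Sigma=\\{m^{(2)},i^{(1)},e^{(0)}\\}$ of Example 2), we have $\tilde{\partial}_{0}(\underline{m})=\underline{1}$, $\tilde{\partial}_{0}(\underline{i})=0$, $\tilde{\partial}_{0}(\underline{e})=-\underline{1}$, and for the rule $m(e,x_{1})\rightarrow x_{1}$,
$\tilde{\partial}_{1}(\underline{m(e,x_{1})\rightarrow x_{1}})=\varphi(x_{1})-\varphi(m(e,x_{1}))=0-(\underline{m}+\underline{e})=-\underline{m}-\underline{e}.$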
For a critical pair $((\square,l\rightarrow r,s),(C,u\rightarrow v,t))$, let
$(D_{i},l_{i}\rightarrow r_{i},s_{i})$, $(C_{j},u_{j}\rightarrow v_{j},t_{j})$
($i=1,\dots,k,j=1,\dots,l$) be rewriting steps such that $r\circ
s=D_{1}[l_{1}\circ s_{1}],D_{1}[r_{1}\circ s_{1}]=D_{2}[l_{2}\circ
s_{2}],\dots,D_{k-1}[r_{k-1}\circ s_{k-1}]=D_{k}[l_{k}\circ s_{k}]$, $C[v\circ
t]=C_{1}[u_{1}\circ t_{1}],C_{1}[v_{1}\circ t_{1}]=C_{2}[u_{2}\circ
t_{2}],\dots,C_{l-1}[v_{l-1}\circ t_{l-1}]=C_{l}[u_{l}\circ t_{l}]$,
$D_{k}[r_{k}\circ s_{k}]=C_{l}[v_{l}\circ t_{l}]$. Then the map
$\tilde{\partial}_{2}((\square,l\rightarrow r,s),(C,u\rightarrow v,t))$ is
defined by
$\underline{u\rightarrow v}+\sum_{j=1}^{l}\underline{u_{j}\rightarrow
v_{j}}-\underline{l\rightarrow r}-\sum_{i=1}^{k}\underline{l_{i}\rightarrow r_{i}}.$
Malbos-Mimram’s lower bound for the number of rewrite rules is given by
$s(H_{2}(\Sigma,R))$. (Recall that $s(G)$ denotes the minimum number of
generators of an abelian group $G$.) More precisely, $\\#\Sigma^{\prime}\geq
s(H_{1}(\Sigma,R))$ and $\\#R^{\prime}\geq s(H_{2}(\Sigma,R))$ hold for any
TRS $(\Sigma^{\prime},R^{\prime})$ that is Tietze equivalent to $(\Sigma,R)$.
These inequalities are shown in a similar way to the proof of Theorem 9.
## 5\. Proof of Main Theorem
Let $(\Sigma,R)$ be a complete TRS. We first simplify the tensor product
$\mathbb{Z}/d\mathbb{Z}\otimes_{\mathcal{R}}\mathcal{R}\underline{\mathbf{P}_{i}}$
in (6).
###### Lemma 11.
Let $d=\deg(R)$ and $P$ be a family of sets $P_{0},P_{1},\dots$. Then, we have
$\mathbb{Z}/d\mathbb{Z}\otimes_{\mathcal{R}}\mathcal{R}\underline{P}\cong(\mathbb{Z}/d\mathbb{Z})\underline{\biguplus_{i}P_{i}}$.
In particular, if $d=0$,
$\mathbb{Z}\otimes_{\mathcal{R}}\mathcal{R}\underline{P}\cong\mathbb{Z}\underline{\biguplus_{i}P_{i}}$.
###### Proof 5.1.
We define a group homomorphism
$f:\mathbb{Z}/d\mathbb{Z}\otimes_{\mathcal{R}}\mathcal{R}\underline{P}\rightarrow(\mathbb{Z}/d\mathbb{Z})\underline{\biguplus_{i}P_{i}}$
by $f((w_{n})_{n\geq 0})=\sum_{n\geq 0}f_{n}(w_{n})$ where
$f_{n}:\mathbb{Z}/d\mathbb{Z}\otimes_{\mathcal{R}(n,n)}\mathcal{R}\underline{P}(n)\rightarrow(\mathbb{Z}/d\mathbb{Z})\underline{P_{n}}$
is defined by $f_{n}([k]\otimes C\underline{a}t)=[k]\underline{a}$ for $a\in
P_{n}$. Since each $f_{n}$ is an isomorphism, $f$ is also an isomorphism.
As special cases of this lemma, we have
$\mathbb{Z}/d\mathbb{Z}\otimes_{\mathcal{R}}\mathcal{R}\underline{\mathbf{P}_{1}}\cong(\mathbb{Z}/d\mathbb{Z})\underline{\Sigma}$,
$\mathbb{Z}/d\mathbb{Z}\otimes_{\mathcal{R}}\mathcal{R}\underline{\mathbf{P}_{2}}\cong(\mathbb{Z}/d\mathbb{Z})\underline{R}$,
and
$\mathbb{Z}/d\mathbb{Z}\otimes_{\mathcal{R}}\mathcal{R}\underline{\mathbf{P}_{3}}\cong(\mathbb{Z}/d\mathbb{Z})\underline{\operatorname{CP}(R)}$.
Additionally, we can see each group homomorphism $\tilde{\partial}_{i}$
($i=0,1,2$) is a $\mathbb{Z}/d\mathbb{Z}$-linear map.
To prove Theorem 1, we show the following lemma.
###### Lemma 12.
Let $d=\deg(R)$. If $d=0$ or $d$ is prime,
$\\#R-e(R)=s(H_{2}(\Sigma,R))+s(\operatorname{im}\tilde{\partial}_{1})$.
(Recall that $s(G)$ is the minimum number of generators of a group $G$.)
###### Proof 5.2.
By definition, $D(R)$ defined in Section 2 is a matrix representation of
$\tilde{\partial}_{2}$. Suppose $d$ is prime. In this case,
$s(H_{2}(\Sigma,R))$ is equal to the dimension of $H_{2}(\Sigma,R)$ as a
$\mathbb{Z}/d\mathbb{Z}$-vector space. By the rank-nullity theorem, we have
$\displaystyle\dim(H_{2}(\Sigma,R))$
$\displaystyle=\dim(\ker\tilde{\partial}_{1})-\dim(\operatorname{im}\tilde{\partial}_{2})$
$\displaystyle=\dim(\mathbb{Z}/d\mathbb{Z}\otimes_{\mathcal{R}}\mathcal{R}\underline{\mathbf{P}_{2}})-\dim(\operatorname{im}\tilde{\partial}_{1})-\dim(\operatorname{im}\tilde{\partial}_{2})$
$\displaystyle=\dim((\mathbb{Z}/d\mathbb{Z})\underline{R})-\dim(\operatorname{im}\tilde{\partial}_{1})-\operatorname{rank}(D(R))$
$\displaystyle=\\#R-\dim(\operatorname{im}\tilde{\partial}_{1})-e(R).$
Suppose $d=0$. We show
$H_{2}(\Sigma,R)\cong\mathbb{Z}^{\\#R-r-k}\times\mathbb{Z}/e_{1}\mathbb{Z}\times\dotsb\times\mathbb{Z}/e_{r}\mathbb{Z}$
where $r=\operatorname{rank}(D(R))$,
$k=s(\operatorname{im}\tilde{\partial}_{1})$, and $e_{1},\dotsc,e_{r}$ are the
elementary divisors of $D(R)$. Let
$\overline{\partial}_{1}:\mathbb{Z}\otimes_{\mathcal{R}}\mathcal{R}\underline{\mathbf{P}_{2}}/\operatorname{im}\tilde{\partial}_{2}\rightarrow\mathbb{Z}\otimes_{\mathcal{R}}\mathcal{R}\underline{\mathbf{P}_{1}}$
be the group homomorphism defined by $[x]\mapsto\tilde{\partial}_{1}(x)$.
$\overline{\partial}_{1}$ is well-defined since
$\operatorname{im}\tilde{\partial}_{2}\subset\ker\tilde{\partial}_{1}$, and
$\ker\overline{\partial}_{1}$ is isomorphic to
$\ker\tilde{\partial}_{1}/\operatorname{im}\tilde{\partial}_{2}=H_{2}(\Sigma,R)$.
By taking the basis $v_{1},\dotsc,v_{\\#R}$ of
$\mathbb{Z}\otimes_{\mathcal{R}}\mathcal{R}\underline{\mathbf{P}_{2}}\cong\mathbb{Z}\underline{R}$
such that $D(R)$ is the matrix representation of $\tilde{\partial}_{2}$ under
the basis $v_{1},\dotsc,v_{\\#R}$ and some basis of
$\mathbb{Z}\otimes_{\mathcal{R}}\mathcal{R}\underline{\mathbf{P}_{3}}$, we can
see
$\mathbb{Z}\otimes_{\mathcal{R}}\mathcal{R}\underline{\mathbf{P}_{2}}/\operatorname{im}\tilde{\partial}_{2}\cong\mathbb{Z}^{\\#R-r}\times\mathbb{Z}/e_{1}\mathbb{Z}\times\dotsb\times\mathbb{Z}/e_{r}\mathbb{Z}$.
Since $\mathbb{Z}\otimes_{\mathcal{R}}\mathcal{R}\underline{\mathbf{P}_{1}}\cong\mathbb{Z}\underline{\Sigma}$
is free and hence torsion-free, every torsion element of
$\mathbb{Z}\otimes_{\mathcal{R}}\mathcal{R}\underline{\mathbf{P}_{2}}/\operatorname{im}\tilde{\partial}_{2}$
lies in $\ker\overline{\partial}_{1}$: if $e_{i}[x]=0$ for some
$i=1,\dotsc,r$, then
$e_{i}\overline{\partial}_{1}([x])=\overline{\partial}_{1}(e_{i}[x])=0$ in
$\mathbb{Z}\underline{\Sigma}$, so $\overline{\partial}_{1}([x])=0$.
Therefore, $\ker\overline{\partial}_{1}$ contains the torsion part
$\\{0\\}\times\dotsb\times\\{0\\}\times\mathbb{Z}/e_{1}\mathbb{Z}\times\dotsb\times\mathbb{Z}/e_{r}\mathbb{Z}$,
and its intersection with the free part
$\mathbb{Z}^{\\#R-r}\times\\{0\\}\times\dotsb\times\\{0\\}$ is free of rank
$\\#R-r-k$ since $\operatorname{im}\overline{\partial}_{1}=\operatorname{im}\tilde{\partial}_{1}$
has rank $k$. Thus,
$\ker\overline{\partial}_{1}\cong\mathbb{Z}^{\\#R-r-k}\times\mathbb{Z}/e_{1}\mathbb{Z}\times\dotsb\times\mathbb{Z}/e_{r}\mathbb{Z}$.
Since $\mathbb{Z}/e\mathbb{Z}\cong 0$ if $e$ is invertible,
$\mathbb{Z}^{\\#R-r-k}\times\mathbb{Z}/e_{1}\mathbb{Z}\times\dotsb\times\mathbb{Z}/e_{r}\mathbb{Z}\cong\mathbb{Z}^{\\#R-r-k}\times\mathbb{Z}/e_{e(R)+1}\mathbb{Z}\times\dotsb\times\mathbb{Z}/e_{r}\mathbb{Z}=:G$.
The group $G$ is generated by
$(\underbrace{1,0,\dotsc,0}_{\\#R-r-k},\underbrace{[0],\dotsc,[0]}_{r-e(R)})$,
$\dotsc$, $(0,\dotsc,0,1,[0],\dotsc,[0])$, $\dotsc$,
$(0,\dotsc,0,[1],[0],\dotsc,[0])$, $\dotsc$,
$(0,\dotsc,0,[0],\dotsc,[0],[1])$, so we have
$s(G)\leq\\#R-r-k+r-e(R)=\\#R-k-e(R)$. Let $p$ be a prime number which divides
$e_{e(R)+1}$. We can see $G/pG\cong(\mathbb{Z}/p\mathbb{Z})^{\\#R-k-e(R)}$. It
is not hard to see $s(G)\geq s(G/pG)$, and since $G/pG$ is a
$\mathbb{Z}/p\mathbb{Z}$-vector space, $s(G/pG)=\dim(G/pG)=\\#R-k-e(R)$. Thus,
$s(H_{2}(\Sigma,R))=s(G)=\\#R-s(\operatorname{im}\tilde{\partial}_{1})-e(R)$.
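To make the quantity $e(R)$ concrete, the following is a hedged computational sketch (our own code, not part of the paper; the Smith normal form routine is assumed from SymPy): for prime $d$, $e(R)$ is the rank of $D(R)$ over $\mathbb{Z}/d\mathbb{Z}$, and for $d=0$ it counts the elementary divisors of $D(R)$ equal to $1$, so the lower bound is $\\#R-e(R)$.

```python
# Sketch: compute e(R) and the bound #R - e(R) from the integer matrix D(R).
# rank_mod_p is ours; smith_normal_form is assumed available in SymPy.
from sympy import Matrix
from sympy.matrices.normalforms import smith_normal_form

def rank_mod_p(D, p):
    """Rank of an integer matrix over Z/pZ by Gaussian elimination."""
    M = [[x % p for x in row] for row in D]
    rank = 0
    for c in range(len(M[0])):
        piv = next((r for r in range(rank, len(M)) if M[r][c]), None)
        if piv is None:
            continue                      # no pivot in this column
        M[rank], M[piv] = M[piv], M[rank]
        inv = pow(M[rank][c], -1, p)      # modular inverse (Python >= 3.8)
        M[rank] = [x * inv % p for x in M[rank]]
        for r in range(len(M)):
            if r != rank and M[r][c]:
                M[r] = [(a - M[r][c] * b) % p for a, b in zip(M[r], M[rank])]
        rank += 1
    return rank

def lower_bound(D, d, num_rules):
    if d == 0:                            # count elementary divisors equal to 1
        S = smith_normal_form(Matrix(D))
        e = sum(1 for i in range(min(S.shape)) if abs(S[i, i]) == 1)
    else:                                 # d prime: e(R) = rank of D(R) over Z/dZ
        e = rank_mod_p(D, d)
    return num_rules - e
```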
By Lemma 12, Theorem 1 is implied by the following theorem:
###### Theorem 13.
Let $(\Sigma,R)$ be a TRS and $d=\deg(R)$. If $d=0$ or $d$ is prime,
$\\#R\geq s(H_{2}(\Sigma,R))+s(\operatorname{im}\tilde{\partial}_{1}).$ (7)
###### Proof 5.3.
By the first isomorphism theorem, we have an isomorphism between
$\mathbb{Z}/d\mathbb{Z}$-modules
$\operatorname{im}\tilde{\partial}_{1}\simeq\left(\mathbb{Z}/d\mathbb{Z}\otimes_{\mathcal{R}}\mathcal{R}\underline{\mathbf{P}_{2}}\right)/\ker\tilde{\partial}_{1}$
and by the third isomorphism theorem, the right hand side is isomorphic to
$\displaystyle\left(\mathbb{Z}/d\mathbb{Z}\otimes_{\mathcal{R}}\mathcal{R}\underline{\mathbf{P}_{2}}\right)/\ker\tilde{\partial}_{1}$
$\displaystyle\simeq\left(\left(\mathbb{Z}/d\mathbb{Z}\otimes_{\mathcal{R}}\mathcal{R}\underline{\mathbf{P}_{2}}\right)/\operatorname{im}\tilde{\partial}_{2}\right)\Big/\left(\ker\tilde{\partial}_{1}/\operatorname{im}\tilde{\partial}_{2}\right)$
$\displaystyle\simeq\left(\left(\mathbb{Z}/d\mathbb{Z}\otimes_{\mathcal{R}}\mathcal{R}\underline{\mathbf{P}_{2}}\right)/\operatorname{im}\tilde{\partial}_{2}\right)\Big/H_{2}(\Sigma,R).$
Thus, we obtain the following exact sequence by Proposition 7:
$0\rightarrow
H_{2}(\Sigma,R)\rightarrow\left(\mathbb{Z}/d\mathbb{Z}\otimes_{\mathcal{R}}\mathcal{R}\underline{\mathbf{P}_{2}}\right)/\operatorname{im}\tilde{\partial}_{2}\rightarrow\operatorname{im}\tilde{\partial}_{1}\rightarrow
0.$
By Theorem 5, since
$\operatorname{im}\tilde{\partial}_{1}\subset\mathbb{Z}/d\mathbb{Z}\otimes_{\mathcal{R}}\mathcal{R}\underline{\mathbf{P}_{1}}\cong(\mathbb{Z}/d\mathbb{Z})\underline{\Sigma}$
and $(\mathbb{Z}/d\mathbb{Z})\underline{\Sigma}$ is a free
$\mathbb{Z}/d\mathbb{Z}$-module, $\operatorname{im}\tilde{\partial}_{1}$ is
also free and by Proposition 8, we have
$\left(\mathbb{Z}/d\mathbb{Z}\otimes_{\mathcal{R}}\mathcal{R}\underline{\mathbf{P}_{2}}\right)/\operatorname{im}\tilde{\partial}_{2}\cong
H_{2}(\Sigma,R)\times\operatorname{im}\tilde{\partial}_{1}$. Therefore,
$s\left(\left(\mathbb{Z}/d\mathbb{Z}\otimes_{\mathcal{R}}\mathcal{R}\underline{\mathbf{P}_{2}}\right)/\operatorname{im}\tilde{\partial}_{2}\right)=s(H_{2}(\Sigma,R))+s(\operatorname{im}\tilde{\partial}_{1})$.
Since
$\left(\mathbb{Z}/d\mathbb{Z}\otimes_{\mathcal{R}}\mathcal{R}\underline{\mathbf{P}_{2}}\right)/\operatorname{im}\tilde{\partial}_{2}$ is generated by
$[l_{1}\rightarrow r_{1}],\dots,[l_{k}\rightarrow r_{k}]$ if
$R=\\{l_{1}\rightarrow r_{1},\dots,l_{k}\rightarrow r_{k}\\}$, we obtain
$k=\\#R\geq
s\left(\left(\mathbb{Z}/d\mathbb{Z}\otimes_{\mathcal{R}}\mathcal{R}\underline{\mathbf{P}_{2}}\right)/\operatorname{im}\tilde{\partial}_{2}\right)=s(H_{2}(\Sigma,R))+s(\operatorname{im}\tilde{\partial}_{1}).$
Thus, we get (7).
Now, we prove our main theorem, Theorem 1.
###### Proof 5.4 (Proof of Theorem 1).
As we stated, $H_{2}(\Sigma,R)$ depends only on the Tietze equivalence class
of $(\Sigma,R)$ and particularly, $H_{2}(\Sigma,R^{\prime})$ is isomorphic to
$H_{2}(\Sigma,R)$ if $R^{\prime}$ is equivalent to $R$ (in the sense
${\xleftrightarrow{*}_{R}}={\xleftrightarrow{*}_{R^{\prime}}}$). Let us show
$s(\operatorname{im}\tilde{\partial}_{1})$ depends only on the equivalence
class of $R$. For a left $\mathfrak{R}$-module $M$, $\operatorname{rank}(M)$
denotes the cardinality of a minimal linearly independent generating set of
$M$, that is, a generating set $S$ of $M$ such that
$r_{1}s_{1}+\dotsb+r_{k}s_{k}=0\implies r_{1}=\dotsb=r_{k}=0$ for any
$r_{1},\dotsc,r_{k}\in\mathfrak{R}$ and $s_{1},\dotsc,s_{k}\in S$. It can be
shown that $\operatorname{rank}(M)=s(M)$ if $M$ is free. In particular,
$s(\operatorname{im}\tilde{\partial}_{1})=\operatorname{rank}(\operatorname{im}\tilde{\partial}_{1})$
if $\deg(R)=0$, since $\operatorname{im}\tilde{\partial}_{1}$ is then a
submodule of the free $\mathbb{Z}$-module $\mathbb{Z}\underline{\Sigma}$ and
hence free. Also,
$\operatorname{rank}(\operatorname{im}\tilde{\partial}_{1})=\operatorname{rank}(\ker\tilde{\partial}_{0})-\operatorname{rank}(\ker\tilde{\partial}_{0}/\operatorname{im}\tilde{\partial}_{1})$
is obtained by a general theorem [14, Ch 10, Lemma 10.1]. By definition,
$\tilde{\partial}_{0}$ does not depend on $R$. Since
$\ker\tilde{\partial}_{0}/\operatorname{im}\tilde{\partial}_{1}=H_{1}(\Sigma,R)$
depends only on the Tietze equivalence class of $R$, two sets of rules
$R,R^{\prime}$ with
${\xleftrightarrow{*}_{R}}={\xleftrightarrow{*}_{R^{\prime}}}$ give the same
$\operatorname{rank}(\operatorname{im}\tilde{\partial}_{1})$.
In conclusion, for any TRS $R^{\prime}$ equivalent to $R$, we obtain
$\\#R^{\prime}\geq
s(H_{2}(\Sigma,R))+s(\operatorname{im}\tilde{\partial}_{1})=\\#R-e(R)$.
## 6\. Prime Critical Pairs in a Homological Perspective
Let $(\Sigma,R)$ be a complete TRS. It is known that in confluence tests (and
then in Knuth-Bendix completion), it suffices to consider only prime critical
pairs [6]. (A critical pair $r_{1}\sigma\leftarrow
l_{1}\sigma=C[l_{2}\sigma]\rightarrow C[r_{2}\sigma]$ is prime if no proper
subterm of $l_{2}\sigma$ is reducible by $R$.) We have defined the matrix
$D(R)$ using the critical pairs of $R$, but in fact, we can restrict the
critical pairs to the prime ones and obtain the same $e(R)$. In other words,
we have the following theorem.
###### Theorem 14.
In the matrix $D(R)$, all columns corresponding to critical pairs that are not
prime can be transformed into the zero vectors by elementary column
operations.
###### Proof 6.1.
For any terms $t,s$ and position $p\in\operatorname{Pos}(t)$, we write
$t[s]_{p}$ for the term obtained from $t$ by replacing the subterm at $p$ with
$s$. Suppose that $R=\\{l_{1}\rightarrow r_{1},\dots,l_{n}\rightarrow
r_{n}\\}$ and that there is a non-prime critical pair
$(r_{i}\sigma,l_{i}\sigma[r_{j}\sigma]_{p})$ for some position
$p\in\operatorname{Pos}(l_{i})$. Then, there is a rule $l_{k}\rightarrow
r_{k}$ such that $l_{k}$ matches $(l_{j}\sigma)|_{p^{\prime}}$ for some position
$p^{\prime}\in\operatorname{Pos}(l_{j}\sigma)$. So, we have the following
three paths.
[Diagram: three rewriting paths starting from $l_{i}\sigma=l_{i}\sigma[l_{j}\sigma]_{p}=l_{i}\sigma[l_{j}\sigma[l_{k}\sigma]_{p^{\prime}}]_{p}$, whose first steps apply $l_{i}\rightarrow r_{i}$, $l_{j}\rightarrow r_{j}$, and $l_{k}\rightarrow r_{k}$ respectively, all eventually rewriting to a common term $t$.]
(8)
We show that the column for $(r_{i}\sigma,l_{i}\sigma[r_{j}\sigma]_{p})$ in
$D(R)$ can be transformed into the zero vector by elementary column
operations. Let $\operatorname{Pos}_{\mathcal{F}}(s)$ be the set
$\operatorname{Pos}(s)\setminus\\{\text{variable positions in $s$}\\}$ for a
term $s$. Consider the following four cases: (1)
$p^{\prime}\in\operatorname{Pos}_{\mathcal{F}}(l_{j})$ and
$pp^{\prime}\in\operatorname{Pos}_{\mathcal{F}}(l_{i})$, (2)
$p^{\prime}\in\operatorname{Pos}_{\mathcal{F}}(l_{j})$ and
$pp^{\prime}\notin\operatorname{Pos}_{\mathcal{F}}(l_{i})$, (3)
$p^{\prime}\notin\operatorname{Pos}_{\mathcal{F}}(l_{j})$ and
$pp^{\prime}\in\operatorname{Pos}_{\mathcal{F}}(l_{i})$, (4)
$p^{\prime}\notin\operatorname{Pos}_{\mathcal{F}}(l_{j})$ and
$pp^{\prime}\notin\operatorname{Pos}_{\mathcal{F}}(l_{i})$.
1. (1)
Case where $p^{\prime}\in\operatorname{Pos}_{\mathcal{F}}(l_{j})$ and
$pp^{\prime}\in\operatorname{Pos}_{\mathcal{F}}(l_{i})$: In this case, for
some substitutions $\sigma^{\prime},\sigma^{\prime\prime}$ more general than
$\sigma$, the pairs
$P_{j,k}=(r_{j}\sigma^{\prime},l_{j}\sigma^{\prime}[r_{k}\sigma^{\prime}]_{p^{\prime}})$
and
$P_{i,k}=(r_{i}\sigma^{\prime\prime},l_{i}\sigma^{\prime\prime}[l_{j}\sigma^{\prime\prime}[r_{k}\sigma^{\prime\prime}]_{p^{\prime}}]_{p})$
are critical. Let $P_{i,j}=(r_{i}\sigma,l_{i}\sigma[r_{j}\sigma]_{p})$. Also,
we write $v_{i,m},v_{j,m},v_{k,m}$ for the numbers of times $l_{m}\rightarrow r_{m}$
appears in $l_{i}\sigma\rightarrow r_{i}\sigma\rightarrow\dots\rightarrow t$,
$l_{i}\sigma[l_{j}\sigma]_{p}\rightarrow
l_{i}\sigma[r_{j}\sigma]_{p}\rightarrow\dots\rightarrow t$,
$l_{i}\sigma[l_{j}\sigma[l_{k}\sigma]_{p^{\prime}}]_{p}\rightarrow
l_{i}\sigma[l_{j}\sigma[r_{k}\sigma]_{p^{\prime}}]_{p}\rightarrow\dots\rightarrow
t$, respectively. Then, the column of the matrix $D(R)$ for the critical pair
$P_{\alpha,\beta}$ ($\alpha,\beta\in\\{i,j,k\\}$) can be given by
$V_{\alpha,\beta}:=(v_{\alpha,1}-v_{\beta,1}~{}v_{\alpha,2}-v_{\beta,2}~{}\dots~{}v_{\alpha,n}-v_{\beta,n})^{T}$.
So, we have $V_{i,j}=V_{i,k}-V_{j,k}$ and this means that the column for
$P_{i,j}$ can be transformed into the zero vector by an elementary column
operation.
2. (2)
Case where $p^{\prime}\in\operatorname{Pos}_{\mathcal{F}}(l_{j})$ and
$pp^{\prime}\notin\operatorname{Pos}_{\mathcal{F}}(l_{i})$: In this case,
$(r_{j}\sigma^{\prime},l_{j}\sigma^{\prime}[r_{k}\sigma^{\prime}]_{p^{\prime}})$
is a critical pair for some $\sigma^{\prime}$ and $l_{k}\rightarrow r_{k}$ can
rewrite $x_{a}\sigma$ for some variable $x_{a}$ in $l_{i}$ into some term $s$.
Then, we have the two paths:
[Diagram: two rewriting paths from $l_{i}\sigma$ to $r_{i}\sigma^{\prime\prime}$: one applies $l_{k}\rightarrow r_{k}$ repeatedly ($\\#_{a}l_{i}$ times) to reach $l_{i}\sigma^{\prime\prime}$ and then applies $l_{i}\rightarrow r_{i}$; the other applies $l_{i}\rightarrow r_{i}$ first, giving $r_{i}\sigma$, and then applies $l_{k}\rightarrow r_{k}$ repeatedly ($\\#_{a}r_{i}$ times).]
where $\sigma^{\prime\prime}$ is the substitution that is the same as $\sigma$
but $x_{a}\sigma^{\prime\prime}=s$. Then, $v_{i,m}-v_{k,m}=0$ for any $m\neq
k$ and $v_{i,k}-v_{k,k}=\\#_{a}l_{i}-\\#_{a}r_{i}=0\mod\deg(R)$. Therefore, we
have $V_{i,j}=-V_{j,k}\mod\deg(R)$.
3. (3)
Case where $p^{\prime}\notin\operatorname{Pos}_{\mathcal{F}}(l_{j})$ and
$pp^{\prime}\in\operatorname{Pos}_{\mathcal{F}}(l_{i})$: We can show
$V_{i,j}=V_{i,k}$ in a way similar to (2).
4. (4)
Case where $p^{\prime}\notin\operatorname{Pos}_{\mathcal{F}}(l_{j})$ and
$pp^{\prime}\notin\operatorname{Pos}_{\mathcal{F}}(l_{i})$: Also in a similar
way, we can see that $V_{i,j}$ is the zero vector.
Thus, we can remove the column for $P_{i,j}$ in $D(R)$. Repeating this
process for all non-prime pairs, we obtain the desired result.
For case (1) in the proof, we can actually remove the column for $P_{i,k}$ or
$P_{j,k}$ instead of $P_{i,j}$ by symmetry. We shall call a triple of critical
pairs like (8) a _critical triple_. Recall that we mentioned $D(R)$ is a
matrix presentation of
$\tilde{\partial}_{2}=\mathbb{Z}/d\mathbb{Z}\otimes\partial_{2}:\mathbb{Z}/d\mathbb{Z}\otimes_{\mathcal{R}}\mathcal{R}\underline{\mathbf{P}_{3}}\rightarrow\mathbb{Z}/d\mathbb{Z}\otimes_{\mathcal{R}}\mathcal{R}\underline{\mathbf{P}_{2}}$
in Section 5. If we define $\mathbf{P}_{4}$ to be a collection of critical
triples and
$\tilde{\partial}_{3}:\mathbb{Z}/d\mathbb{Z}\otimes_{\mathcal{R}}\mathcal{R}\underline{\mathbf{P}_{4}}\rightarrow\mathbb{Z}/d\mathbb{Z}\otimes_{\mathcal{R}}\mathcal{R}\underline{\mathbf{P}_{3}}$
to be
$\tilde{\partial}_{3}(\underline{(P_{i,j},P_{i,k},P_{j,k})})=\underline{P_{i,j}}-\underline{P_{i,k}}+\underline{P_{j,k}}$,
then we have
$\tilde{\partial}_{2}\circ\tilde{\partial}_{3}(\underline{(P_{i,j},P_{i,k},P_{j,k})})=\tilde{\partial}_{2}(\underline{P_{i,j}}-\underline{P_{i,k}}+\underline{P_{j,k}})=0$.
(This corresponds to $V_{i,j}=V_{i,k}-V_{j,k}$.) Therefore
$\ker\tilde{\partial}_{2}\supset\operatorname{im}\tilde{\partial}_{3}$ holds,
so we can extend our chain complex (6):
$\mathbb{Z}/d\mathbb{Z}\otimes_{\mathcal{R}}\mathcal{R}\underline{\mathbf{P}_{4}}\xrightarrow{\tilde{\partial}_{3}}\mathbb{Z}/d\mathbb{Z}\otimes_{\mathcal{R}}\mathcal{R}\underline{\mathbf{P}_{3}}\xrightarrow{\tilde{\partial}_{2}}\mathbb{Z}/d\mathbb{Z}\otimes_{\mathcal{R}}\mathcal{R}\underline{\mathbf{P}_{2}}\xrightarrow{\tilde{\partial}_{1}}\mathbb{Z}/d\mathbb{Z}\otimes_{\mathcal{R}}\mathcal{R}\underline{\mathbf{P}_{1}}\xrightarrow{\tilde{\partial}_{0}}\mathbb{Z}/d\mathbb{Z}\otimes_{\mathcal{R}}\mathcal{R}\underline{\mathbf{P}_{0}}.$
Note that since we have not defined
$\partial_{3}:\mathcal{R}\underline{\mathbf{P}_{4}}\to\mathcal{R}\underline{\mathbf{P}_{3}}$,
the third homology $H_{3}$ is meaningless unless we define such a $\partial_{3}$
extending resolution (5) and show
$\tilde{\partial}_{3}=\mathbb{Z}/d\mathbb{Z}\otimes\partial_{3}$. However, this
suggests that the next term of our partial resolution could be generated by
critical triples.
## 7\. Deficiency and Computability
We consider the case where every symbol in $\Sigma$ is of arity 1. Notice that
such a TRS $(\Sigma,R)$ can be seen as an SRS and $\deg(R)=0$ in this case. We
have $\operatorname{rank}(\ker\tilde{\partial}_{0})=\\#\Sigma$ since
$\tilde{\partial}_{0}(\underline{f})=0$ for any $f\in\Sigma$. Therefore, (7)
can be rewritten to
$\\#R-\\#\Sigma\geq s(H_{2}(\Sigma,R))-\operatorname{rank}(H_{1}(\Sigma,R)).$
(9)
So, for SRSs, we have a lower bound of the difference between the number of
rewrite rules and the number of symbols. For groups, in fact, this inequality
is proved in terms of group homology [3] without using the notion of a
rewriting system. In group theory, a group presentation $(\Sigma,R)$ is called
_efficient_ if the equality of (9) holds and a group is called efficient if it
has an efficient presentation. It is known that inefficient groups exist [18].
Let us move back to the case of general TRSs. We have already seen that there
exists a TRS such that none of its equivalent TRSs satisfies the equality of
(7) in the last paragraph of Section 2. The _deficiency_ of (the equivalence
class of) a TRS $(\Sigma,R)$, denoted by
$\operatorname{def}\langle\Sigma,R\rangle$, is the minimum of
$\\#R^{\prime}-\\#\Sigma^{\prime}$ over all TRSs
$(\Sigma^{\prime},R^{\prime})$ Tietze equivalent to $(\Sigma,R)$. We pose the
problem of deciding inequalities on the deficiency of TRSs, and show that its
undecidability follows from powerful facts of group theory.
###### Problem 15.
Given an integer $n$ and a TRS $(\Sigma,R)$, does
$\operatorname{def}\langle\Sigma,R\rangle\leq n$ hold?
We will prove that Problem 15 is undecidable. It suffices to restrict the
problem to the case where $n$ is negative and $(\Sigma,R)$ presents a group
of finite index, that is, $(\Sigma,R)$ is an SRS and
$\mathcal{M}_{(\Sigma,R)}=\Sigma^{*}/{\xleftrightarrow{*}_{R}}$ forms a group
of finite index.
###### Problem 16.
Given a negative integer $n$ and an SRS $(\Sigma,R)$ whose corresponding
monoid $\mathcal{M}_{(\Sigma,R)}$ forms a group of finite index, does
$\operatorname{def}\langle\Sigma,R\rangle\leq n$ hold?
###### Theorem 17.
Problem 16 is undecidable, and hence so is Problem 15.
To prove the theorem, we will apply one of the most useful tools on
computability in group theory, the Adian-Rabin theorem, which states that
every “Markov property” is undecidable. Let $P$ be a property of finitely presented
groups which is preserved under group isomorphism. The property $P$ is said to
be a _Markov property_ if
1. (1)
there exists a finitely presented group $G_{+}$ with $P$, and
2. (2)
there exists a finitely presented group $G_{-}$ such that there is no
injective group homomorphism from $G_{-}$ to a finitely presented group $G$
with $P$.
The condition (2) holds, in particular, when there exists a finitely
presented group $G_{-}$ which does not have $P$ and whenever a finitely
generated group $G$ has $P$, all subgroups of $G$ also have $P$.
###### Theorem 18.
[1][12][8, Theorem 4.1] Markov properties are undecidable.
###### Proof 7.1 (Proof of Theorem 17).
Let $n$ be a negative integer. If a group $G$ is presented by $(\Sigma,R)$, we
write $\operatorname{def}G$ for $\operatorname{def}\langle\Sigma,R\rangle$. We
show that $\operatorname{def}G\leq n$ is a Markov property. Since the free
group $F_{k}$ satisfies $\operatorname{def}F_{k}=0-k=-k$ for any $k\geq 0$,
there always exist $G_{+}$ with $\operatorname{def}G_{+}\leq n$ and $G_{-}$
with $\operatorname{def}G_{-}>n$. Therefore it is enough to show that for any
finitely presented group $G$ and any subgroup $H$ of $G$ of finite index,
$\operatorname{def}G\leq n$ implies $\operatorname{def}H\leq n$. Let $G$ be a
finitely presented group
with $\operatorname{def}G\leq n$ and $H$ be a finite-index subgroup of $G$. Given a finite
presentation $(\Sigma,R)$ of $G$, it is known that we can construct a
presentation $(\Sigma^{\prime},R^{\prime})$ of $H$ satisfying
$\\#R^{\prime}-\\#\Sigma^{\prime}+1=[G:H](\\#R-\\#\Sigma+1)$ where $[G:H]$ is
the index of $H$ in $G$. (See [8, Ch. II. Proposition 4.1], for example.) The
way of construction is known as _Reidemeister-Schreier method_. Thus, we have
$\operatorname{def}H+1\leq[G:H](\operatorname{def}G+1)\leq\operatorname{def}G+1\leq
n+1.$
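As a sanity check of the Reidemeister-Schreier count (our example, not from the paper), take $G=F_{k}$ and a subgroup $H$ of index $m$: by the Nielsen-Schreier theorem, $H$ is free of rank $m(k-1)+1$, so
$\\#R^{\prime}-\\#\Sigma^{\prime}+1=0-(m(k-1)+1)+1=m(0-k+1)=[G:H](\\#R-\\#\Sigma+1).$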
## 8\. Conclusions
We have seen that the number of rewrite rules is bounded below by a computable
number defined using homology groups of TRSs. The computation is by simple
term rewriting and matrix transformation. The fact that the theory of groups
must have at least two equational axioms is proved as a corollary. We have
also shown that deciding $\operatorname{def}\langle\Sigma,R\rangle\leq n$ is
undecidable.
## References
* [1] S. I. Adian. Algorithmic unsolvability of problem of recognition of certain properties of groups. Dokl. Akad. Nauk SSSR, pages 533–535, 1955.
* [2] R. V. Book and F. Otto. String-Rewriting Systems. Springer-Verlag, Berlin, Heidelberg, 1993.
* [3] D. Epstein. Finite presentations of groups and 3-manifolds. The Quarterly Journal of Mathematics, 12(1):205–212, 1961.
* [4] G. Higman and B. H. Neumann. Groups as groupoids with one law. Publicationes Mathematicae Debrecen, 228(2):215–227, 1952.
* [5] I. Wehrman, A. Stump, and E. Westbrook. Slothrop: Knuth-Bendix completion with a modern termination checker. In Proc. RTA 2006, volume 4098 of LNCS, 2006.
* [6] D. Kapur, D. R. Musser, and P. Narendran. Only prime superpositions need be considered in the Knuth-Bendix completion procedure. Journal of Symbolic Computation, 6(1):19–36, 1988.
* [7] K. Kunen. Single axioms for groups. Journal of Automated Reasoning, 9(3):291–308, 1992.
* [8] R. C. Lyndon and P. E. Schupp. Combinatorial Group Theory. Springer-Verlag Berlin Heidelberg, 2001.
* [9] P. Malbos and S. Mimram. Homological computations for term rewriting systems. In 1st International Conference on Formal Structures for Computation and Deduction (FSCD 2016), volume 52 of Leibniz International Proceedings in Informatics (LIPIcs), pages 27:1–27:17, Dagstuhl, Germany, 2016. Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik.
* [10] B. Mitchell. Rings with several objects. Advances in Mathematics, 8(1):1–161, 1972.
* [11] B. H. Neumann. Another single law for groups. Bulletin of the Australian Mathematical Society, 23(1):81–102, 1981.
* [12] M. O. Rabin. Recursive unsolvability of group theoretic problems. Annals of Mathematics, 67(1):172–194, 1958.
* [13] J. J. Rotman. An Introduction to Homological Algebra. Springer-Verlag New York, 2009.
* [14] J. J. Rotman. Advanced Modern Algebra, volume 114. American Mathematical Soc., 2010.
* [15] H. Sato, S. Winkler, M. Kurihara, and A. Middeldorp. Multi-completion with termination tools (system description). In Proc. 4th IJCAR, volume 5195 of LNAI, pages 306–312, 2008.
* [16] C. C. Squier. Word problems and a homological finiteness condition for monoids. Journal of Pure and Applied Algebra, 49(1-2):201–217, 1987.
* [17] J. Steinbach and U. Kühler. Check your ordering - termination proofs and problems. Technical Report R-90-25, Universität Kaiserslautern, 1990.
* [18] R. G. Swan. Minimal resolutions for finite groups. Topology, 4(2):193–208, 1965.
* [19] A. Tarski. Equational logic and equational theories of algebras. In Contributions to Mathematical Logic, volume 50 of Studies in Logic and the Foundations of Mathematics, pages 275–288. Elsevier, 1968.
* [20] H. Tietze. Über die topologischen Invarianten mehrdimensionaler Mannigfaltigkeiten. Monatshefte für Mathematik und Physik, 19(1):1–118, 1908.
* [21] V. A. Ufnarovskij. Combinatorial and asymptotic methods in algebra. 1995.
* [22] I. Wehrman and A. Stump. Mining propositional simplification proofs for small validating clauses. Electronic Notes in Theoretical Computer Science, 144(2):79–91, 2006. Proceedings of the Third Workshop on Pragmatics of Decision Procedures in Automated Reasoning (PDPAR 2005).
## Appendix A The matrix $D(R)$ for The Theory of Groups
For the TRS $R$ defined in Example 2, $D(R)$ is given by the transpose of
$\left(\begin{array}[]{cccccccccc}1&0&0&0&0&0&0&0&0&0\\\
0&0&0&0&0&0&0&0&0&0\\\ 0&0&0&0&0&0&0&0&0&0\\\ 0&0&0&0&0&0&0&0&1&1\\\
0&0&0&0&0&0&0&0&0&0\\\ 0&0&0&0&0&1&0&0&0&1\\\ 1&1&0&0&1&1&0&0&0&0\\\
1&1&0&1&0&0&0&0&1&0\\\ 1&0&0&0&0&0&0&0&1&1\\\ 1&1&1&0&0&0&0&0&0&0\\\
1&0&0&0&0&0&0&0&0&0\\\ 1&0&0&0&0&0&0&0&0&0\\\ 0&1&1&0&0&0&1&0&0&1\\\
0&0&0&0&0&0&1&0&1&0\\\ 0&0&0&0&0&1&1&0&0&0\\\ 0&1&0&1&0&0&1&0&0&0\\\
0&1&1&0&0&0&0&0&0&0\\\ 0&1&1&0&0&0&1&0&0&1\\\ 0&0&1&1&0&0&0&0&1&0\\\
0&0&1&0&1&1&0&0&0&0\\\ 0&0&1&0&1&0&1&0&0&0\\\ 1&0&0&0&0&0&0&0&1&1\\\
0&0&0&1&1&0&1&0&0&1\\\ 0&0&1&1&0&0&0&1&1&0\\\ 0&0&0&1&1&0&0&1&0&0\\\
0&1&0&1&0&0&1&0&0&0\\\ 0&0&1&1&0&1&0&0&0&0\\\ 1&0&0&0&0&1&0&0&0&1\\\
0&0&0&1&1&0&1&0&0&1\\\ 0&0&1&0&1&0&0&0&1&0\\\ 0&0&0&1&1&0&0&1&0&0\\\
0&1&0&0&1&0&1&0&0&0\\\ 0&0&1&0&1&1&0&1&0&0\\\ 0&0&0&0&0&0&0&1&0&0\\\
0&0&0&0&0&1&0&0&0&1\\\ 1&0&1&0&1&1&0&1&0&0\\\ 0&0&0&0&0&1&0&0&1&0\\\
0&0&0&0&0&1&0&0&1&0\\\ 0&0&0&0&0&1&0&1&1&0\\\ 0&0&0&0&0&1&1&0&0&0\\\
0&0&0&0&0&0&1&0&1&0\\\ 0&0&0&0&0&0&0&1&0&0\\\ 0&0&0&0&0&0&0&0&0&0\\\
0&0&0&0&0&0&0&1&0&0\\\ 0&0&0&0&0&1&0&1&1&0\\\ 0&0&0&0&0&0&0&1&0&0\\\
0&0&0&0&0&0&0&0&1&1\\\ 1&0&1&0&1&0&0&0&1&0\\\ \end{array}\right)$
where the $i$-th column corresponds to the rule $G_{i}$, and the $j$-th row
corresponds to the critical pair $C_{j}$ shown in the next two pages.
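As a hedged usage note (ours; it relies on the computational sketch given in Section 5 and on $\deg(R)=2$, the degree reported for the group systems in Table 2): reading the transpose of the array above into a $10\times 48$ integer matrix `D`, the call `lower_bound(D, 2, 10)` would compute the bound $\\#R-e(R)$ for this TRS.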
$\displaystyle C_{1}:\ $ $\displaystyle m(m(x_{1},x_{2}),x_{3})\rightarrow
m(x_{1},m(x_{2},x_{3})),\quad m(m(x_{4},x_{5}),x_{6})\rightarrow
m(x_{4},m(x_{5},x_{6})),\quad m(\square,x_{3}),$ $\displaystyle\\{x_{6}\mapsto
x_{2},x_{1}\mapsto m(x_{4},x_{5})\\}$ $\displaystyle C_{2}:\ $ $\displaystyle
i(m(x_{1},x_{2}))\rightarrow m(i(x_{2}),i(x_{1})),\quad
m(m(x_{3},x_{4}),x_{5})\rightarrow m(x_{3},m(x_{4},x_{5})),\quad i(\square),$
$\displaystyle\\{x_{5}\mapsto x_{2},x_{1}\mapsto m(x_{3},x_{4})\\}$
$\displaystyle C_{3}:\ $ $\displaystyle m(m(x_{1},x_{2}),x_{3})\rightarrow
m(x_{1},m(x_{2},x_{3})),\quad m(x_{4},m(i(x_{4}),x_{5}))\rightarrow
x_{5},\quad m(\square,x_{3}),$ $\displaystyle\\{x_{2}\mapsto
m(i(x_{1}),x_{5}),x_{4}\mapsto x_{1}\\}$ $\displaystyle C_{4}:\ $
$\displaystyle m(x_{1},m(i(x_{1}),x_{2}))\rightarrow x_{2},\quad
m(m(x_{3},x_{4}),x_{5})\rightarrow m(x_{3},m(x_{4},x_{5})),\quad\square,$
$\displaystyle\\{x_{5}\mapsto m(i(m(x_{3},x_{4})),x_{2}),x_{1}\mapsto
m(x_{3},x_{4})\\}$ $\displaystyle C_{5}:\ $ $\displaystyle
m(m(x_{1},x_{2}),x_{3})\rightarrow m(x_{1},m(x_{2},x_{3})),\quad
m(i(x_{4}),m(x_{4},x_{5}))\rightarrow x_{5},\quad m(\square,x_{3}),$
$\displaystyle\\{x_{2}\mapsto m(x_{4},x_{5}),x_{1}\mapsto i(x_{4})\\}$
$\displaystyle C_{6}:\ $ $\displaystyle m(i(x_{1}),m(x_{1},x_{2}))\rightarrow
x_{2},\quad m(m(x_{3},x_{4}),x_{5})\rightarrow m(x_{3},m(x_{4},x_{5})),\quad
m(i(x_{1}),\square),$ $\displaystyle\\{x_{5}\mapsto x_{2},x_{1}\mapsto
m(x_{3},x_{4})\\}$ $\displaystyle C_{7}:\ $ $\displaystyle
m(m(x_{1},x_{2}),x_{3})\rightarrow m(x_{1},m(x_{2},x_{3})),\quad
m(i(x_{4}),x_{4})\rightarrow e,\quad m(\square,x_{3}),$
$\displaystyle\\{x_{4}\mapsto x_{2},x_{1}\mapsto i(x_{2})\\}$ $\displaystyle
C_{8}:\ $ $\displaystyle m(m(x_{1},x_{2}),x_{3})\rightarrow
m(x_{1},m(x_{2},x_{3})),\quad m(x_{4},i(x_{4}))\rightarrow e,\quad
m(\square,x_{3}),$ $\displaystyle\\{x_{2}\mapsto i(x_{1}),x_{4}\mapsto
x_{1}\\}$ $\displaystyle C_{9}:\ $ $\displaystyle m(x_{1},i(x_{1}))\rightarrow
e,\quad m(m(x_{2},x_{3}),x_{4})\rightarrow
m(x_{2},m(x_{3},x_{4})),\quad\square,$ $\displaystyle\\{x_{4}\mapsto
i(m(x_{2},x_{3})),x_{1}\mapsto m(x_{2},x_{3})\\}$ $\displaystyle C_{10}:\ $
$\displaystyle m(m(x_{1},x_{2}),x_{3})\rightarrow
m(x_{1},m(x_{2},x_{3})),\quad m(x_{4},e)\rightarrow x_{4},\quad
m(\square,x_{3}),\quad\\{x_{2}\mapsto e,x_{4}\mapsto x_{1}\\}$ $\displaystyle
C_{11}:\ $ $\displaystyle m(x_{1},e)\rightarrow x_{1},\quad
m(m(x_{2},x_{3}),x_{4})\rightarrow
m(x_{2},m(x_{3},x_{4})),\quad\square,\quad\\{x_{4}\mapsto e,x_{1}\mapsto
m(x_{2},x_{3})\\}$ $\displaystyle C_{12}:\ $ $\displaystyle
m(m(x_{1},x_{2}),x_{3})\rightarrow m(x_{1},m(x_{2},x_{3})),\quad
m(e,x_{4})\rightarrow x_{4},\quad m(\square,x_{3}),\quad\\{x_{4}\mapsto
x_{2},x_{1}\mapsto e\\}$ $\displaystyle C_{13}:\ $ $\displaystyle
i(m(x_{1},x_{2}))\rightarrow m(i(x_{2}),i(x_{1})),\quad m(e,x_{3})\rightarrow
x_{3},\quad i(\square),\quad\\{x_{3}\mapsto x_{2},x_{1}\mapsto e\\}$
$\displaystyle C_{14}:\ $ $\displaystyle m(x_{1},m(i(x_{1}),x_{2}))\rightarrow
x_{2},\quad m(e,x_{3})\rightarrow x_{3},\quad\square,\quad\\{x_{3}\mapsto
m(i(e),x_{2}),x_{1}\mapsto e\\}$ $\displaystyle C_{15}:\ $ $\displaystyle
m(i(x_{1}),m(x_{1},x_{2}))\rightarrow x_{2},\quad m(e,x_{3})\rightarrow
x_{3},\quad m(i(x_{1}),\square),\quad\\{x_{3}\mapsto x_{2},x_{1}\mapsto e\\}$
$\displaystyle C_{16}:\ $ $\displaystyle m(x_{1},i(x_{1}))\rightarrow e,\quad
m(e,x_{2})\rightarrow x_{2},\quad\square,\quad\\{x_{2}\mapsto
i(e),x_{1}\mapsto e\\}$ $\displaystyle C_{17}:\ $ $\displaystyle
m(x_{1},e)\rightarrow x_{1},\quad m(e,x_{2})\rightarrow
x_{2},\quad\square,\quad\\{x_{2}\mapsto e,x_{1}\mapsto e\\}$ $\displaystyle
C_{18}:\ $ $\displaystyle i(m(x_{1},x_{2}))\rightarrow
m(i(x_{2}),i(x_{1})),\quad m(x_{3},e)\rightarrow x_{3},\quad
i(\square),\quad\\{x_{2}\mapsto e,x_{3}\mapsto x_{1}\\}$ $\displaystyle
C_{19}:\ $ $\displaystyle m(x_{1},m(i(x_{1}),x_{2}))\rightarrow x_{2},\quad
m(x_{3},e)\rightarrow x_{3},\quad m(x_{1},\square),\quad\\{x_{2}\mapsto
e,x_{3}\mapsto i(x_{1})\\}$ $\displaystyle C_{20}:\ $ $\displaystyle
m(i(x_{1}),m(x_{1},x_{2}))\rightarrow x_{2},\quad m(x_{3},e)\rightarrow
x_{3},\quad m(i(x_{1}),\square),\quad\\{x_{2}\mapsto e,x_{3}\mapsto x_{1}\\}$
Figure 2. The critical pairs of the complete TRS $R$ (1)
($C_{j}:\ l\rightarrow r,\ l^{\prime}\rightarrow r^{\prime},\ C,\ \sigma$
means $C_{j}$ is the critical pair $(r\sigma,C[r^{\prime}\sigma])$.)
$\displaystyle C_{21}:\ $ $\displaystyle m(i(x_{1}),x_{1})\rightarrow e,\quad
m(x_{2},e)\rightarrow x_{2},\quad\square,\quad\\{x_{1}\mapsto e,x_{2}\mapsto
i(e)\\}$ $\displaystyle C_{22}:\ $ $\displaystyle m(x_{1},i(x_{1}))\rightarrow
e,\quad i(m(x_{2},x_{3}))\rightarrow m(i(x_{3}),i(x_{2})),\quad
m(x_{1},\square),\quad\\{x_{1}\mapsto m(x_{2},x_{3})\\}$ $\displaystyle
C_{23}:\ $ $\displaystyle i(m(x_{1},x_{2}))\rightarrow
m(i(x_{2}),i(x_{1})),\quad m(x_{3},i(x_{3}))\rightarrow e,\quad
i(\square),\quad\\{x_{2}\mapsto i(x_{1}),x_{3}\mapsto x_{1}\\}$ $\displaystyle
C_{24}:\ $ $\displaystyle m(x_{1},m(i(x_{1}),x_{2}))\rightarrow x_{2},\quad
m(x_{3},i(x_{3}))\rightarrow e,\quad m(x_{1},\square),\quad\\{x_{2}\mapsto
i(i(x_{1})),x_{3}\mapsto i(x_{1})\\}$ $\displaystyle C_{25}:\ $ $\displaystyle
m(x_{1},i(x_{1}))\rightarrow e,\quad i(i(x_{2}))\rightarrow x_{2},\quad
m(x_{1},\square),\quad\\{x_{1}\mapsto i(x_{2})\\}$ $\displaystyle C_{26}:\ $
$\displaystyle m(x_{1},i(x_{1}))\rightarrow e,\quad i(e)\rightarrow e,\quad
m(x_{1},\square),\quad\\{x_{1}\mapsto e\\}$ $\displaystyle C_{27}:\ $
$\displaystyle m(i(x_{1}),m(x_{1},x_{2}))\rightarrow x_{2},\quad
m(x_{3},i(x_{3}))\rightarrow e,\quad m(i(x_{1}),\square),\quad\\{x_{2}\mapsto
i(x_{1}),x_{3}\mapsto x_{1}\\}$ $\displaystyle C_{28}:\ $ $\displaystyle
m(i(x_{1}),x_{1})\rightarrow e,\quad i(m(x_{2},x_{3}))\rightarrow
m(i(x_{3}),i(x_{2})),\quad m(\square,x_{1}),\quad\\{x_{1}\mapsto
m(x_{2},x_{3})\\}$ $\displaystyle C_{29}:\ $ $\displaystyle
i(m(x_{1},x_{2}))\rightarrow m(i(x_{2}),i(x_{1})),\quad
m(i(x_{3}),x_{3})\rightarrow e,\quad i(\square),\quad\\{x_{3}\mapsto
x_{2},x_{1}\mapsto i(x_{2})\\}$ $\displaystyle C_{30}:\ $ $\displaystyle
m(x_{1},m(i(x_{1}),x_{2}))\rightarrow x_{2},\quad m(i(x_{3}),x_{3})\rightarrow
e,\quad m(x_{1},\square),\quad\\{x_{1}\mapsto x_{2},x_{3}\mapsto x_{2}\\}$
$\displaystyle C_{31}:\ $ $\displaystyle m(i(x_{1}),x_{1})\rightarrow e,\quad
i(i(x_{2}))\rightarrow x_{2},\quad m(\square,x_{1}),\quad\\{x_{1}\mapsto
i(x_{2})\\}$ $\displaystyle C_{32}:\ $ $\displaystyle
m(i(x_{1}),x_{1})\rightarrow e,\quad i(e)\rightarrow e,\quad
m(\square,x_{1}),\quad\\{x_{1}\mapsto e\\}$ $\displaystyle C_{33}:\ $
$\displaystyle m(i(x_{1}),m(x_{1},x_{2}))\rightarrow x_{2},\quad
m(i(x_{3}),x_{3})\rightarrow e,\quad m(i(x_{1}),\square),\quad\\{x_{3}\mapsto
x_{2},x_{1}\mapsto i(x_{2})\\}$ $\displaystyle C_{34}:\ $ $\displaystyle
m(i(x_{1}),m(x_{1},x_{2}))\rightarrow x_{2},\quad
m(i(x_{3}),m(x_{3},x_{4}))\rightarrow x_{4},\quad
m(i(x_{1}),\square),\quad\\{x_{2}\mapsto m(x_{3},x_{4}),x_{1}\mapsto
i(x_{3})\\}$ $\displaystyle C_{35}:\ $ $\displaystyle
m(i(x_{1}),m(x_{1},x_{2}))\rightarrow x_{2},\quad i(m(x_{3},x_{4}))\rightarrow
m(i(x_{4}),i(x_{3})),\quad m(\square,m(x_{1},x_{2})),\quad\\{x_{1}\mapsto
m(x_{3},x_{4})\\}$ $\displaystyle C_{36}:\ $ $\displaystyle
i(m(x_{1},x_{2}))\rightarrow m(i(x_{2}),i(x_{1})),\quad
m(i(x_{3}),m(x_{3},x_{4}))\rightarrow x_{4},\quad
i(\square),\quad\\{x_{2}\mapsto m(x_{3},x_{4}),x_{1}\mapsto i(x_{3})\\}$
$\displaystyle C_{37}:\ $ $\displaystyle m(i(x_{1}),m(x_{1},x_{2}))\rightarrow
x_{2},\quad m(x_{3},m(i(x_{3}),x_{4}))\rightarrow x_{4},\quad
m(i(x_{1}),\square),\quad\\{x_{2}\mapsto m(i(x_{1}),x_{4}),x_{3}\mapsto
x_{1}\\}$ $\displaystyle C_{38}:\ $ $\displaystyle
m(x_{1},m(i(x_{1}),x_{2}))\rightarrow x_{2},\quad
m(i(x_{3}),m(x_{3},x_{4}))\rightarrow x_{4},\quad
m(x_{1},\square),\quad\\{x_{2}\mapsto m(x_{1},x_{4}),x_{3}\mapsto x_{1}\\}$
$\displaystyle C_{39}:\ $ $\displaystyle m(i(x_{1}),m(x_{1},x_{2}))\rightarrow
x_{2},\quad i(i(x_{3}))\rightarrow x_{3},\quad
m(\square,m(x_{1},x_{2})),\quad\\{x_{1}\mapsto i(x_{3})\\}$ $\displaystyle
C_{40}:\ $ $\displaystyle m(i(x_{1}),m(x_{1},x_{2}))\rightarrow x_{2},\quad
i(e)\rightarrow e,\quad m(\square,m(x_{1},x_{2})),\quad\\{x_{1}\mapsto e\\}$
$\displaystyle C_{41}:\ $ $\displaystyle m(x_{1},m(i(x_{1}),x_{2}))\rightarrow
x_{2},\quad i(e)\rightarrow e,\quad
m(x_{1},m(\square,x_{2})),\quad\\{x_{1}\mapsto e\\}$ $\displaystyle C_{42}:\ $
$\displaystyle i(i(x_{1}))\rightarrow x_{1},\quad i(e)\rightarrow e,\quad
i(\square),\quad\\{x_{1}\mapsto e\\}$ $\displaystyle C_{43}:\ $ $\displaystyle
i(i(x_{1}))\rightarrow x_{1},\quad i(i(x_{2}))\rightarrow x_{2},\quad
i(\square),\quad\\{x_{1}\mapsto i(x_{2})\\}$ $\displaystyle C_{44}:\ $
$\displaystyle i(i(x_{1}))\rightarrow x_{1},\quad i(m(x_{2},x_{3}))\rightarrow
m(i(x_{3}),i(x_{2})),\quad i(\square),\quad\\{x_{1}\mapsto m(x_{2},x_{3})\\}$
$\displaystyle C_{45}:\ $ $\displaystyle m(x_{1},m(i(x_{1}),x_{2}))\rightarrow
x_{2},\quad i(i(x_{3}))\rightarrow x_{3},\quad
m(x_{1},m(\square,x_{2})),\quad\\{x_{1}\mapsto i(x_{3})\\}$ $\displaystyle
C_{46}:\ $ $\displaystyle m(x_{1},m(i(x_{1}),x_{2}))\rightarrow x_{2},\quad
m(x_{3},m(i(x_{3}),x_{4}))\rightarrow x_{4},\quad m(x_{1},\square),$
$\displaystyle\\{x_{2}\mapsto m(i(i(x_{1})),x_{4}),x_{3}\mapsto i(x_{1})\\}$
$\displaystyle C_{47}:\ $ $\displaystyle m(x_{1},m(i(x_{1}),x_{2}))\rightarrow
x_{2},\quad i(m(x_{3},x_{4}))\rightarrow m(i(x_{4}),i(x_{3})),\quad
m(x_{1},m(\square,x_{2})),\quad\\{x_{1}\mapsto m(x_{3},x_{4})\\}$
$\displaystyle C_{48}:\ $ $\displaystyle i(m(x_{1},x_{2}))\rightarrow
m(i(x_{2}),i(x_{1})),\quad m(x_{3},m(i(x_{3}),x_{4}))\rightarrow x_{4},\quad
i(\square),\quad\\{x_{2}\mapsto m(i(x_{1}),x_{4}),x_{3}\mapsto x_{1}\\}$
Figure 3. The critical pairs of the complete TRS $R$ (2)
($C_{j}:\ l\rightarrow r,\ l^{\prime}\rightarrow r^{\prime},\ C,\ \sigma$
means $C_{j}$ is the critical pair $(r\sigma,C[r^{\prime}\sigma])$.)
## Appendix B Experimental Results
We present our experimental data in Table 1 and Table 2. The data set of
complete TRSs is taken from experimental results of MKBtt [15], which include
benchmark problems [17],[22],[5]. The column headed “degree” shows the degree
of the TRS, the column $\\#R_{\textit{before}}$ the number of rules before completion, the
column $\\#R_{\textit{after}}$ the number of rules after completion, the
column $s(H_{2})$ Malbos-Mimram’s lower bound, and the column
$\\#R_{\textit{after}}-e(R)$ our lower bound. The table is also available at
https://mir-ikbch.github.io/homtrs/experiment/result.html which has links to
TRS files.
Table 1. Malbos-Mimram’s and our lower bounds (1)
name | degree | $\\#R_{\mathit{before}}$ | $\\#R_{\mathit{after}}$ | $s(H_{2})$ | $\\#R_{\mathit{after}}-e(R)$
---|---|---|---|---|---
ASK93_1 | 0 | 2 | 2 | 0 | 2
ASK93_6 | 0 | 11 | 11 | 0 | 9
BD94_collapse | 1 | 5 | 5 | – | –
BD94_peano | 1 | 4 | 4 | – | –
BD94_sqrt | 2 | 3 | 4 | 0 | 3
BGK94_D08 | 2 | 6 | 21 | 2 | 5
BGK94_D10 | 2 | 6 | 21 | 1 | 4
BGK94_D12 | 2 | 6 | 20 | 2 | 5
BGK94_D16 | 2 | 6 | 20 | 2 | 5
BH96_fac8_theory | 1 | 6 | 6 | – | –
Chr89_A2 | 2 | 5 | 18 | 0 | 4
Chr89_A3 | 2 | 7 | 16 | 0 | 6
KK99_linear_assoc | 0 | 2 | 2 | 0 | 1
LS94_G0 | 2 | 8 | 13 | 1 | 4
Les83_fib | 1 | 9 | 9 | – | –
Les83_subset | 1 | 12 | 12 | – | –
OKW95_dt1_theory | 1 | 11 | 11 | – | –
SK90_3.01 | 2 | 4 | 11 | 0 | 3
SK90_3.02 | 0 | 3 | 3 | 1 | 2
SK90_3.03 | 2 | 5 | 11 | 0 | 3
SK90_3.04 | 1 | 4 | 8 | – | –
SK90_3.05 | 1 | 4 | 13 | – | –
SK90_3.06 | 1 | 5 | 12 | – | –
SK90_3.07 | 1 | 5 | 15 | – | –
SK90_3.08 | 2 | 5 | 4 | 0 | 2
SK90_3.10 | 2 | 4 | 8 | 0 | 3
SK90_3.11 | 0 | 4 | 3 | 0 | 3
SK90_3.12 | 2 | 4 | 9 | 0 | 2
SK90_3.13 | 0 | 6 | 6 | 0 | 3
SK90_3.14 | 0 | 7 | 8 | 1 | 5
SK90_3.15 | 2 | 8 | 7 | 1 | 4
SK90_3.16 | 1 | 4 | 4 | – | –
SK90_3.17 | 1 | 3 | 5 | – | –
Table 2. Malbos-Mimram’s and our lower bounds (2)
name | degree | $\\#R_{\mathit{before}}$ | $\\#R_{\mathit{after}}$ | $s(H_{2})$ | $\\#R_{\mathit{after}}-e(R)$
---|---|---|---|---|---
SK90_3.18 | 0 | 5 | 6 | 2 | 4
SK90_3.19 | 0 | 9 | 7 | 1 | 4
SK90_3.20 | 1 | 10 | 11 | – | –
SK90_3.21 | 1 | 9 | 4 | – | –
SK90_3.23 | 0 | 4 | 8 | 1 | 4
SK90_3.24 | 0 | 3 | 2 | 0 | 2
SK90_3.25 | 0 | 1 | 2 | 0 | 1
SK90_3.27 | 0 | 8 | 3 | 0 | 3
SK90_3.28 | 0 | 9 | 18 | 0 | 6
SK90_3.29 | 0 | 7 | 8 | 2 | 7
SK90_3.30 | 1 | 3 | 3 | – | –
SK90_3.31 | 1 | 3 | 3 | – | –
SK90_3.32 | 1 | 3 | 2 | – | –
SK90_3.33 | 0 | 3 | 3 | 0 | 2
TPTP-BOO027-1_theory | 1 | 5 | 5 | – | –
TPTP-COL053-1_theory | 0 | 1 | 1 | 0 | 1
TPTP-COL056-1_theory | 0 | 3 | 3 | 0 | 3
TPTP-COL060-1_theory | 0 | 2 | 2 | 0 | 2
TPTP-COL085-1_theory | 0 | 1 | 1 | 0 | 1
TPTP-GRP010-4_theory | 2 | 4 | 11 | 1 | 3
TPTP-GRP011-4_theory | 2 | 4 | 11 | 1 | 3
TPTP-GRP012-4_theory | 2 | 4 | 10 | 0 | 2
slothrop_ackermann | 1 | 3 | 3 | – | –
slothrop_cge | 2 | 6 | 20 | 0 | 4
slothrop_cge3 | 2 | 9 | 28 | 0 | 5
slothrop_endo | 2 | 4 | 14 | 0 | 3
slothrop_equiv_proofs | 1 | 12 | 23 | – | –
slothrop_fgh | 1 | 4 | 3 | – | –
slothrop_groups | 2 | 3 | 10 | 0 | 2
slothrop_groups_conj | 2 | 5 | 10 | 0 | 2
slothrop_hard | 0 | 2 | 2 | 1 | 2
2024-09-04T02:54:54.913412 | 2020-02-27T07:10:23 | 2002.11945 | {
"authors": "Sumon Kumar Bose, Jyotibdha Acharya, and Arindam Basu",
"full_text_license": null,
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"provenance": "arxiv-papers-0000.json.gz:25906",
"submitter": "Sumon Bose Mr.",
"url": "https://arxiv.org/abs/2002.11945"
} | arxiv-papers | # Is my Neural Network Neuromorphic? Taxonomy, Recent Trends and Future
Directions in Neuromorphic Engineering
Sumon Kumar Bose School of EEE
Nanyang Technological University
Jyotibdha Acharya School of EEE
Nanyang Technological University
Arindam Basu School of EEE
Nanyang Technological University
###### Abstract
In this paper, we review recent work published over the last 3 years under the
umbrella of Neuromorphic engineering to analyze what are the common features
among such systems. We see that there is no clear consensus but each system
has one or more of the following features:(1) Analog computing (2) Non von-
Neumann Architecture and low-precision digital processing (3) Spiking Neural
Networks (SNN) with components closely related to biology. We compare recent
machine learning accelerator chips to show that indeed analog processing and
reduced bit precision architectures have best throughput, energy and area
efficiencies. However, pure digital architectures can also achieve quite high
efficiencies by just adopting a non von-Neumann architecture. Given the design
automation tools for digital hardware design, it raises a question on the
likelihood of adoption of analog processing in the near future for industrial
designs. Next, we argue about the importance of defining standards and
choosing proper benchmarks for the progress of neuromorphic system designs and
propose some desired characteristics of such benchmarks. Finally, we show
brain-machine interfaces as a potential task that fulfils all the criteria of
such benchmarks.
###### Index Terms:
Neuromorphic, Low-power, Machine learning, Spiking neural networks, Memristor
## I Introduction
The rapid progress of Machine Learning (ML) fuelled by Deep Neural Networks
(DNN) in the last several years has created an impact in a wide variety of
fields ranging from computer vision, speech analysis, natural language
processing etc. With the progress in software, there has been a concomitant
push to develop better hardware architectures to support the deployment as
well as training of these algorithms[1, 2]. This has rekindled an interest in
“Neuromorphic Engineering”–a term coined in 1990 by Carver Mead in his seminal
paper [3] where he claimed that hardware implementations of algorithms like
pattern recognition (where relative values are of more importance than
absolute ones e.g. is this image more likely to be a cat or a dog?) would be
more energy and area efficient if it adopts biological strategies of analog
processing.
Figure 1: (a) Google search trends over the last 15 years for the topic
“Neuromorphic Engineering” shows a decline around 2010 followed by a renewed
interest in the last 5 years. (b) Number of neuromorphic papers published in
journals from the Nature series have shown a steady increase in the last 10
years. Data for 2019 is till the month of October.
While the above idea of brain-inspired analog processing is very appealing and
showed initial promise with several interesting sensory prototypes, it failed
to gain increased traction over time possibly due to the potential
difficulties of creating robust, programmable, large-scale analog designs that
can benefit from technology scaling in an easy manner. However, in the last 5
years, there has been renewed interest in this topic, albeit with a slightly
expanded connotation of the term “neuromorphic”. Figure 1(a) shows a history
of google searches of the term “neuromorphic engineering” over the past 15
years (obtainable from Google Trends). Data points are plotted for every month
with the maximum search number normalized to $100$. It can be seen that there
was a decline in interest about neuromorphic research around 2010. However, it
has again gained momentum in the last five years with a slightly broadened
scope which we refer to as version 2 (while referring to the Meadian
definition as version 1). A similar trend (plotted in Figure 1(b)) is obtained
also by analyzing the number of papers published in relevant journals (Nature,
Nature Communications, Nature Electronics, Nature Machine Intelligence, Nature
Materials, Nature Nanotechnology) from the Nature journal series over the last
$\approx 10$ years that are on the topic of neuromorphic research. It can be
seen that there is a rapid increase in the number of such papers over the last
5 years.
The rest of the paper is organized as follows: the next section introduces the
new connotation of the term “neuromorphic” followed by an analysis of some
recent research trends in this field. Section IV describes the need for
neuromorphic benchmarks and some desired criteria of such benchmarks while
Section V proposes brain-machine interfaces as a potentially good benchmark.
## II Neuromorphic v2.0: A Taxonomy
As discussed in the last section, the renaissance in Neuromorphic research
over the last 5 years has seen the term being used in a wider sense than the
original definition[3]. This is partially due to the fact that scientists from
different communities (not only circuit designers or neuroscientists) ranging from
material science to computer architects have now become involved. Based on the
recent work, we describe next the key characteristic features of this new
version of neuromorphic systems as:
* •
Use of analog or physics based processing as opposed to conventional digital
circuits–this is the same as the original version of neuromorphic systems from a
circuits perspective.
* •
From the viewpoint of computer architecture, usage of non von-Neumann
architecture (independent of analog or digital compute) and low-precision
digital datapath are hallmarks of neuromorphic systems. In other words,
conventional computers using von-Neumann architectures read from memory,
compute and write back the result–this is very different from brain-inspired
systems where memory and computing are interspersed [4].
* •
Computer scientists and algorithm developers on the other hand consider a
system neuromorphic if it uses a spiking neural network (SNN) as opposed to a
traditional artificial neural network (ANN). Neurons in an SNN inherently
encode time and output a 1-bit digital pulse called a spike or action
potential; a minimal sketch of such a neuron follows this list.
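The following is a minimal illustrative sketch (ours, with arbitrary parameters not drawn from any surveyed chip) of a leaky integrate-and-fire neuron, the basic SNN unit:

```python
import numpy as np

# A minimal leaky integrate-and-fire (LIF) neuron: it integrates input over
# time with a leak and emits a 1-bit spike when the membrane potential
# crosses a threshold (illustrative parameters only).
def lif(input_current, dt=1e-3, tau=20e-3, v_th=1.0, v_reset=0.0):
    v, spikes = 0.0, []
    for i_t in input_current:
        v += (dt / tau) * (-v + i_t)   # leaky integration of the input
        if v >= v_th:                  # threshold crossing -> spike
            spikes.append(1)
            v = v_reset                # reset after the spike
        else:
            spikes.append(0)
    return np.array(spikes)

spike_train = lif(np.full(100, 1.5))   # constant drive of 1.5 for 100 ms
print(spike_train.sum(), "spikes in 100 ms")
```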
Figure 2: Survey of neuromorphic systems reported over 2017-2019 in Nature,
Science, Science Advances, Nature Nanotechnology, Nature Electronics, Nature
Materials, Nature Communications . A large majority use SNN in their work.
Details of all papers used in the survey are in [5]. Figure 3: Survey of IC
implementations of non von-Neumann architecture over the same period in ISSCC,
SOVC, and JSSC, however, shows that very few works use the term “neuromorphic”. Details
of all papers used in the survey are in [5].
We next illustrate how frequently each type of viewpoint is expressed in
neuromorphic research. Figure 2 categorizes the neuromorphic research papers
published between 2017-2019 in the Nature series of journals surveyed in
Figure 1(b) along with the journals Science and Science Advances. The papers
are categorized according to the neuromorphic aspect they primarily focus
on–(1) Analog processing, (2) non von-Neumann architecture or (3) SNN. It can
be seen that a large majority of the work focussed on the SNN aspect (details
of papers used in the survey are available at [5]). Most of these works focus
on new materials or device fabrication and then present SNN simulations using
the novel device properties bypassing the circuit level. Hence, we also
decided to create a survey of ML accelerator integrated circuits (IC)
published in IEEE ISSCC and IEEE SOVC conferences inspired by the ADC
survey[6]. In addition, we also considered papers published in the IEEE
Journal of Solid State Circuits (JSSC). Figure 3 plots the result of
categorizing all ML accelerators adopting non von-Neumann architecture
published between 2017-2019 (details in [5]). Surprisingly, it can be seen
that only 5 papers have used the term “neuromorphic” to describe their work!
This clearly shows a stark difference in terminology used across different
research communities.
Figure 4: A new taxonomy that has non von-Neumann architecture as the
overarching topic with neuromorphic v2.0 and ML accelerators as two sub-topics
under it. Figure 5: Machine learning hardware trends: Peak energy efficiency
in TOPS/W plotted against memory size and categorized by (a) bit-width of
datapath and (b) digital vs mixed-signal analog approaches. (c) Throughput at
peak energy efficiency and (d) Area efficiency plotted against peak energy
efficiency for recent ASIC implementations reported over 2017-2019 in ISSCC,
SOVC, and JSSC. The larger dots in (d) indicate ASIC area without pad.
This leads us to propose a new taxonomy for neuromorphic systems as shown in
Figure 4. It is possibly better to use the term non von-Neumann architecture
as the overarching topic. Under its ambit, neuromorphic v2.0 can refer to
systems using analog or mixed-signal circuits, implementing SNN algorithms or
the extremely quantized version (1-bit) of ANNs. On the other hand, ML
accelerators can refer to digital circuits with non von-Neumann architecture
implementing multi-bit ANN. With this in mind, we look at some recent
performance trends in ML accelerators using non von-Neumann architectures that
were reviewed in Figure 3.
## III Trends in Machine Learning Hardware
There are several important metrics to quantify the performance of ML
accelerators such as energy efficiency, throughput and area efficiency. To
identify some trends, we plot several combinations of these quantities in
Figure 5.
First, we expect bigger chips to have lower energy efficiency in general due
to cost of moving data around large areas that dissipates more energy charging
and discharging interconnects. Since the area of these ICs are dominated by
the static random access memory (SRAM) required to store weights and
activations, we use the SRAM size as a proxy for chip area. The energy
efficiency in Tera operations (TOPS) per Watt are plotted against SRAM size
for these designs in Fig. 5(a) and (b) and indeed show an inverse relation
between energy efficiency and SRAM size or chip size. Figure 5(a) further uses
different colours to categorize the data points according to bit width of
datapath. As expected, it can be seen that the extremely quantized 1-bit
designs[7, 8, 9] show best energy efficiency and are located significantly
($\approx 10X$) above the trend line. The same data is plotted in Fig. 5(b)
but colour coded according to the design approach of digital versus analog
mixed-signal. It is interesting to note that the mixed signal designs indeed
exhibit higher energy efficiencies, but they are in general much smaller than
the digital ones.
Thus, in general we can see that the neuromorphic v2.0 principles of non von-
Neumann architecture coupled with low data precision and analog computing
(described earlier in Section II) do indeed provide great energy efficiencies.
However, it can be seen that the energy efficiencies of pure digital
approaches using only the principles of non von-Neumann architecture and low
bit-width are still much higher ($\approx 500X$) than the energy efficiency
wall of $\approx 10$ GMACs/W for traditional von-Neumann processors[10, 11].
Hence, this raises an interesting question–given the scalability, testability
and ease of porting across nodes offered by digital designs, is it reasonable
to expect large scale industrial adoption of analog neuromorphic designs for
an extra $10X$ in energy efficiency?
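To unpack the $\approx 500X$ figure (our arithmetic, counting one MAC as two operations): the $\approx 10$ GMACs/W wall corresponds to $\approx 20$ GOPS/W, so a digital non von-Neumann accelerator operating at $\approx 10$ TOPS/W sits roughly $10^{4}/20=500$ times above it.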
Next, we analyze the trade-offs in throughput at peak energy efficiency by
plotting it against peak energy efficiency in Fig. 5(c). Interestingly, these
two quantities are positively correlated with a majority of designs exhibiting
throughput $\approx 100$ GOPS. Higher throughput would generally mean the
static power is better amortized across the operations leading to higher
energy efficiency. Also, in general reduced bit precision designs that
increases energy efficiency would also reduce critical path delays increasing
throughput. Lastly, we analyze area efficiency of the designs (measured in
Lastly, we analyze the area efficiency of the designs (measured in GOPS/mm2)
by plotting it against energy efficiency in Fig. 5(d). Again, these two
quantities show a positive correlation, implying that the good design
practices of reduced bit precision and analog design benefit both quantities.
This is also clarified in Figure 5 by demarcating the designs according to bit
precision and design style. These plots show that, apart from energy
efficiency, analog mixed-signal design styles also provide $\approx 10X$
improvement in throughput and area efficiency. Coupled with the energy
efficiency advantages, these points might be sufficient to suggest that, in
the longer term, there is reason for large scale interest in neuromorphic
designs following the principles outlined earlier. However, none of these
comparisons is very meaningful unless the designs can all run a common set of
benchmark problems; this is discussed in the following section.
## IV Neuromorphic Benchmarks
The comparisons between all the hardware designs in the earlier section are
not fair unless the designs can all at least report performance on a minimum
set of benchmark algorithms. While there are at least some common benchmarks
for the ANN community, such as MNIST[12], CIFAR[13] and Imagenet[14] for image
recognition, there is not much consensus about good benchmarks for
neuromorphic SNN algorithms. Hence, while advocating the usage of benchmarks
from the ML community for neuromorphic hardware ANNs, we discuss in more
detail what might constitute desired criteria for SNN benchmarks. While this
topic is deemed important, there have been very few dedicated efforts in this
area[15]. Given the role the Imagenet benchmark played in catalysing progress
in ANN research, we believe it is of utmost importance that the neuromorphic
community immediately spend more effort on devising good benchmarks for SNN.
Some recent work on SNN has focussed on converting images from ANN benchmarks
to spike trains and then classifying them[16, 17]. While these are great
pieces of research, we feel that this is fundamentally not a good application
for SNN since the original input signal is static and does not change with
time. Instead, it might be more natural to use SNN as dynamical systems to
track moving objects in video streams[18, 19] or to classify signals that vary
over time, such as speech[20]. With this in mind, we propose the following
desired characteristics for neuromorphic benchmarks:
1. The signal being processed should be encoded in time naturally, so that the continuous time dynamics of SNN can be more effective than ANN at processing it. Signals such as speech and video are good examples; from the biomedical domain, EEG signals are another good example.
2. There should be a need for a real-time response from the system, as in closed-loop settings such as sensori-motor loops in robotics. The rapid response time of neuromorphic sensors and SNN processing should be useful in such cases.
3. There should be a need for the system to adapt or learn frequently. This necessitates learning from few samples, a common complaint with current deep learning based ANNs that require many thousands of examples to train.
4. Ideally, the example applications should be ones that require low-power operation, so that the energy efficiency of neuromorphic hardware meets an important design requirement.
5. There would potentially be different benchmarks for different scales of the problem: edge deployment (sensory information processing) or cloud based analytics (large scale search, creativity etc.).
We argue in the next section that brain-machine interfaces provide a benchmark
application that meets all of the above criteria.
Figure 6: Example of a BMI experimental setup where the NHP is using its
thoughts to move a wheelchair (adapted from [21] under CC-BY license). The
decoder that converts brain signals to a command provides an ideal opportunity
for low-power, real-time neuromorphic machine learners.
## V Brain-Machine Interfaces
The aim of intra-cortical Brain Machine Interfaces (iBMIs) is to substantially
improve the lives of patients afflicted by spinal cord injury or by
debilitating neurodegenerative disorders such as tetraplegia and amyotrophic
lateral sclerosis. These systems take neural activity as an input and drive
effectors such as a computer cursor [22], a wheelchair [21], and prosthetic
[23] or paralysed [24] limbs, for the purposes of communication, locomotion
and artificial hand control respectively. While early work focussed on
non-invasive EEG based systems, invasive neural interfaces are needed for fine
grained motor control, as well as for advancing fundamental knowledge about
the brain, due to the higher signal quality obtainable. Figure 6 shows a
typical experimental setup involving a non-human primate (NHP) in which an
implanted micro-electrode array is interfaced with amplifiers to read out
neural activity at the level of single cells[21]. This neural data is
collected while the primate performs different types of tasks according to a
given cue (typically visual). Based on the recorded data, a machine learner or
decoder is trained to convert the neural recording to an action that affects
the physical world and provides feedback to the NHP (again, typically visual
feedback is used). We argue that a decoder in an iBMI satisfies all the
conditions required of a neuromorphic system described in Section IV, as
explained below:
1. Neural data recorded from the brain are indeed a streaming signal arriving continuously over time. Further, the data are naturally in the form of spikes, avoiding the question of whether and how to convert the input to spikes.
2. Due to the visual feedback provided to the NHP, decoding has to be done in real-time. In this case, typical update frequencies of $10$ Hz are used[25] (a minimal decoder loop at this update rate is sketched after this list).
3. There is a need to frequently adapt the weights of the decoder since the neural data is non-stationary[26]. The statistics can change due to micro-motion of the electrode or scar tissue formation.
4. The decoder must consume very little energy to prolong the battery life of the system[27]. If included within the implant, its area must be very small as well.
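The decoder loop referenced in point 2 can be sketched as follows. This is a
minimal illustration, assuming spike counts binned at the $10$ Hz update rate;
the linear readout and all dimensions are hypothetical, and real decoders
(e.g. Kalman filters or the neuromorphic decoders cited below) are
considerably more elaborate.

```python
import numpy as np

# Hypothetical linear decoder: 96 electrode channels -> 2D velocity command.
rng = np.random.default_rng(0)
n_channels, n_outputs = 96, 2
W = rng.normal(0.0, 0.1, (n_outputs, n_channels))  # readout weights (made up)
b = np.zeros(n_outputs)

def decode(spike_counts_100ms: np.ndarray) -> np.ndarray:
    """Map one 100 ms window of per-channel spike counts to a command."""
    return W @ spike_counts_100ms + b

# One simulated update tick; in practice this runs every 100 ms (10 Hz).
counts = rng.poisson(lam=3.0, size=n_channels)
velocity_cmd = decode(counts)
print(velocity_cmd)
```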
There has been some initial work on neuromorphic decoders[28, 29, 30, 31, 25].
While [28, 29] performed software simulations, [30, 31, 25] have shown results
from custom low-power neuromorphic ICs. Further, closed-loop decoding results
from an NHP have so far been demonstrated only in [25]. One of the issues
behind the lack of results in this domain is the difficulty and cost of
setting up an NHP based experiment. Open-source datasets are only just
becoming available in this field[32, 33]. While these will definitely provide
a good starting point, they cannot be used to simulate closed-loop settings.
We envision that setting up AI based models to mimic closed-loop BMI
experimental settings could be a good research direction for this area.
## Conclusion
In this paper, we reviewed the recent trend in papers published on the topic
of neuromorphic engineering or computing and showed that the connotation of
the term has broadened beyond its original definition of brain-inspired analog
computing. Neuromorphic v2.0, as we call it in this paper, includes the
concepts of non von-Neumann and low precision digital computing from computer
architecture, and spiking neural networks from the computer science and
algorithms community. However, there are differences in the way different
scientific communities have used the term, and a potentially better taxonomy
is to consider non von-Neumann computing as an umbrella under which
neuromorphic computing is a sub-concept. Trends in recently published ML
accelerator ICs indeed show that using the above neuromorphic concepts leads
to an $\approx 10X$ benefit in energy efficiency, area efficiency and
throughput over digital non von-Neumann architectures. We also pointed out the
need for benchmarks in SNN research and suggested some potential
characteristics of such benchmarks. Finally, we pointed out that brain-machine
interfaces (BMI) have all of these desired characteristics: real-time
response, processing of time varying signals, the need for quick re-training,
as well as a strict requirement for low-power dissipation. We envision the
generation of BMI based benchmarks in the future for the testing and
standardization of different neuromorphic systems.
## References
* [1] A. Basu, J. Acharya, et al., “Low-power, adaptive neuromorphic systems: Recent progress and future directions,” _IEEE Journal of Emerging Topics in Circuits and Systems_, vol. 8, no. 1, pp. 6–27, 2018.
* [2] C.-Y. Chen, B. Murmann, J.-S. Seo, and H.-J. Yoo, “Custom sub-systems and circuits for deep learning: Guest editorial overview,” _IEEE Journal of Emerging Topics in Circuits and Systems_ , vol. 9, no. 2, pp. 247–252, 2019.
* [3] C. Mead, “Neuromorphic electronic systems,” _Proc. of IEEE_ , vol. 78, no. 10, pp. 1629–36, 1990.
* [4] G. Indiveri and S. C. Liu, “Memory and information processing in neuromorphic systems,” _Proc. of IEEE_ , vol. 103, no. 8, pp. 1379–97, 2015.
* [5] S. K. Bose, J. Acharya, and A. Basu, “Survey of neuromorphic and machine learning accelerators in SOVC, ISSCC and Nature/Science series of journals from 2017 onwards,” https://sites.google.com/view/arindam-basu/neuromorphic-survey-asilomar, 2019.
* [6] B. Murmann, “ADC Performance Survey 1997-2019,” http://web.stanford.edu/~murmann/adcsurvey.html, 2019.
* [7] D. Bankman, L. Yang, B. Moons, M. Verhelst, and B. Murmann, “An always-on $3.8\mu J/86\%$ CIFAR-10 mixed-signal binary CNN processor with all memory on chip in 28nm CMOS,” in _2018 IEEE International Solid - State Circuits Conference - (ISSCC)_ , Feb 2018, pp. 222–224.
* [8] H. Valavi, P. J. Ramadge, E. Nestler, and N. Verma, “A 64-Tile 2.4-Mb In-Memory-Computing CNN Accelerator Employing Charge-Domain Compute,” _IEEE Journal of Solid-State Circuits_ , vol. 54, no. 6, pp. 1789–1799, June 2019.
* [9] S. Yin, P. Ouyang, J. Yang, T. Lu, X. Li, L. Liu, and S. Wei, “An Ultra-High Energy-Efficient Reconfigurable Processor for Deep Neural Networks with Binary/Ternary Weights in 28NM CMOS,” in _2018 IEEE Symposium on VLSI Circuits_ , June 2018, pp. 37–38.
* [10] B. Marr et al., “Scaling energy per operation via an asynchronous pipeline,” _IEEE Trans. on VLSI_, vol. 99, pp. 1–5, 2012.
* [11] J. Hasler and B. Marr, “Finding a roadmap to achieve large neuromorphic hardware systems,” _Frontiers in Neuroscience_ , vol. 7, no. 118, pp. 1–29, 2013.
* [12] Y. Lecun, C. Cortes, and C. J. C. Burges, “THE MNIST DATABASE of handwritten digits,” http://yann.lecun.com/exdb/mnist/, 1998.
* [13] A. Krizhevsky, “Learning Multiple Layers of Features from Tiny Images,” https://www.cs.toronto.edu/~kriz/cifar.html, 2009.
* [14] O. Russakovsky, J. Deng, et al., “Imagenet large scale visual recognition challenge,” _Intl. Journal of Computer Vision_, vol. 115, no. 3, pp. 211–252, 2015.
* [15] M. Pfeiffer, R. B. Benosman, and J. Tapson, “Benchmarks and Challenges for Neuromorphic Engineering,” https://www.frontiersin.org/research-topics/3448/benchmarks-and-challenges-for-neuromorphic-engineering#articles, 2016.
* [16] A. Sengupta et al., “Going deeper in spiking neural networks: VGG and residual architectures,” _Frontiers in Neuroscience_, vol. 13, no. 95, pp. 1–10, 2019.
* [17] B. Rueckauer et al., “Conversion of continuous-valued deep networks to efficient event-driven networks for image classification,” _Frontiers in Neuroscience_, vol. 11, no. 682, pp. 1–12, 2017.
* [18] J. Acharya, A. U. Caycedo, et al., “EBBIOT: A Low-complexity Tracking Algorithm for Surveillance in IoVT Using Stationary Neuromorphic Vision Sensors,” in _IEEE System on Chip Conference (SOCC)_, 2019.
* [19] J. Acharya, V. Padala, and A. Basu, “Spiking Neural Network Based Region Proposal Networks for Neuromorphic Vision Sensors,” in _Intl. Symp. on Circuits and Systems (ISCAS)_ , 2019.
* [20] J. Acharya et al., “A comparison of low-complexity real-time feature extraction for neuromorphic speech recognition,” _Frontiers in Neuroscience_, vol. 12, no. 160, 2018.
* [21] “Independent Mobility Achieved through a Wireless Brain-Machine Interface,” _PLoS ONE_ , vol. 11, no. 11, pp. 1–13, 2016.
* [22] C. Pandarinath, P. Nuyujukian, C. H. Blabe, B. L. Sorice, J. Saab, F. R. Willett, L. R. Hochberg, K. V. Shenoy, and J. M. Henderson, “High performance communication by people with paralysis using an intracortical brain-computer interface,” _eLife_, vol. 6, p. e18554, 2017.
* [23] J. L. Collinger, B. Wodlinger, J. E. Downey, W. Wang, E. C. Tyler-Kabara, D. J. Weber, A. J. C. McMorland, M. Velliste, M. L. Boninger, and A. B. Schwartz, “High-performance neuroprosthetic control by an individual with tetraplegia,” _The Lancet_, vol. 381, no. 9866, pp. 557–64, 2013.
* [24] A. B. Ajiboye, F. R. Willett, D. R. Young, W. D. Memberg, B. A. Murphy, J. P. Miller, B. L. Walter, J. A. Sweet, H. A. Hoyen, M. W. Keith, P. H. Peckham, J. D. Simeral, J. P. Donoghue, L. R. Hochberg, and R. F. Kirsch, “Restoration of reaching and grasping movements through brain-controlled muscle stimulation in a person with tetraplegia: a proof-of-concept demonstration,” _The Lancet_, vol. 389, no. 10081, pp. 1821–1830, 2017.
* [25] S. Shaikh, R. So, et al., “Real-time closed loop neural decoding on a neuromorphic chip,” in _2019 9th International IEEE/EMBS Conference on Neural Engineering (NER)_, 2019, pp. 670–3.
* [26] S. Shaikh, Y. Chen, R. So, and A. Basu., “Cortical Motor Intention Decoding on an Analog Co-Processor with Fast Training for Non-Stationary Data,” in _IEEE Biomedical Circuits and Systems conference (BioCAS)_ , 2017.
* [27] S. Shaikh, R. So, et al., “Towards intelligent intra-cortical BMI (i2BMI): Low-power neuromorphic decoders that outperform Kalman filters,” _IEEE Trans. on Biomedical Circuits and Systems (Early Access)_, 2019.
* [28] B. Rapoport, L. Turicchia, et al., “Efficient universal computing architectures for decoding neural activity,” _PLOS One_, vol. 7, no. 9, pp. 1–13, 2012.
* [29] J. Dethier et al., “Design and validation of a real-time spiking-neural-network decoder for brain machine interfaces,” _Journal of Neural Engineering_, vol. 10, no. 036008, pp. 1–12, 2012.
* [30] F. Boi et al., “A bidirectional brain-machine interface featuring a neuromorphic hardware decoder,” _Frontiers in Neuroscience_, vol. 10, no. 563, pp. 1–15, 2016.
* [31] Y. Chen, Y. Enyi, and A. Basu, “A 128 channel extreme learning machine based neural decoder for brain machine interfaces,” _IEEE Trans. on Biomedical Circuits and Systems_ , vol. 10, no. 3, pp. 679–692, 2016.
* [32] J. I. Glaser, M. G. Perich, P. Ramkumar, L. E. Miller, and K. P. Kording, “Population coding of conditional probability distributions in dorsal premotor cortex,” in _Nature Communications_ , 2018.
* [33] J. E. O’Doherty, M. M. B. Cardoso, J. G. Makin, and P. N. Sabes, “Nonhuman primate reaching with multichannel sensorimotor cortex electrophysiology,” 2017. [Online]. Available: https://zenodo.org/record/583331
} | arxiv-papers | 11institutetext: University Autonoma Madrid, Department of Theoretical
Physics, 28049 Madrid, Spain22institutetext: University of Bern, Albert
Einstein Center for Fundamental Physics, Laboratory for High Energy Physics
(LHEP), Bern, Switzerland33institutetext: Boston University, Department of
Physics, Boston, Massachusetts, U.S.A.44institutetext: University of British
Columbia, Department of Physics and Astronomy, Vancouver, British Columbia,
Canada55institutetext: University of California, Irvine, Department of Physics
and Astronomy, Irvine, California, U.S.A.66institutetext: IRFU, CEA Saclay,
Gif-sur-Yvette, France77institutetext: University of Colorado at Boulder,
Department of Physics, Boulder, Colorado, U.S.A.88institutetext: Colorado
State University, Department of Physics, Fort Collins, Colorado,
U.S.A.99institutetext: Duke University, Department of Physics, Durham, North
Carolina, U.S.A.1010institutetext: Ecole Polytechnique, IN2P3-CNRS,
Laboratoire Leprince-Ringuet, Palaiseau, France 1111institutetext: ETH Zurich,
Institute for Particle Physics and Astrophysics, Zurich,
Switzerland1212institutetext: CERN European Organization for Nuclear Research,
CH-1211 Genéve 23, Switzerland1313institutetext: University of Geneva, Section
de Physique, DPNC, Geneva, Switzerland1414institutetext: University of
Glasgow, School of Physics and Astronomy, Glasgow, United
Kingdom1515institutetext: H. Niewodniczanski Institute of Nuclear Physics PAN,
Cracow, Poland1616institutetext: High Energy Accelerator Research Organization
(KEK), Tsukuba, Ibaraki, Japan1717institutetext: University of Houston,
Department of Physics, Houston, Texas, U.S.A.1818institutetext: Institut de
Fisica d’Altes Energies (IFAE), The Barcelona Institute of Science and
Technology, Campus UAB, Bellaterra (Barcelona) Spain1919institutetext: IFIC
(CSIC & University of Valencia), Valencia, Spain2020institutetext: Institute
For Interdisciplinary Research in Science and Education (IFIRSE), ICISE, Quy
Nhon, Vietnam2121institutetext: Imperial College London, Department of
Physics, London, United Kingdom2222institutetext: INFN Sezione di Bari and
Università e Politecnico di Bari, Dipartimento Interuniversitario di Fisica,
Bari, Italy2323institutetext: INFN Sezione di Napoli and Università di Napoli,
Dipartimento di Fisica, Napoli, Italy2424institutetext: INFN Sezione di Padova
and Università di Padova, Dipartimento di Fisica, Padova,
Italy2525institutetext: INFN Sezione di Roma and Università di Roma “La
Sapienza”, Roma, Italy2626institutetext: Institute for Nuclear Research of the
Russian Academy of Sciences, Moscow, Russia2727institutetext: International
Centre of Physics, Institute of Physics (IOP), Vietnam Academy of Science and
Technology (VAST), 10 Dao Tan, Ba Dinh, Hanoi, Vietnam2828institutetext: Kavli
Institute for the Physics and Mathematics of the Universe (WPI), The
University of Tokyo Institutes for Advanced Study, University of Tokyo,
Kashiwa, Chiba, Japan2929institutetext: Keio University, Department of
Physics, Kanagawa, Japan3030institutetext: King’s College London, Department
of Physics, Strand, London WC2R 2LS, United Kingdom3131institutetext: Kobe
University, Kobe, Japan3232institutetext: Kyoto University, Department of
Physics, Kyoto, Japan3333institutetext: Lancaster University, Physics
Department, Lancaster, United Kingdom3434institutetext: University of
Liverpool, Department of Physics, Liverpool, United Kingdom3535institutetext:
Louisiana State University, Department of Physics and Astronomy, Baton Rouge,
Louisiana, U.S.A.3636institutetext: Michigan State University, Department of
Physics and Astronomy, East Lansing, Michigan, U.S.A.3737institutetext: Miyagi
University of Education, Department of Physics, Sendai,
Japan3838institutetext: National Centre for Nuclear Research, Warsaw,
Poland3939institutetext: State University of New York at Stony Brook,
Department of Physics and Astronomy, Stony Brook, New York,
U.S.A.4040institutetext: Okayama University, Department of Physics, Okayama,
Japan4141institutetext: Osaka City University, Department of Physics, Osaka,
Japan4242institutetext: Oxford University, Department of Physics, Oxford,
United Kingdom4343institutetext: University of Pennsylvania, Department of
Physics and Astronomy, Philadelphia, PA, 19104, USA.4444institutetext:
University of Pittsburgh, Department of Physics and Astronomy, Pittsburgh,
Pennsylvania, U.S.A.4545institutetext: Queen Mary University of London, School
of Physics and Astronomy, London, United Kingdom4646institutetext: University
of Regina, Department of Physics, Regina, Saskatchewan,
Canada4747institutetext: University of Rochester, Department of Physics and
Astronomy, Rochester, New York, U.S.A.4848institutetext: Royal Holloway
University of London, Department of Physics, Egham, Surrey, United
Kingdom4949institutetext: RWTH Aachen University, III. Physikalisches
Institut, Aachen, Germany5050institutetext: University of Sheffield,
Department of Physics and Astronomy, Sheffield, United
Kingdom5151institutetext: University of Silesia, Institute of Physics,
Katowice, Poland5252institutetext: SLAC National Accelerator Laboratory,
Stanford University, Menlo Park, California, USA5353institutetext: Sorbonne
Université, Université Paris Diderot, CNRS/IN2P3, Laboratoire de Physique
Nucléaire et de Hautes Energies (LPNHE), Paris, France5454institutetext: STFC,
Rutherford Appleton Laboratory, Harwell Oxford, and Daresbury Laboratory,
Warrington, United Kingdom5555institutetext: University of Tokyo, Department
of Physics, Tokyo, Japan5656institutetext: University of Tokyo, Institute for
Cosmic Ray Research, Kamioka Observatory, Kamioka, Japan5757institutetext:
University of Tokyo, Institute for Cosmic Ray Research, Research Center for
Cosmic Neutrinos, Kashiwa, Japan5858institutetext: Tokyo Institute of
Technology, Department of Physics, Tokyo, Japan5959institutetext: Tokyo
Metropolitan University, Department of Physics, Tokyo, Japan6060institutetext:
Tokyo University of Science, Faculty of Science and Technology, Department of
Physics, Noda, Chiba, Japan6161institutetext: University of Toronto,
Department of Physics, Toronto, Ontario, Canada6262institutetext: TRIUMF,
Vancouver, British Columbia, Canada6363institutetext: University of Victoria,
Department of Physics and Astronomy, Victoria, British Columbia,
Canada6464institutetext: University of Warsaw, Faculty of Physics, Warsaw,
Poland6565institutetext: Warsaw University of Technology, Institute of
Radioelectronics and Multimedia Technology, Warsaw, Poland6666institutetext:
University of Warwick, Department of Physics, Coventry, United
Kingdom6767institutetext: University of Winnipeg, Department of Physics,
Winnipeg, Manitoba, Canada6868institutetext: Wroclaw University, Faculty of
Physics and Astronomy, Wroclaw, Poland6969institutetext: Yokohama National
University, Faculty of Engineering, Yokohama, Japan7070institutetext: York
University, Department of Physics and Astronomy, Toronto, Ontario,
Canadaaainstitutetext: Also at INFN-Laboratori Nazionali di
Legnarobbinstitutetext: Also at J-PARC, Tokai, Japanccinstitutetext:
Affiliated member at Kavli IPMU (WPI), the University of Tokyo,
Japanddinstitutetext: Also at National Research Nuclear University "MEPhI" and
Moscow Institute of Physics and Technology, Moscow, Russiaeeinstitutetext:
Also at the Graduate University of Science and Technology, Vietnam Academy of
Science and Technologyffinstitutetext: Also at JINR, Dubna,
Russiagginstitutetext: Also at Nambu Yoichiro Institute of Theoretical and
Experimental Physics (NITEP)hhinstitutetext: Also at BMCC/CUNY, Science
Department, New York, New York, U.S.A.‡‡institutetext: Deceased
# Measurement of the charged-current electron (anti-)neutrino inclusive cross-
sections at the T2K off-axis near detector ND280
K. Abe 45 N. Akhlaq, 57 R. Akutsu 32 A. Ali 11 C. Alt 54,34 C. Andreopoulos 21
L. Anthony 19 M. Antonova 31 S. Aoki 2 A. Ariga 59 T. Arihara 69 Y. Asada 32
Y. Ashida 21 E.T. Atkin 59 Y. Awataguchi 32 S. Ban 46 M. Barbi 66 G.J. Barker
42 G. Barr 42 D. Barrow 34 C. Barry 15 M. Batkiewicz-Kwasniak 26 A.
Beloshapkin 34 F. Bench 22 V. Berardi 58 L. Berns 70 S. Bhadra, 53 S.
Bienstock 53,13 A. Blondel 6 S. Bolognesi 68 T. Bonus 18 B. Bourguille 66 S.B.
Boyd 33 D. Brailsford 13 A. Bravar 1 D. Bravo Berguño 56 C. Bronner 13 S. Bron
51 A. Bubak 10 M. Buizza Avanzini 36 J. Calcutt 7 T. Campbell 16 S. Cao 50
S.L. Cartwright 22 M.G. Catanesi 19 A. Cervera 66 A. Chappell 24 C. Checchia
17 D. Cherdack 55 N. Chikuma 12 G. Christodoulou 24,a M. Cicerchia, 34 J.
Coleman 24 G. Collazuol 42,28 L. Cook 42 D. Coplowe 7 A. Cudd 15 A. Dabrowska
23 G. De Rosa 33 T. Dealtry 66 P.F. Denner 34 S.R. Dennis 54 C. Densham 30 F.
Di Lodovico 39 N. Dokania 12 S. Dolan 33 T.A. Doyle 10 O. Drapier 53 J.
Dumarchez 21 P. Dunne 55 A. Eguchi 14 L. Eklund 6 S. Emery-Schrenk 2 A.
Ereditato 19 P. Fernandez 4,62 T. Feusels 33 A.J. Finch, 70 G.A. Fiorentini 23
G. Fiorillo 2 C. Francois 16,b M. Friend 16,b Y. Fujii 55 R. Fujita 40 D.
Fukuda 60 R. Fukuda 37 Y. Fukuda 11 K. Fusshoeller 53 C. Giganti 68 T. Golan
10 M. Gonin 26 A. Gorin 53 M. Guigue 66 D.R. Hadley 66 J.T. Haigh 49 P.
Hamacher-Baumann 62,28 M. Hartz 16,b T. Hasegawa 6 S. Hassani 16 N.C. Hastings
32 T. Hayashino 56,28 Y. Hayato 32 A. Hiramoto, 8 M. Hogan 51 J. Holeczek
20,27 N.T. Hong Van 41 T. Honjo 24 F. Iacob 32 A.K. Ichikawa 56 M. Ikeda 16,b
T. Ishida 16,b T. Ishii 60 M. Ishitsuka 55 K. Iwamoto 26 A. Izmaylov 60 N.
Izumi 16 M. Jakkapu 67 B. Jamieson 50 S.J. Jenkins 18 C. Jesús-Valls 32 M.
Jiang 7 S. Johnson 21 P. Jonsson 39,c C.K. Jung 57 X. Junjie 21 P.B. Jurj 42
M. Kabirnezhad 48,54 A.C. Kaboth, 57,c T. Kajita 59 H. Kakuno 56 J. Kameda
63,62 D. Karlen 35 S.P. Kasetti 56 Y. Kataoka 69 Y. Katayama 30 T. Katori 56
Y. Kato 3,28,c E. Kearns 26 M. Khabibullin 26 A. Khotjantsev 32 T. Kikawa 55
H. Kikutani 41 H. Kim 30 S. King 51 J. Kisiel 66 A. Knight 33 A. Knox 41 T.
Kobata 16,b T. Kobayashi 42 L. Koch 55 T. Koga 62 A. Konaka 33 L.L. Kormos,
40,c Y. Koshio 26 A. Kostin 38 K. Kowalik 32 H. Kubo 26,d Y. Kudenko 41 N.
Kukita 32 S. Kuribayashi 65 R. Kurjata 35 T. Kutter 58 M. Kuze 1 L. Labarga 38
J. Lagoda 24 M. Lamoureux 43 D. Last 24 M. Laveder 33 M. Lawe 10 M. Licciardi
62 T. Lindner 14 R.P. Litchfield 39 S.L. Liu 39 X. Li 24 A. Longhin 25 L.
Ludovici 42 X. Lu 18 T. Lux, 23 L.N. Machado 22 L. Magaletti 36 K. Mahn 50 M.
Malek 47 S. Manly 13 L. Maret 7 A.D. Marino 56,28 L. Marti-Magro 61 J.F.
Martin 16,b T. Maruyama 16 T. Matsubara 55 K. Matsushita 26 V. Matveev 43 C.
Mauger 34 K. Mavrokoridis 6 E. Mazzucato 70 M. McCarthy 34 N. McCauley 50 J.
McElwee 47 K.S. McFarland 39 C. McGrew 26 A. Mefodiev 34 C. Metelko 24 M.
Mezzetto 69 A. Minamino, 26 O. Mineev 5 S. Mine 56,c M. Miura 11 L. Molina
Bueno 56,c S. Moriyama 36 J. Morrison 10 Th.A. Mueller 6 L. Munteanu 11 S.
Murphy 7 Y. Nagai 16,b T. Nakadaira 56,28 M. Nakahata 56 Y. Nakajima 40 A.
Nakamura 32 K.G. Nakamura 28,16,b K. Nakamura 31 Y. Nakano 56,28 S. Nakayama
32,28 T. Nakaya 16,b K. Nakayoshi 61 C. Nantais 21 C.E.R. Naseby 20,e T.V.
Ngoc 68 K. Niewczas 16,‡ K. Nishikawa, 29 Y. Nishimura 13 E. Noah 21 T.S.
Nonnenmacher 54 F. Nova 19 P. Novella 33 J. Nowak 14 J.C. Nugent 33 H.M.
O’Keeffe 50 L. O’Sullivan 32 T. Odagawa 16 T. Ogawa 40 R. Okada 57,28 K.
Okumura 41 T. Okusawa 4,62 S.M. Oser 45 R.A. Owen 16,b Y. Oyama 23 V.
Palladino 39 J.L. Palomino 44 V. Paolone 24 M. Pari 48 W.C. Parker 13 S. Parsa
21 J. Pasternak 34 P. Paudyal, 62 M. Pavin 34 D. Payne 34 G.C. Penn 36 L.
Pickering 50 C. Pidcott 69 G. Pintaudi 70 E.S. Pinzon Guerra 2 C. Pistillo
53,f B. Popov 51 K. Porwit 64 M. Posiadala-Zezula 34 A. Pritchard 10 B.
Quilain 49 T. Radermacher 22 E. Radicioni 11 B. Radics 33 P.N. Ratoff 8 E.
Reinherz-Aronis 39 C. Riccio 38 E. Rondio 49 S. Roth 11 A. Rubbia 23 A.C.
Ruggeri 14 C.A. Ruggles 65 A. Rychter, 16,b K. Sakashita 13 F. Sánchez 70 G.
Santucci 11 C.M. Schloesser 9,c K. Scholberg 8 J. Schwehr 21 M. Scott 41,g Y.
Seiya 16,b T. Sekiguchi 56,28,c H. Sekiya 11 D. Sgalaberna 54,42 R. Shah 26 A.
Shaikhiev 67 F. Shaker 26 A. Shaykina 56,28 M. Shiozawa 21 W. Shorrock 26 A.
Shvartsman 26 A. Smirnov 5 M. Smy 68 J.T. Sobczyk 5,28 H. Sobel 14 F.J.P.
Soler 56 Y. Sonoda 49 J. Steinmann, 26,6 S. Suvorov 31 A. Suzuki 16,b S.Y.
Suzuki 28 Y. Suzuki 21 A.A. Sztuc 16,b M. Tada 32 M. Tajima 56 A. Takeda 31,28
Y. Takeuchi 56,c H.K. Tanaka 52,61 H.A. Tanaka 41 S. Tanaka 69 Y. Tanihara 32
M. Tani 41 N. Teshima 50 L.F. Thompson 8 W. Toki 34 C. Touramanis 61 T.
Towstego 34 K.M. Tsui 16,b T. Tsukamoto 35 M. Tzanov 21 Y. Uchida 28,5 M.
Vagins 66 S. Valder, 39 Z. Vallari 18 D. Vargas 6 G. Vasseur 39 C. Vilela 66
W.G.S. Vinning 54 T. Vladisavljevic 26 V.V. Volkov 15 T. Wachala 67 J. Walker
33 J.G. Walsh 39 Y. Wang 54,42 D. Wark 21 M.O. Wascko 54,42 A. Weber 32,c R.
Wendell 39 M.J. Wilking 2 C. Wilkinson 30 J.R. Wilson 8 R.J. Wilson 39 K. Wood
47 C. Wret 16,‡ Y. Yamada 41,g K. Yamamoto 39,h C. Yanagisawa 39 G. Yang, 56
T. Yano 32 K. Yasutome 62 S. Yen 26 N. Yershov 55,c M. Yokoyama 58 T. Yoshida
70 M. Yu 15 A. Zalewska 38 J. Zalipska 65 K. Zaremba 38 G. Zarnecki 65 M.
Ziembicki 7 E.D. Zimmerman 53 M. Zito 30 S. Zsoldos 26 and A. Zykova
<EMAIL_ADDRESS>
###### Abstract
The electron (anti-)neutrino component of the T2K neutrino beam constitutes
the largest background in the measurement of electron (anti-)neutrino
appearance at the far detector. Electron (anti-)neutrino scattering is
measured directly with the T2K off-axis near detector, ND280.
electron (anti-)neutrino events in the plastic scintillator target from both
neutrino and anti-neutrino mode beams is discussed in this paper. The flux
integrated single differential charged-current inclusive electron
(anti-)neutrino cross-sections, $d\sigma/dp$ and $d\sigma/d\cos(\theta)$, and
the total cross-sections in a limited phase-space in momentum and scattering
angle ($p>300$ MeV/c and $\theta\leq 45^{\circ}$) are measured using a binned
maximum likelihood fit and compared to the neutrino Monte Carlo generator
predictions, resulting in good agreement.
###### Keywords:
neutrino cross-section
arXiv: 2002.11986
## 1 Introduction
The measurement of $\nu_{\mu}\rightarrow\nu_{e}$ (and
$\bar{\nu}_{\mu}\rightarrow\bar{\nu}_{e}$) oscillations, which is the main
goal of the T2K experiment [T2KExperiment], is affected by two main background
sources. The first is the intrinsic $\nu_{e}$ and $\bar{\nu}_{e}$ beam
contamination and the second is neutral current (NC) $\pi^{0}$ production,
where the $\pi^{0}$ can mimic an electron from a charged-current (CC)
$\nu_{e}$ or $\bar{\nu}_{e}$ interaction at the far detector, Super-Kamiokande.
In addition, the $\nu_{\mu}\rightarrow\nu_{e}$
($\bar{\nu}_{\mu}\rightarrow\bar{\nu}_{e}$) appearance signal is predicted
using a predominantly $\nu_{\mu}$ ($\bar{\nu}_{\mu}$) sample, which relies on
knowledge of the $\nu_{e}$ ($\bar{\nu}_{e}$) cross-section relative to that of
the $\nu_{\mu}$ ($\bar{\nu}_{\mu}$). The modelling of signal and backgrounds
depends strongly on the neutrino cross-sections, and the near detector is
crucial for measuring them.
The electron (anti-)neutrino flux arises from the decay of kaons, muons and
pions produced when the proton beam impinges upon a graphite target. Kaons can
decay to electron (anti-)neutrinos through the decay channels,
$K^{\pm}\rightarrow\pi^{0}+e^{\pm}+\nu_{e}(\bar{\nu}_{e})$ and
$K^{0}_{e3}\rightarrow\pi^{\pm}+e^{\mp}+\bar{\nu}_{e}(\nu_{e})$. Muons, mainly
produced from pion decay, can also decay to electron (anti-)neutrinos through
$\mu^{\pm}\rightarrow
e^{\pm}+\bar{\nu}_{\mu}(\nu_{\mu})+\nu_{e}(\bar{\nu}_{e})$. The direct
contribution of the pion decays to the electron (anti-)neutrino flux is tiny.
Together these combinations provide the $\nu_{e}$ and $\bar{\nu}_{e}$ flux at
the near detector. In general, the electron (anti-)neutrinos from kaon decays
are more energetic than those from muon decays and populate the high energy
tail of the neutrino energy spectrum.
The CC electron (anti-)neutrino selection at the near detector is challenging
for two reasons. Firstly, there is a small number of electrons (positrons)
produced from CC $\nu_{e}$ ($\bar{\nu}_{e}$) interactions, compared to the
much larger number of muons, pions and protons produced in the final states of
CC and NC $\nu_{\mu}$ and $\bar{\nu}_{\mu}$ interactions. The particle
identification (PID) must work extremely well to obtain a pure electron
selection. The second reason is the large number of background electrons from
sources such as $\pi^{0}$, which can be produced either inside or outside the
target detectors. Rejection of background electrons (positrons) is vital for
the measurement of the CC $\nu_{e}$ ($\bar{\nu}_{e}$) interactions.
Electron (anti-)neutrino cross-section measurements in the GeV region are
rare, since (anti-)neutrino beams primarily produce muon (anti-)neutrinos. The
first CC-$\nu_{e}$ inclusive cross-section measurement, and the only
CC-$\bar{\nu}_{e}$ inclusive cross-section measurement so far, were made by
the Gargamelle bubble chamber experiment in 1978 [Gargamelle]. Thirty-six
years later, in 2014, T2K measured the CC-$\nu_{e}$ inclusive cross-section
[ND280NueXs], and in 2016 MINERvA performed the first CC-$\nu_{e}$
cross-section measurement without pions in the final state [MinervaNue].
Measurements of the electron (anti-)neutrino cross-sections will play a
pivotal role in the precision measurements of neutrino oscillations for the
current and next generation of long-baseline neutrino oscillation experiments
[DUNETDR; HYPERKTDR].
Compared to the 2014 results, the work in this paper follows a different
approach to measure the CC-$\nu_{e}$ and CC-$\bar{\nu}_{e}$ cross-sections.
Following the developments in the T2K muon neutrino cross-section
measurements [ND280cc0pi; ND280numucc4pi; ND280nucleff], the differential
cross-sections are measured in a model independent way as a function of the
electron and positron kinematics (momentum and scattering angle), the
quantities which are measured in the near detector. Although cross-section
results were calculated in $Q^{2}$ in the 2014 work, such measurements could
introduce model dependencies and are not included in this work: each $Q^{2}$
bin contains contributions from events with different electron kinematics,
leading to model dependencies when correcting for the efficiencies, since our
acceptance for backward and high angle events is very poor. Similarly,
cross-section measurements in momentum, scattering angle and neutrino energy
which are extrapolated to regions with no or very little acceptance are also
model dependent, since they depend on the underlying model for the efficiency
corrections. Such results are not produced in this paper. For the differential
cross-section extraction, following the experience from the T2K muon neutrino
cross-section measurements [ND280cc0pi; ND280numucc4pi; ND280nucleff], this
work uses a binned likelihood fit with control samples to tune the
backgrounds, instead of an iterative matrix inversion method [dagostini]. The
likelihood fit method is preferred because the correction of detector smearing
effects is independent of the signal model used in the simulation, and it
allows in-depth validation of the background tuning and of the extracted
results. Finally, events with momentum below 200 MeV/c were not considered in
the 2014 results; this background-enriched region can be used for fit
validation studies and it is used in the current work.
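For orientation, a binned maximum-likelihood fit of this kind typically
maximizes a product of Poisson terms over the analysis and control-sample
bins; the form below is a generic sketch, not the exact likelihood of this
analysis (which also includes constraint terms for the flux, interaction-model
and detector nuisance parameters),

$$\mathcal{L}(\vec{c},\vec{\theta})\;=\;\prod_{i\,\in\,\mathrm{bins}}\frac{\big[N_{i}^{\mathrm{exp}}(\vec{c},\vec{\theta})\big]^{N_{i}^{\mathrm{obs}}}\,e^{-N_{i}^{\mathrm{exp}}(\vec{c},\vec{\theta})}}{N_{i}^{\mathrm{obs}}!}\;\times\;\pi(\vec{\theta}),$$

where $\vec{c}$ are the signal template normalizations, $\vec{\theta}$ the
nuisance parameters, and $\pi(\vec{\theta})$ their prior constraint.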
Since the CC-$\nu_{e}$ inclusive cross-section measurement in 2014, T2K has
doubled its neutrino data and collected a significant amount of anti-neutrino
data. With these new datasets, T2K performs new measurements of the
CC-$\nu_{e}$ inclusive cross-sections in neutrino and anti-neutrino modes. In
addition, the first CC-$\bar{\nu}_{e}$ inclusive cross-section measurement in
anti-neutrino mode since Gargamelle is performed.
## 2 Experimental Setup
### 2.1 T2K beam
The T2K neutrino beam is produced at the Japan Proton Accelerator Research
Complex (J-PARC) by colliding 30 GeV protons with a graphite target. The pions
and kaons produced in the target are focused by three magnetic horns and decay
in flight to produce neutrinos. T2K can run with either forward horn current
(FHC) or with reverse horn current (RHC) producing beams in neutrino or anti-
neutrino enhanced mode, respectively.
The T2K beamline [T2Kbeamline] is simulated using FLUKA2011 [fluka1; fluka2],
GEANT3 [geant3] and GCALOR [gcalor]. The simulated yields of hadronic
particles are tuned using the NA61/SHINE [na61shine1; na61shine2; na61shine3]
thin target measurements. The neutrino fluxes at the off-axis near detector
ND280 in FHC and RHC are shown in Figure 1. The off-axis position of the near
detector relative to the neutrino beam direction results in a narrow-band
$\nu_{\mu}$ or $\bar{\nu}_{\mu}$ beam; the same does not occur for the
$\nu_{e}$ and $\bar{\nu}_{e}$ fluxes, which are produced via three-body
decays, resulting in broader $\nu_{e}$ and $\bar{\nu}_{e}$ spectra. The mean
of the $\nu_{e}$ energy spectrum at ND280 is 1.28 GeV in FHC and 1.98 GeV in
RHC. The mean of the $\bar{\nu}_{e}$ energy spectrum in RHC is 0.99 GeV. The
total integrated $\nu_{e}$ flux at ND280 in FHC is
$\Phi_{\nu_{e}}^{FHC}=\left(2.67\pm 0.24\right)\times 10^{11}$ $\rm
neutrinos/cm^{2}$ and in RHC is $\Phi_{\nu_{e}}^{RHC}=\left(2.65\pm
0.21\right)\times 10^{10}$ $\rm neutrinos/cm^{2}$. The total integrated
$\bar{\nu}_{e}$ flux at ND280 in RHC is
$\Phi_{\bar{\nu}_{e}}^{RHC}=\left(1.00\pm 0.10\right)\times 10^{11}$
anti-neutrinos/$\rm cm^{2}$.
Figure 1: The neutrino and anti-neutrino fluxes at ND280 in neutrino (FHC)
mode (left) and in anti-neutrino (RHC) mode (right).
### 2.2 T2K off-axis near detector ND280
The $2.5^{\circ}$ off-axis near detector, ND280, is located 280 metres from
the proton target. The main goal of ND280 is to constrain the neutrino flux
and the interaction cross-sections. It is composed of several sub-detectors
located inside a 0.2 T magnet, as depicted in Figure 2. The front part is the
$\pi^{0}$ detector (P0D) [ND280P0D], which is optimised to measure neutrino
interactions with $\pi^{0}$ production. The rear part is the tracker, which is
optimised to measure charged particles produced in neutrino interactions. It
consists of two Fine-Grained Detectors [ND280FGD]: the first (FGD1) is
composed of layers of plastic scintillator, while the second (FGD2) has
alternating layers of plastic scintillator and water.
Figure 2: An exploded view of the T2K near detector, ND280. The neutrino beam
enters ND280 from the left.
The P0D, FGD1 and FGD2 provide the target mass for neutrino interactions, and
each is followed by a Time Projection Chamber (TPC1, TPC2 and TPC3)
[ND280TPC]. The TPCs are filled with an argon-based gas mixture and provide
excellent track reconstruction, with a momentum resolution of roughly 8% for
1 GeV/c tracks. This can be combined with energy loss ($dE/dx$) measurements
in order to perform PID of tracks crossing the TPCs. The measured and the
expected $dE/dx$ are used to define the "pull" (the difference between the
measured mean ionization and the expected one, divided by the resolution) for
each particle species. The TPC energy loss for negatively and positively
charged tracks originating in FGD1 is shown in Figure 3. Note the region below
200 MeV/c, where the electron $dE/dx$ curve crosses the muon and pion $dE/dx$
curves, and the region around 1 GeV/c, where the proton $dE/dx$ curve crosses
the electron $dE/dx$ curve.
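In symbols, restating the definition above for a particle hypothesis $\alpha$
(electron, muon, pion or proton),

$$\mathrm{pull}_{\alpha}\;=\;\frac{(dE/dx)_{\mathrm{meas}}-(dE/dx)_{\mathrm{exp},\,\alpha}}{\sigma_{\alpha}},$$

where $\sigma_{\alpha}$ is the $dE/dx$ resolution for hypothesis $\alpha$.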
Figure 3: TPC energy loss for tracks in data originating in FGD1. Left:
negatively charged tracks. Right: positively charged tracks. The expected
energy loss curves for electrons, muons, pions and protons are also shown.
The P0D and the tracker are surrounded by the lead-scintillator
Electromagnetic Calorimeter (ECal) [ND280ECal] and a Side Muon Range Detector
(SMRD) [ND280SMRD]. The ECal measures the energy of photons and electrons (EM
energy) and provides additional PID for minimum ionizing particles (MIP),
electromagnetic showers (EM) and highly ionizing stopping particles (HIP) such
as protons.
The ECal EM energy is reconstructed under the hypothesis that the energy
deposit is due to an electromagnetic shower. Comparing the TPC momentum with
the ECal EM energy, electrons can be separated from muons and protons. The
ratio of the TPC momentum over the ECal EM energy peaks at unity for electrons
and at lower values for muons and protons. The ECal EM energy resolution is
approximately 10% at 1 GeV.
The ECal PID uses the longitudinal and lateral profiles of ECal clusters to
generate probability density functions (PDFs). For each particle type, the
PDFs of the individual PID variables are multiplied to form a likelihood; see
[ND280NueSel] for details.
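Schematically (a generic construction; the exact set of profile variables is
given in [ND280NueSel]), the discriminators used below are log-likelihood
ratios of the form

$$R_{A/B}\;=\;\ln\frac{\mathcal{L}_{A}}{\mathcal{L}_{B}},\qquad \mathcal{L}_{\alpha}\;=\;\prod_{k\,\in\,\mathrm{PID\ variables}}\mathrm{PDF}_{\alpha,k}(x_{k}),$$

for the hypothesis pairs $(A,B)$ quoted in the text.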
$R_{MIP/EM}$ is the log-likelihood ratio of the MIP and electron hypothesis
and $R_{EM/HIP}$ is the log-likelihood ratio of the electron and proton
hypothesis. The $R_{MIP/EM}$ for high purity control samples (90% or better)
is shown in Figure 4, where the muon sample comprises cosmic muons and muons
produced by neutrino interactions outside ND280 that cross the detector
(through-going muons), the electron sample is formed from electron-positron
pairs from photon conversions and the protons are from neutrino interactions.
The ECal can provide supplementary PID to the TPC, especially in the region
around 1 GeV/c where the TPC energy loss curves of electrons and protons
cross. Figure 4 also shows the $R_{EM/HIP}$ for showers (classified by
$R_{MIP/EM}>0$) with $p>600$ MeV/c only. Although there are some shape
differences between data and simulation for $R_{MIP/EM}$ and $R_{EM/HIP}$, the
data and simulation efficiencies to select electrons (or positrons) and reject
muons and protons are similar. The PID efficiencies in the simulation are
corrected using the data control samples.
Figure 4: Performance of the ECal PID using high purity control samples of
cosmic and through-going muons, electrons and positrons from gamma conversions
and protons from neutrino interactions. Left: Log-likelihood ratio of the ECal
track-shower ($R_{MIP/EM}$) PID. Right: Log-likelihood ratio of the ECal
electron-proton ($R_{EM/HIP}$) PID for showers with $R_{MIP/EM}>0$ and $p>600$
MeV/c. Plots are normalised to unity.
## 3 Data samples and MC simulation
For FHC, $11.92\times 10^{20}$ protons-on-target (POT) are analysed
corresponding to data collected in the periods 2010-2013 and 2016-2017. For
RHC, $6.29\times 10^{20}$ POT are analysed corresponding to data collected
from 2014 to 2016.
The ND280 flux is simulated as described in section 2.1. The (anti-)neutrino
interactions with the ND280 detector materials, including nuclear effects, are
simulated using the NEUT 5.3.2 [neutmc] and GENIE 2.8.0 [geniemc] Monte Carlo
(MC) generators. The neutrino generators account for the difference in lepton
mass between the muon and electron neutrino cross-section computations.
However, other effects, such as radiative corrections, modifications of the
pseudoscalar form factors, and form factors associated with second-class
vector and axial currents, are not considered [NueNumuCCQE].
NEUT 5.3.2 uses the Llewellyn-Smith formalism [LlewellynSmith] to describe the
CC quasi-elastic neutrino-nucleon cross-sections, with the spectral function
as the nuclear model [SpectralFunction]. The axial mass used for the CC
quasi-elastic process is set to 1.21 $\rm GeV/c^{2}$. The simulation of
multi-nucleon interactions, where the neutrino interacts with a correlated
pair of nucleons, follows the Nieves et al. model [Nieves2p2h]. The resonant
pion production process with an invariant mass $W\leq 2$ $\rm GeV/c^{2}$ is
described by the Rein-Sehgal model [ReinSehgal], with the resonant axial mass
set to 0.95 $\rm GeV/c^{2}$. Deep inelastic scattering (DIS) is calculated for
$W>1.3$ $\rm GeV/c^{2}$ and is modeled using the GRV98 parton distribution
function [GRV98] with the Bodek and Yang corrections [BodekYangNeut]. Single
pion production with $W\leq 2$ $\rm GeV/c^{2}$ is suppressed to avoid double
counting with the resonant production. Final state interactions, which
describe the transport of the hadrons produced in the neutrino interaction
through the nucleus, are simulated using a semi-classical intra-nuclear
cascade model.
GENIE 2.8.0 uses a different axial mass for the quasi-elastic process, 0.99
$\rm GeV/c^{2}$, and relies on a different nuclear model, a relativistic Fermi
gas with the Bodek and Ritchie modifications [BodekRitchie]. Resonant
production is based on the Rein-Sehgal model, as in NEUT, but in GENIE the
resonant model is not restricted to the single pion decay channel. To avoid
double counting with the DIS model, the resonant model is switched off for
$W>1.7$ $\rm GeV/c^{2}$. The resonant axial mass is set to 1.12 $\rm
GeV/c^{2}$. DIS is simulated similarly to NEUT but with slightly different
Bodek-Yang corrections [BodekYangGenie]. A parametrized model of final state
interactions (the GENIE "hA" model) is used.
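For ease of comparison, the generator settings quoted above are collected
below (only the values stated in the text; entries not quoted are marked
accordingly).

| Setting | NEUT 5.3.2 | GENIE 2.8.0 |
|---|---|---|
| Nuclear model | Spectral function | Relativistic Fermi gas (Bodek-Ritchie) |
| CC quasi-elastic axial mass | 1.21 $\rm GeV/c^{2}$ | 0.99 $\rm GeV/c^{2}$ |
| Resonant model | Rein-Sehgal ($W\leq 2$ $\rm GeV/c^{2}$) | Rein-Sehgal (off for $W>1.7$ $\rm GeV/c^{2}$) |
| Resonant axial mass | 0.95 $\rm GeV/c^{2}$ | 1.12 $\rm GeV/c^{2}$ |
| DIS | GRV98 + Bodek-Yang ($W>1.3$ $\rm GeV/c^{2}$) | As NEUT, modified Bodek-Yang |
| Multi-nucleon (2p2h) | Nieves et al. | not quoted in the text |
| Final state interactions | Semi-classical cascade | Parametrized "hA" model |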
A detailed description of the NEUT and GENIE models can be found in previous
T2K publications [ND280numucc4pi; T2KOscLong].
GEANT 4.9.4 [geant4] is used to transport the final state particles through
the ND280 detector. The nominal MC is produced by simulating approximately 10
times the data POT for both NEUT and GENIE.
Data-driven reconstruction efficiency corrections are applied to the nominal
MC. These corrections are estimated using high-purity ($>$ 90%) control
samples of cosmic and through-going muons, electrons and positrons from photon
conversions and protons from neutrino interactions.
The nominal ND280 MC simulates only the neutrino interactions that occur
within the ND280 detector. In reality, neutrino interactions also occur in the
surrounding material (sand interactions) and these produce particles that
enter ND280. These particles can then affect the event selection by triggering
one of the three veto cuts (Section 4.2 describes the event selection and the
veto cuts in the TPC, ECal and P0D sub-detectors) during a beam bunch time
window (i.e. an ND280 event), hence causing an ND280 event to fail the
selection. This is the sand pile-up effect, which is inherently present in the
data but not in the nominal ND280 MC. To simulate the effect, a second MC
simulation is generated (sand MC) to estimate the rate at which sand
interactions trigger these veto cuts in coincidence with an ND280 event. To
propagate this effect to the nominal ND280 MC, a pile-up correction is applied
as a weight to all ND280 events, for each of the veto cuts: if sand
interactions are estimated to trigger a given veto for X% of ND280 events,
then a weight of (1 - X/100) is applied to all ND280 events. Since the pile-up
rate depends on the beam intensity and on the beam mode (FHC or RHC), the
corrections are computed separately for each data period. For the high
intensity neutrino beam in 2017, the total pile-up correction is
approximately 5%.
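As a minimal sketch of this correction (the beam-mode/period keys and the veto
rates below are hypothetical placeholders, not T2K values):

```python
# Sand pile-up correction: a per-veto weight (1 - X/100) applied to all ND280
# MC events, computed per beam mode and data period. Rates here are made up.
sand_veto_rate_percent = {
    ("FHC", "run2017"): 5.0,   # sand interactions fire this veto in 5% of events
    ("RHC", "run2015"): 2.0,
}

def pileup_weight(beam_mode: str, period: str) -> float:
    """Weight applied to every ND280 MC event for one veto cut."""
    x = sand_veto_rate_percent[(beam_mode, period)]
    return 1.0 - x / 100.0

print(pileup_weight("FHC", "run2017"))  # 0.95 for a 5% veto rate
```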
## 4 Selection of electron (anti-)neutrino interactions at ND280
The selection of electron (anti-)neutrinos in FGD1 closely follows the steps
described in the 2014 FHC CC-$\nu_{e}$ analysis [ND280NueSel] and is
summarised below. There have been several reconstruction improvements since
the 2014 analysis, and additional selection criteria are applied to improve
the purities. The RHC CC-$\nu_{e}$ selection is identical to the FHC
selection, but for the CC-$\bar{\nu}_{e}$ selection additional criteria are
applied to remove the proton background. Details are given in section 4.2.
### 4.1 Signal and background definitions
A MC event is defined as signal if the selected primary track is an electron
(positron) from a CC-$\nu_{e}$ (CC-$\bar{\nu}_{e}$) interaction with the
vertex inside the FGD1 fiducial volume, which has a total mass of 919.5 kg,
corresponding to $\left(5.54\pm 0.04\right)\times 10^{29}$ nucleons.
Backgrounds are separated into four categories: photon, muon, proton and other
backgrounds. The photon background category contains events where the selected
primary track is an electron or positron from a photon conversion whose true
conversion point is inside the FGD1 fiducial volume. Events where the selected
primary track is a muon (proton) misidentified as an electron enter the muon
(proton) background category. All other backgrounds, including misidentified
pions, electrons from photons converting outside the fiducial volume but
reconstructed inside it, electrons from $\pi^{0}$ Dalitz decay and Michel
electrons, go into the other background category.
### 4.2 Event selection
The event selection for CC-$\nu_{e}$ and CC-$\bar{\nu}_{e}$ events is
described in the following:
(i)
Only events during periods of good beam and detector quality are used. The
event time has to be reconstructed within one of the eight distinct beam
bunches.
(ii)
The highest momentum negatively charged (leading negatively charged) FGD1-TPC
track, for the CC-$\nu_{e}$ selection, or the highest momentum positively
charged (leading positively charged) FGD1-TPC track, for CC-$\bar{\nu}_{e}$
selection, with a vertex in the FGD1 fiducial volume is selected. The leading
positively charged track in the CC-$\bar{\nu}_{e}$ selection must also be the
highest momentum track (from all negatively and positively charged tracks).
(iii)
To ensure reliable PID and momentum measurements, the selected leading track
is required to have at least 18 TPC hits if it enters the ECal or 36 TPC hits
if it does not enter the ECal. The momentum spectra of the selected leading
negatively charged and leading positively charged tracks with the minimum
number of TPC hits are shown in Figure 5. Notice the large number of protons
selected as the leading positively charged track in the RHC CC-$\bar{\nu}_{e}$
selection. Some data-MC discrepancies are visible in the low momentum region
which contains the poorly modelled photon and other backgrounds.
Figure 5: Momentum distribution of the selected leading negatively charged
track with a vertex in the FGD1 fiducial volume for (a) FHC CC-$\nu_{e}$, (b)
RHC CC-$\nu_{e}$ and (c) leading positively charged track for RHC
CC-$\bar{\nu}_{e}$. The number of MC events is normalized to the data POT. The
last bin is the overflow bin.
(iv)
TPC PID is applied to select electrons and remove minimum-ionizing tracks.
Using the electron TPC pull, the leading track must agree with the electron
TPC $dE/dx$ hypothesis. If the leading track does not enter the ECal, then
additional cuts on the TPC PID are applied using the muon and pion TPC pulls.
The event is rejected if the leading track agrees with the muon or pion TPC
hypothesis; this removes events around 150 MeV/c, including genuine electrons,
where the TPC information alone cannot distinguish the hypotheses.
(v)
Additional PID is applied using either the ECal EM energy or the ECal PID
depending on the momentum of the leading track as it enters the ECal. To
maximize the efficiency, if the leading track has $p>1$ GeV/c and is fully
contained in the ECal, the reconstructed ECal EM energy is used to separate EM
showers from MIPs and it is required to be larger than 1 GeV. Otherwise the
ECal MIP/EM shower PID discriminator $R_{MIP/EM}$ has to agree with the EM
shower PID hypothesis. Events that pass the TPC and ECal PID are shown in
Figure 6. For the CC-$\bar{\nu}_{e}$ selection a complication arises since the
TPC energy loss curves for positrons and protons cross around 1 GeV/c (see
Figure 3) leaving a significant amount of proton background.
Figure 6: Momentum distribution after the TPC and ECal PID cuts for (a) FHC
CC-$\nu_{e}$, (b) RHC CC-$\nu_{e}$ and (c) RHC CC-$\bar{\nu}_{e}$ candidates.
The number of MC events is normalized to the data POT. Notice the significant
proton background around 1 GeV/c in the CC-$\bar{\nu}_{e}$ selection due to
the weakness of the TPC PID to separate positrons from protons, see the text
and Figure 3 for details. Additional PID is applied to remove this proton
background, see the text for more details. The last bin is the overflow bin.
(vi)
A search is made for a paired FGD1-TPC electron or positron track from a
potential photon conversion. The paired track must have charge opposite to the
leading track, start within 5 cm of the leading track, and agree with the
electron TPC $dE/dx$ hypothesis. If several paired tracks are found, the pair
with the lowest invariant mass is considered, since it is more likely to come
from a photon conversion. Pairs with invariant mass less than 110 $\rm
MeV/c^{2}$ are removed.
(vii)
Veto P0D, TPC and ECal activity upstream of the vertex and remove events with
additional vertices in FGD1. Events with multiple vertices are more likely to
come from a $\nu_{\mu}$ interaction with one or more $\pi^{0}$ in the final
state.
(viii)
For the CC-$\bar{\nu}_{e}$ selection, additional selection criteria are
applied if the leading positively charged track has $p>600$ MeV/c, the region
which is contaminated by the proton background. If the leading positively
charged track produces shower activity in FGD2, then the event is selected. If
the leading positively charged track enters the ECal, the proton background
can be removed by comparing the ECal EM energy ($E$) and the TPC momentum
($p$) using a cut $E/p>0.65$; in addition, the $R_{EM/HIP}$ shower PID
discriminator has to agree with the EM shower hypothesis.
(ix)
For the CC-$\bar{\nu}_{e}$ selection, if the leading positively charged track
stops in FGD2, the FGD2 energy loss must not agree with the proton hypothesis.
10. (x)
Remove external background by comparing the time stamps of the leading track
between FGD1 and the ECal. This cut aims to remove tracks that originate in
the ECal and stop in FGD1 but are mis-reconstructed with the wrong direction.
11. (xi)
Check if the leading track is broken inside FGD1. A track is broken if it
originates in FGD1 and is not reconstructed as a single track, but is broken
into two or more components. In such pathological cases the leading track
could originate outside the fiducial volume but be mis-reconstructed within it.
If the leading track follows an isolated FGD1 track then the event is removed.
Figure 7 summarises the CC-$\nu_{e}$ and CC-$\bar{\nu}_{e}$ selections.
Figure 7: Summary of the CC-$\nu_{e}$ and CC-$\bar{\nu}_{e}$ selections in
FGD1. See the text for the details of each cut.
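For illustration, the invariant mass used in cut (vi) can be computed assuming
the electron mass for both tracks. The following minimal Python sketch shows
the computation; the function name and the numerical inputs are placeholders,
not the analysis code:

```python
import numpy as np

M_E = 0.511  # electron mass in MeV/c^2

def invariant_mass(p1, p2):
    """Invariant mass (MeV/c^2) of two tracks, both assumed to be
    electrons/positrons, given their momentum three-vectors in MeV/c."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    e1 = np.sqrt(np.dot(p1, p1) + M_E**2)  # track energies
    e2 = np.sqrt(np.dot(p2, p2) + M_E**2)
    p_tot = p1 + p2
    m2 = (e1 + e2)**2 - np.dot(p_tot, p_tot)
    return np.sqrt(max(m2, 0.0))

# Pairs with invariant mass below 110 MeV/c^2 are tagged as photon conversions.
if invariant_mass([20.0, 5.0, 150.0], [-5.0, 2.0, 300.0]) < 110.0:
    print("pair consistent with a photon conversion")
```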
### 4.3 Final selection
The momentum and angular (with respect to the neutrino direction)
distributions of all selected CC-$\nu_{e}$ and CC-$\bar{\nu}_{e}$ candidates
are shown in Figures 8 and 9, respectively. These plots also show the total
systematic uncertainty on the MC event yields, which is discussed in section
6. A significant data deficit is observed at low momentum ($p<600$ MeV/c) in
the FHC CC-$\nu_{e}$ channel. In this region the photon background is dominant
and has significant systematic uncertainties associated with $\pi^{0}$
production. Roughly a third of the photon background comes from neutrino
interactions outside of FGD1. These external backgrounds could originate from
neutrino interactions on heavy targets, like iron, copper or lead, with
significant final state interaction systematic uncertainties. In addition,
roughly another third of the photon background comes from NC interactions,
which are poorly measured. The final third of the photon background comes from
CC-$\nu_{\mu}$ and CC-$\bar{\nu}_{\mu}$ interactions, usually when the muon is
emitted at high angles and is lost. On such occasions the most energetic of
the other tracks is selected as the leading track. A similar data deficit is
also observed in the statistically poorer RHC CC-$\bar{\nu}_{e}$ channel. In
addition, an excess of events has been observed in the RHC channels at high
momenta (more visible in the RHC CC-$\bar{\nu}_{e}$ channel). For the photon
background produced from $\nu_{\mu}$ and $\bar{\nu}_{\mu}$ interactions in
FGD1, roughly 10% comes from NC DIS interactions in all three selections.
The relevant fraction of FGD1 CC DIS events entering the photon background is
approximately 4% in the CC-$\bar{\nu}_{e}$ selection, 16% in the RHC
CC-$\nu_{e}$ selection and 20% in the FHC CC-$\nu_{e}$ selection. The
differences are due to
the additional selection criteria applied to CC-$\bar{\nu}_{e}$ and the
presence of protons which can be selected as the leading track instead of the
primary muon or background positron.
Figure 8: Momentum distribution of the selected electron and positron
candidates for (a) FHC CC-$\nu_{e}$, (b) RHC CC-$\nu_{e}$ and (c) RHC
CC-$\bar{\nu}_{e}$. The number of MC events is normalized to the data POT. The
effect of the total systematic uncertainty on the MC event yields (see section
6 for details) is also shown on these plots. The last bin is the overflow bin.
Figure 9: Angular distribution of the selected electron and positron
candidates for (a) FHC CC-$\nu_{e}$, (b) RHC CC-$\nu_{e}$ and (c) RHC
CC-$\bar{\nu}_{e}$. The number of MC events is normalized to the data POT. The
effect of the total systematic uncertainty on the MC event yields (see section
6 for details) is also shown on these plots. The last bin includes all
backward-going candidates.
Most of the efficiency loss is observed at low momentum since the electron and
muon/pion $dE/dx$ energy loss curves cross around 150 MeV/c (see Figure 3). In
addition, high angle tracks that do not enter the TPC are not selected and the
events are lost. Another important source of efficiency loss is due to
electron shower or bremsstrahlung in FGD1. As a result the primary electron
track does not enter the TPC or another track is selected as the leading
track. As estimated from the MC, 35-45% of the signal electrons or positrons
are lost because the primary electron track does not enter the TPC. The
efficiency loss is larger in the FHC CC-$\nu_{e}$ channel since the electron
momentum spectrum is softer and at higher angles. The true vs reconstructed
momentum and angular distributions in the MC for the selected signal electrons
and positrons are shown in Figure 10. The effect of bremsstrahlung is visible
as the reconstructed momentum spectrum is biased towards lower momenta. A
summary of efficiencies and purities is shown in Table 1.
Figure 10: Distribution of the true vs reconstructed values of momentum (left)
and angle (right) for signal electrons and positrons that passed all cuts in
the MC. The effect of bremsstrahlung is visible on the left plot, see the text
for details.
The muon mis-identification probability (probability of a muon to be mis-
identified as an electron after applying the PID) was studied in previous T2K
publications ND280NueSel with very good agreement between data and MC.
Similarly, the proton mis-identification probability is important for the
CC-$\bar{\nu}_{e}$ selection. The same PID criteria as in the
CC-$\bar{\nu}_{e}$ selection are applied to a high-purity independent sample
of protons, and the number of protons that survive is counted. An independent
control sample that
can be used is the FHC CC-$\bar{\nu}_{e}$ selection. This channel has a tiny
signal contribution and a much larger proton background and it is not used in
the cross-section measurements. Before applying the proton rejection cuts
(viii) and (ix), approximately 94% of the leading tracks selected with $p>600$
MeV/c and not entering the ECal are protons. The measured proton mis-
identification probability is the fraction of protons that survive from these
independent proton enriched samples and is $\left(4.6\pm 0.8\right)$% for the
data compared to $\left(5.0\pm 0.3\right)$% in the MC. The errors are
statistical only. The proton purity is lower in the case where the leading
track enters the ECal and is approximately 70% with the rest to be mostly
positrons. Due to the relatively low proton purity of this sample, only an
approximate proton mis-identification probability can be measured in this
case, $\left(9.4\pm 0.1\right)$% in the data compared to $\left(11.9\pm
0.05\right)$% in the MC.
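For reference, the statistical error on such a survival fraction would, under
a simple binomial assumption (made here only for illustration), be
$\sigma_{\varepsilon}=\sqrt{\varepsilon\left(1-\varepsilon\right)/N},$
where $\varepsilon$ is the measured mis-identification probability and $N$ is
the size of the proton enriched control sample.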
### 4.4 Event selection using alternative MC
The CC-$\nu_{e}$ and CC-$\bar{\nu}_{e}$ selections in the MC are repeated
using GENIE (2.8.0) instead of NEUT (5.3.2) MC. There are some differences
between these two neutrino generators, see section 3 and for more details the
description in ND280numucc4pi . One of the most important is that the neutrino
multi-nucleon interaction simulations are turned-off in this version of GENIE.
Efficiencies and purities for NEUT and GENIE agree quite well. Both the NEUT
and GENIE predictions disagree with the data at low momenta in the FHC
CC-$\nu_{e}$ channel. The prediction of the photon background in
particular is similar in both neutrino generators. Tables 1 and 2 summarize
the event selections using NEUT and GENIE.
Table 1: Summary of efficiency, purity and number of MC events normalised to the $11.92\times 10^{20}$ POT in the FHC beam and $6.29\times 10^{20}$ POT in the RHC beam for the CC-$\nu_{e}$ and CC-$\bar{\nu}_{e}$ channels using NEUT (5.3.2) and GENIE (2.8.0) MC, in addition to the number of data events that survive all cuts in each channel.
Channel | Efficiency | Purity | MC Events | Data events
---|---|---|---|---
NEUT FHC CC-$\nu_{e}$ | 0.26 | 0.54 | 797.07 | 697
GENIE FHC CC-$\nu_{e}$ | 0.27 | 0.53 | 769.17 | 697
NEUT RHC CC-$\nu_{e}$ | 0.33 | 0.48 | 175.92 | 176
GENIE RHC CC-$\nu_{e}$ | 0.33 | 0.44 | 168.10 | 176
NEUT RHC CC-$\bar{\nu}_{e}$ | 0.31 | 0.54 | 99.99 | 95
GENIE RHC CC-$\bar{\nu}_{e}$ | 0.30 | 0.51 | 99.21 | 95
Table 2: Breakdown of the number of CC-$\nu_{e}$ and CC-$\bar{\nu}_{e}$ events
selected in FGD1 according to their category for NEUT (5.3.2) and GENIE
(2.8.0) MC. The number of events is normalized to data POT. The photon
background is separated into events with a true vertex in FGD1 (In-FGD) and
events with a true vertex out of FGD1 (OO-FGD).
Channel | Signal (%) | In-FGD $\gamma$ (%) | OO-FGD $\gamma$ (%) | $\mu^{\pm}$ (%) | Proton (%) | Other (%)
---|---|---|---|---|---|---
NEUT FHC CC-$\nu_{e}$ | 429.16 (53.9) | 162.23 (20.4) | 78.09 (9.8) | 35.67 (4.5) | - | 91.92 (11.4)
GENIE FHC CC$-\nu_{e}$ | 409.23 (53.5) | 152.56 (20.0) | 78.00 (10.2) | 33.29 (4.4) | - | 96.10 (12.0)
NEUT RHC CC-$\nu_{e}$ | 83.62 (47.5) | 42.41 (24.1) | 20.23 (11.5) | 6.38 (3.6) | - | 23.28 (13.2)
GENIE RHC CC-$\nu_{e}$ | 73.28 (43.6) | 43.46 (25.9) | 21.67 (12.9) | 6.33 (3.8) | - | 23.35 (13.9)
NEUT RHC CC-$\bar{\nu}_{e}$ | 53.85 (53.9) | 18.76 (18.8) | 12.47 (12.5) | 1.22 (1.2) | 6.52 (6.5) | 7.17 (7.2)
GENIE RHC CC-$\bar{\nu}_{e}$ | 50.49 (51.2) | 21.28 (21.5) | 11.43 (11.5) | 1.74 (1.7) | 7.20 (7.3) | 7.07 (7.1)
## 5 Photon background control samples
Since the photon background is the most important in the electron
(anti-)neutrino selections, a dedicated photon control sample of electrons and
positrons from photon conversions is selected to constrain this background.
Photon candidates are selected from two nearby electron-like FGD1-TPC tracks
of opposite charge with low invariant mass that start in the FGD1 fiducial
volume.
### 5.1 Selection of photon candidates
The steps to select photon candidates are:
1. (i)
Only events during periods of good beam and detector quality are used. The
event time has to be reconstructed within one of the eight distinct beam
bunches.
2. (ii)
The highest momentum negatively charged or highest momentum positively charged
FGD1-TPC track (leading track) with a vertex in the FGD1 fiducial volume is
selected.
3. (iii)
The leading track must be compatible with the electron TPC $dE/dx$ hypothesis.
If the leading track enters the ECal with momentum $p>1$ GeV/c and ECal
energy $E$, then $E/p>0.5$ is required in order to clean up the high momentum
tail.
4. (iv)
Require a second track with opposite charge to the leading track, also
compatible with the electron TPC $dE/dx$ hypothesis and with a starting
position within 5 cm from the primary track.
5. (v)
The invariant mass calculated from the leading and paired tracks must be less
than 55 $\rm MeV/c^{2}$. The distributions of the invariant mass of the
selected $e^{-}e^{+}$ pairs are shown in Figure 11. The invariant mass cut is
very effective to remove backgrounds from misidentified muons, protons and
electrons from CC-$\nu_{e}$ interactions.
Figure 11: Invariant mass of electron-like FGD1-TPC pairs with opposite charge
for (a) FHC selecting electron as the leading track, (b) RHC selecting
electron as the leading track and (c) RHC selecting positron as the leading
track. The number of MC events is normalized to the data POT. Last bin is the
overflow bin. The arrow at 55 MeV/$c^{2}$ indicates the final photon to
$e^{-}e^{+}$ conversion cut.
6. (vi)
Although the photon selection at this stage is very pure, it is contaminated
by external photons (photons from neutrino interactions outside FGD1). To
remove external photons the same veto cuts used in the CC-$\nu_{e}$ and
CC-$\bar{\nu}_{e}$ selections are applied.
The signal and background categories are the same as for the CC-$\nu_{e}$ and
CC-$\bar{\nu}_{e}$ selections. The momentum and angular distributions of the
selected photon candidates are shown in Figures 12 and 13, respectively. The
systematic uncertainties on the MC event yields are also shown in these plots,
see section 6 for details. An MC excess below 300 MeV/c is visible. In the
angular distributions a significant MC excess is observed at high angles in
the FHC CC-$\nu_{e}$ selection but not in the photon control selection
(Figures 9 and 13).
Figure 12: Momentum distribution of the selected photon candidates for (a) FHC
selecting electron as the leading track, (b) RHC selecting electron as the
leading track and (c) RHC selecting positron as the leading track. The number
of MC events is normalized to the data POT. The effect of the total systematic
uncertainty on the MC event yields (see section 6 for details) is also shown
on these plots. Last bin is the overflow bin.
Figure 13: Angular distribution of the selected photon candidates for (a) FHC
selecting electron as the leading track, (b) RHC selecting electron as the
leading track and (c) RHC selecting positron as the leading track. The number
of MC events is normalized to the data POT. The effect of the total systematic
uncertainty on the MC event yields (see section 6 for details) is also shown
on these plots. The last bin includes all backward-going candidates.
The purity of the photon control samples is approximately 80% when selecting
electrons and 85% when selecting positrons. A significant fraction of the
selected photon candidates is classified in the other background category
where the photons are coming from a true conversion point outside the FGD1
fiducial volume, but are mis-reconstructed inside of it. Including these
events in the photon category definition increases the purity to approximately
90%. The rest of the other background contributes (5 - 6)% in the photon
control samples and comes from $\pi^{0}$ Dalitz decay, general mis-
reconstructions like broken tracks and accidental matching when at least one
of the two tracks selected in the pair is not an electron or positron. The signal
leakage (CC-$\nu_{e}$ or CC-$\bar{\nu}_{e}$) in the photon control samples is
around (3 - 4)% when the selected leading track is an electron. The leakage is
otherwise negligible when the selected leading track is a positron. The muon
background entering the photon control samples is less than 1% in all of the
cases.
When selecting electrons as the leading track in the photon control
selections, approximately 40% of the photon candidates come from external
photons, approximately 30% come from NC interactions in FGD1 and the other 30%
come from CC interactions in FGD1. When selecting positrons as the leading
track in the photon control selections the contributions are slightly
different. Approximately 45% of the photon candidates come from external
photons, approximately 35% come from NC interactions in FGD1 and 20% come from
CC interactions in FGD1. Often the event is rejected if the selected highest
momentum positively charged track is a proton. However, since protons are
invisible when selecting negatively charged tracks, the same event could be
selected when searching for the highest momentum negatively charged track.
This explains the difference in the number of photon candidates in RHC when
the leading track selected is the electron or the positron.
### 5.2 Comparisons with the photon background in the standard selections
Although the photon control samples are of high purity they have some
differences compared to the photon background entering the CC-$\nu_{e}$ and
CC-$\bar{\nu}_{e}$ selections. The main reason is that the photon control
selection requires both the electron and positron to be reconstructed in the
TPC, while the photon background is mostly related to events where either the
electron or positron is lost, usually when it is not very energetic or emitted
at high angles. As a result, the photon background consists mostly of highly
asymmetric events where most of the energy of the photon goes into one of the
two electrons. For high angle events it is predominantly due to one of the two
electrons being lost, resulting in more high angle photon background in the
CC-$\nu_{e}$ and CC-$\bar{\nu}_{e}$ selections.
This angular dependence introduces different external photon contributions to
the photon background and the photon control selection. Most of the external
photons entering the photon control samples come from neutrino interactions in
the P0D or in the aluminium frame of TPC1. For the photon background, however,
a significant population of external photons are also from neutrino
interactions in the ECals. The photons mostly come from $\pi^{0}$ decays and
Table 3 shows the different contributions to the photon background and the
photon control selections from CC and NC interactions and from external
photons. Despite the differences discussed, the origin of the photon
background entering the CC-$\nu_{e}$ and CC-$\bar{\nu}_{e}$ selections and the
photon control selections is similar. This provides confidence that the photon
control samples can be used to constrain the photon background in the signal
channels. Additional simulation studies are also performed to check for shape
variations in momentum and angle in the photon selections and in the photon
background in the signal selections. These studies include the variation of
the relevant fraction of CC/NC photon events by a factor of 2, weighting the
nominal MC by varying the Delta resonance width by $\pm 1\sigma$ and varying
the external photon background between (40 - 75)% depending on the target
material in which the neutrino interaction occurred. In all cases the effect
on the momentum
and angular shapes is found to be very small.
Table 3: Comparison of the photon background entering the CC electron
(anti-)neutrino selections and the photon control selections, split into
different $\pi^{0}$ contributions from CC and NC interactions in FGD1 and into
external photons. Out of fiducial volume (OOFV) photons are separated into
events where the true neutrino vertex is in FGD1 (In-FGD) and into events
where the true neutrino vertex is out of FGD1 (OO-FGD).
Interaction Type | FHC CC-$\nu_{e}$ (%) | Photon Selection (%) | RHC CC-$\nu_{e}$ (%) | Photon Selection (%) | RHC CC-$\bar{\nu}_{e}$ (%) | Photon Selection (%)
---|---|---|---|---|---|---
CC $0\pi^{0}$ | 4.5 | 4.3 | 4.8 | 6.9 | 1.1 | 5.4
CC $1\pi^{0}$ | 15.7 | 14.6 | 14.7 | 12.8 | 6.7 | 11.8
CC $>1\pi^{0}$ | 6.1 | 4.7 | 5.4 | 4.5 | 1.9 | 3.8
NC $0\pi^{0}$ | 3.6 | 3.6 | 2.6 | 3.0 | 1.9 | 2.3
NC $1\pi^{0}$ | 24.8 | 28.5 | 26.7 | 30.5 | 35.1 | 31.1
NC $>1\pi^{0}$ | 4.3 | 5.1 | 4.7 | 4.2 | 2.8 | 3.6
OOFV (In-FGD) | 8.5 | 7.4 | 8.8 | 7.8 | 10.7 | 8.9
OOFV (OO-FGD) | 32.5 | 31.8 | 32.3 | 30.2 | 39.9 | 33.1
## 6 Systematic uncertainties
Systematic uncertainties affecting the MC prediction on event yields are
separated into five main categories: cross-section modelling, final state
interactions, detector, external backgrounds and flux.
Cross-section modelling. The cross-section interaction modelling in NEUT and
GENIE is briefly described in section 3 and in detail in previous T2K
publications ND280numucc4pi ; T2KOscLong . In this section, the systematic
uncertainties relevant to cross-section modelling parameters will be briefly
discussed. Neutrino cross-section parameters in NEUT relevant to charged-
current quasi-elastic interactions are the axial mass ($M_{A}^{QEL}$ = 1.21
$\pm$ 0.41 $\rm GeV/c^{2}$), binding energy ($E_{B}^{C}$ = 25.0 $\pm$ 9.0 MeV)
and Fermi momentum ($p_{F}^{C}$ = 223.0 $\pm$ 31.0 MeV/c). Binding energy and
Fermi momentum are target dependent; for this analysis only those relevant to
carbon are considered. For multi-nucleon interactions, a 100% normalization
uncertainty is assumed. The CC resonant production model has three parameters
in NEUT: the axial mass ($M_{A}^{RES}$ = 0.95 $\pm$ 0.15 $\rm GeV/c^{2}$), the
normalization of the axial form factor for resonant pion production
($CA_{5}^{RES}$ = 1.01 $\pm$ 0.12) and the normalisation of the isospin non-
resonant component ($I_{\frac{1}{2}}$ = 1.3 $\pm$ 0.2). For the CC DIS process
an energy dependent normalisation uncertainty (10% at 4 GeV) is considered.
For CC coherent interactions a 100% normalisation uncertainty is considered.
For neutral-current interactions, due to poor constraints from external data,
a 30% normalisation uncertainty is applied. The effect of the cross-section
uncertainties on the event yields is evaluated by shifting each cross-section
parameter by $\pm 1\sigma$ and shifting the nominal MC.
Final state interactions. The pion final state interaction systematic
uncertainties include the effects of absorption, inelastic scattering, charge
exchange and quasi-elastic scattering inside the nucleus. A full description
can be found in previous T2K publications ND280numucc4pi ; T2KOscLong .
Similarly with the cross-section uncertainties, the effect of final state
interaction systematic uncertainties on the event yields is evaluated by
varying simultaneously the final state interaction effects by $\pm 1\sigma$
and shifting the nominal MC.
Detector. Detector systematic uncertainties encapsulate the performance of
each ND280 sub-detector (FGDs, TPCs and ECals). They are applied to simulated
events and are separated into three categories: normalization, selection
efficiency and variation of the observable. Normalization systematics are
applied as a single weight to all events. Efficiency systematics are applied
as a weight that depends on one or more observables. Variation systematics are
treated by varying the observables and redoing the event selections. Detector
systematic uncertainties considered and their treatment are summarised in
Table 4.
Detector systematics are evaluated using high purity ($>$ 90%) control samples
from cosmic and through-going muons, electrons and positrons from photon
conversions and protons from neutrino interactions. ECal related uncertainties
are evaluated using the same methodology described in ND280NueSel . All other
detector systematics, except FGD2 shower efficiency, are evaluated in the same
way as explained in ND280NueSel ; ND280numucc4pi ; T2KOscLong .
The FGD2 shower efficiency describes the probability of electrons and protons
originating in FGD1 to shower in FGD2. Since FGD2 is a thin detector and
cannot contain showers, a shower is defined as multiple FGD2-TPC3 tracks
produced when the leading track passes through FGD2. Since this systematic is
only relevant for the CC-$\bar{\nu}_{e}$ channel, the uncertainty is evaluated
using events with single electron or proton tracks in the neutrino beam
originating in FGD1, passing through FGD2 and comparing the FGD2 shower
efficiencies for data and MC.
Table 4: List of detector systematic uncertainties and their treatment for simulated events. Normalization systematics are applied as a single weight to all events. Efficiency systematics are applied as a weight that depends on one or more observables. Variation systematics are treated by varying the observables and redoing the event selection.
Systematic | Treatment | Comment
---|---|---
TPC tracking efficiency | efficiency |
TPC charge mis-identification | efficiency |
TPC momentum resolution and scale | variation |
B-field distortions | variation |
TPC PID | variation |
FGD-TPC matching efficiency | efficiency |
TPC-ECal matching efficiency | efficiency |
FGD2 PID | variation | Only applied to CC-$\bar{\nu}_{e}$
FGD2 shower efficiency | efficiency | Only applied to CC-$\bar{\nu}_{e}$
FGD1 mass | normalisation |
TPC, P0D and ECal pile-up | normalisation |
ECal $R_{MIP/EM}$ PID | efficiency |
ECal $R_{EM/HIP}$ PID | efficiency | Only applied to CC-$\bar{\nu}_{e}$
ECal EM energy resolution and scale | variation |
Pion and proton secondary interactions | efficiency |
Sand interactions | efficiency |
FGD1-ECal time resolution | variation |
External backgrounds. These are related to the uncertainties associated with
photons (or other particles) produced outside of the FGD1, either in other
sub-detectors or outside of ND280, that propagate inside FGD1. A large number
of these neutrino interactions are on heavier nuclear targets (aluminium, iron
and lead) with considerable cross-section modelling uncertainties. A detailed
study of the external photon propagation in ND280 was performed in
ND280SingleGamma but only in limited angular regions. Outside these angular
regions there are large data/MC differences due to the poor simulation of
inactive material. Since the method developed in ND280SingleGamma is very
sensitive to the material density and composition (small changes can cause
large variation in systematic uncertainties), conservatively a 100% systematic
uncertainty on the external photon production and propagation is assumed. The
effect on the momentum and angular shapes for both the photon background in
the signal selections and the photon selections is studied with additional
simulations. The external photon events in the simulation are varied between
(40 - 75)% depending on the target material in which the neutrino interaction
occurred. The
effect on both momentum and angular shapes is found to be negligible.
Flux. Flux systematic uncertainties are calculated as a function of the
neutrino energy and they are correlated between the neutrino flavours and
between the neutrino and anti-neutrino beams. Flux systematic uncertainties
are larger at the high energy tail of the neutrino spectrum; for the
$\nu_{\mu}$ and $\bar{\nu}_{\mu}$ fluxes they are in the range (7.0 - 14)%. The
$\nu_{e}$ and $\bar{\nu}_{e}$ flux systematic uncertainties are shown in
Figure 14 and are dominated by the systematic uncertainties on hadron
production. The evaluation of the flux systematic uncertainties can be found
in previous T2K publications T2KOscLong ; t2kfluxerror .
Figure 14: Flux systematic uncertainties for the FHC $\nu_{e}$ flux (top
left), RHC $\nu_{e}$ flux (top right) and RHC $\bar{\nu}_{e}$ flux (bottom).
### 6.1 Effect of systematic uncertainties on the event yields
A summary of systematic uncertainties on signal and background MC event yields
for the CC-$\nu_{e}$ and CC-$\bar{\nu}_{e}$ selections is shown in Table 5.
The systematic uncertainties on signal yields are dominated by the flux (8 –
10%) and cross-section modelling (13 – 14%). The larger cross-section
systematic uncertainties come from the large uncertainties considered on the
quasi-elastic axial mass $M_{A}^{QEL}$ and multi-nucleon interactions, each
contributing (6.5 – 8.5)% to the total cross-section systematic uncertainty.
Detector systematic uncertainties on signal yields are (2 – 4)% with the most
important being the TPC PID and TPC-ECal matching efficiencies. For
CC-$\bar{\nu}_{e}$, the ECal PID and FGD2 shower efficiency, which are related
to the proton background rejection, are also important. For an inclusive CC
selection, final state interaction systematic uncertainties on signal yields
are small. They are only considered if a charged pion, after final state
interactions, becomes more or less energetic than the primary electron or when
there is a $\pi^{0}$ involved as the secondary electrons can be more or less
energetic than the primary electron. The total systematic uncertainty on the
signal yields is approximately (16 – 17)% in all the channels.
The systematic uncertainties on the MC background event yields are separated
into photon background and all other backgrounds. The total systematic
uncertainties on the MC photon background event yields are approximately (23 –
26)% in all channels and are dominated by the cross-section and external
systematic uncertainties. Cross-section systematic uncertainties (16 – 19)%
are dominated by the charged-current and neutral-current resonant and DIS
production models. The flux systematic uncertainties are around 8% and the
final state interaction systematic uncertainties are (1.5 – 3.0)%. Detector
systematic uncertainties are (3 – 6)%, with TPC PID, FGD1 and ECal time
resolutions, TPC-ECal matching efficiency and pion secondary interactions
being the most important. Approximately a third of the photon background comes
from neutrino interactions outside FGD1, either in other sub-detectors or
outside ND280, and the majority of these events populate the low momentum
and/or high angle regions.
The systematic uncertainties on the other background MC event yields vary
from (19 – 33)% since different sources of background contribute to each
channel. The biggest difference comes from the external background which
dominates the systematic uncertainties on the other background event yields
and is different in each channel since the neutrino flux is different. Flux
systematic uncertainties are around 8% and the cross-section systematic
uncertainties are around (11 – 12)%. Detector systematic uncertainties are
(4.0 – 6.5)%, which are larger than the corresponding detector systematic
uncertainties for signal and photon background event yields.
Table 5: Summary of systematic uncertainties on MC signal and background event yields. The total systematic uncertainty is the quadratic sum of all the systematic sources. Possible correlations between the different systematic sources are ignored.
| Source of uncertainty | Signal (%) | $\gamma$ background (%) | Other backgrounds (%)
---|---|---|---|---
FHC CC-$\nu_{e}$ | Detector | 2.96 | 3.02 | 3.91
External background | 0.00 | 17.25 | 29.07
Flux | 8.92 | 7.61 | 7.60
Final State Interactions | 0.52 | 2.78 | 3.72
Cross-section | 13.60 | 16.54 | 11.18
Total | 16.54 | 25.41 | 32.51
RHC CC-$\nu_{e}$ | Detector | 2.12 | 3.09 | 5.12
External background | 0.00 | 12.71 | 17.56
Flux | 8.11 | 8.28 | 8.23
Final State Interactions | 0.98 | 1.48 | 4.97
Cross-section | 13.45 | 17.71 | 10.67
Total | 15.88 | 23.57 | 23.26
RHC CC-$\bar{\nu}_{e}$ | Detector | 3.46 | 5.68 | 6.46
External background | 0.00 | 14.90 | 7.20
Flux | 9.95 | 8.33 | 8.01
Final State Interactions | 0.39 | 1.95 | 7.94
Cross-section | 12.98 | 18.88 | 12.01
Total | 16.72 | 26.15 | 19.11
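As a numerical check of the quadratic sum, the FHC CC-$\nu_{e}$ signal column
of Table 5 combines as follows (a minimal sketch using the table values):

```python
import numpy as np

# FHC CC-nu_e signal-yield uncertainties from Table 5, in percent.
sources = [2.96, 0.00, 8.92, 0.52, 13.60]
total = np.sqrt(np.sum(np.square(sources)))
print(round(total, 2))  # 16.54, matching the quoted total
```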
### 6.2 Effect of systematic uncertainties on the photon control samples
The systematic uncertainties on the photon control samples are roughly (20 –
23)% and are summarised in Table 6. The dominant sources are coming from the
external background and cross-section modelling.
Table 6: Effect of the systematic uncertainties on the photon control sample MC event yields selecting either electron as the leading track ($\gamma$-Elec.) or positron as the leading track ($\gamma$-Posi.). The total systematic uncertainty is the quadratic sum of all the systematic sources. Possible correlations between the different systematic sources are ignored.
Systematic uncertainty | FHC $\gamma$-Elec. (%) | RHC $\gamma$-Elec. (%) | RHC $\gamma$-Posi. (%)
---|---|---|---
Detector | 2.35 | 1.81 | 1.72
External background | 14.24 | 9.57 | 11.10
Flux | 7.62 | 8.29 | 8.26
Final State Interactions | 2.62 | 1.49 | 1.93
Cross-section | 16.49 | 15.28 | 15.67
Total | 23.35 | 19.98 | 21.06
## 7 Fit model
The flux integrated single differential cross-section as a function of the
electron or positron true momentum $p$ or true scattering angle
$\rm\cos(\theta)$ is expressed as
$\frac{d\sigma_{i}}{dk_{i}}=\frac{N_{i}}{\epsilon_{i}}\times\frac{1}{T\Phi\Delta
k_{i}},$ (1)
where $k$ is either $p$ or $\cos(\theta)$, $N_{i}$ is the number of signal
events in bin $i$, $\epsilon_{i}$ is the efficiency in bin $i$, $\Phi$ is the
neutrino flux, $T$ the number of target nucleons and $\Delta k_{i}$ is the
true momentum or true scattering angle bin interval.
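As a usage illustration of eq. (1), a minimal Python sketch with placeholder
inputs could look as follows; the function name and all numbers are
assumptions for the sketch, not the analysis values:

```python
import numpy as np

def dsigma_dk(n_signal, efficiency, n_targets, flux, bin_width):
    """Flux-integrated differential cross-section per bin, following eq. (1):
    dsigma/dk = (N_i / eps_i) * 1 / (T * Phi * Delta k_i)."""
    n_signal = np.asarray(n_signal, float)
    efficiency = np.asarray(efficiency, float)
    bin_width = np.asarray(bin_width, float)
    return n_signal / efficiency / (n_targets * flux * bin_width)

# Placeholder inputs: two momentum bins.
xs = dsigma_dk(n_signal=[120.0, 45.0],   # fitted signal events per bin
               efficiency=[0.30, 0.35],  # per-bin efficiency
               n_targets=5.5e29,         # target nucleons (placeholder)
               flux=2.0e13,              # integrated flux per cm^2 (placeholder)
               bin_width=[1.3, 1.6])     # Delta p in GeV/c
print(xs)                                # cm^2 per nucleon per (GeV/c)
```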
The number of signal events in each bin is calculated using an extended,
binned maximum likelihood fit. The PDFs are constructed from histogram
templates using ROOT’s histogram-based HistFactory fit package histfactory ,
which is based on the RooStats roostats and RooFit roofit packages. The fit
is performed simultaneously on all the signal channels (FHC and RHC
CC-$\nu_{e}$ and RHC CC-$\bar{\nu}_{e}$) and their corresponding photon
control channel. Each channel is broken down to angular regions and each
angular region is broken down to one dimensional templates in momentum for
signal, photon background and other backgrounds. For the photon control
channels the small signal contribution is merged in the other backgrounds.
A likelihood is constructed from all the signal and background templates,
nuisance parameters $\vec{\theta}$ and their constraints
$C\left(\theta_{\kappa}^{0},\theta_{\kappa}\right)$ and a set of scaling
parameters $c$ and $g$ controlling the signal and photon background
respectively, given the observed data $\vec{N}$
$\displaystyle L\left(\vec{N}|c,g,\vec{\theta}\right)=$ (2)
$\displaystyle\left[\prod_{i=1}^{N_{region}}\prod_{j=1}^{N_{bin}}\frac{\left[c_{ij}S_{ij}(\vec{\theta})+g_{i}B_{ij}^{\gamma}(\vec{\theta})+B_{ij}^{other}(\vec{\theta})\right]^{n_{ij}}}{n_{ij}!}e^{-\left[c_{ij}S_{ij}(\vec{\theta})+g_{i}B_{ij}^{\gamma}(\vec{\theta})+B_{ij}^{other}(\vec{\theta})\right]}\right]$
$\displaystyle\times\left[\prod_{i=1}^{N_{region}}\prod_{l=1}^{N_{bin;PC}}\frac{\left[g_{i}B_{il;PC}^{\gamma}(\vec{\theta})+B_{il;PC}^{other}(\vec{\theta})\right]^{m_{il}}}{m_{il}!}e^{-\left[g_{i}B_{il;PC}^{\gamma}(\vec{\theta})+B_{il;PC}^{other}(\vec{\theta})\right]}\right]$
$\displaystyle\times\prod_{k=1}^{N_{syst}}C\left(\theta_{\kappa}^{0},\theta_{\kappa}\right),$
where $N_{region}$ is the number of angular regions which is the same for
signal and photon control channels, $N_{bin}$ ($N_{bin;PC}$) is the number of
bins in signal (photon control) region, $S_{ij}$ are the signal templates
contributing to reconstructed bin $j$ for region $i$, $B_{ij}^{\gamma}$
($B_{il;PC}^{\gamma}$) is the number of photon events in reconstructed bin $j$
($l$) for signal (photon control) region $i$, $B_{ij}^{other}$
($B_{il;PC}^{other}$) is the number of other background events in
reconstructed bin $j$ ($l$) for signal (photon control) region $i$, $n_{ij}$
($m_{il}$) are the number of entries in each bin in signal (photon control)
region and $N_{syst}$ is the number of nuisance parameters.
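To make the structure of eq. (2) concrete, a minimal sketch of the
corresponding negative log-likelihood for a single region is given below,
with purely linear nuisance-parameter variations on the signal template and
Gaussian constraints; this is an illustration under simplifying assumptions,
not the HistFactory implementation used in the analysis:

```python
import numpy as np

def nll(c, g, theta, S, B_gamma, B_other, S_var, n_obs, theta0, sigma_theta):
    """Negative log-likelihood for one signal region, following eq. (2).
    S, B_gamma, B_other: nominal templates (arrays over reconstructed bins);
    S_var: linear effect of each nuisance parameter on the signal template
    (shape: n_syst x n_bins). Constant log(n!) terms are dropped."""
    S_shifted = S + theta @ S_var                  # vary the signal template
    mu = c * S_shifted + g * B_gamma + B_other     # expected events per bin
    mu = np.clip(mu, 1e-9, None)                   # protect the logarithm
    poisson = np.sum(mu - n_obs * np.log(mu))      # extended Poisson terms
    gauss = 0.5 * np.sum(((theta - theta0) / sigma_theta) ** 2)
    return poisson + gauss

# Placeholder two-bin region with one nuisance parameter.
S = np.array([30.0, 50.0]); Bg = np.array([10.0, 5.0]); Bo = np.array([4.0, 2.0])
Svar = np.array([[1.5, 2.0]])
print(nll(1.0, 1.0, np.zeros(1), S, Bg, Bo, Svar, np.array([45.0, 60.0]),
          np.zeros(1), np.ones(1)))
```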
### 7.1 Propagation of systematic uncertainties
Systematic uncertainties are included in the fit as nuisance parameters and
are calculated as $\pm 1\sigma$ variations of the nominal samples in the
signal and photon control samples, $S_{ij}(\vec{\theta})$,
$B_{ij}^{\gamma}(\vec{\theta})$, $B_{ij}^{other}(\vec{\theta})$,
$B_{il;PC}^{\gamma}(\vec{\theta})$ and $B_{il;PC}^{other}(\vec{\theta})$.
These variations can either change the normalisation or produce bin-dependent
shape differences or have a correlated effect on shape and normalisation. For
each variation (or set of variations) a nuisance parameter is used to
interpolate between the $\pm 1\sigma$ uncertainties with a Gaussian
constraint. Systematic uncertainties that are common between samples or
channels are fully correlated in the fit. A summary of the nuisance parameters
included in the fit is shown in Table 7.
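As an illustration of how a single nuisance parameter can interpolate between
the $\pm 1\sigma$ template variations, a minimal piecewise-linear scheme is
sketched below; HistFactory provides several interpolation options, and this
simple choice is an assumption made only for the sketch:

```python
import numpy as np

def interpolate_template(nominal, up, down, theta):
    """Piecewise-linear interpolation of a histogram template:
    theta = 0 returns the nominal, theta = +1 (-1) returns the
    +1 sigma (-1 sigma) varied template; |theta| > 1 extrapolates."""
    nominal, up, down = map(np.asarray, (nominal, up, down))
    if theta >= 0.0:
        return nominal + theta * (up - nominal)
    return nominal + theta * (nominal - down)

# Example: a three-bin template pulled by +0.5 sigma.
print(interpolate_template([10.0, 20.0, 5.0],
                           [11.0, 22.0, 5.5],
                           [ 9.0, 18.0, 4.5], theta=0.5))
```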
Variations from the cross-section uncertainties are calculated by varying each
cross-section parameter by $\pm 1\sigma$ and changing the nominal samples.
Some of the cross-section uncertainties may produce asymmetric variations and
these are considered in the fit. Variations related to the final state
interaction systematic uncertainties, including their correlations, are
estimated following the methodology described in T2KOscLong .
Variations from the flux uncertainties are calculated using the full beam
covariance taking into account all the correlations between the neutrino
beams, neutrino flavours and energy bins.
Variations of the nominal samples arising from the detector, pile-up and
external background systematics are evaluated using MC simulations varying the
systematics to change the number of events in each reconstructed bin. Three
nuisance parameters are used for the three pile-up systematics in each beam
mode (FHC or RHC). Four nuisance parameters in each beam mode are used to
describe the external background systematic uncertainties. The external
backgrounds are separated based on their origin (ND280 or sand interactions),
their background category (photon or other backgrounds) and their beam mode
(FHC or RHC).
MC statistical uncertainties, describing the finite size of the simulated
events in each sample, are also included as nuisance parameters in the fit
following the Barlow-Beeston barlowbeeston approach considering one nuisance
parameter per channel and bin.
Table 7: Summary of nuisance parameters related to systematic uncertainties
considered in the fit.
Source of uncertainty | Number of parameters | Constraint | Variation type
---|---|---|---
MC statistical | 29 | Poissonian | One per bin
Pile-up | 6 | Gaussian | Normalisation
External backgrounds | 8 | Gaussian | Shape and normalisation
Detector and flux | 15 | Gaussian | Shape and/or normalisation
Cross-section and final state interactions | 13 | Gaussian | Shape and normalisation
### 7.2 Binning choice
The choice of the binning depends on a number of factors, some of the most
important are: sufficient number of signal events in each bin, isolation of
the backgrounds in specific $p-\theta$ regions, event migration due to
bremsstrahlung and flat efficiency corrections.
The first criterion for the binning choice is to not consider high angle
events ($\theta>45^{\circ}$) since the acceptance due to detector effects is
almost zero. In addition, the photon background is large and the statistics in
the photon control channels is poor. The high angle regions and the low
momentum ($p<300$ MeV/c) bins are background enriched and are kept in the fit
as background validation regions. Approximately 75% of the external photon
background is contained in the low momentum and high angle regions. The signal
contribution in the low momentum bins ($p<300$ MeV/c) is tiny and, to help the
fit performance, it is kept constant in the fit. Two angular regions are
considered to better describe the middle and forward angular cases. The
momentum bins are identical in each angular region.
The momentum bins are optimised to minimize the effect of bremsstrahlung.
Since bremsstrahlung is not a detector smearing effect, but a physics effect
depending on the initial electron kinematics and the material traversed,
special requirements are considered to minimize this effect.
The (anti-)correlations between the momentum bins introduced by bremsstrahlung
are studied with MC simulations requiring them to be less than 50%. If the
chosen momentum binning fails this requirement, the momentum bins are expanded
to reduce the migration of events due to bremsstrahlung and the MC simulations
are repeated. Due to the large momentum bins chosen in this analysis, the
effect of bremsstrahlung can be efficiently handled in the fit.
Signal efficiencies are also a significant factor for the binning choice as
they should be flat to avoid model dependencies. The efficiencies in the two
angular regions in each signal channel are shown in Figure 15 and are
relatively flat with some small fluctuations observed between NEUT and GENIE
and in the low statistics bins. Although the cross-section measurements are
calculated in one dimension, fitting in $p-\theta$ is important to check for
model dependencies due to the efficiency corrections. After the total
statistical and systematic uncertainties are applied to the signal efficiencies,
the efficiency errors are artificially inflated to cover differences between
NEUT and GENIE and variations between momentum bins. The efficiencies with
statistical, systematic and inflation uncertainties are shown in Figure 16.
The binning choice for each signal channel is shown in Table 8. In total there
are 9 free parameters controlling the photon background (one for each angular
region) and 17 free parameters controlling the signal (one for each bin in the
table, except for the six lowest momentum bins which are kept constant in the
fit since the number of signal events is negligible).
Figure 15: Signal efficiencies for NEUT and GENIE MC using a finer binning than used in the cross-section measurements. Errors are statistical only.
Figure 16: Signal efficiencies in different angular regions for the three samples for NEUT MC with statistical, systematic and inflation uncertainties.
Table 8: Summary of the binning for CC-$\nu_{e}$ and CC-$\bar{\nu}_{e}$ inclusive channels included in the fit. Validation bins are background enriched and are used as extra fit validation regions. These bins are excluded from the cross-section measurements.
| Angular region ($\cos(\theta)$) | Momentum bin (GeV/c) | Comment
---|---|---|---
FHC CC-$\nu_{e}$ | -1.00 - 0.7071 | 0 - 30 | Validation bin
0.7071 - 0.88 | 0 - 0.3 | Validation bin
0.7071 - 0.88 | 0.3 - 1.6 |
0.7071 - 0.88 | 1.6 - 3.2 |
0.7071 - 0.88 | 3.2 - 30 |
0.88 - 1.00 | 0 - 0.3 | Validation bin
0.88 - 1.00 | 0.3 - 1.6 |
0.88 - 1.00 | 1.6 - 3.2 |
| 0.88 - 1.00 | 3.2 - 30 |
RHC CC-$\nu_{e}$ | -1.00 - 0.7071 | 0 - 30 | Validation bin
0.7071 - 0.95 | 0 - 0.3 | Validation bin
0.7071 - 0.95 | 0.3 - 1.6 |
0.7071 - 0.95 | 1.6 - 30 |
0.95 - 1.00 | 0 - 0.3 | Validation bin
0.95 - 1.00 | 0.3 - 1.6 |
| 0.95 - 1.00 | 1.6 - 30 |
RHC CC-$\bar{\nu}_{e}$ | -1.00 - 0.7071 | 0 - 30 | Validation bin
0.7071 - 0.92 | 0 - 0.3 | Validation bin
0.7071 - 0.92 | 0.3 - 1.6 |
0.7071 - 0.92 | 1.6 - 30 |
0.92 - 1.00 | 0 - 0.3 | Validation bin
0.92 - 1.00 | 0.3 - 1.6 |
| 0.92 - 1.00 | 1.6 - 30 |
## 8 Cross-section results
The fit is used to measure the number of signal events in all channels
including all systematic uncertainties as described in section 7. The best fit
results and the fit covariance matrix are used to measure the flux-integrated
single differential cross-sections $d\sigma/dp$ and $d\sigma/d\cos(\theta)$
using eq. (1).
Prior to fitting the data, the signal and background events are varied under
different model assumptions to create pseudo datasets generated from
variations of the nominal MC (toy experiments). These pseudo datasets are used
to check the fit performance, possible biases, any over-constraining of the
nuisance parameters, the impact of the nuisance parameters on the signal
normalisation parameters, and the dependence on the signal and background
models. In
addition, the cross-sections are measured using two generators to test
different model assumptions. The results are in good agreement with all the
tests providing confidence that our measurements are free from model
dependencies.
The differential cross-section results in electron and positron momentum, $\rm
d\sigma/dp$, using NEUT (5.3.2) or GENIE (2.8.0) as input MC are shown in the
top plot in Figure 17 and they are in agreement with the predictions. The
CC-$\nu_{e}$ cross-sections are expected to be larger in RHC since the
neutrino energy spectrum peaks at higher energy and is much broader, with a
larger contribution from higher energy neutrinos. Differences between the
results using either NEUT or GENIE simulations are expected due to small
differences in the efficiency corrections (Figure 15) and small differences in
the muon, proton and other backgrounds (Table 2) which are kept constant in
the fit. The cross-section results are dominated by the statistical
uncertainty, especially in RHC. The statistical uncertainty is estimated by
fixing all the nuisance parameters to their post-fit nominal values and
repeating the fit.
The differential cross-sections are also calculated in electron and positron
scattering angles, $\rm d\sigma/d\cos(\theta)$, for both NEUT and GENIE. They
are calculated in the same angular regions defined in Table 8 and for $p>300$
MeV/c. The results are shown in the bottom plot in Figure 17 and they are in
agreement with the NEUT and GENIE predictions.
Figure 17: CC-$\nu_{e}$ and CC-$\bar{\nu}_{e}$ inclusive cross-section results
in $d\sigma/dp$ (top) and $d\sigma/d\cos(\theta)$ (bottom) in a limited phase-
space ($p>300$ MeV/c and $\theta\leq 45^{\circ}$). The statistical uncertainty
is computed by fixing all the nuisance parameters to their post-fit nominal
values and redoing the fit. The systematic uncertainty is computed by
subtracting in quadrature the statistical uncertainty from the total
uncertainty.
The systematic uncertainties are propagated to the final cross-section
measurements using toy experiments. For each toy experiment the best-fit
values and the post-fit covariance are used to vary the number of signal
events. Simultaneously, the flux, efficiency and the number of targets are
also varied for each toy resulting in a new measurement of the cross-section
using equation 1. For N toy experiments the covariance, in a fractional form,
is computed from
$V_{ij}=\frac{1}{N}\sum_{t=1}^{N}\frac{\left(\frac{d\sigma_{i}^{t}}{dk_{i}}-\frac{d\sigma_{i}^{meas.}}{dk_{i}}\right)\left(\frac{d\sigma_{j}^{t}}{dk_{j}}-\frac{d\sigma_{j}^{meas.}}{dk_{j}}\right)}{\frac{d\sigma_{i}^{meas.}}{dk_{i}}\frac{d\sigma_{j}^{meas.}}{dk_{j}}},$
(3)
where $k$ is either $p$ or $\cos(\theta)$,
$\frac{d\sigma_{i}^{meas.}}{dk_{i}}$ is the measured differential
cross-section in bin $i$ and $\frac{d\sigma_{i}^{t}}{dk_{i}}$ is the
differential cross-section in bin $i$ calculated from toy experiment
variation $t$. The single differential cross-sections in momentum and
$\cos(\theta)$ are calculated using the same two dimensional fit and the
covariance matrix should include the correlations between $d\sigma/dp$ and
$d\sigma/d\cos(\theta)$. The full fractional covariance matrix is calculated
from Equation 3 and is shown in Figure 18.
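A minimal sketch of the fractional covariance of Equation 3, computed from an
ensemble of toy cross-section measurements, is given below; the arrays are
placeholders, whereas in the analysis the toys are generated from the
post-fit parameters:

```python
import numpy as np

def fractional_covariance(toys, measured):
    """Fractional covariance, Equation 3. toys: (n_toys, n_bins) array of
    toy cross-sections; measured: (n_bins,) measured cross-sections."""
    toys = np.asarray(toys, float)
    measured = np.asarray(measured, float)
    rel = (toys - measured) / measured        # relative deviations per toy
    return rel.T @ rel / toys.shape[0]        # average over toys

rng = np.random.default_rng(0)
measured = np.array([6.6, 14.6, 3.0])         # placeholder bins
toys = measured * (1.0 + 0.1 * rng.standard_normal((1000, 3)))
print(fractional_covariance(toys, measured))  # ~0.01 on the diagonal
```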
Figure 18: Cross-section fractional covariance matrix for $d\sigma/dp$ (bottom
left area) and $d\sigma/d\cos(\theta)$ (top right area) measurements for NEUT
(5.3.2). The top left and bottom right areas show the covariance between
$d\sigma/dp$ and $d\sigma/d\cos(\theta)$ measurements.
### 8.1 Total cross-sections in limited phase-space
The total cross-sections in the measured phase-space ($p>300$ MeV/c and
$\theta\leq 45^{\circ}$) using NEUT and GENIE MC are shown in Table 9. The
results are compatible with the NEUT and GENIE predictions, although larger
cross-sections are measured in RHC, but with large statistical uncertainties.
Table 9: Measurement of the flux integrated CC-$\nu_{e}$ and CC-$\bar{\nu}_{e}$ inclusive total cross-sections in a limited phase-space ($p>300$ MeV/c and $\theta\leq 45^{\circ}$) obtained using NEUT (5.3.2) and GENIE (2.8.0) MC. The statistical uncertainty is computed by fixing all nuisance parameters to their post-fit nominal values and redoing the fit. The systematic uncertainty is computed by subtracting in quadrature the statistical uncertainty from the total uncertainty. The mean of the neutrino energy, $<E>$, in each beam mode is also shown.
Selection | Measured $\sigma$ | Nominal $\sigma$ | $<E>$
---|---|---|---
| $[/10^{-39}\rm cm^{2}/nucleon]$ | $[/10^{-39}\rm cm^{2}/nucleon]$ | GeV
FHC CC-$\nu_{e}$ NEUT | $6.62\pm 1.32(\rm stat)\pm 1.30(\rm syst)$ | 7.18 | 1.28
GENIE | $6.93\pm 1.40(\rm stat)\pm 1.33(\rm syst)$ | 6.87 |
RHC CC-$\nu_{e}$ NEUT | $14.56\pm 4.90(\rm stat)\pm 2.31(\rm syst)$ | 12.96 | 1.98
GENIE | $14.73\pm 5.06(\rm stat)\pm 2.01(\rm syst)$ | 11.44 |
RHC CC-$\bar{\nu}_{e}$ NEUT | $3.01\pm 1.36(\rm stat)\pm 0.57(\rm syst)$ | 2.61 | 0.99
GENIE | $3.10\pm 1.46(\rm stat)\pm 0.53(\rm syst)$ | 2.51 |
### 8.2 Comparisons to additional models
Using the NUISANCE framework nuisance , the fit results are compared to cross-
section predictions from recent neutrino generator models in NEUT (5.4.0),
GENIE (2.12.10) and also from NuWro (19.02) nuwro . NEUT 5.4.0 uses a local
Fermi gas (instead of spectral function). Other interaction modelling and
final state interactions are similar to NEUT 5.3.2 (detailed in section 3).
GENIE 2.12.10 interaction modelling is similar to 2.8.0 (detailed in section
3), with the "empirical MEC" model for the description of multi-nucleon
interactions enabled. NuWro simulates the CC quasi-elastic process with the
Llewellyn-Smith model with an axial mass value of 1.03 $\rm GeV/c^{2}$. The
nuclear model is simulated using the relativistic Fermi gas including random
phase approximation corrections rpa . Multi-nucleon interactions are simulated
similarly to NEUT using the model from Nieves2p2h . For pion production a
single $\Delta$-model by Adler-Rarita-Schwinger adler is used for hadronic
mass W $<$ 1.6 $\rm GeV/c^{2}$ with an axial mass value of 0.94 $\rm
GeV/c^{2}$. A
smooth transition to DIS processes is made for W between 1.3 and 1.6 $\rm
GeV/c^{2}$. The total cross section is based on the Bodek and Yang approach
BodekYangNeut . Similar to NEUT, final state interactions are simulated using
a semi-classical cascade model.
The comparisons of the data to NEUT 5.4.0, GENIE 2.12.10 and NuWro 19.02 are
shown in Figure 19. A $\chi^{2}$ between the data measurements and each
neutrino generator model predictions is defined as
$\chi^{2}=\sum_{i}\sum_{j}\left(\frac{d\sigma_{i}^{meas.}}{dk_{i}}-\frac{d\sigma_{i}^{model}}{dk_{i}}\right)V_{ij}^{-1}\left(\frac{d\sigma_{j}^{meas.}}{dk_{j}}-\frac{d\sigma_{j}^{model}}{dk_{j}}\right),$
(4)
where $k$ is either $p$ or $\cos(\theta)$,
$\frac{d\sigma_{i}^{meas.}}{dk_{i}}$ is the differential cross-section
measurement in bin $i$, $\frac{d\sigma_{i}^{model}}{dk_{i}}$ is the
differential cross-section model prediction in bin $i$ and $V_{ij}$ is the
covariance matrix as defined in equation 3 and shown in Figure 18. The
$\chi^{2}$ is measured for each neutrino generator individually and is
summarised in Table 10. NEUT 5.4.0 has the lowest $\chi^{2}$ compared to our
data. GENIE 2.12.10 has a slightly larger $\chi^{2}$. The $\chi^{2}$ for NuWro
19.02 is significantly larger. The $\chi^{2}$ is also calculated individually
for the single differential cross-sections $d\sigma/dp$ and
$d\sigma/d\cos(\theta)$. A reduced covariance is used considering only the
momentum or $\cos(\theta)$ part of the full covariance in Figure 18. In these
cases the $\chi^{2}$ values, in both the momentum and $\cos(\theta)$
measurements, are smaller and similar for all neutrino generators. This
highlights the
importance of using the combined cross-section measurements in momentum and
$\cos(\theta)$ when doing model comparisons, rather than using each cross-
section measurement in momentum or $\cos(\theta)$ individually.
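A minimal sketch of the $\chi^{2}$ of Equation 4 is given below, assuming the
covariance has first been converted from the fractional form of Figure 18 to
absolute units; all numbers are placeholders:

```python
import numpy as np

def chi2(measured, model, cov):
    """Chi-square of Equation 4 with a full (absolute) covariance matrix."""
    d = np.asarray(measured, float) - np.asarray(model, float)
    return float(d @ np.linalg.solve(np.asarray(cov, float), d))

# Placeholder two-bin example with correlated uncertainties.
meas = [6.62, 14.56]
model = [7.18, 12.96]
cov = [[0.44, 0.10],
       [0.10, 2.40]]
print(chi2(meas, model, cov))
```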
Figure 19: Flux integrated CC-$\nu_{e}$ and CC-$\bar{\nu}_{e}$ inclusive cross-section results in a limited phase-space ($p>300$ MeV/c and $\theta\leq 45^{\circ}$) with comparisons to neutrino generator models from NEUT 5.4.0, GENIE 2.12.10 and NuWro 19.02 obtained using the NUISANCE framework. The top plot shows the results in momentum and the bottom plot the results in scattering angle. The $\chi^{2}$ is the total from the combined measurements in momentum and $\cos(\theta)$.
Table 10: The $\chi^{2}$ comparing data with neutrino generator models. The $\chi^{2}$ is calculated using Equation 4. The full covariance, as shown in Figure 18, is used for the $p-\cos(\theta)$ $\chi^{2}$ calculation. A reduced covariance considering only the momentum and $\cos(\theta)$ part of the full covariance is used to calculate the $p$-only and $\cos(\theta)$-only $\chi^{2}$ respectively. The number of degrees of freedom (ndof) for each $\chi^{2}$ is also shown.
Generator | $p-\cos(\theta)$ $\chi^{2}$ | $p$-only $\chi^{2}$ | $\cos(\theta)$-only $\chi^{2}$
---|---|---|---
| (ndof = 13) | (ndof = 7) | (ndof = 6)
NEUT 5.4.0 | 14.63 | 5.82 | 5.34
GENIE 2.12.10 | 16.32 | 4.16 | 4.55
NuWro 19.02 | 32.08 | 4.52 | 5.08
## 9 Summary and conclusions
Electron-like neutrino and anti-neutrino events are selected in the T2K off-
axis near detector ND280, using both FHC and RHC modes. A significant amount
of photon background populates the low momentum and high angle regions,
constrained by an independent photon control selection. The regions dominated
by the photon background also show significant data and MC discrepancies and
are dominated by large systematic uncertainties. The flux integrated single
differential cross-sections, as a function of momentum and scattering angle,
are measured by fitting simultaneously the CC inclusive selections and their
corresponding photon control selections. To minimize detector effects, the
cross-sections are measured in a limited phase-space, $p>300$ MeV/c and
$\theta\leq 45^{\circ}$. The results are consistent from the two fits with
both NEUT 5.3.2 and GENIE 2.8.0 predictions. The cross-section results are
also compared with more recent neutrino generator models using NEUT 5.4.0,
GENIE 2.12.10 and NuWro 19.02. The best agreement is observed with NEUT 5.4.0.
These are the first CC-$\nu_{e}$ cross-section measurements using both FHC and
RHC fluxes and the first CC-$\bar{\nu}_{e}$ cross-section measurement since
the Gargamelle measurements in 1978. The data release for this paper can be
found in nuedata .
###### Acknowledgements.
We thank the J-PARC staff for superb accelerator performance. We thank the
CERN NA61/SHINE Collaboration for providing valuable particle production data.
We acknowledge the support of MEXT, Japan; NSERC (Grant No. SAPPJ-2014-00031),
NRC and CFI, Canada; CEA and CNRS/IN2P3, France; DFG, Germany; INFN, Italy;
National Science Centre (NCN) and Ministry of Science and Higher Education,
Poland; RSF (Grant #19-12-00325) and Ministry of Science and Higher Education,
Russia; MICINN and ERDF funds, Spain; SNSF and SERI, Switzerland; STFC, UK;
and DOE, USA. We also thank CERN for the UA1/NOMAD magnet, DESY for the HERA-B
magnet mover system, NII for SINET4, the WestGrid and SciNet consortia in
Compute Canada, and GridPP in the United Kingdom. In addition, participation
of individual researchers and institutions has been further supported by funds
from ERC (FP7), RFBR project number 20-32-70196, "la Caixa” Foundation (ID
100010434, fellowship code LCF/BQ/IN17/11620050), the European Union’s Horizon
2020 Research and Innovation Programme under the Marie Sklodowska-Curie grant
agreements no. 713673 and no. 754496, and H2020 Grants No. RISE-
GA822070-JENNIFER2 2020 and RISE-GA872549-SK2HK; JSPS, Japan; Royal Society,
UK; French ANR Grant No. ANR-19-CE31-0001; and the DOE Early Career program,
USA.
## References
* (1) K. Abe et al. (T2K Collaboration), _The T2K Experiment_ , _Nucl. Instrum. Meth. A_ 659, 106, 2011.
* (2) J. Blietschau et al. (Gargamelle Collaboration), _Total Cross-Sections for electron-neutrino and anti-electron-neutrino Interactions and Search for Neutrino Oscillations and Decay_ , _Nucl. Phys. B_ 133, 205, 1978.
* (3) K. Abe et al. (T2K Collaboration), _Measurement of the Inclusive Electron Neutrino Charged Current Cross Section on Carbon with the T2K Near Detector_ , _Phys. Rev. Lett._ 113, 241803, 2014.
* (4) J. Wolcott et al. (MINERvA Collaboration), _Measurement of Electron Neutrino Quasielastic and Quasielasticlike Scattering on Hydrocarbon at $\left\langle E_{\nu}\right\rangle=3.6$ GeV_, _Phys. Rev. Lett._ 116, 081802, 2016.
* (5) B. Abi et al. (DUNE Collaboration), _Deep Underground Neutrino Experiment (DUNE), Far Detector Technical Design Report, Volume II DUNE Physics_ , _arXiv:2002.03005 [physics.ins-det]_ , 2020.
* (6) K. Abe et al. (Hyper-K Collaboration), _Hyper-Kamiokande Design Report_ , _arXiv:1805.04163 [physics.ins-det]_ , 2018.
* (7) K. Abe et al. (T2K Collaboration), _Measurement of double-differential muon neutrino charged-current interactions on $C_{8}H_{8}$ without pions in the final state using the T2K off-axis detector_, _Phys. Rev. D_ 93, 112012, 2016.
* (8) K. Abe et al. (T2K Collaboration), _Measurement of inclusive double-differential $\nu_{\mu}$ charged-current cross section with improved acceptance in the T2K off-axis near detector_, _Phys. Rev. D_ 98, 012004, 2018.
* (9) K. Abe et al. (T2K Collaboration), _Characterisation of nuclear effects in muon-neutrino scattering on hydrocarbon with a measurement of final-state kinematics and correlations in charged-current pionless interactions at T2K_ , _Phys. Rev. D_ 98, 032003, 2018.
* (10) G. D’Agostini _A multidimensional unfolding method based on Bayes’ theorem_ , _Nucl. Instrum. Methods_ A362, 487, 1995
* (11) K. Abe et al. (T2K Collaboration), _T2K neutrino flux prediction_ , _Phys. Rev. D_ 87, 012001, 2013.
* (12) T. Bohlen, F. Cerutti, M. Chin, A. Fasso, A. Ferrari, P. Ortega, A. Mairani, P. Sala, G. Smirnov, and V. Vlachoudis, _The FLUKA Code: Developments and Challenges for High Energy and Medical Applications_ , _Nuclear Data Sheets_ 120, 211, 2014.
* (13) A. Ferrari, P. R. Sala, A. Fasso, and J. Ranft, _FLUKA : A multi-particle transport code_ , _CERN-2005-010, SLAC-R-773, INFN-TC-05-11_ , 2005.
* (14) R. Brun, F. Carminati, and S. Giani, _GEANT: Detector Description and Simulation Tool_ , CERN-W5013, 1994.
* (15) C. Zeitnitz and T. A. Gabriel, _The GEANT-CALOR Interface_ , _Proceedings of International Conference on Calorimetry in High Energy Physics_ , 1993.
* (16) N. Abgrall et al. (NA61/SHINE Collaboration), _Measurements of cross sections and charged pion spectra in proton-carbon interactions at 31 GeV/c_ , _Phys. Rev. C_ 84, 034604, 2011.
* (17) N. Abgrall et al. (NA61/SHINE Collaboration), _Measurement of production properties of positively charged kaons in proton-carbon interactions at 31 GeV/c_ , _Phys. Rev. C_ 85, 035210, 2012.
* (18) N. Abgrall et al. (NA61/SHINE Collaboration), _Measurements of $\pi^{\pm}$, $K^{\pm}$, $K^{0}_{S}$, $\Lambda$ and proton production in proton-carbon interactions at 31 GeV/c with the NA61/SHINE spectrometer at the CERN SPS_, _Eur. Phys. J. C_ 76, 84, 2016.
* (19) S. Assylbekov et al., _The T2K ND280 Off-Axis Pi-Zero Detector_ , _Nucl. Instrum. Meth. A_ 686, 48, 2012.
* (20) P.-A. Amaudruz et al., _The T2K Fine-Grained Detectors_ , _Nucl. Instrum. Meth. A_ 696, 1, 2012.
* (21) N. Abgrall et al., _Time projection chambers for the T2K near detectors_ , _Nucl. Instrum. Meth. A_ 637, 25, 2011.
* (22) K. Allan et al, _The Electromagnetic Calorimeter for the T2K Near Detector ND280_ , _JINST_ 8, P10019, 2013.
* (23) S. Aoki et al., _The T2K Side Muon Range Detector (SMRD)_ , _Nucl. Instrum. Meth. A_ 698, 135, 2013.
* (24) K. Abe et al. (T2K Collaboration), _Measurement of the intrinsic electron neutrino component in the T2K neutrino beam with the ND280 detector_ , _Phys. Rev. D_ 89, 092003, 2014.
* (25) Y. Hayato, _A neutrino interaction simulation program library NEUT_ , _Acta Phys. Pol. B_ 40, 2477, 2009.
* (26) C. Andreopoulos et al., _The GENIE Neutrino Monte Carlo Generator_ , _Nucl. Instrum. Meth. A_ , 614, 87, 2010.
* (27) Melanie Day and Kevin S. McFarland, _Differences in Quasi-Elastic Cross-Sections of Muon and Electron Neutrinos_ , _Phys. Rev. D_ 86, 053003, 2012.
* (28) C. H. Llewellyn Smith, _Neutrino reactions at accelerator energies_ , _Phys. Rep._ 3, 261 - 379, 1972.
* (29) O. Benhar, A. Fabrocini, S. Fantoni, and I. Sick, _Spectral function of finite nuclei and scattering of GeV electrons_ , _Nucl. Phys. A_ 579, 493 - 517, 1994.
* (30) J. Nieves, I. R. Simo, and M. V. Vacas, _The nucleon axial mass and the MiniBooNE quasielastic neutrino–nucleus scattering problem_ _Phys. Lett. B_ 707, 72 - 75, 2012.
* (31) D. Rein and L.M. Sehgal, _Neutrino-excitation of baryon resonances and single pion production_ _Ann. Phys. (N.Y.)_ 133, 79 - 153, 1981.
* (32) M. Gluck, E. Reya, and A. Vogt, _Dynamical parton distributions revisited_ _Eur. Phys. J. C_ 5, 461 - 470, 1998.
* (33) A. Bodek and U. K. Yang, _Modeling Neutrino and Electron Scattering Cross Sections in the Few GeV Region with Effective LO PDFs_ _AIP Conf. Proc._ 670, 110, 2003.
* (34) A. Bodek and J. L. Ritchie, _Further studies of Fermi-motion effects in lepton scattering from nuclear targets_ _Phys. Rev. D_ 24, 1400, 1981.
* (35) A. Bodek and U. K. Yang, _A Unified Model for inelastic e - N and $\nu$ \- N cross sections at all $Q^{2}$_ _AIP Conf. Proc._ 792, 2005.
* (36) K. Abe et al. (T2K Collaboration), _Measurements of neutrino oscillation in appearance and disappearance channels by the T2K experiment with $6.6\times 10^{20}$ protons on target_, _Phys. Rev. D_ 91, 072010, 2015.
* (37) S. Agostinelli et al. (GEANT4 Collaboration), _GEANT4: A Simulation toolkit_ , _Nucl. Instrum. Meth. A_ 506, 250, 2003.
* (38) K. Abe et al. (T2K Collaboration), _Search for neutral-current induced single photon production at the ND280 near detector in T2K_ , _J. Phys. G: Nucl. Part. Phys._ 46, 08LT01, 2019.
* (39) K. Abe et al. (T2K Collaboration), _Measurement of neutrino and antineutrino oscillations by the T2K experiment including a new additional sample of $\nu_{e}$ interactions at the far detector_, _Phys. Rev. D_ 96, 092006, 2017.
* (40) K.S. Cranmer, G. Lewis, L. Moneta, A. Shibata and W. Verkerke, _HistFactory: A tool for creating statistical models for use with RooFit and RooStats_ , _Tech. Rep. CERN-OPEN-2012-016_ (2012).
* (41) L. Moneta, K. Belasco, K.S. Cranmer, S. Kreiss, A. Lazzaro, D. Piparo, G. Schott, W. Verkerke and M. Wolf, _The RooStats Project, PoS ACAT2010_ , 057, _arXiv:1009.1003 [physics.data-an]_ , 2010.
* (42) W. Verkerke and D. P. Kirkby, _The RooFit toolkit for data modeling, eConf C0303241 MOLT007_ , _arXiv:physics/0306116 [physics.data-an]_ , 2003.
* (43) R.J. Barlow and C. Beeston, _Fitting using finite MC samples_ , _Computer Physics Comm. 77_, 219 - 228, 1993.
* (44) P.Stowell et al, _NUISANCE: a neutrino cross-section generator tuning and comparison framework_ , _JINST 12_, P01016, 2017.
* (45) T.Golan, J.T.Sobczyk and J.Zmuda, _NuWro: the Wrocław Monte Carlo Generator of Neutrino Interactions_ , _Nucl. Phys. Proc. Suppl. 229-232_, 2012
* (46) J. Nieves, I. Ruiz Simo, and M. J. Vicente Vacas, _Inclusive charged-current neutrino-nucleus reactions_ , _Phys. Rev. C_ 83, 045501, 2011.
* (47) K. M. Graczyk, D. Kielczewska, P. Przewlocki, J. T. Sobczyk, _$C^{A}_{5}$ axial form factor from bubble chamber experiments_, _Phys. Rev. D_ 80, 093001, 2009.
* (48) https://t2k-experiment.org/results/2020_nuecc
|
2024-09-04T02:54:54.948286 | 2020-02-27T09:40:18 | 2002.11998 | {
"authors": "Andrea Coladangelo, Or Sattath",
"full_text_license": null,
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"provenance": "arxiv-papers-0000.json.gz:25908",
"submitter": "Andrea Coladangelo",
"url": "https://arxiv.org/abs/2002.11998"
} | arxiv-papers | l3regexThis package is obsolete
# A Quantum Money Solution to the Blockchain Scalability Problem
Andrea Coladangelo (Computing and Mathematical Sciences, Caltech) and Or Sattath (Computer Science Department, Ben-Gurion University)
###### Abstract
We put forward the idea that classical blockchains and smart contracts are
potentially useful primitives not only for classical cryptography, but for
quantum cryptography as well. Abstractly, a smart contract is a functionality
that allows parties to deposit funds, and release them upon fulfillment of
algorithmically checkable conditions, and can thus be employed as a formal
tool to enforce monetary incentives.
In this work, we give the first example of the use of smart contracts in a
quantum setting. We describe a simple hybrid classical-quantum payment system
whose main ingredients are a classical blockchain capable of handling stateful
smart contracts, and quantum lightning, a strengthening of public-key quantum
money introduced by Zhandry [Zha19]. Our hybrid payment system employs quantum
states as banknotes and a classical blockchain to settle disputes and to keep
track of the valid serial numbers. It has several desirable properties: it is
decentralized, requiring no trust in any single entity; payments are as quick
as quantum communication, regardless of the total number of users; when a
quantum banknote is damaged or lost, the rightful owner can recover the lost
value.
Note: This work supersedes two previous independent works, [Col19] and
[Sat19].
###### Contents
* 1 Introduction
* 2 Preliminaries
  * 2.1 Notation
  * 2.2 Quantum money and quantum lightning
  * 2.3 Universal Composability
* 3 Blockchains and smart contracts
* 4 A payment system based on quantum lightning and a classical blockchain
  * 4.1 The payment system and its components
* 5 Security
* 6 Practical issues in a less idealized setting
  * 6.1 Attacks outside of the idealized setting
  * 6.2 Trading a Bolt for a Signature
  * 6.3 Security against Sabotage
  * 6.4 A resolution of the practical issues
* 7 A practical implementation on Bitcoin and space optimization
  * 7.1 Bitcoin to Quantum Money: The Simple Approach
  * 7.2 Bitcoin to Quantum Money: Space Optimization
* 8 Comparing our scheme to classical alternatives
* 9 Conclusion
* A Appendix
  * A.1 Proof of Proposition 2
  * A.2 Standard Security Definitions
## 1 Introduction
Cryptocurrencies, along with blockchains and smart contracts, have recently
risen to popular attention, the most well-known examples being Bitcoin and
Ethereum [Nak08, But14]. Informally, a blockchain is a public ledger
consisting of a sequence of blocks. Each block typically contains information
about a set of transactions, and a new block is appended regularly via a
consensus mechanism that involves the parties of a network and no a priori
trusted authority. A blockchain is endowed with a native currency which is
employed in transactions, and whose basic unit is a “coin”. The simplest type
of transaction is a payment, which transfers coins from one party to another.
However, more general transactions are allowed, which are known as smart
contracts. These can be thought of as contracts stored on a blockchain, whose consequences are executed upon fulfillment of algorithmically checkable conditions.
A central issue that needs to be resolved for blockchains to achieve mass-
adoption is scalability. This refers to the problem of increasing the
throughput of transactions (i.e. transactions per second) while maintaining
the resources needed for a party to participate in the consensus mechanism
approximately constant, and while maintaining security against adversaries
that can corrupt constant fractions of the parties in the network. For
example, Bitcoin and Ethereum can currently handle only on the order of $10$ transactions per second (Visa, for comparison, handles about $3500$ per second).
In this work, we show that quantum information is inherently well-suited to
tackle this problem. We show that a classical blockchain can be leveraged
using tools from quantum cryptography (in particular, quantum money) to
provide a simple solution to the scalability problem (we clarify that this solution only addresses scalability for payment transactions, and not for the more general smart contract transactions).
The main quantum ingredient that we employ is a primitive called quantum
lightning, formally introduced by Zhandry [Zha19] and inspired by Lutomirski
et al.’s notion of collision resistant quantum money [LAF+09]. In a public-key
quantum money scheme, a bank is entrusted with generating quantum states (we
refer to these as quantum banknotes) with an associated serial number, and a
public verification procedure allows anyone in possession of the banknote to
check its validity. Importantly, trust is placed in the fact that the central
bank will not create multiple quantum banknotes with the same serial number. A
quantum lightning scheme has the additional feature that no generation
procedure, not even the honest one (and hence not even a bank!), can produce
two valid money states with the same serial number, except with negligible
probability. This opens to the possibility of having a completely
decentralized quantum money scheme. However, if the system ought to be trust-
less, some issues need to be addressed: most importantly, who is allowed to
mint money? Who decides which serial numbers are valid? Our solution leverages
a (classical) blockchain to address these questions.
#### Our contributions
We design a hybrid classical-quantum payment system that uses quantum states
as banknotes and a classical blockchain to settle disputes and to keep track
of the valid serial numbers. This is, to the best of our knowledge, the first
example of the use of a classical blockchain in combination with quantum
cryptographic tools. Our payment system has the following desirable features:
* (i)
It is decentralized, requiring no trust in any single entity.
* (ii)
Payments involve the exchange of quantum banknotes, and enjoy many of the
properties of cash, which cryptocurrencies do not have. For example,
transactions are not recorded on the blockchain, and they involve only the
payer and the payee. Thus the throughput is unbounded. Payments are as quick
as quantum communication, and they do not incur transaction fees.
* (iii)
The rightful owner of a quantum banknote can recover the original value, even
if the quantum banknote is damaged or lost; an adversary who tried to abuse
this feature would be penalized financially.
Our contribution is primarily conceptual, but our treatment is formal: we work
within the Generalized Universal Composability framework (GUC) of Canetti et
al. [CDPW07], and we formulate an ideal functionality for a blockchain that
supports smart contracts. Note that we do not prove composable security of our
payment system. Instead, we prove a “one-shot” version of security, assuming
parties have access to such an ideal functionality. Nonetheless, we find it
desirable to work within the GUC framework, and we discuss the reason for this
choice in more detail below. We also provide an informal construction of our
payment system on the Bitcoin blockchain. Its treatment is less rigorous, but
it addresses some ways in which the payment system could be optimized.
As a further technical contribution, in order to achieve part (iii), we
formalize a novel property of quantum lightning schemes, which we call “bolt-
to-signature” capability and is reminiscent of one-time digital signatures. We
provide a provably secure construction of a lightning scheme with such a
property (assuming a secure lightning scheme which might not satisfy this).
The construction is based on the hash-and-sign paradigm as well as Lamport
signatures scheme. It allows a user holding a valid quantum lightning state
with serial number $s$ to sign a message $\alpha$ with respect to $s$. The
security guarantees are that no one who does not possess a lightning state
with serial number $s$ can forge signatures with respect to $s$, and once the
lightning state is utilized to produce even a single signature, it will no
longer pass the lightning verification procedure. We envision that such a
primitive could find applications elsewhere and is of independent interest.
#### How to model a blockchain?
The essential properties of blockchains and smart contracts can be abstracted
by modeling them as ideal functionalities in the Universal Composability (UC)
framework of Canetti [Can01]. Such a framework provides both a formal model
for multiparty computation, and formal notions of security with strong
composability properties. The approach of studying blockchains and smart
contracts within such a framework was first proposed by Bentov et al. in [BK14],
and explored further in [BKM17]. The main reason why such an approach is
desirable is that it abstracts the features of blockchains and smart contracts
into building blocks that can be utilized to design more complex protocols in
a modular way. The limitation of the works [BK14, BKM17] is that they modify
the original model of computation of UC in order to incorporate coins, but
they do not prove that a composition theorem holds in this variant. A more
natural approach was proposed by Kiayias et al. in [KZZ16], which uses the
Generalized Universal Composability (GUC) framework by Canetti et al.
[CDPW07].
In this work, we define a ledger functionality in the GUC framework which
supports universal smart contracts. We use this as an abstraction layer which
significantly simplifies exposition and the security analysis. The downside of
this approach is that, to the best of our knowledge, there is no known proof
of a GUC-secure realization of an ideal functionality for a ledger supporting
smart contracts by any cryptocurrency. Proving that a complicated system such
as Bitcoin or Ethereum GUC-securely realizes a simple ledger functionality,
even without the support for smart contracts, requires already substantial
work [BMTZ17]. The upside is that, as soon as one provides a GUC secure
realization on some cryptocurrency of our ideal ledger functionality, one can
replace the latter in our payment system with its real world implementation,
and the security of our payment system would hold verbatim, by the composition
properties of the GUC framework. For this reason, we find the approach very
desirable. Other approaches that do not fall into the GUC framework include
[KMS+16] and [DEF18].
We emphasize that we do not prove that our payment system can be composed
securely, i.e. we do not define an ideal functionality for our payment system
and prove a secure realization of it. Rather, the security we prove is only
“one-shot”.
#### A sketch of our payment system
The main ingredient that we employ is quantum lightning. As suggested by
Zhandry, this primitive seems well-suited for designing a decentralized
payment system, as it is infeasible for anyone to copy banknotes (even a
hypothetical bank). However, since the generation procedure is publicly known,
there needs to be a mechanism that regulates the generation of new valid
banknotes (to prevent parties from continuously generating banknotes).
We show how stateful smart contracts on a classical blockchain can be used to
provide such a mechanism. For instance, they allow one to easily keep track of a
publicly trusted list of valid serial numbers. We elaborate on this. In our
payment system, the native coin of a classical blockchain is used as a
baseline classical currency. Any party can spend coins on the classical
blockchain to add a serial number of their choice to the list of valid serial
numbers. More precisely, they can deposit any amount (of their choice) $d$ of
coins into an appropriately specified smart contract and set the initial value
of a serial number state variable to whatever they wish (presumably the serial
number of a quantum banknote that they have just generated locally). They have
thus effectively added to the blockchain a serial number associated to a
quantum banknote that, in virtue of this, we can think of as having “acquired”
value $d$. Payments are made by transferring quantum banknotes: party $A$, the
payer, transfers his quantum banknote with serial number $s$ to party $B$, the
payee, and references a smart contract whose serial number state variable is
$s$. Party $B$ then _locally_ verifies that the received quantum banknote is
valid. This completes the transaction. Notice that this does not involve
interaction with any other third party, and only requires read access to the
blockchain. Of course, the same quantum banknote can be successively spent an
unlimited number of times in the same manner, without ever posting any new
transaction on the blockchain. The latter is only invoked when a banknote is
first generated, and in case of a dispute. Thus, throughput of transactions is
no longer a concern. Likewise, long waiting times between when a payment is
initiated and when it is confirmed (which are typically in the order of
minutes – for example, it is recommended to accept a Bitcoin transaction after
6 confirmations, which takes an hour in expectation) are also no longer a
concern since payments are verified immediately by the payee. One can think of
the quantum banknotes as providing an off-chain layer that allows for
virtually unlimited throughput. We give an extended comparison of our payment
system with other off-chain solutions, like Bitcoin’s Lightning Network, in
Section 8.
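To make this flow concrete, the following is a minimal, purely classical mock of the banknote lifecycle: mint locally, back the serial number with an on-chain deposit, then pay off-chain. All names here (`gen_bolt`, `verify_bolt`, `Ledger`) are ours and purely illustrative; no classical program can instantiate quantum lightning, so this sketch captures only the interfaces and the order of operations, not the security.

```python
# Purely classical mock of the banknote lifecycle, for illustration only:
# a real instantiation requires a quantum lightning scheme, which no
# classical program can provide. All names are hypothetical.
import hashlib
import os

def gen_bolt():
    # Mock "bolt": a random secret whose hash serves as the serial number.
    psi = os.urandom(32)
    return psi, hashlib.sha256(psi).hexdigest()

def verify_bolt(psi, s):
    # Local verification: anyone holding the banknote can check it.
    return hashlib.sha256(psi).hexdigest() == s

class Ledger:
    """Toy stand-in for the smart-contract layer: it maps serial numbers
    to the number of coins deposited behind them."""
    def __init__(self):
        self.contracts = {}

    def deposit(self, s, d):
        self.contracts[s] = d     # register serial number s with value d

    def value_of(self, s):
        return self.contracts.get(s, 0)

ledger = Ledger()

# Party A mints a banknote locally and backs it with 5 coins on-chain.
bolt, serial = gen_bolt()
ledger.deposit(serial, 5)

# Payment: A sends (bolt, serial) to B; B verifies locally, off-chain,
# needing only read access to the ledger.
assert verify_bolt(bolt, serial) and ledger.value_of(serial) == 5
```

In particular, the final verification is local to the payee, which is the source of the throughput gains discussed above.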
The full payment system includes two additional optional features:
* (i)
A mechanism that allows any party in possession of a quantum banknote to
recover the coins deposited in the corresponding smart contract. Together with
the mechanism outlined earlier, this makes quantum banknotes and coins on the
blockchain in some sense interchangeable: one can always convert one into the
other by publishing a single transaction on the blockchain.
* (ii)
A mechanism that allows any honest party who has lost or damaged a valid
quantum banknote to change the serial number state variable of the
corresponding smart contract to a fresh value of their choice.
These two additional desirable features are realizable as long as the quantum
lightning scheme employed satisfies an additional property. We formalize this
property, which we call “bolt-to-certificate capability”, and show that
Zhandry’s proposed constructions satisfy this property. The latter asserts
informally that it is possible to measure a valid quantum banknote and obtain
a classical certificate, but it is impossible to simultaneously hold both a
valid banknote and a valid certificate. In other words, the classical
certificate acts as a “proof of destruction”: a certificate which guarantees
that no quantum lightning state with the corresponding serial number exists.
We also introduce a novel implementation of a one-time signature scheme based
on the “bolt-to-certificate capability”, and inspired by Lamport’s signature
scheme. This protects honest parties who broadcast a classical certificate to
the network from having their classical certificate stolen before it is
registered on the blockchain.
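For intuition, here is a minimal sketch of a plain Lamport one-time signature scheme with hash-and-sign, the classical primitive that inspires our construction (the bolt-based variant, in which the role of the secret key is played by the lightning state, is described in Section 6). SHA-256 is used as a stand-in hash function, and all names are illustrative.

```python
# Sketch of a plain Lamport one-time signature with hash-and-sign.
# SHA-256 is an illustrative stand-in; names are ours.
import hashlib
import os

def H(x):
    return hashlib.sha256(x).digest()

def keygen(n=256):
    # Secret key: n pairs of random strings; public key: their hashes.
    sk = [(os.urandom(32), os.urandom(32)) for _ in range(n)]
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def msg_bits(msg, n=256):
    # Hash the message and expand the digest into n bits.
    return bin(int.from_bytes(H(msg), "big"))[2:].zfill(n)

def sign(sk, msg):
    # Reveal one secret per bit of the hashed message.
    return [sk[i][int(b)] for i, b in enumerate(msg_bits(msg))]

def verify(pk, msg, sig):
    return all(H(sig[i]) == pk[i][int(b)]
               for i, b in enumerate(msg_bits(msg)))

sk, pk = keygen()
sig = sign(sk, b"certificate for serial number s")
assert verify(pk, b"certificate for serial number s", sig)
```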
We emphasize that the bolt-to-certificate capability of the lightning scheme
is only required in order to achieve the two additional features (i) and (ii)
above, but is not necessary in order to achieve the basic functionality of our
payment system described above.
#### Limitations of our solution
We see two main limitations to our payment system:
* •
The technologies needed to implement our payment system will not be available
in the foreseeable future. For instance, our payment system would require the
ability of each party to store large entangled quantum states for extended
periods of time. It would also require that each party be able to send quantum
states to any party it wishes to make payments to. The latter could be
achieved, for example, in a network in which any two parties have the ability
to request joint EPR pairs (i.e. maximally entangled pairs of qubits), which
is one of the primary components of a “quantum internet” [WEH18]. One can view
our scheme as a possible use case of a quantum internet.
* •
The only known concrete constructions of a quantum lightning scheme rely on
non-standard assumptions for security. The construction of Farhi et al.
[FGH+12a] contains no security proof. The construction of Zhandry [Zha19] is
secure based on an assumption related to the multi-collision resistance of
certain degree-2 hash functions. Arguably, neither of these assumptions is
well-studied.
In Section 8, we extensively discuss the disadvantages (as well as the
advantages) of our payment system vis-à-vis other classical alternatives
– namely, a standard crypto-currency such as Bitcoin and second layer
solutions such as the Lightning Network.
#### Outline
Section 2 covers preliminaries: 2.1 covers basic notation; 2.2 introduces
quantum lightning; 2.3 gives a concise overview of the Universal Composability
framework of Canetti [Can01]. Section 3 gives first an informal description of
blockchains and smart contracts, followed by a formal definition of our global
ideal functionality for a transaction ledger that handles stateful smart
contracts. Section 4 describes our payment system. In Section 5, we describe
an adversarial model, and then prove security guarantees with respect to it.
## 2 Preliminaries
### 2.1 Notation
For a function $f:\mathbb{N}\rightarrow\mathbb{R}$, we say that $f$ is
negligible, and we write $f(n)=negl(n)$, if for any positive polynomial $p(n)$
and all sufficiently large $n$’s, $f(n)<\frac{1}{p(n)}$. A binary random
variable is a random variable over $\\{0,1\\}$. We say that two ensembles of
binary random variables $\\{X_{n}\\}$ and $\\{Y_{n}\\}$ are indistinguishable
if,
$\left|\,\Pr[X_{n}=1]-\Pr[Y_{n}=1]\,\right|=negl(n).$
We use the terms PPT and QPT as abbreviations of probabilistic polynomial time
and quantum polynomial time respectively.
### 2.2 Quantum money and quantum lightning
Quantum money is a theoretical form of payment first proposed by Wiesner
[Wie83], which replaces physical banknotes with quantum states. In essence, a
quantum money scheme consists of a generation procedure, which mints
banknotes, and a verification procedure, which verifies the validity of minted
banknotes. A banknote consists of a quantum state together with an associated
serial number. The appeal of quantum money comes primarily from a fundamental
theorem in quantum theory, the No-Cloning theorem, which informally states
that there does not exist a quantum operation that can clone arbitrary states.
A second appealing property of quantum money, which is not celebrated nearly
as much as the first, is that quantum money can be transferred almost
instantaneously (by quantum teleportation for example). The first proposals
for quantum money schemes required a central bank to carry out both the
generation and verification procedures. The idea of public key quantum money
was later formalized by Aaronson [Aar09]. In public-key quantum money, the
verification procedure is public, meaning that anyone with access to a quantum
banknote can verify its validity.
In this section, we focus on quantum lightning, a primitive recently proposed
by Zhandry [Zha19], and we enhance this to a decentralized quantum payment
system. Informally, a quantum lightning scheme is a strengthening of public-
key quantum money. It consists of a public generation procedure and a public
verification procedure which satisfy the following two properties:
* •
Any quantum banknote generated by the honest generation procedure is accepted
with probability negligibly close to $1$ by the verification procedure.
* •
No adversarial generation procedure (not even the honest one) can generate two
banknotes with the same serial number which both pass the verification
procedure with non-negligible probability.
As mentioned earlier, there is only one known construction of quantum
lightning, by Zhandry [Zha19], who gives a construction which is secure under
a computational assumption related to the multi-collision resistance of some
degree-2 hash function. Zhandry also proves that any non-collapsing hash
function can be used to construct quantum lightning. However, to the best of
our knowledge, there are no known hash functions that are proven to be non-
collapsing. In this section, we define quantum lightning formally, but we do
not discuss any possible construction. Rather, in Section 4, we will use
quantum lightning as an off-the-shelf primitive.
###### Definition 1 (Quantum lightning [Zha19]).
A quantum lightning scheme consists of a PPT algorithm
$\textsf{QL.Setup}(1^{\lambda})$ (where $\lambda$ is a security parameter)
which samples a pair of polynomial-time quantum algorithms $(\textsf{gen-
bolt}$, $\textsf{verify-bolt})$. gen-bolt outputs pairs of the form
$\left(\ket{\psi}\in\mathcal{H}_{\lambda},s\in\\{0,1\\}^{\lambda}\right)$. We
refer to $\ket{\psi}$ as a “bolt” and to $s$ as a “serial number”. verify-bolt
takes as input a pair of the same form, and outputs either “accept” (1) or
“reject” (0) (together with a post-measurement state). They satisfy the
following:
* •
$\Pr[\textsf{verify-bolt}(\ket{\psi},s)=1:(\ket{\psi},s)\leftarrow\textsf{gen-bolt},\,(\textsf{gen-bolt},\textsf{verify-bolt})\leftarrow\textsf{QL.Setup}(1^{\lambda})]=1-negl(\lambda)$
* •
For a state $\ket{\psi}$, let $\mathcal{M}_{\ket{\psi}}$ be the two-outcome
measurement which accepts $\ket{\psi}$ and rejects all states orthogonal to
it.
$\Pr[\mathcal{M}_{\ket{\psi}}(\ket{\psi^{\prime}})=1:(b,\ket{\psi^{\prime}})\leftarrow\textsf{verify-bolt}(\ket{\psi},s),\,(\ket{\psi},s)\leftarrow\textsf{gen-bolt},\,(\textsf{gen-bolt},\textsf{verify-bolt})\leftarrow\textsf{QL.Setup}(1^{\lambda})]=1-negl(\lambda)$
Here $\ket{\psi^{\prime}}$ is the post-measurement state upon running
$\textsf{verify-bolt}(\ket{\psi},s)$.
* •
For all $s^{\prime}\in\\{0,1\\}^{\lambda}$,
$\Pr[\textsf{verify-bolt}(\ket{\psi},s^{\prime})=1\,\land\,s^{\prime}\neq s:(\ket{\psi},s)\leftarrow\textsf{gen-bolt},\,(\textsf{gen-bolt},\textsf{verify-bolt})\leftarrow\textsf{QL.Setup}(1^{\lambda})]=negl(\lambda)$
The three requirements simply ask that, with overwhelming probability, for any
validly generated bolt $\ket{\psi}$, there is a single serial number $s$ such
that $(\ket{\psi},s)$ is accepted by the verification procedure, and that the
verification does not perturb the bolt, except negligibly.
For security, we require that no adversarial generation procedure can produce
two bolts with the same serial number. Formally, we define security via the
following game between a challenger and an adversary $\mathcal{A}$.
* •
The challenger runs $(\textsf{gen-bolt},\textsf{verify-
bolt})\leftarrow\textsf{QL.Setup}(\lambda)$ and sends $(\textsf{gen-
bolt},\textsf{verify-bolt})$ to $\mathcal{A}$.
* •
$\mathcal{A}$ produces a pair
$\ket{\Psi_{12}}\in\mathcal{H}_{\lambda}^{\otimes 2},s\in\\{0,1\\}^{\lambda}$.
* •
The challenger runs $\textsf{verify-bolt}(\cdot,s)$ on each half of
$\ket{\Psi_{12}}$. The output of the game is $1$ if both outcomes are
“accept”.
We let $\textsf{Counterfeit}(\lambda,\mathcal{A})$ be the random variable
which denotes the output of the game.
###### Definition 2 (Security [Zha19]).
A quantum lightning scheme is secure if, for all polynomial-time quantum
adversaries $\mathcal{A}$,
$\Pr[\textsf{Counterfeit}(\lambda,\mathcal{A})=1]=\textnormal{negl}(\lambda).$
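To illustrate the structure of the game (and emphatically not its security), the following harness instantiates Counterfeit with an intentionally insecure classical mock, in which a “bolt” is a random string and the serial number is its hash. Since classical data can be copied freely, the trivial copying adversary below always wins; this is precisely the attack that the no-cloning theorem rules out for genuine quantum lightning states. All names are ours.

```python
# Classical mock of the Counterfeit game, to show its structure only.
# Security fails here by design: classical strings can be copied.
import hashlib
import os

def gen_bolt():
    psi = os.urandom(32)
    return psi, hashlib.sha256(psi).hexdigest()

def verify_bolt(psi, s):
    return hashlib.sha256(psi).hexdigest() == s

def counterfeit_game(adversary):
    # The challenger hands the scheme to the adversary, who must return
    # two (bolt, serial) pairs; the adversary wins if both pairs verify
    # under the same serial number.
    (psi1, s1), (psi2, s2) = adversary(gen_bolt, verify_bolt)
    return s1 == s2 and verify_bolt(psi1, s1) and verify_bolt(psi2, s2)

def copying_adversary(gen, ver):
    psi, s = gen()
    return (psi, s), (psi, s)   # copying is impossible for quantum states

assert counterfeit_game(copying_adversary)   # the classical mock is broken
```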
We define an additional property of a quantum lightning scheme, which in
essence establishes that one can trade a quantum banknote for some useful
classical certificate. Intuitively, this is meant to capture the fact that in
the proposed construction of quantum lightning by Zhandry, one can measure a
bolt with serial number $y$ in the computational basis to obtain a pre-image
of $y$ under some hash function. However, doing so damages the bolt so that it
will no longer pass verification. In order to define this additional property,
we change the procedure $\textsf{QL.Setup}(1^{\lambda})$ slightly, so that it
outputs a tuple $(\textsf{gen-bolt},\textsf{verify-bolt},\textsf{gen-
certificate},\textsf{verify-certificate})$, where gen-certificate is a QPT
algorithm that takes as input a quantum money state and a serial number and
outputs a classical string of some fixed length $l(\lambda)$ for some
polynomially bounded function $l$, which we refer to as a certificate, and
verify-certificate is a PPT algorithm which takes as input a serial number and
a certificate, and outputs “accept” ($1$) or “reject” ($0$). The additional
property is defined based on the following game Forge-certificate between a
challenger and an adversary $\mathcal{A}$:
* •
The challenger runs $(\textsf{gen-bolt},\textsf{verify-bolt},\textsf{gen-
certificate},\textsf{verify-
certificate})\leftarrow\textsf{QL.Setup}(1^{\lambda})$ and sends the tuple to
$\mathcal{A}$.
* •
$\mathcal{A}$ returns $c\in\\{0,1\\}^{l(\lambda)}$ and $(\ket{\psi},s)$.
* •
The challenger runs $\textsf{verify-certificate}(s,c)$ and $\textsf{verify-
bolt}(\ket{\psi},s)$. Outputs $1$ if they both accept.
Let $\textsf{Forge-certificate}(\mathcal{A},\lambda)$ be the random variable
for the output of the challenger in the game above.
###### Definition 3 (Trading the bolt for a classical certificate).
Let $\lambda\in\mathbb{N}$. We say that a quantum lightning scheme has “bolt-
to-certificate capability” if:
* (I)
$\Pr[\textsf{verify-certificate}(s,c)=1:(\textsf{gen-bolt},\textsf{verify-bolt},\textsf{gen-certificate},\textsf{verify-certificate})\leftarrow\textsf{QL.Setup}(1^{\lambda}),\,(\ket{\psi},s)\leftarrow\textsf{gen-bolt},\,c\leftarrow\textsf{gen-certificate}(\ket{\psi},s)]=1-negl(\lambda)$
* (II)
For all polynomial-time quantum algorithms $\mathcal{A}$,
$\Pr[\textsf{Forge-certificate}(\mathcal{A},\lambda)=1]=negl(\lambda).$
Notice that property $(II)$ also implies that for most setups and serial
numbers $s$ it is hard for any adversary to find a $c$ such that
$\textsf{verify-certificate}(s,c)=1$ without access to a valid state whose
serial number is $s$. In fact, if there was an adversary $\mathcal{A}$ which
succeeded at that, this could clearly be used to to construct an adversary
$\mathcal{A^{\prime}}$ that succeeds in Forge-certificate: Upon receiving
$(\textsf{gen-bolt},\textsf{verify-bolt},\textsf{gen-
certificate},\textsf{verify-certificate})$ from the challenger,
$\mathcal{A}^{\prime}$ computes $(\ket{\psi},s)\leftarrow\textsf{gen-bolt}$;
then runs $\mathcal{A}$ on input $s$ to obtain some $c$.
$\mathcal{A}^{\prime}$ returns $c$,$\ket{\psi}$ and $s$ to the challenger. We
emphasize that in game Forge-certificate it is the adversary himself who
generates the state $\ket{\psi}$. This is important because when we employ the
quantum lightning scheme later on, parties are allowed to generate their own
quantum banknotes.
###### Proposition 1.
Any scheme that uses Zhandry’s construction instantiated with a non-collapsing
hash function [Zha19] satisfies the property of Definition 3.
###### Proof.
Zhandry already proved that his scheme satisfies Definitions 1 and 2. As we
will see, since our construction does not change $\textsf{QL.Setup},\
\textsf{gen-bolt}$, and verify-bolt, we only need to prove that it satisfies
Definition 3. We refer the reader to [Zha19] for a definition of a non-
collapsing hash function. In Zhandry’s construction based on a non-collapsing
hash function, $\textsf{QL.Setup}(1^{\lambda})$ outputs $(\textsf{gen-
bolt},\textsf{verify-bolt})$. A bolt generated from gen-bolt has the form
$\ket{\Psi}=\bigotimes_{i=1}^{n}\ket{\psi_{y_{i}}}$, where
$y_{i}\in\\{0,1\\}^{\lambda}$ for all $i$ and
$\ket{\psi_{y_{i}}}=\sum_{x:H(x)=y_{i}}\ket{x}$, where $H$ is a non-collapsing
hash-function, and $n\in\mathbb{N}$ is polynomial in the security parameter
$\lambda$. verify-bolt has the form of $n$-fold product of a verification
procedure “Mini-Ver” which acts on a single one of the $n$ registers, though
it is not crucial to understand how Mini-Ver for this work. In Zhandry’s
construction, the serial number associated to the bolt $\ket{\Psi}$ above is
$s=(y_{1},\ldots,y_{n})$.
We define $\mathsf{gen\textit{-}certificate}$ as the QPT algorithm that
measures $\ket{\psi}$ in the standard basis, and outputs the outcome. When
applied to an honestly generated bolt, the outcomes are pre-images $x_{i}$ of
$y_{i}$, for $i=1,..,n$. We define $\mathsf{verify\textit{-}certificate}$ as
the deterministic algorithm which receives a serial number
$s=(y_{1},\ldots,y_{n})$ and a certificate $c=(x_{1},\ldots,x_{n})$ and checks
that for all $i\in[n]$, $H(x_{i})=y_{i}$.
It is clear that $(I)$ holds.
For property $(II)$, suppose there exists $\mathcal{A}$ such that
$\Pr[\textsf{Forge-certificate}(\mathcal{A},\lambda)=1]$ is non-negligible. We
use $\mathcal{A}$ to construct an adversary $\mathcal{A}^{\prime}$ that breaks
collision-resistance of a non-collapsing hash function as follows:
$\mathcal{A}^{\prime}$ runs $(\textsf{gen-bolt},\textsf{verify-
bolt})\leftarrow\textsf{QL.Setup}(1^{\lambda})$. Let $H$ be the non-collapsing
hash function hard-coded in the description of verify-bolt.
$\mathcal{A^{\prime}}$ sends the tuple $(\textsf{gen-bolt},\textsf{verify-
bolt},\textsf{gen-certificate},\textsf{verify-certificate})$ to $\mathcal{A}$,
where the latter two are defined as above in terms of $H$. $\mathcal{A}$
returns $(c,\ket{\psi})$, where $c$ is parsed as $c=(x_{1},..,x_{n})$.
$\mathcal{A}^{\prime}$ then measures each of the $n$ registers of $\ket{\psi}$
to get $c^{\prime}=(x^{\prime}_{1},\ldots,x^{\prime}_{n})$. If $x_{i}\neq x_{i}^{\prime}$ for some $i$, then $\mathcal{A}^{\prime}$ outputs $(x_{i},x_{i}^{\prime})$. We claim that with non-negligible probability $\mathcal{A^{\prime}}$ outputs a
collision for $H$. To see this, notice that since $\mathcal{A}$ wins Forge-
certificate with non-negligible probability, then $\ket{\psi}$ must pass
verify-bolt with non-negligible probability; and from the analysis of [Zha19],
any such state must be such that at least one of the registers is in a
superposition which has non-negligible weight on at least two pre-images. For
more details, see the proof of Theorem 5 in [Zha19], and specifically the
following claim:
> If the bolts are measured, two different pre-images of the same y, and hence
> a collision for $H^{\otimes r}$, will be obtained with probability at least
> 1/200.
∎
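To make the classical half of this construction concrete, here is a sketch of verify-certificate, with SHA-256 standing in for $H$ (the actual construction requires a non-collapsing hash function, which SHA-256 is not known to be; gen-certificate itself is a quantum measurement and has no classical analogue).

```python
# Sketch of verify-certificate: given serial number s = (y_1, ..., y_n)
# and certificate c = (x_1, ..., x_n), accept iff H(x_i) = y_i for all i.
# SHA-256 stands in for the non-collapsing hash H of the construction.
import hashlib

def verify_certificate(serial, certificate):
    if len(serial) != len(certificate):
        return False
    return all(hashlib.sha256(x).digest() == y
               for x, y in zip(certificate, serial))

# Example: a well-formed (serial, certificate) pair is accepted.
xs = [b"preimage-1", b"preimage-2"]
ys = [hashlib.sha256(x).digest() for x in xs]
assert verify_certificate(ys, xs)
```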
###### Proposition 2.
Zhandry’s construction based on the multi-collision resistance of certain
degree-2 hash functions (from section 6 of [Zha19]) satisfies the property of
Definition 3.
The proof is similar to the proof of Proposition 1. We include it for
completeness in Appendix A.1.
###### Fact 4.
Zhandry’s construction from Proposition 2 only requires a common random string (CRS) for
QL.Setup.
This fact will be relevant when we discuss a concrete implementation in a
system such as Bitcoin. For more details, see Section 7.
#### Other Quantum Lightning Schemes?
Farhi et al. [FGH+12b] constructed a public quantum money scheme which they
conjectured to have the following property:
> A rogue mint running the same algorithm as the mint can produce a new set of
> money pairs, but (with high probability) none of the serial numbers will
> match those that the mint originally produced.
This is less formal, but we believe it captures Zhandry’s security definition of quantum lightning (though it was introduced roughly 7 years earlier).
Very recently, Peter Shor gave a talk on ongoing (unpublished) work for a
lattice-based quantum lightning scheme (see https://youtu.be/8fzLByTn8Xk).
Here, we briefly compare the two existing published constructions, and we omit
Shor’s scheme from this discussion.
* •
Farhi et al. only provide several plausibility arguments supporting the
security of their scheme, but their work does not contain any security proof.
In a follow-up work, Lutomirski [Lut11] showed a security reduction from a
problem _related_ to counterfeiting Farhi et al.’s money scheme. Even though
it does not have a (proper) security proof, Farhi et al.’s scheme was first
published 8 years ago, and has not been broken since. Zhandry’s construction
is proved to be secure under a non-standard hardness assumption, which was
first introduced in that work. In his Eurocrypt talk, Zhandry reports that the
construction is “broken in some settings” – see
https://youtu.be/fjumbNTZSik?t=1302. (Additionally, the hardness assumption was changed between the second and third versions on the IACR e-print. This might suggest that the original hardness assumption was also broken. The most recent e-print version does not discuss this discrepancy.)
* •
The scheme of Farhi et al. does not require a trusted setup.
* •
The scheme of Farhi et al. does not have a bolt-to-certificate capability.
This property is required in our payment system to transform quantum banknotes
back to coins on the blockchain – see Section 4.
### 2.3 Universal Composability
This section is intended as a concise primer about the Universal Composability
(UC) model of Canetti [Can01]. We refer the reader to [Can01] for a rigorous
definition and treatment of the UC model and of composable security, and to
the tutorial [Can06] for a more gentle introduction. At the end, we provide a
brief overview of the Generalized UC model (GUC) [CDPW07]. While introducing
UC and GUC, we also setup some of the notation that we will employ in the rest
of the paper. As mentioned in the Introduction, we elect to work in the GUC,
because this provides a convenient formal abstraction layer for modeling a
ledger functionality that supports smart-contracts, and allows for a clean
security proof in the idealized setting in which parties have access to this
functionality.
The reader familiar with the UC and GUC framework may wish to skip ahead to
the next section.
In the universal composability framework (UC), parties are modelled as
Interactive Turing Machines (ITM) who can communicate by writing on each
other’s externally writable tapes, subject to some global constraints.
Informally, a protocol is specified by a code $\pi$ for an ITM, and consists
of various rounds of communication and local computation between instances of
ITMs (the parties), each running the code $\pi$ on their machine, on some
private input.
Security in the UC model is defined via the notion of emulation. Informally,
we say that a protocol $\pi$ emulates a protocol $\phi$ if whatever can be
achieved by an adversary attacking $\pi$ can also be achieved by some other
adversary attacking $\phi$. This is formalized by introducing simulators and
environments.
Given protocols $\pi$ and $\phi$, we say that $\pi$ emulates (or “is as secure
as”) $\phi$ in the UC model, if for any polynomial-time adversary
$\mathcal{A}$ attacking protocol $\pi$, there exists a polynomial-time
simulator $\mathcal{S}$ attacking $\phi$ such that no polynomial-time
distinguisher $\mathcal{E}$, referred to as the environment, can distinguish
between $\pi$ running with $\mathcal{A}$ and $\phi$ running with
$\mathcal{S}$. Here, the environment $\mathcal{E}$ is allowed to choose the
protocol inputs, read the protocol outputs, including outputs from the
adversary or the simulator, and to communicate with the adversary or simulator
during the execution of the protocol (without of course being told whether the
interaction is with the adversary or with the simulator). In this framework,
one can formulate security of a multiparty cryptographic task by first
defining an ideal functionality $\mathcal{F}$ that behaves exactly as
intended, and then providing a “real-world” protocol that emulates, or
“securely realizes”, the ideal functionality $\mathcal{F}$.
We give more formal definitions for the above intuition. To formulate
precisely what it means for an environment $\mathcal{E}$ to tell two
executions apart, one has to formalize the interaction between $\mathcal{E}$
and the protocols in these executions. Concisely, an execution of a protocol
$\pi$ with adversary $\mathcal{A}$ and environment $\mathcal{E}$ consists of a
sequence of activations of ITMs. At each activation, the active ITM runs
according to its code, its state and the content of its tapes, until it
reaches a special wait state. The sequence of activations proceeds as follows:
The environment $\mathcal{E}$ gets activated first and chooses inputs for
$\mathcal{A}$ and for all parties. Once $\mathcal{A}$ or a party is activated by
an incoming message or an input, it runs its code until it produces an
outgoing message for another party, an output for $\mathcal{E}$, or it reaches
the wait state, in which case $\mathcal{E}$ is activated again. The execution
terminates when $\mathcal{E}$ produces its output, which can be taken to be a
single bit. Note that each time it is activated, $\mathcal{E}$ is also allowed
to invoke a new party, and assign a unique PID (party identifier) to it.
Allowing the environment to invoke new parties will be particularly important
in Section 5, where we discuss security. There, the fact that the environment
has this ability implies that our security notion captures realistic scenarios
in which the set of parties is not fixed at the start, but is allowed to
change. Moreover, each invocation of a protocol $\pi$ is assigned a unique
session identifier SID, to distinguish it from other invocations of $\pi$. We
denote by $\textrm{EXEC}_{\pi,\mathcal{A},\mathcal{E}}(\lambda,z)$ the output
of environment $\mathcal{E}$ initialized with input $z$, and security
parameter $\lambda$ in an execution of $\pi$ with adversary $\mathcal{A}$.
We are ready to state the following (slightly informal) definition.
###### Definition 5.
A protocol $\pi$ UC-emulates a protocol $\phi$ if, for any PPT adversary
$\mathcal{A}$, there exists a PPT simulator $\mathcal{S}$ such that, for any
PPT environment $\mathcal{E}$, the families of random variables
$\\{\textrm{EXEC}_{\pi,\mathcal{A},\mathcal{E}}(\lambda,z)\\}_{\lambda\in\mathbb{N},z\in\\{0,1\\}^{poly(\lambda)}}$
and
$\\{\text{EXEC}_{\phi,\mathcal{S},\mathcal{E}}(\lambda,z)\\}_{\lambda\in\mathbb{N},z\in\\{0,1\\}^{poly(\lambda)}}$
are indistinguishable.
Then, given an ideal functionality $\mathcal{F}$ which captures the intended
ideal behaviour of a certain cryptographic task, one can define the ITM code
$I_{\mathcal{F}}$, which behaves as follows: the ITM running $I_{\mathcal{F}}$
simply forwards any inputs received to the ideal functionality $\mathcal{F}$.
We then say that a “real-world” protocol $\pi$ securely realizes $\mathcal{F}$
if $\pi$ emulates $I_{\mathcal{F}}$ according to Definition 5.
#### A composition theorem
The notion of security we just defined is strong. One of the main advantages
of such a security definition is that it supports composition, i.e. security
remains when secure protocols are executed concurrently, and arbitrary
messages can be sent between executions. We use the notation $\sigma^{\pi}$
for a protocol $\sigma$ that makes up to polynomially many calls to another
protocol $\pi$. In a typical scenario, $\sigma^{\mathcal{F}}$ is a protocol
that makes use of an ideal functionality $\mathcal{F}$, and $\sigma^{\pi}$ is
the protocol that results by implementing $\mathcal{F}$ through the protocol
$\pi$ (i.e. replacing calls to $\mathcal{F}$ by calls to $\pi$). It is natural
to expect that if $\pi$ securely realizes $\mathcal{F}$, then $\sigma^{\pi}$
securely realizes $\sigma^{\mathcal{F}}$. This is the content of the following
theorem.
###### Theorem 6 (Universal Composition Theorem).
Let $\pi$, $\phi$, $\sigma$ be polynomial-time protocols. Suppose protocol
$\pi$ UC-emulates $\phi$. Then $\sigma^{\pi}$ UC-emulates $\sigma^{\phi}$.
Replacing $\phi$ by $I_{\mathcal{F}}$ for some ideal functionality
$\mathcal{F}$ in the above theorem yields the composable security notion
discussed above.
#### Generalized UC model
The formalism of the original UC model is not able to handle security
requirements in the presence of a “global trusted setup”. By this, we mean
some global information accessible to all parties, which is guaranteed to have
certain properties. Examples of this are a public-key infrastructure or a
common reference string. Emulation in the original UC sense is not enough to
guarantee composability properties in the presence of a global setup. Indeed,
one can construct examples in which a UC-secure protocol for some
functionality interacts badly with another UC-secure protocol and affects its
security, if both protocols make reference to the same global setup. For more
details and concrete examples see [CDPW07].
The generalized UC framework (GUC) of Canetti et al. [CDPW07] allows for a
“global setup”. The latter is modelled as an ideal functionality which is
allowed to interact not only with the parties running the protocol, but also
with the environment. GUC formulates a stronger security notion, which is
sufficient to guarantee a composition theorem, i.e. ideal functionalities with
access to a shared global functionality $\mathcal{G}$ can be replaced by
protocols that securely realize them in the presence of $\mathcal{G}$.
Further, one can also replace global ideal functionalities with appropriate
protocols realizing them. This kind of replacement does not immediately follow
from the previous composition theorem and requires a more careful analysis, as
is done in [CSV16], where sufficient conditions for this replacement are
established.
#### Universal Composability in the quantum setting
In our setting, we are interested in honest parties, adversaries and
environments that are quantum polynomial-time ITMs. The notion of Universal
Composability has been studied in the quantum setting in [BOM04], [Unr04] and
[Unr10]. In particular, in [Unr10], Unruh extends the model of computation of
UC and its composition theorems to the setting in which polynomial-time
classical ITMs are replaced by polynomial-time quantum ITMs (and ideal
functionalities are still classical). The proofs are essentially the same as
in the classical setting. Although the quantum version of the Generalized UC
framework has not been explicitly studied in [Unr10], one can check that the
proofs of the composition theorems for GUC from [CDPW07] and [CSV16] also go
through virtually unchanged in the quantum setting.
## 3 Blockchains and smart contracts
In this section, we start by describing blockchains and smart contracts
informally. We follow this by a more formal description. As mentioned in the
introduction, the essential features of blockchains and smart contracts can be
abstracted by modeling them as ideal functionalities in the Universal
Composability framework of Canetti [Can01]. In this section, we introduce a
global ideal functionality that abstracts the properties of a transaction
ledger capable of handling stateful smart contracts. We call this
$\mathcal{F}_{Ledg}$, which we describe in Fig. 1. We remark that
$\mathcal{F}_{Ledg}$ is somewhat of an idealized functionality which allows us
to obtain a clean description and security proof for our payment system.
However, $\mathcal{F}_{Ledg}$ does not capture, for example, attacks that stem
from miners possibly delaying honest parties’ messages from being recorded on
the blockchain. We discuss this and other issues extensively in Section 6. We
also discuss ways to resolve such issues in detail, but we do not formally
incorporate these in the description of our payment system in Section 4:
since we view our contribution as primarily conceptual, we elect to keep the
exposition and security proof of the basic payment system as clean and
accessible as possible.
Informally, a blockchain is a public ledger consisting of a sequence of
blocks. Each block typically contains information about a set of transactions,
and a new block is appended regularly via a consensus mechanism that involves
the nodes of a network. A blockchain is equipped with a native currency which
is employed in transactions, and whose basic unit is referred to as a coin.
Each user in the network is associated with a public key (this can be thought
of as the user’s address). A typical transaction is a message which transfers
coins from a public key to another. It is considered valid if it is digitally
signed using the secret key corresponding to the sending address.
More precisely, in Bitcoin, parties do not keep track of users’ accounts, but
rather they just maintain a local copy of a set known as “unspent transaction
outputs set” (UTXO set). An unspent output is a transaction that has not yet
been “claimed”, i.e. the coins of these transactions have not yet been spent
by the receiver. Each unspent output in the UTXO set includes a circuit (also
known as a “script”) such that any user that can provide an input which is
accepted by the circuit (i.e. a witness) can make a transaction that spends
these coins, thus creating a new unspent output. Hence, if only one user knows
the witness to the circuit, he is effectively the owner of these coins. For a
standard payment transaction, the witness is a signature and the circuit
verifies the signature. However, more complex circuits are also allowed, and
these give rise to more complex transactions than simple payments: smart
contracts. A smart contract can be thought of as a transaction which deposits
coins to an address. The coins are released upon fulfillment of certain pre-
established conditions.
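The following toy model illustrates the UTXO abstraction just described: an unspent output is a predicate (the script) guarding an amount, and whoever supplies an accepting witness can spend it. The names are ours, and a hash-preimage check stands in for the usual signature verification; real Bitcoin scripts are of course written in Bitcoin's scripting language, not Python.

```python
# Toy UTXO set: txid -> (script, amount). Spending requires a witness
# that the script accepts; a hash preimage stands in for a signature.
import hashlib

utxo_set = {}

def add_output(txid, script, amount):
    utxo_set[txid] = (script, amount)

def spend(txid, witness):
    script, amount = utxo_set[txid]
    assert script(witness), "witness rejected"
    del utxo_set[txid]          # the output is now spent
    return amount               # value made available to new output(s)

target = hashlib.sha256(b"secret").digest()
add_output("tx0", lambda w: hashlib.sha256(w).digest() == target, 7)
assert spend("tx0", b"secret") == 7
```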
In [BK14] and [BKM17], smart contracts are defined as ideal functionalities in
a variant of the Universal Composability (UC) model [Can01]. The ideal
functionality that abstracts the simplest smart contracts was formalized in
[BK14], and called “Claim or Refund”. Informally, this functionality specifies
that a sender $P$ locks his coins and chooses a circuit $\phi$, such that a
receiver $Q$ can gain possession of these coins by providing a witness $w$
such that $\phi(w)=1$ before an established time, and otherwise the sender can
reclaim his coins. The “Claim or Refund” ideal functionality can be realized
in Bitcoin as long as the circuit $\phi$ can be described in Bitcoin’s
scripting language. On the other hand, Ethereum’s scripting language is
Turing-complete, and so any circuit $\phi$ can be described.
“Claim or refund” ideal functionalities can be further generalized to
“stateful contracts”. In Ethereum, each unspent output also maintains a state.
In other words, each unspent output comprises not only a circuit $\phi$, but
also state variables. Parties can claim partial amounts of coins by providing
witnesses that satisfy the circuit $\phi$ in accordance with the current state
variables. In addition, $\phi$ also specifies an update rule for the state
variables, which are updated accordingly. We refer to these types of transactions as stateful contracts, as opposed to the “stateless” contract of
“Claim or Refund”. Stateful contracts can be realized in Ethereum, but not in
Bitcoin. From now onwards, we will only work with stateful contracts. We will
use the terms “smart contracts” and “stateful contracts” interchangeably.
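A stateful contract can be sketched as follows, using the same convention we adopt later for $\mathcal{F}_{Ledg}$: the circuit $\phi(pid,w,t,\textsf{st})$ either rejects or returns a new state together with an amount of coins released to the caller. The example $\phi$ below is hypothetical (present a hash preimage before time $100$ to claim the whole deposit); all names are ours.

```python
# Sketch of a stateful contract: phi(pid, w, t, st) returns None to
# reject, or (new_state, d) to update the state and release d coins.
import hashlib

class StatefulContract:
    def __init__(self, phi, st0, deposit):
        self.phi, self.st, self.coins = phi, st0, deposit

    def trigger(self, pid, w, t):
        out = self.phi(pid, w, t, self.st)
        if out is None:
            return 0                      # witness rejected
        new_st, d = out
        d = min(d, self.coins)            # never release more than deposited
        self.st, self.coins = new_st, self.coins - d
        return d                          # coins released to party pid

target = hashlib.sha256(b"witness").digest()
phi = lambda pid, w, t, st: (("claimed", 10)
                             if t < 100 and hashlib.sha256(w).digest() == target
                             else None)
c = StatefulContract(phi, st0="open", deposit=10)
assert c.trigger("P", b"witness", t=5) == 10 and c.coins == 0
```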
We emphasize that our modeling is inspired by [BK14] and [BKM17], but differs
in the way that coins are formalized. One difference from the model of Bentov
et al. is that there, in order to handle coins, the authors augment the
original UC model by endowing each party with a wallet and a safe, and by
considering coins as atomic entities which can be exchanged between parties.
To the best of our knowledge, this variant of the UC framework is not subsumed
by any of the previously studied variants, and thus it is not known whether a
composition theorem holds for it.
On the other hand, we feel that a more natural approach is to work in the
Generalized UC model [CDPW07], and to define a global ideal functionality
$\mathcal{F}_{Ledg}$ which abstracts the essential features of a transaction
ledger capable of handling stateful smart contracts. This approach was first
proposed by Kiayias et al. [KZZ16]. The appeal of modeling a transaction
ledger as a global ideal functionality is that composition theorems are known
in the Generalized UC framework. In virtue of this, any secure protocol for
some task that makes calls to $\mathcal{F}_{Ledg}$ can be arbitrarily composed
while still maintaining security. This means that one need not worry about
composing different concurrent protocols which reference the same transaction
ledger. One would hope that it is also the case that a secure protocol that
makes calls to $\mathcal{F}_{Ledg}$ remains secure when the latter are
replaced by calls to secure real-world realizations of it (on Ethereum for
example). This requires a more careful analysis, and Canetti et al. provide in
[CSV16] sufficient conditions for this replacement to be possible. We do not
prove that a secure real-world realization of $\mathcal{F}_{Ledg}$ on an
existing blockchain exists, but we believe that $\mathcal{F}_{Ledg}$, or a
close variant of it, should be securely realizable on the Ethereum blockchain.
In any case, we work abstractly by designing our payment system, and proving
it secure, assuming access to such an ideal functionality. The appeal of such
an approach is that the security of the higher-level protocols is independent
of the details of the particular real-world implementation of
$\mathcal{F}_{Ledg}$.
Next, we describe our global ideal functionality $\mathcal{F}_{Ledg}$. In
doing so, we establish the notation that we will utilize in the rest of the
paper.
#### Global ledger ideal functionality
We present in Fig. 1 our global ledger ideal functionality
$\mathcal{F}_{Ledg}$. In a nutshell, this keeps track of every registered
party’s coins, and allows any party to transfer coins in their name to any
other party. It also allows any party to retrieve information about the number
of coins of any other party, as well as about any previous transaction. The
initial amount of coins of a newly registered party is determined by its PID
(recall that in a UC-execution the PID of each invoked party is specified by
the environment; see Section 2.3 for more details). Moreover,
$\mathcal{F}_{Ledg}$ handles (stateful) smart contracts: it accepts deposits
from the parties involved in a contract and then pays rewards appropriately.
Recall that in stateful smart contracts a party or a set of parties deposit an
amount of coins to the contract. The contract is specified by a circuit
$\phi$, together with an initial value for a state variable st. A state
transition is triggered by any party $P$ with PID $pid$ sending a witness $w$
which is accepted by $\phi$ in accordance with the current state and the
current time $t$. More precisely, the contract runs
$\phi(pid,w,t,\textsf{st})$, which outputs either “$\perp$” or a new state
(stored in the variable st) and a number of coins $d\in\mathbb{N}$ that is
released to $P$. Each contract then repeatedly accepts state transitions until
it has distributed all the coins that were deposited into it at the start.
Notice that $\phi$ can accept different witnesses at different times (the
acceptance of a witness can depend on the current time $t$ and the current
value of the state variable st). Information about existing smart contracts
can also be retrieved by any party.
The stateful-contract portion of $\mathcal{F}_{Ledg}$ resembles closely the
functionality $\mathcal{F}_{StCon}$ from [BKM17]. Our approach differs from
that of [BKM17] in that we make the smart contract functionality part of the global ideal functionality $\mathcal{F}_{Ledg}$, which also keeps track of parties’ coins and transactions. In [BKM17] instead, coins are incorporated in the computation model by augmenting the ITMs with wallets and safes (this changes the model of computation in a way that is not captured by any of the previously studied variants of the UC framework).
We implicitly assume access to an ideal functionality for message
authentication $\mathcal{F}_{auth}$ which all parties employ when sending
their messages, and also to a global ideal functionality for a clock that
keeps track of time. We assume implicitly that $\mathcal{F}_{Ledg}$ makes
calls to the clock and keeps track of time. Alternatively, we could just have
$\mathcal{F}_{Ledg}$ maintain a local variable that counts the number of transactions
performed, and a local time variable $t$, which is increased by $1$ every time
the number of transactions reaches a certain number, after which the
transaction counter is reset (this mimics the process of addition of blocks in
a blockchain, and time simply counts the number of blocks). From now onwards,
we do not formally reference calls to $\mathcal{F}_{auth}$ or to the clock to
avoid overloading notation. We are now ready to define $\mathcal{F}_{Ledg}$.
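As an aside, the transaction-counting alternative described above admits a minimal sketch; the fragment below is ours, and the class name and BLOCK_SIZE are purely illustrative.

```python
# Illustrative sketch (ours) of the transaction-counting clock described
# above: time t advances by 1 every BLOCK_SIZE transactions, mimicking the
# addition of blocks to a blockchain. BLOCK_SIZE is an arbitrary constant.

BLOCK_SIZE = 100

class LocalClock:
    def __init__(self):
        self.t = 0            # current time, i.e. number of "blocks" so far
        self.tx_count = 0     # transactions since the last time increment

    def record_transaction(self):
        """Called by the ledger each time a transaction is processed."""
        self.tx_count += 1
        if self.tx_count == BLOCK_SIZE:
            self.t += 1       # one "block" has elapsed
            self.tx_count = 0 # reset the transaction counter
```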
Global ledger ideal functionality
Initialize the sets $\textsf{parties}=\\{\\}$, $\textsf{contracts}=\\{\\}$,
$\textsf{AllTransactions}=\\{\\}$. (Throughout, the variable $t$ denotes the
current time.)
#### Add party:
Upon receiving a message AddParty from a party with PID $pid=(id,d)$, send
(AddedParty, $pid$) to the adversary; upon receiving a message ok from the
adversary, and if this is the first request from $pid$, add $pid$ to the set
parties. Set $pid.\textsf{id}\leftarrow id$ and $pid.\textsf{coins}\leftarrow
d$.
#### Retrieve party:
Upon receiving a message (RetrieveParty, $pid$) from some party $P$ (or the
adversary), output (RetrieveParty, $pid$, $d$) to $P$ (or to the adversary),
where $d=\perp$ if $pid\notin\textsf{parties}$, else $d=pid.\textsf{coins}$.
(We abuse notation here in that, when taken as part of a message, $pid$ is
treated as a string, but, when called by the functionality, it is a variable
with attributes $pid.\textsf{id}$ and $pid.\textsf{coins}$).
#### Add transaction:
Upon receiving a message (AddTransaction, $pid^{\prime}$, $d$) from some party
$P$ with PID $pid$, and $pid\in\textsf{parties}$, do the following:
* •
If $pid^{\prime}\in\textsf{parties}$ and $pid.\textsf{coins}>d$, update
$pid^{\prime}.\textsf{coins}\leftarrow pid^{\prime}.\textsf{coins}+d$ and
$pid.\textsf{coins}\leftarrow pid.\textsf{coins}-d$. Set
$trId=|\textsf{AllTransactions}|+1$. Add a variable named $trId$ to
AllTransactions, with attribute
$trId.\textsf{transaction}=(pid,pid^{\prime},d,t)$. Send a message (Executed,
$trId$) to $P$.
* •
Else, return $\perp$ to $P$.
#### Retrieve transaction:
Upon receiving a message (RetrieveTransaction, $trId$) from some party $P$ (or
the adversary), output (RetrieveTransaction, $trId$, $s$), where $s=\perp$ if
$trId\notin\textsf{AllTransactions}$, and $s=trId.\textsf{transaction}$
otherwise.
#### Add/trigger smart contract:
Upon receiving a message (AddSmartContract,
Params=$(I,D,\phi,\textsf{st}_{0})$) from some party $P$, where $I$ is a set of PIDs, $D$ is a
set $\\{(pid,d_{pid}):pid\in I\\}$ of “initial deposits”, with $d_{pid}$ being
the amount required initially from the party with PID $pid$, $\phi$ is a
circuit, and $\textsf{st}_{0}$ is the initial value of a state variable st,
check that $I\subseteq\textsf{parties}$. If not, ignore the message; if yes,
set $ssid=|\textsf{contracts}|+1$. Add a variable named $ssid$ to contracts
with attributes $ssid.\textsf{Params}=(I,D,\phi,\textsf{st}_{0})$,
$ssid.\textsf{state}=\textsf{st}$ and $ssid.\textsf{coins}\leftarrow 0$. Send
a message (RecordedContract, $ssid$) to $P$. Then, do the following:
* •
Initialization phase: Wait to get message
$(\textsf{InitializeWithCoins},\textit{ssid},\textsf{Params}=(I,D,\phi,\textsf{st}_{0}))$
from party with PID $pid$ for all $pid\in I$. When all messages are received,
and if, for all $pid\in I$, $pid.\textsf{coins}\geq d_{pid}$, then, for all
$pid\in I$, update: $pid.\textsf{coins}\leftarrow pid.\textsf{coins}-d_{pid}$
and $ssid.\textsf{coins}\leftarrow ssid.\textsf{coins}+d_{pid}$. Set
$\textsf{st}\leftarrow\textsf{st}_{0}$ (We assume that $ssid.\textsf{state}$
changes dynamically with st).
* •
Execution phase: Repeat until termination: Upon receiving a message of the
form $(\textsf{Trigger},\textit{ssid},w,d)$ at time $t$ from some party with
PID $pid\in\textsf{parties}$ (where it can also be $d=0$) such that
$\phi(pid,w,t,\textsf{st},d)\neq\perp$, do the following:
* –
If $d>0$, update $pid.\textsf{coins}\leftarrow pid.\textsf{coins}-d$ and
$ssid.\textsf{coins}\leftarrow ssid.\textsf{coins}+d$
* –
Update $(\textsf{st},e)\leftarrow\phi(pid,w,t,\textsf{st},d)$.
* –
If $e=\textnormal{``all coins''}$, let $q:=ssid.\textsf{coins}$: send the
message (Reward, $ssid$, $q$) to the party with PID $pid$, update
$pid.\textsf{coins}\leftarrow pid.\textsf{coins}+q$ and
$ssid.\textsf{coins}\leftarrow 0$, and terminate. Else, if $e>0$ and
$ssid.\textsf{coins}>e$, send the message (Reward, $ssid$, $e$) to the party
with PID $pid$, and update $pid.\textsf{coins}\leftarrow pid.\textsf{coins}+e$
and $ssid.\textsf{coins}\leftarrow ssid.\textsf{coins}-e$. Else, if
$ssid.\textsf{coins}=e^{\prime}\leq e$, send the message (Reward, $ssid$,
$e^{\prime}$) to the party with PID $pid$, update
$pid.\textsf{coins}\leftarrow pid.\textsf{coins}+e^{\prime}$ and
$ssid.\textsf{coins}\leftarrow 0$, and terminate.
#### Retrieve smart contract:
Upon receiving a message (RetrieveContract, $ssid$) from some party $P$ (or
the adversary), output (RetrieveContract, $ssid$, $z$), where $z=\perp$ if
$ssid\notin\textsf{contracts}$, and
$z=(ssid.\textsf{Params},ssid.\textsf{state},ssid.\textsf{coins})$ otherwise.
Figure 1: Global ledger ideal functionality $\mathcal{F}_{Ledg}$
We think of “Retrieve” operations as being fast, or free, as they do not alter
the state of the ledger. We think of “Add” and “Trigger” operations as being
slow, as they alter the state of the ledger.
From now on, we will often refer to the number of coins $ssid.\textsf{coins}$
of a contract with session identifier $ssid$ as the coins deposited in the
contract. When we say that a contract releases some coins to a party $P$ with
PID $pid$, we mean more precisely that $\mathcal{F}_{Ledg}$ updates its local
variables and moves coins from $ssid.\textsf{coins}$ to $pid.\textsf{coins}$.
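For concreteness, the coin and transaction bookkeeping of $\mathcal{F}_{Ledg}$ can be mirrored by a few lines of Python. The sketch below is ours (class and method names are illustrative, and the adversary notifications are omitted); Fig. 1 remains the authoritative specification.

```python
# Illustrative sketch (ours) of the AddParty/AddTransaction bookkeeping
# in F_Ledg. The "ok" notification to the adversary is omitted.

class Ledger:
    def __init__(self):
        self.parties = {}        # pid -> coins
        self.transactions = []   # list of (payer, payee, amount, time)

    def add_party(self, pid, initial_coins):
        if pid not in self.parties:          # only the first request counts
            self.parties[pid] = initial_coins

    def add_transaction(self, pid, pid_prime, d, t):
        # Mirrors Fig. 1: payee registered and payer holds > d coins.
        if pid_prime in self.parties and self.parties.get(pid, 0) > d:
            self.parties[pid] -= d
            self.parties[pid_prime] += d
            self.transactions.append((pid, pid_prime, d, t))
            return len(self.transactions)    # trId = |AllTransactions|
        return None                          # corresponds to returning ⊥
```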
## 4 A payment system based on quantum lightning and a classical blockchain
In this section, we describe our payment system. We give first an informal
description, and in Section 4.1 we give a formal description.
The building block of our payment system is a quantum lightning scheme,
reviewed in detail in Section 2.2. Recall that a quantum lightning scheme
consists of a generation procedure which creates quantum banknotes, and a
verification procedure that verifies them and assigns serial numbers. The
security guarantee is that no generation procedure (not even the honest one)
can create two banknotes with the same serial number except with negligible
probability. As mentioned earlier, this property is desirable if one wants to
design a decentralized payment system, as it prevents anyone from cloning
banknotes (even the person who generates them). However, this calls for a
mechanism to regulate generation of new valid quantum banknotes.
In this section, we describe formally a proposal that employs smart contracts
to provide such a mechanism. As we have described informally in the
introduction, the high-level idea is to keep track of the valid serial numbers
using smart contracts. Any party is allowed to deposit any amount of coins $d$
(of their choice) into a smart contract with specific parameters (see
definition 7 below), and with an initial value of his choice for a serial
number state variable. We can think of the quantum banknote with the chosen
serial number as having “acquired” value $d$. A payment involves only two
parties: a payer, who sends a quantum banknote, and a payee who receives it
and verifies it locally. As anticipated in the introduction, the full payment
system includes the following additional features, which we describe here
informally in a little more detail (all of these are described formally in
Section 4.1):
* •
Removing a serial number from the list of valid serial numbers in order to
recover the amount of coins deposited in the corresponding smart contract.
This makes the two commodities (quantum banknotes and coins on the blockchain)
interchangeable. This is achieved by exploiting the additional property of
some quantum lightning scheme from Definition 3. Recall that, informally, this
property states that there is some classical certificate that can be recovered
by measuring a valid quantum banknote, which no efficient algorithm can
recover otherwise. The key is that once the state is measured to recover this
certificate, it is damaged in a way that it only passes verification with
negligible probability (meaning that it can no longer be spent). We allow
users to submit this classical certificate to a smart contract, and if the
certificate is consistent with the serial number stored in the contract, then
the latter releases all of the coins deposited in the contract to the user.
* •
Allowing a party to replace an existing serial number with a new one of their
choice in case they lose a valid quantum state (they are fragile after all!).
We allow a user $P$ to file a “lost banknote claim” by sending a message and
some fixed amount of coins $d_{0}$ to a smart contract whose serial number is
the serial number of the lost banknote. The idea is that if no one challenges
this claim, then after a specified time $t_{tr}$ user $P$ can submit a message
which changes the value of the serial number state variable to a value of his
choice and recovers the previously deposited $d_{0}$ coins. On the other hand,
if a user $P$ maliciously files a claim to some contract with serial number
$s$, then any user $Q$ who possesses the valid banknote with serial number $s$
can recover the classical certificate from Definition 3, and submit it to the
contract. This releases all the coins deposited in the contract to $Q$
(including the $d_{0}$ deposited by $P$ to make the claim). As you might
notice, this requires honest users to monitor existing contracts for “lost
banknote claims”. This, however, is not much of a burden if $t_{tr}$ is made
large enough (say a week or a month). The requirement of being online once a
week or once a month is easy to meet in practice.
### 4.1 The payment system and its components
In this section, we describe in detail all of the components of the payment
system. It consists of the following: a protocol to generate valid quantum
banknotes; a protocol to make a payment; a protocol to file a claim for a lost
banknote; a protocol to prevent malicious attempts at filing claims for lost
banknotes; and a protocol to trade a valid quantum banknote in exchange for
coins.
Let $\lambda\in\mathbb{N}$. From now onwards, we assume that
$(\textsf{gen-bolt},\textsf{verify-bolt},\textsf{gen-certificate},\textsf{verify-certificate})\leftarrow\textsf{QL.Setup}(\lambda),$
where the latter is the setup procedure of a quantum lightning scheme with
bolt-to-certificate capability (i.e. a quantum lightning scheme that satisfies
the additional property of Definition 3). We are thus assuming a trusted setup
for our payment system (note there are practical ways to ensure that such a
setup, which is a one-time procedure, is performed legitimately even in a
network where many parties might be dishonest). We also assume that all
parties have access to an authenticated and ideal quantum channel.
Recall from the description of $\mathcal{F}_{Ledg}$ that each smart contract
is specified by several parameters: $I$ is a set of PIDs of parties who are
expected to make the initial deposit, with $\\{d_{pid}:pid\in I\\}$ being the
required initial deposit amounts; a circuit $\phi$ specifies how the state
variables are updated and when coins are released; an initial value
$\textsf{st}_{0}$ for the state variable st; a session identifier $ssid$.
In Definition 7 below, we define an instantiation of smart contracts with a
particular choice of parameters, which we refer to as banknote-contracts.
Banknote-contracts are the building blocks of the protocols that make up our
payment system. We describe a banknote-contract informally before giving a
formal definition.
A banknote-contract is a smart contract initialized by a single party, and it
has a state variable of the form
$\textsf{st}=(\textsf{serial},\textsf{ActiveLostClaim})$. The party
initializes the banknote-contract by depositing a number of coins $d$ and by
setting the initial value of serial to any desired value. The banknote-
contract handles the following type of requests:
* •
As long as $\textsf{ActiveLostClaim}=\text{``No active claim''}$ (which
signifies that there are no currently active lost-banknote claims), any party
$P$ can send the message BanknoteLost, together with a pre-established amount
of coins $d_{0}$ to the contract. This will trigger an update of the state
variable ActiveLostClaim to reflect the active lost-banknote claim by party
$P$.
* •
As long as there is an active lost-banknote claim, i.e.
$\textsf{ActiveLostClaim}=\text{``Claim by $pid$ at time $t$''}$, any party
$Q$ can challenge that claim by submitting a message
$(\textsf{ChallengeClaim},c,s^{\prime})$ to the contract, where $s^{\prime}$
is a proposed new serial number. We say that $c$ is a valid classical
certificate for the current value $s$ of serial if $\textsf{verify-
certificate}(s,c)=1$. Such a $c$ can be thought of as a proof that whoever is
challenging the claim actually possessed a quantum banknote with serial number
$s$, and destroyed it in order to obtain the certificate $c$, and thus that
the current active lost-banknote claim is malicious. If $c$ is a valid
classical certificate for $s$, then serial is updated to the new value
$s^{\prime}$ submitted by $Q$, who also receives all of the coins deposited in
the contract (including the $d_{0}$ coins deposited by the malicious claim).
* •
If party $P$ has previously submitted a lost-banknote claim, and his claim
stays unchallenged for time $t_{tr}$, then party $P$ can send a message
$(\textsf{ClaimUnchallenged},s^{\prime})$ to the contract, where $s^{\prime}$
is a proposed new serial number. Then the contract returns to $P$ the $d_{0}$
coins he initially deposited when making the claim, and updates serial to
$s^{\prime}$.
* •
Any party $P$ can submit to the contract a message
$(\textsf{RecoverCoins},c)$. If $c$ is a valid classical certificate for the
current value $s$ of serial, then the contract releases to $P$ all the
coins currently deposited in the contract. This allows party $P$ to “convert”
back his quantum banknote into coins.
Next, we will formally define banknote-contracts, and then formally describe
all of the protocols that make the payment system.
$\phi_{\$}\left(pid,w,t,(\textsf{serial},\textsf{ActiveLostClaim}),d\right)$
takes as input strings $pid$ and $w$, where $pid$ is meant to be the PID of
some party $P$, and we refer to $w$ as the “witness”, $t\in\mathbb{N}$ denotes
the “current time” maintained by $\mathcal{F}_{Ledg}$,
$(\textsf{serial},\textsf{ActiveLostClaim})$ is the current value of the state
variable, and $d\in\mathbb{N}$ is the number of coins that are being deposited
to the smart contract with the current message. $\phi_{\$}$ has hardcoded
parameters: $d_{0}\in\mathbb{N}$ the amount of coins needed to file a claim
for a lost money state, $t_{tr}\in\mathbb{N}$ the time after which an
unchallenged claim can be settled ($d_{0}$ and $t_{tr}$ are fixed constants
agreed upon by all parties, and they are the same for all banknote-contracts),
$l(\lambda)$ the length of a certificate in the lightning scheme, and a
description of verify-certificate. The circuit $\phi_{\$}$ outputs new values
for the state variables and an amount of coins as follows:
On input $(pid,w,t,(\textsf{serial}=s,\textsf{ActiveLostClaim}),d)$,
$\phi_{\$}$ does the following:
* •
If $\textsf{ActiveLostClaim}=\text{``No active claim''}$:
* –
If $w=\textsf{BanknoteLost}$ and $d=d_{0}$, then $\phi_{\$}$ outputs
$\big((\textsf{serial}=s,\textsf{ActiveLostClaim}=\text{``Claim by $pid$ at time
$t$''}),0\big)$ (to symbolize that at time $t$ the party with PID $pid$ has claimed
to have lost the money state with serial number $s$, and that zero coins are
being released).
* –
If $w=(\textsf{RecoverCoins},c)$, where $c\in\\{0,1\\}^{l(\lambda)}$ and
$\textsf{verify-certificate}(s,c)=1$, then $\phi_{\$}$ outputs
$\big((\textsf{serial}=\perp,\textsf{ActiveLostClaim}=\perp),\textnormal{``all coins''}\big)$.
* •
If $\textsf{ActiveLostClaim}=\text{``Claim by $pid^{\prime}$ at time
$t_{0}$''}$ for some $pid^{\prime},t_{0}$:
* –
If $w=(\textsf{ChallengeClaim},c,s^{\prime})$, where
$c\in\\{0,1\\}^{l(\lambda)}$ and $\textsf{verify-certificate}(s,c)=1$, and
$s^{\prime}\in\\{0,1\\}^{\lambda}$, then $\phi_{\$}$ outputs
$\big((\textsf{serial}=s^{\prime},\textsf{ActiveLostClaim}=\text{``No active
claim''}),d_{0}\big)$.
* –
If $w=(\textsf{ClaimUnchallenged},s^{\prime})$, $pid=pid^{\prime}$ and
$t-t_{0}>t_{tr}$, then $\phi_{\$}$ outputs
$\big((\textsf{serial}=s^{\prime},\textsf{ActiveLostClaim}=\text{``No active
claim''}),d_{0}\big)$.
Figure 2: Circuit $\phi_{\$}$ for banknote-contracts
###### Definition 7.
(Banknote-contract) A banknote-contract is a smart contract on
$\mathcal{F}_{Ledg}$ specified by parameters of the following form:
$I=\\{pid\\}$ for some $pid\in[n]$, $D=\\{(pid,d_{pid})\\}$ for some
$d_{pid}\in\mathbb{N}$, $\textsf{st}_{0}=(s,\text{``No active claim''})$ for
some $s\in\\{0,1\\}^{\lambda}$, and circuit $\phi=\phi_{\$}$, where
$\phi_{\$}$ is defined as in Fig. 2.
For convenience, we denote by serial and ActiveLostClaim respectively the
first and second entry of the state variable of a banknote-contract.
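To make the case analysis of Fig. 2 easier to follow, here is a Python sketch (ours) of the state-transition logic of $\phi_{\$}$. In it, verify_certificate, d0 and t_tr stand in for the hardcoded parameters, an active claim is modeled as None or a pair (claimant, t0), the witness $w$ is a tuple whose first entry names the message, and a return value of None represents the output $\perp$.

```python
# Sketch (ours) of the state-transition logic of phi_$ from Fig. 2.
# Returns (new_serial, new_claim, coins_released), or None for ⊥.

def phi_banknote(pid, w, t, serial, claim, d, verify_certificate, d0, t_tr):
    if claim is None:                        # ActiveLostClaim = "No active claim"
        if w[0] == "BanknoteLost" and d == d0:
            return serial, (pid, t), 0       # record "Claim by pid at time t"
        if w[0] == "RecoverCoins" and verify_certificate(serial, w[1]):
            return None, None, "all coins"   # release everything, then terminate
    else:                                    # "Claim by claimant at time t0"
        claimant, t0 = claim
        if w[0] == "ChallengeClaim" and verify_certificate(serial, w[1]):
            return w[2], None, d0            # challenger's fresh serial s', d0 paid out
        if w[0] == "ClaimUnchallenged" and pid == claimant and t - t0 > t_tr:
            return w[1], None, d0            # claimant recovers the d0 deposit
    return None                              # phi_$ outputs ⊥
```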
#### Generating valid quantum banknotes
We describe the formal procedure for generating a valid quantum banknote.
Protocol carried out by some party $P$ with PID $pid$.
Input of $P$: An integer $d$ such that $pid.\textsf{coins}>d$ in
$\mathcal{F}_{Ledg}$ ($d$ is the “value” of the prospective banknote).
* •
Run $(\ket{\psi},s)\leftarrow\textsf{gen-bolt}$.
* •
Send $(\textsf{AddSmartContract},\textsf{Params})$ to $\mathcal{F}_{Ledg}$,
where $\textsf{Params}=(\\{pid\\},\\{(pid,d)\\},\phi_{\$},(s,\text{``No active
claim''}))$. Upon receipt of a message of the form (RecordedContract, $ssid$),
send the message (InitializeWithCoins, $ssid$, Params) to
$\mathcal{F}_{Ledg}$.
Figure 3: Generating a valid banknote
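Fig. 3 amounts to two steps, which the following sketch (ours) makes explicit; the ledger method names are stand-ins for the corresponding messages to $\mathcal{F}_{Ledg}$, and phi_dollar stands for the circuit of Fig. 2.

```python
# Client-side sketch (ours) of Fig. 3: mint a bolt and lock d coins to its
# serial number via a banknote-contract.

def mint_banknote(pid, d, ledger, gen_bolt, phi_dollar):
    bolt, s = gen_bolt()                       # (|psi>, s) <- gen-bolt
    params = ({pid}, {(pid, d)}, phi_dollar, (s, "No active claim"))
    ssid = ledger.add_smart_contract(params)   # wait for RecordedContract
    ledger.initialize_with_coins(ssid, params) # deposit the d coins
    return bolt, s, ssid                       # the banknote is now "worth" d
```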
#### Making a payment
We describe formally the protocol for making a payment in Fig. 4. Informally,
the protocol is between a party $P$, the payer, and a party $Q$, the payee. In
order to pay party $Q$ with a bolt whose serial number is $s$, party $P$ sends
the valid bolt to party $Q$, the payee, together with the $ssid$ of a smart
contract with $\textsf{serial}=s$. Party $Q$ verifies that $ssid$ corresponds
to a banknote-contract with $\textsf{serial}=s$, and verifies that the
banknote passes verification and has serial number $s$.
The protocol is between some party $P$ with PID $pid$ (the payer) and a party
$Q$ with PID $pid^{\prime}$ (the payee):
Input of $P$: $\ket{\Psi}$, a valid bolt with serial number $s$. $ssid$ the
session identifier of a smart contract on $\mathcal{F}_{Ledg}$ such that
$ssid.\textsf{state}=(s,\text{``No active claim''})$, and
$ssid.\textsf{coins}=d$.
* •
$P$ sends $(\ket{\Psi},s,ssid,d)$ to $Q$.
* •
$Q$ sends a message (RetrieveContract, $ssid$) to $\mathcal{F}_{Ledg}$. Upon
receiving a message (RetrieveContract, $ssid$, $z$) from $\mathcal{F}_{Ledg}$
(where $z=(ssid.\textsf{Params},ssid.\textsf{state},ssid.\textsf{coins})$ if
$P$ is honest), $Q$ does the following:
* –
If $z=(\textsf{Params},(s,\text{``No active claim''}),d)$, then $Q$ checks
that the parameters Params are of the form of a banknote-contract (from
Definition 7). If so, runs $\textsf{verify-bolt}(\ket{\Psi},s)$ and checks
that the outcome is $1$. If so, sends the message accept to $P$.
* –
Else, $Q$ sends the message reject and the state $\ket{\Psi}$ back to $P$.
Figure 4: Protocol for making and verifying a payment
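The payee's checks in Fig. 4 can be summarized by the following sketch (ours), where retrieve_contract, is_banknote_params and verify_bolt stand in for the call to $\mathcal{F}_{Ledg}$, the syntactic check against Definition 7, and the lightning verification, respectively.

```python
# Payee-side sketch (ours) of the checks in Fig. 4.

def receive_payment(bolt, s, ssid, d,
                    retrieve_contract, is_banknote_params, verify_bolt):
    z = retrieve_contract(ssid)              # (Params, state, coins), or None for ⊥
    if z is None:
        return "reject"
    params, state, coins = z
    if state != (s, "No active claim") or coins != d:
        return "reject"                      # contract does not match the payment
    if not is_banknote_params(params):       # must be a banknote-contract (Def. 7)
        return "reject"
    if verify_bolt(bolt, s) != 1:            # local quantum verification
        return "reject"
    return "accept"
```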
#### Recovering lost banknotes
As much as we can hope for experimental progress in the development of quantum
memories, for the foreseeable future we can expect quantum memories to only be
able to store states for a time on the order of days. It is thus important
that any payment system involving quantum money is equipped with a procedure
for users to recover the value associated to quantum states that get damaged
and become unusable. Either users should be able to “convert” quantum money
states back to coins on the blockchain, or they should be able, upon losing a
quantum banknote, to change the serial number state variable of the associated
smart contract to a new serial number (presumably of freshly generated quantum
banknote). Here, we describe a protocol for the latter. After this, we will
describe a protocol for the former.
Informally, a party $P$ who has lost a quantum banknote with serial number $s$
associated to a smart contract with session identifier $ssid$, makes a “lost
banknote claim” at time $t$ by depositing a number of coins $d_{0}$ to that
banknote-contract. Recall the definition of banknote-contracts from Definition
7, and in particular of the circuit $\phi_{\$}$:
* •
If party $P$ is honest, then after a time $t_{tr}$ has elapsed, he will be
able to update the state variable serial of the banknote-contract from $s$ to
$s^{\prime}$ (where $s^{\prime}$ is presumably the serial number of a new
valid bolt that party $P$ has just generated).
* •
If party $P$ is dishonest, and he is claiming to have lost a banknote with
serial number $s$ that someone else possesses, then the legitimate owner can
run $\textsf{gen-certificate}(\ket{\Psi},s)$, where $\ket{\Psi}$ is the
legitimate bolt, and obtain a valid certificate $c$. He can then send $c$ to
the contract and a new serial number $s^{\prime}$ (presumably of a freshly
generated bolt) and obtain $d_{0}$ coins from the contract (the $d_{0}$ coins
deposited by $P$ in his malicious claim).
We describe the protocol formally in Fig. 5.
One might wonder whether, in practice, an adversary can instruct a corrupt
party to make a “lost banknote claim”, and then intercept an honest party’s
classical certificate $c$ before this is posted to the blockchain, and have a
second corrupt party post it instead. This attack would allow the adversary to
“steal” the honest party’s value. Or alternatively, an adversary could monitor
the network for lost-banknote claims by honest parties, and whenever he sees
one he will delay this claim, and instruct a corrupt party to make the same
claim so that it is registered first on the ledger. In our analysis, we do not
worry about such attacks, as we assume access to the ideal functionality
$\mathcal{F}_{Ledg}$, which, by definition, deals with incoming messages in
the order that they are received. We also assume in our adversarial model,
specified more precisely in Section 5, that the adversary does not have any
control over the delivery of messages (and their timing). If one assumes a
more powerful adversary (with some control over the timing of delivery of
messages), then the first issue can still be resolved elegantly. The second
issue has a satisfactory resolution but it is trickier to analyze formally. We
discuss this in more detail in Section 6.
Protocol carried out by party $P$ with PID $pid$ for changing the serial
number of a smart contract.
$P$’s input: $s$ the serial number of a (lost) quantum banknote. $ssid$ the
session identifier of a banknote-contract such that
$ssid.\textsf{state}=(s,\text{``No active claim''})$.
* •
$P$ sends $(\textsf{Trigger},ssid,\textsf{BanknoteLost},d_{0}$) to
$\mathcal{F}_{Ledg}$. This updates $ssid.\textsf{state}$ to $(s,\text{``Claim
by $pid$ at time $t$''})$ (where $t$ is the current time maintained by
$\mathcal{F}_{Ledg}$), and deposits $d_{0}$ coins into the contract.
* •
After time $t_{tr}$, $P$ sends
$(\textsf{Trigger},ssid,(\textsf{ClaimUnchallenged},s^{\prime}),0)$ to
$\mathcal{F}_{Ledg}$. If $P$ was honest then $ssid.\textsf{state}$ is updated
to $(s^{\prime},\text{``No active claim''})$, and $d_{0}$ coins are released
to $P$.
Figure 5: Protocol for changing the serial number of a smart contract
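The claimant's side of Fig. 5 then reduces to two triggers separated by the challenge window, as in the following sketch (ours); wait_until is a stand-in for simply remaining idle until the indicated ledger time.

```python
# Claimant-side sketch (ours) of Fig. 5. d0 and t_tr are the global
# constants from Definition 7; s_new is the serial number of a freshly
# generated replacement bolt.

def reclaim_lost_banknote(ssid, s_new, ledger, d0, t_tr):
    t0 = ledger.trigger(ssid, ("BanknoteLost",), d0)       # file claim, deposit d0
    ledger.wait_until(t0 + t_tr + 1)                       # challenge window passes
    ledger.trigger(ssid, ("ClaimUnchallenged", s_new), 0)  # recover d0, set serial
```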
Next, we give a protocol carried out by all parties to prevent malicious
attempts at changing the state variable serial of a smart contract.
Informally, this involves checking the blockchain regularly for malicious
attempts at filing lost-banknote claims.
Recall that $t_{tr}$ was defined in Definition 7.
Protocol carried out by a party $P$ to prevent malicious attempts at changing
the state variable serial of a smart contract.
Input of $P$: A triple $(\ket{\Psi},s,ssid)$, where $\ket{\Psi}$ is a quantum
banknote with serial number $s$, and $ssid$ is the session identifier of a
banknote-contract such that $ssid.\textsf{state}=(s,\text{``No active
claim''})$
At regular intervals of time $t_{tr}-1$, do the following:
* •
Send a message (RetrieveContract, $ssid$) to $\mathcal{F}_{Ledg}$. Upon
receiving a message (RetrieveContract, $ssid$, $z$) from $\mathcal{F}_{Ledg}$,
if $z=(\textsf{Params},(s,\text{``Claim by $pid^{\prime}$ at time $t$ ''}),d)$
for some $pid^{\prime},t,d$ and for some banknote-contract parameters Params:
* –
Run $c\leftarrow\textsf{gen-certificate}(\ket{\Psi},s)$.
* –
Sample $(\ket{\Psi^{\prime}},s^{\prime})\leftarrow\textsf{gen-bolt}$.
* –
Send $(\textsf{Trigger},ssid,(\textsf{ChallengeClaim},c,s^{\prime}),0)$ to
$\mathcal{F}_{Ledg}$. (If $P$ was honest, this updates
$ssid.\textsf{state}\leftarrow(s^{\prime},\text{``No active claim''})$ and
releases $d_{0}$ coins to $P$).
Figure 6: Protocol for preventing malicious attempts at changing the state
variable serial of a smart contract.
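A single iteration of the monitoring loop of Fig. 6 can be sketched as follows (ours); ledger, gen_certificate and gen_bolt are stand-ins for the calls to $\mathcal{F}_{Ledg}$ and the lightning scheme, and the loop is to be repeated at intervals shorter than $t_{tr}$.

```python
# Sketch (ours) of one iteration of the monitoring loop of Fig. 6.

def monitor_contract(bolt, s, ssid, ledger, gen_certificate, gen_bolt):
    z = ledger.retrieve_contract(ssid)
    if z is None:
        return bolt, s                               # nothing to do
    params, (serial, claim), coins = z
    if serial == s and claim is not None:            # someone filed a claim on s
        c = gen_certificate(bolt, s)                 # destroys the bolt ...
        new_bolt, s_new = gen_bolt()                 # ... so mint a fresh one
        ledger.trigger(ssid, ("ChallengeClaim", c, s_new), 0)
        return new_bolt, s_new                       # keep the replacement bolt
    return bolt, s
```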
#### Trading a quantum banknote for coins:
Finally, we describe a protocol for trading a quantum banknote to recover all
the coins deposited in its associated banknote-contract.
Protocol carried out by a party $P$.
Input of $P$: A tuple $(\ket{\Psi},s,ssid,d)$, where $\ket{\Psi}$ is a quantum
banknote with serial number $s$, and $ssid$ is the session identifier of a
banknote-contract such that $ssid.\textsf{state}=(s,\text{``No active
claim''})$ and $ssid.\textsf{coins}=d$.
* •
Run $c\leftarrow\textsf{gen-certificate}(\ket{\Psi},s)$.
* •
Send message $(\textsf{Trigger},ssid,(\textsf{RecoverCoins},c),0)$ to
$\mathcal{F}_{Ledg}$. This releases $d$ coins to $P$.
Figure 7: Protocol for trading a quantum banknote for coins.
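Fig. 7 is the shortest of the protocols; a two-step sketch (ours, with the same stand-in names as above) suffices.

```python
# Sketch (ours) of Fig. 7: destroy the bolt to obtain a certificate, then
# redeem all coins deposited in the associated banknote-contract.

def redeem_banknote(bolt, s, ssid, ledger, gen_certificate):
    c = gen_certificate(bolt, s)                   # irreversibly consumes the bolt
    ledger.trigger(ssid, ("RecoverCoins", c), 0)   # contract releases all d coins
```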
## 5 Security
We first specify an adversarial model. Security with respect to this
adversarial model is formally captured by Theorem 8. At a high-level, Theorem
8 establishes that, within this adversarial model, no adversary can increase
his “value” beyond what he has legitimately spent or received to and from
honest parties. This captures, for example, the fact that the adversary will
not be able to double-spend his banknotes, or successfully file a “lost
banknote claim” for banknotes he does not legitimately possess.
#### Adversarial model
We assume that all the messages of honest parties are sent using the ideal
functionality for authenticated communication $\mathcal{F}_{auth}$, and that
the adversary sees all messages that are sent (in UC language, we assume that
the adversary is activated every time a party sends a message) but has no
control over the delivery of messages (whether they are delivered or not) and
their timing. Our payment system can be made to work also if we assume that
the adversary can delay delivery of honest parties’ messages by a fixed amount
of time (see the remark preceding Fig. 5 for more details), but, for
simplicity, we do not grant the adversary this power.
The adversary can corrupt any number of parties, and it may do so adaptively,
meaning that the corrupted parties are not fixed at the start, but rather an
honest party can become corrupted, or a corrupted party can return honest, at
any point. The process of corruption is modeled analogously as in the original
UC framework, where the adversary simply writes a corrupt message on the
incoming tape of an honest party, upon which the honest party hands all of its
information to the adversary, who can send messages on the corrupted party’s
behalf. Our setting is slightly more involved in that corrupted parties also
possess some quantum information, in particular the quantum banknotes. We
assume that when an adversary corrupts a party he takes all of its quantum
banknotes. Importantly, we assume that these are not returned to the party
once the party is no longer corrupted. It might seem surprising that we do not
upper bound the fraction of corrupted parties. Indeed, such a bound would only
be needed in order to realize securely the ideal functionality
$\mathcal{F}_{Ledg}$ (any consensus-based realization of $\mathcal{F}_{Ledg}$ would
require such a bound). Here, we assume access to such an ideal functionality,
and we do not worry about its secure realization. Naturally, when replacing
the ideal functionalities with real-world realizations one would set the
appropriate bound on the corruption power of the adversary, but we emphasize
that our schemes are independent of the particular real-world realization.
Note that we do not fix a set of parties at the start, but rather new parties
can be created (see below for more details).
We assume that (ITMs of) honest parties run the code $\pi$. This represents
the “honest” code which executes the protocols from Section 4 as specified.
The input to $\pi$ then specifies when and which protocols from Section 4 are
to be executed. As part of $\pi$, we specify that, upon invocation, a party
sends a message AddParty to $\mathcal{F}_{Ledg}$ to register itself. We also
specify as part of $\pi$ that an honest party runs the protocol of Fig. 6 (to
prevent malicious claims for lost banknotes). Moreover, for notational
convenience, we specify as part of $\pi$ that each party maintains a local
variable banknoteValue, which keeps track of the total value of the quantum
banknotes possessed by the party. banknoteValue is initialized to $0$, and
updated as follows. Whenever a party $P$ successfully receives a quantum
banknote (i.e. $P$ is the payee in the protocol from Fig. 4 and does not
abort) of value $d$ (i.e. the associated smart contract has $d$ coins
deposited), then $P$ updates
$\textsf{banknoteValue}\leftarrow\textsf{banknoteValue}+d$. Similarly, when
$P$ sends a quantum banknote of value $d$, it updates
$\textsf{banknoteValue}\leftarrow\textsf{banknoteValue}-d$. Finally, we
specify also as part of $\pi$, that whenever a party that was corrupted is no
longer corrupted, it resets $\textsf{banknoteValue}=0$ (this is because we
assumed that quantum banknotes are not returned by the adversary). The
following paragraph leads up to a notion of security and a security theorem.
Let $\mathcal{A}$ be a quantum polynomial-time adversary and $\mathcal{E}$ a
quantum polynomial-time environment. Consider an execution of $\pi$ with
adversary $\mathcal{A}$ and environment $\mathcal{E}$ (see Section 2.3 for
more details on what an “execution” is precisely). We keep track of two
quantities during the execution, which we denote as AdversaryValueReceived and
AdversaryValueCurrentOrSpent (These quantities are not computed by any of the
parties, adversary or environment. Rather, they are just introduced for the
purpose of defining security). The former represents the amount of value,
coins or banknotes, that the adversary has received either by virtue of having
corrupted a party, or by having received a payment from an honest party. The
latter counts the total number of coins currently possessed by corrupted
parties, as recorded on $\mathcal{F}_{Ledg}$, and the total amount spent by
the adversary to honest parties either via coins or via quantum banknotes (it
does not count the value of quantum banknotes currently possessed; these only
count once they are successfully spent). Both quantities are initialized to
$0$, and updated as follows throughout the execution:
* (i)
When $\mathcal{A}$ corrupts a party $P$: let $d$ be the number of coins of $P$
according to the global functionality $\mathcal{F}_{Ledg}$ and $d^{\prime}$ be
$P$’s banknoteValue just before being corrupted. Then,
$\textsf{AdversaryValueReceived}\leftarrow\textsf{AdversaryValueReceived}+d+d^{\prime}$,
and
$\textsf{AdversaryValueCurrentOrSpent}\leftarrow\textsf{AdversaryValueCurrentOrSpent}+d$.
* (ii)
When a corrupted party $P$ with $d$ coins and
$\textsf{banknoteValue}=d^{\prime}$ ceases to be corrupted and returns honest,
$\textsf{AdversaryValueReceived}\leftarrow\textsf{AdversaryValueReceived}-d$
(the banknote value $d^{\prime}$ is not subtracted, since the adversary keeps
the corrupted party's quantum banknotes).
* (iii)
When an honest party pays $d$ coins to a corrupted party,
$\textsf{AdversaryValueReceived}\leftarrow\textsf{AdversaryValueReceived}+d$.
Likewise, when an honest party sends a quantum banknote of value $d$ to a
corrupted party, through the protocol of Fig. 4, then (even if the corrupted
party does not return accept)
$\textsf{AdversaryValueReceived}\leftarrow\textsf{AdversaryValueReceived}+d$.
* (iv)
When $\mathcal{A}$ successfully spends a quantum banknote of value $d$ to an
honest party $P$, i.e. a corrupted party is the payer in the protocol from
Fig. 4 and $P$ is the payee and returns accept, or when $\mathcal{A}$ pays $d$
coins to an honest party, then
$\textsf{AdversaryValueCurrentOrSpent}\leftarrow\textsf{AdversaryValueCurrentOrSpent}+d$.
* (v)
When a corrupted party receives $d$ coins from a banknote-contract, then
$\textsf{AdversaryValueCurrentOrSpent}\leftarrow\textsf{AdversaryValueCurrentOrSpent}+d$.
Notice that this can happen only in two ways: $\mathcal{A}$ successfully
converts a quantum banknote of value $d$ to coins on $\mathcal{F}_{Ledg}$ (via
the protocol of Fig. 7), or a corrupted party successfully challenges a
BanknoteLost claim (in this case $d=d_{0}$).
Intuitively, if our payment scheme is secure, then at no point in time should
the adversary be able to make
$\textsf{AdversaryValueCurrentOrSpent}-\textsf{AdversaryValueReceived}>0$.
This would mean that he has successfully spent/stolen value other than the one
he received by virtue of corrupting a party or receiving honest payments. The
following theorem formally captures this notion of security. First, we denote
by
$\mathcal{F}_{Ledg}\text{-}\textrm{EXEC}^{(MaxNetValue)}_{\pi,\mathcal{A},\mathcal{E}}(\lambda,z)$
the maximum value of
$\textsf{AdversaryValueCurrentOrSpent}-\textsf{AdversaryValueReceived}$ during
an execution of $\pi$ with adversary $\mathcal{A}$ and environment
$\mathcal{E}$, with global shared functionality $\mathcal{F}_{Ledg}$.
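For concreteness, the bookkeeping rules (i)-(v) can be phrased as a small tracker. The sketch below (ours) is not executed by any party; it only makes explicit how the two quantities, and hence MaxNetValue, evolve during an execution.

```python
# Tracker (ours) for the bookkeeping rules (i)-(v) above.

class AdversaryAccounting:
    def __init__(self):
        self.received = 0           # AdversaryValueReceived
        self.current_or_spent = 0   # AdversaryValueCurrentOrSpent

    def corrupt(self, coins, banknote_value):        # rule (i)
        self.received += coins + banknote_value
        self.current_or_spent += coins

    def uncorrupt(self, coins):                      # rule (ii)
        self.received -= coins

    def honest_pays_adversary(self, d):              # rule (iii): coins or banknote
        self.received += d

    def adversary_spends(self, d):                   # rule (iv): accepted payment
        self.current_or_spent += d

    def contract_pays_corrupted(self, d):            # rule (v): RecoverCoins/challenge
        self.current_or_spent += d

    def net_value(self):
        return self.current_or_spent - self.received
```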
###### Theorem 8 (Security).
For any quantum polynomial-time adversary $\mathcal{A}$ and quantum
polynomial-time environment $\mathcal{E}$,
$\Pr[\mathcal{F}_{Ledg}\text{-}\textrm{EXEC}^{(MaxNetValue)}_{\pi,\mathcal{A},\mathcal{E}}(\lambda,z)>0]=negl(\lambda).$
The rationale behind considering executions of $\pi$ and quantifying over all
possible adversaries and environments is that doing so captures all possible
ways in which a (dynamically changing) system of honest parties running our
payment system alongside an adversary can behave (where the adversary respects
our adversarial model).
Recall that, in an execution of $\pi$, the environment has the ability to
invoke new parties and assign to them new unique PIDs. Since in
$\mathcal{F}_{Ledg}$ the PIDs are used to register parties and initialize
their number of coins, this means that the environment has the ability to pick
the initial number of coins of any new party that it invokes. Moreover, by
writing inputs to the parties input tapes, the environment can instruct honest
parties to perform the honest protocols from Section 4 in any order it likes.
Quantifying over all adversaries and environments in the statement of Theorem
8 means that the adversary and the environment can intuitively be thought of
as one single adversary. The statement of the theorem thus captures security
against realistic scenarios in which new parties can be adversarially created
with an adversarially chosen number of coins, and they can be instructed to
perform the honest protocols of the payment system from Section 4, in whatever
sequence is convenient to the adversary.
###### Proof of Theorem 8.
Suppose for a contradiction that there exists $\mathcal{A}$ and $\mathcal{E}$
such that
$\Pr[\mathcal{F}_{Ledg}\text{-}\textrm{EXEC}^{(MaxNetValue)}_{\pi,\mathcal{A},\mathcal{E}}(\lambda,z)>0]\neq
negl(\lambda).$ (10)
Then, we go through all of the possible ways that an adversary can increase
its net value, i.e. increase the quantity
$\textsf{AdversaryValueCurrentOrSpent}-\textsf{AdversaryValueReceived}$: the
adversary can do so through actions from items (ii), (iv) and (v) above.
Amongst these, it is easy to see that action (ii) never results in
$\textsf{AdversaryValueCurrentOrSpent}-\textsf{AdversaryValueReceived}>0$.
Thus, in order for (10) to hold, it must be the case that one of the following
happens with non-negligible probability within an execution of $\pi$ with
adversary $\mathcal{A}$ and environment $\mathcal{E}$.
* •
An action from item (iv) resulted in a positive net value for $\mathcal{A}$,
i.e.
$\textsf{AdversaryValueCurrentOrSpent}-\textsf{AdversaryValueReceived}>0$.
Notice that for this to happen it must be the case that $\mathcal{A}$ has
double-spent a banknote, i.e. $\mathcal{A}$ has produced two banknotes with
the same serial number that have both been accepted by honest parties in a
payment protocol of Fig. 4, and so they have both passed verification. But
then, it is straightforward to see that we can use this adversary, together
with $\mathcal{E}$ to construct an adversary $\mathcal{A}^{\prime}$ that
breaks the security of the quantum lightning scheme (i.e. game Counterfeit):
$\mathcal{A}^{\prime}$ simply simulates an execution of protocol $\pi$ with
adversary $\mathcal{A}$ and environment $\mathcal{E}$, and with non-negligible
probability the adversary $\mathcal{A}$ in this execution produces two
banknotes with the same serial number. $\mathcal{A}^{\prime}$ uses these
banknotes to win the security game of quantum lightning.
* •
An action from item (v) resulted in a positive net value for $\mathcal{A}$.
Then, notice that for this to happen it must be that either:
* –
$\mathcal{A}$ has sent a message
$(\textsf{Trigger},ssid,(\textsf{RecoverCoins},c),0)$ to $\mathcal{F}_{Ledg}$
for some $ssid$ and $c$ such that $\textsf{verify-certificate}(s,c)=1$, where
$ssid.\textsf{state}=(s,\text{``No active claim''})$, and the last “make a
payment” protocol (from Fig. 4) referencing $ssid$ had an honest party as
payee which remained honest at least up until after $\mathcal{A}$ sent his
message (or the banknote-contract was initialized by an honest user and the
banknote was never spent). But then, one of the following must have happened:
* *
$\mathcal{A}$ possessed a bolt $\ket{\Psi}$ with serial number $s$ at some
point, before $\ket{\Psi}$ was spent to the honest user. Then, this adversary
would have recovered a valid $c$ and also spent a bolt with serial number $s$
successfully to an honest user. But such an $\mathcal{A}$, together with
$\mathcal{E}$, can be used to win game Forge-certificate from Definition 3
with non-negligible probability, with a similar reduction to the one above,
thus violating the property of Definition 3.
* *
$\mathcal{A}$ recovered $c$ such that $\textsf{verify-certificate}(s,c)=1$
without ever possessing a valid bolt with serial number $s$. Again, such an
adversary could be used, together with $\mathcal{E}$ to win Forge-certificate
from Definition 3).
* *
$\mathcal{A}$ has successfully changed the serial number of contract $ssid$ to
$s$ from some previous $s^{\prime}$ without possessing a bolt $\ket{\Psi}$
with serial number $s^{\prime}$. This cannot happen since any honest user who
possesses the valid bolt with serial number $s^{\prime}$ performs the protocol
of Fig. 6.
* –
$\mathcal{A}$ has sent a message
$(\textsf{Trigger},ssid,(\textsf{ChallengeClaim},c,s^{\prime}),0)$ to
$\mathcal{F}_{Ledg}$ for some $ssid$ and $s^{\prime}$ such that
$ssid.\textsf{state}=(s,\text{``Claim by $pid$ at time $t$''})$ for some
$s,pid,t$ with $pid$ honest and $c$ such that $\textsf{verify-
certificate}(s,c)=1$. Since $pid$ is honest, he must be the last to have
possessed a valid bolt with serial number $s$. Then, there are two
possibilities:
* *
$\mathcal{A}$ never possessed a valid bolt with serial number $s$, and
succeeded in recovering $c$ such that $\textsf{verify-certificate}(s,c)=1$.
Analogously to earlier, this adversary, together with $\mathcal{E}$, can be
used to win Forge-certificate.
* *
$\mathcal{A}$ possessed a bolt $\ket{\Psi}$ with serial number $s$ at some
point, before $\ket{\Psi}$ was spent to an honest user. Analogously to
earlier, this means such an $\mathcal{A}$ both recovered a $c$ with
$\textsf{verify-certificate}(s,c)=1$ and spent a bolt with serial number $s$
successfully. Such an $\mathcal{A}$ can be used, together with $\mathcal{E}$,
to win Forge-certificate.
∎
#### Remark:
The security guarantee of Theorem 8 establishes that an adversary cannot end
up with more value than he started with (after taking into account the amount
he received from honest parties and the amount he successfully spent to honest
parties). However, we do not analyze formally attacks which do not make the
adversary gain value directly, but which “sabotage” honest parties, making
them lose value. We believe that it should be possible to capture such attacks
within our model by modifying the way we keep track of the adversary’s value,
but we leave this analysis for future work. We discuss and formalize the
notion of “sabotage” in detail in Section 6.3.
## 6 Practical issues in a less idealized setting
The ideal functionality $\mathcal{F}_{Ledg}$ defined in Section 3 does not
capture adversaries that are allowed to see messages sent by honest parties to
$\mathcal{F}_{Ledg}$ before they are registered on the ledger, and who could
try to use this information to their advantage: by definition of
$\mathcal{F}_{Ledg}$, messages are processed and registered on the ledger in
exactly the order that they are sent, and are not seen by the adversary until
they make it onto the ledger. While such a definition of $\mathcal{F}_{Ledg}$
makes for a clear exposition and clean proof of security, it is in practice
unrealistic. In typical real-world implementations of blockchains, miners (and
hence potentially adversaries) can see a pool of pending messages which have
not yet been processed and registered on the blockchain, and can potentially
delay the processing of certain messages, while speeding up the processing of
others. This makes the system susceptible to the attacks described in the
following subsection.
### 6.1 Attacks outside of the idealized setting
1. (i)
An adversary could file a malicious “lost banknote claim” (running the
protocol of Fig. 5) corresponding to a serial number $s$ of a quantum banknote
that he does not possess. This would prompt an honest party who possesses a
valid quantum banknote with serial number $s$ to publish the corresponding
classical certificate $x$, in order to stop the malicious claim. If the
adversary could read this message before it is published on the ledger, it
could instruct a corrupt party to also publish $x$, and attempt to have this
message appear first on the ledger. This would effectively result in the
adversary having stolen the honest party’s value associated to the serial
number $s$.
2. (ii)
Suppose an honest party wants to trade their quantum banknote with serial
number $s$ (registered on the ledger) for the coins deposited in the
corresponding contract. The honest party executes the protocol of Fig. 7.
This includes publishing the classical certificate $x$ associated to $s$. An
adversary who sees $x$ before it is registered on the ledger could instruct a
corrupt party to make the same claim for the coins in the contract
associated to $s$. If the corrupt party’s message is processed faster than the
honest party’s message, the adversary has succeeded in stealing the coins
deposited in the contract.
3. (iii)
Suppose an honest party has lost a valid quantum banknote with serial number
$s$ associated to some contract on the ledger. The honest party files a “lost
banknote claim” by executing the protocol of Fig. 5. An adversary who hears
this before the claim is registered on the ledger could instruct a corrupt
party to make a similar claim, and have it appear on the ledger before the
honest claim. This would result in the corrupt party obtaining a valid quantum
banknote associated to the above contract.
4. (iv)
The unforgeability property alone is not enough for _public_ quantum money, as
it allows sabotage: an attacker might be able to burn other people’s money,
without a direct gain from the attack. Consider an adversary that wants to
harm its competitor. The adversary does not follow the honest protocol that
generates the quantum money state. Instead, it could create a tweaked quantum
money which has a noticeable probability to pass verification once, and fail
the second time. This way, the adversary could buy some merchandise using this
tweaked quantum money. When the merchant will try to pay to others using this
quantum money, the verification will fail – the receiver will run the
verification (and this is exactly the second verification), which will cause
it to fail. This is not an issue with schemes in which the verification is a
projective (or very close to being projective), but, for example, the scheme
by Farhi et al. [FGH+12b] is _not_ projective.
Indeed, even though our security proof (see Theorem 8) guarantees that an
adversary cannot end up with more money than he was legitimately given, it
does not rule out sabotage.
5. (v)
Similarly to the sabotage attack on the verification of quantum money, an
attacker might sabotage the ability to produce a valid certificate from the
verified money.
In the next Section 6.2, we will introduce a novel property of quantum
lightning schemes, which we call bolt-to-signature capability, and we give a
provably-secure construction of it. We will employ this in Section 6.4 to give
a somewhat elegant resolution to issues (i) and (ii) above. In Section 6.3, we
show that our scheme is secure against sabotage attacks, hence resolving
issues (iv) and (v). In Section 6.4 we discuss a practical approach to
resolving issue (iii) above, though it does not contain a formal analysis for
reasons which are made clear there.
### 6.2 Trading a Bolt for a Signature
We define a new property of a quantum lightning scheme which we call “trading
a bolt for a signature” (and we say that a lightning scheme has bolt-to-
signature capability). This is an extension of the property of “trading a bolt
for a certificate” defined in Section 2.2. The known concrete constructions of
quantum lightning do not possess this property, but we will show (Fig. 8) that
a lightning scheme with bolt-to-certificate capability can be bootstrapped to
obtain a lightning scheme with bolt-to-signature capability. We envision that
this primitive could find application elsewhere, and is of independent
interest.
We start with an informal description of this property, highlighting the
difference between the certificate and signature properties. Suppose Alice
shows Bob a certificate with a serial number $s$. Bob can conclude that Alice
cannot hold a bolt with the same serial number. In particular, if she held
such bolt, it means she must have measured and destroyed it to produce the
certificate. For the bolt-to-signature property, we ask the following:
* •
There is a way for Alice, who holds a bolt with serial number $s$, to produce
a signature of any message $\alpha$ of her choice, with respect to serial
number $s$.
* •
The signature should be verifiable by anyone who knows $s$.
* •
Just like a certificate, anyone who accepts the signature of $\alpha$ with
respect to $s$ can conclude that Alice can no longer hold a quantum money state
serial number $s$;
* •
(one-time security) As long as Alice signs a single message $\alpha$, no one
other than Alice should be able to forge a signature for a message
$\alpha^{\prime}\neq\alpha$ (even though her state is no longer a valid bolt,
Alice can still sign more than one message, but the unforgeability guarantee
no longer holds).
A quantum lightning scheme with bolt-to-signature capability is a quantum
lightning scheme with two additional algorithms: gen-sig is a QPT algorithm
which receives as input a quantum state, a serial number, and a message of any
length, and outputs a classical signature. verify-sig is a PPT algorithm which
receives a serial number, a message of any length and a signature, and either
accepts or rejects. Thus, we modify the setup procedure of the lightning
scheme so that QL.Setup outputs a tuple $(\textsf{gen-bolt},\textsf{verify-
bolt},\textsf{gen-sig},\textsf{verify-sig})$.
The definition of the property has a completeness and a soundness part. The
latter is formally defined through the following game Forge-sig. The game
Forge-sig is similar in spirit to the game for _onetime_ security of a
standard digital signature scheme – see, e.g. [KL14, Definition 12.14],
[Gol04, Definition 6.4.2].
* •
The challenger runs $(\textsf{gen-bolt},\textsf{verify-bolt},\textsf{gen-
sig},\textsf{verify-sig})\leftarrow\textsf{QL.Setup}(1^{\lambda})$ and sends
this tuple to $\mathcal{A}$.
* •
$\mathcal{A}$ sends $(\ket{\psi},s)$ and $\alpha$ to the challenger.
* •
The challenger runs $\textsf{verify-bolt}(\ket{\psi},s)$. If this rejects, the
challenger outputs “$0$”. Else it proceeds to the next step.
* •
Let $\ket{\psi^{\prime}}$ be the leftover state. The challenger runs
$\sigma\leftarrow\textsf{gen-sig}(\ket{\psi^{\prime}},s,\alpha)$ and sends
$\sigma$ to $\mathcal{A}$.
* •
$\mathcal{A}$ returns a pair $(\alpha^{\prime},\sigma^{\prime})$.
* •
The challenger checks that $\alpha\neq\alpha^{\prime}$ and runs
$\textsf{verify-sig}(s,\alpha^{\prime},\sigma^{\prime})$. If the latter
accepts, the challenger outputs “$1$”.
Let $\textsf{Forge-sig}(\mathcal{A},\lambda)$ be the random variable for the outcome
of the game.
###### Definition 9 (Trading a bolt for a signature).
We say that a quantum lightning scheme has bolt-to-signature capability if the
following holds:
* (I)
For every $\alpha$:
$\Pr\big[\textsf{verify-sig}(s,\alpha,\sigma)=1:(\textsf{gen-bolt},\textsf{verify-bolt},\textsf{gen-sig},\textsf{verify-sig})\leftarrow\textsf{QL.Setup}(1^{\lambda}),\,(\ket{\psi},s)\leftarrow\textsf{gen-bolt},\,\sigma\leftarrow\textsf{gen-sig}(\ket{\psi},s,\alpha)\big]=1-negl(\lambda)$ (11)
* (II)
For all polynomial-time quantum algorithms $\mathcal{A}$,
$\Pr[\textsf{Forge-sig}(\mathcal{A},\lambda)=1]=negl(\lambda).$
Informally, the security definition based on game Forge-sig guarantees that:
* •
As long as Alice signs only one message with respect to $s$, no one except her
can forge a signature of another message with respect to $s$. This property is
very similar to one-time unforgeability for digital signatures, if one views
the bolt as a secret key, and the serial number $s$ as a public key. The
difference is that the “secret key” in this case has meaning beyond enabling
the signing of messages: it can be spent as quantum money. This is what makes
the next property important.
* •
Signing a message destroys the bolt, i.e. it is infeasible to simultaneously
produce both a valid signature with respect to a serial number $s$ and a bolt
with serial number $s$ which passes verification (an adversary who can succeed
at this is easily seen to imply an adversary who wins Forge-sig). This
property is unique to the quantum lightning setting. It says that signing a
message with respect to serial number $s$ inevitably destroys the bolt with
serial number $s$. We remark that it is possible to sign more messages with
the leftover state, but such state will no longer pass the quantum lightning
verification procedure, i.e. it can no longer be spent. One can think of the
bolt with serial number $s$ as being “burnt” once the owner decides to use it
to sign a message.
We are now ready to present our construction of a quantum lightning scheme
with bolt-to-signature capability. The construction is based on the hash-and-
sign paradigm (see, e.g., [KL14]), as well as Lamport signatures [Lam79]
(familiarity with those is helpful, although our presentation is self-
contained). For convenience, we use $\mathcal{H}$ to denote a family of fixed-
length hash functions, and we use the notation $H\leftarrow\mathcal{H}(1^{n})$
to denote an efficient random sampling of a hash function $H$ with output
length $n$ from the family $\mathcal{H}$.
Given: A quantum lightning scheme with bolt-to-certificate capability with
setup procedure QLC.Setup. A family $\mathcal{H}$ of fixed-length collision-
resistant hash functions.
QLDS.Setup: takes as input a security parameter $\lambda$, and outputs a tuple
of (descriptions of) algorithms
$(\textsf{QLDS.Gen},\textsf{QLDS.Ver},\textsf{QLDS.gen-sig},\textsf{QLDS.verify-sig})$.
* •
Let $n=\mathrm{poly}(\lambda)$. Sample $(\textsf{gen-bolt},\textsf{verify-bolt},\textsf{gen-certificate},\textsf{verify-certificate})\leftarrow\textsf{QLC.Setup}(1^{\lambda})$. Sample
$H:\\{0,1\\}^{*}\rightarrow\\{0,1\\}^{n}\leftarrow\mathcal{H}(1^{n})$.
* •
QLDS.Gen: Run gen-bolt $2n$ times to obtain bolts
$(\ket{\psi_{i}}\in\mathcal{H}_{\lambda},s_{i})$, for $i=1,..,2n$. Let
$\ket{\Psi}=\bigotimes_{i=1,..,2n}\ket{\psi_{i}}\in\mathcal{H}_{\lambda}^{\otimes
2n}$. Let $s=s_{1}||..||s_{2n}$. Output $(\ket{\Psi},s)$.
* •
QLDS.Ver: Takes as input a state $\ket{\Psi}\in\mathcal{H}_{\lambda}^{\otimes
2n}$ and a serial number $s=s_{1}||..||s_{2n}$. Applies verify-bolt with
serial number $s_{i}$ to the $i$-th factor, for $i=1,..,2n$, and outputs
“accept” if all $2n$ verifications output “accept”.
* •
QLDS.gen-sig: Takes as input a state
$\ket{\Psi}\in\mathcal{H}_{\lambda}^{\otimes 2n}$, a serial number $s$ and a
message $\alpha$ of any length. Let $\beta=H(\alpha)\in\\{0,1\\}^{n}$. For
$i=1,..,n$:
Run gen-certificate on the $(\beta_{i}\cdot n+i)$-th factor to obtain a
certificate $x_{i}$. Outputs $\sigma=x_{1}||..||x_{n}$.
* •
QLDS.verify-sig: Takes as input a serial number $s$, a message $\alpha$, a
signature $\sigma$. Parses $s$ as $s=s_{1}||..||s_{2n}$, $\sigma$ as
$\sigma=x_{1}||..||x_{n}$. Computes $\beta=H(\alpha)$. For $i=1,..,n$:
Let $r_{i}=\textsf{verify-certificate}(s_{\beta_{i}\cdot n+i},x_{i})$. Output
“accept” if $r_{i}=1$ for all $i$.
Figure 8: Our construction of a quantum lightning scheme with bolt-to-
signature capability $QLDS$
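The Lamport-style index selection of Fig. 8 is perhaps easiest to see in code. In the sketch below (ours), SHA-256 stands in for the sampled hash $H$ (so $n=256$), indices are 0-based, and gen_certificate / verify_certificate are the QLC algorithms; the bolt at position $\beta_{i}\cdot n+i$ is consumed to reveal the $i$-th certificate.

```python
# Sketch (ours) of the index selection in QLDS (Fig. 8): hashing the message
# selects, for each position i, one of two bolts (i or n+i) whose
# certificate is revealed.

import hashlib

def bit(beta: bytes, i: int) -> int:
    """i-th bit of the digest beta (0-indexed, most significant bit first)."""
    return (beta[i // 8] >> (7 - i % 8)) & 1

def gen_sig(bolts, s_list, alpha: bytes, n: int, gen_certificate):
    beta = hashlib.sha256(alpha).digest()    # beta = H(alpha); here n = 256
    # Consume bolt number bit(beta, i)*n + i for each position i.
    return [gen_certificate(bolts[bit(beta, i) * n + i],
                            s_list[bit(beta, i) * n + i])
            for i in range(n)]

def verify_sig(s_list, alpha: bytes, sigma, n: int, verify_certificate):
    beta = hashlib.sha256(alpha).digest()
    return all(verify_certificate(s_list[bit(beta, i) * n + i], sigma[i])
               for i in range(n))
```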
We state and prove the main theorem of this section.
###### Theorem 10.
If there exists a secure quantum lightning scheme which has a bolt-to-
certificate capability, and a family of fixed-length quantum-secure
collision-resistant hash functions, then there exists a quantum lightning scheme with
bolt-to-signature capability.
Specifically, under the assumptions that QLC is a secure quantum lightning
scheme with a bolt-to-certificate capability and that $\mathcal{H}$ is a
fixed-length family of quantum-secure collision-resistant hash functions, the construction
QLDS of Fig. 8 is a secure quantum lightning scheme with bolt-to-signature
capability.
###### Proof.
First of all, it is clear that QLDS is still a secure quantum lightning scheme
according to Definition 2.
The fact that QLDS satisfies the correctness requirement of a quantum
lightning scheme with bolt-to-signature capability ($(I)$ in Definition 9)
follows from the correctness property of QLC (namely $(I)$ in Definition 3)
and that $n$ is at most polynomial in $\lambda$.
Assume towards a contradiction that there exists a QPT adversary $\mathcal{A}$ that
wins the $\textsf{Forge-sig}$ game with non-negligible probability
$\epsilon(\lambda)$. We construct an adversary $\mathcal{B}$ that wins the
game Forge-certificate (from Definition 3) with probability at least
$\epsilon(\lambda)$. $\mathcal{B}$ runs as follows:
* •
$\mathcal{B}$ receives a tuple $(\textsf{gen-bolt},\textsf{verify-
bolt},\textsf{gen-certificate},\textsf{verify-certificate})$ from the
challenger.
* •
$\mathcal{B}$ constructs algorithms gen-sig and verify-sig from gen-
certificate and verify-certificate. Sends the tuple $(\textsf{gen-
bolt},\textsf{verify-bolt},\textsf{gen-sig},\textsf{verify-sig})$ to
$\mathcal{A}$.
* •
$\mathcal{A}$ returns a pair $(\ket{\psi},s)$ and a message $\alpha$. Let
$\beta=H(\alpha)$. $\mathcal{B}$ simulates the next steps of the challenger in
Forge-sig with the following modification: it runs $\textsf{verify-
bolt}(\ket{\psi},s)$ but only measures the registers corresponding to indices
associated with $\beta$ (i.e. $\beta_{i}\cdot n+i$ for $i\in[n]$). It then
runs $\sigma\leftarrow\textsf{gen-sig}(\ket{\psi^{\prime}},s,\alpha)$, where
$\ket{\psi^{\prime}}$ is the leftover state after verification. $\mathcal{B}$
sends $\sigma$ to $\adv$, and $\mathcal{A}$ returns a pair
$(\alpha^{\prime},\sigma^{\prime})$, where
$\sigma^{\prime}=x^{\prime}_{1}||..||x^{\prime}_{n}$.
* •
$\mathcal{B}$ computes $\beta=H(\alpha)$ and
$\beta^{\prime}=H(\alpha^{\prime})$. If $\beta\neq\beta^{\prime}$, let $i$ be
the first index such that $\beta_{i}\neq\beta_{i}^{\prime}$. $\mathcal{B}$
outputs $(\ket{\psi_{\beta_{i}^{\prime}\cdot n+i}},s_{\beta_{i}^{\prime}\cdot
n+i})$ and $x^{\prime}_{i}$.
We analyze the winning probability of $\mathcal{B}$ in game Forge-certificate.
With probability at least $\epsilon(\lambda)$ we have
$\alpha\neq\alpha^{\prime}$ (otherwise $\mathcal{A}$ simply loses). Moreover,
with overwhelming probability, it must be that $\beta\neq\beta^{\prime}$:
otherwise, such an $\mathcal{A}$ could immediately be used to break the collision-
resistance of $H$. Hence, with non-negligible probability, there is an index
$i$ such that $\beta_{i}\neq\beta_{i}^{\prime}$. Using the definition of
QLDS.verify-sig, we deduce that with probability at least $\epsilon$ it must
be that $\textsf{verify-certificate}(s_{\beta_{i}^{\prime}\cdot
n+i},x^{\prime}_{i})=1$. Moreover, the state
$\ket{\psi_{\beta_{i}^{\prime}\cdot n+i}}$ was not measured, and it passes
verification with probability at least $\epsilon(\lambda)$.
∎
### 6.3 Security against Sabotage
To address Item (iv) in Section 6.1, we define two security games that capture
the notion of sabotage; the first is denoted
$\mathsf{Sabotage\textit{-}Money}$:
* •
The challenger runs $(\textsf{gen-bolt},\textsf{verify-
bolt})\leftarrow\textsf{QL.Setup}(\lambda)$ and sends $(\textsf{gen-
bolt},\textsf{verify-bolt})$ to $\mathcal{A}$.
* •
$\adv$ outputs a quantum state $\ket{\psi}$ and sends it to the challenger.
* •
The challenger runs verify-bolt two consecutive times on the quantum state
$\ket{\psi}$.
* •
The adversary wins if the first verification accepts with a serial number $s$
and the second rejects, or accepts with a serial number $s^{\prime}\neq s$.
Let $\mathsf{Sabotage\textit{-}Money}(\adv,\lambda)$ be the random variable
that is $1$ if the adversary $\adv$ wins, and is $0$ otherwise.
###### Definition 11 (Security against sabotage).
A quantum lightning scheme is secure against sabotage if for every QPT $\adv$
there exists a negligible function $\negl[]$ such that:
$\Pr(\mathsf{Sabotage\textit{-}Money}(\adv,\lambda)=1)=\negl$ (14)
The security against sabotage was first defined in the context of quantum
money in [BS16] (though the term sabotage was not used).
We extend the notion of sabotage in the natural way for schemes with bolt-to-
certificate or bolt-to-signature capability. Our goal is to avoid a scenario
in which an adversary gives a user a quantum lightning state which passes
verification, but later fails to produce a valid certificate or signature. We
define the following experiment, $\mathsf{Sabotage\textit{-}Certificate}$:
1. 1.
The challenger runs $(\textsf{gen-bolt},\textsf{verify-
bolt},\mathsf{gen\textit{-}certificate},\mathsf{verify\textit{-}certificate})\leftarrow\textsf{QL.Setup}(\lambda)$
and sends that tuple to $\mathcal{A}$.
2. 2.
$\adv$ sends $\ket{\psi},s$ to the challenger.
3. 3.
The challenger runs $\textsf{verify-bolt}(\ket{\psi},s)$. If verification
fails, set $r=1$. Otherwise, the challenger uses the post-measured state
$\ket{\psi^{\prime}}$ to generate a certificate
$c\leftarrow\mathsf{gen\textit{-}certificate}(\ket{\psi^{\prime}},s)$, and
checks whether it is a valid certificate:
$r\leftarrow\mathsf{verify\textit{-}certificate}(s,c)$.
4. 4.
$\adv$ wins if $r=0$. Let
$\mathsf{Sabotage\textit{-}Certificate}(\adv,\lambda)$ be the random variable
that is $1$ if the adversary $\adv$ wins, and is $0$ otherwise.
###### Definition 12.
A quantum lightning scheme with a bolt-to-certificate capability is secure
against sabotage if, in addition to the requirement in Eq. 14, for every QPT
$\adv$ there exists a negligible function $\negl[]$ such that:
$\Pr(\mathsf{Sabotage\textit{-}Certificate}(\adv,\lambda)=1)=\negl$ (15)
We suspect that Zhandry’s construction based on non-collapsing hash functions,
as well as the construction by Farhi et al. (see Section 2.2), do not satisfy
security against sabotage. Fortunately, the construction based on multi-
collision resistance is secure against sabotage:
###### Proposition 3.
The quantum lightning construction with the bolt-to-certificate capability
discussed in Proposition 2 is secure against sabotage.
###### Proof.
Zhandry’s scheme discussed in Proposition 2 has the property that $\textsf{verify-
bolt}(s,\cdot)$ is (exponentially close to) a rank-1 projector; therefore,
upon one successful verification, the state will continue to pass verifications,
satisfying Eq. 14. In fact, this holds even against unbounded
adversaries. This rank-1 projector is such that the state that it accepts only
has support on $x$’s that are valid certificates. Since the
$\mathsf{gen\textit{-}certificate}$ algorithm is simply a measurement in the
standard basis, we conclude that Eq. 15 holds. ∎
The $\mathsf{Sabotage\textit{-}Signature}$ experiment is defined in an
analogous fashion:
1. 1.
The challenger runs $(\textsf{gen-bolt},\textsf{verify-
bolt},\mathsf{sign},\mathsf{verify\textit{-}sig})\leftarrow\textsf{QL.Setup}(\lambda)$
and sends $(\textsf{gen-bolt},\textsf{verify-
bolt},\mathsf{sign},\mathsf{verify\textit{-}sig})$ to $\mathcal{A}$.
2. 2.
$\adv$ sends $\ket{\psi}$, a serial number $s$, and (a document to be signed)
$\alpha$ to the challenger.
3. 3.
The challenger runs $\textsf{verify-bolt}(\ket{\psi},s)$. If verification fails,
set $r=1$. If verification accepts, the challenger uses the post-measured state
$\ket{\psi^{\prime}}$ to generate a signature of $\alpha$:
$\sigma\leftarrow\mathsf{sign}(\ket{\psi^{\prime}},s,\alpha)$. The challenger
then runs $r\leftarrow\mathsf{verify\textit{-}sig}(s,\sigma,\alpha)$.
4. 4.
$\adv$ wins if $r=0$. Let $\mathsf{Sabotage\textit{-}Signature}(\adv,\lambda)$
be the random variable that is $1$ if the adversary $\adv$ wins, and is $0$
otherwise.
###### Definition 13.
A quantum lightning scheme with a bolt-to-signature capability is secure
against sabotage if, in addition to the requirement in Eq. 14, for every QPT
$\adv$ there exists a negligible function $\negl[]$ such that:
$\Pr(\mathsf{Sabotage\textit{-}Signature}(\adv,\lambda)=1)=\negl$ (16)
###### Proposition 4.
The construction QLDS in Fig. 8 is secure against sabotage, assuming the
underlying quantum lightning scheme with a bolt-to-certificate capability QLC
which it uses is secure against sabotage.
###### Proof.
Given a QPT adversary $\adv$ that wins the
$\mathsf{Sabotage\textit{-}Signature}$ experiment with probability $\epsilon$
with respect to the QLDS scheme, we can construct an adversary $\bdv$ that
wins the $\mathsf{Sabotage\textit{-}Certificate}$ experiment with probability
$\frac{\epsilon(\lambda)}{2n}$. By our assumption that QLC is secure against
sabotage, we conclude that $\epsilon(\lambda)$ is necessarily a negligible
function.
The adversary $\bdv$ receives from his challenger
$(\mathsf{Gen},\textsf{verify-
bolt},\mathsf{gen\textit{-}certificate},\mathsf{verify\textit{-}certificate})$.
$\bdv$ will sample
$H:\\{0,1\\}^{*}\rightarrow\\{0,1\\}^{n}\leftarrow\mathcal{H}(1^{n})$. $\bdv$
will play the role of the challenger in the
$\mathsf{Sabotage\textit{-}Signature}$ experiment, and will also simulate
$\adv$. $\adv$ will receive the tuple above and $H$ as part of the setup, and
will produce a state $\ket{\psi}$ over $2n$ registers, serial numbers
$s_{1},\ldots,s_{2n}$, and a document $\alpha$. $\bdv$ will sample $i\in[2n]$
uniformly at random, and send the $i$’th register of $\ket{\psi}$ to his
challenger.
Next we show that indeed $\bdv$ wins with probability at least
$\frac{\epsilon}{2n}$.
We know that with probability $\epsilon(\lambda)$, $\adv$’s challenger will
verify all the $2n$ states, generate a certificate from $n$ of these states,
and at least one of these certificates will fail verification. Recall that $i$ was
sampled uniformly at random. $\bdv$’s challenger performs exactly the same
procedure – the challenger generates a certificate from the state it receives
and checks whether it passes the verification as a certificate. The
probability that $i$ is the index of one of the failed certificates is therefore at least
$\frac{\epsilon}{2n}$. ∎
### 6.4 A resolution of the practical issues
We first employ a quantum lightning scheme with bolt-to-signature capability
to resolve issues (i) and (ii) of Section 6.1.
We make a simple modification to the protocols of Fig. 6 (preventing and
challenging malicious “lost banknote claims”) and Fig. 7 (trading quantum
banknotes for coins).
We upgrade the quantum lightning scheme with bolt-to-certificate capability to
one with bolt-to-signature capability. To deal with issue (i), we make two
modifications to our payment system of Section 4.
* •
We modify the protocol of Fig. 6 as follows: when party $P$ notices a
malicious “lost banknote claim” for a serial number $s$ associated to a
quantum banknote $\ket{\psi}$ that he possesses, he does not simply compute
the classical certificate associated to $s$ and send it in clear to
$\mathcal{F}_{Ledg}$. Rather, $P$ computes a signature
$\sigma\leftarrow\textsf{gen-sig}(\ket{\psi},s,\alpha)$ where $\alpha$ is a
message saying “Party $P$ challenges the claim”.
* •
We modify the definition of banknote-contract (Definition 7) so that whenever
there is an active lost claim, the coins deposited in a contract with state
variable $\textsf{serial}=s$ are released to a party $P$ only upon receipt of
a signature $\sigma$ with respect to $s$ of a message “Party $P$ challenges
the claim”.
We do not give a formal proof of security of the modified scheme, as this
would require first defining formally the modified ledger functionality.
Instead we argue informally: it is straightforward to see that the attack
described in point (i) of Section 6.1 is impossible. Any adversary that is
able to carry out that attack could be used to create an adversary that is
able to forge a signature with respect to a serial number $s$ of a bolt that
they do not possess. This violates the security of the bolt-to-signature
property of the lightning scheme.
To deal with issue (ii), we make a similar simple modification:
* •
We modify the protocol of Fig. 7 as follows: in order to trade a valid quantum
banknote with serial number $s$ for the coins deposited in a smart-contract
with state variable $\textsf{serial}=s$, party $P$ does not simply compute the
classical certificate associated to $s$ and send it in clear to
$\mathcal{F}_{Ledg}$. Instead, $P$ computes a signature
$\sigma\leftarrow\textsf{gen-sig}(\ket{\psi},s,\alpha)$ where $\alpha$ is a
message saying “Party $P$ wishes to recover the coins in the contract”.
* •
We modify the definition of banknote-contract (Definition 7) so that whenever
there is no active claim, the coins deposited in a contract with state
variable $\textsf{serial}=s$ are released to a party $P$ only upon receipt of
a signature $\sigma$ with respect to $s$ of a message “Party $P$ wishes to
recover the coins in the contract”.
For a similar reasoning as for point (i), the attack in point (ii) of Section
6.1 is no longer possible.
Dealing with issue (iii) of Section 6.1 is trickier. In this case, an honest
party $P$ has lost a valid quantum banknote with serial number $s$, and there
is no way for $P$ to recover any certificate or signature proving possession
of the banknote. The only difference between $P$ and everyone else, as far as
owning the coins deposited in the associated contract, is that $P$ knows that
the banknote has been damaged and lost, and no one else does. The “lost
banknote claim” protocol of Fig. 5 requires $P$ to send a message to
$\mathcal{F}_{Ledg}$ declaring the loss, and the only reason why $P$ is
able to recover the coins deposited in the contract in the idealized setting
is that he is the first to make this claim, and no one has the ability to
challenge it. The situation changes dramatically if we allow the adversary to
delay the processing of honest parties’ messages to $\mathcal{F}_{Ledg}$ in
favour of its own. The adversary could simply notice a “lost banknote claim”
and take advantage of this by making its own claim and ensuring that it is
registered first on the ledger. We propose the following modification to
handle this issue:
* •
Instead of directly making a “lost banknote claim”, a party $P$ posts a
commitment to a message of the form “$P$ is filing a lost banknote claim
associated to the smart contract with identifier $ssid$”. The commitment
contains a deposit of $d_{0}$ coins.
* •
$P$ has to reveal that commitment after no more than $t_{0}$ blocks.
* •
The coins are released to user $P$ only $t_{1}$ blocks after the reveal phase,
and provided no reclaim was made during that time.
* •
In case there are two or more reveals to two or more commitments to the same
$ssid$ – the one which was committed to the earliest (i.e., the commitment
that appears in an earlier block) receives the coins.
Intuitively, this modification resolves issue (iii). This is because the
commitment is hiding, and hence the adversary does not learn anything about
the claim it contains prior to the reveal. After the reveal phase, it is too
late for the adversary to start the commitment phase. On the other hand, if an
adversary simply tries to guess what the content of a claim is (i.e. which
quantum banknote was lost), and tries to make a claim of his own, the
adversary will most likely lose coins, assuming the frequency at which “lost
banknote claims” are made is low (recall that making a claim requires staking
some coins, which are lost if the claim is successfully challenged).
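The timing logic of this commit-reveal claim procedure can be summarized in a few lines. The sketch below is illustrative only: the commitment is modeled as a salted sha256 hash, block heights are plain integers, and the parameter names follow $t_{0}$, $t_{1}$ and $d_{0}$ from the bullets above.

```python
import hashlib, os

def commit_claim(ssid: bytes) -> tuple:
    """Commit phase: post H(salt || claim) on-chain, staking d0 coins."""
    claim = b"filing a lost banknote claim for contract " + ssid
    salt = os.urandom(32)
    return hashlib.sha256(salt + claim).digest(), salt, claim

def coins_released(commit_height: int, reveal_height: int, now: int,
                   t0: int, t1: int, rival_commit_heights: list) -> bool:
    """Release the coins iff the reveal came within t0 blocks of the commit,
    t1 further blocks passed unchallenged, and no rival committed earlier."""
    on_time = reveal_height - commit_height <= t0
    matured = now - reveal_height >= t1
    earliest = all(commit_height <= h for h in rival_commit_heights)
    return on_time and matured and earliest
```

The hiding property of the commitment is what prevents the front-running described above: an adversary observing only the hash learns nothing about which contract is being claimed until the reveal, at which point his own commit would be too late.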
As much as this resolution to issue (iii) seems to work in theory, the
adversary could in practice possess some side information regarding the claim
that is hidden in the commitment, and it becomes difficult to model this side
information in a way that is both rigorous and faithful to what is possible in
practice. We illustrate this with some practical examples.
First, it might be hard to know, for an honest user who lost their quantum
banknote, whether the latter was indeed lost, or whether it was stolen.
Therefore, an honest user who wrongfully believes that his quantum banknote
was lost might end up losing an extra $d_{0}$ coins, to the thief who stole
it, when trying to recover it.
Additionally, if an adversary could guess a serial number of a quantum money
state that was or will be lost, and post it before the honest user, the
adversary will be the one who will eventually receive these coins. Such a
setting might be plausible, for example, during (or even before) power
outages.
Another attack vector is the following. Indeed, the commitment does not reveal
which quantum banknote was lost. However, each commitment reveals that 1
banknote was lost. Suppose there is a claim for 1743 coins at some time $t$.
There is a good chance that these $1743$ coins belong to one person who lost
all his $1743$ coins. An adversary that can figure out who owns exactly $1743$
coins that were recently lost, and what are the serial numbers of these coins,
can effectively steal these coins. Since our construction does not claim any
guarantees about the privacy of users, this information might be readily
available to the adversary. Various attacks on the users’ privacy are known in
the “classical” Bitcoin literature – see Item 6 in Section 8.
Therefore, we elect to steer away from formal security claims, and we leave
this as a proposed resolution which requires further investigation.
## 7 A practical implementation on Bitcoin and space optimization
In this section, we informally discuss a practical implementation of our
payment system using Bitcoin. In Section 4, we have elected to describe our
payment system in a model with access to a ledger functionality that supports
stateful universal smart contracts. This was for two main reasons: clarity of
exposition and flexibility of the framework. Nevertheless, it is possible to
implement our payment system on the Bitcoin blockchain.
Bitcoin uses a special purpose scripting language for transactions. The basic
building block of a script is called an _opcode_. The opcodes that Bitcoin
supports are extremely limited. For example, the opcode OP_ADD, which adds two
inputs, is supported, but even simple operations such as multiplication or
raising a number to a power are _not_ supported – as they are not strictly
needed, yet they would increase the attack surface (for example, they may be used to
perform a memory-based attack). More information about the scripting language
and opcodes can be found in Ref. [NBF+16]. The only required adjustment to the
Bitcoin protocol needed to implement our payment system is to add two opcodes
to the Bitcoin scripting language. These opcodes are utilized once when
transforming Bitcoin to quantum banknotes, and once when the quantum banknotes
are transformed back to Bitcoin. We provide more detail about this
implementation by describing a possible improvement to the space efficiency of
our payment system. In Section 7.1, we informally describe an implementation
on Bitcoin of the original payment system. In Section 7.2, we informally
describe a modification that drastically improves space efficiency.
### 7.1 Bitcoin to Quantum Money: The Simple Approach
In order to use a quantum lightning scheme, of course, we need to first run
$\textsf{QL.Setup}(\secparam)$. In some scenarios, where we can assume some
trusted setup, this is not an issue. But in an existing distributed system,
such as Bitcoin, this could be considered contentious. One of Zhandry’s
constructions only requires a Common Random String (CRS) – see Fact 4.
In the context of Bitcoin, such a CRS could be generated by a multiple-source
randomness extractor, applied on the headers of the first $n$ blocks, where
$n$ is determined by the amount of randomness required for the quantum
lightning scheme, and the min-entropy each header contains. More details
regarding using Bitcoin as a source of randomness can be found in Ref.
[BCG15]333Though, for our purposes, we do not need a recurring form of
randomness usually called a randomness beacon, which is the focus of their
work. In this context the source of randomness is only used once.. Bonneau et
al. have argued that the min-entropy in each block header is at least 32 bits
[BCG15, Table 1].
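As a rough worked example of the numbers involved: at 32 bits of min-entropy per header, accumulating, say, a 512-bit CRS requires on the order of 16 block headers (ignoring extractor losses). The sketch below hashes the concatenated headers into an output of the desired length; a cryptographic hash is only a heuristic stand-in for a proper multiple-source randomness extractor, an assumption made purely for illustration.

```python
import hashlib
from math import ceil

MIN_ENTROPY_PER_HEADER = 32  # bits per block header [BCG15, Table 1]

def headers_needed(crs_bits: int) -> int:
    """Number of headers whose combined min-entropy covers crs_bits."""
    return ceil(crs_bits / MIN_ENTROPY_PER_HEADER)

def derive_crs(headers: list, crs_bits: int) -> bytes:
    """Heuristic CRS derivation by iterated hashing of the header material.
    A real instantiation would use a seeded multi-source extractor."""
    need = headers_needed(crs_bits)
    if len(headers) < need:
        raise ValueError("not enough headers for the requested CRS length")
    material = b"".join(headers[:need])
    out = b""
    counter = 0
    while 8 * len(out) < crs_bits:
        out += hashlib.sha256(material + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:ceil(crs_bits / 8)]

# e.g. a 512-bit CRS needs ceil(512 / 32) = 16 headers.
```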
To transform $x$ bitcoins associated to a signing key $sk$, to quantum money,
the user acts as follows. First, the user mints a quantum lightning state,
together with some serial number: $(\ket{\$},s)\leftarrow\mathsf{mint}_{pk}$.
Second, the user signs the concatenation444We use $||$ to denote
concatenation. of a special “quantum money” opcode, the serial number, and the
value $y$ using the secret key:
$\sigma\leftarrow\mathsf{sign}_{sk}(\text{OP\\_BITCOIN\\_TO\\_QUANTUM\\_MONEY}||s||y).$
(17)
Third, the user propagates the signed message
$(\text{OP\\_BITCOIN\\_TO\\_QUANTUM\\_MONEY}||s||y,\sigma)$ to the Bitcoin
miners. Here $y$ is the value which the quantum money holds, and $x-y$
represents the fee which incentivizes miners to include that message in the
blockchain, in line with the current fee mechanism in Bitcoin (which we have
not covered in this work – see [NBF+16] for more details).
A miner would include the message
$(\text{OP\\_BITCOIN\\_TO\\_QUANTUM\\_MONEY}||s||y,\sigma)$ in their block, as
long as (i) the signature is legitimate, i.e.,
$\mathsf{verify}_{vk}(\text{OP\\_BITCOIN\\_TO\\_QUANTUM\\_MONEY}||s||y,\sigma)=1$,
(ii) $y<x$, where $x$ is the value of the bitcoins associated with the
verification key (bitcoin address) $vk$, and (iii) as long as the miner fee is
attractive enough.
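A compact sketch of the user's message from Eq. (17) and the miner's three acceptance checks follows; the signature scheme is abstracted as a callable, and OP_BITCOIN_TO_QUANTUM_MONEY is the proposed opcode from the text, not an existing Bitcoin opcode.

```python
OPCODE = b"OP_BITCOIN_TO_QUANTUM_MONEY"

def build_message(serial: bytes, y: int) -> bytes:
    """The concatenation OP_BITCOIN_TO_QUANTUM_MONEY || s || y of Eq. (17)."""
    return b"||".join([OPCODE, serial, str(y).encode()])

def miner_accepts(msg: bytes, sig: bytes, verify, vk,
                  x: int, y: int, fee_is_attractive: bool) -> bool:
    """Checks (i)-(iii): the signature verifies under vk, y < x (the balance
    of the spending address), and the implied fee x - y is attractive."""
    return verify(vk, msg, sig) and y < x and fee_is_attractive
```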
Under the above conditions, eventually, a block which contains that
transaction would be mined, verified by all other nodes, and would become part
of the longest chain.
At this point, the original owner of the $x$ bitcoins does not possess any
“standard” bitcoins, since they are considered spent. Instead, she holds the
quantum state $(\ket{\$},s)$ with the “value” $y$. She could then spend the
quantum money by sending it to other users, and it would be treated as having
the same value as $y$ bitcoins. To verify the quantum money
$(\ket{\psi},s,y)$, the receiver would first check that
$\text{OP\\_BITCOIN\\_TO\\_QUANTUM\\_MONEY}||s||y$ indeed appears in the
blockchain. This step requires storing the blockchain. Any old version that
contains the signed message will do. Techniques such as Simplified Payment
Verification (SPV) could be used to eliminate the need to store that
information, with some downsides in terms of privacy and security [Nak08,
NBF+16]. Then, the receiver would accept the quantum money if
$\mathsf{verify}_{pk}(\ket{\psi},s)=1$. The receiver could in turn spend the
received quantum money to other users via the same procedure.
### 7.2 Bitcoin to Quantum Money: Space Optimization
The approach mentioned above has a limitation. Unlike bitcoin, which is
divisible almost infinitely, quantum money is not (see Item 13 in Section 8 for
more details). The most naive workaround is to divide the value of the $x$
bitcoins between several quantum money states, with values
$y_{1},\ldots,y_{n}$ so that $\sum_{i\in[n]}y_{i}<x$. The disadvantage is in
terms of space: each serial number is recorded on the blockchain, which causes
high fees and somewhat of a scalability problem.
We now present a much more efficient approach, in which the space required on
the blockchain is independent of the number of quantum money states the user
creates. Suppose the user wants to split the $x$ bitcoins into $2^{n}$ quantum
money states, each having a value of $2^{-n}y$. The user first creates $2^{n}$
quantum money states with serial numbers $s_{1},\ldots,s_{2^{n}}$, and calculates
the Merkle Hash Tree [Mer80]555We note that the original motivation of Merkle
was very different than the one we use here. See [KL14, Section 5.6.2] and
[NBF+16, Section 1.2] for the specific application we need. of these serial
numbers. The user then signs and publishes only the root of the Merkle tree
$r$, the total value $y$ (which must satisfy $y<x$ as before), and $n$. Each
quantum money state now consists of three parts: the quantum part $\ket{\$}$,
the serial number $s_{i}$, and a Merkle path from $s_{i}$ to the root $r$.
Note that the information recorded on the blockchain is of constant size,
independent of the number $2^{n}$ of banknotes, and one could create very
small denominations using this approach.
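A minimal sketch of the Merkle bookkeeping behind this optimization, with sha256 as an illustrative hash choice: the tree is built over the $2^{n}$ serial numbers, only the root is published, and each banknote carries its serial number together with an authentication path of length $n$.

```python
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def merkle_root_and_paths(serials: list) -> tuple:
    """Root and, per leaf, its authentication path as a list of
    (sibling_hash, sibling_is_right) pairs. len(serials) must be 2**n."""
    level = [h(s) for s in serials]
    paths = [[] for _ in serials]
    pos = list(range(len(serials)))      # position of each leaf's ancestor
    while len(level) > 1:
        for leaf in range(len(serials)):
            p = pos[leaf]
            paths[leaf].append((level[p ^ 1], (p ^ 1) > p))
            pos[leaf] = p // 2
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0], paths

def verify_path(serial: bytes, path: list, root: bytes) -> bool:
    """Recompute the root from a single serial number and its path."""
    node = h(serial)
    for sibling, sibling_is_right in path:
        node = h(node + sibling) if sibling_is_right else h(sibling + node)
    return node == root
```

The on-chain footprint is just the signed root (plus $y$ and $n$), while each banknote's authentication path grows only linearly in $n$.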
Another advantage of this approach is that it allows a slightly weaker variant
of a Proof of Payment – see Item 17 in Section 8.
One could even take this one step further. Miners could include a Merkle root
of the Merkle tree of all existing serial numbers, at the end of every
calendar year. This would require users that wish to verify quantum money to
come online once every year to store the Merkle root. Every owner of quantum
money would calculate the Merkle path from the serial number of the quantum
money state, to that root, and append it to their quantum money state. That
would allow these users to verify the quantum money state generated in
previous years using only one hash, typically 256 bits. Of course, this
technique does not reduce the space requirement for quantum money which was
generated in a year which has not ended yet.
## 8 Comparing our scheme to classical alternatives
In this section, we discuss several trade-offs between Bitcoin and our payment
system based on quantum money. The current roadmap for solving the scalability
problem of Bitcoin goes through a second layer solution called the Lightning
Network (LN) 666Note that there is no connection between the terms quantum
lightning and the lightning network. This section discusses the trade-offs
between the following three different alternatives of employing Bitcoin: (a) a
standard Bitcoin transaction, (b) a LN transaction and (c) a quantum money
transaction as in our payment system (we will refer to this as QM for short,
from now on). Before doing so, we give a brief overview of the LN.
The LN improves upon Bitcoin, and provides a way to transact off-chain using
_bidirectional payment channels_ between two parties [PD16] (see also
[BDW18]). The opening and closing of the channel is done on-chain (i.e., both
of these operations require $1$ transaction that would appear on the Bitcoin
blockchain) while updating the balance of the channel is done completely off-
chain, and thus allows for an unlimited number of transactions which do not
affect the limited throughput of the Bitcoin blockchain, and do not incur the
long confirmation times of on-chain transactions. At any point, each of the
two parties involved can “close” the channel, by submitting the most up to
date state of the channel to the blockchain. Crucially, the lightning network
supports routing. What this means is that the graph induced by all bi-
directional payment channels allows Alice to pay Bob, without any on-chain
transaction, as long as there is a path in the graph from Alice to Bob. The
objectives of the LN are to increase the effective transaction throughput of
the Bitcoin blockchain (by having most of the transactions happen off-chain),
make transactions be effective immediately, and reduce the transaction costs.
Since early 2018, the LN has been active on the Bitcoin main-net. The capacity
of the LN is shown in Fig. 9, and as of August 2019, it stores only $0.004\%$
of the total bitcoins in circulation, and is still considered
experimental777For example, as of August 2019, the maximal capacity of a LN
channel is $0.16$ bitcoin, see https://github.com/lightningnetwork/lightning-
rfc/blob/master/02-peer-protocol.md#the-open_channel-message, and the
requirements regarding the parameter funding_satoshis. .
Figure 9: The total capacity of the lightning network, starting on January
2018, brown in BTC, blue in USD. Source: https://bitcoinvisuals.com/ln-
capacity
The following list provides an exhaustive comparison between the three
alternative modes of operation mentioned above. It is worth noting that
quantum money, used as in our payment system, in many ways resembles (digital)
cash banknotes, both in terms of the advantages and the disadvantages, and is
thus the closest of the three modes to ideal digital cash. Items 1-9 present
the aspects in which quantum money outperforms Bitcoin and the LN, while the
disadvantages of quantum money are presented in Items 10-17.
1. 1.
Throughput. The throughput of Bitcoin is strongly capped. On average, in the
first 6 months of 2019, the Bitcoin network had 3.9 transactions per
second888Source: https://www.blockchain.com/charts/n-transactions.. This
throughput cannot increase dramatically without changing the Bitcoin protocol.
To receive bitcoins through the LN, Alice must have an open channel, which she
will eventually need to close. This requires at least two on-chain
transactions, though the number of uses is not bounded. In this regard,
transforming Bitcoin to quantum money is very similar to opening a LN
channel, and transforming the quantum money back to Bitcoin is similar to
closing a channel.
The balance of a LN channel can be updated, but is always bounded by the
transaction that opened that channel. For example, if Alice locks $10$
bitcoins in a channel with Bob, then initially she has $10$ bitcoins and Bob
has zero; Alice and Bob could then update it, but Bob could never receive more
than $10$ bitcoins using the channel.
A user could receive and send quantum money without ever making an on-chain
transaction. Quantum money has no limit on the value transferred.
2. 2.
Liquidity. Suppose Alice wants to pay Bob. In Bitcoin, she could do that,
assuming she has enough funds, and connection to the Bitcoin network. In the
LN, this is not always the case: there needs to be a way to route money among
the open channels. Sometimes no such route exists, or is inaccessible (using
these channels for routing requires the cooperation of the parties along the
channel). This may have an impact on the effective throughput of the LN. A
quantum money state can always be sent from the sender to the receiver.
3. 3.
Latency. The recommended best practice for a Bitcoin transaction is waiting
for 6 confirmations, which takes 1 hour on average.
Both the LN and QM need one transaction – in order to open a LN channel, or to
transform bitcoin to QM. That part can be done in advance, and is only needed
once, but it suffers from the same latency as a Bitcoin transaction. The LN has
no inherent latency other than the delays caused by the routing. For example,
a single transaction might involve several hops between distant nodes, which
takes on the order of a second. QM has a slightly better performance,
especially when the sender and receiver are physically close to each other –
the latency in this case is only due to the quantum verification algorithm.
Overall, the LN and QM have comparable latencies, which are much better than
Bitcoin’s.
4. 4.
Fees. Each Bitcoin transaction needs to pay a _fee_ , in order to get
approved. This fee is collected by the miner who included the transaction.
The average fee per transaction in the first 6 months of 2019 was $1.41$
USD999Sources: https://www.blockchain.com/charts/n-transactions and
https://www.blockchain.com/charts/transaction-fees-usd.. To encourage LN nodes
to provide liquidity, the protocol uses routing fees. These fees are expected
to be smaller than on-chain transaction fees, but still non-zero. No fees are
needed to transact with QM.
5. 5.
Dependence on mining. The Bitcoin protocol, and implicitly the LN, are based
on mining (also known as Proof-of-Work [DN92]). This approach suffers from two
main drawbacks: (a) As of August 2018, Bitcoin mining consumed slightly under
1% of the world’s electricity [Nar18].
Mining is required to secure the network. Proof-of-Stake is a competing
approach with somewhat different trade-offs [KRDO17, DGKR18], with the main
advantage that it does not spend so much energy. (b) The security model
related to mining is convoluted, and is based on incentives, without clear
assumptions nor a full security analysis. In particular, it is known that the
Bitcoin protocol is _not_ incentive compatible [ES14]. In PoW and PoS, a
corrupted majority can double-spend. In addition, the Bitcoin protocol does
not provide finality – in principle, at any point, a roll-back (known as re-
organization) could occur.
Quantum money does not require PoW or PoS, and provides finality. For example,
if all the bitcoins were transformed to quantum money, PoW or PoS would not be
needed at all. Of course, coordinating such a change is highly non-trivial.
6. 6.
Privacy. The users’ privacy is not guaranteed in Bitcoin, and certainly not by
default. All the transactions that have ever occurred are publicly accessible.
The common approach to provide some level of privacy is to use a new bitcoin
address per transaction. Yet, there are various techniques to relate these
addresses to real-world identities [RS13, RS14, MPJ+16]. There are several
commercial services that deanonymize Bitcoin transactions101010E.g.,
https://www.chainalysis.com. They claim to have “the biggest database in the
world of connections between real world entities and transactions on the
blockchain”, see https://youtu.be/yNpNz-FvSYQ?t=154. . More recent works
tackle this privacy issue – see [vS13, MGGR13, BCG+14, Poe16] and references
therein.
There is a trade-off between privacy and concurrency in the LN, see [MMK+17]
and references therein. In addition, this aforementioned work explicitly
leaves the question of privacy preserving routing for the LN open.
Quantum money enjoys superior privacy, as only the sender and receiver take
part in the transaction, and the transaction leaves no trace. The privacy of
QM is analogous to that of physical banknotes. Bear in mind that banknotes
have serial numbers, and these could potentially be traced, although arguably
the effect on privacy is negligible. In this sense, coins provide better
privacy for the users, since they are indistinguishable. _Quantum Coins_ have
been formally studied: these are indistinguishable quantum states which
provide a level of privacy analogous to that of physical coins - see [MS10, JLS18]
(improving upon [TOI03]). Unfortunately, these constructions are for private
quantum money (rather than public), and do not constitute a quantum lightning
scheme, which is crucial for our construction, and thus cannot be used in our
payment system. We leave it as an open question whether the level of
anonymity achieved using quantum coins, or by the construction mentioned above,
could also be achieved in the context of quantum money for Bitcoin.
7. 7.
Risk of Locked funds. In the LN, the parties lock funds in a channel. When
both parties are cooperative, these funds can be easily unlocked, and used in
the next block. In case one party is uncooperative, and refuses to close the
channel, these funds are effectively locked for some period of time,
typically, for a few days.
Quantum money does not require users to lock any funds.
8. 8.
Connectivity requirements. A Bitcoin transaction requires communication with
the Bitcoin network. At the very least, the receiver needs a communication
channel with at least one other Bitcoin node. A LN transaction needs even more
resources, since the LN uses source routing – therefore, the sender has to be
well connected to the entire network in order to find the route. A QM
transaction only requires quantum communication between the sender and the
receiver.
9. 9.
Liveness. For technical reasons which are outside the scope of this work,
both parties participating in each channel have to be online occasionally,
and monitor the Bitcoin network, in order to revoke a transaction in case the
other party cheats. This task can be outsourced to a _Watchtower_ , which has
to be trusted to perform its job (and of course the watchtower has to be
online occasionally). Online availability is not required for QM, if one opts
for a version of our payment system in which there is no procedure to recover
the value corresponding to lost or damaged quantum banknotes. If one opts to
include such a recovery mechanism, then a similar level of online availability
as in the LN is required.
10. 10.
Technological requirements. Transactions with quantum money require 3 main
resources: long term quantum memory to store the quantum money, a universal
quantum computer to verify the quantum money, and a quantum network to
transmit the quantum money between users.
11. 11.
Smart contracts. Bitcoin provides a scripting language which can be used to
design _smart contracts_. Notable examples include _multi-sig transactions_ ,
in which $m$-out-of-$n$ signatures are needed to spend the money, and _atomic
cross-chain swap_ which allows Alice and Bob to trade two crypto-currencies
without trusting each other [NBF+16]. Both capabilities naturally extend to
the LN111111The video in https://youtu.be/cBVcgzEuJ7Q demonstrates a LN atomic
swap between Bitcoin and Litecoin.. Other crypto-currencies such as Ethereum
have a richer programming language, which allows constructing Decentralized
Applications (DApps) [Woo14, But14].
QM does not solve the consensus problem or any variant of it, and does not
provide the functionality of smart contracts.
12. 12.
Backup. It is pretty easy to backup a Bitcoin wallet. Typically, all that a
user needs is a fairly short string called a _seed_. This seed is used to
generate a Hierarchical Deterministic Wallet [Wui13]. A backup can be done
once, and never needs to be updated in the lifetime of a Bitcoin wallet. LN
channels are slightly harder to backup, since the protocol is stateful, and
therefore, currently, backing up requires having the most up-to-date state of
the channel.
By definition, it is impossible to backup a quantum money state. In this work,
we proposed a mechanism to _recover_ lost banknotes. In essence, a user can
claim that her quantum money was lost, by depositing $d$ bitcoins. The user
would receive these bitcoins after some period of time (e.g., one year). If
the party that claimed that the coins were lost is dishonest, then the person
holding the legitimate quantum money state can produce a certificate of this
fact, and claim the $d$ bitcoins, in addition to the value of the quantum
money that she originally held. To avoid theft, users that want this option
available have to be on-line occasionally (in this example, at least once a
year).
13. 13.
Divisibility. One of the advantages of Bitcoin and the LN is that any amount,
down to $10^{-8}$ bitcoin can be sent, and in principle, even smaller amounts
could be used. Quantum money, on the other hand, is _not_ divisible. The user
must decide, at the time of the quantum minting, what is the denomination of
the quantum money, and it remains the same for the lifetime of the quantum
money.
14. 14.
Hidden inflation. Consider a computationally powerful adversary that attacks
the Bitcoin network. Such an adversary could, for example, break the digital
signature scheme and steal other people’s Bitcoins. Yet, such an adversary
couldn’t “print” bitcoin from thin air, without others noticing it121212Unless
there is some bug that is unrelated to cryptography.
When we use quantum money, the situation is different. A powerful adversary
could create new quantum money from thin air, without others being able to
notice it. This might be a threat either because of invalid computational
assumptions, or flaws in the implementation. This threat is not unique to
quantum money. In fact, such a flaw in the implementation occurred in ZCash, a
crypto-currency which is based on the ZeroCash protocol [BCG+14], see
https://electriccoin.co/blog/zcash-counterfeiting-vulnerability-successfully-
remediated. Inevitably, there is no definitive way to know whether that bug
was exploited.
15. 15.
Finality of the security parameters. One interesting feature of Bitcoin is
that the level of security that is achieved can, in principle, be increased.
If the level of security seems insufficient due to technological advancement,
the protocol may allow users to transition to more secure schemes. This is
exactly the case for the proposed post-quantum secure digital signature
schemes [But13, LRS18]. It is the responsibility (and incentive) of each
individual user to transition – otherwise, her funds might be lost.
The security of the quantum money can be increased in the same manner: users
with QM in circulation can create new QM with the improved parameters, sign
the new serial number using the bolt-to-signature capability (thereby
destroying the old bolt), and add that signed message to the blockchain. Yet,
the incentives here are slightly different: an adversary could steal the
bitcoins of some user if the security parameters are poorly chosen. In the
quantum money setting, the adversary could print money from thin air – see
Item 14. This makes the system, as a whole, insecure. Therefore, it is
advisable to make such a transition mandatory.
16. 16.
Optional Transparency. Bitcoin transactions are publicly available, and
organizations or individuals that wish to, can make their accounting book
completely transparent. See Bitcoin Improvement Proposal (BIP) 32 [Wui13] and
Ref. [NBF+16, Chapter 4.2] for more details.
Quantum money transactions leave no trace, and therefore it seems impossible
to achieve this sort of transparency.
17. 17.
Proof of payment. Consider the following scenario. You go to a store, and pay
for an item. You hand over a valid banknote to the seller. The seller takes it
to the bill checking machine, and secretly replaces your valid bill with a
fake one. The seller then gives you back the fake money, and blames you for
trying to defraud him.
This kind of attack cannot happen in Bitcoin, if used appropriately. The
seller can ask for a signed payment request, and the buyer can then verify the
authenticity of that message, using the seller’s public key. After payment,
the seller cannot argue that the payment was not received – the buyer can
prove that the bitcoins were sent to the seller’s address, by showing the
payment on the blockchain.
A Bitcoin payment can be done in such a way that an honest user would have a
proof of payment [AH13]. A similar functionality might be possible to achieve
in the LN, though currently, as far as the authors are aware, the LN does not
provide such functionality.
On the other hand, QM transactions leave no trace, and proof of payment seems
harder to achieve. A possible workaround (which works for the LN as well)
could be the following. Suppose Alice wants to send 10 bitcoins worth of
quantum money to Bob. Instead of sending it all at once, she could divide
the payment into 100 iterations. In each iteration she would send $0.1$
bitcoin, and expect in return a digital signature approving the payment.
If Bob fails to provide such a signature, she would abort. The worst
case scenario in this case is that she would not have a proof of payment for
$0.1$ bitcoins.
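A sketch of this chunked-payment workaround, with the chunk transfer and the receipt signature scheme abstracted as callables; the point is only that aborting on the first missing receipt bounds the unprovable amount by a single chunk.

```python
def pay_in_chunks(total: float, chunks: int, send_chunk, get_receipt,
                  verify_receipt):
    """Send `total` in `chunks` installments, collecting a signed receipt
    after each one. Returns the amount covered by valid receipts."""
    chunk = total / chunks
    receipts = []
    for i in range(chunks):
        send_chunk(chunk)                     # e.g. 0.1 BTC of quantum money
        receipt = get_receipt(i, chunk)       # seller's signed acknowledgment
        if receipt is None or not verify_receipt(i, chunk, receipt):
            break                             # worst case: one chunk unproven
        receipts.append(receipt)
    return chunk * len(receipts), receipts
```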
## 9 Conclusion
In this work, we gave the first example of the use of classical smart
contracts in conjunction with quantum cryptographic tools. We showed that
smart contracts can be combined with quantum tools, in particular quantum
lightning, to design a decentralized payment system which solves the problem
of scalability of (payment) transactions. There is currently only one known
secure construction of quantum lightning, which relies on a computational
assumption about multi-collision resistance of certain degree-2 hash functions
[Zha19]. Finding alternative constructions of quantum lightning, secure under
more well-studied computational assumptions, is a very interesting open
problem.
Smart contracts have found several applications in classical cryptographic
tasks, but their application to quantum cryptographic tasks is virtually
unexplored. We hope that this work will ignite future investigations. Some
candidate tasks which might potentially benefit from smart contracts are:
generation of public trusted randomness, distributed delegation of quantum
computation, secure multi-party quantum computation.
#### Acknowledgments
A.C is supported by the Simons Institute for the Theory of Computing. O.S. is
supported by the Israel Science Foundation (ISF) grant No. 682/18 and 2137/19,
and by the Cyber Security Research Center at Ben-Gurion University.
## References
* [Aar09] S. Aaronson. Quantum copy-protection and quantum money. In Computational Complexity, 2009. CCC’09. 24th Annual IEEE Conference on, pages 229–242. IEEE, 2009.
* [AH13] G. Andresen and M. Hearn. Payment Protocol. Bitcoin Improvement Proposal (BIP) 70 https://github.com/bitcoin/bips/blob/master/bip-0070, 2013.
* [BCG+14] E. Ben-Sasson, A. Chiesa, C. Garman, M. Green, I. Miers, E. Tromer, and M. Virza. Zerocash: Decentralized Anonymous Payments from Bitcoin. In 2014 IEEE Symposium on Security and Privacy, SP 2014, Berkeley, CA, USA, May 18-21, 2014, pages 459–474. IEEE Computer Society, 2014.
* [BCG15] J. Bonneau, J. Clark, and S. Goldfeder. On Bitcoin as a public randomness source. IACR Cryptology ePrint Archive, 2015:1015, 2015.
* [BDW18] C. Burchert, C. Decker, and R. Wattenhofer. Scalable funding of Bitcoin micropayment channel networks. Royal Society Open Science, 5(8):180089, August 2018.
* [BK14] I. Bentov and R. Kumaresan. How to use bitcoin to design fair protocols. In International Cryptology Conference, pages 421–439. Springer, 2014.
* [BKM17] I. Bentov, R. Kumaresan, and A. Miller. Instantaneous decentralized poker. In International Conference on the Theory and Application of Cryptology and Information Security, pages 410–440. Springer, 2017.
* [BMTZ17] C. Badertscher, U. Maurer, D. Tschudi, and V. Zikas. Bitcoin as a transaction ledger: A composable treatment. In Annual International Cryptology Conference, pages 324–356. Springer, 2017.
* [BOM04] M. Ben-Or and D. Mayers. General security definition and composability for quantum & classical protocols. arXiv preprint quant-ph/0409062, 2004.
* [BS16] S. Ben-David and O. Sattath. Quantum Tokens for Digital Signatures, 2016, arXiv: 1609.09047.
* [But13] V. Buterin. Bitcoin Is Not Quantum-Safe, And How We Can Fix It When Needed. https://bitcoinmagazine.com/articles/bitcoin-is-not-quantum-safe-and-how-we-can-fix-1375242150, archived at http://www.webcitation.org/6wDiIPU3l, 2013.
* [But14] V. Buterin. A next-generation smart contract and decentralized application platform. https://github.com/ethereum/wiki/wiki/White-Paper, 2014.
* [Can01] R. Canetti. Universally composable security: A new paradigm for cryptographic protocols. In Foundations of Computer Science, 2001. Proceedings. 42nd IEEE Symposium on, pages 136–145. IEEE, 2001.
* [Can06] R. Canetti. Security and Composition of Cryptographic Protocols: A Tutorial. Cryptology ePrint Archive, Report 2006/465, http://eprint.iacr.org/2006/465, 2006.
* [CDPW07] R. Canetti, Y. Dodis, R. Pass, and S. Walfish. Universally composable security with global setup. In Theory of Cryptography Conference, pages 61–85. Springer, 2007\.
* [Col19] A. Coladangelo. Smart contracts meet quantum cryptography, 2019, arXiv: 1902.05214.
* [CSV16] R. Canetti, D. Shahaf, and M. Vald. Universally composable authentication and key-exchange with global PKI. In IACR International Workshop on Public Key Cryptography, pages 265–296. Springer, 2016.
* [DEF18] S. Dziembowski, L. Eckey, and S. Faust. Fairswap: How to fairly exchange digital goods. In Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security, pages 967–984. ACM, 2018.
* [DGKR18] B. David, P. Gazi, A. Kiayias, and A. Russell. Ouroboros Praos: An Adaptively-Secure, Semi-synchronous Proof-of-Stake Blockchain. In J. B. Nielsen and V. Rijmen, editors, Advances in Cryptology \- EUROCRYPT 2018 - 37th Annual International Conference on the Theory and Applications of Cryptographic Techniques, Tel Aviv, Israel, April 29 - May 3, 2018 Proceedings, Part II, volume 10821 of Lecture Notes in Computer Science, pages 66–98. Springer, 2018.
* [DN92] C. Dwork and M. Naor. Pricing via Processing or Combatting Junk Mail. In Advances in Cryptology - CRYPTO ’92, 12th Annual International Cryptology Conference, Santa Barbara, California, USA, August 16-20, 1992, Proceedings, pages 139–147, 1992.
* [ES14] I. Eyal and E. G. Sirer. Majority Is Not Enough: Bitcoin Mining Is Vulnerable. In Financial Cryptography and Data Security - 18th International Conference, FC 2014, Christ Church, Barbados, March 3-7, 2014, Revised Selected Papers, pages 436–454, 2014, arXiv: 1311.0243.
* [FGH+12a] E. Farhi, D. Gosset, A. Hassidim, A. Lutomirski, and P. Shor. Quantum money from knots. In Proceedings of the 3rd Innovations in Theoretical Computer Science Conference, pages 276–289, 2012.
* [FGH+12b] E. Farhi, D. Gosset, A. Hassidim, A. Lutomirski, and P. Shor. Quantum money from knots. In Proceedings of the 3rd Innovations in Theoretical Computer Science Conference, pages 276–289. ACM, 2012, arXiv: 1004.5127.
* [Gol04] O. Goldreich. The Foundations of Cryptography - Vol. 2, Basic Applications. Cambridge University Press, 2004.
* [JLS18] Z. Ji, Y. Liu, and F. Song. Pseudorandom Quantum States. In H. Shacham and A. Boldyreva, editors, Advances in Cryptology \- CRYPTO 2018 - 38th Annual International Cryptology Conference, Santa Barbara, CA, USA, August 19-23, 2018, Proceedings, Part III, volume 10993 of Lecture Notes in Computer Science, pages 126–152. Springer, 2018, arXiv: 1711.00385.
* [KL14] J. Katz and Y. Lindell. Introduction to Modern Cryptography, Second Edition. CRC Press, 2014.
* [KMS+16] A. Kosba, A. Miller, E. Shi, Z. Wen, and C. Papamanthou. Hawk: The blockchain model of cryptography and privacy-preserving smart contracts. In 2016 IEEE symposium on security and privacy (SP), pages 839–858. IEEE, 2016.
* [KRDO17] A. Kiayias, A. Russell, B. David, and R. Oliynykov. Ouroboros: A Provably Secure Proof-of-Stake Blockchain Protocol. In J. Katz and H. Shacham, editors, Advances in Cryptology - CRYPTO 2017 - 37th Annual International Cryptology Conference, Santa Barbara, CA, USA, August 20-24, 2017, Proceedings, Part I, volume 10401 of Lecture Notes in Computer Science, pages 357–388. Springer, 2017.
* [KZZ16] A. Kiayias, H.-S. Zhou, and V. Zikas. Fair and robust multi-party computation using a global transaction ledger. In Annual International Conference on the Theory and Applications of Cryptographic Techniques, pages 705–734. Springer, 2016.
* [LAF+09] A. Lutomirski, S. Aaronson, E. Farhi, D. Gosset, A. Hassidim, J. Kelner, and P. Shor. Breaking and making quantum money: toward a new quantum cryptographic protocol. arXiv preprint arXiv:0912.3825, 2009.
* [Lam79] L. Lamport. Constructing digital signatures from a one-way function, 1979.
* [LRS18] T. Lee, M. Ray, and M. Santha. On a quantum Bitcoin mining contest. private communication, 2018.
* [Lut11] A. Lutomirski. Component mixers and a hardness result for counterfeiting quantum money, 2011, arXiv: 1107.0321.
* [Mer80] R. C. Merkle. Protocols for Public Key Cryptosystems. In Proceedings of the 1980 IEEE Symposium on Security and Privacy, Oakland, California, USA, April 14-16, 1980, pages 122–134. IEEE Computer Society, 1980.
* [MGGR13] I. Miers, C. Garman, M. Green, and A. D. Rubin. Zerocoin: Anonymous Distributed E-Cash from Bitcoin. In 2013 IEEE Symposium on Security and Privacy, SP 2013, Berkeley, CA, USA, May 19-22, 2013, pages 397–411. IEEE Computer Society, 2013.
* [MMK+17] G. Malavolta, P. Moreno-Sanchez, A. Kate, M. Maffei, and S. Ravi. Concurrency and Privacy with Payment-Channel Networks. In B. M. Thuraisingham, D. Evans, T. Malkin, and D. Xu, editors, Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, CCS 2017, Dallas, TX, USA, October 30 - November 03, 2017, pages 455–471. ACM, 2017.
* [MPJ+16] S. Meiklejohn, M. Pomarole, G. Jordan, K. Levchenko, D. McCoy, G. M. Voelker, and S. Savage. A fistful of Bitcoins: characterizing payments among men with no names. Commun. ACM, 59(4):86–93, 2016.
* [MS10] M. Mosca and D. Stebila. Quantum coins, volume 523 of Contemp. Math., pages 35–47. Amer. Math. Soc., 2010, arXiv: 0911.1295.
* [Nak08] S. Nakamoto. Bitcoin: A peer-to-peer electronic cash system, 2008.
* [Nar18] A. Narayanan. Hearing on Energy Efficiency of Blockchain and Similar Technologies, Committee on Energy and Natural Resources, United States Senate, August 2018.
* [NBF+16] A. Narayanan, J. Bonneau, E. W. Felten, A. Miller, and S. Goldfeder. Bitcoin and Cryptocurrency Technologies - A Comprehensive Introduction. Princeton University Press, 2016.
* [PD16] J. Poon and T. Dryja. The bitcoin lightning network: Scalable off-chain instant payments, 2016.
* [Poe16] A. Poelstra. Mimblewimble, 2016.
* [RS13] D. Ron and A. Shamir. Quantitative Analysis of the Full Bitcoin Transaction Graph. In Financial Cryptography and Data Security - 17th International Conference, Japan, pages 6–24, 2013.
* [RS14] D. Ron and A. Shamir. How Did Dread Pirate Roberts Acquire and Protect his Bitcoin Wealth? In Financial Cryptography and Data Security - FC 2014 Workshops, BITCOIN and WAHC 2014, Barbados, pages 3–15, 2014.
* [Sat19] O. Sattath. Transforming Bitcoin to Quantum Money, and Back. Unpublished Manuscript, 2019.
* [TOI03] Y. Tokunaga, T. Okamoto, and N. Imoto. Anonymous quantum cash, 2003.
* [Unr04] D. Unruh. Simulatable security for quantum protocols. arXiv preprint quant-ph/0409125, 2004.
* [Unr10] D. Unruh. Universally composable quantum multi-party computation. In Annual International Conference on the Theory and Applications of Cryptographic Techniques, pages 486–505. Springer, 2010.
* [vS13] N. van Saberhagen. CryptoNote v 2.0, 2013.
* [WEH18] S. Wehner, D. Elkouss, and R. Hanson. Quantum internet: A vision for the road ahead. Science, 362(6412):eaam9288, 2018.
* [Wie83] S. Wiesner. Conjugate coding. ACM Sigact News, 15(1):78–88, 1983.
* [Woo14] G. Wood. Ethereum: a secure decentralised generalised transaction ledger. http://gavwood.com/paper.pdf, 2014.
* [Wui13] P. Wuille. Hierarchical deterministic wallets. Bitcoin Improvement Proposal (BIP) 32 https://github.com/bitcoin/bips/blob/master/bip-0032, 2013.
* [Zha19] M. Zhandry. Quantum Lightning Never Strikes the Same State Twice. In Y. Ishai and V. Rijmen, editors, Advances in Cryptology - EUROCRYPT 2019 - 38th Annual International Conference on the Theory and Applications of Cryptographic Techniques, Darmstadt, Germany, May 19-23, 2019, Proceedings, Part III, volume 11478 of Lecture Notes in Computer Science, pages 408–438. Springer, 2019, arXiv: 1711.02276.
## Appendix A Appendix
### A.1 Proof of Proposition 2
###### Proof of Proposition 2.
We assume some familiarity with Zhandry’s construction (see section 6 of
[Zha19] for more details). In his construction, a full bolt is a tensor
product of $n$ mini-bolts. A valid mini-bolt with serial number $y$ takes the
form $\ket{\Psi}^{\otimes(k+1)}$ where $\ket{\Psi}$ is a superposition of pre-
images of $y$ under a certain function $H$ (this is called $f_{\mathcal{A}}$
in Zhandry’s paper, and we do not go into the details of what this function
is). The serial number of the full bolt is the concatenation of the serial
numbers of the mini-bolts. verify-bolt has $H$ hardcoded in its description.
The computational assumption under which Zhandry’s construction is proved
secure is that $H$ is $(2k+2)$-multi-collision resistant, i.e. it is hard to
find $2k+2$ colliding inputs (for this particular function it is easy to find
$k+1$ on the other hand).
Similarly to the proof of Proposition 1, we define gen-certificate to be the
QPT algorithm that measures each mini-bolt in the computational basis and
outputs the concatenation of the outcomes. We define verify-certificate to be
the deterministic algorithm which receives as input a serial number
$(y_{1},..,y_{n})$, and the concatenation of $(z^{(i)}_{1},..,z^{(i)}_{k+1})$
for $i=1,..,n$, and checks that $H(z^{(i)}_{1})=..=H(z^{(i)}_{k+1})=y_{i}$ for
all $i$.
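The classical verify-certificate half of this pair is a simple consistency check; a sketch, with sha256 standing in for the function $H$ of the construction (an illustrative assumption only):

```python
import hashlib

def H(z: bytes) -> bytes:
    """Stand-in for the function H (f_A in Zhandry's paper); illustration only."""
    return hashlib.sha256(z).digest()

def verify_certificate(serials: list, certificate: list, k: int) -> bool:
    """serials = (y_1,..,y_n); certificate holds one (k+1)-tuple of claimed
    preimages per mini-bolt. Accept iff H(z_1) = .. = H(z_{k+1}) = y_i for all i."""
    if len(serials) != len(certificate):
        return False
    return all(
        len(tup) == k + 1 and all(H(z) == y for z in tup)
        for y, tup in zip(serials, certificate)
    )
```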
It is clear that $(I)$ holds.
For property $(II)$, similarly to the proof of Proposition 1, we can construct
an adversary $\mathcal{A}^{\prime}$ that breaks the $(2k+2)$-multi-collision
resistance of $H$ from an adversary $\mathcal{A}$ that wins game Forge-
certificate with non-negligible probability: $\mathcal{A}^{\prime}$ runs
$(\textsf{gen-bolt},\textsf{verify-
bolt})\leftarrow\textsf{QL.Setup}(1^{\lambda})$ and sends $(\textsf{gen-
bolt},\textsf{verify-bolt},\textsf{gen-certificate},\textsf{verify-
certificate})$ to $\mathcal{A}$, where the latter two are defined as above in
terms of the function $H$ hardcoded in verify-bolt. $\mathcal{A}$ returns $c$
which is parsed as $(x_{1},..,x_{n})$, where each $x_{i}$ is a $(k+1)$-tuple,
and a state $\ket{\psi^{\prime}}$. $\mathcal{A}^{\prime}$ then measures each of the $n$
registers of $\ket{\psi^{\prime}}$ to get $n$ $(k+1)$-tuples
$(x^{\prime}_{1},..,x^{\prime}_{n})$. If there is some $i$ such that
$(x_{i},x_{i}^{\prime})$ is a $(2k+2)$-collision, then $\mathcal{A}^{\prime}$
outputs this. We claim that with non-negligible probability
$\mathcal{A}^{\prime}$ outputs a $(2k+2)$-collision: in fact, since
$\mathcal{A}$ wins Forge-certificate with non-negligible probability, then
$\ket{\psi^{\prime}}$ must pass verify-bolt with non-negligible probability;
from the analysis of Zhandry’s proof, we know that any full bolt that passes
verification with non-negligible probability must be such that most mini-bolts
have non-negligible weight on most pre-images (in fact they should be close to
uniform superpositions over all pre-images). Indeed, in Section 6.3, p. 435,
Zhandry argues:
> Conditioned on acceptance, by the above arguments the resulting mini bolts
> must all be far from singletons when we trace out the other bolts. This
> means that if we measure the mini-bolts, the resulting superpositions will
> have high min-entropy.
∎
### A.2 Standard Security Definitions
###### Definition 14 (Digital Signature Scheme).
A digital signature scheme consists of 3 PPT algorithms
$\mathsf{key\textit{-}gen},\ \mathsf{sign}$ and $\mathsf{verify}$. The scheme
is complete if the following holds. When a document is signed using the
private key, the signature is accepted by the verification algorithm using the
public key. Formally, for every $\alpha\in\\{0,1\\}^{*}$:
$\Pr\left[(vk,sk)\leftarrow\mathsf{key\textit{-}gen}(\secparam);\sigma\leftarrow\mathsf{sign}_{sk}(\alpha);r\leftarrow\mathsf{verify}_{vk}(\alpha,\sigma):r=1\right]=1$
(18)
The scheme satisfies Post-Quantum Existential Unforgeability under a Chosen
Message Attack (PQ-EU-CMA) if the following holds. Consider a QPT adversary
with the capability of adaptively requesting documents to be signed by a
signing oracle. The scheme is secure if such an adversary cannot generate a
signature for any fresh document – a document which the adversary did not ask
the oracle to sign. Formally, for every QPT adversary $\adv$ there exists a
negligible function $\negl[]$ such that
$\Pr\left[(vk,sk)\leftarrow\mathsf{key\textit{-}gen}(\secparam);(\alpha,\sigma)\leftarrow\adv^{\mathsf{sign}_{sk}}(\secparam,vk):\mathsf{verify}_{vk}(\alpha,\sigma)=1\wedge\alpha\notin
Q_{\adv}^{\mathsf{sign}_{sk}}\right]\leq\negl,$ (19)
where $\adv^{\mathsf{sign}_{sk}}$ is a QPT algorithm with access to the
signing oracle, and $Q_{\adv}^{\mathsf{sign}_{sk}}$ is the set of queries it
made to the oracle.
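As an illustration of the completeness condition only, here is a minimal sketch instantiated with Ed25519 from the Python `cryptography` package; Definition 14 itself is scheme-agnostic, and this particular instantiation is our choice.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

sk = Ed25519PrivateKey.generate()  # (vk, sk) <- key-gen(1^lambda)
vk = sk.public_key()

alpha = b"some document"
sigma = sk.sign(alpha)             # sigma <- sign_sk(alpha)
vk.verify(sigma, alpha)            # verify_vk(alpha, sigma): raises unless r = 1
print("completeness holds: r = 1")
```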
|
2024-09-04T02:54:54.976247 | 2020-02-27T09:50:41 | 2002.12005 | {
"authors": "Zhenisbek Assylbekov and Alibi Jangeldin",
"full_text_license": null,
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"provenance": "arxiv-papers-0000.json.gz:25909",
"submitter": "Zhenisbek Assylbekov",
"url": "https://arxiv.org/abs/2002.12005"
} | arxiv-papers | School of Sciences and Humanities, Nazarbayev University,
Nur-Sultan, Kazakhstan
Email: <EMAIL_ADDRESS>
# Squashed Shifted PMI Matrix: Bridging Word Embeddings and Hyperbolic Spaces
Zhenisbek Assylbekov 0000-0003-0095-9409 Alibi Jangeldin
###### Abstract
We show that removing sigmoid transformation in the skip-gram with negative
sampling (SGNS) objective does not harm the quality of word vectors
significantly and at the same time is related to factorizing a squashed
shifted PMI matrix which, in turn, can be treated as a connection
probabilities matrix of a random graph. Empirically, such graph is a complex
network, i.e. it has strong clustering and scale-free degree distribution, and
is tightly connected with hyperbolic spaces. In short, we show the connection
between static word embeddings and hyperbolic spaces through the squashed
shifted PMI matrix using analytical and empirical methods.
###### Keywords:
Word vectors PMI Complex networks Hyperbolic geometry
## 1 Introduction
Modern word embedding models (McCann et al., 2017; Peters et al., 2018; Devlin
et al., 2019) build vector representations of words in context, i.e. the same
word will have different vectors when used in different contexts (sentences).
Earlier models (Mikolov et al., 2013b; Pennington et al., 2014) built the so-
called static embeddings: each word was represented by a single vector,
regardless of the context in which it was used.
Despite the fact that static word embeddings are considered obsolete today,
they have several advantages compared to contextualized ones. Firstly, static
embeddings are trained much faster (few hours instead of few days) and do not
require large computing resources (1 consumer-level GPU instead of 8–16 non-
consumer GPUs). Secondly, they have been studied theoretically in a number of
works (Levy and Goldberg, 2014b; Arora et al., 2016; Hashimoto et al., 2016;
Gittens et al., 2017; Tian et al., 2017; Ethayarajh et al., 2019; Allen et
al., 2019; Allen and Hospedales, 2019; Assylbekov and Takhanov, 2019; Zobnin
and Elistratova, 2019) but not much has been done for the contextualized
embeddings (Reif et al., 2019). Thirdly, static embeddings are still an
integral part of deep neural network models that produce contextualized word
vectors, because embedding lookup matrices are used at the input and output
(softmax) layers of such models. Therefore, we consider it necessary to
further study static embeddings.
With all the abundance of both theoretical and empirical studies on static
vectors, they are not fully understood, as this work shows. For instance, it
is generally accepted that good quality word vectors are inextricably linked
with a low-rank approximation of the pointwise mutual information (PMI) matrix
or the Shifted PMI (SPMI) matrix, but we show that vectors of comparable
quality can also be obtained from a low-rank approximation of a Squashed SPMI
matrix (Section 2). Thus, a Squashed SPMI matrix is a viable alternative to
standard PMI/SPMI matrices when it comes to obtaining word vectors.
At the same time, it is easy to interpret the Squashed SPMI matrix with
entries in $[0,1)$ as a connection probabilities matrix for generating a
random graph. Studying the properties of such a graph, we come to the
conclusion that it is a so-called complex network, i.e. it has a strong
clustering property and a scale-free degree distribution (Section 3).
It is noteworthy that complex networks, in turn, are dual to hyperbolic spaces
(Section 4) as was shown by Krioukov et al. (2010). Hyperbolic geometry has
been used to train word vectors (Nickel and Kiela, 2017; Tifrea et al., 2018)
and has proven its suitability — in a hyperbolic space, word vectors need
lower dimensionality than in the Euclidean space.
Thus, to the best of our knowledge, this is the first work that establishes
simultaneously a connection between word vectors, a Squashed SPMI matrix,
complex networks, and hyperbolic spaces. Figure 1 summarizes our work and
serves as a guide for the reader.
Squashed SPMIComplex NetworksWord EmbeddingsHyperbolic SpacesSection 2Section
3Section 4 Figure 1: Summary of our work
### Notation
We let $\mathbb{R}$ denote the real numbers. Bold-faced lowercase letters
($\mathbf{x}$) denote vectors, plain-faced lowercase letters ($x$) denote
scalars, $\langle\mathbf{x},\mathbf{y}\rangle$ is the Euclidean inner product,
$(a_{ij})$ is a matrix with the $ij$-th entry being $a_{ij}$. ‘i.i.d.’ stands
for ‘independent and identically distributed’. We use the sign $\propto$ to
abbreviate ‘proportional to’, and the sign $\sim$ to abbreviate ‘distributed
as’.
Assuming that words have already been converted into indices, let
$\mathcal{W}:=\\{1,\ldots,n\\}$ be a finite vocabulary of words. Following the
setup of the widely used word2vec model (Mikolov et al., 2013b), we use two
vectors per each word $i$: (1) $\mathbf{w}_{i}\in\mathbb{R}^{d}$ when
$i\in\mathcal{W}$ is a center word, (2) $\mathbf{c}_{i}\in\mathbb{R}^{d}$ when
$i\in\mathcal{W}$ is a context word; and we assume that $d\ll n$.
In what follows we assume that our dataset consists of co-occurence pairs
$(i,j)$. We say that “the words $i$ and $j$ co-occur” when they co-occur in a
fixed-size window of words. The number of such pairs, i.e. the size of our
dataset, is denoted by $N$. Let $\\#(i,j)$ be the number of times the words
$i$ and $j$ co-occur, then
$N=\sum_{i\in\mathcal{W}}\sum_{j\in\mathcal{W}}\\#(i,j)$.
## 2 Squashed SPMI and Word Vectors
A well known skip-gram with negative sampling (SGNS) word embedding model of
Mikolov et al. (2013b) maximizes the following objective function
$\textstyle\sum_{i\in\mathcal{W}}\sum_{j\in\mathcal{W}}\\#(i,j)\left(\log\sigma(\langle\mathbf{w}_{i},\mathbf{c}_{j}\rangle)+k\cdot\mathbb{E}_{j^{\prime}\sim
p}[\log\sigma(-\langle\mathbf{w}_{i},\mathbf{c}_{j^{\prime}}\rangle)]\right),$
(1)
where $\sigma(x)=\frac{1}{1+e^{-x}}$ is the logistic sigmoid function, $p$ is
a smoothed unigram probability distribution for words (the authors of SGNS
suggest $p(i)\propto\\#(i)^{3/4}$), and $k$ is the number of negative samples
to be drawn. Interestingly, training SGNS is approximately equivalent to
finding a low-rank approximation of a Shifted PMI matrix (Levy and Goldberg,
2014b) in the form $\log\frac{p(i,j)}{p(i)p(j)}-\log
k\approx\langle\mathbf{w}_{i},\mathbf{c}_{j}\rangle$, where the left-hand side
is the $ij$-th element of the $n\times n$ shifted PMI matrix, and the right-
hand side is an element of a matrix with rank $\leq d$ since
$\mathbf{w}_{i},\mathbf{c}_{j}\in\mathbb{R}^{d}$. This approximation (up to a
constant shift) was later re-derived by Arora et al. (2016); Assylbekov and
Takhanov (2019); Allen et al. (2019); Zobnin and Elistratova (2019) under
different sets of assumptions. In this section we show that constrained
optimization of a slightly modified SGNS objective (1) leads to a low-rank
approximation of the Squashed Shifted PMI ($\sigma$SPMI) matrix, defined as
$\sigma\mathrm{SPMI}_{ij}:=\sigma(\mathrm{PMI}_{ij}-\log k)$.
###### Theorem 2.1
Assuming $0<\langle\mathbf{w}_{i},\mathbf{c}_{j}\rangle<1$, the following
objective function
$\mathcal{L}=\sum_{i\in\mathcal{W}}\sum_{j\in\mathcal{W}}\underbrace{\\#(i,j)\left(\log\langle\mathbf{w}_{i},\mathbf{c}_{j}\rangle+k\cdot\mathbb{E}_{j^{\prime}\sim
p}[\log(1-\langle\mathbf{w}_{i},\mathbf{c}_{j^{\prime}}\rangle)]\right)}_{\ell(\mathbf{w}_{i},\mathbf{c}_{j})},$
(2)
reaches its optimum at
$\langle\mathbf{w}_{i},\mathbf{c}_{j}\rangle=\sigma\mathrm{SPMI}_{ij}$.
###### Proof
Expanding the sum and the expected value in (2) as in Levy and Goldberg
(2014b), and defining $p(i,j):=\frac{\\#(i,j)}{N}$, $p(i):=\frac{\\#(i)}{N}$,
we have
$\mathcal{L}=N\sum_{i\in\mathcal{W}}\sum_{j\in\mathcal{W}}\left[p(i,j)\cdot\log\langle\mathbf{w}_{i},\mathbf{c}_{j}\rangle+p(i)\cdot
p(j)\cdot k\cdot\log(1-\langle\mathbf{w}_{i},\mathbf{c}_{j}\rangle)\right].$ (3)
Thus, we can rewrite the individual objective
$\ell(\mathbf{w}_{i},\mathbf{c}_{j})$ in (2) as
$\ell=N\left[p(i,j)\cdot\log\langle\mathbf{w}_{i},\mathbf{c}_{j}\rangle+p(i)\cdot
p(j)\cdot k\cdot\log(1-\langle\mathbf{w}_{i},\mathbf{c}_{j}\rangle)\right].$
(4)
Differentiating (4) w.r.t. $\langle\mathbf{w}_{i},\mathbf{c}_{j}\rangle$ we
get
$\frac{\partial\ell}{\partial\langle\mathbf{w}_{i},\mathbf{c}_{j}\rangle}=N\left[\frac{p(i,j)}{\langle\mathbf{w}_{i},\mathbf{c}_{j}\rangle}-\frac{p(i)\cdot
p(j)\cdot k}{1-\langle\mathbf{w}_{i},\mathbf{c}_{j}\rangle}\right].$
Setting this derivative to zero gives
$\frac{p(i,j)}{p(i)p(j)}\cdot\frac{1}{k}=\frac{\langle\mathbf{w}_{i},\mathbf{c}_{j}\rangle}{1-\langle\mathbf{w}_{i},\mathbf{c}_{j}\rangle}\quad\Rightarrow\quad\log\frac{p(i,j)}{p(i)p(j)}-\log
k=\log\frac{\langle\mathbf{w}_{i},\mathbf{c}_{j}\rangle}{1-\langle\mathbf{w}_{i},\mathbf{c}_{j}\rangle}\\\
\Leftrightarrow\quad\log\frac{p(i,j)}{p(i)p(j)}-\log
k=\operatorname*{logit}\langle\mathbf{w}_{i},\mathbf{c}_{j}\rangle\\\
\Leftrightarrow\quad\sigma\left(\log\frac{p(i,j)}{p(i)p(j)}-\log
k\right)=\langle\mathbf{w}_{i},\mathbf{c}_{j}\rangle,$ (5)
where $\operatorname*{logit}(q):=\log\frac{q}{1-q}$ is the logit function
which is the inverse of the logistic sigmoid function, i.e.
$\sigma(\operatorname*{logit}(q))=q$. From (5) we have
$\sigma\mathrm{SPMI}_{ij}=\langle\mathbf{w}_{i},\mathbf{c}_{j}\rangle$, which
concludes the proof.
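The following is a minimal numpy sketch, under our own reading of the definitions above, of how the $\sigma$SPMI matrix can be computed from a word–context co-occurrence count matrix:

```python
import numpy as np

def sigma_spmi(counts: np.ndarray, k: int = 1) -> np.ndarray:
    # counts[i, j] = #(i, j); returns sigma(PMI_ij - log k) entrywise.
    N = counts.sum()
    p_ij = counts / N                    # empirical joint probabilities
    p_i = counts.sum(axis=1) / N         # word marginals
    p_j = counts.sum(axis=0) / N         # context marginals
    with np.errstate(divide="ignore"):   # log 0 -> -inf for unseen pairs
        spmi = np.log(p_ij) - np.log(np.outer(p_i, p_j)) - np.log(k)
    return 1.0 / (1.0 + np.exp(-spmi))   # sigma(-inf) = 0 for unseen pairs
```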
###### Remark 1
Since $\sigma(x)$ can be regarded as a smooth approximation of the Heaviside
step function $H(x)$, defined as $H(x)=1$ if $x>0$ and $H(x)=0$ otherwise, it
is tempting to consider a binarized SPMI (BSPMI) matrix
$H(\mathrm{PMI}_{ij}-\log k)$ instead of $\sigma$SPMI. Being a binary matrix,
BSPMI can be interpreted as an adjacency matrix of a graph, however our
empirical evaluation below (Table 1) shows that such strong roughening of the
$\sigma$SPMI matrix degrades the quality of the resulting word vectors. This
may be due to concentration of the SPMI values near zero (Figure 5), while
$\sigma(x)$ is approximated by $H(x)$ only for $x$ away enough from zero.
###### Remark 2
The objective (2) differs from the SGNS objective (1) only in that the former
does not use the sigmoid function (keep in mind that
$\sigma(-x)=1-\sigma(x)$). We will refer to the objective (2) as Nonsigmoid
SGNS.
### Direct Matrix Factorization
Optimization of the Nonsigmoid SGNS (2) is not the only way to obtain a low-
rank approximation of the $\sigma$SPMI matrix. A viable alternative is
factorizing the $\sigma$SPMI matrix with the singular value decomposition
(SVD): $\sigma\mathrm{SPMI}=\mathbf{U}\mathbf{\Sigma}\mathbf{V}^{\top}$, with
orthogonal $\mathbf{U},\mathbf{V}\in\mathbb{R}^{n\times n}$ and diagonal
$\mathbf{\Sigma}\in\mathbb{R}^{n\times n}$, and then zeroing out the $n-d$
smallest singular values, i.e.
$\sigma\mathrm{SPMI}\approx\mathbf{U}_{1:n,1:d}\mathbf{\Sigma}_{1:d,1:d}\mathbf{V}^{\top}_{1:d,1:n},$
(6)
where we use $\mathbf{A}_{a:b,c:d}$ to denote a submatrix located at the
intersection of rows $a,a+1,\ldots,b$ and columns $c,c+1,\ldots,d$ of
$\mathbf{A}$. By the Eckart-Young theorem (Eckart and Young, 1936), the right-
hand side of (6) is the closest rank-$d$ matrix to the $\sigma$SPMI matrix in
Frobenius norm. The word and context embedding matrices can be obtained from
(6) by setting
$\mathbf{W}^{\text{SVD}}:=\mathbf{U}_{1:n,1:d}\sqrt{\mathbf{\Sigma}_{1:d,1:d}}$,
and
$\mathbf{C}^{\text{SVD}}:=\sqrt{\mathbf{\Sigma}_{1:d,1:d}}\mathbf{V}^{\top}_{1:d,1:n}$.
When this is done for a positive SPMI (PSPMI) matrix, defined as
$\max(\mathrm{PMI}_{ij}-\log k,0)$, the resulting word embeddings are
comparable in quality with those from the SGNS (Levy and Goldberg, 2014b).
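A minimal numpy sketch of this factorization route; the choice of `np.linalg.svd` here is ours (the experiments below use the truncated SVD from scikit-learn), and it follows (6) together with the definitions of $\mathbf{W}^{\text{SVD}}$ and $\mathbf{C}^{\text{SVD}}$:

```python
import numpy as np

def svd_embeddings(M: np.ndarray, d: int):
    # Rank-d truncation of M = U Sigma V^T, as in (6).
    U, S, Vt = np.linalg.svd(M, full_matrices=False)
    W = U[:, :d] * np.sqrt(S[:d])              # word vectors W^SVD (one per row)
    C = np.sqrt(S[:d])[:, None] * Vt[:d, :]    # context vectors C^SVD (columns)
    return W, C
```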
### Empirical Evaluation of the $\sigma$SPMI-based Word Vectors
To evaluate the quality of word vectors resulting from the Nonsigmoid SGNS
objective and $\sigma$SPMI factorization, we use the well-known text8 corpus
(http://mattmahoney.net/dc/textdata.html). We ignored words that
appeared fewer than 5 times, resulting in a vocabulary of 71,290 words. The
SGNS and Nonsigmoid SGNS embeddings were trained using our custom
implementation (https://github.com/zh3nis/SGNS). The SPMI matrices were
extracted using the hyperwords tool of Levy et al. (2015) and the truncated
SVD was performed using the scikit-learn library of Pedregosa et al. (2011).
Table 1: Evaluation of word embeddings on the analogy tasks (Google and MSR) and on the similarity tasks (the rest). For the word similarity tasks, the evaluation metric is Spearman’s correlation with the human ratings, while for the word analogy tasks it is the percentage of correct answers.

Method | WordSim | MEN | M. Turk | Rare Words | Google | MSR
---|---|---|---|---|---|---
SGNS | .678 | .656 | .690 | .334 | .359 | .394
Nonsigm. SGNS | .649 | .649 | .695 | .299 | .330 | .330
PMI + SVD | .663 | .667 | .668 | .332 | .315 | .323
SPMI + SVD | .509 | .576 | .567 | .244 | .159 | .107
PSPMI + SVD | .638 | .672 | .658 | .298 | .246 | .207
$\sigma$SPMI + SVD | .657 | .631 | .661 | .328 | .294 | .341
BSPMI + SVD | .623 | .586 | .643 | .278 | .177 | .202
The trained embeddings were evaluated on several word similarity and word
analogy tasks: WordSim (Finkelstein et al., 2002), MEN (Bruni et al., 2012),
M.Turk (Radinsky et al., 2011), Rare Words (Luong et al., 2013), Google
(Mikolov et al., 2013a), and MSR (Mikolov et al., 2013c). We used the Gensim
tool of Řehůřek and Sojka (2010) for evaluation. For answering analogy
questions ($a$ is to $b$ as $c$ is to $?$) we use the 3CosAdd method of Levy
and Goldberg (2014a) and the evaluation metric for the analogy questions is
the percentage of correct answers. We mention here that our goal is not to
beat state of the art, but to compare SPMI-based embeddings (SGNS and
SPMI+SVD) versus $\sigma$SPMI-based ones (Nonsigmoid SGNS and
$\sigma$SPMI+SVD). The results of evaluation are provided in Table 1.
As we can see, the Nonsigmoid SGNS embeddings in general underperform the SGNS
ones but not by a large margin. $\sigma$SPMI shows a competitive performance
among matrix-based methods across most of the tasks. Also, Nonsigmoid SGNS and
$\sigma$SPMI demonstrate comparable performance as predicted by Theorem 2.1.
Although BSPMI is inferior to $\sigma$SPMI, notice that such aggressive
compression as binarization still retains important information on word
vectors.
Figure 2: Spectral distribution of the $\sigma$SPMI-induced graphs (left and
middle columns), and of scale-free random graphs with strong clustering
property (right top: Goh et al. (2001), right bottom: Farkas et al. (2001)).
When generating several random graphs from the same $\sigma$SPMI matrix, their
eigenvalue distributions are visually indistinguishable, thus we display the
results of one run per matrix.
Figure 3: Degree distributions of the $\sigma$SPMI-induced graphs. The axes are on logarithmic scales.

Table 2: Clustering coefficients of the $\sigma$SPMI-induced graphs. For each corpus–window combination we generate ten graphs and report 95% confidence intervals across these ten runs.

 | text8, window $=2$ | text8, window $=5$ | enwik9, window $=2$ | enwik9, window $=5$
---|---|---|---|---
$C$ | $.1341\pm.0006$ | $.1477\pm.0005$ | $.1638\pm.0006$ | $.1798\pm.0004$
$\bar{k}/n$ | $.0014\pm.0000$ | $.0030\pm.0000$ | $.0006\pm.0000$ | $.0012\pm.0000$
## 3 $\sigma$SPMI and Complex Networks
The $\sigma$SPMI matrix has the following property: its entries
$\sigma\text{SPMI}_{ij}\in[0,1)$ can be treated as connection probabilities
for generating a random graph. As usual, by a graph $\mathcal{G}$ we mean a
set of vertices $\mathcal{V}$ and a set of edges
$\mathcal{E}\subset\mathcal{V}\times\mathcal{V}$. It is convenient to
represent graph edges by its adjacency matrix $(e_{ij})$, in which $e_{ij}=1$
for $(i,j)\in\mathcal{E}$, and $e_{ij}=0$ otherwise. The graph with
$\mathcal{V}:=\mathcal{W}$ and
$e_{ij}\sim\text{Bernoulli}(\sigma\mathrm{SPMI}_{ij})$ will be referred to as
$\sigma$SPMI-induced Graph.
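A short sketch, assuming a symmetric $\sigma$SPMI matrix as input, of how such a graph can be sampled; symmetrizing over the upper triangle (our choice) yields an undirected graph:

```python
import numpy as np

def sample_graph(P: np.ndarray, seed: int = 0) -> np.ndarray:
    # P: symmetric matrix of connection probabilities (the sigma-SPMI matrix).
    rng = np.random.default_rng(seed)
    n = P.shape[0]
    upper = np.triu(rng.random((n, n)) < P, k=1)  # e_ij ~ Bernoulli(P_ij), i < j
    return (upper | upper.T).astype(np.int8)      # symmetric 0/1 adjacency matrix
```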
### 3.1 Spectrum of the $\sigma$SPMI-induced Graph
First of all, we look at the spectral properties of the $\sigma$SPMI-induced
Graphs (we define the graph spectrum as the set of eigenvalues of its
adjacency matrix). For this, we extract SPMI matrices from the text8 and enwik9
datasets using the hyperwords tool of Levy et al. (2015). We use the default
settings for all hyperparameters, except the word frequency threshold and
context window size. We ignored words that appeared less than 100 times and
250 times in text8 and enwik9 correspondingly, resulting in vocabularies of
11,815 and 21,104 correspondingly. We additionally experiment with the context
window size 5, which by default is set to 2. We generate random graphs from
the $\sigma$SPMI matrices and compute their eigenvalues using the TensorFlow
library (Abadi et al., 2016), and the above-mentioned threshold of 250 for
enwik9 was chosen to fit the GPU memory (11GB, RTX 2080 Ti). The eigenvalue
distributions are provided in Figure 2.
The distributions seem to be symmetric; however, the shapes of the distributions
are far from resembling the Wigner semicircle law
$x\mapsto\frac{1}{2\pi}\sqrt{4-x^{2}}$, which is the limiting distribution for
the eigenvalues of many random symmetric matrices with i.i.d. entries (Wigner,
1955, 1958). This means that the entries of the $\sigma$SPMI-induced graph’s
adjacency matrix are dependent, otherwise we would observe approximately
semicircle distributions for its eigenvalues. We observe some similarity
between the spectral distributions of the $\sigma$SPMI-induced graphs and of
the so-called complex networks which arise in physics and network science
(Figure 2).
Notice that the connection between human language structure and complex
networks was observed previously by Cancho and Solé (2001). A thorough review
on approaching human language with complex networks was given by Cong and Liu
(2014). In the following subsection we will specify precisely what we mean by
a complex network.
### 3.2 Clustering and Degree Distribution of the $\sigma$SPMI-induced Graph
We will use two statistical properties of a graph – degree distribution and
clustering coefficient. The degree of a given vertex $i$ is the number of
edges that connects it with other vertices, i.e.
$\deg(i)=\sum_{j\in\mathcal{V}}e_{ij}$. The clustering coefficient measures
the average fraction of pairs of neighbors of a vertex that are also neighbors
of each other. The precise definition is as follows.
Let us indicate by $\mathcal{G}_{i}=\\{j\in\mathcal{V}\mid e_{ij}=1\\}$ the
set of nearest neighbors of a vertex $i$. By setting
$l_{i}=\sum_{j\in\mathcal{V}}e_{ij}\left[\sum_{k\in\mathcal{G}_{i};\
j<k}e_{jk}\right],$ we define the local clustering coefficient as
$C(i)=\frac{l_{i}}{{|\mathcal{G}_{i}|\choose 2}}$, and the clustering
coefficient as the average over $\mathcal{V}$:
$C=\frac{1}{n}\sum_{i\in\mathcal{V}}C(i)$.
Let $\bar{k}$ be the average degree per vertex, i.e.
$\bar{k}=\frac{1}{n}\sum_{i\in\mathcal{V}}\deg(i)$. For random binomial graphs,
i.e. graphs with edges
$e_{ij}\,\,{\stackrel{{\scriptstyle\text{iid}}}{{\sim}}}\,\,\mathrm{Bernoulli}(p)$,
it is well known (Erdős and Rényi, 1960) that $C\approx\frac{\bar{k}}{n}$ and
$\deg(i)\,\,\sim\,\,\mathrm{Binomial}(n-1,p)$. A complex network is a graph,
for which $C\gg\frac{\bar{k}}{n}$ and
$p(\deg(i)=k)\propto\frac{1}{k^{\gamma}}$, where $\gamma$ is some constant
(Dorogovtsev, 2010). The latter property is referred to as scale-free (or
power-law) degree distribution.
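A plain-numpy sketch (our own, for illustration) of the two statistics just defined, which suffices to check the criterion $C\gg\bar{k}/n$ on a sampled adjacency matrix:

```python
import numpy as np

def clustering_stats(A: np.ndarray):
    # A: symmetric 0/1 adjacency matrix; returns (C, kbar / n).
    A = A.astype(np.int64)               # avoid overflow in A @ A @ A
    n = A.shape[0]
    deg = A.sum(axis=1).astype(float)
    l = np.diagonal(A @ A @ A) / 2.0     # l_i: edges among the neighbors of i
    pairs = deg * (deg - 1.0) / 2.0      # |G_i| choose 2
    local = np.divide(l, pairs, out=np.zeros(n), where=pairs > 0)
    return local.mean(), deg.mean() / n  # clustering coefficient C, kbar / n
```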
We constructed $\sigma$SPMI-induced Graphs from the text8 and enwik9 datasets
using context windows of sizes 2 and 5 and ignoring words that appeared fewer
than 5 times, and computed their clustering coefficients (Table 2) as well as
degree distributions (Figure 3) using the NetworKit tool (Staudt et al.,
2016). NetworKit uses the algorithm of Schank and Wagner (2005) to compute the
clustering coefficient. As we see, the $\sigma$SPMI-induced graphs are complex
networks, and this brings us to the hyperbolic spaces.
## 4 Complex Networks and Hyperbolic Geometry
Complex networks are “dual” to hyperbolic spaces as was shown by Krioukov et
al. (2010). They showed that any complex network, as defined in Section 3, has
an effective hyperbolic geometry underneath. Apart from this, they also showed
that any hyperbolic geometry implies a complex network: they randomly placed
$n$ points (nodes) into a hyperbolic disk of radius $R$, and used
$p_{ij}:=\sigma\left(c[R-x_{ij}]\right)$ as connection probability for
connecting nodes $i$ and $j$, where $x_{ij}$ is the hyperbolic distance
between $i$ and $j$, and $c$ is a constant. An example of such a random graph is
shown in Figure 4.
Figure 4: Random hyperbolic graph.
Figure 5: SPMI values distribution (top) vs $R-X$.
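A sketch of this construction under our reading of Krioukov et al. (2010): radii are drawn from $\rho(r)=\sinh r/(\cosh R-1)$ by inverse-CDF sampling, angles are uniform, and the pairwise hyperbolic distance follows from the hyperbolic law of cosines. The code is illustrative, not the authors’ implementation.

```python
import numpy as np

def hyperbolic_graph(n: int, R: float, c: float = 1.0, seed: int = 0):
    rng = np.random.default_rng(seed)
    # Inverse-CDF sampling of rho(r): F(r) = (cosh r - 1) / (cosh R - 1).
    r = np.arccosh(1.0 + rng.random(n) * (np.cosh(R) - 1.0))
    theta = rng.uniform(0.0, 2.0 * np.pi, n)
    A = np.zeros((n, n), dtype=np.int8)
    for i in range(n):
        for j in range(i + 1, n):
            dtheta = np.pi - abs(np.pi - abs(theta[i] - theta[j]))
            # Hyperbolic law of cosines; the max() guards against rounding below 1.
            arg = (np.cosh(r[i]) * np.cosh(r[j])
                   - np.sinh(r[i]) * np.sinh(r[j]) * np.cos(dtheta))
            x = np.arccosh(max(arg, 1.0))
            p = 1.0 / (1.0 + np.exp(-c * (R - x)))   # sigma(c * (R - x_ij))
            A[i, j] = A[j, i] = rng.random() < p
    return A
```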
Krioukov et al. (2010) showed that the resulting graph is a complex network.
They established connections between the clustering coefficient $C$ and the
power-law exponent $\gamma$ of a complex network and the curvature of a
hyperbolic space.
Comparing the construction of Krioukov et al. (2010) to the way we generate a
random graph from the $\sigma$SPMI matrix, and taking into account that both
methods produce similar structures (complex networks), we conclude that the
distribution of the SPMI values should be similar to the distribution of
$R-x_{ij}$, i.e. $\text{PMI}_{ij}-\log k\sim R-x_{ij}$. To verify this claim
we compare the distribution of SPMI values with the p.d.f. of a random
variable $R-X$, where $X$ is a hyperbolic distance between two random points
on the hyperbolic disk (the exact form of this p.d.f. is given in Appendix
0.A). $R$ was chosen according to the formula $R=2\ln[8n/(\pi\bar{k})]$
(Krioukov et al., 2010), where $\bar{k}$ is the average degree of the
$\sigma$SPMI-induced Graph. The results are shown in Figure 5. As we can see,
the two distributions are indeed similar, and the main difference is in the
shift: the distribution of $R-X$ is shifted to the left compared to the
distribution of the SPMI values. This allows us to reinterpret pointwise
mutual information as the negative of hyperbolic distance (up to scaling and
shifting).
## 5 Conclusion
It is noteworthy that the seemingly fragmented sections of scientific
knowledge can be closely interconnected. In this paper, we have established a
chain of connections between word embeddings and hyperbolic geometry, and the
key link in this chain is the Squashed Shifted PMI matrix. Claiming that
hyperbolicity underlies word vectors is not novel (Nickel and Kiela, 2017;
Tifrea et al., 2018). However, this work is the first attempt to justify the
connection between hyperbolic geometry and word embeddings. In the course
of our work, we discovered novel objects, the Nonsigmoid SGNS and the Squashed
Shifted PMI matrix, which can be investigated separately in the future.
## Acknowledgements
This work is supported by the Nazarbayev University faculty-development
competitive research grants program, grant number 240919FD3921. The authors
would like to thank Zhuldyzzhan Sagimbayev for conducting preliminary
experiments for this work, and anonymous reviewers for their feedback.
## References
* Abadi et al. (2016) Abadi, M., Barham, P., Chen, J., Chen, Z., Davis, A., Dean, J., Devin, M., Ghemawat, S., Irving, G., Isard, M., et al.: Tensorflow: A system for large-scale machine learning. In: Proceedings of OSDI. pp. 265–283 (2016)
* Allen et al. (2019) Allen, C., Balazevic, I., Hospedales, T.: What the vec? towards probabilistically grounded embeddings. In: Advances in Neural Information Processing Systems. pp. 7465–7475 (2019)
* Allen and Hospedales (2019) Allen, C., Hospedales, T.: Analogies explained: Towards understanding word embeddings. In: International Conference on Machine Learning. pp. 223–231 (2019)
* Arora et al. (2016) Arora, S., Li, Y., Liang, Y., Ma, T., Risteski, A.: A latent variable model approach to pmi-based word embeddings. Transactions of the Association for Computational Linguistics 4, 385–399 (2016)
* Assylbekov and Takhanov (2019) Assylbekov, Z., Takhanov, R.: Context vectors are reflections of word vectors in half the dimensions. Journal of Artificial Intelligence Research 66, 225–242 (2019)
* Bruni et al. (2012) Bruni, E., Boleda, G., Baroni, M., Tran, N.K.: Distributional semantics in technicolor. In: Proceedings of ACL. pp. 136–145. Association for Computational Linguistics (2012)
* Cancho and Solé (2001) Cancho, R.F.I., Solé, R.V.: The small world of human language. Proceedings of the Royal Society of London. Series B: Biological Sciences 268(1482), 2261–2265 (2001)
* Cong and Liu (2014) Cong, J., Liu, H.: Approaching human language with complex networks. Physics of life reviews 11(4), 598–618 (2014)
* Devlin et al. (2019) Devlin, J., Chang, M.W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of NAACL-HLT. pp. 4171–4186 (2019)
* Dorogovtsev (2010) Dorogovtsev, S.: Lectures on Complex Networks. Oxford University Press, Inc., USA (2010)
* Eckart and Young (1936) Eckart, C., Young, G.: The approximation of one matrix by another of lower rank. Psychometrika 1(3), 211–218 (1936)
* Erdős and Rényi (1960) Erdős, P., Rényi, A.: On the evolution of random graphs. Publ. Math. Inst. Hung. Acad. Sci 5(1), 17–60 (1960)
* Ethayarajh et al. (2019) Ethayarajh, K., Duvenaud, D., Hirst, G.: Towards understanding linear word analogies. In: Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. pp. 3253–3262 (2019)
* Farkas et al. (2001) Farkas, I.J., Derényi, I., Barabási, A.L., Vicsek, T.: Spectra of “real-world” graphs: Beyond the semicircle law. Physical Review E 64(2), 026704 (2001)
* Finkelstein et al. (2002) Finkelstein, L., Gabrilovich, E., Matias, Y., Rivlin, E., Solan, Z., Wolfman, G., Ruppin, E.: Placing search in context: The concept revisited. ACM Transactions on information systems 20(1), 116–131 (2002)
* Gittens et al. (2017) Gittens, A., Achlioptas, D., Mahoney, M.W.: Skip-gram- zipf+ uniform= vector additivity. In: Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). pp. 69–76 (2017)
* Goh et al. (2001) Goh, K.I., Kahng, B., Kim, D.: Spectra and eigenvectors of scale-free networks. Physical Review E 64(5), 051903 (2001)
* Hashimoto et al. (2016) Hashimoto, T.B., Alvarez-Melis, D., Jaakkola, T.S.: Word embeddings as metric recovery in semantic spaces. Transactions of the Association for Computational Linguistics 4, 273–286 (2016)
* Krioukov et al. (2010) Krioukov, D., Papadopoulos, F., Kitsak, M., Vahdat, A., Boguná, M.: Hyperbolic geometry of complex networks. Physical Review E 82(3), 036106 (2010)
* Levy and Goldberg (2014a) Levy, O., Goldberg, Y.: Linguistic regularities in sparse and explicit word representations. In: Proceedings of CoNLL. pp. 171–180 (2014a)
* Levy and Goldberg (2014b) Levy, O., Goldberg, Y.: Neural word embedding as implicit matrix factorization. In: Proceedings of NeurIPS. pp. 2177–2185 (2014b)
* Levy et al. (2015) Levy, O., Goldberg, Y., Dagan, I.: Improving distributional similarity with lessons learned from word embeddings. Transactions of the Association for Computational Linguistics 3, 211–225 (2015)
* Luong et al. (2013) Luong, T., Socher, R., Manning, C.: Better word representations with recursive neural networks for morphology. In: Proceedings of CoNLL. pp. 104–113 (2013)
* McCann et al. (2017) McCann, B., Bradbury, J., Xiong, C., Socher, R.: Learned in translation: Contextualized word vectors. In: Advances in Neural Information Processing Systems. pp. 6294–6305 (2017)
* Mikolov et al. (2013a) Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013a)
* Mikolov et al. (2013b) Mikolov, T., Sutskever, I., Chen, K., Corrado, G.S., Dean, J.: Distributed representations of words and phrases and their compositionality. In: Advances in neural information processing systems. pp. 3111–3119 (2013b)
* Mikolov et al. (2013c) Mikolov, T., Yih, W.t., Zweig, G.: Linguistic regularities in continuous space word representations. In: Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. pp. 746–751 (2013c)
* Nickel and Kiela (2017) Nickel, M., Kiela, D.: Poincaré embeddings for learning hierarchical representations. In: Advances in neural information processing systems. pp. 6338–6347 (2017)
* Pedregosa et al. (2011) Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V., Thirion, B., Grisel, O., Blondel, M., Prettenhofer, P., Weiss, R., Dubourg, V., Vanderplas, J., Passos, A., Cournapeau, D., Brucher, M., Perrot, M., Duchesnay, E.: Scikit-learn: Machine learning in Python. Journal of Machine Learning Research 12, 2825–2830 (2011)
* Pennington et al. (2014) Pennington, J., Socher, R., Manning, C.: Glove: Global vectors for word representation. In: Proceedings of EMNLP. pp. 1532–1543 (2014)
* Peters et al. (2018) Peters, M.E., Neumann, M., Iyyer, M., Gardner, M., Clark, C., Lee, K., Zettlemoyer, L.: Deep contextualized word representations. In: Proceedings of NAACL-HLT. pp. 2227–2237 (2018)
* Radinsky et al. (2011) Radinsky, K., Agichtein, E., Gabrilovich, E., Markovitch, S.: A word at a time: computing word relatedness using temporal semantic analysis. In: Proceedings of the 20th international conference on World wide web. pp. 337–346. ACM (2011)
* Řehůřek and Sojka (2010) Řehůřek, R., Sojka, P.: Software Framework for Topic Modelling with Large Corpora. In: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks. pp. 45–50. ELRA, Valletta, Malta (May 2010), http://is.muni.cz/publication/884893/en
* Reif et al. (2019) Reif, E., Yuan, A., Wattenberg, M., Viegas, F.B., Coenen, A., Pearce, A., Kim, B.: Visualizing and measuring the geometry of bert. In: Advances in Neural Information Processing Systems. pp. 8592–8600 (2019)
* Schank and Wagner (2005) Schank, T., Wagner, D.: Approximating clustering coefficient and transitivity. Journal of Graph Algorithms and Applications 9(2), 265–275 (2005)
* Staudt et al. (2016) Staudt, C.L., Sazonovs, A., Meyerhenke, H.: Networkit: A tool suite for large-scale complex network analysis. Network Science 4(4), 508–530 (2016)
* Tian et al. (2017) Tian, R., Okazaki, N., Inui, K.: The mechanism of additive composition. Machine Learning 106(7), 1083–1130 (2017)
* Tifrea et al. (2018) Tifrea, A., Bécigneul, G., Ganea, O.E.: Poincaré glove: Hyperbolic word embeddings. arXiv preprint arXiv:1810.06546 (2018)
* Wigner (1955) Wigner, E.P.: Characteristic vectors of bordered matrices with infinite dimensions. Annals of Mathematics pp. 548–564 (1955)
* Wigner (1958) Wigner, E.P.: On the distribution of the roots of certain symmetric matrices. Annals of Mathematics pp. 325–327 (1958)
* Zobnin and Elistratova (2019) Zobnin, A., Elistratova, E.: Learning word embeddings without context vectors. In: Proceedings of the 4th Workshop on Representation Learning for NLP (RepL4NLP-2019). pp. 244–249 (2019)
## Appendix 0.A Auxiliary Results
###### Proposition 1
Let $X$ be the distance between two points placed uniformly at random
in the hyperbolic disk of radius $R$. The probability density function of
$X$ is given by
$f_{X}(x)=\int_{0}^{R}\int_{0}^{R}\frac{\sinh(x)}{\pi\sqrt{1-A(r_{1},r_{2},x)^{2}}\,\sinh(r_{1})\sinh(r_{2})}\rho(r_{1})\rho(r_{2})\,dr_{1}\,dr_{2},$
(7)
where
$A(r_{1},r_{2},x)=\frac{\cosh(r_{1})\cosh(r_{2})-\cosh(x)}{\sinh(r_{1})\sinh(r_{2})}$,
and $\rho(r)=\frac{\sinh r}{\cosh R-1}$.
The proof is by direct calculation and is omitted due to the page limit.
|
2024-09-04T02:54:54.997176 | 2020-02-27T10:32:47 | 2002.12021 | {
"authors": "Felicitas L\\\"offler, Valentin Wesp, Birgitta K\\\"onig-Ries, Friederike\n Klan",
"full_text_license": null,
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"provenance": "arxiv-papers-0000.json.gz:25910",
"submitter": "Felicitas L\\\"offler",
"url": "https://arxiv.org/abs/2002.12021"
} | arxiv-papers | # Dataset search in biodiversity research:
Do metadata in data repositories reflect
scholarly information needs?
Felicitas Löffler*(1), Valentin Wesp(1), Birgitta König-Ries(1,2,3), Friederike Klan(2,4)

(1) Heinz-Nixdorf Chair for Distributed Information Systems, Department of Mathematics and Computer Science, Friedrich Schiller University Jena, Jena, Germany
(2) Michael-Stifel-Center for Data-Driven and Simulation Science, Jena, Germany
(3) German Center for Integrative Biodiversity Research (iDiv) Halle-Jena-Leipzig, Germany
(4) Citizen Science Group, DLR-Institute of Data Science, German Aerospace Center, Jena, Germany
###### Abstract
The increasing amount of publicly available research data provides the
opportunity to link and integrate data in order to create and prove novel
hypotheses, to repeat experiments or to compare recent data to data collected
at a different time or place. However, recent studies have shown that
retrieving relevant data for data reuse is a time-consuming task in daily
research practice.
In this study, we explore what hampers dataset retrieval in biodiversity
research, a field that produces a large amount of heterogeneous data. We
analyze the primary source in dataset search - metadata - and determine if
they reflect scholarly search interests. We examine if metadata standards
provide elements corresponding to search interests, we inspect if selected
data repositories use metadata standards representing scholarly interests, and
we determine how many fields of the metadata standards used are filled. To
determine search interests in biodiversity research, we gathered 169 questions
that researchers aimed to answer with the help of retrieved data, identified
biological entities and grouped them into 13 categories. The categories were
evaluated with nine biodiversity scholars who assigned one of the types to
pre-labeled biological entities in the questions.
Our findings indicate that environments, materials and chemicals, species,
biological and chemical processes, locations, data parameters and data types
are important search interests in biodiversity research. The comparison with
existing metadata standards shows that domain-specific standards cover search
interests quite well, whereas general standards do not explicitly contain
elements that reflect search interests. We inspect metadata from five large
data repositories. Our results confirm that metadata currently poorly reflect
search interests in biodiversity research. From these findings, we derive
recommendations for researchers and data repositories how to bridge the gap
between search interest and metadata provided.
*Email: <EMAIL_ADDRESS>
Keywords: semantic search, query expansion, biological data, Life Sciences,
biodiversity.
## Introduction
Scientific progress in biodiversity research, a field dealing with the
diversity of life on earth - the variety of species, genetic diversity,
diversity of functions, interactions and ecosystems [idiv, 2019], is
increasingly achieved by the integration and analysis of heterogeneous
datasets [GBIF, 2018, Culina et al., 2018]. Therefore, locating and finding
proper data for synthesis is a key challenge in daily research practice.
Datasets can differ in format and size. Interesting data is often scattered
across various repositories focusing on different domains. In a survey
conducted by the Research Data Alliance (RDA) Data Discovery Group [SiriJodha
Khalsa, 22z1], 35% of the 98 participating repositories stated that they host
data from Life Science and 34% indicated they cover Earth Science. All of
these are potentially of interest to biodiversity researchers.
However, the offered search services at public data providers do not seem to
support scholars effectively. A study by Kacprzak et al. [Kacprzak et al.,
2018] reports that 40% of the users who had sent data search requests to two
open data portals said that they could not find the data they were
interested in and thus directly requested the data from the repository
manager. In several studies, ecologists report on the difficulties they had
when looking for suitable datasets to reuse [Parker et al., 2016] [Ramakers et
al., 2018] [Culina et al., 2018]. Scholars from research projects we are
involved in also complain that data discovery is a time-consuming task. They
have to search in a variety of data repositories with several different search
terms to find data about species, habitats, or processes. Thus, there is a
high demand for new techniques and methods to better support scholars in
finding relevant data.
In this study, we explore what hampers data set retrieval in biodiversity
research. We analyze two building blocks in retrieval systems: _information
needs (user queries)_ and underlying _data_. We want to find out how large the
gap is between scholarly search interests and provided data. In order to
identify scholarly search interests, we analyzed _user questions_. In contrast
to user queries, which are usually formulated in a few keywords, questions
represent a search context, a more comprehensive information need.
Characteristic terms or phrases in these textual resources can be labeled and
classified to identify biological entities [Kilicoglu et al., 2018, Nentidis
et al., 2017]. _Scientific data_ are not easily accessible by classical text
retrieval mechanisms as they were mainly developed for unstructured textual
resources. Thus, effective data retrieval heavily relies on the availability
of proper _metadata_ (structured information about the data) describing
available datasets in a way that enables their _Findability_ , one principle
to ensure FAIR data [Wilkinson et al., 2016]. A survey conducted by the
Research Data Alliance (RDA) Data Discovery Group points out that 58% of the
98 participating data repositories index all metadata, 52% index partial
metadata, and only 33% integrate data dictionaries or variables [SiriJodha
Khalsa, 22z1].
We argue that _Findability_ at least partially depends on how well metadata
reflect scholarly information needs. Therefore, we propose the following
layered approach:
(A) At first, we identified main entity types (categories) that are important
in biodiversity research. We collected $169$ questions provided by 73 scholars
of three large and very diverse biodiversity projects in Germany, namely
_AquaDiva_ [AquaDiva, 2020], GFBio - The German Federation for Biological Data
[GFBio, 2020] and iDiv - The German Research Center for Integrative
Biodiversity Research [idiv, 2019]. Two authors of this publication labeled
and grouped all noun entities into $13$ categories (entity types), which were
identified in several discussion rounds. Finally, all proposed categories were
evaluated with biodiversity scholars in an online survey. The scholars
assigned the proposed categories to important phrases and terms in the
questions (Section “A - Information Needs in the Biodiversity Domain”).
(B) Most data providers use keyword-based search engines returning data sets
that exactly match keywords entered by a user [SiriJodha Khalsa, 22z1]. In
dataset search, the main source are metadata that contain structured entries
on measurements, data parameters or species observed rather than textual
descriptions. It depends on the metadata schema used how sparse or rich the
description turns out to be and which facets are provided for filtering.
Therefore, we inspected common metadata standards in the Life Sciences and
analyzed to what extent their metadata schemes cover the identified
information categories (Section “B - Metadata Standards in the Life
Sciences”).
(C) There are several data repositories that take and archive scientific data
for biodiversity research. According to _Nature’s_ list of recommended data
repositories [Nature, 2018], repositories such as _Dryad_ [Dryad, 2019],
_Zenodo_ [Zenodo, 2019a] or _Figshare_ [Figshare, 2019] are generalist
repositories and can handle different types of data. Data repositories such as
_Pangaea_ [Pangaea, 2019a] (environmental data) or _GBIF_ [GBIF, 2020]
(taxonomic data) are domain specific and only take data of a specific format.
We harvested and parsed all publicly available metadata from these
repositories and analyzed whether they utilize metadata schemes with elements
reflecting search interests. For _GBIF_ , we concentrated on datasets only, as
individual occurrence records are not available in the metadata API. We
explored how many fields of the respective schemas are actually used and
filled (Section “C - Metadata Usage in Selected Data Repositories”).
(D) Finally, we discuss the results and outline how to consider and address
user interests in metadata (Section “D - Discussion”).
In order to foster reproducibility, questions, scripts, results, and the
parsed metadata are publicly available: https://github.com/fusion-
jena/QuestionsMetadataBiodiv
The structure of the paper is as follows: The first part “Definitions” focuses
on the clarification of various terms. This is followed by sections that
explain basics in Information Retrieval (“Background”) and “Related Work”. The
fourth section “Objectives” gives an overview of our research idea. The
following four sections contain the individual research contributions
described above. Each of these sections describes the respective methodology
and results. Finally, section “Conclusion” summarizes our findings.
## Definitions
Since dataset retrieval is still a largely unexplored research field [Chapman et
al., 2019], few definitions exist describing what it comprises and how it can
be characterized. Here, we briefly introduce an existing definition and add
our own definition from the Life Sciences’ perspective.
Chapman et al [Chapman et al., 2019] define a dataset as “A collection of
related observations organized and formatted for a particular purpose”. They
further characterize a dataset search as an application that “involves the
discovery, exploration, and return of datasets to an end user.” They
distinguish between two types: (a) a basic search in order to retrieve
individual datasets in data portals and (b) a constructive search where
scholars create a new dataset out of various input datasets in order to
analyze relationships and different influences for a specific purpose.
From our perspective, this definition of a dataset is a bit too restricted.
All kinds of scientific data such as experimental data, observations,
environmental and genome data, simulations and computations can be considered
as datasets. We therefore extend the definition of Chapman et al [Chapman et
al., 2019] as follows:
###### Definition 1
A dataset is a collection of scientific data including primary data and
metadata organized and formatted for a particular purpose.
We agree with Chapman et al.’s definition of dataset search. We use _Dataset
Search_ and _Dataset Retrieval_ synonymously and define it as follows:
###### Definition 2
Dataset Retrieval comprises the search process, the ranking and return of
scientific datasets.
Unger et al. [Unger et al., 2014] introduced three dimensions to take into
account in Question Answering, namely the _User_ and _Data_ perspectives as well
as the _Complexity_ of a task. We argue that these dimensions can also be
applied in dataset retrieval.
### User Perspective
In conventional retrieval systems users’ search interests are represented as a
few keywords that are sent to the system as a search query. Keywords are
usually embedded in a search context that can be expressed in a full sentence
or a question.
In order to understand what users are looking for, a semantic analysis is
needed. _Information Extraction_ is a technique from text mining that
identifies main topics (also called entity types) occurring in unstructured
text [Jurafsky and Martin, 2008]. Noun entities are extracted and categorized
based on rules. Common, domain-independent entity types are for instance
_Person_ , _Location_ , and _Time_. When it comes to specific domains,
additional entity types corresponding to core user interests need to be taken
into consideration. In bio-medicine, according to [Roberts et al., 2017], the
main topics are data type, disease type, biological process and organism. In
new research fields such as biodiversity research these main entity types
still need to be identified in order to get insights into users’ information
needs and to be able to later adapt systems to user requirements.
### Data Perspective
From the data perspective, a dataset search can be classified into two types
based on the source of data: _primary data_ and _metadata_.
###### Definition 3
Primary data are scientific raw data. They are the result of scientific
experiments, observations, or simulations and vary in type, format, and size.
###### Definition 4
Metadata are structured, descriptive information of primary data and answer
the W-questions: _What_ has been measured, by _Whom_, _When_, _Where_, and _Why_?
Metadata are created for different purposes such as search, classification, or
knowledge derivation.
Dataset retrieval approaches focussing on primary data as source data have to
deal with different data formats such as tabular data, images, sound files, or
genome data. This requires specific query languages such as QUIS [Chamanara et
al., 2017] to overcome the ensuing heterogeneity and is out of scope of this
paper. Here, we solely focus on dataset retrieval approaches that use metadata
as input for search. A variety of metadata standards in the Life Sciences are
introduced in Section “B - Metadata Standards in the Life Sciences”.
### Complexity
Scholarly search interests are as heterogeneous as data are. Information needs
can range from specific questions where users expect datasets to contain the
complete answer to broader questions that are answered partially only by
datasets. Furthermore, users construct new datasets out of various input
datasets. Unger et al. [Unger et al., 2014] characterize the complexity in
retrieval tasks along four dimensions: _Semantic complexity_ describes how
complex, vague, and ambiguous a question is formulated and if heterogeneous
data have to be retrieved. _Answer locality_ denotes if the answer is
completely contained in one dataset or if parts of various datasets need to be
composed or if no data can be found to answer the question. _Derivability_
describes if the answer contains explicit or implicit information. The same
applies for the question. If broad or vague terms appear in the question or
answer, additional sources have to be integrated to enrich both, question
and/or answer. _Semantic tractability_ denotes if the natural language
question can be transformed into a formal query.
In this work, we do not further explore the complexity of questions. We focus
on the analysis of user interests and metadata only.
## Background
This section provides background information on which parts are involved in a
search process, how the system returns a result based on a user’s query and
what evaluation methods and metrics exist in Information Retrieval.
### The Retrieval Process
A retrieval system consists of a collection of documents (a _corpus_) and a
user’s information needs that are described with a few keywords (_query_). The
main aim of the retrieval process is to return a ranked list of documents that
match a user’s query. The architecture of a retrieval system is depicted in
Figure (1): If the document corpus is not given, an optional _Crawling
Process_ has to be run beforehand to retrieve and collect documents [Baeza-
Yates and Ribeiro-Neto, 2008]. The _Indexing Process_ comprises pre-processing
steps such as stopword removal, stemming, and spell checks important to clean
documents from unnecessary information and to analyze only those terms that
truly represent the content of a document. Afterwards, the system counts word
frequencies within a document and across all documents. The result is an
inverted index. Similar to a book index, this is a list of terms together with
the number of occurrences of each term in each document and across all
documents. These statistics, generated regularly in background processes, form
the basis for a fast access to the documents at search time. The actual search
takes place in the _Retrieval and Ranking Process_ whenever a user sends a
query to the system and results in a ranked result set being returned to the
user.
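As a toy illustration of the indexing step (the stopword list and the tokenization here are deliberately simplistic and our own):

```python
from collections import Counter, defaultdict

STOPWORDS = {"the", "a", "of", "in", "and"}   # illustrative stopword list

def build_inverted_index(docs):
    # Inverted index: term -> {doc_id: term frequency}, after pre-processing.
    index = defaultdict(dict)
    for doc_id, text in enumerate(docs):
        tokens = [t for t in text.lower().split() if t not in STOPWORDS]
        for term, tf in Counter(tokens).items():
            index[term][doc_id] = tf
    return index
```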
Based on the underlying _Retrieval Model_ , different ranking functions have
been developed to produce a score for the documents with respect to the query.
Top-scored documents are returned first. In larger corpora, paging functions
allow a subsequent retrieval of further documents. Classical retrieval models
are for instance: the _Boolean Model_ [Manning et al., 2008] where only
documents are returned that exactly match a query. In this model all documents
in the retrieved set are equally relevant and therefore it is not considered
as a ranking algorithm. It is often used in search engines in combination with
further retrieval models such as the _Vector Space Model_ [Manning et al.,
2008]. Here, documents are represented by vectors that consist of term
weights. The similarity of documents and queries is determined by computing
the distance between the vectors. _Probabilistic Models_ [Manning et al.,
2008] are based on computations of the probability of a document belonging to
the relevant set. For languages where word boundaries are not given, e.g., in
Eastern Asian languages, _Language Models_ [Jurafsky and Martin, 2000] have to
be applied to get a mathematical representation of the documents. The system
analyzes the text documents by means of character-based sliding windows (_n-
grams_) to determine word boundaries and compute statistics. All these
classical retrieval models are keyword-based. Thus, retrieval systems only
return documents that exactly match the user query.
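For illustration, a compact sketch of the Vector Space Model using scikit-learn’s tf-idf vectorizer; the example documents and query are invented:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = ["species abundance in grassland plots",
        "soil moisture and temperature measurements",
        "grassland species diversity over time"]
vectorizer = TfidfVectorizer()
D = vectorizer.fit_transform(docs)               # tf-idf document vectors
q = vectorizer.transform(["grassland species"])  # query in the same space
ranking = cosine_similarity(q, D)[0].argsort()[::-1]  # best-matching doc first
```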
Figure 1: The architecture of an Information Retrieval system based on [Baeza-
Yates and Ribeiro-Neto, 2008]: An optional _Crawling Process_ (blue, dashed
line) gathers documents or web pages. In the _Indexing Process_ (blue) the
documents are pre-processed before an index can be established. The _Retrieval
Process_ (orange) comprises the transformation of a user query into a format
the search engine understands before the actual search and ranking takes
place. Finally, users receive a ranked list of documents that match their
query.
### Evaluation in Information Retrieval
When setting up a retrieval system, various design decisions influencing
different parts of the system have to be made. Examples of such decisions are
whether to stem terms in the pre-processing phase or which terms to include in
the stopword list.
Numerous evaluation measures have been developed to determine the
_effectiveness_ of the systems, i.e., the accuracy of the result returned by a
given retrieval algorithm. For this purpose, a test collection is required
that consists of three things [Manning et al., 2008]: (1) a corpus of
documents, (2) representative information needs expressed as queries, (3) a
set of relevance judgments provided by human judges containing assessments of
the relevance of a document for given queries. If judgments are available for
the entire corpus they serve as baseline (“gold standard”) and can be used to
determine how many relevant documents a search system finds for a specific
topic.
User queries should be representative for the target domain. Queries are
either obtained from query logs of a similar application or domain users are
asked to provide example queries [Croft et al., 2009]. The number of example
questions influences the evaluation result. TREC (Text REtrieval Conference)
is a long-running, very influential annual Information Retrieval competition
that considers different retrieval issues in a number of _Tracks_ , e.g.,
Genomics Track or Medical Track (https://trec.nist.gov/). Various TREC
experiments have shown that the number of queries used for the evaluation
matters more than the number of documents judged per query [Croft et al.,
2009]. Therefore, TREC experiments usually consist of around 150 queries (or
so-called “topics”) per track.
Common evaluation metrics with respect to effectiveness are _Precision and
Recall (PR)_ , _F-Measure_ and _Mean Average Precision (MAP)_ [Manning et al.,
2008]. Precision denotes which fraction of the documents in the result set is
relevant for a query, whereas recall describes which fraction of relevant
documents was successfully retrieved. Both metrics are based on binary
judgments, i.e., raters can only determine, if a document is relevant or non-
relevant. The F-Measure is the harmonic mean of Precision and Recall.
Precision and Recall can only be used, when a gold standard is provided
containing the total number of documents in a corpus that are relevant for a
query. However, in applied domains where corpora are established specifically
for a particular research field, gold standards are usually not available.
Therefore, with the recall being unknown, _MAP_ only requires to get ratings
for the TopN-ranked documents to compute an average precision. The assumption
here is that users are only interested in the first entries of a search result
and usually do not navigate to the last page. The top-ranked documents get
higher scores than the lower ranked ones [Croft et al., 2009]. Another metric
proposed by Järvelin and Kekäläinen[Järvelin and Kekäläinen, 2002] is the
_Discounted Cumulated Gain (DCG)_ , a metric that uses a Likert-scale as
rating scheme and allows non-binary ratings. All entries of the scheme should
be equally distributed. However, DCG does not penalize for wrong results but
only increases the scores of top-ranked documents. Other evaluation criteria
concentrate on the _efficiency_ (e.g., the time, memory and disk space
required by the algorithm to produce the ranking [Croft et al., 2009]), _user
satisfaction_ on the provided result set and _visualization_ [Hearst, 2011].
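The following sketch (ours, for illustration) computes the per-query ingredients of these metrics; averaging `average_precision` over all queries gives MAP, and `dcg` follows Järvelin and Kekäläinen’s formulation with a $\log_2$ discount from rank 2 onward:

```python
import math

def precision_recall(retrieved, relevant):
    # Binary judgments: fraction of retrieved docs that are relevant, and
    # fraction of relevant docs that were retrieved.
    hits = len(set(retrieved) & set(relevant))
    return hits / len(retrieved), hits / len(relevant)

def average_precision(retrieved, relevant):
    # Per-query average precision over a ranked result list.
    hits, total = 0, 0.0
    for rank, doc in enumerate(retrieved, start=1):
        if doc in relevant:
            hits += 1
            total += hits / rank
    return total / len(relevant)

def dcg(gains):
    # gains: graded relevance judgments in ranked order.
    return gains[0] + sum(g / math.log2(i) for i, g in enumerate(gains[1:], start=2))
```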
The missing aspect in evaluation approaches in Information Retrieval is the
analysis of the underlying documents. The data source in classical Information
Retrieval systems is unstructured text whereas Dataset Retrieval is based on
structured metadata files. Hence, retrieval success depends on the metadata
format used, the experience of the curator, and the willingness of the
individual scholar to describe data properly and thoroughly. This structured
information could be used in the search index. For instance, if researchers
provide information such as taxon, length, location or experimental method in
the metadata explicitly, a search application could offer a search over a
specific metadata field. Thus, we argue that there is a need to analyze the
given metadata and to quantify the gap between a scholar’s actual search
interests and the metadata primarily used in search applications.
## Related Work
This section focuses on approaches that analyze, characterize and enhance
dataset search. We discuss studies identifying users’ information needs and
introduce existing question corpora. In a second part, we describe approaches
that aim at improving dataset search.
### User Interests
In order to understand user behavior in search, query logs or question corpora
are valid sources. Kacprzak et al. [Kacprzak et al., 2018] provide a
comprehensive log analysis of three government open data portals from the
United Kingdom (UK), Canada, and Australia and one open data portal with
national statistics from the UK. 2.2 million queries from logs provided by the
data portals (internal queries) and 1.1 million queries issued to external web
search engines (external queries) were analyzed. Two authors manually
inspected a sample set of 665 questions and determined the main query topics.
Most queries were assigned to Business and Economy (20$\%$ internal queries,
10$\%$ external queries) and Society (14.7$\%$ internal queries, 18$\%$
external queries). Besides query logs, Kacprzak et al [Kacprzak et al., 2018]
also analyzed explicit requests by users for data via a form on the website. Here,
users provided title and description which allowed the authors to perform a
deeper thematic analysis on 200 manually selected data requests. It revealed
that geospatial (77.5$\%$) and temporal (44$\%$) information occurred most,
often together with a specific granularity (24.5$\%$), e.g., “hourly weather
and solar data set” or “prescription data per hospital”. Users were also asked
why they had requested data explicitly, and more than 40$\%$ indicated that
they were not able to find relevant data via the provided search.
In the Life Sciences, Dogan et al [Islamaj Dogan et al., 2009] inspected one
month of log data with more than 58 million user queries from PubMed [PubMed,
2019], a platform providing biomedical literature. They randomly selected
10,000 queries for a semantic analysis. Seven annotators categorized the
queries along 16 given categories. They distinguished between bibliographic
queries (44$\%$) containing information such as journal name, author name, or
article title and non-bibliographic queries with domain specific categories.
The most frequent category over all questions was “Author Name” (36$\%$)
followed by “Disorder” (20$\%$) comprising diseases, abnormalities,
dysfunctions etc., and “Gene/Protein” (19$\%$). Further main topics were
abbreviations (mostly from genes/proteins) and chemicals/drugs.
A large study on user needs in biodiversity research was conducted in the
GBIF community in 2009 [Faith et al., 2013, Ariño et al., 2013]. The aim
was to determine what GBIF users need in terms of primary data and to identify
data gaps in the current data landscape at that time. More than 700
participants from 77 countries took part in the survey. It revealed that
scholars used retrieved primary data for analyzing species diversity,
taxonomy, and life histories/ phenology. That mainly required “taxon names,
occurrence data and descriptive data about the species” [Ariño et al., 2013].
As biodiversity is a rapidly changing research field, the authors recommend to
repeat content need assessments in frequent intervals [Faith et al., 2013].
Apart from query logs, question corpora are another source for identifying
search interests. Usually, questions are collected from experts of a
particular research field and important terms representing main information
needs are labeled with categories or so-called _entity types_. These manually
generated annotations help in understanding what information users are
interested in, and in developing tools and services that automatically extract
these interests from text (Text Mining), retrieve relevant data (Information
Retrieval), or provide an exact answer for the information need (Question
Answering).
In the Life Sciences, question corpora for text retrieval have been mainly
established in the medical and biomedical domains. One of the largest corpora
in medicine is the Consumer Health Corpus [Kilicoglu et al., 2018], a
collection of email requests (67%) received by the U.S. National Library of
Medicine (NLM) customer service and search query logs (33%) of MedlinePlus, a
consumer-oriented NLM website for health information. The final corpus
consists of 2614 questions and has been integrated into the Medical Question
Answering Task at TREC 2017 LiveQA [Abacha et al., 2017]. Six trained domain
experts were involved in the annotation tasks to manually label information.
The experts had to indicate named entities, e.g., problem, anatomy or
measurement and labeled question topics such as the cause of a disease or
complications (longer term effects of a disease).
A common question corpus in biomedicine is the Genomics Track at TREC
conferences [Hersh and Voorhees, 2009]. The topics of the retrieval tasks are
formulated as natural language questions and contain pre-labeled main
categories, e.g., _What [GENES] are involved in insect segmentation?_. A
further large question corpus in biomedicine is the question corpus created
for the BioASQ challenge [Nentidis et al., 2017], an annual challenge for
researchers working on text mining, machine learning, information retrieval,
and question answering. The tasks are split into three parts: (1) the
extraction of main entities and their linkage with ontological concepts
(semantic annotation), (2) the translation of natural language queries into
RDF triples, and (3) the retrieval of the exact answer to a natural language
query. The question corpus was created and annotated by a team of $10$
experts, selected with the goal to cover different ages and complementary
expertise in the fields of medicine, biology, and bioinformatics
[Polychronopoulos et al., 2013]. Each expert was asked to formulate $50$
questions in English that reflect “real-life information needs”. However, the
type of questions to be formulated was restricted, e.g., the experts were
instructed to provide questions of certain types typically considered in
question answering systems (yes/no, factoid, etc.). These restrictions are
justified to a certain degree since they affect the applicability of the
resulting corpus for evaluation purposes of question answering approaches.
However, they have an impact on which questions are formulated and how. This
will likely lead to a bias in the question corpus.
Another question corpus in the biomedical domain is the benchmark developed
for the 2016 bioCADDIE Dataset Retrieval Challenge [Roberts et al., 2017].
This benchmark was explicitly created for the retrieval of datasets based on
metadata and includes 137 questions, 794,992 datasets gathered from different
data portals in XML structure, and relevance judgments for 15 questions.
Similar to the BioASQ challenge, domain experts got instructed on how to
create questions. Based on templates, the question constructors formulated
questions using the most desired entity types, namely data type, disease type,
biological process, and organism.
At present, to the best of our knowledge, there is neither a public log
analysis nor a question corpus available for biodiversity research. In order
to understand genuine user interests and to improve current dataset retrieval
systems, unfiltered information needs are crucial. Therefore, collecting
current search interests from scholars is the first step in our top-down
approach presented in Section “Objectives”.
### Dataset Search
A study by the RDA Data Discovery Group points out [SiriJodha Khalsa, 22z1]
that most data repositories offer search applications based on metadata and
utilize one of the existing and widely spread search engines for data access,
e.g., _Apache Solr_ (http://lucene.apache.org/solr/) or _elasticsearch_
(https://www.elastic.co/products/elasticsearch). Large data repositories such
as _GBIF_ [GBIF, 2019], _PANGAEA_ [Pangaea, 2019b] or _Zenodo_ [Zenodo, 2019b]
also use _elasticsearch_ and offer public search services. _Apache Solr_ and
_elasticsearch_ are both keyword-based and return datasets that exactly match
a user’s entered query terms. If the desired information need is not
explicitly mentioned in the metadata, the search will fail.
In recent years, a variety of approaches have emerged to improve dataset
search. A common approach is to annotate metadata with entities from
_schema.org_ (https://schema.org). Favored by Google [Google, 2019] and the
RDA Discovery Task Group [RDA, 2019], the idea is to add descriptive
information to structured data such as XML or HTML in order to increase
findability and interoperability. These additional attributes help search
engines to better disambiguate terms occurring in text. For example, Jaguar
could be a car, an animal or an operating system. By means of _schema.org_
entities, data providers can define the context explicitly. Numerous
extensions for specific domains have been developed or are still in
development, e.g., _bioschemas.org_ [Michel and Community, 2018] for the Life
Sciences. Since Google launched its beta version of a dataset search in Fall
2018 (https://toolbox.google.com/datasetsearch), _schema.org_ entities got
more and more attention. Hence, data centers such as _PANGAEA_ [Pangaea,
2019a] or _Figshare_ [Figshare, 2019] are increasingly incorporating
_schema.org_ entities in their dataset search.
Other approaches favor an improved metadata schema. Pfaff et al [Pfaff et al.,
2017] introduce the Essential Annotation Schema for Ecology (EASE). The schema
was primarily developed in workshops and intensive discussions with scholars
and aims to support scientists in search tasks. The MIBBI project [Taylor et
al., 2008] (now known as _BioSharing_ or _FAIRSharing_ portal -
https://fairsharing.org/) also recognized that only improved metadata allow
information seekers to retrieve relevant experimental data. They propose a
harmonization of minimum information checklists in order to facilitate data
reuse and to enhance data discovery across different domains. Checklist
developers are advised to consider “’cross-domain’ integrative
activities” [Taylor et al., 2008] when creating and maintaining checklists. In
addition, standards are supposed to contain information on formats (syntax),
vocabularies and ontologies used.
The latter points to an increasing interest in semantic techniques that have
emerged over the past decade. Vocabularies such as the Data Catalog Vocabulary
(DCAT) [Maali and Erickson, 2014] or the Vocabulary of Interlinked Datasets
(VoID) [Alexander et al., 2011] aim to describe datasets semantically in RDF
[Brickley and Guha, 2014] or OWL [W3C, 2012] format based on subject,
predicate, and object triples. Fully semantic approaches such as BioFED
[Hasnain et al., 2017] offer a single-point-of-access to 130 SPARQL endpoints
in the Life Sciences. They integrate a variety of heterogeneous biomedical
ontologies and knowledge bases. Each data source is described by VoID
descriptors that facilitate federated SPARQL query processing. The user
interface permits simple and complex SPARQL queries and provides support in
creating federated SPARQL queries. The result set contains provenance
information, i.e., where the answer has been found, “the number of triples
returned and the retrieval time” [Hasnain et al., 2017]. However, improvements
in the user interface still remain necessary. As BioFED is mainly focused on a
linked data approach, it requires all data sources to be stored in semantic
formats and users to have at least basic SPARQL knowledge. In contrast, Kunze
and Auer [Kunze and Auer, 2013] consider the search process in their search
over RDF datasets as an exploratory task based on semantic facets. Instead of
SPARQL queries or keyword-based user interfaces, they provide parameters for
filtering. This allows an unambiguous search and returns relevant datasets
that match the provided filter parameters.
Other federated approaches outside semantic techniques attempt to align
heterogeneous data sources in one search index. That allows the use of
conventional search engines and keyword-based user interfaces: DataONE is a
project aiming to provide access to earth and environmental data provided by
multiple member repositories [Cook et al., 2012]. Participating groups can
provide data in different metadata formats such as EML, DataCite or FGDC
[DataONE, 2019a]. DataONE is currently working on quantifying FAIR [DataONE,
2019b]. Their findability check determines if specific metadata items such as
title, abstract or publication date are present. For title and abstract, they
additionally check the length and content. Based on these criteria, they
evaluated their data and found that, with respect to _Findability_, around 75% of
the available metadata fulfilled the self-defined criteria. The German
Federation for Biological Data (GFBio) [Diepenbroek et al., 2014] is a
national infrastructure for research data management in the green Life
Sciences and provides a search over more than six million heterogeneous
datasets from environmental archives and collection data centers. It was
extended to a semantic search [Löffler et al., 2017] that allows a search over
scientific names, common names, or other synonyms. These related terms are
obtained from GFBio’s Terminology Service [Karam et al., 2016] and are added
in the background to a user’s query.
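The principle behind such query expansion can be sketched in a few lines; the synonym table below is a hypothetical stand-in for a terminology service lookup, not the actual GFBio API.

```python
# Hypothetical synonym table standing in for a terminology service lookup.
SYNONYMS = {
    "picea abies": ["norway spruce", "european spruce"],
}

def expand_query(query):
    """Extend a keyword query with scientific names, common names and synonyms,
    so that a keyword-based engine also matches the related terms."""
    terms = [query.lower()] + SYNONYMS.get(query.lower(), [])
    return " OR ".join(f'"{t}"' for t in terms)

print(expand_query("Picea abies"))
# "picea abies" OR "norway spruce" OR "european spruce"
```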
As described above, numerous approaches have been proposed and developed to
improve dataset search. However, what is lacking is a comprehensive analysis
on what exactly needs to be improved and how large the actual gap is between
user requirements and given metadata.
## Objectives
Current retrieval evaluation methods are basically focused on improving
retrieval algorithms and ranking. Therefore, question corpora and documents
are taken as given and are not questioned. However, if the underlying data do
not contain the information users are looking for, the best retrieval
algorithm will fail. We argue, in dataset search, metadata, the basic source
for dataset applications, need to be adapted to match users’ information
needs.
We want to find out how large the gap in biodiversity research is between
actual user needs and provided metadata and how to overcome this obstacle.
Thus, the following analysis aims to explore:
* •
What are genuine user interests in biodiversity research?
* •
Do existing metadata standards reflect information needs of biodiversity
scholars?
* •
Are metadata standards utilized by data repositories useful for data
discovery? How many metadata fields are filled?
* •
Do common metadata fields contain useful information?
We take a top-down approach starting from scholars’ search interests, then
looking at metadata standards and finally inspecting the metadata provided in
selected data repositories.
(A) First, we generate an annotated question corpus for the biodiversity
domain: We gather questions from scholars, explore the questions and identify
information categories. In an online evaluation, domain experts assign these
categories to terms and phrases of the questions (Section “A - Information
Needs in the Biodiversity Domain”).
(B) We inspect different metadata standards in the Life Sciences and compare
the metadata elements to the identified search categories from (A) (Section “B
- Metadata Standards in the Life Sciences”).
(C) We analyze the application programming interfaces (APIs) of selected data
repositories to figure out what metadata standards are used and how many
elements of a metadata schema are utilized for data description (Section “C -
Metadata Usage in Selected Data Repositories”).
(D) We discuss how to bridge the gap between users’ search interests and
metadata. We propose an approach to overcome the current obstacles in dataset
search (Section “D - Discussion”).
## A - Information Needs in the Biodiversity Domain
Question corpora are common sources for getting an impression of what users in
a particular domain are interested in. Therefore, we asked biodiversity
scholars to provide questions that are specific for their research. We
analyzed the questions and identified search topics that represent scholarly
information needs in this domain.
### Methodology
The following subsection describes the methodology in detail, divided into
four paragraphs.
#### Questions:
We gathered questions in three large biodiversity projects, namely _CRC
AquaDiva_ [AquaDiva, 2020], _GFBio_ [GFBio, 2020] and _iDiv_ [idiv, 2019]. We
explicitly requested fully expressed questions to capture the keywords in
their search context. These projects vary widely in their overall setting, the
scientists and disciplines involved and their main research focus. Together,
they provide a good and rather broad sample of current biodiversity research
topics. In total, 73 scholars with various research backgrounds in biology
(e.g., ecology, bio-geochemistry, zoology and botany) and related fields
(e.g., hydro-geology) provided 184 questions. This number is comparable to
related question corpora in Information Retrieval (e.g., bioCADDIE [Roberts et
al., 2017]) which typically consist of around 100 – 150 questions. The
scholars were asked to provide up to five questions from their research
background. Questions varied with respect to granularity. The corpus contains
specific questions, such as _List all datasets with organisms in water
samples!_ or questions with a broader scope, e.g., _Does agriculture influence
the groundwater?_. We published the questionnaires that were handed out in
AquaDiva and iDiv as supplementary material in our repository. In the GFBio
project, questions were gathered via email and from an internal search
evaluation. All questions were inspected by the authors with respect to
comprehensibility. We discarded questions which were not fully understandable
(e.g., missing verb, misleading grammatical structures) but left clear phrases
in the corpus that were not fully expressed as a question. If scholars
provided several questions, they were treated individually even if terms
referred to previous questions, e.g., _Do have earthworm burrows (biopores) an
impact on infiltration and transport processes during rainfall events?_ and
_Are the surface properties influencing those processes?_. In this case, no
further adaption towards comprehensibility has been made. The questions were
also not corrected with respect to grammar and spelling since changing the
grammar could lead to an altered statement. We did not want to lose the
original question statement. In some questions, abbreviations occurred without
explanations. In these cases, we left the questions as they are and did not
provide full terms, since these abbreviations can have various meanings in
different biological fields. It was up to the domain experts to either look
them up or to leave the term out. After the cleaning, the final corpus
consists of 169 questions and is publicly available:
https://github.com/fusion-jena/QuestionsMetadataBiodiv/tree/master/questions.
#### Categories:
Boundaries of semantic categories are domain-dependent and fuzzy. However, in
search, categories support users in finding relevant information more easily
and should be valid across various research backgrounds. In a first round, two
authors of this work analyzed the collected questions manually. Both have a
research background in computer science and strong knowledge in scientific
data management, in particular for biodiversity research. The corpus was split
up and each of them inspected around 50% of it and assigned broad categories
independently of the other one. Afterwards, this first classification was
discussed in several sessions. This resulted in 13 categories. The naming was
adapted to domain-specific denotations and ontologies. Furthermore, the
categories were compared to EASE [Pfaff et al., 2017], a metadata schema which
was primarily developed for an improved dataset retrieval in the field of
ecology. This comparison revealed that there is an overlap with EASE but that
we discovered further relevant categories [Löffler et al., 2017]. The final
categories are:
1. ORGANISM comprises all individual life forms including plants, fungi, bacteria, animals and microorganisms.
2. All species live in certain local and global ENVIRONMENTS such as habitats and ecosystems (e.g., below 4000 m, ground water, city) and
3. have certain characteristics (traits, phenotypes) that are summarized as QUALITY & PHENOTYPE, e.g., length, growth rate, reproduction rate, traits.
4. Biological, chemical and physical PROCESSES are recurring and transform materials or organisms due to chemical reactions or other influencing factors.
5. EVENTS are processes that appear only once at a specific time, such as environmental disasters, e.g., the Deepwater Horizon oil spill or Tree of the Year 2016.
6. Chemical compounds, rocks, sand and sediments are grouped as MATERIALS & SUBSTANCES.
7. ANATOMY comprises the structure of organisms, e.g., body or plant parts, organs, cells, and genes.
8. METHOD describes all operations and experiments that have to be conducted to reach a certain result, e.g., lidar measurements, observation, remote sensing.
9. Outcomes of research methods are delivered as DATA TYPE, e.g., DNA or sequence data is the result of genome sequencing, lidar data is the result of lidar measurements (active remote sensing).
10. All kinds of geographic information are summarized as LOCATION, e.g., Germany, Hainich, Atlantic Ocean, and
11. temporal data including dates, date times, and geological eras are described by TIME, e.g., current, over time, Triassic.
12. PERSON & ORGANIZATION are either projects or authors of data.
13. As reflected in the search questions, scholars in biodiversity are highly interested in HUMAN INTERVENTION on landscape and environment, e.g., fishery, agriculture, and land use.
For the evaluation with domain experts we added two more categories, namely
OTHER and NONE. The first permits defining an own category if none of the
given ones is appropriate. NONE applies if the term is not relevant, if the
domain expert does not know the term, or if the phrase is too fuzzy and cannot
be classified into one category.
#### Annotation:
An annotation process usually has two steps: (1) the identification of terms
based on annotation rules and (2) the assignment of an appropriate category in
a given context. Usually, an annotator - a domain expert - who is trained in
the annotation guidelines, carries out both tasks. However, we argue that
training is somewhat biased and influences annotators in their classification
decision. This is an obstacle in search where an intuitive feedback for
category assignment is required. Hence, we split up the annotation process.
Two scholars, who collected the questions and who are familiar with the
guidelines conducted the identification, whereas domain experts only received
short instructions and assigned categories. Our annotation guidelines, needed
to identify phrases and terms (artifacts) to label, are available as
supplementary material in our repository.
#### Annotators and Annotation Process:
Nine domain experts (8 Postdocs, 1 Project Manager) with expertise in various
biological and environmental sciences participated in the classification task.
All of them have experience in ecology but in addition, each of them has
individual research competence in fields such as bio-geography, zoology,
evolutionary biology, botany, medicine, physiology, or biochemistry.
For the category assignment, all scholars received a link to an online survey
with explanations of the categories (including examples) and short
instructions on how to classify the artifacts. A screenshot of the survey is
presented in Figure 2. The purpose of this evaluation was also explained to
them (improvement of data set retrieval systems). Multi-labeling was not
allowed; only one category was permitted per artifact. Should there be no
proper category, they were advised to select OTHER and if possible to provide
an alternative category. If they did not know a term or phrase, they could
decide either to look it up or to omit it. The latter also applied if they
considered a phrase or term to be not relevant or too complicated and fuzzy.
As we wanted to obtain intuitive feedback, the experts were told not to spend
too much time on the classification decision but to determine categories
according to their knowledge and research perspective. The annotators also had
the opportunity to skip an artifact. In this case the category NONE was
applied. For each question, annotators had the opportunity to provide a
comment.
Figure 2: Excerpt of the survey that was set up for the classification task.
The annotators were told to assign only one category per given artifact. If an
artifact is a compound noun, the nested entities such as adjectives or second
nouns that further describe the term were provided for tagging as well.
We decided to use a combination of csv files, Python scripts and _Limesurvey_
to support the annotation process. Details on this process can be found in the
supplementary material in our repository.
### Results
We analyzed the user responses to determine whether the identified information
categories are comprehensive and representative for biodiversity research. We
computed the inter-rater agreement per artifact to determine the category that
best describes an artifact.
#### Representativeness of the Categories
In order to verify completeness we determined the fraction of artifacts
assigned to the category OTHER, i.e., if the experts deemed none of the given
categories as appropriate. Figure 3 depicts the frequency of information
categories and how often they were selected by the domain experts. As it
turned out, the category OTHER was selected by at least $1$ expert per
artifact for $46\%$ of the phrases and terms and by at least $2$ experts for
$24\%$. The fraction of phrases for which at least $3$ experts selected the
category OTHER was $12\%$. If at least two domain experts agree that there is
no proper category for a given phrase, it is a strong indicator for a missing
category or a misinterpretation. This is the case for $24\%$ out of all
annotated artifacts. Hence, the coverage of the identified information
categories is still high.
Figure 3: The frequency of the categories and how often they were assigned to
given phrases and terms, with and without QUALITY correction.
However, there might be various reasons why none of the given categories fit:
(1) The phrase or term to be annotated was unknown to the annotator such as
_shed precipitation_. (2) Frequently, phrases that refer to data attributes
(e.g., _soil moisture_ , _oxygen uptake rate_ or _amount of rain_) and which
were supposed to be covered by the category QUALITY, were classified as OTHER.
As alternative category, the annotators proposed “Parameter” or “Variable”.
When adding these ratings to the QUALITY category, the results for the OTHER
category decreased to $37\%$/$13\%$/$4\%$. That strongly indicates that
renaming the QUALITY category or adding synonyms would increase
comprehensibility significantly. (3) The category OTHER was often chosen for
terms used in questions with a broader scope in order to express expected
results. However, since this is often vague, scholars tend to use generic
terms such as _signal_ , _pattern_ , _properties_ , _structure_ ,
_distribution_ , _driver_ or _diversity_. Hence, further discussions in the
biodiversity research community are needed to define and classify these terms.
In addition, we wanted to know if there are categories that were not or rarely
used by the annotators. This would indicate a low relevance for biodiversity
research. As depicted in Figure 3, the categories ENVIRONMENT, ORGANISM,
MATERIAL & SUBSTANCES, QUALITY, PROCESS, LOCATION and DATA TYPE have been
selected most frequently (assigned to more than $15\%$ of the phrases).
Information related to these categories seems to be essential for biodiversity
research. Although there were categories that were rarely chosen (PERSON &
ORGANIZATION and TIME), there was no category that was not used at all.
#### Consensus of the Categories
In statistics, the consensus describes how much homogeneity exists in ratings
among domain experts. We determined the inter-rater agreement and inter-rater
reliability using Fleiss’ Kappa ($\kappa$ statistics) [Fleiss, 1971] and
GWET’s AC [Gwet, 2008]. In general, the inter-rater reliability computes the
observed agreement among raters “and then adjusts the result by determining
how much agreement could be expected from random chance” [Quarfoot and Levine,
2016]. $\kappa$ values vary between $-1$ and $+1$, where values less than $0$
denote poorer than chance agreement and values greater than $0$ denote better
than chance agreement. As suggested by Landis and Koch [Landis and Koch,
1977], $\kappa$ values below $0.4$ indicate fair agreement beyond chance,
values between $0.4$ and $0.6$ moderate agreement, values between $0.6$ and
$0.8$ substantial agreement and values higher than $0.80$ indicate almost
perfect agreement. However, $\kappa$ statistics can lead to a paradox: When
the distribution of the raters’ scores is unbalanced, the correction for the
chance agreement can result in negative $\kappa$ values even if the observed
agreement is very high [Quarfoot and Levine, 2016]. Since this is the opposite
of what is expected, a new and more robust statistic has emerged, the GWET’s
AC [Gwet, 2008]. GWET’s AC considers the response categories in the agreement
by chance and the values can range from $0$ to $1$.
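For reference, the following is a minimal sketch of the Fleiss' Kappa computation, assuming a complete rating matrix in which every artifact was rated by the same number of experts; it illustrates the statistic and is not the exact analysis script used here.

```python
def fleiss_kappa(counts):
    """Fleiss' kappa for counts[i][j] = number of raters assigning
    artifact i to category j, with the same number of raters per artifact."""
    N = len(counts)        # number of rated artifacts
    n = sum(counts[0])     # raters per artifact
    k = len(counts[0])     # number of categories
    # Mean observed agreement over all artifacts.
    P_bar = sum((sum(c * c for c in row) - n) / (n * (n - 1)) for row in counts) / N
    # Expected chance agreement from the marginal category proportions.
    p_j = [sum(row[j] for row in counts) / (N * n) for j in range(k)]
    P_e = sum(p * p for p in p_j)
    return (P_bar - P_e) / (1 - P_e)

# Three artifacts, five raters, three categories:
print(fleiss_kappa([[5, 0, 0], [2, 2, 1], [0, 0, 5]]))  # ~0.56, moderate agreement
```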
With a Fleiss’ Kappa of $0.48$ and GWET’s AC of $0.51$ the agreement of the
annotators over all categories was moderate. Considering the QUALITY
correction, the values increase slightly to $0.49$ for Fleiss’ Kappa and
$0.52$ for GWET’s AC. Figure 4a reveals a more detailed picture. It shows the
Fleiss’ Kappa for the individual information categories with QUALITY
correction. The agreement among the experts was excellent for the categories
TIME and ORGANISM and intermediate to good for the categories PERSON &
ORGANIZATION, LOCATION, PROCESS, MATERIALS & SUBSTANCES and ENVIRONMENT. The
experts’ agreement for the categories EVENT, HUMAN INTERVENTION, ANATOMY, DATA
TYPE, METHOD and QUALITY was fair. This lack of agreement can either point to
a different understanding of the categories or might indicate that the
categorization of the phrase itself was difficult since some phrases, in
particular longer ones with nested entities, were fuzzy and difficult to
classify in one category. In the latter case, the annotators were advised not
to choose a category for that phrase. Our results show that for $5\%$ of the
phrases at least $2$ annotators did not provide a category. The fraction of
phrases where $3$ or more annotators did not choose a category was below
$2\%$. This points out that annotators in fact interpreted the categories with
poor agreement differently. This correlates with our results regarding the
category QUALITY. For the categories EVENT, HUMAN INTERVENTION, ANATOMY, DATA
TYPE, METHOD there is no such evidence. This should be discussed and
reconsidered with biodiversity experts.
(a) Fleiss’ Kappa values per category with and without QUALITY correction.
(b) Fleiss’ Kappa values per category for artifacts with one and two terms
(with QUALITY correction).
Figure 4: Fleiss’ Kappa values for the individual information categories.
#### Comparison of short and long artifacts
We also analyzed the influence of longer artifacts on the result. Table 1
presents the $\kappa$ statistic and GWET’s AC for artifacts with one term, two
terms, and three and more terms, including the QUALITY correction. As
assumed, the longer an artifact is, the more difficult it is to assign an
unambiguous category.
Table 1: Annotators’ agreement with QUALITY correction, overall and for one term, two terms, and three and more terms per artifact

Measure | Overall | One Term | Two Terms | $\geq$ Three Terms
---|---|---|---|---
Fleiss’ Kappa | 0.49 | 0.54 | 0.50 | 0.33
GWET’s AC | 0.52 | 0.57 | 0.53 | 0.37
Figure 4b depicts a more detailed picture on the individual categories for
artifacts with one and two terms. Since artifacts with three and more terms
resulted in coefficients with less than $0.4$, we left them out in this
analysis. One-term artifacts got an excellent agreement ($>0.8$) for the
categories ORGANISM, TIME and LOCATION and a moderate agreement for
ENVIRONMENT, MATERIAL, PROCESS and DATA TYPE. It is striking that PERSON results
in a negative value, i.e., poor agreement. Since full person names usually
contain two terms, there were no artifacts with one term that could be
assigned to PERSON & ORGANIZATION. However, looking at the results for two
terms per artifact, the PERSON category reaches an excellent agreement as well
as ORGANISM. Surprisingly, PROCESS ($0.76$) got a substantial agreement for
two terms pointing out that biological and chemical processes are obviously
mainly defined by two terms. The same effect, a larger agreement for two terms
than one term, can also be observed for the categories EVENT and HUMAN
INTERVENTION. DATA TYPE got a moderate agreement for one and two terms.
#### Summary
All $13$ provided categories were used by the annotators to label the
artifacts in the questions. However, what stands out is the high number of the
category OTHER in the frequency analysis. For 45% out of 592 annotations, at
least one domain expert did not assign one of the given categories but
selected OTHER. That points to missing interests that are not represented by
the given classes. In terms of consensus, seven information categories got a
moderate agreement ($>$ 0.4) and five out of these seven were also mentioned
very often ($>15\%$), namely ENVIRONMENT (e.g., habitats, climate zone, soil,
weather conditions), MATERIAL (e.g., chemicals, geological information),
ORGANISM (species, taxonomy), PROCESS (biological and chemical processes) and
LOCATION (coordinates, altitude, geographic description) (Figure 5). We
conclude that these classes are important search interests for biodiversity
research.
In comparison to the outcome of the content assessment analysis conducted in
the GBIF community [Ariño et al., 2013] in 2009, the assumption that user
interests change over time has been confirmed. Species are still an important
category scholars are interested in, however, further important topics for the
acquisition and description of ecosystem services are emerging.
Figure 5: Frequency of category mentions and inter-rater agreement with
QUALITY correction.
We are aware that this result is not complete and leaves room for improvement.
Some category names were misleading and confused the annotators. That is
reflected in fair and bad agreement for some categories such as QUALITY (data
parameters measured) or DATA TYPE (nature or genre of the primary data). Here,
it should be discussed in the research community how they could be further
considered in search, e.g., re-naming or merging of categories. Since the
backgrounds of the annotators were quite diverse and no training took place,
we did not expect completeness and perfect agreement. We wanted to get a real,
genuine, and unbiased first picture of biodiversity scholars’ comprehension
when looking for scientific data. In biology and biodiversity research,
scholars use a specialized language with diverse content and imprecise and
inconsistent naming [Thessen et al., 2012, Ananiadou et al., 2004]. Hence,
labeling and extracting biological entities remain a challenge. Therefore, our
thresholds for agreement ($>$ 0.4) and frequency ($>15\%$) are not as high as
in similar studies in bio-medicine.
Concerning the shortened methodology for the evaluation, our assumptions have
been confirmed. It saved a lot of time that only a few people identified the
artifacts to be labeled and that the domain experts only assigned categories.
On average, domain experts spent between two and three hours
for labeling 169 questions. We conclude that our shortened annotation approach
is fine for opening up new domains and getting insights in what scholars are
interested in. If the aim is to achieve higher agreement per annotation, we
recommend training sessions and trial rounds. However, it should be considered
that in this case the unbiased feedback gets lost.
For further reuse of the annotated question corpus, our analysis script also
produces an XML file with all questions and annotations above a certain
agreement threshold that can be set as a parameter. By default, all
annotations per question with an agreement above $0.6$ will be returned.
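A minimal sketch of such an export step is shown below, assuming the annotations are kept per question as (artifact, category, agreement) tuples; the element names are illustrative and not the actual schema of the published corpus.

```python
import xml.etree.ElementTree as ET

def export_annotations(questions, threshold=0.6, path="corpus.xml"):
    """Write all questions, keeping only annotations whose inter-rater
    agreement exceeds the given threshold, to an XML file."""
    root = ET.Element("questions")
    for text, annotations in questions.items():
        q = ET.SubElement(root, "question")
        ET.SubElement(q, "text").text = text
        for artifact, category, agreement in annotations:
            if agreement > threshold:
                a = ET.SubElement(q, "annotation",
                                  category=category, agreement=str(agreement))
                a.text = artifact
    ET.ElementTree(root).write(path, encoding="utf-8", xml_declaration=True)
```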
## B - Metadata Standards in the Life Sciences
In this section, we describe a selection of existing metadata standards and
investigate whether their elements reflect the identified information
categories.
### Methodology
Metadata describe scientific primary data such as experiments, tabular data,
images, sound and acoustic files in a structured format, e.g., XML or JSON.
Metadata follow a _schema_, typically stored in an XSD file, which outlines
which elements and attributes exist and which of them are mandatory and/or
repeatable. Many schemes employ vocabularies or ontologies to ensure that the
metadata use the same names for concepts. In order to become a metadata
standard, a schema needs to be formally adopted by a standards’ organization
such as the International Organization for Standardization,
https://www.iso.org.
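How a schema constrains metadata files can be illustrated with a validation step; the following is a minimal sketch using the third-party lxml library, with placeholder file names.

```python
from lxml import etree

# The schema definition (XSD) declares the allowed, mandatory and repeatable elements.
schema = etree.XMLSchema(etree.parse("metadata_schema.xsd"))
doc = etree.parse("dataset_metadata.xml")  # a metadata record to check

if not schema.validate(doc):
    for error in schema.error_log:  # e.g., a missing mandatory element
        print(error.message)
```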
There are a variety of metadata standards for different research fields. Table
2 presents a list of 13 metadata standards used in data repositories for the
Life Sciences. All metadata standards were obtained from _re3data_ [re3data,
2018]. We filtered for “Life Sciences” and retrieved a list of 25 standards.
The categories _Other_ and _Repository-Developed Metadata Schema_ have been
left out. The _MIBBI_ standard is outdated and has been integrated into _ISA-
Tab_ , so we left it out, too. All other standards that were used in at least
5 repositories have been selected.
We compared them along the focused _Domain_ , _Number of Elements_ in the
standard and _Mandatory Fields_ (Table 3). The standards are ranked by the
number of data repositories supporting them. The number of elements and
required fields were either stated on the standard’s website or they were
obtained from the schema. We also examined, if there is support for the
Semantic Web, namely, if the standard supports RDF or OWL formats. According
to the FAIR principles [Wilkinson et al., 2016], community standards, semantic
formats, and ontologies ensure interoperability and data reuse. The last two
columns denote whether the standard is still maintained and provide some
examples of data repositories that support the respective standard.
Table 2: Metadata Schemes in the Life Sciences obtained from _re3data_ [re3data, 2018]

* Dublin Core (http://dublincore.org/documents/dces/): a widely used generic metadata standard offering basic fields for the description of research data.
* DDI (https://www.ddialliance.org/): the DDI (Document, Discover, Interoperate) standard addresses metadata from questionnaires and surveys in the social, behavioral, economic and health sciences.
* DataCite (https://schema.datacite.org): relates to generic research data and comprises a set of mandatory, recommended and optional properties.
* ISO19115 (https://www.iso.org/standard/53798.html): includes the identification, the extent, the quality, the spatial and temporal schema, spatial reference, and distribution of digital geographic data.
* DIF (https://gcmd.nasa.gov/DocumentBuilder/defaultDif10/guide): the Directory Interchange Format (DIF) is the US predecessor of ISO 19115 and focuses on the description of geospatial metadata.
* FGDC/CSDGM (https://www.fgdc.gov/metadata/csdgm-standard): the Federal Geographic Data Committee Content Standard for Digital Geospatial Metadata is a legacy national standard for geospatial data developed in the United States. FGDC now encourages its research community to use the international ISO standards.
* EML (https://knb.ecoinformatics.org/): the Ecological Metadata Language (EML) is a series of XML document types that can be used in a modular and extensible manner to document ecological data.
* Darwin Core (https://dwc.tdwg.org/): provides metadata fields for sharing biodiversity data. It is primarily based on taxa, their occurrence in nature and related information.
* RDF Data Cube (https://www.w3.org/TR/vocab-data-cube/): aims to describe statistical data. The model is compatible with the cube model that underlies the Statistical Data and Metadata eXchange standard (SDMX, https://sdmx.org/), an ISO standard for exchanging and sharing statistical data and metadata among organizations.
* ISA-Tab (https://isa-specs.readthedocs.io): the ISA specification is not a standard but a metadata framework that addresses the description and management of biological experiments. It comprises three core entities to capture experimental metadata: Investigation (the project context), Study (a unit of research) and Assay (analytical measurements).
* ABCD (https://github.com/tdwg/abcd): the ABCD (Access to Biological Collection Data) standard aims to share biological collection data. It offers a variety of metadata fields to describe specimens and observations, and it is compatible with numerous existing standards.
* CF (http://cfconventions.org/): the Conventions for Climate and Forecast Metadata (CF) comprise geophysical quantities to describe climate and forecast data.
* DCAT (https://www.w3.org/TR/vocab-dcat/): the Data Catalog Vocabulary (DCAT) facilitates interoperability between data catalogs in the web and allows dataset search across sites.
Table 3: Comparison of metadata standards and specifications used in data
repositories for Life Sciences. The number in brackets denotes the number of
repositories supporting the standard.
Standard Name | Domain | Elements | Mandatory Elements | Semantic Support | Maintenance | Examples
---|---|---|---|---|---|---
$DublinCore(142)$ | general | 15 | No | Yes (RDFS) | Yes | Pangaea, Dryad, GBIF, Zenodo, Figshare
$DDI(74)$ | questionnaires and surveys in the social, behavioral, economic, and health sciences | 1154 | 7 | No | Yes | Dataverse
$DataCite(60)$ | general research data | 19 (57) | 5 | No | Yes | Pangaea, Zenodo, Figshare, Radar
$ISO19115(36)$ | geospatial data | N/A | 7 | No | Yes | Pangaea, NSF Arctic Data Center, coastMap
$FGDC/CSDGM(34)$ | geographic information | 342 | 74 | No | No (1998, last update: 2002) | Dataverse, NSF Arctic Data Center
$EML(21)$ | ecological data | N/A | N/A | No | Yes | GBIF, GFBio, SNSB, Senckenberg, WORMS, NSF Arctic Data Center
$DarwinCore(21)$ | biodiversity data | 184 | No | Yes (RDF) | Yes | GFBio, GBIF, VerNET, Atlas of Living Australia, WORMS
$RDFDataCube(18)$ | statistical data | 36 | N/A | Yes | Yes | Dryad (only RDF with DublinCore)
$ISA-Tab(9)$ | biological experiments | N/A | Yes (11 blocks) | Yes | Yes | Data Inra, GigaDB
$DIF(7)$ | geospatial metadata | 34(219) | 8 | No | Yes | Pangaea, Australian Antarctic Data Center, Marine Environmental Data Section
$CF(7)$ | climate and forecast | 4798—54—70 (lines in the standard table) | No | No | Yes | WORMS, NSF Arctic Data Center, coastMap
$ABCD(7)$ | biological collection data | 1418 | 20 | No | Yes (ABCD 3.0) | GBIF, BioCase Network
$DCAT(6)$ | data catalogs, data sets | 16 | N/A | Yes | Yes | Data.gov.au, European Data Portal
N/A denotes that the information was not available
---
The standard supported by most repositories is _Dublin Core_ , a general
metadata standard based on 15 fields, such as contributor, coverage, creator,
date, description, format, and identifier. In addition, data repositories
utilize further domain-specific standards with richer vocabulary and structure
such as _ISO19115_ for geospatial data or _EML_ for ecological data. The _RDF
Data Cube Vocabulary_ is not used by any of the data centers. We suppose the
abbreviation _RDF DC_ might lead to a misunderstanding (_DublinCore_ instead
of _RDF Data Cube_). All standards provide elements that can be described
along the questions Who, What, Where, When, Why, and How. In particular,
contact person, collection or publication date and location are considered
with one or several metadata fields in all standards. In order to describe the
main scope of the primary data, all standards offer numerous metadata fields
but differ in their granularity. While simple ones such as _Dublin Core_ only
offer fields such as title, description, format, and type, standards with more
elements such as _EML_ or _ABCD_ even offer fields for scientific names,
methods and data attributes measured. _EML_ even allows scholars to define the
purpose of the study making it the only standard that supports the _Why_
question. Data reuse and citation also play an important role. As it is
demanded by the Joint Declaration of Data Citation Principles [M., 2014] and
practical guidelines for data repositories [Fenner et al., 2019], all
standards provide several elements for digital identifiers, license
information and citation. In addition, some standards provide elements for
data quality checks. For instance, _ISO19115_ offers a container for data
quality including lineage information and _EML_ supports quality checks with
the _qualityControl_ element. Surprisingly, 52 repositories stated that they
use self-developed metadata schemes. This indicates that a variety of data
repositories are not satisfied with the existing metadata landscape and have
therefore started developing their own schemas.
For our further analysis, we selected 12 out of the 13 standards shown in
Table 3. Since _DDI_ is a standard that was mainly developed for
questionnaires and surveys, we decided not to use it.
### Results
In our second analysis, we compared the information categories with elements
of the metadata schemes to figure out whether search interests can be explicitly
described with metadata elements.
Our results are presented in Table 4. For the sake of completeness, we
explored all $13$ categories from the previous analysis but marked the ones
with an asterisk that had a fair agreement ($<$ 0.4). The categories are
sorted by frequency from left to right. The red color denotes that no element
is available in the standard to express the category, orange indicates that
only a general field could be used to describe the category and a light-orange
cell implies that one or more elements are available in the standard for this
search interest.
Table 4: Comparison of metadata standards and information categories. The
categories are sorted by frequency, the asterisk denotes the categories with
an agreement less than 0.4
[Color-coded matrix with one row per standard ($DublinCore$, $DataCite$, $ISO19115$, $FGDC/CSDGM$, $EML$, $DarwinCore$, $RDFDataCube$, $ISA-Tab$, $DIF$, $CF$, $ABCD$, $DCAT$) and one column per category (Environment, Quality*, Material, Organism, Process, Location, Data Type*, Method*, Anatomy*, Human Intervention*, Event*, Time, Person). The cell colors, which do not survive in plain text, indicate for each standard whether a category is not provided, covered only by an unspecific generic element, or available through one or more dedicated elements.]
There is no schema that covers all categories. Since the interests are
obtained from scholars with various and heterogeneous research backgrounds,
this was also not to be expected. Some standards such as _ABCD_ or
_DarwinCore_ are discipline-specific and therefore, mainly provide elements
that support the respective domain (e.g., collection data).
Apart from HUMAN INTERVENTION, all categories are covered by different
metadata schemes. In particular, _ISA-Tab_ followed by _ABCD_ , _DarwinCore_
and _EML_ are frameworks and metadata schemes with elements that cover most of
the search interests of biodiversity researchers. _EML_ provides numerous
fields to describe ecological data including elements for environmental
information (studyAreaDescription), species (taxonomicCoverage) and research
methods used (methods). However, important search preferences such as
materials (including chemicals) and biological and chemical processes are only
explicitly supported by _ISA-Tab_. Widely used general standards such as
_DublinCore_ or _DataCite_ offer at least a general field (dc:subject,
subject) that could be used to describe the identified search categories. In
_DublinCore_ , at least one metadata field each is provided to describe
geographic information, e.g., where the data have been collected (LOCATION),
the type of the data (DATA TYPE), the creator and contributor (PERSON &
ORGANIZATION) and when it was collected or published (TIME). However, one
field is often not enough to distinguish whether a provided date is, for
instance, a collection date or a publication date, or whether the creator of
the dataset is the same person who collected the data. In contrast, _DataCite_ provides
individual fields for publication year and the date field can be used with
dateType="Collected" to specify a collection date. The metadata field
contributor can also be extended with a type to indicate whether the contact
details belong to the data collector or the project leader. Bounding box
elements are also provided to enter geographic coordinates (LOCATION).
The question that still remains to be answered is whether these detailed
metadata standards are actually used by data repositories.
## C - Metadata Usage in Selected Data Repositories
In the following analysis, we examine what metadata standards are used in
selected data repositories and how many schema elements are actually filled.
In a second part, we explore, if descriptive fields of selected files contain
data that might be relevant for information seekers.
### Methodology
Scholarly publishers increasingly demand scientific data to be submitted along
with publications. Since publishers usually do not host the data on their own,
they ask scholars to upload the data at one of the repositories for their
research domain. According to _Nature’s_ list of recommended data repositories
[Nature, 2018], we selected five archives for our further analysis: three
generalist ones (_Dryad_, _Figshare_ and _Zenodo_) and two domain-specific
ones (_PANGAEA_ - environmental data, _GBIF_ - taxonomic data). In the
biodiversity projects we are involved in, scholars also often mention these
repositories as the ones they mainly use.
#### OAI-PMH Harvesting
The Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH) is a
client/server architecture primarily developed for providing and consuming
metadata. Data repositories are required to expose their metadata in
$DublinCore$ metadata format and may also support other metadata formats.
Metadata consumers, e.g., other institutions or data portals, harvest that
data via the provided services on the OAI-PMH server in order to integrate or
reuse it in their services. The OAI-PMH protocol comprises a set of six
services that are accessible via HTTP. Requests for metadata can be based on a
date stamp range or can be restricted to named sets defined by the provider.
We parsed all available metadata from _Figshare_ , _Dryad_ , _GBIF_ ,
_PANGAEA_ and _Zenodo_ in May 2019 via their respective _OAI-PMH_ interfaces.
_GBIF_ only offers the metadata of their datasets in the _OAI-PMH_ interface.
The individual occurrence records, which are provided in _Darwin Core_
metadata schema [Gaiji et al., 2013] and belong to a dataset, are available in
the search, only. Hence, we only analyzed the metadata of the datasets.
Our script parses the metadata fields of all public records per metadata
schema for each of the selected data repositories (Table 5). Apart from the
metadata standards introduced in the previous section, a few more standards
appear in this list. _OAI-DC_ is an abbreviation for $DublinCore$, a mandatory
standard in OAI-PMH interfaces. _QDC_ means _qualified DublinCore_ and denotes
a $DublinCore$ variant that extends or refines the $15$ core elements. _ORE_
(The Open Archives Initiative Object Reuse and Exchange (OAI-ORE)) is a
standard for exchanging aggregations of web resources. It can be used together
with other semantic standards such as RDF to group individual web resources.
We also considered _Pan-MD_ , a metadata schema developed by _PANGAEA_. It
extends $DublinCore$ with more fine-grained geographic information such as
bounding boxes or adds information on data collection. The latter can range
from projects, parameters, methods, and sensors to taxonomy or habitats.
Table 5: Metadata schemes offered by selected data repositories in their OAI-PMH interfaces.

Dryad | GBIF | PANGAEA | Zenodo | Figshare
---|---|---|---|---
METS | EML | DATACITE3 | DATACITE | CERIF
OAI-DC | OAI-DC | DIF | DATACITE3 | METS
ORE | | ISO19139 | DATACITE4 | OAI-DATACITE
RDF | | ISO19139.IODP | MARCXML | OAI-DC
| | OAI-DC | MARC21 | QDC
| | PAN-MD | OAI-DATACITE | RDF
| | | OAI-DATACITE3 |
| | | OAI-DC |
_MARC21_ , _MARCXML_ and _METS_ are metadata standards that are mainly used
for bibliographic data in digital libraries. Hence, we left them out of our
further explorations. We also did not consider the _Common European Research
Information Format (CERIF)_ and _ORE_ as they are not focused on describing
primary data but research entities and their relationships and grouping web
resources, respectively. However, we decided to permit all available
repository-developed schemes for Life Sciences such as _Pan-MD_ in order to
get an impression how repositories extend metadata descriptions.
Per metadata file we inspected which elements of the metadata standards are
used, and we saved their presence (1) or non-presence (0). The result is a csv
file per metadata schema that contains dataset IDs and metadata elements used.
All generated files are stored in separate folders per repository and metadata
format. Each request to a repository returns an XML body that includes several
metadata files as records. Each record is separated into two sections, a header
and a metadata section. The header section comprises general information such
as the ID of the record and a date stamp. The metadata section contains
elements of the metadata schema, e.g., the names of the contributors, the
abstract and the publication year. Unused metadata fields are not included in
the response. We saved a boolean value encoding whether a metadata field was
used or not. The source code and documentation on how to use it are available
in our GitHub repository.
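The presence/non-presence encoding can be sketched as follows for the mandatory oai_dc format; this is a simplified single-page illustration of the harvesting step (a full harvest would follow the resumptionToken until the record list is exhausted), not the published script.

```python
import csv
import urllib.request
import xml.etree.ElementTree as ET

OAI = "{http://www.openarchives.org/OAI/2.0/}"
DC = "{http://purl.org/dc/elements/1.1/}"
DC_FIELDS = ["title", "creator", "subject", "description", "publisher",
             "date", "type", "format", "identifier", "language", "rights"]

def harvest_presence(base_url, out_path="presence.csv"):
    """Fetch one ListRecords page in oai_dc format and record, per dataset,
    which Dublin Core elements are used (1) or not (0)."""
    url = base_url + "?verb=ListRecords&metadataPrefix=oai_dc"
    root = ET.parse(urllib.request.urlopen(url)).getroot()
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["id"] + DC_FIELDS)
        for record in root.iter(OAI + "record"):
            rid = record.findtext(OAI + "header/" + OAI + "identifier")
            meta = record.find(OAI + "metadata")
            writer.writerow([rid] + [
                1 if meta is not None and meta.find(".//" + DC + field) is not None
                else 0
                for field in DC_FIELDS])
```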
Table 6: The date stamps used for each metadata standard and their descriptions obtained from the standard’s website.

Format | Element | URL | Description
---|---|---|---
_OAI-DC/METS/QDC/RDF (Dryad)_ | dc:date | http://www.dublincore.org/specifications/dublin-core/dces/ | “A point or period of time associated with an event in the lifecycle of the resource.”
_EML_ | pubDate | https://knb.ecoinformatics.org/external//emlparser/docs/eml-2.1.1/eml-resource.html#pubDate | “The ’pubDate’ field represents the date that the resource was published.”
_DATACITE3/4, OAI-DATACITE/3_ | publicationYear | https://support.datacite.org/docs/schema-40 | “The year when the data was or will be made publicly available.”
_DIF_ | DIF_Creation_Date | https://gcmd.gsfc.nasa.gov/DocumentBuilder/defaultDif10/guide/metadata_dates.html | “refers to the date the data was created”
_ISO19139/ISO19139.iodp_ | gco:DateTime | https://geo-ide.noaa.gov/wiki/index.php?title=ISO_Dates | (CI-DataTypeCode=publication), publication date
_PAN-MD_ | md:dateTime | http://ws.pangaea.de/schemas/pangaea/MetaData.xsd | publication date (contact to data repository)
_RDF (Figshare)_ | vivo:datePublished | https://codemeta.github.io/terms/ | “Date of first broadcast/publication”
For our further consideration, we wanted to obtain a publication date of each
downloaded dataset to inspect how many datasets have been published over the
years in which format per data repository. Unfortunately, a publication date
is not provided in all metadata schemes. Therefore, we looked up each
date-related field in the schema and used the one that is (based on its
description) closest to a publication date. Table 6 depicts all date
stamps utilized and their descriptions. If the respective date stamp was not
found in a dataset or was empty, we left the dataset out in the following
analysis.
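A hedged sketch of this schema-specific lookup, with the mapping abridged from Table 6 and the record assumed to be a flat field-to-value dictionary:

```python
# Illustrative mapping from metadata format to the date element used as
# publication date, abridged from Table 6.
DATE_FIELD = {
    "oai_dc": "dc:date",
    "eml": "pubDate",
    "datacite": "publicationYear",
    "dif": "DIF_Creation_Date",
    "iso19139": "gco:DateTime",
    "pan_md": "md:dateTime",
}

def publication_date(record, fmt):
    """Return the schema-specific publication date, or None so that the
    dataset is dropped from the timeline analysis."""
    value = record.get(DATE_FIELD.get(fmt, ""), "")
    return value or None
```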
#### Content Analysis:
General, descriptive metadata fields such as ‘title’, ‘description’ or
‘abstract’, and ‘subject’ might contain relevant data that are interesting for
information seekers. Using conventional retrieval techniques, this data is
only accessible in a full text search and if the entered query terms exactly
match a term in the dataset. Hence, we aim to explore what information is
available in general, descriptive metadata fields.
In a first step, we downloaded descriptive metadata fields, namely, dc:title,
dc:description and dc:subject in _OAI-DC_ format from all repositories in
October and November 2019. In parallel to the download, we collected the
keywords used in the subject field and counted their occurrences in a separate
csv file.
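A minimal sketch of this counting step (assuming each record's dc:subject entries have already been parsed into a list of strings):

```python
from collections import Counter

def count_subject_terms(records):
    """Count dc:subject keywords exactly as given (case-sensitive)."""
    counts = Counter()
    for subjects in records:  # one list of dc:subject strings per dataset
        counts.update(term.strip() for term in subjects if term.strip())
    return counts
```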
In order to further inspect the content with Natural Language Processing (NLP)
tools, we selected a subset of representative datasets. We limited the amount
to 10,000 datasets per repository as the processing of textual resources is
time-consuming and resource-intensive. A variety of applications have been
developed to determine Named Entities (NE) such as geographic locations,
persons and dates. Thessen et al. [Thessen et al., 2012] explored the
suitability of existing NLP applications for biodiversity research. Their
findings reveal that current text mining systems, which were mainly developed
for the biomedical domain, are able to discover biological entities such as
species, genes, proteins and enzymes. Further relevant entity types such as
habitats, data parameters or processes are currently not supported by existing
taggers. Thus, we concentrated on the extraction of entity types that (a)
correspond to the identified search interests and for which (b) text mining
pipelines are available. We used the text mining framework GATE [Cunningham et
al., 2011] and its ANNIE pipeline [Cunningham et al., 2002] as well as the
OrganismTagger [Naderi et al., 2011a] to extract geographic locations,
persons, organizations and organisms.
### Results
The overall statistics are presented in Table 7. First, we inspected the
fields concerning a publication date in a valid format. We could not use all
harvested datasets, as for some metadata files publication dates were not
available. _Dryad_ had a large number of datasets with the status “Item is not
available”, which we left out, too. The number in brackets denotes the number
of datasets we used for the following considerations. What stands out is that
most repositories provide general standards; only _PANGAEA_ and _GBIF_ utilize
discipline-specific metadata schemes. _Dryad_ and _Figshare_ already provide
metadata in semantic formats such as RDF. In addition, _Figshare_ offers
_Qualified Dublin Core (QDC)_, an extended _Dublin Core_ that allows the
description of relations to other data sources.
#### Timelines
Based on the given publication dates, we computed timelines (Figure 6) for the
introduction of the various standards over time per data repository. The code
and all charts are available in the repository. As _Dryad_ provides several
dc:date elements in the metadata, we used the first available date entry as
publication date for the timeline chart.
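A minimal sketch of the timeline computation (using pandas; unparseable or missing date stamps are dropped, as in our analysis):

```python
import pandas as pd

def timeline(date_stamps):
    """Number of datasets per publication year for one repository/format."""
    years = pd.to_datetime(pd.Series(date_stamps), errors="coerce").dt.year
    return years.dropna().astype(int).value_counts().sort_index()
```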
Per repository, the timelines for the different metadata formats are almost
identical. Obviously, when introducing a new metadata format, publication
dates were adopted from existing metadata formats. Only _Figshare_ uses new
date stamps when a new metadata format is provided. For instance, _Figshare’s_
timeline shows that QDC and RDF were launched in 2015. The result for RDF
was too large to process together with the other metadata formats. Hence,
we produced the timeline for RDF separately. The timelines across all
repositories reveal a steadily increasing number of datasets being published
at _GBIF_, _Dryad_, _Zenodo_ and _Figshare_. For _PANGAEA_, the timeline
points to a constant rate of around 10,000 published datasets a year, apart
from an initial release phase between 2003 and 2007.
Figure 6: Timelines for all repositories presenting the number of datasets per metadata schema offered. _Figshare’s_ timeline for RDF was computed separately as the data are too large to process together with the other metadata formats.
Table 7: Total number of datasets parsed per data repository and metadata schema. The numbers in brackets denote the number of datasets used for the analysis. All datasets were harvested and parsed in May 2019.
Metadata Schema | Dryad | PANGAEA | GBIF | Zenodo | Figshare
---|---|---|---|---|---
_OAI-DC_ | 186951 (142329) | 383899 (383899) | 44718 (42444) | 255000 (255000) | 3128798 (3128798)
_QDC_ | | | | | 1718059 (1718059)
_RDF_ | 186955 (142989) | | | | 3157347 (3157347)
_DATACITE_ | | | | 1268155 (1268155) |
_DATACITE3_ | | 383906 (383906) | | 1268232 (1268232) |
_OAI-DATACITE_ | | | | 1266522 (1266522) | 3134958 (3134958)
_OAI-DATACITE3_ | | | | 1268679 (1268679) |
_DATACITE4_ | | | | 1268262 (1268262) |
_EML_ | | | 44718 (42444) | |
_DIF_ | | 383899 (383899) | | |
_ISO19139_ | | 383899 (383899) | | |
_ISO19139.iodp_ | | 383899 (383899) | | |
_PAN-MD_ | | 383899 (383899) | | |
#### Metadata Field Usage
Figure 7 presents how many metadata elements of the best matching standard
were filled. The individual results per data archive are available in our
repository as supplementary material. _Dryad_ used 9 out of 15 available
metadata fields from OAI-DC very often (> 80%), including important fields such
as dc:title, dc:description and dc:subject. dc:publisher and dc:contributor
were provided in less than 20% of cases. For _GBIF_, the EML standard does not provide
a fixed number of core elements. Hence, we analyzed the 129 available fields.
Most of them (89 elements) were not filled, e.g., fields describing taxonomic
information. Data about author, title and description were provided in more
than 80%. The general field eml:keyword was used in around 20%. Out of 124
used fields in _PANGAEA’s_ Pan-MD format, 43 fields were filled in more than
80% of the harvested metadata files including information on the author,
project name, coordinates, data parameters and devices used. Fields that were
filled less often are supplementary fields, for instance citation information
such as volume and pages. For _Zenodo_, all required fields in DataCite (identifier, creator,
title, publisher, publication year) were always filled. In addition, title,
rights and descriptions as well as resource type were also provided in more
than 99% of the analyzed metadata files. However, in only 45% of the metadata
files, keywords (subject) were present. _Figshare_ used only 12 out of 17
available fields of QDC, but these fields were always filled.
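The percentages behind Figure 7 can be computed directly from the 0/1 presence files produced during harvesting; a minimal sketch:

```python
import pandas as pd

def field_usage(presence_csv):
    """Percentage of datasets in which each metadata element was filled."""
    df = pd.read_csv(presence_csv)
    return (df.drop(columns=["id"]).mean() * 100).sort_values(ascending=False)
```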
Figure 7: Metadata field usage in all data repositories evaluated.
#### Category - Field - Match
Per data repository and metadata format, we computed charts that visualize
which field was filled at what percentage rate and whether the fields
correspond to the categories introduced in Section “A - Information Needs in
the Biodiversity Domain”. Table 8 presents a summary of all data repositories
and their best
matching standard. The individual results per repository and the concrete
field-to-category mapping are available in our repository.
Temporal expressions (TIME) and information about author and/or creator
(PERSON) were mostly provided in all repositories. Apart from _PANGAEA_,
repositories mainly provided the publication date and only partially added
information about when the data was collected. Information about data type and
formats was also contained in all metadata files apart from _GBIF_. The
identified search categories were partially covered by two repositories.
_GBIF_ with EML reflects most of the categories, but fields that correspond to
ENVIRONMENT, ORGANISM, DATA TYPE and METHOD were rarely filled. Metadata files
in _PANGAEA’s_ repository-developed standard Pan-MD always contained
information on data parameters (QUALITY) and geographic locations. In most
cases, research methods and devices used were also given. _Dryad_ provided
geographic information (LOCATION) in its dc:coverage field in at least 60%.
Table 8: Comparison of data repositories and their best matching standard
with the information categories. The categories are sorted by frequency. The
asterisk denotes the categories with an agreement less than 0.4
| Environment | Quality* | Material | Organism | Process | Location | Data Type* | Method* | Anatomy* | Human Intervention* | Event* | Time | Person
---|---|---|---|---|---|---|---|---|---|---|---|---|---
$GBIF$ ($EML$) | (3%) | | | (11%) | | (35%) | (8%) | (18%) | | | | (publication Date - 100%, collection Date - 10%) | ($>$90%)
$Dryad$ ($OAI-DC$) | | | | | | (60%) | | | | | | (publication Date) | (80%)
$PANGAEA$ ($Pan-MD$) | | ($>$90%) | | | | (100%) | (100%) | (Devices used - 90%, research methods - 65%) | | | | (publication Date - 100%, collection Date - 80%) | (100%)
$Zenodo$ ($OAI-Datacite$) | | | | | | | (100%) | | | | | (publication Date) | (100%)
$Figshare$ ($QDC$) | | | | | | | (100%) | | | | | (publication Date) | (100%)
Table key
$\blacksquare$ | Unspecific (generic element) | $\blacksquare$ | Available (one or more elements)
---|---|---|---
Amount in brackets denotes the percentage the element is filled.
#### Content Analysis:
Table 9 presents the Top 5 keywords in the metadata field dc:subject for all
repositories, sorted by their frequencies. The full keyword lists are available
in our repository.
For _GBIF_ datasets, an empty dc:subject field was returned in 81% of cases.
_Zenodo’s_ metadata provided keywords in 52% of the inspected cases. None of
the repositories seems to normalize upper and lower case. For several terms,
different spellings resulted in separate entries. Numerous keywords in
_PANGAEA_ and _Dryad_ reveal that both repositories host marine data.
_PANGAEA_ ’s list mainly contains data parameters measured and used devices.
In contrast, _Dryad’s_ list indicates that terrestrial data are also provided.
For instance, the lower ranked terms contain entries such as Insects (1296)
(insects (180)) or pollination (471) (Pollination (170)). Geographic
information, e.g., California (9817), also occurred in _Dryad’s_ dc:subject
field. _Zenodo’s_ and _Figshare’s_ keyword lists contain numerous terms
related to collection data. We checked the term ‘Biodiversity’ in both
repositories in the search interfaces on their websites. It turned out that
the _Meise Botanic Garden_ (https://www.plantentuinmeise.be) provided large
amounts of collection data in _Zenodo_. Hence, each occurrence record counted as a search
hit and got the label ‘Biodiversity’. We also discovered that _Figshare_
harvests _Zenodo_ data which also resulted in high numbers for _Figshare_ and
the keyword ‘Biodiversity’ (219022).
Table 9: Top 5 keywords and their frequencies in the metadata field dc:subject.
| Pangaea | GBIF | Dryad | Zenodo | Figshare
---|---|---|---|---|---
| water (201102) | Occurrence (6510), occurrence (46) | Temperature (16652), temperature (15916) | Taxonomy (459877), taxonomy (105) | Medicine (1057684), medicine (240)
| DEPTH (198349), Depth(71916) | Specimen (3046), specimen (22) | Integrated Ocean Observing System (16373) | Biodiversity (458336), biodiversity (8593) | Biochemistry (1015906), biochemistry (92)
| Spectral irradiance (175373) | Observation (2425), observation (24) | IOOS (16373) | Herbarium (270110), herbarium (91) | Biological Sciences not elsewhere classified (983829)
| DATE/TIME (128917) | Checklist (589), checklist (43) | Oceanographic Sensor Data (15015) | Terrestrial (269900), terrestrial (177) | Chemical Sciences not elsewhere classified (842865)
| Temperature (118522), temperature (50) | Plantas (368), plantas (42) | continental shelf (15015) | Animalia (205242), animalia (261) | Biotechnology (792223), biotechnology (23978)
empty dc:subject | 0 | 38296 | 15436 | 705730 | 0
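Such case variants could be merged by a simple normalization step before ranking; a minimal sketch:

```python
from collections import Counter

def merge_case_variants(counts):
    """Fold keyword counts that differ only in capitalization, e.g.
    'Temperature' (16652) and 'temperature' (15916) in Dryad (Table 9)."""
    merged = Counter()
    for term, n in counts.items():
        merged[term.lower()] += n
    return merged
```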
In a second analysis, we investigated which kinds of entities occur in
descriptive metadata fields. As noted above, the processing of textual
resources with NLP tools is time-consuming and resource-intensive, so we again
limited the analysis to a subset of 10,000 datasets per repository. Table 10
presents the filter strategies. For _PANGAEA_ and _GBIF_ , we randomly
selected 10,000 datasets as they are domain-specific repositories for which
all data are potentially relevant for biodiversity research. For _Dryad_ , the
filter consists of a group of relevant keywords, and for _Zenodo_ and
_Figshare_ we used the keyword ‘Biodiversity’. Due to the large amount of
collection data with the keyword ‘Biodiversity’, we are aware that this filter
strategy might have led to a certain bias in the selected data. A minimal
sketch of this selection step is shown after Table 10.
Table 10: Filter strategies used per data repository to select 10,000 datasets. The number in brackets denotes the total number of available datasets in OAI-DC format at the time of download (October/November 2019).
| Pangaea | GBIF | Dryad | Zenodo | Figshare
---|---|---|---|---|---
filter strategy | 10000 randomly selected (388254) | 10000 randomly selected GBIF (46954) | 10000 randomly with keywords: biodiversity, climate change, ecology, insects, species richness, invasive species, herbivory, pollination, endangered species, ecosystem functioning, birds (149672) | 10000 randomly with keyword: Biodiversity (1467958) | 10000 randomly with keyword: Biodiversity (3602808)
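The following sketch illustrates the selection step; the `records` structure, mapping dataset IDs to their dc:subject terms, is an assumption for illustration:

```python
import random

def select_subset(records, keywords=None, k=10000, seed=42):
    """Randomly pick k datasets, optionally restricted to those whose
    dc:subject terms intersect a keyword filter (cf. Table 10)."""
    ids = [rid for rid, subjects in records.items()
           if keywords is None
           or keywords & {s.lower() for s in subjects}]
    return random.Random(seed).sample(ids, min(k, len(ids)))
```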
Per data repository, we processed the selected 10,000 files with two open-
source taggers of the text mining framework GATE [Cunningham et al., 2011].
Named Entities such as Person, Organization and Location were obtained with
the ANNIE pipeline [Cunningham et al., 2002], and Organisms were obtained from
the OrganismTagger [Naderi et al., 2011a]. The results are presented in Table
11. Unfortunately, the OrganismTagger pipeline aborted for _PANGAEA_ and
_Zenodo_, but ‘Organism’ annotations were created in around 12% of _GBIF_
files, 36% of _Dryad_ files and 85% of _Figshare_ files. The number of
‘Organism’ annotations in _Figshare_ files is probably that high due to the
mentioned bias towards collection data. The number of ‘Organism’ annotations in _GBIF_
files is low since datasets mostly describe the overall study and do not
contain concrete species names but rather broader taxonomic terms such as
‘Family’ or ‘Order’. A large number of ‘Location’ annotations were extracted
for files from _PANGAEA_ (91%) and _Figshare_ (~100%). ‘Person’ and
‘Organization’ annotations are largely present in _PANGAEA_ (~51%) and
_GBIF_ files (~74%).
The text mining pipelines were originally developed and evaluated on text
corpora, not on sparse datasets. Hence, the results might contain wrong (false
positive) annotations. However, the results indicate that NLP tools can
support the identification of biological entities. That could be an approach
for generalist repositories to additionally enrich metadata. All scripts and
the final results are available in our repository.
Table 11: NLP analysis: Number of datasets with Named Entities (out of 10,000 processed files in a reduced OAI-DC schema) per repository. Each file contains a subset of the original metadata, namely, dc:title, dc:description, dc:subject and dc:date.
| Pangaea | GBIF | Dryad | Zenodo | Figshare
---|---|---|---|---|---
Organism | N/A (pipeline aborted) | 1183 | 3603 | N/A (pipeline aborted) | 8542
Location | 9111 | 5642 | 3530 | 4641 | 9978
Person & Organization | 5048 | 7355 | 657 | 192 | 1645
## D - Discussion
In this study, we explored what hampers dataset retrieval in biodiversity
research. The following section summarizes our findings and outlines a
proposal on how to bridge the gap between search interests in biodiversity and
given metadata. We also highlight challenges that are not fully resolved yet.
### Research Contributions
#### Scholarly Search Interests in Biodiversity Research
In order to understand what biodiversity scholars are interested in, we
gathered 169 questions, identified biological entities and classified the
entities into 13 information categories. In the subsequent evaluation with
domain experts, five categories were verified and can be considered as
important information needs in biodiversity research. That includes
information about habitats, ecosystems, vegetation (ENVIRONMENT), chemical
compounds, sediments and rocks (MATERIAL), species (ORGANISM), biological and
chemical processes (PROCESS). Further categories being mentioned very often
are information about data parameters (QUALITY) and the nature or type of data
resources (DATA TYPE). Usually, the latter is an outcome of a certain research
method. However, the naming should be discussed in the research community, as
the comprehensibility of these categories was only rated as fair.
#### Comparison of Metadata Standards and User Interests
We selected 13 metadata standards used in the Life Sciences from _re3data_ ,
and we analyzed whether the elements of the metadata schemes reflect the
identified information categories.
Elements of general standards cover the categories only to some extent.
LOCATION and DATA TYPE are the only information that can be explicitly
described with metadata fields of general standards such as _DublinCore_ or
_DataCite_. Further elements focus on information less relevant for
search, such as data creator and contributor (PERSON), collection or
publication date (TIME), and license information. All this information is
important for data reuse and data citation and needs to be part of the
metadata. However, if the dataset is not findable, it cannot be cited. As a
general standard, _DataCite_ provides many more fields and attributes to
describe personal data, time and geographic information. Therefore, it should
be provided in addition to _DublinCore_.
There are numerous discipline-specific standards that describe search
interests quite well. For instance, _EML_, _DarwinCore_ and _ABCD_ provide
elements to describe environmental information, species, methods, and data
parameters. _ISA-Tab_, a framework for genome data and biological experiments,
covers all important search categories. The only drawback is that it takes
time for scholars and/or data curators to fill in all these fields. In some
standards, such as _ABCD_, more than 1000 elements are available. With our work,
we aim to provide insights on what scholars are actually interested in when
looking for scientific data. We believe that our results could serve as a good
start for discussions in the respective research communities to define core
elements of discipline-specific standards that are not only focused on data
citation but also broaden the view on search interests.
#### Metadata Analysis of Selected Data Repositories
We selected 5 repositories from _Nature’s_ list of recommended data archives
and analyzed the metadata provided in their OAI-PMH interfaces. We wanted to
know what metadata standards are used in common data repositories in the Life
Sciences and how many elements of the standard are actually filled.
We found that generalist repositories such as _Dryad_, _Zenodo_ and
_Figshare_ tend to use only general standards such as _DublinCore_ and
_DataCite_. Even when using simple standards, the repositories did not make
use of all provided elements. Furthermore, the elements utilized are not
always filled. That hampers successful data retrieval. Most repositories seem
to be aware of that problem and enhance metadata with numerous keywords in
generic fields such as dc:subject. Discipline-specific repositories, e.g.,
_GBIF_ and _PANGAEA_, are more likely to provide domain-related standards such
as _EML_ or _Pan-MD_. That supports improved filtering in search; however, it does not
guarantee that the fields are always filled. In _GBIF’s_ case, we are aware
that we could not provide a full picture as we did not analyze the occurrence
records. Here, only a deeper analysis of the provided fields in the search
index would deliver more answers. However, that would require technical staff
support as the access to search indices is limited.
### Suggestions to Bridge the Gap
In this subsection, we outline approaches to overcome the current obstacles in
dataset search applications based on our findings from the preceding sections.
Table 12 presents checklists for data repositories and scholars that in the
following are discussed in detail.
Table 12: Recommendations for data repositories and scholars to create rich metadata For Data Repositories | For Scholars
---|---
1.) Keep metadata diversity by offering domain-specific standards. | 1.) If available, select a domain-specific repository where possible (as recommended by _Nature_).
2.) Use metadata standards that include the information categories identified in Section A - Information Needs to cover potential search interests and make your data findable. | 2.) Check if appropriate, discipline-specific metadata standards are offered to describe your data.
3.) Extend existing standards where necessary, preferably get in touch with metadata standard consortia. | 3.) Fill in all applicable metadata fields and use appropriate terms (if possible from controlled vocabularies).
4.) Fill in all metadata fields. If possible use controlled vocabularies to describe the data, preferably Linked Open Data. | 4.) If your data is available in search, check if you can find it with various search terms.
5.) Enrich your metadata with entities from _schema.org_ or _bioschemas.org_. | 5.) Contact the data repository if you notice issues in your data description or presentation.
6.) In addition to explicit information, attempt to extract implicit information from metadata fields that contain longer textual resources, e.g., title, description and abstract. |
#### For Data Repositories
Adherence to the FAIR principles, long-term data preservation and the creation
of citable and reusable data are main targets of all data repositories.
Therefore, a strong focus of data archives is on generating unique
identifiers, linking the metadata to their primary data and publications. Less
considered is the perspective of dataset seekers. Hence, we propose the
following improvements to enhance dataset retrieval.
_Keep metadata diversity_: Scientific data are very heterogeneous. This
diversity cannot be reflected in one generic metadata standard. Thus, it is
highly recommended to use different domain-specific standards considering the
requirements of various research disciplines.
_Use proper metadata fields_ : If search interests are explicitly mentioned in
metadata, conventional search techniques are able to retrieve relevant
datasets. Providing possible search terms in generic keyword fields supports
dataset retrieval in a full-text search but does not allow proper category-
based facet creation. Therefore, using proper metadata fields covering
potential search interests greatly enhances dataset retrieval and filtering.
In addition, metadata need to have a unique identifier and should answer the
W-questions including information on how the data can be re-used. That
comprises information on data owner, contact details and citation information.
_Extend standards_: Metadata standards are developed and adopted by large
organizations or research communities for a specific purpose or research
fields. They also discuss extensions of new fields or changes of existing
elements. If the given fields are not sufficient for particular requirements,
the preferred way is to get in touch with the standard organization and to
propose new fields or attributes. However, since these processes usually take
a long time, it is sometimes unavoidable to extend a schema or to develop a
new schema. In these cases, it would be good scientific practice to give
feedback to the standard organization on why and how a schema has been changed
or extended. That might influence the further development of standards and would
counteract the creation of numerous repository-developed schemes.
_Use controlled vocabularies_ : The questions that still remain and that have
not been considered so far are how metadata fields are filled - by the data
submitter, the data repository or by the system - and whether a controlled
vocabulary is used for the keywords and the other metadata elements. When
describing scientific data it is highly recommended to use controlled
vocabularies or terminologies, in particular for important fields in search.
If possible, Linked Open Data [Heath and Bizer, 2011] vocabularies should be
utilized to better link datasets, publications, authors and other resources.
That supports data transparency, data findability and finally data reuse. In
the Life Sciences, there are a variety of terminology providers. We provide a
list of ontology and semantic service providers in our repository.
_Utilize schema.org_ : Driven by _Google_ and the Research Data Alliance (RDA)
Data Discovery Group, the enrichment of HTML with _schema.org_
(https://schema.org) entities became very popular in recent years. The
enrichment helps to identify unique identifiers, persons, locations or time
information in the HTML file. That helps external search engines or data
providers to crawl the landing pages of the search applications provided by
the data repositories per dataset. As the current schema.org entities do not
fully reflect scientific search interests, more attention should be paid to
initiatives such as _bioschemas.org_ (https://bioschemas.org/) that aim to
extend schema.org with biological entities such as species and genes. That
confirms and complements our recommendations for explicit metadata fields
tailored to search interests. At the time of writing this paper,
_bioschemas.org_ is still in draft mode. However, in the future, efforts like
this will improve dataset retrieval significantly.
_Extract implicit information_ : Apart from short information such as contact
details, data type or location, metadata usually contain longer textual
resources such as title, description and abstract. Most of them contain useful
information for search and mention species observed or describe environments
where data has been gathered. These resources could be used to extract
implicit information and to automatically identify further relevant data.
#### For Scholars
Documenting scientific data is a disliked task that also takes time.
Therefore, scholars attempt to minimize the effort of describing their data
and are pleased when data repositories do not offer too many fields to fill in
for data submission. However, scholars are responsible for properly documenting
their data so that other researchers are able to find and reuse it. Hence,
each scholar should carefully and thoroughly describe the produced research
data. Based on our findings, we summarize what should be considered when
submitting scientific data to a data repository.
_Prefer domain-specific repositories:_ As generalist repositories tend to
offer only general metadata standards for data description, preference should
be given to domain-specific data archives. This is also recommended by highly
influential journals such as _Nature_ [Nature, 2018]. Another advantage is
that repositories that are familiar with the research domain might give more
qualitative feedback on the submitted data descriptions.
_Use domain-specific metadata standards:_ Even when selecting a domain-
specific data repository, it does not guarantee that archives use proper
metadata standards. Scholars are advised to know at least a few appropriate
standards for their research field and to ask the repository whether one of
these standards is supported, if this is not stated anywhere.
_Fill in all relevant fields with controlled vocabularies:_ All relevant
metadata fields should be filled in. That enhances the chance that datasets
are retrieved. When describing the data, scholars should attempt to use
controlled vocabularies. As this is a new procedure in data submission, it is
currently not supported by all data repositories. However, if it is available,
it is recommended to use the terminologies given and not to describe the data
in one’s own words.
_Search for your data:_ Once the data is available in the repositories’ search
application, scholars are advised to check if they can find their data with
various search terms. They should also review whether the data are accessible
and all displayed information is correct. It is also recommended to repeat
this checking from time to time as repositories might update or extend data
presentations and/or metadata schemes used.
_Get in touch with the repository:_ If scholars notice any issues concerning
their data, they should contact the archive. The staff at the repositories are
probably grateful if attentive scholars give feedback on their submitted data
or detect issues that hamper dataset retrieval.
### Challenges
As stated in our summary of the question analysis, the outcomes in Section “A
- Information Needs in the Biodiversity Domain” are not a complete picture of
search interests but only serve as a start for discussions with biodiversity
researchers to further identify possible search categories.
Controlled vocabularies can only be used if appropriate terminologies exist.
This is not the case for all topics. While there are numerous vocabularies for
species, to the best of our knowledge, there is no vocabulary that allows the
description of research methods and results. Scientific data types are also
less considered in existing terminologies.
Another challenge lies in the automatic identification of relevant search
topics in metadata. The text mining community has already developed various
taggers and pipelines to extract organisms [Naderi et al., 2011b], chemistry
items [Cunningham et al., 2011] or genes [McDonald and Pereira, 2005] from
text. These annotations can support automatic facet or category creation.
However, for important categories such as habitats, data parameters,
biological and chemical processes or research methods, taggers are still
missing. In order to increase the semantic linkage of datasets and other
resources such as publications, authors and locations, it would be of great
benefit if the annotations also contained URIs to resources in controlled
vocabularies. Then, dataset retrieval could be expanded to semantically
related terms such as synonyms or more specific or broader terms.
An important factor, however, is not standards, systems or vocabularies, but
scholars themselves. Scholars need to be aware that thorough data descriptions
are part of good scientific practice. In order to preserve all kinds of
scientific data, independently of whether it has been used in publications or
not, proper metadata in appropriate schemes are the key to successful dataset
retrieval and thus to data citation and data reuse. Data repositories could
offer data curation services to support scholars in describing research data
and to encourage them to describe their data thoroughly. We are aware that it
would require high efforts to introduce more domain-specific metadata schemes
at generalist repositories; however, it would enhance dataset retrieval.
Computer science research can contribute to improvements for dataset search by
developing methods and software tools that facilitate standard-compliant
metadata provision ideally at the time of data collection, thus ensuring
metadata standards to be actually used by data providers.
## Conclusion
Scholarly search interests are as diverse as data are and can range from
specific information needs such as searches for soil samples collected in a
certain environment to broader research questions inspecting relationships
among species. Our findings reveal that these search interests are not
entirely reflected in existing metadata. One problem is posed by general
standards that are simple and mainly contain information supporting data
citation. Actual search interests can only be represented if keywords and
suitable search terms are entered in the general, non-specific fields
available in most standards, e.g., dc:subject in _DublinCore_. Most data repositories
utilize these fields to enrich metadata with suitable search terms. However,
if search interests are not explicitly given, facet creation, e.g., filtering
over species or habitats, is more difficult. Full-text searches only return
data if query terms match given keywords. On the other hand, even when
scholars submit their data to a domain-specific repository that uses
discipline-specific metadata standards, it does not guarantee that all search-
relevant fields will be filled.
Data findability, one of the four FAIR principles [Wilkinson et al., 2016], at
least partially relies on rich metadata descriptions reflecting scholarly
information needs. If the information scholars are interested in is not
available in metadata, the primary data cannot be retrieved, reused and
cited. In order to close this gap, we propose checklists for data archives and
scholars to overcome the current obstacles. We also highlight remaining
challenges. In our future work, we would like to focus on a machine-supported
extraction of relevant search categories in metadata as well as an automatic
filling of metadata fields from primary data. That will streamline the metadata
creation process and will support scholars and data repositories in producing
proper and rich metadata with semantic enrichment.
## Acknowledgments
We acknowledge the Collaborative Research Centre AquaDiva (CRC 1076 AquaDiva)
of the Friedrich Schiller University Jena and the GFBio project (KO2209/13-2),
both funded by the Deutsche Forschungsgemeinschaft (DFG). The authors would
also like to thank the annotators and reviewers for their time and valuable
comments.
## References
* [Abacha et al., 2017] Abacha, A. B., Agichtein, E., Pinter, Y., and Demner-Fushman, D. (2017). Overview of the medical question answering task at trec 2017 liveqa. Technical report, TREC LiveQA 2017.
* [Alexander et al., 2011] Alexander, K., Cyganiak, R., Hausenblas, M., and Zhao, J. (2011). Describing Linked Datasets with the VoID Vocabulary. https://www.w3.org/TR/void/, accessed on 24.01.2019.
* [Ananiadou et al., 2004] Ananiadou, S., Friedman, C., and Tsujii, J. (2004). Introduction: named entity recognition in biomedicine. Journal of Biomedical Informatics, 37(6):393 – 395. Named Entity Recognition in Biomedicine.
* [AquaDiva, 2020] AquaDiva (2020). CRC AquaDiva. http://www.aquadiva.uni-jena.de/, accessed on 12.01.2020.
* [Ariño et al., 2013] Ariño, A. H., Chavan, V., and Faith, D. P. (2013). Assessment of user needs of primary biodiversity data: Analysis, concerns, and challenges. Biodiversity Informatics, 8(2).
* [Baeza-Yates and Ribeiro-Neto, 2008] Baeza-Yates, R. and Ribeiro-Neto, B. (2008). Modern Information Retrieval: The Concepts and Technology Behind Search. Addison-Wesley Publishing Company, USA, 2nd edition.
* [Brickley and Guha, 2014] Brickley, D. and Guha, R. (2014). Rdf schema 1.1. https://www.w3.org/TR/rdf-schema/, accessed on 30.11.2019.
* [Chamanara et al., 2017] Chamanara, J., König-Ries, B., and Jagadish, H. V. (2017). Quis: In-situ heterogeneous data source querying. Proc. VLDB Endow., 10(12):1877–1880.
* [Chapman et al., 2019] Chapman, A., Simperl, E., Koesten, L., Konstantinidis, G., Ibáñez, L.-D., Kacprzak, E., and Groth, P. (2019). Dataset search: a survey. The VLDB Journal.
* [Cook et al., 2012] Cook, B., Michener, W., Vieglais, D., Budden, A., and Koskela, R. (2012). Dataone: A distributed environmental and earth science data network supporting the full data life cycle. In EGU General Assembly 2012, held 22-27 April, 2012 in Vienna, Austria., p.11863.
* [Croft et al., 2009] Croft, B., Metzler, D., and Strohman, T. (2009). Search Engines: Information Retrieval in Practice. Addison-Wesley Publishing Company, USA, 1st edition.
* [Culina et al., 2018] Culina, A., Baglioni, M., Crowther, T. W., Visser, M. E., Woutersen-Windhouwer, S., and Manghi, P. (2018). Navigating the unfolding open data landscape in ecology and evolution. Nature Ecology & Evolution, 2(3):420–426.
* [Cunningham et al., 2002] Cunningham, H., Maynard, D., Bontcheva, K., and Tablan, V. (2002). GATE: A Framework and Graphical Development Environment for Robust NLP Tools and Applications. In Proceedings of the 40th Anniversary Meeting of the Association for Computational Linguistics (ACL’02).
* [Cunningham et al., 2011] Cunningham, H., Maynard, D., Bontcheva, K., Tablan, V., Aswani, N., Roberts, I., Gorrell, G., Funk, A., Roberts, A., Damljanovic, D., Heitz, T., Greenwood, M. A., Saggion, H., Petrak, J., Li, Y., and Peters, W. (2011). Text Processing with GATE (Version 6). University of Sheffield, Dept. of Computer Science.
* [DataONE, 2019a] DataONE (2019a). Indexer Documentation. https://github.com/DataONEorg/indexer_documentation, accessed on 20.11.2019.
* [DataONE, 2019b] DataONE (2019b). Quantifying fair: metadata improvement and guidance in the dataone repository network. https://www.dataone.org/webinars/quantifying-fair-metadata-improvement-and-guidance-dataone-repository-network.
* [Diepenbroek et al., 2014] Diepenbroek, M., Glöckner, F., Grobe, P., Güntsch, A., Huber, R., König-Ries, B., Kostadinov, I., Nieschulze, J., Seeger, B., Tolksdorf, R., and Triebel, D. (2014). Towards an integrated biodiversity and ecological research data management and archiving platform: GFBio. In Informatik 2014.
* [Dryad, 2019] Dryad (2019). https://datadryad.org/, accessed on 16th of May 2019.
* [Faith et al., 2013] Faith, D., Collen, B., Arturo, A., Koleff, P., Guinotte, J., Kerr, J., and Chavan, V. (2013). Bridging the biodiversity data gaps: Recommendations to meet users’ data needs. Biodiversity Informatics, 8(2).
* [Fenner et al., 2019] Fenner, M., Crosas, M., Grethe, J. S., Kennedy, D., Hermjakob, H., Rocca-Serra, P., Durand, G., Berjon, R., Karcher, S., Martone, M., and Clark, T. (2019). A data citation roadmap for scholarly data repositories. Scientific Data, 6(1):28.
* [Figshare, 2019] Figshare (2019). https://figshare.com/, accessed on 16th May 2019.
* [Fleiss, 1971] Fleiss, J. L. (1971). Measuring nominal scale agreement among many raters. Psychological Bulletin, 76(5):378–382.
* [Gaiji et al., 2013] Gaiji, S., Chavan, V., Ariño, A. H., Otegui, J., Hobern, D., Sood, R., and Robles, E. (2013). Content assessment of the primary biodiversity data published through gbif network: Status, challenges and potentials. Biodiversity Informatics, 8(2).
* [GBIF, 2018] GBIF (2018). GBIF Science Review 2018. Technical report, https://doi.org/10.15468/VA9B-3048, accessed on 20.02.2019.
* [GBIF, 2019] GBIF (2019). Search. http://api.gbif.org/v1/occurrence/search, accessed on 30.11.2019.
* [GBIF, 2020] GBIF (2020). Global Biodiversity Information Facility. https://www.gbif.org/, accessed on 12.01.2020.
* [GFBio, 2020] GFBio (2020). The German Federation for Biological Data. https://www.gfbio.org, accessed on 12.01.2020.
* [Google, 2019] Google (2019). https://developers.google.com/search/docs/guides/intro-structured-data, accessed on: 20.02.2019.
* [Gwet, 2008] Gwet, K. L. (2008). Computing inter-rater reliability and its variance in the presence of high agreement. British Journal of Mathematical and Statistical Psychology, 61(1):29–48.
* [Hasnain et al., 2017] Hasnain, A., Mehmood, Q., Sana e Zainab, S., Saleem, M., Warren, C., Zehra, D., Decker, S., and Rebholz-Schuhmann, D. (2017). Biofed: federated query processing over life sciences linked open data. Journal of Biomedical Semantics, 8(1):13.
* [Hearst, 2011] Hearst, M. A. (2011). Modern Information Retrieval, chapter User Interfaces and Visualization, pages 257–340. Addison-Wesley Publishing Company, USA, 2nd edition.
* [Heath and Bizer, 2011] Heath, T. and Bizer, C. (2011). Linked data: Evolving the web into a global data space. Synthesis Lectures on the Semantic Web: Theory and Technology, 1(1):1–136.
* [Hersh and Voorhees, 2009] Hersh, W. and Voorhees, E. (2009). Trec genomics special issue overview. Information Retrieval, 12(1):1–15.
* [idiv, 2019] idiv (2019). German Centre for Integrative Biodiversity Research (iDiv) Halle-Jena-Leipzig. https://www.idiv.de, accessed on 11.04.2019.
* [Islamaj Dogan et al., 2009] Islamaj Dogan, R., Murray, G. C., Névéol, A., and Lu, Z. (2009). Understanding PubMed® user search behavior through log analysis. Database, 2009.
* [Järvelin and Kekäläinen, 2002] Järvelin, K. and Kekäläinen, J. (2002). Cumulated Gain-based Evaluation of IR Techniques. ACM Trans. Inf. Syst., 20(4):422–446.
* [Jurafsky and Martin, 2000] Jurafsky, D. and Martin, J. H. (2000). Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition. Prentice Hall PTR, Upper Saddle River, NJ, USA, 1st edition.
* [Jurafsky and Martin, 2008] Jurafsky, D. and Martin, J. H. (2008). Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition. Prentice Hall PTR, Upper Saddle River, NJ, USA, 1st edition.
* [Kacprzak et al., 2018] Kacprzak, E., Koesten, L., Ibáñez, L.-D., Blount, T., Tennison, J., and Simperl, E. (2018). Characterising dataset search—an analysis of search logs and data requests. Journal of Web Semantics.
* [Karam et al., 2016] Karam, N., Müller-Birn, C., Gleisberg, M., Fichtmüller, D., Tolksdorf, R., and Güntsch, A. (2016). A terminology service supporting semantic annotation, integration, discovery and analysis of interdisciplinary research data. Datenbank-Spektrum, 16(3):195–205.
* [Kilicoglu et al., 2018] Kilicoglu, H., Ben Abacha, A., Mrabet, Y., Shooshan, S. E., Rodriguez, L., Masterton, K., and Demner-Fushman, D. (2018). Semantic annotation of consumer health questions. BMC Bioinformatics, 19(1):34.
* [Kunze and Auer, 2013] Kunze, S. R. and Auer, S. (2013). Dataset retrieval. In Proceedings of the 2013 IEEE Seventh International Conference on Semantic Computing, ICSC ’13, pages 1–8, Washington, DC, USA. IEEE Computer Society.
* [Landis and Koch, 1977] Landis, J. R. and Koch, G. G. (1977). The measurement of observer agreement for categorical data. Biometrics, 33(1):159–174.
* [Löffler et al., 2017] Löffler, F., Opasjumruskit, K., Karam, N., Fichtmüller, D., Schindler, U., Klan, F., Müller-Birn, C., and Diepenbroek, M. (2017). Honey Bee Versus Apis Mellifera: A Semantic Search for Biological Data, pages 98–103. Springer International Publishing, Cham.
* [Löffler et al., 2017] Löffler, F., Pfaff, C.-T., Karam, N., Fichtmüller, D., and Klan, F. (2017). What do biodiversity scholars search for? identifying high-level entities for biological metadata. In Algergawy, A., Karam, N., Klan, F., and Jonquet, C., editors, Proceedings of the 2nd Semantics for Biodiversity Workshop held in conjunction with ISWC2017, Vienna, Austria. October 22nd, 2017.
* [Data Citation Synthesis Group, 2014] Data Citation Synthesis Group (2014). Joint Declaration of Data Citation Principles. Martone, M. (ed.). San Diego, CA: FORCE11. https://doi.org/10.25490/a97f-egyk.
* [Maali and Erickson, 2014] Maali, F. and Erickson, J. (2014). Data Catalog Vocabulary (DCAT). https://www.w3.org/TR/vocab-dcat/, accessed on 01/24/2019.
* [Manning et al., 2008] Manning, C. D., Raghavan, P., and Schütze, H. (2008). Introduction to Information Retrieval. Cambridge University Press, New York, NY, USA.
* [McDonald and Pereira, 2005] McDonald, R. and Pereira, F. (2005). Identifying gene and protein mentions in text using conditional random fields. BMC Bioinformatics, 6(1):S6.
* [Michel and Community, 2018] Michel, F. and Community, T. B. (2018). Bioschemas & schema.org: a lightweight semantic layer for life sciences websites. Biodiversity Information Science and Standards, 2:e25836.
* [Naderi et al., 2011a] Naderi, N., Kappler, T., Baker, C. J. O., and Witte, R. (2011a). Organismtagger. Bioinformatics, 27(19):2721–2729.
* [Naderi et al., 2011b] Naderi, N., Kappler, T., Baker, C. J. O., and Witte, R. (2011b). OrganismTagger: detection, normalization and grounding of organism entities in biomedical documents. Bioinformatics, 27(19):2721–2729.
* [Nature, 2018] Nature (2018). Scientific Data, Recommended Data Repositories. https://www.nature.com/sdata/policies/repositories, access date: 18.12.2018.
* [Nentidis et al., 2017] Nentidis, A., Bougiatiotis, K., Krithara, A., Paliouras, G., and Kakadiaris, I. (2017). Results of the fifth edition of the bioasq challenge. In BioNLP 2017, pages 48–57, Vancouver, Canada,. Association for Computational Linguistics.
* [Pangaea, 2019a] Pangaea (2019a). Data Publisher for Earth & Environmental Science. https://www.pangaea.de/, accessed on 30.11.2019.
* [Pangaea, 2019b] Pangaea (2019b). Search. http://ws.pangaea.de/es/portals/pansimple/_search, accessed on 30.11.2019.
* [Parker et al., 2016] Parker, T. H., Forstmeier, W., Koricheva, J., Fidler, F., Hadfield, J. D., Chee, Y. E., Kelly, C. D., Gurevitch, J., and Nakagawa, S. (2016). Transparency in ecology and evolution: Real problems, real solutions. Trends in Ecology & Evolution, 31(9):711 – 719.
* [Pfaff et al., 2017] Pfaff, C.-T., Eichenberg, D., Liebergesell, M., König-Ries, B., and Wirth, C. (2017). Essential annotation schema for ecology (ease)—a framework supporting the efficient data annotation and faceted navigation in ecology. PLOS ONE, 12(10):1–13.
* [Polychronopoulos et al., 2013] Polychronopoulos, D., Almirantis, Y., Krithara, A., and Paliouras, G. (2013). Expert team. Project deliverable D3.1.
* [PubMed, 2019] PubMed (2019). US National Library of Medicine National Institutes of Health. https://www.ncbi.nlm.nih.gov/pubmed/, accessed on 30.11.2019.
* [Quarfoot and Levine, 2016] Quarfoot, D. and Levine, R. A. (2016). How robust are multirater interrater reliability indices to changes in frequency distribution? The American Statistician, 70(4):373–384.
* [Ramakers et al., 2018] Ramakers, J., Culina, A., Visser, M., and Gienapp, P. (2018). Environmental coupling of heritability and selection is rare and of minor evolutionary significance in wild populations. Nature Ecology & Evolution, 2.
* [RDA, 2019] RDA (2019). Data Discovery Interest Group. https://www.rd-alliance.org/groups/data-discovery-paradigms-ig, accessed on: 20.2.2019.
* [re3data, 2018] re3data (2018). https://www.re3data.org, accessed on 21.11.2018.
* [Roberts et al., 2017] Roberts, K., Gururaj, A. E., Chen, X., Pournejati, S., Hersh, W. R., Demner-Fushman, D., Ohno-Machado, L., Cohen, T., and Xu, H. (2017). Information retrieval for biomedical datasets: the 2016 bioCADDIE dataset retrieval challenge. Database, 2017.
* [Khalsa et al., 2018] Khalsa, S., Cotroneo, P., and Wu, M. (2018). A survey of current practices in data search services. Technical report, Research Data Alliance (RDA) Data Discovery Paradigms Interest Group. doi:10.17632/7j43z6n22z.1.
* [Taylor et al., 2008] Taylor, C. F., Field, D., Sansone, S.-A., Aerts, J., Apweiler, R., Ashburner, M., Ball, C. A., Binz, P.-A., Bogue, M., Booth, T., Brazma, A., Brinkman, R. R., Michael Clark, A., Deutsch, E. W., Fiehn, O., Fostel, J., Ghazal, P., Gibson, F., Gray, T., Grimes, G., Hancock, J. M., Hardy, N. W., Hermjakob, H., Julian Jr, R. K., Kane, M., Kettner, C., Kinsinger, C., Kolker, E., Kuiper, M., Novère, N. L., Leebens-Mack, J., Lewis, S. E., Lord, P., Mallon, A.-M., Marthandan, N., Masuya, H., McNally, R., Mehrle, A., Morrison, N., Orchard, S., Quackenbush, J., Reecy, J. M., Robertson, D. G., Rocca-Serra, P., Rodriguez, H., Rosenfelder, H., Santoyo-Lopez, J., Scheuermann, R. H., Schober, D., Smith, B., Snape, J., Stoeckert Jr, C. J., Tipton, K., Sterk, P., Untergasser, A., Vandesompele, J., and Wiemann, S. (2008). Promoting coherent minimum reporting guidelines for biological and biomedical investigations: the MIBBI project. Nature Biotechnology, 26:889–896. https://doi.org/10.1038/nbt.1411.
* [Thessen et al., 2012] Thessen, A. E., Cui, H., and Mozzherin, D. (2012). Applications of natural language processing in biodiversity science. Advances in Bioinformatics, 2012(Article ID 391574):17 pages.
* [Unger et al., 2014] Unger, C., Freitas, A., and Cimiano, P. (2014). An Introduction to Question Answering over Linked Data, pages 100–140. Springer International Publishing, Cham.
* [W3C, 2012] W3C (2012). OWL Working Group, OWL 2 Web Ontology Language. https://www.w3.org/TR/owl2-overview/, accessed on 12.11.2019.
* [Wilkinson et al., 2016] Wilkinson, M. D., Dumontier, M., Aalbersberg, I. J., Appleton, G., Axton, M., Baak, A., Blomberg, N., Boiten, J.-W., da Silva Santos, L. B., Bourne, P. E., Bouwman, J., Brookes, A. J., Clark, T., Crosas, M., Dillo, I., Dumon, O., Edmunds, S., Evelo, C. T., Finkers, R., Gonzalez-Beltran, A., Gray, A. J., Groth, P., Goble, C., Grethe, J. S., Heringa, J., ’t Hoen, P. A., Hooft, R., Kuhn, T., Kok, R., Kok, J., Lusher, S. J., Martone, M. E., Mons, A., Packer, A. L., Persson, B., Rocca-Serra, P., Roos, M., van Schaik, R., Sansone, S.-A., Schultes, E., Sengstag, T., Slater, T., Strawn, G., Swertz, M. A., Thompson, M., van der Lei, J., van Mulligen, E., Velterop, J., Waagmeester, A., Wittenburg, P., Wolstencroft, K., Zhao, J., and Mons, B. (2016). The fair guiding principles for scientific data management and stewardship. Scientific Data 3, (160018).
* [Zenodo, 2019a] Zenodo (2019a). https://zenodo.org/, accessed on 16.05.2019.
* [Zenodo, 2019b] Zenodo (2019b). Search. https://zenodo.org/api/records, accessed on 30.11.2019.
# Quantum-heat fluctuation relations in $3$-level systems under projective measurements
G. Giachetti: SISSA, Via Bonomea 265, I-34136 Trieste, Italy; INFN, Sezione di Trieste, I-34151 Trieste, Italy
S. Gherardini: SISSA, Via Bonomea 265, I-34136 Trieste, Italy; Department of Physics and Astronomy & LENS, University of Florence, via G. Sansone 1, I-50019 Sesto Fiorentino, Italy
A. Trombettoni: Department of Physics, University of Trieste, Strada Costiera 11, I-34151 Trieste, Italy; CNR-IOM DEMOCRITOS Simulation Center and SISSA, Via Bonomea 265, I-34136 Trieste, Italy
S. Ruffo: SISSA, Via Bonomea 265, I-34136 Trieste, Italy; INFN, Sezione di Trieste, I-34151 Trieste, Italy; Istituto dei Sistemi Complessi, Consiglio Nazionale delle Ricerche, via Madonna del Piano 10, I-50019 Sesto Fiorentino, Italy
###### Abstract
We study the statistics of energy fluctuations in a three-level quantum system
subject to a sequence of projective quantum measurements. We check that, as
expected, the quantum Jarzynski equality holds provided that the initial state
is thermal. The latter condition is trivially satisfied for two-level systems,
while this is generally no longer true for $N$-level systems, with $N>2$.
Focusing on three-level systems, we discuss the occurrence of a unique energy
scale factor $\beta_{\rm eff}$ that formally plays the role of an effective
inverse temperature in the Jarzynski equality. To this aim, we introduce a
suitable parametrization of the initial state in terms of a thermal and a non-
thermal component. We determine the value of $\beta_{\rm eff}$ for a large
number of measurements and study its dependence on the initial state. Our
predictions could be checked experimentally in quantum optics.
## I Introduction
Fluctuation theorems relate fluctuations of thermodynamic quantities of a
given system to equilibrium properties evaluated at the steady state
Esposito2009 ; Campisi2011 ; SeifertRPP2012 ; DeffnerBook2019 . This statement
finds fulfillment in the Jarzynski equality, whose validity has been
extensively theoretically and experimentally discussed in the last two decades
for both classical and quantum systems JarzynskiPRL1997 ; CrooksPRE1999 ;
CollinNature2005 ; ToyabeNatPhys2010 ; Kafri2012 ; Albash13PRE88 ;
Rastegin13JSTAT13 ; Sagawa2014 ; AnNatPhys2015 ; BatalhaoPRL2014 ;
CerisolaNatComm2017 ; BartolottaPRX2018 ; Hernandez2019 .
The evaluation of the relevant work originated by using a coherent modulation
of the system Hamiltonian has been the subject of intense investigation
TalknerPRE2007 ; CampisiPRL2009 ; MazzolaPRL2013 ; AllahverdyanPRE2014 ;
TalknerPRE2016 ; JaramilloPRE2017 ; DengENTROPY2017 . A special focus was
devoted to the study of heat and entropy production, the validity of the
second law of thermodynamics, the interaction with one or more external
bodies, and/or the inclusion of an observer JarzynskiPRL2004 ; Campisi15NJP17 ;
Campisi17NJP19 ; BatalhaoPRL ; Gherardini_entropy ; ManzanoPRX ;
Irreversibility_chapter ; SantosnpjQI2019 ; KwonPRX2019 ; RodriguesPRL2019 .
In this respect, the energy variation and the emission/absorption of heat
induced, and sometimes enhanced, by the application of a sequence of quantum
measurements was studied Campisi2010PRL ; CampisiPRE2011 ; Yi2013 ;
WatanabePRE2014 ; HekkingPRL2013 ; AlonsoPRL2016 ; GherardiniPRE2018 ;
Hernandez2019 . In such a case, the term quantum heat has been used
Elouard2016 ; GherardiniPRE2018 ; we will employ it as well in the following
to refer to the fact that the fluctuations of energy exchange are induced by
quantum projective measurements performed during the time evolution of the
system. As recently discussed in Ref. Hernandez2019 , the information about
the fluctuations of energy exchanges between a quantum system and an external
environment may be enclosed in an energy scaling parameter that only depends
on the initial and the asymptotic (for long times) quantum states resulting
from the system dynamics.
In Ref. GherardiniPRE2018 , the effect of stochastic fluctuations on the
distribution of the energy exchanged by a quantum two-level system with an
external environment under sequences of quantum measurements was characterized
and the corresponding quantum-heat probability density function was derived.
It has been shown that, when a stochastic protocol of measurements is applied,
the quantum Jarzynski equality is obeyed. In this way, the quantum-heat
transfer was characterized for two-level systems subject to projective
measurements. Two-level systems have the property that a density matrix in the
energy basis (as the one obtained after a measurement of the Hamiltonian
operator TalknerPRE2007 ) can be always written as a thermal state and,
therefore, the Jarzynski equality has a $1$ on its right-hand side. Therefore,
a natural issue to be investigated is the study of quantum-heat fluctuation
relations for $N$-level systems, e.g., $N=3$, where this property of the
initial state of a two-points measurement scheme of being thermal is no longer
valid. So, it would be desirable, particularly for the case of a large number
of quantum measurements, to study the properties of the characteristic
function of the quantum heat when initial states cannot be written as thermal
states.
With the goal of characterizing the effects of having arbitrary initial
conditions, in this paper, we study quantum systems described by finite
dimensional Hilbert spaces, focusing on the case of three-level systems. We
observe that finite-level quantum systems may present peculiar features with
respect to continuum systems. As shown in Ref. JaramilloPRE2017 , even when
the quantum Jarzynski equality holds and the average of the exponential of the
work equals the free energy difference, the variance of the energy difference
may diverge for continuum systems, an exception being provided by finite-level
quantum systems.
In this paper we analyze, using numerical simulations, (i) the distribution of
the quantum heat generated by a three-level system under a sequence of $M$
projective measurements in the limit of large $M$, and (ii) the behavior of
an energy parameter $\beta_{\rm eff}$, such that the Jarzynski equality has
$1$ on its right-hand side, always in the limit of a large $M$. We also
discuss the dependence of $\beta_{\rm eff}$ on the initial state, before the
application of the sequence of measurements.
## II The Protocol
Let us consider a quantum system described by a finite dimensional Hilbert
space. We denote with $H$ the time-independent Hamiltonian of the system that
admits the spectral decomposition
$H=\sum^{N}_{k=1}E_{k}\lvert E_{k}\rangle\\!\langle E_{k}\rvert\ ,$ (1)
where $N$ is the dimension of the Hilbert space. We assume that no
degeneracy occurs in the spectrum of $H$.
At time $t=0^{-}$, just before the first measurement of $H$ is performed, the
system is supposed to be in an arbitrary quantum state described by the
density matrix $\rho_{0}$ such that $[H,\rho_{0}]=0$. This allows us to write
$\rho_{0}=\sum^{N}_{k=1}c_{k}\lvert E_{k}\rangle\\!\langle E_{k}\rvert\ ,$ (2)
where $1\geq c_{k}\geq 0\ \forall k=1,\dots,N$ and $\sum^{N}_{k}c_{k}=1$.
Then, we assume that the fluctuations of the energy variations, induced by a
given transformation of the state of the system, are evaluated by means of the
so-called two-point measurement (TPM) scheme TalknerPRE2007 . According to
this scheme, a quantum projective measurement of the system Hamiltonian is
performed both at the initial and the final times of the transformation. This
hypothesis justifies the initialization of the system in a mixed state, as
given in Equation (2). After the first projective energy measurement,
performed at time $t=0^{+}$, the system is in one of the states $\rho_{n}=\lvert
E_{n}\rangle\\!\langle E_{n}\rvert$ with probability $p_{n}=\langle
E_{n}\rvert\rho_{0}\lvert E_{n}\rangle$, while the system energy is $E_{n}$.
Afterwards, we suppose that the system $S$ is subject to a number $M$ of
consecutive projective measurements of the generic observable
$O=\sum^{N}_{k=1}\Omega_{k}\lvert\Omega_{k}\rangle\\!\langle\Omega_{k}\rvert\
,$ (3)
where $\Omega_{k}$ and $\lvert\Omega_{k}\rangle$ denote, respectively, the
outcomes and the eigenstates of $O$. According to the postulates of quantum
mechanics, the state of the system after one of these projective measurements
is given by one of the projectors
$\lvert\Omega_{n}\rangle\\!\langle\Omega_{n}\rvert$. Between two consecutive
measurements, the system evolves with the unitary dynamics generated by $H$,
i.e., $U(\tau_{i})=e^{-iH\tau_{i}}$, where $\hbar$ has been set to unity and
the waiting time $\tau_{i}$ is the time difference between the
$(i-1)^{\text{th}}$ and the $i^{\text{th}}$ measurement of $O$.
In general, the waiting times $\tau_{i}$ can be random variables, and the
sequence $(\tau_{1},\ldots,\tau_{M})$ is distributed according to the joint
probability density function $p(\tau_{1},\ldots,\tau_{M})$. The last (i.e., the
$M^{\text{th}}$) measurement of $O$ is immediately followed by a second
projective measurement of the energy, as prescribed by the TPM scheme. By
denoting with $E_{m}$ the outcome resulting from the second energy measurement
of the scheme, the final state of the system is $\rho_{m}=\lvert
E_{m}\rangle\\!\langle E_{m}\rvert$ and the quantum heat $Q$ exchanged during
the transformation is thus given by
$Q=E_{m}-E_{n}\ .$ (4)
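To make the protocol concrete, the following minimal Python sketch (our own illustration; the function name `sample_heat` and its arguments are ours, not part of the original work) draws one realization of the quantum heat $Q$ for an $N$-level system, working directly in the energy eigenbasis, where $U(\tau)$ is diagonal.

```python
import numpy as np

def sample_heat(E, W, c, M, tau, rng):
    """One realization of the protocol, in the energy eigenbasis.
    E: energies E_k; W: unitary whose columns are the eigenstates
    |Omega_k> of O written in the energy basis; c: populations c_k of
    rho_0; M projective measurements of O, separated by a waiting time
    tau (hbar = 1). Returns the quantum heat Q = E_m - E_n."""
    E = np.asarray(E, dtype=float)
    N = len(E)
    n = rng.choice(N, p=c)               # first energy measurement of the TPM
    psi = np.zeros(N, dtype=complex)
    psi[n] = 1.0                         # collapsed state |E_n>
    phases = np.exp(-1j * E * tau)       # U(tau) is diagonal in this basis
    for _ in range(M):
        psi = phases * psi               # unitary evolution between measurements
        p = np.abs(W.conj().T @ psi)**2  # Born probabilities for the outcomes of O
        k = rng.choice(N, p=p / p.sum())
        psi = W[:, k].astype(complex)    # projection onto |Omega_k>
    p = np.abs(psi)**2                   # second energy measurement of the TPM
    m = rng.choice(N, p=p / p.sum())
    return E[m] - E[n]
```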
As $Q$ is a random variable, one can define the characteristic function
$G(\epsilon)\equiv\left\langle e^{-\epsilon
Q}\right\rangle=\sum_{m,n}p_{m|n}p_{n}e^{-\epsilon(E_{m}-E_{n})}\ ,$ (5)
where $p_{m|n}$ denotes the probability of obtaining $E_{m}$ at the end of the
protocol conditioned to have measured $E_{n}$ at the first energy measurement
of the TPM scheme. If the initial state is thermal, i.e., $\rho_{0}=e^{-\beta
H}/Z$, then one recovers the Jarzynski equality stating that
$G(\beta)=\left\langle e^{-\beta Q}\right\rangle=1\ .$ (6)
Let us notice that $G(\epsilon)$ is a convex function such that $G(0)=1$ and
$G(\pm\infty)\rightarrow+\infty$, as discussed in Ref. Hernandez2019 . Hence,
as long as $\frac{\partial G}{\partial\epsilon}(0)\neq 0$, one can
unambiguously introduce the parameter $\beta_{\rm eff}\neq 0$ defined by the
relation
$G(\beta_{\rm eff})=1\,,$ (7)
which formally plays the role of an effective inverse temperature. Focusing on
three-level systems, in the following, we will study the characteristic
function $G(\epsilon)$ and the properties of such a parameter $\beta_{\rm
eff}$. For comparison, we first pause in the next subsection to discuss what
happens for two-level systems.
### II.1 Intermezzo on Two-Level Quantum Systems
We pause here to remind the reader of the results for two-level systems. The
state of any two-level system, diagonal on the Hamiltonian basis, is a thermal
state for some value of $\beta$. Of course, if the state is thermal, the value
of $\beta_{\rm eff}$ trivially coincides with $\beta$. In particular, in Ref.
GherardiniPRE2018 , the energy exchanged between a two-level quantum system
and a measurement apparatus was analyzed, with the assumption that the
repeated interaction with the measurement device can be reliably modeled by a
sequence of projective measurements occurring instantly and at random times.
Numerically, it has been observed that, as compared with the case of
measurements occurring at fixed times, the two-level system exchanges more
energy in the presence of randomness when the average time between consecutive
measurements is sufficiently small in comparison with the inverse resonance
frequency. However, the quantum-heat Jarzynski equality, related to the
equilibrium properties of the transformation applied to the system, is still
obeyed, both when the waiting times between consecutive measurements are
randomly distributed and for each single random realization. These results are
theoretically supported by the fact that, in the analyzed case, the dynamical
evolution of the quantum system is unital Rastegin13JSTAT13 ; Sagawa2014 . A
discussion on the values of the parameter $\beta_{\rm eff}$, extracted from
experimental data for nitrogen-vacancy (NV) centers in diamonds subject to
projective measurements in a regime where an effective two-level approximation
is valid, was recently presented in Ref. Hernandez2019 .
In Figure 1, we plot the quantum-heat characteristic function $\langle
e^{-\beta Q}\rangle$ as a function of the parameter $c_{1}$ that appears in
the decomposition of the initial state $\rho_{0}$ with respect to the energy
eigenstates $|E_{1}\rangle$ and $|E_{2}\rangle$,
$\rho_{0}=c_{1}\lvert E_{1}\rangle\\!\langle E_{1}\rvert+c_{2}\lvert
E_{2}\rangle\\!\langle E_{2}\rvert\ ,$ (8)
where $c_{2}=1-c_{1}$. The function is plotted for three values of the
parameter $|a|^{2}$, used to parametrize the eigenstates
$\\{|\Omega_{1}\rangle,|\Omega_{2}\rangle\\}$ of $O$ as a function of the
energy eigenstates of the system, i.e.,
$|\Omega_{1}\rangle=a\lvert E_{1}\rangle-b\lvert
E_{2}\rangle\,\,\,\,\,\text{and}\,\,\,\,\,|\Omega_{2}\rangle=b\lvert
E_{1}\rangle+a\lvert E_{2}\rangle\ ,$ (9)
with $|a|^{2}+|b|^{2}=1$ and $a^{\ast}b=ab^{\ast}$. As a result, one can
observe that $\langle e^{-\beta Q}\rangle=1$ for the value of $c_{1}$ ensuring
that $\rho_{0}=e^{-\beta H}/Z$. Further details can be found in Ref.
GherardiniPRE2018 .
Figure 1: Quantum-heat characteristic function $\langle e^{-\beta Q}\rangle$
for a two-level quantum system as a function of $c_{1}$ in Equation (8) for
three values of $|a|^{2}$, which characterizes the initial state. The function
is obtained from numerical simulations performed for a system with a
Hamiltonian $H=J(\lvert 0\rangle\\!\langle 1\rvert+\lvert 1\rangle\\!\langle
0\rvert)$ subject to a sequence of $M=5$ projective measurements. The latter
are separated by a fixed waiting time $\tau=0.5$, averaged over $2000$
realizations, with $E_{1,2}=\pm 1$ and $\beta=3/2$ (notice that the same
values of $|a|^{2}$ are used in Fig. 1 of GherardiniPRE2018 and in the
corresponding caption the value of $\beta$ should read $\beta=3/2$). Units are
used with $\hbar=1$ and $J=1$.
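As a cross-check of Figure 1, the parametrization (8)-(9) can be fed into the `sample_heat` sketch given in Section II (the numbers below follow the caption of Figure 1; this is our own illustration, not the original simulation code):

```python
import numpy as np

# Reuses sample_heat from the sketch in Section II.
rng = np.random.default_rng(1)
E = [-1.0, 1.0]                  # E_{1,2} = -/+ 1 (hbar = J = 1)
a = np.sqrt(0.5)                 # |a|^2 = 0.5 in Equation (9)
b = np.sqrt(1.0 - a**2)
W = np.array([[a, b], [-b, a]])  # columns: |Omega_1>, |Omega_2>
beta, M, tau = 1.5, 5, 0.5       # beta = 3/2, M = 5, tau = 0.5
for c1 in (0.2, 0.5, 0.8):
    Q = [sample_heat(E, W, [c1, 1.0 - c1], M, tau, rng) for _ in range(2000)]
    print(c1, np.mean(np.exp(-beta * np.array(Q))))
```

The printed average should cross 1 at the value of $c_{1}$ for which $\rho_{0}$ is thermal, i.e., $c_{1}=e^{-\beta E_{1}}/Z\approx 0.95$ for these parameters.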
The $N>2$ case is trickier: Since, in general, the initial state is no longer
thermal, it is not trivial to determine the value of $\beta_{\rm eff}$, and
its dependence on the initial condition is interesting to investigate. In this
regard, in the following, we will numerically address the $N=3$ case by
providing results on the asymptotic behavior of the system in the limit $M\gg
1$. For the sake of simplicity, from here on, we assume that the values of the
waiting times $\tau_{i}$ are fixed for any $i=1,\ldots,M$. However, our
findings turn out to be the same for any choice of the marginal probability
distribution functions $p(\tau_{i})$, up to “pathological” cases such as
$p(\tau)=\delta(\tau)$. So, having random waiting times does not significantly
alter the scenario emerging from the presented results.
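Operationally, $\beta_{\rm eff}$ can be extracted from a set of sampled heat values (e.g., those produced by the `sample_heat` sketch above) by estimating $G(\epsilon)$ via Monte Carlo and locating the nontrivial root of $G(\epsilon)=1$; the bracketing interval below is an assumption to be adapted to the energy scales of the problem.

```python
import numpy as np
from scipy.optimize import brentq

def G_estimate(eps, Q_samples):
    # Monte Carlo estimate of the characteristic function <exp(-eps * Q)>
    return np.mean(np.exp(-eps * np.asarray(Q_samples)))

def extract_beta_eff(Q_samples, lo=1e-6, hi=10.0):
    """Nontrivial root of G(eps) = 1. G is convex with G(0) = 1, so a
    second root exists whenever G'(0) != 0; it is assumed here to lie
    in (lo, hi) -- use a negative bracket if G'(0) > 0."""
    return brentq(lambda e: G_estimate(e, Q_samples) - 1.0, lo, hi)
```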
## III Parametrization of the Initial State
In this section, we introduce a parametrization of the initial state
$\rho_{0}$ for the $N=3$ case, which will be useful for a thermodynamic
analysis of the system.
As previously discussed, for a two-level quantum system in a mixed state, as
given by Equation (2), it is always formally possible to define a temperature.
In particular, one can implicitly find an effective inverse temperature,
making $\rho_{0}$ a thermal state, by solving the following equation for
$\beta\in\mathbb{R}$
$e^{-\beta(E_{2}-E_{1})}=\frac{c_{2}}{c_{1}}\ .$ (10)
In the $N=3$ case, however, we have two independent parameters in (2) (the
third parameter is fixed by the condition $\text{Tr}[\rho_{0}]=1$), and
thus, in general, it is no longer possible to formally define a single
temperature for the state. Here, we propose the following parametrization of
$c_{1},c_{2},c_{3}$ that generalizes the one of Equation (10).
We denote as _partial_ effective temperatures the three parameters
$b_{1},b_{2},b_{3}$, defined through the ratios of $c_{1}$, $c_{2}$, and
$c_{3}$
$\frac{c_{2}}{c_{1}}=e^{-b_{1}(E_{2}-E_{1})},\hskip
28.45274pt\frac{c_{3}}{c_{2}}=e^{-b_{2}(E_{3}-E_{2})},\hskip
28.45274pt\frac{c_{1}}{c_{3}}=e^{-b_{3}(E_{1}-E_{3})}\ ,$ (11)
such that, for a thermal state, $b_{k}=\beta$, $\forall k=1,2,3$. The three
parameters are not independent, as they are constrained by the relation
$\frac{c_{2}}{c_{1}}\frac{c_{3}}{c_{2}}\frac{c_{1}}{c_{3}}=1\ ,$ (12)
which gives in turn the following equality
$b_{1}(E_{2}-E_{1})+b_{2}(E_{3}-E_{2})+b_{3}(E_{1}-E_{3})=0\ .$ (13)
By introducing $\Delta_{1}=E_{2}-E_{1}$, $\Delta_{2}=E_{3}-E_{2}$, and
$\Delta_{3}=E_{1}-E_{3}$, Equation (13) can be written as
$\sum^{3}_{k=1}b_{k}\Delta_{k}=0\ ,$ (14)
where by definition
$\sum^{3}_{k=1}\Delta_{k}=0\ .$ (15)
Thus, as expected, the thermal state is a solution of the condition (14) for
any choice of $E_{1},E_{2},E_{3}$. This also has a geometric interpretation.
In the space of the $\Delta_{k}$, $k=1,2,3$, Equation (14) becomes an
orthogonality condition between the $\Delta_{k}$ and the $b_{k}$ vectors,
while (15) defines a plane that is orthogonal to the vector $(1,1,1)$. When
$b_{k}$ is proportional to $(1,1,1)$, the orthogonality condition is
automatically satisfied and one finds a thermal state. This suggests that, in
general, one can conveniently parametrize $b_{k}$ in terms of both the
components that are orthogonal and parallel to the plane
$\sum^{3}_{k=1}\Delta_{k}=0$. Such terms have the physical meaning of the
thermal and non-thermal components of the initial state. Formally, this means
that we can parametrize each $b_{k}$ through the fictitious inverse
temperature $\beta$ and a deviation $\alpha$, i.e.,
$(b_{1},b_{2},b_{3})=\beta(1,1,1)+\frac{\alpha}{v}(\Delta_{3}-\Delta_{2},\,\Delta_{1}-\Delta_{3},\,\Delta_{2}-\Delta_{1})\
,$ (16)
where $v$ acts as a normalization constant
$v^{2}=3\left(\Delta_{1}^{2}+\Delta_{2}^{2}+\Delta_{3}^{2}\right)\ .$ (17)
Hence, taking into account the normalization constraint, the coefficients
$c_{k}$ are given by
$c_{1}=\frac{1}{1+e^{-b_{1}\Delta_{1}}+e^{b_{3}\Delta_{3}}},\hskip
28.45274ptc_{2}=\frac{1}{1+e^{-b_{2}\Delta_{2}}+e^{b_{1}\Delta_{1}}},\hskip
28.45274ptc_{3}=\frac{1}{1+e^{-b_{3}\Delta_{3}}+e^{b_{2}\Delta_{2}}},$ (18)
or, in terms of the parameters $\alpha$ and $\beta$,
$c_{1}=\frac{1}{\tilde{Z}}\exp\left[-\beta
E_{1}+\frac{\alpha}{v}(E_{2}-E_{3})^{2}\right],\hskip
11.38092ptc_{2}=\frac{1}{\tilde{Z}}\exp\left[-\beta
E_{2}+\frac{\alpha}{v}(E_{3}-E_{1})^{2}\right],\hskip
11.38092ptc_{3}=\frac{1}{\tilde{Z}}\exp\left[-\beta
E_{3}+\frac{\alpha}{v}(E_{1}-E_{2})^{2}\right],$ (19)
where $\tilde{Z}$ is a pseudo-partition function ensuring the normalization of
the $c_{k}$’s
$\tilde{Z}=\tilde{Z}(\alpha,\beta)\equiv e^{-\beta
E_{1}+\frac{\alpha}{v}(E_{2}-E_{3})^{2}}+e^{-\beta
E_{2}+\frac{\alpha}{v}(E_{3}-E_{1})^{2}}+e^{-\beta
E_{3}+\frac{\alpha}{v}(E_{1}-E_{2})^{2}}\ .$ (20)
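In practice, the map from $(\alpha,\beta)$ to the populations, and the inverse extraction of the partial effective temperatures of Equation (11), read as follows (a sketch under the conventions above; the function names are ours):

```python
import numpy as np

def populations(alpha, beta, E):
    """c_k from Equations (19)-(20); E = [E1, E2, E3]."""
    E = np.asarray(E, dtype=float)
    d = np.array([E[1] - E[0], E[2] - E[1], E[0] - E[2]])  # Delta_1,2,3
    v = np.sqrt(3.0 * np.sum(d**2))                         # Equation (17)
    gaps2 = np.array([(E[1] - E[2])**2,                     # (E2 - E3)^2
                      (E[2] - E[0])**2,                     # (E3 - E1)^2
                      (E[0] - E[1])**2])                    # (E1 - E2)^2
    w = np.exp(-beta * E + (alpha / v) * gaps2)
    return w / w.sum()       # dividing by Z~ enforces the normalization

def partial_temperatures(c, E):
    """b_1, b_2, b_3 from the population ratios, Equation (11)."""
    b1 = -np.log(c[1] / c[0]) / (E[1] - E[0])
    b2 = -np.log(c[2] / c[1]) / (E[2] - E[1])
    b3 = -np.log(c[0] / c[2]) / (E[0] - E[2])
    return b1, b2, b3
```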
Let us provide some physical intuition about the parameters $\alpha$ and
$\beta$: For $\alpha=0$, we recover a thermal state, whereby
$c_{1}>c_{2}>c_{3}$ if $\beta>0$, or vice versa if $\beta<0$. On the other
hand, the non-thermal component $\alpha$ can be used to obtain a non-monotonic
behavior of the coefficients $c_{k}$. For example, for $\beta=0$, since
$(E_{3}-E_{1})^{2}$ is greater than both $(E_{3}-E_{2})^{2}$ and
$(E_{1}-E_{2})^{2}$, one finds that $c_{2}>(<)\,c_{1},c_{3}$ if
$\alpha>(<)\,0$.
As a final remark, it is worth noting that we can reduce the dimension of the
parameter space. In particular, without loss of generality, one can
choose the zero of the energy by taking $E_{2}=0$ (and then $E_{3}>0$,
$E_{1}<0$), and we can restrict our analysis to the cases with $\beta>0$. As a
matter of fact, the parametrization is left unchanged by the transformation
$\\{\beta\rightarrow-\beta,\,E_{k}\rightarrow-E_{k}\\}$, with the result that
the case of $\beta<0$ can be explored by simply considering $\beta>0$ in the
fictitious system with $E^{\prime}_{k}=-E_{k}$ (here, the choice of $E_{2}=0$
guarantees that this second case can be simply obtained by substituting
$E_{1}$ with $E_{3}$).
## IV Large $M$ Limit
Here, we numerically investigate the behavior of a three-level system subject
to a sequence of $M$ projective quantum measurements with a large $M$
(asymptotic limit) and where $\tau$ is not infinitesimal. From here on, we
adopt the language of spin-$1$ systems, and we thus identify $O$ with $S_{z}$.
In the asymptotic limit, the behavior of the system is expected not to depend
on the choice of the evolution Hamiltonian, unless at least
one of the eigenstates of $S_{z}$ is also an energy eigenstate. In such a
case, indeed, if the energy outcome corresponding to the common eigenstate is
obtained by the first measurement of the TPM scheme, then the evolution is
trivially deterministic, as the system is locked in the measured eigenstate.
Choosing a generic observable (with no eigenstates in common with $H$),
numerical simulations (cf. Figure 2) suggest that our protocol leads the
system to the completely uniform state. The latter can be interpreted as a
canonical state with $\beta=0$ (notice that this result holds in the situation
we are analyzing, with a finite dimensional Hilbert space). The system evolves
with Hamiltonian $H=\omega_{1}S_{z}+\omega_{2}S_{x}$, where the energy units
are chosen such that $\omega_{1}=1$ and $\omega_{2}=\frac{1}{2}$. It is
initialized in the state $\rho_{0}$ with
$\\{c_{1}=0.8,c_{2}=0.01,c_{3}=0.19\\}$, corresponding to $\alpha\approx-2.32$
and $\beta\approx 1.96$, and we performed $M=20$ projective measurements of
the observable $O=S_{z}$ separated by the time $\tau=1$.
Figure 2: Histogram of the initial (dashed) and final (solid) energy outcomes
for the TPM scheme described in the text performed over $3\cdot 10^{5}$
realizations. While the initial state is non-uniform, the final state is
practically uniform over the three energy levels.
This numerical finding allows us to derive an analytic expression of the
quantum-heat characteristic function. In this regard, as the final state is
independent of the initial one for a large $M$, the joint probability of
obtaining $E_{n}$ and $E_{m}$, after the first and the second energy
measurement respectively, is equal to
$p_{mn}=\frac{1}{3}c_{n}\ .$ (21)
Hence,
$G(\epsilon)=\left\langle e^{-\epsilon
Q}\right\rangle=\frac{1}{3}\sum^{3}_{m,n=1}c_{n}e^{-\epsilon(E_{m}-E_{n})}=\frac{1}{3}\sum_{m=1}^{3}e^{-\epsilon
E_{m}}\sum_{n=1}^{3}c_{n}e^{\epsilon E_{n}}\ .$ (22)
As a consequence, $G$ can be expressed in terms of the partition function
$Z(\beta)$ and of the pseudo-partition function introduced in Equation
(20), i.e.,
$G(\epsilon;\alpha,\beta)=\frac{Z(\epsilon)}{Z(0)}\frac{\tilde{Z}(\alpha,\beta-\epsilon)}{\tilde{Z}(\alpha,\beta)}\
.$ (23)
Regardless of the choice of the system parameters, the already known results
are straightforwardly recovered, i.e., $G(0)=1$ and $G(\beta)=1$ for
$\alpha=0$ (initial thermal state). In Figure 3, our analytical expression for
$G(\epsilon)$ is compared with its numerical estimate for two different
Hamiltonians, showing a very good agreement.
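The closed form (23) is straightforward to evaluate numerically; the sketch below (names are ours) implements it with the same conventions as above and can be compared directly with sampled estimates of $G(\epsilon)$:

```python
import numpy as np

def G_large_M(eps, alpha, beta, E):
    """Asymptotic characteristic function, Equation (23):
    G = [Z(eps)/Z(0)] * [Z~(alpha, beta - eps) / Z~(alpha, beta)]."""
    E = np.asarray(E, dtype=float)
    d = np.array([E[1] - E[0], E[2] - E[1], E[0] - E[2]])
    v = np.sqrt(3.0 * np.sum(d**2))
    gaps2 = np.array([(E[1] - E[2])**2, (E[2] - E[0])**2, (E[0] - E[1])**2])
    Z = lambda b: np.sum(np.exp(-b * E))                        # partition function
    Zt = lambda a, b: np.sum(np.exp(-b * E + (a / v) * gaps2))  # Equation (20)
    return (Z(eps) / Z(0.0)) * (Zt(alpha, beta - eps) / Zt(alpha, beta))
```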
We remark that the distribution of $\rho$ after the second energy measurement
could also be obtained by simply imposing the maximization of the von Neumann
entropy. This is reasonable, since the measurement device is macroscopic and
can provide any amount of energy. As a final remark, notice that, in the
numerical findings of Figure 3, the statistics of the quantum-heat
fluctuations originated by the system respect the same ergodic hypothesis that
is satisfied whenever a sequence of quantum measurements is performed on a
quantum system Gherardini2016NJP ; Gherardini2017QSc ; PiacentiniNatPhys2017 .
In particular, in Figure 3, one can observe that the analytical expression of
$G(\epsilon)$ for a large $M$ (i.e., in the asymptotic regime obtained by
indefinitely increasing the time duration of the implemented protocol)
practically coincides with the numerical results obtained by simulating a
sequence with a finite number of measurements ($M=20$) but over a quite large
number ($3\cdot 10^{5}$) of realizations. This evidence is quite important,
because it means that the quantum-heat statistics is homogeneous and fully
takes into account even phenomena occurring with very small probability in a
single realization of the protocol.
Figure 3: Comparison of the analytic expression (22) of the asymptotic
(large-$M$) quantum-heat characteristic function $G(\epsilon)$ (blue solid
lines) with the numerical results averaged over $3\cdot 10^{5}$ realizations
(red dots). The initial state is the same as in Figure 2 and, again,
$O=S_{z}$. In panel (a), the Hamiltonian is the same as in Figure 2, while in
panel (b) the Hamiltonian is $H=\omega_{1}S^{2}_{z}+\omega_{2}S_{x}$, with
$\omega_{1}=2\omega_{2}=1$.
## V Estimates of $\beta_{\rm eff}$
In this section, we study the behavior of $\beta_{\rm eff}$, i.e., the
nontrivial solution of $G(\beta_{\rm eff})=1$, as a function of the initial
state (parametrized by $\alpha$ and $\beta$) and of the energy levels of the
system. Let us first notice that, by starting from Equation (22), obtaining an
analytical expression for $\beta_{\rm eff}$ in the general case appears to be
a very non-trivial task. Thus, in Figure 4a, we numerically compute
$\beta_{\rm eff}$ as a function of $\alpha$ (the non-thermal component of
$\rho_{0}$) for different values of $\beta$. Three representative cases for
the energy levels are taken, i.e., $\\{E_{1}=-1,\,E_{2}=0,\,E_{3}=3\\}$,
$\\{E_{1}=-1,\,E_{2}=0,\,E_{3}=1\\}$, and
$\\{E_{1}=-3,\,E_{2}=0,\,E_{3}=1\\}$, respectively. This choice allows us to
deal both with the cases $E_{3}-E_{2}>E_{2}-E_{1}$ and
$E_{3}-E_{2}<E_{2}-E_{1}$. The choice of the energy unit is such that the
smallest energy gap between $E_{3}-E_{2}$ and $E_{2}-E_{1}$ is set to one. As
stated above, we consider $\beta>0$; the corresponding negative values of the
inverse temperature are obtained by taking $E^{\prime}_{k}=-E_{k}$ with
$\beta^{\prime}=-\beta$. As expected, for $\alpha=0$, we have $\beta_{\rm
eff}=\beta$, regardless of the values of the $E_{k}$’s.
In the next two subsections, we continue discussing in detail the findings of
Figure 4a, presenting the asymptotic behaviors for large positive and negative
values of $\alpha$.
Figure 4: Behavior of $\beta_{\rm eff}$ as a function of $\alpha$ for
different values of $\beta\in[0,2.5]$. We have chosen: (a)
$\\{E_{1}=-1,\,E_{2}=0,\,E_{3}=3\\}$, (b)
$\\{E_{1}=-1,\,E_{2}=0,\,E_{3}=1\\}$, and (c)
$\\{E_{1}=-3,\,E_{2}=0,\,E_{3}=1\\}$, respectively.
### V.1 Asymptotic Behavior for a Large Positive $\alpha$
From Figure 4a, one can deduce that, for large positive values of $\alpha$
(corresponding to having as the initial density operator the pure state
$\rho_{0}=\lvert E_{2}\rangle\\!\langle E_{2}\rvert$), $\beta_{\rm
eff}\rightarrow\bar{\beta}_{\rm eff}$, which only depends on $E_{1}$ and
$E_{3}$. This asymptotic value $\bar{\beta}_{\rm eff}$ is positive if
$E_{3}-E_{2}>E_{2}-E_{1}$, negative if $E_{3}-E_{2}<E_{2}-E_{1}$, and zero
when $E_{3}-E_{2}=E_{2}-E_{1}$. To better explain the plots in Figure 4a, let
us consider the analytic expression of $G(\epsilon)$. In this regard, for a
large $\alpha$ and finite $\beta$, we can write
$\tilde{Z}(\alpha,\beta)\approx e^{-\beta
E_{2}+\frac{\alpha}{v}(E_{3}-E_{1})^{2}}\ ,$ (24)
so that, by using Equation (23), the condition $G(\beta_{\rm eff})=1$ reads as
$e^{-\bar{\beta}_{\rm eff}(E_{1}-E_{2})}+e^{-\bar{\beta}_{\rm
eff}(E_{3}-E_{2})}=2$, or, setting $E_{2}=0$,
$e^{-\bar{\beta}_{\rm eff}E_{1}}+e^{-\bar{\beta}_{\rm eff}E_{3}}=2\ .$ (25)
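Equation (25) is transcendental, but its nontrivial root is easy to bracket numerically (a sketch assuming energies of order one; widen the bracket for very different scales):

```python
import numpy as np
from scipy.optimize import brentq

def beta_bar_eff(E1, E3):
    """Nontrivial root of exp(-b*E1) + exp(-b*E3) = 2, Equation (25),
    with E1 < 0 < E3 and E2 = 0; returns 0 in the symmetric case."""
    if np.isclose(E3, -E1):
        return 0.0
    f = lambda b: np.exp(-b * E1) + np.exp(-b * E3) - 2.0
    # the root is positive for E3 > -E1 and negative for E3 < -E1
    return brentq(f, 1e-9, 50.0) if E3 > -E1 else brentq(f, -50.0, -1e-9)
```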
Notice that, if $E_{3}=-E_{1}$, the only solution of Equation (25) is
$\bar{\beta}_{\rm eff}=0$, while a positive solution appears for
$E_{3}>-E_{1}$ and a negative one if $E_{3}<-E_{1}$, thus confirming what was
observed in the numerical simulations. Moreover, by replacing
$E_{1}\rightarrow-E_{3}$ and $E_{3}\rightarrow-E_{1}$, the value of
$\bar{\beta}_{\rm eff}$ changes its sign.
Now, without loss of generality, let us fix the energy unit so that
$E_{1}=-1$. The behavior of $\bar{\beta}_{\rm eff}$ as a function of $E_{3}$
is shown in Figure 5. We observe a monotonically increasing behavior of
$\bar{\beta}_{\rm eff}$ up to a constant value for $E_{3}\gg|E_{1}|=1$. Once
again, this value can be analytically computed from Equation (25), which, for
a large value of $E_{3}$, gives $\bar{\beta}_{\rm eff}=\ln{2}$. Putting
together all of the above considerations and restoring the energy scales, the
limits of $\bar{\beta}_{\rm eff}$ are the following
$-\frac{\ln{2}}{E_{3}-E_{2}}<\bar{\beta}_{\rm eff}<\frac{\ln{2}}{E_{2}-E_{1}}\
.$ (26)
The lower and the upper bounds of $\bar{\beta}_{\rm eff}$ are also shown in
Figure 5, in which $E_{1}=-1$ and $E_{2}=0$.
### V.2 Asymptotic Behavior for a Large Negative $\alpha$
From Figure 4a, one can also conclude that, for large negative values of
$\alpha$, the behavior of $\beta_{\rm eff}$ is linear in $\alpha$,
$\beta_{\rm eff}\approx r\alpha\ ,$ (27)
with $r>0$ if $E_{3}-E_{2}>E_{2}-E_{1}$, $r=0$ when $E_{3}-E_{2}=E_{2}-E_{1}$,
and $r$ is negative otherwise. This divergence is easily understood: In fact,
the limit $\alpha\rightarrow-\infty$ (for finite $\beta$) corresponds to the
initial state $\rho_{0}=\lvert E_{1}\rangle\\!\langle E_{1}\rvert$ when
$E_{3}-E_{2}<E_{2}-E_{1}$ or $\rho_{0}=\lvert E_{3}\rangle\\!\langle
E_{3}\rvert$ if $E_{3}-E_{2}>E_{2}-E_{1}$. On the other hand, those states
(thermal states with $\beta_{\rm eff}=\beta=\pm\infty$) are also reached in
the limits $\beta\rightarrow\pm\infty$ with a finite $\alpha$. This simple
argument does not imply the linear divergence of $\beta_{\rm eff}$ as in
Equation (27), nor does it provide insights about the value of $r$, which,
however, can be derived from Equation (23). Although the calculation makes a
distinction on the sign of $r$ depending on whether $E_{3}-E_{2}$ is greater
or smaller than $E_{2}-E_{1}$, the result is independent of this detail. In
particular, by considering the case $E_{3}-E_{2}>E_{2}-E_{1}$ ($r>0$) and
taking into account the divergence of $\beta_{\rm eff}=r\alpha$, we find in
the $\alpha\rightarrow-\infty$ regime that the characteristic function
$G(\beta_{\rm eff})$ has the following form:
$G(\beta_{\rm eff})=\frac{1}{3}+\text{const}\times
e^{-\alpha|\Delta_{3}|\left[r-\frac{(\Delta_{1}-\Delta_{2})}{v}\right]}\ .$
(28)
Figure 5: (a) Behavior of the asymptotic value $\bar{\beta}_{\rm eff}$ for a
large positive $\alpha$ ($\alpha=20$) as a function of $E_{3}$ (solid blue
line) with $E_{1}=-1$ and $E_{2}=0$. We compare the curve with its limiting
(lower and upper) values, defined in Equation (26) (dash-dotted red lines).
(b) Behavior of the asymptotic slope $r$, rescaled by $v$, for a large
negative $\alpha$ ($\alpha=-20$) as a function of $E_{3}$. In both cases,
$E_{2}=0$ and $E_{1}=-1$.
Hence, in order to ensure that $G(\beta_{\rm eff})\neq\frac{1}{3}$ in the
limit $\alpha\rightarrow-\infty$, the following relation has to be satisfied
$r=\frac{E_{1}+E_{3}-2E_{2}}{v}\ .$ (29)
The numerical estimate of $rv$ as a function of $E_{3}$ is shown in Figure 5.
The numerical results confirm the linear dependence of $\beta_{\rm eff}$ as a
function of $\alpha$.
### V.3 Limits of the Adopted Parametrization
Figure 6: Behavior of $\beta_{\rm eff}$ as a function of $q$, which
parametrizes the initial state $\rho_{0}$ as in Equation (30), in each of the
three cases (a) $E_{3}-E_{2}>E_{2}-E_{1}$, (b) $E_{3}-E_{2}=E_{2}-E_{1}$, and
(c) $E_{3}-E_{2}<E_{2}-E_{1}$.
The parametrization introduced in Section III is singular for
initial states $\rho_{0}$ with one or more coefficients $c_{k}$ equal to
zero. In this regard, as remarked above, initial pure states can be easily
obtained in the limits $\beta\rightarrow\pm\infty$, $\alpha$ finite
(corresponding to $\rho_{0}=\lvert E_{1}\rangle\\!\langle E_{1}\rvert$ and
$\rho_{0}=\lvert E_{3}\rangle\\!\langle E_{3}\rvert$, respectively) and
$\alpha\rightarrow+\infty$, $\beta$ finite (that provides $\rho_{0}=\lvert
E_{2}\rangle\\!\langle E_{2}\rvert$). Instead, initial states with only one
coefficient $c_{k}$ equal to zero, namely
$\rho_{0}=q\lvert E_{1}\rangle\\!\langle E_{1}\rvert+(1-q)\lvert
E_{2}\rangle\\!\langle E_{2}\rvert\ ,\hskip 14.22636pt\rho_{0}=q\lvert
E_{1}\rangle\\!\langle E_{1}\rvert+(1-q)\lvert E_{3}\rangle\\!\langle
E_{3}\rvert\ ,\hskip 14.22636pt\rho_{0}=q\lvert E_{2}\rangle\\!\langle
E_{2}\rvert+(1-q)\lvert E_{3}\rangle\\!\langle E_{3}\rvert\ ,$ (30)
with $q\in[0,1]$, cannot be easily written in terms of $\alpha$ and $\beta$.
In fact, the expressions in Equation (30) correspond to the limit in which
$\alpha\rightarrow-\infty$ with $\beta=a\alpha+b$ for suitable $a$, $b$. This
result can be obtained, e.g., for the first of the states in Equation (30),
considering the state $c_{1}=q(1-e^{-Y})$, $c_{2}=e^{-Y}$, and
$c_{3}=(1-q)(1-e^{-Y})$ in the limit $Y\rightarrow+\infty$. Solving for
$\alpha$ and $\beta$, we have
$\alpha=-\frac{v}{\Delta_{1}\Delta_{2}}Y+O(1)\ ,\hskip
28.45274pt\beta=-\frac{r}{3}\frac{v}{\Delta_{1}\Delta_{2}}Y+O(1)\ ,$ (31)
so that $a=r/3$, while the $q$ dependence is encoded in the next-to-leading
term. For this reason, the parametrization in terms of $q\in[0,1]$ turns out
to be the most convenient in the case of singularity. In Figure 6, the
numerical estimates of $\beta_{\rm eff}$ as a function of $q$ are shown for
the three cases in Equation (30), respectively for $E_{3}-E_{2}$ greater,
equal to, and smaller than $E_{2}-E_{1}$. The symmetries $E_{1}\rightarrow-
E_{3}$, $E_{3}\rightarrow-E_{1}$, $q\rightarrow 1-q$ and $\beta_{\rm
eff}\rightarrow-\beta_{\rm eff}$, due to our choice of parametrization, can be
observed.
## VI Conclusions
In this paper, we studied the quantum-heat distribution originating from a
three-level quantum system subject to a sequence of projective quantum
measurements.
As a figure of merit, we analyzed the characteristic function
$G(\epsilon)=\langle e^{-\epsilon Q}\rangle$ of the quantum heat $Q$ by using
the formalism of stochastic thermodynamics. In this regard, it is worth
recalling that, as the system Hamiltonian $H$ is time-independent, the
fluctuations of the energy variation during the protocol can be effectively
referred to as quantum heat. As shown in Ref. Hernandez2019 , the fluctuation
relation describing all the statistical moments of $Q$ is simply given by the
equality $G(\beta_{\rm eff})=1$, where the energy-scaling parameter
$\beta_{\rm eff}$ can be considered as an effective inverse temperature. The
analytic expression of $\beta_{\rm eff}$ has been determined only for specific
cases, as, for example, two-level quantum systems.
Here, a three-level quantum system was considered and, in order to gain
information on the value of $\beta_{\rm eff}$, we performed specific numerical
simulations. In doing this, we introduced a convenient parametrization of the
initial state $\rho_{0}$, such that its population values can be expressed as
a function of the reference inverse temperature $\beta$ and the parameter
$\alpha$, identifying the deviation of $\rho_{0}$ from the thermal state.
Then, the behavior of the system when $M$, the number of projective
measurements, is large was numerically analyzed. The condition of a large $M$
leads to an asymptotic regime whereby the final state of the system tends to a
completely uniform state, stationary with respect to the energy basis. This
means that such a state can be equivalently described by an effective thermal
state with zero inverse temperature. In this regime, the value of the energy
scaling $\epsilon$ allowing for the equality $G(\epsilon)=1$ (i.e.,
$\beta_{\rm eff}$) was evaluated. As a consequence, $\beta_{\rm eff}$, which
uniquely rescales energy exchange fluctuations, implicitly encloses
information on the initial state $\rho_{0}$. In other terms, once $\rho_{0}$
and the system (time-independent) Hamiltonian $H$ are fixed, $\beta_{\rm eff}$
remains unchanged by varying parameters pertaining to the measurements
performed during the dynamics, e.g., the time interval between the
measurements.
We have also determined $\beta_{\rm eff}$ as a function of $\alpha$ and
$\beta$ for large $M$. Except for a few singular cases, we found that, for large
negative values of $\alpha$, $\beta_{\rm eff}$ is linear in
$\alpha$, while it tends to become constant and independent of $\beta$ for
large positive values of $\alpha$. Such conditions can be traced back to an
asymptotic equilibrium regime, because any dependence on the initial state
$\rho_{0}$ is lost.
As a final remark, we note that, overall, the dynamics acting on the analyzed
three-level system are unital Rastegin13JSTAT13 ; Sagawa2014 . As a matter of
fact, this is the result of a non-trivial composition of unitary evolutions
(between each pair of measurements) and projections. It would certainly also
be interesting to analyze a three-level system subject both to a sequence of
quantum measurements and to an interaction with an external (classical or
quantum) environment. In this respect, in light of the results in Refs.
WoltersPRA2013 ; Hernandez2019 , the most promising platforms for this kind of
experiment are NV centers in diamonds DohertyPhysRep2013 . Finally, the
analysis of general $N$-level systems and the study of the large-$M$ behavior
also deserve further investigation.
### Acknowledgments
The authors gratefully acknowledge M. Campisi, P. Cappellaro, F. Caruso, F.
Cataliotti, N. Fabbri, S. Hernández-Gómez, M. Müller and F. Poggiali for
useful discussions. This work was financially supported by the MISTI Global
Seed Funds MIT-FVG Collaboration Grant “NV centers for the test of the Quantum
Jarzynski Equality (NVQJE)”. The author (SG) also acknowledges the PATHOS EU H2020
FET-OPEN grant No. 828946 and the UNIFI grant Q-CODYCES. The author (SR) thanks
the editors of this issue for inviting him to write a paper in honour of
Shmuel Fishman, whom he had the pleasure to meet several times and appreciate
his broad and deep knowledge of various fields of the theory of condensed
matter; besides that, Shmuel Fishman was a lovely person with whom it was a
privilege to spend time in scientific and more general discussions.
## References
* (1) Esposito, M.; Harbola, U.; Mukamel, S. Nonequilibrium fluctuations, fluctuation theorems, and counting statistics in quantum systems. _Rev. Mod. Phys._ 2009, _81_ , 1665.
* (2) Campisi, M.; Hänggi, P.; Talkner, P. Colloquium: Quantum fluctuation relations: Foundations and applications. _Rev. Mod. Phys._ 2011, _83_ , 1653.
* (3) Seifert, U. Stochastic thermodynamics, fluctuation theorems, and molecular machines. _Rep. Prog. Phys._ 2012, _75_ , 126001.
* (4) Deffner, S.; Campbell, S. Quantum Thermodynamics: An Introduction to the Thermodynamics of Quantum Information. Morgan & Claypool Publishers: Williston, VT, USA, 2019.
* (5) Jarzynski, C. Nonequilibrium equality for free energy differences. _Phys. Rev. Lett._ 1997, _78_ , 2690.
* (6) Crooks, G. Entropy production fluctuation theorem and the nonequilibrium work relation for free energy differences. _Phys. Rev. E_ 1999, _60_ , 2721.
* (7) Collin, D.; Ritort, F.; Jarzynski, C.; Smith, S.B.; Tinoco, I.; Bustamante, C. Verification of the Crooks fluctuation theorem and recovery of RNA folding free energies. _Nature_ 2005, _437_ , 231–234.
* (8) Toyabe, S.; Sagawa, T.; Ueda, M.; Muneyuki, E.; Sano, M. Experimental demonstration of information-to-energy conversion and validation of the generalized Jarzynski equality. _Nat. Phys._ 2010, _6_ , 988–992.
* (9) Kafri, D.; Deffner, S. Holevo’s bound from a general quantum fluctuation theorem. _Phys. Rev. A_ 2012, 86, 044302.
* (10) Albash, T.; Lidar, D.A.; Marvian, M.; Zanardi, P. Fluctuation theorems for quantum processes. _Phys. Rev. A_ 2013, _88_ , 023146.
* (11) Rastegin, A.E. Non-equilibrium equalities with unital quantum channels. _J. Stat. Mech._ 2013, _6_ , P06016.
* (12) Sagawa, T. _Lectures on Quantum Computing, Thermodynamics and Statistical Physics._ World Scientific: Singapore, 2013.
* (13) An, S.; Zhang, J.N.; Um, M.; Lv, D.; Lu, Y.; Zhang, J.; Yin, Z.; Quan, H.T.; Kim, K. Experimental test of the quantum Jarzynski equality with a trapped-ion system. _Nat. Phys._ 2015, _11_ , 193–199.
* (14) Batalhão, T.B.; Souza, A.M.; Mazzola, L.; Auccaise, R.; Sarthour, R.S.; Oliveira, I.S.; Goold, J.; Chiara, G.D.; Paternostro, M.; Serra, R.M. Experimental Reconstruction of Work Distribution and Study of Fluctuation Relations in a Closed Quantum System. _Phys. Rev. Lett._ 2014, 113, 140601.
* (15) Cerisola, F.; Margalit, Y.; Machluf, S.; Roncaglia, A.J.; Paz, J.P.;Folman, R. Using a quantum work meter to test non-equilibrium fluctuation theorems. _Nat. Comm._ 2017, 8, 1241.
* (16) Bartolotta, A.; Deffner, S. Jarzynski Equality for Driven Quantum Field Theories. _Phys. Rev. X_ 2018, 8, 011033.
* (17) Hernández-Gómez, S.; Gherardini, S.; Poggiali, F.; Cataliotti, F.S.; Trombettoni, A.; Cappellaro, P.; Fabbri, N. Experimental test of exchange fluctuation relations in an open quantum system. arXiv 2019, arXiv: 1907.08240. Available online: https://arxiv.org/abs/1907.08240.
* (18) Talkner, P.; Lutz, E.; Hänggi, P. Fluctuation theorems: Work is not an observable, _Phys. Rev. E_ 2007 75, 050102(R).
* (19) Campisi, M.; Talkner, P.; Hänggi, P. Fluctuation Theorem for Arbitrary Open Quantum Systems. _Phys. Rev. Lett._ 2009, 102, 210401.
* (20) Mazzola, L.; De Chiara, G.; Paternostro, M. Measuring the characteristic function of the work distribution. _Phys. Rev. Lett._ 2013 110, 230602.
* (21) Allahverdyan, A.E. Nonequilibrium quantum fluctuations of work. _Phys. Rev. E_ 2014, 90, 032137.
* (22) Talkner, P.; Hänggi, P. Aspects of quantum work. _Phys. Rev. E_ 2016, 93, 022131.
* (23) Jaramillo, J.D.; Deng, J.; Gong, J. Quantum work fluctuations in connection with the Jarzynski equality. _Phys. Rev. E_ 2017, 96, 042119.
* (24) Deng, J.; Jaramillo, J.D.; Hänggi, P.; Gong, J. Deformed Jarzynski Equality. _Entropy_ 2017, 19, 419.
* (25) Jarzynski, C.; Wojcik, D.K. Classical and Quantum Fluctuation Theorems for Heat Exchange. _Phys. Rev. Lett._ 2004, 92, 230602.
* (26) Campisi, M.; Pekola, J.; Fazio, R. Nonequilibrium fluctuations in quantum heat engines: theory, example, and possible solid state experiments. _New J. Phys._ 2015, 17, 035012.
* (27) Campisi, M.; Pekola, J.; Fazio, R. Feedback-controlled heat transport in quantum devices: theory and solid-state experimental proposal. _New J. Phys._ 2017, 19, 053027.
* (28) Batalhao, T.B.; Souza, A M.; Sarthour, R.S.; Oliveira, I.S.; Paternostro, M.; Lutz, E.; Serra, R.M. Irreversibility and the arrow of time in a quenched quantum system. _Phys. Rev. Lett._ 2015, 115, 190601.
* (29) Gherardini, S.; Müller, M.M.; Trombettoni, A.; Ruffo, S.; Caruso, F. Reconstructing quantum entropy production to probe irreversibility and correlations. _Quantum Sci. Technol._ 2018, 3, 035013.
* (30) Manzano, G.; Horowitz, J.M.; Parrondo, J.M. Quantum Fluctuation Theorems for Arbitrary Environments: Adiabatic and Nonadiabatic Entropy Production. _Phys. Rev. X_ 2018, 8, 031037.
* (31) Batalhão, T.B.; Gherardini, S.; Santos, J.P.; Landi, G.T.; Paternostro, M. Characterizing irreversibility in open quantum systems. In _Thermodynamics in the Quantum Regime_ , Springer: Berlin/Heidelberger, Germany, 2018; pp. 395–410.
* (32) Santos, J.P.; Céleri, L.C.; Landi, G.T.; Paternostro, M. The role of quantum coherence in non-equilibrium entropy production. _npj Quant. Inf._ 2019, 5, 23.
* (33) Kwon, H.; Kim, M.S. Fluctuation Theorems for a Quantum Channel. _Phys. Rev. X_ 2019, 9, 031029.
* (34) Rodrigues, F.L.; De Chiara, G.; Paternostro, M.; Landi, G.T. Thermodynamics of Weakly Coherent Collisional Models. _Phys. Rev. Lett._ 2019, 123, 140601.
* (35) Campisi, M.; Talkner, P.; Hänggi, P. Fluctuation Theorems for Continuously Monitored Quantum Fluxes. _Phys. Rev. Lett._ 2010, 105, 140601.
* (36) Campisi, M.; Talkner, P.; Hänggi, P. Influence of measurements on the statistics of work performed on a quantum system. _Phys. Rev. E_ 2011, 83, 041114.
* (37) Yi, J.; Kim, Y.W. Nonequilibrium work and entropy production by quantum projective measurements. _Phys. Rev. E_ 2013, 88, 032105.
* (38) Watanabe, G.; Venkatesh, B.P.; Talkner, P.; Campisi, M.; Hänggi, P. Quantum fluctuation theorems and generalized measurements during the force protocol. _Phys. Rev. E_ 2014, 89, 032114.
* (39) Hekking, F.W.J.; Pekola, J.P. Quantum jump approach for work and dissipation in a two-level system. _Phys. Rev. Lett._ 2013, 111, 093602.
* (40) Alonso, J.J.; Lutz, E.; Romito, A. Thermodynamics of weakly measured quantum systems. _Phys. Rev. Lett._ 2016 116, 080403.
* (41) Gherardini, S.; Buffoni, L.; Müller, M.M.; Caruso, F.; Campisi, M.; Trombettoni, A.; Ruffo, S. Nonequilibrium quantum-heat statistics under stochastic projective measurements. _Phys. Rev. E_ 2018, 98, 032108.
* (42) Elouard, C.; Herrera-Martí, D.A.; Clusel, M.; Auffèves, A. The role of quantum measurement in stochastic thermodynamics. _npj Quantum Info._ 2017, 3, 9.
* (43) Gherardini, S.; Gupta, S.; Cataliotti, F.S.; Smerzi, A.; Caruso, F.; Ruffo, S. Stochastic quantum Zeno by large deviation theory. _New J. Phys._ 2016, 18, 013048.
* (44) Gherardini, S.; Lovecchio, C.; Müller, M.M.; Lombardi, P.; Caruso, F.; Cataliotti, F.S. Ergodicity in randomly perturbed quantum systems. _Quantum Sci. Technol._ 2017, 2, 015007.
* (45) Piacentini, F.; Avella, A.; Rebufello, E.; Lussana, R.; Villa, F.; Tosi, A.; Marco, G.; Giorgio, B.; Eliahu, C.; Lev, V.; et al. Determining the quantum expectation value by measuring a single photon. _Nat. Phys._ 2017, 13, 1191–1194.
* (46) Wolters, J.; Strauß, M.; Schoenfeld, R.S.; Benson, O. Quantum Zeno phenomenon on a single solid-state spin. _Phys. Rev. A_ 2013, _88_ , 020101(R).
* (47) Doherty, M.W.; Manson, N.B.; Delaney, P.; Jelezko, F.; Wrachtrup, J.; Hollenberg, L.C. The nitrogen-vacancy colour centre in diamond. _Phys. Rep._ 2013,_528_ , 1–45.
# Charm $C\\!PV$: observation and prospects
Miroslav Saur (School of Physical Sciences, University of Chinese Academy of Sciences, Beijing, 100000, China)
Fu-Sheng Yu (School of Nuclear Science and Technology, Lanzhou University, Lanzhou, 730000, China)
## 1 Introduction
In physics, symmetries and their violation always provide deep insights
into Nature. The parity ($P$) symmetry means that a system is unchanged
under space reflection. The violation of parity, first proposed by Lee
and Yang in 1956 and subsequently discovered experimentally, plays a key role in the
understanding of the weak interaction, which is one of the four fundamental forces of
nature. The charge ($C$) symmetry describes a property between particles and
their anti-particles. The violation of the combined charge-parity ($C\\!P$)
symmetry was unexpectedly observed in kaon meson decays in 1964. The $C$ and
$C\\!P$ violation ($C\\!PV$) are required to explain why there is much more
matter than anti-matter in the Universe.
The explanation of $C\\!PV$ was proposed by Kobayashi and Maskawa (KM) in 1973
by introducing three generations of quarks, i.e., six quarks, whereas only
three quarks were established at the time. All six quarks were found in
the following twenty years. This theory was finally confirmed by the
observation of $C\\!PV$ in the bottom-quark meson system in 2001. The measured
amount of $C\\!PV$ in the Standard Model (SM) of particle physics is about ten
orders of magnitude smaller than required by the matter-antimatter asymmetry
in the Universe. Therefore, it is important to search for new sources of
$C\\!PV$ beyond the SM (BSM). The KM mechanism also predicts the existence of
$C\\!PV$ in the charm-quark system which, however, had never been observed
despite extensive efforts during the past decade. The LHCb collaboration
eventually observed charm $C\\!PV$ in 2019 by measuring the difference of
$C\\!P$ asymmetries of $D^{0}\rightarrow K^{+}K^{-}$ and
$D^{0}\rightarrow\pi^{+}\pi^{-}$, with the result of $(-1.54\pm 0.29)\times
10^{-3}$ LHCb-PAPER-2019-006 , at a significance of 5.3$\sigma$. After the
establishment of $C\\!PV$ in the strange- and bottom-quark systems, the
observation of charm $C\\!PV$ is a milestone of particle physics.
## 2 LHCb and recent measurement
The Large Hadron Collider beauty experiment (LHCb) at the Large Hadron Collider (LHC)
is a dedicated heavy-flavour (particles containing $c$ and $b$ quarks)
experiment with a special focus on $C\\!PV$ measurements. Being a single-arm
forward spectrometer with excellent vertex, interaction-point and momentum
resolution, in combination with highly efficient particle identification systems
and a large ${c}{\overline{c}}$ cross-section, LHCb can study charm physics,
especially possible $C\\!P$-violating processes, with higher precision than
previous dedicated $B$-factory experiments.
In the time period from 2011 to 2018, LHCb collected 9 fb-1 of data,
roughly corresponding to a sample of $10^{10}$ ${D}^{0}$ decays; the ${D}^{0}$
meson is composed of a charm quark and an anti-up quark. Charmed mesons can be
produced as a direct result of proton-proton collisions (prompt production) or
via weak decays of $b$-hadrons (semileptonic production). In the case of
studies using ${D}^{0}$ mesons, prompt production in fact proceeds via the strong decay
${D}^{*}(2010)^{+}\rightarrow{{D}^{0}}{{\pi}^{+}}$ and its charge-conjugated decay
as well. The usage of these decays allows the charm flavour of the
${D}^{0}$ meson to be determined from the charge of the bachelor pion. Semileptonic
processes are defined by the weak decay ${{\kern
1.79993pt\overline{\kern-1.79993ptB}}{}^{0}}\rightarrow{{D}^{0}}{\mu^{+}}{{\overline{\nu}}_{\mu}}X$
and its charge conjugate, where $X$ stands for any allowed additional particles.
In this case, the charm flavour of the ${D}^{0}$ meson is determined by the charge of the
muon.
The recently reported observation of $C\\!PV$ in the charm sector by LHCb is based
on a new Run 2 data analysis and the subsequent combination of the obtained
results with the previous measurements from Run 1 LHCb-PAPER-2014-013 ; LHCb-
PAPER-2015-055 . The new analysis is based on $44~{}(9)\times 10^{6}$ and
$14~{}(3)\times 10^{6}$ ${D}^{0}$ $\rightarrow$ ${K}^{+}$ ${K}^{-}$ and
${D}^{0}$ $\rightarrow$ ${\pi}^{+}$ ${\pi}^{-}$ prompt (semileptonic) decays,
respectively. This data set, corresponding to 6 fb-1, was recorded from 2015
to 2018 at a collision energy of 13 TeV.
The time-dependent $C\\!P$ asymmetry of ${D}^{0}$ decays is given by
$A_{CP}(f,t)\equiv\frac{\mathrm{\Gamma}(D^{0}(t)\rightarrow
f)-\mathrm{\Gamma}({{\kern
1.79993pt\overline{\kern-1.79993ptD}}{}^{0}}(t)\rightarrow
f)}{\mathrm{\Gamma}(D^{0}(t)\rightarrow f)+\mathrm{\Gamma}({{\kern
1.79993pt\overline{\kern-1.79993ptD}}{}^{0}}(t)\rightarrow f)},$ (1)
where $f$ is a final state that is also a $C\\!P$ eigenstate; in the reported
analysis, the final state is ${K}^{+}$ ${K}^{-}$ or ${\pi}^{+}$ ${\pi}^{-}$, and
${\kern 1.79993pt\overline{\kern-1.79993ptD}}{}^{0}$ is the anti-particle of
${D}^{0}$. This asymmetry can also be written as the combination of direct and
indirect $C\\!P$ asymmetry effect: $A_{CP}(f)\approx
a_{{C\\!P}}^{\mathrm{dir}}(f)-\frac{\langle
t(f)\rangle}{\tau({{D}^{0}})}A_{\Gamma}(f)$, where $\langle t(f)\rangle$
denotes the mean decay time of ${D}^{0}$ $\rightarrow~{}f$ influenced by the
experimental efficiency, $a_{{C\\!P}}^{\mathrm{dir}}(f)$ is the direct $C\\!P$
asymmetry, $\tau$(${D}^{0}$) the ${D}^{0}$ lifetime and $A_{\Gamma}$ the
asymmetry between the ${D}^{0}$ $\rightarrow f$ and ${\kern
1.79993pt\overline{\kern-1.79993ptD}}{}^{0}$ $\rightarrow f$ effective decay
widths.
Figure 1: HFLAV fit of direct $C\\!PV$ parameter $\Delta
a^{\mathrm{dir}}_{{C\\!P}}$ and indirect $C\\!PV$ parameter
$a_{{C\\!P}}^{\mathrm{ind}}$ updated with the reported LHCb measurement.
Reproduced from Ref. HFLAV16 .
However, the $A_{CP}$ values as defined above are not directly accessible
by experimental methods and must be extracted from the data. The directly
measurable quantity is the asymmetry between the raw yields, $A_{\rm raw}$, of
${{D}^{0}}\\!\rightarrow{{K}^{+}}{{K}^{-}}$ and ${{\kern
1.79993pt\overline{\kern-1.79993ptD}}{}^{0}}\\!\rightarrow{{K}^{-}}{{K}^{+}}$
decays or between ${{D}^{0}}\\!\rightarrow{{\pi}^{+}}{{\pi}^{-}}$ and ${{\kern
1.79993pt\overline{\kern-1.79993ptD}}{}^{0}}\\!\rightarrow{{\pi}^{-}}{{\pi}^{+}}$,
respectively. $A_{\rm raw}$ can be very well approximated, up to the order
$\mathcal{O}(10^{-6})$, as a linear combination of the physical $C\\!P$ asymmetry
$A_{CP}$, the detection asymmetry of the ${D}^{0}$ (which is equal to zero due to the
charge-conjugated final states), the production asymmetry of the mother particle, and the
detection asymmetry of the tagging particle. These detection and production
asymmetries are cancelled by equalising the kinematics between the ${K}^{+}$ ${K}^{-}$
and ${\pi}^{+}$ ${\pi}^{-}$ decay modes and then taking the difference. This
equalisation is done in three dimensions of kinematic variables simultaneously,
after the removal of phase-space regions with large intrinsic asymmetries due
to the LHCb detector geometry. The final experimental formula is then written as
follows:
$\displaystyle\mathrm{\Delta}A_{CP}$ $\displaystyle\equiv
A_{CP}(D^{0}\rightarrow{{K}^{+}}{{K}^{-}})-A_{CP}(D^{0}\rightarrow{{\pi}^{+}}{{\pi}^{-}})$
$\displaystyle=A_{\rm raw}^{\rm equalised}({{K}^{+}}{{K}^{-}})-A_{\rm
raw}^{\rm equalised}({{\pi}^{+}}{{\pi}^{-}}).$ (2)
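As a purely numerical illustration of Equation (2), the sketch below (the yields are hypothetical round numbers, not LHCb data) builds the raw asymmetries from flavour-tagged signal yields and propagates the binomial statistical uncertainty:

```python
import numpy as np

def a_raw(n_d0, n_d0bar):
    """Raw asymmetry from flavour-tagged yields, with its binomial
    statistical uncertainty (background-free sketch)."""
    n = n_d0 + n_d0bar
    a = (n_d0 - n_d0bar) / n
    return a, np.sqrt((1.0 - a * a) / n)

# Delta A_CP as the difference of the (kinematically equalised) raw
# asymmetries of the KK and pipi samples -- hypothetical yields:
a_kk, s_kk = a_raw(21.95e6, 22.05e6)
a_pp, s_pp = a_raw(7.01e6, 6.99e6)
delta_acp = a_kk - a_pp
sigma = np.hypot(s_kk, s_pp)
print(f"Delta A_CP = {delta_acp:.5f} +/- {sigma:.5f}")
```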
The difference of the $C\\!P$ asymmetries in $D^{0}\rightarrow K^{+}K^{-}$ and
$D^{0}\rightarrow\pi^{+}\pi^{-}$ is finally measured by LHCb as
$\mathrm{\Delta}A_{CP}^{\rm prompt}=[-18.2\pm 3.2\text{\,(stat.)}\pm
0.9\text{\,(syst.)}]\times 10^{-4}$, $\mathrm{\Delta}A_{CP}^{\rm
semileptonic}=[-9\pm 8\text{\,(stat.)}\pm 5\text{\,(syst.)}]\times 10^{-4}$
LHCb-PAPER-2019-006 . By combining these results with the previous LHCb
measurements using the Run 1 data of 3 fb-1 LHCb-PAPER-2014-013 ; LHCb-
PAPER-2015-055 , it is obtained that
$\begin{split}\mathrm{\Delta}A_{CP}^{\rm combined}=(-15.4\pm 2.9)\times
10^{-4},\end{split}$ (3)
where the uncertainty includes statistical and systematic contributions. This
result deviates from the hypothesis of zero $C\\!P$ asymmetry at the 5.3$\sigma$
level. This is the first observation of $C\\!P$ violation in the charm sector.
With the LHCb average of $A_{\Gamma}$ HFLAV16 , the direct $C\\!P$ asymmetry
can then be obtained as
$\Delta a_{{C\\!P}}^{\mathrm{dir}}=(-15.7\pm 2.9)\times 10^{-4}$, which shows
the sensitivity of $\mathrm{\Delta}A_{CP}$ to the direct $C\\!PV$. Finally,
the combined fit of the direct and indirect $C\\!P$ asymmetries by the Heavy
Flavour Averaging Group (HFLAV) is shown in Fig. 1. The current world-average
result excludes the no-$C\\!PV$ hypothesis at the level of 5.44$\sigma$.
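For completeness, the conversion from $\mathrm{\Delta}A_{CP}$ to the direct component used above can be sketched by inverting the decomposition of $A_{CP}(f)$ given earlier (the numerical inputs below are placeholders, not the HFLAV averages):

```python
def delta_a_dir(delta_acp, a_gamma, delta_t_over_tau):
    """Direct CP asymmetry from A_CP(f) ~ a_dir(f) - (<t(f)>/tau) A_Gamma,
    assuming a final-state-independent A_Gamma; delta_t_over_tau is the
    difference of the mean decay times of the KK and pipi samples over tau."""
    return delta_acp + delta_t_over_tau * a_gamma

# placeholder inputs, for illustration only:
print(delta_a_dir(-15.4e-4, -3.0e-4, 0.1))
```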
## 3 Theoretical explanations and implications
In theory, $C\\!PV$ in $D^{0}\rightarrow K^{+}K^{-}$ and $\pi^{+}\pi^{-}$
results from the interference between the tree and penguin amplitudes of charm
decays. It is difficult to calculate in the first-principle QCD methods due to
the large non-perturbative contributions at the charm scale. Therefore, the
order of magnitude of predictions on the charm $C\\!PV$ is meaningful.
Before 2019, predictions of charm $C\\!PV$ spanning several orders of magnitude,
from $10^{-4}$ to $10^{-2}$, had been made in the literature. If one insists on
using perturbative QCD, $C\\!PV$ in charm decays is naively expected to be
$A_{CP}\sim{\alpha_{s}(\mu_{c})\over\pi}{|V_{ub}V_{cb}^{*}|\over|V_{us}V_{cs}^{*}|}=\mathcal{O}(10^{-4})$.
On the contrary, if the unknown non-perturbative contributions are taken to be
arbitrarily large, the charm $C\\!PV$ could be as large as $10^{-2}$, to be
consistent with the experimental results in 2011 when $\mathrm{\Delta}A_{CP}$
was measured to be $(-0.82\pm 0.24)\%$ by LHCb Aaij:2011in . Due to space
limitations, relevant references can be found in
FSY_implication:2019hho . The most interesting point is that only two papers,
written by Cheng and Chiang (CC) Cheng:2012xb and Li, Lü and Yu (LLY)
Li:2012cfa , quantitatively predicted $\mathrm{\Delta}A_{CP}$ at the order of
$10^{-3}$ before the observation. They are much smaller than the experimental
measurements in 2011 and 2012, but manifested by the recent LHCb result. The
comparison between the experimental measurements and the theoretical
predictions by CC and LLY are shown in Fig. 2.
Figure 2: Comparison between experimental measurements (in black) and
theoretical predictions (in blue) of $\mathrm{\Delta}A_{CP}$ by year.
Experimental results correspond to the world-average values for each
year as calculated by HFLAV HFLAV16 . The theoretical approaches
of CC and LLY are explained in the text. The yellow band is the most recent
experimental result for comparison.
To predict the charm $C\\!PV$, one should reliably obtain the tree amplitudes
first, to understand the dynamics at the charm scale, and then calculate the
penguin amplitudes reasonably. Including all the strong-interaction effects,
especially the non-perturbative contributions, the topological diagrammatic
approach works well for hadronic charm-meson decays by extracting the tree
amplitudes from the data of branching fractions Cheng:2012wr ; Cheng:2012xb ;
Li:2012cfa . CC pointed out that the upper bound of $\mathrm{\Delta}A_{CP}$ in
the SM is $-0.25\%$ Cheng:2012wr , which is more than $2\sigma$ away from the
experimental result at that time. Later on, they assumed that the penguin-
exchange diagram is identical to the $W$-exchange diagram, $PE=E$, so that
$\mathrm{\Delta}A_{CP}$ $=(-1.39\pm 0.04)\times
10^{-3}~{}\mathrm{or}~{}(-1.51\pm 0.04)\times 10^{-3}$ Cheng:2012xb , and
updated with similar results in Cheng:2019ggx . To give a reasonable
uncertainty of the CC approach, considering the possible difference between
$PE$ and $E$, we take $PE$ ranging from $E/2$ to $2E$. Under the factorization
hypothesis, LLY proposed the factorization-assisted topological-amplitude
approach which relates the penguin amplitudes to the tree amplitudes without
any additional free parameters. Considering the uncertainties of input
parameters, it is predicted that $\mathrm{\Delta}A_{CP}$
$=(-0.57\sim-1.87)\times 10^{-3}$ Li:2012cfa , which is consistent with the
latest result by the LHCb measurement.
After the observation of charm $C\\!PV$ by LHCb in 2019, new explanations are
explored either in the SM or in the BSM. In the SM picture, the measured
result of $\mathrm{\Delta}A_{CP}$ can be explained by the non-perturbative
final-state-interaction contributions from the rescattering effects
Grossman:2019xcj or the near-by resonant effects Soni:2019xko .
Alternatively, given that the SM contribution to the charm $C\\!PV$ is very
small based on the heavy-quark expansion and the perturbative QCD, the
observed $\mathrm{\Delta}A_{CP}$ are explored by the BSM explanations, such as
the flavour-violating $Z^{\prime}$ model, the two-Higgs-doublet model, and
vector-like quark models Lenz_CPV_BSM:2019fdb ; Dery_implication:2019ysp ;
Calibbi:2019bay .
Due to the non-perturbative physics at the charm scale, it is difficult to use
the observed CPV to reliably search for new physics. On the contrary, the
study of charm CPV could be used to test and understand the non-perturbative
behaviour of the SM. Additional progress in theory, as well as more precise
experimental measurements, is needed.
## 4 Impact and prospect for the future
Although the combination of ${{D}^{0}}\\!\rightarrow{{K}^{+}}{{K}^{-}}$ and
${{D}^{0}}\\!\rightarrow{{\pi}^{+}}{{\pi}^{-}}$ was expected to be one of the
best probes of $C\\!PV$ in charm, many other studies are possible or even
already ongoing, such as
${{D}^{0}}\rightarrow{{K}^{0}_{\mathrm{S}}}{{K}^{0}_{\mathrm{S}}}$ and
${{D}^{+}}\rightarrow{{K}^{+}}{{K}^{-}}{{\pi}^{+}}$ FSY_implication:2019hho ,
to test and understand the dynamics of charm decay and to search for new
physics beyond the SM. Due to their generally rich decay structure, multibody
decays offer additional interesting possibilities to look for $C\\!PV$
signatures. However, at the same time, such studies generally require more
complicated analysis methods and higher recorded luminosity. Another
interesting measurement is being performed to investigate a novel effect of
$C\\!PV$ in charmed mesons decaying into ${K}^{0}_{\mathrm{S}}$, which comes
from the mother decay and daughter mixing, with predicted values reaching the
available experimental sensitivity Yu_CPV:2017oky .
The LHCb detector is currently undergoing a substantial upgrade which
will allow data to be recorded with five times higher luminosity than during the years
2015-2018. This, in combination with the new full software trigger, a crucial
point for charm physics, will allow LHCb to achieve an unprecedented
precision in heavy-flavour $C\\!PV$ measurements. This opens the door
to measuring possible $C\\!PV$ effects in rare decays, e.g., radiative and semi-
leptonic decays. Another dedicated heavy-flavour experiment is Belle II, which
started taking data in 2019 and from which contributions to $C\\!PV$ measurements
are expected, especially results from decays with neutral particles in the
final state. Another substantial upgrade of LHCb is planned for the time
period after 2030, with an additional tenfold increase in luminosity. LHCb
is currently expected to be the only dedicated heavy-flavour experiment taking
trigger-level yields and sensitivity in promptly produced
${{D}^{0}}\\!\rightarrow{{K}^{+}}{{K}^{-}}$ and
${{D}^{0}}\\!\rightarrow{{\pi}^{+}}{{\pi}^{-}}$ decays.
In summary, the first experimental observation of $C\\!P$ violation in the
charm sector was achieved with an impressive sensitivity by LHCb in 2019.
This is a milestone in high-energy physics. The result is consistent with
the theoretical predictions by CC and LLY. It is expected that more precise
measurements and more theoretical studies in the near future will help us to
deeply understand the dynamics at the charm scale and to explore the new
physics effects.
Table 1: Overview of the recorded and predicted values for promptly produced ${{D}^{0}}\rightarrow{{K}^{+}}{{K}^{-}}$ and ${{D}^{0}}\rightarrow{{\pi}^{+}}{{\pi}^{-}}$ yields during the different data-taking periods at LHCb. All values correspond to the trigger-level yields within the LHCb acceptance. The last column shows the expected precision of the $\mathrm{\Delta}A_{CP}$ measurements with the corresponding yields. Reproduced from Ref. LHCb-PII-Physics .

Sample [fb-1] | ${{D}^{0}}\rightarrow{{K}^{+}}{{K}^{-}}$ yield | ${{D}^{0}}\rightarrow{{\pi}^{+}}{{\pi}^{-}}$ yield | $\sigma(\mathrm{\Delta}A_{CP})$ [%]
---|---|---|---
Run 1-2 (9) | $52\times 10^{6}$ | $17\times 10^{6}$ | 0.03
Run 1-3 (23) | $280\times 10^{6}$ | $94\times 10^{6}$ | 0.013
Run 1-4 (50) | $1\times 10^{9}$ | $305\times 10^{6}$ | 0.01
Run 1-5 (300) | $4.9\times 10^{9}$ | $1.6\times 10^{9}$ | 0.003
## 5 Acknowledgement
This work is partially supported by the National Natural Science Foundation of
China under
Grants No.U1732101 and 11975112, by Gansu Natural Science Fund under grant
No.18JR3RA265, by the Fundamental Research Funds for the Central Universities
under Grant No. lzujbky-2019-55 and by the University of Chinese Academy of
Sciences scholarship for International students.
## References
* (1) LHCb collaboration, R. Aaij, C. Abellan Beteta, M. Adinolfi, et al., _Observation of $C\\!P$ violation in charm decays_, Phys. Rev. Lett. 122 (2019) 211803, arXiv:1903.08726
* (2) LHCb collaboration, R. Aaij, B. Adeva, M. Adinolfi, et al., _Measurement of $C\\!P$ asymmetry in ${{D}^{0}}\\!\rightarrow{{K}^{-}}{{K}^{+}}$ and ${{D}^{0}}\\!\rightarrow{{\pi}^{-}}{{\pi}^{+}}$ decays_, JHEP 07 (2014) 041, arXiv:1405.2797
* (3) LHCb collaboration, R. Aaij, C. Abellan Beteta, B. Adeva, et al., _Measurement of the difference of time-integrated $C\\!P$ asymmetries in ${{D}^{0}}\\!\rightarrow{{K}^{-}}{{K}^{+}}$ and ${{D}^{0}}\\!\rightarrow{{\pi}^{-}}{{\pi}^{+}}$ decays_, Phys. Rev. Lett. 116 (2016) 191601, arXiv:1602.03160
* (4) Heavy Flavor Averaging Group, Y. Amhis, Sw. Banerjee, E. Ben-Haim et al., _Averages of $b$-hadron, $c$-hadron, and $\tau$-lepton properties as of summer 2016_, Eur. Phys. J. C77 (2017) 895, arXiv:1612.07233, updated results and plots available at https://hflav.web.cern.ch
* (5) LHCb, R. Aaij, C. Abellan Beteta, B. Adeva, et al., _Evidence for CP violation in time-integrated $D^{0}\rightarrow h^{-}h^{+}$ decay rates_, Phys. Rev. Lett. 108 (2012) 111602, arXiv:1112.0938
* (6) H.-N. Li, C.-D. Lü, and F.-S. Yu, _Implications on the first observation of charm CPV at LHCb_, arXiv:1903.10638
* (7) H.-Y. Cheng and C.-W. Chiang, _SU(3) symmetry breaking and CP violation in D $\rightarrow$ PP decays_, Phys. Rev. D86 (2012) 014014, arXiv:1205.0580
* (8) H.-n. Li, C.-D. Lu, and F.-S. Yu, _Branching ratios and direct CP asymmetries in $D\rightarrow PP$ decays_, Phys. Rev. D86 (2012) 036012, arXiv:1203.3120
* (9) H.-Y. Cheng and C.-W. Chiang, _Direct CP violation in two-body hadronic charmed meson decays_ , Phys. Rev. D 85 (2012) 034036, arXiv:1201.0785, [Erratum: Phys.Rev.D 85, 079903 (2012)]
* (10) H.-Y. Cheng and C.-W. Chiang, _Revisiting CP violation in $D\rightarrow P\!P$ and $V\!P$ decays_, Phys. Rev. D 100 (2019) 093002, arXiv:1909.03063
* (11) Y. Grossman and S. Schacht, _The emergence of the $\Delta U=0$ rule in charm physics_, JHEP 07 (2019) 020, arXiv:1903.10952
* (12) A. Soni, _Resonance enhancement of Charm CP_ , arXiv:1905.00907
* (13) M. Chala, A. Lenz, A. V. Rusov, and J. Scholtz, _$\Delta A_{CP}$ within the Standard Model and beyond_, JHEP 07 (2019) 161, arXiv:1903.10490
* (14) A. Dery and Y. Nir, _Implications of the LHCb discovery of CP violation in charm decays_ , JHEP 12 (2019) 104, arXiv:1909.11242
* (15) L. Calibbi, T. Li, Y. Li, and B. Zhu, _Simple model for large CP violation in charm decays, B-physics anomalies, muon g-2, and Dark Matter_ , arXiv:1912.02676
* (16) F.-S. Yu, D. Wang, and H.-n. Li, _$CP$ asymmetries in charm decays into neutral kaons_, Phys. Rev. Lett. 119 (2017) 181802, arXiv:1707.09297
* (17) LHCb collaboration, R. Aaij, B. Adeva, M Adinolfi, et al., _Physics case for an LHCb Upgrade II — Opportunities in flavour physics, and beyond, in the HL-LHC era_ , arXiv:1808.08865
# Energy-scale cascade and correspondence between the Mott and Kondo lattice physics
Danqing Hu (Beijing National Laboratory for Condensed Matter Physics and Institute of Physics, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100049, China), Ning-Hua Tong (Department of Physics, Renmin University of China, Beijing 100872, China), and Yi-feng Yang <EMAIL_ADDRESS> (Beijing National Laboratory for Condensed Matter Physics and Institute of Physics, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100049, China; Songshan Lake Materials Laboratory, Dongguan, Guangdong 523808, China)
###### Abstract
We propose an energy-scale correspondence between the Mott physics and the
Kondo lattice physics and construct a tentative phase diagram of their
correlated electrons with two characteristic energy scales $\omega^{*}$ and
$\Omega$ marking the upper boundary of a low-energy regime with well-developed
long-range coherence and the lower boundary of localized spectral weight in
the energy space, respectively. In between, there exists a crossover region
with emergent but damped quasiparticle excitations. We argue that the presence
of two separate energy scales is a generic property of correlated electrons on
a lattice and reflects an intrinsic two-stage process of the quasiparticle
dynamics to build up the lattice coherence. For the Hubbard model, they
correspond to the kink and waterfall structures on the dispersion, while for
the periodic Anderson model, they are associated with the indirect and direct
hybridization gaps. Our work reveals a deep connection between the Mott and
Kondo lattice physics and provides a basic ingredient for the study of many-
body systems.
In correlated electron systems, a “kink” denotes an abrupt slope change of the
dispersive band and has often been regarded as a fingerprint of the coupling
between electronic quasiparticles and collective bosonic excitations. It has
been intensively studied in cuprates Lanzara2001Nature ; He2001PRL ;
Shen2002PMB ; Damascelli2003RMP ; Meevasana2007 ; Armitage2010RMP , where its
appearance at an energy scale of around 30-90 meV has been attributed to spin
fluctuations or phonons, which may potentially participate in the
superconducting pairing. Different from these “bosonic kinks” which represent
the characteristic energy scales of the bosonic excitations, it was later shown
that a kink may also arise purely due to an electronic mechanism without
involving bosonic excitations Byczuk2007NatPhys . This “electronic kink” was
first seen in the Hubbard model (HM) within the framework of the dynamical
mean-field theory (DMFT) Nekrasov2006 and then extended to other more
complicated models Macridin2007PRL ; Grete2011PRB ; Kainz2012PRB ;
Greger2013PRL ; Toschi2009 ; Toschi2010 . It might explain the so-called high
energy kink observed over a wide energy range from 40 to 800 meV in cuprates
and some other transition-metal oxides Yang2005PRL ; Iwasawa2005 ; Yoshida2005
; Ingle2005PRB ; Valla2007 ; Graf2007 ; Xie2007PRL ; Aizaki2012PRL .
Theoretically, the “electronic kink” has been argued to mark an important
energy scale of the Mott physics, namely the boundary of its Fermi liquid
ground state Byczuk2007NatPhys . Roughly speaking, the kink energy
$\omega^{*}$ constrains a low-energy region of well-defined Landau
quasiparticles and poses a threshold for the application of the Fermi liquid
theory. A higher energy scale $\Omega$ has also been proposed as the boundary
separating the many-body resonant state from the high-energy Hubbard bands.
$\omega^{*}$ and $\Omega$ together define a cascade of key energy scales in
the HM. Their discovery has since stimulated many debates concerning their
true physical origin. Some suggested that the kink should still be of bosonic
origin due to emergent collective spin fluctuations Rass2009 , while some
associated it with an effective Kondo energy scale Held2013PRL . A better
understanding of its origin will deepen our knowledge of the many-body
physics, but a consensus has yet to be reached. In particular, its fundamental
importance and potential implications have not been fully recognized.
Here we point out their deep connection with the general physics of
quasiparticle emergence and coherence and report a close correspondence
between the energy scales of the Mott and Kondo lattice physics. While the two
phenomena have mostly been studied separately in different families of
correlated systems based on either the HM or the periodic Anderson model
(PAM), there are increasing evidences supporting an intimate underlying
connection Held2000PRL ; Logan2016 . The Kondo lattice physics has been argued
to be a special case of the (orbital-selective) Mott transition Medici2005 ;
Pepin2007 ; Pepin2008 ; Vojta2010 , and the Mott physics in cuprates is itself
a low-energy projection of the Kondo-Heisenberg model Zhang1988 . In the
lately-discovered nickelate superconductors, both seem to exist and determine
the properties of the ground state and low-energy excitations Zhang2020 .
Our proposed correspondence is based on the simple observation that the two
energy scales of the HM have the simple relationship $\omega^{*}\propto Z$ and
$\Omega\propto{Z^{1/2}}$ with the renormalization factor $Z$ Byczuk2007NatPhys ;
Bulla1999PB . In comparison, the PAM also has two well-known energy scales,
namely the indirect and direct hybridization gaps Coleman1984PRB ;
Auerbach1986PRL ; Benlagra2011 ; Coleman2015 , which, in the mean-field
approximation, are related to an effective hybridization $\tilde{V}$ through
$\Delta_{\rm ind}\propto\tilde{V}^{2}$ and $\Delta_{\rm dir}\propto\tilde{V}$.
This is reminiscent of $\omega^{*}$ and $\Omega$ in the HM. Since the
hybridization gaps have well-defined physical meanings, it is natural to ask
if this similarity can provide some insight on the physics of the HM or reveal
certain generic features of both models. In this work, we will establish such
a correspondence through systematic numerical studies and explore their origin
and potential implications on the physics of correlated electrons. We will
show that they represent two distinct groups of related energy scales and
provide a good characterization of the energy boundaries separating the fully
coherent and localized spectral weights with an itinerant but damped crossover
region in between. This allows us to construct a tentative phase diagram in
the energy space. We argue that the separation of the two energy scales
reveals an intrinsic two-stage process of the quasiparticle dynamics for
building up the long-range spatial coherence and represents a generic property
of a typical class of correlated electron physics on a lattice represented by
the HM and PAM. We note that the present work concerns only the paramagnetic
Fermi-liquid side of the HM phase diagram, although it covers the whole
spectrum in energy space beyond the Landau quasiparticle regime. There is a
difference between the energy domain and the temperature domain: while the
Fermi liquid is usually realized at low temperatures, the two energy scales
should still be observed in the energy spectrum, reflecting the critical
charge or spin dynamics of low-lying quasiparticle excitations.
Figure 1: Illustration of different energy scales identified from the $f$
electron DOS $\rho(\omega)=-{{\rm Im}}G(\omega)/\pi$, the real part of the $f$
electron self-energy ${\rm Re}\Sigma(\omega)$ and its slope ${\rm
Re}\Sigma^{\prime}(\omega)$, and the real part of the Green function ${\rm
Re}G(\omega)$ for (a) the HM with $U=3$ and (b) the PAM with $U=5.6$ and
$V^{2}=0.4$. $\omega^{*}$ and $\omega_{\rm m}$ are estimated from the kink in
${\rm Re}\Sigma(\omega)$ and the maximum of ${\rm Re}G(\omega)$, respectively.
For the HM, $\Omega$ is defined from the maximum of ${\rm
Re}\Sigma^{\prime}(\omega)$. For the PAM, $\Delta_{\rm ind}$, $\Delta_{\rm
dir}$, and $\Omega$ are obtained from the dispersion in Fig. 3. The insets are
the enlarged plots for the low energy scales: $\omega^{*}$, $\omega_{\rm m}$,
and $\Delta_{\rm ind}$.
In both models, the Hamiltonians include two parts: $H=H_{K}+H_{U}$. The
potential energy has the form,
$H_{U}=U\sum_{i}\left(n_{i\uparrow}-\frac{1}{2}\right)\left(n_{i\downarrow}-\frac{1}{2}\right)$,
where $U$ is the onsite Coulomb interaction and $n_{i\sigma}$ is the density
operator of the local orbital. There are two general ways to delocalize the
$f$ electrons. In the HM, one introduces a direct hopping between neighboring
lattice sites, giving rise to a kinetic energy,
$H_{K}=\sum_{k\sigma}\epsilon_{k}f_{k\sigma}^{{\dagger}}f_{k\sigma}$, where
$f^{\dagger}_{k\sigma}(f_{k\sigma})$ are the creation (annihilation) operator
of the $f$ electrons. In the PAM, the $f$ orbitals remain local and the
delocalization is achieved by hybridizing with an additional conduction band
such that
$H_{K}=\sum_{k\sigma}\epsilon_{k}c_{k\sigma}^{{\dagger}}c_{k\sigma}+V\sum_{i\sigma}(f_{i\sigma}^{{\dagger}}c_{i\sigma}+{\rm
H.c.})$, where $c^{\dagger}_{k\sigma}$($c_{k\sigma}$) are the creation
(annihilation) operator of the conduction electrons and $V$ is the bare
hybridization parameter. For simplicity, we focus on the paramagnetic state in
the half-filled one-band case with particle-hole symmetry and use the
numerical renormalization group (NRG) Wilson1975 ; Bulla2008 as the impurity
solver within the DMFT framework Metzner1989 ; Georges1992PRB ; Georges1996RMP
; Kotliar2006RMP . The local self-energy of the $f$ electrons is calculated
using $\Sigma(\omega)=UF(\omega)/G(\omega)$ with $G(\omega)=\langle\langle
f_{i\sigma};f_{i\sigma}^{{\dagger}}\rangle\rangle_{\omega}$ and
$F(\omega)=\langle\langle
f_{i\sigma}(n_{i\overline{\sigma}}-1/2);f_{i\sigma}^{{\dagger}}\rangle\rangle_{\omega}$
Bulla1998JPCM . We then solve the DMFT self-consistent equation
$G(\omega)=\int d\epsilon\rho_{0}(\epsilon)/[\omega-\epsilon-\Sigma(\omega)]$
for the HM and $G(\omega)=\int
d\epsilon\rho_{0}(\epsilon)/[\omega-V^{2}/(\omega-\epsilon+i\eta)-\Sigma(\omega)]$
for the PAM Zhang1993PRL ; Pruschke2000PRB , where
$\rho_{0}(\epsilon)=e^{-(\epsilon/t)^{2}}/(\sqrt{\pi}t)$ is the noninteracting
density of states for a hypercubic lattice with the dimensionality
$d\rightarrow\infty$. We set $t=1$ as the energy unit and choose the
logarithmic discretization parameter $\Lambda=2$ and a total number of stored
states $N_{\rm s}=800$ for the NRG calculations. Our calculations are limited
to the paramagnetic phase, concerning the emergence and development of
quasiparticle coherence; magnetic instabilities are additional issues beyond
the scope of the current work.
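To make the self-consistency structure concrete, the following Python sketch implements the lattice Hilbert transforms for the Gaussian density of states given above. It is schematic: a real NRG impurity solver is far beyond a few lines, so `solve_impurity` is a hypothetical placeholder standing in for the step that returns $\Sigma(\omega)=UF(\omega)/G(\omega)$; the grid sizes and broadening $\eta$ are illustrative choices, not the values used in our calculations.

```python
import numpy as np

t, eta = 1.0, 1e-3                       # energy unit and small broadening
eps = np.linspace(-8.0, 8.0, 8001)       # band-energy grid
de = eps[1] - eps[0]
rho0 = np.exp(-(eps / t) ** 2) / (np.sqrt(np.pi) * t)  # hypercubic DOS

def g_local_hm(w, sigma_w):
    # HM: G(w) = int de rho0(e) / (w - e - Sigma(w)), regularised by i*eta
    return np.trapz(rho0 / (w + 1j * eta - eps - sigma_w), dx=de)

def g_local_pam(w, sigma_w, V2):
    # PAM: G(w) = int de rho0(e) / (w - V^2/(w - e + i*eta) - Sigma(w))
    hyb = V2 / (w - eps + 1j * eta)
    return np.trapz(rho0 / (w + 1j * eta - hyb - sigma_w), dx=de)

def dmft_loop(w_grid, solve_impurity, n_iter=50):
    """Skeleton DMFT cycle for the HM; `solve_impurity` is a placeholder
    for the NRG step returning Sigma(w) = U F(w) / G(w)."""
    sigma = np.zeros_like(w_grid, dtype=complex)   # start from U = 0
    for _ in range(n_iter):
        g = np.array([g_local_hm(w, s) for w, s in zip(w_grid, sigma)])
        sigma = solve_impurity(w_grid, g)          # placeholder solver
    return sigma
```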
Figure 2: The scaling relation of different energy scales with respect to the
renormalization factor $Z$ for (a) the HM and (b) the PAM with tuning Coulomb
interaction $U$. For the PAM, the hybridization strength is set to
$V^{2}=0.4$.
Figure 1 illustrates how the different energy scales can be identified from
the $f$ electron density of states (DOS) $\rho(\omega)=-{\rm
Im}G(\omega)/\pi$, the real part of the self-energy ${\rm Re}\Sigma(\omega)$
and its $\omega$-derivative ${\rm Re}\Sigma^{\prime}(\omega)$, and the real
part of the Green function, ${\rm Re}G(\omega)$. For the HM, the DOS has a
three-peak structure including two Hubbard peaks at higher energy and a
quasiparticle peak around zero energy. The larger scale $\Omega$ is defined
here from the maximum of ${\rm Re}\Sigma^{\prime}(\omega)$. In Ref.
Byczuk2007NatPhys , it was defined from the minimum of ${\rm
Re}\Sigma(\omega)$, which, however, only follows the proposed scaling near the
Mott transition with a small $Z$. Nonetheless, both reflect the same physics
and separate roughly the quasiparticle peak and the Hubbard peaks in the DOS.
The lower scale $\omega^{*}$ can be defined from an additional slope change or
the kink in ${\rm Re}\Sigma(\omega)$ that constrains a low-energy region for
quasiparticle excitations. A different scale $\omega_{\rm m}$ may be
introduced from the maximum of ${\rm Re}G(\omega)$. $\omega^{*}$ and
$\omega_{\rm m}$ are of the same origin and related through
$\omega^{*}\approx(\sqrt{2}-1)\omega_{\rm m}$ for the HM Byczuk2007NatPhys . A
third definition may be found from the maximum ($\omega_{s}$) of the local
spin susceptibility as discussed later in Fig. 4. The relationship of these
scales can be established if we tune the local Coulomb interaction $U$ and
plot their variations with the renormalization factor, $Z^{-1}=1-{\rm
Re}\Sigma^{\prime}(0)$. The results are summarized in Fig. 2(a). We see that
$\omega^{*}\propto\omega_{m}\propto Z$ and $\Omega\propto Z^{1/2}$ over a wide
range of $Z$. Thus, $\omega^{*}$ and $\Omega$ represent two distinct groups of
related energy scales. The different definitions originate from the crossover
nature of the underlying physics.
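The exponents in Fig. 2(a) can be extracted by straight-line fits in log-log coordinates. A minimal sketch follows; the $(Z,\omega^{*},\Omega)$ triples below are hypothetical numbers chosen only to mimic the reported scaling, not the actual data points of Fig. 2.

```python
import numpy as np

# Hypothetical (Z, omega*, Omega) values read off at several U; illustrative only.
Z = np.array([0.05, 0.10, 0.20, 0.40, 0.60])
omega_star = np.array([0.012, 0.025, 0.049, 0.101, 0.148])
Omega = np.array([0.45, 0.63, 0.90, 1.27, 1.55])

# Power-law exponents from straight-line fits in log-log coordinates.
a_star, _ = np.polyfit(np.log(Z), np.log(omega_star), 1)
a_Omega, _ = np.polyfit(np.log(Z), np.log(Omega), 1)
print(f"omega* ~ Z^{a_star:.2f}, Omega ~ Z^{a_Omega:.2f}")  # expect ~1 and ~0.5
```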
For comparison, Fig. 1(b) shows the energy scales in the PAM. The indirect
hybridization gap $\Delta_{\rm ind}$ is determined by the DOS gap and
coincides with the additional minimum of ${\rm Re}G(\omega)$, while the direct
hybridization gap $\Delta_{\rm dir}$ cannot be clearly discerned in these
quantities and is best seen in Fig. 3 from the spectral function,
$A_{k}(\omega)=-\pi^{-1}{\rm
Im}[\omega-\Sigma(\omega)-\frac{V^{2}}{\omega-\epsilon_{k}+i\eta}]^{-1}$. The
plot implies two separated hybridization bands for the PAM. The indirect gap
$\Delta_{\rm ind}$ is the energy difference between the bottom of the upper
band and the top of the lower band and marks the boundary for the low-energy
gap. Although the PAM is in a Kondo insulating state, the self-energy below
$\Delta_{\rm ind}$ behaves as that of a Fermi liquid Pruschke2000PRB ,
reflecting the establishment of full coherence on the Kondo lattice Hu2019 ;
Qi2020 . By contrast, the direct hybridization gap $\Delta_{\rm dir}$ measures
the minimal energy difference between the two hybridization bands, below which
the spectra are governed by the itinerant $f$ character. The energy scales
$\omega^{*}$ and $\omega_{\rm m}$ can still be defined similarly as in the HM
and contain the flat part of the hybridization bands. We obtain
$\omega^{*}\propto\omega_{\rm m}\propto\Delta_{\rm ind}$. On the other hand,
the maximum of ${\rm Re}\Sigma^{\prime}(\omega)$ changes slightly with varying
$U$ and is no longer a meaningful quantity separating the boundary of the
itinerant and localized regions. Rather, we find it is better to define the
larger energy $\Omega$ from the deviation of the hybridization bands from the
original conduction bands. In Fig. 3, it is seen that $\Omega$ sets roughly
the outermost boundary of the itinerant $f$ electron spectral weight. Both
$\Omega$ and $\Delta_{\rm dir}$ reflect the separation of the hybridized
(itinerant) and unhybridized (localized) spectral regions of the $f$ electrons
in the energy space.
The scaling results for the PAM are summarized in Fig. 2(b). Similarly, we
have $\omega^{*}\propto\omega_{\rm m}\propto\Delta_{\rm ind}\propto Z$ and
$\Omega\propto\Delta_{\rm dir}\propto Z^{1/2}$ over a wide range of $Z$. The
obtained scaling relations link $\omega^{*}$ and $\Omega$ to $\Delta_{\rm
ind}$ and $\Delta_{\rm dir}$ and suggest that they reflect similar physics in
the PAM. However, unlike $\omega^{*}$ and $\Omega$ which are in reality hard
to measure, the indirect and direct hybridization gaps have unambiguous
physical meaning and can be directly probed in experiment. Their scaling
relations in the PAM can be derived from the pole properties of the Green
function,
$G_{k}(\omega)=[\omega-\Sigma(\omega)-\frac{V^{2}}{\omega-\epsilon_{k}+i\eta}]^{-1}$.
If we make the lowest order approximation, ${\rm
Re}\Sigma(\omega)\approx(1-Z^{-1})\omega$, the poles are determined by
$Z^{-1}\omega-\frac{V^{2}}{\omega-\epsilon_{k}}=0$. The direct gap is defined
as the energy differences of the two poles at $\epsilon_{k}=0$, while the
indirect gap is the energy difference at the band edges ($\epsilon_{k}=\pm
D$). We have $\Delta_{\rm dir}=Z^{1/2}V$ and $\Delta_{\rm ind}=ZV^{2}/D$,
confirming their relationship with $Z$. The relative deviation of the
hybridization bands from the conduction band may be estimated to be
$\delta=(\omega-\epsilon_{k})/\omega=ZV^{2}/\omega^{2}$. We have thus
$\Omega\approx Z^{1/2}V/\delta^{1/2}$ for a chosen small cutoff $\delta$, at
which the $f$ electron spectral weight on the dispersion is also reduced to
the order of $Z\delta$. Since all these energy scales diminish at $V=0$, their
presence reflects the delocalization of the $f$ electrons on the lattice
driven by the hybridization with conduction electrons. In this respect, the
larger scale $\Omega$ constrains the major spectral region of delocalized $f$
electrons, while $\omega^{*}$ puts a further restriction for well-established
long-range coherence.
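These gap scalings can be verified directly from the quadratic pole equation $Z^{-1}\omega(\omega-\epsilon_{k})-V^{2}=0$ derived above. A minimal numerical sketch, with illustrative values $D=1$ and $V=0.6$ (numerical prefactors differ from the text by factors of order one, depending on the gap convention):

```python
import numpy as np

D, V = 1.0, 0.6  # illustrative band half-width and bare hybridization

def bands(eps_k, Z, V):
    # Poles of G_k with Re Sigma(w) ~ (1 - 1/Z) w:
    # w/Z - V^2/(w - eps_k) = 0  =>  w^2 - eps_k*w - Z*V^2 = 0
    disc = np.sqrt(eps_k ** 2 + 4 * Z * V ** 2)
    return (eps_k - disc) / 2, (eps_k + disc) / 2

for Z in (0.4, 0.1, 0.025):
    lower0, upper0 = bands(0.0, Z, V)     # poles at eps_k = 0
    _, bottom_up = bands(-D, Z, V)        # bottom of upper band
    top_low, _ = bands(+D, Z, V)          # top of lower band
    d_dir = upper0 - lower0               # ~ 2 Z^{1/2} V
    d_ind = bottom_up - top_low           # ~ 2 Z V^2 / D for small Z V^2
    print(f"Z={Z}: dir/sqrt(Z)={d_dir/np.sqrt(Z):.3f}, ind/Z={d_ind/Z:.3f}")
```

The printed ratios are (approximately) constant in $Z$, confirming $\Delta_{\rm dir}\propto Z^{1/2}V$ and $\Delta_{\rm ind}\propto ZV^{2}/D$.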
Figure 3: Different energy scales on the intensity plot of the resolved
spectral functions for the HM and PAM. The background colors reflect the
magnitude of the $f$ electron spectral weight $A_{\rm k}(\omega)$. The blue
dashed lines denote the dispersion $Z\epsilon_{k}$ for the HM and
$\epsilon_{k}$ for the PAM. The parameters are the same as in Fig. 1. For the
PAM, the localized flat $f$ bands are outside the plotted energy window.
Similar analyses may help to understand $\omega^{*}$ and $\Omega$ in the HM.
Figure 3 also plots the dispersion $A_{k}(\omega)=-\pi^{-1}{\rm
Im}[\omega-\epsilon_{k}-\Sigma(\omega)]^{-1}$ of the HM. Indeed, we see that
$\omega^{*}$ marks the boundary of well-defined Landau quasiparticles, above
which the dispersion has a different renormalization factor, while $\Omega$
separates the low-energy many-body states and the high energy incoherent
region, beyond which the $f$ electron weights are localized. The presence of
both $\omega^{*}$ and $\Omega$ is a reflection of the special cascade
structure of ${\rm Re}\Sigma^{\prime}(\omega)$, which, as shown in the middle
panel of Fig. 1(a), has a maximum at $\Omega$ but, with lowering energy,
decreases first to a plateau (or local minimum) on the hillside before it
eventually reaches the valley floor below $\omega^{*}$ and $\omega_{\rm m}$.
Interestingly, $\Omega$ lies roughly at the waterfall structure connecting the
quasiparticle and Hubbard bands Weber2008 ; Kohno2012 . This poses a question
concerning the relation of the two properties. For this, we note that the
waterfall is an abrupt change of the pole of the Green function with
increasing $\epsilon_{k}$. It can only occur around the inflection point of
$\omega-{\rm Re}\Sigma(\omega)$ if one wants to tentatively avoid multiple
poles. This immediately implies ${\rm Re}\Sigma^{\prime\prime}(\Omega)\approx
0$ or a maximum (minimum) in ${\rm Re}\Sigma^{\prime}(\Omega)$. Thus, $\Omega$
defined here is indeed associated with the waterfall structure in the HM. We
conclude that the two energy scales correspond to two features of the
dispersion: $\omega^{*}$ for the kink and $\Omega$ for the waterfall, which
also reflect the boundaries for the low-energy fully coherent region and the
high-energy localized region. The fact that $\Omega$ connects two regions with
opposite energy dependence in the self-energy implies
$Z^{-1}\Omega\sim\Omega^{-1}$ or $\Omega\propto Z^{1/2}$, similar to that in
the PAM.
Figure 4: The imaginary part of the local spin susceptibility, showing a
maximum at $\omega_{s}$ for both models. The parameters are the same as in
Fig. 1. The inset plots $\omega_{s}$ as functions of $Z$ with $V^{2}=0.4$
(PAM) and varying $U$.
Further insight on $\omega^{*}$ may be obtained if we consider that
$\Delta_{\rm ind}$ in the PAM is related to the spin screening,
$\omega^{*}\propto\Delta_{\rm ind}\propto T_{K}$ where $T_{K}$ is the lattice
Kondo temperature Jarrell1995PRB ; Chen2016 . The effect of spin screening
might be best seen from the local susceptibility,
$\chi_{s}(\omega)=\langle\langle S_{i}^{z};S_{i}^{z}\rangle\rangle_{\omega}$
with $S_{i}^{z}=\frac{1}{2}(n_{i\uparrow}-n_{i\downarrow})$ Shimizu2000JPSJ .
Figure 4 plots the imaginary part of the local spin susceptibilities for the
HM and PAM using the same parameters as in Fig. 1. Both exhibit a maximum at
$\omega_{s}$, which, as shown in the inset, varies linearly with $Z$, as do
$\omega^{*}$ and $\Delta_{\rm ind}$. In fact, we find
$\omega_{s}\approx\omega^{*}$ in both models (Fig. 3). Away from half filling,
this kink or hybridization energy scale $\omega^{*}$ still exists and retains
the same relation with $\omega_{s}$ Grete2011PRB ; Kainz2012PRB . Below
$\omega_{s}$ the local susceptibility is suppressed due to the spin screening.
It is therefore natural to associate the kink with an effective spin screening
scale as proposed in Ref. Held2013PRL . We should emphasize, however, that the
presence of both $\omega^{*}$ and $\Omega$ is a property of the lattice but
not in the usual single-impurity Kondo model where only the Kondo scale
exists. This implies that the lattice feedback is crucial for separating
$\omega^{*}$ and $\Omega$ and causing the cascade structure of ${\rm
Re}\Sigma^{\prime}(\omega)$. Although DMFT does not contain explicit spatial
correlations, it necessarily includes important lattice information through
the self-consistent iterative procedure.
Thus, all energy scales can be classified into two categories, a lower one
$\omega^{*}\propto Z$ and a higher one $\Omega\propto Z^{1/2}$, which separate
the $f$ electron spectral weight into different regions of distinct
properties. If we start from localized $f$ electrons, we may conclude that
their spin screening and consequential delocalization is an energy-dependent
process marked by the two scales. While $\Omega$ sets the outermost boundary for
the many-body resonant states and covers the full energy range with
delocalized $f$ electron spectral weight, $\omega^{*}$ is a lower boundary for
quasiparticle excitations with well-established long-range coherence. This is
common for both PAM and HM and may be best seen if we construct a tentative
phase diagram in Fig. 5 in the energy space based on the properties of the $f$
electron spectra. For $Z>0$, the $\omega^{*}\propto Z$ line indicates a
screening scale and marks the upper boundary of a well-developed coherent
region irrespective of the Fermi liquid or Kondo insulating state, while
$\Omega\propto{Z^{1/2}}$ marks the lower boundary of an incoherent region with
localized $f$ spectral weight. In between, there exists a crossover region
with itinerant but strongly damped excitations of the $f$ character. With
decreasing $Z$ to zero, the system enters a Mott insulating state for the HM
or an orbital-selective Mott state for the PAM, where the many-body resonance
is suppressed and all $f$ electrons turn into fully localized magnetic
moments. Since $Z$ is dimensionless, the scaling relations suggest constant
prefactors roughly given by $D$ or $V^{2}/D$. Major correlation effects in
both properties are encapsulated in the renormalization factor $Z$ (as a
function of $U/D$) over a wide parameter region.
Figure 5: A tentative phase diagram of the HM and PAM in the energy space as a
function of the renormalization factor $Z$, showing different regions of
correlated electron spectral weights. In the shadow region, the $f$ electrons
become completely localized so that the HM is in a Mott state and the PAM is
in an orbital-selective Mott state.
We remark again that the presence of the two energy scales represents two
related but distinct features of the correlated electron physics. It is not
just a matter of theory but has real implications and observable consequences
in experiment. To see this, we may consider the gradual delocalization of the
$f$ electrons with lowering temperature. The two energy scales then turn into
two temperature scales, as observed in a recent pump-probe experiment on the
heavy-fermion compound CeCoIn5 Qi2020 . Determinant Quantum Monte Carlo
(DQMC) simulations taking into account spatial degrees of freedom for the PAM
have confirmed two distinct stages of hybridization dynamics, namely, the
onset of precursor hybridization fluctuations and the eventual establishment
of a long-range coherent (Kondo insulating) state Hu2019 . In the HM, the two
energy scales are also expected to represent two succeeding stages of the
(nonequilibrium) quasiparticle dynamics, namely, a Fermi liquid phase with
well-established Landau quasiparticles and a higher crossover region with
itinerant but damped quasiparticles (bad metal) Pruschke1995 ; Mravlje2011PRL
; Deng2013PRL . It might also be reflected in the temperature or time
evolution of the doublon-holon excitations. Exact lattice simulations on the
HM may help clarify this issue but are often limited by the numerical accuracy
due to small lattice size. At even higher temperatures, the localized spectral
weight might be thermally excited and contribute to physical properties, but
whether or not (and how) the highly damped electrons can become Anderson
localized requires more rigorous study Antipov2016 .
Along this line of thought, we anticipate that the existence and separation of
the two energy scales reflect a generic two-stage development of the electronic
coherence and are an intrinsic property of correlated electrons. Before the
long-range coherence is eventually established, there exists a precursor stage
where the electrons become partially delocalized with damped quasiparticle
excitations. On the other hand, the usual Mott transition also exists in other
models such as the Falicov-Kimball model, the Ising-Kondo model, and the
Hatsugai-Kohmoto model, which do not seem to possess the two energy scales. A
closer inspection suggests that the Mott transitions in these models are
different from that in the HM. In the Falicov-Kimball model and the Ising-
Kondo model, the ground state near the transition is not a Fermi liquid due to
“disorder scattering” Freericks2003 ; Yang2020 , while the Hatsugai-Kohmoto
model lacks the so-called dynamical spectral weight transfer which is a key
feature of the HM Yeo2019 . Thus the presence of two energy scales should not
be viewed as a generic property of the Mott transition. Rather, they are
associated with the emergence and establishment of coherent quasiparticles on
the lattice, possibly induced by the spectral weight transfer from the high-
energy localized part to the low-energy itinerant part. More elaborate studies
are required to establish this important distinction.
To summarize, we report systematic analyses of the energy-scale cascade in the
HM and the PAM and reveal a deep connection between the Mott and Kondo lattice
physics. This allows us to construct a tentative phase diagram for correlated
electrons based on two energy scales marking the upper boundary of the fully
coherent low-energy states and the lower boundary of the incoherent regime
with localized spectral weight in the energy space. For the HM, these
correspond to the kink and waterfall structures on the dispersion. For the
PAM, they are associated with the indirect and direct hybridization gaps. The
separation of two energy scales is an intrinsic property of the lattice models
and reflects a two-stage dynamical process to build up the lattice coherence
of itinerant quasiparticles. Our work clarifies the origin of these energy
scales and reveals a potentially basic and generic property for understanding
the key physics of correlated electrons on a lattice.
This work was supported by the National Natural Science Foundation of China
(NSFC Grant No. 11974397, No. 11774401, No. 11974420), the National Key
Research and Development Program of MOST of China (Grant No. 2017YFA0303103),
and the Strategic Priority Research Program of the Chinese Academy of Sciences
(Grant No. XDB33010100).
## References
* (1) A. Lanzara, P. V. Bogdanov, X. J. Zhou, S. A. Kellar, D. L. Feng, E. D. Lu, T. Yoshida, H. Eisaki, A. Fujimori, K. Kishio, J.-I. Shimoyama, T. Noda, S. Uchida, Z. Hussain, and Z.-X. Shen, Evidence for ubiquitous strong electron–phonon coupling in high-temperature superconductors, Nature (London) 412, 510 (2001).
* (2) H. He, Y. Sidis, P. Bourges, G. D. Gu, A. Ivanov, N. Koshizuka, B. Liang, C. T. Lin, L. P. Regnault, E. Schoenherr, and B. Keimer, Resonant Spin Excitation in an Overdoped High Temperature Superconductor, Phys. Rev. Lett. 86, 1610 (2001).
* (3) Z.-X. Shen, A. Lanzara, S. Ishihara, and N. Nagaosa, Role of the electron-phonon interaction in the strongly correlated cuprate superconductors, Philos. Mag. B 82, 1349 (2002).
* (4) A. Damascelli, Z. Hussain, and Z.-X. Shen, Angle-resolved photoemission studies of the cuprate superconductors, Rev. Mod. Phys. 75, 473 (2003).
* (5) W. Meevasana, X. J. Zhou, S. Sahrakorpi, W. S. Lee, W. L. Yang, K. Tanaka, N. Mannella, T. Yoshida, D. H. Lu, Y. L. Chen, R. H. He, H. Lin, S. Komiya, Y. Ando, F. Zhou, W. X. Ti, J. W. Xiong, Z. X. Zhao, T. Sasagawa, T. Kakeshita, K. Fujita, S. Uchida, H. Eisaki, A. Fujimori, Z. Hussain, R. S. Markiewicz, A. Bansil, N. Nagaosa, J. Zaanen, T. P. Devereaux, and Z.-X. Shen, Hierarchy of multiple many-body interaction scales in high-temperature superconductors, Phys. Rev. B 75, 174506 (2007).
* (6) N. P. Armitage, P. Fournier, and R. L. Greene, Progress and perspectives on electron-doped cuprates, Rev. Mod. Phys. 82, 2421 (2010).
* (7) K. Byczuk, M. Kollar, K. Held, Y.-F. Yang, I. A. Nekrasov, Th. Pruschke, and D. Vollhardt, Kinks in the dispersion of strongly correlated electrons, Nat. Phys. 3, 168 (2007).
* (8) I. A. Nekrasov, K. Held, G. Keller, D. E. Kondakov, Th. Pruschke, M. Kollar, O. K. Andersen, V. I. Anisimov, and D. Vollhardt, Momentum-resolved spectral functions of SrVO3 calculated by LDA+DMFT, Phys. Rev. B 73, 155112 (2006).
* (9) A. Macridin, M. Jarrell, T. Maier, and D. J. Scalapino, High-Energy Kink in the Single-Particle Spectra of the Two-Dimensional Hubbard Model, Phys. Rev. Lett. 99, 237001 (2007).
* (10) P. Grete, S. Schmitt, C. Raas, F. B. Anders, and G. S. Uhrig, Kinks in the electronic dispersion of the Hubbard model away from half filling, Phys. Rev. B, 84 205104 (2011).
* (11) A. Kainz, A. Toschi, R. Peters, and K. Held, Kinks in the periodic Anderson model, Phys. Rev. B 86, 195110 (2012).
* (12) M. Greger, M. Kollar, and D. Vollhardt, Emergence of a Common Energy Scale Close to the Orbital-Selective Mott Transition, Phys. Rev. Lett. 110, 046403 (2013).
* (13) A. Toschi, M. Capone, C. Castellani, and K. Held, Kinks in the Electronic Specific Heat, Phys. Rev. Lett. 102, 076402 (2009).
* (14) A. Toschi, M. Capone, C. Castellani, and K. Held, Kinks: Fingerprints of strong electronic correlations, J. Phys. Conf. Ser. 200, 012207 (2010).
* (15) H.-B. Yang, Z.-H. Pan, A. K. P. Sekharan, T. Sato, S. Souma, T. Takahashi, R. Jin, B. C. Sales, and D. Mandrus, Fermi Surface Evolution and Luttinger Theorem in NaxCoO2: A Systematic Photoemission Study, Phys. Rev. Lett. 95,146401 (2005).
* (16) H. Iwasawa, Y. Aiura, T. Saitoh, I. Hase, S. I. Ikeda, Y. Yoshida, H. Bando, M. Higashiguchi, Y. Miura, X. Y. Cui, K. Shimada, H. Namatame, and M. Taniguchi, Orbital selectivity of the kink in the dispersion of Sr2RuO4, Phys. Rev. B 72, 104514 (2005).
* (17) T. Yoshida, K. Tanaka, H. Yagi, A. Ino, H. Eisaki, A. Fujimori, and Z.-X. Shen, Direct Observation of the Mass Renormalization in SrVO3 by Angle Resolved Photoemission Spectroscopy, Phys. Rev. Lett. 95, 146404 (2005).
* (18) N. J. C. Ingle, K. M. Shen, F. Baumberger, W. Meevasana, D. H. Lu, and Z.-X. Shen, Quantitative analysis of Sr2RuO4 angle-resolved photoemission spectra: Many-body interactions in a model Fermi liquid, Phys. Rev. B, 72 205114 (2005).
* (19) J. Graf, G.-H. Gweon, K. McElroy, S. Y. Zhou, C. Jozwiak, E. Rotenberg, A. Bill, T. Sasagawa, H. Eisaki, S. Uchida, H. Takagi, D.-H. Lee, and A. Lanzara, Universal High Energy Anomaly in the Angle-Resolved Photoemission Spectra of High Temperature Superconductors: Possible Evidence of Spinon and Holon Branches, Phys. Rev. Lett. 98, 067004 (2007).
* (20) T. Valla, T. E. Kidd, W.-G. Yin, G. D. Gu, P. D. Johnson, Z.-H. Pan, and A. V. Fedorov, High-Energy Kink Observed in the Electron Dispersion of High-Temperature Cuprate Superconductors, Phys. Rev. Lett. 98, 167003 (2007).
* (21) B. P. Xie, K. Yang, D. W. Shen, J. F. Zhao, H. W. Ou, J. Wei, S. Y. Gu, M. Arita, S. Qiao, H. Namatame, M. Taniguchi, N. Kaneko, H. Eisaki, K. D. Tsuei, C. M. Cheng, I. Vobornik, J. Fujii, G. Rossi, Z. Q. Yang, and D. L. Feng, High-Energy Scale Revival and Giant Kink in the Dispersion of a Cuprate Superconductor, Phys. Rev. Lett. 98, 147001 (2007).
* (22) S. Aizaki, T. Yoshida, K. Yoshimatsu, M. Takizawa, M. Minohara, S. Ideta, A. Fujimori, K. Gupta, P. Mahadevan, K. Horiba, H. Kumigashira, and M. Oshima, Self-Energy on the Low- to High-Energy Electronic Structure of Correlated Metal SrVO3, Phys. Rev. Lett. 109, 056401 (2012).
* (23) C. Raas, P. Grete, and G. S. Uhrig, Emergent Collective Modes and Kinks in Electronic Dispersions, Phys. Rev. Lett. 102, 076406 (2009).
* (24) K. Held, R. Peters, and A. Toschi, Poor Man’s Understanding of Kinks Originating from Strong Electronic Correlations, Phys. Rev. Lett. 110, 246402 (2013).
* (25) K. Held, C. Huscroft, R. T. Scalettar, and A. K. McMahan, Similarities between the Hubbard and Periodic Anderson Models at Finite Temperatures, Phys. Rev. Lett. 85, 373 (2000).
* (26) D. E. Logan, M. R. Galpin, and J. Mannouch, Mott transitions in the periodic Anderson model, J. Phys. Condens. Matter 28, 455601 (2016).
* (27) L. de’ Medici, A. Georges, G. Kotliar, and S. Biermann, Mott Transition and Kondo Screening in $f$-Electron Metals, Phys. Rev. Lett. 95, 066402 (2005).
* (28) C. Pépin, Kondo Breakdown as a Selective Mott Transition in the Anderson Lattice, Phys. Rev. Lett. 98, 206401 (2007).
* (29) C. Pépin, Selective Mott transition and heavy fermions, Phys. Rev. B, 77, 245129 (2008).
* (30) M. Vojta, Orbital-Selective Mott Transitions: Heavy Fermions and Beyond, J. Low Temp. Phys. 161, 203 (2010).
* (31) F. C. Zhang and T. M. Rice, Effective Hamiltonian for the superconducting Cu oxides, Phys. Rev. B 37, 3759(R) (1988).
* (32) G. M. Zhang, Y.-F. Yang, and F. C. Zhang, Self-doped Mott insulator for parent compounds of nickelate superconductors, Phys. Rev. B 101, 020501(R) (2020).
* (33) R. Bulla, Th. Pruschke, and A. C. Hewson, Metal-insulator transition in the Hubbard model, Physica B 259-261, 721 (1999).
* (34) P. Coleman, New approach to the mixed-valence problem, Phys. Rev. B 29, 3035 (1984).
* (35) A. Auerbach and K. Levin, Kondo Bosons and the Kondo Lattice: Microscopic Basis for the Heavy Fermi Liquid, Phys. Rev. Lett. 57, 877 (1986).
* (36) A. Benlagra, Th. Pruschke, and M. Vojta, Finite-temperature spectra and quasiparticle interference in Kondo lattices: From light electrons to coherent heavy quasiparticles, Phys. Rev. B 84, 195141 (2011).
* (37) P. Coleman, Introduction to Many-body Physics (Cambridge University Press, Cambridge, England, 2015).
* (38) K. G. Wilson, The renormalization group: Critical phenomena and the Kondo problem, Rev. Mod. Phys. 47, 773 (1975).
* (39) R. Bulla, T. A. Costi, and Th. Pruschke, Numerical renormalization group method for quantum impurity systems, Rev. Mod. Phys. 80, 395 (2008).
* (40) W. Metzner and D. Vollhardt, Correlated Lattice Fermions in $d=\infty$ Dimensions, Phys. Rev. Lett. 62, 324 (1989).
* (41) A. Georges, G. Kotliar, W. Krauth, and M. Rozenberg, Hubbard model in infinite dimensions, Phys. Rev. B 45, 6479 (1992).
* (42) A. Georges, G. Kotliar, W. Krauth, and M. Rozenberg, Dynamical mean-field theory of strongly correlated fermion systems and the limit of infinite dimensions, Rev. Mod. Phys. 68, 13 (1996).
* (43) G. Kotliar, S. Y. Savrasov, K. Haule, V. S. Oudovenko, O. Parcollet, and C. A. Marianetti, Electronic structure calculations with dynamical mean-field theory, Rev. Mod. Phys. 78, 865 (2006).
* (44) R. Bulla, A. C. Hewson, and Th. Pruschke, Numerical renormalization group calculations for the self-energy of the impurity Anderson model, J. Phys. Condens. Matter 10, 8365 (1998).
* (45) X. Y. Zhang, M. Rozenberg, and G. Kotliar, Mott Transition in the $d=\infty$ Hubbard Model at Zero Temperature, Phys. Rev. Lett. 70, 1666(1993).
* (46) Th. Pruschke, R. Bulla, and M. Jarrell, Low-energy scale of the periodic Anderson model, Phys. Rev. B 61, 12799 (2000).
* (47) Y. P. Liu, Y. J. Zhang, J. J. Dong, H. Lee, Z. X. Wei, W. L. Zhang, C. Y. Chen, H. Q. Yuan, Y.-F. Yang, and J. Qi, Hybridization Dynamics in CeCoIn5 Revealed by Ultrafast Optical Spectroscopy, Phys. Rev. Lett. 124, 057404 (2020).
* (48) D. Hu, J. J. Dong, and Y.-F. Yang, Hybridization fluctuations in the half-filled periodic Anderson model, Phys. Rev. B 100, 195133 (2019).
* (49) C. Weber, K. Haule, and G. Kotliar, Optical weights and waterfalls in doped charge-transfer insulators: A local density approximation and dynamical mean-field theory study of La2-xSrxCuO4, Phys. Rev. B, 78, 134519 (2008).
* (50) M. Kohno, Mott Transition in the Two-Dimensional Hubbard Model, Phys. Rev. Lett. 108, 076401 (2012).
* (51) M. Jarrell, Symmetric periodic Anderson model in infinite dimensions, Phys. Rev. B, 51, 7429 (1995).
* (52) R. Y. Chen and N. L. Wang, Infrared properties of heavy fermions: evolution from weak to strong hybridizations, Rep. Prog. Phys. 79, 064502 (2016).
* (53) Y. Shimizu, O. Sakai, and A. C. Hewson, The Effects of Band Dispersion and Interactions on the Excitation Gaps in the Periodic Anderson Model in Infinite Dimensions, J. Phys. Soc. Jpn. 69, 1777 (2000).
* (54) J. Mravlje, M. Aichhorn, T. Miyake, K. Haule, G. Kotliar, and A. Georges, Coherence-Incoherence Crossover and the Mass-Renormalization Puzzles in Sr2RuO4, Phys. Rev. Lett. 106, 096401 (2011).
* (55) X. Deng, J. Mravlje, R. Žitko, M. Ferrero, G. Kotliar, and A. Georges, How Bad Metals Turn Good: Spectroscopic Signatures of Resilient Quasiparticles, Phys. Rev. Lett. 110, 086401 (2013).
* (56) Th. Pruschke, M. Jarrell, and J. K. Freericks, Anomalous normal-state properties of high-$T_{c}$ superconductors: intrinsic properties of strongly correlated electron systems? Adv. Phys. 44, 187 (1995).
* (57) A. E. Antipov, Y. Javanmard, P. Ribeiro, and S. Kirchner, Interaction-Tuned Anderson versus Mott Localization, Phys. Rev. Lett. 117, 146601 (2016).
* (58) J. K. Freericks and V. Zlatić, Exact dynamical mean-field theory of the Falicov-Kimball model, Rev. Mod. Phys. 75, 1333 (2003).
* (59) W.-W. Yang, L. Zhang, X.-M. Guo, and Y. Zhong, Hidden Anderson localization in disorder-free Ising-Kondo lattice, Chin. Phys. B 29, 107301 (2020).
* (60) L. Yeo and P. W. Phillips, Local entropies across the Mott transition in an exactly solvable model, Phys. Rev. D 99, 094030 (2019).
# Fair Adversarial Networks
George Čevora
illumr Ltd., London, United Kingdom
<EMAIL_ADDRESS>
###### Abstract
The influence of human judgement is ubiquitous in datasets used across the
analytics industry, yet humans are known to be sub-optimal decision makers
prone to various biases. Analysing biased datasets then leads to biased
outcomes of the analysis. Bias by protected characteristics (e.g. race) is of
particular interest as it may not only make the output of analytical process
sub-optimal, but also illegal. Countering the bias by constraining the
analytical outcomes to be fair is problematic because A) fairness lacks a
universally accepted definition, while at the same time some definitions are
mutually exclusive, and B) the use of optimisation constraints ensuring
fairness is incompatible with most analytical pipelines. Both problems are
solved by methods which remove bias from the data and return an altered
dataset. This approach aims to not only remove the actual bias variable (e.g.
race), but also alter all proxy variables (e.g. postcode) so the bias variable
is not detectable from the rest of the data. The advantage of using this
approach is that the definition of fairness as a lack of detectable bias in
the data (as opposed to the output of analysis) is universal and therefore
solves problem (A). Furthermore, as the data is altered to remove bias the
problem (B) disappears because the analytical pipelines can remain unchanged.
This approach has been adopted by several technical solutions. None of them,
however, seems to be satisfactory in terms of ability to remove multivariate,
non-linear and non-binary biases. Therefore, in this paper I propose the
concept of _Fair Adversarial Networks_ as an easy-to-implement general method
for removing bias from data. This paper demonstrates that Fair Adversarial
Networks achieve this aim.
Unfair treatment of individuals enacted by automated decision-makers has
recently attracted a large amount of attention [14, 7, 19]. This problem is,
however, not limited to automated decision-making; all Data Analytics [DA] and
Machine Learning [ML] has the potential to lead to biased conclusions and
therefore enact discrimination. The prominence of automated decision-makers in
the discussion of discrimination is mostly due to two factors: 1) machine
learning models are hard to scrutinise and therefore lack the human oversight
that otherwise could prevent discrimination, and 2) automated decision-makers
are easy to test and therefore they can be relatively easily proven to be
biased. Many conventional uses of Data Analytics can lead to same problems
though - for instance the analyst may conclude that individuals from
particular postcodes are less credit-worthy. If that judgement is based on
biased data and those postcodes are predominantly home to individuals of a
specific race, this amounts to racial discrimination despite race not being
considered by the analyst at all. The methods presented in this paper aim to
prevent these exact situations and at the same time provide a generic easy-to-
use solution accessible to any data analyst.
Discrimination is a result of past biased human judgements that make up the
datasets, or by biased human judgements which shape the composition of the
dataset [1, 4]. This paper focuses on illegal discrimination; however, the
findings are applicable to all types of bias, including biases that would not
be generally seen as unfair. For instance in the case of dealing with HR data
collected across a number of regional offices, the bias of the region might be
undesirable in employee evaluation.
It has been reported [17] that it is harder for African-Americans to obtain
loans than for their white-American counterparts with the same repayment
ability. As race is one of the protected characteristics under US law, use of
such a system to perform decision-making is illegal. However, this problem
does not only concern ethics or legality. In fact, there is a solid business
case for fairness. Let us consider a case in which two individuals with equal
repayment ability apply for a loan but one of them is rejected based on
discrimination against a group they are a member of. Besides being unfair and
illegal this also makes a lost profit for the lender in question.
There are two main scenarios in which racism creeps into DA. The more
straightforward scenario occurs whenever the objective of the DA exercise is
to predict (biased) human judgement. The need to remove the past human bias in
this scenario is widely acknowledged, yet often ignored. Examples of this
issue come from many of the companies that use DA or ML for hiring. They may
try to link a candidate’s characteristics such as CV, performance on aptitude
tests, or even appearance to figure out how well the candidate will perform in
the job. However, they almost never link that information with actual
performance of people hired in the past. They are more likely to simply check
whether the individuals with particular characteristics were hired or not,
which is the decision of a potentially biased hiring manager. This approach is
pragmatic - evaluating performance is not easy and can be only done for
individuals who were actually hired, not for all applicants, thus severely
reducing the size of the dataset.
The fairer and more accurate option is to use the actual performance of the
individuals hired on the job to guide future decision making. However, this
data too can be biased. The reason for this is that the datasets used for this
task are almost never a uniform sample from the population. For instance,
women have lower chances to be hired for many roles, therefore they are less
likely to be included in the dataset used to train the automated decision-
maker. This results in a situation in which the dataset used is systematically
different to the population, where the difference may increase over time - an
ubiquitous sight in DA/ML. This approach too offers a biased view of the
population.
Several attempts to address this issue have been made in the past [8, 11, 18].
However, a complete lack of consensus on which of these methods should be the
gold standard led to the current status quo in which data analysts or ML
engineers decide themselves how to address this problem [1].
The bias-removal methods can be broadly divided into two categories: 1)
methods seeking statistical parity between groups of individuals that differ
on their protected characteristic; 2) methods that seek to remove information
about the protected characteristics in the datasets used for ML.
### Statistical Parity
Ensuring statistical parity between groups, such as equal access to
opportunity, is one of the most popular bias-aware methods in DA/ML [12]. The
fundamental problem with this approach is the necessity to explicitly define
fairness. Unfortunately there is a complete lack of consensus on this issue.
In fact Berk and colleagues [2] identify at least six different notions of
fairness in the context of penal systems. Many of these notions are not
mutually compatible [5], therefore the bias-aware data analysts have to pick
what fairness means to them. This principle is fundamentally flawed as the
notion of fairness should not be subjective. Unfortunately, while the legal
system requires fairness in respect to protected characteristics (e.g. US -
Human Rights Act, EU - European Convention on Human Rights), it doesn’t
provide sensible111The U.S. Supreme Court asserted the principle of disparate
impact, while resisting any sort of mathematical definition (p. 5, [8]). Other
bodies such as the Equal Employment Opportunity Commission [6] have provided
some guidance, but these are not legally binding and leave a lot to be
desired. guidance on what type of statistical parity is required.
A further big hurdle for achieving fairness through statistical parity is the
difficulty of implementing it. Whilst matching the means and standard
deviations between the groups is easy (just a simple elastic transformation of
one group’s outcomes to match the outcomes of the other group; see the sketch
below), it is also a fairly weak notion of fairness [5]. Most other notions of
fairness require
inclusion of a parity optimiser into the DA/ML model - adding a fairness term
to the loss function. This is clearly outside the area of expertise of most
data analysts, as they do not tend to write their own loss functions. It also
renders most standard DA functions or packages, such as scikit-learn, obsolete
as there usually are no reasonable options to customize loss functions.
Therefore this approach causes a significant disturbance to existing DA
pipelines and significantly increases the time commitment from the analysts,
as well as the technical prowess required for such a role.
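For concreteness, the moment-matching transformation mentioned above can be written in a few lines. This is a minimal sketch of the weak mean/standard-deviation notion of parity, not a recommended debiasing method; the function and variable names are our own.

```python
import numpy as np

def match_moments(scores, group, reference_group):
    """Rescale each group's scores to the reference group's mean and std."""
    out = scores.astype(float).copy()
    ref = scores[group == reference_group]
    mu_ref, sd_ref = ref.mean(), ref.std()
    for g in np.unique(group):
        mask = group == g
        mu_g, sd_g = scores[mask].mean(), scores[mask].std()
        out[mask] = (scores[mask] - mu_g) / sd_g * sd_ref + mu_ref
    return out
```

After the transformation every group shares the reference group's first two moments, but higher moments and any within-group effects are untouched, which is exactly why this notion of parity is weak.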
Furthermore, it should not be forgotten that often the data used for ML does
not represent the population well. Optimising for parity on training data
doesn’t guarantee parity at the population level when the training sample is
heavily biased.
### Removing protected characteristics
An approach that avoids many of the issues of the statistical parity is to
remove the information about protected characteristics from the dataset. The
logic behind this issue is beautifully elegant; an agent that is not aware of
the protected characteristics can not discriminate based on those
characteristics. Unfortunately, the realisation of this principle often fails
to deliver on its promises.
An approach that is sometimes applied to remove information on protected
characteristics is a simple exclusion of these labels from the dataset. While
it is widely recognized [14] that this approach is highly flawed, it is still
widespread across the industry (author’s anecdotal evidence). The logic behind
the failure is simple, the information about the protected characteristics is
partially contained in the other characteristics relevant for the decision at
hand [15]. These cases include racially segregated neighbourhoods where the
postcode reveals the race of an individual, or professions that have
traditionally high gender imbalance which reveal gender of the individual. At
the same time, home address or previous profession are highly desirable
information for a whole range of data applications; therefore can not be
excluded from the dataset. It is therefore apparent that this approach cannot
lead to fair and accurate DA outcomes.
A more sophisticated application of the same principle is manipulating the
dataset in a way that removes the ability to predict the protected
characteristics for the individuals in the dataset from the rest of the data.
This approach has been developed over time to a great success [3, 10, 11, 18];
however, some shortcomings are still apparent: 1) the relationship between
protected and other characteristics is certainly not linear like most methods
assume; 2) even in cases where a variable seems to not relate to the protected
characteristic, it may do so via an interaction with another variable; 3) none
of the methods can address continuous characteristics (e.g. age) and even
multiple categories pose significant problems [3]. Proposing a novel method
that addresses these issues is the main objective of this paper.
## 1 Fair Adversarial Networks
A number of methods of removing bias from datasets have been devised, however
they generally fall short at removing non-linear, non-binary and/or
multivariate biases. To address the problems of the current methods this paper
introduces a novel concept of _Fair Adversarial Networks_ [FANs]. Like
Generative Adversarial Networks [GANs] [9] this method consists of two
networks with different objectives trained iteratively. FANs are a system that
creates an unbiased version of a dataset that can subsequently be used with
any analytical tools. There are two main components to this system 1) an
_autoencoder_ [16] function $y=\rho(x,W_{A})$ that provides $y$, a
reconstruction of the data $x$ given autoencoder weights $W_{A}$, and 2) a _Racist
Network_ that provides estimate $\hat{r}$ of the true protected
characteristics $\bar{r}$ (_race_ in this example) from $y$.
Figure 1: Architecture of a Fair Adversarial Network. Two networks are trained
iteratively: the Racist network minimizes error on predicting race (or other
protected characteristic of interest) while the autoencoder jointly minimizes
its reconstruction error and the predictive capability of the Racist network.
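A minimal sketch of the two components in a framework such as PyTorch may help fix ideas. The layer widths and depths are illustrative assumptions (the experiments below only specify that the Racist Network has a single hidden layer of the same width as the dataset), not the architecture actually used.

```python
import torch.nn as nn

class Autoencoder(nn.Module):
    """y = rho(x, W_A): reconstructs the data from a low-dimensional code."""
    def __init__(self, n_features, n_hidden=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, n_hidden), nn.ReLU())
        self.decoder = nn.Linear(n_hidden, n_features)

    def forward(self, x):
        return self.decoder(self.encoder(x))

class RacistNetwork(nn.Module):
    """Adversary producing logits for r-hat, the protected characteristic."""
    def __init__(self, n_features, n_classes):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, n_features), nn.ReLU(),
            nn.Linear(n_features, n_classes),
        )

    def forward(self, y):
        return self.net(y)
```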
The measure of bias we wish to minimize is the true performance, such as the
cross-entropy function $D$, of the Racist Network after full training,
$\bar{D}$, given the autoencoder weights at a given epoch. (Note that the
cross-entropy function $D$ is only appropriate for evaluating classifiers, not
regressors, where a different objective function needs to be used; $\bar{D}$ is
identical to $D$ once the optimal training schedule of the Racist Network has
concluded.) The main problem of this approach is the
complexity of such a bias measure. The measure of bias is required at each
epoch on which the autoencoder is trained, but it is not easy to obtain.
Unlike the discriminator accuracy in GANs, the only way to find the bias of
encoded data is to fully train the racist network - to measure the true
potential of the data to reveal the protected characteristic. Without full
training, there is no guarantee that a failure of the racist network to
predict the protected characteristic (detect bias) is not simply due to recent
changes in the encoding that the racist network has not been able to adapt to
yet. Therefore, to find the bias of the data exactly, we would have to perform
lengthy full-training of the racist network on each epoch of the autoencoder
training, clearly making the algorithm impractical. Furthermore, a guarantee
of optimality for the network hyperparameters would be needed.
We have developed a method for approximating the fully-trained performance
from a single forward pass through the network, $\hat{D}(\hat{r},\bar{r})$,
which is the core of the autoencoder loss function specified by equation 1.
Further required developments to this measure include a general
mechanism to stabilise the adversarial training and a number of regularisers
$\mathcal{R}$ that ensure quality of the final data encoding. While these
developments cannot be shared publicly as they are core to the intellectual
property of illumr Ltd., standard approaches are enough to replicate the debiasing
process on a one-off basis given sufficient hyperparameter tuning.
Formally, the autoencoder minimizes the loss function
$\mathcal{L}_{A}(x,r,W_{A},W_{R})=MSE(y,x)+c\hat{D}(\hat{r},\bar{r})+\mathcal{R}$
(1)
where $MSE(y,x)$ is the reconstruction error (Mean Square Error) of the
autoencoder, $W_{R}$ the Racist Network weights, and $c$ is a constant
balancing the individual terms of the loss function. The Racist Network optimizes
the appropriate loss function such as
$\mathcal{L}_{R}(y,\bar{r},W_{R})=D(\hat{r},\bar{r}),$ (2)
which is a cross entropy function of two vectors $\bar{r}$ and $\hat{r}$. As
mentioned before this cost function ($D$) is different to the estimate of the
performance of the Racist Network as minimized by the autoencoder ($\hat{D}$).
This is because $D$ is a bad approximation of performance after full training
$\bar{D}$.
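To make the two objectives concrete, the following is a minimal PyTorch-style sketch of Eqs. (1) and (2). The function names, the constant `c`, and in particular the substitution of a plain cross entropy for the proprietary full-training estimate $\hat{D}$ are our own illustrative assumptions, not the authors' implementation.

```python
import torch.nn.functional as F

def loss_R(r_hat_logits, r_true):
    # Eq. (2): cross entropy D between predicted and true protected labels.
    return F.cross_entropy(r_hat_logits, r_true)

def loss_A(x, y, r_hat_logits, r_true, c=1.0, reg=0.0):
    # Eq. (1): reconstruction MSE plus c * D_hat plus regularisers R.
    # D_hat is approximated here by the plain cross entropy, which is only a
    # stand-in for the (undisclosed) full-training performance estimate.
    d_hat = F.cross_entropy(r_hat_logits, r_true)
    return F.mse_loss(y, x) + c * d_hat + reg
```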
This system of two networks is trained in an iterative adversarial fashion
similar to GANs:
Algorithm 1 Fair Adversarial Networks: removing bias from data
1: while stopping criterion is not reached do
2:   Update the Autoencoder weights $W_{A}$ using the loss function $\mathcal{L}_{A}(x,r,W_{A},W_{R})$
3:   Obtain reconstructed data $y=\rho(x,W_{A})$
4:   for $k$ steps do
5:     Update the Racist Network weights $W_{R}$ using the loss function $\mathcal{L}_{R}(y,r,W_{R},W_{A})$
6:   end for
7: end while
8: return $y$ $\triangleright$ debiased data
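A minimal sketch of this loop in PyTorch follows; `autoencoder`, `racist_net`, the two optimizers, and the fixed epoch count standing in for the stopping criterion are all illustrative assumptions.

```python
import torch.nn.functional as F

def train_fan(autoencoder, racist_net, opt_A, opt_R, x, r, k=5, epochs=100, c=1.0):
    """Iterative adversarial training, following the structure of Algorithm 1."""
    for _ in range(epochs):                    # stand-in stopping criterion
        # Update the autoencoder weights W_A using L_A (Eq. (1), without R).
        y = autoencoder(x)
        loss_A = F.mse_loss(y, x) + c * F.cross_entropy(racist_net(y), r)
        opt_A.zero_grad(); loss_A.backward(); opt_A.step()
        # Update the Racist Network weights W_R for k steps using L_R (Eq. (2)).
        y = autoencoder(x).detach()            # freeze the current encoding
        for _ in range(k):
            loss_R = F.cross_entropy(racist_net(y), r)
            opt_R.zero_grad(); loss_R.backward(); opt_R.step()
    return autoencoder(x).detach()             # debiased data
```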
Given the success of this training procedure, the end result should be a
dataset that is as similar to the original as possible, while detecting the
undesirable protected characteristic from it should be harder or impossible.
However,
GANs are notoriously hard to train [13].
### 1.1 Convergence
Adversarial training often leads to very unstable or even run-away behaviour
[13]. Here we demonstrate acceptable convergence of our algorithm on five
real-world datasets. While the convergence may still seem fairly suboptimal,
we argue that it is sufficiently good for our purpose. Crucially, we implement
a ratchet mechanism which always preserves the state of the network with the
lowest $\mathcal{L}_{A}$, therefore run-away behaviour after a period of
convergence is not particularly problematic.
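The ratchet mechanism itself is straightforward; a minimal sketch, assuming a PyTorch model, is:

```python
import copy

def ratchet(model, current_loss, best):
    """Snapshot the network whenever L_A reaches a new minimum, so run-away
    behaviour after a period of convergence cannot destroy a good encoding.
    `best` is a dict of the form {"loss": float("inf"), "state": None}."""
    if current_loss < best["loss"]:
        best["loss"] = current_loss
        best["state"] = copy.deepcopy(model.state_dict())
    return best
```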
While we optimize the loss functions $\mathcal{L}_{A}$ and $\mathcal{L}_{R}$,
they are not themselves of interest for our purposes. Instead, what we truly
aim to
achieve is to bring the predictive performance of the racist network after
full training $\bar{D}(\hat{r},\bar{r})$ down to random. For this task we
operationalized $\bar{D}(\hat{r},\bar{r})$ as the best performance of the
Racist Network on the validation data from 3 random initializations, and
subsequent 10,000 epochs of training. The Racist Network used had a single
hidden layer of the same width as the dataset. The random benchmark we are
using is $\bar{D}(ML(\bar{r}),\bar{r})$ i.e. always picking the most likely
category. The other value of importance to us is $MSE(x,y)$ as we aim to
scramble the data as little as possible. Therefore these will be the focus of
the analysis of convergence.
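The random benchmark is simply the accuracy of always predicting the most frequent category; a small sketch (our own, for illustration) is:

```python
import numpy as np

def most_likely_baseline(r_true):
    """Accuracy of ML(r_bar): always picking the most likely category."""
    _, counts = np.unique(r_true, return_counts=True)
    return counts.max() / counts.sum()

# e.g. most_likely_baseline(np.array([0, 0, 0, 1, 2])) == 0.6
```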
Further interesting values to observe include $D(\hat{r},\bar{r})$, the loss
function of the racist network; this, however, does not provide a good
indication of $\bar{D}(\hat{r},\bar{r})$ and is therefore fundamentally
unsuitable to be a part of $\mathcal{L}_{A}$. $\hat{D}(\hat{r},\bar{r})$ is
also of interest: while noisy, it provides good gradients for training as a
part of $\mathcal{L}_{A}$.
Figures 2 to 6 clearly show that $\bar{D}(\hat{r},\bar{r})$ has decreased to
random performance, and also show an orderly behaviour of $MSE(x,y)$, which
correctly approaches minimal distortion of the data. The actual impact on
Data Analytical outcomes will be discussed in another paper that is currently
in preparation.
Figure 2: The Absenteeism at Work data includes personal information and total
absenteeism time over 3 years for employees at a courier company in Brazil. We
have selected age as the undesirable protected characteristic and successfully
removed it. Source: UCI Machine Learning Database. Creators: Andrea
Martiniano, Ricardo Pinto Ferreira, and Renato Jose Sassi. $\bar{D}$ values
denote proportion of correct predictions of race, while all other values are
arbitrarily scaled.

Figure 3: Performance data from schools in New York. The bias removed was a
variable called ’Majority Black/Hispanic’. Source: Kaggle. Creators: PASSNYC.
$\bar{D}$ values denote proportion of correct predictions of race, while all
other values are arbitrarily scaled.

Figure 4: The Heart Disease Dataset consists of blood measurements from a set
of patients with and without heart disease. The bias variable removed was
’Sex’. Source: UCI Machine Learning Database. Creators: Hungarian Institute of
Cardiology, University Hospital Zurich, University Hospital Basel, V.A.
Medical Center Long Beach and Cleveland Clinic Foundation. $\bar{D}$ values
denote proportion of correct predictions of gender, while all other values are
arbitrarily scaled.

Figure 5: The COMPAS Dataset consists of profiles of criminals from Broward
County, Florida. The bias variable removed was ’Race’. Source: Kaggle.
Creators: ProPublica. $\bar{D}$ values denote proportion of correct
predictions of race, while all other values are arbitrarily scaled.

Figure 6: The Communities and Crime Dataset includes statistics on various
communities in the U.S. The bias variable removed was ’Majority Black’,
indicating whether the community has a majority black population. Source: UCI
Machine Learning Database. Creators: Michael Redmond. $\bar{D}$ values denote
proportion of correct predictions of race, while all other values are
arbitrarily scaled.
## 2 Discussion
It is apparent that the outcomes of Data Analytics [DA] and Machine Learning
[ML] are often perpetuating the human bias present in the datasets and
therefore enacting illegal discrimination. Constraining the DA/ML outcomes to
be fair is problematic as there is no universally accepted definition of
fairness while at the same time many of the notions of fairness are very hard
to implement, disrupting DA pipelines and putting a significant extra load on
DA resources. One apparent solution is to remove bias from the data before
proceeding with DA as usual; however, these methods generally cannot account
for non-linear, non-binary, and/or multivariate relationships between race (or
another biasing factor) and the rest of the data [3]. This paper introduced
_Fair Adversarial Networks_ [FANs] as a method that compensates for these
shortcomings and provides a very significant improvement in both fairness and
ease of use.
There is no universally accepted, or even legally binding, notion of fairness
that can be used for optimisation, while at the same time many definitions of
fairness are mutually exclusive. Unless a definition of fairness is provided
by regulatory bodies, it seems unlikely that optimising for parity between
groups on a fairness measure can be a useful bias-removal approach. Even if a
mathematical notion of fairness becomes agreed upon, there is no guarantee
that optimizing for parity on a training set achieves parity at the
population level.
Furthermore, the need to optimise for fairness introduces an extra term into
the cost function of any optimisation procedure, which is not compatible with
current DA tools. Data analysts are currently not required to write their own
loss functions or optimisation procedures, therefore including such a
requirement would damage their ability to perform their jobs. Even if the data
analysts become comfortable with this requirement, the huge time overhead of
this task makes it unlikely it will be performed in practise.
Removing information about protected characteristics from the data is an
attractive alternative. It is philosophically very simple: without knowledge
of membership in a protected group (such as race) it should be impossible to
discriminate based on it, which removes the subjective nature of treating
bias. It can also be made operationally simple: a single preprocessing step
can remove bias from the data while the rest of the DA/ML pipeline remains
exactly the same.
However, many methods for removing bias from data fail. Removing the column
containing the protected characteristic is clearly insufficient due to the
presence of proxy variables. A number of methods go beyond removing the column
containing the protected characteristic and attempt to de-correlate the other
characteristics from the protected ones. However, these approaches generally
cannot account for non-linear, non-binary, and/or multivariate relationships
between the characteristics [3]. To counter these problems this paper has
introduced FANs.
FANs are a version of adversarial networks with two main components: 1) an
_autoencoder_ that encodes a fair representation of the data, and 2) a _Racist
Network_, the adversary predicting the protected characteristics (e.g. race)
from the data, whose performance needs to be minimised.
The autoencoder’s cost function consists of the reconstruction error of the
transformed data and the performance of the racist network, both of which are
to be minimised. The Racist Network simply tries to achieve the best
predictive performance on the protected characteristic, using the
autoencoder’s output as its input. This system produces a data representation
that is most similar to the original data, but from which the protected
characteristics cannot be predicted (or are at least harder to predict). Any
analytical methods can be subsequently used with such a representation.
This paper describes the principles of FANs and demonstrates on five real-
world, disparate datasets that FANs can indeed achieve their goal of removing
the ability to predict the protected characteristic from the data, while
minimising the difference between the original data and its fair
reconstruction.
The problematic part of training FANs is that we aim to remove the
_possibility of predicting_ the protected characteristic from the
reconstructed data. The possibility of predicting implies full training, not
just the current state of the adversarial process. The success of our
algorithm across five disparate datasets crucially
relies on our approximation of full-training performance of a neural network
from a single forward pass. While this approximation will remain our trade
secret, it is possible to replicate our success on a one-off basis using
conventional approaches and heavy parameter-tuning.
This paper has demonstrated that FANs can consistently succeed at removing
bias from datasets while keeping the necessary alterations of the data to the
minimum. FANs are particularly valuable because they can be used as a generic
and easy to use data pre-processing step, allowing all Data Analysts to
account for biases in their datasets without significant overheads.
### Limitations and Future Directions
It is necessary to mention that under special circumstances FANs have the
potential to make things worse for discriminated groups. FANs will remove all
kinds of discrimination, including positive discrimination, which might itself
be a desirable way of breaking up vicious cycles of deprivation in some areas.
Optimising for two metrics at the same time is a compromise. The present paper
has not attempted to analyse the residual discrimination which, while
statistically insignificant, is likely still present. On the other hand, the
statistical insignificance can be seen as the criterion for success. Either
way, it is apparent that FANs provide a step in the right direction with
respect to increasing fairness.
Lastly, it is unclear what the correct time to stop the training of Neural
Networks is, and what the right hyper-parameters are. It is certain that
neither our neural architecture nor our hyper-parameter choice is optimal.
Especially with GANs, one wishes to have interactive, optimal control of
hyper-parameters throughout training to stabilize the process and ensure
convergence. Therefore
we are now exploring Reinforcement Learning as a method to control these
factors interactively throughout training.
## References
* [1] Solon Barocas and Andrew D Selbst. Big data’s disparate impact. Cal. L. Rev., 104:671, 2016.
* [2] Richard Berk, Hoda Heidari, Shahin Jabbari, Michael Kearns, and Aaron Roth. Fairness in criminal justice risk assessments: the state of the art. arXiv preprint arXiv:1703.09207, 2017.
* [3] Tolga Bolukbasi, Kai-Wei Chang, James Y Zou, Venkatesh Saligrama, and Adam T Kalai. Man is to computer programmer as woman is to homemaker? debiasing word embeddings. In Advances in Neural Information Processing Systems, pages 4349–4357, 2016.
* [4] Aylin Caliskan, Joanna J Bryson, and Arvind Narayanan. Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334):183–186, 2017.
* [5] Alexandra Chouldechova. Fair prediction with disparate impact: A study of bias in recidivism prediction instruments. Big data, 5(2):153–163, 2017.
* [6] Equal Employment Opportunity Commission et al. Uniform guidelines on employee selection procedures. Fed Register, 1:216–243, 1990.
* [7] Virginia Eubanks. Automating inequality: How high-tech tools profile, police, and punish the poor. St. Martin’s Press, 2018.
* [8] Michael Feldman, Sorelle A Friedler, John Moeller, Carlos Scheidegger, and Suresh Venkatasubramanian. Certifying and removing disparate impact. In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 259–268. ACM, 2015.
* [9] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in neural information processing systems, pages 2672–2680, 2014.
* [10] Sara Hajian, Josep Domingo-Ferrer, and Antoni Martinez-Balleste. Discrimination prevention in data mining for intrusion and crime detection. In Computational Intelligence in Cyber Security (CICS), 2011 IEEE Symposium on, pages 47–54. IEEE, 2011.
* [11] Moritz Hardt, Eric Price, Nati Srebro, et al. Equality of opportunity in supervised learning. In Advances in neural information processing systems, pages 3315–3323, 2016.
* [12] Toshihiro Kamishima, Shotaro Akaho, Hideki Asoh, and Jun Sakuma. Fairness-aware classifier with prejudice remover regularizer. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pages 35–50. Springer, 2012.
* [13] Naveen Kodali, Jacob Abernethy, James Hays, and Zsolt Kira. On convergence and stability of gans. arXiv preprint arXiv:1705.07215, 2017.
* [14] Cathy O’Neil. Weapons of math destruction: How big data increases inequality and threatens democracy. Broadway Books, 2016.
* [15] Dino Pedreshi, Salvatore Ruggieri, and Franco Turini. Discrimination-aware data mining. In Proceedings of the 14th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 560–568. ACM, 2008.
* [16] Salah Rifai, Pascal Vincent, Xavier Muller, Xavier Glorot, and Yoshua Bengio. Contractive auto-encoders: Explicit invariance during feature extraction. In Proceedings of the 28th International Conference on International Conference on Machine Learning, pages 833–840. Omnipress, 2011.
* [17] Margery Austin Turner. Mortgage lending discrimination: A review of existing evidence. 1999.
* [18] Rich Zemel, Yu Wu, Kevin Swersky, Toni Pitassi, and Cynthia Dwork. Learning fair representations. In International Conference on Machine Learning, pages 325–333, 2013.
* [19] James Zou and Londa Schiebinger. Ai can be sexist and racist—it’s time to make it fair, 2018.
|
2024-09-04T02:54:55.072158 | 2020-02-27T14:53:44 | 2002.12146 | {
"authors": "CMS and TOTEM Collaborations",
"full_text_license": null,
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"provenance": "arxiv-papers-0000.json.gz:25915",
"submitter": "The CMS Collaboration",
"url": "https://arxiv.org/abs/2002.12146"
} | arxiv-papers | FSQ-12-033
# Measurement of single-diffractive dijet production in proton-proton
collisions at $\sqrt{s}=8\TeV$ with the CMS and TOTEM experiments
The CMS and TOTEM Collaborations
###### Abstract
Measurements are presented of the single-diffractive dijet cross section and
the diffractive cross section as a function of the proton fractional momentum
loss $\xi$ and the four-momentum transfer squared $t$. Both processes
$\Pp\Pp\to\Pp\PX$ and $\Pp\Pp\to\PX\Pp$, with the proton scattering to either
side of the interaction point, are measured, where $\PX$ includes at least two
jets; the results of the two processes are averaged. The analyses are based on
data collected simultaneously with the CMS and TOTEM detectors at the LHC in
proton-proton collisions at $\sqrt{s}=8\TeV$ during a dedicated run with
$\beta^{\ast}=90\unit{m}$ at low instantaneous luminosity and correspond to an
integrated luminosity of $37.5\nbinv$. The single-diffractive dijet cross
section $\sigma^{\Pp\PX}_{\mathrm{jj}}$, in the kinematic region $\xi<0.1$,
$0.03<\abs{t}<1\GeV^{2}$, with at least two jets with transverse momentum
$\pt>40\GeV$, and pseudorapidity $\abs{\eta}<4.4$, is $21.7\pm
0.9\stat\,^{+3.0}_{-3.3}\syst\pm 0.9\lum\unit{nb}$. The ratio of the single-
diffractive to inclusive dijet yields, normalised per unit of $\xi$, is
presented as a function of $x$, the longitudinal momentum fraction of the
proton carried by the struck parton. The ratio in the kinematic region defined
above, for $x$ values in the range $-2.9\leq\log_{10}x\leq-1.6$, is
$R=(\sigma^{\Pp\PX}_{\mathrm{jj}}/\Delta\xi)/\sigma_{\mathrm{jj}}=0.025\pm
0.001\stat\pm 0.003\syst$, where $\sigma^{\Pp\PX}_{\mathrm{jj}}$ and
$\sigma_{\mathrm{jj}}$ are the single-diffractive and inclusive dijet cross
sections, respectively.
The results are compared with predictions from models of diffractive and
nondiffractive interactions. Monte Carlo predictions based on the HERA
diffractive parton distribution functions agree well with the data when
corrected for the effect of soft rescattering between the spectator partons.
We dedicate this paper to the memory of our colleague and friend Sasha
Proskuryakov, who started this analysis but passed away before it was
completed. His contribution to the study of diffractive processes at CMS is
invaluable.
## 0.1 Introduction
In proton-proton ($\Pp\Pp$) collisions a significant fraction of the total
cross section is attributed to diffractive processes. Diffractive events are
characterised by at least one of the two incoming protons emerging from the
interaction intact or excited into a low-mass state, with only a small energy
loss. These processes can be explained by the exchange of a virtual object,
the so-called Pomeron, with the vacuum quantum numbers [1]; no hadrons are
therefore produced in a large rapidity range adjacent to the scattered proton,
yielding a so-called large rapidity gap (LRG). A subleading exchange of
Reggeons, as opposed to a Pomeron, also contributes to diffractive scattering,
especially for large values of the proton fractional momentum loss $\xi$, and
is required to describe diffractive data [2, 3, 4, 5]. While Pomerons mainly
consist of gluons, Reggeons are mesons composed of a quark-antiquark pair.
Hard diffraction has been studied in hadron-hadron collisions at the SPS at
CERN [6], the Tevatron at Fermilab [7, 8, 9, 10, 11], the CERN LHC [12, 13],
and in electron-proton ($\Pe\Pp$) collisions at the HERA collider at DESY [2,
3, 4, 5, 14]. Hard diffractive processes can be described in terms of the
convolution of diffractive parton distribution functions (dPDFs) and hard
scattering cross sections, which can be calculated in perturbative quantum
chromodynamics (pQCD). The dPDFs have been determined by the HERA experiments
[2, 4, 5] by means of fits to inclusive diffractive deep inelastic scattering
data. The dPDFs have been successfully applied to describe different hard
diffractive processes in $\Pe\Pp$ collisions. This success is based on the
factorisation theorem proven for $\Pe\Pp$ interactions at large $Q^{2}$, and
on the validity of the QCD evolution equations for the dPDFs [15, 16, 17].
However, in hard diffractive hadron-hadron collisions factorisation is broken
because of the presence of soft rescattering between the spectator partons.
This leads to a suppression of the observed diffractive cross section in
hadron-hadron collisions [18]. The suppression factor, often called the
rapidity gap survival probability ($\langle S^{2}\rangle$), is $\sim$10% at
the Tevatron energies [9].
Experimentally, diffractive events can be selected either by exploiting the
presence of an LRG or by measuring the scattered proton. The latter method is
superior since it gives a direct measurement of $t$, the squared four momentum
transfer at the proton vertex, and suppresses the contribution from events in
which the proton dissociates into a low-mass state. The CMS Collaboration has
previously reported a measurement of diffractive dijet production at
$\sqrt{s}=7\TeV$ [12] that did not include information on the scattered
proton. The ATLAS Collaboration has also measured dijet production with large
rapidity gaps at $\sqrt{s}=7\TeV$ [13].
This article presents a measurement of dijet production with a forward, high
longitudinal momentum proton at $\sqrt{s}=8\TeV$. It corresponds to the
processes $\Pp\Pp\to\Pp\PX$ or $\Pp\Pp\to\PX\Pp$, with the proton scattering
to either side of the interaction and $\PX$ including at least two jets. The
system $\PX$ is measured in CMS and the scattered proton in the TOTEM roman
pots (RPs). This process is referred to as single-diffractive dijet
production.
The single-diffractive dijet production cross section is measured as a
function of $\xi$ and $t$ in the kinematic region $\xi<0.1$ and
$0.03<\abs{t}<1\GeV^{2}$, in events with at least two jets, each with
transverse momentum $\pt>40\GeV$ and pseudorapidity $\abs{\eta}<4.4$. The
ratio of the single-diffractive to inclusive dijet cross sections is measured
as a function of $x$, the longitudinal momentum fraction of the proton carried
by the struck parton for $x$ values in the range $-2.9\leq\log_{10}x\leq-1.6$.
This is the first measurement of hard diffraction with a measured proton at
the LHC.
## 0.2 The CMS and TOTEM detectors
The central feature of the CMS apparatus is a superconducting solenoid of 6 m
internal diameter, providing a magnetic field of 3.8 T. Within the
superconducting solenoid volume are a silicon pixel and strip tracker, a lead
tungstate crystal electromagnetic calorimeter (ECAL), and a brass and
scintillator hadron calorimeter (HCAL), each composed of a barrel and two
endcap sections. Forward calorimeters extend the pseudorapidity coverage
provided by the barrel and endcap detectors. The forward hadron (HF)
calorimeter uses steel as an absorber and quartz fibers as the sensitive
material. The two HFs are located 11.2 m from the interaction region, one on
each end, and together they provide coverage in the range
$3.0<\abs{\eta}<5.2$. Muons are measured in gas-ionisation detectors embedded
in the steel flux-return yoke outside the solenoid.
When combining information from the entire detector, including that from the
tracker, the jet energy resolution amounts typically to 15% at 10\GeV, 8% at
100\GeV, and 4% at 1\TeV, to be compared to about 40, 12, and 5%,
respectively, obtained
when ECAL and HCAL alone are used. In the region $\abs{\eta}<1.74$, the HCAL
cells have widths of 0.087 in pseudorapidity and 0.087 in azimuth ($\phi$). In
the $\eta$-$\phi$ plane, and for $\abs{\eta}<1.48$, the HCAL cells map on to
$5{\times}5$ arrays of ECAL crystals to form calorimeter towers projecting
radially outwards from close to the nominal interaction point. For
$\abs{\eta}>1.74$, the coverage of the towers increases progressively to a
maximum of 0.174 in $\Delta\eta$ and $\Delta\phi$. Within each tower, the
energy deposits in the ECAL and HCAL cells are summed to define the
calorimeter tower energies, subsequently used to provide the energies and
directions of hadronic jets.
The reconstructed vertex with the largest value of summed charged-particle
track $\pt^{2}$ is taken to be the primary interaction vertex. Tracks are
clustered based on the $z$ coordinate of the track at the point of closest
approach to the beamline. In the vertex fit, each track is assigned a weight
between 0 and 1, which reflects the likelihood that it genuinely belongs to
the vertex. The number of degrees of freedom in the fit is strongly correlated
with the number of tracks arising from the interaction region.
The particle-flow (PF) algorithm [19] aims to reconstruct and identify each
individual particle in an event with an optimised combination of information
from the various elements of the CMS detector. The energy of photons is
directly obtained from the ECAL measurement, corrected for zero-suppression
effects. The energy of electrons is determined from a combination of the
electron momentum at the primary interaction vertex as determined by the
tracker, the energy of the corresponding ECAL cluster, and the energy sum of
all bremsstrahlung photons spatially compatible with originating from the
electron track. The energy of muons is obtained from the curvatures of the
corresponding track. The energy of charged hadrons is determined from a
combination of their momentum measured in the tracker and the matching ECAL
and HCAL energy deposits, corrected for zero-suppression effects and for the
response function of the calorimeters to hadronic showers. Finally, the energy
of neutral hadrons is obtained from the corresponding corrected ECAL and HCAL
energies.
Hadronic jets are clustered from these reconstructed particles using the
anti-$k_{\mathrm{T}}$ algorithm [20, 21]. The jet momentum is determined as
the vectorial sum of all
PF candidate momenta in the jet, and is found from simulation to be within 5
to 10% of the true momentum over the whole spectrum and detector acceptance.
Jet energy corrections are derived from simulation, and are confirmed with in
situ measurements of the energy balance in dijet, multijet, photon + jet, and
Z + jet events [22]. The jet resolution in the simulation is scaled upwards by
around 15% in the barrel region, 40% in the endcaps and 20% in the forward
region to match the resolution in the data. Additional selection criteria are
applied to each event to remove spurious jet-like features originating from
isolated noise patterns in some HCAL regions [23].
A more detailed description of the CMS detector, together with a definition of
the coordinate system used and the relevant kinematic variables, can be found
in Ref. [24].
The TOTEM experiment [25, 26] is located at the LHC interaction point (IP) 5
together with the CMS experiment. The RP system is the subdetector relevant
for measuring scattered protons. The RPs are movable beam pipe insertions that
approach the LHC beam very closely (to within a few mm) to detect protons
scattered at very
small angles or with small $\xi$. The proton remains inside the beam pipe and
its trajectory is measured by tracking detectors installed inside the RPs.
They are organised in two stations placed symmetrically around the IP; one in
LHC sector 45 (positive $z$), the other in sector 56 (negative $z$). Each
station is formed by two units: near (215 m from the IP) and far (220 m from
the IP). Each unit includes three RPs: one approaching the beam from the top,
one from the bottom, and one horizontally. Each RP hosts a stack of 10 silicon
strip sensors (pitch 66\mum) with a strongly reduced insensitive region at the
edge facing the beam (a few tens of \mum). Five of these planes are oriented
with
the silicon strips at a $+45^{\circ}$ angle with respect to the bottom of the
RP and the other five have the strips at a $-45^{\circ}$ angle.
The beam optics relates the proton kinematics at the IP and at the RP
location. A proton emerging from the interaction vertex ($x^{\ast}$,
$y^{\ast}$) at horizontal and vertical angles $\theta_{x}^{\ast}$ and
$\theta_{y}^{\ast}$, with a fractional momentum loss $\xi$, is transported
along the outgoing beam through the LHC magnets. It arrives at the RPs at the
transverse position:
$\displaystyle x(z_{\mathrm{RP}})$
$\displaystyle=L_{x}(z_{\mathrm{RP}})\,\theta_{x}^{\ast}+v_{x}(z_{\mathrm{RP}})\,x^{\ast}-D_{x}(z_{\mathrm{RP}})\,\xi,$
(1) $\displaystyle y(z_{\mathrm{RP}})$
$\displaystyle=L_{y}(z_{\mathrm{RP}})\,\theta_{y}^{\ast}+v_{y}(z_{\mathrm{RP}})\,y^{\ast}-D_{y}(z_{\mathrm{RP}})\,\xi,$
relative to the beam centre. This position is determined by the optical
functions, characterising the transport of protons in the beamline and
controlled via the LHC magnet currents. The effective length $L_{x,y}(z)$,
magnification $v_{x,y}(z)$ and horizontal dispersion $D_{x}(z)$ quantify the
sensitivity of the measured proton position to the scattering angle, vertex
position, and fractional momentum loss, respectively. The dispersion in the
vertical plane, $D_{y}$, is nominally zero.
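As an illustration of Eq. (1), the transport can be written in a few lines of code; the `optics` container and its keys are hypothetical, standing in for the parametrised optical functions at a given RP.

```python
def rp_position(theta_x, theta_y, x_star, y_star, xi, optics):
    """Transverse proton position at a Roman Pot, following Eq. (1).
    `optics` holds the optical functions at z_RP: effective lengths L_x, L_y,
    magnifications v_x, v_y, and the horizontal dispersion D_x (D_y ~ 0)."""
    x_rp = optics["L_x"] * theta_x + optics["v_x"] * x_star - optics["D_x"] * xi
    y_rp = optics["L_y"] * theta_y + optics["v_y"] * y_star
    return x_rp, y_rp
```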
For the present measurement, a special beam optical setup with
$\beta^{\ast}=90\unit{m}$ was used, where $\beta^{\ast}$ is the value of the
amplitude function of the beam at the IP. This optical setup features
parallel-to-point focussing ($v_{y}\sim 0$) and large $L_{y}$, making $y$ at
RP directly proportional to $\theta_{y}^{\ast}$, and an almost vanishing
$L_{x}$ and $v_{x}$, implying that any horizontal displacement at the RP is
approximately proportional to $\xi$. Protons can hence be measured with large
detector acceptance in the vertical RPs that approach the beam from the top
and bottom.
To reduce the impact of imperfect knowledge of the optical setup, a
calibration procedure [27] has been applied. This method uses elastic
scattering events and various proton observables to determine fine corrections
to the optical functions presented in Eq. (1). For the RP alignment, a three-
step procedure [26] has been applied: beam-based alignment prior to the run
(as for the LHC collimators) followed by two offline steps. First, track-based
alignment for the relative positions among RPs, and second, alignment with
elastic events for the absolute position with respect to the beam. The final
uncertainties per unit (common for top and bottom RPs) are: 2\mum (horizontal
shift), 100\mum (vertical shift), and 0.2 mrad (rotation about the beam axis).
The kinematic variables ($\xi$, $\theta_{x}^{\ast}$, $\theta_{y}^{\ast}$ as
well as $t$) are reconstructed with the use of parametrised proton transport
functions [26]. The values of the optical functions vary with $\xi$, an effect
that is taken into account by the optics parametrisation. The details of the
reconstruction algorithms and optics parametrisation are discussed in Refs.
[26, 28]. The momentum loss reconstruction depends mostly on the horizontal
dispersion, which is determined with a precision better than 10%. The
scattering angle resolution depends mainly on the angular beam divergence and
in the horizontal plane also on the detector resolution, whereas the momentum
loss resolution depends mainly on the optics [29]. The $\xi$ resolution is
about $\sigma(\xi)=0.7\%$, and the $\theta_{y}^{\ast}$ and
$\theta_{x}^{\ast}$ resolutions are 2.4$\mu$rad and 25$\mu$rad, respectively.
## 0.3 Event kinematics
Figure 1 shows a schematic diagram of the single-diffractive reaction
$\Pp\Pp\to\PX\Pp$ with $\PX$ including two high-$\pt$ jets. Single-diffractive
dijet production is characterised by the presence of a high-energy proton,
which escapes undetected by the CMS detector, and the system $\PX$, which
contains high-$\pt$ jets, separated from the proton by an LRG.
Figure 1: Schematic diagram of single-diffractive dijet production. The
exchange of a virtual object with the vacuum quantum numbers (a Pomeron) is
indicated by the symbol IP. The diagram shows an example of the
$\Pg\Pg\to\text{dijet}$ hard scattering process; the $\Pq\Pq$ and $\Pg\Pq$
initial states also contribute.
The proton is scattered at small angles, has small fractional momentum loss
$\xi=1-{\abs{\mathbf{p}_{f}}}/{\abs{\mathbf{p}_{i}}}$, and small absolute
value of the 4-momentum transfer squared $t=(p_{f}-p_{i})^{2}$, where $p_{i}$
and $p_{f}$ are the four-momenta of the incoming and outgoing protons,
respectively. The scattered proton does not leave the beam pipe and can only
be detected by using the TOTEM RP detectors, which make a direct measurement
of $t$ and $\xi$ (hereafter referred to as $\xi_{\text{\tiny TOTEM}}$).
If only CMS information is used, as in Ref. [12], $\xi$ can be estimated only
from the energies and longitudinal momenta of the particles measured in CMS:
${\xi}_{\text{\tiny CMS}}^{\pm}=\frac{\sum_{i}\left(E^{i}\pm
p_{z}^{i}\right)}{\sqrt{s}},$ (2)
where the sum is carried out with PF objects. The positive (negative) sign
corresponds to the scattered proton moving towards the positive (negative) $z$
direction. In this case, $t$ cannot be measured.
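A minimal sketch of Eq. (2), assuming each PF candidate is given as a dictionary with energy `E` and longitudinal momentum `pz` in GeV:

```python
SQRT_S = 8000.0  # centre-of-mass energy in GeV

def xi_cms(pf_candidates):
    """Eq. (2): xi estimated from the PF objects measured in CMS."""
    xi_plus = sum(c["E"] + c["pz"] for c in pf_candidates) / SQRT_S
    xi_minus = sum(c["E"] - c["pz"] for c in pf_candidates) / SQRT_S
    return xi_plus, xi_minus
```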
The combination of the limited CMS pseudorapidity coverage ($\abs{\eta}<5$)
and the detector inefficiency causes $\xi_{\text{\tiny CMS}}$ to be smaller
than $\xi_{\text{\tiny TOTEM}}$ in general, $\xi_{\text{\tiny
CMS}}-\xi_{\text{\tiny TOTEM}}\leq 0$. However, the limited detector
resolution may cause $\xi_{\text{\tiny CMS}}$ to be larger than
$\xi_{\text{\tiny TOTEM}}$.
The momentum fraction of the partons initiating the hard scattering, $x^{+}$
and $x^{-}$, can be estimated from the energies and longitudinal momenta of
the measured jets as:
$x^{\pm}=\frac{\sum_{\text{\tiny jets}}\left(E^{\text{\tiny jet}}\pm
p_{z}^{\text{\tiny jet}}\right)}{\sqrt{s}},$ (3)
where the sum is carried out over the two highest transverse momentum jets in
the event, and an additional third jet, if present. The positive (negative)
sign corresponds to the incoming proton moving towards the positive (negative)
$z$ direction.
Finally, the fraction $\beta$ of the Pomeron momentum carried by the
interacting parton is measured from the values of $x^{\pm}$ and
$\xi_{\text{\tiny TOTEM}}$ as $\beta=x^{\pm}/\xi_{\text{\tiny TOTEM}}$.
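Analogously, Eq. (3) and the definition of $\beta$ can be sketched as follows, again with jets given as `E`/`pz` dictionaries in GeV:

```python
SQRT_S = 8000.0  # centre-of-mass energy in GeV

def x_pm(jets):
    """Eq. (3): parton momentum fractions from the two leading jets,
    plus a third jet with pt > 20 GeV if present."""
    x_plus = sum(j["E"] + j["pz"] for j in jets) / SQRT_S
    x_minus = sum(j["E"] - j["pz"] for j in jets) / SQRT_S
    return x_plus, x_minus

def beta(x, xi_totem):
    """Fraction of the Pomeron momentum carried by the interacting parton."""
    return x / xi_totem
```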
## 0.4 Data samples
The data were collected in July 2012 during a dedicated run with low
probability ($\sim$6–10%) of overlapping $\Pp\Pp$ interactions in the same
bunch crossing (pileup) and a nonstandard $\beta^{*}=90\unit{m}$ beam optics
configuration. These data correspond to an integrated luminosity of
$\lumi=37.5\nbinv$. Events are selected by trigger signals that are delivered
simultaneously to the CMS and TOTEM detectors. The first level of the CMS
trigger system (L1) is used. The L1 signal is propagated to the TOTEM
electronics to enable the simultaneous readout of the CMS and TOTEM
subdetectors. The CMS orbit-counter reset signal, delivered to the TOTEM
electronics at the start of the run, assures the time synchronisation of the
two experiments. The CMS and the TOTEM events are combined offline based on
the LHC orbit and bunch numbers.
## 0.5 Monte Carlo simulation
The simulation of nondiffractive dijet events is performed with the pythia6
(version 6.422) [30], pythia8 (version 8.153) [31], and herwig6 [32] Monte
Carlo (MC) event generators. The underlying event is simulated in pythia6 with
tune Z2* [33] and in pythia8 with tunes 4C [34], CUETP8M1, and CUETP8S1 [35].
Single-diffractive dijet events are simulated with the pythia8 and pomwig
(version 2.0) [36] generators. Hard diffraction is simulated in pythia8 using
an inclusive diffraction model, where both low- and high-mass systems are
generated [37]. High-mass diffraction is simulated using a perturbative
description. Pomeron parton densities are introduced and the diffractive
process is modelled as a proton-Pomeron scattering at a reduced centre-of-mass
energy. The default generator settings are used, including that for the
proton-Pomeron total cross section. Multiparton interactions (MPI) are
included within the proton-Pomeron system to provide cross sections for
parton-parton interactions. In this model, the presence of secondary
interactions does not lead to a suppression of the visible diffractive cross
section.
Additionally, pythia8 implements a model to simulate hard-diffractive events
based on a direct application of dPDFs, and a dynamical description of the
rapidity gap survival probability in diffractive hadron-hadron interactions
[38]. In this model an event is classified as diffractive only when no MPI are
generated. We refer to this implementation as the dynamic gap (DG) model.
Single-diffractive dijet events using the inclusive diffraction model are
simulated with pythia8, tunes 4C and CUETP8M1. The simulation of diffractive
dijet events using the DG model is performed with pythia8 version 8.223 [38]
with the underlying event tune CUETP8M1. These pythia8 tunes give a fair
description of the charged-particle pseudorapidity and distributions in a
sample with a large fraction of single-diffractive inelastic events [39, 35,
40].
The pomwig generator is based on herwig6 and implements dPDFs to simulate
hard-diffractive processes. The simulation uses dPDFs from a fit to deep
inelastic scattering data (H1 fit B [2]). The pomwig generator uses a next-to-
leading order dPDF fit, whereas pythia8 uses a leading order dPDF fit. When
using pomwig, a constant factor $\langle S^{2}\rangle=7.4\%$ is applied to
account for the rapidity gap survival probability leading to the suppression
of the diffractive cross section. This value is calculated from the ratio of
the measured diffractive cross section and the prediction from pomwig, as
described in Section 0.8.2. Both Pomeron and Reggeon exchange contributions
are generated. Reggeon exchange is not simulated in pythia8.
To improve the description of the data by the MC samples, correction factors
are applied event-by-event as a function of $\beta$, by a reweighting
procedure. The correction modifies the event distribution as a function of
$\beta$ by up to 40%, and the $\log_{10}x$ and $\xi$ distributions by as much
as 30% and 8%, respectively. The correction has a negligible effect on the $t$
distribution.
The generated events are processed through the simulation of the CMS detector,
based on Geant4 [41], and reconstructed in the same manner as the data. The
acceptance and resolution of the TOTEM RP detectors are parametrised as a
function of the proton kinematics, as discussed below. All samples are
simulated without pileup.
### 0.5.1 Roman pot detectors acceptance and resolution
The proton path from the IP to the TOTEM RPs is calculated using a
parametrisation of the LHC optics [27]. To obtain a realistic simulation of
the scattered proton, the following procedure is used:
* •
Proton transport: The simulation of the RP detectors acceptance is
parametrised in terms of the vertex position, the proton scattering angles at
the vertex $\theta_{x}^{\ast}$ and $\theta_{y}^{\ast}$, and $\xi$. The
incident beam energy spread and beam divergence are also simulated [29].
* •
Reconstruction of $t$ and $\xi$: The detector-level distributions of $t$ and
$\xi$ are obtained from the scattering angles $\theta_{x}^{\ast}$ and
$\theta_{y}^{\ast}$, where the correlation between the $\xi$ and
$\theta_{x}^{\ast}$ uncertainties is taken into account [26]. The generated
values of $\theta_{x}^{\ast}$ and $\theta_{y}^{\ast}$ are spread by
$25\unit{$\mu$rad}$ and $2.4\unit{$\mu$rad}$, respectively. These values
include the effects of detector resolution, as well as those of the beam
optics and the beam divergence (a minimal smearing sketch is shown after this
list).
* •
Proton reconstruction inefficiency: The track reconstruction in the RPs may
fail for several reasons: inefficiency of the silicon sensors, interaction of
the proton with the RP mechanics, or the simultaneous presence of a beam halo
particle or a proton from a pileup interaction. The silicon strips of the
detectors in an RP are oriented in two orthogonal directions; this allows for
good rejection of inclined background tracks, but makes it very difficult to
reconstruct more than one track almost parallel to the beam direction [26].
These uncorrelated inefficiencies are evaluated from elastic scattering data
[29], and amount to $\sim$6%. To correct for this, an extra normalisation
factor is applied, obtained separately for protons traversing the RPs on
either side of the IP.
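The smearing step of the procedure above can be sketched in a few lines (illustrative only; the actual parametrisation also folds in acceptance and optics effects):

```python
import random

def smear_angles(theta_x, theta_y):
    """Spread the generated scattering angles by Gaussian resolutions of
    25 and 2.4 microrad, as quoted for theta_x* and theta_y*, respectively."""
    return random.gauss(theta_x, 25e-6), random.gauss(theta_y, 2.4e-6)
```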
## 0.6 Event selection
Dijet events are selected online by requiring at least two jets with
$\pt>20\GeV$ [42]. The efficiency of this trigger selection is estimated with
a sample of minimum bias events, i.e., events collected with a loose trigger
intended to select inelastic collisions with as little bias as possible, and
containing a leading jet with $\pt$, as reconstructed offline, of at least
40\GeV.
The fraction of dijet events accepted by the trigger is calculated as a
function of the subleading jet $\pt$. The efficiency is above $94\%$ for
$\pt>40\GeV$.
The offline selection requires at least two jets with $\pt>40\GeV$ and
$\abs{\eta}<4.4$. Jets are reconstructed from PF objects with the
anti-$k_{\mathrm{T}}$ algorithm with a distance parameter $R=0.5$. The
reconstructed jet energy is
corrected with the procedure described in Ref. [22]. The parton momentum
fractions $x^{+}$ and $x^{-}$ are reconstructed using Eq. (3) from the two
highest transverse momentum jets and an additional third jet, if present. The
latter is selected with $\pt>20\GeV$. In addition, the selection requires at
least one reconstructed primary interaction vertex and at least one
reconstructed proton track in the RP stations. The fit of the reconstructed
vertex is required to have more than four degrees of freedom.
Events with protons in the RP stations on both sides are rejected if their
kinematics are consistent with those of elastic scattering. Elastic scattering
events, which are present in the data sample because of pileup, are identified
by the presence of two proton tracks in opposite directions, in a diagonal
configuration: the protons traverse the two top RPs in sector 45 and the two
bottom RPs in sector 56, or vice versa. The horizontal and vertical scattering
angles are required to match within the measured resolutions. These
requirements are similar to those described in Ref. [29].
To avoid detector edges with rapidly varying efficiency or acceptance, as well
as regions dominated by secondary particles produced by aperture limitations
in the beamline upstream of the RPs, proton track candidates are selected if
the corresponding hit coordinates on the RP stations satisfy the following
fiducial requirements: $0<x<7\mm$ and $8.4<\abs{y}<27\mm$, where $x$ and $y$
indicate the horizontal and vertical coordinates of the hit with respect to
the beam.
To suppress background from secondary particles and pileup in the RPs, the
reconstructed proton track is selected if it is associated to one track
element in both top or both bottom RPs on a given side. The kinematic
requirements $0.03<\abs{t}<1.0\GeV^{2}$ and $0<\xi_{\text{\tiny TOTEM}}<0.1$
are then applied.
For signal events, one expects $\xi_{\text{\tiny CMS}}$ to be smaller than
$\xi_{\text{\tiny TOTEM}}$, $\xi_{\text{\tiny CMS}}-\xi_{\text{\tiny
TOTEM}}\leq 0$ (as discussed in Section 0.3). This selection is imposed to
suppress the contribution of pileup and beam halo events, in which the proton
is uncorrelated with the hadronic final state $\PX$ measured in the CMS
detector. Roughly $6\%$ of signal events are rejected by this requirement, as
estimated from a simulation of single-diffractive dijet production.
Table 0.6 shows the number of events passing each selection. The number of
events with the proton detected in the RPs in sector 45 (56) after all the
selections is 368 (420).
A difference in the yields for events with a proton in sector 45 and 56 could
notably arise from different background contributions, which is discussed in
Section 0.7. Both an imperfect knowledge of the optical functions, especially
the horizontal dispersion, discussed in Section 0.8.1, and statistical
fluctuations of the two mostly independent event samples contribute to the
difference.
Number of events after each selection.

Selection                                                        Sector 45   Sector 56
At least 2 jets ($\pt>40\GeV$, $\abs{\eta}<4.4$)                        427689
Elastic scattering veto                                                 405112
Reconstructed proton                                                      9530
RP and fiducial region                                             2137        3033
$0.03<\abs{t}<1.0\GeV^{2}$, $0<\xi_{\text{\tiny TOTEM}}<0.1$       1393        1806
$\xi_{\text{\tiny CMS}}-\xi_{\text{\tiny TOTEM}}\leq 0$             368         420
## 0.7 Background
The main background is due to the overlap of a $\Pp\Pp$ collision in the CMS
detector and an additional track in the RP stations, originating from either a
beam halo particle or an outgoing proton from a pileup interaction.
Pileup and beam halo events are not simulated, but they are present in the
data. To estimate the pileup and beam halo contribution in the data, a zero
bias sample consisting of events from randomly selected, nonempty LHC bunch
crossings is used. Events with a proton measured in the RP stations and with
any number of reconstructed vertices are selected from the zero bias data set.
Such events are denoted by ZB in the following.
The RP information from events in the zero bias sample is added to diffractive
and nondiffractive events generated with pomwig and pythia6, respectively. The
mixture of MC and ZB events simulates data events in the presence of pileup
and beam halo.
The pomwig sample is normalised assuming a rapidity gap survival probability
factor of 7.4%, as discussed in Section 0.5. The MC and ZB event mixture is
then passed through the selection procedure illustrated in Section 0.6, except
for the requirement ${\xi_{\text{\tiny CMS}}-\xi_{\text{\tiny TOTEM}}\leq 0}$,
which is not applied.
Such mixed events with a proton in the RPs are considered as signal if the
proton originates from the MC simulated sample, or as background if it
originates from the ZB sample. If an event has a proton from both the MC
sample and the ZB sample, the proton with smaller $\xi$ is chosen. However,
the probability of such a combination is small and none of these events pass
all the selections. Figure 2 shows the distribution of $\xi_{\text{\tiny
CMS}}-\xi_{\text{\tiny TOTEM}}$ for the data compared to the MC+ZB event
mixture. The requirement $\xi_{\text{\tiny CMS}}-\xi_{\text{\tiny TOTEM}}\leq
0$ selects signal events and rejects the kinematically forbidden region
populated by the MC+ZB background events (filled histogram). The background
distribution is normalised to the data in the $\xi_{\text{\tiny
CMS}}-\xi_{\text{\tiny TOTEM}}$ region from 0.048 to 0.4, which is dominated
by background events.
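The normalisation of the background shape to the data in the control region can be sketched as follows; the histogram representation (parallel lists of bin contents with common bin `edges`) is an assumption made for illustration.

```python
def normalise_background(bkg, data, edges, lo=0.048, hi=0.4):
    """Scale the MC+ZB background histogram to the data in the
    background-dominated window lo < xi_CMS - xi_TOTEM < hi."""
    sel = [i for i in range(len(bkg)) if edges[i] >= lo and edges[i + 1] <= hi]
    scale = sum(data[i] for i in sel) / sum(bkg[i] for i in sel)
    return [scale * b for b in bkg]
```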
The background is estimated separately for events with a proton traversing the
two top (top-top) or the two bottom (bottom-bottom) RPs on each side. The top-
top and bottom-bottom distributions are similar. Figure 2 shows the sum of the
two contributions.
The background contribution for events with a proton detected in sector 56
(right panel of Fig. 2) is larger than that for events with a proton detected
in sector 45 (left panel of Fig. 2). The remaining contamination of background
in the signal region is estimated to be $15.7\%$ for events in which the
proton is detected in sector 45 and $16.8\%$ for those in which the proton is
detected in sector 56.
Figure 3 shows the distribution of $\xi_{\text{\tiny TOTEM}}$ for the data and
the MC+ZB sample, before and after the $\xi_{\text{\tiny
CMS}}-\xi_{\text{\tiny TOTEM}}\leq 0$ requirement, as well as the distribution
of $t$, after the $\xi_{\text{\tiny CMS}}-\xi_{\text{\tiny TOTEM}}\leq 0$
selection. The sum of the top-top and bottom-bottom combinations is used. The
data and the MC+ZB sample are in good agreement.
Figure 2: Distribution of $\xi_{\text{\tiny CMS}}-\xi_{\text{\tiny TOTEM}}$
for events with a reconstructed proton in sector 45 (left) and sector 56
(right). The data are indicated by solid circles. The blue histogram is the
mixture of pomwig or pythia6 and zero bias (ZB) data events described in the
text. An event with a proton measured in the RPs contributes to the open
histogram (signal) if the proton originates from the MC sample, or to the
filled histogram (background) if it originates from the ZB sample.
Figure 3: Distribution of $\xi_{\text{\tiny TOTEM}}$ before (upper) and after
(middle) the $\xi_{\text{\tiny CMS}}-\xi_{\text{\tiny TOTEM}}$ requirement and
distribution of $t$ after the $\xi_{\text{\tiny CMS}}-\xi_{\text{\tiny
TOTEM}}$ requirement (lower) for events in which the proton is detected in
sector 45 (left) and sector 56 (right). The data are indicated by solid
circles. The blue histogram is the mixture of pomwig or pythia6 and zero bias
(ZB) data events described in the text. An event with the proton measured in
the RPs contributes to the open histogram (signal) if the proton originates
from the MC sample, or to the filled histogram (background) if it originates
from the ZB sample.
An alternative method, used at HERA [4], takes two events randomly chosen from
the data sample. First, $\xi_{\text{\tiny CMS}}$ is sampled from events that
have passed the dijet selection; $\xi_{\text{\tiny TOTEM}}$ is then taken from
events with $\xi_{\text{\tiny CMS}}>0.12$ that have passed the event selection
described in Section 0.6, except for the ${\xi_{\text{\tiny
CMS}}-\xi_{\text{\tiny TOTEM}}}$ requirement, to select proton tracks
considered to be mostly from background. These two values are used to plot the
$\xi_{\text{\tiny CMS}}-\xi_{\text{\tiny TOTEM}}$ distribution, which is
normalised to the data in a region dominated by background. The remaining
contamination in the signal region is $\sim$19% both for events with a proton
detected in sector 45 and for those with a proton in sector 56. The ZB method
is used in this analysis. Half the difference between the results of the two
methods is taken as an estimate of the systematic uncertainty of the
background subtraction procedure.
## 0.8 Results
In this section the measurements of the differential cross sections
$\rd\sigma/{\rd}t$, $\rd\sigma/\rd\xi$, and the ratio $R(x)$ of the single-
diffractive ($\sigma^{\Pp\PX}_{\mathrm{jj}}(x)$) to inclusive dijet cross
sections ($\sigma_{\mathrm{jj}}(x)$) are presented. The ratio $R(x)$,
normalised per unit of $\xi$, is defined by:
$R(x)=\frac{\sigma^{\Pp\PX}_{\mathrm{jj}}(x)/\Delta\xi}{\sigma_{\mathrm{jj}}(x)},$
(4)
where $\Delta\xi=0.1$.
The cross sections are calculated in the kinematic region $\xi<0.1$,
$0.03<\abs{t}<1\GeV^{2}$, with at least two jets at a stable-particle level
with $\pt>40\GeV$ and $\abs{\eta}<4.4$. The ratio $R(x)$ is calculated for $x$
values in the region $-2.9\leq\log_{10}x\leq-1.6$. In the following, the
estimated background is subtracted from the number of single-diffractive dijet
candidates following the procedure described in the previous section.
The differential cross sections for dijet production in bins of $t$ and $\xi$
are evaluated as:
$\displaystyle\frac{\rd\sigma^{\Pp\PX}_{\mathrm{jj}}}{{\rd}t}$
$\displaystyle=\mathcal{U}\left\\{\frac{N^{\Pp\PX}_{\mathrm{jj}}}{\mathcal{L}A_{\text{\tiny
CMS-TOTEM}}{\Delta t}}\right\\},$ (5)
$\displaystyle\frac{\rd\sigma^{\Pp\PX}_{\mathrm{jj}}}{\rd\xi}$
$\displaystyle=\mathcal{U}\left\\{\frac{N^{\Pp\PX}_{\mathrm{jj}}}{\mathcal{L}A_{\text{\tiny
CMS-TOTEM}}{\Delta\xi}}\right\\},$
where $N^{\Pp\PX}_{\mathrm{jj}}$ is the measured number of single-diffractive
dijet candidates per bin of the distribution after subtracting the estimated
background; ${\Delta t}$ and ${\Delta\xi}$ are the bin widths, and
$\mathcal{L}$ is the integrated luminosity. The factors $A_{\text{\tiny CMS-
TOTEM}}$ indicate the acceptance of CMS and TOTEM for single-diffractive dijet
events. Unfolding corrections, represented by the symbol $\mathcal{U}$ in Eq.
(5), are applied to account for the finite resolution of the reconstructed
variables used in the analysis. They are evaluated with pomwig, pythia8 4C and
pythia8 CUETP8M1. The results presented are the average of those obtained with
the different unfolding corrections. The measured cross sections are obtained
by unfolding the data using the D’Agostini method with early stopping [43]. In
this method the regularisation parameter is the number of iterations used,
which is optimised to obtain a relative $\chi^{2}$ variation between
iterations lower than 5%.
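Before unfolding, the per-bin content of Eq. (5) reduces to a simple ratio; a sketch (with illustrative numbers only) is:

```python
def dsigma_dbin(n_events, background, acceptance, lumi, bin_width):
    """Eq. (5) without the unfolding operator U: background-subtracted yield
    divided by acceptance, integrated luminosity, and bin width."""
    return (n_events - background) / (acceptance * lumi * bin_width)

# e.g. dsigma_dbin(100, 16, 0.5, 37.5, 0.1) returns nb per unit bin width
```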
The ratio $R(x)$ of the single-diffractive to inclusive dijet cross sections
is evaluated as a function of $x$ as:
$R(x)=\frac{\sigma^{\Pp\PX}_{\mathrm{jj}}(x)/\Delta\xi}{\sigma_{\mathrm{jj}}(x)}=\frac{\mathcal{U}\left\\{N^{\Pp\PX}_{\mathrm{jj}}/A_{\text{\tiny
CMS-
TOTEM}}\right\\}/\Delta\xi}{\mathcal{U}\left\\{N_{\mathrm{jj}}/A_{\text{\tiny
CMS}}\right\\}},$ (6)
where $N^{\Pp\PX}_{\mathrm{jj}}$ is the number of single-diffractive dijet
candidates with $\xi_{\text{\tiny TOTEM}}<0.1$, and $N_{\mathrm{jj}}$ is the
total number of dijet events without the requirement of a proton detected in
the RPs. This number is dominated by the nondiffractive contribution. The
symbol $A_{\text{\tiny CMS-TOTEM}}$ indicates the acceptance of CMS and TOTEM
for single-diffractive dijet events, evaluated with pomwig, pythia8 4C and
pythia8 CUETP8M1; $A_{\text{\tiny CMS}}$ is the acceptance for nondiffractive
dijet production ($\pt>40\GeV$, $\abs{\eta}<4.4$), evaluated with pythia6,
pythia8 4C, pythia8 CUETP8M1, pythia8 CUETP8S1, and herwig6. The acceptance
includes unfolding corrections to the data with the D’Agostini method with
early stopping, denoted by the symbol $\mathcal{U}$ in Eq. (6).
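Likewise, the ratio of Eq. (6) before unfolding amounts to the following sketch:

```python
def ratio_R(n_sd, a_sd, n_incl, a_incl, delta_xi=0.1):
    """Eq. (6) without the unfolding operator U: per-unit-xi ratio of the
    acceptance-corrected single-diffractive and inclusive dijet yields."""
    return (n_sd / a_sd) / delta_xi / (n_incl / a_incl)
```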
### 0.8.1 Systematic uncertainties
The systematic uncertainties are estimated by varying the selections and
modifying the analysis procedure, as discussed in this Section. Tables 0.8.2
and 0.8.3 summarise the main systematic uncertainties of the single-
diffractive cross section and the ratio of the single-diffractive and
inclusive dijet cross sections, respectively, presented in Sections 0.8.2 and
0.8.3.
* •
Trigger efficiency: The trigger efficiency is calculated as a function of the
subleading jet $\pt$ using a fit to the data. The sensitivity to the trigger
efficiency determination is estimated by varying the fit parameters within
their uncertainties. This variation corresponds to a trigger efficiency that
increases or decreases by roughly 2% at jet $\pt=40\GeV$ and less than 1% at
$\pt=50\GeV$.
* •
Calorimeter energy scale: The reconstruction of $\xi_{\text{\tiny CMS}}$ is
affected by the uncertainty in the calorimeter energy scale and is dominated
by the HF contribution. This uncertainty is estimated by changing the energy
of the PF candidates by $\pm 10\%$ [12, 44].
* •
Jet energy scale and resolution: The energy of the reconstructed jets is
varied according to the jet energy scale uncertainty following the procedure
described in Ref. [22]. The systematic uncertainty in the jet energy
resolution is estimated by varying the scale factors applied to the MC, as a
function of pseudorapidity. The uncertainties obtained from the jet energy
scale and resolution are added in quadrature. The effect of the jet energy
resolution uncertainty amounts to less than 1% of the measured cross section.
* •
Background: Half the difference between the results of the ZB and HERA methods
used to estimate the background, described in Section 0.7, is taken as an
estimate of the systematic uncertainty of the background subtraction.
* •
RP acceptance: The sensitivity to the size of the fiducial region for the
impact position of the proton in the RPs is estimated by modifying its
vertical boundaries by $200\mum$ and by reducing the horizontal requirement by
$1\mm$, to $0<x<6\mm$. Half the difference of the results thus obtained and the
nominal ones is used as a systematic uncertainty. The uncertainties obtained
when modifying the vertical and horizontal boundaries are added in quadrature.
* •
Resolution: The reconstructed variables $t$ and $\xi$ are calculated by
applying two methods: either directly, with a resolution function depending on
each of these variables, or indirectly from the scattering angles
$\theta_{x}^{\ast}$ and $\theta_{y}^{\ast}$. Half the difference between the
results using the two methods is taken as a systematic uncertainty.
* •
Horizontal dispersion: The reconstructed $\xi$ value depends on the optical
functions describing the transport of the protons from the interaction vertex
to the RP stations, and specifically the horizontal dispersion. This
uncertainty is calculated by scaling the value of $\xi$ by $\pm 10\%$. This
value corresponds to a conservative limit of the possible horizontal
dispersion variation with respect to the nominal optics.
* •
$t$-slope: The sensitivity to the modelling of the exponential $t$-slope is
quantified by replacing its value in pomwig by that measured in the data. Half
the difference between the results thus found and the nominal results is used
as an estimate of the uncertainty.
* •
$\beta$-reweighting: Half the difference of the results with and without the
reweighting as a function of $\beta$ in pomwig (as discussed in Section 0.5)
is included in the systematic uncertainty. The effect amounts to less than 1%
of the single-diffractive cross section and less than about 6% of the single-
diffractive to inclusive dijet cross section ratio versus $x$.
* •
Acceptance and unfolding: Half the maximum difference between the single-
diffractive cross section results found by unfolding with pomwig, pythia8 4C,
and pythia8 CUETP8M1 is taken as a further component of the systematic
uncertainty. Likewise for the results obtained with pythia6 Z2*, pythia8 4C,
pythia8 CUETP8M1 and pythia8 CUETP8S1 for the inclusive dijet cross section.
* •
Unfolding regularisation: The regularisation parameter used in the unfolding,
given by the number of iterations in the D’Agostini method used in this
analysis, is optimised by calculating the relative $\chi^{2}$ variation
between successive iterations. The value is chosen such that the $\chi^{2}$
variation is below 5%. The number of iterations for which the relative
variation of $\chi^{2}$ first drops below 2% is also used, and half the
difference with respect to the nominal result is taken as a systematic
uncertainty; a schematic version of this stopping rule is sketched after this
list.
* •
Unfolding bias: A simulated sample, including all detector effects, is
unfolded with a different model. The difference between the corrected results
thus obtained and those at the particle level is an estimate of the bias
introduced by the unfolding procedure. Half the maximum difference obtained
when repeating the procedure with all generator combinations is a measure of
the systematic uncertainty related to the unfolding.
* •
Integrated luminosity: The uncertainty in the integrated luminosity is 4%,
measured using a dedicated sample collected by TOTEM during the same data
taking period [29].
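As a concrete illustration of the unfolding-regularisation item above, the following sketch picks the D’Agostini iteration count from the relative $\chi^{2}$ variation between successive iterations. This is a schematic reconstruction, not the analysis code; the function name and the $\chi^{2}$ values are illustrative placeholders.

```python
def choose_iterations(chi2_per_iteration, threshold=0.05):
    """Return the first iteration count for which the relative chi2
    variation with respect to the previous iteration drops below
    `threshold` (5% nominal, 2% for the systematic variation)."""
    for n in range(1, len(chi2_per_iteration)):
        prev, curr = chi2_per_iteration[n - 1], chi2_per_iteration[n]
        if abs(curr - prev) / prev < threshold:
            return n + 1  # iteration numbers are 1-indexed
    return len(chi2_per_iteration)

# Illustrative chi2 values for iterations 1..6 (placeholders, not data):
chi2_values = [120.0, 60.0, 40.0, 36.0, 35.0, 34.8]
print(choose_iterations(chi2_values, threshold=0.05))  # nominal choice
print(choose_iterations(chi2_values, threshold=0.02))  # systematic variation
```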
The total systematic uncertainty is calculated as the quadratic sum of the
individual contributions. The uncertainties in the jet energy scale and
horizontal dispersion are the dominant contributions overall.
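Many of the estimates above share two ingredients: a half-difference (or half-maximum-difference) between alternative methods or models, and a quadratic sum of all contributions. A minimal sketch of this bookkeeping follows; the function names and numerical values are ours, for illustration only.

```python
import math

def half_difference(nominal: float, alternative: float) -> float:
    """Systematic estimate: half the difference between two methods."""
    return 0.5 * abs(nominal - alternative)

def half_max_difference(nominal: float, alternatives: list) -> float:
    """Half the maximum difference with respect to alternative models."""
    return 0.5 * max(abs(nominal - a) for a in alternatives)

def total_in_quadrature(contributions: list) -> float:
    """Total systematic uncertainty as the quadratic sum of contributions."""
    return math.sqrt(sum(c * c for c in contributions))

# Illustrative relative uncertainties in percent (placeholders):
contributions = [2.0, 1.5, 8.5, 3.0, 0.5, 2.0, 10.5, 0.5, 0.5, 2.0, 3.0]
print(f"total = {total_in_quadrature(contributions):.1f}%")
```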
### 0.8.2 Extraction of the cross section as a function of $t$ and $\xi$
Figure 4 shows the differential cross section as a function of $t$ and $\xi$,
integrated over the conjugate variable. The results from events in which the
proton is detected on either side of the IP are averaged.
Figure 4: Differential cross section as a function of $t$ (left) and as a
function of $\xi$ (right) for single-diffractive dijet production, compared to
the predictions from pomwig, pythia8 4C, pythia8 CUETP8M1, and pythia8 DG. The
pomwig prediction is shown with no correction for the rapidity gap survival
probability ($\langle S^{2}\rangle=1$) and with a correction of $\langle
S^{2}\rangle=7.4\%$. The vertical bars indicate the statistical uncertainties
and the yellow band indicates the total systematic uncertainty. The average of
the results for events in which the proton is detected on either side of the
interaction point is shown. The ratio between the data and the pomwig
prediction, when no correction for the rapidity gap survival probability is
applied, is shown at the bottom.
The data are compared to pomwig, pythia8 4C, pythia8 CUETP8M1, and pythia8 DG.
The pomwig prediction is shown for two values of the suppression of the
diffractive cross section, the rapidity gap survival probability, represented
by $\langle S^{2}\rangle$. When $\langle S^{2}\rangle=1$, no correction is
applied. The resulting cross sections are higher than the data by roughly an
order of magnitude, in agreement with the Tevatron results [9, 10, 11]. The
pomwig prediction is also shown with the correction $\langle
S^{2}\rangle=7.4\%$, calculated from the ratio of the measured diffractive
cross section and the MC prediction, as discussed below. After this
correction, pomwig gives a good description of the data. The pomwig prediction
is shown in Fig. 4 as the sum of the Pomeron ($\Pp\mathrm{I}\\!\mathrm{P}$),
Reggeon ($\Pp\mathrm{I}\\!\mathrm{R}$) and Pomeron-Pomeron
($\mathrm{I}\\!\mathrm{P}\mathrm{I}\\!\mathrm{P}$) exchange contributions,
while pythia8 includes only the Pomeron ($\Pp\mathrm{I}\\!\mathrm{P}$)
contribution. pythia8 4C and pythia8 CUETP8M1 predict cross sections higher
than the data by up to a factor of two. The pythia8 DG model shows overall a
good agreement with the data. No correction is applied to the normalisation of
the pythia8 samples. The pythia8 DG model is the only calculation that
predicts the cross section normalisation without an additional correction.
The ratio between the data and the pomwig predictions is shown at the bottom
of the left and right panels of Fig. 4. No correction is applied for the
rapidity gap survival probability ($\langle S^{2}\rangle=1$). Within the
uncertainties, no significant dependence on $t$ and $\xi$ is observed.
The value of the cross section for single-diffractive dijet production,
measured in the kinematic region $\pt>40\GeV$, $\abs{\eta}<4.4$, $\xi<0.1$ and
$0.03<\abs{t}<1\GeV^{2}$, is:
$\sigma^{\Pp\PX}_{\mathrm{jj}}=21.7\pm 0.9\stat\,^{+3.0}_{-3.3}\syst\pm
0.9\lum\unit{nb}.$ (7)
The table below summarises the main systematic uncertainties of the measured cross
section. The cross section is calculated independently for events in which the
proton scatters towards the positive and negative $z$ directions, namely the
processes $\Pp\Pp\to\Pp\PX$ and $\Pp\Pp\to\PX\Pp$, and the results are
averaged. They are compatible within the uncertainties. The pythia8 DG model
predicts in the same kinematic region a cross section of $23.7\unit{nb}$,
consistent with the measurement.
Individual contributions to the systematic uncertainties in the measurement of
the single-diffractive dijet production cross section in the kinematic region
$\pt>40\GeV$, $\abs{\eta}<4.4$, $\xi<0.1$, and $0.03<\abs{t}<1\GeV^{2}$. The
second column indicates the relative uncertainties in the integrated cross
section. The third and fourth columns represent the minimum and maximum
relative uncertainties in the differential cross sections in bins of $t$ and
$\xi$, respectively. The minimum relative uncertainty is not shown when it is
below 1%. The total uncertainty is the quadratic sum of the individual
contributions. The uncertainty of the integrated luminosity is not shown.
Uncertainty source | $\sigma^{\Pp\PX}_{\mathrm{jj}}$ | $\rd\sigma/\rd{}t$ | $\rd\sigma/\rd\xi$
Trigger efficiency | $\pm$2% | 1–2% | $<$2.4%
Calorimeter energy scale | $+$1/$-$2% | $<$7% | $<$7%
Jet energy scale and resolution | $+$9/$-$8% | 3–32% | 7–16%
Background | $\pm$3% | 2–27% | $<$8%
RP acceptance | $<$1% | $<$21% | $<$2%
Resolution | $\pm$2% | 2–30% | $<$8%
Horizontal dispersion | $+$9/$-$12% | 8–71% | 8–41%
$t$-slope | $<$1% | $<$16% | $<$1.3%
$\beta$-reweighting | $<$1% | $<$1% | $<$1%
Acceptance and unfolding | $\pm$2% | 2–50% | 5–12%
Unfolding bias | $\pm$3% | 2–50% | 5–11%
Unfolding regularisation | | $<$8% | $<$1%
Total | $+$14/$-$15% | |
The differential cross section as a function of $t$ is well described by an
exponential function for $\abs{t}$ values up to about $0.4\GeV^{2}$. A fit is
performed with the function
$\rd\sigma/{\rd}t\propto\exp\left({-b\abs{t}}\right)$ for $t$ values in the
range $0.03<\abs{t}<0.45\GeV^{2}$.
The resulting exponential slope is:
$b=6.5\pm 0.6\stat\,^{+1.0}_{-0.8}\syst\GeV^{-2},$ (8)
where the systematic uncertainties include the contributions discussed in
Section 0.8.1. The results for the exponential slope of the cross section
calculated independently for events in which the proton scatters towards the
positive and negative $z$ directions are compatible within the uncertainties.
The parametrisation obtained from the fit is shown in Fig. 4. In the fit range
($0.03<\abs{t}<0.45\GeV^{2}$), the horizontal position of the data points is
calculated as the value for which the parametrised function equals its average
over the bin width. The data points in the larger-$\abs{t}$ region outside the
fit range ($\abs{t}>0.45\GeV^{2}$) are shown at the centre of the bins.
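For illustration, the exponential fit described above can be reproduced with a least-squares fit of $\exp(-b\abs{t})$ to binned values of $\rd\sigma/\rd{}t$. The sketch below uses scipy.optimize.curve_fit with placeholder bin values, not the measured spectrum.

```python
import numpy as np
from scipy.optimize import curve_fit

# Placeholder |t| bin centres (GeV^2) and cross section values (nb/GeV^2);
# these are illustrative numbers, not the measured spectrum.
t = np.array([0.05, 0.10, 0.17, 0.25, 0.35, 0.44])
dsdt = np.array([150.0, 110.0, 70.0, 42.0, 22.0, 13.0])
dsdt_err = 0.1 * dsdt  # placeholder uncertainties

def expo(t, norm, b):
    """d(sigma)/dt proportional to exp(-b |t|)."""
    return norm * np.exp(-b * np.abs(t))

popt, pcov = curve_fit(expo, t, dsdt, sigma=dsdt_err, p0=(200.0, 6.0),
                       absolute_sigma=True)
b, b_err = popt[1], np.sqrt(pcov[1, 1])
print(f"b = {b:.1f} +/- {b_err:.1f} GeV^-2")
```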
The slope measured by CDF is $b\approx 5\text{--}6\GeV^{-2}$ for
$\abs{t}\lessapprox 0.5\GeV^{2}$ [10]. In the larger-$\abs{t}$ region, the CDF
data exhibit a smaller slope that becomes approximately independent of $t$ for
$\abs{t}\gtrapprox 2\GeV^{2}$.
The present measurement of the slope is consistent with that by CDF at small
$\abs{t}$. The data do not conclusively indicate a flattening of the $t$
distribution at larger $\abs{t}$.
An estimate of the rapidity gap survival probability can be obtained from the
ratio of the measured cross section in Eq. (7) and that predicted by pomwig
with $\langle S^{2}\rangle=1$. Alternatively, the pythia8 hard-diffraction
model can be used if the DG suppression framework is not applied. The two
results are consistent.
The overall suppression factor obtained with respect to the pomwig cross
section is $\langle S^{2}\rangle=7.4\,^{+1.0}_{-1.1}\%$, where the statistical
and systematic uncertainties are added in quadrature. A similar result is
obtained when the pythia8 unsuppressed cross section is used as reference
value.
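Spelled out, the suppression factor quoted above is the ratio of the measured cross section in Eq. (7) to the unsuppressed prediction. With the numbers given in this paper, the implied unsuppressed pomwig cross section is about $21.7/0.074\approx 293\unit{nb}$ (our back-of-the-envelope value, not a number quoted by the analysis):
$\langle S^{2}\rangle=\sigma^{\Pp\PX}_{\mathrm{jj}}\big/\sigma^{\Pp\PX}_{\mathrm{jj}}(\text{pomwig},\,\langle S^{2}\rangle=1)\approx 21.7\unit{nb}/293\unit{nb}\approx 7.4\%.$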
The H1 fit B dPDFs used in this analysis include the contribution from proton
dissociation in $\Pe\Pp$ collisions. They are extracted from the process
$\Pe\Pp\to\Pe\PX\mathrm{Y}$, where Y can be a proton or a low-mass excitation
with $M_{\mathrm{Y}}<1.6\GeV$ [2]. The results found when only the proton is
detected are consistent with those including low-mass excitations, apart from
a different overall normalisation. The ratio of the cross sections is
$\sigma(M_{\mathrm{Y}}<1.6\GeV)/\sigma(M_{\mathrm{Y}}=M_{\Pp})=1.23\pm
0.03\stat\pm 0.16\syst$ [2, 3]. No dependence on $\beta$, $Q^{2}$, or $\xi$ is
observed. To account for the different normalisation, the ratio is used to
correct $\langle S^{2}\rangle$; this yields $\langle S^{2}\rangle=\left(9\pm
2\right)\%$ when the pomwig cross section is taken as the reference value. A
similar result is obtained with pythia8.
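In equation form, the correction described above is a simple rescaling by the measured proton-dissociation ratio (our arithmetic, rounded as in the text):
$\langle S^{2}\rangle_{\mathrm{corr}}=\frac{\sigma(M_{\mathrm{Y}}<1.6\GeV)}{\sigma(M_{\mathrm{Y}}=M_{\Pp})}\,\langle S^{2}\rangle=1.23\times 7.4\%\approx 9\%.$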
### 0.8.3 Extraction of the ratio of the single-diffractive to inclusive
dijet yields
Figure 5 shows the ratio $R(x)$ in the kinematic region $\pt>40\GeV$,
$\abs{\eta}<4.4$, $\xi<0.1$, $0.03<\abs{t}<1\GeV^{2}$ and
$-2.9\leq\log_{10}x\leq-1.6$. The average of the results for events in which
the proton is detected on either side of the IP is shown. The yellow band
represents the total systematic uncertainty (cf. Section 0.8.1). The data are
compared to the ratio of the single-diffractive and nondiffractive dijet cross
sections from different models. The single-diffractive contribution is
simulated with pomwig, pythia8 4C, pythia8 CUETP8M1, and pythia8 DG. The
nondiffractive contribution is simulated with pythia6 and herwig6 if pomwig is
used for the diffractive contribution. When using pythia8 the diffractive and
nondiffractive contributions are simulated with the same underlying event
tune. When no correction for the rapidity gap survival probability is applied
($\langle S^{2}\rangle=1$), pomwig gives a ratio higher by roughly an order of
magnitude, consistent with the results discussed in Section 0.8.2. The
suppression seen in the data with respect to the simulation is not
substantially different when using pythia6 or herwig6 for the nondiffractive
contribution. pomwig with a correction of $\langle S^{2}\rangle=7.4\%$ gives
overall a good description of the data when pythia6 is used for the
nondiffractive contribution. When herwig6 is used for the nondiffractive
contribution the agreement is worse, especially in the lower- and higher-$x$
regions. The agreement for pythia8 4C is fair in the intermediate $x$ region,
but worse at low- and high-$x$. The agreement is worse for pythia8 CUETP8M1,
with values of the ratio higher than those in the data by up to a factor of
two. The pythia8 DG predictions agree well with the data overall, though the
agreement is worse in the lowest-$x$ bin. No correction is applied to the
pythia8 normalisation. In the lowest-$x$ bin, the ratio in the data is below
the predictions. The observed discrepancy is not significant for the
predictions that agree well overall with the data elsewhere, taking into
account the systematic and statistical uncertainties.
The measured value of the ratio, normalised per unit of $\xi$, in the full
kinematic region defined above is:
$R=\left(\sigma^{\Pp\PX}_{\mathrm{jj}}/\Delta\xi\right)/\sigma_{\mathrm{jj}}=0.025\pm
0.001\stat\pm 0.003\syst.$ (9)
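For orientation, Eq. (9) can be inverted with $\Delta\xi=0.1$ (the width of the region $\xi<0.1$) and the single-diffractive cross section of Eq. (7) to obtain the implied inclusive dijet cross section (our estimate, not a quoted number):
$\sigma_{\mathrm{jj}}\approx\frac{\sigma^{\Pp\PX}_{\mathrm{jj}}/\Delta\xi}{R}=\frac{21.7\unit{nb}/0.1}{0.025}\approx 8.7\,\mu\mathrm{b}.$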
The table below summarises the main contributions to the systematic uncertainty of
the ratio. The uncertainty of the jet energy scale is considerably smaller
than in the case of the single-diffractive cross section.
Individual contributions to the systematic uncertainty in the measurement of
the single-diffractive to inclusive dijet yields ratio in the kinematic region
$\pt>40\GeV$, $\abs{\eta}<4.4$, $\xi<0.1$, $0.03<\abs{t}<1\GeV^{2}$, and
$-2.9\leq\log_{10}x\leq-1.6$. The second and third columns represent the
relative uncertainties in the ratio in the full kinematic region and in bins
of $\log_{10}x$, respectively. The minimum relative uncertainty is not shown
when it is below 1%. The total uncertainty is the quadratic sum of the
individual contributions.
Uncertainty source | $R$ | $R(x)$
Trigger efficiency | Negligible | 2–3%
Calorimeter energy scale | $+$1/$-$2% | $<$7%
Jet energy scale and resolution | $\pm$2% | 1–10%
Background | $\pm$1% | 1–17%
RP acceptance | $<$1% | $<$4%
Resolution | $\pm$2% | $<$4%
Horizontal dispersion | $+$9/$-$11% | 11–23%
$t$-slope | $<$1% | $<$3%
$\beta$-reweighting | $\pm$1% | $<$6%
Acceptance and unfolding | $\pm$2% | 3–11%
Unfolding bias | $\pm$3% | 3–14%
Unfolding regularisation | | $<$11%
Total | $+$10/$-$13% |
Figure 5: Ratio per unit of $\xi$ of the single-diffractive and inclusive
dijet cross sections in the region given by $\xi<0.1$ and
$0.03<\abs{t}<1\GeV^{2}$, compared to the predictions from the different
models for the ratio between the single-diffractive and nondiffractive cross
sections. The pomwig prediction is shown with no correction for the rapidity
gap survival probability ($\langle S^{2}\rangle=1$) (left) and with a
correction of $\langle S^{2}\rangle=7.4\%$ (right). The vertical bars indicate
the statistical uncertainties and the yellow band indicates the total
systematic uncertainty. The average of the results for events in which the
proton is detected on either side of the interaction point is shown. The ratio
between the data and the pomwig prediction using pythia6 or herwig6 as the
nondiffractive contribution, when no correction for the rapidity gap survival
probability is applied, is shown at the bottom of the left panel.
Figure 6 shows the comparison between the results of Fig. 5 and those from CDF
[10]. The CDF results are shown for jets with $Q^{2}$ of roughly $100\GeV^{2}$
and pseudorapidity $\abs{\eta}<2.5$, with $0.03<\xi<0.09$. In this case
$Q^{2}$ is defined, per event, as the mean transverse energy of the two
leading jets squared. CDF measures the ratio for $Q^{2}$ values up to
$10^{4}\GeV^{2}$. A relatively small dependence on $Q^{2}$ is observed. The
present data are lower than the CDF results. A decrease of the ratio of
diffractive to inclusive cross sections with centre-of-mass energy has also
been observed by CDF by comparing data at $\sqrt{s}=630$ and $1800\GeV$ [11].
Figure 6: Ratio per unit of $\xi$ of the single-diffractive and inclusive
dijet cross sections in the kinematic region given by $\xi<0.1$ and
$0.03<\abs{t}<1\GeV^{2}$. The vertical bars indicate the statistical
uncertainties and the yellow band indicates the total systematic uncertainty.
The red squares represent the results obtained by CDF at $\sqrt{s}=1.96\TeV$
for jets with $Q^{2}\approx 100\GeV^{2}$ and $\abs{\eta}<2.5$, with
$0.03<\xi<0.09$.
## 0.9 Summary
The differential cross section for single-diffractive dijet production in
proton-proton ($\Pp\Pp$) collisions at $\sqrt{s}=8\TeV$ has been measured as a
function of the proton fractional momentum loss $\xi$ and the squared four
momentum transfer $t$, using the CMS and TOTEM detectors. The data,
corresponding to an integrated luminosity of $37.5\nbinv$, were collected
using a nonstandard optics configuration with $\beta^{*}=90\unit{m}$. The
processes considered are $\Pp\Pp\to\Pp\PX$ or $\Pp\Pp\to\PX\Pp$, with $\PX$
including a system of two jets, in the kinematic region $\xi<0.1$ and
$0.03<\abs{t}<1.0\GeV^{2}$. The two jets have transverse momentum $\pt>40\GeV$
and pseudorapidity $\abs{\eta}<4.4$. The integrated cross section in this
kinematic region is $\sigma^{\Pp\PX}_{\mathrm{jj}}=21.7\pm
0.9\stat\,^{+3.0}_{-3.3}\syst\pm 0.9\lum\unit{nb}$; it is the average of the
cross sections when the proton scatters to either side of the interaction
point. The exponential slope of the cross section as a function of $t$ is
$b=6.5\pm 0.6\stat\,^{+1.0}_{-0.8}\syst\GeV^{-2}$. This is the first
measurement of hard diffraction with a measured proton at the LHC.
The data are compared with the predictions of different models. After applying
a normalisation shift ascribed to the rapidity gap survival probability,
pomwig agrees well with the data. The pythia8 dynamic gap model describes the
data well, both in shape and normalisation. In this model the effects of the
rapidity gap survival probability are simulated within the framework of
multiparton interactions. The pythia8 dynamic gap model is the only
calculation that predicts the cross section normalisation without an
additional correction.
The ratios of the measured single-diffractive cross section to those predicted
by pomwig and pythia8 give estimates of the rapidity gap survival probability.
After accounting for the correction of the dPDF normalisation due to proton
dissociation, the value of $\langle S^{2}\rangle$ is $\left(9\pm 2\right)\%$
when using pomwig as the reference cross section value, with a similar result
when pythia8 is used.
The ratio of the single-diffractive to inclusive dijet cross section has been
measured as a function of the parton momentum fraction $x$. The ratio is lower
than that observed at CDF at a smaller centre-of-mass energy. In the region
$\pt>40\GeV$, $\abs{\eta}<4.4$, $\xi<0.1$, $0.03<\abs{t}<1.0\GeV^{2}$, and
$-2.9\leq\log_{10}x\leq-1.6$, the ratio, normalised per unit $\xi$, is
$R=(\sigma^{\Pp\PX}_{\mathrm{jj}}/\Delta\xi)/\sigma_{\mathrm{jj}}=0.025\pm
0.001\stat\pm 0.003\syst$.
###### Acknowledgements.
We congratulate our colleagues in the CERN accelerator departments for the
excellent performance of the LHC and thank the technical and administrative
staffs at CERN and at other CMS and TOTEM institutes for their contributions
to the success of the common CMS-TOTEM effort. We gratefully acknowledge work
of the beam optics development team at CERN for the design and the successful
commissioning of the high $\beta^{*}$ optics and thank the LHC machine
coordinators for scheduling dedicated fills. In addition, we gratefully
acknowledge the computing centres and personnel of the Worldwide LHC Computing
Grid for delivering so effectively the computing infrastructure essential to
our analyses. Finally, we acknowledge the enduring support for the
construction and operation of the LHC and the CMS and TOTEM detectors provided
by the following funding agencies: BMBWF and FWF (Austria); FNRS and FWO
(Belgium); CNPq, CAPES, FAPERJ, FAPERGS, and FAPESP (Brazil); MES (Bulgaria);
CERN; CAS, MoST, and NSFC (China); COLCIENCIAS (Colombia); MSES and CSF
(Croatia); RPF (Cyprus); SENESCYT (Ecuador); MoER, ERC IUT, PUT and ERDF
(Estonia); Academy of Finland, Finnish Academy of Science and Letters (The
Vilho, Yrjö and Kalle Väisälä Fund), MEC, Magnus Ehrnrooth Foundation, HIP, and
Waldemar von Frenckell Foundation (Finland); CEA and CNRS/IN2P3 (France);
BMBF, DFG, and HGF (Germany); GSRT (Greece); the Circles of Knowledge Club,
NKFIA (Hungary); DAE and DST (India); IPM (Iran); SFI (Ireland); INFN (Italy);
MSIP and NRF (Republic of Korea); MES (Latvia); LAS (Lithuania); MOE and UM
(Malaysia); BUAP, CINVESTAV, CONACYT, LNS, SEP, and UASLP-FAI (Mexico); MOS
(Montenegro); MBIE (New Zealand); PAEC (Pakistan); MSHE and NSC (Poland); FCT
(Portugal); JINR (Dubna); MON, RosAtom, RAS, RFBR, and NRC KI (Russia); MESTD
(Serbia); SEIDI, CPAN, PCTI, and FEDER (Spain); MOSTR (Sri Lanka); Swiss
Funding Agencies (Switzerland); MST (Taipei); ThEPCenter, IPST, STAR, and
NSTDA (Thailand); TUBITAK and TAEK (Turkey); NASU (Ukraine); STFC (United
Kingdom); DOE and NSF (USA). Individuals have received support from the Marie-
Curie programme and the European Research Council and Horizon 2020 Grant,
contract Nos. 675440, 752730, and 765710 (European Union); the Leventis
Foundation; the A.P. Sloan Foundation; the Alexander von Humboldt Foundation;
the Belgian Federal Science Policy Office; the Fonds pour la Formation à la
Recherche dans l’Industrie et dans l’Agriculture (FRIA-Belgium); the
Agentschap voor Innovatie door Wetenschap en Technologie (IWT-Belgium); the
F.R.S.-FNRS and FWO (Belgium) under the “Excellence of Science – EOS” – be.h
project n. 30820817; the Beijing Municipal Science & Technology Commission,
No. Z191100007219010; the Ministry of Education, Youth and Sports (MEYS) and
MSMT CR of the Czech Republic; the Nylands nation vid Helsingfors universitet
(Finland); the Deutsche Forschungsgemeinschaft (DFG) under Germany’s
Excellence Strategy – EXC 2121 “Quantum Universe” – 390833306; the Lendület
(“Momentum”) Programme and the János Bolyai Research Scholarship of the
Hungarian Academy of Sciences, the New National Excellence Program ÚNKP, the
NKFIA research grants 123842, 123959, 124845, 124850, 125105, 128713, 128786,
129058, K 133046, and EFOP-3.6.1-16-2016-00001 (Hungary); the Council of
Science and Industrial Research, India; the HOMING PLUS programme of the
Foundation for Polish Science, cofinanced from European Union, Regional
Development Fund, the Mobility Plus programme of the Ministry of Science and
Higher Education, including Grant No. MNiSW DIR/WK/2018/13, the National
Science Center (Poland), contracts Harmonia 2014/14/M/ST2/00428, Opus
2014/13/B/ST2/02543, 2014/15/B/ST2/03998, and 2015/19/B/ST2/02861, Sonata-bis
2012/07/E/ST2/01406; the National Priorities Research Program by Qatar
National Research Fund; the Ministry of Science and Education, grant no.
14.W03.31.0026 (Russia); the Programa Estatal de Fomento de la Investigación
Científica y Técnica de Excelencia María de Maeztu, grant MDM-2015-0509 and
the Programa Severo Ochoa del Principado de Asturias; the Thalis and Aristeia
programmes cofinanced by EU-ESF and the Greek NSRF; the Rachadapisek Sompot
Fund for Postdoctoral Fellowship, Chulalongkorn University and the
Chulalongkorn Academic into Its 2nd Century Project Advancement Project
(Thailand); the Kavli Foundation; the Nvidia Corporation; the SuperMicro
Corporation; the Welch Foundation, contract C-1845; and the Weston Havens
Foundation (USA).
## References
* [1] P. D. B. Collins, “An Introduction to Regge Theory and High-Energy Physics”. Cambridge Monographs on Mathematical Physics. Cambridge Univ. Press, Cambridge, UK, 2009. ISBN 9780521110358.
* [2] H1 Collaboration, “Measurement and QCD analysis of the diffractive deep-inelastic scattering cross section at HERA”, Eur. Phys. J. C 48 (2006) 715, 10.1140/epjc/s10052-006-0035-3, arXiv:hep-ex/0606004.
* [3] H1 Collaboration, “Diffractive deep-inelastic scattering with a leading proton at HERA”, Eur. Phys. J. C 48 (2006) 749, 10.1140/epjc/s10052-006-0046-0, arXiv:hep-ex/0606003.
* [4] ZEUS Collaboration, “Deep inelastic scattering with leading protons or large rapidity gaps at HERA”, Nucl. Phys. B 816 (2009) 1, 10.1016/j.nuclphysb.2009.03.003, arXiv:0812.2003.
* [5] ZEUS Collaboration, “A QCD analysis of ZEUS diffractive data”, Nucl. Phys. B 831 (2010) 1, 10.1016/j.nuclphysb.2010.01.014, arXiv:0911.4119.
* [6] UA8 Collaboration, “Evidence for a super-hard pomeron structure”, Phys. Lett. B 297 (1992) 417, 10.1016/0370-2693(92)91281-D.
* [7] CDF Collaboration, “Measurement of diffractive dijet production at the Fermilab Tevatron”, Phys. Rev. Lett. 79 (1997) 2636, 10.1103/PhysRevLett.79.2636.
* [8] D0 Collaboration, “Hard single diffraction in $\PAp\Pp$ collisions at $\sqrt{s}$ = 630 and 1800 GeV”, Phys. Lett. B 531 (2002) 52, 10.1016/S0370-2693(02)01364-3, arXiv:hep-ex/9912061.
* [9] CDF Collaboration, “Diffractive dijets with a leading antiproton in $\PAp\Pp$ collisions at $\sqrt{s}$ = 1800 GeV”, Phys. Rev. Lett. 84 (2000) 5043, 10.1103/PhysRevLett.84.5043.
* [10] CDF Collaboration, “Diffractive dijet production in $\PAp\Pp$ collisions at $\sqrt{s}$ = 1.96 TeV”, Phys. Rev. D 86 (2012) 032009, 10.1103/PhysRevD.86.032009, arXiv:1206.3955.
* [11] CDF Collaboration, “Diffractive dijet production at $\sqrt{s}$ = 630 and $\sqrt{s}$ = 1800 GeV at the Fermilab Tevatron”, Phys. Rev. Lett. 88 (2002) 151802, 10.1103/PhysRevLett.88.151802, arXiv:hep-ex/0109025.
* [12] CMS Collaboration, “Observation of a diffractive contribution to dijet production in proton-proton collisions at $\sqrt{s}=7$ TeV”, Phys. Rev. D 87 (2013) 012006, 10.1103/PhysRevD.87.012006, arXiv:1209.1805.
* [13] ATLAS Collaboration, “Dijet production in $\sqrt{s}=7$ TeV $\Pp\Pp$ collisions with large rapidity gaps at the ATLAS experiment”, Phys. Lett. B 754 (2016) 214, 10.1016/j.physletb.2016.01.028, arXiv:1511.00502.
* [14] H1 Collaboration, “Diffractive dijet photoproduction in $\Pe\Pp$ collisions at HERA”, Eur. Phys. J. C 70 (2010) 15, 10.1140/epjc/s10052-010-1448-6, arXiv:1006.0946.
* [15] L. Trentadue and G. Veneziano, “Fracture functions. An improved description of inclusive hard processes in QCD”, Phys. Lett. B 323 (1994) 201, 10.1016/0370-2693(94)90292-5.
* [16] J. C. Collins, “Proof of factorization for diffractive hard scattering”, Phys. Rev. D 57 (1998) 3051, 10.1103/PhysRevD.57.3051, arXiv:hep-ph/9709499. [Erratum: 10.1103/PhysRevD.61.019902].
* [17] M. Grazzini, L. Trentadue, and G. Veneziano, “Fracture functions from cut vertices”, Nucl. Phys. B 519 (1998) 394, 10.1016/S0550-3213(97)00840-7, arXiv:hep-ph/9709452.
* [18] J. D. Bjorken, “Rapidity gaps and jets as a new-physics signature in very high-energy hadron-hadron collisions”, Phys. Rev. D 47 (1993) 101, 10.1103/PhysRevD.47.101.
* [19] CMS Collaboration, “Particle-flow reconstruction and global event description with the CMS detector”, JINST 12 (2017) P10003, 10.1088/1748-0221/12/10/P10003, arXiv:1706.04965.
* [20] M. Cacciari, G. P. Salam, and G. Soyez, “The anti-$k_{\mathrm{t}}$ jet clustering algorithm”, JHEP 04 (2008) 063, 10.1088/1126-6708/2008/04/063, arXiv:0802.1189.
* [21] M. Cacciari, G. P. Salam, and G. Soyez, “FastJet user manual”, Eur. Phys. J. C 72 (2012) 1896, 10.1140/epjc/s10052-012-1896-2, arXiv:1111.6097.
* [22] CMS Collaboration, “Jet energy scale and resolution in the CMS experiment in $\Pp\Pp$ collisions at 8 TeV”, JINST 12 (2017) P02014, 10.1088/1748-0221/12/02/P02014, arXiv:1607.03663.
* [23] CMS Collaboration, “Identification and filtering of uncharacteristic noise in the CMS hadron calorimeter”, JINST 5 (2010) T03014, 10.1088/1748-0221/5/03/T03014, arXiv:0911.4881.
* [24] CMS Collaboration, “The CMS experiment at the CERN LHC”, JINST 3 (2008) S08004, 10.1088/1748-0221/3/08/S08004.
* [25] TOTEM Collaboration, “The TOTEM experiment at the CERN Large Hadron Collider”, JINST 3 (2008) S08007, 10.1088/1748-0221/3/08/S08007.
* [26] TOTEM Collaboration, “Performance of the TOTEM detectors at the LHC”, Int. J. Mod. Phys. A 28 (2013) 1330046, 10.1142/S0217751X13300469, arXiv:1310.2908.
* [27] TOTEM Collaboration, “LHC optics measurement with proton tracks detected by the roman pots of the TOTEM experiment”, New J. Phys. 16 (2014) 103041, 10.1088/1367-2630/16/10/103041, arXiv:1406.0546.
* [28] H. Niewiadomski, “Reconstruction of protons in the TOTEM roman pot detectors at the LHC”. PhD thesis, University of Manchester, 2008. cds.cern.ch/record/1131825.
* [29] TOTEM Collaboration, “Evidence for non-exponential elastic proton-proton differential cross-section at low $|t|$ and $\sqrt{s}=8$ TeV by TOTEM”, Nucl. Phys. B 899 (2015) 527, 10.1016/j.nuclphysb.2015.08.010, arXiv:1503.08111.
* [30] T. Sjöstrand, S. Mrenna, and P. Skands, “PYTHIA 6.4 physics and manual”, JHEP 05 (2006) 026, 10.1088/1126-6708/2006/05/026, arXiv:hep-ph/0603175.
* [31] T. Sjöstrand, S. Mrenna, and P. Skands, “A brief introduction to PYTHIA 8.1”, Comput. Phys. Commun. 178 (2008) 852, 10.1016/j.cpc.2008.01.036, arXiv:0710.3820.
* [32] G. Corcella et al., “HERWIG 6: An event generator for hadron emission reactions with interfering gluons (including supersymmetric processes)”, JHEP 01 (2001) 010, 10.1088/1126-6708/2001/01/010, arXiv:hep-ph/0011363.
* [33] R. Field, “Early LHC underlying event data - findings and surprises”, in Hadron collider physics. Proceedings, 22nd Conference, HCP 2010, Toronto, Canada, August 23-27, 2010. arXiv:1010.3558.
* [34] R. Corke and T. Sjöstrand, “Interleaved parton showers and tuning prospects”, JHEP 03 (2011) 032, 10.1007/JHEP03(2011)032, arXiv:1011.1759.
* [35] CMS Collaboration, “Event generator tunes obtained from underlying event and multiparton scattering measurements”, Eur. Phys. J. C 76 (2016) 155, 10.1140/epjc/s10052-016-3988-x, arXiv:1512.00815.
* [36] B. Cox and J. Forshaw, “Pomwig: Herwig for diffractive interactions”, Comput. Phys. Commun. 144 (2002) 104, 10.1016/S0010-4655(01)00467-2, arXiv:hep-ph/0010303.
* [37] S. Navin, “Diffraction in PYTHIA”, (2010). arXiv:1005.3894.
* [38] C. O. Rasmussen and T. Sjöstrand, “Hard diffraction with dynamic gap survival”, JHEP 02 (2016) 142, 10.1007/JHEP02(2016)142, arXiv:1512.05525.
* [39] CMS and TOTEM Collaborations, “Measurement of pseudorapidity distributions of charged particles in proton-proton collisions at $\sqrt{s}$ = 8 TeV by the CMS and TOTEM experiments”, Eur. Phys. J. C 74 (2014) 3053, 10.1140/epjc/s10052-014-3053-6, arXiv:1405.0722.
* [40] CMS Collaboration, “Measurement of charged particle spectra in minimum-bias events from proton-proton collisions at $\sqrt{s}=13\,\text{TeV}$”, Eur. Phys. J. C 78 (2018) 697, 10.1140/epjc/s10052-018-6144-y, arXiv:1806.11245.
* [41] GEANT4 Collaboration, “GEANT4: a simulation toolkit”, Nucl. Instrum. Meth. A 506 (2003) 250, 10.1016/S0168-9002(03)01368-8.
* [42] CMS Collaboration, “The CMS trigger system”, JINST 12 (2017) P01020, 10.1088/1748-0221/12/01/P01020, arXiv:1609.02366.
* [43] G. D’Agostini, “A multidimensional unfolding method based on Bayes’ theorem”, Nucl. Instrum. Meth. A 362 (1995) 487, 10.1016/0168-9002(95)00274-X.
* [44] CMS Collaboration, “Measurement of energy flow at large pseudorapidities in $\Pp\Pp$ collisions at $\sqrt{s}=0.9$ and 7 TeV”, JHEP 11 (2011) 148, 10.1007/JHEP11(2011)148, arXiv:1110.0211. [Erratum: 10.1007/JHEP02(2012)055].
## 0.10 The CMS Collaboration
Yerevan Physics Institute, Yerevan, Armenia
A.M. Sirunyan, A. Tumasyan Institut für Hochenergiephysik, Wien, Austria
W. Adam, F. Ambrogi, E. Asilar, T. Bergauer, J. Brandstetter, M. Dragicevic,
J. Erö, A. Escalante Del Valle, M. Flechl, R. Frühwirth1, V.M. Ghete, J.
Hrubec, M. Jeitler1, N. Krammer, I. Krätschmer, D. Liko, T. Madlener, I.
Mikulec, N. Rad, H. Rohringer, J. Schieck1, R. Schöfbeck, M. Spanring, D.
Spitzbart, W. Waltenberger, J. Wittmann, C.-E. Wulz1, M. Zarucki Institute for
Nuclear Problems, Minsk, Belarus
V. Chekhovsky, V. Mossolov, J. Suarez Gonzalez Universiteit Antwerpen,
Antwerpen, Belgium
E.A. De Wolf, D. Di Croce, X. Janssen, J. Lauwers, A. Lelek, M. Pieters, H.
Van Haevermaet, P. Van Mechelen, N. Van Remortel Vrije Universiteit Brussel,
Brussel, Belgium
S. Abu Zeid, F. Blekman, J. D’Hondt, J. De Clercq, K. Deroover, G. Flouris, D.
Lontkovskyi, S. Lowette, I. Marchesini, S. Moortgat, L. Moreels, Q. Python, K.
Skovpen, S. Tavernier, W. Van Doninck, P. Van Mulders, I. Van Parijs
Université Libre de Bruxelles, Bruxelles, Belgium
D. Beghin, B. Bilin, H. Brun, B. Clerbaux, G. De Lentdecker, H. Delannoy, B.
Dorney, G. Fasanella, L. Favart, A. Grebenyuk, A.K. Kalsi, T. Lenzi, J.
Luetic, N. Postiau, E. Starling, L. Thomas, C. Vander Velde, P. Vanlaer, D.
Vannerom, Q. Wang Ghent University, Ghent, Belgium
T. Cornelis, D. Dobur, A. Fagot, M. Gul, I. Khvastunov2, D. Poyraz, C. Roskas,
D. Trocino, M. Tytgat, W. Verbeke, B. Vermassen, M. Vit, N. Zaganidis
Université Catholique de Louvain, Louvain-la-Neuve, Belgium
H. Bakhshiansohi, O. Bondu, G. Bruno, C. Caputo, P. David, C. Delaere, M.
Delcourt, A. Giammanco, G. Krintiras, V. Lemaitre, A. Magitteri, K.
Piotrzkowski, A. Saggio, M. Vidal Marono, P. Vischia, J. Zobec Centro
Brasileiro de Pesquisas Fisicas, Rio de Janeiro, Brazil
F.L. Alves, G.A. Alves, G. Correia Silva, C. Hensel, A. Moraes, M.E. Pol, P.
Rebello Teles Universidade do Estado do Rio de Janeiro, Rio de Janeiro, Brazil
E. Belchior Batista Das Chagas, W. Carvalho, J. Chinellato3, E. Coelho, E.M.
Da Costa, G.G. Da Silveira4, D. De Jesus Damiao, C. De Oliveira Martins, S.
Fonseca De Souza, L.M. Huertas Guativa, H. Malbouisson, D. Matos Figueiredo,
M. Melo De Almeida, C. Mora Herrera, L. Mundim, H. Nogima, W.L. Prado Da
Silva, L.J. Sanchez Rosas, A. Santoro, A. Sznajder, M. Thiel, E.J. Tonelli
Manganote3, F. Torres Da Silva De Araujo, A. Vilela Pereira Universidade
Estadual Paulista a, Universidade Federal do ABC b, São Paulo, Brazil
S. Ahujaa, C.A. Bernardesa, L. Calligarisa, T.R. Fernandez Perez Tomeia, E.M.
Gregoresb, P.G. Mercadanteb, S.F. Novaesa, SandraS. Padulaa Institute for
Nuclear Research and Nuclear Energy, Bulgarian Academy of Sciences, Sofia,
Bulgaria
A. Aleksandrov, R. Hadjiiska, P. Iaydjiev, A. Marinov, M. Misheva, M. Rodozov,
M. Shopova, G. Sultanov University of Sofia, Sofia, Bulgaria
A. Dimitrov, L. Litov, B. Pavlov, P. Petkov Beihang University, Beijing, China
W. Fang5, X. Gao5, L. Yuan Department of Physics, Tsinghua University,
Beijing, China
Y. Wang Institute of High Energy Physics, Beijing, China
M. Ahmad, J.G. Bian, G.M. Chen, H.S. Chen, M. Chen, Y. Chen, C.H. Jiang, D.
Leggat, H. Liao, Z. Liu, S.M. Shaheen6, A. Spiezia, J. Tao, E. Yazgan, H.
Zhang, S. Zhang6, J. Zhao State Key Laboratory of Nuclear Physics and
Technology, Peking University, Beijing, China
Y. Ban, G. Chen, A. Levin, J. Li, L. Li, Q. Li, Y. Mao, S.J. Qian, D. Wang
Universidad de Los Andes, Bogota, Colombia
C. Avila, A. Cabrera, C.A. Carrillo Montoya, L.F. Chaparro Sierra, C. Florez,
C.F. González Hernández, M.A. Segura Delgado University of Split, Faculty of
Electrical Engineering, Mechanical Engineering and Naval Architecture, Split,
Croatia
B. Courbon, N. Godinovic, D. Lelas, I. Puljak, T. Sculac University of Split,
Faculty of Science, Split, Croatia
Z. Antunovic, M. Kovac Institute Rudjer Boskovic, Zagreb, Croatia
V. Brigljevic, D. Ferencek, K. Kadija, B. Mesic, M. Roguljic, A. Starodumov7,
T. Susa University of Cyprus, Nicosia, Cyprus
M.W. Ather, A. Attikis, M. Kolosova, G. Mavromanolakis, J. Mousa, C. Nicolaou,
F. Ptochos, P.A. Razis, H. Rykaczewski Charles University, Prague, Czech
Republic
M. Finger8, M. Finger Jr.8 Escuela Politecnica Nacional, Quito, Ecuador
E. Ayala Universidad San Francisco de Quito, Quito, Ecuador
E. Carrera Jarrin Academy of Scientific Research and Technology of the Arab
Republic of Egypt, Egyptian Network of High Energy Physics, Cairo, Egypt
A. Ellithi Kamel9, M.A. Mahmoud10,11, E. Salama11,12 National Institute of
Chemical Physics and Biophysics, Tallinn, Estonia
S. Bhowmik, A. Carvalho Antunes De Oliveira, R.K. Dewanjee, K. Ehataht, M.
Kadastik, M. Raidal, C. Veelken Department of Physics, University of Helsinki,
Helsinki, Finland
P. Eerola, H. Kirschenmann, J. Pekkanen, M. Voutilainen Helsinki Institute of
Physics, Helsinki, Finland
J. Havukainen, J.K. Heikkilä, T. Järvinen, V. Karimäki, R. Kinnunen, T.
Lampén, K. Lassila-Perini, S. Laurila, S. Lehti, T. Lindén, P. Luukka, T.
Mäenpää, H. Siikonen, E. Tuominen, J. Tuominiemi Lappeenranta University of
Technology, Lappeenranta, Finland
T. Tuuva IRFU, CEA, Université Paris-Saclay, Gif-sur-Yvette, France
M. Besancon, F. Couderc, M. Dejardin, D. Denegri, J.L. Faure, F. Ferri, S.
Ganjour, A. Givernaud, P. Gras, G. Hamel de Monchenault, P. Jarry, C. Leloup,
E. Locci, J. Malcles, G. Negro, J. Rander, A. Rosowsky, M.Ö. Sahin, M. Titov
Laboratoire Leprince-Ringuet, CNRS/IN2P3, Ecole Polytechnique, Institut
Polytechnique de Paris
A. Abdulsalam13, C. Amendola, I. Antropov, F. Beaudette, P. Busson, C.
Charlot, R. Granier de Cassagnac, I. Kucher, A. Lobanov, J. Martin Blanco, C.
Martin Perez, M. Nguyen, C. Ochando, G. Ortona, P. Paganini, J. Rembser, R.
Salerno, J.B. Sauvan, Y. Sirois, A.G. Stahl Leiton, A. Zabi, A. Zghiche
Université de Strasbourg, CNRS, IPHC UMR 7178, Strasbourg, France
J.-L. Agram14, J. Andrea, D. Bloch, G. Bourgatte, J.-M. Brom, E.C. Chabert, V.
Cherepanov, C. Collard, E. Conte14, J.-C. Fontaine14, D. Gelé, U. Goerlach, M.
Jansová, A.-C. Le Bihan, N. Tonon, P. Van Hove Centre de Calcul de l’Institut
National de Physique Nucleaire et de Physique des Particules, CNRS/IN2P3,
Villeurbanne, France
S. Gadrat Université de Lyon, Université Claude Bernard Lyon 1, CNRS-IN2P3,
Institut de Physique Nucléaire de Lyon, Villeurbanne, France
S. Beauceron, C. Bernet, G. Boudoul, N. Chanon, R. Chierici, D. Contardo, P.
Depasse, H. El Mamouni, J. Fay, L. Finco, S. Gascon, M. Gouzevitch, G.
Grenier, B. Ille, F. Lagarde, I.B. Laktineh, H. Lattaud, M. Lethuillier, L.
Mirabito, S. Perries, A. Popov15, V. Sordini, G. Touquet, M. Vander Donckt, S.
Viret Georgian Technical University, Tbilisi, Georgia
T. Toriashvili16 Tbilisi State University, Tbilisi, Georgia
Z. Tsamalaidze8 RWTH Aachen University, I. Physikalisches Institut, Aachen,
Germany
C. Autermann, L. Feld, M.K. Kiesel, K. Klein, M. Lipinski, M. Preuten, M.P.
Rauch, C. Schomakers, J. Schulz, M. Teroerde, B. Wittmer RWTH Aachen
University, III. Physikalisches Institut A, Aachen, Germany
A. Albert, M. Erdmann, S. Erdweg, T. Esch, R. Fischer, S. Ghosh, A. Güth, T.
Hebbeker, C. Heidemann, K. Hoepfner, H. Keller, L. Mastrolorenzo, M.
Merschmeyer, A. Meyer, P. Millet, S. Mukherjee, T. Pook, M. Radziej, H.
Reithler, M. Rieger, A. Schmidt, D. Teyssier, S. Thüer RWTH Aachen University,
III. Physikalisches Institut B, Aachen, Germany
G. Flügge, O. Hlushchenko, T. Kress, T. Müller, A. Nehrkorn, A. Nowack, C.
Pistone, O. Pooth, D. Roy, H. Sert, A. Stahl17 Deutsches Elektronen-
Synchrotron, Hamburg, Germany
M. Aldaya Martin, T. Arndt, C. Asawatangtrakuldee, I. Babounikau, K.
Beernaert, O. Behnke, U. Behrens, A. Bermúdez Martínez, D. Bertsche, A.A. Bin
Anuar, K. Borras18, V. Botta, A. Campbell, P. Connor, C. Contreras-Campana, V.
Danilov, A. De Wit, M.M. Defranchis, C. Diez Pardos, D. Domínguez Damiani, G.
Eckerlin, T. Eichhorn, A. Elwood, E. Eren, E. Gallo19, A. Geiser, J.M. Grados
Luyando, A. Grohsjean, M. Guthoff, M. Haranko, A. Harb, H. Jung, M. Kasemann,
J. Keaveney, C. Kleinwort, J. Knolle, D. Krücker, W. Lange, T. Lenz, J.
Leonard, K. Lipka, W. Lohmann20, R. Mankel, I.-A. Melzer-Pellmann, A.B. Meyer,
M. Meyer, M. Missiroli, G. Mittag, J. Mnich, V. Myronenko, S.K. Pflitsch, D.
Pitzl, A. Raspereza, A. Saibel, M. Savitskyi, P. Saxena, P. Schütze, C.
Schwanenberger, R. Shevchenko, A. Singh, H. Tholen, O. Turkot, A. Vagnerini,
M. Van De Klundert, G.P. Van Onsem, R. Walsh, Y. Wen, K. Wichmann, C. Wissing,
O. Zenaiev University of Hamburg, Hamburg, Germany
R. Aggleton, S. Bein, L. Benato, A. Benecke, T. Dreyer, A. Ebrahimi, E.
Garutti, D. Gonzalez, P. Gunnellini, J. Haller, A. Hinzmann, A. Karavdina, G.
Kasieczka, R. Klanner, R. Kogler, N. Kovalchuk, S. Kurz, V. Kutzner, J. Lange,
D. Marconi, J. Multhaup, M. Niedziela, C.E.N. Niemeyer, D. Nowatschin, A.
Perieanu, A. Reimers, O. Rieger, C. Scharf, P. Schleper, S. Schumann, J.
Schwandt, J. Sonneveld, H. Stadie, G. Steinbrück, F.M. Stober, M. Stöver, B.
Vormwald, I. Zoi Karlsruher Institut fuer Technologie, Karlsruhe, Germany
M. Akbiyik, C. Barth, M. Baselga, S. Baur, E. Butz, R. Caspart, T. Chwalek, F.
Colombo, W. De Boer, A. Dierlamm, K. El Morabit, N. Faltermann, B. Freund, M.
Giffels, M.A. Harrendorf, F. Hartmann17, S.M. Heindl, U. Husemann, I.
Katkov15, S. Kudella, S. Mitra, M.U. Mozer, Th. Müller, M. Musich, M. Plagge,
G. Quast, K. Rabbertz, M. Schröder, I. Shvetsov, H.J. Simonis, R. Ulrich, S.
Wayand, M. Weber, T. Weiler, C. Wöhrmann, R. Wolf Institute of Nuclear and
Particle Physics (INPP), NCSR Demokritos, Aghia Paraskevi, Greece
G. Anagnostou, G. Daskalakis, T. Geralis, A. Kyriakis, D. Loukas, G. Paspalaki
National and Kapodistrian University of Athens, Athens, Greece
A. Agapitos, G. Karathanasis, P. Kontaxakis, A. Panagiotou, I. Papavergou, N.
Saoulidou, K. Vellidis National Technical University of Athens, Athens, Greece
K. Kousouris, I. Papakrivopoulos, G. Tsipolitis University of Ioánnina,
Ioánnina, Greece
I. Evangelou, C. Foudas, P. Gianneios, P. Katsoulis, P. Kokkas, S. Mallios, N.
Manthos, I. Papadopoulos, E. Paradas, J. Strologas, F.A. Triantis, D.
Tsitsonis MTA-ELTE Lendület CMS Particle and Nuclear Physics Group, Eötvös
Loránd University, Budapest, Hungary
M. Bartók21, M. Csanad, N. Filipovic, P. Major, M.I. Nagy, G. Pasztor, O.
Surányi, G.I. Veres Wigner Research Centre for Physics, Budapest, Hungary
G. Bencze, C. Hajdu, D. Horvath22, Á. Hunyadi, F. Sikler, T.Á. Vámi, V.
Veszpremi, G. Vesztergombi${}^{\textrm{\textdagger}}$ Institute of Nuclear
Research ATOMKI, Debrecen, Hungary
N. Beni, S. Czellar, J. Karancsi21, A. Makovec, J. Molnar, Z. Szillasi
Institute of Physics, University of Debrecen, Debrecen, Hungary
P. Raics, Z.L. Trocsanyi, B. Ujvari Indian Institute of Science (IISc),
Bangalore, India
S. Choudhury, J.R. Komaragiri, P.C. Tiwari National Institute of Science
Education and Research, HBNI, Bhubaneswar, India
S. Bahinipati24, C. Kar, P. Mal, K. Mandal, A. Nayak25, S. Roy Chowdhury, D.K.
Sahoo24, S.K. Swain Panjab University, Chandigarh, India
S. Bansal, S.B. Beri, V. Bhatnagar, S. Chauhan, R. Chawla, N. Dhingra, R.
Gupta, A. Kaur, M. Kaur, S. Kaur, P. Kumari, M. Lohan, M. Meena, A. Mehta, K.
Sandeep, S. Sharma, J.B. Singh, A.K. Virdi, G. Walia University of Delhi,
Delhi, India
A. Bhardwaj, B.C. Choudhary, R.B. Garg, M. Gola, S. Keshri, Ashok Kumar, S.
Malhotra, M. Naimuddin, P. Priyanka, K. Ranjan, Aashaq Shah, R. Sharma Saha
Institute of Nuclear Physics, HBNI, Kolkata, India
R. Bhardwaj26, M. Bharti26, R. Bhattacharya, S. Bhattacharya, U. Bhawandeep26,
D. Bhowmik, S. Dey, S. Dutt26, S. Dutta, S. Ghosh, M. Maity27, K. Mondal, S.
Nandan, A. Purohit, P.K. Rout, A. Roy, G. Saha, S. Sarkar, T. Sarkar27, M.
Sharan, B. Singh26, S. Thakur26 Indian Institute of Technology Madras, Madras,
India
P.K. Behera, A. Muhammad Bhabha Atomic Research Centre, Mumbai, India
R. Chudasama, D. Dutta, V. Jha, V. Kumar, D.K. Mishra, P.K. Netrakanti, L.M.
Pant, P. Shukla, P. Suggisetti Tata Institute of Fundamental Research-A,
Mumbai, India
T. Aziz, M.A. Bhat, S. Dugad, G.B. Mohanty, N. Sur, RavindraKumar Verma Tata
Institute of Fundamental Research-B, Mumbai, India
S. Banerjee, S. Bhattacharya, S. Chatterjee, P. Das, M. Guchait, Sa. Jain, S.
Karmakar, S. Kumar, G. Majumder, K. Mazumdar, N. Sahoo Indian Institute of
Science Education and Research (IISER), Pune, India
S. Chauhan, S. Dube, V. Hegde, A. Kapoor, K. Kothekar, S. Pandey, A. Rane, A.
Rastogi, S. Sharma Institute for Research in Fundamental Sciences (IPM),
Tehran, Iran
S. Chenarani28, E. Eskandari Tadavani, S.M. Etesami28, M. Khakzad, M.
Mohammadi Najafabadi, M. Naseri, F. Rezaei Hosseinabadi, B. Safarzadeh29, M.
Zeinali University College Dublin, Dublin, Ireland
M. Felcini, M. Grunewald INFN Sezione di Bari a, Università di Bari b,
Politecnico di Bari c, Bari, Italy
M. Abbresciaa,b, C. Calabriaa,b, A. Colaleoa, D. Creanzaa,c, L. Cristellaa,b,
N. De Filippisa,c, M. De Palmaa,b, A. Di Florioa,b, F. Erricoa,b, L. Fiorea,
A. Gelmia,b, G. Iasellia,c, M. Incea,b, S. Lezkia,b, G. Maggia,c, M. Maggia,
G. Minielloa,b, S. Mya,b, S. Nuzzoa,b, A. Pompilia,b, G. Pugliesea,c, R.
Radognaa, A. Ranieria, G. Selvaggia,b, A. Sharmaa, L. Silvestrisa, R.
Vendittia, P. Verwilligena INFN Sezione di Bologna a, Università di Bologna b,
Bologna, Italy
G. Abbiendia, C. Battilanaa,b, D. Bonacorsia,b, L. Borgonovia,b, S. Braibant-
Giacomellia,b, R. Campaninia,b, P. Capiluppia,b, A. Castroa,b, F.R. Cavalloa,
S.S. Chhibraa,b, G. Codispotia,b, M. Cuffiania,b, G.M. Dallavallea, F.
Fabbria, A. Fanfania,b, E. Fontanesi, P. Giacomellia, C. Grandia, L.
Guiduccia,b, F. Iemmia,b, S. Lo Meoa,30, S. Marcellinia, G. Masettia, A.
Montanaria, F.L. Navarriaa,b, A. Perrottaa, F. Primaveraa,b, A.M. Rossia,b, T.
Rovellia,b, G.P. Sirolia,b, N. Tosia INFN Sezione di Catania a, Università di
Catania b, Catania, Italy
S. Albergoa,b, A. Di Mattiaa, R. Potenzaa,b, A. Tricomia,b, C. Tuvea,b INFN
Sezione di Firenze a, Università di Firenze b, Firenze, Italy
G. Barbaglia, K. Chatterjeea,b, V. Ciullia,b, C. Civininia, R.
D’Alessandroa,b, E. Focardia,b, G. Latino, P. Lenzia,b, M. Meschinia, S.
Paolettia, L. Russoa,31, G. Sguazzonia, D. Stroma, L. Viliania INFN Laboratori
Nazionali di Frascati, Frascati, Italy
L. Benussi, S. Bianco, F. Fabbri, D. Piccolo INFN Sezione di Genova a,
Università di Genova b, Genova, Italy
F. Ferroa, R. Mulargiaa,b, E. Robuttia, S. Tosia,b INFN Sezione di Milano-
Bicocca a, Università di Milano-Bicocca b, Milano, Italy
A. Benagliaa, A. Beschib, F. Brivioa,b, V. Cirioloa,b,17, S. Di Guidaa,b,17,
M.E. Dinardoa,b, S. Fiorendia,b, S. Gennaia, A. Ghezzia,b, P. Govonia,b, M.
Malbertia,b, S. Malvezzia, D. Menascea, F. Monti, L. Moronia, M. Paganonia,b,
D. Pedrinia, S. Ragazzia,b, T. Tabarelli de Fatisa,b, D. Zuoloa,b INFN Sezione
di Napoli a, Università di Napoli ’Federico II’ b, Napoli, Italy, Università
della Basilicata c, Potenza, Italy, Università G. Marconi d, Roma, Italy
S. Buontempoa, N. Cavalloa,c, A. De Iorioa,b, A. Di Crescenzoa,b, F.
Fabozzia,c, F. Fiengaa, G. Galatia, A.O.M. Iorioa,b, L. Listaa, S.
Meolaa,d,17, P. Paoluccia,17, C. Sciaccaa,b, E. Voevodinaa,b INFN Sezione di
Padova a, Università di Padova b, Padova, Italy, Università di Trento c,
Trento, Italy
P. Azzia, N. Bacchettaa, D. Biselloa,b, A. Bolettia,b, A. Bragagnolo, R.
Carlina,b, P. Checchiaa, M. Dall’Ossoa,b, P. De Castro Manzanoa, T. Dorigoa,
U. Dossellia, F. Gasparinia,b, U. Gasparinia,b, A. Gozzelinoa, S.Y. Hoh, S.
Lacapraraa, P. Lujan, M. Margonia,b, A.T. Meneguzzoa,b, J. Pazzinia,b, M.
Presillab, P. Ronchesea,b, R. Rossina,b, F. Simonettoa,b, A. Tiko, E.
Torassaa, M. Tosia,b, M. Zanettia,b, P. Zottoa,b, G. Zumerlea,b INFN Sezione
di Pavia a, Università di Pavia b, Pavia, Italy
A. Braghieria, A. Magnania, P. Montagnaa,b, S.P. Rattia,b, V. Rea, M.
Ressegottia,b, C. Riccardia,b, P. Salvinia, I. Vaia,b, P. Vituloa,b INFN
Sezione di Perugia a, Università di Perugia b, Perugia, Italy
M. Biasinia,b, G.M. Bileia, C. Cecchia,b, D. Ciangottinia,b, L. Fanòa,b, P.
Laricciaa,b, R. Leonardia,b, E. Manonia, G. Mantovania,b, V. Mariania,b, M.
Menichellia, A. Rossia,b, A. Santocchiaa,b, D. Spigaa INFN Sezione di Pisa a,
Università di Pisa b, Scuola Normale Superiore di Pisa c, Pisa, Italy
K. Androsova, P. Azzurria, G. Bagliesia, L. Bianchinia, T. Boccalia, L.
Borrello, R. Castaldia, M.A. Cioccia,b, R. Dell’Orsoa, G. Fedia, F. Fioria,c,
L. Gianninia,c, A. Giassia, M.T. Grippoa, F. Ligabuea,c, E. Mancaa,c, G.
Mandorlia,c, A. Messineoa,b, F. Pallaa, A. Rizzia,b, G. Rolandi32, P.
Spagnoloa, R. Tenchinia, G. Tonellia,b, A. Venturia, P.G. Verdinia INFN
Sezione di Roma a, Sapienza Università di Roma b, Rome, Italy
L. Baronea,b, F. Cavallaria, M. Cipriania,b, D. Del Rea,b, E. Di Marcoa,b, M.
Diemoza, S. Gellia,b, E. Longoa,b, B. Marzocchia,b, P. Meridiania, G.
Organtinia,b, F. Pandolfia, R. Paramattia,b, F. Preiatoa,b, S. Rahatloua,b, C.
Rovellia, F. Santanastasioa,b INFN Sezione di Torino a, Università di Torino
b, Torino, Italy, Università del Piemonte Orientale c, Novara, Italy
N. Amapanea,b, R. Arcidiaconoa,c, S. Argiroa,b, M. Arneodoa,c, N. Bartosika,
R. Bellana,b, C. Biinoa, A. Cappatia,b, N. Cartigliaa, F. Cennaa,b, S.
Comettia, M. Costaa,b, R. Covarellia,b, N. Demariaa, B. Kiania,b, C.
Mariottia, S. Masellia, E. Migliorea,b, V. Monacoa,b, E. Monteila,b, M.
Montenoa, M.M. Obertinoa,b, L. Pachera,b, N. Pastronea, M. Pelliccionia, G.L.
Pinna Angionia,b, A. Romeroa,b, M. Ruspaa,c, R. Sacchia,b, R. Salvaticoa,b, K.
Shchelinaa,b, V. Solaa, A. Solanoa,b, D. Soldia,b, A. Staianoa INFN Sezione di
Trieste a, Università di Trieste b, Trieste, Italy
S. Belfortea, V. Candelisea,b, M. Casarsaa, F. Cossuttia, A. Da Rolda,b, G.
Della Riccaa,b, F. Vazzolera,b, A. Zanettia Kyungpook National University,
Daegu, Korea
D.H. Kim, G.N. Kim, M.S. Kim, J. Lee, S. Lee, S.W. Lee, C.S. Moon, Y.D. Oh,
S.I. Pak, S. Sekmen, D.C. Son, Y.C. Yang Chonnam National University,
Institute for Universe and Elementary Particles, Kwangju, Korea
H. Kim, D.H. Moon, G. Oh Hanyang University, Seoul, Korea
B. Francois, J. Goh33, T.J. Kim Korea University, Seoul, Korea
S. Cho, S. Choi, Y. Go, D. Gyun, S. Ha, B. Hong, Y. Jo, K. Lee, K.S. Lee, S.
Lee, J. Lim, S.K. Park, Y. Roh Sejong University, Seoul, Korea
H.S. Kim Seoul National University, Seoul, Korea
J. Almond, J. Kim, J.S. Kim, H. Lee, K. Lee, K. Nam, S.B. Oh, B.C. Radburn-
Smith, S.h. Seo, U.K. Yang, H.D. Yoo, G.B. Yu University of Seoul, Seoul,
Korea
D. Jeon, H. Kim, J.H. Kim, J.S.H. Lee, I.C. Park Sungkyunkwan University,
Suwon, Korea
Y. Choi, C. Hwang, J. Lee, I. Yu Riga Technical University, Riga, Latvia
V. Veckalns34 Vilnius University, Vilnius, Lithuania
V. Dudenas, A. Juodagalvis, J. Vaitkus National Centre for Particle Physics,
Universiti Malaya, Kuala Lumpur, Malaysia
Z.A. Ibrahim, M.A.B. Md Ali35, F. Mohamad Idris36, W.A.T. Wan Abdullah, M.N.
Yusli, Z. Zolkapli Universidad de Sonora (UNISON), Hermosillo, Mexico
J.F. Benitez, A. Castaneda Hernandez, J.A. Murillo Quijada Centro de
Investigacion y de Estudios Avanzados del IPN, Mexico City, Mexico
H. Castilla-Valdez, E. De La Cruz-Burelo, M.C. Duran-Osuna, I. Heredia-De La
Cruz37, R. Lopez-Fernandez, J. Mejia Guisao, R.I. Rabadan-Trejo, M. Ramirez-
Garcia, G. Ramirez-Sanchez, R. Reyes-Almanza, A. Sanchez-Hernandez Universidad
Iberoamericana, Mexico City, Mexico
S. Carrillo Moreno, C. Oropeza Barrera, F. Vazquez Valencia Benemerita
Universidad Autonoma de Puebla, Puebla, Mexico
J. Eysermans, I. Pedraza, H.A. Salazar Ibarguen, C. Uribe Estrada Universidad
Autónoma de San Luis Potosí, San Luis Potosí, Mexico
A. Morelos Pineda University of Auckland, Auckland, New Zealand
D. Krofcheck University of Canterbury, Christchurch, New Zealand
S. Bheesette, P.H. Butler National Centre for Physics, Quaid-I-Azam
University, Islamabad, Pakistan
A. Ahmad, M. Ahmad, M.I. Asghar, Q. Hassan, H.R. Hoorani, W.A. Khan, M.A.
Shah, M. Shoaib, M. Waqas National Centre for Nuclear Research, Swierk, Poland
H. Bialkowska, M. Bluj, B. Boimska, T. Frueboes, M. Górski, M. Kazana, M.
Szleper, P. Traczyk, P. Zalewski Institute of Experimental Physics, Faculty of
Physics, University of Warsaw, Warsaw, Poland
K. Bunkowski, A. Byszuk38, K. Doroba, A. Kalinowski, M. Konecki, J.
Krolikowski, M. Misiura, M. Olszewski, A. Pyskir, M. Walczak Laboratório de
Instrumentação e Física Experimental de Partículas, Lisboa, Portugal
M. Araujo, P. Bargassa, C. Beirão Da Cruz E Silva, A. Di Francesco, P.
Faccioli, B. Galinhas, M. Gallinaro, J. Hollar, N. Leonardo, J. Seixas, G.
Strong, O. Toldaiev, J. Varela Joint Institute for Nuclear Research, Dubna,
Russia
S. Afanasiev, P. Bunin, M. Gavrilenko, I. Golutvin, I. Gorbunov, A. Kamenev,
V. Karjavine, A. Lanev, A. Malakhov, V. Matveev39,40, P. Moisenz, V. Palichik,
V. Perelygin, S. Shmatov, S. Shulha, N. Skatchkov, V. Smirnov, N. Voytishin,
A. Zarubin Petersburg Nuclear Physics Institute, Gatchina (St. Petersburg),
Russia
V. Golovtsov, Y. Ivanov, V. Kim41, E. Kuznetsova42, P. Levchenko, V. Murzin,
V. Oreshkin, I. Smirnov, D. Sosnov, V. Sulimov, L. Uvarov, S. Vavilov, A.
Vorobyev Institute for Nuclear Research, Moscow, Russia
Yu. Andreev, A. Dermenev, S. Gninenko, N. Golubev, A. Karneyeu, M. Kirsanov,
N. Krasnikov, A. Pashenkov, A. Shabanov, D. Tlisov, A. Toropin Institute for
Theoretical and Experimental Physics named by A.I. Alikhanov of NRC ‘Kurchatov
Institute’, Moscow, Russia
V. Epshteyn, V. Gavrilov, N. Lychkovskaya, V. Popov, I. Pozdnyakov, G.
Safronov, A. Spiridonov, A. Stepennov, V. Stolin, M. Toms, E. Vlasov, A.
Zhokin Moscow Institute of Physics and Technology, Moscow, Russia
T. Aushev P.N. Lebedev Physical Institute, Moscow, Russia
V. Andreev, M. Azarkin, I. Dremin40, M. Kirakosyan, A. Terkulov Skobeltsyn
Institute of Nuclear Physics, Lomonosov Moscow State University, Moscow,
Russia
A. Belyaev, E. Boos, A. Ershov, A. Gribushin, L. Khein, V. Klyukhin, O.
Kodolova, I. Lokhtin, O. Lukina, S. Obraztsov, S. Petrushanko, V. Savrin, A.
Snigirev Novosibirsk State University (NSU), Novosibirsk, Russia
A. Barnyakov43, V. Blinov43, T. Dimova43, L. Kardapoltsev43, Y. Skovpen43
Institute for High Energy Physics of National Research Centre ‘Kurchatov
Institute’, Protvino, Russia
I. Azhgirey, I. Bayshev, S. Bitioukov, V. Kachanov, A. Kalinin, D.
Konstantinov, P. Mandrik, V. Petrov, R. Ryutin, S. Slabospitskii, A. Sobol, S.
Troshin, N. Tyurin, A. Uzunian, A. Volkov National Research Tomsk Polytechnic
University, Tomsk, Russia
A. Babaev, S. Baidali, V. Okhotnikov University of Belgrade: Faculty of
Physics and VINCA Institute of Nuclear Sciences
P. Adzic44, P. Cirkovic, D. Devetak, M. Dordevic, P. Milenovic45, J. Milosevic
Centro de Investigaciones Energéticas Medioambientales y Tecnológicas
(CIEMAT), Madrid, Spain
J. Alcaraz Maestre, A. Álvarez Fernández, I. Bachiller, M. Barrio Luna, J.A.
Brochero Cifuentes, M. Cerrada, N. Colino, B. De La Cruz, A. Delgado Peris, C.
Fernandez Bedoya, J.P. Fernández Ramos, J. Flix, M.C. Fouz, O. Gonzalez Lopez,
S. Goy Lopez, J.M. Hernandez, M.I. Josa, D. Moran, A. Pérez-Calero Yzquierdo,
J. Puerta Pelayo, I. Redondo, L. Romero, S. Sánchez Navas, M.S. Soares, A.
Triossi Universidad Autónoma de Madrid, Madrid, Spain
C. Albajar, J.F. de Trocóniz Universidad de Oviedo, Instituto Universitario de
Ciencias y Tecnologías Espaciales de Asturias (ICTEA), Oviedo, Spain
J. Cuevas, C. Erice, J. Fernandez Menendez, S. Folgueras, I. Gonzalez
Caballero, J.R. González Fernández, E. Palencia Cortezon, V. Rodríguez Bouza,
S. Sanchez Cruz, J.M. Vizan Garcia Instituto de Física de Cantabria (IFCA),
CSIC-Universidad de Cantabria, Santander, Spain
I.J. Cabrillo, A. Calderon, B. Chazin Quero, J. Duarte Campderros, M.
Fernandez, P.J. Fernández Manteca, A. García Alonso, J. Garcia-Ferrero, G.
Gomez, A. Lopez Virto, J. Marco, C. Martinez Rivero, P. Martinez Ruiz del
Arbol, F. Matorras, J. Piedra Gomez, C. Prieels, T. Rodrigo, A. Ruiz-Jimeno,
L. Scodellaro, N. Trevisani, I. Vila, R. Vilar Cortabitarte University of
Ruhuna, Department of Physics, Matara, Sri Lanka
N. Wickramage CERN, European Organization for Nuclear Research, Geneva,
Switzerland
D. Abbaneo, B. Akgun, E. Auffray, G. Auzinger, P. Baillon, A.H. Ball, D.
Barney, J. Bendavid, M. Bianco, A. Bocci, C. Botta, E. Brondolin, T.
Camporesi, M. Cepeda, G. Cerminara, E. Chapon, Y. Chen, G. Cucciati, D.
d’Enterria, A. Dabrowski, N. Daci, V. Daponte, A. David, A. De Roeck, N.
Deelen, M. Dobson, M. Dünser, N. Dupont, A. Elliott-Peisert, F.
Fallavollita46, D. Fasanella, G. Franzoni, J. Fulcher, W. Funk, D. Gigi, A.
Gilbert, K. Gill, F. Glege, M. Gruchala, M. Guilbaud, D. Gulhan, J. Hegeman,
C. Heidegger, V. Innocente, G.M. Innocenti, A. Jafari, P. Janot, O.
Karacheban20, J. Kieseler, A. Kornmayer, M. Krammer1, C. Lange, P. Lecoq, C.
Lourenço, L. Malgeri, M. Mannelli, A. Massironi, F. Meijers, J.A. Merlin, S.
Mersi, E. Meschi, F. Moortgat, M. Mulders, J. Ngadiuba, S. Nourbakhsh, S.
Orfanelli, L. Orsini, F. Pantaleo17, L. Pape, E. Perez, M. Peruzzi, A.
Petrilli, G. Petrucciani, A. Pfeiffer, M. Pierini, F.M. Pitters, D. Rabady, A.
Racz, T. Reis, M. Rovere, H. Sakulin, C. Schäfer, C. Schwick, M. Selvaggi, A.
Sharma, P. Silva, P. Sphicas47, A. Stakia, J. Steggemann, D. Treille, A.
Tsirou, A. Vartak, M. Verzetti, W.D. Zeuner Paul Scherrer Institut, Villigen,
Switzerland
L. Caminada48, K. Deiters, W. Erdmann, R. Horisberger, Q. Ingram, H.C.
Kaestli, D. Kotlinski, U. Langenegger, T. Rohe, S.A. Wiederkehr ETH Zurich -
Institute for Particle Physics and Astrophysics (IPA), Zurich, Switzerland
M. Backhaus, L. Bäni, P. Berger, N. Chernyavskaya, G. Dissertori, M. Dittmar,
M. Donegà, C. Dorfer, T.A. Gómez Espinosa, C. Grab, D. Hits, T. Klijnsma, W.
Lustermann, R.A. Manzoni, M. Marionneau, M.T. Meinhard, F. Micheli, P.
Musella, F. Nessi-Tedaldi, F. Pauss, G. Perrin, L. Perrozzi, S. Pigazzini, M.
Reichmann, C. Reissel, D. Ruini, D.A. Sanz Becerra, M. Schönenberger, L.
Shchutska, V.R. Tavolaro, K. Theofilatos, M.L. Vesterbacka Olsson, R. Wallny,
D.H. Zhu Universität Zürich, Zurich, Switzerland
T.K. Aarrestad, C. Amsler49, D. Brzhechko, M.F. Canelli, A. De Cosa, R. Del
Burgo, S. Donato, C. Galloni, T. Hreus, B. Kilminster, S. Leontsinis, I.
Neutelings, G. Rauco, P. Robmann, D. Salerno, K. Schweiger, C. Seitz, Y.
Takahashi, S. Wertz, A. Zucchetta National Central University, Chung-Li,
Taiwan
T.H. Doan, R. Khurana, C.M. Kuo, W. Lin, A. Pozdnyakov, S.S. Yu National
Taiwan University (NTU), Taipei, Taiwan
P. Chang, Y. Chao, K.F. Chen, P.H. Chen, W.-S. Hou, Y.F. Liu, R.-S. Lu, E.
Paganis, A. Psallidas, A. Steen Chulalongkorn University, Faculty of Science,
Department of Physics, Bangkok, Thailand
B. Asavapibhop, N. Srimanobhas, N. Suwonjandee Çukurova University, Physics
Department, Science and Art Faculty, Adana, Turkey
A. Bat, F. Boran, S. Cerci50, S. Damarseckin, Z.S. Demiroglu, F. Dolek, C.
Dozen, I. Dumanoglu, E. Eskut, G. Gokbulut, Y. Guler, E. Gurpinar, I. Hos51,
C. Isik, E.E. Kangal52, O. Kara, A. Kayis Topaksu, U. Kiminsu, M. Oglakci, G.
Onengut, K. Ozdemir53, A. Polatoz, D. Sunar Cerci50, U.G. Tok, S. Turkcapar,
I.S. Zorbakir, C. Zorbilmez Middle East Technical University, Physics
Department, Ankara, Turkey
B. Isildak54, G. Karapinar55, M. Yalvac, M. Zeyrek Bogazici University,
Istanbul, Turkey
I.O. Atakisi, E. Gülmez, M. Kaya56, O. Kaya57, S. Ozkorucuklu58, S. Tekten,
E.A. Yetkin59 Istanbul Technical University, Istanbul, Turkey
M.N. Agaras, A. Cakir, K. Cankocak, Y. Komurcu, S. Sen60 Institute for
Scintillation Materials of National Academy of Science of Ukraine, Kharkov,
Ukraine
B. Grynyov National Scientific Center, Kharkov Institute of Physics and
Technology, Kharkov, Ukraine
L. Levchuk University of Bristol, Bristol, United Kingdom
F. Ball, J.J. Brooke, D. Burns, E. Clement, D. Cussans, O. Davignon, H.
Flacher, J. Goldstein, G.P. Heath, H.F. Heath, L. Kreczko, D.M. Newbold61, S.
Paramesvaran, B. Penning, T. Sakuma, D. Smith, V.J. Smith, J. Taylor, A.
Titterton Rutherford Appleton Laboratory, Didcot, United Kingdom
K.W. Bell, A. Belyaev62, C. Brew, R.M. Brown, D. Cieri, D.J.A. Cockerill, J.A.
Coughlan, K. Harder, S. Harper, J. Linacre, K. Manolopoulos, E. Olaiya, D.
Petyt, T. Schuh, C.H. Shepherd-Themistocleous, A. Thea, I.R. Tomalin, T.
Williams, W.J. Womersley Imperial College, London, United Kingdom
R. Bainbridge, P. Bloch, J. Borg, S. Breeze, O. Buchmuller, A. Bundock, D.
Colling, P. Dauncey, G. Davies, M. Della Negra, R. Di Maria, P. Everaerts, G.
Hall, G. Iles, T. James, M. Komm, C. Laner, L. Lyons, A.-M. Magnan, S. Malik,
A. Martelli, J. Nash63, A. Nikitenko7, V. Palladino, M. Pesaresi, D.M.
Raymond, A. Richards, A. Rose, E. Scott, C. Seez, A. Shtipliyski, G. Singh, M.
Stoye, T. Strebler, S. Summers, A. Tapper, K. Uchida, T. Virdee17, N. Wardle,
D. Winterbottom, J. Wright, S.C. Zenz Brunel University, Uxbridge, United
Kingdom
J.E. Cole, P.R. Hobson, A. Khan, P. Kyberd, C.K. Mackay, A. Morton, I.D. Reid,
L. Teodorescu, S. Zahid Baylor University, Waco, USA
K. Call, J. Dittmann, K. Hatakeyama, H. Liu, C. Madrid, B. McMaster, N.
Pastika, C. Smith Catholic University of America, Washington, DC, USA
R. Bartek, A. Dominguez The University of Alabama, Tuscaloosa, USA
A. Buccilli, S.I. Cooper, C. Henderson, P. Rumerio, C. West Boston University,
Boston, USA
D. Arcaro, T. Bose, D. Gastler, S. Girgis, D. Pinna, C. Richardson, J. Rohlf,
L. Sulak, D. Zou Brown University, Providence, USA
G. Benelli, B. Burkle, X. Coubez, D. Cutts, M. Hadley, J. Hakala, U. Heintz,
J.M. Hogan64, K.H.M. Kwok, E. Laird, G. Landsberg, J. Lee, Z. Mao, M. Narain,
S. Sagir65, R. Syarif, E. Usai, D. Yu University of California, Davis, Davis,
USA
R. Band, C. Brainerd, R. Breedon, D. Burns, M. Calderon De La Barca Sanchez,
M. Chertok, J. Conway, R. Conway, P.T. Cox, R. Erbacher, C. Flores, G. Funk,
W. Ko, O. Kukral, R. Lander, M. Mulhearn, D. Pellett, J. Pilot, S. Shalhout,
M. Shi, D. Stolp, D. Taylor, K. Tos, M. Tripathi, Z. Wang, F. Zhang University
of California, Los Angeles, USA
M. Bachtis, C. Bravo, R. Cousins, A. Dasgupta, S. Erhan, A. Florent, J.
Hauser, M. Ignatenko, N. Mccoll, S. Regnard, D. Saltzberg, C. Schnaible, V.
Valuev University of California, Riverside, Riverside, USA
E. Bouvier, K. Burt, R. Clare, J.W. Gary, S.M.A. Ghiasi Shirazi, G. Hanson, G.
Karapostoli, E. Kennedy, F. Lacroix, O.R. Long, M. Olmedo Negrete, M.I.
Paneva, W. Si, L. Wang, H. Wei, S. Wimpenny, B.R. Yates University of
California, San Diego, La Jolla, USA
J.G. Branson, P. Chang, S. Cittolin, M. Derdzinski, R. Gerosa, D. Gilbert, B.
Hashemi, A. Holzner, D. Klein, G. Kole, V. Krutelyov, J. Letts, M.
Masciovecchio, S. May, D. Olivito, S. Padhi, M. Pieri, V. Sharma, M. Tadel, J.
Wood, F. Würthwein, A. Yagil, G. Zevi Della Porta University of California,
Santa Barbara - Department of Physics, Santa Barbara, USA
N. Amin, R. Bhandari, C. Campagnari, M. Citron, V. Dutta, M. Franco Sevilla,
L. Gouskos, R. Heller, J. Incandela, H. Mei, A. Ovcharova, H. Qu, J. Richman,
D. Stuart, I. Suarez, S. Wang, J. Yoo California Institute of Technology,
Pasadena, USA
D. Anderson, A. Bornheim, J.M. Lawhorn, N. Lu, H.B. Newman, T.Q. Nguyen, J.
Pata, M. Spiropulu, J.R. Vlimant, R. Wilkinson, S. Xie, Z. Zhang, R.Y. Zhu
Carnegie Mellon University, Pittsburgh, USA
M.B. Andrews, T. Ferguson, T. Mudholkar, M. Paulini, M. Sun, I. Vorobiev, M.
Weinberg University of Colorado Boulder, Boulder, USA
J.P. Cumalat, W.T. Ford, F. Jensen, A. Johnson, E. MacDonald, T. Mulholland,
R. Patel, A. Perloff, K. Stenson, K.A. Ulmer, S.R. Wagner Cornell University,
Ithaca, USA
J. Alexander, J. Chaves, Y. Cheng, J. Chu, A. Datta, K. Mcdermott, N. Mirman,
J.R. Patterson, D. Quach, A. Rinkevicius, A. Ryd, L. Skinnari, L. Soffi, S.M.
Tan, Z. Tao, J. Thom, J. Tucker, P. Wittich, M. Zientek Fermi National
Accelerator Laboratory, Batavia, USA
S. Abdullin, M. Albrow, M. Alyari, G. Apollinari, A. Apresyan, A. Apyan, S.
Banerjee, L.A.T. Bauerdick, A. Beretvas, J. Berryhill, P.C. Bhat, K. Burkett,
J.N. Butler, A. Canepa, G.B. Cerati, H.W.K. Cheung, F. Chlebana, M. Cremonesi,
J. Duarte, V.D. Elvira, J. Freeman, Z. Gecse, E. Gottschalk, L. Gray, D.
Green, S. Grünendahl, O. Gutsche, J. Hanlon, R.M. Harris, S. Hasegawa, J.
Hirschauer, Z. Hu, B. Jayatilaka, S. Jindariani, M. Johnson, U. Joshi, B.
Klima, M.J. Kortelainen, B. Kreis, S. Lammel, D. Lincoln, R. Lipton, M. Liu,
T. Liu, J. Lykken, K. Maeshima, J.M. Marraffino, D. Mason, P. McBride, P.
Merkel, S. Mrenna, S. Nahn, V. O’Dell, K. Pedro, C. Pena, O. Prokofyev, G.
Rakness, F. Ravera, A. Reinsvold, L. Ristori, A. Savoy-Navarro66, B.
Schneider, E. Sexton-Kennedy, A. Soha, W.J. Spalding, L. Spiegel, S. Stoynev,
J. Strait, N. Strobbe, L. Taylor, S. Tkaczyk, N.V. Tran, L. Uplegger, E.W.
Vaandering, C. Vernieri, M. Verzocchi, R. Vidal, M. Wang, H.A. Weber
University of Florida, Gainesville, USA
D. Acosta, P. Avery, P. Bortignon, D. Bourilkov, A. Brinkerhoff, L. Cadamuro,
A. Carnes, D. Curry, R.D. Field, S.V. Gleyzer, B.M. Joshi, J. Konigsberg, A.
Korytov, K.H. Lo, P. Ma, K. Matchev, N. Menendez, G. Mitselmakher, D.
Rosenzweig, K. Shi, D. Sperka, J. Wang, S. Wang, X. Zuo Florida International
University, Miami, USA
Y.R. Joshi, S. Linn Florida State University, Tallahassee, USA
A. Ackert, T. Adams, A. Askew, S. Hagopian, V. Hagopian, K.F. Johnson, T.
Kolberg, G. Martinez, T. Perry, H. Prosper, A. Saha, C. Schiber, R. Yohay
Florida Institute of Technology, Melbourne, USA
M.M. Baarmand, V. Bhopatkar, S. Colafranceschi, M. Hohlmann, D. Noonan, M.
Rahmani, T. Roy, M. Saunders, F. Yumiceva University of Illinois at Chicago
(UIC), Chicago, USA
M.R. Adams, L. Apanasevich, D. Berry, R.R. Betts, R. Cavanaugh, X. Chen, S.
Dittmer, O. Evdokimov, C.E. Gerber, D.A. Hangal, D.J. Hofman, K. Jung, J.
Kamin, C. Mills, M.B. Tonjes, N. Varelas, H. Wang, X. Wang, Z. Wu, J. Zhang
The University of Iowa, Iowa City, USA
M. Alhusseini, B. Bilki67, W. Clarida, K. Dilsiz68, S. Durgut, R.P.
Gandrajula, M. Haytmyradov, V. Khristenko, J.-P. Merlo, A. Mestvirishvili, A.
Moeller, J. Nachtman, H. Ogul69, Y. Onel, F. Ozok70, A. Penzo, C. Snyder, E.
Tiras, J. Wetzel Johns Hopkins University, Baltimore, USA
B. Blumenfeld, A. Cocoros, N. Eminizer, D. Fehling, L. Feng, A.V. Gritsan,
W.T. Hung, P. Maksimovic, J. Roskes, U. Sarica, M. Swartz, M. Xiao The
University of Kansas, Lawrence, USA
A. Al-bataineh, P. Baringer, A. Bean, S. Boren, J. Bowen, A. Bylinkin, J.
Castle, S. Khalil, A. Kropivnitskaya, D. Majumder, W. Mcbrayer, M. Murray, C.
Rogan, S. Sanders, E. Schmitz, J.D. Tapia Takaki, Q. Wang Kansas State
University, Manhattan, USA
S. Duric, A. Ivanov, K. Kaadze, D. Kim, Y. Maravin, D.R. Mendis, T. Mitchell,
A. Modak, A. Mohammadi Lawrence Livermore National Laboratory, Livermore, USA
F. Rebassoo, D. Wright University of Maryland, College Park, USA
A. Baden, O. Baron, A. Belloni, S.C. Eno, Y. Feng, C. Ferraioli, N.J. Hadley,
S. Jabeen, G.Y. Jeng, R.G. Kellogg, J. Kunkle, A.C. Mignerey, S. Nabili, F.
Ricci-Tam, M. Seidel, Y.H. Shin, A. Skuja, S.C. Tonwar, K. Wong Massachusetts
Institute of Technology, Cambridge, USA
D. Abercrombie, B. Allen, V. Azzolini, A. Baty, R. Bi, S. Brandt, W. Busza,
I.A. Cali, M. D’Alfonso, Z. Demiragli, G. Gomez Ceballos, M. Goncharov, P.
Harris, D. Hsu, M. Hu, Y. Iiyama, M. Klute, D. Kovalskyi, Y.-J. Lee, P.D.
Luckey, B. Maier, A.C. Marini, C. Mcginn, C. Mironov, S. Narayanan, X. Niu, C.
Paus, D. Rankin, C. Roland, G. Roland, Z. Shi, G.S.F. Stephans, K. Sumorok, K.
Tatar, D. Velicanu, J. Wang, T.W. Wang, B. Wyslouch University of Minnesota,
Minneapolis, USA
A.C. Benvenuti${}^{\textrm{\textdagger}}$, R.M. Chatterjee, A. Evans, P.
Hansen, J. Hiltbrand, Sh. Jain, S. Kalafut, M. Krohn, Y. Kubota, Z. Lesko, J.
Mans, R. Rusack, M.A. Wadud University of Mississippi, Oxford, USA
J.G. Acosta, S. Oliveros University of Nebraska-Lincoln, Lincoln, USA
E. Avdeeva, K. Bloom, D.R. Claes, C. Fangmeier, F. Golf, R. Gonzalez Suarez,
R. Kamalieddin, I. Kravchenko, J. Monroy, J.E. Siado, G.R. Snow, B. Stieger
State University of New York at Buffalo, Buffalo, USA
A. Godshalk, C. Harrington, I. Iashvili, A. Kharchilava, C. Mclean, D. Nguyen,
A. Parker, S. Rappoccio, B. Roozbahani Northeastern University, Boston, USA
G. Alverson, E. Barberis, C. Freer, Y. Haddad, A. Hortiangtham, G. Madigan,
D.M. Morse, T. Orimoto, A. Tishelman-charny, T. Wamorkar, B. Wang, A.
Wisecarver, D. Wood Northwestern University, Evanston, USA
S. Bhattacharya, J. Bueghly, O. Charaf, T. Gunter, K.A. Hahn, N. Odell, M.H.
Schmitt, K. Sung, M. Trovato, M. Velasco University of Notre Dame, Notre Dame,
USA
R. Bucci, N. Dev, R. Goldouzian, M. Hildreth, K. Hurtado Anampa, C. Jessop,
D.J. Karmgard, K. Lannon, W. Li, N. Loukas, N. Marinelli, F. Meng, C. Mueller,
Y. Musienko39, M. Planer, R. Ruchti, P. Siddireddy, G. Smith, S. Taroni, M.
Wayne, A. Wightman, M. Wolf, A. Woodard The Ohio State University, Columbus,
USA
J. Alimena, L. Antonelli, B. Bylsma, L.S. Durkin, S. Flowers, B. Francis, C.
Hill, W. Ji, T.Y. Ling, W. Luo, B.L. Winer Princeton University, Princeton,
USA
S. Cooperstein, P. Elmer, J. Hardenbrook, N. Haubrich, S. Higginbotham, A.
Kalogeropoulos, S. Kwan, D. Lange, M.T. Lucchini, J. Luo, D. Marlow, K. Mei,
I. Ojalvo, J. Olsen, C. Palmer, P. Piroué, J. Salfeld-Nebgen, D. Stickland, C.
Tully University of Puerto Rico, Mayaguez, USA
S. Malik, S. Norberg Purdue University, West Lafayette, USA
A. Barker, V.E. Barnes, S. Das, L. Gutay, M. Jones, A.W. Jung, A. Khatiwada,
B. Mahakud, D.H. Miller, N. Neumeister, C.C. Peng, S. Piperov, H. Qiu, J.F.
Schulte, J. Sun, F. Wang, R. Xiao, W. Xie Purdue University Northwest,
Hammond, USA
T. Cheng, J. Dolen, N. Parashar Rice University, Houston, USA
Z. Chen, K.M. Ecklund, S. Freed, F.J.M. Geurts, M. Kilpatrick, Arun Kumar, W.
Li, B.P. Padley, R. Redjimi, J. Roberts, J. Rorie, W. Shi, Z. Tu, A. Zhang
University of Rochester, Rochester, USA
A. Bodek, P. de Barbaro, R. Demina, Y.t. Duh, J.L. Dulemba, C. Fallon, T.
Ferbel, M. Galanti, A. Garcia-Bellido, J. Han, O. Hindrichs, A.
Khukhunaishvili, E. Ranken, P. Tan, R. Taus The Rockefeller University, New
York, USA
R. Ciesielski, K. Goulianos Rutgers, The State University of New Jersey,
Piscataway, USA
B. Chiarito, J.P. Chou, Y. Gershtein, E. Halkiadakis, A. Hart, M. Heindl, E.
Hughes, S. Kaplan, R. Kunnawalkam Elayavalli, S. Kyriacou, I. Laflotte, A.
Lath, R. Montalvo, K. Nash, M. Osherson, H. Saka, S. Salur, S. Schnetzer, D.
Sheffield, S. Somalwar, R. Stone, S. Thomas, P. Thomassen University of
Tennessee, Knoxville, USA
H. Acharya, A.G. Delannoy, J. Heideman, G. Riley, S. Spanier Texas A&M
University, College Station, USA
O. Bouhali71, A. Celik, M. Dalchenko, M. De Mattia, A. Delgado, S. Dildick, R.
Eusebi, J. Gilmore, T. Huang, T. Kamon72, S. Luo, D. Marley, R. Mueller, D.
Overton, L. Perniè, D. Rathjens, A. Safonov Texas Tech University, Lubbock,
USA
N. Akchurin, J. Damgov, F. De Guio, P.R. Dudero, S. Kunori, K. Lamichhane,
S.W. Lee, T. Mengke, S. Muthumuni, T. Peltola, S. Undleeb, I. Volobouev, Z.
Wang, A. Whitbeck Vanderbilt University, Nashville, USA
S. Greene, A. Gurrola, R. Janjam, W. Johns, C. Maguire, A. Melo, H. Ni, K.
Padeken, F. Romeo, P. Sheldon, S. Tuo, J. Velkovska, M. Verweij, Q. Xu
University of Virginia, Charlottesville, USA
M.W. Arenton, P. Barria, B. Cox, R. Hirosky, M. Joyce, A. Ledovskoy, H. Li, C.
Neu, T. Sinthuprasith, Y. Wang, E. Wolfe, F. Xia Wayne State University,
Detroit, USA
R. Harr, P.E. Karchin, N. Poudyal, J. Sturdy, P. Thapa, S. Zaleski University
of Wisconsin - Madison, Madison, WI, USA
J. Buchanan, C. Caillol, D. Carlsmith, S. Dasu, I. De Bruyn, L. Dodd, B.
Gomber73, M. Grothe, M. Herndon, A. Hervé, U. Hussain, P. Klabbers, A. Lanaro,
K. Long, R. Loveless, T. Ruggles, A. Savin, V. Sharma, N. Smith, W.H. Smith,
N. Woods †: Deceased
1: Also at Vienna University of Technology, Vienna, Austria
2: Also at IRFU, CEA, Université Paris-Saclay, Gif-sur-Yvette, France
3: Also at Universidade Estadual de Campinas, Campinas, Brazil
4: Also at Federal University of Rio Grande do Sul, Porto Alegre, Brazil
5: Also at Université Libre de Bruxelles, Bruxelles, Belgium
6: Also at University of Chinese Academy of Sciences, Beijing, China
7: Also at Institute for Theoretical and Experimental Physics named by A.I.
Alikhanov of NRC ‘Kurchatov Institute’, Moscow, Russia
8: Also at Joint Institute for Nuclear Research, Dubna, Russia
9: Now at Cairo University, Cairo, Egypt
10: Also at Fayoum University, El-Fayoum, Egypt
11: Now at British University in Egypt, Cairo, Egypt
12: Now at Ain Shams University, Cairo, Egypt
13: Also at Department of Physics, King Abdulaziz University, Jeddah, Saudi
Arabia
14: Also at Université de Haute Alsace, Mulhouse, France
15: Also at Skobeltsyn Institute of Nuclear Physics, Lomonosov Moscow State
University, Moscow, Russia
16: Also at Tbilisi State University, Tbilisi, Georgia
17: Also at CERN, European Organization for Nuclear Research, Geneva,
Switzerland
18: Also at RWTH Aachen University, III. Physikalisches Institut A, Aachen,
Germany
19: Also at University of Hamburg, Hamburg, Germany
20: Also at Brandenburg University of Technology, Cottbus, Germany
21: Also at Institute of Physics, University of Debrecen, Debrecen, Hungary
22: Also at Institute of Nuclear Research ATOMKI, Debrecen, Hungary
23: Also at MTA-ELTE Lendület CMS Particle and Nuclear Physics Group, Eötvös
Loránd University, Budapest, Hungary
24: Also at IIT Bhubaneswar, Bhubaneswar, India
25: Also at Institute of Physics, Bhubaneswar, India
26: Also at Shoolini University, Solan, India
27: Also at University of Visva-Bharati, Santiniketan, India
28: Also at Isfahan University of Technology, Isfahan, Iran
29: Also at Plasma Physics Research Center, Science and Research Branch,
Islamic Azad University, Tehran, Iran
30: Also at Italian National Agency for New Technologies, Energy and
Sustainable Economic Development, Bologna, Italy
31: Also at Università degli Studi di Siena, Siena, Italy
32: Also at Scuola Normale e Sezione dell’INFN, Pisa, Italy
33: Also at Kyung Hee University, Department of Physics, Seoul, Korea
34: Also at Riga Technical University, Riga, Latvia
35: Also at International Islamic University of Malaysia, Kuala Lumpur,
Malaysia
36: Also at Malaysian Nuclear Agency, MOSTI, Kajang, Malaysia
37: Also at Consejo Nacional de Ciencia y Tecnología, Mexico City, Mexico
38: Also at Warsaw University of Technology, Institute of Electronic Systems,
Warsaw, Poland
39: Also at Institute for Nuclear Research, Moscow, Russia
40: Now at National Research Nuclear University ’Moscow Engineering Physics
Institute’ (MEPhI), Moscow, Russia
41: Also at St. Petersburg State Polytechnical University, St. Petersburg,
Russia
42: Also at University of Florida, Gainesville, USA
43: Also at Budker Institute of Nuclear Physics, Novosibirsk, Russia
44: Also at Faculty of Physics, University of Belgrade, Belgrade, Serbia
45: Also at University of Belgrade: Faculty of Physics and VINCA Institute of
Nuclear Sciences, Belgrade, Serbia
46: Also at INFN Sezione di Pavia, Università di Pavia, Pavia, Italy
47: Also at National and Kapodistrian University of Athens, Athens, Greece
48: Also at Universität Zürich, Zurich, Switzerland
49: Also at Stefan Meyer Institute for Subatomic Physics, Vienna, Austria
50: Also at Adiyaman University, Adiyaman, Turkey
51: Also at Istanbul Aydin University, Application and Research Center for
Advanced Studies (App. & Res. Cent. for Advanced Studies), Istanbul, Turkey
52: Also at Mersin University, Mersin, Turkey
53: Also at Piri Reis University, Istanbul, Turkey
54: Also at Ozyegin University, Istanbul, Turkey
55: Also at Izmir Institute of Technology, Izmir, Turkey
56: Also at Marmara University, Istanbul, Turkey
57: Also at Kafkas University, Kars, Turkey
58: Also at Istanbul University, Istanbul, Turkey
59: Also at Istanbul Bilgi University, Istanbul, Turkey
60: Also at Hacettepe University, Ankara, Turkey
61: Also at Rutherford Appleton Laboratory, Didcot, United Kingdom
62: Also at School of Physics and Astronomy, University of Southampton,
Southampton, United Kingdom
63: Also at Monash University, Faculty of Science, Clayton, Australia
64: Also at Bethel University, St. Paul, USA
65: Also at Karamanoğlu Mehmetbey University, Karaman, Turkey
66: Also at Purdue University, West Lafayette, USA
67: Also at Beykent University, Istanbul, Turkey
68: Also at Bingol University, Bingol, Turkey
69: Also at Sinop University, Sinop, Turkey
70: Also at Mimar Sinan University, Istanbul, Turkey
71: Also at Texas A&M University at Qatar, Doha, Qatar
72: Also at Kyungpook National University, Daegu, Korea
73: Also at University of Hyderabad, Hyderabad, India
## .11 The TOTEM Collaboration
G. Antcheva, P. Aspell9, I. Atanassova, V. Avati7,9, J. Baechler9, C.
Baldenegro Barrera11, V. Berardi4a,4b, M. Berretti2a, V. Borchsh8, E.
Bossini9,6b, U. Bottigli6b, M. Bozzo5a,5b, H. Burkhardt9, F. S. Cafagna4a, M.
G. Catanesi4a, M. Csanád3a,b, T. Csörgő3a,3b, M. Deile9, F. De Leonardis4c,4a,
M. Doubek1c, D. Druzhkin8,9, K. Eggert10, V. Eremind, A. Fiergolski9, L.
Forthomme2a,2b, F. Garcia2a, V. Georgiev1a, S. Giani9, L. Grzanka7, J.
Hammerbauer1a, T. Isidori11, V. Ivanchenko8, M. Janda1c, A. Karev9, J.
Kašpar1b,9, B. Kaynake, J. Kopal9, V. Kundrát1b, S. Lami6a, R. Linhart1a, C.
Lindsey11, M. V. Lokajíček1b,†, L. Losurdo6b, F. Lucas Rodríguez9, M. Macrí5a,
M. Malawski7, N. Minafra11, S. Minutoli5a, T. Naaranoja2a,2b, F. Nemes9,3a, H.
Niewiadomski10, T. Novák3b, E. Oliveri9, F. Oljemark2a,2b, M. Oriunnof, K.
Österberg2a,2b, P. Palazzi9, V. Passaro4c,4a, Z. Peroutka1a, J. Procházka1b,
M. Quinto4a,4b, E. Radermacher9, E. Radicioni4a, F. Ravotti9, C. Royon11, G.
Ruggiero9, H. Saarikko2a,2b, V.D. Samoylenkoc, A. Scribano6a, J. Siroky1a, J.
Smajek9, W. Snoeys9, R. Stefanovitch9, J. Sziklai3a, C. Taylor10, E.
Tcherniaev8, N. Turini6b, O. Urban1a, V. Vacek1c, O. Vavroch1a, J. Welti2a,2b,
J. Williams11, J. Zich1a, K. Zielinski7
1aUniversity of West Bohemia, Pilsen, Czech Republic.
1bInstitute of Physics of the Academy of Sciences of the Czech Republic,
Prague, Czech Republic.
1cCzech Technical University, Prague, Czech Republic.
2aHelsinki Institute of Physics, University of Helsinki, Helsinki, Finland.
2bDepartment of Physics, University of Helsinki, Helsinki, Finland.
3aWigner Research Centre for Physics, RMKI, Budapest, Hungary.
3bEKU KRC, Gyöngyös, Hungary.
4aINFN Sezione di Bari, Bari, Italy.
4bDipartimento Interateneo di Fisica di Bari, Bari, Italy.
4cDipartimento di Ingegneria Elettrica e dell’Informazione — Politecnico di
Bari, Bari, Italy.
5aINFN Sezione di Genova, Genova, Italy.
5bUniversità degli Studi di Genova, Italy.
6aINFN Sezione di Pisa, Pisa, Italy.
6bUniversità degli Studi di Siena and Gruppo Collegato INFN di Siena, Siena,
Italy.
7AGH University of Science and Technology, Krakow, Poland.
8Tomsk State University, Tomsk, Russia.
9CERN, Geneva, Switzerland.
10Case Western Reserve University, Dept. of Physics, Cleveland, OH, USA.
11The University of Kansas, Lawrence, USA.
††a INRNE-BAS, Institute for Nuclear Research and Nuclear Energy, Bulgarian
Academy of Sciences, Sofia, Bulgaria.††b Department of Atomic Physics, ELTE
University, Budapest, Hungary.††c NRC ‘Kurchatov Institute’–IHEP, Protvino,
Russia.††d Ioffe Physical - Technical Institute of Russian Academy of
Sciences, St. Petersburg, Russian Federation.††e Istanbul University,
Istanbul, Turkey.††f SLAC National Accelerator Laboratory, Stanford CA,
USA.††† Deceased.
|
2024-09-04T02:54:55.091623 | 2020-02-27T15:01:11 | 2002.12151 | {
"authors": "Fares Elsabbagh, Blaise Tine, Priyadarshini Roshan, Ethan Lyons, Euna\n Kim, Da Eun Shim, Lingjun Zhu, Sung Kyu Lim and Hyesoon kim",
"full_text_license": null,
"license": "Creative Commons Zero - Public Domain - https://creativecommons.org/publicdomain/zero/1.0/",
"provenance": "arxiv-papers-0000.json.gz:25916",
"submitter": "Blaise Pascal Tine",
"url": "https://arxiv.org/abs/2002.12151"
} | arxiv-papers | # Vortex: OpenCL Compatible RISC-V GPGPU
Fares Elsabbagh, Georgia Tech, <EMAIL_ADDRESS>
Blaise Tine, Georgia Tech, <EMAIL_ADDRESS>
Priyadarshini Roshan, Georgia Tech, <EMAIL_ADDRESS>
Ethan Lyons, Georgia Tech, <EMAIL_ADDRESS>
Euna Kim, Georgia Tech, <EMAIL_ADDRESS>
Da Eun Shim, Georgia Tech, <EMAIL_ADDRESS>
Lingjun Zhu, Georgia Tech, <EMAIL_ADDRESS>
Sung Kyu Lim, Georgia Tech, <EMAIL_ADDRESS>
Hyesoon Kim, Georgia Tech, <EMAIL_ADDRESS>
###### Abstract
The current challenges in technology scaling are pushing the semiconductor
industry towards hardware specialization, creating a proliferation of
heterogeneous systems-on-chip, delivering orders of magnitude performance and
power benefits compared to traditional general-purpose architectures. This
transition is getting a significant boost with the advent of RISC-V with its
unique modular and extensible ISA, allowing a wide range of low-cost processor
designs for various target applications. In addition, OpenCL is currently the
most widely adopted programming framework for heterogeneous platforms
available on mainstream CPUs, GPUs, as well as FPGAs and custom DSP.
In this work, we present Vortex, a RISC-V General-Purpose GPU that supports
OpenCL. Vortex implements a SIMT architecture with a minimal ISA extension to
RISC-V that enables the execution of OpenCL programs. We also extended the
OpenCL runtime framework to use the new ISA. We evaluate this design using
15nm technology, and we report performance and energy numbers obtained by
running a subset of benchmarks from the Rodinia benchmark suite.
###### Index Terms:
GPGPU, OpenCL, Vector processors
## I Introduction
The emergence of data parallel architectures and general purpose graphics
processing units (GPGPUs) has enabled new opportunities to address the power
limitations and scalability of multi-core processors, allowing new ways to
exploit the abundant data parallelism present in emerging big-data parallel
applications such as machine learning and graph analytics. GPGPUs in
particular, with their Single Instruction Multiple-Thread (SIMT) execution
model, heavily leverage data-parallel multi-threading to maximize throughput
at relatively low energy cost, leading the current race for energy efficiency
(Green500 [12]) and application support with their accelerator-centric
parallel programming model (CUDA [19] and OpenCL [17]).
The advent of RISC-V [21, 20, 2], an open-source and free instruction set
architecture (ISA), provides a new level of freedom in designing hardware
architectures at lower cost, leveraging its rich ecosystem of open-source
software and tools. With RISC-V, computer architects have designed several
innovative processors and cores, such as the BOOM v1 and BOOM v2 [4]
out-of-order cores, as well as system-on-chip (SoC) platforms for a wide range
of applications. For instance, Gautschi et al. [9] have extended RISC-V to
digital signal processing (DSP) for scalable Internet-of-things (IoT) devices.
Moreover, vector processors [22] [15] [3] and processors integrated with
vector accelerators [16] [10] have been designed and fabricated based on
RISC-V. In spite of the advantages of the preceding works, not enough
attention has been devoted to building an open-source general-purpose GPU
(GPGPU) system based on RISC-V.
Although a couple of recent works have been proposed for massively parallel
computation on FPGAs using RISC-V (GRVI Phalanx [11], Simty [6]), none of
them has implemented the full stack by extending the RISC-V ISA, synthesizing
the microarchitecture, and implementing the software stack to execute OpenCL
programs. We believe that such an implementation is in fact necessary to
achieve the level of usability and customizability in massively parallel
platforms.
In this paper, we propose a RISC-V ISA extension for GPGPU programs along with
a corresponding microarchitecture. We also extend a software stack to support
OpenCL.
This paper makes the following key contributions:
* •
We propose a highly configurable SIMT-based general-purpose GPU architecture
targeting the RISC-V ISA, and we synthesize our RTL design using a Synopsys
library.
* •
We show that a minimal set of five instructions on top of RV32IM (the RISC-V
32-bit integer and multiply extensions) enables SIMT execution.
* •
We describe the necessary changes in the software stack that enable the
execution of OpenCL programs on Vortex. We demonstrate the portability by
running a subset of Rodinia benchmarks [5].
Figure 1: Vortex System Overview
## II Background
### II-A Open-Source OpenCL Implementations
POCL [13] implements a flexible compilation backend based on LLVM, allowing it
to support a wide range of device targets including general-purpose
processors (e.g., x86, ARM, MIPS), general-purpose GPUs (e.g., Nvidia),
TCE-based processors [14], and custom accelerators. The custom accelerator support
provides an efficient solution for enabling OpenCL applications to use
hardware devices with specialized fixed-function hardware (e.g. SPMV, GEMM).
POCL comprises two main components: a back-end compilation engine and a
front-end OpenCL runtime API.
The POCL runtime implements the device-independent common interface where the
target implementation of each device plugs into POCL to specialize their
operations. At runtime, POCL invokes the back-end compiler with the provided
OpenCL kernel source. POCL supports target-specific execution models including
SIMT, MIMD, SIMD, and VLIW. On platforms supporting MIMD and SIMD execution
models, such as CPUs, the POCL compiler attempts to pack as many OpenCL
work-items as possible into the same vector instruction; the POCL runtime then
distributes the remaining work-items among the active hardware threads on the
device with the provided synchronization. On platforms supporting the SIMT
execution model, such as
GPUs, the POCL compiler delegates the distribution of the work-items to the
hardware to spread the execution among its hardware threads, relying on the
device to also handle the necessary synchronization. On platforms supporting
VLIW execution models such as TCE-based accelerators, the POCL compiler
attempts to ”unroll” the parallel regions in the kernel code such that the
operations of several independent work-items can be statically scheduled to
the multiple function units of the target device.
## III The OpenCL Software Stack
Figure 2: Vortex Runtime Library
vx_intrinsic.s:
vx_split:
    .word 0x0005206b  # split a0
    ret
vx_join:
    .word 0x0000306b  # join
    ret

kernel.cl:
#define __if(cond) split(cond); \
                   if(cond)
#define __endif    join();

void opencl_kernel() {
    int id = vx_getTid();
    __if(id < 4) {
        // Path A
    } else {
        // Path B
    } __endif
}
Figure 3: The control-divergence __if/__endif macro definitions and how they
can be used to enable control divergence in OpenCL kernels. Currently, this
process is done manually for each kernel.
### III-A Vortex Native Runtime
The Vortex software stack implements a native runtime library for developing
applications that will run on Vortex and take advantage of the new RISC-V ISA
extension. Figure 2 illustrates the Vortex runtime layer, which comprises
three main components: 1) a low-level intrinsic library exposing the new ISA
interface, 2) a support library that implements the NewLib stub functions [7],
and 3) a native runtime API for launching POCL kernels.
#### III-A1 Intrinsic Library
To enable the Vortex runtime kernel to utilize the new instructions without
modifying the existing compilers, we implemented an intrinsic layer that
exposes the new ISA. Figure 2 shows the functions and ISA supported by the
intrinsic library. We leverage the RISC-V ABI, which guarantees that function
arguments are passed through the argument registers and return values are
passed through the a0 register. Thus, these intrinsic functions have only two
assembly instructions: 1) The encoded 32-bit hex representation of the
instruction that uses the argument registers as source registers, and 2) a
return instruction that returns back to the C++ program. An example of these
intrinsic functions is illustrated in Figure 3. In addition, to handle control
divergence, which is frequent in OpenCL kernels, we implement __if and __endif
macros shown in Figure 3 to handle the insertion of these intrinsic functions
with minimal changes to the code. These changes are currently done manually
for the OpenCL kernels. This approach achieves the required functionality
without restricting the platform or requiring any modifications to the RISC-V
compilers.
#### III-A2 Newlib Stubs Library
The Vortex software stack uses the NewLib [7] library to enable programs to
use the C/C++ standard library without the need to support an operating
system. NewLib defines a minimal set of stub functions that client
applications need to implement to handle necessary system calls such as file
I/O, memory allocation, time, and process control.
#### III-A3 Vortex Native API
The Vortex native API implements some general purpose utility routines for
applications to use. One such routine is pocl_spawn(), which allows programs
to schedule POCL kernel execution on Vortex. pocl_spawn() is
responsible for mapping work groups requested by POCL to the hardware: 1) It
uses the intrinsic layer to find out the available hardware resources, 2) Uses
the requested work group dimension and numbers to divide the work equally
among the hardware resources, 3) For each OpenCL dimension, it assigns a range
of IDs to each available warp in a global structure, 4) It uses the intrinsic
layer to spawn the warps and activate threads, and finally 5) Each warp will
loop through the assigned IDs, executing the kernel every time with a new
OpenCL global_id. Figure 4 shows an example. In the original OpenCL code, the
kernel is called once with the global/local sizes as arguments. POCL wraps the
kernel with three loops and calls it sequentially, with logic that converts
the x, y, z indices to global IDs. In the Vortex version, warps and threads
are spawned, and each thread is assigned a different work-group on which to
execute the kernel. POCL provides the mapping to the correct work-group ID
(wid), a feature that was already part of the baseline POCL implementation to
support various hardware such as vector architectures; a behavioral sketch of
this ID distribution is given after Figure 4.
Figure 4: SIMT Kernel invocation modification for Vortex in POCL
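To make the ID distribution concrete, the following Python sketch models steps
2) through 5) above. It is a behavioral illustration only, not the actual
runtime code (which is written against the intrinsic library in C/C++), and
names such as num_warps and kernel are illustrative assumptions rather than
the real API:

    def pocl_spawn(kernel, global_size, num_warps):
        # Divide the flattened work-group ID space equally among the warps.
        total = global_size[0] * global_size[1] * global_size[2]
        per_warp = (total + num_warps - 1) // num_warps
        ranges = [(w * per_warp, min((w + 1) * per_warp, total))
                  for w in range(num_warps)]
        # Each spawned warp loops over its assigned IDs; the hardware runs
        # these loops in parallel, one per warp.
        for warp_id, (lo, hi) in enumerate(ranges):
            for flat_id in range(lo, hi):
                x = flat_id % global_size[0]
                y = (flat_id // global_size[0]) % global_size[1]
                z = flat_id // (global_size[0] * global_size[1])
                kernel(x, y, z)  # one kernel call per OpenCL global_id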
### III-B POCL Runtime
We modified the POCL runtime, adding a new device target to its common device
interface to support Vortex. The new device target is essentially a variant of
the POCL basic CPU target with the support for pthreads and other OS
dependencies removed to target the NewLib interface. We also modified the
single-threaded logic for executing work-items to use Vortex’s pocl_spawn
runtime API.
### III-C Barrier Support
Synchronization within a work-group in OpenCL is supported by barriers. POCL’s
back-end compiler splits the control-flow graph (CFG) of the kernel around the
barrier, dividing the kernel into two sections that all local work-items
execute sequentially.
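The effect of this split can be illustrated with a short Python sketch: the
two regions produced by the compiler are executed as back-to-back loops over
the work-items of a work-group, so every work-item reaches the barrier before
any work-item proceeds past it. This is a behavioral illustration under
assumed names (region_a, region_b), not POCL’s actual generated code:

    def run_work_group(region_a, region_b, local_ids):
        for lid in local_ids:  # all work-items run the code before the barrier...
            region_a(lid)
        for lid in local_ids:  # ...before any work-item runs the code after it
            region_b(lid)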
## IV Vortex Parallel Hardware Architecture
Figure 5: Vortex microarchitecture.
Figure 6: The warp scheduler under different scenarios. In the actual
microarchitecture implementation, the instruction is only known in the next
cycle; however, it is displayed in the same cycle in this figure for
simplicity.
### IV-A SIMT Hardware Primitives
The SIMT (Single Instruction, Multiple Threads) execution model takes
advantage of the fact that in most parallel applications, the same code is
repeatedly executed with different data. Thus, it provides the concept of
warps [19], a warp being a group of threads that share the same PC and follow
the same execution path with minimal divergence. Each thread in a warp has a
private set of general-purpose registers, and the width of the ALU is matched
to the number of threads. The fetching, decoding, and issuing of instructions
is shared within the same warp, which reduces execution cycles. However, in
some cases the threads in the same warp will not agree on the direction of
branches. In such cases, the hardware must provide a thread mask to predicate
instructions for each thread, and an IPDOM stack to ensure all threads execute
correctly; both are explained in Section IV-C.
### IV-B Warp Scheduler
The warp scheduler resides in the fetch stage and decides which warp to fetch
from the I-cache, as shown in Figure 5. It has two components: 1) a set of
warp masks to
choose the warp to schedule next, and 2) a warp table that includes private
information for each warp.
The scheduler uses four warp masks: 1) an active warps mask, with one bit
indicating whether each warp is active, 2) a stalled warps mask, which
indicates which warps should temporarily not be scheduled (e.g., while waiting
for a memory request), 3) a barrier warps mask, which indicates warps that
have been stalled because of a barrier instruction, and 4) a visible warps
mask to support the hierarchical scheduling policy [18].
Each cycle, the scheduler selects one warp from the visible warp mask and
invalidates that warp. When the visible warp mask is zero, it is refilled by
checking which warps are currently active and not stalled.
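The mask handling described above can be summarized with the following Python
behavioral model of the scheduler; it is an illustrative sketch of the RTL’s
logic, and the field names are assumptions:

    class WarpScheduler:
        def __init__(self, num_warps):
            self.active  = [False] * num_warps  # warp is currently active
            self.stalled = [False] * num_warps  # e.g., waiting on memory
            self.visible = [False] * num_warps  # eligible in the current round

        def schedule(self):
            # Refill the visible mask from active, non-stalled warps when empty.
            if not any(self.visible):
                self.visible = [a and not s
                                for a, s in zip(self.active, self.stalled)]
            for w, v in enumerate(self.visible):
                if v:
                    self.visible[w] = False  # invalidate the selected warp
                    return w                 # warp whose PC is fetched next
            return None                      # nothing schedulable this cycle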
An example of the warp scheduler is shown in Figure 6(a). This figure shows
the normal execution; The first cycle warp zero executes an instruction, then
in the second cycle warp zero is invalidated in the visible warp mask, and
warp one is scheduled. In the third cycle, because there are no more warps to
be scheduled, the scheduler uses the active warps to refill the visible mask
and schedules warp zero for execution.
Figure 6(b) shows how the warp scheduler handles a stalled warp. In the first
cycle the scheduler schedules warp zero. In the second cycle, the scheduler
schedules warp one. At the same cycle, because the decode stage identified
that warp zero’s instruction requires a change of state, it stalls warp zero.
In the third cycle, because warp zero is stalled, the scheduler sets only warp
one in the visible warp mask and schedules warp one again. When warp zero
updates its thread mask, its bit in the stalled mask is cleared to allow
scheduling.
Figure 6(c) shows an example of spawning warps. Warp zero executes a wspawn
instruction (Table I), which activates warps two and three by setting their
bits in the active warps mask. When it is time to refill the visible mask,
because there are no other warps left to schedule, warps two and three are
included. Warps stay in the active mask until they set their thread mask’s
value to zero, or until warp zero uses wspawn to deactivate them.
### IV-C Thread Masks and IPDOM Stack
To support the thread concepts provided in Section IV-A, a thread mask
register and an IPDOM stack have been added to the hardware similar to other
SIMT architectures [8]. The thread mask register acts like a predicate for
each thread, controlling which threads are active. If the bit in the thread
mask for a specific thread is zero, no modifications would be made to that
thread’s register file and no changes to the cache would be made based on that
thread.
The IPDOM stack, illustrated in Figure 5, is used to handle control divergence
and is controlled by the split and join instructions. These instructions
utilize the IPDOM stack to enable divergence as shown in Figure 3.
When a split instruction is executed by a warp, the predicate value for each
thread is evaluated. If there is only one thread active, or all threads agree
on a direction, the split acts like a nop instruction and does not change the
state of the warp. When more than one thread is active and the threads
disagree on the value of the predicate, three microarchitectural events occur:
1) The
current thread mask is pushed into the IPDOM stack as a fall-through entry, 2)
The active threads that evaluate the predicate as false are pushed into the
stack with PC+4 (i.e., size of instruction) of the split instruction, and 3)
The current thread mask is updated to reflect the active threads that evaluate
the predicate to be true.
When a join instruction is executed, an entry is popped out of the stack which
causes one of two scenarios: 1) If the entry is not a fall-through entry, the
PC is set to the entry’s PC and the thread mask is updated to the value of the
entry’s mask, which enables the threads evaluating the predicate as false to
follow their own execution path, and 2) If the entry is a fall-through entry,
the PC continues executing to PC+4 and the thread mask is updated to the
entry’s mask, which is the case when both paths of the control divergence have
been executed.
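The split/join behavior described above can be modeled with the following
Python sketch of the IPDOM stack (a behavioral illustration, not the Verilog
implementation; masks are per-thread boolean lists):

    def split(stack, pc, tmask, pred):
        taken     = [t and p for t, p in zip(tmask, pred)]
        not_taken = [t and (not p) for t, p in zip(tmask, pred)]
        if not any(taken) or not any(not_taken):
            return pc + 4, tmask            # threads agree: split acts as a nop
        stack.append((None,   tmask))       # 1) fall-through entry (current mask)
        stack.append((pc + 4, not_taken))   # 2) false-path threads resume at PC+4
        return pc + 4, taken                # 3) continue with true-path threads

    def join(stack, pc, tmask):
        entry_pc, entry_mask = stack.pop()
        if entry_pc is None:                # fall-through: both paths completed
            return pc + 4, entry_mask
        return entry_pc, entry_mask         # redirect to the false path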
### IV-D Warp Barriers
Warp barriers are important in SIMT execution, as they provide synchronization
between warps. Barriers are provided in the hardware to support global
synchronization between work-groups. Each barrier has a private entry in the
barrier table, shown in Figure 5, with the following information: 1) whether
that barrier is currently valid, 2) the number of warps that still need to
execute a barrier instruction with that entry’s ID before the barrier is
released, and 3) a mask of the warps that are currently stalled by that
barrier. Figure 5 only shows the per-core barriers; in multi-core
configurations there is also another table that allows global barriers across
all the cores. The MSB of the barrier ID indicates whether the instruction
uses local or global barriers.
When a barrier instruction is executed, the microarchitecture checks how many
warps have executed a barrier with the same barrier ID. Until the requested
number of warps is reached, each arriving warp is stalled and the release mask
is updated to include it. Once the requested number of warps have executed the
barrier, the release mask is used to release all the warps stalled by the
corresponding barrier ID. The same method works for both local and global
barriers; however, global barrier tables have a release mask per core.
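A behavioral sketch of one barrier table follows (an illustrative Python model
of the logic above; the real design is a hardware table, and the names here
are assumptions):

    class BarrierTable:
        def __init__(self):
            self.count = {}  # barrier ID -> warps still expected
            self.mask  = {}  # barrier ID -> warps currently stalled on it

        def bar(self, bar_id, num_warps, warp_id):
            # Returns the set of warps to release (empty while still waiting).
            if bar_id not in self.count:
                self.count[bar_id] = num_warps
                self.mask[bar_id] = set()
            self.count[bar_id] -= 1
            if self.count[bar_id] > 0:
                self.mask[bar_id].add(warp_id)  # stall this warp
                return set()
            released = self.mask.pop(bar_id)    # last arrival releases the rest
            del self.count[bar_id]
            return released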
TABLE I: Proposed SIMT ISA extension.

Instruction | Description
---|---
wspawn %numW, %PC | Spawn W new warps at PC
tmc %numT | Change the thread mask to activate threads
split %pred | Control flow divergence
join | Control flow reconvergence
bar %barID, %numW | Hardware warp barrier
(a) GDS layout
(b) Power density distribution across Vortex
Figure 7: GDS layouts for Vortex with an 8-warp, 4-thread configuration (4KB
register file). The design was synthesized for 300MHz and produced a total
power output of 46.8mW. Configuration: 4KB 2-way 4-bank data cache, 8KB 4-bank
shared memory, and 1KB 2-way single-bank instruction cache.
## V Evaluation
This section will evaluate both the RTL Verilog model for Vortex and the
software stack.
### V-A Micro-architecture Design space explorations
In the Vortex design, we can increase the data-level parallelism either by
increasing the number of threads or by increasing the number of warps.
Increasing the number of threads is similar to increasing the SIMD width and
involves the following changes to the hardware: 1) Increasing the GPR memory
width for reads and writes, 2) Increasing the number of ALUs to match the
number of threads, 3) increasing the register width for every pipeline stage
after the GPR read stage, 4) increasing the arbitration logic required in both
the cache and the shared memory to detect bank conflicts and handle cache
misses, and 5) increasing the number of IPDOM stack entries.
In contrast, increasing the number of warps does not require increasing the
number of ALUs, because the ALUs are time-multiplexed among a higher number of
warps.
Increasing the number of warps involves the following changes to the hardware:
1) increasing the logic for the warp scheduler, 2) increasing the number of
GPR tables, 3) increasing the number of IPDOM stacks, 4) increasing the number
of register scoreboards, and 5) increasing the size of the warp table. It is
important to note that the cost of increasing the number of warps depends on
the number of threads per warp; thus, adding warps to configurations with more
threads becomes more expensive. This is because the size of each GPR table,
IPDOM stack, and warp table depends on the number of threads.
Figure 8 shows the increase in area and power as we increase the number of
threads and warps. The numbers are normalized to the 1-warp, 1-thread
configuration. All the data includes a 1KB 2-way instruction cache, a 4KB
2-way 4-bank data cache, and an 8KB 4-bank shared memory module.
Figure 8: Synthesized results for power, area and cell counts for different
number of warps and threads
### V-B Benchmarks
All the benchmarks used for evaluation were taken from Rodinia [5], a popular
GPGPU benchmark suite. (The Rodinia benchmarks that are not evaluated in this
paper are omitted due to the lack of support from the LLVM RISC-V back-end.)
### V-C simX Simulator
Because the benchmarks in the Rodinia benchmark suite have large data sets
that took ModelSim a long time to simulate, we used simX, a C++ cycle-level
in-house simulator for Vortex with cycle accuracy within 6% of the actual
Verilog model. Note that the power and area numbers are obtained from
synthesis of the RTL.
### V-D Performance Evaluations
Figure 9: Performance of a subset of the Rodinia benchmark suite (# of warps x
# of threads).
Figure 9 shows the execution time of the benchmarks normalized to the 2 warps
x 2 threads configuration. As expected, increasing the number of threads
(i.e., the SIMD width) usually improves performance, whereas increasing the
number of warps helps less. Some benchmarks, such as bfs, benefit from more
warps, but in most cases additional warps do not translate into a performance
benefit. The main reason is that, to reduce simulation time, we warmed up the
caches and reduced the data set size, so the cache hit rate in the evaluated
benchmarks was high. Increasing the number of warps is typically useful for
hiding long-latency operations such as cache misses by increasing TLP and MLP;
thus, the benchmark that benefited the most from a high warp count is bfs, an
irregular benchmark.
As we increase the number of threads and warps, the power consumption
increases, but performance does not necessarily improve. Hence, the most
power-efficient design point varies depending on the benchmark. Figure 10
shows a power efficiency metric (similar to performance per watt) normalized
to the 2 warps x 2 threads configuration. The results show that for many
benchmarks, the most power-efficient design is the one with the fewest warps
and 32 threads, except for the BFS benchmark. As discussed earlier, since BFS
gets the best performance from the 32 warps x 32 threads configuration, that
is also its most power-efficient design point.
Figure 10: Power efficiency and energy for different configurations (# of
warps x # of threads).
### V-E Placement and layout
We synthesized our RTL using a 15-nm educational library. Using Innovus, we
also performed Place and route (PnR). Figure 7 shows the GDS layout and the
power density map of our Vortex processor. From the power density map, we
observe that the power is well distributed among the cell area. In addition,
we observe that the memories, including the GPR, the data cache, the
instruction cache, and the shared memory, have a higher power consumption.
## VI Related Work
ARA [3] is a RISC-V Vector Processor that implements a variable-length single-
instruction multiple-data execution model where vector instructions are
streamed into vector lanes and their execution is time-multiplexed over the
shared execution units to increase energy efficiency. The Ara design is based on
the open-source RISC-V Vector ISA Extension proposal [1] taking advantage of
its vector-length agnostic ISA and its relaxed architectural vector registers.
Maximizing the utilization of the vector processors can be challenging,
specifically when dealing with data dependent control flow. That is where SIMT
architectures like Vortex present an advantage with their flexible scalar-
threads that can diverge independently.
HWACHA [15] is a RISC-V scalar processor with a vector accelerator that
implements a SIMT-like architecture where vector arithmetic instructions are
expanded into micro-ops and scheduled on separate processing threads on the
accelerator. An advantage that Hwacha has over pure SIMT processors like
Vortex is its ability to overlap the execution of scalar instructions on the
scalar processor, though this comes at the cost of increased hardware
complexity for hazard management.
The Simty [6] processor implements a specialized RISC-V architecture that
supports SIMT execution similar to Vortex, but with different control-flow
divergence handling. In their work, only the microarchitecture was implemented
as a proof of concept; there was no software stack, and no GPGPU applications
were executed on the architecture.
## VII Conclusions
In this paper we proposed Vortex, which supports an extended version of RISC-V
for GPGPU applications. We also modified the OpenCL software stack (POCL) to
run various OpenCL kernels and demonstrated that capability. We plan to
release the Vortex RTL and the POCL modifications to the public. (Currently
the GitHub repository is private for blind review.) We believe that an
open-source RISC-V GPGPU will enrich the RISC-V ecosystem and accelerate other
researchers who study GPGPUs across a wider range of topics, since the entire
software stack is also based on open-source implementations.
## References
* [1] K. Asanovic, _RISC-V Vector Extension_. [Online]. Available: https://github.com/riscv/riscv-v-spec/blob/master/v-spec.adoc
* [2] K. Asanović _et al._ , “Instruction sets should be free: The case for risc-v,” _EECS Department, University of California, Berkeley, Tech. Rep. UCB/EECS-2014-146_ , 2014.
* [3] M. A. Cavalcante _et al._ , “Ara: A 1 GHz+ scalable and energy-efficient RISC-V vector processor with multi-precision floating point support in 22 nm FD-SOI,” _CoRR_ , vol. abs/1906.00478, 2019.
* [4] C. Celio _et al._ , “Boomv2: an open-source out-of-order risc-v core,” in _First Workshop on Computer Architecture Research with RISC-V (CARRV)_ , 2017\.
* [5] S. Che _et al._ , “Rodinia: A benchmark suite for heterogeneous computing,” ser. IISWC ’09. IEEE Computer Society, 2009.
* [6] S. Collange, “Simty: generalized simt execution on risc-v,” in _First Workshop on Computer Architecture Research with RISC-V (CARRV 2017)_ , 2017, p. 6.
* [7] J. J. Corinna Vinschen, “Newlib,” http://sourceware. org/newlib, 2001.
* [8] W. W. L. Fung _et al._ , “Dynamic warp formation and scheduling for efficient gpu control flow,” ser. MICRO 40. IEEE Computer Society, 2007, pp. 407–420.
* [9] M. Gautschi _et al._ , “Near-threshold risc-v core with dsp extensions for scalable iot endpoint devices,” _IEEE Transactions on Very Large Scale Integration (VLSI) Systems_ , vol. 25, no. 10, pp. 2700–2713, 2017.
* [10] G. Gobieski _et al._ , “Manic: A vector-dataflow architecture for ultra-low-power embedded systems,” ser. MICRO ’52. ACM, 2019, pp. 670–684.
* [11] J. Gray, “Grvi phalanx: A massively parallel risc-v fpga accelerator accelerator,” in _2016 IEEE 24th Annual International Symposium on Field-Programmable Custom Computing Machines (FCCM)_. IEEE, 2016, pp. 17–20.
* [12] Green500, “Green500 list - june 2019,” 2019. [Online]. Available: https://www.top500.org/lists/2019/06/
* [13] P. Jaaskelainen _et al._ , “Pocl: Portable computing language,” _International Journal of Parallel Programming_ , pp. 752–785, 2015.
* [14] P. O. Jäskeläinen _et al._ , “Opencl-based design methodology for application-specific processors,” in _2010 International Conference on Embedded Computer Systems: Architectures, Modeling and Simulation_ , July 2010, pp. 223–230.
* [15] Y. Lee _et al._ , “A 45nm 1.3ghz 16.7 double-precision gflops/w risc-v processor with vector accelerators,” in _ESSCIRC 2014 - 40th European Solid State Circuits Conference (ESSCIRC)_ , Sep. 2014, pp. 199–202.
* [16] Y. Lee _et al._ , “A 45nm 1.3 ghz 16.7 double-precision gflops/w risc-v processor with vector accelerators,” in _ESSCIRC 2014-40th European Solid State Circuits Conference (ESSCIRC)_. IEEE, 2014, pp. 199–202.
* [17] A. Munshi, “The opencl specification,” in _2009 IEEE Hot Chips 21 Symposium (HCS)_ , Aug 2009, pp. 1–314.
* [18] V. Narasiman _et al._ , “Improving gpu performance via large warps and two-level warp scheduling,” ser. MICRO-44. New York, NY, USA: ACM, 2011, pp. 308–317.
* [19] NVIDIA, “Cuda binary utilities,” NVIDIA Application Note, 2014.
* [20] A. Waterman _et al._ , “The risc-v instruction set manual. volume 1: User-level isa, version 2.0,” EECS Department, UC Berkeley, Tech. Rep., 2014\.
* [21] A. Waterman _et al._ , “The risc-v instruction set manual, volume i: Base user-level isa,” _EECS Department, UC Berkeley, Tech. Rep. UCB/EECS-2011-62_ , vol. 116, 2011.
* [22] B. Zimmer _et al._ , “A risc-v vector processor with tightly-integrated switched-capacitor dc-dc converters in 28nm fdsoi,” in _2015 Symposium on VLSI Circuits (VLSI Circuits)_. IEEE, 2015, pp. C316–C317.
|
2024-09-04T02:54:55.100310 | 2020-02-26T16:19:54 | 2002.12164 | {
"authors": "Varun Mannam, Arman Kazemi",
"full_text_license": null,
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"provenance": "arxiv-papers-0000.json.gz:25917",
"submitter": "Varun Mannam",
"url": "https://arxiv.org/abs/2002.12164"
} | arxiv-papers | # Performance Analysis of Semi-supervised Learning in the Small-data Regime
using VAEs
Varun Mannam
Department of Electrical Engineering
University of Notre Dame
Notre Dame, IN 46556
<EMAIL_ADDRESS>
Arman Kazemi
Department of Computer Science and Engineering
University of Notre Dame
Notre Dame, IN 46556
<EMAIL_ADDRESS>
Varun Mannam is with the Department of Electrical Engineering, University of
Notre Dame, Notre Dame, IN 46556 USA (e-mail: <EMAIL_ADDRESS>).
###### Abstract
Extracting large amounts of data from biological samples is not feasible due
to radiation issues, and image processing in the small-data regime is one of
the critical challenges when working with a limited amount of data. In this
work, we apply an existing algorithm, the Variational Auto-Encoder (VAE),
which pre-trains a latent space representation of the data that captures its
features in a lower dimension for the small-data-regime input. The latent
space representation is fine-tuned, and its weights are then fixed. The latent
space is used as a segment of the neural network for classification. Here we
present a performance analysis of the VAE algorithm with various latent space
sizes for semi-supervised learning using the CIFAR-10 dataset.
Keywords: VAE, CIFAR-10, Small Data
## 1 Introduction
Artificial neural networks (ANNs), specifically convolutional neural networks
(CNNs), have become popular in recent years due to their success in image
classification, feature extraction, and object recognition and detection [1].
CNNs leverage the huge amount of labeled data available to train networks that
outperform humans in image recognition tasks. However, in the small-data
regime, the accuracy of networks trained using a limited number of labeled
samples is low [2]. This is a typical case when working with biological
samples, where exposure to radiation (in order to capture an image) is
detrimental to the well-being of the sample. More images can be derived from
the initial data by augmentation methods, but this is unhelpful due to the
lack of labeled images.
To address this problem, there exists a framework called “Auto Encoder” (AE
[3]) that uses all the input data, labeled and unlabeled, to train a low-
dimensional embedding. AE is a neural network that takes unlabeled images as
the input and regards the input itself as the label. As illustrated in Figure
1, an AE comprises two parts: the encoder and the decoder. The encoder tries
to embed the input into a latent space that captures the features of the
original image, and the decoder tries to restore the original image from that
embedding. The process called “pre-training” trains the weights for both encoder
and decoder parts. Once trained, the encoder part of the AE will be a
representation of all the labeled and unlabeled data. This increases the
amount of usable information from all of the images.
Traditional semi-supervised models consist of pre-training using a Restricted
Boltzmann Machine (RBM) [4] or a Gaussian-Restricted Boltzmann Machine
(G-RBM) [4]. An RBM is an energy-based model represented by an undirected
graph containing a layer of observable variables and a single layer of latent
variables (similar to hidden units in a multi-layer perceptron) [1, 5]. This
energy-based model was first introduced in the 1980s [6] and has been applied
to diverse datasets including image [4] and medical data [7]. Hinton and
Salakhutdinov [8] showed that RBMs can be stacked and trained in a greedy
manner. Deep belief networks (DBNs), the deep learning models that use RBMs as
the learning module, have been applied to various unsupervised and supervised
learning problems. Later, Bengio et al. [9] showed that pre-trained undirected
graphical models in the semi-supervised setting perform well with deep
architectures. However, a challenge of working with RBMs is that they are
constructed using sigmoid functions as the activation between the input and
the hidden layer, and a major drawback of the sigmoid activation function is
the vanishing gradient problem. Hence, in this work, we pre-train the model
using a “Variational Auto Encoder” (VAE [3]).
After pre-training, the encoder is able to observe images similar to the
training images and extract their valuable features in a lower dimension than
the initial image. It is then possible to couple the encoder with a small
neural network and train that network for classification tasks. This is
similar to transfer learning, where a model is trained on one dataset and the
feature-extraction part of that model is reused to train another model for a
different dataset. It is important to mention that the encoder weights are
fixed and cannot be changed; only the small neural network is trained. The
input to this network is the small set of labeled data. This is called
“fine-tuning”.
In this project we implement a VAE that tries to capture not only the
compressed representation of the images, but also the parameters of a
probability distribution representing the data. We examine the effect of
different latent space sizes on the accuracy of the model. Later, we analyze
the performance of the semi-supervised model with the optimal latent space
size.
## 2 Background
In this section we describe the basic details of the AE and the VAE.
### 2.1 Auto Encoder
An autoencoder is a type of ANN used to learn efficient data encoding in an
unsupervised manner. The aim of an autoencoder is to learn a representation of
a set of data, typically for dimensionality reduction, by training the network
to ignore signal noise. Along with the reduction side, a reconstructing side
is learned, where the autoencoder tries to generate from the reduced encoding
a representation as close as possible to its original input. An autoencoder
always consists of two parts, the encoder and the decoder, which can be
defined as transitions $\phi$ and $\psi$ such that [3]:
$\phi:X\rightarrow F$ (1) $\psi:F\rightarrow Y$ (2)
$\phi,\psi=\operatorname{argmin}_{\phi,\psi}\|X-(\psi\circ\phi)(X)\|^{2}$ (3)
where the given input is $X$ and the predicted output is $Y$. If the feature
space $F$ has lower dimensionality than the input space $X$, then the feature
vector $\phi(x)$ can be regarded as a compressed representation of the input
$X$.
Figure 1: Autoencoder block diagram
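For concreteness, a minimal PyTorch autoencoder matching Eqs. (1)-(3) is
sketched below; the layer sizes and the flattened 3x32x32 input are
illustrative assumptions, not the architecture evaluated in this work:

    import torch
    import torch.nn as nn

    class AutoEncoder(nn.Module):
        def __init__(self, d_in=3 * 32 * 32, d_latent=64):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(d_in, 256), nn.ReLU(),
                                         nn.Linear(256, d_latent))  # phi: X -> F
            self.decoder = nn.Sequential(nn.Linear(d_latent, 256), nn.ReLU(),
                                         nn.Linear(256, d_in))      # psi: F -> Y

        def forward(self, x):  # x is a batch of flattened images
            return self.decoder(self.encoder(x))

Training then minimizes the reconstruction error of Eq. (3), e.g., nn.MSELoss()
between the network output and the input.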
### 2.2 VAE
Extending the idea used in the AE, there is a variant called the VAE, which
uses the “KL-divergence” between the predicted probability distribution and
the actual posterior distribution in the latent space, whereas traditional AEs
only try to find accurate mapping functions in the encoder (between the input
and the latent space) and in the decoder (between the latent space and the
output). Using a VAE, we can generate a large dataset by adding noise in the
latent space, which is similar to input data augmentation (adding noise to
images to increase the number of examples in the input dataset).
VAE uses a variational approach for latent representation learning, which
results in an additional loss component. It assumes that the data are
generated by a directed graphical model $p(\textbf{X}|\textbf{Z})$ and that
the encoder learns an approximation $q_{\phi}(\textbf{Z}|\textbf{X})$ of the
posterior distribution $p_{\theta}(\textbf{Z}|\textbf{X})$, where $\phi$ and
$\theta$ denote the parameters of the encoder (recognition model) and decoder
(generative model), respectively. We can write the conditional or posterior
distribution
$p(\bm{z}|\bm{x})=\frac{p(\bm{z},\bm{x})}{p(\bm{x})}=\frac{p(\bm{x}|\bm{z})p(\bm{z})}{p(\bm{x})}$
(4)
The denominator of the above equation is the marginal distribution of the
observations and is calculated by marginalizing out the latent variables from
the joint distribution, i.e.
$p(x)=\int_{z}p(z,x)dz$ (5)
In many cases of interest this integral is not available in closed form or is
intractable (it requires exponential time to compute). Hence, we use a
variational approximation: consider a tractable distribution $q(\bm{z})$. The
goal is to find the best approximation, that is, the one that solves the
following optimization problem:
$Minimize:D_{KL}[q_{\phi}(\bm{z}|\bm{x})||p_{\theta}(\bm{z}|\bm{x})]$ (6)
Therefore, the objective of the variational autoencoder in this case has the
following form:
$\mathcal{L}(\phi,\theta,\mathbf{x})=D_{\mathrm{KL}}\left(q_{\phi}(\mathbf{z}|\mathbf{x})\|p_{\theta}(\mathbf{z})\right)-\mathbb{E}_{q_{\phi}(\mathbf{z}|\mathbf{x})}\left(\log
p_{\theta}(\mathbf{x}|\mathbf{z})\right)$ (7)
where $D_{KL}$ stands for the KL-divergence. In the VAE, the principle is to
minimize the reconstruction loss between the input and the restored image,
together with the loss generated by the latent-space representation of the
features in the input images.
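For a Gaussian encoder $q_{\phi}(\bm{z}|\bm{x})=\mathcal{N}(\mu,\sigma^{2})$
with a standard-normal prior $p_{\theta}(\bm{z})=\mathcal{N}(0,I)$, the KL
term of Eq. (7) has a closed form. A minimal sketch of the resulting loss in
PyTorch, under these standard assumptions, is:

```python
import torch
import torch.nn.functional as F

def vae_loss(x, x_recon, mu, logvar):
    """Eq. (7): reconstruction term plus KL(q_phi(z|x) || p(z)), p(z) = N(0, I)."""
    recon = F.mse_loss(x_recon, x, reduction='sum')  # stands in for -E_q[log p(x|z)]
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())  # closed-form Gaussian KL
    return recon + kl
```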
## 3 Methodology
Figure 2: Network architecture: pre-training with VAE (first row). In the
second row, the parameter $\theta$ is underlined and bold to indicate that
these parameters are frozen when fine-tuning the network.
Consider a surrogate model $y=f(x,\theta)$ that is trained using limited
simulation data
$\mathcal{D}=\{x^{i},y^{i}\}_{i=1}^{N},\{x^{j}\}_{j=N+1}^{D}$, where the
input data $x^{i}\in\mathbb{R}^{d_{x}\times H\times W}$ come from the CIFAR-10
dataset. Here $H$ and $W$ are the height and width, respectively, and $d_{x}$
is the number of dimensions of the input $x$ at one location. $x^{j}$ denotes
the additional data utilized for pre-training the model,
$y^{i}\in\mathbb{R}^{1}$ is the classification result, $\theta$ is the model
parameter, $N$ is the total number of training data utilized during fine-
tuning, and $D$ is the total number of data utilized for pre-training. In the
semi-supervised model, we pre-train the model with the input data
$\mathbb{R}^{d_{x}\times H\times W}$ and then perform the image classification
task $\mathbb{R}^{d_{x}\times H\times W}\rightarrow\mathbb{R}^{1}$. For both
pre-training and fine-tuning, we use stochastic gradient descent with the Adam
optimizer to update the network weights and biases. The simulations were
performed using the PyTorch machine learning package in Python.
### 3.1 VAE pre-training
We implemented a DenseNet [10, 11] version of the VAE for the pre-training
part. The DenseNet contains the encoder and decoder blocks along with dense
layers that carry both the simple and the complex features.
### 3.2 VAE fine-tuning
We implemented a simple fully connected layer to classify the input images of
the CIFAR-10 [12] dataset. This choice reflects the expectation that the
latent space is small (either 4 $\times$ 4 or 8 $\times$ 8, for an image size
of 32 $\times$ 32). If the number of channels at the latent space is large, we
add more fully connected layers for the classification of the images.
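A sketch of this fine-tuning setup is given below: the pre-trained encoder is
frozen and only the small classification head is optimized. The names
(`encoder`), the latent size (10000, i.e. 100 channels of size 10 $\times$
10), and the learning rate are illustrative placeholders.

```python
import torch
import torch.nn as nn

# encoder: the pre-trained VAE encoder (assumed defined elsewhere).
# Freeze its weights and biases so they are not updated during fine-tuning.
for p in encoder.parameters():
    p.requires_grad = False

# Small fully connected head mapping the flattened latent code to 10 classes.
classifier = nn.Sequential(nn.Flatten(), nn.Linear(100 * 10 * 10, 10))

# Only the classifier's parameters are given to the optimizer:
optimizer = torch.optim.Adam(classifier.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

def fine_tune_step(x, y):
    optimizer.zero_grad()
    loss = loss_fn(classifier(encoder(x)), y)  # frozen encoder gets no gradients
    loss.backward()
    optimizer.step()
```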
## 4 Data
We perform the simulations on the CIFAR-10 dataset, which has ten image
classes with three input channels $(C=3)$ and images of size 32 $\times$ 32
(W $\times$ H). The CIFAR-10 dataset has 50000 training images and 10000 test
images.
## 5 Results
In this section we enumerate the results obtained using the CIFAR-10 data. We
consider the following latent dimensions: 6400, 10000, and 14400. In order to
evaluate the performance of the model for the three latent spaces, we consider
the distributions estimated for the values at various pixel locations.
### 5.1 Pre-training
For the results presented in this section, we have a dataset with 50,000
examples $\{x^{k}\}_{k=1}^{50000}$ for pre-training, and the test set consists
of 10,000 examples $\{x^{k}\}_{k=1}^{10000}$. The Adam optimizer was used to
train for 100 epochs, with a learning rate of 1e-4 and a plateau scheduler on
the test RMSE. The batch size is always smaller than the number of training
data; in this work, a batch size of 16 was used for pre-training. Weight decay
was set to 1e-3 for pre-training.
We use the loss function of Eq. (7) to evaluate the trained model on the test
data and also to monitor convergence.
Figure 3: Error v/s epoch for 6400 latent space (top), Error v/s epoch for
10000 latent space (middle) and Error v/s epoch for 14400 latent space
(bottom)
From Figure 3, we observe that the solution converges after 50 epochs and,
most importantly, that the loss for the three latent spaces is similar.
Figure 4: Distribution estimate for the values at various location of the
square domain for 6400, 10000 and 14400 latent space
From Figure 4, we observe that even when the latent size is small (Batch
$\times$ 100 (channels) $\times$ 8 $\times$ 8 and Batch $\times$ 100
(channels) $\times$ 10 $\times$ 10), the reconstructed density estimate is
close to the actual input data. The PDF with latent size 10000 is closer to
the actual input than those of the 6400 and 14400 latent spaces. Since all the
latent spaces yield similar outputs, we fine-tune and compare the
classification accuracies in the next section.
### 5.2 Fine-tuning
In this section we freeze the parameters (weights and biases) used in the pre-
training stage and fine-tune the parameters (weights and biases) of the
classification network. For this problem, we consider a small subset of the
given CIFAR-10 dataset and use fully connected layers to perform
classification. Since the cross-entropy loss function is commonly used for
classification problems, we implemented cross entropy to measure the
performance of the classification model.
Figure 5: Fine tuning results with three different latent space models
From Figure 5, we observe that the latent space of 10000 yields better
accuracy than the other two latent spaces (6400 and 14400). The smaller the
latent space, the lower the test accuracy; this is due to insufficient
features to classify the data. For the large latent space, the test accuracy
is also low, which is due to model complexity [1].
## 6 Conclusions and Future work
The present document outlines the development of a surrogate model for a semi-
supervised problem. In this work, we have implemented a VAE as the pre-
training model and a feed-forward deep learning model for the classification.
The results obtained for differently sized latent spaces were presented. It
was observed that there is a slight improvement in the test accuracy when the
latent space is 10000 in comparison with latent spaces of 6400 and 14400.
For future work, a Bayesian approach can be explored. With a limited amount of
data, it is important to quantify the epistemic uncertainty induced by that
data when building a surrogate [13], [14], and a Bayesian probabilistic
approach is a natural way of addressing this challenge.
## References
* [1] Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep learning. MIT press, 2016.
* [2] Nitesh V Chawla et al. Learning from labeled and unlabeled data: An empirical study across techniques and domains. Journal of Artificial Intelligence Research, 23:331–366, 2005.
* [3] Wikipedia contributors. Autoencoder — Wikipedia, The Free Encyclopedia, 2019.
* [4] Geoffrey E Hinton and Ruslan R Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 313(5786):504–507, 2006.
* [5] Kevin P Murphy. Machine learning: a probabilistic perspective. MIT press, 2012.
* [6] Paul Smolensky. Information processing in dynamical systems: Foundations of harmony theory. Technical report, Colorado Univ at Boulder Dept of Computer Science, 1986.
* [7] Tu Dinh Nguyen, Truyen Tran, Dinh Phung, and Svetha Venkatesh. Latent patient profile modelling and applications with mixed-variate restricted boltzmann machine. In Pacific-Asia conference on knowledge discovery and data mining, pages 123–135. Springer, 2013.
* [8] G Hinton and R Salakhutdinov. An efficient learning procedure for deep boltzmann machines. Neural Computation, 24(8):1967–2006, 2012.
* [9] Yoshua Bengio, Pascal Lamblin, Dan Popovici, and Hugo Larochelle. Greedy layer-wise training of deep networks. In Advances in neural information processing systems, pages 153–160, 2007.
* [10] Gao Huang, Zhuang Liu, Laurens Van Der Maaten, and Kilian Q Weinberger. Densely connected convolutional networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4700–4708, 2017.
* [11] Yide Zhang, Yinhao Zhu, Evan Nichols, Qingfei Wang, Siyuan Zhang, Cody Smith, and Scott Howard. A poisson-gaussian denoising dataset with real fluorescence microscopy images. arXiv preprint arXiv:1812.10366, 2018.
* [12] Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. Technical report, Citeseer, 2009.
* [13] Rohit K Tripathy and Ilias Bilionis. Deep uq: Learning deep neural network surrogate models for high dimensional uncertainty quantification. Journal of Computational Physics, 375:565–588, 2018.
* [14] Yinhao Zhu and Nicholas Zabaras. Bayesian deep convolutional encoder–decoder networks for surrogate modeling and uncertainty quantification. Journal of Computational Physics, 366:415–447, 2018.
# Coagulation of inertial particles in supersonic turbulence

Xiang-Yu Li (1,2) and Lars Mattsson (1)

(1) Nordita, KTH Royal Institute of Technology and Stockholm University, 10691 Stockholm, Sweden
(2) Section for Meteorology and Oceanography, Department of Geosciences, University of Oslo, P.O. Box 1022 Blindern, 0315, Oslo, Norway
Coagulation driven by supersonic turbulence is primarily an astrophysical
problem because coagulation processes on Earth are normally associated with
incompressible fluid flows at low Mach numbers, while dust aggregation in the
interstellar medium (ISM) for instance is an example of the opposite regime.
We study coagulation of inertial particles in compressible turbulence using
high-resolution direct and shock-capturing numerical simulations with a wide
range of Mach numbers from nearly incompressible to moderately supersonic. The
particle dynamics is simulated by representative particles and the effects on
the size distribution and coagulation rate due to increasing Mach number is
explored. We show that the time evolution of particle size distribution mainly
depends on the compressibility (Mach number). We find that the average
coagulation kernel $\langle C_{ij}\rangle$ scales linearly with the average
Mach number $\mathcal{M}_{\rm rms}$ multiplied by the combined size of the
colliding particles, that is, $\langle
C_{ij}\rangle\sim\langle(a_{i}+a_{j})^{3}\rangle\,\mathcal{M}_{\rm
rms}\tau_{\eta}^{-1}$, which is qualitatively consistent with expectations
from analytical estimates. A quantitative correction $\langle
C_{ij}\rangle\sim\langle(a_{i}+a_{j})^{3}\rangle(v_{\rm p,rms}/c_{\rm
s})\tau_{\eta}^{-1}$ is proposed and can serve as a benchmark for future
studies. We argue that the coagulation rate $\langle R_{c}\rangle$ is also
enhanced by compressibility-induced compaction of particles.
###### Key Words.:
Coagulation, inertial particles, turbulence, compressibility, shock waves
## 1 Introduction
The kinetics of inertial particles (finite-size particles that are massive
enough to have significant inertia) in turbulence has drawn much attention for
decades, driven by wide applications in astrophysics, atmospheric sciences,
and engineering. The preferential
concentration and fractal clustering of inertial particles in (nearly)
incompressible turbulence has been simulated extensively (see Maxey, 1987;
Squires & Eaton, 1991; Eaton & Fessler, 1994; Bec, 2003, 2005; Bec et al.,
2007b, a; Bhatnagar et al., 2018; Yavuz et al., 2018). In combination with the
theory of coagulation of particles, this has an important application in
planet-formation theory (see e.g. Pan et al., 2011; Birnstiel et al., 2016;
Johansen et al., 2012; Johansen & Lambrechts, 2017). However, proto-planetary
discs are dominated by low Mach-number turbulence, which is not the case in
many other astrophysical environments. One example is the cold phase of the
interstellar medium (ISM), where turbulence is highly compressible with Mach
numbers of order 10 and thus is dominated by shock waves. Only a few studies
of inertial particles in high Mach-number turbulence can be found in the
literature (e.g. Hopkins & Lee, 2016; Mattsson et al., 2019, 2019), and direct
numerical simulations of turbulence-driven coagulation of inertial particles
have not been performed so far. Exploring the effects of compressibility (high
Mach numbers) on coagulation is therefore an important branch of research that
is now becoming possible through the rapid development of computing power.
From an astrophysical perspective, cosmic dust grains, which are made of
mineral or carbonaceous material and are a perfect example of inertial
particles, are ubiquitous throughout the universe. Rapid interstellar dust
growth by accretion of molecules is thought to be necessary to compensate for
various dust destruction processes (see e.g. Mattsson, 2011; Valiante et al.,
2011; Rowlands et al., 2014). Grains may also grow by aggregation or
coagulation (which does not increase the dust mass), however, when the growth
by accretion has produced large enough grains for coagulation to become
efficient, that is, once the ‘coagulation bottleneck’ has been passed
(Mattsson, 2016). How efficient the coagulation is in molecular clouds (MCs)
is not fully understood, although models and simulations have suggested that
turbulence is the key to high growth efficiency (Elmegreen & Scalo, 2004;
Hirashita & Yan, 2009; Hirashita, 2010; Pan et al., 2014a, b; Pan & Padoan,
2015; Hopkins & Lee, 2016; Mattsson et al., 2019, 2019) and observations
indicate the presence of very large grains (which can be tens of $\mu$m
across) in the cores of MCs (Hirashita et al., 2014).
We aim to study the role that the Mach number plays for the coagulation of
inertial particles in compressible turbulence. Specifically, we strive to
study how coagulation of particles depends on Mach number in regions where the
particles cluster and form filaments. The purpose is not to target any
specific astrophysical context, but to explore the Mach-number dependence to
the extent that this is computationally feasible. There are three main
challenges. First, the dynamics of inertial particles in compressible
turbulence is poorly understood. Second, the coagulation process is a non-
equilibrium process, as the particle size distribution evolves with time.
Third, the coagulation timescale and the characteristic timescale of the
turbulence are very different in dilute systems, such as those typically
studied in astrophysics. In classical kinetic theory, the collision kernel
$C_{ij}$ is a function of the relative velocity $\Delta\bm{v}_{ij}$ of two
particles $i$ and $j$, which is difficult to calculate analytically, except in
some special cases. For the same reason, it is difficult to calculate the
coagulation rate using analytical models. More exactly, the classical
Smoluchowski (1916) problem has only three known exact solutions (Aldous,
1999), and numerical solution of the coagulation equation is only feasible if
treated as a local or ‘zero-dimensional’ problem.
The main objective of the present work is to offer a way to quantify and
possibly parametrise the effects of turbulent gas dynamics and hydrodynamic
drag on the coagulation rate in such a way that it can be included for
instance in traditional models of galactic chemical evolution (including
dust), which are based on average physical quantities (Mattsson, 2016). A
major problem when simulating the dust growth in the ISM is that the system is
large scale and dilute. The coagulation rate is extremely low in such a
system, which leads to very different timescales for the turbulent gas
dynamics and coagulation.
## 2 Turbulence and kinetic drag
In this section, equations governing compressible flow and particle dynamics
of inertial particles (e.g. dust grains) are presented. The Pencil Code with
HDF5 IO (Brandenburg et al., 2020; Bourdin, 2020) is used to solve these
equations.
Since the carrier fluid in our study is isothermal, its turbulence described
by Eq. (1) is scale free, that is, the box size $L$, the mean mass density
$\langle\rho\rangle$, and the sound speed $c_{\rm s}$ are the unit length,
unit density, and unit velocity, respectively. These quantities can thus be
scaled freely. However, the inclusion of the coagulation process means that
our simulation is no longer scale free. This requires a careful treatment of
the initial conditions and the scaling of units, which is discussed in more
detail in Section 3.4.
### 2.1 Momentum equation of the carrier flow
The motion of the gas flow is governed by the Navier-Stokes equation,
${\partial{\bm{u}}\over\partial
t}+\bm{u}\cdot{\bm{\nabla}}\bm{u}={\bm{f}}-\rho^{-1}{\bm{\nabla}}p+\rho^{-1}\bm{F}_{\rm
visc},$ (1)
where ${\bm{f}}$ is a forcing function (Brandenburg, 2001), $p$ is the gas
pressure, and $\rho$ is the fluid or gas density obeying the continuity
equation,
${\partial\rho\over\partial t}+{\bm{\nabla}}\cdot(\rho\bm{u})=0.$ (2)
For the case of direct numerical simulation with a constant kinetic viscosity
of the gas flow, the viscosity term $\bm{F}_{\rm visc}$ equals the physical
viscosity term $\bm{F}_{\rm visc}^{\nu}$ given by
$\bm{F}_{\rm
visc}^{\nu}=\rho\nu\left({\bm{\nabla}}^{2}\bm{u}+\frac{1}{3}\nabla\nabla\cdot\bm{u}+2\bm{{\sf
S}}\cdot\nabla\ln\rho\right),$ (3)
where $\bm{{\sf S}}={1\over
2}\left[{\bm{\nabla}}\bm{u}+\left({\bm{\nabla}}\bm{u}\right)^{T}\right]-{1\over
3}\left({\bm{\nabla}}\cdot\bm{u}\right){\sf I}$ is the rate-of-strain tensor
(${\sf I}$ is the unit tensor) resulting from the shear effect and the
deformation due to compression. For the case with shock-capturing viscosity,
the viscosity term becomes
$\bm{F}_{\rm visc}=\bm{F}_{\rm visc}^{\nu}+\rho\zeta_{\rm
shock}\nabla\nabla\cdot\bm{u}+\left(\nabla\cdot\bm{u}\right)\nabla\left(\rho\zeta_{\rm
shock}\right).$ (4)
The shock viscosity $\zeta_{\rm shock}$ is given by
$\zeta_{\rm shock}=c_{\rm
shock}\langle\rm{max}[(-{\bm{\nabla}}\cdot\bm{u})_{+}]\rangle(\min(\delta
x,\delta y,\delta z))^{2},$ (5)
where $c_{\rm shock}$ is a constant defining the strength of the shock
viscosity (Haugen et al., 2004). The grid spacings are given by $\delta x$,
$\delta y$, and $\delta z$. The shock-capturing term in Eq. (4) is used in
simulations with high Mach number, where we strive to use the highest spatial
resolution to capture the shocks. Nevertheless, it is
necessary to introduce this term to handle the strongest shocks. Two
dimensionless parameters characterise compressible turbulence: the Reynolds
number Re and the root-mean-square (rms) Mach number $\mathcal{M}_{\rm rms}$.
Re is defined as
${\rm Re}\equiv u_{\rm rms}L_{\rm inj}/\nu,$ (6)
where $u_{\rm rms}$ is the rms turbulent velocity and $L_{\rm inj}$ is the
energy injection length scale. The compressibility of the flow is
characterised by $\mathcal{M}_{\rm rms}$, which is defined as
${\cal{M}_{\rm rms}}=u_{\rm rms}/c_{\rm s},$ (7)
where $c_{\rm s}$ is the sound speed. The sound speed is kept constant because
the compressible flow to be investigated here is assumed to be isothermal such
that $c_{\rm s}^{2}=\gamma p/\rho$, where $\gamma=c_{\rm P}/c_{\rm V}=1$ with
the specific heats $c_{\rm P}$ and $c_{\rm V}$ at constant pressure and
constant volume, respectively. Another quantity is the mean energy dissipation
rate $\langle\bar{\epsilon}\rangle$, which measures how vigorous the small
eddies are in turbulence. It can be calculated from the trace of $\bm{{\sf
S}}_{ij}$ as $\langle\bar{\epsilon}\rangle=2\nu\,\textstyle{\overline{{\rm
Tr\,}{\sf S_{ij}}{\sf S_{ji}}}}$. $\langle\bar{\epsilon}\rangle$ determines
the smallest scales of the turbulence, for example, the Kolmogorov length
scale is defined as $\eta=(\nu^{3}/{\langle\bar{\epsilon}\rangle})^{1/4}$ and
the timescale is defined as
$\tau_{\eta}=(\nu/\langle\bar{\epsilon}\rangle)^{1/2}$. Because the Saffman-
Turner collision rate is proportional to $\tau_{\eta}^{-1}$ (Saffman & Turner,
1956), $\langle\bar{\epsilon}\rangle$ indirectly determines the coagulation
rate of particles in an incompressible flow. Coagulation occurs at the small
scales of turbulence, and the strength of the small eddies determines the
particle velocity (Li et al., 2018). Therefore it is worth investigating
whether and how $\langle\bar{\epsilon}\rangle$ affects the coagulation rate in
compressible turbulence as well. We show that it does not affect the
coagulation rate in the compressible case.
The stochastic solenoidal forcing $\bm{f}$ is given by
$\bm{f}(\bm{x},t)=\mbox{Re}\\{N\bm{f}_{\bm{k}(t)}\exp[i\bm{k}(t)\cdot\bm{x}+i\phi(t)]\\},$
(8)
where $\bm{k}(t)$ is the wave vector, $\bm{x}$ is the position, and $\phi(t)$
($|\phi|<\pi$) is a random phase. The normalization factor is given by
$N=f_{0}c_{\rm s}(kc_{\rm s}/\Delta t)^{1/2}$, where $f_{0}$ is a non-
dimensional factor, $k=|\bm{k}|$, and $\Delta t$ is the integration time step
(Brandenburg & Dobler, 2002). We chose a completely non-helical forcing, that
is,
$\bm{f}_{\bm{k}}=\left(\bm{k}\times\bm{e}\right)/\sqrt{\bm{k}^{2}-(\bm{k}\cdot\bm{e})^{2}},$
(9)
where $\bm{e}$ is the unit vector.
To achieve different $\cal{M}_{\rm rms}$ with fixed ${\rm Re}$ and
$\langle\bar{\epsilon}\rangle$ in the simulations, we need to change $u_{\rm
rms}$, $\nu$, and the simulation box $L$ simultaneously according to Eq. (6)
and Eq. (7), and we also considered
$\langle\bar{\epsilon}\rangle\sim\frac{u_{\rm rms}^{3}}{L_{\rm inj}}.$ (10)
Since $u_{\rm rms}$ is essentially determined by the amplitude of forcing
$f_{0}$, we changed $f_{0}$ in the simulation as well.
### 2.2 Particle dynamics
The trajectories of inertial particles are determined by
$\frac{d\bm{x}_{i}}{dt}=\bm{v}_{i}$ (11)
and
$\frac{d\bm{v}_{i}}{dt}=\frac{1}{\tau_{i}}(\bm{u}-\bm{v}_{i})\,,$ (12)
where
$\tau_{i}=\sqrt{\pi\over 8}{\rho_{\rm mat}\over\rho}{a\over c_{\rm
s}}\left(1+{9\pi\over 128}{|\bm{u}-\bm{v}_{i}|^{2}\over c_{\rm
s}^{2}}\right)^{-1/2},$ (13)
is the stopping time, that is, the kinetic-drag timescale. In the equation
above, $a$ is the radius of a particle, $\rho_{\rm mat}$ is the material
density of particles, and $\rho$ is the mass density of the gas. We assumed
that particles are well described in the Epstein limit because the mean-free-
path $\lambda$ is large and particles are small in most astrophysical contexts
(large Knudsen number, ${\rm Kn}=\lambda/a\gg 1$; Armitage, 2010). The stopping time
at low relative Mach number ($\mathcal{W}=|\bm{u}-\bm{v}_{i}|/c_{\rm s}\ll 1$)
is
$\tau_{i}(\mathcal{W}\ll 1)=\sqrt{\pi\over 8}{\rho_{\rm mat}\over\rho}{a\over
c_{\rm s}}.$ (14)
The term in parentheses of Eq. (13) is a correction for high $\mathcal{W}$.
Eq. (13) is essentially a quadratic interpolation between the two expressions
for the limits $\mathcal{W}\ll 1$ and $\mathcal{W}\gg 1$ derived from the
exact expression for $\tau_{i}$ (see Schaaf, 1963; Kwok, 1975; Draine &
Salpeter, 1979).
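For reference, Eqs. (13)-(14) translate directly into a short function; the
following Python sketch assumes consistent code units and is not the Pencil
Code implementation.

```python
import numpy as np

def stopping_time(a, rho_mat, rho, cs, dv):
    """Epstein stopping time of Eq. (13): the low relative-Mach-number limit
    of Eq. (14) times the correction for W = |u - v|/c_s of order unity or larger."""
    tau_low = np.sqrt(np.pi / 8.0) * (rho_mat / rho) * a / cs   # Eq. (14)
    W2 = (dv / cs) ** 2                                         # squared relative Mach number
    return tau_low / np.sqrt(1.0 + (9.0 * np.pi / 128.0) * W2)  # Eq. (13)
```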
To characterize the inertia of particles, we define a ‘grain-size parameter’
as
$\alpha={\rho_{\rm mat}\over\langle\rho\rangle}{a\over L},$ (15)
which is the parametrisation used by Hopkins & Lee (2016). Because the total
mass of a simulation box of size $L$, as well as the mass of a grain of a
given radius $a$, is constant, the quantity $\alpha$ is solely determined by
$a$ regardless of the characteristics of the simulated flow.
In general, the inertia of particles is characterised by the Stokes number
${\rm St}=\tau_{i}/\tau_{\eta}$. The disadvantage of ${\rm St}$ as the size
parameter for inertial particles in a highly compressible carrier fluid is
that a fluid flow with ${\rm Re}\gg 1$ cannot be regarded as a Stokes flow. If
$\mathcal{M}_{\rm rms}\gg 1$ as well, ${\rm St}$ is not even well defined as
an average quantity in a finite simulation domain. The parameter $\alpha$ is
therefore a better dimensionless measure of grain size than the average Stokes
number $\langle{\rm St}\rangle$ for a supersonic compressible flow. Moreover,
$\langle{\rm St}\rangle$ is not only a function of the size, but also a
function of the mean energy dissipation rate $\langle\bar{\epsilon}\rangle$,
which complicates the picture even further.
### 2.3 Averages
In the following we frequently refer to the mean or average quantities of
three different types. For each of them we use a different notation. First, we
use the bracket notation $\langle Q\rangle$ for volume averages, taken over
the whole simulation box unless stated otherwise. Second, we use the over-bar
notation $\bar{Q}$ for straight time-averaged quantities. Third, we use the
tilde notation $\tilde{Q}$ for ensemble averages, that is, averages defined by
the first moment of the distribution function of the particles.
The rms value of a fluctuating physical quantity has been mentioned in section
1. In terms of the above notation, rms values always refer to $Q_{\rm
rms}\equiv\sqrt{\langle Q^{2}\rangle}$.
## 3 Coagulation
### 3.1 Numerical treatment of coagulation
The most physically consistent way to model coagulation is to track each
individual Lagrangian particle and to measure the collisions among them when
they overlap in space. This is computationally challenging because the
coagulation timescale of inertial particles is often much shorter than the
Kolmogorov timescale; moreover, with $10^{7}$ representative particles it
would mean solving a large $N$-body problem. Because of this computational
load, a super-particle approach is often used to study the coagulation of dust
grains (Zsom & Dullemond, 2008; Johansen et al., 2012; Li et al., 2017).
Instead of tracking each individual particle, super-particles consisting of
several identical particles are followed. Within each super-particle, all the
particles have the same velocity $\bm{v}_{i}$ and size $a$.
The super-particle approach is a Monte Carlo approach, which treats
coagulation of dust grains in a stochastic manner (Bird, 1978, 1981; Jorgensen
et al., 1983). Each super-particle is assigned a virtual volume that is the
same as the volume of a grid cell, and therefore a number density $n_{j}$.
When two super-particles $i$ and $j$ reside in the same grid cell, the
probability of coagulation is $p_{\rm c}=\tau_{\rm c}^{-1}\Delta t$, where
$\tau_{\rm c}$ is the coagulation time and $\Delta t$ is the integration time
step. A coagulation event occurs when $p_{\rm c}>\eta_{c}$, where $\eta_{c}$
is a random number. The coagulation timescale $\tau_{\rm c}$ is defined as
$\tau_{\rm c}^{-1}=\sigma_{\rm c}n_{j}\,|{\bm{w}}_{ij}|\,E_{\rm c},$ (16)
where $\sigma_{\rm c}=\pi(a_{i}+a_{j})^{2}$ and ${\bm{w}}_{ij}$ are the
geometric coagulation cross section and the absolute velocity difference
between two particles with radii $a_{i}$ and $a_{j}$, respectively, and
$E_{\rm c}$ is the coagulation efficiency (Klett & Davis, 1973). For
simplicity, we set $E_{\rm c}$ to unity. This means that all particles
coalesce upon collision, that is, bouncing and fragmentation are neglected.
This treatment may overestimate the collision rate but does not affect the
dynamics of particles. Therefore the $\mathcal{M}$ dependence should not be
affected. Compared with the super-particle algorithm that is widely used in
planet formation (Zsom & Dullemond, 2008; Johansen et al., 2012), our
algorithm provides better collision statistics (Li et al., 2017). We refer to
Li et al. (2017) for a detailed comparison of the super-particle algorithm
used in Johansen et al. (2012) and Li et al. (2017, 2018, 2020).
### 3.2 Timescale differences
Before we describe the basic theory of coagulation of particles in a turbulent
carrier fluid, it is important that we consider the different timescales that
are involved in this complex and composite problem. In Eq. (16), we introduced
$\tau_{\rm c}$. The other important timescale in a model of coagulation of
particles in turbulence is the flow timescale of the carrier fluid, in this
case, the large-eddy turnover time $\tau_{L}=L/u_{\rm rms}$, where $u_{\rm
rms}$ is the rms flow velocity. Clearly, $\tau_{L}$ depends on the scaling of
the simulation. When we compare the two timescales, we find that
${\tau_{L}\over\langle{\tau_{\rm c}\rangle}}\sim N_{\rm p}{a^{2}\over
L^{2}}{\langle{|\bm{w}_{ij}|}\rangle\over u_{\rm rms}},$ (17)
where $N_{\rm p}$ is the total number of particles in the simulated volume. We
note that $\langle|{\bm{w}}_{ij}|\rangle/u_{\rm rms}\ll 1$ as long as the
particles do not decouple completely from the carrier flow. In order to avoid
slowing down the simulation too much, we aim for $\tau_{L}/\langle\tau_{\rm
c}\rangle\sim 1$. From this we may conclude that $N_{\rm p}\gg(L/a)^{2}$,
which implies that if we have an upper bound of $N_{\rm p}$ for computational
reasons, we cannot simulate tiny particles in a large volume. The ratio
$\tau_{L}/\langle{\tau_{\rm c}\rangle}$ shows how difficult it can be to
simulate the coagulation in astrophysical contexts, in particular when the
details of coagulation of inertial particles are simulated in a carrier fluid
representing well-resolved compressible turbulence.
In addition to the two timescales discussed above, we must also consider the
stopping time $\tau_{i}$ of the particles because we study inertial particles.
For $\alpha\lesssim 0.1$, $\tau_{i}$ is typically smaller than $\tau_{L}$.
Hence, the competing timescales would rather be $\tau_{\rm c}$ and $\tau_{i}$,
which suggests that the ratio $\tau_{i}/\tau_{\rm c}$ should be of order unity
to avoid slowing down the simulation compared to the case of non-interacting
particles. By the same assumptions as above ($k_{\rm f}\approx 3$ and $E_{\rm
c}\sim 1$), we can show that
${\tau_{i}\over\langle{\tau_{\rm
c}\rangle}}\sim{\langle|\bm{w}_{ij}|\rangle\over c_{\rm s}}{\rho_{\rm
p}\over\rho}\,a_{i}^{3},$ (18)
where $\rho_{\rm p}{\equiv\rho_{\rm mat}\,n_{i}}$ is the mass density of
particles (not to be confused with the bulk material density $\rho_{\rm
mat}$). In many astrophysical contexts (in particular, cold environments)
$\langle|\bm{w}_{ij}|\rangle/c_{\rm s}\sim 1$, which then suggests we must
have $\rho_{\rm p}/\rho\sim 1$. This is always inconsistent with cosmic dust
abundances, however, whether in stars, interstellar clouds, or even proto-
planetary discs. In the cold ISM, $\rho_{\rm p}/\rho\sim 10^{-2}$ and
$\langle|\bm{w}_{ij}|\rangle/c_{\rm s}\sim 1$, which implies that
$\tau_{i}/\tau_{\rm c}\ll 1$ and thus the time step of a simulation of
coagulation in such an environment is limited by $\tau_{i}$. In practice, this
means that it will be difficult (or even impossible) to target coagulation in
cold molecular clouds in the ISM without highly specialised numerical methods.
The goal of the present study is primarily to investigate how coagulation of
inertial particles depends on $\mathcal{M}_{\rm rms}$ and not to simulate
coagulation in a realistic and dilute astrophysical environment. We note,
however, that any result on coagulation of particles in compressible
turbulence is primarily of importance for astrophysics, for instance, the
processing of dust grains in the ISM and various types of circumstellar
environments. Therefore we tried to make the simulation system as dilute as
possible, while still ensuring statistical convergence and computational
feasibility.
### 3.3 Theory of coagulation of inertial particles in turbulence
Coagulation, as described by the Smoluchowski (1916) equation, is determined
by the coagulation rate $R_{ij}$ between two grains species (sizes) $i$ and
$j$ and the associated rates of depletion. In general, we have $R_{ij}={1\over
2}n_{i}\,n_{j}\,C_{ij}$, where $n_{i}$, $n_{j}$ are the number densities of
the grains $i$ and $j$, and $C_{ij}$ is the collision kernel. Turbulence has
been proposed to have a profound effect on $C_{ij}$, and we focus this theory
section on what happens to $C_{ij}$.
Assuming the distribution of particle pairs can be separated into distinct
spatial and velocity distributions, we have (Sundaram & Collins, 1997)
$\langle
C_{ij}\rangle=\pi(a_{i}+a_{j})^{2}\,g(r,a_{i},a_{j})\int\bm{w}_{ij}\,P(\bm{w}_{ij},a_{i},a_{j})\,d\bm{w}_{ij},$
(19)
where $g$ is the radial distribution function (RDF) and $P$ is the probability
density distribution of relative velocities $\bm{w}_{ij}$.
Below, we review the basic theory of coagulation in the tracer-particle limit
and the large-inertia limit. Since small-scale clustering is negligible for
both small- and large-inertia particles, we implicitly assume that $g(r)=1$.
Moreover, in the case of Maxwellian velocities, that is, when
$P(\bm{w}_{ij},a_{i},a_{j})$ follows a Maxwellian distribution, the integral
part of Eq. (19) becomes
$\langle|\bm{w}_{ij}|\rangle=\sqrt{8/(3\pi)\,\langle\bm{w}_{ij}\cdot\bm{w}_{ij}\rangle}$.
Thus, the collision kernel in Eq. (19) reduces to
$\langle
C_{ij}\rangle=\sqrt{8\pi/3}\,(a_{i}+a_{j})^{2}\sqrt{\langle\bm{w}_{ij}\cdot\bm{w}_{ij}\rangle},$
(20)
which is the form assumed in the two following subsections.
#### 3.3.1 Tracer-particle limit
In the low-inertial limit, also known as the Saffman-Turner limit or the
tracer-particle limit (Saffman & Turner, 1956),
$\langle\bm{w}_{ij}^{2}\rangle$ is a simple function of $a_{i}$ and $a_{j}$.
In case of a mono-dispersed grain population ($a=a_{i}=a_{j}$) suspended in a
turbulent low Mach-number medium, we may use the Saffman & Turner (1956)
assumption, $\langle\bm{w}_{ij}^{2}\rangle={1\over 5}(a/\tau_{\eta})^{2}$,
where $\tau_{\eta}=\sqrt{\nu/\langle\bar{\epsilon}\rangle}$ is the Kolmogorov
timescale. This is a reasonable approximation if $\mathcal{M}_{\rm rms}$ is
not too large. The Saffman & Turner (1956) theory relies on $\bm{w}_{ij}$
having a Gaussian distribution (Maxwellian velocities) and the final
expression for $\langle C_{ij}\rangle$ becomes
$\langle C_{i}\rangle=\sqrt{8\pi\over
15}\frac{\left(2a_{i}\right)^{3}}{\tau_{\eta}}.$ (21)
For the multi-dispersed case, we replace $2a_{i}$ by $(a_{i}+a_{j})$ in Eq.
(21).
At first sight, compressibility does not seem to play any role at all, given
the collision kernel $\langle C_{ij}\rangle$ above. However, it can affect the
number density of particles $n_{i}$, and therefore, $R_{ij}$. In the tracer
particle limit, the spatial distribution of particles is statistically the
same as for gas. Simulations have shown that the gas density of isothermal
hydrodynamic turbulence exhibits a lognormal distribution (e.g. Federrath et
al., 2009; Hopkins & Lee, 2016; Mattsson et al., 2019) with a standard
deviation of $\sigma_{\rho}^{2}$ that is empirically related to ${\cal M}_{\rm
rms}$ (see e.g. Passot & Vázquez-Semadeni, 1998; Price et al., 2011; Federrath
et al., 2010). Consequentially, $n_{i}$ depends on ${\cal M}_{\rm rms}$, and
so does $R_{ij}$.
#### 3.3.2 Large-inertia limit
In the opposite limit, the large-inertia limit, particles should behave
according to kinetic theory. As shown by Abrahamson (1975), we have in this
limit that particles are randomly positioned and follow a Maxwellian velocity
distribution. In this case, we may conclude that
$\langle\bm{w}_{ij}^{2}\rangle=\langle\bm{v}_{i}^{2}\rangle+\langle\bm{v}_{j}^{2}\rangle$
because the particles are then statistically independent and thus they have a
covariance that is identically zero. As in the tracer-particle limit, the
expression for a mono-dispersed population becomes
$\langle C_{i}\rangle=(2a)^{2}\left({16\pi\over
3}\langle\bm{v}_{i}^{2}\rangle\right)^{1/2}.$ (22)
Previous theoretical work on inertial particles in turbulent flows (e.g.
Abrahamson, 1975; Hopkins & Lee, 2016; Mattsson et al., 2019; Pan & Padoan,
2013; Wang et al., 2000) have shown that the rms velocity $v_{\rm rms}$ of the
particles is a function of their size. More precisely, we can parametrise in
terms of grain size such that $v_{\rm rms}^{2}(a_{i})/u_{\rm rms}^{2}\approx
a_{0}/a_{i}$ for $a/a_{0}\gg 1$, where $a_{0}$ is a scaling radius that can be
estimated from the stopping time $\tau_{i}$ and the integral timescale
$\tau_{L}$ (Hedvall & Mattsson, 2019; Mattsson & Hedvall, 2021). Assuming
Maxwellian velocity distributions again, we have that
$\langle\bm{w}_{ij}\cdot\bm{w}_{ij}\rangle=v_{\rm rms}^{2}(a_{i})+v_{\rm
rms}^{2}(a_{j})$. Thus, the mean collision kernel for the multi-dispersed case
can be expressed as
$\langle C_{ij}\rangle\approx\sqrt{8\pi\over 3}\,(a_{i}+a_{j})^{2}\,u_{\rm
rms}\,\left({a_{0}\over a_{i}}+{a_{0}\over a_{j}}\right)^{1/2}.$ (23)
Here, we may note that $\langle C_{ij}\rangle$ is proportional to $u_{\rm
rms}$, implying that the collision rate should scale with $\mathcal{M}_{\rm
rms}$. For particles of equal size, that is, $a_{i}=a_{j}$, $\langle
C_{ij}\rangle$ reduces to the form given above in Eq. (22), which also means
that $\langle C_{i}\rangle\sim a_{i}^{5/2}$. In the absence of external body
forces other than kinetic drag, $\langle{\bm{v}}_{i}^{2}\rangle\to 0$ for very
large inertia particles. Thus, $\langle C_{ij}\rangle\to 0$, eventually.
To summarise, we note that for the tracer-particle limit (small-inertia
particles, St $\ll 1$), density variations in high-${\cal M}_{\rm rms}$
turbulence, and the locally elevated number densities ($n_{i}$) of dust
particles that follows consequently, can have a significant impact on the
average collision rate $\langle R_{ij}\rangle$, although not on $\langle
C_{ij}\rangle$. For particles with large inertia, beyond the effect of
compressibility on $\langle R_{ij}\rangle$, caustics in the particle phase may
also contribute to $\langle C_{ij}\rangle$ (see Appendix A for details of the
discussion).
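The two limiting kernels are easy to evaluate side by side; a minimal sketch
under the Maxwellian assumptions stated above:

```python
import numpy as np

def kernel_tracer(a_i, a_j, tau_eta):
    """Saffman-Turner (tracer-particle) limit, Eq. (21) with 2a -> a_i + a_j."""
    return np.sqrt(8.0 * np.pi / 15.0) * (a_i + a_j) ** 3 / tau_eta

def kernel_large_inertia(a_i, a_j, u_rms, a0):
    """Large-inertia limit, Eq. (23), with v_rms^2(a) = u_rms^2 a0 / a."""
    return (np.sqrt(8.0 * np.pi / 3.0) * (a_i + a_j) ** 2 * u_rms
            * np.sqrt(a0 / a_i + a0 / a_j))
```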
### 3.4 Initial conditions
Super-particles were initially distributed randomly in the simulation box and
mono-dispersed in size ($\alpha_{0}=10^{-4}$). As discussed in section 3.1,
each super-particle was assigned a virtual volume $(\delta x)^{3}$, where
$\delta x$ is the lateral size of the lattice. With an initial number density
of dust grains $n_{0}$, the total number of dust grains in the computational
domain is given by
$n_{0}L^{3}=N_{\rm s}(n_{j}\delta x^{3}),$ (24)
where $N_{\rm s}$ is the initial total number of super-particles. Since $(L/\delta
x)^{3}=N_{\rm grid}$ with $N_{\rm grid}$ the number of grid cells, Eq. (24)
can be rewritten as
$n_{0}N_{\rm grid}=N_{\rm s}n_{j},$ (25)
where $n_{j}$ is the number density within each super-particle at $t=0$. The
number of physical particles in each super-particle $N_{\rm p/s}$ is
determined by
$N_{\rm p/s}=N_{\rm p}/N_{\rm s}=\frac{L^{3}}{N_{\rm grid}}n_{j},$ (26)
which means that $N_{\rm p/s}$ is uniquely determined by $L^{3}$ when $N_{\rm
s}$, $N_{\rm grid}$, and $n_{j}$ are fixed.
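As a quick consistency check, Eqs. (25)-(26) can be evaluated directly; the
number density $n_{j}$ below is a placeholder value, not one taken from Table 1.

```python
N_grid = 512 ** 3           # number of grid cells
N_s = 15624960              # number of super-particles (run A in Table 1)
L = 2 * 3.141592653589793   # box size of run A
n_j = 1.0                   # number density within a super-particle (placeholder)

n_0 = N_s * n_j / N_grid    # Eq. (25): initial grain number density
N_ps = L ** 3 * n_j / N_grid  # Eq. (26): physical particles per super-particle
print(n_0, N_ps)
```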
To avoid running out of memory while executing the simulation, we must limit
the number of super-particles to $N_{\rm s}\sim 10^{7}$, which leads to a
required resolution $N_{\rm grid}=512^{3}$. The value of $n_{0}$ must be
chosen for computational feasibility, and we also kept the number of particles
within each super-particle to a minimum to avoid averaging out too much of the
turbulence effects (as explained in Section 3.1).
We can take the physical parameters of dust grains in the ISM as an example of
how difficult it is to simulate a dilute system, even on a current modern
supercomputer. According to Eq. (16) and Eq. (24), the collision frequency is
proportional to $n_{0}$. With $n_{0}=3.33\times 10^{-7}\,\rm{cm}^{-3}$,
$a_{0}=10^{-6}\,\rm{cm}$, $|\bm{v}_{i}-\bm{v}_{j}|\approx
10^{5}\,\rm{cm\,s^{-1}}$, and $E_{c}\approx 1$, the initial collision
frequency is $\tau_{c}^{-1}\approx 10^{-9}\,\rm{s}^{-1}$. The simulation time
step must match the corresponding physical coagulation timescale for the
particles, which is far beyond the computational power at our disposal in case
of a small number of particles within each super-particle.
### 3.5 Diagnostics
The coagulation process is sensitive to the large-particle tail of the
particle size distribution $f(a,t)$ because particle coagulation is strongly
dependent on the total cross section. The tails of $f(a,t)$ can be
characterised using the normalised moments of radius $a$ (Li et al., 2017),
$a_{\zeta}=\left({M_{\zeta}}/{M_{0}}\right)^{1/\zeta},$ (27)
where
$M_{\zeta}(t)=\int_{0}^{\infty}f(a,t)\,a^{\zeta}\,{\rm d}a,$ (28)
is the $\zeta$th moment of $a$. We adopted $\zeta=24$ to characterise the
large-particle tail. To follow the overall evolution of $f(a,t)$, we also
considered the mean radius, defined by the first-order normalised moment,
$\tilde{a}=a_{1}=M_{1}/M_{0}$. The relative standard deviation of $f(a,t)$ can
be defined as
$\sigma_{a}/\tilde{a}=(a_{2}^{2}-a_{1}^{2})^{1/2}/a_{1},$ (29)
where $\sigma_{a}=(a_{2}^{2}-a_{1}^{2})^{1/2}$ is the standard deviation of
$a$.
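For a sampled set of particle radii, the diagnostics of Eqs. (27)-(29) reduce
to sample moments; a minimal sketch:

```python
import numpy as np

def normalised_moment(a, zeta):
    """a_zeta = (M_zeta / M_0)^(1/zeta), Eqs. (27)-(28), for sampled radii a."""
    return np.mean(a ** zeta) ** (1.0 / zeta)

def size_diagnostics(a):
    a1 = normalised_moment(a, 1)             # mean radius a_tilde
    a2 = normalised_moment(a, 2)
    a24 = normalised_moment(a, 24)           # probes the large-particle tail
    rel_sigma = np.sqrt(a2**2 - a1**2) / a1  # Eq. (29)
    return a1, a24, rel_sigma
```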
## 4 Results
Table 1: Parameter values used in different simulation runs. The sound speed is $c_{s}=1.0\,[L/T]$ in all simulations except for Run F, where $c_{s}=0.5\,[L/T]$.

Run | $f_{0}$ | $L_{x}$ | $N_{\rm grid}$ | $N_{\rm s}$ | $\alpha_{\rm ini}$ | $\nu$ | $c_{\rm shock}$ | ${\cal M}_{\rm rms}$ | ${\rm Re}$ | $\langle\bar{\epsilon}\rangle$ | $\eta$ | $\rm{St}^{L}(t=0)$ | $\rm{St}^{\eta}(t=0)$
---|---|---|---|---|---|---|---|---|---|---|---|---|---
A | $1.0$ | $2\pi$ | $512^{3}$ | $15624960$ | 0.016 | $2.5\times 10^{-3}$ | 2.0 | 1.56 | 200 | 0.80 | 0.012 | 0.047 | 1.071
C | $0.73$ | $7.47$ | $512^{3}$ | $15624960$ | 0.013 | $2.5\times 10^{-3}$ | 2.0 | 1.27 | 194 | 0.42 | 0.014 | 0.032 | 0.769
D | $0.02$ | $0.05$ | $256^{3}$ | $1953120$ | 2.000 | $1\times 10^{-5}$ | – | 0.066 | 83 | 0.76 | 0.0002 | 0.200 | 2.778
E | $1.20$ | $0.16$ | $256^{3}$ | $1953120$ | 0.625 | $5\times 10^{-5}$ | 2.0 | 0.55 | 90 | 1.13 | 0.0007 | 0.667 | 10.000
F | $3.50$ | $\pi$ | $512^{3}$ | $15624960$ | 0.032 | $2.5\times 10^{-3}$ | 8.0 | 2.58 | 82 | 1.03 | 0.01 | 0.167 | 2.600
G | $1.00$ | $1.60$ | $256^{3}$ | $1953120$ | 0.032 | $1\times 10^{-3}$ | 2.0 | 0.99 | 81 | 0.80 | 0.006 | 0.118 | 1.500
H | $1.00$ | $0.60$ | $256^{3}$ | $1953120$ | 0.167 | $2.5\times 10^{-4}$ | 2.0 | 0.75 | 92 | 0.79 | 0.002 | 0.240 | 3.000
I | $1.00$ | $0.80$ | $256^{3}$ | $1953120$ | 0.125 | $1\times 10^{-3}$ | 2.0 | 0.74 | 30 | 0.79 | 0.006 | 0.176 | 1.500
To investigate how coagulation depends on the Mach number $\mathcal{M}_{\rm
rms}$, we performed simulations for different $\mathcal{M}_{\rm rms}$ ranging
from 0.07 to 2.58 while keeping ${\rm Re}$ and $\langle\bar{\epsilon}\rangle$
fixed (see the details of the simulation setup in Table 1). As shown in Fig.
1, the power spectra follow the classical Kolmogorov $-5/3$ law. Because the
Reynolds number that can be achieved in DNS studies is much lower than the one
in the ISM, large-scale separation of turbulence is not observed.
Figure 1: Power spectra for simulations of different ${\cal M}_{\rm rms}$:
${\cal M}_{\rm rms}=0.07$ (solid dotted black curve), $0.55$ (dashed cyan
curve), $0.99$ (dotted blue curve), and $2.58$ (red curve). The dashed black
curve shows the Kolmogorov $-5/3$ law.
Next, we inspected the time evolution of the dust size distribution $f(a,t)$.
As shown in Fig. 2, the tail of $f(a,t)$ widens with increasing ${\cal M}_{\rm
rms}$. The broadening of $f(a,t)$ is slowest for the nearly incompressible
flow with ${\cal M}_{\rm rms}=0.07$ (solid dotted black curve). A transition
is observed when the flow pattern changes from subsonic (${\cal M}_{\rm
rms}\sim 0.5$) to transonic or supersonic (${\cal M}_{\rm rms}\gtrsim 1$),
where the broadening and the extension of the tail of $f(a,t)$ become
prominent. (A turbulent flow may be categorised according to Mach number as
subsonic, ${\cal M}_{\rm rms}<0.8$; transonic, $0.8<{\cal M}_{\rm rms}<1.3$;
and supersonic, $1.3<{\cal M}_{\rm rms}<5.0$.) This is further evidenced by
the simulations with ${\cal M}_{\rm
rms}=0.75$ (dash dotted magenta curve) and ${\cal M}_{\rm rms}=0.99$ (dotted
blue curve), in the intermediate transonic regime. The supersonic case with
${\cal M}_{\rm rms}=2.58$ displays a significant broadening of the tail.
Figure 2: Time evolution of $f(a,t)$ for simulations in Fig. 1 and for an
additional simulation run H in Table 1.
Fig. 3 shows the time evolution of the mean radius $\tilde{a}$ normalised by
the initial size of particles. It is obvious that $\tilde{a}/a_{\rm ini}$
increases with increasing ${\cal M}_{\rm rms}$. Although this does not say
much about tail effects, $\tilde{a}$ is a good measurement of the mean
evolution of $f(a,t)$.
Figure 3: Time evolution of $\tilde{a}/a_{\rm ini}$ for simulations in Fig. 2.
According to Eq. (19), the coagulation rate depends on the total cross section
of the two colliding particles. Therefore growth by coagulation is sensitive
to the large-particle tail of $f(a,t)$. As discussed in Section 3.5, the tail of
$f(a,t)$ can be characterised by $a_{24}$, that is, the 24th normalised
moment. Fig. 4(a) shows that the rate of increase of $a_{24}$ increases with
${\cal M}_{\rm rms}$. The corresponding relative dispersion of $f(a,t)$,
$\sigma_{a}/\tilde{a}$, is shown in Fig. 4(b), which exhibits the same ${\cal
M}_{\rm rms}$ dependence as $a_{24}$. However, the form of
$\sigma_{a}/\tilde{a}$ as a function of $a_{24}/a_{\rm ini}$ is essentially
independent of ${\cal M}_{\rm rms}$, as shown by the inset in Fig. 4(b).
Figure 4: Time evolution of (a) the $a_{24}$ and (b) $\sigma_{a}/\tilde{a}$
for simulations in Fig. 2. The inset shares the same y-axis as the main plot
in panel (b). The dashed magenta curve in the inset shows ${\rm
ln}(a_{24}/a_{\rm ini})$.
As mentioned in section 3.3, the mean collision kernel $\langle C_{ij}\rangle$
depends on ${\cal M}_{\rm rms}$. Fig. 5(a) shows the collision kernel $\langle
C_{ij}\rangle$ normalised according to $(a_{i}+a_{j})_{\rm ini}^{3}$, that is,
the initial particle size. The Saffman-Turner model should not apply to
coagulation of inertial particles in compressible turbulence as it assumes
that particles act as passive tracers and are advected by the turbulent motion
of the carrier. In spite of this, $\langle C_{ij}\rangle$ appears to scale
with particle size as $a^{3}$, which is shown in Fig. 5(b), where $\langle
C_{ij}\rangle$ is normalised to $\mathcal{M}_{\rm rms}\,(a_{i}+a_{j})^{3}$.
The reason for this is not obvious. We recall, however, that we consider
turbulence in highly compressible flows, and more importantly, that the
trajectories of inertial particles tend to deviate from the flow. This leads
to higher particle densities by compaction and clustering in the convergence
zones in between vortex tubes (Maxey, 1987). Moreover, depending on the
particle masses, it may also lead to the formation of caustics, which are the
singularities in phase-space of suspended inertial particles (Falkovich et
al., 2002; Wilkinson & Mehlig, 2005). This will lead to large velocity
differences between colliding particles and thus to large $\langle
C_{ij}\rangle$. The net result is a rather complex coagulation process, where
$\langle C_{ij}\rangle$ varies strongly from one location to the next, which
is further discussed below.
According to Fig. 5(a), $\langle C_{ij}\rangle$ exhibits a clear increase from
subsonic to supersonic turbulence (cf. ${\cal M}_{\rm rms}=0.55$, cyan curve
in Fig. 5(a) and ${\cal M}_{\rm rms}=2.58$, red curve in Fig. 5(a)). As we
argued in section 3.3, $\langle C_{ij}\rangle$ is proportional to ${\cal
M}_{\rm rms}$ under the assumption of Maxwellian velocity distributions. Fig.
5 (b) shows $\langle C_{ij}\rangle$ normalised also by ${\cal M}_{\rm rms}$.
We note that a linear scaling seems to be applicable from the subsonic regime
to the supersonic regime. This means that the simple analytical theory is
reasonable. The cyan curve deviates from other curves because the initial
Stokes number for this simulation is about 10, as listed in Table 1. The
Stokes number dependence of $\langle C_{ij}\rangle$ can be further confirmed
by Fig. 5(c). As $\langle C_{ij}\rangle$ is determined by the relative
velocity of colliding pairs, we normalised $\langle C_{ij}\rangle$ by $v_{\rm
p,rms}/c_{\rm s}$, as shown in Fig. 5(c). It is obvious that $\langle
C_{ij}\rangle$ scales linearly with $v_{\rm p,rms}/c_{\rm s}$ up to the
supersonic regime. Fig. 6 shows the distribution of the magnitudes of the
particle velocities, which indeed is very similar to a Maxwellian velocity
distribution. It also shows that particle velocities become higher with
increasing ${\cal M}_{\rm rms}$. Especially the tail of the particle velocity
distribution becomes more populated. This indicates that stronger shocks
accelerate inertial particles more and may therefore increase the coagulation
rate.
Figure 5: Measured collision kernel $\langle C\rangle$ normalised to
$(a_{i}+a_{j})^{3}$. Panel (a) shows the case of constant (initial) $a_{i}$,
while panel (b) shows the case where $(a_{i}+a_{j})^{3}$ evolves and where
${\cal M}_{\rm rms}$ is also included in the normalisation. The simulations
are the same as in Fig. 2. Panel (c) shows the same normalisation as panel
(b), but with ${\cal M}_{\rm p,rms}$.
Figure 6: Particle velocity distribution at $105\tau_{L}$. The dashed line is
a Maxwellian fit of the particle velocity of run G.
According to Eq. (19), the collision kernel is determined by the relative
velocity $\bm{w}_{ij}$ and the relative separation $\Delta\bm{r}$ of two
colliding particles. The former scenario is known as caustics (Wilkinson et
al., 2006) and the latter as clustering (Gustavsson & Mehlig, 2016), as
discussed above. Our simulations involve coagulation, which leads to evolution
of $\alpha(a,t)$. This makes it difficult to analyse clustering and caustics
based on these simulations. Below we try to understand the ${\cal M}_{\rm
rms}$-dependence based on the spatial distribution and velocity statistics of
the particles.
As shown in Fig. 7, the spatial distribution of particles exhibits different
behaviours in the three $\alpha$ ranges we considered. When $\alpha<0.1$,
particles tend to be trapped in regions where high gas density occurs. This is
consistent with the findings of Hopkins & Lee (2016) and Mattsson et al.
(2019), even though coagulation was not considered in their studies. When
$0.1\leq\alpha\leq 0.3$, particles still accumulate in the high-density
regions, but are also spread out in regions with low gas density. This
dispersion is expected as $\tau_{i}$ increases. Finally, when $\alpha>0.3$,
particles more or less decouple from the flow, demonstrating essentially a
random-walk behaviour. When we compare with ${\cal M}(\bm{x},t)$ instead of
$\ln{\rho}$, we see that particles accumulate in regions with low local ${\cal
M}(\bm{x},t)$, as shown in Fig. 8. That is, low ${\cal M}(\bm{x},t)$
corresponds to high $\ln{\rho}(\bm{x},t)$. The physical picture is the
following. Strong shocks generated in these local supersonic regions push
particles to low ${\cal M}(\bm{x},t)$ regions, which is then how particle
densities increase due to compression of the gas. This compaction of particles
is different from the fractal clustering of inertial particles, which mainly
occurs as a result of accumulation of particles in the convergence zones
between vortices. Statistically, the spatial distribution of particles can be
characterised by $g(r)$, which contributes to the mean collision rate as
expressed in Eq. (19). However, $g(r)$ is only useful as a diagnostic for a
mono-dispersed particle distribution or fixed size bins (Pan et al., 2011).
Therefore we only show the spatial distributions of particles and do not go
into details about the quantitative statistics.
Figure 7: Spatial distribution of inertial particles and the gas density at 80
time units in a slab with thickness $\eta$ for run F. Figure 8: Same as Fig.
7, but with ${\cal M}({\bm{x}},t)$ as the contour map.
As the collision kernel $\langle C_{ij}\rangle$ is also determined by
$|{\bm{w}}_{ij}|$, we also examined the magnitude of particle velocities for
different ranges of $\alpha$. Fig. 9 shows the PDF of $|\bm{v}_{p}|/u_{\rm
rms}$ for $\alpha<0.1$, $0.1\leq\alpha\leq 0.3$, and $\alpha>0.3$,
respectively. It is evident that the velocity magnitude of particles that are
coupled to the flow is higher than that of particles that are decoupled from
the flow. Thus, the ${\cal M}_{\rm rms}$ dependence of $\langle C_{ij}\rangle$
could very well be due to enhanced caustics and compression-induced
concentration with increasing ${\cal M}_{\rm rms}$.
Figure 9: Corresponding PDF for Fig. 7.
In incompressible turbulence, the collision kernel depends on $\tau_{\eta}$,
which is determined by $\langle\bar{\epsilon}\rangle$, while it is insensitive
to ${\rm Re}$ (Grabowski & Wang, 2013; Li et al., 2018). We examined the
$\langle\bar{\epsilon}\rangle$ and ${\rm Re}$ dependences of $a_{24}$ and
$\sigma_{a}$ in compressible turbulence. As shown in Fig. 10, $a_{24}$ and
$\sigma_{a}/\tilde{a}$ have only a weak dependence on
$\langle\bar{\epsilon}\rangle$ in the supersonic regime (e.g. simulations A
and C have similar ${\cal M}_{\rm rms}$ but differ by a factor of two in
$\langle\bar{\epsilon}\rangle$). By inspection of Fig. 10, changing ${\rm Re}$
(run H and I) does not obviously affect $a_{24}$ and $\sigma_{a}/\tilde{a}$ in
the transonic regime, which may seem to be consistent with the simulation
results for incompressible turbulence.
Figure 10: Time evolution of (a) $a_{24}$ and (b) dispersion of $f(a,t)$ for
different $\langle\bar{\epsilon}\rangle$ and ${\rm Re}$ with the same ${\cal
M}_{\rm rms}$ (see runs A, C, H, and I in Table 1).
## 5 Discussion
We have observed that as ${\cal M}_{\rm rms}$ increases, the tail of the
distribution increases. This poses mainly two questions: (1) how the
compression-induced density variation (simply compaction) affects $\langle
R_{ij}\rangle$, and (2) how the velocity dispersion of particles due to shocks
affects $\langle C_{ij}\rangle$.
According to Eq. (19), the coagulation rate $\langle R_{ij}\rangle$ is
determined by $g(a,r)$ and $|\bm{w}_{ij}|$, as discussed in Section 4.
Particles tend to stay in regions where the gas density is high due to shock-
induced compaction. With larger ${\cal M}_{\rm rms}$, the gas density
fluctuations become stronger. This leads to somewhat higher concentrations of
particles, especially for particles with $\alpha\leq 0.3$. It is important to
note that particles accumulate in low ${\cal M}\left(\bm{x},t\right)$ regions,
where the gas density is high because particles are pushed to lower ${\cal
M}\left(\bm{x},t\right)$ regions by the shocks. Supersonic flows cover a wide
range of ${\cal M}\left(\bm{x},t\right)$, which results in stronger density
variations of particles. Local concentrations of particles might lead to
higher $\langle R_{ij}\rangle$. Therefore the enhanced $\langle R_{ij}\rangle$
with increasing ${\cal M}_{\rm rms}$ is indeed due to the change in the flow
structure. In addition to the compressibility, fractal clustering due to
inertia effect (large St) of particles might also enhance $\langle
C_{ij}\rangle$.
Higher ${\cal M}_{\rm rms}$ results in stronger shocks and thus higher
particle velocities, which also leads to larger $\langle C_{ij}\rangle$. In
particular, for supersonic flows, the local fluctuations of ${\cal
M}\left(\bm{x},t\right)$ are strong. This leads to significant local
dispersion in the particle velocities and a consequent enhancement of $\langle
C_{ij}\rangle$. The coagulation rate time series almost collapse on top of
each other when normalised by $v_{\rm p,rms}/c_{\rm s}$ up to the supersonic
regime. This indicates that the simple description of the collision kernel,
Eq. (23), applies up to the supersonic regime. As particles grow larger in the
simulation, they decouple from the flow. Statistically, this decoupling can be
roughly described by the difference between $u_{\rm rms}$ and $v_{\rm p,rms}$.
This probably causes these curves to collapse on each other when $\langle
C_{ij}\rangle$ is normalised by $v_{\rm p,rms}/c_{\rm s}$. We therefore
propose that $\langle C_{ij}\rangle$ is proportional to $v_{\rm p,rms}/c_{\rm
s}$ instead of ${\cal M}_{\rm rms}$ in Eq. (23).
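As a minimal sketch of this proposed normalisation (the run labels, kernel values, and particle velocities below are hypothetical placeholders rather than simulation output), each kernel time series is rescaled by its own $v_{\rm p,rms}/c_{\rm s}$:

```python
import numpy as np

c_s = 1.0  # sound speed in code units (assumed)
# Hypothetical kernel time series and rms particle velocities for three runs;
# real values would come from the simulation output.
runs = {
    "subsonic":   {"C_ij": np.array([0.50, 0.55, 0.60]), "v_p_rms": 0.45},
    "transonic":  {"C_ij": np.array([1.00, 1.10, 1.20]), "v_p_rms": 0.90},
    "supersonic": {"C_ij": np.array([2.00, 2.20, 2.40]), "v_p_rms": 1.80},
}
for name, run in runs.items():
    # Dividing by v_p,rms / c_s collapses the (here proportional) series
    print(name, run["C_ij"] / (run["v_p_rms"] / c_s))
```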
Since the inertial range is determined by ${\rm Re}$, we also examined how
$\langle C_{ij}\rangle$ depends on ${\rm Re}$. As shown in Fig. 10, the
$\rm{Re}$ dependence is weak because $\tau_{i}\ll\tau_{L}$. Pan & Padoan
(2014) have suggested that the collision kernel is independent of $\rm{Re}$ in
the subsonic regime. We show here that this also appears to apply to the
transonic regime and likely also the supersonic regime. As we discussed in
section 2.1, for incompressible turbulence, $\langle C_{ij}\rangle$ is
determined by $\langle\bar{\epsilon}\rangle$ through $\tau_{\eta}$. Fig. 10 shows, however, that the $\langle\bar{\epsilon}\rangle$ dependence observed in incompressible flows vanishes in compressible flows, which is quite expected. We demonstrated that the coagulation rate of inertial particles is
mainly affected by ${\cal M}_{\rm rms}$, essentially the level of compression
of the flow. We conclude that ${\cal M}_{\rm rms}$ is the main parameter
determining $\langle C_{ij}\rangle$ in the trans- and supersonic regimes.
The pioneering work of Saffman & Turner (1956) suggested that $\langle
C_{i}\rangle\sim\tau_{\eta}^{-1}$ for specified sizes of particles. Because
$\tau_{\eta}=(\nu/\langle\bar{\epsilon}\rangle)^{1/2}$, $\langle
C_{i}\rangle\sim\langle\bar{\epsilon}\rangle^{1/2}$, which has been confirmed
in many studies of incompressible turbulence. More importantly, all simulation
works have found that $\langle C_{i}\rangle$ is independent of Re (see
Grabowski & Wang, 2013; Li et al., 2018, and the references therein). This is
contradictory, however, because $\langle\bar{\epsilon}\rangle$ is a parameter
that is determined by Re. We argue that this is because of the forcing term in
the N-S equation, which is inevitable in turbulence simulations. This forcing
term invokes a third dimensional parameter $\langle\bar{\epsilon}\rangle$ that
determines $\langle C_{i}\rangle$ in incompressible turbulence. For
compressible turbulence, however, our study showed that $\langle C_{i}\rangle$
is determined by ${\cal M}_{\rm rms}$ alone and independent of
$\langle\bar{\epsilon}\rangle$. We can then avoid the contradiction described
above. However, this contradiction is a more fundamental problem that requires
a solution, but this is beyond the scope of our study. In short, compressible
turbulence implies that $\langle\bar{\epsilon}\rangle$ is not only determined
by Re, but also by ${\cal M}_{\rm rms}$. Even though
$\langle\bar{\epsilon}\rangle$ does not affect $\langle C_{i}\rangle$
directly, it affects the Stokes number ${\rm St}=\tau_{\rm i}/\tau_{\eta}$.
Therefore the $\langle C_{i}\rangle$ time series collapse on top of each other
when they are normalised by $\tau_{\eta}$ in Fig. 5(c).
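A one-line numerical illustration of this scaling (with arbitrary, non-astrophysical values for $\nu$ and $\langle\bar{\epsilon}\rangle$) makes the $\langle\bar{\epsilon}\rangle^{1/2}$ dependence of a Saffman-Turner-like kernel explicit:

```python
import numpy as np

nu = 1e-3                      # kinematic viscosity, arbitrary code units
for eps in (0.1, 0.2, 0.4):    # mean dissipation rate <eps>
    tau_eta = np.sqrt(nu / eps)         # tau_eta = (nu / <eps>)^(1/2)
    print(eps, tau_eta, 1.0 / tau_eta)  # <C_i> ~ 1/tau_eta ~ <eps>^(1/2)
```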
Andersson et al. (2007) proposed that in 2D compressible flow with a Gaussian
random velocity, fractal particle (inertialess) clustering can lead to a
higher coagulation rate. In addition to the aforementioned assumptions, the
collision rate suggested in Andersson et al. (2007) involves the fractal
dimension $D_{2}$, which is difficult to measure in our case because we have
an evolving size distribution of particles. A direct comparison between our
simulation and the theory of Andersson et al. (2007) is therefore not
feasible. Nevertheless, we do observe fractal clustering in our simulation,
which could indeed elevate the coagulation rate because Fig. 6 shows that high
${\cal M}_{\rm rms}$ flow results in higher particle velocities. Because we
have a wide range of Stokes numbers, the fractal clustering (Falkovich et al.,
2001) could also be enhanced.
## 6 Summary and conclusion
Coagulation of inertial particles in compressible turbulence was investigated
by direct and shock-capturing numerical simulations. Particle interaction was
tracked dynamically in a Lagrangian manner, and the consequential coagulation
events were counted at each simulation time step. We specifically explored the
Mach-number dependence of the coagulation rate and the effects on the widening
of the particle-size distribution. To our knowledge, this is the first time
that this has been done.
We showed that the coagulation rate is determined by Mach number ${\cal
M}_{\rm rms}$ in compressible turbulence. This is fundamentally different from
the incompressible case, where the coagulation rate is mainly determined by
$\langle\bar{\epsilon}\rangle$ through the Kolmogorov timescale.
The dispersion or variance of $f(a,t)$, $\sigma_{a}$, increases with
increasing ${\cal M}_{\rm rms}$. We also note that $\sigma_{a}$ is a simple
and more or less universal function of the size of the largest particles,
measured by $a_{24}$, and is apparently independent of ${\cal M}_{\rm rms}$.
All effects on coagulation increase progressively with ${\cal M}_{\rm rms}$,
which shows the importance of compressibility for coagulation processes. Taken
at face value, our simulations appear to suggest that existing theories of the
${\cal M}_{\rm rms}$ dependence of $\langle C_{ij}\rangle$, which imply an
underlying linear scaling with ${\cal M}_{\rm rms}$, are correct to first
order, but we cannot draw any firm conclusions at this point. For this we will
need more simulations with a wider range of ${\cal M}_{\rm rms}$ values. We
note that $\langle C_{ij}\rangle$ scales as $\langle
C_{ij}\rangle\sim(a_{i}+a_{j})^{3}\,\mathcal{M}_{\rm rms}/\tau_{\eta}$. When
the collision kernel $\langle C_{ij}\rangle$ is normalised by $v_{\rm
p,rms}/c_{\rm s}$, the curves collapse on top of each other. We therefore
suggest that $\langle C_{ij}\rangle$ is proportional to $v_{\rm p,rms}/c_{\rm
s}$, rather than $\mathcal{M}_{\rm rms}$. This finding may serve as a
benchmark for future studies of coagulation of dust grains in highly
compressible turbulence.
We propose two mechanisms that might be behind the ${\cal M}_{\rm rms}$
dependence of the broadening of $f(a,t)$ even though it is still not fully
understood due to the non-equilibrium nature of the coagulation process in
compressible turbulence. The first mechanism is the compaction-induced
concentration of particles. Supersonic flow exhibits stronger fluctuations of
local ${\cal M}(\bm{x},t)$. The consequent vigorous shocks compact small
particles (e.g. $\alpha<0.3$) into low-${\cal M}(\bm{x},t)$ regions. This
leads to high densities of particles ($n_{i}$) and then potentially to a
higher coagulation rate $\langle R_{ij}\rangle$. The second mechanism is
larger dispersion of particle velocities caused by stronger shocks. Again,
stronger local fluctuations ${\cal M}(\bm{x},t)$ lead to a larger dispersion
of particle velocities, which increases the coagulation rate.
Simulating the coagulation problem in compressible and supersonic turbulence,
we achieved ${\cal M}_{\rm rms}=2.58$, but with a non-astrophysical scaling.
This is smaller than the ${\cal M}_{\rm rms}\geq 10$ observed in cold clouds.
To explore whether a saturation limit of the ${\cal M}_{\rm rms}$ dependence
of the coagulation rate exists, a direct numerical simulation coupled with
coagulation would have to reach at least ${\cal M}_{\rm rms}\sim 10$.
We also note that the simulated systems in our study have flow timescales
(turn-over times) that are of the same order as the coagulation timescale,
that is, $\tau_{\rm c}/\tau_{L}<1$, which is computationally convenient, but
very different from dust in the ISM, for example, where $\tau_{c}/\tau_{L}\gg
1$. Nonetheless, our study provides a benchmark for simulations of dust-grain
growth by coagulation in the ISM and other dilute astrophysical environments.
Reaching very high ${\cal M}_{\rm rms}$, and astrophysical scales in general, is currently being explored. Fragmentation is also omitted in this study,
which may overestimate the coagulation rate. Adding fragmentation is a topic
for future work.
## Acknowledgement
Xiang-Yu Li wishes to thank Axel Brandenburg, Nils Haugen, and Anders Johansen
for illuminating discussions about the simulation code used in this study, the
Pencil Code. Lars Mattsson wishes to thank the Swedish Research Council
(Vetenskapsrådet, grant no. 2015-04505) for financial support. We thank the
referee for very helpful suggestions to improve the manuscript. Our
simulations were performed using resources provided by the Swedish National
Infrastructure for Computing (SNIC) at the Royal Institute of Technology in
Stockholm, Linköping University in Linköping, and Chalmers Centre for
Computational Science and Engineering (C3SE) in Gothenburg. The Pencil Code is
freely available on https://github.com/pencil-code/. The authors also thank
the anonymous reviewer for constructive comments on the paper.
## References
* Abrahamson (1975) Abrahamson, J. 1975, Chemical Engineering Science, 30, 1371
* Aldous (1999) Aldous, D. J. 1999, Bernoulli, 5, 3
* Andersson et al. (2007) Andersson, B., Gustavsson, K., Mehlig, B., & Wilkinson, M. 2007, EPL (Europhysics Letters), 80, 69001
* Armitage (2010) Armitage, P. J. 2010, Astrophysics of planet formation (Cambridge University Press)
* Bec (2003) Bec, J. 2003, Physics of Fluids, 15, L81
* Bec (2005) Bec, J. 2005, Journal of Fluid Mechanics, 528, 255
* Bec et al. (2007a) Bec, J., Biferale, L., Cencini, M., et al. 2007a, Phys. Rev. Letters, 98, 084502
* Bec et al. (2007b) Bec, J., Cencini, M., & Hillerbrand, R. 2007b, Phys. Rev. E, 75, 025301
* Bhatnagar et al. (2018) Bhatnagar, A., Gustavsson, K., & Mitra, D. 2018, Phys. Rev. E, 97, 023105
* Bird (1978) Bird, G. 1978, Annu. Rev. Fluid Mech., 10, 11
* Bird (1981) Bird, G. 1981, Progress in Astronautics and Aeronautics, 74, 239
* Birnstiel et al. (2016) Birnstiel, T., Fang, M., & Johansen, A. 2016, Space Science Reviews, 205, 41
* Bourdin (2020) Bourdin, P.-A. 2020, Geophys. Astrophys. Fluid Dynam., 114, 235
* Brandenburg (2001) Brandenburg, A. 2001, Astrophys. J., 550, 824
* Brandenburg & Dobler (2002) Brandenburg, A. & Dobler, W. 2002, Comput. Phys. Commun., 147, 471
* Brandenburg et al. (2020) Brandenburg, A., Johansen, A., Bourdin, P., et al. 2020, arXiv preprint arXiv:2009.08231
* Draine & Salpeter (1979) Draine, B. T. & Salpeter, E. E. 1979, ApJ, 231, 438
* Eaton & Fessler (1994) Eaton, J. & Fessler, J. 1994, International Journal of Multiphase Flow, 20, 169
* Elmegreen & Scalo (2004) Elmegreen, B. G. & Scalo, J. 2004, Annu. Rev. Astron. Astrophys., 42, 211
* Falkovich et al. (2002) Falkovich, G., Fouxon, A., & Stepanov, M. 2002, Nature, 419, 151
* Falkovich et al. (2001) Falkovich, G., Gawedzki, K., & Vergassola, M. 2001, Reviews of modern Physics, 73, 913
* Federrath et al. (2009) Federrath, C., Klessen, R. S., & Schmidt, W. 2009, ApJ, 692, 364
* Federrath et al. (2010) Federrath, C., Roman-Duval, J., Klessen, R. S., Schmidt, W., & Mac Low, M.-M. 2010, A&A, 512, A81
* Grabowski & Wang (2013) Grabowski, W. W. & Wang, L.-P. 2013, Annu. Rev. Fluid Mech., 45, 293
* Gustavsson & Mehlig (2014) Gustavsson, K. & Mehlig, B. 2014, J. Turbulence, 15, 34
* Gustavsson & Mehlig (2016) Gustavsson, K. & Mehlig, B. 2016, Advances in Physics, 65, 1
* Haugen et al. (2004) Haugen, N. E. L., Brandenburg, A., & Mee, A. J. 2004, MNRAS, 353, 947
* Hedvall & Mattsson (2019) Hedvall, R. & Mattsson, L. 2019, RNAAS, 3, 82
* Hirashita (2010) Hirashita, H. 2010, MNRAS, 407, L49
* Hirashita et al. (2014) Hirashita, H., Asano, R. S., Nozawa, T., Li, Z.-Y., & Liu, M.-C. 2014, Planet. Space Sci., 100, 40
* Hirashita & Yan (2009) Hirashita, H. & Yan, H. 2009, MNRAS, 394, 1061
* Hopkins & Lee (2016) Hopkins, P. F. & Lee, H. 2016, MNRAS, 456, 4174
* Johansen & Lambrechts (2017) Johansen, A. & Lambrechts, M. 2017, Annual Review of Earth and Planetary Sciences, 45, 359
* Johansen et al. (2012) Johansen, A., Youdin, A. N., & Lithwick, Y. 2012, Astron. Astroph., 537, A125
* Jorgensen et al. (1983) Jorgensen, W. L., Chandrasekhar, J., Madura, J. D., Impey, R. W., & Klein, M. L. 1983, J. Chem. Phys., 79, 926
* Klett & Davis (1973) Klett, J. & Davis, M. 1973, Journal of the Atmospheric Sciences, 30, 107
* Kwok (1975) Kwok, S. 1975, ApJ, 198, 583
* Li et al. (2017) Li, X.-Y., Brandenburg, A., Haugen, N. E. L., & Svensson, G. 2017, J. Adv. Modeling Earth Systems, 9, 1116
* Li et al. (2020) Li, X.-Y., Brandenburg, A., Svensson, G., et al. 2020, Journal of the Atmospheric Sciences, 77, 337
* Li et al. (2018) Li, X.-Y., Brandenburg, A., Svensson, G., et al. 2018, Journal of the Atmospheric Sciences, 75, 3469
* Mattsson (2011) Mattsson, L. 2011, MNRAS, 414, 781
* Mattsson (2016) Mattsson, L. 2016, P&SS, 133, 107
* Mattsson et al. (2019) Mattsson, L., Bhatnagar, A., Gent, F. A., & Villarroel, B. 2019, MNRAS, 483, 5623
* Mattsson et al. (2019) Mattsson, L., Fynbo, J. P. U., & Villarroel, B. 2019, MNRAS, 490, 5788
* Mattsson & Hedvall (2021) Mattsson, L. & Hedvall, R. 2021, MNRAS, in prep.
* Maxey (1987) Maxey, M. R. 1987, Journal of Fluid Mechanics, 174, 441–465
* Pan & Padoan (2013) Pan, L. & Padoan, P. 2013, ApJ, 776, 12
* Pan & Padoan (2014) Pan, L. & Padoan, P. 2014, ApJ, 797, 101
* Pan & Padoan (2015) Pan, L. & Padoan, P. 2015, ApJ, 812, 10
* Pan et al. (2014a) Pan, L., Padoan, P., & Scalo, J. 2014a, ApJ, 791, 48
* Pan et al. (2014b) Pan, L., Padoan, P., & Scalo, J. 2014b, ApJ, 792, 69
* Pan et al. (2011) Pan, L., Padoan, P., Scalo, J., Kritsuk, A. G., & Norman, M. L. 2011, ApJ, 740, 6
* Passot & Vázquez-Semadeni (1998) Passot, T. & Vázquez-Semadeni, E. 1998, Phys. Rev. E, 58, 4501
* Price et al. (2011) Price, D. J., Federrath, C., & Brunt, C. M. 2011, ApJL, 727, L21
* Procacia et al. (1983) Procacia, I. et al. 1983, Physica D, 9, 189
* Reade & Collins (2000) Reade, W. C. & Collins, L. R. 2000, Physics of Fluids, 12, 2530
* Rowlands et al. (2014) Rowlands, K., Gomez, H. L., Dunne, L., et al. 2014, MNRAS, 441, 1040
* Saffman & Turner (1956) Saffman, P. G. & Turner, J. S. 1956, J. Fluid Mech., 1, 16
* Schaaf (1963) Schaaf, S. A. 1963, Handbuch der Physik, 3, 591
* Smoluchowski (1916) Smoluchowski, M. V. 1916, Zeitschrift fur Physik, 17, 557
* Squires & Eaton (1991) Squires, K. D. & Eaton, J. K. 1991, Physics of Fluids A: Fluid Dynamics, 3, 1169
* Sundaram & Collins (1997) Sundaram, S. & Collins, L. R. 1997, Journal of Fluid Mechanics, 335, 75–109
* Valiante et al. (2011) Valiante, R., Schneider, R., Salvadori, S., & Bianchi, S. 2011, MNRAS, 416, 1916
* Voßkuhle et al. (2014) Voßkuhle, M., Pumir, A., Lévêque, E., & Wilkinson, M. 2014, Journal of fluid mechanics, 749, 841
* Wang et al. (2000) Wang, L.-P., Wexler, A. S., & Zhou, Y. 2000, Journal of Fluid Mechanics, 415, 117
* Wilkinson & Mehlig (2005) Wilkinson, M. & Mehlig, B. 2005, EPL (Europhysics Letters), 71, 186
* Wilkinson et al. (2006) Wilkinson, M., Mehlig, B., & Bezuglyy, V. 2006, Phys. Rev. Lett., 97, 048501
* Yavuz et al. (2018) Yavuz, M. A., Kunnen, R. P. J., van Heijst, G. J. F., & Clercx, H. J. H. 2018, Physical Review Letters, 120, 244504
* Zsom & Dullemond (2008) Zsom, A. & Dullemond, C. P. 2008, Astron. Astrophys., 489, 931
## Appendix A Coagulation kernel: An alternative expression
This appendix presents a discussion of an alternative description of the
coagulation rate and the underlying mechanisms. Eq. (19) appears to be a
natural expression for the collision kernel between particle pairs. However,
it is not a physically complete and always representative model (Wilkinson et
al., 2006). Two mechanisms contribute to the collision kernel $C_{ij}$ due to
particle inertia: clustering and caustics. The former is characterised by
$g(r)\propto r^{D_{2}-d}$ at a fixed Stokes number, where $g$ is the radial
distribution function, $d$ is the spatial dimension, and $D_{2}$ is the
correlation dimension (Reade & Collins, 2000; Procacia et al., 1983). The
latter is the effect of singularities in the particle phase space at non-zero
values of the Stokes number (Falkovich et al., 2002; Wilkinson et al., 2006;
Gustavsson & Mehlig, 2014). It appears when phase-space manifolds fold over.
In the fold region, the velocity field at a given point in space becomes
multi-valued, allowing for large velocity differences between nearby
particles, which results in a temporarily increased particle-interaction rate
and more efficient coagulation (Gustavsson & Mehlig, 2014). As indicated
above, the relative velocity between two colliding pairs obeys a power law
$|\langle w_{ij}\rangle|\propto(r_{i}+r_{j})^{d-D_{2}}$. Thus, the product of
$g(r)$ and $|\langle w_{ij}\rangle|$ is independent of $(r_{i}+r_{j})$, that
is, caustics and clustering cancel each other out in this formulation.
Therefore Wilkinson et al. (2006) proposed that $C_{ij}$ is a superposition of
clustering and caustics, which was confirmed by numerical simulations
(Voßkuhle et al., 2014). In section 3.3 of our study we ignored the effects of
caustics mainly because caustics in high $\mathcal{M}_{\rm rms}$ compressible
turbulence are associated with shock interaction and density variance in the
carrier fluid, in which case the resultant increase in particle number density
is a far greater effect than the caustics.
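The cancellation invoked above is purely algebraic, as the following sketch (with an arbitrarily assumed correlation dimension $D_{2}$) makes explicit:

```python
import numpy as np

d, D2 = 3.0, 2.4           # spatial dimension and an assumed correlation dimension
r = np.logspace(-3, 0, 5)  # pair separations, arbitrary units
g = r**(D2 - d)            # clustering: g(r) ~ r^(D2 - d)
w = r**(d - D2)            # relative speed of colliding pairs ~ r^(d - D2)
print(g * w)               # identically 1: the r dependence cancels
```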
# Sum-of-squares decompositions for a family of noncontextuality inequalities and self-testing of quantum devices
Debashis Saha Center for Theoretical Physics, Polish Academy of Sciences,
Aleja Lotników 32/46, 02-668 Warsaw, Poland Rafael Santos Center for
Theoretical Physics, Polish Academy of Sciences, Aleja Lotników 32/46, 02-668
Warsaw, Poland Remigiusz Augusiak Center for Theoretical Physics, Polish
Academy of Sciences, Aleja Lotników 32/46, 02-668 Warsaw, Poland
###### Abstract
Violation of a noncontextuality inequality, or the phenomenon referred to as ‘quantum contextuality’, is a fundamental feature of quantum theory. In this article, we derive a novel family of noncontextuality inequalities along with their sum-of-squares decompositions in the simplest (odd-cycle) sequential-measurement scenario capable of demonstrating Kochen-Specker contextuality. The
sum-of-squares decompositions allow us to obtain the maximal quantum violation
of these inequalities and a set of algebraic relations necessarily satisfied
by any state and measurements achieving it. With their help, we prove that our inequalities can be used for self-testing of a three-dimensional quantum state and measurements. Remarkably, the presented self-testing results rely on
weaker assumptions than the ones considered in Kochen-Specker contextuality.
To realize genuine quantum technologies such as cryptographic systems, quantum
simulators or quantum computing devices, the back-end user should be ensured
that the quantum devices work as specified by the provider. Methods to certify
that a quantum device operates in a nonclassical way are therefore needed. The
most compelling one, developed in the cryptographic context, is self-testing
[MY04]. It exploits nonlocality, i.e., the existence of quantum correlations
that cannot be reproduced by the local-realist models, and provides the
complete form of device-independent characterization of quantum devices only from the statistical data the devices generate (with the requirement of spatial separation between measurements on subsystems, and without any assumption on the internal features of the devices). Thus, it is being extensively
studied in recent years [YVB+14, BP15, CGS17].
However, since self-testing, as defined in Ref. [MY04], stands on nonlocality
[Bel64] (or, in other words, quantum correlations that violate local-realist
inequalities), it is restricted to preparations of composite quantum systems
and local measurements on them. Therefore, it poses a fundamental question:
presuming the minimum features of the devices, how to characterize $(i)$
quantum systems of prime dimension that are not capable of exhibiting nonlocal
correlations, and $(ii)$ quantum systems without entanglement or spatial
separation between subsystems? A possible way to address such instances is to
employ quantum contextuality (Kochen-Specker contextuality), a generalization
of nonlocal correlations obtained from the statistics of commuting
measurements that are performed on a single quantum system [KS75, Cab08,
CSW14, KCBbuS08]. Indeed, recent studies [BRV+19b, IMOK20, BRV+19a] provide
self-testing statements based on contextual correlations (or correlations that
violate noncontextuality inequality). Since quantum contextual correlations
are essential in many aspects of quantum computation [HWVE14, Rau13] and
communication [GHH+14, SHP19], self-testing statements are crucial for
certifying quantum technology [BRV+19a]. Apart from that, it is, nonetheless,
fundamentally interesting to seek the maximum information one can infer about
the quantum devices only from the observed statistics in a contextuality
experiment.
In the context of nonlocality, sum-of-squares (SOS) decomposition of quantum
operators associated with local-realist inequalities has been the key
mathematical tool in recent years to obtain optimal quantum values and self-
testing properties of quantum devices [BP15, ŠASA16, SAT+17, KŠT+19, SSKA19,
ASTA19, Kan19, CMMN19]. Whether this line of study, so far restricted to nonlocal correlations, can be further extended to the contextuality scenario is of great interest from the perspective of a unified approach to non-classical
correlations [CSW14, AC18].
In this work, we consider the Klyachko-Can-Binicioğlu-Shumovsky (KCBS) scenario, which comprises one preparation and $n$ measurements (where $n\geqslant 5$ is odd) [KCBbuS08, AQB+13, LSW11]. This is the simplest scenario capable of exhibiting contextual correlations using a three-dimensional
quantum system and five binary outcome measurements. It also has several
implications in quantum foundations and quantum information [GBC+14, GHH+14, SBA17, Cab13, KanCK14, SR17, XSS+16]. We first introduce a modified version of the KCBS expression for $n=5$, involving correlations between the outcomes of two sequential measurements, along with an SOS decomposition of the respective quantum operator. We describe our methodology to obtain the SOS decomposition and, simultaneously, generalize it to the $n$-cycle KCBS scenario where $n=2^{m}+1$, $m\in\mathbbm{N}$. Interestingly, the SOS decomposition holds even
without the idealizations that the measurement effects are projectors and
satisfy cyclic orthogonality conditions. By virtue of this decomposition, we
obtain the maximum quantum value of our modified $n$-cycle expression and a
set of algebraic relations involving any quantum state and measurements that
yield those maximum values. By solving those relations, we show the existence
of a three-dimensional vector space invariant under the algebra of measurement operators. Subsequently, we prove the uniqueness of the projected three-dimensional measurements and state up to unitary equivalence, that is, the self-testing property of the quantum devices. The presented self-testing relies on the premise that the Kraus operator corresponding to one of the outcomes is uniquely determined by the respective measurement effect, and it
does not assume any other idealizations of the measurements or the dimension
of the preparation.
## 1 Preliminaries
We begin by illustrating our scenario and specifying the assumptions.
Sequential-measurement set-up. Each run of the experimental observation
comprises the preparation of a physical system followed by two measurements in
a sequence using one non-demolishing measurement device as depicted in Fig. 1.
The measurement device has $n$ (odd) different settings, each of which yields
$\pm 1$ outcome. Let’s denote the first and second measurement settings by
$\mathcal{A}_{i}$ and $\mathcal{A}_{j}$ where $i,j\in\\{1,\dots,n\\}$. The
settings are chosen such that $j=i\pm 1$, where from now on the subscript $i$
is taken modulo $n$, that is, $\mathcal{A}_{i\pm n}=\mathcal{A}_{i}$. We first
make the following assumption.
###### Assumption 1.
The measurement device only returns the actual post-measurement state.
This assumption is necessary, otherwise, any quantum statistics can be
reproduced by classical systems.
By repeating this experiment many times we can obtain joint probabilities
$p(a_{i},a_{i\pm 1}|\mathcal{A}_{i},\mathcal{A}_{i\pm 1})$ of two measurements
and single probabilities $p(a_{i}|\mathcal{A}_{i})$ of the first measurement,
and consequently, their correlation functions,
$\displaystyle\langle\mathcal{A}_{i}\mathcal{A}_{i\pm 1}\rangle$
$\displaystyle=$ $\displaystyle\sum_{a_{i},a_{i\pm 1}}a_{i}a_{i\pm
1}p(a_{i},a_{i\pm 1}|\mathcal{A}_{i},\mathcal{A}_{i\pm 1}),$
$\displaystyle\langle\mathcal{A}_{i}\rangle$ $\displaystyle=$
$\displaystyle\sum_{a_{i}}a_{i}p(a_{i}|\mathcal{A}_{i}),$ (1)
where the measurement outcomes are denoted as $a_{i}=\pm 1$.
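As a minimal sketch, these quantities can be estimated from the recorded outcome counts of one setting pair; the counts below are hypothetical placeholders:

```python
# Hypothetical outcome counts for the setting pair (A_i, A_{i+1})
counts = {(+1, +1): 120, (+1, -1): 380, (-1, +1): 350, (-1, -1): 150}
total = sum(counts.values())
p = {outcomes: c / total for outcomes, c in counts.items()}

corr = sum(a * b * p[(a, b)] for (a, b) in p)                    # <A_i A_{i+1}>
marginal = sum(a * (p[(a, +1)] + p[(a, -1)]) for a in (+1, -1))  # <A_i>
print(corr, marginal)
```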
Figure 1: Sequential-measurement set-up. The simplest contextuality scenario comprises one preparation $\mathcal{P}$ and one measurement device with settings $\mathcal{A}_{i}$, each of which returns a $\pm 1$ outcome.
In quantum theory a two-outcome measurement $\mathcal{A}_{i}$ is represented
by a pair of operators $K_{i},K^{\prime}_{i}$ acting on some finite-
dimensional Hilbert space $\mathcal{H}$, such that $F_{i}\equiv
K^{\dagger}_{i}K_{i}\leqslant\mathbbm{1}$ and
$(K^{\prime}_{i})^{\dagger}K^{\prime}_{i}=\mathbbm{1}-F_{i}$. It is often
convenient to represent such a binary measurement with the aid of the
following operator
$A_{i}=2F_{i}-\mathbbm{1}$ (2)
that acts on $\mathcal{H}$. The preparation is represented by a quantum state
that, without loss of generality, can be considered pure; we denote it by
$|\psi\rangle$.
Kochen-Specker contextuality [CSW14] pertains to the following assumptions:
$(i)$ the measurements are projective, that is, $K_{i},K^{\prime}_{i}$ are
projectors, and $(ii)$ the projectors satisfy certain orthogonality relations,
particularly in this scenario, $F_{i}F_{i\pm 1}=0$ for all $i$, implying
$[A_{i},A_{i\pm 1}]=0$. Such prerequisites about the measurement device are
difficult to justify in practice. Since we aim to characterize the quantum
devices from their minimal features, we do not make these assumptions.
Instead, we consider a single assumption as given below, which is even weaker
than the requirement of projectivity. We will see later that projectivity and the orthogonality relations between measurement effects are facts derived from the maximal violation of our inequality.
###### Assumption 2.
The measurements are realized in a particular way such that $K_{i}$ are
Hermitian or, equivalently, $K_{i}=\sqrt{F_{i}}$, for all $i$.
In other words, whenever the measurement outcome is ‘+’ after measuring
$A_{i}$ (2) the post-measurement state reduces according to the rule of Lüders
instrument, that is,
$|\psi\rangle\rightarrow\frac{\sqrt{F_{i}}|\psi\rangle}{\sqrt{\langle\psi|F_{i}|\psi\rangle}}.$
Although in general $K_{i}=U\sqrt{F_{i}}$ for some unitary $U$, such a rule for the post-measurement evolution appears naturally for unsharp measurements [Bus86] and in other physical scenarios [BS98, PK98].
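The following sketch (our own helper functions, assuming only numpy) shows how sequential-measurement probabilities arise under Assumption 2, with Kraus operators $\sqrt{F}$ and $\sqrt{\mathbbm{1}-F}$ for the two outcomes:

```python
import numpy as np

def psd_sqrt(F):
    """Square root of a positive semidefinite matrix via eigendecomposition."""
    w, U = np.linalg.eigh(F)
    return (U * np.sqrt(np.clip(w, 0, None))) @ U.conj().T

def sequential_probs(psi, F_first, F_second):
    """p(a, b) for two sequential binary measurements under Assumption 2."""
    d = len(psi)
    K = {+1: psd_sqrt(F_first), -1: psd_sqrt(np.eye(d) - F_first)}
    L = {+1: psd_sqrt(F_second), -1: psd_sqrt(np.eye(d) - F_second)}
    return {(a, b): float(np.linalg.norm(L[b] @ K[a] @ psi) ** 2)
            for a in (+1, -1) for b in (+1, -1)}

# Example: orthogonal rank-1 effects give p(+,+) = 0
psi = np.array([0.6, 0.8, 0.0])
e1, e2 = np.eye(3)[0], np.eye(3)[1]
print(sequential_probs(psi, np.outer(e1, e1), np.outer(e2, e2)))
```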
A general linear expression that can be considered to test nonclassicality (or
noncontextuality in the usual scenario) in this set-up is given by,
$\mathcal{B}=\sum_{i}c_{i}(\langle\mathcal{A}_{i}\mathcal{A}_{i+1}\rangle+\langle\mathcal{A}_{i+1}\mathcal{A}_{i}\rangle)+\sum_{i}d_{i}\langle\mathcal{A}_{i}\rangle.$
(3)
The optimal quantum value of the above expression is defined as
$\eta^{Q}=\sup_{|\psi\rangle,A_{i}}\langle\psi|B|\psi\rangle,$ (4)
where $B=\sum_{i}c_{i}\\{A_{i},A_{i+1}\\}+\sum_{i}d_{i}A_{i}$ is the quantum
operator associated with the expression $\mathcal{B}$ and $A_{i}$ are of the
form (2). Notice that in the usual scenario, due to commutativity relations,
$\\{A_{i},A_{i+1}\\}$ can be replaced by $2A_{i}A_{i+1}$. The maximal
classical value $\eta^{C}$ (or noncontextual value in the usual scenario) is defined as
$\eta^{C}=\max_{a_{i}\in\\{1,-1\\}}\left\\{2\sum_{i}c_{i}a_{i}a_{i+1}+\sum_{i}d_{i}a_{i}\right\\}.$
(5)
Since any noncontextual value assignment pertains to projective measurements with certain orthogonality conditions, we refer here to $\eta^{C}$ as the classical value for the relaxed scenario. Indeed, under Assumption 1 the optimal value in classical theory, or in any other theory where measurement does not affect the system, is given by Eq. (5); in the ideal scenario $\eta^{C}$ reduces to the noncontextual value.
KCBS inequality. The well-known $n$-cycle KCBS noncontextuality inequality
[AQB+13] is of the form
$\mathcal{B}_{\mathrm{KCBS}}:=-\sum^{n}_{i=1}\langle\mathcal{A}_{i}\mathcal{A}_{i+1}\rangle\leqslant\eta^{C}=n-2.$
(6)
The maximal quantum violation of this inequality is
$\eta^{Q}=\frac{3\cos{(\pi/n)}-1}{1+\cos{(\pi/n)}}n$ (7)
and it is achieved by the following quantum state
$|\widehat{\psi}\rangle=|0\rangle\equiv(1,0,0)^{T},$ (8)
and observables
$\widehat{A}_{i}=2|\widehat{v}_{i}\rangle\\!\langle\widehat{v}_{i}|-\mathbbm{1},$
(9)
where $|\widehat{v}_{i}\rangle$ are three-dimensional real vectors defined as
$|\widehat{v}_{i}\rangle=(\cos{\theta},\sin{\theta}\sin{\phi_{i}},\sin{\theta}\cos{\phi_{i}})^{T}$
(10)
where $\theta$ is defined as $\cos\theta=\sqrt{1/(1+2\alpha)}$, where
$\alpha=\frac{1}{2}\sec\left(\frac{\pi}{n}\right)$ (11)
and
$\phi_{i}=\frac{n-1}{n}\pi i.$ (12)
Note that $\alpha$ and $\phi_{i}$ are functions of $n$, which for the sake of
simplification is not explicitly specified in their notation. Let us also
remark that $|\widehat{\psi}\rangle\in\mathbbm{C}^{3}$ and $\widehat{A}_{i}$
acting on $\mathbbm{C}^{3}$ denote a particular example of quantum
realizations achieving the maximal quantum value of the KCBS inequality (6).
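This realization is easy to verify numerically; the following sketch (numpy only) reconstructs the vectors (10) for $n=5$ and checks both the cyclic orthogonality $\langle\widehat{v}_{i}|\widehat{v}_{i+1}\rangle=0$ and the quantum value (7):

```python
import numpy as np

n = 5
alpha = 0.5 / np.cos(np.pi / n)                      # Eq. (11)
theta = np.arccos(np.sqrt(1.0 / (1.0 + 2 * alpha)))  # cos(theta) = (1 + 2 alpha)^(-1/2)
phis = [(n - 1) * np.pi * i / n for i in range(1, n + 1)]  # Eq. (12)

v = [np.array([np.cos(theta),
               np.sin(theta) * np.sin(p),
               np.sin(theta) * np.cos(p)]) for p in phis]  # Eq. (10)
A = [2 * np.outer(vi, vi) - np.eye(3) for vi in v]         # Eq. (9)
psi = np.array([1.0, 0.0, 0.0])                            # Eq. (8)

print(max(abs(v[i] @ v[(i + 1) % n]) for i in range(n)))   # ~1e-16
kcbs = -sum(psi @ A[i] @ A[(i + 1) % n] @ psi for i in range(n))
eta_q = n * (3 * np.cos(np.pi / n) - 1) / (1 + np.cos(np.pi / n))  # Eq. (7)
print(kcbs, eta_q)  # both ~3.944 for n = 5
```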
The self-testing properties of the above-mentioned state and measurements
based on the violation of the KCBS inequality are shown in [BRV+19b]. The proof is
based on the optimization method of semidefinite programming under the usual
assumptions of contextuality.
Sum-of-squares decomposition. Let us finally discuss the concept of sum-of-
squares decompositions. Consider a quantum operator $B$ corresponding to some
noncontextuality expression $\mathcal{B}$ like the one in (4). Now, if for any
choice of quantum measurements $A_{i}$ and some $\eta\in\mathbbm{R}$ one can
decompose the shifted operator $\eta\mathbbm{1}-B$ as
$\eta\mathbbm{1}-B=\sum_{k}E^{\dagger}_{k}E_{k},$ (13)
the maximal quantum value of $\mathcal{B}$ is upper bounded by $\eta$, i.e.,
$\langle\psi|B|\psi\rangle\leqslant\eta$ for any quantum state $|\psi\rangle$.
We call (13) a sum-of-squares decomposition associated to $B$. Typically
$E_{k}$ are constructed from the measurement operators $A_{i}$. The bound
$\eta$ is realized by a state and a set of measurements if and only if the
following algebraic relation holds true for all $k$,
$\quad E_{k}|\psi\rangle=0.$ (14)
Our self-testing proofs heavily rely on the above relations.
Let us remark that Ref. [LSW11] provides an SOS decomposition for the
conventional KCBS operator under the assumptions that the measurements $A_{i}$
are projective and satisfy $[A_{i},A_{i\pm 1}]=0$. In what follows we derive
an alternative noncontextuality inequality together with the corresponding SOS
decomposition of the form (13) which does not require making these
assumptions. Furthermore, our SOS is designed in such a way that the algebraic
relations (14) it implies can be used for self-testing.
## 2 Modified KCBS inequality with sum-of-squares decomposition
We are now ready to present our results. For pedagogical purposes we begin
with the simplest case of $n=5$ and consider the following modified KCBS
expression
$\mathcal{B}=-\frac{1}{2}\sum^{5}_{i=1}(\langle\mathcal{A}_{i}\mathcal{A}_{i+1}\rangle+\langle\mathcal{A}_{i+1}\mathcal{A}_{i}\rangle)-\alpha^{2}\sum^{5}_{i=1}\langle\mathcal{A}_{i}\rangle,$
(15)
where $\alpha$ is given in (11) with $n=5$. Following (5), it is not difficult to find that the maximal classical value of $\mathcal{B}$ is $\eta^{C}=3+\alpha^{2}$.
###### Result 1 (Modified KCBS inequality with SOS).
The maximal quantum value of $\mathcal{B}$ given in Eq. (15) with $\alpha=(1/2)\sec(\pi/5)$ is $\eta^{Q}=3(1+\alpha^{2})$.
###### Proof.
To prove this statement we present the SOS decomposition for the modified KCBS
operator
$B=-\frac{1}{2}\sum_{i}\\{A_{i},A_{i+1}\\}-\alpha^{2}\sum_{i}A_{i}.$ (16)
Let us first define the following Hermitian operators for $i=1,\dots,5$,
$\displaystyle M_{i,1}$ $\displaystyle=$
$\displaystyle-\frac{1}{\alpha^{3}}(A_{i}+\alpha A_{i-1}+\alpha A_{i+1}),$
$\displaystyle M_{i,2}$ $\displaystyle=$
$\displaystyle-\frac{1}{\alpha^{4}}(-\alpha A_{i}+A_{i-2}+A_{i+2}),$ (17)
and observe that they satisfy the following relations
$-\frac{\alpha^{5}}{5}\sum_{i}\left(2M_{i,1}+\alpha^{3}M_{i,2}\right)=\alpha^{2}\sum_{i}A_{i},$
(18)
and
$\displaystyle\frac{\alpha^{5}}{5}\sum_{i}\left(M^{2}_{i,1}+\frac{\alpha^{3}}{2}M^{2}_{i,2}\right)$
$\displaystyle=$ $\displaystyle\frac{1}{2}\sum_{i}\\{A_{i},A_{i+1}\\}$ (19)
$\displaystyle+\frac{1}{2\alpha}\sum_{i}A^{2}_{i},$
where we have used the identity $\alpha^{2}+\alpha=1$ for $\alpha$ given in
Eq. (11) with $n=5$. With the aid of these relations it is straightforward to
verify that
$\displaystyle\frac{\alpha^{5}}{5}\sum_{i}\\!\left(\mathbbm{1}-M_{i,1}\right)^{2}\\!+\\!\frac{\alpha^{8}}{10}\sum_{i}\\!\left(\mathbbm{1}-M_{i,2}\right)^{2}\\!+\\!\frac{1}{2\alpha}\sum_{i}\\!\left(\mathbbm{1}-A^{2}_{i}\right)$
$\displaystyle=\left(\alpha^{5}+\frac{\alpha^{8}}{2}+\frac{5}{2\alpha}\right)\mathbbm{1}-\frac{\alpha^{5}}{5}\sum_{i}\left(2M_{i,1}+\alpha^{3}M_{i,2}\right)$
$\displaystyle+\frac{\alpha^{5}}{5}\sum_{i}\left(M^{2}_{i,1}+\frac{\alpha^{3}}{2}M^{2}_{i,2}\right)-\frac{1}{2\alpha}\sum_{i}A^{2}_{i}$
$\displaystyle=3(1+\alpha^{2})\mathbbm{1}-B,$ (20)
where $B$ is given in Eq. (16).
Thus, using the fact that $A^{2}_{i}\leqslant\mathbbm{1}$, the above equation constitutes an SOS decomposition (13) of the modified KCBS operator in which
$E_{k}=\sqrt{\frac{\alpha^{5}}{5}}(\mathbbm{1}-M_{k,1})$ (21)
for $k=1,\ldots,5$;
$E_{k}=\sqrt{\frac{\alpha^{8}}{10}}(\mathbbm{1}-M_{k-5,2})$ (22)
for $k=6,\ldots,10$;
$E_{k}=\sqrt{\frac{1}{2\alpha}\left(\mathbbm{1}-A^{2}_{k-10}\right)}$ (23)
for $k=11,\dots,15$; and $3+3\alpha^{2}\approx 4.146$ is the quantum bound of $B$. One can verify that the three-dimensional state and measurements (8)-(9), responsible for the optimal value of the KCBS inequality, achieve this bound. ∎
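The algebraic relations behind this proof can be checked numerically as well; a minimal sketch using the qutrit realization (8)-(9) (numpy only; the Python list index starts at 0 and negative indices handle the cyclic shifts):

```python
import numpy as np

n = 5
alpha = 0.5 / np.cos(np.pi / n)  # satisfies alpha^2 + alpha = 1 for n = 5
theta = np.arccos(np.sqrt(1.0 / (1.0 + 2 * alpha)))
v = [np.array([np.cos(theta),
               np.sin(theta) * np.sin((n - 1) * np.pi * i / n),
               np.sin(theta) * np.cos((n - 1) * np.pi * i / n)])
     for i in range(1, n + 1)]
A = [2 * np.outer(vi, vi) - np.eye(3) for vi in v]
psi = np.array([1.0, 0.0, 0.0])

# Stabilizing relations (14): M_{i,1}|psi> = |psi> and M_{i,2}|psi> = |psi>
for i in range(n):
    M1 = -(A[i] + alpha * A[i - 1] + alpha * A[(i + 1) % n]) / alpha**3
    M2 = -(-alpha * A[i] + A[i - 2] + A[(i + 2) % n]) / alpha**4
    assert np.allclose(M1 @ psi, psi) and np.allclose(M2 @ psi, psi)

# Modified KCBS operator (16) attains the quantum bound 3(1 + alpha^2)
B = -0.5 * sum(A[i] @ A[(i + 1) % n] + A[(i + 1) % n] @ A[i] for i in range(n)) \
    - alpha**2 * sum(A)
print(psi @ B @ psi, 3 * (1 + alpha**2))  # both ~4.146
```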
Inspired by the above $n=5$ case, let us now derive our modified KCBS
expression for more measurements. Our aim is to obtain a general expression
for which the sum-of-squares decomposition can easily be constructed as the
one in Eq. (20) and later directly used for self-testing.
To reach this goal, let us consider $n$ two-outcome quantum measurements
represented by operators $A_{i}$ (2) acting on some Hilbert space of unknown
but finite dimension. Let us then consider the expression (13) in which the
operators $E_{k}$ are of the form $\mathbbm{1}-M_{k}$ with some positive
multiplicative factors, where $M_{k}$ are constructed from $A_{i}$. Notice
that for such a choice, Eq. (14) implies that $M_{k}$ must be stabilizing
operators of the state $|\psi\rangle$ maximally violating our modified KCBS
expression, that is, $M_{k}|\psi\rangle=|\psi\rangle$. Now, to design the
explicit form of $M_{k}$ we can use the optimal quantum realization (8)-(9) of
the $n$-cycle KCBS inequality (6), which gives us (see Appendix A for details
of the derivation)
$M_{i,k}=\bar{\alpha}\left[\left(1-2\beta_{k}\right)A_{i}+\beta_{k}(A_{i+k}+A_{i-k})\right],$
(24)
where $i=1,\dots,n$ and $k=1,\dots,(n-1)/2$, whereas the coefficients
$\beta_{k}$ and $\bar{\alpha}$ are given by
$\beta_{k}=\frac{1}{2(1-\cos{\phi_{k}})}$ (25)
and
$\bar{\alpha}=\frac{1+2\alpha}{1-2\alpha},$ (26)
where $\alpha$, $\phi_{k}$ are defined in Eqs. (11) and (12), respectively.
Let us remark that $M_{i,k},\bar{\alpha},\beta_{k}$ are all functions of $n$, which for the sake of simplification is not specified explicitly in the notation. Moreover, the operators $M_{i,k}$ defined in (24) act on the unknown finite-dimensional Hilbert space $\mathcal{H}$.
We now go back to the SOS decomposition (13), which is taken to be of the form
$\sum_{i,k}c_{k}\left[\mathbbm{1}-M_{i,k}\right]^{2}+d\sum_{i}\left[\mathbbm{1}-A^{2}_{i}\right]$
(27)
with some non-negative parameters $c_{k},d$ to be determined. By plugging the
expression of $M_{i,k}$ (24) into it and after some rearrangement of indices,
we obtain
$\displaystyle\sum_{i,k}c_{k}\left[\mathbbm{1}-M_{i,k}\right]^{2}+d\sum_{i}\left[\mathbbm{1}-A^{2}_{i}\right]$
$\displaystyle=$
$\displaystyle\left(\sum_{k}c_{k}+d\right)n\mathbbm{1}-\left(2\bar{\alpha}{\sum}\limits_{k}c_{k}\right)\sum_{i}A_{i}$
(28)
$\displaystyle+\left[\bar{\alpha}^{2}\sum_{k}c_{k}\left(1+6\beta_{k}^{2}-4\beta_{k}\right)-d\right]\sum_{i}A^{2}_{i}$
$\displaystyle+\bar{\alpha}^{2}\sum_{i}\left[2c_{1}\beta_{1}\left(1-2\beta_{1}\right)+c_{\frac{n-1}{2}}\beta^{2}_{\frac{n-1}{2}}\right]\\{A_{i},A_{i+1}\\}$
$\displaystyle+\bar{\alpha}^{2}\sum_{i}\sum^{(n-3)/2}_{k=2}\left[2c_{k}\beta_{k}\left(1-2\beta_{k}\right)+c_{f\left(\frac{k}{2}\right)}\beta^{2}_{f\left(\frac{k}{2}\right)}\right]\\{A_{i},A_{i+k}\\},$
where
$\displaystyle f\left(\frac{k}{2}\right)=\begin{cases}k/2,&\text{ if $k$ is
even}\\\ (n-k)/2,&\text{ if $k$ is odd}.\end{cases}$ (29)
We want to choose the coefficients $c_{k}$ so that they are non-negative and
all the anti-commutators $\\{A_{i},A_{i+k}\\}$ vanish except for $k=\pm 1$.
For that purpose we consider $n=2^{m}+1$ for
$m\in\mathbbm{N}\setminus\\{1\\}$. First we take $c_{k}=0$ whenever $k\neq
2^{x}$, where $x=0,\dots,m-1$. It follows from (28) that our requirement is
fulfilled if the following set of equations is satisfied
$2c_{2^{x}}\beta_{2^{x}}\left(1-2\beta_{2^{x}}\right)+c_{2^{x-1}}\beta_{2^{x-1}}^{2}=0$
(30)
for $x=1,\dots,m-1$. The above equation (30) implies for all $x=1,\dots,m-1$
$\displaystyle\frac{c_{2^{x}}}{c_{1}}$ $\displaystyle=$
$\displaystyle\frac{1}{2^{x}}\prod^{x}_{j=1}\frac{\beta_{2^{j-1}}^{2}}{\beta_{2^{j}}\left(2\beta_{2^{j}}-1\right)}$
(31) $\displaystyle=$
$\displaystyle\left(\frac{\beta_{1}}{2^{x}\beta_{2^{x}}}\right)^{2}\prod^{x}_{j=1}\sec(\phi_{2^{j}}).$
Since $\sec(\phi_{2^{j}})$ is positive for all $j$ (note that $\cos{\phi_{2^{j}}}=\cos{(\pi 2^{j}/n)}$ and $0<\pi 2^{j}/n<\pi/2$ for all $j=1,2,\dots,m-1$), $c_{2^{x}}/c_{1}$ is also positive. Now, to provide a
plausible solution of $c_{2^{x}}$, it suffices to choose a positive $c_{1}$.
Due to (30) the remaining anti-commutators in (28) are $\\{A_{i},A_{i+1}\\}$
with a factor
$\bar{\alpha}^{2}\left[2c_{1}\beta_{1}\left(1-2\beta_{1}\right)+c_{2^{m-1}}\beta_{2^{m-1}}^{2}\right].$
(32)
For simplicity we choose this factor to be 1/2 which implies that $c_{1}$ is
such that
$4c_{1}\beta_{1}\left(1-2\beta_{1}\right)+2c_{2^{m-1}}\beta_{2^{m-1}}^{2}=\frac{1}{\bar{\alpha}^{2}}.$
(33)
After substituting $c_{2^{m-1}}$ from Eq. (31), the above gives
$c_{1}=\frac{2^{2m-3}}{\bar{\alpha}^{2}}\frac{1}{2^{2m-1}\beta_{1}\left(1-2\beta_{1}\right)+\beta_{1}^{2}\
\prod\limits^{m-1}_{j=1}\sec(\phi_{2^{j}})}.$ (34)
One can readily verify that $c_{1}$ is positive. Further, we choose
$d=\bar{\alpha}^{2}\
{\sum}\limits_{k}c_{k}\left(1+6\beta_{k}^{2}-4\beta_{k}\right)$ (35)
so that all the $A^{2}_{i}$ terms vanish and $d$ is positive. Finally, due to
(30), (33) and (35), Eq. (28) reads as,
$\sum_{i,k}c_{k}\left[\mathbbm{1}-M_{i,k}\right]^{2}+d\sum_{i}\left[\mathbbm{1}-A^{2}_{i}\right]=\eta_{n}\mathbbm{1}-B_{n},$
(36)
where
$B_{n}=-\frac{1}{2}\sum_{i}\\{A_{i},A_{i+1}\\}-\gamma\sum_{i}A_{i}\ ,$ (37)
$\gamma=-2\bar{\alpha}\sum_{k}c_{k}\ ,$ (38)
and
$\displaystyle\eta_{n}=n\bar{\alpha}^{2}\sum_{k}c_{k}\left(\frac{1}{\bar{\alpha}^{2}}+1+6\beta_{k}^{2}-4\beta_{k}\right),$
(39)
and $c_{k},d,M_{i,k}$ are defined in (31), (34), (35) and (24).
From Eq. (26) we know that $\bar{\alpha}$ is a negative quantity and hence
$\gamma$ is positive. Thus, our modified $n$-cycle KCBS inequality is
$\mathcal{B}_{n}:=-\frac{1}{2}\sum_{i}(\langle\mathcal{A}_{i}\mathcal{A}_{i+1}\rangle+\langle\mathcal{A}_{i+1}\mathcal{A}_{i}\rangle)-\gamma\sum_{i}\langle\mathcal{A}_{i}\rangle\leqslant\eta_{n}^{C}$
(40)
whose quantum bound is $\eta_{n}$ (39) and the classical value $\eta^{C}_{n}$
is provided in Result 3. It follows from the construction of the SOS (36) that
the qutrit quantum state and measurements defined in Eqs. (8)-(12) satisfy the
stabilizing relations $M_{i,k}|\psi\rangle=|\psi\rangle$ and
$A_{i}^{2}|\psi\rangle=|\psi\rangle$, implying the bound $\eta_{n}$ is tight,
or, in other words, the maximal quantum value of (40) equals $\eta_{n}$.
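A small routine (our own, numpy only) that evaluates Eqs. (31), (34), (35), (38) and (39) makes the construction concrete; for $m=2$ it reproduces the $n=5$ values $\gamma=\alpha^{2}\approx 0.382$ and $\eta_{5}=3(1+\alpha^{2})\approx 4.146$:

```python
import numpy as np

def sos_data(m):
    """Coefficients and bounds for n = 2^m + 1, following Eqs. (31)-(39)."""
    n = 2**m + 1
    alpha = 0.5 / np.cos(np.pi / n)                    # Eq. (11)
    abar = (1 + 2 * alpha) / (1 - 2 * alpha)           # Eq. (26)
    phi = lambda k: (n - 1) * np.pi * k / n            # Eq. (12)
    beta = lambda k: 1.0 / (2 * (1 - np.cos(phi(k))))  # Eq. (25)
    sec = lambda j: 1.0 / np.cos(phi(2**j))

    c = {1: 2**(2 * m - 3) / abar**2
            / (2**(2 * m - 1) * beta(1) * (1 - 2 * beta(1))
               + beta(1)**2 * np.prod([sec(j) for j in range(1, m)]))}  # Eq. (34)
    for x in range(1, m):                              # Eq. (31)
        k = 2**x
        c[k] = c[1] * (beta(1) / (k * beta(k)))**2 \
                    * np.prod([sec(j) for j in range(1, x + 1)])

    d = abar**2 * sum(ck * (1 + 6 * beta(k)**2 - 4 * beta(k))
                      for k, ck in c.items())          # Eq. (35)
    gamma = -2 * abar * sum(c.values())                # Eq. (38)
    eta = n * abar**2 * sum(ck * (1 / abar**2 + 1 + 6 * beta(k)**2 - 4 * beta(k))
                            for k, ck in c.items())    # Eq. (39)
    return n, gamma, d, eta, n + gamma - 2             # last entry: eta^C (Result 3)

for m in (2, 3, 4):
    print(sos_data(m))  # m = 2: n = 5, gamma ~ 0.382, d ~ 0.809, eta ~ 4.146
```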
To put the above mathematical analysis in a nutshell, the expression of the noncontextuality inequality (40) is derived such that it admits an SOS decomposition of the form (13). This leads us to the following result.
###### Result 2 (Modified $n$-cycle expression with SOS).
The maximum quantum value of the modified $n$-cycle noncontextuality expression (40), which admits the SOS decomposition (36), is $\eta_{n}$ given in (39) (where $n=2^{m}+1$, $m\in\mathbbm{N}\setminus\\{1\\}$).
Let us finally prove the classical bound of our new noncontextuality
expression.
###### Result 3 (Maximal classical value).
The classical value of $\mathcal{B}_{n}$ in Eq. (40) is given by $n+\gamma-2$.
###### Proof.
The classical value can be obtained by assigning $\pm 1$ values to the
observables appearing in (40), that is,
$\eta_{n}^{C}=\max_{a_{i}\in\\{1,-1\\}}\left\\{-\sum^{n}_{i=1}a_{i}a_{i+1}-\gamma\sum^{n}_{i=1}a_{i}\right\\},$
(41)
where $\gamma$ is positive. Let us say that in the optimal assignment there are $k$ values $a_{i}$ equal to $-1$. We first assume $k>n/2$. With $k$ values equal to $-1$ and $n-k$ values equal to $+1$, the minimum value of $\sum_{i}a_{i}a_{i+1}$ is $4k-3n$, while $\sum_{i}a_{i}=n-2k$. Substituting these values in (41) we see
$\eta_{n}^{C}=\left(3-\gamma\right)n-\left(4-2\gamma\right)k.$ (42)
Therefore, the optimal value of $\eta_{n}^{C}$ is obtained for the minimum
value of $k$, that is, for $k=(n+1)/2$. This implies the right-hand-side of
(42) is $n+\gamma-2$. Similarly, if $k<n/2$, then we have $(n-k)>n/2$, and
following a similar argument we can obtain the same bound. ∎
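Result 3 is also easy to confirm by exhaustive search over all $2^{n}$ classical assignments; a minimal sketch:

```python
from itertools import product

def classical_value(n, gamma):
    """Brute-force eta_n^C: max over a_i = +/-1 of -sum_i a_i a_{i+1} - gamma sum_i a_i."""
    return max(-sum(a[i] * a[(i + 1) % n] for i in range(n)) - gamma * sum(a)
               for a in product((1, -1), repeat=n))

gamma = 0.3819660112501051  # alpha^2 for n = 5
print(classical_value(5, gamma), 5 + gamma - 2)  # both ~3.382
```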
## 3 Self-testing of quantum devices
An exact self-testing statement provides us with a certification of quantum
devices, given that we observe an optimal violation of a noncontextuality
inequality. However, the observed statistics are unchanged in the presence of
auxiliary degrees of freedom (or auxiliary systems) and a global unitary.
Therefore, self-testing in the context of state-dependent quantum contextual correlations [BRV+19b, IMOK20] infers the state and measurements uniquely up to these equivalences.
More formally, self-testing of a preparation $|\bar{\psi}\rangle\in\mathbbm{C}^{d}$ and a set of measurements $\\{\bar{A}_{i}\\}^{n}_{i=1}$ acting on $\mathbbm{C}^{d}$ is defined as follows: if a set of observables $\\{A_{i}\\}^{n}_{i=1}$ acting on an unknown
finite-dimensional Hilbert space $\mathcal{H}$ and a state
$|\psi\rangle\in\mathcal{H}$ maximally violate a noncontextuality inequality,
then there exists an isometry $\Phi:\mathcal{H}\mapsto\mathbbm{C}^{d}$ such
that
1. 1.
$\Phi|\psi\rangle=|\bar{\psi}\rangle$ ,
2. 2.
$\Phi A_{i}\Phi^{\dagger}=\bar{A}_{i}$ for all $i=1,\ldots,n$.
To obtain self-testing only from the reduced assumptions mentioned in Section 1, we consider a modified version of the expression $\mathcal{B}_{n}$ (40) of the following form
$\tilde{\mathcal{B}}_{n}:=\mathcal{B}_{n}-\sum_{i}[p(++|\mathcal{A}_{i+1},\mathcal{A}_{i})+p(++|\mathcal{A}_{i-1},\mathcal{A}_{i})].$
(43)
Since the additional term is non-positive, the classical and quantum bounds of
$\tilde{\mathcal{B}}_{n}$ are the same as for $\mathcal{B}_{n}$. Moreover, it
follows from (36) that the SOS decomposition of $\tilde{B}_{n}$ is
$\displaystyle\eta_{n}\mathbbm{1}-\tilde{B}_{n}$ $\displaystyle=$
$\displaystyle\sum_{i,k}c_{k}\left[\mathbbm{1}-M_{i,k}\right]^{2}+d\sum_{i}\left[\mathbbm{1}-A^{2}_{i}\right]$
(44) $\displaystyle+\sum_{i}(K_{i}K_{i+1})^{\dagger}(K_{i}K_{i+1})$
$\displaystyle+\sum_{i}(K_{i}K_{i-1})^{\dagger}(K_{i}K_{i-1}),$
where
$\displaystyle\tilde{B}_{n}=B_{n}-\sum_{i}(K_{i}K_{i+1})^{\dagger}(K_{i}K_{i+1})$
$\displaystyle-\sum_{i}(K_{i}K_{i-1})^{\dagger}(K_{i}K_{i-1}),$ (45)
and $\eta_{n}$ is again the optimal quantum value of $\tilde{B}_{n}$. Note that $K_{i\pm 1}|\psi\rangle/\sqrt{\langle\psi|K_{i\pm 1}^{\dagger}K_{i\pm 1}|\psi\rangle}$ is the post-measurement quantum state after $A_{i\pm 1}$ is measured and the ‘+’ outcome is obtained; consequently, $p(++|\mathcal{A}_{i\pm 1},\mathcal{A}_{i})=\langle\psi|(K_{i}K_{i\pm 1})^{\dagger}(K_{i}K_{i\pm 1})|\psi\rangle$. Let us now show that our inequality (43) can be used to make a self-testing statement, according to the above definition, for the state and observables (8)-(9) maximally violating it.
###### Result 4 (Self-testing).
Under the Assumptions 1 and 2 stated in Sec. 1, if a quantum state
$|\psi\rangle\in\mathcal{H}$ and a set of $n$ (where
$n=2^{m}+1,m\in\mathbbm{N}\setminus\\{1\\})$ measurements $A_{i}$ acting on
$\mathcal{H}$ violate the inequality (43) maximally, then there exists a
projection $P:\mathcal{H}\to\mathbbm{C}^{3}$ and a unitary $U$ acting on
$\mathbbm{C}^{3}$ such that
$\displaystyle
U(PA_{i}P^{\dagger})U^{\dagger}=2|\widehat{v}_{i}\rangle\\!\langle\widehat{v}_{i}|-\mathbbm{1}_{3},$
$\displaystyle\quad U(P|\psi\rangle)=(1,0,0)^{T},$ (46)
where $|\widehat{v}_{i}\rangle$ are defined in (10).
###### Proof.
Taking the expectation value of both sides of the SOS decomposition (44) in the state $|\psi\rangle$, we obtain by virtue of (14) that for any $i$ and $k$,
$M_{i,k}|\psi\rangle=|\psi\rangle.$ (47)
In the particular case $k=1$, this condition, combined with the explicit form of $M_{i,1}$ given in Eq. (24) and the fact that $\beta_{1}=\alpha/(1+2\alpha)$, leads to the following relations for all $i=1,\dots,n$,
$(A_{i}+\alpha A_{i+1}+\alpha A_{i-1})|\psi\rangle=(1-2\alpha)|\psi\rangle.$
(48)
Similarly, from the second term of the SOS decomposition (44) we get that for
all $i=1,\ldots,n$,
$A^{2}_{i}|\psi\rangle=|\psi\rangle.$ (49)
Additionally, due to the last two terms of the SOS decomposition (44) we know that $K_{i}K_{i\pm 1}|\psi\rangle=0$ for all $i$, which, using Assumption 2, reads $\sqrt{F_{i}}\sqrt{F_{i\pm 1}}|\psi\rangle=0$. Since $\sqrt{F_{i}}$ and $F_{i}$ have the same support, these conditions further imply the following relations for all $i=1,\dots,n,$
$\displaystyle F_{i}F_{i\pm 1}|\psi\rangle=0.$ (50)
Given the relations (48), (49) and (50), the next Theorem provides the proof
for the self-testing statement. ∎
The self-testing property implies that our modified inequality (43) is non-trivial, since no classical value assignment is equivalent to the realization given in (46).
###### Theorem.
If a set of Hermitian operators $\\{A_{i}\\}^{n}_{i=1}$ (where $n$ is odd) of
the form (2) acting on arbitrary finite-dimensional Hilbert space
$\mathcal{H}$ and a unit vector $|\psi\rangle\in\mathcal{H}$ satisfy the
relations (48),(49) and (50), then there exists a projection operator
$P:\mathcal{H}\to\mathbbm{C}^{3}$ and a unitary $U$ acting on
$\mathbbm{C}^{3}$ such that (46) holds true.
###### Proof.
We prove this theorem in two steps.
Step 1. In the first step, we deduce the effective dimensionality of the
observables $A_{i}$ and the state $|\psi\rangle$. Let us define a vector space
$V=\text{Span}\\{|\psi\rangle,A_{1}|\psi\rangle,A_{3}|\psi\rangle\\}.$ Due to
Lemma 1 (stated in Appendix B), it suffices to consider the observables
$A_{i}$ and the state $|\psi\rangle$ restricted to $V$. In other words, Lemma
1 points out that the Hilbert space $\mathcal{H}$ can be decomposed as
$V\oplus V^{\bot}$ and all the operators $A_{i}$ have the following block
structure
$A_{i}=\left(\begin{array}[]{@{}c|c@{}}\tilde{A}_{i}&\mathbb{O}\\\
\hline\cr\mathbb{O}&A^{\prime}_{i}\end{array}\right),$ (51)
wherein $\tilde{A}_{i},A^{\prime}_{i}$ are acting on $V,V^{\bot}$
respectively. This allows us to define
$\displaystyle\tilde{A}_{i}=PA_{i}P^{\dagger}=2\tilde{F}_{i}-\mathbbm{1},$
$\displaystyle|\tilde{\psi}\rangle=P|\psi\rangle,$ (52)
where $P$ is the projection operator from $\mathcal{H}$ to $V$,
$\tilde{F}_{i}=PF_{i}P^{\dagger}\geqslant 0$ and $\mathbbm{1}$ is the identity
operator acting on $V$. It follows from Eq. (2) and Eqs. (50), (48), (49) that
the projected measurements $\tilde{F}_{i}$ and the state
$|\tilde{\psi}\rangle$ satisfy the following sets of relations for all
$i=1,\dots,n$,
$\displaystyle\quad\tilde{F}_{i}\tilde{F}_{i\pm 1}|\tilde{\psi}\rangle=0,$
(53)
$\displaystyle\left(\tilde{F}_{i}+\alpha\tilde{F}_{i-1}+\alpha\tilde{F}_{i+1}\right)|\tilde{\psi}\rangle=|\tilde{\psi}\rangle,$
(54)
$\displaystyle\tilde{F}^{2}_{i}|\tilde{\psi}\rangle=\tilde{F}_{i}|\tilde{\psi}\rangle.$
(55)
Step 2. In the second step, we characterize the observables $\tilde{A}_{i}$.
With the help of Lemma 2 given in Appendix B, we first show that all
observables $\tilde{A}_{i}$ are of the form
$\tilde{A}_{i}=2|v_{i}\rangle\\!\langle v_{i}|-\mathbbm{1}$ (56)
for some normalized vectors $|v_{i}\rangle\in\mathbbm{C}^{3}$ such that
$\langle v_{i}|v_{i\pm 1}\rangle=0$. The remaining part is the
characterization of $|v_{i}\rangle$. By plugging Eq. (56) into Eq. (54) we
obtain that for all $i$,
$(|v_{i}\rangle\\!\langle v_{i}|+\alpha|v_{i-1}\rangle\\!\langle
v_{i-1}|+\alpha|v_{i+1}\rangle\\!\langle
v_{i+1}|)|\tilde{\psi}\rangle=|\tilde{\psi}\rangle.$ (57)
We use the fact that $|v_{i}\rangle,|v_{i\pm 1}\rangle$ are orthogonal and multiply Eq. (57) by $\langle v_{i-1}|$ and $\langle v_{i+1}|$, which leads us to the following equations
$\alpha\langle v_{i-1}|v_{i+1}\rangle\langle
v_{i+1}|\tilde{\psi}\rangle=(1-\alpha)\langle v_{i-1}|\tilde{\psi}\rangle$
(58)
and
$\alpha\langle v_{i+1}|v_{i-1}\rangle\langle
v_{i-1}|\tilde{\psi}\rangle=(1-\alpha)\langle v_{i+1}|\tilde{\psi}\rangle$
(59)
for all $i$. By substituting the term $\langle v_{i-1}|\tilde{\psi}\rangle$
from the first equation into the second one, we arrive at the following
conditions
$\forall i,\quad|\langle v_{i-1}|v_{i+1}\rangle|=\frac{1-\alpha}{\alpha}.$
(60)
Note that here we use the fact that $\langle v_{i+1}|\tilde{\psi}\rangle\neq 0$ (if ${\langle v_{j+1}|\tilde{\psi}\rangle}=0$ for some $j$, then (58) implies that ${\langle v_{j-1}|\tilde{\psi}\rangle}$ is also $0$, and further (57) implies ${|v_{j}\rangle\\!\langle v_{j}|\tilde{\psi}\rangle=|\tilde{\psi}\rangle}$; substituting these in (57) with $i=j+1$, we arrive at the relation ${|v_{j+2}\rangle\\!\langle v_{j+2}|\tilde{\psi}\rangle=[(1-\alpha)/\alpha]|\tilde{\psi}\rangle}$, which cannot hold for any finite $n$ since ${|v_{j+2}\rangle\\!\langle v_{j+2}|}$ has eigenvalues $1$ and $0$). Taking the absolute value of both sides of (59) and using (60) we obtain another set of conditions
$\forall
i,\quad|\langle\tilde{\psi}|v_{i-1}\rangle|=|\langle\tilde{\psi}|v_{i+1}\rangle|.$
(61)
And since $n$ is odd, as a consequence of the above equation,
$\forall
i,j,\quad|\langle\tilde{\psi}|v_{i}\rangle|=|\langle\tilde{\psi}|v_{j}\rangle|.$
(62)
Let us try to see what is the most general form of $|v_{i}\rangle$ compatible
with the above conditions. First let us exploit the fact that observed
probabilities do not change if we rotate the state and measurements by a
unitary operation. We thus choose it so that
$U|\tilde{\psi}\rangle=(1,0,0)^{T}\equiv|0\rangle$. We also notice that any
unitary of the following form
$\left(\begin{array}[]{cc}1&0\\\ 0&U^{\prime}\end{array}\right)$ (63)
with $U^{\prime}$ being any $2\times 2$ unitary does not change $|0\rangle$.
Later we will use this freedom.
Due to the fact that we are characterizing projectors $|v_{i}\rangle\\!\langle
v_{i}|$ rather than the vectors themselves, we can always assume the first
element of the vector is positive, that is, $|v_{i}\rangle$ has the form,
$|v_{i}\rangle=\left(\cos\theta_{i},e^{\mathbbm{i}a_{i}}\sin\theta_{i}\sin\phi_{i},e^{\mathbbm{i}b_{i}}\sin\theta_{i}\cos\phi_{i}\right)^{T}.$
(64)
The condition (62) implies that all $\cos\theta_{i}$ are equal and therefore
let us denote $\theta_{i}=\theta$. Plugging these forms of $|v_{i}\rangle$ and
$|\tilde{\psi}\rangle=|0\rangle$ into Eq. (57), the first element of the
vector equation leads to
$\cos\theta=\frac{1}{\sqrt{1+2\alpha}}.$ (65)
Next, we use the freedom given by (63) to bring one of the vectors, say
$|v_{n}\rangle$, to $(\cos\theta,0,\sin\theta)^{T}$ by taking
$\sin\phi_{n}=0,\quad e^{\mathbbm{i}b_{n}}=1.$ (66)
Then, due to the condition $\langle v_{1}|v_{n}\rangle=\langle
v_{n-1}|v_{n}\rangle=0$ we infer $e^{\mathbbm{i}b_{1}},e^{\mathbbm{i}b_{n-1}}$
are real and without loss of generality we can take
$e^{\mathbbm{i}b_{1}}=e^{\mathbbm{i}b_{n-1}}=1$ (67)
by absorbing the sign in $\cos\phi_{1},\cos\phi_{n-1}$. Further, we can get rid of one of the phases in $|v_{1}\rangle$, that is,
$e^{\mathbbm{i}a_{1}}=1,$ (68)
and take $\sin(\phi_{1})$ to be non-negative by applying another unitary of
the form (63),
$U^{\prime}=\mathrm{diag}[\pm\exp(-\mathbbm{i}a_{1}),1]$ (69)
that does not change the simplified form of $|v_{n}\rangle$. Equating the
second and third element of the vector equation (57) for
$|\tilde{\psi}\rangle=|0\rangle$, we obtain the relations
$e^{\mathbbm{i}a_{i}}\sin\phi_{i}+\alpha
e^{\mathbbm{i}a_{i-1}}\sin\phi_{i-1}+\alpha
e^{\mathbbm{i}a_{i+1}}\sin\phi_{i+1}=0,$ (70)
and
$e^{\mathbbm{i}b_{i}}\cos\phi_{i}+\alpha
e^{\mathbbm{i}b_{i-1}}\cos\phi_{i-1}+\alpha
e^{\mathbbm{i}b_{i+1}}\cos\phi_{i+1}=0.$ (71)
With the aid of (66) and (68), Eq. (70) for $i=n$ implies $\sin(\phi_{1})=-e^{\mathbbm{i}a_{n-1}}\sin(\phi_{n-1})$, which allows us to take $e^{\mathbbm{i}a_{n-1}}=1$. Taking $i=1$ in Eqs. (70) and (71) and
replacing the values of
$\sin\phi_{n},\cos\phi_{n},e^{\mathbbm{i}a_{1}},e^{\mathbbm{i}b_{1}},e^{\mathbbm{i}b_{n}}$
we obtain,
$\displaystyle\sin\phi_{1}+\alpha e^{\mathbbm{i}a_{2}}\sin\phi_{2}=0,$ (72)
$\displaystyle\cos\phi_{1}+\alpha+\alpha e^{\mathbbm{i}b_{2}}\cos\phi_{2}=0.$
(73)
Thus, $e^{\mathbbm{i}a_{2}},e^{\mathbbm{i}b_{2}}$ are real and can be taken to
be 1. Note that here we use the fact that $\sin{\phi_{1}}\neq 0$ (if $\sin{\phi_{1}}=0$, then $\cos{\phi_{1}}=\pm 1$ and consequently ${\langle v_{n}|v_{1}\rangle=\cos{(\theta\mp\theta)}}$, which contradicts the relation ${\langle v_{n}|v_{1}\rangle=0}$; analogously, if we suppose $\cos{\phi_{2}}=0$, then $\cos{\phi_{1}}+\alpha=0$ and $\sin\phi_{2}=\pm 1$, and the first equation then holds only if $2\alpha^{2}=1$). Similarly, by taking $i=2,\dots,n-2$ we conclude that for all $i$
$e^{\mathbbm{i}a_{i}}=e^{\mathbbm{i}b_{i}}=1.$ (74)
On the other hand, the condition $\langle v_{i}|v_{i+1}\rangle=0$ implies,
$\displaystyle\phi_{i+1}-\phi_{i}$ $\displaystyle=$
$\displaystyle\cos^{-1}\left(-\frac{\cos^{2}\theta}{\sin^{2}\theta}\right)$
(75) $\displaystyle=$ $\displaystyle\frac{(n-1)\pi}{n}.$
Finally, considering $i=n$ in the above Eq. (75) and using $\sin\phi_{n}=0$ we
deduce $\phi_{1}=(n-1)\pi/n$. We discard the possibility
$\phi_{1}=-(n-1)\pi/n$ since $\sin\phi_{1}$ is taken to be non-negative. Thus,
the equations (65), (74), and (75) together with $\phi_{1}$ establish that the
unknown vectors $|v_{i}\rangle$ in (64) are unitarily equivalent to
$|\widehat{v}_{i}\rangle$. This completes the proof. ∎
## 4 Conclusion
Kochen-Specker contextuality captures the intrinsic nature of quantum theory
that essentially departs from classicality. It also offers a generalization of
quantum correlations beyond nonlocality to a larger class of quantum systems
and minimizes the demands to test non-classicality. It is therefore a fundamental problem to understand what maximal information about the underlying quantum system can be inferred from the correlations observed in a contextuality experiment, and whether this information can be used to certify quantum devices under minimal assumptions about their internal functioning.
In this work, we derive self-testing statements for the $n$-cycle scenario using
weaker assumptions than those made in previous approaches based on Kochen-
Specker contextuality [CSW14, BRV+19b, IMOK20, BRV+19a]. In particular, we do
not assume orthogonality relations between measurement effects and
projectivity of the measurements. Instead, we consider general two-outcome
measurements which nevertheless obey a single assumption (Assumption 2) that
the Kraus operators representing one of the outcomes are uniquely determined
by their corresponding measurement effects. While ideal measurements are the
prerequisite for noncontextuality, our self-testing statement holds beyond such idealizations, inferring nonclassicality from the correlations obtained in sequential measurements. Moreover, we take a different approach: we use the sum-of-squares technique, which has been used successfully in the Bell scenario to derive the maximal quantum violation of certain Bell inequalities as well as to make self-testing statements [BP15, ŠASA16, SAT+17, KŠT+19, SSKA19, CMMN19, Kan19, ASTA19], but has never been explored for self-testing in the contextuality scenario.
We further remark that self-testing from quantum contextuality is not fully device-independent as far as its original definition is concerned, although its experimental test does not require space-like separation. Assumption 1 is critical to verify for practical purposes; however, in future studies one may try to overcome it by restricting the computational power or the memory of the measurement device. Nonetheless, it is far more powerful than the usual process of tomography. It is also distinct from the self-testing approach in the prepare-and-measure scenario [TKV+18, FK19], since no restriction on the dimensionality of the preparation is imposed here.
Although the SOS decompositions hold for a certain number of measurements, a
suitable adaptation of our approach in future studies may lead to SOS
decompositions for an arbitrary odd number of measurements. Another direction
for further study is to explore whether our approach can be applied to states
and measurements of higher dimension than three and whether our self-testing
statements can be made robust to experimental imperfections. From a more
general perspective, it would be interesting to design a unifying approach to
self-testing based on Bell nonlocality and quantum contextuality.
## Acknowledgement
This work is supported by the Foundation for Polish Science through the First
Team project (First TEAM/2017-4/31) co-financed by the European Union under
the European Regional Development Fund.
## References
* [AC18] B. Amaral and M. T. Cunha. Contextuality: The Compatibility-Hypergraph Approach, pages 13–48. Springer Briefs in Mathematics. Springer, Cham, 2018.
DOI: 10.1007/978-3-319-93827-1_2.
* [AQB+13] M. Araújo, M. T. Quintino, C. Budroni, M. T. Cunha, and A. Cabello. All noncontextuality inequalities for the $n$-cycle scenario. Phys. Rev. A, 88: 022118, 2013.
DOI: 10.1103/PhysRevA.88.022118.
* [ASTA19] R. Augusiak, A. Salavrakos, J. Tura, and A. Acín. Bell inequalities tailored to the Greenberger–Horne–Zeilinger states of arbitrary local dimension. New J. Phys., 21(11): 113001, 2019.
DOI: 10.1088/1367-2630/ab4d9f.
* [Bel64] J. S. Bell. On the Einstein Podolsky Rosen paradox. Physics Physique Fizika, 1: 195–200, 1964.
DOI: 10.1103/PhysicsPhysiqueFizika.1.195.
* [BP15] C. Bamps and S. Pironio. Sum-of-squares decompositions for a family of Clauser-Horne-Shimony-Holt-like inequalities and their application to self-testing. Phys. Rev. A, 91: 052111, 2015.
DOI: 10.1103/PhysRevA.91.052111.
* [BRV+19a] K. Bharti, M. Ray, A. Varvitsiotis, A. Cabello, and L. Kwek. Local certification of programmable quantum devices of arbitrary high dimensionality. 2019.
* [BRV+19b] K. Bharti, M. Ray, A. Varvitsiotis, N. Warsi, A. Cabello, and L. Kwek. Robust Self-Testing of Quantum Systems via Noncontextuality Inequalities. Phys. Rev. Lett., 122: 250403, 2019.
DOI: 10.1103/PhysRevLett.122.250403.
* [BS98] P. Busch and J. Singh. Lüders theorem for unsharp quantum measurements. Physics Letters A, 249(1): 10–12, 1998.
DOI: 10.1016/S0375-9601(98)00704-X.
* [Bus86] P. Busch. Unsharp reality and joint measurements for spin observables. Phys. Rev. D, 33: 2253–2261, 1986.
DOI: 10.1103/PhysRevD.33.2253.
* [Cab08] A. Cabello. Experimentally Testable State-Independent Quantum Contextuality. Phys. Rev. Lett., 101: 210401, 2008.
DOI: 10.1103/PhysRevLett.101.210401.
* [Cab13] A. Cabello. Simple Explanation of the Quantum Violation of a Fundamental Inequality. Phys. Rev. Lett., 110: 060402, 2013.
DOI: 10.1103/PhysRevLett.110.060402.
* [CGS17] A. Coladangelo, K. Goh, and V. Scarani. All pure bipartite entangled states can be self-tested. Nature Communications, 8(1): 15485, 2017.
DOI: 10.1038/ncomms15485.
* [CMMN19] D. Cui, A. Mehta, H. Mousavi, and S. Nezhadi. A generalization of CHSH and the algebraic structure of optimal strategies. 2019.
* [CSW14] A. Cabello, S. Severini, and A. Winter. Graph-Theoretic Approach to Quantum Correlations. Phys. Rev. Lett., 112: 040401, 2014.
DOI: 10.1103/PhysRevLett.112.040401.
* [FK19] M. Farkas and J. Kaniewski. Self-testing mutually unbiased bases in the prepare-and-measure scenario. Phys. Rev. A, 99: 032316, 2019.
DOI: 10.1103/PhysRevA.99.032316.
* [GBC+14] O. Gühne, C. Budroni, A. Cabello, M. Kleinmann, and J. Larsson. Bounding the quantum dimension with contextuality. Phys. Rev. A, 89: 062107, 2014.
DOI: 10.1103/PhysRevA.89.062107.
* [GHH+14] A. Grudka, K. Horodecki, M. Horodecki, P. Horodecki, R. Horodecki, P. Joshi, W. Kłobus, and A. Wójcik. Quantifying Contextuality. Phys. Rev. Lett., 112: 120401, 2014.
DOI: 10.1103/PhysRevLett.112.120401.
* [HWVE14] M. Howard, J. Wallman, V. Veitch, and J. Emerson. Contextuality supplies the “magic” for quantum computation. Nature, 510(7505): 351–355, 2014.
DOI: 10.1038/nature13460.
* [IMOK20] A. Irfan, K. Mayer, G. Ortiz, and E. Knill. Certified quantum measurement of Majorana fermions. Phys. Rev. A, 101: 032106, 2020.
DOI: 10.1103/PhysRevA.101.032106.
* [Kan19] J. Kaniewski. A weak form of self-testing. 2019.
* [KanCK14] P. Kurzyński, A. Cabello, and D. Kaszlikowski. Fundamental Monogamy Relation between Contextuality and Nonlocality. Phys. Rev. Lett., 112: 100401, 2014.
DOI: 10.1103/PhysRevLett.112.100401.
* [KCBbuS08] A. Klyachko, M. Can, S. Binicioğlu, and A. Shumovsky. Simple Test for Hidden Variables in Spin-1 Systems. Phys. Rev. Lett., 101: 020403, 2008.
DOI: 10.1103/PhysRevLett.101.020403.
* [KS75] S. Kochen and E. Specker. The Problem of Hidden Variables in Quantum Mechanics. In The Logico-Algebraic Approach to Quantum Mechanics, The Western Ontario Series in Philosophy of Science, pages 293–328. Springer Netherlands, 1975.
DOI: 10.1007/978-94-010-1795-4.
* [KŠT+19] J. Kaniewski, I. Šupić, J. Tura, F. Baccari, A. Salavrakos, and R. Augusiak. Maximal nonlocality from maximal entanglement and mutually unbiased bases, and self-testing of two-qutrit quantum systems. Quantum, 3: 198, 2019.
DOI: 10.22331/q-2019-10-24-198.
* [LSW11] Y. Liang, R. Spekkens, and H. Wiseman. Specker′s parable of the overprotective seer: A road to contextuality, nonlocality and complementarity. Phys. Rep., 506(1): 1–39, 2011.
DOI: 10.1016/j.physrep.2011.05.001.
* [MY04] D. Mayers and A. Yao. Self testing quantum apparatus. Quantum Inf. Comput., 4(4): 273–286, 2004.
DOI: 10.26421/QIC4.4.
* [PK98] M. B. Plenio and P. L. Knight. The quantum-jump approach to dissipative dynamics in quantum optics. Rev. Mod. Phys., 70: 101–144, 1998.
DOI: 10.1103/RevModPhys.70.101.
* [Rau13] R. Raussendorf. Contextuality in measurement-based quantum computation. Phys. Rev. A, 88: 022322, 2013.
DOI: 10.1103/PhysRevA.88.022322.
* [ŠASA16] I. Šupić, R. Augusiak, A. Salavrakos, and A. Acín. Self-testing protocols based on the chained bell inequalities. New J. Phys., 18(3): 035013, 2016.
DOI: 10.1088/1367-2630/18/3/035013.
* [SAT+17] A. Salavrakos, R. Augusiak, J. Tura, P. Wittek, A. Acín, and S. Pironio. Bell Inequalities Tailored to Maximally Entangled States. Phys. Rev. Lett., 119: 040402, 2017.
DOI: 10.1103/PhysRevLett.119.040402.
* [SBA17] J. Singh, K. Bharti, and Arvind. Quantum key distribution protocol based on contextuality monogamy. Phys. Rev. A, 95: 062333, 2017.
DOI: 10.1103/PhysRevA.95.062333.
* [SHP19] D. Saha, P. Horodecki, and M. Pawłowski. State independent contextuality advances one-way communication. New J. Phys., 21(9): 093057, 2019.
DOI: 10.1088/1367-2630/ab4149.
* [SR17] D. Saha and R. Ramanathan. Activation of monogamy in nonlocality using local contextuality. Phys. Rev. A, 95: 030104, 2017.
DOI: 10.1103/PhysRevA.95.030104.
* [SSKA19] S. Sarkar, D. Saha, J. Kaniewski, and R. Augusiak. Self-testing quantum systems of arbitrary local dimension with minimal number of measurements. 2019.
* [TKV+18] A. Tavakoli, J. Kaniewski, T. Vértesi, D. Rosset, and N. Brunner. Self-testing quantum states and measurements in the prepare-and-measure scenario. Phys. Rev. A, 98: 062307, 2018.
DOI: 10.1103/PhysRevA.98.062307.
* [XSS+16] Z. Xu, D. Saha, H. Su, M. Pawłowski, and J. Chen. Reformulating noncontextuality inequalities in an operational approach. Phys. Rev. A, 94: 062103, 2016.
DOI: 10.1103/PhysRevA.94.062103.
* [YVB+14] T. Yang, T. Vértesi, J. Bancal, V. Scarani, and M. Navascués. Robust and Versatile Black-Box Certification of Quantum Devices. Phys. Rev. Lett., 113: 040401, 2014.
DOI: 10.1103/PhysRevLett.113.040401.
## Appendix A Obtaining the stabilizing operators
To guess the stabilizing operators $M_{i,k}$ we use the stabilizing operators in the optimal quantum realization of the $n$-cycle KCBS inequality (6). Let us assume that these operators are of the following form
$\widehat{M}_{i,k}=a\widehat{A}_{i}+b\widehat{A}_{i+k}+b^{\prime}\widehat{A}_{i-k},$
(76)
where the coefficients $a$, $b$ and $b^{\prime}$ are to be determined as a
solution to the equation
$(a\widehat{A}_{i}+b\widehat{A}_{i+k}+b^{\prime}\widehat{A}_{i-k})|\widehat{\psi}\rangle=|\widehat{\psi}\rangle,$
(77)
and $|\widehat{\psi}\rangle,\widehat{A}_{i}$ are given in Eqs. (8)-(9). To
solve the above we first notice the following relation,
$\widehat{A}_{i}|\widehat{\psi}\rangle=(\cos{2\theta},\sin{2\theta}\sin{\phi_{i}},\sin{2\theta}\cos{\phi_{i}})^{T},$
(78)
which when substituted into Eq. (77) leads one to a system of equations
$\begin{bmatrix}a(1+\frac{b}{a}+\frac{b^{\prime}}{a})\cos{2\theta}\\\\[4.30554pt]
a\sin{2\theta}\big{(}\sin{\phi_{i}}+\frac{b}{a}\sin{\phi_{i+k}}+\frac{b^{\prime}}{a}\sin{\phi_{i-k}}\big{)}\\\\[4.30554pt]
a\sin{2\theta}\big{(}\cos{\phi_{i}}+\frac{b}{a}\cos{\phi_{i+k}}+\frac{b^{\prime}}{a}\cos{\phi_{i-k}}\big{)}\\\
\end{bmatrix}=\begin{bmatrix}1\\\ 0\\\ 0\\\ \end{bmatrix}.$ (79)
Assuming that $a\neq 0$ and taking into account that $\sin{2\theta}\neq 0$,
the last two equations in the above system can be rewritten as
$\displaystyle\begin{bmatrix}\sin{\phi_{i}}&\sin{\phi_{i+k}}&\sin{\phi_{i-k}}\\\\[4.30554pt]
\cos{\phi_{i}}&\cos{\phi_{i+k}}&\cos{\phi_{i-k}}\\\
\end{bmatrix}\begin{bmatrix}1\\\ b/a\\\
b^{\prime}/a\end{bmatrix}=\begin{bmatrix}0\\\ 0\end{bmatrix}.$ (80)
After multiplying the above equation from the left by
$\begin{bmatrix}\sin{\phi_{i}}&\cos{\phi_{i}}\\\
\cos{\phi_{i}}&-\sin{\phi_{i}}\\\ \end{bmatrix}$ (81)
and using the fact $\phi_{i+k}-\phi_{i}=\phi_{k}$, Eq. (80) simplifies to,
$\displaystyle\begin{bmatrix}1&\cos{\phi_{k}}&\cos{\phi_{k}}\\\
0&\sin{\phi_{k}}&-\sin{\phi_{k}}\\\ \end{bmatrix}\begin{bmatrix}1\\\ b/a\\\
b^{\prime}/a\end{bmatrix}=\begin{bmatrix}0\\\ 0\end{bmatrix}.$ (82)
In this way we see that the dependence on $i$ in (80) disappears, and the system of equations (82) implies
$\frac{b}{a}=\frac{b^{\prime}}{a}=-\frac{1}{2}\sec{\phi_{k}}.$ (83)
Substituting the above into the first component equation of (79) leads to
$a=\frac{1}{(1-\sec{\phi_{k}})(2\cos^{2}{\theta}-1)},$ (84)
and thus we obtain a unique solution for $a,b,b^{\prime}$. Finally, substituting $a,b,b^{\prime}$ into Eq. (76) we can conveniently state the $\widehat{M}_{i,k}$ operators in the following way
$\displaystyle\widehat{M}_{i,k}:=\left(\frac{1+2\alpha}{1-2\alpha}\right)\left[(1-2\beta_{k})\widehat{A}_{i}+\beta_{k}(\widehat{A}_{i+k}+\widehat{A}_{i-k})\right],$ (85)
where
$\beta_{k}=\frac{1}{2(1-\cos{\phi_{k}})},\qquad\alpha=\frac{1}{2}\sec\left(\frac{\pi}{n}\right).$
(86)
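As a sanity check, the closed form (85)-(86) can be verified numerically against relation (78). The sketch below assumes $|\widehat{\psi}\rangle=(1,0,0)^{T}$, as implied by the right-hand side of (79), and $\phi_{i}=i(n-1)\pi/n$; it is an illustration, not code accompanying the paper.

```python
import numpy as np

n = 5                                               # any odd n >= 5
cos2 = np.cos(np.pi / n) / (1 + np.cos(np.pi / n))  # cos^2(theta)
theta = np.arccos(np.sqrt(cos2))
phi = lambda i: i * (n - 1) * np.pi / n             # phi_{i+k} - phi_i = phi_k

def A_psi(i):
    # Relation (78): A_i |psi> for |psi> = (1, 0, 0)^T.  Indices are
    # cyclic; for odd n, phi_{i+n} - phi_i = (n-1)*pi leaves sin and
    # cos unchanged, so no explicit wrap-around is needed.
    return np.array([np.cos(2 * theta),
                     np.sin(2 * theta) * np.sin(phi(i)),
                     np.sin(2 * theta) * np.cos(phi(i))])

alpha = 0.5 / np.cos(np.pi / n)                     # Eq. (86)
for k in range(1, (n - 1) // 2 + 1):
    beta = 1 / (2 * (1 - np.cos(phi(k))))           # Eq. (86)
    for i in range(1, n + 1):
        # M_{i,k}|psi> from Eq. (85)
        m_psi = ((1 + 2 * alpha) / (1 - 2 * alpha)) * (
            (1 - 2 * beta) * A_psi(i)
            + beta * (A_psi(i + k) + A_psi(i - k)))
        assert np.allclose(m_psi, [1.0, 0.0, 0.0])
print("M_{i,k}|psi> = |psi> verified for all i, k")
```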
## Appendix B Lemmas 1 and 2
In this appendix, we provide two lemmas that are used in the proof of the Theorem.
###### Lemma 1.
If a set of Hermitian operators $\\{A_{i}\\}^{n}_{i=1}$ (where $n$ is odd) of
the form (2) and a vector $|\psi\rangle$ satisfy the relations (50), (48) and
(49), then the vector space
$V=\mathrm{span}\\{|\psi\rangle,A_{1}|\psi\rangle,A_{3}|\psi\rangle\\}$ (87)
is invariant under the algebra generated by $A_{i}$.
###### Proof.
To prove this statement it suffices to show that $A_{i}|\psi\rangle$ for all
$i=1,\ldots,n$ as well as all $A_{i}A_{j}|\psi\rangle$ with $i\neq j$ can be
expressed as linear combinations of the basis vectors $|\psi\rangle$,
$A_{1}|\psi\rangle$ and $A_{3}|\psi\rangle$.
Let us begin by noting that Eq. (48) for $i=2$ gives us directly such a linear
combination for $A_{2}|\psi\rangle$ and so $A_{2}|\psi\rangle\in V$. Then, the
fact that $A_{i}|\psi\rangle\in V$ for $i=4,\ldots,n$ follows from Eq. (48);
it is enough to rewrite the latter as
$A_{i}|\psi\rangle=\frac{1-2\alpha}{\alpha}|\psi\rangle-\frac{1}{\alpha}A_{i-1}|\psi\rangle-
A_{i-2}|\psi\rangle.$ (88)
Let us now move on to showing that $A_{i}A_{j}|\psi\rangle\in V$ for all
$i\neq j$. To this end, we first observe that using (50) we obtain
$\displaystyle A_{i}A_{i\pm 1}|\psi\rangle$ $\displaystyle=$
$\displaystyle(2F_{i}-\mathbbm{1})(2F_{i\pm 1}-\mathbbm{1})|\psi\rangle$ (89)
$\displaystyle=$ $\displaystyle-(A_{i}+A_{i\pm 1}+\mathbbm{1})|\psi\rangle,$
which due to the fact that $A_{i}|\psi\rangle\in V$, allows us to conclude
that for all $i$, $A_{i}A_{i\pm 1}|\psi\rangle\in V$.
Let us then consider the vectors $A_{i}A_{j}|\psi\rangle$ for pairs $i,j$ such
that $|i-j|=2$. Using Eq. (49) and the fact $[A_{i},A_{i\pm 1}]|\psi\rangle=0$
which is a consequence of Eq. (50), we get
$\displaystyle A_{i}A_{i\pm 2}|\psi\rangle$ $\displaystyle=$ $\displaystyle
A_{i}A_{i\pm 2}(A_{i\pm 1})^{2}|\psi\rangle$ (90) $\displaystyle=$
$\displaystyle(A_{i}A_{i\pm 1})(A_{i\pm 1}A_{i\pm 2})|\psi\rangle.$
Since we have already shown $A_{i}A_{i\pm 1}|\psi\rangle\in V$, the above
equation implies $A_{i}A_{i\pm 2}|\psi\rangle\in V$.
Given that $A_{i}A_{j}|\psi\rangle\in V$ for $|i-j|=1$ and $|i-j|=2$ we can
then prove, applying the same argument as above, that $A_{i}A_{j}|\psi\rangle$
belong to $V$ for any pair $i,j$ such that $|i-j|=3$. In fact, following this
approach recursively we can prove that $A_{i}A_{j}|\psi\rangle\in V$ for $i,j$
such that $|i-j|=k$ with $k=3,\ldots,n-1$, which completes the proof. ∎
Let us remark that the subspace $V$ is in fact spanned by any triple of the
vectors $|\psi\rangle$, $A_{i}|\psi\rangle$ and $A_{j}|\psi\rangle$ with
$i\neq j$. This is a consequence of the fact that, as proven above, any vector
$A_{i}|\psi\rangle$ is a linear combination of $|\psi\rangle$,
$A_{1}|\psi\rangle$ and $A_{3}|\psi\rangle$.
###### Lemma 2.
If a set of positive operators $\\{\tilde{F}_{i}\\}^{n}_{i=1}$ acting on
$\mathbbm{C}^{3}$ and a vector $|\tilde{\psi}\rangle$ satisfy the relations
(53), (54) and (55), then for each $i$ there exists a normalized vector
$|v_{i}\rangle\in\mathbbm{C}^{3}$ such that
$\tilde{F}_{i}=|v_{i}\rangle\\!\langle v_{i}|$ and, moreover, $\langle
v_{i}|v_{i\pm 1}\rangle=0$.
###### Proof.
Let us begin by showing that $\tilde{F}_{i}|\tilde{\psi}\rangle\neq 0$ for any
$i$. Assume to this end that there exists $j$ such that $\tilde{F}_{j}|\tilde{\psi}\rangle=0$. Then, using Eq. (54) for $i=j-1$ we
arrive at
$(\tilde{F}_{j-1}+\alpha\tilde{F}_{j-2})|\tilde{\psi}\rangle=|\tilde{\psi}\rangle.$
(91)
After applying $\tilde{F}_{j-2}$ to both sides of this equation and using Eq.
(53), we obtain
$\alpha\tilde{F}^{2}_{j-2}|\tilde{\psi}\rangle=\tilde{F}_{j-2}|\tilde{\psi}\rangle$
which is consistent with Eq. (55) if and only if
$\tilde{F}_{j-2}|\tilde{\psi}\rangle=0$. Therefore, due to Eq. (91) we have
$\tilde{F}_{j-1}|\tilde{\psi}\rangle=|\tilde{\psi}\rangle$. Again,
substituting these relations in (54) taking $i=j$, we arrive at
$\tilde{F}_{j+1}|\tilde{\psi}\rangle=[(1-\alpha)/\alpha]|\tilde{\psi}\rangle$
which contradicts Eq. (55).
Let us now show that all the operators $\tilde{F}_{i}$ are of rank one. We
first prove that none of them can be of rank three. Assume for this purpose
that $\mathrm{rank}(\tilde{F}_{j})=3$ for some $j$. Then, the condition (55)
gives $\tilde{F}_{j}|\tilde{\psi}\rangle=|\tilde{\psi}\rangle$. This, after
taking into account that $\tilde{F}_{j+1}\tilde{F}_{j}|\tilde{\psi}\rangle=0$
implies $\tilde{F}_{j+1}|\tilde{\psi}\rangle=0$ which contradicts the fact
$\tilde{F}_{i}|\tilde{\psi}\rangle\neq 0$ for all $i$, as shown before.
Let us then prove that none of $\tilde{F}_{i}$ can be of rank two. To this
end, assume that there is $j$ such that $\mathrm{rank}(\tilde{F}_{j})=2$ and
consider the eigendecomposition of $\tilde{F}_{j}$,
$\tilde{F}_{j}=\lambda|1\rangle\\!\langle
1|+\lambda^{\prime}|2\rangle\\!\langle 2|,$ (92)
where $|1\rangle,|2\rangle,|3\rangle$ are the eigenvectors, forming an
orthonormal basis in $\mathbbm{C}^{3}$, whereas
$\lambda,\lambda^{\prime}\in(0,1]$ are the eigenvalues. Subsequently,
$|\tilde{\psi}\rangle$ can be expressed as
$|\tilde{\psi}\rangle=x_{1}|1\rangle+x_{2}|2\rangle+x_{3}|3\rangle$ (93)
for some $x_{1},x_{2},x_{3}\in\mathbbm{C}$. It follows from Eq. (55) that
$\lambda=1\quad\text{ or }\quad x_{1}=0,$ (94)
and
$\lambda^{\prime}=1\quad\text{ or }\quad x_{2}=0.$ (95)
Note that $x_{1}=x_{2}=0$ is not possible since it requires $\tilde{F}_{j}|\tilde{\psi}\rangle=0$. Similarly, $x_{3}\neq 0$ as otherwise $\tilde{F}_{j}|\tilde{\psi}\rangle=|\tilde{\psi}\rangle$, which implies $\tilde{F}_{j\pm 1}|\tilde{\psi}\rangle=0$.
Now, employing the fact that $\tilde{F}_{j}$ is supported on
$\mathrm{span}\\{|1\rangle,|2\rangle\\}$, it follows from the condition
$\tilde{F}_{j}\tilde{F}_{j\pm 1}|\tilde{\psi}\rangle=0$ that $\tilde{F}_{j\pm
1}|\tilde{\psi}\rangle=q_{3,\pm}|3\rangle$ for some $q_{3,\pm}\in\mathbbm{C}$.
By combining this with (55) we find that
$\tilde{F}_{j\pm 1}|3\rangle=|3\rangle,$ (96)
that is, $|3\rangle$ is the eigenvector of $\tilde{F}_{j\pm 1}$ with
eigenvalue one, which, due to the fact that $\tilde{F}_{j\pm
1}\leqslant\mathbbm{1}$, implies that $\tilde{F}_{j\pm 1}$ decompose as
$\tilde{F}_{j\pm 1}=\tilde{F}_{j\pm 1}^{\prime}+|3\rangle\\!\langle 3|$ (97)
with $\tilde{F}_{j\pm 1}^{\prime}$ being positive matrices supported on
$\mathrm{span}\\{|1\rangle,|2\rangle\\}$.
By finally plugging Eqs. (92) and (97) into Eq. (54) and projecting the
obtained equation onto $|3\rangle$ we see that $2\alpha=1$, which is not
satisfied for any $n$.
As a result all the operators $\tilde{F}_{i}$ are of rank one and therefore
they can be expressed as
$\tilde{F}_{i}=\lambda_{i}|v_{i}\rangle\\!\langle v_{i}|$ (98)
for some $\lambda_{i}\in(0,1]$ and $|v_{i}\rangle\in\mathbbm{C}^{3}$. The
condition (55) holds only if $\lambda_{i}=1$. Furthermore, since $\tilde{F}_{i}|\tilde{\psi}\rangle\neq 0$, Eq. (53) implies $\langle v_{i}|v_{i\pm 1}\rangle=0$. This completes the proof. ∎
|
2024-09-04T02:54:55.176078 | 2020-02-27T16:21:43 | 2002.12229 | {
"authors": "You Lu, Bert Huang",
"full_text_license": null,
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"provenance": "arxiv-papers-0000.json.gz:25920",
"submitter": "You Lu",
"url": "https://arxiv.org/abs/2002.12229"
} | arxiv-papers | # Woodbury Transformations for
Deep Generative Flows
You Lu
Department of Computer Science
Virginia Tech
Blacksburg, VA
<EMAIL_ADDRESS>
&Bert Huang
Department of Computer Science
Tufts University
Medford, MA
<EMAIL_ADDRESS>
###### Abstract
Normalizing flows are deep generative models that allow efficient likelihood
calculation and sampling. The core requirement for this advantage is that they
are constructed using functions that can be efficiently inverted and for which
the determinant of the function’s Jacobian can be efficiently computed.
Researchers have introduced various such flow operations, but few of these
allow rich interactions among variables without incurring significant
computational costs. In this paper, we introduce _Woodbury transformations_ ,
which achieve efficient invertibility via the Woodbury matrix identity and
efficient determinant calculation via Sylvester’s determinant identity. In
contrast with other operations used in state-of-the-art normalizing flows,
Woodbury transformations enable (1) high-dimensional interactions, (2)
efficient sampling, and (3) efficient likelihood evaluation. Other similar
operations, such as 1x1 convolutions, emerging convolutions, or periodic
convolutions allow at most two of these three advantages. In our experiments
on multiple image datasets, we find that Woodbury transformations allow
learning of higher-likelihood models than other flow architectures while still
enjoying their efficiency advantages.
## 1 Introduction
Deep generative models are powerful tools for modeling complex distributions
and have been applied to many tasks such as synthetic data generation [27,
39], domain adaption [40], and structured prediction [33]. Examples of these
models include autoregressive models [13, 28], variational autoencoders [21,
31], generative adversarial networks [11], and normalizing flows [6, 30, 7,
22]. Normalizing flows are special because of two advantages: they allow efficient and exact computation of the log-likelihood, and efficient sampling.
Flow-based models are composed of a series of invertible functions, which are
specifically designed so that their inverse and determinant of the Jacobian
are easy to compute. However, to preserve this computational efficiency, these
functions usually cannot sufficiently encode dependencies among dimensions of
a variable. For example, affine coupling layers [6] split a variable into two parts and require the second part to depend only on the first, but they ignore the dependencies among dimensions within the second part.
To address this problem, Dinh et al. [6, 7] introduced a fixed permutation
operation that reverses the ordering of the channels of pixel variables.
Kingma and Dhariwal [22] introduced a 1$\times$1 convolution, which are a
generalized permutation layer, that uses a weight matrix to model the
interactions among dimensions along the channel axis. Their experiments
demonstrate the importance of capturing dependencies among dimensions.
Relatedly, Hoogeboom et al. [15] proposed emerging convolution operations, and
Hoogeboom et al. [15] and Finz et al. [9] proposed periodic convolution. These
two convolution layers have $d\times d$ kernels that can model dependencies
along the spatial axes in addition to the channel axis. However, the increase
in representational power comes at a cost: These convolution operations do not
scale well to high-dimensional variables. The emerging convolution is a
combination of two autoregressive convolutions [10, 23], whose inverse is not
parallelizable. To compute the inverse or determinant of the Jacobian, the
periodic convolution requires transforming the input and the convolution
kernel to Fourier space. This transformation is computationally costly.
In this paper, we develop _Woodbury transformations_ for generative flows. Our
method is also a generalized permutation layer and uses spatial and channel
transformations to model dependencies among dimensions along spatial and
channel axes. We use the Woodbury matrix identity [37] and Sylvester’s
determinant identity [35] to compute the inverse and Jacobian determinant,
respectively, so that both the training and sampling time complexities are linear in the input variable’s size. We also develop a memory-efficient
variant of the Woodbury transformation, which has the same advantage as the
full transformation but uses significantly reduced memory when the variable is
high-dimensional. In our experiments, we found that Woodbury transformations
enable model quality comparable to many state-of-the-art flow architectures
while maintaining significant efficiency advantages.
## 2 Deep Generative Flows
In this section, we briefly introduce deep generative flows. More background can be found in the appendix.
A normalizing flow [30] is composed of a series of invertible functions
$\mathbf{f}=\mathbf{f}_{1}\circ\mathbf{f}_{2}\circ...\circ\mathbf{f}_{K}$,
which transform $\mathbf{x}$ to a latent code $\mathbf{z}$ drawn from a simple
distribution. Therefore, with the _change of variables_ formula, we can
rewrite the log-likelihood $\log p_{\theta}(\mathbf{x})$ to be
$\log p_{\theta}(\mathbf{x})=\log
p_{Z}(\mathbf{z})+\sum_{i=1}^{K}\log\left|\det\left(\frac{\partial\mathbf{f}_{i}}{\partial\mathbf{r}_{i-1}}\right)\right|,$
(1)
where $\mathbf{r}_{i}=\mathbf{f}_{i}(\mathbf{r}_{i-1})$,
$\mathbf{r}_{0}=\mathbf{x}$, and $\mathbf{r}_{K}=\mathbf{z}$.
Flow-based generative models [6, 7, 22] are developed on the theory of
normalizing flows. Each transformation function used in the models is a
specifically designed neural network that has a tractable Jacobian determinant
and inverse. We can sample from a trained flow $\mathbf{f}$ by computing
$\mathbf{z}\sim
p_{Z}(\mathbf{z}),\quad\mathbf{x}=\mathbf{f}^{-1}(\mathbf{z})$.
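As an illustration, training and sampling with Eq. (1) amount to accumulating per-layer log-determinants and inverting the layers in reverse order; a minimal sketch in Python (the layer interface, returning the output together with its log-determinant, is illustrative rather than taken from any specific library):

```python
import torch

def flow_log_prob(x, layers, prior):
    """Compute log p(x) via Eq. (1): pass x through f_1,...,f_K,
    accumulating each layer's log|det Jacobian|."""
    log_det = torch.zeros(x.shape[0], device=x.device)
    r = x
    for layer in layers:            # each layer returns (f_i(r), log|det df_i/dr|)
        r, ld = layer(r)
        log_det = log_det + ld
    z = r
    return prior.log_prob(z) + log_det   # log p_Z(z) + sum_i log|det|

def flow_sample(layers, prior, num):
    """Sample: z ~ p_Z, then invert the layers in reverse order."""
    z = prior.sample((num,))
    for layer in reversed(layers):
        z = layer.inverse(z)
    return z
```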
There have been many operations, i.e., layers, proposed in recent years for
generative flows. In this section, we discuss some commonly used ones, and
more related works will be discussed in Section 4.
Actnorm layers [22] perform per-channel affine transformations of the
activations using scale and bias parameters to improve training stability and
performance. The actnorm is formally expressed as
$\mathbf{y}_{:,i,j}=\mathbf{s}\odot\mathbf{x}_{:,i,j}+\mathbf{b}$, where both
the input $\mathbf{x}$ and the output $\mathbf{y}$ are $c\times h\times w$
tensors, $c$ is the channel dimension, and $h\times w$ are spatial dimensions.
The parameters $\mathbf{s}$ and $\mathbf{b}$ are $c\times 1$ vectors.
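For instance, an actnorm layer and its log-determinant contribution can be sketched as follows (a minimal illustration with the shapes above, not the original implementation):

```python
import torch

def actnorm(x, s, b):
    """Per-channel affine transform; x: (batch, c, h, w), s, b: (c,).
    The Jacobian is diagonal with each s_i repeated h*w times, so
    log|det| = h * w * sum_i log|s_i|."""
    bs, c, h, w = x.shape
    y = s.view(1, c, 1, 1) * x + b.view(1, c, 1, 1)
    log_det = h * w * torch.sum(torch.log(torch.abs(s)))
    return y, log_det
```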
Affine coupling layers [6, 7] split the input $\mathbf{x}$ into two parts, $\mathbf{x}_{a},\mathbf{x}_{b}$, then leave $\mathbf{x}_{a}$ unchanged and force $\mathbf{x}_{b}$ to depend only on $\mathbf{x}_{a}$, so that the Jacobian is a triangular matrix. Formally, we compute
$\displaystyle\mathbf{x}_{a},\mathbf{x}_{b}=\text{split}(\mathbf{x}),\quad\quad\quad\quad~{}~{}\mathbf{y}_{a}=\mathbf{x}_{a},$
$\displaystyle\mathbf{y}_{b}=\mathbf{s}(\mathbf{x}_{a})\odot\mathbf{x}_{b}+\mathbf{b}(\mathbf{x}_{a}),\quad\mathbf{y}=\text{concat}(\mathbf{y}_{a},\mathbf{y}_{b}),$
where $\mathbf{s}$ and $\mathbf{b}$ are two neural networks with $\mathbf{x}_{a}$ as input. The split and concat operations split and concatenate the variables along the channel axis. Usually, $\mathbf{s}$ is restricted to be positive. An additive coupling layer is the special case where $\mathbf{s}=\mathbf{1}$.
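A minimal PyTorch sketch of an affine coupling layer follows (the network `net` producing the scale and bias is arbitrary; parameterizing the scale through an exponential is one common choice, assumed here, that keeps $\mathbf{s}$ positive):

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """y_a = x_a;  y_b = s(x_a) * x_b + b(x_a), split along channels.
    `net` maps x_a to (log_s, b)."""
    def __init__(self, net):
        super().__init__()
        self.net = net

    def forward(self, x):
        xa, xb = x.chunk(2, dim=1)
        log_s, b = self.net(xa).chunk(2, dim=1)
        yb = torch.exp(log_s) * xb + b
        # Triangular Jacobian: log|det| is the sum of the log-scales.
        log_det = log_s.flatten(1).sum(dim=1)
        return torch.cat([xa, yb], dim=1), log_det

    def inverse(self, y):
        ya, yb = y.chunk(2, dim=1)
        log_s, b = self.net(ya).chunk(2, dim=1)
        xb = (yb - b) * torch.exp(-log_s)
        return torch.cat([ya, xb], dim=1)
```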
Actnorm layers only rescale the dimensions of $\mathbf{x}$, and affine
coupling layers only relate $\mathbf{x}_{b}$ to $\mathbf{x}_{a}$ but omit
dependencies among different dimensions of $\mathbf{x}_{b}$. Thus, we need
other layers to capture local dependencies among dimensions.
Invertible convolutional layers [22, 15, 9] are generalized permutation layers
that can capture correlations among dimensions. The 1$\times$1 convolution
[22] is $\mathbf{y}_{:,i,j}=\mathbf{Mx}_{:,i,j}$, where $\mathbf{M}$ is a
$c\times c$ matrix. The Jacobian of a 1$\times$1 convolution is a block
diagonal matrix, so that its log-determinant is $hw\log|\det(\mathbf{M})|$.
Note that the 1$\times$1 convolution only operates along the channel axis and
ignores the dependencies along the spatial axes.
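For concreteness, the 1$\times$1 convolution and its log-determinant can be sketched as (an illustration with explicit shapes, not the Glow reference code):

```python
import torch

def conv1x1(x, M):
    """x: (batch, c, h, w); M: (c, c).  y[:, :, i, j] = M @ x[:, :, i, j]."""
    b, c, h, w = x.shape
    y = torch.einsum('uc,bchw->buhw', M, x)
    log_det = h * w * torch.slogdet(M)[1]   # hw * log|det M|
    return y, log_det
```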
Figure 1: Overview of the architecture of generative flows: (a) flow step; (b) multi-scale architecture. We can design the flow step by selecting a suitable convolutional layer and a coupling layer based on the task. Glow [22] uses 1$\times$1 convolutions and affine coupling.
Emerging convolutions [15] combine two autoregressive convolutions [10, 23]. Each autoregressive convolution masks out some weights to force an autoregressive structure, so that the Jacobian is a triangular matrix and computing its determinant is efficient. One problem with emerging convolutions is that computing the inverse is not parallelizable, which makes them inefficient for high-dimensional variables.
Periodic convolutions [15, 9] transform the input and kernel to the Fourier
domain using discrete Fourier transformations, so the convolution function is
an element-wise matrix product with a block-diagonal Jacobian. The
computational cost of periodic convolutions is
$\mathcal{O}(chw\log(hw)+c^{3}hw)$. Thus, when the input is high-dimensional,
both training and sampling are expensive.
Multi-scale architectures [7] compose flow layers to generate rich models,
using _split layers_ to factor out variables and _squeeze layers_ to shuffle
dimensions, resulting in an architecture with $K$ flow steps and $L$ levels.
See Fig. 1.
## 3 Woodbury Transformations
In this section, we introduce Woodbury transformations as an efficient means
to model high-dimensional correlations.
### 3.1 Channel and Spatial Transformations
Suppose we reshape the input $\mathbf{x}$ to be a $c\times n$ matrix, where
$n=hw$. Then the 1$\times$1 convolution can be reinterpreted as a matrix
transformation
$\mathbf{y}=\mathbf{W}^{(c)}\mathbf{x},$ (2)
where $\mathbf{y}$ is also a $c\times n$ matrix, and $\mathbf{W}^{(c)}$ is a
$c\times c$ matrix. For consistency, we will call this a channel
transformation. For each column $\mathbf{x}_{:,i}$, the correlations among
channels are modeled by $\mathbf{W}^{(c)}$. However, the correlation between any two columns $\mathbf{x}_{:,i}$ and $\mathbf{x}_{:,j}$ is not captured.
Inspired by Eq. 2, we use a spatial transformation to model interactions among
dimensions along the spatial axis
$\mathbf{y}=\mathbf{xW}^{(s)},$ (3)
where $\mathbf{W}^{(s)}$ is an $n\times n$ matrix that models the correlations within each row $\mathbf{x}_{i,:}$. Combining Equation 2 and Equation 3, we have
$\displaystyle\mathbf{x}_{c}=\mathbf{W}^{(c)}\mathbf{x},~{}~{}~{}~{}~{}~{}~{}~{}\mathbf{y}=\mathbf{x}_{c}\mathbf{W}^{(s)}.$
(4)
For each dimension of the output $\mathbf{y}_{i,j}$, we have
$\mathbf{y}_{i,j}=\sum_{v=1}^{n}\left(\sum_{u=1}^{c}\mathbf{W}^{(c)}_{i,u}\cdot\mathbf{x}_{u,v}\right)\cdot\mathbf{W}^{(s)}_{v,j}$.
Therefore, the spatial and channel transformations together can model the
correlation between any pair of dimensions. However, in this preliminary form,
directly using Eq. 4 is inefficient for large $c$ or $n$. First, we would have
to store two large matrices $\mathbf{W}^{(c)}$ and $\mathbf{W}^{(s)}$, so the
space cost is $\mathcal{O}(c^{2}+n^{2})$. Second, the computational cost of
Eq. 4 is $\mathcal{O}(c^{2}n+n^{2}c)$—quadratic in the input size. Third, the
computational cost of the Jacobian determinant is $\mathcal{O}(c^{3}+n^{3})$,
which is far too expensive in practice.
### 3.2 Woodbury Transformations
We solve the three scalability problems by using a low-rank factorization.
Specifically, we define
$\displaystyle\mathbf{W}^{(c)}=\mathbf{I}^{(c)}+\mathbf{U}^{(c)}\mathbf{V}^{(c)},~{}~{}~{}~{}~{}~{}~{}~{}\mathbf{W}^{(s)}=\mathbf{I}^{(s)}+\mathbf{U}^{(s)}\mathbf{V}^{(s)},$
where $\mathbf{I}^{(c)}$ and $\mathbf{I}^{(s)}$ are $c$\- and $n$-dimensional
identity matrices, respectively. The matrices $\mathbf{U}^{(c)}$, $\mathbf{V}^{(c)}$, $\mathbf{U}^{(s)}$, and $\mathbf{V}^{(s)}$ are of size $c\times d_{c}$, $d_{c}\times c$, $n\times d_{s}$, and $d_{s}\times n$, respectively,
where $d_{c}$ and $d_{s}$ are constant latent dimensions of these four
matrices. Therefore, we can rewrite Equation 4 as
$\displaystyle\mathbf{x}_{c}=(\mathbf{I}^{(c)}+\mathbf{U}^{(c)}\mathbf{V}^{(c)})\mathbf{x},~{}~{}~{}~{}~{}~{}~{}~{}\mathbf{y}=\mathbf{x}_{c}(\mathbf{I}^{(s)}+\mathbf{U}^{(s)}\mathbf{V}^{(s)}).$
(5)
We call Eq. 5 the Woodbury transformation because the Woodbury matrix identity
[37] and Sylvester’s determinant identity [35] allow efficient computation of
its inverse and Jacobian determinant.
Woodbury matrix identity.111A more general version replaces $\mathbf{I}^{(n)}$
and $\mathbf{I}^{(k)}$ with arbitrary invertible $n\times n$ and $k\times k$
matrices. But this simplified version is sufficient for our tasks. Let
$\mathbf{I}^{(n)}$ and $\mathbf{I}^{(k)}$ be $n$\- and $k$-dimensional
identity matrices, respectively. Let $\mathbf{U}$ and $\mathbf{V}$ be $n\times
k$ and $k\times n$ matrices, respectively. If $\mathbf{I}^{(k)}+\mathbf{VU}$
is invertible, then
$(\mathbf{I}^{(n)}+\mathbf{UV})^{-1}=\mathbf{I}^{(n)}-\mathbf{U}(\mathbf{I}^{(k)}+\mathbf{VU})^{-1}\mathbf{V}$.
Sylvester’s determinant identity. Let $\mathbf{I}^{(n)}$ and
$\mathbf{I}^{(k)}$ be $n$\- and $k$-dimensional identity matrices,
respectively. Let $\mathbf{U}$ and $\mathbf{V}$ be $n\times k$ and $k\times n$
matrices, respectively. Then,
$\det(\mathbf{I}^{(n)}+\mathbf{UV})=\det(\mathbf{I}^{(k)}+\mathbf{VU})$.
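Both identities are easy to verify numerically; a quick sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 100, 8
U = rng.normal(size=(n, k))
V = rng.normal(size=(k, n))
I_n, I_k = np.eye(n), np.eye(k)

# Woodbury matrix identity: invert an n x n matrix via a k x k inverse.
lhs = np.linalg.inv(I_n + U @ V)
rhs = I_n - U @ np.linalg.inv(I_k + V @ U) @ V
assert np.allclose(lhs, rhs)

# Sylvester's determinant identity: an n x n determinant via a k x k one.
assert np.isclose(np.linalg.det(I_n + U @ V), np.linalg.det(I_k + V @ U))
```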
Based on these two identities, we can efficiently compute the inverse and
Jacobian determinant
$\displaystyle\mathbf{x}_{c}=\mathbf{y}(\mathbf{I}^{(s)}-\mathbf{U}^{(s)}(\mathbf{I}^{(d_{s})}+\mathbf{V}^{(s)}\mathbf{U}^{(s)})^{-1}\mathbf{V}^{(s)}),$
$\displaystyle\mathbf{x}=(\mathbf{I}^{(c)}-\mathbf{U}^{(c)}(\mathbf{I}^{(d_{c})}+\mathbf{V}^{(c)}\mathbf{U}^{(c)})^{-1}\mathbf{V}^{(c)})\mathbf{x}_{c},$
(6)
and
$\displaystyle\log\left|\det\left(\frac{\partial\mathbf{y}}{\partial\mathbf{x}}\right)\right|$
$\displaystyle=n\log\left|\det(\mathbf{I}^{(d_{c})}+\mathbf{V}^{(c)}\mathbf{U}^{(c)})\right|+c\log\left|\det(\mathbf{I}^{(d_{s})}+\mathbf{V}^{(s)}\mathbf{U}^{(s)})\right|,$
(7)
where $\mathbf{I}^{(d_{c})}$ and $\mathbf{I}^{(d_{s})}$ are $d_{c}$\- and
$d_{s}$-dimensional identity matrices, respectively.
A Woodbury transformation is also a generalized permutation layer. We can
directly replace an invertible convolution in Figure 1 with a Woodbury
transformation. In contrast with 1$\times$1 convolutions, Woodbury
transformations are able to model correlations along both channel and spatial
axes. We illustrate this in Figure 2. To implement Woodbury transformations,
we need to store four weight matrices, i.e.,
$\mathbf{U}^{(c)},\mathbf{U}^{(s)},\mathbf{V}^{(c)}$, and $\mathbf{V}^{(s)}$.
To simplify our analysis, let $d_{c}\leq d$ and $d_{s}\leq d$, where $d$ is a
constant. This setting is also consistent with our experiments. The size of
$\mathbf{U}^{(c)}$ and $\mathbf{V}^{(c)}$ is $\mathcal{O}(dc)$, and the size of $\mathbf{U}^{(s)}$ and $\mathbf{V}^{(s)}$ is $\mathcal{O}(dn)$. The space
complexity is $\mathcal{O}(d(c+n))$.
For training and likelihood computation, the main computational bottleneck is
computing $\mathbf{y}$ and the Jacobian determinant. To compute $\mathbf{y}$
with Equation 5, we need to first compute the channel transformation and then
compute the spatial transformation. The computational complexity is
$\mathcal{O}(dcn)$. To compute the determinant with Equation 7, we need to
first compute the matrix product of $\mathbf{V}$ and $\mathbf{U}$, and then
compute the determinant. The computational complexity is
$\mathcal{O}(d^{2}(c+n)+d^{3})$.
For sampling, we need to compute the inverse transformations, i.e., Equation
6. With the Woodbury identity, we actually only need to compute the inverses
of $\mathbf{I}^{(d_{s})}+\mathbf{V}^{(s)}\mathbf{U}^{(s)}$ and
$\mathbf{I}^{(d_{c})}+\mathbf{V}^{(c)}\mathbf{U}^{(c)}$, which are computed
with time complexity $\mathcal{O}(d^{3})$. To implement the inverse
transformations, we can compute the matrix chain multiplication, so we can
avoid computing the product of two large matrices twice, yielding cost
$\mathcal{O}(c^{2}+n^{2})$. For example, for the inverse spatial
transformation, we can compute it as
$\mathbf{x}_{c}=\mathbf{y}-((\mathbf{yU}^{(s)})(\mathbf{I}^{(d_{s})}+\mathbf{V}^{(s)}\mathbf{U}^{(s)})^{-1})\mathbf{V}^{(s)}$,
so that its complexity is $\mathcal{O}(d^{3}+cd^{2}+cnd)$. The total
computational complexity of Equation 6 is $\mathcal{O}(dcn+d^{2}(n+c)+d^{3})$.
In practice, we found that for a high-dimensional input, a relatively small $d$ is enough to obtain good performance, e.g., $d=16$ for $256\times 256\times 3$ images. In this situation, $nc\geq d^{3}$. Therefore, we can treat $d$ as a constant and view the space complexity as approximately $\mathcal{O}(c+n)$ and the cost of the forward or inverse transformation as $\mathcal{O}(nc)$, both linear in the input size.
We do not restrict $\mathbf{U}$ and $\mathbf{V}$ to force $\mathbf{W}$ to be
invertible. Based on analysis by Hoogeboom et al. [15], the training maximizes
the log-likelihood, which implicitly pushes $\det(\mathbf{I}+\mathbf{VU})$
away from $0$. Therefore, it is not necessary to explicitly force
invertibility. In our experiments, the Woodbury transformations are as robust
as other invertible convolution layers.
Figure 2: Visualization of three transformations: (a) 1$\times$1 convolution, (b) Woodbury, (c) ME-Woodbury. The 1$\times$1 convolution only operates along the channel axis. The Woodbury transformation operates along both the channel and spatial axes, modeling the dependencies of one channel directly via one transformation. The ME-Woodbury transformation operates along three axes, using two transformations to model spatial dependencies.
### 3.3 Memory-Efficient Variant
In Eq. 5, one potential challenge arises from the sizes of $\mathbf{U}^{(s)}$ and $\mathbf{V}^{(s)}$, which are linear in $n$. The challenge is that $n$
may be large in some practical problems, e.g., high-resolution images. We
develop a memory-efficient variant of Woodbury transformations, i.e., ME-
Woodbury, to solve this problem. The ME version can effectively reduce space
complexity from $\mathcal{O}(d(c+hw))$ to $\mathcal{O}(d(c+h+w))$.
The difference between ME-Woodbury transformations and Woodbury
transformations is that the ME form cannot directly model spatial
correlations. As shown in Figure 2(c), it uses two transformations, for height
and width, together to model the spatial correlations. Therefore, for a
specific channel $k$, when two dimensions $\mathbf{x}_{k,i,j}$ and
$\mathbf{x}_{k,u,v}$ are in two different heights, and widths, their
interaction will be modeled indirectly. In our experiments, we found that this
limitation only slightly impacts ME-Woodbury’s performance. More details on
ME-Woodbury transformations are in the appendix.
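As a rough illustration of the idea (the exact form is given in the appendix; the factorization below is our reading of Fig. 2(c) and should be treated as an assumption), one can apply a low-rank transformation along each of the channel, height, and width axes:

```python
import torch

def me_woodbury_forward(x, Uc, Vc, Uh, Vh, Uw, Vw):
    """Sketch of an ME-Woodbury-style forward pass on a (batch, c, h, w)
    tensor: one channel transform plus separate height and width
    transforms, so parameter count is O(d(c + h + w)) rather than
    O(d(c + hw)).  Shapes: Uc (c, d), Vc (d, c), Uh (h, d), Vh (d, h),
    Uw (w, d), Vw (d, w)."""
    x = x + torch.einsum('ud,dc,bchw->buhw', Uc, Vc, x)   # channel axis
    x = x + torch.einsum('ud,dh,bchw->bcuw', Uh, Vh, x)   # height axis
    x = x + torch.einsum('ud,dw,bchw->bchu', Uw, Vw, x)   # width axis
    return x
```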
## 4 Related Work
Rezende and Mohamed [30] developed planar flows for variational inference
$\mathbf{z}_{t+1}=\mathbf{z}_{t}+\mathbf{u}\delta(\mathbf{w}^{T}\mathbf{z}_{t}+b)$,
where $\mathbf{z}$, $\mathbf{w}$, and $\mathbf{u}$ are $d$-dimensional
vectors, $\delta()$ is an activation function, and $b$ is a scalar.
Berg et al. [3] generalized these to Sylvester flows
$\mathbf{z}_{t+1}=\mathbf{z}_{t}+\mathbf{QR}\delta(\tilde{\mathbf{R}}\mathbf{Q}^{T}\mathbf{z}_{t}+\mathbf{r})$,
where $\mathbf{R}$ and $\tilde{\mathbf{R}}$ are upper triangular matrices,
$\mathbf{Q}$ is composed of a set of orthonormal vectors, and $\mathbf{r}$ is
a $d$-dimensional vector. The resulting Jacobian determinant can be
efficiently computed via Sylvester’s identity, just as our methods do.
However, Woodbury transformations have key differences from Sylvester flows.
First, Berg et al. only analyze their models on vectors. The inputs to our
layers are matrices, so our method operates on high-dimensional input, e.g.,
images. Second, though Sylvester flows are inverse functions, computing their
inverse is difficult. One possible way is to apply iterative methods [2, 34,
5] to compute the inverse. But this research direction is unexplored. Our
layers can be inverted efficiently with the Woodbury identity. Third, our
layers do not restrict the transformation matrices to be triangular or
orthogonal. In fact, Woodbury transformations can be seen as another
generalized variant of planar flows on matrices, with
$\delta(\mathbf{x})=\mathbf{x}$, and whose inverse is tractable. Roughly
speaking, Woodbury transformations can also be viewed as applying the planar
flows sequentially to each row of the input matrix. After this work was
completed and submitted, we learned that the TensorFlow software [1] also uses
the Woodbury identity in their affine bijector.
Normalizing flows have also been used for variational inference, density
estimation, and generative modeling. Autoregressive flows [23, 29, 17, 25]
restrict each variable to depend on those that precede it in a sequence,
forcing a triangular Jacobian. Non-linear coupling layers replace the affine
transformation function. Specifically, spline flows [26, 8] use spline
interpolation, and Flow++ [14] uses a mixture cumulative distribution function
to define these functions. Flow++ also uses variational dequantization to
prevent model collapse. Many works [22, 15, 9, 18] develop invertible
convolutional flows to model interactions among dimensions. MintNet [34] is a
flexible architecture composed of multiple masked invertible layers. I-ResNet
[2, 5] uses a discriminative deep network architecture as the flow. These two
models require iterative methods to compute the inverse. Discrete flows [36,
16] and latent flows [41] can be applied to discrete data such as text.
Continuous-time flows [4, 12] have been developed based on the theory of
ordinary differential equations.
## 5 Experiments
In this section, we compare the performance of Woodbury transformations
against other modern flow architectures, measuring running time, bits per dimension (negative $\log_{2}$-likelihood per dimension), and sample quality.
Figure 3: Running time comparison. Sampling with emerging convolutions is
slow, since their inverses are not parallelizable. Periodic convolutions are
costly for larger inputs. Both 1$\times$1 convolutions and Woodbury
transformations are efficient in training and sampling.
Running Time We follow Finz et al. [9] and compare the per-sample running time
of Woodbury transformations to other generalized permutations: 1$\times$1
[22], emerging [15], and periodic convolutions [15, 9]. We test the training
time and sampling time. In training, we compute (1) forward propagation, i.e.,
$\mathbf{y}=\mathbf{f}(\mathbf{x})$, of a given function $\mathbf{f}()$, (2)
the Jacobian determinant, i.e.,
$\det\left(\left|\frac{\partial\mathbf{y}}{\partial\mathbf{x}}\right|\right)$,
and (3) the gradient of parameters. For sampling, we compute the inverse of
transformation $\mathbf{x}=\mathbf{f}^{-1}(\mathbf{y})$. For emerging and
periodic convolutions, we use $3\times 3$ kernels. For Woodbury
transformations, we fix the latent dimension $d=16$. For fair comparison, we
implement all methods in Pytorch and run them on an Nvidia Titan V GPU. We
follow Hoogeboom et al. [15] and implement the emerging convolution inverse in
Cython, and we compute it on a 4 Ghz CPU (the GPU version is slower than the
Cython version). We first fix the spatial size to be $64\times 64$ and vary
the channel number. We then fix the channel number to be $96$ and vary the
spatial size.
The results are shown in Figure 3. For training, the emerging convolution is
the fastest. This is because its Jacobian is a triangular matrix, so computing
its determinant is much more efficient than other methods. The Woodbury
transformation and ME-Woodbury are slightly slower than the 1x1 convolution,
since they contain more transformations. Emerging convolutions, Woodbury
transformations, and 1x1 convolutions only slightly increase with input size,
rather than increasing with $\mathcal{O}(c^{3})$. This invariance to input
size is likely because of how the GPU parallelizes computation. The periodic
convolution is efficient only when the input size is small. When the size is
large, it becomes slow, e.g., when the input size is $96\times 64\times 64$,
it is around $30$ times slower than Woodbury transformations. In our
experiments, we found that the Fourier transformation requires a large amount
of memory. According to Finz et al. [9], the Fourier step may be the
bottleneck that impacts periodic convolution’s scalability. A more efficient
implementation of Fourier transformation, e.g., [18], may improve its running
time.
For sampling, both 1$\times$1 convolutions and Woodbury transformations are
efficient. The 1$\times$1 convolution is the fastest, and the Woodbury
transformations are only slightly slower. Neither is sensitive to the change
of input size. Emerging convolutions and periodic convolutions are much slower
than Woodbury transformations, and their running time increases with the input
size. When the input size is $96\times 128\times 128$, they are around $100$
to $200$ times slower than Woodbury transformations. This difference is
because emerging convolutions cannot make use of parallelization, and periodic
transformations require conversion to Fourier form. Based on these results, we
can conclude that neither emerging convolutions nor periodic convolutions scale well to high-dimensional inputs. In contrast, Woodbury transformations
are efficient in both training and sampling.
Quantitative Evaluation We compare Woodbury transformations with state-of-the-
art flow models, measuring bits per dimension (bpd). We train with the CIFAR-10
[24] and ImageNet [32] datasets. We compare with three generalized permutation
methods—1$\times$1 convolution, emerging convolution, and periodic
convolution—and two coupling layers—neural spline coupling [8] and MaCow [25].
We use Glow (Fig. 1, [22]) as the basic flow architecture. For each method, we
replace the corresponding layer. For example, to construct a flow with
Woodbury transformations, we replace the 1$\times$1 convolution with a
Woodbury transformation, i.e., Eq. 4. For all generalized permutation methods,
we use affine coupling. For each of the coupling layer baselines, we
substitute it for the affine coupling. We tune the parameters of neural spline
coupling and MaCow so that their sizes are close to affine coupling. We follow
Hoogeboom et al. [15] and test the performance of small models. For $32\times
32$ images, we set the number of levels to $L=3$ and the number of steps per-
level to $K=8$. For $64\times 64$ images, we use $L=4$ and $K=16$. More
details are in the appendix.
Table 1: Quantitative evaluation results: bits per dimension (bpd) and model sizes (number of parameters).

| | CIFAR-10 32x32 (bpd) | ImageNet 32x32 (bpd) | ImageNet 64x64 (bpd) | # params (32x32 images) | # params (64x64 images) |
|---|---|---|---|---|---|
| 1$\times$1 convolution | 3.51 | 4.32 | 3.94 | 11.02M | 37.04M |
| Emerging | 3.48 | 4.26 | 3.91 | 11.43M | 40.37M |
| Periodic | 3.49 | 4.28 | 3.92 | 11.21M | 38.61M |
| Neural spline | 3.50 | 4.24 | 3.95 | 10.91M | 38.31M |
| MaCow | 3.48 | 4.34 | 4.15 | 11.43M | 37.83M |
| ME-Woodbury | 3.48 | 4.22 | 3.91 | 11.02M | 36.98M |
| Woodbury | 3.47 | 4.20 | 3.87 | 11.10M | 37.60M |
Figure 4: Random samples ($64\times 64$) drawn from models trained on CelebA with temperature $0.7$. Panels show samples from Woodbury-Glow and Glow at iterations 50,000, 100,000, and 600,000.
The test-set likelihoods are listed in Table 1 left. Our scores are worse than
those reported by Kingma and Dhariwal [22], Hoogeboom et al. [15] because we
use smaller models and train each model on a single GPU. Based on the scores,
1$\times$1 convolutions perform the worst. Emerging convolutions and periodic
convolutions score better than the 1$\times$1 convolutions, since they are
more flexible and can model the dependencies along the spatial axes. Neural
spline coupling works well on $32\times 32$ images, but do slightly worse than
1$\times$1 convolution on $64\times 64$ images. MaCow does not work well on
ImageNet. This trend demonstrates the importance of permutation layers. They
can model the interactions among dimensions and shuffle them, which coupling
layers cannot do. Without a good permutation layer, a better coupling layer
still cannot always improve the performance. The Woodbury transformation
models perform the best, likely because they can model the interactions
between the target dimension and all other dimensions, while the invertible
convolutions only model the interactions between the target dimension and its
neighbors. ME-Woodbury performs only slightly worse than the full version,
showing that its restrictions provide a useful tradeoff between model quality
and efficiency.
We list model sizes in Table 1 (right). Despite modeling rich interactions,
Woodbury transformations are not the largest. With $32\times 32$ images, ME-
Woodbury and 1$\times$1 convolution are the same size. When the image size is
$64\times 64$, ME-Woodbury is the smallest. This is because we use the multi-
scale architecture, i.e., Fig. 1, to combine layers. The squeeze layer doubles
the input variable’s channels at each level, so larger $L$ suggests larger
$c$. The space complexities of invertible convolutions are
$\mathcal{O}(c^{2})$, while the space complexity of ME-Woodbury is linear in $c$. When $c$ is large, the weight matrices of invertible convolutions are
larger than the weight matrices of ME-Woodbury.
Table 2: Evaluation of different $d$ (bpd).

| | Woodbury | ME-Woodbury |
|---|---|---|
| $d=2$ | 3.54 | 3.53 |
| $d=4$ | 3.51 | 3.51 |
| $d=8$ | 3.48 | 3.48 |
| $d=16$ | 3.47 | 3.48 |
| $d=32$ | 3.47 | 3.48 |
Latent Dimension Evaluation We test the impact of the latent dimension $d$ on the performance of Woodbury-Glow. We train our models on CIFAR-10 and use bpd as the metric, varying $d$ within $\\{2,4,8,16,32\\}$. The results are in Table 2. When $d<8$, model performance degrades; when $d>16$, increasing $d$ does not improve the bpd. This is probably because when $d$ is too small, the latent features cannot represent the input variables well, and when $d$ is too big, the models become hard to train. When $8\leq d\leq 16$, the Woodbury transformations are powerful enough to model the interactions among dimensions. We also test two values of $d$, i.e., $16$ and $32$, for Woodbury-Glow on ImageNet $64\times 64$. The bpd for both is $3.87$, which is consistent with our conclusion.
Figure 5: Learning curves on CelebA-HQ 64x64. The NLL of Woodbury-Glow decreases faster than that of Glow.
Sample Quality Comparisons We train Glow and Woodbury-Glow on the CelebA-HQ
dataset [19]. We use $5$-bit images and set the size of images to be $64\times
64$, $128\times 128$, and $256\times 256$. Due to our limited computing
resources, we use relatively small models in our experiments. We follow Kingma
and Dhariwal [22] and choose a temperature parameter to encourage higher
quality samples. Detailed parameter settings are in the appendix. We compare
samples from Glow and Woodbury-Glow during three phases of training, displayed
in Fig. 4. The samples show a clear trend where Woodbury-Glow more quickly
learns to generate reasonable face shapes. After 100,000 iterations, it can
already generate reasonable samples, while Glow’s samples are heavily
distorted. Woodbury-Glow samples are consistently smoother and more realistic
than samples from Glow in all phases of training. The samples demonstrate
Woodbury transformations’ advantages. The learning curves in Figure 5 also
show that the NLL of Woodbury-Glow decreases faster, which is consistent with
the sample comparisons. In the appendix, we show analogous comparisons using
higher resolution versions of CelebA data, which also exhibit the trend of
Woodbury-Glow generating more realistic images than Glow at the same training
iterations.
## 6 Conclusion
In this paper, we develop Woodbury transformations, which use the Woodbury
matrix identity to compute the inverse transformations and Sylvester’s
determinant identity to compute Jacobian determinants. Our method has the same advantage as invertible $d\times d$ convolutions in that it can capture correlations among all dimensions. In contrast to the invertible $d\times d$ convolutions, our method is parallelizable and its computational complexity is linear in the input size, so it remains computationally efficient when the input is high-dimensional. One potential limitation is that Woodbury transformations do not have a parameter-sharing scheme as convolutional layers do, so one direction for future research is to develop partial Woodbury transformations that share parameters. We test our models on multiple image datasets, and they outperform state-of-the-art methods.
## Broader Impact
This paper presents fundamental research on increasing the expressiveness of
deep probabilistic models. Its impact is therefore linked to the various
applications of such models. By enriching the class of complex deep models for
which we can train with exact likelihood, we may enable a wide variety of
applications that can benefit from modeling of uncertainty. However, a
potential danger of this research is that deep generative models have been
recently applied to synthesize realistic images and text, which can be used
for misinformation campaigns.
## Acknowledgments
We thank NVIDIA’s GPU Grant Program and Amazon’s AWS Cloud Credits for
Research program for their support. The work was completed while both authors
were affiliated with the Virginia Tech Department of Computer Science. Bert
Huang was partially supported by an Amazon Research Award and a grant from the
U.S. Department of Transportation Safe-D Program for work on separate projects
not directly related to this paper.
## References
* Abadi et al. [2016] Martín Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, et al. Tensorflow: A system for large-scale machine learning. In _12th USENIX Symposium on Operating Systems Design and Implementation_ , pages 265–283, 2016.
* Behrmann et al. [2018] Jens Behrmann, Will Grathwohl, Ricky TQ Chen, David Duvenaud, and Jörn-Henrik Jacobsen. Invertible residual networks. _arXiv preprint arXiv:1811.00995_ , 2018.
* Berg et al. [2018] Rianne van den Berg, Leonard Hasenclever, Jakub M Tomczak, and Max Welling. Sylvester normalizing flows for variational inference. _arXiv preprint arXiv:1803.05649_ , 2018.
* Chen et al. [2018] Tian Qi Chen, Yulia Rubanova, Jesse Bettencourt, and David K Duvenaud. Neural ordinary differential equations. In _Advances in Neural Information Processing Systems_ , pages 6571–6583, 2018.
* Chen et al. [2019] Tian Qi Chen, Jens Behrmann, David K Duvenaud, and Jörn-Henrik Jacobsen. Residual flows for invertible generative modeling. In _Advances in Neural Information Processing Systems_ , pages 9913–9923, 2019.
* Dinh et al. [2014] Laurent Dinh, David Krueger, and Yoshua Bengio. NICE: Non-linear independent components estimation. _arXiv preprint arXiv:1410.8516_ , 2014.
* Dinh et al. [2016] Laurent Dinh, Jascha Sohl-Dickstein, and Samy Bengio. Density estimation using real NVP. _arXiv preprint arXiv:1605.08803_ , 2016.
* Durkan et al. [2019] Conor Durkan, Artur Bekasov, Iain Murray, and George Papamakarios. Neural spline flows. _arXiv preprint arXiv:1906.04032_ , 2019.
* Finz et al. [2019] Marc Finz, Pavel Izmailov, Wesley Maddox, Polina Kirichenko, and Andrew Gordon Wilson. Invertible convolutional networks. In _ICML Workshop on Invertible Neural Networks and Normalizing Flows_ , 2019.
* Germain et al. [2015] Mathieu Germain, Karol Gregor, Iain Murray, and Hugo Larochelle. Made: Masked autoencoder for distribution estimation. In _International Conference on Machine Learning_ , pages 881–889, 2015.
* Goodfellow et al. [2014] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In _Advances in Neural Information Processing Systems_ , pages 2672–2680, 2014.
* Grathwohl et al. [2018] Will Grathwohl, Ricky TQ Chen, Jesse Betterncourt, Ilya Sutskever, and David Duvenaud. Ffjord: Free-form continuous dynamics for scalable reversible generative models. _arXiv preprint arXiv:1810.01367_ , 2018.
* Graves [2013] Alex Graves. Generating sequences with recurrent neural networks. _arXiv preprint arXiv:1308.0850_ , 2013.
* Ho et al. [2019] Jonathan Ho, Xi Chen, Aravind Srinivas, Yan Duan, and Pieter Abbeel. Flow++: Improving flow-based generative models with variational dequantization and architecture design. _arXiv preprint arXiv:1902.00275_ , 2019.
* Hoogeboom et al. [2019a] Emiel Hoogeboom, Rianne van den Berg, and Max Welling. Emerging convolutions for generative normalizing flows. _arXiv preprint arXiv:1901.11137_ , 2019a.
* Hoogeboom et al. [2019b] Emiel Hoogeboom, Jorn WT Peters, Rianne van den Berg, and Max Welling. Integer discrete flows and lossless compression. _arXiv preprint arXiv:1905.07376_ , 2019b.
* Huang et al. [2018] Chin-Wei Huang, David Krueger, Alexandre Lacoste, and Aaron Courville. Neural autoregressive flows. _arXiv preprint arXiv:1804.00779_ , 2018.
* Karami et al. [2019] Mahdi Karami, Dale Schuurmans, Jascha Sohl-Dickstein, Laurent Dinh, and Daniel Duckworth. Invertible convolutional flow. In _Advances in Neural Information Processing Systems_ , pages 5636–5646, 2019.
* Karras et al. [2017] Tero Karras, Timo Aila, Samuli Laine, and Jaakko Lehtinen. Progressive growing of gans for improved quality, stability, and variation. _arXiv preprint arXiv:1710.10196_ , 2017.
* Kingma and Ba [2014] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. _arXiv preprint arXiv:1412.6980_ , 2014.
* Kingma and Welling [2013] Diederik P Kingma and Max Welling. Auto-encoding variational Bayes. _arXiv preprint arXiv:1312.6114_ , 2013.
* Kingma and Dhariwal [2018] Durk P Kingma and Prafulla Dhariwal. Glow: Generative flow with invertible 1x1 convolutions. In _Advances in Neural Information Processing Systems_ , pages 10215–10224, 2018.
* Kingma et al. [2016] Durk P Kingma, Tim Salimans, Rafal Jozefowicz, Xi Chen, Ilya Sutskever, and Max Welling. Improved variational inference with inverse autoregressive flow. In _Advances in Neural Information Processing Systems_ , pages 4743–4751, 2016.
* Krizhevsky et al. [2009] Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. Technical report, Citeseer, 2009.
* Ma et al. [2019] Xuezhe Ma, Xiang Kong, Shanghang Zhang, and Eduard Hovy. Macow: Masked convolutional generative flow. In _Advances in Neural Information Processing Systems_ , pages 5891–5900. 2019.
* Müller et al. [2019] Thomas Müller, Brian McWilliams, Fabrice Rousselle, Markus Gross, and Jan Novák. Neural importance sampling. _ACM Transactions on Graphics (TOG)_ , 38(5):1–19, 2019.
* Oord et al. [2016a] Aaron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalchbrenner, Andrew Senior, and Koray Kavukcuoglu. Wavenet: A generative model for raw audio. _arXiv preprint arXiv:1609.03499_ , 2016a.
* Oord et al. [2016b] Aaron van den Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural networks. _arXiv preprint arXiv:1601.06759_ , 2016b.
* Papamakarios et al. [2017] George Papamakarios, Theo Pavlakou, and Iain Murray. Masked autoregressive flow for density estimation. In _Advances in Neural Information Processing Systems_ , pages 2338–2347, 2017.
* Rezende and Mohamed [2015] Danilo Jimenez Rezende and Shakir Mohamed. Variational inference with normalizing flows. _arXiv preprint arXiv:1505.05770_ , 2015.
* Rezende et al. [2014] Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. _arXiv preprint arXiv:1401.4082_ , 2014.
* Russakovsky et al. [2015] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. Imagenet large scale visual recognition challenge. _International Journal of Computer Vision_ , 115(3):211–252, 2015.
* Sohn et al. [2015] Kihyuk Sohn, Honglak Lee, and Xinchen Yan. Learning structured output representation using deep conditional generative models. In _Advances in Neural Information Processing Systems_ , pages 3483–3491, 2015.
* Song et al. [2019] Yang Song, Chenlin Meng, and Stefano Ermon. Mintnet: Building invertible neural networks with masked convolutions. In _Advances in Neural Information Processing Systems_ , pages 11002–11012, 2019.
* Sylvester [1851] James Joseph Sylvester. On the relation between the minor determinants of linearly equivalent quadratic functions. _The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science_ , 1(4):295–305, 1851.
* Tran et al. [2019] Dustin Tran, Keyon Vafa, Kumar Krishna Agrawal, Laurent Dinh, and Ben Poole. Discrete flows: Invertible generative models of discrete data. _arXiv preprint arXiv:1905.10347_ , 2019.
* Woodbury [1950] Max A Woodbury. Inverting modified matrices. 1950.
* Yu et al. [2015] Fisher Yu, Yinda Zhang, Shuran Song, Ari Seff, and Jianxiong Xiao. Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. _arXiv preprint arXiv:1506.03365_ , 2015.
* Yu et al. [2017] Lantao Yu, Weinan Zhang, Jun Wang, and Yong Yu. Seqgan: Sequence generative adversarial nets with policy gradient. In _Thirty-First AAAI Conference on Artificial Intelligence_ , 2017.
* Zhu et al. [2017] Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. In _Proceedings of International Conference on Computer Vision_ , pages 2223–2232, 2017.
* Ziegler and Rush [2019] Zachary M Ziegler and Alexander M Rush. Latent normalizing flows for discrete sequences. _arXiv preprint arXiv:1901.10548_ , 2019.
## Appendix A More Background
In this section, we introduce more detailed background knowledge.
### A.1 Normalizing Flows
Let $\mathbf{x}$ be a high-dimensional continuous variable. We suppose that
$\mathbf{x}$ is drawn from $p^{*}(\mathbf{x})$, which is the true data
distribution. Given a collected dataset
$\mathcal{D}=\{\mathbf{x}_{1},\mathbf{x}_{2},\ldots,\mathbf{x}_{D}\}$, we are
interested in approximating $p^{*}(\mathbf{x})$ with a model
$p_{\theta}(\mathbf{x})$. We optimize $\theta$ by minimizing the negative log-
likelihood
$\mathcal{L}(\mathcal{D})=\sum_{i=1}^{D}-\log p_{\theta}(\mathbf{x}_{i}).$ (8)
For some settings, variable $\tilde{\mathbf{x}}$ is discrete, e.g., image
pixel values are often integers. In these cases, we dequantize
$\tilde{\mathbf{x}}$ by adding continuous noise $\bm{\mu}$ to it, resulting in
a continuous variable $\mathbf{x}=\tilde{\mathbf{x}}+\bm{\mu}$. As shown by Ho
et al. [14], the log-likelihood of $\tilde{\textbf{x}}$ is lower-bounded by
the log-likelihood of $\mathbf{x}$.
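As a concrete illustration of this dequantization step, here is a minimal NumPy sketch (the function name and array handling are ours, not part of any referenced codebase):

```python
import numpy as np

def dequantize(x_int):
    """Turn integer pixels into a continuous variable by adding uniform
    noise in [0, 1); Ho et al. [14] show the continuous log-likelihood
    of the result lower-bounds the discrete log-likelihood of x_int."""
    return x_int.astype(np.float64) + np.random.uniform(size=x_int.shape)
```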
Normalizing flows enable computation of $p_{\theta}(\mathbf{x})$, even though
it is usually intractable for many other model families. A normalizing flow
[30] is composed of a series of invertible functions
$\mathbf{f}=\mathbf{f}_{1}\circ\mathbf{f}_{2}\circ...\circ\mathbf{f}_{K}$,
which transform $\mathbf{x}$ to a latent code $\mathbf{z}$ drawn from a simple
distribution. Therefore, with the _change of variables_ formula, we can
rewrite $\log p_{\theta}(\mathbf{x})$ to be
$\log p_{\theta}(\mathbf{x})=\log p_{Z}(\mathbf{z})+\sum_{i=1}^{K}\log\left|\det\left(\frac{\partial\mathbf{f}_{i}}{\partial\mathbf{r}_{i-1}}\right)\right|,$ (9)
where $\mathbf{r}_{i}=\mathbf{f}_{i}(\mathbf{r}_{i-1})$,
$\mathbf{r}_{0}=\mathbf{x}$, and $\mathbf{r}_{K}=\mathbf{z}$.
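As an illustration, the accumulation in Eq. (9) can be sketched in a few lines of NumPy; the single elementwise affine flow and the standard-normal base density below are illustrative stand-ins, not the layers studied in this work:

```python
import numpy as np

def flow_log_likelihood(x, flows, base_log_prob):
    """Accumulate Eq. (9): log p(x) = log p_Z(z) + sum_i log|det df_i/dr_{i-1}|.

    flows: list of functions r -> (f(r), log|det Jacobian|) for one sample.
    base_log_prob: log-density of the base distribution.
    """
    r, total_log_det = x, 0.0
    for f in flows:
        r, log_det = f(r)          # r_i = f_i(r_{i-1})
        total_log_det += log_det
    return base_log_prob(r) + total_log_det

# Example: an elementwise affine flow r -> a*r + b with known log-det.
a, b = 2.0, 0.5
affine = lambda r: (a * r + b, r.size * np.log(abs(a)))
std_normal = lambda z: -0.5 * np.sum(z**2 + np.log(2 * np.pi))
print(flow_log_likelihood(np.zeros(3), [affine], std_normal))
```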
### A.2 Invertible $d\times d$ Convolutions
Emerging convolutions [15] combine two autoregressive convolutions [10, 23].
Formally,
$\mathbf{M}^{\prime}_{1}=\mathbf{M}_{1}\odot\mathbf{A}_{1},\qquad\mathbf{M}^{\prime}_{2}=\mathbf{M}_{2}\odot\mathbf{A}_{2},\qquad\mathbf{y}=\mathbf{M}^{\prime}_{2}\star(\mathbf{M}^{\prime}_{1}\star\mathbf{x}),$
where $\mathbf{M}_{1},\mathbf{M}_{2}$ are convolutional kernels whose size is
$c\times c\times d\times d$, and $\mathbf{A}_{1},\mathbf{A}_{2}$ are binary
masks. The symbol $\star$ represents the convolution operator. (In practice, a convolutional layer is usually implemented as an aggregation of cross-correlations; we follow Hoogeboom et al. [15] and omit this detail.) An
emerging convolutional layer has the same receptive fields as standard
convolutional layers, which can capture correlations between a target pixel
and its neighbor pixels. However, like other autoregressive convolutions,
computing the inverse of an emerging convolution requires sequentially
traversing each dimension of input, so its computation is not parallelizable
and is a computational bottleneck when the input is high-dimensional.
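To make this bottleneck concrete, the sketch below inverts the simplest linear analogue of an autoregressive transform, $\mathbf{y}=L\mathbf{x}$ with a unit-diagonal lower-triangular $L$ (a deliberate simplification of a masked convolution): the inverse is forward substitution and must visit the dimensions one at a time.

```python
import numpy as np

def invert_autoregressive(L, y):
    """Invert y = L @ x for unit-diagonal lower-triangular L.

    Each x[i] depends on all previously recovered x[:i], so the loop
    cannot be parallelized across dimensions; this is the same
    sequential bottleneck emerging convolutions face at sampling time.
    """
    n = len(y)
    x = np.zeros(n)
    for i in range(n):
        x[i] = y[i] - L[i, :i] @ x[:i]
    return x

L = np.tril(np.random.randn(5, 5), k=-1) + np.eye(5)
y = L @ np.random.randn(5)
assert np.allclose(L @ invert_autoregressive(L, y), y)
```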
Periodic convolutions [15, 9] use discrete Fourier transformations to
transform both the input and the kernel to Fourier domain. A periodic
convolution is computed as
$\mathbf{y}_{u,:,:}=\sum_{v}\mathcal{F}^{-1}(\mathcal{F}(\mathbf{M}^{(p)}_{u,v,:,:})\odot\mathcal{F}(\mathbf{x}_{v,:,:})),$
where $\mathcal{F}$ is a discrete Fourier transformation, and
$\mathbf{M}^{(p)}$ is the convolution kernel whose size is $c\times c\times
d\times d$. The computational complexity of periodic convolutions is
$\mathcal{O}(c^{2}hw\log(hw)+c^{3}hw)$. In our experiments, we also found that the Fourier transformations require a large amount of memory. This high computational complexity and memory footprint impact the efficiency of both training and sampling when the input is high-dimensional.
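For reference, the Fourier-domain computation above can be sketched directly with NumPy's FFT routines (array shapes and names are illustrative, not tied to any particular implementation):

```python
import numpy as np

def periodic_convolution(M, x):
    """Channel-mixing circular convolution computed in the Fourier domain.

    M: kernel of shape (c, c, d, d); x: input of shape (c, h, w).
    Kernels are zero-padded to (h, w); the elementwise product in the
    Fourier domain realizes a circular spatial convolution. The inverse
    FFT is applied after the channel sum, which is equivalent by linearity.
    """
    c, h, w = x.shape
    Mf = np.fft.fft2(M, s=(h, w))              # shape (c, c, h, w)
    xf = np.fft.fft2(x)                        # shape (c, h, w)
    yf = np.einsum('uvhw,vhw->uhw', Mf, xf)    # sum over input channels v
    return np.fft.ifft2(yf).real

y = periodic_convolution(np.random.randn(3, 3, 3, 3), np.random.randn(3, 8, 8))
print(y.shape)  # (3, 8, 8)
```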
## Appendix B Memory-Efficient Woodbury Transformations
Memory-Efficient Woodbury transformations can effectively reduce the space
complexity. The main idea is to perform spatial transformations along the
height and width axes separately, i.e., a height transformation and a width
transformation. The transformations are:
$\mathbf{x}_{c}=(\mathbf{I}^{(c)}+\mathbf{U}^{(c)}\mathbf{V}^{(c)})\mathbf{x},$
$\mathbf{x}_{w}=\text{reshape}(\mathbf{x}_{c},(ch,w)),$
$\mathbf{x}_{w}=\mathbf{x}_{w}(\mathbf{I}^{(w)}+\mathbf{U}^{(w)}\mathbf{V}^{(w)}),$
$\mathbf{x}_{h}=\text{reshape}(\mathbf{x}_{w},(cw,h)),$
$\mathbf{y}=\mathbf{x}_{h}(\mathbf{I}^{(h)}+\mathbf{U}^{(h)}\mathbf{V}^{(h)}),$
$\mathbf{y}=\text{reshape}(\mathbf{y},(c,hw)),$ (10)
where $\text{reshape}(\mathbf{x},(n,m))$ reshapes $\mathbf{x}$ to be an
$n\times m$ matrix. Matrices $\mathbf{I}^{(w)}$ and $\mathbf{I}^{(h)}$ are
$w$\- and $h$-dimensional identity matrices, respectively. Matrices
$\mathbf{U}^{(w)},\mathbf{V}^{(w)},\mathbf{U}^{(h)}$, and $\mathbf{V}^{(h)}$
are $w\times d_{w}$, $d_{w}\times w$, $h\times d_{h}$, and $d_{h}\times h$
matrices, respectively, where $d_{w}$ and $d_{h}$ are constant latent
dimensions.
Using the Woodbury matrix identity and Sylvester’s determinant identity,
we can compute the inverse and Jacobian determinant:
$\mathbf{y}=\text{reshape}(\mathbf{y},(cw,h)),$
$\mathbf{x}_{h}=\mathbf{y}(\mathbf{I}^{(h)}-\mathbf{U}^{(h)}(\mathbf{I}^{(d_{h})}+\mathbf{V}^{(h)}\mathbf{U}^{(h)})^{-1}\mathbf{V}^{(h)}),$
$\mathbf{x}_{w}=\text{reshape}(\mathbf{x}_{h},(ch,w)),$
$\mathbf{x}_{w}=\mathbf{x}_{w}(\mathbf{I}^{(w)}-\mathbf{U}^{(w)}(\mathbf{I}^{(d_{w})}+\mathbf{V}^{(w)}\mathbf{U}^{(w)})^{-1}\mathbf{V}^{(w)}),$
$\mathbf{x}_{c}=\text{reshape}(\mathbf{x}_{w},(c,hw)),$
$\mathbf{x}=(\mathbf{I}^{(c)}-\mathbf{U}^{(c)}(\mathbf{I}^{(d_{c})}+\mathbf{V}^{(c)}\mathbf{U}^{(c)})^{-1}\mathbf{V}^{(c)})\mathbf{x}_{c},$ (11)
$\log\left|\det\left(\frac{\partial\mathbf{y}}{\partial\mathbf{x}}\right)\right|=hw\log\left|\det(\mathbf{I}^{(d_{c})}+\mathbf{V}^{(c)}\mathbf{U}^{(c)})\right|+ch\log\left|\det(\mathbf{I}^{(d_{w})}+\mathbf{V}^{(w)}\mathbf{U}^{(w)})\right|+cw\log\left|\det(\mathbf{I}^{(d_{h})}+\mathbf{V}^{(h)}\mathbf{U}^{(h)})\right|,$ (12)
where $\mathbf{I}^{(d_{w})}$ and $\mathbf{I}^{(d_{h})}$ are $d_{w}$\- and
$d_{h}$-dimensional identity matrices, respectively. The Jacobian of the
$\text{reshape}()$ is an identity matrix, so its log-determinant is $0$.
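A minimal NumPy sketch of the forward pass in Eq. (10) and the log-determinant in Eq. (12) follows. The small random low-rank factors stand in for learned parameters, and the axis bookkeeping is done with einsum/matmul rather than the raw reshapes of Eq. (10), which is equivalent up to a fixed permutation of entries (log-determinant $0$):

```python
import numpy as np

rng = np.random.default_rng(0)
c, h, w, d = 4, 8, 8, 2

def low_rank(n, d):
    # Random factors U: n x d, V: d x n for the demo (learned in practice).
    return (rng.normal(size=(n, d)) / np.sqrt(n * d),
            rng.normal(size=(d, n)) / np.sqrt(n * d))

(Uc, Vc), (Uw, Vw), (Uh, Vh) = low_rank(c, d), low_rank(w, d), low_rank(h, d)

def me_woodbury_forward(x):
    """Forward pass of Eq. (10); x has shape (c, h, w)."""
    x = np.einsum('ij,jhw->ihw', np.eye(c) + Uc @ Vc, x)   # channel mixing
    x = x @ (np.eye(w) + Uw @ Vw)                          # width mixing
    x = np.einsum('chw,hk->ckw', x, np.eye(h) + Uh @ Vh)   # height mixing
    return x

def log_det():
    # Sylvester's identity, Eq. (12): det(I_n + U V) = det(I_d + V U).
    s = lambda U, V: np.linalg.slogdet(np.eye(d) + V @ U)[1]
    return h * w * s(Uc, Vc) + c * h * s(Uw, Vw) + c * w * s(Uh, Vh)

y = me_woodbury_forward(rng.normal(size=(c, h, w)))
print(y.shape, log_det())
```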
We call Equation 10 the memory-efficient Woodbury transformation because it
reduces space complexity from $\mathcal{O}(c+hw)$ to $\mathcal{O}(c+h+w)$.
This method is effective when $h$ and $w$ are large. To analyze its
complexity, we let all latent dimensions be less than $d$ as before. The
complexity of forward transformation is $\mathcal{O}(dchw)$; the complexity of
computing the determinant is $\mathcal{O}(d(c+h+w)+d^{3})$; and the complexity
of computing the inverse is $\mathcal{O}(dchw+d^{2}(c+ch+cw)+d^{3})$. The same
as Woodbury transformations, when the input is high dimensional, we can omit
$d$. Therefore, the computational complexities of the memory-efficient
Woodbury transformation are also linear with the input size.
We list the complexities of the different methods in Table 3. The computational complexities of Woodbury transformations are comparable to those of other methods, and may be smaller when the input is high-dimensional, i.e., when $c$, $h$, and $w$ are large.
Table 3: Comparisons of computational complexities.

Method | Forward | Backward
---|---|---
1x1 convolution | $\mathcal{O}(c^{2}hw+c^{3})$ | $\mathcal{O}(c^{2}hw)$
Periodic convolution | $\mathcal{O}(chw\log(hw)+c^{3}hw)$ | $\mathcal{O}(chw\log(hw)+c^{2}hw)$
Emerging convolution | $\mathcal{O}(c^{2}hw)$ | $\mathcal{O}(c^{2}hw)$
ME-Woodbury transformation | $\mathcal{O}(dchw)$ | $\mathcal{O}(dchw)$
Woodbury transformation | $\mathcal{O}(dchw)$ | $\mathcal{O}(dchw)$
## Appendix C Parameter Settings
In this section, we present additional details about our experiments to aid
reproducibility.
### C.1 Experiments of Quantitative Evaluation
In the quantitative evaluation experiments, we compare Woodbury transformations with $3$ permutation layer baselines, i.e., 1x1 convolution, emerging convolution, and periodic convolution, and $2$ coupling layer baselines, i.e., neural spline coupling and MaCow. For all generalized permutation
methods, we use affine coupling, which is composed of $3$ convolutional
layers, and the $2$ latent layers have $512$ channels. For the neural spline
coupling, we set the number of spline bins to $4$. The spline parameters are
generated by a neural network, which is also composed of convolutional layers.
For $32\times 32$ images, we set the number of channels to $256$, and for
$64\times 64$ images, we set it to $224$. Ma et al. [25] used steps containing
a MaCow unit, i.e., $4$ autoregressive convolution coupling layers, and a full
Glow step. For a fair comparison, we directly use the MaCow unit to replace the affine coupling. For $32\times 32$ images, we set the number of convolution channels to $384$, and for $64\times 64$ images, we set it to $296$.
We run each method for a fixed number of iterations and test it every $10,000$ iterations. The bpds reported in our main paper are the best bpds obtained by each method. The bpds are single-run results because each run of the experiment requires 3 to 5 days, and running each model multiple times is a major cost. We found in our experiments that, for the same model and parameter settings, the standard deviation of bpd across multiple runs is very small, i.e., around $0.003$, so single-run results are sufficient for comparing bpd.
### C.2 Hyper-parameter Settings
We use the Adam optimizer [20] with learning rate $\alpha=0.001$, $\beta_{1}=0.9$, and $\beta_{2}=0.999$. We use uniform dequantization. The model sizes and training mini-batch sizes used in our experiments are listed in Table 4.
Table 4: Model sizes and mini-batch sizes.

Dataset | Mini-batch size | Levels (L) | Steps (K) | Coupling channels
---|---|---|---|---
CIFAR-10 32x32 | 64 | 3 | 8 | 512
ImageNet 32x32 | 64 | 3 | 8 | 512
ImageNet 64x64 | 32 | 4 | 16 | 512
LSUN Church 96x96 | 16 | 5 | 16 | 256
CelebA-HQ 64x64 | 8 | 4 | 16 | 512
CelebA-HQ 128x128 | 4 | 5 | 24 | 256
CelebA-HQ 256x256 | 4 | 6 | 16 | 256
### C.3 Latent Dimension Settings
In all our experiments, we set the latent dimensions of Woodbury transformations and ME-Woodbury transformations as listed in Table 5.
Table 5: Latent dimensions of Woodbury transformations and ME-Woodbury transformations. The numbers in the brackets give the latent dimension used at each level; for example, $d_{c}:\{8,8,16\}$ means that $d_{c}$ is set to $8$, $8$, and $16$ at the three levels.

Dataset | Woodbury | ME-Woodbury
---|---|---
CIFAR-10 32x32 | $d_{c}:\{8,8,16\}$, $d_{s}:\{16,16,8\}$ | $d_{c}:\{8,8,16\}$, $d_{h}:\{16,16,8\}$, $d_{w}:\{16,16,8\}$
ImageNet 32x32 | $d_{c}:\{8,8,16\}$, $d_{s}:\{16,16,8\}$ | $d_{c}:\{8,8,16\}$, $d_{h}:\{16,16,8\}$, $d_{w}:\{16,16,8\}$
ImageNet 64x64 | $d_{c}:\{8,8,16,16\}$, $d_{s}:\{16,16,8,8\}$ | $d_{c}:\{8,8,16,16\}$, $d_{h}:\{16,16,8,8\}$, $d_{w}:\{16,16,8,8\}$
LSUN Church 96x96 | $d_{c}:\{8,8,16,16,16\}$, $d_{s}:\{16,16,16,8,8\}$ | —
CelebA-HQ 64x64 | $d_{c}:\{8,8,16,16\}$, $d_{s}:\{16,16,8,8\}$ | —
CelebA-HQ 128x128 | $d_{c}:\{8,8,16,16,16\}$, $d_{s}:\{16,16,16,8,8\}$ | —
CelebA-HQ 256x256 | $d_{c}:\{8,8,16,16,16,16\}$, $d_{s}:\{16,16,16,16,8,8\}$ | —
## Appendix D Sample Quality Comparisons
We compare the samples generated by Woodbury-Glow and Glow models trained on
the CelebA-HQ dataset. We follow Kingma and Dhariwal [22] and randomly hold
out 3,000 images as a test set. We use $5$-bit images at $64\times 64$, $128\times 128$, and $256\times 256$ resolution. Due to our limited computing resources, we use relatively small models; the model sizes and other settings are listed in Table 4 and Table 5. We generate samples from the models during different phases of training and display them in Figure 6 and Figure 7 (the results for $64\times 64$ images are shown in the main paper). For the
$128\times 128$ images, both Glow and Woodbury-Glow generate distorted images
at iteration 100,000, but Woodbury-Glow seems to improve in later stages,
stabilizing the shapes of faces and the structure of facial features, whereas Glow continues to generate faces with distorted overall shapes as training proceeds. For the $256\times 256$ images, neither model ever trains
sufficiently to generate highly realistic faces, but Woodbury-Glow makes
significantly more progress in these 300,000 iterations than Glow. Glow’s
samples at 300,000 are still mostly random swirls with an occasional
recognizable face, while almost all of Woodbury-Glow’s samples look like
faces, though distorted. Due to limits on our computational resources, we
stopped the higher resolution experiments at 300,000 iterations (rather than
running to 600,000 iterations as we did for the $64\times 64$ experiments in
the main paper). With a larger model and longer training time, it seems
Woodbury-Glow would reach higher sample quality much faster than Glow.
The test-set likelihoods under the trained models are listed in Table 6. For
the $64\times 64$ and $128\times 128$ images, Woodbury-Glow scores higher
likelihood than Glow. For the $256\times 256$ images, their likelihoods are
almost identical, and are better than the score reported in [22]. This may be
due to three possible reasons: (1) we use affine coupling rather than additive coupling, which is non-volume-preserving and may improve the likelihoods; (2) since the test set is randomly collected, it differs from the one used in [22]; and (3) the model used in [22] is very large, so it may be somewhat overfitted. Surprisingly, the clear difference in sample
quality is not reflected by the likelihoods. This discrepancy may be because
we use $5$-bit images, and the images are all faces, so the dataset is less
complicated than other datasets such as ImageNet. Moreover, even though Glow cannot generate reasonable $256\times 256$ samples, the colors of these samples already match the colors of real images well, so these strange samples may, counterintuitively, be assigned likelihoods comparable to those of the face-like samples from Woodbury-Glow.
Table 6: Bits per dimension (bpd) results on CelebA-HQ.

Size of images | Glow | Woodbury-Glow
---|---|---
$64\times 64$ | 1.27 | 1.23
$128\times 128$ | 1.09 | 1.04
$256\times 256$ | 0.93 | 0.93
[Figure 6 panels: Woodbury-Glow samples at iterations 100,000, 200,000, and 300,000; Glow samples at iterations 100,000, 200,000, and 300,000.]
Figure 6: Random samples of $128\times 128$ images drawn with temperature
$0.7$ from a model trained on CelebA data.
[Figure 7 panels: Woodbury-Glow samples at iterations 150,000, 220,000, and 300,000; Glow samples at iterations 150,000, 220,000, and 300,000.]
Figure 7: Random samples of $256\times 256$ images drawn with temperature
$0.7$ from a model trained on CelebA data.
## Appendix E Additional Samples
In this section, we include additional samples from Woodbury-Glow models
trained on our various datasets. These samples complement our quantitative
analysis. We train our models on CIFAR-10 [24], ImageNet [32], the LSUN church
dataset [38], and the CelebA-HQ dataset [19]. Specifically, for ImageNet, we
use $32\times 32$ and $64\times 64$ images. For the LSUN dataset, we use the
same approach as Kingma and Dhariwal [22] to resize the images to be $96\times
96$. For the CelebA-HQ dataset, we use $64\times 64$, $128\times 128$, and
$256\times 256$ images. For LSUN and CelebA-HQ datasets, we use $5$-bit
images. The parameter settings of our models are in Table 4 and Table 5. The
samples are in Figures 8, 9, 10, 11, 12, 13, and 14.
Figure 8: CIFAR-10 $32\times 32$ Woodbury-Glow samples.
Figure 9: ImageNet $32\times 32$ Woodbury-Glow samples.
Figure 10: ImageNet $64\times 64$ Woodbury-Glow samples.
Figure 11: LSUN church $96\times 96$ Woodbury-Glow samples (temperature
$0.875$).
Figure 12: CelebA-HQ $64\times 64$ Woodbury-Glow samples (temperature $0.7$).
Figure 13: CelebA-HQ $128\times 128$ Woodbury-Glow samples (temperature
$0.5$).
Figure 14: Selected CelebA-HQ $256\times 256$ Woodbury-Glow samples
(temperature $0.5$).
††thanks: These two authors contributed equally
# Non-trivial quantum magnetotransport oscillations
in pure and robust topological $\alpha$-Sn films
Ivan Madarevic Quantum Solid State Physics, KU Leuven, Celestijnenlaan 200D,
3001 Leuven, Belgium Niels Claessens Quantum Solid State Physics, KU Leuven,
Celestijnenlaan 200D, 3001 Leuven, Belgium IMEC, Kapeldreef 75, 3001 Leuven,
Belgium Aleksandr Seliverstov Quantum Solid State Physics, KU Leuven,
Celestijnenlaan 200D, 3001 Leuven, Belgium Chris Van Haesendonck Quantum
Solid State Physics, KU Leuven, Celestijnenlaan 200D, 3001 Leuven, Belgium
Margriet J. Van Bael Quantum Solid State Physics, KU Leuven, Celestijnenlaan
200D, 3001 Leuven, Belgium
###### Abstract
We report experimental evidence of topological Dirac fermion charge carriers
in pure and robust $\alpha$-Sn films grown on InSb substrates. This evidence
was acquired using standard macroscopic four-point contact resistance
measurements, conducted on uncapped films with a significantly reduced bulk
mobility. We analyzed and compared electrical characteristics of the
constituting components of the $\alpha$-Sn/InSb sample, and propose a three-
band drift velocity model accordingly. A surface band, with low carrier
density and high mobility, is identified as the origin of the observed
Shubnikov – de Haas oscillations. The analysis of these quantum oscillations
results in a non-trivial value of the phase shift $\gamma=0$, characteristic
for topologically protected Dirac fermions. For the same uncapped samples we
estimate the momentum relaxation time $\tau\approx 300\ \mathrm{fs}$, which is
significantly larger in comparison with the previous reports on grown
$\alpha$-Sn films.
The alpha phase of Sn ($\alpha$-Sn) is a diamond-structured crystal and a zero-band-gap semimetal. Due to its suitable band structure, $\alpha$-Sn was
predicted as a material capable of acquiring several topological phases [1,
2]. Crystal strain can bring about the topologically protected states in
$\alpha$-Sn, more specifically: in-plane tensile strain turns it into a 3D
topological insulator (3DTI), while in-plane compressive strain converts it
into a topological Dirac semimetal (DSM). Topologically protected bands of
$\alpha$-Sn crystalline thin films were characterized by experiments during
the last decade. This was particularly done using angle-resolved photoemission
spectroscopy (ARPES), which revealed a Dirac cone near the Fermi level in the
surface electronic structure of Te and/or Bi doped $\alpha$-Sn thin films,
compressively strained on InSb substrates [3, 4, 5, 6, 7]. Interestingly, the
most recent (soft X-ray) ARPES investigations have suggested the existence of
a topologically protected bulk band (DSM phase) in compressively strained
$\alpha$-Sn films [8, 5, 9]. These reports confirm that this material is an
excellent platform to explore several topological phases [1, 2], making it
very interesting for possible spintronics [10, 11] and quantum computing
applications [12, 13, 14]. Moreover, as a structurally well-defined, non-toxic elemental material with a robust topological phase, $\alpha$-Sn could be more favorable for applications when compared to the prototypical binary DSM compounds Na3Bi [15, 16] and Cd3As2 [17, 18, 19].
There are recent reports indicating topological magnetotransport in
$\alpha$-Sn. In the case of $\alpha$-Sn on InSb there is, until now, one
report [7] suggesting topologically protected carriers based on
magnetotransport measurements. However, these experiments were conducted on a
relatively complicated system (Bi-doped $\alpha$-Sn$\mid$InSb$\mid$GaAs), in
which the Dirac point was positioned below the Fermi level. There is another
report on the $\alpha$-Sn magnetotransport properties in the case of a thin
film grown on CdTe [20]. The use of insulating CdTe should allow more
straightforward detection of the $\alpha$-Sn topological bands. However, both
Cd and Te bond with Sn (metallically and covalently, respectively), complicating the morphology of the films and disturbing their epitaxial growth and the introduction of strain [21, 20]. At this moment, no unambiguous
experimental evidence has been presented for topological Dirac charge carriers
in completely pure (no dopants) $\alpha$-Sn films.
Very recently we presented [22] a straightforward method to grow pure,
compressively strained $\alpha$-Sn films on InSb(100) substrates. The
structure and morphology of these films were fully characterized and the
presence of a Dirac cone was confirmed by ARPES. Moreover, these films
demonstrated an excellent surface quality and a notable robustness against
ambient conditions. In this letter we report prominent accessibility of the
topologically protected Dirac charge carriers in pure and uncapped 30 nm thick
$\alpha$-Sn films compressively strained on InSb(100) [22]. This was made
possible due to the decreased mobility of the trivial bulk charge carriers in
the grown films – most likely a consequence of increased grain-boundary
barrier due to the small grain size. We present evidence for the existence of
the Dirac charge carriers, as an absolutely intrinsic property of the
topological $\alpha$-Sn films. This evidence is based on our analysis of the
Shubnikov – de Haas (SdH) oscillations originating from a surface band. The
Dirac charge carriers exhibit a significantly enhanced relaxation time
compared to previous reports [7, 11].
For the unambiguous interpretation of electronic transport experiments on
composite systems (i.e. film/substrate) it is generally required to analyze
the transport properties of the constituting components. Therefore, along with
the transport measurements on the 30 nm $\alpha$-Sn$\mid$InSb(100) system, we
conducted the same measurements on a non-covered InSb(100) substrate, prepared
according to the procedure presented in our previous report [22]. The
magnetotransport experiments were conducted at a temperature of 1.5 K, with
silver paste contacts in a four-point geometry, at constant current (between
50 and 100 $\mu$A), and a magnetic field ($B$) of up to 8 T.
Figure 1 depicts the results of the conducted Hall effect measurements,
revealing a clear qualitative difference between the off-diagonal
magnetoresistance ($R_{xy}$) of the InSb(100) substrate and the grown
$\alpha$-Sn$\mid$InSb(100) sample. While the substrate’s off-diagonal
magnetoresistance shows a linear behavior ($n$-type), the grown sample clearly
displays a non-linear Hall effect, indicating the presence of multiple types
of carriers. At low fields ($<$ 2 T), the substrate is shunting the
$\alpha$-Sn film, and the behavior of $R_{xy}$ is slightly non-linear. From
the results of our detailed study of the transport properties of the
substrate, there are some indications that the substrate preparation (Ar+
plasma etching and thermal annealing [22]) leaves it electronically
heterogeneous in depth, which may cause such non-linear behavior. However, it
is beyond the scope of this letter to discuss the cause of the non-linear
behavior at low fields. Since it can be attributed to the substrate, for which
the linear $n$-type behavior becomes dominant at higher fields, we omit the
low field region from further analysis.
Figure 1: Measured off-diagonal magnetoresistance tensor element ($R_{xy}$) of a 30 nm thick film of $\alpha$-Sn(100) on InSb(100), and a non-covered InSb(100) substrate. Measurements were done using the Van der Pauw technique [23]. The inset shows the four-point contact (1 – 4) configuration on the surface of the sample.
Figure 2: Fit based on the three-band drift velocity model (orange) for the high-field off-diagonal magnetoresistance above 2 T (black) of a 30 nm thick film of $\alpha$-Sn on InSb(100).
The method to identify different contributions leading to the observed Hall
effect is to fit a multi-band drift velocity model to the measured $R_{xy}$
data. $R_{xy}$ can, in general, be expressed using the following equations:
$R_{xy}=\frac{\sigma_{xy}}{\sigma_{xy}^{2}+\sigma_{xx}^{2}},$ (1)
with the conductance matrix components of the different contributions ($i$) to
$\sigma$ given by:
$\sigma^{i}_{xx}=\frac{n^{i}_{s}\cdot e\cdot\mu^{i}}{1+\left(\mu^{i}\cdot B\right)^{2}},$ (2)
$\sigma^{i}_{xy}=\pm\frac{n^{i}_{s}\cdot e\cdot\left(\mu^{i}\right)^{2}\cdot B}{1+\left(\mu^{i}\cdot B\right)^{2}},$ (3)
under the following constraint for the measured sheet resistance at zero
field:
$R_{s}=\left(\sum^{k}_{i=1}n^{i}_{s}\cdot e\cdot\mu^{i}\right)^{-1}.$ (4)
Fitting the measured $R_{xy}$ behavior of the $\alpha$-Sn sample to a two-band
($k=2$) drift velocity model results in one $p$-type band ($i=p$) and one
$n$-type band ($i=n$), with sheet carrier concentrations
$n^{p}_{s}=3.2(1)\cdot{10}^{14}\ \mathrm{cm}^{-2}$,
$n^{n}_{s}=1.65(4)\cdot{10}^{13}\ \mathrm{cm}^{-2}$, and mobilities
${\mu}^{p}=0.184(2)\cdot 10^{3}\ \mathrm{cm^{2}V^{-1}s^{-1}}$,
${\mu}^{n}=1.4(8)\cdot{10}^{4}\ \mathrm{cm^{2}V^{-1}s^{-1}}$. In this case,
the reduced chi-square value (${\chi}^{2}$) for the fit equals 0.78, at a
$R_{xy}$ relative error of 5 %. On the other hand, repeating the procedure for
a three-band ($k=3$) drift velocity model (Fig. 2) results in two $p$-type
bands ($i=p1,~{}p2$) and one $n$-type band ($i=n$), with sheet carrier
concentrations $n^{p1}_{s}=3.2(1)\cdot{10}^{13}\ \mathrm{cm}^{-2}$,
$n^{p2}_{s}=1.3(2)\cdot{10}^{12}\ \mathrm{cm}^{-2}$,
$n^{n}_{s}=1.65(4)\cdot{10}^{13}\ \mathrm{cm}^{-2}$, and mobilities
${\mu}^{p1}=0.177(2)\cdot 10^{3}\ \mathrm{cm^{2}V^{-1}s^{-1}}$,
${\mu}^{p2}=3.47(12)\cdot 10^{3}\ \mathrm{cm^{2}V^{-1}s^{-1}}$,
${\mu}^{n}=1.4(8)\cdot{10}^{4}\ \mathrm{cm^{2}V^{-1}s^{-1}}$, with
${\chi}^{2}=0.53$ at the same relative error. Considering these values, it is
tempting to choose the latter model over the first one. Indeed, knowing the
film/substrate thickness [22], the bands with the large number of carriers
($n$ and $p1$) are typical bulk bands expected for InSb [24] and $\alpha$-Sn
grown on InSb [25], which exhibits a reduced mobility compared to the bulk
single crystal case [26]. At the same time, the band with the high mobility
and low carrier density ($p2$) may be identified as a possible topological
surface band of $\alpha$-Sn [27]. Although the second model fits the data
better, a nicer fit on its own does not prove the correctness of the model.
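To make the fitting procedure concrete, the drift-velocity model of Eqs. (1)-(3) can be sketched in a few lines of NumPy, here evaluated at the fitted three-band values quoted above (converted to SI units); the sign convention for hole and electron bands and the omitted least-squares wrapper (e.g., scipy.optimize.curve_fit) are illustrative choices on our part:

```python
import numpy as np

def hall_resistance(B, bands):
    """Off-diagonal R_xy from Eqs. (1)-(3) for a set of drift-velocity bands.

    bands: iterable of (n_s [m^-2], mu [m^2 V^-1 s^-1], sign), with
    sign = +1 for p-type and -1 for n-type carriers.
    """
    e = 1.602176634e-19
    sxx = sum(n * e * mu / (1 + (mu * B) ** 2) for n, mu, sign in bands)
    sxy = sum(sign * n * e * mu**2 * B / (1 + (mu * B) ** 2)
              for n, mu, sign in bands)
    return sxy / (sxy**2 + sxx**2)          # Eq. (1)

B = np.linspace(2, 8, 200)                  # high-field range of the fit, T
bands = [(3.2e17, 0.0177, +1),              # p1: alpha-Sn bulk
         (1.3e16, 0.347,  +1),              # p2: surface band
         (1.65e17, 1.4,   -1)]              # n: InSb bulk
Rxy = hall_resistance(B, bands)
```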
To exclude one of the two models, we extracted the SdH oscillations from the
diagonal magnetoresistance $R_{xx}$ measurement of a grown $\alpha$-Sn sample
(Fig. 3). The experiments were again conducted at a temperature of 1.5 K and
with magnetic field (up to 8 T) applied perpendicularly to the surface of the
samples ($\vec{B}\perp(100)$), with macroscopic silver paste contacts arranged
in a linear four-point geometry. Figure 4 (inlay) depicts magnetoresistance
oscillations ($\Delta R$) of a grown $\alpha$-Sn sample, extracted after
removing the background which was obtained from a $3^{\mathrm{rd}}$ order
polynomial fit. The maximum of the fast Fourier transform (FFT) of $\Delta R$
vs. $1/B$ provides an estimate of the characteristic frequency $f=14(6)\
\mathrm{T}$ of the oscillations. A more accurate value of 13.5(8) T was
extracted from the Landau index plot (Fig. 4), as shown later.
Figure 3: Measured diagonal magnetoresistance tensor element ($R_{xx}$) of a
30 nm thick film of $\alpha$-Sn(100) on InSb(100) and a non-covered InSb(100)
substrate, with the magnetic field perpendicular to the film/substrate
surface. The inset shows the four-point contact configuration on the surface
of the sample.
Assuming the oscillations originate from a bulk band, the estimated value for
$f$ would correspond to a carrier density of $n=(4\pi
f/\mathrm{\Phi_{0}})^{{3}/{2}}/{3{\pi}^{2}}=2.8(2)\cdot{10}^{17}\
\mathrm{cm}^{-3}$, where ${\mathrm{\Phi}}_{0}=h/e$ is the elementary flux
quantum, with Planck’s constant $h$ and the electron charge $e$. If we convert
this to the hypothetical sheet carrier densities for such bulk bands of the
$\alpha$-Sn film and InSb substrate, by multiplying with their thicknesses
[22], this would give $8.4(7)\cdot{10}^{11}\ \mathrm{cm}^{-2}$ and
$1.75(16)\cdot{10}^{16}\ \mathrm{cm}^{-2}$, respectively. From these sheet
carrier densities, which differ from the bulk bands extracted using the Hall
effect measurements ($p1$ and $n$, respectively), it appears that neither the
two- nor three-band model would be compatible with the observed SdH
oscillations if the oscillations would originate from a bulk band.
Figure 4: Landau index plot of the magnetoresistance oscillations after
removal of the background ($\vec{B}\perp(100)$). The Landau index for the
minima and maxima of the observed SdH oscillations is plotted against the
inverse of the field strength (red dots) at which they are observed. The fit
of Eq. (6) (black line) through the data points is shown together with the low
field limit ($A_{2}=0$) (red dashed line). The inlay shows magnetoresistance
oscillations in the $\vec{B}\perp(100)$ (blue line) and the
$\vec{B}\parallel(100)$ orientation (black dashed line).
If we instead assume that the experimentally observed quantum oscillations
arise from a surface band, the sheet carrier density corresponding to the
observed SdH frequency is given by $n_{s}=2\cdot
n=2\cdot(2f/{\mathrm{\Phi}}_{0})={1.30(7)\cdot{10}^{12}\ \mathrm{cm}^{-2}}$.
This carrier density value is in very good agreement with the second $p$-type
band ($p2$) in the above fitted three-band drift velocity model, but in
contradiction with the two-band model. Note as well that the SdH oscillations
are only observable when their cyclotron frequency is larger than the inverse
of the relaxation time [28] – equivalent to $\mu>{1}/{B}\approx 3.33\cdot
10^{3}\ \mathrm{cm^{2}V^{-1}s^{-1}}$ for oscillations starting at $\sim 3$ T.
This condition is satisfied for the $p2$ band in the three-band drift velocity
model, but it is not met for the two-band model. Moreover, the reduced
mobility of the trivial bulk carriers (increased grain boundary scattering) in
this model (${\mu}^{p1}$) is consistent with the fact that the grown
$\alpha$-Sn films have a granular morphology, with grain size around 10 – 20
nm [22]. Therefore, it can be concluded that the two-band drift velocity model
fails, while the three-band model explains our observations. The second
$p$-type band of the three-band model ($p2$) can thus be identified as a
surface band of the $\alpha$-Sn(100) film. The first $p$-type band ($p1$) we
assign to the bulk $\alpha$-Sn and the $n$-type band to the bulk of the InSb
substrate.
Using Eq. (4) we can now estimate the sheet resistance contribution of the
bands separately: $R^{~{}p1}_{s}=1.10(4)\ \mathrm{k\Omega/\Box}$ ($\alpha$-Sn
bulk band), $R^{~{}p2}_{s}=1.4(2)\ \mathrm{k\Omega/\Box}$ ($\alpha$-Sn surface
band) and $R^{~{}n}_{s}=27(15)\ \mathrm{\Omega/\Box}$ (InSb bulk band).
We will now investigate the possible Dirac nature of the carriers in the
second $p$ band ($i=p2$). The phase offset $\gamma$ of the SdH oscillations
contains the information on the dispersion of the corresponding charge
carriers [29]. The SdH oscillations can be expressed as:
$\Delta R\propto\cos\left[2\pi\cdot\left(\frac{f}{B}-\gamma\right)\right],$ (5)
where $\gamma$ can be related to Berry’s phase acquired by the carriers within
a cyclotron orbit [30]. If the carriers followed the zero-field electronic dispersion, then $\gamma=0$ (non-trivial Berry’s phase of $\pi$)
for Dirac fermions, while $\gamma=1/2$ (trivial Berry’s phase of 0) for
“normal” fermions [31]. The intercept of a linear fit of the Landau level
index N with respect to $1/B$ then gives the $\gamma$ value. However, for
finite field strengths, as pointed out by Wright and McKenzie, the fermions
can no longer be expected to follow the zero-field dispersion, and a deviation
from a perfect quantization, where $\gamma=\gamma(B)$, is to be expected [31].
For that reason, a more accurate low-field fit procedure was proposed to
extract the zero-field phase offset:
$N=\frac{f}{B}+A_{1}+A_{2}\cdot B,$ (6)
where $f$ (characteristic frequency), $A_{1}$ and $A_{2}$ are the fit-
parameters. The zero-field limit of the Eq. (6) is equivalent with
$\gamma(0)=A_{1}$. Therefore, the parameter $A_{1}$ is the quantized offset
related to the zero-field Berry’s phase. Figure 4 presents the fit of Eq. (6)
to the Landau indices of the observed SdH oscillations minima and maxima
extracted from the magnetoresistance measurement with the magnetic field
applied perpendicular to the sample surface plane
($\vec{B}\perp\mathrm{(100)}$). Here, due to the dominance of bulk
contributions to the conductance [32], the minima of the SdH oscillations
correspond to integer Landau level indices ($N$), while the maxima correspond
to half-integer Landau level indices ($N+1/2$). The resulting fit-parameters
are $A_{1}=0.02(15)$ and $A_{2}=0.00(6)$, and $f=13.5(8)\ \mathrm{T}$.
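Since Eq. (6) is linear in the parameters $(f, A_{1}, A_{2})$, the fit reduces to ordinary least squares; a minimal sketch (with synthetic illustrative data, not the measured extrema) is:

```python
import numpy as np

def fit_landau_plot(B_extrema, N_indices):
    """Least-squares fit of Eq. (6): N = f/B + A1 + A2*B.

    B_extrema: fields of the SdH minima/maxima; N_indices: their Landau
    indices (integers at minima, half-integers at maxima).
    Returns (f, A1, A2).
    """
    X = np.column_stack([1.0 / B_extrema, np.ones_like(B_extrema), B_extrema])
    coef, *_ = np.linalg.lstsq(X, N_indices, rcond=None)
    return tuple(coef)

# Synthetic data with f = 13.5 T and zero phase offset (gamma = A1 = 0):
B = np.array([3.4, 3.9, 4.5, 5.4, 6.8])
N = 13.5 / B
print(fit_landau_plot(B, N))  # ~ (13.5, 0.0, 0.0)
```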
Contrary to the near-zero $A_{1}$ value for the $\alpha$-Sn samples, the fit
of the SdH oscillations extracted from the magnetoresistance measurement on a
non-covered InSb substrate gave $A_{1}=0.52(13)$, $A_{2}=-0.04(2)$ and
$f=39.4(8)\ \mathrm{T}$. The characteristic frequencies were determined by a
linear fit of the Landau index plot (Fig. 4) and held fixed as the linear fit
yields more accurate results on the frequency compared to the estimation of
the FFT maximum of $\Delta R$, which is the usual procedure in order to reduce
the number of fit parameters. As the analysis in the case of $\alpha$-Sn gives
$A_{1}\approx 0$, it can be concluded that the second $p$-type band ($i=p2$)
indeed has a topological Dirac fermion nature. Based on this conclusion, we
can now estimate the value of the characteristic Fermi wavevector for this
$p2$ band as $k_{F}=\sqrt{{4\pi}{f/\Phi_{0}}}=0.203(6)\ \mathrm{nm}^{-1}$.
Assuming a linear energy dispersion [33] and using the previously extracted
Fermi velocity value [22], we then estimate the Fermi energy at $E_{F}=\hslash
k_{F}v_{F}=67(2)\ \mathrm{meV}$. These values, estimated from the transport
measurements, are in agreement with the ones extracted from our recent ARPES
and scanning tunneling spectroscopy (STS) study ($k_{F}^{~{}ARPES}\sim 0.18\
\mathrm{nm}^{-1}$, $E_{F}^{~{}ARPES}\sim 60\ \mathrm{meV}$ and
$E_{F}^{~{}STS}\sim 70\ \mathrm{meV}$ below the Dirac point) [22].
From the calculated $R^{~{}p2}_{s}$ value, using the resistance expression for
non-degenerate 2D bands with a linear dispersion, i.e.,
$R^{p2}_{s}=(4\hbar^{2}\pi)/(e^{2}E_{F}\tau)$ [34], we estimate the momentum
relaxation time $\tau=300(40)\ \mathrm{fs}$. This value is about five times
larger than the one estimated for $\alpha$-Sn films with an AlOx capping layer
[7], and almost two orders of magnitude larger compared with $\alpha$-Sn films
covered with Ag [11], indicating that there are fewer parallel momentum
relaxation channels in our samples.
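For transparency, the chain of estimates in the last two paragraphs can be reproduced with a few lines of NumPy; the Fermi velocity below is an assumed input chosen to be consistent with the ARPES value of Ref. [22], not a quantity measured here:

```python
import numpy as np

e    = 1.602176634e-19          # elementary charge, C
h    = 6.62607015e-34           # Planck constant, J s
hbar = h / (2 * np.pi)

f   = 13.5                      # SdH frequency from the Landau plot, T
v_F = 5.0e5                     # Fermi velocity, m/s (assumed, from Ref. [22])
R_s = 1.4e3                     # p2 sheet resistance, Ohm per square

n_s = 2 * (2 * f * e / h)               # spin-degenerate 2D sheet density
k_F = np.sqrt(4 * np.pi * f * e / h)    # Fermi wavevector
E_F = hbar * k_F * v_F                  # Fermi energy, linear dispersion
tau = 4 * np.pi * hbar**2 / (e**2 * E_F * R_s)

print(n_s * 1e-4)        # ~1.3e12 cm^-2
print(k_F * 1e-9)        # ~0.20 nm^-1
print(E_F / e * 1e3)     # Fermi energy in meV
print(tau * 1e15)        # momentum relaxation time in fs
```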
In the case of a 2D gas of Dirac fermions existing on top (and at the bottom)
of the grown $\alpha$-Sn film, one expects the SdH oscillations to vanish for
the $\vec{B}\parallel\mathrm{(100)}$ alignment. However, for our samples this
is not the case. Surprisingly, in the same type of grown $\alpha$-Sn samples,
SdH oscillations were also observed (inlay of Fig. 4), when the magnetic field
is applied parallel to the film surface (equivalent to
$\vec{B}\perp\mathrm{(010)}$). These SdH oscillations have a characteristic
frequency $f=10.1(15)\ \mathrm{T}$, which is quite similar to the
$\vec{B}\perp\mathrm{(100)}$ case, and which cannot be attributed to InSb or
$\alpha$-Sn bulk bands. The phase offset analysis of these oscillations gave a
value $A_{1}=0.1(3)$ close to the non-trivial Berry’s phase of $\pi$. From the
topographic details of our samples [22] we know that the surface of the film
is not atomically flat, but in fact consists of crystalline grains with non-
zero roughness. Therefore, a significant portion of the surface area is never
in parallel alignment with the applied magnetic field.
All of the above presented results do strongly support the existence of
topological Dirac fermion charge carriers, but our discussion so far has not
yet specified which type of Dirac material, 3DTI or DSM, our $\alpha$-Sn films
belong to. In the case of the 3DTI $\alpha$-Sn, a surface phase with a clear
topological Dirac signature should arise in the electrical transport
experiments [1, 2]. The detected $p2$ surface band appears to agree with the
3DTI picture. However, this phase is predicted to emerge for tensile strained
$\alpha$-Sn, while our samples are compressively strained [22]. On the other
hand, the most recent ARPES studies confirmed that for in-plane compressive
strain (which is the case for our samples) $\alpha$-Sn is a DSM [8, 9]. This
portrays $\alpha$-Sn as a material which is topologically non-trivial
regardless of the type of strain. For in-plane compressively strained film,
the $\alpha$-Sn DSM phase should host two Dirac nodes, projected to the (100)
plane as a single Dirac cone [1]. Topological surface states and Fermi arcs
(between multiple Dirac points) can form as a consequence of strain, and/or
the applied magnetic field [2]. However, magnetotransport experiments, aside
of the detected topological Dirac-like 2D surface band, do not show the
existence of the (DSM) Dirac bulk charge carriers in our samples (e.g. through
an additional frequency in the SdH oscillations). For our experiments the lack
of sensitivity to the topology of the bulk is due to the low mobility of the
film bulk charge carriers and the comparatively low resistance of the InSb substrate
(resulting in pronounced shunting of the $\alpha$-Sn film). For the above
stated reasons, based on our transport results it is not possible to
unambiguously resolve the type of topological phase (3DTI or DSM) responsible
for the Dirac cone observed in our $\alpha$-Sn films [22].
We conclude that we obtained evidence for the existence of topological Dirac
fermion charge carriers in uncapped pure and robust $\alpha$-Sn(100) films. A
$p$-type surface band with low carrier density and high mobility has been
identified to be the origin of the observed SdH oscillations. To analyze these
quantum oscillations, Landau index plots were fitted, revealing a non-trivial
value of the phase shift, characteristic for topologically protected Dirac
fermions. Such findings strongly support earlier theoretical reports of
topologically protected charge carriers being an intrinsic property of
strained $\alpha$-Sn. A significantly longer momentum relaxation time of the
detected carriers suggests more prominent accessibility of the topological
properties in $\alpha$-Sn(100) grown on InSb(100). The remarkable fact that it
is possible to detect topological Dirac fermion-like transport in our samples,
using standard macroscopic four-point resistance measurements, provides a unique
opportunity to further investigate and apply this elemental topological
material.
We thank J. Van de Vondel, B. Raes and G. Lippertz for valuable discussions.
This work was supported by the Research Foundation – Flanders (FWO) and by the
KU Leuven C1 program grants No. C14/18/074 and C12/18/006.
## References
* Huang and Liu [2017] H. Huang and F. Liu, Tensile strained gray tin: Dirac semimetal for observing negative magnetoresistance with Shubnikov–de Haas oscillations, Phys. Rev. B 95, 201101 (2017).
* Zhang _et al._ [2018] D. Zhang, H. Wang, J. Ruan, G. Yao, and H. Zhang, Engineering topological phases in the Luttinger semimetal $\alpha$-Sn, Phys. Rev. B 97, 195139 (2018).
* Barfuss _et al._ [2013] A. Barfuss, L. Dudy, M. R. Scholz, H. Roth, P. Höpfner, C. Blumenstein, G. Landolt, J. H. Dil, N. C. Plumb, M. Radovic, A. Bostwick, E. Rotenberg, A. Fleszar, G. Bihlmayer, D. Wortmann, G. Li, W. Hanke, R. Claessen, and J. Schäfer, Elemental topological insulator with tunable Fermi level: Strained $\alpha$-Sn on InSb(001), Phys. Rev. Lett. 111, 157205 (2013).
* Ohtsubo _et al._ [2013] Y. Ohtsubo, P. Le Fèvre, F. Bertran, and A. Taleb-Ibrahimi, Dirac cone with helical spin polarization in ultrathin $\alpha$-Sn(001) films, Phys. Rev. Lett. 111, 216401 (2013).
* Rogalev _et al._ [2017] V. A. Rogalev, T. Rauch, M. R. Scholz, F. Reis, L. Dudy, A. Fleszar, M.-A. Husanu, V. N. Strocov, J. Henk, I. Mertig, J. Schäfer, and R. Claessen, Double band inversion in $\alpha$-Sn: Appearance of topological surface states and the role of orbital composition, Phys. Rev. B 95, 161117 (2017).
* Scholz _et al._ [2018] M. R. Scholz, V. A. Rogalev, L. Dudy, F. Reis, F. Adler, J. Aulbach, L. J. Collins-McIntyre, L. B. Duffy, H. F. Yang, Y. L. Chen, T. Hesjedal, Z. K. Liu, M. Hoesch, S. Muff, J. H. Dil, J. Schäfer, and R. Claessen, Topological surface state of $\alpha$-Sn on InSb(001) as studied by photoemission, Phys. Rev. B 97, 075101 (2018).
* Barbedienne _et al._ [2018] Q. Barbedienne, J. Varignon, N. Reyren, A. Marty, C. Vergnaud, M. Jamet, C. Gomez-Carbonell, A. Lemaître, P. Le Fèvre, F. Bertran, A. Taleb-Ibrahimi, H. Jaffrès, J.-M. George, and A. Fert, Angular-resolved photoemission electron spectroscopy and transport studies of the elemental topological insulator $\alpha$-Sn, Phys. Rev. B 98, 195445 (2018).
* Xu _et al._ [2017] C.-Z. Xu, Y.-H. Chan, Y. Chen, P. Chen, X. Wang, C. Dejoie, M.-H. Wong, J. A. Hlevyack, H. Ryu, H.-Y. Kee, N. Tamura, M.-Y. Chou, Z. Hussain, S.-K. Mo, and T.-C. Chiang, Elemental topological Dirac semimetal: $\alpha$-Sn on InSb(111), Phys. Rev. Lett. 118, 146402 (2017).
* Rogalev _et al._ [2019] V. A. Rogalev, F. Reis, F. Adler, M. Bauernfeind, J. Erhardt, A. Kowalewski, M. R. Scholz, L. Dudy, L. B. Duffy, T. Hesjedal, M. Hoesch, G. Bihlmayer, J. Schäfer, and R. Claessen, Tailoring the topological surface state in ultrathin $\alpha$-Sn(111) films, Phys. Rev. B 100, 245144 (2019).
* Kondou _et al._ [2016] K. Kondou, R. Yoshimi, A. Tsukazaki, Y. Fukuma, J. Matsuno, K. S. Takahashi, M. Kawasaki, Y. Tokura, and Y. Otani, Fermi-level-dependent charge-to-spin current conversion by Dirac surface states of topological insulators, Nat. Phys. 12, 1027 (2016).
* Rojas-Sánchez _et al._ [2016] J.-C. Rojas-Sánchez, S. Oyarzún, Y. Fu, A. Marty, C. Vergnaud, S. Gambarelli, L. Vila, M. Jamet, Y. Ohtsubo, A. Taleb-Ibrahimi, P. Le Fèvre, F. Bertran, N. Reyren, J.-M. George, and A. Fert, Spin to charge conversion at room temperature by spin pumping into a new type of topological insulator: $\alpha$-Sn films, Phys. Rev. Lett. 116, 096602 (2016).
* Kitaev [2003] A. Kitaev, Fault-tolerant quantum computation by anyons, Ann. Phys. (N. Y.) 303, 2 (2003).
* Tewari _et al._ [2007] S. Tewari, S. Das Sarma, C. Nayak, C. Zhang, and P. Zoller, Quantum computation using vortices and Majorana zero modes of a ${p}_{x}+i{p}_{y}$ superfluid of fermionic cold atoms, Phys. Rev. Lett. 98, 010506 (2007).
* Fu and Kane [2008] L. Fu and C. L. Kane, Superconducting proximity effect and Majorana fermions at the surface of a topological insulator, Phys. Rev. Lett. 100, 096407 (2008).
* Liu _et al._ [2014a] Z. K. Liu, B. Zhou, Y. Zhang, Z. J. Wang, H. M. Weng, D. Prabhakaran, S.-K. Mo, Z. X. Shen, Z. Fang, X. Dai, Z. Hussain, and Y. L. Chen, Discovery of a three-dimensional topological Dirac semimetal, Na3Bi, Science 343, 864 (2014a).
* Xu _et al._ [2014] S.-Y. Xu, C. Liu, S. K. Kushwaha, R. Sankar, J. W. Krizan, I. Belopolski, M. Neupane, G. Bian, N. Alidoust, T.-R. Chang, H.-T. Jeng, C.-Y. Huang, W.-F. Tsai, H. Lin, P. P. Shibayev, F.-C. Chou, R. J. Cava, and M. Z. Hasan, Observation of Fermi arc surface states in a topological metal, Science 347, 294 (2014).
* Borisenko _et al._ [2014] S. Borisenko, Q. Gibson, D. Evtushinsky, V. Zabolotnyy, B. Büchner, and R. J. Cava, Experimental realization of a three-dimensional Dirac semimetal, Phys. Rev. Lett. 113, 027603 (2014).
* Liu _et al._ [2014b] Z. K. Liu, J. Jiang, B. Zhou, Z. J. Wang, Y. Zhang, H. M. Weng, D. Prabhakaran, S.-K. Mo, H. Peng, P. Dudin, T. Kim, M. Hoesch, Z. Fang, X. Dai, Z. X. Shen, D. L. Feng, Z. Hussain, and Y. L. Chen, A stable three-dimensional topological Dirac semimetal Cd3As2, Nature Materials 13, 677 (2014b).
* Neupane _et al._ [2014] M. Neupane, S.-Y. Xu, R. Sankar, N. Alidoust, G. Bian, C. Liu, I. Belopolski, T.-R. Chang, H.-T. Jeng, H. Lin, _et al._ , Observation of a three-dimensional topological Dirac semimetal phase in high-mobility Cd3As2, Nat. Commun. 5, 1 (2014).
* Vail _et al._ [2019] O. Vail, P. Taylor, P. Folkes, B. Nichols, B. Haidet, K. Mukherjee, and G. de Coster, Growth and magnetotransport in thin-film $\alpha$-Sn on CdTe, Phys. Status Solidi B 257, 1800513 (2019).
* Tu _et al._ [1989] L. W. Tu, G. K. Wong, S. N. Song, Z. Zhao, and J. B. Ketterson, Shubnikov–de Haas effect in thin epitaxial films of gray tin, Appl. Phys. Lett. 55, 2643 (1989).
* Madarevic _et al._ [2020] I. Madarevic, U. Thupakula, G. Lippertz, N. Claessens, P.-C. Lin, H. Bana, S. Gonzalez, G. D. Santo, L. Petaccia, M. N. Nair, L. M. Pereira, C. Van Haesendonck, and M. J. Van Bael, Structural and electronic properties of the pure and stable elemental 3D topological Dirac semimetal $\alpha$-Sn, APL Mater. 8, 031114 (2020).
* Van der Pauw [1958] L. J. Van der Pauw, A method of measuring specific resistivity and Hall effect of discs of arbitrary shape, Philips Res. Rep 13, 1 (1958).
* Madelung _et al._ [2002] O. Madelung, U. Rössler, and M. Schulz, eds., _Group IV Elements, IV-IV and III-V Compounds. Part b - Electronic, Transport, Optical and Other Properties_ (Springer-Verlag, 2002).
* Farrow _et al._ [1981] R. Farrow, D. Robertson, G. Williams, A. Cullis, G. Jones, I. Young, and P. Dennis, The growth of metastable, heteroepitaxial films of $\alpha$-Sn by metal beam epitaxy, J. Cryst. Growth 54, 507 (1981).
* Ewald and Tufte [1958] A. W. Ewald and O. N. Tufte, Gray tin single crystals, J. Appl. Phys. 29, 1007 (1958).
* Franz and Molenkamp [2013] M. Franz and L. Molenkamp, _Topological insulators_ (Elsevier, 2013).
* Ashcroft and Mermin [1976] N. W. Ashcroft and N. D. Mermin, _Solid State Physics_ (1976).
* Shoenberg [2009] D. Shoenberg, _Magnetic oscillations in metals_ (Cambridge university press, 2009).
* Fuchs _et al._ [2010] J. N. Fuchs, F. Piéchon, M. O. Goerbig, and G. Montambaux, Topological Berry phase and semiclassical quantization of cyclotron orbits for two dimensional electrons in coupled band models, Eur. Phys. J. B 77, 351 (2010).
* Wright and McKenzie [2013] A. R. Wright and R. H. McKenzie, Quantum oscillations and Berry’s phase in topological insulator surface states with broken particle-hole symmetry, Phys. Rev. B 87, 085411 (2013).
* Xiong _et al._ [2012] J. Xiong, Y. Luo, Y. Khoo, S. Jia, R. J. Cava, and N. P. Ong, High-field Shubnikov – de Haas oscillations in the topological insulator Bi2Te2Se, Phys. Rev. B 86, 045314 (2012).
* Neto _et al._ [2009] A. C. Neto, F. Guinea, N. M. Peres, K. S. Novoselov, and A. K. Geim, The electronic properties of graphene, Rev. Mod. Phys. 81, 109 (2009).
* Gantmakher and Levinson [2012] V. Gantmakher and Y. Levinson, _Carrier scattering in metals and semiconductors_ (Elsevier, 2012).
Hector Santos-Villalobos: E-mail<EMAIL_ADDRESS>Telephone: 1 865 574 0215
David S. Bolme: E-mail<EMAIL_ADDRESS>Telephone: 1 865 576 0300
Notice: This manuscript has been authored by UT-Battelle, LLC, under contract
DE-AC05-00OR22725 with the US Department of Energy (DOE). The US government
retains and the publisher, by accepting the article for publication,
acknowledges that the US government retains a nonexclusive, paid-up,
irrevocable, worldwide license to publish or reproduce the published form of
this manuscript, or allow others to do so, for US government purposes. DOE
will provide public access to these results of federally sponsored research in
accordance with the DOE Public Access Plan (http://energy.gov/downloads/doe-
public-access-plan).
# The Mertens Unrolled Network (MU-Net): A High Dynamic Range Fusion Neural
Network for Through the Windshield Driver Recognition
Max Ruby Purdue University, 610 Purdue Mall, West Lafayette, USA David S.
Bolme Oak Ridge National Laboratory, 1 Bethel Valley Rd, Oak Ridge, USA Joel
Brogan Oak Ridge National Laboratory, 1 Bethel Valley Rd, Oak Ridge, USA
David Cornett III Oak Ridge National Laboratory, 1 Bethel Valley Rd, Oak
Ridge, USA Baldemar Delgado Texas A&M University-Kingsville, 700 University
Blvd, Kingsville, USA Gavin Jager University of Michigan, 500 S. State St,
Ann Arbor, USA Christi Johnson Oak Ridge National Laboratory, 1 Bethel
Valley Rd, Oak Ridge, USA Jose Martinez-Mendoza Texas A&M University-
Kingsville, 700 University Blvd, Kingsville, USA Hector Santos-Villalobos
Oak Ridge National Laboratory, 1 Bethel Valley Rd, Oak Ridge, USA Nisha
Srinivas Oak Ridge National Laboratory, 1 Bethel Valley Rd, Oak Ridge, USA
###### Abstract
Face recognition of vehicle occupants through windshields in unconstrained
environments poses a number of unique challenges ranging from glare, poor
illumination, driver pose and motion blur. In this paper, we further develop
the hardware and software components of a custom vehicle imaging system to
better overcome these challenges. After the build out of a physical prototype
system that performs High Dynamic Range (HDR) imaging, we collect a small
dataset of through-windshield image captures of known drivers. We then re-
formulate the classical Mertens-Kautz-Van Reeth HDR fusion algorithm as a pre-
initialized neural network, which we name the Mertens Unrolled Network (MU-
Net), for the purpose of fine-tuning the HDR output of through-windshield
images. Reconstructed faces from this novel HDR method are then evaluated and
compared against other traditional and experimental HDR methods in a pre-
trained state-of-the-art (SOTA) facial recognition pipeline, verifying the
efficacy of our approach.
###### keywords:
MU-Net, HDR, Facial Recognition, Through Windshield
## 1 INTRODUCTION
Face recognition of vehicle occupants in unconstrained environments,
particularly through the windshield, poses a number of challenges. Previous
research has shown that artifacts introduced by a windshield can greatly
impact a camera’s ability to image within the vehicle interior [1].
Additionally, images captured in this scenario are typically from long
distances and moderate speeds, which reduces the amount of light available to
the sensor. Low-intensity ultraviolet or near-infrared (NIR) illumination has
become increasingly ineffective in alleviating these issues as tinted
windshields that block these wavelengths gain popularity. Likewise, increasing
the exposure time is not a viable remedy: because the vehicle is in motion,
longer exposures introduce motion blur that significantly degrades face
recognition quality.
Furthermore, windshields are reflective, which produces several unique
challenges. First, it further reduces the light available to the sensors.
Second, the windshield will reflect light from other sources into the sensor.
If that light propagates directly from a light source, this will often cause
obstructive glare (Figure 1). Even if that light is reflected off of an object
onto the windshield, the object will appear as an unwanted artifact overlaid
with the desired image of the driver.
Current solutions for this problem include flashing visible lights at the
driver, such as the system devised by Gatekeeper Security [2]. This is highly
undesirable, as it both distracts the driver from safe vehicle operation and
has the potential to cause obstructive glare due to the windshield.
To provide the best possible input to a deep learning algorithm, a custom
multi-camera imaging system was developed to specifically mitigate these
hurdles while remaining non-intrusive [3]. The system is modular in design,
where each unit is composed of both an imaging system and an associated
computational system, seen in Figures 2, 3, and 4. The raw images captured by
the system can subsequently be processed by any HDR method to ultimately
provide the processed input to facial recognition software.
Figure 1: A typical example of an HDR burst captured from the Through the
Windshield System, including heavy glaring artifacts.
This paper extends the previous work in two main ways: First, we collect and
annotate a new dataset for the purpose of training HDR reconstruction models
fine-tuned for our specific imaging hardware. Second, we propose a novel
network architecture for HDR reconstruction by using an unrolling technique to
model the Mertens-Kautz-Van Reeth HDR algorithm [4] as a neural network - we
call this network the Mertens-Unrolled Network, or MU-Net. Finally, to analyze
the end-to-end system, SOTA facial detection and recognition algorithms are
used to evaluate the system’s image capture and HDR reconstruction quality in
its overall goal to detect and identify faces through the windshield of moving
vehicles. We also compare the efficacy of MU-Net against other classical and
deeply-learned HDR methods from previous works in the same recognition
pipeline.
## 2 Motivation
The applications of this system are numerous. For example, the ability to
recognize drivers through the windshield has the potential to speed up
security checkpoints in restricted areas. Such systems can also be permanently
deployed along roads to locate criminals and terrorists. Moreover, a more
agile version of this system can be used to greatly accelerate the flow of
traffic through temporary checkpoints which have been set up to catch
identified criminals who have recently committed a high-profile crime or
escaped from prison. Alternatively, it can be used to help locate a child who
is the subject of an AMBER Alert, potentially returning them to safety
faster.
However, contrast within the vehicle is a function of the ambient light, the
shadows from the vehicle, and the tint of the windshield. We should expect
that these conditions may change quickly; by the addition or removal of cloud
cover, or by a passing driver with their lights on. If we have a single camera
with fixed optics, then we cannot expect the image to have good contrast most
of the time. Unfortunately, automatic camera gain is difficult given these
situations. This is why HDR fusion techniques for our camera system are so
valuable - we expect that if we take pictures from multiple cameras with
varying optics, we should be able to get a high-quality image by combining
them.
## 3 Related Work
There is a notable body of work in the field of HDR fusion, a review of which
can be found in [5]. Note that the algorithms given by Mertens et al. [4] and
Debevec et al. [6] usually perform adequate HDR fusion under normal
circumstances. However, when dealing with through-windshield environments,
these methods are not designed to account for the unfortunately common cases
of obstructive glare, which obscures the face within one or more images in the
HDR burst.
Most external driver recognition research has been performed in the commercial
sector [7] and public sector [8]. Beyond an initial case study at Oak Ridge
National Laboratory [3], very little open research has been published on the
topic. While little work has been done in the field of driver recognition
through the windshield, there has been significant work in two related
fields: vehicle occupancy detection and glass artifact suppression. In
vehicle occupancy detection [9, 10, 11, 12], algorithms aim to detect and
accurately count the number of occupants in an automobile, usually for the
purpose of automatically enforcing HOV and carpool lane rules. In glass
artifact suppression, algorithms look to effectively remove artifacts such as
reflection, refraction, and glare, that are introduced by glass into real-
world images [13, 14, 15, 16]. While these works are useful within their own
context, they are solutions designed around separate problems and therefore
sub-optimal for the through-windshield recognition task, due to less-stringent
time or accuracy constraints on their output.
Convolutional Neural Networks (CNNs) are the current SOTA approach for a
large number of image processing tasks. Some recent successes of neural
networks can be attributed to their ability to learn and optimize the behavior
of previously established deterministic algorithms, as pointed out in [17].
Moreover, the technique of taking an algorithm and turning it into a neural
network - for example, through “deconstructing” or “unrolling” - has been
shown to effectively incorporate domain-specific priors into a given model [18,
19]. This technique can provide a straightforward improvement to a problem’s
solution, as long as the solution in question is already acceptably effective
and easily posed as a network architecture. If we design and initialize
a network with parameters that accommodate a target algorithm, we can ensure
that the original algorithm is not only mimicked by the neural network but can
also be improved upon through fine-tuning [20]. We should expect an
algorithm re-formulated as a neural network to perform no worse than the
original.
There is a growing body of work focused around performing HDR reconstruction
using a CNN. Many of them attempt to do this using a single shot [21, 22] or
by simply using an extremely large number of weights [21, 23, 24, 22]. For real-time,
robust driver recognition, both of these are unacceptable. In the former case,
we expect that a single shot will have a high probability of either misfiring
or inaccurately capturing the face, due to the potential for obstructive
glare. In the latter case, an extremely large model cannot be run locally,
within an embedded system on-device, and expected to perform with adequate
throughput for the frequency and volume of highway traffic. Previous work
attempts to overcome these issues using a GAN approach [3], which this work
improves upon. While our proposed Deep HDR Pipeline, the MU-Net, is designed
and trained to work specifically with our custom face imaging system, its
architectural design, which mimics the general Mertens HDR algorithm, inherently
improves the method’s generalizability across all domains, not just through-
windshield faces.
## 4 Computational Imaging System Overview
Our custom imaging system was developed to withstand harsh environments as
well as produce quick results for biometric assessment in field deployment.
The overall system design is modular, where each computational unit added can
perform stand-alone, or input its data into a central analysis system. A
single computational imaging unit is comprised of two primary components:
weatherproof camera enclosure and a ruggedized computer. The camera enclosure
is shown in Figure 2 and the associated computing system can be seen in Figure
3.
The camera enclosure houses an array of three Basler GigE cameras ($2048\times
1536$ pixel matrix, 3.45 $\mu$m pixel size), mounted on a 3D-printed bracket
and arranged horizontally. The cameras are linked to the associated computer
unit via Power over Ethernet (PoE) connections. They are aligned to have a
common focal point at 10 meters. Gain and exposure time are programmatically
adjusted at the time of data acquisition. The cameras are equipped with
filters as follows: First, all cameras are equipped with a 50 mm focal length
and 34 mm aperture lens. Next, all cameras are equipped with a Midwest Optical
Systems UVIR filter to sample only visible light. Third, Cameras #1 and #2 are
equipped with a Midwest Optical Systems 0.9 and 0.3 ND filter, respectively.
Last, cameras #1, #2, and #3 are equipped with Meadowlark Optics linear
polarizers (extinction ratio greater than 10,000,000:1) placed at angles of
$80^{\circ}$, $90^{\circ}$, and $100^{\circ}$, respectively.
The ruggedized computer is also housed in a 3D-printed, weatherized enclosure.
The computer has 32 GB of memory, an Intel Core i7 CPU with 8 processing cores,
and an NVIDIA GTX 1050 Ti GPU. The computer was chosen for its PoE ports used to
interface with the camera unit as well as its GPU so that it can handle the
computational burden of the implemented deep learning algorithms.
To trigger the imaging system, an Optex through-beam photoelectric sensor was
used. To ensure frame synchronization across multiple systems, the same trigger
signal was distributed to all systems. Once triggered, an imaging unit
captures 20 frame-sets (20 frame-sets x 3 cameras = 60 images per triggered event).
The system software is also modular by design, and leverages the Google Remote
Procedure Call (gRPC) library. The gRPC library allows for the use of
independent servers so that each task of the image acquisition and processing
pipeline can be managed independently.
For all experiments mentioned in this paper, two modular units were used -
one on either side of the driving lane that test subjects would drive through.
This configuration was used to account for driver head pose, ensuring that a
sufficiently frontal face shot could be captured for detection and
identification tasks. The layout of the units can be observed in Figure 4.
Figure 2: Computational imaging unit - weatherproof camera array. Figure 3:
Computational imaging unit - weatherproof ruggedized computer. Figure 4: Two-
unit camera setup for dataset collection at security portal.
## 5 Overview of Image Pipeline
Before images are acquired, the cameras are calibrated to give an initial
estimate for coarse image registration; after this, we use FaRO’s [25]
implementation of RetinaFace face detection on the three images. Using these
detection locations, a secondary registration is performed (in the case that a
face was not detected in a given image, this step is skipped). We crop the
images to 600 $\times$ 600 pixels and perform fine registration using a
modified form of FFT-based image registration [26]; we modify the algorithm to
multiply the images by a Kaiser window with $\beta=4$ so as to “prefer”
smaller shifts (we expect the fine registration to be, in our case, usually
between $30$ and $150$ pixels in any given direction, as we have already been
coarsely registered). We utilize this method of registration to achieve
accurate correspondence and alignment without the need to detect facial
landmark points, which are often noisy or incorrect within our dataset. It
should be noted that this method deviates from the previous work, [3], which
uses Hamming windows to perform alignment instead. The images are then cropped
to $256\times 256$ pixels and then fed into the HDR fusion neural network
outlined in Figure 5. This produces a final face tile which can then be fed to
a biometric template extractor.
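As a concrete illustration of the fine-registration step, the following is a
minimal NumPy sketch of windowed phase correlation, assuming grayscale float
images of equal size; the Kaiser window with $\beta=4$ matches the description
above, while the function name, normalization epsilon, and integer-shift
handling are illustrative choices rather than the pipeline's exact
implementation.

```python
import numpy as np

def register_fft(ref, img, beta=4.0):
    """Estimate the integer (dy, dx) shift of `img` relative to `ref`
    via phase correlation; the Kaiser window biases the estimate toward
    the small residual shifts expected after coarse registration."""
    w = np.outer(np.kaiser(ref.shape[0], beta), np.kaiser(ref.shape[1], beta))
    F1, F2 = np.fft.fft2(ref * w), np.fft.fft2(img * w)
    cross = F1 * np.conj(F2)
    cross /= np.abs(cross) + 1e-12        # normalized cross-power spectrum
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks past the midpoint correspond to negative shifts (FFT wrap-around)
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return dy, dx
```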
## 6 MU-Net: Mertens Unrolled Network
Motivated by the simplicity of the Mertens HDR algorithm, we first wanted to
show that the algorithm itself could be reformulated within a CNN
architecture. Therefore, MU-Net was conceived by applying an “Unrolling”,
first used in [27], to the Mertens algorithm.
For completeness, a rough sketch of the Mertens HDR algorithm follows. First,
the contrast and well-exposedness of each pixel are computed by the formulas
$\begin{split}C_{i,j,k}=|R_{i-1,j,k}+R_{i+1,j,k}+R_{i,j-1,k}\\\
+R_{i,j+1,k}-4R_{i,j,k}|\end{split}$ (1)
and
$X_{i,j,k}=\exp\left(-\frac{1}{2}\left(\frac{R_{i,j,k}-0.5}{\sigma}\right)^{2}\right)$
(2)
where $R_{i,j,k}\in[0,1]$ is the $(i,j)$th pixel of the $k$th image, and
$\sigma$ is some standard deviation (in the standard implementation
$\sigma=0.2$.) Concretely, $X$ is a measure of image exposure, while $C$ is a
measure of contrast. We note that for a color image, there is also a measure
of saturation; this network can be extended to include this saturation
measure, but in our greyscale implementation it is not necessary.
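For reference, both measures can be computed directly from an exposure stack.
Below is a minimal NumPy sketch of Eqs. (1) and (2), assuming a stack of $K$
grayscale exposures with values in $[0,1]$; edge padding at the image borders
is an illustrative boundary choice.

```python
import numpy as np

def quality_weights(R, sigma=0.2):
    """Contrast C (Eq. 1), well-exposedness X (Eq. 2), and normalized
    per-pixel quality for an exposure stack R of shape (K, H, W)."""
    Rp = np.pad(R, ((0, 0), (1, 1), (1, 1)), mode="edge")
    C = np.abs(Rp[:, :-2, 1:-1] + Rp[:, 2:, 1:-1]
               + Rp[:, 1:-1, :-2] + Rp[:, 1:-1, 2:] - 4.0 * R)   # Eq. (1)
    X = np.exp(-0.5 * ((R - 0.5) / sigma) ** 2)                  # Eq. (2)
    W = C * X                                    # per-pixel "quality"
    return W / (W.sum(axis=0, keepdims=True) + 1e-12)  # normalized over K
```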
After this, the $C$ and $X$ measurements are multiplied to give a measure of
“quality” of each pixel of each image. This quality metric is then normalized,
and the images are mixed using a Laplacian blending scheme, as in [28].
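A compact grayscale sketch of this weighted Laplacian blending is shown below,
assuming OpenCV's pyrDown/pyrUp primitives and weight maps already normalized
as above; the pyramid depth and border behavior are implementation choices,
not necessarily those of the reference implementation.

```python
import cv2
import numpy as np

def fuse_pyramids(images, weights, levels=8):
    """Blend float32 exposures: each image's Laplacian pyramid is
    weighted by the Gaussian pyramid of its weight map, summed across
    exposures, and collapsed back into a single fused image."""
    blended = None
    for img, w in zip(images, weights):
        gi, gw = [img], [w]
        for _ in range(levels):
            gi.append(cv2.pyrDown(gi[-1]))
            gw.append(cv2.pyrDown(gw[-1]))
        pyr = [gw[l] * (gi[l] - cv2.pyrUp(gi[l + 1], dstsize=gi[l].shape[1::-1]))
               for l in range(levels)]
        pyr.append(gw[-1] * gi[-1])             # weighted base (coarsest) level
        blended = pyr if blended is None else [b + p for b, p in zip(blended, pyr)]
    out = blended[-1]
    for l in range(levels - 1, -1, -1):         # collapse the blended pyramid
        out = cv2.pyrUp(out, dstsize=blended[l].shape[1::-1]) + blended[l]
    return out
```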
We determine that the way to proceed is to develop a network composed of three
blocks, following the scheme above. First, a Contrast Block, or “C-Block”,
which is initialized to compute the contrast as above. Second, a Well-Exposure
Block, or “X-Block,”, which is initialized to compute the Well-Exposure as
above. Third, a Pyramid Blending Block, or “P-Block,”, which is initialized to
perform Laplacian Pyramid Blending. The output of the C-Block and X-Block is
multiplied together, a convolutional layer is applied, and the resulting
“Image Quality” is fed into the P-Block along with the original images. A
sketch of the neural network, as well as the blocks, are in Figure 5. Note
that only 3 “steps” of the P-Block are shown in Figure 5, for brevity. In
reality, our network contains 8 (that is, the network should be naturally
extended so that the outputs of the upper-right and lower-right 2D
convolutional layers are $1\times 1$ pixel each; since each convolutional layer
has a stride of 2, eight stages reduce the $256\times 256$ input to
$256/2^{8}=1$ pixel).
Figure 5: An overview of the MU-Net architecture. From left to right: i) The
Contrast Block (C-Block), ii) The Exposure Block (X-Block), iii) the
Concatenation Block (P-Block). Note that P-Block shows only 3 unrolled loops
for brevity. Each convolutional and upsampling layer has a stride of 2 with 3
output channels, unless denoted by a superscript. Each conv layer utilizes
tanh activation unless otherwise denoted with a ReLU block. * - contains an
ELU, Negative, Exp, and Tanh activation.
Unless denoted with a ReLU block, each convolutional layer in Figure 5 is
followed by a tanh activation. The C-Block’s first convolutional layer has 6
output channels, while its second has 3 output channels, using $7\times 7$
kernels. Each layer in the X-Block has 3 channels with $3\times 3$ kernels.
The ReLU layer that has been starred within the X-Block has an ELU, Negative,
Exp, and Tanh activation. The output of the last Conv2D layer represents an
exponent, whose value we want to remain negative so that our network’s output
never “explodes”. The ELU and Negative force that condition on that parameter.
The Conv2D layer feeding into the P-Block has $3$ output channels and a $3\times 3$
kernel. Every Conv2D layer in the P-Block has a $7\times 7$ kernel and a tanh
activation function. All of them have 3 output channels, with the exception of
the ones which are superscripted.
As we developed this generator by unrolling the Laplacian Blending scheme in
the Mertens algorithm, we call it the Mertens Unrolled Network, or “MU-Net.”
Note that the architecture is rather light-weight: our current architecture
has under $45,000$ parameters.
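To make the pre-initialization idea concrete, the following hedged Keras
sketch shows one way a convolution can start out reproducing the Laplacian
contrast of Eq. (1) before any training; the single-channel layer and the
externally applied absolute value are simplifications of the actual C-Block,
which uses 6- and 3-channel layers with tanh activations.

```python
import numpy as np
import tensorflow as tf

# 7x7 kernel pre-set to the 4-neighbor Laplacian of Eq. (1); Keras kernel
# layout is (kernel_h, kernel_w, in_channels, out_channels).
lap = np.zeros((7, 7, 1, 1), dtype=np.float32)
lap[3, 3, 0, 0] = -4.0
for dy, dx in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
    lap[3 + dy, 3 + dx, 0, 0] = 1.0

conv = tf.keras.layers.Conv2D(
    filters=1, kernel_size=7, padding="same", use_bias=False,
    kernel_initializer=tf.constant_initializer(lap))

x = tf.random.uniform((1, 256, 256, 1))   # one grayscale exposure
contrast = tf.abs(conv(x))                # |Laplacian|, matching Eq. (1)
```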
A significant advantage to this approach is that we avoid treating our neural
network like a “black box.” Specifically, we expect that our C-Block measures
something similar to the contrast, our X-Block measures something similar to
the Well-Exposedness, and that our P-Block performs something similar to
Laplacian Pyramid Blending. This means that it should be possible to debug the
MU-Net, and we can identify when and where our training has gone badly.
### 6.1 MU-Net Discriminator: The NU-Net
We use a Feature Pyramid Network (FPN) architecture to design our
discriminator [29]. The FPN provides a multi-scale framework for learning
features to detect real and fake images at different frequency scales. If we
take the maximum element in each channel (that is, if we maxpool down to
$1\times 1$ pixel), this should serve as an indicator of how much each feature
exists within the input image. This pooled result is then feed that a
classifier layer with a single dimension output. We decided to downsample more
aggressively than in the MU-Net to reduce the number of parameters required;
it was found that it was easy for the architecture to overtrain if it was
given more parameters, so we opt to use a smaller network. Moreover, we feed
the contrast and well-exposure maps from the C-Block and X-Block into the
discriminator, as these are pertinent features to generate correctly when
producing a fused HDR image.
Figure 6: NU-Net (Discriminator) Architecture.
The architecture for NU-Net is shown in Figure 6. The upper half of the
diagram is the same as in the MU-Net, utilizing the C- and X-Block. The
leftmost Conv2D layers all have a $7\times 7$ kernel, except for the last,
which has a $4\times 4$ kernel due to reduced resolution. They have 4, 8, 16,
and 32 output channels, from top to bottom, respectively. The upsampling
Conv2D layers have 16 output channels and a $7\times 7$ kernel, except the
first, which has a $3\times 3$ kernel due to reduced resolution. The rightmost
Conv2D layers all have $8$ output channels, with $5\times 5$ kernels.
As we developed this discriminator by doing something other than unrolling, we
call it the Not Unrolled Network, or “NU-Net.” Both the MU-Net and NU-Net were
built in Keras, using the Tensorflow backend.
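As a minimal sketch of the pooled decision head described above, in the same
Keras/TensorFlow setting (the sigmoid output activation and the input
feature-map shape are illustrative assumptions):

```python
import tensorflow as tf

head = tf.keras.Sequential([
    tf.keras.layers.GlobalMaxPooling2D(),           # each map -> 1x1 "presence" score
    tf.keras.layers.Dense(1, activation="sigmoid")  # single real/fake output
])
score = head(tf.random.uniform((1, 16, 16, 32)))    # a batch of 32 feature maps
```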
### 6.2 Training
We were able to take a number of photos at the security gate of a restricted
access vehicle portal. From this set of photos, we were able to capture $1957$
triples of images which were of faces, as well as $2371$ images of faces which
were of sufficiently high quality for training the discriminator. Of this, we
used $1440$ of the triples and face tiles for training, $160$ for validation,
and the rest as a testing set.
We determined that the use of Batch Normalization layers to overcome the
exploding gradients problem was not a good solution; naive approaches to Batch
Normalization eliminate the advantages of our initialization, due to our
initialization passing values of nonzero mean and nonunit variance between
layers. We opted to apply $L1$ regularization to the layers, with
$\lambda=0.001$, along with gradient clipping, with our clipnorm value being
$0.1$. Moreover, we determined that pretraining the adversary for $10$ epochs
would improve the initial training, as our generator had a “head-start” due to
its good initialization. We used ADAM to optimize, with an initial learning
rate of $0.005$, a learning rate decay of $0.000001$, and a value of epsilon
equal to $0.001$; we set all other parameters to the defaults.
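For reproducibility, the stated settings map onto Keras roughly as follows;
this sketch assumes the legacy Adam interface (which exposes decay directly),
and all unspecified parameters are left at their defaults.

```python
import tensorflow as tf

# Optimizer settings from the text: lr 0.005, decay 1e-6, epsilon 0.001,
# gradient clipping with clipnorm 0.1; L1 penalty attached to layer kernels.
optimizer = tf.keras.optimizers.Adam(
    learning_rate=0.005, decay=1e-6, epsilon=0.001, clipnorm=0.1)
l1_penalty = tf.keras.regularizers.l1(0.001)
```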
During training, we noticed that the output of our GAN began to degenerate
once the adversary began to learn. It seems that this is due to the generator
learning a “cheap trick” that fools the adversary, but does not produce
realistic looking images. This was solved by taking the training result before
degeneration; we save output images after each epoch, and take the result that
looks qualitatively best. Although we trained for $100$ epochs, we determined
that the result of training after the $71$st epoch provided the best aesthetic
quality.
## 7 Experimental Evaluation and Results
This section details our effort to evaluate and analyze the effectiveness of
the MU-Net HDR algorithm for the purpose of through-windshield facial
recognition.
Figure 7: ROC performance of MU-Net compared to other HDR Methods.
### 7.1 The Driver Dataset
To evaluate MU-Net on the image pipeline from Section 5, we capture a new
dataset of known drivers, disjoint from the training data captured in Section
6.2. The dataset was captured at a facility security portal using live driving
conditions over the course of two days. For this experiment there were 22
participants enrolled as gallery subjects for the recognition set. These
participants entered the facility through the security portal under normal
driving conditions. The images captured during this experiment represented the
scene illumination changes that occur through sunrise to sundown. There were
471 vehicle triggered events that produced 56,520 raw images. For each of
these events that involved an enrolled participant, each three-image raw
frame-set was fused via each of the evaluated HDR methods to be used
as probes by the face identification systems.
### 7.2 Baseline Methods
Classical HDR fusion can be performed in a number of ways, usually combining
images taken at different radiance ranges and fusing them into a single
radiance image. This method allows the fused image to display a range of
radiance levels that could not be captured by a single image. For our
application, HDR imaging provides a means to overcome the artifacts introduced
into face images from windshield glare and reflection.
A brief summary of all methods we will compare to our novel approach is laid
out below:
* •
Matlab HDR[30] is the standard Matlab implementation for HDR fusion. It is
used as a standard comparison against the other methods.
* •
Mertens-Kautz-Van Reeth HDR (Mertens) [4] is the classical Mertens algorithm,
which utilizes contrast and exposure estimates to fuse the three raw frames for
detection and recognition.
* •
Debevec HDR[6] is based on Debevec radiance mapping with Durand tone mapping.
This method is also a physics-based implementation of HDR.
* •
PerceptionGAN [3] is a GAN network trained specifically for visual perception
and was the initial design for the system used in [3].
Figure 8: Sampled images from cameras (top), input to HDR (middle), Mertens
HDR reconstruction (bottom left), and MU-Net HDR reconstruction (bottom
right).
To analyze the efficacy of MU-Net, we performed a verification study on the
dataset introduced in Section 7.1, comparing performance against the four
baselines described in Section 7.2. We also include the baseline Single
Camera, which uses a randomly-selected, non-processed image from each HDR
burst. For each method, we detect faces using RetinaFace [31] and subsequently
extract feature templates using Additive Angular Margin Loss for Deep Face
Recognition (ArcFace)[32]. Because our driver dataset is relatively small and
was not used for network training, we report results over the entire set of
subjects. Figure 7 shows the final results of all 6 methods. The MU-Net
provides the best performance out of all those evaluated, improving upon the
PerceptionGAN by almost $7\%$. As can be seen from the AUCs of most other
methods, this problem is inherently difficult to solve, and classical HDR
methods do not provide high enough quality reconstructions for accurate face
recognition. Figure 8 shows that classical methods like Debevec and Mertens
are not robust towards artifacts introduced by windshield glare and
reflection, which significantly impairs verification performance.
Quantitative and qualitative analysis of the MU-Net pipeline revealed two key
features of the system: First, MU-Net is better able to discriminate areas of
poor quality, including glare and reflection artifacts, than the original
Mertens or Debevec HDR methods, as seen in Figure 8 columns 3,4, and 5.
Second, MU-Net provides output reconstructions that are more stable than those
of PerceptionGAN. Because the underlying architecture of MU-Net was designed
and initialized from a state that already performed classical HDR, the network
overfits our data less, resulting in fewer GAN-like artifacts (see Figure 8
columns 6 and 7). We see in general that MU-Net consistently outputs a more
stable image free from windshield artifacts than its competing baselines.
While MU-Net improves upon previous methods for through-windshield
recognition, there is still significant work to be done to achieve a highly
reliable system. For instance, MU-Net can suffer when running in low-light
conditions. This highlights the need for more uniform data collection, ideally
over the course of an entire day. Moreover, the MU-Net was trained on faces -
it does not appear to work properly on images that are not faces.
## 8 Conclusion and Discussion
The MU-Net architecture described in this paper represents a significant step
forward in the use of neural network architectures for HDR fusion for face
recognition through the windshield. Using this method we are able to obtain
higher quality images than classical fusion methods while maintaining a
relatively low computational overhead. While other works have performed HDR
using deep learning, our method is unique in that it is designed on top of the
original Mertens HDR algorithm, and has a relatively low number of parameters
in contrast to other comparative algorithms. The MU-Net shows that we are able
to successfully incorporate exposure and contrast quality priors into our deep
learning architecture, allowing for faster training and more stable fusion
results that introduce significantly fewer GAN-related generative artifacts.
While the MU-Net is a major improvement over previous methods, significant
work must be done to further enhance facial recognition in the adverse
conditions of through-windshield imaging. We believe it would be beneficial to
train the network stacked end-to-end with a face recognition algorithm,
eliminating the need for training with an ad-hoc discriminator, and optimizing
the HDR fusion expressly for the task of facial recognition.
Efforts are currently being made to expand the custom dataset of through the
windshield images. This expansion would include more enrolled subjects, a more
expansive training set, and a wider array of weather and illumination
conditions. Additionally, the custom imaging system is being redesigned with
better optics, utilizing polarized filter arrays to mitigate glare for
different angles of incidence.
###### Acknowledgements.
This research was supported in part by an appointment to the Oak Ridge
National Laboratory Post-Bachelor’s Research Associate Program, sponsored by
the U.S. Department of Energy and administered by the Oak Ridge Institute for
Science and Education, the U.S. Department of Energy, Office of Science,
Office of Workforce Development for Teachers and Scientists (WDTS) under the
Science Undergraduate Laboratory Internships Program (SULI), and the Texas A&M
University-Kingsville, Office of University Programs, Science and Technology
Directorate, Department of Homeland Security Grant Award # 2012-ST
-062-000054.
## References
* [1] Bhise, V. and Sethumadhavan, S., “Effect of windshield veiling glare on driver visibility,” Transportation Research Record 2056(1), 1–8 (2008).
* [2] Harmon, D., “Facial recognition and registration plate reading,” (2017).
* [3] Cornett, D., Yen, A., Nayola, G., Montez, D., Johnson, C. R., Baird, S. T., Santos-Villalobos, H., and Bolme, D. S., “Through the windshield driver recognition,” Electronic Imaging XVII, 140–1–140–9 (2019).
* [4] Mertens, T., Kautz, J., and Van Reeth, F., “Exposure fusion,” Pacific Graphics , 382 – 390 (10 2007).
* [5] Nayana, A. and Johnson, A. K., “High dynamic range imaging- a review,” (2015).
* [6] Debevec, P. E. and Malik, J., “Recovering high dynamic range radiance maps from photographs,” in [Proceedings of the 24th Annual Conference on Computer Graphics and Interactive Techniques ], SIGGRAPH ’97, 369–378, ACM Press/Addison-Wesley Publishing Co., New York, NY, USA (1997).
* [7] “Komoto unveils through windscreen face recognition,” (Mar 2018).
* [8] Brandom, R., “New homeland security system will bring facial recognition to land borders this summer,” (Jun 2018).
* [9] Hao, X., Chen, H., Yang, Y., Yao, C., Yang, H., and Yang, N., “Occupant detection through near-infrared imaging,” Tamkang Journal of Science and Engineering 14(3), 275–283 (2011).
* [10] Morris, T., Morellas, V., Canelon-Suarez, D., and Papanikolopoulos, N., “Sensing for hov/hot lanes enforcement,” (2017).
* [11] Fan, Z., Islam, A. S., Paul, P., Xu, B., and Mestha, L. K., “Front seat vehicle occupancy detection via seat pattern recognition,” (Dec. 17 2013). US Patent 8,611,608.
* [12] Artan, Y. and Paul, P., “Occupancy detection in vehicles using fisher vector image representation,” arXiv preprint arXiv:1312.6024 (2013).
* [13] Kong, N., Tai, Y.-W., and Shin, J. S., “A physically-based approach to reflection separation: from physical modeling to constrained optimization,” IEEE transactions on pattern analysis and machine intelligence 36(2), 209–221 (2013).
* [14] Wieschollek, P., Gallo, O., Gu, J., and Kautz, J., “Separating reflection and transmission images in the wild,” in [European Conference on Computer Vision (ECCV) ], (September 2018).
* [15] Fan, Q., Yang, J., Hua, G., Chen, B., and Wipf, D., “A generic deep architecture for single image reflection removal and image smoothing,” in [Proceedings of the IEEE International Conference on Computer Vision ], 3238–3247 (2017).
* [16] Arvanitopoulos, N., Achanta, R., and Susstrunk, S., “Single image reflection suppression,” in [Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition ], 4498–4506 (2017).
* [17] McCann, M. T., Jin, K. H., and Unser, M. A., “Convolutional neural networks for inverse problems in imaging: A review,” IEEE Signal Processing Magazine 34, 85–95 (2017).
* [18] Adler, J. and Öktem, O., “Learned primal-dual reconstruction,” (2017).
* [19] Putzky, P. and Welling, M., “Recurrent inference machines for solving inverse problems,” (2017).
* [20] Rao, Y. and Ni, J., “A deep learning approach to detection of splicing and copy-move forgeries in images,” in [2016 IEEE International Workshop on Information Forensics and Security (WIFS) ], 1–6, IEEE (2016).
* [21] Huo, Y. and Zhu, X., “High dynamic range image forensics using cnn,” CoRR abs/1902.10938 (2019).
* [22] Marnerides, D., Bashford-Rogers, T., Hatchett, J., and Debattista, K., “Expandnet: A deep convolutional neural network for high dynamic range expansion from low dynamic range content,” CoRR abs/1803.02266 (2018).
* [23] WANG, J., Wang, W., XU, G., and Liu, H., “End-to-end exposure fusion using convolutional neural network,” IEICE Transactions on Information and Systems E101.D, 560–563 (02 2018).
* [24] Eilertsen, G., Kronander, J., Denes, G., Mantiuk, R., and Unger, J., “Hdr image reconstruction from a single exposure using deep cnns,” ACM Transactions on Graphics (TOG) 36(6) (2017).
* [25] Bolme, D. S., Cornett III, D., and Srinivas, N., “FaRO: FAce Recognition from Oak ridge.” https://github.com/ORNL/faro (2019).
* [26] Reddy, B. S. and Chatterji, B. N., “An fft-based technique for translation, rotation, and scale-invariant image registration,” IEEE Transactions on Image Processing 5, 1266–1271 (Aug 1996).
* [27] Gregor, K. and LeCun, Y., “Learning fast approximations of sparse coding,” in [Proceedings of the 27th International Conference on International Conference on Machine Learning ], ICML’10, 399–406, Omnipress, USA (2010).
* [28] Burt, P. J. and Adelson, E. H., “The Laplacian pyramid as a compact image code,” IEEE Transactions on Communications 31(4), 532–540 (1983).
* [29] Lin, T., Dollár, P., Girshick, R. B., He, K., Hariharan, B., and Belongie, S. J., “Feature pyramid networks for object detection,” CoRR abs/1612.03144 (2016).
* [30] Reinhard, E., Heidrich, W., Debevec, P., Pattanaik, S., Ward, G., and Myszkowski, K., [High dynamic range imaging: acquisition, display, and image-based lighting ], Morgan Kaufmann (2010).
* [31] Deng, J., Guo, J., Yuxiang, Z., Yu, J., Kotsia, I., and Zafeiriou, S., “Retinaface: Single-stage dense face localisation in the wild,” in [arxiv ], (2019).
* [32] Deng, J., Guo, J., Niannan, X., and Zafeiriou, S., “Arcface: Additive angular margin loss for deep face recognition,” in [CVPR ], (2019).
# Deep Reinforcement Learning Based Intelligent Reflecting Surface for Secure
Wireless Communications
Helin Yang, , Zehui Xiong, , Jun Zhao, , Dusit Niyato, , Liang Xiao, , and
Qingqing Wu This research is supported by the National Research Foundation
(NRF), Singapore, under Singapore Energy Market Authority (EMA), Energy
Resilience, NRF2017EWT-EP003-041, Singapore NRF2015-NRF-ISF001-2277, Singapore
NRF National Satellite of Excellence, Design Science and Technology for Secure
Critical Infrastructure NSoE DeST-SCI2019-0007, A*STAR-NTU-SUTD Joint Research
Grant on Artificial Intelligence for the Future of Manufacturing RGANS1906,
Wallenberg AI, Autonomous Systems and Software Program and Nanyang
Technological University (WASP/NTU) under grant M4082187 (4080), Singapore
Ministry of Education (MOE) Tier 1 (RG16/20), and NTU-WeBank JRI
(NWJ-2020-004), Alibaba Group through Alibaba Innovative Research (AIR)
Program, Alibaba-NTU Singapore Joint Research Institute (JRI), Nanyang
Technological University (NTU) Startup Grant, Singapore Ministry of Education
Academic Research Fund Tier 1 RG128/18, Tier 1 RG115/19, Tier 1 RT07/19, Tier
1 RT01/19, and Tier 2 MOE2019-T2-1-176, NTU-WASP Joint Project, Singapore
National Research Foundation under its Strategic Capability Research Centres
Funding Initiative: Strategic Centre for Research in Privacy-Preserving
Technologies & Systems, Energy Research Institute @NTU , Singapore NRF
National Satellite of Excellence, Design Science and Technology for Secure
Critical Infrastructure NSoE DeST-SCI2019-0012, AI Singapore 100 Experiments
(100E) programme, NTU Project for Large Vertical Take-Off & Landing Research
Platform, and the Natural Science Foundation of China under Grant 61971366. A
part of this paper was accepted in IEEE Global Communications Conference,
December 2020, Taipei, Taiwan [40]. _(Corresponding author: Zehui Xiong.)_ H.
Yang, Z. Xiong, J. Zhao, and D. Niyato are with the School of Computer Science
and Engineering, Nanyang Technological University, Singapore 639798 (e-mail:
<EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS>[email protected]). L. Xiao is with the Department of Information and
Communication Engineering, Xiamen University, Xiamen 361005, China (e-mail:
[email protected]).Q. Wu is with State Key Laboratory of Internet of Things for
Smart City, University of Macau, Macau, 999078 China.
(email:[email protected]).
###### Abstract
In this paper, we study an intelligent reflecting surface (IRS)-aided wireless
secure communication system, where an IRS is deployed to adjust its reflecting
elements to secure the communication of multiple legitimate users in the
presence of multiple eavesdroppers. Aiming to improve the system secrecy rate,
a design problem for jointly optimizing the base station (BS)’s beamforming
and the IRS’s reflecting beamforming is formulated considering different
quality of service (QoS) requirements and time-varying channel conditions. As
the system is highly dynamic and complex, and it is challenging to address the
non-convex optimization problem, a novel deep reinforcement learning
(DRL)-based secure beamforming approach is firstly proposed to achieve the
optimal beamforming policy against eavesdroppers in dynamic environments.
Furthermore, post-decision state (PDS) and prioritized experience replay (PER)
schemes are utilized to enhance the learning efficiency and secrecy
performance. Specifically, a modified PDS scheme is presented to trace the
channel dynamic and adjust the beamforming policy against channel uncertainty
accordingly. Simulation results demonstrate that the proposed deep PDS-PER
learning based secure beamforming approach can significantly improve the
system secrecy rate and QoS satisfaction probability in IRS-aided secure
communication systems.
_Index Terms_ —Secure communication, intelligent reflecting surface,
beamforming, secrecy rate, deep reinforcement learning.
## I Introduction
Physical layer security (PLS) has attracted increasing attention as an
alternative of cryptography-based techniques for wireless communications [1],
where PLS exploits the wireless channel characteristics by using signal
processing designs and channel coding to support secure communication services
without relying on a shared secret key [1], [2]. So far, a variety of
approaches have been reported to improve PLS in wireless communication
systems, e.g., cooperative relaying strategies [3], [4], artificial noise-
assisted beamforming [5], [6], and cooperative jamming [7], [8]. However,
employing a large number of active antennas and relays in PLS systems incurs
an excessive hardware cost and the system complexity. Moreover, cooperative
jamming and transmitting artificial noise require extra transmit power for
security guarantees.
To tackle these shortcomings of the existing approaches [3]-[8], a new
paradigm, called intelligent reflecting surface (IRS) [9]-[13], has been
proposed as a promising technique to achieve high spectrum efficiency and
energy efficiency, and enhance secrecy rate in the fifth generation (5G) and
beyond wireless communication systems. In particular, IRS is a uniform planar
array which is comprised of a number of low-cost passive reflecting elements,
where each of elements adaptively adjusts its reflection amplitude and/or
phase to control the strength and direction of the electromagnetic wave, hence
IRS is capable of enhancing and/or weakening the reflected signals at
different users [9]. As a result, the reflected signal by IRS can increase the
received signal at legitimate users while suppressing the signal at the
eavesdroppers [9]-[13]. Hence, from the PLS perspective, some innovative
studies have been recently devoted to performance optimization for IRS-aided
secure communications [14]-[25].
### I-A Related Works
Initial studies on IRS-aided secure communication systems have been reported in
[14]-[17], where a simple system model with only a single-antenna legitimate
user and a single-antenna eavesdropper was considered in these works. The
authors in [14] and [15] applied the alternative optimization (AO) algorithm
to jointly optimize the transmit beamforming vector at the base station (BS)
and the phase elements at the IRS for the maximization of the secrecy rate,
but they did not extend their models to multi-user IRS-assisted secure
communication systems. To minimize the transmit power at the BS subject to the
secrecy rate constraint, the authors in [18] utilized an AO solution and
semidefinite programming (SDP) relaxation to address the optimization problem
with the objective to jointly optimize the power allocation and the IRS
reflecting beamforming. In addition, Feng $et~{}al$. [19] also studied the
secure transmission framework with an IRS to minimize the system transmit
power in cases of rank-one and full-rank BS-IRS links, and derived a closed-
form expression of beamforming matrix. Different from these studies [14]-[19]
which considered only a single eavesdropper, secure communication systems
comprising multiple eavesdroppers were investigated in [20]-[22]. Chen
$et~{}al$. [20] presented a minimum-secrecy-rate maximization design to
provide secure communication services for multiple legitimate users while
keeping them secret from multiple eavesdroppers in an IRS-aided multi-user
multiple-input single-output (MISO) system, but the simplification of the
optimization problem may cause a performance loss. The authors in [23] and
[24] studied an IRS-aided multiple-input multiple-output (MIMO) channel, where
a multi-antenna BS transmits data stream to a multi-antenna legitimate user in
the presence of an eavesdropper configured with multiple antennas, and a
suboptimal secrecy rate maximization approach was presented to optimize the
beamforming policy. In addition to the use of AO or SDP in the system
performance optimization, the minorization-maximization (MM) algorithm was
recently utilized to optimize the joint transmit beamforming at the BS and
phase shift coefficient at the IRS [16], [23].
Moreover, the authors in [22] and [25] employed the artificial noise-aided
beamforming for IRS-aided MISO secure communication systems to improve the
system secrecy rate, and an AO based solution was applied to jointly optimize
the BS’s beamforming, artificial noise interference vector and IRS’s
reflecting beamforming with the goal to maximize the secrecy rate. All these
existing studies [14]-[20], [22]-[25] assumed that perfect channel state
information (CSI) of legitimate users or eavesdroppers is available at the BS,
which is not a practical assumption. The reason is that acquiring perfect CSI
at the BS is challenging since the corresponding CSI may be outdated when the
channel is time-varying due to the transmission delay, processing delay, and
high mobility of users. Hence, Yu $et~{}al$. [21] investigated an optimization
problem with considering the impacts of outdated CSI of the eavesdropping
channels in an IRS-aided secure communication system, and a robust algorithm
was proposed to address the optimization problem in the presence of multiple
eavesdroppers.
The above mentioned studies [14]-[25] mainly applied the traditional
optimization techniques, e.g., AO, SDP or MM algorithms, to jointly optimize the
BS’s beamforming and the IRS’s reflecting beamforming in IRS-aided secure
communication systems, which are less efficient for large-scale systems.
Inspired by the recent advances of artificial intelligence (AI), several works
attempted to utilize AI algorithms to optimize IRS’s reflecting beamforming
[26]-[29]. Deep learning (DL) was exploited to search the optimal IRS
reflection matrices that maximize the achievable system rate in an IRS-aided
communication system, and the simulation demonstrated that DL significantly
outperforms conventional algorithms. Moreover, the authors in [28] and [29]
proposed deep reinforcement learning (DRL) based approach to address the non-
convex optimization problem, and the phase shifts at the IRS are optimized
effectively. However, the works [26]-[29] merely considered to maximize the
system achievable rate of a single user without considering the scenario of
multiple users, secure communication and imperfect CSI in their models. The
authors in [30] and [31] applied reinforcement learning (RL) to achieve smart
beamforming at the BS against an eavesdropper in complex environments, but the
IRS-aided secure communication system needs to optimize the IRS’s reflect
beamforming in addition to the BS’s transmit beamforming. To the best of our
knowledge, RL or DRL has not been explored yet in prior works to optimize both
the BS’s transmit beamforming and the IRS’s reflect beamforming in dynamic
IRS-aided secure communication systems, under the condition of multiple
eavesdroppers and imperfect CSI, which thus motivates this work.
### I-B Contributions
In this paper, we investigate an IRS-aided secure communication system with
the objective to maximize the system secrecy rate of multiple legitimate users
in the presence of multiple eavesdroppers under realistic time-varying
channels, while guaranteeing quality of service (QoS) requirements of
legitimate users. A novel DRL-based secure beamforming approach is firstly
proposed to jointly optimize the beamforming matrix at the BS and the
reflecting beamforming matrix (reflection phases) at the IRS in dynamic
environments. The major contributions of this paper are summarized as follows:
* •
The physical secure communication based on IRS with multiple eavesdroppers is
investigated under the condition of time-varying channel coefficients in this
paper. In addition, we formulate a joint BS’s transmit beamforming and IRS’s
reflect beamforming optimization problem with the goal of maximizing the
system secrecy rate while considering the QoS requirements of legitimate
users.
* •
An RL-based intelligent beamforming framework is presented to achieve the
optimal BS’s beamforming and the IRS’s reflecting beamforming, where the
central controller intelligently optimizes the beamforming policy by using a
Markov decision process (MDP) according to the instantaneous observations from
dynamic environment. Specifically, a QoS-aware reward function is constructed
by covering both the secrecy rate and users’ QoS requirements into the
learning process.
* •
A DRL-based secure beamforming approach is proposed to improve the learning
efficiency and secrecy performance by fully exploiting the information of
complex structure of the beamforming policy domain, where a modified post-
decision state (PDS) learning is presented to trace the channel dynamic
against channel uncertainty, and prioritized experience replay (PER) is
applied to enhance the learning efficiency.
* •
Extensive simulation results are provided to demonstrate the effectiveness of
the proposed deep PDS-PER leaning based secure beamforming approach in terms
of improving the secrecy rate and the QoS satisfaction probability, compared
with other existing approaches. For instance, the proposed learning approach
achieves the secrecy rate and QoS satisfaction level improvements of 17.21%
and 8.67%, compared with the approach [14] in time-varying channel condition.
The rest of this paper is organized as follows. Section II presents the system
model and problem formulation. The optimization problem is formulated as an RL
problem in Section III. Section IV proposes a deep PDS-PER based secure
beamforming approach. Section V provides simulation results and Section VI
concludes the paper.
Notations: In this paper, vectors and matrices are represented by boldface
lowercase and uppercase letters, respectively. ${\rm{Tr}}(\cdot)$,
${(\cdot)^{*}}$ and ${(\cdot)^{H}}$ denote the trace, the conjugate and the
conjugate transpose operations, respectively. $|\cdot|$ and $||\cdot||$ stand
for the absolute value of a scalar and the Euclidean norm of a vector or
matrix, respectively. $\mathbb{E}[\cdot]$ denotes the expectation operation.
${\mathbb{C}^{M\times N}}$ represents the space of complex-valued matrices.
## II System Model and Problem Formulation
### II-A System Model
Figure 1: IRS-aided secure communication under multiple eavesdroppers.
We consider an IRS-aided secure communication system, as shown in Fig. 1,
where the BS is equipped with $N$ antennas to serve $K$ single-antenna
legitimate mobile users (MUs) in the presence of $M$ single-antenna
eavesdroppers. An IRS with $L$ reflecting elements is deployed in the system
to assist secure wireless communications from the BS to the MUs. The IRS is
equipped with a controller to coordinate with the BS. For the ease of
practical implementation, the maximal reflection without power loss at the IRS
is considered since the reflecting elements are designed to maximize the
reflected desired signal power to the MUs [13]-[23]. In addition, unauthorized
eavesdroppers aim to eavesdrop any of the data streams of the MUs. Hence, the
use of reflecting beamforming at IRS is also investigated to improve the
achievable secrecy rate at the MUs while suppressing the wiretapped data rate
at the eavesdroppers. In addition, we explicitly state that the eavesdroppers
cannot collude [5], [6], [18]-[21].
Let $\mathcal{K}=\\{1,2,\ldots,K\\}$, $\mathcal{M}=\\{1,2,\ldots,M\\}$ and
$\mathcal{L}=\\{1,2,\ldots,L\\}$ denote the MU set, the eavesdropper set and
the IRS reflecting element set, respectively. Let
${\bf{H}_{{\rm{br}}}}\in{\mathbb{C}^{L\times N}}$,
${\bf{h}}_{{\rm{bu,}}k}^{H}\in{\mathbb{C}^{1\times N}}$,
${\bf{h}}_{{\rm{ru,}}k}^{H}\in{\mathbb{C}^{1\times L}}$,
${\bf{h}}_{{\rm{be,}}m}^{H}\in{\mathbb{C}^{1\times N}}$, and
${\bf{h}}_{{\rm{re,}}m}^{H}\in{\mathbb{C}^{1\times L}}$ denote the channel
coefficients from the BS to the IRS, from the BS to the $k$-th MU, from the
IRS to the $k$-th MU, from the BS to the $m$-th eavesdropper, and from the IRS
to the $m$-th eavesdropper, respectively. All the above mentioned channel
coefficients in the system are assumed to be small-scale fading with path loss
which follows the Rayleigh fading model [11]-[14], [21]. Let
${\bf{\Psi}}={\rm{diag}}({\chi_{1}}{e^{j{\theta_{1}}}},{\chi_{2}}{e^{j{\theta_{2}}}},\ldots,{\chi_{L}}{e^{j{\theta_{L}}}})$
denote the reflection coefficient matrix associated with effective phase
shifts at the IRS, where ${\chi_{l}}\in[0,1]$ and ${\theta_{l}}\in[0,2\pi]$
denote the amplitude reflection factor and the phase shift coefficient on the
combined transmitted signal, respectively. As each phase shift is desired to
be designed to achieve full reflection, we consider that ${\chi_{l}}=1$,
$\forall l\in\mathcal{L}$ in the sequel of the paper.
At the BS side, the beamforming vector for the $k$-th MU is denoted as
${{\bf{v}}_{k}}\in{\mathbb{C}^{N\times 1}}$, which is the continuous linear
precoding [11]-[16], [23]. Thus, the transmitted signal for all MUs at the BS
is written as ${\bf{x}}=\sum\nolimits_{k=1}^{K}{{{\bf{v}}_{k}}{s_{k}}}$, where
${s_{k}}$ is the transmitted symbol for the $k$-th MU which can be modelled as
independent and identically distributed (i.i.d.) random variables with zero
mean and unit variance [11]-[16], [23], and ${s_{k}}\sim\mathcal{CN}(0,1)$.
The total transmit power at the BS is subject to the maximum power constraint:
$\begin{split}\mathbb{E}[||{\bf{x}}{||^{2}}]={\rm{Tr(}}{\bf{V}}{\bf{V}}^{H}{\rm{)}}\leq{P_{\max}}\end{split}$
(1)
where
${\bf{V}}\buildrel\Delta\over{=}[{{\bf{v}}_{1}},{{\bf{v}}_{2}},\ldots,{{\bf{v}}_{K}}]\in{\mathbb{C}^{N\times
K}}$, and ${P_{\max}}$ is the maximum transmit power at the BS.
When the BS transmits a secret message to the $k$-th MU, the MU will receive
the signal from the BS and the reflected signal from the IRS. Accordingly, the
received signal at MU $k$ can be given by
$\begin{split}\begin{array}[]{l}{y_{k}}=\underbrace{\left({{\bf{h}}_{{\rm{ru,}}k}^{H}{\bf{\Psi}}{{\bf{H}}_{{\rm{br}}}}+{\bf{h}}_{{\rm{bu,}}k}^{H}}\right){{\bf{v}}_{k}}{s_{k}}}_{{\text{desired signal}}}\\\
\;\;\;\;+\underbrace{\sum\limits_{i\in{\mathcal{K}},i\neq
k}{\left({{\bf{h}}_{{\rm{ru,}}k}^{H}{\bf{\Psi}}{{\bf{H}}_{{\rm{br}}}}+{\bf{h}}_{{\rm{bu,}}k}^{H}}\right){{\bf{v}}_{i}}{s_{i}}}}_{{\text{inter-user
interference}}}+{n_{k}}\end{array}\end{split}$ (2)
where ${n_{k}}$ denotes the additive white Gaussian noise (AWGN) with zero mean
and variance $\delta_{k}^{2}$ at the $k$-th MU. In (2), we
observe that in addition to the received desired signal, each MU also suffers
inter-user interference (IUI) in the system. In addition, the received signal
at eavesdropper $m$ is expressed by
$\begin{split}{y_{m}}=\left({{\bf{h}}_{{\rm{re,}}m}^{H}{\bf{\Psi}}{{\bf{H}}_{{\rm{br}}}}+{\bf{h}}_{{\rm{be,}}m}^{H}}\right)\sum\limits_{k\in{\mathcal{K}}}{{{\bf{v}}_{k}}{s_{k}}}+{n_{m}}\end{split}$
(3)
where ${n_{m}}$ is the AWGN of eavesdropper $m$ with the variance
$\delta_{m}^{2}$ .
In practical systems, it is not easy for the BS and the IRS to obtain perfect
CSI [9], [21]. This is due to the fact that both the transmission delay and
processing delay exist, as well as the mobility of the users. Therefore, CSI
is outdated at the time when the BS and the IRS transmit the data stream to
MUs [21]. Once this outdated CSI is employed for beamforming, it will lead to
a negative effect on the demodulation at the MUs, thereby causing
substantial performance loss [21]. Therefore, it is necessary to consider
outdated CSI in the IRS-aided secure communication system.
Let ${T_{{\rm{delay}}}}$ denote the delay between the outdated CSI and the
real-time CSI. In other words, when the BS receives the pilot sequences sent
from the MUs at the time slot $t$, it will complete the channel estimation
process and begin to transmit data stream to the MUs at the time slot
$t+{T_{{\rm{delay}}}}$. Hence, the relation between the outdated channel
vector ${\bf{h}}(t)$ and the real-time channel vector
${\bf{h}}(t+{T_{{\rm{delay}}}})$ can be expressed by
$\begin{split}{\bf{h}}(t+{T_{{\rm{delay}}}})=\rho{\bf{h}}(t)+\sqrt{1-{\rho^{2}}}\hat{\bf{h}}(t+{T_{{\rm{delay}}}}).\end{split}$
(4)
In (4), $\hat{\bf{h}}(t+{T_{{\rm{delay}}}})$ is independent of ${\bf{h}}(t)$
and identically distributed with ${\bf{h}}(t+{T_{{\rm{delay}}}})$, with
zero-mean and unit-variance complex Gaussian entries. $\rho$ is the
autocorrelation function (outdated CSI coefficient) of the channel gain
${\bf{h}}(t)$ and $0\leq\rho\leq 1$, which is given by
$\begin{split}\rho={J_{0}}(2\pi{f_{D}}{T_{{\rm{delay}}}})\end{split}$
(5)
where ${J_{0}}(\cdot)$ is the zeroth-order Bessel function of the first kind,
${f_{D}}$ is the Doppler spread which is generally a function of the velocity
($\upsilon$) of the transceivers, the carrier frequency (${f_{c}}$) and the
speed of light $(c)$, i.e., ${f_{D}}=\upsilon{f_{c}}/c$. Note that $\rho=1$
indicates the outdated CSI effect is eliminated, whereas $\rho=0$ represents
no CSI.
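A minimal NumPy/SciPy sketch of this channel-aging model is given below; the
velocity, carrier frequency, and delay values in the example are illustrative.

```python
import numpy as np
from scipy.special import j0

def outdated_csi(h, f_d, t_delay, rng=None):
    """Generate h(t + T_delay) from h(t) via Eqs. (4)-(5); `h` is a
    complex channel vector and the innovation has i.i.d. CN(0,1) entries."""
    rng = rng or np.random.default_rng()
    rho = j0(2.0 * np.pi * f_d * t_delay)                    # Eq. (5)
    e = (rng.standard_normal(h.shape)
         + 1j * rng.standard_normal(h.shape)) / np.sqrt(2.0)
    return rho * h + np.sqrt(1.0 - rho ** 2) * e             # Eq. (4)

# Illustrative numbers: v = 10 m/s, f_c = 2.4 GHz, T_delay = 1 ms
f_D = 10.0 * 2.4e9 / 3.0e8           # Doppler spread, f_D = v * f_c / c
h_aged = outdated_csi(np.ones(8, dtype=complex), f_D, 1e-3)
```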
As the outdated CSI introduces the channel uncertainty in practical dynamic
systems, the actual channel coefficients can be rewritten as
$\begin{split}\begin{array}[]{l}{{\bf{h}}_{{\rm{bu}},k}}={{{\bf{\tilde{h}}}}_{{\rm{bu}},k}}+\Delta{{\bf{h}}_{{\rm{bu}},k}},\;\forall
k\in\mathcal{K},\\\
{{\bf{h}}_{{\rm{ru}},k}}={{{\bf{\tilde{h}}}}_{{\rm{ru}},k}}+\Delta{{\bf{h}}_{{\rm{ru}},k}},\;\forall
k\in\mathcal{K},\\\
{{\bf{h}}_{{\rm{be}},m}}={{{\bf{\tilde{h}}}}_{{\rm{be}},m}}+\Delta{{\bf{h}}_{{\rm{be}},m}},\;\forall
m\in\mathcal{M},\\\
{{\bf{h}}_{{\rm{re}},m}}={{{\bf{\tilde{h}}}}_{{\rm{re}},m}}+\Delta{{\bf{h}}_{{\rm{re}},m}},\;\forall
m\in\mathcal{M},\end{array}\end{split}$ (6)
where ${{\bf{\tilde{h}}}_{{\rm{bu}},k}}$, ${{\bf{\tilde{h}}}_{{\rm{ru}},k}}$,
${{\bf{\tilde{h}}}_{{\rm{be}},m}}$ and ${{\bf{\tilde{h}}}_{{\rm{re}},m}}$
denote the estimated channel vectors; $\Delta{{\bf{h}}_{{\rm{bu}},k}}$,
$\Delta{{\bf{h}}_{{\rm{ru}},k}}$, $\Delta{{\bf{h}}_{{\rm{be}},m}}$ and
$\Delta{{\bf{h}}_{{\rm{re}},m}}$ are the corresponding channel error vectors.
In this paper, the channel error vectors of each MU and each eavesdropper are
bounded with respect to the Euclidean norm by using a norm-bounded error
model, i.e.,
$\begin{split}\begin{array}[]{l}||\Delta{{\bf{h}}_{{\rm{bu}}}}|{|^{2}}\leq{({\varsigma_{{\rm{bu}}}})^{2}},\;\;||\Delta{{\bf{h}}_{{\rm{ru}}}}|{|^{2}}\leq{({\varsigma_{{\rm{ru}}}})^{2}},\\\
||\Delta{{\bf{h}}_{{\rm{be}}}}|{|^{2}}\leq{({\varsigma_{{\rm{be}}}})^{2}},\;\;||\Delta{{\bf{h}}_{{\rm{re}}}}|{|^{2}}\leq{({\varsigma_{{\rm{re}}}})^{2}},\end{array}\end{split}$
(7)
where ${\varsigma_{{\rm{bu}}}}$, ${\varsigma_{{\rm{ru}}}}$,
${\varsigma_{{\rm{be}}}}$, and ${\varsigma_{{\rm{re}}}}$ refer to the radii of
the deterministically bounded error regions.
Under the channel uncertainty model, the achievable rate of the $k$-th MU is
given by
$\begin{split}R_{k}^{\rm{u}}={\log_{2}}\left({1+\frac{{{{\left|{({\bf{h}}_{{\rm{ru,}}k}^{H}{\bf{\Psi}}{{\bf{H}}_{{\rm{br}}}}+{\bf{h}}_{{\rm{bu,}}k}^{H}){{\bf{v}}_{k}}}\right|}^{2}}}}{{{{|{\sum\limits_{i\in{\mathcal{K}},i\neq
k}{({\bf{h}}_{{\rm{ru,}}k}^{H}{\bf{\Psi}}{{\bf{H}}_{{\rm{br}}}}+{\bf{h}}_{{\rm{bu,}}k}^{H}){{\bf{v}}_{i}}}}|}^{2}}+\delta_{k}^{2}}}}\right).\end{split}$
(8)
If the $m$-th eavesdropper attempts to eavesdrop the signal of the $k$-th MU,
its achievable rate can be expressed by
$\begin{split}\begin{array}[]{l}R_{m,k}^{\rm{e}}=\\\
{\log_{2}}\left({1+\frac{{{{\left|{({\bf{h}}_{{\rm{re,}}m}^{H}{\bf{\Psi}}{{\bf{H}}_{{\rm{br}}}}+{\bf{h}}_{{\rm{be,}}m}^{H}){{\bf{v}}_{k}}}\right|}^{2}}}}{{{{|{\sum\limits_{i\in{\mathcal{K}},i\neq
k}{({\bf{h}}_{{\rm{re,}}m}^{H}{\bf{\Psi}}{{\bf{H}}_{{\rm{br}}}}+{\bf{h}}_{{\rm{be,}}m}^{H}){{\bf{v}}_{i}}}}|}^{2}}+\delta_{m}^{2}}}}\right).\end{array}\end{split}$
(9)
Since each eavesdropper can eavesdrop any of the $K$ MUs’ signal, according to
[14]-[25], the achievable individual secrecy rate from the BS to the $k$-th MU
can be expressed by
$\begin{split}R_{k}^{\sec}={\left[{R_{k}^{\rm{u}}-\mathop{\max}\limits_{\forall
m}R_{m,k}^{\rm{e}}}\right]^{+}}\end{split}$ (10)
where ${[z]^{+}}=\max(0,z)$.
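As a quick numerical illustration of (8)-(10), the following sketch (with assumed SINR values) computes the achievable secrecy rate of one MU:

```python
# Sketch of Eqs. (8)-(10): the secrecy rate of MU k is the positive part of
# its achievable rate minus the best eavesdropper's rate on its signal.
import numpy as np

def secrecy_rate(sinr_user_k: float, sinr_eves_k: np.ndarray) -> float:
    r_u = np.log2(1.0 + sinr_user_k)         # MU rate, Eq. (8)
    r_e = np.log2(1.0 + sinr_eves_k).max()   # worst case over m, Eq. (9)
    return max(0.0, r_u - r_e)               # [.]^+ operator in Eq. (10)

print(secrecy_rate(10.0, np.array([1.0, 2.5])))  # ~1.65 bits/s/Hz (assumed SINRs)
```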
### II-B Problem Formulation
Our objective is to jointly optimize the robust BS transmit beamforming
matrix ${\bf{V}}$ and the robust IRS reflecting beamforming matrix
${\bf{\Psi}}$ from the system beamforming codebook $\mathcal{F}$ to maximize
the worst-case sum secrecy rate, subject to the worst-case secrecy rate and
data rate constraints, the total BS transmit power constraint, and the IRS
reflecting unit constraint. As such, the optimization problem is formulated as
$\begin{split}\begin{array}[]{l}\mathop{\max}\limits_{{\bf{V}},{\bf{\Psi}}}\mathop{\min}\limits_{\\{\Delta{\bf{h}}\\}}\sum\limits_{k\in\mathcal{K}}{R_{k}^{\sec}}\\\
s.t.\;\;({\rm{a}}):\;R_{k}^{\sec}\geq R_{k}^{\sec,\min},\;\forall
k\in\mathcal{K},\\\
\;\;\;\;\;\;\;({\rm{b}}):\;\mathop{\min}\limits_{\scriptstyle||\Delta{{\bf{h}}_{{\rm{bu}}}}|{|^{2}}\leq{({\varsigma_{{\rm{bu}}}})^{2}},\hfill\atop\scriptstyle||\Delta{{\bf{h}}_{{\rm{ru}}}}|{|^{2}}\leq{({\varsigma_{{\rm{ru}}}})^{2}}\hfill}(R_{k}^{\rm{u}})\geq
R_{k}^{\min},\;\forall k\in\mathcal{K},\\\
\;\;\;\;\;\;\;({\rm{c}}):\;{\rm{Tr}}\left({{\bf{V}}{{\bf{V}}^{H}}}\right)\leq{P_{\max}},\\\
\;\;\;\;\;\;\;({\rm{d}}):\;|\chi{e^{j{\theta_{l}}}}|=1,\;0\leq{\theta_{l}}\leq
2\pi,\;\forall l\in\mathcal{L},\end{array}\end{split}$ (11)
where $R_{k}^{\sec{\rm{,min}}}$ is the target secrecy rate of the $k$-th MU,
and $R_{k}^{{\rm{min}}}$ denotes its target data rate. The constraints in
(11a) and (11b) are imposed to satisfy the worst-case secrecy rate and data
rate requirements, respectively. The constraint in (11c) is set to satisfy the
BS’s maximum power constraint. The constraint in (11d) is the constraint of
the IRS reflecting elements. Obviously, it is challenging to obtain an optimal
solution to problem (11), since the objective function in (11) is non-concave
with respect to either ${\bf{V}}$ or ${\bf{\Psi}}$, the optimization variables
(${\bf{V}}$ and ${\bf{\Psi}}$) are coupled, and the unit-modulus constraints in
(11d) are non-convex. In addition, the robust design must maximize the
worst-case achievable secrecy rate of the system while guaranteeing the
worst-case constraints.
## III Problem Transformation Based on RL
The optimization problem given in (11) is difficult to address as it is a non-
convex problem. In addition, in realistic IRS-aided secure communication
systems, the capabilities of MUs, the channel quality, and the service
applications will change dynamically. Moreover, the problem in (11) is only a
single-time-slot optimization problem; solving it in isolation may converge to
a suboptimal solution with greedy-search-like performance, because it ignores
the historical system state and the long-term benefit. Hence, it is generally
infeasible to apply traditional optimization techniques (AO, SDP, and MM) to
achieve an effective secure beamforming policy in uncertain dynamic
environments.
Model-free RL is a dynamic-programming-related tool that can be adopted to
solve decision-making problems by learning the optimal solution in dynamic
environments [32]. Hence, we model the secure beamforming optimization problem
as an RL problem. In RL, the IRS-aided secure communication system is treated
as the environment, and the central controller at the BS is regarded as the learning
agent. The key elements of RL are defined as follows.
State space: Let ${\mathcal{S}}$ denote the system state space. The current
system state $s\in{\mathcal{S}}$ includes the channel information of all
users, the secrecy rate, the transmission data rate of the last time slot and
the QoS satisfaction level, which is defined as
$\begin{split}s=\left\{\{{\bf{h}}_{k}\}_{k\in\mathcal{K}},\{{\bf{h}}_{m}\}_{m\in\mathcal{M}},\{R_{k}^{\sec}\}_{k\in\mathcal{K}},\{{R_{k}}\}_{k\in\mathcal{K}},\{{\rm{QoS}}_{k}\}_{k\in\mathcal{K}}\right\}\end{split}$
(12)
where ${{\bf{h}}_{k}}$ and ${{\bf{h}}_{m}}$ are the channel coefficients of
the $k$-th MU and $m$-th eavesdropper, respectively. ${\rm{Qo}}{{\rm{S}}_{k}}$
is the feedback QoS satisfaction level of the $k$-th MU, where the QoS
satisfaction level consists of both the minimum secrecy rate satisfaction
level in (11a) and the minimum data rate satisfaction level in (11b). Other
parameters in (12) are already defined in Section II.
Action space: Let ${\mathcal{A}}$ denote the system action space. According to
the observed system state $s$, the central controller chooses the beamforming
vector ${\\{{{\bf{v}}_{k}}\\}_{k\in{\mathcal{K}}}}$ at the BS and the IRS
reflecting beamforming coefficient (phase shift)
${\\{{\theta_{l}}\\}_{l\in{\mathcal{L}}}}$ at the IRS. Hence, the action
$a\in{\mathcal{A}}$ can be defined by
$\begin{split}a=\left\{\{{\bf{v}}_{k}\}_{k\in\mathcal{K}},\{{\theta_{l}}\}_{l\in\mathcal{L}}\right\}.\end{split}$
(13)
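For concreteness, the state of (12) and the action of (13) can be held in simple containers, as in the sketch below (our illustration; the field names are not from the paper):

```python
# Sketch: containers for the RL state (12) and action (13).
from dataclasses import dataclass
import numpy as np

@dataclass
class State:                 # Eq. (12)
    h_users: np.ndarray      # channel coefficients of the K MUs
    h_eves: np.ndarray       # channel coefficients of the M eavesdroppers
    sec_rates: np.ndarray    # secrecy rates of the last time slot
    data_rates: np.ndarray   # transmission rates of the last time slot
    qos: np.ndarray          # QoS satisfaction levels

@dataclass
class Action:                # Eq. (13)
    V: np.ndarray            # BS transmit beamforming vectors {v_k}
    theta: np.ndarray        # IRS phase shifts {theta_l} in [0, 2*pi)
```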
Transition probability: Let ${\mathcal{T}}(s^{\prime}|s,a)$ represent the
transition probability, which is the probability of transitioning to a new
state $s^{\prime}\in{\mathcal{S}}$, given the action $a$ executed in the state
$s$.
Reward function: In RL, the reward acts as a signal to evaluate how good the
secure beamforming policy is when the agent executes an action at a current
state. The system performance will be enhanced when the reward function at
each learning step correlates with the desired objective. Thus, it is
important to design an efficient reward function to improve the MUs’ QoS
satisfaction levels.
In this paper, the reward function represents the optimization objective, and
our objective is to maximize the system secrecy rate of all MUs while
guaranteeing their QoS requirements. Thus, the presented QoS-aware reward
function is expressed as
$\begin{split}r=\underbrace{\sum\limits_{k\in{\mathcal{K}}}{R_{k}^{\sec}}}_{{\rm{part}}\;{\rm{1}}}-\underbrace{\sum\limits_{k\in{\mathcal{K}}}{{\mu_{1}}p_{k}^{\sec}}}_{{\rm{part}}\;2}-\underbrace{\sum\limits_{k\in{\mathcal{K}}}{{\mu_{2}}p_{k}^{\rm{u}}}}_{{\rm{part}}\;3}\end{split}$
(14)
where
$\begin{split}p_{k}^{\sec}=\left\\{\begin{array}[]{l}1,\;{\rm{if}}\;R_{k}^{\sec}<R_{k}^{\sec{\rm{,min}}},\forall
k\in{\mathcal{K}},\\\
0,\;{\rm{otherwise}}{\rm{,}}\end{array}\right.\end{split}$ (15)
$\begin{split}p_{k}^{\rm{u}}=\left\\{\begin{array}[]{l}1,\;{\rm{if}}\;{R_{k}}<R_{k}^{{\rm{min}}},\forall
k\in{\mathcal{K}},\\\
0,\;{\rm{otherwise}}{\rm{.}}\end{array}\right.\end{split}$ (16)
In (14), part 1 represents the immediate utility (system secrecy rate), while
parts 2 and 3 are cost functions defined as penalties for the unsatisfied
secrecy rate requirement and the unsatisfied minimum rate requirement,
respectively. The coefficients ${\mu_{1}}$ and ${\mu_{2}}$ are positive
constants weighting parts 2 and 3 in (14), respectively, and they are used to
balance the utility and cost [33]-[35].
The goals of (15) and (16) are to impose the QoS satisfaction levels of both
the secrecy rate and the minimum data rate requirements, respectively. If the
QoS requirement is satisfied in the current time slot, then
$p_{k}^{\sec}={\rm{0}}$ or $p_{k}^{\rm{u}}={\rm{0}}$, indicating that there is
no punishment of the reward function due to the successful QoS guarantees.
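For concreteness, the reward computation of (14)-(16) can be sketched as follows (a minimal illustration of ours; the rates and thresholds in the usage line are assumed):

```python
# Sketch of the QoS-aware reward in Eqs. (14)-(16).
import numpy as np

def reward(sec_rates, data_rates, sec_min, rate_min, mu1=2.0, mu2=2.0):
    sec_rates = np.asarray(sec_rates, dtype=float)
    data_rates = np.asarray(data_rates, dtype=float)
    p_sec = (sec_rates < sec_min).astype(float)    # indicator of Eq. (15)
    p_u = (data_rates < rate_min).astype(float)    # indicator of Eq. (16)
    return sec_rates.sum() - mu1 * p_sec.sum() - mu2 * p_u.sum()  # Eq. (14)

print(reward([3.2, 2.8], [5.5, 4.9], sec_min=3.0, rate_min=5.0))  # 6.0 - 2 - 2 = 2.0
```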
The goal of the learning agent is to search for an optimal policy ${\pi^{*}}$
($\pi$ is a mapping from states in ${\mathcal{S}}$ to the probabilities of
choosing an action in ${\mathcal{A}}$: $\pi(s):{\mathcal{S}}\to{\mathcal{A}}$)
that maximizes the long-term expected discounted reward, and the cumulative
discounted reward function can be defined as
$\begin{split}{U_{t}}=\sum\limits_{\tau=0}^{\infty}{{\gamma^{\tau}}{r_{t+\tau+1}}}\end{split}$
(17)
where $\gamma\in(0,1]$ denotes the discount factor. Under a certain policy
$\pi$, the state-action function of the agent with a state-action pair ($s$,
$a$) is given by
$\begin{split}{Q^{\pi}}({s_{t}},{a_{t}})={\mathbb{E}_{\pi}}\left[{{U_{t}}|{s_{t}}=s,{a_{t}}=a}\right].\end{split}$
(18)
The conventional Q-Learning algorithm can be adopted to learn the optimal
policy. The key objective of Q-Learning is to update the Q-table by using the
Bellman equation as follows:
$\begin{split}\begin{array}[]{l}{Q^{\pi}}({s_{t}},{a_{t}})={\mathbb{E}_{\pi}}\left[{{r_{t}}+\gamma\sum\limits_{{s_{t+1}}\in{\mathcal{S}}}{{\mathcal{T}}({s_{t+1}}|{s_{t}},{a_{t}})}}\right.\\\
\;\;\;\;\;\;\;\;\;\;\left.{\sum\limits_{{a_{t+1}}\in{\mathcal{A}}}{\pi({s_{t+1}},{a_{t+1}}){Q^{\pi}}({s_{t+1}},{a_{t+1}})}}\right]\end{array}\end{split}$
(19)
The optimal action-value function in (18) is equivalent to the Bellman
optimality equation, which is expressed by
$\begin{split}{Q^{*}}({s_{t}},{a_{t}})={r_{t}}+\gamma\mathop{\max}\limits_{{a_{t+1}}}{Q^{*}}({s_{t+1}},{a_{t+1}})\end{split}$
(20)
and the state-value function is achieved as follows:
$\begin{split}V({s_{t}})=\mathop{\max}\limits_{{a_{t}}\in{\mathcal{A}}}Q({s_{t}},{a_{t}}).\end{split}$
(21)
In addition, the Q-value is updated as follows:
$\begin{split}\begin{array}[]{l}{Q_{t+1}}({s_{t}},{a_{t}})=(1-{\alpha_{t}}){Q_{t}}({s_{t}},{a_{t}})+{\alpha_{t}}\left({{r_{t}}+\gamma{V_{t}}({s_{t{\rm{+1}}}})}\right)\end{array}\end{split}$
(22)
where ${\alpha_{t}}\in(0,1]$ is the learning rate. Q-Learning generally
constructs a lookup Q-table $Q(s,a)$, and the agent selects actions based on
the greedy policy at each learning step [32]. In the $\varepsilon$-greedy
policy, the agent chooses the action with the maximum Q-table value with
probability $1-\varepsilon$, whereas a random action is picked with
probability $\varepsilon$ to avoid getting stuck at non-optimal policies
[32]. Once the optimal Q-function ${Q^{*}}(s,a)$ is achieved, the optimal
policy is determined by
$\begin{split}{\pi^{*}}(s,a)=\arg\mathop{\max}\limits_{a\in{\mathcal{A}}}{Q^{*}}(s,a).\end{split}$
(23)
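A minimal tabular sketch of the $\varepsilon$-greedy selection and the Q-value update in (21)-(23) (toy state and action sizes assumed) is:

```python
# Sketch: epsilon-greedy action selection and the Q-update of Eq. (22).
import numpy as np

n_states, n_actions = 8, 4            # toy sizes (assumed)
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.95, 0.1
rng = np.random.default_rng(0)

def select_action(s: int) -> int:
    # random action with probability eps, greedy action otherwise
    if rng.random() < eps:
        return int(rng.integers(n_actions))
    return int(np.argmax(Q[s]))

def q_update(s: int, a: int, r: float, s_next: int) -> None:
    v_next = Q[s_next].max()                                        # Eq. (21)
    Q[s, a] = (1 - alpha) * Q[s, a] + alpha * (r + gamma * v_next)  # Eq. (22)

# toy usage: one transition from state 0 to state 1 with reward 1.0
q_update(0, select_action(0), 1.0, 1)
```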
## IV Deep PDS-PER Learning Based Secure Beamforming
The secure beamforming policy discussed in Section III can be numerically
achieved by using Q-Learning, policy gradient, and deep Q-Network (DQN)
algorithms [32]. However, Q-Learning is not an efficient learning algorithm
because it cannot deal with continuous state spaces and has a slow learning
convergence speed. The policy gradient algorithm has the ability to handle
continuous state-action spaces, but it may converge to a suboptimal solution.
In addition, it is intractable for Q-Learning and policy gradient algorithms
to solve the optimization problem under a high-dimensional input state space.
Although DQN performs well in policy learning under a high-dimensional state
space, its non-linear Q-function estimator may lead to an unstable learning
process.
Considering the fact that the IRS-aided secure communication system has
high-dimensional and highly dynamic characteristics, according to the system
state defined in (12) and the uncertain CSI shown in (4) and (6), we
propose a deep PDS-PER learning based secure beamforming approach, as shown in
Fig. 2, where PDS-learning and PER mechanisms are utilized to enable the
learning agent to learn and adapt faster in dynamic environments. In detail,
the agent utilizes the observed state (i.e., CSI, the previous secrecy rate,
and the QoS satisfaction level), the feedback reward from the environment, as
well as the historical experience from the replay buffer to train its learning
model. After that, the agent employs the trained model to make decisions
(beamforming matrices ${\bf{V}}$ and ${\bf{\Psi}}$) based on its learned policy. The
procedures of the proposed learning based secure beamforming are provided in
the following subsections.
Figure 2: Deep PDS-PER learning based beamforming for IRS-aided secure
communications.
Note that the policy optimization (in terms of the BS’s beamforming matrix
${\bf{V}}$ and the IRS’s reflecting beamforming matrix ${\bf{\Psi}}$) in the
IRS-aided secure communication system can be performed at the BS and that the
optimized reflecting beamforming matrix can be transferred in an offline
manner to the IRS by the controller to adjust the corresponding reflecting
elements accordingly.
### IV-A Proposed Deep PDS-PER Learning
As discussed in Section II, CSI is unlikely to be known accurately due to the
transmission delay, processing delay, and mobility of users. At the same time,
beamforming with outdated CSI will decrease the secrecy capacity, and
therefore, a fast optimization solution needs to be designed to reduce
processing delay. PDS-learning is a well-known algorithm that has been used to
improve the learning speed by exploiting extra partial information (e.g., the
previous location information and the mobility velocity of MUs or
eavesdroppers that affect the channel coefficients) and to search for an
optimized policy in dynamic environments [33]-[35]. Motivated by this, we
devise a modified deep PDS-learning to trace the environment dynamic
characteristics, and then adjust the transmit beamforming at the BS and the
reflecting elements at the IRS accordingly, which can speed up the learning
efficiency in dynamic environments.
In PDS-learning, the PDS is defined as an intermediate system state
${\tilde{s}_{t}}\in{\mathcal{S}}$ that occurs after executing an action ${a_{t}}$
at the current state ${s_{t}}$ and before the next state ${s_{t+1}}$. In
detail, the PDS-learning agent takes an action ${a_{t}}$ at state ${s_{t}}$,
and then will receive known reward ${r^{\rm{k}}}({s_{t}},{a_{t}})$ from the
environment before transitioning the current state ${s_{t}}$ to the PDS state
${\tilde{s}_{t}}$ with a known transition probability
${{\mathcal{T}}^{\rm{k}}}({\tilde{s}_{t}}|{s_{t}},{a_{t}})$. After that, the
PDS state further transforms into the next state ${s_{t+1}}$ with an unknown
transition probability
${{\mathcal{T}}^{\rm{u}}}({s_{t+1}}|{\tilde{s}_{t}},{a_{t}})$ and an unknown
reward ${r^{\rm{u}}}({s_{t}},{a_{t}})$, which corresponds to the wireless CSI
dynamics. In PDS-learning, ${s_{t+1}}$ is independent of ${s_{t}}$ given the
PDS state ${\tilde{s}_{t}}$, and the reward $r({s_{t}},{a_{t}})$ is decomposed
into the sum of ${r^{\rm{k}}}({s_{t}},{a_{t}})$ and
${r^{\rm{u}}}({s_{t}},{a_{t}})$ at ${\tilde{s}_{t}}$ and ${s_{t+1}}$,
respectively. Mathematically, the state transition probability in PDS-learning
from ${s_{t}}$ to ${s_{t+1}}$ admits
$\begin{split}{\mathcal{T}}({s_{t+1}}|{s_{t}},{a_{t}})=\sum\nolimits_{{{\tilde{s}}_{t}}}{{{\mathcal{T}}^{\rm{u}}}({s_{t+1}}|{{\tilde{s}}_{t}},{a_{t}}){{\mathcal{T}}^{\rm{k}}}({{\tilde{s}}_{t}}|{s_{t}},{a_{t}})}.\end{split}$
(24)
Moreover, it can be verified that the reward of the current state-action pair
$({s_{t}},{a_{t}})$ is expressed by
$\begin{split}r({s_{t}},{a_{t}})={r^{\rm{k}}}({s_{t}},{a_{t}})+\sum\nolimits_{{{\tilde{s}}_{t}}}{{{\mathcal{T}}^{\rm{k}}}({{\tilde{s}}_{t}}|{s_{t}},{a_{t}}){r^{\rm{u}}}({{\tilde{s}}_{t}},{a_{t}})}.\end{split}$
(25)
At the time slot $t$, the PDS action-value function
$\tilde{Q}({\tilde{s}_{t}},{a_{t}})$ of the current PDS state-action pair
$({\tilde{s}_{t}},{a_{t}})$ is defined as
$\begin{split}\tilde{Q}({\tilde{s}_{t}},{a_{t}})={r^{\rm{u}}}({\tilde{s}_{t}},{a_{t}})+\gamma\sum\limits_{{s_{t+1}}}{{{\mathcal{T}}^{\rm{u}}}({s_{t+1}}|{{\tilde{s}}_{t}},{a_{t}})V({s_{t+1}})}.\end{split}$
(26)
By employing the extra information (the known transition probability
${{\mathcal{T}}^{\rm{k}}}({\tilde{s}_{t}}|{s_{t}},{a_{t}})$ and known reward
${r^{\rm{k}}}({s_{t}},{a_{t}})$), the Q-function $\hat{Q}({s_{t}},{a_{t}})$ in
PDS-learning can be further expanded under all state-action pairs $(s,a)$,
which is expressed by
$\begin{split}\hat{Q}({s_{t}},{a_{t}})={r^{\rm{k}}}({s_{t}},{a_{t}})+\sum\nolimits_{{{\tilde{s}}_{t}}}{{{\mathcal{T}}^{\rm{k}}}({{\tilde{s}}_{t}}|{s_{t}},{a_{t}})\tilde{Q}({{\tilde{s}}_{t}},{a_{t}})}.\end{split}$
(27)
The state-value function in PDS-learning is defined by
$\begin{split}{\hat{V}_{t}}({s_{t}})=\sum\nolimits_{{s_{t+1}}}{{{\mathcal{T}}^{\rm{k}}}({s_{t+1}}|{s_{t}},{a_{t}})\tilde{V}({s_{t+1}})}\end{split}$
(28)
where
${\tilde{V}_{t}}({s_{t+1}})=\mathop{\max}\limits_{{a_{t}}\in{\mathcal{A}}}{\tilde{Q}_{t}}({\tilde{s}_{t+1}},{a_{t}})$.
At each time slot, the PDS action-value function
$\tilde{Q}({\tilde{s}_{t}},{a_{t}})$ is updated by
$\begin{split}\begin{array}[]{l}{{\tilde{Q}}_{{}_{t+1}}}({{\tilde{s}}_{t}},{a_{t}})=(1-{\alpha_{t}}){{\tilde{Q}}_{t}}({{\tilde{s}}_{t}},{a_{t}})\\\
\;\;\;\;\;\;\;\;\;\;\;\;\;+{\alpha_{t}}\left({{r^{\rm{u}}}({{\tilde{s}}_{t}},{a_{t}})+\gamma{{\hat{V}}_{t}}({s_{t+1}})}\right)\end{array}\end{split}$
(29)
After updating ${\tilde{Q}_{{}_{t+1}}}({\tilde{s}_{t}},{a_{t}})$, the action-
value function ${\hat{Q}_{{}_{t+1}}}({s_{t}},{a_{t}})$ can be updated by
plugging ${\tilde{Q}_{{}_{t+1}}}({\tilde{s}_{t}},{a_{t}})$ into (27).
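A tabular sketch of the PDS updates (26)-(29) follows; for simplicity it assumes (our simplification) a deterministic known transition from ${s_{t}}$ to ${\tilde{s}_{t}}$, under which (27) reduces to adding the known reward to the PDS Q-value:

```python
# Sketch: PDS updates of Eqs. (26)-(29) in tabular form.
import numpy as np

n_states, n_actions = 8, 4               # toy sizes (assumed)
Q_pds = np.zeros((n_states, n_actions))  # Q~(s~, a)
Q_hat = np.zeros((n_states, n_actions))  # Q^(s, a)
alpha, gamma = 0.1, 0.95

def pds_update(s, a, s_pds, s_next, r_known, r_unknown):
    v_next = Q_pds[s_next].max()                         # V~(s_{t+1}), cf. Eq. (28)
    Q_pds[s_pds, a] = (1 - alpha) * Q_pds[s_pds, a] \
        + alpha * (r_unknown + gamma * v_next)           # Eq. (29)
    # deterministic known transition s -> s_pds: Eq. (27) simplifies to
    Q_hat[s, a] = r_known + Q_pds[s_pds, a]
```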
Building on the above modified PDS-learning, a deep PDS-learning algorithm is
now presented. In this algorithm, the traditional DQN is adopted to estimate
the action-value Q-function $Q(s,a)$ by $Q(s,a;{\bm{\theta}})$, where
${\bm{\theta}}$ denotes the DNN parameters. The objective of DQN is to minimize
the following loss function at each time slot
$\begin{split}{\mathcal{L}}({{\bm{\theta}}_{t}})=\left[{\left({{{\hat{V}}_{t}}({s_{t}};{{\bm{\theta}}_{t}})-\hat{Q}({s_{t}},{a_{t}};{{\bm{\theta}}_{t}})}\right)^{2}}\right]=\left[{\left({r({s_{t}},{a_{t}})+\gamma\mathop{\max}\limits_{{a_{t+1}}\in{\mathcal{A}}}{{\hat{Q}}_{t}}({s_{t+1}},{a_{t+1}};{{\bm{\theta}}_{t}})-\hat{Q}({s_{t}},{a_{t}};{{\bm{\theta}}_{t}})}\right)^{2}}\right]\end{split}$
(30)
where
${\hat{V}_{t}}({s_{t}};{{\bm{\theta}}_{t}})=r({s_{t}},{a_{t}})+\gamma\mathop{\max}\limits_{{a_{t+1}}\in{\mathcal{A}}}{\hat{Q}_{t}}({s_{t+1}},{a_{t+1}};{{\bm{\theta}}_{t}})$
is the target value. The error between
${\hat{V}_{t}}({s_{t}};{{\bm{\theta}}_{t}})$ and the estimated value
$\hat{Q}({s_{t}},{a_{t}};{{\bm{\theta}}_{t}})$ is usually called temporal-
difference (TD) error, which is expressed by
$\begin{split}{\delta_{t}}={\hat{V}_{t}}({s_{t}};{{\bm{\theta}}_{t}})-\hat{Q}({s_{t}},{a_{t}};{{\bm{\theta}}_{t}}).\end{split}$
(31)
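A minimal PyTorch sketch of the TD target, the squared-error loss (30), and the TD error (31), followed by a standard gradient step on the loss (cf. the update described next), is given below; the two-layer network and the toy batch sizes are our assumptions:

```python
# Sketch: TD target, loss of Eq. (30), TD error of Eq. (31), and a standard
# gradient-descent step minimizing the loss.
import torch
import torch.nn as nn

q_net = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 4))
opt = torch.optim.SGD(q_net.parameters(), lr=1e-3)   # learning rate beta
gamma = 0.95

def td_step(s, a, r, s_next):
    with torch.no_grad():                                  # fixed TD target
        target = r + gamma * q_net(s_next).max(dim=1).values
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)   # Q^(s, a; theta)
    loss = ((target - q_sa) ** 2).mean()                   # Eq. (30)
    opt.zero_grad()
    loss.backward()                                        # gradient of L(theta)
    opt.step()
    return (target - q_sa).detach()                        # TD error, Eq. (31)

# toy usage with a batch of 32 random transitions (assumed dimensions)
s, s_next = torch.randn(32, 16), torch.randn(32, 16)
a, r = torch.randint(0, 4, (32,)), torch.randn(32)
delta = td_step(s, a, r, s_next)
```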
The DNN parameter ${\bm{\theta}}$ is achieved by taking the partial
differentiation of the objective function (30) with respect to
${\bm{\theta}}$, which is given by
$\begin{split}{{\bm{\theta}}_{t+1}}={{\bm{\theta}}_{t}}+\beta\nabla{\mathcal{L}}({{\bm{\theta}}_{t}}).\end{split}$
(32)
where $\beta$ is the learning rate of ${\bm{\theta}}$, and $\nabla(\cdot)$
denotes the first-order partial derivative.
Accordingly, the policy ${\hat{\pi}_{t}}(s)$ of the modified deep PDS-learning
algorithm is given by
$\begin{split}{\hat{\pi}_{t}}(s)=\arg\mathop{\max}\limits_{{a_{t}}\in{\mathcal{A}}}\hat{Q}({s_{t}},{a_{t}};{{\bm{\theta}}_{t}}).\end{split}$
(33)
Although DQN is capable of performing well in policy learning with continuous
and high-dimensional state space, DNN may learn ineffectively and cause
divergence owing to the nonstationary targets and correlations between
samples. Experience replay is utilized to avoid the divergence of the RL
algorithm. However, classical DQN uniformly samples each transition
${e_{t}}=\langle{s_{t}},{a_{t}},{r_{t}},{\tilde{s}_{t}},{s_{t+1}}\rangle$ from
the experience replay, which may lead to an uncertain or negative effect on
learning a better policy. The reason is that different transitions (experience
information) in the replay buffer have different importance for the learning
policy, and sampling every transition equally may unavoidably result in
inefficient usage of meaningful transitions. Therefore, a prioritized
experience replay (PER) scheme has been presented to address this issue and
enhance the sampling efficiency [36], [37], where the priority of transition
is determined by the values of TD error. In PER, a transition with higher
absolute TD error has higher priority in the sense that is has more aggressive
correction for the action-value function.
In the deep PDS-PER learning algorithm, similar to classical DQN, the agent
collects and stores each experience
${e_{t}}=\langle{s_{t}},{a_{t}},{r_{t}},{\tilde{s}_{t}},{s_{t+1}}\rangle$ into
its experience replay buffer, and the DNN updates its parameters by sampling a
mini-batch of tuples from the replay buffer. So far, PER was adopted only for
DRL and Q-learning, and has never been employed with the PDS-learning
algorithm to learn the dynamic information. In this paper, we further extend
this PER scheme to enable prioritized experience replay in the proposed deep
PDS-PER learning framework, in order to improve the learning convergence rate.
The probability of sampling transition $i$ (experience $i$) based on the
absolute TD-error is defined by
$\begin{split}p(i)=|\delta(i)|^{{\eta_{1}}}\Big/\sum\nolimits_{j^{\prime}}{|\delta(j^{\prime})|^{{\eta_{1}}}}\end{split}$
(34)
where the exponent ${\eta_{1}}$ weights how much prioritization is used, with
${\eta_{1}}=0$ corresponding to uniform sampling. A transition with a
higher $p(i)$ will be more likely to be replayed from the replay buffer; such
transitions are associated with very successful attempts, and replaying them
helps prevent the DNN from over-fitting. With the help of PER, the proposed deep PDS-PER learning
algorithm tends to replay valuable experience and hence learns more
effectively to find the best policy.
It is worth noting that experiences with high absolute TD-error are more
frequently replayed, which alters the visitation frequency of some experiences
and hence makes the training process of the DNN prone to diverge. To address
this problem, importance-sampling (IS) weights are adopted in the calculation
of weight changes
$\begin{split}W(i)={\left({D\cdot p(i)}\right)^{-{\eta_{2}}}}\end{split}$ (35)
where $D$ is the size of the experience replay buffer, and the parameter
${\eta_{2}}$ is used to adjust the amount of correction used.
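A compact numpy sketch of the prioritized sampling in (34) and the IS weights in (35) follows (the batch values are assumed; normalizing the weights by their maximum is a common stabilization of ours, not stated in the paper):

```python
# Sketch: prioritized sampling, Eq. (34), and IS weights, Eq. (35).
import numpy as np

def per_sample(td_errors, batch_size, eta1=0.6, eta2=0.4, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    prios = np.abs(td_errors) ** eta1
    p = prios / prios.sum()                            # Eq. (34)
    idx = rng.choice(len(td_errors), size=batch_size, p=p)
    D = len(td_errors)                                 # replay buffer size
    w = (D * p[idx]) ** (-eta2)                        # Eq. (35)
    return idx, w / w.max()                            # max-normalized weights

idx, w = per_sample(np.array([0.5, 2.0, 0.1, 1.0]), batch_size=2)
```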
Accordingly, by incorporating the PER scheme into the deep PDS-PER learning,
the DNN loss function (30) and the corresponding parameter update are rewritten
respectively as follows:
$\begin{split}{\mathcal{L}}\left({{{\bm{\theta}}_{t}}}\right)=\frac{1}{H}\sum\limits_{i=1}^{H}{\left({{W_{i}}{{\mathcal{L}}_{i}}({{\bm{\theta}}_{t}})}\right)}\end{split}$
(36)
$\begin{split}{{\bm{\theta}}_{t+1}}={{\bm{\theta}}_{t}}+\beta{\delta_{t}}{\nabla_{\bm{\theta}}}{\mathcal{L}}({{\bm{\theta}}_{t}})\end{split}$
(37)
_Theorem 1:_ The presented deep PDS-PER learning can converge to the optimal
$\hat{Q}({s_{t}},{a_{t}})$ of the MDP with probability 1 when the learning
rate sequence ${\alpha_{t}}$ meets the following conditions
${\alpha_{t}}\in[0,1)$, $\sum\nolimits_{t=0}^{\infty}{{\alpha_{t}}}=\infty$
and $\sum\nolimits_{t=0}^{\infty}{\alpha_{t}^{2}}<\infty$. The aforementioned
requirements appear in most RL algorithms and are not specific to the proposed
deep PDS-PER learning algorithm [32].
_Proof:_ If each action can be executed an infinite number of learning
steps at each system state, or in other words, the learning policy is greedy
in the limit of infinite exploration, the Q-function $\hat{Q}({s_{t}},{a_{t}})$ in
PDS-learning and its corresponding policy $\pi(s)$ will converge to
their optimal points with probability 1 [33]-[35]. References [34] and [35]
provide the detailed proof.
### IV-B Secure Beamforming Based on Proposed Deep PDS-PER Learning
Similar to most DRL algorithms, our proposed deep PDS-PER learning based
secure beamforming approach consists of two stages, i.e., the training stage
and implementation stage. The training process of the proposed approach is
shown in Algorithm 1. A central controller at the BS is responsible for
collecting environment information and making decisions for secure beamforming.
In the training stage, similar to RL-based policy control, the controller
initializes network parameters and observes the current system state including
CSI of all users, the previous predicted secrecy rate and the transmission
data rate. Then, the state vector is input into DQN to train the learning
model. The $\varepsilon$-greedy scheme is leveraged to balance exploration and
exploitation: the action with the maximum reward is selected with probability
$1-\varepsilon$ according to the current information (exploitation of known
knowledge), while a random action is chosen with probability $\varepsilon$
(exploration, i.e., keeping trying new actions in the hope of an even higher
reward). After executing the selected action, the agent receives a reward from
the environment and observes the state transition from ${s_{t}}$ to the PDS
state ${\tilde{s}_{t}}$ and then to the next state ${s_{t+1}}$. Then,
PDS-learning is used to update the PDS action-value function
$\tilde{Q}({\tilde{s}_{t}},{a_{t}};{{\bm{\theta}}_{t}})$ and Q-function
$\hat{Q}({s_{t}},{a_{t}};{{\bm{\theta}}_{t}})$, before collecting and storing
the transition tuple (also called experience)
${e_{t}}=\langle{s_{t}},{a_{t}},{r_{t}},{\tilde{s}_{t}},{s_{t+1}}\rangle$ into
the experience replay memory buffer ${\mathcal{D}}$, which includes the
current system state, selected action, instantaneous reward and PDS state
along with the next state. Experiences in the replay buffer are selected by
the PER scheme to generate mini-batches, which are used to train the DQN. In
detail, the priority of each transition $p(i)$ is calculated by using (34), and
then its IS weight $W(i)$ is obtained from (35), where the priorities ensure that high-
TD-value ($\delta(i)$) transitions are replayed more frequently. The weight
$W(i)$ is integrated into deep PDS learning to update both the loss function
${\mathcal{L}}({\bf{\theta}})$ and DNN parameter ${\bm{\theta}}$. Once DQN
converges, the deep PDS-PER learning model is achieved.
After adequate training in Algorithm 1, the learning model is loaded for the
implementation stage. During the implementation stage, the controller uses the
trained learning model to output its selected action $a$ by going through the
DNN parameter ${\bm{\theta}}$, with the observed state $s$ from the IRS-aided
secure communication system. Specifically, it chooses the action $a$ with the
maximum value based on the trained deep PDS-PER learning model. Afterwards,
the environment feeds back an instantaneous reward and a new system state to
the agent. Finally, the beamforming matrix ${{\bf{V}}^{*}}$ at the BS and the
phase shift matrix ${{\bf{\Psi}}^{*}}$ (reflecting beamforming) at the IRS are
achieved according to the selected action.
We would like to point out that the training stage needs a powerful
computation server, and it can be performed offline at the BS, while the
implementation stage can be completed online. The trained learning model
needs to be updated only when the environment (the IRS-aided secure
communication system) has experienced significant changes, mainly depending on
the environment dynamics and service requirements.
Algorithm 1 Deep PDS-PER Learning Based Secure Beamforming
1: Input: IRS-aided secure communication simulator and QoS requirements of all
MUs (e.g., minimum secrecy rate and transmission rate).
2: Initialize: DQN with initial Q-function $Q(s,a;{\bm{\theta}})$, parameters
${\bm{\theta}}$, learning rate $\alpha$ and $\beta$.
3: Initialize: experience replay buffer ${\mathcal{D}}$ with size $D$, and
mini-batch size $H$.
4: for each episode =1, 2, …, ${N^{{\rm{epi}}}}$ do
5: Observe an initial system state $s$;
6: for each time step $t$=0, 1, 2, …, $T$ do
7: Select action based on the $\varepsilon$-greedy policy at current state
${s_{t}}$: choose a random action ${a_{t}}$ with probability $\varepsilon$;
8: Otherwise,
${a_{t}}=\arg\mathop{\max}\limits_{{a_{t}}\in{\mathcal{A}}}Q({s_{t}},{a_{t}};{{\bf{\theta}}_{t}})$;
9: Execute action ${a_{t}}$, receive an immediate reward
${r^{\rm{k}}}({s_{t}},{a_{t}})$ and observe the state transition from ${s_{t}}$
to PDS state ${\tilde{s}_{t}}$ and then to the next state ${s_{t+1}}$;
10: Update the reward function $r({s_{t}},{a_{t}})$ under PDS-learning using
(25);
11: Update the PDS action-value function
$\tilde{Q}({\tilde{s}_{t}},{a_{t}};{{\bm{\theta}}_{t}})$ using (29);
12: Update the Q-function $\hat{Q}({s_{t}},{a_{t}};{{\bm{\theta}}_{t}})$ using
(27);
13: Store PDS experience
${e_{t}}=\langle{s_{t}},{a_{t}},{r_{t}},{\tilde{s}_{t}},{s_{t+1}}\rangle$ in
experience replay buffer ${\mathcal{D}}$, if ${\mathcal{D}}$ is full, remove
least used experience from ${\mathcal{D}}$;
14: for $i$= 1, 2, …, $H$ do
15: Sample transition $i$ with the probability $p(i)$ using (34);
16: Calculate the absolute TD-error $|\delta(i)|$ in (31);
17: Update the corresponding IS weight ${W_{i}}$ using (35);
18: Update the priority of transition $i$ based on $|\delta(i)|$;
19: end for
20: Update the loss function ${\mathcal{L}}\left({\bm{\theta}}\right)$ and
parameter ${\bm{\theta}}$ of DQN using (36) and (37), respectively;
21: end for
22: end for
23: Output: Return the deep PDS-PER learning model.
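The control flow of Algorithm 1 can be mirrored by the following toy, self-contained skeleton (a tabular learner stands in for the DNN and a stub replaces the IRS environment; everything here is illustrative, and the PER/DNN steps of lines 14-20 are omitted):

```python
# Toy skeleton of Algorithm 1's control flow (assumed stand-ins throughout).
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, episodes, T = 8, 4, 5, 20
Q = np.zeros((n_states, n_actions))
buffer = []                                 # experience replay memory D
eps, alpha, gamma = 0.1, 0.1, 0.95

def step(s, a):                             # line 9: stub environment
    s_pds = (s + a) % n_states              # known transition to the PDS state
    s_next = (s_pds + int(rng.integers(2))) % n_states  # unknown dynamics
    return s_pds, s_next, 1.0, 0.1          # known / unknown reward parts

for _ in range(episodes):                   # line 4
    s = int(rng.integers(n_states))         # line 5
    for _ in range(T):                      # line 6
        a = int(rng.integers(n_actions)) if rng.random() < eps \
            else int(Q[s].argmax())         # lines 7-8: epsilon-greedy
        s_pds, s_next, r_k, r_u = step(s, a)
        target = r_u + gamma * Q[s_next].max()
        Q[s_pds, a] = (1 - alpha) * Q[s_pds, a] + alpha * target  # line 11
        buffer.append((s, a, r_k + r_u, s_pds, s_next))           # line 13
        s = s_next                          # lines 14-20 (PER, DNN) omitted
```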
### IV-C Computational Complexity Analysis
For the training stage, in DNN, let $L$, ${Z_{0}}$ and ${Z_{l}}$ denote the
training layers, the size of the input layer (which is proportional to the
number of states) and the number of neurons in the $l$-th layer, respectively.
The computational complexity in each time step for the agent is
$O({Z_{0}}{Z_{1}}+\sum\nolimits_{l=1}^{L-1}{{Z_{l}}{Z_{l+1}}})$. In the
training phase, there are ${N^{{\rm{epi}}}}$ episodes, each lasting $T$ time
steps, and each model is trained iteratively until convergence. Hence, the
total computational complexity of the DNN is
$O\left({{N^{{\rm{epi}}}}T({Z_{0}}{Z_{1}}+\sum\nolimits_{l=1}^{L-1}{{Z_{l}}{Z_{l+1}}})}\right)$.
The high computational complexity of the DNN training phase can be performed
offline for a finite number of episodes at a centralized powerful unit (such
as the BS).
In our proposed deep PDS-PER learning algorithm, PDS-learning and PER schemes
are utilized to improve the learning efficiency and enhance the convergence
speed, which requires extra computational complexity. In PDS-learning,
since the set of PDS states is the same as the set of MDP states
${\mathcal{S}}$ [30]-[32], the computational complexity of the classical DQN
algorithm and the deep PDS-learning algorithm are
$O(|{\mathcal{S}}{|^{2}}\times|{\mathcal{A}}|)$ and
$O(2|{\mathcal{S}}{|^{2}}\times|{\mathcal{A}}|)$, respectively. In PER, since
the replay buffer size is $D$, the system requires
$O\left({{{\log}_{2}}D}\right)$ operations for both updating and sampling, so
the computational complexity of the PER scheme is $O\left({{{\log}_{2}}D}\right)$.
According to the above analysis, the complexity of the classical DQN algorithm
and the proposed deep PDS-PER learning algorithm are respectively
$O\left({I{N^{{\rm{epi}}}}T({Z_{0}}{Z_{l}}+\sum\nolimits_{l=1}^{L-1}{{Z_{l}}{Z_{l+1}}})+|{\mathcal{S}}{|^{2}}\times|{\mathcal{A}}|}\right)$
and
$O\left({I{N^{{\rm{epi}}}}T({Z_{0}}{Z_{l}}+\sum\nolimits_{l=1}^{L-1}{{Z_{l}}{Z_{l+1}}})+2|{\mathcal{S}}{|^{2}}\times|{\mathcal{A}}|+{{\log}_{2}}D}\right)$,
indicating that the complexity of the proposed algorithm is slightly higher
than the classical DQN learning algorithm. However, our proposed algorithm
achieves better performance than that of the classical DQN algorithm, which
will be shown in the next section.
### IV-D Implementation Details of DRL
This subsection provides extensive details regarding the generation of the
training, validation, and testing datasets.
Generation of training data: As shown in Fig. 3, $K$ single-antenna MUs and $M$
single-antenna eavesdroppers are randomly located in the right-hand-side
$100\,{\rm m}\times 100\,{\rm m}$ rectangle of Fig. 3 (light blue area) in a
two-dimensional x-y grid plane. The BS and the IRS are located at (0, 0) and
(150, 100) in meters (m), respectively. The x-y grid has dimensions
100 m$\times$100 m with a resolution of 2.5 m, i.e., a total of 1600 points.
Figure 3: Simulation setup.
In the presented IRS-assisted system, the system beamforming codebook
${\mathcal{F}}$ includes the BS’s beamforming codebook
${\mathcal{{F_{{\rm{BS}}}}}}$ and the IRS’s beamforming codebook
${\mathcal{{F_{{\rm{IRS}}}}}}$. Both the BS’s beamforming matrix ${\bf{V}}$
and the IRS’s reflection beamforming matrix ${\bf{\Psi}}$ are picked from the
pre-defined codebook ${\mathcal{{F_{{\rm{BS}}}}}}$ and
${\mathcal{{F_{{\rm{IRS}}}}}}$, respectively. The data points of the sampled
channel vector and the corresponding reward vector
$\left\langle{{\bf{h}},{\bf{r}}}\right\rangle$ are added into the DNN training
data set ${\mathcal{D}}$. The sampled channel, ${\bf{h}}$, is the input to
DQN. All the samples are normalized using a simple per-dataset scaling scheme.
After training, the selected BS beamforming matrix ${\bf{V}}$ and IRS
beamforming matrix ${\bf{\Psi}}$ with the highest achievable reward are used to
reflect the secure communication performance.
The DQN learning model is trained using empirically chosen hyper-parameters,
where the DNN is trained for 1000 epochs with 128 mini-batches utilized in each
epoch. In the training process, 80% and 20% of all generated data are selected
as the training and validation (test) datasets, respectively. The experience
replay buffer size is 32000, and samples are randomly drawn from this number of
the most recent experiences.
DQN structure: The DQN model is designed as a Multi-Layer Perceptron network,
which is also referred to as a feedforward fully connected network. Note
here that the Multi-Layer Perceptron network is widely used to build an advanced
estimator, which captures the relation between the environment descriptors and
the beamforming matrices (both the BS’s beamforming matrix and the IRS’s
reflecting beamforming matrix).
The DQN model is comprised of $L$ layers, as illustrated in Fig. 2, where the
first layer is the input layer, the last layer is the output layer, and the
remaining layers are the hidden layers. The $l$-th hidden layer in the network
has a stack of neurons, each of which connects to all the outputs of the
previous layer. Each unit operates on a single input value and outputs another
single value. The input layer takes the system states, i.e., channel samples,
the achievable rate, and the QoS satisfaction level information in the last
time slot, while the output layer outputs the predicted reward values
associated with the beamforming matrices, in terms of the BS’s beamforming
matrix and the IRS’s reflecting beamforming matrix. This DQN construction is
used for training stability. The network parameters will be provided in the
next section.
Training loss function: The objective of the DRL model is to find the best
beamforming matrices, i.e., ${\bf{V}}$ and ${\bf{\Psi}}$, from the beamforming
codebook with the highest achievable reward from the environment. To estimate
the achievable reward, a regression loss function is adopted to train the
learning model, where the DNN is trained to make
its output, ${\bf{\hat{r}}}$, as close as possible to the desired normalized
reward, ${\bf{\bar{r}}}$. Formally, the training is driven by minimizing the
loss function, ${\mathcal{L}}({\bm{\theta}})$, defined as
$\begin{split}{\mathcal{L}}({\bm{\theta}})=MSE\left({{\bf{\hat{r}}},{\bf{\bar{r}}}}\right),\end{split}$
(38)
where ${\bm{\theta}}$ is the set of all DNN parameters, $MSE(\cdot)$ denotes
the mean-squared error between ${\bf{\hat{r}}}$ and ${\bf{\bar{r}}}$. Note
that the outputs of the DNN, ${\bf{\hat{r}}}$, can be viewed as functions of
${\bm{\theta}}$, and the inputs of the DNN are the system states shown in (12).
## V Simulation Results and Analysis
This section evaluates the performance of the IRS-aided secure communication
system. The background noise power of MUs and eavesdroppers is equal to -90
dBm. We set the number of antennas at the BS to $N=4$, the number of MUs to
$K=2$, and the number of eavesdroppers to $M=2$. The transmit power
${P_{\max}}$ at the BS varies between 15 dBm and 40 dBm, the number of IRS
elements $L$ varies between 10 and 60, and the outdated CSI coefficient $\rho$
varies from 0.5 to 1 for different simulation settings. The minimum secrecy
rate and the minimum transmission data rate are 3 bits/s/Hz and 5 bits/s/Hz,
respectively. The path loss model is defined by
$PL=P{L_{0}}-10\varsigma{\log_{10}}(d/{d_{0}})$ dB, where
$P{L_{0}}=30\;{\rm{dB}}$ is the path loss at the reference distance
${d_{0}}=1\;{\rm{m}}$ [9], [38], $\varsigma=3$ is the path loss exponent, and
$d$ is the distance from the transmitter to the receiver. The learning model
consists of three connected hidden layers, containing 500, 250, and 200
neurons [39], respectively. The learning rate is set to $\alpha=0.002$ and the
discount factor is set to $\gamma=0.95$. The exploration rate
$\varepsilon$ is linearly annealed from 0.8 to 0.1 over the first 300
episodes and remains constant afterwards. The parameters ${\mu_{1}}$ and
${\mu_{2}}$ in (14) are set to ${\mu_{1}}={\mu_{2}}=2$ to balance the utility
and cost [33]-[35]. Similar to the IRS-aided communication systems of [9], [13]
and [17], the path loss exponent from the BS to the UEs is set to 3.2, from the
BS to the IRS to 2.2, and from the IRS to the UEs to 2.2.
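For reference, the path loss model above can be evaluated directly; the sketch below mirrors the stated formula with $P{L_{0}}=30$ dB, ${d_{0}}=1$ m, and $\varsigma=3$, with an assumed example distance:

```python
# Sketch: evaluating PL = PL0 - 10*varsigma*log10(d/d0) dB as stated above.
import math

def path_loss_db(d, pl0=30.0, d0=1.0, exponent=3.0):
    return pl0 - 10.0 * exponent * math.log10(d / d0)

print(path_loss_db(100.0))   # 30 - 30*2 = -30 dB at an assumed 100 m distance
```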
The selection of the network parameters decides the learning convergence speed
and efficiency. Here, we take a network parameter, namely the learning rate,
as an example to demonstrate the importance of parameter selection. Fig. 4
shows the average system reward versus training episodes under different
learning rates, i.e., $\alpha=\{0.1,0.01,0.001,0.0001\}$. It can be observed
that different learning rates have different effects on the performance of the
deep reinforcement learning algorithm. Specifically, there exist oscillations
for the too-large learning rate of $\alpha=0.1$, and its reward performance is
much lower than that of $\alpha=0.001$. In addition, the too-small learning
rate of $\alpha=0.0001$ requires a longer time to achieve convergence. We can
see that the model is able to learn the problem well with the learning rate of
$\alpha=0.001$. Hence, we select a suitable learning rate, neither too large
nor too small, whose value can be around 0.001 in our test.
Figure 4: Average reward performance versus episodes under different learning
rates.
To analyze the loss function of the presented DRL shown in Section IV.D, Fig.
5 illustrates the training/validation loss value during training versus the
number of epochs. We can observe that the training/validation loss decreases
significantly over the first few dozen epochs and then levels off after about
150 epochs. Furthermore, the validation loss is only slightly higher than the
training loss, which demonstrates that the designed DNN weights provide a good
fit in terms of the mapping between input samples and output samples. It is
worth noting that Fig. 5 is used to investigate how well the DNN weight
parameters are designed. If the validation loss is high while the training
loss is low, the DQN model is over-fitting and the regularization factors may
need to be adjusted; if both the validation and training loss values are high,
the DQN model is under-fitting and the number of neurons may need to be
adjusted.
Figure 5: The training and validation losses of DNN employed.
In addition, simulation results are provided to evaluate the performance of
the proposed deep PDS-PER learning based secure beamforming approach (denoted
as deep PDS-PER beamforming) in the IRS-aided secure communication system, and
compare the proposed approach with the following existing approaches:
* •
The classical DQN based secure beamforming approach (denoted as DQN-based
beamforming), where DNN is employed to estimate the Q-value function, when
acting and choosing the secure beamforming policy corresponding to the highest
Q-value.
* •
The existing secrecy rate maximization approach, which alternately optimizes
the BS’s transmit beamforming and the IRS’s reflect beamforming while fixing
the other parameters as constants, using an iterative algorithm similar to the
suboptimal solution of [14] (denoted as Baseline 1 [14]).
* •
The optimal BS’s transmit beamforming approach without IRS assistance (denoted
as optimal BS without IRS). Without IRS, the optimization problem (11) is
transformed as
$\begin{split}\begin{array}[]{l}\mathop{\max}\limits_{{\bf{V}}}\mathop{\min}\limits_{\\{\Delta{\bf{h}}\\}}\sum\limits_{k\in{\mathcal{K}}}{R_{k}^{\sec}}\\\
s.t.\;\;({\rm{a}}):\;R_{k}^{\sec}\geq R_{k}^{\sec,\min},\;\forall
k\in{\mathcal{K}},\\\
\;\;\;\;\;\;\;({\rm{b}}):\;\mathop{\min}\limits_{\scriptstyle||\Delta{{\bf{h}}_{{\rm{bu}}}}|{|^{2}}\leq{({\varsigma_{{\rm{bu}}}})^{2}},\hfill\atop\scriptstyle||\Delta{{\bf{h}}_{{\rm{ru}}}}|{|^{2}}\leq{({\varsigma_{{\rm{ru}}}})^{2}}\hfill}(R_{k}^{\rm{u}})\geq
R_{k}^{\min},\;\forall k\in{\mathcal{K}},\\\
\;\;\;\;\;\;\;({\rm{c}}):\;{\rm{Tr}}\left({{\bf{V}}{{\bf{V}}^{H}}}\right)\leq{P_{\max}},\\\
\;\;\;\;\;\;\;({\rm{d}}):\;|\chi{e^{j{\theta_{l}}}}|=1,\;0\leq{\theta_{l}}\leq
2\pi,\;\forall l\in{\mathcal{L}}.\end{array}\end{split}$ (39)
From the optimization problem (39), the system only needs to optimize the
BS’s transmit beamforming matrix ${\bf{V}}$. Problem (39) is non-convex due to
the rate constraints, and hence we consider a semidefinite programming (SDP)
relaxation to solve it. After transforming problem (39) into a convex
optimization problem, we can use CVX to obtain the solution [12]-[16].
Figure 6: Performance comparisons versus the maximum transmit power at the BS.
Fig. 6 shows the average secrecy rate and QoS satisfaction probability
(consists of the secrecy rate satisfaction probability (11a) and the minimum
rate satisfaction probability (11b)) versus the maximum transmit power
${P_{\max}}$, when $L=40$ and $\rho=0.95$. As expected, both the secrecy rate
and QoS satisfaction probability of all the approaches increase monotonically
with increasing ${P_{\max}}$. The reason is that when ${P_{\max}}$ increases,
the received SINR at MUs improves, leading to the performance improvement. In
addition, we find that our proposed learning approach outperforms the
Baseline 1 approach. In fact, our approach jointly optimizes the beamforming
matrices ${\bf{V}}$ and ${\bf{\Psi}}$, which can simultaneously create more
favorable channel propagation conditions for the MUs and impair the
eavesdroppers, while the Baseline 1 approach optimizes the beamforming
matrices in an iterative way. Moreover, our proposed approach achieves higher
performance than DQN in terms of both secrecy rate and QoS satisfaction
probability, owing to its efficient learning capacity from utilizing the
PDS-learning and PER schemes in the dynamic environment. From Fig. 6, we also
find that the three IRS-assisted secure beamforming approaches provide
significantly higher secrecy rate and QoS satisfaction probability than the
traditional system without IRS. This indicates that the IRS can effectively
guarantee secure communication and QoS requirements via reflecting
beamforming, where the reflecting elements (IRS-induced phases) at the IRS can
be adjusted to maximize the received SINR at the MUs and suppress the
wiretapped rate at the eavesdroppers.
Figure 7: Performance comparisons versus the number of IRS elements.
In Fig. 7, the achievable secrecy rate and QoS satisfaction level of all
approaches are evaluated by varying the number of IRS elements, i.e., from
$L=10$ to 60, when ${P_{\max}}=30\;{\rm{dBm}}$ and $\rho=0.95$. For the
IRS-assisted secure beamforming approaches, the achievable secrecy rates and
QoS satisfaction levels significantly increase with the number of IRS
elements. The improvement results from the fact that with more IRS elements,
more signal paths and more signal power can be reflected by the IRS to improve
the received SINR at the MUs and to decrease the received SINR at the
eavesdroppers. In addition, the performance of the approach without IRS
remains constant under the different numbers of IRS elements.
From Fig. 7(a), it is found that the secrecy rate of the proposed learning
approach is higher than those of the Baseline 1 and DQN approaches. Moreover,
the performance gap obviously increases with $L$: with more reflecting
elements at the IRS, the proposed deep PDS-PER learning based secure
communication approach becomes more flexible for optimal phase shift
(reflecting beamforming) design and hence achieves higher gains. In addition,
from Fig. 7(b), compared with the Baseline 1 and DQN approaches, as the number
of reflecting elements at the IRS increases, we observe that the proposed
learning approach is the first to attain a 100% QoS satisfaction level. This
superior achievement stems from the particular design of the QoS-aware reward
function shown in (14) for secure communication.
Figure 8: Performance comparisons versus outdated CSI coefficient $\rho$.
In Fig. 8, we further analyze how the system secrecy rate and QoS satisfaction
level performances are affected by the outdated CSI coefficient $\rho$ in the
system, i.e., from $\rho=0.5$ to 1, when ${P_{\max}}=30\;{\rm{dBm}}$ and
$L=40$. Note that as $\rho$ decreases, the CSI becomes more outdated as shown
in (4) and (6), and $\rho=1$ means non-outdated CSI. It can be observed that,
for all beamforming approaches, when the CSI becomes more outdated (as $\rho$
decreases), the average secrecy rate and QoS satisfaction level decrease. The
reason is that a higher value of $\rho$ indicates more accurate CSI, which
enables all the approaches to optimize the secure beamforming policy and
achieve a higher average secrecy rate and QoS satisfaction level in the
system. It can also be observed that reducing $\rho$ has a stronger effect on
the performance of the other three approaches, while our proposed learning
approach still maintains its performance at a favorable level, indicating that
the other three approaches are more sensitive to CSI uncertainty and
demonstrating the robustness of the proposed learning approach. For instance,
the proposed learning approach achieves secrecy rate and QoS satisfaction
level improvements of 17.21% and 8.67%, respectively, compared with the
Baseline 1 approach when $\rho=0.7$. Moreover, the proposed learning approach
achieves the best performance among all approaches against channel
uncertainty. The reason is that it considers the time-varying channels and
takes advantage of PDS-learning to effectively learn the dynamic environment.
## VI Conclusion
In this work, we have investigated the joint BS’s beamforming and IRS’s
reflect beamforming optimization problem under the time-varying channel
conditions. As the system is highly dynamic and complex, we have exploited the
recent advances of machine learning, and formulated the secure beamforming
optimization problem as an RL problem. A deep PDS-PER learning based secure
beamforming approach has been proposed to jointly optimize both the BS’s
beamforming and the IRS’s reflect beamforming in the dynamic IRS-aided secure
communication system, where PDS and PER schemes have been utilized to improve
the learning convergence rate and efficiency. Simulation results have verified
that the proposed learning approach outperforms other existing approaches in
terms of enhancing the system secrecy rate and the QoS satisfaction
probability.
## References
* [1] N. Yang, L. Wang, G. Geraci, M. Elkashlan, J. Yuan, and M. D. Renzo, “Safeguarding 5G wireless communication networks using physical layer security,” _IEEE Commun. Mag._ , vol. 53, no. 4, pp. 20-27, Apr. 2015.
* [2] A. D. Wyner, “The wiretap channel,” _Bell Syst. Tech. J._ , vol. 54, no. 8, pp. 1355–1387, Oct. 1975.
* [3] Q. Li and L. Yang, “Beamforming for cooperative secure transmission in cognitive two-way relay networks,” _IEEE Trans. Inf. Forensics Security_ , vol. 15, pp. 130-143, Jan. 2020.
* [4] L. Xiao, X. Lu, D. Xu, Y. Tang, L. Wang, and W. Zhuang, “UAV relay in VANETs against smart jamming with reinforcement learning,” _IEEE Trans. Veh. Technol._ , vol. 67, no. 5, pp. 4087-4097, May 2018.
* [5] W. Wang, K. C. Teh and K. H. Li, “Artificial noise aided physical layer security in multi-antenna small-cell networks,” _IEEE Trans. Inf. Forensics Security_ , vol. 12, no. 6, pp. 1470-1482, Jun. 2017.
* [6] H. Wang, T. Zheng, and X. Xia, “Secure MISO wiretap channels with multiantenna passive eavesdropper: Artificial noise vs. artificial fast fading,” _IEEE Trans. Wireless Commun._ , vol. 14, no. 1, pp. 94-106, Jan. 2015.
* [7] R. Nakai and S. Sugiura, “Physical layer security in buffer-state-based max-ratio relay selection exploiting broadcasting with cooperative beamforming and jamming,” _IEEE Trans. Inf. Forensics Security_ , vol. 14, no. 2, pp. 431-444, Feb. 2019.
* [8] Z. Mobini, M. Mohammadi, and C. Tellambura, “Wireless-powered full-duplex relay and friendly jamming for secure cooperative communications,” _IEEE Trans. Inf. Forensics Security_ , vol. 14, no. 3, pp. 621-634, Mar. 2019.
* [9] Q. Wu and R. Zhang, “Towards smart and reconfigurable environment: Intelligent reflecting surface aided wireless network,” _IEEE Commun. Mag._ , vol. 58, no. 1, pp. 106-112, Jan. 2020.
* [10] J. Zhao, “A survey of intelligent reflecting surfaces (IRSs): Towards 6G wireless communication networks,” 2019. [Online]. Available: https://arxiv.org/abs/1907.04789.
* [11] H. Han, _et al._ , “Intelligent reflecting surface aided power control for physical-layer broadcasting,” 2019. [Online]. Available: https://arxiv.org/abs/1912.03468.
* [12] C. Huang, A. Zappone, G. C. Alexandropoulos, M. Debbah, and C. Yuen, “Reconfigurable intelligent surfaces for energy efficiency in wireless communication,” _IEEE Trans. Wireless Commun._ , vol. 18, no. 8, pp. 4157-4170, Aug. 2019.
* [13] Q. Wu and R. Zhang, “Intelligent reflecting surface enhanced wireless network via joint active and passive beamforming,” _IEEE Trans. Wireless Commun._ , vol. 18, no. 11, pp. 5394-5409, Nov. 2019.
* [14] M. Cui, G. Zhang, and R. Zhang, “Secure wireless communication via intelligent reflecting surface,” _IEEE Wireless Commun. Lett._ , vol. 8, no. 5, pp. 1410-1414, Oct. 2019.
* [15] H. Shen, W. Xu, S. Gong, Z. He, and C. Zhao, “Secrecy rate maximization for intelligent reflecting surface assisted multi-antenna communications,” _IEEE Commun. Lett._ , vol. 23, no. 9, pp. 1488-1492, Sep. 2019.
* [16] X. Yu, D. Xu, and R. Schober, “Enabling secure wireless communications via intelligent reflecting surfaces,” in _Proc. IEEE Global Commun. Conf. (GLOBECOM)_ , Waikoloa, HI, USA, Dec. 2019, pp. 1–6
* [17] Q. Wu and R. Zhang, “Beamforming optimization for wireless network aided by intelligent reflecting surface with discrete phase shifts,” _IEEE Trans. Commun._ , vol. 68, no. 3, pp. 1838-1851, Mar. 2020.
* [18] Z. Chu, W. Hao, P. Xiao, and J. Shi, “Intelligent reflecting surface aided multi-antenna secure transmission,” _IEEE Wireless Commun. Lett._ , vol. 9, no. 1, pp. 108-112, Jan. 2020.
* [19] B. Feng, Y. Wu, and M. Zheng, “Secure transmission strategy for intelligent reflecting surface enhanced wireless system,” 2019. [Online]. Available: http://arxiv.org/abs/1909.00629.
* [20] J. Chen, Y. Liang, Y. Pei and H. Guo, “Intelligent reflecting surface: A programmable wireless environment for physical layer security,” _IEEE Access_ , vol. 7, pp. 82599-82612, May 2019.
* [21] X. Yu, D. Xu, Y. Sun, D. W. K. Ng, and R. Schober, “Robust and secure wireless communications via intelligent reflecting surfaces,” 2019. [Online]. Available: https://arxiv.org/abs/1912.01497.
* [22] X. Guan, Q. Wu, and R. Zhang, “Intelligent reflecting surface assisted secrecy communication: Is artificial noise helpful or not?,” _IEEE Wireless Commun. Lett._ , vol. 9, no. 6, pp. 778-782, Jun. 2020.
* [23] L. Dong and H. Wang, “Secure MIMO transmission via intelligent reflecting surface,” Appear in _IEEE Wireless Commun. Lett._ , vol. 9, no. 6, pp. 787-790, June 2020.
* [24] W. Jiang, Y. Zhang, J. Wu, W. Feng, and Y. Jin, “Intelligent reflecting surface assisted secure wireless communications with multiple-transmit and multiple-receive antennas,” 2019. [Online]. Available: https://arxiv.org/abs/2001.08963.
* [25] D. Xu, X. Yu, Y. Sun, D. W. K. Ng, and R. Schober, “Resource allocation for secure IRS-assisted multiuser MISO systems,” 2019. [Online]. Available: http://arxiv.org/abs/1907.03085.
* [26] C. Huang, G. C. Alexandropoulos, C. Yuen, and M. Debbah, “Indoor signal focusing with deep learning designed reconfigurable intelligent surfaces,” 2019. [Online]. Available: https://arxiv.org/abs/1905.07726.
* [27] A. Taha, M. Alrabeiah, and A. Alkhateeb, “Enabling large intelligent surfaces with compressive sensing and deep learning,” 2019. [Online]. Available: https://arxiv.org/abs/1904.10136.
* [28] K. Feng, Q. Wang, X. Li and C. Wen, “Deep reinforcement learning based intelligent reflecting surface optimization for MISO communication systems,” _IEEE Wireless Commun. Lett._ , vol. 9, no. 5, pp. 745-749, May 2020.
* [29] C. Huang, R. Mo, and C. Yuen, “Reconfigurable intelligent surface assisted multiuser MISO systems exploiting deep reinforcement learning,” 2020. [Online]. Available: https://arxiv.org/abs/2002.10072.
* [30] C. Li, W. Zhou, K. Yu, L. Fan, and J. Xia, “Enhanced secure transmission against intelligent attacks,” _IEEE Access_ , vol. 7, pp. 53596-53602, Aug. 2019.
* [31] L. Xiao, G. Sheng, S. Liu, H. Dai, M. Peng, and J. Song, “Deep reinforcement learning-enabled secure visible light communication against eavesdropping,” _IEEE Trans. Commun._ , vol. 67, no. 10, pp. 6994-7005, Oct. 2019.
* [32] M. Wiering and M. van Otterlo, _Reinforcement Learning: State-of-the-Art_ , Springer Publishing Company, Incorporated, 2014.
* [33] H. L. Yang, A. Alphones, C. Chen, W. D. Zhong, and X. Z. Xie, “Learning-based energy-efficient resource management by heterogeneous RF/VLC for ultra-reliable low-latency industrial IoT networks,” _IEEE Trans. Ind. Informat._ , vol. 16, no. 8, pp. 5565-5576, Aug. 2020.
* [34] X. He, R. Jin, and H. Dai, “Deep PDS-learning for privacy-aware offloading in MEC-enabled IoT,” _IEEE Internet of Things J._ , vol. 6, no. 3, pp. 4547-4555, Jun. 2019
* [35] N. Mastronarde and M. van der Schaar, “Joint physical-layer and systemlevel power management for delay-sensitive wireless communications,” _IEEE Trans. Mobile Comput._ , vol. 12, no. 4, pp. 694-709, Apr. 2013.
* [36] T. Schaul, J. Quan, I. Antonoglou, and D. Silver, “Prioritized experience replay,” in _Proc. 4th Int. Conf. Learn. Represent. (ICLR)_ , San Juan, US, May 2016, pp. 1–21.
* [37] H. Gacanin and M. Di Renzo, “Wireless 2.0: Towards an intelligent radio environment empowered by reconfigurable meta-surfaces and artificial intelligence,” 2020. [Online]. Available: https://arxiv.org/abs/2002.11040.
* [38] C. W. Huang _et al._ , “Holographic MIMO surfaces for 6G wireless networks: opportunities, challenges, and trends,” to appear in _IEEE Wireless Commun._ , Apr. 2020.
* [39] F. B. Mismar, B. L. Evans, and A. Alkhateeb, “Deep reinforcement learning for 5G networks: Joint beamforming, power control, and interference coordination,” _IEEE Trans. Commun._ , vol. 68, no. 3, pp. 1581-1592, Mar. 2020.
* [40] H. Yang _et al._ , “Deep reinforcement learning based intelligent reflecting surface for secure wireless communications,” to be presented in _Proc. IEEE Global Commun. Conf. (GLOBECOM)_ , Dec. 2020, Taipei, Taiwan.
Helin Yang (S’15) received the B.S. and M.S. degrees in the School of
Telecommunications Information Engineering from Chongqing University of Posts
and Telecommunications in 2013 and 2016, respectively, and the Ph.D. degree from the School of
Electrical and Electronic Engineering, Nanyang Technological University,
Singapore, in 2020. His current research interests include wireless
communication, visible light communication, Internet of Things and resource
management.
Zehui Xiong (S’17) is currently a researcher with Alibaba-NTU Singapore
Joint Research Institute, Nanyang Technological University, Singapore. He
received the Ph.D. degree at Nanyang Technological University, Singapore. He
received the B.Eng degree with the highest honors at Huazhong University of
Science and Technology, Wuhan, China. He is the visiting scholar with
Princeton University and University of Waterloo. His research interests
include network economics, wireless communications, blockchain, and edge
intelligence. He has published more than 60 peer-reviewed research papers in
leading journals and flagship conferences, and 3 of them are ESI Highly Cited
Papers. He has won several Best Paper Awards. He is an Editor for Elsevier
Computer Networks (COMNET) and Elsevier Physical Communication (PHYCOM), and
an Associate Editor for IET Communications. He is the recipient of the Chinese
Government Award for Outstanding Students Abroad in 2019, and NTU SCSE
Outstanding PhD Thesis Runner-Up Award in 2020.
---|---
Jun Zhao (S’10-M’15) is currently an Assistant Professor in the School of Computer Science and Engineering at Nanyang Technological University (NTU) in Singapore. He received a Ph.D. degree in Electrical and Computer Engineering from Carnegie Mellon University (CMU) in the USA (advisors: Virgil Gligor, Osman Yagan; collaborator: Adrian Perrig), affiliating with CMU’s renowned CyLab Security & Privacy Institute, and a bachelor’s degree from Shanghai Jiao Tong University in China. Before joining NTU, first as a postdoc with Xiaokui Xiao and then as a faculty member, he was a postdoc at Arizona State University as an Arizona Computing PostDoc Best Practices Fellow (advisors: Junshan Zhang, Vincent Poor). His research interests include communications, networks, security, and AI.
Dusit Niyato (M’09-SM’15-F’17) is currently a professor in the School of Computer Science and Engineering at Nanyang Technological University, Singapore. He received the B.Eng. degree from King Mongkut’s Institute of Technology Ladkrabang (KMITL), Thailand, in 1999 and the Ph.D. degree in Electrical and Computer Engineering from the University of Manitoba, Canada, in 2008. His research interests are in the areas of energy harvesting for wireless communication, the Internet of Things (IoT) and sensor networks.
Liang Xiao (M’09-SM’13) is currently a Professor in the Department of Information and Communication Engineering, Xiamen University, Xiamen, China. She has served as an associate editor of IEEE Transactions on Information Forensics and Security and as a guest editor of the IEEE Journal of Selected Topics in Signal Processing. She is the recipient of the best paper awards of the 2016 INFOCOM Big Security Workshop and 2017 ICC. She received the B.S. degree in communication engineering from Nanjing University of Posts and Telecommunications, China, in 2000, the M.S. degree in electrical engineering from Tsinghua University, China, in 2003, and the Ph.D. degree in electrical engineering from Rutgers University, NJ, in 2009. She was a visiting professor with Princeton University, Virginia Tech, and the University of Maryland, College Park.
Qingqing Wu (S’13-M’16) received the B.Eng. and Ph.D. degrees in Electronic Engineering from South China University of Technology and Shanghai Jiao Tong University (SJTU) in 2012 and 2016, respectively. He is currently an assistant professor in the Department of Electrical and Computer Engineering at the University of Macau, China, and is also affiliated with the State Key Laboratory of Internet of Things for Smart City. He was a Research Fellow in the Department of Electrical and Computer Engineering at the National University of Singapore. His current research interests include intelligent reflecting surfaces (IRS), unmanned aerial vehicle (UAV) communications, and MIMO transceiver design. He has published over 70 IEEE journal and conference papers. He was the recipient of the IEEE WCSP Best Paper Award in 2015, the Outstanding Ph.D. Thesis Funding of SJTU in 2016, and the Outstanding Ph.D. Thesis Award of the China Institute of Communications in 2017. He was an Exemplary Editor of IEEE Communications Letters in 2019 and an Exemplary Reviewer of several IEEE journals. He serves as an Associate Editor for IEEE Communications Letters, IEEE Open Journal of the Communications Society, and IEEE Open Journal of Vehicular Technology. He is the Lead Guest Editor for the IEEE Journal on Selected Areas in Communications on “UAV Communications in 5G and Beyond Networks”, and a Guest Editor for IEEE Open Journal of Vehicular Technology on “6G Intelligent Communications” and IEEE Open Journal of the Communications Society on “Reconfigurable Intelligent Surface-Based Communications for 6G Wireless Networks”. He is the workshop co-chair for the IEEE ICC 2019-2021 workshops on “Integrating UAVs into 5G and Beyond” and for the IEEE GLOBECOM 2020 workshop on “Reconfigurable Intelligent Surfaces for Wireless Communication for Beyond 5G”. He serves as the Workshops and Symposia Officer of the Reconfigurable Intelligent Surfaces Emerging Technology Initiative and the Research Blog Officer of the Aerial Communications Emerging Technology Initiative.
|
2024-09-04T02:54:55.227312 | 2020-02-27T17:33:16 | 2002.12274 | {
"authors": "Paz Grimberg, Tobias Lauinger, Damon McCoy",
"full_text_license": null,
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"provenance": "arxiv-papers-0000.json.gz:25924",
"submitter": "Paz Grimberg",
"url": "https://arxiv.org/abs/2002.12274"
} | arxiv-papers | # Empirical Analysis of Indirect Internal Conversions in Cryptocurrency
Exchanges
Paz Grimberg, Tobias Lauinger, Damon McCoy
New York University
## I Abstract
Algorithmic trading is well studied in traditional financial markets. However,
it has received less attention in centralized cryptocurrency exchanges. The
Commodity Futures Trading Commission (CFTC) attributed the $2010$ flash crash,
one of the most turbulent periods in the history of financial markets that saw
the Dow Jones Industrial Average lose $9\%$ of its value within minutes, to
automated order “spoofing” algorithms. In this paper, we build a set of
methodologies to characterize and empirically measure different algorithmic
trading strategies in Binance, a large centralized cryptocurrency exchange,
using a complete data set of historical trades. We find that a sub-strategy of
triangular arbitrage is widespread, where bots convert between two coins
through an intermediary coin, and obtain a favorable exchange rate compared to
the direct one. We measure the profitability of this strategy, characterize
its risks, and outline two strategies that algorithmic trading bots use to
mitigate their losses. We find that this strategy yields an exchange ratio
that is $0.144\%$, or $14.4$ basis points (bps) better than the direct
exchange ratio. $2.71\%$ of all trades on Binance are attributable to this
strategy.
## II Introduction
Cryptocurrency exchanges today handle more than $50b in daily trade volume
[5]. Most of it occurs on centralized exchanges, which hold assets and settle
trades on behalf of their customers. Traders on these exchanges can convert
different coins at certain exchange ratios, just like traditional foreign
exchange (FX) markets. The ability to convert between coins creates potential
arbitrage opportunities, where traders can make a profit through a series of
conversions. The case involving three conversions, coin $c_{1}$ converted to
coin $c_{2}$, which is then converted to coin $c_{3}$ and back to $c_{1}$, is
called triangular arbitrage if the proceeds of the conversions are greater
than the initial quantity. The existence of triangular arbitrage in foreign
exchange markets is well documented [14] [19] [18] [26]. The characteristics
of cryptocurrency exchanges and their relationship to traditional foreign
exchange markets have been studied as well [17]. However, to the best of our
knowledge, triangular arbitrage has never been studied within the context of a
centralized cryptocurrency exchange.
In this paper, we measure arbitrage activity in Binance, a centralized
cryptocurrency exchange, by empirically exploring its complete historical
trade data. Since different traders cannot be identified from trade data, we
cluster sequences of consecutive trades that match in their quantities and
timing and attribute them to the same trader. To the best of our knowledge, we
are the first to employ a clustering methodology for identifying triangular
arbitrage traders, based on trade-by-trade data.
We find that triangular arbitrage is rarely accomplished. Participants
predominantly engage in an alternative strategy, which we call indirect
internal conversions, where coin $A$ is converted to coin $B$ through an
intermediary coin $x$, at a favorable exchange ratio compared to directly
converting $A$ to $B$. This activity accounts for $2.71$% of the total daily
volume, and offers an exchange ratio that is $14.4$ bps better on average.
We believe that the fee structure in cryptocurrency exchanges makes it
unprofitable for participants to engage in triangular arbitrage. Instead,
participants turn to indirect conversions as an efficient way to rebalance
their holdings.
## III Background
### III-A Exchanges
An exchange is an organized market where tradable securities, commodities,
foreign exchange/cryptocurrencies (or “coins”) and derivatives are sold and
bought (collectively referred to as instruments). In a centralized exchange,
the deposits and assets of participants are held and settled by the exchange.
In decentralized exchanges (or “DEXes”), a smart contract (a program executing
on a blockchain) or other form of peer-to-peer network executes exchange
functionality. In DEXes, funds cannot be stolen by the exchange operator,
because their custody and exchange logic is processed and guaranteed by the
smart contract.
In centralized cryptocurrency exchanges, different cryptocurrencies can be
exchanged to others, such as Bitcoin and Ethereum. In addition, some exchanges
list ERC20 tokens, or simply “tokens,” that can also be exchanged to
cryptocurrencies. Tokens are essentially smart contracts that make use of the
Ethereum blockchain [13].
Participants can place orders on an exchange. An order is an instruction to
buy or sell some traded instrument. These instructions can be simple or
complicated, and can be sent to either a broker or directly to an exchange via
direct market access. There are some standard instructions for such orders.
For example, a market order is a buy or sell order to be executed immediately
at the current market prices, i.e., buy at the lowest asking price or sell to
the highest bidding price. Market orders are typically used when certainty of
execution is a priority over the price of execution. A limit order is an order
to buy an instrument at no more than a specific price, or to sell at no less
than a specific price (called “or better” for either direction). This gives
the trader control over the price at which the trade is executed; however, the
order may never be executed (“filled”). Limit orders are typically used when
the trader wishes to control price rather than certainty of execution.
Each instrument traded on an exchange has an order book. The order book refers
to an electronic list of buy and sell orders for a specific security or
financial instrument organized by price level. An order book lists the number
of shares being bid on or offered at each price point, or market depth [7].
### III-B Arbitrage
The traditional financial industry has settled on three main types of
quantitative strategies that are sustainable because they provide real
economic value to the market: arbitrage, market-making and market-taking [8].
In market-making strategies, traders post both buy and sell orders in the same
financial instrument, hoping to make a profit on the bid-ask spread [25]. In
market-taking strategies, traders engage in longer-term trading, subject to
some rules-based investment methodology. Market-taking strategies on
centralized cryptocurrency exchanges have been studied in [23].
Arbitrage and its economic benefits have been well understood for quite some
time and documented by academia [27] [15]. Competitive arbitrageurs on
centralized exchanges have at least one of three advantages.
1. 1.
Scale: participants who trade large volumes are often compensated in the form
of kickbacks, rebates, and low (or zero) trading fees, which provide such
participants the opportunity to make profits in cases where others with less
scale cannot.
2. 2.
Speed: the ability to access order book information and trade faster than
others provides an opportunity to gain from mispricings. For example,
triangular arbitrage on FX products is a major impetus for the Go West and
Hibernia microwave telecommunications projects [9] [1], where multi-million
dollar network infrastructure was developed for the purpose of shaving off
milliseconds in electronic trading latency.
3. 3.
Queue position: being able to enter one leg of an arbitrage trade by placing
orders ahead of time without crossing the spread, i.e., without placing orders
that execute immediately instead of being queued in the order book, significantly
reduces fees, which enables profitable completion of triangular arbitrage
trades. Participants are compensated for the risk of queuing orders in the
order book.
Arbitrage strategies involving multiple centralized cryptocurrency exchanges,
exploiting different prices of same assets, have been studied [22] [21] [24]
[20].
Arbitrage strategies on decentralized exchanges are executed by bots who pay
high transaction fees and optimize their network latency to front-run ordinary
users’ trades [16].
At time of writing, the vast majority of cryptocurrencies trading volume (over
$99\%$) is done in centralized exchanges [5], therefore in this paper, we
focus on triangular arbitrage in a single centralized cryptocurrency exchange
(Binance). Triangular arbitrage is an arbitrage strategy resulting from a
discrepancy between three coins that occurs when their exchange rates give
rise to a profitable sequence of trades, i.e., trades of the form
$c_{1}\mapsto c_{2}\mapsto c_{3}\mapsto c_{1}$, where $c_{1},c_{2},c_{3}$ are
coins, and the ending balance of $c_{1}$ is greater than the initial balance.
We call $c_{1}$ the base coin, $c_{2}$ and $c_{3}$ intermediary coins, and the
sequence a cycle [10].
## IV Dataset
We use historical cryptocurrency trade data and order book data provided by
Kaiko from Nov $9$th, $2017$ through Aug $6$th, $2019$ [6]. We conducted our
measurements on Binance’s exchange data, as it is the largest centralized
exchange by daily volume and has the highest number of listed symbols (the
“ticker” of an exchange pair). Kaiko’s data consists of trade-by-trade data
and market depth data for all cryptocurrencies and ERC20 tokens traded on
Binance. There are $1,964,461,269$ trades. Figure 1 shows the monthly number
of trades executed on Binance.
Figure 1: Number of monthly trades, in millions, executed on Binance, based on
Kaiko’s data set.
Every coin in Binance is denominated by at least one of the following coins:
BNB (Binance’s native token), Bitcoin, ALTS (Ethereum, Ripple, Tron) and
stable coins. Stable coins are fiat-pegged cryptocurrencies designed to
minimize the volatility of price fluctuations. We call these coins anchor
coins, as these coins can be directly exchanged to many other coins and have
historically had the greatest market value (in US dollar terms) [3].
There are $204$ different coins and $704$ different possible conversions. We
denote such conversions by $c_{1}\Leftrightarrow c_{2}$, where $c_{1}$ and
$c_{2}$ are two different coins. We call $c_{1}\Leftrightarrow c_{2}$, a pair.
For example, if $c_{1}=\text{BTC}$ and $c_{2}$ = ETH, then the pair is denoted
$\text{BTC}\Leftrightarrow\text{ETH}$. Traders can exchange their BTC to ETH
(or vice versa) using this conversion. We write $c_{1}\mapsto c_{2}$ when
considering the side of the $c_{1}$ holder.
There are 6,534 possible cycles in Binance, i.e., sequences of the form
$c_{1}\mapsto c_{2}\mapsto c_{3}\mapsto c_{1}$. In $3,900$ of them, one of
$c_{2}$ and $c_{3}$ is an anchor. In $2,634$ of them, both $c_{2}$ and $c_{3}$
are anchors. There are no cases where both $c_{2}$ and $c_{3}$ are non-
anchors. $c_{1}$ can be anchor or non-anchor.
Cycle statistics in Binance are summarized in Table I. There are $1,982$
cycles with a non-anchor coin as the base coin. These cycles have the
potential to create arbitrage gains in non-anchor coins. However, anchor coins
represent over $92\%$ of the total market value of all listed coins and
historically had lower volatility, compared to non-anchor coins. Therefore, we
focus on cycles with anchor base coins. As future work, we could explore
cycles with non-anchor base coins. However, we note that there is inherent risk
in operating in non-anchor coins, due to volatility.
Base coin | Number of cycles
---|---
BTC | 986
BNB | 818
ALTS | 908
Stable coins | 1,840
Other coins | 1,982
Total | 6,534
Table I: Number of cycles in Binance by base coin. BTC, BNB, ALTS and stable
coins represent over $92\%$ of the total market value of all listed coins and
historically had lower volatility. Therefore, we omit other cycles in this
study.
### IV-A Data Fields
Kaiko’s historical trade data fields are described in Table II. The
granularity of the date field is in ms. Since multiple trades can execute
within the same ms, we use the monotonically increasing trade ids as a way to
order the trades.
Field | Description
---|---
id | Incremental trade id
exchange | Exchange id
symbol | Pair symbol
date | Time (in epoch) when the trade took place
price | Trade price
amount | Trade quantity (quoted in base asset)
sell | Binary. If TRUE, then the holder of $c_{1}$, swapping to $c_{2}$, was a liquidity taker, and $c_{1}$ is the base asset in the pair listed on the exchange
Table II: Kaiko’s trade data fields.
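Because the date field has only millisecond granularity, restoring execution order requires the incremental id. A minimal Python sketch (the toy rows are illustrative, not actual Kaiko records):

```python
import pandas as pd

# Toy rows with the fields of Table II; the first two trades share one
# millisecond timestamp, so 'date' alone cannot order them.
trades = pd.DataFrame({
    "id":     [1002, 1001, 1003],
    "symbol": ["ETHBTC"] * 3,
    "date":   [1525400000123, 1525400000123, 1525400000145],  # epoch ms
    "price":  [0.018876, 0.018875, 0.018876],
    "amount": [0.153, 0.200, 0.050],
    "sell":   [True, False, True],
})

# The monotonically increasing trade id restores the execution order.
trades = trades.sort_values(["symbol", "id"]).reset_index(drop=True)
print(trades)
```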
Kaiko’s historical order book data is given on a 60-second basis, that is, for
every pair listed, its entire order book is given in 60 second increments of
the date field. Note that the quotes’ depth is not given and needs to be
reconstructed separately. The data fields are described in Table III.
Field | Description
---|---
symbol | Pair symbol
date | Time (in epoch) when the order was placed
type | Bid or ask
amount | Order quantity (quoted in base asset)
Table III: Kaiko’s order book data fields.
### IV-B Data Limitations
Kaiko’s order book data is given on a $60$-second interval basis and Binance’s
API does not provide historical order-book data [4]. Complete order book data
could reveal all arbitrage opportunities, not just the ones executed with
actual trades, as it would accurately paint the traders’ view of the market at
any point in time. When merging Kaiko’s historical trade data with its order
book, only about $5\%$ of the trade data has matching order book data
available within a $100ms$ interval. (During our measurements, we used smaller
time intervals than $100ms$.) For profitability measurements, about $0.5\%$ of
the time, we were able to use the order book’s bid/ask directly. For the
remainder, we approximated the bid/ask price with the volume-weighted average
price of that time interval, based on the trade data. In addition, Kaiko does
not include trade fees or historical fee schedules, so we had to reconstruct
Binance’s historical fee time series.
### IV-C Binance Trading Fees
An important data point for our analysis is the set of trading fees collected by
Binance [2]. For every trade executed, Binance collects a fee from the
proceeds of the trade. The fees are collected either in the currency of the
proceeds or in Binance’s native ERC20 token, BNB, depending on the trader’s
BNB balance. If the trader has enough BNB balance, Binance will calculate the
conversion rate between the proceeds and BNB, and withdraw the fees from the
BNB balance. In general, Binance rewards participants who trade large volumes
by charging them a lower fee. In addition, for high-volume traders, Binance
distinguishes between liquidity-taking trades, i.e., market order trades or
limit order trades that were matched immediately at the best bid/ask level,
and market-making trades, which are trades that were placed somewhere in the
order book but were not executed immediately. Binance charges lower fees for
market-making trades, as they wish to encourage participants to “fill up” the
order book, thereby narrowing the bid/ask spread and increasing liquidity. This
is a common practice; however, in traditional financial exchanges, market-
makers pay zero or even negative fees (rebates, kickbacks).
We assume that arbitrageurs are operating at the lowest fee level. To track
Binance’s fee schedule, we used the Wayback Machine [12], a digital archive of
the World Wide Web, to view Binance’s fee web page historically. In our
analysis time span, Binance’s fee web page changed $46$ times. However, the
lowest fee level remained constant at $1.2$bps for makers and $2.4$bps for
takers.
## V Methodology
In this section, we describe our methodology for discovering triangular
arbitrage trade sequences in Binance by examining the trade data. During our
analysis, we discovered that a sub-strategy of triangular arbitrage, involving
only the first two conversions, is widely deployed. We refined our original
methodology to identify such conversions. These conversions exhibit some
risks, one of them is the scenario where multiple traders compete for the same
intermediary coin, potentially harming laggards’ profitability. We identify
these risks and outline a methodology for clustering competing events. Lastly,
we observe different strategies traders take to mitigate this risk and
describe a methodology to detect it.
### V-A Discovering Triangular Arbitrage Trading Sequences
To discern triangular arbitrage trading sequences, we design a methodology to
identify likely arbitrage trading sequences. We look for trading sequences of
the form $c_{1}\mapsto c_{2}\mapsto c_{3}\mapsto c_{1}$, where $c_{1}$ is an
anchor coin (BTC, BNB, ALTS, Stable coins). Arbitrageurs start with some
quantity $Q_{1}$ of $c_{1}$, exchange it to $c_{2}$ at price $p_{12}$,
resulting in $Q_{2}$ units of $c_{2}$. In practice, Binance deducts the fee
upon conversion, therefore $Q_{2}$ will be slightly lower than the conversion
price. Next, $Q_{2}$ units of $c_{2}$ are converted to $c_{3}$, resulting in
$Q_{3}$ units, minus fees. Lastly, $Q_{3}$ units of $c_{3}$ are converted back
to $c_{1}$, minus fees, resulting in $Q_{1}^{\prime}$ units of $c_{1}$. If
$Q_{1}^{\prime}>Q_{1}$, then the arbitrage sequence is profitable.
To successfully profit from this opportunity, arbitrageurs need to execute $3$
trades, $c_{1}\mapsto c_{2}$, $c_{2}\mapsto c_{3}$ and $c_{3}\mapsto c_{1}$.
First, an arbitrageur needs to calculate the initial quantity $Q_{1}$ of
$c_{1}$, such that during conversions, fee payments and other trading
constraints will leave no or minimal residue in the intermediary coins.
Second, since Binance does not support batch trading, i.e., grouping multiple
orders in a single request, arbitrageurs need to ensure correct timing of
their trades. For example, if the order $c_{2}\mapsto c_{3}$ arrives before
$c_{1}\mapsto c_{2}$, then it will fail as the arbitrageur does not hold
$c_{2}$ yet. Furthermore, the arbitrageur competes with other arbitrageurs for
these trades, so speed is an important factor.
In order to identify triangular arbitrage sequences, we first need to ensure
that the same quantity is flowing through different coins, ending at the base
coin. Binance quotes quantities based on how the pair is listed. Different
pairs have different quantity/price restrictions, namely, minimum, maximum and
increment size. Since quantities and prices change in discrete steps, a small
quantity might not be converted, leaving a residue. These small residues need
to be accounted for when identifying trades with equal quantities. While these
residues have a (small) value, in practice, they cannot be converted
immediately to anchor coins, due to minimum size restrictions. As residue
accumulates beyond the exchange threshold, arbitrageurs can convert it back to
anchor coins. We found that less than $0.01\%$ of the time, the profitability
was decided by the residue. Therefore, in our detection process we ignore the
residue value. To illustrate coin exchanges, we follow an actual trade example
from Binance.
#### V-A1 BTC$\Leftrightarrow$ETH Exchange Example
Consider the conversion between BTC and ETH. It is listed as ETH/BTC in
Binance. ETH is called the base asset and BTC the quote asset. The
quantity is always given in base asset terms. This pair has a $0.001$ minimum
quantity (base asset), $100,000$ maximum quantity, $0.001$ quantity
increments, $0.000001$ minimum price (quote asset), $100,000$ maximum price
and $10^{-6}$ price increments. At the time of writing, the last trade of
ETH/BTC was executed at a price of $0.018876$ and quantity $0.153$. The “sell”
flag was on, meaning that the holder of ETH was the one to initiate the trade
and met the buyer at the buyer’s bid price. Assume both participants are at
the cheapest fee level, currently at $2.4$bps for takers and $1.2$bps for
makers.
From the ETH holder’s perspective: Since ETH is the base asset and the “sell”
flag is on, this means the ETH holder initiated the trade and met the buyer at
the buyer’s bid price, therefore paying a liquidity taking fee of $2.4$bps.
The holder exchanged $0.153$ units of his ETH at a price of $0.018876$ BTC per
$1$ ETH. This means the holder ended with $0.153\cdot 0.018876\cdot
0.99976=0.002887334873$ units of BTC. Note that if $0.002887334873$ BTC are to
be exchanged for some other coin, only $0.00288733$ could be converted,
leaving $0.000000004873$ BTC residue.
From the BTC holder’s perspective: Since BTC is the quote asset and the “sell”
flag is on, this means the BTC holder provided the liquidity for the trade and
his price was met by the seller, thus paying a liquidity making fee of
$1.2$bps. The BTC holder exchanged $0.153\cdot 0.018876=0.002888028$ units of
BTC at a price of $0.018876$ BTC per $1$ ETH, while paying $1.2$bps fee,
resulting in $(0.002888028/0.018876)\cdot 0.99988=0.15298164$ ETH.
From Binance’s perspective: They collect $2.4$bps of the BTC proceeds from the
ETH holder, i.e., $0.00024\cdot 0.002888028\approx 6.93\cdot 10^{-7}$ units of
BTC. If the ETH holder also holds BNB, then this amount is actually converted
to BNB terms (using an average of recent exchange ratios between BTC and BNB),
and deducted from the BNB balance. If the ETH holder does not own BNB, then
$6.93\cdot 10^{-7}$ BTC are deducted from the proceeds. In addition, Binance
also collect $1.2$bps of the ETH proceeds from the BTC holder, i.e.,
$0.153\cdot 0.00012\approx 18.36\cdot 10^{-6}$ units of ETH. If the BTC holder
also holds BNB, this amount is collected in BNB terms as well. Note that if
the seller/buyer did not hold BNB, the fee would have been higher. In our
analysis, we assumed arbitrageurs operate at the lowest fee level.
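As a sanity check, the fee arithmetic of this example can be reproduced in a few lines of Python (a sketch assuming the lowest fee tier of $2.4$bps taker / $1.2$bps maker used throughout our analysis):

```python
TAKER_FEE = 0.00024  # 2.4 bps, lowest tier
MAKER_FEE = 0.00012  # 1.2 bps, lowest tier

price = 0.018876     # ETH/BTC trade price, in BTC per ETH
qty = 0.153          # trade quantity, quoted in the base asset (ETH)

# ETH holder (liquidity taker, "sell" flag on): receives BTC net of the fee.
btc_proceeds = qty * price * (1 - TAKER_FEE)  # = 0.002887334873... BTC

# BTC holder (liquidity maker): receives ETH net of the fee.
eth_proceeds = qty * (1 - MAKER_FEE)          # = 0.15298164 ETH

print(btc_proceeds, eth_proceeds)
```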
#### V-A2 Identifying Equal Quantities in Triangular Arbitrage Sequences
We identify sequences of trades, $c_{1}\mapsto c_{2}\mapsto c_{3}\mapsto
c_{1}$, where the same quantity is passed. The quantity is quoted in the base
asset and depends on whether $c_{1}/c_{2}$ or $c_{2}/c_{1}$ is listed, whether
$c_{2}/c_{3}$ or $c_{3}/c_{2}$ is listed and whether $c_{1}/c_{3}$ or
$c_{3}/c_{1}$ is listed. When matching quantities between trades of two pairs,
having a common coin, $c_{1}\mapsto c_{2}$ and $c_{2}\mapsto c_{3}$, we
translate the quantities to the common coin’s terms, $c_{2}$, and check if
they are equal, up to fees and residue resulting from the trade. In Table IV,
we describe the translation process, based on different listing scenarios in
Binance.
Binance listing | Matching condition
---|---
$c_{2}/c_{3}$, $c_{1}/c_{2}$ | $q_{12}p_{12}-q_{23}=\text{fee + residue}$
$c_{2}/c_{3}$, $c_{2}/c_{1}$ | $q_{12}-q_{23}=\text{fee + residue}$
$c_{3}/c_{2}$, $c_{1}/c_{2}$ | $q_{12}p_{12}-q_{23}p_{23}=\text{fee + residue}$
$c_{3}/c_{2}$, $c_{2}/c_{1}$ | $q_{12}-q_{23}p_{23}=\text{fee + residue}$
Table IV: Matching equal quantities in conversions with common coin. $q_{ij}$
and $p_{ij}$ are the quantities and prices quoted in the trade data.
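The matching conditions of Table IV reduce to a single test once both quantities are translated into the common coin $c_{2}$. A sketch (function and parameter names are illustrative; the tolerance eps absorbs fees and residue):

```python
def quantities_match(q12, p12, q23, p23,
                     c1_is_base_first, c2_is_base_second, eps=1e-6):
    """Compare the two legs' quantities in units of the common coin c2.

    c1_is_base_first:  True if the first leg is listed as c1/c2
                       (q12 is in c1, so q12 * p12 is in c2); False for c2/c1.
    c2_is_base_second: True if the second leg is listed as c2/c3
                       (q23 is already in c2); False for c3/c2
                       (q23 is in c3, so q23 * p23 is in c2).
    """
    first_in_c2 = q12 * p12 if c1_is_base_first else q12
    second_in_c2 = q23 if c2_is_base_second else q23 * p23
    return abs(first_in_c2 - second_in_c2) <= eps

# Toy check: both legs move ~0.0028873 units of the common coin c2.
print(quantities_match(0.153, 0.018876, 0.00288733, 1.0, True, True))
```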
#### V-A3 Identifying Trade Latency in Triangular Arbitrage Sequences
When a triangular arbitrage opportunity presents itself, arbitrageurs compete
amongst each other to be the first to execute the trade sequences. There are
three trades to be executed, $c_{1}\mapsto c_{2}$, $c_{2}\mapsto c_{3}$ and
$c_{3}\mapsto c_{1}$. Since Binance does not support batch trading, an
arbitrageur will have to send three limit orders with prices
$p_{12},p_{23},p_{31}$ and quantities $q_{12},q_{23},q_{31}$. These quantities
are equal in the conversion process, in the sense explained previously and the
prices match the prices quoted in the order book. To ensure that trades are
received in their original order, the arbitrageur will take into account
network latency and wait a small amount of time between sending consecutive
orders. Analysis of the trade data suggests that the average latency between
consecutive trades is $20$ ms with a standard deviation of $15$ ms. Around
$95\%$ of trades with matching quantities are executed within $10$ms-$30$ms of
each other. Therefore, clustering trades using longer latency does not have
material impact on the number of trades identified. We find that using $\Delta
t\approx 50$ms as an approximation for arbitrage latency between consecutive
trades maximizes precision while preserving recall.
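The pairing of consecutive legs can then be sketched as one sweep over two time-sorted trade lists (field names are illustrative; q_common denotes a trade's quantity expressed in the coin the two legs share, per Table IV):

```python
def chain_legs(leg_a, leg_b, dt_ms=50, eps=1e-6):
    """Pair each trade of leg_a with the first trade of leg_b executed
    within (0, dt_ms] ms after it that moves a matching quantity.
    Trades are dicts with 'date' (epoch ms) and 'q_common', sorted by time;
    a triangular sequence is a chain over legs 1-2 chained again with leg 3."""
    pairs, j = [], 0
    for a in leg_a:
        # Advance to the first leg_b trade strictly after trade a.
        while j < len(leg_b) and leg_b[j]["date"] <= a["date"]:
            j += 1
        if (j < len(leg_b)
                and leg_b[j]["date"] - a["date"] <= dt_ms
                and abs(leg_b[j]["q_common"] - a["q_common"]) <= eps):
            pairs.append((a, leg_b[j]))
    return pairs
```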
We found that less than $0.01\%$ of triangular arbitrage sequences contain a
liquidity-making trade. We believe this behavior is caused by Binance’s fee
model, which charges traders a commission even for liquidity making trades. If
traders used liquidity-making orders, they would need to pay the fee in case
they were filled. At that point, it is not guaranteed that an arbitrage
opportunity will exist, while the fee is already paid and exposure to what is
mostly an illiquid coin is already established. To further enhance our
precision, we require arbitrage trade sequences to be all liquidity-taking.
Furthermore, we found a surprisingly low number of triangular arbitrage
trades, 1,381,928 in total, accounting for $0.24\%$ of the total number of trades.
However, in the course of clustering triangular arbitrage trades, we witnessed
a much larger number of partial arbitrage sequences, 20,892,825, or $2.71\%$
of total trades, where traders executed the first two trades, $c_{1}\mapsto
c_{2}$ and $c_{2}\mapsto c_{3}$, but did not execute the third trade,
$c_{3}\mapsto c_{1}$. Executing the first two trades effectively converts
$c_{1}$ to $c_{3}$ at an exchange ratio of $p_{12}\cdot p_{23}$, minus fees
and residue. Interestingly, $95\%$ of the time these trades resulted in an
exchange ratio that was favorable to the exchange ratio of the direct trade
$c_{1}\mapsto c_{3}$. We call this trading strategy indirect internal
conversions and explain below in more details what is means to have a
“favorable” rate.
We refine our methodology to identify indirect internal conversions and study
the root cause of unprofitable instances. We elaborate on the risks associated
with this trading strategy in the discussion section.
### V-B Discovering Indirect Internal Conversion Attempts
We refined our original methodology for discovering triangular arbitrage
sequences by relaxing the constraint for the third trade to be executed. We
identify equal quantities in the first two trades in the same way as before.
To determine the trade latency, we empirically explored different time
constraints. $93\%$ of trades with equal quantities have a latency between
$10$ms and $100$ms. Latency lower than $10$ms gives poor recall, with less
than $5\%$ of total attempts. Latency greater than $100$ms only accounts for
$2\%$ of all attempts. The average latency is $22$ms with a standard deviation
of $15$ms. We find that, as before, $\Delta t\approx 50$ms is an approximation
that likely provides both high precision and recall for identifying indirect
conversion trading sequences. However, we lack ground truth to definitively
evaluate this part of our methodology.
#### V-B1 Determining Profitability of Internal Conversions
The first two trades of a triangular arbitrage sequence, $c_{1}\mapsto
c_{2}\mapsto c_{3}\mapsto c_{1}$ are $c_{1}\mapsto c_{2}$ and $c_{2}\mapsto
c_{3}$. Executing these two trades gives an indirect exchange ratio between
$c_{1}$ and $c_{3}$. If this exchange ratio, net of fees and residue, is
greater than the exchange ratio of $c_{1}\mapsto c_{3}$, then this conversion
offers a favorable exchange ratio to the direct one. We call such favorable
conversions indirect internal conversions. Since the exchange ratio between
$c_{1}$ and $c_{3}$ fluctuates, it is important to define the time at which it
is taken. We wish to approximate the trader’s view of the market upon executing
the internal conversion. Therefore, we take the exchange ratio at a short
period of time prior to the first trade’s execution time. As discussed above,
$50$ms is a good delay approximation for the trader’s view. Kaiko’s order book
data is given on a $60$-second basis, so it is unlikely that this $50$ms time
window falls within Kaiko’s data intervals. To approximate the order book’s
best bid/ask price at the time, we take the volume weighted average price
(VWAP) [11] of $c_{1}\mapsto c_{3}$, over the period of $50$ms, prior to the
execution time of the first trade. These trades tell us the best bid/ask level
at that time, and taking an average, weighted by volume, is a good
approximation of the trader’s view of the order book. However, it is possible
that within that time period, other participants posted bids and asks and then
cancelled them, giving the trader a completely different view of $c_{1}\mapsto
c_{3}$. In our analysis, we assume that this did not occur. If there were no
trades during that period and order book data is unavailable, we cannot
determine if the conversion is profitable and did not include such conversions
in our profitability analysis.
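A sketch of this VWAP fallback, assuming the direct pair's trades are available as time-sorted dicts (field names are illustrative):

```python
def vwap_before(direct_trades, t_exec, window_ms=50):
    """Volume-weighted average price of the direct pair over
    [t_exec - window_ms, t_exec); returns None when the window is empty,
    in which case the conversion is excluded from the profitability
    analysis."""
    window = [t for t in direct_trades
              if t_exec - window_ms <= t["date"] < t_exec]
    volume = sum(t["amount"] for t in window)
    if volume == 0:
        return None
    return sum(t["price"] * t["amount"] for t in window) / volume
```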
We found $26,603,038$ indirect conversions, which is $2.71\%$ of the total
number of trades. $95\%$ of the indirect conversions resulted in a favorable
exchange ratio. We hypothesized that unprofitable conversions occur when
multiple traders obtain the same intermediary coin $c_{2}$, and simultaneously
attempt to convert it to $c_{3}$. For example, one trader is looking to
convert $c_{1}\mapsto c_{2}\mapsto c_{3}$ for a favorable exchange ratio to
$c_{1}\mapsto c_{3}$, and a second trader is looking to convert $y\mapsto
c_{2}\mapsto c_{3}$ at a favorable exchange ratio to $y\mapsto c_{3}$ ($y$
could be the same coin as $c_{1}$). When both traders obtain $c_{2}$, there
could be a scenario where only one trader is able to convert $c_{2}$ to
$c_{3}$ at the best bid/ask level. This happens when the first one to execute
$c_{2}\mapsto c_{3}$ consumes the entire quantity of the best bid/ask and
causes the order book to change. The laggard in this case engages in a loss
mitigating strategy. One option is to convert $c_{2}$ to $c_{3}$ at a worse
exchange ratio than originally intended, potentially resulting in a conversion
that is worse than the direct one. Another option is to convert $c_{2}$ to
several other coins, with the goal of minimizing the potential losses from
unfavorable exchange ratios. We call the former strategy a full-exit strategy
and the latter partial-exit strategy.
To corroborate our hypothesis with the trade data, we refine our methodology
to cluster competing conversions and identify loss-mitigating exit strategies.
### V-C Clustering Competing Indirect Internal Conversion Attempts
Using our methodology to discover indirect conversions, we identified
conversions that were initiated around the same time and had an overlapping
intermediary coin. This is because competing conversions try to complete the
same second trade. Therefore, for a given second trade $c_{2}\mapsto c_{3}$,
we look at indirect conversion attempts of the form $x\mapsto c_{2}\mapsto
c_{3}$ that started within $100$ms of each other.
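In code, this clustering amounts to grouping conversions by their second leg and splitting each group on gaps larger than the window; a sketch with illustrative field names:

```python
from collections import defaultdict

def cluster_competing(conversions, window_ms=100):
    """Group conversions x -> c2 -> c3 that share the second leg (c2, c3)
    and start within window_ms of the cluster's first member.  Each
    conversion is a dict with 'second_pair' and 'start' (epoch ms)."""
    by_pair = defaultdict(list)
    for conv in conversions:
        by_pair[conv["second_pair"]].append(conv)

    clusters = []
    for group in by_pair.values():
        group.sort(key=lambda conv: conv["start"])
        current = [group[0]]
        for conv in group[1:]:
            if conv["start"] - current[0]["start"] <= window_ms:
                current.append(conv)
            else:
                clusters.append(current)
                current = [conv]
        clusters.append(current)
    # Only clusters with at least two members represent competition.
    return [c for c in clusters if len(c) > 1]
```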
Every cluster of competing conversions can contain winning conversions, i.e.,
indirect conversions that were completed at a rate favorable to the direct
rate, and losing conversions, i.e., conversions that were completed at a rate
unfavorable to the direct rate.
Losing conversions can have many causes, such as mistiming or an inaccurate
view of the order book. We believe that one of those reasons is that traders
who successfully completed the first trade, and failed to complete the second
trade, are unloading their $c_{2}$ coin at an unfavorable rate to avoid having
directional exposure to $c_{2}$.
#### V-C1 Identifying Loss-Mitigating Trading Strategies
For losing conversions, we wanted to see if the loss was the result of a
competing winning conversion that utilized all the capacity of $c_{2}\mapsto
c_{3}$. Determining whether a conversion utilized all capacity is impossible
without knowing the order book at that time. However, we can approximate it by
observing the next trade of $c_{2}\mapsto c_{3}$. Since we have the trade id,
which is monotonically increasing with the trades, we can tell whether the
next trade of $c_{2}\mapsto c_{3}$ was completed at the same price or not. If
it was not completed at the same price, it is likely that the previous trade
used up the capacity of the previous level. This is only a heuristic, as
traders can post and cancel orders at any time.
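This heuristic is compact in code (a sketch; pair_trades is assumed sorted by trade id):

```python
def consumed_best_level(pair_trades, i):
    """True when the i-th trade of a pair likely used up the whole best
    bid/ask level, i.e. the next trade of the same pair executed at a
    different price.  Only a heuristic: other participants may post or
    cancel orders in between."""
    return (i + 1 < len(pair_trades)
            and pair_trades[i + 1]["price"] != pair_trades[i]["price"])
```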
Loss-mitigating conversions are losing conversions where a winning conversion
in the same cluster used up the intermediary coin’s capacity.
By analyzing the trade data, we found that $17.7\%$ of all losing conversions
corresponded to loss-mitigating traders.
We identified two sub-strategies of loss-mitigating traders.
#### V-C2 Full-Exit Loss Mitigating Strategy
These are conversions that converted an equal quantity $Q_{1}$ from $x\mapsto
c_{2}$ and $c_{2}\mapsto c_{3}$, i.e., these are traders who converted all
their $c_{2}$ into one coin.
#### V-C3 Partial-Exit Loss Mitigating Strategy
These are conversions that converted a quantity $Q_{1}$ from $x\mapsto c_{2}$
at price $p_{12}$, but converted a lower quantity $Q_{2}<Q_{1}$ from
$c_{2}\mapsto c_{3}$, i.e., these are traders who unloaded some, but not all
of their $c_{2}$ into one coin. This strategy can be attributed to traders
solving the following minimization problem: Find a set of $k$ coins
$\\{d_{1},d_{2},\ldots d_{k}\\}$ having exchange ratios $\\{p_{1},p_{2},\ldots
p_{k}\\}$, and a set of $k$ quantities $\\{q_{1},q_{2},\ldots q_{k}\\}$, where
$\sum_{j=1}^{k}q_{j}=Q_{1}$ such that the following loss function is
minimized:
$\displaystyle
Loss=\sum_{j=1}^{k}q_{j}\left(\frac{p_{13}}{p_{12}p_{j}}\right)$
where $p_{13}$ is the direct exchange ratio of $x\mapsto c_{3}$. To detect
partial exits, we iterate over all different combinations of trades from
$c_{2}\mapsto c_{3}$ having quantities less than $Q_{1}$, such that they sum
exactly to $Q_{1}$, up to fees and residue.
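Since only a handful of candidate trades fall inside the clustering window, brute-force enumeration is practical; a sketch (the q_c2 field, holding each trade's quantity in $c_{2}$ terms, is illustrative):

```python
from itertools import combinations

def detect_partial_exits(candidates, q1, eps=1e-6):
    """Return all sets of at least two c2 -> d_j trades whose quantities
    (in c2 terms) sum to q1 up to fees and residue.  Exponential in the
    number of candidates, which is small inside the clustering window."""
    exits = []
    for r in range(2, len(candidates) + 1):
        for combo in combinations(candidates, r):
            if abs(sum(t["q_c2"] for t in combo) - q1) <= eps:
                exits.append(combo)
    return exits
```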
We found that $85\%$ of loss-mitigating strategies are partial exits and
$15\%$ are full exits. It also turns out that solving a loss minimization
problem is effective, as the average partial-exit loss is $21$bps while the
full-exit average loss is $25$bps.
## VI Results
### VI-A Volume
We found $26,603,038$ indirect conversions, which is $2.71\%$ of total trades.
We found $1,381,928$ triangular arbitrage sequences, accounting for $0.24\%$
of total trades.
The time series of the number of favorable indirect conversion attempts as a
percentage of the number of direct conversions, on a daily basis, is shown in
Figure 2. The number of favorable indirect conversion attempts, on a daily
basis, is shown in Figure 3. The number of triangular arbitrage attempts, on a
daily basis, is shown in Figure 4.
Figure 2: Percentage of the daily volume of direct conversion trades that is also done indirectly. Typically, $1\%-3\%$ of the daily volume of a direct conversion pair is also executed via indirect conversion sequences.
Figure 3: Daily number of indirect conversion attempts. From October $2017$ to October $2018$, the number of indirect conversions has been trending down, and since October $2018$ it has been trending up.
Figure 4: Daily number of triangular arbitrage attempts. From October $2017$ to April $2019$, the number of triangular arbitrage attempts has been trending down, and since April $2019$ it has been trending up, with a sharp spike in July $2019$.
### VI-B Latency
We found that indirect conversion sequences have, on average, $21.43$ms of
delay between the first trade and the second, with a standard deviation of
$13.7$ms. $78.94\%$ have a latency of $30$ms or less. Triangular arbitrage
sequences are faster than indirect conversions. This could be an indication
that this space is more competitive. We found that triangular arbitrage
sequences have, on average, $20.7$ms of delay between consecutive trades, with
a standard deviation of $11.04$ms. $85.43\%$ have a latency of $30$ms or less.
The latency statistics of triangular arbitrage sequences and indirect
conversions are shown in Table V.
Type | Metric | Value
---|---|---
Triangular Arbitrage | Latency Average | $20.7$ms
Triangular Arbitrage | Latency Stdev | $11.04$ms
Triangular Arbitrage | % Below $30$ms | $85.43\%$
Indirect Conversion | Latency Average | $21.43$ms
Indirect Conversion | Latency Stdev | $13.7$ms
Indirect Conversion | % Below $30$ms | $78.94\%$
Table V: Latency statistics for triangular arbitrage and indirect conversions.
The triangular arbitrage strategy exhibits lower latency than indirect
conversions, possibly a sign that this strategy is more competitive
The latency distribution, in ms, of indirect conversions and triangular
arbitrage is shown in Figure 5.
Figure 5: Left: Latency distribution, in ms, of indirect conversion trade
sequences. Right: Latency distribution, in ms, of triangular arbitrage trade
sequences. Indirect conversion trades exhibit higher latency with a fatter
tail, indicating that triangular arbitrage is more competitive.
### VI-C Profitability
We calculate the equal-weighted return and the return on capital for
triangular arbitrage trades and indirect conversions. The equal-weighted
return is the arithmetic average of returns. The return on capital is the
average of returns weighted by the quantity of base coin committed to each trade,
i.e., larger trades receive higher weight.
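The two metrics differ only in their weights; a minimal sketch:

```python
def return_metrics(returns_bps, capital):
    """Equal-weighted return and return on capital for per-conversion
    net returns (in bps) and the base-coin quantity committed to each."""
    equal_weighted = sum(returns_bps) / len(returns_bps)
    on_capital = (sum(r * q for r, q in zip(returns_bps, capital))
                  / sum(capital))
    return equal_weighted, on_capital
```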
Indirect conversion attempts were profitable $93.92\%$ of the time with an
equal-weighted net return of $11.8$bps. The return on capital for indirect
conversions is $22\%$ greater, at $14.4$bps, which means that traders
efficiently commit more coins to more profitable opportunities.
Triangular arbitrage trades were profitable $94.97\%$ of the time with an
equal-weighted net return of $9.3$bps. The return on capital for triangular
arbitrage sequences is $5\%$ greater, at $9.8$bps, which is slightly higher
than the equal-weighted return, but not significantly.
It appears that triangular arbitrage traders are unable to significantly
allocate more coins to more profitable opportunities. One potential
explanation is that the space is more crowded, or “arb-ed out,” than the
indirect conversions strategy.
### VI-D Loss-Mitigating Strategies
We saw $1,617,464$ instances of indirect conversion completed at an
unfavorable rate compared to the direct rate, or $6.08\%$ of the total number
of indirect conversions. $17.7\%$ of such conversions were loss-mitigating
trades, triggered by a competing indirect conversion that used all the
intermediary coin’s capacity. When this happened, there were $1.06$ competing
conversions on average. The highest number of competing conversions we saw in
one cluster was $5$.
In $15\%$ of such cases, the entire base quantity was unloaded into another
coin, i.e., they were full exits. The return on capital for the full exit
strategy was $-27.4$bps. $85\%$ of the time, the base coin quantity was
unloaded to several other coins, i.e., they were partial exits. On average,
the number of coins used to exit was $2.7$, and the return on capital was
$-23.8$bps. Intuitively, this shows that solving a loss minimization problem
mitigates losses by $3.6$bps compared to a full exit. Note, however, that a
full exit could potentially be the solution to the minimization problem as
well.
## VII Discussion
The existence of triangular arbitrage in traditional foreign exchange markets
is well documented and studied in academia. In centralized cryptocurrency
exchanges such as Binance, triangular arbitrage happens at a small scale.
However, we found that a different strategy is taking place, which converts
two coins through an intermediary coin, at a favorable rate to the direct
conversion rate. This strategy comes with a risk, however, that multiple
competing conversions occur at the same time, preventing slower ones from
completing their conversions. We saw that when this happens, traders engage in
a loss-mitigating strategy. We believe that the rationale for this strategy
stems from Binance’s fee structure, where there is a fee for market-making,
thus creating frictions for the triangular arbitrage strategy. We believe
arbitrageurs adapted to the fee structure by executing the indirect
conversions strategy. While triangular arbitrage increases the arbitrageur’s
holding in the base asset, indirect conversions simply provide a more
favorable ratio to the direct one. Participants who already have a stake in
the cryptocurrency ecosystem and wish to rebalance their portfolio might
choose to engage in this strategy.
## VIII Conclusion
We found that $0.24\%$ of daily trades on Binance can be attributed to
triangular arbitrage trades. We found that a different strategy is $11$ times
more prevalent, at $2.71\%$ of daily trades, which involves exchanging coins
through an intermediary coin at a favorable rate to the direct exchange rate.
We designed a methodology to measure such events and discovered that $93.92\%$
of the time the rate obtained from these conversions is $14.4$bps better than
the direct one. When it is not, $17.7\%$ of the time traders engage in loss-
mitigating strategies, and we identified two of them.
## Acknowledgements
The authors would like to thank Andrew Papanicolaou and David Yermack for
their feedback and suggestions to improve the quality of our manuscript. We
acknowledge funding support under National Science Foundation award 1844753.
## References
* [1] The $300m cable that will save traders milliseconds. https://www.telegraph.co.uk/technology/news/8753784/The-300m-cable-that-will-save-traders-milliseconds.html.
* [2] Binance fee schedule. https://www.binance.com/en/fee/schedule.
* [3] Binance markets page. https://www.binance.com/en/markets.
* [4] Binance official API docs. https://github.com/binance-exchange/binance-official-api-docs/blob/master/rest-api.md.
* [5] Coinmarketcap cryptocurrency exchanges trade volume. https://coinmarketcap.com/rankings/exchanges/.
* [6] Kaiko historical trade data. https://www.kaiko.com/pages/historical-data.
* [7] Order book definition. https://www.investopedia.com/terms/o/order-book.asp.
* [8] Quantitative trading summary. https://blog.headlandstech.com/2017/08/03/quantitative-trading-summary.
* [9] Ready, set, go west: Chicago-to-Tokyo trading is revving up. https://www.bloomberg.com/news/articles/2017-09-07/ready-set-go-west-chicago-to-tokyo-trading-is-revving-up.
* [10] Triangular arbitrage definition. https://www.investopedia.com/terms/t/triangulararbitrage.asp.
* [11] Volume weighted average price (VWAP) definition. https://www.investopedia.com/terms/v/vwap.asp.
* [12] Wayback machine. https://archive.org/web/.
* [13] What is ERC-20 and what does it mean for Ethereum. https://www.investopedia.com/news/what-erc20-and-what-does-it-mean-ethereum/.
* [14] Y. Aiba, N. Hatano, H. Takayasu, K. Marumo, and T. Shimizu. Triangular arbitrage as an interaction among foreign exchange rates. Physica A: Statistical Mechanics and its Applications, 310(3):467–479, 2002.
* [15] T. Björk. Arbitrage theory in continuous time. Oxford university press, 2009.
* [16] P. Daian, S. Goldfeder, T. Kell, Y. Li, X. Zhao, I. Bentov, L. Breidenbach, and A. Juels. Flash boys 2.0: Frontrunning, transaction reordering, and consensus instability in decentralized exchanges. CoRR, abs/1904.05234, 2019.
* [17] S. Drozdz, L. Minati, P. Oswiecimka, M. Stanuszek, and M. Watorek. Signatures of the crypto-currency market decoupling from the Forex. Future Internet, 11:154, 2019.
* [18] F. Wang, Y. Li, L. Liang, and K. Li. Triangular arbitrage in foreign exchange rate forecasting markets. In 2008 IEEE Congress on Evolutionary Computation (IEEE World Congress on Computational Intelligence), pages 2365–2371, June 2008.
* [19] D. Fenn, S. Howison, M. McDonald, S. Williams, and N. Johnson. The mirage of triangular arbitrage in the spot foreign exchange market. International Journal of Theoretical and Applied Finance, 12(8):1105–1123, 12 2009.
* [20] T. G. Fischer, C. Krauss, and A. Deinert. Statistical arbitrage in cryptocurrency markets. Journal of Risk and Financial Management, 12(1):31, 2019.
* [21] A. Kroeger and A. Sarkar. Law of one bitcoin price? Federal Reserve Bank of Philadelphia, 2017.
* [22] S. Krückeberg and P. Scholz. Decentralized efficiency? Arbitrage in Bitcoin markets. Arbitrage in Bitcoin Markets (March 29, 2018), 2018.
* [23] S. Krueger, M. Müller, A. Betzer, and C. Rokitta. Event-driven strategies in crypto assets. Available at SSRN 3363589, 2019.
* [24] I. Makarov and A. Schoar. Trading and arbitrage in cryptocurrency markets. Journal of Financial Economics, 135(2):293 – 319, 2020.
* [25] R. Radcliffe. Investment: Concepts, Analysis, Strategy. Addison-Wesley Educational Publishers, Inc., 1997.
* [26] R. Gebarowski, P. Oswiecimka, M. Watorek, and S. Drozdz. Detecting correlations and triangular arbitrage opportunities in the Forex by means of multifractal detrended cross-correlations analysis. Nonlinear Dynamics, 98(3):2349–2364, Nov 2019.
* [27] A. Shleifer and R. W. Vishny. The limits of arbitrage. The Journal of finance, 52(1):35–55, 1997.
## IX Appendix
In this appendix, we define notations that are specific to the market
microstructure of Binance and formally define triangular arbitrage and
indirect conversions.
Let $\mathbb{P}$ be the set of all pairs $x/y$ traded on Binance. Every pair
facilitates two conversions; from $x$ to $y$ and from $y$ to $x$, denoted by
$(x\mapsto y)$ and $(y\mapsto x)$, respectively.
###### Definition IX.1.
(exchange ratio) Let $\psi_{ij}$ be the proceeds of converting 1 unit of coin
$c_{i}$ to $c_{j}$. We call $\psi_{ij}$ the exchange ratio and it is given by
$\displaystyle\psi_{ij}=\begin{cases}\frac{1}{p_{c_{j}/c_{i}}^{ask}}&\text{if
$c_{j}/c_{i}\in\mathbb{P}$}\\\ p_{c_{i}/c_{j}}^{bid}&\text{if
$c_{i}/c_{j}\in\mathbb{P}$}\\\ \end{cases}$
###### Definition IX.2.
(pair capacity) Denote $\eta_{ij}$ the maximum quantity $c_{i}$ that can be
converted to $c_{j}$. We call $\eta_{ij}$ the capacity of $(c_{i}\mapsto
c_{j})$ and it is given by
$\displaystyle\eta_{ij}=\begin{cases}p_{c_{j}/c_{i}}^{ask}q_{c_{j}/c_{i}}^{ask}&\text{if
$c_{j}/c_{i}\in\mathbb{P}$}\\\ q_{c_{i}/c_{j}}^{bid}&\text{if
$c_{i}/c_{j}\in\mathbb{P}$}\\\ \end{cases}$
###### Definition IX.3.
(bid-ask spread) We write
$\displaystyle\Delta_{ij}=\begin{cases}p_{c_{j}/c_{i}}^{ask}-p_{c_{j}/c_{i}}^{bid}&\text{if
$c_{j}/c_{i}\in\mathbb{P}$}\\\
p_{c_{i}/c_{j}}^{ask}-p_{c_{i}/c_{j}}^{bid}&\text{if
$c_{i}/c_{j}\in\mathbb{P}$}\\\ \end{cases}$
the bid ask spread of pair $(c_{i}\mapsto c_{j})$
###### Definition IX.4.
(trade quantity) Suppose we wish to convert $q$ units of $c_{i}$ to $c_{j}$.
The trade quantity passed to the exchange as a parameter is denoted $T_{ij}(q)$ and
is given by
$\displaystyle T_{ij}(q)=\begin{cases}\frac{q}{p_{c_{j}/c_{i}}^{ask}}&\text{if
$c_{j}/c_{i}\in\mathbb{P}$}\\\ q&\text{if $c_{i}/c_{j}\in\mathbb{P}$}\\\
\end{cases}$
###### Definition IX.5.
(minimum trade lot size) Let $m_{ij}$ be the minimum amount of $c_{i}$ that
must be converted to $c_{j}$ in
$\left(\text{$c_{i}$}\mapsto\text{$c_{j}$}\right)$, i.e., when converting $q$
units of $c_{i}$ to $c_{j}$, $m_{ij}\leq q$
###### Definition IX.6.
(high-level parameters) The $h$-th order book exchange ratio and capacity are
denoted $\psi_{ij}^{h}$ and $\eta_{ij}^{h}$, respectively.
###### Definition IX.7.
(minimum price increment) Denote $dx_{ij}$ the minimum order book price
increment of $(c_{i}\mapsto c_{j})$.
###### Remark.
It holds that $dx_{ij}=dx_{ji}$.
In practice, when converting $q$ units of $c_{i}$ to $c_{j}$, the trade
quantity passed to the exchange as a parameter has to be a multiple of
$dx_{ij}$, therefore only $\left\lfloor\frac{T(q)}{dx_{ij}}\right\rfloor
dx_{ij}$ units are passed. Denote $\left\lfloor
x\right\rfloor_{y}\mathrel{\mathop{:}}=\left\lfloor\frac{x}{y}\right\rfloor y$
the rounding operation of $x$ to the nearest multiple of $y$.
###### Definition IX.8.
(residuals) Let $r_{ij}(q)$ be the quantity of $c_{i}$ left when converting
$q$ units of $c_{i}$ to $c_{j}$. We write
$\displaystyle r_{ij}(q)=\begin{cases}\left(T(q)-\left\lfloor
T(q)\right\rfloor_{dx_{ij}}\right)p_{c_{j}/c_{i}}^{ask}&\text{if
$c_{j}/c_{i}\in\mathbb{P}$}\\\ T(q)-\left\lfloor
T(q)\right\rfloor_{dx_{ij}}&\text{if $c_{i}/c_{j}\in\mathbb{P}$}\\\
\end{cases}$
Therefore, converting $x$ units of $c_{i}$ yields
$\psi_{ij}\left[x-r_{ij}(x)\right]$ units of $c_{j}$ and $r_{ij}(x)$ residual
units of $c_{i}$
### IX-A Cycle Arbitrage
The set of pairs $C=\\{(c_{1}\mapsto c_{2}),(c_{2}\mapsto
c_{3}),\ldots,(c_{n}\mapsto c_{n+1})\\}$ is called a cycle, if $c_{n+1}=c_{1}$
and $c_{j}\not\in\\{c_{1},\ldots\,c_{j-1}\\}$ for $1<j\leq n$. In the context
of cycles, we use the shortened notation $\psi_{i},\eta_{i},dx_{i},r_{i}$ when
referring to pairs of the form $(c_{i}\mapsto c_{i+1})$. A cycle with $n=3$ is
called a triangular sequence.
###### Definition IX.9.
The capacity of cycle $C=\\{(c_{1}\mapsto c_{2}),\ldots,(c_{n}\mapsto
c_{n+1})\\}$ is the maximum quantity of $c_{1}$ that can be converted through
its pairs.
The capacity of a cycle is given by
$\displaystyle Q=\min_{1\leq i\leq
n}\left\\{\eta_{i}\left(\prod_{k=1}^{i-1}\psi_{k}\right)^{-1}\right\\}$ (1)
The balance of $c_{i}$, denoted $q_{i}$, is given by the recurrence relation
$\displaystyle q_{1}=Q-r_{1}(Q),\qquad q_{i+1}=\psi_{i}\left[q_{i}-r_{i}(q_{i})\right],\qquad 1\leq i\leq n$
###### Remark.
$r_{i}\Big{(}x-r_{i}(x)\Big{)}=0$
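Equation (1) and the balance recurrence translate directly into code; a sketch in which psi and eta hold the per-pair exchange ratios and capacities and residue holds the residual functions $r_{i}$:

```python
def cycle_capacity(psi, eta):
    """Eq. (1): the largest quantity of c1 that can flow through the cycle."""
    capacity, prod = float("inf"), 1.0
    for psi_i, eta_i in zip(psi, eta):
        capacity = min(capacity, eta_i / prod)
        prod *= psi_i  # prod = psi_1 * ... * psi_i after this step
    return capacity

def balances(Q, psi, residue):
    """Balance recurrence: q_1 = Q - r_1(Q), q_{i+1} = psi_i*(q_i - r_i(q_i)).
    By the remark above, r_i vanishes on a quantity it has already rounded."""
    qs = [Q - residue[0](Q)]
    for psi_i, r_i in zip(psi, residue):
        qs.append(psi_i * (qs[-1] - r_i(qs[-1])))
    return qs
```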
### IX-B Binance Fee Structure
The fees paid on $(c_{i}\mapsto c_{i+1})$ are $fq_{i+1}b_{i+1}$ where
$f=5\cdot 10^{-4}$ and $b_{i+1}$ is the last traded price of
$(c_{i+1}\mapsto\text{BNB})$
$\displaystyle b_{i}=\begin{cases}\frac{1}{p_{\text{BNB}/c_{i}}}&\text{if
$\text{BNB}/c_{i}\in\mathbb{P}$}\\\ p_{c_{i}/\text{BNB}}&\text{if
$c_{i}/\text{BNB}\in\mathbb{P}$}\\\ \end{cases}$
The gain/loss in $c_{1}$ terms is given by
$\displaystyle
G=(q_{n+1}-q_{1})+\underbrace{\sum_{i=1}^{n}r_{i}(q_{i})\psi_{i1}}_{\text{residuals
$c_{1}$
value}}-\underbrace{f\sum_{i=1}^{n}q_{i+1}b_{i+1}\psi_{B1}}_{\text{fees
$c_{1}$ value}}$ (2)
Note that $G$ is a function of
$\\{\psi_{21},\ldots,\psi_{n1},\psi_{B1},b_{2},\ldots,b_{n+1}\\}$ as well as
$\psi_{1},\ldots\psi_{n}$.
###### Definition IX.10.
A cycle is called an arbitrage-free cycle if $G\leq 0$.
###### Definition IX.11.
A cycle is called an open cycle if $G>0$.
###### Remark.
One can argue that there are additional costs for converting the residuals to
$c_{1}$ and purchasing BNB tokens ahead of time to be able to pay fees, which
are not accounted for in (2). However, this can be done in infrequent bulk
trades. We make the assumption that small directional exposure to BNB and
residuals is negligible compared to the accumulated gains over a period of
time.
###### Remark 1.
If we assume zero residuals, i.e., $\forall i:r_{i}=0,$ then
$q_{i+1}=Q\prod_{j=1}^{i}\psi_{j}$ and (2) reduces to
$G=Q\left(\prod_{i=1}^{n}\psi_{i}-1\right)-f\sum_{i=1}^{n}Q\left(\prod_{j=1}^{i}\psi_{j}\right)b_{i+1}\psi_{B1}$
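Under the zero-residual assumption, the gain of Remark 1 is straightforward to evaluate; a sketch where b holds $b_{2},\ldots,b_{n+1}$ and psi_B1 converts BNB into $c_{1}$ terms:

```python
def cycle_gain_zero_residuals(Q, psi, b, psi_B1, f=5e-4):
    """Gain G of Remark 1: Q*(prod(psi) - 1) minus fees, in c1 terms."""
    prod, fee_sum = 1.0, 0.0
    for i in range(len(psi)):
        prod *= psi[i]                   # prod = psi_1 * ... * psi_{i+1}
        fee_sum += prod * b[i] * psi_B1  # fee weight of the (i+1)-th trade
    return Q * (prod - 1.0) - f * Q * fee_sum
```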
### IX-C Indirect Internal Conversions
The set of pairs $V=\\{(c_{1}\mapsto c_{2}),(c_{2}\mapsto
c_{3}),\ldots,(c_{n}\mapsto c_{n+1})\\}$ is called a conversion if
$c_{j}\not\in\\{c_{1},\ldots\,c_{j-1}\\}$ for $1<j\leq n+1$. In the context of
conversions, we use the notation $\psi_{i},\eta_{i},dx_{i},r_{i}$ when referring
to pairs of the form $(c_{i}\mapsto c_{i+1})$. When the intermediate pairs are
clear, we use the shortened notation $c_{1}\stackrel{{\scriptstyle
V}}{{\leadsto}}c_{n+1}$.
###### Definition IX.12.
The capacity of conversion $V=\\{(c_{1}\mapsto c_{2}),\ldots,(c_{n}\mapsto
c_{n+1})\\}$ is the maximum quantity of $c_{1}$ that can be converted through
its pairs and is defined similarly to the capacity of a cycle.
###### Definition IX.13.
(conversion proceeds) Let $c_{1}\stackrel{{\scriptstyle
V}}{{\leadsto}}c_{n+1}$ be a conversion from $c_{1}$ to $c_{n+1}$ with
capacity $Q$. The conversion proceeds of $q\leq Q$ units of $c_{1}$ is the
quantity of $c_{n+1}$ that results from converting through the pairs of $V$,
accounting for residuals and fees, i.e.,
$\displaystyle Pr(q,V)=q_{n+1}+\underbrace{\sum_{i=1}^{n}r_{i}(q_{i})\psi_{i(n+1)}}_{\text{residuals $c_{n+1}$ value}}-\underbrace{f\sum_{i=1}^{n}q_{i+1}b_{i+1}\psi_{B(n+1)}}_{\text{fees $c_{n+1}$ value}}$
###### Definition IX.14.
(profitable conversions) Let $x\stackrel{{\scriptstyle V_{1}}}{{\leadsto}}y$
and $x\stackrel{{\scriptstyle V_{2}}}{{\leadsto}}y$ be two conversions from
$x$ to $y$, with capacities $Q_{1}$ and $Q_{2}$, respectively. Let
$Q=\min\{Q_{1},Q_{2}\}$. We say that $V_{1}$ is a profitable conversion
w.r.t. $V_{2}$ if the proceeds of conversion $V_{1}$ are greater than the
proceeds of conversion $V_{2}$, i.e., $Pr(Q,V_{1})>Pr(Q,V_{2})$.
### IX-D Binance Order Types
Binance supports the following order types:
1. LIMIT - an order to buy/sell a pair at a specified price; it can only be executed at that price (or better) and is not guaranteed to execute.
2. MARKET - an order to buy/sell a pair at the best current market price, i.e., the lowest ask or highest bid.
3. STOP_LOSS_LIMIT - an order to buy (sell) a pair once its price exceeds (drops below) the specified price. In contrast to a LIMIT order, the price should be above (below) the lowest ask (highest bid). The execution price is guaranteed to be the specified price.
4. STOP_LOSS - same as STOP_LOSS_LIMIT, but when the price threshold is breached a MARKET order is executed and the execution price is not guaranteed.
5. TAKE_PROFIT_LIMIT - equivalent to a LIMIT order.
6. TAKE_PROFIT - automatically places a MARKET order when the specified price level is met.
7. LIMIT_MAKER - a LIMIT order that is rejected if it would immediately match and trade as a taker.
8. ICEBERG - an order used for large quantities; it is automatically broken down into multiple LIMIT orders with different prices. The goal is to hide the actual order quantity and prevent trend-followers from unfavorably moving the price.
# Stochastic dynamics of dissolving active particles
Alexander Chamolly and Eric Lauga, Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Wilberforce Road, Cambridge CB3 0WA, United Kingdom
###### Abstract
The design of artificial microswimmers has generated significant research
interest in recent years, owing to their promise in applications such as nanomotors and
targeted drug delivery. However, many current designs suffer from a common
problem, namely that the swimmers remain in the fluid indefinitely, posing risks of
clogging and damage. Inspired by recently proposed experimental designs, we
investigate mathematically the dynamics of degradable active particles. We
develop and compare two distinct chemical models for the decay of a swimmer,
taking into account the material composition and nature of the chemical or
enzymatic reaction at its surface. These include a model for dissolution
without a reaction, as well as models for a reacting swimmer studied in the
limit of large and small Damköhler number. A new dimensionless parameter
emerges that allows the classification of colloids into ballistic and
diffusive type. Using this parameter, we perform an asymptotic analysis to
derive expressions for colloid lifetimes and their total mean-squared
displacement from release and validate these by numerical Monte Carlo
simulations of the associated Langevin dynamics. Supported by general scaling
relationships, our theoretical results provide new insight into the
experimental applicability of a wide range of designs for degradable active
colloids.
## I Introduction
In recent years, scientists from a wide variety of different fields have given
considerable attention to the subject of synthetic microswimmers. This focus
in research is no coincidence, as such colloids show great promise in
biomedical and engineering applications Wang and Gao (2012); Wang _et al._
(2013); Nelson _et al._ (2010). The design of autonomous swimmers in
particular has received significant theoretical and experimental attention
Elgeti _et al._ (2015); Moran and Posner (2017). In an effort to exploit the
peculiarities of the associated low-Reynolds number hydrodynamics Purcell
(1977), many different propulsion mechanisms have been invented. These include
self-phoretic propulsion, such as chemophoresis Michelin and Lauga (2014);
Golestanian _et al._ (2007); Brady (2011); Walther and Mueller (2013) and
electrophoresis Ebbens _et al._ (2014); Paxton _et al._ (2006); Moran and
Posner (2011), as well as ultrasound propulsion Gallino _et al._ (2018); Mou
_et al._ (2015); Wang _et al._ (2012), bubble propulsion Gibbs and Zhao
(2009); Wang and Wu (2014) and magnetic propulsion Zhang _et al._ (2009);
Ghosh and Fischer (2009).
Despite this remarkable progress, common experimental designs still need to be
improved in order to be suitable for sensitive applications, such as non-
invasive medicine. Next to potential toxicity of swimmer components or their
fuel Gao _et al._ (2015), the question of waste disposal remains largely
open. This can be a serious problem, since artificial micron sized particles
in the blood stream have the potential to cause clogging Bächer _et al._
(2017); Sauret _et al._ (2018); Fogelson and Neeves (2015) and may thus pose
a significant health risk Nesbitt _et al._ (2009); Fogelson and Neeves
(2015). It is therefore essential to develop designs for microswimmers that
degrade after fulfilling their purpose.
Very recently, novel experimental designs have begun to address these issues.
Examples of such colloids include non-toxic magnesium-based bubble propelled
swimmers Chen _et al._ (2018) suitable for aqueous environments, as well as
other kinds of inorganic compositions driven by reactions in either acidic or
alkaline environments Chen _et al._ (2016). More designs have been proposed
using organic compounds that may be 3D-printed Wang _et al._ (2018) or that
self-assemble into nanomotors Tu _et al._ (2017).
These experimental advances raise new theoretical questions. While the
dynamics of classical non-dissolving colloids have been studied extensively,
the time-evolution of colloid size modifies its stochastic behaviour, and new
quantities characterising its physics emerge. The purpose of this paper is
therefore to provide theoretical answers to two fundamental questions. First,
we examine which material and environmental parameters determine the lifetime
of a dissolving spherical microswimmer. Second, we study the influence of
dissolution on the stochastic behaviour of both passive and self-propelled
colloids. Here, a new dimensionless quantity arises which splits microswimmers
into two categories: those that are subject to sufficient amounts of thermal
noise during their life time to evolve diffusively, and those that exhibit
near-ballistic trajectories that may be exploited for delivery applications.
We show that both scenarios may enter for realistic values of the material and
environmental parameters. Knowledge of these and their scaling relations is
thus essential for the application-specific engineering of degradable
microswimmer designs.
The structure of this paper is as follows. We begin by presenting two
theoretical models for the dissolution process in §II, one suitable for
designs in which the dissolution process is not driven by a reaction with a
fuel in the solvent (such as dissolution by hydrogen bonding), and one for
swimmers whose matrix is decomposed by means of a reaction (chemical or
enzymatic). For further analysis the latter case is considered in the two
limits of slow and fast reaction, the former corresponding to a fixed material
flux boundary condition. In all these models we find expressions for the time
dependence of the swimmer size, as well as their total lifetime in terms of
the essential physical parameters. We present the necessary modification to
classical Brownian motion in §III, and derive expressions for the passive mean
squared displacement of not self-propelling colloids. Based on this, we next
derive corresponding expressions for active motion in §IV and validate our
results numerically. Finally we discuss the implications of our research on
future studies in §V.
## II Dissolution models
Inspired by recent experimental realisations, we propose two models for the
dissolution of a spherical colloid based on different possibilities for the
boundary conditions at its surface. Specifically, we distinguish between the
case in which dissolution occurs through binding colloid material to fluid
molecules (for example, the case of ionic dissolution in water), which we call
non-reacting, and the case of dissolution through a chemical or enzymatic
reaction that consumes a fuel. In the latter scenario we distinguish further
between the limits of slow and fast reaction, and discuss their physical
implications.
As a preamble, we note that, unlike geophysical melting processes Woods
(1992), enthalpy plays no role in the dissolution processes considered in our
paper. This means the Stefan boundary condition does not apply and the
dynamics we derived is different from e.g. the dissolution of ice crystals in
water. While the general dynamics of diffusive dissolution have been
considered in the geophysical literature Zhang _et al._ (1989), there has to
the best of our knowledge been no study that derived the asymptotic solutions
we compute below. This is likely due to the dominance of convection driven
processes on relevant geophysical scales that require different modelling Kerr
(1995).
### II.1 Non-reacting swimmer
Figure 1: Schematic presentation of non-reacting dissolution dynamics. The
matrix of the swimmer consists of a substance that dissolves by bonding to the
fluid (thus acting as a solvent). Near the boundary, solute is present at a
saturation concentration $c_{0}$ and subject to advective-diffusive transport
in the bulk. Dissolution emerges through maintaining a normal concentration
gradient at the swimmer surface.
In our first model, we assume that the colloidal particle is composed of a
material that dissolves in the surrounding fluid through bonding of solute
colloid material to fluid molecules, as illustrated schematically in Fig. 1.
We consider this an appropriate model for non-reacting dissolution processes,
such as dissolution of many organic compounds as well as ionic salts in water.
In order to keep the mathematics simple we make the simplifying assumption
that only one species of solute is dissolved into the bulk. This allows us to
define the (mass) concentration, $c(\bm{r},t)$, of solute defined as the mass
of solute dissolved in a unit volume of solvent, with $c=c_{\infty}\geq 0$ far
away from the colloid. Note that this differs from the definition of molar
concentration common in chemistry by a factor equal to the molar mass of the
solute. We make this choice in order to avoid clutter that would arise from
the application of mass conservation below.
In this model and the following we assume the absence of any background flow
that would disturb the distribution of solute or reactant in the bulk fluid.
This assumption is of course violated for self-propelled particles moving
relative to a background fluid. However, we can use a scaling argument to show
that this does not affect our leading-order results. Since typical propulsion
velocities $U$ are expected to be on the order of a few microns per second,
initial colloid radii $R_{0}$ on the scale of microns Elgeti _et al._ (2015)
and for many ions in water at room temperature the solute diffusivity is
approximately $D_{s}\sim 10^{-9}$ m2/s Haynes (2014), the Péclet number
quantifying the relative important of advection to diffusion for the solute is
$\text{Pe}_{\text{sol}}=R_{0}U/D_{s}\sim 10^{-4}-10^{-3}$. This indicates that
advection of solute can be safely neglected. This remains true even when the
Péclet number associated with motion of the colloid,
$\text{Pe}_{\text{col}}=R_{0}U/D$ is large, since the particle is several
orders of magnitude larger than a solvent molecule and therefore has a much
smaller diffusivity. The same result applies to phoretic slip flows, which are
typically of the same strength as the propulsion velocity. In the context of
dissolution dynamics, the flows arising from propulsion can therefore be
neglected in the transport processes of solute and reactant.
We further assume that the swimmer has a homogeneous mass density, $\rho_{s}$,
and the fluid solvent a constant density, $\rho_{l}$. In general the density
of the solvent depends weakly on the amount of solute dissolved Haynes (2014).
However, we will soon develop an asymptotic analysis based on the assumption
that the solubility is weak and therefore can neglect this effect. Finally, we
assume also that the swimmer remains spherical at all times, and that the
dissolution dynamics is independent of any self-propulsion mechanism or
background flow. Both these assumptions will be justified _a posteriori_ in
section §II.1.3. A brief discussion of the case of a partially dissolving
swimmer is included in our discussion in §V.
#### II.1.1 Mathematical model
We consider a spherically symmetric colloid of radius $R(t)$ with initial
condition $R(0)=R_{0}>0$. Near the boundary, there is chemical equilibrium
between solute attached to the swimmer surface and present in the fluid. In
this case the dissolution process is driven by removal (through diffusion) of
solute from a boundary layer into the bulk and subsequent replenishment from
the swimmer surface (Fig. 1). We model this effect by imposing the boundary
condition
$c(R(t),t)=c_{0}>c_{\infty},\quad t\geq 0,$ (1)
where $c_{0}$ is the saturation concentration of solute in the solvent. This
condition assumes that the boundary layer is negligibly thin and that the
surface reaches chemical equilibrium instantaneously, which may be justified
by noting that time scales of interest will be much larger than the molecular
collision time, $\tau_{MC}\approx 10^{-13}\text{ s}$ Haynes (2014). The other
condition we impose is the requirement that the solute is initially
distributed homogeneously in the bulk, i.e.
$c(r,0)=c_{\infty},\quad r>R_{0}.$ (2)
Conservation of solute at the boundary gives
$\displaystyle 4\pi R^{2}\rho_{s}\frac{dR}{dt}=-\text{(solute flux into the fluid)}=-\left(-D_{s}4\pi R^{2}\frac{\partial c}{\partial r}\bigg{\rvert}_{r=R}\right),$ (3)
and therefore
$\frac{dR}{dt}=\frac{D_{s}}{\rho_{s}}\frac{\partial c}{\partial
r}\bigg{\rvert}_{r=R},$ (4)
where $D_{s}$ is the diffusivity of solute in the solvent.
Furthermore, in the case of unequal densities we also get a non-zero fluid
flux at the boundary since by mass conservation there is equality
$-\dot{R}\rho_{s}=(-\dot{R}+{\bm{u}}\cdot{\hat{\bm{r}}})\rho_{l}$ (5)
and thus
${\bm{u}}\cdot{\hat{\bm{r}}}=\dot{R}\frac{\rho_{l}-\rho_{s}}{\rho_{l}},$ (6)
where $\hat{\bm{r}}$ denotes a unit vector in the outward radial direction.
For a self-propelled microscopic colloid in water the Reynolds number, defined
as the ratio of colloid radius times velocity divided by kinematic viscosity,
is typically on the order of $10^{-9}\ll 1$. Therefore the fluid dynamics obey
the incompressible Stokes equations,
$\mu\nabla^{2}\bm{u}=\nabla p,\quad\nabla\cdot\bm{u}=0,$ (7)
where $\mu$ is dynamic viscosity and $p$ is the pressure field. Solving these
with the boundary condition given in Eq. (6) at $r=R$ leads to the flow of a
point source
$\bm{u}=\dot{R}\frac{\rho_{l}-\rho_{s}}{\rho_{l}}\frac{R^{2}}{r^{2}}\hat{{\bm{r}}}.$
(8)
The transport equation for $c(r,t)$ is the standard advection-diffusion
equation
$\displaystyle\frac{\partial c}{\partial
t}+\nabla\cdot(c\bm{u})=D_{s}\nabla^{2}c.$ (9)
Using the result of Eq. (8) together with incompressibility and assuming
radial symmetry of the solute concentration, this becomes
$\displaystyle\frac{\partial c}{\partial
t}+\frac{\rho_{l}-\rho_{s}}{\rho_{l}}\frac{D_{s}}{\rho_{s}}\frac{R^{2}}{r^{2}}\frac{\partial
c}{\partial r}\bigg{\rvert}_{r=R}\frac{\partial c}{\partial
r}=D_{s}\left(\frac{\partial^{2}c}{\partial r^{2}}+\frac{2}{r}\frac{\partial
c}{\partial r}\right),$ (10)
Next we non-dimensionalise this transport equation using the scalings
$c^{*}=\frac{c-c_{\infty}}{c_{0}-c_{\infty}},\,R^{*}=\frac{R}{R_{0}},\,r^{*}=\frac{r}{R_{0}},\,t^{*}=\frac{D_{s}t}{R_{0}^{2}}.$
(11)
Substituting in Eq. (10) and dropping stars in what follows for notational
convenience, we obtain the colloid dynamics as solution to
$\frac{dR}{dt}=\alpha_{1}\frac{\partial c}{\partial r}\bigg{\rvert}_{r=R}$
(12)
with $c$ solution to
$\frac{\partial c}{\partial
t}+\frac{R^{2}}{r^{2}}(\alpha_{1}-\beta_{1})\frac{\partial c}{\partial
r}\bigg{\rvert}_{r=R}\frac{\partial c}{\partial
r}=\left(\frac{\partial^{2}c}{\partial r^{2}}+\frac{2}{r}\frac{\partial
c}{\partial r}\right),$ (13)
with dimensionless boundary conditions
$c(R(t),t)=1,\,\,t\geq 0,\quad{\rm and}\quad c(r,0)=0,\,\,r>1,$ (14)
where we have defined the two dimensionless parameters
$\alpha_{1}=\frac{c_{0}-c_{\infty}}{\rho_{s}},\quad\beta_{1}=\frac{c_{0}-c_{\infty}}{\rho_{l}}\cdot$
(15)
We note that despite a negligibly small solute Péclet number, it was necessary
to include an advective term due to volume conservation, whose relative
strength is given by $(\alpha_{1}-\beta_{1})$. It is therefore independent of
the Péclet number and its irrelevance at leading order will be only a
consequence of the weak solubility assumption. Only when there is no density
mismatch between colloid and fluid is this term identically zero. Furthermore,
the swimmer radius remains constant when the solvent is saturated with solute,
as may be expected intuitively.
#### II.1.2 Asymptotic solution
In order to make analytical progress, we make the assumptions that
$\alpha_{1},\beta_{1}\ll 1,$ (16)
which corresponds to a low-solubility limit for the colloid material. We can
then develop an asymptotic expansion to solve for $c$ and $R$. Here we will
only calculate the leading-order solution, but our setup allows for
calculations to arbitrarily high orders. We proceed by a rescaling of our
spatial coordinate as
$x=\frac{r}{R},\quad y(x,t)=xc(x,t),$ (17)
so that our system becomes
$\displaystyle R^{2}\frac{\partial y}{\partial t}+R\dot{R}y+(\alpha_{1}-\beta_{1})\left(\frac{1}{x^{2}}\frac{\partial y}{\partial x}-\frac{y}{x^{3}}\right)\left(\frac{\partial y}{\partial x}\bigg{\rvert}_{x=1}-1\right)=\frac{\partial^{2}y}{\partial x^{2}}$ (18)
and
$R^{2}=1+2\alpha_{1}\left(\int_{0}^{t}\frac{\partial y}{\partial
x}\bigg{\rvert}_{x=1}dt^{\prime}-t\right)$ (19)
with boundary conditions
$y(1,t)=1,\quad y(x,0)=0.$ (20)
The solution may be written as
$y(x,t;\alpha_{1},\beta_{1})=y_{0}(x,t)+\alpha_{1}y_{\alpha}(x,t)+\beta_{1}y_{\beta}(x,t)+o(\alpha_{1},\beta_{1}).$
(21)
The problem for $y_{0}$ reduces to the one-dimensional heat equation with
Dirichlet boundary conditions and its solution is well known to be
$y_{0}(x,t)=\text{erfc}\left(\frac{x-1}{2\sqrt{t}}\right),$ (22)
whence to leading order
$R^{2}=1-2\alpha_{1}\left(t+2\sqrt{\frac{t}{\pi}}\right),$ (23)
or, after reinserting dimensions, we obtain our desired result
$R(t)=R_{0}\sqrt{1-2\alpha_{1}\left(\frac{t}{t_{s}}+\frac{2}{\sqrt{\pi}}\sqrt{\frac{t}{t_{s}}}\right)}.$
(24)
where $t_{s}=R_{0}^{2}/D_{s}$ is the diffusive time scale for the solute. An
illustration of this decay, along with a comparison to the reacting model is
presented in Fig. 3. Denoting by $T_{d}$ the finite time at which the particle
disappears, and taking into account the order of terms we neglect, we can
deduce that
$\begin{split}T_{d}&=\frac{t_{s}}{2\alpha_{1}}\left(1-\sqrt{\frac{8}{\pi}}\sqrt{\alpha_{1}}+\mathcal{O}(\alpha_{1},\beta_{1})\right).\end{split}$
(25)
Therefore at leading order, the lifetime of the colloid scales inversely
proportional with the solubility and diffusivity of its material, but
quadratically with the initial colloid radius $R_{0}$. However, the correction
from the next-to-leading order term remains significant for $\alpha_{1}\gtrsim
10^{-3}$ due to its slow square-root like decay.
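As a practical illustration, the following is a minimal Python sketch evaluating the decay law of Eq. (24) and the lifetime of Eq. (25); the function names and interface are assumptions made for the example.

```python
import numpy as np

def radius_nonreacting(t, R0, alpha1, Ds):
    """Leading-order decay of Eq. (24); clipped to zero after dissolution."""
    ts = R0**2 / Ds  # solute diffusion time scale t_s
    arg = 1.0 - 2.0 * alpha1 * (t / ts + 2.0 / np.sqrt(np.pi) * np.sqrt(t / ts))
    return R0 * np.sqrt(np.maximum(arg, 0.0))

def lifetime_nonreacting(R0, alpha1, Ds):
    """Lifetime of Eq. (25), to next-to-leading order in alpha_1."""
    ts = R0**2 / Ds
    return ts / (2.0 * alpha1) * (1.0 - np.sqrt(8.0 / np.pi * alpha1))
```

For $R_{0}=1\,\mu$m, $D_{s}=10^{-9}$ m²/s and $\alpha_{1}=10^{-6}$ this returns a lifetime of roughly $5\times 10^{2}$ s, consistent with the order-of-magnitude estimate given in §II.1.3 below.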
#### II.1.3 Physical interpretation
The aim of this section is to provide some physical interpretation for Eq.
(24). For many ions in water at room temperature, the diffusivity is
approximately $D_{s}\sim 10^{-9}$ m2/s Haynes (2014). In the case of an
initially micron-sized colloid this gives
$t_{s}\sim 10^{-3}\,{\rm s}.$ (26)
The other (previously unknown) time scale in the problem is the swimmer
lifetime $T_{d}$. There is a separation of scales that is to leading order
inversely proportional to $\alpha_{1}$. In the specific example of calcium
carbonate with $\alpha_{1}\approx 10^{-6}$ Haynes (2014), we obtain
$T_{d}\sim 10^{3}\,{\rm s}\sim 10\text{ min},$ (27)
which is a conceivably desirable lifetime for a microswimmer.
The separation of scales has further consequences for the decay rate. For
$t\ll t_{s}$ we have $R^{2}\sim 1-4\sqrt{t\alpha_{1}^{2}/t_{s}\pi}$, while for
$t\gg t_{s}$ we obtain the behaviour $R^{2}\sim 1-2\alpha_{1}t/t_{s}$.
Therefore the particle size satisfies $R\sim\sqrt{1-2\alpha_{1}t/t_{s}}$
except for a short, transient period on the order of $t_{s}$. This feature may
be explained physically. Initially, the discontinuity in concentration at
$r=R$ causes a large concentration gradient and fast dissolution but on the
(fast) scale of solute diffusion the system relaxes to equilibrium in a
boundary layer of thickness $\sim\sqrt{D_{s}t_{s}}$, which is on the order of
the colloid size, $R_{0}$. From this point onwards the colloid is surrounded
by a cloud of solute in equilibrium and the process becomes quasi-static. At
leading order, the dissolution dynamics therefore reduces to steady diffusion.
This gives simultaneously justification to our assumption of sphericity, since
the diffusive boundary layer smooths out any surface inhomogeneities.
As an aside, we note while the dissolution process of microbubbles is driven
by capillary pressures Michelin _et al._ (2018), the $R\sim\sqrt{1-t}$
behaviour also emerges in the absence of surface tension, essentially also due
to the dominance of diffusive effects.
Finally, we point out that $\alpha_{1}$ and $\beta_{1}$ depend only on the
material chosen for the swimmer (and its abundance in the bulk fluid).
Unsurprisingly, only materials that are considered insoluble on the macroscale
yield appreciable microswimmer life times. Hence, together with fine tuning of
the initial radius $R_{0}$, full control of the dissolution dynamics can be
achieved through the microswimmer design.
### II.2 Dissolution through reaction
Figure 2: Schematic presentation of the molecular dynamics near the boundary
of a reacting colloid. In this example, motivated by experiments in Ref. Chen
_et al._ (2016), zinc is dissolved in acid forming zinc-ions and molecular
hydrogen. If $\text{Da}=0$, i.e. infinitely much H+ is present to sustain the
reaction, the dissolution rate is constant. If $\text{Da}>0$, the reaction
rate will depend on the amount of fuel present, but not on the amount of
product.
Artificial microswimmers are rarely composed solely of chemically inert
materials. Indeed, autophoretic swimmers often consume a fuel in the solvent,
like in the widely studied case of catalytic platinum swimmers splitting
hydrogen peroxide into water and oxygen Moran and Posner (2017). A sketch of
the process is illustrated in Fig. 2 in the specific case of zinc dissolving
in acid, as realised experimentally by Chen _et al._ (2016). An
analogous picture may be imagined for the case of biodegradation by enzymes.
A degradable autophoretic colloid might therefore consist of a reactant that
will then dissolve into the fluid. To this end, let us consider a fixed
reaction-rate boundary condition. It will be important to distinguish between
the concentration of fuel $c_{f}(\bm{r},t)$ and concentration of swimmer
substrate $c_{s}(\bm{r},t)$. For example, in the case of zinc, the fuel
concentration might be provided by hydrogen ions in acid, which relates their
concentration directly to the pH-value of the solvent, while the concentration
of substrate influences the dissolution rate through mass conservation.
Notation-wise, we will use the subscript $f$ to refer below to the fuel and
the subscript $s$ to the substrate.
#### II.2.1 Mathematical model
The mathematical development is similar to the non-reacting swimmer, with an
important change to the boundary conditions. Indeed, unlike Eq. (4) where the
concentration at the boundary was fixed, the boundary conditions for the
fields $c_{s}$ and $c_{f}$ are now given by
$-D_{s}{\bm{n}}\cdot\nabla c_{s}|_{R}=k_{s}c_{f},\quad-
D_{f}{\bm{n}}\cdot\nabla c_{f}|_{R}=-k_{f}c_{f},$ (28)
where $k_{s}$ and $k_{f}$ are the constant reaction rates for solute and fuel
respectively and $D_{f}$ the diffusivity of fuel in the solution. Mass
conservation for the colloidal particle leads to
$\frac{dR}{dt}=-\frac{k_{s}c_{f}(R)}{\rho_{s}}.$ (29)
Furthermore, we once again have conservation of fluid volume giving rise to a
source flow
$\bm{u}=\dot{R}(1-\rho_{s}/\rho_{l})\frac{R^{2}}{r^{2}}{\hat{\bm{r}}}.$ (30)
Similar to what was done above, we assume that the Péclet numbers associated
with the solute and the fuel dynamics are small, so that only volume
conservation gives rise to advective flows. We can then write the advection-
diffusion equation for $c_{f}$ as
$\frac{\partial c_{f}}{\partial
t}-(1-\rho_{s}/\rho_{l})\frac{k_{s}c_{f}(R)}{\rho_{s}}\frac{R^{2}}{r^{2}}\frac{\partial
c_{f}}{\partial r}=D_{f}\left(\frac{\partial^{2}c_{f}}{\partial
r^{2}}+\frac{2}{r}\frac{\partial c_{f}}{\partial r}\right).$ (31)
Introducing non-dimensionalised variables as
$c_{f}^{*}=\frac{c_{f}}{c_{f,\infty}},R^{*}=\frac{R}{R_{0}},r^{*}=\frac{r}{R_{0}},t^{*}=\frac{D_{f}t}{R_{0}^{2}},$
(32)
where $c_{f,\infty}$ is the mass concentration of fuel in the bulk, we may
substitute into Eqs. (29) and (31); dropping stars immediately, we find
$\frac{\partial c_{f}}{\partial t}-\text{Da}(\alpha_{2}-\beta_{2})c_{f}(R)\frac{R^{2}}{r^{2}}\frac{\partial c_{f}}{\partial r}=\frac{\partial^{2}c_{f}}{\partial r^{2}}+\frac{2}{r}\frac{\partial c_{f}}{\partial r},$ (33) $\frac{dR}{dt}=-\text{Da}\,\alpha_{2}c_{f}(R),$ (34)
with the boundary conditions
$\displaystyle c_{f}\to 1\quad\text{as }r\to\infty,\qquad\frac{\partial c_{f}}{\partial r}=\text{Da}\,c_{f}\quad\text{at }r=1,\qquad c_{f}(r,0)=1\quad\text{for }r>1,\qquad R(0)=1,$ (35)
where we have defined the three dimensionless numbers
$\text{Da}=\frac{R_{0}k_{f}}{D_{f}},\quad\alpha_{2}=\frac{c_{f,\infty}k_{s}}{\rho_{s}k_{f}},\quad\beta_{2}=\alpha_{2}\frac{\rho_{s}}{\rho_{l}}\cdot$
(36)
Here Da is a Damköhler number for the fuel, indicating the ratio between
reactive and diffusive fluxes, while $\alpha_{2}$ and $\beta_{2}$ may be
interpreted as dimensionless ratios comparing the mass of fuel consumed
against the mass of solute shed in the reaction.
Upon rescaling our coordinates according to
$x=\frac{r}{R},\quad y(x,t)=c_{f}x,$ (37)
our system becomes
$R^{2}\frac{\partial y}{\partial
t}+R\dot{R}y-\text{Da}(\alpha_{2}-\beta_{2})R\left(\frac{1}{x^{2}}\frac{\partial
y}{\partial x}-\frac{y}{x^{3}}\right)y(1,t)=\frac{\partial^{2}y}{\partial
x^{2}},$ (38)
and
$R=1-\text{Da}\alpha_{2}\int_{0}^{t}y(1,t^{\prime})dt^{\prime},$ (39)
with
$y(x,0)=1,\quad\frac{\partial y}{\partial x}(1,t)=\text{Da}y(1,t).$ (40)
From here, we can again proceed by means of an asymptotic expansion.
#### II.2.2 Asymptotic expansion
We next assume $\alpha_{2}\text{Da},\,\beta_{2}\text{Da}\ll 1$ and write the
solution as a power expansion
$\displaystyle y(x,t;\alpha_{2},\beta_{2};\text{Da})=y_{0}(x,t;\text{Da})+\alpha_{2}y_{\alpha}(x,t;\text{Da})+\beta_{2}y_{\beta}(x,t;\text{Da})+{\rm h.o.t.}$ (41)
The boundary condition in Eq. (40) constitutes a Robin problem and can be
solved by considering the quantity $\phi=y-\text{Da}^{-1}\partial y/\partial
x$ subject to Cauchy conditions Carslaw and Jaeger (1959). The solution for
$y_{0}$ is
$\displaystyle y_{0}(x,t;\text{Da})=\text{erf}\left(\frac{x-1}{2\sqrt{t}}\right)+e^{\text{Da}(x-1)+\text{Da}^{2}t}\,\text{erfc}\left(\frac{x-1}{2\sqrt{t}}+\text{Da}\sqrt{t}\right).$ (42)
It follows that
$y_{0}(1,t;\rm{Da})=e^{\text{Da}^{2}t}\text{erfc}\left(\text{Da}\sqrt{t}\right),$
(43)
and hence to leading order in $\alpha_{2}$,
$R(t)=1-2\alpha_{2}\sqrt{\frac{t}{\pi}}-\frac{\alpha_{2}}{\text{Da}}\left[e^{\text{Da}^{2}t}\text{erfc}\left(\text{Da}\sqrt{t}\right)-1\right].$
(44)
Upon reinserting dimensions we finally arrive at
$\displaystyle R(t)=R_{0}\left\{1-\frac{2\alpha_{2}}{\sqrt{\pi}}\sqrt{\frac{t}{t_{f}}}-\frac{\alpha_{2}}{\text{Da}}\left[e^{\text{Da}^{2}t/t_{f}}\,\text{erfc}\left(\text{Da}\sqrt{\frac{t}{t_{f}}}\right)-1\right]\right\}.$ (45)
where $t_{f}=R_{0}^{2}/D_{f}$ is the diffusive time scale for the fuel.
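A short numerical sketch of Eq. (45) is given below; SciPy's scaled complementary error function erfcx, with $\text{erfcx}(x)=e^{x^{2}}\text{erfc}(x)$, is used to avoid overflow of the exponential factor at large Da. The function name and interface are assumptions made for the example.

```python
import numpy as np
from scipy.special import erfcx  # erfcx(x) = exp(x**2) * erfc(x)

def radius_reacting(t, R0, alpha2, Da, Df):
    """Leading-order decay of Eq. (45), valid while alpha_2 * Da << 1."""
    s = t * Df / R0**2  # dimensionless time t / t_f
    R = R0 * (1.0
              - 2.0 * alpha2 * np.sqrt(s / np.pi)
              - alpha2 / Da * (erfcx(Da * np.sqrt(s)) - 1.0))
    return np.maximum(R, 0.0)
```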
#### II.2.3 Slow reaction limit (fixed solute flux)
Inspired by a study of boundary conditions in the context of finite Péclet-
number propulsion in Ref. Michelin and Lauga (2014), we may consider
separately the limits $\text{Da}\to 0$ and $\text{Da}\to\infty$. Each of these
limits will lead to a different model that we will consider in the remainder
of this paper.
For small Damköhler number, we find
$\displaystyle R(t)=R_{0}\left[1-\frac{\alpha_{2}\text{Da}}{t_{f}}t+\mathcal{O}\left(\text{Da}^{2}\left(\frac{t}{t_{f}}\right)^{3/2}\right)\right],\quad\text{as Da}\to 0,\quad\frac{t}{t_{f}}\lesssim\text{Da}^{-2}.$
When $\text{Da}=0$, no reaction takes place and the radius of the colloid
remains constant. At next to leading order we have linear decay, so the
lifetime $T_{d}$ is
$T_{d}=\frac{t_{f}}{\alpha_{2}}\text{Da}^{-1}=\frac{R_{0}\rho_{s}}{c_{f,\infty}k_{s}}\quad(\text{Da}\to
0),$ (46)
which is consistent with the asymptotic expansion to this order. Thus we
arrive at a model for the dissolution with a constant solute flux. We note the
different scaling compared to the non-reacting model where the lifetime scaled
as $T_{d}\sim R_{0}^{2}$. This is indicative of the absence of diffusion in
this limit. Note that the model can be recovered from simply applying mass
conservation to a flux boundary condition of the form
$-D_{s}\frac{\partial c}{\partial r}\bigg{|}_{r=R(t)}=c_{f,\infty}k_{s},$ (47)
which shows that the flux is equal to $c_{f,\infty}k_{s}$.
#### II.2.4 Fast reaction limit
Conversely, as $\text{Da}\to\infty$ (still with $\alpha_{2}\text{Da}\ll 1$),
we find that
$\displaystyle R(t)=R_{0}\left\{1-\frac{2\alpha_{2}}{\sqrt{\pi}}\sqrt{\frac{t}{t_{f}}}+\mathcal{O}\left(\text{Da}^{-1}\right)\right\},\quad\text{as Da}\to\infty,\quad\frac{t}{t_{f}}\gtrsim\text{Da}^{-2}.$
In this limit the reaction is infinitely fast, so the boundary condition on
the fuel effectively reduces to instantaneous depletion, $c_{f}(R,t)=0$, and
the dissolution rate is limited by the diffusive flux of fuel from the bulk.
Correspondingly the lifetime $T_{d}$ in dimensional units is
$T_{d}=\frac{\pi}{4\alpha_{2}^{2}}\frac{R_{0}^{2}}{D_{f}}\quad(\text{Da}\to\infty,\alpha_{2}\text{Da}\ll
1),$ (48)
a result which is again consistent with the expansion. Apart from the
introduction of reaction rates, this result is qualitatively different from
the non-reacting swimmer insofar as the lifetime depends on the square of
swimmer density and reactant concentration at infinity, rather than being
inversely proportional to solubility. We remark that in the case of hydrogen
ions, the concentration $c_{f}$ is directly related to the pH value of the
solvent, which establishes an experimentally accessible relationship between
the pH and swimmer dissolution dynamics.
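The three decay laws are easy to compare on a common dimensionless time axis $s=t/T_{d}$; the sketch below reproduces the qualitative behaviour of Fig. 3, with $t_{s}/T_{d}=0.01$ assumed for the non-reacting curve as in the figure. The function names are hypothetical.

```python
import numpy as np

def R_nonreacting(s, eps=0.01):
    """R/R0 vs s = t/T_d for the non-reacting model, Eq. (24); eps = t_s/T_d."""
    tau_d = 1.0 / eps  # value of t/t_s at full dissolution
    a1 = 0.5 / (tau_d + 2.0 * np.sqrt(tau_d / np.pi))  # alpha_1 such that R(T_d) = 0
    tau = s * tau_d
    return np.sqrt(np.maximum(1.0 - 2.0 * a1 * (tau + 2.0 * np.sqrt(tau / np.pi)), 0.0))

def R_slow(s):
    """Linear decay of the slow reaction limit (Sec. II.2.3)."""
    return np.maximum(1.0 - s, 0.0)

def R_fast(s):
    """Square-root decay of the fast reaction limit (Sec. II.2.4)."""
    return np.maximum(1.0 - np.sqrt(np.maximum(s, 0.0)), 0.0)
```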
Figure 3: Comparison of the decay dynamics between the three models: decay of
the dimensionless colloid radius as a function of dimensionless time. (i) Non-
reacting (red solid line; $t_{d}/T_{d}=0.01$); (ii) Slow reaction (green
dashed line); (iii) Fast reaction (blue dash-dotted line).
In Fig. 3 we illustrate the different decay behaviour for our three models:
(i) Non-reacting (red solid line, with $t_{d}/T_{d}=0.01$); (ii) Slow reaction
(green dashed line); and (iii) Fast reaction (blue dash-dotted line). We note
for the non-reacting model the decay rate increases with time, whereas it is
constant for the slowly reacting, and decreasing for the fast reacting model.
In the following two sections, we will explore the important consequences this
has for the stochastic behaviour of dissolving microswimmers.
## III Passive dynamics of dissolving colloids
After developing three models for the dissolution of a spherical colloid, we
now ask what effect this reduction in size has on its fluctuating trajectory.
As will be shown, the mean squared displacement of a stochastic self-propelled
particle is given by the sum of the contributions from translational noise and
active motion. This allows us to split the analysis into the case of a passive
colloid with no intrinsic propulsion mechanism but with translational noise
and an active colloid with rotational but no translational diffusion. We treat
the former case in this section and consider the motion of self-propelled
particles in §IV.
### III.1 Mathematical model
The change in the dynamics of colloidal particles arises through the time
dependence of the translational diffusion coefficient, which is given by the
Stokes-Einstein relation Einstein (1905)
$D(t)=\frac{k_{B}T}{6\pi\mu R(t)}\equiv D_{0}\frac{R_{0}}{R(t)},$ (49)
where $k_{B}$ is Boltzmann’s constant, $T$ is absolute temperature and
$D_{0}\equiv D(0)=k_{B}T/6\pi\mu R_{0}$. In analogy with classical Brownian
motion, we consider the following overdamped Langevin equation for the
position of the passive colloidal particle, $\bm{r}(t)$,
$d\bm{r}=\sqrt{2D(t)}d\bm{W}.$ (50)
Classically, $\bm{W}(t)$ is white noise with the properties that
$\langle d\bm{W}\rangle={\bf{0}},\quad\langle
dW_{i}(t)dW_{j}(t^{\prime})\rangle=\delta_{ij}\delta(t-t^{\prime})dt,$ (51)
with brackets denoting ensemble averages. The right-hand side of Eq. (50)
therefore varies on two different time scales: the rate of change of $D$ and
the time scale of the molecular chaos $\tau_{MC}$ that gives rise to noise.
Typically, $\tau_{MC}=\mathcal{O}(10^{-13}s)$ Haynes (2014). The mathematical
assumption of $\delta$-correlated noise only holds true if $\tau_{MC}$ is very
small compared to the time scale of diffusion, which holds true for
microscopic colloids. However, since the rate of change of $D$ diverges as the
swimmer size tends to 0, this model is expected to break down at the very end of
the swimmer lifetime. In the case of the non-reacting model this singularity
is integrable and poses no problem, whereas for the reacting model we will
also include a physical discussion of the breakdown.
For an active self-propelled particle at velocity ${\bf U}(t)$, the right-hand
side of the Langevin equation Eq. (50) includes an additional term ${\bf
U}(t)dt$, which is deterministic in the sense that it is uncorrelated with
translational white noise (even if ${\bf U}(t)$ is subject to rotational
noise). A straightforward integration using the properties in Eq. (51) then
shows that the total mean squared displacement is given by the sum of active
and passive contributions,
$\langle r^{2}\rangle_{tot}=\langle r^{2}\rangle_{a}+\langle
r^{2}\rangle_{p},$ (52)
as claimed.
The stochastic dynamics in Eq. (50) gives rise to a Fokker-Planck equation for
the probability for the position of the particle, $P(\bm{r},t)$, as
$\frac{\partial P}{\partial t}=D(t)\nabla^{2}P.$ (53)
We can solve this by a rescaling of time, introducing $\tau(t)$ such that
$\tau=\int_{0}^{t}D(s)ds=D_{0}\int_{0}^{t}\frac{R_{0}}{R(s)}ds,$ (54)
which yields
$\frac{\partial\tilde{P}}{\partial\tau}=\nabla^{2}\tilde{P}.$ (55)
where $\tilde{P}(\bm{r},\tau)=P(\bm{r},t)$. In three spatial dimensions this
equation has a well known Gaussian solution corresponding to the initial
condition of a particle located at the origin,
$\tilde{P}(\bm{r},\tau)=\tilde{P}(r=|\bm{r}|,\tau)=\frac{1}{(4\pi\tau)^{3/2}}\exp\left(-\frac{r^{2}}{4\tau}\right).$
(56)
The first two moments are well known to be $\langle\bm{r}\rangle={\bf 0}$ and
$\langle{r}^{2}\rangle=6\tau$. The total passive mean squared displacement of
the particle in its lifetime, $\langle r^{2}\rangle_{p}\equiv\langle
r^{2}\rangle(T_{d})$, is therefore given by the integral
$\langle r^{2}\rangle_{p}=6D_{0}\int_{0}^{T_{d}}\frac{R_{0}}{R(t)}dt.$ (57)
Note that since $R\leq R_{0}$, the integral has a value larger than $T_{d}$.
Therefore dissolution always enhances passive diffusion. All that remains to
be done is to calculate the integral for each of our three models.
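Before specialising to each model, the integral in Eq. (57) can also be checked numerically; the sketch below assumes the radius history is available as a callable and uses a small cutoff radius R_min to regularise the integrand near full dissolution, anticipating the Stokes-Einstein breakdown discussed below. The interface is an assumption made for the example.

```python
from scipy.integrate import quad

def passive_msd(R_of_t, R0, D0, Td, R_min):
    """Numerical evaluation of Eq. (57): <r^2>_p = 6 D0 int_0^Td R0/R(t) dt.

    R_min > 0 caps the integrand near t = Td; for the non-reacting model the
    singularity is integrable, but the reacting models diverge logarithmically.
    """
    val, _ = quad(lambda t: R0 / max(R_of_t(t), R_min), 0.0, Td, limit=200)
    return 6.0 * D0 * val
```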
### III.2 Total root mean squared displacement
In the following we consider the solutions to Eq. (57). Bearing in mind the
order of terms we neglected in the derivation of Eq. (24), we can integrate
Eq. (57) directly to obtain the following result for the non-reacting model
$\langle{r}^{2}\rangle_{p}=6D_{0}\times\frac{t_{s}}{\alpha_{1}}\left(1-\sqrt{\frac{\pi}{2}}\sqrt{\alpha_{1}}+\mathcal{O}(\alpha_{1},\beta_{1})\right).$
(58)
Comparing with Eq. (25) we can see that at leading order in $\alpha_{1}$,
dissolution enhances the total mean squared displacement by a factor of two.
Through the scaling of $t_{s}$ with $R_{0}$ we also find that
$\langle{r}^{2}\rangle_{p}\sim R_{0}$. This may be tested easily in
experiments without affecting the other parameters. Perhaps surprisingly, this
also means that in contrast to fixed-size swimmers, the importance of passive
Brownian effects increases with swimmer size, since the smaller diffusivity is
overcompensated for by the longer life span. The scaling with $\alpha_{1}$ can
be explained the same way, as a colloid with small $\alpha_{1}$ decays slower,
lives longer and therefore travels further.
For the slow reaction model we can use Eq. (II.2.3) in the integration of Eq.
(57) to find
$\langle r^{2}\rangle(t)=6D_{0}\times
T_{d}\log\left(\frac{R_{0}}{R(t)}\right).$ (59)
This expression diverges logarithmically as $t\to T_{d}$. This should not be
taken as indicative of superdiffusion, but can be resolved by the breakdown of
the Stokes-Einstein relation below a certain colloid size. Past experiments
suggest this happens for colloids smaller than a few nanometres in diameter Li
(2009). Compared to an initial colloid size on the scale of a few microns,
this corresponds to 2 to 4 orders of magnitude. Since the divergence of the
mean squared displacement is logarithmic, this will give a total mean squared
displacement that is greater than that of a non-dissolving colloid by a factor
of $\mathcal{O}(1)-\mathcal{O}(10)$. Furthermore, since $D_{0}T_{d}$ is
independent of $R_{0}$ for this model, the contribution of passive Brownian
motion only depends weakly on the initial colloid size. This is in contrast
with the other models, and indicative of the absence of diffusion.
Finally, using Eq. (II.2.4) in Eq. (57) we obtain for the fast reaction limit
the result
$\langle{r}^{2}\rangle(t)=6D_{0}\times
2T_{d}\left(\log\left(\frac{R_{0}}{R(t)}\right)+\frac{R(t)}{R_{0}}-1\right).$
(60)
where again we have a logarithmic divergence as $t\to T_{d}$. Using previous
definitions we find that as in the non-reacting model
$\langle{r}^{2}\rangle_{p}\sim R_{0}$ (+ logarithmic corrections) and also
that $\langle{r}^{2}\rangle_{p}\sim\alpha_{2}^{-2}$. The passive mean squared
displacement therefore depends rather sensitively on the availability of fuel
for the reaction.
## IV Active motion of dissolving colloids
After examining the dynamics of passive particles, we now turn to the effect
of dissolution on self-propelled microswimmers. For the case of active
particles subject to rotational diffusion with coefficient $D_{r}$, it is well
known that self-propulsion at velocity $U$ gives rise to an effective enhanced
translational diffusivity Golestanian _et al._ (2007)
$D_{\text{eff}}=D+\frac{U^{2}}{6D_{r}},$ (61)
for times much longer than $D_{r}^{-1}$, the time scale of rotational
diffusion (i.e. in the limit $tD_{r}\gg 1$). On scales much shorter than this
the motion is instead ballistic, i.e. $\langle r^{2}\rangle\sim U^{2}t^{2}$.
Figure 4: 2D-projections of sample trajectories for different values of
$\gamma$. The colloids initially swim from left to right (see arrow) and
dissolve according to the non-reacting model with the same length scale and
lifetime.
In this new scenario however, an additional scale is introduced through the
swimmer lifetime, $T_{d}$. It is therefore vital to consider the dimensionless
quantity
$\gamma:=D_{r,0}T_{d},$ (62)
where we define $D_{r,0}=k_{B}T/8\pi\mu R_{0}^{3}$. If $\gamma\lesssim 1$,
then the particle disappears before displaying macroscopically diffusive
behaviour. Conversely, if $\gamma\gtrsim 1$ we expect trajectories that are
qualitatively similar to that of a classically diffusive colloid at long time
scales. The qualitative role of $\gamma$ is illustrated in Fig. 4 where we
observe three trajectories becoming more curly as time progresses, since
diffusivity increases as the swimmer dissolves. However, only colloids with
large values of $\gamma$ (here, $\gamma=10$) exist long enough for this effect
to become significant, giving rise to a macroscopically ‘diffusive’
trajectory. Conversely, for small $\gamma$ (here, $\gamma=0.1$) trajectories
appear macroscopically ‘ballistic’. Depending on the application, it may be
desirable to design swimmers that belong to either of these two regimes. In
water at room temperature we have $D_{r,0}^{-1}\approx
6(R_{0}/\mu\text{m})^{3}\text{ s}$ Haynes (2014), so depending on the initial
colloid size the threshold lifetime ranges from seconds to hours. Therefore
both regimes are conceivable for applications and thus relevant to study. We
proceed with the development of our theoretical framework to derive
expressions for the active mean squared displacement and present analytical
solutions for each model both as $\gamma\to 0$ and as $\gamma\to\infty$. We
then validate our theoretical results against numerical simulations of the
associated Langevin dynamics.
### IV.1 Mathematical model
In the rest of this section we assume that the colloid is subject to Langevin
dynamics as
$\displaystyle d\bm{r}=U\bm{e}\,dt,$ (63)
$\displaystyle d\bm{e}=-2D_{r}(t)\bm{e}\,dt+\sqrt{2D_{r}(t)}\,\bm{\Pi}(\bm{e})\cdot d\bm{W},$ (64)
to be understood in the Itô formulation of stochastic calculus. Here $U$ is
the particle self-propulsion speed, $\bm{e}$ the unit vector along the
direction of propulsion, and $\Pi_{ij}=\delta_{ij}-e_{i}e_{j}$. As is the case for a
wide range of phoretic swimmers Moran and Posner (2017), we assume the
velocity $U$ to be independent of the swimmer size. Moreover, we set $D=0$ to
isolate the effect of active diffusion, which generally exceeds that of
(regularised) passive diffusion discussed previously. Since both contribute
independently however, they may simply be added together if the total mean
squared displacement is desired. We also neglect the details of the propulsion
mechanism and possible interactions with our dissolution models.
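A minimal sketch of Langevin simulations of this type (as used for the Monte Carlo validation in §IV.2) is given below; the Euler-Maruyama step, the small-radius cutoff and the interface are assumptions made for the example, and $D=0$ as in the text.

```python
import numpy as np

def simulate_active(R_of_t, U, R0, Td, kBT, mu, dt, rng=None):
    """Euler-Maruyama integration of Eqs. (63)-(64), with
    D_r(t) = kBT / (8 pi mu R(t)^3) evaluated along the decay."""
    rng = rng if rng is not None else np.random.default_rng()
    r = np.zeros(3)
    e = np.array([0.0, 0.0, 1.0])      # initial orientation
    t = 0.0
    while t < Td:
        R = max(R_of_t(t), 1e-3 * R0)  # cutoff regularising D_r near t = Td
        Dr = kBT / (8.0 * np.pi * mu * R**3)
        dW = rng.normal(0.0, np.sqrt(dt), 3)
        Pi = np.eye(3) - np.outer(e, e)  # projector Pi_ij = delta_ij - e_i e_j
        e = e - 2.0 * Dr * e * dt + np.sqrt(2.0 * Dr) * (Pi @ dW)
        e /= np.linalg.norm(e)         # renormalise after the discrete step
        r = r + U * e * dt
        t += dt
    return r                           # displacement at time of disappearance
```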
As in the classical case, the $\bm{e}$-dynamics decouple from the
$\bm{r}$-dynamics. With the same assumptions regarding the separation of time
scales as in the passive case, $\bm{e}(\theta,\phi)$ is therefore subject to
the Fokker-Planck equation
$\frac{\partial}{\partial
t}P(\theta,\phi,t)=D_{r}(t)\nabla_{\text{ang}}^{2}P,$ (65)
where $\nabla_{\text{ang}}^{2}$ denotes the angular part of the Laplacian
operator. By introducing a rescaled time $\tau_{r}(t)$ as
$\tau_{r}=\int_{0}^{t}D_{r}(s)ds=D_{r,0}\int_{0}^{t}\left(\frac{R_{0}}{R(s)}\right)^{3}ds,$
(66)
this may be used to show that
$\langle\bm{e}(t)\cdot\bm{e}(0)\rangle=\exp(-2\tau_{r})$. Therefore we have
the following expression for the total active mean squared displacement,
$\displaystyle\langle r^{2}\rangle_{a}=2U^{2}\int_{0}^{T_{d}}dt^{\prime}\int_{0}^{t^{\prime}}dt^{\prime\prime}\exp\Big\{-2\left[\tau_{r}(t^{\prime})-\tau_{r}(t^{\prime\prime})\right]\Big\}.$ (67)
Substituting values for our models and rescaling variables, this gives the
following general expressions.
$\displaystyle\langle r^{2}\rangle_{a}=U^{2}T_{d}^{2}\int_{0}^{\infty}dx^{\prime}\int_{0}^{x^{\prime}}dx^{\prime\prime}\frac{2e^{-2\gamma(x^{\prime}-x^{\prime\prime})}}{(1+x^{\prime}/2)^{3}(1+x^{\prime\prime}/2)^{3}},$ (non-reacting) (68)
$\displaystyle\langle r^{2}\rangle_{a}=U^{2}T_{d}^{2}\int_{0}^{\infty}dx^{\prime}\int_{0}^{x^{\prime}}dx^{\prime\prime}\frac{2e^{-2\gamma(x^{\prime}-x^{\prime\prime})}}{(1+2x^{\prime})^{3/2}(1+2x^{\prime\prime})^{3/2}},$ (slow reaction) (69)
$\displaystyle\langle r^{2}\rangle_{a}=U^{2}T_{d}^{2}\int_{0}^{\infty}dx^{\prime}\int_{0}^{x^{\prime}}dx^{\prime\prime}\frac{2e^{-2\gamma(x^{\prime}-x^{\prime\prime})}}{(1+\sqrt{x^{\prime}})^{3}(1+\sqrt{x^{\prime\prime}})^{3}}.$ (fast reaction) (70)
Unfortunately, while these are exact results, it is not possible to evaluate
these integrals analytically for arbitrary values of $\gamma$. However, we can
derive asymptotic solutions in both the diffusive and ballistic limits, as we
now show.
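The direct numerical integrations referred to below can be performed with standard adaptive quadrature; as an example, a sketch for Eq. (68) follows, with the infinite upper limit truncated at a large but finite value (an assumption justified by the algebraic decay of the integrand). The integrands of Eqs. (69) and (70) can be swapped in analogously.

```python
import numpy as np
from scipy.integrate import dblquad

def active_msd_nonreacting(gamma, x_max=200.0):
    """Direct numerical evaluation of Eq. (68), normalised by U^2 T_d^2."""
    def integrand(xpp, xp):  # inner variable x'' first, as dblquad expects
        return (2.0 * np.exp(-2.0 * gamma * (xp - xpp))
                / ((1.0 + xp / 2.0)**3 * (1.0 + xpp / 2.0)**3))
    val, _ = dblquad(integrand, 0.0, x_max, 0.0, lambda xp: xp)
    return val
```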
#### IV.1.1 Diffusive limit ($\gamma\to\infty$)
In the diffusive limit, $\gamma\gg 1$, we can use Watson’s lemma to develop an
asymptotic expansion, with details given in the Appendix. In the case of a
non-reacting swimmer, we find
$\displaystyle\langle r^{2}\rangle_{a}\sim\frac{2}{5}\frac{U^{2}T_{d}}{D_{r,0}}\left[1-\frac{5}{8\gamma}+\dots\right],\quad\gamma\to\infty\quad\text{(non-reacting)}.$ (71)
As expected, the behaviour is diffusive and the leading-order scaling is
$\displaystyle\langle r^{2}\rangle_{a}\sim\frac{U^{2}\mu\rho_{s}R_{0}^{5}}{k_{B}TD_{s}(c_{0}-c_{\infty})},\quad\gamma\to\infty\quad\text{(non-reacting)}.$ (72)
We notice the appearance of the $2/5$ factor in Eq. (71), indicating that the
enhancement of the diffusivity through active motion is reduced dramatically,
to just 40% of that of a comparable classical colloid. Furthermore, the active
mean squared displacement scales as $\sim R_{0}^{5}$, making the range of the
swimmer extremely sensitive to its initial size. This scaling breaks down for
very large swimmers, since it is necessary that $\gamma\sim R_{0}^{-1}$ is
sufficiently large for this expansion to remain valid.
For the slowly reacting swimmer we find in a similar fashion that
$\displaystyle\langle r^{2}\rangle_{a}\sim\frac{1}{4}\frac{U^{2}T_{d}}{D_{r,0}}\left[1-\frac{1}{\gamma}+\dots\right],\quad\gamma\to\infty\quad\text{(slow reaction)},$ (73)
with the leading-order scaling
$\displaystyle\langle r^{2}\rangle_{a}\sim\frac{U^{2}\mu\rho_{s}R_{0}^{4}}{k_{B}Tc_{f,\infty}k_{s}},\quad\gamma\to\infty\quad\text{(slow reaction)}.$ (74)
We see that the diffusivity in Eq. (73) is reduced even further, to 25% that
of a classical colloid. Finally for the fast reacting swimmer we obtain
$\displaystyle\langle r^{2}\rangle_{a}\sim\frac{1}{10}\frac{U^{2}T_{d}}{D_{r,0}}\left[1-\frac{5}{2\gamma}+\dots\right],\quad\gamma\to\infty\quad\text{(fast reaction)},$ (75)
and the leading-order scaling
$\displaystyle\langle r^{2}\rangle_{a}\sim\frac{U^{2}\mu\rho_{s}^{2}k_{f}^{2}R_{0}^{5}}{k_{B}TD_{f}c_{f,\infty}^{2}k_{s}^{2}},\quad\gamma\to\infty\quad\text{(fast reaction)}.$ (76)
This third dissolution model gives the strongest reduction of the active mean
squared displacement in the diffusive regime, to just 10% that of a classical
colloid.
The strong reduction in mean squared displacement across all three models
suggests that it is impractical to rely on active diffusion to transport
dissolving microswimmers. Instead designs may be aimed at exploiting the
ballistic regime ($\gamma\ll 1$) or making use of external flows and
geometries to direct swimmers.
#### IV.1.2 Ballistic limit ($\gamma\to 0$)
The asymptotic expansions in the ballistic limit are more complicated, and
rely on careful splitting of the integration range to tame divergences. With
all details shown in the Appendix, we obtain the following leading-order
results:
$\displaystyle\langle r^{2}\rangle_{a}=U^{2}T_{d}^{2}\left(1-\frac{16}{3}\gamma+\mathcal{O}(\gamma^{3/2})\right),$ (non-reacting) (77)
$\displaystyle\langle r^{2}\rangle_{a}=U^{2}T_{d}^{2}\left(1-2\sqrt{\pi}\sqrt{\gamma}+\mathcal{O}(\gamma\log\gamma)\right),$ (slow reaction) (78)
$\displaystyle\langle r^{2}\rangle_{a}=U^{2}T_{d}^{2}\left(1-4\sqrt{2\pi}\sqrt{\gamma}+\mathcal{O}(\gamma\log\gamma)\right).$ (fast reaction) (79)
Once again, we observe the same hierarchy among the three models, with the
non-reacting swimmer exhibiting the smallest decrease in range compared to a
classical colloid, in contrast with a fast reacting swimmer with the same
lifetime $T_{d}$. Note that in this limit not only the coefficient but also
the leading-order scaling varies between the models.
We obtain therefore that in both the ballistic and diffusive limit there
exists a hierarchy among the three models. The mean squared displacement for a
given value of $\gamma$ is always largest for the non-reacting swimmer,
followed by the slowly reacting and finally the fast reacting colloid. This
may be explained by considering the decay behaviour in Fig. 3. Since the decay
rate of the non-reacting swimmer is accelerating, it is only significantly
smaller than its original size for a comparatively short proportion of its
total lifetime. Since rotational diffusion is strongest for particles of small
radius, this means that it is comparatively weakly affected by the enhancement
in rotational diffusion. In contrast, colloids decaying according the other
two models experience strong rotational diffusion for a significantly longer
proportion of their lifetime, leading to less directed motion and smaller
overall displacement. In Figs. 8 and 9 we illustrate this further using results
from our numerical simulations.
### IV.2 Computational results
#### IV.2.1 Validation of the method
Figure 5: Normalised active mean squared displacement as a function of $\gamma$
for the non-reacting model. The solid black line corresponds to direct
numerical integration of Eq. (68), while the dashed orange line is our
theoretical prediction in Eq. (71) for the large $\gamma$ limit. Each scatter
point represents the mean and one standard deviation obtained from $10^{3}$
Monte-Carlo simulations of the associated Langevin equations. Inset: the small
$\gamma$ behaviour, comparing Eq. (68) (solid black) with the asymptotic
solution Eq. (77) (dashed orange). Figure 6: Normalised active mean squared
displacement against $\gamma$ for the slow reaction limit of the reacting
model. The solid black line corresponds to direct numerical integration of Eq.
(69), the dashed orange lines to the theoretical predictions of Eq. (73) and
Eq. (78), and the scatter points to Monte-Carlo simulations in analogy with
Fig. 5. Figure 7: Normalised active mean squared displacement against $\gamma$
for the fast reaction limit of the reacting model. The solid black line
corresponds to direct numerical integration of Eq. (70), the dashed orange
lines to the theoretical predictions of Eq. (75) and Eq. (79), and the scatter
points to Monte-Carlo simulations in analogy with Fig. 5.
In order to test our theoretical approach, we perform direct numerical
integrations of our integral expressions for the active mean squared
displacement in Eqs. (68)-(70). We compare them with Monte-Carlo simulations
of the associated Langevin dynamics to assert its validity, and subsequently
with our analytical predictions for the asymptotic behaviour. The results are
shown in Figs. 5, 6 and 7 for the non-reacting, slowly reacting and fast
reacting models respectively. Since the large $\gamma$ limit corresponds to
strong rotational diffusion and long lifetimes, the Monte Carlo simulations
necessitate very small time steps and very long run times. Depending on the
model, such simulations therefore become prohibitively expensive even for
moderate values of $\gamma$. Since rotational diffusion is strongest for small
colloids, this effect is most pronounced for the fast reacting swimmer whose
rate of dissolution is decreasing since this swimmer spends the longest
proportion of its lifetime in this regime. Conversely, the non-reacting
swimmer is the least expensive to simulate.
As can be seen in Fig. 5, we obtain excellent agreement between the Langevin
dynamics and the predicted mean-squared displacement for a wide range of
$\gamma$ values. In the diffusive limit ($\gamma\gg 1$), the next-to leading
order asymptotics agree extremely well with the exact result down to
$\gamma=\mathcal{O}(1)$ on a log-log scale. In the ballistic limit,
divergences begin to appear at $\gamma=\mathcal{O}(10^{-1})$. Similar
conclusions hold for the slowly reacting swimmer, as shown in Fig. 6. In the
case of the fast reacting swimmer, shown in Fig. 7, the active mean squared
displacement is a less smooth function of $\gamma$, leading to stronger
deviations from the asymptotic expressions.
#### IV.2.2 Distribution of spread
Figure 8: Histograms illustrating the distribution of root mean squared
displacement from the initial position for different values of $\gamma$,
scaled by the ballistic length scale $L_{b}=UT_{d}$ for $\gamma=0.1$ and
$\gamma=1$, and the diffusive length scale $L_{d}=L_{b}/\sqrt{\gamma}$ for
$\gamma=10$. Each histogram is generated from $10^{3}$ Monte Carlo
simulations. Dashed lines indicate sample means. Figure 9: Cloud scatter plot
of lateral displacement, $r_{\perp}=\sqrt{x^{2}+y^{2}}$, vs. vertical
displacement, $z$, of $10^{3}$ Monte Carlo simulations in the weakly ballistic
regime for our three models compared to the non-dissolving case. All
simulations are started at the coordinate origin (filled circle) with initial
orientation vertically upwards in Cartesian coordinates $(x,y,z)$. Symbols
indicate positions of the colloids at time of disappearance. The non-
dissolving data points are generated by initialising a simulation with a given
rotational diffusivity $D_{r}$ and terminating after a time $T$ such that
$TD_{r}=\gamma$. Lengths are scaled by the ballistic length scale $UT_{d}$.
From these Monte-Carlo simulations, we can deduce further information
regarding the spread of particle trajectories. As predicted in §IV.1, a
hierarchy between the models is revealed that applies for a wide range of
values of $\gamma$, covering both the ballistic and the diffusive regime. This
is illustrated in Fig. 8, where we show histograms of root-mean-square
displacement distributions. For equal values of $\gamma$, the non-reacting
model consistently produces the largest displacement. The distribution is
strongly peaked for small $\gamma$ (ballistic), but spreads as $\gamma$ shifts
to larger values. This may be attributed to the general shift towards
diffusion. Contrastingly however, the distribution of the fast-reacting
colloids is spread rather widely even in the ballistic regime and in fact
peaked much more strongly in the diffusive regime than both the non-reacting
and the slowly reacting particles, whose distribution lies between the two
others. This is indicative of fast-reacting dissolution fostering diffusive
behaviour independent of the parameter $\gamma$.
In order to further illustrate this point, we examine the lateral spread of
colloid trajectories in the weakly ballistic regime. In Fig. 9, we plot the
final positions of colloids with identical initial orientations, including
non-dissolving particles for comparison. A clear stratification between the
models is visible with non-dissolving colloids being closely confined to a
spherical cap on the one extreme, and fast reacting colloids in a near-
spherical diffusive cloud close to the origin. These also exhibit the smallest
absolute lateral spread, while the classical colloids are the most spread out.
However, the average angular spread is similar between the models.
## V Discussion
In this paper we provide two fundamental models for the dissolution and
stochastic dynamics of self-propelled artificial microswimmers. Inspired by
recent experimental realisations, we seek to identify the swimmer decay rates
and their influence on translational and rotational diffusivity, and in turn
analyse both theoretically and numerically how changes in these modify the
distribution of swimmer trajectories. We identify a new dimensionless
parameter, $\gamma$ defined as the product of lifetime and initial rotational
diffusivity, that classifies colloids with finite lifetime into ‘ballistic’
and ‘diffusive’ types independent of the dissolution process, and study the
differences between our dissolution models in three distinct limits for
various values of this parameter. We find that for a given value of $\gamma$,
particles dissolving in the absence of a reaction behave the most
ballistically, whereas colloids reacting at high Damköhler number, defined as
the ratio of fuel reactivity and diffusive replenishment, behave the most
diffusively. We find that this is due to, respectively, increasing and
decreasing dissolution rates in the different models. Furthermore we derive asymptotic
expressions of their mean squared displacement for both small and large values
of $\gamma$, and perform extensive Monte Carlo simulations to validate our
theoretical results and derive more information about the distribution of
spread.
Under experimental conditions, Damköhler numbers of more than about 10 are
often very difficult to realise. However, this does not really constrain the
applicability of our fast-reacting model, since we only require
$t_{f}/\text{Da}^{2}\ll T_{d}$ for the expansion to be valid on the scale of
dissolution dynamics. Since typically $t_{f}\ll T_{d}$ anyway, we find that
even Damköhler numbers of order unity are sufficient for this limit. On the
other hand, this argument implies that very small Damköhler numbers are
required in the slow-reaction asymptotic limit, a situation which might not be
realisable experimentally. Note, however, that we also include the general
expression of the decay for arbitrary Damköhler number in Eq. (II.2.2), for
which computations similar to the ones provided in §IV.2 may be performed.
Despite this, not all our models can apply to all kinds of microswimmer
designs. Specifically, the non-reacting model might be at odds with phoretic
self-propulsion. Therefore this model only describes colloids that propel
through different mechanisms, such as magnetic swimmers. Furthermore, our
statistical results only hold true for microswimmers that are fully
degradable. A Janus colloid with, e.g., degradable and inert halves is not
going to exhibit divergent diffusivity since the relevant length scale is
bounded. Instead such a swimmer would show a decrease in velocity, which if
known can be dealt with in a manner similar to our theoretical approach. In
this case, however, the changing geometry of the swimmer would likely have to
be solved for numerically.
Another important problem that remains to be investigated is the influence of
directed motion, such as chemotaxis. Breaking the isotropy of orientational
dynamics prevents an analytical investigation similar to the one carried out
in this paper since it relies on the result that the directional correlation
of a particle decays exponentially. However, we can still address the issue
directly in at least one special case. It was shown recently in Ref. Tatulea-
Codrean and Lauga (2018) that artificial colloids perform chemotaxis by
adjusting their trajectory by means of rotation, translation in the direction
to a chemical gradient, and translation at an angle, each with a coefficient
of strength that can be calculated from the surface activity and mobility of
the colloid. In the case of uniform surface activity, the only coefficient
that is non-zero is the one giving rise to translation in the direction of a
chemical gradient. In particular, the rotational dynamics remain unaffected.
In that case, the swimmer trajectories therefore behave just as we describe
in our paper, plus a constant velocity displacing the colloid in the direction
of the chemical gradient. Further numerical work will be required to
address the full interplay between chemotactic behaviour and dissolution
dynamics.
Before degradable designs may be employed in real-world applications, it will
be furthermore necessary to examine the effects of collective dissolution.
Since our models are sensitive to the background distribution of fuel and/or
solute, the influence of other nearby colloids on their dissolution will be
noticeable. It is conceivable that, in analogy with bubbles Michelin _et al._
(2018), different decay patterns and complex stochastic behaviour emerge.
Similar effects may be triggered by confinement and also warrant further
investigation.
###### Acknowledgements.
This project has received funding from the European Research Council (ERC)
under the European Union’s Horizon 2020 research and innovation programme
(grant agreement 682754 to EL).
## Author contributions
EL conceived the study, AC developed models and performed computations, all
authors contributed to the interpretation and writing of the manuscript.
## Conflicts of interest
There are no conflicts to declare.
## Appendix A Details of the asymptotics for active MSD
### A.1 Diffusive limit ($\gamma\to\infty$)
The general expression for the active mean squared displacement is
$\langle r^{2}\rangle_{a}=2U^{2}\int_{0}^{T_{d}}dt^{\prime}\int_{0}^{t^{\prime}}dt^{\prime\prime}\exp\left\{-2\left[\tau_{r}(t^{\prime})-\tau_{r}(t^{\prime\prime})\right]\right\}.$
(80)
In the case of the non-reacting swimmer we have $R\approx
R_{0}\sqrt{1-t/T_{d}}$, and thus
$\tau_{r}=D_{r,0}T_{d}\int_{0}^{t/T_{d}}\frac{dt^{\prime}}{(1-t^{\prime})^{3/2}}=2\gamma\left(\frac{1}{\sqrt{1-t/T_{d}}}-1\right).$
(81)
We can use this to change integration variables in Eq. (80) by setting
$x=\tau_{r}/\gamma$ and obtain
$\langle
r^{2}\rangle_{a}=2U^{2}T_{d}^{2}\int_{0}^{\infty}dx^{\prime}\int_{0}^{x^{\prime}}dx^{\prime\prime}\frac{e^{-2\gamma(x^{\prime}-x^{\prime\prime})}}{(1+x^{\prime}/2)^{3}(1+x^{\prime\prime}/2)^{3}}.$
(82)
This transformation can be interpreted as mathematically equivalent to the
motion of a non-dissolving colloid with constant rotational diffusivity and
algebraically decaying velocity. We switch variables again to
$y^{\prime}=x^{\prime},\qquad y^{\prime\prime}=x^{\prime}-x^{\prime\prime},$ (83)
and obtain
$\langle
r^{2}\rangle_{a}=2U^{2}T_{d}^{2}\int_{0}^{\infty}dy^{\prime}\int_{0}^{y^{\prime}}dy^{\prime\prime}e^{-2\gamma
y^{\prime\prime}}\left(1+\frac{y^{\prime}}{2}\right)^{-3}\left(1+\frac{y^{\prime}-y^{\prime\prime}}{2}\right)^{-3}.$
(84)
It is then possible to write the $y^{\prime\prime}$-integral in terms of
auxiliary Gamma functions. These may be expanded in the limit
$\gamma\to\infty$ to give
$\langle r^{2}\rangle_{a}=U^{2}T_{d}^{2}\int_{0}^{\infty}dy^{\prime}\left[\frac{64}{\gamma(2+y^{\prime})^{6}}+\frac{96}{\gamma^{2}(2+y^{\prime})^{7}}-\frac{8e^{-2\gamma y^{\prime}}}{\gamma(2+y^{\prime})^{3}}\right]+\mathcal{O}(\gamma^{-3}).$ (85)
The first two terms can be evaluated directly, while the last one may be
expanded using Watson’s lemma. We find that
$\langle
r^{2}\rangle_{a}=U^{2}T_{d}^{2}\left(\frac{2}{5\gamma}-\frac{1}{4\gamma^{2}}+\mathcal{O}\left(\gamma^{-3}\right)\right),$
(86)
which is the same as Eq. (71).
The case of a slowly reacting swimmer can be solved in a very similar fashion.
This time we have
$x=\frac{1}{2}\left(\frac{1}{(1-t/T_{d})^{2}}-1\right).$ (87)
It follows that the active part of the mean squared displacement may be
written as
$\langle
r^{2}\rangle_{a}=2U^{2}T_{d}^{2}\int_{0}^{\infty}dx^{\prime}\int_{0}^{x^{\prime}}dx^{\prime\prime}\frac{e^{-2\gamma(x^{\prime}-x^{\prime\prime})}}{(1+2x^{\prime})^{3/2}(1+2x^{\prime\prime})^{3/2}}.$
(88)
Developing an asymptotic expansion as before we get
$\displaystyle\langle r^{2}\rangle_{a}$
$\displaystyle=U^{2}T_{d}^{2}\int_{0}^{\infty}dy^{\prime}\left[\frac{1}{\gamma(1+2y^{\prime})^{3}}+\frac{3}{2\gamma^{2}(1+2y^{\prime})^{4}}-\frac{e^{-2\gamma y^{\prime}}}{\gamma(1+2y^{\prime})^{3/2}}\right]+\mathcal{O}(\gamma^{-3})$
$\displaystyle=U^{2}T_{d}^{2}\left(\frac{1}{4\gamma}-\frac{1}{4\gamma^{2}}+\mathcal{O}\left(\gamma^{-3}\right)\right),$
(89)
which is Eq. (73).
Finally, for the fast reacting swimmer we have
$x=\frac{t/T_{d}}{(1-\sqrt{t/T_{d}})^{2}},$ (90)
from which we can derive that
$\langle
r^{2}\rangle_{a}=2U^{2}T_{d}^{2}\int_{0}^{\infty}dx^{\prime}\int_{0}^{x^{\prime}}dx^{\prime\prime}\frac{e^{-2\gamma(x^{\prime}-x^{\prime\prime})}}{(1+\sqrt{x^{\prime}})^{3}(1+\sqrt{x^{\prime\prime}})^{3}}.$
(91)
In this case it is easier to interchange the integrals as
$\int_{0}^{\infty}dx^{\prime}\int_{0}^{x^{\prime}}dx^{\prime\prime}=\int_{0}^{\infty}dy^{\prime\prime}\int_{y^{\prime\prime}}^{\infty}dy^{\prime}$
and perform the $y^{\prime}$-integral first. The resulting expression produced
by Wolfram Mathematica 11 contains 1692 terms, but may again be expanded and
simplified significantly upon the application of Watson’s lemma, giving
$\displaystyle\langle r^{2}\rangle_{a}$
$\displaystyle=U^{2}T_{d}^{2}\int_{0}^{\infty}dy^{\prime\prime}e^{-2\gamma y^{\prime\prime}}\left(\frac{1}{5}-y^{\prime\prime}+\mathcal{O}\big((y^{\prime\prime})^{3/2}\big)\right)$ (92)
$\displaystyle=U^{2}T_{d}^{2}\left(\frac{1}{10\gamma}-\frac{1}{4\gamma^{2}}+\mathcal{O}\left(\gamma^{-5/2}\right)\right),$
(93)
as claimed in Eq. (75).
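As a cross-check, the diffusive-limit expansions (86), (89) and (93) can be
compared against direct numerical quadrature of the double integrals (82),
(88) and (91). The sketch below is illustrative (setting $UT_{d}=1$) and
assumes SciPy is available; agreement improves as $\gamma$ grows, consistent
with the stated error terms.

```python
# Illustrative numerical check of the diffusive-limit asymptotics (86),
# (89), (93) against the exact double integrals (82), (88), (91).
# Assumes SciPy; U*T_d is set to 1.
import numpy as np
from scipy.integrate import dblquad

gamma = 20.0

models = {
    "non-reacting": (lambda x: (1 + x / 2) ** -3,
                     2 / (5 * gamma) - 1 / (4 * gamma ** 2)),
    "slow":         (lambda x: (1 + 2 * x) ** -1.5,
                     1 / (4 * gamma) - 1 / (4 * gamma ** 2)),
    "fast":         (lambda x: (1 + np.sqrt(x)) ** -3,
                     1 / (10 * gamma) - 1 / (4 * gamma ** 2)),
}

for name, (f, asym) in models.items():
    # integrate 2 exp(-2 gamma (x'-x'')) f(x') f(x'') over 0 < x'' < x' < inf
    num, _ = dblquad(lambda y, x: 2 * np.exp(-2 * gamma * (x - y)) * f(x) * f(y),
                     0, np.inf, lambda x: 0.0, lambda x: x)
    print(f"{name:12s}  numeric {num:.6f}   asymptotic {asym:.6f}")
```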
### A.2 Ballistic limit ($\gamma\to 0$)
First, consider the non-reacting swimmer. We have
$\langle
r^{2}\rangle_{a}=2U^{2}T_{d}^{2}\int_{0}^{\infty}dx\int_{0}^{x}dy\frac{e^{-2\gamma(x-y)}}{(1+x/2)^{3}(1+y/2)^{3}},$
(94)
and are interested in the limit $\gamma\to 0$. We set $U^{2}T_{d}^{2}=1$ to
keep the notation clean. Since the denominator decays rapidly enough at
$\infty$ we can Taylor expand the exponential to pick up the two leading-order
contributions to the integral.
$\displaystyle\langle r^{2}\rangle_{a}$
$\displaystyle=\int_{0}^{\infty}dx\int_{0}^{x}dy\frac{2-4\gamma(x-y)+\dots}{(1+x/2)^{3}(1+y/2)^{3}}$
(95) $\displaystyle=1-\frac{16}{3}\gamma+o(\gamma),$ (96)
which is Eq. (77).
For the slowly reacting swimmer we have
$\langle
r^{2}\rangle_{a}=2\int_{0}^{\infty}dx\int_{0}^{x}dy\frac{e^{-2\gamma(x-y)}}{(1+2x)^{3/2}(1+2y)^{3/2}}.$
(97)
Because of the slower decay, it is necessary to divide and conquer from the
start. We set $z=x-y$ and note that
$\int_{0}^{\infty}dx\int_{0}^{x}dy=\int_{0}^{\infty}dz\int_{z}^{\infty}dx$.
Upon performing the inner integral we have
$\langle r^{2}\rangle_{a}=\int_{0}^{\infty}dz\frac{e^{-2\gamma
z}}{1+\sqrt{1+2z}+z(2+\sqrt{1+2z})}.$ (98)
We define $\delta$ such that $1\ll\delta\ll\gamma^{-1}$ and split the integral
into
$\displaystyle I_{1}=\int_{0}^{\delta}dz\frac{e^{-2\gamma
z}}{1+\sqrt{1+2z}+z(2+\sqrt{1+2z})},$ $\displaystyle
I_{2}=\int_{\delta}^{\infty}dz\frac{e^{-2\gamma
z}}{1+\sqrt{1+2z}+z(2+\sqrt{1+2z})}.$ (99)
Upon expanding the exponential in $I_{1}$ and taking $\delta\to\infty$ we have
$I_{1}=1+(2-2\log 2)\gamma+\mathcal{O}(\gamma^{2})+\text{terms depending on
}\delta.$ (100)
Meanwhile, we rescale $z\to\gamma z$ in $I_{2}$ and expand the denominator for
small $\gamma$.
$I_{2}=\int_{\gamma\delta}^{\infty}dze^{-2z}\left(\frac{\gamma^{1/2}}{\sqrt{2}z^{3/2}}-\frac{\gamma}{z^{2}}+\dots\right).$
(101)
Performing the integral and taking the limit $\delta\to 0$ we arrive at
$\displaystyle
I_{2}=-2\sqrt{\pi}\gamma^{1/2}-2\gamma\log\gamma+\left(2-2\gamma_{e}-2\log
2\right)\gamma$ $\displaystyle+o(\gamma)+\text{terms depending on }\delta,$
(102)
where $\gamma_{e}$ is the Euler-Mascheroni constant. Since $\delta$ is
arbitrary, the divergent terms in both integrals must cancel. In summary, we
have for the slowly reacting swimmer that
$\langle
r^{2}\rangle_{a}=1-2\sqrt{\pi}\gamma^{1/2}-2\gamma\log\gamma+\left(4-2\gamma_{e}-4\log
2\right)\gamma+o(\gamma),$ (103)
which is Eq. (78).
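The matched-asymptotics result (103) can likewise be checked by evaluating the
single integral (98) numerically for small $\gamma$; the following sketch is
illustrative and assumes SciPy, with $UT_{d}=1$.

```python
# Illustrative check of Eq. (103) against the exact integral (98).
# Assumes SciPy; U*T_d = 1.
import numpy as np
from scipy.integrate import quad

ge = np.euler_gamma  # Euler-Mascheroni constant

def msd_slow(gamma):
    integrand = lambda z: np.exp(-2 * gamma * z) / (
        1 + np.sqrt(1 + 2 * z) + z * (2 + np.sqrt(1 + 2 * z)))
    val, _ = quad(integrand, 0, np.inf, limit=200)
    return val

for gamma in (1e-2, 1e-3, 1e-4):
    asym = (1 - 2 * np.sqrt(np.pi) * np.sqrt(gamma)
            - 2 * gamma * np.log(gamma)
            + (4 - 2 * ge - 4 * np.log(2)) * gamma)
    print(f"gamma = {gamma:.0e}:  numeric {msd_slow(gamma):.5f}"
          f"   asymptotic {asym:.5f}")
```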
Finally, for the fast reacting swimmer we have
$\langle
r^{2}\rangle_{a}=2\int_{0}^{\infty}dx\int_{0}^{x}dy\frac{e^{-2\gamma(x-y)}}{(1+\sqrt{x})^{3}(1+\sqrt{y})^{3}}.$
(104)
This time there is no closed-form expression for the inner integral, forcing
us to split both integrals into two domains. We define $\delta$ as before and
write
$\langle
r^{2}\rangle_{a}=\underbrace{\int_{0}^{\delta}dx\int_{0}^{x}dy}_{I_{1}}+\underbrace{\int_{\delta}^{\infty}dx\int_{0}^{x}dy}_{I_{2}}\frac{2e^{-2\gamma(x-y)}}{(1+\sqrt{x})^{3}(1+\sqrt{y})^{3}}.$
(105)
The first part, $I_{1}$, is straightforward to evaluate once the exponential
is expanded, and yields
$I_{1}=1-\frac{296}{3}\gamma+\mathcal{O}(\gamma^{2})+\text{terms depending on
}\delta.$ (106)
To perform $I_{2}$ we write
$I_{2}=\int_{\delta}^{\infty}dx\frac{2e^{-2\gamma
x}}{(1+\sqrt{x})^{3}}\underbrace{\int_{0}^{x}dy\frac{e^{2\gamma
y}}{(1+\sqrt{y})^{3}}}_{J(x)},$ (107)
and split the range of $J(x)$ again with the goal to obtain an expansion valid
for small $\gamma$. Defining $\delta_{1}$, $J_{1}$ and $J_{2}$ in a similar
fashion, we find
$J_{1}=1+10\gamma+\mathcal{O}(\gamma^{2})+\text{terms depending on
}\delta_{1},$ (108)
whereas for $J_{2}$ we have
$\displaystyle J_{2}=$ $\displaystyle\frac{3e^{2\gamma
x}}{x}-\frac{2e^{2\gamma
x}}{\sqrt{x}}+2\sqrt{2\pi}\gamma^{1/2}\text{Erfi}\left(\sqrt{2\gamma
x}\right)$ $\displaystyle-6\gamma\text{Ei}\left(2\gamma
x\right)+2\gamma\log\gamma+\gamma\left(6\gamma_{e}-6+6\log 2\right)$
$\displaystyle+o(\gamma)+\text{terms depending on }\delta_{1},$ (109)
where $\text{Erfi}(z)=\text{Erf}(iz)/i$ and
$\text{Ei}(z)=-\int_{-z}^{\infty}e^{-t}/t\,dt$. Combining these allows us to
write
$\displaystyle I_{2}=\int_{\gamma\delta}^{\infty}dz\left[\frac{2\gamma^{1/2}e^{-2z}}{z^{3/2}}-\frac{\gamma}{z^{2}}\left(6e^{-2z}+4-4\sqrt{2\pi z}\,\text{Erfi}\left(\sqrt{2z}\right)\right)\right]+o(\gamma).$ (110)
Expanding as before and combining with $I_{1}$ we ultimately find that
$\displaystyle\langle
r^{2}\rangle_{a}=1-4\sqrt{2\pi}\gamma^{1/2}-28\gamma\log\gamma-\gamma\left(\frac{164}{3}+28\gamma_{e}+60\log
2\right)$ $\displaystyle+o(\gamma),$ (111)
corresponding to Eq. (79) in the main text.
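Since the inner integral has no closed form here, a numerical check of Eq.
(111) requires nested quadrature of Eq. (104), rewritten with $z=x-y$. A
minimal, illustrative sketch (again assuming SciPy, with $UT_{d}=1$):

```python
# Illustrative nested-quadrature check of Eq. (111) against Eq. (104),
# rewritten with z = x - y.  Assumes SciPy; U*T_d = 1.
import numpy as np
from scipy.integrate import quad

ge = np.euler_gamma
f = lambda x: (1 + np.sqrt(x)) ** -3

def inner(z):
    # g(z) = int_z^inf f(x) f(x - z) dx
    val, _ = quad(lambda x: f(x) * f(x - z), z, np.inf, limit=200)
    return val

def msd_fast(gamma):
    val, _ = quad(lambda z: 2 * np.exp(-2 * gamma * z) * inner(z),
                  0, np.inf, limit=200)
    return val

gamma = 1e-3
asym = (1 - 4 * np.sqrt(2 * np.pi) * np.sqrt(gamma)
        - 28 * gamma * np.log(gamma)
        - gamma * (164 / 3 + 28 * ge + 60 * np.log(2)))
print(f"numeric {msd_fast(gamma):.4f}   asymptotic {asym:.4f}")
```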
## References
* Wang and Gao (2012) J. Wang and W. Gao, ACS nano 6, 5745 (2012).
* Wang _et al._ (2013) W. Wang, W. Duan, S. Ahmed, T. E. Mallouk, and A. Sen, Nano Today 8, 531 (2013).
* Nelson _et al._ (2010) B. J. Nelson, I. K. Kaliakatsos, and J. J. Abbott, Annual review of biomedical engineering 12, 55 (2010).
* Elgeti _et al._ (2015) J. Elgeti, R. G. Winkler, and G. Gompper, Reports on progress in physics 78, 056601 (2015).
* Moran and Posner (2017) J. L. Moran and J. D. Posner, Annual Review of Fluid Mechanics 49, 511 (2017).
* Purcell (1977) E. M. Purcell, American journal of physics 45, 3 (1977).
* Michelin and Lauga (2014) S. Michelin and E. Lauga, Journal of Fluid Mechanics 747, 572 (2014).
* Golestanian _et al._ (2007) R. Golestanian, T. Liverpool, and A. Ajdari, New Journal of Physics 9, 126 (2007).
* Brady (2011) J. F. Brady, Journal of Fluid Mechanics 667, 216 (2011).
* Walther and Mueller (2013) A. Walther and A. H. Mueller, Chemical reviews 113, 5194 (2013).
* Ebbens _et al._ (2014) S. Ebbens, D. Gregory, G. Dunderdale, J. Howse, Y. Ibrahim, T. Liverpool, and R. Golestanian, EPL (Europhysics Letters) 106, 58003 (2014).
* Paxton _et al._ (2006) W. F. Paxton, P. T. Baker, T. R. Kline, Y. Wang, T. E. Mallouk, and A. Sen, Journal of the American Chemical Society 128, 14881 (2006).
* Moran and Posner (2011) J. L. Moran and J. D. Posner, Journal of Fluid Mechanics 680, 31 (2011).
* Gallino _et al._ (2018) G. Gallino, F. Gallaire, E. Lauga, and S. Michelin, Advanced Functional Materials , 1800686 (2018).
* Mou _et al._ (2015) F. Mou, Y. Li, C. Chen, W. Li, Y. Yin, H. Ma, and J. Guan, Small 11, 2564 (2015).
* Wang _et al._ (2012) W. Wang, L. A. Castro, M. Hoyos, and T. E. Mallouk, ACS nano 6, 6122 (2012).
* Gibbs and Zhao (2009) J. G. Gibbs and Y.-P. Zhao, Applied Physics Letters 94, 163104 (2009).
* Wang and Wu (2014) S. Wang and N. Wu, Langmuir 30, 3477 (2014).
* Zhang _et al._ (2009) L. Zhang, J. J. Abbott, L. Dong, K. E. Peyer, B. E. Kratochvil, H. Zhang, C. Bergeles, and B. J. Nelson, Nano letters 9, 3663 (2009).
* Ghosh and Fischer (2009) A. Ghosh and P. Fischer, Nano letters 9, 2243 (2009).
* Gao _et al._ (2015) W. Gao, R. Dong, S. Thamphiwatana, J. Li, W. Gao, L. Zhang, and J. Wang, ACS nano 9, 117 (2015).
* Bächer _et al._ (2017) C. Bächer, L. Schrack, and S. Gekle, Physical Review Fluids 2, 013102 (2017).
* Sauret _et al._ (2018) A. Sauret, K. Somszor, E. Villermaux, and E. Dressaire, Physical Review Fluids 3, 104301 (2018).
* Fogelson and Neeves (2015) A. L. Fogelson and K. B. Neeves, Annual review of fluid mechanics 47, 377 (2015).
* Nesbitt _et al._ (2009) W. S. Nesbitt, E. Westein, F. J. Tovar-Lopez, E. Tolouei, A. Mitchell, J. Fu, J. Carberry, A. Fouras, and S. P. Jackson, Nature medicine 15, 665 (2009).
* Chen _et al._ (2018) C. Chen, E. Karshalev, J. Guan, and J. Wang, Small , 1704252 (2018).
* Chen _et al._ (2016) C. Chen, E. Karshalev, J. Li, F. Soto, R. Castillo, I. Campos, F. Mou, J. Guan, and J. Wang, ACS Nano (2016).
* Wang _et al._ (2018) X. Wang, X.-H. Qin, C. Hu, A. Terzopoulou, X.-Z. Chen, T.-Y. Huang, K. Maniura-Weber, S. Pané, and B. J. Nelson, Advanced Functional Materials , 1804107 (2018).
* Tu _et al._ (2017) Y. Tu, F. Peng, A. A. Andree, Y. Men, M. Srinivas, and D. A. Wilson, ACS nano 11, 1957 (2017).
* Woods (1992) A. W. Woods, Journal of Fluid Mechanics 239, 429 (1992).
* Zhang _et al._ (1989) Y. Zhang, D. Walker, and C. E. Lesher, Contributions to Mineralogy and Petrology 102, 492 (1989).
* Kerr (1995) R. C. Kerr, Contributions to Mineralogy and Petrology 121, 237 (1995).
* Haynes (2014) W. M. Haynes, _CRC handbook of chemistry and physics_ (CRC press, 2014).
* Michelin _et al._ (2018) S. Michelin, E. Guérin, and E. Lauga, Physical Review Fluids 3, 043601 (2018).
* Carslaw and Jaeger (1959) H. Carslaw and J. Jaeger, Oxford University Press, Oxford , 75 (1959).
* Einstein (1905) A. Einstein, Annalen der Physik 17, 549 (1905).
* Li (2009) Z. Li, Physical Review E 80, 061204 (2009).
* Tatulea-Codrean and Lauga (2018) M. Tatulea-Codrean and E. Lauga, Journal of Fluid Mechanics 856, 921 (2018).
|
2024-09-04T02:54:55.253187 | 2020-02-27T18:24:52 | 2002.12303 | {
"authors": "Max Geier, Jaan Freudenfeld, Jorge T. Silva, Vladimir Umansky, Dirk\n Reuter, Andreas D. Wieck, Piet W. Brouwer, Stefan Ludwig",
"full_text_license": null,
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"provenance": "arxiv-papers-0000.json.gz:25926",
"submitter": "Max Geier",
"url": "https://arxiv.org/abs/2002.12303"
} | arxiv-papers | # Electrostatic potential shape of gate defined quantum point contacts
M. Geier Dahlem Center for Complex Quantum Systems and Physics Department,
Freie Universität Berlin, Arnimallee 14, 14195 Berlin, Germany J. Freudenfeld
Paul-Drude-Institut für Festkörperelektronik, Leibniz-Institut im
Forschungsverbund Berlin e.V., Hausvogteiplatz 5-7, 10117 Berlin, Germany J.
T. Silva Paul-Drude-Institut für Festkörperelektronik, Leibniz-Institut im
Forschungsverbund Berlin e.V., Hausvogteiplatz 5-7, 10117 Berlin, Germany V.
Umansky Weizmann Institute of Science, Rehovot 76100, Israel D.
Reuter (present address: Department of Physics, Paderborn University,
Warburger Straße 100, 33098 Paderborn) Angewandte Festkörperphysik, Ruhr-
Universität Bochum, Universitätsstrasse 150, 44780 Bochum, Germany A. D.
Wieck Angewandte Festkörperphysik, Ruhr-Universität Bochum,
Universitätsstrasse 150, 44780 Bochum, Germany P. W. Brouwer Dahlem Center
for Complex Quantum Systems and Physics Department, Freie Universität Berlin,
Arnimallee 14, 14195 Berlin, Germany S. Ludwig Paul-Drude-Institut für
Festkörperelektronik, Leibniz-Institut im Forschungsverbund Berlin e.V.,
Hausvogteiplatz 5-7, 10117 Berlin, Germany
###### Abstract
Quantum point contacts (QPC) are fundamental building blocks of nanoelectronic
circuits. For their emission dynamics as well as for interaction effects such
as the 0.7-anomaly the details of the electrostatic potential are important,
but the precise potential shapes are usually unknown. Here, we measure the
one-dimensional subband spacings of various QPCs as a function of their
conductance and compare our findings with models of lateral parabolic versus
hard wall confinement. We find that a gate-defined QPC near pinch-off is
compatible with the parabolic saddle point scenario. However, as the number of
populated subbands is increased Coulomb screening flattens the potential
bottom and a description in terms of a finite hard wall potential becomes more
realistic.
## I Introduction
Given the importance of quantum point contacts (QPC) as fundamental building
blocks of nanoelectronic circuits and the vast amount of literature about them
Landauer (1981); van Wees et al. (1988a); Wharam et al. (1988); Berggren and
Pepper (2010); Micolich (2011), surprisingly little is known about the shape
of their electrostatic potential as a function of gate voltages. However,
knowledge of the precise confinement potential is crucial for understanding
interaction effects in QPCs Rejec and Meir (2006); Koop et al. (2007); Bauer
et al. (2013); Heyder et al. (2015) as well as their carrier emission dynamics
Topinka et al. (2000); Freudenfeld et al. (2020), which is central for
optimizing a quantum electronic circuit. The lateral confinement defines the
mode structure of the one-dimensional (1D) channel while the longitudinal
potential shape governs the coupling of the 1D modes into the surrounding
2DES. Populating the 1D channel with electrons by increasing the voltage
applied to the split gates enhances Coulomb screening inside the constriction.
As a consequence, the lateral confinement potential undergoes a transition
from an unscreened approximately parabolic shape near pinch-off towards a
screened potential for many occupied 1D subbands. Such a transition had been
theoretically predicted Laux et al. (1988). Here, we experimentally
demonstrate it using transport spectroscopy at finite source-drain voltage.
Details of the confinement vary between individual devices produced by various
layouts based on different methods, which include the field effect Wharam et
al. (1988); van Wees et al. (1988a), etching Martin et al. (2008) or oxidation
Senz et al. (2001) techniques and more Fricke et al. (2006); Rössler et al.
(2010). The manifestation of 1D conductance quantization, $G=NG_{\text{Q}}$
with $G_{\text{Q}}=2e^{2}/h$ and $N=1,2,3,\dots$, at cryogenic temperatures is
often seen as a quality feature of QPCs. An “optimally” designed QPC has
several conductance steps that are approximately equidistant in gate voltage
as the QPC is opened up starting from pinch-off at $G=0$. It is tempting to
interpret the presence of equidistant conductance steps van Wees et al.
(1988a); Taboryski et al. (1995); Thomas et al. (1998); Hew et al. (2008);
Rössler et al. (2011); Burke et al. (2012) as a signature of a parabolic
transverse confinement potential as introduced in Ref. Büttiker (1990), since
such a potential has transverse modes at equally spaced energies. However,
this interpretation is questionable as the distance of the conductance steps
as a function of gate voltage is not one-to-one related with the energy
spacing of the 1D modes Rössler et al. (2011); Micolich and Zülicke (2011).
Figure 1: (a) Scanning electron microscope (SEM) picture of Ti/Au gates (light
gray) on the wafer surface (dark) of QPC1 and sketch of the electric field
effect device. Negative voltage $V_{\text{g}}$ applied to the gates (yellow)
is used to locally deplete the 2DES (blue, where conducting, gray where
depleted) 107 nm beneath the surface. (b) Pinch-off curve
$G(V_{\text{g}})/G_{\text{Q}}$ of QPC1 using the source-drain voltage $V=-0.5$
mV; solid line: raw data; dots: corrected for lead resistance
$R_{\text{lead}}=4.62$ k$\Omega$ which includes $4.4$ k$\Omega$ resistance of
external RC filters; inset: simplified circuit diagram of the measurement. (c)
Finite bias spectroscopy of QPC1,
$\text{d}g/\text{d}V_{\text{g}}(V_{\text{QPC}},V_{\text{g}})$, accounting for
the lead resistance (see main text). Local maxima of
$\text{d}g/\text{d}V_{\text{g}}$ (white lines) indicate transitions between
adjacent conductance plateaus. (d) SEM picture of a screen gate equivalent to
that of QPC2. As shown in the sketch, the actual device is covered with a 130
nm thick layer of cross-linked PMMA which carries a global top gate. (e)
Pinch-off curves of QPC2 corrected for a gate voltage dependent lead
resistance, including a constant $4.4$ k$\Omega$ resistance of external RC
filters, cf. Fig. 2: $G(V_{\text{t}})/G_{\text{Q}}$ for $V_{\text{s}}=0.5\,$V
and $G(V_{\text{s}})/G_{\text{Q}}$ for $V_{\text{t}}=-3.4\,$V at $V=-0.1$ mV.
(f) $\text{d}g/\text{d}V_{\text{g}}(V_{\text{QPC}},V_{\text{t}})$ of QPC2,
accounting for the lead resistance. Additional lines and symbols in panels (c)
and (f) are explained in the main text.
We study QPCs of two different designs, but both defined using gate voltages
by means of the electric field effect. In agreement with previous publications
Büttiker (1990); Berggren and Pepper (2010); Burke et al. (2012); Heyder et
al. (2015) our findings are consistent with a parabolic confinement potential
near the pinch-off point of the QPCs. However, as the conductance of a QPC is
increased, more and more carriers populate the 1D subbands and thereby arrange
themselves to partially screen the electric field induced by the applied gate
voltages. The resulting effective potential is then a function of the position
of all charges, which also includes the usually not well-known distribution of
surface states and charged bulk defects. A precise theoretical description of
this screening effect requires a three-dimensional self-consistent calculation
solving the classical Poisson equations together with the quantum mechanical
Schrödinger equations Laux et al. (1988); Yakimenko et al. (2013); Birner et
al. (2007). A self consistent Poisson-Schrödinger calculation performed for a
set of fictitious boundary conditions and for the case of a standard split-
gate defined QPC suggests a transition from a parabolic lateral confinement
for $N=1$ towards a truncated parabola and, eventually, a hard wall
confinement as $N$ is increased Laux et al. (1988). To test this scenario we
measure non-linear response transport through our QPC from which we identify
the energy spacings between its highest occupied 1D modes. We compare our
results to the two extreme scenarios for the lateral electrostatic
confinement: parabolic confinement as, e.g., in Refs. Weisz and Berggren
(1989); Hew et al. (2008); Song et al. (2009); Rössler et al. (2011) and a
hard-wall confinement as, e.g., in Refs. van Wees et al. (1988b); Gloos et al.
(2006). Our results are inconsistent with parabolic confinement for $N\geq 4$
but are consistent with a transition from a parabolic lateral confinement at
$N=1$ towards a hard wall potential as the QPCs are opened up.
## II Transport spectroscopy of quantum point contacts
Our QPCs are formed using the electric field effect in a 2D electron system
(2DES) embedded 107 nm beneath the surface of a (Al,Ga)As/GaAs
heterostructure. The Fermi energy and mobility of the 2DES measured at
cryogenic temperatures are $E_{\text{F}}\simeq 10.9$ meV and
$\mu_{e}\simeq 2.6\times 10^{6}\,\text{cm}^{2}/\text{Vs}$ for QPC1 and similar for QPC2.
measurements in a helium-3 evaporation cryostat at temperatures near $T=250$
mK. In Fig. 1(a) and (d) we present scanning electron microscope images of the
two QPC samples and sketches of the gate layouts. For QPC1 shown in panel (a)
we use a standard split-gate layout and define the 1D constriction of the 2DES
by applying a negative voltage $V_{\text{g}}$ to both gates while the 2DES and
a back gate approximately 500 $\mu$m below the surface are at ground
potential. The resulting linear response pinch-off curve,
$G(V_{\text{g}})/G_{\text{Q}}$, is presented in Fig. 1(b). It features clear
and, for $N<6$, nearly equidistant steps of quantized conductance. To create
the second QPC2, see Fig. 1(d), we use a global top gate to globally deplete
the 2DES. Only below a screen gate placed between the top gate and the 2DES
do we induce a finite density of free electrons Bachsoliani et al. (2017). The
screen gate shapes a narrow constriction, i.e., a QPC between 2D leads. Both
the QPC conductance and the carrier density in the leads are controlled by the
combination of the voltages $V_{\text{t}}$ and $V_{\text{s}}$ applied to the
top gate and screen gate, respectively. We present example pinch-off curves
$G(V_{\text{t}})$ for fixed $V_{\text{s}}=0.5\,$V and $G(V_{\text{s}})$ for
constant $V_{\text{t}}=-3.4\,$V in Fig. 1(e). Note that the screen gate
voltage is restricted to $V_{\text{s}}\lesssim 0.5\,$V, as larger
$V_{\text{s}}$ causes a leakage current from the gate into the 2DES (as
expected for a Schottky barrier).
All our pinch-off curves feature smooth transitions between quantized
conductance plateaus. They indicate that the potential varies slowly and
smoothly in current direction, reminiscent of a parabolic potential profile in
the longitudinal direction, which results in reflectionless contacts between
constriction and leads.
Quantized conductance is a consequence of the energy quantization in a 1D
channel caused by the lateral confinement of the constriction. To
experimentally determine the energies of the 1D modes we need a known energy
scale to compare with. For this reason we measure the differential conductance
$g=\text{d}I/\text{d}V$ (e.g., using a lock-in amplifier) as a function of
source-drain voltage $V$ along the pinch-off curves. In panels (c) and (f) of
Fig. 1 we plot the differential transconductances
$\text{d}g/\text{d}V_{\text{g}}$ ($\text{d}g/\text{d}V_{\text{t}}$) for the
two QPCs as a function of the gate voltage and the bias voltage
$V_{\text{QPC}}$ (defined below) dropping across the QPC. In these plots steps
of the conductance $G(V_{\text{g}},V_{\text{QPC}})$
[$G(V_{\text{t}},V_{\text{QPC}})$] appear as lines of positive differential
transconductance (white). Red lines are a guide for the eye, indicating
resonances between the 1D modes and the chemical potentials of the source and
drain leads. Along the $N$th line of positive (negative) slope counted from
the bottom of the plot, the $N$th 1D subband bottom energy is equal to the
chemical potential in the source (drain) lead,
$\varepsilon_{N}=\mu_{\text{S}}$ ($\varepsilon_{N}=\mu_{\text{D}}$). The lines
frame diamond shaped regions around $V_{\text{QPC}}=0$. Within these regions
the conductance takes the quantized values $G=NG_{\text{Q}}$. Intersection
points at $V_{\text{QPC}}=0$ indicate steps of the linear response pinch-off
curves, i.e., $G=(N-0.5)G_{\text{Q}}$. At intersection points at finite
$V_{\text{QPC}}\neq 0$ the chemical potential drop across a QPC equals the
energy spacing between the corresponding 1D modes,
$|\mu_{\text{S}}-\mu_{\text{D}}|=eV_{\text{QPC}}=\varepsilon_{N}-\varepsilon_{M}$.
The additional curved lines of enhanced differential transconductance within
the $N=1$ diamond indicate the 0.7-anomaly Rejec and Meir (2006); Koop et al.
(2007); Micolich (2011); Bauer et al. (2013); Heyder et al. (2015) which is
not a topic of this article.
Since the source-drain voltage $V$ is applied across the QPC and its leads
(which is always the case, because of the finite contact sizes even for a
four-terminal measurement), the voltage drop across a QPC is
$V_{\text{QPC}}=V-V_{\text{lead}}=V-R_{\text{lead}}I$, cf. sketch in Fig.
1(b). The lead resistance can be directly determined from the linear response
pinch-off curves by forcing the conductance plateaus to their quantized
values, $R_{\text{lead}}=V/I-(NG_{\text{Q}})^{-1}$. Our pinch-off curves in
panels (b) and (e) of Fig. 1 are already corrected for the lead resistances,
while for QPC1 we additionally plot the uncorrected curve, i.e., the raw data
as a solid line. For completeness we present the lead resistances for all
three pinch-off curves in Fig. 2.
Figure 2: Resistances $R_{\text{lead}}$ of the leads to the QPCs, cf. sketch
in Fig. 1(b). For the split gate design of QPC1 $R_{\text{lead}}$ is constant
while it is a function of gate voltages for QPC2.
From these we determine the voltage drop across the QPC,
$V_{\text{QPC}}=V/(R_{\text{lead}}G+1)$, which is the x-axis in panels (c) and
(f) of Fig. 1. The tapered shape of the region of plotted data is a result of
correcting for the lead resistances (we measured in the range
$-8\,\text{mV}\leq V\leq 8\,\text{mV}$).
At the intersection points marked by red squares in panels (c) and (f) of Fig.
1 the bias $V_{\text{QPC}}$ is precisely equal to the energy spacings between
subsequent subbands,
$\delta\varepsilon(N)=\varepsilon_{N+1}-\varepsilon_{N}=eV_{\text{QPC}}.$ (1)
We plot $\delta\varepsilon(N)$ in Fig. 3
Figure 3: Subband spacings $\delta\epsilon(N)$ of both QPCs for the three
pinch-off curves presented in Fig. 1. Lines are guides for the eyes. At the
intersection of the two lines of QPC2 the gate voltages $V_{\text{s}}$ and
$V_{\text{t}}$ would be identical for both measurements. Error bars reflect
the uncertainties of the red lines in Figs. 1(c) and (d).
for all three pinch-off curves. Related to the variations in geometry the
three implementations of QPCs have different subband spacings. However, as a
general feature we observe a strong decrease of $\delta\varepsilon(N)$ as the
QPCs are opened and $N$ is increased.
## III Hard wall versus parabolic lateral confinement
Given reflectionless contacts the conductance of a QPC is limited by its
strongest lateral confinement in the center of the constriction. The measured
subband spacings are uniquely related with this lateral confinement. In the
following, we compare the two most common models describing the lateral
confinement, namely a hard-wall versus a parabolic potential. These two models
may be considered the extreme limits of a “continuum” of realistic scenarios
for the transverse confinement.
### III.1 Lateral hard-wall potential
Figure 4: Comparison between hard-wall (left column) and parabolic (right
column) potential models of the lateral confinement. (a) Width of hard-wall
potential $W(N)$. (b) Offset of hard-wall potential $\Phi_{0}(N)$. (c) Shape
of hard-wall potential for $1\leq N\leq 9$, only for QPC1. (d) Curvature of
parabolic potential $\omega_{y}(N)$. (e) Offset of parabolic potential
$\Phi_{0}(N)$. (f) Shape of parabolic potential for $1\leq N\leq 9$, only for
QPC1. Error bars in panels (a), (b), (d) and (e) are calculated by error
propagation from the error of $\delta\varepsilon(N)$, cf. Fig. 3.
For the lateral hard-wall potential we model the transverse confinement
potential $\Phi(y)$ as
$\Phi(y)=\begin{cases}\Phi_{0},&|y|\leq W/2\\ \infty,&|y|>W/2\,,\end{cases}$ (2)
where the two parameters $W$ and $\Phi_{0}$ are the width and offset of the
hard-wall potential well. An offset can be caused by a partial depletion of
the constriction related to incomplete screening in a semiconductor with
a small carrier density. The threshold energies for the transverse modes are
$E_{n}=\frac{\pi^{2}\hbar^{2}n^{2}}{2m^{\star}W^{2}}+\Phi_{0}\,$ (3)
where $m^{\star}=0.067m_{0}$ is the effective mass of the electrons in GaAs,
$m_{0}$ being the free electron mass. Using Eq. (II) to relate the bias
voltage at the intersection points marked by the red squares in Fig. 1(c) and
(f) to the subband spacing $\delta\varepsilon(N)=E_{N+1}-E_{N}$, we calculate
the widths
$W(N)=\pi\hbar\sqrt{\frac{2N+1}{2m^{\star}\delta\varepsilon(N)}}.$ (4)
Neglecting additional screening effects from the applied bias voltage, these
values of $W(N)$ apply everywhere along the (almost horizontal) lines
connecting pairs of red squares, see the yellow lines for $N=2$ in Fig. 1(c)
and (f). In particular, this allows us to extend our estimate of the width
$W(N)$ to $V_{\text{QPC}}=0$, indicated for $N=2$ by the yellow dot in Fig.
1(c) and (f). Substituting $W$ in Eq. (3) with $W(N)$ we then find the
potential offset $\Phi_{0}$ using the relation $E_{\text{F}}\simeq
E_{N}+0.5\delta\epsilon(N)$, which gives
$\Phi_{0}(N)\simeq
E_{\text{F}}-\delta\varepsilon(N)\,\left(\frac{N^{2}}{2N+1}+\frac{1}{2}\right)\,.$
(5)
The potential shift by $0.5\delta\epsilon(N)$ accounts for the difference
between the $N$th subband bottom $E_{N}$ and the Fermi level $E_{\text{F}}$ in
the center of each diamond at $V_{\text{QPC}}=0$, assuming symmetric coupling
between the 1D constriction and both leads. (The assumption of symmetric
coupling is confirmed by the fact that the lines connecting pairs of red
squares in Fig. 1(c) and (f) are almost horizontal.)
### III.2 Lateral parabolic potential
To model a lateral parabolic potential we use
$\Phi(y)=\Phi_{0}+\frac{m^{\star}\omega_{y}^{2}y^{2}}{2},$ (6)
where $\omega_{y}$ and $\Phi_{0}$ are the characteristic frequency and offset
of the parabolic potential well. In analogy to the analysis assuming hard-wall
potentials we determine the two parameters from the measured subband spacings.
At the intersection points indicated with red squares in Fig. 1(c) and (f) we
find
$\hbar\omega_{y}(N)=eV_{\text{QPC}}=\delta\varepsilon(N)$ (7)
and in the centers of the diamonds at $V_{\text{QPC}}=0$ in addition
$\Phi_{0}(N)\simeq E_{F}-N\hbar\omega_{y}\,.$ (8)
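Before comparing the two models, it may help to see how Eqs. (4), (5), (7)
and (8) turn measured subband spacings into model parameters. The sketch
below uses placeholder values of $\delta\varepsilon(N)$ (not the measured
data of Fig. 3) together with the value $E_{\text{F}}=10.9$ meV quoted for
QPC1 in the main text.

```python
# Minimal sketch: model parameters from measured subband spacings via
# Eqs. (4), (5), (7), (8).  The spacings delta_eps below are illustrative
# placeholders, not the measured values of Fig. 3.
import numpy as np

hbar = 1.054571817e-34              # J s
m0 = 9.1093837015e-31               # kg
meV = 1.602176634e-22               # J
m_star = 0.067 * m0                 # effective electron mass in GaAs
E_F = 10.9 * meV                    # Fermi energy quoted for QPC1

delta_eps = np.array([5.0, 3.5, 2.5, 1.8]) * meV   # hypothetical spacings
N = np.arange(1, len(delta_eps) + 1)

# hard-wall model, Eqs. (4) and (5)
W = np.pi * hbar * np.sqrt((2 * N + 1) / (2 * m_star * delta_eps))
phi0_hw = E_F - delta_eps * (N**2 / (2 * N + 1) + 0.5)

# parabolic model, Eqs. (7) and (8): hbar * omega_y(N) = delta_eps(N)
phi0_par = E_F - N * delta_eps

for n, w, p1, p2 in zip(N, W, phi0_hw, phi0_par):
    print(f"N={n}: W = {w*1e9:5.1f} nm,  Phi0(hard wall) = {p1/meV:5.2f} meV,"
          f"  Phi0(parabolic) = {p2/meV:5.2f} meV")
```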
### III.3 Comparison of the two potential shapes
In Fig. 4 we directly compare our results for the hard-wall potential shown in
the left column and for assuming parabolic confinement plotted on the right
hand side. We present the parameters $W$ and $\Phi_{0}$ as a function of the
subband number $N$ for all three QPC implementations for the hard-wall
potential in panels (a) and (b) and $\omega_{y}$ and $\Phi_{0}$ for the
parabolic potential in panels (d) and (e). The results are qualitatively
similar for the various implementations of QPCs; the variations in $W$ or
$\omega_{y}$ between QPCs indicate that the lateral confinement potential of
QPC2 is slightly wider compared to QPC1. In panels (c) and (f), which show
the actual potentials for QPC1, we indicate for comparison the lithographic
distance of 250 nm between the gates seen in the inset of Fig. 1(a); it
corresponds to the white area between regions of gray background. The width of
the hard-wall potential slightly exceeds the lithographic width for $N=9$.
QPC1 does not show further plateaus for $N>9$.
Comparing the two models a substantial difference is visible in $\Phi_{0}(N)$.
While for $N=1$ the potential offset is similar for both models with
$\Phi_{0}/E_{\text{F}}\simeq 0.6$, in case of the hard-wall potential it
slowly decreases to $\Phi_{0}/E_{\text{F}}\simeq 0.4$ at $N=4$ and stays
approximately constant at that level as the QPC is opened further. In
contrast, the decrease of the offset $\Phi_{0}(N)$ of the parabolic potential
with $N$ is much steeper, such that for $N\gtrsim 4$ it moves below the bottom
of the conduction band in the 2D leads, indicated as a dashed line at
$\Phi=0$. We are not aware of a realistic mechanism that could lead to such an
over-screening of the negative voltages applied to the control gates
($V_{\text{g}}$ for QPC1 or $V_{\text{t}}$ for QPC2).
## IV Discussion and summary
The main result of our simple analysis starting from the measured subband
spacings $\delta\varepsilon(N)$ is, that for $N\geq 4$ we can exclude a
parabolic lateral confinement potential for our QPCs. Based on a self-
consistent calculation it has been suggested that the increasing population of
the 1D constriction with $N$ as a QPC is opened up leads to an increased
screening of the electric field originating from the charged control gates.
For a gate defined QPC this process can cause a transition from a parabolic
confinement for the case of little screening, i.e., $N=1$ towards a truncated
potential with a flat bottom at larger $N$ where many carriers populate the
constriction Laux et al. (1988). Our findings are in favor of such a scenario.
The hard-wall potential presents a somewhat unrealistic extreme case of strong
screening. Nevertheless, for $N\geq 4$ it seems more realistic than the other
extreme, namely the parabolic potential. The true shape of the lateral
confinement potential of a QPC for $N\geq 4$ likely lies between these two
extremes, maybe close to a truncated parabola Laux et al. (1988); Wharam et
al. (1989), i.e., a parabola with a flat bottom identical to that of a hard-
wall potential but with smoothly increasing side walls of constant curvature,
as is the case for a parabola.
In summary, a parabolic saddle point potential is likely a realistic
description of a QPC near pinch-off, although our measurement can also be
explained with a hard-wall confinement in this regime. However, as the QPC is
opened up beyond $N\simeq 4$, the parabolic lateral confinement turns out to
be a bad approximation. In this regime of enhanced screening a hard-wall
potential is the better approximation.
## V Appendix
### V.1 Coupling between control gates and the QPC
The electrostatic potential shaping the QPCs is generated and controlled via
the field effect by applying voltages to nearby metal gates. The size of the
plateaus of quantized conductance in the pinch-off curves as a function of
gate voltage, cf. Fig. 1(b) and (e), is proportional to the capacitive
coupling between the control gates and the QPC, which we approximate as a
conducting 1D-channel with the carrier density $n_{\text{1D}}$. We determine
the approximate capacitance per unit length between gate and QPC as
$c_{\text{1D}}=e\delta n_{\text{1D}}/\delta V_{\text{gate}}\,,$ (9)
where $\delta n_{\text{1D}}$ is the carrier density increase as the voltage on
the control gate is increased by $\delta V_{\text{gate}}$. If we take for
$\delta V_{\text{gate}}$ the voltage difference between two subsequent
intersection points of the source- and drain-resonances at $V_{\text{QPC}}=0$
in Fig. 1(c) and (f), $\delta n_{\text{1D}}$ corresponds to the difference of
the values of $n_{\text{1D}}$ at these points with $N$ versus $N+1$ subbands
being populated. The 1D carrier density is
$n_{\text{1D}}(N)=\int_{0}^{\infty}D_{\text{1D}}(E)f(E)\text{d}E\,,$ (10)
where $D_{\text{1D}}=\frac{1}{\pi\hbar}\sqrt{\frac{2m^{\star}}{E}}$ is the 1D
electron density of states and $f(E)$ the Fermi-Dirac distribution. Given
$k_{\text{B}}T\ll E_{\text{F}}$ we approximate $f(E)=1$ for $E<E_{\text{F}}$
and $f(E)=0$ for $E>E_{\text{F}}$. Summing up all 1D modes which are actually
populated for the QPC tuned to the conductance $G=NG_{\text{Q}}$ we find
$n_{\text{1D}}(N)=\frac{\sqrt{2m^{\star}}}{\pi\hbar}\sum_{n=1}^{N}\int_{E_{n}}^{E_{\text{F}}}\frac{dE}{\sqrt{E-E_{n}}}=\frac{\sqrt{8m^{\star}}}{\pi\hbar}\sum_{n=1}^{N}\sqrt{E_{\text{F}}-E_{n}}\,.$
(11)
Inserting $\delta n_{\text{1D}}(N)=n_{\text{1D}}(N+1)-n_{\text{1D}}(N)$ from
Eq. (V.1) in Eq. (9) we finally determine the 1D capacitance density as
$c_{\text{1D}}(N)=\frac{\sqrt{8m^{\star}e^{2}}}{\pi\hbar}\,\frac{\sqrt{E_{\text{F}}-E_{N+1}}}{\delta
V_{\text{gate}}(N)}\,,$ (12)
where $\delta V_{\text{gate}}(N)$ is the width of the $N$th plateau of the
pinch-off curve, cf. Fig. 1, measured between the conductances
$(N+0.5)G_{\text{Q}}$ and $(N-0.5)G_{\text{Q}}$. Substituting $E_{N+1}$ with
the corresponding eigenenergy of the hard-wall potential using Eq. (3), we can
now determine $c_{\text{1D}}(N)$. In Fig. 5
Figure 5: 1D carrier density $n_{\text{1D}}(N)$ assuming an infinitely long
hard-wall 1D channel of width $W(N)$ and depth $E_{\text{F}}-\Phi_{0}(N)$ of
QPC1 (red squares, rhs axis) and corresponding 1D capacitance density
$c_{\text{1D}}(N)$ (blue triangles, lhs axis).
we present the 1D capacitance density $c_{\text{1D}}(N)$, which is the slope
of the 1D carrier density $n_{\text{1D}}(N)$ that is also shown. The strong decrease
of the capacitance with $N$ for $N\leq 4$ is a direct signature of the
increase of the screening of the electric field of the gates with growing
carrier density.
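For concreteness, Eqs. (3), (11) and (12) can be evaluated as in the sketch
below; the channel width, offset and plateau width used here are illustrative
placeholders, not fitted values.

```python
# Minimal sketch of Eqs. (3), (11) and (12) for a hard-wall channel.
# W, phi0 and dV_gate are illustrative placeholders, not fitted values.
import numpy as np

hbar = 1.054571817e-34              # J s
m0 = 9.1093837015e-31               # kg
e = 1.602176634e-19                 # C
meV = 1e-3 * e                      # J
m_star = 0.067 * m0
E_F = 10.9 * meV

def E_n(n, W, phi0):
    """Hard-wall threshold energies, Eq. (3)."""
    return (np.pi * hbar * n) ** 2 / (2 * m_star * W ** 2) + phi0

def n_1d(N, W, phi0):
    """Populated 1D carrier density, Eq. (11)."""
    n = np.arange(1, N + 1)
    return np.sqrt(8 * m_star) / (np.pi * hbar) * np.sum(
        np.sqrt(E_F - E_n(n, W, phi0)))

def c_1d(N, W, phi0, dV_gate):
    """Capacitance per unit length, Eq. (12); requires E_{N+1} < E_F."""
    return (np.sqrt(8 * m_star * e ** 2) / (np.pi * hbar)
            * np.sqrt(E_F - E_n(N + 1, W, phi0)) / dV_gate)

W, phi0 = 120e-9, 4.0 * meV          # placeholder width and offset
print(f"n_1D(3) = {n_1d(3, W, phi0):.3e} m^-1")
print(f"c_1D(3) = {c_1d(3, W, phi0, 0.1):.3e} F/m  (dV_gate = 0.1 V)")
```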
In addition, the variations in capacitance as a function of $N$ explain the
counter-intuitive result that the subband spacings $\delta\epsilon(N)$
strongly vary in a region of almost equal widths of the plateaus of quantized
conductance of the pinch-off curve, cf. Figs. 1(b),(e) and Fig. 3.
### V.2 Width of the 1D constriction as a function of gate voltage
In Fig. 4(a) we have presented the width of the hard-wall potential $W(N)$. In
Fig. 6
Figure 6: Width of the hard-wall potential $W(V_{\text{g}})$ for QPC1 [same
data as the $W(N)$ in Fig. 4(a)]. The slope of the red line is
$\text{d}W/\text{d}V_{\text{g}}=300\,\text{nm}/$V, cf. main text.
we plot $W(V_{\text{g}})$ for QPC1. Next, we compare this result with the
dependence of the depletion region on gate voltage, using a different sample
fabricated on the same wafer material. The sample shown in Fig. 7(a)
Figure 7: (a) QPC nominally identical to QPC1 for $N=4$ coupled to a
hemispherical mirror defined by a negative gate voltage $V_{\text{m}}$. (b)
Conductance of the QPC as a function of $V_{\text{m}}$. (c) Fourier transform
of the conductance. From the peak value, we determine the period $\delta
V_{\text{m}}\simeq 150\,$mV of the oscillations in panel (b).
contains a QPC nominally identical to QPC1 and a hemispherical mirror gate.
The two samples have been prepared in parallel and on the same wafer. In Fig.
7(b) we present the conductance of the QPC as a function of the voltage
applied to the mirror gate. The bare conductance (without mirror) is
$G=4G_{\text{Q}}$. However, with the 2DES below the mirror gate depleted it is
reduced roughly by a factor of 2 because of enhanced backscattering through the
QPC. At the same time $G(V_{\text{m}})$ oscillates with a visibility of 40 %.
Both the conductance reduction and the oscillation are related to the formation
of localized modes inside the hemispherical resonator. The oscillation can be
interpreted in analogy to the oscillations of the standing wave in a Fabry-
Pérot resonator, while here the coherent electrons generate the standing
wave. By increasing the gate voltage $V_{\text{m}}$ we decrease the area of
2DES depleted next to the mirror gate and thereby increase the length of the
resonator (the distance between the QPC and the mirror). Per period of the
conductance oscillation the length of the resonator changes by half of the
Fermi wavelength,
$\text{d}L_{\text{res}}/\text{d}V_{\text{m}}=0.5\lambda_{\text{F}}/\delta
V_{\text{m}}$, with the resonator length $L_{\text{res}}$. We determine the
averaged period from the fast Fourier transform of the oscillation, cf. Fig.
7(c), and find $\delta V_{\text{m}}\simeq 150\,$mV. With
$\lambda_{\text{F}}=45\,$nm we finally estimate the rate of the depletion
length reduction as
$\text{d}L_{\text{res}}/\text{d}V_{\text{m}}=150\,\text{nm}/$V. Changing the
voltage applied to the QPC gates instead of the mirror gates results in the
same conclusion while in this case the interference pattern appears on top of
the QPC pinch-off curve.
To estimate the depletion of the electron system between the QPC gates we have
to add up the electric fields of both gates. Based on the fact that the same
voltage is applied to both gates and the distance between gates is more than
two times larger than the distance between each gate and the 2DES we neglect
any influences that a gate has on the depletion caused by the other gate. From
the slope of the red line in Fig. 6 we find the dependence of the width of the
hard-wall potential $W(V_{\text{g}})$ as a function of gate voltage to be
$\text{d}W/\text{d}V_{\text{g}}\simeq 300\,\text{nm}/$V, twice as large as the
effect of a single mirror gate. This finding supports the applicability of the
hard-wall model for QPCs with $N\geq 4$.
## VI Acknowledgments
We thank Philipp Altpeter for technical support and are grateful for financial
support from the DFG via Grant No. LU 819/11-1. M.G. acknowledges support by
project A03 of the CRC-TR 183. A.D.W. acknowledges gratefully support of DFG-
TRR160, BMBF - Q.Link.X 16KIS0867 and the DFH/UFA CDFA-05-06.
M. Geier and J. Freudenfeld contributed equally to this work.
## References
* Landauer (1981) R. Landauer, Physics Letters A 85, 91 (1981), ISSN 0375-9601, URL http://www.sciencedirect.com/science/article/pii/0375960181902309.
* van Wees et al. (1988a) B. J. van Wees, H. van Houten, C. W. J. Beenakker, J. G. Williamson, L. P. Kouwenhoven, D. van der Marel, and C. T. Foxon, Phys. Rev. Lett. 60, 848 (1988a), URL https://link.aps.org/doi/10.1103/PhysRevLett.60.848.
* Wharam et al. (1988) D. A. Wharam, T. J. Thornton, R. Newbury, M. Pepper, H. Ahmed, J. E. F. Frost, D. G. Hasko, D. C. Peacock, D. A. Ritchie, and G. A. C. Jones, Journal of Physics C: Solid State Physics 21, L209 (1988), URL http://stacks.iop.org/0022-3719/21/i=8/a=002.
* Berggren and Pepper (2010) K.-F. Berggren and M. Pepper, Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 368, 1141 (2010), URL https://royalsocietypublishing.org/doi/abs/10.1098/rsta.2009.0226.
* Micolich (2011) A. P. Micolich, Journal of Physics: Condensed Matter 23, 443201 (2011), URL http://stacks.iop.org/0953-8984/23/i=44/a=443201.
* Rejec and Meir (2006) T. Rejec and Y. Meir, Nature 442, 900–903 (2006), ISSN 1476-4687, URL https://www.nature.com/articles/nature05054.
* Koop et al. (2007) E. J. Koop, A. I. Lerescu, J. Liu, B. J. van Wees, D. Reuter, A. D. Wieck, and C. H. van der Wal, Journal of Superconductivity and Novel Magnetism 20, 433–441 (2007), ISSN 1557-1947, URL https://link.springer.com/article/10.1007/s10948-007-0289-5.
* Bauer et al. (2013) F. Bauer, J. Heyder, E. Schubert, D. Borowsky, D. Taubert, B. Bruognolo, D. Schuh, W. Wegscheider, J. von Delft, and S. Ludwig, Nature 501, 73–78 (2013), URL https://www.nature.com/articles/nature12421.
* Heyder et al. (2015) J. Heyder, F. Bauer, E. Schubert, D. Borowsky, D. Schuh, W. Wegscheider, J. von Delft, and S. Ludwig, Phys. Rev. B 92, 195401 (2015), URL http://link.aps.org/doi/10.1103/PhysRevB.92.195401.
* Topinka et al. (2000) M. A. Topinka, B. J. LeRoy, S. E. J. Shaw, E. J. Heller, R. M. Westervelt, K. D. Maranowski, and A. C. Gossard, Science 289, 2323 (2000), ISSN 0036-8075, URL http://science.sciencemag.org/content/289/5488/2323.
* Freudenfeld et al. (2020) J. Freudenfeld, M. Geier, V. Umansky, P. Brouwer, and S. Ludwig, arXiv p. arXiv:2002.12340 (2020), URL https://arxiv.org/abs/2002.12340.
* Laux et al. (1988) S. Laux, D. Frank, and F. Stern, Surface Science 196, 101 (1988), ISSN 0039-6028, URL http://www.sciencedirect.com/science/article/pii/0039602888906711.
* Martin et al. (2008) T. P. Martin, C. A. Marlow, L. Samuelson, A. R. Hamilton, H. Linke, and R. P. Taylor, Phys. Rev. B 77, 155309 (2008), URL https://link.aps.org/doi/10.1103/PhysRevB.77.155309.
* Senz et al. (2001) V. Senz, T. Heinzel, T. Ihn, S. Lindemann, R. Held, K. Ensslin, W. Wegscheider, and M. Bichler, Journal of Physics: Condensed Matter 13, 3831 (2001), URL https://iopscience.iop.org/article/10.1088/0953-8984/13/17/303.
* Fricke et al. (2006) C. Fricke, J. Regul, F. Hohls, D. Reuter, A. Wieck, and R. Haug, Physica E: Low-dimensional Systems and Nanostructures 34, 519–521 (2006), ISSN 1386-9477, URL http://www.sciencedirect.com/science/article/pii/S1386947706001500.
* Rössler et al. (2010) C. Rössler, M. Herz, M. Bichler, and S. Ludwig, Solid State Communications 150, 861 (2010), ISSN 0038-1098, URL http://www.sciencedirect.com/science/article/pii/S0038109810000906.
* Taboryski et al. (1995) R. Taboryski, A. Kristensen, C. B. Sørensen, and P. E. Lindelof, Phys. Rev. B 51, 2282 (1995), URL https://link.aps.org/doi/10.1103/PhysRevB.51.2282.
* Thomas et al. (1998) K. J. Thomas, J. T. Nicholls, N. J. Appleyard, M. Y. Simmons, M. Pepper, D. R. Mace, W. R. Tribe, and D. A. Ritchie, Phys. Rev. B 58, 4846 (1998), URL https://link.aps.org/doi/10.1103/PhysRevB.58.4846.
* Hew et al. (2008) W. Hew, K. Thomas, I. Farrer, D. Anderson, D. Ritchie, and M. Pepper, Physica E: Low-dimensional Systems and Nanostructures 40, 1645 (2008), ISSN 1386-9477, URL http://www.sciencedirect.com/science/article/pii/S1386947707006364.
* Rössler et al. (2011) C. Rössler, S. Baer, E. de Wiljes, P.-L. Ardelt, T. Ihn, K. Ensslin, C. Reichl, and W. Wegscheider, New Journal of Physics 13, 113006 (2011), URL https://iopscience.iop.org/article/10.1088/1367-2630/13/11/113006.
* Burke et al. (2012) A. M. Burke, O. Klochan, I. Farrer, D. A. Ritchie, A. R. Hamilton, and A. P. Micolich, Nano Lett. 12, 4495 (2012), ISSN 1530-6984, URL https://pubs.acs.org/doi/abs/10.1021/nl301566d.
* Büttiker (1990) M. Büttiker, Phys. Rev. B 41, 7906 (1990), URL https://link.aps.org/doi/10.1103/PhysRevB.41.7906.
* Micolich and Zülicke (2011) A. P. Micolich and U. Zülicke, Journal of Physics: Condensed Matter 23, 362201 (2011), URL https://iopscience.iop.org/article/10.1088/0953-8984/23/36/362201.
* Yakimenko et al. (2013) I. I. Yakimenko, V. S. Tsykunov, and K.-F. Berggren, Journal of Physics: Condensed Matter 25, 072201 (2013), URL https://iopscience.iop.org/article/10.1088/0953-8984/25/7/072201.
* Birner et al. (2007) S. Birner, T. Zibold, T. Andlauer, T. Kubis, M. Sabathil, A. Trellakis, and P. Vogl, IEEE Transactions on Electron Devices 54, 2137 (2007), ISSN 1557-9646, URL https://ieeexplore.ieee.org/document/4294186.
* Weisz and Berggren (1989) J. F. Weisz and K.-F. Berggren, Phys. Rev. B 40, 1325 (1989), URL https://link.aps.org/doi/10.1103/PhysRevB.40.1325.
* Song et al. (2009) J. Song, Y. Kawano, K. Ishibashi, J. Mikalopas, G. R. Aizin, N. Aoki, J. L. Reno, Y. Ochiai, and J. P. Bird, Applied Physics Letters 95, 233115 (2009), URL https://aip.scitation.org/doi/10.1063/1.3272677.
* van Wees et al. (1988b) B. J. van Wees, L. P. Kouwenhoven, H. van Houten, C. W. J. Beenakker, J. E. Mooij, C. T. Foxon, and J. J. Harris, Phys. Rev. B 38, 3625 (1988b), URL https://link.aps.org/doi/10.1103/PhysRevB.38.3625.
* Gloos et al. (2006) K. Gloos, P. Utko, M. Aagesen, C. B. Sørensen, J. B. Hansen, and P. E. Lindelof, Phys. Rev. B 73, 125326 (2006), URL https://link.aps.org/doi/10.1103/PhysRevB.73.125326.
* Bachsoliani et al. (2017) N. Bachsoliani, S. Platonov, A. D. Wieck, and S. Ludwig, Phys. Rev. Applied 8, 064015 (2017), URL https://link.aps.org/doi/10.1103/PhysRevApplied.8.064015.
* Wharam et al. (1989) D. A. Wharam, U. Ekenberg, M. Pepper, D. G. Hasko, H. Ahmed, J. E. F. Frost, D. A. Ritchie, D. C. Peacock, and G. A. C. Jones, Phys. Rev. B 39, 6283 (1989), URL https://link.aps.org/doi/10.1103/PhysRevB.39.6283.
|
2024-09-04T02:54:55.264699 | 2020-02-27T18:58:15 | 2002.12344 | {
"authors": "Christopher Malon and Bing Bai",
"full_text_license": null,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "arxiv-papers-0000.json.gz:25927",
"submitter": "Christopher Malon",
"url": "https://arxiv.org/abs/2002.12344"
} | arxiv-papers | # Generating Followup Questions for Interpretable Multi-hop Question Answering
Christopher Malon
NEC Laboratories America
Princeton, NJ 08540
<EMAIL_ADDRESS>
Bing Bai
NEC Laboratories America
Princeton, NJ 08540
<EMAIL_ADDRESS>
###### Abstract
We propose a framework for answering open domain multi-hop questions in which
partial information is read and used to generate followup questions, to
finally be answered by a pretrained single-hop answer extractor. This
framework makes each hop interpretable, and makes the retrieval associated
with later hops as flexible and specific as for the first hop. As a first
instantiation of this framework, we train a pointer-generator network to
predict followup questions based on the question and partial information. This
provides a novel application of a neural question generation network, which is
applied to give weak ground truth single-hop followup questions based on the
final answers and their supporting facts. Learning to generate followup
questions that select the relevant answer spans against downstream supporting
facts, while avoiding distracting premises, poses an exciting semantic
challenge for text generation. We present an evaluation using the two-hop
bridge questions of HotpotQA.
## 1 Introduction
Multi-hop question answering tests the ability of a system to retrieve and
combine multiple facts to answer a single question. HotpotQA Yang et al.
(2018) introduces a task where questions are free-form text, supporting facts
come from Wikipedia, and answer text and supporting facts are labeled. The
questions in HotpotQA are further categorized as bridge-type questions or
comparison-type questions. For comparison questions, often all necessary facts
may be retrieved using terms in the question itself. For challenging bridge-
type questions, it may not be possible to retrieve all the necessary facts
based on the terms present in the original question alone. Rather, partial
information must first be retrieved and used to formulate an additional query.
Although many systems have been submitted to the HotpotQA leaderboard,
surprisingly, only a few have directly addressed the challenge of followups.
Systems can either be evaluated in a distractor setting, where a set of ten
paragraphs containing all supporting facts is provided, or in a full wiki
setting, where supporting facts must be retrieved from all of Wikipedia. The
systems that compete only in the distractor setting can achieve good
performance by combining and ranking the information provided, without
performing followup search queries. Furthermore, even in the distractor
setting, Min et al. (2019a) found that only 27% of the questions required
multi-hop reasoning, because additional evidence was redundant or unnecessary
or the distractors were weak. They trained a single-hop model that considered
each paragraph in isolation and ranked confidences of the answers extracted
from each, to obtain competitive performance.
Of the nine systems with documentation submitted to the full wiki HotpotQA
leaderboard as of 24 November 2019, four of them (Nie et al., 2019; Ye et al.,
2019; Nishida et al., 2019; Yang et al., 2018) attempt to retrieve all
relevant data with one search based on the original question, without any
followups. Fang et al. (2019) retrieves second hop paragraphs simply by
following hyperlinks from or to the first hop paragraphs.
Qi et al. (2019), Ding et al. (2019), and Feldman and El-Yaniv (2019) form
various kinds of followup queries without writing a new question to be
answered. Qi et al. (2019) trains a span extractor to predict the longest
common subsequence between the question plus the first hop evidence and the
(unseen) second hop evidence. At inference time, these predicted spans become
followup search queries. In Ding et al. (2019), a span extractor is trained
using the titles of the second hop evidence. Feldman and El-Yaniv (2019)
trains a neural retrieval model that uses maximum inner product with an
encoding of the question plus first hop evidence to retrieve second hop
evidence.
Min et al. (2019b) forms not just followup queries but followup questions.
They use additional specially labeled data to train a pointer network to
divide the original question into substrings, and use handcrafted rules to
convert these substrings into subquestions. The original question is answered
by the second subquestion, which incorporates a substitution of the answer to
the first subquestion.
While performing followup retrievals of some sort should be essential for
correctly solving the most difficult multi-hop problems, formulating a
followup question whose answer becomes the answer to the original question is
motivated primarily by interpretability rather than accuracy. In this paper,
we pursue a trained approach to generating followup questions that is not
bound by handcrafted rules, posing a new and challenging application for
abstractive summarization and neural question generation technologies. Our
contributions are to define the task of a followup generator module (Section
2), to propose a fully trained solution to followup generation (Section 3),
and to establish an objective evaluation of followup generators (Section 5).
## 2 Problem Setting
Our technique is specifically designed to address the challenge of discovering
that new information, not specified by the terms of the original question, is
needed. At the highest level, comparison questions do not pose this
challenge, because each quantity to be compared is specified by part of the
original question. (They also have different semantics than bridge questions
because a comparison must be applied after retrieving answers to the
subquestions.) Therefore we focus only on bridge questions in this paper.
Figure 1: The architecture of our system to generate intermediate questions
for answer extraction.
Figure 1 shows our pipeline to answer a multi-hop bridge question. As partial
information is obtained, an original question is iteratively reduced to
simpler questions generated at each hop. Given an input question or
subquestion, possible premises which may answer the subquestion are obtained
from an information retrieval module. Each possible premise is classified
against the question as irrelevant, containing a final answer, or containing
intermediate information, by a three-way controller module. For premises that
contain a final answer, the answer is extracted with a single-hop question
answering module. For premises that contain intermediate information, a
question generator produces a followup question, and the process may be
repeated with respect to this new question. It is this question generator that
is the focus of this paper. Various strategies may be used to manage the
multiple reasoning paths that may be produced by the controller. Details are
in Section 5.
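For concreteness, one simple control strategy can be sketched as follows. The `retrieve`, `controller`, `single_hop`, and `followup` callables stand in for the four modules of Figure 1, and the stop-at-first-answer policy shown is just one of the strategies alluded to above, not necessarily the deployed one:

```python
def answer(question, retrieve, controller, single_hop, followup, max_hops=2):
    """Hypothetical sketch of the Figure 1 loop: route each retrieved
    premise through the controller, extract final answers, and spawn
    followup questions for intermediate premises."""
    frontier, answers = [question], []
    for _ in range(max_hops):
        next_frontier = []
        for q in frontier:
            for premise in retrieve(q):            # IR module
                label = controller(q, premise)     # Irrel / Final / Intermediate
                if label == "Final":
                    span = single_hop(q, premise)  # single-hop answer extractor
                    if span is not None:
                        answers.append(span)
                elif label == "Intermediate":
                    next_frontier.append(followup(q, premise))
        if answers:                                # stop at first productive hop
            break
        frontier = next_frontier
    return answers
```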
Although our method applies to bridge questions with an arbitrary number of hops,
for simplicity we focus on two-hop problems and on training the followup
question generator. Let $Cont$ denote the controller, $SingleHop$ denote the
answer extractor, and $Followup$ denote the followup generator. Let $Q_{1}$ be
a question with answer $A$ and gold supporting premises $\hat{P_{1}}$ and
$\hat{P_{2}}$, and suppose that $\hat{P_{2}}$ but not $\hat{P_{1}}$ contains
the answer. The task of the followup generator is to use $Q_{1}$ and
$\hat{P_{1}}$ to generate a followup question $Q_{2}$ such that
$\displaystyle SingleHop(Q_{2},\hat{P_{2}})=A$ (1)
$\displaystyle Cont(Q_{2},\hat{P_{2}})=Final$ (2)
$\displaystyle\text{and}\;Cont(Q_{2},P)=Irrel\ \text{for}\ P\neq\hat{P_{2}}.$ (3)
Failure of any of these desiderata could harm label accuracy in the HotpotQA
full wiki or distractor evaluations.
Some questions labeled as bridge type in HotpotQA have a different logical
structure, called “intersection” by Min et al. (2019b). Here the subquestions
specify different properties that the answer entity is supposed to satisfy,
and the intersection of possible answers to the subquestions is the answer to
the original question. Our approach is not oriented towards this type of
question, but there is no trivial way to exclude them from the dataset.
One non-interpretable implementation of our pipeline would be for $Followup$
to simply output $Q_{1}$ concatenated with $P_{1}$ as the “followup question.”
Then $SingleHop$ would operate on input that really does not take the form of
a single question, along with $P_{2}$, to determine the final answer.
Effectively, $SingleHop$ would be doing multi-hop reasoning. To ensure that
$Followup$ gets credit only for forming real followup questions, we insist
that $SingleHop$ is first trained as a single-hop answer extractor, by
training it on SQuAD 2.0 (Rajpurkar et al., 2018), and then frozen while
$Followup$ and $Cont$ are trained.
## 3 Method
Ideally, we might train $Followup$ using cross entropy losses inspired by
equations 1, 2, and 3 with $SingleHop$ and $Cont$ fixed, but the decoded
output $Q_{2}$ is not differentiable with respect to $Followup$ parameters.
Instead, we train $Followup$ with a token-based loss against a set of weakly
labeled ground truth followup questions.
The weakly labeled ground truth followups are obtained using a neural question
generation (QG) network. Given a context $\overline{C}$ and an answer
$\overline{A}$, QG is the task of finding a question
$\overline{Q}=\mbox{argmax}_{Q}Prob(Q|\overline{C},\overline{A})$ (4)
most likely to have produced it. We use reverse SQuAD to train the QG model of
Zhao et al. (2018), which performs near the top of an extensive set of models
tested by Tuan et al. (2019) and has an independent implementation available.
Applied to our training set with $\overline{C}=\hat{P_{2}}$ and
$\overline{A}=A$, it gives us a weak ground truth followup $\overline{Q_{2}}$.
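Sketched in Python, this weak-labeling pass is straightforward; `qg_model` stands in for the trained QG network of Zhao et al. (2018), and the example layout is an assumption:

```python
def make_weak_followups(examples, qg_model):
    """For each training example with question q1, intermediate premise p1,
    answer premise p2, and answer a, produce the weak ground-truth followup
    Q2_bar = argmax_Q Prob(Q | C=p2, A=a), per Equation 4."""
    pairs = []
    for ex in examples:
        q2_bar = qg_model(context=ex["p2"], answer=ex["answer"])
        # (q1, p1) -> q2_bar becomes one training pair for Followup
        pairs.append(((ex["q1"], ex["p1"]), q2_bar))
    return pairs
```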
We instantiate the followup question generator, which uses $Q_{1}$ and $P_{1}$
to predict $Q_{2}$, with a pointer-generator network (See et al., 2017). This
is a sequence to sequence model whose decoder repeatedly chooses between
generating a word from a fixed vocabulary and copying a word from the input.
Typically, pointer-generator networks are used for abstractive summarization.
Although the output serves a different role here, their copy mechanism is
useful in constructing a followup that uses information from the original
question and premise.
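For reference, the copy mechanism's final word distribution at one decoder step can be sketched as below (in-vocabulary case only; the extended vocabulary that See et al. (2017) use for out-of-vocabulary source words is omitted here):

```python
import numpy as np

def final_distribution(p_vocab, attention, source_ids, p_gen):
    """P(w) = p_gen * P_vocab(w) + (1 - p_gen) * sum of attention mass on
    source positions holding w (See et al., 2017, in-vocabulary case).

    p_vocab:    (V,) softmax over the fixed vocabulary
    attention:  (T,) attention weights over the T source tokens
    source_ids: (T,) integer vocabulary ids of the source tokens
    p_gen:      scalar generation probability in [0, 1]
    """
    copy_dist = np.zeros_like(p_vocab)
    np.add.at(copy_dist, source_ids, attention)  # scatter-add copy mass
    return p_gen * p_vocab + (1.0 - p_gen) * copy_dist
```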
We train $Cont$ with cross-entropy loss for ternary classification on the
ground truth triples $(Q_{1},\hat{P_{1}},Intermediate)$,
$(Q_{1},\hat{P_{2}},Final)$ if $SingleHop(Q_{1},\hat{P_{2}})\cap
A\neq\emptyset$, and $(Q_{1},P,Irrel)$ for all other $P$. In this way the
controller learns to predict when a premise has sufficient or necessary
information to answer a question. Both $Cont$ and $SingleHop$ are implemented
by BERT following the code by Devlin et al. (2019).
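The assembly of these ternary examples for one training question can be sketched as follows; the `overlaps` span test and the argument layout are assumptions:

```python
def controller_triples(q1, p1_hat, p2_hat, answer, other_premises,
                       single_hop, overlaps):
    """Ground-truth triples for Cont: p1_hat is Intermediate, p2_hat is
    Final only if the frozen extractor recovers (part of) the answer,
    and every remaining premise is Irrel."""
    triples = [(q1, p1_hat, "Intermediate")]
    span = single_hop(q1, p2_hat)
    if span is not None and overlaps(span, answer):
        triples.append((q1, p2_hat, "Final"))
    triples.extend((q1, p, "Irrel") for p in other_premises)
    return triples
```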
## 4 Related Work
Evaluating a followup question generator by whether its questions are answered
correctly is analogous to verifying the factual accuracy of abstractive
summarizations, which has been studied by many, including Falke et al. (2019),
who estimate factual correctness using a natural language inference model, and
find that it does not correlate with ROUGE score. Contemporaneous work by
Zhang et al. (2019) uses feedback from a fact extractor in reinforcement
learning to optimize the correctness of a summary, suggesting an interesting
future direction for our work.
A recent neural question generation model has incorporated feedback from an
answer extractor into the training of a question generator, rewarding the
generator for constructing questions the extractor can answer correctly (Klein
and Nabi, 2019). Although the loss is not backpropagated through both the
generator and extractor, the generator is penalized by token level loss
against ground truth questions when the question is answered wrongly, but by
zero loss when it constructs a variant that the extractor answers correctly.
## 5 Experiments
To isolate the effect of our followup generator on the types of questions for
which it was intended, our experiments cover the subset of questions in
HotpotQA labeled with exactly two supporting facts, with the answer string
occurring in exactly one of them. There are 38,806 such questions for training
and 3,214 for development, which we use for testing because the structure of
the official test set is not available. For a baseline we compare to a trivial
followup generator that returns the original question $Q_{1}$ without any
rewriting.
| EM | F1
---|---|---
Oracle setting | |
Trained $Q_{2}$ | 14.7 | 19.0
$Q_{2}=Q_{1}$ | 27.6 | 34.9
$Q_{1}$ else $Q_{2}$ | 34.7 | 43.8
Full system | |
One hop ($Q_{1}$ only) | 16.8 | 21.5
Two hops (trained $Q_{2}$) | 19.8 | 25.4
Table 1: Answer accuracy on filtered subset of HotpotQA development set in the distractor setting.

$Q_{1}$ | Truth | $Q_{1}$ Answer | $Q_{2}$ | $Q_{2}$ Answer
---|---|---|---|---
Randall Cunningham II was a multi-sport athlete at the high school located in what Nevada city? | Summerlin | — | where is bishop gorman high school located? | Summerlin, Nevada
Alexander Kerensky was defeated and destroyed by the Bolsheviks in the course of a civil war that ended when? | October 1922 | — | what was the name of the russian civil war? | The Russian Civil War
Peter Marc Jacobson is best known as the co-creator of the popular sitcom ”The Nanny”, which he created and wrote with his then wife an actress born in which year ? | 1957 | 1993 | what year was fran drescher born in? | 1957
Who did the Star and Dagger bass player marry? | Sean Yseult. | Sean Yseult | what was the name of the american rock musician? | Chris Lee
Table 2: Example generated followup questions $Q_{2}$, evaluated against
oracle $\hat{P_{2}}$.
First, we evaluate performance using an oracle controller, which forwards only
$(Q_{1},\hat{P_{1}})$ to the followup generator, and only
$(Q_{2},\hat{P_{2}})$ to the answer extractor. Results are shown in Table 1.
Best performance is achieved using the system “$Q_{1}$ else $Q_{2}$,” which
answers with $SingleHop(Q_{1},\hat{P_{2}})$ or $SingleHop(Q_{2},\hat{P_{2}})$,
whichever is non-null. Thus, although many questions are really single-hop and
best answered using the original question, using the followup questions when a
single-hop answer cannot be found helps the F1 score by 8.9%. Table 2 shows
followup generations and extracted answers in two typical successful and two
typical failed cases.
Next we consider the full system of Figure 1. We use the distractor paragraphs
provided. We run the loop for up to two hops, collecting all answer
extractions requested by the controller, stopping after the first hop where a
non-null extracted answer was obtained. If multiple extractions were requested
for the same problem, we take the answer for which $SingleHop$ had the highest
confidence. The controller requested 2,989 followups, and sent 975 $(Q,P)$
pairs for answer extraction in hop one, and 1,180 in hop two. The performance
gain shows that the followup generator often can generate questions which are
good enough for the frozen single hop model to understand and extract the
answer with, even when the question must be specific enough to avoid
distracting premises.
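In code, this stopping and tie-breaking policy amounts to a few lines (the per-hop data layout here is illustrative, not the actual implementation):

```python
def select_answer(extractions_per_hop):
    """extractions_per_hop: per-hop lists of (answer, confidence) pairs.
    Stop at the first hop with any non-null extraction; break ties by
    the single-hop extractor's confidence."""
    for hop_results in extractions_per_hop:
        found = [(a, c) for a, c in hop_results if a is not None]
        if found:
            return max(found, key=lambda ac: ac[1])[0]
    return None
```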
## 6 Conclusion
Followup queries are essential to solving the difficult cases of multi-hop QA,
and real followup questions are an advance in making this process
interpretable. We have shown that pointer generator networks can effectively
learn to read partial information and produce a fluent, relevant question
about what is not known, which is a complement to their typical role in
summarizing what is known. Our task poses a novel challenge that tests
semantic properties of the generated output.
By using a neural question generator to produce weak ground truth followups,
we have made this task more tractable. Future work should examine using
feedback from the answer extractor or controller to improve the sensitivity
and specificity of the generated followups. Additionally, the approach should
be developed on new datasets such as QASC (Khot et al., 2019), which are
designed to make single-hop retrieval less effective.
## References
* Devlin et al. (2019) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_ , pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
* Ding et al. (2019) Ming Ding, Chang Zhou, Qibin Chen, Hongxia Yang, and Jie Tang. 2019. Cognitive graph for multi-hop reading comprehension at scale. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_ , pages 2694–2703, Florence, Italy. Association for Computational Linguistics.
* Falke et al. (2019) Tobias Falke, Leonardo F. R. Ribeiro, Prasetya Ajie Utama, Ido Dagan, and Iryna Gurevych. 2019. Ranking generated summaries by correctness: An interesting but challenging application for natural language inference. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_ , pages 2214–2220, Florence, Italy. Association for Computational Linguistics.
* Fang et al. (2019) Yuwei Fang, Siqi Sun, Zhe Gan, Rohit Pillai, Shuohang Wang, and Jingjing Liu. 2019\. Hierarchical graph network for multi-hop question answering. _CoRR_ , 1911.03631.
* Feldman and El-Yaniv (2019) Yair Feldman and Ran El-Yaniv. 2019. Multi-hop paragraph retrieval for open-domain question answering. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_ , pages 2296–2309, Florence, Italy. Association for Computational Linguistics.
* Khot et al. (2019) Tushar Khot, Peter Clark, Michal Guerquin, Peter Jansen, and Ashish Sabharwal. 2019\. Qasc: A dataset for question answering via sentence composition. _CoRR_ , 1910.11473.
* Klein and Nabi (2019) Tassilo Klein and Moin Nabi. 2019. Learning to answer by learning to ask: Getting the best of gpt-2 and bert worlds. _CoRR_ , 1911.02365.
* Min et al. (2019a) Sewon Min, Eric Wallace, Sameer Singh, Matt Gardner, Hannaneh Hajishirzi, and Luke Zettlemoyer. 2019a. Compositional questions do not necessitate multi-hop reasoning. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_ , pages 4249–4257, Florence, Italy. Association for Computational Linguistics.
* Min et al. (2019b) Sewon Min, Victor Zhong, Luke Zettlemoyer, and Hannaneh Hajishirzi. 2019b. Multi-hop reading comprehension through question decomposition and rescoring. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_ , pages 6097–6109, Florence, Italy. Association for Computational Linguistics.
* Nie et al. (2019) Yixin Nie, Songhe Wang, and Mohit Bansal. 2019. Revealing the importance of semantic retrieval for machine reading at scale. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_ , pages 2553–2566, Hong Kong, China. Association for Computational Linguistics.
* Nishida et al. (2019) Kosuke Nishida, Kyosuke Nishida, Masaaki Nagata, Atsushi Otsuka, Itsumi Saito, Hisako Asano, and Junji Tomita. 2019. Answering while summarizing: Multi-task learning for multi-hop QA with evidence extraction. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_ , pages 2335–2345, Florence, Italy. Association for Computational Linguistics.
* Qi et al. (2019) Peng Qi, Xiaowen Lin, Leo Mehr, Zijian Wang, and Christopher D. Manning. 2019. Answering complex open-domain questions through iterative query generation. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_ , pages 2590–2602, Hong Kong, China. Association for Computational Linguistics.
* Rajpurkar et al. (2018) Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you don’t know: Unanswerable questions for SQuAD. In _Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)_ , pages 784–789, Melbourne, Australia. Association for Computational Linguistics.
* See et al. (2017) Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointer-generator networks. In _Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 1073–1083, Vancouver, Canada. Association for Computational Linguistics.
* Tuan et al. (2019) Luu Anh Tuan, Darsh J Shah, and Regina Barzilay. 2019. Capturing greater context for question generation. _CoRR_ , 1910.10274.
* Yang et al. (2018) Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William Cohen, Ruslan Salakhutdinov, and Christopher D. Manning. 2018. HotpotQA: A dataset for diverse, explainable multi-hop question answering. In _Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing_ , pages 2369–2380, Brussels, Belgium. Association for Computational Linguistics.
* Ye et al. (2019) Deming Ye, Yankai Lin, Zhenghao Liu, Zhiyuan Liu, and Maosong Sun. 2019. Multi-paragraph reasoning with knowledge-enhanced graph neural network. _CoRR_ , 1911.02170.
* Zhang et al. (2019) Yuhao Zhang, Derek Merck, Emily Bao Tsai, Christopher D. Manning, and Curtis P. Langlotz. 2019. Optimizing the factual correctness of a summary: A study of summarizing radiology reports. _CoRR_ , 1911.02541.
* Zhao et al. (2018) Yao Zhao, Xiaochuan Ni, Yuanyuan Ding, and Qifa Ke. 2018. Paragraph-level neural question generation with maxout pointer and gated self-attention networks. In _Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing_ , pages 3901–3910, Brussels, Belgium. Association for Computational Linguistics.
|
2024-09-04T02:54:55.279759 | 2020-02-27T19:09:33 | 2002.12393 | {
"authors": "Tarique Siddiqui, Alekh Jindal, Shi Qiao, Hiren Patel, Wangchao le",
"full_text_license": null,
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"provenance": "arxiv-papers-0000.json.gz:25928",
"submitter": "Tarique Siddiqui",
"url": "https://arxiv.org/abs/2002.12393"
} | arxiv-papers | # Cleo: Learned Cost Models for Query Optimization in Shared Clouds
Tarique Siddiqui1,2, Alekh Jindal1, Shi Qiao1, Hiren Patel1, Wangchao Le1
1Microsoft 2University of Illinois, Urbana-Champaign
(2020)
###### Abstract.
Query processing over big data is ubiquitous in modern clouds, where the
system takes care of picking both the physical query execution plans _and_ the
resources needed to run those plans, using a cost-based query optimizer. A
good cost model, therefore, is akin to better resource efficiency and lower
operational costs. Unfortunately, the production workloads at Microsoft show
that costs are very complex to model for big data systems. In this work, we
investigate two key questions: (i) can we learn accurate cost models for big
data systems, and (ii) can we integrate the learned models within the query
optimizer. To answer these, we make three core contributions. First, we
exploit workload patterns to learn a large number of individual cost models
and combine them to achieve high accuracy and coverage over a long period.
Second, we propose extensions to Cascades framework to pick optimal resources,
i.e, number of containers, during query planning. And third, we integrate the
learned cost models within the Cascade-style query optimizer of SCOPE at
Microsoft. We evaluate the resulting system, Cleo, in a production environment
using both production and TPC-H workloads. Our results show that the learned
cost models are $2$ to $3$ orders of magnitude more accurate, and $20\times$
more correlated with the actual runtimes, with a large majority ($70\%$) of
the plan changes leading to substantial improvements in latency as well as
resource usage.
## 1\. Introduction
Figure 1. Impact of manual tuning and cardinality feedback on cost models in
SCOPE
There is a renewed interest in cost-based query optimization in big data
systems, particularly in modern cloud data services (e.g., Athena (aws-athena,
), ADLA (adla, ), BigSQL (ibm-bigsql, ), and BigQuery (google-bigquery, ))
that are responsible for picking both the query execution plans and the
resources (e.g., number of containers) needed to run those plans. Accurate
cost models are therefore crucial for generating efficient combination of plan
and resources. Yet, the traditional wisdom from relational databases is that
cost models are less important and fixing cardinalities automatically fixes
the cost estimation (leis2015good, ; lohman2014query, ). The question is
whether this also holds for the new breed of big data systems. To dig deeper,
we analyzed one day’s worth of query logs from the big data infrastructure
(SCOPE (scopeVLDB2008, ; scopeVLDBJ12, )) at Microsoft. We feed back the
actual runtime cardinalities, i.e., the ideal estimates that any cardinality
estimator, including learned models (stillger2001leo, ; cardLearner, ;
lightweightCardModels, ; corrJoinsCardModels, ) can achieve. Figure 1 compares
the ratio of cost estimates with the actual runtimes for two cost models in
SCOPE: 1) a default cost model, and 2) a manually-tuned cost model that is
partially available for limited workloads. The vertical dashed-line at
$10^{0}$ corresponds to an ideal situation where all cost estimates are equal
to the actual runtimes. Thus, the closer a curve is to the dashed line, the
more accurate it is.
The dotted lines in Figure 1(b) show that fixing cardinalities reduces the
over-estimation, but there is still a wide gap between the estimated and the
actual costs, with the Pearson correlation being as low as $0.09$. This is due
to the complexity of big data systems coupled with the variance in cloud
environments (cloudVariance, ), which makes cost modeling incredibly
difficult. Furthermore, any improvements in cost modeling need to be
consistent across workloads and over time since performance spikes are
detrimental to the reliability expectations of enterprise customers. Thus,
accurate cost modeling is still a challenge in SCOPE-like big data systems.
In this paper, we explore the following two questions:
* (1)
Can we learn accurate, yet robust cost models for big data systems? This is
motivated by the presence of massive workloads visible in modern cloud
services that can be harnessed to accurately model the runtime behavior of
queries. This helps not only in dealing with the various complexities in the
cloud, but also specializing or instance optimizing (marcus2019neo, ) to
specific customers or workloads, which is often highly desirable.
Additionally, in contrast to years of experience needed to tune traditional
optimizers, learned cost models are potentially easy to update at a regular
frequency.
* (2)
Can we effectively integrate learned cost models within the query optimizer?
This stems from the observation that while some prior works have considered
learning models for predicting query execution times for a given physical plan
in traditional databases (ganapathi2009predicting, ; bach2002kernel, ;
akdere2012learning, ; li2012robust, ), none of them have integrated learned
models within a query optimizer for selecting physical plans. Moreover, in big
data systems, resources (in particular the number of machines) play a
significant role in cost estimation (raqo, ), making the integration even more
challenging. Thus, we investigate the effects of learned cost models on query
plans by _extending_ the SCOPE query optimizer in a _minimally invasive_ way
for _predicting costs_ in a _resource-aware manner_. To the best of our
knowledge, this is the first work to integrate learned cost models within an
industry-strength query optimizer.
Our key ideas are as follows. We note that the cloud workloads are quite
diverse in nature, i.e., there is no representative workload to tune the query
optimizer, and hence there is no single cost model that fits the entire
workload, i.e., no-one-size-fits-all. Therefore, we learn a large collection
of smaller-sized cost models, one for each common subexpressions that are
typically abundant in production query workloads (cloudviews, ; bigsubs, ).
While this approach results in specialized cost models that are very accurate,
the models do not cover the entire workload: expressions that are not common
across queries do not have models. The other extreme is to learn a cost model
per operator, which covers the entire workload but sacrifices the accuracy
with very general models. Thus, _there is an accuracy-coverage trade-off that
makes cost modeling challenging_. To address this, we define the notion of
cost model _robustness_ with three desired properties: (i) high accuracy, (ii)
high coverage, and (iii) high retention, i.e., stable performance for a long-
time period before retraining. We achieve these properties in two steps:
First, we bridge the accuracy-coverage gap by learning additional mutually
enhancing models that improve the coverage as well as the accuracy. Then, we
learn a combined model that automatically corrects and combines the
predictions from multiple individual models, providing accurate and stable
predictions for a sufficiently long window (e.g., more than 10 days).
We implemented our ideas in a Cloud LEarning Optimizer (Cleo) and integrated
it within SCOPE. Cleo uses a feedback loop to periodically train and update
the learned cost models within the Cascades-style top-down query planning
(cascades95, ) in SCOPE. We extend the optimizer to invoke the learned models,
instead of the default cost models, to estimate the cost of candidate
operators. However, in big data systems, the cost depends heavily on the
resources used (e.g., number of machines for each operator) by the optimizer
(raqo, ). Therefore, we extend the Cascades framework to explore resources,
and propose mechanisms to explore and derive optimal number of machines for
each stage in a query plan. Moreover, instead of using handcrafted heuristics
or assuming fixed resources, we leverage the learned cost models to find
optimal resources as part of query planning, thereby _using learned models for
producing both runtime as well as resource-optimal plans_.
In summary, our key contributions are as follows.
* (1)
We motivate the cost estimation problem from production workloads at
Microsoft, including prior attempts for manually improving the cost model
(Section 2).
* (2)
We propose machine learning techniques to learn highly accurate cost models.
Instead of building a generic cost model for the entire workload, we learn a
large collection of smaller specialized models that are resource-aware and
highly accurate in predicting the runtime costs (Section 3).
* (3)
We describe the accuracy and coverage trade-off in learned cost models, show
the two extremes, and propose additional models to bridge the gap. We combine
the predictions from individual models into a robust model that provides the
best of both accuracy and coverage over a long period (Section 4).
* (4)
We describe integrating Cleo within SCOPE, including periodic training,
feedback loop, model invocations during optimization, and novel extensions for
finding the optimal resources for a query plan (Section 5).
* (5)
Finally, we present a detailed evaluation of Cleo, using both the production
workloads and the TPC-H benchmark. Our results show that Cleo improves the
correlation between predicted cost and actual runtimes from $0.1$ to $0.75$,
the accuracy by $2$ to $3$ orders of magnitude, and the performance for 70% of
the changed plans (Section 6). In Section 6.7, we further describe practical
techniques to address performance regressions in our production settings.
## 2\. Motivation
In this section, we give an overview of SCOPE, its workload and query
optimizer, and motivate the cost modeling problem from production workloads at
Microsoft.
### 2.1. Overview of SCOPE
SCOPE (scopeVLDB2008, ; scopeVLDBJ12, ) is the big data system used for
internal data analytics across the whole of Microsoft to analyze and improve
its various products. It runs on a hyperscale infrastructure consisting of
hundreds of thousands of machines, running a massive workload of hundreds of
thousands of jobs per day that process exabytes of data. SCOPE exposes a job
service interface where users submit their analytical queries and the system
takes care of automatically provisioning resources and running queries in a
distributed environment.
SCOPE query processor partitions data into smaller subsets and processes them
in parallel. The number of machines running in parallel (i.e., degree of
parallelism) depends on the number of partitions of the input. When no
specific partitioning is required by upstream operators, certain physical
operators (e.g., Extract and Exchange (also called Shuffle)), decide partition
counts based on data statistics and heuristics. The sequence of intermediate
operators that operate over the same set of input partitions are grouped into
a stage — all operators in a stage run on the same set of machines. Except for
selected scenarios, Exchange operator is commonly used to re-partition data
between two stages.
### 2.2. Recurring Workloads
SCOPE workloads primarily consist of recurring jobs. A recurring job in SCOPE
is used to provide periodic (e.g., hourly, six-hourly, daily, etc.) analytical
result for a specific application functionality. Typically, a recurring job
consists of a script template that accepts different input parameters similar
to SQL modules. Each instance of the recurring job runs on different input
data, parameters and have potentially different statements. As a result, each
instance is different in terms of input/output sizes, query execution plan,
total compute hour, end-to-end latency, etc. Figure 2 shows $150$ instances of
an hourly recurring job that extracts facts from a production clickstream.
Over these $150$ instances, we can see a big change in the total input size
and the total execution time, from $69,859$ GiB to $118,625$ GiB and from $40$
mins and $50$ seconds to $2$ hours and $21$ minutes respectively. Note that a
smaller portion of SCOPE workload is ad-hoc as well. Figure 3 shows our
analysis from four of the production clusters. We can see that $7\%-20\%$ jobs
are ad-hoc on a daily basis, with the fraction varying over different clusters
and different days. However, compared to ad-hoc jobs, recurring jobs represent
long term business logic with critical value, and hence the focus of several
prior research works (bruno2013continuous, ; recurrJobOpt, ;
recurrJobOptScope, ; redoop, ; rope, ; jockey, ; jyothi2016morpheus, ;
cloudviews, ; bigsubs, ; cardLearner, ) and also the primary focus for
performance improvement in this paper.
Figure 2. $150$ instances of an hourly recurring job that extracts facts from
a production clickstream.
Figure 3. Illustrating ad-hoc jobs in SCOPE.
### 2.3. Overview of SCOPE Optimizer
SCOPE uses a cost-based optimizer based on the Cascades Framework (cascades95,
) for generating the execution plan for a given query. Cascades (cascades95, )
transforms a logical plan using multiple tasks: (i) Optimize Groups, (ii)
Optimize Expressions, (iii) Explore Groups, (iv) Explore Expressions, and
(v) Optimize Inputs. While the first four tasks search for candidate plans via
transformation rules, our focus in this work is essentially on the Optimize
Inputs task, where the cost of a physical operator is estimated. The cost of
an operator is modeled to capture its runtime latency, estimated using a
combination of data statistics and hand-crafted heuristics developed over many
years. Cascades performs optimization in a top-down fashion, where physical
operators higher in the plan are identified first. The exclusive (or local)
costs of physical operators are computed and combined with costs of children
operators to estimate the total cost. In some cases, operators can have a
_required property_ (e.g., sorting, grouping) from its parent that it must
satisfy, as well as can have a derived property from its children operators.
In this work, we optimize _how partition counts are derived_ as it is a key
factor in cost estimation for massively parallel data systems. Overall, our
goal is to improve the cost estimates with minimal changes to the Cascades
framework. We next analyze the accuracy of current cost models in SCOPE.
### 2.4. Cost Model Accuracy
The solid red line in Figure 1 shows that the cost estimates from the default
cost model range between an under-estimate of $100\times$ to an over-estimate
of $1000\times$, with a Pearson correlation of just $0.04$. As mentioned in
the introduction, this is because of the difficulty in modeling the highly
complex big data systems. Current cost models rely on hand-crafted heuristics
that combine statistics (e.g., cardinality, average row length) in complex
ways to estimate each operator’s execution time. These estimates are usually
way off and get worse with constantly changing workloads and systems in cloud
environments. Big data systems, like SCOPE, further suffer from the widespread
use of custom user code that ends up as black boxes in the cost models.
One could consider improving a cost model by considering newer hardware and
software configurations, such as machine SKUs, operator implementations, or
workload characteristics. SCOPE team did attempt this path and put in
significant efforts to improve their default cost model. This alternate cost
model is available for SCOPE queries under a flag. We turned this flag on and
compared the costs from the improved model with the default one. Figure 1b
shows the alternate model in solid blue line. We see that the correlation
improves from $0.04$ to $0.10$ and the ratio curve for the manually improved
cost model shifts a bit up, i.e., it reduces the over-estimation. However, it
still suffers from the wide gap between the estimated and actual costs, again
indicating that cost modeling is non-trivial in these environments.
Finally, as discussed in the introduction and shown as dotted lines in Figure
1(b), fixing cardinalities to the perfect values, i.e., that best that any
cardinality estimator (stillger2001leo, ; cardLearner, ;
lightweightCardModels, ; corrJoinsCardModels, ) could achieve, does not fill
the gap between the estimated and the actual costs in SCOPE-like systems.
## 3\. Learned Cost Models
In this section, we describe how we can leverage the common sub-expressions
abundant in big data systems to learn a large set of smaller-sized but highly
accurate cost models.
We note that it is practically infeasible to learn a single global model that
is equally effective for all operators. This is why even traditional query
optimizers model each operator separately. A single model is prone to errors
because operators can have very different performance behavior (e.g., hash
group by versus merge join), and even the performance of the same operator can
vary drastically depending on interactions with underneath operators via
pipelining, sharing of sorting and grouping properties, as well as the
underlying software or hardware platform (or the cloud instance). In addition,
because of the complexity, learning a single model requires a large number of
features, which can be prohibitively expensive to extract and combine for every
candidate operator during query optimization.
### 3.1. Specialized Cost Models
As described in Section 2.2, shared cloud environments often have a large
portion of recurring analytical queries (or jobs), i.e., the same business
logic is applied to newer instances of the datasets that arrive at regular
intervals (e.g., daily or weekly). Due to shared inputs, such recurring jobs
often end up having one or more common subexpressions across them. For
instance, the SCOPE query processing system at Microsoft has more than $50\%$
of jobs as recurring, with a large fraction of them appearing daily
(jyothi2016morpheus, ), and as high as 60% having common subexpressions
between them (cloudviews, ; bigsubs, ; cardLearner, ). Common subexpression
patterns have also been reported in other production workloads, including
Spark SQL queries in Microsoft’s HDInsight (sparkcruise, ), SQL queries from
risk control department at Ant Financial Services Group (equitas, ), and
iterative machine learning workflows (helix, ).
Figure 4. Illustrating common subexpressions.
Figure 4 illustrates a common subexpression, consisting of a scan followed by
a filter, between two queries. We exploit these common subexpressions by
learning a large number of specialized models, one for each unique operator-
subgraph template representing the subexpression. An operator-subgraph
template covers the root physical operator (e.g., Filter) and all prior
(descendants) operators (e.g., scan) from the root operator of the
subexpression. However, parameters and inputs in operator-subgraphs _can vary
over time_ , and are used as features for the model (along with other logical
and physical features) as discussed in Section 3.3.
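To illustrate, one plausible way to key models by operator-subgraph template is sketched below; the exact canonicalization used in the implementation is not spelled out here, so this encoding (root physical operator plus an ordered listing of the operators beneath it) should be read as an assumption:

```python
from dataclasses import dataclass, field

@dataclass
class PlanNode:
    physical_op: str                     # e.g., "Filter", "HashJoin", "Scan"
    children: list = field(default_factory=list)

def subgraph_template_key(root: PlanNode):
    """Key = root physical operator + preorder listing of all operators
    beneath it; inputs and parameters are featurized, not keyed."""
    def preorder(n):
        ops = [n.physical_op]
        for child in n.children:
            ops.extend(preorder(child))
        return ops
    return (root.physical_op, tuple(preorder(root)[1:]))

# e.g., a Filter over a Scan maps to the key ("Filter", ("Scan",))
key = subgraph_template_key(PlanNode("Filter", [PlanNode("Scan")]))
```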
The operator-subgraph templates essentially _capture the context of root
operator, i.e., learn the behavior of root physical operator conditioned on the
operators beneath it in the query plan_. This is helpful because of two
reasons. First, the execution time of an operator depends on whether it is
running in a pipelined manner, or is blocked until the completion of
underneath operators. For example, the latency of a hash operator running on
top of a filter operator is typically smaller compared to when running over a
sort operator. Similarly, the grouping and sorting properties of operators
beneath the root operator can influence the latency of root operator
(cascades95, ).
Second, the estimation errors (e.g., of cardinality) grow quickly as we move
up the query plan, with each intermediate operator building upon the errors of
children operators. The operator-subgraph models mitigates this issue
partially since the intermediate operators are fixed and the cost of root
operator depends only on the leaf level inputs. Moreover, when the estimation
errors are systematically off by certain factors (e.g., 10x), the subgraph
models can adjust the weights such that the predictions are close to actual
(discussed subsequently in Section 3.4). This is similar to adjustments
learned explicitly in prior cardinality estimation work (stillger2001leo, ).
These adjustments generalize well since recurring jobs share similar schemas
and the data distributions remain relatively stable, even as the input sizes
change over time. Accurate cardinality estimations are, however, still needed
in cases where simple adjustment factors do not exist (cardLearner, ).
Next, we discuss the learning settings, feature selection, and our choice of
learning algorithm for operator-subgraphs. In Section 5, we describe the
training and integration of learned models with the query optimizer.
### 3.2. Learning Settings
Target variable. Given an operator-subgraph template, we learn the _exclusive
cost_ of the root operator as our target. At every intermediate operator, we
predict the exclusive cost of the operator conditioned on the subgraph below
it. The exclusive cost is then combined with the costs of the children
subgraphs to compute the total cost of the sub-graph, similar to how default
cost models combine the costs.
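Schematically, reusing the `PlanNode` sketch above, the composition can be read as the following recursion; the plain sum is an illustrative choice, since the actual combination follows SCOPE's default rules (e.g., accounting for pipelining and parallelism):

```python
def total_cost(node, predict_exclusive):
    """Total cost of a subgraph = learned exclusive cost of its root
    conditioned on the subgraph below it, plus the combined total costs
    of the children subgraphs (shown here as a plain sum)."""
    return predict_exclusive(node) + sum(
        total_cost(child, predict_exclusive) for child in node.children)
```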
Loss Function | Median Error
---|---
Median Absolute Error | 246%
Mean Absolute Error | 62%
Mean Squared Error | 36%
Mean Squared-Log Error | 14%
Table 1. Median error using 5-fold CV over the production workload for
regression loss functions
Loss function. As the loss function, we use mean-squared log error between the
predicted exclusive cost ($p$) and actual exclusive latency ($a$) of the
operator: $\frac{\sum_{n}(\log(p+1)-\log(a+1))^{2}}{n}$, where $1$ is added for
mathematical convenience. Table 1 compares the average median errors using
5-fold cross validation (CV) of mean-squared log error with other commonly
used regression loss functions, using elastic net as the learning model
(described subsequently in Section 3.4). We note that not taking the log
transformation makes learning more sensitive to extremely large differences
between actual and predicted costs. However, large differences often occur
when the job’s running time itself is long or even due to outlier samples
because of machine or network failures (typical in big data systems). We,
therefore, minimize the relative error (since
$\log(p+1)-\log(a+1)=\log(\frac{p+1}{a+1})$), which reduces the penalty for large
differences. Moreover, our chosen loss function helps penalize under-
estimation more than over-estimation, since under-estimation can lead to
under-allocation of resources, which is typically decided based on cost estimates.
Finally, log transformation implicitly ensures that the predicted costs are
always positive.
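In code, the objective is a one-liner (a numpy sketch, using natural logarithms):

```python
import numpy as np

def mean_squared_log_error(predicted, actual):
    """MSLE between predicted exclusive costs p and actual exclusive
    latencies a; equals the mean of log((p+1)/(a+1))**2, i.e., a
    relative-error measure that damps outlier runtimes."""
    p, a = np.asarray(predicted, float), np.asarray(actual, float)
    return np.mean((np.log(p + 1.0) - np.log(a + 1.0)) ** 2)
```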
### 3.3. Feature Selection
It is expensive to extract and combine a large number of features every time
we predict the cost of an operator — a typical query plan in big data systems
can involve 10s of physical operators, each of which can have multiple
possible candidates. Moreover, large feature sets require a larger number of
training samples, while many operator-subgraph instances typically have much
fewer samples. Thus, we perform an offline analysis to identify a small set of
useful features.
For selecting features, we start with a set of basic statistics that are
frequently used for estimating costs of operators in the default cost model.
These include the cardinality, the average row length, and the number of
partitions. We consider three kinds of cardinalities: 1) base cardinality: the
total input cardinality of the leaf operators in the subgraph, 2) input
cardinality: the total input cardinality from the children operators, and 3)
output cardinality: the output cardinality of the operator-subgraph. We also
consider normalized inputs (ignoring dates and numbers) and parameters to the
job that typically vary over time for the recurring jobs. We ignore features
such as job name or cluster name, since they could induce strong bias and make
the model brittle to the smallest change in names.
We further combine basic features to create additional derived features to
capture the behavior of operator implementations and other heuristics used in
default cost models. We start with a large space of possible derivations by
applying (i) logarithms, square root, squares, and cubes of basic features,
(ii) pairwise products among basic features and derived features listed in
(i), and (iii) cardinality features divided by the number of partitions (i.e.
machine). Given this set of candidate features, we use a variant of elastic
net (zou2005regularization, ) model to select a subset of useful features that
have at least one non-zero weight over all subgraph models.
Feature | Description
---|---
Input Cardinality (I) | Total Input Cardinality from children operators
Base Cardinality (B) | Total Input Cardinality at the leaf operators
Output Cardinality (C) | Output Cardinality from the current operator
AverageRowLength (L) | Length (in bytes) of each tuple
Number of Partitions (P) | Number of partitions allocated to the operator
Input (IN) | Normalized Inputs (ignored dates, numbers)
Parameters (PM) | Parameters
Table 2. Basic Features
Category | Features
---|---
Input or Output data | $\sqrt{I}$, $\sqrt{B}$, L*I, L*B, L*log(B), L*log(I), L*log(C)
Input $\times$ Output | B*C, I*C, B*log(C), I*log(C), log(I)*log(C), log(B)*log(C)
Input or Output per partition | I/P, C/P, I*L/P, C*L/P, $\sqrt{I}$/P, $\sqrt{C}$/P, log(I)/P
Table 3. Derived Features
Table 2 and Table 3 depict the selected basic and derived features with non-
zero weights. We group the derived features into three categories (i) input or
output data, capturing the amount of data read, or written, (ii) the product
of input and output, covering the data processing and network communication
aspects, and finally (iii) per-machine input or output, capturing the
partition size.
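These derivations are cheap to compute from the basic statistics of Table 2; a sketch, assuming natural logarithms and strictly positive statistics:

```python
import numpy as np

def derived_features(I, B, C, L, P):
    """I/B/C: input, base, and output cardinalities; L: average row
    length; P: partition count (Tables 2 and 3)."""
    log_I, log_B, log_C = np.log(I), np.log(B), np.log(C)
    return {
        # input or output data
        "sqrt(I)": np.sqrt(I), "sqrt(B)": np.sqrt(B),
        "L*I": L * I, "L*B": L * B,
        "L*log(B)": L * log_B, "L*log(I)": L * log_I, "L*log(C)": L * log_C,
        # input x output
        "B*C": B * C, "I*C": I * C,
        "B*log(C)": B * log_C, "I*log(C)": I * log_C,
        "log(I)*log(C)": log_I * log_C, "log(B)*log(C)": log_B * log_C,
        # input or output per partition
        "I/P": I / P, "C/P": C / P, "I*L/P": I * L / P, "C*L/P": C * L / P,
        "sqrt(I)/P": np.sqrt(I) / P, "sqrt(C)/P": np.sqrt(C) / P,
        "log(I)/P": log_I / P,
    }
```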
Further, we analyze the influence of each feature. While the influence of each
feature varies over different subgraph models, Figure 5 shows the aggregated
influence over all subgraph models of each feature. Given a total of $K$ non
zero features and $N$ subgraph models, with $w_{in}$ as the weight of feature
$i$ in model $n$, we measure the influence of feature $i$ using normalized
weight $nw_{i}$, as
$nw_{i}=\frac{\sum_{n=1}^{N}|w_{in}|}{\sum_{k=1}^{K}\sum_{n=1}^{N}|w_{kn}|}$.
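Equivalently, arranging the learned weights as an $N\times K$ matrix, the influence computation is a two-liner (numpy sketch):

```python
import numpy as np

def feature_influence(W):
    """W[n, k] = weight of feature k in subgraph model n.
    Returns nw, where nw[k] is feature k's share of the total
    absolute weight across all N models."""
    abs_w = np.abs(np.asarray(W, float))
    return abs_w.sum(axis=0) / abs_w.sum()
```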
Figure 5. Feature weights (Op-Subgraph model)
### 3.4. Choice of learning model
For learning costs over operator-subgraphs, we considered a number of variants
of linear-regression, SVM, decision tree and their ensembles, as well as
neural network models. On 5-fold cross-validation over our production
workload, the following models give more accurate results compared to the
default cost model: (i) Neural network: 3 layers, hidden layer size = 30,
solver = adam, activation = relu, l2 regularization = 0.005; (ii) Decision
tree: depth = 15; (iii) Random forest: number of trees = 20, depth = 5; (iv)
FastTree Regression (a variant of Gradient Boosting Tree): number of trees =
20, depth = 5; and (v) Elastic net: $\alpha$ = 1.0, fit intercept = True, l1
ratio = 0.5.
We observe that the strict structure of subgraph template helps reduce the
complexity, making the simpler models, e.g., linear- and decision tree-based
regression models, perform reasonably well with the chosen set of features. A
large number of operator-subgraph templates have few training samples, e.g.,
more than half of the subgraph instances have $<30$ training samples for the
workload described in Section 2. In addition, because of the variance in cloud
environments (e.g., workload fluctuations, machine failures, etc.), training
samples can have noise both in their features (e.g., inaccurate statistics
estimates) and the class labels (i.e., execution times of past queries).
Together, both these factors lead to over-fitting, making complex models such
as neural network as well as ensemble-based models such as gradient-boost
perform worse.
Elastic net (zou2005regularization, ), an $L_{1}$\- and $L_{2}$-regularized
linear regression model, on the other hand, is relatively less prone to
overfitting. In many cases, the number of candidate features (ranging between
$25$ to $30$) is as many as the number of samples, while only a select few
features are usually relevant for a given subgraph. The relevant features
further tend to differ across subgraph instances. Elastic net helps perform
_automatic feature selection_ , by selecting a few relevant predictors for
each subgraph independently. Thus, we train all subgraphs with the same set of
features, and let elastic net select the relevant ones. Another advantage of
elastic net model is that it is intuitive and easily interpretable, like the
default cost models which are also weighted sums of a number of statistics.
This is an important requirement for effective debugging and analysis of
production jobs.
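A sketch of this per-template training loop using scikit-learn's ElasticNet is shown below; the sample layout is illustrative, while the hyperparameters, the log-transformed target, and the 5-occurrence floor (Section 4.1) follow the text:

```python
from collections import defaultdict
import numpy as np
from sklearn.linear_model import ElasticNet

def train_subgraph_models(samples, min_samples=5):
    """samples: iterable of (template_key, feature_vector, runtime).
    Fits one elastic net per operator-subgraph template on
    log(runtime + 1), matching the loss of Section 3.2."""
    by_template = defaultdict(list)
    for key, x, runtime in samples:
        by_template[key].append((x, runtime))
    models = {}
    for key, rows in by_template.items():
        if len(rows) < min_samples:
            continue
        X = np.array([x for x, _ in rows])
        y = np.log(np.array([r for _, r in rows]) + 1.0)
        models[key] = ElasticNet(alpha=1.0, l1_ratio=0.5,
                                 fit_intercept=True).fit(X, y)
    return models  # invert at prediction time: exp(model.predict(x)) - 1
```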
Table 4 depicts the Pearson correlation and median error of the five machine
learning models over the production workload. We see that operator-subgraphs
models trained using elastic net can make sufficiently accurate cost
predictions (14% median error), with a high correlation (more than $0.92$)
with the actual runtime, a substantial improvement over the default cost model
(median error of $258$% and Pearson correlation of $0.04$). In addition,
elastic net models are fast to invoke during query optimization and have a low
storage footprint, which makes it feasible for us to learn specialized
models for each possible subgraph.
## 4\. Robustness
We now discuss how we learn robust cost models. As defined in Section 1,
robust cost models cover the entire workload with high accuracy for a
substantial time period before requiring retraining. In contrast to prior work
on robust query processing (markl2004robust, ; dutt2016plan, ; lookahead-ip, )
that either modify the plan during query execution, or execute multiple plans
simultaneously, we leverage the massive cloud workloads to learn robust models
offline and integrate them with the optimizer to generate one robust plan with
minimum runtime overhead. In this section, we first explain the coverage and
accuracy tradeoff for the operator-subgraph model. Then, we discuss the other
extreme, namely an operator model, and introduce additional models to bridge
the gap between the two extremes. Finally, we discuss how we combine
predictions from individual models to achieve robustness.
### 4.1. Accuracy-Coverage Tradeoff
The operator-subgraph model presented in Section 3 is highly specialized. As a
result, this model is likely to be highly accurate. Unfortunately, the
operator-subgraph model does not cover subgraphs that are not repeated in the
training dataset, i.e., it has limited coverage. For example, over $1$ day of
Microsoft production workloads, operator-subgraphs have learned models for
only 54% of the subgraphs. Note that we create a learned model for a subgraph
if it has at least $5$ occurrences over the single day worth of training data.
Thus, it is difficult to predict costs for arbitrary query plans consisting of
subgraphs never seen in training dataset.
The other extreme: Operator model. In contrast to the operator-subgraph model,
we can learn a model for each physical operator, similar to how traditional
query optimizers model the cost. The operator models estimate the execution
latency of a query by composing the costs of individual operators in a
hierarchical manner akin to how default cost models derive query costs. As a
result, operator models can predict the cost of any query in the workload,
including those previously unseen in the training dataset. However, similar to
traditional cost models, operator models also suffer from poor accuracy since
the behavior of an operator changes based on what operations appear below it.
Furthermore, the estimation errors in the features or statistics at the lower
level operators of a query plan are propagated to the upper-level operators,
which significantly degrades the final prediction accuracy. On $5$-fold cross-
validation over $1$ day of Microsoft production workloads, operator models
result in $42\%$ median error and $0.77$ Pearson correlation, which although
better than the default cost model (258% median error and 0.04 Pearson
correlation), is relatively lower compared to that of operator-subgraph models
(14% median error and $0.92$ Pearson correlation). Thus, there is an accuracy
and coverage tradeoff when learning cost models, with operator-subgraph and
operator models being the two extremes of this tradeoff.
### 4.2. Bridging the Gap
We now present additional models that fall between the two extreme models in
terms of the accuracy-coverage trade-off.
Operator-input model. An improvement to per-operator is to learn a model for
all jobs that share similar inputs. Similar inputs also tend to have similar
schema and similar data distribution even as the size of the data changes over
time; thus, operator models learned over similar inputs often generalize over
future job invocations. In particular, we create a model for each operator and
input template combination. An input template is a normalized input where we
ignore the dates, numbers, and parts of names that change over time for the
same recurring input, thereby allowing grouping of jobs that run on the same
input schema over different sessions. Further, to partially capture the
context, we featurize the intermediate subgraph by introducing two additional
features: 1) the number of logical operators in the subgraph (CL) and 2) the
depth of the physical operator in the sub-graph (D). This helps in
distinguishing subgraph instances that are extremely different from each
other.
Model | Correlation | Median Error
---|---|---
Default | 0.04 | 258%
Neural Network | 0.89 | 27%
Decision Tree | 0.91 | 19%
Fast-Tree regression | 0.90 | 20%
Random Forest | 0.89 | 32%
Elastic net | 0.92 | 14%
Table 4. Correlation and error w.r.t. actual runtime for the operator-
subgraphs
Figure 6. Feature weights (all other models)
Operator-subgraphApprox model. While operator-subgraph models exploit the
overlapping and recurring nature of big data analytics, there is also a large
number (about 15-20%) of subgraphs that are similar but are not exactly the
same. To bridge this gap, we relax the notion of subgraph similarity, and
learn one model for all subgraphs that have the same inputs and the same
_approximate_ underlying subgraph. We consider two subgraphs to be
approximately same if they have the same physical operator at the root, and
consist of the _same frequency of each logical operator in the underneath
subgraph (ignoring the ordering between operators)._ Thus, there are two
relaxations: (1) we use frequency of logical operators instead of physical
operators (note that this is one of the additional features in Operator-input
model) and (ii) we ignore the ordering between operators. This relaxed
similarity criteria allows grouping of similar subgraphs without substantially
reducing the coverage. Overall, operator-subgraphApprox model is a hybrid of
the operator-subgraph and operator-input models: it achieves much higher
accuracy compared to operator or operator-input models, and more coverage
compared to operator-subgraph model.
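The relaxed similarity criterion can be captured by a simple grouping key, as in the following sketch (the data structures are hypothetical simplifications):

```python
# A minimal sketch of the relaxed subgraph key: root physical operator +
# multiset of logical operators underneath, ignoring operator order.
from collections import Counter

def approx_subgraph_key(root_physical_op, logical_ops_below, inputs):
    # frozenset of (operator, frequency) pairs ignores ordering by construction
    freq = frozenset(Counter(logical_ops_below).items())
    return (root_physical_op, freq, tuple(sorted(inputs)))

k1 = approx_subgraph_key("HashJoin", ["Filter", "Scan", "Scan"], ["A", "B"])
k2 = approx_subgraph_key("HashJoin", ["Scan", "Filter", "Scan"], ["B", "A"])
assert k1 == k2  # same inputs, same operator frequencies -> same model
```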
Table 5 depicts the Pearson correlation, the median error using 5-fold
cross-validation, as well as the coverage of the individual cost models using
elastic net over production workloads. As we move from more specialized to
more generalized models (i.e., from operator-subgraph to operator-subgraphApprox
to operator-input to operator), we see that the model accuracy decreases while
the coverage over the workload increases. Figure 6 shows the feature weights
for each of the intermediate models. We see that while the weights for
specialized models, like the operator-subgraph model (Figure 5), are
concentrated on a few features, the weights for more generalized models, like
the per-operator model, are more evenly distributed.
### 4.3. The Combined Model
Given multiple learned models with varying accuracy and coverage, the
strawman approach is to select a learned model in decreasing order of
accuracy over the training data, starting from operator-subgraph, to
operator-subgraphApprox, to operator-input, to operator models. However, as
discussed earlier, there are subgraph instances where more accurate models
perform poorly on test data due to over-fitting.
Figure 7. Heatmap of errors over $42K$ operators from production jobs. Each
point in the heatmap depicts the error of the learned cost with respect to
the actual runtime.
Model | Correlation | Median Error | Coverage
---|---|---|---
Default | 0.04 | 258% | 100%
Op-Subgraph | 0.92 | 14% | 54%
Op-Subgraph Approx | 0.89 | 16% | 76%
Op-Input | 0.85 | 18% | 83%
Operator | 0.77 | 42% | 100%
Combined | 0.84 | 19% | 100%
Table 5. Performance of learned models w.r.t. actual runtimes
To illustrate, Figure 7 depicts the heat-map representation of the accuracy
and coverage of different individual models, over more than $42K$ operator
instances from our production workloads. Each point in the heat-map represents
the error of the predicted cost with respect to the actual runtime: the more
green the point, the less the error. The empty region at the top of the chart
indicates that the learned model does not cover those subgraph instances. We
can see that the predictions are mostly accurate for operator-subgraph models,
while operator models have relatively more error (i.e., less green) than the
operator-subgraph models. Operator-input models have more coverage with
marginally more error (less green) compared to operator-subgraphs. However,
for regions b, d, and f, as marked on the left side of the figure, we notice
that operator-input performs better (more blue and green) than
operator-subgraph. This is because operators in those regions have far fewer
training samples, which results in over-fitting. Operator-input, on the other
hand, has more training samples, and thus performs better. Thus, it is
difficult to decide on a rule or threshold that always selects the
best-performing model for a given subgraph instance.
Model | Correlation | Median Error
---|---|---
Default | 0.04 | 258%
Neural Network | 0.79 | 31%
Decision Tree | 0.73 | 41%
FastTree Regression | 0.84 | 19%
Random Forest | 0.80 | 28%
Elastic net | 0.68 | 64%
Table 6. Correlation and error w.r.t. actual runtimes for the combined model
Learning a meta-ensemble model. We introduce a meta-ensemble model that uses
the predictions from the specialized models as meta features, along with the
following extra features: (i) cardinalities (I, B, C), (ii) cardinalities per
partition (I/P, B/P, C/P), and (iii) number of partitions (P), to output a
more accurate cost. Table 6 depicts the performance of the different machine
learning models that we use as a meta-learner on our production workloads. We
see that FastTree regression [fasttree] results in the most accurate
predictions. FastTree regression is a variant of gradient boosted regression
trees [friedman2002stochastic] that uses an efficient implementation of the
MART gradient boosting algorithm [mart]. It builds a series of regression
trees (estimators), with each successive tree fitting on the residual of the
trees that precede it. Using $5$-fold cross-validation, we find that a
maximum of only $20$ regression trees, with mean-squared log error as the
loss function and a sub-sampling rate of $0.9$, is sufficient for optimal
performance. As depicted in Figure 7, using FastTree regression as a
meta-learner has three key advantages.
First, FastTree regression can effectively characterize the space where each
model performs well. The regression trees recursively split the space defined
by the predictions from individual models and features, creating fine-grained
partitions such that prediction in each partition are highly accurate.
Second, FastTree regression performs better for operator instances where
individual models perform worse. For example, some of the red dots found in
outlier regions b, d and f of the individual model heat-maps are missing in
the combined model. This is because FastTree regression makes use of sub-
sampling to build each of the regression trees in the ensemble, making it more
resilient to overfitting and noise in execution times of prior queries.
Similarly, for region a where subgraph predictions are missing, FastTree
regression creates an extremely large number of fine-grained splits using
extra features and operator predictions to give even better accuracy compared
to Operator models.
Finally, the combined model is naturally capable of covering all possible
plans, since it uses the operator model as one of its predictors. Further, as
depicted in Figure 7, the accuracy of the combined model is comparable to
that of the specialized models, and almost always better than that of the
operator model. The combined model is also flexible enough to incorporate
additional models or features, or to replace one model with another. However,
also including the default cost model as a predictor did not result in any
improvement on SCOPE. A minimal stand-in for this meta-learner is sketched
below; we then discuss how we integrate the learned models within the query
optimizer.
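The sketch below uses scikit-learn's GradientBoostingRegressor as a stand-in for FastTree regression (both are MART-style boosted trees); the synthetic meta-features and the log-transform approximation of the mean-squared log error loss are our assumptions:

```python
# A simplified stand-in for the combined model. Meta-features are the
# individual models' predictions plus the extra cardinality/partition
# features; hyperparameters follow the text (20 trees, 0.9 subsampling);
# fitting on log1p of the runtime approximates an MSLE-style loss.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 1000
# columns: subgraph, subgraphApprox, input, operator predictions; I, B, C; P
X_meta = rng.uniform(1, 100, size=(n, 8))
y_runtime = X_meta[:, 0] * rng.uniform(0.8, 1.2, n)  # toy ground truth

combined = GradientBoostingRegressor(n_estimators=20, subsample=0.9)
combined.fit(X_meta, np.log1p(y_runtime))
pred = np.expm1(combined.predict(X_meta))
print(np.median(np.abs(pred - y_runtime) / y_runtime))  # median rel. error
```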
## 5\. Optimizer Integration
In this section, we describe our two-fold integration of Cleo within SCOPE.
First, we discuss our end-to-end feedback loop to learn cost models and
generate predictions during optimization. Then, we discuss how we extend the
optimizer for supporting resource-exploration using learned models.
(a) Resource-aware planning
(b) Example query plan
(c) Model look-ups for partition exploration
Figure 8. Integrating learned cost models with the query optimizer
### 5.1. Integrating Learned Cost Models
The integration of learned models with the query optimizer involves the
following three components.
Instrumentation and Logging. Big data systems, such as SCOPE, are already
instrumented to collect logs of query plan statistics, such as cardinalities
and estimated costs, as well as runtime traces for analysis and debugging.
For uniquely identifying each operator and its subplan, query optimizers
annotate each operator with a _signature_ [bruno2013continuous], a 64-bit
hash value that can be computed recursively in a bottom-up fashion by
combining (i) the signatures of the children operators, (ii) a hash of the
current operator’s name, and (iii) a hash of the operator’s logical
properties. We extend the optimizer to compute three more signatures, one for
each individual sub-graph model, and to extract additional statistics (i.e.,
features) that were missing for Cleo. Since all signatures can be computed
simultaneously in the same recursion, and there are only $25$-$30$ features
(most of which were already extracted), the additional overhead is minimal
($\leq$ 10%), as we describe in Section 6. This overhead includes both the
logging and the model lookup that we discuss subsequently.
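The following sketch illustrates the recursive signature computation; the hash function and the encoding of logical properties are our assumptions (SCOPE truncates to a 64-bit value):

```python
import hashlib

def signature(op_name: str, logical_props: str, child_sigs: list) -> int:
    """64-bit signature from children signatures, operator name, and
    logical properties (hash choice is an assumption on our part)."""
    h = hashlib.sha256()
    for s in child_sigs:                  # (i) signatures of children
        h.update(s.to_bytes(8, "little"))
    h.update(op_name.encode())            # (ii) hash of operator name
    h.update(logical_props.encode())      # (iii) hash of logical properties
    return int.from_bytes(h.digest()[:8], "little")  # truncate to 64 bits

left = signature("Scan", "table=clicks", [])
right = signature("Scan", "table=users", [])
root = signature("HashJoin", "keys=uid", [left, right])
```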
Training and Feedback. Given the logs of past query runs, we learn each of
the four individual elastic net-based models independently and in parallel
using our SCOPE-based parallel model trainer. Experimentally, we found that a
training window of two days and a training frequency of every ten days result
in acceptable accuracy and coverage (Section 6). We then use the predictions
from the individual models on the next day's queries to learn the combined
FastTree regression model. Since the individual models can be trained in
parallel, the overall training time is modest: it takes less than $45$
minutes to train $25K$ models from over $50,000$ jobs using a cluster of
$200$ nodes. Once trained, we serialize the models and feed them back to the
optimizer. The models can be served either from a text file, using an
additional compiler flag, or from a web service backed by a SQL database.
Look-up. All models relevant to a cluster are loaded upfront by the
optimizer into a hash map keyed by model signatures, to avoid expensive
lookup calls during optimization. When loaded simultaneously, all $25K$
models together take about $600$ MB of memory, which is within an acceptable
range. Finally, for cost estimation, we modify the Optimize Input phase of
the Cascades optimizer to invoke learned models. Figure 8(a) highlights the
key steps that we added in blue. Essentially, we replace the calls to the
default cost models with learned model invocations (step $10$ in Figure 8(a))
to predict the exclusive cost of an operator, which is then combined with the
costs of the children operators, similar to how default cost models combine
costs. Moreover, since there is an operator model and a combined model for
every physical operator, Cleo can cover all possible query plans, and can
even generate a plan unseen in the training data. All the features that the
learned models need are available during query optimization. However, we do
not reuse the partition count derived by the default cost model; rather, we
try to find a more optimal partition count (steps $3$, $4$, $9$), since it
drives the query latency. A sketch of the look-up step is shown below; we
then discuss the problem with existing partition count selection and our
solution in the next subsection.
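The look-up step can be summarized as follows; the names and structures are hypothetical simplifications of the optimizer internals:

```python
# A sketch of the look-up step: probe preloaded per-kind hash maps with each
# signature, collect whichever individual predictions exist, then ask the
# combined model for the operator's exclusive cost.
KINDS = ("subgraph", "subgraph_approx", "op_input", "operator")

def learned_exclusive_cost(signatures, features, models, combined_model):
    preds = {}
    for kind in KINDS:
        model = models[kind].get(signatures[kind])  # dict: signature -> model
        if model is not None:
            preds[kind] = model(features)
    return combined_model(preds, features)

# Toy usage: the operator model always exists, so coverage is 100%.
models = {k: {} for k in KINDS}
models["operator"][42] = lambda f: 0.01 * f["C"]
cost = learned_exclusive_cost(
    {"subgraph": 7, "subgraph_approx": 7, "op_input": 7, "operator": 42},
    {"C": 1e6}, models, lambda p, f: sum(p.values()))
print(cost)
```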
### 5.2. Resource-aware Query Planning
The degree of parallelism (i.e., the number of machines or containers
allocated to each operator) is a key factor in determining the runtime of
queries in massively parallel databases [raqo], and it implicitly depends on
the partition count. This makes the partition count an important feature in
determining the cost of an operator (as noted in Figures 5–6).
Unfortunately, in existing optimizers, the partition count is not explored for
all operators; rather, partitioning operators (e.g., Exchange for stage $2$ in
Figure 8(b)) set the partition count for the entire stage based on their local
statistics [yint2018bubble]. The operators above them in the same stage simply
derive the partition count set by the partitioning operator. For example, in
stage $2$ of Figure 8(b), Exchange sets the partition count to $2$, as that
results in its smallest local cost (i.e., $15$). The operators above Exchange
(i.e., Reduce and Output) derive the same partition count, resulting in a
total cost of $305$ for the entire stage. However, we can see that a partition
count of $16$ results in a much lower overall cost of $125$, even though it is
not locally optimal for Exchange. Thus, _not optimizing the partition count
for the entire stage results in a sub-optimal plan_.
To address this, we explore partition counts during query planning, making
query planning resource-aware. Figures 8(a) and 8(b) illustrate our
resource-aware query planning approach. We introduce the notion of a
_resource-context_ , within the optimizer context, for tracking the costs of
partitions across the operators in a stage. Furthermore, we add a _partition
exploration_ step, where each physical operator attaches a list of learned
costs for different partition counts to the resource-context (step 3 in Figure
8(a)). For example, in Figure 8(b), the resource-context for stage 2 shows the
learned cost for different partition counts for each of the three operators.
On reaching the stage boundary, the partitioning operator Exchange performs
_partition optimization_ (step $9$ in Figure 8(a)) to set its local partition
count to $16$, which results in the lowest total cost of $125$ across the
stage. Thereafter, the higher-level operators simply derive the selected
partition count (line 8 in Figure 8(a)), as in standard query planning,
and estimate their local costs using learned models. Note that when a
partition count comes as a required property [cascades95] from upstream
operators, we set the partition count to the required value without any
exploration (step 2 in Figure 8(a)).
SCOPE currently does not allow varying other resources; therefore, we focus
only on partition counts in this work. However, resource-aware query
planning, with the three new abstractions added to the Cascades framework,
namely the resource-context, partition exploration, and partition
optimization, is general enough to incorporate additional resources such as
memory sizes, numbers of cores, VM instance types, and other
infrastructure-level decisions, to jointly optimize for both plan and
resources. Moreover, our proposed extensions can also be applied to other big
data systems, such as Spark, Flink, and Calcite, that use variants of the
Cascades optimizer and follow a similar top-down query optimization as SCOPE.
Our experiments in Section 6 show that the resource-aware query planning not
only generates better plans in terms of latency, but also leads to resource
savings. However, the challenge is that estimating the cost for every single
partition count for each operator in the query plan can explode the search
space and make query planning infeasible. We discuss our approach to address
this next.
### 5.3. Efficient Resource Exploration
We now discuss two techniques for efficiently exploring the partition counts
in Cleo, without exploding the search space.
Sampling-based approach. Instead of considering every single partition count,
one option is to consider a uniform sample over the set of all possible
containers for the tenant or the cluster. However, it is the relative change
in partition count that influences the cost, e.g., a change from $1$ to $2$
partitions influences the cost more than a change from $1200$ to $1210$
partitions. Thus, we sample partition counts in a geometrically increasing
sequence, where sample $x_{i+1}$ is derived from the previous sample $x_{i}$
using $x_{i+1}=\left\lceil x_{i}+x_{i}/s\right\rceil$, with $x_{0}=1$ and
$x_{1}=2$. Here, $s$ is a skipping coefficient that decides the gap between
successive samples. A large $s$ leads to a large number of samples and more
accurate predictions, but at the cost of more model look-ups and longer
prediction time.
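The sampling sequence itself is straightforward to generate, for instance:

```python
# The geometric sampling sequence from the text: x_{i+1} = ceil(x_i + x_i/s),
# with x_0 = 1 and x_1 = 2; a larger skipping coefficient s yields denser
# samples.
import math

def partition_samples(s: float, p_max: int):
    samples, x = [1, 2], 2
    while True:
        x = math.ceil(x + x / s)
        if x > p_max:
            break
        samples.append(x)
    return samples

print(partition_samples(s=2, p_max=3000))  # 1, 2, 3, 5, 8, 12, 18, 27, ...
```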
Figure 9. Workload consisting of $0.5$ million jobs from $4$ different
production clusters over $3$ days
Analytical approach. We reuse the individual learned models to directly model
the relationship between the partition count and the cost of an operator. The
key insight is that only the features involving the partition count are
relevant for partition exploration, while the remaining features can be
treated as constants, since their values are fixed during partition
exploration. Thus, we can express the operator cost as follows:
$cost\propto\frac{(\theta_{1}*I+\theta_{2}*C+\theta_{3}*I*C)}{P}+\theta_{c}*P$
where $I$, $C$, and $P$ refer to the input cardinality, output cardinality,
and partition count, respectively. During optimization, we know $I$, $C$, and
$I*C$, therefore: $cost\propto\frac{\theta_{P}}{P}+\theta_{c}*P$. Extending
this across all $n$ operators in a stage, the cost can be modeled as:
$cost\propto\frac{\sum_{i=1}^{n}\theta_{P_{i}}}{P}+\sum_{i=1}^{n}\theta_{C_{i}}*P$
Thus, during partition exploration, each operator calculates $\theta_{P}$ and
$\theta_{C}$ and adds them to the resource-context, and the partitioning
operator selects the optimal partition count by optimizing the above function.
There are three possible scenarios: (i) $\sum_{i=1}^{n}\theta_{P_{i}}$ is
positive while $\sum_{i=1}^{n}\theta_{C_{i}}$ is negative: we can use the
maximum number of partitions for the stage, since there is no overhead to
increasing the number of partitions; (ii) $\sum_{i=1}^{n}\theta_{P_{i}}$ is
negative while $\sum_{i=1}^{n}\theta_{C_{i}}$ is positive: we set the
partition count to the minimum, since increasing the partition count increases
the cost; (iii) $\sum_{i=1}^{n}\theta_{P_{i}}$ and
$\sum_{i=1}^{n}\theta_{C_{i}}$ are either both positive or both negative: we
derive the optimal partition count by differentiating the cost equation with
respect to $P$. Overall, for $m$ physical operators and a maximum possible
partition count of $P_{max}$, the sampling approach makes
$5\cdot m\cdot\log_{\frac{s+1}{s}}P_{max}$ cost model look-ups, whereas the
analytical model needs only $5\cdot m$. Figure 8(c) shows the number of model
look-ups for the sampling and analytical approaches as we increase the number
of physical operators from $1$ to $40$ in a plan. While the analytical model
incurs a maximum of $200$ look-ups, the sampling approach can incur several
thousands, depending on the skipping coefficient. In Section 6.5, we further
compare the accuracy of the sampling strategy with the analytical model on the
production workload as we vary the sample size. Our results show that the
analytical model is at least $20\times$ more efficient than the sampling
approach for achieving the same accuracy. Thus, we use the analytical model as
our default partition exploration strategy.
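For concreteness, the three-way case analysis above can be written as follows (a sketch; the clamping bounds and rounding are our assumptions, and $a$, $b$ are assumed nonzero):

```python
# Choosing the stage's partition count, where a = sum of theta_P and
# b = sum of theta_C across the stage's operators.
import math

def optimal_partition_count(a: float, b: float, p_min: int = 1,
                            p_max: int = 3000) -> int:
    if a > 0 and b < 0:          # no penalty for more partitions
        return p_max
    if a < 0 and b > 0:          # more partitions only add cost
        return p_min
    # same sign: d/dP (a/P + b*P) = 0  =>  P* = sqrt(a/b)
    p_star = round(math.sqrt(a / b))
    return max(p_min, min(p_max, p_star))

print(optimal_partition_count(a=4000.0, b=0.5))  # ~sqrt(8000) = 89
```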
## 6\. Experiments
Figure 10. Illustrating workload changes over different clusters and different
days.
Figure 11. Cross-validation results of ML algorithms for each learned model on
Cluster 4 workload
| All jobs | Ad-hoc jobs
---|---|---
| Correlation | Median Error | 95%tile Error | Coverage | Correlation | Median Error | 95%tile Error | Coverage
Default | 0.12 | 182% | 12512% | 100% | 0.09 | 204% | 17791% | 100%
Op-Subgraph | 0.86 | 9% | 56% | 65% | 0.81 | 14% | 57% | 36%
Op-Subgraph Approx | 0.85 | 12% | 71% | 82% | 0.80 | 16% | 79% | 64%
Op-Input | 0.81 | 23% | 90% | 91% | 0.77 | 26% | 103% | 79%
Operator | 0.76 | 33% | 138% | 100% | 0.73 | 42% | 186% | 100%
Combined | 0.79 | 21% | 112% | 100% | 0.73 | 29% | 134% | 100%
Table 7. Breakdown of accuracy and coverage of each learned model for all jobs
and ad-hoc jobs separately on Cluster1.
In this section, we present an evaluation of our learned optimizer Cleo. For
fairness, we feed the same statistics (e.g., cardinality, average row length)
that are used by the SCOPE default cost model to the learned models. Our goals
are seven-fold: (i) to compare the prediction accuracy of our learned cost
models over all jobs as well as over only ad-hoc jobs across multiple
clusters, (ii) to test the coverage and accuracy of learned cost models over
varying test windows, (iii) to compare the Cleo cost estimates with those from
CardLearner, (iv) to explore why perfect cardinality estimates are not
sufficient for query optimization, (v) to evaluate the effectiveness of the
sampling strategies and the analytical approach proposed in Section 5.3 in
finding the optimal resource (i.e., partition count), (vi) to compare the
performance of plans produced by Cleo with those generated by the default
optimizer, using both the production workloads and the TPC-H benchmark, and
(vii) to understand the training and runtime overheads of using learned cost
models.
Cluster | Default (all jobs) | Learned (all jobs) | Learned (ad-hoc jobs)
---|---|---|---
| Correlation | Median Error | Correlation | Median Error | Correlation | Median Error
Cluster 1 | 0.12 | 182% | 0.79 | 21% | 0.73 | 29%
Cluster 2 | 0.08 | 256% | 0.77 | 33% | 0.75 | 40%
Cluster 3 | 0.15 | 165% | 0.83 | 26% | 0.81 | 38%
Cluster 4 | 0.05 | 153% | 0.74 | 15% | 0.72 | 26%
Table 8. Pearson correlation and median error of the default and combined
learned models over all jobs and ad-hoc jobs on each cluster.
Workload. As summarized in Figure 9, we consider a large workload trace from
$4$ different production clusters comprising $423$ virtual clusters, with
each virtual cluster roughly representing a business unit within Microsoft,
and consisting of a total of $\approx 0.5$ million jobs over $3$ days that ran
with a total processing time of $\approx 6$ million hours and used a total of
$\approx 1.4$ billion containers. The workload exhibits variations in terms of
the load (e.g., more than $3\times$ jobs on Cluster1 compared to Cluster4), as
well as in terms of job properties such as the average number of operators per
job ($50s$ in Cluster1 compared to $30s$ in Cluster4) and the average total
processing time of all containers per job (around $17$ hours in Cluster1
compared to around $5$ hours in Cluster4). The workload also varies across
days, from a $30\%$ decrease to a $20\%$ increase of different job
characteristics on different clusters (Figure 10). Finally, the workload
consists of a mix of both recurring and ad-hoc jobs, with about $7\%-20\%$
ad-hoc jobs on different clusters and different days.
Figure 12. Accuracy results on all jobs (recurring + ad-hoc) over four
different clusters
Figure 13. Accuracy results on only ad-hoc jobs over four different clusters.
### 6.1. Cross Validation of ML Models
We first compare the default cost model with the five machine learning
algorithms discussed in Section 3.4: (i) Elastic net, a regularized
linear-regression model, (ii) Decision Tree regressor, (iii) Random Forest,
(iv) Gradient Boosting Tree, and (v) Multilayer Perceptron regressor (MLP).
Figure 11 (a to d) depicts the $5$-fold cross-validation results of the ML
algorithms for the operator-subgraph, operator-input, operator, and combined
models, respectively. We skip the results for operator-subgraphApprox, as they
are similar to those of operator-input. We observe that all algorithms result
in better accuracy than the default cost model for each of the models. For the
operator-subgraph and operator-input models, the performance of most of the
algorithms (except the neural network) is highly accurate. This is because
there are a large number of specialized models, highly optimized for specific
sub-graph and input instances respectively. The accuracy degrades as the
heterogeneity of the models increases from operator-subgraph to operator-input
to operator. For individual models, the performance of elastic net and
decision tree is similar to or better than that of more complex models such as
the neural network and ensemble models. This is because the complex models are
more prone to overfitting and noise, as discussed in Section 3.4. For the
combined model, FastTree regression does better than the other models because
of its ability to better characterize the space where each model performs
well. The operator-subgraph and operator-input models have the highest
accuracy (at between $45$% and $85$% coverage), followed by the combined model
(at $100$% coverage) and then the operator model (at $100$% coverage).
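A condensed sketch of this cross-validation methodology is shown below, on synthetic data and with only two of the five learners; the real features follow Tables 2 and 3:

```python
# Compare candidate learners for one model class with 5-fold CV, scoring by
# median relative error (data here is synthetic).
import numpy as np
from sklearn.linear_model import ElasticNet
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import KFold

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(500, 6))
y = 50 * X[:, 0] + 10 * X[:, 1] * X[:, 2] + rng.normal(0, 1, 500) + 60

for name, model in [("Elastic net", ElasticNet(alpha=0.1)),
                    ("Decision tree", DecisionTreeRegressor(max_depth=6))]:
    errs = []
    for tr, te in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
        model.fit(X[tr], y[tr])
        pred = model.predict(X[te])
        errs.append(np.median(np.abs(pred - y[te]) / y[te]))
    print(name, float(np.mean(errs)))
```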
### 6.2. Accuracy
Next, we compare the accuracy and correlation of the learned models to those
of the default cost model for each of the clusters. We use the elastic net
model for the individual learned models and FastTree regression for the
combined model. We learn the models on day 1 and day 2, and predict on day 3
of the workload, as described in Section 5.1. Table 8 shows the Pearson
correlation and median error of the default cost model and the combined model,
for all jobs and for only ad-hoc jobs, on each of the clusters on day 3. We
further show the breakdown of results for each of the individual models on
cluster 1 jobs in Table 7. Figure 12 and Figure 13 show the CDF distribution
of the estimated-to-actual ratio of predicted costs for each of the learned
models over different clusters.
All jobs. We observe that the learned models result in $8\times$ to $10\times$
better accuracy and $6\times$ to $14\times$ better correlation compared to the
default cost model across the $4$ clusters. For operator-subgraph, the
performance is highly accurate (9% median error and 0.86 correlation), but at
a lower coverage of 65%. This is because there are a large number of
specialized models, highly optimized for specific sub-graph instances. As the
coverage increases from operator-subgraphApprox to operator-input to operator,
the accuracy decreases. Overall, the combined model is able to provide the
best of both worlds, i.e., accuracy close to that of the individual models and
$100$% coverage like that of the operator model. The $50^{th}$ and the
$95^{th}$ percentile errors of the combined model are about $10\times$ and
$1000\times$ better than the default SCOPE cost model. These results show that
it is possible to learn highly accurate cost models from the query workload.
Figure 14. Coverage and accuracy with respect to default optimizer over 1
month
Only ad-hoc jobs. Interestingly, _the accuracy over ad-hoc jobs drops slightly
but remains close to that over all jobs (Table 8 and Table 7)_. This is
because: (i) ad-hoc jobs can still have one or more subexpressions in common
with other jobs (e.g., they might be scanning and filtering the same input
before doing completely new aggregates), which helps them to leverage the
subgraph models learned from other jobs. This can be seen in Table 7: the
sub-graph learned models still have substantial coverage of subexpressions on
ad-hoc jobs. For example, the coverage of $64\%$ for the Op-Subgraph Approx
model means that $64\%$ of the sub-expressions on ad-hoc jobs had a matching
Op-Subgraph Approx learned model. (ii) Both the operator and the combined
models are learned on a per-operator basis, and have much lower error ($42\%$
and $29\%$) than the default model ($182\%$). This is because the query
processor, the operator implementations, etc. still remain the same, and their
behavior gets captured in the learned cost models. Thus, even if there is no
matching sub-expression in an ad-hoc job, the operator and the combined models
still result in better prediction accuracy.
### 6.3. Robustness
We now look at the robustness (as defined in Section 1) of learned models in
terms of accuracy, coverage, and retention over a month-long test window on
cluster 1 workloads.
Coverage over varying test window. Figure 14a depicts the coverage of the
different subgraph models as we vary the test window over a duration of one
month. The coverage of the per-operator and combined models is always $100\%$,
since there is one model for every physical operator. The coverage of the
per-subgraph models, the strictest among all, is about $58\%$ after $2$ days,
and decreases to $37\%$ after $28$ days. Similarly, the coverage of
per-subgraphApprox ranges between $75\%$ and $60\%$. The per-operator-input
model, on the other hand, remains stable between $78\%$ and $84\%$.
Error and correlation over varying test window. Figures 14b and 14c depict the
median and $95\%$ile error percentages, respectively, over a duration of one
month. While the median error percentage of learned models improves on the
default cost model predictions by $3\times$ to $15\times$, the $95\%$ile error
percentage is better by over three orders of magnitude. For the specialized
learned models, the error increases slowly over the first two weeks and then
grows much faster due to the decrease in coverage. Moreover, the $95\%$ile
error of the subgraph models grows worse than their median error. Similarly,
in Figure 14d, we see that predicted costs from learned models are much more
correlated with the actual runtimes, having a high Pearson correlation
(generally between $0.70$ and $0.96$), compared to the default cost models,
which have a very small correlation (around $0.1$). A high correlation over a
duration of one month shows that learned models can better discriminate
between two candidate physical operators. Overall, based on these results, we
believe re-training every $10$ days should be acceptable, with a median error
of about $20\%$, a $95\%$ile error of about $200\%$, and a Pearson correlation
of around $0.80$.
Robustness of the combined model. From Figure 14, we note that the combined
model (i) has 100% coverage, (ii) matches the accuracy of the best-performing
individual model at any time (visually illustrated in Figure 7), while having
a high correlation ($>$ 0.80) with actual runtimes, and (iii) gives relatively
stable performance with graceful degradation in accuracy over the longer run.
Together, these results show that the combined model in Cleo is indeed robust.
Figure 15. Comparison with CardLearner
Figure 16. Hash join operator having different weights over different sets of
sub-expressions.
Figure 17. Partition Exploration Accuracy vs. Efficiency
### 6.4. Impact of Cardinality
Comparison with CardLearner. Next, we compare the accuracy and the Pearson
correlation of Cleo and the default cost model with CardLearner [cardLearner],
a learning-based cardinality estimation approach that employs a Poisson
regression model to improve the cardinality estimates but uses the default
cost model to predict the cost. For comparison, we considered jobs from a
single virtual cluster from cluster 4, consisting of about $900$ jobs. While
Cleo and the default cost model use the cardinality estimates from the SCOPE
optimizer, we additionally consider a variant (Cleo + CardLearner) where Cleo
uses the cardinality estimates from CardLearner. Overall, we observe that the
median error of the default cost model with CardLearner (211%) is better than
that of the default cost model alone (236%), but still much worse than that of
Cleo (18%) and Cleo + CardLearner (13%). Figure 15 depicts the CDF of the
estimated-to-actual cost ratio, where we can see that by learning costs, Cleo
significantly reduces both the under-estimation and the over-estimation, while
CardLearner only marginally improves the accuracy for both the default cost
model and Cleo. Similarly, the Pearson correlation of CardLearner’s estimates
(0.01) is much worse than that of Cleo ($0.84$) and Cleo + CardLearner
($0.86$). These results are consistent with our findings from Section 2, where
we show that fixing the cardinality estimates is not sufficient for filling
the wide gap between the estimated and the actual costs in SCOPE-like systems.
Interestingly, we also observe that the Pearson correlation of CardLearner is
marginally less than that of the default cost model (0.04), in spite of its
better accuracy, which can happen when a model makes both over- and
under-estimations across different instances of the same operator.
Why cardinality alone is not sufficient? To understand why fixing cardinality
alone is not enough, we performed the following two micro-experiments on a
subset of jobs consisting of about $200K$ subexpressions from cluster 4.
1\. The need for a large number of features. First, starting with perfect
cardinality estimates (both input and output) as the only features, we fitted
an elastic net model to find the weights that lead to the minimum cost
estimation error (using the loss function described in Section 3.4). Then, we
incrementally added more features, also combining each feature with previously
selected ones, and retrained the model after every addition; a condensed
sketch of this experiment follows the observations below. Figure 18 shows the
decrease in cost model error as we cumulatively add features from left to
right. We use the same notation for features as defined in Table 2 and
Table 3.
Figure 18. Impact on median error as we cumulatively add features from left to
right. The first two features are perfect output (C) and input (I)
cardinalities.
We make the following observations.
* (1)
When using only perfect cardinalities, the median error is about $110\%$.
However, when adding more features, we observe that the error drops by more
than half to about $40\%$. This is because of the more complex environments
and queries in big data processing that are very hard to cost with just the
cardinalities (as also discussed in Section 2.3).
* (2)
Deriving additional statistics by transforming (e.g., sqrt, log, squares) the
input and output cardinalities, along with other features, helps in reducing
the error. However, it is hard to arrive at these transformations in
hand-coded cost models.
* (3)
We also see that features such as parameter (PM), input (IN), and partitions
(P) are quite important, leading to sharp drops in error. While the partition
count is a good indicator of the degree of parallelism (DOP), and hence of the
job runtime, query optimizers unfortunately use a fixed partition count for
estimating cost, as discussed in Section 5.2.
* (4)
Finally, there is a pervasive use of unstructured data (with schema imposed at
runtime) as well as custom user code (e.g., UDFs) that embed arbitrary
application logic in big data applications. It is hard to come up with generic
heuristics, using just the cardinality, that effectively model the runtime
behavior of all possible inputs and UDFs.
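The first micro-experiment can be sketched as follows, on synthetic data with toy features; the actual feature order is the one shown in Figure 18:

```python
# Start from cardinalities and retrain an elastic net after each feature
# addition, tracking the drop in median error.
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(2)
n = 2000
feats = {"C": rng.uniform(1, 100, n), "I": rng.uniform(1, 100, n),
         "P": rng.integers(1, 64, n), "PM": rng.uniform(0, 5, n)}
y = 2 * feats["I"] / feats["P"] + 0.5 * feats["C"] + feats["PM"] + 10

cols = []
for name in ["C", "I", "P", "PM"]:          # cumulative, left to right
    cols.append(name)
    X = np.column_stack([feats[c] for c in cols])
    pred = ElasticNet(alpha=0.01).fit(X, y).predict(X)
    print(cols, float(np.median(np.abs(pred - y) / y)))
```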
2\. Varying optimal weights. We now compare the optimal feature weights of
physical operators when they occur in different kinds of sub-expressions. We
considered the popular hash join operator and identified two sets of
sub-expressions in the same sample dataset: (i) hash join appears on top of
two scan operators, and (ii) hash join appears on top of two other join
operators, which in turn read from two scans.
Figure 16 shows the weights of the top $10$ features of the hash join cost
model for the two sets. We see that the partition count is more influential
for set 2 than for set 1. This is because there is more network transfer of
data in set 2 than in set 1, owing to the two extra joins. Setting the
partition count that leads to minimum network transfer is therefore important.
On the other hand, for jobs in set 1, we notice that hash join typically sets
the partition count to the same value as that of its inputs, since that
minimizes repartitioning. Thus, the partition count is less important in
set 1. To summarize, even when cardinality is present as a raw or derived
feature, its relative importance is instance-specific (heavily influenced by
the partitioning and the number of machines) and hard to capture in a static
cost model.
(a) Changes in Latency
(b) Changes in Total Processing Time
(c) Optimization Time Overhead
Figure 19. Performance comparison on production jobs with changed plans
### 6.5. Efficacy of Partition Exploration
In this section, we explore the effectiveness of the partition exploration
strategies proposed in Section 5.3 in selecting the partition count that leads
to the lowest cost for learned models. We used a subset of $200$
sub-expression instances from the production workload on cluster 1, and
exhaustively probed the learned models for all partition counts from $0$ to
$3000$ (the maximum capacity of machines on a virtual cluster) to find the
optimal cost. Figure 17 depicts the median error in cost accuracy with respect
to the optimal cost for (i) three sampling strategies, random, uniform, and
geometric, as we vary the number of sampled partition counts, and (ii) the
analytical model (dotted blue line), which selects a single partition count.
We note that the analytical model, although approximate, gives more accurate
results than the sampling-based approaches up to a sample size of about $15$
to $20$, while requiring far fewer model invocations. Further, for a sample
size between $4$ and $20$, we see that the geometrically increasing sampling
strategy leads to more accurate results than the uniform and random
approaches. This is because it picks more samples when the values are smaller,
where costs tend to change more strongly than at higher values. During query
optimization, each sample leads to five learned cost model predictions, four
for the individual models and one for the combined model. Thus, for a typical
plan in a big data system that consists of $10$ operators, the sampling
approach requires $20*5*10=1000$ model invocations, while the analytical
approach requires only $5*10=50$ invocations for achieving the same accuracy.
This shows that the analytical approach is practically more effective when we
consider both efficiency and accuracy together. Hence, Cleo uses the
analytical approach as the default partition exploration strategy.
### 6.6. Performance
We split the performance evaluation of Cleo into three parts: (i) the runtime
performance over production workloads, (ii) a case study on the TPC-H
benchmark [poess2000new], explaining in detail the plan changes caused by the
learned cost models, and (iii) the overheads incurred due to the learned
models. For these evaluations, we deployed a new version of the SCOPE runtime
with the Cleo optimizer (i.e., SCOPE+Cleo) on the production clusters and
compared its performance with the default production SCOPE runtime. We used
the analytical approach for partition exploration.
#### 6.6.1. Production workload
Since production resources are expensive, we selected a subset of the jobs,
similar to prior work on big data systems [cardLearner], as follows. We first
recompiled all the jobs from a single virtual cluster from cluster 4 with Cleo
and found that $182$ (22%) out of $845$ jobs had plan changes when partition
exploration was turned off, while $322$ ($39$%) had plan changes when we also
applied partition exploration. For execution, we selected jobs that had at
least one change in physical operator implementations, e.g., for aggregation
(hash vs. stream grouping), join (hash vs. merge), or the addition or removal
of grouping, sort, or exchange operators. We picked $17$ such jobs and
executed them with and without Cleo over the same production data, while
redirecting the output to a dummy location, similar to prior work [rope].
Figure 19(a) shows the end-to-end latency for each of the jobs, compared to
their latency when using the default cost model. We see that the learned cost
models improve latency in 70% of the cases (12 jobs), while they degrade
latency in the remaining 30%. Overall, the average improvement across all jobs
is $15.35$%, while the cumulative latency of all jobs improves by $21.3$%.
Interestingly, Cleo was able to improve the end-to-end latency for $10$ out of
the $12$ jobs with a lower degree of parallelism (i.e., a smaller number of
partitions). This is contrary to the typical strategy of scaling out
processing by increasing the degree of parallelism, which does not always
help. Instead, resource-aware plan selection can reveal more optimal
combinations of plan and resources. To understand the impact on resource
consumption, Figure 19(b) shows the total processing time (CPU hours).
Overall, we see the total processing time reducing by $32.2$% on average and
by $40.4\%$ cumulatively across all 17 jobs, a significant operational cost
saving in big data computing environments.
Thus, the learned cost models could reduce the overall costs while still
improving latencies in most cases. Below, we dig deeper into the plan changes
using the TPC-H workload.
#### 6.6.2. TPC-H workload
We generated the TPC-H dataset with a scale factor of $1000$, i.e., a total
input of $1TB$. We ran all $22$ queries $10$ times, each time with randomly
chosen parameters, to generate the training dataset. We then trained our cost
models on this workload and fed the learned models back to re-run all $22$
queries. We compare the performance with and without the feedback, as depicted
in Figure 20. For each observation, we take the average of $3$ runs. Overall,
$6$ TPC-H queries had plan changes when using the resource-aware cost model.
Out of these, $4$ changes (Q8, Q9, Q16, Q20) improve the latency as well as
the total processing time, one change improves only the latency (Q11), and one
change (Q17) leads to a performance regression in both. We next discuss the
key changes observed in the plans.
_1\. More optimal partitioning._ In Q8, for Part (100 partitions)
$\Join_{partKey}$ Lineitem (200 partitions), and in Q9, for Part (100
partitions) $\Join_{partKey}$ (Lineitem $\Join$ Supplier) (250 partitions),
the default optimizer performs the merge join over 250 partitions, thereby
re-partitioning both Part and the other join input into $250$ partitions over
the Partkey column. The learned cost models, on the other hand, perform the
merge join over $100$ partitions, which requires re-partitioning only the
other join inputs and not the Part table. Furthermore, re-partitioning $200$
or $250$ partitions into $100$ partitions is cheaper, since it involves a
partial merge [bruno2014advanced], compared to re-partitioning them into $250$
partitions. In $Q16$, for the final aggregation and top-k selection, both the
learned and the default optimizers repartition the $250$ partitions from the
previous operator’s output on the final aggregation key. While the default
optimizer re-partitions into $128$ partitions, the learned cost models pick
$100$ partitions. The aggregation cost changes negligibly with the partition
count; however, repartitioning from $250$ to $100$ partitions turns out to be
substantially faster than re-partitioning from $250$ to $128$ partitions.
_2\. Skipping exchange (shuffle) operators._ In Q8, for Part (100 partitions)
$\Join_{partKey}$ Lineitem (200 partitions), the learned cost model performs
the join over $100$ partitions and thus skips the costly Exchange operator
over the Part table. The default optimizer, on the other hand, creates two
Exchange operators to partition each input into $250$ partitions.
_3\. More optimal physical operators._ For both Q8 and Q20, the learned cost
model performs the join between Nations and Supplier using a merge join
instead of a hash join (chosen by the default optimizer). This results in an
improvement of $10\%$ to $15\%$ in the end-to-end latency, and of $5\%$ to
$8\%$ in the total processing time (CPU hours).
_4\. Regression due to partial aggregation._ For Q17, the learned cost models
add a local aggregation before the global one to reduce data transfer.
However, this degrades the latency by $10\%$ and the total processing time by
$25\%$, as the local aggregation does not actually reduce the data. Currently,
learned models do not learn from their own execution traces. We believe that
doing so could potentially resolve some of these regressions.
#### 6.6.3. Training and Runtime Overheads.
We now describe the training and runtime overheads of Cleo. It takes less
than 1 hour to analyze and learn models for a cluster running about $800$ jobs
a day, and less than 4 hours for training over $50K$ job instances at
Microsoft. We use a parallel model trainer that leverages SCOPE to train and
validate models independently and in parallel, which significantly speeds up
the training process. For a single cluster of about $800$ jobs, Cleo learns
about $23K$ models, which, when loaded simultaneously, take about $600$ MB of
memory. About $80$% of the memory is taken by the individual models, while the
remainder is used by the combined model. The additional memory usage is not an
issue in big data environments, where an optimizer can typically have $100$s
of GBs of memory. Finally, we saw a $5$-$10$% increase in the optimization
time for most of the jobs when using learned cost models, which includes the
overhead of the signature computation for sub-graph models, invoking the
learned models, as well as any changes in the plan exploration strategy due to
the learned models. Figure 19(c) depicts the overhead in the optimization time
for each of the $17$ production jobs we executed. Since the optimization time
is often on the order of a few hundred milliseconds, the overhead incurred
here is negligible compared to the overall compilation time (on the order of
seconds) of jobs in big data systems, as well as to the potential gains in
end-to-end latency (on the order of tens of minutes).
Figure 20. Performance change with SCOPE + CLEO for TPC-H queries (higher is
better)
### 6.7. Discussion
We see that it is possible to learn accurate yet robust cost models from cloud
workloads. Given the complexities and variance in modern cloud environments,
the model accuracies in Cleo are not perfect. Yet, they offer two to three
orders of magnitude better accuracy, and an improvement in correlation from
less than $0.1$ to greater than $0.7$, over the current state-of-the-art. The
combined meta-model further helps to achieve full workload coverage without
sacrificing accuracy significantly. In fact, the combined model consistently
retains its accuracy over a longer duration of time. While the accuracy and
robustness gains from Cleo are obvious, the latency implications are more
nuanced. These are often due to plan explorations that were tuned to work with
the current cost models. For instance, SCOPE jobs tend to over-partition at
the leaf levels and leverage the massive scale-out possible for improving the
latency of jobs.
There are several ways to address performance regressions in production
workloads. One option is to revisit the search and pruning strategies for plan
exploration [cascades95] in the light of the newer learned cost models. For
instance, one problem we see is that current pruning strategies may sometimes
skip operators without invoking their learned models. Additionally, we can
configure the optimizer not to invoke learned models for specific operators or
jobs. Another improvement is to optimize a query twice (each optimization
takes only a few hundred milliseconds), with and without Cleo, and select the
plan with the better overall latency, as predicted by the learned models,
since they are highly accurate and correlated with the runtime latency. We can
also monitor the performance of jobs in a pre-production environment, isolate
models that lead to performance regressions (or poor latency predictions), and
discard them from the feedback. This is possible since we do not learn a
single global model in the first place. Furthermore, since learning cost
models and feeding them back is a continuous process, regression-causing
models can self-correct by learning from future executions. Finally,
regressions for a few queries are not really a problem for ad-hoc workloads,
since the majority of the queries improve their latency anyway, and reducing
the overall processing time (and hence the operational cost) is generally more
important.
Finally, in this paper, we focused on the traditional use-case of a cost
model: picking the physical query plan. However, several other cost model
use-cases are relevant in cloud environments, where the accuracy of predicted
costs is crucial. Examples include performance prediction [perforator],
allocating resources to queries [jyothi2016morpheus], estimating task runtimes
for scheduling [boutin2014apollo], estimating the progress of a query,
especially in server-less query processors [lee2016operator], and running
what-if analysis for physical design selection [chaudhuri2007self]. Exploring
these
would be a part of future work.
## 7\. Related Work
Machine learning has been used for estimating the query execution time of a
given physical plan in centralized databases [ganapathi2009predicting,
akdere2012learning, li2012robust]. In particular, the operator- and
sub-plan-level models in [akdere2012learning] share similarities with our
operator and operator-subgraph models. However, we discovered the
coverage-accuracy gap between the two models to be substantially large. To
bridge this gap, we proposed additional mutually enhancing models and then
combined the predictions of these individual models to achieve the best of
both accuracy and coverage. There are other works on query progress
indicators [chaudhuri2004estimating, luo2004toward] that use the runtime
statistics of the currently running query to estimate what percentage of the
work has been completed. Our approach, in contrast, uses compile-time features
to make the prediction before the execution starts.
Cardinality is a key input to cost estimation, and several learning- and
feedback-driven approaches for improving it have been proposed [cardLearner,
rope, stillger2001leo]. However, these works have either focused only on
recurring or strict subgraphs [cardLearner, rope], or learn only the ratio
between the actual and predicted cardinalities [stillger2001leo], which can go
wrong in many situations, e.g., partial coverage results in erroneous
estimates due to the mixing of disparate models. Most importantly, as we
discuss in Section 2, fixing cardinalities alone does not always lead to
accurate costs in big data systems. There are other factors, such as the
resources (e.g., partitions) consumed, the operator implementations (e.g.,
custom operators), and the hardware characteristics (e.g., a parallel
distributed environment), that also determine the cost. In contrast to
cardinality models, Cleo introduces novel learning techniques (e.g., multiple
models coupled with an ensemble) and extensions to the optimizer to robustly
model cost. That said, cardinality is still an important feature (see
Figure 5), and is also key to deciding partition counts and memory allocation
at runtime, as well as for speculative execution in the job manager. A more
detailed study of cardinality estimation in big data systems is an interesting
avenue for future work.
Several works find the optimal resources for a given physical query execution
plan [ernest, perforator, cherrypick]. They either train a performance model
or apply non-parametric Bayesian optimization techniques with a few sample
runs to find the optimal resource configuration. However, the optimal
execution plan may itself depend on the resources, and therefore, in this
work, we jointly find the optimal plan and resources. Nevertheless, the ideas
from the resource optimization work can be leveraged in our system to reduce
the search space, especially if we consider multiple hardware types.
Generating efficient combinations of query plans and resources is also
relevant to the new breed of serverless computing, where users are not
required to provision resources explicitly and are billed based on usage
[serverless]. For big data queries, this means that the optimizer needs to
accurately estimate the cost of queries for given resources and explore
different resource combinations so that users do not end up over-paying for
their queries.
Finally, several recent works apply machine learning techniques to improve
different components of a data system [kraska2019sagedb, marcus2019neo]. The
most prominent are learned indexes [kraska2018case], which overfit the stored
data to a learned model that provides faster lookups as well as a smaller
storage footprint. [marcus2019neo] takes a more disruptive approach, with the
vision of replacing the traditional query optimizer with one built using
neural networks. In contrast, our focus in this paper is on improving the cost
estimation of operators in big data systems, and our goal is to integrate
learned models into existing query optimizers in a minimally invasive manner.
## 8\. Conclusion
Accurate cost prediction is critical for resource efficiency in big data
systems. At the same time, modeling query costs is incredibly hard in these
systems. In this paper, we present techniques to learn cost models from
massive cloud workloads. We recognize that cloud workloads are highly
heterogeneous in nature and no one model fits all. Instead, we leverage the
common subexpression patterns in the workload and learn specialized models for
each pattern. We further describe the accuracy and coverage trade-off of these
specialized models and present additional mutual enhancing models to bridge
the gap. We combine the predictions from all of these individual models into a
robust model that provides the best of both accuracy and coverage over a
sufficiently long period of time. A key part of our contribution is to
integrate the learned cost models with existing query optimization frameworks.
We present details on integration with SCOPE, a Cascade style query optimizer,
and show how the learned models can be used to efficiently find both query-
and resource-optimal plans. Overall, applying machine learning to
systems is an active area of research, and this paper presents a practical
approach for doing so deep within the core of a query processing system.
## References
* [1] S. Agarwal, S. Kandula, N. Bruno, M.-C. Wu, I. Stoica, and J. Zhou. Re-optimizing data-parallel computing. In Proceedings of the 9th USENIX conference on Networked Systems Design and Implementation, pages 21–21. USENIX Association, 2012.
* [2] M. Akdere, U. Çetintemel, M. Riondato, E. Upfal, and S. B. Zdonik. Learning-based query performance modeling and prediction. In Data Engineering (ICDE), 2012 IEEE 28th International Conference on, pages 390–401. IEEE, 2012.
* [3] O. Alipourfard, H. H. Liu, J. Chen, S. Venkataraman, M. Yu, and M. Zhang. Cherrypick: Adaptively unearthing the best cloud configurations for big data analytics. In NSDI, volume 2, pages 4–2, 2017.
* [4] AWS Athena. https://aws.amazon.com/athena/.
* [5] F. R. Bach and M. I. Jordan. Kernel independent component analysis. Journal of machine learning research, 3(Jul):1–48, 2002.
* [6] E. Boutin, J. Ekanayake, W. Lin, B. Shi, J. Zhou, Z. Qian, M. Wu, and L. Zhou. Apollo: scalable and coordinated scheduling for cloud-scale computing. In 11th USENIX Symposium on Operating Systems Design and Implementation (OSDI 14), pages 285–300, 2014.
* [7] N. Bruno, S. Agarwal, S. Kandula, B. Shi, M. Wu, and J. Zhou. Recurring job optimization in scope. In SIGMOD, pages 805–806, 2012.
* [8] N. Bruno, S. Jain, and J. Zhou. Continuous cloud-scale query optimization and processing. Proceedings of the VLDB Endowment, 6(11):961–972, 2013.
* [9] N. Bruno, S. Jain, and J. Zhou. Recurring Job Optimization for Massively Distributed Query Processing. IEEE Data Eng. Bull., 36(1):46–55, 2013.
* [10] N. Bruno, Y. Kwon, and M.-C. Wu. Advanced join strategies for large-scale distributed computation. Proceedings of the VLDB Endowment, 7(13):1484–1495, 2014.
* [11] R. Chaiken, B. Jenkins, P. Larson, B. Ramsey, D. Shakib, S. Weaver, and J. Zhou. SCOPE: easy and efficient parallel processing of massive data sets. PVLDB, 1(2):1265–1276, 2008.
* [12] S. Chaudhuri and V. Narasayya. Self-tuning database systems: a decade of progress. In Proceedings of the 33rd international conference on Very large data bases, pages 3–14. VLDB Endowment, 2007.
* [13] S. Chaudhuri, V. Narasayya, and R. Ramamurthy. Estimating progress of execution for sql queries. In Proceedings of the 2004 ACM SIGMOD international conference on Management of data, pages 803–814. ACM, 2004.
* [14] A. Dutt and J. R. Haritsa. Plan bouquets: A fragrant approach to robust query processing. ACM Transactions on Database Systems (TODS), 41(2):11, 2016.
* [15] A. Dutt, C. Wang, A. Nazi, S. Kandula, V. Narasayya, and S. Chaudhuri. Selectivity Estimation for Range Predicates Using Lightweight Models. PVLDB, 12(9):1044–1057, 2019.
* [16] FastTree. https://www.nuget.org/packages/Microsoft.ML.FastTree/.
* [17] A. D. Ferguson, P. Bodík, S. Kandula, E. Boutin, and R. Fonseca. Jockey: guaranteed job latency in data parallel clusters. In EuroSys, pages 99–112, 2012.
* [18] J. H. Friedman. Stochastic gradient boosting. Computational statistics & data analysis, 38(4):367–378, 2002.
* [19] A. Ganapathi, H. Kuno, U. Dayal, J. L. Wiener, A. Fox, M. Jordan, and D. Patterson. Predicting multiple metrics for queries: Better decisions enabled by machine learning. In Data Engineering, 2009. ICDE’09. IEEE 25th International Conference on, pages 592–603. IEEE, 2009.
* [20] Google BigQuery. https://cloud.google.com/bigquery.
* [21] G. Graefe. The Cascades framework for query optimization. IEEE Data Eng. Bull., 18(3):19–29, 1995.
* [22] IBM BigSQL. https://www.ibm.com/products/db2-big-sql.
* [23] A. Jindal, K. Karanasos, S. Rao, and H. Patel. Selecting Subexpressions to Materialize at Datacenter Scale. In VLDB, 2018.
* [24] A. Jindal, S. Qiao, H. Patel, Z. Yin, J. Di, M. Bag, M. Friedman, Y. Lin, K. Karanasos, and S. Rao. Computation Reuse in Analytics Job Service at Microsoft. In SIGMOD, 2018.
* [25] S. A. Jyothi, C. Curino, I. Menache, S. M. Narayanamurthy, A. Tumanov, J. Yaniv, R. Mavlyutov, Í. Goiri, S. Krishnan, J. Kulkarni, et al. Morpheus: Towards automated slos for enterprise clusters. In 12th $\\{$USENIX$\\}$ Symposium on Operating Systems Design and Implementation ($\\{$OSDI$\\}$ 16), pages 117–134, 2016.
* [26] A. Kipf, T. Kipf, B. Radke, V. Leis, P. Boncz, and A. Kemper. Learned cardinalities: Estimating correlated joins with deep learning. CIDR, 2019.
* [27] T. Kraska, M. Alizadeh, A. Beutel, E. Chi, J. Ding, A. Kristo, G. Leclerc, S. Madden, H. Mao, and V. Nathan. Sagedb: A learned database system. CIDR, 2019.
* [28] T. Kraska, A. Beutel, E. H. Chi, J. Dean, and N. Polyzotis. The case for learned index structures. In Proceedings of the 2018 International Conference on Management of Data, pages 489–504. ACM, 2018.
* [29] K. Lee, A. C. König, V. Narasayya, B. Ding, S. Chaudhuri, B. Ellwein, A. Eksarevskiy, M. Kohli, J. Wyant, P. Prakash, et al. Operator and query progress estimation in microsoft sql server live query statistics. In Proceedings of the 2016 International Conference on Management of Data, pages 1753–1764. ACM, 2016.
* [30] C. Lei, Z. Zhuang, E. A. Rundensteiner, and M. Y. Eltabakh. Redoop infrastructure for recurring big data queries. PVLDB, 7(13):1589–1592, 2014.
* [31] V. Leis, A. Gubichev, A. Mirchev, P. Boncz, A. Kemper, and T. Neumann. How good are query optimizers, really? Proceedings of the VLDB Endowment, 9(3):204–215, 2015.
* [32] J. Li, A. C. König, V. Narasayya, and S. Chaudhuri. Robust estimation of resource consumption for sql queries using statistical techniques. Proceedings of the VLDB Endowment, 5(11):1555–1566, 2012.
* [33] G. Lohman. Is query optimization a “solved” problem. In Proc. Workshop on Database Query Optimization, volume 13. Oregon Graduate Center Comp. Sci. Tech. Rep, 2014.
* [34] G. Luo, J. F. Naughton, C. J. Ellmann, and M. W. Watzke. Toward a progress indicator for database queries. In Proceedings of the 2004 ACM SIGMOD international conference on Management of data, pages 791–802. ACM, 2004.
* [35] R. Marcus, P. Negi, H. Mao, C. Zhang, M. Alizadeh, T. Kraska, O. Papaemmanouil, and N. Tatbul. Neo: A learned query optimizer. arXiv preprint arXiv:1904.03711, 2019.
* [36] V. Markl, V. Raman, D. Simmen, G. Lohman, H. Pirahesh, and M. Cilimdzic. Robust query processing through progressive optimization. In Proceedings of the 2004 ACM SIGMOD international conference on Management of data, pages 659–670. ACM, 2004.
* [37] MART. http://statweb.stanford.edu/ jhf/MART.html.
* [38] M. Poess and C. Floyd. New tpc benchmarks for decision support and web commerce. ACM Sigmod Record, 29(4):64–71, 2000.
* [39] K. Rajan, D. Kakadia, C. Curino, and S. Krishnan. Perforator: eloquent performance models for resource optimization. In Proceedings of the Seventh ACM Symposium on Cloud Computing, pages 415–427. ACM, 2016.
* [40] R. Ramakrishnan, B. Sridharan, J. R. Douceur, P. Kasturi, B. Krishnamachari-Sampath, K. Krishnamoorthy, P. Li, M. Manu, S. Michaylov, R. Ramos, et al. Azure data lake store: a hyperscale distributed file service for big data analytics. In Proceedings of the 2017 ACM International Conference on Management of Data, pages 51–63. ACM, 2017.
* [41] A. Roy, A. Jindal, H. Patel, A. Gosalia, S. Krishnan, and C. Curino. SparkCruise: Handsfree Computation Reuse in Spark. PVLDB, 12(12):1850–1853, 2019.
* [42] J. Schad, J. Dittrich, and J.-A. Quiané-Ruiz. Runtime Measurements in the Cloud: Observing, Analyzing, and Reducing Variance. PVLDB, 3(1-2):460–471, 2010.
* [43] Cloud Programming Simplified: A Berkeley View on Serverless Computing. https://www2.eecs.berkeley.edu/Pubs/TechRpts/2019/EECS-2019-3.pdf.
* [44] M. Stillger, G. M. Lohman, V. Markl, and M. Kandil. Leo-db2’s learning optimizer. In VLDB, volume 1, pages 19–28, 2001.
* [45] S. Venkataraman and others. Ernest: Efficient performance prediction for large-scale advanced analytics. In NSDI, pages 363–378, 2016.
* [46] L. Viswanathan, A. Jindal, and K. Karanasos. Query and Resource Optimization: Bridging the Gap. In ICDE, pages 1384–1387, 2018.
* [47] C. Wu, A. Jindal, S. Amizadeh, H. Patel, W. Le, S. Qiao, and S. Rao. Towards a Learning Optimizer for Shared Clouds. PVLDB, 12(3):210–222, 2018.
* [48] D. Xin, S. Macke, L. Ma, J. Liu, S. Song, and A. Parameswaran. HELIX: Holistic Optimization for Accelerating Iterative Machine Learning. PVLDB, 12(4):446–460, 2018.
* [49] Z. Yint, J. Sun, M. Li, J. Ekanayake, H. Lin, M. Friedman, J. A. Blakeley, C. Szyperski, and N. R. Devanur. Bubble execution: resource-aware reliable analytics at cloud scale. Proceedings of the VLDB Endowment, 11(7):746–758, 2018.
* [50] J. Zhou, N. Bruno, M.-C. Wu, P.-Å. Larson, R. Chaiken, and D. Shakib. SCOPE: parallel databases meet MapReduce. VLDB J., 21(5):611–636, 2012.
* [51] Q. Zhou, J. Arulraj, S. Navathe, W. Harris, and D. Xu. Automated Verification of Query Equivalence Using Satisfiability Modulo Theories. PVLDB, 12(11):1276–1288, 2019.
* [52] J. Zhu, N. Potti, S. Saurabh, and J. M. Patel. Looking ahead makes query plans robust: Making the initial case with in-memory star schema data warehouse workloads. Proceedings of the VLDB Endowment, 10(8):889–900, 2017.
* [53] H. Zou and T. Hastie. Regularization and variable selection via the elastic net. Journal of the royal statistical society: series B (statistical methodology), 67(2):301–320, 2005.
# ConQUR: Mitigating Delusional Bias in Deep Q-learning
Andy Su Jayden Ooi Tyler Lu Dale Schuurmans Craig Boutilier
###### Abstract
_Delusional bias_ is a fundamental source of error in approximate Q-learning.
To date, the only techniques that explicitly address delusion require
comprehensive search using tabular value estimates. In this paper, we develop
efficient methods to mitigate delusional bias by training Q-approximators with
labels that are “consistent” with the underlying greedy policy class. We
introduce a simple penalization scheme that encourages Q-labels used _across
training batches_ to remain (jointly) consistent with the expressible policy
class. We also propose a search framework that allows multiple Q-approximators
to be generated and tracked, thus mitigating the effect of premature
(implicit) policy commitments. Experimental results demonstrate that these
methods can improve the performance of Q-learning in a variety of Atari games,
sometimes dramatically.
## 1 Introduction
_Q-learning_ (Watkins & Dayan, 1992; Sutton & Barto, 2018) lies at the heart
of many of the recent successes of deep reinforcement learning (RL) (Mnih et
al., 2015; Silver et al., 2016), with recent advancements (e.g., van Hasselt
(2010); Bellemare et al. (2017); Wang et al. (2016); Hessel et al. (2017))
helping to make it among the most widely used methods in applied RL. Despite
these successes, many properties of Q-learning are poorly understood, and it
is challenging to successfully apply deep Q-learning in practice. Various
modifications have been proposed to improve convergence or approximation error
(Gordon, 1995, 1999; Szepesvári & Smart, 2004; Melo & Ribeiro, 2007; Maei et
al., 2010; Munos et al., 2016); but it remains difficult to reliably attain
both robustness and scalability.
Recently, Lu et al. (2018) identified a source of error in Q-learning with
function approximation known as _delusional bias_. This bias arises because
Q-learning updates the value of state-action pairs using estimates of
(sampled) successor-state values that can be _mutually inconsistent given the
policy class induced by the approximator_. This can result in unbounded
approximation error, divergence, policy cycling, and other undesirable
behavior. To handle delusion, the authors propose a _policy-consistent backup_
operator that maintains multiple Q-value estimates organized into _information
sets_. Each information set has its own backed-up Q-values and corresponding
“policy commitments” responsible for inducing these values. Systematic
management of these sets ensures that only _consistent_ choices of maximizing
actions are used to update Q-values. All potential solutions are tracked to
prevent premature convergence on specific policy commitments. Unfortunately,
the proposed algorithms use tabular representations of Q-functions, so while
this establishes foundations for delusional bias, the function approximator is
used neither for generalization nor to manage the size of the state/action
space. Consequently, this approach is not scalable to practical RL problems.
In this work, we develop ConQUR (_CONsistent Q-Update Regression_), a general
framework for integrating policy-consistent backups with regression-based
function approximation for Q-learning and for managing the search through the
space of possible regressors (i.e., information sets). With suitable search
heuristics, the proposed framework provides a computationally effective means
for minimizing the effects of delusional bias, while scaling to practical
problems.
Our main contributions are as follows. First, we define novel augmentations of
Q-regression to increase the degree of policy consistency across training
batches. Since testing exact consistency is expensive, we introduce an
efficient _soft-consistency penalty_ that promotes consistency of labels with
earlier policy commitments. Second, using information-set structure (Lu et
al., 2018), we define a search space over Q-regressors to explore multiple
sets of policy commitments. Third, we propose heuristics to guide the search,
critical given the combinatorial nature of information sets. Finally,
experimental results on the Atari suite (Bellemare et al., 2013) demonstrate
that ConQUR can add (sometimes dramatic) improvements to Q-learning. These
results further show that delusion does emerge in practical applications of
Q-learning. We also show that straightforward consistency penalization on its
own (i.e., without search) can improve both standard and double Q-learning.
## 2 Background
We assume a discounted, infinite horizon _Markov decision process (MDP)_ ,
$\mathbf{M}=(\mathcal{S},A,P,p_{0},R,\gamma)$. The state space $\mathcal{S}$
can reflect both discrete and continuous features, but we take the action
space $A$ to be finite (and practically enumerable). We consider _Q-learning_
with a function approximator $Q_{\theta}$ to learn an (approximately) optimal
Q-function (Watkins, 1989; Sutton & Barto, 2018), drawn from some
approximation class parameterized by $\Theta$ (e.g., the weights of a neural
network). When the approximator is a deep network, we generically refer to
this as _DQN_ , the method at the heart of many RL successes (Mnih et al.,
2015; Silver et al., 2016).
For online Q-learning, at a transition $s,a,r,s^{\prime}$, the Q-update is
given by:
$\theta\leftarrow\theta+\alpha\Big{(}r+\gamma\max_{a^{\prime}\in
A}Q_{\theta}(s^{\prime},a^{\prime})-Q_{\theta}(s,a)\Big{)}\nabla_{\theta}Q_{\theta}(s,a).$
(1)
Batch versions of Q-learning are similar, but fit a regressor repeatedly to
batches of training examples (Ernst et al., 2005; Riedmiller, 2005), and are
usually more data efficient and stable than online Q-learning. Batch methods
use a sequence of (possibly randomized) data batches $D_{1},\ldots,D_{T}$ to
produce a sequence of regressors
$Q_{\theta_{1}},\ldots,Q_{\theta_{T}}=Q_{\theta}$, estimating the
Q-function.111We describe our approach using batch Q-learning, but it can
accommodate many variants, e.g., where the estimators generating max-actions
and value estimates are different, as in double Q-learning (van Hasselt, 2010;
Hasselt et al., 2016); indeed, we experiment with such variants. For each
$(s,a,r,s^{\prime})\in D_{k}$, we use a prior estimator $Q_{\theta_{k-1}}$ to
bootstrap the _Q-label_
$q=r+\gamma\max_{a^{\prime}}Q_{\theta_{k-1}}(s^{\prime},a^{\prime})$. We then
fit $Q_{\theta_{k}}$ to this data using a regression procedure with a suitable
loss function. Once trained, the (implicit) induced policy $\pi_{\theta}$ is
the _greedy policy_ w.r.t. $Q_{\theta}$, i.e.,
$\pi_{\theta}(s)=\operatorname*{arg\\!max}_{a\in A}Q_{\theta}(s,a)$. Let
$\mathcal{F}(\Theta)$ (resp., $G(\Theta)$) be the class of expressible
Q-functions (resp., greedy policies).
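To make the batch update concrete, here is a minimal sketch of bootstrapped Q-label generation (a sketch only; the callable name `q_prev`, standing in for $Q_{\theta_{k-1}}$, is illustrative):

```python
import numpy as np

def q_labels(batch, q_prev, gamma=0.99):
    """Bootstrapped Q-labels for a batch of (s, a, r, s') transitions.

    q_prev(s) returns a vector of Q-values over actions, playing the role
    of the prior regressor Q_{theta_{k-1}}.
    """
    return [((s, a), r + gamma * np.max(q_prev(s_next)))
            for (s, a, r, s_next) in batch]
```

The independent per-successor-state `max` in this label generation is precisely where the delusional bias discussed next enters.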
Intuitively, _delusional bias_ occurs whenever a backed-up value estimate is
derived from action choices that are not (jointly) realizable in $G(\Theta)$
(Lu et al., 2018). Standard Q-updates back up values for each $(s,a)$ pair by
_independently_ choosing maximizing actions at the corresponding next states
$s^{\prime}$. However, such updates may be “inconsistent” under approximation:
if no policy in $G(\Theta)$ can jointly express all past action choices,
_backed up values may not be realizable by any expressible policy_. Lu et al.
(2018) show that delusion can manifest itself with several undesirable
consequences (e.g., divergence). Most critically, it can prevent Q-learning
from learning the optimal representable policy in $G(\Theta)$. To address
this, they propose a non-delusional _policy consistent_ Q-learning (PCQL)
algorithm that provably eliminates delusion. We refer to the original paper
for details, but review the main concepts. (While delusion may not arise in other RL approaches (e.g., policy iteration, policy gradient), our contribution focuses on mitigating delusion to derive maximum performance from widely used Q-learning methods.)
The first key concept is that of _policy consistency_. For any
$S\subseteq\mathcal{S}$, an _action assignment_ $\sigma_{S}:S\rightarrow A$
associates an action $\sigma(s)$ with each $s\in S$. We say $\sigma$ is
_policy consistent_ if there is a greedy policy $\pi\in G(\Theta)$ s.t.
$\pi(s)=\sigma(s)$ for all $s\in S$. We sometimes equate a set $\mathit{SA}$
of state-action pairs with the implied assignment $\pi(s)=a$ for all
$(s,a)\in\mathit{SA}$. If $\mathit{SA}$ contains multiple pairs with the same
state $s$, but different actions $a$, it is a _multi-assignment_ (we use the
term “assignment” when there is no risk of confusion).
In (batch) Q-learning, each new regressor uses training labels generated by
assuming maximizing actions (under the prior regressor) are taken at its
successor states. Let $\sigma_{k}$ be the collection of states and
corresponding maximizing actions used to generate labels for regressor
$Q_{\theta_{k}}$ (assume it is policy consistent). Suppose we train
$Q_{\theta_{k}}$ by bootstrapping on $Q_{\theta_{k-1}}$. Now consider a
training sample $(s,a,r,s^{\prime})$. Q-learning generates label
$r+\gamma\max_{a^{\prime}}Q_{\theta_{k-1}}(s^{\prime},a^{\prime})$ for input
$(s,a)$. Notice, however, that taking action
$a^{*}=\operatorname*{arg\!max}_{a^{\prime}}Q_{\theta_{k-1}}(s^{\prime},a^{\prime})$
at $s^{\prime}$ may not be _policy consistent_ with $\sigma_{k}$. Thus
Q-learning will estimate a value for $(s,a)$ assuming execution of a policy
that cannot be realized given the approximator. PCQL prevents this by ensuring
that any assignment used to generate labels is consistent with earlier
assignments. This means Q-labels will often _not_ be generated using
maximizing actions w.r.t. the prior regressor.
The second key concept is that of _information sets_. One will generally not
be able to use maximizing actions to generate labels, so tradeoffs can be made
when deciding which actions to assign to different states. Indeed, even if it
is feasible to assign a maximizing action $a$ to state $s$ early in training,
say at batch $k$, since it may prevent assigning a maximizing $a^{\prime}$ to
$s^{\prime}$ later, say batch $k+\ell$, we may want to use a different
assignment to $s$ to give more flexibility to maximize at other states later.
PCQL does not anticipate the tradeoffs—rather it maintains _multiple
information sets_ , each corresponding to a different assignment to the states
seen in the training data thus far. Each gives rise to a _different Q-function
estimate_ , resulting in multiple hypotheses. At the end of training, the best
hypothesis is that with maximum expected value w.r.t. an initial state
distribution.
PCQL provides strong convergence guarantees, but it is a tabular algorithm:
the function approximator _restricts_ the policy class, but is not used to
generalize Q-values. Furthermore, its theoretical guarantees come at a cost:
it uses _exact_ policy consistency tests—tractable for linear approximators,
but impractical for large problems and DQN; and it maintains _all_ consistent
assignments. As a result, PCQL cannot be used for large RL problems of the
type tackled by DQN.
## 3 The ConQUR Framework
We develop the ConQUR framework to provide a practical approach to reducing
delusion in Q-learning, specifically addressing the limitations of PCQL
identified above. ConQUR consists of three main components: a practical soft-
constraint penalty that promotes policy consistency; a search space to
structure the search over multiple regressors (information sets, action
assignments); and heuristic search schemes (expansion, scoring) to find good
Q-regressors.
### 3.1 Preliminaries
We assume a set of training data consisting of quadruples
$(s,a,r,s^{\prime})$, divided into (possibly non-disjoint) _batches_
$D_{1},\ldots,D_{T}$ for training. This perspective is quite general: online
RL corresponds to $|D_{i}|=1$; offline batch training (with sufficiently
exploratory data) corresponds to a single batch (i.e., $T=1$); and online or
batch methods with replay are realized when the $D_{i}$ are generated by
sampling some data source with replacement.
For any batch $D$, let $\chi(D)=\{s^{\prime}:(s,a,r,s^{\prime})\in D\}$ be
the set of _successor states_ of $D$. An _action assignment_ $\sigma_{D}$ for
$D$ is an assignment (or multi-assignment) from $\chi(D)$ to $A$, dictating
which action $\sigma_{D}(s^{\prime})$ is considered “maximum” when generating
a Q-label for pair $(s,a)$; i.e., $(s,a)$ is assigned training label $r+\gamma
Q(s^{\prime},\sigma(s^{\prime}))$ rather than $r+\gamma\max_{a^{\prime}\in
A}Q(s^{\prime},a^{\prime})$. The set of all such assignments
$\Sigma(D)=A^{\chi(D)}$ grows exponentially with $|D|$.
Given a Q-function parameterization $\Theta$, we say $\sigma_{D}$ is _$\Theta$
-consistent (w.r.t. $D$)_ if there is some $\theta\in\Theta$ s.t.
$\pi_{\theta}(s^{\prime})=\sigma(s^{\prime})$ for all
$s^{\prime}\in\chi(D)$. (We suppress mention of $D$ when clear from context.)
This is simple policy consistency, but with notation that emphasizes the
policy class. Let $\Sigma_{\Theta}(D)$ denote the set of all
$\Theta$-consistent assignments over $D$. The union $\sigma_{1}\cup\sigma_{2}$
of two assignments (over $D_{1},D_{2}$, resp.) is defined in the usual way.
### 3.2 Consistency Penalization
Enforcing strict $\Theta$-consistency as regressors
$\theta_{1},\theta_{2},\ldots,\theta_{T}$ are generated is computationally
challenging. Suppose the assignments $\sigma_{1},\ldots,\sigma_{k-1}$, used to
generate labels for $D_{1},\ldots D_{k-1}$, are jointly $\Theta$-consistent
(let $\sigma_{\leq k-1}$ denote their multi-set union). Maintaining
$\Theta$-consistency when generating $\theta_{k}$ imposes two requirements.
First, one must generate an assignment $\sigma_{k}$ over $D_{k}$ s.t.
$\sigma_{\leq k-1}\cup\sigma_{k}$ is consistent. Even testing assignment
consistency can be problematic: for linear approximators this is a linear
feasibility program (Lu et al., 2018) whose constraint set grows linearly with
$|D_{1}\cup\ldots\cup D_{k}|$. For DNNs, this is a complex, more expensive
polynomial program. Second, the regressor $\theta_{k}$ should itself be
consistent with $\sigma_{\leq k-1}\cup\sigma_{k}$. This too imposes a severe
burden on regression optimization: in the linear case, it is a constrained
least-squares problem (solvable, e.g., as a quadratic program); while with
DNNs, it can be solved, say, using a more involved projected SGD. However, the
sheer number of constraints makes this impractical.
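For intuition, in the linear case the exact consistency test can be posed as the following feasibility program. This is a minimal sketch assuming $Q_{\theta}(s,a)=\theta^{\top}\phi(s,a)$ for a user-supplied feature map `phi`; the margin `eps` is an illustrative device for encoding the strict inequalities:

```python
import numpy as np
from scipy.optimize import linprog

def is_theta_consistent(assignment, phi, actions, d, eps=1e-6):
    """Does some linear theta in R^d greedily realize `assignment`
    (a dict {state: assigned_action})?

    Encodes: theta . (phi(s, a) - phi(s, sigma(s))) <= -eps for every
    state s and every non-assigned action a.
    """
    A_ub, b_ub = [], []
    for s, a_star in assignment.items():
        for a in actions:
            if a != a_star:
                A_ub.append(phi(s, a) - phi(s, a_star))
                b_ub.append(-eps)
    if not A_ub:  # an empty assignment is trivially consistent
        return True
    res = linprog(c=np.zeros(d), A_ub=np.asarray(A_ub), b_ub=np.asarray(b_ub),
                  bounds=[(None, None)] * d)  # theta is unconstrained
    return res.status == 0  # 0: feasible solution found; 2: infeasible
```

As the constraint set grows with every batch, repeatedly solving (and enforcing) such programs becomes exactly the bottleneck that the penalty below avoids.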
Rather than enforcing consistency, we propose a simple, computationally
tractable scheme that “encourages” it: a penalty term that can be incorporated
into the regression itself. Specifically, we add a penalty function to the
usual squared loss to encourage updates of the Q-regressors to be consistent
with the underlying information set, i.e., the prior action assignments used
to generate its labels.
When constructing $\theta_{k}$, let $D_{\leq k}=\cup\{D_{j}:j\leq k\}$, and
$\sigma\in\Sigma_{\Theta}({D_{\leq k}})$ be the collective assignment used to
generate labels for all prior regressors (including $\theta_{k}$ itself). The
multiset of pairs
$B=\{(s^{\prime},\sigma(s^{\prime}))\,|\,s^{\prime}\in\chi(D_{\leq k})\}$, is
called a _consistency buffer_. The assignment need not be consistent (as we
elaborate below), nor does regressor $\theta_{k}$ need to be consistent with
$\sigma$. Instead, we use the following _soft consistency penalty_ when
constructing $\theta_{k}$:
$C_{\theta}(s^{\prime},a)=\sum_{a^{\prime}\in A}\big[Q_{\theta}(s^{\prime},a^{\prime})-Q_{\theta}(s^{\prime},a)\big]_{+},$ (2)

$C_{\theta}(B)=\sum_{(s^{\prime},\sigma(s^{\prime}))\in B}C_{\theta}(s^{\prime},\sigma(s^{\prime})),$ (3)
where $[x]_{+}=\max(0,x)$. This penalizes Q-values of actions at state $s^{\prime}$ that are larger than that of the assigned action $\sigma(s^{\prime})$. Notice $\sigma$ is
$\Theta$-consistent iff $\min_{\theta\in\Theta}C_{\theta}(B)=0$. We add this
penalty into our regression loss for batch $D_{k}$:
$L_{\theta}(D_{k},B)=\sum_{(s,a,r,s^{\prime})\in D_{k}}\big[r+\gamma Q_{\theta_{k-1}}(s^{\prime},\sigma(s^{\prime}))-Q_{\theta}(s,a)\big]^{2}+\lambda C_{\theta}(B).$ (4)
Here $Q_{\theta_{k-1}}$ is the prior estimator on which labels are bootstrapped
(other regressors may be used). The penalty effectively acts as a
“regularizer” on the squared Bellman error, where $\lambda$ controls the
degree of penalization, allowing a tradeoff between Bellman error and
consistency with the assignment used to generate labels. It thus _promotes_
consistency without incurring the expense of _enforcing_ strict consistency.
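A minimal sketch of Eqs. 2–4, assuming `q_theta(s)` and `q_prev(s)` return vectors of Q-values over actions and that actions are integer indices (all names are illustrative):

```python
import numpy as np

def consistency_penalty(q_theta, buffer):
    """C_theta(B) of Eqs. 2-3: for each (s', sigma(s')) pair in the buffer,
    penalize actions whose Q-value exceeds that of the assigned action."""
    total = 0.0
    for s_next, a_assigned in buffer:  # a_assigned: integer action index
        q = q_theta(s_next)
        total += np.sum(np.maximum(q - q[a_assigned], 0.0))
    return total

def penalized_loss(q_theta, q_prev, batch, sigma, buffer, gamma=0.99, lam=0.5):
    """Consistency-penalized Bellman error of Eq. 4; q_prev plays the role
    of Q_{theta_{k-1}} and sigma maps successor states to assigned actions."""
    bellman = sum(
        (r + gamma * q_prev(s_next)[sigma[s_next]] - q_theta(s)[a]) ** 2
        for (s, a, r, s_next) in batch)
    return bellman + lam * consistency_penalty(q_theta, buffer)
```

Note that the penalty vanishes exactly when the regressor's greedy policy agrees with every assignment in the buffer.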
It is straightforward to replace the classic Q-learning update (1) with one
using our consistency penalty:
$\theta_{k}\leftarrow\theta_{k-1}+\alpha\sum_{(s,a,r,s^{\prime})\in D_{k}}\big[r+\gamma Q_{\theta_{k-1}}(s^{\prime},\sigma(s^{\prime}))-Q_{\theta}(s,a)\big]\nabla_{\theta}Q_{\theta}(s,a)+\alpha\lambda\nabla_{\theta}C_{\theta}(B)\Big\rvert_{\theta=\theta_{k-1}}.$ (5)
This scheme is quite general. First, it is agnostic as to how the prior action
assignments are made (e.g., standard maximization w.r.t. the prior regressor
as in DQN, Double DQN (DDQN) (Hasselt et al., 2016), or other variants). It
can also be used in conjunction with a search through alternate assignments
(see below).
Second, the consistency buffer $B$ may be populated in a variety of ways.
Including all max-action choices from all past training batches promotes full
consistency. However, this may be too constraining since action choices early
in training are generally informed by inaccurate value estimates. $B$ may be
implemented to focus only on more recent data (e.g., with a sliding recency
window, weight decay, or subsampling); and the degree of recency bias may
adapt during training (e.g., becoming more inclusive as training proceeds and
the Q-function converges). Reducing the size of $B$ also has computational
benefits. We discuss other ways of promoting consistency in Sec. 5.
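For instance, a recency-windowed buffer can be sketched as follows (the window size is an illustrative choice):

```python
from collections import deque

class ConsistencyBuffer:
    """Consistency buffer B with a sliding recency window: only the most
    recent `window` (s', sigma(s')) assignments feed the penalty."""

    def __init__(self, window=10_000):
        self._pairs = deque(maxlen=window)

    def extend(self, assignments):
        """assignments: iterable of (successor_state, assigned_action)."""
        self._pairs.extend(assignments)

    def __iter__(self):
        return iter(self._pairs)
```

Pairs that fall outside the window simply stop contributing to $C_{\theta}(B)$.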
The proposed consistency penalty resembles the temporal-consistency loss of
Pohlen et al. (2018), but our aims are very different. Their temporal
consistency notion penalizes changes in a next state’s Q-estimate over all
actions, whereas we discourage inconsistencies in the greedy policy induced by
the Q-estimator, regardless of the actual estimated values.
### 3.3 The Search Space
Figure 1: A generic search tree.
Ensuring optimality requires that PCQL track _all $\Theta$-consistent
assignments_. While the set of such assignments has polynomial size (Lu et
al., 2018), it is impractical to track in realistic problems. As such, in
ConQUR we recast information set tracking as a _search problem_ and propose
several strategies for managing the search process. We begin by defining the
search space and discussing its properties. We discuss search procedures in
Sec. 3.4.
As above, assume training data is divided into batches $D_{1},\ldots,D_{T}$
and we have some initial Q-function estimate $\theta_{0}$ (for bootstrapping
$D_{1}$’s labels). The regressor $\theta_{k}$ for $D_{k}$ can, in principle,
be trained with labels generated by _any assignment_
$\sigma\in\Sigma_{\Theta}(D_{k})$ of actions to its successor states
$\chi(D_{k})$, not necessarily maximizing actions w.r.t. $\theta_{k-1}$. Each
$\sigma$ gives rise to a different updated Q-estimator $\theta_{k}$. There are
several restrictions we can place on “reasonable” $\sigma$-candidates: (i)
$\sigma$ is $\Theta$-consistent; (ii) $\sigma$ is jointly $\Theta$-consistent
with all $\sigma_{j}$, for $j<k$, used to construct the prior regressors on
which we bootstrap $\theta_{k-1}$; (iii) $\sigma$ is not _dominated_ by any
$\sigma^{\prime}\in\Sigma_{\Theta}(D_{k})$, where we say $\sigma^{\prime}$
dominates $\sigma$ if
$Q_{\theta_{k-1}}(s^{\prime},\sigma^{\prime}(s^{\prime}))\geq
Q_{\theta_{k-1}}(s^{\prime},\sigma(s^{\prime}))$ for all
$s^{\prime}\in\chi(D)$, and this inequality is strict for at least one
$s^{\prime}$. Conditions (i) and (ii) are the strict consistency requirements
of PCQL. We relax these below as discussed in Sec. 3.2. Condition (iii) is
inappropriate in general, since we may add additional assignments (e.g., to
new data) that render all non-dominated assignments inconsistent, requiring
that we revert to some dominated assignment.
This gives us a generic _search space_ for finding policy-consistent,
delusion-free Q-function (see Fig. 1). Each node $n^{i}_{k}$ at depth $k$ in
the search tree is associated with a regressor $\theta^{i}_{k}$ defining
$Q_{\theta^{i}_{k}}$ and assignment $\sigma^{i}_{k}$ that justifies the labels
used to train $\theta^{i}_{k}$ ($\sigma^{i}_{k}$ can be viewed as an
information set). The root $n_{0}$ is based on an initial $\theta_{0}$, and
has an empty assignment $\sigma_{0}$. Nodes at level $k$ of the tree are
defined as follows. For each node $n_{k-1}^{i}$ at level $k-1$—with regressor
$\theta_{k-1}^{i}$ and $\Theta$-consistent assignment $\sigma_{k-1}^{i}$—we
have one child $n_{k}^{j}$ for each $\sigma_{k}^{j}\in\Sigma_{\Theta}(D_{k})$
such that $\sigma_{k-1}^{i}\cup\sigma_{k}^{j}$ is $\Theta$-consistent. Node
$n_{k}^{j}$’s assignment is $\sigma_{k-1}^{i}\cup\sigma_{k}^{j}$, and its
regressor $\theta_{k}^{j}$ is trained using the data set:
$\{(s,a)\mapsto r+\gamma Q_{\theta_{k-1}^{i}}(s^{\prime},\sigma_{k}^{j}(s^{\prime}))\,:\,(s,a,r,s^{\prime})\in D_{k}\}.$
The entire search space is constructed in this fashion, to a maximum depth of $T$.
See Appendix B, Algorithm 1 for pseudocode of a simple depth-first recursive
specification.
The exponential branching factor in this search tree would appear to make
complete search intractable; however, since we only allow $\Theta$-consistent
“collective” assignments we can bound the size of the tree—it is _polynomial_
in the VC-dimension of the approximator.
###### Theorem 1.
The number of nodes in the search tree is no more than
$O(nm\cdot[\binom{m}{2}n]^{\operatorname*{\mathsf{VCDim}}(\mathcal{G})})$
where $\operatorname*{\mathsf{VCDim}}(\cdot)$ is the VC-dimension (Vapnik,
1998) of a set of boolean-valued functions, and $\mathcal{G}$ is the set of
boolean functions defining all feasible greedy policies under $\Theta$:
$\mathcal{G}=\{g_{\theta}(s,a,a^{\prime}):=\mathbf{1}[f_{\theta}(s,a)-f_{\theta}(s,a^{\prime})>0],\ \forall s,\,a\neq a^{\prime}\ \mid\ \theta\in\Theta\}.$
A linear approximator with a fixed set of $d$ features induces a policy-
indicator function class $\mathcal{G}$ with VC-dimension $d$, making the
search tree polynomial in the size of the MDP. Similarly, a fixed ReLU DNN
architecture with $W$ weights and $L$ layers has VC-dimension of size
$O(WL\log W)$ again rendering the tree polynomially sized.
### 3.4 Search Heuristics
Even with the bound in Thm. 1, traversing the search space exhaustively is
generally impractical. Moreover, as discussed above, enforcing consistency
when generating the children of a node, and their regressors, may be
intractable. Instead, various search methods can be used to explore the space,
with the aim of reaching a “high quality” regressor at some (depth $T$) leaf
of the tree. We outline three primary considerations in the search process:
child generation, node evaluation or scoring, and the search procedure.
Generating children. Given node $n^{i}_{k-1}$, there are, in principle,
exponentially many action assignments, or children, $\Sigma_{\Theta}(D_{k})$
(though Thm. 1 limits this if we enforce consistency). Thus, we develop
heuristics for generating a small set of children, driven by three primary
factors.
The first factor is a preference for generating _high-value assignments_. To
accurately reflect the intent of (sampled) Bellman backups, we prefer to
assign actions to state $s^{\prime}\in\chi(D_{k})$ with larger predicted
Q-values i.e., a preference for $a$ over $a^{\prime}$ if
${Q}_{\theta_{k-1}^{j}}(s^{\prime},a)>{Q}_{\theta_{k-1}^{j}}(s^{\prime},a^{\prime})$.
However, since the maximizing assignment may be $\Theta$-inconsistent (in
isolation, jointly with the parent information set, or with future
assignments), candidate children should merely have _higher probability_ of a
high-value assignment. Second, we need to ensure _diversity_ of assignments
among the children. Policy commitments at stage $k$ constrain the assignments
at subsequent stages. In many search procedures (e.g., beam search), we avoid
backtracking, so we want the stage-$k$ commitments to offer flexibility in
later stages. The third factor is the degree to which we enforce consistency.
There are several ways to generate high-value assignments. We focus on one
natural technique: sampling action assignments using a Boltzmann distribution.
Let $\sigma$ be the assignment of some node (parent) at level $k-1$ in the
tree. We generate an assignment $\sigma_{k}$ for $D_{k}$ as follows. Assume
some permutation $s_{1}^{\prime},\ldots,s^{\prime}_{|D_{k}|}$ of
$\chi(D_{k})$. For each $s^{\prime}_{i}$ in turn, we sample $a_{i}$ with
probability proportional to $e^{Q_{\theta_{k-1}}(s^{\prime}_{i},a_{i})/\tau}$.
This can be done _without regard to consistency_ , in which case we use the
consistency penalty when constructing the regressor $\theta_{k}$ for this
child to “encourage” consistency rather than enforce it. If we want strict
consistency, we can use rejection sampling without replacement to ensure
$a_{i}$ is consistent with $\sigma^{j}_{k-1}\cup\sigma_{\leq i-1}$ (we can
also use a subset of $\sigma^{j}_{k-1}$ as a less restrictive consistency
buffer). (Notice that at least one action for state $s^{\prime}_{i}$ must be consistent with any previous (consistent) information set.) The temperature
parameter $\tau$ controls the degree to which we focus on maximizing
assignments versus diverse, random assignments. While sampling gives some
diversity, this procedure biases selection of high-value actions to states
that occur early in the permutation. To ensure further diversity, we use a new
random permutation for each child.
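The following sketch implements this sampler without the strict-consistency rejection step, relying on the consistency penalty instead (all names are illustrative):

```python
import numpy as np

def sample_assignment(succ_states, q_parent, tau=1.0, rng=None):
    """Sample a Boltzmann action assignment over successor states chi(D_k),
    using the parent node's Q-estimates q_parent(s) (a vector per state)."""
    rng = rng or np.random.default_rng()
    sigma = {}
    # Fresh random permutation per child: the visiting order matters once
    # consistency-constrained rejection sampling is layered on top.
    for i in rng.permutation(len(succ_states)):
        s_next = succ_states[i]
        logits = q_parent(s_next) / tau
        probs = np.exp(logits - logits.max())  # numerically stable softmax
        probs /= probs.sum()
        sigma[s_next] = int(rng.choice(len(probs), p=probs))
    return sigma
```

Small $\tau$ concentrates the sampler on (near-)maximizing assignments; large $\tau$ yields more diverse children.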
Scoring children. Once the children of some expanded node are generated, we
must assess the quality of each child to decide which new nodes to expand. One
possiblity is to use the average Q-label (overall, or weighted using an
initial distribution), Bellman error, or loss incurred by the regressor.
However, care must be taken when comparing nodes at different depths of the
tree. Since deeper nodes have a greater chance to accrue rewards or costs,
simple calibration methods can be used. Alternatively, when a simulator is
available, rollouts of the induced greedy policy can be used to evaluate the node
quality. However, rollouts incur considerable computational expense during
training relative to the more direct scoring methods.
Search Procedure. Given a method for generating and scoring children,
different search procedures can be applied: best-first search, beam search,
local search, etc. all fit very naturally within the ConQUR framework.
Moreover, hybrid strategies are possible—one we develop below is a variant of
beam search in which we generate multiple children only at certain levels of
the tree, then do “deep dives” using consistency-penalized Q-regression at the
intervening levels. This reduces the size of the search tree considerably and,
when managed properly, adds only a constant-factor (proportional to beam size)
slowdown to methods like DQN.
### 3.5 An Instantiation of the ConQUR Framework
We now outline a specific instantiation of the ConQUR framework that
effectively navigates the large search spaces that arise in practical RL
settings. We describe a heuristic, modified beam-search strategy with
backtracking and priority scoring. We outline only key features (see details
in Algorithm 2, Appendix B).
Our search process alternates between two phases. In an _expansion phase_ ,
parent nodes are expanded, generating one or more child nodes with assignments
sampled from the Boltzmann distribution. For each child, we create target
Q-labels, then optimize its regressor using consistency-penalized Bellman
error Eq. 4, foregoing strict policy consistency. In a _dive phase_ , each
parent generates _one_ child, whose action assignment is given by standard
max-action selection w.r.t. the parent’s regressor. No diversity is considered
but we continue to use consistency-penalized regression.
From the root, the search begins with an expansion phase to create $c$
children—$c$ is the _splitting factor_. Each child inherits its parent’s
consistency buffer to which we add the new assignments used for that child’s
Q-labels. To limit the tree size, we track a subset of the children (the
_frontier_), selected using some scoring function. We select the top
$\ell$ nodes for expansion, proceed to a dive phase, and iterate.
Below, we also consider backtracking strategies that return to unexpanded nodes at shallower depths of the tree; a compact code sketch of the overall procedure follows.
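The sketch captures the expand/dive alternation at a high level; node construction, training, and scoring are passed in as callables, so this is a skeleton under stated assumptions rather than a full implementation:

```python
def conqur_search(root, batches, make_child, train_step, score,
                  splitting_factor=4, frontier_size=16, expand_every=10,
                  top_l=4):
    """Modified beam search of Sec. 3.5.

    make_child(parent, batch): samples a Boltzmann action assignment and
        fits a consistency-penalized regressor (Eq. 4) for a new child node.
    train_step(node, batch): dive step -- max-action labels w.r.t. the
        node's own regressor, still with the consistency penalty.
    score(node): node-scoring heuristic (e.g., penalized Bellman error).
    """
    frontier = [root]
    for k, batch in enumerate(batches):
        if k % expand_every == 0:  # expansion phase
            parents = sorted(frontier, key=score, reverse=True)[:top_l]
            children = [make_child(p, batch)
                        for p in parents for _ in range(splitting_factor)]
            frontier = sorted(children, key=score, reverse=True)[:frontier_size]
        else:                      # dive phase
            for node in frontier:
                train_step(node, batch)
    return max(frontier, key=score)
```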
### 3.6 Related Work
Other work has considered multiple hypothesis tracking in RL. One direct
approach uses ensembling, with multiple Q-approximators updated in parallel
(Faußer & Schwenker, 2015; Osband et al., 2016; Anschel et al., 2017) and
combined to reduce instability and variance. Population-based methods,
inspired by evolutionary search, are also used. Conti et al. (2018) combine
novelty search and quality diversity to improve hypothesis diversity and
quality. Khadka & Tumer (2018) augment an off-policy RL method with
diversified population information derived from an evolutionary algorithm.
These techniques do not target a specific weakness of Q-learning, such as
delusion.
## 4 Empirical Results
We assess the performance of ConQUR using the Atari test suite (Bellemare et
al., 2013). Since ConQUR directly tackles delusion, any performance
improvement over Q-learning baselines strongly suggests the presence of
delusional bias in the baselines in these domains. We first assess the impact
of our consistency penalty in isolation (without search), treating it as a
“regularizer” that promotes consistency with both DQN and DDQN. We then test
our modified beam search to assess the full power of ConQUR. We do not
directly compare ConQUR to policy gradient or actor-critic methods—which for
some Atari games offer state-of-the-art performance (Schrittwieser et al.,
2019; Kapturowski et al., 2020)—because our aim with ConQUR is to improve the
performance of (widely used) Q-learning-based algorithms.
### 4.1 Consistency Penalization
We first study the effects of augmenting both DQN and DDQN with soft-policy
consistency in isolation. We train models using an open-source implementation
of DQN and DDQN, using default hyperparameters (Guadarrama et al., 2018). We
refer to the consistency-augmented algorithms as $\mathsf{DQN}{(\lambda)}$ and
$\mathsf{DDQN}{(\lambda)}$, respectively, where $\lambda$ is the penalty
weight (see Eq. 4). When $\lambda=0$, these correspond to DQN and DDQN
themselves. This policy-consistency augmentation is lightweight and can be
applied readily to any regression-based Q-learning method. Since we do not use
search (i.e., do not track multiple hypotheses), these experiments use a small
consistency buffer drawn only from the _current_ data batch by sampling from
the replay buffer—this prevents getting “trapped” by premature policy
commitments. No diversity is used to generate action assignments—standard
action maximization is used.
We evaluate $\mathsf{DQN}{(\lambda)}$ and $\mathsf{DDQN}{(\lambda)}$ for
$\lambda\in\{0.25,0.5,1,1.5,2\}$ on 19 Atari games. (These 19 games were selected arbitrarily, simply to test soft-consistency in isolation; see Appendix C for details.) In training, $\lambda$ is initialized at $0$ and
annealed to the desired value to avoid premature commitment to poor
assignments. (The annealing schedule is $\lambda_{t}=\lambda_{\textrm{final}}\,t/(t+2\times 10^{6})$.) Without annealing, the model tends to anchor on poorly informed assignments during early training, adversely impacting performance. Unsurprisingly, the best $\lambda$ tends to
differ across games depending on the extent of delusional bias. Despite this,
$\lambda=0.5$ works well across all games tested. Fig. 2 illustrates the
effect of increasing $\lambda$ on two games. In Gravitar, it results in better
performance in both $\mathsf{DQN}{}$ and $\mathsf{DDQN}{}$, while in
SpaceInvaders, $\lambda=0.5$ improves both baselines, but relative performance
degrades at $\lambda=2$.
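For reference, the annealing schedule given above as a one-line helper:

```python
def annealed_lambda(t, lam_final, horizon=2_000_000):
    """Annealing schedule: lambda_t = lam_final * t / (t + 2e6)."""
    return lam_final * t / (t + horizon)
```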
We also compare performance on each game for each $\lambda$ value, as well as
using the best $\lambda_{\textrm{best}}$ (see Fig. 8, Table 3 in Appendix
C.4). $\mathsf{DQN}{(\lambda_{\text{best}})}$ and
$\mathsf{DDQN}{(\lambda_{\text{best}})}$ outperform their “potentially
delusional” counterparts in all but 3 and 2 games, respectively. In 9 games,
both $\mathsf{DQN}{(\lambda_{\text{best}})}$ and
$\mathsf{DDQN}{(\lambda_{\text{best}})}$ beat _both_ baselines. With a fixed
$\lambda=0.5$, $\mathsf{DQN}{(\lambda)}$ and $\mathsf{DDQN}{(\lambda)}$ each
beat their respective baseline in 11 games. These results suggest that
consistency penalization—independent of the general ConQUR model—can improve
the performance of DQN and DDQN by addressing delusional bias. Moreover,
promoting policy consistency appears to have a different effect on learning
than double Q-learning, which addresses maximization bias. Indeed, consistency
penalization, when applied to $\mathsf{DQN}{}$, achieves greater gains than
$\mathsf{DDQN}{}$ in 15 games. Finally, in 9 games, $\mathsf{DDQN}{(\lambda)}$ improves over unaugmented $\mathsf{DQN}{}$. Further experiment details and
results can be found in Appendix C.
Figure 2: Varying penalization $\lambda$ (no search procedure).
### 4.2 Full ConQUR
Figure 3: Policy value (game scores) of nodes, sorted.
Figure 4: Training curves of 8 sorted nodes.
Figure 5: Performance effects of varying $\lambda$.
Figure 6: Improvements of ConQUR($\lambda=10$) over the multi-DQN baseline on all 59 games. A frontier of $F=16$ nodes was used.
We test the full ConQUR framework using our modified beam search (Sec. 3.5) on
the full suite of 59 Atari games. Rather than training a full Q-network using
ConQUR, we leverage pre-trained networks from the Dopamine package (Castro et al., 2018; see https://github.com/google/dopamine), and use ConQUR to learn final-layer weights, i.e., a new “linear approximator” w.r.t. the learned
feature representation. We do this for two reasons. First, this allows us to
test whether delusional bias occurs in practice. By freezing the learned
representation, any improvements offered by ConQUR when learning a linear
Q-function over those same features provides direct evidence that (a) delusion
is present in the original trained baselines, and (b) ConQUR does in fact
mitigate its impact (without relying on novel feature discovery). Second, from
a practical point of view, this “linear tuning” approach offers a relatively
inexpensive way to apply our methodology in practice. By bootstrapping a model
trained in standard fashion and extracting performance gains with a relatively
small amount of additional training (e.g., linear tuning requires many fewer
training samples, as our results show), we can offset the cost of the ConQUR
search process itself.
We use DQN-networks with the same architecture as in Mnih et al. (2015),
trained on 200M frames as our baseline. We use ConQUR to retrain only the last
(fully connected) layer (freezing other layers), which can be viewed as a
linear Q-approximator over the features learned by the CNN. We train
Q-regressors in ConQUR using _only 4M additional frames_. (This reduces the computational/memory footprint of our experiments, and suffices since we retrain a simpler approximator; nothing in the framework _requires_ this reduced training data.) We use a splitting factor of $c=4$ and frontier size 16. The
dive phase is always of length nine (i.e., nine batches of data), giving an
expansion phase every ten iterations. Regressors are trained using soft-policy
consistency (Eq. 4), with the consistency buffer comprising _all_ prior action
assignments. We run ConQUR with $\lambda\in\{1,10\}$ and select the best
performing policy. We use larger $\lambda$ values than in Sec. 4.1 since full
ConQUR maintains multiple Q-regressors and can “discard” poor performers. This
allows more aggressive consistency enforcement—in the extreme, with exhaustive
search and $\lambda\to\infty$, ConQUR behaves like PCQL, finding a near-
optimal greedy policy. See Appendix D for further details (e.g.,
hyperparameters) and results.
We first test two approaches to scoring nodes: (i) policy evaluation using
rollouts; and (ii) scoring using the loss function (Bellman error with soft
consistency). Results on a small selection of games are shown in Table 1.
While rollouts, unsurprisingly, tend to induce better-performing policies,
consistent-Bellman scoring is competitive. Since the latter is much less computationally intensive, and does not require a simulator (or otherwise sampling the environment), we use it throughout our remaining experiments.
We next compare ConQUR with the value of the pre-trained DQN. We also evaluate
a “multi-DQN” baseline that trains multiple DQNs independently, warm-starting
from the same pre-trained DQN. It uses the same number of frontier nodes as
ConQUR, and is trained identically to ConQUR, but uses direct Bellman error
(no consistency penalty). This gives DQN the same advantage of multiple-
hypothesis tracking as ConQUR (without its policy consistency).
Game | Rollouts | Bellman + Consistency Penalty
---|---|---
BattleZone | 33796.30 | 32618.18
BeamRider | 9914.00 | 10341.20
Boxing | 83.34 | 83.03
Breakout | 379.21 | 393.00
MsPacman | 5947.78 | 5365.06
Seaquest | 2848.04 | 3000.78
SpaceInvader | 3442.31 | 3632.25
StarGunner | 55800.00 | 56695.35
Zaxxon | 11064.00 | 10473.08
Table 1: Results of ConQUR with 8 nodes (split 2) on 9 games: comparing loss-
based node scoring with scoring using rollouts.
We test on 59 games. ConQUR with frontier size 16, expansion factor 4, and splitting factor 4 (16-4-4), with backtracking (as described in Appendix D), results in significant improvements over the pre-trained DQN, with an average
score improvement of 189%. The only games without improvement are Montezuma’s
Revenge, Tennis, Freeway, Pong, PrivateEye and BankHeist. This demonstrates
that, _even when simply retraining the last layer of a highly tuned DQN
network_ , removing delusional bias frequently improves policy performance
significantly. ConQUR exploits the reduced parameterization to obtain these
gains with only 4M frames of training data. A half-dozen games have outsized
improvements over pre-trained DQN, including Venture (35 times greater value),
ElevatorAction (23 times), Tutankham (5 times) and Solaris (5 times). (This may be in part, but not fully, due to the sticky-action training of the pre-trained model.)
We found that $\lambda=10$ provided the best performance across all games.
Fig. 6 shows the percentage improvement of ConQUR($\lambda=10$) over the
multi-DQN baseline for all 59 games. The improvement is defined as
$(s_{C}-s_{B})/|s_{B}|$ where $s_{C}$ and $s_{B}$ are the average scores (over
5 runs) of the policy generated by ConQUR and that by the multi-DQN baseline
(16 nodes), respectively. Compared to this stronger baseline, ConQUR wins by a
margin of at least 10% in 16 games, while 19 games see improvements of 1–10%,
16 games show little effect ($\pm 1$%) and 8 games show a decline of greater
than 1%. Tables of complete results and figures of training curves (all games)
appears in Appendix D.3, Table 4 and Fig. 11.
Figs. 3 and 4 (smoothed, best frontier node) show node policy values and
training curves, respectively, for Solaris. When examining nodes ranked by
their policy value (Fig. 3), we see that nodes of any given rank generated by
ConQUR dominate their multi-DQN (baseline) counterparts: the three highest-
ranked nodes exceed their baseline counterparts by 18%, 13% and 15%,
respectively, while the remaining nodes show improvements of roughly 11–12%.
Fig. 5 (smoothed, best frontier node) shows the effect of varying $\lambda$.
In Alien, increasing $\lambda$ from 1 to 10 improves performance, but
performance starts to decline for higher $\lambda$ (we tested both 100 and
1000). This is similar to patterns observed in Sec. 4.1 and represents a
trade-off between emphasizing consistency and not over-committing to action
assignments. In Atlantis, by contrast, stronger penalization consistently degrades performance: the stronger the penalization, the worse the result.
## 5 Concluding Remarks
We have introduced ConQUR, a framework for mitigating delusional bias in
various forms of Q-learning that relaxes some of the strict assumptions of
exact delusion-free algorithms like PCQL to ensure scalability. Its main
components are a search procedure used to maintain diverse, promising
Q-regressors (and corresponding information sets); and a consistency penalty
that encourages “maximizing” actions to be consistent with the approximator
class. ConQUR embodies elements of both value-based and policy-based RL: it
can be viewed as using partial policy constraints to bias the Q-value
estimator, and as a means of using candidate value functions to bias the
search through policy space. Empirically, we find that ConQUR can improve the
quality of existing approximators by removing delusional bias. Moreover, the
consistency penalty applied on its own, in either DQN or DDQN, can improve
policy quality.
There are many directions for future research. Other methods for nudging
regressors to be policy-consistent include exact consistency (i.e.,
constrained regression), other regularization schemes that push the regressor
to fall within the information set, etc. Further exploration of search, child-
generation, and node-scoring strategies should be examined within ConQUR. Our
(full) experiments should also be extended beyond those that warm-start from a
DQN model. We believe our methods can be extended to both continuous actions
and soft max-action policies. We are also interested in the potential
connection between maintaining multiple “hypotheses” (i.e., Q-regressors) and
notions in distributional RL (Bellemare et al., 2017).
## References
* Anschel et al. (2017) Anschel, O., Baram, N., and Shimkin, N. Averaged-dqn: Variance reduction and stabilization for deep reinforcement learning. arXiv:1611.01929, 2017.
* Bellemare et al. (2017) Bellemare, M., Dabney, W., and Munos, R. A distributional perspective on reinforcement learning. In _Proceedings of the International Conference on Machine Learning (ICML-17)_ , 2017.
* Bellemare et al. (2013) Bellemare, M. G., Naddaf, Y., Veness, J., and Bowling, M. The arcade learning environment: An evaluation platform for general agents. _Journal of Artificial Intelligence Research_ , 47:253–279, June 2013.
* Castro et al. (2018) Castro, P. S., Moitra, S., Gelada, C., Kumar, S., and Bellemare, M. G. Dopamine: A research framework for deep reinforcement learning. arXiv:1812.06110 [cs.LG], 2018.
* Conti et al. (2018) Conti, E., Madhavan, V., Such, F. P., Lehman, J., Stanley, K. O., and Clune, J. Improving exploration in evolution strategies for deep reinforcement learning via a population of novelty-seeking agents. arXiv:1712.06560, 2018.
* Ernst et al. (2005) Ernst, D., Geurts, P., and Wehenkel, L. Tree-based batch mode reinforcement learning. _Journal of Machine Learning Research_ , 6:503–556, 2005.
* Faußer & Schwenker (2015) Faußer, S. and Schwenker, F. Neural network ensembles in reinforcement learning. _Neural Processing Letters_ , 2015.
* Gordon (1999) Gordon, G. _Approximation Solutions to Markov Decision Problems_. PhD thesis, Carnegie Mellon University, 1999.
* Gordon (1995) Gordon, G. J. Stable function approximation in dynamic programming. In _Proceedings of the Twelfth International Conference on Machine Learning (ICML-95)_ , pp. 261–268, Lake Tahoe, 1995.
* Guadarrama et al. (2018) Guadarrama, S., Korattikara, A., Ramirez, O., Castro, P., Holly, E., Fishman, S., Wang, K., Gonina, E., Wu, N., Harris, C., Vanhoucke, V., and Brevdo, E. TF-Agents: A library for reinforcement learning in TensorFlow. https://github.com/tensorflow/agents, 2018.
* Hasselt et al. (2016) Hasselt, H. v., Guez, A., and Silver, D. Deep reinforcement learning with double q-learning. In _Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence_ , AAAI’16, pp. 2094–2100. AAAI Press, 2016. URL http://dl.acm.org/citation.cfm?id=3016100.3016191.
* Hessel et al. (2017) Hessel, M., Modayil, J., van Hasselt, H., Schaul, T., Ostrovski, G., Dabney, W., Horgan, D., Piot, B., Azar, M., and Silver, D. Rainbow: Combining improvements in deep reinforcement learning. arXiv:1710.02298, 2017.
* Kapturowski et al. (2020) Kapturowski, S., Ostrovski, G., Quan, J., Munos, R., and Dabney, W. Recurrent experience replay in distributed reinforcement learning. In _8th International Conference on Learning Representations_ , Addis Ababa, Ethiopia, 2020.
* Khadka & Tumer (2018) Khadka, S. and Tumer, K. Evolution-guided policy gradient in reinforcement learning. In _Advances in Neural Information Processing Systems 31 (NeurIPS-18)_ , Montreal, 2018.
* Lu et al. (2018) Lu, T., Schuurmans, D., and Boutilier, C. Non-delusional Q-learning and value iteration. In _Advances in Neural Information Processing Systems 31 (NeurIPS-18)_ , Montreal, 2018.
* Maei et al. (2010) Maei, H., Szepesvári, C., Bhatnagar, S., and Sutton, R. Toward off-policy learning control with function approximation. In _International Conference on Machine Learning_ , Haifa, Israel, 2010.
* Melo & Ribeiro (2007) Melo, F. and Ribeiro, M. I. Q-learning with linear function approximation. In _Proceedings of the International Conference on Computational Learning Theory (COLT)_ , pp. 308–322, 2007.
* Mnih et al. (2015) Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A., Veness, J., Bellemare, M., Graves, A., Riedmiller, M., Fidjeland, A., Ostrovski, G., Petersen, S., Beattie, C., Sadik, A., Antonoglou, I., King, H., Kumaran, D., Wierstra, D., Legg, S., and Hassabis, D. Human-level control through deep reinforcement learning. _Science_ , 518:529–533, 2015.
* Munos et al. (2016) Munos, R., Stepleton, T., Harutyunyan, A., and Bellemare, M. Safe and efficient off-policy reinforcement learning. In _Advances in Neural Information Processing Systems 29 (NIPS-16)_ , Barcelona, 2016.
* Osband et al. (2016) Osband, I., Blundell, C., Pritzel, A., and Van Roy, B. Deep exploration via bootstrapped dqn. _Advances in Neural Information Processing Systems 29 (NIPS-16)_ , 2016.
* Pohlen et al. (2018) Pohlen, T., Piot, B., Hester, T., Azar, M. G., Horgan, D., Budden, D., Barth-Maron, G., van Hasselt, H., Quan, J., Vecerík, M., Hessel, M., Munos, R., and Pietquin, O. Observe and look further: Achieving consistent performance on Atari. _CoRR_ , abs/1805.11593, 2018. URL http://arxiv.org/abs/1805.11593.
* Riedmiller (2005) Riedmiller, M. Neural fitted q iteration—first experiences with a data efficient neural reinforcement learning method. In _Proceedings of the 16th European Conference on Machine Learning_ , pp. 317–328, Porto, Portugal, 2005.
* Schrittwieser et al. (2019) Schrittwieser, J., Antonoglou, I., Hubert, T., Simonyan, K., Sifre, L., Schmitt, S., Guez, A., Lockhart, E., Hassabis, D., Graepel, T., Lillicrap, T., and Silver, D. Mastering atari, go, chess and shogi by planning with a learned model. arXiv:1911.08265 [cs.LG], 2019.
* Silver et al. (2016) Silver, D., Huang, A., Maddison, C. J., Guez, A., Sifre, L., Van Den Driessche, G., Schrittwieser, J., Antonoglou, I., Panneershelvam, V., Lanctot, M., et al. Mastering the game of Go with deep neural networks and tree search. _Nature_ , 529(7587):484–489, 2016.
* Sutton & Barto (2018) Sutton, R. S. and Barto, A. G. _Reinforcement Learning: An Introduction_. MIT Press, Cambridge, MA, 2018.
* Szepesvári & Smart (2004) Szepesvári, C. and Smart, W. Interpolation-based Q-learning. In _Proceedings of the International Conference on Machine Learning (ICML-04)_ , 2004.
* van Hasselt (2010) van Hasselt, H. Double q-learning. In _Advances in Neural Information Processing Systems 23 (NIPS-10)_ , pp. 2613–2621, Vancouver, BC, 2010.
* Vapnik (1998) Vapnik, V. N. _Statistical Learning Theory_. Wiley-Interscience, September 1998.
* Wang et al. (2016) Wang, Z., Schaul, T., Hessel, M., van Hasselt, H., Lanctot, M., and de Freitas, N. Dueling network architectures for deep reinforcement learning. In _Proceedings of the International Conference on Machine Learning (ICML-16)_ , 2016.
* Watkins (1989) Watkins, C. J. C. H. _Learning from Delayed Rewards_. PhD thesis, King’s College, Cambridge, UK, May 1989.
* Watkins & Dayan (1992) Watkins, C. J. C. H. and Dayan, P. Q-learning. _Machine Learning_ , 8:279–292, 1992.
## Appendix A An Example of Delusional Bias
We describe an example, taken directly from (Lu et al., 2018), to show
concretely how delusional bias causes problems for Q-learning with function
approximation. The MDP in Fig. 7 illustrates the phenomenon: Lu et al. (2018)
use a linear approximator over a specific set of features in this MDP to show
that:
1. (a)
No $\pi\in G(\Theta)$ can express the optimal (unconstrained) policy (which
requires taking $a_{2}$ at each state);
2. (b)
The optimal _feasible_ policy in $G(\Theta)$ takes $a_{1}$ at $s_{1}$ and
$a_{2}$ at $s_{4}$ (achieving a value of $0.5$).
3. (c)
Online Q-learning (Eq. 1) with data generated using an $\varepsilon$-greedy
behavior policy must converge to a fixed point (under a range of rewards and
discounts) corresponding to a “compromise” admissible policy which takes
$a_{1}$ at both $s_{1}$ and $s_{4}$ (value of $0.3$).
Q-learning fails to find a reasonable fixed-point because of delusion.
Consider the backups at $(s_{2},a_{2})$ and $(s_{3},a_{2})$. Suppose
$\hat{\theta}$ assigns a “high” value to $(s_{3},a_{2})$, so that
$Q_{\hat{\theta}}(s_{3},a_{2})>Q_{\hat{\theta}}(s_{3},a_{1})$ as required by
$\pi_{\theta^{\ast}}$. They show that any such $\hat{\theta}$ also accords a
“high” value to $(s_{2},a_{2})$. But
$Q_{\hat{\theta}}(s_{2},a_{2})>Q_{\hat{\theta}}(s_{2},a_{1})$ is inconsistent with
the first requirement. As such, any update that makes the Q-value of
$(s_{2},a_{2})$ higher _undercuts the justification_ for it to be higher
(i.e., makes the “max” value of its successor state $(s_{3},a_{2})$ lower).
This occurs not due to approximation error, but rather to the inability of Q-learning to
find the value of the optimal _representable_ policy.
Figure 7: A simple MDP (Lu et al., 2018).
## Appendix B Algorithms
The pseudocode of (depth-first) version of the ConQUR search framework is
listed in Algorithm 1.
Algorithm 1 ConQUR Search (Generic, depth-first)
0: Data sets $D_{k},D_{k+1},\ldots D_{T}$; regressor $\hat{Q}_{k-1}$; and
assignment $\sigma$ over $D_{\leq k-1}=\cup_{1\leq j\leq k-1}D_{j}$ reflecting
prior data; policy class $\Theta$.
1: Let $\Sigma_{\Theta,\sigma}=\{\sigma_{k}\in\Sigma_{\Theta}(D_{k}):\sigma_{k}\cup\sigma\textrm{ is consistent}\}$
2: for all $\sigma_{k}^{j}\in\Sigma_{\Theta,\sigma}$ do
3: Training set $S\leftarrow\{\}$
4: for all $(s,a,r,s^{\prime})\in D_{k}$ do
5: $q\leftarrow r+\gamma\hat{Q}_{k-1}(s^{\prime},\sigma_{k}^{j}(s^{\prime}))$
6: $S\leftarrow S\cup\{((s,a),q)\}$
7: end for
8: Train $\hat{Q}^{j}_{k}$ using training set $S$
9: if $k=T$ then
10: Return $\hat{Q}^{j}_{k}$ // terminate
11: else
12: Return Search($D_{k+1},\ldots
D_{T};\,\hat{Q}^{j}_{k};\,\sigma_{k}^{j}\cup\sigma;\,\Theta$) // recurse
13: end if
14: end for
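To make the recursion concrete, a minimal Python sketch of Algorithm 1 follows. The helpers `consistent_assignments` and `fit_regressor` are hypothetical stand-ins for the enumeration of realizable, consistent assignments and the supervised regression step, and the dict-based assignment representation is our own choice, not the released implementation.

```python
GAMMA = 0.99  # assumed discount factor

def conqur_search(batches, q_prev, sigma, k=0):
    """Depth-first sketch of Algorithm 1 (illustrative, not the authors' code).

    batches: list of data sets D_k..D_T, each a list of (s, a, r, s2) tuples.
    q_prev:  regressor Q-hat_{k-1}, callable as q_prev(s, a).
    sigma:   dict mapping successor states to previously assigned actions.
    Returns the regressors obtained at the leaves (level T).
    """
    leaves = []
    # Enumerate assignments of batch k realizable by some theta in Theta
    # and jointly consistent with the prior assignment sigma (hypothetical).
    for sigma_k in consistent_assignments(batches[k], sigma):
        # Bootstrapped labels: q = r + gamma * Qhat_{k-1}(s', sigma_k(s'))
        train = [((s, a), r + GAMMA * q_prev(s2, sigma_k[s2]))
                 for (s, a, r, s2) in batches[k]]
        q_k = fit_regressor(train)  # hypothetical supervised-learning step
        if k == len(batches) - 1:
            leaves.append(q_k)  # terminate at the last level
        else:
            # recurse on the next batch with the merged assignment history
            leaves.extend(conqur_search(batches, q_k,
                                        {**sigma, **sigma_k}, k + 1))
    return leaves
```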
As discussed in Sec. 3.5, a more specific instantiation of the ConQUR
algorithm is listed in Algorithm 2.
Algorithm 2 Modified Beam Search Instantiation of ConQUR Algorithm
0: Search control parameters: $m$, $\ell$, $c$, $d$, $T$
1: Maintain list of data batches $D_{1},...,D_{k}$, initialized empty
2: Maintain candidate pool $P$ of at most $m$ nodes, initialized
$P=\{n_{0}\}$
3: Maintain frontier list $F$ of $\ell\cdot c$ nodes
4: Maintain for each node $n^{i}_{k}$ a regressor $\theta^{i}_{k}$ and an
ancestor assignment $\sigma^{i}_{k}$
5: for each search level $k\leq T$ do
6: Find top scoring node $n^{1}\in P$
7: Use $\varepsilon$-greedy policy extracted from $Q_{\theta^{1}}$ to collect
next data batch $D_{k}$
8: if $k$ is an expansion level then
9: Select top $\ell$ scoring nodes $n^{1},...,n^{\ell}\in P$
10: for each selected node $n^{i}$ do
11: Generate $c$ children $n^{i,1},...,n^{i,c}$ using Boltzmann sampling on
$D_{k}$ with $Q_{\theta^{i}}$
12: for each child $n^{i,j}$ do
13: Let assignment history $\sigma^{i,j}$ be $\sigma^{i}\cup\{\textit{new assignment}\}$
14: Determine regressor $\theta^{i,j}$ by applying update (5) from
$\theta^{i}$
15: end for
16: Score and add child nodes to the candidate pool $P$
17: Assign frontier nodes to set of child nodes, $F=\{n^{i,j}\}$
18: if $|P|>m$ then
19: evict bottom scoring nodes, keeping top $m$ in $P$
20: end if
21: end for
22: end if
23: if $k$ is a refinement ("dive") level then
24: for each frontier node $n^{i,j}\in F$ do
25: Update regressor $\theta^{i,j}$ by applying update (5) to $\theta^{i,j}$
26: end for
27: end if
28: Run $d$ "dive" levels after each expansion level
29: end for
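As an illustration of line 11 of Algorithm 2, one plausible implementation of the Boltzmann child generation is sketched below; the per-successor-state softmax and the function name `boltzmann_children` are assumptions, not the released code.

```python
import numpy as np

def boltzmann_children(q_values, c, tau=1.0, rng=None):
    """Sample c diversified action assignments from a Boltzmann distribution.

    q_values: array (n_states, n_actions) of Q_theta(s', .) over the batch's
              successor states. Returns c arrays of sampled action indices.
    """
    rng = rng or np.random.default_rng()
    # Numerically stable softmax per successor state at temperature tau.
    logits = (q_values - q_values.max(axis=1, keepdims=True)) / tau
    probs = np.exp(logits)
    probs /= probs.sum(axis=1, keepdims=True)
    children = []
    for _ in range(c):
        actions = np.array([rng.choice(len(p), p=p) for p in probs])
        children.append(actions)
    return children
```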
## Appendix C Additional Detail: Effects of Consistency Penalization
### C.1 Delusional bias in DQN and DDQN
Both DQN and DDQN use a delayed version of the $Q$-network
$Q_{\theta^{-}}(s^{\prime},a^{\prime})$ for label generation, but in
different ways. In DQN, $Q_{\theta^{-}}(s^{\prime},a^{\prime})$ is used for
both the value estimate and the action assignment,
$\sigma_{\text{DQN}}(s^{\prime})=\operatorname*{arg\!max}_{a^{\prime}}Q_{\theta^{-}}(s^{\prime},a^{\prime})$,
whereas in DDQN, $Q_{\theta^{-}}(s^{\prime},a^{\prime})$ is used only for the
value estimate and the action assignment is computed from the current network,
$\sigma_{\text{DDQN}}(s^{\prime})=\operatorname*{arg\!max}_{a^{\prime}}Q_{\theta_{k}}(s^{\prime},a^{\prime})$.
With respect to delusional bias, the action assignment of DQN is consistent
for all batches after the latest network weight transfer, as
$\sigma_{\text{DQN}}(s^{\prime})$ is computed from the same
$Q_{\theta^{-}}(s^{\prime},a^{\prime})$ network. DDQN, on the other hand,
could have very inconsistent assignments, since the action is computed from
the current network that is being updated at every step.
### C.2 Training Methodology and Hyperparameters
We implement the consistency penalty on top of the DQN and DDQN algorithms by
modifying the open-source TF-Agents library (Guadarrama et al., 2018). In
particular, we modify the existing DqnAgent and DdqnAgent by adding a
consistency penalty term to the original TD loss.
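A minimal sketch of such a modification is shown below, assuming the penalty takes a hinge form that discourages Q-values at $s^{\prime}$ from overtaking the recorded assignment $\sigma(s^{\prime})$; the exact form of update (5) is defined in the main text, so treat this as an approximation rather than the actual TF-Agents patch.

```python
import tensorflow as tf

def penalized_td_loss(q_net, s, a, r, s2, sigma_s2, lam, gamma=0.99):
    """TD loss plus a consistency penalty (sketch; the hinge form is an
    assumption, not necessarily the exact update (5))."""
    q_sa = tf.gather(q_net(s), a, batch_dims=1)
    target = r + gamma * tf.reduce_max(q_net(s2), axis=1)
    td = tf.reduce_mean(tf.square(q_sa - tf.stop_gradient(target)))
    # Penalize actions at s' whose Q-value exceeds that of the recorded
    # assignment sigma(s'), i.e., Q-values that contradict the history.
    q_s2 = q_net(s2)
    q_sigma = tf.gather(q_s2, sigma_s2, batch_dims=1)
    penalty = tf.reduce_mean(tf.nn.relu(q_s2 - q_sigma[:, None]))
    return td + lam * penalty
```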
We use the TF-Agents implementation of DQN training on Atari with the default
hyperparameters, which are mostly the same as those used in the original DQN
paper (Mnih et al., 2015). For the reader's convenience, some important
hyperparameters are listed in Table 2. The reward is clipped to $[-1,1]$,
following the original DQN.
Hyper-parameter | Value
---|---
Mini-batch size | 32
Replay buffer capacity | 1 million transitions
Discount factor $\gamma$ | 0.99
Optimizer | RMSProp
Learning rate | 0.00025
Convolution channel | $32,64,64$
Convolution filter size | $(8\times 8),(4\times 4),(3\times 3)$
Convolution stride | 4, 2, 1
Fully-connected hidden units | 512
Train exploration $\varepsilon_{\text{train}}$ | 0.01
Eval exploration $\varepsilon_{\text{eval}}$ | 0.001
Table 2: Hyperparameters for training DQN and DDQN with consistency penalty on
Atari.
### C.3 Evaluation Methodology
We empirically evaluate our modified DQN and DDQN agents trained with
consistency penalty on 15 Atari games. Evaluation is run using the training
and evaluation framework for Atari provided in TF-Agents without any
modifications.
### C.4 Detailed Results
Fig. 8 shows the effects of varying $\lambda$ on both DQN and DDQN. Table 3
summarizes the best penalties for each game and their corresponding scores.
Fig. 9 shows the training curves of the best penalization constants. Finally,
Fig. 10 shows the training curves for a fixed penalization of $\lambda=0.5$.
The data points in each plot of the aforementioned figures are obtained using
a window size of 30 steps: within each window we take the largest policy value
(averaged over roughly 2-5 runs). This is done to reduce visual clutter.
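The windowed-maximum smoothing just described amounts to the following sketch (the reshape-based implementation is our own):

```python
import numpy as np

def windowed_max(values, window=30):
    """Collapse a training curve to one point per window of 30 steps,
    keeping the largest policy value in each window (as described above)."""
    values = np.asarray(values, dtype=float)
    n_windows = len(values) // window
    return values[:n_windows * window].reshape(n_windows, window).max(axis=1)
```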
Figure 8: DQN and DDQN training curves for different penalty constants $\lambda$.
 | $\mathsf{DQN}{}$ | $\lambda_{\text{best}}$ | $\mathsf{DQN}{(\lambda_{\text{best}})}$ | $\mathsf{DDQN}{}$ | $\lambda^{\prime}_{\text{best}}$ | $\mathsf{DDQN}{(\lambda^{\prime}_{\text{best}})}$
---|---|---|---|---|---|---
Assault | 2546.56 | 1.5 | 3451.07 | 2770.26 | 1 | 2985.74
Atlantis | 995460.00 | 0.5 | 1003600.00 | 940080.00 | 1.5 | 999680.00
BattleZone | 67500.00 | 2 | 55257.14 | 47025.00 | 2 | 48947.37
BeamRider | 7124.90 | 0.5 | 7216.14 | 5926.59 | 0.5 | 6784.97
Boxing | 86.76 | 0.5 | 90.01 | 82.80 | 0.5 | 91.29
Breakout | 220.00 | 0.5 | 219.15 | 214.25 | 0.5 | 242.73
Enduro | 1206.22 | 0.5 | 1430.38 | 1160.44 | 1 | 1287.50
Gravitar | 475.00 | 1.5 | 685.76 | 462.94 | 1.5 | 679.33
JourneyEscape | -1020.59 | 0.25 | -696.47 | -794.71 | 1 | -692.35
MsPacman | 4104.59 | 2 | 4072.12 | 3859.64 | 0.5 | 4008.91
NameThisGame | 7230.71 | 1 | 9013.48 | 9618.18 | 0.5 | 10210.00
Qbert | 13270.64 | 0.5 | 14111.11 | 13388.92 | 1 | 12884.74
Seaquest | 5849.80 | 1 | 6123.72 | 12062.50 | 1 | 7969.77
SpaceInvaders | 2389.22 | 0.5 | 2707.83 | 3007.72 | 0.5 | 4080.57
StarGunner | 40393.75 | 0.5 | 55931.71 | 55957.89 | 0.5 | 60035.90
TimePilot | 4205.83 | 2 | 7612.50 | 6654.44 | 2 | 7964.10
Tutankham | 222.76 | 1 | 265.86 | 243.20 | 0.25 | 247.17
VideoPinball | 569502.19 | 0.25 | 552456.00 | 509373.50 | 0.25 | 562961.50
Zaxxon | 5533.33 | 1 | 10520.00 | 7786.00 | 0.5 | 10333.33
Table 3: Consistency penalty ablation results: best penalty constants for DQN
and DDQN and their corresponding scores.
Figure 9: DQN and DDQN training curves for the respective best $\lambda$ and
baseline.
Figure 10: DQN and DDQN training curves for $\lambda=0.5$ and the baseline.
## Appendix D Additional Detail: ConQUR Results
Our results use a frontier queue of size $F=16$ (these are the top-scoring
leaf nodes, which receive gradient updates and rollout evaluations during
training). To generate training batches, we select the best node's regressor
according to our scoring function and use it to generate training samples
(transitions) with $\varepsilon$-greedy exploration. Results are reported in
Table 4, and training curves in Fig. 11. We used Bellman error plus
consistency penalty as our scoring function. During the training process, we
also calibrated the scores to account for the depth difference between the
leaf nodes at the frontier and the leaf nodes in the candidate pool: we took
the mean difference between the scores of the current frontier nodes and
those of their parents, and scaled this difference by a constant of 2.5.
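A sketch of this depth calibration follows; the `score` and `parent` attributes are hypothetical, and the sign of the offset depends on the scoring convention (e.g., whether lower Bellman error counts as a better score).

```python
def depth_calibrated_score(pool_node, frontier_nodes, scale=2.5):
    """Adjust a candidate-pool node's score for its depth deficit (sketch).

    The offset is the mean score difference between the current frontier
    nodes and their parents, scaled by the constant 2.5 described above."""
    gaps = [n.score - n.parent.score for n in frontier_nodes]
    delta = sum(gaps) / len(gaps)
    return pool_node.score + scale * delta
```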
In our implementation, we initialized our Q-network with a pre-trained DQN. We
start with the expansion phase. During this phase, each parent node splits
into $c$ child nodes, and the Q-labels are generated using action assignments
from the Boltzmann sampling procedure, in order to create high-quality and
diversified children. We begin the dive phases once the number of children
generated is at least $F$. In particular, with the $F=16$ configuration, we
performed the expansion phase at iterations 0 and 1, and then at every tenth
iteration (10, 20, and so on, up to 90). All other iterations execute the
"dive" phase. At every fifth iteration, Q-labels are generated from action
assignments sampled according to the Boltzmann distribution. For all other
iterations, Q-labels are generated in the same fashion as in standard
Q-learning (taking the max Q-value). The generated Q-labels, along with the
consistency penalty, are then converted into gradient updates that apply to
one or more of the generated child nodes.
### D.1 Training Methodology and Hyperparameters
Each iteration consists of 10k transitions sampled from the environment. Our
entire training process has 100 iterations, consuming 1M transitions or 4M
frames. We used RMSProp as the optimizer with a learning rate of $2.5\times
10^{-6}$. One training iteration has 2.5k gradient updates, and we used a
batch size of 32. We replace the target network with the online network every
fifth iteration, and rewards are clipped to $[-1,1]$. We use a discount
factor of $\gamma=0.99$ and $\varepsilon$-greedy exploration with
$\varepsilon=0.01$. Details of hyper-parameter settings can be found in
Tables 5 and 6.
### D.2 Evaluation Methodology
We empirically evaluate our algorithms on 59 Atari games (Bellemare et al.,
2013), and followed the evaluation procedure as in Hasselt et al. (2016). We
evaluate our agents on every 10-th iteration (and also the initial and first
iteration) by suspending our training process. We evaluate on 500k frames, and
we cap the length of the episodes for 108k frames. We used
$\varepsilon$-greedy as the evaluation policy with $\varepsilon=0.001$. We
evaluated our algorithm under the _no-op starts_ regime—in this setting, we
insert a random number of “do-nothing” (or _no-op_) actions (up to 30) at the
beginning of each episode.
### D.3 Detailed Results
Fig. 11 shows training curves of ConQUR with 16 nodes under different
penalization strengths $\lambda\in\{1,10\}$. While each game has its own
optimal $\lambda$, in general, we found that $\lambda=10$ gave the best
performance for most games. Each plotted step of each training curve
(including the baseline) shows the best performing node’s policy value as
evaluated with full rollouts. Table 4 shows the summary of the highest policy
values achieved for all 59 games for ConQUR and the baseline under 16 nodes.
Both the baseline and ConQUR improve on the initial checkpoint overall, and
ConQUR holds a clear advantage over the baseline. These results all use a
splitting factor of $c=4$. (We show results with 8 nodes and a splitting
factor of 2 below.)
Figure 11: Training curves on 16 nodes with up to 30 no-op starts.
 | ConQUR($\lambda_{\text{best}}$) (16 nodes) | Baseline (16 nodes) | Checkpoint
---|---|---|---
AirRaid | 10613.01 | 9656.21 | 6916.88
Alien | 3253.18 | 2713.05 | 2556.64
Amidar | 555.75 | 446.88 | 203.08
Assault | 2007.81 | 2019.99 | 1932.55
Asterix | 5483.41 | 4649.52 | 2373.44
Asteroids | 1249.13 | 1196.56 | 701.96
Atlantis | 958932.00 | 931416.00 | 902216.00
BankHeist | 1002.59 | 965.34 | 872.91
BattleZone | 31860.30 | 32571.80 | 26559.70
BeamRider | 9009.14 | 9052.38 | 6344.91
Berzerk | 671.95 | 664.69 | 525.06
Bowling | 38.36 | 39.79 | 25.04
Boxing | 81.37 | 81.26 | 80.89
Breakout | 372.31 | 359.17 | 286.83
Carnival | 4889.19 | 4860.66 | 4708.14
Centipede | 4025.57 | 2408.23 | 758.21
ChopperCommand | 7818.22 | 6643.07 | 2991.00
CrazyClimber | 134974.00 | 119194.00 | 63181.14
DemonAttack | 11874.80 | 11445.20 | 7564.14
DoubleDunk | -14.04 | -15.25 | -16.66
ElevatorAction | 24.67 | 28.67 | 0.00
Enduro | 879.84 | 835.11 | 556.97
FishingDerby | 16.28 | 13.22 | 6.92
Freeway | 32.65 | 32.63 | 32.52
Frostbite | 289.25 | 230.29 | 166.44
Gopher | 11959.20 | 9084.00 | 4879.02
Gravitar | 489.22 | 446.64 | 394.46
Hero | 20827.00 | 20765.70 | 20616.30
IceHockey | -3.15 | -3.55 | -8.59
Jamesbond | 710.78 | 681.05 | 624.36
JourneyEscape | 902.22 | 1437.06 | -947.18
Kangaroo | 11017.65 | 10743.10 | 10584.20
Krull | 9556.53 | 9487.49 | 3998.90
MontezumaRevenge | 0.00 | 0.00 | 0.00
MsPacman | 5444.31 | 5487.12 | 4160.50
NameThisGame | 9104.40 | 8445.43 | 5524.73
Phoenix | 5325.33 | 5430.49 | 4801.18
Pitfall | 0.00 | 0.00 | -4.00
Pong | 21.00 | 21.00 | 20.95
Pooyan | 5898.46 | 5728.05 | 4393.09
PrivateEye | 100.00 | 100.00 | 100.00
Qbert | 13812.40 | 15189.00 | 8625.88
Riverraid | 15895.10 | 15370.10 | 11364.90
RoadRunner | 50820.40 | 47481.10 | 45073.25
Robotank | 62.74 | 57.66 | 53.08
Seaquest | 3046.34 | 2691.88 | 1060.77
Skiing | -13638.80 | -14908.21 | -29897.07
Solaris | 1991.33 | 1202.89 | 285.46
SpaceInvaders | 3556.10 | 3520.96 | 2895.30
StarGunner | 55679.27 | 55176.90 | 51490.60
Tennis | 0.00 | 0.00 | 0.00
TimePilot | 6698.88 | 7327.71 | 3806.54
Tutankham | 252.51 | 220.90 | 36.07
UpNDown | 31258.84 | 34455.20 | 5956.24
Venture | 37.37 | 3.64 | 0.00
VideoPinball | 423012.59 | 383105.41 | 268476.04
WizardOfWor | 8154.73 | 4782.11 | 2012.24
YarsRevenge | 26188.24 | 26330.31 | 25000.36
Zaxxon | 11723.20 | 11589.90 | 5334.44
Table 4: Summary of scores with $\varepsilon$-greedy ($\varepsilon=0.001$)
evaluation with up to 30 no-op starts. We ran ConQUR with 16 nodes and with
$\lambda\in\{1,10\}$. We report the scores of the best
$\lambda_{\text{best}}$ for each game. For most games, $\lambda_{\text{best}}$
is $10$.
Hyperparameters | Description | Value
---|---|---
Dive levels $d$ to run | We run $d$ levels of diving phase after each expansion phase | 9
Boltzmann Iteration | Every multiple of this number of iterations/levels, Q-labels are generated from the Boltzmann distribution in order to create diversified nodes. | 5
Online/target network swap frequency | Frequency (in iterations) at which the target network parameters are replaced by those of the online network. | 5
Evaluation frequency | Frequency (in iterations) at which we perform rollouts (testing against the environment). | 10
Learning Rate | Learning rate for the optimizer. | $2.5\times 10^{-6}$
Optimizer | Optimizer for training the neural network. | RMSprop
Iteration training data transition size | For each iteration, we generate this number of transitions and use it as training data. | 10k
Training step frequency | For each iteration, we perform (training-data transitions per iteration / training step frequency) gradient updates. | 4
Mini-batch size | Size of the mini batch data used to train the Q-network. | 32
$\varepsilon_{\textrm{train}}$ | $\varepsilon$-greedy policy for exploration during training. | 0.01
$\varepsilon_{\textrm{eval}}$ | $\varepsilon$-greedy policy for evaluating Q-regressors. | 0.001
Training calibration parameter | Calibration to adjust the scores of candidate-pool nodes ($m$) that were not selected during either the expansion or the dive phase. The calibration is based on the average score difference between the frontier nodes and their parents, denoted $\bigtriangleup$. | $2.5\bigtriangleup$
Temperature $\tau$ | Temperature parameter for Boltzmann sampling. Adaptively multiplied or divided by a factor of 1.5 or 4 respectively. | 1
Discount factor $\gamma$ | Discount factor during the training process. | 0.99
Table 5: Common Hyperparameters for ConQUR training and evaluation.
Hyperparameters | Description | Value
---|---|---
Splitting factor $c$ | Number of children created from a parent node | 4
Candidate pool size $m$ | Pool of candidate leaf nodes for selection into the dive or expansion phase | 46
Maximum frontier nodes $F$ | Maximum number of child leaf nodes for the dive phase | 16
Top nodes to expand $l$ | Select the top $l$ nodes from the candidate pool for the expansion phase. | 4
Table 6: Hyperparameters for ConQUR (16 nodes) training and evaluation.
Figure 12: Improvement ConQUR($\lambda=10$) with 16 nodes achieves over the
initial checkpoint Q-network.
### D.4 Additional Results: ConQUR with 8 Nodes
As an additional study of ConQUR, we present results of running our method
with 8 nodes (rather than the 16 used above), and compare it to a multi-DQN
baseline that also uses 8 "nodes" (i.e., 8 separate DQN runs). We use a
splitting factor $c=2$ for ConQUR. Table 7 shows the average scores for each
game using ConQUR and the baseline with 8 nodes. Unsurprisingly, ConQUR with 8
nodes does not perform as well as ConQUR with 16 nodes; but as in the 16-node
case, ConQUR outperforms the baseline when each uses 8 nodes. More
importantly, the average improvement of $24.5\%$ for ConQUR with 16 nodes over
the corresponding baseline _exceeds_ the $19.6\%$ improvement of ConQUR in the
8-node case. This is a strong indication that increasing the number of nodes
widens the performance gap _relative_ to the corresponding multi-DQN baseline;
this implies that a good search heuristic is critical to effectively navigate
the search space (as compared to randomly selected nodes) with a greater
number of candidate hypotheses. (Average score improvements exclude games
where the baseline score is zero.)
| ConQUR($\lambda_{\text{best}}$) (8 nodes) | Baseline (8 nodes) | Checkpoint
---|---|---|---
AirRaid | 10647.80 | 9050.86 | 6885.72
Alien | 3341.36 | 3207.5.05 | 2556.64
Amidar | 577.45 | 573.55 | 202.74
Assault | 1892.02 | 1976.80 | 1873.05
Asterix | 5026.24 | 4935.21 | 2373.44
Asteroids | 1194.90 | 1170.11 | 704.38
Atlantis | 949012.00 | 932668.00 | 902216.00
BankHeist | 909.61 | 924.75 | 871.91
BattleZone | 32139.90 | 30983.10 | 26558.70
BeamRider | 8613.98 | 8109.63 | 6397.49
Berzerk | 659.64 | 634.83 | 524.76
Bowling | 30.07 | 25.29 | 25.04
Boxing | 81.78 | 81.48 | 80.29
Breakout | 350.11 | 362.98 | 286.14
Carnival | 4862.30 | 4800.83 | 4708.23
Centipede | 2747.89 | 2608.78 | 757.51
ChopperCommand | 7188.25 | 6737.21 | 2641.71
CrazyClimber | 131675.00 | 122424.00 | 63181.11
DemonAttack | 11346.20 | 10947.90 | 8022.08
DoubleDunk | -13.57 | -15.35 | -16.66
ElevatorAction | 28.00 | 21.33 | 0.00
Enduro | 849.07 | 811.58 | 556.56
FishingDerby | 13.34 | 11.56 | 7.15
Freeway | 32.60 | 32.60 | 32.52
Frostbite | 296.57 | 220.81 | 165.01
Gopher | 9999.61 | 8013.34 | 4879.13
Gravitar | 475.03 | 480.64 | 394.46
Hero | 20803.60 | 20774.80 | 20598.40
IceHockey | -3.23 | -4.78 | -8.63
Jamesbond | 664.98 | 669.54 | 626.53
JourneyEscape | -462.64 | 391.44 | -947.18
Kangaroo | 10974.00 | 10733.60 | 10584.20
Krull | 9503.62 | 9538.22 | 4039.78
MontezumaRevenge | 1.46 | 0.00 | 0.00
MsPacman | 5066.17 | 5227.84 | 4160.50
NameThisGame | 9181.30 | 8410.29 | 5529.50
Phoenix | 5307.46 | 5227.84 | 4801.18
Pitfall | 0.00 | 0.00 | -4.00
Pong | 21.00 | 20.99 | 20.95
Pooyan | 5778.99 | 5184.14 | 4393.09
PrivateEye | 100.00 | 100.00 | 100.00
Qbert | 11953.40 | 13965.80 | 8625.88
Riverraid | 15614.40 | 14812.40 | 11253.30
RoadRunner | 49864.80 | 46302.20 | 45073.25
Robotank | 61.92 | 56.90 | 53.08
Seaquest | 2647.82 | 2560.61 | 1034.83
Skiing | -14058.90 | -14079.80 | -29896.80
Solaris | 1956.24 | 1182.59 | 291.70
SpaceInvaders | 3436.16 | 3292.68 | 2895.30
StarGunner | 55479.00 | 54207.30 | 51419.60
Tennis | 0.00 | 0.00 | 0.00
TimePilot | 6717.62 | 6799.19 | 3806.22
Tutankham | 242.03 | 229.23 | 36.00
UpNDown | 22544.60 | 23331.20 | 5956.21
Venture | 15.41 | 1.50 | 0.00
VideoPinball | 382110.59 | 390540.41 | 209620.0
WizardOfWor | 5750.05 | 3632.17 | 2011.77
YarsRevenge | 25631.10 | 25396.70 | 25319.20
Zaxxon | 10751.80 | 11156.20 | 5186.01
Table 7: Summary of scores with $\varepsilon$-greedy ($\varepsilon=0.001$)
evaluation with up to 30 no-op starts. As a side study, we ran ConQUR with 8
nodes and with $\lambda\in\{1,10\}$. We report the scores of the best
$\lambda_{\text{best}}$ for each game. For most games, $\lambda_{\text{best}}$
is $10$. The 8-node configuration follows the same settings as in Table 5,
except that $c=2$, $m=38$, $F=8$, $l=2$.
# Reconstructing Element-by-Element Dissipated Hysteretic Energy in
Instrumented Buildings: Application to the Van Nuys Hotel Testbed
Milad Roohi, Postdoctoral Fellow, NIST Center of Excellence for Risk-Based
Community Resilience Planning, Department of Civil and Environmental
Engineering, Colorado State University, Fort Collins, CO 80523, USA;
formerly, Graduate Research Assistant, Department of Civil and Environmental
Engineering, University of Vermont, Burlington, VT 05405, USA. E-mail:
<EMAIL_ADDRESS> (corresponding author).
Eric M. Hernandez, Professor of Civil Engineering, University of Vermont,
Burlington, VT 05405, USA. E-mail: <EMAIL_ADDRESS>
David Rosowsky, Former Provost and Senior Vice President, University of
Vermont, Burlington, VT 05405, USA.
###### Abstract
The authors propose a seismic monitoring framework for instrumented buildings
that employs dissipated energy as a feature for damage detection and
localization. The proposed framework employs a nonlinear model-based state
observer, which combines a nonlinear finite element model of a building and
global acceleration measurements to estimate the time history of seismic
response at all degrees of freedom of the model. This includes displacements,
element forces, and plastic deformations in all structural members. The
estimated seismic response is then used to 1) estimate inter-story drifts and
determine the post-earthquake re-occupancy classification of the building
based on performance-based criteria, 2) compare the estimated demands with
code-based capacity and reconstruct element-by-element demand-to-capacity
ratios and 3) reconstruct element-level normalized energy dissipation and
ductility. The outcome of this process is employed for the performance-based
monitoring, damage detection, and localization in instrumented buildings. The
proposed framework is validated using data from the Van Nuys hotel testbed, a
seven-story reinforced concrete building instrumented by the California Strong
Motion Instrumentation Program (Station 24386). The nonlinear state observer
of the building is implemented using a distributed plasticity finite element
model and seismic response measurements during the 1992 Big Bear and 1994
Northridge earthquakes. The performance and damage assessment results are
compared with the post-earthquake damage inspection reports and photographic
records. The results demonstrate the accuracy and capability of the proposed
framework in the context of a real instrumented building that experienced
significant localized structural damage.
## 1 Introduction
This paper proposes a seismic monitoring framework that employs dissipated
energy as a feature for damage detection and localization in instrumented
moment resisting frame building structures. The main advantages of the
proposed energy-based approach are: i) the proposed feature is physically
meaningful and correlates well with the level of cyclic damage experienced
during strong earthquakes [Uang and Bertero (1990), Sucuoglu and Erberik
(2004), Teran-Gilmore and Jirsa (2007)], ii) dissipated energy can be
reconstructed from element level stress-strain fields, which can be estimated
from global acceleration measurements [Stephens and Yao (1987), Roohi et al.
(2019a)], and iii) it can be calibrated using experimental data [Krawinkler
and Zohrei (1983), Park and Ang (1985), Sucuoglu and Erberik (2004)]. Despite
the immediate appeal, the application of this feature for structural health
monitoring purposes has been limited [Frizzarin et al. (2010), Hernandez and
May (2012)] primarily due to the challenges associated with estimating
dissipated energy under dynamic loading. The main contribution of this paper
consists in reconstructing element-by-element dissipated hysteretic energy
using a nonlinear model-data fusion approach. This approach deviates from the
traditional approach used in structural monitoring and damage identification,
which seeks changes in the structural parameters before and after an
earthquake.
To contextualize the proposed energy-based method, current damage detection
methods are briefly reviewed. Based on the damage features selected to
distinguish between undamaged and damaged states of the structure, the
existing methods can be widely categorized into 1) spectral, 2) wave
propagation, 3) time series, 4) demand-to-capacity ratio, 5) model updating
methods.
The spectral methods assume that changes in spectral parameters (mode shapes
and frequencies) of a structure indicate the occurrence of structural damage;
where the changes are identified from vibration measurements before/after or
during strong ground motion. The main challenges associated with this approach
include: i) spectral parameters can be conceptually defined if the dynamic
response is governed by a linear equation of motion; however, this feature
does not exist for nonlinear hysteretic structural systems, ii) damage
localization is a challenging task using changes in spectral parameters; this
is mainly because low-frequency modes are the only reliable modes identified
from vibration data, and the sensitivity of these modes to localized damage is
low, and iii) changes in spectral parameters can be due to other factors such
as environmental effects, which can negatively affect the reliability of this
approach to detect structural damage.
The wave propagation methods process the measured vibration data to extract
travel times of seismic waves propagating through the structure and use this
feature to detect changes in structural stiffness and subsequently infer
structural damage. The main advantages of this approach include i) damage
detection using a small number of instruments and ii) high sensitivity to
localized damage. However, the resolution of damage detection depends on the
number of instruments. This means that only two instruments are enough to
determine if the building is damaged, and additional instruments are required
to improve the resolution and specify the damaged part or floor of the
structure.
The time series methods employ a data-driven approach to detect damage based
on mathematical models extracted solely from measured vibrations. These
methods require no information from structural models and only track changes
in the time history response or the identified black box input-output model
coefficients. Thus, it is a difficult task to correlate these features with
the damage sensitive structural quantities. This drawback makes these methods
less appealing for seismic monitoring purposes.
The demand-to-capacity ratio methods operate by comparing element-level force
demands with capacities of any pertinent failure mode or comparing inter-story
drifts with qualitative performance criteria to assess the level of damage in
a particular member or story. The intention of selecting this damage feature
for damage detection is to make the results similar to the way in which the
buildings are designed, making them more interpretable. However, this approach
has the drawback that the expected capacities are estimated based on codes and
laboratory experiments, which can differ considerably from the actual
capacities of structural elements because of the uncertainty in the stiffness
and strength of building materials.
The model updating approach updates the structural model parameters to
minimize the error between the model estimates and vibration measurements. The
structural damage can be identified by seeking changes in the model
parameters. The main drawback of the model updating approach include i) the
effectiveness of this approach depends on the model class, and it is necessary
to examine the robustness to modeling error, and ii) the uniqueness problem
may arise in the case of structural models where free parameter space becomes
too large. Therefore, it is necessary to have prior knowledge regarding
elements likely to be damaged.
The proposed energy-based damage feature can overcome some of the drawbacks
associated with existing methods if element-by-element energy demands can be
reconstructed from global response measurement. In this paper, the authors
propose a three step process: (1) implement a state observer to reconstruct
the dynamic response at all degrees of freedom (DoF) of the model, (2) use the
reconstructed response to estimate element-by-element forces and
displacements, (3) use estimated displacement, forces, and constitutive laws
to estimate element-level dissipated energy. The accuracy of this approach
depends mostly on the performance of the state observer in reconstructing the
dynamic response. Researchers have successfully implemented various nonlinear
state observers including the extended Kalman filter (EKF) [Gelb (1974)],
unscented Kalman filter (UKF) [Wan and Van Der Merwe (2000)], particle filter
(PF) [Doucet et al. (2000)], and nonlinear model-based observer (NMBO) [Roohi
et al. (2019a)] for response reconstruction in nonlinear structural systems.
The EKF, UKF, and PF are computationally intensive and require the use of
rather simple state-space models, which may not be capable of capturing the
complexity of nonlinear structural behavior. However, the NMBO can be
implemented directly as a second-order nonlinear FE model. This capability
allows the NMBO to take advantage of simulation and computation using the
conventional structural analysis software for the purpose of state estimation.
The primary aim of this paper is to address the reconstruction of element-by-
element dissipated hysteretic energy for damage detection and localization in
instrumented buildings. A seismic monitoring framework is proposed that
employs the NMBO to combine a nonlinear FE model with acceleration
measurements for reconstructing the complete dynamic response as the vibration
measurements become available for every seismic event. Then, the estimated
response is processed to 1) estimate inter-story drifts and determine the
post-earthquake re-occupancy classification of the building based on
performance-based criteria 2) to compare the estimated demands with code-based
capacity and reconstruct element-by-element demand-to-capacity ratios and 3)
reconstruct element-level dissipated energy and ductility. The outcome of this
process is employed for the performance-based monitoring, damage detection,
and localization of instrumented buildings.
A secondary objective of this paper is to validate the application of the NMBO
for reconstruction of nonlinear response in the context of instrumented
buildings that experience severe structural damage during an earthquake. The
NMBO has been successfully validated using the case study of the NEESWood Capstone
project, a fully-instrumented six-story wood-frame building tested in full-
scale at the E-Defense shake table in Japan. It was demonstrated that the NMBO
could estimate quantities such as drifts and internal forces from a few
acceleration measurements [Roohi (2019a), Roohi et al. (2019b)]. However, in
this test, nonlinearity was limited and distributed throughout the building;
e.g., during the maximum credible earthquake (MCE) level test corresponding to
a 2%/50-year event, the damage was limited to nonstructural elements such as
gypsum wallboard, and no structural damage was reported [van de Lindt et al.
(2010)].
The effectiveness of the proposed energy-based damage detection and
localization method is investigated using data from the Van Nuys hotel
testbed, a seven-story reinforced concrete (RC) building instrumented by the California
Strong Motion Instrumentation (CSMIP) Program (Station 24386). The Van Nuys
building was severely damaged during the 1994 Northridge earthquake, and
localized damage occurred in five of the nine columns in the 4th story
(between floors 4 and 5) of the south longitudinal frame. In the literature,
multiple researchers have studied this building for seismic damage assessment
and localization. Traditionally, the main objective has been to use
acceleration measurements to identify the presence of damage and reproduce its
location and intensity with respect to the visual evidence.
The remainder of this paper is organized as follows. First, the nonlinear
dynamic analysis of building structures is discussed, and the system and
measurement models of interest are presented. This is followed by a section on
dissipated energy reconstruction and nonlinear model-data fusion in
instrumented buildings. Then, a section discussing the case study of Van Nuys
seven-story RC building is presented. The paper ends with a section presenting
the validation of the proposed damage detection and localization methodology
using seismic response measurements of the case-study building.
## 2 Structural Modeling for Nonlinear Model-Data Fusion
Various approaches are available in the literature for the nonlinear
structural modeling and dynamic analysis of moment resisting frame building
structures subjected to seismic excitations. These approaches can be
classified into three categories based on their scales: 1) global modeling, 2)
discrete FE modeling, and 3) continuum FE modeling [Taucer et al. (1991)]. The
global modeling approach condenses the nonlinear behavior of a building at
selected global DoF. One example is to assign the hysteretic lateral load-
displacement and energy dissipation characteristics of every story of the
building to an equivalent element and assemble these elements to construct a
simplified model of a building. This method has low resolution, which,
depending on the specific application, might be detrimental. The discrete FE modeling
approach first formulates the hysteretic behavior of elements and then,
assembles interconnected frame elements to construct an FE model of a
structure. Two types of element formulations are used in research and
practice, including 1) a concentrated plasticity formulation and 2) a
distributed plasticity formulation. The concentrated plasticity formulation
lumps the nonlinear behavior in springs or plastic hinges at the end of
elastic elements. The distributed plasticity formulation spreads the nonlinear
behavior over selected integration points along the element, using
cross-sections discretized into fibers that account for the stress-strain
relations of the corresponding materials. The continuum FE modeling
approach discretizes structural elements into micro finite elements and
requires calibration of localized model parameters (constitutive and geometric
nonlinearity). The analysis of such high-resolution models increases the
computational complexity. Therefore, this approach can be impractical for
model-data fusion and response reconstruction applications. Figure 1 presents
a schematic of five idealized nonlinear beam-column elements developed for
nonlinear modeling of moment resisting frame building structures.
Figure 1: Schematic of nonlinear beam-column elements (Deierlein et al. 2010)
From these formulations, the concentrated and distributed plasticity
formulations have been implemented in advanced structural simulation software
packages such as OpenSEES, Perform, and SAP. In recent years, the fiber-based
distributed plasticity FE modeling has been the most popular approach among
researchers. The main reasons are: 1) the formulation accurately simulates the
coupling between axial force and bending moment and also, accounts for element
shear, 2) various uniaxial material models have been developed by researchers
to characterize section fibers and are available for users of advanced
structural simulation software, 3) the predictions using this formulation have
been validated with experimental testing, and 4) the simulation and analysis
are computationally efficient and accurate, even with a relatively low number
of integration points per element. This paper employs a fiber-based
distributed plasticity FE modeling approach for nonlinear model-data fusion
and seismic response reconstruction.
### 2.1 System and measurement models of interest
The global response of building structures to seismic ground motions can be
accurately described as
$\mathbf{M}\ddot{q}(t)+\mathbf{C}_{\xi}\dot{q}(t)+f_{R}(q(t),\dot{q}(t),z(t))=-\mathbf{M}\mathbf{b}_{1}\ddot{u}_{g}(t)+\mathbf{b}_{2}w(t)$
(1)
where the vector $q(t)\in\mathbb{R}^{n}$ contains the relative displacement
(with respect to the ground) of all stories. $z(t)$ is a vector of auxiliary
variables dealing with material nonlinearity and damage behavior. $n$ denotes
the number of geometric DoF, $\mathbf{M}=\mathbf{M}^{T}\in\mathbb{R}^{n\times
n}$ is the mass matrix,
$\mathbf{C}_{\xi}=\mathbf{C}_{\xi}^{T}\in\mathbb{R}^{n\times n}$ is the
damping matrix, $f_{R}(\cdot)$ is the resultant global restoring force vector.
The matrix $\mathbf{b}_{1}\in\mathbb{R}^{n\times r}$ is the influence matrix
of the $r$ ground acceleration time histories defined by the vector
$\ddot{u}_{g}(t)\in\mathbb{R}^{r}$. The matrix
$\mathbf{b}_{2}\in\mathbb{R}^{n\times p}$ defines the spatial distribution of the
vector $w(t)\in\mathbb{R}^{p}$, which in the context of this paper represents
the process noise generated by unmeasured excitations and (or) modeling
errors.
This study relies only on building vibrations measured horizontally in three
independent and non-intersecting directions and assumes the vector of
acceleration measurements, $\ddot{y}(t)\in\mathbb{R}^{m}$, is given by
$\ddot{y}(t)=-\mathbf{c}_{2}\mathbf{M}^{-1}\left[\mathbf{C}_{\xi}\dot{q}(t)+f_{R}(q(t),\dot{q}(t),z(t))-\mathbf{b_{2}}w(t)\right]+\nu(t)$
(2)
where $\mathbf{c}_{2}\in\mathbb{R}^{m\times n}$ is a Boolean matrix that maps
the DoFs to the measurements, and $\nu(t)\in\mathbb{R}^{m\times 1}$ is the
measurement noise.
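For concreteness, the following is a minimal Python sketch of time-stepping Equation 1 with the Newmark-beta method; as a simplification it takes the restoring force as linear, $f_{R}=\mathbf{K}q$, and omits the process noise term, whereas the models used in this paper are hysteretic.

```python
import numpy as np

def newmark_linear(M, C, K, b1, ag, dt, beta=0.25, gamma=0.5):
    """Newmark-beta integration of Eq. (1) with a linear restoring force
    f_R = K q (a simplifying assumption; the paper's models are hysteretic).

    M, C, K: (n, n) mass, damping, stiffness matrices
    b1:      (n,) influence vector; ag: (n_steps,) ground acceleration
    Returns the displacement history q of shape (n_steps, n)."""
    n, n_steps = M.shape[0], len(ag)
    q = np.zeros((n_steps, n))
    v = np.zeros(n)
    a = np.linalg.solve(M, -M @ b1 * ag[0])  # initial acceleration, zero ICs
    Keff = K + gamma / (beta * dt) * C + M / (beta * dt ** 2)
    for i in range(1, n_steps):
        p = -M @ b1 * ag[i]                  # effective seismic load
        rhs = (p
               + M @ (q[i - 1] / (beta * dt ** 2) + v / (beta * dt)
                      + (1.0 / (2.0 * beta) - 1.0) * a)
               + C @ (gamma / (beta * dt) * q[i - 1]
                      + (gamma / beta - 1.0) * v
                      + dt * (gamma / (2.0 * beta) - 1.0) * a))
        q[i] = np.linalg.solve(Keff, rhs)
        v_new = (gamma / (beta * dt) * (q[i] - q[i - 1])
                 + (1.0 - gamma / beta) * v
                 + dt * (1.0 - gamma / (2.0 * beta)) * a)
        a = ((q[i] - q[i - 1]) / (beta * dt ** 2)
             - v / (beta * dt) - (1.0 / (2.0 * beta) - 1.0) * a)
        v = v_new
    return q
```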
## 3 Dissipated Energy Reconstruction from Response Measurements
This section presents the theoretical background necessary to calculate
dissipated energy induced by material nonlinearity and proposes a nonlinear
model-data fusion approach to reconstruct element-by-element dissipated energy
from global response measurements of building structures.
### 3.1 Theoretical background
The dissipated hysteretic energy ($E_{h}$) can be defined by a change of
variables and integrating the equation of motion in time for multi-DoF systems as
follows
$\int\dot{q}(t)^{T}\mathbf{M}\ddot{q}(t)dt+\int\dot{q}(t)^{T}\mathbf{C}_{\xi}\dot{q}(t)dt+\int\dot{q}(t)^{T}f_{R}(q(t),\dot{q}(t),z(t))dt=-\int\dot{q}(t)^{T}\mathbf{M}\mathbf{b}_{1}\ddot{u}_{g}(t)dt$
(3)
Equation 3 can be represented in energy-balance notation [Uang and Bertero
(1990)] as follows
$E_{k}+E_{\xi}+E_{s}=E_{i}$ (4)
where $E_{k}$, $E_{\xi}$, $E_{s}$ and $E_{i}$ are kinetic, viscous damping,
strain and input energy, respectively. The strain energy is the sum of
recoverable elastic strain energy ($E_{e}$) and irrecoverable dissipated
hysteretic energy ($E_{h}$). Thus, Equation 4 can be written as
$E_{k}+E_{\xi}+(E_{e}+E_{h})=E_{i}$ (5)
The dissipated hysteretic energy ($E_{h}$) can be calculated using element-
level stress-strain or force-displacement demand by integrating the area under
hysteresis loops as follows
$E_{h}=\dfrac{1}{2}\int\epsilon^{T}\sigma dV$ (6)
where $\sigma$ and $\epsilon$ are stress and strain demands and $V$ is the
total volume of an element. In distributed plasticity beam-column elements,
where energy dissipation occurs primarily due to bending, the dissipated
hysteretic energy ($E_{h}$) can be calculated by integrating the moment-
curvature response along the element as follows
$\displaystyle E_{h}=\int_{0}^{L}M\phi
dx=\sum_{j=1}^{N_{p}}(M\phi|_{x=\xi_{j}})\omega_{j}$ (7)
where $M$ and $\phi$ are the moment and curvature responses of the element,
respectively; $N_{p}$ is the number of integration points along the element;
and $\xi_{j}$ and $\omega_{j}$ respectively denote the locations and
associated weights of the integration points.
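As a minimal numerical sketch of Equation 7, assume the moment and curvature time histories at each integration point are available (e.g., from element recorders), and interpret the moment-curvature integral as the cumulative area of the section hysteresis loops; the four-point Gauss-Lobatto locations and weights below are the standard values mapped to $[0,1]$.

```python
import numpy as np

def element_hysteretic_energy(M_hist, phi_hist, weights, length):
    """Dissipated hysteretic energy of one element (sketch of Eq. 7).

    M_hist, phi_hist: arrays (n_steps, N_p) of moment and curvature at the
                      N_p integration points; weights: Gauss-Lobatto weights
                      on [0, 1]; length: element length L."""
    d_phi = np.diff(phi_hist, axis=0)           # curvature increments
    M_mid = 0.5 * (M_hist[1:] + M_hist[:-1])    # midpoint moments
    loop_area = np.sum(M_mid * d_phi, axis=0)   # hysteresis area per section
    return length * np.dot(weights, loop_area)  # weighted sum over sections

# Four-point Gauss-Lobatto rule mapped to [0, 1]: locations xi and weights w.
xi = np.array([0.0, (1 - 1 / np.sqrt(5)) / 2, (1 + 1 / np.sqrt(5)) / 2, 1.0])
w = np.array([1 / 12, 5 / 12, 5 / 12, 1 / 12])
```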
As can be seen from Equations 6 and 7, the calculation of $E_{h}$ requires
element-level seismic response to be known. Therefore, there is a need to
employ signal processing algorithms that can accurately reconstruct the
element-level seismic response from global response measurements. The next
subsection addresses this need by proposing the use of a recently developed
nonlinear model-data fusion algorithm for seismic response reconstruction.
### 3.2 Nonlinear model-data fusion and seismic response reconstruction
Recently, [Roohi et al. (2019a)] proposed a nonlinear state observer for
nonlinear model-data fusion in second-order nonlinear hysteretic structural
systems. This nonlinear state observer has appealing properties for seismic
monitoring applications; the two most important are: (1) it has been
formulated to be realizable as a nonlinear FE model, which allows implementing
the nonlinear state observer using conventional structural analysis software
and thereby significantly reduces computational cost, and
(2) it uses power spectral density (PSD) representation to account for
measurement noise and unmeasured excitations explicitly. This property is
important as it is consistent with the representation of seismic excitation in
many stochastic models.
The NMBO estimate of the displacement response, ${\hat{q}}(t)$, is given by
the solution of the following set of ordinary differential equations
$\displaystyle\mathbf{M}\ddot{\hat{q}}(t)+(\mathbf{C}_{\xi}+\mathbf{c}_{2}^{T}\mathbf{E}\mathbf{c}_{2})\dot{\hat{q}}(t)+f_{R}(\hat{q}(t),\dot{\hat{q}}(t),z(t))=\mathbf{c}_{2}^{T}\mathbf{E}\dot{y}(t)$
(8)
where $\dot{y}(t)$ is the measured velocity and
$\mathbf{E}\in\mathbb{R}^{m\times m}$ is the feedback gain. It can be seen
that Equation 8 is of the same form of the original nonlinear model of
interest in Equation 1. A physical interpretation of the NMBO can be obtained
by viewing the right-hand side of Equation 8 as a set of corrective forces
applied to a modified version of the original nonlinear model of interest in
the left-hand side. The modification consists in adding the damping term
$c_{2}^{T}\mathbf{E}c_{2}$, where the matrix $\mathbf{E}$ is free to be
selected. The diagonal terms of $\mathbf{E}$ are equivalent to grounded
dampers in the measurement locations, and the off-diagonal terms (typically
set to zero) are equivalent to dampers connecting the respective DoF of the
measurement locations. To retain a physical interpretation, the constraints on
$\mathbf{E}$ are symmetry and positive definiteness. Also, the corrective
forces $\mathbf{c}_{2}^{T}\mathbf{E}\dot{y}(t)$ are proportional to the
velocity measurements and added grounded dampers. The velocity measurements
$\dot{y}(t)$ can be obtained by integration of acceleration measurements
$\ddot{y}(t)$ in Equation 2. The integration might add long period drifts in
velocity measurements, and high-pass filtering can be performed to remove
these baseline shifts. To determine $\mathbf{E}$, the objective function to be minimized
is the trace of the estimation error covariance matrix. Since for a general
nonlinear multi-variable case, a closed-form solution for the optimal matrix
$\mathbf{E}$ has not been found, a numerical optimization algorithm is used.
To derive the optimization objective function, Equation 8 is linearized as
follows
$\displaystyle\mathbf{M}\ddot{\hat{q}}(t)+(\mathbf{C}_{\xi}+\mathbf{c}_{2}^{T}\mathbf{E}\mathbf{c}_{2})\dot{\hat{q}}(t)+\mathbf{K}_{0}{\hat{q}}(t)=\mathbf{c}_{2}^{T}\mathbf{E}\dot{y}(t)$
(9)
where $\mathbf{K}_{0}$ is the initial stiffness matrix. By defining the state
error as $e=q-\hat{q}$, it was shown in [Hernandez (2011)] that the PSD of
estimation error, $\boldsymbol{\Phi}_{ee}$, is given by
$\displaystyle\boldsymbol{\Phi}_{ee}(\omega)=\mathbf{H}_{o}\mathbf{b}_{2}\boldsymbol{\Phi}_{ww}(\omega)\mathbf{b}_{2}^{T}\mathbf{H}_{o}^{*}+\mathbf{H}_{o}\mathbf{c}_{2}^{T}\mathbf{E}\boldsymbol{\Phi}_{vv}(\omega)\mathbf{E}^{T}\mathbf{c}_{2}\mathbf{H}_{o}^{*}$
(10)
with $\mathbf{H}_{o}$ defined as
$\displaystyle\mathbf{H}_{o}=\left(-\mathbf{M}\omega^{2}+\left(\mathbf{C}_{\xi}+\mathbf{c}_{2}^{T}\mathbf{Ec}_{2}\right)i\omega+\mathbf{K}_{0}\right)^{-1}$
(11)
where the matrices $\boldsymbol{\Phi}_{ww}(\omega)$ and
$\boldsymbol{\Phi}_{vv}(\omega)$ are the PSDs of the uncertain excitation on
the system and measurement noise, respectively. In this paper, the uncertain
input corresponds to the ground motion excitation, and the measurement noise
corresponds to unmeasured excitations and (or) modeling errors. To select the
optimal value of $\mathbf{E}$ matrix, the following optimization problem must
be solved
$\displaystyle\begin{aligned}
&\operatorname*{arg\,min}_{\mathbf{E}\,\in\,\mathbb{R}^{+}}\,tr(\mathbf{P})\\\
\end{aligned}$ (12)
where $\mathbf{P}$ is the covariance matrix of displacement estimation error
described as
$\displaystyle\begin{aligned}
\mathbf{P}&=\mathbb{E}\left[[q(t)-\hat{q}(t)][q(t)-\hat{q}(t)]^{T}\right]=\int_{-\infty}^{+\infty}\boldsymbol{\Phi}_{ee}(\omega)d\omega\end{aligned}$
(13)
One alternative for the optimization problem in Equation 12 can be defined if
the objective is the minimization of the inter-story drift (ISD) estimation
error, $\mathbf{P_{\text{ISD}}}$, given by
$\displaystyle\begin{aligned}
&\operatorname*{arg\,min}_{\mathbf{E}\,\in\,\mathbb{R}^{+}}\,tr(\mathbf{P_{\text{ISD}}})\end{aligned}$
(14)
where
$\displaystyle\begin{aligned}
tr(\mathbf{P_{\text{ISD}}})=\sum_{k=1}^{n}\mathbf{P_{\text{ISD}}}_{(k,k)}=\mathbf{P}_{(1,1)}+\sum_{k=2}^{n}[\mathbf{P}_{(k,k)}+\mathbf{P}_{(k-1,k-1)}-2\mathbf{P}_{(k,k-1)}]\end{aligned}$
(15)
$k$ is the story number and $n$ is the total number of stories.
Any optimization algorithm (e.g., Matlab fminsearch) can be used to solve the
optimization in Equations 12 and 14 by varying the values of the diagonal
elements of the $\mathbf{E}$ matrix to determine the optimized feedback
matrix. Figure 2 presents a summary of the nonlinear model-data fusion using
the NMBO and Figure 3 schematically illustrates the implementation of the
NMBO. Also, readers are kindly referred to [Hernandez (2011), Hernandez
(2013), Hernandez et al. (2018), Roohi et al. (2019a), Roohi (2019b)] for
implementation examples.
Figure 2: Summary of the nonlinear model-data fusion using the NMBO
Figure 3: Implementation of the proposed nonlinear model-based observer
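A minimal Python sketch of the gain selection in Equations 10 to 13 follows, assuming frequency-independent PSDs $\boldsymbol{\Phi}_{ww}$ and $\boldsymbol{\Phi}_{vv}$, a diagonal $\mathbf{E}$, and a uniform frequency grid; scipy's Nelder-Mead plays the role of Matlab's fminsearch.

```python
import numpy as np
from scipy.optimize import minimize

def trace_P(e_diag, M, C, K0, b2, c2, Sw, Sv, omegas):
    """tr(P): integrate the error PSD of Eq. (10) over a uniform frequency
    grid (sketch; constant factors are absorbed into Sw and Sv)."""
    E = np.diag(np.abs(e_diag))        # enforce positive definiteness
    Ce = C + c2.T @ E @ c2
    d_w = omegas[1] - omegas[0]
    total = 0.0
    for w in omegas:
        H = np.linalg.inv(-(w ** 2) * M + 1j * w * Ce + K0)   # Eq. (11)
        Phi = (H @ b2 @ Sw @ b2.T @ H.conj().T
               + H @ c2.T @ E @ Sv @ E.T @ c2 @ H.conj().T)   # Eq. (10)
        total += np.trace(Phi).real * d_w
    return total

def optimal_gain(M, C, K0, b2, c2, Sw, Sv, omegas, e0):
    """Solve Eq. (12) for a diagonal feedback gain E via Nelder-Mead."""
    res = minimize(trace_P, e0, args=(M, C, K0, b2, c2, Sw, Sv, omegas),
                   method="Nelder-Mead")
    return np.diag(np.abs(res.x))
```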
## 4 Proposed Seismic Monitoring Framework
This paper proposes a seismic monitoring framework that can be accurately
employed for seismic damage detection and localization in instrumented
buildings subjected to seismic ground motions. This framework employs the NMBO
to combine a nonlinear structural model with acceleration measurements for
reconstructing the complete seismic response. Then, the estimated response is
processed to 1) estimate inter-story drifts and determine the post-earthquake
re-occupancy classification of the building based on performance-based
criteria 2) to compare the estimated demands with code-based capacity and
reconstruct element-by-element demand-to-capacity ratios and 3) reconstruct
element-level dissipated energy and ductility. The outcome of this process is
employed for the performance-based monitoring, damage detection, and
localization in instrumented buildings. Figure 4 depicts a summary of the
proposed seismic monitoring framework. The following subsections discuss each
step of the framework in more detail.
Figure 4: Summary of the proposed mechanistic damage quantification and
seismic monitoring framework
#### 4.0.1 Performance-based assessment using Inter-story Drifts
The maximum inter-story drift ($\text{ISD}_{\mathrm{max}}$) estimate at each
story can be calculated using the NMBO displacement estimates as follows
$\displaystyle\text{ISD}_{\mathrm{max}_{k}}=\frac{\mathrm{max}\Big{\lvert}\hat{q}_{k}(t)-\hat{q}_{k-1}(t)\Big{\rvert}}{h_{k}}$
(16)
where $h_{k}$ is the height of the $k$-th story. The uncertainty in the ISD
estimate can be calculated as follows
$\displaystyle\text{ISD}_{\mathrm{max}_{k}}\pm\sigma_{{\text{ISD}_{k}}}=\mathrm{max}\Big{\lvert}\text{ISD}_{k}\pm\sqrt{\mathbf{{P}}_{\text{ISD}_{k}}}\Big{\rvert}$
(17)
where $\sigma_{{\text{ISD}_{k}}}$ is the standard deviation of the ISD
estimation error for the $k$-th story. The estimated ISDs are used to perform the post-
earthquake evaluation of the building based on [FEMA (2000)] performance
measures, including immediate occupancy (IO), life safety (LS), and collapse
prevention (CP).
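Below is a small Python sketch of Equation 16; the drift thresholds included for the IO/LS/CP checks are indicative values for RC moment frames and are an assumption for illustration, not values taken from this paper.

```python
import numpy as np

def max_interstory_drifts(q_hat, heights):
    """Maximum inter-story drift ratio per story from estimated floor
    displacements (Eq. 16).

    q_hat: (n_steps, n_stories) relative displacements; heights: (n_stories,)."""
    q_aug = np.column_stack([np.zeros(len(q_hat)), q_hat])  # prepend ground
    story_drift = np.diff(q_aug, axis=1)                    # q_k - q_{k-1}
    return np.max(np.abs(story_drift), axis=0) / np.asarray(heights)

# Indicative performance thresholds for RC moment frames (assumed values):
DRIFT_LIMITS = {"IO": 0.01, "LS": 0.02, "CP": 0.04}
```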
#### 4.0.2 Demand-to-capacity ratio reconstruction
The demand-to-capacity ratio (DCR) for $i$-th element is reconstructed as
follows
$\displaystyle\mathrm{DCR}_{i}=\frac{\mathrm{max}|\hat{S}_{i}(t)|}{R_{i}}$
(18)
where $\hat{S}_{i}(t)$ and $R_{i}$ are the seismic demand and capacity
estimates of any pertinent failure mode in $i$-th structural element.
#### 4.0.3 Dissipated energy reconstruction for damage detection and
localization
The seismic damage index (DI) is reconstructed using a Park-Ang type damage
model [Park and Ang (1985)] expressed as
$\displaystyle
DI=DI_{\mu}+DI_{E}=\dfrac{\mu_{m}}{\mu_{u}}+\psi\dfrac{E_{h}}{E_{max}}$ (19)
where $DI_{\mu}$ and $DI_{E}$ represent damage due to excessive deformation
and dissipated hysteretic energy, respectively; $\mu_{m}$ is the maximum
ductility experienced during the earthquake, $\mu_{u}$ is the ultimate
ductility capacity under monotonic loading, $\psi$ is a calibration parameter,
and $E_{max}$ is the maximum hysteretic energy dissipation capacity for all
relevant failure modes.
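Once the response quantities are reconstructed, Equation 19 is a one-line computation; the default $\psi=0.15$ in the sketch below is an assumed calibration value, not one taken from the paper.

```python
def park_ang_damage_index(mu_m, mu_u, E_h, E_max, psi=0.15):
    """Park-Ang type damage index of Eq. (19); psi = 0.15 is an assumption."""
    return mu_m / mu_u + psi * E_h / E_max

# e.g., park_ang_damage_index(3.0, 6.0, 40.0, 200.0) -> 0.5 + 0.03 = 0.53
```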
## 5 Case-study: Van Nuys Hotel Testbed
The proposed methodology is validated in the remaining sections using seismic
response measurements from Van Nuys hotel. The CSMIP instrumented this
building as Station 24386, and the recorded data of this building are
available from several earthquakes, including 1971 San Fernando, 1987 Whittier
Narrows, 1992 Big Bear, and 1994 Northridge earthquakes. From these data,
measurements during the 1992 Big Bear and 1994 Northridge earthquakes are used in
this study to demonstrate the proposed framework. Researchers have widely
studied the Van Nuys building [Islam (1996), Loh and Lin (1996), Li and Jirsa
(1998), Browning et al. (2000), Taghavi and Miranda (2005), Goel (2005),
Bernal and Hernandez (2006), Ching et al. (2006), Naeim et al. (2006),
Todorovska and Trifunac (2008), Rodríguez et al. (2010), Gičev and Trifunac
(2012), Trifunac and Ebrahimian (2014), Shan et al. (2016), Pioldi et al.
(2017)] and the building was selected as a testbed for research studies by
researchers in PEER [Krawinkler (2005)].
### 5.1 Description of the Van Nuys building
The case-study building is a 7-story RC building located in San Fernando
Valley in California. The building plan is 18.9 m $\times$ 45.7 m in the
North-South and East-West directions, respectively. The total height of the
building is 19.88 m, with a first story 4.11 m tall, while the remaining
stories are each approximately 2.64 m tall. The structure was designed in 1965 and constructed in
1966. Its vertical load transfer system consists of RC slabs supported by
concrete columns and spandrel beams at the perimeter. The lateral resisting
systems are made up of interior concrete column-slab frames and exterior
concrete column-spandrel beam frames. The foundation consists of friction
piles, and the local soil conditions are classified as alluvium. The testbed
building is described in more detail in [Trifunac et al. (1999), Krawinkler
(2005)].
### 5.2 Building instrumentation
The CSMIP initially instrumented the building with nine accelerometers at the
1st, 4th, and roof floors. Following the San Fernando earthquake, CSMIP
replaced the recording layout with 16 remote accelerometer channels connected to
a central recording system. These channels are located at 1st, 2nd, 3rd, 6th,
and roof floors. Five of these sensors measure longitudinal accelerations, ten
of them measure transverse accelerations, and one of them measures the
vertical acceleration. Figure 5 shows the location of accelerometers.
Figure 5: (left) Van Nuys hotel testbed (CSMIP Station 24386) and (Right)
Location of building accelerometers on the West-East elevation and floor plans
### 5.3 Earthquake damage
Since the Van Nuys building was instrumented and inspected following
earthquakes that affected the structure, the history of damage suffered by
this building is well-documented. These documents show that the building has
experienced insignificant structural and mostly nonstructural damage before
the Northridge earthquake in 1994. However, the Northridge earthquake
extensively damaged the building. Post-earthquake inspection red-tagged the
building and revealed that the damage was severe in the south longitudinal
frame (Frame A). In Frame A, five of the nine columns in the 4th story
(between floors 4 and 5) were heavily damaged due to inadequate transverse
reinforcement, and shear cracks ($\geq 5cm$) and bending of longitudinal
reinforcement were easily visible [Trifunac and Ivanovic (2003)]. Figure 6
demonstrate the seismic damage following the 1994 Northridge earthquake in the
south and north frames.
Figure 6: Schematic representation and photographic records of seismic damage
following the 1994 Northridge earthquake: (top) south view of Frame A, and
(bottom) south view of Frame D. (Adapted from Trifunac and Ivanovic 2003)
### 5.4 Previous damage assessment studies on Van Nuys building
[Browning et al. (2000)] reported the performance assessment results of the
Van Nuys building based on studies of three independent research teams. These
teams used nonlinear dynamic and nonlinear static analysis to localize
structural damage and concluded that the various studies were successful to
varying degrees. [Naeim et al. (2006)] presented a methodology for automated
post-earthquake damage assessment of instrumented buildings. The methodology
was applied to the measured response from Landers, Big Bear, and Northridge
earthquakes. Their findings show that the building did not suffer structural
damage under the Landers and Big Bear earthquakes and indicate a high
probability of extensive damage to the middle floors of the building under the
Northridge earthquake. They concluded that their methodology was not able to
identify the exact floor level at which the damage occurs because no sensors
were installed on the floor that was damaged. As previously mentioned, [Ching
et al. (2006)] performed state estimation using measured data during the
Northridge earthquake combined with a time-varying linear model and then, with
a simplified time-varying nonlinear degradation model derived from a nonlinear
finite-element model of the building. They found that state estimation using
the nonlinear degradation model shows better performance and estimates the
maximum ISD to be at the 4th story. They concluded that an appropriate
estimation algorithm and a suitable identification model can improve the
accuracy of the state estimation. [Todorovska and Trifunac (2008)] used
impulse response functions computed from the recorded seismic response during
11 earthquakes, including the Northridge earthquake. They analyzed travel
times of vertically propagating waves to obtain the degree and spatial
distribution of changes in stiffness and infer the presence of structural
damage. Their findings showed that during the Northridge earthquake, the
rigidity decreased by about 60% between the ground and 2nd stories; by about
33% between 2nd and 3rd stories, and between 3rd and 6th stories; and by about
41% between the 6th story and roof. [Rodríguez et al. (2010)] implemented
their proposed method, called the Baseline Stiffness Method, to detect and assess
structural and nonstructural damage to the Van Nuys building using data from
the Northridge earthquake. Their approach was able to detect damage in
connections with wide cracks of 5 cm or greater. On the other hand, the method
identified damage in some elements of upper stories that were not detected by
visual inspection reports, and also, they could not identify some of the
moderate damage with small cracks. [Shan et al. (2016)] presented a model-
reference damage detection algorithm of hysteretic buildings and investigated
the Van Nuys hotel using measured data from Big Bear and Northridge
earthquakes. The researchers concluded that their algorithm can only detect
damage at certain floors and cannot detect damage in structural components
or connections of the instrumented structure.
## 6 Implementation of the Proposed Seismic Monitoring Framework
### 6.1 Nonlinear modeling of the Van Nuys hotel testbed in OpenSEES
The nonlinear FE model of the building was implemented using a two-dimensional
fixed-base model within the environment of OpenSEES [McKenna et al. (2000)].
This model corresponds to one of the longitudinal frames of the building
(Frame A in Figure 5). In the FE model, beams and columns were modeled using
the distributed plasticity approach, and force-based beam-column elements were
used to accurately determine yielding and plastic deformations at the
integration points along each element. The Gauss-Lobatto integration approach
was employed to evaluate the nonlinear response of the force-based
elements. Each beam and column element was discretized with four integration
points, and the cross-section of each element was subdivided into fibers. The
uniaxial Concrete01 material was selected to construct a Kent-Scott-Park
object with a degraded linear unloading and reloading stiffness and zero
tensile strength. The uniaxial Steel01 material was used to model longitudinal
reinforcing steel as a bilinear model with kinematic hardening. The elasticity
modulus and strain hardening parameters were assumed to be 200 GPa and 0.01,
respectively. Due to insufficient transverse reinforcement in beams and
columns [Jalayer et al. (2017)], an unconfined concrete model was defined to
model concrete. The peak and post-peak strengths were defined at a strain of
0.002 and a compressive strain of 0.006, respectively. The corresponding
strength at ultimate strain was defined as $0.05f^{\prime}_{c}$ for
$f^{\prime}_{c}=34.5$ MPa and $f^{\prime}_{c}=27.6$ MPa and
$0.2f^{\prime}_{c}$ for $f^{\prime}_{c}=20.7$ MPa. Based on the recommendation
of [Islam (1996)], the expected yield strength of Grade 40 and Grade 60 steel
were defined as 345 MPa (50 ksi) and 496 MPa (72 ksi), respectively, to
account for inherent overstrength in the original material and strength gained
over time.
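To make these modeling choices concrete, the following is a minimal openseespy sketch of a single force-based column with the Concrete01/Steel01 fiber section and four Gauss-Lobatto integration points. All tags, node coordinates, section dimensions, and bar areas are illustrative placeholders, not the dimensions of the authors' model.

```python
# Minimal openseespy sketch of the modeling choices above: one force-based
# column with a Concrete01/Steel01 fiber section and 4 Gauss-Lobatto points.
# Tags, coordinates, section dimensions, and bar areas are placeholders.
import openseespy.opensees as ops

ops.wipe()
ops.model('basic', '-ndm', 2, '-ndf', 3)          # 2D frame model

# Unconfined Kent-Scott-Park concrete: peak at 0.002, crushing at 0.006 strain,
# residual strength 0.05*f'c for the 34.5 MPa grade (kPa units, compression < 0)
fpc = -34.5e3
ops.uniaxialMaterial('Concrete01', 1, fpc, -0.002, 0.05 * fpc, -0.006)

# Bilinear reinforcing steel with kinematic hardening: E = 200 GPa, b = 0.01
ops.uniaxialMaterial('Steel01', 2, 345e3, 200e6, 0.01)   # Grade 40 expected fy

# Fiber cross-section: concrete patch plus top/bottom reinforcing-bar layers
ops.section('Fiber', 1)
ops.patch('rect', 1, 12, 1, -0.25, -0.45, 0.25, 0.45)
ops.layer('straight', 2, 4, 645e-6, -0.20, -0.40, 0.20, -0.40)
ops.layer('straight', 2, 4, 645e-6, -0.20, 0.40, 0.20, 0.40)

# First-story column (4.11 m tall) as one force-based beam-column element
ops.node(1, 0.0, 0.0)
ops.node(2, 0.0, 4.11)
ops.fix(1, 1, 1, 1)
ops.geomTransf('PDelta', 1)
ops.beamIntegration('Lobatto', 1, 1, 4)           # 4 integration points
ops.element('forceBeamColumn', 1, 1, 2, 1, 1)
```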
### 6.2 Formulation of the OpenSEES-NMBO of Van Nuys building
The nonlinear FE model and response measurements of the Van Nuys building were
employed to implement the NMBO in OpenSEES. The following subsections present
the step-by-step formulation of the OpenSEES-NMBO.
#### 6.2.1 PSD selection and numerical optimization
The PSD of ground motion, $\boldsymbol{\Phi}_{ww}(\omega)$, was characterized
using the Kanai-Tajimi PSD given by
$S(\omega)=G_{0}\frac{1+4\xi_{g}^{2}(\frac{\omega}{\omega_{g}})^{2}}{\left[1-(\frac{\omega}{\omega_{g}})^{2}\right]^{2}+4\xi_{g}^{2}(\frac{\omega}{\omega_{g}})^{2}}$
(20)
and the amplitude modulating function $I(t)$ was selected as
$I(t)=te^{-\alpha{t}}$ (21)
The parameters were defined as $\xi_{g}=0.35$ for both earthquakes,
$\omega_{g}=6\pi~\mathrm{rad/s}$ for the Northridge earthquake, and
$\omega_{g}=2\pi~\mathrm{rad/s}$ for the Big Bear earthquake. The underlying white noise spectral density $G_{0}$
for each direction of measured ground motion for each seismic event was
found such that about 95% of the Fourier transform of the measured ground
motion lies within two standard deviations of the average from the Fourier
transforms of an ensemble of 200 realizations of the Kanai-Tajimi stochastic
process. $\alpha$ was selected as 0.12. Details can be found in [Roohi et al.
(2019a)]. Also, the PSD of measurement noise,
$\boldsymbol{\Phi}_{vv}(\omega)$, in each measured channel was taken as that of
a zero-mean white Gaussian sequence with a noise-to-signal root-mean-square
(RMS) ratio of 0.02.
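As a quick illustration of Equations 20 and 21, the sketch below evaluates the Kanai-Tajimi PSD and the modulating function with the Northridge parameters quoted above; the value of $G_{0}$ here is a placeholder rather than the calibrated value.

```python
# Sketch of Eqs. 20-21: Kanai-Tajimi PSD and amplitude modulating function,
# with the Northridge parameters quoted above; G0 = 1 is a placeholder.
import numpy as np

def kanai_tajimi_psd(omega, G0, omega_g, xi_g):
    """One-sided Kanai-Tajimi PSD, Eq. 20."""
    r2 = (omega / omega_g) ** 2
    return G0 * (1 + 4 * xi_g**2 * r2) / ((1 - r2) ** 2 + 4 * xi_g**2 * r2)

def modulating_function(t, alpha=0.12):
    """Envelope I(t) = t * exp(-alpha * t), Eq. 21."""
    return t * np.exp(-alpha * t)

omega = np.linspace(0.1, 60.0, 600)                         # rad/s
S = kanai_tajimi_psd(omega, G0=1.0, omega_g=6 * np.pi, xi_g=0.35)
I = modulating_function(np.linspace(0.0, 40.0, 400))
```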
Numerical optimization was performed using Equation 14. Table 1 presents the
optimized damper values for each seismic event.
Table 1: Optimized damper values in kN.s/m (kips.s/in) units

Earthquake | Story 1 | Story 2 | Story 5 | Story 7
---|---|---|---|---
Big Bear | 7283.11 (41.59) | 9357.25 (53.43) | 19299.40 (110.20) | 34808.04 (198.76)
Northridge | 5209.72 (29.75) | 6592.45 (37.64) | 16612.79 (94.86) | 47217.69 (269.62)
#### 6.2.2 Formulation of the OpenSEES-NMBO
The OpenSEES nonlinear FE model was modified by adding grounded dampers in
measurement locations and was subjected to corrective forces. Dynamic analysis
was performed to estimate the complete seismic response. Figure 7 presents a
schematic of the Van Nuys hotel testbed (with the location of accelerometers)
along with the OpenSEES-NMBO (with corresponding added viscous dampers and
corrective forces in measurement locations).
Figure 7: Schematic of the Van Nuys hotel testbed with location of
accelerometers (left) and the OpenSEES-NMBO with corresponding added viscous
dampers and corrective forces in measurement locations
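One plausible way to realize this NMBO modification in openseespy is sketched below: a grounded linear viscous damper at a measured degree of freedom, plus a corrective nodal force read from file. The damper constant is the Big Bear story-1 value from Table 1; the node numbers, file path, and time step are hypothetical placeholders tied to the frame sketch of Section 6.1.

```python
# Sketch: grounded viscous damper plus corrective force at a measured DoF.
# Node 2 is the measured node of the frame sketch; 'force.txt' is a
# placeholder path for the observer's corrective-force history.
import openseespy.opensees as ops

c1 = 7283.11                                      # kN.s/m, story 1 (Table 1)
ops.node(901, 0.0, 4.11)                          # grounded anchor node
ops.fix(901, 1, 1, 1)
ops.uniaxialMaterial('Viscous', 90, c1, 1.0)      # linear viscous material
ops.element('zeroLength', 900, 901, 2, '-mat', 90, '-dir', 1)

# Corrective force time series applied at the same DoF
ops.timeSeries('Path', 10, '-filePath', 'force.txt', '-dt', 0.02)
ops.pattern('Plain', 10, 10)
ops.load(2, 1.0, 0.0, 0.0)                        # unit load scaled by series
```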
### 6.3 Seismic damage reconstruction using estimated seismic response
Once the complete seismic response is estimated using the OpenSEES-NMBO, the
seismic damage to the building can be quantified according to Section 4.0.3.
This subsection demonstrates the procedure in more detail.
#### 6.3.1 Shear DCR reconstruction
The shear DCRs were estimated based on Equation 18. The shear demands were
obtained from the OpenSEES-NMBO, and the capacities of the columns were
calculated based on Section 6.5.2.3.1 of [FEMA (2000)].
#### 6.3.2 Ductility demand reconstruction
The seismic damage caused by excessive deformation is the first term of
Equation 19, given by
$\displaystyle DI_{\mu}=\dfrac{\mu_{m}}{\mu_{u}}$ (22)
where $\mu_{m}$ in each structural element is the maximum estimated curvature
along the integration points normalized by the yield curvature, given by
$\displaystyle\mu_{m}=\max_{j=1,\dots,N_{p}}\left\{\dfrac{\phi_{m,j}}{\phi_{y}}\right\}$
(23)
where $\phi_{m,j}$ is the maximum estimated curvature at integration point $j$,
$\phi_{y}$ is the yield curvature, and $N_{p}$ is the number of integration
points along the element. The curvature ductility capacity ($\mu_{u}$) is obtained by
$\displaystyle\begin{aligned}
&\mu_{u}=\dfrac{\phi_{u}}{\phi_{y}}\end{aligned}$ (24)
where $\phi_{u}$ is the ultimate curvature capacity of the section.
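A compact sketch of Equations 22-24 for a single element follows; the curvature values are placeholders standing in for the response estimated by the OpenSEES-NMBO.

```python
# Sketch of Eqs. 22-24: curvature-ductility damage term for one element.
# Curvature values are placeholders standing in for the estimated response.
import numpy as np

def ductility_damage(phi_m, phi_y, phi_u):
    """DI_mu = mu_m / mu_u, Eq. 22, with mu_m and mu_u from Eqs. 23-24."""
    mu_m = np.max(np.abs(phi_m)) / phi_y    # Eq. 23: demand over the points
    mu_u = phi_u / phi_y                    # Eq. 24: ductility capacity
    return mu_m / mu_u

phi_m = [0.004, 0.011, 0.009, 0.003]        # 1/m, four integration points
print(ductility_damage(phi_m, phi_y=0.005, phi_u=0.020))    # -> 0.55
```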
#### 6.3.3 Dissipated hysteretic energy reconstruction
The seismic damage caused by dissipated hysteretic energy, $DI_{E}$ in
Equation 19, was calculated based on the flexure failure mode as follows
$\displaystyle
DI_{E}=\psi\dfrac{E_{h}}{E_{max}}\cong\psi\dfrac{E_{h}}{M_{y}\theta_{y}\mu_{u}}$
(25)
where $M_{y}$ is the yield moment and $\theta_{y}$ is the yield rotation angle.
The main issue with the calculation of $DI_{E}$ is the determination of $\psi$,
which is usually calibrated to a value between 0.05 and 0.15. A reasonable
$\psi$ value should properly account for the effect of the load cycles causing
structural damage, since selecting a small value for $\psi$ neglects the
contribution of $DI_{E}$ to the overall damage index [Williams and Sexsmith (1995)].
Because the true $\psi$ is unknown for the elements of the Van Nuys building,
and because the objective within the scope of this paper is to localize seismic
damage, the calibration parameter is set to one; the estimated value of each
term in the damage index is first reported separately and then combined.
The dissipated hysteretic energy ($E_{h}$) is estimated based on Equation 7
and the seismic response estimated using OpenSEES-NMBO. The parameter $M_{y}$
was obtained from section analysis, and the value of $\theta_{y}\mu_{u}$ was
calculated as follows
$\displaystyle\theta_{y}\mu_{u}=\theta_{p}=(\phi_{u}-\phi_{y})l_{p}=\phi_{p}l_{p}$
(26)
where $l_{p}$ is the plastic hinge length and is defined using an empirically
validated relationship proposed by [Bae and Bayrak (2008)] given by
$\displaystyle\dfrac{l_{p}}{h}=\left[0.3\left(\dfrac{P}{P_{o}}\right)+\left(\dfrac{A_{s}}{A_{g}}\right)-1\right]\left(\dfrac{L}{h}\right)+0.25\geq
0.25$ (27)
where $h$ and $L$ represent the depth and length of the column; $P$ and $P_{o}$
denote the column axial load and nominal axial capacity, with
$P_{o}=0.85f^{\prime}_{c}(A_{g}-A_{s})+f_{y}A_{s}$; $A_{g}$ and $A_{s}$ denote
the gross area of the concrete section and the area of tension reinforcement;
and $f^{\prime}_{c}$ and $f_{y}$ are the compressive strength of concrete and
the yield stress of the reinforcement.
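The following sketch assembles Equations 25-27 into the energy term of the damage index, using the Bae-Bayrak plastic hinge length with its $0.25h$ lower bound; all numerical inputs are illustrative placeholders.

```python
# Sketch of Eqs. 25-27: energy term of the damage index with the Bae-Bayrak
# plastic hinge length (0.25*h lower bound). All inputs are placeholders.
def plastic_hinge_length(h, L, P, Po, As, Ag):
    """l_p from Eq. 27."""
    ratio = (0.3 * (P / Po) + (As / Ag) - 1.0) * (L / h) + 0.25
    return max(ratio, 0.25) * h

def energy_damage(Eh, My, phi_u, phi_y, lp, psi=1.0):
    """DI_E = psi*Eh/(My*theta_y*mu_u), with theta_y*mu_u = (phi_u-phi_y)*lp (Eq. 26)."""
    return psi * Eh / (My * (phi_u - phi_y) * lp)

lp = plastic_hinge_length(h=0.5, L=2.6, P=900.0, Po=6000.0, As=2.5e-3, Ag=0.25)
DI_E = energy_damage(Eh=2.5, My=300.0, phi_u=0.15, phi_y=0.01, lp=lp)
```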
## 7 Seismic Response and Damage Reconstruction Results
A summary of the seismic response and damage reconstruction results is
presented in this section to validate the proposed seismic monitoring
framework.
### 7.1 Displacement estimation results
First, we compare the displacement estimates from the OpenSEES-NMBO, and their
uncertainties, with those obtained from 1) response measurements and 2) open-loop
analysis under measured ground motion at instrumented and non-instrumented
stories. Figures 8 and 9 present the comparison of the displacement estimates
at instrumented 1st and 7th stories and non-instrumented 3rd and 6th stories
during the Big Bear earthquake and Northridge earthquake, respectively.
Figure 8: Comparison of displacement estimates using OpenSEES-NMBO with
estimates using open-loop analysis and actual measurements in 1st floor (top
left), 3rd floor (top right), 6th floor (bottom left) and 7th floor (bottom
right) during the Big Bear earthquake. “Measured” denotes the measured
response; “OL” denotes the open-loop analysis of the OpenSEES model under the
measured ground motion; and “NMBO” denotes the response estimated by the
OpenSEES-NMBO from the sensor measurements, shown with its $1\sigma$ estimation
uncertainty bound
### 7.2 Inter-story drift estimation results
Figure 10 depicts the estimated $\text{ISD}_{\mathrm{max}}$ ratios and their
corresponding $1\sigma$ confidence intervals using OpenSEES-NMBO. These
results are compared with estimated $\text{ISD}_{\mathrm{max}}$ using open-
loop analysis and those obtained from instrumented stories. The OpenSEES-NMBO
ISD estimates indicate that the building could be classified as IO following
the Big Bear earthquake and as LS-CP following the Northridge earthquake. The
actual performance and post-earthquake inspection reports of the buildings
validate the accuracy of the performance estimates. Figure 11 gives an in-
depth examination of the ISD estimates during the Northridge earthquake. The
left plot in this figure shows the comparison of ISDs at 3rd, 4th, and 5th
stories, and the right plot shows the comparison of relative ISDs between
floors 3 and 4 and also, floors 4 and 5. Here, the relative ISD is defined as
follows
$\displaystyle\text{RISD}_{(k,k-1)}=\text{ISD}_{(k)}-\text{ISD}_{(k-1)}$ (28)
where $\text{RISD}_{(k,k-1)}$ is relative ISD between stories $k$ and $k-1$.
The estimation results show that even though the $\text{ISD}_{\mathrm{max}}$
occurs in the third story, the RISD demand between floors 4 and 5 is higher
than floors 3 and 4.
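Equation 28 reduces to a difference of per-story ISD histories; a minimal sketch with placeholder arrays:

```python
# Sketch of Eq. 28: relative inter-story drifts from per-story ISD histories.
# The array below is a placeholder; rows 0..6 hold stories 1..7.
import numpy as np

isd = 0.01 * np.random.randn(7, 3000)   # placeholder ISD time histories
risd_43 = isd[3] - isd[2]               # RISD between stories 4 and 3
risd_54 = isd[4] - isd[3]               # RISD between stories 5 and 4
```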
Figure 9: Comparison of displacement time history estimates with estimates
using open-loop analysis and actual measurements in 1st floor (top left), 3rd
floor (top right), 6th floor (bottom left) and 7th floor (bottom right) during
Northridge earthquake. Figure 10: Comparison of $\text{ISD}_{\mathrm{max}}$
ratios obtained from response measurements with those estimated using
OpenSEES-NMBO and open-loop analysis during 1992 Big Bear earthquake (left)
and 1994 Northridge earthquake (right). Figure 11: Comparison of ISD (left)
and RISD (right) time history estimates for stories 3 to 5 during Northridge
earthquake.
### 7.3 Element-by-element shear demand to capacity ratio reconstruction
Figure 12 shows the element-by-element shear DCRs estimated by the
OpenSEES-NMBO using the measured seismic response of the Van Nuys building
during the Big Bear (left) and Northridge (right) earthquakes.
Figure 12: Estimated element-by-element shear demand to capacity ratios by
OpenSEES-NMBO using measured seismic response of the Van Nuys building during
1992 Big Bear (left) and 1994 Northridge (right) earthquakes.
### 7.4 Element-by-element damage index reconstruction
This section presents the seismic damage quantification results using the
estimated response from the OpenSEES-NMBO and the damage model presented in
Section 4.0.3, which is also demonstrated in more detail in Section 6.3.
Figure 13 summarizes the estimated maximum curvature ductility demands
($\mu_{m}$) in two ends of columns for each earthquake. To interpret the
$\mu_{m}$ demands, one needs to consider that the expected ductility capacity
of columns in this building is relatively low as the columns are non-ductile.
Figure 14 presents reconstructed element-by-element normalized energy
dissipation. Figure 15 presents the reconstructed element-by-element damage
indices. Figure 16 schematically depicts the seismic damage suffered during
the Northridge earthquake to compare the reconstructed DIs with the building’s
actual performance. The shear cracks with $\text{width}\geq 5~\text{cm}$ are
highlighted in red, and the shear cracks with
$0.5~\text{cm}\leq\text{width}\leq 1~\text{cm}$ are highlighted in green. As
can be seen, the element-by-element
comparison of estimated DIs with post-earthquake inspection results confirms
the accuracy of damage localization using the proposed mechanistic approach.
Figure 13: Reconstructed maximum end curvature ductility demands in columns by
implementing OpenSEES-NMBO using measured seismic response of the Van Nuys
building during 1992 Big Bear (left) and 1994 Northridge (right) earthquakes.
Figure 14: Reconstructed element-by-element normalized energy dissipation by
OpenSEES-NMBO using measured seismic response of the Van Nuys building during
1992 Big Bear (left) and 1994 Northridge (right) earthquakes. Figure 15:
Reconstructed element-by-element damage indices by OpenSEES-NMBO using
measured seismic response of the Van Nuys building during 1992 Big Bear (left)
and 1994 Northridge (right) earthquakes. Figure 16: Seismic damage experienced
during the 1994 Northridge earthquake: (left) south view of Frame D, and
(right) south view of Frame A. (Adopted from Trifunac and Ivanovic 2003)
### 7.5 Discussion on damage detection and localization results
The results described in the preceding sections demonstrate that a nonlinear
model-data fusion using a refined distributed plasticity FE model and a
limited number of response measurements can accurately reconstruct the seismic
response. Subsequently, the estimated response can be used to quantify the
seismic damage based on damage sensitive response parameters and damage
models. The estimated ISDs indicated that the performance-based post-
earthquake re-occupancy category of the building was IO during the Big Bear
earthquake and LS-CP during the Northridge earthquake. The ISD and RISD
analysis during the Northridge earthquake showed that the
$\text{ISD}_{\mathrm{max}}$ occurred at the 3rd story, while the maximum RISD
occurs at the top of the 4th story. Also, dissipated energy and ductility
reconstruction detects no structural damage during the Big Bear earthquake and
severe damage during the Northridge earthquake. By combining the information
from estimated ISDs, RISDs, maximum curvature ductility demands, and element-
by-element damage indices during the Northridge earthquake, severe damage was
localized in the columns of the 4th story (between floors 4 and 5) and also,
small or moderate damage was estimated for the remaining columns. The location
of severe damage in the 4th story can be explained mainly by the widely spaced
or absent transverse reinforcement in the beam-column joints, which contributed
to the lower shear capacity of that story; this effect is captured by the proposed
mechanistic seismic monitoring framework through high-resolution seismic
response and element-by-element damage index reconstruction. Finally, it was
shown that the damage assessment results were consistent with the building’s
actual performance and post-earthquake inspection reports following the Big
Bear and Northridge earthquakes. Therefore, the applicability of the proposed
framework is validated in the context of a real-world building that
experienced severe localized damage during sequential seismic events.
## 8 Conclusions
This paper proposes a seismic monitoring framework to reconstruct element-by-
element dissipated hysteretic energy and perform structural damage detection
and localization. The framework employs a nonlinear model-based state observer
(NMBO) to combine a design level nonlinear FE model with acceleration
measurements at limited stories to estimate nonlinear seismic response at all
DoF of the model. The estimated response is then used to reconstruct damage-
sensitive response features, including 1) inter-story drifts, 2) code-based
demand to capacity ratios, and 3) normalized dissipated hysteretic energy and
ductility demands. Ultimately, the estimated features are used to conduct the
performance-based post-earthquake assessment, damage detection, and
localization.
The methodology was successfully validated using measured data from the seven-
story Van Nuys hotel testbed instrumented by CSMIP (Station 24386) during 1992
Big Bear and 1994 Northridge earthquakes. The NMBO of the building was
implemented using a distributed plasticity finite element model and measured
data to reconstruct seismic response during each earthquake. The estimated
seismic response was then used to reconstruct inter-story drifts and determine
the performance-based post-earthquake re-occupation category of the building
following each earthquake. The performance categories were estimated as IO and
LS-CP during the Big Bear and Northridge earthquakes, respectively. Analysis
during the Northridge earthquake showed that the maximum inter-story drift
occurred at the 3rd story, while the maximum relative inter-story drift
occurred at the top of the 4th story. Column-by-column shear demand to
capacity ratios, ductility demands, and normalized dissipated hysteretic
energy ratios were computed. The proposed framework correctly estimated linear
behavior and no damage during the Big Bear earthquake and identified the
location of major damage in the beam/column joints located at the fourth floor
of the south frame during the Northridge earthquake. The damage indices were
identified near unity and above (which corresponds to total failure of the
member) in columns with severe damage (wide shear cracks equal to or greater
than 5 cm); between 0.35 and 0.70 in columns with moderate damage (shear cracks
smaller than 1 cm); and smaller than 0.50 in the remaining columns, which did
not experience visible cracks. To the best knowledge of the authors,
the results presented in this paper constitute the most accurate and the
highest resolution damage estimates obtained for the Van Nuys hotel testbed.
## 9 Data Availability Statement
Some or all data, models, or code that support the findings of this study are
available from the corresponding author upon reasonable request.
## 10 Acknowledgement
Support for this research provided, in part, by award No. 1453502 from the
National Science Foundation is gratefully acknowledged.
## References
* Bae and Bayrak (2008) Bae, S. and Bayrak, O. (2008). “Plastic hinge length of reinforced concrete columns.” ACI Structural Journal, 105(3), 290.
* Bernal and Hernandez (2006) Bernal, D. and Hernandez, E. (2006). “A data-driven methodology for assessing impact of earthquakes on the health of building structural systems.” The Structural Design of Tall and Special Buildings, 15(1), 21–34.
* Browning et al. (2000) Browning, J., Li, Y. R., Lynn, A., and Moehle, J. P. (2000). “Performance assessment for a reinforced concrete frame building.” Earthquake Spectra, 16(3), 541–556.
* Ching et al. (2006) Ching, J., Beck, J. L., Porter, K. A., and Shaikhutdinov, R. (2006). “Bayesian state estimation method for nonlinear systems and its application to recorded seismic response.” Journal of Engineering Mechanics, 132(4), 396–410.
* Doucet et al. (2000) Doucet, A., Godsill, S., and Andrieu, C. (2000). “On sequential monte carlo sampling methods for bayesian filtering.” Statistics and computing, 10(3), 197–208.
* FEMA (2000) FEMA (2000). “Prestandard and commentary for the seismic rehabilitation of buildings.” American Society of Civil Engineers (ASCE).
* Frizzarin et al. (2010) Frizzarin, M., Feng, M. Q., Franchetti, P., Soyoz, S., and Modena, C. (2010). “Damage detection based on damping analysis of ambient vibration data.” Structural Control and Health Monitoring, 17(4), 368–385.
* Gelb (1974) Gelb, A. (1974). Applied optimal estimation. MIT press.
* Gičev and Trifunac (2012) Gičev, V. and Trifunac, M. D. (2012). “A note on predetermined earthquake damage scenarios for structural health monitoring.” Structural control and health monitoring, 19(8), 746–757.
* Goel (2005) Goel, R. K. (2005). “Evaluation of modal and fema pushover procedures using strong-motion records of buildings.” Earthquake spectra, 21(3), 653–684.
* Hernandez et al. (2018) Hernandez, E., Roohi, M., and Rosowsky, D. (2018). “Estimation of element-by-element demand-to-capacity ratios in instrumented smrf buildings using measured seismic response.” Earthquake Engineering & Structural Dynamics, 47(12), 2561–2578.
* Hernandez (2011) Hernandez, E. M. (2011). “A natural observer for optimal state estimation in second order linear structural systems.” Mechanical Systems and Signal Processing, 25(8), 2938–2947.
* Hernandez (2013) Hernandez, E. M. (2013). “Optimal model-based state estimation in mechanical and structural systems.” Structural control and health monitoring, 20(4), 532–543.
* Hernandez and May (2012) Hernandez, E. M. and May, G. (2012). “Dissipated energy ratio as a feature for earthquake-induced damage detection of instrumented structures.” Journal of Engineering Mechanics, 139(11), 1521–1529.
* Islam (1996) Islam, M. S. (1996). “Analysis of the northridge earthquake response of a damaged non-ductile concrete frame building.” The structural design of tall buildings, 5(3), 151–182.
* Jalayer et al. (2017) Jalayer, F., Ebrahimian, H., Miano, A., Manfredi, G., and Sezen, H. (2017). “Analytical fragility assessment using unscaled ground motion records.” Earthquake Engineering & Structural Dynamics, 46(15), 2639–2663.
* Krawinkler (2005) Krawinkler, H. (2005). Van Nuys hotel building testbed report: exercising seismic performance assessment. Pacific Earthquake Engineering Research Center, College of Engineering ….
* Krawinkler and Zohrei (1983) Krawinkler, H. and Zohrei, M. (1983). “Cumulative damage in steel structures subjected to earthquake ground motions.” Computers & Structures, 16(1-4), 531–541.
* Li and Jirsa (1998) Li, Y. R. and Jirsa, J. O. (1998). “Nonlinear analyses of an instrumented structure damaged in the 1994 northridge earthquake.” Earthquake Spectra, 14(2), 265–283.
* Loh and LIN (1996) Loh, C.-H. and LIN, H.-M. (1996). “Application of off-line and on-line identification techniques to building seismic response data.” Earthquake engineering & structural dynamics, 25(3), 269–290.
* McKenna et al. (2000) McKenna, F., Fenves, G. L., Scott, M. H., et al. (2000). “Open system for earthquake engineering simulation.” University of California, Berkeley, CA.
* Naeim et al. (2006) Naeim, F., Lee, H., Hagie, S., Bhatia, H., Alimoradi, A., and Miranda, E. (2006). “Three-dimensional analysis, real-time visualization, and automated post-earthquake damage assessment of buildings.” The Structural Design of Tall and Special Buildings, 15(1), 105–138.
* Park and Ang (1985) Park, Y.-J. and Ang, A. H.-S. (1985). “Mechanistic seismic damage model for reinforced concrete.” Journal of structural engineering, 111(4), 722–739.
* Pioldi et al. (2017) Pioldi, F., Ferrari, R., and Rizzi, E. (2017). “Seismic fdd modal identification and monitoring of building properties from real strong-motion structural response signals.” Structural Control and Health Monitoring, 24(11), e1982.
* Rodríguez et al. (2010) Rodríguez, R., Escobar, J. A., and Gómez, R. (2010). “Damage detection in instrumented structures without baseline modal parameters.” Engineering Structures, 32(6), 1715–1722.
* Roohi (2019a) Roohi, M., Hernandez, E. M., and Rosowsky, D. (September 10-12, 2019a). “Nonlinear seismic response reconstruction in minimally instrumented buildings - validation using neeswood capstone full-scale tests.” 2519–2526.
* Roohi (2019b) Roohi, M. (2019b). “Performance-based seismic monitoring of instrumented buildings.” Ph.D. thesis, Graduate College Dissertations and Theses. 1140. University of Vermont.
* Roohi et al. (2019a) Roohi, M., Hernandez, E. M., and Rosowsky, D. (2019a). “Nonlinear seismic response reconstruction and performance assessment in minimally instrumented wood-frame buildings - validation using neeswood capstone full-scale tests.” Structural Control & Health Monitoring, e2373.
* Roohi et al. (2019b) Roohi, M., Hernandez, E. M., and Rosowsky, D. (2019b). “Seismic damage assessment of instrumented wood-frame buildings: A case-study of neeswood full-scale shake table tests.” arXiv preprint arXiv:1902.09955.
* Shan et al. (2016) Shan, J., Shi, W., and Lu, X. (2016). “Model-reference health monitoring of hysteretic building structure using acceleration measurement with test validation.” Computer-Aided Civil and Infrastructure Engineering, 31(6), 449–464.
* Stephens and Yao (1987) Stephens, J. E. and Yao, J. T. (1987). “Damage assessment using response measurements.” Journal of Structural Engineering, 113(4), 787–801.
* Sucuoglu and Erberik (2004) Sucuoglu, H. and Erberik, A. (2004). “Energy-based hysteresis and damage models for deteriorating systems.” Earthquake engineering & structural dynamics, 33(1), 69–88.
* Taghavi and Miranda (2005) Taghavi, S. and Miranda, E. (2005). “Approximate floor acceleration demands in multistory buildings. ii: Applications.” Journal of Structural Engineering, 131(2), 212–220.
* Taucer et al. (1991) Taucer, F., Spacone, E., and Filippou, F. C. (1991). A fiber beam-column element for seismic response analysis of reinforced concrete structures, Vol. 91. Earthquake Engineering Research Center, College of Engineering, University ….
* Teran-Gilmore and Jirsa (2007) Teran-Gilmore, A. and Jirsa, J. O. (2007). “Energy demands for seismic design against low-cycle fatigue.” Earthquake engineering & structural dynamics, 36(3), 383–404.
* Todorovska and Trifunac (2008) Todorovska, M. I. and Trifunac, M. D. (2008). “Impulse response analysis of the van nuys 7-storey hotel during 11 earthquakes and earthquake damage detection.” Structural Control and Health Monitoring, 15(1), 90–116.
* Trifunac and Ebrahimian (2014) Trifunac, M. and Ebrahimian, M. (2014). “Detection thresholds in structural health monitoring.” Soil Dynamics and Earthquake Engineering, 66, 319–338.
* Trifunac and Ivanovic (2003) Trifunac, M. and Ivanovic, S. (2003). “Analysis of drifts in a seven-story reinforced concrete structure.” University of Southern California Report CE, 03–10.
* Trifunac et al. (1999) Trifunac, M., Ivanovic, S., and Todorovska, M. (1999). “Instrumented 7-storey reinforced concrete building in van nuys, california: description of the damage from the 1994 northridge earthquake and strong motion data.” Report CE 99, 2.
* Uang and Bertero (1990) Uang, C.-M. and Bertero, V. V. (1990). “Evaluation of seismic energy in structures.” Earthquake Engineering & Structural Dynamics, 19(1), 77–90.
* van de Lindt et al. (2010) van de Lindt, J. W., Gupta, R., Pei, S., Tachibana, K., Araki, Y., Rammer, D., and Isoda, H. (2010). “Damage assessment of a full-scale six-story wood-frame building following triaxial shake table tests.” Journal of Performance of Constructed Facilities, 26(1), 17–25.
* Wan and Van Der Merwe (2000) Wan, E. A. and Van Der Merwe, R. (2000). “The unscented kalman filter for nonlinear estimation.” Proceedings of the IEEE 2000 Adaptive Systems for Signal Processing, Communications, and Control Symposium (Cat. No. 00EX373), Ieee, 153–158.
* Williams and Sexsmith (1995) Williams, M. S. and Sexsmith, R. G. (1995). “Seismic damage indices for concrete structures: a state-of-the-art review.” Earthquake spectra, 11(2), 319–349.
# Spontaneous Pulse Formation in Edge-Less Photonic Crystal Resonators
Su-Peng Yu <EMAIL_ADDRESS> (Time and Frequency Division, NIST, Boulder, CO 80305, USA; Department of Physics, University of Colorado, Boulder, CO 80309, USA)
Daniel C. Cole (Time and Frequency Division, NIST, Boulder, CO 80305, USA; Department of Physics, University of Colorado, Boulder, CO 80309, USA)
Hojoong Jung (Time and Frequency Division, NIST, Boulder, CO 80305, USA; Department of Physics, University of Colorado, Boulder, CO 80309, USA)
Gregory T. Moille (Microsystems and Nanotechnology Division, NIST, Gaithersburg, MD 20899, USA; Joint Quantum Institute, NIST/University of Maryland, College Park, MD 20742, USA)
Kartik Srinivasan (Microsystems and Nanotechnology Division, NIST, Gaithersburg, MD 20899, USA; Joint Quantum Institute, NIST/University of Maryland, College Park, MD 20742, USA)
Scott B. Papp (Time and Frequency Division, NIST, Boulder, CO 80305, USA; Department of Physics, University of Colorado, Boulder, CO 80309, USA)
###### Abstract
Complex systems are a proving ground for fundamental interactions between
components and their collective emergent phenomena. Through intricate design,
integrated photonics offers intriguing nonlinear interactions that create new
patterns of light. In particular, the canonical Kerr-nonlinear resonator
becomes unstable with a sufficiently intense traveling-wave excitation,
yielding instead a Turing pattern composed of a few interfering waves. These
resonators also support the localized soliton pulse as a separate nonlinear
stationary state. Kerr solitons are remarkably versatile for applications, but
they cannot emerge from constant excitation. Here, we explore an edge-less
photonic-crystal resonator (PhCR) that enables spontaneous formation of a
soliton pulse in place of the Turing pattern. We design a PhCR in the regime
of single-azimuthal-mode engineering to re-balance Kerr-nonlinear frequency
shifts in favor of the soliton state, commensurate with how group-velocity
dispersion balances nonlinearity. Our experiments establish PhCR solitons as
mode-locked pulses by way of ultraprecise optical-frequency measurements, and
we characterize their fundamental properties. Our work shows that sub-
wavelength nanophotonic design expands the palette for nonlinear engineering
of light.
Integrated nonlinear photonics is a versatile engine to generate and control
electromagnetic radiation, opening new application directions and enabling
fundamental studies. Second- and higher-order nonlinear susceptibilities now
form the basis of many photonics technologies; a good example is harmonic
Hickstein _et al._ (2017) or difference-frequency Tadanagaa _et al._ (2006)
generation that realizes laser sources from the ultraviolet to the infrared. In
particular, third-order, Kerr processes are ubiquitous in photonics due to
intensity dependence of the refractive index, $n=n_{0}+n_{2}\,I$, where
$n_{2}$ is the nonlinear index and $I$ is intensity. They enable spontaneous
formation of stationary configurations of electromagnetic fields that affect
conversion of a laser from one color to another. More generally, modulation
instability that arises from nonlinearity governs interesting behaviors in
systems ranging from quantum matter Carr and Brand (2004) to desert sand dunes
Parteli _et al._ (2011). Studying nonlinear behaviors in the exquisitely
controlled environment of integrated photonics, where sub-wavelength features
lead to new optical behaviors, can increase understanding in a variety of
physical systems.
Kerr resonators– optical cavities built from an $n_{2}$ material –are an
attractive system for fundamental studies and applications. We understand the
formation of some pattern and pulse states of the intraresonator field $\psi$
from the Lugiato-Lefever equation (LLE)
$\partial_{\tau}\psi=-(1+i\alpha)\psi-\frac{i}{2}\beta\partial_{\theta}^{2}\psi+i|\psi|^{2}\psi+F$,
where $\theta$ is the resonator angular coordinate,
$-\frac{i}{2}\beta\partial_{\theta}^{2}\psi$ is the group-velocity dispersion
(hereafter GVD or dispersion), $|\psi|^{2}\psi$ is the nonlinearity, $F$ is a
traveling-wave pump-laser field originating outside the resonator, red-detuned
by $\alpha$ below the resonator mode; see Ref.
Godey _et al._ (2014) for further details. A few states stand out amongst the
diverse solution space of the LLE Godey _et al._ (2014): The constant-
amplitude flat state energized by a sufficiently weak pump laser; the Turing
pattern that emerges when the flat state is unstable; and the Kerr soliton
that is a localized pulse coexisting with, but not emerging spontaneously
from, the flat state. Indeed, microresonator soliton frequency combs
Kippenberg _et al._ (2018) have been engineered to support a wide range of
applications, including optical communication Marin-Palomo _et al._ (2017);
Fülöp _et al._ (2018), spectroscopy Suh _et al._ (2016),
and ranging Trocha _et al._ (2018). GVD engineering via the cross-sectional
waveguide dimensions offers powerful control of soliton properties Yu _et
al._ (2019a). Moreover, exotic photonic states have been reported using
unconventional resonator-mode engineering Lobanov _et al._ (2015); Xue _et
al._ (2015).
Spontaneous formation of patterns from break-up of the flat state is a
critical outcome in the LLE. A pattern forms spontaneously by four-wave mixing
(FWM), constrained by a balance of the Kerr frequency shift $\delta_{\mu}$ of
the comb mode number $\mu$, and the phase-mismatch from dispersion
$\frac{1}{2}\beta\,\mu^{2}$. We count the comb modes and the resonator modes
with respect to the mode closest to the pump laser (hereafter the pump mode,
$\mu=0$). Importantly, $\delta_{\mu}$ for each mode depends on the
intraresonator field according to $\delta_{\mu}=g\,(2\,N-|a_{\mu}|^{2})$ Herr
_et al._ (2015), where $a_{\mu}$ are the Fourier decomposition amplitudes for
the modes $\mu$, $g$ is the per-photon Kerr shift, and $N$ is the total photon
number. Setting $g=1$ is a standard normalization of the LLE. Beginning with the flat
state, all $a_{\mu^{\prime}\neq 0}=0$ and
$\delta_{\mu=0}=2N-N=\frac{1}{2}\delta_{\mu^{\prime}\neq 0}$, where the modes
$\mu^{\prime}$ are not pumped. The difference between self- and cross-phase
modulation results in a reduced Kerr shift for the pump mode by a factor of
two compared to other modes. This reduced Kerr shift enables FWM for the
Turing pattern at modes $\pm\mu^{\prime}$, characterized by
$\frac{1}{2}\beta|\mu^{\prime}|^{2}-\delta_{\pm\mu^{\prime}}=-\delta_{\mu=0}$.
Conversely, the soliton is a collective state with many modes $\mu^{\prime}$
that reach phase-matching only at large $\alpha$ where the flat-state
amplitude is insufficient to support spontaneous FWM processes. These phase-
matching conditions result in the disparate generation behaviors of Turing
patterns and solitons.
Here, we explore a re-balancing of the LLE that causes Kerr-soliton formation
from break-up of the flat state, replacing the Turing pattern. To accomplish
this dramatic outcome, we design and fabricate edge-less photonic-crystal
resonators (PhCR), which are Kerr-microresonators with their inner wall
modified by an azimuthally uniform, nanopatterned shape oscillation. The ring
geometry imposes the edge-less boundary condition on the photonic waveguide,
opening the PhCR bandgap – thus controllably shifting the frequency – for one
azimuthal mode. We program the shift to directly phase-match the soliton with
the pump laser nearly on-resonance of the pump mode. Moreover, this shifts the
Turing pattern off-resonance, precluding its formation. We have realized and
explored spontaneous soliton formation in wide-ranging experiments, including
observing the immediate transition from the flat state to the soliton, soliton
pulse bandwidth control by dispersion engineering through the bulk ring
dimensions, and ultraprecise measurements of the soliton repetition frequency.
Our work draws on advances in nanophotonics and photonic-crystal devices that
provide access to otherwise challenging or impossible to achieve phenomena.
Take, for example, exotic refractive phenomena Kocaman _et al._ (2011),
strong light-matter interactions Miura _et al._ (2014), and coupling to
radiofrequency or phonon modes Fang _et al._ (2016). Moreover, photonic
structures have been demonstrated to suppress Petrovich _et al._ (2008) and
enhance Sharma _et al._ (2015) nonlinear effects, engineer small mode volume
Hu and Weiss (2016), create sophisticated group-velocity dispersion profiles
Kim _et al._ (2017); Moille _et al._ (2018), realize slow-light effects
McGarvey-Lechable _et al._ (2017), and control resonator mode splittings Lu
_et al._ (2014). Photonic-crystal devices are dielectric structures with sub-
wavelength spatial periodicity Joannopoulos _et al._ (1997) that restrict
scattering to discrete momentum values $k_{m}=k_{0}+\frac{2m\pi}{\Lambda}$ not
interacting with free-space modes, where $\Lambda$ is the periodicity and $m$
is an integer. In a photonic resonator, the bandgap imposes reflective
boundaries to confine light as in a Fabry-Perot cavity Yu _et al._ (2019b).
In our experiments, we use the bandgap instead in an edge-less boundary
condition– a complete ring without edges –to modify a select mode of the PhCR.
This condition, combined with an even number of nanopattern periods,
frequency-aligns the bandgap to a mode of the PhCR McGarvey-Lechable and
Bianucci (2014).
Figure 1: Mode structure for the Kerr resonator, showing the cold- (cross) and
hot-cavity (open circle) resonances, pump laser (dashed line), and light in
each mode (solid circle). The left panels show the (A) Turing pattern, and
(B) DKS state in the resonator, with the DKS Kerr mismatch (dashed red
arrow). (C) and (D) show the resonator with photonic crystal shift (dashed
blue arrow) at the corresponding Kerr shift, with (D) in the pulse state. (E)
Illustration of optical pulse formation in a photonic ring resonator. (F)
Simulated peak power versus pump laser detuning for the (green) and (blue)
resonators, with the analytic flat amplitude (dashed gray) for reference. The
corresponding intensity profiles are shown in the right panels.
Figure 1 introduces the mode-frequency structure of an ordinary ring resonator
and a PhCR, emphasizing how modifying the pump mode affects Turing-pattern and
Kerr-soliton generation. The diagrams plot the modal detuning
$f_{\mu}-(f_{0}+\mu\cdot FSR)$ for each mode $\mu$, showing the cold-resonator
modes that correspond to comb modes $\mu$ (crosses) and the hot-resonator
modes (open circles). The cold resonances follow the integrated dispersion
$D_{\text{int}}=\omega_{\mu}-\omega_{0}-D_{1}\mu=1/2\,D_{2}\,\mu^{2}+\epsilon_{\text{PhC}}\cdot\left(1-\delta(\mu)\right)$,
where $\omega_{\mu}$ is the angular frequency, $D_{1}$ is the free-spectral
range, $\epsilon_{\text{PhC}}$ is the frequency shift of the pump mode, and
$\delta(\mu)$ is the Kronecker delta function. We additionally shift the hot
resonances by the Kerr shift $\delta_{\mu}$, indicating phase accumulation
from the Kerr effect. At the onset of flat-state breakup, $\delta_{\mu=0}$ is
half that for all other modes. Therefore a natural phase matching exists for
FWM to the mode $\mu^{\prime}$, shown in Fig. 1A where the horizontal dashed
line matches the shifted $D_{\text{int}}$ curve. Hence, the Turing pattern
emerges, initially composed of pump and $\pm\mu^{\prime}$ modes (blue dots).
The stationary soliton state (Fig. 1B) of the ring resonator involves Kerr
frequency shifts to balance dispersion across many equidistant comb modes
(blue dots); the horizontal line in Fig. 1B indicates the pump laser. However,
since the pump-mode Kerr shift is reduced, only large $\alpha$ balances the
Kerr mismatch $\xi_{\text{Kerr}}=\delta_{\mu\neq 0}-\delta_{\mu=0}$. This
detuning precludes spontaneous formation of the Turing pattern, but also the
formation of solitons, as the low flat-state amplitude is below threshold. See
Supplemental Sect. IV for more details.
With a PhCR , we program the frequency shift $\epsilon_{\text{PhC}}$ to
alleviate the $\xi_{\text{Kerr}}$ mismatch of the soliton state. The negative
shift of both the cold and hot resonator at comb mode $\mu$ are apparent in
Fig. 1C, D. Under this condition, the Turing pattern no longer emerges from
the flat state when the pump mode is energized, since the natural FWM phase
matching is removed; see the horizontal line in Fig. 1C. Importantly, the
shift $\epsilon_{\text{PhC}}$ moves the cold pump mode toward lower frequency
by an amount commensurate with the mismatch $\xi_{\text{Kerr}}$, thereby
compensating for the reduced Kerr shift on the pump mode, bringing it
approximately onto resonance with the pump laser. Operationally, soliton
formation in a PhCR proceeds with the configuration shown in Fig. 1E. We
integrate a PhCR device and a coupling waveguide on a silicon chip. The
frequency shift $\epsilon_{\text{PhC}}$ is controlled by the nanopatterning on
the ring, while the pump laser field $F$ couples evanescently into the PhCR
from a waveguide. The continuous-wave pump laser energizes the PhCR and
creates a stable soliton pulse-train at the output.
To verify the physical understanding presented above, we use the LLE to
calculate $\psi$ during a sweep of the pump-laser frequency across the pump
mode; see Supplemental Sect. III for an LLE with the mode shift. Figure 1F
shows the peak intensity $|\psi|^{2}$ versus detuning for the ordinary and
photonic-crystal resonators, respectively. All frequency variables including
$\alpha$ and $\epsilon_{\text{PhC}}$ are in unit of half-width-half-max
linewidths unless otherwise specified. Aside from changing
$\epsilon_{\text{PhC}}$ from 0 to 2.8 to activate the PhCR frequency shift,
both simulations are performed with the same conditions, namely $F=1.5$,
$\beta=-0.17$. The ordinary ring produces the 5-lobe Turing pattern (lower
panel) as the pump detuning is swept completely across resonance, corresponding
to a range of $\alpha$ from $-2$ to $4$. We then introduce the PhCR case and
carry out the same $\alpha$ sweep. In contrast to the ordinary-ring case, a single pulse forms
with abrupt onset. Neither Turing patterns nor chaotic states form during the
sweep. Furthermore, the pulse demonstrates two distinct oscillatory stages,
known as ‘breather’ soliton states Kippenberg _et al._ (2018). The curious
reappearance of the breather state at the end of the sweep also contrasts with
ordinary-resonator soliton behavior, and we observe this in our experiments.
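A minimal split-step sketch of this simulation is given below, in normalized units with the parameter values quoted above. The single-mode shift is implemented here as the $D_{\text{int}}$ offset $\epsilon_{\text{PhC}}\cdot(1-\delta(\mu))$ from the text; this is one plausible discretization, not the exact shifted-mode LLE of Supplemental Sect. III.

```python
# Minimal split-step sketch of the normalized LLE sweep described above.
# The pump-mode shift enters through the D_int offset eps*(mu != 0); this is
# one plausible discretization, not the exact LLE of the supplement.
import numpy as np

M = 256
mu = np.fft.fftfreq(M, d=1.0 / M)                  # integer mode numbers
F, beta, eps = 1.5, -0.17, 2.8                     # paper's simulation values
dint = -0.5 * beta * mu**2 + eps * (mu != 0)       # referenced to the pump mode

psi = 1e-3 * (np.random.randn(M) + 1j * np.random.randn(M))   # noise seed
dtau, inner = 5e-3, 400
peaks = []
for alpha in np.linspace(-2.0, 4.0, 600):          # sweep across resonance
    lin = np.exp(dtau * (-1.0 - 1j * (alpha + dint)))         # loss + detuning
    for _ in range(inner):
        psi = np.fft.ifft(np.fft.fft(psi) * lin)              # linear step
        psi = psi * np.exp(1j * dtau * np.abs(psi) ** 2) + F * dtau  # Kerr + pump
    peaks.append(np.max(np.abs(psi) ** 2))
# 'peaks' approximates the peak-intensity trace of Fig. 1F
```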
Figure 2: (A) Electron microscope image of the PhCR, showing the unit cell
geometry and electric field profiles. Fabrication steps are shown as (i)
substrate, (ii) electron-beam lithography, (iii) reactive ion etching, and
(iv) dicing. (B) Laser frequency sweep traces of the shifted mode and nearby
modes. (C) Mode frequencies for resonators with varying $A_{\text{PhC}}$,
where $\mu=0$ is the $m_{\text{Ring}}=163$ resonance mode. (D) Laser frequency
sweep traces of pump transmission (red) and comb power (blue) demonstrating a
spontaneous step, where the stable pulse state is shaded in gray. (E) Optical
spectrum of the stable state.
Figure 2 presents our PhCR devices and experimental evidence for spontaneous
soliton formation, according to the principles laid out above. We create a
PhCR device with the oscillatory nanopattern indicated in Fig. 2A. A unit cell
of the pattern is defined by a sinusoidal shape, characterized by the pattern
periodicity and peak-to-peak amplitude $A_{\text{PhC}}$. The periodicity
enforces a photonic bandgap that necessarily overlaps one particular PhCR
mode, denoted as the pump mode $\mu=0$, in the 1550-nm wavelength range, owing
to the equal number of nanopattern periods and optical-mode fringes around the ring.
The bandgap lifts the degeneracy of counter-propagating light in the PhCR,
creating modes shifted to higher and lower frequency by an amount
$\epsilon_{\text{PhC}}$. Since the nanopattern is edgeless, i.e.,
circumferentially uniform, high resonator $Q$ is maintained. The properties of the other PhCR
modes ($\mu\neq 0$) with $\epsilon_{\text{PhC}}\approx 0$, including
nonlinearity and GVD, are preserved under the geometric modification. In
particular, the GVD depends sensitively on the thickness and ring-waveguide
width (RW), as in an ordinary ring resonator. We fabricate our devices from a
570-nm-thick tantalum pentoxide (Ta$_2$O$_5$, hereafter tantala) photonics layer Jung _et al._
(2019), which is deposited on an oxidized silicon wafer. We use electron-beam
lithography to define the photonics pattern for a wafer, and we transfer it to
the tantala layer by use of fluorine reactive-ion etching. A final UV
lithography process defines several chips on the wafer, and we dry-etch facets
in the tantala and oxide layers, and the silicon wafer. See Supplemental Sect.
II for more details.
In our experiments with PhCRs, we characterize $\epsilon_{\text{PhC}}$ by
spectroscopy measurements. We fabricate up to $\sim 75$ PhCRs on a chip with a
systematic, few-linewidth variation of $\epsilon_{\text{PhC}}$ and the
waveguide-resonator coupling gap to optimize the conditions for spontaneous
soliton formation. To measure $\epsilon_{\text{PhC}}$, we couple light to and
from the chip with a standard lensed-fiber system. Using a 1550-nm tunable
laser as input, we record the transmission at the output with a photodetector.
Figure 2B presents several PhCR mode resonances in the 1550-nm band, with
applied frequency offsets so the resonances coincide, that demonstrate a
single mode frequency splitting. We label the non-degenerate modes as upper
and lower, with the latter at a setting of $\epsilon_{\text{PhC}}$ consistent
with spontaneous soliton formation. Our experiments focus on gaps for near-
critical coupling, and this data indicates a loaded PhCR $Q$ of $\sim$
400,000. By adjusting the nanopattern amplitude through our e-beam
lithography, we systematically vary $\epsilon_{\text{PhC}}$; see Fig. 2C. In
the range of $A_{\text{PhC}}$ used in this work, the $Q$ factors are
unaffected, compared to resonators fabricated on the same wafer. With a
nanopattern amplitude of only a few nm, we control $\epsilon_{\text{PhC}}$ for
the $\mu=0$ mode, whereas the $\mu^{\prime}\neq 0$ modes exhibit an anomalous
GVD of $D_{2}=2\pi\cdot 69.0$ MHz/mode. The results confirm that our
fabrication process provides the device-geometry resolution and low optical
loss needed to build PhCRs that support the pulses.
We search for spontaneous soliton formation in a PhCR with
$\epsilon_{\text{PhC}}=2.2$ by sweeping the frequency of the pump laser with
$\sim 36$ mW of on-chip power; Fig. 2D presents a $\sim 20$ GHz sweep range
from high to low frequency that spans the upper and lower resonances. With
photodetectors, we monitor both transmission through the PhCR device (red
trace) and the power of generated comb modes (blue trace), which we obtain by
filtering out the pump. These data show the presence of thermal bistability
effects, which distort the resonances into a triangle shape, and the effects
of nonlinear comb generation. In particular, we observe no comb power at the
upper resonance, as the upper mode is shifted away from the $\mu^{\prime}$
modes needed for FWM. Whereas at the lower resonance we observe immediate comb
formation, corresponding to the step change in comb power that agrees with our
simulation in Fig. 1F. We assess that this nonlinear state on the lower
resonance, indicated by the shaded range in Fig. 2D, is a dissipative Kerr
soliton that spontaneously forms under certain conditions of pump power and
laser detuning. Additionally, we observe a nonlinear state on the lower
resonance that exhibits relatively higher comb power variance, likely a
breather state as indicated theoretically in Fig. 1F. The breather state at
higher detuning than the stable state suggests a modified optical state phase
diagram yet to be explored. Operationally, we adjust the pump power to
maximize the pump-frequency existence range of the low-noise spontaneous
soliton step, and we hand adjust the laser frequency into this range. Under
these conditions we record the optical spectrum (Fig. 2E) of the soliton comb,
which exhibits a clear $\mathrm{sech}^{2}(\nu)$ profile, as shown by the gray line. The
remainder of our paper presents measurements of such spontaneous solitons.
We attribute the ease of spontaneous-soliton capture to desirable thermal
behaviors of the PhCR. Conventionally, capturing and sustaining a soliton in an
ordinary device is difficult as a result of rapid heating and cooling of the
microresonator Stone _et al._ (2018); Brasch _et al._ (2016). Soliton
initiation in an ordinary resonator under CW excitation is preceded by Turing
patterns or chaotic states, which are multiple-pulse states with high average
intensity. Conversely, the desired soliton state is a single pulse with a
relatively low average intensity. Hence, the root of thermal instability is the
transition of the nonlinear state in a microresonator. The PhCR spontaneous
solitons offer two primary advantages: First, in soliton
initiation, we bypass the high average intensity states and avoid their
heating effects to the resonator. Second, we keep the pump laser on-resonance
in the soliton state (note the drop in transmission trace in Fig. 2D as the
pulse forms, indicating a more resonant condition), therefore minimizing
changes to the in-resonator pump amplitude as the soliton forms. Together,
these factors minimize the intensity changes in the PhCR, allowing pulse
capture by hand-tuning alone.
Figure 3: Optical spectra from PhCRs with average ring width (A-C) 1.3, 1.4,
and 1.5 $\mu$m. (D) The two-pulse state on the 1.5 $\mu$m device. The gray
traces are fits with the form $y=A_{0}+10\log\left(\mathrm{sech}^{2}((x-x_{0})/\mathrm{BW})\right)$. (E)
Spectra versus power for the RW = 1.4 $\mu$m device.
To explore the universality of spontaneous-soliton formation, we demonstrate
soliton bandwidth control by tuning the GVD of the PhCR and the pump-laser
power. We control the GVD directly by varying the RW from 1.3 to 1.5 $\mu$m,
providing decreasing anomalous GVD that we can understand from FEM calculation
of the PhCR resonator mode structure. Based on the LLE, this change should
yield an increasing soliton bandwidth. We tune by hand into the soliton
states on these devices and acquire their optical spectra, plotted in Fig.
3A-C. The spectrum bandwidth broadens with decreasing anomalous GVD as
expected. Interestingly, we acquired a stable two-pulse state at lower
detuning on the RW = 1.5 $\mu$m device, shown in Fig. 3D. The two-pulse state
suggests that the parameter space of the PhCR – an interplay between
dispersion and mode shift – supports more steady states beyond the single
spontaneous pulse. We also varied the pump laser power for the RW = 1.4 $\mu$m
device, see Fig. 3E, resulting in widening of the spectral envelope consistent
with the DKS. However, unlike the conventional case where increasing pump
power monotonically lengthens the soliton existence range Guo _et al._
(2017), the PhCR produces strong breather states at high power. More study is
underway to fully explore this behavior.
Figure 4: (A) The relative intensity noise on the comb power of a breather
(blue) and quiet (green) states. The dash lines show the detector noise floor
corresponding to the carrier power. (B) Bridging between the 1 THz separation
between comb lines (blue) by creating sidebands (red) using electro-optic
modulation. The arrow indicates the overlapping mode. (C) Electronic beatnote
from the overlapping mode. The gray traces show five consecutive measurements.
(D) Optical spectrum of broad-band soliton.
Stationary microresonator solitons output an optical pulse-train with fixed
period, which composes a low-noise, equidistant frequency comb suitable for
optical-frequency measurements Briles _et al._ (2018); Drake _et al._
(2019a). Therefore, verifying the spectral-noise properties of spontaneous
solitons in PhCR is of utmost importance. Figure 4 presents intensity- and
frequency-noise measurements, excluding the pump laser, of a spontaneous
soliton, which we generate in a device with RW = 1.4 $\mu$m,
$\epsilon_{\text{PhC}}$=3.0. The relative intensity noise (RIN, Fig. 4A) of a
stationary soliton and a breather soliton is below $-140$ dBc/Hz over a
Fourier frequency range to 1.8 GHz. Here, the photodetected soliton power is
282 $\mu$W and the spur-free dynamic range is excellent, whereas the breather
state manifests a single peak at 878 MHz and supports higher power and hence
lower RIN. These measurements are currently limited by the comb power and the
detector noise floor.
To measure the $\sim$ 1 THz PhCR soliton repetition frequency, we apply
electro-optic (EO) phase modulation to create a low-frequency heterodyne beat
between two soliton comb modes Drake _et al._ (2019a); the optical spectrum
trace in Fig. 4B indicates the soliton modes in blue and the EO sidebands in
red. We choose the EO drive frequency such that the $\pm 17^{\text{th}}$ order
sidebands (arrow in Fig. 4B) generate an optical heterodyne on a
photodetector, after filtering out that pair. We identify the tone thus
generated as the heterodyne, as it varies with the EO drive frequency at
$34.2$ MHz/MHz in agreement with the sideband orders. We present the
heterodyne spectrum in Fig. 4C, which shows the typical lineshape with $\sim
50$ kHz linewidth and $<1$ MHz fluctuations. We attribute these properties to
thermal noise Drake _et al._ (2019b) and thermal drift of the microresonator.
Finally, we demonstrate a PhCR device with optimized dispersion to create a
spontaneous DKS with near-octave bandwidth, shown in Fig. 4D. The $F^{2}$
value for this trace is estimated to be 8.7, normalized to threshold power of
$\mu=\pm 1$ modes. We anticipate these optimized devices to enable f-2f self
referencing in the future.
In conclusion, we have presented spontaneous and deterministic generation of
Kerr solitons in edge-less PhCRs, enabled by compensating the Kerr shift
mismatch between the pulse state and its pump mode. Mode-shifting by
nanopatterning enables spontaneous generation, whereas we retain the
capability to engineer broadband dispersion with the bulk ring geometry. The
importance of the nanophotonic capabilities presented in this work is
two-fold: First, the ability to controllably shift modes while maintaining the
bulk dispersion profile provides a tool to explore the physics occurring in a
nonlinear process. Here, the capability modifies the behavior of the pump
mode, but we envision applications such as direct engineering of dispersive
waves Matsko _et al._ (2016) or soliton crystals Cole _et al._ (2017),
potentially enabling inverse design methods for arbitrary desired waveforms;
Second, the spontaneous formation nature of pulses demonstrated here
significantly reduces the system complexity for a soliton formation and
stabilization system, enabling low power consumption, packaging-friendly
devices, or integrated systems with multiple independent pulse sources. We
envision spontaneous pulse devices like the PhCRs presented in this work to
become building blocks for future nonlinear optics and integrated-photonics
technologies.
## References
* Hickstein _et al._ (2017) D. D. Hickstein, D. R. Carlson, A. Kowligy, M. Kirchner, S. R. Domingue, N. Nader, H. Timmers, A. Lind, G. G. Ycas, M. M. Murnane, H. C. Kapteyn, S. B. Papp, and S. A. Diddams, Optica 4, 1538 (2017).
* Tadanagaa _et al._ (2006) O. Tadanagaa, T. Yanagawa, Y. Nishida, H. Miyazawab, K. Magari, M. Asobe, and H. Suzuki, Applied Physics Letters 88 (2006), 10.1063/1.2172400.
* Carr and Brand (2004) L. D. Carr and J. Brand, Physical Review Letters 92 (2004), 10.1103/PhysRevLett.92.040401.
* Parteli _et al._ (2011) E. J. R. Parteli, J. S. A. Jr., and H. J. Herrmann, Physical Review Letters 107 (2011), 10.1103/PhysRevLett.107.188001.
* Godey _et al._ (2014) C. Godey, I. V. Balakireva, A. Coillet, and Y. K. Chembo, Physical Review A 89 (2014), 10.1103/PhysRevA.89.063814.
* Kippenberg _et al._ (2018) T. J. Kippenberg, A. L. Gaeta, M. Lipson, and M. L. Gorodetsky, Science 361, eaan8083 (2018).
* Marin-Palomo _et al._ (2017) P. Marin-Palomo, J. N. Kemal, M. Karpov, A. Kordts, J. Pfeifle, M. H. P. Pfeiffer, P. Trocha, S. Wolf, V. Brasch, M. H. Anderson, R. Rosenberger, K. Vijayan, W. Freude, T. J. Kippenberg, and C. Koos, Nature 546 (2017), 10.1038/nature22387.
* Fülöp _et al._ (2018) A. Fülöp, M. Mazur, A. Lorences-Riesgo, Óskar B. Helgason, P.-H. Wang, Y. Xuan, D. E. Leaird, M. Qi, P. A. Andrekson, A. M. Weiner, and V. Torres-Company, Nature Communications 9 (2018), 10.5281/zenodo.1206122.
* Suh _et al._ (2016) M.-G. Suh, Q.-F. Yang, K. Y. Yang, X. Yi, and K. J. Vahala, Science 354, 600 (2016).
* Trocha _et al._ (2018) P. Trocha, M. Karpov, D. Ganin, M. H. P. Pfeiffer, A. Kordts, S. Wolf, J. Krockenberger, P. Marin-Palomo, C. Weimann, S. Randel, W. Freude, T. J. Kippenberg, and C. Koos, Science 359, 887 (2018).
* Yu _et al._ (2019a) S.-P. Yu, H. Jung, T. C. Briles, K. Srinivasan, and S. B. Papp, ACS Photonics 6, 2083 (2019a).
* Lobanov _et al._ (2015) V. Lobanov, G. Lihachev, T. J. Kippenberg, and M. Gorodetsky, Optics Express 23, 7713 (2015).
* Xue _et al._ (2015) X. Xue, Y. Xuan, Y. Liu, P.-H. Wang, S. Chen, J. Wang, D. E. Leaird, M. Qi, and A. M. Weiner, Nature Photonics 9, 594 (2015).
* Herr _et al._ (2015) T. Herr, M. L. Gorodetsky, and T. J. Kippenberg, in _Nonlinear Optical Cavity Dynamics: From Microresonators to Fiber Lasers_ (Wiley-VCH Verlag GmbH & Co, 2015) Chap. 6.
* Kocaman _et al._ (2011) S. Kocaman, M. S. Aras, P. Hsieh, J. F. McMillan, N. C. P. C. G. Biris, M. B. Yu, D. L. Kwong, A. Stein, and C. W. Wong, Nature Photonics 5, 499 (2011).
* Miura _et al._ (2014) R. Miura, S. Imamura, R. Ohta, A. Ishii, X. Liu, T. Shimada, S. Iwamoto, Y. Arakawa, and Y. K. Kato, Nature Communications 5 (2014), 10.1038/ncomms6580.
* Fang _et al._ (2016) K. Fang, M. H. Matheny, X. Luan, and O. Painter, Nature Photonics 10, 489 (2016).
* Petrovich _et al._ (2008) M. N. Petrovich, F. Poletti, A. van Brakel, and D. J. Richardson, Optics Express 16, 4337 (2008).
* Sharma _et al._ (2015) M. Sharma, S. Konar, and K. R. Khan, Journal of Nanophotonics 9 (2015), 10.1117/1.JNP.9.093073.
* Hu and Weiss (2016) S. Hu and S. M. Weiss, ACS Photonics 3, 1647 (2016).
* Kim _et al._ (2017) S. Kim, K. Han, C. Wang, J. A. Jaramillo-Villegas, X. Xue, C. Bao, Y. Xuan, D. E. Leaird, A. M. Weiner, and M. Qi, Nature Communications 8 (2017), 10.1038/s41467-017-00491-x.
* Moille _et al._ (2018) G. Moille, Q. Li, S. Kim, D. Westly, and K. Srinivasan, Optics Letters 43, 2772 (2018).
* McGarvey-Lechable _et al._ (2017) K. McGarvey-Lechable, T. Hamidfar, D. Patel, L. Xu, D. V. Plant, and P. Bianucci, Optics Express 25, 3916 (2017).
* Lu _et al._ (2014) X. Lu, S. Rogers, W. C. Jiang, and Q. Lin, Applied Physics Letters 105 (2014), 10.1063/1.4898001.
* Joannopoulos _et al._ (1997) J. Joannopoulos, P. R. Villeneuve, and S. Fan, Nature 386, 143 (1997).
* Yu _et al._ (2019b) S.-P. Yu, T. C. Briles, G. T. Moille, X. Lu, S. A. Diddams, K. Srinivasan, and S. B. Papp, Physical Review Applied 11 (2019b), 10.1103/PhysRevApplied.11.044017.
* McGarvey-Lechable and Bianucci (2014) K. McGarvey-Lechable and P. Bianucci, Optics Express 22, 26032 (2014).
* Jung _et al._ (2019) H. Jung, S.-P. Yu, D. R. Carlson, T. E. Drake, T. C. Briles, and S. B. Papp, in _Proceedings to Nonlinear Optics Conference – OSA Technical Digest_ (Waikoloa Beach, Hawaii, USA, 2019) p. NW2A.3.
* Stone _et al._ (2018) J. R. Stone, T. C. Briles, T. E. Drake, D. T. Spencer, D. R. Carlson, S. A. Diddams, and S. B. Papp, Physical Review Letters 24 (2018), 10.1103/PhysRevLett.121.063902.
* Brasch _et al._ (2016) V. Brasch, M. Geiselmann, M. H. P. Pfeiffer, and T. J. Kippenberg, Optics Express 24, 29312 (2016).
* Guo _et al._ (2017) H. Guo, M. Karpov, E. Lucas, A. Kordts, M. H. P. Pfeiffer, V. Brasch, G. Lihachev, V. E. Lobanov, M. L. Gorodetsky, and T. J. Kippenberg, Nature Physics 13, 94 (2017).
* Briles _et al._ (2018) T. C. Briles, J. R. Stone, T. E. Drake, D. T. Spencer, C. Fredrick, Q. Li, D. Westly, B. R. Ilic, K. Srinivasan, S. A. Diddams, and S. B. Papp, Optics Letters 43, 2933 (2018).
* Drake _et al._ (2019a) T. E. Drake, T. C. Briles, J. R. Stone, D. T. Spencer, D. R. Carlson, D. D. Hickstein, Q. Li, D. Westly, K. Srinivasan, S. A. Diddams, and S. B. Papp, Physical Review X 9 (2019a), 10.1103/PhysRevX.9.031023.
* Drake _et al._ (2019b) T. E. Drake, J. R. Stone, T. C. Briles, and S. B. Papp, Arxiv (2019b).
* Matsko _et al._ (2016) A. B. Matsko, W. Liang, A. A. Savchenkov, D. Eliyahu, and L. Maleki, Optics letters 41, 2907 (2016).
* Cole _et al._ (2017) D. C. Cole, E. S. Lamb, P. Del’Haye, S. A. Diddams, and S. B. Papp, Nature Photonics 11, 671 (2017).
* Balram _et al._ (2016) K. C. Balram, D. A. Westly, M. Davanço, K. E. Grutter, Q. Li, T. Michels, C. H. Ray, L. Yu, R. J. Kasica, C. B. Wallin, I. J.Gilbert, B. A. Bryce, G. Simelgor, J. Topolancik, N. Lobontiu, Y. Liu, P. Neuzil, V. Svatos, K. A. Dill, N. A. Bertrand, M. G. Metzler, GeraldLopez, D. A. Czaplewski, L. Ocola, K. A. Srinivasan, S. M. Stavis, V. A. Aksyuk, J. A. Liddle, S. Krylov, and B. R. Ilic, Journal of Research of the National Institute of Standards and Technology 121, 464 (2016).
* Bao and Yang (2014) C. Bao and C. Yang, Journal of the Optical Society of America B 31, 3074 (2014).
## Acknowledgments
Funding provided by the DARPA DODOS, DRINQS, and PIPES programs. We
acknowledge the Boulder Microfabrication Facility, where the devices were
fabricated. We thank Travis Briles and Jeff Chiles for a careful reading of
the manuscript. This work is a contribution of the U.S. Government and is not
subject to copyright. Mention of specific companies or trade names is for
scientific communication only, and does not constitute an endorsement by NIST.
## Supplementary materials
Materials and Methods
Design and Fabrication
Derivation of Modified LLE
Kerr Shift Calculation
Pulse Formation Dynamics
Figs. S1 to S2
References (37-38)
Supplemental Materials for: Spontaneous Pulse Formation in Edge-Less Photonic
Crystal Resonators
Su-Peng Yu1,2∗, Daniel C. Cole1,2, Hojoong Jung1,2, Gregory T. Moille3,4,
Kartik Srinivasan3,4, and Scott B. Papp1,2
1Time and Frequency Division, NIST, Boulder, CO 80305, USA
2 Department of Physics, University of Colorado, Boulder, CO, 80309, USA
3 Microsystems and Nanotechnology Division, NIST, Gaithersburg, MD 20899, USA
4 Joint Quantum Institute, NIST / University of Maryland, College Park, MD
20742, USA
## I Materials and Methods
Here we provide details for the optical setup used in this work, illustrated
in Figure S1. The light source is a C-band tunable external-cavity diode laser
(ECDL) with fiber-coupled output. The light goes through a fiber isolator and
then to a set of fiber polarization controllers. A 90% fused fiber coupler is
added between the laser and the polarization controller to tap the laser light
for a Mach-Zehnder interferometer and a wavelength meter (wavemeter, 40 MHz
resolution) for frequency measurements. We use the wavemeter to precisely
measure mode frequencies within the ECDL tuning range, enabling us to
characterize the dispersion of PhCRs. For comb generation experiments, the
laser is amplified using an erbium-doped fiber amplifier (EDFA), with a
tunable band-pass filter to suppress the amplified spontaneous emission of the
EDFA for cleaner spectra. For passive measurements, the EDFA and the filter
are bypassed. We send the light into the photonic chip using a lens fiber
mounted on a three-axis flexure stage, controlled by manual micrometers. The
damage threshold of our devices is typically above 1 W incident power. The
typical coupling efficiency between fiber and chip is $\sim$25% per facet,
limited by the mode mismatch between the air-clad waveguides and lens fibers.
The chip is placed on a copper block for thermal contact. The output is
collected with another lens fiber on a translation stage. For passive
measurements, we measure the outcoming power using an amplified photodetector,
plotting the transmission versus frequency on an oscilloscope.
During the comb generation experiments, we continuously monitor a portion of
the outcoupled light with an OSA. With photodetectors that have 150 MHz
bandwidth, we also monitor the pump-laser transmission of the resonator and
the comb power, which we obtain by filtering out the pump contribution. The comb
power signal provides critical information on the break-up of the flat background
and on soliton initiation, and for monitoring the intensity-noise level of
soliton states. To diagnose breather soliton oscillations and perform
intensity-noise measurements, we use a high-speed photodetector (1.6 GHz
bandwidth) and an electronic spectrum analyzer.
The comb-power channel, after filtering out the pump, is also used for the
beatnote measurements. We pass the comb light through two cascaded EO phase
modulators, driven far above $V_{\pi}$ to introduce multiple sidebands to span
the 1 THz frequency spacing between the comb lines, shown in main text Figure
4B. We choose the EO modulation frequency to be 28.000 GHz so the $\pm$17th
sidebands from adjacent comb lines will come into close vicinity. To improve
the signal to noise ratio for the beatnote measurements, we amplify the EO
output with a semiconductor optical amplifier and select the overlapping modes
using a tunable optical filter with a 50 GHz passband.
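As a consistency check (our arithmetic, not stated explicitly in the text): two comb lines separated by $f_{\text{rep}}\approx 1$ THz, each carrying $\pm 17^{\text{th}}$-order EO sidebands, produce a beat $f_{\text{beat}}=|f_{\text{rep}}-2\times 17\times f_{\text{EO}}|$. With $f_{\text{EO}}=28.000$ GHz the sidebands span $34\times 28$ GHz $=952$ GHz of the line spacing, leaving a photodetectable residual beat, and the expected tuning slope $|\partial f_{\text{beat}}/\partial f_{\text{EO}}|=2\times 17=34$ MHz/MHz agrees with the $34.2$ MHz/MHz quoted in the main text to within the measurement.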
Figure 1: Illustration of the optical testing setup.
## II Design and Fabrication
Here we describe the design process to create a PhCR. We calculate the
resonator dispersion and photonic bandgap using a finite-element method
program. The dispersion calculation yields the propagation constant
$k_{\text{eff}}$ for each RW, ring radius R, and frequency. The azimuthal mode
order m of the PhC is then calculated by the boundary condition
$k_{\text{eff}}\cdot 2\pi R=2m\pi$. The PhCR modulation is then introduced
with the periodicity $2\pi R/2m$ and sinusoidal peak-to-peak amplitude
$A_{PhC}$ on the interior of the ring. The sinusoidal shape is chosen as it
can be fabricated reliably to very small amplitude using lithography and
plasma etching. A bus waveguide approaches the smooth outer edge of the
resonators. The strength of the evanescent coupling between the resonator and
the bus is controlled by the gap between the two. On the edges of the chips
where the bus waveguides terminate, the waveguides are inversely tapered to
improve mode-matching to lens fibers. We generated the mask files using a
pattern-defining script and the CNST Nanolithography Toolbox Balram _et al._
(2016). Typically, we place up to 70 PhCRs and their bus waveguides per chip
in an evenly spaced array. Fine sweeps of $A_{PhC}$ and coupling gap are
included to achieve the correct mode shifts and near-critical coupling.
The fabrication procedure of our devices is as follows: We obtain 3-inch
silicon wafers with 380 $\mu$m thickness and 3 $\mu$m thermal silicon dioxide
on both sides. The tantala device layer is deposited onto the wafer to 570 nm
thickness by an external supplier. For lithography, we carry out a double
spin-coating of ZEP520A resist to reach a total resist thickness of 1 $\mu$m,
then expose the resist in electron beam lithography (EBL) operating at 100 kV.
All device patterns are defined on this EBL step. We develop the resist and
transfer the pattern using plasma etching with an inductively coupled plasma
etching tool, and a $CHF_{3}+CF_{4}+Ar$ chemistry. The ratio between $CHF_{3}$
and $CF_{4}$ is varied to achieve vertical sidewalls, while the Ar gas was
found to improve sidewall smoothness. The etch selectivity is sufficient to
clear the device layer with the resist thickness used. A dicing pattern is put
onto the wafer using UV lithography and the SPR-220 photoresist. We etch
through the bottom thermal oxide layer using a plasma etch with
$CHF_{3}+O_{2}$ chemistry. The resist is stripped using solvents, and the UV
lithography step is carried out again for the deep-RIE dicing using the
$C_{4}F_{8}+SF_{6}$ chemistry. We then clean the wafer of the fluoro-polymer
deposited during the RIE steps using DuPont EKC265 solvent, followed by a
Cyantek Nanostrip soak for final cleaning. The chips are then mechanically
removed from the wafer and are ready for testing.
## III Derivation of Modified LLE
Here we provide a modified LLE to accommodate the influences of the shifted
pump mode. Importantly, the form of the modified LLE admits the steady-state
solutions of the LLE, with a effective pump field reflecting the influence of
the shifted mode. We begin with the LLE in the modal basis,
$\partial_{\tau}a_{\mu}=-(1+i\alpha)a_{\mu}+\frac{i}{2}\beta\mu^{2}a_{\mu}+\delta_{\mu 0}F+\hat{\mathcal{F}}\{i|\psi(\theta)|^{2}\psi(\theta)\}$ (1)
where $a_{\mu}$ is the field amplitude in mode $\mu$,
$\beta=-\frac{2}{\kappa}D_{2}$ stands for the second-order dispersion
normalized to linewidth $\kappa$, $\delta_{\mu 0}$ the Kronecker delta
function $\delta_{00}=1$, zero otherwise, and $\hat{\mathcal{F}}$ the Fourier
transform. We generalize the equation to arbitrary dispersion profiles by
identifying
$\frac{i}{2}\beta\mu^{2}=-\frac{2i}{\kappa}\cdot\frac{1}{2}D_{2}\mu^{2}$ to be
$-\frac{2i}{\kappa}D_{\text{int}}(\mu)$, where $D_{\text{int}}(\mu)$ is the
integrated dispersion. We implement the pump mode shift with an additional
term to the total dispersion:
$D_{\text{int}}^{Shifted}(\mu)=D_{\text{int}}^{Base}(\mu)+\Xi(1-\delta_{\mu 0})$ (2)
where a constant shift of strength $\Xi$ is applied to all modes except zero,
so that the zero of detuning $\alpha$ remains defined on the pump mode. A red-
shift of the pump mode is associated with $\Xi>0$. The equation becomes:
$\partial_{\tau}a_{\mu}=-(1+i\alpha)a_{\mu}+\delta_{\mu 0}F+\hat{\mathcal{F}}\{i|\psi(\theta)|^{2}\psi(\theta)\}-\frac{2i}{\kappa}\left(D_{\text{int}}^{Base}(\mu)+\Xi(1-\delta_{\mu 0})\right)a_{\mu}$ (3)
We now carry out the inverse Fourier transform, under the normalization that
$\hat{\mathcal{F}}^{-1}(\delta_{\mu 0})=1$, and that
$\hat{\mathcal{F}}^{-1}(\delta_{\mu 0}\cdot a_{\mu})=\frac{1}{2\pi}\oint\psi(\theta)d\theta:=\bar{\psi}$,
to obtain the pump-shifted LLE in the time domain:
$\partial_{\tau}\psi(\theta)=-\left(1+i(\alpha+\epsilon)\right)\psi(\theta)-\frac{i}{2}\beta\partial_{\theta}^{2}\psi(\theta)+i|\psi(\theta)|^{2}\psi(\theta)+F+i\epsilon\bar{\psi}$ (4)
where $\epsilon=\frac{2\Xi}{\kappa}$ is the normalized mode shift. We make two
observations from the shifted LLE formula: First, in the case of an amplitude
that is a constant in $\theta$, $\bar{\psi}=\psi$ and the shift terms cancel,
indicating that the resonator responds identically to the unmodified LLE
prior to pattern generation. Second, assuming a time-stationary pattern $\psi$
is formed, the $\bar{\psi}$ term is constant in the resonator, and the
equation can be interpreted as an LLE with modified parameters:
$\alpha^{\prime}=\alpha+\epsilon\qquad F^{\prime}=|F+i\epsilon\bar{\psi}|$
This is to say that any stationary-state solution $\psi$ of the modified LLE with
parameters $F,\alpha$ also satisfies the LLE with parameters
$F^{\prime},\alpha^{\prime}$; the latter includes Turing patterns and Kerr
solitons. This equivalence enables the pump-shifted LLE to produce Kerr
solitons.
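To make the dynamics of Eq. (4) concrete, the following is a minimal split-step integrator for the pump-shifted LLE. It is a sketch only: the linear, detuning, and dispersion terms are applied exactly per mode in the Fourier basis, the Kerr, pump, and $i\epsilon\bar{\psi}$ terms with an explicit Euler step in $\theta$ space; all parameter values ($\alpha$, $F$, $\epsilon$, $\beta$, grid size, step count) are illustrative choices, not those of the devices in this work.

```python
# Split-step integration of the pump-shifted LLE, Eq. (4).
import numpy as np

M = 256                                   # number of modes / theta samples
mu = np.fft.fftfreq(M, d=1.0 / M)         # integer mode indices mu
alpha_d, F, eps_shift, beta = 4.0, 2.6, 4.0, -0.02   # illustrative values
dt, steps = 1e-3, 30000

# Exact linear half-step per mode: -(1 + i(alpha+eps)) + (i/2) beta mu^2
Lmu = -(1 + 1j * (alpha_d + eps_shift)) + 0.5j * beta * mu**2
half = np.exp(Lmu * dt / 2)

rng = np.random.default_rng(4)
psi = 1e-3 * (rng.standard_normal(M) + 1j * rng.standard_normal(M))  # vacuum-like seed

for s in range(steps):
    psi = np.fft.ifft(np.fft.fft(psi) * half)                        # linear half-step
    psi = psi + dt * (1j * np.abs(psi)**2 * psi + F                  # Kerr + pump
                      + 1j * eps_shift * psi.mean())                 # i*eps*psi_bar
    psi = np.fft.ifft(np.fft.fft(psi) * half)                        # linear half-step
    if s % 5000 == 0:
        a = np.fft.fft(psi) / M
        comb = np.sum(np.abs(a)**2) - np.abs(a[0])**2  # comb power, pump line removed
        print(f"tau={s*dt:6.1f}  comb power={comb:.4f}  peak |psi|={np.abs(psi).max():.3f}")
```

Monitoring the comb power during such a run mirrors the experimental comb-power channel; whether and how a pulse forms depends on the chosen parameters.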
## IV Kerr Shift Calculation
We present an interpretation of the Kerr shift term in the modal basis. Under
this interpretation, each resonator mode $\mu$ behaves as a nonlinear
harmonic resonator, therefore giving physical meanings to the hot-resonator
modes in the main text. We begin by recalling the case of a single-mode
nonlinear oscillator:
$\partial_{t}a=-i\omega^{\prime}a-\gamma_{0}a+Fe^{i\omega t}$ (5)
where the resonance frequency $\omega^{\prime}=\omega_{0}-g|a|^{2}$ depends on
the field amplitude $|a|^{2}$ through nonlinear coefficient $g$. We identify
this cubic term as the Kerr term $ig|a|^{2}a$ which results from the resonance
frequency change induced by the field amplitude. In the case of the LLE, we
start with the Kerr term instead, and assign an inferred modal frequency for
each field component $a_{\mu}$ by casting the time-evolution of the mode in
the form of a harmonic oscillator:
$\partial_{\tau}a_{\mu}=-i\alpha a_{\mu}+i\delta_{\mu}\cdot a_{\mu}+g_{\mu}\cdot a_{\mu}+\delta_{\mu 0}F$ (6)
where the detuning $\alpha$ was chosen so the pump field $F$ is not
time-dependent in the rotating frame, and $\delta_{\mu}$ and $g_{\mu}$ can depend
on the in-resonator field profile. This form enables us to identify the modal
frequency and gain from the instantaneous rate of change of the phase and
amplitude induced by the Kerr effect. We calculate these rates by Fourier
transforming the Kerr term $|\psi|^{2}\psi$ for a given field profile
$\psi(\theta)$:
$\partial_{\tau}a_{\mu}|_{Kerr}=\hat{\mathcal{F}}\{i|\psi(\theta)|^{2}\psi(\theta)\},_{\mu}$
where the subscript $\mu$ for the Fourier transform specifies the $\mu$-th
component. Casting this into the harmonic oscillator form, we get:
$\delta_{\mu}^{Kerr}=\mathcal{R}\textit{e}\left(\hat{\mathcal{F}}\{|\psi(\theta)|^{2}\psi(\theta)\},_{\mu}/a_{\mu}\right)$ (7)
$g_{\mu}^{Kerr}=-\mathcal{I}\textit{m}\left(\hat{\mathcal{F}}\{|\psi(\theta)|^{2}\psi(\theta)\},_{\mu}/a_{\mu}\right)$ (8)
where $\delta_{\mu}^{Kerr}$ and $g_{\mu}^{Kerr}$ are the modal Kerr shift and
induced gain on mode $\mu$.
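As an illustration, Eqs. (7)-(8) can be evaluated numerically by direct transcription: sample $\psi(\theta)$, Fourier transform $|\psi|^{2}\psi$, and divide mode by mode. The sketch below does this for an arbitrary test profile (a hyperbolic-secant pulse on a flat background, our choice, not a solution of the LLE):

```python
# Modal Kerr shift delta_mu and induced gain g_mu from a field profile psi(theta),
# per Eqs. (7)-(8). The test profile below is illustrative only.
import numpy as np

M = 256
theta = 2 * np.pi * np.arange(M) / M
psi = 0.2 + 2.0 / np.cosh(8 * (theta - np.pi))   # illustrative pulse profile

a = np.fft.fft(psi) / M                          # modal amplitudes a_mu
kerr = np.fft.fft(np.abs(psi)**2 * psi) / M      # F{|psi|^2 psi}_mu

mask = np.abs(a) > 1e-8                          # skip numerically empty modes
delta_mu = np.real(kerr[mask] / a[mask])         # Eq. (7)
g_mu = -np.imag(kerr[mask] / a[mask])            # Eq. (8)
print("pump-mode Kerr shift delta_0 =", delta_mu[0])
```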
An example for this formula is the modal frequency behavior near the Turing
pattern onset threshold. In order to extract the frequency of the modes, we
assume the modal fields in the resonator take the form:
$\psi(\theta)=a_{0}+\eta\cdot u_{\mu^{\prime}}(\theta)$
where $a_{0}$ is the pump mode amplitude, a constant in the resonator, and
$a_{\mu^{\prime}}=\eta$ an infinitesimal field amplitude in the
$\mu^{\prime}$-th mode, $u_{\mu^{\prime}}(\theta)=\exp(i\mu^{\prime}\theta)$ is
the basis function in the $\mu^{\prime}$-th mode. To obtain the pump mode
shift, we evaluate the Kerr shift term $|\psi|^{2}\psi$ to zeroth order in
$\eta$. Since $a_{0}$ is a constant over $\theta$, the form trivially gives:
$\delta_{0}^{Kerr}=\mathcal{R}\textit{e}(\hat{\mathcal{F}}\{|a_{0}|^{2}a_{0}\},_{0}/a_{0})=|a_{0}|^{2}$ (9)
which is just the pump mode intensity. To get the shift for the
$\mu^{\prime}$-th mode, we evaluate the Kerr shift term to first order in
$\eta$:
$|\psi|^{2}\psi=(|a_{0}|^{2}+\eta\cdot(a_{0}u_{\mu^{\prime}}^{*}+a_{0}^{*}u_{\mu^{\prime}})+\mathcal{O}(\eta^{2}))\cdot(a_{0}+\eta u_{\mu^{\prime}})$
$=|a_{0}|^{2}a_{0}+2|a_{0}|^{2}\cdot\eta u_{\mu^{\prime}}+a_{0}^{2}\cdot\eta u_{\mu^{\prime}}^{*}+\mathcal{O}(\eta^{2})$
where $u_{\mu^{\prime}}^{*}=u_{-\mu^{\prime}}$. Fourier transforming this
expression and taking the $\mu^{\prime}$-th component, only the term with
$u_{\mu^{\prime}}$ is non-vanishing. We get:
$\delta_{\mu^{\prime}}^{Kerr}=\mathcal{R}\textit{e}(2|a_{0}|^{2}\cdot\eta/\eta)=2|a_{0}|^{2}$ (10)
which is twice the shift compared to the pump mode, in agreement with the form
in Herr _et al._ (2015).
We are now equipped with the theoretical tools to study the Kerr balancing for
the soliton states. We generate the soliton field profile using the LLE for
sets of parameters $(F,\alpha,\epsilon)$, then calculate the dispersion
balancing conditions for each mode, namely
$-D_{\text{int}}(\mu)+\delta_{\mu}$. We find the term equals $\alpha$ for all
non-pump modes, while the gain terms equal 1 to balance loss. The balancing
effect enables a stationary, time-independent waveform in the reference frame
of the LLE. In the case $\epsilon=0$, the balance is not achieved for the pump
mode, but the mismatch is compensated by the pump field $F$ in a manner
similar to the forced harmonic oscillator. To study the Kerr mismatch in
response to the shifted pump mode, we carry out LLE simulations to obtain the
field profiles of the stable pulse states in a resonator for some
$\epsilon>0$, and calculate the Kerr shifts for each mode. Kerr shift and
$D_{\text{int}}$ plots for the cases with $\epsilon$=0 and $\epsilon$=4.2 are
shown in Figure S2A and B. The blue dots show the sum of the two, balancing to
the horizontal lines at the value of $\alpha$, except for the pump mode, in
agreement with Ref. Bao and Yang (2014). A pronounced Kerr mismatch
$\xi_{Kerr}$ is observed for the $\epsilon$=0 case, but is suppressed in the
$\epsilon$=4.2 case. We carry out the calculations for intermediate values of
$\epsilon$, shown in Fig. S2C. Increasing $\epsilon$ results in a
gradual reduction of the mismatch $\xi_{Kerr}$, down to approximately one quarter
of a linewidth. Figure S2C also shows the $\alpha$ ranges where the soliton
state is stable for each $\epsilon$. We observe that for the shifted-pump cases
the soliton is stable at detunings within the single-stability range for the
given $F$ value, while in the $\epsilon=0$ case the soliton is only stable
in the bistability range (shaded area in Fig. S2C). This leads to the
difference that the flat state is stable on the lower branch of the
bistability in the $\epsilon=0$ case, versus the spontaneous generation of
patterns from the flat state in the shifted-pump case. Initiating the
$\epsilon=0$ case with a pulse in the single-stability range results in the
mode reverting spontaneously to multiple-pulse Turing patterns. This suggests
that the mode shift modifies the phase diagram Godey _et al._ (2014) to
enable stable soliton states in ranges where the flat background amplitude is
unstable.
Figure 2: The balancing of Kerr shift (red) and dispersion (black) for the
(A) $\epsilon=0$ and (B) $\epsilon=4.2$ soliton pulses, where the sum of the two is magnified and shown
in blue. (C) Calculated Kerr mismatch for various mode shift values and pump
detuning. (D) Simulated time traces and (E) intensity plots during the
spontaneous pulse generation process.
## V Pulse Formation Dynamics
We show the time-evolution of the spontaneous generation of a single pulse in
Fig. S2D-E. Here the pulse arises spontaneously from the flat state with
constant pump $F$ and detuning $\alpha$, seeded only by the vacuum
fluctuation. The LLE simulation of pulse generation shows several transient
states the resonator goes through to arrive at the DKS state. In this
simulation, the resonator is initiated with zero amplitude, and is energized
with a fixed pump field F at fixed detuning $\alpha$ for some time until the
pulse state stabilizes. We identify four transient states in the pulse
generation, shown in Fig. S2E, starting from its bottom panel: First, the flat
amplitude energizes without producing comb power, until it is sufficiently
large that the flat state becomes unstable. Unlike the conventional resonator
where the FWM condition can be reached by the large mode density near the pump
mode to form Turing patterns of order $\mu^{\prime}$ determined by dispersion,
the phase matching is prohibited by the shifted pump mode. Second, with the
shifted pump mode, the PhCR instead makes a one-lobe sinusoidal pattern once
the flat amplitude is sufficiently high. This can be intuitively understood by
drawing a quadratic curve across the three modes $\mu=0,\mu^{\prime}=\pm 1$
for the PhCR mode structure. The high positive curvature of this curve creates
a strong local anomalous dispersion, causing a transient Turing pattern of
order $\mu^{\prime}=1$ to form. This transient pattern breaks the
$\theta$-symmetry in the resonator, seeding the resonator for a single pulse.
Third, the one-lobe pattern begins to sharpen. This is because unlike a true
high-anomalous-dispersion resonator where $\mu^{\prime}>\pm 1$ modes are
FWM-mismatched by strong dispersion, the $\mu^{\prime}>\pm 1$ modes of the PhCR
follow the base dispersion, and are therefore sufficiently phase-matched and can
be energized. The energizing of the $\mu^{\prime}>\pm 1$ modes leads to sharpening
of the peak in the time domain and the broadening of its spectrum. Finally, the
pulse stabilizes as the transient components decay away. Note that at this
stage the flat amplitude background is significantly lower than prior to the
pulse formation, a curious result from the modified LLE changing its effective
pump field $F^{\prime}$ in response to the existing pulse in the resonator.
The reduced flat amplitude will no longer spontaneously generate patterns,
preventing further pulse generation beyond the first pulse. This set of
transient steps eventually results in the deterministic placement of one broad-band
pulse in the resonator, shown in the top panel of Fig. S2E.
## Disclaimer
This work is a contribution of the U.S. Government and is not subject to
copyright. Mention of specific companies or trade names is for scientific
communication only, and does not constitute an endorsement by NIST.
|
2024-09-04T02:54:55.353168 | 2020-02-28T01:46:25 | 2002.12508 | {
"authors": "Lin Lin and Yu Tong",
"full_text_license": null,
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"provenance": "arxiv-papers-0000.json.gz:25932",
"submitter": "Yu Tong",
"url": "https://arxiv.org/abs/2002.12508"
} | arxiv-papers |
Preparing the ground state of a given Hamiltonian and estimating its ground energy are important but computationally hard tasks. However, given some additional information, these problems can be solved efficiently on a quantum computer. We assume that an initial state with non-trivial overlap with the ground state can be efficiently prepared, and the spectral gap between the ground energy and the first excited energy is bounded from below. With these assumptions we design an algorithm that prepares the ground state when an upper bound of the ground energy is known, whose runtime has a logarithmic dependence on the inverse error.
When such an upper bound is not known, we propose a hybrid quantum-classical algorithm to estimate the ground energy, where the dependence of the number of queries to the initial state on the desired precision is exponentially improved compared to the current state-of-the-art algorithm proposed in [Ge et al. 2019]. These two algorithms can then be combined to prepare a ground state without knowing an upper bound of the ground energy. We also prove that our algorithms reach the complexity lower bounds by applying it to the unstructured search problem and the quantum approximate counting problem.
§ INTRODUCTION
Estimating ground energy and obtaining information on the ground state of a given quantum Hamiltonian are of immense importance in condensed matter physics, quantum chemistry, and quantum information.
Classical methods suffer from the exponential growth of the size of the Hilbert space, and therefore quantum computers are expected to be used to overcome this difficulty. However, even for a quantum computer, estimating the ground energy is a hard problem: deciding whether the smallest eigenvalue of a generic local Hamiltonian is greater than $b$ or smaller than $a$ for some $a<b$ is QMA-complete [Kitaev et al., 2002, Kempe et al., 2006, Oliveira and Terhal, 2005, Aharonov et al., 2009].
Therefore to make the problem efficiently solvable we need more assumptions. We denote the Hamiltonian we are dealing with by $H$, and consider its spectral decomposition $H=\sum_k\lambda_k\ket{\psi_k}\bra{\psi_k}$ where $\lambda_{k}\leq\lambda_{k+1}$. The key assumption is that we have an initial state $\ket{\phi_0}$ which can be efficiently prepared by an oracle $U_I$, and has some overlap with the ground state $\ket{\psi_0}$ lower bounded by $\gamma$. This is a reasonable assumption in many practical scenarios. For instance, even for strongly-correlated molecules in quantum chemistry, there is often a considerable overlap between the true ground state and the Hartree-Fock state.
The latter can be trivially prepared in the molecular orbital basis, and efficiently prepared in other basis [Kivlichan et al., 2018].
For the moment we also assume the spectral gap is bounded from below: $\lambda_1-\lambda_0\geq\Delta$.
With these assumptions we can already use phase estimation coupled with amplitude amplification [Brassard et al., 2002] to prepare the ground state, if we further know the ground energy to high precision. To our knowledge, the most comprehensive work on ground state preparation and ground state energy estimation was done by Ge [Ge et al., 2019], which provided detailed complexity estimates for well-known methods such as phase estimation, and proposed new methods to be discussed below. As analyzed in <cit.>, in order to prepare the ground state to fidelity[In this work, the fidelity between states $\ket{x},\ket{y}$ is defined to be $\abs{\braket{x|y}}$.] $1-\epsilon$, the runtime of the controlled-time-evolution of the Hamiltonian is $\wt{\Or}(1/(\gamma^2\Delta\epsilon))$ [In this work the notation $\wt{\Or}(f)$ means $\Or(f\poly\log(f))$ unless otherwise stated.], and the number of queries to $U_I$ is $\wt{\Or}(1/\gamma)$, assuming the spectral norm of $H$ is bounded by a constant.
This is however far from optimal. Poulin and Wocjan [Poulin and Wocjan, 2009] proposed a method that, by executing the inverse of phase estimation to filter out the unwanted components in the initial state, can prepare a state whose energy is in a certain given range. A different choice of parameters yields a way to prepare the ground state to fidelity $1-\epsilon$ by running the controlled-time-evolution of the Hamiltonian with $\wt{\Or}(1/(\gamma\Delta)\log(1/\epsilon))$ runtime, and using $\wt{\Or}(1/\gamma)$ queries to $U_I$ <cit.>.
A key difference between ground state preparation and Hamiltonian simulation, where significant progress has been made in recent years [Lloyd, 1996, Berry et al., 2015, Berry et al., 2015, Low and Chuang, 2017, Low and Wiebe, 2018, Low and Chuang, 2019, Childs et al., 2019], is its non-unitary nature. The recent development of linear combination of unitaries (LCU) method [Berry et al., 2015, Childs et al., 2017] provided a versatile tool to apply non-unitary operators.
Using LCU, Ge proposed a new method to filter the initial state by applying a linear combination of time-evolutions of different time length [Ge et al., 2019], which achieves the same complexity, up to logarithmic factors, as the modified version of Poulin and Wocjan's method discussed above.
All of the above methods prepare the ground state assuming the ground energy is known to high precision. When the ground energy is unknown, Ge proposed a method to estimate the ground energy using a search method called minimum label finding [Ge et al., 2019]. This method can estimate the ground energy to precision $h$ by running the controlled-time-evolution of the Hamiltonian for $\wt{\Or}(1/(\gamma h^{3/2}))$ [In [Ge et al., 2019], the meaning of the notation $\wt{\Or}(\cdot)$ is different from that in our work. In particular, $\wt{\Or}(\cdot)$ in [Ge et al., 2019] hides all factors that are poly-logarithmic in $1/h$, $1/\epsilon$, $1/\gamma$, and $1/\Delta$, regardless of what is inside the parentheses. We preserve their notation when citing their results since these factors do not play an important role when comparing the complexities of our methods.], and querying $U_I$ $\wt{\Or}(1/(\gamma \sqrt{h}))$ times. It is worth noting that their method requires $h=\wt{\Or}(\Delta)$, and therefore is very expensive when the gap is extremely small. To prepare the ground state when the ground energy is not known, Ge proposed to first estimate the ground energy and then apply the LCU approach.
In recent years several hybrid quantum-classical algorithms have been developed to estimate the ground energy, or to prepare the ground state, or both. The variational quantum eigenvalue solver (VQE) [Peruzzo et al., 2014] has gained much attention recently because of its low requirement for circuit depth and its variational structure. However the exact complexity of this algorithm is not clear because it relies on a proper choice of ansatz and needs to solve a non-convex optimization problem. Other such algorithms include quantum imaginary-time evolution, quantum Lanczos [Motta et al., 2019], and quantum filter diagonalization [Parrish and McMahon, 2019, Stair et al., 2019]. Their complexities are either quasi-polynomial or unknown.
The recent development of block-encoding [Berry et al., 2015] and quantum signal processing (QSP) [Low et al., 2016, Low and Chuang, 2017, Gilyén et al., 2019] enables us to apply non-unitary operators, specifically polynomials of a block-encoded matrix efficiently. It uses a minimal number of ancilla qubits, and avoids the Hamiltonian simulation. These will be the basic tools of this work, of which we give a brief introduction below.
Block-encoding is a powerful tool to represent a non-unitary matrix in the quantum circuit. A matrix $A\in\CC^{N\times N}$ where $N=2^n$ can be encoded in the upper-left corner of an $(m+n)$-qubit unitary matrix if
\begin{equation}
\norm{A-\alpha(\bra{0^m}\otimes I) U (\ket{0^m}\otimes I)}_2\leq \epsilon.
\label{eqn:block_encoding}
\end{equation}
In this case we say $U$ is an $(\alpha,m,\epsilon)$-block-encoding of $A$.
Many matrices of practical interest can be efficiently block-encoded. In particular, we will discuss the block-encoding of Hamiltonians of physical systems in Section <ref>.
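As a minimal illustration of this definition (our sketch; the one-ancilla dilation below is a standard construction, not one specific to this work), a Hermitian $A$ with $\norm{A}_2\leq\alpha$ can be embedded as the upper-left block of a unitary, giving an $(\alpha,1,0)$-block-encoding:

```python
# An (alpha, 1, 0)-block-encoding via the standard dilation
# U = [[A/alpha, sqrt(I-(A/alpha)^2)], [sqrt(I-(A/alpha)^2), -A/alpha]].
# The random Hermitian A and the choice of alpha are purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 3                                    # system qubits, N = 2^n
N = 2**n
B = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
A = (B + B.conj().T) / 2                 # random Hermitian test matrix
alpha = 1.1 * np.linalg.norm(A, 2)       # any alpha >= ||A||_2 works

lam, V = np.linalg.eigh(A / alpha)
S = (V * np.sqrt(1 - lam**2)) @ V.conj().T       # sqrt(I - (A/alpha)^2)
U = np.block([[A / alpha, S], [S, -A / alpha]])  # one ancilla qubit, m = 1

print(np.allclose(U.conj().T @ U, np.eye(2 * N)))   # U is unitary
print(np.allclose(alpha * U[:N, :N], A))            # top-left block is A/alpha
```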
Using the block-encoding of a Hermitian $A$, QSP enables us to construct block-encodings for a large class of polynomial eigenvalue transformations of $A$. We pay special attention to even or odd polynomials with real coefficients, because we only apply this type of polynomial eigenvalue transformation in this work. Also for simplicity we assume the block-encoding is done without error. <cit.> enables us to perform eigenvalue transformation of $A$ for polynomials of definite parity (even or odd).
Let $U$ be an $(\alpha,m,0)$-block-encoding of a Hermitian matrix $A$. Let $P\in\RR[x]$ be a degree-$\ell$ even or odd real polynomial and $\abs{P(x)}\le 1$ for any $x\in[-1,1]$. Then there exists an $(1, m+1,0)$-block-encoding $\wt{U}$ of $P(A/\alpha)$ using $\ell$ queries of $U$, $U^{\dagger}$, and $\Or((m+1)\ell)$ other primitive quantum gates.
<cit.> provides a singular value transformation for any square matrix $A$ and polynomials of definite parity. When $A$ is a Hermitian matrix, the eigenvalue transformation is the same as the singular value transformation <cit.>. A related statement in the same paper is <cit.>, which describes the eigenvalue transformation of a Hermitian matrix for an arbitrary polynomial, by means of a linear combination of two polynomials of even and odd parities respectively.
Constructing the quantum circuit for QSP requires computing a sequence of phase factors beforehand, and there are classical algorithms capable of doing this [Haah, 2019]. Some recent progress has been made to efficiently compute phase factors for high-degree polynomials to high precision [Chao et al., 2020, Dong et al., 2020]. In this work, unless otherwise specified, we assume the phase factors are computed without error.
Using the tools introduced above, we assume the Hamiltonian $H$ is given in its $(\alpha,m,0)$-block-encoding $U_H$. This, together with $U_I$, constitutes the two oracles we assume we are given in this work.
QSP enables us to filter eigenstates using fewer qubits than LCU. In [Lin and Tong, 2020] a filtering method named optimal eigenstate filtering is introduced. It is based on an explicitly constructed optimal minimax polynomial, and achieves the same asymptotic complexity, ignoring poly-logarithmic factors, as the method by Ge when applied to the ground state preparation problem if the ground energy is known exactly.
In this work we first develop a filtering method that filters out all eigenstates corresponding to eigenvalues above a certain threshold. This filtering method enables us to prepare the ground state of a Hamiltonian with spectral gap bounded away from zero when only an upper bound of the ground energy is known, unlike in the filtering methods discussed above, which all require either the exact value or a high-precision estimate of the ground energy. Our filtering method has an exponentially improved dependence on precision compared to Kitaev's phase estimation [Kitaev, 1995] and uses fewer qubits compared to other variants of the phase estimation algorithm [Poulin and Wocjan, 2009, Ge et al., 2019]. This filtering method, applied to the initial state given in our assumption, also enables us to tell whether the ground energy is smaller than $a$ or greater than $b$ for some $b>a$, with high probability. Therefore a binary search yields a ground energy estimate with success probability arbitrarily close to one. We then combine the filtering method and ground energy estimation to prepare the ground state when no non-trivial bound for the ground energy is known. A comparison of the query complexities between the methods in our work and the corresponding ones in [Ge et al., 2019], which to our best knowledge achieve state-of-the-art query complexities, is shown in Table <ref>.
| Resource | Method | Preparation (bound known) | Ground energy | Preparation (bound unknown) |
| $U_H$ | This work | $\Or\left(\frac{\alpha}{\gamma\Delta}\log(\frac{1}{\epsilon})\right)$ | $\wt{\Or}\left(\frac{\alpha}{\gamma h}\log(\frac{1}{\vartheta})\right)$ | $\wt{\Or}\left(\frac{\alpha}{\gamma \Delta}\log(\frac{1}{\vartheta\epsilon})\right)$ |
| $U_H$ | Ge | $\wt{\Or}\left(\frac{\alpha}{\gamma\Delta}\right)$ | $\wt{\Or}\left(\frac{\alpha^{3/2}}{\gamma h^{3/2}}\right)$ | $\wt{\Or}\left(\frac{\alpha^{3/2}}{\gamma \Delta^{3/2}}\right)$ |
| $U_I$ | This work | $\Or\left(\frac{1}{\gamma}\right)$ | $\wt{\Or}\left(\frac{1}{\gamma}\log(\frac{\alpha}{h})\log(\frac{1}{\vartheta})\right)$ | $\wt{\Or}\left(\frac{1}{\gamma}\log(\frac{\alpha}{\Delta})\log(\frac{1}{\vartheta})\right)$ |
| $U_I$ | Ge | $\wt{\Or}\left(\frac{1}{\gamma}\right)$ | $\wt{\Or}\left(\frac{1}{\gamma}\sqrt{\frac{\alpha}{h}}\right)$ | $\wt{\Or}\left(\frac{1}{\gamma}\sqrt{\frac{\alpha}{\Delta}}\right)$ |
| Extra qubits | This work | $\Or(1)$ | $\Or(\log(\frac{1}{\gamma}))$ | $\Or(\log(\frac{1}{\gamma}))$ |
| Extra qubits | Ge | $\Or(\log(\frac{1}{\Delta}\log(\frac{1}{\epsilon})))$ | $\Or(\log(\frac{1}{h}))$ | $\Or(\log(\frac{1}{\Delta}\log(\frac{1}{\epsilon})))$ |
The query complexities of algorithms and number of extra qubits used in our work and the corresponding ones by Ge in [Ge et al., 2019]. $\alpha,\gamma,\Delta,\epsilon$ are the same as above and $h$ is the precision of the ground energy estimate. By extra qubits we mean the ancilla qubits that are not part of the block-encoding. In this work the ground energy estimation algorithm and the algorithm to prepare the ground state without a bound have success probabilities lower bounded by $1-\vartheta$, while in [Ge et al., 2019] the corresponding algorithms have constant success probabilities. The complexities for the algorithms by Ge are estimated assuming Hamiltonian simulation is done as in [Low and Chuang, 2019]. The usage of the notation $\wt{\Or}$ in [Ge et al., 2019] is different from that in our work, as explained in footnote <ref>.
From the query complexities in Table <ref> we can see that our method for ground energy estimation achieves an exponential speedup in the dependence of the number of queries to $U_I$ on the ground energy estimate precision $h$, and a speedup by a factor of $1/\sqrt{h}$ in the dependence of the number of queries to $U_H$ on the precision. Moreover, Ge assumes in their work that the precision $h=\wt{\Or}(\Delta)$, while we make no such assumption. This gives our algorithm an even greater advantage when the gap is much smaller than the desired precision, which becomes useful when preparing a low-energy state (not necessarily a ground state). Because Ge used a slightly different query model, namely access to time evolution rather than to a block-encoding, when computing the complexities for the methods in [Ge et al., 2019] in Table <ref> we assume the Hamiltonian simulation is done with $\Or(\alpha t)$ queries to $U_H$, with negligible error. This can be achieved using the Hamiltonian simulation in [Low and Chuang, 2019], and cannot be asymptotically improved because of the complexity lower bound proved in [Berry et al., 2015]. Therefore the comparison here is fair even though our work makes use of a different oracle. Also, [Ge et al., 2019] assumed a scaled Hamiltonian $H$ with its spectrum contained in $[0,1]$. We do not make such an assumption, and therefore the $\alpha$ factor should be properly taken into account, as is done in Table <ref>.
The rest of the paper is organized as follows. In Section <ref> we use QSP to construct block-encodings of reflectors and projectors associated with eigen-subspaces. In Section <ref> we use the projectors to prepare ground state when an upper bound of the ground energy is given. In Section <ref> we introduce the ground energy estimation algorithm, a hybrid quantum-classical algorithm based on the binary search, and use it to prepare the ground state when no ground energy upper bound is known . In Section <ref> we show the dependence of our query complexities on the overlap and gap is essentially optimal by considering the unstructured search problem. We also show the dependence of our ground energy estimation algorithm on the precision is nearly optimal by considering the quantum approximate counting problem. In Section <ref> we use our methods to prepare low-energy states when the spectral lower gap is unknown, or even when the ground state is degenerate. In Section <ref> we discuss practical issues and future research directions.
§ BLOCK-ENCODING OF REFLECTOR AND PROJECTOR
A key component in our method is a polynomial approximation of the sign function in the domain $[-1,-\delta]\cup[\delta,1]$. The error scaling of the best polynomial approximation has been studied in [Eremenko and Yuditskii, 2007], and an explicit construction of a polynomial with the same error scaling is provided in [Low and Chuang, 2017] based on the approximation of the $\mathrm{erf}$ function. We quote <cit.> here with some small modification:
For all $0<\delta<1$, $0<\epsilon<1$, there exists an efficiently computable odd polynomial $S(\cdot;\delta,\epsilon)\in\RR[x]$ of degree $\ell=\Or(\frac{1}{\delta}\log(\frac{1}{\epsilon}))$, such that
(1) for all $x\in [-1,1]$, $|S(x;\delta,\epsilon)|\leq 1$, and
(2) for all $x\in[-1,-\delta]\cup[\delta,1]$, $|S(x;\delta,\epsilon)-\mathrm{sign}(x)|\leq \epsilon$.
Compared to <cit.> we have rescaled the interval from $[-2,2]$ to $[-1,1]$, and this does not result in any substantial change.
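Numerically, a polynomial with the guarantees of this lemma can be produced by the erf-based route cited above: pick $k$ so that $\mathrm{erf}(kx)$ is $\epsilon$-close to $\mathrm{sign}(x)$ for $|x|\geq\delta$, then truncate its Chebyshev expansion. The sketch below checks the resulting error; the degree heuristic and constants are our ad hoc choices, not the optimized construction of <cit.>.

```python
# Approximate sign(x) on [-1,-delta] u [delta,1] by a Chebyshev expansion of
# erf(k*x), with k set so that erf(k*delta) = 1 - eps.
import numpy as np
from numpy.polynomial import chebyshev as C
from scipy.special import erf, erfinv

delta, eps = 0.1, 1e-4
k = erfinv(1 - eps) / delta
deg = int(np.ceil(4 * k))                       # ad hoc degree ~ O((1/delta) log(1/eps))

nodes = np.cos(np.pi * (np.arange(2 * deg) + 0.5) / (2 * deg))  # Chebyshev nodes
coef = C.chebfit(nodes, erf(k * nodes), deg)
coef[::2] = 0.0                                  # enforce odd parity exactly

xs = np.linspace(delta, 1, 2000)
print("degree:", deg, " max |S - 1| on [delta, 1]:",
      np.abs(C.chebval(xs, coef) - 1).max())     # ~eps, as the lemma promises
```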
When we have the $(\alpha,m,0)$-block-encoding of a Hermitian matrix $H=\sum_{k}\lambda_k\ket{\psi_k}\bra{\psi_k}\in\CC^{N\times N}$, $N=2^n$, $\lambda_k\leq \lambda_{k+1}$, we can construct an $(\alpha+|\mu|,m+1,0)$-block-encoding of the matrix $H-\mu I$ using <cit.> for any $\mu\in\RR$. Then using QSP, by Theorem <ref>, we can obtain an $(1,m+2,0)$-block-encoding of $-S(\frac{H-\mu I}{\alpha+|\mu|};\delta,\epsilon)$ for any $\delta$ and $\epsilon$. If we assume further that $\Delta/2 \leq \min_k |\mu-\lambda_k|$, then we let $\delta=\frac{\Delta}{4\alpha}$, and by Lemma <ref> all the eigenvalues of $-S(\frac{H-\mu I}{\alpha+|\mu|};\delta,\epsilon)$ are $\epsilon$-close to either $1$ or $-1$. Therefore $-S(\frac{H-\mu I}{\alpha+|\mu|};\delta,\epsilon)$ is $\epsilon$-close, in operator norm, to the reflector about the direct sum of eigen-subspaces corresponding to eigenvalues smaller than $\mu$:
\[
R_{<\mu} = \sum_{k:\lambda_k<\mu} \ket{\psi_k}\bra{\psi_k} - \sum_{k:\lambda_k>\mu} \ket{\psi_k}\bra{\psi_k},
\]
and thus the block-encoding is also an $(1,m+2,\epsilon)$-block-encoding of $R_{<\mu}$. We denote this block-encoding by $\REF(\mu,\delta,\epsilon)$. We omitted the dependence on $H$ because $H$ as well as its block-encoding is usually fixed in the rest of the paper.
In the above discussion we have used QSP in a black-box manner. For concreteness, we present a single-qubit illustrative example to demonstrate how to use a block-encoded Hamiltonian to construct the reflector in Appendix <ref>.
Because our goal is to prepare the ground state, we will use the projector more often than the reflector. Now we construct a block-encoding of projector using $\REF(\mu,\delta,\epsilon)$ by the following circuit
\begin{equation}
\label{eq:circ_proj}
\Qcircuit @C=1em @R=.3em{
\lstick{\ket{0}} & \qw & \gate{\mathrm{H}} & \ctrl{1} & \gate{\mathrm{H}} & \qw\\
\lstick{\ket{0^{m+2}}} & \qw & \qw & \multigate{1}{\REF(\mu,\delta,\epsilon)} & \qw & \qw \\
\lstick{\ket{\phi}} & \qw & \qw & \ghost{\REF(\mu,\delta,\epsilon)} & \qw & \qw
}
\end{equation}
where $\mathrm{H}$ is the Hadamard gate, and we denote this circuit as $\PROJ(\mu,\delta,\epsilon)$.
Note that
\[
\begin{aligned}
&(\bra{0^{m+3}}\otimes I)\PROJ(\mu,\delta,\epsilon)(\ket{0^{m+3}}\otimes I) \\
&= \Big(\bra{+}\bra{0^{m+2}}\otimes I\Big)\Big(\ket{0}\bra{0}\otimes I\otimes I + \ket{1}\bra{1}\otimes \REF(\mu,\delta,\epsilon)\Big)\Big(\ket{+}\ket{0^{m+2}}\otimes I\Big) \\
&= \frac{1}{2}\Big(I+(\bra{0^{m+2}}\otimes I)\REF(\mu,\delta,\epsilon)(\ket{0^{m+2}}\otimes I)\Big),
\end{aligned}
\]
and we have
\[
\begin{aligned}
&\|(\bra{0^{m+3}}\otimes I)\PROJ(\mu,\delta,\epsilon)(\ket{0^{m+3}}\otimes I)-P_{<\mu}\| \\
&\leq \frac{1}{2}\|(\bra{0^{m+2}}\otimes I)\REF(\mu,\delta,\epsilon)(\ket{0^{m+2}}\otimes I)-R_{<\mu}\| \\
&\leq \frac{\epsilon}{2}.
\end{aligned}
\]
Here $P_{<\mu}$ is the projector into the direct sum of eigen-subspaces corresponding to eigenvalues smaller than $\mu$
\[
P_{<\mu} = \sum_{k:\lambda_k<\mu} \ket{\psi_k}\bra{\psi_k}
\]
Therefore $\PROJ(\mu,\delta,\epsilon)$ is an $(1,m+3,\epsilon/2)$-block-encoding of $P_{<\mu}$. In fact this can still be seen as an application of linear combination of block encoding <cit.>, using the relation $P_{<\mu}=\frac{1}{2}(R_{<\mu}+I)$.
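A quick matrix-level sanity check of this circuit (our illustration, with an exact reflector, i.e. $\epsilon=0$, and the block-encoding ancillas suppressed for simplicity): Hadamard, controlled-$R_{<\mu}$, Hadamard indeed block-encodes $(I+R_{<\mu})/2=P_{<\mu}$.

```python
# Verify that H (x) I, controlled-R, H (x) I block-encodes (I + R)/2 = P_{<mu}.
# The test Hamiltonian and threshold mu are illustrative.
import numpy as np

rng = np.random.default_rng(1)
N = 8
B = rng.standard_normal((N, N)); A = (B + B.T) / 2
lam, V = np.linalg.eigh(A)
mu = (lam[0] + lam[1]) / 2                     # threshold between lambda_0 and lambda_1
R = (V * np.sign(mu - lam)) @ V.T              # exact reflector R_{<mu}
P = (V * (lam < mu)) @ V.T                     # exact projector P_{<mu}

H2 = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate
cR = np.block([[np.eye(N), np.zeros((N, N))],  # controlled-R (apply R when ancilla is 1)
               [np.zeros((N, N)), R]])
W = np.kron(H2, np.eye(N)) @ cR @ np.kron(H2, np.eye(N))

print(np.allclose(W[:N, :N], (np.eye(N) + R) / 2))  # top-left block is (I + R)/2
print(np.allclose(W[:N, :N], P))                    # ... which equals P_{<mu}
```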
We use the following lemma to summarize the results
Given a Hermitian matrix $H$ with its $(\alpha,m,0)$-block-encoding $U_H$, with the guarantee that $\mu\in\RR$ is separated from the spectrum of $H$ by a gap of at least $\Delta/2$, we can construct an $(1,m+2,\epsilon)$-block-encoding of $R_{<\mu}$, and an $(1,m+3,\epsilon/2)$-block-encoding of $P_{<\mu}$, both using $\Or(\frac{\alpha}{\Delta}\log(\frac{1}{\epsilon}))$ applications of $U_H$ and $U_H^\dagger$, and $\Or(\frac{m\alpha}{\Delta}\log(\frac{1}{\epsilon}))$ other one- and two-qubit gates.
We remark that for the block-encoding $\PROJ(\mu,\delta,\epsilon)$, even a failed application of it can give us potentially useful information. We have
\[
\PROJ(\mu,\delta,\epsilon)\ket{0^{m+3}}\ket{\phi}=\ket{0}\ket{0^{m+2}}P_{<\mu}\ket{\phi}+\ket{1}\ket{0^{m+2}}P_{>\mu}\ket{\phi}+\frac{1}{\sqrt{2}}\ket{{-}}\ket{E},
\]
where $P_{>\mu} = I- P_{<\mu}$ and $\ket{E}$ satisfies $\|\ket{E}\|\leq \epsilon$. Thus when we apply the block-encoding and measure the first two registers, the first $m+3$ qubits, we have probability at least $1-\frac{\epsilon^2}{2}$ to obtain an outcome with either 0 or 1 followed by $(m+2)$ 0's. In the former case the projection has been successful, and in the latter case we have obtained an approximation of $P_{>\mu}\ket{\phi}$.
If we do not treat the output of 1 followed by $m+2$ 0's as failure then there is another interpretation of the circuit $\PROJ(\mu,\delta,\epsilon)$: this is an approximate projective measurement $\{P_{<\mu},P_{>\mu}\}$. In fact the whole circuit can be seen as phase estimation on a reflector, which needs only one ancilla qubit.
§ ALGORITHM WITH GROUND ENERGY BOUND
With the approximate projector developed in the previous section we can readily design an algorithm to prepare the ground state. We assume we have the Hamiltonian $H$ given through its block-encoding as in the last section. If we are further given an initial state $\ket{\phi_0}$ prepared by a unitary $U_{I}$, $U_{I}\ket{0^n}=\ket{\phi_0}$, and the promises that for some known $\gamma>0$, $\mu$, and $\Delta$, we have
* Lower bound for the overlap: $|\braket{\phi_0|\psi_0}|\geq \gamma$,
* Bounds for the ground energy and spectral gap: $\lambda_0 \leq \mu - \Delta/2 < \mu + \Delta/2 \leq \lambda_1$.
Here $\mu$ is an upper bound for the ground energy, $\Delta$ is a lower bound for the spectral gap, and $\gamma$ is a lower bound for the initial overlap. Now suppose we want to prepare the ground state to precision $\epsilon$. We can use Lemma <ref> to build a block-encoding of the projector $P_{<\mu}=\ket{\psi_0}\bra{\psi_0}$, and then apply it to $\ket{\phi_0}$, which we can prepare. This gives us a state close to $\ket{\psi_0}$, and we use the fidelity to measure how close. To achieve fidelity $1-\epsilon$ we use the circuit $\PROJ(\mu,\Delta/4\alpha,\gamma\epsilon)$, and we denote
\[
\wt{P}_{<\mu} = (\bra{0^{m+3}}\otimes I)\PROJ(\mu,\Delta/4\alpha,\gamma\epsilon)(\ket{0^{m+3}}\otimes I)
\]
then the resulting fidelity will be
\[
\frac{|\braket{\psi_0|\wt{P}_{<\mu}|\phi_0}|}{\|\wt{P}_{<\mu}\ket{\phi_0}\|} \geq \frac{|\braket{\psi_0|\phi_0}|-\gamma\epsilon/2}{|\braket{\psi_0|\phi_0}|+\gamma\epsilon/2} \geq
1-\frac{\gamma\epsilon}{|\braket{\psi_0|\phi_0}|}\geq 1-\epsilon.
\]
Here we have used
\[
\|\wt{P}_{<\mu}\ket{\phi_0}\|\le \norm{P_{<\mu}\ket{\phi_0}+(\wt{P}_{<\mu}-P_{<\mu})\ket{\phi_0}}\le |\braket{\psi_0|\phi_0}|+\gamma\epsilon/2.
\]
This is when we have a successful application of the block-encoding. The success probability is
\[
\|\wt{P}_{<\mu}\ket{\phi_0}\|^2 \geq \left(\|P_{<\mu}\ket{\phi_0}\|-\frac{\gamma\epsilon}{2}\right)^2 \geq \gamma^2\left(1-\frac{\epsilon}{2}\right)^2.
\]
With amplitude amplification [Brassard et al., 2002] we can boost the success probability to $\Omega(1)$ with $\Or(\frac{1}{\gamma})$ applications of $\PROJ(\mu,\Delta/4\alpha,\gamma\epsilon)$ and its inverse, as well as $\Or(\frac{m}{\gamma})$ other one- and two- qubit gates. Here we are describing the expected complexity since the procedure succeeds with some constant probability. In amplitude amplification we need to use a reflector similar to the oracle used in Grover's search algorithm [Grover, 1996]. Instead of constructing a reflector from $\PROJ(\mu,\Delta/4\alpha,\gamma\epsilon)$ we can directly use $\REF(\mu,\Delta/4\alpha,\gamma\epsilon)$ constructed in the previous section.
We summarize the results in the following theorem
Suppose we have Hamiltonian $H=\sum_{k}\lambda_k \ket{\psi_k}\bra{\psi_k}\in\CC^{N\times N}$, where $\lambda_k\leq \lambda_{k+1}$, given through its $(\alpha,m,0)$-block-encoding $U_H$. Also suppose we have an initial state $\ket{\phi_0}$ prepared by circuit $U_{I}$, as well as the promises <ref> and <ref>. Then the ground state $\ket{\psi_0}$ can be prepared to fidelity $1-\epsilon$ with the following costs:
* Query complexity: $\Or(\frac{\alpha}{\gamma\Delta}\log(\frac{1}{\gamma\epsilon}))$ queries to $U_H$ and $\Or(\frac{1}{\gamma})$ queries to $U_{I}$,
* Number of qubits: $\Or(n+m)$,
* Other one- and two-qubit gates: $\Or(\frac{m\alpha}{\gamma\Delta}\log(\frac{1}{\gamma\epsilon}))$.
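The following classical emulation illustrates the theorem's guarantees: we apply, in the eigenbasis, an approximate projector whose eigenvalues deviate from those of $P_{<\mu}$ by at most $\gamma\epsilon/2$ (mimicking $\PROJ(\mu,\Delta/4\alpha,\gamma\epsilon)$) to an initial state with overlap $\gamma$, and check the fidelity and success-probability bounds derived above. All numerical values are illustrative.

```python
# Emulate the approximate projector and check fidelity >= 1 - eps and
# success probability >= gamma^2 (1 - eps/2)^2.
import numpy as np
from scipy.special import erf, erfinv

rng = np.random.default_rng(2)
N, gamma, eps = 64, 0.2, 1e-2
lam = np.sort(rng.uniform(-1, 1, N))
lam[0] = lam[1] - 0.4                              # enforce a spectral gap
V = np.linalg.qr(rng.standard_normal((N, N)))[0]   # orthonormal eigenvectors

mu, Delta = lam[0] + 0.15, 0.3    # lambda_0 <= mu - Delta/2 < mu + Delta/2 <= lambda_1

rest = V[:, 1:] @ rng.standard_normal(N - 1)
rest /= np.linalg.norm(rest)
phi0 = gamma * V[:, 0] + np.sqrt(1 - gamma**2) * rest   # overlap exactly gamma

k = erfinv(1 - gamma * eps) / (Delta / 2)
f = (1 + erf(k * (mu - lam))) / 2        # eigenvalues within gamma*eps/2 of {1, 0}
out = (V * f) @ V.T @ phi0               # apply the approximate projector

p_succ = np.linalg.norm(out) ** 2
fid = abs(V[:, 0] @ out) / np.linalg.norm(out)
print(fid >= 1 - eps, p_succ >= gamma**2 * (1 - eps / 2) ** 2)   # True True
```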
§ ALGORITHM WITHOUT GROUND ENERGY BOUND
Next we consider the case when we are not given a known $\mu$ to bound the ground energy from above. All other assumptions about $H$ and its eigenvalues and eigenstates are identical to the previous sections. The basic idea is to test different values for $\mu$ and perform a binary search. This leads to a quantum-classical hybrid method that can estimate the ground energy as well as preparing the ground state to high precision.
All eigenvalues must be in the interval $[-\alpha,\alpha]$, thus we first partition $[-\alpha,\alpha]$ by grid points $-\alpha=x_0<x_1<\ldots<x_{G}=\alpha$, where $x_{k+1}-x_k=h$ for all $k$.
Then we attempt to locate $\lambda_0$ in a small interval between two grid points (not necessarily adjacent, but close) through a binary search. To do a binary search we need to be able to tell whether a given $x_k$ is located to the left or right of $\lambda_0$. Because of the random nature of measurement we can only do so correctly with some probability, and we want to make this probability as close to 1 as possible. This is achieved using a technique we call binary amplitude estimation.
Let $U$ be a unitary that acts on two registers, the first register indicating success or failure. Let $A=\|(\bra{0}\otimes I)U(\ket{0}\ket{0})\|$ be the success amplitude. Given $\gamma_0$ and $\gamma_1$, $\Delta := \gamma_1-\gamma_0>0$, provided that $A$ is either smaller than $\gamma_0$ or greater than $\gamma_1$, we can correctly distinguish between the two cases, i.e. output $0$ for the former and $1$ for the latter, with probability $1-\delta$ using $\mathcal{O}((1/\Delta)\log(1/\delta))$ applications of (controlled-) $U$ and its inverse.
The proof is essentially identical to the proof for gapped phase estimation in [Ambainis, 2012, Childs et al., 2017]. We can perform amplitude estimation up to error $\Delta/4$ with $\mathcal{O}(1/\Delta)$ applications of $U$ and $U^\dagger$. This has a success probability of $8/\pi^2$ according to Theorem 12 of [Brassard et al., 2002]. We turn the estimation result into a boolean indicating whether it is larger or smaller than $(\gamma_0+\gamma_1)/2$. The boolean is correct with probability at least $8/\pi^2$. Then we do a majority voting to boost this probability. Chernoff bound guarantees that to obtain a $1-\delta$ probability of getting the correct output we need to repeat $\mathcal{O}(\log(1/\delta))$ times. Therefore in total we need to run $U$ and $U^\dagger$ $\mathcal{O}((1/\Delta)\log(1/\delta))$ times.
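The majority-voting step in this proof is easy to check with a Monte-Carlo sketch: a single round of amplitude estimation is correct with probability $8/\pi^2\approx 0.81$, and $\Or(\log(1/\delta))$ repetitions drive the error below $\delta$, as the Chernoff bound predicts. The constant $12$ below is an ad hoc choice.

```python
# Majority voting over O(log(1/delta)) noisy rounds, each correct w.p. 8/pi^2.
import numpy as np

rng = np.random.default_rng(3)
p_single = 8 / np.pi**2
delta = 1e-3
rounds = int(np.ceil(12 * np.log(1 / delta)))   # O(log(1/delta)) repetitions

trials = 20000
votes = rng.random((trials, rounds)) < p_single  # True = round gave the correct bit
err = np.mean(votes.sum(axis=1) <= rounds / 2)   # fraction where the majority is wrong
print(rounds, err, err <= delta)                 # empirical error well below delta
```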
We then apply binary amplitude estimation to the block-encoding of the projector defined in (<ref>) $\PROJ(x_k,h/2\alpha,\epsilon')$ for some precision $\epsilon'$ to be chosen.
We denote the amplitude of the “good” component after applying the block-encoding by
\[
A_k=\|(\bra{0^{m+3}}\otimes I)\PROJ(x_k,h/2\alpha,\epsilon')(\ket{0^{m+3}}\ket{\phi_0})\|,
\]
which satisfies the following:
\[
A_k
\begin{cases}
\geq \gamma - \frac{\epsilon'}{2}, & \lambda_{0}\leq x_{k-1},\\
\leq \frac{\epsilon'}{2}, & \lambda_{0}\geq x_{k+1}.
\end{cases}
\]
We can then let
\[
\epsilon'=\gamma/2,
\]
so that the two amplitudes are separated by a gap lower bounded by $\gamma/2$. Therefore we can run the binary amplitude estimation, letting $U$ in Lemma <ref> be
\[
U=\PROJ(x_k,h/2\alpha,\epsilon')(I\otimes U_I),
\]
to correctly distinguish the two cases where $\lambda_0\leq x_{k-1}$ and $\lambda_0 \geq x_{k+1}$ with probability $1-\delta$, by running $\PROJ(x_k,h/2\alpha,\epsilon')$, $U_I$, and their inverses $\mathcal{O}((1/\gamma)\log(1/\delta))$ times. The output of the binary amplitude estimation is denoted by $B_k$.
We then define $\mathcal{E}$ as the event that an error occurs in the final result of binary amplitude estimation when we are computing $B_k$ for some $k$ such that $x_{k+1}<\lambda_0$ or $x_{k-1}>\lambda_0$ in our search process. All future discussion is conditional on $\mathcal{E}^c$ meaning that there is no error in binary amplitude estimation for $B_k$ when $x_{k+1}<\lambda_0$ or $x_{k-1}>\lambda_0$. This has a probability that is at least $(1-\delta)^R$ where $R$ is the number of times binary amplitude estimation is run.
Conditional on $\mathcal{E}^c$, almost surely (with probability 1) $B_k=1$ when $\lambda_0\leq x_{k-1}$ and $B_k=0$ when $\lambda_0\geq x_{k+1}$. Therefore $B_k=0$ tells us $\lambda_0>x_{k-1}$ and $B_k=1$ tells us $\lambda_0<x_{k+1}$. $B_k$ and $B_{k+1}$ combined give us the information as shown in Table <ref>.
\begin{tabular}{ccc}
$B_k$ & $B_{k+1}$ & Position of $\lambda_0$ \\
\hline
1 & 1 & $\lambda_0<x_{k+1}$ \\
0 & 0 & $\lambda_0>x_k$ \\
0 & 1 & $x_{k-1}<\lambda_0<x_{k+2}$ \\
1 & 0 & $x_{k}<\lambda_0<x_{k+1}$
\end{tabular}
Conditional on $\mathcal{E}^c$, $B_k$ and $B_{k+1}$ can provide us with the information as shown in the table.
Algorithm <ref>: Binary search to locate $\lambda_0$
    $L \gets 0$, $U\gets G$
    for at most $\lceil \log_2(G) \rceil$ rounds:
        $k\gets\lfloor (L+U)/2 \rfloor$
        Run binary amplitude estimation to get $B_k$ and $B_{k+1}$.
        if $(B_k,B_{k+1})=(1,1)$: $U\gets k+1$
        if $(B_k,B_{k+1})=(0,0)$: $L\gets k$
        if $(B_k,B_{k+1})=(0,1)$: return $k-1$, $k+2$
        if $(B_k,B_{k+1})=(1,0)$: return $k$, $k+1$
    return $L$, $U$
Using Table <ref> we can do the binary search as outlined in Algorithm <ref>.
For the $\ell$-th step in Algorithm <ref> we denote the integer variables $U$ and $L$ by $U_\ell$ and $L_\ell$. In all four outcomes for $(B_k,B_{k+1})$, if the algorithm does not terminate at this step, then the new $U_{\ell+1}-L_{\ell+1}$ will be at most $(U_\ell-L_\ell)/2+1$. Since $U_0-L_0=G$ at the very beginning, we can show inductively $U_\ell-L_\ell\leq (G-2)/2^\ell+2$. Therefore when $\ell \geq \log_2(G-2)$ we have $U_\ell-L_\ell\leq 3$. Thus
the algorithm must terminate in $\lceil \log_2(G) \rceil=\mathcal{O}(\log(\alpha/h))$ steps. The output we denote by $L$ and $U$. They satisfy $x_L<\lambda_0<x_U$ and $U-L\leq 3$.
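The classical control logic of Algorithm <ref> can be sketched as follows (a minimal Python simulation; the function `B` is a classical stand-in for binary amplitude estimation under the event $\mathcal{E}^c$, and the grid and $\lambda_0$ are illustrative):

```python
import math

def binary_search_ground_energy(grid, B):
    """Classical control logic of Algorithm <ref>; B(k) stands in for binary
    amplitude estimation: it returns 1 when lambda_0 <= x_{k-1}, 0 when
    lambda_0 >= x_{k+1}, and may return either bit in between."""
    L, U = 0, len(grid) - 1
    for _ in range(math.ceil(math.log2(len(grid)))):
        k = (L + U) // 2
        bk, bk1 = B(k), B(k + 1)
        if (bk, bk1) == (1, 1):
            U = k + 1
        elif (bk, bk1) == (0, 0):
            L = k
        elif (bk, bk1) == (0, 1):
            return k - 1, k + 2
        else:                       # (1, 0)
            return k, k + 1
    return L, U

h, alpha = 0.01, 1.0
grid = [i * h - alpha for i in range(int(2 * alpha / h) + 1)]   # x_0,...,x_G
lam0 = 0.1234
B = lambda k: 1 if k >= 1 and lam0 <= grid[k - 1] else 0        # error-free oracle
L, U = binary_search_ground_energy(grid, B)
assert grid[L] < lam0 < grid[U] and U - L <= 3
print(grid[L], grid[U])
```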
If we want the whole procedure to be successful with probability at least $1-\vartheta$, then we need $\mathrm{Prob}(\mathcal{E}^c)\geq 1-\vartheta$. Since
\[
\mathrm{Prob}(\mathcal{E}^c)\geq (1-\delta)^{\lceil \log_2(G) \rceil}\geq (1-\delta)^{\log_2(4\alpha/h)},
\]
we only need, for small $\vartheta$,
\[
\delta \leq \frac{\vartheta}{2\log_2(4\alpha/h)}.
\]
Algorithm <ref> enables us to locate $\lambda_0$ within an interval of length at most $3h$. In total we need to run binary amplitude estimation at most $\mathcal{O}(\log(\alpha/h))$ times. Each amplitude estimation queries $\PROJ(x_k,h/2\alpha,\epsilon')$ and $U_I$ $\Or((1/\gamma)\log(1/\delta))$ times, where $\epsilon'=\gamma/2$. Therefore the number of queries to $U_H$ and $U_I$ are respectively
\[
\Or\left(\frac{\alpha}{\gamma h}\log\left(\frac{\alpha}{h}\right)\log\left(\frac{1}{\gamma}\right)\log\left(\frac{\log(\alpha/h)}{\vartheta}\right)\right), \quad
\Or\left(\frac{1}{\gamma}\log\left(\frac{\alpha}{h}\right)\log\left(\frac{\log(\alpha/h)}{\vartheta}\right)\right).
\]
In particular, in the procedure above we did not use <ref> but only used <ref>. Therefore we do not need to assume the presence of a gap. The result can be summarized into the following theorem:
Suppose we have Hamiltonian $H=\sum_{k}\lambda_k \ket{\psi_k}\bra{\psi_k}\in\CC^{N\times N}$, where $\lambda_k\leq \lambda_{k+1}$, given through its $(\alpha,m,0)$-block-encoding $U_H$. Also suppose we have an initial state $\ket{\phi_0}$ prepared by circuit $U_{I}$, as well as the promise <ref>. Then the ground energy can be estimated to precision $h$ with probability $1-\vartheta$ with the following costs:
* Query complexity: $\Or\left(\frac{\alpha}{\gamma h}\log\left(\frac{\alpha}{h}\right)\log\left(\frac{1}{\gamma}\right)\log\left(\frac{\log(\alpha/h)}{\vartheta}\right)\right)$ queries to $U_H$ and
$\Or\left(\frac{1}{\gamma}\log\left(\frac{\alpha}{h}\right)\log\left(\frac{\log(\alpha/h)}{\vartheta}\right)\right)$ queries to $U_{I}$,
* Number of qubits: $\Or(n+m+\log(\frac{1}{\gamma}))$,
* Other one- and two- qubit gates: $\Or\left(\frac{m\alpha}{\gamma h}\log\left(\frac{\alpha}{h}\right)\log\left(\frac{1}{\gamma}\right)\log\left(\frac{\log(\alpha/h)}{\vartheta}\right)\right)$.
The extra $\Or(\log(1/\gamma))$ qubits needed come from amplitude estimation, which uses phase estimation. If we use Kitaev's original version of phase estimation using only a single qubit [Kitaev, 1995], we can reduce the number of extra qubits to $\Or(1)$.
With Theorem <ref> we can then use Algorithm <ref> to prepare the ground state without knowing an upper bound for the ground energy beforehand, when in addition to <ref> we have a lower bound for the spectral gap:
* Bound for the spectral gap: $\lambda_1-\lambda_0\geq \Delta$.
We first run Algorithm <ref> to locate the ground energy in an interval $[x_L,x_U]$ of length at most $\Delta$. Then we simply apply $\PROJ((x_L+x_U)/2,\Delta/4\alpha,\gamma\epsilon)$ to $\ket{\phi_0}$. This will give us an approximate ground state with at least $1-\epsilon$ fidelity. Therefore we have the following corollary:
Suppose we have Hamiltonian $H=\sum_{k}\lambda_k \ket{\psi_k}\bra{\psi_k}\in\CC^{N\times N}$, where $\lambda_k\leq \lambda_{k+1}$, given through its $(\alpha,m,0)$-block-encoding $U_H$. Also suppose we have an initial state $\ket{\phi_0}$ prepared by circuit $U_{I}$, as well as the promises <ref> and <ref>. Then the ground state can be prepared to fidelity $1-\epsilon$ with probability $1-\vartheta$ with the following costs:
* Query complexity: $\Or\left(\frac{\alpha}{\gamma \Delta}\left(\log\left(\frac{\alpha}{\Delta}\right)\log\left(\frac{1}{\gamma}\right)\log\left(\frac{\log(\alpha/\Delta)}{\vartheta}\right)+\log\left(\frac{1}{\epsilon}\right)\right)\right)$ queries to $U_H$ and $\Or\left(\frac{1}{\gamma}\log\left(\frac{\alpha}{\Delta}\right)\log\left(\frac{\log(\alpha/\Delta)}{\vartheta}\right)\right)$ queries to $U_{I}$,
* Number of qubits: $\Or(n+m+\log(\frac{1}{\gamma}))$,
* Other one- and two- qubit gates: $\Or\left(\frac{m\alpha}{\gamma \Delta}\left(\log\left(\frac{\alpha}{\Delta}\right)\log\left(\frac{1}{\gamma}\right)\log\left(\frac{\log(\alpha/\Delta)}{\vartheta}\right)+\log\left(\frac{1}{\epsilon}\right)\right)\right)$.
It may be sometimes desirable to ignore whether the procedure is successful or not. In this case we will see the output as a mixed state whose density matrix is
\[
\rho = \mathrm{Prob}(\mathcal{E}^c)\ket{\wt{\psi}_0}\bra{\wt{\psi}_0} + \rho',
\]
where $\ket{\wt{\psi}_0}$ is the approximate ground state with fidelity at least $1-\epsilon$, which is produced conditional on the event $\mathcal{E}^c$, and $\Tr\rho'=\mathrm{Prob}(\mathcal{E})$. Then this mixed state will have a fidelity lower bounded by
\[
\braket{\psi_0|\rho|\psi_0} \geq \mathrm{Prob}(\mathcal{E}^c) |\braket{\wt{\psi}_0|\psi_0}|^2\geq (1-\vartheta)(1-\epsilon)^2.
\]
If we want to achieve $\sqrt{1-\xi}$ fidelity for the mixed state, we can simply let $\vartheta=\epsilon=\xi/3$. Thus the number of queries to $U_H$ and $U_I$ are $\wt{\Or}(\frac{\alpha}{\gamma\Delta}\log(\frac{1}{\xi}))$ and $\wt{\Or}(\frac{1}{\gamma}\log(\frac{\alpha}{\Delta})\log(\frac{1}{\xi}))$ respectively.
§ OPTIMALITY OF THE QUERY COMPLEXITIES
In this section we prove that for the ground state preparation algorithms outlined in Section <ref> and Section <ref>, the numbers of queries to $U_H$ and $U_I$ are essentially optimal. We will also show that our ground energy estimation algorithm has a nearly optimal dependence on the precision. We first prove the following complexity lower bounds:
Suppose we have a generic Hamiltonian $H=\sum_{k}\lambda_k \ket{\psi_k}\bra{\psi_k}\in\CC^{N\times N}$, where $\lambda_k\leq \lambda_{k+1}$, given through its $(\alpha,m,0)$-block-encoding $U_H$, and $\alpha=\Theta(1)$. Also suppose we have an initial state $\ket{\phi_0}$ prepared by circuit $U_{I}$, as well as the promises <ref> and <ref>. Then the query complexities of preparing the ground state $\ket{\psi_0}$ of $H$ to fidelity at least $\sqrt{3}/2$ satisfy
* When $\Delta=\Omega(1)$, and $\gamma\rightarrow 0^+$, the number of queries to $U_H$ is $\Omega(1/\gamma)$;
* When $\gamma=\Omega(1)$, and $\Delta\rightarrow 0^+$, the number of queries to $U_H$ is $\Omega(1/\Delta)$;
* When $\Delta=\Omega(1)$, and $\gamma\rightarrow 0^+$, it is not possible to accomplish the above task using $\Or(1/\gamma^{1-\theta})$ queries to $U_I$ and $\Or(\mathrm{poly}(1/\gamma))$ queries to $U_H$ for any $\theta>0$.
We prove all three lower bounds by applying the ground state preparation algorithm to the unstructured search problem. In the unstructured search problem we try to find an $n$-bit string $t$ marked out by the oracle
\[
U_t = I - 2\ket{t}\bra{t}.
\]
It is proved for this problem the number of queries to $U_t$ to find $t$ with probability $1/2$ is lower bounded by $\Omega(\sqrt{N})$ where $N=2^n$ [Bennett et al., 1997].
This problem can be seen as a ground state preparation problem. We find that $\ket{t}$ is the ground state of $U_t$, which is at the same time a unitary and therefore a $(1,0,0)$-block-encoding of itself. Therefore $U_t$ serves as the $U_H$ in the theorem. The spectral gap is $2$. Also, let
\[
\ket{u}=\frac{1}{\sqrt{N}}\sum_s\ket{s}
\]
be the uniform superposition of all $n$-strings, then we have
\[
\braket{u|t}=\frac{1}{\sqrt{N}}
\]
and $\ket{u}$ can be efficiently prepared by the Hadamard transform since $\mathrm{H}^{\otimes n}\ket{0^n}=\ket{u}$. Therefore $\mathrm{H}^{\otimes n}$ serves as the $U_I$ described in the theorem.
If the ground state preparation problem can be solved with $o(1/\gamma)$ queries to $U_H$ for fixed $\Delta$ to produce an approximate ground state with fidelity at least $\sqrt{3}/2$, then from the above setup we have $\gamma=1/\sqrt{N}$, and we can first find the approximate ground state and then measure in the computational basis, obtaining $t$ with probability at least $3/4$. Therefore the unstructured search problem can be solved with $o(\sqrt{N})$ queries to the oracle $U_t$, which is impossible. Thus we have proved the first lower bound in our theorem.
To prove the second lower bound we want to create a situation in which the overlap is bounded from below by a constant but the gap vanishes. We need to introduce the Grover diffusion operator
\begin{equation}
\label{eq:grover_diffusion}
D = I_n-2\ket{u}\bra{u},
\end{equation}
which can be efficiently implemented. Then we define
\begin{equation}
\label{eq:hamilton_comb}
H(\tau) = (1-\tau)D + \tau U_t,
\end{equation}
and consider $H(1/2)$.
Because both $\operatorname{span}(\ket{u},\ket{t})$ and its orthogonal complement are invariant subspaces of $D$ and $U_t$, and both operators become the identity operator when restricted to the orthogonal complement of $\operatorname{span}(\ket{u},\ket{t})$, we only need to look for the ground state in the 2-dimensional subspace $\operatorname{span}(\ket{u},\ket{t})$. In this subspace, relative to the basis $\{\ket{u},\ket{t}\}$, the matrix representation of $H(1/2)$ is
\[
\begin{pmatrix}
0 & -\braket{u|t} \\
-\braket{t|u} & 0
\end{pmatrix}
=-\frac{1}{\sqrt{N}}
\begin{pmatrix}
0 & 1 \\
1 & 0
\end{pmatrix}.
\]
Therefore the ground state of $H(1/2)$ is
\[
\ket{\Psi}=\frac{\ket{u}+\ket{t}}{\sqrt{2+\frac{2}{\sqrt{N}}}}.
\]
Therefore $\braket{\Psi|u}=\braket{\Psi|t}= 1/\sqrt{2}+\Or(1/\sqrt{N})$ for large $N$. Furthermore, the gap is $\Delta(1/2)=2/\sqrt{N}$.
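These claims are easy to check numerically for moderate $N$ (a minimal numpy sketch; the qubit count and marked item are arbitrary):

```python
import numpy as np

n = 10                      # number of qubits, N = 2^n
N = 2 ** n
t = 7                       # the marked item
u = np.ones(N) / np.sqrt(N)
e_t = np.zeros(N); e_t[t] = 1.0

# H(1/2) = (D + U_t)/2 = I - |u><u| - |t><t|
H = np.eye(N) - np.outer(u, u) - np.outer(e_t, e_t)
w, v = np.linalg.eigh(H)
psi = v[:, 0]                                  # ground state

print(w[1] - w[0], 2 / np.sqrt(N))             # spectral gap ~ 2/sqrt(N)
print(abs(psi @ u), abs(psi @ e_t))            # both ~ 1/sqrt(2)
```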
Therefore $\ket{t}$ can be prepared in the following way: we first prepare the ground state of $H(1/2)$, whose block-encoding is easy to construct using one application of $U_t$. The resulting approximate ground state we denote by $\ket{\wt{\Psi}}$. Then we measure $\ket{\wt{\Psi}}$ in the computational basis. If there is some non-vanishing probability of obtaining $t$ then we can boost the success probability to above $1/2$ by repeating the procedure and verifying using $U_t$.
If the second lower bound in the theorem does not hold, then $\ket{\wt{\Psi}}$ can be prepared with $o(1/\Delta(1/2))=o(\sqrt{N})$ queries to the block-encoding of $H(1/2)$ and therefore the same number of queries to $U_t$. Because the angle corresponding to fidelity is the great-circle distance on the unit sphere, we have the triangle inequality
(using that $|\braket{\wt{\Psi}|\Psi}|\ge \sqrt{3}/{2}$)
\[
\arccos |\braket{\wt{\Psi}|t}| \leq \arccos |\braket{\Psi|t}| + \arccos |\braket{\wt{\Psi}|\Psi}|\le \frac{5\pi}{12} + \Or\left(\frac{1}{\sqrt{N}}\right).
\]
Therefore for large $N$ we have $|\braket{\wt{\Psi}|t}|\geq \cos(5\pi/12) + \Or(1/\sqrt{N})>1/4$. The probability of getting $t$ when performing measurement is at least $1/16$. Therefore we can boost the success probability to above $1/2$ by $\Or(1)$ repetitions and verifications. The total number of queries to $U_t$ is therefore $o(\sqrt{N})$. Again, this is impossible. Therefore we have proved the second lower bound in our theorem.
For the last lower bound we need to create some trade off between the gap and the overlap. We consider preparing the ground state of the Hamiltonian $H(1/2-N^{-1/2+\delta})$, $0<\delta<1/6$, whose block-encoding can be efficiently constructed with a single application of $U_t$, as an intermediate step. It is shown in Appendix <ref> that the ground state is
\begin{equation}
\label{eq:interm_state_lb3}
\ket{\Phi}=\ket{u} + \frac{1}{4}N^{-\delta}\ket{t} + \Or(N^{-2\delta}).
\end{equation}
Therefore the overlaps are
\[
\gamma_u=|\braket{\Phi|u}| = 1+\Or(N^{-2\delta}),\quad \gamma_t=|\braket{\Phi|t}| = \frac{1}{4}N^{-\delta} + \Or(N^{-2\delta}).
\]
Also we show in Appendix <ref> that the gap is
\begin{equation}
\label{eq:gap_lb3}
\Delta(1/2-N^{-1/2+\delta})=4N^{\delta-1/2}+\Or(N^{-1/2-\delta}).
\end{equation}
We first apply the algorithm described in Section <ref> to prepare the ground state of $H(1/2-N^{-1/2+\delta})$ to fidelity $1-N^{-2\delta}/128$. Using the overlap $\gamma_u$ and the gap in (<ref>), the approximate ground state, denoted by $\ket{\wt{\Phi}}$, can be prepared with $\Or(N^{1/2-\delta}\log(N))$ queries to the block-encoding of $H(1/2-N^{-1/2+\delta})$, and therefore the same number of queries to $U_t$.
The overlap between $\ket{\wt{\Phi}}$ and $\ket{t}$ can again be bounded using the triangle inequality
\[
\begin{aligned}
\arccos|\braket{\wt{\Phi}|t}| &\leq \arccos|\braket{\Phi|t}| + \arccos|\braket{\wt{\Phi}|\Phi}| \\
&\leq \arccos\left(\frac{N^{-\delta}}{4}\right) + \arccos\left(1-\frac{N^{-2\delta}}{128}\right) + \Or(N^{-2\delta}) \\
&\leq \frac{\pi}{2} - \frac{N^{-\delta}}{4} + \sqrt{2\times \frac{N^{-2\delta}}{128}} + \Or(N^{-2\delta}) \\
&=\frac{\pi}{2} - \frac{N^{-\delta}}{8} + \Or(N^{-2\delta}).
\end{aligned}
\]
Therefore we have
\[
\wt{\gamma}_t = |\braket{\wt{\Phi}|t}| \geq \frac{N^{-\delta}}{8} + \Or(N^{-2\delta}).
\]
If the last lower bound in our theorem does not hold, we can then prepare the ground state of $U_t$ by using the initial state $\ket{\wt{\Phi}}$ only $\Or(1/\wt{\gamma}^{1-\theta}_t)$ times for some $\theta>0$, and the number of queries to $U_t$ at this step, not including the queries used for preparing $\ket{\wt{\Phi}}$, is $\Or(1/\wt{\gamma}^p_t)$ for some $p>0$. Therefore the total number of queries to $U_t$ is
\[
\Or\left(\frac{N^{1/2-\delta}\log(N)}{\wt{\gamma}^{1-\theta}_t}+\frac{1}{\wt{\gamma}^p_t}\right)=\Or(N^{1/2-\delta\theta}\log(N)+N^{\delta p}).
\]
This complexity must be $\Omega(N^{1/2})$ according to the lower bound for unstructured search problem. Therefore we need $\delta p\geq 1/2$. However we can choose $\delta$ to be arbitrarily small, and no finite $p$ can satisfy this condition. Hence we have a contradiction. This proves the last lower bound in our theorem.
When we look at the query complexities of the ground state preparation algorithms in Secs. <ref> and <ref>, we can use $\wt{\Or}$ notation to hide the logarithmic factors, and both algorithms use $\wt{\Or}(\frac{\alpha}{\gamma\Delta})$ queries to $U_H$ and $\wt{\Or}(\frac{1}{\gamma})$ queries to $U_I$ when we want to achieve some fixed fidelity. Given the lower bound in Theorem <ref> we can see the algorithm with bound for ground energy essentially achieves the optimal dependence on $\gamma$ and $\Delta$. The algorithm without bound for ground energy achieves the same complexity modulo logarithmic factors, while using less information. This fact guarantees that the dependence is also nearly optimal.
We will then prove the nearly optimal dependence of our ground energy estimation algorithm on the precision $h$. We have the following theorem:
Suppose we have a generic Hamiltonian $H=\sum_{k}\lambda_k \ket{\psi_k}\bra{\psi_k}\in\CC^{N\times N}$, where $\lambda_k\leq \lambda_{k+1}$, given through its $(\alpha,m,0)$-block-encoding $U_H$, and $\alpha=\Theta(1)$. Also suppose we have an initial state $\ket{\phi_0}$ prepared by circuit $U_{I}$, as well as the promise that $|\braket{\phi_0|\psi_0}|=\Omega(1)$. Then estimating the ground energy to precision $h$ requires $\Omega(1/h)$ queries to $U_H$.
This time we convert the quantum approximate counting problem, which is closely related to the unstructured search problem, into an eigenvalue problem. The quantum approximate counting problem is defined in the following way. We are given a set of $n$-bit strings $S\subset\{0,1\}^n$ specified by the oracle $U_f$ satisfying
\[
U_f\ket{x} = \begin{cases}
-\ket{x} & \ x \in S, \\
\ket{x} & \ x \notin S,
\end{cases}
\]
for any $x\in\{0,1\}^n$. We want to estimate the ratio $|S|/N$ up to relative error $\epsilon$. It has been proven that this requires $\Omega\left(\frac{1}{\epsilon}\sqrt{\frac{N}{|S|}}\right)$ queries to $U_f$ for $|S|=o(N)$ <cit.>, where $N=2^n$, for the success probability to be greater than $3/4$, and this lower bound can be achieved using amplitude estimation [Brassard et al., 2002].
We convert this problem into an eigenvalue problem of a block-encoded Hamiltonian. Let $\ket{u}$ be the uniform superposition of the computational basis and $D$ be the Grover diffusion operator defined in (<ref>).
Then define the following $(n+1)$-qubit unitary ($\mathrm{H}$ is the Hadamard gate)
\[
U_H = (\mathrm{H}\otimes I_n)[\ket{0}\bra{0}\otimes D-\ket{1}\bra{1}\otimes (U_fDU_f)](\mathrm{H}\otimes I_n),
\]
which can be implemented using two applications of controlled-$U_f$. We define
\[
H = (\bra{0}\otimes I_n)U_H(\ket{0}\otimes I_n)=\frac{1}{2}(D-U_f D U_f).
\]
Note that here $H$ is given in its $(1,1,0)$-block-encoding $U_H$.
If we decompose $\ket{u}$ as
\[
\ket{u}=a\ket{u_0} + \sqrt{1-a^2}\ket{u_1}
\]
where the unit vectors $\ket{u_0}$ and $\ket{u_1}$ satisfy
\[
U_f\ket{u_0}=-\ket{u_0},\quad U_f\ket{u_1}=\ket{u_1},
\]
then we find $a=\sqrt{|S|/N}$. We only need to estimate the value of $a$ to precision $\Or(\epsilon'\sqrt{N/|S|})$ in order to estimate $|S|/N$ to precision $\epsilon'$.
We analyze the eigenvalues and eigenvectors of $H$. It can be verified that $\{\ket{u_0},\ket{u_1}\}$ span an invariant subspace of $H$, and relative to this orthonormal basis $H$ is represented by the matrix
\[
\begin{pmatrix}
0 & -2a\sqrt{1-a^2} \\
-2a\sqrt{1-a^2} & 0
\end{pmatrix}.
\]
In the orthogonal complement of this subspace, $H$ is simply the zero matrix. Therefore $H$ has only two non-zero eigenvalues $\pm 2a\sqrt{1-a^2}$ corresponding to eigenvectors
\[
\ket{\psi_{\mp}}=\frac{1}{\sqrt{2}}(\ket{u_0}\mp\ket{u_1}).
\]
The ground state of $H$ is therefore $\ket{\psi_+}$ with ground energy $-2a\sqrt{1-a^2}$. We can use $\ket{u}$ as the initial state, with an overlap $\braket{\psi_+|u}=\frac{1}{\sqrt{2}}(a+\sqrt{1-a^2})\geq\frac{1}{\sqrt{2}}$.
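A small numerical check of this construction (a sketch; the set $S$ and system size are arbitrary):

```python
import numpy as np

n = 8
N = 2 ** n
S = {3, 17, 42}                       # marked strings
a = np.sqrt(len(S) / N)

u = np.ones(N) / np.sqrt(N)
D = np.eye(N) - 2 * np.outer(u, u)    # Grover diffusion operator
Uf = np.diag([-1.0 if x in S else 1.0 for x in range(N)])

H = 0.5 * (D - Uf @ D @ Uf)
w = np.linalg.eigvalsh(H)
print(w[0], -2 * a * np.sqrt(1 - a ** 2))   # ground energy = -2a*sqrt(1-a^2)
print(np.sum(np.abs(w) > 1e-9))             # exactly two non-zero eigenvalues
```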
We use this Hamiltonian to prove Theorem <ref>:
Assume toward contradiction that there exists an algorithm that estimates the ground energy to precision $h$ using only $o(1/h)$ queries to $U_H$. Then we use this algorithm to estimate the ground energy of the block-encoded Hamiltonian constructed above, for $a=o(1)$, which means $|S|=o(N)$. Estimating $2a\sqrt{1-a^2}$ to precision $\Or(h)$ enables us to estimate $a$ to precision $\Or(h)$. Setting $h=\epsilon'\sqrt{N/|S|}$, this algorithm can estimate $|S|/N$ to precision $\epsilon'$, with success probability at least $3/4$. Since we are interested in the relative error we set $\epsilon'=\epsilon|S|/N$. Therefore the whole procedure uses only $o(1/h)=o(\frac{1}{\epsilon}\sqrt{\frac{N}{|S|}})$ queries to $U_H$ and therefore twice as many queries to $U_f$. This contradicts the lower bound for the approximate counting problem in [Nayak and Wu, 1999].
Theorem <ref> can also be viewed as a consequence of the optimality of the quantum phase estimation algorithm [Bessen, 2005]. If instead of the block-encoding $U_H$ we have $e^{-i\tau H}$ as the oracle for some $\tau$ such that $|\tau|\|H\|\leq \pi$, then even when given the exact ground state of $H$, <cit.> gives a query complexity lower bound $\Omega(1/h)$ for estimating the ground energy to within additive error $h$. This provides a different proof of the above theorem, since $e^{-i\tau H}$ and the block-encoding of $H$ are interconvertible: one can efficiently implement $e^{-i\tau H}$ via Hamiltonian simulation starting from a block-encoding of $H$ [Low and Chuang, 2019], and can efficiently obtain a block-encoding of $H$ by querying $e^{-i\tau H}$ according to <cit.>.
§ LOW-ENERGY STATE PREPARATION
It is known that estimating the spectral gap $\Delta$ is a difficult task [Ambainis, 2014, Cubitt et al., 2015, Bausch et al., 2018]. Our algorithm for finding the ground energy, as discussed in Theorem <ref>, does not depend on knowing the spectral gap. However, both of our algorithms for preparing the ground state in Theorem <ref> and Corollary <ref> require a lower bound on the spectral gap. We would like to point out that if we only want to produce a low-energy state $\ket{\psi}$, making $\braket{\psi|H|\psi}\leq \mu$ for some $\mu > \lambda_0$, as in [Poulin and Wocjan, 2009], then this can be done without any knowledge of the spectral gap. In fact this is even possible when the ground state is degenerate.
To do this, we need to first assume we have a normalized initial state $\ket{\phi_0}$ with non-trivial overlap with the low-energy eigen-subspaces. Quantitatively this means for some $\gamma,\delta>0$, if we expand the initial state in the eigenbasis of $H$, obtaining $\ket{\phi_0}=\sum_k \alpha_k \ket{\psi_k}$, then
\[
\sum_{k:\lambda_k \leq \mu-3\delta}|\alpha_k|^2\geq \gamma^2.
\]
Then we can use the block-encoded projection operator in (<ref>) to get
\[
\ket{\psi'} = (\bra{0^{m+3}}\otimes I)\PROJ(\mu-2\delta,\delta,\epsilon')(\ket{0^{m+3}}\otimes \ket{\phi_0}),
\]
for some precision $\epsilon'$. Now we expand $\ket{\psi'}$ in the eigenbasis to get $\ket{\psi'}=\sum_{k}\beta_k\ket{\psi_k}$, and denote $\ket{\varphi'}=\sum_{k:\lambda_k<\mu-\delta}\beta_k\ket{\psi_k}$. We then have, because of the approximation to the sign function,
\[
\|\ket{\psi'}-\ket{\varphi'}\|\leq\frac{\epsilon'}{2},\quad \braket{\varphi'|\varphi'}\geq \gamma^2(1-\frac{\epsilon'}{2})^2,\quad \braket{\varphi'|H|\varphi'}\leq (\mu-\delta)\braket{\varphi'|\varphi'}.
\]
From the above bounds we further get
\[
\frac{\braket{\psi'|H|\psi'}}{\braket{\psi'|\psi'}} \leq \frac{\braket{\varphi'|H|\varphi'}+\|H\|\epsilon'+\|H\|\epsilon'^2/4}{\braket{\varphi'|\varphi'}-\epsilon'} \leq \frac{\mu - \delta + \frac{\alpha\epsilon'+\alpha\epsilon'^2/4}{\gamma^2(1-\epsilon'/2)^2}}{1-\frac{\epsilon'}{\gamma^2(1-\epsilon'/2)^2}}.
\]
Now denoting $\ket{\psi}=\ket{\psi'}/\|\ket{\psi'}\|$ we can make $\braket{\psi|H|\psi}\leq \mu$ by choosing $\epsilon'=\Or(\gamma^2\delta/\alpha)$. Therefore the total number of queries to $U_H$ required is $\Or(\frac{1}{\delta\gamma}\log(\frac{\alpha}{\delta\gamma}))$ and the number of queries to $U_I$ is $\Or(\frac{1}{\gamma})$.
From this we can see that if the initial state $\ket{\phi_0}$ has an overlap with the ground state that is at least $\gamma$, and we want to prepare a state with energy upper bounded by $\lambda_0+\delta$, the required numbers of queries to $U_H$ and $U_I$ are $\Or(\frac{1}{\delta\gamma}\log(\frac{\alpha}{\delta\gamma}))$ and $\Or(\frac{1}{\gamma})$ respectively. If we do not know the ground energy beforehand we can use the algorithm in Theorem <ref> to estimate it first. Note that none of these procedures assumes a spectral gap.
§ DISCUSSIONS
In this work we proposed an algorithm to prepare the ground state of a given Hamiltonian when a ground energy upper bound is known (Theorem <ref>), an algorithm to estimate the ground energy based on binary search (Theorem <ref>), and combining these two to get an algorithm to prepare the ground state without knowing an upper bound (Corollary <ref>). By solving the unstructured search problem and the approximate counting problem through preparing the ground state, we proved that the query complexities for the tasks above cannot be substantially improved, as otherwise the complexity lower bound for the two problems would be violated.
All our algorithms are based on the availability of the block-encoding of the target Hamiltonian. This is a non-trivial task but we know it can be done for many important settings. For example, Childs proposed an LCU approach to block-encode the Hamiltonian of a quantum spin system [Childs et al., 2018], in which the Hamiltonian is decomposed into a sum of Pauli matrices. In [Low and Wiebe, 2018], Low and Wiebe outlined the methods to construct block-encoding of Hubbard Hamiltonian with long-range interaction, and of quantum chemistry Hamiltonian in plane-wave basis, both using fast-fermionic Fourier transform (FFFT) [Babbush et al., 2018]. The FFFT can be replaced by a series of Givens rotations which gives lower circuit depth and better utilizes limited connectivity [Kivlichan et al., 2018, Jiang et al., 2018]. Any sparse Hamiltonian whose entries can be efficiently computed can also be block-encoded using a quantum walk operator [Berry and Childs, 2009, Berry et al., 2015, Childs et al., 2017].
We remark that the quantum circuit used in our method for ground energy estimation can be further simplified. The main obstacle to applying this method to near-term devices is the need of amplitude estimation, which requires phase estimation. It is possible to replace amplitude estimation by estimating the success probability classically. In the context of binary amplitude estimation in Lemma <ref>, we need to determine whether the success amplitude is greater than $3\gamma/4$ or smaller than $\gamma/4$. This can be turned into a classical hypothesis testing to determine whether the success probability is greater than $9\gamma^2/16$ or smaller than $\gamma^2/16$. A simple Chernoff bound argument tells us that we need $\Or(\log(1/\vartheta)/\gamma^2)$ samples to distinguish the two cases with success probability at least $1-\vartheta$, as opposed to the $\Or(\log(1/\vartheta)/\gamma)$ complexity in amplitude estimation.
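A minimal sketch of this classical hypothesis test (the constant 20 in the sample count is an illustrative choice, not a tuned bound):

```python
import numpy as np

rng = np.random.default_rng(1)
gamma, vartheta = 0.1, 1e-2
p_hi, p_lo = 9 * gamma ** 2 / 16, gamma ** 2 / 16
threshold = (p_hi + p_lo) / 2

# O(log(1/vartheta)/gamma^2) samples suffice to distinguish the two cases
n_samples = int(20 * np.log(1 / vartheta) / gamma ** 2)

def decide(p_true):
    hits = rng.random(n_samples) < p_true
    return int(hits.mean() > threshold)

print(decide(p_hi), decide(p_lo))   # expect 1 and 0 with high probability
```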
In this approach, the only quantum circuit we need to use is the one in (<ref>). The circuit depth is therefore only $\Or((\alpha/h)\log(1/\gamma))$. It also does not require the $\Or(\log(1/\gamma))$ qubits that are introduced as a result of using amplitude estimation. These features make it suitable for near-to-intermediate term devices.
In [Lin and Tong, 2020] we proposed an eigenstate filtering method (similar in spirit to the method proposed in Section <ref>), and we combined it with quantum Zeno effect [Childs et al., 2002, Boixo et al., 2009] to solve the quantum linear system problem. The resulting algorithm utilizes the fact that the desired eigenstate along the eigenpath always corresponds to the eigenvalue 0. In the setting of quantum Zeno effect based state preparation, in which we have a series of Hamiltonians and wish to incrementally prepare the ground state of each of them, our algorithm in Theorem <ref> can be used to go from the ground state of one Hamiltonian to the next one, provided that we have a known upper bound for the ground energy. In the absence of such an upper bound, there is the possibility of using the algorithm in Corollary <ref> to solve this problem. However in this setting we only want to use the initial state once for every Hamiltonian, since preparing the initial state involves going through the ground state of all previous Hamiltonians. This presents a challenge and is a topic for our future work.
It is worth pointing out that none of the Hamiltonians used in the proofs of lower bounds in Section <ref> is a local Hamiltonian, and therefore our lower bounds do not rule out the possibility that if special properties such as locality are properly taken into consideration, better complexities can be achieved.
§ ACKNOWLEDGEMENTS
This work was partially supported by the Department of Energy under Grant No. DE-SC0017867, the Quantum Algorithm Teams Program under Grant No. DE-AC02-05CH11231, the Google Quantum Research Award (L.L.), and by the Air Force Office of Scientific Research under award number FA9550-18-1-0095 (L.L. and Y.T.). We thank András Gilyén, and the anonymous referees for helpful suggestions.
[Aharonov et al., 2009] D. Aharonov, D. Gottesman, S. Irani, and J. Kempe. The power of quantum systems on a line. Comm. Math. Phys., 287(1):41–65, 2009.
[Ambainis, 2012] A. Ambainis. Variable time amplitude amplification and quantum algorithms for linear algebra problems. In STACS'12 (29th Symposium on Theoretical Aspects of Computer Science), volume 14, pages 636–647, 2012.
[Ambainis, 2014] A. Ambainis. On physical problems that are slightly more difficult than QMA. In 2014 IEEE 29th Conference on Computational Complexity (CCC), pages 32–43. IEEE, 2014.
[Babbush et al., 2018] R. Babbush, N. Wiebe, J. McClean, J. McClain, H. Neven, and G. K.-L. Chan. Low-depth quantum simulation of materials. Phys. Rev. X, 8(1):011044, 2018.
[Bausch et al., 2018] J. Bausch, T. Cubitt, A. Lucia, and D. Perez-Garcia. Undecidability of the spectral gap in one dimension. arXiv preprint arXiv:1810.01858, 2018.
[Bennett et al., 1997] C. H. Bennett, E. Bernstein, G. Brassard, and U. Vazirani. Strengths and weaknesses of quantum computing. SIAM J. Comput., 26(5):1510–1523, 1997.
[Berry and Childs, 2009] D. W. Berry and A. M. Childs. Black-box Hamiltonian simulation and unitary implementation. arXiv preprint arXiv:0910.4157, 2009.
[Berry et al., 2015a] D. W. Berry, A. M. Childs, R. Cleve, R. Kothari, and R. D. Somma. Simulating Hamiltonian dynamics with a truncated Taylor series. Phys. Rev. Lett., 114(9):090502, 2015a.
[Berry et al., 2015b] D. W. Berry, A. M. Childs, and R. Kothari. Hamiltonian simulation with nearly optimal dependence on all parameters. In 2015 IEEE 56th Annual Symposium on Foundations of Computer Science, pages 792–809. IEEE, 2015b.
[Bessen, 2005] A. J. Bessen. Lower bound for quantum phase estimation. Phys. Rev. A, 71(4):042313, 2005.
[Boixo et al., 2009] S. Boixo, E. Knill, and R. D. Somma. Eigenpath traversal by phase randomization. Quantum Info. Comput., 9:833–855, 2009.
[Brassard et al., 2002] G. Brassard, P. Hoyer, M. Mosca, and A. Tapp. Quantum amplitude amplification and estimation. Contemp. Math., 305:53–74, 2002.
[Chao et al., 2020] R. Chao, D. Ding, A. Gilyén, C. Huang, and M. Szegedy. Finding angles for quantum signal processing with machine precision. arXiv preprint arXiv:2003.02831, 2020.
[Childs et al., 2002] A. M. Childs, E. Deotto, E. Farhi, J. Goldstone, S. Gutmann, and A. J. Landahl. Quantum search by measurement. Phys. Rev. A, 66(3):032314, 2002.
[Childs et al., 2017] A. M. Childs, R. Kothari, and R. D. Somma. Quantum algorithm for systems of linear equations with exponentially improved dependence on precision. SIAM J. Comput., 46:1920–1950, 2017.
[Childs et al., 2018] A. M. Childs, D. Maslov, Y. Nam, N. J. Ross, and Y. Su. Toward the first quantum simulation with quantum speedup. Proc. Natl. Acad. Sci., 115(38):9456–9461, 2018.
[Childs et al., 2019] A. M. Childs, Y. Su, M. C. Tran, N. Wiebe, and S. Zhu. A theory of Trotter error. arXiv preprint arXiv:1912.08854, 2019.
[Cubitt et al., 2015] T. S. Cubitt, D. Perez-Garcia, and M. M. Wolf. Undecidability of the spectral gap. Nature, 528(7581):207–211, 2015.
[Dong et al., 2020] Y. Dong, X. Meng, K. B. Whaley, and L. Lin. Efficient phase factor evaluation in quantum signal processing. arXiv preprint arXiv:2002.11649, 2020.
[Eremenko and Yuditskii, 2007] A. Eremenko and P. Yuditskii. Uniform approximation of $\mathrm{sgn}(x)$ by polynomials and entire functions. Journal d'Analyse Mathématique, 101(1):313–324, 2007.
[Ge et al., 2019] Y. Ge, J. Tura, and J. I. Cirac. Faster ground state preparation and high-precision ground energy estimation with fewer qubits. J. Math. Phys., 60(2):022202, 2019.
[Gilyén et al., 2018] A. Gilyén, Y. Su, G. H. Low, and N. Wiebe. Quantum singular value transformation and beyond: exponential improvements for quantum matrix arithmetics. arXiv preprint arXiv:1806.01838, 2018.
[Gilyén et al., 2019] A. Gilyén, Y. Su, G. H. Low, and N. Wiebe. Quantum singular value transformation and beyond: exponential improvements for quantum matrix arithmetics. In Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing, pages 193–204, 2019.
[Grover, 1996] L. K. Grover. A fast quantum mechanical algorithm for database search. In Proceedings of the twenty-eighth annual ACM symposium on Theory of computing, pages 212–219, 1996.
[Haah, 2019] J. Haah. Product decomposition of periodic functions in quantum signal processing. Quantum, 3:190, 2019.
[Jiang et al., 2018] Z. Jiang, K. J. Sung, K. Kechedzhi, V. N. Smelyanskiy, and S. Boixo. Quantum algorithms to simulate many-body physics of correlated fermions. Phys. Rev. Applied, 9(4):044036, 2018.
[Kempe et al., 2006] J. Kempe, A. Kitaev, and O. Regev. The complexity of the local Hamiltonian problem. SIAM J. Comput., 35(5):1070–1097, 2006.
[Kitaev, 1995] A. Y. Kitaev. Quantum measurements and the abelian stabilizer problem. arXiv preprint quant-ph/9511026, 1995.
[Kitaev et al., 2002] A. Y. Kitaev, A. Shen, and M. N. Vyalyi. Classical and quantum computation. Number 47. American Mathematical Soc., 2002.
[Kivlichan et al., 2018] I. D. Kivlichan, J. McClean, N. Wiebe, C. Gidney, A. Aspuru-Guzik, G. K.-L. Chan, and R. Babbush. Quantum simulation of electronic structure with linear depth and connectivity. Phys. Rev. Lett., 120(11):110501, 2018.
[Lin and Tong, 2020] L. Lin and Y. Tong. Optimal polynomial based quantum eigenstate filtering with application to solving quantum linear systems. Quantum, 4:361, 2020.
[Lloyd, 1996] S. Lloyd. Universal quantum simulators. Science, pages 1073–1078, 1996.
[Low and Chuang, 2017] G. H. Low and I. L. Chuang. Optimal Hamiltonian simulation by quantum signal processing. Phys. Rev. Lett., 118:010501, 2017.
[Low and Chuang, 2019] G. H. Low and I. L. Chuang. Hamiltonian simulation by qubitization. Quantum, 3:163, 2019.
[Low and Wiebe, 2018] G. H. Low and N. Wiebe. Hamiltonian simulation in the interaction picture. arXiv preprint arXiv:1805.00675, 2018.
[Low et al., 2016] G. H. Low, T. J. Yoder, and I. L. Chuang. Methodology of resonant equiangular composite quantum gates. Phys. Rev. X, 6:041067, 2016.
[Motta et al., 2019] M. Motta, C. Sun, A. T. K. Tan, M. J. O'Rourke, E. Ye, A. J. Minnich, F. G. Brandao, and G. K. Chan. Quantum imaginary time evolution, quantum Lanczos, and quantum thermal averaging. arXiv preprint arXiv:1901.07653, 2019.
[Nayak and Wu, 1999] A. Nayak and F. Wu. The quantum query complexity of approximating the median and related statistics. In Proceedings of the thirty-first annual ACM symposium on Theory of computing, pages 384–393, 1999.
[Oliveira and Terhal, 2005] R. Oliveira and B. M. Terhal. The complexity of quantum spin systems on a two-dimensional square lattice. arXiv preprint quant-ph/0504050, 2005.
[Parrish and McMahon, 2019] R. M. Parrish and P. L. McMahon. Quantum filter diagonalization: quantum eigendecomposition without full quantum phase estimation. arXiv preprint arXiv:1909.08925, 2019.
[Peruzzo et al., 2014] A. Peruzzo, J. McClean, P. Shadbolt, M.-H. Yung, X.-Q. Zhou, P. J. Love, A. Aspuru-Guzik, and J. L. O'Brien. A variational eigenvalue solver on a photonic quantum processor. Nat. Commun., 5:4213, 2014.
[Poulin and Wocjan, 2009] D. Poulin and P. Wocjan. Preparing ground states of quantum many-body systems on a quantum computer. Phys. Rev. Lett., 102(13):130503, 2009.
[Remes, 1934] E. Remes. Sur le calcul effectif des polynomes d'approximation de Tchebichef. C. R. Acad. Sci. Paris, 199:337–340, 1934.
[Stair et al., 2019] N. H. Stair, R. Huang, and F. A. Evangelista. A multireference quantum Krylov algorithm for strongly correlated electrons. arXiv preprint arXiv:1911.05163, 2019.
§ AN EXAMPLE OF BLOCK-ENCODING AND CONSTRUCTING THE REFLECTOR
In this section we use $\sigma_x$, $\sigma_y$, and $\sigma_z$ to denote the three Pauli matrices. We use $\mathrm{H}$ to denote the Hadamard gate. We consider a single-qubit illustrative example of block-encoded matrix and obtain the corresponding reflector through QSP.
The matrix we consider is
\[
H(a) = a \sigma_x + (1-a) I,
\]
for $0\leq a\leq 1$. Its block-encoding can be constructed using the following circuit:
\[
\Qcircuit @C=0.8em @R=1.em {
& \gate{V(a)} & \ctrlo{1} & \gate{V^\dagger(a)} & \qw \\
& \qw & \gate{\sigma_x} & \qw & \qw
}
\]
where
\[
V(a)=
\begin{pmatrix}
\sqrt{a} & -\sqrt{1-a} \\
\sqrt{1-a} & \sqrt{a}
\end{pmatrix}.
\]
We denote the above circuit by $U_H(a)$. This is an $(\alpha,m,0)$-block-encoding of $H(a)$ where $\alpha=1$ and $m=1$, since we can readily check that
\[
(\bra{0}\otimes I)U_H(a)(\ket{0}\otimes I) = H(a).
\]
The eigendecomposition of $H(a)$ is
\[
H(a) = \ket{+}\bra{+} + (1-2a)\ket{-}\bra{-},
\]
with eigenvalues $\lambda_{+}(a) = 1,\lambda_{-}(a) = 1-2a$. Our goal is to implement the reflector
\[
R_{<0}(a) = -\sign(H(a)) = -\ket{+}\bra{+} - \sign(1-2a)\ket{-}\bra{-}.
\]
To do this we need an odd polynomial $S(x;\delta,\epsilon)$ introduced in Lemma <ref>. Instead of the construction done in Ref. [Low and Chuang, 2017] we use the Remez algorithm [Remes, 1934] to obtain this polynomial. We choose $\delta=0.2$ and the $L^{\infty}$ error of the residual is required to be less than $10^{-4}$, i.e. $\epsilon \leq 10^{-4}$.
\[
\scalebox{0.8}{
\Qcircuit @C=0.8em @R=1.em {
\lstick{\ket{0}}& \gate{\mathrm{H}} & \targ & \gate{e^{-i \varphi_{d} \sigma_z}} & \targ & \qw & \targ & \gate{e^{-i \varphi_{d-1} \sigma_z}} & \targ & \qw & \qw &\raisebox{0em}{$\cdots$}&& \qw & \qw &\targ & \gate{e^{-i \varphi_0 \sigma_z}} & \targ & \qw & \gate{\mathrm{H}}&\qw \\
\lstick{\ket{0^m}}& \qw &\ctrlo{-1} & \qw & \ctrlo{-1} & \multigate{1}{U_H(a)} & \ctrlo{-1} & \qw & \ctrlo{-1} & \multigate{1}{U^{\dag}_H(a)} &\qw &\raisebox{0em}{$\cdots$} && \multigate{1}{U_H(a)} & \qw &\ctrlo{-1} & \qw & \ctrlo{-1} & \qw& \qw & \qw \\
\lstick{\ket{\psi}}& \qw &\qw &\qw&\qw &\ghost{U_H(a)} &\qw&\qw&\qw&\ghost{U_H(a)}&\qw &\raisebox{0em}{$\cdots$} && \ghost{U_H(a)} & \qw&\qw&\qw & \qw&\qw& \qw &\qw
}}
\]
Figure <ref>: The circuit implementing the polynomial eigenvalue transformation through QSP for an odd polynomial with phase factors $\{\varphi_j\}_{j=0}^d$. $\mathrm{H}$ is the Hadamard gate and $\sigma_z$ is the Pauli-$Z$ gate.
Given the polynomial $S(x;\delta,\epsilon)$, using the optimization method proposed in Ref. [Dong et al., 2020], we find a polynomial $P(x)\in \CC [x]$ of odd degree $d$ such that
\[
\max_{x\in[-1,1]}|\Re P(x)-S(x;\delta,\epsilon)|\leq \epsilon',
\]
where $P(x)$ is characterized by a sequence of phase factors $\{\varphi_j\}_{j=0}^d$ satisfying
\begin{equation}
\label{eq:phase_fac_P}
\begin{pmatrix}
P(x) & \cdot \\
\cdot & \cdot
\end{pmatrix}
=e^{i\varphi_0 \sigma_z}\prod_{j=1}^d[R(x)e^{i\varphi_j \sigma_z}],
\end{equation}
where
\[
R(x)=
\begin{pmatrix}
x & \sqrt{1-x^2} \\
\sqrt{1-x^2} & -x
\end{pmatrix}.
\]
The existence of the phase factors is guaranteed by <cit.>. Ref. [Dong et al., 2020] uses a quasi-Newton method to solve a least squares problem to obtain these phase factors, and we terminate the iteration only when the $L^\infty$ error of the residual of the real part is smaller than $\epsilon'=10^{-4}$.
The circuit in Figure <ref> with phase factors $\{\varphi_j\}_{j=0}^d$ implements the transformation $H/\alpha\mapsto \Re P(H/\alpha)\approx S(H/\alpha;\delta,\epsilon)$. The various components of this circuit are explained in detail in <cit.>. An important component of this circuit is
\[
\Qcircuit @C=0.8em @R=1.em {
& \targ & \gate{e^{-i \varphi \sigma_z}} & \targ & \qw \\
& \ctrlo{-1} & \qw & \ctrlo{-1} & \qw
}
\]
where the first register has one qubit, the second register has $m$-qubits, and the open bullet indicates control-on-zero for multiple control qubits. This component implements the operator
\[
\ket{0}\bra{0}\otimes (e^{i \varphi (2\ket{0^m}\bra{0^m}-I)}) + \ket{1}\bra{1}\otimes (e^{-i \varphi (2\ket{0^m}\bra{0^m}-I)}).
\]
For a detailed discussion see <cit.>.
Figure <ref>: The error of implementing $R_{<0}(a)$ for $a\in[0,1]$ using QSP, with polynomial $S(x;\delta,\epsilon)$ where $\delta=0.2$ and $\epsilon$ is of the order of $10^{-4}$. The vertical axis uses a logarithmic scale.
Using the above circuit, Lemma <ref> guarantees that when the eigenvalues of $H(a)$ are contained in $[-1,-\delta]\cup[\delta,1]$, we will have a good approximation of $R_{<0}(a)$. However when at least one eigenvalue, which in our case can only be $\lambda_-(a)=1-2a$, is in $(-\delta,\delta)$, or in other words when $a\in (0.4,0.6)$, there is no such guarantee. We plot the operator norm error between the approximate reflector obtained through QSP and the exact reflector $R_{<0}(a)$ in Figure <ref>. It can be seen in the figure that the error is smaller than $10^{-4}$ everywhere except for $a\in(0.4,0.6)$, where the error spikes.
§ GAP AND OVERLAP IN THE UNSTRUCTURED SEARCH PROBLEM
In this appendix we compute the spectral gap of the Hamiltonian $H(1/2-N^{-1/2+\delta})$ for $H(\tau)$ defined in (<ref>), $0<\delta<1/6$, and the overlap between its ground state and $\ket{u}$ and $\ket{t}$ defined in Section <ref>.
The first thing we should realize is that we only need to care about the subspace of the Hilbert space spanned by $\ket{u}$ and $\ket{t}$. In the orthogonal complement of this subspace $H(\tau)$ is simply a multiple of the identity. In this subspace, with respect to the non-orthogonal basis $\{\ket{u},\ket{t}\}$, the operator $H(1/2-N^{-1/2+\delta})$ is represented by the following matrix
\begin{equation}
\label{eq:ham_search}
N^{\delta-1/2}
\begin{pmatrix}
-2 & -(N^{-\delta}+2N^{-1/2}) \\
-(N^{-\delta}-2N^{-1/2}) & 2
\end{pmatrix}.
\end{equation}
Direct calculation shows the eigenvalues
\[
\lambda_{\pm} = \pm N^{\delta-1/2}\sqrt{4+N^{-2\delta}-4N^{-1}} = \pm N^{\delta-1/2}(2+\frac{1}{4}N^{-2\delta}+\Or(N^{-4\delta})).
\]
Thus we obtain the spectral gap in (<ref>). To simplify notation we let $\wt{\lambda}=N^{1/2-\delta}\lambda_+$. We then compute the ground state. We first find an eigenvector corresponding to $\lambda_-$
\[
\begin{aligned}
\ket{\chi} &= N^\delta((N^{-\delta}+2N^{-1/2})\ket{u}+(-2+\wt{\lambda})\ket{t}) \\
&=(1+2N^{\delta-1/2})\ket{u} + (\frac{1}{4}N^{-\delta}+\Or(N^{-3\delta}))\ket{t} \\
&=\ket{u}+ \frac{1}{4}N^{-\delta}\ket{t}+\Or(N^{\delta-1/2}).
\end{aligned}
\]
We still need to normalize $\ket{\chi}$. The normalization factor is
\[
\begin{aligned}
\|\ket{\chi}\|& = \sqrt{(1+2N^{\delta-1/2})^2 + (\frac{1}{4}N^{-\delta}+\Or(N^{-3\delta}))^2 + \frac{2}{\sqrt{N}}(1+2N^{\delta-1/2})(\frac{1}{4}N^{-\delta}+\Or(N^{-3\delta}))} \\
&= 1 + \Or(N^{-2\delta}).
\end{aligned}
\]
Note that the third term under the square root comes from the overlap between $\ket{u}$ and $\ket{t}$, and it does not play an important role asymptotically. Therefore, after normalization, we obtain the expression (<ref>) for the normalized eigenstate.
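These asymptotics can be checked numerically from the matrix representation (<ref>) (a minimal numpy sketch with an illustrative value of $\delta$):

```python
import numpy as np

delta = 0.1                        # illustrative value of the exponent
for n in (16, 20, 24):
    N = 2.0 ** n
    M = N ** (delta - 0.5) * np.array(
        [[-2.0, -(N ** -delta + 2 * N ** -0.5)],
         [-(N ** -delta - 2 * N ** -0.5), 2.0]])
    lam = np.sort(np.linalg.eigvals(M).real)
    print(lam[1] - lam[0], 4 * N ** (delta - 0.5))   # gap vs. its leading term
```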
# Temporal Convolutional Attention-based Network For Sequence Modeling
Hongyan Hao1 (equal contribution), Yan Wang2 (equal contribution), Yudi Xia3, Furao Shen1, Jian Zhao4
1Department of Computer Science and Technology, Nanjing University, Nanjing, China
2School of Artificial Intelligence, Nanjing University, Nanjing, China
3Software Institute, Nanjing University, Nanjing, China
4School of Electronic Science and Engineering, Nanjing University, Nanjing,
China
{haohy, yanwang<EMAIL_ADDRESS>
{frshen<EMAIL_ADDRESS>
###### Abstract
With the development of feed-forward models, the default model for sequence
modeling has gradually evolved to replace recurrent networks. Many powerful
feed-forward models based on convolutional networks and attention mechanisms
were proposed and show more potential to handle sequence modeling tasks. We ask
whether there is an architecture that can not only achieve an approximate
substitution of recurrent networks but also absorb the advantages of feed-
forward models. So we propose an exploratory architecture referred to as the Temporal
Convolutional Attention-based Network (TCAN) which combines temporal
convolutional network and attention mechanism. TCAN includes two parts, one is
Temporal Attention (TA) which captures relevant features inside the sequence,
the other is Enhanced Residual (ER) which extracts the shallow layer’s
important information and transfers it to deep layers. We improve the state-of-
the-art results of bpc/perplexity to 26.92 on word-level PTB, 1.043 on
character-level PTB, and 6.66 on WikiText-2.
## 1 Introduction
With the development of deep learning, researchers have gradually concluded
three frequently used structures for sequence modeling, i.e., convolutional
networks, recurrent networks and attention mechanisms. For sequence learning
tasks, recurrent networks were the ”default” solution in the early years;
however, they have some defects that are hard to avoid. Meanwhile, feed-forward
models have evolved to handle sequence modeling tasks. We aim to explore a
better architecture to implement an approximate
substitution of recurrent networks using feed-forward networks, so that it can
not only absorb the advantages of feed-forward models but also make up for the
shortcoming of recurrent networks.
Before introducing our new architecture, we first analyze the problem of
sequence modeling to conclude what characteristics an effective architecture
should have. As the input of the task, the sequence’s data point at time step $t$
is conditioned on the prior ones, and any two data points can be relevant.
Accordingly, a feasible model should characterize causality and learn the
conditional correlation between data points. Besides, for high efficiency, it
should train and test on data in parallel. And we hope the size of this model can
be as small as possible.
Researchers have made many attempts, from feed-forward models to recurrent
models. In the early days of deep learning, Multi-Layer Perceptron (MLP) was
regarded as a complex linear function to solve the problem of prediction. An
idea found in machine learning and statistical models is that for a model,
parameter sharing across different parts makes it possible to extend and
generalize Goodfellow et al. (2016). However, MLP trains separate parameters
for each input feature; that is to say, it can’t share parameters. Waibel et
al., [1989] proposed a time-delay neural network (TDNN), which achieves
parameter sharing by applying the same convolution kernel at each time step.
But the range of memory is limited by the delay factor, so it can’t handle
tasks with intensive data. The recurrent neural network (RNN) is different from
them: it uses the same parameters to process sequential input from the
beginning to the current time step. Due to that, each member of the output is a
function of the previous member of the output, so that RNN is theoretically
capable of infinite memory Bai et al. (2018); Goodfellow et al. (2016) and can
process variable length input. At the same time, it satisfies causality and
conditional correlation for sequence modeling. This strong expressiveness
makes it the ”default” choice for many sequence modeling tasks.
Although RNNs (i.e. RNN based architectures) have many strengths, two
shortcomings limit their applicability in reality. One is that in both
training and evaluation, the later time steps must wait for their predecessors
to complete; this inherently sequential nature precludes parallelization in
training and evaluating processes Vaswani et al. (2017). The other one is that
with the growth of sequence length, RNNs pay more attention to nearby
context: they are sensitive to the order of words within the most recent
sentence but ignore word order in the long-range context, that is to say, the
”infinite memory” is unnecessary Khandelwal et al. (2018). While some
variants and optimization methods achieved significant improvements Merity et
al. (2018), the fundamental constraints mentioned above remain.
Recently, researchers devised many alternative feed-forward models for
sequence modeling, which mainly refer to temporal convolutional network-based
models (TCNs) Bai et al. (2018) and attention mechanism-based models Vaswani
et al. (2017). The basic TCN architecture mainly contains three modules,
causal convolution, dilated convolution and residual block Bai et al. (2018).
The output member of the $t$-th step is a function of a particular
number (determined by dilation factor and kernel size) of neighboring members
of the input before $t$. However, TCN doesn’t learn dependencies between distant
positions inside the sequence, and it doesn’t extract internal correlation
information of the input. Bai et al., [2019] proposed TrellisNet, characterized
by weight tying across the depth and direct injection of the input into deep
layers. Besides that, it combines many optimization methods to promote
performance. But these additions make TrellisNet bigger and slower than the
original TCN. The representative attention mechanism based model is the
Transformer Vaswani et al. (2017), which tactfully avoids recurrence by
entirely relying on the attention mechanism to draw global dependencies between
input and output. Based on that, researchers designed some powerful models,
including GPT-2, which combines generative pre-training on a diverse corpus of
unlabeled text and discriminative fine-tuning on specific tasks. GPT-2 has
achieved state-of-the-art performance on many sequence modeling tasks. While
the refinements and variants of Transformers have stronger processing power for
sequential data, they are too big to take into account all the characteristics
of the sequence model introduced earlier.
Figure 1: Architecture overview and an inter-layer transformation diagram. (a)
An overview of the whole architecture, including the input layer, hidden layers
and output layer; the green square indicates TCAN. (b) The inter-layer
transformation of TCAN; gray squares indicate the intermediate variables of
temporal attention and convolution, and the yellow block indicates the enhanced
residual.
In this work, we propose an exploratory architecture referred to as the Temporal
Convolutional Attention-based Network (TCAN). On the one hand, it is inspired
by TCN to utilize a dilated causal network as an analogy to RNN’s input
causality. On the other hand, it combines the self-attention mechanism Vaswani et
al. (2017) to extract internal correlation information and learn dependencies
between distant positions. As illustrated in Fig 1, our model consists of two
parts. One of them is Temporal Attention (TA), which differs from the regular
attention mechanism in the property of internal causality: TA’s $l$-th hidden
layer $t$-th time step input is a function of all previous inputs $x_{1:t}$,
which guarantees no information leakage from future to past inside the hidden
layer. In contrast, self-attention captures the information of all time steps in
the former layer. The other part is Enhanced Residual (ER), which obtains the
contribution weights of every time step for prediction from TA. The ER will be
summed with a normal skip connection as the final residual of the current
layer. In summary, we try to use a convolutional network and an attention
mechanism to design an approximate substitution of RNN that satisfies the
conditions of sequence modeling.
To evaluate the impact of TCAN, we conduct experiments on the Penn Treebank
(PTB) Marcus et al. (1993) and the WikiText-2 (WT2) data set Merity et al.
(2017). The results show that TCAN attains state-of-the-art performance by
significant margins. Besides, TCAN has a smaller size than Transformer based
and RNN based models. This indicates that our proposed architecture could
effectively extract sequence features and could be a feasible alternative to
RNN in resolving sequence modeling tasks. The code for reproducing the results
is open sourced and is available at https://github.com/haohy/TCAN
## 2 Methodology
In this section, we will introduce the generic TCAN in detail. We first present
the definition of the sequence modeling task in Section 2.1. Then we describe
the processing of data from input to output. We apply the commonly used encoder-
decoder structure for sequence modeling. The causal dilated network meets the
requirement of sequential causality. Section 2.2 will give an overall look at
the TCAN architecture. The next two sub-sections introduce the details of the two
sub-modules, including Temporal Attention in Section 2.3 and Enhanced Residual
in Section 2.4.
### 2.1 Sequence Modeling
Sequence modeling is a key problem in domains spanning audio, language
modeling, music processing, time series forecasting, and many others Bai et
al. (2018). Suppose we have an input sequence $x_{1:T}=x_{1},x_{2},\cdots,x_{T}$
with length $T$ and the target sequence $y_{1:T}=y_{1},y_{2},\cdots,y_{T}$
with length $T$, the task is to find a function $\mathbf{SeqMod}$ that
satisfies the following relation:
$y_{1:T}=\mathbf{SeqMod}(x_{1:T})$
Two constraints should be noticed: 1) $y_{t}$ should satisfy the causal
constraint: it is a function of $x_{1:t}$, and the model should prevent future
information $x_{t+1:T}$ from leaking; 2) the length of the input $x_{1:T}$ should
be the same as the length of the output. In essence, finding the function
$\mathbf{SeqMod}$ amounts to finding the network that minimizes some expected loss
between the model’s outputs and the ground truth, which we define to be simply
the input sequence shifted by one time step.
Different from neural machine translation, which utilizes an entire input
sequence to predict the whole sentence, sequence modeling is an auto-
regressive prediction that can only depend on past information. Actually, it
is this constraint that made recurrent networks the default choice for
sequence modeling rather than feed-forward models.
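Concretely, the target can be obtained by shifting the input one step, and the loss is the usual cross-entropy (a minimal PyTorch illustration; note that `torch.roll` wraps the last target around, which a real implementation would mask or trim):

```python
import torch
import torch.nn.functional as F

vocab, T = 10000, 35
x = torch.randint(vocab, (1, T))       # input tokens x_{1:T}
y = torch.roll(x, shifts=-1, dims=1)   # target: input shifted by one time step

logits = torch.randn(1, T, vocab)      # stand-in for a model's output
loss = F.cross_entropy(logits.view(-1, vocab), y.view(-1))
print(loss.item(), torch.exp(loss).item())   # loss and its perplexity
```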
### 2.2 Model Architecture
From a holistic perspective, we adopt a similar structure to TCN, which
contains an encoder and a decoder like most competitive neural sequence
transduction models. At the beginning of the model, the encoder maps an input
sequence of symbol representations $x_{1:T}=x_{1},x_{2},\cdots,x_{T}$ to a
sequence of continuous representations $S_{1:T}^{(0)}=\text{Encoder}(x_{1:T})$
where $T$ indicates the length of the sequence and 0 indicates the $0$-th layer,
i.e. the first hidden layer’s input. Then we apply dilated causal convolutions
with different kernel sizes as hidden layers across $L$ layers. After the
final hidden layer, the decoder generates an output sequence
$\hat{y}_{1:T}=\hat{y}_{1},\hat{y}_{2},\cdots,\hat{y}_{T}$. At the most basic
level, the intermediate variable at time step $t$ and level $l+1$
($s_{1:T}^{(l+1)}$) is computed via four steps, illustrated in Figure 1:
1.
The $s_{1:T}^{(l)}$ is passed through Temporal Attention (TA):
$sa_{1:T}^{(l)}=\text{TA}(s_{1:T}^{(l)})$
where $sa_{1:T}^{(l)}$ indicates an intermediate variable whose $t$-th entry
contains information from time steps before $t$, illustrated in Figure 1(b) and
elaborated in Section 2.3.
2.
Given $sa_{1:T}^{(l)}$, we apply a causal convolution on it:
$sc_{1:T}^{(l)}=\text{Conv1d}(sa_{1:T}^{(l)})$
where $sc_{1:T}^{(l)}$ indicates the output of the causal convolution. The causal
block ($L_{b}$) can be stacked into many layers. To keep the same length
in each layer, we add zero padding of length $(k-1)2^{l-1}$ on the left,
shown as white blocks in Figure 1(a). In this way, the relevant information on
the left of the input gradually accumulates to the right.
3.
Before the feature maps are passed through the activation function to get
$s_{1:T}^{(l+1)}$, we add three components $s_{1:T}^{(l)}$, $sc_{1:T}^{(l)}$
and $sr_{1:T}^{(l)}$, where $sr_{1:T}^{(l)}$ represents Enhanced Residual
(ER):
$sr_{1:T}^{(l)}=\text{ER}(s_{1:T}^{(l)})$
which is represented by the yellow blocks in Figure 1(b) and will be described
in detail in Section 2.4.
4.
A full TCAN network is built by stacking $L$ layers of the TCAN block across
depth and time. We use dilated convolutions to give the network a large enough
receptive field while preserving computational efficiency, and we set the size
of the dilation to increase exponentially with the depth of the network (i.e.,
$d=2^{l}$ for layer $l$ in the network). A minimal code sketch of one such
block is given after this list.
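The following is a minimal PyTorch sketch of one TCAN block wiring the four steps above (an illustration under our own naming, not the authors' released implementation; following Section 2.3 we apply the attention weights to the values $v=h(s)$):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TCANBlock(nn.Module):
    """One TCAN hidden layer (sketch): temporal attention, dilated causal
    convolution, then the sum of the identity mapping, the convolution
    output and the enhanced residual, passed through an activation."""

    def __init__(self, channels, kernel_size, dilation):
        super().__init__()
        self.f = nn.Linear(channels, channels)   # keys
        self.g = nn.Linear(channels, channels)   # queries
        self.h = nn.Linear(channels, channels)   # values
        self.pad = (kernel_size - 1) * dilation  # left zero padding keeps length T
        self.conv = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)

    def forward(self, s):                        # s: (batch, T, channels)
        d_k = s.size(-1)
        W = self.f(s) @ self.g(s).transpose(1, 2) / d_k ** 0.5   # W[i,j] = k_i . q_j
        mask = torch.triu(torch.ones(W.shape[-2:], dtype=torch.bool, device=s.device), 1)
        Wa = torch.softmax(W.masked_fill(mask, float('-inf')), dim=1)
        sa = Wa @ self.h(s)                      # temporal attention output (causal)
        sr = Wa.sum(dim=2, keepdim=True) * s     # enhanced residual: M_t * s_t
        sc = self.conv(F.pad(sa.transpose(1, 2), (self.pad, 0))).transpose(1, 2)
        return torch.relu(s + sc + sr)

x = torch.randn(4, 64, 32)                       # (batch, T, channels)
print(TCANBlock(32, kernel_size=3, dilation=2)(x).shape)   # torch.Size([4, 64, 32])
```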
Figure 2: Temporal Attention Block
### 2.3 Temporal Attention
Temporal Attention (TA), illustrated in Figure 2, can be described as a
process that integrates the influence of previous time steps into the current
time step. Our module is inspired by the self-attention structure. Self-
attention utilizes the information of all time steps, both past and future of
time step $t$. But for sequential data, we can only handle past
information, so we refine the processing of the weight matrix to satisfy the
sequential nature.
In the first step, we use three linear transformations $f$, $g$ and $h$ to map
$s_{1:T}^{(l)}$ to three different vectors: keys
($k_{1:T}^{(l)}=f(s_{1:T}^{(l)})$), queries ($q_{1:T}^{(l)}=g(s_{1:T}^{(l)})$)
and values ($v_{1:T}^{(l)}=h(s_{1:T}^{(l)})$) of dimension $d_{k}$. Then, to
obtain the weight matrix $Wa^{(l)}$, we compute the dot products of
$q_{1:T}^{(l)}$ and $k_{1:T}^{(l)}$ and divide each by $\sqrt{d_{k}}$:
$W_{i,j}^{(l)}=\frac{{k_{i}^{(l)}}^{\mathrm{T}}\cdot{q_{j}^{(l)}}}{\sqrt{d_{k}}}$
where $i,j=1,2,\cdots,T$. After that, we extract the lower triangular part of
$W^{(l)}$ as follows:
$Wl_{i,j}^{(l)}=\begin{cases}W_{i,j}^{(l)},&\text{if i $\geq$ j}\\\
0,&\text{if i $<$ j}\end{cases}$
where $i,j=1,2,\cdots,T$. This masks the weights of future time steps, so that
no future information is used. Finally, we apply a softmax function to
normalize $Wl^{(l)}$ and obtain $Wa^{(l)}$ (yellow blocks in Figure 2). Note
that we apply the softmax along the first dimension of $Wl^{(l)}$. With this
normalization, the sum of the weights across one row can be larger than 1;
that is, the total contributions of the previous time steps $s_{1:t}^{(l)}$ to
the current $s_{t}^{(l)}$ can differ more strongly than under normalization
along the second dimension. This choice is justified by an ablation study in
Section 3.4. Given the weights, we obtain the weighted output by:
$sa_{t}^{(l)}=\sum_{i=1}^{t}Wa_{t,i}^{(l)}\cdot s_{i}^{(l)}$
where $t=1,2,\cdots,T$. $sa_{t}^{(l)}$ is then used as the input of the
causal convolution, as described in the second step of Section 2.2.
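A minimal PyTorch sketch of this attention, matching our reading of the indexing above (the class and tensor layout are our own illustration; note the softmax along the column dimension):

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class TemporalAttention(nn.Module):
    """Causal self-attention whose softmax runs along the first (column)
    dimension of the lower-triangular weight matrix, as in Section 2.3."""
    def __init__(self, channels):
        super().__init__()
        self.f = nn.Linear(channels, channels)   # keys    k = f(s)
        self.g = nn.Linear(channels, channels)   # queries q = g(s)
        self.h = nn.Linear(channels, channels)   # values  v = h(s)
        self.scale = math.sqrt(channels)

    def forward(self, s):                        # s: (batch, C, T)
        x = s.transpose(1, 2)                    # (batch, T, C)
        k, q, v = self.f(x), self.g(x), self.h(x)
        W = torch.matmul(k, q.transpose(1, 2)) / self.scale  # W[b,i,j]=k_i.q_j
        mask = torch.ones(W.size(1), W.size(2),
                          device=W.device).tril().bool()
        Wl = W.masked_fill(~mask, float('-inf'))  # keep i >= j: hide the future
        Wa = F.softmax(Wl, dim=1)                 # "vertical" softmax over i
        sa = torch.matmul(Wa, v)                  # sa_t = sum_{j<=t} Wa[t,j] v_j
        return sa.transpose(1, 2), Wa             # back to (batch, C, T)
```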
Figure 3: Enhanced Residual Block
### 2.4 Enhanced Residual
Before passing the intermediate variables through the activation function to
obtain $s_{1:T}^{(l+1)}$, we integrate three pieces of information: the
identity mapping $s_{1:T}^{(l)}$, the convolved vectors $sc_{1:T}^{(l)}$, and
the Enhanced Residual (ER), which we design to extract relatively important
information and transfer it to the next layer. As a practical analogy, when
you try to learn something new, being told which parts are relatively
important lets you reinforce those particular parts and grasp the material
faster and more firmly.
The enhanced residual harnesses the weight matrix $Wa_{1:T}^{(l)}$ obtained
from Temporal Attention. We take the sum of the weights in each row of
$Wa_{1:T}^{(l)}$ to indicate the importance level of each time step, so we
obtain another weight vector as follows:
$M_{t}=\sum_{i=1}^{t}Wa_{t,i}^{(l)}$
where $M_{t}$ denotes the degree of importance of time step $t$,
$t=1,2,\cdots,T$. Then, taking the Hadamard product of $M$ and $s^{(l)}$, we
obtain the enhanced residual $sr^{(l)}$.
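A minimal sketch of this module as a plain function, assuming the `Wa` tensor returned by the temporal attention sketch above:

```python
def enhanced_residual(s, Wa):
    """Reweight s^{(l)} by a per-time-step importance score (Section 2.4).

    s:  (batch, C, T) hidden states of layer l (torch tensor).
    Wa: (batch, T, T) lower-triangular attention weights.
    M[:, t] sums row t of Wa, i.e. M_t = sum_i Wa[t, i].
    """
    M = Wa.sum(dim=2)              # (batch, T): importance of each time step
    return s * M.unsqueeze(1)      # Hadamard product, broadcast over channels
```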
## 3 Experiments
To evaluate the effectiveness of our TCAN architecture, we conduct experiments
on the language modeling (LM) task, defined in Section 2.1. The datasets used
for LM are described in the following section; they share the same
architecture with different parameter settings. We mainly compare TCAN with
representative models from the three families of architectures listed in
Section 3.2. In addition, we verify the effectiveness of the enhanced residual
and compare three softmax variants for the temporal attention.
### 3.1 Datasets and Setting
#### Penn Treebank
Marcus et al. (1993): The Penn Treebank (PTB) dataset has two forms for
testing the performance of models. One is word-level PTB, which contains 888K
words for training, 70K for validation and 79K for testing, with a vocabulary
size of 10K. The end of each sentence is marked with $<$eos$>$ and all numbers
are replaced with a $?$ symbol. The other is character-level PTB, which
contains 5M characters for training, 396K for validation and 446K for testing,
with an alphabet size of 50. Note that $<$eos$>$ is considered one character
here. Compared to the former, character-level PTB is a medium-sized dataset.
These two forms of PTB are highly studied in sequence modeling Bai et al.
(2019); Bai et al. (2018); Krueger et al. (2017).
#### WikiText-2
Merity et al. (2017): The WikiText-2 (WT2) dataset contains lightly pre-
processed Wikipedia articles. In contrast to PTB, it retains capitalization,
punctuation and numbers. WikiText-2 features a vocabulary of over 30,000 words
and contains approximately 2 million words, which is over two times larger
than the PTB dataset.
Taking into account the different characteristics of each dataset, we set
different parameters for each dataset based on the same TCAN architecture. We
use gradient clipping, as in TCN, to compare the performance with TCAN under
the same optimization. For narrative convenience, we denote the dimension of
the input embedding as $D_{embed}$, the number of stacked layers as $L$, the
number of convolutional layers inside the TCAN block as $L_{b}$ and the
dimension of the linear transformations in Temporal Attention as $D_{attn}$.
For word-level PTB, we set the kernel size to $3$. To make the size of our
model as close as possible to the models we compare against, we set
$D_{embed}=300$, $L=4$, $L_{b}=1$ and $D_{attn}=600$. For character-level PTB,
because the number of distinct characters in the whole dataset is limited and
smaller than the word-level vocabulary, we set $D_{embed}=100$.
Correspondingly, because of the stronger long-term dependencies between
characters, we set the convolutional kernel size to $7$. At the same time, the
input sequences are longer, so we need more layers to capture the larger
receptive field; we set $L=6$, $L_{b}=1$ and $D_{attn}=100$. WT2 is a larger
dataset whose dictionary is roughly twice as large, so we set $D_{embed}=600$,
$L=6$, $L_{b}=1$ and $D_{attn}=600$. We use the Adam optimizer Kingma and Ba
(2015) with a learning rate of 0.0001 in all of the above experiments.
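For reference, the stated settings can be summarized as follows (a plain Python summary; the kernel size for WT2 is not stated in the text, so it is omitted):

```python
# Hyperparameters as stated above; the keys follow the paper's notation.
CONFIGS = {
    "ptb_word": dict(D_embed=300, L=4, L_b=1, D_attn=600, kernel_size=3),
    "ptb_char": dict(D_embed=100, L=6, L_b=1, D_attn=100, kernel_size=7),
    "wikitext2": dict(D_embed=600, L=6, L_b=1, D_attn=600),  # kernel unstated
}
ADAM_LR = 1e-4  # Adam learning rate used in all experiments
```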
To verify the usefulness of the enhanced residual, we consider two versions of
the model, one with the enhanced residual (TCAN) and one without it
(TCAN-no-res). The enhanced residual does not add parameters to the model, so
the two versions have equal model size.
### 3.2 Compared Methods
#### RNN based
: Researchers have proposed many variants of RNNs. Among them, the LSTM has
become the most widely used benchmark model because it alleviates the problem
of vanishing gradients. Coupled with regularization and optimization methods,
it achieves impressive results on several benchmark datasets in language
modeling. AWD-LSTM Merity et al. (2018) is a representative model: it
introduces the weight-dropped LSTM, which applies DropConnect on hidden-to-
hidden weights as a form of recurrent regularization, and NT-AvSGD, a non-
monotonically triggered (NT) variant of the averaged stochastic gradient
method (AvSGD) in which the averaging trigger is determined by an NT condition
rather than being tuned by the user. Later, improved algorithms were built on
top of AWD-LSTM Wang et al. (2019); Gong et al. (2018). Besides these, we also
compare TCAN with NAS models Zoph and Le (2017), which use reinforcement
learning to optimize the validation performance of generated architectures and
thereby produce neural network descriptions.
#### CNN based
: A few notable convolutional networks have been applied to sequence modeling
in recent years (e.g., the WaveNet van den Oord et al. (2016a) and PixelCNN
van den Oord et al. (2016b) architectures). Among the derivative models, the
best one is TrellisNet Bai et al. (2019). It combines weight tying across
depth and time with input insertion and, in addition, employs several
techniques including dropout, weight normalization and an auxiliary loss.
Putting these techniques together, it achieves state-of-the-art performance
among the various derivatives of TCN.
#### Attention mechanism
: In recent years, the most frequently used and effective models in industry
have been based on the Transformer Vaswani et al. (2017) structure, which is
in essence a derivative of the attention mechanism. GPT-2 Radford et al.
(2019) is considered the best model at present, and it achieves state-of-the-
art results on many sequence modeling benchmarks. GPT-2 is designed as a
multitask learner and utilizes a combination of pre-training and supervised
finetuning to achieve more flexible forms of transfer. Consequently, it has
1542M parameters, far more than the other models compared here.
Word-level Penn Treebank (PTB)
---
Models | Size | $\text{ppl}^{l}$
Generic TCN Bai et al. (2018) | 13M | 88.68
NAS Cell Zoph and Le (2017) | 54M | 62.4
AWD-LSTM Merity et al. (2018) | 24M | 58.8
TrellisNet Bai et al. (2019) | 33M | 56.80
TrellisNet-MoS Bai et al. (2019) | 34M | 54.19
GPT-2 Radford et al. (2019) | 1542M | 35.76
TCAN-no-res | 13M | 28.10
TCAN | 13M | 26.92
Table 1: Test perplexities (ppl) on word-level language modeling with the PTB
corpus. $l$ means lower is better.
WikiText-2 (WT2)
---
Models | Size | $\text{ppl}^{l}$
Generic TCN Bai et al. (2018) | 28.6M | 138.5
AWD-LSTM Merity et al. (2018)† | 33M | 44.3
AWD-LSTM-MoS Yang et al. (2018)† | 35M | 40.68
GPT-2 Radford et al. (2019) | 1542M | 18.34
TCAN-no-res | 33M | 6.95
TCAN | 33M | 6.66
Table 2: Test perplexities (ppl) on word-level language modeling with the
WikiText-2 corpus. $\dagger$ indicates using dynamic evaluation.
Character-level Penn Treebank (PTB)
---
Models | Size | $\text{bpc}^{l}$
Generic TCN Bai et al. (2018) | 3.0M | 1.31
IndRNN Li et al. (2018) | 12.0M | 1.23
NAS Cell Zoph and Le (2017) | 16.3M | 1.214
AWD-LSTM Merity et al. (2018) | 13.8M | 1.175
TrellisNet-MoS Bai et al. (2019) | 13.4M | 1.158
TCAN-no-res | 4.3M | 1.060
TCAN | 4.3M | 1.043
Table 3: Test bits-per-character (bpc) on character-level language modeling
with the PTB corpus.
### 3.3 Results and Analysis
Figure 4: The weights in the last TA layer of TCAN when the matrix $W_{l}$ is
computed with softmax in the vertical (a), horizontal (b) and mixed
vertical+horizontal (c) direction.
We evaluate TCAN on word-level and character-level language modeling on the
Penn Treebank (PTB) and WikiText-2 datasets, as described in Section 3.1. The
prior state of the art on these datasets was mostly set by GPT-2. For the
word-level experiments, we use PTB and WikiText-2. The former is a small but
frequently used dataset, on which we also conduct further ablation studies.
The latter is larger, so it reduces the risk of overfitting to a certain
extent. As shown in Table 1 and Table 2, compared with TCAN, AWD-LSTM and
TrellisNet have more parameters but poorer performance (higher perplexity).
GPT-2 performs better than the others, but its model size is very large, so it
requires an enormous natural language corpus and substantial computational
resources to obtain the pre-trained model. The largest part of the parameters
lies in the embedding module, whose size is proportional to the dictionary
size; this is why the model for WikiText-2 is bigger than that for word-level
PTB. On the PTB dataset, TCAN uses fewer parameters to achieve state-of-the-
art performance. On WikiText-2, TCAN uses the same number of parameters as
AWD-LSTM and also sets a new state of the art. For the character-level
dataset, shown in Table 3, we set a new state of the art as well. Since GPT-2
was trained at the word level, it is not applicable to this dataset. From the
comparison of AWD-LSTM with TrellisNet and TCN, we find that for this sequence
modeling task, feed-forward models can not only outperform RNN-based models
but also do so at a similar or even smaller size. Furthermore, TCAN has a
simpler structure and better performance than TrellisNet, which suggests that
combining the TCN structure with the attention mechanism is a feasible
direction to explore.
Comparing TCAN-no-res and TCAN in the tables, we find that the enhanced
residual improves the performance to some extent. As discussed in Section 2.4,
we believe the enhanced residual selects valuable features and strengthens the
memory of important information. Note that the values shown in Tables 1, 2
and 3 are not the best achievable results but are chosen for comparison.
### 3.4 Ablation Experiments
To better explain the relative importance of the conclusions we propose, we
designed two ablation experiments on the word-level PTB dataset: one shows
that a temporal attention (TA) layer is more effective than a convolutional
layer, and the other shows that applying the softmax along the first dimension
of $W_{l}$ is better than along the second dimension.
ER | TA | $L_{b}$ | $L$ | Size | $\text{ppl}^{l}$
---|---|---|---|---|---
✗ | ✓ | 1 | 4 | 13.2M | 28.10
✗ | ✗ | 2 | 4 | 14.7M | 151.98
Table 4: Comparative tests of the temporal attention layer and the
convolutional layer.
To compare the efficiency of temporal attention (TA), we discard all TA layers
and replace them with convolutional layers. To guarantee a fair comparison, we
adjust the hyperparameters so that the convolutional model has more parameters
than TCAN. The initial TCAN has 1 temporal attention block in each of its 4
layers. For comparison, we use a convolutional layer in place of each TA
layer; a TA layer has a similar number of parameters to a convolutional layer.
Note that TCAN does not use the enhanced residual module here, in order not to
interfere with the comparison. The results are listed in Table 4; both models
are optimized with Adam at a learning rate of 0.0001. The perplexities of the
two models show that the temporal attention layer is more effective than the
convolutional layer.
ER | TA | Direction | ppl
---|---|---|---
✗ | ✓ | Horizontal | 207.16
✗ | ✓ | Vertical | 28.10
✗ | ✓ | Horizontal+Vertical | 30.88
Table 5: Comparison of Horizontal and Vertical softmax in the TA layer.
As claimed in Section 2.3, we apply the softmax along the first dimension of
$W_{l}$ and obtain relatively good results. For an intuitive description, we
define the softmax along the first dimension as ”vertical softmax”, along the
second dimension as ”horizontal softmax”, and along the mixed dimensions (the
average of the first and second dimensions) as ”vertical+horizontal softmax”.
The results are shown in Table 5 and in Figure 4, which is drawn from the
final temporal attention layer’s $W_{l}$. According to the perplexities in
Table 5, the vertical softmax is more effective than the mixed softmax, and
the latter is more effective than the horizontal softmax, so we infer that the
vertical softmax is the most practical choice. As for the figures, because
$W_{l}$ is a lower triangular matrix, the weights concentrate on the
lower-left side of the figures; the weight in the $i$-th row and $j$-th column
indicates the contribution of the $j$-th time step to the $i$-th time step.
From Figure 4(a), we can see that for the time steps from about 25 to 80, the
roughly three time steps immediately preceding the current position contribute
the most; besides that, the first 30 or so time steps also play a role. In
contrast, Figure 4(b) shows that the horizontal softmax makes the temporal
attention concentrate on the most distant previous data, which is unreasonable
because, for prediction, the nearest several positions contribute the most
Khandelwal et al. (2018). Hence the performance of the horizontal softmax is
poorer. The mixed-direction softmax spends too much attention on the first and
the most recent time steps, but not on the others. This ablation experiment
supports the claim that applying the softmax along the first dimension of
$W_{l}$ integrates more relevant features by making the final weighted
features more differentiated.
## 4 Conclusion
In this work, we propose an exploratory architecture, TCAN, for sequence
modeling. Its temporal attention sub-module integrates internal correlated
features while satisfying the sequential (causal) constraint. The enhanced
residual utilizes the weights of the temporal attention to emphasize the
importance of certain time steps, improving performance without adding
parameters. Our model outperforms prior state-of-the-art models on WikiText-2
and on word- and character-level PTB by a significant margin.
## References
* Bai et al. [2018] Shaojie Bai, J. Zico Kolter, and Vladlen Koltun. An empirical evaluation of generic convolutional and recurrent networks for sequence modeling. CoRR, abs/1803.01271, 2018.
* Bai et al. [2019] Shaojie Bai, J. Zico Kolter, and Vladlen Koltun. Trellis networks for sequence modeling. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019, 2019.
* Gong et al. [2018] ChengYue Gong, Di He, Xu Tan, Tao Qin, Liwei Wang, and Tie-Yan Liu. FRAGE: frequency-agnostic word representation. In Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, 3-8 December 2018, Montréal, Canada, pages 1341–1352, 2018.
* Goodfellow et al. [2016] Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep learning. MIT press, 2016.
* Khandelwal et al. [2018] Urvashi Khandelwal, He He, Peng Qi, and Dan Jurafsky. Sharp nearby, fuzzy far away: How neural language models use context. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018, Melbourne, Australia, July 15-20, 2018, Volume 1: Long Papers, pages 284–294, 2018.
* Kingma and Ba [2015] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings, 2015.
* Krueger et al. [2017] David Krueger, Tegan Maharaj, János Kramár, Mohammad Pezeshki, Nicolas Ballas, Nan Rosemary Ke, Anirudh Goyal, Yoshua Bengio, Aaron C. Courville, and Christopher J. Pal. Zoneout: Regularizing rnns by randomly preserving hidden activations. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings, 2017.
* Li et al. [2018] Shuai Li, Wanqing Li, Chris Cook, Ce Zhu, and Yanbo Gao. Independently recurrent neural network (indrnn): Building a longer and deeper RNN. In 2018 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA, June 18-22, 2018, pages 5457–5466, 2018.
* Marcus et al. [1993] Mitchell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. Building a large annotated corpus of english: The penn treebank. Computational Linguistics, 19(2):313–330, 1993.
* Merity et al. [2017] Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. Pointer sentinel mixture models. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings, 2017.
* Merity et al. [2018] Stephen Merity, Nitish Shirish Keskar, and Richard Socher. Regularizing and optimizing LSTM language models. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings, 2018.
* Radford et al. [2019] Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language models are unsupervised multitask learners. OpenAI Blog, 1(8), 2019.
* van den Oord et al. [2016a] Aäron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalchbrenner, Andrew W. Senior, and Koray Kavukcuoglu. Wavenet: A generative model for raw audio. In The 9th ISCA Speech Synthesis Workshop, Sunnyvale, CA, USA, 13-15 September 2016, page 125, 2016.
* van den Oord et al. [2016b] Aäron van den Oord, Nal Kalchbrenner, Lasse Espeholt, Koray Kavukcuoglu, Oriol Vinyals, and Alex Graves. Conditional image generation with pixelcnn decoders. In Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, December 5-10, 2016, Barcelona, Spain, pages 4790–4798, 2016.
* Vaswani et al. [2017] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 4-9 December 2017, Long Beach, CA, USA, pages 5998–6008, 2017.
* Waibel et al. [1989] Alexander H. Waibel, Toshiyuki Hanazawa, Geoffrey E. Hinton, Kiyohiro Shikano, and Kevin J. Lang. Phoneme recognition using time-delay neural networks. IEEE Trans. Acoustics, Speech, and Signal Processing, 37(3):328–339, 1989.
* Wang et al. [2019] Dilin Wang, ChengYue Gong, and Qiang Liu. Improving neural language modeling via adversarial training. In Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, pages 6555–6565, 2019.
* Yang et al. [2018] Zhilin Yang, Zihang Dai, Ruslan Salakhutdinov, and William W. Cohen. Breaking the softmax bottleneck: A high-rank RNN language model. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings, 2018.
* Zoph and Le [2017] Barret Zoph and Quoc V. Le. Neural architecture search with reinforcement learning. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings, 2017.
|
2024-09-04T02:54:55.376623 | 2020-02-28T04:32:16 | 2002.12540 | {
"authors": "Ali Akbar Septiandri, Yosef Ardhito Winatmoko",
"full_text_license": null,
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"provenance": "arxiv-papers-0000.json.gz:25934",
"submitter": "Ali Septiandri",
"url": "https://arxiv.org/abs/2002.12540"
} | arxiv-papers | # UKARA 1.0 Challenge Track 1:
Automatic Short-Answer Scoring in Bahasa Indonesia
Ali Akbar Septiandri
Airy
Jakarta, Indonesia
<EMAIL_ADDRESS>
Yosef Ardhito Winatmoko
Jheronimus Academy of Data Science
’s-Hertogenbosch, The Netherlands
<EMAIL_ADDRESS>
###### Abstract
We describe our third-place solution to the UKARA 1.0 challenge on automated
essay scoring. The task consists of a binary classification problem on two
datasets — answers from two different questions. We ended up using two
different models for the two datasets. For task A, we applied a random forest
algorithm to features extracted using unigrams with latent semantic analysis
(LSA). For task B, we used logistic regression on TF-IDF features. Our models
achieve an overall F1 score of 0.812.
## 1 Introduction
Automated essay scoring is the application of computer technologies to assist
human graders in scoring written answers (Dikli, 2006). The
first track of UKARA 1.0 is the binary classification version of essay
scoring, where participants are expected to develop a model that can
distinguish right and wrong answers in free text format. The organizer
published the questions, the responses with the labels, and the guideline on
how to determine whether an answer is acceptable.
During a period of five weeks, the training set and the development set were
available and we could validate our model through the score of the development
set on a leaderboard. Subsequently, the test set was released, which is
roughly four times the size of the development set. We were required to submit
predicted labels based on the model we had developed, and the winner was
determined by the F1 score of the submitted predictions.
Our final submission to this task consists of feature extraction, such as
n-grams and TF-IDF, and classical machine learning algorithms, namely logistic
regression and random forest. We did not use deep learning at all in our
submission. The details of the dataset and our approach will be discussed in
the following sections.
## 2 Datasets
The dataset consists of two questions and the respective responses collected
by the organizer of the challenge. All questions and responses are in
Indonesian. The first question, from now on referred to as task A, asked
about the consequences of climate change; concretely, what potential problems
are faced by climate refugees when they have to migrate to a new place. The
second question, referred to as task B, is based on an experiment in which
potential customers who initially wanted to buy clothes preferred to donate
the money instead after being shown videos of the working conditions of the
clothes-manufacturing workers before paying. The respondents were required to
give their opinion on why people decided to change their minds. The
statistics of the responses for both tasks are shown in Table 1.
| Task A | Task B
---|---|---
#Positive Train | $191(71\%)$ | $168(55\%)$
#Negative Train | $77(29\%)$ | $137(45\%)$
Avg. #Char | $87.23$ | $97.33$
#Dev | $215$ | $244$
#Test | $855$ | $974$
Table 1: Summary statistics of the dataset.
## 3 Methodology
### 3.1 Preprocessing
For the preprocessing steps, we first tokenized and lemmatized the text using
the bahasa Indonesia tokenizer provided by spaCy (Honnibal and Montani, 2017).
We then extracted features using bag-of-words or TF-IDF. Since the resulting
matrix from this feature extraction method tends to be sparse, and to encode
token relations, we applied Latent Semantic Analysis (LSA) using Singular
Value Decomposition (SVD) (Deerwester et al., 1990) on the matrix.
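A minimal scikit-learn sketch of this feature-extraction pipeline (the function name and the number of SVD components are our own choices; the spaCy tokenizer/lemmatizer would be passed in as `tokenizer`):

```python
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

def build_features(use_tfidf=True, use_svd=True, n_components=100,
                   tokenizer=None):
    """Unigram bag-of-words or TF-IDF, optionally followed by LSA via SVD."""
    Vec = TfidfVectorizer if use_tfidf else CountVectorizer
    steps = [("vec", Vec(tokenizer=tokenizer, ngram_range=(1, 1)))]
    if use_svd:
        steps.append(("lsa", TruncatedSVD(n_components=n_components)))
    return Pipeline(steps)
```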
We also noticed that the labels of the provided training set are highly
inconsistent. Some responses are clearly labelled incorrectly.
For illustration, in task A we found ”untuk pindah ke daerah yang aman” (to
move to a safe place) labelled as 1 (correct) while clearly it does not fit
the criteria based on the guideline. The mislabeling was even more prominent
in task B: ”karana dengan menyumbang kita bisa membuat produksi pakaian
menjadi lebih beretika” (By donating, we can make clothes production becomes
more ethical) is considered wrong while ”agar upaya untuk membuat produksi
pakaian menjadi lebih beretika.” (As an effort to make clothes production
becomes more ethical) is approved. To approach this problem, we decided to
prepare a separate training set with manually corrected labels based on our
own judgment. The correction result is shown in Table 2.
Finally, as the responses contain a lot of typos and slang words, we also
experimented with a simple typo corrector using the Python difflib package and
an Indonesian colloquial dictionary (Salsabila et al., 2018). We tried every
possible combination of preprocessing steps, and whether to use the altered
version of the training set, with the parameter optimization library described
in the following subsection. A sketch of the corrector is given below.
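A minimal sketch of such a corrector (the function name and cutoff are our own choices; `vocabulary` would combine a standard word list with the colloquial lexicon):

```python
import difflib

def correct_token(token, vocabulary, cutoff=0.8):
    """Replace a possibly misspelled token by its closest vocabulary entry."""
    matches = difflib.get_close_matches(token, vocabulary, n=1, cutoff=cutoff)
    return matches[0] if matches else token
```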
 | Original Label | Corrected Label | Count
---|---|---|---
Task A | $0$ | $1$ | $10$
 | $1$ | $0$ | $4$
Task B | $0$ | $1$ | $46$
 | $1$ | $0$ | $13$
Table 2: Corrected labels of the training set.
### 3.2 The Winning Approach
After trying several machine learning algorithms, such as k-Nearest Neighbors,
Naïve Bayes, logistic regression, and random forest, we found that random
forest was the best model for task A. This corroborates what was found by
Fernández-Delgado et al. (2014) in their comprehensive comparisons among
several machine learning algorithms on different datasets. On the other hand,
logistic regression with L2 regularization was the best for task B. The
machine learning library used in this study is scikit-learn (Pedregosa et al.,
2011). Since the dataset is quite small, we used 5-fold cross validation on
the training set to avoid overfitting. For our winning approach, we set the
n_estimators parameter of the random forest to 200 and kept the default values
for the other parameters. We also found it best to keep the default parameter
values for the logistic regression model.
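In scikit-learn terms, the two final models can be sketched as follows (`X` and `y` stand for the vectorized training features and labels):

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

model_a = RandomForestClassifier(n_estimators=200)  # task A, other params default
model_b = LogisticRegression()                      # task B, L2 penalty by default

# 5-fold cross-validated F1 on the training set, e.g. for task A:
# f1_scores = cross_val_score(model_a, X, y, cv=5, scoring="f1")
```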
### 3.3 Alternative Approaches
In parallel, we also experimented with optimization using
hyperopt111http://hyperopt.github.io/hyperopt/ library, which utilizes
sequential model-based optimization (Bergstra et al., 2011). We tried to
optimize the hyperparameters, including the preprocessing steps, of four
different machine learning algorithms: logistic regression, random forest,
gradient boosting tree, and support vector machine. We trained separate models
for task A and task B. In addition, we tested a voting-based ensemble model by
combining the optimized models of all the algorithms. The evaluation metric
used for the optimization, including for the voting-ensemble model, is F1
score. The results of the experiments are presented in the next section along
with the discussion.
## 4 Results and Discussion
We found that choosing different preprocessing methods resulted in different
performances in the two tasks. Therefore, we varied the use of unigram or TF-
IDF, and whether we should apply SVD to the resulting matrix. On the other
hand, we found that it is always better to use the lemmatizer built on top of
spaCy in this task. Moreover, removing stopwords did not contribute much to
the performance on the training set. Table of local CV results can be seen in
Table 3 and Table 4.
| Precision | Recall | F1
---|---|---|---
1-gram+RF | $0.845\pm 0.057$ | $0.921\pm 0.037$ | $0.881\pm 0.035$
1-gram+logreg | $0.857\pm 0.074$ | $0.869\pm 0.067$ | $0.862\pm 0.065$
1-gram+SVD+RF | $0.794\pm 0.025$ | $0.984\pm 0.014$ | $0.879\pm 0.014$
1-gram+SVD+logreg | $\mathbf{0.859\pm 0.069}$ | $0.874\pm 0.047$ | $0.866\pm 0.053$
TF-IDF+RF | $0.847\pm 0.054$ | $0.942\pm 0.039$ | $\mathbf{0.891\pm 0.036}$
TF-IDF+logreg | $0.748\pm 0.019$ | $0.979\pm 0.034$ | $0.848\pm 0.022$
TF-IDF+SVD+RF | $0.772\pm 0.025$ | $\mathbf{0.990\pm 0.014}$ | $0.867\pm 0.014$
TF-IDF+SVD+logreg | $0.751\pm 0.025$ | $0.979\pm 0.034$ | $0.850\pm 0.025$
Table 3: 5-fold cross-validation results for task A.
| Precision | Recall | F1
---|---|---|---
1-gram+RF | $0.731\pm 0.066$ | $0.744\pm 0.054$ | $0.736\pm 0.047$
1-gram+logreg | $\mathbf{0.735\pm 0.046}$ | $0.727\pm 0.080$ | $0.730\pm 0.059$
1-gram+SVD+RF | $0.686\pm 0.035$ | $0.762\pm 0.046$ | $0.721\pm 0.032$
1-gram+SVD+logreg | $0.722\pm 0.067$ | $0.726\pm 0.089$ | $0.723\pm 0.074$
TF-IDF+RF | $0.709\pm 0.036$ | $0.750\pm 0.049$ | $0.728\pm 0.034$
TF-IDF+logreg | $0.725\pm 0.035$ | $0.810\pm 0.060$ | $\mathbf{0.764\pm 0.035}$
TF-IDF+SVD+RF | $0.637\pm 0.026$ | $\mathbf{0.834\pm 0.061}$ | $0.721\pm 0.036$
TF-IDF+SVD+logreg | $0.705\pm 0.023$ | $0.809\pm 0.063$ | $0.753\pm 0.036$
Table 4: 5-fold cross-validation results for task B.
For this challenge, we need to optimize the F1 score. Therefore, it is clear
from Table 4 that we should use TF-IDF with the logistic regression algorithm
for task B. From what we can see in Table 3, TF-IDF with random forest appears
best for task A. However, we decided to be more pessimistic by looking at the
largest F1 score after subtracting 1 standard deviation from the mean F1.
Thus, we chose 1-gram + SVD + random forest for task A (a small check of this
criterion is sketched below).
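The pessimistic criterion amounts to the following small check:

```python
def pessimistic_f1(mean, std):
    """Model-selection criterion: mean F1 minus one standard deviation."""
    return mean - std

# Task A: 1-gram+SVD+RF gives 0.879 - 0.014 = 0.865, which beats
# TF-IDF+RF's 0.891 - 0.036 = 0.855, hence our final choice.
```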
Table 5 shows the performance of our best single models compared with two
alternative approaches. First, we used the optimized ensemble model trained on
the original training set (Ens+Ori). Second, we also use a similar method but
with the label-corrected training set (Ens+Upd). While we can see in Table 5
that the F1 score on task B is higher on the label-corrected training set and
got a similar result for the development set, the test score is lower than the
best single models. The test set labels were most likely as noisy as the
training set, thus making the training score with modified label not
representative.
| Train A | Train B | Dev | Test
---|---|---|---|---
Best | $0.879$ | $0.764$ | $0.810$ | $0.812$
Ens+Ori | $0.885$ | $0.764$ | $0.799$ | $0.801$
Ens+Upd | $0.898$ | $0.831$ | $0.810$ | $0.803$
Table 5: F1 score comparison with the alternative Ensemble model.
To analyze how hard it is to separate the right from the wrong answers, we
reduced the dimensionality of the data to 2D using 1-grams, SVD, and t-SNE
(Maaten and Hinton, 2008). Figure 1 suggests that the two classes are harder
to separate in task B. Since most of the answers are short, we also cannot see
“islands” when applying t-SNE to the data.
Figure 1: t-SNE visualisation
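A minimal sketch of this visualization (here `X_lsa` denotes the responses embedded by the 1-gram + SVD pipeline and `y` the binary labels):

```python
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

emb = TSNE(n_components=2).fit_transform(X_lsa)   # reduce features to 2D
plt.scatter(emb[:, 0], emb[:, 1], c=y, s=8)       # color points by class label
plt.show()
```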
A similar problem is also visible in Figures 2 and 3: there are many more data
points in the 0.4-0.6 prediction range in Figure 3, which suggests more
uncertainty in the model. We argue that this is potentially caused by the
incorrect labels in the original training set for task B.
Figure 2: Best model prediction (random forest) with probability on Task A
Figure 3: Best model prediction (logistic regression) with probability on Task
B
## 5 Conclusions
In this report, we describe our winning approach for UKARA 1.0 Challenge Track
1. During the competition, we experimented with single models and ensemble
models to predict which answers are correct given an open-ended question. We
also tried re-labeling the training set and pre-processing with text
correction, which gave a boost in our local cross-validation. However, we
found that single models with less pre-processing performed better on the test
set. The best single model for task A is a random forest with unigram+SVD
features, and for task B it is logistic regression with TF-IDF features. The
predictions of the two models achieved an overall F1 score of 0.812, which was
enough to earn us third place on the final leaderboard.
## References
* Bergstra et al. (2011) James S Bergstra, Rémi Bardenet, Yoshua Bengio, and Balázs Kégl. 2011\. Algorithms for hyper-parameter optimization. In _Advances in neural information processing systems_ , pages 2546–2554.
* Deerwester et al. (1990) Scott Deerwester, Susan T Dumais, George W Furnas, Thomas K Landauer, and Richard Harshman. 1990. Indexing by latent semantic analysis. _Journal of the American society for information science_ , 41(6):391–407.
* Dikli (2006) Semire Dikli. 2006. An overview of automated scoring of essays. _The Journal of Technology, Learning and Assessment_ , 5(1).
* Fernández-Delgado et al. (2014) Manuel Fernández-Delgado, Eva Cernadas, Senén Barro, and Dinani Amorim. 2014\. Do we need hundreds of classifiers to solve real world classification problems? _The Journal of Machine Learning Research_ , 15(1):3133–3181.
* Honnibal and Montani (2017) Matthew Honnibal and Ines Montani. 2017. spaCy 2: Natural language understanding with Bloom embeddings, convolutional neural networks and incremental parsing. To appear.
* Maaten and Hinton (2008) Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-sne. _Journal of machine learning research_ , 9(Nov):2579–2605.
* Pedregosa et al. (2011) F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. 2011. Scikit-learn: Machine learning in Python. _Journal of Machine Learning Research_ , 12:2825–2830.
* Salsabila et al. (2018) Nikmatun Aliyah Salsabila, Yosef Ardhito Winatmoko, Ali Akbar Septiandri, and Ade Jamal. 2018. Colloquial indonesian lexicon. In _2018 International Conference on Asian Language Processing (IALP)_ , pages 226–229. IEEE.
## Appendix A Supplemental Material
The code for this analysis can be seen on https://github.com/aliakbars/ukara/.
|
2024-09-04T02:54:55.385235 | 2020-02-28T06:42:41 | 2002.12568 | {
"authors": "Michihisa Wakui",
"full_text_license": null,
"license": "Creative Commons Zero - Public Domain - https://creativecommons.org/publicdomain/zero/1.0/",
"provenance": "arxiv-papers-0000.json.gz:25935",
"submitter": "Michihisa Wakui",
"url": "https://arxiv.org/abs/2002.12568"
} | arxiv-papers | # Reconstruction of weak bialgebra maps and its applications
Michihisa Wakui
Department of Mathematics, Faculty of Engineering Science,
Kansai University, Suita-shi, Osaka 564-8680, Japan
E-mail address<EMAIL_ADDRESS>
###### Abstract
In this note we give a precise statement and a detailed proof for the
reconstruction problem of weak bialgebra maps. As an application we
characterize indecomposability of weak bialgebras in a categorical setting.
## 1 Introduction
The concept of weak bialgebras and weak Hopf algebras was introduced by Böhm,
Nill and Szlachányi [1] as a generalization of bialgebras and Hopf algebras.
In some of the literature they are called quantum groupoids [13]. As shown by
Etingof, Nikshych and Ostrik [5], the language of weak Hopf algebras is
convenient for visualizing various categorical constructions, including
(multi-)fusion categories. At present, many concepts and results for ordinary
bialgebras and Hopf algebras have been generalized or extended to weak
versions.
By the classical Tannaka-Krein reconstruction theorem, it is well known that
any finite-dimensional bialgebra $A$ over a fixed field $\boldsymbol{k}$ is
determined up to isomorphism by its comodule category $\mathbb{M}^{A}$ whose
objects are of finite dimension [4, 9, 15, 19]. More precisely, let $A$ and
$B$ be two finite-dimensional bialgebras over $\boldsymbol{k}$, and
$F:\mathbb{M}^{A}\longrightarrow\mathbb{M}^{B}$ be a $\boldsymbol{k}$-linear
monoidal functor. If $F$ is fibered, then there is a bialgebra map
$\varphi:A\longrightarrow B$ such that $F=\mathbb{M}^{\varphi}$, where
$\mathbb{M}^{\varphi}$ is the monoidal functor induced from $\varphi$. This
statement can be found in Majid’s book [10, Theorem 2.2] and a detailed proof
is given in Franco’s lecture note [6, p.80–84].
In this note we treat the weak bialgebra version of the above result. As is
well known, the reconstruction theorem for weak bialgebras has been
established by several researchers [3, 7, 11, 2] (see [20] for other
variants). However, it seems that no paper states a reconstruction theorem for
“weak bialgebra maps”, although it is very fundamental. This note is devoted
to giving a precise statement of it and proving it. In fact, although the same
statement as in the classical case holds, the proof is rather complicated,
since the unit object in the comodule category over a weak bialgebra $A$ is
not necessarily the base field $\boldsymbol{k}$. Actually, it is a subalgebra,
which is called the source counital subalgebra of $A$. For that reason, the
proof of the reconstruction theorem for weak bialgebra maps is accomplished by
re-examining the method used in the proof of the reconstruction of a coalgebra
map.
Recently, the author has studied indecomposability of weak bialgebras, and
found some interesting results and unsolved problems [21]. As an application
of the reconstruction theorem of weak bialgebra maps, we derive a categorical
interpretation of indecomposability of a weak bialgebra.
This paper is organized as follows. In Section 2 we recall the definition and
basic properties of weak bialgebras, and also the comodule structure over
them. In Section 3 after we overview the proof of the reconstruction theorem
of coalgebra maps, we state and prove the reconstruction theorem of bialgebra
maps. In Section 4, that is the final section, we apply the theorem to
characterize indecomposability of finite-dimensional weak bialgebras.
Throughout this note, $\boldsymbol{k}$ denotes a field, and
$\text{Vect}_{\boldsymbol{k}}^{\text{f.d.}}$ stands for the monoidal category
whose objects are finite-dimensional vector spaces over $\boldsymbol{k}$ and
morphisms are $\boldsymbol{k}$-linear maps between them. For a weak bialgebra
or a weak Hopf algebra $H$, we denote by $\Delta_{H}$, $\varepsilon_{H}$ and
$S_{H}$ the comultiplication, the counit and the antipode of $H$,
respectively. When it is clear that they are for $H$, they are simply denoted
by $\Delta$, $\varepsilon$ and $S$, respectively. The notation $\Delta^{(2)}$
means the composition
$(\Delta\otimes\text{id})\circ\Delta=(\text{id}\otimes\Delta)\circ\Delta$. We
use Sweedler’s notation such as $\Delta(x)=x_{(1)}\otimes x_{(2)}$ for $x\in
H$, and for a right $H$-comodule $M$ with coaction $\rho$ we also use the
notation $\rho(m)=m_{(0)}\otimes m_{(1)}$ for $m\in M$.
A monoidal category is described as
$\mathcal{C}=(\mathcal{C},\otimes,I,a,l,r)$, as in MacLane’s book [8], and a
monoidal functor between monoidal categories $\mathcal{C}$ and $\mathcal{D}$
is described as $F=(F,\Phi^{F},\omega^{F})$, where $F$ is a covariant functor
from $\mathcal{C}$ to $\mathcal{D}$, and $\Phi^{F}$ is a natural
transformation obtained by collecting morphisms $\phi^{F}_{M,N}:F(M)\otimes
F(N)\longrightarrow F(M\otimes N)$ for all $M,N\in\mathcal{C}$, and
$\omega^{F}:F(I_{\mathcal{C}})\longrightarrow I_{\mathcal{D}}$ is a morphism,
where $I_{\mathcal{C}}$ and $I_{\mathcal{D}}$ are the unit objects in
$\mathcal{C}$, and $\mathcal{D}$, respectively. If $\Phi^{F}$ is a natural
equivalence and $\omega^{F}$ is an isomorphism, then the monoidal functor
$(F,\Phi^{F},\omega^{F})$ is called strong. As in the classical case, every
weak bialgebra map $\varphi:H\longrightarrow K$ induces a covariant functor
$\mathbb{M}^{\varphi}:\mathbb{M}^{H}\longrightarrow\mathbb{M}^{K}$. We note
that the functor $\mathbb{M}^{\varphi}$ is not monoidal but is comonoidal. By
a comonoidal functor we mean a triplet
$F=(F,\bar{\Phi}^{F},\bar{\omega}^{F})$ in which all arrows in $\bar{\Phi}^{F}$
and $\bar{\omega}^{F}$ are reversed relative to a monoidal functor, that is,
$\bar{\Phi}^{F}$ is a natural transformation obtained by collecting morphisms
$\bar{\phi}^{F}_{M,N}:F(M\otimes N)\longrightarrow F(M)\otimes F(N)$ for all
$M,N\in\mathcal{C}$, and $\bar{\omega}^{F}:I_{\mathcal{D}}\longrightarrow
$F(I_{\mathcal{C}})$ is a morphism. The notion of a strong comonoidal functor
is defined by the same condition as for a monoidal functor. In some of the
literature the terminologies “colax” or “op-monoidal” are used for
“comonoidal”.
For general facts on Hopf algebras, we refer the reader to Montgomery’s book
[12].
## 2 Definition of weak bialgebras and structures of their comodule
categories
In this section we recall the definition and basic properties of weak
bialgebras, and also the comodule structure over them mainly following [1] and
[13].
Let $H$ be a vector space over $\boldsymbol{k}$ such that $(H,\mu,\eta)$ is an
algebra and $(H,\Delta,\varepsilon)$ is a coalgebra over $\boldsymbol{k}$. The
$5$-tuple $(H,\mu,\eta,\Delta,\varepsilon)$ is said to be a weak bialgebra
over $\boldsymbol{k}$ if the following three conditions are satisfied.
1. (WH1)
$\Delta(xy)=\Delta(x)\Delta(y)$ for all $x,y\in H$.
2. (WH2)
$\Delta^{(2)}(1)=(\Delta(1)\otimes
1)(1\otimes\Delta(1))=(1\otimes\Delta(1))(\Delta(1)\otimes 1)$, where
$1=\eta(1)$ is the identity element of the algebra $(H,\mu,\eta)$.
3. (WH3)
For all $x,y,z\in H$
1. (i)
$\varepsilon(xyz)=\varepsilon(xy_{(1)})\varepsilon(y_{(2)}z)$.
2. (ii)
$\varepsilon(xyz)=\varepsilon(xy_{(2)})\varepsilon(y_{(1)}z)$.
Let $S:H\longrightarrow H$ be a $\boldsymbol{k}$-linear map. The $6$-tuple
$(H,\mu,\eta,\Delta,\varepsilon,S)$ is said to be a weak Hopf algebra over
$\boldsymbol{k}$ if the above three conditions and the following additional
condition are satisfied.
1. (WH4)
For all $x\in H$
1. (i)
$x_{(1)}S(x_{(2)})=\varepsilon(1_{(1)}x)1_{(2)}$.
2. (ii)
$S(x_{(1)})x_{(2)}=1_{(1)}\varepsilon(x1_{(2)})$.
3. (iii)
$S(x_{(1)})x_{(2)}S(x_{(3)})=S(x)$.
The above $S$ is called the antipode of $(H,\mu,\eta,\Delta,\varepsilon)$ and
$(H,\mu,\eta,\Delta,\varepsilon,S)$. We note that it is unique if it exists.
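As an illustration of these axioms, we recall the standard groupoid-algebra
example, well known in the weak Hopf algebra literature; the verification is a
routine direct computation. Let $\mathcal{G}$ be a finite groupoid with set of
units $\mathcal{G}^{(0)}$, and let $\boldsymbol{k}\mathcal{G}$ be its groupoid
algebra, where the product of $g,h\in\mathcal{G}$ is the composition $gh$ if
it is defined and $0$ otherwise, and $1=\sum_{e\in\mathcal{G}^{(0)}}e$. Then
$\boldsymbol{k}\mathcal{G}$ is a weak Hopf algebra with
$\Delta(g)=g\otimes g,\qquad\varepsilon(g)=1,\qquad S(g)=g^{-1}\qquad(g\in\mathcal{G}).$
Here $\Delta(1)=\sum_{e\in\mathcal{G}^{(0)}}e\otimes e$, which differs from
$1\otimes 1$ unless $\mathcal{G}$ is a group; thus (WH2) holds although the
usual bialgebra axiom $\Delta(1)=1\otimes 1$ fails. A direct computation with
the counital maps defined below gives $\varepsilon_{t}(g)=gg^{-1}$ and
$\varepsilon_{s}(g)=g^{-1}g$, the target and source units of $g$.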
For a weak bialgebra $H=(H,\mu,\eta,\Delta,\varepsilon)$, by the condition
(WH4)(i),(ii), the following two $\boldsymbol{k}$-linear maps
$\varepsilon_{t},\varepsilon_{s}:H\longrightarrow H$ are defined:
$\displaystyle\varepsilon_{t}(x)=\varepsilon(1_{(1)}x)1_{(2)},$ (2.1)
$\displaystyle\varepsilon_{s}(x)=1_{(1)}\varepsilon(x1_{(2)}).$ (2.2)
The maps $\varepsilon_{t}$ and $\varepsilon_{s}$ are called the target
counital map and the source counital map, respectively. These maps satisfy the
following properties.
###### Lemma 2.1.
1. $(1)$
$\varepsilon_{t}^{2}=\varepsilon_{t},\ \varepsilon_{s}^{2}=\varepsilon_{s}$.
2. $(2)$
For all $x\in H$
1. $(i)$
$((\mathrm{id}\otimes\varepsilon_{t})\circ\Delta)(x)=1_{(1)}x\otimes 1_{(2)}$.
2. $(ii)$
$((\varepsilon_{s}\otimes\mathrm{id})\circ\Delta)(x)=1_{(1)}\otimes x1_{(2)}$.
In particular,
$1_{(1)}\otimes\varepsilon_{t}(1_{(2)})=1_{(1)}\otimes
1_{(2)}=\varepsilon_{s}(1_{(1)})\otimes 1_{(2)}.$ (2.3)
3. $(3)$
For all $x\in H$
1. $(i)$
$\varepsilon_{t}(x)=x\ \ \Longleftrightarrow\ \ \Delta(x)=1_{(1)}x\otimes
1_{(2)}$.
2. $(ii)$
$\varepsilon_{s}(x)=x\ \ \Longleftrightarrow\ \ \Delta(x)=1_{(1)}\otimes
x1_{(2)}$.
Especially, by Part $(2)$$(i)$
$\displaystyle 1_{(1)}1_{[1]}\otimes 1_{(2)}\otimes 1_{[2]}$
$\displaystyle=1_{(1)}\otimes\varepsilon_{t}(1_{(2)})\otimes 1_{(3)},$
$\displaystyle 1_{(1)}\otimes 1_{[1]}\otimes 1_{(2)}1_{[2]}$
$\displaystyle=1_{(1)}\otimes\varepsilon_{s}(1_{(2)})\otimes 1_{(3)},$
where $\Delta^{(2)}(1)=1_{(1)}\otimes 1_{(2)}\otimes 1_{(3)}$ and
$\Delta(1)=1_{[1]}\otimes 1_{[2]}$.
###### Lemma 2.2.
Let $H$ be a weak bialgebra over $\boldsymbol{k}$. For all $x,y\in H$
1. $(1)$
$\varepsilon_{t}(x\varepsilon_{t}(y))=\varepsilon_{t}(xy)$,
$\varepsilon_{s}(\varepsilon_{s}(x)y)=\varepsilon_{s}(xy)$.
2. $(2)$
$\varepsilon(x\varepsilon_{t}(y))=\varepsilon(xy)$,
$\varepsilon(\varepsilon_{s}(x)y)=\varepsilon(xy)$.
3. $(3)$
$\varepsilon\circ\varepsilon_{t}=\varepsilon=\varepsilon\circ\varepsilon_{s}$.
4. $(4)$
$x=\varepsilon_{t}(x_{(1)})x_{(2)}=x_{(1)}\varepsilon_{s}(x_{(2)})$.
5. $(5)$
$x\varepsilon_{t}(y)=\varepsilon_{t}(x_{(1)}y)x_{(2)}$,
$\varepsilon_{s}(x)y=y_{(1)}\varepsilon_{s}(xy_{(2)})$.
###### Lemma 2.3.
Let $H$ be a weak bialgebra over $\boldsymbol{k}$, and set
$H_{t}:=\varepsilon_{t}(H),\ H_{s}:=\varepsilon_{s}(H)$. Then
1. $(1)$
$x\varepsilon_{t}(y)=\varepsilon_{t}(xy)$ for all $x\in H_{t}$ and $y\in H$.
2. $(2)$
$\varepsilon_{s}(x)y=\varepsilon_{s}(xy)$ for all $x\in H$ and $y\in H_{s}$.
3. $(3)$
1. $(i)$
An element of $H_{t}$ and an element of $H_{s}$ commute, and
2. $(ii)$
$H_{t}$ and $H_{s}$ are a left coideal and a right coideal subalgebras of $H$,
respectively.
The subalgebras $H_{t}$ and $H_{s}$ are called the target and source
subalgebras of $H$, respectively. By (2.3) we have
$\Delta(1)\in H_{s}\otimes H_{t}.$ (2.4)
1. $(4)$
For all $x\in H$, $z\in H_{t}$ and $y\in H_{s}$
$\displaystyle\Delta(xz)=x_{(1)}z\otimes x_{(2)},\quad$
$\displaystyle\Delta(zx)=zx_{(1)}\otimes x_{(2)},$
$\displaystyle\Delta(xy)=x_{(1)}\otimes x_{(2)}y,\quad$
$\displaystyle\Delta(yx)=x_{(1)}\otimes yx_{(2)}.$
In particular
$\displaystyle xz=\varepsilon(x_{(1)}z)x_{(2)},\quad$ $\displaystyle
zx=\varepsilon(zx_{(1)})x_{(2)},$ $\displaystyle
xy=x_{(1)}\varepsilon(x_{(2)}y),\quad$ $\displaystyle
yx=x_{(1)}\varepsilon(yx_{(2)}).$
For a weak bialgebra $H$, one can also consider two $\boldsymbol{k}$-linear
maps $\varepsilon_{t}^{\prime},\varepsilon_{s}^{\prime}:H\longrightarrow H$
defined by
$\displaystyle\varepsilon_{t}^{\prime}(x)$
$\displaystyle=\varepsilon(x1_{(1)})1_{(2)},$ (2.5)
$\displaystyle\varepsilon_{s}^{\prime}(x)$
$\displaystyle=1_{(1)}\varepsilon(1_{(2)}x)$ (2.6)
for all $x\in H$. Then,
$H^{\mathrm{op}}=(H,\mu^{\mathrm{op}},\eta,\Delta,\varepsilon)$,
$H^{\mathrm{cop}}=(H,\mu,\eta,\Delta^{\mathrm{cop}},\varepsilon)$,
$H^{\mathrm{opcop}}=(H,\mu^{\mathrm{op}},\eta,\Delta^{\mathrm{cop}}$,
$\varepsilon)$ are weak bialgebras over $\boldsymbol{k}$, where
$\mu^{\mathrm{op}}$ and $\Delta^{\mathrm{cop}}$ are the opposite
multiplication and comultiplication, respectively. The target and the source
subalgebras of them are given by $(H^{\mathrm{op}})_{t}=H_{t},\
(H^{\mathrm{op}})_{s}=H_{s}$, $(H^{\mathrm{cop}})_{t}=H_{s},\
(H^{\mathrm{cop}})_{s}=H_{t}$, $(H^{\mathrm{opcop}})_{t}=H_{s},\
(H^{\mathrm{opcop}})_{s}=H_{t}$. The target and the source counital maps of
them are given by
$(\varepsilon_{H^{\mathrm{op}}})_{t}=\varepsilon_{t}^{\prime},\
(\varepsilon_{H^{\mathrm{op}}})_{s}=\varepsilon_{s}^{\prime}$,
$(\varepsilon_{H^{\mathrm{cop}}})_{t}=\varepsilon_{s}^{\prime},\
(\varepsilon_{H^{\mathrm{cop}}})_{s}=\varepsilon_{t}^{\prime}$,
$(\varepsilon_{H^{\mathrm{opcop}}})_{t}=\varepsilon_{s},\
(\varepsilon_{H^{\mathrm{opcop}}})_{s}=\varepsilon_{t}$. If $S$ is the
antipode of $H$, then it is also of $H^{\mathrm{opcop}}$.
For a weak bialgebra $H$ over $\boldsymbol{k}$, we denote by
$\text{\boldmath{$\mathsf{M}$}}^{H}$ the $\boldsymbol{k}$-linear category
whose objects are right $H$-comodules and morphisms are $H$-comodule maps
between them. The comodule category $\text{\boldmath{$\mathsf{M}$}}^{H}$ has a
monoidal structure [13, Section 4] as the following lemma.
###### Lemma 2.4 ([14, Lemma 4.2]).
Let $H$ be a weak bialgebra over $\boldsymbol{k}$.
1. $(1)$
For two right $H$-comodules $(M,\rho_{M}),\ (N,\rho_{N})$, the pair
$(M,\rho_{M})\circledast(N,\rho_{N}):=(M\otimes_{H_{s}}N,\rho)$ is also a
right $H$-comodule, where
$\rho:M\otimes_{H_{s}}N\longrightarrow(M\otimes_{H_{s}}N)\otimes H$ is a
$\boldsymbol{k}$-linear map defined by
$\rho(m\otimes_{H_{s}}n)=(m_{(0)}\otimes_{H_{s}}n_{(0)})\otimes
m_{(1)}n_{(1)}\qquad(m\in M,\ n\in N).$ (2.7)
The source algebra $H_{s}$ can be regarded as a right $H$-comodule with the
coaction $\Delta_{s}:=\Delta|_{H_{s}}:H_{s}\longrightarrow H_{s}\otimes H$,
and for all right $H$-comodules $L,M,N$ there are natural isomorphisms
1. $(i)$
$H_{s}\circledast M\cong M\cong M\circledast H_{s}$ as right $H$-comodules,
2. $(ii)$
$(L\circledast M)\circledast N\cong L\circledast(M\circledast N)$ as right
$H$-comodules.
Here the natural isomorphisms in $(i)$ are given as follows: For a right
$H$-comodule $M$
$\displaystyle l_{M}:$ $\displaystyle\ H_{s}\otimes_{H_{s}}M\longrightarrow
M,\ \quad l_{M}(y\otimes_{H_{s}}m)=y\cdot m\qquad(y\in H_{s},\ m\in M),$
$\displaystyle l_{M}^{-1}:$ $\displaystyle\ M\longrightarrow
H_{s}\otimes_{H_{s}}M,\ \quad l_{M}^{-1}(m)=1\otimes_{H_{s}}m\qquad\quad(m\in
M),$ $\displaystyle r_{M}:$ $\displaystyle\
M\otimes_{H_{s}}H_{s}\longrightarrow M,\ \quad r_{M}(m\otimes_{H_{s}}y)=m\cdot
y\qquad(m\in M,\ y\in H_{s}),$ $\displaystyle r_{M}^{-1}:$ $\displaystyle\
M\longrightarrow M\otimes_{H_{s}}H_{s},\ \quad
r_{M}^{-1}(m)=m\otimes_{H_{s}}1\qquad\quad(m\in M).$
The isomorphism in $(ii)$ is induced from a usual isomorphism between vector
spaces.
2. $(2)$
Let $f:(M,\rho_{M})\longrightarrow(N,\rho_{N})$,
$g:(M^{\prime},\rho_{M}^{\prime})\longrightarrow(N^{\prime},\rho_{N}^{\prime})$
be right $H$-comodule maps. Then
$f\otimes_{H_{s}}g:M\otimes_{H_{s}}M^{\prime}\longrightarrow
N\otimes_{H_{s}}N^{\prime}$ is also a right $H$-comodule map with respect to
the comodule structure given in $(1)$. We denote the map $f\otimes_{H_{s}}g$
by $f\circledast g$.
By Parts $(1)$, $(2)$ the abelian category
$\text{\boldmath{$\mathsf{M}$}}^{H}$ becomes a $\boldsymbol{k}$-linear
monoidal category whose unit object is $(H_{s},\Delta_{s})$.
###### Lemma 2.5 ([14, Proposition 4.1]).
Let $H$ be a weak bialgebra over $\boldsymbol{k}$, and $(M,\rho_{M})$ be a
right $H$-comodule. For $y\in H_{s}$ and $m\in M$, the elements $y\cdot m$ and
$m\cdot y$ in $M$ are defined as follows:
$\displaystyle y\cdot m$ $\displaystyle:=m_{(0)}\varepsilon(ym_{(1)}),$ (2.8)
$\displaystyle m\cdot y$ $\displaystyle:=m_{(0)}\varepsilon(m_{(1)}y).$ (2.9)
1. $(1)$
$M$ becomes an $(H_{s},H_{s})$-bimodule equipped with the above actions.
2. $(2)$
$M\otimes H$ becomes an $(H_{s},H_{s})$-bimodule equipped with the following
actions: For $y\in H_{s},\ m\in M,\ x\in H$,
$\displaystyle y\cdot(m\otimes x)$ $\displaystyle:=(1_{(1)}\cdot
m)\otimes(y1_{(2)}x)=m_{(0)}\otimes\varepsilon_{t}(m_{(1)})yx,$ (2.10)
$\displaystyle(m\otimes x)\cdot y$ $\displaystyle:=(m\cdot
1_{(1)})\otimes(xy1_{(2)})=m_{(0)}\otimes
xy\varepsilon_{t}^{\prime}(m_{(1)}).$ (2.11)
3. $(3)$
The following equations hold for all $y\in H_{s}$ and $m\in M$:
1. $(i)$
$\rho_{M}(y\cdot m)=m_{(0)}\otimes ym_{(1)}=y\cdot\rho_{M}(m)$.
2. $(ii)$
$\rho_{M}(m\cdot y)=m_{(0)}\otimes m_{(1)}y=\rho_{M}(m)\cdot y$.
3. $(iii)$
$\varepsilon_{s}^{\prime}(m_{(1)})\cdot
m_{(0)}=m=m_{(0)}\cdot\varepsilon_{s}(m_{(1)})$.
In particular, $\rho_{M}:M\longrightarrow M\otimes H$ is an
$(H_{s},H_{s})$-bimodule map by $(i)$ and $(ii)$.
4. $(4)$
Let $(N,\rho_{N})$ be a right $H$-comodule and
$f:(M,\rho_{M})\longrightarrow(N,\rho_{N})$ be an $H$-comodule map. Then $f$
becomes an $(H_{s},H_{s})$-bimodule map with respect to the bimodule
structures given by $(1)$.
Let $H$ be a weak bialgebra over $\boldsymbol{k}$, and denote by
${}_{H_{s}}\text{\boldmath{$\mathsf{M}$}}_{H_{s}}$ the $\boldsymbol{k}$-linear
category whose objects are $(H_{s},H_{s})$-bimodules and whose
morphisms are $(H_{s},H_{s})$-bimodule maps between them. By Lemma 2.5 for a
right $H$-comodule $(M,\rho_{M})$, the underlying vector space $M$ has an
$(H_{s},H_{s})$-bimodule structure, and $\rho_{M}:M\longrightarrow M\otimes H$
becomes an $(H_{s},H_{s})$-bimodule map. Thus $(M,\rho_{M})$ can be regarded
as a right $H$-comodule in ${}_{H_{s}}\text{\boldmath{$\mathsf{M}$}}_{H_{s}}$.
Then any $H$-comodule map $f:M\longrightarrow N$ always an
$(H_{s},H_{s})$-bimodule map.
We denote by ${}_{H_{s}}\text{\boldmath{$\mathsf{M}$}}_{H_{s}}^{H}$ the
$\boldsymbol{k}$-linear category whose objects are right $H$-comodules in
${}_{H_{s}}\text{\boldmath{$\mathsf{M}$}}_{H_{s}}$ and morphisms are right
$H$-comodule and $(H_{s},H_{s})$-bimodule maps between them. The category
${}_{H_{s}}\text{\boldmath{$\mathsf{M}$}}_{H_{s}}^{H}$ has a
$\boldsymbol{k}$-linear monoidal structure whose tensor product is given by
$\otimes_{H_{s}}$.
As a special case of [18, Theorem 2.2] we have:
###### Lemma 2.6.
Let $H$ be a weak bialgebra over $\boldsymbol{k}$. By Lemma 2.5 for a right
$H$-comodule $(M,\rho_{M})$, there is an $(H_{s},H_{s})$-bimodule structure on
the underlying vector space $M$, and $\rho_{M}:M\longrightarrow M\otimes H$
becomes an $(H_{s},H_{s})$-bimodule map. Thus $(M,\rho_{M})$ can be regarded
as a right $H$-comodule in ${}_{H_{s}}\text{\boldmath{$\mathsf{M}$}}_{H_{s}}$.
Then any $H$-comodule map $f:M\longrightarrow N$ is always an
$(H_{s},H_{s})$-bimodule map. This correspondence gives rise to a
$\boldsymbol{k}$-linear monoidal equivalence
$\Xi^{H}:\text{\boldmath{$\mathsf{M}$}}^{H}\longrightarrow{}_{H_{s}}\text{\boldmath{$\mathsf{M}$}}_{H_{s}}^{H}$
between $\boldsymbol{k}$-linear monoidal categories.
We set
$\hat{U}^{H}:=U^{H}\circ\Xi^{H}:\text{\boldmath{$\mathsf{M}$}}^{H}\longrightarrow{}_{H_{s}}\text{\boldmath{$\mathsf{M}$}}_{H_{s}},$
where
$U^{H}:{}_{H_{s}}\text{\boldmath{$\mathsf{M}$}}_{H_{s}}^{H}\longrightarrow{}_{H_{s}}\text{\boldmath{$\mathsf{M}$}}_{H_{s}}$
is the forgetful monoidal functor. The composition $\hat{U}^{H}$ of monoidal
functors becomes a monoidal functor, whose structure is given as follows:
1. $\bullet$
$\phi_{M,N}^{\hat{U}^{H}}=\text{id}_{M\otimes_{H_{s}}N}$ for each
$M,N\in\text{\boldmath{$\mathsf{M}$}}^{H}$,
2. $\bullet$
$\omega^{\hat{U}^{H}}:H_{s}\longrightarrow\hat{U}^{H}(H_{s})=H_{s}$ is the
identity map.
###### Lemma 2.7.
Let $H,K$ be weak bialgebras over $\boldsymbol{k}$, and
$\varphi:H\longrightarrow K$ be a weak bialgebra map, namely, an algebra map
and a coalgebra map. Then $\varphi(H_{s})\subset K_{s}$ and
$\varphi(H_{t})\subset K_{t}$. The former inclusion induces an algebra map
$\varphi_{s}:=\varphi|_{H_{s}}:H_{s}\longrightarrow K_{s}$.
1. $(1)$
For a right $H$-comodule $(M,\rho_{M})$
$\text{\boldmath{$\mathsf{M}$}}^{\varphi}(M,\rho_{M}):=(M,(\text{id}_{M}\otimes\varphi)\circ\rho_{M})$
is a right $K$-comodule, and for a right $H$-comodule map
$f:(M,\rho_{M})\longrightarrow(N,\rho_{N})$
$\text{\boldmath{$\mathsf{M}$}}^{\varphi}(f):=f:\text{\boldmath{$\mathsf{M}$}}^{\varphi}(M,\rho_{M})\longrightarrow\text{\boldmath{$\mathsf{M}$}}^{\varphi}(N,\rho_{N})$
is a right $K$-comodule map. These correspondences define a covariant functor
$\text{\boldmath{$\mathsf{M}$}}^{\varphi}:\text{\boldmath{$\mathsf{M}$}}^{H}\longrightarrow\text{\boldmath{$\mathsf{M}$}}^{K}$.
2. $(2)$
The functor $\text{\boldmath{$\mathsf{M}$}}^{\varphi}$ is a
$\boldsymbol{k}$-linear comonoidal functor. If $\varphi_{s}$ is bijective,
then the $\boldsymbol{k}$-linear comonoidal functor
$\text{\boldmath{$\mathsf{M}$}}^{\varphi}$ is strong. In this case it can be
regarded as a $\boldsymbol{k}$-linear monoidal functor.
3. $(3)$
The algebra map $\varphi_{s}$ induces a $\boldsymbol{k}$-linear monoidal
functor
${}_{\varphi_{s}}\text{\boldmath{$\mathsf{M}$}}_{\varphi_{s}}:{}_{K_{s}}\text{\boldmath{$\mathsf{M}$}}_{K_{s}}\longrightarrow{}_{H_{s}}\text{\boldmath{$\mathsf{M}$}}_{H_{s}}$,
and if $\varphi_{s}$ is bijective, then
$\hat{U}^{K}\circ\text{\boldmath{$\mathsf{M}$}}^{\varphi}={}_{\varphi_{s}^{-1}}\text{\boldmath{$\mathsf{M}$}}_{\varphi_{s}^{-1}}\circ\hat{U}^{H}:\text{\boldmath{$\mathsf{M}$}}^{H}\longrightarrow{}_{K_{s}}\text{\boldmath{$\mathsf{M}$}}_{K_{s}}$
as monoidal functors.
###### Proof..
(2) For right $H$-comodules $(M,\rho_{M}),(N,\rho_{N})$ we set
$\displaystyle\text{\boldmath{$\mathsf{M}$}}^{\varphi}(M,\rho_{M})\circledast\text{\boldmath{$\mathsf{M}$}}^{\varphi}(N,\rho_{N})$
$\displaystyle=(M\otimes_{K_{s}}N,\rho),$
$\displaystyle\text{\boldmath{$\mathsf{M}$}}^{\varphi}\bigl{(}(M,\rho_{M})\circledast(N,\rho_{N})\bigr{)}$
$\displaystyle=(M\otimes_{H_{s}}N,\rho^{\prime}),$
where $\rho$ and $\rho^{\prime}$ are given as follows:
$\displaystyle\rho:M\otimes_{K_{s}}N\longrightarrow(M\otimes_{K_{s}}N)\otimes
K,\quad$
$\displaystyle\rho(m\otimes_{K_{s}}n)=m_{(0)}\otimes_{K_{s}}n_{(0)}\otimes\varphi(m_{(1)})\varphi(n_{(1)}),$
$\displaystyle\rho^{\prime}:M\otimes_{H_{s}}N\longrightarrow(M\otimes_{H_{s}}N)\otimes
K,\quad$
$\displaystyle\rho^{\prime}(m\otimes_{H_{s}}n)=m_{(0)}\otimes_{H_{s}}n_{(0)}\otimes\varphi(m_{(1)}n_{(1)}).$
Since $\varphi$ is an algebra map, the identity map $\text{id}_{M\otimes N}$
induces a surjection $\iota_{M,N}:M\otimes_{H_{s}}N\longrightarrow
M\otimes_{K_{s}}N$ which is a right $K$-comodule map.
The restriction $\varphi_{s}:H_{s}\longrightarrow K_{s}$ can be regarded as a
right $K$-comodule map from
$\text{\boldmath{$\mathsf{M}$}}^{\varphi}(H_{s},(\Delta_{H})_{s})=(H_{s},\
(\text{id}_{H_{s}}\otimes\varphi)\circ(\Delta_{H})_{s})$ to
$(K_{s},(\Delta_{K})_{s})$. Thus, the triplet
$(\text{\boldmath{$\mathsf{M}$}}^{\varphi},\iota,\varphi_{s})$ is a comonoidal
functor from $\text{\boldmath{$\mathsf{M}$}}^{H}$ to
$\text{\boldmath{$\mathsf{M}$}}^{K}$, where
$\iota:=\\{\iota_{M,N}\\}_{M,N\in\text{\boldmath{$\mathsf{M}$}}^{H}}$.
If $\varphi_{s}$ is bijective, then $\iota_{M,N}$ for all
$M,N\in\text{\boldmath{$\mathsf{M}$}}^{H}$ is an isomorphism. Thus,
$(\text{\boldmath{$\mathsf{M}$}}^{\varphi},\iota^{-1},\varphi_{s}^{-1}):\text{\boldmath{$\mathsf{M}$}}^{H}\longrightarrow\text{\boldmath{$\mathsf{M}$}}^{K}$
is a strong monoidal functor.
(3) For a $(K_{s},K_{s})$-bimodule $(M,\alpha_{l},\alpha_{r})$ we set
${}_{\varphi_{s}}\text{\boldmath{$\mathsf{M}$}}_{\varphi_{s}}(M,\alpha_{l},\alpha_{r})=\bigl{(}M,\alpha_{l}\circ(\varphi_{s}\otimes\text{id}_{M}),\alpha_{r}\circ(\text{id}_{M}\otimes\varphi_{s})\bigr{)},$
and for a $(K_{s},K_{s})$-bimodule map $f:M\longrightarrow N$ we set
${}_{\varphi_{s}}\text{\boldmath{$\mathsf{M}$}}_{\varphi_{s}}(f)=f.$
Then, ${}_{\varphi_{s}}\text{\boldmath{$\mathsf{M}$}}_{\varphi_{s}}$ is a
$\boldsymbol{k}$-linear covariant functor.
For $(K_{s},K_{s})$-bimodules $M=(M,\alpha_{l}^{M},\alpha_{r}^{M})$ and
$N=(N,\alpha_{l}^{N},\alpha_{r}^{N})$, let
$\jmath_{M,N}:{}_{\varphi_{s}}\text{\boldmath{$\mathsf{M}$}}_{\varphi_{s}}(M)\otimes_{H_{s}}{}_{\varphi_{s}}\text{\boldmath{$\mathsf{M}$}}_{\varphi_{s}}(N)\longrightarrow{}_{\varphi_{s}}\text{\boldmath{$\mathsf{M}$}}_{\varphi_{s}}(M\otimes_{K_{s}}N)$
be the induced $\boldsymbol{k}$-linear map from the identity map
$\text{id}_{M\otimes N}$. This is an $(H_{s},H_{s})$-bimodule map which is
natural with respect to $M,N$. So, we have a monoidal functor
$({}_{\varphi_{s}}\text{\boldmath{$\mathsf{M}$}}_{\varphi_{s}},\jmath,\varphi_{s})$
by setting
$\jmath=\\{\jmath_{M,N}\\}_{M,N\in{}_{K_{s}}\text{\boldmath{$\mathsf{M}$}}_{K_{s}}}$.
If $\varphi_{s}$ is bijective, then one can show that
$\hat{U}^{K}\circ\text{\boldmath{$\mathsf{M}$}}^{\varphi}={}_{\varphi_{s}^{-1}}\text{\boldmath{$\mathsf{M}$}}_{\varphi_{s}^{-1}}\circ\hat{U}^{H}$
as $\boldsymbol{k}$-linear monoidal functors. ∎
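Part (1) of the lemma is a routine verification that the proof above omits; as a sketch in Sweedler notation (using only that $\varphi$ is a coalgebra map), the coassociativity of the $K$-coaction $(\text{id}_{M}\otimes\varphi)\circ\rho_{M}$ amounts to
$m_{(0)}\otimes\Delta_{K}\bigl{(}\varphi(m_{(1)})\bigr{)}=m_{(0)}\otimes\varphi(m_{(1)})\otimes\varphi(m_{(2)})\qquad(m\in M),$
which follows from the coassociativity of $\rho_{M}$ together with $\Delta_{K}\circ\varphi=(\varphi\otimes\varphi)\circ\Delta_{H}$; counitality follows similarly from $\varepsilon_{K}\circ\varphi=\varepsilon_{H}$.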
## 3 Reconstruction of a weak bialgebra map
First of all, we recall the proof of the reconstruction theorem for
coalgebra maps, which is a classical and fundamental ingredient of the
Tannakian reconstruction theorem.
For a coalgebra $C$ over $\boldsymbol{k}$ we denote by $\mathbb{M}^{C}$ the
$\boldsymbol{k}$-linear category whose objects are finite-dimensional right
$C$-comodules and whose morphisms are $C$-comodule maps between them, and
denote by $U^{C}$ the forgetful functor from $\mathbb{M}^{C}$ to
$\mathrm{Vect}_{\boldsymbol{k}}^{\mathrm{f.d.}}$.
###### Theorem 3.1.
Let $C,D$ be coalgebras over $\boldsymbol{k}$, and
$F:\mathbb{M}^{C}\longrightarrow\mathbb{M}^{D}$ be a $\boldsymbol{k}$-linear
covariant functor. If $U^{D}\circ F=U^{C}$, then there is a unique coalgebra
map $\varphi:C\longrightarrow D$ such that
$F=\mathbb{M}^{\varphi}:\mathbb{M}^{C}\longrightarrow\mathbb{M}^{D}$. Here,
$\mathbb{M}^{\varphi}$ denotes the $\boldsymbol{k}$-linear functor induced
from $\varphi$.
###### Proof..
The proof follows from Franco’s lecture note [6, p.81–84].
Let $(M,\rho_{M})$ be a finite-dimensional $C$-comodule. By the assumption
$U^{D}\circ F=U^{C}$, one can set $F(M,\rho_{M})=(M,\rho_{M}^{F})$, where
$\rho_{M}^{F}:M\longrightarrow M\otimes D$ is a right coaction of $D$.
Let $P$ be a finite-dimensional subcoalgebra of $C$. It can be regarded as a
right $C$-comodule by the coaction
$\rho_{P}:P\xrightarrow{\ \ \Delta_{P}\ \ }P\otimes P\xrightarrow{\ \
\text{id}\otimes\iota_{P}\ \ }P\otimes C,$
where $\iota_{P}$ stands for the inclusion. Since $P$ is finite-dimensional,
it gives an object of $\mathbb{M}^{C}$, and therefore we have
$F(P,\rho_{P})=(P,\rho_{P}^{F})\in\mathbb{M}^{D}$.
Let us consider the composition
$\varphi_{P}:P\xrightarrow{\ \ \rho_{P}^{F}\ \ }P\otimes D\xrightarrow{\
\varepsilon_{P}\otimes\text{id}\ }\boldsymbol{k}\otimes D\cong D.$
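In Sweedler-type notation, writing $\rho_{P}^{F}(p)=p_{[0]}\otimes p_{[1]}$ for the $D$-coaction (a bookkeeping notation used only here), the map $\varphi_{P}$ reads
$\varphi_{P}(p)=\varepsilon_{P}(p_{[0]})\,p_{[1]}\qquad(p\in P).$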
Then $\varphi_{P}:P\longrightarrow D$ is a coalgebra map. This fact can be
verified as follows.
$\bullet$ The equation $\varepsilon_{D}\circ\varphi_{P}=\varepsilon_{P}$ comes
from the following commutative diagram.
[Commutative diagram omitted; it encodes the computation $\varepsilon_{D}\circ\varphi_{P}=(\varepsilon_{P}\otimes\varepsilon_{D})\circ\rho_{P}^{F}=\varepsilon_{P}\circ(\text{id}_{P}\otimes\varepsilon_{D})\circ\rho_{P}^{F}=\varepsilon_{P}$, the last equality by the counit property of the coaction $\rho_{P}^{F}$.]
$\bullet$ To show the equation
$\Delta_{D}\circ\varphi_{P}=(\varphi_{P}\otimes\varphi_{P})\circ\Delta_{P}$,
it is enough to verify the following diagram commutes:
[Large commutative diagram omitted; its outer boundary expresses the desired equality $\Delta_{D}\circ\varphi_{P}=(\varphi_{P}\otimes\varphi_{P})\circ\Delta_{P}$, and it is subdivided into Parts (A), (B), (C) discussed below.]
Part (A) is commutative since $\rho_{P}^{F}$ is a right $D$-coaction, and Part
(C) is commutative since the following diagram is so.
[Commutative diagram omitted; it establishes the commutativity of Part (C), involving the maps $\Delta_{P}\otimes\text{id}$, $\rho_{P}^{F}\otimes\text{id}$ and $\text{id}\otimes\varepsilon_{P}\otimes\text{id}$.]
The proof of the commutativity of Part (B) is a little technical. For any
$\boldsymbol{k}$-linear map $\gamma:P\longrightarrow\boldsymbol{k}$, the
composition
$P\xrightarrow{\ \ \Delta_{P}\ \ }P\otimes P\xrightarrow{\
\gamma\otimes\text{id}_{P}\ }P$
is a right $P$-comodule map, and hence it is also a right $C$-comodule map.
Sending it by $F$ we have a right $D$-comodule map
$(\gamma\otimes\text{id}_{P})\circ\Delta_{P}:P\longrightarrow P$. Thus the
following diagram commutes:
$\rho_{P}^{F}\circ(\gamma\otimes\text{id}_{P})\circ\Delta_{P}=\bigl{(}((\gamma\otimes\text{id}_{P})\circ\Delta_{P})\otimes\text{id}_{D}\bigr{)}\circ\rho_{P}^{F}.$
Combining the above identity with
$\rho_{P}^{F}\circ(\gamma\otimes\text{id}_{P})=(\gamma\otimes\text{id}_{P\otimes
D})\circ(\text{id}_{P}\otimes\rho_{P}^{F})$, we have
$(\gamma\otimes\text{id}_{P\otimes
D})\circ(\text{id}_{P}\otimes\rho_{P}^{F})\circ\Delta_{P}=(\gamma\otimes\text{id}_{P\otimes
D})\circ(\Delta_{P}\otimes\text{id}_{P})\circ\rho_{P}^{F}.$
Since this equation holds for all $\boldsymbol{k}$-linear maps $\gamma$, we
see that
$(\text{id}_{P}\otimes\rho_{P}^{F})\circ\Delta_{P}=(\Delta_{P}\otimes\text{id}_{P})\circ\rho_{P}^{F}$.
This implies the commutativity of Part (B).
By the fundamental theorem for coalgebras, $C$ is a union of finite-
dimensional subcoalgebras. Based on this fact, it can be shown that the
coalgebra maps $\varphi_{P}:P\longrightarrow D$ for all finite-dimensional
subcoalgebras $P$ induce a well-defined coalgebra map
$\varphi:C\longrightarrow D$. In fact, it is easily verified that
$(\varphi_{Q})|_{P}=\varphi_{P}$ for any two finite-dimensional subcoalgebras
$P$ and $Q$ with $P\subset Q$. In this way, it is proved that there is
a coalgebra map $\varphi:C\longrightarrow D$ such that
$\varphi|_{P}=\varphi_{P}$ for every finite-dimensional subcoalgebra $P$ of $C$.
The coalgebra map $\varphi$ satisfies $F=\mathbb{M}^{\varphi}$. This fact can
be derived as follows.
$\bullet$ Let $(M,\rho_{M})$ be a finite-dimensional right $C$-comodule. Then
there is a finite-dimensional subcoalgebra $P$ of $C$ such that
$\rho_{M}(M)\subset M\otimes P$. Since $(M\otimes
P,\text{id}_{M}\otimes\rho_{P})$ is a right $C$-comodule, we have a right
$D$-comodule $(M\otimes P,\ (\text{id}_{M}\otimes\rho_{P})^{F})$. Since
$M\otimes P$ decomposes into a direct sum of finitely many copies of $P$ as a right
$C$-comodule and $F$ is a $\boldsymbol{k}$-linear functor satisfying
$U^{C}=U^{D}\circ F$, it follows that
$(\text{id}_{M}\otimes\rho_{P})^{F}=\text{id}_{M}\otimes\rho_{P}^{F}:M\otimes
P\longrightarrow M\otimes P\otimes D$.
Let $\rho_{M}^{\prime}:M\longrightarrow M\otimes P$ be the restriction of
$\rho_{M}$. Then the map $\rho_{M}^{\prime}$ is a right $C$-comodule map from
$(M,\rho_{M})$ to $(M\otimes P,\text{id}\otimes\rho_{P})$. Thus,
$F(\rho_{M}^{\prime})=\rho_{M}^{\prime}:M\longrightarrow M\otimes P$ is a
right $D$-comodule map from $(M,\rho_{M}^{F})$ to $(M\otimes
P,(\text{id}\otimes\rho_{P})^{F})=(M\otimes
P,\text{id}_{M}\otimes\rho_{P}^{F})$, and hence
$(\text{id}_{M}\otimes\rho_{P}^{F})\circ\rho_{M}^{\prime}=(\rho_{M}^{\prime}\otimes\text{id}_{D})\circ\rho_{M}^{F}.$
It follows, by chasing the above relations, that
$\rho_{M}^{F}=(\text{id}_{M}\otimes\varepsilon_{P}\otimes\text{id}_{D})\circ(\rho_{M}^{\prime}\otimes\text{id}_{D})\circ\rho_{M}^{F}=(\text{id}_{M}\otimes\varepsilon_{P}\otimes\text{id}_{D})\circ(\text{id}_{M}\otimes\rho_{P}^{F})\circ\rho_{M}^{\prime}=(\text{id}_{M}\otimes\varphi_{P})\circ\rho_{M}^{\prime}=(\text{id}_{M}\otimes\varphi)\circ\rho_{M},$
and
hence we see that $F(M,\rho_{M})=\mathbb{M}^{\varphi}(M,\rho_{M})$.
$\bullet$ Let $f:(M,\rho_{M})\longrightarrow(N,\rho_{N})$ be a right
$C$-comodule map between finite-dimensional right $C$-comodules. Then
$\displaystyle F(f)=f$
$\displaystyle:(M,\rho_{M}^{F})\longrightarrow(N,\rho_{N}^{F}),$
$\displaystyle\mathbb{M}^{\varphi}(f)=f$
$\displaystyle:(M,(\text{id}\otimes\varphi)\circ\rho_{M})\longrightarrow(N,(\text{id}\otimes\varphi)\circ\rho_{N}).$
As shown that $\rho_{M}^{F}=(\text{id}\otimes\varphi)\circ\rho_{M}$ and
$\rho_{N}^{F}=(\text{id}\otimes\varphi)\circ\rho_{N}$, we see that
$F(f)=\mathbb{M}^{\varphi}(f)$. Thus $F=\mathbb{M}^{\varphi}$ as
$\boldsymbol{k}$-linear functors.
Finally, we show the uniqueness of $\varphi$. Suppose that a coalgebra map
$\psi:C\longrightarrow D$ satisfies $F=\mathbb{M}^{\psi}$, too. Let $P$ be a
finite-dimensional subcoalgebra of $C$, and regard it as the right
$C$-comodule $(P,\rho_{P})$. Since
$\mathbb{M}^{\varphi}(P,\rho_{P})=\mathbb{M}^{\psi}(P,\rho_{P})$, the
following identity holds:
$(\text{id}_{P}\otimes\varphi)\circ\rho_{P}=(\text{id}_{P}\otimes\psi)\circ\rho_{P}.$
Since the identity
$\varphi\circ\iota_{P}=(\varepsilon_{P}\otimes\text{id}_{D})\circ(\text{id}_{P}\otimes\varphi)\circ\rho_{P}$
holds (by the counit property $(\varepsilon_{P}\otimes\text{id}_{C})\circ\rho_{P}=\iota_{P}$), and the same identity holds for $\psi$, we have
$\varphi\circ\iota_{P}=\psi\circ\iota_{P}$. This implies that $\varphi=\psi$
since $C$ can be regarded as a union of finite-dimensional subcoalgebras. ∎
###### Remark 3.2.
The coalgebra map $\varphi_{P}:P\longrightarrow D$ in the above proof is a
right $D$-comodule map from $(P,\rho_{P}^{F})$ to $(D,\Delta_{D})$. It follows
from the following commutative diagram.
[Commutative diagram omitted; its outer square expresses the comodule-map property $\Delta_{D}\circ\varphi_{P}=(\varphi_{P}\otimes\text{id}_{D})\circ\rho_{P}^{F}$, and it is subdivided into Parts $\mathrm{(1)}$, $\mathrm{(2)}$, $\mathrm{(3)}$.]
Here, the commutativity of Parts $\mathrm{(1)}$ and $\mathrm{(2)}$ comes from the
definition of $\varphi_{P}$, and that of Part $\mathrm{(3)}$ from the fact that
$(P,\rho_{P}^{F})$ is a right $D$-comodule.
As an application of Theorem 3.1 we have:
###### Corollary 3.3.
Let $C,D$ be two coalgebras over $\boldsymbol{k}$, and
$F:\mathbb{M}^{C}\longrightarrow\mathbb{M}^{D}$ be a $\boldsymbol{k}$-linear
functor. If $F$ is an equivalence of $\boldsymbol{k}$-linear categories and
$U^{D}\circ F=U^{C}$ is satisfied, then the coalgebra map
$\varphi:C\longrightarrow D$ determined by
$F=\mathbb{M}^{\varphi}:\mathbb{M}^{C}\longrightarrow\mathbb{M}^{D}$ in
Theorem 3.1 is an isomorphism.
The above corollary can be easily proved by taking a quasi-inverse $G$ of $F$
such that $U^{C}\circ G=U^{D}$, $G\circ F=1_{\mathbb{M}^{C}}$ and $F\circ
G=1_{\mathbb{M}^{D}}$.
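In slightly more detail (a sketch under these assumptions on $G$): Theorem 3.1 applied to $G$ yields a coalgebra map $\psi:D\longrightarrow C$ with $G=\mathbb{M}^{\psi}$, and then
$\mathbb{M}^{\psi\circ\varphi}=\mathbb{M}^{\psi}\circ\mathbb{M}^{\varphi}=G\circ F=1_{\mathbb{M}^{C}}=\mathbb{M}^{\mathrm{id}_{C}},$
so the uniqueness part of Theorem 3.1 gives $\psi\circ\varphi=\mathrm{id}_{C}$; the equation $\varphi\circ\psi=\mathrm{id}_{D}$ is obtained in the same way.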
Dualizing Theorem 3.1, we also obtain the following corollary.
###### Corollary 3.4.
Let $A,B$ be two finite-dimensional algebras over $\boldsymbol{k}$, and
$F:{}_{B}\mathbb{M}\longrightarrow{}_{A}\mathbb{M}$ be a
$\boldsymbol{k}$-linear functor. If ${}_{A}U\circ F={}_{B}U$ is satisfied for
the forgetful functors ${}_{A}U,{}_{B}U$ to
$\text{Vect}_{\boldsymbol{k}}^{\mathrm{f.d.}}$, then
1. $(1)$
there is a unique algebra map $\varphi:A\longrightarrow B$ such that
$F={}_{\varphi}\mathbb{M}:{}_{B}\mathbb{M}\longrightarrow{}_{A}\mathbb{M}$,
where ${}_{\varphi}\mathbb{M}$ stands for the $\boldsymbol{k}$-linear functor
induced from $\varphi$,
2. $(2)$
if $F$ is an equivalence of $\boldsymbol{k}$-linear categories, then the
algebra map $\varphi:A\longrightarrow B$ given by $(1)$ is an isomorphism.
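A sketch of the dualization: for a finite-dimensional algebra $A$ there is a canonical isomorphism ${}_{A}\mathbb{M}\cong\mathbb{M}^{A^{\ast}}$ commuting with the forgetful functors, so $F$ induces a $\boldsymbol{k}$-linear functor $\mathbb{M}^{B^{\ast}}\longrightarrow\mathbb{M}^{A^{\ast}}$ to which Theorem 3.1 applies; the resulting coalgebra map $\tilde{\varphi}:B^{\ast}\longrightarrow A^{\ast}$ dualizes to the algebra map
$\varphi:=\tilde{\varphi}^{\ast}:A\cong A^{\ast\ast}\longrightarrow B^{\ast\ast}\cong B.$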
Now, we show the main theorem in the present paper:
###### Theorem 3.5.
Let $A,B$ be weak bialgebras over $\boldsymbol{k}$, and
$F=(F,\bar{\phi}^{F},\bar{\omega}^{F}):\mathbb{M}^{A}\longrightarrow\mathbb{M}^{B}$
be a strong $\boldsymbol{k}$-linear comonoidal functor. Suppose that
$U^{B}\circ F=U^{A}$ as $\boldsymbol{k}$-linear monoidal functors. Then
there is a unique weak bialgebra map $\varphi:A\longrightarrow B$ such that
$F=\mathbb{M}^{\varphi}$ as $\boldsymbol{k}$-linear comonoidal functors and
$\bar{\omega}^{F}=\varphi|_{A_{s}}:A_{s}\longrightarrow B_{s}$ is an algebra
isomorphism. Furthermore, $\hat{U}^{B}\circ
F={}_{\varphi_{s}^{-1}}\text{\boldmath{$\mathsf{M}$}}_{\varphi_{s}^{-1}}\circ\hat{U}^{A}$
is satisfied, where
$\hat{U}^{A}:\mathbb{M}^{A}\longrightarrow{}_{A_{s}}\text{\boldmath{$\mathsf{M}$}}_{A_{s}},\
\hat{U}^{B}:\mathbb{M}^{B}\longrightarrow{}_{B_{s}}\text{\boldmath{$\mathsf{M}$}}_{B_{s}}$
are forgetful functors.
###### Proof..
By Theorem 3.1 there is a unique coalgebra map $\varphi:A\longrightarrow B$
such that $F=\mathbb{M}^{\varphi}$ as $\boldsymbol{k}$-linear functors. Since
$U^{B}\circ F=U^{A}$ as $\boldsymbol{k}$-linear monoidal functors, for all
$M,N\in\mathbb{M}^{A}$ the composition
$\displaystyle U^{B}\bigl{(}F(M)\bigr{)}\otimes U^{B}\bigl{(}F(N)\bigr{)}$
$\displaystyle\xrightarrow{\ \phi^{U^{B}}_{F(M),F(N)}\
}U^{B}\bigl{(}F(M)\otimes_{B_{s}}F(N)\bigr{)}$
$\displaystyle\qquad\qquad\xrightarrow{\
U^{B}\bigl{(}(\bar{\phi}^{F}_{M,N})^{-1}\bigr{)}\
}U^{B}\bigl{(}F(M\otimes_{A_{s}}N)\bigr{)}$
coincides with the natural projection $\phi^{U^{A}}_{M,N}:U^{A}(M)\otimes
U^{A}(N)\longrightarrow U^{A}(M\otimes_{A_{s}}N)$. This implies that the
diagram
[Commutative square omitted: the natural projections $M\otimes N\longrightarrow M\otimes_{A_{s}}N$ and $M\otimes N\longrightarrow M\otimes_{B_{s}}N$ intertwine $\text{id}_{M\otimes N}$ with $\bar{\phi}^{F}_{M,N}$.]
commutes. This means that the map
$\bar{\phi}^{F}_{M,N}:F(M\otimes_{A_{s}}N)\longrightarrow
F(M)\otimes_{B_{s}}F(N)$ is induced from the identity map $\text{id}_{M\otimes
N}$.
Let us show that $\bar{\omega}^{F}=\varphi|_{A_{s}}:A_{s}\longrightarrow
B_{s}$ is an algebra isomorphism. Since $U^{B}\circ F=U^{A}$ as
$\boldsymbol{k}$-linear monoidal functors, the composition
$U^{B}\bigl{(}(\bar{\omega}^{F})^{-1}\bigr{)}\circ\omega^{U^{B}}:\boldsymbol{k}\longrightarrow(U^{B}\circ
F)(A_{s})=A_{s}$
coincides with $\omega^{U^{A}}:\boldsymbol{k}\longrightarrow A_{s}$. Thus
$(\bar{\omega}^{F})^{-1}\circ\omega^{U^{B}}=\omega^{U^{A}}$ as maps, and hence
$\bar{\omega}^{F}(1)=1$.
Since $\bar{\omega}^{F}:F(A_{s})\longrightarrow B_{s}$ is a right $B$-comodule
map and $F(A_{s})=\mathbb{M}^{\varphi}(A_{s})$, the following diagram
commutes.
[Commutative diagram omitted; it expresses the comodule-map property $\Delta_{B}|_{B_{s}}\circ\bar{\omega}^{F}=(\bar{\omega}^{F}\otimes\varphi)\circ\Delta_{A}|_{A_{s}}$.]
Thus
$\Delta_{B}\bigl{(}\bar{\omega}^{F}(y)\bigr{)}=\bar{\omega}^{F}(y_{(1)})\otimes\varphi(y_{(2)})$
for all $y\in A_{s}$. Applying $\varepsilon_{B}\otimes\text{id}$ to both
sides, we have
$\displaystyle\bar{\omega}^{F}(y)$
$\displaystyle=\varepsilon_{B}\bigl{(}\bar{\omega}^{F}(y_{(1)})\bigr{)}\varphi(y_{(2)})$
$\displaystyle=\varepsilon_{A}(y_{(1)})\varphi(y_{(2)})$
$\displaystyle=\varphi(y).$
This implies that $\bar{\omega}^{F}=\varphi|_{A_{s}}:A_{s}\longrightarrow
B_{s}$, and hence $\varphi(1)=\bar{\omega}^{F}(1)=1$.
Next we show that $\bar{\omega}^{F}$ preserves products. Since
$F=(F,\bar{\phi}^{F},\bar{\omega}^{F})$ is a comonoidal functor, its compatibility with the left unit constraints gives, for all $M\in\mathbb{M}^{A}$,
$F(l_{M}^{A})=l^{B}_{F(M)}\circ(\bar{\omega}^{F}\otimes_{B_{s}}\text{id})\circ\bar{\phi}^{F}_{A_{s},M}.$
In particular, the following diagram commutes:
[Commutative diagram omitted; it is the case $M=A_{s}$ of the above compatibility, augmented by the maps $\bar{\omega}^{F}$, $\text{id}\otimes_{B_{s}}\bar{\omega}^{F}$ and $l^{B}_{B_{s}}$.]
It follows that
$\bar{\omega}^{F}(y_{1}y_{2})=\bar{\omega}^{F}(y_{1})\bar{\omega}^{F}(y_{2})$
for all $y_{1},y_{2}\in A_{s}$ since $\bar{\phi}_{A_{s},A_{s}}^{F}$ is induced
from the identity map $\text{id}_{A_{s}\otimes A_{s}}$. Therefore, it is shown
that $\bar{\omega}^{F}=\varphi|_{A_{s}}$ is an algebra isomorphism.
Next let us show that $\varphi$ is a weak bialgebra map. For this it is enough
to show that $\varphi$ preserves products. For two subspaces $P,P^{\prime}$ of
$A$, let $PP^{\prime}$ denote the subspace of $A$ spanned by the set $\\{\
pp^{\prime}\ |\ p\in P,\ p^{\prime}\in P^{\prime}\ \\}$. Let
$\mu_{P,P^{\prime}}:P\otimes P^{\prime}\longrightarrow PP^{\prime}$ be the
restriction of the product $\mu_{A}$ of $A$. Then $\mu_{P,P^{\prime}}$ induces
a $\boldsymbol{k}$-linear map
$\bar{\mu}_{P,P^{\prime}}:P\otimes_{A_{s}}P^{\prime}\longrightarrow
PP^{\prime}$ since the equation $(p\cdot y)p^{\prime}=p(y\cdot p^{\prime})$
holds for $p\in P,\ y\in A_{s},\ p^{\prime}\in P^{\prime}$.
Let $a,a^{\prime}\in A$, and let $P,P^{\prime}$ be finite-dimensional
subcoalgebras of $A$ such that $a\in P$, $a^{\prime}\in P^{\prime}$. Then
$PP^{\prime}$ is also a finite-dimensional subcoalgebra of $A$ containing
$aa^{\prime}$, and the map $\bar{\mu}_{P,P^{\prime}}$ is a right $A$-comodule
map. Let $\varphi_{P},\varphi_{P^{\prime}}$ be the coalgebra maps defined in
the proof of Theorem 3.1. We will show that the following identity holds, expressing the commutativity of a square built from the maps $\bar{\phi}^{F}_{P,P^{\prime}}$, $\varphi_{P}\otimes_{B_{s}}\varphi_{P^{\prime}}$, $\bar{\mu}_{B}$, $F(\bar{\mu}_{P,P^{\prime}})$ and $\varphi_{PP^{\prime}}$:
(3.1) $\varphi_{PP^{\prime}}\circ F(\bar{\mu}_{P,P^{\prime}})=\bar{\mu}_{B}\circ(\varphi_{P}\otimes_{B_{s}}\varphi_{P^{\prime}})\circ\bar{\phi}^{F}_{P,P^{\prime}}.$
Here, $\bar{\mu}_{B}$ is the induced map from the product $\mu_{B}$ of $B$.
Since $\varphi_{P},\varphi_{P^{\prime}}$ are right $B$-comodule maps by Remark
3.2, the map $\varphi_{P}\otimes_{B_{s}}\varphi_{P^{\prime}}$ is well-defined.
Since $F=\text{\boldmath{$\mathsf{M}$}}^{\varphi}$ and
$\hat{U}^{B}\circ\text{\boldmath{$\mathsf{M}$}}^{\varphi}={}_{\varphi_{s}^{-1}}\text{\boldmath{$\mathsf{M}$}}_{\varphi_{s}^{-1}}\circ\hat{U}^{A}$
as $\boldsymbol{k}$-linear monoidal functors, one can verify that
$\bar{\phi}_{P,P^{\prime}}^{F}:F(P\otimes_{A_{s}}P^{\prime})\longrightarrow
F(P)\otimes_{B_{s}}F(P^{\prime})$ is a $\boldsymbol{k}$-linear map given by
$\bar{\phi}_{P,P^{\prime}}^{F}(p\otimes_{A_{s}}p^{\prime})=p\otimes_{B_{s}}p^{\prime}\qquad(p\in
F(P)=P,\ p^{\prime}\in F(P^{\prime})=P^{\prime}).$ (3.2)
Let $(F(P)\otimes_{B_{s}}F(P^{\prime}),\ \rho_{P,P^{\prime}}^{F})$ be the
tensor product in $\text{\boldmath{$\mathsf{M}$}}^{B}$ of right comodules
$(F(P),\rho_{P}^{F})$ and $(F(P^{\prime}),\rho_{P^{\prime}}^{F})$. Here,
$\rho_{P,P^{\prime}}^{F}$ is the right coaction given by
$\rho_{P,P^{\prime}}^{F}(m\otimes_{B_{s}}n)=(m_{(0)}\otimes_{B_{s}}n_{(0)})\otimes
m_{(1)}n_{(1)}\qquad(m\in F(P),\ n\in F(P^{\prime})),$
and $\rho_{P}^{F}(m)=m_{(0)}\otimes m_{(1)},\
\rho_{P^{\prime}}^{F}(n)=n_{(0)}\otimes n_{(1)}$. Since
$\bar{\phi}_{P,P^{\prime}}^{F}$ is a right $B$-comodule map from
$(F(P\otimes_{A_{s}}P^{\prime}),\ \rho_{P\otimes_{A_{s}}P^{\prime}}^{F})$ to
$(F(P)\otimes_{B_{s}}F(P^{\prime}),\ \rho_{P,P^{\prime}}^{F})$, we have
$\rho_{P,P^{\prime}}^{F}\circ\bar{\phi}_{P,P^{\prime}}^{F}=(\bar{\phi}_{P,P^{\prime}}^{F}\otimes\text{id}_{B})\circ\rho_{P\otimes_{A_{s}}P^{\prime}}^{F}.$
On the other hand, there is a $(B_{s},B_{s})$-bimodule map
$\chi_{P,P^{\prime}}:(P\otimes B)\otimes_{B_{s}}(P^{\prime}\otimes
B)\longrightarrow(P\otimes_{B_{s}}P^{\prime})\otimes B$, which is defined by
$\chi_{P,P^{\prime}}\bigl{(}(p\otimes b)\otimes_{B_{s}}(p^{\prime}\otimes
b^{\prime})\bigr{)}=(p\otimes_{B_{s}}p^{\prime})\otimes bb^{\prime}\qquad(p\in
P,\ p^{\prime}\in P^{\prime},\ b,b^{\prime}\in B).$
We also define a $\boldsymbol{k}$-linear map
$\varepsilon_{P,P^{\prime}}:P\otimes_{A_{s}}P^{\prime}\longrightarrow\boldsymbol{k}$
by
$\varepsilon_{P,P^{\prime}}(p\otimes_{A_{s}}p^{\prime})=\varepsilon(p1_{(1)})\varepsilon(1_{(2)}p^{\prime})\qquad(p\in
P,\ p^{\prime}\in P^{\prime}).$
Then it can be shown that the following diagram commutes.
[Large commutative diagram omitted; it assembles the maps $\bar{\phi}_{P,P^{\prime}}^{F}$, $\varphi_{P}\otimes_{B_{s}}\varphi_{P^{\prime}}$, $\rho_{P\otimes_{A_{s}}P^{\prime}}^{F}$, $\rho_{P,P^{\prime}}^{F}$, $\chi_{P,P^{\prime}}$, $\rho_{P}^{F}\otimes_{B_{s}}\rho_{P^{\prime}}^{F}$, $F(\varepsilon_{P,P^{\prime}})\otimes\text{id}$, $F(\bar{\mu}_{P,P^{\prime}})$, $F(\varepsilon_{PP^{\prime}})\otimes\text{id}$, $\rho_{PP^{\prime}}^{F}$, $\varphi_{PP^{\prime}}$ and $\bar{\mu}_{B}$, and contains the sub-diagram $(\ast)$ discussed below; its outer boundary is the identity (3.1).]
In fact, the commutativity of ($\ast$) in the above diagram comes from the
following three facts:
1. (i)
$\bar{\phi}^{F}_{M,N}:F(M\otimes_{A_{s}}N)\longrightarrow
F(M)\otimes_{B_{s}}F(N)$ is an isomorphism,
2. (ii)
the equation (3.2) holds,
3. (iii)
for $p\in P,\ p^{\prime}\in P^{\prime}$,
1. $\bullet$
$p\otimes_{A_{s}}p^{\prime}=(p\cdot 1)\otimes_{A_{s}}p^{\prime}=\bigl{(}p\cdot
1_{(1)}\varepsilon_{s}(1_{(2)})\bigr{)}\otimes_{A_{s}}p^{\prime}=p1_{(1)}\otimes_{A_{s}}\varepsilon_{s}(1_{(2)})p^{\prime}$,
2. $\bullet$
$\varepsilon\bigl{(}\varepsilon_{s}(1_{(2)})p^{\prime}\bigr{)}=\varepsilon(1_{(2)}p^{\prime})$
by Lemma 2.2(2).
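For the record, the two bullet points combine into the direct computation
$\varepsilon(p1_{(1)})\,\varepsilon\bigl{(}\varepsilon_{s}(1_{(2)})p^{\prime}\bigr{)}=\varepsilon(p1_{(1)})\,\varepsilon(1_{(2)}p^{\prime})=\varepsilon_{P,P^{\prime}}(p\otimes_{A_{s}}p^{\prime}),$
which is how the map $\varepsilon_{P,P^{\prime}}$ enters the chase of the sub-diagram $(\ast)$.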
Thus, the identity (3.1) holds, and therefore $\varphi$ preserves products.
To complete the proof we need to verify that $F=\mathbb{M}^{\varphi}$ as
$\boldsymbol{k}$-linear comonoidal functors. This is an easy consequence of
the proof of Lemma 2.7(2) since $\bar{\phi}^{F}_{M,N}$ is induced from the
identity map $\text{id}_{M\otimes N}$ for all $M,N\in\mathbb{M}^{A}$. ∎
By Corollary 3.3 and Theorem 3.5 we have:
###### Corollary 3.6.
Let $A,B$ be weak bialgebras over $\boldsymbol{k}$, and
$F=(F,\bar{\phi}^{F},\bar{\omega}^{F}):\mathbb{M}^{A}\longrightarrow\mathbb{M}^{B}$
be a strong $\boldsymbol{k}$-linear comonoidal functor satisfying $U^{B}\circ
F=U^{A}$ as $\boldsymbol{k}$-linear monoidal functors. If $F$ is a
$\boldsymbol{k}$-linear monoidal equivalence, then the weak bialgebra map
$\varphi:A\longrightarrow B$ determined in Theorem 3.5 is an isomorphism of
weak bialgebras.
## 4 Categorical aspects of indecomposable weak bialgebras
Let $A=(A,\Delta_{A},\varepsilon_{A})$ and $B=(B,\Delta_{B},\varepsilon_{B})$
be two weak bialgebras over $\boldsymbol{k}$, and set $H=A\oplus B$. Then, $H$
has a weak bialgebra structure such that the algebra structure is the product
and the coalgebra structure is the direct sum of $A$ and $B$. The target and
source counital maps $\varepsilon_{t}$ and $\varepsilon_{s}$ are given by
$\varepsilon_{t}(x)=(\varepsilon_{A})_{t}(a)+(\varepsilon_{B})_{t}(b),\qquad\varepsilon_{s}(x)=(\varepsilon_{A})_{s}(a)+(\varepsilon_{B})_{s}(b)$
for all $x=a+b\in H,\ a\in A,\ b\in B$, where
$(\varepsilon_{A})_{t},(\varepsilon_{A})_{s}$ are the target and source
counital maps of $A$, and $(\varepsilon_{B})_{t},(\varepsilon_{B})_{s}$ are
those of $B$, respectively. Moreover, the target and source subalgebras are
given as follows:
$H_{t}=\varepsilon_{t}(H)=(\varepsilon_{A})_{t}(A)+(\varepsilon_{B})_{t}(B)=A_{t}+B_{t},\qquad H_{s}=\varepsilon_{s}(H)=(\varepsilon_{A})_{s}(A)+(\varepsilon_{B})_{s}(B)=A_{s}+B_{s}.$
We call the above weak bialgebra $H$ the direct sum of $A$ and $B$.
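As a small illustrative computation (assuming here that $A$ and $B$ are nonzero ordinary bialgebras), the direct sum $H=A\oplus B$ is genuinely weak: $\Delta_{H}$ does not preserve the unit, since
$\Delta_{H}(1_{H})=1_{A}\otimes 1_{A}+1_{B}\otimes 1_{B}\neq(1_{A}+1_{B})\otimes(1_{A}+1_{B})=1_{H}\otimes 1_{H}.$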
A weak bialgebra $H$ is called indecomposable if there are no weak bialgebras
$A$ and $B$ such that $H\cong A\oplus B$. Any finite-dimensional weak
bialgebra can be decomposed into a finite direct sum of
indecomposable ones. More precisely, we have:
###### Theorem 4.1.
Let $H$ be a finite-dimensional weak bialgebra over $\boldsymbol{k}$. Then
1. $(1)$
there are finitely many indecomposable weak bialgebras $H_{i}\ (i=1,\ldots,k)$
such that $H=H_{1}\oplus\cdots\oplus H_{k}$.
2. $(2)$
If indecomposable weak bialgebras $H_{1},\ldots,H_{k}$ and
$H_{1}^{\prime},\ldots,H_{l}^{\prime}$ satisfy
$H_{1}\oplus\cdots\oplus H_{k}=H=H_{1}^{\prime}\oplus\cdots\oplus
H_{l}^{\prime},$
then $k=l$, and there is a permutation $\sigma\in\mathfrak{S}_{k}$ such that
$H_{i}^{\prime}=H_{\sigma(i)}$ for all $i=1,\ldots,k$.
The above theorem can be proved by the decomposability of finite-dimensional
algebras into indecomposable ones and its uniqueness (see [21] for details).
A $\boldsymbol{k}$-linear monoidal category $\mathcal{C}$ is called
indecomposable if there are no $\boldsymbol{k}$-linear monoidal categories
$\mathcal{C}_{1},\mathcal{C}_{2}$ such that
$\mathcal{C}\simeq\mathcal{C}_{1}\times\mathcal{C}_{2}$ as
$\boldsymbol{k}$-linear monoidal categories.
###### Proposition 4.2.
Let $H$ be a weak bialgebra over $\boldsymbol{k}$. If $H$ is decomposable,
then
1. $(1)$
the left $H$-module category ${}_{H}\text{\boldmath{$\mathsf{M}$}}$ and the
finite-dimensional left $H$-module category ${}_{H}\mathbb{M}$ are
decomposable.
2. $(2)$
the right $H$-comodule category $\text{\boldmath{$\mathsf{M}$}}^{H}$ and the
finite-dimensional right $H$-comodule category $\mathbb{M}^{H}$ are
decomposable.
###### Proof..
Suppose that $H=A\oplus B$ for some weak bialgebras $A,B$ over
$\boldsymbol{k}$.
(1) The left $H$-module category ${}_{H}\text{\boldmath{$\mathsf{M}$}}$ is
equivalent to the Cartesian product
${}_{A}\text{\boldmath{$\mathsf{M}$}}\times{}_{B}\text{\boldmath{$\mathsf{M}$}}$
as $\boldsymbol{k}$-linear monoidal categories. An equivalence is given by the
following monoidal functors, which are quasi-inverse to each other:
$(F,\phi^{F},\omega^{F}):{}_{H}\text{\boldmath{$\mathsf{M}$}}\longrightarrow{}_{A}\text{\boldmath{$\mathsf{M}$}}\times{}_{B}\text{\boldmath{$\mathsf{M}$}}$ is defined by
$F(X)=(1_{A}\cdot X,\ 1_{B}\cdot X),\qquad\phi^{F}_{X,Y}=\mathrm{id}_{F(X)\circledast F(Y)}:F(X)\circledast F(Y)\longrightarrow F(X\circledast Y),\qquad\omega^{F}=\mathrm{id}_{(A_{t},B_{t})}:(A_{t},B_{t})\longrightarrow F(H_{t}),$
and $(G,\phi^{G},\omega^{G}):{}_{A}\text{\boldmath{$\mathsf{M}$}}\times{}_{B}\text{\boldmath{$\mathsf{M}$}}\longrightarrow{}_{H}\text{\boldmath{$\mathsf{M}$}}$ by
$G(U,V)=U\times V,\qquad\phi^{G}_{(U_{1},V_{1}),(U_{2},V_{2})}=\mathrm{id}_{G(U_{1},V_{1})\circledast G(U_{2},V_{2})}:G(U_{1},V_{1})\circledast G(U_{2},V_{2})\longrightarrow G\bigl{(}(U_{1},V_{1})\circledast(U_{2},V_{2})\bigr{)},$
$\omega^{G}:H_{t}\longrightarrow G(A_{t},B_{t})=A_{t}\times B_{t},\qquad\omega^{G}(z)=(1_{A}\cdot z,\ 1_{B}\cdot z)\qquad(z\in H_{t}).$
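A quick check (sketch) that $F$ and $G$ are quasi-inverse on objects uses only that $1_{A},1_{B}$ are orthogonal central idempotents of $H=A\oplus B$ with $1_{A}+1_{B}=1_{H}$:
$G\bigl{(}F(X)\bigr{)}=(1_{A}\cdot X)\times(1_{B}\cdot X)\cong X,\qquad x\longmapsto(1_{A}\cdot x,\ 1_{B}\cdot x),$
with inverse $(u,v)\longmapsto u+v$, since $x=1_{H}\cdot x=1_{A}\cdot x+1_{B}\cdot x$ and $1_{A}1_{B}=0$.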
Restricting the above equivalence
${}_{H}\text{\boldmath{$\mathsf{M}$}}\simeq{}_{A}\text{\boldmath{$\mathsf{M}$}}\times{}_{B}\text{\boldmath{$\mathsf{M}$}}$
to finite-dimensional modules yields a $\boldsymbol{k}$-linear monoidal
equivalence ${}_{H}\mathbb{M}\simeq{}_{A}\mathbb{M}\times{}_{B}\mathbb{M}$.
(2) The right $H$-comodule category $\text{\boldmath{$\mathsf{M}$}}^{H}$ is
equivalent to the Cartesian product
$\text{\boldmath{$\mathsf{M}$}}^{A}\times\text{\boldmath{$\mathsf{M}$}}^{B}$
as $\boldsymbol{k}$-linear monoidal categories. An equivalence is given by the
following monoidal functors, which are quasi-inverse to each other:
$(F,\phi^{F},\omega^{F}):\text{\boldmath{$\mathsf{M}$}}^{H}\longrightarrow\text{\boldmath{$\mathsf{M}$}}^{A}\times\text{\boldmath{$\mathsf{M}$}}^{B}$ is defined by
$F\bigl{(}(X,\rho)\bigr{)}=\Bigl{(}\bigl{(}(\varepsilon_{A}\circ\pi_{A})\cdot X,\ (\mathrm{id}\otimes\pi_{A})\circ\rho|_{(\varepsilon_{A}\circ\pi_{A})\cdot X}\bigr{)},\ \bigl{(}(\varepsilon_{B}\circ\pi_{B})\cdot X,\ (\mathrm{id}\otimes\pi_{B})\circ\rho|_{(\varepsilon_{B}\circ\pi_{B})\cdot X}\bigr{)}\Bigr{)},$
$\phi^{F}_{X,Y}=\mathrm{id}_{F(X)\circledast F(Y)}:F(X)\circledast F(Y)\longrightarrow F(X\circledast Y),\qquad\omega^{F}=\mathrm{id}_{(A_{s},B_{s})}:(A_{s},B_{s})\longrightarrow F(H_{s}),$
and $(G,\phi^{G},\omega^{G}):\text{\boldmath{$\mathsf{M}$}}^{A}\times\text{\boldmath{$\mathsf{M}$}}^{B}\longrightarrow\text{\boldmath{$\mathsf{M}$}}^{H}$ by
$G(U,V)=U\times V,\qquad\phi^{G}_{(U_{1},V_{1}),(U_{2},V_{2})}=\mathrm{id}_{G(U_{1},V_{1})\circledast G(U_{2},V_{2})}:G(U_{1},V_{1})\circledast G(U_{2},V_{2})\longrightarrow G\bigl{(}(U_{1},V_{1})\circledast(U_{2},V_{2})\bigr{)},$
$\omega^{G}:H_{s}\longrightarrow G(A_{s},B_{s})=A_{s}\times B_{s},\qquad\omega^{G}(y)=\bigl{(}\pi_{A}(y),\ \pi_{B}(y)\bigr{)}\qquad(y\in H_{s}),$
where $\pi_{A}$ and $\pi_{B}$ are natural surjections associated with the
direct sum decomposition $H=A\oplus B$.
Restricting the above equivalence
$\text{\boldmath{$\mathsf{M}$}}^{H}\simeq\text{\boldmath{$\mathsf{M}$}}^{A}\times\text{\boldmath{$\mathsf{M}$}}^{B}$
to finite-dimensional comodules yields a $\boldsymbol{k}$-linear monoidal
equivalence $\mathbb{M}^{H}\simeq\mathbb{M}^{A}\times\mathbb{M}^{B}$. ∎
The converse of the above proposition is true. To prove it we need the
following reconstruction theorem of bialgebras.
###### Theorem 4.3 (Ulbrich [19], Schauenburg [17, Theorem 5.4]).
Let $\mathcal{C}$ be a $\boldsymbol{k}$-linear monoidal category, and
$\omega:\mathcal{C}\longrightarrow\text{Vect}_{\boldsymbol{k}}^{\text{f.d.}}$
be a faithful and exact $\boldsymbol{k}$-linear monoidal functor. Then, there
are a bialgebra $B$ and a monoidal equivalence
$F:\mathcal{C}\longrightarrow\mathbb{M}^{B}$ such that $U^{B}\circ F=\omega$,
where
$U^{B}:\mathbb{M}^{B}\longrightarrow\text{Vect}_{\boldsymbol{k}}^{\text{f.d.}}$
is the forgetful functor. ∎
Combining Theorems 3.5 and 4.3 one can show the following.
###### Theorem 4.4.
Let $H$ be a finite-dimensional weak bialgebra over $\boldsymbol{k}$. Then $H$
is indecomposable as a weak bialgebra if and only if the finite-dimensional
left $H$-module category ${}_{H}\mathbb{M}$ is indecomposable as a
$\boldsymbol{k}$-linear monoidal category.
###### Proof..
The “if” part follows from the contrapositive of Proposition 4.2. We now show
the “only if” part. Suppose that $H$ is indecomposable whereas ${}_{H}\mathbb{M}$
is not. Then, there are $\boldsymbol{k}$-linear monoidal categories
$\mathcal{C}_{1},\mathcal{C}_{2}$ such that
${}_{H}\mathbb{M}\simeq\mathcal{C}_{1}\times\mathcal{C}_{2}$ as
$\boldsymbol{k}$-linear monoidal categories. Let
${}_{H}U:{}_{H}\mathbb{M}\longrightarrow\text{Vect}_{\boldsymbol{k}}^{\text{f.d.}}$
be the forgetful functor, and
$F:\mathcal{C}_{1}\times\mathcal{C}_{2}\longrightarrow{}_{H}\mathbb{M}$ be a
$\boldsymbol{k}$-linear monoidal category equivalence. Since the two
$\boldsymbol{k}$-linear monoidal functors
$\omega_{1}:\ \mathcal{C}_{1}\cong\mathcal{C}_{1}\times 0\xrightarrow{\ \ F\ \ }{}_{H}\mathbb{M}\xrightarrow{\ \ {}_{H}U\ \ }\text{Vect}_{\boldsymbol{k}}^{\text{f.d.}},\qquad\omega_{2}:\ \mathcal{C}_{2}\cong 0\times\mathcal{C}_{2}\xrightarrow{\ \ F\ \ }{}_{H}\mathbb{M}\xrightarrow{\ \ {}_{H}U\ \ }\text{Vect}_{\boldsymbol{k}}^{\text{f.d.}}$
are faithful and exact, there are bialgebras $A,B$ and $\boldsymbol{k}$-linear
monoidal equivalences $G_{1}:\mathcal{C}_{1}\simeq\mathbb{M}^{A},\
G_{2}:\mathcal{C}_{2}\simeq\mathbb{M}^{B}$ such that $U^{A}\circ
G_{1}=\omega_{1},\ U^{B}\circ G_{2}=\omega_{2}$ by Theorem 4.3. Then
$\mathcal{C}_{1}\times\mathcal{C}_{2}\simeq\mathbb{M}^{A}\times\mathbb{M}^{B}\cong\mathbb{M}^{A\oplus
B}$
as $\boldsymbol{k}$-linear monoidal categories. Since $H$ is finite-dimensional,
${}_{H}\mathbb{M}\cong\mathbb{M}^{H^{\ast}}$ as $\boldsymbol{k}$-linear monoidal
categories, so we obtain a $\boldsymbol{k}$-linear monoidal equivalence
$G:\mathbb{M}^{H^{\ast}}\longrightarrow\mathbb{M}^{A\oplus B}$. This
equivalence satisfies $U^{A\oplus B}\circ G=U^{H^{\ast}}$. So, by Corollary
3.6, there is a weak bialgebra isomorphism $g:A\oplus B\longrightarrow
H^{\ast}$. Therefore, $H\cong H^{\ast\ast}\cong(A\oplus B)^{\ast}\cong
A^{\ast}\oplus B^{\ast}$. This contradicts the fact that $H$ is indecomposable
as a weak bialgebra. ∎
## References
* [1] G. Böhm, F. Nill and K. Szlachányi, Weak Hopf algebras I. Integral theory and $C^{\ast}$-structure, J. Algebra 221 (1999), 385–438.
* [2] A. Bruguières and A. Virelizier, Hopf monads, Adv. Math. 215 (2007), 679–733.
* [3] D. Chikhladze, The Tannaka representation theorem for separable Frobenius functors, Algebr. Represent. Theor. 15 (2012), 1205–1213.
* [4] P. Deligne, Catégories tannakiennes, In: The Grothendieck Festschrift vol. 2, Progr. Math. 87, edited by P. Cartier, L. Illusie, N. M. Katz, G. Laumon, Y. Manin and K. A. Ribet, pp. 111–195, Birkhäuser, Boston, 1991.
* [5] P. Etingof, D. Nikshych and V. Ostrik, On fusion categories, Ann. of Math. 162 (2005), 581–642.
* [6] I. L. Franco, Topics in category theory: Hopf algebras, lecture notes taken by D. Mehrle, Cambridge University, 2015. http://pi.math.cornell.edu/~dmehrle/notes/partiii/hopfalg partiii notes.pdf
* [7] R. Häring-Oldenburg, Reconstruction of weak quasi-Hopf algebras, J. Algebra 194 (1997), 14–35.
* [8] S. Mac Lane, Categories for the working mathematician, Grad. Texts Math. 5, Springer, New York, 1971; second edition, Springer-Verlag, New York, 1998.
* [9] S. Majid, Tannaka-Krein theorem for quasi-Hopf algebras and other results, in “Deformation theory and quantum groups with applications to mathematical physics (Amherst, MA, 1990)”, Contemp. Math. 134, 219–232, Amer. Math. Soc., Providence, RI, 1992.
* [10] S. Majid, Foundations of quantum group theory, Cambridge University Press, 1995.
* [11] M. B. McCurdy, Graphical methods for Tannaka duality of weak bialgebras and weak Hopf algebras, Theory and Appl. Cat. 26 (2012), 233–280.
* [12] S. Montgomery, Hopf algebras and their actions on rings, C.B.M.S. 82, American Mathematical Society, 1993.
* [13] D. Nikshych, V. Turaev and L. Vainerman, Invariants of knots and $3$-manifolds from finite quantum groupoids, Top. Appl. 127 (2003), 91–123.
* [14] F. Nill, Axioms for weak bialgebras, arXiv:math/9805104v1, 1998.
* [15] P. Schauenburg, Tannaka duality for arbitrary Hopf algebras, Algebra Berichte 66, Verlag Reinhard Fischer, Munich, 1992.
* [16] P. Schauenburg, Weak Hopf algebras and quantum groupoids, Banach Center Publ. 61 (2003), 171–188.
* [17] P. Schauenburg, Hopf bigalois extensions, Comm. Algebra 24 (1996), 3797–3825.
* [18] K. Szlachányi, Adjointable monoidal functors and quantum groupoids, In: “Hopf algebras in noncommutative geometry and physics”, Lecture Notes in Pure and Appl. Math. 239, 291–307, Dekker, New York, 2005.
* [19] K.-H. Ulbrich, On Hopf algebras and rigid monoidal categories, Israel J. Math. 72 (1990), 252–256.
* [20] J. Vercruysse, Hopf algebras–Variant notations and reconstruction theorem, arXiv:1202.3613v1, 2012.
* [21] M. Wakui, Indecomposability of weak Hopf algebras, submitted, preprint, 2020.
# Solutions of nonhomogeneous equations involving Hardy potentials with
singularities on the boundary
Huyuan Chen, Department of Mathematics, Jiangxi Normal University, Nanchang, Jiangxi 330022, PR China <EMAIL_ADDRESS>
Alexander Quaas, Departamento de Matemática, Universidad Técnica Federico Santa María, Casilla V-110, Avda. España 1680, Valparaíso, Chile <EMAIL_ADDRESS>
Feng Zhou, Center for PDEs, School of Mathematical Sciences, East China Normal University, Shanghai Key Laboratory of PMMP, Shanghai, 200241, PR China <EMAIL_ADDRESS>
###### Abstract.
In this paper, we present a new distributional identity for the solutions of
elliptic equations involving Hardy potentials with singularities located on
the boundary of the domain. Then we use it to obtain the boundary isolated
singular solutions of nonhomogeneous problems.
###### Key words and phrases:
Distributional identity, Hardy potential, Boundary isolated singularity.
###### 2010 Mathematics Subject Classification:
35B40, 35J99.
## 1\. Introduction
The classical Hardy inequality is stated as follows: for any smooth bounded
domain $\mathcal{O}$ in $\mathbb{R}^{N}$ containing the origin, there holds
(1.1) $\int_{\mathcal{O}}|\nabla u|^{2}dx\geq
c_{N}\int_{\mathcal{O}}|x|^{-2}|u|^{2}dx,\quad\forall\,u\in
H_{0}^{1}(\mathcal{O}),$
with the best constant $c_{N}=\frac{(N-2)^{2}}{4}$. The qualitative properties
of Hardy inequality and its improved versions have been studied extensively,
see for example [1, 4, 19, 21], motivated by great applications in the study
of stability of solutions to semilinear elliptic and parabolic equations (cf.
[5, 6, 13, 30, 31]). The isolated singular solutions of Hardy problem with
absorption nonlinearity have been studied in [11, 12, 23] and the one with
source nonlinearity has been done in [3, 16]. The related semilinear elliptic
problem involving the inverse square potential has been studied by variational
methods in [15, 14, 18] and the references therein. In a very recent work [9],
we established a new distributional identity with respect to a specific
weighted measure, and we then classified the classical isolated singular
solutions of
$-\Delta u+\frac{\mu}{|x|^{2}}u=f\quad{\rm
in}\quad\mathcal{O}\setminus\\{0\\},$
subject to the homogeneous Dirichlet boundary condition with $\mu\geq-c_{N}$.
These results allow us to draw a complete picture of the existence, non-
existence and the singularities of classical solutions of the above problems
(cf. [10]).
It is of interest to consider the corresponding problem involving a Hardy
potential whose singularity lies on the boundary. In this setting the sharp
constant $c_{N}$ in the Hardy inequality (1.1) can be replaced by
$\frac{N^{2}}{4}$ when the origin is located on the boundary of the domain,
see [20, Corollary 2.4] and also [7, 8, 17].
Let $\Omega$ be a smooth bounded domain in $\mathbb{R}^{N}$ with
$0\in\partial\Omega$. We study boundary isolated singular solutions of
nonhomogeneous problems:
(1.2)
$\left\\{\begin{array}[]{lll}\displaystyle\mathcal{L}_{\beta}u=f\quad{\rm
in}\;\;\Omega,\\\\[4.2679pt] \phantom{L_{\beta}}\displaystyle u=g\quad{\rm
on}\;\;\partial{\Omega}\setminus\\{0\\},\end{array}\right.$
where $f\in C^{\gamma}_{loc}(\bar{\Omega}\setminus\\{0\\})$ with
$\gamma\in(0,1)$, $g\in C(\partial\Omega\setminus\\{0\\})$ and
$\mathcal{L}_{\beta}:=-\Delta+\frac{\beta}{|x|^{2}}$ is the Hardy operator
which is singular at $0$ (with $N\geq 2$,
$\beta\geq\beta_{0}:=-\frac{N^{2}}{4}$). Recall that for $\beta\geq\beta_{0}$,
the problem
(1.3) $\left\\{\begin{array}[]{lll}\mathcal{L}_{\beta}u=0\quad{\rm
in}\;\;\mathbb{R}^{N}_{+},\\\\[2.84526pt]
\phantom{\mathcal{L}_{\beta}}u=0\quad{\rm
on}\;\;\partial\mathbb{R}^{N}_{+}\setminus\\{0\\}\end{array}\right.$
has two special solutions, given explicitly by
(1.4)
$\Lambda_{\beta}(x)=\left\\{\begin{array}[]{lll}x_{N}|x|^{\tau_{-}(\beta)}&{\rm
if}\;\;\beta>\beta_{0},\\\\[2.84526pt]
\phantom{}-x_{N}|x|^{\tau_{-}(\beta)}\ln|x|&{\rm
if}\;\;\beta=\beta_{0}\end{array}\right.\quad\;\;{\rm
and}\quad\lambda_{\beta}(x)=x_{N}|x|^{\tau_{+}(\beta)},$
where
$x=(x^{\prime},x_{N})\in\mathbb{R}^{N}_{+}:=\mathbb{R}^{N-1}\times(0,+\infty)$,
and
(1.5) $\tau_{-}(\beta)=-\frac{N}{2}-\sqrt{\beta-\beta_{0}}\quad{\rm
and}\quad\tau_{+}(\beta)=-\frac{N}{2}+\sqrt{\beta-\beta_{0}},$
are two roots of $\beta-\tau(\tau+N)=0$.
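For the reader's convenience, here is the elementary computation behind (1.5): for $\tau\in\mathbb{R}$ and $x\in\mathbb{R}^{N}_{+}$,
$\Delta\bigl{(}x_{N}|x|^{\tau}\bigr{)}=x_{N}\Delta(|x|^{\tau})+2\partial_{x_{N}}(|x|^{\tau})=\tau(\tau+N-2)x_{N}|x|^{\tau-2}+2\tau x_{N}|x|^{\tau-2}=\tau(\tau+N)x_{N}|x|^{\tau-2},$
so that $\mathcal{L}_{\beta}(x_{N}|x|^{\tau})=\bigl{(}\beta-\tau(\tau+N)\bigr{)}x_{N}|x|^{\tau-2}$, which vanishes exactly when $\tau\in\\{\tau_{-}(\beta),\tau_{+}(\beta)\\}$.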
As in [10, 9], we first establish a distributional identity which shows
that the singularity of the solution $\Lambda_{\beta}$ of (1.3) is associated
with a Dirac mass. Let $C^{1.1}_{0}(\mathbb{R}^{N}_{+})$ be the set of functions in
$C^{1.1}(\overline{\mathbb{R}^{N}_{+}})$ vanishing on the boundary and having
compact support in $\overline{\mathbb{R}^{N}_{+}}$. Then we have
###### Theorem 1.1.
Let $d\gamma_{\beta}:=\lambda_{\beta}(x)dx$ and
(1.6)
$\mathcal{L}^{*}_{\beta}:=-\Delta-\frac{2\tau_{+}(\beta)}{|x|^{2}}\,x\cdot\nabla-\frac{2}{x_{N}}\frac{\partial}{\partial
x_{N}},\quad x=(x^{\prime},x_{N})\in\mathbb{R}^{N}_{+}.$
Then there holds
(1.7)
$\int_{\mathbb{R}^{N}_{+}}\Lambda_{\beta}\mathcal{L}^{*}_{\beta}(\frac{\zeta}{x_{N}})\,d\gamma_{\beta}=c_{\beta}\frac{\partial\zeta}{\partial
x_{N}}(0),\quad\forall\,\zeta\in C^{1.1}_{0}(\mathbb{R}^{N}_{+}),$
where
(1.8)
$c_{\beta}=\left\\{\begin{array}[]{lll}\sqrt{\beta-\beta_{0}}\,|\mathcal{S}^{N-1}|/N&{\rm
if}\;\;\beta>\beta_{0},\\\\[4.2679pt] \phantom{}|\mathcal{S}^{N-1}|/N&{\rm
if}\;\;\beta=\beta_{0},\end{array}\right.$
and $\mathcal{S}^{N-1}$ is the unit sphere of $\mathbb{R}^{N}$ and
$|\mathcal{S}^{N-1}|$ denotes its $(N-1)$-dimensional Hausdorff measure.
In view of the distributional identity (1.7), $\Lambda_{\beta}$ is called a
fundamental solution of (1.3). We remark that when $\beta=0$,
$\mathcal{L}^{*}_{0}=-\Delta-\frac{2}{x_{N}}\frac{\partial}{\partial x_{N}}$,
$\lambda_{0}(x)=x_{N}$ and (1.7) reduces to
$\displaystyle c_{0}\frac{\partial\zeta}{\partial x_{N}}(0)=\int_{\mathbb{R}^{N}_{+}}\Lambda_{0}\mathcal{L}^{*}_{0}(\frac{\zeta}{x_{N}})\,d\gamma_{0}=\int_{\mathbb{R}^{N}_{+}}\Lambda_{0}(-\Delta\zeta)\,dx,\quad\forall\;\zeta\in C^{1.1}_{0}(\mathbb{R}^{N}_{+}),$
which coincides with the classical distributional identity proposed in [22].
This classical subject has been vastly expanded in the works [2, 26, 27, 28, 29].
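The pointwise identity behind this reduction is the one-line computation
$x_{N}\,\mathcal{L}^{*}_{0}\Bigl{(}\frac{\zeta}{x_{N}}\Bigr{)}=-x_{N}\Delta\Bigl{(}\frac{\zeta}{x_{N}}\Bigr{)}-2\partial_{x_{N}}\Bigl{(}\frac{\zeta}{x_{N}}\Bigr{)}=-\Delta\zeta,$
which follows from $\Delta\zeta=\Delta\bigl{(}x_{N}\cdot\frac{\zeta}{x_{N}}\bigr{)}=x_{N}\Delta\bigl{(}\frac{\zeta}{x_{N}}\bigr{)}+2\partial_{x_{N}}\bigl{(}\frac{\zeta}{x_{N}}\bigr{)}$, together with $d\gamma_{0}=x_{N}dx$.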
For simplicity, here and in the sequel, we always assume that $\Omega$ is a
bounded $C^{2}$ domain satisfying
(1.9) $B_{r_{0}}^{+}(0)\subset\Omega\subset B_{R_{0}}^{+}(0),$
for some $0<r_{0}<R_{0}<+\infty$ where
$B_{r}^{+}(0):=B_{r}(0)\cap\mathbb{R}^{N}_{+}$. Let
$d\omega_{\beta}(x):=|x|^{\tau_{+}(\beta)}d\omega(x)$, where $\omega$ is the
Hausdorff measure of $\partial\Omega$. We can state our main result as follows
###### Theorem 1.2.
Let $\mathcal{L}^{*}_{\beta}$ be given by (1.6), $f\in
C^{\theta}_{loc}(\bar{\Omega}\setminus\\{0\\})$ with $\theta\in(0,1)$, $g\in
C(\partial\Omega\setminus\\{0\\})$.
$(i)$ If
(1.10)
$\int_{\Omega}|f|\,d\gamma_{\beta}+\int_{\partial\Omega}|g|\,d\omega_{\beta}<+\infty,$
then for any $k\in\mathbb{R}$, problem (1.2) admits a unique solution
$u_{k}\in C^{2}(\Omega)\cap L^{1}(\Omega,|x|^{-1}d\gamma_{\beta})$ such that
(1.11)
$\int_{\Omega}u_{k}\mathcal{L}_{\beta}^{*}(\frac{\xi}{x_{N}})\,d\gamma_{\beta}=\int_{\Omega}\frac{f\xi}{x_{N}}\,d\gamma_{\beta}-\int_{\partial\Omega}g\frac{\partial\xi}{\partial\nu}d\omega_{\beta}+c_{\beta}k\frac{\partial\xi}{\partial
x_{N}}(0),\quad\forall\,\xi\in C^{1.1}_{0}(\Omega),$
where $\nu$ is the unit outward vector on $\partial\Omega$.
$(ii)$ If $f,g$ are nonnegative and
(1.12) $\lim_{r\to 0^{+}}\Big{(}\int_{\Omega\setminus
B_{r}(0)}f\,d\gamma_{\beta}+\int_{\partial\Omega\setminus
B_{r}(0)}g\,d\omega_{\beta}\Big{)}=+\infty,$
then problem (1.2) has no nonnegative solution.
When $g=0$ on $\partial\Omega$ and $f=0$ in $\Omega$, we prove in Proposition
3.2 in Section 3 that problem (1.2) admits an isolated singular solution
$\Lambda^{\Omega}_{\beta}$, which has the same asymptotic behavior at the origin as
the fundamental function $\Lambda_{\beta}$. More precisely, we have
(1.13) $\lim_{t\to 0^{+}}\sup_{z\in
S^{N-1}_{+}}\Big{(}\frac{\Lambda^{\Omega}_{\beta}(tz)}{\Lambda_{\beta}(tz)}-1\Big{)}=0.$
When $g=0$ on $\partial\Omega$ and $f\in
C^{\theta}_{loc}(\bar{\Omega}\setminus\\{0\\})\cap
L^{1}(\Omega,d\gamma_{\beta})$, Theorem 4.1 in Section 4 shows that problem
(1.2) has a solution $u_{f}$ verifying the isolated singularity (see Remark
4.2)
(1.14) $\lim_{t\to 0^{+}}\inf_{z\in
S^{N-1}_{+}}\frac{u_{f}(tz)}{\Lambda_{\beta}(tz)}=0,$
which is less precise than (1.13) due to the lack of estimates on the Green
kernel of the Hardy operator with singularity on the boundary. However, when
$f=0$ and $g\not=0$, it is not convenient to use (1.14) to describe the
singularity of the solution $u_{g}$, so we characterize it instead by the
distributional identity
$\int_{\Omega}u_{g}\mathcal{L}_{\beta}^{*}(\frac{\xi}{x_{N}})\,d\gamma_{\beta}=-\int_{\partial\Omega}g\frac{\partial\xi}{\partial\nu}d\omega_{\beta},\quad\forall\,\xi\in C^{1.1}_{0}(\Omega).$
All in all, the solution $u_{k}$ of (1.2) can be decomposed into three
components $k\Lambda^{\Omega}_{\beta}$, $u_{f}$ and $u_{g}$.
The method we use to prove the existence of solutions for problem (1.2) is
different from the classical method of the boundary data problem used by
Gmira-Véron in [22] due to the appearance of Hardy potential. They obtained
the very weak solutions by approximating the Dirac mass at boundary. Then they
considered the limit of the solutions to the corresponding problem where the
convergence is guaranteed by the Poisson kernel. In this paper, we prove the
existence of moderate singular solution by using the function
$\Lambda_{\beta}$ to construct suitable solutions of problem (1.2) with the
zero Dirichlet boundary condition. While for nonzero Dirichlet boundary
condition, we transform the boundary data into nonhomogeneous term. However,
for $\beta>0$, that transformation can not totally solve (1.2) with the
nonzero Dirichlet boundary condition, and our idea is to cut off the boundary
data and approximate the solutions.
The rest of the paper is organized as follows. In Section 2, we start from a
comparison principle for $\mathcal{L}_{\beta}$ and construct the moderate
singular solution of (1.2) when $g=0$. Section 3 is devoted to proving the
distributional identity (1.7) for the fundamental solution $\Lambda_{\beta}$
in $\mathbb{R}^{N}_{+}$, to its trace, and to the corresponding distributional
identity in a bounded smooth domain. Section 4 studies the qualitative
properties of the solutions of problem (1.2) when $g=0$, and the proof of
Theorem 1.2 in the case of nonzero boundary data is given in Section 5. In
what follows, $c_{i}$ denotes a generic positive constant in the proofs of the
results.
## 2\. Preliminary
### 2.1. Comparison principle
We start the analysis from a comparison principle for $\mathcal{L}_{\beta}$.
Let $\eta_{0}:[0,+\infty)\to[0,\,1]$ be a decreasing $C^{\infty}$ function
such that
(2.1) $\eta_{0}=1\quad{\rm in}\;\;[0,1]\quad{\rm and}\quad\eta_{0}=0\quad{\rm
in}\;\;[2,+\infty).$
###### Lemma 2.1.
Let $\Omega$ be a bounded open set in $\mathbb{R}^{N}_{+}$,
$L:\Omega\times[0,+\infty)\to[0,+\infty)$ be a continuous function satisfying
that for any $x\in\Omega$,
$L(x,s_{1})\geq L(x,s_{2})\quad{\rm if}\;\;s_{1}\geq s_{2},$
then $\mathcal{L}_{\beta}+L$ with $\beta\geq\beta_{0}$ verifies the comparison
principle, that is, if $u,\,v\in C^{1,1}(\Omega)\cap C(\bar{\Omega})$ verify
that
$\mathcal{L}_{\beta}u+L(x,u)\geq\mathcal{L}_{\beta}v+L(x,v)\quad{\rm
in}\;\;\Omega\quad{\rm and}\quad u\geq v\quad{\rm on}\;\;\partial\Omega,$
then $u\geq v\;{\rm in}\;\Omega.$
Proof. Let $w=u-v$ and then $w\geq 0$ on $\partial\Omega$. Denote
$w_{-}=\min\\{w,0\\}$, and we claim that $w_{-}\equiv 0$. Indeed, if
$\Omega_{-}:=\\{x\in\Omega:\,w(x)<0\\}$ is not empty, then it is a bounded
$C^{1,1}$ domain in $\Omega$ and $w_{-}=0$ on $\partial\Omega$. We observe
that $\Omega_{-}\subset\mathbb{R}^{N}_{+}$ and then from Hardy inequality [7,
(1.7)] (see also [25]), it holds that
$\displaystyle 0=\int_{\Omega_{-}}\Big{(}-\Delta w_{-}+\frac{\beta}{|x|^{2}}w_{-}\Big{)}w_{-}\,dx+\int_{\Omega_{-}}[L(x,u)-L(x,v)]w_{-}\,dx\geq\int_{\Omega_{-}}\left(|\nabla w_{-}|^{2}+\frac{\beta}{|x|^{2}}w_{-}^{2}\right)dx\geq c_{1}\int_{\Omega_{-}}w_{-}^{2}\,dx,$
then $w_{-}=0$ in $\Omega_{-}$ by the continuity of $w_{-}$, which contradicts
the definition of $\Omega_{-}$. $\Box$
###### Lemma 2.2.
Assume that $\beta\geq\beta_{0}$, $f_{1}$, $f_{2}$ are two functions in
$C^{\theta}_{loc}(\Omega)$ with $\theta\in(0,1)$, $g_{1}$, $g_{2}$ are two
continuous functions on $\partial\Omega\setminus\\{0\\}$, and
$f_{1}\geq f_{2}\quad{\rm in}\;\;\Omega\quad{\rm and}\quad g_{1}\geq
g_{2}\quad{\rm on}\;\;\partial\Omega\setminus\\{0\\}.$
Let $u_{i}$ ($i=1,2$) be the classical solutions of
$\left\\{\begin{array}[]{lll}\displaystyle\mathcal{L}_{\beta}u=f_{i}\quad{\rm
in}\;\;{\Omega},\\\\[4.2679pt] \phantom{L_{\beta}}\displaystyle
u=g_{i}\quad{\rm on}\;\;\partial{\Omega}\setminus\\{0\\}.\end{array}\right.$
If
(2.2) $\lim_{r\to 0^{+}}\inf_{x\in\partial_{+}B_{r}(0)}[u_{1}(x)-u_{2}(x)]\Lambda_{\beta}^{-1}(x)\geq 0,$
where $\partial_{+}B_{r}(0)=\partial B_{r}(0)\cap\Omega$, then $u_{1}\geq u_{2}$ in $\overline{\Omega}\setminus\\{0\\}$.
Proof. Let $w=u_{2}-u_{1}$, then $w$ satisfies
$\left\\{\begin{array}[]{lll}\displaystyle\qquad\mathcal{L}_{\beta}w\leq
0\quad{\rm in}\;\;{\Omega},\\\\[4.2679pt]
\phantom{L_{\beta}}\displaystyle\qquad w\leq 0\quad{\rm
on}\;\;\partial{\Omega}\setminus\\{0\\},\\\\[4.2679pt]
\phantom{}\displaystyle\lim_{r\to
0^{+}}\sup_{x\in\partial_{+}B_{r}(0)}w(x)\Lambda_{\beta}^{-1}(x)\leq
0.\end{array}\right.$
Thus for any $\epsilon>0$, there exists $r_{\epsilon}>0$ converging to zero as
$\epsilon\to 0$ such that
$w\leq\epsilon\Lambda_{\beta}\quad{\rm on}\quad\partial
B_{r_{\epsilon}}(0)\cap\Omega.$
We observe that $w\leq 0<\epsilon\Lambda_{\beta}\ {\rm on}\
\partial\Omega\setminus B_{r_{\epsilon}}(0),$ which implies by Lemma 2.1 that
$w\leq\epsilon\Lambda_{\beta}\quad{\rm
in}\quad\overline{\Omega}\setminus\\{0\\}.$
Therefore we obtain that $w\leq 0$ in $\overline{\Omega}\setminus\\{0\\}$
which ends the proof. $\Box$
For any $\varepsilon>0$, denote
(2.3)
$\mathcal{L}_{\beta,\varepsilon}=-\Delta+\frac{\beta}{|x|^{2}+\varepsilon}.$
We remark that $\mathcal{L}_{\beta,\varepsilon}$ is strictly elliptic operator
and we have the following existence result for related nonhomogeneous problem.
###### Lemma 2.3.
Assume that $\varepsilon\in(0,\,1)$, $\beta\geq\beta_{0}$,
$\mathcal{L}_{\beta,\varepsilon}$ is given by (2.3) and $f\in
C^{\theta}_{loc}(\Omega)\cap C(\bar{\Omega})$ with $\theta\in(0,1)$ and $g\in
C(\partial\Omega)$. Then the problem
(2.4)
$\left\\{\begin{array}[]{lll}\displaystyle\mathcal{L}_{\beta,\varepsilon}u=f&{\rm
in}\quad\Omega,\\\\[5.69054pt]
\phantom{\mathcal{L}_{\beta,\varepsilon}}u=g&{\rm
on}\quad\partial\Omega\end{array}\right.$
has a unique classical solution $u_{\varepsilon}\in C^{2}(\Omega)\cap
C(\bar{\Omega})$, which verifies that
(2.5)
$\int_{\Omega}u_{\varepsilon}\mathcal{L}_{\beta}^{*}(\frac{\xi}{x_{N}})\,d\gamma_{\beta}=\int_{\Omega}\frac{f\xi}{x_{N}}d\gamma_{\beta}-\int_{\partial\Omega}g\frac{\partial\xi}{\partial\nu}\,d\omega_{\beta}+\beta\varepsilon\int_{\Omega}\frac{u_{\varepsilon}\xi}{(|x|^{2}+\varepsilon)|x|^{2}x_{N}}\,d\gamma_{\beta},$
for any $\xi\in C^{1.1}_{0}(\Omega)$.
Assume moreover that $f\geq 0$ in $\Omega$ and $g\geq 0$ on $\partial\Omega$. Then
the mapping $\varepsilon\mapsto u_{\varepsilon}$ is decreasing if $\beta>0$,
and is increasing if $\beta_{0}\leq\beta<0$.
Proof. We first prove the existence of a solution to problem (2.4). We
introduce the Poisson kernel $P_{\Omega}$ of $-\Delta$ in $\Omega$, and define
the Poisson operator by
$\mathbb{P}_{\Omega}[g](x)=\int_{\partial\Omega}P_{\Omega}(x,y)g(y)dy.$
We observe that
$\mathcal{L}_{\beta,\varepsilon}\mathbb{P}_{\Omega}[g]=\frac{\beta}{|x|^{2}+\varepsilon}\mathbb{P}_{\Omega}[g]\in
C^{1}(\Omega)\cap C(\bar{\Omega}).$
Then the solution of (2.4), denoted by $u_{\varepsilon}$, can be written as
$u_{\varepsilon}=\mathbb{P}_{\Omega}[g]+u_{f}$, where $u_{f}$ is the solution
of
(2.6)
$\left\\{\begin{array}[]{lll}\displaystyle\mathcal{L}_{\beta,\varepsilon}u=f-\frac{\beta}{|x|^{2}+\varepsilon}\mathbb{P}_{\Omega}[g]&{\rm
in}\quad\Omega,\\\\[5.69054pt]
\phantom{\mathcal{L}_{\beta,\varepsilon}}u=0&{\rm
on}\quad\partial\Omega.\end{array}\right.$
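Granting for the moment the existence of $u_{f}$, the decomposition is consistent: since $\mathbb{P}_{\Omega}[g]$ is harmonic in $\Omega$ with boundary value $g$, a one-line check gives
$\mathcal{L}_{\beta,\varepsilon}\bigl{(}\mathbb{P}_{\Omega}[g]+u_{f}\bigr{)}=\frac{\beta}{|x|^{2}+\varepsilon}\,\mathbb{P}_{\Omega}[g]+\Bigl{(}f-\frac{\beta}{|x|^{2}+\varepsilon}\,\mathbb{P}_{\Omega}[g]\Bigr{)}=f\quad{\rm in}\;\;\Omega,$
while on $\partial\Omega$ we have $\mathbb{P}_{\Omega}[g]+u_{f}=g+0=g$.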
For $\beta\geq\beta_{0}$, a solution $u_{f}$ in $H^{1}_{0}(\Omega)$ of (2.6)
could be derived by Ekeland’s variational methods as the critical point of the
functional
$I(u)=\int_{\Omega}|\nabla
u|^{2}dx+\beta\int_{\Omega}\frac{u^{2}}{|x|^{2}+\varepsilon}dx-\int_{\Omega}\Big{(}f-\frac{\beta}{|x|^{2}+\varepsilon}\mathbb{P}_{\Omega}[g]\Big{)}udx.$
The functional is well-defined and coercive in $H^{1}_{0}(\Omega)$: indeed,
for $\beta\in(\beta_{0},0)$, from the Hardy inequality in [17] we have that,
for any $u\in C_{0}^{2}(\Omega)$,
$\int_{\Omega}|\nabla
u|^{2}dx+\beta\int_{\Omega}\frac{u^{2}}{|x|^{2}+\varepsilon}dx\geq(\beta-\beta_{0})\int_{\Omega}|\nabla
u|^{2}dx,$
For $\beta=\beta_{0}$, from the improved Hardy inequality in [17], it holds that
$c_{2}\int_{\Omega}u^{2}dx\leq\int_{\Omega}|\nabla u|^{2}dx-|\beta_{0}|\int_{\Omega}\frac{u^{2}}{|x|^{2}}dx<\int_{\Omega}|\nabla u|^{2}dx-|\beta_{0}|\int_{\Omega}\frac{u^{2}}{|x|^{2}+\varepsilon}dx.$
Finally, the case $\beta\geq 0$ is trivial.
By the standard regularity result (e.g. [24]), we have that $u_{f}$ is a
classical solution of (2.6). Then problem (2.4) admits a classical solution
and the uniqueness follows by comparison principle.
Finally, we prove (2.5). Multiplying the equation by $\frac{\lambda_{\beta}\xi}{x_{N}}$ with
$\xi\in C^{1.1}_{0}(\Omega)$ and integrating over $\Omega$, we have that
$\displaystyle\int_{\Omega}\frac{\lambda_{\beta}\xi}{x_{N}}f\,dx=\int_{\Omega}\frac{\lambda_{\beta}\xi}{x_{N}}\mathcal{L}_{\beta,\varepsilon}u_{\varepsilon}\,dx$
$\displaystyle=$
$\displaystyle\int_{\Omega}u_{\varepsilon}(-\Delta\frac{\lambda_{\beta}\xi}{x_{N}})\,dx+\int_{\partial\Omega}g\frac{\partial(|x|^{\tau_{+}(\beta)}\xi)}{\partial\nu}\,d\omega(x)+\int_{\Omega}\frac{\beta}{|x|^{2}+\varepsilon}u_{\varepsilon}\lambda_{\beta}\xi\,dx$
$\displaystyle=$
$\displaystyle\int_{\Omega}u_{\varepsilon}\mathcal{L}_{\beta}^{*}(\frac{\xi}{x_{N}})\,d\gamma_{\beta}+\int_{\partial\Omega}g\frac{\partial\xi}{\partial\nu}\,d\omega_{\beta}-\beta\varepsilon\int_{\Omega}\frac{u_{\varepsilon}\xi}{(|x|^{2}+\varepsilon)|x|^{2}x_{N}}\,d\gamma_{\beta}.$
Note that if $f\geq 0$ in $\Omega$ and $g\geq 0$ on $\partial\Omega$, then
$u_{\varepsilon}\geq 0$ in $\Omega$. Let $\varepsilon_{1}\geq\varepsilon_{2}$
and let $u_{\varepsilon_{1}},\,u_{\varepsilon_{2}}$ be the corresponding solutions of (2.4). If $\beta\geq 0$, we observe that
$\mathcal{L}_{\beta,\varepsilon_{2}}u_{\varepsilon_{1}}\geq\mathcal{L}_{\beta,\varepsilon_{1}}u_{\varepsilon_{1}}=f,$
so $u_{\varepsilon_{1}}$ is a supersolution of (2.4) with
$\varepsilon=\varepsilon_{2}$, and by the comparison principle it holds
$u_{\varepsilon_{1}}\geq u_{\varepsilon_{2}}\;\;{\rm in}\;\Omega;$ for
$\beta_{0}\leq\beta<0$ the inequality is reversed. The proof
ends. $\Box$
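The monotonicity can be read off from the identity behind the display above; a minimal sketch:
$\mathcal{L}_{\beta,\varepsilon_{2}}u_{\varepsilon_{1}}-f=\beta\Big{(}\frac{1}{|x|^{2}+\varepsilon_{2}}-\frac{1}{|x|^{2}+\varepsilon_{1}}\Big{)}u_{\varepsilon_{1}},$
whose sign is that of $\beta$, since $u_{\varepsilon_{1}}\geq 0$ and $\varepsilon_{1}\geq\varepsilon_{2}$; this makes $u_{\varepsilon_{1}}$ a supersolution for $\beta\geq 0$ and a subsolution for $\beta<0$.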
Now we build the distributional identity for the classical solution of the
nonhomogeneous problem with $g=0$ and a moderate singularity at the origin, i.e.
(2.7) $\lim_{r\to
0^{+}}\sup_{x\in\partial_{+}B_{r}(0)}\frac{|u(x)|}{\Lambda_{\beta}(x)}=0.$
###### Proposition 2.4.
Let $\beta\geq\beta_{0}$, $N\geq 2$, $f\in C_{loc}^{\theta}(\bar{\Omega})$
with $\theta\in(0,1)$, then
(2.8)
$\left\\{\begin{array}[]{lll}\displaystyle\quad\mathcal{L}_{\beta}u=f\quad{\rm
in}\;\;{\Omega},\\\\[5.69054pt] \phantom{L_{\beta}}\quad\displaystyle
u=0\quad{\rm on}\;\;\partial{\Omega}\setminus\\{0\\},\end{array}\right.$
subject to (2.7), has a unique solution $u_{\beta}$, which satisfies the
distributional identity
(2.9)
$\int_{\Omega}u_{\beta}\mathcal{L}_{\beta}^{*}(\frac{\xi}{x_{N}})\,d\gamma_{\beta}=\int_{\Omega}\frac{f\xi}{x_{N}}\,d\gamma_{\beta},\quad\forall\,\xi\in
C^{1.1}_{0}(\Omega).$
Proof. The uniqueness follows by Lemma 2.2. Since $\mathcal{L}_{\beta}$ is a
linear operator, we only have to deal with the case that $f\geq 0$ in
$\Omega$.
Part 1: $\beta>0$. In this case, the mapping $\varepsilon\mapsto
u_{\varepsilon}$ is decreasing, where $u_{\varepsilon}>0$ is the solution of
(2.4) with $g=0$. Then $u_{\beta}:=\lim_{\varepsilon\to 0^{+}}u_{\varepsilon}$
exists, and by the standard regularity theory, we have that $u_{\beta}$ is a
classical solution of
(2.10)
$\left\\{\begin{array}[]{lll}\displaystyle\mathcal{L}_{\beta}u=f\quad{\rm
in}\;\;{\Omega},\\\\[5.69054pt] \phantom{L_{\beta}}\displaystyle u=0\quad{\rm
on}\;\;\partial{\Omega}.\end{array}\right.$
Part 2: $\beta\in[\beta_{0},0)$. Without loss of generality, we assume that
$\Omega\subset B_{\frac{1}{2}}(0)$. Denote
$V_{t,s}(x):=\left\\{\begin{array}[]{lll}\displaystyle
tx_{N}|x|^{-\frac{N}{2}}-sx_{N}^{2}|x|^{\tau_{+}(\beta)}&{\rm
if}\;\;\beta\in(\beta_{0},0),\\\\[5.69054pt] \phantom{}\displaystyle
tx_{N}|x|^{-\frac{N}{2}}(-\ln|x|)^{\frac{1}{2}}-sx_{N}^{2}|x|^{-\frac{N}{2}}&{\rm
if}\;\;\beta=\beta_{0},\end{array}\right.$
where the parameters $s,t\geq 0$.
Then for $\beta\in(\beta_{0},0)$, we see that $V_{t,s}(x)>0$ for $x\in\Omega$
if $t\geq s$ and
$\mathcal{L}_{\beta}V_{t,s}(x)=tc_{\beta}(-N/2)x_{N}|x|^{-\frac{N}{2}-2}+2s|x|^{\tau_{+}(\beta)}+2s\tau_{+}(\beta)x_{N}^{2}|x|^{\tau_{+}(\beta)-2},$
where $c_{\beta}(-N/2)>0$ and $\tau_{+}(\beta)<0$. Since $f$ is bounded in
$\Omega$, let
$s_{0}=\frac{1}{2}\sup_{x\in\Omega}\frac{|f(x)|}{|x|^{\tau_{+}(\beta)}}$
and then we fix $t_{0}\geq s_{0}$ such that
$t_{0}c_{\beta}(-N/2)x_{N}|x|^{-\frac{N}{2}-2}+2s_{0}\tau_{+}(\beta)x_{N}^{2}|x|^{\tau_{+}(\beta)-2}\geq
0.$
So $V_{t_{0},s_{0}}$ is a positive supersolution of (2.8).
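These supersolution computations all rest on one elementary identity, which can be checked directly; a minimal sketch, using $\Delta(|x|^{\tau})=\tau(\tau+N-2)|x|^{\tau-2}$ and $2\nabla x_{N}\cdot\nabla(|x|^{\tau})=2\tau x_{N}|x|^{\tau-2}$:
$\mathcal{L}_{\beta}(x_{N}|x|^{\tau})=\big{(}\beta-\tau(\tau+N)\big{)}\,x_{N}|x|^{\tau-2}=c_{\beta}(\tau)\,x_{N}|x|^{\tau-2},$
so that, assuming $\tau_{\pm}(\beta)$ denote the two roots of $\tau(\tau+N)=\beta$, the factor $c_{\beta}(\tau)$ vanishes exactly at $\tau=\tau_{\pm}(\beta)$, and $c_{\beta}(-N/2)=\beta-\beta_{0}$.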
For $\beta=\beta_{0},\,\tau_{-}(\beta)=-\frac{N}{2}$, we have that
$\mathcal{L}_{\beta}V_{t,s}(x)=\frac{t}{4}x_{N}|x|^{-\frac{N}{2}-2}(-\ln|x|)^{-\frac{3}{2}}+2s|x|^{-\frac{N}{2}}-sNx_{N}^{2}|x|^{-\frac{N}{2}-2}.$
We take $s_{0}$ as above where $\beta$ is replaced by $\beta_{0}$ and we fix
$t_{0}\geq s_{0}$ such that
$\frac{t_{0}}{4}x_{N}|x|^{-\frac{N}{2}-2}(-\ln|x|)^{-\frac{3}{2}}-s_{0}Nx_{N}^{2}|x|^{-\frac{N}{2}-2}\geq 0.$
So $V_{t_{0},s_{0}}$ is also a positive supersolution of (2.8) in this case,
which implies, by the comparison principle, that
$u_{\varepsilon}(x)\leq V_{t_{0},s_{0}}(x),\quad\forall\,x\in\Omega.$
Proof of (2.9). We need to estimate
$\displaystyle\int_{\Omega}\frac{u_{\varepsilon}\xi}{(|x|^{2}+\varepsilon)|x|^{2}x_{N}}\,d\gamma_{\beta}$
for $0<\varepsilon<\varepsilon_{0}$, for some $\varepsilon_{0}>0$ fixed. We
first consider the case $\beta>0$. We observe that
$\displaystyle\varepsilon\int_{\Omega\setminus
B_{\sqrt{\varepsilon}}(0)}\frac{u_{\varepsilon}\xi\lambda_{\beta}(x)}{(|x|^{2}+\varepsilon)|x|^{2}x_{N}}\,dx$
$\displaystyle\leq$
$\displaystyle\varepsilon\|u_{\varepsilon_{0}}\|_{L^{\infty}(\Omega)}\|\xi/\rho\|_{L^{\infty}(\Omega)}\int_{\Omega\setminus
B_{\sqrt{\varepsilon}}(0)}\frac{|x|^{\tau_{+}(\beta)-2}}{|x|^{2}+\varepsilon}\,dx$
$\displaystyle\leq$
$\displaystyle\|u_{\varepsilon_{0}}\|_{L^{\infty}(\Omega)}\|\xi/\rho\|_{L^{\infty}(\Omega)}\varepsilon^{\frac{N-2+\tau_{+}(\beta)}{2}}\int_{B_{\frac{1}{2\sqrt{\varepsilon}}}(0)\setminus
B_{1}(0)}|y|^{\tau_{+}(\beta)-4}\,dy$ $\displaystyle\leq$ $\displaystyle
c_{3}\|u_{\varepsilon_{0}}\|_{L^{\infty}(\Omega)}\|\xi/\rho\|_{L^{\infty}(\Omega)}(2^{-\tau_{+}(\beta)+4-N}\varepsilon+\varepsilon^{\frac{N-2+\tau_{+}(\beta)}{2}})$
$\displaystyle\to$ $\displaystyle 0\ \quad{\rm as}\quad\ \varepsilon\to 0^{+}$
and
$\displaystyle\varepsilon\int_{B_{\sqrt{\varepsilon}}(0)}\frac{u_{\varepsilon}\xi\lambda_{\beta}(x)}{(|x|^{2}+\varepsilon)|x|^{2}x_{N}}\,\,dx$
$\displaystyle\leq$
$\displaystyle\|u_{\varepsilon_{0}}\|_{L^{\infty}(\Omega)}\|\xi/\rho\|_{L^{\infty}(\Omega)}\int_{B_{\sqrt{\varepsilon}}(0)}|x|^{\tau_{+}(\beta)-2}\,dx$
$\displaystyle\leq$ $\displaystyle
c_{4}\|u_{\varepsilon_{0}}\|_{L^{\infty}(\Omega)}\|\xi/\rho\|_{L^{\infty}(\Omega)}\varepsilon^{\frac{N-2+\tau_{+}(\beta)}{2}}$
$\displaystyle\to$ $\displaystyle 0\ \quad{\rm as}\quad\ \varepsilon\to
0^{+},$
where $\rho(x)={\rm dist}(x,\,\partial\Omega)$ and
$\frac{N-2+\tau_{+}(\beta)}{2}>0$. Therefore, passing to the limit in (2.5),
we obtain (2.9).
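The first estimate above comes from the change of variables $x=\sqrt{\varepsilon}\,y$; a minimal sketch, assuming as in Part 2 that $\Omega\subset B_{\frac{1}{2}}(0)$:
$\varepsilon\,\frac{|x|^{\tau_{+}(\beta)-2}}{|x|^{2}+\varepsilon}\,dx=\varepsilon^{\frac{N-2+\tau_{+}(\beta)}{2}}\,\frac{|y|^{\tau_{+}(\beta)-2}}{|y|^{2}+1}\,dy\leq\varepsilon^{\frac{N-2+\tau_{+}(\beta)}{2}}\,|y|^{\tau_{+}(\beta)-4}\,dy,$
and the domain $\Omega\setminus B_{\sqrt{\varepsilon}}(0)$ is mapped into $B_{\frac{1}{2\sqrt{\varepsilon}}}(0)\setminus B_{1}(0)$.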
For $\beta\in(\beta_{0},\,0)$, from the increasing monotonicity and the upper
bound $V_{t_{0},s_{0}}$, we have that
$\lim_{\varepsilon\to
0^{+}}\int_{\Omega}u_{\varepsilon}\mathcal{L}_{\beta}^{*}(\frac{\xi}{x_{N}})d\gamma_{\beta}=\int_{\Omega}u_{\beta}\mathcal{L}_{\beta}^{*}(\frac{\xi}{x_{N}})d\gamma_{\beta}$
and
$\varepsilon\int_{\Omega}\frac{\xi
u_{\varepsilon}\lambda_{\beta}(x)}{(|x|^{2}+\varepsilon)|x|^{2}x_{N}}dx\leq
c_{5}\varepsilon\int_{\Omega}\frac{|x|^{-N+\sqrt{\beta-\beta_{0}}}}{|x|^{2}+\varepsilon}dx.$
By direct computation, we have that
$\displaystyle\varepsilon\int_{\Omega\setminus
B_{\sqrt{\varepsilon}}(0)}\frac{|x|^{-N+\sqrt{\beta-\beta_{0}}}}{|x|^{2}+\varepsilon}dx$
$\displaystyle\leq$ $\displaystyle
c_{6}\varepsilon^{\frac{\sqrt{\beta-\beta_{0}}}{2}}\int_{B_{\frac{1}{2\sqrt{\varepsilon}}}(0)\setminus
B_{1}(0)}|y|^{-N-2+\sqrt{\beta-\beta_{0}}}\,dy$ $\displaystyle\leq$
$\displaystyle
c_{7}(\varepsilon+\varepsilon^{\frac{\sqrt{\beta-\beta_{0}}}{2}})\to
0\quad{\rm as}\quad\varepsilon\to 0^{+}$
and
$\displaystyle\varepsilon\int_{B_{\sqrt{\varepsilon}}(0)}\frac{|x|^{-N+\sqrt{\beta-\beta_{0}}}}{|x|^{2}+\varepsilon}dx$
$\displaystyle\leq$
$\displaystyle\int_{B_{\sqrt{\varepsilon}}(0)}|x|^{-N+\sqrt{\beta-\beta_{0}}}\,dx$
$\displaystyle\leq$ $\displaystyle
c_{8}\varepsilon^{\frac{\sqrt{\beta-\beta_{0}}}{2}}\to 0\quad{\rm
as}\quad\varepsilon\to 0^{+}.$
As a conclusion, passing to the limit in (2.5) as $\varepsilon\to 0^{+}$, we
have that $u_{\beta}$ satisfies that
(2.11)
$\int_{\Omega}u_{\beta}\mathcal{L}_{\beta}^{*}(\frac{\xi}{x_{N}})\,d\gamma_{\beta}=\int_{\Omega}\frac{f\xi}{x_{N}}\,d\gamma_{\beta},\quad\forall\,\xi\in
C^{1.1}_{0}(\Omega).$
Finally, we prove (2.9) with $\beta=\beta_{0}$. We claim that the mapping
$\beta\mapsto u_{\beta}$ with $\beta\in(\beta_{0},0)$ is decreasing. In fact,
if $\beta_{0}<\beta_{1}\leq\beta_{2}<0$, we know that
$\displaystyle f=\mathcal{L}_{\beta_{1}}u_{\beta_{1}}$ $\displaystyle=$
$\displaystyle-\Delta u_{\beta_{1}}+\frac{\beta_{1}}{|x|^{2}}u_{\beta_{1}}$
$\displaystyle\leq$ $\displaystyle-\Delta
u_{\beta_{1}}+\frac{\beta_{2}}{|x|^{2}}u_{\beta_{1}}=\mathcal{L}_{\beta_{2}}u_{\beta_{1}},$
by Lemma 2.2, which implies that $u_{\beta_{1}}\geq u_{\beta_{2}}.$
We know that $V_{t_{0},s_{0}}$ is a supersolution of (2.8) with
$\beta\in(\beta_{0},0)$. So it follows by Lemma 2.2 that
$\\{u_{\beta}\\}_{\beta}$ is uniformly bounded from above by
$V_{t_{0},s_{0}}\in L^{1}(\Omega,\frac{1}{x_{N}}d\gamma_{\beta})$.
For $\xi\in C^{1.1}_{0}(\Omega)$, we have that
$|\mathcal{L}_{\beta}^{*}(\frac{\xi}{x_{N}})|\leq
c_{9}(\|\frac{\xi}{x_{N}}\|_{C^{1.1}(\Omega)}+\|\frac{\xi}{x_{N}}\|_{C^{1}(\Omega)}x_{N}^{-1}),$
where $c_{9}>0$ is independent of $\beta$.
From the monotone convergence theorem and the uniqueness of the
solution, we have that
$u_{\beta}\to u_{\beta_{0}}\quad{\rm a.e.\ in}\ \Omega\;{\rm
as}\;\;\beta\to\beta^{+}_{0}\quad{\rm
and\;in}\;\;L^{1}(\Omega,\,x_{N}^{-1}d\gamma_{\beta})$
and $u_{\beta_{0}}$ is a classical solution of (2.8) with $\beta=\beta_{0}$.
Passing to the limit in (2.11) as $\beta\to\beta^{+}_{0}$, we obtain that
$\int_{\Omega}u_{\beta_{0}}\mathcal{L}^{*}_{\beta_{0}}(\frac{\xi}{x_{N}})\,d\gamma_{\beta_{0}}=\int_{\Omega}\frac{f\xi}{x_{N}}d\gamma_{\beta_{0}}.$
The proof ends. $\Box$
###### Remark 2.5.
We note that when $\beta\geq 0$ and $f$ is bounded, the moderately singular
solution of problem (2.8) is no longer singular; that is, it is a classical
solution of
(2.12)
$\left\\{\begin{array}[]{lll}\displaystyle\mathcal{L}_{\beta}u=f\quad{\rm
in}\;\;{\Omega},\\\\[4.2679pt] \phantom{L_{\beta}}\displaystyle u=0\quad{\rm
on}\;\;\partial{\Omega}.\end{array}\right.$
Now we prove the following
###### Lemma 2.6.
$(i)$ The problem
(2.13)
$\left\\{\begin{array}[]{lll}\displaystyle\mathcal{L}_{\beta}^{*}(\frac{u}{x_{N}})=1\quad{\rm
in}\;\;\Omega,\\\\[5.69054pt] \phantom{\mathcal{L}_{\beta}^{*}-\
\,\,\,}\displaystyle u=0\quad{\rm on}\;\;\partial{\Omega}\end{array}\right.$
has a unique positive solution $w_{1}\in C^{2}(\Omega)\cap
C^{0.1}_{0}(\Omega)$.
$(ii)$ The problem
(2.14)
$\left\\{\begin{array}[]{lll}\displaystyle\mathcal{L}_{\beta}^{*}(\frac{u}{x_{N}})=\frac{1}{x_{N}}&{\rm
in}\;\;\Omega,\\\\[5.69054pt] \phantom{\mathcal{L}_{\beta}^{*}--}\displaystyle
u=0&{\rm on}\;\;\partial{\Omega}\end{array}\right.$
has a unique positive solution $w_{2}\in C^{2}(\Omega)\cap
C^{1}_{0}(\bar{\Omega}\setminus\\{0\\})\cap C^{0.1}_{0}(\Omega)$.
Proof. We first claim that problem (2.8) has a unique classical positive
solution $w_{\beta}$ under the constraint (2.7) when $f(x)=\lambda_{\beta}(x)$
or $f(x)=|x|^{\tau_{+}(\beta)}$.
In fact, let $f_{n}(x)=\lambda_{\beta}(x)\big{(}1-\eta_{0}(n|x|)\big{)}$, where
$\eta_{0}:[0,+\infty)\to[0,\,1]$ is a decreasing $C^{\infty}$ function
satisfying (2.1). Then $f_{n}\in C^{\theta}(\bar{\Omega})$ with
$\theta\in(0,1)$ and $f_{n}\leq f$; by Proposition 2.4, let $w_{n}$ be the
solution of the problem
(2.15)
$\left\\{\begin{array}[]{lll}\displaystyle\mathcal{L}_{\beta}u=f_{n}&{\rm
in}\;\;{\Omega},\\\\[5.69054pt] \phantom{L_{\beta}}\displaystyle u=0&{\rm
on}\;\;\partial{\Omega}\setminus\\{0\\},\end{array}\right.$
subject to (2.7). We know that the mapping $n\mapsto w_{n}$ is increasing, by the
monotonicity of $\\{f_{n}\\}$. So it remains to construct a suitable upper
bound for $w_{n}$ in the cases $f(x)=\lambda_{\beta}(x)$ and
$f(x)=|x|^{\tau_{+}(\beta)}$, respectively.
When $f(x)=\lambda_{\beta}(x)$, let
$V_{t,s}(x)=t\lambda_{\beta}(x)-sx_{N}|x|^{\tau_{+}(\beta)+2}$ for $s,t>0$. It
is known that
$\mathcal{L}_{\beta}V_{t,s}=-sc_{\tau_{+}(\beta)+2}\lambda_{\beta}(x),\quad
x\in\mathbb{R}^{N}_{+},$
for some $c_{\tau_{+}(\beta)+2}<0$. So fix $s=-1/c_{\tau_{+}(\beta)+2}$ and
then fix $t>0$ such that
$V_{t,s}(x)>0,\quad\forall x\in\Omega.$
The limit of $\\{w_{n}\\}_{n}$, denoted by $w_{\beta,1}$, is a solution of
(2.8) with $f=\lambda_{\beta}$, subject to (2.7), and satisfies $w_{\beta,1}\leq V_{t,s}.$
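The constant $c_{\tau_{+}(\beta)+2}$ can be made explicit from the identity $\mathcal{L}_{\beta}(x_{N}|x|^{\tau})=(\beta-\tau(\tau+N))x_{N}|x|^{\tau-2}$; a minimal sketch, using $\beta=\tau_{+}(\beta)(\tau_{+}(\beta)+N)$:
$c_{\tau_{+}(\beta)+2}=\beta-(\tau_{+}(\beta)+2)(\tau_{+}(\beta)+N+2)=-4\big{(}1+\sqrt{\beta-\beta_{0}}\big{)}<0,$
so that the choice $s=-1/c_{\tau_{+}(\beta)+2}$ gives $\mathcal{L}_{\beta}V_{t,s}=\lambda_{\beta}=f$ exactly.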
When $f(x)=|x|^{\tau_{+}(\beta)}$, let
$W_{t,s,l}(x)=t\lambda_{\beta}(x)-s(x_{N}|x|^{\tau_{+}(\beta)+2}+lx_{N}^{2}|x|^{\tau_{+}(\beta)+2}),$
where $s,t,l>0$. We observe that
$\mathcal{L}_{\beta}W_{t,s,l}(x)=s[-c_{\tau_{+}(\beta)+2}\lambda_{\beta}(x)+2l|x|^{\tau_{+}(\beta)}+2l\tau_{+}(\beta)x_{N}^{2}|x|^{\tau_{+}(\beta)}],\;x\in\mathbb{R}^{N}_{+},$
with the same constant $c_{\tau_{+}(\beta)+2}<0$ as above. Then we choose
$l>0$ such that $-2c_{\tau_{+}(\beta)+2}l\tau_{+}(\beta)x_{N}>0$ for
$x\in\Omega$, $s=\frac{1}{2l}$ and we take $t>0$ such that $W_{t,s,l}>0$ in
$\Omega$ and
$\mathcal{L}_{\beta}W_{t,s,l}(x)\geq|x|^{\tau_{+}(\beta)}.$
Thus, the limit of $\\{w_{n}\\}_{n}$, denoted by $w_{\beta,2}$, is a solution
of (2.8) with $f(x)=|x|^{\tau_{+}(\beta)}$, subject to (2.7), such that
$w_{\beta,2}(x)\leq W_{t,s,l}(x).$
As a conclusion, for $i=1,2$,
(2.16) $w_{\beta,i}\leq t\lambda_{\beta}\quad{\rm in}\quad\Omega.$
Denoting $w_{i}=w_{\beta,i}x_{N}/\lambda_{\beta}$, we observe that
$\displaystyle
1=\lambda_{\beta}^{-1}\mathcal{L}_{\beta}w_{\beta,1}=\lambda_{\beta}^{-1}\mathcal{L}_{\beta}(\lambda_{\beta}w_{1}/x_{N})=\mathcal{L}_{\beta}^{*}(w_{1}/x_{N})$
and
$\displaystyle
1/x_{N}=\lambda_{\beta}^{-1}\mathcal{L}_{\beta}w_{\beta,2}=\lambda_{\beta}^{-1}\mathcal{L}_{\beta}(\lambda_{\beta}w_{2}/x_{N})=\mathcal{L}_{\beta}^{*}(w_{2}/x_{N}).$
Moreover, by (2.16) it follows that $w_{i}\leq tx_{N}.$ Then we have that
$w_{i}\in C^{2}(\Omega)\cap C^{0.1}_{0}(\Omega)$ for $i=1,2$. Away from the
origin, the Hardy operator is uniformly elliptic, thus $w_{2}\in
C^{1}_{0}(\bar{\Omega}\setminus\\{0\\})$ and then $w_{2}\in C^{2}(\Omega)\cap
C^{1}_{0}(\bar{\Omega}\setminus\\{0\\})\cap C^{0.1}_{0}(\Omega).$ $\Box$
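The last two displays rest on the conjugation structure of $\mathcal{L}_{\beta}^{*}$; a minimal sketch, assuming as in (1.6) that $\mathcal{L}_{\beta}^{*}v=\lambda_{\beta}^{-1}\mathcal{L}_{\beta}(\lambda_{\beta}v)$: since $\mathcal{L}_{\beta}\lambda_{\beta}=0$,
$\mathcal{L}_{\beta}(\lambda_{\beta}v)=-\lambda_{\beta}\Delta v-2\nabla\lambda_{\beta}\cdot\nabla v+v\,\mathcal{L}_{\beta}\lambda_{\beta}=-\lambda_{\beta}\Delta v-2\nabla\lambda_{\beta}\cdot\nabla v,$
that is, $\mathcal{L}_{\beta}^{*}v=-\Delta v-\frac{2}{\lambda_{\beta}}\nabla\lambda_{\beta}\cdot\nabla v$.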
Although $C^{2}(\Omega)\cap C^{1}_{0}(\bar{\Omega}\setminus\\{0\\})\cap
C^{0.1}_{0}(\Omega)$ is not a suitable test function space for problem (1.2),
$w_{1},\,w_{2}$ remain valid as test functions for formula (1.11) with
$k=0$ in the distributional sense.
A direct consequence of Lemma 2.6 can be stated as follows.
###### Corollary 2.7.
Assume that $f\in C^{1}(\bar{\Omega}\setminus\\{0\\})$ satisfies, for some
$c_{10}>0$,
$|f(x)|\leq\frac{c_{10}}{x_{N}}\quad{\rm in}\ \ \Omega.$
Then there exists a unique solution $w_{f}\in C^{2}(\Omega)\cap
C^{0.1}_{0}(\Omega)$ of
(2.17)
$\left\\{\begin{array}[]{lll}\displaystyle\mathcal{L}_{\beta}^{*}(\frac{u}{x_{N}})=f&{\rm
in}\;\;\Omega,\\\\[5.69054pt] \phantom{\mathcal{L}_{\beta}^{*}-\
\,\,\,}\displaystyle u=0&{\rm on}\;\;\partial{\Omega}.\end{array}\right.$
## 3\. Fundamental solution
### 3.1. In half space
In this subsection, we give the proof of Theorem 1.1.
Proof of Theorem 1.1. For any $\xi\in C^{1.1}_{0}(\mathbb{R}^{N}_{+})$, we
know that there exists a unique $\zeta\in C^{1.1}_{c}(\mathbb{R}^{N})$ such that
$\xi(x)=x_{N}\zeta(x)$ for $x\in\overline{\mathbb{R}^{N}_{+}}$. Moreover, we
have that $\frac{\partial\xi}{\partial x_{N}}(0)=\zeta(0)$.
Take $\zeta\in C^{1.1}_{c}(\mathbb{R}^{N})$. Multiplying (1.3) by
$\lambda_{\beta}\zeta$ and integrating over
$\mathbb{R}^{N}_{+}\setminus\overline{B_{r}(0)}$, we have that
$\displaystyle 0$ $\displaystyle=$
$\displaystyle\int_{\mathbb{R}^{N}_{+}\setminus\overline{B_{r}(0)}}\mathcal{L}_{\beta}(\Lambda_{\beta})\lambda_{\beta}\zeta\,dx=\int_{\mathbb{R}^{N}_{+}\setminus\overline{B_{r}(0)}}\Lambda_{\beta}\mathcal{L}_{\beta}^{*}(\zeta)\,d\gamma_{\beta}$
$\displaystyle+\int_{\partial_{+}B_{r}(0)}\Big{(}-\nabla\Lambda_{\beta}\cdot\frac{x}{|x|}\lambda_{\beta}+\nabla\lambda_{\beta}\cdot\frac{x}{|x|}\Lambda_{\beta}\Big{)}\zeta\,d\omega$
$\displaystyle+\int_{\partial_{+}B_{r}(0)}\Lambda_{\beta}\lambda_{\beta}\Big{(}\nabla\zeta\cdot\frac{x}{|x|}\Big{)}\,d\omega,$
where $\partial_{+}B_{r}(0)=\partial B_{r}(0)\cap\mathbb{R}^{N}_{+}$. For
$\beta\geq\beta_{0}$, we see that for $r=|x|>0$ small,
$\displaystyle-\nabla\Lambda_{\beta}(x)\cdot\frac{x}{|x|}\lambda_{\beta}(x)+\nabla\lambda_{\beta}(x)\cdot\frac{x}{|x|}\Lambda_{\beta}(x)$
$\displaystyle=$
$\displaystyle\left\\{\begin{array}[]{lll}2\sqrt{\beta-\beta_{0}}\,x_{N}^{2}r^{-N-1}&{\rm
if}\;\;\beta>\beta_{0},\\\\[4.2679pt] \phantom{}x_{N}^{2}r^{-N-1}&{\rm
if}\;\;\beta=\beta_{0}\end{array}\right.$
and
$|\zeta(x)-\zeta(0)|\leq c_{11}r,$
then
$\displaystyle\int_{\partial_{+}B_{r}(0)}\sqrt{\beta-\beta_{0}}\,x_{N}^{2}r^{-N-1}\zeta(0)\,d\omega(x)$
$\displaystyle=$
$\displaystyle\left\\{\begin{array}[]{lll}\sqrt{\beta-\beta_{0}}\,\displaystyle\int_{\partial_{+}B_{1}(0)}x_{N}^{2}d\omega(x)\,\zeta(0)&{\rm
if}\;\;\beta>\beta_{0},\\\\[5.69054pt]
\phantom{}\displaystyle\int_{\partial_{+}B_{1}(0)}x_{N}^{2}d\omega(x)\,\zeta(0)&{\rm
if}\;\;\beta=\beta_{0}\end{array}\right.$ $\displaystyle=$ $\displaystyle
c_{\beta}\zeta(0)$
and
$\displaystyle\Big{|}\int_{\partial_{+}B_{r}(0)}\Big{(}-\nabla\Lambda_{\beta}\cdot\frac{x}{|x|}\lambda_{\beta}+\nabla\lambda_{\beta}\cdot\frac{x}{|x|}\Lambda_{\beta}\Big{)}\zeta\,d\omega-
c_{\beta}\zeta(0)\Big{|}$ $\displaystyle\leq$ $\displaystyle
c_{12}(\sqrt{\beta-\beta_{0}}+1)\,r\int_{\partial_{+}B_{1}(0)}x_{N}^{2}d\omega(x)$
$\displaystyle\to$ $\displaystyle 0\quad{\rm as}\quad r\to 0^{+},$
that is,
$\lim_{r\to
0}\Big{(}\int_{\partial_{+}B_{r}(0)}-\nabla\Lambda_{\beta}\cdot\frac{x}{|x|}\lambda_{\beta}\zeta\,d\omega+\int_{\partial_{+}B_{r}(0)}\nabla\lambda_{\beta}\cdot\frac{x}{|x|}\Lambda_{\beta}\zeta\,d\omega\Big{)}=c_{\beta}\zeta(0).$
Moreover, we see that
$\Big{|}\int_{\partial_{+}B_{r}(0)}\Lambda_{\beta}\lambda_{\beta}\Big{(}\nabla\zeta\cdot\frac{x}{|x|}\Big{)}\,d\omega\Big{|}\leq\|\zeta\|_{C^{1}}\,r\int_{\partial_{+}B_{1}(0)}x_{N}^{2}d\omega\to
0\quad{\rm as}\quad r\to 0^{+}.$
Therefore, we have that
$\lim_{r\to
0^{+}}\int_{\mathbb{R}^{N}_{+}\setminus\overline{B_{r}(0)}}\Lambda_{\beta}\mathcal{L}_{\beta}^{*}(\zeta)d\gamma_{\beta}=c_{\beta}\zeta(0),$
which implies (1.7). The proof ends. $\Box$
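The Wronskian-type identity used above can be checked directly for $\beta>\beta_{0}$; a minimal sketch, assuming $\Lambda_{\beta}(x)=x_{N}|x|^{\tau_{-}(\beta)}$ and $\lambda_{\beta}(x)=x_{N}|x|^{\tau_{+}(\beta)}$: from $\nabla(x_{N}|x|^{\tau})\cdot\frac{x}{|x|}=(1+\tau)\,x_{N}|x|^{\tau-1}$,
$-\nabla\Lambda_{\beta}\cdot\frac{x}{|x|}\,\lambda_{\beta}+\nabla\lambda_{\beta}\cdot\frac{x}{|x|}\,\Lambda_{\beta}=\big{(}\tau_{+}(\beta)-\tau_{-}(\beta)\big{)}\,x_{N}^{2}|x|^{-N-1}=2\sqrt{\beta-\beta_{0}}\,x_{N}^{2}|x|^{-N-1},$
using $\tau_{+}(\beta)+\tau_{-}(\beta)=-N$ and $\tau_{+}(\beta)-\tau_{-}(\beta)=2\sqrt{\beta-\beta_{0}}$.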
### 3.2. Trace of $\Lambda_{\beta}$.
The following theorem shows the trace of $\Lambda_{\beta}$.
###### Theorem 3.1.
Let $d\omega_{\beta}(x^{\prime})=|x^{\prime}|^{\tau_{+}(\beta)}dx^{\prime}$
for $x^{\prime}\in\mathbb{R}^{N-1}$, then for any $\zeta\in
C_{c}^{1}(\mathbb{R}^{N-1})$,
(3.3) $\lim_{t\to
0^{+}}\int_{\mathbb{R}^{N-1}}\Lambda_{\beta}(x^{\prime},t)\zeta(x^{\prime})d\omega_{\beta}(x^{\prime})=b_{N}\zeta(0),$
where
$b_{N}=\int_{\mathbb{R}^{N-1}}(1+|y^{\prime}|^{2})^{-\frac{N}{2}}dy^{\prime}>0.$
That is, the trace of $\Lambda_{\beta}$ is $b_{N}\delta_{0}$ in the
$d\omega_{\beta}$-distributional sense.
Proof. For any $\zeta\in C^{1}_{c}(\mathbb{R}^{N-1})$, there exists $R>0$ such
that ${\rm supp}\,\zeta\subset B_{R}^{\prime}(0)$, where here and in the sequel
$B_{R}^{\prime}(0)$ denotes the ball of radius $R$ in $\mathbb{R}^{N-1}$. By
direct computation, we have that
$\displaystyle\int_{\mathbb{R}^{N-1}}\Lambda_{\beta}(x^{\prime},t)\zeta(x^{\prime})\,d\omega_{\beta}(x^{\prime})$
$\displaystyle=$
$\displaystyle\int_{B_{R}^{\prime}(0)}\Lambda_{\beta}(x^{\prime},t)\zeta(x^{\prime})\,d\omega_{\beta}(x^{\prime})$
$\displaystyle=$
$\displaystyle\int_{B_{R/t}^{\prime}(0)}(|y^{\prime}|^{2}+1)^{\frac{\tau_{-}(\beta)}{2}}|y^{\prime}|^{\tau_{+}(\beta)}\zeta(ty^{\prime})dy^{\prime}.$
For any $\varepsilon>0$, there exists $R_{\varepsilon}>1$ such that
$\displaystyle\int_{B_{R/t}^{\prime}(0)\setminus
B^{\prime}_{R_{\varepsilon}}(0)}(|y^{\prime}|^{2}+1)^{\frac{\tau_{-}(\beta)}{2}}|y^{\prime}|^{\tau_{+}(\beta)}\zeta(ty^{\prime})dy^{\prime}$
$\displaystyle\leq$
$\displaystyle\|\zeta\|_{L^{\infty}(\mathbb{R}^{N-1})}\int_{\mathbb{R}^{N-1}\setminus
B^{\prime}_{R_{\varepsilon}}(0)}|y^{\prime}|^{-N}dy^{\prime}$
$\displaystyle\leq$
$\displaystyle\|\zeta\|_{L^{\infty}(\mathbb{R}^{N-1})}|\mathcal{S}^{N-2}|\varepsilon,$
where we may take $R_{\varepsilon}=\varepsilon^{-1}$. Let
$A:=\int_{B^{\prime}_{R_{\varepsilon}}(0)}(|y^{\prime}|^{2}+1)^{\frac{\tau_{-}(\beta)}{2}}|y^{\prime}|^{\tau_{+}(\beta)}\zeta(ty^{\prime})dy^{\prime}-\int_{\mathbb{R}^{N-1}}(|y^{\prime}|^{2}+1)^{\frac{\tau_{-}(\beta)}{2}}|y^{\prime}|^{\tau_{+}(\beta)}\zeta(0)dy^{\prime},$
we have that
$\displaystyle|A|$ $\displaystyle\leq$
$\displaystyle\int_{B^{\prime}_{R_{\varepsilon}}(0)}(|y^{\prime}|^{2}+1)^{\frac{\tau_{-}(\beta)}{2}}|y^{\prime}|^{\tau_{+}(\beta)}\left|\zeta(ty^{\prime})-\zeta(0)\right|\,dy^{\prime}+\varepsilon|\zeta(0)||\mathcal{S}^{N-2}|$
$\displaystyle\leq$ $\displaystyle
t\|\zeta\|_{C^{1}(\mathbb{R}^{N-1})}\int_{B^{\prime}_{R_{\varepsilon}}(0)}(|y^{\prime}|^{2}+1)^{\frac{\tau_{-}(\beta)}{2}}|y^{\prime}|^{\tau_{+}(\beta)}dy^{\prime}+\varepsilon|\zeta(0)||\mathcal{S}^{N-2}|$
$\displaystyle\leq$ $\displaystyle
R_{\varepsilon}t\|\zeta\|_{C^{1}(\mathbb{R}^{N-1})}+\varepsilon|\zeta(0)||\mathcal{S}^{N-2}|$
$\displaystyle\leq$
$\displaystyle\left(\|\zeta\|_{C^{1}(\mathbb{R}^{N-1})}+|\zeta(0)||\mathcal{S}^{N-2}|\right)\varepsilon,$
if we take $t=\varepsilon^{2}$. Passing to the limit as $\varepsilon\to 0$, we
derive (3.3). $\Box$
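The substitution $x^{\prime}=ty^{\prime}$ in the proof is exact because of the homogeneity of the integrand; a minimal sketch, assuming $\Lambda_{\beta}(x^{\prime},t)=t\,(|x^{\prime}|^{2}+t^{2})^{\frac{\tau_{-}(\beta)}{2}}$:
$\Lambda_{\beta}(ty^{\prime},t)\,|ty^{\prime}|^{\tau_{+}(\beta)}\,t^{N-1}\,dy^{\prime}=t^{\,N+\tau_{+}(\beta)+\tau_{-}(\beta)}\,(|y^{\prime}|^{2}+1)^{\frac{\tau_{-}(\beta)}{2}}|y^{\prime}|^{\tau_{+}(\beta)}\,dy^{\prime},$
and all powers of $t$ cancel since $\tau_{+}(\beta)+\tau_{-}(\beta)=-N$.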
### 3.3. Fundamental solution in bounded domain
In this subsection, we approximate the isolated singular solution.
###### Proposition 3.2.
Let $\Omega$ be a $C^{2}$ domain verifying (1.9). Then the problem
(3.4)
$\left\\{\begin{array}[]{lll}\displaystyle\mathcal{L}_{\beta}u=0\quad{\rm
in}\;\;{\Omega},\\\\[5.69054pt] \phantom{L_{\beta}}\displaystyle u=0\quad{\rm
on}\;\;\partial{\Omega}\setminus\\{0\\},\\\\[5.69054pt]
\phantom{}\displaystyle\lim_{r\to 0^{+}}\sup_{x\in
B_{r}^{+}(0)}\frac{|u(x)-\Lambda_{\beta}(x)|}{\Lambda_{\beta}(x)}=0\end{array}\right.$
admits a unique solution $\Lambda^{\Omega}_{\beta}$ satisfying the following
distributional identity:
(3.5)
$\int_{\Omega}\Lambda^{\Omega}_{\beta}\mathcal{L}_{\beta}^{*}(\frac{\xi}{x_{N}})\,d\gamma_{\beta}=c_{\beta}\frac{\partial\xi}{\partial
x_{N}}(0),\quad\forall\,\xi\in C^{1.1}_{0}(\Omega).$
Proof. Let $\eta_{r_{0}}(t)=\eta_{0}(\frac{2}{r_{0}}t)$, which satisfies that
(3.6) $\eta_{r_{0}}=1\quad{\rm in}\quad[0,r_{0}/2]\quad{\rm
and}\quad\eta_{r_{0}}=0\quad{\rm in}\quad[r_{0},+\infty).$
For $i=1,2$ the problem
(3.7)
$\left\\{\begin{array}[]{lll}\displaystyle\mathcal{L}_{\beta}w_{i}=-2\nabla\eta_{r_{0}}\cdot\nabla\Lambda_{\beta}-\Lambda_{\beta}\Delta\eta_{r_{0}}\quad{\rm
in}\;\;{\Omega},\\\\[5.69054pt] \phantom{L_{\beta}}\displaystyle
w_{i}=0\quad{\rm on}\;\;\partial{\Omega}\setminus\\{0\\},\\\\[5.69054pt]
\phantom{}\displaystyle\lim_{e\in\mathcal{S}^{N}_{+},\,t\to
0^{+}}w_{i}(te)\Lambda_{\beta}^{-1}(te)=2-i,\end{array}\right.$
admits a unique solution $w_{i}$, namely $w_{1}$ and $w_{2}$ for $i=1,2$ respectively. Obviously,
$w_{1}=\Lambda_{\beta}\eta_{r_{0}},$
and
$-2\nabla\eta_{r_{0}}\cdot\nabla\Lambda_{\beta}-\Lambda_{\beta}\Delta\eta_{r_{0}}$
has compact support in $\Omega\cap(\overline{B_{r_{0}}(0)\setminus
B_{\frac{r_{0}}{2}}(0)})$, so it is smooth and bounded; it follows from the
proof of Proposition 2.4 that there
exist $s_{0},t_{0}>0$ such that $|w_{2}|\leq V_{t_{0},s_{0}}$.
For $i=1$, following the proof of Theorem 1.1, we get that for any $\xi\in
C^{1.1}_{0}(\Omega)$,
(3.8)
$\int_{\Omega}w_{1}\mathcal{L}_{\beta}^{*}(\frac{\xi}{x_{N}})\,d\gamma_{\beta}=\int_{\Omega}\Big{(}-2\nabla\eta_{r_{0}}\cdot\nabla\Lambda_{\beta}-\Lambda_{\beta}\Delta\eta_{r_{0}}\Big{)}\frac{\xi}{x_{N}}\,d\gamma_{\beta}+c_{\beta}\frac{\partial\xi}{\partial
x_{N}}(0).$
For $i=2$, it follows by Proposition 2.4 that for any $\xi\in
C^{1.1}_{0}(\Omega)$,
(3.9)
$\int_{\Omega}w_{2}\mathcal{L}_{\beta}^{*}(\frac{\xi}{x_{N}})d\gamma_{\beta}=\int_{\Omega}\Big{(}-2\nabla\eta_{r_{0}}\cdot\nabla\Lambda_{\beta}-\Lambda_{\beta}\Delta\eta_{r_{0}}\Big{)}\frac{\xi}{x_{N}}\,d\gamma_{\beta}.$
Let $\Lambda^{\Omega}_{\beta}=\Lambda_{\beta}\eta_{r_{0}}-w_{2}$; it follows
from (3.8) and (3.9) that
$\int_{\Omega}\Lambda^{\Omega}_{\beta}\mathcal{L}_{\beta}^{*}(\frac{\xi}{x_{N}})\,d\gamma_{\beta}=c_{\beta}\frac{\partial\xi}{\partial
x_{N}}(0),\quad\forall\,\xi\in C^{1.1}_{0}(\Omega).$
Finally, it is clear that if $u_{1}$ and $u_{2}$ are two solutions of (3.4),
then $w:=u_{1}-u_{2}$ satisfies
$\lim_{r\to 0^{+}}\sup_{x\in
B_{r}^{+}(0)}\frac{|w(x)|}{\Lambda_{\beta}(x)}=0.$
Combining with the fact that
$\mathcal{L}_{\beta}w=0\;\;{\rm in}\;\;{\Omega}\quad{\rm and}\quad w=0\;\;{\rm
on}\;\;\partial{\Omega}\setminus\\{0\\},$
and Lemma 2.2, we have that $w\equiv 0$. Thus the uniqueness is proved. $\Box$
## 4\. Existence
### 4.1. Zero Dirichlet boundary
Our purpose in this section is to clarify the isolated singularities of the
nonhomogeneous problem
(4.1)
$\left\\{\begin{array}[]{lll}\displaystyle\mathcal{L}_{\beta}u=f\quad{\rm
in}\;\;\Omega,\\\\[4.2679pt] \phantom{L_{\beta}}\displaystyle u=0\quad{\rm
on}\;\;\partial{\Omega}\setminus\\{0\\},\end{array}\right.$
where $f\in C^{\theta}_{loc}(\bar{\Omega}\setminus\\{0\\})$ with
$\theta\in(0,1)$. Recall that $\mathcal{L}^{*}_{\beta}$ is given by (1.6) and
$d\gamma_{\beta}(x)=\lambda_{\beta}(x)dx$. We prove the following
###### Theorem 4.1.
$(i)$ Assume that $f\in L^{1}(\Omega,\,d\gamma_{\beta})$ and $u\in
L^{1}(\Omega,\frac{1}{|x|}d\gamma_{\beta})$ is a classical solution of problem
(4.1), then there exists some $k\in\mathbb{R}$ such that there holds
(4.2)
$\int_{\Omega}u\,\mathcal{L}_{\beta}^{*}(\frac{\xi}{x_{N}})\,d\gamma_{\beta}=\int_{\Omega}\frac{f\xi}{x_{N}}\,d\gamma_{\beta}+k\frac{\partial\xi}{\partial
x_{N}}(0),\quad\forall\,\xi\in C^{1.1}_{0}(\Omega).$
$(ii)$ Conversely, assume that $f\in L^{1}(\Omega,\,d\gamma_{\beta})$; then for
any $k\in\mathbb{R}$, problem (4.1) has a unique solution $u_{k}\in
L^{1}(\Omega,\frac{1}{|x|}d\gamma_{\beta})$ verifying (4.2) with that $k.$
Proof. $(i)$ Let $\tilde{\Omega}$ be the interior set of
$\bar{\Omega}\cup\overline{\\{(x^{\prime},-x_{N}):(x^{\prime},x_{N})\in\Omega\\}}$
and extend $u$ (resp. $f$) by the $x_{N}$-odd extension to $\tilde{u}$ (resp.
$\tilde{f}$) in $\tilde{\Omega}$, then
$\mathcal{L}_{\beta}\tilde{u}=\tilde{f}$. Our aim is to identify the distributional
behavior at the origin. Denote by $L$ the distribution associated with
$\mathcal{L}_{\beta}\tilde{u}-\tilde{f}$, i.e.
(4.3)
$L(\zeta)=\int_{\tilde{\Omega}}\Big{(}\tilde{u}\mathcal{L}_{\beta}^{*}(\zeta)-\tilde{f}\zeta\Big{)}|x_{N}||x|^{\tau_{+}(\beta)}\,dx,\quad\forall\zeta\in
C^{\infty}_{c}(\tilde{\Omega}).$
For any $\zeta\in C^{\infty}_{c}(\tilde{\Omega}\setminus\\{0\\})$, we have
that $L(\zeta)=0.$ In fact, there exists $\varepsilon>0$ such that
supp$(\zeta)\subset\tilde{\Omega}\setminus B_{\varepsilon}(0)$ and then
$\displaystyle 0$ $\displaystyle=$ $\displaystyle
2\int_{\Omega}\zeta(\mathcal{L}_{\beta}u-f)\,d\gamma_{\beta}=\int_{\tilde{\Omega}}\zeta(\mathcal{L}_{\beta}\tilde{u}-\tilde{f})\,d\tilde{\gamma}_{\beta}$
$\displaystyle=$
$\displaystyle-\int_{\tilde{\Omega}}\tilde{f}\zeta\,d\tilde{\gamma}_{\beta}+\int_{\Omega\setminus
B_{\varepsilon}(0)}u\mathcal{L}_{\beta}^{*}\zeta
d\gamma_{\beta}+\int_{\partial(\Omega\setminus
B_{\varepsilon}(0))\cap(\mathbb{R}^{N-1}\times\\{0\\})}\frac{\partial
u}{\partial x_{N}}\zeta d\omega_{\beta}$
$\displaystyle+\int_{(-\Omega)\setminus
B_{\varepsilon}(0)}(-u)\mathcal{L}_{\beta}^{*}\zeta
d\tilde{\gamma}_{\beta}+\int_{\partial(-\Omega\setminus
B_{\varepsilon}(0))\cap(\mathbb{R}^{N-1}\times\\{0\\})}\frac{\partial\tilde{u}}{\partial(-x_{N})}\zeta
d\omega_{\beta}$ $\displaystyle=$ $\displaystyle\int_{\tilde{\Omega}\setminus
B_{\varepsilon}(0)}(\tilde{u}\mathcal{L}_{\beta}^{*}\zeta-\tilde{f}\zeta)\,d\tilde{\gamma}_{\beta}$
$\displaystyle=$
$\displaystyle\int_{\tilde{\Omega}}(\tilde{u}\mathcal{L}_{\beta}^{*}\zeta-\tilde{f}\zeta)\,d\tilde{\gamma}_{\beta},$
where $d\tilde{\gamma}_{\beta}=|\tilde{\lambda}_{\beta}(x)|dx$,
$\tilde{\lambda}_{\beta}$ is the odd extension of $\lambda_{\beta}$ and
$\int_{\partial(\Omega\setminus
B_{\varepsilon}(0))\cap(\mathbb{R}^{N-1}\times\\{0\\})}\frac{\partial
u}{\partial x_{N}}\zeta d\omega_{\beta}=-\int_{\partial(-\Omega\setminus
B_{\varepsilon}(0))\cap(\mathbb{R}^{N-1}\times\\{0\\})}\frac{\partial\tilde{u}}{\partial(-x_{N})}\zeta
d\omega_{\beta}.$
By Theorem XXXV in [33] (see also Theorem 6.25 in [32]), we have that
(4.4) $L=\sum_{|a|=0}^{p}k_{a}D^{a}\delta_{0},$
where $p\in\mathbb{N}$, $a=(a_{1},\cdots,a_{N})$ is a multi-index with
$a_{i}\in\mathbb{N}$, $|a|=\sum_{i=1}^{N}a_{i}$ and, in particular,
$D^{0}\delta_{0}=\delta_{0}$. Then we have that
(4.5)
$L(\zeta)=\int_{\tilde{\Omega}}\Big{(}\tilde{u}\mathcal{L}_{\beta}^{*}\zeta-\tilde{f}\zeta\Big{)}\,d\tilde{\gamma}_{\beta}=\sum_{|a|=0}^{p}k_{a}D^{a}\zeta(0),\quad\
\ \forall\zeta\in C^{\infty}_{c}(\tilde{\Omega}).$
For any multi-index $a=(a_{1},\cdots,a_{N})$, let $\zeta_{a}$ be a
$C^{\infty}$ function such that
(4.6) ${\rm supp}(\zeta_{a})\subset\overline{B_{2}(0)}\quad{\rm
and}\quad\zeta_{a}(x)=k_{a}\prod_{i=1}^{N}x_{i}^{a_{i}}\quad{\rm for}\ \ x\in
B_{1}(0).$
Now, using the test function
$\zeta_{\varepsilon,a}(x):=\zeta_{a}(\varepsilon^{-1}x)$ for
$x\in\tilde{\Omega}$ in (4.5), we have that
$\sum_{|b|\leq
p}k_{b}D^{b}\zeta_{\varepsilon,a}(0)=\frac{k_{a}^{2}}{\varepsilon^{|a|}}\prod^{N}_{i=1}a_{i}!,$
where $a_{i}!=a_{i}\cdot(a_{i}-1)\cdots 1>0$ and $a_{i}!=1$ if $a_{i}=0$.
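The collapse of the sum can be seen term by term; a minimal sketch:
$D^{b}\zeta_{\varepsilon,a}(0)=\varepsilon^{-|b|}(D^{b}\zeta_{a})(0)=\left\\{\begin{array}[]{lll}k_{a}\,\varepsilon^{-|a|}\prod_{i=1}^{N}a_{i}!&{\rm if}\;\;b=a,\\\\[2.84526pt] \phantom{}0&{\rm if}\;\;b\neq a,\end{array}\right.$
since a monomial $\prod_{i=1}^{N}x_{i}^{a_{i}}$ has a nonvanishing derivative at the origin only for the matching multi-index.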
Moreover, for $\varepsilon>0$ small, we obtain that
$\displaystyle\Big{|}\int_{\tilde{\Omega}}\tilde{u}\mathcal{L}_{\beta}^{*}\zeta_{\varepsilon}\,d\tilde{\gamma}_{\beta}\Big{|}$
$\displaystyle=$
$\displaystyle\Big{|}\int_{B_{2\varepsilon}(0)}\tilde{u}\mathcal{L}_{\beta}^{*}\zeta_{\varepsilon}\,d\tilde{\gamma}_{\beta}\Big{|}$
$\displaystyle\leq$
$\displaystyle\frac{1}{\varepsilon^{2}}\Big{|}\int_{B_{2\varepsilon}(0)}\tilde{u}(x)(-\Delta)\zeta_{a}(\varepsilon^{-1}x)\,d\tilde{\gamma}_{\beta}\Big{|}$
$\displaystyle+\frac{2|\tau_{+}(\beta)|}{\varepsilon}\,\Big{|}\int_{B_{2\varepsilon}(0)}\tilde{u}(x)\frac{x}{|x|^{2}}\cdot\nabla\zeta_{a}(\varepsilon^{-1}x)\,d\tilde{\gamma}_{\beta}\Big{|}$
$\displaystyle\leq$ $\displaystyle
c_{13}\left[\frac{1}{\varepsilon^{2}}\int_{B_{2\varepsilon}(0)}|\tilde{u}(x)|\,d\tilde{\gamma}_{\beta}+\frac{1}{\varepsilon}\,\int_{B_{2\varepsilon}(0)}\frac{|\tilde{u}(x)|}{|x|}\,d\tilde{\gamma}_{\beta}\right]$
$\displaystyle\leq$
$\displaystyle\frac{c_{14}}{\varepsilon}\,\int_{B_{2\varepsilon}(0)}\frac{|\tilde{u}(x)|}{|x|}\,d\tilde{\gamma}_{\beta},$
then, by the fact that $u\in L^{1}(\Omega,\frac{1}{|x|}d\gamma_{\beta})$, it
follows that
(4.7) $\lim_{\varepsilon\to
0^{+}}\displaystyle\int_{B_{2\varepsilon}(0)}\frac{|\tilde{u}(x)|}{|x|}\,d\tilde{\gamma}_{\beta}=0\quad{\rm
and}\quad\lim_{\varepsilon\to
0^{+}}\varepsilon\Big{|}\int_{\tilde{\Omega}}\tilde{u}\mathcal{L}_{\beta}^{*}\zeta_{\varepsilon}\,d\tilde{\gamma}_{\beta}\Big{|}=0.$
For $|a|\geq 1$, we have that
$k_{a}^{2}\leq
c_{15}\varepsilon^{|a|-1}\Big{|}\int_{\tilde{\Omega}}\tilde{u}\mathcal{L}_{\beta}^{*}\zeta_{\varepsilon}\,d\tilde{\gamma}_{\beta}\Big{|}\to
0\quad{\rm as}\quad\varepsilon\to 0,$
hence $k_{a}=0$ for every $|a|\geq 1$, since $\varepsilon>0$ is arbitrary in
(4.5); thus,
(4.8)
$L(\zeta)=\int_{\tilde{\Omega}}\left[\tilde{u}\mathcal{L}_{\beta}^{*}\zeta-\tilde{f}\zeta\right]\,d\tilde{\gamma}_{\beta}=k_{0}\zeta(0),\quad\
\forall\,\zeta\in C^{\infty}_{c}(\tilde{\Omega}).$
For any $\zeta\in C^{1.1}_{c}(\tilde{\Omega})$, by taking a sequence
$\zeta_{n}\in C^{\infty}_{c}(\tilde{\Omega})$ converging to $\zeta$, we obtain
that (4.8) holds for any $\zeta\in C^{1.1}_{c}(\tilde{\Omega})$.
Now fix $\xi\in C^{1.1}_{0}(\Omega)$ with compact support in
$\Omega\cup\\{(x^{\prime},0)\in\mathbb{R}^{N-1}\times\mathbb{R}:|x^{\prime}|<r_{0}\\}$.
Then $\xi/x_{N}\in C^{1.1}(\bar{\Omega})$ and we may take the $x_{N}$-even
extension of $\xi/x_{N}$ to $\tilde{\Omega}$, denoted by $\tilde{\xi}$; then
$\tilde{\xi}\in C^{1.1}_{c}(\tilde{\Omega})$ and, by the $x_{N}$-even extension,
$\tilde{\xi}(0)=\frac{\partial\xi}{\partial x_{N}}(0).$
So it follows from (4.8) that
(4.9)
$\int_{\Omega}\Big{(}u\mathcal{L}_{\beta}^{*}(\frac{\xi}{x_{N}})-\frac{f\xi}{x_{N}}\Big{)}\,d\gamma_{\beta}=k_{0}\frac{\partial\xi}{\partial
x_{N}}(0),\quad\ \forall\,\xi\in C^{1.1}_{0}(\Omega),$
so (4.2) holds with $k=k_{0}$.
$(ii)$ By the linearity of $\mathcal{L}_{\beta}$, we may assume that $f\geq
0$. Let $f_{n}=f\eta_{n}$, where $\eta_{n}(r)=1-\eta_{0}(nr)$ for $r\geq 0$
and $\eta_{0}$ satisfies (2.1), and let $v_{n}$ be the solution of (2.8) with
$f$ replaced by $f_{n}$. We see that $f_{n}$ is bounded, and for any $\xi\in
C^{1.1}_{0}(\Omega)$,
(4.10)
$\int_{\Omega}v_{n}\,\mathcal{L}_{\beta}^{*}(\frac{\xi}{x_{N}})\,d\gamma_{\beta}=\int_{\Omega}f_{n}\frac{\xi}{x_{N}}\,d\gamma_{\beta}.$
Then, taking $\xi=w_{2}$ from Lemma 2.6 in (4.10), we have that $v_{n}$ is uniformly
bounded in $L^{1}(\Omega,\,d\gamma_{\beta})$ and in
$L^{1}(\Omega,\,x_{N}^{-1}d\gamma_{\beta})$, that is,
$\|v_{n}\|_{L^{1}(\Omega,\,x_{N}^{-1}d\gamma_{\beta})}\leq\|\frac{\xi}{x_{N}}\|_{L^{\infty}(\Omega)}\|f_{n}\|_{L^{1}(\Omega,d\gamma_{\beta})}\leq\|\frac{\xi}{x_{N}}\|_{L^{\infty}(\Omega)}\|f\|_{L^{1}(\Omega,d\gamma_{\beta})}.$
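A minimal sketch of this bound: taking $\xi=w_{2}$ in (4.10) and using $\mathcal{L}_{\beta}^{*}(w_{2}/x_{N})=1/x_{N}$ from (2.14),
$\int_{\Omega}\frac{v_{n}}{x_{N}}\,d\gamma_{\beta}=\int_{\Omega}f_{n}\frac{w_{2}}{x_{N}}\,d\gamma_{\beta}\leq\Big{\|}\frac{w_{2}}{x_{N}}\Big{\|}_{L^{\infty}(\Omega)}\|f\|_{L^{1}(\Omega,\,d\gamma_{\beta})},$
where $w_{2}/x_{N}$ is bounded thanks to the estimate $w_{2}\leq tx_{N}$ from the proof of Lemma 2.6.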
Moreover, $\\{v_{n}\\}$ is increasing, and then there exists $v_{f}$ such that
$v_{n}\to v_{f}\quad{\rm a.e.\ in}\ \ \Omega\quad{\rm and\ in}\ \
L^{1}(\Omega,\,x_{N}^{-1}d\gamma_{\beta}).$
Then we have that
$\int_{\Omega}v_{f}\mathcal{L}_{\beta}^{*}(\frac{\xi}{x_{N}})\,d\gamma_{\beta}=\int_{\Omega}\frac{f\xi}{x_{N}}\,d\gamma_{\beta},\quad\forall\,\xi\in
C^{1.1}_{0}(\Omega).$
Since $f\in C^{\theta}_{loc}(\bar{\Omega}\setminus\\{0\\})$, it follows by
the standard regularity theory that $v_{f}\in C^{2}(\Omega)$.
We claim that $v_{f}$ is a classical solution of (4.1). From Corollary 2.8 in
[31] with $L^{*}=\mathcal{L}_{\beta}^{*}$, which is strictly elliptic in
$\Omega\setminus B_{r}(0)$, we have that for $q<\frac{N}{N-1}$,
(4.11) $\displaystyle\|v_{n}\lambda_{\beta}\|_{W^{1,q}(\Omega_{2r})}$
$\displaystyle\leq$ $\displaystyle
c_{16}\|f\lambda_{\beta}\|_{L^{1}(\Omega\setminus
B_{r}(0))}+c_{16}\|v_{n}\lambda_{\beta}\|_{L^{1}(\Omega\setminus B_{r}(0))}$
$\displaystyle\leq$ $\displaystyle
c_{17}\|f\|_{L^{1}(\Omega,\,d\gamma_{\beta})},$
where $\Omega_{2r}=\\{x\in\Omega\setminus B_{2r}(0):\,\rho(x)>2r\\}.$ We see
that
$\displaystyle-\Delta v_{n}=-\frac{\beta}{|x|^{2}}v_{n}+f_{n}.$
For any compact set $K$ in $\Omega$, it is standard to improve the regularity
of $v_{n}$:
$\|v_{n}\|_{C^{2,\lambda}(K)}\leq
c_{18}[\|f\|_{L^{1}(\Omega,\,d\gamma_{\beta})}+\|f\|_{C^{\lambda}(K)}]$
where $c_{18}>0$ is independent of $n$. Then $v_{f}$ is a classical solution
of (4.1) verifying the identity
(4.12)
$\int_{\Omega}v_{f}\,\mathcal{L}_{\beta}^{*}(\frac{\xi}{x_{N}})\,d\gamma_{\beta}=\int_{\Omega}\frac{f\xi}{x_{N}}\,d\gamma_{\beta},\quad\forall\,\xi\in
C^{1.1}_{0}(\Omega).$
Setting $u_{k,f}=k\Lambda^{\Omega}_{\beta}+v_{f}$, we
conclude from (3.5) and (4.12) that $u_{k,f}$ is a solution of (4.1)
verifying the identity (4.2).
Finally, we prove the uniqueness. In fact, let $w_{k,f}$ be a solution of
(4.1) verifying the identity (4.2); then
$\int_{\Omega}(u_{k,f}-w_{k,f})\mathcal{L}_{\beta}^{*}(\frac{\xi}{x_{N}})\,d\gamma_{\beta}=0.$
For any Borel subset $O$ of $\Omega$, Corollary 2.7 implies that problem
(4.13)
$\left\\{\begin{array}[]{lll}\displaystyle\mathcal{L}_{\beta}^{*}(\frac{u}{x_{N}})=\zeta_{n}&{\rm
in}\;\;\Omega,\\\\[2.84526pt] \phantom{\mathcal{L}_{\mu}^{*}}\displaystyle
u=0&{\rm on}\;\;\partial{\Omega},\end{array}\right.$
has a solution $\eta_{O,n}\in C^{2}(\Omega)\cap C^{0.1}_{0}(\Omega)$,
where $\zeta_{n}:\bar{\Omega}\to[0,1]$ is a $C^{1}(\bar{\Omega})$ function
such that $\zeta_{n}\to\chi_{O}$ a.e. in $\Omega$ as
$n\to\infty.$ Therefore, passing to the limit as $n\to\infty$, we have that
$\displaystyle\int_{O}(u_{k,f}-w_{k,f})d\gamma_{\beta}=0,$
which implies that $u_{k,f}=w_{k,f}$ a.e. in $\Omega$ and then the uniqueness
holds true. $\Box$
###### Remark 4.2.
Let $u_{f}$ be the solution of (4.1) verifying the identity (4.2) with $k=0$;
then $u_{f}$ satisfies the isolated singular behavior (1.14). In fact, let
$f\geq 0$; then $u_{f}\geq 0$ in $\Omega$. If (1.14) fails, then, by the
positivity of $u_{f}$, $\liminf_{t\to 0^{+}}\inf_{z\in
S^{N-1}_{+}}\frac{u_{f}(tz)}{\Lambda_{\beta}(tz)}=l_{0}>0$ and
$\tilde{u}_{f}:=u_{f}-l_{0}\Lambda_{\beta}^{\Omega}$ is a solution of (4.1).
By Lemma 2.2, we have that $\tilde{u}_{f}\geq 0$ in $\Omega$. By the
approximating procedure, $\tilde{u}_{f}$ verifies the identity (4.2) with
$k=0$, which is impossible, since
$u_{f}-\tilde{u}_{f}=l_{0}\Lambda_{\beta}^{\Omega}$ satisfies
$\int_{\Omega}(u_{f}-\tilde{u}_{f})\mathcal{L}_{\beta}^{*}(\frac{\xi}{x_{N}})\,d\gamma_{\beta}=l_{0}c_{\beta}\frac{\partial\xi}{\partial
x_{N}}(0),\quad\forall\,\xi\in C^{1.1}_{0}(\Omega).$
### 4.2. Nonzero Dirichlet boundary
Recall that $P_{\Omega}$ is the Poisson kernel of $-\Delta$ in $\Omega$ and
$\mathbb{P}_{\Omega}[g](x)=\displaystyle\int_{\partial\Omega}P_{\Omega}(x,y)g(y)d\omega(y).$
It is known that if $g$ is continuous, $\mathbb{P}_{\Omega}[g]$ is a solution
of
(4.14) $\left\\{\begin{array}[]{lll}\displaystyle-\Delta u=0\quad{\rm
in}\;\;\Omega,\\\\[2.84526pt] \phantom{-\Delta}\displaystyle u=g\quad{\rm
on}\;\;\partial{\Omega}.\end{array}\right.$
Multiplying (4.14) by $\frac{\xi\lambda_{\beta}}{x_{N}}$, where $\xi\in C^{1.1}_{0}(\Omega)$,
and integrating over $\Omega$, we have that
$\displaystyle 0$ $\displaystyle=$
$\displaystyle\int_{\Omega}(-\Delta\mathbb{P}_{\Omega}[g])\frac{\xi\lambda_{\beta}}{x_{N}}dx$
$\displaystyle=$
$\displaystyle\int_{\partial\Omega}\mathbb{P}_{\Omega}[g]\nabla(\frac{\xi\lambda_{\beta}}{x_{N}})\cdot\nu
d\omega+\int_{\Omega}\mathbb{P}_{\Omega}[g]\Big{(}-\Delta(\frac{\xi\lambda_{\beta}}{x_{N}})\Big{)}dx$
$\displaystyle=$
$\displaystyle\int_{\partial\Omega}g\frac{\partial\xi}{\partial\nu}d\omega_{\beta}+\int_{\Omega}\mathbb{P}_{\Omega}[g]\mathcal{L}_{\beta}^{*}(\frac{\xi}{x_{N}})\,d\gamma_{\beta}-\beta\int_{\Omega}\frac{\mathbb{P}_{\Omega}[g]}{|x|^{2}}\frac{\xi}{x_{N}}d\gamma_{\beta},$
that is, for any $\xi\in C^{1.1}_{0}(\Omega)$, there holds
(4.15)
$\int_{\Omega}\mathbb{P}_{\Omega}[g]\mathcal{L}_{\beta}^{*}(\frac{\xi}{x_{N}})\,d\gamma_{\beta}=\beta\int_{\Omega}\frac{\mathbb{P}_{\Omega}[g]}{|x|^{2}}\frac{\xi}{x_{N}}d\gamma_{\beta}-\int_{\partial\Omega}g\frac{\partial\xi}{\partial\nu}d\omega_{\beta}.$
###### Lemma 4.3.
Let $\beta\in[\beta_{0},\,+\infty)\setminus\\{0\\}$,
$d\tilde{\omega}_{\beta}=(1+|x|^{\tau_{+}(\beta)})d\omega(x)$ and $g\geq 0$.
We have that
$(i)$ If $g\in C(\partial\Omega\setminus\\{0\\})\cap
L^{1}(\partial\Omega,\,d\tilde{\omega}_{\beta})$, then
$\frac{1}{|\cdot\;|^{2}}\mathbb{P}_{\Omega}[g]\in
L^{1}(\Omega,\,d\gamma_{\beta}).$
$(ii)$ If $g\in C(\partial\Omega\setminus\\{0\\})$ and
(4.16) $\lim_{r\to 0^{+}}\int_{\partial\Omega\setminus
B_{r}(0)}g\,d\tilde{\omega}_{\beta}=+\infty,$
then
$\lim_{r\to 0^{+}}\int_{\Omega\setminus
B_{r}(0)}\frac{1}{|x|^{2}}\mathbb{P}_{\Omega}[g](x)d\gamma_{\beta}=+\infty.$
Proof. From Proposition 2.1 in [2], we have that
(4.17) $c_{19}\rho(x)|x-y|^{-N}\leq P_{\Omega}(x,y)\leq
c_{20}\rho(x)|x-y|^{-N},\quad x\in\Omega,\ y\in\partial\Omega,$
where $\rho(x)={\rm dist}(x,\partial\Omega)$. Since $g$ is continuous in
$\partial\Omega\setminus\\{0\\}$ and $\Omega$ is flat near the origin, it
suffices to consider the integrability of
$\frac{1}{|\cdot\;|^{2}}\mathbb{P}_{\Omega}[g]$ near the origin. Fix
$r=r_{0}/2$, let
$B^{\prime}_{r}(0)=\\{x^{\prime}\in\mathbb{R}^{N-1}:|x^{\prime}|<r\\}$ and
$e_{(y^{\prime},0)}=(\frac{y^{\prime}}{|y^{\prime}|},\,0)$ for
$y^{\prime}\not=0$, then
$\displaystyle\int_{B_{r}^{+}(0)}\frac{1}{|x|^{2}}\mathbb{P}_{\Omega}[g]d\gamma_{\beta}$
$\displaystyle\geq$ $\displaystyle
c_{21}\int_{B_{r}^{+}(0)}\int_{B^{\prime}_{r}(0)\setminus\\{0\\}}g(y^{\prime})|x-(y^{\prime},0)|^{-N}\frac{x_{N}^{2}}{|x|^{2}}|x|^{\tau_{+}(\beta)}\,dy^{\prime}dx$
$\displaystyle=$ $\displaystyle
c_{22}\int_{B_{r}^{\prime}(0)\setminus\\{0\\}}g(y^{\prime})|y^{\prime}|^{\tau_{+}(\beta)}\int_{B_{r/|y^{\prime}|}^{+}(0)}|z-e_{(y^{\prime},0)}|^{-N}\frac{z_{N}^{2}}{|z|^{2}}|z|^{\tau_{+}(\beta)}dzdy^{\prime}$
and
$\displaystyle\int_{B_{r}^{+}(0)}\frac{1}{|x|^{2}}\mathbb{P}_{\Omega}[g]d\gamma_{\beta}$
$\displaystyle\leq$ $\displaystyle
c_{23}\int_{B_{r}^{+}(0)}\int_{B^{\prime}_{r}(0)\setminus\\{0\\}}g(y^{\prime})|x-(y^{\prime},0)|^{-N}\frac{x_{N}^{2}}{|x|^{2}}|x|^{\tau_{+}(\beta)}\,dy^{\prime}dx$
$\displaystyle=$ $\displaystyle
c_{24}\int_{B_{r}^{\prime}(0)\setminus\\{0\\}}g(y^{\prime})|y^{\prime}|^{\tau_{+}(\beta)}\int_{B_{r/|y^{\prime}|}^{+}(0)}|z-e_{(y^{\prime},0)}|^{-N}\frac{z_{N}^{2}}{|z|^{2}}|z|^{\tau_{+}(\beta)}dzdy^{\prime}.$
Now we estimate
$\int_{B_{r/|y^{\prime}|}^{+}(0)}I(z)dz:=\int_{B_{r/|y^{\prime}|}^{+}(0)}|z-e_{(y^{\prime},0)}|^{-N}\frac{z_{N}^{2}}{|z|^{2}}|z|^{\tau_{+}(\beta)}dz,$
we have
$\displaystyle 0<\int_{B_{\frac{1}{2}}^{+}(0)}I(z)\,dz\leq
2^{N}\int_{B_{\frac{1}{2}}^{+}(0)}|z|^{\tau_{+}(\beta)}dz,$ $\displaystyle
0<\int_{B_{\frac{1}{2}}^{+}(e_{(y^{\prime},0)})}I(z)\,dz$ $\displaystyle\leq$
$\displaystyle
2^{|\tau_{+}(\beta)|+2}\int_{B_{\frac{1}{2}}^{+}(e_{(y^{\prime},0)})}|z-e_{(y^{\prime},0)}|^{-N}z_{N}^{2}dz$
$\displaystyle\leq$ $\displaystyle
2^{|\tau_{+}(\beta)|+2}\int_{B_{\frac{1}{2}}^{+}(0)}|z|^{2-N}dz,$
and
$\displaystyle\int_{B_{r/|y^{\prime}|}^{+}(0)\setminus\left(B_{\frac{1}{2}}^{+}(0)\cup
B_{\frac{1}{2}}^{+}(e_{(y^{\prime},0)})\right)}I(z)dz$ $\displaystyle\leq$
$\displaystyle c_{25}\int_{B_{r/|y^{\prime}|}^{+}(0)\setminus
B_{\frac{1}{2}}^{+}(0)}|z|^{-N+\tau_{+}(\beta)}\,dz$ $\displaystyle\leq$
$\displaystyle\left\\{\begin{array}[]{lll}\displaystyle
c_{26}\int_{\mathbb{R}^{N}\setminus
B_{\frac{1}{2}}(0)}|z|^{-N+\tau_{+}(\beta)}\,dz&{\rm
if}\quad\beta<0,\\\\[5.69054pt] \phantom{}\displaystyle
c_{26}|y^{\prime}|^{-\tau_{+}(\beta)}&{\rm if}\quad\beta>0\end{array}\right.$
$\displaystyle\leq$ $\displaystyle c_{27}(1+|y^{\prime}|^{-\tau_{+}(\beta)})$
and
$\displaystyle\int_{B_{r/|y^{\prime}|}^{+}(0)\setminus\left(B_{\frac{1}{2}}^{+}(0)\cup
B_{\frac{1}{2}}^{+}(e_{(y^{\prime},0)})\right)}I(z)\,dz$ $\displaystyle\geq$
$\displaystyle c_{28}\int_{B_{r/|y^{\prime}|}^{+}(0)\setminus
B_{\frac{1}{2}}^{+}(0)}|z|^{-N+\tau_{+}(\beta)}\,dz$ $\displaystyle\geq$
$\displaystyle c_{29}(1+|y^{\prime}|^{-\tau_{+}(\beta)}).$
Thus, we have that
(4.19)
$c_{30}\int_{B_{r}^{\prime}(0)\setminus\\{0\\}}g(y^{\prime})d\tilde{\omega}_{\beta}(y^{\prime})\leq\int_{B_{r}^{+}(0)}\frac{1}{|x|^{2}}\mathbb{P}_{\Omega}[g]d\gamma_{\beta}\leq
c_{31}\int_{B_{r}^{\prime}(0)\setminus\\{0\\}}g(y^{\prime})d\tilde{\omega}_{\beta}(y^{\prime}),$
which, together with the fact that $\mathbb{P}_{\Omega}[g]$ is nonnegative and
bounded in $\Omega\setminus B_{r}^{+}(0)$, proves Lemma 4.3. $\Box$
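The rescaled integrals in the proof come from the substitution $x=|y^{\prime}|z$; a minimal sketch:
$|x-(y^{\prime},0)|^{-N}\frac{x_{N}^{2}}{|x|^{2}}|x|^{\tau_{+}(\beta)}\,dx=|y^{\prime}|^{\tau_{+}(\beta)}\,|z-e_{(y^{\prime},0)}|^{-N}\frac{z_{N}^{2}}{|z|^{2}}|z|^{\tau_{+}(\beta)}\,dz,$
since $|x-(y^{\prime},0)|=|y^{\prime}|\,|z-e_{(y^{\prime},0)}|$, $dx=|y^{\prime}|^{N}dz$, and the ratio $\frac{x_{N}^{2}}{|x|^{2}}$ is scale invariant; the domain $B_{r}^{+}(0)$ becomes $B_{r/|y^{\prime}|}^{+}(0)$.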
We remark that Lemma 4.3 provides estimates for transferring the boundary data
into the nonhomogeneous term. Now we are ready to prove Theorem 1.2 part $(i)$,
where we distinguish the cases $\beta\in[\beta_{0},\,0]$ and $\beta>0$.
Proof of Theorem 1.2. Part $(i)$. The existence for $g\in
L^{1}(\partial\Omega,\,d\tilde{\omega}_{\beta})$. Let
$\bar{f}=f-\frac{\beta}{|\cdot\;|^{2}}\mathbb{P}_{\Omega}[g].$ Then it follows
from Lemma 4.3 part $(i)$ that $\bar{f}\in L^{1}(\Omega,\,d\gamma_{\beta})$,
and, applying Theorem 4.1 part $(ii)$, problem (4.1) with $f$ replaced by
$\bar{f}$ admits a unique solution $u_{f}$ verifying (4.2) for the given
$k\in\mathbb{R}$. Denote $u_{f,g}:=u_{f}+\mathbb{P}_{\Omega}[g]$; then
$\mathcal{L}_{\beta}u_{f,g}=f\quad{\rm in}\ \ \Omega\quad{\rm and}\quad u_{f,g}=g\quad{\rm on}\ \
\partial\Omega\setminus\\{0\\}.$
Together with (4.2) and (4.15), we have that $u_{f,g}$ verifies (1.11) and it
is the unique solution of problem (1.2) verifying (1.11) for that $k$.
Case of $\beta\in[\beta_{0},\,0]$. Then $d\tilde{\omega}_{\beta}$ is
equivalent to $d\omega_{\beta}$, so
$L^{1}(\partial\Omega,\,d\tilde{\omega}_{\beta})=L^{1}(\partial\Omega,\,d\omega_{\beta})$
and we are done.
Case of $\beta>0$. We note that
$L^{1}(\partial\Omega,\,d\tilde{\omega}_{\beta})\subsetneqq
L^{1}(\partial\Omega,\,d\omega_{\beta}).$
So for $g\in L^{1}(\partial\Omega,\,d\omega_{\beta})\setminus
L^{1}(\partial\Omega,\,d\tilde{\omega}_{\beta})$, we may assume $g\geq 0$ by
linearity of $\mathcal{L}_{\beta}$. Let
(4.20) $\eta_{n}(s)=1-\eta_{0}(ns)\quad{\rm and}\quad
g_{n}(x)=g(x)\eta_{n}(|x|),$
where $\eta_{0}$ is defined in (2.1). Then $\\{g_{n}\\}_{n}\subset
L^{1}(\partial\Omega,\,d\tilde{\omega}_{\beta})$ is an increasing sequence of
functions. For simplicity, we assume that $f=0$. Then the problem
(4.21) $\left\\{\begin{array}[]{lll}\mathcal{L}_{\beta}u=0&{\rm
in}\;\;\Omega,\\\\[2.84526pt] \phantom{\mathcal{L}_{\beta}}u=g_{n}&{\rm
on}\;\;\partial\Omega\setminus\\{0\\}\end{array}\right.$
has a unique solution $u_{n}$ verifying the identity
(4.22)
$\int_{\Omega}u_{n}\mathcal{L}_{\beta}^{*}(\frac{\xi}{x_{N}})\,d\gamma_{\beta}=-\int_{\partial\Omega}g_{n}\frac{\partial\xi}{\partial\nu}d\omega_{\beta},\quad\forall\,\xi\in
C^{1.1}_{0}(\Omega).$
Since $0\leq g_{n}\leq g$ and $g\in L^{1}(\partial\Omega,\,d\omega_{\beta})$,
we may enlarge the test function space to include $w_{1}$ and $w_{2}$, the
solutions of (2.13) and (2.14) respectively. Taking $\xi=w_{1}$ and then
$\xi=w_{2}$, we derive that
$\|u_{n}\|_{L^{1}(\Omega)}\leq
c_{32}\|g_{n}\|_{L^{1}(\partial\Omega,\,d\omega_{\beta})}\leq
c_{33}\|g\|_{L^{1}(\partial\Omega,\,d\omega_{\beta})}$
and
$\|u_{n}\|_{L^{1}(\Omega,\,x_{N}^{-1}d\gamma_{\beta})}\leq
c_{34}\|g\|_{L^{1}(\partial\Omega,\,d\omega_{\beta})}.$
We notice that $u_{n}\geq 0$ and the mapping $n\mapsto u_{n}$ is increasing;
then, by the monotone convergence theorem, there exists $u$ such that
$u_{n}$ converges to $u$ in $L^{1}(\Omega,\frac{1}{x_{N}}d\gamma_{\beta})$.
Since $\xi\in C_{0}^{1.1}(\Omega)$, we have that
$|\mathcal{L}_{\beta}^{*}(\xi/x_{N})|\leq cx_{N}^{-1}.$ Passing to the limit in
(4.22), we find that $u$ verifies
(4.23)
$\int_{\Omega}u\mathcal{L}_{\beta}^{*}(\xi/x_{N})\,d\gamma_{\beta}=-\int_{\partial\Omega}g\frac{\partial\xi}{\partial\nu}d\omega_{\beta},\quad\forall\,\xi\in
C^{1.1}_{0}(\Omega).$
From standard interior regularity, we have that $u$ is a classical solution
of
$\left\\{\begin{array}[]{lll}\mathcal{L}_{\beta}u=0\quad{\rm
in}\;\;\Omega,\\\\[2.84526pt] \phantom{\mathcal{L}_{\beta}}u=g\quad{\rm
on}\;\;\partial\Omega\setminus\\{0\\},\end{array}\right.$
which ends the proof. $\Box$
## 5\. Nonexistence
In this section, we establish an approximation of the fundamental solution
$\Lambda_{\beta}^{\Omega}$.
###### Lemma 5.1.
$(i)$ Let $\\{\delta_{n}\\}_{n}$ be a sequence of nonnegative
$L^{\infty}$-functions defined in $\Omega$ such that ${\rm
supp}\,\delta_{n}\subset B_{r_{n}}(0)\cap\Omega,$ where $r_{n}\to 0$ as
$n\to+\infty$ and
$\int_{\Omega}\delta_{n}\xi dx\to\frac{\partial\xi(0)}{\partial
x_{N}}\quad{\rm as}\quad n\to+\infty,\quad\forall\xi\in C_{0}^{1}(\Omega).$
For any $n$, let $w_{n}$ be the unique solution of the problem in the
$d\gamma_{\beta}$-distributional sense
(5.1)
$\left\\{\begin{array}[]{lll}\displaystyle\mathcal{L}_{\beta}u=\delta_{n}/\lambda_{\beta}&{\rm
in}\;\;{\Omega}\setminus\\{0\\},\\\\[5.69054pt]
\phantom{L_{\beta}}\displaystyle u=0&{\rm
on}\;\;\partial{\Omega},\\\\[5.69054pt] \phantom{}\displaystyle\lim_{r\to
0^{+}}\sup_{x\in\partial_{+}B_{r}(0)}\frac{|u(x)|}{\Lambda_{\beta}(x)}=0.\end{array}\right.$
Then
$\lim_{n\to+\infty}w_{n}(x)=\frac{1}{c_{\beta}}\Lambda_{\beta}^{\Omega}(x),\quad\forall\,x\in\Omega\setminus\\{0\\}$
and for any compact set $K\subset\Omega\setminus\\{0\\}$,
(5.2) $w_{n}\to\frac{1}{c_{\beta}}\Lambda_{\beta}^{\Omega}\quad{\rm as}\quad
n\to+\infty\ \ {\rm in}\ \ C^{2}(K).$
$(ii)$ Let $\\{\sigma_{n}\\}_{n}$ be a sequence of nonnegative $L^{\infty}$
functions defined on $\partial\Omega$ such that ${\rm
supp}\,\sigma_{n}\subset\partial\Omega\cap B_{r_{n}}(0),$ where $r_{n}\to 0$
as $n\to+\infty$ and
$\int_{\partial\Omega}\sigma_{n}\zeta d\omega(x)\to\zeta(0)\quad{\rm as}\quad
n\to+\infty,\quad\forall\zeta\in C^{1}(\partial\Omega).$
For any $n$, let $v_{n}$ be the unique solution of the problem
(5.3) $\left\\{\begin{array}[]{lll}\displaystyle\mathcal{L}_{\beta}u=0&{\rm
in}\;\;{\Omega}\setminus\\{0\\},\\\\[5.69054pt] \phantom{}\displaystyle
u=\frac{\sigma_{n}}{|\cdot|^{\tau_{+}(\beta)}}&{\rm
on}\;\;\partial{\Omega}\setminus\\{0\\}\end{array}\right.$
subject to
$\int_{\Omega}v_{n}\mathcal{L}_{\beta}^{*}(\xi/x_{N})\,d\gamma_{\beta}=-\int_{\partial\Omega}\sigma_{n}\frac{\partial\xi}{\partial\nu}d\omega,\quad\forall\,\xi\in
C^{1.1}_{0}(\Omega).$
Then
$\lim_{n\to+\infty}v_{n}(x)=\frac{1}{c_{\beta}}\Lambda_{\beta}^{\Omega}(x),\quad\forall\,x\in\Omega\setminus\\{0\\}$
and for any compact set $K\subset\Omega\setminus\\{0\\}$, (5.2) holds true.
Proof. From Lemma 2.3, problems (5.1) and (5.3) have unique solutions
$w_{n},v_{n}\geq 0$, respectively, satisfying
(5.4)
$\int_{\Omega}w_{n}\mathcal{L}_{\beta}^{*}(\frac{\xi}{x_{N}})\,d\gamma_{\beta}=\int_{\Omega}\delta_{n}\xi\,dx,\quad\forall\,\xi\in
C^{1,1}_{0}(\Omega)$
and
(5.5)
$\int_{\Omega}v_{n}\mathcal{L}_{\beta}^{*}(\frac{\xi}{x_{N}})\,d\gamma_{\beta}=-\int_{\partial\Omega}\sigma_{n}\frac{\partial\xi}{\partial\nu}d\omega,\quad\forall\,\xi\in
C^{1,1}_{0}(\Omega).$
By taking $\xi=\xi_{0}$, the solution $w_{1}$ of (2.13), we obtain that
$\|w_{n}\|_{L^{1}(\Omega,\,d\gamma_{\beta})}\leq\|\xi_{0}\|_{L^{\infty}(\Omega)}\|\delta_{n}\|_{L^{1}(\Omega)}=\|\xi_{0}\|_{L^{\infty}(\Omega)}.$
For any $r>0$, taking $\xi$ with support in $\Omega\setminus B_{r}(0)$, so
that $\xi\in C^{1.1}_{c}(\overline{\Omega\setminus B_{r}(0)})$, we have
$\int_{\Omega\setminus
B_{r}(0)}w_{n}\mathcal{L}_{\beta}^{*}(\xi)\,d\gamma_{\beta}=0.$
Taking $\xi$ as the solution of (2.17) with $f(x)=\frac{1}{|x|}$, we have that
(5.6)
$\int_{\Omega}w_{n}|x|^{-1}\,d\gamma_{\beta}=\int_{\Omega}\delta_{n}\xi\,dx\leq\|\xi\|_{L^{\infty}(\Omega)}$
and
(5.7)
$\int_{\Omega}v_{n}|x|^{-1}\,d\gamma_{\beta}=-\int_{\partial\Omega}\sigma_{n}\frac{\partial\xi}{\partial\nu}d\omega\leq\|\nabla\xi\|_{L^{\infty}(\Omega)}.$
So $w_{n},v_{n}$ are uniformly bounded in
$L^{1}(\Omega,\,|x|^{-1}d\gamma_{\beta})$.
From Corollary 2.8 in [31] with $L^{*}=\mathcal{L}_{\beta}^{*}$, which is
strictly elliptic in $\Omega\setminus B_{r}(0)$, we have that for
$q<\frac{N}{N-1}$,
$\displaystyle\|w_{n}\lambda_{\beta}\|_{W^{1,q}(\Omega_{2r})}\leq
c_{35}\|\delta_{n}\|_{L^{1}(\Omega\setminus
B_{r}(0))}+c_{36}\|w_{n}\|_{L^{1}(\Omega\setminus
B_{r}(0),\,d\gamma_{\beta})}\leq c_{37}$
and
$\displaystyle\|v_{n}\lambda_{\beta}\|_{W^{1,q}(\Omega_{2r})}$
$\displaystyle\leq$ $\displaystyle
c_{38}\|\sigma_{n}\|_{L^{1}(\partial\Omega\setminus
B_{r}(0))}+c_{39}\|v_{n}\|_{L^{1}(\Omega\setminus
B_{r}(0),\,d\gamma_{\beta})}\leq c_{40},$
where $\Omega_{2r}=\\{x\in\Omega\setminus B_{2r}(0):\,\rho(x)>2r\\}.$ By the
compact embedding $W^{1,q}(\Omega_{2r})\hookrightarrow L^{1}(\Omega_{2r}),$ up
to a subsequence, there exist $w_{\infty},\,v_{\infty}\in
W^{1,q}_{loc}(\Omega)\cap L^{1}(\Omega,\,d\gamma_{\beta})$ such that
$w_{n}\to w_{\infty}\quad{\rm as}\quad n\to+\infty\quad{\rm a.e.\ \ in}\ \
\Omega\ \ {\rm and\ in}\quad L^{1}(\Omega,\,d\gamma_{\beta}),$
and similarly $v_{n}\to v_{\infty}$,
and it follows by (5.4) and (5.5) that for $\xi\in C^{1.1}_{0}(\Omega)$,
$\int_{\Omega}w_{\infty}\mathcal{L}_{\beta}^{*}(\frac{\xi}{x_{N}})\,d\gamma_{\beta}=\int_{\Omega}v_{\infty}\mathcal{L}_{\beta}^{*}(\frac{\xi}{x_{N}})\,d\gamma_{\beta}=\frac{\partial\xi}{\partial
x_{N}}(0).$
Furthermore,
$\int_{\Omega}(w_{\infty}-\frac{1}{c_{\beta}}\Lambda_{\beta}^{\Omega})\mathcal{L}_{\beta}^{*}(\frac{\xi}{x_{N}})\,d\gamma_{\beta}=0.$
From Kato’s inequality, we deduce that
$w_{\infty}=v_{\infty}=\frac{1}{c_{\beta}}\Lambda_{\beta}^{\Omega}\quad{\rm a.e.\ in}\;\;\Omega.$
Proof of (5.2). For any $x_{0}\in\Omega\setminus\\{0\\}$, let
$r_{0}=\frac{1}{4}\min\\{|x_{0}|,\,\rho(x_{0})\\}$ and $\mu_{n}=w_{n}\eta,$ where
$\eta(x)=\eta_{0}(\frac{|x-x_{0}|}{r_{0}})$. There exists $n_{0}>0$ such that
for $n\geq n_{0}$, ${\rm supp}\mu_{n}\cap B_{r_{n}}(0)=\emptyset.$ Then
$\displaystyle-\Delta\mu_{n}(x)$ $\displaystyle=$ $\displaystyle-\Delta
w_{n}(x)\eta(x)-2\nabla w_{n}\cdot\nabla\eta-w_{n}\Delta\eta$ $\displaystyle=$
$\displaystyle-\frac{\beta}{|x|^{2}}w_{n}\eta-2\nabla w_{n}\cdot\nabla\eta-w_{n}\Delta\eta,$
since $\mathcal{L}_{\beta}w_{n}=0$ on ${\rm supp}\,\eta$ for $n\geq n_{0}$, and
$\nabla\eta$ and $\Delta\eta$ are smooth.
We observe that $w_{n}\in W^{1,q}(B_{2r_{0}}(x_{0}))$ and the right-hand side
above belongs to $L^{q}(B_{2r_{0}}(x_{0}))$; then we
have that
$\|\mu_{n}\|_{W^{2,q}(B_{r_{0}}(x_{0}))}\leq
c\|w_{n}\|_{L^{1}(\Omega,\,d\gamma_{\beta})},$
where $c>0$ is independent of $n$. Thus the right-hand side belongs to
$W^{1,q}(B_{r_{0}}(x_{0}))$; repeating the above process $N_{0}$
times, with $N_{0}$ large enough, we deduce that
$\|w_{n}\|_{C^{2,\gamma}(B_{\frac{r_{0}}{2^{N_{0}}}}(x_{0}))}\leq
c\|w_{n}\|_{L^{1}(\Omega,\,d\gamma_{\beta})},$
where $\gamma\in(0,1)$ and $c>0$ is independent of $n$. As a conclusion, (5.2)
follows by the Arzelà–Ascoli theorem and the Heine–Borel theorem. The above
process also holds for $v_{n}$. This ends the proof. $\Box$
Proof of Theorem 1.2. Part $(ii)$. From (1.12), one of the following two cases
holds true:
${\rm case}\ 1:\lim_{r\to 0^{+}}\int_{\Omega\setminus
B_{r}(0)}f\,d\gamma_{\beta}=+\infty,\;\;{\rm or\ case}\ 2:\;\;\lim_{r\to
0^{+}}\int_{\partial\Omega\setminus B_{r}(0)}g\,d\omega_{\beta}=+\infty.$
Case 1. We argue by contradiction. Assume that problem (1.2) has a nonnegative
solution $u_{f}$. Let $\\{r_{n}\\}_{n}$ be a sequence of strictly
decreasing positive numbers converging to $0$. From the fact $f\in
C_{loc}^{\gamma}(\overline{\Omega}\setminus\\{0\\})$, for any $r_{n}$ fixed,
we have that
$\lim_{r\to 0^{+}}\int_{(B_{r_{n}}(0)\setminus
B_{r}(0))\cap\Omega}f(x)d\gamma_{\beta}=+\infty,$
then there exists $R_{n}\in(0,r_{n})$ such that
$\int_{(B_{r_{n}}(0)\setminus B_{R_{n}}(0))\cap\Omega}fd\gamma_{\beta}=n.$
Let $\delta_{n}=\frac{1}{n}\lambda_{\beta}f\chi_{B_{r_{n}}(0)\setminus
B_{R_{n}}(0)}$; then the problem
$\left\\{\begin{array}[]{lll}\displaystyle\mathcal{L}_{\beta}u=\delta_{n}/\lambda_{\beta}\qquad{\rm
in}\quad{\Omega}\setminus\\{0\\},\\\\[2.84526pt]
\phantom{L_{\beta}--}\displaystyle u=0\qquad{\rm
on}\quad\partial{\Omega},\\\\[2.84526pt] \phantom{}\displaystyle\lim_{r\to
0^{+}}\sup_{x\in\partial_{+}B_{r}(0)}\frac{|u(x)|}{\Lambda_{\beta}(x)}=0\end{array}\right.$
has a unique positive solution $w_{n}$, satisfying (in the distributional
sense of (5.4))
$\int_{\Omega}w_{n}\mathcal{L}_{\beta}^{*}(\frac{\xi}{x_{N}})\,d\gamma_{\beta}=\int_{\Omega}\delta_{n}\xi\,dx,\quad\forall\,\xi\in C^{1.1}_{0}(\Omega).$
For any $\xi\in C^{1.1}_{0}(\Omega)$, we have that
$\int_{\Omega}w_{n}\mathcal{L}_{\beta}^{*}(\frac{\xi}{x_{N}})\,d\gamma_{\beta}=\int_{\Omega}\delta_{n}\xi\,dx\to\frac{\partial\xi}{\partial
x_{N}}(0)\quad{\rm as}\quad n\to+\infty.$
Therefore, by Lemma 5.1, for any compact set
$\mathcal{K}\subset\Omega\setminus\\{0\\}$,
$\|w_{n}-\frac{1}{c_{\beta}}\Lambda_{\beta}^{\Omega}\|_{C^{1}(\mathcal{K})}\to 0\quad{\rm
as}\quad{n\to+\infty}.$
We fix a point $x_{0}\in\Omega$ and let
$r_{0}=\frac{1}{2}\min\\{|x_{0}|,\,\rho(x_{0})\\}$ and
$\mathcal{K}=\overline{B_{r_{0}}(x_{0})}$, then there exists $n_{0}>0$ such
that for $n\geq n_{0}$,
(5.8) $w_{n}\geq\frac{1}{2c_{\beta}}\Lambda_{\beta}^{\Omega}\quad{\rm in}\quad\mathcal{K}.$
Let $u_{n}$ be the solution (in the same sense) of
$\left\\{\begin{array}[]{lll}\displaystyle\mathcal{L}_{\beta}u=n\delta_{n}/\lambda_{\beta}\quad{\rm
in}\;\;{\Omega}\setminus\\{0\\},\\\\[2.84526pt]
\phantom{L_{\beta}--}\displaystyle u=0\quad\ \ {\rm
on}\;\;\partial{\Omega},\\\\[2.84526pt] \phantom{}\displaystyle\lim_{r\to
0^{+}}\sup_{x\in\partial_{+}B_{r}(0)}\frac{|u(x)|}{\Lambda_{\beta}(x)}=0,\end{array}\right.$
then we have that $u_{n}\geq nw_{n}\;\;{\rm in}\;\;\Omega.$ Together with
(5.8), we derive that
$u_{n}\geq\frac{n}{2c_{\beta}}\Lambda_{\beta}^{\Omega}\quad{\rm in}\;\;\mathcal{K}.$
Then, by the comparison principle, we have that $u_{f}(x_{0})\geq
u_{n}(x_{0})\to+\infty$ as $n\to+\infty,$ which contradicts the
fact that $u_{f}$ is a classical solution of (1.2).
Case 2. Similarly for any $n\in\mathbb{N}$, we can take $r_{n}>R_{n}>0$ such
that $r_{n}\to 0$ as $n\to+\infty$ and
$\int_{(B_{r_{n}}(0)\setminus
B_{R_{n}}(0))\cap\partial\Omega}gd\omega_{\beta}=n.$
Let $\sigma_{n}=\frac{1}{n}g\chi_{B_{r_{n}}(0)\setminus B_{R_{n}}(0)}$ and let
$w_{n}$ be the solution of
$\left\\{\begin{array}[]{lll}\displaystyle\mathcal{L}_{\beta}u=0\quad{\rm
in}\;\;{\Omega}\setminus\\{0\\},\\\\[2.84526pt] \phantom{L_{\beta}}\displaystyle
u=\sigma_{n}/|\cdot|^{\tau_{+}(\beta)}\quad\ \ {\rm
on}\;\;\partial{\Omega}\setminus\\{0\\},\end{array}\right.$
subject to
$\int_{\Omega}w_{n}\mathcal{L}_{\beta}^{*}(\frac{\xi}{x_{N}})\,d\gamma_{\beta}=-\int_{\partial\Omega}\sigma_{n}\frac{\partial\xi}{\partial\nu}d\omega,\quad\forall\,\xi\in
C^{1.1}_{0}(\Omega).$
Repeating the procedure of Case 1, we reach a contradiction, which completes the
proof. $\Box$
Acknowledgements: H. Chen is supported by the Natural Science Foundation of
China [Nos. 11661045, 11726614]. A. Quaas is partially supported by Fondecyt
Grant No. 1151180, Programa Basal, CMM in U. de Chile and Millennium Nucleus
Center for Analysis of PDE NC130017. F. Zhou is partially supported by NSFC
[Nos. 11726613, 11431005]; and STCSM [No. 18dz2271000].
# Combining TCAD and Monte Carlo Methods to Simulate CMOS Pixel Sensors with a Small Collection Electrode using the Allpix2 Framework

D. Dannheim, K. Dort (also at University of Giessen, Germany), D. Hynds (now at Nikhef, Amsterdam, Netherlands), M. Munker, A. Nürnberg (now at KIT, Karlsruhe, Germany), W. Snoeys, S. Spannagel
CERN, Geneva, Switzerland
###### Abstract
Combining electrostatic field simulations with Monte Carlo methods enables
realistic modeling of the detector response for novel monolithic silicon
detectors with strongly non-linear electric fields. Both the precise field
description and the inclusion of Landau fluctuations and production of
secondary particles in the sensor are crucial ingredients for the
understanding and reproduction of detector characteristics.
In this paper, a CMOS pixel sensor with small collection electrode design,
implemented in a high-resistivity epitaxial layer, is simulated by integrating
a detailed electric field model from finite element TCAD into a Monte Carlo
based simulation with the $\mathrm{Allpix}^{2}$ framework. The simulation
results are compared to data recorded in test-beam measurements and very good
agreement is found for various quantities such as cluster size, spatial
resolution and efficiency. Furthermore, the observables are studied as a
function of the intra-pixel incidence position to enable a detailed comparison
with the detector behavior observed in data.
The validation of such simulations is fundamental for modeling the detector
response and for predicting the performance of future prototype designs.
Moreover, visualization plots extracted from the charge carrier drift model of
the framework can aid in understanding the charge propagation behavior in
different regions of the sensor.
###### keywords:
Simulation, Monte Carlo, Silicon Detectors, High Resistivity CMOS, TCAD, Drift-Diffusion, Geant4
###### Contents

1. Introduction
2. The High-Resistivity CMOS Process
3. Detector Design under Investigation
4. Simulation Flow
   1. Electrostatic Field Modeling with TCAD
   2. Energy Deposition with Geant4
   3. Charge Carrier Transport
   4. Digitization of Signals
   5. Data Processing and Storage
5. Reconstruction and Analysis
   1. Reference tracks
   2. Clustering
   3. Reconstruction of the Cluster Position
6. Systematic Uncertainties
   1. Free parameters
   2. Parameters constrained by measurements
7. Validation With Test-Beam Data
   1. Cluster Charge
   2. Cluster Size
8. Detector Performance
   1. Intrinsic Resolution
   2. Efficiency
9. Summary & Outlook
## 1 Introduction
Integrated monolithic CMOS technologies with small collection electrodes [1]
are emerging technologies enabling advances in the design of next-generation
high-performance silicon vertex and tracking detectors for high-energy
physics. These technologies have allowed significant reductions in the
material budget with respect to hybrid pixel detectors, while improving the
signal-to-noise ratio and the position resolution that is achievable with CMOS
sensors.
However, the simulation of such devices remains challenging due to the complex
field configuration in the sensor. Advanced simulation tools are required to
understand and model the performance of detectors built in these technologies
and to optimize the design of future prototypes.
This paper presents a simulation performed with a combination of commonly used
tools employed in silicon detector simulation. The $\mathrm{Allpix}^{2}$
framework [2] is used to combine TCAD-simulated electric fields with a Geant4
[3, 4, 5] simulation of the particle interaction with matter, to investigate
the behavior of high-resistivity CMOS detectors and to compare the predicted
performance with measurements recorded in a particle beam.
This allows direct access to detector performance parameters such as spatial
resolution and detection efficiency by taking into account the stochastic
nature of the initial energy deposition. While many of these properties could
also be investigated by advanced TCAD transient simulations, this approach is
not practical owing to the high computing time for a single event and the
high-statistics samples required to evaluate the effects related to the strong
variation of the electric field in three dimensions.
Instead, a simplified charge transport algorithm is used, taking as an input
the electrostatic field map calculated by the TCAD simulation of the complex
field configuration within the sensor. The algorithm takes into account
effects like Landau fluctuations in the energy deposition and the production
of secondary particles such as delta rays. With event simulation rates of
several tens of Hertz, this allows for the generation of high-statistics
samples necessary for detailed studies of the detector behavior.
The paper is structured as follows. Section 2 provides a brief overview of the
CMOS process under investigation, while the detector properties and the
simulated setup are introduced in Section 3. The simulation is described in
detail in Section 4, while Section 5 introduces the reconstruction of physical
properties from the detector response. The sensitivity of the simulation to a
range of parameters is examined in Section 6. The simulation is validated
using data recorded in test-beam measurements in Section 7, while performance
quantities are derived in Section 8 and compared with the values obtained from
data. Finally, Section 9 summarizes the results and provides an outlook for
future investigations of this technology.
## 2 The High-Resistivity CMOS Process
Monolithic CMOS technologies incorporating the readout electronics in the
sensor are attractive candidates for new detector designs to simplify the
production and to benefit from a reduction of the material budget. By
integrating the CMOS logic in doped wells separated from the inversely doped
signal collection electrode, the size of the latter can be minimized as
illustrated in Figure 1. The small collection electrode design allows the
sensor capacitance to be reduced down to the order of $\mathrm{fF}$, enabling
detector designs with low noise and detection thresholds, low analog power
consumption, and large signal-to-noise ratio (SNR) [1].
Figure 1: Schematic cross section of a single pixel cell in the CMOS process
under investigation. The elements shown are not to scale. Modified from [6].
Implemented in a standard silicon substrate, only a small depleted region
evolves around the _pn_ -junction surrounding the collection electrode when
applying a bias voltage between the doped wells and the backside of the
sensor. The applicable bias voltage is limited to $-6\text{\,}\mathrm{V}$ by
the process-specific breakdown voltage of the NMOS transistors [7]. In order
to achieve a sizable depletion volume around the collection electrode, an
epitaxial layer with high resistivity silicon can be used.
The size of the depleted region forming in this epitaxial layer is restricted
to the area around the collection electrode and, without additional
modifications of the process, no full depletion of the sensor volume is
achieved. In the CMOS process under investigation, the depleted region has the
shape of a bubble as indicated by the white line in Figure 1, resulting in
contributions to the overall detector response from both drift and diffusion
of charge carriers. In addition, signal contributions are expected from charge
carriers that are created in the highly _p_ -doped backside substrate and
subsequently diffuse into the epitaxial layer.
## 3 Detector Design under Investigation
The _Investigator_ test-chip is an analog test chip that has been developed
within the ALICE ITS upgrade [8]. It has been investigated by the CLICdp
collaboration to evaluate this technology in terms of sensor performance
focussing on precise measurements of spatial resolution and detection
efficiency [6, 9]. The digitization of signals is performed off-chip in the
data acquisition system using one $65\text{\,}\mathrm{MHz}$ sampling analog-
to-digital converter (ADC) per channel which records the full waveform of all
detector channels, once a configurable triggering threshold has been exceeded
in any of them [10]. It should be noted that the threshold values for data
quoted below represent the offline analysis thresholds applied in addition to
the triggering threshold of about $120\text{\,}\mathrm{e}$.
The chip has a total thickness of $100\text{\,}\mathrm{\SIUnitSymbolMicro m}$.
The upper $25\text{\,}\mathrm{\SIUnitSymbolMicro m}$ of the sensor, below the
implants, consist of the epitaxially grown silicon with a resistivity of
$1$–$8\text{\,}\mathrm{k\SIUnitSymbolOhm}\text{\,}\mathrm{cm}$ in which the
depleted region forms, while the additional
$75\text{\,}\mathrm{\SIUnitSymbolMicro m}$ represent the undepleted
low-resistivity silicon substrate [7].
Figure 2: Visualization of the simulated detector setup consisting of the CMOS
sensor on a printed circuit board for support. The detector is oriented
perpendicular to the beam incident from the left. The colored lines represent
the primary and secondary particles propagated through the setup.
While the actual detector contains several sub-matrices of $8\text{\times}8$
active pixels each, featuring different layouts such as altered collection
electrode size, only one matrix has been simulated and is compared to data.
The pixel cells of the chosen matrix have a pitch of
$28\text{\times}28\text{\,}\mathrm{\SIUnitSymbolMicro m}$ and feature the
following geometrical parameters: the distance between the _p_ -wells and the
collection electrode is $3\text{\,}\mathrm{\SIUnitSymbolMicro m}$ and an
octagonal collection electrode with a size of
$2\text{\times}2\text{\,}\mathrm{\SIUnitSymbolMicro m}$ is placed in the
center of the pixel cell. A bias voltage of $-6\text{\,}\mathrm{V}$ is applied
to the _p_ -wells and a positive voltage of $0.8\text{\,}\mathrm{V}$ is
applied to the collection electrode itself. The simulated detector is placed
on a printed circuit board (PCB) as visualized in Figure 2.
## 4 Simulation Flow
In the following section, the simulation of the detector in the
$\mathrm{Allpix}^{2}$ framework is described. In order to avoid simulating a
full beam telescope setup and performing a track reconstruction, the
capabilities of the framework to record the Monte Carlo truth information
about primary and secondary particles are exploited.
Consequently, only a single CMOS detector and the source of ionizing particles
are simulated as shown in Figure 2. The figure depicts the overlay of many
events, as only a single primary particle is simulated in each event.
The following sections describe the individual steps of the simulation in
detail, providing information on the configuration of the respective
$\mathrm{Allpix}^{2}$ modules where applicable and relevant.
### 4.1 Electrostatic Field Modeling with TCAD
The electrostatic field in the epitaxial layer of the sensor is modeled using
a three-dimensional TCAD simulation. The doping profile is taken from [9, 11]
and resembles the technology described in Section 2, with the detector
geometry introduced in Section 3. The simulation comprises a single pixel
cell, and periodic boundary conditions allow the field to be replicated over
the entire sensor.
Figure 3: Magnitude of the electric field inside the pixel cell, simulated
using TCAD. The visualization only shows the upper
$25\text{\,}\mathrm{\SIUnitSymbolMicro m}$ of the sensor with the epitaxial
layer. The gray structures represent metal contacts used as terminals for the
biasing voltages. The plane C1 indicated in gray corresponds to the cut
presented in Figure 4 (color online).
Figure 3 shows a visualization of the magnitude of the electric field in the
three-dimensional pixel cell, with the corresponding voltages applied to the
terminals via metal contacts indicated as gray structures. A low electric
field is present in the _p_ -well rings as indicated by the blue region on the
surface of the simulated pixel cell. The center of the _p_ -well rings
features a square opening that contains the collection electrode, with a
high-field region evolving around it.
Figure 4: Magnitude of the electric field and field lines for a cut through
the 3D TCAD simulation. The plot only depicts the upper
$25\text{\,}\mathrm{\SIUnitSymbolMicro m}$ of the sensor with the epitaxial
layer, while the undepleted substrate region is omitted (color online).
The strong inhomogeneities of the electric field in different regions of the
pixel cell are best observed in a cut through the collection electrode,
perpendicular to the sensor surface, as depicted in Figure 4. The high
electric field strength close to the _pn_ -junction around the collection
electrode decreases rapidly towards the sensor backside and the pixel corners.
The white line indicates the depleted volume of the pixel cell. The electric
field lines, indicated as black arrows, provide a first insight into the
complexity of the field configuration in the sensor and the drift effects
induced by this strong non-linearity. The low electric field regions in the
pixel corners result in a slower charge carrier drift and an increased impact
of diffusion, leading to an enhanced charge sharing which improves the
position resolution without the need to reduce the pixel pitch. In the low-
resistivity substrate, recombination of charge carriers is a relevant process
owing to the higher doping concentration.
The electrostatic field obtained from TCAD is converted to a regularly spaced
mesh using the _Mesh Converter_ tool provided with the $\mathrm{Allpix}^{2}$
framework. This conversion speeds up the look-up of field values during the
simulation by several orders of magnitude since all necessary interpolations
are already performed offline prior to the simulation. A regular mesh
granularity of $0.1\text{\,}\mathrm{\SIUnitSymbolMicro m}$ is chosen to
correctly replicate the field in the high-density mesh regions of the TCAD
simulation close to the implant.
It has been verified that the selected granularity correctly replicates the
TCAD simulation by comparing the two fields. Using an even finer granularity
has not shown any significant improvement on the simulation results. Loading
highly granular electrostatic fields in $\mathrm{Allpix}^{2}$ does not impact
the performance of the simulation, but only the memory footprint of the
program during execution.
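The practical benefit of the regular mesh is that field values can be fetched with a constant-time array lookup during transport. The following Python sketch illustrates this idea under stated assumptions (a `field` array covering one periodically repeated pixel cell, positions in micrometres); it illustrates the principle only and is not the framework's actual implementation:

```python
import numpy as np

# Sketch of a constant-time field lookup on a regularly spaced mesh.
# Assumptions of this illustration: 'field' is an (nx, ny, nz, 3) array of
# field vectors covering one pixel cell, and 'cell_size_um' holds the cell
# dimensions, so positions are mapped into the periodically repeated cell.

def field_at(field, position_um, cell_size_um):
    """Return the electric field vector at a position inside the sensor."""
    nx, ny, nz, _ = field.shape
    frac = np.mod(position_um, cell_size_um) / cell_size_um  # periodic cell
    ix = min(int(frac[0] * nx), nx - 1)
    iy = min(int(frac[1] * ny), ny - 1)
    iz = min(int(frac[2] * nz), nz - 1)
    return field[ix, iy, iz]
```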
### 4.2 Energy Deposition with Geant4
$\mathrm{Allpix}^{2}$ provides the _DepositionGeant4_ module, an interface to
Geant4 [3, 4, 5] which facilitates the simulation of energy deposition in the
sensor. A $120\text{\,}\mathrm{GeV}$ beam of
$\pi^{+}$ incident on the pixel detector is
simulated, replicating the beam conditions of the test-beam measurements. The
beam points along the positive _z_ -axis, perpendicular to the _xy_ -plane of
the detector. The cross section of the beam is chosen to be significantly
smaller than the detector surface to suppress effects stemming from the
altered charge sharing behavior at the sensor edge. The energy deposited in
the sensor by Geant4 is translated into charge carriers with a conversion
factor of $3.64\text{\,}\mathrm{eV}$ per electron-hole pair [12].
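As a minimal illustration of this conversion, the sketch below divides the deposited energy by the mean pair-creation energy; the function name and the example value are chosen here for illustration only:

```python
# Minimal sketch of the energy-to-charge conversion described above.
PAIR_CREATION_ENERGY_EV = 3.64  # mean energy per electron-hole pair in silicon [12]

def deposited_energy_to_pairs(energy_ev):
    """Translate a Geant4 energy deposit (eV) into electron-hole pairs."""
    return int(energy_ev / PAIR_CREATION_ENERGY_EV)

# Example: a deposit of 5460 eV corresponds to 1500 electron-hole pairs.
assert deposited_energy_to_pairs(5460.0) == 1500
```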
The framework also stores the Monte Carlo truth information including
secondary particles such as delta rays and their relation to the primary
particles. This information can be exploited to establish a link between the
incident particles and the electron-hole pairs created in the detector.
The simulation is performed with the Photo-Absorption Ionization model (PAI)
[13] to improve the description of energy deposition in thin sensors. This is
of importance in spite of the total sensor thickness of
$100\text{\,}\mathrm{\SIUnitSymbolMicro m}$, since a majority of the charge
carriers forming the final signal will originate from the
$25\text{\,}\mathrm{\SIUnitSymbolMicro m}$ epitaxial layer.
[DepositionGeant4]
physics_list = "FTFP_BERT_EMY"
enable_pai = true
particle_type = "Pi+"
source_type = "beam"
source_energy = 120GeV
source_position = 0um 0um -200um
beam_size = 0.5mm
beam_direction = 0 0 1
number_of_particles = 1
max_step_length = 1.0um
Listing 5: Configuration section for the _DepositionGeant4_ module setting
up the particle source in Geant4 for the initial energy deposition in the sensor.
The module configuration used for the $\mathrm{Allpix}^{2}$ framework is
provided in Listing 5.
### 4.3 Charge Carrier Transport
The signal formation is simulated using a simplified model for charge carrier
transport based on the notion of _collected charges_. The electron-hole pairs
created by the incident particle are propagated along the electric field lines
through the sensor volume using an adaptive fourth-order Runge-Kutta-Fehlberg
(RKF) method [14] and a mobility parametrization which depends on the electric
field vector [15]. The RKF method adapts the simulated time step depending on
the position uncertainty derived from a fifth-order error estimation; the
allowed range for time steps was set to
$0.5\text{\,}\mathrm{ps}\leq\Delta t\leq 0.5\text{\,}\mathrm{ns}$.
While this model is not expected to reproduce a realistic time dependence of
the signal, the final state of charge collected at the sensor implants is
equivalent to the integrated induced current over the respective drift time.
This approximation is valid since the Shockley-Ramo weighting field [16, 17]
is negligible in most of the sensor volume owing to the small ratio between
signal collection electrode size and sensor thickness.
In the upper $25\text{\,}\mathrm{\SIUnitSymbolMicro}\mathrm{m}$ of the sensor
the charge carrier motion is a superposition of drift and diffusion, while in
the lower $75\text{\,}\mathrm{\SIUnitSymbolMicro}\mathrm{m}$ the charge
carriers are only subject to random diffusion as the electric field is
negligible.
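To make the transport model concrete, the sketch below shows a single drift-diffusion step in Python. It uses a simple Euler update for brevity, whereas the framework employs the adaptive RKF integrator described above; the mobility follows a field-dependent, Canali-type parametrization in the spirit of [15], with indicative room-temperature electron parameters that are assumptions of this sketch rather than the framework's exact values:

```python
import numpy as np

# Sketch of one drift-diffusion step. A simple Euler update is used for
# brevity; the framework uses the adaptive fourth-order RKF integrator.
# Parameter values are indicative electron values assumed for illustration.

V_SAT = 1.0e7        # cm/s, saturation velocity (assumed)
MU_0 = 1350.0        # cm^2/(V s), low-field electron mobility (assumed)
BETA = 1.1           # model exponent (assumed)
KT_OVER_Q = 0.0253   # thermal voltage in V at 293 K

def mobility(e_mag):
    """Field-dependent mobility of Canali type; e_mag in V/cm."""
    return MU_0 / (1.0 + (MU_0 * e_mag / V_SAT) ** BETA) ** (1.0 / BETA)

def transport_step(pos_cm, e_field, dt, rng):
    """Advance a group of charge carriers by one time step dt (s)."""
    e_mag = np.linalg.norm(e_field)
    mu = mobility(e_mag)
    drift = mu * np.asarray(e_field) * dt           # deterministic drift (cm)
    sigma = np.sqrt(2.0 * KT_OVER_Q * mu * dt)      # Einstein relation
    diffusion = rng.normal(0.0, sigma, size=3)      # isotropic diffusion (cm)
    return np.asarray(pos_cm) + drift + diffusion
```

In the substrate, where the electric field is negligible, the drift term vanishes and the update reduces to the pure random walk representing diffusion.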
The propagation algorithm is halted after $22.5\text{\,}\mathrm{ns}$, the so-
called integration time, and all charge carriers within a volume of
$3\times 3\times 2\text{\,}\mathrm{\SIUnitSymbolMicro m}^{3}$ around each of
the signal collection electrodes are attributed to the respective pixel
signal. The volume has been chosen to cover the electrode implant itself as
well as an additional volume accounting for the uncertainty in the final
position of the transported charge carriers. The integration time has been
chosen such that the simulation produces clusters with the same most probable
value (MPV) for the cluster charge as obtained from data. This aims to emulate
the physical process of charge carrier recombination in the silicon substrate,
which might be modeled directly in future simulations as briefly discussed in
Section 9. The systematic uncertainty introduced by this approach is discussed
in Section 6.
Charge carriers are transported in groups of five instead of individually to
speed up the simulation process. The group size has been chosen such that an
adequate number of transport steps is retained with the expected MPV for the
signal of around $1.5\text{\,}\mathrm{k}\mathrm{e}$. It has been verified that
this simplification does not affect the simulation result as further
elaborated in Section 6.
Figure 6: Visualization of the time evolution of the collected charge. Shown
are snapshots at different times after the initial energy deposition with each
line representing the drift and diffusion motion of a group of charge
carriers. Only charge carrier groups which have reached the implant are drawn,
all other charge carriers are omitted. The ionizing particle traverses the
sensor along the $z$-axis in the center of a pixel cell, each plot represents
three adjacent pixels.
Figure 6 visualizes this transport model and shows the collection of charge
carriers at the electrodes of the sensor. In this representation, only
electrons that have reached a sensor implant within the integration time are
shown. Electrons that are still in motion as well as holes are suppressed. The
motion of each group of charge carriers is represented by one line and is
shown at different integration times after the initial energy deposition.
Here, the incident particle traversed the detector along the $z$-axis through
the center of one pixel cell.
After the first few hundred picoseconds, only charge carriers in the vicinity
of the electrode are collected. The straight lines indicate that their motion
is dominated by drift. With increasing integration time, the motion patterns
of further groups of charge carriers arriving at the implant exhibit a strong
contribution from diffusion as indicated by the numerous kinks in the
respective paths. After about $15\text{\,}\mathrm{ns}$, lateral motion enables
some charge carriers to be collected in the two adjacent pixel cells.
The line graphs also allow visual distinction between the substrate and the
epitaxially grown high-resistivity layer, which ends about
$25\text{\,}\mathrm{\SIUnitSymbolMicro m}$ from the top of the sensor. A
faster drift motion can be observed in the high-field region close to the
backside of the epitaxial layer as straight lines; the contribution from
substrate charge carriers diffusing into the epitaxial layer starts only after
approximately $10\text{\,}\mathrm{ns}$.
Figure 7: Three-dimensional visualization of the charge carrier motion,
corresponding to the $20\text{\,}\mathrm{ns}$ snapshot shown as projection in
Figure 6.
In Figure 7, a three-dimensional representation of the line plot at
$20\text{\,}\mathrm{ns}$ is presented. The lines end at five different points,
each representing a different collection electrode.
[GenericPropagation]
temperature = 293K
charge_per_step = 5
timestep_min = 0.5ps
timestep_max = 0.5ns
integration_time = 20ns
Listing 8: Configuration section for the _GenericPropagation_ module used to
simulate the charge transport.
The configuration provided in Listing 8 has been used for the charge carrier
transport. Settings for creating line graphs of the charge carrier motion can
be found in the $\mathrm{Allpix}^{2}$ user manual available from the project
website [18].
### 4.4 Digitization of Signals
To simulate the response of the readout electronics, the charge carriers
accumulated in the region around the signal collection electrode during the
integration time are transformed into a digital signal. While the detector
under investigation uses off-chip ADCs for the signal digitization as
described in Section 3, the simulation aims to simulate an on-chip per-pixel
threshold using the _DefaultDigitizer_ module of $\mathrm{Allpix}^{2}$.
Equivalent noise values have been used where applicable, as discussed below.
[DefaultDigitizer]
electronics_noise = 10e
threshold = 40e
threshold_smearing = 5e
Listing 9: Configuration section used for the _DefaultDigitizer_ module of the
simulation.
An additional signal contribution, randomly drawn from a Gaussian distribution
with a width of $10\text{\,}\mathrm{e}$ and a mean of $0\text{\,}\mathrm{e}$
is added to the signal to account for electronics noise present during
digitization. The applied threshold is varied between $40\text{\,}\mathrm{e}$
and $700\text{\,}\mathrm{e}$, and a threshold dispersion, sampled from a
Gaussian distribution with a width of $5\text{\,}\mathrm{e}$ and a mean of
$0\text{\,}\mathrm{e}$, is added. For simplicity, the threshold dispersion is
not a fixed offset calculated per-pixel, but randomly chosen per pixel hit.
The setup of the module is summarized in Listing 9.
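The digitization logic can be summarized in a few lines; the sketch below mirrors the values of Listing 9 and returns `None` for pixels below threshold. It is a simplified illustration, not the _DefaultDigitizer_ source code:

```python
import numpy as np

# Sketch of the digitization step: Gaussian electronics noise is added to
# the collected charge, and a per-hit smeared threshold decides whether the
# pixel hit is registered.

rng = np.random.default_rng()

def digitize(charge_e, noise_e=10.0, threshold_e=40.0, dispersion_e=5.0):
    """Return the noisy signal in electrons, or None if below threshold."""
    signal = charge_e + rng.normal(0.0, noise_e)
    effective_threshold = threshold_e + rng.normal(0.0, dispersion_e)
    return signal if signal > effective_threshold else None
```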
### 4.5 Data Processing and Storage
The simulation results are stored in ROOT [19] trees using the
_ROOTObjectWriter_ module. In order to speed up the process, the simulation is
performed in two separate steps. In the first step, the energy deposition,
charge carrier transport and summing of charge at the collection electrodes is
performed. The result of this step is stored to disk.
In a second step, the _ROOTObjectReader_ is used to read the information from
the respective file and the final digitization step is performed. This allows
the final section of the simulation to be re-run on the full set of Monte Carlo
events with different settings applied, without the need to recompute the drift
motions. A full threshold scan, performed on the same set of initial
simulation events, thus only takes a couple of minutes instead of several
hours required to create the initial data set. Since the threshold scan
performed on the test-beam data has also been performed offline on the same
data set [9], this is an adequate simplification of the simulation.
The central simulation data set comprises about 2.5 million primary events
which have been reprocessed for every threshold setting. In addition, several
smaller data sets with different integration times have been produced in order
to optimize agreement with data as discussed in Section 6.
## 5 Reconstruction and Analysis
In the following, the reconstruction and analysis of the Monte Carlo events
are discussed. The simulation was set up using known, independent parameters
of the measurement setup, such as track resolution or charge threshold. Only
the cluster charge MPV was used as direct observable provided by the detector
to tune the simulation. All parameters were fixed before comparison with data
for observables used to quantify the performance, such as cluster size,
position resolution and efficiency. This blinded approach avoids drawing
premature conclusions from the figures of merit and thus influencing the
parameter optimization. Using only the MPV of the cluster charge for
calibrating the simulation minimizes the correlation between simulation and
data, and maximizes the prediction power of the simulation.
### 5.1 Reference tracks
The Monte Carlo truth information provided by the $\mathrm{Allpix}^{2}$
framework is used as reference track information. All registered particles in
the sensor are filtered and only primary particles entering the sensor from
the outside, i.e. those without a parent particle, are selected for further
analysis. This set of particles represents external tracks, and their position
in the mid-plane of the sensor is calculated by linearly interpolating their
entry and exit points registered by the framework. This position is then
convolved with the track resolution at the device under test (DUT) of
$2.0\text{\,}\mathrm{\SIUnitSymbolMicro m}$, in accordance with the value
obtained for the beam telescope used for the acquisition of the test-beam data
[20].
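A minimal sketch of this reference-track construction, assuming entry and exit points given in micrometres and smearing only the transverse coordinates, could look as follows:

```python
import numpy as np

# Sketch of the reference-track construction: Monte Carlo entry and exit
# points are linearly interpolated to the sensor mid-plane, and the
# transverse coordinates are smeared with the 2.0 um telescope track
# resolution quoted above.

TRACK_RESOLUTION_UM = 2.0

def reference_position(entry_um, exit_um, rng):
    mid_plane = 0.5 * (np.asarray(entry_um) + np.asarray(exit_um))
    return mid_plane[:2] + rng.normal(0.0, TRACK_RESOLUTION_UM, size=2)
```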
### 5.2 Clustering
The pixel hits registered by the simulated detector are grouped into clusters
by starting with a seed pixel and adding all directly adjacent pixel hits to
the cluster until no additional neighbors are found. This already allows basic
properties of the simulation to be compared with data, namely cluster size as
well as the shape of the cluster charge distribution.
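A compact sketch of such seeded clustering is given below; it treats the eight surrounding pixels (including diagonals) as direct neighbors, which is an assumption of this illustration:

```python
# Sketch of the seeded clustering described above: starting from a seed hit,
# adjacent pixel hits are added until no new neighbors are found.

def cluster_hits(hits):
    """hits: set of (col, row) tuples of pixels above threshold."""
    remaining = set(hits)
    clusters = []
    while remaining:
        stack = [remaining.pop()]
        cluster = set(stack)
        while stack:
            c, r = stack.pop()
            for dc in (-1, 0, 1):
                for dr in (-1, 0, 1):
                    n = (c + dc, r + dr)
                    if n in remaining:
                        remaining.remove(n)
                        cluster.add(n)
                        stack.append(n)
        clusters.append(cluster)
    return clusters
```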
The total cluster charge is given by the sum of the individual pixel charges
of the cluster. Its comparison with data allows the required integration time
in the simplified simulation model to be adjusted to achieve the same
integrated charge as seen in data. This procedure is described in detail in
Section 6.
The cluster size is defined as the total number of pixels contained in the
respective cluster. It has a strong dependence on the drift and diffusion of
the charge carriers in the sensors and is the primary measure for charge
sharing between pixel cells. It thus allows evaluation of the performance of
the simulation, e.g. how well the electric field is modeled.
### 5.3 Reconstruction of the Cluster Position
For assessing the performance of the detector, a particle incidence position
has to be extracted from the cluster information available. To replicate the
analysis performed for the test-beam data, the charge-weighted center-of-
gravity position of the cluster is corrected for non-linear charge sharing by
an $\eta$ algorithm [21].
Figure 10: $\eta$-distribution along the $x$ axis, derived from simulation at
a threshold of $40\text{\,}\mathrm{e}$.
Since the $\eta$ distribution represents the charge sharing between two pixels
only, for each cluster the two pixels with the highest charge, $Q_{1}$ and
$Q_{2}$, are chosen to construct the $\eta$ variable independently in $x$ and
$y$:
$\displaystyle\eta_{k}=\frac{\sum_{i}k_{i}\cdot Q_{i}}{\sum_{i}Q_{i}}\qquad
k=\{x,y\}\quad i=\{1,2\}$
where $k_{i}$ is the relative position between the two pixel centers. An
example of the $\eta$ distribution in $x$ is depicted in Figure 10 for a pixel
charge threshold of $40\text{\,}\mathrm{e}$.
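The sketch below computes the $\eta$ variable for one coordinate and applies a correction via the normalized cumulative $\eta$ distribution, which is one common realization of the algorithm of [21]; the binning and the function names are illustrative assumptions, not necessarily the exact procedure of the analysis:

```python
import numpy as np

# Sketch of the eta computation and a common way to apply the correction,
# following the idea of [21]: the measured eta is mapped through the
# normalized cumulative eta distribution to obtain the corrected
# intra-pixel position (in units of the pixel pitch).

def eta_value(k, q):
    """k: relative positions of the two highest-charge pixels, q: charges."""
    return (k[0] * q[0] + k[1] * q[1]) / (q[0] + q[1])

def eta_correction(eta_samples, bins=100):
    hist, _ = np.histogram(eta_samples, bins=bins, range=(0.0, 1.0))
    cdf = np.cumsum(hist) / np.sum(hist)
    def correct(eta):
        i = min(int(eta * bins), bins - 1)
        return cdf[i]
    return correct
```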
## 6 Systematic Uncertainties
The sensitivity of the simulation to different input parameters has been
examined by varying the values within their respective uncertainties, if
known, or within a reasonable range otherwise. The impact on the reconstructed
observables was investigated. While some parameters exhibit little or no
effect on the simulation results, others have a strong influence on the
outcome.
### 6.1 Free parameters
For the initial deposition of energy in the sensor, the influence of the
maximum allowed step length of tracking primary and secondary particles
through the sensor material has been evaluated by varying the respective value
between $0.1\text{\,}\mathrm{\SIUnitSymbolMicro m}$ and
$5\text{\,}\mathrm{\SIUnitSymbolMicro m}$, and no significant difference was
observed. Since large parts of the sensor volume are undepleted, a strong
impact of diffusion is expected which smears the initial position of the
charge carriers.
The charge carrier transport is mainly dominated by the precision of the
numeric integration and its granularity. The number of charge carriers
transported as group has been varied from a single charge carrier up to ten
per group in order to study possible effects on the distribution at the
implants. The effect on the reconstruction observables is found to be
negligible.
### 6.2 Parameters constrained by measurements
The behavior of the sensor has been shown to be very sensitive to the
simulated physical properties of the CMOS sensor, i.e. the thickness of the
epitaxial layer as well as the modeled electric field. Even small changes in
the sensor design, such as a more simplistic approximation of the implant
doping profiles in the TCAD design cause large changes in the resulting
cluster size distributions and position resolution. It is therefore of
paramount importance to model the sensor as precisely as possible and to
constrain the different parameters in TCAD by additional measurements [22].
The low-field regions found in the corners of the pixel cell visible in
Figures 3 and 4 are strongly influenced by these modifications, and their
contribution to the detector signal changes accordingly.
Figure 11: Most probable cluster charge as a function of the simulated
integration time. The black horizontal line represents the value obtained from
data, the hatched band corresponds to the assumed systematic uncertainty of
the charge calibration.
The integration time currently used to stop the transport of charge carriers
is also linked to the sensor design, since it is used to emulate an effective
recombination of charge carriers. Their lifetime in the different regions of
the sensor is dominated by the respective doping concentration, and
potentially affected by the silicon wafer production process. Since this was
not modeled in detail for this simulation, the integration time was chosen
such that the MPV of the cluster charge matched the value obtained from data
as discussed in Section 4. The corresponding uncertainty on the charge
calibration of the reference data has therefore to be taken into account as
systematic uncertainty of the simulation by comparing the cluster charge MPV
for different integration times to the value obtained from data as shown in
Figure 11. Here, the hatched band represents an assumed uncertainty of $\pm
50\text{\,}\mathrm{e}$ on the charge calibration of data [7, 9]. This
translates to an uncertainty on the integration time of
$22.5^{+1.5}_{-1.3}\,\textrm{ns}$, which is propagated as systematic
uncertainty to the results presented in this paper.
It has been observed that the overall agreement between data and simulation
seems to improve for lower integration times, which might indicate either an
offset in the absolute charge calibration of data or an insufficient modeling
of the signal formation processes in silicon.
Also the charge threshold applied to the individual pixels has a strong impact
on both the cluster size and the intrinsic resolution, with decreasing
influence towards higher thresholds. At a threshold of
$40\text{\,}\mathrm{e}$, a change of as little as $\pm 5\text{\,}\mathrm{e}$
visibly alters the observables. Since the absolute charge calibration and the
threshold value in electrons are fully correlated, the uncertainty on the
applied threshold has been taken into account by varying the two parameters
simultaneously and by calculating the total uncertainty arising from the
variations.
A variation of the threshold dispersion and electronics noise of up to
$10\text{\,}\mathrm{e}$ at a threshold of $40\text{\,}\mathrm{e}$ yielded no
observable effect. The values for noise and threshold dispersion have been
estimated from the evaluation of the full waveform in data [9].
The residual width and the final intrinsic resolution depend on the resolution
of the reference tracks at the position of the detector under investigation.
This resolution has been determined for the test-beam data used, and a
variation of $\pm 0.2\text{\,}\mathrm{\SIUnitSymbolMicro m}$ around this value
shifts the obtained resolution accordingly. This strong influence arises from
the fact that the two values are of similar size.
In summary, while the free parameters of the simulation have little to no
influence on the final result when varied within a reasonable range, several
parameters show a high sensitivity but are constrained by measurements.
## 7 Validation With Test-Beam Data
The simulation is compared to data recorded with the _Investigator_ chip,
described in Section 3, at the CERN SPS accelerator with a
$120\text{\,}\mathrm{GeV}$ $\pi^{+}$ beam. A total of 25660 tracks through the
region of interest have been recorded, mainly limited by the very small active
area of the DUT and the dead time of the data acquisition system used. More
details about the test-beam setup, data samples and the analysis of data used
for comparison in this paper can be found in [6, 9].
### 7.1 Cluster Charge
Figure 12: Cluster charge distributions at a pixel threshold of
$120\text{\,}\mathrm{e}$ for simulation and experiment. The distributions
resemble the expected Landau-Gauss distribution. The hatched band represents
the total uncertainty.
The cluster charge distributions for both simulation and data at a charge
threshold of $120\text{\,}\mathrm{e}$ are shown in Figure 12. The
distributions are fitted with the convolution of a Gaussian and Landau
function. The MPV is $1.42\text{\,}\mathrm{k}\mathrm{e}$ for both data and
simulation, and the width of the Gaussian is
$0.21\text{\,}\mathrm{k}\mathrm{e}$/$0.22\text{\,}\mathrm{k}\mathrm{e}$ for
data/simulation, respectively. A good agreement between data and simulation is
observed, as also indicated by the ratio of the two distributions displayed in
the lower part of the figure. While the MPV has been tuned to match data using
the integration time of the simulation as discussed in Section 4, the
agreement of the shapes indicates that the energy deposition in the relevant
parts of the sensor as well as the collection of charge carriers is well-
modeled by the simulation. The data distribution exhibits some fluctuations
owing to the low statistics of the sample.
### 7.2 Cluster Size
Figure 13: Cluster size distributions for experiment and simulation at a
threshold of $120\text{\,}\mathrm{e}$.
The distribution of the total cluster size at a threshold of
$120\text{\,}\mathrm{e}$ for simulation and experiment is presented in Figure
13. Qualitatively, the distributions are in good agreement. A possible source
of the observed deviations for individual cluster sizes are uncertainties in
the modeled electric field of the sensor as discussed in Section 6.
Figure 14: Cluster size projected in $x$ (left) and $y$ (right) at a threshold
of $120\text{\,}\mathrm{e}$ for data and simulation.
The projection of the cluster size in $x$ and $y$, depicted in Figure 14,
provides additional details about the charge sharing process. Data and
simulation agree well, but a small difference between the distributions in $x$
and $y$ can be observed in data despite the symmetry of the pixel cell layout.
It has been verified that this does not stem from a remaining misalignment in
data by repeating the simulation with a sensor rotated around the $x$ axis by
up to $\pm 15\text{\,}\mathrm{\SIUnitSymbolDegree}$ in an attempt to reproduce
the difference. A possible source of the asymmetry is the layout of the
_Investigator_ pixel: while the p-well structure is designed to be fully
symmetric in $x$ and $y$, the circuitry placed in the p-wells is not.
Figure 15: Intra-pixel representation of the cluster size for data and
simulation at a threshold of $120\text{\,}\mathrm{e}$. Shown is an array of
$2\times 2$ pixel cells, with the top-right pixel displaying data, taken from
[9], and the other pixels showing results from the simulation (color online).
The cluster size distribution is a precise measure for charge sharing as
confirmed by the intra-pixel representation of the total cluster size
presented in Figure 15. For the simulation, the Monte Carlo truth information
is exploited to produce a multi-pixel map indicating the mean cluster size as
a function of the particle incidence position within the pixel cells.
Likewise, the reference track supplied by the beam telescope is used to obtain
the particle incidence position for data. To increase statistics, data events
from the full active matrix are folded into a single pixel cell, which is
displayed in the upper-right quarter of Figure 15.
The largest clusters originate from the pixel corners since the low electric
field between pixel implants results in a strong contribution from diffusion
of charge carriers. Single-pixel clusters, on the other hand, are almost
exclusively produced if the incident particle traverses the sensor very close
to the center of the pixel cell.
While the overall mean size distribution is faithfully reproduced in the
simulation, minor discrepancies in the pixel corners are visible. The
transition from four to three-pixel clusters represented by the yellow regions
is more apparent in simulation than in data. The same holds true for the
transition between two to three pixel clusters corresponding to the turquoise
regions in Figure 15. Particles penetrating the sensor at the corners of a
pixel cell, for example, are more likely to give rise to clusters with size
four in data compared to simulation. This observation is in line with the
higher number of clusters with size four in the cluster size distribution
displayed in Figure 13. Moreover, the cluster size is particularly sensitive
to a mis-modeling in the pixel corners as the diffusion of charge carriers to
neighboring pixel cells is most likely if the incident particle enters the
sensor at the corners between four cells. Most notably, small modifications in
the electric field in the pixel corners are capable of inhibiting or enhancing
the motion of charge carriers to neighboring cells causing deviations in
cluster size by up to two units as there are two cells directly adjacent to
one corner. As discussed in the previous section, the low field regions in the
pixel corners are strongly influenced by the exact doping profile of the
sensor.
Figure 16: Mean cluster size as a function of the threshold, shown for
experimental data as well as simulations with TCAD-modeled and linear electric
fields. The hatched band represents the total uncertainty.
The mean cluster size has been studied as a function of the applied charge
threshold. Figure 16 shows the curves for data and simulation. In addition, a
simulation with a linear electric field replacing the TCAD model in the
epitaxial layer is plotted as a dashed line for comparison. By increasing the
threshold, the mean cluster size shifts to smaller values as individual pixels
fall below the charge threshold. Data and simulation match well over the full
range, with a maximum deviation of about $6\,\%$ at the lowest thresholds,
while the simulation with a linear electric
field produces incompatible results. This deviation from the experimental
results demonstrates the significance of a precise modeling of the electric
field for this type of detector. Similar results have been obtained for the
mean projected cluster sizes along the $x$ and $y$ coordinates.
Figure 17: Intra-pixel representation of the mean projected cluster size as
obtained from simulation. Shown are arrays of $2\times 2$ pixel cells
indicating the mean projected cluster size in $x$ (left) and $y$ (right)
direction as a function of their relative position within the four pixel cells
(color online).
Figure 17 displays $2\times 2$ pixel maps of the mean projected cluster size
in $x$ and $y$ as a function of the particle incidence position at a threshold
of $40\text{\,}\mathrm{e}$. Instead of the uniform bands along the respective
coordinate expected for uncorrelated observables, eye-shaped structures reveal
a correlation between charge sharing along the two dimensions caused by the
inhomogeneous electric field and the bubble-shaped depletion region described
in Section 2. The same effect is observed in data as demonstrated in [9]. With
increasing threshold, charge sharing effects are suppressed and the
correlation between the mean cluster size in $x$ and $y$ vanishes.
## 8 Detector Performance
Using the reconstructed cluster position and the Monte Carlo truth information
from the primary particle, the performance of the CMOS detector is assessed in
terms of spatial resolution and hit detection efficiency. The results obtained
from simulation are compared to data.
### 8.1 Intrinsic Resolution
Figure 18: Residuals in $x$ direction for data and simulation at a threshold
of $120\text{\,}\mathrm{e}$. The hatched band represents the uncertainty on
simulation.
Figure 18 shows the residual in $x$, defined as the difference between the
impact point of the incident particle obtained from the Monte Carlo truth and
the reconstructed cluster position. The width of the residual is obtained as
the root mean square (RMS) of the distribution, evaluated for the central
$99.73\,\%$ of the histogram, equivalent to $\pm
3\sigma$ of a Gaussian distribution, to match the definition used in the data
analysis. This allows the width of the distribution to be quantified
independently from its shape while providing a statistically robust metric.
The spatial resolution is then calculated by quadratically subtracting the
track resolution from the residual width, i.e.
$\displaystyle\sigma=\sqrt{\textrm{RMS}_{99.73\,\%}^{2}-\sigma_{\textrm{track}}^{2}}.$
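A direct transcription of this procedure, assuming the RMS is taken about the mean of the trimmed distribution and that the residual width exceeds the track resolution, might read:

```python
import numpy as np

# Sketch of the resolution extraction described above: the residual width is
# the RMS of the central 99.73% of the distribution, and the telescope track
# resolution is subtracted in quadrature.

def intrinsic_resolution(residuals_um, track_resolution_um=2.0):
    r = np.sort(np.asarray(residuals_um))
    n = len(r)
    cut = int(round(0.5 * (1.0 - 0.9973) * n))   # trim 0.135% per tail
    core = r[cut:n - cut] if cut > 0 else r
    rms = np.sqrt(np.mean((core - np.mean(core)) ** 2))
    return np.sqrt(rms ** 2 - track_resolution_um ** 2)
```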
The statistical uncertainty on the resolution is calculated using pseudo-
experiments. The number of entries in each bin of the residual distribution
under consideration is smeared with a Poisson distribution with a mean
equivalent to the original bin content. The width obtained from the smeared
histogram is stored, and the pseudo-experiment repeated $10\,000$ times. The
statistical uncertainty on the residual width is then taken as the width of
the resulting distribution and is propagated to the intrinsic resolution.
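The pseudo-experiment procedure translates into a short Poisson bootstrap; the implementation below is a sketch under the assumption that the width is computed as the weighted RMS of the binned residuals:

```python
import numpy as np

# Sketch of the pseudo-experiment procedure: each histogram bin is smeared
# with a Poisson distribution around its content, the width is recomputed,
# and the spread over many repetitions gives the statistical uncertainty.

def bootstrap_width_error(bin_centers, bin_contents, n_pseudo=10_000, rng=None):
    rng = rng or np.random.default_rng()
    widths = []
    for _ in range(n_pseudo):
        smeared = rng.poisson(np.asarray(bin_contents))
        if smeared.sum() == 0:
            continue
        mean = np.average(bin_centers, weights=smeared)
        var = np.average((np.asarray(bin_centers) - mean) ** 2, weights=smeared)
        widths.append(np.sqrt(var))
    return np.std(widths)
```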
Using these definitions, resolutions in $x$ and $y$ of
$\sigma_{x}=3.60\pm 0.01\,\text{(stat)}\,^{+0.24}_{-0.13}\,\text{(syst)}\,\mathrm{\SIUnitSymbolMicro m}$
$\sigma_{y}=3.57\pm 0.01\,\text{(stat)}\,^{+0.13}_{-0.11}\,\text{(syst)}\,\mathrm{\SIUnitSymbolMicro m}$
have been achieved in simulation, which is well below the value of
pitch$/\sqrt{12}\approx 8\text{\,}\mathrm{\SIUnitSymbolMicro m}$
expected without charge sharing. It compares very well with the resolutions of
$3.29\pm 0.02\text{\,}\mathrm{\SIUnitSymbolMicro m}$ and $3.42\pm
0.02\text{\,}\mathrm{\SIUnitSymbolMicro m}$ measured in data for $x$ and $y$
respectively.
Figure 19: Spatial resolution in $x$ (left) and $y$ (right) direction as a
function of the applied charge threshold, shown for experimental data as well
as simulations with TCAD-modeled and linear electric fields. The hatched band
represents the total uncertainty.
The resolution has been studied as a function of the charge threshold applied,
shown in Figure 19 for the $x$ and $y$ coordinates separately. With increasing
threshold, the information from pixels not passing the threshold is lost,
leading to a deterioration of the position resolution. The comparison of data
with simulation shows a very good agreement down to a threshold of about
$150\text{\,}\mathrm{e}$. The discrepancy at lower thresholds is most likely a
consequence of non-Gaussian noise in the data recorded with the analog
prototype chip as well as a result of the simplification of charge carrier
lifetimes described in Section 4. The disagreement is of limited importance
for practical purposes since a fully integrated sensor is likely to be
operated at thresholds above $150\text{\,}\mathrm{e}$.
The dashed gray line in Figure 19 again represents a simulation using a linear
electric field as approximation, and the deviation from data suggests that
this simplification leads to an inadequate description of the CMOS sensor
response.
### 8.2 Efficiency
The efficiency of the detector is defined as the number of incident primary
particles that can be matched to a reconstructed cluster divided by the total
number of primary particles penetrating the detector. A match between an
incident particle and a reconstructed cluster is made, if the cluster is
located within a radius of $100\text{\,}\mathrm{\SIUnitSymbolMicro m}$ around
the impact point of the incident particle, using the same matching criterion
as applied to data.
The statistical uncertainty of the efficiency has been calculated by
considering a counting experiment with two possible outcomes: either a matched
or an unmatched primary particle track. This results in an uncertainty of
$\displaystyle\sigma_{\textrm{eff}}=\sqrt{\frac{p\cdot\left(1-p\right)}{N}},$
where $p$ is the probability of a matched track while $N$ is the total number
of experiments conducted.
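Both the matching criterion and the binomial uncertainty are easily expressed in code; the per-event data layout below is an assumption of this sketch:

```python
import numpy as np

# Sketch of the efficiency definition and its binomial uncertainty: a primary
# particle counts as detected if a reconstructed cluster lies within 100 um
# of its impact point.

MATCHING_RADIUS_UM = 100.0

def efficiency(events):
    """events: list of (impact_point, cluster_positions) per primary particle."""
    matched = 0
    for impact, clusters in events:
        if any(np.linalg.norm(np.asarray(c) - impact) < MATCHING_RADIUS_UM
               for c in clusters):
            matched += 1
    n = len(events)
    p = matched / n
    return p, np.sqrt(p * (1.0 - p) / n)
```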
Figure 20: Efficiency obtained from simulations with TCAD-modeled electric
field as a function of the impact position for charge thresholds of
$40\text{\,}\mathrm{e}$ (left), $450\text{\,}\mathrm{e}$ (center) and
$700\text{\,}\mathrm{e}$ (right) for a single pixel cell (color online).
The efficiency obtained from simulation as a function of the particle impact
position within a single pixel cell is displayed in Figure 20 for three
different thresholds.
For the lower threshold of $40\text{\,}\mathrm{e}$, depicted in Figure 20
(left), the simulation yields an overall efficiency of
$99.95\,^{+0.05}_{-0.23}\,\text{(syst)}\,\%$. The
statistical uncertainty is of the order of $1\times 10^{-8}$.
The remaining inefficiencies are evenly distributed throughout the pixel cell
and arise from delta rays which pull the cluster center far away from the
particle incidence point. With increasing threshold, inefficiencies start to
develop in the pixel corners, as these are the regions with the strongest
charge sharing and the largest mean cluster size. The overall hit detection
efficiency at the threshold of $450\text{\,}\mathrm{e}$ shown in Figure 20
(center) decreases to about
$97.62\,^{+0.13}_{-0.58}\,\text{(syst)}\,\%$. At the
threshold of $700\text{\,}\mathrm{e}$, depicted in Figure 20 (right), a
pronounced inefficiency is observed, extending from the pixel corners into the
pixel cell and leading to an overall efficiency of
$85.96\,^{+0.53}_{-1.02}\,\text{(syst)}\,\%$.
Figure 21: Efficiency as a function of the charge threshold, shown for
experimental data as well as simulations with TCAD-modeled and linear electric
fields. While the shape of the curve is well reproduced in simulation, a
constant offset from data can be observed. The hatched band represents the
total uncertainty.
This decrease of efficiency can best be observed as a function of the charge
threshold applied, as shown in Figure 21. While the shape of the curve
observed in data is reproduced well, a constant offset to the measured values
can be observed. This difference can be attributed to fluctuations of the
pedestal as well as inefficiencies in the data acquisition system which are
not modeled in simulation. The simulation using the linear electric field
approximation is found to not correctly model the behavior observed at high
threshold values.
## 9 Summary & Outlook
In this paper, a combined 3D TCAD and Monte Carlo simulation of a CMOS pixel
sensor with small collection electrode design, implemented in a high-
resistivity epitaxial layer, has been presented. The simulation combines the
results of a three-dimensional electrostatic TCAD simulation with the
stochastic description of energy deposition by Geant4 using the
$\mathrm{Allpix}^{2}$ framework. Visualizations of the charge carrier motion
in the sensor produced by the simulation framework have been found to be
helpful to qualitatively understand the sensor response.
The simulation results have been compared to measurements of a reference
detector, recorded in a test-beam, and very good agreement has been observed
after tuning the simulation to match the most probable value of the cluster
charge measured in data. The simplified charge transport model implemented in
$\mathrm{Allpix}^{2}$ has been shown to be sufficiently precise to replicate
the detector performance figures of merit such as efficiency and intrinsic
resolution measured in data.
The implemented simulation setup for CMOS sensors will be used for further
studies of similar detector prototypes and designs, including different sensor
geometries and modified production processes aiming at a full lateral
depletion of the epitaxial layer.
In future versions of the $\mathrm{Allpix}^{2}$ framework, a simulation of
charge carrier recombination might be implemented, calculating the lifetime
from the respective doping concentration as a function of their position
within the sensor. This would allow for an even more realistic description of
the charge transport process and would remove the necessity of setting and
tuning the integration time for underdepleted detectors.
Furthermore, the simulation could be extended to the detector performance in
the timing domain by simulating the charge transport taking into account
induced currents using the Shockley-Ramo theorem as possible with the latest
version of the $\mathrm{Allpix}^{2}$ framework.
The presented combination of precise electric field modeling in TCAD and
inclusion of statistical fluctuations is also interesting for the simulation
of other silicon detector technologies with complex field configurations such
as 3D sensors or LGADs.
## Acknowledgements
This work was carried out in the framework of the CLICdp Collaboration. This
project has received funding from the European Union’s Horizon 2020 Research
and Innovation programme under Grant Agreement no. 654168.
## References
* [1] W. Snoeys, Monolithic pixel detectors for high energy physics, Nucl. Instr. Meth. A 731 (2013) 125 – 130, Proceedings of PIXEL 2012. doi:10.1016/j.nima.2013.05.073.
* [2] S. Spannagel, et al., Allpix2: A modular simulation framework for silicon detectors, Nucl. Instr. Meth. A 901 (2018) 164 – 172. arXiv:1806.05813, doi:10.1016/j.nima.2018.06.020.
* [3] S. Agostinelli, et al., Geant4 – a simulation toolkit, Nucl. Instr. Meth. A 506 (3) (2003) 250 – 303. doi:10.1016/S0168-9002(03)01368-8.
* [4] J. Allison, et al., Geant4 developments and applications, IEEE T. Nucl. Sci. 53 (1) (2006) 270–278. doi:10.1109/TNS.2006.869826.
* [5] J. Allison, et al., Recent developments in Geant4, Nucl. Instr. Meth. A 835 (Supplement C) (2016) 186 – 225. doi:10.1016/j.nima.2016.06.125.
* [6] D. Dannheim, et al., Comparison of small collection electrode cmos pixel sensors with partial and full lateral depletion of the high-resistivity epitaxial layer, Nucl. Instr. Meth. 927 (2019) 187 – 193. doi:10.1016/j.nima.2019.02.049.
* [7] J. W. van Hoorne, Study and Development of a novel Silicon Pixel Detector for the Upgrade of the ALICE Inner Tracking System, PhD thesis, Technische Universität Wien (Nov 2015).
URL https://cds.cern.ch/record/2119197
* [8] L. Musa, et al., Technical Design Report for the Upgrade of the ALICE Inner Tracking System, Tech. Rep. CERN-LHCC-2013-024. ALICE-TDR-017 (Nov 2013). doi:10.1088/0954-3899/41/8/087002.
URL https://cds.cern.ch/record/1625842
* [9] M. Munker, Test beam and simulation studies on High Resistivity CMOS pixel sensors, PhD thesis, Universität Bonn (June 2018).
URL https://cds.cern.ch/record/2644054
* [10] K. M. Sielewicz, Mitigation Methods Increasing Radiation Hardness of the FPGA-Based Readout of the ALICE Inner Tracking System, PhD thesis, Warsaw University of Technology (Nov 2018).
URL https://cds.cern.ch/record/2643800
* [11] M. Munker, et al., Simulations of CMOS pixel sensors with a small collection electrode, improved for a faster charge collection and increased radiation tolerance, J. Inst. 14 (05) (2019) C05013–C05013. doi:10.1088/1748-0221/14/05/c05013.
* [12] J. J. Smithrick, I. T. Myers, Average triton energy deposited in silicon per electron-hole pair produced, Phys. Rev. B 1 (1970) 2945–2948. doi:10.1103/PhysRevB.1.2945.
* [13] J. Apostolakis, et al., An implementation of ionisation energy loss in very thin absorbers for the GEANT4 simulation package, Nucl. Instr. Meth. A 453 (2000) 597–605. doi:10.1016/S0168-9002(00)00457-5.
* [14] E. Fehlberg, Low-order classical Runge-Kutta formulas with stepsize control and their application to some heat transfer problems, NASA Technical Report NASA-TR-R-315, NASA (1969).
URL https://ntrs.nasa.gov/search.jsp?R=19690021375
* [15] C. Jacoboni, et al., A review of some charge transport properties of silicon, Solid-State Electron. 20 (2) (1977) 77 – 89. doi:10.1016/0038-1101(77)90054-5.
* [16] W. Shockley, Currents to conductors induced by a moving point charge, J. Appl. Phys. 9 (10) (1938) 635–636. doi:10.1063/1.1710367.
* [17] S. Ramo, Currents induced by electron motion, Proc. IRE 27 (9) (1939) 584–585. doi:10.1109/JRPROC.1939.228757.
* [18] The Allpix Squared project, accessed 8 2019.
URL https://cern.ch/allpix-squared/
* [19] R. Brun, F. Rademakers, ROOT – an object oriented data analysis framework, Nucl. Instr. Meth. A 389 (1–2) (1997) 81 – 86, new Computing Techniques in Physics Research V. doi:10.1016/S0168-9002(97)00048-X.
* [20] N. Alipour Tehrani, Test-beam measurements and simulation studies of thin pixel sensors for the CLIC vertex detector, Ph.D. thesis, CERN (2017). doi:10.3929/ethz-b-000164813.
* [21] E. Belau, et al., Charge collection in silicon strip detectors, Nucl. Instr. Meth. 214 (2–3) (1983) 253 – 260. doi:10.1016/0167-5087(83)90591-4.
* [22] W. Snoeys, et al., A process modification for CMOS monolithic active pixel sensors for enhanced depletion, timing performance and radiation tolerance, Nucl. Instr. Meth. A 871 (2017) 90 – 96. doi:10.1016/j.nima.2017.07.046.
# Reinforcement Learning through Active Inference
Alexander Tschantz
Sackler Centre for Consciousness Science
Evolutionary & Adaptive Systems Research Group
University of Sussex
Brighton, UK
<EMAIL_ADDRESS>
&Beren Millidge
University of Edinburgh
Edinburgh, UK
<EMAIL_ADDRESS>
&Anil K. Seth
Sackler Centre for Consciousness Science
Evolutionary & Adaptive Systems Research Group
University of Sussex
Brighton, UK
Canadian Institute for Advanced Research
&Christopher L. Buckley
Evolutionary & Adaptive Systems Research Group
University of Sussex
Brighton, UK
###### Abstract
The central tenet of reinforcement learning (RL) is that agents seek to
maximize the sum of cumulative rewards. In contrast, active inference, an
emerging framework within cognitive and computational neuroscience, proposes
that agents act to maximize the evidence for a biased generative model. Here,
we illustrate how ideas from active inference can augment traditional RL
approaches by (i) furnishing an inherent balance of exploration and
exploitation, and (ii) providing a more flexible conceptualization of reward.
Inspired by active inference, we develop and implement a novel objective for
decision making, which we term the free energy of the expected future. We
demonstrate that the resulting algorithm successfully balances exploration and
exploitation, simultaneously achieving robust performance on several
challenging RL benchmarks with sparse, well-shaped, and no rewards.
## 1 Introduction
Both biological and artificial agents must learn to make adaptive decisions in
unknown environments. In the field of reinforcement learning (RL), agents aim
to learn a policy that maximises the sum of expected rewards (Sutton et al.,
1998). This approach has demonstrated impressive results in domains such as
simulated games (Mnih et al., 2015; Silver et al., 2017), robotics (Polydoros
& Nalpantidis, 2017; Nagabandi et al., 2019) and industrial applications
(Meyes et al., 2017).
In contrast, active inference (Friston et al., 2016; 2015; 2012; 2009) \- an
emerging framework from cognitive and computational neuroscience - suggests
that agents select actions in order to maximise the evidence for a model that
is biased towards an agent’s preferences. This framework extends influential
theories of Bayesian perception and learning (Knill & Pouget, 2004; L
Griffiths et al., 2008) to incorporate probabilistic decision making, and
comes equipped with a biologically plausible process theory (Friston et al.,
2017a) that enjoys considerable empirical support (Friston & Kiebel, 2009).
Although active inference and RL have their roots in different disciplines,
both frameworks have converged upon similar solutions to the problem of
learning adaptive behaviour. For instance, both frameworks highlight the
importance of learning probabilistic models, performing inference and
efficient planning. This leads to a natural question: can insights from active
inference inform the development of novel RL algorithms?
Conceptually, there are several ways in which active inference can inform and
potentially enhance the field of RL. First, active inference suggests that
agents embody a generative model of their preferred environment and seek to
maximise the evidence for this model. In this context, rewards are cast as
prior probabilities over observations, and success is measured in terms of the
divergence between preferred and expected outcomes. Formulating preferences as
prior probabilities enables greater flexibility when specifying an agent’s
goals (Friston et al., 2012; Friston, 2019a), provides a principled (i.e.
Bayesian) method for learning preferences (Sajid et al., 2019), and is
consistent with recent neurophysiological data demonstrating the
distributional nature of reward representations (Dabney et al., 2020). Second,
reformulating reward maximisation as maximizing model evidence naturally
encompasses both exploration and exploitation under a single objective,
obviating the need for adding ad-hoc exploratory terms to existing objectives.
Moreover, as we will show, active inference subsumes a number of established
RL formalisms, indicating a potentially unified framework for adaptive
decision-making under uncertainty.
Translating these conceptual insights into practical benefits for RL has
proven challenging. Current implementations of active inference have generally
been confined to discrete state spaces and toy problems (Friston et al., 2015;
2017b; 2017c) (although see (Tschantz et al., 2019a; Millidge, 2019; Catal et
al., 2019)). Therefore, it has not yet been possible to evaluate the
effectiveness of active inference in challenging environments; as a result,
active inference has not yet been widely taken up within the RL community.
In this paper, we consider active inference in the context of decision
making111A full treatment of active inference would consider inference and
learning; see (Buckley et al., 2017) for an overview.. We propose and
implement a novel objective function for active inference - the free energy of
the expected future \- and show that this quantity provides a tractable bound
on established RL objectives. We evaluate the performance of this algorithm on
a selection of challenging continuous control tasks. We show strong
performance on environments with sparse, well-shaped, and no rewards,
demonstrating our algorithm’s ability to effectively balance exploration and
exploitation. Altogether, our results indicate that active inference provides
a promising complement to current RL methods.
## 2 Active Inference
Both active inference and RL can be formulated in the context of a partially
observed Markov decision process (POMDP) (Murphy, 1982). At each time step $t$,
the true state of the environment $\mathbf{s}_{t}$ evolves according to the
stochastic transition dynamics
$\mathbf{s}_{t}\sim\mathrm{p}(\mathbf{s}_{t}|\mathbf{s}_{t-1},\mathbf{a}_{t-1})$,
where $\mathbf{a}\in\mathbb{R}^{d_{a}}$ denotes an agent’s actions. Agents do
not necessarily have access to the true state of the environment, but may
instead receive observations ${\textnormal{o}}_{t}\in\mathbb{R}^{d_{o}}$,
which are generated according to
${\textnormal{o}}_{t}\sim\mathrm{p}({\textnormal{o}}_{t}|\mathbf{s}_{t})$. In
this case, agents must operate on beliefs
${\textnormal{s}}_{t}\in\mathbb{R}^{d_{s}}$ about the true state of the
environment $\mathbf{s}_{t}$. Finally, the environment generates rewards
${\textnormal{r}}_{t}$ according to
${\textnormal{r}}_{t}\sim\mathrm{p}({\textnormal{r}}_{t}|\mathbf{s}_{t})$222We
use $\mathbf{x}$ and $\mathrm{p}(\cdot)$ to denote the generative process and
x and $p(\cdot)$ to denote the agent’s generative model..
The goal of RL is to learn a policy that maximises the expected sum of rewards
$\mathbb{E}[\sum_{t=0}^{\infty}\gamma^{t}{\textnormal{r}}_{t}]$ (Sutton et
al., 1998). In contrast, the goal of active inference is to maximise the
Bayesian model evidence for an agent’s generative model
$p^{\Phi}({\textnormal{o}},{\textnormal{s}},\theta)$, where $\theta\in\Theta$
denote model parameters.
Crucially, active inference allows that an agent’s generative model can be
biased towards favourable states of affairs (Friston, 2019b). In other words,
the model assigns probability to the parts of observation space that are both
likely and beneficial for an agent’s success. We use the notation
$p^{\Phi}(\cdot)$ to represent an arbitrary distribution encoding the agent’s
preferences.
Given a generative model, agents can perform approximate Bayesian inference by
encoding an arbitrary distribution $q({\textnormal{s}},\theta)$ and minimising
variational free energy
$\mathcal{F}=D_{\mathrm{KL}}\Big{(}q({\textnormal{s}},\theta)\|p^{\Phi}({\textnormal{o}},{\textnormal{s}},\theta)\Big{)}$.
When observations o are known, $\mathcal{F}$ can be minimized through standard
variational methods (Bishop, 2006; Buckley et al., 2017), causing
$q({\textnormal{s}},\theta)$ to tend towards the true posterior
$p({\textnormal{s}},\theta|{\textnormal{o}})$. Note that treating model
parameters $\theta$ as random variables casts learning as a process of
inference (Blundell et al., 2015).
In the current context, agents additionally maintain beliefs over policies
$\pi=\\{a_{0},...,a_{T}\\}$, which are themselves random variables. Policy
selection is then implemented by identifying $q(\pi)$ that minimizes
$\mathcal{F}$, thus casting policy selection as a process of approximate
inference (Friston et al., 2015). While the standard free energy functional
$\mathcal{F}$ is generally defined for a single time point $t$, $\pi$ refers
to a temporal sequence of variables. Therefore, we augment the free energy
functional $\mathcal{F}$ to encompass future variables, leading to the free
energy of the expected future $\mathcal{\tilde{F}}$. This quantity measures
the KL-divergence between a sequence of beliefs about future variables and an
agent’s biased generative model.
The goal is now to infer $q(\pi)$ in order to minimise $\mathcal{\tilde{F}}$.
We demonstrate that the resulting scheme naturally encompasses both
exploration and exploitation, thereby suggesting a deep relationship between
inference, learning and decision making.
## 3 Free energy of the expected future
Let ${\textnormal{x}}_{t:T}$ denote a sequence of variables through time,
${\textnormal{x}}_{t:T}=\\{{\textnormal{x}}_{t},...,{\textnormal{x}}_{T}\\}$.
We wish to minimize the free energy of the expected future
$\mathcal{\tilde{F}}$, which is defined as:
$\mathcal{\tilde{F}}=D_{\mathrm{KL}}\Big{(}q({\textnormal{o}}_{0:T},{\textnormal{s}}_{0:T},\theta,\pi)\|p^{\Phi}({\textnormal{o}}_{0:T},{\textnormal{s}}_{0:T},\theta)\Big{)}$
(1)
where $q({\textnormal{o}}_{t:T},{\textnormal{s}}_{t:T},\theta,\pi)$ represents
an agent’s beliefs about future variables, and
$p^{\Phi}({\textnormal{o}}_{t:T},{\textnormal{s}}_{t:T},\theta)$ represents an
agent’s biased generative model. Note that the beliefs about future variables
include beliefs about future observations, ${\textnormal{o}}_{t:T}$, which are
unknown and thus treated as random variables333For readers familiar with the
active inference framework, we highlight that the free energy of the expected
future differs from expected free energy (Friston et al., 2015). We leave a
discussion of the relative merits to future work..
In order to find the $q(\pi)$ that minimizes $\mathcal{\tilde{F}}$, we note
that (see Appendix C):
$\displaystyle\mathcal{\tilde{F}}=0\Rightarrow D_{\mathrm{KL}}\Big{(}q(\pi)\
\|\ e^{-\mathcal{\tilde{F_{\pi}}}}\Big{)}=0$ (2)
where
$\displaystyle\mathcal{\tilde{F_{\pi}}}=D_{\mathrm{KL}}\Big{(}q({\textnormal{o}}_{0:T},{\textnormal{s}}_{0:T},\theta|\pi)\
\|\ p^{\Phi}({\textnormal{o}}_{0:T},{\textnormal{s}}_{0:T},\theta)\Big{)}$ (3)
Thus, the free energy of the expected future is minimized when
$q(\pi)=\sigma(-\mathcal{\tilde{F_{\pi}}})$, or in other words, policies are
more likely when they minimise $\mathcal{\tilde{F_{\pi}}}$.
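For intuition, the fixed point $q(\pi)=\sigma(-\mathcal{\tilde{F_{\pi}}})$ can be computed directly once per-policy free energies are available. The sketch below uses placeholder free-energy values over a discrete set of candidate policies.

```python
import numpy as np

def policy_posterior(free_energies):
    """Return q(pi) = softmax(-F_pi) over a discrete set of candidate policies."""
    logits = -np.asarray(free_energies, dtype=float)
    logits -= logits.max()              # stabilise the exponential
    weights = np.exp(logits)
    return weights / weights.sum()

# Three hypothetical candidates: the lowest free energy receives the most mass.
print(policy_posterior([3.2, 0.7, 1.5]))  # ~[0.05, 0.65, 0.29]
```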
### 3.1 Exploration & exploitation
In order to provide an intuition for what minimizing
$\mathcal{\tilde{F_{\pi}}}$ entails, we factorize the agent’s generative
models as
$p^{\Phi}({\textnormal{o}}_{0:T},{\textnormal{s}}_{0:T},\theta)=p({\textnormal{s}}_{0:T},\theta|{\textnormal{o}}_{0:T})p^{\Phi}({\textnormal{o}}_{0:T})$,
implying that the model is only biased in its beliefs over observations. To
retain consistency with RL nomenclature, we treat ‘rewards’ r as a separate
observation modality, such that $p^{\Phi}({\textnormal{o}}_{t:T})$ specifies a
distribution over preferred rewards. We describe our implementation of
$p^{\Phi}({\textnormal{o}}_{t:T})$ in Appendix E. In a similar fashion,
$q({\textnormal{o}}_{t:T}|{\textnormal{s}}_{t:T},\theta,\pi)$ specifies
beliefs about future rewards, given a policy.
Given this factorization, it is straightforward to show that
$-\mathcal{\tilde{F_{\pi}}}$ decomposes into an expected information gain term
and an extrinsic term (see Appendix B)444The approximation in Eq. 4 arises
from the approximation
$q({\textnormal{s}}_{0:T},\theta|{\textnormal{o}}_{0:T},\pi)\approx
p({\textnormal{s}}_{0:T},\theta|{\textnormal{o}}_{0:T},\pi)$, which is
justifiable given that $q(\cdot)$ represents a variational approximation of
the true posterior (Friston et al., 2017a).:
$\displaystyle-\mathcal{\tilde{F_{\pi}}}\approx$
$\displaystyle-\underbrace{\mathbb{E}_{q({\textnormal{o}}_{0:T}|\pi)}\Big{[}D_{\mathrm{KL}}\Big{(}q({\textnormal{s}}_{0:T},\theta|{\textnormal{o}}_{0:T},\pi)\|q({\textnormal{s}}_{0:T},\theta|\pi)\Big{)}\Big{]}}_{\text{Expected
information gain}}$ (4)
$\displaystyle+\underbrace{\mathbb{E}_{q({\textnormal{s}}_{0:T},\theta|\pi)}\Big{[}D_{\mathrm{KL}}\Big{(}q({\textnormal{o}}_{0:T}|{\textnormal{s}}_{0:T},\theta,\pi)\|p^{\Phi}({\textnormal{o}}_{t:T})\Big{)}\Big{]}}_{\text{Extrinsic
term}}$
Maximizing Eq.4 has two functional consequences. First, it maximises the
expected information gain, which quantifies the amount of information an agent
expects to gain from executing some policy. As agents maintain beliefs about
the state of the environment and model parameters, this term promotes
exploration in both state and parameter space.
Second, it minimizes the extrinsic term - which is the KL-divergence between
an agent’s (policy-conditioned) beliefs about future observations and their
preferred observations. In the current context, it measures the KL-divergence
between the rewards an agent expects from a policy and the rewards an agent
desires. In summary, selecting policies to minimise $\tilde{\mathcal{F}}$
invokes a natural balance between exploration and exploitation.
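To make the extrinsic term concrete, suppose (as an illustrative assumption, not the exact implementation described in Appendix E) that both the predicted reward distribution $q({\textnormal{o}}_{\tau}|{\textnormal{s}}_{\tau},\theta,\pi)$ and the preference prior $p^{\Phi}({\textnormal{o}}_{\tau})$ are scalar Gaussians; the per-step extrinsic term then has a closed form:

```python
import numpy as np

def gauss_kl(mu_q, var_q, mu_p, var_p):
    """KL( N(mu_q, var_q) || N(mu_p, var_p) ) between scalar Gaussians."""
    return 0.5 * (np.log(var_p / var_q) + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0)

# Hypothetical per-step quantities for one imagined rollout:
mu_r, var_r = 0.3, 0.2        # predicted reward under q(o | s, theta, pi)
mu_pref, var_pref = 1.0, 0.1  # preferred reward under the biased prior p^Phi

extrinsic = gauss_kl(mu_r, var_r, mu_pref, var_pref)
print(extrinsic)  # smaller values indicate rollouts that match the preferences
```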
### 3.2 Relationship to probabilistic RL
In recent years, there have been several attempts to formalize RL in terms of
probabilistic inference (Levine, 2018), such as KL-control (Rawlik, 2013),
control-as-inference (Kappen et al., 2012), and state-marginal matching (Lee
et al., 2019). In many of these approaches, the RL objective is broadly
conceptualized as minimising
$D_{\mathrm{KL}}\Big{(}p({\textnormal{o}}_{0:T}|\pi)\|\
p^{\Phi}(o_{0:T})\Big{)}$555We acknowledge that not all objectives follow this
exact formulation.. In Appendix D, we demonstrate that the free energy of the
expected future $\mathcal{\tilde{F}}$ provides a tractable bound on this
objective:
$\displaystyle\mathcal{\tilde{F}}\geq
D_{\mathrm{KL}}\Big{(}p({\textnormal{o}}_{t:T}|\pi)\|\
p^{\Phi}(o_{t:T})\Big{)}$ (5)
These results suggest a deep homology between active inference and existing
approaches to probabilistic RL.
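The inequality in Eq. 5 is an instance of the general fact that a KL divergence between marginals never exceeds the KL divergence between the corresponding joints (by the chain rule for KL divergences). The following generic numerical check illustrates this with arbitrary discrete joint distributions; it does not use any quantity from our model.

```python
import numpy as np

def kl(p, q):
    """Discrete KL divergence between two distributions of the same shape."""
    return float(np.sum(p * np.log(p / q)))

rng = np.random.default_rng(0)
q_joint = rng.random((4, 3)); q_joint /= q_joint.sum()  # q(o, s)
p_joint = rng.random((4, 3)); p_joint /= p_joint.sum()  # p(o, s)

# Marginalising over s can only decrease the divergence.
print(kl(q_joint.sum(axis=1), p_joint.sum(axis=1)) <= kl(q_joint, p_joint))  # True
```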
## 4 Implementation
In this section, we describe an efficient implementation of the proposed
objective function in the context of model-based RL. To select actions, we
optimise $q(\pi)$ at each time step, and execute the first action specified by
the most likely policy. This requires (i) a method for evaluating beliefs
about future variables
$q({\textnormal{s}}_{t:T},{\textnormal{o}}_{t:T},\theta|\pi)$, (ii) an
efficient method for evaluating $\mathcal{\tilde{F}}_{\pi}$, and (iii) a method for
optimising $q(\pi)$ such that $q(\pi)=\sigma(-\mathcal{\tilde{F}}_{\pi})$.
#### Evaluating beliefs about the future
We factorize and evaluate the beliefs about the future as:
$\displaystyle q({\textnormal{s}}_{t:T},{\textnormal{o}}_{t:T},\theta|\pi)$
$\displaystyle=q(\theta)\prod_{t=\tau}^{T}q({\textnormal{o}}_{\tau}|{\textnormal{s}}_{\tau},\theta,\pi)q({\textnormal{s}}_{\tau}|{\textnormal{s}}_{\tau-1},\theta,\pi)$
(6) $\displaystyle
q({\textnormal{o}}_{\tau}|{\textnormal{s}}_{\tau},\theta,\pi)$
$\displaystyle=\mathbb{E}_{q({\textnormal{s}}_{\tau}|\theta,\pi)}\big{[}p({\textnormal{o}}_{\tau}|{\textnormal{s}}_{\tau})\big{]}$
$\displaystyle
q({\textnormal{s}}_{\tau}|{\textnormal{s}}_{\tau-1},\theta,\pi)$
$\displaystyle=\mathbb{E}_{q({\textnormal{s}}_{\tau-1}|\theta,\pi)}\big{[}p({\textnormal{s}}_{\tau}|{\textnormal{s}}_{\tau-1},\theta,\pi)\big{]}$
where we have here factorized the generative model as
$p({\textnormal{o}}_{\tau},{\textnormal{s}}_{\tau},\theta|\pi)=p({\textnormal{o}}_{\tau}|{\textnormal{s}}_{\tau},\pi)p({\textnormal{s}}_{\tau}|{\textnormal{s}}_{\tau-1},\theta,\pi)p(\theta)$.
We describe the implementation and learning of the likelihood
$p({\textnormal{o}}_{\tau}|{\textnormal{s}}_{\tau},\pi)$, transition model
$p({\textnormal{s}}_{\tau}|{\textnormal{s}}_{\tau-1},\theta,\pi)$ and
parameter prior $p(\theta)$ in Appendix E.
#### Evaluating $\mathcal{\tilde{F}_{\pi}}$
Note that
$-\mathcal{\tilde{F}}_{\pi}=\sum_{\tau=t}^{t+H}-\mathcal{\tilde{F}}_{\pi_{\tau}}$,
where $H$ is the planning horizon. Given beliefs about future variables, the
free energy of the expected future for a single time point can be efficiently
computed as (see Appendix G):
$\displaystyle-\mathcal{\tilde{F}}_{\pi_{\tau}}$ $\displaystyle\approx
E_{q({\textnormal{s}}_{\tau},\theta|\pi)}\Big{[}D_{\mathrm{KL}}\Big{(}q({\textnormal{o}}_{\tau}|{\textnormal{s}}_{\tau},\theta,\pi)\|p^{\Phi}({\textnormal{o}}_{\tau})\Big{)}\Big{]}$
(7)
$\displaystyle+\underbrace{\mathbf{H}[q({\textnormal{o}}_{\tau}|\pi)]-\mathbb{E}_{q({\textnormal{s}}_{\tau}|\pi)}\Big{[}\mathbf{H}[q({\textnormal{o}}_{\tau}|{\textnormal{s}}_{\tau},\pi)]\Big{]}}_{\text{State
information gain}}$
$\displaystyle+\underbrace{\mathbf{H}[q({\textnormal{s}}_{\tau}|{\textnormal{s}}_{\tau-1},\theta,\pi)]-\mathbb{E}_{q(\theta)}\Big{[}\mathbf{H}[q({\textnormal{s}}_{\tau}|{\textnormal{s}}_{\tau-1},\pi,\theta)]\Big{]}}_{\text{Parameter
information gain}}$
In the current paper, agents observe the true state of the environment
${\textnormal{s}}_{t}$, such that the only partial observability is in rewards
${\textnormal{r}}_{t}$. As a result, the second term of equation 7 is
redundant, as there is no uncertainty about states. The first (extrinsic) term
can be calculated analytically (see Appendix E). We describe our approximation
of the final term (parameter information gain) in Appendix G.
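As one possible instantiation of the parameter information gain (not necessarily the approximation of Appendix G), the mixture entropy $\mathbf{H}[q({\textnormal{s}}_{\tau}|{\textnormal{s}}_{\tau-1},\theta,\pi)]$ can be approximated by moment-matching a single Gaussian to the ensemble's predictions; disagreement between members then yields a positive information gain. The sketch below assumes each ensemble member outputs a diagonal-Gaussian prediction.

```python
import numpy as np

def gaussian_entropy(var):
    """Entropy of a diagonal Gaussian with variance vector `var` (last axis)."""
    return 0.5 * np.sum(np.log(2.0 * np.pi * np.e * var), axis=-1)

def parameter_info_gain(mus, vars_):
    """Estimate H[q(s')] - E_theta[ H[q(s'|theta)] ] from a (B, d) ensemble.

    The mixture entropy is approximated by the entropy of a moment-matched
    Gaussian, which upper-bounds the true mixture entropy."""
    mix_mu = mus.mean(axis=0)
    mix_var = vars_.mean(axis=0) + ((mus - mix_mu) ** 2).mean(axis=0)
    return gaussian_entropy(mix_var) - gaussian_entropy(vars_).mean()

rng = np.random.default_rng(1)
mus = rng.normal(size=(5, 3))          # members disagree -> positive gain
vars_ = np.full((5, 3), 0.1)
print(parameter_info_gain(mus, vars_))
```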
#### Optimising the policy distribution
We choose to parametrize $q(\pi)$ as a diagonal Gaussian. We use the CEM
algorithm (Rubinstein, 1997) to optimise the parameters of $q(\pi)$ such that
$q(\pi)\approx\sigma(-\mathcal{\tilde{F}}_{\pi})$; a minimal sketch of this
procedure is given after Algorithm 1. While this solution will fail to capture
the exact shape of $-\mathcal{\tilde{F}}_{\pi}$, agents need only identify the
peak of the landscape to enact the optimal policy.
The full algorithm for inferring $q(\pi)$ is provided in Algorithm 1.
Input: Planning horizon $H$ — Optimisation iterations $I$ — Number of
candidate policies $J$ — Current state ${\textnormal{s}}_{t}$ — Likelihood
$p({\textnormal{o}}_{\tau}|{\textnormal{s}}_{\tau})$ — Transition distribution
$p({\textnormal{s}}_{\tau}|{\textnormal{s}}_{\tau-1},\theta,\pi)$ — Parameter
distribution $P(\theta)$ — Global prior $p^{\Phi}({\textnormal{o}}_{\tau})$
Initialize factorized belief over action sequences
$q(\pi)\leftarrow\mathcal{N}(0,\mathbb{I})$.
for _$\mathrm{optimisation\ iteration}\ i=1...I$_ do
Sample $J$ candidate policies from $q(\pi)$
for _$\mathrm{candidate\ policy}\ j=1...J$_ do
$\pi^{(j)}\sim q(\pi)$
$-\mathcal{\tilde{F}}_{\pi}^{j}=0$
for _$\tau=t...t+H$_ do
$q({\textnormal{s}}_{\tau}|{\textnormal{s}}_{\tau-1},\theta,\pi^{(j)})=\mathbb{E}_{q({\textnormal{s}}_{\tau-1}|\theta,\pi^{(j)})}\big{[}p({\textnormal{s}}_{\tau}|{\textnormal{s}}_{\tau-1},\theta,\pi^{(j)})\big{]}$
$q({\textnormal{o}}_{\tau}|{\textnormal{s}}_{\tau},\theta,\pi^{(j)})=\mathbb{E}_{q({\textnormal{s}}_{\tau}|\theta,\pi^{(j)})}\big{[}p({\textnormal{o}}_{\tau}|{\textnormal{s}}_{\tau})\big{]}$
$-\mathcal{\tilde{F}}_{\pi}^{j}\leftarrow-\mathcal{\tilde{F}}_{\pi}^{j}+E_{q({\textnormal{s}}_{\tau},\theta|\pi^{(j)})}\big{[}D_{\mathrm{KL}}\big{(}q({\textnormal{o}}_{\tau}|{\textnormal{s}}_{\tau},\theta,\pi^{(j)})\|p^{\Phi}({\textnormal{o}}_{\tau})\big{)}\big{]}+\mathbf{H}[q({\textnormal{s}}_{\tau}|{\textnormal{s}}_{\tau-1},\theta,\pi^{(j)})]-\mathbb{E}_{q(\theta)}\big{[}\mathbf{H}[q({\textnormal{s}}_{\tau}|{\textnormal{s}}_{\tau-1},\pi^{(j)},\theta)]\big{]}$
end for
end for
$q(\pi)\leftarrow\mathrm{refit}(-\mathcal{\tilde{F}}_{\pi}^{j})$
end for
return $q(\pi)$
Algorithm 1 Inference of $q(\pi)$
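The following is a minimal, self-contained sketch of the CEM loop in Algorithm 1. The scoring function `neg_F` stands in for the evaluation of $-\mathcal{\tilde{F}}_{\pi}$ over a model rollout; the candidate and elite counts are illustrative, not the values used in our experiments.

```python
import numpy as np

def cem_plan(neg_F, horizon, act_dim, iters=5, candidates=500, elites=50, seed=0):
    """Fit a diagonal-Gaussian q(pi) over action sequences with CEM and return
    its mean; the first action of the returned plan is executed."""
    rng = np.random.default_rng(seed)
    mu = np.zeros((horizon, act_dim))
    sd = np.ones((horizon, act_dim))
    for _ in range(iters):
        pis = rng.normal(mu, sd, size=(candidates, horizon, act_dim))
        scores = np.array([neg_F(pi) for pi in pis])
        elite = pis[np.argsort(scores)[-elites:]]          # highest -F_pi
        mu, sd = elite.mean(axis=0), elite.std(axis=0) + 1e-6
    return mu

# Toy objective: prefer action sequences close to 0.5 in every dimension.
plan = cem_plan(lambda pi: -np.sum((pi - 0.5) ** 2), horizon=10, act_dim=2)
print(plan[0])  # first action of the most likely policy, ~[0.5, 0.5]
```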
## 5 Experiments
To determine whether our algorithm successfully balances exploration and
exploitation, we investigate its performance in domains with (i) well-shaped
rewards, (ii) extremely sparse rewards and (iii) a complete absence of
rewards. We use four tasks in total. For sparse rewards, we use the Mountain
Car and Cup Catch environments, where agents only receive reward when the goal
is achieved. For well-shaped rewards, we use the challenging Half Cheetah
environment, using both the running and flipping tasks. For domains without
reward, we use the Ant Maze environment, where there are no rewards and
success is measured by the percent of the maze covered (see Appendix H for
details on all environments).
Figure 1: (A) Mountain Car: Average return after each episode on the sparse-
reward Mountain Car task. Our algorithm achieves optimal performance in a
single trial. (B) Cup Catch: Average return after each episode on the sparse-
reward Cup Catch task. Here, results amongst algorithms are similar, with all
agents reaching asymptotic performance in around 20 episodes. (C & D) Half
Cheetah: Average return after each episode on the well-shaped Half Cheetah
environment, for the running and flipping tasks, respectively. We compare our
results to the average performance of SAC after 100 episodes learning,
demonstrating our algorithm can perform successfully in environments which do
not require directed exploration. Each line is the mean of 5 seeds and filled
regions show +/- standard deviation.
For environments with sparse rewards, we compare our algorithm to two
baselines, (i) a reward algorithm which only selects policies based on the
extrinsic term (i.e. ignores the parameter information gain), and (ii) a
variance algorithm that seeks out uncertain transitions by acting to maximise
the output variance of the transition model (see Appendix E). Note that the
variance agent is also augmented with the extrinsic term to enable comparison.
For environments with well-shaped rewards, we compare our algorithm to the
maximum reward obtained by a state-of-the-art model-free RL algorithm after
100 episodes, the soft actor-critic (SAC) (Haarnoja et al., 2018), which
encourages exploration by seeking to maximise the entropy of the policy
distribution. Finally, for environments without rewards, we compare our
algorithm to a random baseline, which conducts actions at random.
The Mountain Car experiment is shown in Fig. 1A, where we plot the total
reward obtained for each episode over 25 episodes, where each episode is at
most 200 time steps. These results demonstrate that our algorithm rapidly
explores and consistently reaches the goal, achieving optimal performance in a
single trial. In contrast, the benchmark algorithms were, on average, unable
to successfully explore and achieve good performance. We qualitatively confirm
this result by plotting the state space coverage with and without exploration
(Fig. 2B). Our algorithm performs comparably to benchmarks on the Cup Catch
environment (Fig. 1B). We hypothesize that this is because, while the reward
structure is technically sparse, it is simple enough to reach the goal with
random actions, and thus the directed exploration afforded by our method
provides little benefit.
Figure 1 C&D shows that our algorithm performs substantially better than a
state-of-the-art model-free algorithm after 100 episodes on the challenging
Half Cheetah tasks. Our algorithm thus demonstrates robust performance in
environments with well-shaped rewards and provides considerable improvements
in sample-efficiency, relative to SAC.
Finally, we demonstrate that our algorithm can perform well in environments
with no rewards, where the only goal is exploration. Figure 2C shows that our
algorithm's rate of exploration is substantially higher than that of a random
baseline in the ant-maze environment, resulting in a more substantial portion
of the maze being covered. This result demonstrates that the directed
exploration afforded by minimising the free energy of the expected future
proves beneficial in environments with no reward structure.
Taken together, these results show that our proposed algorithm - which
naturally balances exploration and exploitation - can successfully master
challenging domains with a variety of reward structures.
Figure 2: (A & B) Mountain Car state space coverage: We plot the points in
state-space visited by two agents - one that minimizes the free energy of the
expected future (FEEF) and one that maximises reward. The plots are from 20
episodes and show that the FEEF agent searches almost the entirety of state
space, while the reward agent is confined to a region that can be reached with
random actions. (C) Ant Maze Coverage: We plot the percentage of the maze
covered after 35 episodes, comparing the FEEF agent to an agent acting
randomly. These results are the average of 4 seeds.
## 6 Discussion
Despite originating from different intellectual traditions, active inference
and RL both address fundamental questions about adaptive decision-making in
unknown environments. Exploiting this conceptual overlap, we have applied an
active inference perspective to the reward maximization objective of RL,
recasting it as minimizing the divergence between desired and expected
futures. We derived a novel objective that naturally balances exploration and
exploitation and instantiated this objective within a model-based RL context.
Our algorithm exhibits robust performance and flexibility in a variety of
environments known to be challenging for RL. Moreover, we have shown that our
algorithm applies to a diverse set of reward structures. Conversely, by
implementing active inference using tools from RL, such as amortising
inference with neural networks, deep ensembles and sophisticated algorithms
for planning (CEM), we have demonstrated that active inference can scale to
high dimensional tasks with continuous state and action spaces.
While our results have highlighted the existing overlap between active
inference and RL, we end by reiterating two aspects of active inference that
may be of utility for RL. First, representing preferences as a distribution
over observations allows for greater flexibility in modelling and learning
non-scalar and non-monotonic reward functions. This may prove beneficial when
learning naturalistic tasks in complex nonstationary environments. Second, the
fact that both intrinsic and extrinsic value are complementary components of a
single objective - the free energy of the expected future - may suggest new
paths to tackling the exploration-exploitation dilemma. Our method also admits
promising directions for future work. These include investigating the effects
of different distributions over reward, extending the approach to models which
are hierarchical in time and space (Friston et al., 2018; Pezzulo et al.,
2018), and investigating the deep connections to alternative formulations of
probabilistic control.
## Acknowledgements
AT is funded by a PhD studentship from the Dr. Mortimer and Theresa Sackler
Foundation and the School of Engineering and Informatics at the University of
Sussex. BM is supported by an EPSRC-funded PhD studentship. CLB is supported
by BBSRC grant number BB/P022197/1. AT and AKS are grateful to the Dr.
Mortimer and Theresa Sackler Foundation, which supports the Sackler Centre for
Consciousness Science. AKS is additionally grateful to the Canadian Institute
for Advanced Research (Azrieli Programme on Brain, Mind, and Consciousness).
## Author Contributions
A.T, B.M and C.L.B contributed to the conceptualization of this work. A.T and
B.M contributed to the coding and generation of experimental results. A.T,
B.M, C.L.B, A.K.S contributed to the writing of the manuscript.
## References
* Bellemare et al. (2016) Marc G. Bellemare, Sriram Srinivasan, Georg Ostrovski, Tom Schaul, David Saxton, and Remi Munos. Unifying count-based exploration and intrinsic motivation. 2016\. URL http://arxiv.org/abs/1606.01868.
* Bishop (2006) Christopher M Bishop. _Pattern recognition and machine learning_. springer, 2006.
* Blundell et al. (2015) Charles Blundell, Julien Cornebise, Koray Kavukcuoglu, and Daan Wierstra. Weight uncertainty in neural networks. 2015\. URL http://arxiv.org/abs/1505.05424.
* Buckley et al. (2017) Christopher L Buckley, Chang Sub Kim, Simon McGregor, and Anil K Seth. The free energy principle for action and perception: A mathematical review. _Journal of Mathematical Psychology_ , 81:55–79, 2017.
* Catal et al. (2019) Ozan Catal, Johannes Nauta, Tim Verbelen, Pieter Simoens, and Bart Dhoedt. Bayesian policy selection using active inference. 2019\. URL http://arxiv.org/abs/1904.08149.
* Chatzilygeroudis et al. (2018) Konstantinos Chatzilygeroudis, Vassilis Vassiliades, Freek Stulp, Sylvain Calinon, and Jean-Baptiste Mouret. A survey on policy search algorithms for learning robot controllers in a handful of trials. 2018\. URL http://arxiv.org/abs/1807.02303.
* Chentanez et al. (2005) Nuttapong Chentanez, Andrew G. Barto, and Satinder P. Singh. Intrinsically motivated reinforcement learning. In L. K. Saul, Y. Weiss, and L. Bottou (eds.), _Advances in Neural Information Processing Systems 17_ , pp. 1281–1288. MIT Press, 2005\. URL http://papers.nips.cc/paper/2552-intrinsically-motivated-reinforcement-learning.pdf.
* Chitta et al. (2018) Kashyap Chitta, Jose M. Alvarez, and Adam Lesnikowski. Deep probabilistic ensembles: Approximate variational inference through KL regularization. 2018\. URL http://arxiv.org/abs/1811.02640.
* Chua et al. (2018a) Kurtland Chua, Roberto Calandra, Rowan McAllister, and Sergey Levine. Deep reinforcement learning in a handful of trials using probabilistic dynamics models. In _Advances in Neural Information Processing Systems_ , pp. 4754–4765, 2018a.
* Chua et al. (2018b) Kurtland Chua, Roberto Calandra, Rowan McAllister, and Sergey Levine. Deep reinforcement learning in a handful of trials using probabilistic dynamics models. 2018b. URL http://arxiv.org/abs/1805.12114.
* Cullen et al. (2018) Maell Cullen, Ben Davey, Karl J Friston, and Rosalyn J Moran. Active inference in openai gym: A paradigm for computational investigations into psychiatric illness. _Biological psychiatry: cognitive neuroscience and neuroimaging_ , 3(9):809–818, 2018.
* Dabney et al. (2020) Will Dabney, Zeb Kurth-Nelson, Naoshige Uchida, Clara Kwon Starkweather, Demis Hassabis, Rémi Munos, and Matthew Botvinick. A distributional code for value in dopamine-based reinforcement learning. _Nature_ , pp. 1–5, 2020.
* de Abril & Kanai (2018) Ildefons Magrans de Abril and Ryota Kanai. A unified strategy for implementing curiosity and empowerment driven reinforcement learning. _arXiv preprint arXiv:1806.06505_ , 2018.
* Fort et al. (2019) Stanislav Fort, Huiyi Hu, and Balaji Lakshminarayanan. Deep ensembles: A loss landscape perspective. 2019\. URL http://arxiv.org/abs/1912.02757.
* Friston (2019a) Karl Friston. A free energy principle for a particular physics. _arXiv preprint arXiv:1906.10184_ , 2019a.
* Friston (2019b) Karl Friston. A free energy principle for a particular physics. 2019b. URL https://arxiv.org/abs/1906.10184v1.
* Friston & Kiebel (2009) Karl Friston and Stefan Kiebel. Predictive coding under the free-energy principle. 364(1521):1211–1221, 2009. ISSN 1471-2970. doi: 10.1098/rstb.2008.0300.
* Friston et al. (2012) Karl Friston, Spyridon Samothrakis, and Read Montague. Active inference and agency: optimal control without cost functions. _Biological cybernetics_ , 106(8-9):523–541, 2012\.
* Friston et al. (2015) Karl Friston, Francesco Rigoli, Dimitri Ognibene, Christoph Mathys, Thomas Fitzgerald, and Giovanni Pezzulo. Active inference and epistemic value. 6(4):187–214, 2015. ISSN 1758-8936. doi: 10.1080/17588928.2015.1020053.
* Friston et al. (2016) Karl Friston, Thomas FitzGerald, Francesco Rigoli, Philipp Schwartenbeck, Giovanni Pezzulo, et al. Active inference and learning. _Neuroscience & Biobehavioral Reviews_, 68:862–879, 2016\.
* Friston et al. (2017a) Karl Friston, Thomas FitzGerald, Francesco Rigoli, Philipp Schwartenbeck, and Giovanni Pezzulo. Active inference: a process theory. _Neural computation_ , 29(1):1–49, 2017a.
* Friston et al. (2017b) Karl Friston, Thomas FitzGerald, Francesco Rigoli, Philipp Schwartenbeck, and Giovanni Pezzulo. Active inference: A process theory. 29(1):1–49, 2017b. ISSN 1530-888X. doi: 10.1162/NECO_a_00912.
* Friston et al. (2009) Karl J Friston, Jean Daunizeau, and Stefan J Kiebel. Reinforcement learning or active inference? _PloS one_ , 4(7), 2009.
* Friston et al. (2017c) Karl J. Friston, Marco Lin, Christopher D. Frith, Giovanni Pezzulo, J. Allan Hobson, and Sasha Ondobaka. Active inference, curiosity and insight. 29(10):2633–2683, 2017c. ISSN 0899-7667. doi: 10.1162/neco_a_00999. URL https://doi.org/10.1162/neco_a_00999.
* Friston et al. (2018) Karl J. Friston, Richard Rosch, Thomas Parr, Cathy Price, and Howard Bowman. Deep temporal models and active inference. 90:486–501, 2018. ISSN 0149-7634. doi: 10.1016/j.neubiorev.2018.04.004. URL http://www.sciencedirect.com/science/article/pii/S0149763418302525.
* Ha & Schmidhuber (2018) David Ha and Jürgen Schmidhuber. Recurrent world models facilitate policy evolution. In _Advances in Neural Information Processing Systems_ , pp. 2450–2462, 2018.
* Haarnoja et al. (2018) Tuomas Haarnoja, Aurick Zhou, Pieter Abbeel, and Sergey Levine. Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. _arXiv preprint arXiv:1801.01290_ , 2018.
* Hafner et al. (2018) Danijar Hafner, Timothy Lillicrap, Ian Fischer, Ruben Villegas, David Ha, Honglak Lee, and James Davidson. Learning latent dynamics for planning from pixels. _arXiv preprint arXiv:1811.04551_ , 2018.
* Hafner et al. (2019) Danijar Hafner, Timothy Lillicrap, Jimmy Ba, and Mohammad Norouzi. Dream to control: Learning behaviors by latent imagination. _arXiv preprint arXiv:1912.01603_ , 2019.
* Houthooft et al. (2016a) Rein Houthooft, Xi Chen, Yan Duan, John Schulman, Filip De Turck, and Pieter Abbeel. Curiosity-driven exploration in deep reinforcement learning via bayesian neural networks. _arXiv preprint arXiv:1605.09674_ , 2016a.
* Houthooft et al. (2016b) Rein Houthooft, Xi Chen, Yan Duan, John Schulman, Filip De Turck, and Pieter Abbeel. VIME: Variational information maximizing exploration. 2016b. URL http://arxiv.org/abs/1605.09674.
* Kaiser et al. (2019) Lukasz Kaiser, Mohammad Babaeizadeh, Piotr Milos, Blazej Osinski, Roy H. Campbell, Konrad Czechowski, Dumitru Erhan, Chelsea Finn, Piotr Kozakowski, Sergey Levine, Afroz Mohiuddin, Ryan Sepassi, George Tucker, and Henryk Michalewski. Model-based reinforcement learning for atari. 2019\. URL http://arxiv.org/abs/1903.00374.
* Kappen et al. (2012) Hilbert J Kappen, Vicenç Gómez, and Manfred Opper. Optimal control as a graphical model inference problem. _Machine learning_ , 87(2):159–182, 2012.
* Kim et al. (2018) Hyoungseok Kim, Jaekyeom Kim, Yeonwoo Jeong, Sergey Levine, and Hyun Oh Song. EMI: Exploration with mutual information. 2018\. URL http://arxiv.org/abs/1810.01176.
* Kingma & Welling (2013) Diederik P. Kingma and Max Welling. Auto-encoding variational bayes. 2013\. URL http://arxiv.org/abs/1312.6114.
* Knill & Pouget (2004) David C Knill and Alexandre Pouget. The bayesian brain: the role of uncertainty in neural coding and computation. _TRENDS in Neurosciences_ , 27(12):712–719, 2004\.
* Griffiths et al. (2008) Thomas L. Griffiths, Charles Kemp, and Joshua B. Tenenbaum. Bayesian models of cognition. 2008\.
* Lee et al. (2019) Lisa Lee, Benjamin Eysenbach, Emilio Parisotto, Eric Xing, Sergey Levine, and Ruslan Salakhutdinov. Efficient exploration via state marginal matching. _arXiv preprint arXiv:1906.05274_ , 2019.
* Leibfried et al. (2019) Felix Leibfried, Sergio Pascual-Diaz, and Jordi Grau-Moya. A unified bellman optimality principle combining reward maximization and empowerment. In _Advances in Neural Information Processing Systems_ , pp. 7867–7878, 2019.
* Levine (2018) Sergey Levine. Reinforcement learning and control as probabilistic inference: Tutorial and review. _arXiv preprint arXiv:1805.00909_ , 2018.
* Lindley (1956) D. V. Lindley. On a measure of the information provided by an experiment. 27(4):986–1005, 1956. ISSN 0003-4851, 2168-8990. doi: 10.1214/aoms/1177728069. URL https://projecteuclid.org/euclid.aoms/1177728069.
* Meyes et al. (2017) Richard Meyes, Hasan Tercan, Simon Roggendorf, Thomas Thiele, Christian Büscher, Markus Obdenbusch, Christian Brecher, Sabina Jeschke, and Tobias Meisen. Motion planning for industrial robots using reinforcement learning. _Procedia CIRP_ , 63:107–112, 2017.
* Millidge (2019) Beren Millidge. Deep active inference as variational policy gradients. 2019\. URL http://arxiv.org/abs/1907.03876.
* Mirchev et al. (2018) Atanas Mirchev, Baris Kayalibay, Maximilian Soelch, Patrick van der Smagt, and Justin Bayer. Approximate bayesian inference in spatial environments. 2018\. URL http://arxiv.org/abs/1805.07206.
* Mnih et al. (2015) Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. _Nature_ , 518(7540):529–533, 2015.
* Mohamed & Rezende (2015) Shakir Mohamed and Danilo Jimenez Rezende. Variational information maximisation for intrinsically motivated reinforcement learning. 2015\. URL http://arxiv.org/abs/1509.08731.
* Murphy (1982) KP Murphy. A survey of pomdp solution techniques: Theory. _Models, and algorithms, management science_ , 28, 1982.
* Nagabandi et al. (2018) Anusha Nagabandi, Gregory Kahn, Ronald S Fearing, and Sergey Levine. Neural network dynamics for model-based deep reinforcement learning with model-free fine-tuning. In _2018 IEEE International Conference on Robotics and Automation (ICRA)_ , pp. 7559–7566. IEEE, 2018.
* Nagabandi et al. (2019) Anusha Nagabandi, Kurt Konoglie, Sergey Levine, and Vikash Kumar. Deep dynamics models for learning dexterous manipulation. _arXiv preprint arXiv:1909.11652_ , 2019.
* O’Donoghue et al. (2017) Brendan O’Donoghue, Ian Osband, Remi Munos, and Volodymyr Mnih. The uncertainty bellman equation and exploration. _arXiv preprint arXiv:1709.05380_ , 2017.
* Okada & Taniguchi (2019) Masashi Okada and Tadahiro Taniguchi. Variational inference MPC for bayesian model-based reinforcement learning. 2019\. URL http://arxiv.org/abs/1907.04202.
* Oudeyer & Kaplan (2009) Pierre-Yves Oudeyer and Frederic Kaplan. What is intrinsic motivation? a typology of computational approaches. _Frontiers in neurorobotics_ , 1:6, 2009.
* Parr & Friston (2017) Thomas Parr and Karl J Friston. The active construction of the visual world. _Neuropsychologia_ , 104:92–101, 2017.
* Parr et al. (2019) Thomas Parr, Dimitrije Markovic, Stefan J Kiebel, and Karl J Friston. Neuronal message passing using mean-field, bethe, and marginal approximations. _Scientific reports_ , 9(1):1–18, 2019.
* Pathak et al. (2017) Deepak Pathak, Pulkit Agrawal, Alexei A Efros, and Trevor Darrell. Curiosity-driven exploration by self-supervised prediction. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops_ , pp. 16–17, 2017.
* Pezzulo et al. (2016) Giovanni Pezzulo, Emilio Cartoni, Francesco Rigoli, Léo Pio-Lopez, and Karl Friston. Active inference, epistemic value, and vicarious trial and error. _Learning & Memory_, 23(7):322–338, 2016.
* Pezzulo et al. (2018) Giovanni Pezzulo, Francesco Rigoli, and Karl J. Friston. Hierarchical active inference: A theory of motivated control. _Trends in Cognitive Sciences_ , 22(4):294 – 306, 2018. ISSN 1364-6613. doi: https://doi.org/10.1016/j.tics.2018.01.009. URL http://www.sciencedirect.com/science/article/pii/S1364661318300226.
* Polydoros & Nalpantidis (2017) Athanasios S Polydoros and Lazaros Nalpantidis. Survey of model-based reinforcement learning: Applications on robotics. _Journal of Intelligent & Robotic Systems_, 86(2):153–173, 2017.
* Rawlik et al. (2013) Konrad Rawlik, Marc Toussaint, and Sethu Vijayakumar. On stochastic optimal control and reinforcement learning by approximate inference. In _Twenty-Third International Joint Conference on Artificial Intelligence_ , 2013.
* Rawlik (2013) Konrad Cyrus Rawlik. On probabilistic inference approaches to stochastic optimal control. 2013\.
* Rubinstein (1997) Reuven Y Rubinstein. Optimization of computer simulation models with rare events. _European Journal of Operational Research_ , 99(1):89–112, 1997.
* Sajid et al. (2019) Noor Sajid, Philip J Ball, and Karl J Friston. Demystifying active inference. _arXiv preprint arXiv:1909.10863_ , 2019.
* Schmidhuber (1991) Jürgen Schmidhuber. A possibility for implementing curiosity and boredom in model-building neural controllers. In _Proc. of the international conference on simulation of adaptive behavior: From animals to animats_ , pp. 222–227, 1991.
* Schmidhuber (2007) Jürgen Schmidhuber. Simple algorithmic principles of discovery, subjective beauty, selective attention, curiosity & creativity. In _International Conference on Discovery Science_ , pp. 26–38. Springer, 2007.
* Schwartenbeck et al. (2019) Philipp Schwartenbeck, Johannes Passecker, Tobias U Hauser, Thomas HB FitzGerald, Martin Kronbichler, and Karl J Friston. Computational mechanisms of curiosity and goal-directed exploration. 8:e41703, 2019. ISSN 2050-084X. doi: 10.7554/eLife.41703. URL https://doi.org/10.7554/eLife.41703.
* Shyam et al. (2018) Pranav Shyam, Wojciech Jaśkowski, and Faustino Gomez. Model-based active exploration. _arXiv preprint arXiv:1810.12162_ , 2018.
* Shyam et al. (2019) Pranav Shyam, Wojciech Jaśkowski, and Faustino Gomez. Model-based active exploration. In _International Conference on Machine Learning_ , pp. 5779–5788, 2019. URL http://proceedings.mlr.press/v97/shyam19a.html.
* Silver et al. (2017) David Silver, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez, Thomas Hubert, Lucas Baker, Matthew Lai, Adrian Bolton, et al. Mastering the game of go without human knowledge. _Nature_ , 550(7676):354–359, 2017.
* Still & Precup (2012) Susanne Still and Doina Precup. An information-theoretic approach to curiosity-driven reinforcement learning. 131(3):139–148, 2012. ISSN 1611-7530. doi: 10.1007/s12064-011-0142-z.
* Storck et al. (1995) Jan Storck, Sepp Hochreiter, and Jürgen Schmidhuber. Reinforcement driven information acquisition in non-deterministic environments. In _Proceedings of the international conference on artificial neural networks, Paris_ , volume 2, pp. 159–164. Citeseer, 1995.
* Sun et al. (2011) Yi Sun, Faustino Gomez, and Juergen Schmidhuber. Planning to be surprised: Optimal bayesian exploration in dynamic environments. 2011\. URL http://arxiv.org/abs/1103.5708.
* Sutton et al. (1998) Richard S Sutton, Andrew G Barto, et al. _Introduction to reinforcement learning_ , volume 135. MIT press Cambridge, 1998.
* Teigen (2018) Bjørn Ivar Teigen. An active learning perspective on exploration in reinforcement learning. 2018\. URL https://www.duo.uio.no/handle/10852/62823.
* Tschantz et al. (2019a) Alexander Tschantz, Manuel Baltieri, Anil K. Seth, and Christopher L. Buckley. Scaling active inference. 2019a. URL http://arxiv.org/abs/1911.10601.
* Tschantz et al. (2019b) Alexander Tschantz, Anil K. Seth, and Christopher L. Buckley. Learning action-oriented models through active inference. pp. 764969, 2019b. doi: 10.1101/764969. URL https://www.biorxiv.org/content/10.1101/764969v1.
* Tschiatschek et al. (2018) Sebastian Tschiatschek, Kai Arulkumaran, Jan Stühmer, and Katja Hofmann. Variational inference for data-efficient model learning in POMDPs. 2018\. URL http://arxiv.org/abs/1805.09281.
* Ueltzhöffer (2018) Kai Ueltzhöffer. Deep active inference. 112(6):547–573, 2018. ISSN 0340-1200, 1432-0770. doi: 10.1007/s00422-018-0785-7. URL http://arxiv.org/abs/1709.02341.
* Watter et al. (2015) Manuel Watter, Jost Tobias Springenberg, Joschka Boedecker, and Martin Riedmiller. Embed to control: A locally linear latent dynamics model for control from raw images. 2015\. URL http://arxiv.org/abs/1506.07365.
* Williams et al. (2016) Grady Williams, Paul Drews, Brian Goldfain, James M Rehg, and Evangelos A Theodorou. Aggressive driving with model predictive path integral control. In _2016 IEEE International Conference on Robotics and Automation (ICRA)_ , pp. 1433–1440. IEEE, 2016.
* Yarin Gal et al. (2016) Yarin Gal, Rowan McAllister, and Carl Edward Rasmussen. Improving PILCO with bayesian neural network dynamics models. In _Data-Efficient Machine Learning workshop_ , 2016.
## Appendix A Related work
#### Active inference
There is an extensive literature on active inference within discrete state-
spaces, covering a wide variety of tasks, ranging from epistemic foraging in
saccades (Parr & Friston, 2017; Friston, 2019b; Schwartenbeck et al., 2019)
and maze exploration (Friston et al., 2015; Pezzulo et al., 2016; Friston et
al., 2016) to playing Atari games (Cullen et al., 2018). Active inference also
comes equipped with a well-developed neural process theory (Friston et al.,
2017a; Parr et al., 2019) which can account for a substantial range of neural
dynamics. There have also been prior attempts to scale up active inference to
continuous RL tasks (Tschantz et al., 2019a; Millidge, 2019; Ueltzhöffer,
2018), which we build upon here.
#### Model-based RL
Model-based reinforcement learning has recently undergone a renaissance, with
implementations vastly exceeding the sample efficiency of model-free methods,
while also approaching their asymptotic performance (Ha & Schmidhuber, 2018;
Nagabandi et al., 2018; Chua et al., 2018a; Hafner et al., 2018). There have
been recent successes on challenging domains such as Atari (Kaiser et al.,
2019), and high dimensional robot locomotion (Hafner et al., 2018; 2019) and
manipulation (Nagabandi et al., 2019) tasks. Key advances include variational
autoencoders (Kingma & Welling, 2013) to flexibly construct latent spaces in
partially observed environments, Bayesian approaches such as Bayes by backprop
(Houthooft et al., 2016a), deep ensembles (Shyam et al., 2018; Chua et al.,
2018a), and other variational approaches (Okada & Taniguchi, 2019;
Tschiatschek et al., 2018; Yarin Gal et al., 2016), which quantify uncertainty
in the dynamics models, and enable the model to learn a latent space that is
useful for action (Tschantz et al., 2019b; Watter et al., 2015). Finally,
progress has been aided by powerful planning algorithms capable of online
planning in continuous state and action spaces (Williams et al., 2016;
Rubinstein, 1997).
#### Intrinsic Measures
Using intrinsic measures to encourage exploration has a long history in RL
(Schmidhuber, 1991; 2007; Storck et al., 1995; Oudeyer & Kaplan, 2009;
Chentanez et al., 2005). Recent model-free and model-based intrinsic measures
that have been proposed in the literature include policy entropy (Rawlik,
2013; Rawlik et al., 2013; Haarnoja et al., 2018), state entropy (Lee et al.,
2019), information-gain (Houthooft et al., 2016b; Okada & Taniguchi, 2019; Kim
et al., 2018; Shyam et al., 2019; Teigen, 2018), prediction error (Pathak et
al., 2017), divergence of ensembles (Shyam et al., 2019; Chua et al., 2018b),
uncertain state bonuses (Bellemare et al., 2016; O’Donoghue et al., 2017), and
empowerment (de Abril & Kanai, 2018; Leibfried et al., 2019; Mohamed &
Rezende, 2015). Information gain additionally has a substantial history
outside the RL framework, going back to (Lindley, 1956; Still & Precup, 2012;
Sun et al., 2011).
## Appendix B Derivation for the free energy of the expected future
We begin with the full free energy of the expected future and decompose this
into the free energy of the expected future given policies, and the negative
policy entropy:
$\displaystyle\mathcal{\tilde{F}}$
$\displaystyle=\mathbb{E}_{q({\textnormal{o}},{\textnormal{s}},\theta,\pi)}[\log
q({\textnormal{o}},{\textnormal{s}},\theta,\pi)-\log
p^{\Phi}({\textnormal{o}},{\textnormal{s}},\theta)]$ (8)
$\displaystyle=\mathbb{E}_{q(\pi)}[\mathcal{\tilde{F}}_{\pi}]-\mathbf{H}[q(\pi)]$
We now show the free energy of the expected future given policies can be
decomposed into extrinsic and information gain terms:
$\displaystyle\mathcal{\tilde{F}}_{\pi}$
$\displaystyle=E_{q({\textnormal{o}},{\textnormal{s}},\theta,\pi)}[\log
q({\textnormal{o}},{\textnormal{s}},\theta,\pi)-\log
p^{\Phi}({\textnormal{o}},{\textnormal{s}},\theta)]$ (9)
$\displaystyle=\mathbb{E}_{q({\textnormal{o}},{\textnormal{s}},\theta|\pi)}[\log
q({\textnormal{s}},\theta|\pi)+\log
q({\textnormal{o}}|{\textnormal{s}},\theta,\pi)-\log
p({\textnormal{s}},\theta|{\textnormal{o}})-\log p^{\Phi}({\textnormal{o}})]$
$\displaystyle\approx\mathbb{E}_{q({\textnormal{o}},{\textnormal{s}},\theta|\pi)}[\log
q({\textnormal{s}},\theta|\pi)+\log
q({\textnormal{o}}|{\textnormal{s}},\theta,\pi)-\log
q({\textnormal{s}},\theta|{\textnormal{o}},\pi)-\log
p^{\Phi}({\textnormal{o}})]$
$\displaystyle=\mathbb{E}_{q({\textnormal{o}},{\textnormal{s}},\theta|\pi)}[\log
q({\textnormal{s}},\theta|\pi)-\log
q({\textnormal{s}},\theta|{\textnormal{o}},\pi)]+\mathbb{E}_{q({\textnormal{o}},{\textnormal{s}},\theta|\pi)}[\log
q({\textnormal{o}}|{\textnormal{s}},\theta,\pi)-\log
p^{\Phi}({\textnormal{o}})]$ $\displaystyle-\mathcal{\tilde{F}}_{\pi}$
$\displaystyle=\mathbb{E}_{q({\textnormal{o}},{\textnormal{s}},\theta|\pi)}[\log
q({\textnormal{s}},\theta|{\textnormal{o}},\pi)-\log
q({\textnormal{s}},\theta|\pi)]+\mathbb{E}_{q({\textnormal{o}},{\textnormal{s}},\theta|\pi)}[\log
p^{\Phi}({\textnormal{o}})-\log
q({\textnormal{o}}|{\textnormal{s}},\theta,\pi)]$
$\displaystyle=\underbrace{\mathbb{E}_{q({\textnormal{o}}|\pi)}\Big{[}D_{\mathrm{KL}}\Big{(}q({\textnormal{s}},\theta|{\textnormal{o}},\pi)\|q({\textnormal{s}},\theta|\pi)\Big{)}\Big{]}}_{\text{Expected
Information
Gain}}-\underbrace{\mathbb{E}_{q({\textnormal{s}},\theta|\pi)}\Big{[}D_{\mathrm{KL}}\Big{(}q({\textnormal{o}}|{\textnormal{s}},\theta,\pi)\|p^{\Phi}({\textnormal{o}})\Big{)}\Big{]}}_{\text{Extrinsic
Value}}$
Where we have assumed that $p({\textnormal{s}},\theta|{\textnormal{o}})\approx
q({\textnormal{s}},\theta|{\textnormal{o}},\pi)$. We wish to minimize
$\mathcal{\tilde{F}}_{\pi}$, and thus maximize $-\mathcal{\tilde{F}}_{\pi}$.
This means we wish to maximize the information gain and minimize the KL-
divergence between expected and preferred observations.
By noting that $q({\textnormal{s}},\theta|{\textnormal{o}},\pi)\approx
q({\textnormal{s}}|{\textnormal{o}},\pi)q(\theta|{\textnormal{s}})$, we can
split the expected information gain term into state and parameter information
gain terms:
$\displaystyle\mathbb{E}_{q({\textnormal{o}}|\pi)}\Big{[}D_{\mathrm{KL}}\Big{(}q({\textnormal{s}},\theta|{\textnormal{o}},\pi)\|q({\textnormal{s}},\theta|\pi)\Big{)}\Big{]}$
(10)
$\displaystyle=\mathbb{E}_{q({\textnormal{o}}|\pi)q({\textnormal{s}},\theta|{\textnormal{o}},\pi)}\big{[}\log
q({\textnormal{s}},\theta|{\textnormal{o}},\pi)-\log
q({\textnormal{s}},\theta|\pi)\big{]}$
$\displaystyle=\mathbb{E}_{q({\textnormal{o}}|\pi)q({\textnormal{s}},\theta|{\textnormal{o}},\pi)}\big{[}\log
q({\textnormal{s}}|{\textnormal{o}},\pi)+\log q(\theta|{\textnormal{s}})-\log
q({\textnormal{s}}|\theta,\pi)-\log q(\theta)\big{]}$
$\displaystyle=\mathbb{E}_{q({\textnormal{o}}|\pi)q({\textnormal{s}},\theta|{\textnormal{o}},\pi)}\big{[}\log
q({\textnormal{s}}|{\textnormal{o}},\pi)-\log
q({\textnormal{s}}|\theta,\pi)]\big{]}+\mathbb{E}_{q({\textnormal{o}}|\pi)q({\textnormal{s}},\theta|{\textnormal{o}},\pi)}\big{[}\log
q(\theta|{\textnormal{s}})-\log q(\theta)\big{]}$
$\displaystyle=\underbrace{\mathbb{E}_{q({\textnormal{o}}|\pi)q(\theta)}\Big{[}D_{\mathrm{KL}}\big{(}q({\textnormal{s}}|{\textnormal{o}},\pi)\|q({\textnormal{s}}|\theta)\big{)}\Big{]}}_{\text{Expected
State Information
Gain}}+\underbrace{\mathbb{E}_{q({\textnormal{s}}|\theta)}\Big{[}D_{\mathrm{KL}}\big{(}q(\theta|{\textnormal{s}})\|q(\theta)\big{)}\Big{]}}_{\text{Expected
Parameter Information Gain}}$
## Appendix C Derivation of the optimal policy
We derive the distribution for $q(\pi)$ which minimizes $\mathcal{\tilde{F}}$:
$\displaystyle\mathcal{\tilde{F}}$
$\displaystyle=D_{\mathrm{KL}}\Big{(}q({\textnormal{o}},{\textnormal{s}},\theta,\pi)\|p^{\Phi}({\textnormal{o}},{\textnormal{s}},\theta)\Big{)}$
(11)
$\displaystyle=\mathbb{E}_{q({\textnormal{o}},{\textnormal{s}},\theta,\pi)}[\log
q({\textnormal{o}},{\textnormal{s}},\theta|\pi)+\log q(\pi)-\log
p^{\Phi}({\textnormal{o}},{\textnormal{s}},\theta,\pi)]$
$\displaystyle=\mathbb{E}_{q(\pi)}\Big{[}\mathbb{E}_{q({\textnormal{o}},{\textnormal{s}},\theta|\pi)}[\log
q(\pi)-[\log p^{\Phi}({\textnormal{o}},{\textnormal{s}},\theta)-\log
q({\textnormal{o}},{\textnormal{s}},\theta|\pi)]\Big{]}$
$\displaystyle=\mathbb{E}_{q(\pi)}\Big{[}\log
q(\pi)-\mathbb{E}_{q({\textnormal{o}},{\textnormal{s}},\theta|\pi)}[\log
p^{\Phi}({\textnormal{o}},{\textnormal{s}},\theta)-\log
q({\textnormal{o}},{\textnormal{s}},\theta|\pi)]\Big{]}$
$\displaystyle=\mathbb{E}_{q(\pi)}\Big{[}\log
q(\pi)-\big{[}-\mathbb{E}_{q({\textnormal{o}},{\textnormal{s}},\theta|\pi)}[\log
q({\textnormal{o}},{\textnormal{s}},\theta|\pi)-\log
p^{\Phi}({\textnormal{o}},{\textnormal{s}},\theta)]\big{]}\Big{]}$
$\displaystyle=\mathbb{E}_{q(\pi)}\Big{[}\log q(\pi)-\log
e^{-\big{[}-\mathbb{E}_{q({\textnormal{o}},{\textnormal{s}},\theta|\pi)}[\log
q({\textnormal{o}},{\textnormal{s}},\theta|\pi)-\log
p^{\Phi}({\textnormal{o}},{\textnormal{s}},\theta)]\big{]}}\Big{]}$
$\displaystyle=\mathbb{E}_{q(\pi)}\Big{[}\log q(\pi)-\log
e^{-D_{\mathrm{KL}}\big{(}q({\textnormal{o}},{\textnormal{s}},\theta|\pi)\|p^{\Phi}({\textnormal{o}},{\textnormal{s}},\theta)\big{)}}\Big{]}$
$\displaystyle=D_{\mathrm{KL}}\Big{(}q(\pi)\|e^{-D_{\mathrm{KL}}\big{(}q({\textnormal{o}},{\textnormal{s}},\theta|\pi)\|p^{\Phi}({\textnormal{o}},{\textnormal{s}},\theta)\big{)}}\Big{)}$
$\displaystyle=D_{\mathrm{KL}}\Big{(}q(\pi)\|e^{-\mathcal{\tilde{F}}_{\pi}}\Big{)}$
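The final line shows that $\mathcal{\tilde{F}}$ is minimized when $q(\pi)\propto e^{-\mathcal{\tilde{F}}_{\pi}}$, i.e. when the policy distribution is a softmax over the negative expected free energies of the candidate policies. A minimal numpy sketch of this normalisation (the function name is ours, for illustration only):

```python
import numpy as np

def policy_posterior(free_energies):
    """Return q(pi) proportional to exp(-F_pi) for a vector of per-policy
    expected free energies, i.e. a softmax over their negatives."""
    z = -np.asarray(free_energies, dtype=float)
    z -= z.max()                 # subtract the max for numerical stability
    w = np.exp(z)
    return w / w.sum()

# e.g. policy_posterior([1.3, 0.2, 2.5]) puts most mass on the second policy
```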
## Appendix D Derivation of RL bound
Here we show that the free energy of the expected future is a bound on the
divergence between expected and desired observations. The proof proceeds
straightforwardly by importance sampling on the approximate posterior and then
applying Jensen’s inequality:
$\displaystyle D_{\mathrm{KL}}\big{(}q({\textnormal{o}}_{t:T}|\pi)\|\
p^{\Phi}({\textnormal{o}}_{t:T})\big{)}$
$\displaystyle=\mathbb{E}_{q({\textnormal{o}}_{t:T}|\pi)}\big{[}\log
q({\textnormal{o}}_{t:T}|\pi)-\log p^{\Phi}({\textnormal{o}}_{t:T})\big{]}$ (12)
$\displaystyle=\mathbb{E}_{q({\textnormal{o}}_{t:T}|\pi)}\bigg{[}\log\big{(}\int
d{\textnormal{s}}_{t:T}\int
d\theta_{t:T}\frac{q({\textnormal{o}}_{t:T},{\textnormal{s}}_{t:T},\theta_{t:T}|\pi)q({\textnormal{s}}_{t:T},\theta_{t:T}|{\textnormal{o}}_{t:T})}{p^{\Phi}({\textnormal{o}}_{t:T})q({\textnormal{s}}_{t:T},\theta_{t:T}|{\textnormal{o}}_{t:T})}\big{)}\bigg{]}$
$\displaystyle\leq\mathbb{E}_{q({\textnormal{o}}_{t:T},{\textnormal{s}}_{t:T},\theta_{t:T}|\pi)}\Big{[}\log\big{(}\frac{q({\textnormal{o}}_{t:T},{\textnormal{s}}_{t:T},\theta_{t:T}|\pi)}{p^{\Phi}({\textnormal{o}}_{t:T},{\textnormal{s}}_{t:T},\theta_{t:T})}\big{)}\Big{]}$
$\displaystyle=D_{\mathrm{KL}}\Big{(}q({\textnormal{o}}_{t:T},{\textnormal{s}}_{t:T},\theta_{t:T}|\pi)\|p^{\Phi}({\textnormal{o}}_{t:T},{\textnormal{s}}_{t:T},\theta_{t:T})\Big{)}=\mathcal{\tilde{F}}$
## Appendix E Model details
In the current work, we implemented our probabilistic model using an ensemble-
based approach (Chua et al., 2018a; Fort et al., 2019; Chitta et al., 2018).
Here, an ensemble of point-estimate parameters
$\theta=\\{\theta_{0},...,\theta_{B}\\}$, trained on different batches of the
dataset $\mathcal{D}$, is maintained and treated as a set of samples from the
posterior distribution $p(\theta|\mathcal{D})$. Besides consistency with the
active inference framework, probabilistic models enable the active resolution
of model uncertainty, capture both epistemic and aleatoric uncertainty, and
help avoid over-fitting in low data regimes (Fort et al., 2019; Chitta et al.,
2018; Chatzilygeroudis et al., 2018; Chua et al., 2018b).
This design choice means that we use a trajectory sampling method when
evaluating beliefs about future variables (Chua et al., 2018a), as each pass
through the transition model
$p({\textnormal{s}}_{t}|{\textnormal{s}}_{t-1},\theta,\pi)$ yields $B$ samples
of ${\textnormal{s}}_{t}$, one per ensemble member.
#### Transition model
We implement the transition model
$p({\textnormal{s}}_{t}|{\textnormal{s}}_{t-1},\theta,\pi)$ as
$\mathcal{N}({\textnormal{s}}_{t};f_{\theta}({\textnormal{s}}_{t-1}),f_{\theta}({\textnormal{s}}_{t-1}))$,
where $f_{\theta}(\cdot)$ denotes a set of function approximators
$f_{\theta}(\cdot)=\\{f_{\theta_{0}}(\cdot),...,f_{\theta_{B}}(\cdot)\\}$. In
the current paper, $f_{\theta_{i}}({\textnormal{s}}_{t-1})$ is a two-layer
feed-forward network with 400 hidden units and swish activation function.
Following previous work, we predict state deltas rather than the next states
(Shyam et al., 2018).
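For illustration, the following PyTorch sketch shows one way to set up such an ensemble; the mean/log-variance output head, the explicit action input, and all names (e.g. `EnsembleTransition`) are our assumptions for illustration rather than the authors' code:

```python
import torch
import torch.nn as nn

class EnsembleTransition(nn.Module):
    """Ensemble of B two-layer MLPs with swish (SiLU) activations that predict
    a Gaussian over the state delta, which is added back to the input state."""
    def __init__(self, state_dim, action_dim, ensemble_size=5, hidden=400):
        super().__init__()
        self.members = nn.ModuleList(
            nn.Sequential(
                nn.Linear(state_dim + action_dim, hidden), nn.SiLU(),
                nn.Linear(hidden, hidden), nn.SiLU(),
                nn.Linear(hidden, 2 * state_dim),  # mean and log-variance of delta
            )
            for _ in range(ensemble_size)
        )

    def forward(self, s, a):
        x = torch.cat([s, a], dim=-1)
        mean, logvar = [], []
        for f in self.members:
            delta_mu, delta_logvar = f(x).chunk(2, dim=-1)
            mean.append(s + delta_mu)   # state-delta trick: s_t = s_{t-1} + delta
            logvar.append(delta_logvar)
        # shapes: (ensemble_size, batch, state_dim), i.e. B predictions per pass
        return torch.stack(mean), torch.stack(logvar)
```

Sampling once from each member's Gaussian then yields the $B$ particles used by the trajectory sampling scheme described above.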
#### Reward model
We implement the reward model as
$p({\textnormal{o}}_{\tau}|{\textnormal{s}}_{\tau},\theta,\pi)=\mathcal{N}({\textnormal{o}}_{\tau};f_{\lambda}({\textnormal{s}}_{\tau}),\mathbf{1})$,
where $f_{\lambda}({\textnormal{s}}_{\tau})$ is some arbitrary function
approximator (formally, this is an observation model, but we retain RL
terminology for clarity). In the current paper,
$f_{\lambda}({\textnormal{s}}_{\tau})$ is a two-layer feed-forward network
with 400 hidden units and ReLU activation function. Learning a reward model
offers several plausible benefits outside of the active inference framework,
as it abolishes the requirement that rewards can be directly calculated from
observations or states (Chua et al., 2018a).
#### Global prior
We implement the global prior $p^{\Phi}({\textnormal{o}})$ as a Gaussian with
unit variance centred around the maximum reward for the respective
environment. We leave it to future work to explore the effects of more
intricate priors.
## Appendix F Implementation details
For all tasks, we initialize a dataset $\mathcal{D}$ with a single episode of
data collected from a random agent. For each episode, we train the ensemble
transition model and reward model for 100 epochs, using the negative
log-likelihood loss. We found cold-starting training at each episode to lead to
more consistent behaviour. We then let the agent act in the environment based
on Algorithm 1, and append the collected data to the dataset $\mathcal{D}$.
We list the full set of hyperparameters below:
Hyperparameter | Value
---|---
Hidden layer size | 400
Learning rate | 0.001
Training epochs | 100
Planning horizon | 30
N-candidates (CEM) | 700
Top-candidates (CEM) | 70
Optimisation iterations (CEM) | 7
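The CEM entries refer to the cross-entropy method used to optimise over candidate action sequences during planning. A minimal numpy sketch with the above hyperparameters is given below; it assumes `score_fn` returns a value to be maximised for a candidate plan (e.g. the negative expected free energy), and the function name and signature are hypothetical:

```python
import numpy as np

def cem_plan(score_fn, action_dim, horizon=30, n_candidates=700,
             top_candidates=70, iterations=7, seed=0):
    """Refine a Gaussian over action sequences towards high-scoring plans
    and return the first action of the final mean plan (sketch)."""
    rng = np.random.default_rng(seed)
    mean = np.zeros((horizon, action_dim))
    std = np.ones((horizon, action_dim))
    for _ in range(iterations):
        noise = rng.standard_normal((n_candidates, horizon, action_dim))
        plans = mean + std * noise
        scores = np.asarray([score_fn(p) for p in plans])
        elite = plans[np.argsort(scores)[-top_candidates:]]  # keep best candidates
        mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-6
    return mean[0]

# toy usage: prefer plans with small actions
first_action = cem_plan(lambda p: -np.sum(p ** 2), action_dim=2)
```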
## Appendix G Expected information gain
In Eq. 4, expected parameter information gain was presented in the form
$\mathbb{E}_{q({\textnormal{s}}|\theta)}D_{\mathrm{KL}}\big{(}q(\theta|{\textnormal{s}})\|q(\theta)\big{)}$.
While this provides a nice intuition about the effect of the information gain
term on behaviour, it cannot be computed directly, due to the intractability
of identifying true posteriors over parameters. We here show that, through a
simple application of Bayes’ rule, it is straightforward to derive an
equivalent expression for the expected information gain as the divergence
between the state likelihood and marginal, given the parameters, which
decomposes into an entropy of an average minus an average of entropies:
$\displaystyle\mathbb{E}_{q({\textnormal{s}}|\theta)}D_{\mathrm{KL}}\big{(}q(\theta|{\textnormal{s}})\|q(\theta)\big{)}$
(13)
$\displaystyle=\mathbb{E}_{q({\textnormal{s}}|\theta)q(\theta|{\textnormal{s}})}\big{[}\log
q(\theta|{\textnormal{s}})-\log q(\theta)\big{]}$
$\displaystyle=\mathbb{E}_{q({\textnormal{s}},\theta)}\big{[}\log
q({\textnormal{s}}|\theta)+\log q(\theta)-\log q({\textnormal{s}})-\log
q(\theta)\big{]}$
$\displaystyle=\mathbb{E}_{q({\textnormal{s}},\theta)}\big{[}\log
q({\textnormal{s}}|\theta)-\log q({\textnormal{s}})\big{]}$
$\displaystyle=\mathbb{E}_{q(\theta)q({\textnormal{s}}|\theta)}\big{[}\log
q({\textnormal{s}}|\theta)\big{]}-\mathbb{E}_{q(\theta)q({\textnormal{s}}|\theta)}\big{[}\log\mathbb{E}_{q(\theta)}q({\textnormal{s}}|\theta)\big{]}$
$\displaystyle=-\mathbb{E}_{q(\theta)}\mathbf{H}\big{[}q({\textnormal{s}}|\theta)\big{]}+\mathbf{H}\big{[}\mathbb{E}_{q(\theta)}q({\textnormal{s}}|\theta)\big{]}$
The first term is the (negative) average of the entropies. The average over
the parameters $\theta$ is achieved simply by averaging over the dynamics
models in the ensemble. The entropy of the likelihoods
$\mathbf{H}[q({\textnormal{s}}|\theta)]$ can be computed analytically, since
each network in the ensemble outputs a Gaussian distribution for which the
entropy has a known closed form. The second term is the entropy of the
average $\mathbf{H}[\mathbb{E}_{q(\theta)}q({\textnormal{s}}|\theta)]$.
Unfortunately, this term does not have an analytical solution. However, it can
be approximated numerically using a variety of techniques for entropy
estimation. In our paper, we use the nearest neighbour entropy approximation
(Mirchev et al., 2018).
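For illustration, a numpy/scipy sketch of this two-term estimator is given below, assuming each ensemble member returns diagonal-Gaussian predictions (arrays of means and log-variances) and using the Kozachenko-Leonenko nearest-neighbour estimator for the mixture entropy; all names are hypothetical:

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import digamma, gammaln

def gaussian_entropy(logvar):
    """Analytic entropy of a diagonal Gaussian: 0.5 * sum(log(2*pi*e) + logvar)."""
    return 0.5 * np.sum(np.log(2 * np.pi * np.e) + logvar, axis=-1)

def knn_entropy(samples, k=3):
    """Kozachenko-Leonenko nearest-neighbour entropy estimate (sketch)."""
    n, d = samples.shape
    dists, _ = cKDTree(samples).query(samples, k + 1)  # first hit is the point itself
    log_ball = 0.5 * d * np.log(np.pi) - gammaln(d / 2 + 1)  # log-volume of unit d-ball
    return digamma(n) - digamma(k) + log_ball + d * np.mean(np.log(dists[:, -1] + 1e-12))

def expected_info_gain(means, logvars, rng=None):
    """-E_q(theta) H[q(s|theta)] + H[E_q(theta) q(s|theta)] from ensemble output;
    means/logvars are lists with one (n, d) array per ensemble member."""
    rng = rng if rng is not None else np.random.default_rng()
    avg_entropy = np.mean([gaussian_entropy(lv).mean() for lv in logvars])
    mixture = np.concatenate([m + np.exp(0.5 * lv) * rng.standard_normal(m.shape)
                              for m, lv in zip(means, logvars)])
    return knn_entropy(mixture) - avg_entropy
```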
## Appendix H Environment details
The Mountain Car environment
($\mathcal{S}\subseteq\mathbb{R}^{2},\ \mathcal{A}\subseteq\mathbb{R}^{1}$)
requires an agent to drive up the side of a hill, where the car is
underactuated, requiring it first to gain momentum by driving up the opposing
hill. A reward of one is generated when the agent reaches the goal, and zero
otherwise. The Cup Catch environment
($\mathcal{S}\subseteq\mathbb{R}^{8},\ \mathcal{A}\subseteq\mathbb{R}^{2}$)
requires the agent to actuate a cup and catch a ball attached to its bottom. A
reward of one is generated when the agent reaches the goal, and zero
otherwise. The Half Cheetah environment
($\mathcal{S}\subseteq\mathbb{R}^{17},\ \mathcal{A}\subseteq\mathbb{R}^{6}$)
describes a running planar biped. For the running task, a reward of
$v-0.1||a||^{2}$ is received, where $v$ is the agent’s velocity, and for the
flipping task, a reward of $\epsilon-0.1||a||^{2}$ is received, where
$\epsilon$ is the angular velocity. The Ant Maze environment
($\mathcal{S}\subseteq\mathbb{R}^{29},\ \mathcal{A}\subseteq\mathbb{R}^{8}$)
involves a quadruped agent exploring a rectangular maze.
# Causal mediation analysis with double machine learning
Helmut Farbmacher+, Martin Huber*, Lukáš Lafférs++, Henrika Langen*, Martin
Spindler**
+Max Planck Society, Munich Center for the Economics of Aging
*University of Fribourg, Dept. of Economics
++Matej Bel University, Dept. of Mathematics
**University of Hamburg, Faculty of Business Sciences
Abstract: This paper combines causal mediation analysis with double machine
learning to control for observed confounders in a data-driven way under a
selection-on-observables assumption in a high-dimensional setting. We consider
the average indirect effect of a binary treatment operating through an
intermediate variable (or mediator) on the causal path between the treatment
and the outcome, as well as the unmediated direct effect. Estimation is based
on efficient score functions, which possess a multiple robustness property
w.r.t. misspecifications of the outcome, mediator, and treatment models. This
property is key for selecting these models by double machine learning, which
is combined with data splitting to prevent overfitting in the estimation of
the effects of interest. We demonstrate that the direct and indirect effect
estimators are asymptotically normal and root-$n$ consistent under specific
regularity conditions and investigate the finite sample properties of the
suggested methods in a simulation study when considering lasso as machine
learner. We also provide an empirical application to the U.S. National
Longitudinal Survey of Youth, assessing the indirect effect of health
insurance coverage on general health operating via routine checkups as
mediator, as well as the direct effect. We find a moderate short-term effect
of health insurance coverage on general health which is, however, not mediated
by routine checkups.
Keywords: mediation, direct and indirect effects, causal mechanisms, double
machine learning, efficient score.
JEL classification: C21.
Addresses for correspondence: Helmut Farbmacher, Max Planck Society, Munich
Center for the Economics of Aging, Amalienstr. 33, 80799 Munich, Germany,
<EMAIL_ADDRESS>; Martin Huber, University of Fribourg, Department of
Economics, Bd. de Pérolles 90, 1700 Fribourg, Switzerland, <EMAIL_ADDRESS>;
Lukáš Lafférs, Matej Bel University, Department of Mathematics, Tajovskeho 40,
97411 Banská Bystrica, Slovakia, <EMAIL_ADDRESS>; Henrika Langen, University
of Fribourg, Department of Economics, Bd. de Pérolles 90, 1700 Fribourg,
Switzerland, <EMAIL_ADDRESS>; Martin Spindler, University of Hamburg, Faculty
of Business Administration, Moorweidenstr. 18, 20148 Hamburg, Germany,
<EMAIL_ADDRESS>. Lafférs acknowledges support provided by the Slovak Research
and Development Agency under contract no. APVV-17-0329 and VEGA-1/0692/20.
## 1 Introduction
Causal mediation analysis aims at decomposing the causal effect of a treatment
on an outcome of interest into an indirect effect operating through a mediator
(or intermediate outcome) and a direct effect comprising any causal mechanisms
not operating through that mediator. Even if the treatment is random, direct
and indirect effects are generally not identified by naively controlling for
the mediator without accounting for its likely endogeneity, see Robins and
Greenland (1992). While much of the earlier literature either neglected
endogeneity issues or relied on restrictive linear models, see for instance
Cochran (1957), Judd and Kenny (1981), and Baron and Kenny (1986), more recent
contributions consider more general identification approaches using the
potential outcome framework. Some of the numerous examples are Robins and
Greenland (1992), Pearl (2001), Robins (2003), Petersen, Sinisi, and van der
Laan (2006), VanderWeele (2009), Imai, Keele, and Yamamoto (2010), Hong
(2010), Albert and Nelson (2011), Imai and Yamamoto (2013), Tchetgen Tchetgen
and Shpitser (2012), Vansteelandt, Bekaert, and Lange (2012), and Huber
(2014). Using the denomination of Pearl (2001), the literature distinguishes
between natural direct and indirect effects, where mediators are set to their
potential values ‘naturally’ occurring under a specific treatment assignment,
and the controlled direct effect, where the mediator is set to a ‘prescribed’
value.
The vast majority of identification strategies relies on selection-on-
observable-type assumptions implying that the treatment and the mediator are
conditionally exogenous when controlling for observed covariates. Empirical
examples in economics and policy evaluation include Flores and Flores-Lagunes
(2009), Heckman, Pinto, and Savelyev (2013), Keele, Tingley, and Yamamoto
(2015), Conti, Heckman, and Pinto (2016), Huber (2015), Huber, Lechner, and
Mellace (2017), Bellani and Bia (2018), Bijwaard and Jones (2018), and Huber,
Lechner, and Strittmatter (2018). Such studies typically rely on the
(implicit) assumption that the covariates to be controlled for can be
unambiguously preselected by the researcher, for instance based on
institutional knowledge or theoretical considerations. This assumes away the
uncertainty related to selecting which covariates to include and entails
incorrect inference under the common practice of choosing, and refining, the
set of covariates based on its predictive power.
To improve upon this practice, this paper combines causal mediation analysis
based on efficient score functions, see Tchetgen Tchetgen and Shpitser (2012),
with double machine learning as outlined in Chernozhukov, Chetverikov,
Demirer, Duflo, Hansen, Newey, and Robins (2018) for a data-driven control of
observed confounders to obtain valid inference under specific regularity
conditions. In particular, one important condition is that the number of
important confounders (those required for the selection-on-observables
assumptions to hold at least approximately) is not too large relative to the
sample size. However, the set of these important confounders need not be known
a priori, and the set of potential confounders can even be larger than the
sample size (different from conventional semiparametric methods, the double
machine learning framework does not require the set of potential confounders
to be restricted by Donsker conditions, but permits this set to be unbounded
and to grow with the sample size). This is particularly useful in
high-dimensional data with a
vast number of covariates that could potentially serve as control variables,
which can render researcher-based covariate selection complicated if not
infeasible. We demonstrate root-$n$ consistency and asymptotic normality of
the proposed effect estimators under specific regularity conditions by
verifying that the general framework of Chernozhukov, Chetverikov, Demirer,
Duflo, Hansen, Newey, and Robins (2018) for well-behaved double machine
learning is satisfied in our context.
Tchetgen Tchetgen and Shpitser (2012) suggest estimating natural direct and
indirect effects based on the efficient score functions of the potential
outcomes, which requires plug-in estimates for the conditional mean outcome,
mediator density, and treatment probability. Analogous to doubly robust
estimation of average treatment effects, see Robins, Rotnitzky, and Zhao
(1994) and Robins and Rotnitzky (1995), the resulting estimators are
semiparametrically efficient if all models of the plug-in estimates are
correctly specified and remain consistent even if one model is misspecified.
Our first contribution is to show that the efficient score function of
Tchetgen Tchetgen and Shpitser (2012) satisfies the so-called Neyman (1959)
orthogonality discussed in Chernozhukov, Chetverikov, Demirer, Duflo, Hansen,
Newey, and Robins (2018), which makes the estimation of direct and indirect
effects rather insensitive to (local) estimation errors in the plug-in
estimates. Second, we show that by an application of Bayes’ Theorem, the score
function of Tchetgen Tchetgen and Shpitser (2012) can be transformed in a way
that avoids estimation of the conditional mediator density and show it to be
Neyman orthogonal. This appears particularly useful when the mediator is a
vector of variables and/or continuous. Third, we establish the score function
required for estimating the controlled direct effect along with its Neyman
orthogonality.
Neyman orthogonality is key for the fruitful application of double machine
learning, as it ensures robustness of effect estimation to (local) estimation
errors in the nuisance parameters, which is crucial when applying modern
machine learning methods. Random sample
splitting – to estimate the parameters of the plug-in models in one part of
the data, while predicting the score function and estimating the direct and
indirect effects in the other part – avoids overfitting the plug-in models
(e.g. by controlling for too many covariates). It increases the variance by
only using part of the data for effect estimation. This is avoided by cross-
fitting which consists of swapping the roles of the data parts for estimating
the plug-in models and the treatment effects to ultimately average over the
effects estimates in either part. When combining efficient score-based effect
estimation with sample splitting, $n^{-1/2}$-convergence of treatment effect
estimation can be obtained under a substantially slower convergence of
$n^{-1/4}$ for the plug-in estimates, see Chernozhukov, Chetverikov, Demirer,
Duflo, Hansen, Newey, and Robins (2018). Under specific regularity conditions,
this convergence rate can be attained by various machine learning algorithms
including lasso regression, see Tibshirani (1996).
We investigate the estimators’ finite sample behavior based on the score
function of Tchetgen Tchetgen and Shpitser (2012) and the alternative score
suggested in this paper when using post-lasso regression as machine learner
for the plug-in estimates. Furthermore, we apply our method to data from the
National Longitudinal Survey of Youth 1997 (NLSY97), where a large set of
potential control variables is available. We disentangle the short-term effect
of health insurance coverage on general health into an indirect effect which
operates via the incidence of a routine checkup in the last year and a direct
effect covering any other causal mechanisms. While we find a moderate health-
improving direct effect, the indirect effect is very close to zero. We
therefore do not find evidence that health insurance coverage affects general
health through routine checkups in the short run.
We note that basing estimation on efficient score functions is not the only
framework satisfying the previously mentioned robustness w.r.t. estimation
errors in plug-in parameters. This property is also satisfied by the targeted
maximum likelihood estimation (TMLE) framework by van der Laan and Rubin
(2006), see the discussion in Díaz (2020). TMLE relies on iteratively updating
(or robustifying) an initial estimate of the parameter of interest based on
regression steps that involve models for the plug-in parameters. Zheng and van
der Laan (2012) have developed an estimation approach for natural direct and
indirect effects using TMLE, where the plug-in parameters might be estimated
by machine learners, e.g. the super learner, an ensemble method suggested by
van der Laan, Polley, and Hubbard (2007). This iterative estimation approach
is therefore an alternative to the double machine learning-based approach
suggested in this paper, for which we demonstrate $n^{-1/2}$-consistency under
specific conditions.
This paper proceeds as follows. Section 2 introduces the concepts of direct
and indirect effect identification in the potential outcome framework. In
Section 3, we present the identifying assumptions and discuss identification
based on efficient score functions. Section 4 proposes an estimation procedure
based on double machine learning and shows root-$n$ consistency and asymptotic
normality under specific conditions. Section 5 provides a simulation study.
Section 6 presents an empirical application to data from the NLSY97. Section 7
concludes.
## 2 Definition of direct and indirect effects
We aim at decomposing the average treatment effect (ATE) of a binary
treatment, denoted by $D$, on an outcome of interest, $Y$, into an indirect
effect operating through a discrete mediator, $M$, and a direct effect that
comprises any causal mechanisms other than through $M$. We use the potential
outcome framework, see for instance Rubin (1974), to define the direct and
indirect effects of interest, see also Ten Have, Joffe, Lynch, Brown, Maisto,
and Beck (2007) and Albert (2008) for further examples in the context of
mediation. $M(d)$ denotes the potential mediator under treatment value $d$
$\in$ $\\{0,1\\}$, while $Y(d,m)$ denotes the potential outcome as a function
of both the treatment and some value $m$ of the mediator $M$.222Throughout
this paper, capital letters denote random variables and small letters specific
values of random variables. The observed outcome and mediator correspond to
the respective potential variables associated with the actual treatment
assignment, i.e. $Y=D\cdot Y(1,M(1))+(1-D)\cdot Y(0,M(0))$ and $M=D\cdot
M(1)+(1-D)\cdot M(0)$, implying that any other potential outcomes or mediators
are a priori (i.e. without further statistical assumptions) unknown.
We denote the ATE by $\Delta=E[Y(1,M(1))-Y(0,M(0))]$, which comprises both
direct and indirect effects. To decompose the latter, note that the average
direct effect, denoted by $\theta(d)$, equals the difference in mean potential
outcomes when switching the treatment while keeping the potential mediator
fixed, which blocks the causal mechanism via $M$:
$\displaystyle\theta(d)$ $\displaystyle=$ $\displaystyle
E[Y(1,M(d))-Y(0,M(d))],\quad d\in\\{0,1\\}.$ (1)
The (average) indirect effect, $\delta(d)$, equals the difference in mean
potential outcomes when switching the potential mediator values while keeping
the treatment fixed to block the direct effect.
$\displaystyle\delta(d)$ $\displaystyle=$ $\displaystyle
E[Y(d,M(1))-Y(d,M(0))],\quad d\in\\{0,1\\}.$ (2)
Robins and Greenland (1992) and Robins (2003) referred to these parameters as
pure/total direct and indirect effects, Flores and Flores-Lagunes (2009) as
net and mechanism average treatment effects, and Pearl (2001) as natural
direct and indirect effects, which is the denomination used in the remainder
of this paper.
The ATE is the sum of the natural direct and indirect effects defined upon
opposite treatment states $d$, which can be easily seen from adding and
subtracting the counterfactual outcomes $E[Y(0,M(1))]$ and $E[Y(1,M(0))]$:
$\displaystyle\Delta$ $\displaystyle=$ $\displaystyle E[Y(1,M(1))-Y(0,M(0))]$
(3) $\displaystyle=$ $\displaystyle
E[Y(1,M(1))-Y(0,M(1))]+E[Y(0,M(1))-Y(0,M(0))]=\theta(1)+\delta(0)$
$\displaystyle=$ $\displaystyle
E[Y(1,M(0))-Y(0,M(0))]+E[Y(1,M(1))-Y(1,M(0))]=\theta(0)+\delta(1).$
The distinction between $\theta(1)$ and $\theta(0)$ as well as $\delta(1)$ and
$\delta(0)$ hints at the possibility of heterogeneous effects across treatment
states $d$ due to interaction effects between $D$ and $M$. For instance, the
direct effect of health insurance coverage ($D$) on general health ($Y$) might
depend on whether or not a person underwent routine check-ups ($M$). We note
that a different approach to dealing with the interaction effects between $D$
and $M$ is a three-way decomposition of the ATE into the pure direct effect
($\theta(0)$), the pure indirect effect ($\delta(0)$) and the mediated
interaction effect, see VanderWeele (2013).
The so-called controlled direct effect, denoted by $\gamma(m)$, is a further
parameter that received much attention in the mediation literature. It
corresponds to the difference in mean potential outcomes when switching the
treatment and fixing the mediator at some value $m$:
$\gamma(m)=E[Y(1,m)-Y(0,m)],\quad\quad\text{for }m\text{ in the support of
}M.$ (4)
In contrast to $\theta(d)$, which is conditional on the potential mediator
value ‘naturally’ realized for treatment $d$ which may differ across subjects,
$\gamma(m)$ is conditional on enforcing the same mediator state in the entire
population. The two parameters are only equivalent in the absence of an
interaction between $D$ and $M$. Whether the natural or controlled direct
effect is more relevant depends on the feasibility and desirability to
intervene on or prescribe the mediator, see Pearl (2001) for a discussion of
the ‘descriptive’ and ‘prescriptive’ natures of natural and controlled
effects. There is no indirect effect parameter matching the controlled direct
effect, implying that the difference between the total effect and the
controlled direct effect does not, in general, correspond to the indirect
effect, unless there is no interaction between $D$ and $M$, see e.g. Kaufman,
MacLehose, and Kaufman (2004).
## 3 Assumptions and identification
Our identification strategy is based on the assumption that confounding of the
treatment-outcome, treatment-mediator, and mediator-outcome relations can be
controlled for by conditioning on observed covariates, denoted by $X$. The
latter must not contain variables that are influenced by the treatment, such
that $X$ is typically evaluated prior to treatment assignment. Figure 1
provides a graphical illustration using a directed acyclic graph, with arrows
representing causal effects. Each of $D$, $M$, and $Y$ might be causally
affected by distinct and statistically independent sets of unobservables not
displayed in Figure 1, but none of these unobservables may jointly affect two
or all three elements $(D,M,Y)$ conditional on $X$.
Figure 1: Causal paths under conditional exogeneity given pre-treatment
covariates
Formally, the first assumption invokes conditional independence of the
treatment and potential mediators or outcomes given $X$. This restriction has
been referred to as conditional independence, selection on observables, or
exogeneity in the treatment evaluation literature, see e.g. Imbens (2004).
This rules out confounders jointly affecting the treatment on the one hand and
the mediator and/or the outcome on the other hand conditional on $X$. In non-
experimental data, the plausibility of this assumption critically hinges on
the richness of $X$.
Assumption 1 (conditional independence of the treatment):
$\\{Y(d^{\prime},m),M(d)\\}\bot D|X$ for all $d^{\prime},d\in\\{0,1\\}$ and
$m$ in the support of $M$,
where ‘$\bot$’ denotes statistical independence. The second assumption
requires the mediator to be conditionally independent of the potential
outcomes given the treatment and the covariates.
Assumption 2 (conditional independence of the mediator):
$Y(d^{\prime},m)\bot M|D=d,X=x$ for all $d^{\prime},d\in\\{0,1\\}$ and $m,x$
in the support of $M,X$.
Assumption 2 rules out confounders jointly affecting the mediator and the
outcome conditional on $D$ and $X$. If $X$ is pre-treatment (as is common to
avoid controlling for variables potentially affected by the treatment), this
implies the absence of post-treatment confounders of the mediator-outcome
relation. Such a restriction needs to be rigorously scrutinized and appears
for instance less plausible if the time window between the measurement of the
treatment and the mediator is large in a world of time-varying variables.
The third assumption imposes common support on the conditional treatment
probability across treatment states.
Assumption 3 (common support):
$\Pr(D=d|M=m,X=x)>0$ for all $d\in\\{0,1\\}$ and $m,x$ in the support of
$M,X$.
The common support assumption, also known as positivity or covariate overlap
assumption, requires the conditional probability of being or not being treated
given $M,X$, henceforth referred to as the propensity score, to be larger than
zero. It implies the weaker condition that $\Pr(D=d|X=x)>0$ such that the
treatment must not be deterministic in $X$, otherwise no comparable units in
terms of $X$ are available across treatment states. By Bayes’ theorem,
Assumption 3 also implies that $\Pr(M=m|D=d,X=x)>0$ if $M$ is discrete or that
the conditional density of $M$ given $D,X$ is larger than zero if $M$ is
continuous. Conditional on $X$, the mediator state must not be deterministic
in the treatment, otherwise no comparable units in terms of the treatment are
available across mediator states. Assumptions 1 to 3 are standard in the
causal mediation literature, see for instance Imai, Keele, and Yamamoto
(2010), Tchetgen Tchetgen and Shpitser (2012), Vansteelandt, Bekaert, and
Lange (2012), and Huber (2014), or also Pearl (2001), Petersen, Sinisi, and
van der Laan (2006), and Hong (2010), for closely related restrictions.
Tchetgen Tchetgen and Shpitser (2012) discuss identification of the
counterfactual $E[Y(d,M(1-d))]$ based on the efficient score function:
$\displaystyle E[Y(d,M(1-d))]$ $\displaystyle=$ $\displaystyle E[\psi_{d}],$
$\displaystyle\textrm{ with }\psi_{d}$ $\displaystyle=$
$\displaystyle\frac{I\\{D=d\\}\cdot f(M|1-d,X)}{p_{d}(X)\cdot
f(M|d,X)}\cdot[Y-\mu(d,M,X)]$ (5)
$\displaystyle+\frac{I\\{D=1-d\\}}{1-p_{d}(X)}\cdot\Big{[}\mu(d,M,X)-\int_{m\in\mathcal{M}}\mu(d,m,X)\cdot
f(m|1-d,X)\ dm\Big{]}$ $\displaystyle+\int_{m\in\mathcal{M}}\mu(d,m,X)\cdot
f(m|1-d,X)\ dm$
where $f(M|D,X)$ denotes the conditional density of $M$ given $D$ and $X$ (if
$M$ is discrete, this is a conditional probability and integrals need to be
replaced by sums), $p_{d}(X)=\Pr(D=d|X)$ the probability of treatment $D=d$
given $X$, and $\mu(D,M,X)=E(Y|D,M,X)$ the conditional expectation of outcome
$Y$ given $D$, $M$, and $X$. Expression (5) satisfies a multiple robustness property in
the sense that estimation remains consistent even if one out of the three
models for the plug-in parameters $f(M|D,X)$, $p_{d}(X)$, and $\mu(D,M,X)$ is
misspecified.
To derive an alternative expression for identification, note that by Bayes’
Law,
$\displaystyle\frac{f(M|1-d,X)}{p_{d}(X)\cdot
f(M|d,X)}=\frac{\big{(}1-p_{d}(M,X)\big{)}\cdot
f(M|X)}{1-p_{d}(X)}\cdot\frac{p_{d}(X)}{p_{d}(M,X)\cdot f(M|X)\cdot
p_{d}(X)}=\frac{1-p_{d}(M,X)}{p_{d}(M,X)\cdot\big{(}1-p_{d}(X)\big{)}}$
where $f(M|X)$ is the conditional distribution of $M$ given $X$ and
$p_{d}(M,X)=\Pr(D=d|M,X)$. Furthermore,
$\displaystyle\int\mu(d,m,X)\cdot
f(m|1-d,X)dm=E\Big{[}\mu(d,M,X)\Big{|}D=1-d,X\Big{]}.$
Therefore, an alternative and also multiply robust representation of (5) is
$\displaystyle E[Y(d,M(1-d))]$ $\displaystyle=$ $\displaystyle
E[\psi_{d}^{*}],$ $\displaystyle\textrm{ with }\psi_{d}^{*}$ $\displaystyle=$
$\displaystyle\frac{I\\{D=d\\}\cdot(1-p_{d}(M,X))}{p_{d}(M,X)\cdot(1-p_{d}(X))}\cdot[Y-\mu(d,M,X)]$
(6)
$\displaystyle+$
$\displaystyle\frac{I\\{D=1-d\\}}{1-p_{d}(X)}\cdot\Big{[}\mu(d,M,X)-E\Big{[}\mu(d,M,X)\Big{|}D=1-d,X\Big{]}\Big{]}$
$\displaystyle+$ $\displaystyle E\Big{[}\mu(d,M,X)\Big{|}D=1-d,X\Big{]}.$
Similarly to the approaches based on inverse probability weighting (rather
than efficient scores) in Huber (2014) and Tchetgen Tchetgen (2013), (6)
avoids conditional mediator densities, which appears attractive if $M$ is
continuous and/or multidimensional. On the other hand, it requires the
estimation of an additional parameter, namely the nested conditional mean
$E[\mu(d,M,X)|D=1-d,X]$, as similarly found in Miles, Shpitser, Kanki, Meloni,
and Tchetgen Tchetgen (2020), who suggest a multiply robust score function for
assessing path-specific effects. Alternatively to rearranging the score
function by Tchetgen Tchetgen and Shpitser (2012) as outlined above, ratios of
conditional densities, as for instance appearing in the first component of (5),
might be treated as an additional nuisance parameter and estimated directly via
density-ratio estimation, see e.g. Sugiyama, Kawanabe, and Chui (2010) for
density-ratio estimation in high-dimensional settings. Such methods based on
directly estimating the density ratio without going through estimating the
densities in numerator and denominator separately are shown in several studies
to compare favorably with estimating the densities separately, see e.g.
Kanamori, Suzuki, and Sugiyama (2012).
Efficient score-based identification of $E[Y(d,M(d))]$ under
$Y(d,m)\bot\\{D,M\\}|X=x$ (see Assumption 1) has been established in the
literature on doubly robust ATE estimation, see for instance Robins,
Rotnitzky, and Zhao (1994) and Hahn (1998):
$\displaystyle E[Y(d,M(d))]=E[\alpha_{d}]\textrm{ with
}\alpha_{d}=\frac{I\\{D=d\\}\cdot[Y-\mu(d,X)]}{p_{d}(X)}+\mu(d,X)$ (7)
where $\mu(D,X)=E(Y|D,M(D),X)=E(Y|D,X)$ is the conditional expectation of
outcome $Y$ given $D$ and $X$.
For identifying the controlled direct effect, we now assume that $M$ is
discrete (while this need not be the case in the context of natural direct and
indirect effects) such that for all $m$ in the support of $M$, it must hold
that $\Pr(M=m)>0$. As Assumptions 1 and 2 imply $Y(d,m)\bot\\{D,M\\}|X=x$,
doubly robust identification of the potential outcome $E[Y(d,m)]$, which is
required for the controlled direct effect, follows from replacing $I\\{D=d\\}$
and $p_{d}(X)$ in (7) by $I\\{D=d,M=m\\}=I\\{M=m\\}\cdot I\\{D=d\\}$ and
$\Pr(D=d,M=m|X)=f(m|d,X)\cdot p_{d}(X)$:
$\displaystyle E[Y(d,m)]=E[\psi_{dm}]\textrm{ with
}\psi_{dm}=\frac{I\\{D=d\\}\cdot I\\{M=m\\}\cdot[Y-\mu(d,m,X)]}{f(m|d,X)\cdot
p_{d}(X)}+\mu(d,m,X).$ (8)
## 4 Estimation of the counterfactual with K-fold Cross-Fitting
We subsequently propose an estimation strategy for the counterfactual
$E[Y(d,M(1-d))]$ with $d\in\\{0,1\\}$ based on the efficient score function by
Tchetgen Tchetgen and Shpitser (2012) provided in (5) and show its root-$n$
consistency under specific regularity conditions. To this end, let
$\mathcal{W}=\\{W_{i}|1\leq i\leq n\\}$ with $W_{i}=(Y_{i},M_{i},D_{i},X_{i})$
for $i=1,\ldots,n$ denote the set of observations in an i.i.d. sample of size
$n$. $\eta$ denotes the plug-in (or nuisance) parameters, i.e. the conditional
mean outcome, mediator density and treatment probability. Their respective
estimates are referred to by
$\hat{\eta}=\\{\hat{\mu}(D,M,X),\hat{f}(M|D,X),\hat{p_{d}}(X)\\}$ and the true
nuisance parameters by $\eta_{0}=\\{\mu_{0}(D,M,X),f_{0}(M|D,X),p_{d0}(X)\\}$.
Finally, $\Psi_{d0}=E[Y(d,M(1-d))]$ denotes the true counterfactual.
We suggest estimating $\Psi_{d0}$ using the following algorithm that combines
orthogonal score estimation with sample splitting and is root-$n$ consistent
under conditions outlined further below.
Algorithm 1: Estimation of $E[Y(d,M(1-d))]$ based on equation (5)
1. Split $\mathcal{W}$ into $K$ subsamples. For each subsample $k$, let $n_{k}$ denote its size, $\mathcal{W}_{k}$ the set of observations in the subsample and $\mathcal{W}_{k}^{C}$ the complement set of all observations not in $k$.
2. For each $k$, use $\mathcal{W}_{k}^{C}$ to estimate the model parameters of $p_{d}(X)$, $f(M|D,X)$, and $\mu(D,M,X)$ in order to predict these models in $\mathcal{W}_{k}$, where the predictions are denoted by $\hat{p_{d}}^{k}(X)$, $\hat{f}^{k}(M|D,X)$, and $\hat{\mu}^{k}(D,M,X)$.
3. For each $k$, obtain an estimate of the efficient score function (see $\psi_{d}$ in (5)) for each observation $i$ in $\mathcal{W}_{k}$, denoted by $\hat{\psi}_{d,i}^{k}$:
$\displaystyle\hat{\psi}_{d,i}^{k}$ $\displaystyle=$
$\displaystyle\frac{I\\{D_{i}=d\\}\cdot\hat{f}^{k}(M_{i}|1-d,X_{i})}{\hat{p}_{d}^{k}(X_{i})\cdot\hat{f}^{k}(M_{i}|d,X_{i})}\cdot[Y_{i}-\hat{\mu}^{k}(d,M_{i},X_{i})]$
(9)
$\displaystyle+\frac{I\\{D_{i}=1-d\\}}{1-\hat{p}_{d}^{k}(X_{i})}\cdot\Big{[}\hat{\mu}^{k}(d,M_{i},X_{i})-\int_{m\in\mathcal{M}}\hat{\mu}^{k}(d,m,X_{i})\cdot\hat{f}^{k}(m|1-d,X_{i})dm\Big{]}$
$\displaystyle+\int_{m\in\mathcal{M}}\hat{\mu}^{k}(d,m,X_{i})\cdot\hat{f}^{k}(m|1-d,X_{i})dm.$
4. Average the estimated scores $\hat{\psi}_{d,i}^{k}$ over all observations across all $K$ subsamples to obtain an estimate of $\Psi_{d0}=E[Y(d,M(1-d))]$ in the total sample, denoted by $\hat{\Psi}_{d}=1/n\sum_{k=1}^{K}\sum_{i=1}^{n_{k}}\hat{\psi}_{d,i}^{k}$.
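To fix ideas, the following Python sketch implements Algorithm 1 for a binary mediator, with logit and OLS plug-ins from scikit-learn standing in for the machine learners (the simulations in Section 5 use post-lasso instead); it omits the propensity score trimming discussed there, and all names are hypothetical:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.model_selection import KFold

def counterfactual(Y, D, M, X, d=1, K=3):
    """Cross-fitted estimate of E[Y(d, M(1-d))] for binary D and M,
    following Algorithm 1 with the integral replaced by a sum over m in {0,1}."""
    n = len(Y)
    psi = np.zeros(n)
    for train, test in KFold(K, shuffle=True, random_state=0).split(X):
        # plug-in models fitted on the complement sample W_k^C
        p_mod = LogisticRegression().fit(X[train], (D[train] == d))
        f_mod = LogisticRegression().fit(np.c_[D[train], X[train]], M[train])
        mu_mod = LinearRegression().fit(np.c_[D[train], M[train], X[train]], Y[train])
        nt = len(test)
        pd_x = p_mod.predict_proba(X[test])[:, 1]                      # p_d(X)
        pm1 = lambda dd: f_mod.predict_proba(np.c_[np.full(nt, dd), X[test]])[:, 1]
        f_d = np.where(M[test] == 1, pm1(d), 1 - pm1(d))               # f(M|d,X)
        f_1md = np.where(M[test] == 1, pm1(1 - d), 1 - pm1(1 - d))     # f(M|1-d,X)
        mu = lambda m: mu_mod.predict(np.c_[np.full(nt, d), m, X[test]])
        mu_obs = mu(M[test])                                           # mu(d,M,X)
        mu_int = mu(np.ones(nt)) * pm1(1 - d) + mu(np.zeros(nt)) * (1 - pm1(1 - d))
        psi[test] = ((D[test] == d) * f_1md / (pd_x * f_d) * (Y[test] - mu_obs)
                     + (D[test] == 1 - d) / (1 - pd_x) * (mu_obs - mu_int)
                     + mu_int)
    return psi.mean(), psi.std() / np.sqrt(n)  # point estimate and asymptotic SE
```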
Algorithm 1 can be adapted to estimate the counterfactuals required for the
controlled direct effect, see (8). To this end, denote by
$\Psi_{dm0}=E[Y(d,m)]$ the true counterfactual of interest, which is estimated
by replacing $\psi_{d}$ and $\Psi_{d}$ by $\psi_{dm}$ and $\Psi_{dm0}$,
respectively, everywhere in Algorithm 1.
In order to achieve root-$n$ consistency for counterfactual estimation, we
make specific assumptions about the prediction qualities of the machine
learners for our plug-in estimates of the nuisance parameters. Closely
following Chernozhukov, Chetverikov, Demirer, Duflo, Hansen, Newey, and Robins
(2018), we to this end introduce some further notation. Let
$(\delta_{n})_{n=1}^{\infty}$ and $(\Delta_{n})_{n=1}^{\infty}$ denote
sequences of positive constants with $\lim_{n\rightarrow\infty}\delta_{n}=0$
and $\lim_{n\rightarrow\infty}\Delta_{n}=0.$ Also, let
$c,\epsilon,C,\underline{f},\overline{f}$ and $q$ be positive constants such
that $q>2,$ and let $K\geq 2$ be a fixed integer. Furthermore, for any random
vector $Z=(Z_{1},...,Z_{l})$, let $\left\|Z\right\|_{q}=\max_{1\leq j\leq
l}\left\|Z_{l}\right\|_{q},$ where
$\left\|Z_{l}\right\|_{q}=\left(E\left[\left|Z_{l}\right|^{q}\right]\right)^{\frac{1}{q}}$.
To ease notation, we assume that $n/K$ is an integer. For
brevity, we omit the dependence of probability $\Pr_{P}(\cdot),$ expectation
$E_{P}(\cdot),$ and norm $\left\|\cdot\right\|_{P,q}$ on the probability
measure $P$.
Assumption 4 (regularity conditions and quality of plug-in parameter
estimates):
For all probability laws $P\in\mathcal{P}$, where $\mathcal{P}$ is the set of
all possible probability laws, the following conditions hold for the random
vector $(Y,D,M,X)$ for $d\in\\{0,1\\}$:
(a) $\left\|Y\right\|_{q}\leq C$ and $\left\|E[Y^{2}|d,M,X]\right\|_{\infty}\leq C^{2}$,
(b) $\Pr(\epsilon\leq p_{d0}(X)\leq 1-\epsilon)=1,$
(c) $\Pr(\underline{f}\leq f(M|D,X)\leq\overline{f})=1,$
(d) $\left\|Y-\mu_{0}(d,M,X)\right\|_{2}=E\Big{[}\left(Y-\mu_{0}(d,M,X)\right)^{2}\Big{]}^{\frac{1}{2}}\geq c$,
(e) Given a random subset $I$ of $[n]$ of size $n_{k}=n/K$, the nuisance parameter estimator $\hat{\eta}_{0}=\hat{\eta}_{0}(\mathcal{W}_{k}^{C})$ satisfies the following conditions. With $P$-probability no less than $1-\Delta_{n}:$
$\displaystyle\left\|\hat{\eta}_{0}-\eta_{0}\right\|_{q}$ $\displaystyle\leq$
$\displaystyle C,$ $\displaystyle\left\|\hat{\eta}_{0}-\eta_{0}\right\|_{2}$
$\displaystyle\leq$ $\displaystyle\delta_{n},$
$\displaystyle\left\|\hat{p}_{d0}(X)-1/2\right\|_{\infty}$ $\displaystyle\leq$
$\displaystyle 1/2-\epsilon,$
$\displaystyle\left\|\hat{f}_{0}(M|D,X)-(\underline{f}+\overline{f})/2\right\|_{\infty}$
$\displaystyle\leq$ $\displaystyle(\overline{f}-\underline{f})/2,$
$\displaystyle\left\|\hat{\mu}_{0}(D,M,X)-\mu_{0}(D,M,X)\right\|_{2}\times\left\|\hat{p}_{d0}(X)-p_{d0}(X)\right\|_{2}$
$\displaystyle\leq$ $\displaystyle\delta_{n}n^{-1/2},$
$\displaystyle\left\|\hat{\mu}_{0}(D,M,X)-\mu_{0}(D,M,X)\right\|_{2}\times\left\|\hat{f}_{0}(M|1-D,X)-f_{0}(M|1-D,X)\right\|_{2}$
$\displaystyle\leq$ $\displaystyle\delta_{n}n^{-1/2}.$
For demonstrating root-$n$ consistency of the proposed estimation strategy for
the counterfactual, we heavily draw from Chernozhukov, Chetverikov, Demirer,
Duflo, Hansen, Newey, and Robins (2018). We show that our estimation strategy
satisfies the requirements for their double machine learning framework by
first verifying the satisfaction of a specific moment condition as well as
linearity and Neyman orthogonality of the score (see Appendix B.1.1). Then, as
e.g. $\psi_{d}(W,\eta,\Psi_{d0})$ is smooth in $(\eta,\Psi_{d0})$, the plug-in
estimators must converge with rate $n^{-1/4}$ in order to achieve
$n^{-1/2}$-convergence for the estimation of $\hat{\Psi}_{d}$. This
convergence rate of $n^{-1/4}$ is achievable for many commonly used machine
learners such as lasso, random forest, boosting and neural nets. The rates for
$L_{2}$-boosting were, for instance, derived in Luo and Spindler (2016).
Theorem 1
Under Assumptions 1-4, it holds for estimating $E[Y(d,M(1-d))]$ and $E[Y(d,m)]$
based on Algorithm 1 and its adaptation to the controlled direct effect:
$\sqrt{n}\Big{(}\hat{\Psi}_{d}-\Psi_{d0}\Big{)}\rightarrow
N(0,\sigma^{2}_{\psi_{d}})$, where
$\sigma^{2}_{\psi_{d}}=E[(\psi_{d}-\Psi_{d0})^{2}]$.
$\sqrt{n}\Big{(}\hat{\Psi}_{dm}-\Psi_{dm0}\Big{)}\rightarrow
N(0,\sigma^{2}_{\psi_{dm}})$, where
$\sigma^{2}_{\psi_{dm}}=E[(\psi_{dm}-\Psi_{dm0})^{2}]$.
The proof is provided in Appendix B.1.
Analogous results follow for the estimation of $\Lambda_{d}=E[Y(d,M(d))]$ when
replacing $\hat{\psi}_{d}$ in the algorithm above by an estimate of the score
function $\alpha_{d}$ from (7),
$\displaystyle\hat{\alpha}_{d,i}=\frac{I\\{D_{i}=d\\}\cdot(Y_{i}-\hat{\mu}^{k}(d,X_{i}))}{\hat{p_{d}}^{k}(X_{i})}+\hat{\mu}^{k}(d,X_{i}),$
(10)
where $\hat{\mu}^{k}(d,x)$ is an estimate of $\mu(d,x)$. This approach has
been discussed in literature on ATE estimation based on double machine
learning, see for instance Belloni, Chernozhukov, Fernández-Val, and Hansen
(2017) and Chernozhukov, Chetverikov, Demirer, Duflo, Hansen, Newey, and
Robins (2018). Denoting by $\hat{\Lambda}$ the estimate of $\Lambda$, it
follows under Assumptions 1-4 that
$\sqrt{n}\Big{(}\hat{\Lambda}_{d}-\Lambda_{d}\Big{)}\rightarrow
N(0,\sigma^{2}_{\alpha_{d}})$, where
$\sigma^{2}_{\alpha_{d}}=E[(\alpha_{d}-\Lambda_{d})^{2}]$. Therefore,
root-$n$-consistent estimates of the total as well as the direct and indirect
effects are obtained as difference of the estimated potential outcomes, which
we denote by $\hat{\Delta}$, $\hat{\theta}(d)$, and $\hat{\delta}(d)$. That
is, $\hat{\Delta}=\hat{\Lambda}_{1}-\hat{\Lambda}_{0}$,
$\hat{\theta}(1)=\hat{\Lambda}_{1}-\hat{\Psi}_{0}$,
$\hat{\theta}(0)=\hat{\Psi}_{1}-\hat{\Lambda}_{0}$,
$\hat{\delta}(1)=\hat{\Lambda}_{1}-\hat{\Psi}_{1}$, and
$\hat{\delta}(0)=\hat{\Psi}_{0}-\hat{\Lambda}_{0}$.
Naturally, the asymptotic variance of any effect is obtained based on the
variance of the difference in the score functions of the potential outcomes
required for the respective effect. For instance, the asymptotic variance of
$\hat{\theta}(1)$ is given by
$Var(\hat{\theta}(1))=Var(\alpha_{1}-\psi_{0})/n=(\sigma^{2}_{\alpha_{1}}+\sigma^{2}_{\psi_{0}}-2Cov(\alpha_{1},\psi_{0}))/n$.
Chernozhukov, Chetverikov, Demirer, Duflo, Hansen, Newey, and Robins (2018)
show that under Assumptions 1-4, $\hat{\sigma}^{2}_{\psi_{d}}$ can be
estimated as:
$\displaystyle\hat{\sigma}^{2}_{\psi_{d}}=1/K\sum_{k=1}^{K}\Big{[}1/n_{k}\sum_{i=1}^{n_{k}}\psi_{d}(W_{i},\hat{\eta}_{0}^{k},\hat{\Psi}_{d})^{2}\Big{]}$
(11)
The asymptotic variance of $\alpha_{d}$ can be estimated accordingly, with
$\psi_{d}$ and $\hat{\Psi}_{d}$ substituted by $\alpha_{d}$ and
$\hat{\Lambda}_{d}$.
We subsequently discuss estimation based on the score function $\psi_{d}^{*}$
in expression (6). We note that in this case, one needs to estimate a nested
nuisance parameter $E\Big{[}\mu(d,M,X)\Big{|}D=1-d,X\Big{]}$. To avoid
overfitting, the models for $\mu(d,M,X)$ and
$E\Big{[}\mu(d,M,X)\Big{|}D=1-d,X\Big{]}$ are estimated in different
subsamples. The plug-in estimates for the conditional mean outcome, the nested
conditional mean, and the treatment probabilities given $X$ and given $(M,X)$
are referred to by
$\hat{\eta}^{*}=\\{\hat{\mu}(D,M,X),\hat{\omega}(D,M,X),\hat{p}_{d}(M,X),\hat{p}_{d}(X)\\}$
and the true nuisance parameters by
$\eta_{0}^{*}=\\{\mu_{0}(D,M,X),\omega_{0}(D,M,X),p_{d0}(M,X),p_{d0}(X)\\}$.
Algorithm 2: Estimation of $E[Y(d,M(1-d))]$ based on equation (6)
1. Split $\mathcal{W}$ into $K$ subsamples. For each subsample $k$, let $n_{k}$ denote its size, $\mathcal{W}_{k}$ the set of observations in the subsample and $\mathcal{W}_{k}^{C}$ the complement set of all observations not in $k$.
2. For each $k$, use $\mathcal{W}_{k}^{C}$ to estimate the model parameters of $p_{d}(X)$ and $p_{d}(M,X)$. Split $\mathcal{W}_{k}^{C}$ into 2 nonoverlapping subsamples and estimate the model parameters of the conditional mean $\mu(d,M,X)$ and the nested conditional mean $E\Big{[}\mu(d,M,X)\Big{|}D=1-d,X\Big{]}$ in the distinct subsamples. Predict the nuisance parameters in $\mathcal{W}_{k}$, where the predictions are denoted by $\hat{p_{d}}^{k}(X)$, $\hat{p}_{d}^{k}(M,X)$, $\hat{\mu}^{k}(D,M,X)$ and $\hat{E}\Big{[}\mu(d,M,X)\Big{|}D=1-d,X\Big{]}^{k}$.
3. For each $k$, obtain an estimate of the efficient score function (see $\psi_{d}^{*}$ in (6)) for each observation $i$ in $\mathcal{W}_{k}$, denoted by $\hat{\psi}_{d,i}^{*k}$:
$\displaystyle\hat{\psi}_{d,i}^{*k}$ $\displaystyle=$
$\displaystyle\frac{I\\{D_{i}=d\\}\big{(}1-\hat{p}_{d}^{k}(M_{i},X_{i})\big{)}}{\hat{p}_{d}^{k}(M_{i},X_{i})\,\big{(}1-\hat{p}_{d}^{k}(X_{i})\big{)}}\cdot[Y_{i}-\hat{\mu}^{k}(d,M_{i},X_{i})]$
(12)
$\displaystyle+\frac{I\\{D_{i}=1-d\\}}{1-\hat{p}_{d}^{k}(X_{i})}\cdot\Big{[}\hat{\mu}^{k}(d,M_{i},X_{i})-\hat{E}\Big{[}\hat{\mu}^{k}(d,M_{i},X_{i})\Big{|}D_{i}=1-d,X_{i}\Big{]}\Big{]}$
$\displaystyle+\hat{E}\Big{[}\hat{\mu}^{k}(d,M_{i},X_{i})\Big{|}D_{i}=1-d,X_{i}\Big{]}.$
4. Average the estimated scores $\hat{\psi}_{d,i}^{*k}$ over all observations across all $K$ subsamples to obtain an estimate of $\Psi_{d0}=E[Y(d,M(1-d))]$ in the total sample, denoted by $\hat{\Psi}_{d}^{*}=1/n\sum_{k=1}^{K}\sum_{i=1}^{n_{k}}\hat{\psi}_{d,i}^{*k}$.
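The distinguishing ingredient relative to Algorithm 1 is the nested plug-in in step 2. A minimal sketch of that step, assuming `mu_model` is an outcome regression fitted on a disjoint part of $\mathcal{W}_{k}^{C}$ and using OLS in place of a machine learner (names hypothetical):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def nested_conditional_mean(mu_model, D, M, X, d):
    """Estimate E[mu(d,M,X) | D=1-d, X]: evaluate the fitted outcome model at
    the observed (M, X) with the treatment set to d, then regress these
    predictions on X among observations with D = 1 - d."""
    mu_dMX = mu_model.predict(np.c_[np.full(len(D), d), M, X])
    sel = (D == 1 - d)
    return LinearRegression().fit(X[sel], mu_dMX[sel])
```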
Also this approach can be shown to be root-$n$-consistent under specific
regularity conditions outlined below.
Assumption 5 (regularity conditions and quality of plug-in parameter
estimates):
For all probability laws $P\in\mathcal{P}$ the following conditions hold for
the random vector $(Y,D,M,X)$ for all $d\in\\{0,1\\}$:
(a) $\left\|Y\right\|_{q}\leq C$ and $\left\|E[Y^{2}|d,M,X]\right\|_{\infty}\leq C^{2}$,
(b) $\Pr(\epsilon\leq p_{d0}(X)\leq 1-\epsilon)=1,$
(c) $\Pr(\epsilon\leq p_{d0}(M,X)\leq 1-\epsilon)=1,$
(d) $\left\|Y-\mu_{0}(d,M,X)\right\|_{2}=E\Big{[}\left(Y-\mu_{0}(d,M,X)\right)^{2}\Big{]}^{\frac{1}{2}}\geq c$,
(e) Given a random subset $I$ of $[n]$ of size $n_{k}=n/K$, the nuisance parameter estimator $\hat{\eta}^{*}_{0}=\hat{\eta}^{*}_{0}(\mathcal{W}_{k}^{C})$ satisfies the following conditions. With $P$-probability no less than $1-\Delta_{n}:$
$\displaystyle\left\|\hat{\eta}^{*}_{0}-\eta^{*}_{0}\right\|_{q}$
$\displaystyle\leq$ $\displaystyle C,$
$\displaystyle\left\|\hat{\eta}^{*}_{0}-\eta^{*}_{0}\right\|_{2}$
$\displaystyle\leq$ $\displaystyle\delta_{n},$
$\displaystyle\left\|\hat{p}_{d0}(X)-1/2\right\|_{\infty}$ $\displaystyle\leq$
$\displaystyle 1/2-\epsilon,$
$\displaystyle\left\|\hat{p}_{d0}(M,X)-1/2\right\|_{\infty}$
$\displaystyle\leq$ $\displaystyle 1/2-\epsilon,$
$\displaystyle\left\|\hat{\mu}_{0}(D,M,X)-\mu_{0}(D,M,X)\right\|_{2}\times\left\|\hat{p}_{d0}(X)-p_{d0}(X)\right\|_{2}$
$\displaystyle\leq$ $\displaystyle\delta_{n}n^{-1/2},$
$\displaystyle\left\|\hat{\mu}_{0}(D,M,X)-\mu_{0}(D,M,X)\right\|_{2}\times\left\|\hat{p}_{d0}(M,X)-p_{d0}(M,X)\right\|_{2}$
$\displaystyle\leq$ $\displaystyle\delta_{n}n^{-1/2},$
$\displaystyle\left\|\hat{\omega}_{0}(D,M,X)-\omega_{0}(D,M,X)\right\|_{2}\times\left\|\hat{p}_{d0}(X)-p_{d0}(X)\right\|_{2}$
$\displaystyle\leq$ $\displaystyle\delta_{n}n^{-1/2}.$
Theorem 2
Under Assumptions 1-3 and 5, it holds for estimating $E[Y(d,M(1-d))]$ based on
Algorithm 2:
$\sqrt{n}\Big{(}\hat{\Psi}_{d}^{*}-\Psi_{d0}^{*}\Big{)}\rightarrow
N(0,\sigma^{2}_{\psi_{d}^{*}})$, where
$\sigma^{2}_{\psi_{d}^{*}}=E[(\psi_{d}^{*}-\Psi_{d0}^{*})^{2}]$.
The proof is provided in Appendix B.2.
## 5 Simulation study
This section provides a simulation study to investigate the finite sample
behaviour of the proposed methods based on the following data generating
process:
$\displaystyle Y$ $\displaystyle=$ $\displaystyle
0.5D+0.5M+0.5DM+X^{\prime}\beta+U,$ $\displaystyle M$ $\displaystyle=$
$\displaystyle I\\{0.5D+X^{\prime}\beta+V>0\\},\quad
D=I\\{X^{\prime}\beta+W>0\\},$ $\displaystyle X$ $\displaystyle\sim$
$\displaystyle N(0,\Sigma),\quad U,V,W\sim N(0,1)\textrm{ independently of
each other and $X$}.$
Outcome $Y$ is a function of the observed variables $D,M,X$, including an
interaction between the mediator and the treatment, and an unobserved term
$U$. The binary mediator $M$ is a function of $D,X$ and the unobservable $V$,
while the binary treatment $D$ is determined by $X$ and the unobservable $W$.
$X$ is a vector of covariates of dimension $p$, which is drawn from a
multivariate normal distribution with zero mean and covariance matrix
$\Sigma$. The latter is defined by setting the covariance of the $i$th and
$j$th covariate in $X$ to $\Sigma_{ij}=0.5^{|i-j|}$ (the results presented
below are hardly affected when setting $\Sigma$ to the identity matrix, i.e.
zero correlation across $X$). Coefficients $\beta$ gauge the impact of
$X$ on $Y$, $M$, and $D$, respectively, and thus, the strength of confounding.
$U,V,W$ are random and standard normally distributed scalar unobservables. We
consider two sample sizes of $n=1000,4000$ and run $1000$ simulations per data
generating process.
We investigate the performance of effect estimation based on (i) Theorem 1
using the identification result in expression (5) derived by Tchetgen Tchetgen
and Shpitser (2012) as well as (ii) Theorem 2 using the modified score
function in expression (6), which avoids conditional mediator densities. The
nuisance parameters are estimated by post-lasso regression based on the ‘hdm’
package by Spindler, Chernozhukov, and Hansen (2016) for the statistical
software ‘R’ with its default options, using logit specifications for
$p_{d}(X)$, $p_{d}(M,X)$, and $f(M|D,X)$ and linear specifications for
$\mu(D,M,X)$ and $E[\mu(d,M,X)|D=1-d,X]$. The estimation of direct and
indirect effects is based on 3-fold cross-fitting. For all methods
investigated, we drop observations whose (products of) estimated conditional
probabilities in the denominator of any potential outcome expression are close
to zero, namely smaller than a trimming threshold of $0.05$ (or 5%). Our
estimation procedure is available in the ‘causalweight’ package for ‘R’ by
Bodory and Huber (2018).
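For concreteness, the following numpy sketch draws one sample from the data generating process above (the function name and defaults are ours):

```python
import numpy as np

def simulate(n=1000, p=200, coef=0.3, seed=0):
    """One draw from the simulation design: beta_i = coef / i^2 and
    Sigma_ij = 0.5^{|i-j|}."""
    rng = np.random.default_rng(seed)
    beta = coef / np.arange(1, p + 1) ** 2
    Sigma = 0.5 ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
    X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
    xb = X @ beta
    D = (xb + rng.standard_normal(n) > 0).astype(float)                # treatment
    M = (0.5 * D + xb + rng.standard_normal(n) > 0).astype(float)     # mediator
    Y = 0.5 * D + 0.5 * M + 0.5 * D * M + xb + rng.standard_normal(n)  # outcome
    return Y, D, M, X
```

Combined with the sketch of Algorithm 1 above, this reproduces the qualitative setup of the first simulation design.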
In our first simulation design, we set $p=200$ and the $i$th element in the
coefficient vector $\beta$ to $0.3/i^{2}$ for $i=1,...,p$, meaning a quadratic
decay of covariate importance in terms of confounding. This specification
implies that the $R^{2}$ of $X$ when predicting $Y$ amounts to 0.22 in large
samples, while the Nagelkerke (1991) pseudo-$R^{2}$ of $X$ when predicting $D$
and $M$ by probit models amounts to 0.10 and 0.13, respectively. The left
panel of Table 1 reports the results for either sample size. For $n=1000$,
double machine learning based on Theorem 2 on average exhibits a slightly
lower absolute bias (‘abias’) and standard deviation (‘sd’) than estimation
based on Theorem 1. The behavior of both approaches improves when increasing
sample size to $n=4000$, as the absolute bias is very close to zero for any
effect estimate and standard deviation is roughly cut by half. Under the
larger sample size, differences in terms of root mean squared error (‘rmse’)
between estimation based on Theorems 1 and 2 are very close to zero. By and
large, the results suggest that the estimators converge to the true effects at
root-$n$ rate.
In our second simulation, confounding is increased by setting $\beta$ to
$0.5/i^{2}$ for $i=1,...,p$. This specification implies that the $R^{2}$ of
$X$ when predicting $Y$ amounts to 0.42, while the Nagelkerke (1991)
pseudo-$R^{2}$ of $X$ when predicting $D$ and $M$ amounts to 0.23 and 0.28,
respectively. The results are displayed in the right panel of Table 1. Again,
estimation based on Theorem 2 slightly dominates in terms of having a smaller
absolute bias and standard deviation, in particular for $n=1000$. However, in
other settings, the two methods might compare differently in terms of finite
sample performance. Both methods based on Theorems 1 and 2, respectively,
appear to converge to the true effects at root-$n$ rate, and differences in
terms of root mean squared errors are minor for $n=4000$.
Table 1: Simulation results for effect estimates ($p=200$)
| Coefficients given by $0.3/i^{2}$ for $i=1,...,p$ | Coefficients given by $0.5/i^{2}$ for $i=1,...,p$
---|---|---
| abias | sd | rmse | abias | sd | rmse | true | abias | sd | rmse | abias | sd | rmse | true
| $n$=1000 | $n$=4000 | | $n$=1000 | $n$=4000 |
| Double machine learning based on Theorem 1
$\hat{\Delta}$ | 0.01 | 0.08 | 0.08 | 0.00 | 0.04 | 0.04 | 1.02 | 0.02 | 0.09 | 0.09 | 0.02 | 0.04 | 0.05 | 1.00
$\hat{\theta}(1)$ | 0.00 | 0.09 | 0.09 | 0.00 | 0.04 | 0.04 | 0.84 | 0.01 | 0.09 | 0.09 | 0.01 | 0.04 | 0.04 | 0.83
$\hat{\theta}(0)$ | 0.01 | 0.08 | 0.08 | 0.00 | 0.04 | 0.04 | 0.75 | 0.02 | 0.08 | 0.09 | 0.01 | 0.04 | 0.04 | 0.75
$\hat{\delta}(1)$ | 0.00 | 0.06 | 0.06 | 0.00 | 0.03 | 0.03 | 0.27 | 0.00 | 0.06 | 0.06 | 0.00 | 0.03 | 0.03 | 0.25
$\hat{\delta}(0)$ | 0.01 | 0.06 | 0.06 | 0.00 | 0.02 | 0.02 | 0.18 | 0.01 | 0.06 | 0.06 | 0.00 | 0.02 | 0.02 | 0.17
trimmed | 17.24 | 19.19 | | 80.25 | 237.50 |
| Double machine learning based on Theorem 2
$\hat{\Delta}$ | 0.00 | 0.08 | 0.08 | 0.00 | 0.04 | 0.04 | 1.02 | 0.01 | 0.09 | 0.09 | 0.01 | 0.04 | 0.04 | 1.00
$\hat{\theta}(1)$ | 0.00 | 0.08 | 0.08 | 0.00 | 0.04 | 0.04 | 0.84 | 0.00 | 0.08 | 0.08 | 0.00 | 0.04 | 0.04 | 0.83
$\hat{\theta}(0)$ | 0.00 | 0.08 | 0.08 | 0.00 | 0.04 | 0.04 | 0.75 | 0.00 | 0.08 | 0.08 | 0.00 | 0.04 | 0.04 | 0.75
$\hat{\delta}(1)$ | 0.00 | 0.06 | 0.06 | 0.00 | 0.03 | 0.03 | 0.27 | 0.00 | 0.06 | 0.06 | 0.00 | 0.03 | 0.03 | 0.25
$\hat{\delta}(0)$ | 0.00 | 0.04 | 0.04 | 0.00 | 0.02 | 0.02 | 0.18 | 0.00 | 0.05 | 0.05 | 0.00 | 0.02 | 0.02 | 0.17
trimmed | 1.20 | 0.11 | | 16.76 | 25.45 |
Note: ‘abias’, ‘sd’, and ‘rmse’ denote the absolute bias, standard deviation
and root mean squared error of the respective effect estimate. ‘true’ provides
the true effect. ‘trimmed’ is the average number of trimmed observations per
simulation. The propensity score-based trimming threshold is set to $0.05$.
Appendix A reports the simulation results (namely the absolute bias, standard
deviation, and root mean squared error) for the standard errors obtained by an
asymptotic approximation based on the estimated variance of the score
functions. The results suggest that the asymptotic standard errors decently
estimate the actual standard deviation of the point estimators.
## 6 Application
In this section, we apply our method to data from the National Longitudinal
Survey of Youth 1997 (NLSY97), a survey following a U.S. nationally
representative sample of 8,984 individuals born in the years 1980-84. Since
1997, the participants have been interviewed on a wide range of demographic,
socioeconomic, and health-related topics in a one- to two-year cycle. We
investigate the causal effect of health insurance coverage ($D$) on general
health ($Y$) and decompose it into an indirect pathway via the incidence of a
regular medical checkup ($M$) and a direct effect entailing any other causal
mechanisms. Whether or not an individual undergoes routine checkups appears to
be an interesting mediator, as it is likely to be affected by health insurance
coverage and may itself have an impact on the individual’s health, because
checkups can help identifying medical conditions before they get serious to
prevent them from affecting a person’s general health state.
The effect of health insurance coverage on self-reported health has been
investigated in different countries with no compulsory medical insurance and
no publicly provided universal health coverage, see for example Simon, Soni,
and Cawley (2017), Sommers, Maylone, Blendon, Orav, and Epstein (2017),
Baicker, Taubman, Allen, Bernstein, Gruber, Newhouse, Schneider, Wright,
Zaslavsky, and Finkelstein (2013), Yörük (2016) and Cardella and Depew (2014)
for the U.S., and King, Gakidou, Imai, Lakin, Moore, Nall, Ravishankar, Vargas, Tellez-Rojo, Avila, et al. (2009) for Mexico. Most of these studies find a
significant positive effect of insurance coverage on self-reported health. The
impact of insurance coverage on the utilization of preventive care measures,
particularly routine checkups like cancer, diabetes and cardiovascular
screenings, is also extensively covered in public health literature. Most
studies find that health insurance coverage increases the odds of attending
routine checkups. While some contributions include selected demographic,
socioeconomic and health-related control variables to account for the
endogeneity of health insurance status (see e.g. Faulkner and Schauffler
(1997), Press (2014), Burstin, Swartz, O’Neil, Orav, and Brennan (1998),
Fowler-Brown, Corbie-Smith, Garrett, and Lurie (2007)), others exploit natural
experiments: Simon, Soni, and Cawley (2017) estimate a difference-in-
differences model comparing states which did and did not expand Medicaid to
low-income adults in 2005, while Baicker, Taubman, Allen, Bernstein, Gruber,
Newhouse, Schneider, Wright, Zaslavsky, and Finkelstein (2013) exploit that
the state of Oregon expanded Medicaid based on lottery drawings from a waiting
list. The results of both studies suggest that the Medicaid expansions
increased use of certain forms of preventive care. In a study on Mexican
adults, Pagán, Puig, and Soldo (2007) use self-employment and commission pay
as instruments for insurance coverage and also find a more frequent use of
some types of preventive care by individuals with health insurance coverage.
While the bulk of studies investigating checkups focus on one particular type
of screening (rather than general health checkups), see Maciosek, Coffield,
Flottemesch, Edwards, and Solberg (2010) for a literature review, several
experimental contributions also assess general health checkups. For instance,
Rasmussen, Thomsen, Kilsmark, Hvenegaard, Engberg, Lauritzen, and Sogaard
(2007) conduct an experiment with individuals aged 30 to 49 in Denmark by
randomly offering a set of health screenings, including advice on healthy living, and find a significant positive effect on life expectancy. In a study
on Japan’s elderly population, Nakanishi, Tatara, and Fujiwara (1996) find a
significantly negative correlation between the rate of attendance at health
check-ups and hospital admission rates. Despite the effects of health
insurance coverage and routine checkups being extensively covered in the
public health literature, the indirect effect of insurance on general health
operating via routine checkups as mediator has to the best of our knowledge
not yet been investigated. A further distinction to most previous studies is
that we consider comparably young individuals with an average age below 30.
For this population, the relative importance of different health screenings
might differ from that for other age groups. We also point out that our
application focuses on short-term health effects.
We consider a binary indicator for health insurance coverage, equal to one if
an individual reports to have any kind of health insurance when interviewed in
2006 and zero otherwise. The outcome, self-reported general health, is
obtained from the 2008 interview and measured with an ordinal variable, taking on
the values ‘excellent’, ‘very good’, ‘good’, ‘fair’ and ‘poor’. In the 2007
interview, participants were asked whether they have gone for routine checkups
since the 2006 interview. This information serves as binary mediator, measured
post-treatment but pre-outcome.
To ensure that the control variables ($X$) are not influenced by the
treatment, they come from the pre-treatment 2005 and earlier interview rounds.
They cover demographic characteristics, family background and quality of the
home environment during youth, education and training, labor market status,
income and work experience, marital status and fertility, household
characteristics, received monetary transfers, attitudes and expectations,
state of physical and mental health as well as health-related behavior
regarding e.g. nutrition and physical activity. For some variables, we only
consider measurements from 2005 or from the initial interview round covering
demographics and family-related topics. For other variables, we include measurements from both the individuals’ youth and 2005 in order to capture
their social, emotional and physical development. Treatment and mediator state
in the pre-treatment period (2005) are also considered as potential control
variables. Item non-response in control variables is dealt with by including
missing dummies for each control variable and setting the respective missing
values to zero. In total, we end up with a set of 770 control variables, 601
of which are dummy variables (incl. 252 dummies for missing values).
After excluding 1,923 observations with either mediator or treatment status missing, we are left with 7,061 observations. Table 2 presents some descriptive
statistics for a selection of control variables. It shows that the group of
individuals with and without health insurance coverage differ substantially.
There are significant differences with respect to most of the control
variables listed in the table. Females are significantly more likely to have
health insurance coverage. Education and household income also show a
significant positive correlation with health insurance coverage while the
number of household members for example is negatively correlated with
insurance coverage. Regarding the mediator, we find a similar pattern as for
the treatment. With respect to many of the considered variables, the group of
individuals who went for a medical checkup differs substantially from those who did not. Further, we see that the correlations of many control variables with the treatment appear to have the same sign as those with the mediator.
Table 2: Descriptive Statistics
| overall | $D=1$ | $D=0$ | diff | p-val | $M=1$ | $M=0$ | diff | p-val
---|---|---|---|---|---|---|---|---|---
$n$ | 7,061 | 2,335 | 4,726 | | | 3,612 | 3,449 | |
Female | 0.5 | 0.55 | 0.42 | 0.13 | 0 | 0.66 | 0.34 | 0.32 | 0
Age | 28.51 | 28.47 | 28.59 | -0.12 | 0 | 28.46 | 28.55 | -0.09 | 0.01
Ethnicity | | | | | | | | |
Black | 0.27 | 0.25 | 0.3 | -0.05 | 0 | 0.32 | 0.21 | 0.11 | 0
Hispanic | 0.21 | 0.19 | 0.26 | -0.07 | 0 | 0.21 | 0.21 | 0 | 0.76
Mixed | 0.01 | 0.01 | 0.01 | 0 | 0.34 | 0.01 | 0.01 | 0 | 0.42
White or Other | 0.51 | 0.55 | 0.43 | 0.12 | 0 | 0.46 | 0.56 | -0.1 | 0
Relationship/Marriage | | | | | | | | |
Not Cohabiting | 0.62 | 0.61 | 0.65 | -0.03 | 0.01 | 0.61 | 0.64 | -0.03 | 0.01
Cohabiting | 0.17 | 0.16 | 0.18 | -0.03 | 0.01 | 0.16 | 0.17 | 0 | 0.72
Married | 0.18 | 0.21 | 0.14 | 0.07 | 0 | 0.2 | 0.17 | 0.03 | 0
Separated/ Widowed | 0.02 | 0.02 | 0.03 | -0.01 | 0.03 | 0.02 | 0.02 | 0 | 0.62
Missing | 0 | 0 | 0 | 0 | 0.5 | 0 | 0 | 0 | 0.79
Urban | 1.75 | 1.75 | 1.73 | 0.02 | 0.03 | 1.76 | 1.73 | 0.03 | 0.01
Missing | 0.08 | 0.08 | 0.09 | -0.01 | 0.16 | 0.08 | 0.09 | -0.01 | 0.14
HH Income (sum of several variables measuring HH income components from different sources and receivers; these variables are capped, but only a total of 11 observations are in critical cap categories) | 43,406 | 48,388 | 33,322 | 15,066 | 0 | 44,217 | 42,556 | 1,661 | 0.24
Missing | 0.21 | 0.19 | 0.24 | -0.05 | 0 | 0.2 | 0.22 | -0.01 | 0.2
HH Size | 3.09 | 3.06 | 3.15 | -0.1 | 0.04 | 3.13 | 3.05 | 0.08 | 0.07
Missing | 0.06 | 0.05 | 0.07 | -0.02 | 0.01 | 0.05 | 0.07 | -0.01 | 0.03
HH Members under 18 | 0.69 | 0.65 | 0.76 | -0.11 | 0 | 0.77 | 0.6 | 0.17 | 0
Missing | 0.06 | 0.06 | 0.07 | -0.02 | 0.01 | 0.06 | 0.07 | -0.01 | 0.04
Biological Children | 0.49 | 0.47 | 0.54 | -0.07 | 0 | 0.56 | 0.43 | 0.13 | 0
Highest Grade | 12.17 | 12.65 | 11.21 | 1.44 | 0 | 12.41 | 11.93 | 0.48 | 0
Missing | 0.06 | 0.06 | 0.07 | -0.02 | 0 | 0.06 | 0.07 | -0.01 | 0.04
Employment | | | | | | | | |
Employed | 0.71 | 0.73 | 0.68 | 0.05 | 0 | 0.7 | 0.72 | -0.02 | 0.05
Unemployed | 0.05 | 0.04 | 0.08 | -0.03 | 0 | 0.05 | 0.06 | -0.01 | 0.17
Out of Labor Force | 0.21 | 0.19 | 0.24 | -0.04 | 0 | 0.21 | 0.2 | 0.01 | 0.4
Military | 0.02 | 0.03 | 0.01 | 0.03 | 0 | 0.03 | 0.01 | 0.02 | 0
Missing | 0 | 0 | 0 | 0 | 0.13 | 0 | 0 | 0 | 0.45
Working Hours (per week) | 24.83 | 25.47 | 23.53 | 1.94 | 0 | 24.44 | 25.24 | -0.81 | 0.09
Missing | 0.06 | 0.05 | 0.07 | -0.02 | 0.01 | 0.05 | 0.07 | -0.01 | 0.04
Weight (pounds) | 157 | 157 | 157 | -1 | 0.64 | 154 | 160 | -6 | 0
Missing | 0.08 | 0.08 | 0.1 | -0.02 | 0.01 | 0.08 | 0.09 | -0.01 | 0.09
Height (feet) | 5.12 | 5.18 | 5.01 | 0.17 | 0 | 5.08 | 5.17 | -0.09 | 0.02
Missing | 0.09 | 0.08 | 0.11 | -0.04 | 0 | 0.08 | 0.09 | -0.01 | 0.13
Days 5+ drinks (per month) | 1.64 | 1.56 | 1.8 | -0.25 | 0.02 | 1.24 | 2.06 | -0.82 | 0
Missing | 0.09 | 0.08 | 0.1 | -0.03 | 0 | 0.08 | 0.09 | -0.02 | 0.01
Days of Exercise (per week) | 2.39 | 2.41 | 2.36 | 0.05 | 0.42 | 2.32 | 2.46 | -0.14 | 0.01
Missing | 0.05 | 0.05 | 0.06 | -0.01 | 0.03 | 0.04 | 0.06 | -0.01 | 0.03
Depressed/ Down | | | | | | | | |
Never | 0.31 | 0.32 | 0.29 | 0.02 | 0.05 | 0.29 | 0.32 | -0.03 | 0.01
Sometimes | 0.51 | 0.52 | 0.48 | 0.03 | 0.01 | 0.52 | 0.49 | 0.02 | 0.06
Mostly | 0.09 | 0.09 | 0.1 | -0.01 | 0.14 | 0.1 | 0.08 | 0.02 | 0
Always | 0.02 | 0.02 | 0.03 | -0.01 | 0 | 0.02 | 0.02 | 0 | 0.68
Missing | 0.08 | 0.07 | 0.1 | -0.03 | 0 | 0.07 | 0.09 | -0.01 | 0.02
Note: ‘overall’, ‘$D=1$’, ‘$D=0$’, ‘$M=1$’, ‘$M=0$’ report the mean of the
respective variable in the total sample, among treated, among non-treated,
among mediated, and among non-mediated, respectively. ‘diff’ and ‘p-val’
provide the mean difference (across treatment or mediator states) and the
p-value of a two-sample t-test, respectively.
In order to assess the direct and indirect effects of health insurance coverage on general health, we consider estimation based on (i) Theorem 1 and expression (3) derived by Tchetgen Tchetgen and Shpitser (2012), as well as (ii) Theorem 2 and expression (3). We estimate the nuisance parameters and treatment effects in the same way as outlined in Section 5 (i.e., post-lasso regression for modeling the nuisance parameters and 3-fold cross-fitting for effect estimation) after augmenting the set of covariates with 3,101 interaction and higher-order terms. The trimming threshold for discarding observations with too extreme propensity scores is set to $0.02$ (2%), such that 893 and 136 observations are dropped when basing estimation on Theorems 1 and 2, respectively.
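As a rough illustration of this procedure, the sketch below outlines 3-fold cross-fitting with propensity score-based trimming. It is a stylized stand-in rather than the actual implementation in the ‘causalweight’ package: `fit_nuisance` (returning plug-in nuisance estimates, e.g. from post-lasso regressions, with a `propensity` method) and `score_contributions` (evaluating the efficient score of Theorem 1 or 2 on the held-out fold) are hypothetical placeholders.

```python
import numpy as np
from sklearn.model_selection import KFold


def cross_fitted_effect(Y, D, M, X, fit_nuisance, score_contributions,
                        n_folds=3, trim=0.02):
    """3-fold cross-fitting with propensity score-based trimming (sketch).

    Nuisance parameters are fit on two folds and the score contributions
    psi^b_i are evaluated on the held-out fold; observations with
    propensity scores outside [trim, 1 - trim] are discarded, mirroring
    the trimming rule described in the text.
    """
    folds = KFold(n_splits=n_folds, shuffle=True, random_state=0)
    psi_b = []
    for train, test in folds.split(X):
        nuis = fit_nuisance(Y[train], D[train], M[train], X[train])
        p_hat = nuis.propensity(X[test])               # estimated Pr(D=1|X)
        keep = (p_hat > trim) & (p_hat < 1 - trim)     # trim extreme scores
        psi_b.append(score_contributions(
            Y[test][keep], D[test][keep], M[test][keep], X[test][keep], nuis))
    psi_b = np.concatenate(psi_b)
    return psi_b.mean(), psi_b.std() / np.sqrt(len(psi_b))
```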
Table 3 provides the estimated effects along with their standard errors (‘se’) and p-values (‘p-val’), as well as the estimated mean potential outcome under non-treatment for comparison (‘$\hat{E}[Y(0,M(0))]$’). The ATEs of
health insurance coverage on general health in the year 2008 (columns 2 and
8), estimated based on Theorems 1 or 2, are statistically significant at the
5% and 10% levels, respectively. As the outcome is measured on an ordinal
scale ranging from ‘excellent’ to ‘poor’, the negative ATEs suggest a short-term health-improving effect of health insurance coverage. The direct effects under treatment (columns 3 and 9) and under non-treatment (columns 4 and 10) are very similar to the ATEs and statistically significant (at least) at the 10% level in 3 out of 4 cases. In contrast, the indirect effects under treatment
(columns 5 and 11) and non-treatment (columns 6 and 12) are generally close to
zero and not statistically significant at the 10% level in 3 out of 4 cases.
Thus, health insurance coverage does not seem to importantly affect general
health of young adults in the U.S. through routine checkups in the short run,
but rather through other mechanisms.
Table 3: Total, direct, and indirect effects on general health in 2008
| Estimations based on Theorem 1 | Estimations based on Theorem 2
---|---|---
| $\hat{\Delta}$ | $\hat{\theta}(1)$ | $\hat{\theta}(0)$ | $\hat{\delta}(1)$ | $\hat{\delta}(0)$ | $\hat{E}[Y(0,M(0))]$ | $\hat{\Delta}$ | $\hat{\theta}(1)$ | $\hat{\theta}(0)$ | $\hat{\delta}(1)$ | $\hat{\delta}(0)$ | $\hat{E}[Y(0,M(0))]$
effect | -0.06 | -0.06 | -0.06 | -0.00 | 0.00 | 2.34 | -0.05 | -0.07 | -0.05 | 0.00 | 0.02 | 2.29
se | 0.03 | 0.04 | 0.03 | 0.01 | 0.02 | 0.02 | 0.03 | 0.03 | 0.03 | 0.01 | 0.01 | 0.03
p-val | 0.04 | 0.18 | 0.04 | 0.87 | 0.99 | 0.00 | 0.10 | 0.03 | 0.10 | 0.89 | 0.07 | 0.00
Note: ‘effect’, ‘se’, and ‘p-val’ report the respective effect estimate,
standard error and p-value. Lasso regression is used for the estimation of
nuisance parameters. The propensity score-based trimming threshold is set to
$0.02$.
## 7 Conclusion
In this paper, we combined causal mediation analysis with double machine learning under selection-on-observables assumptions, an approach that avoids ad hoc pre-selection of control variables. It therefore appears particularly fruitful in high-dimensional data with many potential control variables. We
proposed estimators for natural direct and indirect effects as well as the
controlled direct effect exploiting efficient score functions, sample
splitting, and machine learning-based plug-in estimates for conditional
outcome means, mediator densities, and/or treatment propensity scores. We
demonstrated the root-$n$ consistency and asymptotic normality of the effect
estimators under specific regularity conditions. Furthermore, we investigated
the finite sample behavior of the proposed estimators in a simulation study
and found the performance to be decent in samples with several thousand
observations. Finally, we applied our method to data from the U.S. National
Longitudinal Survey of Youth 1997 and found a moderate short-term effect of
health insurance coverage on general health, which was, however, not mediated
by routine checkups. The estimators considered in the simulation study and the
application are available in the ‘causalweight’ package for the statistical
software ‘R’.
## References
* Albert (2008) Albert, J. M. (2008): “Mediation analysis via potential outcomes models,” _Statistics in Medicine_ , 27, 1282–1304.
* Albert and Nelson (2011) Albert, J. M., and S. Nelson (2011): “Generalized causal mediation analysis,” _Biometrics_ , 67, 1028–1038.
* Baicker, Taubman, Allen, Bernstein, Gruber, Newhouse, Schneider, Wright, Zaslavsky, and Finkelstein (2013) Baicker, K., S. L. Taubman, H. L. Allen, M. Bernstein, J. H. Gruber, J. P. Newhouse, E. C. Schneider, B. J. Wright, A. M. Zaslavsky, and A. N. Finkelstein (2013): “The Oregon experiment—effects of Medicaid on clinical outcomes,” _New England Journal of Medicine_ , 368(18), 1713–1722.
* Baron and Kenny (1986) Baron, R. M., and D. A. Kenny (1986): “The Moderator-Mediator Variable Distinction in Social Psychological Research: Conceptual, Strategic, and Statistical Considerations,” _Journal of Personality and Social Psychology_ , 51, 1173–1182.
* Bellani and Bia (2018) Bellani, L., and M. Bia (2018): “The long-run effect of childhood poverty and the mediating role of education,” _forthcoming in the Journal of the Royal Statistical Society: Series A (Statistics in Society)_.
* Belloni, Chernozhukov, Fernández-Val, and Hansen (2017) Belloni, A., V. Chernozhukov, I. Fernández-Val, and C. Hansen (2017): “Program Evaluation and Causal Inference with High-Dimensional Data,” _Econometrica_ , 85, 233–298.
* Bijwaard and Jones (2018) Bijwaard, G. E., and A. M. Jones (2018): “An IPW estimator for mediation effects in hazard models: with an application to schooling, cognitive ability and mortality,” _Empirical Economics_ , pp. 1–47.
* Bodory and Huber (2018) Bodory, H., and M. Huber (2018): “The causalweight package for causal inference in R,” _SES Working Paper 493, University of Fribourg_.
* Burstin, Swartz, O’Neil, Orav, and Brennan (1998) Burstin, H. R., K. Swartz, A. C. O’Neil, E. J. Orav, and T. A. Brennan (1998): “The effect of change of health insurance on access to care,” _Inquiry_ , pp. 389–397.
* Cardella and Depew (2014) Cardella, E., and B. Depew (2014): “The effect of health insurance coverage on the reported health of young adults,” _Economics Letters_ , 124(3), 406–410.
* Chernozhukov, Chetverikov, Demirer, Duflo, Hansen, Newey, and Robins (2018) Chernozhukov, V., D. Chetverikov, M. Demirer, E. Duflo, C. Hansen, W. Newey, and J. Robins (2018): “Double/debiased machine learning for treatment and structural parameters,” _The Econometrics Journal_ , 21, C1–C68.
* Cochran (1957) Cochran, W. G. (1957): “Analysis of Covariance: Its Nature and Uses,” _Biometrics_ , 13, 261–281.
* Conti, Heckman, and Pinto (2016) Conti, G., J. J. Heckman, and R. Pinto (2016): “The Effects of Two Influential Early Childhood Interventions on Health and Healthy Behaviour,” _The Economic Journal_ , 126, F28–F65.
* Díaz (2020) Díaz, I. (2020): “Machine learning in the estimation of causal effects: targeted minimum loss-based estimation and double/debiased machine learning,” _Biostatistics_ , 21(2), 353–358.
* Faulkner and Schauffler (1997) Faulkner, L., and H. Schauffler (1997): “The Effect of Health Insurance Coverage on the Appropriate Use of Recommended Clinical Preventive Services,” _American Journal of Preventive Medicine_ , 13(6), 453–458.
* Flores and Flores-Lagunes (2009) Flores, C. A., and A. Flores-Lagunes (2009): “Identification and Estimation of Causal Mechanisms and Net Effects of a Treatment under Unconfoundedness,” _IZA DP No. 4237_.
* Fowler-Brown, Corbie-Smith, Garrett, and Lurie (2007) Fowler-Brown, A., G. Corbie-Smith, J. Garrett, and N. Lurie (2007): “Risk of cardiovascular events and death—does insurance matter?,” _Journal of General Internal Medicine_ , 22(4), 502–507.
* Hahn (1998) Hahn, J. (1998): “On the role of the propensity score in efficient semiparametric estimation of average treatment effects,” _Econometrica_ , 66(2), 315–331.
* Heckman, Pinto, and Savelyev (2013) Heckman, J., R. Pinto, and P. Savelyev (2013): “Understanding the Mechanisms Through Which an Influential Early Childhood Program Boosted Adult Outcomes,” _American Economic Review_ , 103, 2052–2086.
* Hong (2010) Hong, G. (2010): “Ratio of mediator probability weighting for estimating natural direct and indirect effects,” in _Proceedings of the American Statistical Association, Biometrics Section_ , p. 2401–2415. Alexandria, VA: American Statistical Association.
* Huber (2014) Huber, M. (2014): “Identifying causal mechanisms (primarily) based on inverse probability weighting,” _Journal of Applied Econometrics_ , 29, 920–943.
* Huber (2015) Huber, M. (2015): “Causal pitfalls in the decomposition of wage gaps,” _Journal of Business and Economic Statistics_ , 33, 179–191.
* Huber, Lechner, and Mellace (2017) Huber, M., M. Lechner, and G. Mellace (2017): “Why Do Tougher Caseworkers Increase Employment? The Role of Program Assignment as a Causal Mechanism,” _The Review of Economics and Statistics_ , 99, 180–183.
* Huber, Lechner, and Strittmatter (2018) Huber, M., M. Lechner, and A. Strittmatter (2018): “Direct and indirect effects of training vouchers for the unemployed,” _Journal of the Royal Statistical Society: Series A (Statistics in Society)_ , 181, 441–463.
* Imai, Keele, and Yamamoto (2010) Imai, K., L. Keele, and T. Yamamoto (2010): “Identification, Inference and Sensitivity Analysis for Causal Mediation Effects,” _Statistical Science_ , 25, 51–71.
* Imai and Yamamoto (2013) Imai, K., and T. Yamamoto (2013): “Identification and Sensitivity Analysis for Multiple Causal Mechanisms: Revisiting Evidence from Framing Experiments,” _Political Analysis_ , 21, 141–171.
* Imbens (2004) Imbens, G. W. (2004): “Nonparametric Estimation of Average Treatment Effects under Exogeneity: A Review,” _The Review of Economics and Statistics_ , 86, 4–29.
* Judd and Kenny (1981) Judd, C. M., and D. A. Kenny (1981): “Process Analysis: Estimating Mediation in Treatment Evaluations,” _Evaluation Review_ , 5, 602–619.
* Kanamori, Suzuki, and Sugiyama (2012) Kanamori, T., T. Suzuki, and M. Sugiyama (2012): “Statistical analysis of kernel-based least-squares density-ratio estimation,” _Machine Learning_ , 86(3), 335–367.
* Kaufman, MacLehose, and Kaufman (2004) Kaufman, J. S., R. F. MacLehose, and S. Kaufman (2004): “A further critique of the analytic strategy of adjusting for covariates to identify biologic mediation,” _Epidemiologic Perspectives & Innovations_, 1, 4.
* Keele, Tingley, and Yamamoto (2015) Keele, L., D. Tingley, and T. Yamamoto (2015): “Identifying mechanisms behind policy interventions via causal mediation analysis,” _Journal of Policy Analysis and Management_ , 34, 937–963.
* King, Gakidou, Imai, Lakin, Moore, Nall, Ravishankar, Vargas, Tellez-Rojo, Avila, et al. (2009) King, G., E. Gakidou, K. Imai, J. Lakin, R. T. Moore, C. Nall, N. Ravishankar, M. Vargas, M. M. Tellez-Rojo, J. E. H. Avila, et al. (2009): “Public policy for the poor? A randomised assessment of the Mexican universal health insurance programme,” _The Lancet_ , 373(9673), 1447–1454.
* Luo and Spindler (2016) Luo, Y., and M. Spindler (2016): “High-Dimensional $L_{2}$Boosting: Rate of Convergence.”
* Maciosek, Coffield, Flottemesch, Edwards, and Solberg (2010) Maciosek, M. V., A. B. Coffield, T. J. Flottemesch, N. M. Edwards, and L. I. Solberg (2010): “Greater use of preventive services in US health care could save lives at little or no cost,” _Health Affairs_ , 29(9), 1656–1660.
* Miles, Shpitser, Kanki, Meloni, and Tchetgen Tchetgen (2020) Miles, C. H., I. Shpitser, P. Kanki, S. Meloni, and E. J. Tchetgen Tchetgen (2020): “On semiparametric estimation of a path-specific effect in the presence of mediator-outcome confounding,” _Biometrika_ , 107(1), 159–172.
* Nagelkerke (1991) Nagelkerke, N. J. D. (1991): “A note on a general definition of the coefficient of determination,” _Biometrika_ , 78, 691–692.
* Nakanishi, Tatara, and Fujiwara (1996) Nakanishi, N., K. Tatara, and H. Fujiwara (1996): “Do preventive health services reduce eventual demand for medical care?,” _Social Science & Medicine_, 43(6), 999–1005.
* Neyman (1959) Neyman, J. (1959): _Optimal asymptotic tests of composite statistical hypotheses_ , pp. 416–444. Wiley.
* Pagán, Puig, and Soldo (2007) Pagán, J. A., A. Puig, and B. J. Soldo (2007): “Health insurance coverage and the use of preventive services by Mexican adults,” _Health Economics_ , 16(12), 1359–1369.
* Pearl (2001) Pearl, J. (2001): “Direct and indirect effects,” in _Proceedings of the Seventeenth Conference on Uncertainty in Artificial Intelligence_ , pp. 411–420, San Francisco. Morgan Kaufman.
* Petersen, Sinisi, and van der Laan (2006) Petersen, M. L., S. E. Sinisi, and M. J. van der Laan (2006): “Estimation of Direct Causal Effects,” _Epidemiology_ , 17, 276–284.
* Press (2014) Press, R. (2014): “Insurance Coverage and Preventive Care Among Adults.”
* Rasmussen, Thomsen, Kilsmark, Hvenegaard, Engberg, Lauritzen, and Sogaard (2007) Rasmussen, S. R., J. L. Thomsen, J. Kilsmark, A. Hvenegaard, M. Engberg, T. Lauritzen, and J. Sogaard (2007): “Preventive health screenings and health consultations in primary care increase life expectancy without increasing costs,” _Scandinavian Journal of Public Health_ , 35(4), 365–372.
* Robins (2003) Robins, J. M. (2003): “Semantics of causal DAG models and the identification of direct and indirect effects,” in _In Highly Structured Stochastic Systems_ , ed. by P. Green, N. Hjort, and S. Richardson, pp. 70–81, Oxford. Oxford University Press.
* Robins and Greenland (1992) Robins, J. M., and S. Greenland (1992): “Identifiability and Exchangeability for Direct and Indirect Effects,” _Epidemiology_ , 3, 143–155.
* Robins and Rotnitzky (1995) Robins, J. M., and A. Rotnitzky (1995): “Semiparametric Efficiency in Multivariate Regression Models with Missing Data,” _Journal of the American Statistical Association_ , 90, 122–129.
* Robins, Rotnitzky, and Zhao (1994) Robins, J. M., A. Rotnitzky, and L. Zhao (1994): “Estimation of Regression Coefficients When Some Regressors Are not Always Observed,” _Journal of the American Statistical Association_ , 90, 846–866.
* Rubin (1974) Rubin, D. B. (1974): “Estimating Causal Effects of Treatments in Randomized and Nonrandomized Studies,” _Journal of Educational Psychology_ , 66, 688–701.
* Simon, Soni, and Cawley (2017) Simon, K., A. Soni, and J. Cawley (2017): “The impact of health insurance on preventive care and health behaviors: evidence from the first two years of the ACA Medicaid expansions,” _Journal of Policy Analysis and Management_ , 36(2), 390–417.
* Sommers, Maylone, Blendon, Orav, and Epstein (2017) Sommers, B. D., B. Maylone, R. J. Blendon, E. J. Orav, and A. M. Epstein (2017): “Three-year impacts of the Affordable Care Act: improved medical care and health among low-income adults,” _Health Affairs_ , 36(6), 1119–1128.
* Spindler, Chernozhukov, and Hansen (2016) Spindler, M., V. Chernozhukov, and C. Hansen (2016): “High-Dimensional Metrics,” _arXiv:1608.00354_.
* Sugiyama, Kawanabe, and Chui (2010) Sugiyama, M., M. Kawanabe, and P. L. Chui (2010): “Dimensionality Reduction for Density Ratio Estimation in High-dimensional Spaces,” _Neural Networks_ , 23(1), 44–59.
* Tchetgen Tchetgen (2013) Tchetgen Tchetgen, E. J. (2013): “Inverse Odds Ratio-Weighted Estimation for Causal Mediation Analysis,” _Statistics in Medicine_ , 32, 4567–4580.
* Tchetgen Tchetgen and Shpitser (2012) Tchetgen Tchetgen, E. J., and I. Shpitser (2012): “Semiparametric theory for causal mediation analysis: Efficiency bounds, multiple robustness, and sensitivity analysis,” _The Annals of Statistics_ , 40, 1816–1845.
* Ten Have, Joffe, Lynch, Brown, Maisto, and Beck (2007) Ten Have, T. R., M. M. Joffe, K. G. Lynch, G. K. Brown, S. A. Maisto, and A. T. Beck (2007): “Causal mediation analyses with rank preserving models,” _Biometrics_ , 63, 926–934.
* Tibshirani (1996) Tibshirani, R. (1996): “Regression shrinkage and selection via the LASSO,” _Journal of the Royal Statistical Society_ , 58, 267–288.
* van der Laan and Rubin (2006) van der Laan, M., and D. Rubin (2006): “Targeted Maximum Likelihood Learning,” _The International Journal of Biostatistics_ , 2, 1–38.
* van der Laan, Polley, and Hubbard (2007) van der Laan, M. J., E. C. Polley, and A. E. Hubbard (2007): “Super Learner,” _Statistical Applications in Genetics and Molecular Biology_ , 6.
* VanderWeele (2013) VanderWeele, T. (2013): “A three-way decomposition of a total effect into direct, indirect, and interactive effects,” _Epidemiology_ , 24, 224–232.
* VanderWeele (2009) VanderWeele, T. J. (2009): “Marginal Structural Models for the Estimation of Direct and Indirect Effects,” _Epidemiology_ , 20, 18–26.
* Vansteelandt, Bekaert, and Lange (2012) Vansteelandt, S., M. Bekaert, and T. Lange (2012): “Imputation Strategies for the Estimation of Natural Direct and Indirect Effects,” _Epidemiologic Methods_ , 1, 129–158.
* Yörük (2016) Yörük, B. K. (2016): “Health insurance coverage and self-reported health: new estimates from the NLSY97,” _International Journal of Health Economics and Management_ , 16(3), 285–295.
* Zheng and van der Laan (2012) Zheng, W., and M. J. van der Laan (2012): “Targeted Maximum Likelihood Estimation of Natural Direct Effects,” _The International Journal of Biostatistics_ , 8, 1–40.
Appendices
## A Simulation results for standard errors
Table A.1: Simulation results for standard errors ($p=200$)
| Coefficients given by $0.3/i^{2}$ for $i=1,...,p$ | Coefficients given by $0.5/i^{2}$ for $i=1,...,p$
---|---|---
| $n$=1000 | $n$=4000 | $n$=1000 | $n$=4000
| abias | sd | rmse | true | abias | sd | rmse | true | abias | sd | rmse | true | abias | sd | rmse | true
| Double machine learning based on Theorem 1
$se(\hat{\Delta})$ | 0.00 | 0.00 | 0.01 | 0.08 | 0.00 | 0.00 | 0.00 | 0.04 | 0.00 | 0.01 | 0.01 | 0.09 | 0.00 | 0.00 | 0.00 | 0.04
$se(\hat{\theta}(1))$ | 0.02 | 0.01 | 0.02 | 0.09 | 0.01 | 0.00 | 0.01 | 0.04 | 0.02 | 0.01 | 0.02 | 0.09 | 0.01 | 0.00 | 0.01 | 0.04
$se(\hat{\theta}(0))$ | 0.01 | 0.00 | 0.01 | 0.08 | 0.00 | 0.00 | 0.00 | 0.04 | 0.01 | 0.01 | 0.01 | 0.08 | 0.01 | 0.00 | 0.01 | 0.04
$se(\hat{\delta}(1))$ | 0.00 | 0.00 | 0.00 | 0.06 | 0.00 | 0.00 | 0.00 | 0.03 | 0.00 | 0.01 | 0.01 | 0.06 | 0.00 | 0.00 | 0.00 | 0.03
$se(\hat{\delta}(0))$ | 0.01 | 0.01 | 0.01 | 0.06 | 0.00 | 0.00 | 0.00 | 0.02 | 0.00 | 0.01 | 0.01 | 0.06 | 0.00 | 0.00 | 0.00 | 0.02
| Double machine learning based on Theorem 2
$se(\hat{\Delta})$ | 0.00 | 0.00 | 0.01 | 0.08 | 0.00 | 0.00 | 0.00 | 0.04 | 0.00 | 0.01 | 0.01 | 0.09 | 0.00 | 0.00 | 0.00 | 0.04
$se(\hat{\theta}(1))$ | 0.00 | 0.00 | 0.01 | 0.08 | 0.00 | 0.00 | 0.00 | 0.04 | 0.00 | 0.01 | 0.01 | 0.08 | 0.00 | 0.00 | 0.00 | 0.04
$se(\hat{\theta}(0))$ | 0.00 | 0.01 | 0.01 | 0.08 | 0.00 | 0.00 | 0.00 | 0.04 | 0.01 | 0.01 | 0.01 | 0.08 | 0.00 | 0.00 | 0.00 | 0.04
$se(\hat{\delta}(1))$ | 0.00 | 0.01 | 0.01 | 0.06 | 0.00 | 0.00 | 0.00 | 0.03 | 0.00 | 0.01 | 0.01 | 0.06 | 0.00 | 0.00 | 0.00 | 0.03
$se(\hat{\delta}(0))$ | 0.00 | 0.00 | 0.00 | 0.04 | 0.00 | 0.00 | 0.00 | 0.02 | 0.00 | 0.01 | 0.01 | 0.05 | 0.00 | 0.00 | 0.00 | 0.02
Note: ‘abias’, ‘sd’, and ‘rmse’ denote the absolute bias, standard deviation
and root mean squared error of the respective standard error (‘se’). ‘true’
provides the true standard deviation.
## B Proofs
For the proofs of Theorems 1 and 2, it suffices to verify the conditions of Assumptions 3.1 and 3.2 underlying Theorems 3.1 and 3.2 in Chernozhukov, Chetverikov, Demirer, Duflo, Hansen, Newey, and Robins (2018).
### B.1 Proof of Theorem 1
We first show that Assumptions 3.1 and 3.2 in Chernozhukov, Chetverikov,
Demirer, Duflo, Hansen, Newey, and Robins (2018) are satisfied for
$\Psi_{d0}=E[Y(d,M(1-d))]$ based on (3). Then, we show that Assumption 3.1
holds for $\Psi_{dm0}=E[Y(d,m)]$ based on (8), but omit the proof of the
validity of Assumption 3.2, as it follows in a very similar manner as for
$\Psi_{d0}$. All bounds hold uniformly over all probability laws
$P\in\mathcal{P}$, where $\mathcal{P}$ is the set of all possible probability
laws, and we omit $P$ for brevity.
Let $\eta=(\mu(D,M,X),f(M|D,X),p_{d}(X))$ be the vector of nuisance
parameters. Also, let $\mathcal{T}_{n}$ be the set of all $\eta=(\mu,f,p_{d})$
in a neighbourhood of $\eta_{0}$ that is shrinking with increasing $n,$
consisting of $P$-square integrable functions $\mu$, $f$, and $p_{d}$ such
that
$\displaystyle\left\|\eta-\eta_{0}\right\|_{q}$ $\displaystyle\leq$
$\displaystyle C,$ (B.1) $\displaystyle\left\|\eta-\eta_{0}\right\|_{2}$
$\displaystyle\leq$ $\displaystyle\delta_{n},$
$\displaystyle\left\|p_{d}(X)-1/2\right\|_{\infty}$ $\displaystyle\leq$
$\displaystyle 1/2-\epsilon,$
$\displaystyle\left\|f(M|D,X)-(\underline{f}+\overline{f})/2\right\|_{\infty}$
$\displaystyle\leq$ $\displaystyle(\overline{f}-\underline{f})/2,$
$\displaystyle\left\|\mu(D,M,X)-\mu_{0}(D,M,X)\right\|_{2}\times\left\|p_{d}(X)-p_{d0}(X)\right\|_{2}$
$\displaystyle\leq$ $\displaystyle\delta_{n}n^{-1/2},$
$\displaystyle\left\|\mu(D,M,X)-\mu_{0}(D,M,X)\right\|_{2}\times\left\|f(M|1-D,X)-f_{0}(M|1-D,X)\right\|_{2}$
$\displaystyle\leq$ $\displaystyle\delta_{n}n^{-1/2}.$
We furthermore replace the sequence $(\delta_{n})_{n\geq 1}$ by $(\delta_{n}^{\prime})_{n\geq 1}$, where $\delta_{n}^{\prime}=C_{\epsilon}\max(\delta_{n},n^{-1/2})$ and $C_{\epsilon}$ is a sufficiently large constant that only depends on $C$ and $\epsilon$. Let $R\equiv\overline{f}/\underline{f}$ denote the maximal ratio of densities $f(m|d,X)$.
#### B.1.1 Counterfactual $E[Y(d,M(1-d))]$
The score function for the counterfactual $\Psi_{d0}=E[Y(d,M(1-d))]$ proposed
by Tchetgen Tchetgen and Shpitser (2012) is given by the following expression,
with $W=(Y,M,D,X)$:
$\displaystyle\psi_{d}(W,\eta,\Psi_{d0})$ $\displaystyle=$
$\displaystyle\frac{I\\{D=d\\}\cdot f(M|1-d,X)}{p_{d}(X)\cdot
f(M|d,X)}\cdot[Y-\mu(d,M,X)]$
$\displaystyle+\frac{I\\{D=1-d\\}}{1-p_{d}(X)}\cdot\Big{[}\mu(d,M,X)-\overbrace{\int_{m\in\mathcal{M}}\mu(d,m,X)\cdot
f(m|1-d,X)dm}^{=:\nu(1-d,X)}\Big{]}$
$\displaystyle+\underbrace{\int_{m\in\mathcal{M}}\mu(d,m,X)\cdot
f(m|1-d,X)dm}_{=:\nu(1-d,X)}\ \ -\ \ \Psi_{d0}.$
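To make the structure of this score explicit, the following minimal sketch assembles its data-dependent part (everything except the $-\Psi_{d0}$ term) from precomputed plug-in estimates (hypothetical Python; all argument names are illustrative). The sample mean of the returned contributions yields the estimate of $\Psi_{d0}$.

```python
import numpy as np


def psi_d_b(Y, D, d, mu_d, nu_1md, f_m_d, f_m_1md, p_d):
    """Contributions psi_d^b for Psi_d0 = E[Y(d, M(1-d))] (illustrative names).

    mu_d = mu(d, M_i, X_i), nu_1md = nu(1-d, X_i), f_m_d = f(M_i|d, X_i),
    f_m_1md = f(M_i|1-d, X_i), p_d = Pr(D=d|X_i).
    """
    I_d = (D == d).astype(float)
    term1 = I_d * f_m_1md / (p_d * f_m_d) * (Y - mu_d)    # reweighted residual
    term2 = (1.0 - I_d) / (1.0 - p_d) * (mu_d - nu_1md)   # centered mu term
    return term1 + term2 + nu_1md                         # mean gives Psi_hat
```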
Assumption 3.1: Moment Condition, Linear scores and Neyman orthogonality
Assumption 3.1(a)
Moment Condition: The moment condition
$E\Big{[}\psi_{d}(W,\eta_{0},\Psi_{d0})\Big{]}=0$ is satisfied:
$\displaystyle E\Big{[}\psi_{d}(W,\eta_{0},\Psi_{d0})\Big{]}$ $\displaystyle=$
$\displaystyle E\Bigg{[}\overbrace{E\Bigg{[}\frac{I\\{D=d\\}\cdot
f_{0}(M|1-d,X)}{p_{d0}(X)\cdot
f_{0}(M|d,X)}\cdot[Y-\mu_{0}(d,M,X)]\Bigg{|}X\Bigg{]}}^{=E[E[Y-\mu_{0}(d,M,X)|D=d,M,X]|D=1-d,X]=0}\Bigg{]}$
$\displaystyle+\
E\Bigg{[}\overbrace{E\Bigg{[}\frac{I\\{D=1-d\\}}{1-p_{d0}(X)}\cdot[\mu_{0}(d,M,X)-\nu_{0}(1-d,X)]\Bigg{|}X\Bigg{]}}^{=E[\mu_{0}(d,M,X)-\nu_{0}(1-d,X)|D=1-d,X]=0}\Bigg{]}$
$\displaystyle+\ E[\nu_{0}(1-d,X)]\ \ -\ \ \Psi_{d0}$ $\displaystyle=$
$\displaystyle\Psi_{d0}\ \ -\ \ \Psi_{d0}\ \ =0,$
where the first equality follows from the law of iterated expectations. To
better see this result, note that
$\displaystyle E\Bigg{[}\frac{I\\{D=d\\}\cdot f_{0}(M|1-d,X)}{p_{d0}(X)\cdot
f_{0}(M|d,X)}\cdot[Y-\mu_{0}(d,M,X)]\Bigg{|}X\Bigg{]}$ $\displaystyle=$
$\displaystyle
E\Bigg{[}\frac{I\\{D=d\\}\cdot(1-p_{d0}(M,X))}{p_{d0}(M,X)\cdot(1-p_{d0}(X))}\cdot[Y-\mu_{0}(d,M,X)]\Bigg{|}X\Bigg{]}$
$\displaystyle=$ $\displaystyle
E\Bigg{[}E\Bigg{[}\frac{I\\{D=d\\}}{p_{d0}(M,X)}\cdot[Y-\mu_{0}(d,M,X)]\Bigg{|}M,X\Bigg{]}\cdot\frac{(1-p_{d0}(M,X))}{(1-p_{d0}(X))}\Bigg{|}X\Bigg{]}$
$\displaystyle=$ $\displaystyle
E\Bigg{[}E[Y-\mu_{0}(d,M,X)|D=d,M,X]\cdot\frac{(1-p_{d0}(M,X))}{(1-p_{d0}(X))}\Bigg{|}X\Bigg{]}$
$\displaystyle=$ $\displaystyle E[E[Y-\mu_{0}(d,M,X)|D=d,M,X]|D=1-d,X]$
$\displaystyle=$ $\displaystyle E[\mu_{0}(d,M,X)-\mu_{0}(d,M,X)|D=1-d,X]=0,$
where the first equality follows from Bayes’ Law, the second from the law of
iterated expectations, the third from basic probability theory, and the fourth
from Bayes’ Law. Furthermore,
$\displaystyle
E\Bigg{[}\frac{I\\{D=1-d\\}}{1-p_{d0}(X)}\cdot[\mu_{0}(d,M,X)-\nu_{0}(1-d,X)]\Bigg{|}X\Bigg{]}$
$\displaystyle=$ $\displaystyle
E\Bigg{[}E\Bigg{[}\frac{I\\{D=1-d\\}}{1-p_{d0}(X)}\cdot[\mu_{0}(d,M,X)-\nu_{0}(1-d,X)]\Big{|}M,X\Bigg{]}\Bigg{|}X\Bigg{]}$
$\displaystyle=$ $\displaystyle
E\Bigg{[}[\mu_{0}(d,M,X)-\nu_{0}(1-d,X)]\cdot\frac{1-p_{d0}(M,X)}{1-p_{d0}(X)}\Bigg{|}X\Bigg{]}$
$\displaystyle=$ $\displaystyle
E[\mu_{0}(d,M,X)-\nu_{0}(1-d,X)|D=1-d,X]=E[\mu_{0}(d,M,X)|D=1-d,X]-\nu_{0}(1-d,X)$
$\displaystyle=$ $\displaystyle\nu_{0}(1-d,X)-\nu_{0}(1-d,X)=0,$
where the first equality follows from the law of iterated expectations and the
third from Bayes’ Law.
Assumption 3.1(b)
Linearity: The score $\psi_{d}(W,\eta_{0},\Psi_{d0})$ is linear in $\Psi_{d0}$
as it can be written as:
$\psi_{d}(W,\eta_{0},\Psi_{d0})=\psi_{d}^{a}(W,\eta_{0})\cdot\Psi_{d0}+\psi_{d}^{b}(W,\eta_{0})$
with $\psi_{d}^{a}(W,\eta_{0})=-1$ and
$\displaystyle\psi_{d}^{b}(W,\eta_{0})$ $\displaystyle=$
$\displaystyle\frac{I\\{D=d\\}\cdot f_{0}(M|1-d,X)}{p_{d0}(X)\cdot
f_{0}(M|d,X)}[Y-\mu_{0}(d,M,X)]$ $\displaystyle+$
$\displaystyle\frac{I\\{D=1-d\\}}{1-p_{d0}(X)}\Big{[}\mu_{0}(d,M,X)-\nu_{0}(1-d,X)\Big{]}+\nu_{0}(1-d,X).$
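In particular, since $\psi_{d}^{a}(W,\eta_{0})=-1$, solving the empirical moment condition for $\Psi_{d0}$ (abstracting from the fold-specific nuisance estimates used under cross-fitting) yields the estimator as a sample average:
$\displaystyle\frac{1}{n}\sum_{i=1}^{n}\psi_{d}(W_{i},\hat{\eta},\hat{\Psi}_{d})=0\quad\Longleftrightarrow\quad\hat{\Psi}_{d}=\frac{1}{n}\sum_{i=1}^{n}\psi_{d}^{b}(W_{i},\hat{\eta}).$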
Assumption 3.1(c)
Continuity: The expression for the second Gateaux derivative of a map
$\eta\mapsto E\Big{[}\psi_{d}(W,\hat{\eta},\Psi_{d0})\Big{]}$, given in (3),
is continuous.
Assumption 3.1(d)
Neyman Orthogonality: For any $\eta\in\mathcal{T}_{n}$, the Gateaux derivative
in the direction
$\eta-\eta_{0}=(\mu(d,M,X)-\mu_{0}(d,M,X),f(M|D,X)-f_{0}(M|D,X),p_{d}(X)-p_{d0}(X))$
is given by:
$\displaystyle\partial
E\big{[}\psi_{d}(W,\eta,\Psi_{d})\big{]}\big{[}\eta-\eta_{0}\big{]}$
$\displaystyle=E\Bigg{[}\frac{\big{[}f(M|1-d,X)-f_{0}(M|1-d,X)\big{]}\cdot f_{0}(M|d,X)-\big{[}f(M|d,X)-f_{0}(M|d,X)\big{]}\cdot f_{0}(M|1-d,X)}{f_{0}^{2}(M|d,X)}\cdot\overbrace{\frac{I\\{D=d\\}}{p_{d0}(X)}\cdot\Big{(}Y-\mu_{0}(d,M,X)\Big{)}}^{E[\cdot|X]=E[Y-\mu_{0}(d,M,X)|D=d,X]=0}\Bigg{]}$
$\displaystyle\underbrace{-\ E\Bigg{[}\underbrace{\frac{I\\{D=1-d\\}}{1-p_{d0}(X)}}_{E[\cdot|X]=1}\cdot\partial E[\nu_{0}(1-d,X)][f(M|1-d,X)-f_{0}(M|1-d,X)]\Bigg{]}+\partial E[\nu_{0}(1-d,X)][f(M|1-d,X)-f_{0}(M|1-d,X)]}_{=0}$ $\displaystyle-\
E\Bigg{[}\underbrace{\frac{I\\{D=d\\}\cdot f_{0}(M|1-d,X)}{p_{d0}(X)\cdot
f_{0}(M|d,X)}\cdot\Big{(}Y-\mu_{0}(d,M,X)\Big{)}}_{E[\cdot|X]=E[E[Y-\mu_{0}(d,M,X)|D=d,M,X]|D=1-d,X]=0}\cdot\frac{p_{d}(X)-p_{d0}(X)}{p_{d0}(X)}\Bigg{]}$
$\displaystyle+\
E\Bigg{[}\underbrace{\frac{I\\{D=1-d\\}}{(1-p_{d0}(X))}\cdot\Big{(}\mu_{0}(d,M,X)-\nu_{0}(1-d,X)\Big{)}}_{E[\cdot|X]=E[\mu_{0}(d,M,X)-\nu_{0}(1-d,X)|D=1-d,X]=0}\cdot\frac{p_{d}(X)-p_{d0}(X)}{(1-p_{d0}(X))}\Bigg{]}$
$\displaystyle-\ \underbrace{E\Bigg{[}\frac{I\\{D=d\\}\cdot
f_{0}(M|1-d,X)}{p_{d0}(X)\cdot
f_{0}(M|d,X)}\cdot\Big{[}\mu(d,M,X)-\mu_{0}(d,M,X)\Big{]}\Bigg{]}}_{E[\cdot]=E[E[\cdot|M,X]]=E\Big{[}\frac{p_{d0}(M,X)\cdot
f_{0}(M|1-d,X)}{p_{d0}(X)\cdot
f_{0}(M|d,X)}\cdot[\mu(d,M,X)-\mu_{0}(d,M,X)]\Big{]}}$ ($*$) $\displaystyle+\
\underbrace{E\Bigg{[}\frac{I\\{D=1-d\\}}{1-p_{d0}(X)}\cdot\Big{[}\mu(d,M,X)-\mu_{0}(d,M,X)\Big{]}\Bigg{]}}_{E[\cdot]=E[E[\cdot|M,X]]=E\Big{[}\frac{1-p_{d0}(M,X)}{1-p_{d0}(X)}\cdot[\mu(d,M,X)-\mu_{0}(d,M,X)]\Big{]}}$
($**$) $\displaystyle\underbrace{-\
E\Bigg{[}\underbrace{\frac{I\\{D=1-d\\}}{1-p_{d0}(X)}\cdot\partial
E[\nu_{0}(1-d,X)][\mu(d,M,X)-\mu_{0}(d,M,X)]}_{E[\cdot|X]=\frac{1-p_{d0}(X)}{1-p_{d0}(X)}\cdot\partial
E[\nu_{0}(1-d,X)][\mu(d,M,X)-\mu_{0}(d,M,X)]}\Bigg{]}+\partial
E[\nu_{0}(1-d,X)][\mu(d,M,X)-\mu_{0}(d,M,X)]}_{=0},$
where terms $(*)$ and $(**)$ cancel out by Bayes’ Law, since $\frac{p_{d0}(M,X)\cdot f_{0}(M|1-d,X)}{p_{d0}(X)\cdot f_{0}(M|d,X)}=\frac{p_{d0}(M,X)\cdot(1-p_{d0}(M,X))}{p_{d0}(M,X)\cdot(1-p_{d0}(X))}=\frac{1-p_{d0}(M,X)}{1-p_{d0}(X)}$.
Thus, it follows that:
$\displaystyle\partial
E\big{[}\psi_{d}(W,\eta,\Psi_{d})\big{]}\big{[}\eta-\eta_{0}\big{]}=0$
proving that the score function is orthogonal.
Assumption 3.1(e)
Singular values of $E[\psi_{d}^{a}(W;\eta_{0})]$ are bounded: This holds
trivially, because $\psi_{d}^{a}(W,\eta_{0})=-1.$
Assumption 3.2: Score regularity and quality of nuisance parameter estimators
Assumption 3.2(a)
This assumption follows directly from the regularity conditions (Assumption 4)
and the definition of $\mathcal{T}_{n}$ given in (B.1).
Assumption 3.2(b)
Bounds for $m_{n}$:
We have
$\displaystyle\left\|\mu_{0}(D,M,X)\right\|_{q}$ $\displaystyle=$
$\displaystyle\left(E\left[\left|\mu_{0}(D,M,X)\right|^{q}\right]\right)^{\frac{1}{q}}=\left(\sum_{d\in\\{0,1\\}}E\left[\left|\mu_{0}(d,M,X)\right|^{q}Pr(D=d|M,X)\right]\right)^{\frac{1}{q}}$
$\displaystyle\geq$
$\displaystyle\epsilon^{1/q}\left(\sum_{d\in\\{0,1\\}}E\left[\left|\mu_{0}(d,M,X)\right|^{q}\right]\right)^{\frac{1}{q}}$
$\displaystyle\geq$
$\displaystyle\epsilon^{1/q}\left(\max_{d\in\\{0,1\\}}E\left[\left|\mu_{0}(d,M,X)\right|^{q}\right]\right)^{\frac{1}{q}}$
$\displaystyle=$
$\displaystyle\epsilon^{1/q}\max_{d\in\\{0,1\\}}\left(E\left[\left|\mu_{0}(d,M,X)\right|^{q}\right]\right)^{\frac{1}{q}}=\epsilon^{1/q}\max_{d\in\\{0,1\\}}\left\|\mu_{0}(d,M,X)\right\|_{q}.$
The first equality follows by definition, the second from the law of total probability, and the first inequality from $Pr(D=d|M,X)\geq\epsilon$. Using the same line of argument, we get that
$\left\|f_{0}(M|D,X)\right\|_{q}\geq\epsilon^{1/q}\max_{d\in\\{0,1\\}}\left\|f_{0}(M|d,X)\right\|_{q}.$
Also, by Jensen’s inequality
$\left\|\mu_{0}(D,M,X)\right\|_{q}\leq\left\|Y\right\|_{q}$, such that for any
$d\in\\{0,1\\}$:
$\displaystyle\left\|\mu_{0}(d,M,X)\right\|_{q}$ $\displaystyle\leq$
$\displaystyle C/\epsilon^{1/q},$ (B.2)
$\displaystyle\left\|f_{0}(M|d,X)\right\|_{q}$ $\displaystyle\leq$
$\displaystyle C/\epsilon^{1/q},$
because of $\left\|Y\right\|_{q}\leq C$ by Assumption 4(a).
Similarly, for any $\eta\in\mathcal{T}_{n}$ we obtain:
$\displaystyle\left\|\mu(d,M,X)-\mu_{0}(d,M,X)\right\|_{q}$
$\displaystyle\leq$ $\displaystyle C/\epsilon^{1/q},$
$\displaystyle\left\|f(M|d,X)-f_{0}(M|d,X)\right\|_{q}$ $\displaystyle\leq$
$\displaystyle C/\epsilon^{1/q},$
due to the definition of $\mathcal{T}_{n}$ given in (B.1).
Also,
$\displaystyle\left\|\nu_{0}(1-d,X)\right\|_{q}$ $\displaystyle=$
$\displaystyle\left(E\left[\left|\nu_{0}(1-d,X)\right|^{q}\right]\right)^{\frac{1}{q}}=\left(E\left[\left|\int_{m\in\mathcal{M}}\mu_{0}(d,m,X)\cdot
f_{0}(m|1-d,X)dm\right|^{q}\right]\right)^{\frac{1}{q}}$ $\displaystyle\leq$
$\displaystyle\left(E\left[\int_{m\in\mathcal{M}}\left|\mu_{0}(d,m,X)\right|^{q}\cdot
f_{0}(m|1-d,X)dm\right]\right)^{\frac{1}{q}}$ $\displaystyle=$
$\displaystyle\left(E\left[\int_{m\in\mathcal{M}}\left|\mu_{0}(d,m,X)\right|^{q}\cdot
f(m|d,X)\cdot\frac{f_{0}(m|1-d,X)}{f(m|d,X)}dm\right]\right)^{\frac{1}{q}}$
$\displaystyle\leq$ $\displaystyle
R^{1/q}\left(E\left[\int_{m\in\mathcal{M}}\left|\mu_{0}(d,m,X)\right|^{q}\cdot
f_{0}(m|d,X)dm\right]\right)^{\frac{1}{q}}$ $\displaystyle=$ $\displaystyle
R^{1/q}\left\|\mu_{0}(d,M,X)\right\|_{q}\leq\epsilon^{1/q}R^{1/q}\left\|\mu_{0}(D,M,X)\right\|_{q}$
where we make use of the definition of $\nu_{0}$, Jensen’s inequality, and the
boundedness of the ratio of densities. We therefore obtain
$\left\|\nu_{0}(1-d,X)\right\|_{q}\leq C/(\epsilon^{1/q}R^{1/q})$ by
inequality (B.2).
This permits bounding the following quantities:
$\displaystyle\left\|\mu(d,M,X)\right\|_{q}$ $\displaystyle\leq$
$\displaystyle\left\|\mu(d,M,X)-\mu_{0}(d,M,X)\right\|_{q}+\left\|\mu_{0}(d,M,X)\right\|_{q}\leq
2C/\epsilon^{1/q},$ (B.4) $\displaystyle\left\|\nu(1-d,X)\right\|_{q}$
$\displaystyle\leq$
$\displaystyle\left\|\nu(1-d,X)-\nu_{0}(1-d,X)\right\|_{q}+\left\|\nu_{0}(1-d,X)\right\|_{q}\leq
2C/(\epsilon^{1/q}R^{1/q}),$ $\displaystyle|\Psi_{d0}|$ $\displaystyle=$
$\displaystyle|E[\nu_{0}(1-d,X)]|\leq
E\Big{[}\left|\nu_{0}(1-d,X)\right|^{1}\Big{]}^{\frac{1}{1}}=\left\|\nu_{0}(1-d,X)\right\|_{1}$
$\displaystyle\leq$
$\displaystyle\left\|\nu_{0}(1-d,X)\right\|_{2}\leq\left\|Y\right\|_{2}/(\epsilon^{1/2}R^{1/2})\overbrace{\leq}^{q>2}\left\|Y\right\|_{q}/(\epsilon^{1/2}R^{1/2})\leq
C/(\epsilon^{1/2}R^{1/2}),$
using the triangular inequality, Jensen’s inequality, and properties of
statistical $l_{q}$ norms.
Finally, rearranging $\psi_{d}(W,\eta,\Psi_{d0})$
$\displaystyle\psi_{d}(W,\eta,\Psi_{d0})$ $\displaystyle=$
$\displaystyle\underbrace{\frac{I\\{D=d\\}\cdot f_{0}(M|1-d,X)}{p_{d}(X)\cdot
f_{0}(M|d,X)}\cdot Y}_{=I_{1}}$ $\displaystyle+$
$\displaystyle\underbrace{\bigg{(}\frac{I\\{D=1-d\\}}{1-p_{d}(X)}-\frac{I\\{D=d\\}\cdot
f_{0}(M|1-d,X)}{p_{d}(X)\cdot f_{0}(M|d,X)}\bigg{)}\cdot\mu(d,M,X)}_{=I_{2}}$
$\displaystyle+$
$\displaystyle\underbrace{\bigg{(}1-\frac{I\\{D=1-d\\}}{1-p_{d}(X)}\bigg{)}\nu(1-d,X)}_{=I_{3}}-\Psi_{d0},$
provides
$\displaystyle\left\|\psi_{d}(W,\eta,\Psi_{d0})\right\|_{q}$
$\displaystyle\leq$
$\displaystyle\left\|I_{1}\right\|_{q}+\left\|I_{2}\right\|_{q}+\left\|I_{3}\right\|_{q}+\left\|\Psi_{d0}\right\|_{q}$
$\displaystyle\leq$
$\displaystyle\frac{R}{\epsilon}\left\|Y\right\|_{q}+\frac{1+R}{\epsilon}\left\|\mu(d,M,X)\right\|_{q}+$
$\displaystyle+$
$\displaystyle\frac{1-\epsilon}{\epsilon}\left\|\nu(1-d,X)\right\|_{q}+|\Psi_{d0}|$
$\displaystyle\leq$ $\displaystyle
C\left(\frac{R}{\epsilon}+\frac{2}{\epsilon^{1+1/q}}\left(1+R+\frac{1-\epsilon}{R^{1/q}}\right)+\frac{1}{\epsilon^{1/2}R^{1/2}}\right),$
making use of the triangular inequality and inequalities (B.4). This provides
the upper bound on $m_{n}$ in Assumption 3.2(b).
Bound for $m^{\prime}_{n}$:
We note that
$\Big{(}E[|\psi_{d}^{a}(W,\eta)|^{q}]\Big{)}^{1/q}=1,$
which provides the upper bound on $m^{\prime}_{n}$ in Assumption 3.2(b).
Assumption 3.2(c)
Bound for $r_{n}$:
For any $\eta=(\mu,f,p_{d})$ we have
$\Big{|}E\Big{(}\psi_{d}^{a}(W,\eta)-\psi_{d}^{a}(W,\eta_{0})\Big{)}\Big{|}=|1-1|=0\leq\delta^{\prime}_{n},$
providing the bound on $r_{n}$ in Assumption 3.2(c).
Bound for $r^{\prime}_{n}$:
Using the triangular inequality
$\displaystyle\left\|\psi_{d}(W,\eta,\Psi_{d0})-\psi_{d}(W,\eta_{0},\Psi_{d0})\right\|_{2}\leq\left\|I\\{D=d\\}\cdot
Y\cdot\left(\frac{f(M|1-d,X)}{p_{d}(X)f(M|d,X)}-\frac{f_{0}(M|1-d,X)}{p_{d0}(X)f_{0}(M|d,X)}\right)\right\|_{2}$
$\displaystyle+$
$\displaystyle\left\|I\\{D=d\\}\cdot\left(\frac{\mu(d,M,X)f(M|1-d,X)}{p_{d}(X)f(M|d,X)}-\frac{\mu_{0}(d,M,X)f_{0}(M|1-d,X)}{p_{d0}(X)f_{0}(M|d,X)}\right)\right\|_{2}$
$\displaystyle+$
$\displaystyle\left\|I\\{D=1-d\\}\cdot\left(\frac{\mu(d,M,X)}{1-p_{d}(X)}-\frac{\mu_{0}(d,M,X)}{1-p_{d0}(X)}\right)\right\|_{2}+\left\|I\\{D=1-d\\}\cdot\left(\frac{\nu(1-d,X)}{1-p_{d}(X)}-\frac{\nu_{0}(1-d,X)}{1-p_{d0}(X)}\right)\right\|_{2}$
$\displaystyle+$ $\displaystyle\left\|\nu(1-d,X)-\nu_{0}(1-d,X)\right\|_{2}$
$\displaystyle\leq$ $\displaystyle\delta_{n}\left(\frac{C\cdot
R^{2}}{\epsilon^{2}}+\frac{C\cdot
R^{2}}{\epsilon^{2}}\left(\frac{1}{\epsilon^{1/2}}+\frac{C}{\epsilon^{1/2}}\right)+\frac{1}{\epsilon^{2}}\left(\frac{1}{\epsilon^{1/2}}+\frac{C}{\epsilon^{1/2}}\right)+\frac{1}{\epsilon^{2}}\left(\frac{1}{\epsilon^{1/2}R^{1/2}}+\frac{C}{R^{1/2}}\right)+\frac{1}{\epsilon^{1/2}R^{1/2}}\right)\leq\delta^{\prime}_{n},$
as long as $C_{\epsilon}$ in the definition of $\delta_{n}^{\prime}$ is
sufficiently large. This gives the bound on $r^{\prime}_{n}$ in Assumption
3.2(c). In order to show the second-to-last inequality, we provide bounds for the terms below, where we make use of the facts that
$\left\|\mu(d,M,X)-\mu_{0}(d,M,X)\right\|_{2}\leq\delta_{n}/\epsilon^{1/2}$
and
$\left\|\nu(1-d,X)-\nu_{0}(1-d,X)\right\|_{2}\leq\delta_{n}/(\epsilon^{1/2}R^{1/2}),$
which follow using similar steps as in Assumption 3.1(b) of Chernozhukov, Chetverikov, Demirer, Duflo, Hansen, Newey, and Robins (2018).
For the first term:
$\displaystyle\left\|I\\{D=d\\}\cdot
Y\cdot\left(\frac{f(M|1-d,X)}{p_{d}(X)f(M|d,X)}-\frac{f_{0}(M|1-d,X)}{p_{d0}(X)f_{0}(M|d,X)}\right)\right\|_{2}\leq
C\cdot\left\|\frac{f(M|1-d,X)}{p_{d}(X)f(M|d,X)}-\frac{f_{0}(M|1-d,X)}{p_{d0}(X)f_{0}(M|d,X)}\right\|_{2}$
$\displaystyle\leq$
$\displaystyle\frac{C}{\epsilon^{2}\underline{f}^{2}}\left\|f(M|1-d,X)f_{0}(M|1-d,X)p_{d0}(X)-f(M|1-d,X)f_{0}(M|1-d,X)p_{d}(X)\right\|_{2}$
$\displaystyle\leq$
$\displaystyle\frac{C\cdot\overline{f}^{2}}{\epsilon^{2}\underline{f}^{2}}\left\|p_{d0}(X)-p_{d}(X)\right\|_{2}\leq\delta_{n}\frac{C\cdot
R^{2}}{\epsilon^{2}},$
where we use $\left\|E[Y^{2}|d,M,X]\right\|_{\infty}\leq C^{2}$ (see our
Assumption 4(a)) in the first inequality.
For the second term:
$\displaystyle\left\|I\\{D=d\\}\cdot\left(\frac{\mu(d,M,X)f(M|1-d,X)}{p_{d}(X)f(M|d,X)}-\frac{\mu_{0}(d,M,X)f_{0}(M|1-d,X)}{p_{d0}(X)f_{0}(M|d,X)}\right)\right\|_{2}$
$\displaystyle\leq$
$\displaystyle\left\|\frac{\mu(d,M,X)f(M|1-d,X)}{p_{d}(X)f(M|d,X)}-\frac{\mu_{0}(d,M,X)f_{0}(M|1-d,X)}{p_{d0}(X)f_{0}(M|d,X)}\right\|_{2}$
$\displaystyle\leq$
$\displaystyle\frac{C}{\epsilon^{2}\underline{f}^{2}}\left\|\mu(d,M,X)f(M|1-d,X)f_{0}(M|1-d,X)p_{d0}(X)-\mu_{0}(d,M,X)f(M|1-d,X)f_{0}(M|1-d,X)p_{d}(X)\right\|_{2}$
$\displaystyle\leq$
$\displaystyle\frac{C\overline{f}^{2}}{\epsilon^{2}\underline{f}^{2}}\left\|\mu(d,M,X)p_{d0}(X)-\mu_{0}(d,M,X)p_{d}(X)\right\|_{2}$
$\displaystyle=$ $\displaystyle\frac{C\cdot
R^{2}}{\epsilon^{2}}\left\|\mu(d,M,X)p_{d0}(X)-\mu_{0}(d,M,X)p_{d}(X)+\mu_{0}(d,M,X)p_{d0}(X)-\mu_{0}(d,M,X)p_{d0}(X)\right\|_{2}$
$\displaystyle\leq$ $\displaystyle\frac{C\cdot
R^{2}}{\epsilon^{2}}\big{(}\left\|p_{d0}(X)(\mu(d,M,X)-\mu_{0}(d,M,X))\right\|_{2}+\left\|\mu_{0}(d,M,X)(p_{d0}(X)-p_{d}(X))\right\|_{2}\big{)}$
$\displaystyle\leq$ $\displaystyle\frac{C\cdot
R^{2}}{\epsilon^{2}}\big{(}\left\|\mu(d,M,X)-\mu_{0}(d,M,X)\right\|_{2}+C\left\|(p_{d0}(X)-p_{d}(X))\right\|_{2}\big{)}$
$\displaystyle\leq$ $\displaystyle\frac{C\cdot
R^{2}}{\epsilon^{2}}\left(\frac{\delta_{n}}{\epsilon^{1/2}}+C\delta_{n}\right)=\delta_{n}\frac{C\cdot
R^{2}}{\epsilon^{2}}\left(\frac{1}{\epsilon^{1/2}}+C\right)$
where the fifth inequality follows from
$E[Y^{2}|D=d,M,X]\geq(E[Y|D=d,M,X])^{2}=\mu^{2}_{0}(d,M,X)$ by conditional
Jensen’s inequality and therefore $\left\|\mu_{0}(d,M,X)\right\|_{\infty}\leq
C^{2}.$
For the third term:
$\displaystyle\left\|I\\{D=1-d\\}\cdot\left(\frac{\mu(d,M,X)}{1-p_{d}(X)}-\frac{\mu_{0}(d,M,X)}{1-p_{d0}(X)}\right)\right\|_{2}\leq\left\|\frac{\mu(d,M,X)}{1-p_{d}(X)}-\frac{\mu_{0}(d,M,X)}{1-p_{d0}(X)}\right\|_{2}$
$\displaystyle\leq$
$\displaystyle\frac{1}{\epsilon^{2}}\left\|\mu(d,M,X)p_{1-d,0}-\mu_{0}(d,M,X)p_{1-d}\right\|_{2}$
$\displaystyle=$
$\displaystyle\frac{1}{\epsilon^{2}}\left\|\mu(d,M,X)p_{1-d,0}-\mu_{0}(d,M,X)p_{1-d}+\mu_{0}(d,M,X)p_{1-d,0}-\mu_{0}(d,M,X)p_{1-d,0}\right\|_{2}$
$\displaystyle\leq$
$\displaystyle\frac{1}{\epsilon^{2}}\big{(}\left\|p_{1-d,0}(\mu(d,M,X)-\mu_{0}(d,M,X))\right\|_{2}+\left\|\mu_{0}(d,M,X)(p_{1-d,0}-p_{1-d})\right\|_{2}\big{)}$
$\displaystyle\leq$
$\displaystyle\frac{1}{\epsilon^{2}}\big{(}\left\|\mu(d,M,X)-\mu_{0}(d,M,X)\right\|_{2}+C\left\|p_{1-d,0}-p_{1-d}\right\|_{2}\big{)}$
$\displaystyle\leq$
$\displaystyle\frac{1}{\epsilon^{2}}\left(\frac{\delta_{n}}{\epsilon^{1/2}}+C\delta_{n}\right)=\delta_{n}\frac{1}{\epsilon^{2}}\left(\frac{1}{\epsilon^{1/2}}+C\right).$
For the fourth term:
$\displaystyle\left\|I\\{D=1-d\\}\cdot\left(\frac{\nu(1-d,X)}{1-p_{d}(X)}-\frac{\nu_{0}(1-d,X)}{1-p_{d0}(X)}\right)\right\|_{2}$
$\displaystyle\leq$
$\displaystyle\frac{1}{\epsilon^{2}}\big{(}\left\|p_{1-d,0}(\nu(1-d,X)-\nu_{0}(1-d,X))\right\|_{2}+\left\|\nu_{0}(1-d,X)(p_{1-d,0}-p_{1-d})\right\|_{2}\big{)}$
$\displaystyle\leq$
$\displaystyle\frac{1}{\epsilon^{2}}\big{(}\left\|\nu(1-d,X)-\nu_{0}(1-d,X)\right\|_{2}+\frac{C}{R^{1/2}}\left\|p_{1-d,0}-p_{1-d}\right\|_{2}\big{)}$
$\displaystyle\leq$
$\displaystyle\frac{1}{\epsilon^{2}}\left(\frac{\delta_{n}}{\epsilon^{1/2}R^{1/2}}+\frac{C}{R^{1/2}}\delta_{n}\right)=\delta_{n}\frac{1}{\epsilon^{2}}\left(\frac{1}{\epsilon^{1/2}R^{1/2}}+\frac{C}{R^{1/2}}\right),$
where we used Jensen’s inequality, similarly as in Section B.1.1, in order to get $E[\nu^{2}_{0}(1-d,X)]\leq R\cdot E[\mu_{0}^{2}(d,M,X)]$ and hence $\left\|\nu_{0}(1-d,X)\right\|_{\infty}\leq C^{2}/R$.
Bound on $\lambda^{\prime}_{n}$: Consider
$f(r):=E[\psi(W,\eta_{0}+r(\eta-\eta_{0}),\Psi_{d0})]$
We subsequently omit arguments for the sake of brevity and use $p_{d}=p_{d}(X)$, $f_{d}=f(M|d,X)$, $\mu=\mu(d,M,X)$, $\nu=\nu(1-d,X)$, and similarly $p_{d0},f_{d0},\mu_{0},\nu_{0}.$
For any $r\in(0,1):$
$\displaystyle\frac{\partial^{2}f(r)}{\partial r^{2}}$ $\displaystyle=$
$\displaystyle E\Bigg{[}2\cdot
I\\{D=1-d\\}\frac{(\mu-\mu_{0})(p_{d}-p_{d0})}{\left(1-p_{d0}+r(p_{d0}-p_{d})\right)^{2}}\Bigg{]}+E\Bigg{[}2\cdot
I\\{D=1-d\\}\frac{(\nu-\nu_{0})(p_{d}-p_{d0})}{\left(1-p_{d0}+r(p_{d0}-p_{d})\right)^{2}}\Bigg{]}$
$\displaystyle+$ $\displaystyle E\Bigg{[}2\cdot
I\\{D=d\\}\frac{(f_{d}-f_{d0})(f_{1-d}-f_{1-d,0})\left(Y-\mu_{0}-r(\mu-\mu_{0})\right)}{\left(p_{d0}+r(p_{d}-p_{d0})\right)\left(f_{d0}+r(f_{d}-f_{d0})\right)^{2}}\Bigg{]}$
$\displaystyle+$ $\displaystyle E\Bigg{[}2\cdot
I\\{D=d\\}\frac{(p_{d}-p_{d0})(f_{1-d}-f_{1-d,0})\left(Y-\mu_{0}-r(\mu-\mu_{0})\right)}{\left(p_{d0}+r(p_{d}-p_{d0})\right)^{2}\left(f_{d0}+r(f_{d}-f_{d0})\right)}\Bigg{]}$
$\displaystyle+$ $\displaystyle E\Bigg{[}2\cdot
I\\{D=d\\}\frac{(f_{d}-f_{d0})\left(f_{1-d,0}+r(f_{1-d}-f_{1-d,0})\right)\left(-(\mu-\mu_{0})\right)}{\left(p_{d0}+r(p_{d}-p_{d0})\right)\left(f_{d0}+r(f_{d}-f_{d0})\right)^{2}}\Bigg{]}$
$\displaystyle+$ $\displaystyle E\Bigg{[}2\cdot
I\\{D=d\\}\frac{(p_{d}-p_{d0})\left(f_{1-d,0}+r(f_{1-d}-f_{1-d,0})\right)\left(-(\mu-\mu_{0})\right)}{\left(p_{d0}+r(p_{d}-p_{d0})\right)^{2}\left(f_{d0}+r(f_{d}-f_{d0})\right)}\Bigg{]}$
$\displaystyle+$ $\displaystyle E\Bigg{[}(-2)\cdot
I\\{D=d\\}\frac{\left(f_{1-d}-f_{1-d,0}\right)(\mu-\mu_{0})}{\left(p_{d0}+r(p_{d}-p_{d0})\right)\left(f_{d0}+r(f_{d}-f_{d0})\right)}\Bigg{]}$
$\displaystyle+$ $\displaystyle E\Bigg{[}2\cdot
I\\{D=d\\}\frac{(f_{d}-f_{d0})^{2}\left(f_{1-d,0}+r(f_{1-d}-f_{1-d,0})\right)\left(Y-\mu_{0}-r(\mu-\mu_{0})\right)}{\left(p_{d0}+r(p_{d}-p_{d0})\right)\left(f_{d0}+r(f_{d}-f_{d0})\right)^{3}}\Bigg{]}$
$\displaystyle+$ $\displaystyle E\Bigg{[}2\cdot
I\\{D=d\\}\frac{(f_{d}-f_{d0})(p_{d}-p_{d0})\left(f_{1-d,0}+r(f_{1-d}-f_{1-d,0})\right)\left(Y-\mu_{0}-r(\mu-\mu_{0})\right)}{\left(p_{d0}+r(p_{d}-p_{d0})\right)^{2}\left(f_{d0}+r(f_{d}-f_{d0})\right)^{2}}\Bigg{]}$
$\displaystyle+$ $\displaystyle E\Bigg{[}2\cdot
I\\{D=d\\}\frac{(p_{d}-p_{d0})^{2}\left(f_{1-d,0}+r(f_{1-d}-f_{1-d,0})\right)\left(Y-\mu_{0}-r(\mu-\mu_{0})\right)}{\left(p_{d0}+r(p_{d}-p_{d0})\right)^{3}\left(f_{d0}+r(f_{d}-f_{d0})\right)}\Bigg{]}$
$\displaystyle+$ $\displaystyle E\Bigg{[}2\cdot
I\\{D=1-d\\}\frac{\left(\mu_{0}-\nu_{0}\right)(p_{d}-p_{d0})^{2}}{\left(1-p_{d0}+r(p_{d0}-p_{d})\right)^{3}}\Bigg{]}$
$\displaystyle+$ $\displaystyle E\Bigg{[}2\cdot
I\\{D=1-d\\}\frac{\left(r(\mu-\mu_{0})-r(\nu-\nu_{0})\right)(p_{d}-p_{d0})^{2}}{\left(1-p_{d0}+r(p_{d0}-p_{d})\right)^{3}}\Bigg{]}$
Note that the following inequalities can be shown to hold using similar steps
as in Assumption 3.1(b) of Chernozhukov, Chetverikov, Demirer, Duflo, Hansen,
Newey, and Robins (2018):
$\displaystyle\left\|\mu-\mu_{0}\right\|_{2}$ $\displaystyle=$
$\displaystyle\left\|\mu(d,M,X)-\mu_{0}(d,M,X)\right\|_{2}\leq\left\|\mu(D,M,X)-\mu_{0}(D,M,X)\right\|_{2}/\epsilon^{1/2}\leq\delta_{n}/\epsilon^{1/2},$
$\displaystyle\left\|\nu-\nu_{0}\right\|_{2}$ $\displaystyle=$
$\displaystyle\left\|\nu(1-d,X)-\nu_{0}(1-d,X)\right\|_{2}\leq\left\|\mu(D,M,X)-\mu_{0}(D,M,X)\right\|_{2}/\epsilon^{1/2}R^{1/2}\leq\delta_{n}/(\epsilon^{1/2}R^{1/2}),$
These inequalities together with our Assumption 4 imply
$\displaystyle E[Y-\mu_{0}(d,M,X)|D=d,M,X]$ $\displaystyle=$ $\displaystyle
0,$ $\displaystyle|p_{d}-p_{d0}|$ $\displaystyle\leq$ $\displaystyle 2,$
$\displaystyle\left\|\mu_{0}\right\|_{q}\leq\left\|Y\right\|_{q}/\epsilon^{1/q}$
$\displaystyle\leq$ $\displaystyle C/\epsilon^{1/q}$
$\displaystyle\left\|\mu-\mu_{0}\right\|_{2}\times\left\|p_{d}-p_{d0}\right\|_{2}$
$\displaystyle\leq$ $\displaystyle\delta_{n}n^{-1/2}/\epsilon^{1/2},$
$\displaystyle\left\|\mu-\mu_{0}\right\|_{2}\times\left\|f_{1-d}-f_{1-d,0}\right\|_{2}$
$\displaystyle\leq$ $\displaystyle\delta_{n}n^{-1/2}/(\epsilon R^{1/2}),$
for all $d\in\\{1,0\\}$ and consequently
$\left\|\nu-\nu_{0}\right\|_{2}\times\left\|p_{d}-p_{d0}\right\|_{2}\leq\delta_{n}n^{-1/2}/(\epsilon^{1/2}R^{1/2}).$
Putting everything together, we get that for some value
$C_{\epsilon}^{\prime\prime}$ that only depends on $C$ and $\epsilon$
$\left|\frac{\partial^{2}f(r)}{\partial r^{2}}\right|\leq
C_{\epsilon}^{\prime\prime}\delta_{n}n^{-1/2}\leq\delta_{n}^{\prime}n^{-1/2}.$
This gives the upper bound on $\lambda^{\prime}_{n}$ in Assumption 3.2(c) as
long as $C_{\epsilon}$ in the definition of $\delta^{\prime}_{n}$ satisfies
$C_{\epsilon}\geq C_{\epsilon}^{\prime\prime}$.
In order to verify that this inequality holds we consider all the terms in
$\frac{\partial^{2}f(r)}{\partial r^{2}}$ separately. For the first term we
obtain
$\displaystyle\left|E\Bigg{[}2\cdot
I\\{D=1-d\\}\frac{(\mu-\mu_{0})(p_{d}-p_{d0})}{\left(1-p_{d0}+r(p_{d0}-p_{d})\right)^{2}}\Bigg{]}\right|\leq\frac{2}{\epsilon^{3}}\left|E\Bigg{[}(\mu-\mu_{0})(p_{d}-p_{d0})\Bigg{]}\right|\leq\frac{2}{\epsilon^{3}}\frac{\delta_{n}}{\epsilon^{1/2}R^{1/2}}n^{-1/2},$
where we made use of the fact that $1\geq
p_{d0}+r(p_{d}-p_{d0})=(1-r)p_{d0}+rp_{d}\geq(1-r)\epsilon+r\epsilon=\epsilon,$
$\underline{f}\leq f_{d0}+r(f_{d}-f_{d0})\leq\overline{f}$, and Hölder’s inequality. For the third term we obtain
$\displaystyle\left|E\Bigg{[}2\cdot
I\\{D=d\\}\frac{(f_{d}-f_{d0})(f_{1-d}-f_{1-d,0})\left(Y-\mu_{0}-r(\mu-\mu_{0})\right)}{\left(p_{d0}+r(p_{d}-p_{d0})\right)\left(f_{d0}+r(f_{d}-f_{d0})\right)^{2}}\Bigg{]}\right|$
$\displaystyle\leq$
$\displaystyle\frac{2}{\epsilon\underline{f}^{2}}(\overline{f}-\underline{f})^{2}\left|E\Bigg{[}I\\{D=d\\}\left(Y-\mu_{0}\right)\Bigg{]}\right|+\frac{2}{\epsilon\underline{f}^{2}}\left|E\Bigg{[}I\\{D=d\\}(f_{d}-f_{d0})(f_{1-d}-f_{1-d,0})r(\mu-\mu_{0})\Bigg{]}\right|$
$\displaystyle\leq$
$\displaystyle\frac{2}{\epsilon\underline{f}^{2}}(\overline{f}-\underline{f})\left|E\Bigg{[}1\cdot(f_{1-d}-f_{1-d,0})(\mu-\mu_{0})\Bigg{]}\right|\leq\frac{2}{\epsilon\underline{f}^{2}}(\overline{f}-\underline{f})\frac{\delta_{n}}{\epsilon^{1/2}}n^{-1/2}.$
And for the second-to-last term, we obtain
$\displaystyle\left|E\Bigg{[}2\cdot
I\\{D=1-d\\}\frac{\left(\mu_{0}-\nu_{0}\right)(p_{d}-p_{d0})^{2}}{\left(1-p_{d0}+r(p_{d0}-p_{d})\right)^{3}}\Bigg{]}\right|$
$\displaystyle=$
$\displaystyle\left|E\Bigg{[}\overbrace{I\\{D=1-d\\}\frac{\left(\mu_{0}-\nu_{0}\right)}{p_{1-d,0}}}^{E[E[\cdot|M,X]|X]=0}\cdot\frac{p_{1-d,0}(p_{d}-p_{d0})^{2}}{\left(1-p_{d0}+r(p_{d0}-p_{d})\right)^{3}}\Bigg{]}\right|=0.$
All the remaining terms are bounded similarly.
Assumption 3.2(d)
Finally, we consider
$\displaystyle E\Big{[}(\psi_{d}(W,\eta,\Psi_{d0}))^{2}\Big{]}$
$\displaystyle=$ $\displaystyle
E\Bigg{[}\Bigg{(}\underbrace{\frac{I\\{D=d\\}\cdot
f_{0}(M|1-d,X)}{p_{d}(X)\cdot
f_{0}(M|d,X)}\cdot\left(Y-\mu_{0}(d,M,X)\right)}_{=I_{1}}$ $\displaystyle+$
$\displaystyle\underbrace{\bigg{(}\frac{I\\{D=1-d\\}}{1-p_{d}(X)}\bigg{)}\cdot\left(\mu_{0}(d,M,X)-\nu_{0}(1-d,X)\right)}_{=I_{2}}+\underbrace{\nu_{0}(1-d,X)-\Psi_{d0}}_{=I_{3}}\Bigg{)}^{2}\Bigg{]}$
$\displaystyle=$ $\displaystyle E[I_{1}^{2}+I_{2}^{2}+I_{3}^{2}]\geq
E[I^{2}_{1}]$ $\displaystyle=$ $\displaystyle
E\Bigg{[}\Bigg{(}\frac{I\\{D=d\\}\cdot f_{0}(M|1-d,X)}{p_{d}(X)\cdot
f_{0}(M|d,X)}\Bigg{)}^{2}\left(Y-\mu_{0}(d,M,X)\right)^{2}\Bigg{]}$
$\displaystyle\geq$
$\displaystyle\frac{\underline{f}^{2}}{(1-\epsilon)\overline{f}^{2}}E\Bigg{[}\left(Y-\mu_{0}(d,M,X)\right)^{2}\Bigg{]}\geq\frac{c^{2}}{(1-\epsilon)R^{2}}>0,$
where the second equality follows from
$\displaystyle E\Big{[}I_{1}\cdot I_{2}\Big{]}$ $\displaystyle=$
$\displaystyle E\Bigg{[}\overbrace{\frac{I\\{D=d\\}\cdot
f_{0}(M|1-d,X)}{p_{d}(X)\cdot
f_{0}(M|d,X)}\frac{I\\{D=1-d\\}}{1-p_{d}(X)}}^{I\\{D=d\\}\cdot
I\\{D=1-d\\}=0}\cdot\left(Y-\mu_{0}(d,M,X)\right)\cdot\left(\mu_{0}(d,M,X)-\nu_{0}(1-d,X)\right)\Bigg{]},$
$\displaystyle E\Big{[}I_{2}\cdot I_{3}\Big{]}$ $\displaystyle=$
$\displaystyle
E\Bigg{[}\overbrace{\frac{I\\{D=1-d\\}}{1-p_{d}(X)}\cdot\left(\mu_{0}(d,M,X)-\nu_{0}(1-d,X)\right)}^{E[\cdot|X]=0}\cdot(\nu_{0}(1-d,X)-\Psi_{d0})\Bigg{]},$
$\displaystyle E\Big{[}I_{1}\cdot I_{3}\Big{]}$ $\displaystyle=$
$\displaystyle E\Bigg{[}\overbrace{\frac{I\\{D=d\\}\cdot
f_{0}(M|1-d,X)}{p_{d}(X)\cdot
f_{0}(M|d,X)}\cdot\left(Y-\mu_{0}(d,M,X)\right)}^{E[\cdot|X]=0}\cdot(\nu_{0}(1-d,X)-\Psi_{d0})\Bigg{]}.$
#### B.1.2 Counterfactual $E[Y(d,m)]$
The score for the estimation of $E[Y(d,m)]$ based on (8) is given by:
$\displaystyle\psi_{dm}(W,\eta,\Psi_{dm0})=\ $ $\displaystyle\frac{I\\{D=d\\}\cdot
I\\{M=m\\}\cdot[Y-\mu(d,m,X)]}{f(m|d,X)\cdot
p_{d}(X)}+\mu(d,m,X)-\Psi_{dm0}.$
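In implementation terms the score is a per-observation quantity, and solving the empirical moment condition amounts to averaging it over the sample. The following minimal sketch is our own illustration and not from the paper: the function names and the convention that the nuisance estimates $\hat{\mu}$, $\hat{f}$ and $\hat{p}_{d}$ arrive as precomputed arrays are assumptions, and the cross-fitting used to produce those estimates is omitted.

```python
import numpy as np

def psi_dm(Y, D, M, d, m, mu_hat, f_hat, p_hat):
    """Score psi_dm(W, eta, Psi_dm0) at Psi_dm0 = 0, evaluated per observation.

    mu_hat[i] estimates mu(d, m, X_i) = E[Y | D=d, M=m, X=X_i],
    f_hat[i]  estimates f(m | d, X_i), and p_hat[i] estimates p_d(X_i).
    """
    ind = ((D == d) & (M == m)).astype(float)
    return ind * (Y - mu_hat) / (f_hat * p_hat) + mu_hat

def estimate_E_Ydm(Y, D, M, d, m, mu_hat, f_hat, p_hat):
    # By linearity in Psi_dm0 with slope -1 (Assumption 3.1(b) below), the
    # root of the empirical moment condition is the sample mean of the score.
    return float(np.mean(psi_dm(Y, D, M, d, m, mu_hat, f_hat, p_hat)))
```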
Assumption 3.1: Moment Condition, Linear scores and Neyman orthogonality
Assumption 3.1(a)
Moment condition: The moment condition
$E\Big{[}\psi_{dm}(W,\eta_{0},\Psi_{dm0})\Big{]}=0$ is satisfied:
$\displaystyle E\Big{[}\psi_{dm}(W,\eta_{0},\Psi_{dm0})\Big{]}=$
$\displaystyle\ E\Bigg{[}\frac{I\\{D=d\\}\cdot
I\\{M=m\\}\cdot[Y-\mu_{0}(d,m,X)]}{f_{0}(m|d,X)\cdot
p_{d0}(X)}+\mu_{0}(d,m,X)-\Psi_{dm0}\Bigg{]}$ $\displaystyle=\ $
$\displaystyle E\Bigg{[}\overbrace{E\Bigg{[}\frac{I\\{D=d\\}\cdot
I\\{M=m\\}\cdot[Y-\mu_{0}(d,m,X)]}{f_{0}(m|d,X)\cdot
p_{d0}(X)}\Bigg{|}X\Bigg{]}}^{=E[Y-\mu_{0}(d,m,X)|d,m,X]=0}\Bigg{]}+E\Big{[}\mu_{0}(d,m,X)\Big{]}-\Psi_{dm0}$
$\displaystyle=\ $ $\displaystyle\Psi_{dm0}-\Psi_{dm0}=0.$
Assumption 3.1(b)
Linearity: The score $\psi_{dm}(W,\eta_{0},\Psi_{dm0})$ is linear in
$\Psi_{dm0}$ as it can be written as:
$\psi_{dm}(W,\eta_{0},\Psi_{dm0})=\psi_{d}^{a}(W,\eta_{0})\cdot\Psi_{dm0}+\psi_{d}^{b}(W,\eta_{0})$
with $\psi_{d}^{a}(W,\eta_{0})=-1$ and
$\displaystyle\psi_{d}^{b}(W,\eta_{0})=\frac{I\\{D=d\\}\cdot
I\\{M=m\\}\cdot[Y-\mu_{0}(d,m,X)]}{f_{0}(m|d,X)\cdot p_{d0}(X)}+\mu_{0}(d,m,X).$
Assumption 3.1(c)
Continuity: The expression for the second Gateaux derivative of the map
$\eta\mapsto E\Big{[}\psi_{dm}(W,\eta,\Psi_{dm0})\Big{]}$ is continuous.
Assumption 3.1(d)
Neyman orthogonality: The Gateaux derivative in the direction
$\eta-\eta_{0}=(\mu(d,M,X)-\mu_{0}(d,M,X),f(M|D,X)-f_{0}(M|D,X),p_{d}(X)-p_{d0}(X))$
is given by:
$\displaystyle\partial E$
$\displaystyle\big{[}\psi_{dm}(W,\eta,\Psi_{dm})\big{]}\big{[}\eta-\eta_{0}\big{]}$
$\displaystyle=$
$\displaystyle\overbrace{-E\Bigg{[}\underbrace{\frac{I\\{D=d\\}\cdot
I\\{M=m\\}}{f_{0}(m|d,X)\cdot
p_{d0}(X)}}_{E[\cdot|X]=\frac{\Pr(D=d,M=m|X)}{\Pr(D=d,M=m|X)}=1}\cdot\Big{[}\mu(d,m,X)-\mu_{0}(d,m,X)\Big{]}\Bigg{]}+E\Big{[}\mu(d,m,X)-\mu_{0}(d,m,X)\Big{]}}^{=0}$
$\displaystyle-E\Bigg{[}\frac{\overbrace{I\\{D=d\\}\cdot
I\\{M=m\\}\cdot[Y-\mu_{0}(d,m,X)]}^{E[\cdot|X]=E[Y-\mu_{0}(d,m,X)|d,m,X]=0}}{f_{0}(m|d,X)\cdot
p_{d0}(X)}\cdot\frac{f(m|d,X)-f_{0}(m|d,X)}{f_{0}(m|d,X)}\Bigg{]}$
$\displaystyle-E\Bigg{[}\frac{\overbrace{I\\{D=d\\}\cdot
I\\{M=m\\}\cdot[Y-\mu_{0}(d,m,X)]}^{E[\cdot|X]=E[Y-\mu_{0}(d,m,X)|d,m,X]=0}}{f_{0}(m|d,X)\cdot
p_{d0}(X)}\cdot\frac{p_{d}(X)-p_{d0}(X)}{p_{d0}(X)}\Bigg{]}.$
Thus, it follows that:
$\displaystyle\partial
E\big{[}\psi_{dm}(W,\eta,\Psi_{dm})\big{]}\big{[}\eta-\eta_{0}\big{]}=0$
proving that the score function is orthogonal.
Assumption 3.1(e)
Singular values of $E[\psi^{a}_{d}(W;\eta_{0})]$ are bounded: This holds
trivially, because $\psi^{a}_{d}(W;\eta_{0})=-1.$
Assumption 3.2: Score regularity and quality of nuisance parameter estimators
This proof is omitted for the sake of brevity. It follows along similar lines
as the proof for $Y(d,M(1-d))$ presented in subsection B.1.1.
This concludes the proof of Theorem 1. $\hfill\square$
### B.2 Proof of Theorem 2
The alternative score for the counterfactual based on (3) is given by:
$\displaystyle\psi_{d}^{*}(W,\eta^{*},\Psi_{d0})$ $\displaystyle=$
$\displaystyle\frac{I\\{D=d\\}\cdot(1-p_{d}(M,X))}{p_{d}(M,X)\cdot(1-p_{d}(X))}\cdot\Big{[}Y-\mu(d,M,X)\Big{]}$
$\displaystyle+$
$\displaystyle\frac{I\\{D=1-d\\}}{1-p_{d}(X)}\cdot\Bigg{[}\mu(d,M,X)-\overbrace{E\Big{[}\mu(d,M,X)\Big{|}D=1-d,X\Big{]}}^{=:\omega(1-d,X)}\Bigg{]}$
$\displaystyle+$
$\displaystyle\overbrace{E\Big{[}\mu(d,M,X)\Big{|}D=1-d,X\Big{]}}^{=:\omega(1-d,X)}-\Psi_{d0}$
with $\eta^{*}=(\mu(D,M,X),\omega(D,X),p_{d}(M,X),p_{d}(X))$.
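The same implementation pattern applies to this alternative score. The sketch below is again our own illustration: the names are hypothetical, cross-fitting is omitted, and the nuisance estimates for $\mu(d,M,X)$, $\omega(1-d,X)$, $p_{d}(M,X)$ and $p_{d}(X)$ are assumed to be supplied as arrays evaluated at the sample points.

```python
import numpy as np

def psi_d_star(Y, D, d, mu_hat, omega_hat, p_dm_hat, p_d_hat):
    """Alternative score psi_d^*(W, eta^*, Psi_d0) at Psi_d0 = 0.

    mu_hat[i]    estimates mu(d, M_i, X_i),
    omega_hat[i] estimates omega(1-d, X_i) = E[mu(d, M, X) | D=1-d, X_i],
    p_dm_hat[i]  estimates p_d(M_i, X_i), and p_d_hat[i] estimates p_d(X_i).
    """
    ind_d = (D == d).astype(float)
    ind_other = (D == 1 - d).astype(float)
    term1 = ind_d * (1.0 - p_dm_hat) / (p_dm_hat * (1.0 - p_d_hat)) * (Y - mu_hat)
    term2 = ind_other / (1.0 - p_d_hat) * (mu_hat - omega_hat)
    return term1 + term2 + omega_hat

# As in B.1.2, linearity in Psi_d0 means the point estimate of E[Y(d, M(1-d))]
# is simply np.mean(psi_d_star(...)).
```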
Let $\mathcal{T}^{*}_{n}$ be the set of all $\eta^{*}$ consisting of
$P$-square integrable functions $\mu(D,M,X),\omega(D,X),p_{d}(M,X)$, and
$p_{d}(X)$ such that
$\displaystyle\left\|\eta^{*}-\eta^{*}_{0}\right\|_{q}$ $\displaystyle\leq$
$\displaystyle C,$ (B.5)
$\displaystyle\left\|\eta^{*}-\eta^{*}_{0}\right\|_{2}$ $\displaystyle\leq$
$\displaystyle\delta_{n},$ $\displaystyle\left\|p_{d}(X)-1/2\right\|_{\infty}$
$\displaystyle\leq$ $\displaystyle 1/2-\epsilon,$
$\displaystyle\left\|p_{d}(M,X)-1/2\right\|_{\infty}$ $\displaystyle\leq$
$\displaystyle 1/2-\epsilon,$
$\displaystyle\left\|\mu(D,M,X)-\mu_{0}(D,M,X)\right\|_{2}\times\left\|p_{d}(X)-p_{d0}(X)\right\|_{2}$
$\displaystyle\leq$ $\displaystyle\delta_{n}n^{-1/2},$
$\displaystyle\left\|\mu(D,M,X)-\mu_{0}(D,M,X)\right\|_{2}\times\left\|p_{d}(M,X)-p_{d0}(M,X)\right\|_{2}$
$\displaystyle\leq$ $\displaystyle\delta_{n}n^{-1/2},$
$\displaystyle\left\|\omega(D,X)-\omega_{0}(D,X)\right\|_{2}\times\left\|p_{d}(X)-p_{d0}(X)\right\|_{2}$
$\displaystyle\leq$ $\displaystyle\delta_{n}n^{-1/2}.$
We replace the sequence $(\delta_{n})_{n\geq 1}$ by
$(\delta_{n}^{\prime})_{n\geq 1},$ where
$\delta_{n}^{\prime}=C_{\epsilon}\max(\delta_{n},n^{-1/2})$ and
$C_{\epsilon}$ is a sufficiently large constant that only depends on $C$ and
$\epsilon.$
Assumption 3.1: Moment Condition, Linear scores and Neyman orthogonality
Assumption 3.1(a)
Moment condition: The moment condition
$E\Big{[}\psi_{d}^{*}(W,\eta_{0}^{*},\Psi_{d0})\Big{]}=0$ is satisfied:
$\displaystyle E\Big{[}\psi_{d}^{*}(W,\eta_{0}^{*},\Psi_{d0})\Big{]}$
$\displaystyle=$ $\displaystyle
E\Bigg{[}\overbrace{E\Bigg{[}\frac{I\\{D=d\\}\cdot(1-p_{d0}(M,X))}{{p_{d0}(M,X)\cdot(1-p_{d0}(X))}}\cdot[Y-\mu_{0}(d,M,X)]\Bigg{|}X\Bigg{]}}^{=E[E[Y-\mu_{0}(d,M,X)|D=d,M,X]|D=1-d,X]=0}\Bigg{]}$
$\displaystyle+\
E\Bigg{[}\overbrace{E\Bigg{[}\frac{I\\{D=1-d\\}}{1-p_{d0}(X)}\cdot[\mu_{0}(d,M,X)-\omega_{0}(1-d,X)]\Bigg{|}X\Bigg{]}}^{=E[\mu_{0}(d,M,X)-\omega_{0}(1-d,X)|D=1-d,X]=0}\Bigg{]}$
$\displaystyle+\ E[\omega_{0}(1-d,X)]\ \ -\ \ \Psi_{d0}$ $\displaystyle=$
$\displaystyle\Psi_{d0}\ \ -\ \ \Psi_{d0}\ \ =0.$
To better see this result, note that
$\displaystyle
E\Bigg{[}\frac{I\\{D=d\\}\cdot(1-p_{d0}(M,X))}{p_{d0}(M,X)\cdot(1-p_{d0}(X))}\cdot[Y-\mu_{0}(d,M,X)]\Bigg{|}X\Bigg{]}$
$\displaystyle=$ $\displaystyle
E\Bigg{[}E\Bigg{[}\frac{I\\{D=d\\}}{p_{d0}(M,X)}\cdot[Y-\mu_{0}(d,M,X)]\Bigg{|}M,X\Bigg{]}\cdot\frac{(1-p_{d0}(M,X))}{(1-p_{d0}(X))}\Bigg{|}X\Bigg{]}$
$\displaystyle=$ $\displaystyle
E\Bigg{[}E[Y-\mu_{0}(d,M,X)|D=d,M,X]\cdot\frac{(1-p_{d0}(M,X))}{(1-p_{d0}(X))}\Bigg{|}X\Bigg{]}$
$\displaystyle=$ $\displaystyle E[E[Y-\mu_{0}(d,M,X)|D=d,M,X]|D=1-d,X]$
$\displaystyle=$ $\displaystyle E[\mu_{0}(d,M,X)-\mu_{0}(d,M,X)|D=1-d,X]=0,$
where the first equality follows from the law of iterated expectations, the
second from basic probability theory, and the third from Bayes’ Law.
Furthermore,
$\displaystyle
E\Bigg{[}\frac{I\\{D=1-d\\}}{1-p_{d0}(X)}\cdot[\mu_{0}(d,M,X)-\omega_{0}(1-d,X)]\Bigg{|}X\Bigg{]}$
$\displaystyle=$ $\displaystyle
E\Bigg{[}E\Bigg{[}\frac{I\\{D=1-d\\}}{1-p_{d0}(X)}\cdot[\mu_{0}(d,M,X)-\omega_{0}(1-d,X)]\Big{|}M,X\Bigg{]}\Bigg{|}X\Bigg{]}$
$\displaystyle=$ $\displaystyle
E\Bigg{[}[\mu_{0}(d,M,X)-\omega_{0}(1-d,X)]\cdot\frac{1-p_{d0}(M,X)}{1-p_{d0}(X)}\Bigg{|}X\Bigg{]}$
$\displaystyle=$ $\displaystyle
E[\mu_{0}(d,M,X)-\omega_{0}(1-d,X)|D=1-d,X]=E[\mu_{0}(d,M,X)|D=1-d,X]-\omega_{0}(1-d,X)$
$\displaystyle=$ $\displaystyle\omega_{0}(1-d,X)-\omega_{0}(1-d,X)=0,$
where the first equality follows from the law of iterated expectations and the
third from Bayes’ Law.
Assumption 3.1(b)
Linearity: The score $\psi_{d}^{*}(W,\eta^{*}_{0},\Psi_{d0})$ is linear in
$\Psi_{d0}$ as it can be written as:
$\psi_{d}^{*}(W,\eta^{*}_{0},\Psi_{d0})=\psi_{d}^{a}(W,\eta^{*}_{0})\cdot\Psi_{d0}+\psi_{d}^{b}(W,\eta^{*}_{0})$
with $\psi_{d}^{a}(W,\eta^{*}_{0})=-1$ and
$\displaystyle\psi_{d}^{b}(W,\eta^{*}_{0})$ $\displaystyle=$
$\displaystyle\frac{I\\{D=d\\}\cdot(1-p_{d0}(M,X))}{p_{d0}(M,X)\cdot(1-p_{d0}(X))}\cdot\Big{[}Y-\mu_{0}(d,M,X)\Big{]}$
$\displaystyle+$
$\displaystyle\frac{I\\{D=1-d\\}}{1-p_{d0}(X)}\cdot\Big{[}\mu_{0}(d,M,X)-\omega_{0}(1-d,X)\Big{]}+\omega_{0}(1-d,X)$
Assumption 3.1(c)
Continuity: The expression for the second Gateaux derivative of a map
$\eta^{*}\mapsto E\Big{[}\psi^{*}_{d}(W,\eta^{*},\Psi_{d0})\Big{]}$ is
continuous.
Assumption 3.1(d)
Neyman orthogonality:
The Gateaux derivative in the direction
$\eta^{*}-\eta^{*}_{0}=(\mu(d,M,X)-\mu_{0}(d,M,X),\omega(1-d,X)-\omega_{0}(1-d,X),p_{d}(M,X)-p_{d0}(M,X),p_{d}(X)-p_{d0}(X))$
is given by:
$\displaystyle\partial E$
$\displaystyle\big{[}\psi_{d}^{*}(W,\eta^{*},\Psi_{d})\big{]}\big{[}\eta^{*}-\eta^{*}_{0}\big{]}$
$\displaystyle=$ $\displaystyle
E\Bigg{[}\frac{-[p_{d}(M,X)-p_{d0}(M,X)]}{p_{d0}(M,X)^{2}}\cdot\overbrace{\frac{I\\{D=d\\}}{1-p_{d0}(X)}\cdot\big{(}Y-\mu(d,M,X)\big{)}}^{E[\cdot|X]=E[Y-\mu(d,M,X)|D=d,X]\cdot\frac{p_{d0}(X)}{1-p_{d0}(X)}=0}\Bigg{]}$
$\displaystyle+$ $\displaystyle
E\Bigg{[}\overbrace{\frac{I\\{D=d\\}\cdot(1-p_{d0}(M,X))}{p_{d0}(M,X)\cdot(1-p_{d0}(X))}\cdot\big{(}Y-\mu_{0}(d,M,X)\big{)}}^{E[\cdot|X]=E[E[Y-\mu_{0}(d,M,X)|D=d,M,X]|D=1-d,X]=0}\cdot\frac{p_{d}(X)-p_{d0}(X)}{(1-p_{d0}(X))}\Bigg{]}$
$\displaystyle+$ $\displaystyle
E\Bigg{[}\underbrace{\frac{I\\{D=1-d\\}}{(1-p_{d0}(X))}\cdot\big{(}\mu_{0}(d,M,X)-\omega_{0}(1-d,X)\big{)}}_{E[\cdot|X]=E[\mu_{0}(d,M,X)-\omega_{0}(1-d,X)|D=1-d,X]=0}\cdot\frac{p_{d}(X)-p_{d0}(X)}{(1-p_{d0}(X))}\Bigg{]}$
$\displaystyle\underbrace{-E\Bigg{[}\underbrace{\frac{I\\{D=d\\}}{p_{d0}(M,X)}}_{E[\cdot|M,X]=1}\cdot\frac{(1-p_{d0}(M,X))}{(1-p_{d0}(X))}\cdot\Big{[}\mu(d,M,X)-\mu_{0}(d,M,X)\Big{]}\Bigg{]}+E\Bigg{[}\underbrace{\frac{I\\{D=1-d\\}}{1-p_{d0}(X)}}_{E[\cdot|M,X]=\frac{1-p_{d0}(M,X)}{1-p_{d0}(X)}}\cdot\Big{[}\mu(d,M,X)-\mu_{0}(d,M,X)\Big{]}\Bigg{]}}_{=0}$
$\displaystyle\underbrace{-E\Bigg{[}\underbrace{\frac{I\\{D=1-d\\}}{1-p_{d0}(X)}}_{E[\cdot|X]=1}\cdot\Big{[}\omega(1-d,X)-\omega_{0}(1-d,X)\Big{]}+\Big{[}\omega(1-d,X)-\omega_{0}(1-d,X)\Big{]}\Bigg{]}}_{=0}.$
Thus, it follows that:
$\displaystyle\partial
E\big{[}\psi_{d}^{*}(W,\eta^{*},\Psi_{d0})\big{]}\big{[}\eta^{*}-\eta^{*}_{0}\big{]}=0$
proving that the score function is orthogonal.
Assumption 3.1(e)
Singular values of $E[\psi^{a}_{d}(W;\eta^{*}_{0})]$ are bounded: This holds
trivially, because $\psi^{a}_{d}(W;\eta^{*}_{0})=-1.$
Assumption 3.2: Score regularity and quality of nuisance parameter estimators
Bounds for $m_{n},m^{\prime}_{n},r_{n},r^{\prime}_{n}$ are omitted for the
sake of brevity, because their derivations follow similarly as in the proof
for $Y(d,M(1-d))$ in subsection B.1.1. However, the proof differs in
establishing the bound on $\lambda^{\prime}_{n}$ in Assumption 3.2(c) of Chernozhukov,
Chetverikov, Demirer, Duflo, Hansen, Newey, and Robins (2018), as it is based
on the regularity conditions in Assumption 5 that include $p_{d}(M,X)$ and
$\omega(1-d,X)$.
Bound for $\lambda^{\prime}_{n}$: Consider
$f(r):=E[\psi_{d}^{*}(W,\eta^{*}_{0}+r(\eta^{*}-\eta^{*}_{0}),\Psi_{d0})].$
We subsequently omit arguments for the sake of brevity and use
$\mu=\mu(d,M,X),\omega=\omega(1-d,X),p_{d}=p_{d}(X),p_{dm}=p_{d}(M,X)$ and
similarly $\mu_{0},\omega_{0},p_{d0},p_{dm0}.$
For any $r\in(0,1):$
$\displaystyle\frac{\partial^{2}f(r)}{\partial r^{2}}$ $\displaystyle=$
$\displaystyle E\Bigg{[}(-2)\cdot
I\\{D=d\\}\frac{(p_{dm}-p_{dm0})(p_{d}-p_{d0})\left(Y-\mu_{0}-r(\mu-\mu_{0})\right)}{\left(p_{dm0}+r(p_{dm}-p_{dm0})\right)\left(1-p_{d0}+r(p_{d0}-p_{d})\right)^{2}}\Bigg{]}$
$\displaystyle+$ $\displaystyle E\Bigg{[}2\cdot
I\\{D=d\\}\frac{(p_{dm}-p_{dm0})^{2}\left(Y-\mu_{0}-r(\mu-\mu_{0})\right)}{\left(p_{dm0}+r(p_{dm}-p_{dm0})\right)^{2}\left(1-p_{d0}+r(p_{d0}-p_{d})\right)}\Bigg{]}$
$\displaystyle+$ $\displaystyle E\Bigg{[}2\cdot
I\\{D=d\\}\frac{\left(1-p_{dm0}+r(p_{dm0}-p_{dm})\right)(p_{d}-p_{d0})(\mu-\mu_{0})}{\left(p_{dm0}+r(p_{dm}-p_{dm0})\right)\left(1-p_{d0}+r(p_{d0}-p_{d})\right)^{2}}\Bigg{]}$
$\displaystyle+$ $\displaystyle E\Bigg{[}(-2)\cdot
I\\{D=d\\}\frac{\left(1-p_{dm0}+r(p_{dm0}-p_{dm})\right)(p_{dm}-p_{dm0})\left(Y-\mu_{0}-r(\mu-\mu_{0})\right)}{\left(p_{dm0}+r(p_{dm}-p_{dm0})\right)^{2}\left(1-p_{d0}+r(p_{d0}-p_{d})\right)}\Bigg{]}$
$\displaystyle+$ $\displaystyle E\Bigg{[}(-2)\cdot
I\\{D=d\\}\frac{(p_{d}-p_{d0})^{2}\left(1-p_{dm0}+r(p_{dm0}-p_{dm})\right)\left(Y-\mu_{0}-r(\mu-\mu_{0})\right)}{\left(p_{dm0}+r(p_{dm}-p_{dm0})\right)\left(1-p_{d0}+r(p_{d0}-p_{d})\right)^{3}}\Bigg{]}$
$\displaystyle+$ $\displaystyle E\Bigg{[}(-2)\cdot
I\\{D=d\\}\frac{(p_{dm}-p_{dm0})(p_{d}-p_{d0})\left(1-p_{dm0}+r(p_{dm0}-p_{dm})\right)\left(Y-\mu_{0}-r(\mu-\mu_{0})\right)}{\left(p_{dm0}+r(p_{dm}-p_{dm0})\right)^{2}\left(1-p_{d0}+r(p_{d0}-p_{d})\right)^{2}}\Bigg{]}$
$\displaystyle+$ $\displaystyle E\Bigg{[}(-2)\cdot
I\\{D=d\\}\frac{(p_{dm}-p_{dm0})^{2}\left(1-p_{dm0}+r(p_{dm0}-p_{dm})\right)\left(Y-\mu_{0}-r(\mu-\mu_{0})\right)}{\left(p_{dm0}+r(p_{dm}-p_{dm0})\right)^{3}\left(1-p_{d0}+r(p_{d0}-p_{d})\right)}\Bigg{]}$
$\displaystyle+$ $\displaystyle E\Bigg{[}2\cdot
I\\{D=d\\}\frac{(p_{dm}-p_{dm0})(\mu-\mu_{0})}{\left(p_{dm0}+r(p_{dm}-p_{dm0})\right)\left(1-p_{d0}+r(p_{d0}-p_{d})\right)}\Bigg{]}$
$\displaystyle+$ $\displaystyle E\Bigg{[}2\cdot
I\\{D=1-d\\}\frac{(p_{d}-p_{d0})^{2}\left(\mu_{0}-\omega_{0}\right)}{\left(1-p_{d0}+r(p_{d0}-p_{d})\right)^{3}}\Bigg{]}+E\Bigg{[}2\cdot
I\\{D=1-d\\}\frac{(p_{d}-p_{d0})^{2}r\left((\mu-\mu_{0})-(\omega-\omega_{0})\right)}{\left(1-p_{d0}+r(p_{d0}-p_{d})\right)^{3}}\Bigg{]}$
$\displaystyle+$ $\displaystyle E\Bigg{[}2\cdot
I\\{D=1-d\\}\frac{(p_{d}-p_{d0})\left(\mu-\mu_{0}\right)}{\left(1-p_{d0}+r(p_{d0}-p_{d})\right)^{2}}\Bigg{]}+E\Bigg{[}(-2)\cdot
I\\{D=1-d\\}\frac{(p_{d}-p_{d0})\left(\omega-\omega_{0}\right)}{\left(1-p_{d0}+r(p_{d0}-p_{d})\right)^{2}}\Bigg{]}$
Bounding these twelve terms proceeds as in subsection B.1.1. In
order to bound the eighth term, we make use of the sixth inequality in (B.5).
Similarly, for bounding the tenth and the twelfth terms we make use of the
last inequality in (B.5). Thus, we get that for some
$C_{\epsilon}^{\prime\prime}$ that only depends on $C$ and $\epsilon$
$\left|\frac{\partial^{2}f(r)}{\partial r^{2}}\right|\leq
C_{\epsilon}^{\prime\prime}\delta_{n}n^{-1/2}\leq\delta_{n}^{\prime}n^{-1/2}.$
This provides the upper bound on $\lambda^{\prime}_{n}$ in Assumption 3.2(c)
of Chernozhukov, Chetverikov, Demirer, Duflo, Hansen, Newey, and Robins (2018)
as long as $C_{\epsilon}\geq C_{\epsilon}^{\prime\prime}$.
This concludes the proof of Theorem 2. $\hfill\square$
# Large deviation properties of the empirical measure of a metastable small noise diffusion
Paul Dupuis and Guo-Jhen Wu
Division of Applied Mathematics, Brown University, Providence, USA. Research supported in part by the National Science Foundation (DMS-1904992) and the AFOSR (FA-9550-18-1-0214).
Department of Mathematics, KTH Royal Institute of Technology, Stockholm, Sweden. Research supported in part by the AFOSR (FA-9550-18-1-0214). <EMAIL_ADDRESS> (Corresponding author).
###### Abstract
The aim of this paper is to develop tractable large deviation approximations
for the empirical measure of a small noise diffusion. The starting point is
the Freidlin-Wentzell theory, which shows how to approximate via a large
deviation principle the invariant distribution of such a diffusion. The rate
function of the invariant measure is formulated in terms of quasipotentials,
quantities that measure the difficulty of a transition from the neighborhood
of one metastable set to another. The theory provides an intuitive and useful
approximation for the invariant measure, and along the way many useful related
results (e.g., transition rates between metastable states) are also developed.
With the specific goal of design of Monte Carlo schemes in mind, we prove
large deviation limits for integrals with respect to the empirical measure,
where the process is considered over a time interval whose length grows as the
noise decreases to zero. In particular, we show how the first and second
moments of these integrals can be expressed in terms of quasipotentials. When
the dynamics of the process depend on parameters, these approximations can be
used for algorithm design, and applications of this sort will appear
elsewhere. The use of a small noise limit is well motivated, since in this
limit good sampling of the state space becomes most challenging. The proof
exploits a regenerative structure, and a number of new techniques are needed
to turn large deviation estimates over a regenerative cycle into estimates for
the empirical measure and its moments.
Keywords: Large deviations, Freidlin-Wentzell theory, small noise diffusion,
empirical measure, quasipotential, Monte Carlo method
## 1 Introduction
Among the many interesting results proved by Freidlin and Wentzell in the 70’s
and 80’s concerning small random perturbations of dynamical systems, one of
particular note is the large deviation principle for the invariant measure of
such a system. Consider the small noise diffusion
$dX_{t}^{\varepsilon}=b(X_{t}^{\varepsilon})dt+\sqrt{\varepsilon}\sigma(X_{t}^{\varepsilon})dW_{t},\quad
X_{0}^{\varepsilon}=x,$
where $X_{t}^{\varepsilon}\in\mathbb{R}^{d}$,
$b:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}$,
$\sigma:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}\times\mathbb{R}^{k}$ (the
$d\times k$ matrices) and $W_{t}\in\mathbb{R}^{k}$ is a standard Brownian
motion. Under mild regularity conditions on $b$ and $\sigma$, one has that for
any $T\in(0,\infty)$ the processes
$\\{X_{\cdot}^{\varepsilon}\\}_{\varepsilon>0}$ satisfy a large deviation
principle on $C([0,T]:\mathbb{R}^{d})$ with rate function
$I_{T}(\phi)\doteq\int_{0}^{T}\sup_{\alpha\in\mathbb{R}^{d}}\left[\langle\dot{\phi}_{t},\alpha\rangle-\left\langle
b(\phi_{t}),\alpha\right\rangle-\frac{1}{2}\left\|\sigma(\phi_{t})^{\prime}\alpha\right\|^{2}\right]dt$
when $\phi$ is absolutely continuous and $\phi(0)=x$, and $I_{T}(\phi)=\infty$
otherwise. If $\sigma(x)\sigma(x)^{\prime}>0$ (in the sense of symmetric
square matrices) for all $x\in\mathbb{R}^{d}$, then one can evaluate the
supremum and find
$I_{T}(\phi)=\int_{0}^{T}\frac{1}{2}\left\langle\dot{\phi}_{t}-b(\phi_{t}),\left[\sigma(\phi_{t})\sigma(\phi_{t})^{\prime}\right]^{-1}(\dot{\phi}_{t}-b(\phi_{t}))\right\rangle
dt.$ (1.1)
To simplify the discussion we will assume this non-degeneracy condition. It is
also assumed by Freidlin and Wentzell in [12], but can be weakened.
Define the quasipotential $V(x,y)$ for $x,y\in\mathbb{R}^{d}$ by
$V(x,y)\doteq\inf\left\\{I_{T}(\phi):\phi(0)=x,\phi(T)=y,T<\infty\right\\}.$
Suppose that $\\{X^{\varepsilon}\\}$ is ergodic on a compact manifold
$M\subset\mathbb{R}^{d}$ with invariant measure
$\mu^{\varepsilon}\in\mathcal{P}(M)$. Then under a number of additional
assumptions, including assumptions on the structure of the dynamical system
$\dot{X}_{t}^{0}=b(X_{t}^{0})$, Freidlin and Wentzell [12, Chapter 6] show how
to construct a function $J:M\rightarrow[0,\infty]$ in terms of $V$, such that
$J$ is the large deviation rate function for
$\\{\mu^{\varepsilon}\\}_{\varepsilon>0}$: $J$ has compact level sets, and
$\liminf_{\varepsilon\rightarrow
0}\varepsilon\log\mu^{\varepsilon}(G)\geq-\inf_{y\in G}J(y)\text{ for open
}G\subset M,$ $\limsup_{\varepsilon\rightarrow
0}\varepsilon\log\mu^{\varepsilon}(F)\leq-\inf_{y\in F}J(y)\text{ for closed
}F\subset M.$
This gives a very useful approximation to $\mu^{\varepsilon}$, and along the
way many interesting related results (e.g., transition rates between
metastable states) are also developed.
The aim of this paper is to develop large deviation type estimates for a
quantity that is closely related to $\mu^{\varepsilon}$, which is the
empirical measure over an interval $[0,T^{\varepsilon}]$. This is defined by
$\rho^{\varepsilon}(A)\doteq\frac{1}{T^{\varepsilon}}\int_{0}^{T^{\varepsilon}}1_{A}(X_{s}^{\varepsilon})ds$
(1.2)
for $A\in\mathcal{B}(M)$. For reasons that will be made precise later on, we
will assume $T^{\varepsilon}\rightarrow\infty$ as $\varepsilon\rightarrow 0$,
and typically $T^{\varepsilon}$ will grow exponentially in the form
$e^{c/\varepsilon}$ for some $c>0$.
There is of course a large deviation theory for the empirical measure when
$\varepsilon>0$ is held fixed and the length of the time interval tends to
infinity (see e.g., [7, 8]). However, it can be hard to extract information
from the corresponding rate function. Our interest in proving large deviations
estimates when $\varepsilon\rightarrow 0$ and
$T^{\varepsilon}\rightarrow\infty$ is in the hope that one will find it easier
to extract information in this double limit, analogous to the simplified
approximation to $\mu^{\varepsilon}$ just mentioned. These results will be
applied in [10] to analyze and optimize a Monte Carlo method known as infinite
swapping [9, 15] when the noise is small. Small noise models are common in
applications, and are also the setting in which Monte Carlo methods can have
the greatest difficulty. We expect that the general set of results will be
useful for other purposes as well.
We note that while developed in the context of small noise diffusions, the
collection of results due to Freidlin and Wentzell that are discussed in [12]
also hold for other classes of processes, such as scaled stochastic networks,
when appropriate conditions are assumed and the finite time sample path large
deviation results are available (see, e.g., [19]). We expect that such
generalizations are possible for the results we prove as well.
The outline of the paper is as follows. In Section 2 we explain our motivation
and the relevance for studying the particular quantities that are the topic of
the paper. In Section 3 we provide definitions and assumptions that are used
throughout the paper, and Section 4 states the main asymptotic results as well
as a related conjecture. Examples that illustrate the results are given in
Section 5. In Section 6 we introduce an important tool for our analysis — the
regenerative structure, and with this concept, we decompose the original
asymptotic problem into two sub-problems that require very different forms of
analysis. These two types of asymptotic problems are then analyzed separately
in Sections 7 and 8. In Section 9 we combine the partial asymptotic
results from Sections 7 and 8 to prove the main large deviation type
results that were stated in Section 4. Section 10 gives the proof of a key
theorem from Section 8, which asserts an approximately exponential
distribution for return times that arise in the decomposition based on
regenerative structure, as well as a tail bound needed for some integrability
arguments. The last section of the paper, Section 11, presents the proof of an
upper bound for the rate of decay of the variance per unit time in the context
of a special case, thereby showing for the case that the lower bounds of
Section 4 are in a sense tight. To focus on the main discussion, proofs of
some lemmas are collected in an Appendix.
###### Remark 1.1.
There are certain time-scaling parameters that play key roles throughout this
paper. For the reader’s convenience, we record here where they are first
described: $h_{1}$ and $w$ are defined in (4.1) and (4.2); $c$ is introduced
and its relation to $h_{1}$ and $w$ are given in Theorem 4.3; $m$ is
introduced at the beginning of Subsection 6.2.
## 2 Quantities of Interest
The quantities we are interested in are the higher order moments, and in
particular second moments, of an integral of a risk-sensitive functional with
respect to the empirical measure $\rho^{\varepsilon}$ defined in (1.2). To be
more precise, the integral is of the form
$\int_{M}e^{-\frac{1}{\varepsilon}f\left(x\right)}1_{A}\left(x\right)\rho^{\varepsilon}\left(dx\right)$
(2.1)
for some nice (e.g., bounded and continuous) function
$f:M\rightarrow\mathbb{R}$ and a closed set $A\in\mathcal{B}(M)$. Note that
this integral can also be expressed as
$\frac{1}{T^{\varepsilon}}\int_{0}^{T^{\varepsilon}}e^{-\frac{1}{\varepsilon}f\left(X_{t}^{\varepsilon}\right)}1_{A}\left(X_{t}^{\varepsilon}\right)dt.$
(2.2)
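Before turning to the asymptotics, it may help to see how (2.2) is computed along a single simulated trajectory. The sketch below is our own illustration, not part of the paper: it uses a one-dimensional Euler-Maruyama discretization with $\sigma=1$, and the double-well drift $b=-U^{\prime}$ with $U(x)=(x^{2}-1)^{2}/4$ (stable equilibria at $x=\pm 1$) is an arbitrary illustrative choice.

```python
import numpy as np

def risk_sensitive_average(eps, T, dt, f, A, b, x0=-1.0, seed=0):
    """Approximate (2.2): (1/T) * int_0^T exp(-f(X_t)/eps) 1_A(X_t) dt,
    along one Euler-Maruyama path of dX = b(X) dt + sqrt(eps) dW."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    noise = np.sqrt(eps * dt) * rng.standard_normal(n)
    x, acc = x0, 0.0
    for k in range(n):
        if A[0] <= x <= A[1]:
            acc += np.exp(-f(x) / eps) * dt
        x += b(x) * dt + noise[k]
    return acc / T

est = risk_sensitive_average(eps=0.25, T=2e3, dt=1e-2,
                             f=lambda x: 0.0, A=(0.5, 1.5),
                             b=lambda x: -x * (x * x - 1.0))
```

For small $\varepsilon$ a single run of this kind exhibits exactly the sampling difficulty discussed below: the path spends an exponentially long time near one equilibrium before crossing to the other.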
In order to understand the large deviation behavior of moments of such an
integral, we must identify the correct scaling to extract meaningful
information. Moreover, as will be shown, there is an important difference
between centered moments and ordinary (non-centered) moments.
By the use of the regenerative structure of $\\{X_{t}^{\varepsilon}\\}_{t\geq
0}$, we can decompose (2.2) [equivalently (2.1)] into the sum of a random
number of independent and identically distributed (iid) random variables, plus
a residual term which here we will ignore. To simplify the notation, we
temporarily drop the $\varepsilon$, and without being precise about how the
regenerative structure is introduced, let $Y_{j}$ denote the integral of
$e^{-\frac{1}{\varepsilon}f\left(X_{t}^{\varepsilon}\right)}1_{A}\left(X_{t}^{\varepsilon}\right)$
over a regenerative cycle. (The specific regenerative structure we use will be
identified later on.)
Thus we consider a sequence $\\{Y_{j}\\}_{j\in\mathbb{N}}$ of iid random
variables with finite second moments, and want to compare the scaling
properties of, for example, the second moment and the second centered moment
of $\frac{1}{n}\sum_{j=1}^{n}Y_{j}$. When used for the small noise system,
both $n$ and moments of $Y_{i}$ will scale exponentially in $1/\varepsilon$,
and $n$ will be random, but for now we assume $n$ is deterministic. The second
moment is
$\displaystyle
E\left(\frac{1}{n}\sum_{k=1}^{n}Y_{k}\right)^{2}=\frac{1}{n^{2}}\sum_{k=1}^{n}E\left(Y_{k}\right)^{2}+\frac{1}{n^{2}}\sum_{i,j:i\neq
j}E\left(Y_{i}Y_{j}\right)=\left(EY_{1}\right)^{2}+\frac{1}{n}\mathrm{Var}\left(Y_{1}\right),$
and the second centered moment is
$E\left(\frac{1}{n}\sum_{k=1}^{n}\left(Y_{k}-EY_{1}\right)\right)^{2}=\mathrm{Var}\left(\frac{1}{n}\sum_{k=1}^{n}Y_{k}\right)=\frac{1}{n}\mathrm{Var}\left(Y_{1}\right).$
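These two identities are easy to check numerically. In the quick simulation below (our own illustration), the $Y_{k}$ are taken to have exponentially small mean and standard deviation, mimicking the small noise regime; the non-centered second moment is then completely dominated by $(EY_{1})^{2}$, while the centered moment isolates $\mathrm{Var}(Y_{1})/n$.

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps = 2_000, 1_000
mean_y, sd_y = 1e-3, 1e-3          # Y_k "exponentially small", as in the application
means = rng.normal(mean_y, sd_y, size=(reps, n)).mean(axis=1)

second_moment = np.mean(means**2)  # ~ (EY_1)^2 + Var(Y_1)/n, dominated by (EY_1)^2
centered = np.var(means)           # ~ Var(Y_1)/n, the quantity of real interest
print(second_moment, mean_y**2 + sd_y**2 / n)
print(centered, sd_y**2 / n)
```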
When analyzing the performance of the Monte Carlo schemes one is concerned of
course with both bias and variance, but in situations where we would like to
apply the results of this paper one assumes $T^{\varepsilon}$ is large enough
that the bias term is unimportant, so that all we are concerned with is the
variance. However some care will be needed to determine a suitable measure of
quality of the algorithm, since as noted $Y_{i}$ could scale exponentially
in $1/\varepsilon$ with a negative coefficient (exponentially small),
while $n$ will be exponentially large.
In the analysis of unbiased accelerated Monte Carlo methods for small noise
systems over bounded time intervals (e.g., to estimate escape probabilities),
it is standard to use the second moment, which is often easier to analyze, in
lieu of the variance [3, Chapter VI], [4, Chapter 14]. This situation
corresponds to $n=1$. The alternative criterion is more convenient since by
Jensen’s inequality one can easily establish a best possible rate of decay of
the second moment, and estimators are deemed efficient if they possess the
optimal rate of decay [3, 4]. However with $n$ exponentially large this is no
longer true. Using the previous calculations, we see that the second moment of
$\frac{1}{n}\sum_{j=1}^{n}Y_{j}$ can be completely dominated by
$\left(EY_{1}\right)^{2}$, and therefore using this quantity to compare
algorithms may be misleading, since our true concern is the variance of
$\frac{1}{n}\sum_{j=1}^{n}Y_{j}$.
This observation suggests that in our study of moments of the empirical measure
we should consider only centered moments, and in particular quantities like
which is the variance per unit time. For Monte Carlo one wants to minimize the
variance per unit time, and to make the problem more tractable we instead try
to maximize the decay rate of the variance per unit time. Assuming the limit
exists, this is defined by
$\lim_{\varepsilon\rightarrow
0}-\varepsilon\log\left[T^{\varepsilon}\mathrm{Var}\left(\frac{1}{T^{\varepsilon}}\int_{0}^{T^{\varepsilon}}e^{-\frac{1}{\varepsilon}f\left(X_{t}^{\varepsilon}\right)}1_{A}\left(X_{t}^{\varepsilon}\right)dt\right)\right]$
and so we are especially interested in lower bounds on this decay rate.
Thus our goal is to develop methods that allow the approximation of at least
first and second moments of (2.2). In fact, the methods we introduce can be
developed further to obtain large deviation estimates of higher moments if
that were needed or desired.
## 3 Setting of the Problem, Assumptions and Definitions
The process model we would like to consider is an $\mathbb{R}^{d}$-valued
solution to an Itô stochastic differential equation (SDE), where the drift so
strongly returns the process to some compact set that events involving exit of
the process from some larger compact set are so rare that they can effectively
be ignored when analyzing the empirical measure. However, to simplify the
analysis we follow the convention of [12, Chapter 6], and work with a small
noise diffusion that takes values in a compact and connected manifold
$M\subset\mathbb{R}^{d}$ of dimension $r$ and with smooth boundary. The
precise regularity assumptions for $M$ are given on [12, page 135]. With this
convention in mind, we consider a family of diffusion processes
$\\{X^{\varepsilon}\\}_{\varepsilon\in(0,\infty)},X^{\varepsilon}\in
C([0,\infty):M)$, that satisfy the following condition.
###### Condition 3.1.
Consider continuous $b:M\rightarrow\mathbb{R}^{d}$ and
$\sigma:M\rightarrow\mathbb{R}^{d}\times\mathbb{R}^{d}$ (the $d\times d$
matrices), and assume that $\sigma$ is uniformly nondegenerate, in that there
is $c>0$ such that for any $x$ and any $v$ in the tangent space of $M$ at $x$,
$\langle v,\sigma(x)\sigma(x)^{\prime}v\rangle\geq c\langle v,v\rangle$. For
absolutely continuous $\phi\in C([0,T]:M)$ define $I_{T}(\phi)$ by (1.1),
where the inverse $\left[\sigma(x)\sigma(x)^{\prime}\right]^{-1}$ is relative
to the tangent space of $M$ at $x$. Let $I_{T}(\phi)=\infty$ for all other
$\phi\in C([0,T]:M)$. Then we assume that for each $T<\infty$,
$\\{X^{\varepsilon}_{t}\\}_{0\leq t\leq T}$ satisfies the large deviation
principle with rate function $I_{T}$, uniformly with respect to the initial
condition [4, Definition 1.13].
We note that for such diffusion processes nondegeneracy of the diffusion
matrix implies there is a unique invariant measure
$\mu^{\varepsilon}\in\mathcal{P}(M)$. A discussion of weak sufficient
conditions under which Condition 3.1 holds appears in [12, Section 3, Chapter
5].
###### Remark 3.2.
There are several ways one can approximate a diffusion of the sort described
at the beginning of this section by a diffusion on a smooth compact manifold.
One such “compactification” of the state space can be obtained by assuming
that for some bounded but large enough rectangle trajectories that exit the
rectangle do not affect the large deviation behavior of quantities of
interest, and then extend the coefficients of the process periodically and
smoothly off an even larger rectangle to all of $\mathbb{R}^{d}$ (a technique
sometimes used to bound the state space for purposes of numerical
approximation). One can then map $\mathbb{R}^{d}$ to a manifold that is
topologically equivalent to a torus, and even arrange that the metric
structure on the part of the manifold corresponding to the smaller rectangle
coincides with a Euclidean metric.
Define the quasipotential $V(x,y):M\times M\rightarrow[0,\infty)$ by
$V(x,y)\doteq\inf\left\\{I_{T}(\phi):\phi(0)=x,\phi(T)=y,T<\infty\right\\}.$
(3.1)
For a given set $A\subset M,$ define $V(x,A)\doteq\inf_{y\in A}V(x,y)$ and
$V(A,y)\doteq\inf_{x\in A}V(x,y).$
###### Remark 3.3.
For any fixed $y$ and set $A,$ $V(x,y)$ and $V(x,A)$ are both continuous
functions of $x$. Similarly, for any given $x$ and any set $A,$ $V(x,y)$ and
$V(A,y)$ are also continuous in $y.$
###### Definition 3.4.
We say that a set $N\subset M$ is stable if for any $x\in N,y\notin N$ we have
$V(x,y)>0.$ A set which is not stable is called unstable.
###### Definition 3.5.
We say that $O\in M$ is an equilibrium point of the ordinary differential
equation (ODE) $\dot{x}_{t}=b(x_{t})$ if $b(O)=0.$ Moreover, we say that this
equilibrium point $O$ is asymptotically stable if for every neighborhood
$\mathcal{E}_{1}$ of $O$ (relative to $M$) there exists a smaller neighborhood
$\mathcal{E}_{2}$ such that the trajectories of system $\dot{x}_{t}=b(x_{t})$
starting in $\mathcal{E}_{2}$ converge to $O$ without leaving
$\mathcal{E}_{1}$ as $t\rightarrow\infty.$
###### Remark 3.6.
An asymptotically stable equilibrium point is a stable set, but a stable set
might contain no asymptotically stable equilibrium point.
The following restrictions on the structure of the dynamical system in $M$
will be used. These restrictions include the assumption that the equilibrium
points are a finite collection. This is a more restrictive framework than that
of [12], which allows, e.g., limit cycles. In a remark at the end of this
section we comment on what would be needed to extend to the general setup of
[12].
###### Condition 3.7.
There exists a finite number of points $\\{O_{j}\\}_{j\in L}\subset M$ with
$L\doteq\\{1,2,\ldots,l\\}$ for some $l\in\mathbb{N}$, such that $\cup_{j\in
L}\\{O_{j}\\}$ coincides with the $\omega$-limit set of the ODE
$\dot{x}_{t}=b(x_{t})$.
Without loss of generality, we may assume that $O_{j}$ is stable if and only
if $j\in L_{\rm{s}}$ where $L_{\rm{s}}\doteq\\{1,\ldots,l_{\rm{s}}\\}$ for
some $l_{\rm{s}}\leq l.$
Next we give a definition from graph theory which will be used in the
statement of the main results.
###### Definition 3.8.
Given a subset $W\subset L=\\{1,\ldots,l\\},$ a directed graph consisting of
arrows $i\rightarrow j$ $(i\in L\setminus W,j\in L,i\neq j)$ is called a
$W$-graph on $L$ if it satisfies the following conditions.
1. 1.
Every point $i$ $\in L\setminus W$ is the initial point of exactly one arrow.
2. 2.
For any point $i$ $\in L\setminus W,$ there exists a sequence of arrows
leading from $i$ to some point in $W.$
We note that we could replace the second condition by the requirement that
there are no closed cycles in the graph. We denote by $G(W)$ the set of
$W$-graphs; we shall use the letter $g$ to denote graphs. Moreover, if
$p_{ij}$ ($i,j\in L,j\neq i$) are numbers, then $\prod_{(i\rightarrow j)\in
g}p_{ij}$ will be denoted by $\pi(g).$
###### Remark 3.9.
We mostly consider the set of $\\{i\\}$-graphs, i.e., $G(\\{i\\})$ for some
$i\in$ $L$, and also use $G(i)$ to denote $G(\\{i\\}).$ We occasionally
consider the set of $\\{i,j\\}$-graphs, i.e., $G(\\{i,j\\})$ for some $i,j\in$
$L$ with $i\neq j.$ Again, we also use $G(i,j)$ to denote $G(\\{i,j\\}).$
###### Definition 3.10.
For all $j\in L$, define
$W\left(O_{j}\right)\doteq\min_{g\in
G\left(j\right)}\left[{\textstyle\sum_{\left(m\rightarrow n\right)\in
g}}V\left(O_{m},O_{n}\right)\right]$ (3.2)
and
$W\left(O_{1}\cup O_{j}\right)\doteq\min_{g\in
G\left(1,j\right)}\left[{\textstyle\sum_{\left(m\rightarrow n\right)\in
g}}V\left(O_{m},O_{n}\right)\right].$ (3.3)
###### Remark 3.11.
Heuristically, if we interpret $V\left(O_{m},O_{n}\right)$ as the “cost” of
moving from $O_{m}$ to $O_{n},$ then $W\left(O_{j}\right)$ is the “least total
cost” of reaching $O_{j}$ from every $O_{i}$ with $i\in L\setminus\\{j\\}.$
According to [12, Theorem 4.1, Chapter 6], one can interpret
$W(O_{i})-\min_{j\in L}W(O_{j})$ as the decay rate of
$\mu^{\varepsilon}(B_{\delta}(O_{i}))$, where $B_{\delta}(O_{i})$ is a small
open neighborhood of $O_{i}$.
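Since $L$ is finite, the minimizations in (3.2) and (3.3) can be carried out by brute force: enumerate all assignments of one outgoing arrow to each node outside $W$, discard the assignments that violate condition 2 of Definition 3.8, and minimize the summed costs. The sketch below is our own illustration, exponential in $l$ and intended only for small graphs.

```python
import itertools

def W_of_O(j, V):
    """min over {j}-graphs g of sum_{(m -> n) in g} V[m][n], as in (3.2).

    V is an l x l cost matrix; nodes are 0, ..., l-1 and j is the target.
    """
    l = len(V)
    others = [i for i in range(l) if i != j]
    best = float("inf")
    # Condition 1 of Definition 3.8: each i != j gets exactly one arrow i -> t.
    for targets in itertools.product(range(l), repeat=len(others)):
        g = dict(zip(others, targets))
        if any(i == t for i, t in g.items()):
            continue                      # no arrow from a point to itself
        ok = True
        # Condition 2: following arrows from every i must lead to j (no cycles).
        for i in others:
            seen, cur = set(), i
            while cur != j:
                if cur in seen:
                    ok = False
                    break
                seen.add(cur)
                cur = g[cur]
            if not ok:
                break
        if ok:
            best = min(best, sum(V[i][t] for i, t in g.items()))
    return best
```

The analogous computation of $W(O_{1}\cup O_{j})$ in (3.3) excludes both the node corresponding to $O_{1}$ and the one corresponding to $O_{j}$ from the arrow assignment and stops each walk as soon as it reaches either of them.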
###### Definition 3.12.
We use $G_{\rm{s}}\left(W\right)$ to denote the collection of all $W$-graphs
on $L_{\rm{s}}=\\{1,\ldots,l_{\rm{s}}\\}$ with $W\subset L_{\rm{s}}.$
We make the following technical assumptions on the structure of the SDE. Let
$B_{\delta}(K)$ denote the $\delta$-neighborhood of a set $K\subset M.$ Recall
that $\mu^{\varepsilon}$ is the unique invariant probability measure of the
diffusion process $\\{X^{\varepsilon}_{t}\\}_{t}.$ The existence of the limits
appearing in the first part of the condition is ensured by Theorem 4.1 in [12,
Chapter 6].
###### Condition 3.13.
1. 1.
There exists a unique asymptotically stable equilibrium point $O_{1}$ of the
system $\dot{x}_{t}=b(x_{t})$ such that
$\lim_{\delta\rightarrow 0}\lim_{\varepsilon\rightarrow
0}-\varepsilon\log\mu^{\varepsilon}(B_{\delta}(O_{1}))=0,\text{ and
}\lim_{\delta\rightarrow 0}\lim_{\varepsilon\rightarrow
0}-\varepsilon\log\mu^{\varepsilon}(B_{\delta}(O_{j}))>0\mbox{ for any }j\in
L\setminus\\{1\\}.$
2. 2.
All of the eigenvalues of the matrix of partial derivatives of $b$ at
$O_{\ell}$ relative to $M$ have negative real parts for $\ell\in L_{\rm{s}}$.
3. 3.
$b:M\rightarrow\mathbb{R}^{d}$ and
$\sigma:M\rightarrow\mathbb{R}^{d}\times\mathbb{R}^{d}$ are $C^{1}$.
###### Remark 3.14.
According to [12, Theorem 4.1, Chapter 6] and the first part of Condition
3.13, we know that $W(O_{j})>W(O_{1})$ for all $j\in L\setminus\\{1\\}.$
###### Remark 3.15.
We comment on the use of the various parts of the condition. Part 1 means that
neighborhoods of $O_{1}$ capture more of the mass as $\varepsilon\rightarrow
0$ than neighborhoods of any other equilibrium point. It simplifies the
analysis greatly, but we expect it could be weakened if desired. Parts 2 and 3
are assumed in [6], which gives an explicit exponential bound on the tail
probability of the exit time from the domain of attraction. It is largely
because of our reliance on the results of [6] that we must assume that
equilibrium sets are points in Condition 3.7, rather than the more general
compacta as considered in [12]. Both Condition 3.7 and Condition 3.13 could be
weakened if the corresponding versions of the results we use from [6] were
available.
###### Remark 3.16.
The quantities $V(O_{i},O_{j})$ determine various key transition probabilities
and time scales in the analysis of the empirical measure. The more general
framework of [12], as well as the one dimensional case (i.e., $r=1$) in the
present setting, require some closely related but slightly more complicated
quantities. These are essentially the analogues of $V(O_{i},O_{j})$ under the
assumption that trajectories used in the definition are not allowed to pass
through equilibrium compacta (such as a limit cycle) when traveling from
$O_{i}$ to $O_{j}$. The related quantities, which are designated using
notation of the form $\tilde{V}(O_{i},O_{j})$ in [12], are needed since the
probability of a direct transition from $O_{i}$ to $O_{j}$ without passing
though another equilibrium structure may be zero, which means that transitions
from $O_{i}$ to $O_{j}$ must be decomposed according to these intermediate
transitions. To simplify the presentation we do not provide the details of the
one dimensional case in our setup, but simply note that it can be handled by
the introduction of these additional quantities.
Consider the filtration $\\{\mathcal{F}_{t}\\}_{t\geq 0}$ defined by
$\mathcal{F}_{t}\doteq\sigma(X_{s}^{\varepsilon},s\leq t)$ for any $t\geq 0.$
For any $\delta>0$ smaller than a quarter of the minimum of the distances
between $O_{i}$ and $O_{j}$ for all $i\neq j$, we consider two types of
stopping times with respect to the filtration $\\{\mathcal{F}_{t}\\}_{t}$. The
first type are the hitting times of $\\{X^{\varepsilon}_{t}\\}_{t}$ at the
$\delta$-neighborhood of all equilibrium points $\\{O_{j}\\}_{j\in L}$ after
traveling a reasonable distance away from those neighborhoods. More precisely,
we define stopping times by $\tau_{0}\doteq 0,$
$\sigma_{n}\doteq\inf\\{t>\tau_{n}:X_{t}^{\varepsilon}\in{\cup_{j\in
L}}\partial B_{2\delta}(O_{j})\\}\text{ and
}\tau_{n}\doteq\inf\\{t>\sigma_{n-1}:X_{t}^{\varepsilon}\in{\cup_{j\in
L}}\partial B_{\delta}(O_{j})\\}.$
The second type of stopping times are the return times of
$\\{X^{\varepsilon}_{t}\\}_{t}$ to the $\delta$-neighborhood of $O_{1}$, where
as noted previously $O_{1}$ is in some sense the most important equilibrium
point. The exact definitions are $\tau_{0}^{\varepsilon}\doteq 0,$
$\sigma_{n}^{\varepsilon}\doteq\inf\\{t>\tau_{n}^{\varepsilon}:X_{t}^{\varepsilon}\in{\textstyle\cup_{j\in
L\setminus\\{1\\}}}\partial B_{\delta}(O_{j})\\}\text{ and
}\tau_{n}^{\varepsilon}\doteq\inf\left\\{t>\sigma_{n-1}^{\varepsilon}:X_{t}^{\varepsilon}\in\partial
B_{\delta}(O_{1})\right\\}.$ (3.4)
We then define two embedded Markov chains
$\\{Z_{n}\\}_{n\in\mathbb{N}_{0}}\doteq\\{X_{\tau_{n}}^{\varepsilon}{}\\}_{n\in\mathbb{N}_{0}}$
with state space ${\textstyle\cup_{j\in L}}\partial B_{\delta}(O_{j})$, and
$\\{Z_{n}^{\varepsilon}\\}_{n\in\mathbb{N}_{0}}\doteq\\{X_{\tau_{n}^{\varepsilon}}^{\varepsilon}{}\\}_{n\in\mathbb{N}_{0}}$
with state space $\partial B_{\delta}(O_{1}).$
Let $p(x,\partial B_{\delta}(O_{j}))$ denote the one-step transition
probabilities of $\\{Z_{n}\\}_{n\in\mathbb{N}_{0}}$ starting from a point
$x\in{\textstyle\cup_{i\in L}}\partial B_{\delta}(O_{i}),$ namely,
$p(x,\partial B_{\delta}(O_{j}))\doteq P_{x}(Z_{1}\in\partial
B_{\delta}(O_{j})).$
We have the following estimates on $p(x,\partial B_{\delta}(O_{j}))$ in terms
of $V$. The lemma is a consequence of [12, Lemma 2.1, Chapter 6] and the fact
that under our conditions $V(O_{i},O_{j})$ and $\tilde{V}(O_{i},O_{j})$ as
defined in [12] coincide.
###### Lemma 3.17.
For any $\eta>0,$ there exists $\delta_{0}\in(0,1)$ and
$\varepsilon_{0}\in(0,1),$ such that for any $\delta\in(0,\delta_{0})$ and
$\varepsilon\in(0,\varepsilon_{0}),$ for all $x\in\partial B_{\delta}(O_{i}),$
the one-step transition probability of the Markov chain
$\\{Z_{n}\\}_{n\in\mathbb{N}}$ on $\partial B_{\delta}(O_{j})$ satisfies
$e^{-\frac{1}{\varepsilon}\left(V\left(O_{i},O_{j}\right)+\eta\right)}\leq
p(x,\partial B_{\delta}(O_{j}))\leq
e^{-\frac{1}{\varepsilon}\left(V\left(O_{i},O_{j}\right)-\eta\right)}$
for any $i,j\in L.$
###### Remark 3.18.
According to Lemma 4.6 in [16], Condition 3.1 guarantees the existence and
uniqueness of invariant measures for $\\{Z_{n}\\}_{n}$ and
$\\{Z_{n}^{\varepsilon}\\}_{n}.$ We use
$\nu^{\varepsilon}\in\mathcal{P}(\cup_{i\in L}\partial B_{\delta}(O_{i}))$ and
$\lambda^{\varepsilon}\in\mathcal{P}(\partial B_{\delta}(O_{1}))$ to denote
the associated invariant measures.
## 4 Results and a Conjecture
The following main results of this paper assume Conditions 3.1, 3.7 and 3.13.
Although moments higher than the second moment are not considered in this
paper, as noted previously one can use arguments such as those used here to
identify and prove the analogous results.
Recall that $\\{O_{j}\\}_{j\in L}$ are the equilibrium points and that they
satisfy Condition 3.7 and Condition 3.13. In addition, $O_{j}$ is stable if
and only if $j\in L_{\rm{s}}$, where
$L_{\rm{s}}\doteq\\{1,\ldots,l_{\rm{s}}\\}$ for some $l_{\rm{s}}\leq
l=\left|L\right|$, and $\tau^{\varepsilon}_{1}$ is the first return time to
the $\delta$-neighborhood of $O_{1}$ after having first visited the
$\delta$-neighborhood of any other equilibrium point.
###### Lemma 4.1.
For any $\delta\in(0,1)$ smaller than a quarter of the minimum of the
distances between $O_{i}$ and $O_{j}$ for all $i\neq j$, any $\varepsilon>0$
and any nonnegative measurable function $g:M\rightarrow\mathbb{R}$
$E_{\lambda^{\varepsilon}}\left(\int_{0}^{\tau_{1}^{\varepsilon}}g\left(X_{s}^{\varepsilon}\right)ds\right)=E_{\lambda^{\varepsilon}}\tau_{1}^{\varepsilon}\cdot\int_{M}g\left(x\right)\mu^{\varepsilon}\left(dx\right),$
where $\lambda^{\varepsilon}\in\mathcal{P}(\partial B_{\delta}(O_{1}))$ is the
unique invariant measure of
$\\{Z_{n}^{\varepsilon}\\}_{n}=\\{X_{\tau_{n}^{\varepsilon}}^{\varepsilon}\\}_{n}$
and $\mu^{\varepsilon}\in\mathcal{P}(M)$ is the unique invariant measure of
$\\{X_{t}^{\varepsilon}\\}_{t}.$
###### Proof.
We define a measure on $M$ by
$\hat{\mu}^{\varepsilon}\left(B\right)\doteq
E_{\lambda^{\varepsilon}}\left(\int_{0}^{\tau_{1}^{\varepsilon}}1_{B}\left(X^{\varepsilon}_{t}\right)dt\right)$
for $B\in\mathcal{B}(M),$ so that for any nonnegative measurable function
$g:M\rightarrow\mathbb{R}$
$\int_{M}g\left(x\right)\hat{\mu}^{\varepsilon}\left(dx\right)=E_{\lambda^{\varepsilon}}\left(\int_{0}^{\tau_{1}^{\varepsilon}}g\left(X^{\varepsilon}_{t}\right)dt\right).$
According to the proof of Theorem 4.1 in [16], the measure given by
$\hat{\mu}^{\varepsilon}\left(B\right)/\hat{\mu}^{\varepsilon}\left(M\right)$
is an invariant measure of $\\{X^{\varepsilon}_{t}\\}_{t}.$ Since we already
know that $\mu^{\varepsilon}$ is the unique invariant measure of
$\\{X^{\varepsilon}_{t}\\}_{t},$ this means that
$\mu^{\varepsilon}(B)=\hat{\mu}^{\varepsilon}\left(B\right)/\hat{\mu}^{\varepsilon}\left(M\right)$
for any $B\in\mathcal{B}(M).$ Therefore for any nonnegative measurable
function $g:M\rightarrow\mathbb{R}$
$\displaystyle
E_{\lambda^{\varepsilon}}\left(\int_{0}^{\tau_{1}^{\varepsilon}}g\left(X^{\varepsilon}_{t}\right)dt\right)$
$\displaystyle=\int_{M}g\left(x\right)\mu^{\varepsilon}\left(dx\right)\cdot\hat{\mu}^{\varepsilon}\left(M\right)=\int_{M}g\left(x\right)\mu^{\varepsilon}\left(dx\right)\cdot
E_{\lambda^{\varepsilon}}\tau_{1}^{\varepsilon}.$
∎
Recall the definitions of $W(O_{j})$ and $W(O_{1}\cup O_{j})$ in Definition
3.10, as well as the definition of the quasipotential $V(x,y)$ in (3.1). For
any $k\in L$, we define
$h_{k}\doteq\min_{j\in L\setminus\\{k\\}}V(O_{k},O_{j}).$ (4.1)
In addition, define
$w\doteq W(O_{1})-\min_{j\in L\setminus\\{1\\}}W(O_{1}\cup O_{j}).$ (4.2)
###### Remark 4.2.
The quantity $h_{k}$ is related to the time that it takes for the process to
leave a neighborhood of $O_{k}$, and $W(O_{1})-W(O_{1}\cup O_{j})$ is related
to the transition time from a neighborhood of $O_{j}$ to one of $O_{1}$. It
turns out that our results and arguments depend on which of $h_{1}$ or $w$ is
larger. Throughout the paper, the constructions used in the case when
$h_{1}>w$ will be in terms of what we call a single cycle, and those for the
case when $h_{1}\leq w$ in terms of a multicycle.
###### Theorem 4.3.
Let $T^{\varepsilon}=e^{\frac{1}{\varepsilon}c}$ for some $c>h_{1}\vee w$.
Given $\eta>0,$ a continuous function $f:M\rightarrow\mathbb{R}$ and any
compact set $A\subset M,$ there exists $\delta_{0}\in(0,1)$ such that for any
$\delta\in(0,\delta_{0})$
$\displaystyle\liminf_{\varepsilon\rightarrow
0}-\varepsilon\log\left|E_{\lambda^{\varepsilon}}\left(\frac{1}{T^{\varepsilon}}\int_{0}^{T^{\varepsilon}}e^{-\frac{1}{\varepsilon}f\left(X_{t}^{\varepsilon}\right)}1_{A}\left(X_{t}^{\varepsilon}\right)dt\right)-\int_{M}e^{-\frac{1}{\varepsilon}f\left(x\right)}1_{A}\left(x\right)\mu^{\varepsilon}\left(dx\right)\right|$
$\displaystyle\qquad\geq\inf_{x\in
A}\left[f\left(x\right)+W\left(x\right)\right]-W\left(O_{1}\right)+c-(h_{1}\vee
w)-\eta,$
where $W(x)\doteq\min_{j\in L}[W(O_{j})+V(O_{j},x)]$.
###### Remark 4.4.
Since $W(x)=\min_{j\in L}[W(O_{j})+V(O_{j},x)],$ the lower bound appearing in
Theorem 4.3 is equivalent to
$\min_{j\in L}\left(\inf_{x\in
A}\left[f\left(x\right)+V(O_{j},x)\right]+W(O_{j})-W\left(O_{1}\right)\right)+c-(h_{1}\vee
w)-\eta.$
The next result gives an upper bound on the variance per unit time, or
equivalently a lower bound on its rate of decay. In the design of a Markov
chain Monte Carlo method, one would maximize this rate of decay to improve the
method’s performance.
###### Theorem 4.5.
Let $T^{\varepsilon}=e^{\frac{1}{\varepsilon}c}$ for some $c>h_{1}\vee w$.
Given $\eta>0,$ a continuous function $f:M\rightarrow\mathbb{R}$ and any
compact set $A\subset M,$ there exists $\delta_{0}\in(0,1)$ such that for any
$\delta\in(0,\delta_{0})$
$\displaystyle\liminf_{\varepsilon\rightarrow
0}-\varepsilon\log\left(T^{\varepsilon}\cdot\mathrm{Var}_{\lambda^{\varepsilon}}\left(\frac{1}{T^{\varepsilon}}\int_{0}^{T^{\varepsilon}}e^{-\frac{1}{\varepsilon}f\left(X_{t}^{\varepsilon}\right)}1_{A}\left(X_{t}^{\varepsilon}\right)dt\right)\right)$
$\displaystyle\qquad\geq\begin{cases}\min_{j\in L}\left(R_{j}^{(1)}\wedge
R_{j}^{(2)}\right)-\eta,&\text{if }h_{1}>w\\\ \min_{j\in
L}\left(R_{j}^{(1)}\wedge R_{j}^{(2)}\wedge
R_{j}^{(3)}\right)-\eta,&\text{otherwise}\end{cases},$
where
$R_{j}^{(1)}\doteq\inf_{x\in
A}\left[2f\left(x\right)+V\left(O_{j},x\right)\right]+W\left(O_{j}\right)-W\left(O_{1}\right),$
$R_{1}^{(2)}\doteq 2\inf_{x\in
A}\left[f\left(x\right)+V\left(O_{1},x\right)\right]-h_{1},$
and for $j\in L\setminus\\{1\\}$
$\displaystyle R_{j}^{(2)}$ $\displaystyle\doteq 2\inf_{x\in
A}\left[f\left(x\right)+V\left(O_{j},x\right)\right]+W\left(O_{j}\right)-2W\left(O_{1}\right)+W(O_{1}\cup
O_{j}),$ $R_{j}^{(3)}\doteq 2\inf_{x\in
A}\left[f\left(x\right)+V\left(O_{j},x\right)\right]+2W\left(O_{j}\right)-2W\left(O_{1}\right)-w.$
###### Remark 4.6.
If one mistakenly treated a single cycle case as a multicycle case in the
application of Theorem 4.5, then the result is the same since with $h_{1}>w$,
(4.2) implies that $R_{j}^{(3)}\geq R_{j}^{(2)}$ for any $j\in L$.
###### Remark 4.7.
Although Theorems 4.3 and 4.5 as stated assume the starting distribution
$\lambda^{\varepsilon}$, they can be extended to general initial distributions
by using results from Section 10, which show that the process essentially
forgets the initial distribution before leaving the neighborhood of $O_{1}$.
###### Remark 4.8.
In this remark we interpret the use of Theorems 4.3 and 4.5 in the context of
Monte Carlo, and also explain the role of the time scaling $T^{\varepsilon}$.
There is a minimum amount of time that must elapse before the process can
visit all stable equilibrium points often enough that good estimation of risk-
sensitive integrals is possible. As is well known, this time scales
exponentially in the form of $T^{\varepsilon}=e^{c/\varepsilon}$, and the
issue is the selection of the constant $c>0$, which motivates the assumptions
on $T^{\varepsilon}$ for the two cases. However, when designing a scheme there
typically will be parameters available for selection. The growth constant in
$T^{\varepsilon}$ will then depend on these parameters, which will then be
chosen to (either directly or indirectly, depending on the criteria used)
reduce the size of $T^{\varepsilon}$. For a compelling example we refer to
[10], which shows that for a system with fixed well depths and any $a>0$,
a scheme known as infinite swapping can be designed so that an interval of
length $e^{a/\varepsilon}$ suffices.
Theorem 4.3 is concerned with bias, and for $T^{\varepsilon}$ as above will
give a negligible contribution to the total error in comparison to the
variance. Thus it is Theorem 4.5 that determines the performance of the scheme
and serves as a criterion for optimization. Of particular note is that the
value of $c$ does not appear in the variational problem appearing in Theorem
4.5.
Theorem 4.5 gives a lower bound on the rate of decay of variance per unit
time. For applications to the design of Monte Carlo schemes as in [10] there
is an a priori bound on the best possible performance, and so this lower bound
(which yields an upper bound on variances) is sufficient to determine if a
scheme is nearly optimal. However, for other purposes an upper bound on the
decay rate could be useful, and we expect the other direction holds as well.
The proofs of Theorems 4.3 and 4.5 for single cycles and multicycles are
almost identical with a few key differences. We focus on providing proofs in
the single cycle case, and then point out the required modifications in the
proofs for the multicycle case.
###### Theorem 4.9.
The bound in Theorem 4.3 can be calculated using only stable equilibrium
points. Specifically,
1. 1.
$W(x)=\min_{j\in L_{\rm{s}}}[W(O_{j})+V(O_{j},x)]$
2. 2.
$W\left(O_{j}\right)=\min_{g\in
G_{\rm{s}}\left(j\right)}\left[{\textstyle\sum_{\left(m\rightarrow n\right)\in
g}}V\left(O_{m},O_{n}\right)\right]$
3. 3.
$W(O_{1}\cup O_{j})=\min_{g\in
G_{\rm{s}}\left(1,j\right)}\left[{\textstyle\sum_{\left(m\rightarrow
n\right)\in g}}V\left(O_{m},O_{n}\right)\right]$
4. 4.
$\min_{j\in L}(\inf_{x\in A}[f(x)+V(O_{j},x)]+W(O_{j}))=\min_{j\in
L_{\rm{s}}}(\inf_{x\in A}[f(x)+V(O_{j},x)]+W(O_{j}))$.
###### Remark 4.10.
Theorem 4.9 says that the bound appearing in Theorem 4.3 depends on the set of
indices of only stable equilibrium points. This is not surprising, since in
[12, Chapter 6], it has been shown that the logarithmic asymptotics of the
invariant measure of a Markov process in this framework can be characterized
in terms of graphs on the set of indices of just stable equilibrium points. It
is natural to ask if the same property holds for the lower bound appearing in
Theorem 4.5. Notice that part 4 of Theorem 4.9 implies $\min_{j\in
L}R_{j}^{(1)}=\min_{j\in L_{\rm{s}}}R_{j}^{(1)}$, so if one can prove
(possibly under extra conditions, for example, by considering a double-well
model as in Section 11) that $\min_{j\in L}R_{j}^{(2)}=\min_{j\in
L_{\rm{s}}}R_{j}^{(2)}$, then these two equations assert the property we want
for the single cycle case, namely, $\min_{j\in L}(R_{j}^{(1)}\wedge
R_{j}^{(2)})=\min_{j\in L_{\rm{s}}}(R_{j}^{(1)}\wedge R_{j}^{(2)}).$ An
analogous comment applies for the multicycle case.
###### Conjecture 4.11.
Let $T^{\varepsilon}=e^{\frac{1}{\varepsilon}c}$ for some $c>h_{1}\vee w$. Let
$f$ be continuous and suppose that $A$ is the closure of its interior. Then
for any $\eta>0,$ there exists $\delta_{0}\in(0,1)$ such that for any
$\delta\in(0,\delta_{0})$
$\displaystyle\liminf_{\varepsilon\rightarrow
0}-\varepsilon\log\left(T^{\varepsilon}\cdot\mathrm{Var}_{\lambda^{\varepsilon}}\left(\frac{1}{T^{\varepsilon}}\int_{0}^{T^{\varepsilon}}e^{-\frac{1}{\varepsilon}f\left(X_{t}^{\varepsilon}\right)}1_{A}\left(X_{t}^{\varepsilon}\right)dt\right)\right)$
$\displaystyle\qquad\leq\begin{cases}\min_{j\in L}\left(R_{j}^{(1)}\wedge
R_{j}^{(2)}\right)+\eta,&\text{if }h_{1}>w\\\ \min_{j\in
L}\left(R_{j}^{(1)}\wedge R_{j}^{(2)}\wedge
R_{j}^{(3)}\right)+\eta,&\text{otherwise}\end{cases}.$
In Section 11 we outline the proof of Conjecture 4.11 for a special case.
## 5 Examples
###### Example 5.1.
We first consider the situation depicted in Figure 1. Values of $W(O_{j})$ are
given in the figure. If one interprets the figure as a potential with minimum
zero then the corresponding heights of the equilibrium points are given by
$W(O_{j})-W(O_{1})$. We take $f=0$ and $A$ to be a small closed interval about
$O_{5}$. As we will see and should be clear from the figure, this example can
be analyzed using single regenerative cycles.
Recall that
$R_{j}^{(1)}\doteq\inf_{x\in A}[2f(x)+V(O_{j},x)]+W(O_{j})-W(O_{1})$
$R_{1}^{(2)}\doteq 2\inf_{x\in A}[f(x)+V(O_{1},x)]-h_{1}$
and for $j>1$
$R_{j}^{(2)}\doteq 2\inf_{x\in
A}[f(x)+V(O_{j},x)]+\left(W(O_{j})-W(O_{1})\right)-W(O_{1})+W(O_{1}\cup
O_{j})$
If one traces through the proof of Theorem 4.5 for the case of a single cycle,
then one finds that the constraining bound is given in Lemma 7.23, which is in
turn based on Lemma 7.9. As we will see, in the minimization problem
$\min_{j\in L}(R_{j}^{(1)}\wedge R_{j}^{(2)})$ the min on $j$ turns out to be
achieved at $j=5$. This is of course not surprising, since $A$ is an interval
about $O_{5}$. It is then the minimum of $R_{5}^{(1)}$ and $R_{5}^{(2)}$ which
determines the dominant source of the variance of the estimator.
Figure 1: Single cycle example
We recall that $\tau_{1}^{\varepsilon}$ is the time for a full regenerative
cycle, and that $\tau_{1}$ is the time to first reach the $2\delta$
neighborhood of an equilibrium point and then reach a $\delta$ neighborhood of
a (perhaps the same) equilibrium point. The quantities that are relevant in
Lemma 7.9 are
$\sup_{y\in\partial
B_{\delta}(O_{5})}E_{y}\left(\int_{0}^{\tau_{1}}1_{A}(X_{t}^{\varepsilon})dt\right)^{2}\text{
and }E_{x}N_{5}$
for $R_{j}^{(1)}$ and
$\left[\sup_{y\in\partial
B_{\delta}(O_{5})}E_{y}\int_{0}^{\tau_{1}}1_{A}(X_{t}^{\varepsilon})dt\right]^{2}\text{
, }E_{x}N_{5}\text{, and essentially }\sup_{y\in\partial
B_{\delta}(O_{5})}E_{y}N_{5}$
for $R_{j}^{(2)}$. Decay rates are in turn determined by (see the proof of
Lemma 7.23)
$0\text{ and }W(O_{1})-W(O_{5})+h_{1}$
and
$0,\text{ }W(O_{1})-W(O_{5})+h_{1}\text{ and }W(O_{1})-W(O_{1}\cup O_{5}),$
respectively. Thus for this example it is only the term $W(O_{1})-W(O_{1}\cup
O_{5})$ that distinguishes between the two. Since this is always greater than
zero and it appears in $R_{j}^{(2)}$ in the form $-(W(O_{1})-W(O_{1}\cup
O_{5}))$ it must be the case that $R_{5}^{(2)}<$ $R_{5}^{(1)}$.
The numerical values for the example are
$(W(O_{1}\cup O_{j}),j=2,\ldots,5)=(5,3,5,2)$
$(V(O_{j},O_{5}),j=1,\ldots,5)=(8,4,4,0,0)$
$(W(O_{j})-W(O_{1}),j=1,\ldots,5)=(0,4,2,6,3)$
$(R_{j}^{(1)},j=1,\ldots,5)=(8,8,6,6,3)$
$(R_{j}^{(2)},j=2,\ldots,5)=(12,8,6,0)$
and $R_{1}^{(2)}=16-4=12$, $h_{1}=4$ and $w=5-2=3$. Since $w<h_{1}$ it falls into
the single cycle case. We therefore find that $\min_{j}(R_{j}^{(1)}\wedge
R_{j}^{(2)})$ equals $0$, and that the minimum occurs with superscript $2$ and at $j=5$.
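The arithmetic in this example is easy to verify mechanically. Below is a short Python check (ours) that recomputes the $R_{j}^{(1)}$ and $R_{j}^{(2)}$ from the data listed above; since $f=0$, each infimum over $A$ reduces to the listed value $V(O_{j},O_{5})$, and we take $W(O_{1})=5$, the value read off Figure 1 (an assumption, consistent with the listed $R_{j}^{(2)}$).

```python
# Data listed above for Example 5.1 (f = 0, A a small interval about O_5).
V_to_A = [8, 4, 4, 0, 0]             # inf_{x in A} V(O_j, x), j = 1..5
W_diff = [0, 4, 2, 6, 3]             # W(O_j) - W(O_1), j = 1..5
W_union = {2: 5, 3: 3, 4: 5, 5: 2}   # W(O_1 u O_j), j = 2..5
W1, h1 = 5, 4                        # W(O_1) read off Figure 1 (assumed)

R1 = [v + dw for v, dw in zip(V_to_A, W_diff)]         # R_j^(1), j = 1..5
R2 = [2 * V_to_A[0] - h1]                              # R_1^(2)
R2 += [2 * V_to_A[j - 1] + W_diff[j - 1] - W1 + W_union[j]
       for j in range(2, 6)]                           # R_j^(2), j = 2..5

print(R1)   # [8, 8, 6, 6, 3]
print(R2)   # [12, 12, 8, 6, 0]
print(min(min(a, b) for a, b in zip(R1, R2)))   # 0, attained by R_5^(2)
```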
For an example where the dominant contribution to the variance is through the
quantities associated with $R_{j}^{(1)}$, we move the set $A$ further to the
right of $O_{5}$. All other quantities are unchanged save
$\sup_{y\in\partial
B_{\delta}(O_{5})}E_{y}\left(\int_{0}^{\tau_{1}}1_{A}(X_{t}^{\varepsilon})dt\right)^{2}\text{
and }\left[\sup_{y\in\partial
B_{\delta}(O_{5})}E_{y}\int_{0}^{\tau_{1}}1_{A}(X_{t}^{\varepsilon})dt\right]^{2},$
whose decay rates are governed (for $j=5$) by $\inf_{x\in A}[V(O_{5},x)]\text{
and }2\inf_{x\in A}[V(O_{5},x)],$ respectively. Choosing $A$ so that
$\inf_{x\in A}[V(O_{5},x)]>3$, it is now the case that $R_{5}^{(1)}<$
$R_{5}^{(2)}$.
###### Example 5.2.
In this example we again take $f=0$ and $A$ to be a small closed interval
about $O_{3}$. Since the well at $O_{5}$ is deeper than that at $O_{1}$ we
expect that multicycles will be needed, and so recall
$R_{j}^{(3)}\doteq 2\inf_{x\in
A}\left[f\left(x\right)+V\left(O_{j},x\right)\right]+2W\left(O_{j}\right)-2W\left(O_{1}\right)-w.$
Figure 2: Multicycle example
The needed values are
$(W(O_{1}\cup O_{j}),j=2,\ldots,5)=(7,5,7,2)$
$(V(O_{j},O_{3}),j=1,\ldots,5)=(4,0,0,0,5)$
$(W(O_{j})-W(O_{1}),j=1,\ldots,5)=(0,4,2,6,1)$
$(R_{j}^{(1)},j=1,\ldots,5)=(4,4,2,6,6)$
$(R_{j}^{(2)},j=2,\ldots,5)=(4,0,6,6)$
$(R_{j}^{(3)},j=1,\ldots,5)=(3,3,-1,7,7)$
and $R_{1}^{(2)}=8-4=4$, $h_{1}=4$ and $w=7-2=5$. Since $w>h_{1}$, a single cycle
cannot be used for the analysis of the variance, and we need to use
multicycles. We find that $\min_{j}(R_{j}^{(1)}\wedge R_{j}^{(2)}\wedge R_{j}^{(3)})$
is equal to $-1$, and that the minimum occurs with superscript $3$ and at $j=3$.
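The same bookkeeping verifies this example as well, now including the family $R_{j}^{(3)}$; here $W(O_{1})=7$ is read off Figure 2 (again an assumption, consistent with the listed values):

```python
# Data listed above for Example 5.2 (f = 0, A a small interval about O_3).
V_to_A = [4, 0, 0, 0, 5]             # inf_{x in A} V(O_j, x), j = 1..5
W_diff = [0, 4, 2, 6, 1]             # W(O_j) - W(O_1)
W_union = {2: 7, 3: 5, 4: 7, 5: 2}   # W(O_1 u O_j)
W1, h1, w = 7, 4, 5                  # W(O_1) read off Figure 2 (assumed)

R1 = [v + dw for v, dw in zip(V_to_A, W_diff)]
R2 = [2 * V_to_A[0] - h1] + [2 * V_to_A[j - 1] + W_diff[j - 1] - W1 + W_union[j]
                             for j in range(2, 6)]
R3 = [2 * v + 2 * dw - w for v, dw in zip(V_to_A, W_diff)]

print(R1, R2, R3)   # [4, 4, 2, 6, 6] [4, 4, 0, 6, 6] [3, 3, -1, 7, 7]
print(min(min(t) for t in zip(R1, R2, R3)))   # -1, attained by R_3^(3)
```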
## 6 Wald’s Identities and Regenerative Structure
To prove Theorems 4.3 and 4.5, we will use the regenerative structure to
analyze the system over the interval $[0,T^{\varepsilon}]$. Since the number
of regenerative cycles will be random, Wald’s identities will be useful.
Recall that $\tau_{n}^{\varepsilon}$ is the $n$-th return time to $\partial
B_{\delta}\left(O_{1}\right)$ after having visited the neighborhood of a
different equilibrium point, and $\lambda^{\varepsilon}\in\mathcal{P}(\partial
B_{\delta}(O_{1}))$ is the invariant measure of the Markov process
$\\{X_{\tau_{n}^{\varepsilon}}^{\varepsilon}\\}_{n\in\mathbb{N}_{0}}$ with
state space $\partial B_{\delta}(O_{1}).$ If we let the process
$\\{X^{\varepsilon}_{t}\\}_{t}$ start with $\lambda^{\varepsilon}$ at time
$0,$ that is, assume the distribution of $X^{\varepsilon}_{0}$ is
$\lambda^{\varepsilon},$ then by the strong Markov property of
$\\{X^{\varepsilon}_{t}\\}_{t},$ we find that $\\{X^{\varepsilon}_{t}\\}_{t}$
is a regenerative process and the cycles
$\\{\\{X^{\varepsilon}_{\tau_{n-1}^{\varepsilon}+t}:0\leq
t<\tau_{n}^{\varepsilon}-\tau_{n-1}^{\varepsilon}\\},\tau_{n}^{\varepsilon}-\tau_{n-1}^{\varepsilon}\\}$
are iid objects. Moreover, $\\{\tau_{n}^{\varepsilon}\\}_{n\in\mathbb{N}_{0}}$
is a sequence of renewal times under $\lambda^{\varepsilon}.$
### 6.1 Single cycle
Define the filtration $\\{\mathcal{H}_{n}\\}_{n\in\mathbb{N}},$ where
$\mathcal{H}_{n}\doteq\mathcal{F}_{\tau_{n}^{\varepsilon}}$ and
$\mathcal{F}_{t}\doteq$ $\sigma(\\{X^{\varepsilon}_{s}$; $s\leq t\\})$. With
respect to this filtration, in the single cycle case (i.e., when $h_{1}>w$),
we consider the stopping times
$N^{\varepsilon}\left(T\right)\doteq\inf\left\\{n\in\mathbb{N}:\tau_{n}^{\varepsilon}>T\right\\}.$
Note that $N^{\varepsilon}\left(T\right)-1$ is the number of complete single
renewal intervals contained in $[0,T]$.
With this notation, we can bound
$\frac{1}{T^{\varepsilon}}\int_{0}^{T^{\varepsilon}}e^{-\frac{1}{\varepsilon}f\left(X_{t}^{\varepsilon}\right)}1_{A}\left(X_{t}^{\varepsilon}\right)dt$
from above and below by
$\frac{1}{T^{\varepsilon}}\sum\nolimits_{n=1}^{N^{\varepsilon}\left(T^{\varepsilon}\right)-1}S_{n}^{\varepsilon}\leq\frac{1}{T^{\varepsilon}}\int_{0}^{T^{\varepsilon}}e^{-\frac{1}{\varepsilon}f\left(X_{t}^{\varepsilon}\right)}1_{A}\left(X_{t}^{\varepsilon}\right)dt\leq\frac{1}{T^{\varepsilon}}\sum\nolimits_{n=1}^{N^{\varepsilon}\left(T^{\varepsilon}\right)}S_{n}^{\varepsilon},$
(6.1)
where
$S_{n}^{\varepsilon}\doteq\int_{\tau_{n-1}^{\varepsilon}}^{\tau_{n}^{\varepsilon}}e^{-\frac{1}{\varepsilon}f\left(X_{t}^{\varepsilon}\right)}1_{A}\left(X_{t}^{\varepsilon}\right)dt.$
Applying Wald’s first identity shows
$\displaystyle
E_{\lambda^{\varepsilon}}\left(\frac{1}{T^{\varepsilon}}\sum\nolimits_{n=1}^{N^{\varepsilon}\left(T^{\varepsilon}\right)}S_{n}^{\varepsilon}\right)=\frac{1}{T^{\varepsilon}}E_{\lambda^{\varepsilon}}\left(N^{\varepsilon}\left(T^{\varepsilon}\right)\right)E_{\lambda^{\varepsilon}}S_{1}^{\varepsilon}.$
(6.2)
Therefore, the logarithmic asymptotics of
$E_{\lambda^{\varepsilon}}(\int_{0}^{T^{\varepsilon}}e^{-\frac{1}{\varepsilon}f\left(X_{t}^{\varepsilon}\right)}1_{A}\left(X_{t}^{\varepsilon}\right)dt/T^{\varepsilon})$
are determined by those of
$E_{\lambda^{\varepsilon}}\left(N^{\varepsilon}\left(T^{\varepsilon}\right)\right)/T^{\varepsilon}$
and $E_{\lambda^{\varepsilon}}S_{1}^{\varepsilon}.$ Likewise, to understand
the logarithmic asymptotics of
$T^{\varepsilon}\cdot$Var${}_{\lambda^{\varepsilon}}(\int_{0}^{T^{\varepsilon}}e^{-\frac{1}{\varepsilon}f\left(X_{t}^{\varepsilon}\right)}1_{A}\left(X_{t}^{\varepsilon}\right)dt/T^{\varepsilon}),$
it is sufficient to identify the corresponding logarithmic asymptotics of
Var${}_{\lambda^{\varepsilon}}\left(N^{\varepsilon}\left(T^{\varepsilon}\right)\right)/T^{\varepsilon}$,
Var${}_{\lambda^{\varepsilon}}(S_{1}^{\varepsilon}),$
$E_{\lambda^{\varepsilon}}\left(N^{\varepsilon}\left(T^{\varepsilon}\right)\right)/T^{\varepsilon}$
and $E_{\lambda^{\varepsilon}}S_{1}^{\varepsilon}$. This can be done with the
help of Wald’s second identity, since
$\displaystyle T^{\varepsilon}$
$\displaystyle\cdot\mathrm{Var}_{\lambda^{\varepsilon}}\left(\frac{1}{T^{\varepsilon}}{\textstyle\sum_{n=1}^{N^{\varepsilon}\left(T^{\varepsilon}\right)}}S_{n}^{\varepsilon}\right)$
(6.3) $\displaystyle\leq 2T^{\varepsilon}\cdot
E_{\lambda^{\varepsilon}}\left(\frac{1}{T^{\varepsilon}}{\textstyle\sum_{n=1}^{N^{\varepsilon}\left(T^{\varepsilon}\right)}}S_{n}^{\varepsilon}-\frac{1}{T^{\varepsilon}}N^{\varepsilon}\left(T^{\varepsilon}\right)E_{\lambda^{\varepsilon}}S_{1}^{\varepsilon}\right)^{2}$
$\displaystyle\quad+2T^{\varepsilon}\cdot
E_{\lambda^{\varepsilon}}\left(\frac{1}{T^{\varepsilon}}N^{\varepsilon}\left(T^{\varepsilon}\right)E_{\lambda^{\varepsilon}}S_{1}^{\varepsilon}-\frac{1}{T^{\varepsilon}}E_{\lambda^{\varepsilon}}\left(N^{\varepsilon}\left(T^{\varepsilon}\right)\right)E_{\lambda^{\varepsilon}}S_{1}^{\varepsilon}\right)^{2}$
$\displaystyle=2\frac{E_{\lambda^{\varepsilon}}\left(N^{\varepsilon}\left(T^{\varepsilon}\right)\right)}{T^{\varepsilon}}\mathrm{Var}_{\lambda^{\varepsilon}}S_{1}^{\varepsilon}+2\frac{\mathrm{Var}_{\lambda^{\varepsilon}}\left(N^{\varepsilon}\left(T^{\varepsilon}\right)\right)}{T^{\varepsilon}}\left(E_{\lambda^{\varepsilon}}S_{1}^{\varepsilon}\right)^{2}.$
In the next two sections we derive bounds on
$E_{\lambda^{\varepsilon}}S_{1}^{\varepsilon}$,
Var${}_{\lambda^{\varepsilon}}(S_{1}^{\varepsilon})$ and
$E_{\lambda^{\varepsilon}}\left(N^{\varepsilon}\left(T^{\varepsilon}\right)\right)$,
Var${}_{\lambda^{\varepsilon}}\left(N^{\varepsilon}\left(T^{\varepsilon}\right)\right)$,
respectively.
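Before turning to the multicycle case, it may help to see Wald's first identity (6.2) in action on a toy model. The sketch below is our own stand-in, with exponential cycle lengths replacing $\tau_{n}^{\varepsilon}-\tau_{n-1}^{\varepsilon}$ and a proportional reward replacing $S_{n}^{\varepsilon}$; it compares the two sides of (6.2) empirically. The point is that $N^{\varepsilon}(T)$ is a stopping time for the iid cycle sequence, since the event that $N^{\varepsilon}(T)\geq n$ is determined by the first $n-1$ cycles.

```python
import random

random.seed(0)
T, trials = 50.0, 20000
c = 0.5                      # S_n = c * (cycle length), so E S_1 = c * 1.0

def one_run():
    """One trajectory: iid cycles until the renewal time first exceeds T.
    Returns N(T) = inf{n : tau_n > T} and the reward summed over n <= N(T)."""
    t, n, total = 0.0, 0, 0.0
    while t <= T:
        length = random.expovariate(1.0)   # Exp(1) stand-in for a cycle
        t += length
        n += 1
        total += c * length
    return n, total

runs = [one_run() for _ in range(trials)]
avg_N = sum(n for n, _ in runs) / trials
avg_sum = sum(s for _, s in runs) / trials
# Wald's first identity, cf. (6.2): E[sum_{n<=N(T)} S_n] = E[N(T)] * E[S_1]
print(avg_sum, avg_N * c * 1.0)    # both approximately 25.5 here
```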
### 6.2 Multicycle
Recall that in the case of a multicycle we have $w\geq h_{1}$. For any $m>0$
such that $h_{1}+m>w$ and for any $\varepsilon>0$, on the same probability
space as $\\{\tau^{\varepsilon}_{n}\\}$, one can define a sequence of
independent and geometrically distributed random variables
$\\{\mathbf{M}^{\varepsilon}_{i}\\}_{i\in\mathbb{N}}$ with parameter
$e^{-m/\varepsilon}$ that are independent of $\\{\tau^{\varepsilon}_{n}\\}$.
We then define multicycles according to
$\mathbf{K}^{\varepsilon}_{i}\doteq\sum_{j=1}^{i}\mathbf{M}^{\varepsilon}_{j},\quad\hat{\tau}^{\varepsilon}_{i}\doteq\sum_{n=\mathbf{K}^{\varepsilon}_{i-1}+1}^{\mathbf{K}^{\varepsilon}_{i}}\tau^{\varepsilon}_{n},\quad
i\in\mathbb{N}.$ (6.4)
Consider the stopping times
$\hat{N}^{\varepsilon}\left(T\right)\doteq\inf\left\\{n\in\mathbb{N}:\hat{\tau}_{n}^{\varepsilon}>T\right\\}.$
Note that $\hat{N}^{\varepsilon}\left(T\right)-1$ is the number of complete
multicycles contained in $[0,T]$. With this notation and by following the same
idea as in the single cycle case, we can bound
$\frac{1}{T^{\varepsilon}}\int_{0}^{T^{\varepsilon}}e^{-\frac{1}{\varepsilon}f\left(X_{t}^{\varepsilon}\right)}1_{A}\left(X_{t}^{\varepsilon}\right)dt$
from above and below by
$\frac{1}{T^{\varepsilon}}\sum\nolimits_{n=1}^{\hat{N}^{\varepsilon}\left(T^{\varepsilon}\right)-1}\hat{S}_{n}^{\varepsilon}\leq\frac{1}{T^{\varepsilon}}\int_{0}^{T^{\varepsilon}}e^{-\frac{1}{\varepsilon}f\left(X_{t}^{\varepsilon}\right)}1_{A}\left(X_{t}^{\varepsilon}\right)dt\leq\frac{1}{T^{\varepsilon}}\sum\nolimits_{n=1}^{\hat{N}^{\varepsilon}\left(T^{\varepsilon}\right)}\hat{S}_{n}^{\varepsilon},$
(6.5)
where
$\hat{S}_{n}^{\varepsilon}\doteq\int_{\hat{\tau}_{n-1}^{\varepsilon}}^{\hat{\tau}_{n}^{\varepsilon}}e^{-\frac{1}{\varepsilon}f\left(X_{t}^{\varepsilon}\right)}1_{A}\left(X_{t}^{\varepsilon}\right)dt.$
Therefore, by applying Wald’s first and second identities, we know that the
logarithmic asymptotics of
$E_{\lambda^{\varepsilon}}(\int_{0}^{T^{\varepsilon}}e^{-\frac{1}{\varepsilon}f\left(X_{t}^{\varepsilon}\right)}1_{A}\left(X_{t}^{\varepsilon}\right)dt/T^{\varepsilon})$
are determined by those of
$E_{\lambda^{\varepsilon}}(\hat{N}^{\varepsilon}\left(T^{\varepsilon}\right))/T^{\varepsilon}$
and $E_{\lambda^{\varepsilon}}\hat{S}_{1}^{\varepsilon}$, and the asymptotics
of
$T^{\varepsilon}\cdot$Var${}_{\lambda^{\varepsilon}}(\int_{0}^{T^{\varepsilon}}e^{-\frac{1}{\varepsilon}f\left(X_{t}^{\varepsilon}\right)}1_{A}\left(X_{t}^{\varepsilon}\right)dt/T^{\varepsilon})$
by those of
Var${}_{\lambda^{\varepsilon}}(\hat{N}^{\varepsilon}\left(T^{\varepsilon}\right))/T^{\varepsilon}$,
Var${}_{\lambda^{\varepsilon}}(\hat{S}_{1}^{\varepsilon})$,
$E_{\lambda^{\varepsilon}}(\hat{N}^{\varepsilon}\left(T^{\varepsilon}\right))/T^{\varepsilon}$
and $E_{\lambda^{\varepsilon}}\hat{S}_{1}^{\varepsilon}$. In particular, we
have
$\displaystyle
E_{\lambda^{\varepsilon}}\left(\frac{1}{T^{\varepsilon}}\sum\nolimits_{n=1}^{\hat{N}^{\varepsilon}\left(T^{\varepsilon}\right)}\hat{S}_{n}^{\varepsilon}\right)=\frac{1}{T^{\varepsilon}}E_{\lambda^{\varepsilon}}\left(\hat{N}^{\varepsilon}\left(T^{\varepsilon}\right)\right)E_{\lambda^{\varepsilon}}\hat{S}_{1}^{\varepsilon}.$
(6.6)
and
$\displaystyle
T^{\varepsilon}\cdot\mathrm{Var}_{\lambda^{\varepsilon}}\left(\frac{1}{T^{\varepsilon}}{\sum\nolimits_{n=1}^{\hat{N}^{\varepsilon}\left(T^{\varepsilon}\right)}}\hat{S}_{n}^{\varepsilon}\right)\leq
2\frac{E_{\lambda^{\varepsilon}}(\hat{N}^{\varepsilon}\left(T^{\varepsilon}\right))}{T^{\varepsilon}}\mathrm{Var}_{\lambda^{\varepsilon}}\hat{S}_{1}^{\varepsilon}+2\frac{\mathrm{Var}_{\lambda^{\varepsilon}}(\hat{N}^{\varepsilon}\left(T^{\varepsilon}\right))}{T^{\varepsilon}}\left(E_{\lambda^{\varepsilon}}\hat{S}_{1}^{\varepsilon}\right)^{2}.$
(6.7)
In the next two sections we derive bounds on
$E_{\lambda^{\varepsilon}}\hat{S}_{1}^{\varepsilon}$,
Var${}_{\lambda^{\varepsilon}}(\hat{S}_{1}^{\varepsilon})$ and
$E_{\lambda^{\varepsilon}}(\hat{N}^{\varepsilon}\left(T^{\varepsilon}\right))$,
Var${}_{\lambda^{\varepsilon}}(\hat{N}^{\varepsilon}\left(T^{\varepsilon}\right))$,
respectively.
###### Remark 6.1.
It should be kept in mind that
$\hat{\tau}^{\varepsilon}_{n},\hat{N}^{\varepsilon}\left(T^{\varepsilon}\right)$
and $\hat{S}_{n}^{\varepsilon}$ all depend on $m$, although this dependence is
not explicit in the notation.
###### Remark 6.2.
In general, for any quantity in the single cycle case, we use analogous
notation with a “hat” on it to represent the corresponding quantity in the
multicycle version. For instance, we use $\tau^{\varepsilon}_{n}$ for a single
regenerative cycle, and $\hat{\tau}^{\varepsilon}_{n}$ for the corresponding
multicycle.
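As a concrete illustration of the multicycle construction, the following sketch (ours; the framing of the summands in (6.4) as a stream of single-cycle lengths is our interpretation) groups those lengths into multicycle lengths using iid geometric counts $\mathbf{M}^{\varepsilon}_{i}$ with parameter $e^{-m/\varepsilon}$:

```python
import math
import random

def multicycles(cycle_lengths, m, eps, seed=1):
    """Group a stream of single-cycle lengths into multicycle lengths as in
    (6.4): the i-th multicycle concatenates M_i consecutive single cycles,
    where the M_i are iid geometric with parameter exp(-m / eps)."""
    rng = random.Random(seed)
    p = math.exp(-m / eps)              # success probability e^{-m/eps}
    out, acc, remaining = [], 0.0, None
    for length in cycle_lengths:
        if remaining is None:           # draw M_i for a fresh multicycle
            remaining = 1
            while rng.random() >= p:
                remaining += 1
        acc += length
        remaining -= 1
        if remaining == 0:
            out.append(acc)
            acc, remaining = 0.0, None
    return out                          # a final incomplete group is dropped

# e.g. with m/eps = 0.5 each multicycle contains about e^{0.5} single cycles
print(multicycles([1.0] * 100, m=0.5, eps=1.0))
```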
## 7 Asymptotics of Moments of $S_{1}^{\varepsilon}$ and
$\hat{S}_{1}^{\varepsilon}$
In this section we will first introduce the elementary theory of an
irreducible finite state Markov chain $\\{Z_{n}\\}_{n\in\mathbb{N}_{0}}$ with
state space $L$, and then state and prove bounds for the asymptotics of
moments of $S_{1}^{\varepsilon}$ and $\hat{S}_{1}^{\varepsilon}$.
For the asymptotic analysis, the following useful facts will be used
repeatedly.
###### Lemma 7.1.
For any nonnegative sequences
$\left\\{a_{\varepsilon}\right\\}_{\varepsilon>0}$ and
$\left\\{b_{\varepsilon}\right\\}_{\varepsilon>0}$, we have
$\liminf_{\varepsilon\rightarrow
0}-\varepsilon\log\left(a_{\varepsilon}b_{\varepsilon}\right)\geq\liminf_{\varepsilon\rightarrow
0}-\varepsilon\log a_{\varepsilon}+\liminf_{\varepsilon\rightarrow
0}-\varepsilon\log b_{\varepsilon},$ (7.1) $\limsup_{\varepsilon\rightarrow
0}-\varepsilon\log\left(a_{\varepsilon}+b_{\varepsilon}\right)\leq\min\left\\{\limsup_{\varepsilon\rightarrow
0}-\varepsilon\log a_{\varepsilon},\limsup_{\varepsilon\rightarrow
0}-\varepsilon\log b_{\varepsilon}\right\\},$ $\liminf_{\varepsilon\rightarrow
0}-\varepsilon\log\left(a_{\varepsilon}+b_{\varepsilon}\right)=\min\left\\{\liminf_{\varepsilon\rightarrow
0}-\varepsilon\log a_{\varepsilon},\liminf_{\varepsilon\rightarrow
0}-\varepsilon\log b_{\varepsilon}\right\\}.$ (7.2)
### 7.1 Markov chains and graph theory
In this subsection we state some elementary theory for finite state Markov
chains taken from [1, Chapter 2]. For a finite state Markov chain, the
invariant measure, the mean exit time, etc., can be expressed explicitly as
the ratio of certain determinants, i.e., sums of products consisting of
transition probabilities, and these sums only contain terms with a plus sign.
Which products should appear in the various sums can be described conveniently
by means of graphs on the set of states of the chain. This method of linking
graphs and quantities associated with a finite state Markov chain was
introduced by Freidlin and Wentzell in [12, Chapter 6].
Consider an irreducible finite state Markov chain
$\\{Z_{n}\\}_{n\in\mathbb{N}_{0}}$ with state space $L.$ For any $i,j\in L,$
let $p_{ij}$ be the one-step transition probability of $\\{Z_{n}\\}_{n}$ from
state $i$ to state $j.$ Write $P_{i}(\cdot)$ and $E_{i}(\cdot)$ for
probabilities and expectations of the chain started at state $i$ at time $0.$
Recall the notation $\pi(g)\doteq\prod_{(i\rightarrow j)\in g}p_{ij}$.
###### Lemma 7.2.
The unique invariant measure of $\\{Z_{n}\\}_{n\in\mathbb{N}_{0}}$ can be
expressed as
$\lambda_{i}=\frac{\sum_{g\in G\left(i\right)}\pi\left(g\right)}{\sum_{j\in
L}\left(\sum_{g\in G\left(j\right)}\pi\left(g\right)\right)}.$
###### Proof.
See Lemma 3.1, Chapter 6 in [12]. ∎
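The graph formula of Lemma 7.2 can be checked directly on a small example. The sketch below is ours (the three-state chain is invented for illustration): it enumerates the graphs in $G(i)$ exactly as in the sketch following Remark 4.10, computes $\lambda_{i}$ as the normalized graph sums, and compares with the invariant measure obtained by iterating $\lambda\mapsto\lambda P$.

```python
from itertools import product

def graphs_G(states, i):
    """Enumerate G(i): each state j != i carries exactly one arrow j -> g[j],
    with no self-loops and no cycles, so every arrow path ends at i."""
    others = [j for j in states if j != i]
    for targets in product(states, repeat=len(others)):
        g = dict(zip(others, targets))
        def ends_at_i(j):
            seen = set()
            while j != i:
                if j in seen or g[j] == j:
                    return False
                seen.add(j)
                j = g[j]
            return True
        if all(ends_at_i(j) for j in others):
            yield g

# An invented irreducible three-state chain.
states = [1, 2, 3]
p = {1: {1: 1/2, 2: 1/4, 3: 1/4},
     2: {1: 1/3, 2: 1/3, 3: 1/3},
     3: {1: 1/6, 2: 1/2, 3: 1/3}}

def pi_of(g):
    """pi(g) = product of p_mn over the arrows (m -> n) of g."""
    out = 1.0
    for m, n in g.items():
        out *= p[m][n]
    return out

weight = {i: sum(pi_of(g) for g in graphs_G(states, i)) for i in states}
total = sum(weight.values())
lam_graph = {i: w / total for i, w in weight.items()}

# Invariant measure by iterating lambda -> lambda P from the uniform vector.
lam = {i: 1 / 3 for i in states}
for _ in range(500):
    lam = {j: sum(lam[i] * p[i][j] for i in states) for j in states}

print(lam_graph)  # {1: 0.33898..., 2: 0.35593..., 3: 0.30508...}
print(lam)        # agrees up to floating point error
```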
To analyze the empirical measure we will need additional results, including
representations for the number of visits to a state during a regenerative
cycle. Write
$T_{i}\doteq\inf\left\\{n\geq 0:Z_{n}=i\right\\}$
for the first hitting time of state $i,$ and write
$T_{i}^{+}\doteq\inf\left\\{n\geq 1:Z_{n}=i\right\\}.$
Observe that $T_{i}^{+}=T_{i}$ unless $Z_{0}=i,$ in which case we call
$T_{i}^{+}$ the first return time to state $i.$
Let $\hat{N}\doteq\inf\\{n\in\mathbb{N}_{0}:Z_{n}\in L\setminus\\{1\\}\\}$ and
$N\doteq\inf\\{n\in\mathbb{N}:Z_{n}=1,n\geq\hat{N}\\}.$ $\hat{N}$ is the first
time of visiting a state other than state $1$ and $N$ is the first time of
visiting state $1$ after $\hat{N}.$ For any $j\in L,$ let $N_{j}$ be the
number of visits (including time $0$) to state $j$ before $N,$ i.e.,
$N_{j}=\left|\\{n\in\mathbb{N}_{0}:n<N\text{ and }Z_{n}=j\\}\right|.$ We would
like to understand $E_{1}N_{j}$ and $E_{j}N_{j}$ for any $j\in L.$ These
quantities will appear later on in Subsection 7.2. The next lemma shows how
they can be related to the invariant measure of $\\{Z_{n}\\}_{n}$.
###### Lemma 7.3.
1. 1.
For any $j\in L\setminus\\{1\\}$
$E_{j}N_{j}=\frac{\sum_{g\in G\left(1,j\right)}\pi\left(g\right)}{\sum_{g\in
G\left(1\right)}\pi\left(g\right)}\text{ and
}E_{j}N_{j}=\lambda_{j}\left(E_{j}T_{1}+E_{1}T_{j}\right).$
2. 2.
For any $i,j\in L,$ $j\neq i$
$P_{i}\left(T_{j}<T_{i}^{+}\right)=\frac{1}{\lambda_{i}\left(E_{j}T_{i}+E_{i}T_{j}\right)}.$
3. 3.
For any $j\in L$
$E_{1}N_{j}=\frac{1}{1-p_{11}}\frac{\lambda_{j}}{\lambda_{1}}.$
###### Proof.
See Lemma 3.4 in [12, Chapter 6] for the first assertion of part 1 and see
Lemma 2.7 in [1, Chapter 2] for the second assertion of part 1. For part 2,
see Corollary 2.8 in [1, Chapter 2]. For part 3, since
$E_{1}N_{j}=\sum_{\ell=1}^{\infty}P_{1}\left(N_{j}\geq\ell\right),$ we need to
understand $P_{1}\left(N_{j}\geq\ell\right)$, which means we need to know how
to count all the ways to get $N_{j}\geq\ell$ before returning to state $1.$
We first have to move away from state $1$, so the types of sequences are of
the form
$\underset{i\text{
times}}{\underbrace{1,1,\ldots,1}},k_{1},k_{2},\ldots,k_{q},1$
for some $i,q\in\mathbb{N}$ and $k_{1}\neq 1,\cdots,k_{q}\neq 1$. When $j=1,$
we do not care about $k_{1},k_{2},\ldots,k_{q},$ and therefore
$P_{1}\left(N_{1}\geq i\right)=p_{11}^{i-1}\text{ and
}E_{1}N_{1}=\sum\nolimits_{i=1}^{\infty}P_{1}\left(N_{1}\geq
i\right)=\frac{1}{1-p_{11}}.$
For $j\in L\setminus\\{1\\},$ the event $\\{N_{j}\geq\ell\\}$ requires that
within $k_{1},k_{2},\ldots,k_{q},$ we
1. 1.
first visit state $j$ before returning to state $1,$ which has corresponding
probability $P_{1}(T_{j}<T_{1}^{+})$,
2. 2.
then start from state $j$ and again visit state $j$ before returning to state
$1,$ which has corresponding probability $P_{j}(T_{j}^{+}<T_{1}).$
Step 2 needs to happen at least $\ell-1$ times in a row, and after that we do
not care. Thus,
$\displaystyle P_{1}\left(N_{j}\geq\ell\right)$
$\displaystyle=\sum\nolimits_{i=1}^{\infty}\left(p_{11}\right)^{i-1}P_{1}\left(T_{j}<T_{1}^{+}\right)(P_{j}(T_{j}^{+}<T_{1}))^{\ell-1}$
$\displaystyle=\frac{1}{1-p_{11}}P_{1}\left(T_{j}<T_{1}^{+}\right)(P_{j}(T_{j}^{+}<T_{1}))^{\ell-1}$
and
$\displaystyle\sum\nolimits_{\ell=1}^{\infty}P_{1}\left(N_{j}\geq\ell\right)$
$\displaystyle=\frac{1}{1-p_{11}}\frac{P_{1}\left(T_{j}<T_{1}^{+}\right)}{P_{j}(T_{1}<T_{j}^{+})}=\frac{1}{1-p_{11}}\frac{\lambda_{j}\left(E_{1}T_{j}+E_{j}T_{1}\right)}{\lambda_{1}\left(E_{1}T_{j}+E_{j}T_{1}\right)}$
$\displaystyle=\frac{1}{1-p_{11}}\frac{\lambda_{j}}{\lambda_{1}}.$
The third equality comes from part 2. ∎
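Part 3 of Lemma 7.3 is also easy to test numerically. The sketch below (ours) simulates regenerative cycles of the three-state chain used in the previous sketch, for which the graph formula gives $\lambda=(20/59,21/59,18/59)$, and compares the empirical $E_{1}N_{j}$ with $\frac{1}{1-p_{11}}\frac{\lambda_{j}}{\lambda_{1}}$.

```python
import random

random.seed(0)
states = [1, 2, 3]
p = {1: [1/2, 1/4, 1/4],        # rows of the transition matrix above
     2: [1/3, 1/3, 1/3],
     3: [1/6, 1/2, 1/3]}

def step(i):
    return random.choices(states, weights=p[i])[0]

def visits_in_cycle():
    """One regenerative cycle from state 1: count N_j, the visits to each
    state j (including time 0) strictly before the return time N."""
    counts = {j: 0 for j in states}
    z, counts[1] = 1, 1
    while True:                  # stay in state 1 until time hat-N
        z = step(z)
        if z != 1:
            break
        counts[1] += 1
    while z != 1:                # excursion until the return time N
        counts[z] += 1
        z = step(z)
    return counts

trials = 100000
acc = {j: 0 for j in states}
for _ in range(trials):
    for j, c in visits_in_cycle().items():
        acc[j] += c

lam = {1: 20 / 59, 2: 21 / 59, 3: 18 / 59}   # from the graph formula above
pred = {j: (1 / (1 - p[1][0])) * lam[j] / lam[1] for j in states}
print({j: acc[j] / trials for j in states})   # empirical E_1 N_j
print(pred)                                    # {1: 2.0, 2: 2.1, 3: 1.8}
```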
To apply the preceding results using the machinery developed by Freidlin and
Wentzell, one must have analogues that allow for small perturbations of the
transition probabilities due to the fact that initial conditions are to be
taken in small neighborhoods of the equilibrium points. The addition of a
tilde will be used to identify the corresponding objects, such as hitting and
return times. Take as given a Markov chain
$\\{\tilde{Z}_{n}\\}_{n\in\mathbb{N}_{0}}$ on a state space
$\mathcal{X}={\textstyle\cup_{i\in L}}\mathcal{X}_{i},$ with
$\mathcal{X}_{i}\cap\mathcal{X}_{j}=\emptyset$ $(i\neq j),$ and assume there
is $a\in[1,\infty)$ such that for any $i,j\in L$ and $j\neq i,$ the transition
probability of the chain from $x\in\mathcal{X}_{i}$ to $\mathcal{X}_{j}$
(denoted by $p\left(x,\mathcal{X}_{j}\right)$) satisfies the inequalities
$a^{-1}p_{ij}\leq p\left(x,\mathcal{X}_{j}\right)\leq ap_{ij}$ (7.3)
for any $x\in\mathcal{X}_{i}$. Write $P_{x}(\cdot)$ and $E_{x}(\cdot)$ for
probabilities and expectations of the chain started at $x\in\mathcal{X}$ at
time $0.$ Write
$\tilde{T}_{i}\doteq\inf\\{n\geq 0:\tilde{Z}_{n}\in\mathcal{X}_{i}\\}$
for the first hitting time of $\mathcal{X}_{i},$ and write
$\tilde{T}_{i}^{+}\doteq\inf\\{n\geq 1:\tilde{Z}_{n}\in\mathcal{X}_{i}\\}.$
Observe that $\tilde{T}_{i}^{+}=\tilde{T}_{i}$ unless
$\tilde{Z}_{0}\in\mathcal{X}_{i},$ in which case we call $\tilde{T}_{i}^{+}$
the first return time to $\mathcal{X}_{i}.$ Recall that $l=|L|$.
###### Remark 7.4.
Observe that given $j\in L$ and for any $x\in\mathcal{X}_{j}$,
$1-p\left(x,\mathcal{X}_{j}\right)=\textstyle\sum_{k\in
L\setminus\left\\{j\right\\}}p\left(x,\mathcal{X}_{k}\right).$ Therefore, we
can apply (7.3) to obtain
$a^{-1}\textstyle\sum_{k\in L\setminus\left\\{j\right\\}}p_{jk}\leq
1-p\left(x,\mathcal{X}_{j}\right)\leq a\textstyle\sum_{k\in
L\setminus\left\\{j\right\\}}p_{jk}.$
###### Lemma 7.5.
1. 1.
Consider distinct $i,j,k\in L$. Then for $x\in\mathcal{X}_{k},$
$a^{-4^{l-2}}P_{k}\left(T_{j}<T_{i}\right)\leq
P_{x}(\tilde{T}_{j}<\tilde{T}_{i})\leq
a^{4^{l-2}}P_{k}\left(T_{j}<T_{i}\right).$
2. 2.
For any $i\in L$, $j\in L\setminus\\{i\\}$ and $x\in\mathcal{X}_{i},$
$a^{-4^{l-2}-1}P_{i}\left(T_{j}<T_{i}^{+}\right)\leq
P_{x}(\tilde{T}_{j}<\tilde{T}_{i}^{+})\leq
a^{4^{l-2}+1}P_{i}\left(T_{j}<T_{i}^{+}\right).$
###### Proof.
For part 1, see Lemma 3.3 in [12, Chapter 6]. We only need to prove part $2.$
Note that by a first step analysis on
$\\{\tilde{Z}_{n}\\}_{n\in\mathbb{N}_{0}}$, for any $i\in L$, $j\in
L\setminus\\{i\\}$ and $x\in\mathcal{X}_{i},$
$\displaystyle P_{x}(\tilde{T}_{j}<\tilde{T}_{i}^{+})$
$\displaystyle=p\left(x,\mathcal{X}_{j}\right)+\sum\nolimits_{k\in
L\setminus\\{i,j\\}}\int_{\mathcal{X}_{k}}P_{y}(\tilde{T}_{j}<\tilde{T}_{i})p\left(x,dy\right)$
$\displaystyle\leq ap_{ij}+\sum\nolimits_{k\in
L\setminus\\{i,j\\}}\left(a^{4^{l-2}}P_{k}\left(T_{j}<T_{i}\right)\right)\left(ap_{ik}\right)$
$\displaystyle\leq a^{4^{l-2}+1}\left(p_{ij}+\sum\nolimits_{k\in
L\setminus\\{i,j\\}}P_{k}\left(T_{j}<T_{i}\right)p_{ik}\right)$
$\displaystyle=a^{4^{l-2}+1}P_{i}\left(T_{j}<T_{i}^{+}\right).$
The first inequality comes from the use of (7.3) and part 1; the last equality
holds since we can do a first step analysis on $\\{Z_{n}\\}_{n}.$ Similarly,
we can show the lower bound. ∎
Let $\check{N}\doteq\inf\\{n\in\mathbb{N}_{0}:\tilde{Z}_{n}\in\cup_{j\in
L\setminus\\{1\\}}\mathcal{X}_{j}\\}$ and
$\tilde{N}\doteq\inf\\{n\in\mathbb{N}:\tilde{Z}_{n}\in\mathcal{X}_{1},n\geq\check{N}\\}.$
For any $j\in L,$ let $\tilde{N}_{j}$ be the number of visits (including time
$0$) to $\mathcal{X}_{j}$ before $\tilde{N},$ i.e.,
$\tilde{N}_{j}=|\\{n\in\mathbb{N}_{0}:n<\tilde{N}\text{ and
}\tilde{Z}_{n}\in\mathcal{X}_{j}\\}|.$ We would like to understand
$E_{x}\tilde{N}_{j}$ for any $j\in L$ and $x\in\mathcal{X}_{1}$ or
$\mathcal{X}_{j}.$
###### Lemma 7.6.
For any $j\in L$ and $x\in\mathcal{X}_{1}$
$E_{x}\tilde{N}_{j}\leq\frac{a^{4^{l-1}}}{\sum_{\ell\in
L\setminus\\{1\\}}p_{1\ell}}\frac{\sum_{g\in
G\left(j\right)}\pi\left(g\right)}{\sum_{g\in
G\left(1\right)}\pi\left(g\right)}.$
Moreover, for any $j\in L\setminus\\{1\\}$
$\sum_{\ell=1}^{\infty}\sup_{x\in\mathcal{X}_{j}}P_{x}\left(\tilde{N}_{j}\geq\ell\right)\leq
a^{4^{l-1}}\frac{\sum_{g\in G\left(1,j\right)}\pi\left(g\right)}{\sum_{g\in
G\left(1\right)}\pi\left(g\right)}\text{ and
}\sum_{\ell=1}^{\infty}\sup_{x\in\mathcal{X}_{1}}P_{x}\left(\tilde{N}_{1}\geq\ell\right)\leq\frac{a}{\sum_{\ell\in
L\setminus\\{1\\}}p_{1\ell}}.$
###### Proof.
For any $x\in\mathcal{X}_{1},$ note that for any $\ell\in\mathbb{N},$ by a
conditioning argument as in the proof of Lemma 7.3 (3), we find that for $j\in
L\setminus\\{1\\}$
$P_{x}(\tilde{N}_{j}\geq\ell)\leq\frac{\sup_{y\in\mathcal{X}_{1}}P_{y}(\tilde{T}_{j}<\tilde{T}_{1}^{+})}{1-\sup_{y\in\mathcal{X}_{1}}p\left(y,\mathcal{X}_{1}\right)}\left(\sup\nolimits_{y\in\mathcal{X}_{j}}P_{y}(\tilde{T}_{j}^{+}<\tilde{T}_{1})\right)^{\ell-1}$
and
$P_{x}(\tilde{N}_{1}\geq\ell)\leq\left(\sup\nolimits_{y\in\mathcal{X}_{1}}p\left(y,\mathcal{X}_{1}\right)\right)^{\ell-1}.$
Thus, for any $x\in\mathcal{X}_{1}$ and for $j\in L\setminus\\{1\\}$
$\displaystyle E_{x}\tilde{N}_{j}$
$\displaystyle=\sum_{\ell=1}^{\infty}P_{x}(\tilde{N}_{j}\geq\ell)\leq\frac{\sup_{y\in\mathcal{X}_{1}}P_{y}(\tilde{T}_{j}<\tilde{T}_{1}^{+})}{1-\sup_{y\in\mathcal{X}_{1}}p\left(y,\mathcal{X}_{1}\right)}\cdot\frac{1}{1-\sup_{y\in\mathcal{X}_{j}}P_{y}(\tilde{T}_{j}^{+}<\tilde{T}_{1})}$
$\displaystyle=\frac{\sup_{y\in\mathcal{X}_{1}}P_{y}(\tilde{T}_{j}<\tilde{T}_{1}^{+})}{\left(\inf_{y\in\mathcal{X}_{j}}\left(1-p\left(y,\mathcal{X}_{1}\right)\right)\right)(\inf_{y\in\mathcal{X}_{j}}P_{y}(\tilde{T}_{1}<\tilde{T}_{j}^{+}))}$
$\displaystyle\leq a^{4^{l-1}}\frac{P_{1}(T_{j}<T_{1}^{+})}{(\sum_{\ell\in
L\setminus\\{1\\}}p_{1\ell})P_{j}(T_{1}<T_{j}^{+})}$
$\displaystyle=\frac{a^{4^{l-1}}}{\sum_{\ell\in
L\setminus\\{1\\}}p_{1\ell}}\frac{\lambda_{j}}{\lambda_{1}}=\frac{a^{4^{l-1}}}{\sum_{\ell\in
L\setminus\\{1\\}}p_{1\ell}}\frac{\sum_{g\in
G\left(j\right)}\pi\left(g\right)}{\sum_{g\in
G\left(1\right)}\pi\left(g\right)}.$
The second inequality is from Remark 7.4 and Lemma 7.5 (2); the third equality
comes from Lemma 7.3 (2); the last equality holds due to Lemma 7.2. Also
$\displaystyle
E_{x}\tilde{N}_{1}=\sum_{\ell=1}^{\infty}P_{x}(\tilde{N}_{1}\geq\ell)\leq\frac{1}{1-\sup_{y\in\mathcal{X}_{1}}p\left(y,\mathcal{X}_{1}\right)}=\frac{1}{\inf_{y\in\mathcal{X}_{1}}\left(1-p\left(y,\mathcal{X}_{1}\right)\right)}\leq\frac{a}{\sum_{\ell\in
L\setminus\\{1\\}}p_{1\ell}}.$
The last inequality is from Remark 7.4. This completes the proof of part 1.
Turning to part 2, since for any $\ell\in\mathbb{N}$
$\sup\nolimits_{x\in\mathcal{X}_{1}}P_{x}(\tilde{N}_{1}\geq\ell)\leq\left(\sup\nolimits_{y\in\mathcal{X}_{1}}p\left(y,\mathcal{X}_{1}\right)\right)^{\ell-1},$
we have
$\sum_{\ell=1}^{\infty}\sup_{x\in\mathcal{X}_{1}}P_{x}(\tilde{N}_{1}\geq\ell)\leq\frac{1}{1-\sup_{y\in\mathcal{X}_{1}}p\left(y,\mathcal{X}_{1}\right)}\leq\frac{a}{\sum_{\ell\in
L\setminus\\{1\\}}p_{1\ell}}.$
Furthermore, we use the conditioning argument again to find that for any $j\in
L\setminus\\{1\\}$ and $\ell\in\mathbb{N}$
$\sup\nolimits_{x\in\mathcal{X}_{j}}P_{x}(\tilde{N}_{j}\geq\ell)\leq(\sup\nolimits_{y\in\mathcal{X}_{j}}P_{y}(\tilde{T}_{j}^{+}<\tilde{T}_{1}))^{\ell-1}.$
This implies that
$\displaystyle\sum_{\ell=1}^{\infty}\sup\nolimits_{x\in\mathcal{X}_{j}}P_{x}(\tilde{N}_{j}\geq\ell)$
$\displaystyle\qquad\leq\sum_{\ell=1}^{\infty}(\sup\nolimits_{y\in\mathcal{X}_{j}}P_{y}(\tilde{T}_{j}^{+}<\tilde{T}_{1}))^{\ell-1}=\frac{1}{1-\sup_{y\in\mathcal{X}_{j}}P_{y}(\tilde{T}_{j}^{+}<\tilde{T}_{1})}$
$\displaystyle\qquad=\frac{1}{\inf_{y\in\mathcal{X}_{j}}P_{y}(\tilde{T}_{1}<\tilde{T}_{j}^{+})}\leq
a^{4^{l-1}}\frac{1}{P_{j}(T_{1}<T_{j}^{+})}$
$\displaystyle\qquad=a^{4^{l-1}}\lambda_{j}(E_{1}T_{j}+E_{j}T_{1})=a^{4^{l-1}}\frac{\sum_{g\in
G\left(1,j\right)}\pi\left(g\right)}{\sum_{g\in
G\left(1\right)}\pi\left(g\right)}.$
We use Lemma 7.5 (2) to obtain the second inequality and Lemma 7.3, parts (2)
and (1), for the penultimate and last equalities. ∎
### 7.2 Asymptotics of moments of $S_{1}^{\varepsilon}$
Recall that $\\{X^{\varepsilon}\\}_{\varepsilon\in(0,\infty)}\subset
C([0,\infty):M)$ is a sequence of stochastic processes satisfying Condition
3.1, Condition 3.7 and Condition 3.13. Moreover, recall that
$S_{1}^{\varepsilon}$ is defined by
$S_{1}^{\varepsilon}\doteq\int_{0}^{\tau_{1}^{\varepsilon}}e^{-\frac{1}{\varepsilon}f\left(X_{t}^{\varepsilon}\right)}1_{A}\left(X_{t}^{\varepsilon}\right)dt.$
(7.4)
As mentioned in Section 6, we are interested in the logarithmic asymptotics of
$E_{\lambda^{\varepsilon}}S_{1}^{\varepsilon}$ and
$E_{\lambda^{\varepsilon}}(S_{1}^{\varepsilon})^{2}.$ To find these
asymptotics, the main tool we will use is Freidlin-Wentzell theory [12]. In
fact, we will generalize the results of Freidlin-Wentzell to the following:
For any given continuous function $f:M\rightarrow\mathbb{R}$ and any compact
set $A\subset M,$ we will provide lower bounds for
$\liminf_{\varepsilon\rightarrow 0}-\varepsilon\log\left(\sup_{z\in\partial
B_{\delta}(O_{1})}E_{z}\left(\int_{0}^{\tau_{1}^{\varepsilon}}e^{-\frac{1}{\varepsilon}f\left(X_{s}^{\varepsilon}\right)}1_{A}\left(X_{s}^{\varepsilon}\right)ds\right)\right)$
(7.5)
and
$\liminf_{\varepsilon\rightarrow 0}-\varepsilon\log\left(\sup_{z\in\partial
B_{\delta}(O_{1})}E_{z}\left(\int_{0}^{\tau_{1}^{\varepsilon}}e^{-\frac{1}{\varepsilon}f\left(X_{s}^{\varepsilon}\right)}1_{A}\left(X_{s}^{\varepsilon}\right)ds\right)^{2}\right).$
(7.6)
As will be shown, these two bounds can be expressed in terms of the
quasipotentials $V(O_{i},O_{j})$ and $V(O_{i},x).$
###### Remark 7.7.
In the Freidlin-Wentzell theory as presented in [12], the authors consider
only bounds for
$\liminf_{\varepsilon\rightarrow
0}-\varepsilon\log\left(\sup\nolimits_{z\in\partial
B_{\delta}(O_{1})}E_{z}\tau_{1}^{\varepsilon}\right).$
Thus, their result is a special case of (7.5) with $f\equiv 0$ and $A=M$.
Moreover, we generalize their result further by considering the logarithmic
asymptotics of higher moment quantities such as (7.6).
Before proceeding, we recall that $L=\\{1,\ldots,l\\}$ and for any $\delta>0,$
we define $\tau_{0}\doteq 0,$
$\sigma_{n}\doteq\inf\\{t>\tau_{n}:X_{t}^{\varepsilon}\in{\textstyle\bigcup\nolimits_{j\in
L}}\partial B_{2\delta}(O_{j})\\}\text{ and
}\tau_{n}\doteq\inf\\{t>\sigma_{n-1}:X_{t}^{\varepsilon}\in{\textstyle\bigcup\nolimits_{j\in
L}}\partial B_{\delta}(O_{j})\\}.$
Moreover, $\tau_{0}^{\varepsilon}\doteq 0,$
$\sigma_{n}^{\varepsilon}\doteq\inf\\{t>\tau_{n}^{\varepsilon}:X_{t}^{\varepsilon}\in{\textstyle\bigcup\nolimits_{j\in
L\setminus\\{1\\}}}\partial B_{\delta}(O_{j})\\}\text{ and
}\tau_{n}^{\varepsilon}\doteq\inf\left\\{t>\sigma_{n-1}^{\varepsilon}:X_{t}^{\varepsilon}\in\partial
B_{\delta}(O_{1})\right\\}.$
In addition,
$\\{Z_{n}\\}_{n\in\mathbb{N}_{0}}\doteq\\{X_{\tau_{n}}^{\varepsilon}\\}_{n\in\mathbb{N}_{0}}$
is a Markov chain on ${\textstyle\bigcup\nolimits_{j\in L}}\partial
B_{\delta}(O_{j})$ and $\\{Z_{n}^{\varepsilon}\\}_{n\in\mathbb{N}_{0}}$
$\doteq\\{X_{\tau_{n}^{\varepsilon}}^{\varepsilon}\\}_{n\in\mathbb{N}_{0}}$ is
a Markov chain on $\partial B_{\delta}(O_{1}).$ It is essential to keep the
distinction clear: when there is an $\varepsilon$ superscript the chain makes
transitions between neighborhoods of distinct equilibria, while if the
superscript is absent such transitions are still possible, but near a stable
equilibrium the chain will typically make many more transitions back and forth
between the $\delta$ and $2\delta$ neighborhoods of that same equilibrium.
Following the notation of Subsection 7.1, let
$\hat{N}\doteq\inf\\{n\in\mathbb{N}_{0}:Z_{n}\in{\textstyle\bigcup\nolimits_{j\in
L\setminus\\{1\\}}}\partial B_{\delta}(O_{j})\\}$,
$N\doteq\inf\\{n\geq\hat{N}:Z_{n}\in\partial B_{\delta}(O_{1})\\}$, and recall
$\mathcal{F}_{t}\doteq\sigma(\\{X_{s}^{\varepsilon};s\leq t\\})$. Then since
$\\{\tau_{n}\\}_{n\in\mathbb{N}_{0}}$ are stopping times with respect to the
filtration $\\{\mathcal{F}_{t}\\}_{t\geq 0},$ $\mathcal{F}_{\tau_{n}}$ are
well-defined for any $n\in\mathbb{N}_{0}$ and we use $\mathcal{G}_{n}$ to
denote $\mathcal{F}_{\tau_{n}}.$ One can prove that $\hat{N}$ and $N$ are
stopping times with respect to $\\{\mathcal{G}_{n}\\}_{n\in\mathbb{N}}.$ For
any $j\in L,$ let $N_{j}$ be the number of visits of
$\\{Z_{n}\\}_{n\in\mathbb{N}_{0}}$ to $\partial B_{\delta}(O_{j})$ (including
time $0$) before $N.$
The proofs of the following two lemmas are given in the Appendix.
###### Lemma 7.8.
Given $\delta>0$ sufficiently small, for any $x\in\partial B_{\delta}(O_{1})$
and any nonnegative measurable function $g:M\rightarrow\mathbb{R}$,
$E_{x}\left(\int_{0}^{\tau_{1}^{\varepsilon}}g\left(X_{s}^{\varepsilon}\right)ds\right)\leq\sum_{j\in
L}\left[\sup_{y\in\partial
B_{\delta}(O_{j})}E_{y}\left(\int_{0}^{\tau_{1}}g\left(X_{s}^{\varepsilon}\right)ds\right)\right]\cdot
E_{x}N_{j}.$
###### Lemma 7.9.
Given $\delta>0$ sufficiently small, for any $x\in\partial B_{\delta}(O_{1})$
and any nonnegative measurable function $g:M\rightarrow\mathbb{R}$,
$\displaystyle
E_{x}\left(\int_{0}^{\tau_{1}^{\varepsilon}}g\left(X_{s}^{\varepsilon}\right)ds\right)^{2}$
$\displaystyle\leq l\sum_{j\in L}\left[\sup_{y\in\partial
B_{\delta}(O_{j})}E_{y}\left(\int_{0}^{\tau_{1}}g\left(X_{s}^{\varepsilon}\right)ds\right)^{2}\right]\cdot
E_{x}N_{j}$ $\displaystyle\qquad+2l\sum_{j\in L}\left[\sup_{y\in\partial
B_{\delta}(O_{j})}E_{y}\left(\int_{0}^{\tau_{1}}g\left(X_{s}^{\varepsilon}\right)ds\right)\right]^{2}\cdot
E_{x}N_{j}$
$\displaystyle\quad\quad\qquad\cdot\sum_{k=1}^{\infty}\sup_{y\in\partial
B_{\delta}(O_{j})}P_{y}\left(k\leq N_{j}\right).$ (7.7)
Although as noted the proofs are given in the Appendix, these results follow
in a straightforward way by decomposing the excursion away from $O_{1}$ during
$[0,\tau_{1}^{\varepsilon}]$, which only stops when returning to a
neighborhood of $O_{1}$, into excursions between any pair of equilibrium
points, counting the number of such excursions that start near a particular
equilibrium point, and using the strong Markov property.
###### Remark 7.10.
Following an analogous argument as in the proof of Lemma 7.8 and Lemma 7.9, we
can prove the following: Given $\delta>0$ sufficiently small, for any
$x\in\partial B_{\delta}(O_{1})$ and any nonnegative measurable function
$g:M\rightarrow\mathbb{R}$,
$E_{x}\left(\int_{\sigma_{0}^{\varepsilon}}^{\tau_{1}^{\varepsilon}}g\left(X_{s}^{\varepsilon}\right)ds\right)\leq{\textstyle\sum_{j\in
L\setminus\\{1\\}}}\left[\sup_{y\in\partial
B_{\delta}(O_{j})}E_{y}\left(\int_{0}^{\tau_{1}}g\left(X_{s}^{\varepsilon}\right)ds\right)\right]\cdot
E_{x}N_{j}$
and
$\displaystyle
E_{x}\left(\int_{\sigma_{0}^{\varepsilon}}^{\tau_{1}^{\varepsilon}}g\left(X_{s}^{\varepsilon}\right)ds\right)^{2}$
$\displaystyle\leq l{\textstyle\sum_{j\in
L\setminus\\{1\\}}}\left[\sup_{y\in\partial
B_{\delta}(O_{j})}E_{y}\left(\int_{0}^{\tau_{1}}g\left(X_{s}^{\varepsilon}\right)ds\right)^{2}\right]\cdot
E_{x}N_{j}$ $\displaystyle\quad+2l{\textstyle\sum_{j\in
L\setminus\\{1\\}}}\left[\sup_{y\in\partial
B_{\delta}(O_{j})}E_{y}\left(\int_{0}^{\tau_{1}}g\left(X_{s}^{\varepsilon}\right)ds\right)\right]^{2}\cdot
E_{x}N_{j}$
$\displaystyle\quad\qquad\cdot{\textstyle\sum_{k=1}^{\infty}}\sup_{y\in\partial
B_{\delta}(O_{j})}P_{y}\left(k\leq N_{j}\right).$
The main difference is that if the integration starts from
$\sigma_{0}^{\varepsilon}$ (the first visiting time of
${\textstyle\bigcup\nolimits_{j\in L\setminus\\{1\\}}}\partial
B_{\delta}(O_{j})$), then any summation appearing in the upper bounds should
sum over all indices in $L\setminus\\{1\\}$ instead of $L.$
Owing to its frequent appearance but with varying arguments, we introduce the
notation
$I^{\varepsilon}(t_{1},t_{2};f,A)\doteq\int_{t_{1}}^{t_{2}}e^{-\frac{1}{\varepsilon}f(X_{s}^{\varepsilon})}1_{A}(X_{s}^{\varepsilon})ds,$
(7.8)
and write $I^{\varepsilon}(t;f,A)$ if $t_{1}=0$ and $t_{2}=t$ so that, e.g.,
$S_{1}^{\varepsilon}=I^{\varepsilon}(\tau_{1}^{\varepsilon};f,A)$.
###### Corollary 7.11.
Given any measurable set $A\subset M$, a measurable function
$f:M\rightarrow\mathbb{R},$ $j\in L$ and $\delta>0,$ we have
$\displaystyle\liminf_{\varepsilon\rightarrow
0}-\varepsilon\log\left(\sup_{z\in\partial
B_{\delta}(O_{1})}E_{z}I^{\varepsilon}(\tau_{1}^{\varepsilon};f,A)\right)$
$\displaystyle\quad\geq\min_{j\in L}\left\\{\liminf_{\varepsilon\rightarrow
0}-\varepsilon\log\left(\sup_{z\in\partial
B_{\delta}(O_{1})}E_{z}N_{j}\right)+\liminf_{\varepsilon\rightarrow
0}-\varepsilon\log\left(\sup_{z\in\partial
B_{\delta}(O_{j})}E_{z}I^{\varepsilon}(\tau_{1};f,A)\right)\right\\},$
and
$\liminf_{\varepsilon\rightarrow 0}-\varepsilon\log\left(\sup_{z\in\partial
B_{\delta}(O_{1})}E_{z}I^{\varepsilon}(\tau_{1}^{\varepsilon};f,A)^{2}\right)\geq\min_{j\in
L}\left(\hat{R}_{j}^{(1)}\wedge\hat{R}_{j}^{(2)}\right),$
where
$\hat{R}_{j}^{(1)}\doteq\liminf_{\varepsilon\rightarrow
0}-\varepsilon\log\left(\sup_{z\in\partial
B_{\delta}(O_{j})}E_{z}I^{\varepsilon}(\tau_{1};f,A)^{2}\right)+\liminf_{\varepsilon\rightarrow
0}-\varepsilon\log\left(\sup_{z\in\partial
B_{\delta}(O_{1})}E_{z}N_{j}\right)$
and
$\displaystyle\hat{R}_{j}^{(2)}$ $\displaystyle\doteq
2\liminf_{\varepsilon\rightarrow 0}-\varepsilon\log\left(\sup_{z\in\partial
B_{\delta}(O_{j})}E_{z}I^{\varepsilon}(\tau_{1};f,A)\right)+\liminf_{\varepsilon\rightarrow
0}-\varepsilon\log\left(\sup_{z\in\partial
B_{\delta}(O_{1})}E_{z}N_{j}\right)$
$\displaystyle\qquad+\liminf_{\varepsilon\rightarrow
0}-\varepsilon\log\left(\sum\nolimits_{\ell=1}^{\infty}\sup_{z\in\partial
B_{\delta}(O_{j})}P_{z}\left(\ell\leq N_{j}\right)\right).$
###### Proof.
For the first part, applying Lemma 7.8 with
$g(x)=e^{-\frac{1}{\varepsilon}f\left(x\right)}1_{A}\left(x\right)$ and using
(7.1) and (7.2) completes the proof. For the second part, using Lemma 7.9 with
$g(x)=e^{-\frac{1}{\varepsilon}f\left(x\right)}1_{A}\left(x\right)$ and using
(7.1) and (7.2) again completes the proof. ∎
###### Remark 7.12.
Owing to Remark 7.10, we can modify the proof of Corollary 7.11 and show that
given any set $A\subset M,$ a measurable function $f:M\rightarrow\mathbb{R},$
$j\in L$ and $\delta>0,$
$\displaystyle\liminf_{\varepsilon\rightarrow
0}-\varepsilon\log\left(\sup_{z\in\partial
B_{\delta}(O_{1})}E_{z}I^{\varepsilon}(\sigma_{0}^{\varepsilon},\tau_{1}^{\varepsilon};f,A)\right)$
$\displaystyle\quad\geq\min_{j\in
L\setminus\\{1\\}}\left\\{\liminf_{\varepsilon\rightarrow
0}-\varepsilon\log\left(\sup_{z\in\partial
B_{\delta}(O_{1})}E_{z}N_{j}\right)+\liminf_{\varepsilon\rightarrow
0}-\varepsilon\log\left(\sup_{z\in\partial
B_{\delta}(O_{j})}E_{z}I^{\varepsilon}(\tau_{1};f,A)\right)\right\\}.$
Moreover,
$\liminf_{\varepsilon\rightarrow 0}-\varepsilon\log\left(\sup_{z\in\partial
B_{\delta}(O_{1})}E_{z}I^{\varepsilon}(\sigma_{0}^{\varepsilon},\tau_{1}^{\varepsilon};f,A)^{2}\right)\geq\min_{j\in
L\setminus\\{1\\}}\left(\hat{R}_{j}^{(1)}\wedge\hat{R}_{j}^{(2)}\right),$
where the definitions of $\hat{R}_{j}^{(1)}$ and $\hat{R}_{j}^{(2)}$ can be
found in Corollary 7.11.
We next consider lower bounds on
$\liminf_{\varepsilon\rightarrow 0}-\varepsilon\log\left(\sup_{z\in\partial
B_{\delta}(O_{j})}E_{z}I^{\varepsilon}(\tau_{1};f,A)\right)\quad\mbox{and}\quad\liminf_{\varepsilon\rightarrow
0}-\varepsilon\log\left(\sup_{z\in\partial
B_{\delta}(O_{j})}E_{z}I^{\varepsilon}(\tau_{1};f,A)^{2}\right)$
for $j\in L$. We state some useful results before studying the lower bounds.
Recall also that $\tau_{1}$ is the time to reach the $\delta$-neighborhood of
any of the equilibrium points after leaving the $2\delta$-neighborhood of one
of the equilibrium points.
###### Lemma 7.13.
For any $\eta>0,$ there exists $\delta_{0}\in(0,1)$ and
$\varepsilon_{0}\in(0,1)$, such that for all $\delta\in(0,\delta_{0})$ and
$\varepsilon\in(0,\varepsilon_{0})$
$\sup_{x\in M}E_{x}\tau_{1}\leq e^{\frac{\eta}{\varepsilon}}\text{ and
}\sup_{x\in M}E_{x}\left(\tau_{1}\right)^{2}\leq
e^{\frac{\eta}{\varepsilon}}.$
###### Proof.
If $x$ is not in $\cup_{j\in L}B_{2\delta}(O_{j})$ then a uniform (in $x$ and
small $\varepsilon$) upper bound on these expected values follows from the
corollary to [12, Lemma 1.9, Chapter 6].
If $x\in\cup_{j\in L}B_{2\delta}(O_{j})$ then we must wait till the process
reaches $\cup_{j\in L}\partial B_{2\delta}(O_{j})$, after which we can use the
uniform bound (and the strong Markov property). Since there exists $\delta>0$
such that the lower bound $P_{x}(\inf\\{t\geq 0:X_{t}^{\varepsilon}\in\cup_{j\in
L}\partial B_{2\delta}(O_{j})\\}\leq 1)\geq e^{-\eta/2\varepsilon}$ is valid for
all $x\in\cup_{j\in L}B_{2\delta}(O_{j})$ and small $\varepsilon>0$, upper
bounds of the desired form follow from the Markov property and standard
calculations. ∎
For any compact set $A\subset M$, we use $\vartheta_{A}$ to denote the first
hitting time
$\vartheta_{A}\doteq\inf\left\\{t\geq 0:X_{t}^{\varepsilon}\in A\right\\}.$
Note that $\vartheta_{A}$ is a stopping time with respect to filtration
$\\{\mathcal{F}_{t}\\}_{t\geq 0}.$ The following result is relatively
straightforward given the just discussed bound on the distribution of
$\tau_{1}$, and follows by partitioning according to $\tau_{1}\geq T$ and
$\tau_{1}<T$ for large but fixed $T$.
###### Lemma 7.14.
For any compact set $A\subset M,$ $j\in L$ and any $\eta>0,$ there exists
$\delta_{0}\in(0,1)$ and $\varepsilon_{0}\in(0,1)$, such that for all
$\varepsilon\in(0,\varepsilon_{0})$ and $\delta\in(0,\delta_{0})$
$\sup_{z\in\partial
B_{\delta}(O_{j})}P_{z}\left(\vartheta_{A}\leq\tau_{1}\right)\leq
e^{-\frac{1}{\varepsilon}\left(\inf_{x\in
A}\left[V\left(O_{j},x\right)\right]-\eta\right)}.$
###### Lemma 7.15.
Given a compact set $A\subset M$, any $j\in L$ and $\eta>0,$ there exists
$\delta_{0}\in(0,1),$ such that for any $\delta\in(0,\delta_{0})$
$\liminf_{\varepsilon\rightarrow 0}-\varepsilon\log\left(\sup_{z\in\partial
B_{\delta}(O_{j})}E_{z}\left[\int_{0}^{\tau_{1}}1_{A}\left(X_{s}^{\varepsilon}\right)ds\right]\right)\geq\inf_{x\in
A}V\left(O_{j},x\right)-\eta$
and
$\liminf_{\varepsilon\rightarrow 0}-\varepsilon\log\left(\sup_{z\in\partial
B_{\delta}(O_{j})}E_{z}\left(\int_{0}^{\tau_{1}}1_{A}\left(X_{s}^{\varepsilon}\right)ds\right)^{2}\right)\geq\inf_{x\in
A}V\left(O_{j},x\right)-\eta.$
###### Proof.
The idea of this proof follows from the proof of Theorem 4.3 in [12, Chapter
4]. Since
$I^{\varepsilon}(\tau_{1};0,A)=\int_{0}^{\tau_{1}}1_{A}\left(X_{s}^{\varepsilon}\right)ds$,
for any $x\in\partial B_{\delta}(O_{j}),$
$\displaystyle E_{x}I^{\varepsilon}(\tau_{1};0,A)$
$\displaystyle=E_{x}\left[I^{\varepsilon}(\tau_{1};0,A)1_{\left\\{\vartheta_{A}\leq\tau_{1}\right\\}}\right]=E_{x}\left[E_{x}\left[\left.I^{\varepsilon}(\tau_{1};0,A)\right|\mathcal{F}_{\vartheta_{A}}\right]1_{\left\\{\vartheta_{A}\leq\tau_{1}\right\\}}\right]$
$\displaystyle=E_{x}\left[(E_{X_{\vartheta_{A}}^{\varepsilon}}I^{\varepsilon}(\tau_{1};0,A))1_{\left\\{\vartheta_{A}\leq\tau_{1}\right\\}}\right]\leq\sup\nolimits_{y\in\partial
A}E_{y}\tau_{1}\cdot\sup\nolimits_{z\in\partial
B_{\delta}(O_{j})}P_{z}\left(\vartheta_{A}\leq\tau_{1}\right).$
The inequality is due to
$E_{X_{\vartheta_{A}}^{\varepsilon}}I^{\varepsilon}(\tau_{1};0,A)\leq
E_{X_{\vartheta_{A}}^{\varepsilon}}\tau_{1}\leq\sup_{y\in\partial
A}E_{y}\tau_{1}.$ We then apply Lemma 7.13 and Lemma 7.14 to find that for the
given $\eta>0,$ there exists $\delta_{0}\in(0,1)$ and
$\varepsilon_{0}\in(0,1)$, such that for all
$\varepsilon\in(0,\varepsilon_{0})$ and $\delta\in(0,\delta_{0}),$
$E_{x}I^{\varepsilon}(\tau_{1};0,A)\leq\sup_{y\in\partial
A}E_{y}\tau_{1}\cdot\sup_{z\in\partial
B_{\delta}(O_{j})}P_{z}\left(\vartheta_{A}\leq\tau_{1}\right)\leq
e^{\frac{\eta/2}{\varepsilon}}e^{-\frac{1}{\varepsilon}\left(\inf_{y\in
A}V\left(O_{j},y\right)-\eta/2\right)}.$
Thus,
$\displaystyle\liminf_{\varepsilon\rightarrow
0}-\varepsilon\log\left(\sup\nolimits_{z\in\partial
B_{\delta}(O_{j})}E_{z}I^{\varepsilon}(\tau_{1};0,A)\right)\geq\inf_{x\in
A}V\left(O_{j},x\right)-\eta.$
This completes the proof of part 1.
For part 2, following the same conditioning argument as for part 1 with the
use of Lemma 7.13 and Lemma 7.14 gives that for the given $\eta>0,$ there
exists $\delta_{0}\in(0,1)$ and $\varepsilon_{0}\in(0,1)$, such that for all
$\varepsilon\in(0,\varepsilon_{0})$ and $\delta\in(0,\delta_{0}),$
$E_{x}I^{\varepsilon}(\tau_{1};0,A)^{2}\leq\sup_{y\in\partial
A}E_{y}\left(\tau_{1}\right)^{2}\cdot\sup_{z\in\partial
B_{\delta}(O_{j})}P_{z}\left(\vartheta_{A}\leq\tau_{1}\right)\leq
e^{\frac{\eta/2}{\varepsilon}}e^{-\frac{1}{\varepsilon}\left(\inf_{x\in
A}V\left(O_{j},x\right)-\eta/2\right)}.$
Therefore,
$\displaystyle\liminf_{\varepsilon\rightarrow
0}-\varepsilon\log\left(\sup\nolimits_{z\in\partial
B_{\delta}(O_{j})}E_{z}I^{\varepsilon}(\tau_{1};0,A)^{2}\right)\geq\inf_{x\in
A}V\left(O_{j},x\right)-\eta.$
∎
###### Lemma 7.16.
Given compact sets $A_{1},A_{2}\subset M$, $j\in L$ and $\eta>0,$ there exists
$\delta_{0}\in(0,1),$ such that for any $\delta\in(0,\delta_{0})$
$\displaystyle\liminf_{\varepsilon\rightarrow
0}-\varepsilon\log\left(\sup_{z\in\partial
B_{\delta}(O_{j})}E_{z}\left[\left(\int_{0}^{\tau_{1}}1_{A_{1}}\left(X_{s}^{\varepsilon}\right)ds\right)\left(\int_{0}^{\tau_{1}}1_{A_{2}}\left(X_{s}^{\varepsilon}\right)ds\right)\right]\right)$
$\displaystyle\qquad\geq\max\left\\{\inf_{x\in
A_{1}}V\left(O_{j},x\right),\inf_{x\in
A_{2}}V\left(O_{j},x\right)\right\\}-\eta.$
###### Proof.
We set $\vartheta_{A_{i}}\doteq\inf\left\\{t\geq 0:X_{t}^{\varepsilon}\in
A_{i}\right\\}$ for $i=1,2.$ For any $x\in\partial B_{\delta}(O_{j}),$ using a
conditioning argument as in the proof of Lemma 7.15 we obtain that for any
$\eta>0,$ there exists $\delta_{0}\in(0,1)$ and $\varepsilon_{0}\in(0,1)$,
such that for all $\varepsilon\in(0,\varepsilon_{0})$ and
$\delta\in(0,\delta_{0}),$
$\displaystyle
E_{x}\left[\left(\int_{0}^{\tau_{1}}1_{{A}_{1}}\left(X_{s}^{\varepsilon}\right)ds\right)\left(\int_{0}^{\tau_{1}}1_{{A}_{2}}\left(X_{s}^{\varepsilon}\right)ds\right)\right]$
(7.9)
$\displaystyle=E_{x}\left[\left(\int_{0}^{\tau_{1}}\int_{0}^{\tau_{1}}1_{{A}_{1}}\left(X_{s}^{\varepsilon}\right)1_{{A}_{2}}\left(X_{t}^{\varepsilon}\right)dsdt\right)1_{\left\\{\vartheta_{{A}_{1}}\vee\vartheta_{{A}_{2}}\leq\tau_{1}\right\\}}\right]$
$\displaystyle=E_{x}\left[\left(E_{X_{\vartheta_{{A}_{1}}\vee\vartheta_{{A}_{2}}}^{\varepsilon}}\left[\int_{0}^{\tau_{1}}\int_{0}^{\tau_{1}}1_{{A}_{1}}\left(X_{s}^{\varepsilon}\right)1_{{A}_{2}}\left(X_{t}^{\varepsilon}\right)dsdt\right]\right)1_{\left\\{\vartheta_{{A}_{1}}\vee\vartheta_{{A}_{2}}\leq\tau_{1}\right\\}}\right]$
$\displaystyle\leq\sup\nolimits_{y\in\partial{A}_{1}\cup\partial{A}_{2}}E_{y}\left(\tau_{1}\right)^{2}\cdot\sup\nolimits_{z\in\partial
B_{\delta}(O_{j})}P_{z}\left(\vartheta_{{A}_{1}}\leq\tau_{1},\vartheta_{{A}_{2}}\leq\tau_{1}\right)$
$\displaystyle\leq
e^{\frac{\eta/2}{\varepsilon}}\cdot\min\left\\{\sup\nolimits_{z\in\partial
B_{\delta}(O_{j})}P_{z}\left(\vartheta_{{A}_{1}}\leq\tau_{1}\right),\sup\nolimits_{z\in\partial
B_{\delta}(O_{j})}P_{z}\left(\vartheta_{{A}_{2}}\leq\tau_{1}\right)\right\\},$
The last inequality holds since for $i=1,2$
$\sup\nolimits_{z\in\partial
B_{\delta}(O_{j})}P_{z}\left(\vartheta_{A_{1}}\leq\tau_{1},\vartheta_{A_{2}}\leq\tau_{1}\right)\leq\sup\nolimits_{z\in\partial
B_{\delta}(O_{j})}P_{z}\left(\vartheta_{A_{i}}\leq\tau_{1}\right)$
and owing to Lemma 7.13, for all $\varepsilon\in(0,\varepsilon_{0})$
$\sup\nolimits_{y\in\partial A_{1}}E_{y}\left(\tau_{1}\right)^{2}\leq
e^{\frac{\eta/2}{\varepsilon}}\text{ and }\sup\nolimits_{y\in\partial
A_{2}}E_{y}\left(\tau_{1}\right)^{2}\leq e^{\frac{\eta/2}{\varepsilon}}.$
Furthermore, for the given $\eta>0,$ by Lemma 7.14, there exists
$\delta_{i}\in(0,1)$ such that for any $\delta\in(0,\delta_{i})$
$\liminf_{\varepsilon\rightarrow
0}-\varepsilon\log\left(\sup\nolimits_{z\in\partial
B_{\delta}(O_{j})}P_{z}\left(\vartheta_{A_{i}}\leq\tau_{1}\right)\right)\geq\inf_{x\in
A_{i}}V\left(O_{j},x\right)-\eta/2$
for $i=1,2.$ Hence, letting $\delta_{0}=\delta_{1}\wedge\delta_{2},$ for any
$\delta\in(0,\delta_{0})$
$\displaystyle\liminf_{\varepsilon\rightarrow
0}-\varepsilon\log\left(\sup\nolimits_{z\in\partial
B_{\delta}(O_{j})}E_{z}\left[\left(\int_{0}^{\tau_{1}}1_{A_{1}}\left(X_{s}^{\varepsilon}\right)ds\right)\left(\int_{0}^{\tau_{1}}1_{A_{2}}\left(X_{s}^{\varepsilon}\right)ds\right)\right]\right)$
$\displaystyle\geq\liminf_{\varepsilon\rightarrow
0}-\varepsilon\log\left(e^{\frac{\eta}{2\varepsilon}}\min\left\\{\sup\nolimits_{z\in\partial
B_{\delta}(O_{j})}P_{z}\left(\vartheta_{A_{1}}\leq\tau_{1}\right),\sup\nolimits_{z\in\partial
B_{\delta}(O_{j})}P_{z}\left(\vartheta_{A_{2}}\leq\tau_{1}\right)\right\\}\right)$
$\displaystyle\geq-\eta/2+\max\left\\{\liminf_{\varepsilon\rightarrow
0}-\varepsilon\log\left(\sup\nolimits_{z\in\partial
B_{\delta}(O_{j})}P_{z}\left(\vartheta_{A_{1}}\leq\tau_{1}\right)\right),\right.$
$\displaystyle\left.\qquad\qquad\qquad\qquad\qquad\liminf_{\varepsilon\rightarrow
0}-\varepsilon\log\left(\sup\nolimits_{z\in\partial
B_{\delta}(O_{j})}P_{z}\left(\vartheta_{A_{2}}\leq\tau_{1}\right)\right)\right\\}$
$\displaystyle\geq\max\left\\{\inf\nolimits_{x\in
A_{1}}V\left(O_{j},x\right),\inf\nolimits_{x\in
A_{2}}V\left(O_{j},x\right)\right\\}-\eta.$
The first inequality is from (7.9). ∎
###### Remark 7.17.
The next lemma considers asymptotics of the first and second moments of a
certain integral that will appear in a decomposition of $S^{\varepsilon}_{1}$.
It is important to note that the variational bounds for both moments have the
same structure as an infimum over $x\in A$. While one might consider it
possible that the variational problem for the second moment could require a
pair of parameters (e.g., infimum over $x,y\in A$), the infimum is in fact
achieved on the “diagonal” $x=y$. This means that the biggest contribution to
the second moment is likewise due to mass along the “diagonal.”
###### Lemma 7.18.
Given a compact set $A\subset M,$ a continuous function
$f:M\rightarrow\mathbb{R},$ $j\in L$ and $\eta>0,$ there exists
$\delta_{0}\in(0,1),$ such that for any $\delta\in(0,\delta_{0})$
$\liminf_{\varepsilon\rightarrow 0}-\varepsilon\log\left(\sup_{z\in\partial
B_{\delta}(O_{j})}E_{z}I^{\varepsilon}(\tau_{1};f,A)\right)\geq\inf_{x\in
A}\left[f\left(x\right)+V\left(O_{j},x\right)\right]-\eta$
and
$\liminf_{\varepsilon\rightarrow 0}-\varepsilon\log\left(\sup_{z\in\partial
B_{\delta}(O_{j})}E_{z}I^{\varepsilon}(\tau_{1};f,A)^{2}\right)\geq\inf_{x\in
A}\left[2f\left(x\right)+V\left(O_{j},x\right)\right]-\eta.$
###### Proof.
Since a continuous function is bounded on a compact set, there exists
$m\in(0,\infty)$ such that $-m\leq f(x)\leq m$ for all $x\in A.$ For
$n\in\mathbb{N}$ and $k\in\\{1,2,\ldots,n\\},$ consider the sets
$A_{n,k}\doteq\left\\{x\in
A:f\left(x\right)\in\left[-m+\frac{2\left(k-1\right)m}{n},-m+\frac{2km}{n}\right]\right\\}.$
Note that $A_{n,k}$ is a compact set for any $n,k.$ In addition, for any $n$
fixed, ${\textstyle\bigcup_{k=1}^{n}}A_{n,k}=A.$ With this expression, for any
$x\in\partial B_{\delta}(O_{j})$ and $n\in\mathbb{N}$
$\displaystyle
E_{x}I^{\varepsilon}(\tau_{1};f,A)\leq\sum\nolimits_{k=1}^{n}E_{x}I^{\varepsilon}(\tau_{1};f,{A_{n,k}})\leq\sum\nolimits_{k=1}^{n}E_{x}I^{\varepsilon}(\tau_{1};0,{A_{n,k}})e^{-\frac{1}{\varepsilon}\left(F_{n,k}-2m/n\right)}.$
The second inequality holds because by definition of $A_{n,k},$ for any $x\in
A_{n,k}$, $f(x)\geq F_{n,k}-2m/n$ with $F_{n,k}\doteq\sup_{y\in
A_{n,k}}f\left(y\right)$.
Next we first apply (7.2) and then Lemma 7.15 with compact sets $A_{n,k}$ for
$k\in\\{1,2,\ldots,n\\}$ to get
$\displaystyle\liminf_{\varepsilon\rightarrow
0}-\varepsilon\log\left(\sup_{z\in\partial
B_{\delta}(O_{j})}E_{z}I^{\varepsilon}(\tau_{1};f,A)\right)$
$\displaystyle\geq\min_{k\in\left\\{1,\ldots,n\right\\}}\left\\{\liminf_{\varepsilon\rightarrow
0}-\varepsilon\log\left(\sup\limits_{z\in\partial
B_{\delta}(O_{j})}E_{z}I^{\varepsilon}(\tau_{1};0,{A_{n,k}})e^{-\frac{1}{\varepsilon}\left(F_{n,k}-\frac{2m}{n}\right)}\right)\right\\}$
$\displaystyle=\min_{k\in\left\\{1,\ldots,n\right\\}}\left\\{\liminf_{\varepsilon\rightarrow
0}-\varepsilon\log\left(\sup_{z\in\partial
B_{\delta}(O_{j})}E_{z}I^{\varepsilon}(\tau_{1};0,{A_{n,k}})\right)+F_{n,k}\right\\}-\frac{2m}{n}$
$\displaystyle\geq\min_{k\in\left\\{1,\ldots,n\right\\}}\left\\{\sup_{x\in
A_{n,k}}f\left(x\right)+\inf_{x\in
A_{n,k}}V\left(O_{j},x\right)\right\\}-\eta-\frac{2m}{n}.$
Finally, we know that $V\left(O_{j},x\right)$ is bounded below by $0$, and
then we use the fact that for any two functions
$f,g:\mathbb{R}^{d}\rightarrow\mathbb{R}$ with $g$ being bounded below (to
ensure that the right hand side is well defined) and any set
$A\subset\mathbb{R}^{d},$ $\inf_{x\in
A}\left(f\left(x\right)+g\left(x\right)\right)\leq\sup_{x\in
A}f\left(x\right)+\inf_{x\in A}g\left(x\right)$ to find that the last minimum
in the previous display is greater than or equal to
$\displaystyle\min_{k\in\left\\{1,\ldots,n\right\\}}\left\\{\inf_{x\in
A_{n,k}}\left[f\left(x\right)+V\left(O_{j},x\right)\right]\right\\}=\inf_{x\in
A}\left[f\left(x\right)+V\left(O_{j},x\right)\right].$
Therefore,
$\liminf_{\varepsilon\rightarrow 0}-\varepsilon\log\left(\sup_{z\in\partial
B_{\delta}(O_{j})}E_{z}I^{\varepsilon}(\tau_{1};f,A)\right)\geq\inf_{x\in
A}\left[f\left(x\right)+V\left(O_{j},x\right)\right]-\eta-\frac{2m}{n}.$
Since $n$ is arbitrary, sending $n\rightarrow\infty$ completes the proof for
the first part.
Turning to part 2, we follow the same argument as for part 1. For any
$n\in\mathbb{N},$ we use the decomposition of $A$ into
${\textstyle\bigcup_{k=1}^{n}}A_{n,k}$ to have that for any $x\in\partial
B_{\delta}(O_{j}),$
$\displaystyle E_{x}I^{\varepsilon}(\tau_{1};f,A)^{2}\leq
E_{x}\left(\sum_{k=1}^{n}I^{\varepsilon}(\tau_{1};f,{A_{n,k}})\right)^{2}=\sum_{k=1}^{n}\sum_{\ell=1}^{n}E_{x}\left[I^{\varepsilon}(\tau_{1};f,{A_{n,k}})I^{\varepsilon}(\tau_{1};f,A_{n,\ell})\right].$
Recall that $F_{n,k}$ is used to denote $\sup_{y\in A_{n,k}}f\left(y\right)$.
Using the definition of $A_{n,k}$ gives that for any
$k,\ell\in\\{1,\ldots,n\\}$
$\displaystyle
E_{x}\left[I^{\varepsilon}(\tau_{1};f,{A_{n,k}})I^{\varepsilon}(\tau_{1};f,A_{n,\ell})\right]$
$\displaystyle\leq\sup_{z\in\partial
B_{\delta}(O_{j})}E_{z}\left[I^{\varepsilon}(\tau_{1};0,{A_{n,k}})I^{\varepsilon}(\tau_{1};0,A_{n,\ell})\right]e^{-\frac{1}{\varepsilon}\left(F_{n,k}+F_{n,\ell}-\frac{4m}{n}\right)}.$
Applying (7.2) first and then Lemma 7.16 with compact sets $A_{n,k}$ and
$A_{n,\ell}$ pairwise for all $k,\ell\in\\{1,2,\ldots,n\\}$ gives that
$\displaystyle\liminf_{\varepsilon\rightarrow
0}-\varepsilon\log\left(\sup_{z\in\partial
B_{\delta}(O_{j})}E_{z}I^{\varepsilon}(\tau_{1};f,A)^{2}\right)$
$\displaystyle\geq\min_{k,\ell\in\left\\{1,\ldots,n\right\\}}\liminf_{\varepsilon\rightarrow
0}-\varepsilon\log\sup_{z\in\partial
B_{\delta}(O_{j})}E_{z}\left[I^{\varepsilon}(\tau_{1};f,{A_{n,k}})I^{\varepsilon}(\tau_{1};f,A_{n,\ell})\right]$
$\displaystyle\geq\min_{k,\ell\in\left\\{1,\ldots,n\right\\}}\left\\{\max\left\\{\inf_{x\in
A_{n,k}}V\left(O_{j},x\right),\inf_{x\in
A_{n,\ell}}V\left(O_{j},x\right)\right\\}+F_{n,k}+F_{n,\ell}\right\\}-\eta-\frac{4m}{n}$
$\displaystyle\geq\min_{k\in\left\\{1,\ldots,n\right\\}}\left\\{\sup_{x\in
A_{n,k}}\left[2f\left(x\right)\right]+\inf_{x\in
A_{n,k}}V\left(O_{j},x\right)\right\\}-\eta-\frac{4m}{n}$
$\displaystyle\geq\min_{k\in\left\\{1,\ldots,n\right\\}}\left\\{\inf_{x\in
A_{n,k}}\left[2f\left(x\right)+V\left(O_{j},x\right)\right]\right\\}-\eta-\frac{4m}{n}$
$\displaystyle=\inf_{x\in
A}\left[2f\left(x\right)+V\left(O_{j},x\right)\right]-\eta-\frac{4m}{n}.$
Sending $n\rightarrow\infty$ completes the proof for the second part. ∎
Our next interest is to find lower bounds for
$\liminf_{\varepsilon\rightarrow 0}-\varepsilon\log\left(\sup_{z\in\partial
B_{\delta}(O_{1})}E_{z}N_{j}\right)\text{ and }\liminf_{\varepsilon\rightarrow
0}-\varepsilon\log\left(\sum_{\ell=1}^{\infty}\sup_{z\in\partial
B_{\delta}(O_{j})}P_{z}\left(\ell\leq N_{j}\right)\right).$
We first recall that $N_{j}$ is the number of visits of the embedded Markov
chain $\\{Z_{n}\\}_{n}=\\{X_{\tau_{n}}^{\varepsilon}\\}_{n}$ to $\partial
B_{\delta}(O_{j})$ within one loop of regenerative cycle. Also, the
definitions of $G(i)$ and $G(i,j)$ for any $i,j\in L$ with $i\neq j$ are given
in Definition 3.8 and Remark 3.9.
###### Lemma 7.19.
For any $\eta>0,$ there exists $\delta_{0}\in(0,1),$ such that for any
$\delta\in(0,\delta_{0})$ and for any $j\in L$
$\liminf_{\varepsilon\rightarrow
0}-\varepsilon\log\left(\sup\nolimits_{z\in\partial
B_{\delta}(O_{1})}E_{z}N_{j}\right)\geq-\min_{\ell\in
L\setminus\\{1\\}}V\left(O_{1},O_{\ell}\right)+W\left(O_{j}\right)-W\left(O_{1}\right)-\eta,\text{
}$
where
$W\left(O_{j}\right)\doteq\min_{g\in
G\left(j\right)}\left[{\textstyle\sum_{\left(m\rightarrow n\right)\in
g}}V\left(O_{m},O_{n}\right)\right].$
###### Proof.
According to Lemma 3.17 we know that for any $\eta>0,$ there exist
$\delta_{0}\in(0,1)$ and $\varepsilon_{0}\in(0,1),$ such that for any
$\delta\in(0,\delta_{0})$ and $\varepsilon\in(0,\varepsilon_{0}),$ for all
$x\in\partial B_{\delta}(O_{i}),$ the one-step transition probability of the
Markov chain $\\{Z_{n}\\}_{n}$ on $\partial B_{\delta}(O_{j})$ satisfies the
inequalities
$e^{-\frac{1}{\varepsilon}\left(V\left(O_{i},O_{j}\right)+\eta/4^{l-1}\right)}\leq
p(x,\partial B_{\delta}(O_{j}))\leq
e^{-\frac{1}{\varepsilon}\left(V\left(O_{i},O_{j}\right)-\eta/4^{l-1}\right)}.$
(7.10)
We can then apply Lemma 7.6 with
$p_{ij}=e^{-\frac{1}{\varepsilon}V\left(O_{i},O_{j}\right)}$ and
$a=e^{\frac{1}{\varepsilon}\eta/4^{l-1}}$ to obtain that
$\displaystyle\sup_{z\in\partial
B_{\delta}(O_{1})}E_{z}N_{j}\leq\frac{e^{\frac{1}{\varepsilon}\eta}}{\sum_{\ell\in
L\setminus\\{1\\}}e^{-\frac{1}{\varepsilon}V\left(O_{1},O_{\ell}\right)}}\frac{{\textstyle\sum_{g\in
G\left(j\right)}}\pi\left(g\right)}{\sum_{g\in
G\left(1\right)}\pi\left(g\right)}\leq\frac{e^{\frac{1}{\varepsilon}\eta}}{e^{-\frac{1}{\varepsilon}\min_{\ell\in
L\setminus\\{1\\}}V\left(O_{1},O_{\ell}\right)}}\frac{{\textstyle\sum_{g\in
G\left(j\right)}}\pi\left(g\right)}{\sum_{g\in
G\left(1\right)}\pi\left(g\right)}.$
Thus,
$\displaystyle\liminf_{\varepsilon\rightarrow
0}-\varepsilon\log\left(\sup_{z\in\partial
B_{\delta}(O_{1})}E_{z}N_{j}\right)\geq-\min_{\ell\in
L\setminus\\{1\\}}V\left(O_{1},O_{\ell}\right)-\eta+\liminf_{\varepsilon\rightarrow
0}-\varepsilon\log\left(\frac{\sum_{g\in
G\left(j\right)}\pi\left(g\right)}{\sum_{g\in
G\left(1\right)}\pi\left(g\right)}\right).$
Hence it suffices to show that
$\liminf_{\varepsilon\rightarrow 0}-\varepsilon\log\left(\frac{\sum_{g\in
G\left(j\right)}\pi\left(g\right)}{\sum_{g\in
G\left(1\right)}\pi\left(g\right)}\right)\geq
W\left(O_{j}\right)-W\left(O_{1}\right).$
Observe that by definition for any $j\in L$ and $g\in G\left(j\right)$
$\displaystyle\pi\left(g\right)={\textstyle\prod_{\left(m\rightarrow
n\right)\in g}}p_{mn}={\textstyle\prod_{\left(m\rightarrow n\right)\in
g}}e^{-\frac{1}{\varepsilon}V\left(O_{m},O_{n}\right)}=\exp\left\\{-\frac{1}{\varepsilon}{\textstyle\sum_{\left(m\rightarrow
n\right)\in g}}V\left(O_{m},O_{n}\right)\right\\},$
which implies that
$\displaystyle\liminf_{\varepsilon\rightarrow
0}-\varepsilon\log\left(\frac{\sum_{g\in
G\left(j\right)}\pi\left(g\right)}{\sum_{g\in
G\left(1\right)}\pi\left(g\right)}\right)$ $\displaystyle\qquad\geq\min_{g\in
G\left(j\right)}\left[\liminf_{\varepsilon\rightarrow
0}-\varepsilon\log\left(\exp\left\\{-\frac{1}{\varepsilon}{\textstyle\sum_{\left(m\rightarrow
n\right)\in g}}V\left(O_{m},O_{n}\right)\right\\}\right)\right]$
$\displaystyle\qquad\qquad-\min_{g\in
G\left(1\right)}\left[\limsup_{\varepsilon\rightarrow
0}-\varepsilon\log\left(\exp\left\\{-\frac{1}{\varepsilon}{\textstyle\sum_{\left(m\rightarrow
n\right)\in g}}V\left(O_{m},O_{n}\right)\right\\}\right)\right]$
$\displaystyle\qquad=\min_{g\in
G\left(j\right)}\left[{\textstyle\sum_{\left(m\rightarrow n\right)\in
g}}V\left(O_{m},O_{n}\right)\right]-\min_{g\in
G\left(1\right)}\left[{\textstyle\sum_{\left(m\rightarrow n\right)\in
g}}V\left(O_{m},O_{n}\right)\right]$
$\displaystyle\qquad=W\left(O_{j}\right)-W\left(O_{1}\right).$
The inequality is from Lemma 7.1; the last equality holds due to the definition
of $W\left(O_{j}\right)$. ∎
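The step attributed to Lemma 7.1 rests on the elementary Laplace-type limit $-\varepsilon\log\sum_{i}e^{-a_{i}/\varepsilon}\rightarrow\min_{i}a_{i}$ for finitely many constants $a_{i}$ (this is the form in which the lemma enters the last two displays). A minimal numerical illustration, with arbitrary illustrative constants:

```python
import numpy as np

a = np.array([0.7, 1.3, 2.0])            # arbitrary illustrative constants
for eps in [0.1, 0.01, 0.001]:
    val = -eps * np.log(np.sum(np.exp(-a / eps)))
    print(eps, val)                       # approaches min(a) = 0.7
```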
Recall the definition of $W(O_{1}\cup O_{j})$ in (3.3). In the next result we
obtain bounds on, for example, a quantity close to the expected number of
visits to $B_{\delta}(O_{j})$ before visiting a neighborhood of $O_{1}$, after
starting near $O_{j}$.
###### Lemma 7.20.
For any $\eta>0,$ there exists $\delta_{0}\in(0,1),$ such that for any
$\delta\in(0,\delta_{0})$
$\liminf_{\varepsilon\rightarrow
0}-\varepsilon\log\left(\sum_{\ell=1}^{\infty}\sup_{z\in\partial
B_{\delta}(O_{1})}P_{z}\left(\ell\leq N_{1}\right)\right)\geq-\min_{\ell\in
L\setminus\\{1\\}}V\left(O_{1},O_{\ell}\right)-\eta$
and for any $j\in L\setminus\\{1\\}$
$\liminf_{\varepsilon\rightarrow
0}-\varepsilon\log\left(\sum_{\ell=1}^{\infty}\sup_{z\in\partial
B_{\delta}(O_{j})}P_{z}\left(\ell\leq N_{j}\right)\right)\geq W(O_{1}\cup
O_{j})-W\left(O_{1}\right)-\eta.$
###### Proof.
We again use that by Lemma 3.17, for any $\eta>0$ there exist
$\delta_{0}\in(0,1)$ and $\varepsilon_{0}\in(0,1),$ such that (7.10) holds for
any $\delta\in(0,\delta_{0})$, $\varepsilon\in(0,\varepsilon_{0})$ and all
$x\in\partial B_{\delta}(O_{i}).$ Then by Lemma 7.6 with
$p_{ij}=e^{-\frac{1}{\varepsilon}V\left(O_{i},O_{j}\right)}$ and
$a=e^{\frac{1}{\varepsilon}\eta/4^{l-1}}$
$\sum_{\ell=1}^{\infty}\sup_{x\in\partial
B_{\delta}(O_{1})}P_{x}\left(N_{1}\geq\ell\right)\leq\frac{e^{\frac{1}{\varepsilon}\eta}}{\sum_{\ell\in
L\setminus\\{1\\}}e^{-\frac{1}{\varepsilon}V\left(O_{1},O_{\ell}\right)}}$
and for any $j\in L\setminus\\{1\\}$
$\sum_{\ell=1}^{\infty}\sup_{x\in\partial
B_{\delta}(O_{j})}P_{x}\left(N_{j}\geq\ell\right)\leq
e^{\frac{1}{\varepsilon}\eta}\frac{\sum_{g\in
G\left(1,j\right)}\pi\left(g\right)}{\sum_{g\in
G\left(1\right)}\pi\left(g\right)}.$
Thus,
$\displaystyle\liminf_{\varepsilon\rightarrow
0}-\varepsilon\log\left(\sum_{\ell=1}^{\infty}\sup_{z\in\partial
B_{\delta}(O_{1})}P_{z}\left(\ell\leq
N_{1}\right)\right)\geq-\limsup_{\varepsilon\rightarrow
0}-\varepsilon\log\left(\sum\nolimits_{\ell\in
L\setminus\\{1\\}}e^{-\frac{1}{\varepsilon}V\left(O_{1},O_{\ell}\right)}\right)-\eta$
and
$\displaystyle\liminf_{\varepsilon\rightarrow
0}-\varepsilon\log\left(\sum_{\ell=1}^{\infty}\sup_{z\in\partial
B_{\delta}(O_{j})}P_{z}\left(\ell\leq
N_{j}\right)\right)\geq\liminf_{\varepsilon\rightarrow
0}-\varepsilon\log\left(\frac{\sum_{g\in
G\left(1,j\right)}\pi\left(g\right)}{\sum_{g\in
G\left(1\right)}\pi\left(g\right)}\right)-\eta.$
Following the same argument as for the proof of Lemma 7.19, we can use Lemma
7.1 to obtain that
$-\limsup_{\varepsilon\rightarrow
0}-\varepsilon\log\left(\sum\nolimits_{\ell\in
L\setminus\\{1\\}}e^{-\frac{1}{\varepsilon}V\left(O_{1},O_{\ell}\right)}\right)\geq-\min_{\ell\in
L\setminus\\{1\\}}V\left(O_{1},O_{\ell}\right)$
and
$\displaystyle\liminf_{\varepsilon\rightarrow
0}-\varepsilon\log\left(\frac{\sum_{g\in
G\left(1,j\right)}\pi\left(g\right)}{\sum_{g\in
G\left(1\right)}\pi\left(g\right)}\right)$ $\displaystyle\qquad\geq\min_{g\in
G\left(1,j\right)}\left[{\textstyle\sum_{\left(m\rightarrow n\right)\in
g}}V\left(O_{m},O_{n}\right)\right]-\min_{g\in
G\left(1\right)}\left[{\textstyle\sum_{\left(m\rightarrow n\right)\in
g}}V\left(O_{m},O_{n}\right)\right].$
Recalling (3.2) and (3.3), we are done. ∎
As mentioned at the beginning of this subsection, our main goal is to provide
lower bounds for
$\liminf_{\varepsilon\rightarrow 0}-\varepsilon\log\left(\sup_{z\in\partial
B_{\delta}(O_{1})}E_{z}\left(\int_{0}^{\tau_{1}^{\varepsilon}}e^{-\frac{1}{\varepsilon}f\left(X_{s}^{\varepsilon}\right)}1_{A}\left(X_{s}^{\varepsilon}\right)ds\right)\right)$
and
$\liminf_{\varepsilon\rightarrow 0}-\varepsilon\log\left(\sup_{z\in\partial
B_{\delta}(O_{1})}E_{z}\left(\int_{0}^{\tau_{1}^{\varepsilon}}e^{-\frac{1}{\varepsilon}f\left(X_{s}^{\varepsilon}\right)}1_{A}\left(X_{s}^{\varepsilon}\right)ds\right)^{2}\right)$
for a given continuous function $f:M\rightarrow\mathbb{R}$ and compact set
$A\subset M.$ We now state the main results of the subsection. Recall that
$h_{1}=\min_{\ell\in L\setminus\\{1\\}}V\left(O_{1},O_{\ell}\right)$,
$S_{1}^{\varepsilon}\doteq\int_{0}^{\tau_{1}^{\varepsilon}}e^{-\frac{1}{\varepsilon}f\left(X_{s}^{\varepsilon}\right)}1_{A}\left(X_{s}^{\varepsilon}\right)ds$
and $W\left(O_{j}\right)\doteq\min_{g\in
G\left(j\right)}[\sum_{\left(m\rightarrow n\right)\in
g}V\left(O_{m},O_{n}\right)]$, as well as the definitions in (7.8).
###### Lemma 7.21.
Given a compact set $A\subset M,$ a continuous function
$f:M\rightarrow\mathbb{R}$ and $\eta>0,$ there exists $\delta_{0}\in(0,1),$
such that for any $\delta\in(0,\delta_{0})$
$\liminf_{\varepsilon\rightarrow 0}-\varepsilon\log\left[\sup_{z\in\partial
B_{\delta}(O_{1})}E_{z}S_{1}^{\varepsilon}\right]\geq\min_{j\in
L}\left\\{\inf_{x\in
A}\left[f\left(x\right)+V\left(O_{j},x\right)\right]+W\left(O_{j}\right)\right\\}-W\left(O_{1}\right)-h_{1}-\eta.$
###### Proof.
Recall that by Lemma 7.18, we have shown that for the given $\eta,$ there
exists $\delta_{1}\in(0,1),$ such that for any $\delta\in(0,\delta_{1})$ and
$j\in L$
$\liminf_{\varepsilon\rightarrow 0}-\varepsilon\log\left(\sup_{z\in\partial
B_{\delta}(O_{j})}E_{z}I^{\varepsilon}(\tau_{1};f,A)\right)\geq\inf_{x\in
A}\left[f\left(x\right)+V\left(O_{j},x\right)\right]-\frac{\eta}{2}.$
In addition, by Lemma 7.19, we know that for the same $\eta,$ there exists
$\delta_{2}\in(0,1),$ such that for any $\delta\in(0,\delta_{2})$
$\liminf_{\varepsilon\rightarrow
0}-\varepsilon\log\left(\sup\nolimits_{z\in\partial
B_{\delta}(O_{1})}E_{z}N_{j}\right)\geq-\min_{\ell\in
L\setminus\\{1\\}}V\left(O_{1},O_{\ell}\right)+W\left(O_{j}\right)-W\left(O_{1}\right)-{\eta}/{2}.$
Hence for any $\delta\in(0,\delta_{0})$ with
$\delta_{0}=\delta_{1}\wedge\delta_{2},$ we apply Corollary 7.11 to get
$\displaystyle\liminf_{\varepsilon\rightarrow
0}-\varepsilon\log\left[E_{x}I^{\varepsilon}(\tau_{1}^{\varepsilon};f,A)\right]$
$\displaystyle\geq\min_{j\in L}\left\\{\liminf_{\varepsilon\rightarrow
0}-\varepsilon\log\left(\sup_{z\in\partial
B_{\delta}(O_{j})}E_{z}I^{\varepsilon}(\tau_{1};f,A)\right)+\liminf_{\varepsilon\rightarrow
0}-\varepsilon\log\left(\sup_{z\in\partial
B_{\delta}(O_{1})}E_{z}\left(N_{j}\right)\right)\right\\}$
$\displaystyle\geq\min_{j\in L}\left\\{\inf_{x\in
A}\left[f\left(x\right)+V\left(O_{j},x\right)\right]+W\left(O_{j}\right)\right\\}-W\left(O_{1}\right)-h_{1}-\eta,$
where $\tau_{1}^{\varepsilon}$ is the length of a regenerative cycle and
$\tau_{1}$ is the first visit time to a neighborhood of one of the equilibrium
points after being a certain distance away from all of them. ∎
###### Remark 7.22.
According to Remark 7.12 and using the same argument as in Lemma 7.21, we can
find that given a compact set $A\subset M,$ a continuous function
$f:M\rightarrow\mathbb{R}$ and $\eta>0,$ there exists $\delta_{0}\in(0,1),$
such that for any $\delta\in(0,\delta_{0})$
$\displaystyle\liminf_{\varepsilon\rightarrow
0}-\varepsilon\log\left[\sup_{z\in\partial
B_{\delta}(O_{1})}E_{z}I^{\varepsilon}(\sigma_{0}^{\varepsilon},\tau_{1}^{\varepsilon};f,A)\right]$
$\displaystyle\qquad\geq\min_{j\in L\setminus\\{1\\}}\left\\{\inf_{x\in
A}\left[f\left(x\right)+V\left(O_{j},x\right)\right]+W\left(O_{j}\right)\right\\}-W\left(O_{1}\right)-h_{1}-\eta.$
###### Lemma 7.23.
Given a compact set $A\subset M,$ a continuous function
$f:M\rightarrow\mathbb{R}$ and $\eta>0,$ there exists $\delta_{0}\in(0,1),$
such that for any $\delta\in(0,\delta_{0})$
$\liminf_{\varepsilon\rightarrow
0}-\varepsilon\log\left[\sup\nolimits_{z\in\partial
B_{\delta}(O_{1})}E_{z}(S_{1}^{\varepsilon})^{2}\right]\geq\min_{j\in
L}\left(R_{j}^{(1)}\wedge R_{j}^{(2)}\right)-h_{1}-\eta,$
where
$S_{1}^{\varepsilon}\doteq\int_{0}^{\tau_{1}^{\varepsilon}}e^{-\frac{1}{\varepsilon}f\left(X_{s}^{\varepsilon}\right)}1_{A}\left(X_{s}^{\varepsilon}\right)ds$
and $h_{1}=\min_{\ell\in L\setminus\\{1\\}}V\left(O_{1},O_{\ell}\right)$, and
$R_{j}^{(1)}\doteq\inf_{x\in
A}\left[2f\left(x\right)+V\left(O_{j},x\right)\right]+W\left(O_{j}\right)-W\left(O_{1}\right)$
$R_{1}^{(2)}\doteq 2\inf_{x\in
A}\left[f\left(x\right)+V\left(O_{1},x\right)\right]-h_{1}$
and for $j\in L\setminus\\{1\\}$
$R_{j}^{(2)}\doteq 2\inf_{x\in
A}\left[f\left(x\right)+V\left(O_{j},x\right)\right]+W\left(O_{j}\right)-2W\left(O_{1}\right)+W(O_{1}\cup
O_{j}).$
###### Proof.
Following a similar argument as for the proof of Lemma 7.21, given any
$\eta>0,$ owing to Lemmas 7.18, 7.19 and 7.20, there exists
$\delta_{0}\in(0,1)$ such that for any $\delta\in(0,\delta_{0})$ and for any
$j\in L$
$\liminf_{\varepsilon\rightarrow 0}-\varepsilon\log\left(\sup_{z\in\partial
B_{\delta}(O_{j})}E_{z}I^{\varepsilon}(\tau_{1};f,A)\right)\geq\inf_{x\in
A}\left[f\left(x\right)+V\left(O_{j},x\right)\right]-\frac{\eta}{4},$
$\liminf_{\varepsilon\rightarrow 0}-\varepsilon\log\left(\sup_{z\in\partial
B_{\delta}(O_{j})}E_{z}I^{\varepsilon}(\tau_{1};f,A)^{2}\right)\geq\inf_{x\in
A}\left[2f\left(x\right)+V\left(O_{j},x\right)\right]-\frac{\eta}{4},$
$\liminf_{\varepsilon\rightarrow 0}-\varepsilon\log\left(\sup_{z\in\partial
B_{\delta}(O_{1})}E_{z}N_{j}\right)\geq-
h_{1}+W\left(O_{j}\right)-W\left(O_{1}\right)-\frac{\eta}{4},$
$\liminf_{\varepsilon\rightarrow
0}-\varepsilon\log\left(\sum_{\ell=1}^{\infty}\sup_{z\in\partial
B_{\delta}(O_{1})}P_{z}\left(\ell\leq N_{1}\right)\right)\geq-
h_{1}-\frac{\eta}{4},$
and for any $j\in L\setminus\\{1\\}$,
$\liminf_{\varepsilon\rightarrow
0}-\varepsilon\log\left({\textstyle\sum_{\ell=1}^{\infty}}\sup_{z\in\partial
B_{\delta}(O_{j})}P_{z}\left(\ell\leq N_{j}\right)\right)\geq W(O_{1}\cup
O_{j})-W\left(O_{1}\right)-\frac{\eta}{4}.$
Hence for any $\delta\in(0,\delta_{0})$ we apply Corollary 7.11 to get
$\liminf_{\varepsilon\rightarrow 0}-\varepsilon\log\left(\sup_{z\in\partial
B_{\delta}(O_{1})}E_{z}\left(S_{1}^{\varepsilon}\right)^{2}\right)\geq\min_{j\in
L}\left(\hat{R}_{j}^{(1)}\wedge\hat{R}_{j}^{(2)}\right),$
where
$\displaystyle\hat{R}_{j}^{(1)}$
$\displaystyle\doteq\liminf_{\varepsilon\rightarrow
0}-\varepsilon\log\left(\sup_{z\in\partial
B_{\delta}(O_{j})}E_{z}I^{\varepsilon}(\tau_{1};f,A)^{2}\right)+\liminf_{\varepsilon\rightarrow
0}-\varepsilon\log\left(\sup_{z\in\partial
B_{\delta}(O_{1})}E_{z}N_{j}\right)$ $\displaystyle\geq\inf_{x\in
A}\left[2f\left(x\right)+V\left(O_{j},x\right)\right]+W\left(O_{j}\right)-W\left(O_{1}\right)-h_{1}-\eta=R_{j}^{(1)}-h_{1}-\eta$
and
$\displaystyle\hat{R}_{1}^{(2)}$ $\displaystyle\doteq
2\liminf_{\varepsilon\rightarrow 0}-\varepsilon\log\left(\sup_{z\in\partial
B_{\delta}(O_{1})}E_{z}I^{\varepsilon}(\tau_{1};f,A)\right)$
$\displaystyle\quad+\liminf_{\varepsilon\rightarrow
0}-\varepsilon\log\left(\sup_{z\in\partial
B_{\delta}(O_{1})}E_{z}N_{1}\right)+\liminf_{\varepsilon\rightarrow
0}-\varepsilon\log\left({\textstyle\sum_{\ell=1}^{\infty}}\sup_{z\in\partial
B_{\delta}(O_{1})}P_{z}\left(\ell\leq N_{1}\right)\right)$ $\displaystyle\geq
2\left(\inf_{x\in
A}\left[f\left(x\right)+V\left(O_{1},x\right)\right]-\frac{\eta}{4}\right)+\left(-h_{1}-\frac{\eta}{4}\right)+\left(-h_{1}-\frac{\eta}{4}\right)$
$\displaystyle=2\inf_{x\in
A}\left[f\left(x\right)+V\left(O_{1},x\right)\right]-2h_{1}-\eta=R_{1}^{(2)}-h_{1}-\eta$
and for $j\in L\setminus\\{1\\}$
$\displaystyle\hat{R}_{j}^{(2)}$ $\displaystyle\doteq
2\liminf_{\varepsilon\rightarrow 0}-\varepsilon\log\left(\sup_{z\in\partial
B_{\delta}(O_{j})}E_{z}I^{\varepsilon}(\tau_{1};f,A)\right)$
$\displaystyle\quad+\liminf_{\varepsilon\rightarrow
0}-\varepsilon\log\left(\sup_{z\in\partial
B_{\delta}(O_{1})}E_{z}N_{j}\right)+\liminf_{\varepsilon\rightarrow
0}-\varepsilon\log\left({\textstyle\sum_{\ell=1}^{\infty}}\sup_{z\in\partial
B_{\delta}(O_{j})}P_{z}\left(\ell\leq N_{j}\right)\right)$ $\displaystyle\geq
2\left(\inf_{x\in
A}\left[f\left(x\right)+V\left(O_{j},x\right)\right]-\frac{\eta}{4}\right)+\left(-h_{1}+W\left(O_{j}\right)-W\left(O_{1}\right)-\frac{\eta}{4}\right)$
$\displaystyle\quad+\left(W(O_{1}\cup
O_{j})-W\left(O_{1}\right)-\frac{\eta}{4}\right)$ $\displaystyle=2\inf_{x\in
A}\left[f\left(x\right)+V\left(O_{j},x\right)\right]+W\left(O_{j}\right)-2W\left(O_{1}\right)+W(O_{1}\cup
O_{j})-h_{1}-\eta$ $\displaystyle=R_{j}^{(2)}-h_{1}-\eta.$
∎
### 7.3 Asymptotics of moments of $\hat{S}_{1}^{\varepsilon}$
Recall that
$\hat{S}_{n}^{\varepsilon}\doteq\int_{\hat{\tau}_{n-1}^{\varepsilon}}^{\hat{\tau}_{n}^{\varepsilon}}e^{-\frac{1}{\varepsilon}f\left(X_{t}^{\varepsilon}\right)}1_{A}\left(X_{t}^{\varepsilon}\right)dt,$
where $\hat{\tau}_{i}^{\varepsilon}$ is a multicycle defined according to
(6.4) and with $\\{\mathbf{M}^{\varepsilon}_{i}\\}_{i\in\mathbb{N}}$ being a
sequence of independent and geometrically distributed random variables with
parameter $e^{-m/\varepsilon}$ for some $m>0$ such that $m+h_{1}>w$. Moreover,
$\\{\mathbf{M}^{\varepsilon}_{i}\\}$ is also independent of
$\\{\tau^{\varepsilon}_{n}\\}$. Using the independence of
$\\{\mathbf{M}^{\varepsilon}_{i}\\}$ and $\\{\tau^{\varepsilon}_{n}\\}$, and
the fact that the cycle increments $\\{\tau^{\varepsilon}_{n}-\tau^{\varepsilon}_{n-1}\\}$ and $\\{S_{n}^{\varepsilon}\\}$
are both iid under $P_{\lambda^{\varepsilon}}$, we find that
$\\{\hat{S}_{n}^{\varepsilon}\\}$ is also iid under
$P_{\lambda^{\varepsilon}}$ and
$\displaystyle
E_{\lambda^{\varepsilon}}\hat{S}_{1}^{\varepsilon}=E_{\lambda^{\varepsilon}}\mathbf{M}^{\varepsilon}_{1}\cdot
E_{\lambda^{\varepsilon}}S^{\varepsilon}_{1}$ (7.11)
and
$\displaystyle\mathrm{Var}_{\lambda^{\varepsilon}}\hat{S}_{1}^{\varepsilon}$
$\displaystyle=E_{\lambda^{\varepsilon}}\mathbf{M}^{\varepsilon}_{1}\cdot\mathrm{Var}_{\lambda^{\varepsilon}}(S^{\varepsilon}_{1})+\mathrm{Var}_{\lambda^{\varepsilon}}(\mathbf{M}^{\varepsilon}_{1})\cdot(E_{\lambda^{\varepsilon}}S^{\varepsilon}_{1})^{2}$
$\displaystyle\leq E_{\lambda^{\varepsilon}}\mathbf{M}^{\varepsilon}_{1}\cdot
E_{\lambda^{\varepsilon}}(S^{\varepsilon}_{1})^{2}+\mathrm{Var}_{\lambda^{\varepsilon}}(\mathbf{M}^{\varepsilon}_{1})\cdot(E_{\lambda^{\varepsilon}}S^{\varepsilon}_{1})^{2}.$
(7.12)
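The equality in (7.12) is the conditional variance (law of total variance) formula for the random sum $\hat{S}_{1}^{\varepsilon}=\sum_{i=1}^{\mathbf{M}^{\varepsilon}_{1}}S_{i}^{\varepsilon}$, which is how $\hat{S}_{1}^{\varepsilon}$ decomposes under (6.4); written out,

$\mathrm{Var}_{\lambda^{\varepsilon}}\hat{S}_{1}^{\varepsilon}=E_{\lambda^{\varepsilon}}\left[\mathrm{Var}_{\lambda^{\varepsilon}}(\hat{S}_{1}^{\varepsilon}\mid\mathbf{M}^{\varepsilon}_{1})\right]+\mathrm{Var}_{\lambda^{\varepsilon}}\left(E_{\lambda^{\varepsilon}}[\hat{S}_{1}^{\varepsilon}\mid\mathbf{M}^{\varepsilon}_{1}]\right)=E_{\lambda^{\varepsilon}}\mathbf{M}^{\varepsilon}_{1}\cdot\mathrm{Var}_{\lambda^{\varepsilon}}(S^{\varepsilon}_{1})+\mathrm{Var}_{\lambda^{\varepsilon}}(\mathbf{M}^{\varepsilon}_{1})\cdot(E_{\lambda^{\varepsilon}}S^{\varepsilon}_{1})^{2}.$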
On the other hand, since $\mathbf{M}^{\varepsilon}_{1}$ is geometrically
distributed with parameter $e^{-m/\varepsilon}$, this gives that
$\displaystyle
E_{\lambda^{\varepsilon}}\mathbf{M}^{\varepsilon}_{1}=e^{\frac{m}{\varepsilon}}\text{
and
}\mathrm{Var}_{\lambda^{\varepsilon}}(\mathbf{M}^{\varepsilon}_{1})=e^{\frac{2m}{\varepsilon}}(1-e^{\frac{-m}{\varepsilon}}).$
(7.13)
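As a quick numerical sanity check of (7.13), one can compare these closed forms against the geometric law on $\{1,2,\ldots\}$; the values of $m$ and $\varepsilon$ below are arbitrary illustrative choices:

```python
import numpy as np
from scipy import stats

m, eps = 1.0, 0.25                       # illustrative values only
p = np.exp(-m / eps)                     # success parameter of M_1^eps

M = stats.geom(p)                        # geometric law on {1, 2, ...}
mean_713 = np.exp(m / eps)                               # E M from (7.13)
var_713 = np.exp(2 * m / eps) * (1 - np.exp(-m / eps))   # Var M from (7.13)

assert np.isclose(M.mean(), mean_713)    # 1/p = e^{m/eps}
assert np.isclose(M.var(), var_713)      # (1-p)/p^2 = e^{2m/eps}(1-e^{-m/eps})
```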
Therefore, by combining (7.11), (7.12) and (7.13) with Lemma 7.21 and Lemma
7.23, we have the following two lemmas.
###### Lemma 7.24.
Given a compact set $A\subset M,$ a continuous function
$f:M\rightarrow\mathbb{R}$ and $\eta>0,$ there exists $\delta_{0}\in(0,1),$
such that for any $\delta\in(0,\delta_{0})$
$\displaystyle\liminf_{\varepsilon\rightarrow 0}-\varepsilon\log
E_{\lambda^{\varepsilon}}\hat{S}_{1}^{\varepsilon}$
$\displaystyle\qquad\geq\min_{j\in L}\left\\{\inf_{x\in
A}\left[f\left(x\right)+V\left(O_{j},x\right)\right]+W\left(O_{j}\right)\right\\}-W\left(O_{1}\right)-(m+h_{1})-\eta.$
###### Lemma 7.25.
Given a compact set $A\subset M,$ a continuous function
$f:M\rightarrow\mathbb{R}$ and $\eta>0,$ there exists $\delta_{0}\in(0,1),$
such that for any $\delta\in(0,\delta_{0})$
$\liminf_{\varepsilon\rightarrow
0}-\varepsilon\log\mathrm{Var}_{\lambda^{\varepsilon}}(\hat{S}_{1}^{\varepsilon})\geq\min_{j\in
L}\left(R_{j}^{(1)}\wedge R_{j}^{(2)}\wedge
R_{j}^{(3,m)}\right)-(m+h_{1})-\eta,$
where $R_{j}^{(1)}$ and $R_{j}^{(2)}$ are defined as in Lemma 7.23, and
$R_{j}^{(3,m)}\doteq 2\inf_{x\in
A}\left[f\left(x\right)+V\left(O_{j},x\right)\right]+2W\left(O_{j}\right)-2W\left(O_{1}\right)-(m+h_{1}).$
Later on we will optimize on $m$ to obtain the largest bound from below. This
will require that we consider first $m>w-h_{1}$, so that as shown in the next
section $N^{\varepsilon}(T^{\varepsilon})$ can be suitably approximated in
terms of a Poisson distribution, and then sending $m\downarrow w-h_{1}$.
## 8 Asymptotics of Moments of $N^{\varepsilon}(T^{\varepsilon})$ and
$\hat{N}^{\varepsilon}(T^{\varepsilon})$
Recall that the number of single cycles in the time interval
$[0,T^{\varepsilon}]$ plus one is defined as
$N^{\varepsilon}\left(T^{\varepsilon}\right)\doteq\inf\left\\{n\in\mathbb{N}:\tau_{n}^{\varepsilon}>T^{\varepsilon}\right\\},$
where the $\tau_{n}^{\varepsilon}$ are the return times to $B_{\delta}(O_{1})$
after visiting the $\delta$-neighborhood of at least one equilibrium
point other than $O_{1}.$ In addition, $\lambda^{\varepsilon}$ is the unique
invariant measure of
$\\{Z_{n}^{\varepsilon}\\}_{n}=\\{X_{\tau_{n}^{\varepsilon}}^{\varepsilon}\\}_{n}.$
The number of multicycles in the time interval $[0,T^{\varepsilon}]$ plus one
is defined as
$\hat{N}^{\varepsilon}\left(T^{\varepsilon}\right)\doteq\inf\left\\{n\in\mathbb{N}:\hat{\tau}_{n}^{\varepsilon}>T^{\varepsilon}\right\\},$
where $\hat{\tau}^{\varepsilon}_{i}$ are defined as in (6.4).
In this section, we will find the logarithmic asymptotics of the expected
value and the variance of $N^{\varepsilon}\left(T^{\varepsilon}\right)$ with
$T^{\varepsilon}=e^{\frac{1}{\varepsilon}c}$ for some $c>h_{1}$ in Lemma 8.2
and Lemma 8.4 under the assumption that $h_{1}>w$ (i.e., single cycle case),
and the analogous quantities for
$\hat{N}^{\varepsilon}\left(T^{\varepsilon}\right)$ with
$T^{\varepsilon}=e^{\frac{1}{\varepsilon}c}$ for some $c>w$ in Lemma 8.19 and
Lemma 8.21 under the assumption that $w\geq h_{1}$ (i.e., multicycle case).
###### Remark 8.1.
While the proofs of these asymptotic results are quite detailed, it is
essential that we obtain estimates good enough for a relatively precise
comparison of the expected value and the variance of
$N^{\varepsilon}\left(T^{\varepsilon}\right)$, and likewise for
$\hat{N}^{\varepsilon}\left(T^{\varepsilon}\right)$. For this, the key result
needed is the characterization of
$N^{\varepsilon}\left(T^{\varepsilon}\right)$ (and
$\hat{N}^{\varepsilon}\left(T^{\varepsilon}\right)$) as having an
approximately Poisson distribution. These follow by exploiting the
asymptotically exponential character of $\tau_{n}^{\varepsilon}$ (and
$\hat{\tau}_{n}^{\varepsilon}$), together with some uniform integrability
properties.
Lemmas 8.2 and 8.4 below are proved in Section 8.3.
###### Lemma 8.2.
If $h_{1}>w$ and $T^{\varepsilon}=e^{\frac{1}{\varepsilon}c}$ for some
$c>h_{1}$, then there exists $\delta_{0}\in(0,1)$ such that for any
$\delta\in(0,\delta_{0})$
$\liminf_{\varepsilon\rightarrow
0}-\varepsilon\log\left|\frac{E_{\lambda^{\varepsilon}}\left(N^{\varepsilon}\left(T^{\varepsilon}\right)\right)}{T^{\varepsilon}}-\frac{1}{E_{\lambda^{\varepsilon}}\tau_{1}^{\varepsilon}}\right|\geq
c.$
###### Corollary 8.3.
If $h_{1}>w$ and $T^{\varepsilon}=e^{\frac{1}{\varepsilon}c}$ for some
$c>h_{1}$, then there exists $\delta_{0}\in(0,1)$ such that for any
$\delta\in(0,\delta_{0})$
$\liminf_{\varepsilon\rightarrow
0}-\varepsilon\log\frac{E_{\lambda^{\varepsilon}}\left(N^{\varepsilon}\left(T^{\varepsilon}\right)\right)}{T^{\varepsilon}}\geq\varkappa_{\delta},$
where $\varkappa_{\delta}\doteq\min_{y\in\cup_{k\in L\setminus\\{1\\}}\partial
B_{\delta}(O_{k})}V(O_{1},y)$.
###### Lemma 8.4.
If $h_{1}>w$ and $T^{\varepsilon}=e^{\frac{1}{\varepsilon}c}$ for some
$c>h_{1}$, then for any $\eta>0,$ there exists $\delta_{0}\in(0,1)$ such that
for any $\delta\in(0,\delta_{0})$
$\liminf_{\varepsilon\rightarrow
0}-\varepsilon\log\frac{\mathrm{Var}_{\lambda^{\varepsilon}}\left(N^{\varepsilon}\left(T^{\varepsilon}\right)\right)}{T^{\varepsilon}}\geq
h_{1}-\eta.$
Before proceeding, we mention a result from [11] and define some notation
which will be used in this section. Results in Section 5 and Section 10 of
[11, Chapter XI] say that for any $t>0,$ the first and second moment of
$N^{\varepsilon}\left(t\right)$ can be represented as
$E_{\lambda^{\varepsilon}}\left(N^{\varepsilon}\left(t\right)\right)=\sum\nolimits_{n=0}^{\infty}P_{\lambda^{\varepsilon}}\left(\tau_{n}^{\varepsilon}\leq
t\right)\text{ and
}E_{\lambda^{\varepsilon}}\left(N^{\varepsilon}\left(t\right)\right)^{2}=\sum\nolimits_{n=0}^{\infty}\left(2n+1\right)P_{\lambda^{\varepsilon}}\left(\tau_{n}^{\varepsilon}\leq
t\right).$ (8.1)
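The identities in (8.1) are classical renewal-theoretic facts and are easy to check numerically in a toy model. In the sketch below the cycle lengths are taken iid Exp(1), an assumption made purely for illustration, so that $\tau_{n}\sim\mathrm{Gamma}(n,1)$ and the expected count is $1+t$:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Toy check of E N(t) = sum_{n>=0} P(tau_n <= t) with iid Exp(1) cycles.
t, n_max = 5.0, 200
# n = 0 term equals 1 (tau_0 = 0); tau_n ~ Gamma(n, 1) for n >= 1.
series = 1.0 + sum(stats.gamma(n).cdf(t) for n in range(1, n_max))

tau = np.cumsum(rng.exponential(1.0, size=(20_000, n_max)), axis=1)
N = 1 + (tau <= t).sum(axis=1)           # inf{n : tau_n > t}
print(series, "~", N.mean())             # both are ~ 1 + t = 6
```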
Let $\Gamma^{\varepsilon}\doteq
T^{\varepsilon}/E_{\lambda^{\varepsilon}}\tau_{1}^{\varepsilon}$ and
$\gamma^{\varepsilon}\doteq\left(\Gamma^{\varepsilon}\right)^{-\ell}$ with
some $\ell\in(0,1)$ which will be chosen later. Intuitively,
$\Gamma^{\varepsilon}$ is the typical number of regenerative cycles in
$[0,T^{\varepsilon}]$ since $E_{\lambda^{\varepsilon}}\tau_{1}^{\varepsilon}$
is the expected length of one regenerative cycle. To simplify notation, we
pretend that $\left(1+2\gamma^{\varepsilon}\right)\Gamma^{\varepsilon}$ and
$\left(1-2\gamma^{\varepsilon}\right)\Gamma^{\varepsilon}$ are positive
integers so that we can divide
$E_{\lambda^{\varepsilon}}\left(N^{\varepsilon}\left(T^{\varepsilon}\right)\right)$
into three partial sums which are
$\mathfrak{P}_{1}\doteq\sum\nolimits_{n=\left(1+2\gamma^{\varepsilon}\right)\Gamma^{\varepsilon}+1}^{\infty}P_{\lambda^{\varepsilon}}\left(\tau_{n}^{\varepsilon}\leq
T^{\varepsilon}\right),\text{
}\mathfrak{P}_{2}\doteq\sum\nolimits_{n=\left(1-2\gamma^{\varepsilon}\right)\Gamma^{\varepsilon}}^{\left(1+2\gamma^{\varepsilon}\right)\Gamma^{\varepsilon}}\text{
}P_{\lambda^{\varepsilon}}\left(\tau_{n}^{\varepsilon}\leq
T^{\varepsilon}\right)\text{ }$
and
$\mathfrak{P}_{3}\doteq\sum\nolimits_{n=0}^{\left(1-2\gamma^{\varepsilon}\right)\Gamma^{\varepsilon}-1}P_{\lambda^{\varepsilon}}\left(\tau_{n}^{\varepsilon}\leq
T^{\varepsilon}\right).$ (8.2)
Similarly, we divide
$E_{\lambda^{\varepsilon}}\left(N^{\varepsilon}\left(T^{\varepsilon}\right)\right)^{2}$
into
$\mathfrak{R}_{1}\doteq\sum_{n=\left(1+2\gamma^{\varepsilon}\right)\Gamma^{\varepsilon}+1}^{\infty}\left(2n+1\right)P_{\lambda^{\varepsilon}}\left(\tau_{n}^{\varepsilon}\leq
T^{\varepsilon}\right),\text{
}\mathfrak{R}_{2}\doteq\sum_{n=\left(1-2\gamma^{\varepsilon}\right)\Gamma^{\varepsilon}}^{\left(1+2\gamma^{\varepsilon}\right)\Gamma^{\varepsilon}}\text{
}\left(2n+1\right)P_{\lambda^{\varepsilon}}\left(\tau_{n}^{\varepsilon}\leq
T^{\varepsilon}\right)\text{ }$
and
$\mathfrak{R}_{3}\doteq\sum\nolimits_{n=0}^{\left(1-2\gamma^{\varepsilon}\right)\Gamma^{\varepsilon}-1}\left(2n+1\right)P_{\lambda^{\varepsilon}}\left(\tau_{n}^{\varepsilon}\leq
T^{\varepsilon}\right).$ (8.3)
The next step is to find upper bounds for these partial sums, and these bounds
will help us to find suitable lower bounds for the logarithmic asymptotics of
$E_{\lambda^{\varepsilon}}\left(N^{\varepsilon}\left(T^{\varepsilon}\right)\right)$
and
Var${}_{\lambda^{\varepsilon}}\left(N^{\varepsilon}\left(T^{\varepsilon}\right)\right)$.
Before looking into the upper bound for partial sums, we establish some
properties.
###### Theorem 8.5.
If $h_{1}>w$, then for any $\delta>0$ sufficiently small,
$\lim_{\varepsilon\rightarrow 0}\varepsilon\log
E_{\lambda^{\varepsilon}}\tau_{1}^{\varepsilon}=\varkappa_{\delta}\text{ and
}\tau_{1}^{\varepsilon}/E_{\lambda^{\varepsilon}}\tau_{1}^{\varepsilon}\overset{d}{\rightarrow}\rm{Exp}(1).$
Moreover, there exists $\varepsilon_{0}\in(0,1)$ and a constant $\tilde{c}>0$
such that
$P_{\lambda^{\varepsilon}}\left(\tau_{1}^{\varepsilon}/E_{\lambda^{\varepsilon}}\tau_{1}^{\varepsilon}>t\right)\leq
e^{-\tilde{c}t}$
for any $t>0$ and any $\varepsilon\in(0,\varepsilon_{0}).$
###### Remark 8.6.
For any $\delta>0,$ $\varkappa_{\delta}\leq h_{1}.$
The proof of Theorem 8.5 will be given in Section 10. In that section, we will
first prove an analogous result for the exit time (or first visiting time to
other equilibrium points to be more precise), and then show how one can extend
those results to the return time. The proof of the following lemma is
straightforward and hence omitted.
###### Lemma 8.7.
If $h_{1}>w$ and $T^{\varepsilon}=e^{\frac{1}{\varepsilon}c}$ for some
$c>h_{1}$, then for any $\eta>0,$ there exists $\delta_{0}\in(0,1)$ such that
for any $\delta\in(0,\delta_{0})$,
$h_{1}-\eta\geq\lim_{\varepsilon\rightarrow
0}-\varepsilon\log\Gamma^{\varepsilon}\geq h_{1}-c-\eta.$
###### Lemma 8.8.
Define
$\mathcal{Z}_{1}^{\varepsilon}\doteq\tau_{1}^{\varepsilon}/E_{\lambda^{\varepsilon}}\tau_{1}^{\varepsilon}.$
Then for any $\delta>0$ sufficiently small,
* •
there exists some $\varepsilon_{0}\in(0,1)$ such that
$\sup_{\varepsilon\in(0,\varepsilon_{0})}E_{\lambda^{\varepsilon}}\left(\mathcal{Z}_{1}^{\varepsilon}\right)^{3}<\infty,$
* •
there exists some $\varepsilon_{0}\in(0,1)$ such that
$\inf_{\varepsilon\in(0,\varepsilon_{0})}\mathrm{Var}_{\lambda^{\varepsilon}}(\mathcal{Z}_{1}^{\varepsilon})>0$
and
$E_{\lambda^{\varepsilon}}\left(\mathcal{Z}_{1}^{\varepsilon}\right)^{2}=E_{\lambda^{\varepsilon}}\left(\tau_{1}^{\varepsilon}\right)^{2}/\left(E_{\lambda^{\varepsilon}}\tau_{1}^{\varepsilon}\right)^{2}\rightarrow
2$ as $\varepsilon\rightarrow 0.$
###### Proof.
For the first part, we use Theorem 8.5 to find that there exists
$\varepsilon_{0}\in(0,1)$ and a constant $\tilde{c}>0$ such that
$P_{\lambda^{\varepsilon}}\left(\mathcal{Z}_{1}^{\varepsilon}>t\right)=P_{\lambda^{\varepsilon}}\left(\tau_{1}^{\varepsilon}/E_{\lambda^{\varepsilon}}\tau_{1}^{\varepsilon}>t\right)\leq
e^{-\tilde{c}t}$
for any $t>0$ and any $\varepsilon\in(0,\varepsilon_{0}).$ Therefore, for
$\varepsilon\in(0,\varepsilon_{0})$
$E_{\lambda^{\varepsilon}}\left(\mathcal{Z}_{1}^{\varepsilon}\right)^{3}=3\int_{0}^{\infty}t^{2}P_{\lambda^{\varepsilon}}(\mathcal{Z}_{1}^{\varepsilon}>t)dt\leq
3\int_{0}^{\infty}t^{2}e^{-\tilde{c}t}dt<\infty.$
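The first equality in this display is the standard tail representation of moments, obtained from Fubini's theorem with the substitution $s=t^{3}$:

$E_{\lambda^{\varepsilon}}\left(\mathcal{Z}_{1}^{\varepsilon}\right)^{3}=\int_{0}^{\infty}P_{\lambda^{\varepsilon}}\left(\left(\mathcal{Z}_{1}^{\varepsilon}\right)^{3}>s\right)ds=\int_{0}^{\infty}3t^{2}P_{\lambda^{\varepsilon}}\left(\mathcal{Z}_{1}^{\varepsilon}>t\right)dt.$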
For the second assertion, since
$\sup_{0<\varepsilon<\varepsilon_{0}}E_{\lambda^{\varepsilon}}\left(\mathcal{Z}_{1}^{\varepsilon}\right)^{3}<\infty,$
the families
$\\{\left(\mathcal{Z}_{1}^{\varepsilon}\right)^{2}\\}_{0<\varepsilon<\varepsilon_{0}}$
and $\\{\mathcal{Z}_{1}^{\varepsilon}\\}_{0<\varepsilon<\varepsilon_{0}}$ are
both uniformly integrable. Moreover, because
$\mathcal{Z}_{1}^{\varepsilon}\overset{d}{\rightarrow}$ Exp$(1)$ as
$\varepsilon\rightarrow 0$ from Theorem 8.5 and since for $X\overset{d}{=}$
Exp$(1),$ $EX=1$ and $EX^{2}=2,$ we obtain
$E_{\lambda^{\varepsilon}}\left(\tau_{1}^{\varepsilon}/E_{\lambda^{\varepsilon}}\tau_{1}^{\varepsilon}\right)^{2}=E_{\lambda^{\varepsilon}}\left(\mathcal{Z}_{1}^{\varepsilon}\right)^{2}\rightarrow
2\text{ and }E_{\lambda^{\varepsilon}}\mathcal{Z}_{1}^{\varepsilon}\rightarrow
1.$
as $\varepsilon\rightarrow 0.$ This implies
Var${}_{\lambda^{\varepsilon}}(\mathcal{Z}_{1}^{\varepsilon})\rightarrow 1$ as
$\varepsilon\rightarrow 0.$ Obviously, there exists some
$\varepsilon_{0}\in(0,1)$ such that
$\inf\nolimits_{\varepsilon\in(0,\varepsilon_{0})}\mathrm{Var}_{\lambda^{\varepsilon}}(\mathcal{Z}_{1}^{\varepsilon})\geq
1/2>0.$ This completes the proof. ∎
###### Remark 8.9.
Throughout the rest of this section, we will use $C$ to denote a constant in
$(0,\infty)$ which is independent of $\varepsilon$ but whose value may change
from use to use.
### 8.1 Chernoff bound
In this subsection we will provide upper bounds for
$\mathfrak{P}_{1}\doteq\sum_{n=\left(1+2\gamma^{\varepsilon}\right)\Gamma^{\varepsilon}+1}^{\infty}P_{\lambda^{\varepsilon}}\left(\tau_{n}^{\varepsilon}\leq
T^{\varepsilon}\right)\text{ and
}\mathfrak{R}_{1}\doteq\sum_{n=\left(1+2\gamma^{\varepsilon}\right)\Gamma^{\varepsilon}+1}^{\infty}\left(2n+1\right)P_{\lambda^{\varepsilon}}\left(\tau_{n}^{\varepsilon}\leq
T^{\varepsilon}\right)$
via a Chernoff bound. The following result is well known and its proof is
standard.
###### Lemma 8.10 (Chernoff bound).
Let $X_{1},\ldots,X_{n}$ be an iid sequence of random variables. For any
$a\in\mathbb{R}$ and for any $t\in(0,\infty)$
$P\left(X_{1}+\cdots+X_{n}\leq
a\right)\leq\left(Ee^{-tX_{1}}\right)^{n}e^{ta}.$
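As a minimal numerical illustration of the bound, assume $X_{i}\sim\mathrm{Exp}(1)$, so that $Ee^{-tX_{1}}=1/(1+t)$ (this specific law is an assumption of the demo, not of the lemma):

```python
import numpy as np

rng = np.random.default_rng(0)

# P(X_1 + ... + X_n <= a) <= (E e^{-t X_1})^n e^{ta} = (1+t)^{-n} e^{ta}
n, a, t = 50, 30.0, 1.0
sums = rng.exponential(1.0, size=(100_000, n)).sum(axis=1)
empirical = (sums <= a).mean()
chernoff = (1.0 + t) ** (-n) * np.exp(t * a)
print(f"empirical {empirical:.2e} <= bound {chernoff:.2e}")
```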
Recall that $\Gamma^{\varepsilon}\doteq
T^{\varepsilon}/E_{\lambda^{\varepsilon}}\tau_{1}^{\varepsilon}$ and
$\gamma^{\varepsilon}\doteq\left(\Gamma^{\varepsilon}\right)^{-\ell}$ with
some $\ell\in(0,1)$ which will be chosen later.
###### Lemma 8.11.
Given any $\delta>0$ and any $\ell>0,$ there exists $\varepsilon_{0}\in(0,1)$
such that for any $\varepsilon\in(0,\varepsilon_{0})$
$P_{\lambda^{\varepsilon}}\left(\tau_{n}^{\varepsilon}\leq
T^{\varepsilon}\right)\leq e^{-n\left(\Gamma^{\varepsilon}\right)^{-2\ell}}$
for any $n\geq\left(1+2\gamma^{\varepsilon}\right)\Gamma^{\varepsilon}.$ In
addition,
$\mathfrak{P}_{1}\leq
C\left(\Gamma^{\varepsilon}\right)^{2\ell}e^{-\left(\Gamma^{\varepsilon}\right)^{1-2\ell}}\text{
and }\mathfrak{R}_{1}\leq
C\left(\Gamma^{\varepsilon}\right)^{1+2\ell}e^{-\left(\Gamma^{\varepsilon}\right)^{1-2\ell}}+C\left(\Gamma^{\varepsilon}\right)^{4\ell}e^{-\left(\Gamma^{\varepsilon}\right)^{1-2\ell}}.$
###### Proof.
Given $\delta>0$, $\ell>0$ and $\varepsilon\in(0,1),$ we find that for
$n\geq\left(1+2\gamma^{\varepsilon}\right)\Gamma^{\varepsilon}$
$\displaystyle P_{\lambda^{\varepsilon}}\left(\tau_{n}^{\varepsilon}\leq
T^{\varepsilon}\right)$
$\displaystyle=P_{\lambda^{\varepsilon}}\left(\frac{\tau_{1}^{\varepsilon}+\left(\tau_{2}^{\varepsilon}-\tau_{1}^{\varepsilon}\right)+\cdots+\left(\tau_{n}^{\varepsilon}-\tau_{n-1}^{\varepsilon}\right)}{E_{\lambda^{\varepsilon}}\tau_{1}^{\varepsilon}}\leq\Gamma^{\varepsilon}\right)$
$\displaystyle\leq
P_{\lambda^{\varepsilon}}\left(\frac{\tau_{1}^{\varepsilon}+\left(\tau_{2}^{\varepsilon}-\tau_{1}^{\varepsilon}\right)+\cdots+\left(\tau_{n}^{\varepsilon}-\tau_{n-1}^{\varepsilon}\right)}{E_{\lambda^{\varepsilon}}\tau_{1}^{\varepsilon}}\leq\frac{n}{1+2\gamma^{\varepsilon}}\right)$
$\displaystyle\leq\left(E_{\lambda^{\varepsilon}}e^{-\gamma^{\varepsilon}\mathcal{Z}_{1}^{\varepsilon}}\right)^{n}e^{\frac{n\gamma^{\varepsilon}}{1+2\gamma^{\varepsilon}}},$
where
$\mathcal{Z}_{1}^{\varepsilon}=\tau_{1}^{\varepsilon}/E_{\lambda^{\varepsilon}}\tau_{1}^{\varepsilon}.$
We use the fact that
$\\{\tau_{n}^{\varepsilon}-\tau_{n-1}^{\varepsilon}\\}_{n\in\mathbb{N}}$ are
iid and apply Lemma 8.10 (Chernoff bound) with
$a=n/\left(1+2\gamma^{\varepsilon}\right)$ and $t=\gamma^{\varepsilon}$ for
the last inequality. Therefore, in order to verify the first claim, it
suffices to show that
$\left(E_{\lambda^{\varepsilon}}e^{-\gamma^{\varepsilon}\mathcal{Z}_{1}^{\varepsilon}}\right)e^{\frac{\gamma^{\varepsilon}}{1+2\gamma^{\varepsilon}}}\leq
e^{-\left(\gamma^{\varepsilon}\right)^{2}}=e^{-\left(\Gamma^{\varepsilon}\right)^{-2\ell}}.$
We observe that for any $x\geq 0,$ $e^{-x}\leq 1-x+x^{2}/2,$ and this gives
$\displaystyle
E_{\lambda^{\varepsilon}}e^{-\gamma^{\varepsilon}\mathcal{Z}_{1}^{\varepsilon}}\leq
1-E_{\lambda^{\varepsilon}}\left(\gamma^{\varepsilon}\mathcal{Z}_{1}^{\varepsilon}\right)+E_{\lambda^{\varepsilon}}\left(\gamma^{\varepsilon}\mathcal{Z}_{1}^{\varepsilon}\right)^{2}/2=1-\gamma^{\varepsilon}+\left(\gamma^{\varepsilon}\right)^{2}E_{\lambda^{\varepsilon}}\left(\mathcal{Z}_{1}^{\varepsilon}\right)^{2}/2.$
Moreover, since we can apply Lemma 8.8 to find
$E_{\lambda^{\varepsilon}}\left(\mathcal{Z}_{1}^{\varepsilon}\right)^{2}\rightarrow
2$ as $\varepsilon\rightarrow 0,$ there exists $\varepsilon_{0}\in(0,1)$ such
that for any $\varepsilon\in(0,\varepsilon_{0})$,
$E_{\lambda^{\varepsilon}}\left(\mathcal{Z}_{1}^{\varepsilon}\right)^{2}\leq
9/4.$ Thus, for any $\varepsilon\in(0,\varepsilon_{0})$
$\left(E_{\lambda^{\varepsilon}}e^{-\gamma^{\varepsilon}\mathcal{Z}_{1}^{\varepsilon}}\right)e^{\frac{\gamma^{\varepsilon}}{1+2\gamma^{\varepsilon}}}\leq\exp\left\\{\gamma^{\varepsilon}/(1+2\gamma^{\varepsilon})+\log(1-\gamma^{\varepsilon}+(9/8)\left(\gamma^{\varepsilon}\right)^{2})\right\\}.$
Using a Taylor series expansion we find that for all $\left|x\right|<1$
$1/(1+x)=1-x+O\left(x^{2}\right)\text{ and
}\log\left(1+x\right)=x-x^{2}/2+O\left(x^{3}\right),$
which gives
$\displaystyle\gamma^{\varepsilon}/(1+2\gamma^{\varepsilon})+\log(1-\gamma^{\varepsilon}+(9/8)\left(\gamma^{\varepsilon}\right)^{2})$
$\displaystyle\quad=\gamma^{\varepsilon}-2\left(\gamma^{\varepsilon}\right)^{2}+[-\gamma^{\varepsilon}+(9/8)\left(\gamma^{\varepsilon}\right)^{2}]-[-\gamma^{\varepsilon}+(9/8)\left(\gamma^{\varepsilon}\right)^{2}]^{2}/2+O((\gamma^{\varepsilon})^{3})$
$\displaystyle\quad=-(11/8)\left(\gamma^{\varepsilon}\right)^{2}+O(\left(\gamma^{\varepsilon}\right)^{3})\leq-\left(\gamma^{\varepsilon}\right)^{2},$
for all $\varepsilon\in(0,\varepsilon_{0})$. We are done for part 1.
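The Taylor computation above can also be verified symbolically; a minimal sympy sanity check (not part of the proof):

```python
import sympy as sp

g = sp.symbols('gamma', positive=True)
expr = g / (1 + 2 * g) + sp.log(1 - g + sp.Rational(9, 8) * g**2)
# the expansion around gamma = 0 starts at -(11/8) gamma^2
print(sp.series(expr, g, 0, 3))          # -11*gamma**2/8 + O(gamma**3)
```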
For part 2, we use the estimate from part 1 and find
$\sum_{n=\left(1+2\gamma^{\varepsilon}\right)\Gamma^{\varepsilon}+1}^{\infty}P_{\lambda^{\varepsilon}}\left(\tau_{n}^{\varepsilon}\leq
T^{\varepsilon}\right)\leq\sum_{n=\left(1+2\gamma^{\varepsilon}\right)\Gamma^{\varepsilon}+1}^{\infty}e^{-n\left(\gamma^{\varepsilon}\right)^{2}}\leq\frac{e^{-\left(1+2\gamma^{\varepsilon}\right)\Gamma^{\varepsilon}\left(\gamma^{\varepsilon}\right)^{2}}}{1-e^{-\left(\gamma^{\varepsilon}\right)^{2}}}.$
Since $e^{-x}\leq 1-x+x^{2}/2$ for any $x\in\mathbb{R},$ we have $1-e^{-x}\geq
x-x^{2}/2\geq x-x/2=x/2$ for all $x\in(0,1),$ and thus $1/(1-e^{-x})\leq 2/x$
for all $x\in(0,1).$ As a result
$\displaystyle\sum_{n=\left(1+2\gamma^{\varepsilon}\right)\Gamma^{\varepsilon}+1}^{\infty}P_{\lambda^{\varepsilon}}\left(\tau_{n}^{\varepsilon}\leq
T^{\varepsilon}\right)\leq\frac{e^{-\left(1+2\gamma^{\varepsilon}\right)\Gamma^{\varepsilon}\left(\gamma^{\varepsilon}\right)^{2}}}{1-e^{-\left(\gamma^{\varepsilon}\right)^{2}}}\leq\frac{2}{\left(\gamma^{\varepsilon}\right)^{2}}e^{-\left(1+2\gamma^{\varepsilon}\right)\Gamma^{\varepsilon}\left(\gamma^{\varepsilon}\right)^{2}}\leq
2\left(\Gamma^{\varepsilon}\right)^{2\ell}e^{-\left(\Gamma^{\varepsilon}\right)^{1-2\ell}}.$
This completes the proof of part 2.
Finally, for part 3, we use the fact that for $x\in(0,1),$ and for any
$k\in\mathbb{N},$
$\sum\nolimits_{n=k}^{\infty}nx^{n}=kx^{k}(1-x)^{-1}+x^{k+1}(1-x)^{-2}\leq(k(1-x)^{-1}+(1-x)^{-2})x^{k}.$
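The identity follows by differentiating the geometric series; for completeness,

$\sum\nolimits_{n=k}^{\infty}nx^{n}=x\frac{d}{dx}\sum\nolimits_{n=k}^{\infty}x^{n}=x\frac{d}{dx}\frac{x^{k}}{1-x}=\frac{kx^{k}}{1-x}+\frac{x^{k+1}}{(1-x)^{2}}.$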
Using the estimate from part 1 once again, we have
$\displaystyle\sum_{n=\left(1+2\gamma^{\varepsilon}\right)\Gamma^{\varepsilon}+1}^{\infty}nP_{\lambda^{\varepsilon}}\left(\tau_{n}^{\varepsilon}\leq
T^{\varepsilon}\right)$
$\displaystyle\leq\sum_{n=\left(1+2\gamma^{\varepsilon}\right)\Gamma^{\varepsilon}}^{\infty}ne^{-n\left(\gamma^{\varepsilon}\right)^{2}}$
$\displaystyle\leq\left(\frac{\left(1+2\gamma^{\varepsilon}\right)\Gamma^{\varepsilon}}{1-e^{-\left(\gamma^{\varepsilon}\right)^{2}}}+\left(1-e^{-\left(\gamma^{\varepsilon}\right)^{2}}\right)^{-2}\right)e^{-\left(1+2\gamma^{\varepsilon}\right)\Gamma^{\varepsilon}\left(\gamma^{\varepsilon}\right)^{2}}$
$\displaystyle\leq\left(4\left(\Gamma^{\varepsilon}\right)^{1+2\ell}+4\left(\Gamma^{\varepsilon}\right)^{4\ell}\right)e^{-\left(\Gamma^{\varepsilon}\right)^{1-2\ell}}.$
We are done. ∎
###### Remark 8.12.
If $0<\ell<1/2,$ then $\mathfrak{P}_{1}$ and $\mathfrak{R}_{1}$ converge to
$0$ doubly exponentially fast as $\varepsilon\rightarrow 0$ in the sense that
for any $k\in(0,\infty)$
$\liminf_{\varepsilon\rightarrow
0}-\varepsilon\log\left[\left(\Gamma^{\varepsilon}\right)^{k}e^{-\left(\Gamma^{\varepsilon}\right)^{1-2\ell}}\right]=\infty.$
### 8.2 Berry-Esseen bound
In this subsection we will provide upper bounds for
$\mathfrak{P}_{2}\doteq\sum_{n=\left(1-2\gamma^{\varepsilon}\right)\Gamma^{\varepsilon}}^{\left(1+2\gamma^{\varepsilon}\right)\Gamma^{\varepsilon}}P_{\lambda^{\varepsilon}}\left(\tau_{n}^{\varepsilon}\leq
T^{\varepsilon}\right)\text{ and
}\mathfrak{R}_{2}\doteq\sum_{n=\left(1-2\gamma^{\varepsilon}\right)\Gamma^{\varepsilon}}^{\left(1+2\gamma^{\varepsilon}\right)\Gamma^{\varepsilon}}\left(2n+1\right)P_{\lambda^{\varepsilon}}\left(\tau_{n}^{\varepsilon}\leq
T^{\varepsilon}\right)$
via the Berry-Esseen bound.
We first recall that
$\Gamma^{\varepsilon}=T^{\varepsilon}/E_{\lambda^{\varepsilon}}\tau_{1}^{\varepsilon}$.
The following is Theorem 1 in [11, Chapter XVI.5].
###### Theorem 8.13 (Berry-Esseen).
Let $\left\\{X_{n}\right\\}_{n\in\mathbb{N}}$ be independent real-valued
random variables with a common distribution such that
$E\left(X_{1}\right)=0,\text{ }\sigma^{2}\doteq
E\left(X_{1}\right)^{2}>0,\text{ }\rho\doteq E\left|X_{1}\right|^{3}<\infty.$
Then for all $x\in\mathbb{R}$ and $n\in\mathbb{N},$
$\left|P\left(\frac{X_{1}+\cdots+X_{n}}{\sigma\sqrt{n}}\leq
x\right)-\Phi\left(x\right)\right|\leq\frac{3\rho}{\sigma^{3}\sqrt{n}},$
where $\Phi\left(\cdot\right)$ is the distribution function of
$N\left(0,1\right).$
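As an illustration of the theorem, take $X_{i}=\mathrm{Exp}(1)-1$ (a choice made only for this demo), for which $\sigma^{2}=1$ and $\rho=E\left|X_{1}\right|^{3}=12/e-2$:

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)

n, x = 100, 0.5
rho = 12 / np.e - 2                      # E|Exp(1) - 1|^3
sums = (rng.exponential(1.0, size=(100_000, n)) - 1.0).sum(axis=1) / sqrt(n)
Phi = 0.5 * (1 + erf(x / sqrt(2)))       # standard normal CDF at x
gap = abs((sums <= x).mean() - Phi)
print(gap, "<=", 3 * rho / sqrt(n))      # bound ~ 0.72; the gap is far smaller
```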
###### Corollary 8.14.
For any $\varepsilon>0,$ let
$\left\\{X_{n}^{\varepsilon}\right\\}_{n\in\mathbb{N}}$ be independent real-
valued random variables with a common distribution such that
$E\left(X_{1}^{\varepsilon}\right)=0,\text{
}\left(\sigma^{\varepsilon}\right)^{2}\doteq
E\left(X_{1}^{\varepsilon}\right)^{2}>0,\text{ }\rho^{\varepsilon}\doteq
E\left|X_{1}^{\varepsilon}\right|^{3}<\infty.$
Assume that there exists $\varepsilon_{0}\in(0,1)$ such that
$\hat{\rho}\text{
}\doteq\sup\nolimits_{\varepsilon\in(0,\varepsilon_{0})}\rho^{\varepsilon}<\infty\text{
and
}\hat{\sigma}^{2}\doteq\inf\nolimits_{\varepsilon\in(0,\varepsilon_{0})}\left(\sigma^{\varepsilon}\right)^{2}>0.$
Then for all $x\in\mathbb{R},n\in\mathbb{N}$ and
$\varepsilon\in(0,\varepsilon_{0}),$
$\left|P\left(\frac{X_{1}^{\varepsilon}+\cdots+X_{n}^{\varepsilon}}{\sigma^{\varepsilon}\sqrt{n}}\leq
x\right)-\Phi\left(x\right)\right|\leq\frac{3\rho^{\varepsilon}}{\left(\sigma^{\varepsilon}\right)^{3}\sqrt{n}}\leq\frac{3\hat{\rho}}{\hat{\sigma}^{3}\sqrt{n}}.$
###### Lemma 8.15.
Given any $\delta>0$ and any $\ell>0,$ there exists $\varepsilon_{0}\in(0,1)$
such that for any $\varepsilon\in(0,\varepsilon_{0})$ and
$k\in\mathbb{N}_{0}$, $0\leq k\leq 2\gamma^{\varepsilon}\Gamma^{\varepsilon}$
$P_{\lambda^{\varepsilon}}\left(\tau_{\Gamma^{\varepsilon}+k}^{\varepsilon}\leq
T^{\varepsilon}\right)\leq
1-\Phi\left(\frac{k}{\sigma^{\varepsilon}\sqrt{\Gamma^{\varepsilon}+k}}\right)+\frac{3\hat{\rho}}{\hat{\sigma}^{3}\sqrt{\Gamma^{\varepsilon}+k}}$
and
$P_{\lambda^{\varepsilon}}\left(\tau_{\Gamma^{\varepsilon}-k}^{\varepsilon}\leq
T^{\varepsilon}\right)\leq\Phi\left(\frac{k}{\sigma^{\varepsilon}\sqrt{\Gamma^{\varepsilon}-k}}\right)+\frac{3\hat{\rho}}{\hat{\sigma}^{3}\sqrt{\Gamma^{\varepsilon}-k}},$
where $(\sigma^{\varepsilon})^{2}\doteq
E_{\lambda^{\varepsilon}}\left(\mathfrak{X}_{1}^{\varepsilon}\right)^{2},$
$\hat{\rho}\doteq\sup_{\varepsilon\in(0,\varepsilon_{0})}E_{\lambda^{\varepsilon}}\left|\mathfrak{X}_{1}^{\varepsilon}\right|^{3}<\infty$
and
$\hat{\sigma}^{2}\doteq\inf_{\varepsilon\in(0,\varepsilon_{0})}(\sigma^{\varepsilon})^{2}>0$
with
$\mathfrak{X}_{1}^{\varepsilon}\doteq\tau_{1}^{\varepsilon}/E_{\lambda^{\varepsilon}}\tau_{1}^{\varepsilon}-1.$
###### Proof.
For any $n\in\mathbb{N},$ we define
$\mathfrak{X}_{n}^{\varepsilon}\doteq\mathcal{Z}_{n}^{\varepsilon}-E_{\lambda^{\varepsilon}}\mathcal{Z}_{1}^{\varepsilon}$
with
$\mathcal{Z}_{n}^{\varepsilon}\doteq(\tau_{n}^{\varepsilon}-\tau_{n-1}^{\varepsilon})/E_{\lambda^{\varepsilon}}\tau_{1}^{\varepsilon}.$
Obviously, $E_{\lambda^{\varepsilon}}\mathcal{Z}_{n}^{\varepsilon}=1$ and
$E_{\lambda^{\varepsilon}}\mathfrak{X}_{n}^{\varepsilon}=0$ and if we apply
Lemma 8.8, then we find that there exists some $\varepsilon_{0}\in(0,1)$ such
that
$\sup\nolimits_{\varepsilon\in(0,\varepsilon_{0})}E_{\lambda^{\varepsilon}}\left(\mathcal{Z}_{1}^{\varepsilon}\right)^{3}<\infty\text{
and
}\inf\nolimits_{\varepsilon\in(0,\varepsilon_{0})}\mathrm{Var}_{\lambda^{\varepsilon}}(\mathcal{Z}_{1}^{\varepsilon})>0.$
Since $\mathcal{Z}_{1}^{\varepsilon}\geq 0$, Jensen’s inequality implies
$\left(E_{\lambda^{\varepsilon}}\mathcal{Z}_{1}^{\varepsilon}\right)^{3}\leq
E_{\lambda^{\varepsilon}}\left(\mathcal{Z}_{1}^{\varepsilon}\right)^{3}$, and
therefore
$\displaystyle\hat{\rho}\leq
4\sup\nolimits_{\varepsilon\in(0,\varepsilon_{0})}\left(E_{\lambda^{\varepsilon}}\left(\mathcal{Z}_{1}^{\varepsilon}\right)^{3}+\left(E_{\lambda^{\varepsilon}}\mathcal{Z}_{1}^{\varepsilon}\right)^{3}\right)\leq
8\sup\nolimits_{\varepsilon\in(0,\varepsilon_{0})}E_{\lambda^{\varepsilon}}\left(\mathcal{Z}_{1}^{\varepsilon}\right)^{3}<\infty,$
and
$\hat{\sigma}^{2}=\inf\nolimits_{\varepsilon\in(0,\varepsilon_{0})}E_{\lambda^{\varepsilon}}\left(\mathfrak{X}_{1}^{\varepsilon}\right)^{2}=\inf\nolimits_{\varepsilon\in(0,\varepsilon_{0})}\mathrm{Var}_{\lambda^{\varepsilon}}\left(\mathcal{Z}_{1}^{\varepsilon}\right)>0.$
Therefore we can use Corollary 8.14 with the iid sequence
$\left\\{\mathfrak{X}_{n}^{\varepsilon}\right\\}_{n\in\mathbb{N}}$ to find
that for any $k\in\mathbb{N}_{0}$ and $0\leq k\leq
2\gamma^{\varepsilon}\Gamma^{\varepsilon}$
$\displaystyle
P_{\lambda^{\varepsilon}}\left(\tau_{\Gamma^{\varepsilon}+k}^{\varepsilon}\leq
T^{\varepsilon}\right)$
$\displaystyle=P_{\lambda^{\varepsilon}}\left(\mathcal{Z}_{1}^{\varepsilon}+\cdots+\mathcal{Z}_{\Gamma^{\varepsilon}+k}^{\varepsilon}\leq\Gamma^{\varepsilon}\right)$
$\displaystyle=P_{\lambda^{\varepsilon}}\left(\frac{\mathfrak{X}_{1}^{\varepsilon}+\cdots+\mathfrak{X}_{\Gamma^{\varepsilon}+k}^{\varepsilon}}{\sigma^{\varepsilon}\sqrt{\Gamma^{\varepsilon}+k}}\leq\frac{-k}{\sigma^{\varepsilon}\sqrt{\Gamma^{\varepsilon}+k}}\right)$
$\displaystyle\leq
1-\Phi\left(\frac{k}{\sigma^{\varepsilon}\sqrt{\Gamma^{\varepsilon}+k}}\right)+\frac{3\hat{\rho}}{\hat{\sigma}^{3}\sqrt{\Gamma^{\varepsilon}+k}},$
and similarly
$\displaystyle
P_{\lambda^{\varepsilon}}\left(\tau_{\Gamma^{\varepsilon}-k}^{\varepsilon}\leq
T^{\varepsilon}\right)$
$\displaystyle\leq\Phi\left(\frac{k}{\sigma^{\varepsilon}\sqrt{\Gamma^{\varepsilon}-k}}\right)+\frac{3\hat{\rho}}{\hat{\sigma}^{3}\sqrt{\Gamma^{\varepsilon}-k}}.$
∎
###### Lemma 8.16.
Given any $\delta>0$ and any $\ell\in(0,1/2),$ there exists
$\varepsilon_{0}\in(0,1)$ such that for any
$\varepsilon\in(0,\varepsilon_{0})$, $\mathfrak{P}_{2}\leq
C\left(\Gamma^{\varepsilon}\right)^{\frac{1}{2}-\ell}+2\left(\Gamma^{\varepsilon}\right)^{1-\ell}.$
###### Proof.
We rewrite $\mathfrak{P}_{2}$ as
$\displaystyle\mathfrak{P}_{2}=\sum\nolimits_{k=1}^{2\gamma^{\varepsilon}\Gamma^{\varepsilon}}P_{\lambda^{\varepsilon}}\left(\tau_{\Gamma^{\varepsilon}-k}^{\varepsilon}\leq
T^{\varepsilon}\right)+P_{\lambda^{\varepsilon}}\left(\tau_{\Gamma^{\varepsilon}}^{\varepsilon}\leq
T^{\varepsilon}\right)+\sum\nolimits_{k=1}^{2\gamma^{\varepsilon}\Gamma^{\varepsilon}}P_{\lambda^{\varepsilon}}\left(\tau_{\Gamma^{\varepsilon}+k}^{\varepsilon}\leq
T^{\varepsilon}\right).$
Then we use the upper bounds from Lemma 8.15 to get
$\displaystyle\mathfrak{P}_{2}$
$\displaystyle\leq\sum_{k=1}^{2\gamma^{\varepsilon}\Gamma^{\varepsilon}}\left[\Phi\left(\frac{k}{\sigma^{\varepsilon}\sqrt{\Gamma^{\varepsilon}-k}}\right)+\frac{3\hat{\rho}}{\hat{\sigma}^{3}\sqrt{\Gamma^{\varepsilon}-k}}\right]+1+\sum_{k=1}^{2\gamma^{\varepsilon}\Gamma^{\varepsilon}}\left[1-\Phi\left(\frac{k}{\sigma^{\varepsilon}\sqrt{\Gamma^{\varepsilon}+k}}\right)+\frac{3\hat{\rho}}{\hat{\sigma}^{3}\sqrt{\Gamma^{\varepsilon}+k}}\right]$
$\displaystyle\leq\frac{24\hat{\rho}}{\hat{\sigma}^{3}}\gamma^{\varepsilon}\sqrt{\Gamma^{\varepsilon}}+1+2\gamma^{\varepsilon}\Gamma^{\varepsilon}+\sum_{k=1}^{2\gamma^{\varepsilon}\Gamma^{\varepsilon}}\left[\Phi\left(\frac{k}{\sigma^{\varepsilon}\sqrt{\Gamma^{\varepsilon}-k}}\right)-\Phi\left(\frac{k}{\sigma^{\varepsilon}\sqrt{\Gamma^{\varepsilon}+k}}\right)\right].$
The sum of the first three terms is easily bounded above by
$C\left(\Gamma^{\varepsilon}\right)^{\frac{1}{2}-\ell}+2\left(\Gamma^{\varepsilon}\right)^{1-\ell}$.
We will show that the last term is bounded above by a constant to complete the
proof.
To prove this, we observe that for any $k\leq
2\gamma^{\varepsilon}\Gamma^{\varepsilon},$ we may assume
$k\leq\Gamma^{\varepsilon}/2$ by taking $\varepsilon$ sufficiently small. Then
we apply the Mean Value Theorem and find
$\displaystyle\left|\Phi\left(\frac{k}{\sigma^{\varepsilon}\sqrt{\Gamma^{\varepsilon}-k}}\right)-\Phi\left(\frac{k}{\sigma^{\varepsilon}\sqrt{\Gamma^{\varepsilon}+k}}\right)\right|\leq\sup\limits_{x\in\left[\frac{\sqrt{2/3}k}{\sigma^{\varepsilon}\sqrt{\Gamma^{\varepsilon}}},\frac{\sqrt{2}k}{\sigma^{\varepsilon}\sqrt{\Gamma^{\varepsilon}}}\right]}\phi\left(x\right)\cdot\left(\frac{k}{\sigma^{\varepsilon}\sqrt{\Gamma^{\varepsilon}-k}}-\frac{k}{\sigma^{\varepsilon}\sqrt{\Gamma^{\varepsilon}+k}}\right),$
where $\phi\left(x\right)\doteq e^{-\frac{x^{2}}{2}}/\sqrt{2\pi}$ and since
$0\leq k\leq\Gamma^{\varepsilon}/2,$ we have
$\left[\frac{k}{\sigma^{\varepsilon}\sqrt{\Gamma^{\varepsilon}+k}},\frac{k}{\sigma^{\varepsilon}\sqrt{\Gamma^{\varepsilon}-k}}\right]\subset\left[\frac{\sqrt{2/3}k}{\sigma^{\varepsilon}\sqrt{\Gamma^{\varepsilon}}},\frac{\sqrt{2}k}{\sigma^{\varepsilon}\sqrt{\Gamma^{\varepsilon}}}\right].$
Additionally, because $\phi\left(x\right)=e^{-\frac{x^{2}}{2}}/\sqrt{2\pi}$ is
a monotone decreasing function on $[0,\infty)$, we find that
$x\in\left[\frac{\sqrt{2/3}\,k}{\sigma^{\varepsilon}\sqrt{\Gamma^{\varepsilon}}},\frac{\sqrt{2}\,k}{\sigma^{\varepsilon}\sqrt{\Gamma^{\varepsilon}}}\right]\quad\mbox{implies}\quad\phi\left(x\right)\leq
e^{-\frac{k^{2}}{3\left(\sigma^{\varepsilon}\right)^{2}\Gamma^{\varepsilon}}}/{\sqrt{2\pi}}.$
Also, since $\sqrt{1+x}-\sqrt{1-x}\leq 2x$ for all $x\in[0,1]$ and
$k\leq\Gamma^{\varepsilon}/2$, a little algebra (spelled out after the display) gives
${k}/{\sqrt{\Gamma^{\varepsilon}-k}}-{k}/{\sqrt{\Gamma^{\varepsilon}+k}}\leq{4k^{2}}/{\Gamma^{\varepsilon}\sqrt{\Gamma^{\varepsilon}}}.$
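Indeed, writing $x\doteq k/\Gamma^{\varepsilon}\leq 1/2$, so that $\sqrt{1-x^{2}}\geq\sqrt{3}/2$,

$\frac{k}{\sqrt{\Gamma^{\varepsilon}-k}}-\frac{k}{\sqrt{\Gamma^{\varepsilon}+k}}=\frac{k}{\sqrt{\Gamma^{\varepsilon}}}\cdot\frac{\sqrt{1+x}-\sqrt{1-x}}{\sqrt{1-x^{2}}}\leq\frac{k}{\sqrt{\Gamma^{\varepsilon}}}\cdot\frac{2x}{\sqrt{3}/2}=\frac{4k^{2}}{\sqrt{3}\,\Gamma^{\varepsilon}\sqrt{\Gamma^{\varepsilon}}}\leq\frac{4k^{2}}{\Gamma^{\varepsilon}\sqrt{\Gamma^{\varepsilon}}}.$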
Therefore we find
$\displaystyle\sum\nolimits_{k=1}^{2\gamma^{\varepsilon}\Gamma^{\varepsilon}}\left[\Phi\left(\frac{k}{\sigma^{\varepsilon}\sqrt{\Gamma^{\varepsilon}-k}}\right)-\Phi\left(\frac{k}{\sigma^{\varepsilon}\sqrt{\Gamma^{\varepsilon}+k}}\right)\right]$
$\displaystyle\qquad\leq\sum\nolimits_{k=1}^{2\gamma^{\varepsilon}\Gamma^{\varepsilon}}\frac{1}{\sqrt{2\pi}}e^{-\frac{k^{2}}{3\left(\sigma^{\varepsilon}\right)^{2}\Gamma^{\varepsilon}}}\frac{4k^{2}}{\sigma^{\varepsilon}\Gamma^{\varepsilon}\sqrt{\Gamma^{\varepsilon}}}\leq\frac{4}{\sigma^{\varepsilon}\Gamma^{\varepsilon}}\sum\nolimits_{k=1}^{2\gamma^{\varepsilon}\Gamma^{\varepsilon}}\int_{k-1}^{k}\frac{\left(1+x\right)^{2}}{\sqrt{2\pi\Gamma^{\varepsilon}}}e^{-\frac{x^{2}}{3\left(\sigma^{\varepsilon}\right)^{2}\Gamma^{\varepsilon}}}dx$
$\displaystyle\qquad\leq\frac{4}{\Gamma^{\varepsilon}}\sqrt{\frac{3}{2}}\int_{0}^{\infty}\frac{\left(1+x\right)^{2}}{\sqrt{3\pi\left(\sigma^{\varepsilon}\right)^{2}\Gamma^{\varepsilon}}}e^{-\frac{x^{2}}{3\left(\sigma^{\varepsilon}\right)^{2}\Gamma^{\varepsilon}}}dx\leq\frac{6}{\Gamma^{\varepsilon}}E\left(1+X^{+}\right)^{2},$
where $X\sim
N(0,3\left(\sigma^{\varepsilon}\right)^{2}\Gamma^{\varepsilon}/2).$ Finally,
since $E\left(1+X^{+}\right)^{2}\leq
2+2E\left(X^{2}\right)=2+3\left(\sigma^{\varepsilon}\right)^{2}\Gamma^{\varepsilon},$
this implies that
$\displaystyle\sum_{k=1}^{2\gamma^{\varepsilon}\Gamma^{\varepsilon}}\left[\Phi\left(\frac{k}{\sigma^{\varepsilon}\sqrt{\Gamma^{\varepsilon}-k}}\right)-\Phi\left(\frac{k}{\sigma^{\varepsilon}\sqrt{\Gamma^{\varepsilon}+k}}\right)\right]$
$\displaystyle\leq\frac{6}{\Gamma^{\varepsilon}}\left(2+3\left(\sigma^{\varepsilon}\right)^{2}\Gamma^{\varepsilon}\right)\leq
12+18\hat{\rho}^{2/3},$ (8.4)
where the last inequality is from
$\displaystyle\sup\nolimits_{\varepsilon\in(0,\varepsilon_{0})}\sigma^{\varepsilon}$
$\displaystyle=\sup\nolimits_{\varepsilon\in(0,\varepsilon_{0})}\left(E_{\lambda^{\varepsilon}}\left(\mathfrak{X}_{1}^{\varepsilon}\right)^{2}\right)^{1/2}\leq\sup\nolimits_{\varepsilon\in(0,\varepsilon_{0})}\left(E_{\lambda^{\varepsilon}}\left|\mathfrak{X}_{1}^{\varepsilon}\right|^{3}\right)^{1/3}=\hat{\rho}^{1/3}.$
The inequality in the last display is Lyapunov's inequality $\left(E\left|X\right|^{2}\right)^{1/2}\leq\left(E\left|X\right|^{3}\right)^{1/3}$. Since, according to Lemma 8.15, $\hat{\rho}$ is finite, we are done. ∎
###### Lemma 8.17.
Given any $\delta>0$ and any $\ell\in(0,1/2),$ there exists
$\varepsilon_{0}\in(0,1)$ and a constant $C<\infty$ such that for any
$\varepsilon\in(0,\varepsilon_{0})$, $\mathfrak{R}_{2}\leq
4\left(\Gamma^{\varepsilon}\right)^{2-\ell}+C\left(\Gamma^{\varepsilon}\right)^{2\left(1-\ell\right)}.$
###### Proof.
The proof of this lemma is similar to the proof of Lemma 8.16. We rewrite
$\mathfrak{R}_{2}$ as
$\displaystyle\mathfrak{R}_{2}$
$\displaystyle=\sum\nolimits_{k=1}^{2\gamma^{\varepsilon}\Gamma^{\varepsilon}}\left(2\Gamma^{\varepsilon}-2k+1\right)P_{\lambda^{\varepsilon}}\left(\tau_{\Gamma^{\varepsilon}-k}^{\varepsilon}\leq
T^{\varepsilon}\right)+\left(2\Gamma^{\varepsilon}+1\right)P_{\lambda^{\varepsilon}}\left(\tau_{\Gamma^{\varepsilon}}^{\varepsilon}\leq
T^{\varepsilon}\right)$
$\displaystyle\quad+\sum\nolimits_{k=1}^{2\gamma^{\varepsilon}\Gamma^{\varepsilon}}\left(2\Gamma^{\varepsilon}+2k+1\right)P_{\lambda^{\varepsilon}}\left(\tau_{\Gamma^{\varepsilon}+k}^{\varepsilon}\leq
T^{\varepsilon}\right).$
Then we use the upper bounds from Lemma 8.15 to get
$\displaystyle\mathfrak{R}_{2}\leq\sum\nolimits_{k=1}^{2\gamma^{\varepsilon}\Gamma^{\varepsilon}}\left(2\Gamma^{\varepsilon}-2k+1\right)\left[\Phi\left(\frac{k}{\sigma^{\varepsilon}\sqrt{\Gamma^{\varepsilon}-k}}\right)+\frac{3\hat{\rho}}{\hat{\sigma}^{3}\sqrt{\Gamma^{\varepsilon}-k}}\right]+\left(2\Gamma^{\varepsilon}+1\right)$
$\displaystyle\quad\qquad+\sum\nolimits_{k=1}^{2\gamma^{\varepsilon}\Gamma^{\varepsilon}}\left(2\Gamma^{\varepsilon}+2k+1\right)\left[1-\Phi\left(\frac{k}{\sigma^{\varepsilon}\sqrt{\Gamma^{\varepsilon}+k}}\right)+\frac{3\hat{\rho}}{\hat{\sigma}^{3}\sqrt{\Gamma^{\varepsilon}+k}}\right].$
The next thing is to pair all the terms carefully and bound these pairs
separately. We start with
$\displaystyle\sum\nolimits_{k=1}^{2\gamma^{\varepsilon}\Gamma^{\varepsilon}}\left(2\Gamma^{\varepsilon}-2k+1\right)\Phi\left(\frac{k}{\sigma^{\varepsilon}\sqrt{\Gamma^{\varepsilon}-k}}\right)-\sum\nolimits_{k=1}^{2\gamma^{\varepsilon}\Gamma^{\varepsilon}}\left(2\Gamma^{\varepsilon}+2k+1\right)\Phi\left(\frac{k}{\sigma^{\varepsilon}\sqrt{\Gamma^{\varepsilon}+k}}\right)$
$\displaystyle\quad\leq\left(2\Gamma^{\varepsilon}+1\right)\sum\nolimits_{k=1}^{2\gamma^{\varepsilon}\Gamma^{\varepsilon}}\left[\Phi\left(\frac{k}{\sigma^{\varepsilon}\sqrt{\Gamma^{\varepsilon}-k}}\right)-\Phi\left(\frac{k}{\sigma^{\varepsilon}\sqrt{\Gamma^{\varepsilon}+k}}\right)\right]\leq
C\Gamma^{\varepsilon}.$
We use (8.4) for the last inequality. The second pair is
$\displaystyle\sum\nolimits_{k=1}^{2\gamma^{\varepsilon}\Gamma^{\varepsilon}}\left(2\Gamma^{\varepsilon}-2k+1\right)\frac{3\hat{\rho}}{\hat{\sigma}^{3}\sqrt{\Gamma^{\varepsilon}-k}}+\sum\nolimits_{k=1}^{2\gamma^{\varepsilon}\Gamma^{\varepsilon}}\left(2\Gamma^{\varepsilon}+2k+1\right)\frac{3\hat{\rho}}{\hat{\sigma}^{3}\sqrt{\Gamma^{\varepsilon}+k}}$
$\displaystyle\quad=\frac{6\hat{\rho}}{\hat{\sigma}^{3}}\sum\nolimits_{k=1}^{2\gamma^{\varepsilon}\Gamma^{\varepsilon}}\left(\sqrt{\Gamma^{\varepsilon}-k}+\sqrt{\Gamma^{\varepsilon}+k}\right)+\frac{3\hat{\rho}}{\hat{\sigma}^{3}}\sum\nolimits_{k=1}^{2\gamma^{\varepsilon}\Gamma^{\varepsilon}}\left(\frac{1}{\sqrt{\Gamma^{\varepsilon}-k}}+\frac{1}{\sqrt{\Gamma^{\varepsilon}+k}}\right)$
$\displaystyle\quad\leq\frac{6\hat{\rho}}{\hat{\sigma}^{3}}\sum\nolimits_{k=1}^{2\gamma^{\varepsilon}\Gamma^{\varepsilon}}2\sqrt{2\Gamma^{\varepsilon}}+\frac{3\hat{\rho}}{\hat{\sigma}^{3}}\sum\nolimits_{k=1}^{2\gamma^{\varepsilon}\Gamma^{\varepsilon}}2\leq
C\gamma^{\varepsilon}\Gamma^{\varepsilon}\sqrt{\Gamma^{\varepsilon}}+C\gamma^{\varepsilon}\Gamma^{\varepsilon}\leq
C\left(\Gamma^{\varepsilon}\right)^{\frac{3}{2}-\ell},$
where the first inequality holds due to $k\leq\Gamma^{\varepsilon}/2$. The
third term is
$\displaystyle\sum\nolimits_{k=1}^{2\gamma^{\varepsilon}\Gamma^{\varepsilon}}\left(2\Gamma^{\varepsilon}+2k+1\right)+\left(2\Gamma^{\varepsilon}+1\right)$
$\displaystyle=4\gamma^{\varepsilon}\left(\Gamma^{\varepsilon}\right)^{2}+2\gamma^{\varepsilon}\Gamma^{\varepsilon}+4\left(\gamma^{\varepsilon}\Gamma^{\varepsilon}\right)^{2}+2\gamma^{\varepsilon}\Gamma^{\varepsilon}+\left(2\Gamma^{\varepsilon}+1\right)$
$\displaystyle\leq
4\gamma^{\varepsilon}\left(\Gamma^{\varepsilon}\right)^{2}+C\left(\gamma^{\varepsilon}\Gamma^{\varepsilon}\right)^{2}=4\left(\Gamma^{\varepsilon}\right)^{2-\ell}+C\left(\Gamma^{\varepsilon}\right)^{2\left(1-\ell\right)},$
where the inequality holds since for $\ell\in(0,1/2),$ $2-2\ell\geq 1$ and
this implies that $\left(2\Gamma^{\varepsilon}+1\right)\leq
C\left(\gamma^{\varepsilon}\Gamma^{\varepsilon}\right)^{2}.$ Lastly, combining
all the pairs and the corresponding upper bounds, we find that for any
$\ell\in(0,1/2),$
$\displaystyle\mathfrak{R}_{2}$
$\displaystyle\leq[4\left(\Gamma^{\varepsilon}\right)^{2-\ell}+C\left(\Gamma^{\varepsilon}\right)^{2\left(1-\ell\right)}]+C\Gamma^{\varepsilon}+C\left(\Gamma^{\varepsilon}\right)^{\frac{3}{2}-\ell}\leq
4\left(\Gamma^{\varepsilon}\right)^{2-\ell}+C\left(\Gamma^{\varepsilon}\right)^{2\left(1-\ell\right)},$
where $C$ is a constant which depends on $\ell$ only (and in particular is
independent of $\varepsilon$). ∎
### 8.3 Asymptotics of moments of $N^{\varepsilon}(T^{\varepsilon})$
In this subsection, we prove Lemma 8.2 and Lemma 8.4.
###### Proof of Lemma 8.2.
First, recall that
$E_{\lambda^{\varepsilon}}\left(N^{\varepsilon}\left(T^{\varepsilon}\right)\right)=\sum\nolimits_{n=0}^{\infty}P_{\lambda^{\varepsilon}}\left(\tau_{n}^{\varepsilon}\leq
T^{\varepsilon}\right)=\mathfrak{P}_{1}+\mathfrak{P}_{2}+\mathfrak{P}_{3},$
where the $\mathfrak{P}_{i}$ are defined in (8.2). We can simply bound
$\mathfrak{P}_{3}$ from above by
$\left(1-2\gamma^{\varepsilon}\right)\Gamma^{\varepsilon}$. Applying Lemma
8.11 and Lemma 8.16 for the other terms, we have for any $\ell\in(0,1/2)$ that
$\displaystyle
E_{\lambda^{\varepsilon}}\left(N^{\varepsilon}\left(T^{\varepsilon}\right)\right)$
$\displaystyle\leq
C\left(\Gamma^{\varepsilon}\right)^{2\ell}e^{-\left(\Gamma^{\varepsilon}\right)^{1-2\ell}}+(C\left(\Gamma^{\varepsilon}\right)^{\frac{1}{2}-\ell}+2\left(\Gamma^{\varepsilon}\right)^{1-\ell})+\left(1-2\gamma^{\varepsilon}\right)\Gamma^{\varepsilon}$
$\displaystyle=T^{\varepsilon}/E_{\lambda^{\varepsilon}}\tau_{1}^{\varepsilon}+C\left(\Gamma^{\varepsilon}\right)^{\frac{1}{2}-\ell}+C\left(\Gamma^{\varepsilon}\right)^{2\ell}e^{-\left(\Gamma^{\varepsilon}\right)^{1-2\ell}}.$
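(In the last equality we used that, under the choices made earlier in the paper — on our reading, $\gamma^{\varepsilon}=\left(\Gamma^{\varepsilon}\right)^{-\ell}$ and $\Gamma^{\varepsilon}=T^{\varepsilon}/E_{\lambda^{\varepsilon}}\tau_{1}^{\varepsilon}$ — one has $\left(1-2\gamma^{\varepsilon}\right)\Gamma^{\varepsilon}+2\left(\Gamma^{\varepsilon}\right)^{1-\ell}=\Gamma^{\varepsilon}=T^{\varepsilon}/E_{\lambda^{\varepsilon}}\tau_{1}^{\varepsilon}$.)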
On the other hand, from the definition of
$N^{\varepsilon}\left(T^{\varepsilon}\right),$
$E_{\lambda^{\varepsilon}}\tau_{N^{\varepsilon}\left(T^{\varepsilon}\right)}^{\varepsilon}\geq
T^{\varepsilon}.$ Using Wald’s first identity, we find
$E_{\lambda^{\varepsilon}}\tau_{N^{\varepsilon}\left(T^{\varepsilon}\right)}^{\varepsilon}=E_{\lambda^{\varepsilon}}\sum\nolimits_{n=1}^{N^{\varepsilon}\left(T^{\varepsilon}\right)}\left(\tau_{n}^{\varepsilon}-\tau_{n-1}^{\varepsilon}\right)=E_{\lambda^{\varepsilon}}\left(N^{\varepsilon}\left(T^{\varepsilon}\right)\right)\cdot
E_{\lambda^{\varepsilon}}\tau_{1}^{\varepsilon}.$
Hence
$0\leq\frac{E_{\lambda^{\varepsilon}}\left(N^{\varepsilon}\left(T^{\varepsilon}\right)\right)}{T^{\varepsilon}}-\frac{1}{E_{\lambda^{\varepsilon}}\tau_{1}^{\varepsilon}}\leq\frac{1}{T^{\varepsilon}}[C\left(\Gamma^{\varepsilon}\right)^{\frac{1}{2}-\ell}+C\left(\Gamma^{\varepsilon}\right)^{2\ell}e^{-\left(\Gamma^{\varepsilon}\right)^{1-2\ell}}].$
Therefore,
$\displaystyle\liminf_{\varepsilon\rightarrow
0}-\varepsilon\log\left|\frac{E_{\lambda^{\varepsilon}}\left(N^{\varepsilon}\left(T^{\varepsilon}\right)\right)}{T^{\varepsilon}}-\frac{1}{E_{\lambda^{\varepsilon}}\tau_{1}^{\varepsilon}}\right|\geq\liminf_{\varepsilon\rightarrow
0}-\varepsilon\log\left[\frac{1}{T^{\varepsilon}}\left(C\left(\Gamma^{\varepsilon}\right)^{\frac{1}{2}-\ell}+\left(\Gamma^{\varepsilon}\right)^{2\ell}e^{-\left(\Gamma^{\varepsilon}\right)^{1-2\ell}}\right)\right].$
It remains to find an appropriate lower bound for the liminf.
We use (7.2), Lemma 8.7 and Remark 8.12 to find that for any $\eta>0,$ there
exists $\delta_{0}\in(0,1)$ such that for any $\delta\in(0,\delta_{0})$ and
any $\ell\in(0,1/2)$
$\displaystyle\liminf_{\varepsilon\rightarrow
0}-\varepsilon\log\left[\frac{1}{T^{\varepsilon}}\left(C\left(\Gamma^{\varepsilon}\right)^{\frac{1}{2}-\ell}+\left(\Gamma^{\varepsilon}\right)^{2\ell}e^{-\left(\Gamma^{\varepsilon}\right)^{1-2\ell}}\right)\right]$
$\displaystyle\quad\geq\liminf_{\varepsilon\rightarrow 0}\varepsilon\log
T^{\varepsilon}+\min\left\\{\liminf_{\varepsilon\rightarrow
0}-\varepsilon\log\left(\Gamma^{\varepsilon}\right)^{1/2-\ell},\liminf_{\varepsilon\rightarrow
0}-\varepsilon\log\left(\left(\Gamma^{\varepsilon}\right)^{2\ell}e^{-\left(\Gamma^{\varepsilon}\right)^{1-2\ell}}\right)\right\\}$
$\displaystyle\quad\geq
c+\min\left\\{\left(1/2-\ell\right)\left(h_{1}-c-\eta\right),\infty\right\\}=c+\left(1/2-\ell\right)\left(h_{1}-c-\eta\right).$
Since the right-hand side tends to $c$ as $\ell\uparrow 1/2$, we complete the proof by sending $\ell$ to $1/2$. ∎
###### Proof of Lemma 8.4.
Recall that
$E_{\lambda^{\varepsilon}}\left(N^{\varepsilon}\left(T^{\varepsilon}\right)\right)^{2}=\sum\nolimits_{n=0}^{\infty}\left(2n+1\right)P_{\lambda^{\varepsilon}}\left(\tau_{n}^{\varepsilon}\leq
T^{\varepsilon}\right)=\mathfrak{R}_{1}+\mathfrak{R}_{2}+\mathfrak{R}_{3},$
where the $\mathfrak{R}_{i}$ are defined in (8.3). We can bound
$\mathfrak{R}_{3}$ from above by
$\displaystyle\sum\nolimits_{n=0}^{\left(1-2\gamma^{\varepsilon}\right)\Gamma^{\varepsilon}-1}\left(2n+1\right)=(1-4\gamma^{\varepsilon}+4\left(\gamma^{\varepsilon}\right)^{2})\left(\Gamma^{\varepsilon}\right)^{2}.$
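Here we used the elementary identity $\sum\nolimits_{n=0}^{K-1}\left(2n+1\right)=K^{2}$ with $K=\left(1-2\gamma^{\varepsilon}\right)\Gamma^{\varepsilon}$ (as elsewhere in the argument, integer-part corrections are suppressed).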
Applying Lemma 8.11 and Lemma 8.17, we have for any $\ell\in(0,1/2)$ that
$\displaystyle
E_{\lambda^{\varepsilon}}(N^{\varepsilon}\left(T^{\varepsilon}\right))^{2}$
$\displaystyle\leq
C\left(\Gamma^{\varepsilon}\right)^{1+2\ell}e^{-\left(\Gamma^{\varepsilon}\right)^{1-2\ell}}+4\left(\Gamma^{\varepsilon}\right)^{2-\ell}+C\left(\Gamma^{\varepsilon}\right)^{2\left(1-\ell\right)}+(1-4\gamma^{\varepsilon}+4\left(\gamma^{\varepsilon}\right)^{2})\left(\Gamma^{\varepsilon}\right)^{2}$
$\displaystyle\leq\left(\Gamma^{\varepsilon}\right)^{2}+C\left(\Gamma^{\varepsilon}\right)^{2\left(1-\ell\right)}+C\left(\Gamma^{\varepsilon}\right)^{1+2\ell}e^{-\left(\Gamma^{\varepsilon}\right)^{1-2\ell}}.$
As in the proof of Lemma 8.2,
$E_{\lambda^{\varepsilon}}\left(N^{\varepsilon}\left(T^{\varepsilon}\right)\right)\geq\Gamma^{\varepsilon}.$
Thus for any $\ell\in(0,1/2)$
$\displaystyle\mathrm{Var}_{\lambda^{\varepsilon}}\left(N^{\varepsilon}\left(T^{\varepsilon}\right)\right)$
$\displaystyle\leq
E_{\lambda^{\varepsilon}}\left(N^{\varepsilon}\left(T^{\varepsilon}\right)\right)^{2}-\left(\Gamma^{\varepsilon}\right)^{2}$
$\displaystyle\leq[\left(\Gamma^{\varepsilon}\right)^{2}+C\left(\Gamma^{\varepsilon}\right)^{2\left(1-\ell\right)}+C\left(\Gamma^{\varepsilon}\right)^{1+2\ell}e^{-\left(\Gamma^{\varepsilon}\right)^{1-2\ell}}]-\left(\Gamma^{\varepsilon}\right)^{2}$
$\displaystyle=C\left(\Gamma^{\varepsilon}\right)^{2\left(1-\ell\right)}+C\left(\Gamma^{\varepsilon}\right)^{1+2\ell}e^{-\left(\Gamma^{\varepsilon}\right)^{1-2\ell}}.$
Again we use (7.2), Lemma 8.7 and Remark 8.12 to find that for any $\eta>0,$
there exists $\delta_{0}\in(0,1)$ such that for any $\delta\in(0,\delta_{0})$
and for any $\ell\in(0,1/2)$,
$\displaystyle\liminf_{\varepsilon\rightarrow
0}-\varepsilon\log\frac{\mathrm{Var}_{\lambda^{\varepsilon}}\left(N^{\varepsilon}\left(T^{\varepsilon}\right)\right)}{T^{\varepsilon}}$
$\displaystyle\quad\geq\liminf_{\varepsilon\rightarrow 0}\varepsilon\log
T^{\varepsilon}+\min\left\\{\liminf_{\varepsilon\rightarrow
0}-\varepsilon\log\left(\Gamma^{\varepsilon}\right)^{2\left(1-\ell\right)},\liminf_{\varepsilon\rightarrow
0}-\varepsilon\log\left(\left(\Gamma^{\varepsilon}\right)^{1+2\ell}e^{-\left(\Gamma^{\varepsilon}\right)^{1-2\ell}}\right)\right\\}$
$\displaystyle\quad\geq
c+\min\left\\{2\left(1-\ell\right)\left(h_{1}-c-\eta\right),\infty\right\\}=2\left(1-\ell\right)(h_{1}-\eta)+\left(2\ell-1\right)c.$
Since the right-hand side tends to $h_{1}-\eta$ as $\ell\uparrow 1/2$, we complete the proof by sending $\ell$ to $1/2$. ∎
### 8.4 Asymptotics of moments of $\hat{N}^{\varepsilon}(T^{\varepsilon})$
The proof of the following result is given in Section 10.
###### Theorem 8.18.
If $w\geq h_{1}$, then given any $m>0$ such that $m+h_{1}>w$ and for any
$\delta>0$ sufficiently small,
$\lim_{\varepsilon\rightarrow 0}\varepsilon\log
E_{\lambda^{\varepsilon}}\hat{\tau}_{1}^{\varepsilon}=m+\varkappa_{\delta}\text{
and
}\hat{\tau}_{1}^{\varepsilon}/E_{\lambda^{\varepsilon}}\hat{\tau}_{1}^{\varepsilon}\overset{d}{\rightarrow}\mathrm{Exp}(1).$
Moreover, there exists $\varepsilon_{0}\in(0,1)$ and a constant $\tilde{c}>0$
such that
$P_{\lambda^{\varepsilon}}\left(\hat{\tau}_{1}^{\varepsilon}/E_{\lambda^{\varepsilon}}\hat{\tau}_{1}^{\varepsilon}>t\right)\leq
e^{-\tilde{c}t}$
for any $t>0$ and any $\varepsilon\in(0,\varepsilon_{0}).$
Notice that Theorem 8.18 is a multicycle version of Theorem 8.5, which is the
key to the proofs of the asymptotics of moments of
$N^{\varepsilon}(T^{\varepsilon})$, namely, Lemma 8.2 and Lemma 8.4. Given
Theorem 8.18, the proofs of the following analogous results follow from
essentially the same arguments as those of Lemma 8.2 and Lemma 8.4.
###### Lemma 8.19.
Suppose that $w\geq h_{1}$, $m+h_{1}>w$, and that
$T^{\varepsilon}=e^{\frac{1}{\varepsilon}c}$ for some $c>w$. Then there exists
$\delta_{0}\in(0,1)$ such that for any $\delta\in(0,\delta_{0})$
$\liminf_{\varepsilon\rightarrow
0}-\varepsilon\log\left|\frac{E_{\lambda^{\varepsilon}}(\hat{N}^{\varepsilon}\left(T^{\varepsilon}\right))}{T^{\varepsilon}}-\frac{1}{E_{\lambda^{\varepsilon}}\hat{\tau}_{1}^{\varepsilon}}\right|\geq
c.$
###### Corollary 8.20.
Suppose that $w\geq h_{1}$, $m+h_{1}>w$ and that
$T^{\varepsilon}=e^{\frac{1}{\varepsilon}c}$ for some $c>w$. Then there exists
$\delta_{0}\in(0,1)$ such that for any $\delta\in(0,\delta_{0})$
$\liminf_{\varepsilon\rightarrow
0}-\varepsilon\log\frac{E_{\lambda^{\varepsilon}}(\hat{N}^{\varepsilon}\left(T^{\varepsilon}\right))}{T^{\varepsilon}}\geq
m+\varkappa_{\delta}.$
###### Lemma 8.21.
Suppose that $w\geq h_{1}$, $m+h_{1}>w$ and that
$T^{\varepsilon}=e^{\frac{1}{\varepsilon}c}$ for some $c>w$. Then for any
$\eta>0,$ there exists $\delta_{0}\in(0,1)$ such that for any
$\delta\in(0,\delta_{0})$
$\liminf_{\varepsilon\rightarrow
0}-\varepsilon\log\frac{\mathrm{Var}_{\lambda^{\varepsilon}}(\hat{N}^{\varepsilon}\left(T^{\varepsilon}\right))}{T^{\varepsilon}}\geq
m+h_{1}-\eta.$
## 9 Large Deviation Type Upper Bounds
In this section we collect results from the previous sections to prove the
main results of the paper, Theorems 4.3 and 4.5, which give large deviation
upper bounds on the bias under the empirical measure and the variance per unit
time. We also give the proof of Theorem 4.9, which shows how to simplify some
expressions appearing in the large deviation bounds. Before giving the proof
of the first result we establish Lemma 9.1 and Lemma 9.2 for the single cycle
case, and Lemma 9.3 and Lemma 9.4 for the multicycle case, which are needed in
the proof of Theorem 4.3. Recall that for any $n\in\mathbb{N}$
$S_{n}^{\varepsilon}\doteq\int_{\tau_{n-1}^{\varepsilon}}^{\tau_{n}^{\varepsilon}}e^{-\frac{1}{\varepsilon}f\left(X_{t}^{\varepsilon}\right)}1_{A}\left(X_{t}^{\varepsilon}\right)dt.$
(9.1)
###### Lemma 9.1.
If $h_{1}>w$, $A\subset M$ is compact and
$T^{\varepsilon}=e^{\frac{1}{\varepsilon}c}$ for some $c>h_{1}$, then for any
$\eta>0$, there exists $\delta_{0}\in(0,1)$ such that for any
$\delta\in(0,\delta_{0})$
$\displaystyle\liminf_{\varepsilon\rightarrow
0}-\varepsilon\log\left|\frac{E_{\lambda^{\varepsilon}}N^{\varepsilon}\left(T^{\varepsilon}\right)}{T^{\varepsilon}}E_{\lambda^{\varepsilon}}S_{1}^{\varepsilon}-\int_{M}e^{-\frac{1}{\varepsilon}f\left(x\right)}1_{A}\left(x\right)\mu^{\varepsilon}\left(dx\right)\right|$
$\displaystyle\quad\geq\inf_{x\in
A}\left[f\left(x\right)+W\left(x\right)\right]-W\left(O_{1}\right)+c-h_{1}-\eta.$
###### Proof.
To begin, by Lemma 4.1 with
$g\left(x\right)=e^{-\frac{1}{\varepsilon}f\left(x\right)}1_{A}\left(x\right)$,
we know that for any $\delta$ sufficiently small and $\varepsilon>0,$
$\displaystyle E_{\lambda^{\varepsilon}}S_{1}^{\varepsilon}$
$\displaystyle=E_{\lambda^{\varepsilon}}\left(\int_{0}^{\tau_{1}^{\varepsilon}}e^{-\frac{1}{\varepsilon}f\left(X_{s}^{\varepsilon}\right)}1_{A}\left(X_{s}^{\varepsilon}\right)ds\right)=E_{\lambda^{\varepsilon}}\tau_{1}^{\varepsilon}\cdot\int_{M}e^{-\frac{1}{\varepsilon}f\left(x\right)}1_{A}\left(x\right)\mu^{\varepsilon}\left(dx\right).$
This implies that
$\displaystyle\left|\frac{E_{\lambda^{\varepsilon}}\left(N^{\varepsilon}\left(T^{\varepsilon}\right)\right)}{T^{\varepsilon}}E_{\lambda^{\varepsilon}}S_{1}^{\varepsilon}-\int_{M}e^{-\frac{1}{\varepsilon}f\left(x\right)}1_{A}\left(x\right)\mu^{\varepsilon}\left(dx\right)\right|=E_{\lambda^{\varepsilon}}S_{1}^{\varepsilon}\cdot\left|\frac{E_{\lambda^{\varepsilon}}\left(N^{\varepsilon}\left(T^{\varepsilon}\right)\right)}{T^{\varepsilon}}-\frac{1}{E_{\lambda^{\varepsilon}}\tau_{1}^{\varepsilon}}\right|.$
Hence, by (7.1), Lemma 7.21 and Lemma 8.2, we find that given $\eta>0,$ there
exists $\delta_{0}\in(0,1)$ such that for any $\delta\in(0,\delta_{0})$
$\displaystyle\liminf_{\varepsilon\rightarrow
0}-\varepsilon\log\left|\frac{E_{\lambda^{\varepsilon}}\left(N^{\varepsilon}\left(T^{\varepsilon}\right)\right)}{T^{\varepsilon}}E_{\lambda^{\varepsilon}}S_{1}^{\varepsilon}-\int_{M}e^{-\frac{1}{\varepsilon}f\left(x\right)}1_{A}\left(x\right)\mu^{\varepsilon}\left(dx\right)\right|$
$\displaystyle\quad\geq\liminf_{\varepsilon\rightarrow 0}-\varepsilon\log
E_{\lambda^{\varepsilon}}S_{1}^{\varepsilon}+\liminf_{\varepsilon\rightarrow
0}-\varepsilon\log\left|\frac{E_{\lambda^{\varepsilon}}\left(N^{\varepsilon}\left(T^{\varepsilon}\right)\right)}{T^{\varepsilon}}-\frac{1}{E_{\lambda^{\varepsilon}}\tau_{1}^{\varepsilon}}\right|$
$\displaystyle\quad\geq\inf_{x\in
A}\left[f\left(x\right)+W\left(x\right)\right]-W\left(O_{1}\right)+c-h_{1}-\eta.$
∎
In the application of Wald’s identity a difficulty arises in that, owing to
the randomness of $N^{\varepsilon}\left(T^{\varepsilon}\right)$,
$S_{N^{\varepsilon}\left(T^{\varepsilon}\right)}^{\varepsilon}$ need not have
the same distribution as $S_{1}^{\varepsilon}$. Nevertheless, this minor term
can be dealt with by using techniques from, for example, [18, Theorem 3.16]. The
proof of the following lemma can be found in the Appendix.
###### Lemma 9.2.
If $h_{1}>w$, $A\subset M$ is compact and
$T^{\varepsilon}=e^{\frac{1}{\varepsilon}c}$ for some $c>h_{1}$, then for any
$\eta>0$, there exists $\delta_{0}\in(0,1)$ such that for any
$\delta\in(0,\delta_{0})$
$\displaystyle\liminf_{\varepsilon\rightarrow
0}-\varepsilon\log\frac{E_{\lambda^{\varepsilon}}S_{N^{\varepsilon}\left(T^{\varepsilon}\right)}^{\varepsilon}}{T^{\varepsilon}}\geq\inf_{x\in
A}\left[f\left(x\right)+W\left(x\right)\right]-W\left(O_{1}\right)+c-h_{1}-\eta.$
We have similar results for multicycles. To be specific, we have the following
two lemmas.
###### Lemma 9.3.
Suppose that $w\geq h_{1}$, $m+h_{1}>w$, $A\subset M$ is compact and that
$T^{\varepsilon}=e^{\frac{1}{\varepsilon}c}$ for some $c>w$. Then for any
$\eta>0$, there exists $\delta_{0}\in(0,1)$ such that for any
$\delta\in(0,\delta_{0})$
$\displaystyle\liminf_{\varepsilon\rightarrow
0}-\varepsilon\log\left|\frac{E_{\lambda^{\varepsilon}}\hat{N}^{\varepsilon}\left(T^{\varepsilon}\right)}{T^{\varepsilon}}E_{\lambda^{\varepsilon}}\hat{S}_{1}^{\varepsilon}-\int_{M}e^{-\frac{1}{\varepsilon}f\left(x\right)}1_{A}\left(x\right)\mu^{\varepsilon}\left(dx\right)\right|$
$\displaystyle\quad\geq\inf_{x\in
A}\left[f\left(x\right)+W\left(x\right)\right]-W\left(O_{1}\right)+c-(m+h_{1})-\eta.$
###### Lemma 9.4.
Suppose that $w\geq h_{1}$, $m+h_{1}>w$, $A\subset M$ is compact and that
$T^{\varepsilon}=e^{\frac{1}{\varepsilon}c}$ for some $c>w$. Then for any
$\eta>0$, there exists $\delta_{0}\in(0,1)$ such that for any
$\delta\in(0,\delta_{0})$
$\displaystyle\liminf_{\varepsilon\rightarrow
0}-\varepsilon\log\frac{E_{\lambda^{\varepsilon}}\hat{S}_{\hat{N}^{\varepsilon}\left(T^{\varepsilon}\right)}^{\varepsilon}}{T^{\varepsilon}}\geq\inf_{x\in
A}\left[f\left(x\right)+W\left(x\right)\right]-W\left(O_{1}\right)+c-(m+h_{1})-\eta.$
###### Proof of Theorem 4.3.
If $h_{1}>w$, then recall that
$\frac{1}{T^{\varepsilon}}\sum\nolimits_{n=1}^{N^{\varepsilon}\left(T^{\varepsilon}\right)-1}S_{n}^{\varepsilon}\leq\frac{1}{T^{\varepsilon}}\int_{0}^{T^{\varepsilon}}e^{-\frac{1}{\varepsilon}f\left(X_{t}^{\varepsilon}\right)}1_{A}\left(X_{t}^{\varepsilon}\right)dt\leq\frac{1}{T^{\varepsilon}}\sum\nolimits_{n=1}^{N^{\varepsilon}\left(T^{\varepsilon}\right)}S_{n}^{\varepsilon},$
where $S_{n}^{\varepsilon}$ is defined in (9.1). Then we apply Wald’s first
identity to obtain
$\displaystyle
E_{\lambda^{\varepsilon}}\left(\sum\nolimits_{n=1}^{N^{\varepsilon}\left(T^{\varepsilon}\right)-1}S_{n}^{\varepsilon}\right)$
$\displaystyle=E_{\lambda^{\varepsilon}}\left(\sum\nolimits_{n=1}^{N^{\varepsilon}\left(T^{\varepsilon}\right)}S_{n}^{\varepsilon}\right)-E_{\lambda^{\varepsilon}}S_{N^{\varepsilon}\left(T^{\varepsilon}\right)}^{\varepsilon}$
$\displaystyle=E_{\lambda^{\varepsilon}}\left(N^{\varepsilon}\left(T^{\varepsilon}\right)\right)E_{\lambda^{\varepsilon}}S_{1}^{\varepsilon}-E_{\lambda^{\varepsilon}}S_{N^{\varepsilon}\left(T^{\varepsilon}\right)}^{\varepsilon}.$
Thus
$\displaystyle\left|E_{\lambda^{\varepsilon}}\left(\frac{1}{T^{\varepsilon}}\int_{0}^{T^{\varepsilon}}e^{-\frac{1}{\varepsilon}f\left(X_{t}^{\varepsilon}\right)}1_{A}\left(X_{t}^{\varepsilon}\right)dt\right)-\int_{M}e^{-\frac{1}{\varepsilon}f\left(x\right)}1_{A}\left(x\right)\mu^{\varepsilon}\left(dx\right)\right|$
$\displaystyle\quad\leq\left|\frac{E_{\lambda^{\varepsilon}}\left(N^{\varepsilon}\left(T^{\varepsilon}\right)\right)}{T^{\varepsilon}}E_{\lambda^{\varepsilon}}S_{1}^{\varepsilon}-\int_{M}e^{-\frac{1}{\varepsilon}f\left(x\right)}1_{A}\left(x\right)\mu^{\varepsilon}\left(dx\right)\right|+\frac{E_{\lambda^{\varepsilon}}S_{N^{\varepsilon}\left(T^{\varepsilon}\right)}^{\varepsilon}}{T^{\varepsilon}}.$
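Indeed, the sandwich bounds and the identity above show that
$\displaystyle 0\leq\frac{E_{\lambda^{\varepsilon}}\left(N^{\varepsilon}\left(T^{\varepsilon}\right)\right)}{T^{\varepsilon}}E_{\lambda^{\varepsilon}}S_{1}^{\varepsilon}-E_{\lambda^{\varepsilon}}\left(\frac{1}{T^{\varepsilon}}\int_{0}^{T^{\varepsilon}}e^{-\frac{1}{\varepsilon}f\left(X_{t}^{\varepsilon}\right)}1_{A}\left(X_{t}^{\varepsilon}\right)dt\right)\leq\frac{E_{\lambda^{\varepsilon}}S_{N^{\varepsilon}\left(T^{\varepsilon}\right)}^{\varepsilon}}{T^{\varepsilon}},$
and the triangle inequality then yields the preceding display.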
Therefore, by Lemma 9.1 and Lemma 9.2 we have that for any $\eta>0$, there
exists $\delta_{0}\in(0,1)$ such that for any $\delta\in(0,\delta_{0})$,
$\displaystyle\liminf_{\varepsilon\rightarrow
0}-\varepsilon\log\left|E_{\lambda^{\varepsilon}}\left(\frac{1}{T^{\varepsilon}}\int_{0}^{T^{\varepsilon}}e^{-\frac{1}{\varepsilon}f\left(X_{t}^{\varepsilon}\right)}1_{A}\left(X_{t}^{\varepsilon}\right)dt\right)-\int_{M}e^{-\frac{1}{\varepsilon}f\left(x\right)}1_{A}\left(x\right)\mu^{\varepsilon}\left(dx\right)\right|$
$\displaystyle\quad\geq\inf_{x\in
A}\left[f\left(x\right)+W\left(x\right)\right]-W\left(O_{1}\right)+c-h_{1}-\eta.$
The argument for $h_{1}\leq w$ is entirely analogous but uses Lemma 9.3 and
Lemma 9.4. ∎
The following lemma bounds quantities that will arise in the proof of Theorem
4.5. Its proof is given in the Appendix.
###### Lemma 9.5.
Recall the definitions $R_{1}^{(2)}\doteq 2\inf_{x\in
A}\left[f\left(x\right)+V\left(O_{1},x\right)\right]-h_{1},$ and for $j\in
L\setminus\\{1\\}$, $R_{j}^{(2)}\doteq 2\inf_{x\in
A}\left[f\left(x\right)+V\left(O_{j},x\right)\right]+W\left(O_{j}\right)-2W\left(O_{1}\right)+W\left(O_{1}\cup
O_{j}\right)$ with $h_{1}\doteq\min_{\ell\in
L\setminus\\{1\\}}V(O_{1},O_{\ell}).$ Then $2\inf_{x\in
A}\left[f\left(x\right)+W\left(x\right)\right]-2W\left(O_{1}\right)-h_{1}\geq\min_{j\in
L}R_{j}^{(2)}.$
###### Proof of Theorem 4.5.
We begin with the observation that for any random variables $X,Y$ and $Z$
satisfying $0\leq Y-Z\leq X\leq Y,$
$\displaystyle\mathrm{Var}\left(X\right)$
$\displaystyle=EX^{2}-\left(EX\right)^{2}\leq
EY^{2}-\left(E\left(Y-Z\right)\right)^{2}$
$\displaystyle=\mathrm{Var}\left(Y\right)+2EY\cdot
EZ-\left(EZ\right)^{2}\leq\mathrm{Var}\left(Y\right)+2EY\cdot EZ.$
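To see this, note that $0\leq X\leq Y$ gives $EX^{2}\leq EY^{2}$, while $0\leq Y-Z\leq X$ gives $\left(EX\right)^{2}\geq\left(E\left(Y-Z\right)\right)^{2}$; expanding $\left(E\left(Y-Z\right)\right)^{2}=\left(EY\right)^{2}-2EY\cdot EZ+\left(EZ\right)^{2}$ gives the equality in the second line.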
When $h_{1}>w$, since
$0\leq\frac{1}{T^{\varepsilon}}\sum\nolimits_{n=1}^{N^{\varepsilon}\left(T^{\varepsilon}\right)}S_{n}^{\varepsilon}-\frac{1}{T^{\varepsilon}}S_{N^{\varepsilon}\left(T^{\varepsilon}\right)}^{\varepsilon}\leq\frac{1}{T^{\varepsilon}}\int_{0}^{T^{\varepsilon}}e^{-\frac{1}{\varepsilon}f\left(X_{t}^{\varepsilon}\right)}1_{A}\left(X_{t}^{\varepsilon}\right)dt\leq\frac{1}{T^{\varepsilon}}\sum\nolimits_{n=1}^{N^{\varepsilon}\left(T^{\varepsilon}\right)}S_{n}^{\varepsilon},$
we have
$\displaystyle\mathrm{Var}_{\lambda^{\varepsilon}}\left(\frac{1}{T^{\varepsilon}}\int_{0}^{T^{\varepsilon}}e^{-\frac{1}{\varepsilon}f\left(X_{t}^{\varepsilon}\right)}1_{A}\left(X_{t}^{\varepsilon}\right)dt\right)$
$\displaystyle\quad\leq\mathrm{Var}_{\lambda^{\varepsilon}}\left(\frac{1}{T^{\varepsilon}}\sum\nolimits_{n=1}^{N^{\varepsilon}\left(T^{\varepsilon}\right)}S_{n}^{\varepsilon}\right)+2E_{\lambda^{\varepsilon}}\left(\frac{1}{T^{\varepsilon}}\sum\nolimits_{n=1}^{N^{\varepsilon}\left(T^{\varepsilon}\right)}S_{n}^{\varepsilon}\right)\frac{E_{\lambda^{\varepsilon}}S_{N^{\varepsilon}\left(T^{\varepsilon}\right)}^{\varepsilon}}{T^{\varepsilon}},$
and with the help of (7.2)
$\displaystyle\liminf_{\varepsilon\rightarrow
0}-\varepsilon\log\left(\mathrm{Var}_{\lambda^{\varepsilon}}\left(\frac{1}{T^{\varepsilon}}\int_{0}^{T^{\varepsilon}}e^{-\frac{1}{\varepsilon}f\left(X_{t}^{\varepsilon}\right)}1_{A}\left(X_{t}^{\varepsilon}\right)dt\right)T^{\varepsilon}\right)$
$\displaystyle\quad\geq\min\left\\{\liminf_{\varepsilon\rightarrow
0}-\varepsilon\log\left[\mathrm{Var}_{\lambda^{\varepsilon}}\left(\frac{1}{T^{\varepsilon}}\sum\nolimits_{n=1}^{N^{\varepsilon}\left(T^{\varepsilon}\right)}S_{n}^{\varepsilon}\right)T^{\varepsilon}\right],\right.$
$\displaystyle\quad\qquad\left.\liminf_{\varepsilon\rightarrow
0}-\varepsilon\log\left[E_{\lambda^{\varepsilon}}\left(\frac{1}{T^{\varepsilon}}\sum\nolimits_{n=1}^{N^{\varepsilon}\left(T^{\varepsilon}\right)}S_{n}^{\varepsilon}\right)\frac{E_{\lambda^{\varepsilon}}S_{N^{\varepsilon}\left(T^{\varepsilon}\right)}^{\varepsilon}}{T^{\varepsilon}}T^{\varepsilon}\right]\right\\}.$
We complete the proof in the case of a single cycle by showing both terms are
bounded below by $\min\nolimits_{j\in L}(R_{j}^{(1)}\wedge R_{j}^{(2)})-\eta$,
where we recall
$R_{j}^{(1)}\doteq\inf\nolimits_{x\in
A}\left[2f\left(x\right)+V\left(O_{j},x\right)\right]+W\left(O_{j}\right)-W\left(O_{1}\right),$
$R_{1}^{(2)}\doteq 2\inf\nolimits_{x\in
A}\left[f\left(x\right)+V\left(O_{1},x\right)\right]-h_{1},$
and for $j\in L\setminus\\{1\\}$
$\displaystyle R_{j}^{(2)}$ $\displaystyle\doteq 2\inf\nolimits_{x\in
A}\left[f\left(x\right)+V\left(O_{j},x\right)\right]+W\left(O_{j}\right)-2W\left(O_{1}\right)+W\left(O_{1}\cup
O_{j}\right).$
For the second term, we apply Wald’s first identity, Lemma 7.21, Corollary 8.3
and Lemma 9.2 to find that given $\eta>0,$ there exists $\delta_{0}\in(0,1),$
such that for any $\delta\in(0,\delta_{0})$
$\displaystyle\liminf_{\varepsilon\rightarrow
0}-\varepsilon\log\left[E_{\lambda^{\varepsilon}}\left(\frac{1}{T^{\varepsilon}}\sum\nolimits_{n=1}^{N^{\varepsilon}\left(T^{\varepsilon}\right)}S_{n}^{\varepsilon}\right)\frac{E_{\lambda^{\varepsilon}}S_{N^{\varepsilon}\left(T^{\varepsilon}\right)}^{\varepsilon}}{T^{\varepsilon}}T^{\varepsilon}\right]$
$\displaystyle\quad\geq\liminf_{\varepsilon\rightarrow 0}-\varepsilon\log
T^{\varepsilon}+\liminf_{\varepsilon\rightarrow 0}-\varepsilon\log
E_{\lambda^{\varepsilon}}\left(\frac{1}{T^{\varepsilon}}\sum\nolimits_{n=1}^{N^{\varepsilon}\left(T^{\varepsilon}\right)}S_{n}^{\varepsilon}\right)+\liminf_{\varepsilon\rightarrow
0}-\varepsilon\log\frac{E_{\lambda^{\varepsilon}}S_{N^{\varepsilon}\left(T^{\varepsilon}\right)}^{\varepsilon}}{T^{\varepsilon}}$
$\displaystyle\quad\geq-c+(\inf\nolimits_{x\in
A}\left[f\left(x\right)+W\left(x\right)\right]-W\left(O_{1}\right)-h_{1}-\eta/3)+\varkappa_{\delta}$
$\displaystyle\qquad+(\inf\nolimits_{x\in
A}\left[f\left(x\right)+W\left(x\right)\right]-W\left(O_{1}\right)+\left(c-h_{1}\right)-\eta/3)$
$\displaystyle\quad\geq 2\inf\nolimits_{x\in
A}\left[f\left(x\right)+W\left(x\right)\right]-2W\left(O_{1}\right)-h_{1}-\eta$
$\displaystyle\quad\geq\min\nolimits_{j\in
L}R_{j}^{(2)}-\eta\geq\min\nolimits_{j\in L}(R_{j}^{(1)}\wedge
R_{j}^{(2)})-\eta.$
The third inequality holds by choosing $\delta$ sufficiently small that
$h_{\delta}\geq h_{1}-\eta/3$. The fourth inequality is from Lemma 9.5.
Turning to the first term, we can bound the variance by (6.3):
$\displaystyle\mathrm{Var}_{\lambda^{\varepsilon}}\left(\frac{1}{T^{\varepsilon}}\sum\nolimits_{n=1}^{N^{\varepsilon}\left(T^{\varepsilon}\right)}S_{n}^{\varepsilon}\right)T^{\varepsilon}$
$\displaystyle\leq
2\frac{E_{\lambda^{\varepsilon}}\left(N^{\varepsilon}\left(T^{\varepsilon}\right)\right)}{T^{\varepsilon}}\mathrm{Var}_{\lambda^{\varepsilon}}S_{1}^{\varepsilon}+2\frac{\mathrm{Var}_{\lambda^{\varepsilon}}\left(N^{\varepsilon}\left(T^{\varepsilon}\right)\right)}{T^{\varepsilon}}\left(E_{\lambda^{\varepsilon}}S_{1}^{\varepsilon}\right)^{2}$
$\displaystyle\leq
2\frac{E_{\lambda^{\varepsilon}}\left(N^{\varepsilon}\left(T^{\varepsilon}\right)\right)}{T^{\varepsilon}}E_{\lambda^{\varepsilon}}\left(S_{1}^{\varepsilon}\right)^{2}+2\frac{\mathrm{Var}_{\lambda^{\varepsilon}}\left(N^{\varepsilon}\left(T^{\varepsilon}\right)\right)}{T^{\varepsilon}}\left(E_{\lambda^{\varepsilon}}S_{1}^{\varepsilon}\right)^{2}.$
If we use Corollary 8.3 and Lemma 7.23, then we know that given $\eta>0,$
there exists $\delta_{0}\in(0,1),$ such that for any $\delta\in(0,\delta_{0})$
$\displaystyle\liminf_{\varepsilon\rightarrow
0}-\varepsilon\log\left[\frac{E_{\lambda^{\varepsilon}}\left(N^{\varepsilon}\left(T^{\varepsilon}\right)\right)}{T^{\varepsilon}}E_{\lambda^{\varepsilon}}\left(S_{1}^{\varepsilon}\right)^{2}\right]$
$\displaystyle\geq\liminf_{\varepsilon\rightarrow
0}-\varepsilon\log\frac{E_{\lambda^{\varepsilon}}\left(N^{\varepsilon}\left(T^{\varepsilon}\right)\right)}{T^{\varepsilon}}+\liminf_{\varepsilon\rightarrow
0}-\varepsilon\log
E_{\lambda^{\varepsilon}}\left(S_{1}^{\varepsilon}\right)^{2}$
$\displaystyle\geq\min_{j\in L}(R_{j}^{(1)}\wedge R_{j}^{(2)})-\eta.$
In addition, we can apply Lemma 7.21 and Lemma 8.4 to show that given
$\eta>0,$ there exists $\delta_{0}\in(0,1),$ such that for any
$\delta\in(0,\delta_{0})$
$\displaystyle\liminf_{\varepsilon\rightarrow
0}-\varepsilon\log\left[\frac{\mathrm{Var}_{\lambda^{\varepsilon}}\left(N^{\varepsilon}\left(T^{\varepsilon}\right)\right)}{T^{\varepsilon}}\left(E_{\lambda^{\varepsilon}}S_{1}^{\varepsilon}\right)^{2}\right]$
$\displaystyle\quad\geq\liminf_{\varepsilon\rightarrow
0}-\varepsilon\log\frac{\mathrm{Var}_{\lambda^{\varepsilon}}\left(N^{\varepsilon}\left(T^{\varepsilon}\right)\right)}{T^{\varepsilon}}+2\liminf_{\varepsilon\rightarrow
0}-\varepsilon\log E_{\lambda^{\varepsilon}}S_{1}^{\varepsilon}$
$\displaystyle\quad\geq 2\inf_{x\in
A}\left[f\left(x\right)+W\left(x\right)\right]-2W\left(O_{1}\right)-h_{1}-\eta$
$\displaystyle\quad\geq\min_{j\in L}R_{j}^{(2)}-\eta\geq\min_{j\in
L}(R_{j}^{(1)}\wedge R_{j}^{(2)})-\eta.$
The second-to-last inequality comes from Lemma 9.5.
Hence, we find that given $\eta>0,$ there exists $\delta_{0}\in(0,1),$ such
that for any $\delta\in(0,\delta_{0})$
$\liminf_{\varepsilon\rightarrow
0}-\varepsilon\log\left(\mathrm{Var}_{\lambda^{\varepsilon}}\left(\frac{1}{T^{\varepsilon}}\sum\nolimits_{n=1}^{N^{\varepsilon}\left(T^{\varepsilon}\right)}S_{n}^{\varepsilon}\right)T^{\varepsilon}\right)\geq\min_{j\in
L}(R_{j}^{(1)}\wedge R_{j}^{(2)})-\eta,$
and we are done for the single cycle case.
For the multicycle case, by using a similar argument and applying Lemmas 7.24,
7.25, 8.21, 9.4 and Corollary 8.20, we find that
$\displaystyle\liminf_{\varepsilon\rightarrow
0}-\varepsilon\log\left(\mathrm{Var}_{\lambda^{\varepsilon}}\left(\frac{1}{T^{\varepsilon}}\int_{0}^{T^{\varepsilon}}e^{-\frac{1}{\varepsilon}f\left(X_{t}^{\varepsilon}\right)}1_{A}\left(X_{t}^{\varepsilon}\right)dt\right)T^{\varepsilon}\right)\geq\min_{j\in
L}(R_{j}^{(1)}\wedge R_{j}^{(2)}\wedge R_{j}^{(3,m)})-\eta,$
with
$R_{j}^{(3,m)}\doteq 2\inf_{x\in
A}\left[f\left(x\right)+V\left(O_{j},x\right)\right]+2W\left(O_{j}\right)-2W\left(O_{1}\right)-(m+h_{1}).$
We complete the proof by sending $m\downarrow w-h_{1}$, so that $m+h_{1}\downarrow w$ in the bound above. ∎
###### Proof of Theorem 4.9.
Parts 1, 2 and 3 are from Theorem 4.3, Lemma 4.3 (b) and Theorem 6.1 in [12,
Chapter 6], respectively.
We now turn to part 4. Before giving the proof, we state a result from [12].
The result is Lemma 4.3 (c) in [12, Chapter 6], which says that for any
unstable equilibrium point $O_{j},$ there exists a stable equilibrium point
$O_{i}$ such that $W(O_{j})=W(O_{i})+V(O_{i},O_{j}).$
Now suppose that $\min\nolimits_{j\in L}(\inf\nolimits_{x\in
A}\left[f\left(x\right)+V\left(O_{j},x\right)\right]+W\left(O_{j}\right))$ is
attained at some $\ell\in L$ such that $O_{\ell}$ is unstable (i.e., $\ell\in
L\setminus L_{s}$). Then since there exists a stable equilibrium point $O_{i}$
(i.e., $i\in L_{s}$) such that $W(O_{\ell})=W(O_{i})+V(O_{i},O_{\ell})$ we
find
$\displaystyle\min_{j\in L}\left(\inf_{x\in
A}\left[f\left(x\right)+V\left(O_{j},x\right)\right]+W\left(O_{j}\right)\right)$
$\displaystyle\quad=\inf_{x\in
A}\left[f\left(x\right)+V\left(O_{\ell},x\right)\right]+W\left(O_{\ell}\right)=\inf_{x\in
A}\left[f\left(x\right)+V\left(O_{\ell},x\right)\right]+V(O_{i},O_{\ell})+W(O_{i})$
$\displaystyle\quad\geq\inf_{x\in
A}\left[f\left(x\right)+V\left(O_{i},x\right)\right]+W(O_{i})\geq\min_{j\in
L_{\rm{s}}}\left(\inf_{x\in
A}\left[f\left(x\right)+V\left(O_{j},x\right)\right]+W\left(O_{j}\right)\right)$
$\displaystyle\quad\geq\min_{j\in L}\left(\inf_{x\in
A}\left[f\left(x\right)+V\left(O_{j},x\right)\right]+W\left(O_{j}\right)\right).$
The first inequality follows from the dynamic programming (triangle) inequality $V(O_{i},O_{\ell})+V(O_{\ell},x)\geq V(O_{i},x)$. Therefore, the
minimum is also attained at $i\in L_{\rm{s}}$ and $\min_{j\in
L}R_{j}^{(1)}=\min_{j\in L_{\rm{s}}}R_{j}^{(1)}.$ ∎
## 10 Exponential Return Law and Tail Behavior
In this section we give the proof of Theorem 8.5, which was the key fact
needed to obtain bounds on the distribution of
$N^{\varepsilon}(T^{\varepsilon})$, and the related multicycle analogue. A
result of this type first appears in [6], which asserts that the time needed
to escape from an open subset of the domain of attraction of a stable
equilibrium point (an open subset containing the equilibrium point itself) has
an asymptotically exponential distribution. [6] also proves a nonasymptotic
bound, likewise of exponential form, on the tail of the probability of escape
before a certain time. Theorem 8.5 is a more complicated statement, in that it asserts the
asymptotically exponential form for the return time to the neighborhood of
$O_{1}$. To prove this we build on the results of [6], and decompose the
return time into times of transitions between equilibrium points. This in turn
will require the proof of a number of related results, such as establishing
the independence of certain estimates with respect to initial distributions.
The existence of an exponentially distributed first hitting time is a central
topic in the theory of quasistationary distributions. For a recent book length
treatment of the topic we refer to [5]. However, so far as we can tell the
types of situations we encounter are not covered by existing results, and so
as noted we develop what is needed using [6] as the starting point. See Remark
3.15.
For any $j\in L,$ define $\upsilon_{j}^{\varepsilon}$ as the hitting time of
$\partial B_{\delta}(O_{k})$ for some $k\in L\setminus\\{j\\}$, i.e.,
$\upsilon_{j}^{\varepsilon}\doteq\inf\left\\{t>0:X_{t}^{\varepsilon}\in\cup_{k\in
L\setminus\\{j\\}}\partial B_{\delta}(O_{k})\right\\}.$ (10.1)
We will prove the following result for first hitting times of another
equilibrium point, and later extend it to return times.
###### Lemma 10.1.
For any $j\in L_{\rm{s}}$, there exists $\delta_{0}\in(0,1)$ such that for any
$\delta\in(0,\delta_{0})$ and any distribution $\lambda^{\varepsilon}$ on
$\partial B_{\delta}(O_{j}),$
$\lim_{\varepsilon\rightarrow 0}\varepsilon\log
E_{\lambda^{\varepsilon}}\upsilon_{j}^{\varepsilon}=\min_{y\in\cup_{k\in
L\setminus\\{j\\}}\partial B_{\delta}(O_{k})}V(O_{j},y)\text{ and
}\upsilon_{j}^{\varepsilon}/E_{\lambda^{\varepsilon}}\upsilon_{j}^{\varepsilon}\overset{d}{\rightarrow}\rm{Exp}(1).$
Moreover, there exists $\varepsilon_{0}\in(0,1)$ and a constant $\tilde{c}>0$
such that
$P_{\lambda^{\varepsilon}}\left(\upsilon_{j}^{\varepsilon}/E_{\lambda^{\varepsilon}}\upsilon_{j}^{\varepsilon}>t\right)\leq
e^{-\tilde{c}t}$
for any $t>0$ and any $\varepsilon\in(0,\varepsilon_{0}).$
The organization of this section is as follows. The first part of Lemma 10.1,
concerned with mean first hitting times, is proved in Subsection 10.1, while
the second part, concerned with the asymptotically exponential distribution
when starting from a special initial distribution, is proved in Subsection
10.2. The last part of the lemma, which gives bounds on the tail of the
hitting time of another equilibrium point, again when starting from a special
initial distribution, is proved in Subsection 10.3. We then extend the second
and third parts of Lemma 10.1 to general initial distributions in Subsection
10.4 and Subsection 10.5. The last two subsections then extend all of Lemma
10.1 to return times for single cycles and multicycles, respectively.
### 10.1 Mean first hitting time
###### Lemma 10.2.
For any $\delta>0$ sufficiently small and $x\in\partial B_{\delta}(O_{j})$
with $j\in L_{\rm{s}}$
$\lim_{\varepsilon\rightarrow 0}\varepsilon\log
E_{x}\upsilon_{j}^{\varepsilon}=\min_{y\in\cup_{k\in
L\setminus\\{j\\}}\partial B_{\delta}(O_{k})}V(O_{j},y).$ (10.2)
###### Proof.
For the given $j\in L_{\rm{s}}$ let $D_{j}$ denote the corresponding domain of
attraction. We claim there is $k\in L\setminus L_{\rm{s}}$ such that
$q_{j}\doteq\inf_{y\in\partial D_{j}}V(O_{j},y)=V(O_{j},O_{k}).$
Since $V(O_{j},\cdot)$ is continuous and $\partial D_{j}$ is compact, there is
a point $y^{\ast}\in\partial D_{j}$ such that $q_{j}=V(O_{j},y^{\ast})$. If
$y^{\ast}\in\cup_{k\in L\setminus L_{\rm{s}}}O_{k}$ then we are done. If this
is not true, then since $y^{\ast}\notin(\cup_{k\in
L_{\rm{s}}}D_{k})\cup(\cup_{k\in L\setminus L_{\rm{s}}}O_{k})$, and since the
solution to $\dot{\phi}=b(\phi),\phi(0)=y^{\ast}$ must converge to $\cup_{k\in
L}O_{k}$ as $t\rightarrow\infty$, it must in fact converge to a point in
$\cup_{k\in L\setminus L_{\rm{s}}}O_{k}$, say $O_{k}$. Since such trajectories
have zero cost, by a standard argument for any $\varepsilon>0$ we can
construct by concatenation a trajectory that connects $O_{j}$ to $O_{k}$ in
finite time and with cost less than $q_{j}+\varepsilon$. Since $\varepsilon>0$
is arbitrary we have $q_{j}=V(O_{j},O_{k})$.
There may be more than one $l\in L\setminus L_{\rm{s}}$ such that
$O_{l}\in\partial D_{j}$ and $q_{j}=V(O_{j},O_{l})$, but we can assume that
for some $k\in L\setminus L_{\rm{s}}$ and $\bar{y}\in\partial
B_{\delta}(O_{k})$ we attain the min in (10.2). Then $\bar{q}_{j}\doteq
V(O_{j},\bar{y})\leq q_{j}$, and we need to show $\lim_{\varepsilon\rightarrow
0}\varepsilon\log E_{x}\upsilon_{j}^{\varepsilon}=\bar{q}_{j}$.
Given $s<\bar{q}_{j}$, let $D_{j}(s)=\\{x:V(O_{j},x)\leq s\\}$ and assume $s$
is large enough that $B_{\delta}(O_{j})\subset D_{j}(s)^{\circ}$. Then
$D_{j}(s)\subset D_{j}^{\circ}$ is closed and contained in the open set
$D_{j}\setminus\cup_{l\in L\setminus\\{j\\}}B_{\delta}(O_{l})$, and thus the
time to reach $\partial D_{j}(s)$ is never greater than
$\upsilon_{j}^{\varepsilon}$. Given $\eta>0$ we can find a set
$D_{j}^{\eta}(s)$ that is contained in $D_{j}(s)$ and satisfies the conditions
of [12, Theorem 4.1, Chapter 4], and also $\inf_{z\in\partial
D_{j}^{\eta}(s)}V(O_{j},z)\geq s-\eta$. This theorem gives the equality in the
following display:
$\displaystyle\liminf_{\varepsilon\rightarrow 0}\varepsilon\log
E_{x}\upsilon_{j}^{\varepsilon}\geq\liminf_{\varepsilon\rightarrow
0}\varepsilon\log E_{x}\inf\\{t\geq 0:X_{t}^{\varepsilon}\in\partial
D_{j}^{\eta}(s)\\}=\inf_{z\in\partial D_{j}^{\eta}(s)}V(O_{j},z)\geq s-\eta.$
Letting $\eta\downarrow 0$ and then $s\uparrow\bar{q}_{j}$ gives
$\liminf_{\varepsilon\rightarrow 0}\varepsilon\log
E_{x}\upsilon_{j}^{\varepsilon}\geq\bar{q}_{j}$.
For the reverse inequality we also adapt an argument from the proof of [12,
Theorem 4.1, Chapter 4]. One can find $T_{1}<\infty$ such that the probability
for $X_{t}^{\varepsilon}$ to reach $\cup_{l\in L}B_{\delta}(O_{l})$ by time
$T_{1}$ from any $x\in M\setminus\cup_{l\in L}B_{\delta}(O_{l})$ is bounded
below by $1/2$. (This follows easily from the law of large numbers and that
all trajectories of the noiseless system reach $\cup_{l\in
L}B_{\delta/2}(O_{l})$ in some finite time that is bounded uniformly in $x\in
M\setminus\cup_{l\in L}B_{\delta}(O_{l})$.) Also, given $\eta>0$ there is
$T_{2}<\infty$ and $\varepsilon_{0}>0$ such that $P_{x}\\{X_{t}^{\varepsilon}$
reaches $\cup_{k\in L\setminus\\{j\\}}\partial B_{\delta}(O_{k})$ before
$T_{2}\\}\geq e^{-(\bar{q}_{j}+\eta)/\varepsilon}$ for all $x\in\partial
B_{\delta}(O_{j})$. It then follows from the strong Markov property that for
any $x\in M\setminus\cup_{l\in L}B_{\delta}(O_{l})$
$P_{x}\\{\upsilon_{j}^{\varepsilon}\leq T_{1}+T_{2}\\}\geq
e^{-\frac{1}{\varepsilon}(\bar{q}_{j}+\eta)}/2.$
Using the ordinary Markov property we have
$\displaystyle E_{x}\upsilon_{j}^{\varepsilon}$
$\displaystyle\leq\sum\nolimits_{n=0}^{\infty}(n+1)(T_{1}+T_{2})P_{x}\\{n(T_{1}+T_{2})<\upsilon_{j}^{\varepsilon}\leq(n+1)(T_{1}+T_{2})\\}$
$\displaystyle=(T_{1}+T_{2})\sum\nolimits_{n=0}^{\infty}P_{x}\\{\upsilon_{j}^{\varepsilon}>n(T_{1}+T_{2})\\}$
$\displaystyle\leq(T_{1}+T_{2})\sum\nolimits_{n=0}^{\infty}\left[1-\inf_{x\in
M\setminus\cup_{l\in
L}B_{\delta}(O_{l})}P_{x}\\{\upsilon_{j}^{\varepsilon}\leq
T_{1}+T_{2}\\}\right]^{n}$ $\displaystyle=(T_{1}+T_{2})\left(\inf_{x\in
M\setminus\cup_{l\in
L}B_{\delta}(O_{l})}P_{x}\\{\upsilon_{j}^{\varepsilon}\leq
T_{1}+T_{2}\\}\right)^{-1}$ $\displaystyle\leq
2(T_{1}+T_{2})e^{\frac{1}{\varepsilon}(\bar{q}_{j}+\eta)}.$
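The second equality in the display uses the geometric series $\sum\nolimits_{n=0}^{\infty}\left(1-q\right)^{n}=1/q$, valid for $q\in(0,1]$.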
Thus $\limsup_{\varepsilon\rightarrow 0}\varepsilon\log
E_{x}\upsilon_{j}^{\varepsilon}\leq\bar{q}_{j}+\eta$, and letting
$\eta\downarrow 0$ completes the proof. ∎
###### Remark 10.3.
By the standard Freidlin-Wentzell theory, the convergence asserted in Lemma
10.2 is uniform on $\partial B_{\delta}(O_{j})$. Therefore, we have the first
part of Lemma 10.1.
### 10.2 Asymptotically exponential distribution
###### Lemma 10.4.
For each $j\in L_{\rm{s}}$ there is a distribution $u^{\varepsilon}$ on
$\partial B_{2\delta}(O_{j})$ such that
$\upsilon_{j}^{\varepsilon}/E_{u^{\varepsilon}}\upsilon_{j}^{\varepsilon}\overset{d}{\rightarrow}\rm{Exp}(1).$
###### Proof.
To simplify notation and since it plays no role, we write $j=1$ throughout the
proof. We call $\partial B_{\delta}\left(O_{1}\right)$ and $\partial
B_{2\delta}\left(O_{1}\right)$ the inner and outer rings of $O_{1}$. We can
then decompose the hitting time as
$\upsilon_{1}^{\varepsilon}=\sum\nolimits_{k=1}^{\mathcal{N}^{\varepsilon}-1}\theta_{k}^{\varepsilon}+\zeta^{\varepsilon},$
(10.3)
where $\theta_{k}^{\varepsilon}$ is the $k$-th amount of time that the process
travels from the outer ring to the inner ring and back without visiting
$\cup_{j\in L\setminus\\{1\\}}\partial B_{\delta}(O_{j})$,
$\zeta^{\varepsilon}$ is the amount of time that the process travels from the
outer ring directly to $\cup_{j\in L\setminus\\{1\\}}\partial
B_{\delta}(O_{j})$ without visiting the inner ring, and
$\mathcal{N}^{\varepsilon}-1$ is the number of times that the process goes
back and forth between the inner ring and outer ring. (It is assumed that
$\delta>0$ is small enough that $B_{2\delta}\left(O_{1}\right)\subset
M\setminus\cup_{j\in L\setminus\\{1\\}}B_{2\delta}(O_{j})$.) Note that
the expected value of $\theta_{k}^{\varepsilon}$ grows exponentially in
$1/\varepsilon$, at a rate of order $\delta$, due to the time taken to travel
from the inner ring to the outer ring, while the expected value of
$\zeta^{\varepsilon}$ is uniformly bounded.
For any set $A,$ define the first hitting time by
$\tau\left(A\right)\doteq\inf\left\\{t>0:X_{t}^{\varepsilon}\in A\right\\}.$
Consider the conditional transition probability from $x\in\partial
B_{2\delta}\left(O_{1}\right)$ to $y\in\partial B_{\delta}\left(O_{1}\right)$
given by
$\psi_{1}^{\varepsilon}\left(dy|x\right)\doteq P\left(X_{\tau\left(\partial
B_{\delta}\left(O_{1}\right)\right)}^{\varepsilon}\in
dy|X_{0}^{\varepsilon}=x,\text{ }X_{t}^{\varepsilon}\notin\cup_{j\in
L\setminus\\{1\\}}\partial B_{\delta}(O_{j}),t\in[0,\tau(\partial
B_{\delta}\left(O_{1}\right)))\right),$
and the transition probability from $y\in\partial
B_{\delta}\left(O_{1}\right)$ to $x\in\partial B_{2\delta}\left(O_{1}\right)$
given by
$\psi_{2}^{\varepsilon}\left(dx|y\right)\doteq P\left(X_{\tau\left(\partial
B_{2\delta}\left(O_{1}\right)\right)}^{\varepsilon}\in
dx|X_{0}^{\varepsilon}=y\right).$ (10.4)
Then we can create a transition probability from $x\in\partial
B_{2\delta}\left(O_{1}\right)$ to $y\in\partial B_{2\delta}\left(O_{1}\right)$
by
$\psi^{\varepsilon}\left(dy|x\right)\doteq\int_{\partial
B_{\delta}\left(O_{1}\right)}\psi_{2}^{\varepsilon}\left(dy|z\right)\psi_{1}^{\varepsilon}\left(dz|x\right).$
(10.5)
Since $\partial B_{2\delta}\left(O_{1}\right)$ is compact and
$\\{X_{t}^{\varepsilon}\\}_{t}$ is non-degenerate and Feller, there exists an
invariant measure $u^{\varepsilon}\in\mathcal{P}\left(\partial
B_{2\delta}\left(O_{1}\right)\right)$ with respect to the transition
probability $\psi^{\varepsilon}\left(dy|x\right).$ If we start with the
distribution $u^{\varepsilon}$ on $\partial B_{2\delta}\left(O_{1}\right)$,
then it follows from the definition of $u^{\varepsilon}$ and the strong Markov
property that the
$\\{\theta_{k}^{\varepsilon}\\}_{k<\mathcal{N}^{\varepsilon}}$ are iid.
Moreover, the indicators of escape (i.e., $1_{\\{\tau(\cup_{j\in
L\setminus\\{1\\}}\partial B_{\delta}(O_{j}))=\tau(\cup_{j\in L}\partial
B_{\delta}(O_{j}))\\}}$) are iid Bernoulli, and we write them as
$Y_{k}^{\varepsilon}$ with
$P_{u^{\varepsilon}}(Y_{k}^{\varepsilon}=1)=e^{-h_{1}^{\varepsilon}(\delta)/\varepsilon},$
where $\delta>0$ is from the construction,
$h_{1}^{\varepsilon}(\delta)\rightarrow h_{1}(\delta)$ as
$\varepsilon\rightarrow 0$ and $h_{1}(\delta)\uparrow h_{1}$ as
$\delta\downarrow 0$ with $h_{1}=\min_{j\in L\setminus\\{1\\}}V(O_{1},O_{j})$.
Note that
$\mathcal{N}^{\varepsilon}=\inf\left\\{k\in\mathbb{N}:Y_{k}^{\varepsilon}=1\right\\}.$
We therefore have
$P_{u^{\varepsilon}}(\mathcal{N}^{\varepsilon}=k)=(1-e^{-h_{1}^{\varepsilon}(\delta)/\varepsilon})^{k-1}e^{-h_{1}^{\varepsilon}(\delta)/\varepsilon},$
and thus
$E_{u^{\varepsilon}}\upsilon_{1}^{\varepsilon}=E_{u^{\varepsilon}}\left[\sum\nolimits_{j=1}^{\mathcal{N}^{\varepsilon}-1}\theta_{j}^{\varepsilon}\right]+E_{u^{\varepsilon}}\zeta^{\varepsilon}=E_{u^{\varepsilon}}(\mathcal{N}^{\varepsilon}-1)E_{u^{\varepsilon}}\theta_{1}^{\varepsilon}+E_{u^{\varepsilon}}\zeta^{\varepsilon},$
where the second equality comes from Wald’s identity. Using
$\sum_{k=1}^{\infty}ka^{k-1}={1/(1-a)^{2}}$ for $a\in[0,1)$, we also have
$\displaystyle
E_{u^{\varepsilon}}\mathcal{N}^{\varepsilon}=\sum\nolimits_{k=1}^{\infty}k(1-e^{-h_{1}^{\varepsilon}(\delta)/\varepsilon})^{k-1}e^{-h_{1}^{\varepsilon}(\delta)/\varepsilon}=e^{-h_{1}^{\varepsilon}(\delta)/\varepsilon}e^{2h_{1}^{\varepsilon}(\delta)/\varepsilon}=e^{h_{1}^{\varepsilon}(\delta)/\varepsilon},$
and therefore
$E_{u^{\varepsilon}}\upsilon_{1}^{\varepsilon}=e^{h_{1}^{\varepsilon}(\delta)/\varepsilon}E_{u^{\varepsilon}}\theta_{1}^{\varepsilon}+(E_{u^{\varepsilon}}\zeta^{\varepsilon}-E_{u^{\varepsilon}}\theta_{1}^{\varepsilon}).$
(10.6)
Next consider the characteristic function of
$\upsilon_{1}^{\varepsilon}/E_{u^{\varepsilon}}\upsilon_{1}^{\varepsilon}$
$\phi^{\varepsilon}(t)=E_{u^{\varepsilon}}e^{it\upsilon_{1}^{\varepsilon}/E_{u^{\varepsilon}}\upsilon_{1}^{\varepsilon}}=\phi_{\upsilon}^{\varepsilon}(t/E_{u^{\varepsilon}}\upsilon_{1}^{\varepsilon}),$
where $\phi_{\upsilon}^{\varepsilon}$ is the characteristic function of
$\upsilon_{1}^{\varepsilon}.$ By (10.3), we have
$\displaystyle\phi_{\upsilon}^{\varepsilon}(s)$
$\displaystyle=E_{u^{\varepsilon}}e^{is\left(\sum_{k=1}^{\mathcal{N}^{\varepsilon}-1}\theta_{k}^{\varepsilon}+\zeta^{\varepsilon}\right)}=E_{u^{\varepsilon}}e^{is\zeta^{\varepsilon}}E_{u^{\varepsilon}}e^{is\left(\sum\nolimits_{k=1}^{\mathcal{N}^{\varepsilon}-1}\theta_{k}^{\varepsilon}\right)}$
$\displaystyle=\phi_{\zeta}^{\varepsilon}(s)\sum\nolimits_{k=1}^{\infty}(1-e^{-h^{\varepsilon}_{1}(\delta)/\varepsilon})^{k-1}e^{-h^{\varepsilon}_{1}(\delta)/\varepsilon}\phi_{\theta}^{\varepsilon}(s)^{k-1}$
$\displaystyle=\phi_{\zeta}^{\varepsilon}(s)e^{-h^{\varepsilon}_{1}(\delta)/\varepsilon}(1-[(1-e^{-h^{\varepsilon}_{1}(\delta)/\varepsilon})\phi_{\theta}^{\varepsilon}(s)])^{-1},$
where $\phi_{\theta}^{\varepsilon}$ and $\phi_{\zeta}^{\varepsilon}$ are the
characteristic functions of $\theta_{1}^{\varepsilon}$ and
$\zeta^{\varepsilon}$, respectively. We want to show that for any
$t\in\mathbb{R}$
$\phi^{\varepsilon}(t)=\phi_{\upsilon}^{\varepsilon}(t/E_{u^{\varepsilon}}\upsilon_{1}^{\varepsilon})\rightarrow
1/(1-it)\text{ as }\varepsilon\rightarrow 0.$
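Recall that $1/(1-it)$ is the characteristic function of an $\mathrm{Exp}(1)$ random variable, since
$\int_{0}^{\infty}e^{itx}e^{-x}dx=\frac{1}{1-it}\text{ for }t\in\mathbb{R},$
so that, once this convergence is established, Lévy’s continuity theorem yields $\upsilon_{1}^{\varepsilon}/E_{u^{\varepsilon}}\upsilon_{1}^{\varepsilon}\overset{d}{\rightarrow}\mathrm{Exp}(1)$.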
We first show that
$\phi_{\zeta}^{\varepsilon}(t/E_{u^{\varepsilon}}\upsilon_{1}^{\varepsilon})\rightarrow
1.$ By definition,
$\phi_{\zeta}^{\varepsilon}\left(t/E_{u^{\varepsilon}}\upsilon_{1}^{\varepsilon}\right)=E_{u^{\varepsilon}}\cos\left(t\zeta^{\varepsilon}/E_{u^{\varepsilon}}\upsilon_{1}^{\varepsilon}\right)+iE_{u^{\varepsilon}}\sin\left(t\zeta^{\varepsilon}/E_{u^{\varepsilon}}\upsilon_{1}^{\varepsilon}\right).$
According to [12, Lemma 1.9, Chapter 6], we know that there exist
$T_{0}\in(0,\infty)$ and $\beta>0$ such that for any $T\in(T_{0},\infty)$ and
for all $\varepsilon$ sufficiently small
$P_{u^{\varepsilon}}\left(\zeta^{\varepsilon}>T\right)\leq
e^{-\frac{1}{\varepsilon}\beta\left(T-T_{0}\right)},$ (10.7)
and therefore for any bounded and continuous function
$f:\mathbb{R}\rightarrow\mathbb{R}$
$\displaystyle\left|E_{u^{\varepsilon}}f\left(t\zeta^{\varepsilon}/E_{u^{\varepsilon}}\upsilon_{1}^{\varepsilon}\right)-f\left(0\right)\right|\leq
2\left\|f\right\|_{\infty}P_{u^{\varepsilon}}\left(\zeta^{\varepsilon}>T\right)+E_{u^{\varepsilon}}\left[\left|f\left(t\zeta^{\varepsilon}/E_{u^{\varepsilon}}\upsilon_{1}^{\varepsilon}\right)-f\left(0\right)\right|1_{\left\\{\zeta^{\varepsilon}\leq
T\right\\}}\right].$
The first term in the last display goes to $0$ as $\varepsilon\rightarrow 0$.
For any fixed $t,$ $t/E_{u^{\varepsilon}}\upsilon_{1}^{\varepsilon}\rightarrow
0$ as $\varepsilon\rightarrow 0$. Since $f$ is continuous, the second term in
the last display also converges to $0$ as $\varepsilon\rightarrow 0.$
The convergence $\phi_{\zeta}^{\varepsilon}(t/E_{u^{\varepsilon}}\upsilon_{1}^{\varepsilon})\rightarrow
1$ then follows by taking $f$ to be $\sin x$ and $\cos x.$
It remains to show that for any $t\in\mathbb{R}$
$e^{-h^{\varepsilon}_{1}(\delta)/\varepsilon}\left(1-\left[(1-e^{-h^{\varepsilon}_{1}(\delta)/\varepsilon})\phi_{\theta}^{\varepsilon}(t/E_{u^{\varepsilon}}\upsilon_{1}^{\varepsilon})\right]\right)^{-1}\rightarrow
1/(1-it)$
as $\varepsilon\rightarrow 0.$ Observe that
$e^{-h^{\varepsilon}_{1}(\delta)/\varepsilon}\left(1-\left[(1-e^{-h^{\varepsilon}_{1}(\delta)/\varepsilon})\phi_{\theta}^{\varepsilon}(t/E_{u^{\varepsilon}}\upsilon_{1}^{\varepsilon})\right]\right)^{-1}=\left(\frac{1-\phi_{\theta}^{\varepsilon}(t/E_{u^{\varepsilon}}\upsilon_{1}^{\varepsilon})}{e^{-h^{\varepsilon}_{1}(\delta)/\varepsilon}}+\phi_{\theta}^{\varepsilon}(t/E_{u^{\varepsilon}}\upsilon_{1}^{\varepsilon})\right)^{-1},$
so it suffices to show that
$\phi_{\theta}^{\varepsilon}(t/E_{u^{\varepsilon}}\upsilon_{1}^{\varepsilon})\rightarrow
1$ and
$[1-\phi_{\theta}^{\varepsilon}(t/E_{u^{\varepsilon}}\upsilon_{1}^{\varepsilon})]/e^{-h^{\varepsilon}_{1}(\delta)/\varepsilon}\rightarrow-
it$ as $\varepsilon\rightarrow 0.$
For the former, note that by (10.6)
$0\leq
E_{u^{\varepsilon}}\left(t\theta_{1}^{\varepsilon}/E_{u^{\varepsilon}}\upsilon_{1}^{\varepsilon}\right)\leq\frac{tE_{u^{\varepsilon}}\theta_{1}^{\varepsilon}}{\left(e^{h^{\varepsilon}_{1}(\delta)/\varepsilon}-1\right)E_{u^{\varepsilon}}\theta_{1}^{\varepsilon}}\rightarrow
0$
as $\varepsilon\rightarrow 0,$ and so
$t\theta_{1}^{\varepsilon}/E_{u^{\varepsilon}}\upsilon_{1}^{\varepsilon}$
converges to $0$ in distribution. Moreover, since $e^{ix}$ is bounded and
continuous, we find
$\phi_{\theta}^{\varepsilon}(t/E_{u^{\varepsilon}}\upsilon_{1}^{\varepsilon})\rightarrow
1$. For the second part, using
$x-{x^{3}}/{3!}\leq\sin x\leq x\text{ for }x\geq 0\text{ and }1-{x^{2}}/{2}\leq\cos x\leq 1\text{ for }x\in\mathbb{R}$
(the case $t<0$ follows by symmetry) we find that
$0\leq\frac{1-E_{u^{\varepsilon}}\cos\left(t\theta_{1}^{\varepsilon}/E_{u^{\varepsilon}}\upsilon_{1}^{\varepsilon}\right)}{e^{-h_{1}^{\varepsilon}(\delta)/\varepsilon}}\leq\frac{E_{u^{\varepsilon}}\left(t\theta_{1}^{\varepsilon}/E_{u^{\varepsilon}}\upsilon_{1}^{\varepsilon}\right)^{2}}{2e^{-h_{1}^{\varepsilon}(\delta)/\varepsilon}}$
and
$\frac{E_{u^{\varepsilon}}\left(t\theta_{1}^{\varepsilon}/E_{u^{\varepsilon}}\upsilon_{1}^{\varepsilon}\right)}{e^{-h_{1}^{\varepsilon}(\delta)/\varepsilon}}-\frac{E_{u^{\varepsilon}}\left(t\theta_{1}^{\varepsilon}/E_{u^{\varepsilon}}\upsilon_{1}^{\varepsilon}\right)^{3}}{3!e^{-h_{1}^{\varepsilon}(\delta)/\varepsilon}}\leq\frac{E_{u^{\varepsilon}}\sin\left(t\theta_{1}^{\varepsilon}/E_{u^{\varepsilon}}\upsilon_{1}^{\varepsilon}\right)}{e^{-h_{1}^{\varepsilon}(\delta)/\varepsilon}}\leq\frac{E_{u^{\varepsilon}}\left(t\theta_{1}^{\varepsilon}/E_{u^{\varepsilon}}\upsilon_{1}^{\varepsilon}\right)}{e^{-h_{1}^{\varepsilon}(\delta)/\varepsilon}}.$
From our previous observation regarding the distribution of
$\zeta^{\varepsilon}$ and (10.6)
$\frac{E_{u^{\varepsilon}}\left(t\theta_{1}^{\varepsilon}/E_{u^{\varepsilon}}\upsilon_{1}^{\varepsilon}\right)}{e^{-h_{1}^{\varepsilon}(\delta)/\varepsilon}}\rightarrow
t\text{ as }\varepsilon\rightarrow 0.$
In addition, since $\theta_{1}^{\varepsilon}$ can be viewed as the time from
the outer ring to the inner ring without visiting $\cup_{j\in
L\setminus\\{1\\}}\partial B_{\delta}(O_{j})$ plus the time from the inner
ring to the outer ring, by applying (10.7) to the former and using [6, Theorem
4 and Corollary 1] under Condition 3.13 to the latter, we find that
$P_{u^{\varepsilon}}\left(\theta_{1}^{\varepsilon}/E_{u^{\varepsilon}}\theta_{1}^{\varepsilon}>t\right)\leq
2e^{-t}$ (10.8)
for all $t\in[0,\infty)$ and $\varepsilon$ sufficiently small. Using the
identity $EX^{k}=k\int_{0}^{\infty}t^{k-1}P\left(X>t\right)dt$ for a
nonnegative random variable $X$, this implies that
$\displaystyle
E_{u^{\varepsilon}}\left(\theta_{1}^{\varepsilon}/E_{u^{\varepsilon}}\theta_{1}^{\varepsilon}\right)^{2}=2\int_{0}^{\infty}tP_{u^{\varepsilon}}\left(\theta_{1}^{\varepsilon}/E_{u^{\varepsilon}}\theta_{1}^{\varepsilon}>t\right)dt\leq
4\int_{0}^{\infty}te^{-t}dt=4$
and similarly
$E_{u^{\varepsilon}}\left(\theta_{1}^{\varepsilon}/E_{u^{\varepsilon}}\theta_{1}^{\varepsilon}\right)^{3}=3\int_{0}^{\infty}t^{2}P_{u^{\varepsilon}}\left(\theta_{1}^{\varepsilon}/E_{u^{\varepsilon}}\theta_{1}^{\varepsilon}>t\right)dt\leq
6\int_{0}^{\infty}t^{2}e^{-t}dt=12.$ Then combined with (10.6), we have
$0\leq\frac{E_{u^{\varepsilon}}\left(t\theta_{1}^{\varepsilon}/E_{u^{\varepsilon}}\upsilon_{1}^{\varepsilon}\right)^{2}}{2e^{-h_{1}^{\varepsilon}(\delta)/\varepsilon}}\leq\frac{t^{2}E_{u^{\varepsilon}}\left(\theta_{1}^{\varepsilon}/E_{u^{\varepsilon}}\theta_{1}^{\varepsilon}\right)^{2}}{2e^{-h_{1}^{\varepsilon}(\delta)/\varepsilon}(e^{h_{1}^{\varepsilon}(\delta)/\varepsilon}-1)^{2}}\rightarrow
0$
and
$0\leq\frac{E_{u^{\varepsilon}}\left(t\theta_{1}^{\varepsilon}/E_{u^{\varepsilon}}\upsilon_{1}^{\varepsilon}\right)^{3}}{3!e^{-h_{1}^{\varepsilon}(\delta)/\varepsilon}}\leq\frac{t^{3}E_{u^{\varepsilon}}\left(\theta_{1}^{\varepsilon}/E_{u^{\varepsilon}}\theta_{1}^{\varepsilon}\right)^{3}}{3!e^{-h_{1}^{\varepsilon}(\delta)/\varepsilon}(e^{h_{1}^{\varepsilon}(\delta)/\varepsilon}-1)^{3}}\rightarrow
0.$
Therefore, we have shown that for any $t\in\mathbb{R}$
$\frac{1-\phi_{\theta}^{\varepsilon}(t/E_{u^{\varepsilon}}\upsilon_{1}^{\varepsilon})}{e^{-h^{\varepsilon}_{1}(\delta)/\varepsilon}}=\frac{1-E_{u^{\varepsilon}}\cos\left(t\theta_{1}^{\varepsilon}/E_{u^{\varepsilon}}\upsilon_{1}^{\varepsilon}\right)}{e^{-h^{\varepsilon}_{1}(\delta)/\varepsilon}}-i\frac{E_{u^{\varepsilon}}\sin\left(t\theta_{1}^{\varepsilon}/E_{u^{\varepsilon}}\upsilon_{1}^{\varepsilon}\right)}{e^{-h^{\varepsilon}_{1}(\delta)/\varepsilon}}\rightarrow-
it.$
∎
###### Remark 10.5.
From the proof of Lemma 10.4, we actually know that
$\phi_{\upsilon}^{\varepsilon}(t/E_{u^{\varepsilon}}\upsilon_{1}^{\varepsilon})\rightarrow
1/(1-it)$
uniformly on any compact set in $\mathbb{R}$ as $\varepsilon\rightarrow 0$.
### 10.3 Tail probability
The goal of this subsection is to prove the following.
###### Lemma 10.6.
For each $j\in L_{\rm{s}}$ there is a distribution $u^{\varepsilon}$ on
$\partial B_{2\delta}(O_{j})$ and $\tilde{c}>0$ such that for any
$t\in[0,\infty)$,
$P_{u^{\varepsilon}}(\upsilon_{j}^{\varepsilon}/E_{u^{\varepsilon}}\upsilon_{j}^{\varepsilon}>t)\leq
e^{-\tilde{c}t}$ (here $\upsilon_{j}^{\varepsilon}$ and $u^{\varepsilon}$ are
defined as in the last subsection).
###### Proof.
As in the last subsection we give the proof for the case $j=1$. To begin we
note that for any $\alpha>0$ Chebyshev’s inequality implies
$P_{u^{\varepsilon}}\left(\upsilon_{1}^{\varepsilon}/E_{u^{\varepsilon}}\upsilon_{1}^{\varepsilon}>t\right)=P_{u^{\varepsilon}}(e^{\alpha\upsilon_{1}^{\varepsilon}/E_{u^{\varepsilon}}\upsilon_{1}^{\varepsilon}}>e^{\alpha
t})\leq e^{-\alpha t}\cdot
E_{u^{\varepsilon}}e^{\alpha\upsilon_{1}^{\varepsilon}/E_{u^{\varepsilon}}\upsilon_{1}^{\varepsilon}}.$
By picking $\alpha=\alpha^{\ast}\doteq 1/8$, it suffices to show that
$E_{u^{\varepsilon}}e^{{\alpha^{\ast}\upsilon_{1}^{\varepsilon}}/{E_{u^{\varepsilon}}\upsilon_{1}^{\varepsilon}}}$
is bounded by a constant. We will do this by showing how the finiteness of
$E_{u^{\varepsilon}}e^{{\alpha^{\ast}\upsilon_{1}^{\varepsilon}}/{E_{u^{\varepsilon}}\upsilon_{1}^{\varepsilon}}}$
is implied by the finiteness of
$E_{u^{\varepsilon}}e^{{\alpha^{\ast}\theta_{1}^{\varepsilon}}/{E_{u^{\varepsilon}}\upsilon_{1}^{\varepsilon}}}$
and
$E_{u^{\varepsilon}}e^{{\alpha^{\ast}\zeta^{\varepsilon}}/{E_{u^{\varepsilon}}\upsilon_{1}^{\varepsilon}}}.$
Using (10.8) we find that for any $\alpha>0$
$P_{u^{\varepsilon}}(e^{\alpha\theta_{1}^{\varepsilon}/E_{u^{\varepsilon}}\theta_{1}^{\varepsilon}}>t)\leq
2e^{-\frac{1}{\alpha}\log t}=2t^{-\frac{1}{\alpha}}$
for all $t\in[1,\infty)$ and $\varepsilon$ sufficiently small. Then (10.6)
implies
$E_{u^{\varepsilon}}\upsilon_{1}^{\varepsilon}\geq\left(e^{h^{\varepsilon}_{1}\left(\delta\right)/\varepsilon}-1\right)E_{u^{\varepsilon}}\theta_{1}^{\varepsilon}$
and therefore
$\displaystyle
E_{u^{\varepsilon}}e^{\alpha^{\ast}\theta_{1}^{\varepsilon}/E_{u^{\varepsilon}}\upsilon_{1}^{\varepsilon}}$
$\displaystyle\leq\int_{0}^{1}P_{u^{\varepsilon}}\left(\exp\left(\alpha^{\ast}\theta_{1}^{\varepsilon}/[{(e}^{h^{\varepsilon}_{1}\left(\delta\right)/\varepsilon}-1{)}E_{u^{\varepsilon}}\theta_{1}^{\varepsilon}]\right)>t\right)dt$
$\displaystyle\quad+\int_{1}^{\infty}P_{u^{\varepsilon}}\left(\exp\left(\alpha^{\ast}\theta_{1}^{\varepsilon}/[{(e}^{h^{\varepsilon}_{1}\left(\delta\right)/\varepsilon}-1{)}E_{u^{\varepsilon}}\theta_{1}^{\varepsilon}]\right)>t\right)dt$
$\displaystyle\leq
1+2\int_{1}^{\infty}t^{-{(e}^{h^{\varepsilon}_{1}\left(\delta\right)/\varepsilon}-1{)/\alpha}^{\ast}}dt$
$\displaystyle=1+2[{(e}^{h^{\varepsilon}_{1}\left(\delta\right)/\varepsilon}-1{)/\alpha}^{\ast}-1]^{-1}=1+2\alpha^{\ast}[{e}^{h^{\varepsilon}_{1}\left(\delta\right)/\varepsilon}-\alpha^{\ast}-1]^{-1}.$
To estimate $\zeta^{\varepsilon},$ we use that by (10.7) there are
$T_{0}\in(0,\infty)$ and $\beta>0$ such that for any $t\in(T_{0},\infty)$ and
for all $\varepsilon$ sufficiently small
$P_{u^{\varepsilon}}\left(\zeta^{\varepsilon}>t\right)\leq
e^{-\frac{1}{\varepsilon}\beta\left(t-T_{0}\right)},$ so that for any
$\alpha>0$
$P_{u^{\varepsilon}}\left(e^{\alpha\zeta^{\varepsilon}}>t\right)\leq
e^{-\frac{1}{\varepsilon}\beta\left(\frac{1}{\alpha}\log t-T_{0}\right)}$ for
any $t\geq e^{\alpha T_{0}}.$ Given $n\in\mathbb{N},$ for all sufficiently
small $\varepsilon$ we have
$\alpha^{\ast}/E_{u^{\varepsilon}}\upsilon_{1}^{\varepsilon}\leq 1/n$, and
thus
$P_{u^{\varepsilon}}\left(e^{\alpha^{\ast}\zeta^{\varepsilon}/E_{u^{\varepsilon}}\upsilon_{1}^{\varepsilon}}>t\right)\leq
P_{u^{\varepsilon}}\left(e^{\zeta^{\varepsilon}/n}>t\right)\leq
e^{-\frac{1}{\varepsilon}\beta\left(n\log t-T_{0}\right)}.$
Hence for any $n$ such that $e^{T_{0}/n}\leq 3/2$ and $\left(-\beta
n+1\right)\log\left(3/2\right)+\beta T_{0}<0,$ and for $\varepsilon$ small
enough that $\alpha^{\ast}/E_{u^{\varepsilon}}\upsilon_{1}^{\varepsilon}\leq
1/n,$ we have
$\displaystyle
E_{u^{\varepsilon}}e^{\alpha^{\ast}\zeta^{\varepsilon}/E_{u^{\varepsilon}}\upsilon_{1}^{\varepsilon}}$
$\displaystyle\leq\int_{0}^{\infty}P_{u^{\varepsilon}}\left(e^{\alpha^{\ast}\zeta^{\varepsilon}/E_{u^{\varepsilon}}\upsilon_{1}^{\varepsilon}}>t\right)dt\leq
3/2+\int_{\frac{3}{2}}^{\infty}P_{u^{\varepsilon}}\left(e^{\alpha^{\ast}\zeta^{\varepsilon}/E_{u^{\varepsilon}}\upsilon_{1}^{\varepsilon}}>t\right)dt$
$\displaystyle\leq
3/2+\int_{\frac{3}{2}}^{\infty}e^{-\frac{1}{\varepsilon}\beta\left(n\log
t-T_{0}\right)}dt=3/2+e^{\frac{1}{\varepsilon}\beta T_{0}}(\beta
n/{\varepsilon}-1)^{-1}\left(3/2\right)^{\frac{1}{\varepsilon}\left(-\beta
n+\varepsilon\right)}$ $\displaystyle=3/2+(\beta
n/{\varepsilon}-1)^{-1}e^{\frac{1}{\varepsilon}\left[\left(-\beta
n+\varepsilon\right)\log\left(3/2\right)+\beta T_{0}\right]}\leq 3/2+(\beta
n/{\varepsilon}-1)^{-1}\leq 2.$
We have shown that for such $\alpha^{\ast},$
$E_{u^{\varepsilon}}e^{\alpha^{\ast}\zeta^{\varepsilon}/E_{u^{\varepsilon}}\upsilon_{1}^{\varepsilon}}$
and
$E_{u^{\varepsilon}}e^{\alpha^{\ast}\theta_{1}^{\varepsilon}/E_{u^{\varepsilon}}\upsilon_{1}^{\varepsilon}}$
are uniformly bounded for all $\varepsilon$ sufficiently small. Lastly, using
the same calculation as used for the characteristic function
$\displaystyle
E_{u^{\varepsilon}}e^{\alpha^{\ast}\upsilon_{1}^{\varepsilon}/E_{u^{\varepsilon}}\upsilon_{1}^{\varepsilon}}$
$\displaystyle=E_{u^{\varepsilon}}e^{\alpha^{\ast}\zeta^{\varepsilon}/E_{u^{\varepsilon}}\upsilon_{1}^{\varepsilon}}\cdot
e^{-h^{\varepsilon}_{1}(\delta)/\varepsilon}\left(1-\left[(1-e^{-h^{\varepsilon}_{1}(\delta)/\varepsilon})E_{u^{\varepsilon}}e^{\alpha^{\ast}\theta_{1}^{\varepsilon}/E_{u^{\varepsilon}}\upsilon_{1}^{\varepsilon}}\right]\right)^{-1}$
$\displaystyle\leq 2e^{-h^{\varepsilon}_{1}(\delta)/\varepsilon}\left(1-\left[(1-e^{-h^{\varepsilon}_{1}(\delta)/\varepsilon})\left(1+\frac{2\alpha^{\ast}}{e^{h^{\varepsilon}_{1}(\delta)/\varepsilon}-\alpha^{\ast}-1}\right)\right]\right)^{-1}$
$\displaystyle=2e^{-h^{\varepsilon}_{1}(\delta)/\varepsilon}\left(e^{-h^{\varepsilon}_{1}(\delta)/\varepsilon}-\frac{2\alpha^{\ast}}{e^{h^{\varepsilon}_{1}(\delta)/\varepsilon}-\alpha^{\ast}-1}+\frac{2\alpha^{\ast}}{e^{h^{\varepsilon}_{1}(\delta)/\varepsilon}-\alpha^{\ast}-1}e^{-h^{\varepsilon}_{1}(\delta)/\varepsilon}\right)^{-1}$
$\displaystyle=2\left(1-2\alpha^{\ast}\frac{e^{h^{\varepsilon}_{1}(\delta)/\varepsilon}-1}{e^{h^{\varepsilon}_{1}(\delta)/\varepsilon}-\alpha^{\ast}-1}\right)^{-1}\leq 2/(1-4\alpha^{\ast})=4.$
∎
### 10.4 General initial condition
This subsection presents results that will allow us to extend the results of
the previous two subsections to an arbitrary initial distribution
$\lambda^{\varepsilon}\in\mathcal{P}(\partial B_{\delta}\left(O_{1}\right))$.
Under our assumptions, for any $j\in L_{\rm{s}}$ we observe that the process
model
$dX_{t}^{\varepsilon}=b\left(X_{t}^{\varepsilon}\right)dt+\sqrt{\varepsilon}\sigma\left(X_{t}^{\varepsilon}\right)dW_{t}$
(10.9)
has the property that $b(x)=A(x-O_{j})[1+o(1)]$ and
$\sigma\left(x\right)=\bar{\sigma}[1+o\left(1\right)]$, where $o(1)\rightarrow
0$ as $\left\|x-O_{j}\right\|\rightarrow 0$, $A$ is stable and $\bar{\sigma}$
is invertible. By an invertible change of variable we can arrange so that
$O_{j}=0$ and $\bar{\sigma}=I$, and to simplify we assume this in the rest of
the section.
Since $A$ is stable there exists a positive definite and symmetric solution
$M$ to the matrix equation $AM+MA^{T}=-I$ (we can in fact exhibit the solution
in the form $M=\int_{0}^{\infty}e^{At}e^{A^{T}t}dt)$. To prove the ergodicity
we introduce some additional notation: $U(x)\doteq\langle x,Mx\rangle$,
$B_{i}\doteq\\{x:U(x)<b_{i}^{2}\\}$ for $i=0,1,2$ and
$\mathcal{S}_{i}(\varepsilon)\doteq\\{x:U(x)<a_{i}^{2}\varepsilon\\}$ for
$i=1,2$, where $0<a_{1}<a_{2}$ and $0<b_{0}<b_{1}<b_{2}$. If
$\varepsilon_{0}=(b_{0}^{2}/a_{2}^{2})/2$ then with cl denoting closure, ${\rm
cl}(\mathcal{S}_{2}(\varepsilon_{0}))\subset B_{0},$ and we will assume
$\varepsilon\in(0,\varepsilon_{0})$ henceforth. For later use, we will also
assume that $a_{1}^{2}=2\sup\nolimits_{x\in
B_{2}}\mbox{tr}[\sigma(x)\sigma(x)^{T}M].$
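Both properties of $M$ are easy to confirm numerically. The following Python sketch solves the Lyapunov equation directly and compares the result with the truncated integral representation; the matrix $A$ below is an arbitrary stable example chosen for illustration, not one coming from the model.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, expm

# An illustrative stable matrix (all eigenvalues have negative real part).
A = np.array([[-1.0, 2.0],
              [0.0, -3.0]])

# Solve A M + M A^T = -I directly.
M = solve_continuous_lyapunov(A, -np.eye(2))
print(np.abs(A @ M + M @ A.T + np.eye(2)).max())  # ~ 0: M solves the equation

# Compare with M = int_0^infty e^{At} e^{A^T t} dt, truncated to [0, T]
# and discretized by the trapezoid rule.
T, n = 40.0, 20001
ts = np.linspace(0.0, T, n)
integrand = np.array([expm(A * t) @ expm(A.T * t) for t in ts])
M_int = np.trapz(integrand, ts, axis=0)
print(np.abs(M - M_int).max())                    # small quadrature error
print(np.all(np.linalg.eigvalsh(M) > 0))          # True: M is positive definite
```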
###### Remark 10.7.
The sets $B_{1}$ and $B_{2}$ will play the roles that $B_{\delta}(O_{1})$ and
$B_{2\delta}(O_{1})$ played previously in this section. Although elsewhere in
this paper as well as in the reference [12] these sets are taken to be balls
with respect to the Euclidean norm, in this subsection we take them to be
level sets of $U(x)$. The shape of these sets and the choice of the factor of
$2$ relating the radii play no role in the analysis of [12] or in our prior
use in this paper. However in this subsection it is notationally convenient
for the sets to be level sets of $U$, since $U$ is a Lyapunov function for the
noiseless dynamics near $0$. After this subsection we will revert to the
$B_{\delta}(O_{1})$ and $B_{2\delta}(O_{1})$ notation.
In addition to the restrictions $a_{1}<a_{2}$ and
$a_{2}^{2}\varepsilon_{0}\leq b_{0}^{2}$, we also assume that $a_{1},a_{2}$
and $\varepsilon_{0}>0$ are such that if $\phi^{x}$ is the solution to the
noiseless dynamics $\dot{\phi}=b(\phi)$ with initial condition $x$, then: (i)
for all $x\in\partial\mathcal{S}_{2}(\varepsilon)$, $\phi^{x}$ never crosses
$\partial B_{1}$; (ii) for all $x\in\partial\mathcal{S}_{1}(\varepsilon)$,
$\phi^{x}$ never exits $\mathcal{S}_{2}(\varepsilon)$.
The idea that will be used to establish asymptotic independence from the
starting distribution is the following. We start the process on $\partial
B_{1}$. With some small probability it will hit $\partial B_{2}$ before
hitting $\partial\mathcal{S}_{2}(\varepsilon)$. This gives a contribution to
$\psi_{2}^{\varepsilon}(dz|x)$ defined in (10.4) that will be relatively
unimportant. If instead it hits $\partial\mathcal{S}_{2}(\varepsilon)$ first,
then we do a Freidlin-Wentzell type analysis, and decompose the trajectory
into excursions between $\partial\mathcal{S}_{2}(\varepsilon)$ and
$\partial\mathcal{S}_{1}(\varepsilon)$, before a final excursion from
$\partial\mathcal{S}_{2}(\varepsilon)$ to $\partial B_{2}$.
To exhibit the asymptotic independence from $\varepsilon$, we introduce the
scaled process $Y^{\varepsilon}_{t}=X^{\varepsilon}_{t}/\sqrt{\varepsilon}$,
which solves the SDE
$dY^{\varepsilon}_{t}=\frac{1}{\sqrt{\varepsilon}}b(\sqrt{\varepsilon}Y^{\varepsilon}_{t})dt+\sigma(\sqrt{\varepsilon}Y^{\varepsilon}_{t})dW_{t}.$
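For orientation, here is a minimal Euler–Maruyama sketch of $Y^{\varepsilon}$ in the normalized setting $O_{j}=0$, $\bar{\sigma}=I$. The linear drift $b(x)=Ax$ and the matrix $A$ are illustrative assumptions rather than the paper's model; note that for linear $b$ the drift of $Y^{\varepsilon}$ is exactly $AY^{\varepsilon}_{t}$, with no $\varepsilon$ dependence, which is the mechanism behind the $\varepsilon$-uniform estimates that follow.

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[-1.0, 0.5],
              [-0.5, -1.0]])          # illustrative stable matrix
b = lambda x: A @ x                   # linearized drift near O_j = 0

eps, dt, n_steps = 1e-2, 1e-4, 100_000
y = np.array([1.0, 0.0])              # a point on the rescaled shell
for _ in range(n_steps):
    # dY = b(sqrt(eps) Y)/sqrt(eps) dt + dW; for linear b this is A Y dt + dW.
    drift = b(np.sqrt(eps) * y) / np.sqrt(eps)
    y = y + drift * dt + np.sqrt(dt) * rng.standard_normal(2)
```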
Let $\mathcal{\bar{S}}_{1}=\partial\mathcal{S}_{1}(1)\text{ and
}\mathcal{\bar{S}}_{2}=\partial\mathcal{S}_{2}(1).$ Let
$\omega^{\varepsilon}(w|x)$ denote the density of the hitting location on
$\mathcal{\bar{S}}_{2}$ by the process $Y^{\varepsilon}$, given
$Y^{\varepsilon}_{0}=x\in\mathcal{\bar{S}}_{1}$. The following estimate is
essential. The density function can be identified with the normal derivative
of a related Green’s function, which is bounded from above by the boundary
gradient estimate and bounded below by using the Hopf lemma [13].
###### Lemma 10.8.
Given $\varepsilon_{0}>0$, there are $0<c_{1}<c_{2}<\infty$ such that
$c_{1}\leq\omega^{\varepsilon}(w|x)\leq c_{2}$ for all
$x\in\mathcal{\bar{S}}_{1}$, $w\in\mathcal{\bar{S}}_{2}$ and
$\varepsilon\in(0,\varepsilon_{0})$.
Next let $p^{\varepsilon}(u|w)$ denote the density of the return location for
$Y^{\varepsilon}$ on $\mathcal{\bar{S}}_{2}$, conditioned on visiting
$\mathcal{\bar{S}}_{1}$ before $\partial B_{2}/\sqrt{\varepsilon}$, and
starting at $w\in\mathcal{\bar{S}}_{2}$. The last lemma then directly gives
the following.
###### Lemma 10.9.
For $\varepsilon_{0}>0$ and $c_{1},c_{2}$ as in the last lemma $c_{1}\leq
p^{\varepsilon}(u|w)\leq c_{2}$ for all $u,w\in\mathcal{\bar{S}}_{2}$ and
$\varepsilon\in(0,\varepsilon_{0})$.
Let $r^{\varepsilon}(w)$ denote the unique stationary distribution of
$p^{\varepsilon}(u|w)$, and let $p^{\varepsilon,n}(u|w)$ denote the $n$-step
transition density. The preceding lemma, [14, Theorem 10.1 Chapter 3], and the
existence of a uniform strictly positive lower bound on $r^{\varepsilon}(u)$
for all sufficiently small $\varepsilon>0$ imply the following.
###### Lemma 10.10.
There is $K<\infty$ and $\alpha\in(0,1)$ such that for all
$\varepsilon\in(0,\varepsilon_{0})$
$\sup_{w\in\mathcal{\bar{S}}_{2}}\left|p^{\varepsilon,n}(u|w)-r^{\varepsilon}(u)\right|/r^{\varepsilon}(u)\leq
K\alpha^{n}.$
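Although the paper appeals to [14], it may help to recall one standard Doeblin-type route to such a bound; what follows is a sketch under the bounds of Lemma 10.9, with $\mu$ the normalized surface measure on $\mathcal{\bar{S}}_{2}$ and $|\mathcal{\bar{S}}_{2}|$ the total surface measure. Since $\int p^{\varepsilon}(u|w)du=1$ and $p^{\varepsilon}\geq c_{1}$, the constant $\beta\doteq c_{1}|\mathcal{\bar{S}}_{2}|$ lies in $(0,1]$ and $p^{\varepsilon}(u|w)\geq\beta\mu(u)$, whence $\sup_{w}\|p^{\varepsilon,n}(\cdot|w)-r^{\varepsilon}\|_{TV}\leq(1-\beta)^{n}$. Stationarity gives $r^{\varepsilon}(u)=\int p^{\varepsilon}(u|w)r^{\varepsilon}(dw)\geq c_{1}$, and for $n\geq 1$
$\left|p^{\varepsilon,n}(u|w)-r^{\varepsilon}(u)\right|=\left|\int p^{\varepsilon}(u|v)\left[p^{\varepsilon,n-1}(dv|w)-r^{\varepsilon}(dv)\right]\right|\leq(c_{2}-c_{1})(1-\beta)^{n-1},$
so the statement holds with $\alpha=1-\beta$ and $K=(c_{2}-c_{1})/(c_{1}(1-\beta))$, both uniform in $\varepsilon$.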
Let $\eta^{\varepsilon}(dx|w)$ denote the distribution of $X^{\varepsilon}$
upon first hitting $\partial B_{2}$ given that $X^{\varepsilon}$ reaches
$\partial\mathcal{S}_{1}(\varepsilon)$ before $\partial B_{2}$ and starts at
$w\in\partial\mathcal{S}_{2}(\varepsilon)$.
###### Lemma 10.11.
There is $\kappa>0$ and $\varepsilon_{0}>0$ such that for all
$\varepsilon\in(0,\varepsilon_{0})$
$\sup_{x\in\partial B_{1}}P_{x}\left\\{X^{\varepsilon}\mbox{ reaches }\partial
B_{2}\mbox{ before }\mathcal{S}_{2}(\varepsilon)\right\\}\leq
e^{-\kappa/\varepsilon}.$
###### Lemma 10.12.
There are $\bar{\eta}^{\varepsilon}(dz)\in$ $\mathcal{P}(\partial B_{2})$,
$s^{\varepsilon}$ that tends to $0$ as $\varepsilon\rightarrow 0$ and
$\varepsilon_{0}>0$, such that for all $A\in\mathcal{B}(\partial
B_{2}),w\in\partial\mathcal{S}_{2}(\varepsilon)$ and
$\varepsilon\in(0,\varepsilon_{0})$
$\bar{\eta}^{\varepsilon}(A)[1-s^{\varepsilon}K/(1-\alpha)]\leq\eta^{\varepsilon}(A|w)\leq\bar{\eta}^{\varepsilon}(A)[1+s^{\varepsilon}K/(1-\alpha)],$
where $K$ and $\alpha$ are from Lemma 10.10.
###### Proof of Lemma 10.11.
Recall that $a_{1}^{2}=2\sup_{x\in B_{2}}$tr$[\sigma(x)\sigma(x)^{T}M]$. We
then use that $AM+MA^{T}=-I$ to get that with $U(x)\doteq\left\langle
x,Mx\right\rangle$,
$\left\langle DU(x),b(x)\right\rangle\leq-\varepsilon a_{1}^{2}$ (10.10)
for $x\in B_{2}\setminus\mathcal{S}_{2}(\varepsilon)$, and
$\left\langle DU(x),b(x)\right\rangle\leq-\frac{1}{8}b_{0}^{2}$ (10.11)
for $x\in B_{2}\setminus(B_{0}/2)$. By Itô’s formula
$\displaystyle dU(X^{\varepsilon}_{t})=\left\langle DU(X^{\varepsilon}_{t}),b(X^{\varepsilon}_{t})\right\rangle dt+\varepsilon\text{tr}[\sigma(X^{\varepsilon}_{t})\sigma(X^{\varepsilon}_{t})^{T}M]dt+\sqrt{\varepsilon}\left\langle DU(X^{\varepsilon}_{t}),\sigma(X^{\varepsilon}_{t})dW_{t}\right\rangle,$
(10.12)
where the form of the Itô correction uses $D^{2}U=2M$.
Starting at $x\in\partial B_{1}$, we are concerned with the probability
$P_{x}\left\\{U(X^{\varepsilon}_{t})\text{ reaches }b_{2}^{2}\text{ before
}a_{2}^{2}\varepsilon\right\\},$
where $U(x)=b_{1}^{2}$. However, according to (10.12) and (10.11), reaching
$b_{2}^{2}$ before $b_{0}^{2}/4$ is a rare event, and its probability decays
exponentially in the form $e^{-\kappa/\varepsilon}$ for some $\kappa>0$ and
uniformly in $x\in\partial B_{1}$. Once the process reaches $B_{0}/2$, (10.12)
and (10.10) imply $U(X^{\varepsilon}_{t})$ is a supermartingale as long as it
is in the interval $[0,b_{0}^{2}]$. Writing $\tau$ for the first time after
this that $U(X^{\varepsilon}_{t})$ hits $\\{a_{2}^{2}\varepsilon\\}\cup\\{b_{0}^{2}\\}$,
optional sampling gives $b_{0}^{2}/4\geq E\,U(X^{\varepsilon}_{\tau})\geq
b_{0}^{2}P(U(X^{\varepsilon}_{\tau})=b_{0}^{2})$, and therefore after
$X^{\varepsilon}_{t}$ reaches $B_{0}/2$, the probability that
$U(X^{\varepsilon}_{t})$ reaches $a_{2}^{2}\varepsilon$ before $b_{0}^{2}$ is
at least $3/4$, in particular greater than $1/2$. ∎
###### Proof of Lemma 10.12.
Consider a starting position $w\in\partial\mathcal{S}_{2}(\varepsilon)$, and
recall that $\eta^{\varepsilon}(dz|w)$ denotes the hitting distribution on
$\partial B_{2}$ after starting at $w$. Let $\theta_{k}^{\varepsilon}$ denote
the return times to $\partial\mathcal{S}_{2}(\varepsilon)$ after visiting
$\partial\mathcal{S}_{1}(\varepsilon)$, and let $q_{n}^{\varepsilon}(w)$
denote the probability that the first $k$ for which $X^{\varepsilon}$ visits
$\partial B_{2}$ before visiting $\partial\mathcal{S}_{1}(\varepsilon)$ during
$[\theta_{k}^{\varepsilon},\theta_{k+1}^{\varepsilon}]$ is $n$. Then by the
strong Markov property and using the rescaled process
$\int_{\partial
B_{2}}g(z)\eta^{\varepsilon}(dz|w)=\sum\nolimits_{n=0}^{\infty}\int_{\partial
B_{2}}g(z)q_{n}^{\varepsilon}(w)\int_{\partial\mathcal{S}_{2}(\varepsilon)}\eta^{\varepsilon}(dz|u)J^{\varepsilon}(u)p^{\varepsilon,n}(\sqrt{\varepsilon}u|\sqrt{\varepsilon}w)du,$
where $J^{\varepsilon}(u)$ is the Jacobian that accounts for the mapping
between $\partial\mathcal{S}_{2}(\varepsilon)$ and
$\partial\mathcal{S}_{2}(1)$ and is given by $u/\sqrt{\varepsilon}$. We next
use that uniformly in $w\in\partial\mathcal{S}_{2}(\varepsilon)$
$p^{\varepsilon,n}(\sqrt{\varepsilon}u|\sqrt{\varepsilon}w)\leq
r^{\varepsilon}(\sqrt{\varepsilon}u)[1+K\alpha^{n}]$
to get
$\displaystyle\sum\nolimits_{n=0}^{\infty}\int_{\partial
B_{2}}g(z)q_{n}^{\varepsilon}(w)\int_{\partial\mathcal{S}_{2}(\varepsilon)}\eta^{\varepsilon}(dz|u)J^{\varepsilon}(u)p^{\varepsilon,n}(\sqrt{\varepsilon}u|\sqrt{\varepsilon}w)du$
$\displaystyle\quad\leq\sum\nolimits_{n=0}^{\infty}\int_{\partial B_{2}}g(z)q_{n}^{\varepsilon}(w)\int_{\partial\mathcal{S}_{2}(\varepsilon)}\eta^{\varepsilon}(dz|u)J^{\varepsilon}(u)r^{\varepsilon}(\sqrt{\varepsilon}u)du\left[1+K\alpha^{n}\right]$
$\displaystyle\quad=\int_{\partial
B_{2}}g(z)\int_{\partial\mathcal{S}_{2}(\varepsilon)}\eta^{\varepsilon}(dz|u)J^{\varepsilon}(u)r^{\varepsilon}(\sqrt{\varepsilon}u)du\left[1+K\sum\nolimits_{n=0}^{\infty}q_{n}^{\varepsilon}(w)\alpha^{n}\right].$
Now use that $K\sum_{n=0}^{\infty}\alpha^{n}=K/(1-\alpha)<\infty$ and
$\sup_{w\in\partial\mathcal{S}_{2}(\varepsilon)}\sup_{n\in\mathbb{N}_{0}}q_{n}^{\varepsilon}(w)\rightarrow
0$ as $\varepsilon\rightarrow 0$ to get the upper bound with
$\bar{\eta}^{\varepsilon}(dz)\doteq\int_{\partial\mathcal{S}_{2}(\varepsilon)}\eta^{\varepsilon}(dz|u)J^{\varepsilon}(u)r^{\varepsilon}(\sqrt{\varepsilon}u)du.$
When combined with the lower bound which has an analogous proof, Lemma 10.12
follows. ∎
### 10.5 Times to reach another equilibrium point after starting with general
initial distribution
###### Lemma 10.13.
For each $j\in L_{\rm{s}}$, there exist $\tilde{c}>0$ and
$\varepsilon_{0}\in(0,1)$ such that for any distribution
$\lambda^{\varepsilon}$ on $\partial B_{\delta}(O_{j})$,
$P_{\lambda^{\varepsilon}}(\upsilon_{j}^{\varepsilon}/E_{\lambda^{\varepsilon}}\upsilon_{j}^{\varepsilon}>t)\leq
e^{-\tilde{c}t}$
for all $t>0$ and $\varepsilon\in(0,\varepsilon_{0}).$
###### Proof.
We give the proof for the case $j=1$. We first show for any $r\in(0,1)$ there
is $\varepsilon_{0}>0$ such that for any $\varepsilon\in(0,\varepsilon_{0})$
and $\lambda^{\varepsilon},\theta^{\varepsilon}\in\mathcal{P}(\partial
B_{\delta}(O_{1}))$
$E_{\lambda^{\varepsilon}}\upsilon_{1}^{\varepsilon}/E_{\theta^{\varepsilon}}\upsilon_{1}^{\varepsilon}\geq
r.$ (10.13)
We use that $\upsilon_{1}^{\varepsilon}$ can be decomposed into
$\bar{\upsilon}_{1}^{\varepsilon}+\hat{\upsilon}_{1}^{\varepsilon}$, where
$\bar{\upsilon}_{1}^{\varepsilon}$ is the first hitting time to $\partial
B_{2\delta}(O_{1})$. Since by standard large deviation theory the exponential
growth rate of the expected value of $\upsilon_{1}^{\varepsilon}$ is strictly
greater than that of $\bar{\upsilon}_{1}^{\varepsilon}$ (uniformly in the
initial distribution)
$E_{\lambda^{\varepsilon}}\bar{\upsilon}_{1}^{\varepsilon}$ (respectively
$E_{\theta^{\varepsilon}}\bar{\upsilon}_{1}^{\varepsilon}$) is negligible
compared to $E_{\lambda^{\varepsilon}}\upsilon_{1}^{\varepsilon}$
(respectively $E_{\theta^{\varepsilon}}\upsilon_{1}^{\varepsilon}$), and so it
is enough to show
$E_{\lambda^{\varepsilon}}\hat{\upsilon}_{1}^{\varepsilon}/E_{\theta^{\varepsilon}}\hat{\upsilon}_{1}^{\varepsilon}\geq
r.$ Owing to Lemma 10.11 (and in particular because $\kappa>0$) the
contribution to either
$E_{\lambda^{\varepsilon}}\hat{\upsilon}_{1}^{\varepsilon}$ or
$E_{\theta^{\varepsilon}}\hat{\upsilon}_{1}^{\varepsilon}$ from trajectories
that reach $\partial B_{2\delta}(O_{1})$ before
$\partial\mathcal{S}_{2}(\varepsilon)$ can be neglected. Using Lemma 10.12 and
the strong Markov property gives
$\inf_{w_{1},w_{2}\in\partial\mathcal{S}_{2}(\varepsilon)}\frac{E_{w_{1}}\hat{\upsilon}_{1}^{\varepsilon}}{E_{w_{2}}\hat{\upsilon}_{1}^{\varepsilon}}\geq\frac{[1-s^{\varepsilon}K/(1-\alpha)]}{[1+s^{\varepsilon}K/(1-\alpha)]},$
and the lower bound follows since $s^{\varepsilon}\rightarrow 0$.
We next claim that a suitable bound can be found for
$P_{\lambda^{\varepsilon}}(\hat{\upsilon}_{1}^{\varepsilon}/E_{\lambda^{\varepsilon}}\upsilon_{1}^{\varepsilon}>t)$.
Recall that $u^{\varepsilon}\in\mathcal{P}(\partial B_{2\delta}(O_{1}))$ is
the stationary probability for $\psi^{\varepsilon}$ defined in (10.5). Let
$\beta^{\varepsilon}$ be the probability measure on $\partial
B_{\delta}(O_{1})$ obtained by integrating the transition kernel
$\psi_{1}^{\varepsilon}$ with respect to $u^{\varepsilon}$, and note that
integrating $\psi_{2}^{\varepsilon}$ with respect to $\beta^{\varepsilon}$
returns $u^{\varepsilon}$. Since the diffusion matrix is uniformly
nondegenerate, by using well known “Gaussian type” bounds on the transition
density for the process [2] there are $K\in(0,\infty)$ and $p\in(0,\infty)$
such that
$P_{x}\left\\{X_{\theta}^{\varepsilon}\in
A|X_{\theta}^{\varepsilon}\in\partial B_{2\delta}(O_{1})\right\\}\leq
Km(A)/\varepsilon^{p}$
for all $x\in\partial B_{\delta}(O_{1})$, where $m$ is the uniform measure on
$\partial B_{2\delta}(O_{1})$ and
$\theta=\inf\\{t>0:X_{t}^{\varepsilon}\in\partial
B_{2\delta}(O_{1})\cup\mathcal{S}_{2}(\varepsilon)\\}$. Together with Lemmas
10.11 and 10.12, this implies that for all sufficiently small $\varepsilon>0$
and any bounded measurable function $h:\partial
B_{2\delta}(O_{1})\rightarrow\mathbb{R}$,
$\displaystyle\int_{\partial B_{2\delta}(O_{1})}\int_{\partial
B_{\delta}(O_{1})}h(y)\psi_{2}^{\varepsilon}(dy|x)\lambda^{\varepsilon}(dx)$
$\displaystyle\leq 2\int_{\partial B_{2\delta}(O_{1})}\int_{\partial
B_{\delta}(O_{1})}h(y)\psi_{2}^{\varepsilon}(dy|x)\beta^{\varepsilon}(dx)$
$\displaystyle\leq 2\int_{\partial
B_{2\delta}(O_{1})}h(y)u^{\varepsilon}(dy).$
Using the last display for the first inequality, (10.13) for the second, that
$\bar{\upsilon}_{1}^{\varepsilon}$ is small compared with
$\hat{\upsilon}_{1}^{\varepsilon}$ for the third and Lemma 10.6 for the last,
there is $\varepsilon_{1}>0$ such that
$\displaystyle
P_{\lambda^{\varepsilon}}(\hat{\upsilon}_{1}^{\varepsilon}/E_{\lambda^{\varepsilon}}\upsilon_{1}^{\varepsilon}>t)$
$\displaystyle=E_{\lambda^{\varepsilon}}(P_{X_{\bar{\upsilon}_{1}^{\varepsilon}}^{\varepsilon}}(\hat{\upsilon}_{1}^{\varepsilon}/E_{\lambda^{\varepsilon}}\upsilon_{1}^{\varepsilon}>t))\leq
2P_{u^{\varepsilon}}(\upsilon_{1}^{\varepsilon}/E_{\lambda^{\varepsilon}}\upsilon_{1}^{\varepsilon}>t)$
$\displaystyle\leq
2P_{u^{\varepsilon}}(\upsilon_{1}^{\varepsilon}/E_{\beta^{\varepsilon}}\upsilon_{1}^{\varepsilon}>t/2)\leq
2P_{u^{\varepsilon}}(\upsilon_{1}^{\varepsilon}/E_{u^{\varepsilon}}\upsilon_{1}^{\varepsilon}>t/4)\leq
2e^{-\tilde{c}t/4}$
for all $\varepsilon\in(0,\varepsilon_{1})$ and $t\geq 0$.
Since as noted previously
$E_{\lambda^{\varepsilon}}\upsilon_{1}^{\varepsilon}\geq
E_{\lambda^{\varepsilon}}\bar{\upsilon}_{1}^{\varepsilon}$ and since by [6,
Theorem 4 and Corollary 1] there exists $\varepsilon_{2}\in(0,1)$ such that
$P_{\lambda^{\varepsilon}}(\bar{\upsilon}_{1}^{\varepsilon}/E_{\lambda^{\varepsilon}}\bar{\upsilon}_{1}^{\varepsilon}>t)\leq
2e^{-t/2}$ for any $t>0$ and $\varepsilon\in(0,\varepsilon_{2})$, we conclude
that for any $t>0$,
$P_{\lambda^{\varepsilon}}(\bar{\upsilon}^{\varepsilon}_{1}/E_{\lambda^{\varepsilon}}\upsilon_{1}^{\varepsilon}>t/2)\leq P_{\lambda^{\varepsilon}}(\bar{\upsilon}^{\varepsilon}_{1}/E_{\lambda^{\varepsilon}}\bar{\upsilon}^{\varepsilon}_{1}>t/2)\leq 2e^{-t/2}.$
The conclusion of the lemma follows from these two bounds and
$\displaystyle
P_{\lambda^{\varepsilon}}(\upsilon_{1}^{\varepsilon}/E_{\lambda^{\varepsilon}}\upsilon_{1}^{\varepsilon}$
$\displaystyle>t)\leq
P_{\lambda^{\varepsilon}}(\bar{\upsilon}^{\varepsilon}_{1}/E_{\lambda^{\varepsilon}}\upsilon_{1}^{\varepsilon}>t/2)+P_{\lambda^{\varepsilon}}(\hat{\upsilon}^{\varepsilon}_{1}/E_{\lambda^{\varepsilon}}\upsilon_{1}^{\varepsilon}>t/2).$
∎
###### Lemma 10.14.
For any $j\in L_{\rm{s}}$ and any distribution $\lambda^{\varepsilon}$ on
$\partial B_{\delta}(O_{j})$,
$\upsilon_{j}^{\varepsilon}/E_{\lambda^{\varepsilon}}\upsilon_{j}^{\varepsilon}$
converges in distribution to an Exp(1) random variable under
$P_{\lambda^{\varepsilon}}.$ Moreover,
$E_{\lambda^{\varepsilon}}e^{it\upsilon_{j}^{\varepsilon}/E_{\lambda^{\varepsilon}}\upsilon_{j}^{\varepsilon}}\rightarrow
1/(1-it)$ uniformly on any compact set in $\mathbb{R}$.
###### Proof.
We give the proof for the case $j=1$. Recall that
$E_{u^{\varepsilon}}e^{it\upsilon_{1}^{\varepsilon}/E_{u^{\varepsilon}}\upsilon_{1}^{\varepsilon}}\rightarrow
1/(1-it)$ uniformly on any compact set in $\mathbb{R}$ as
$\varepsilon\rightarrow 0$ from Remark 10.5. We would like to show that
$E_{\lambda^{\varepsilon}}e^{it\upsilon_{1}^{\varepsilon}/E_{\lambda^{\varepsilon}}\upsilon_{1}^{\varepsilon}}\rightarrow
1/(1-it)$ uniformly on any compact set in $\mathbb{R}$. Since
$\upsilon_{1}^{\varepsilon}=\bar{\upsilon}^{\varepsilon}_{1}+\hat{\upsilon}^{\varepsilon}_{1}$
with $\bar{\upsilon}^{\varepsilon}_{1}$ the first hitting time to $\partial
B_{2\delta}(O_{1}),$ we know that
$E_{\lambda^{\varepsilon}}\bar{\upsilon}^{\varepsilon}_{1}/E_{\lambda^{\varepsilon}}\upsilon_{1}^{\varepsilon}\rightarrow
0$ and thus
$E_{\lambda^{\varepsilon}}\hat{\upsilon}^{\varepsilon}_{1}/E_{\lambda^{\varepsilon}}\upsilon_{1}^{\varepsilon}\rightarrow
1.$ Observe that
$\displaystyle
E_{\lambda^{\varepsilon}}e^{it\upsilon_{1}^{\varepsilon}/E_{\lambda^{\varepsilon}}\upsilon_{1}^{\varepsilon}}=E_{\lambda^{\varepsilon}}\left[e^{it\bar{\upsilon}^{\varepsilon}_{1}/E_{\lambda^{\varepsilon}}\upsilon_{1}^{\varepsilon}}\cdot
E_{X^{\varepsilon}\left(\bar{\upsilon}^{\varepsilon}_{1}\right)}\left(e^{it\hat{\upsilon}^{\varepsilon}_{1}/E_{\lambda^{\varepsilon}}\upsilon_{1}^{\varepsilon}}\right)\right],$
$E_{\lambda^{\varepsilon}}\left[E_{X^{\varepsilon}\left(\bar{\upsilon}^{\varepsilon}_{1}\right)}\left(e^{it\hat{\upsilon}^{\varepsilon}_{1}/E_{\lambda^{\varepsilon}}\upsilon_{1}^{\varepsilon}}\right)\right]\leq\frac{[1+s^{\varepsilon}K/(1-\alpha)]}{[1-s^{\varepsilon}K/(1-\alpha)]}E_{u^{\varepsilon}}e^{it\upsilon_{1}^{\varepsilon}/E_{\lambda^{\varepsilon}}\upsilon_{1}^{\varepsilon}}\rightarrow
1/(1-it)$
and
$E_{\lambda^{\varepsilon}}\left[E_{X^{\varepsilon}\left(\bar{\upsilon}^{\varepsilon}_{1}\right)}\left(e^{it\hat{\upsilon}^{\varepsilon}_{1}/E_{\lambda^{\varepsilon}}\upsilon_{1}^{\varepsilon}}\right)\right]\geq\frac{[1-s^{\varepsilon}K/(1-\alpha)]}{[1+s^{\varepsilon}K/(1-\alpha)]}E_{u^{\varepsilon}}e^{it\upsilon_{1}^{\varepsilon}/E_{\lambda^{\varepsilon}}\upsilon_{1}^{\varepsilon}}\rightarrow
1/(1-it).$
Since
$E_{\lambda^{\varepsilon}}\bar{\upsilon}^{\varepsilon}_{1}/E_{\lambda^{\varepsilon}}\upsilon_{1}^{\varepsilon}\rightarrow
0$ and $e^{ix}$ is a bounded and continuous function, a conditioning argument
gives
$\left|E_{\lambda^{\varepsilon}}e^{it\upsilon_{1}^{\varepsilon}/E_{\lambda^{\varepsilon}}\upsilon_{1}^{\varepsilon}}-E_{\lambda^{\varepsilon}}\left[E_{X^{\varepsilon}\left(\bar{\upsilon}^{\varepsilon}_{1}\right)}\left(e^{it\hat{\upsilon}^{\varepsilon}_{1}/E_{\lambda^{\varepsilon}}\upsilon_{1}^{\varepsilon}}\right)\right]\right|\leq E_{\lambda^{\varepsilon}}\left|e^{it\bar{\upsilon}^{\varepsilon}_{1}/E_{\lambda^{\varepsilon}}\upsilon_{1}^{\varepsilon}}-1\right|\rightarrow 0.$
We conclude that
$E_{\lambda^{\varepsilon}}e^{it\upsilon_{1}^{\varepsilon}/E_{\lambda^{\varepsilon}}\upsilon_{1}^{\varepsilon}}\rightarrow
1/(1-it)$ uniformly on any compact set in $\mathbb{R}$. ∎
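The exponential limit can also be observed numerically. The following Monte Carlo sketch uses a one-dimensional toy model, a single quadratic well with drift $-x$ and exit from $(-1,1)$; these choices are illustrative and are not the multi-well setting of the paper. It checks two Exp(1) signatures of the normalized exit time (the agreement improves as $\varepsilon\rightarrow 0$).

```python
import numpy as np

rng = np.random.default_rng(1)
eps, dt, n_paths = 0.3, 1e-3, 300
taus = np.empty(n_paths)
for k in range(n_paths):
    # dX = -X dt + sqrt(eps) dW, started at the bottom of the well.
    x, t = 0.0, 0.0
    while abs(x) < 1.0:               # exit time from (-1, 1)
        x += -x * dt + np.sqrt(eps * dt) * rng.standard_normal()
        t += dt
    taus[k] = t

u = taus / taus.mean()                # normalized exit times
print((u > 1.0).mean())               # ~ exp(-1) = 0.368 for Exp(1)
print(u.var(ddof=1))                  # ~ 1 for Exp(1)
```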
### 10.6 Return times (single cycles)
In this subsection, we extend all three of the preceding results to return
times in the single cycle case (i.e., when $h_{1}>w$).
###### Lemma 10.15.
There exists $\delta_{0}\in(0,1)$ such that for any $\delta\in(0,\delta_{0})$
and any distribution $\lambda^{\varepsilon}$ on $\partial B_{\delta}(O_{1}),$
$\lim_{\varepsilon\rightarrow 0}\varepsilon\log
E_{\lambda^{\varepsilon}}\tau_{1}^{\varepsilon}=\min_{y\in\cup_{k\in
L\setminus\\{1\\}}\partial B_{\delta}(O_{k})}V(O_{1},y).$
###### Proof.
We have
$E_{\lambda^{\varepsilon}}\tau_{1}^{\varepsilon}=E_{\lambda^{\varepsilon}}\upsilon_{1}^{\varepsilon}+E_{\lambda^{\varepsilon}}(\tau_{1}^{\varepsilon}-\upsilon_{1}^{\varepsilon})$,
and by Lemma 10.2 we know that
$\lim_{\varepsilon\rightarrow 0}\varepsilon\log
E_{\lambda^{\varepsilon}}\upsilon_{1}^{\varepsilon}=\min_{y\in\cup_{k\in
L\setminus\\{1\\}}\partial B_{\delta}(O_{k})}V(O_{1},y).$
Moreover, observe that $W(O_{j})>W(O_{1})$ for any $j\in L\setminus\\{1\\}$
due to Remark 3.14. Note that $\upsilon_{1}^{\varepsilon}$ as defined in
(10.1) coincides with $\sigma_{0}^{\varepsilon}$ defined in (3.4). We can
therefore apply Remark 7.22 with $f=0$, $A=M$ and $\eta=[\min_{j\in
L\setminus\\{1\\}}W(O_{j})-W(O_{1})]/3$ to find that there exists
$\delta_{1}\in(0,1)$ such that for any $\delta\in(0,\delta_{1})$
$\displaystyle\liminf_{\varepsilon\rightarrow
0}-\varepsilon\log\left(\sup_{z\in\partial
B_{\delta}(O_{1})}E_{z}\left(\tau_{1}^{\varepsilon}-\upsilon_{1}^{\varepsilon}\right)\right)$
$\displaystyle\geq\min_{j\in L\setminus\\{1\\}}W(O_{j})-W(O_{1})-\min_{j\in
L\setminus\\{1\\}}V(O_{1},O_{j})-\eta$ $\displaystyle=-\min_{j\in
L\setminus\\{1\\}}V(O_{1},O_{j})+2\eta.$
On the other hand, by continuity of $V(O_{1},\cdot),$ for this given $\eta,$
there exists $\delta_{2}\in(0,1)$ such that for any $\delta\in(0,\delta_{2})$
$\min_{y\in\cup_{k\in L\setminus\\{1\\}}\partial
B_{\delta}(O_{k})}V(O_{1},y)\geq\min_{j\in
L\setminus\\{1\\}}V(O_{1},O_{j})-\eta.$
Thus, for any $\delta\in(0,\delta_{0})$ with
$\delta_{0}\doteq\delta_{1}\wedge\delta_{2}$
$\displaystyle\limsup_{\varepsilon\rightarrow 0}\varepsilon\log
E_{\lambda^{\varepsilon}}(\tau_{1}^{\varepsilon}-\upsilon_{1}^{\varepsilon})$
$\displaystyle\leq\limsup_{\varepsilon\rightarrow
0}\varepsilon\log\left(\sup_{z\in\partial
B_{\delta}(O_{1})}E_{z}(\tau_{1}^{\varepsilon}-\upsilon_{1}^{\varepsilon})\right)$
$\displaystyle\leq\min_{j\in
L\setminus\\{1\\}}V(O_{1},O_{j})-2\eta\leq\min_{y\in\cup_{k\in
L\setminus\\{1\\}}\partial B_{\delta}(O_{k})}V(O_{1},y)-\eta$
$\displaystyle=\lim_{\varepsilon\rightarrow 0}\varepsilon\log
E_{\lambda^{\varepsilon}}\upsilon_{1}^{\varepsilon}-\eta$
and
$\displaystyle\lim_{\varepsilon\rightarrow 0}\varepsilon\log
E_{\lambda^{\varepsilon}}\tau_{1}^{\varepsilon}$
$\displaystyle=\lim_{\varepsilon\rightarrow 0}\varepsilon\log
E_{\lambda^{\varepsilon}}\upsilon_{1}^{\varepsilon}=\min_{y\in\cup_{k\in
L\setminus\\{1\\}}\partial B_{\delta}(O_{k})}V(O_{1},y).$
∎
###### Lemma 10.16.
Given $\delta>0$ sufficiently small, and for any distribution
$\lambda^{\varepsilon}$ on $\partial B_{\delta}(O_{1}),$ there exist
$\tilde{c}>0$ and $\varepsilon_{0}\in(0,1)$ such that
$P_{\lambda^{\varepsilon}}(\tau_{1}^{\varepsilon}/E_{\lambda^{\varepsilon}}\tau_{1}^{\varepsilon}>t)\leq
e^{-\tilde{c}t}$
for all $t\geq 1$ and $\varepsilon\in(0,\varepsilon_{0}).$
###### Proof.
For any $t>0,$
$P_{\lambda^{\varepsilon}}(\tau_{1}^{\varepsilon}/E_{\lambda^{\varepsilon}}\tau_{1}^{\varepsilon}>t)\leq
P_{\lambda^{\varepsilon}}(\upsilon_{1}^{\varepsilon}/E_{\lambda^{\varepsilon}}\tau_{1}^{\varepsilon}>t/2)+P_{\lambda^{\varepsilon}}((\tau_{1}^{\varepsilon}-\upsilon_{1}^{\varepsilon})/E_{\lambda^{\varepsilon}}\tau_{1}^{\varepsilon}>t/2)$.
It is easy to see that the first term has this sort of bound due to Lemma
10.13 and $E_{\lambda^{\varepsilon}}\tau_{1}^{\varepsilon}\geq
E_{\lambda^{\varepsilon}}\upsilon_{1}^{\varepsilon}.$
It suffices to show that this sort of bound holds for the second term, namely,
there exists a constant $\tilde{c}>0$ such that
$P_{\lambda^{\varepsilon}}\left((\tau_{1}^{\varepsilon}-\upsilon_{1}^{\varepsilon})/E_{\lambda^{\varepsilon}}\tau_{1}^{\varepsilon}>t\right)\leq
e^{-\tilde{c}t}$
for all $t\in[0,\infty)$ and $\varepsilon$ sufficiently small. By Chebyshev’s
inequality,
$P_{\lambda^{\varepsilon}}\left((\tau_{1}^{\varepsilon}-\upsilon_{1}^{\varepsilon})/E_{\lambda^{\varepsilon}}\tau_{1}^{\varepsilon}>t\right)=P_{\lambda^{\varepsilon}}(e^{(\tau_{1}^{\varepsilon}-\upsilon_{1}^{\varepsilon})/E_{\lambda^{\varepsilon}}\tau_{1}^{\varepsilon}}>e^{t})\leq
e^{-t}E_{\lambda^{\varepsilon}}e^{(\tau_{1}^{\varepsilon}-\upsilon_{1}^{\varepsilon})/E_{\lambda^{\varepsilon}}\tau_{1}^{\varepsilon}},$
and it therefore suffices to prove that
$E_{\lambda^{\varepsilon}}e^{(\tau_{1}^{\varepsilon}-\upsilon_{1}^{\varepsilon})/E_{\lambda^{\varepsilon}}\tau_{1}^{\varepsilon}}$
is less than a constant for all $\varepsilon$ sufficiently small. Observe that
$\tau_{1}^{\varepsilon}-\upsilon_{1}^{\varepsilon}=\sum\nolimits_{j\in
L\setminus\\{1\\}}\sum\nolimits_{k=1}^{N_{j}}\upsilon_{j}^{\varepsilon}(k),$
where $N_{j}$ is the number of visits to $\partial B_{\delta}(O_{j})$, and
$\upsilon_{j}^{\varepsilon}(k)$ is the $k$-th copy of the first hitting time
to $\cup_{i\in L\setminus\\{j\\}}\partial B_{\delta}(O_{i})$ after starting
from $\partial B_{\delta}(O_{j}).$
If we consider $\partial B_{\delta}(O_{j})$ as the starting location of a
regenerative cycle, as was done previously in the paper for $\partial
B_{\delta}(O_{1})$, then there will be a unique stationary distribution, and
if the process starts with that as the initial distribution then the times
$\upsilon_{j}^{\varepsilon}(k)$ are independent from each other and from the
number of returns to $\partial B_{\delta}(O_{j})$ before first visiting
$\partial B_{\delta}(O_{1})$. While these random times as used here do not
arise from starting with such a distribution, we can use Lemma 10.12 to bound
the error in terms of a multiplicative factor that is independent of
$\varepsilon$ for small $\varepsilon>0$, and thereby justify treating $N_{j}$
as though it is independent of the $\upsilon_{j}^{\varepsilon}(k)$.
Recalling that $l\doteq|L|$,
$\displaystyle
E_{\lambda^{\varepsilon}}e^{(\tau_{1}^{\varepsilon}-\upsilon_{1}^{\varepsilon})/E_{\lambda^{\varepsilon}}\tau_{1}^{\varepsilon}}$
$\displaystyle=E_{\lambda^{\varepsilon}}\prod\nolimits_{j\in
L\setminus\\{1\\}}e^{\left(\sum_{k=1}^{N_{j}}\upsilon_{j}^{\varepsilon}(k)\right)/E_{\lambda^{\varepsilon}}\tau_{1}^{\varepsilon}}$
$\displaystyle\leq\prod\nolimits_{j\in
L\setminus\\{1\\}}\left(E_{\lambda^{\varepsilon}}\left[e^{\left(\sum_{k=1}^{N_{j}}\upsilon_{j}^{\varepsilon}(k)\right)(l-1)/E_{\lambda^{\varepsilon}}\tau_{1}^{\varepsilon}}\right]\right)^{1/(l-1)},$
where we use the generalized Hölder’s inequality for the last line. Thus, if
we can show for each $j\in L\setminus\\{1\\}$ that
$E_{\lambda^{\varepsilon}}\exp[{(\sum_{k=1}^{N_{j}}\upsilon_{j}^{\varepsilon}(k))(l-1)/E_{\lambda^{\varepsilon}}\tau_{1}^{\varepsilon}}]$
is less than a constant for all $\varepsilon$ sufficiently small, then we are
done.
Such an estimate is straightforward for the case of an unstable equilibrium,
i.e., for $j\in L\backslash L_{\rm{s}}$, and so we focus on the case $j\in
L_{\rm{s}}\backslash\\{1\\}$. For this case, we apply Lemma 10.13 to find that
there exists $\tilde{{c}}>0$ and $\varepsilon_{0}\in(0,1)$ such that for any
$j\in L$ and any distribution $\tilde{\lambda}^{\varepsilon}$ on $\partial
B_{\delta}(O_{j}),$
$P_{\tilde{\lambda}^{\varepsilon}}(\upsilon_{j}^{\varepsilon}/E_{\tilde{\lambda}^{\varepsilon}}\upsilon_{j}^{\varepsilon}>t)\leq
e^{-\tilde{{c}}t}$ (10.14)
for any $t>0$ and $\varepsilon\in(0,\varepsilon_{0}).$ Hence, given any
$\eta>0$, there is $\bar{\varepsilon}_{0}\in(0,\varepsilon_{0})$ such that for
all $\varepsilon\in(0,\bar{\varepsilon}_{0})$ and any $j\in L\setminus\\{1\\}$
$\displaystyle
E_{\lambda^{\varepsilon}}\left[e^{\upsilon_{j}^{\varepsilon}(l-1)/E_{\lambda^{\varepsilon}}\tau_{1}^{\varepsilon}}\right]$
$\displaystyle\leq
1+\int_{1}^{\infty}P_{\lambda^{\varepsilon}}(e^{(l-1)\upsilon_{j}^{\varepsilon}/E_{\lambda^{\varepsilon}}\tau_{1}^{\varepsilon}}>t)dt$
$\displaystyle\leq
1+\int_{1}^{\infty}P_{\lambda^{\varepsilon}}\left(\upsilon_{j}^{\varepsilon}/E_{\lambda^{\varepsilon}}\upsilon_{j}^{\varepsilon}>\log
t\cdot
E_{\lambda^{\varepsilon}}\tau_{1}^{\varepsilon}/((l-1)E_{\lambda^{\varepsilon}}\upsilon_{j}^{\varepsilon})\right)dt$
$\displaystyle\leq
1+\int_{1}^{\infty}t^{-\tilde{{c}}E_{\lambda^{\varepsilon}}\tau_{1}^{\varepsilon}/((l-1)E_{\lambda^{\varepsilon}}\upsilon_{j}^{\varepsilon})}dt$
$\displaystyle=1+\left(\tilde{{c}}E_{\lambda^{\varepsilon}}\tau_{1}^{\varepsilon}/((l-1)E_{\lambda^{\varepsilon}}\upsilon_{j}^{\varepsilon})-1\right)^{-1}\leq
1+e^{-\frac{1}{\varepsilon}(h_{1}-h_{j}-\eta)},$
where the last inequality comes from Lemma 10.2 and Lemma 10.15, after
shrinking the range of $\varepsilon$ if necessary.
By using induction and a conditioning argument, it follows that for any
$\eta>0$, for any $j\in L\setminus\\{1\\}$ and for any $n\in\mathbb{N},$
$E_{\lambda^{\varepsilon}}\left[e^{\left(\sum_{k=1}^{n}\upsilon_{j}^{\varepsilon}(k)\right)(l-1)/E_{\lambda^{\varepsilon}}\tau_{1}^{\varepsilon}}\right]\leq\left(1+e^{-\frac{1}{\varepsilon}(h_{1}-h_{j}-\eta)}\right)^{n}.$
This implies that
$E_{\lambda^{\varepsilon}}\left[e^{\left(\sum_{k=1}^{N_{j}}\upsilon_{j}^{\varepsilon}(k)\right)(l-1)/E_{\lambda^{\varepsilon}}\tau_{1}^{\varepsilon}}\right]\leq
E_{\lambda^{\varepsilon}}\left[\left(1+e^{-\frac{1}{\varepsilon}(h_{1}-h_{j}-\eta)}\right)^{N_{j}}\right].$
We next need the distribution of $N_{j}$, i.e.,
$P_{\lambda^{\varepsilon}}(N_{j}=n)$ for $n\in\mathbb{N}$. Following
arguments similar to those in the proofs of Lemmas 7.3 and 7.6, for
sufficiently small $\varepsilon>0$ we find
$\displaystyle P_{\lambda^{\varepsilon}}(N_{j}=n)$
$\displaystyle\leq\left(1\wedge e^{-\frac{1}{\varepsilon}(W(O_{j})-W(O_{1}\cup
O_{j})-h_{1}-\eta)}\right)(1-q_{j})^{n-1}q_{j},$
where
$\displaystyle q_{j}\doteq\frac{\inf_{x\in\partial
B_{\delta}(O_{j})}P_{x}(\tilde{{T_{1}}}<{\tilde{T}}_{j}^{+})}{1-\sup_{y\in\partial
B_{\delta}(O_{j})}p(y,\partial B_{\delta}(O_{j}))}\geq
e^{-\frac{1}{\varepsilon}(W(O_{1})-W(O_{1}\cup O_{j})-h_{j}+\eta)}.$ (10.15)
Therefore,
$\displaystyle
E_{\lambda^{\varepsilon}}\left[e^{\left(\sum_{k=1}^{N_{j}}\upsilon_{j}^{\varepsilon}(k)\right)(l-1)/E_{\lambda^{\varepsilon}}\tau_{1}^{\varepsilon}}\right]$
$\displaystyle\quad\leq
E_{\lambda^{\varepsilon}}\left[\left(1+e^{-\frac{1}{\varepsilon}(h_{1}-h_{j}-\eta)}\right)^{N_{j}}\right]=\sum\nolimits_{n=1}^{\infty}\left(1+e^{-\frac{1}{\varepsilon}(h_{1}-h_{j}-\eta)}\right)^{n}P_{\lambda^{\varepsilon}}(N_{j}=n)$
$\displaystyle\quad\leq\sum_{n=1}^{\infty}\left(1\wedge
e^{-\frac{1}{\varepsilon}(W(O_{j})-W(O_{1}\cup
O_{j})-h_{1}-\eta)}\right)\left(1+e^{-\frac{1}{\varepsilon}(h_{1}-h_{j}-\eta)}\right)^{n}(1-q_{j})^{n-1}q_{j}$
$\displaystyle\quad=\frac{\left(1\wedge
e^{-\frac{1}{\varepsilon}(W(O_{j})-W(O_{1}\cup
O_{j})-h_{1}-\eta)}\right)q_{j}\left(1+e^{-\frac{1}{\varepsilon}(h_{1}-h_{j}-\eta)}\right)}{1-\left(1+e^{-\frac{1}{\varepsilon}(h_{1}-h_{j}-\eta)}\right)(1-q_{j})}$
$\displaystyle\quad\leq\frac{\left(1\wedge
e^{-\frac{1}{\varepsilon}(W(O_{j})-W(O_{1}\cup
O_{j})-h_{1}-\eta)}\right)\left(1+e^{-\frac{1}{\varepsilon}(h_{1}-h_{j}-\eta)}\right)}{-e^{-\frac{1}{\varepsilon}(h_{1}-h_{j}-\eta)}/q_{j}+1}$
$\displaystyle\quad\leq\frac{\left(1\wedge
e^{-\frac{1}{\varepsilon}(W(O_{j})-W(O_{1}\cup
O_{j})-h_{1}-\eta)}\right)\left(1+e^{-\frac{1}{\varepsilon}(h_{1}-h_{j}-\eta)}\right)}{-e^{-\frac{1}{\varepsilon}(h_{1}+W(O_{1}\cup
O_{j})-W(O_{1})-2\eta)}+1}.$
The second equality holds since $h_{1}>w\geq h_{j}$ and (10.15) imply
$(1-q_{j})(1+e^{-\frac{1}{\varepsilon}(h_{1}-h_{j}-\eta)})<1$ for all
$\varepsilon$ sufficiently small; the last inequality is from (10.15).
Then we use the fact that for $x\in(0,1/2)$, $1/(1-x)\leq 1+2x$ to find that
$\displaystyle
E_{\lambda^{\varepsilon}}\left[e^{\left(\sum_{k=1}^{N_{j}}\upsilon_{j}^{\varepsilon}(k)\right)(l-1)/E_{\lambda^{\varepsilon}}\tau_{1}^{\varepsilon}}\right]$
$\displaystyle\leq\left(1\wedge e^{-\frac{1}{\varepsilon}(W(O_{j})-W(O_{1}\cup
O_{j})-h_{1}-\eta)}\right)\left(1+e^{-\frac{1}{\varepsilon}(h_{1}-h_{j}-\eta)}\right)\left(1+2e^{-\frac{1}{\varepsilon}(h_{1}+W(O_{1}\cup
O_{j})-W(O_{1})-2\eta)}\right)$ $\displaystyle\leq\left(1\wedge
e^{-\frac{1}{\varepsilon}(W(O_{j})-W(O_{1}\cup
O_{j})-h_{1}-\eta)}\right)\left(1+5e^{-\frac{1}{\varepsilon}(h_{1}+W(O_{1}\cup
O_{j})-W(O_{1})-2\eta)}\right)$ (10.16) $\displaystyle\leq 1\cdot 6=6.$
The third inequality holds due to the fact that $W(O_{1})\geq W(O_{1}\cup
O_{j})+h_{j}$, and the last inequality comes from the assumption that
$h_{1}>w$, upon picking $\eta$ smaller than $(h_{1}-w)/2$. This completes the
proof. ∎
###### Lemma 10.17.
Given $\delta>0$ sufficiently small, and for any distribution
$\lambda^{\varepsilon}$ on $\partial B_{\delta}(O_{1})$,
$\tau_{1}^{\varepsilon}/E_{\lambda^{\varepsilon}}\tau_{1}^{\varepsilon}$
converges in distribution to an Exp(1) random variable under
$P_{\lambda^{\varepsilon}}.$ Moreover,
$E_{\lambda^{\varepsilon}}(e^{it\left(\tau_{1}^{\varepsilon}/E_{\lambda^{\varepsilon}}\tau_{1}^{\varepsilon}\right)})\rightarrow
1/(1-it)$ uniformly on any compact set in $\mathbb{R}$.
###### Proof.
Note that
$E_{\lambda^{\varepsilon}}\left(e^{it\left(\tau_{1}^{\varepsilon}/E_{\lambda^{\varepsilon}}\tau_{1}^{\varepsilon}\right)}\right)=E_{\lambda^{\varepsilon}}\left(e^{it\left(\upsilon_{1}^{\varepsilon}/E_{\lambda^{\varepsilon}}\tau_{1}^{\varepsilon}\right)}E_{X^{\varepsilon}(\upsilon_{1}^{\varepsilon})}\left(e^{it\left((\tau_{1}^{\varepsilon}-\upsilon_{1}^{\varepsilon})/E_{\lambda^{\varepsilon}}\tau_{1}^{\varepsilon}\right)}\right)\right).$
Since
$E_{\lambda^{\varepsilon}}\left(e^{it\left(\upsilon_{1}^{\varepsilon}/E_{\lambda^{\varepsilon}}\tau_{1}^{\varepsilon}\right)}\right)=E_{\lambda^{\varepsilon}}\left(e^{it\left(E_{\lambda^{\varepsilon}}\upsilon_{1}^{\varepsilon}/E_{\lambda^{\varepsilon}}\tau_{1}^{\varepsilon}\right)\left(\upsilon_{1}^{\varepsilon}/E_{\lambda^{\varepsilon}}\upsilon_{1}^{\varepsilon}\right)}\right)$
and we know that
$E_{\lambda^{\varepsilon}}\upsilon_{1}^{\varepsilon}/E_{\lambda^{\varepsilon}}\tau_{1}^{\varepsilon}\rightarrow
1$ from the proof of Lemma 10.15, by applying Lemma 10.14 we have
$E_{\lambda^{\varepsilon}}(e^{it\left(\upsilon_{1}^{\varepsilon}/E_{\lambda^{\varepsilon}}\tau_{1}^{\varepsilon}\right)})\rightarrow
1/(1-it)$ uniformly on any compact set in $\mathbb{R}$. Also
$\displaystyle\left|E_{\lambda^{\varepsilon}}\left(e^{it\left(\tau_{1}^{\varepsilon}/E_{\lambda^{\varepsilon}}\tau_{1}^{\varepsilon}\right)}\right)-E_{\lambda^{\varepsilon}}\left(e^{it\left(\upsilon_{1}^{\varepsilon}/E_{\lambda^{\varepsilon}}\tau_{1}^{\varepsilon}\right)}\right)\right|\leq
E_{\lambda^{\varepsilon}}\left|E_{X^{\varepsilon}(\upsilon_{1}^{\varepsilon})}\left(e^{it\left((\tau_{1}^{\varepsilon}-\upsilon_{1}^{\varepsilon})/E_{\lambda^{\varepsilon}}\tau_{1}^{\varepsilon}\right)}\right)-1\right|,$
where the right hand side converges to $0$ using
$E_{\lambda^{\varepsilon}}(\tau_{1}^{\varepsilon}-\upsilon_{1}^{\varepsilon})/E_{\lambda^{\varepsilon}}\tau_{1}^{\varepsilon}\rightarrow
0$ and the dominated convergence theorem. The convergence of
$\tau_{1}^{\varepsilon}/E_{\lambda^{\varepsilon}}\tau_{1}^{\varepsilon}$ to an
Exp(1) random variable under $P_{\lambda^{\varepsilon}}$ and uniform
convergence of
$E_{\lambda^{\varepsilon}}(e^{it\left(\tau_{1}^{\varepsilon}/E_{\lambda^{\varepsilon}}\tau_{1}^{\varepsilon}\right)})$
to $1/(1-it)$ on compact sets in $\mathbb{R}$ follows. ∎
### 10.7 Return times (multicycles)
In this subsection, we extend all three results to return times for
multicycles (i.e., when $w\geq h_{1}$). Recall that the multicycle times
$\hat{\tau}^{\varepsilon}_{i}$ are defined according to (6.4) where
$\\{\mathbf{M}^{\varepsilon}_{i}\\}_{i\in\mathbb{N}}$ is a sequence of
independent and geometrically distributed random variables with parameter
$e^{-m/\varepsilon}$ for some $m>0$ such that $m+h_{1}>w$. In addition,
$\\{\mathbf{M}^{\varepsilon}_{i}\\}$ is independent of
$\\{\tau^{\varepsilon}_{n}\\}$.
###### Lemma 10.18.
There exists $\delta_{0}\in(0,1)$ such that for any $\delta\in(0,\delta_{0})$
and any distribution $\lambda^{\varepsilon}$ on $\partial B_{\delta}(O_{1}),$
$\lim_{\varepsilon\rightarrow 0}\varepsilon\log
E_{\lambda^{\varepsilon}}\hat{\tau}_{1}^{\varepsilon}=m+\min_{y\in\cup_{k\in
L\setminus\\{1\\}}\partial B_{\delta}(O_{k})}V(O_{1},y).$
###### Proof.
Since $\\{\mathbf{M}^{\varepsilon}_{i}\\}$ is independent of
$\\{\tau^{\varepsilon}_{n}\\}$ and
$E_{\lambda^{\varepsilon}}\mathbf{M}^{\varepsilon}_{i}=e^{m/\varepsilon}$, we
apply Lemma 10.15 to complete the proof. ∎
###### Lemma 10.19.
Given $\delta>0,$ for any distribution $\lambda^{\varepsilon}$ on $\partial
B_{\delta}(O_{1}),$ there exist $\tilde{c}>0$ and $\varepsilon_{0}\in(0,1)$
such that
$P_{\lambda^{\varepsilon}}(\hat{\tau}_{1}^{\varepsilon}/E_{\lambda^{\varepsilon}}\hat{\tau}_{1}^{\varepsilon}>t)\leq
e^{-\tilde{c}t}$
for all $t\geq 1$ and $\varepsilon\in(0,\varepsilon_{0}).$
###### Proof.
We divide the multicycle into a sum of two terms. The first term is the sum of
all the hitting times to $\cup_{j\in L\setminus\\{1\\}}\partial
B_{\delta}(O_{j})$, and the second term is the sum of all residual times. That
is,
$\hat{\tau}_{1}^{\varepsilon}=\hat{\upsilon}_{1}^{\varepsilon}+(\hat{\tau}_{1}^{\varepsilon}-\hat{\upsilon}_{1}^{\varepsilon})$,
where
$\hat{\upsilon}_{1}^{\varepsilon}=\sum\nolimits_{i=1}^{\mathbf{M}^{\varepsilon}_{1}}\upsilon_{1}^{\varepsilon}(i)\text{
and
}\hat{\tau}_{1}^{\varepsilon}-\hat{\upsilon}_{1}^{\varepsilon}=\sum\nolimits_{i=1}^{\mathbf{M}^{\varepsilon}_{1}}\left(\sum\nolimits_{j\in
L\setminus\\{1\\}}\sum\nolimits_{k=1}^{N_{j}}\upsilon_{j}^{\varepsilon}(i,k)\right).$
As in previous arguments, it suffices to show that there exist $\tilde{c}>0$
and $\varepsilon_{0}\in(0,1)$ such that
$P_{\lambda^{\varepsilon}}\left(\hat{\upsilon}_{1}^{\varepsilon}/E_{\lambda^{\varepsilon}}\hat{\tau}_{1}^{\varepsilon}>t\right)\leq
e^{-\tilde{c}t}\text{ and
}P_{\lambda^{\varepsilon}}\left((\hat{\tau}_{1}^{\varepsilon}-\hat{\upsilon}_{1}^{\varepsilon})/E_{\lambda^{\varepsilon}}\hat{\tau}_{1}^{\varepsilon}>t\right)\leq
e^{-\tilde{c}t}$
for all $t\geq 1$ and $\varepsilon\in(0,\varepsilon_{0}).$
The first bound is relatively easy, since
$\hat{\upsilon}_{1}^{\varepsilon}=\sum_{i=1}^{\mathbf{M}^{\varepsilon}_{1}}\upsilon_{1}^{\varepsilon}(i)$
is a geometric sum of approximate exponentials, each with a tail bound of the
given sort, and since the sum of geometrically many independent and
identically distributed exponential random variables is again exponentially
distributed.
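For the record, the exact version of this fact is elementary: if $\tau_{1},\tau_{2},\dots$ are i.i.d. Exp$(\lambda)$ and $\mathbf{M}$ is geometric with success parameter $p$, independent of the $\tau_{n}$, then
$E\,e^{-s\sum_{n=1}^{\mathbf{M}}\tau_{n}}=\sum\nolimits_{k=1}^{\infty}\left(\frac{\lambda}{\lambda+s}\right)^{k}(1-p)^{k-1}p=\frac{p\lambda}{p\lambda+s},\qquad s\geq 0,$
so that $\sum_{n=1}^{\mathbf{M}}\tau_{n}$ is Exp$(p\lambda)$. The proof of Lemma 10.20 below carries out the same computation with characteristic functions, keeping track of the approximation errors.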
For the second bound, we use Chebyshev’s inequality again as in the proof of
Lemma 10.16 to find that it suffices to prove that
$E_{\lambda^{\varepsilon}}e^{(\hat{\tau}_{1}^{\varepsilon}-\hat{\upsilon}_{1}^{\varepsilon})/E_{\lambda^{\varepsilon}}\hat{\tau}_{1}^{\varepsilon}}$
is less than a constant for all $\varepsilon$ sufficiently small. Now due to
the independence of $\mathbf{M}^{\varepsilon}_{1}$ and
$\\{\upsilon_{j}^{\varepsilon}(i,k)\\}$, we have
$\displaystyle
E_{\lambda^{\varepsilon}}e^{(\hat{\tau}_{1}^{\varepsilon}-\hat{\upsilon}_{1}^{\varepsilon})/E_{\lambda^{\varepsilon}}\hat{\tau}_{1}^{\varepsilon}}$
$\displaystyle\qquad=\sum\nolimits_{i=1}^{\infty}\left(E_{\lambda^{\varepsilon}}\left[\prod\nolimits_{j\in
L\setminus\\{1\\}}e^{\left(\sum_{k=1}^{N_{j}}\upsilon_{j}^{\varepsilon}(k)\right)/E_{\lambda^{\varepsilon}}\hat{\tau}_{1}^{\varepsilon}}\right]\right)^{i}\cdot
P_{\lambda^{\varepsilon}}(\mathbf{M}^{\varepsilon}_{1}=i)$
$\displaystyle\qquad=e^{-m/\varepsilon}\cdot
E_{\lambda^{\varepsilon}}\left[\prod\nolimits_{j\in
L\setminus\\{1\\}}e^{\left(\sum_{k=1}^{N_{j}}\upsilon_{j}^{\varepsilon}(k)\right)/E_{\lambda^{\varepsilon}}\hat{\tau}_{1}^{\varepsilon}}\right]$
$\displaystyle\qquad\qquad\cdot\sum\nolimits_{i=1}^{\infty}\left(E_{\lambda^{\varepsilon}}\left[\prod\nolimits_{j\in
L\setminus\\{1\\}}e^{\left(\sum_{k=1}^{N_{j}}\upsilon_{j}^{\varepsilon}(k)\right)/E_{\lambda^{\varepsilon}}\hat{\tau}_{1}^{\varepsilon}}\right](1-e^{-m/\varepsilon})\right)^{i-1}.$
(10.17)
To do a further computation, we have to at least make sure that
$E_{\lambda^{\varepsilon}}\left[\prod\nolimits_{j\in
L\setminus\\{1\\}}e^{\left(\sum_{k=1}^{N_{j}}\upsilon_{j}^{\varepsilon}(k)\right)/E_{\lambda^{\varepsilon}}\hat{\tau}_{1}^{\varepsilon}}\right](1-e^{-m/\varepsilon})<1.$
(10.18)
To see this, we first use the generalized Hölder’s inequality to find
$\displaystyle E_{\lambda^{\varepsilon}}\left[\prod\nolimits_{j\in
L\setminus\\{1\\}}e^{\left(\sum_{k=1}^{N_{j}}\upsilon_{j}^{\varepsilon}(k)\right)/E_{\lambda^{\varepsilon}}\hat{\tau}_{1}^{\varepsilon}}\right]\leq\prod\nolimits_{j\in
L\setminus\\{1\\}}\left(E_{\lambda^{\varepsilon}}\left[e^{\left(\sum_{k=1}^{N_{j}}\upsilon_{j}^{\varepsilon}(k)\right)(l-1)/E_{\lambda^{\varepsilon}}\hat{\tau}_{1}^{\varepsilon}}\right]\right)^{1/(l-1)}.$
Moreover, since $m+h_{1}>w$ and
$E_{\lambda^{\varepsilon}}\hat{\tau}_{1}^{\varepsilon}=E_{\lambda^{\varepsilon}}\tau_{1}^{\varepsilon}\cdot
E_{\lambda^{\varepsilon}}\mathbf{M}^{\varepsilon}_{1}=e^{m/\varepsilon}E_{\lambda^{\varepsilon}}\tau_{1}^{\varepsilon}$,
by the same argument that gives (10.16), for any $\eta>0$ and $j\in
L\setminus\\{1\\}$
$\displaystyle
E_{\lambda^{\varepsilon}}\left[e^{\left(\sum_{k=1}^{N_{j}}\upsilon_{j}^{\varepsilon}(k)\right)(l-1)/E_{\lambda^{\varepsilon}}\hat{\tau}_{1}^{\varepsilon}}\right]$
$\displaystyle\quad\leq\left(1\wedge
e^{-\frac{1}{\varepsilon}(W(O_{j})-W(O_{1}\cup
O_{j})-h_{1}-\eta)}\right)\left(1+5e^{-\frac{1}{\varepsilon}(m+h_{1}+W(O_{1}\cup
O_{j})-W(O_{1})-2\eta)}\right).$
Therefore,
$\displaystyle E_{\lambda^{\varepsilon}}\left[\prod\nolimits_{j\in
L\setminus\\{1\\}}e^{\left(\sum_{k=1}^{N_{j}}\upsilon_{j}^{\varepsilon}(k)\right)/E_{\lambda^{\varepsilon}}\hat{\tau}_{1}^{\varepsilon}}\right](1-e^{-m/\varepsilon})\leq\prod\nolimits_{j\in
L\setminus\\{1\\}}s_{j}^{1/(l-1)},$
with
$\displaystyle s_{j}\doteq\left(1\wedge
e^{-\frac{1}{\varepsilon}(W(O_{j})-W(O_{1}\cup
O_{j})-h_{1}-\eta)}\right)\left(1+5e^{-\frac{1}{\varepsilon}(m+h_{1}+W(O_{1}\cup
O_{j})-W(O_{1})-2\eta)}\right)(1-e^{-m/\varepsilon}).$
Using $(a\wedge b)(c+d)\leq ac+bd$ for positive numbers $a,b,c,d$,
$s_{j}\leq\left(1+5e^{-\frac{1}{\varepsilon}(m+W(O_{j})-W(O_{1})-3\eta)}\right)(1-e^{-m/\varepsilon})\leq
1-e^{-m/\varepsilon}/2,$
where we use $W(O_{j})>W(O_{1})$ for the second inequality, shrinking the
range of $\varepsilon$ if necessary. Thus (10.18) holds, and by (10.17)
$E_{\lambda^{\varepsilon}}e^{(\hat{\tau}_{1}^{\varepsilon}-\hat{\upsilon}_{1}^{\varepsilon})/E_{\lambda^{\varepsilon}}\hat{\tau}_{1}^{\varepsilon}}\leq
e^{-m/\varepsilon}\cdot
2\sum_{i=1}^{\infty}\left(1-e^{-m/\varepsilon}/2\right)^{i-1}=\frac{2e^{-m/\varepsilon}}{1-\left(1-e^{-m/\varepsilon}/2\right)}=4.$
This completes the proof. ∎
###### Lemma 10.20.
Given $\delta>0,$ for any distribution $\lambda^{\varepsilon}$ on $\partial
B_{\delta}(O_{1})$,
$\hat{\tau}_{1}^{\varepsilon}/E_{\lambda^{\varepsilon}}\hat{\tau}_{1}^{\varepsilon}$
converges in distribution to an Exp(1) random variable under
$P_{\lambda^{\varepsilon}}.$
###### Proof.
Let $\mathbf{M}$ be a geometrically distributed random variable with
parameter $p\in(0,1)$, and assume that it is independent of
$\\{\tau^{\varepsilon}_{n}\\}$. Then
$E_{\lambda^{\varepsilon}}\left(\sum\nolimits_{n=1}^{\mathbf{M}}\tau^{\varepsilon}_{n}\right)=E_{\lambda^{\varepsilon}}\tau^{\varepsilon}_{1}/p$
and
$\displaystyle
E_{\lambda^{\varepsilon}}e^{it\left(\left(p\sum_{n=1}^{\mathbf{M}}\tau^{\varepsilon}_{n}\right)/E_{\lambda^{\varepsilon}}\tau^{\varepsilon}_{1}\right)}=\sum_{k=1}^{\infty}\left(E_{\lambda^{\varepsilon}}e^{it\left(p\tau^{\varepsilon}_{1}/E_{\lambda^{\varepsilon}}\tau^{\varepsilon}_{1}\right)}\right)^{k}(1-p)^{k-1}p=\frac{pE_{\lambda^{\varepsilon}}e^{it\left(p\tau^{\varepsilon}_{1}/E_{\lambda^{\varepsilon}}\tau^{\varepsilon}_{1}\right)}}{1-(1-p)E_{\lambda^{\varepsilon}}e^{it\left(p\tau^{\varepsilon}_{1}/E_{\lambda^{\varepsilon}}\tau^{\varepsilon}_{1}\right)}}.$
Given any fixed $t\in\mathbb{R}$, consider
$f_{\varepsilon}(p)=\frac{pE_{\lambda^{\varepsilon}}e^{it\left(p\tau^{\varepsilon}_{1}/E_{\lambda^{\varepsilon}}\tau^{\varepsilon}_{1}\right)}}{1-(1-p)E_{\lambda^{\varepsilon}}e^{it\left(p\tau^{\varepsilon}_{1}/E_{\lambda^{\varepsilon}}\tau^{\varepsilon}_{1}\right)}}\text{
and }f(p)=\frac{1}{1-it}.$
According to Lemma 10.17,
$E_{\lambda^{\varepsilon}}e^{it\left(\tau^{\varepsilon}_{1}/E_{\lambda^{\varepsilon}}\tau^{\varepsilon}_{1}\right)}\rightarrow
1/(1-it)$ uniformly on any compact set in $\mathbb{R}$. This implies that
$f_{\varepsilon}(p)=\frac{pE_{\lambda^{\varepsilon}}e^{it\left(p\tau^{\varepsilon}_{1}/E_{\lambda^{\varepsilon}}\tau^{\varepsilon}_{1}\right)}}{1-(1-p)E_{\lambda^{\varepsilon}}e^{it\left(p\tau^{\varepsilon}_{1}/E_{\lambda^{\varepsilon}}\tau^{\varepsilon}_{1}\right)}}\rightarrow\frac{p/(1-itp)}{1-(1-p)/(1-itp)}=\frac{1}{1-it}=f(p)$
uniformly on $p\in(0,1)$. Therefore, if we consider $p^{\varepsilon}\doteq
e^{-m/{\varepsilon}}\rightarrow 0$, it follows from the uniform (in $p$)
convergence that
$E_{\lambda^{\varepsilon}}e^{it(\hat{\tau}_{1}^{\varepsilon}/E_{\lambda^{\varepsilon}}\hat{\tau}_{1}^{\varepsilon})}=f_{\varepsilon}(p^{\varepsilon})\rightarrow
f(0)=\frac{1}{1-it}.$
This completes the proof. ∎
## 11 Sketch of the Proof of Conjecture 4.11 for a Special Case
In this section we outline the proof of the upper bound on the decay rate
(giving a lower bound on the variance per unit time) that complements Theorem
4.5 for a special case. Consider $U:\mathbb{R}\rightarrow\mathbb{R}$ as shown
in Figure 3.
Figure 3: Asymmetric well for lower bound
In particular, assume $U$ is a bounded $C^{2}$ function satisfying the
following conditions:
###### Condition 11.1.
* •
$U$ is defined on a compact interval
$D\doteq[\bar{x}_{L},\bar{x}_{R}]\subset\mathbb{R}$ and extends periodically
as a $C^{2}$ function.
* •
$U$ has two local minima at $x_{L}$ and $x_{R}$ with values
$U(x_{L})<U(x_{R})$ and $[x_{L}-\delta,x_{R}+\delta]\subset D$ for some
$\delta>0$.
* •
$U$ has one local maximum at $0\in(x_{L},x_{R})$.
* •
$U(x_{L})=0,$ $U(0)=h_{L}$ and $U(x_{R})=h_{L}-h_{R}>0.$
* •
$\inf_{x\in\partial D}U(x)>h_{L}.$
Consider the diffusion process $\\{X^{\varepsilon}_{t}\\}_{t\geq 0}$
satisfying the stochastic differential equation
$dX^{\varepsilon}_{t}=-\nabla
U\left(X^{\varepsilon}_{t}\right)dt+\sqrt{2\varepsilon}dW_{t},$ (11.1)
where $W$ is a $1$-dimensional standard Wiener process. Then there are just
two stable equilibrium points $O_{1}=x_{L}$ and $O_{2}=x_{R}$, and one
unstable equilibrium point $O_{3}=0.$ Moreover, one easily finds that
$V(O_{1},O_{2})=h_{L}$ and $V(O_{2},O_{1})=h_{R},$ and these give that
$W(O_{1})=V(O_{2},O_{1})$, $W(O_{2})=V(O_{1},O_{2})$ and $W\left(O_{1}\cup
O_{2}\right)=0$ (since $L_{\rm{s}}=\\{1,2\\},$ this implies that
$G_{\rm{s}}(1)=\\{(2\rightarrow 1)\\}$ and $G_{\rm{s}}(2)=\\{(1\rightarrow
2)\\}$). Another observation is that
$h_{1}\doteq\min_{\ell\in\mathcal{M}\setminus\\{1\\}}V\left(O_{1},O_{\ell}\right)=V\left(O_{1},O_{3}\right)=h_{L}$
in this model.
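Before turning to the case analysis, a simulation sketch of (11.1) may help fix ideas. The quartic potential and the window $A$ below are illustrative stand-ins consistent with the shape required by Condition 11.1, not the exact $U$ of the figure; the script estimates the variance per unit time of the occupation fraction of $A$ by non-overlapping batch means.

```python
import numpy as np

rng = np.random.default_rng(2)

def grad_U(x):
    # U(x) = x^4/4 - x^2/2 + 0.15 x: wells near x ~ -1 (deep) and x ~ +1
    # (shallow), so U(x_L) < U(x_R) as in Condition 11.1.
    return x**3 - x + 0.15

eps, dt, T = 0.15, 1e-3, 2_000.0
n = int(T / dt)
x = -1.0                               # start in the deeper well
occ = np.zeros(n, dtype=bool)
for i in range(n):
    x += -grad_U(x) * dt + np.sqrt(2.0 * eps * dt) * rng.standard_normal()
    occ[i] = 0.8 < x < 1.2             # the set A: a window in the shallow well

# Variance per unit time of the occupation fraction, via batch means.
n_batches = 100
batches = occ.reshape(n_batches, -1).mean(axis=1)
var_per_unit_time = (T / n_batches) * batches.var(ddof=1)
print(var_per_unit_time)
```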
If $f\equiv 0,$ then one obtains
$\displaystyle R_{1}^{(1)}$ $\displaystyle\doteq\inf_{y\in
A}V(O_{1},y)+W(O_{1})-W(O_{1})=\inf_{y\in A}V(O_{1},y);$ $\displaystyle
R_{1}^{(2)}$ $\displaystyle\doteq 2\inf_{y\in
A}V\left(O_{1},y\right)-h_{1}=2\inf_{y\in A}V\left(O_{1},y\right)-h_{L};$
$\displaystyle R_{2}^{(1)}$ $\displaystyle\doteq\inf_{y\in
A}V(O_{2},y)+W(O_{2})-W(O_{1})=\inf_{y\in A}V(O_{2},y)+h_{L}-h_{R};$
$\displaystyle R_{2}^{(2)}$ $\displaystyle\doteq 2\inf_{y\in
A}V\left(O_{2},y\right)+W\left(O_{2}\right)-2W\left(O_{1}\right)+0-W\left(O_{1}\cup
O_{2}\right)$ $\displaystyle=2\inf_{y\in
A}V\left(O_{2},y\right)+h_{L}-2h_{R}.$
Let $A\subset[0,\bar{x}_{R}]$ and assume that it contains a nonempty open
interval, so that we are computing approximations to probabilities that are
small under the stationary distribution (the case of bounded and continuous
$f$ can be dealt with by approximation, as in the case of the upper bound on
the decay rate). We first compute the bounds one would obtain from Theorem
4.5.
Case I. If $x_{R}\in A,$ then $\inf_{y\in A}V(O_{1},y)=h_{L}$ and $\inf_{y\in
A}V\left(O_{2},y\right)=0.$ Thus the decay rate of variance per unit time is
bounded below by
$\displaystyle\min_{j=1,2}\left[R_{j}^{(1)}\wedge
R_{j}^{(2)}\right]=\min\left\\{h_{L},h_{L}-2h_{R}\right\\}=h_{L}-2h_{R}.$
Case II. If $A\subset[0,x_{R}-\delta]$ for some $\delta>0$ and $\delta<x_{R},$
then $\inf_{y\in A}V(O_{1},y)=h_{L}$ and $\inf_{y\in
A}V\left(O_{2},y\right)>0$ (we denote it by $b\in(0,h_{R}]$). Thus the decay
rate of variance per unit time is bounded below by
$\displaystyle\min_{j=1,2}\left[R_{j}^{(1)}\wedge
R_{j}^{(2)}\right]=\min\left\\{h_{L},h_{L}+2\left(b-h_{R}\right)\right\\}=h_{L}+2\left(b-h_{R}\right).$
Case III. If $A\subset[x_{R}+\delta,x^{\ast}]$ with $U(x^{\ast})=h_{L}$ for
some $\delta>0$ and $\delta<x^{\ast}-x_{R},$ then $\inf_{y\in
A}V(O_{1},y)=h_{L}+\inf_{y\in A}V\left(O_{2},y\right)$ and $\inf_{y\in
A}V\left(O_{2},y\right)>0$ (we denote it by $b\in(0,h_{R}]$). Thus the decay
rate of variance per unit time is bounded below by
$\displaystyle\min_{j=1,2}\left[R_{j}^{(1)}\wedge
R_{j}^{(2)}\right]=\min\left\\{h_{L}+b,h_{L}+2\left(b-h_{R}\right)\right\\}=h_{L}+2\left(b-h_{R}\right).$
Case IV. If $A\subset[x^{\ast}+\delta,\bar{x}_{R}]$ with $U(x^{\ast})=h_{L}$
for some $\delta>0$ and $x^{\ast}>x_{R},$ then $\inf_{y\in
A}V(O_{1},y)=h_{L}+\inf_{y\in A}V\left(O_{2},y\right)$ and $\inf_{y\in
A}V\left(O_{2},y\right)>0$ (we denote it by $\bar{b}>h_{R}$). Thus the decay
rate of variance per unit time is bounded below by
$\displaystyle\min_{j=1,2}\left[R_{j}^{(1)}\wedge
R_{j}^{(2)}\right]=\min\left\\{h_{L}+\bar{b},h_{L}+\left(\bar{b}-h_{R}\right)\right\\}=h_{L}+\left(\bar{b}-h_{R}\right).$
To find an upper bound for the decay rate of variance per unit time, we recall
that
$\frac{1}{T^{\varepsilon}}\sum_{j=1}^{N^{\varepsilon}\left(T^{\varepsilon}\right)-1}\int_{\tau_{j-1}^{\varepsilon}}^{\tau_{j}^{\varepsilon}}1_{A}\left(X_{t}^{\varepsilon}\right)dt\leq\frac{1}{T^{\varepsilon}}\int_{0}^{T^{\varepsilon}}1_{A}\left(X_{t}^{\varepsilon}\right)dt\leq\frac{1}{T^{\varepsilon}}\sum_{j=1}^{N^{\varepsilon}\left(T^{\varepsilon}\right)}\int_{\tau_{j-1}^{\varepsilon}}^{\tau_{j}^{\varepsilon}}1_{A}\left(X_{t}^{\varepsilon}\right)dt$
with $\tau_{j}^{\varepsilon}$ being the $j$-th regenerative cycle. In Case I,
one might guess that
$\int_{\tau_{j-1}^{\varepsilon}}^{\tau_{j}^{\varepsilon}}1_{A}\left(X_{t}^{\varepsilon}\right)dt$
(11.2)
has approximately the same distribution as the exit time from the shallow
well, which has been shown to asymptotically have an exponential distribution
with parameter $\exp(-h_{R}/\varepsilon).$ Additionally, since the exit time
from the shallower well is exponentially smaller than
$\tau_{j}^{\varepsilon},$ it suggests that the random variables (11.2) can be
taken as independent of $N^{\varepsilon}\left(T^{\varepsilon}\right)$ when
$\varepsilon$ is small. We also know that
$EN^{\varepsilon}\left(T^{\varepsilon}\right)/T^{\varepsilon}\approx
1/E\tau_{1}^{\varepsilon}\approx\exp\left(-h_{L}(\delta)/\varepsilon\right),$
where $h_{L}(\delta)\uparrow h_{L}$ as $\delta\downarrow 0$ and $\approx$
means that quantities on either side have the same exponential decay rate.
Using Jensen’s inequality to find that
$E[N^{\varepsilon}(T^{\varepsilon})]^{2}\geq[EN^{\varepsilon}(T^{\varepsilon})]^{2}$
and then applying Wald’s identity, we obtain
$\displaystyle
T^{\varepsilon}\mathrm{Var}\left(\frac{1}{T^{\varepsilon}}\int_{0}^{T^{\varepsilon}}1_{A}\left(X_{t}^{\varepsilon}\right)dt\right)$
$\displaystyle\approx\frac{1}{T^{\varepsilon}}E\left[\sum\nolimits_{j=1}^{N^{\varepsilon}\left(T^{\varepsilon}\right)}\int_{\tau_{j-1}^{\varepsilon}}^{\tau_{j}^{\varepsilon}}1_{A}\left(X_{t}^{\varepsilon}\right)dt-
EN^{\varepsilon}\left(T^{\varepsilon}\right)E\left(\int_{\tau_{j-1}^{\varepsilon}}^{\tau_{j}^{\varepsilon}}1_{A}\left(X_{t}^{\varepsilon}\right)dt\right)\right]^{2}$
$\displaystyle=\frac{1}{T^{\varepsilon}}E\left(\sum\nolimits_{j=1}^{N^{\varepsilon}\left(T^{\varepsilon}\right)}\int_{\tau_{j-1}^{\varepsilon}}^{\tau_{j}^{\varepsilon}}1_{A}\left(X_{t}^{\varepsilon}\right)dt\right)^{2}-\frac{1}{T^{\varepsilon}}(E(N^{\varepsilon}(T^{\varepsilon})))^{2}\left(E\left(\int_{\tau_{j-1}^{\varepsilon}}^{\tau_{j}^{\varepsilon}}1_{A}\left(X_{t}^{\varepsilon}\right)dt\right)\right)^{2}$
$\displaystyle=\frac{1}{T^{\varepsilon}}EN^{\varepsilon}\left(T^{\varepsilon}\right)E\left(\int_{\tau_{j-1}^{\varepsilon}}^{\tau_{j}^{\varepsilon}}1_{A}\left(X_{t}^{\varepsilon}\right)dt\right)^{2}-\frac{1}{T^{\varepsilon}}\left[EN^{\varepsilon}\left(T^{\varepsilon}\right)\right]^{2}\left(E\left(\int_{\tau_{j-1}^{\varepsilon}}^{\tau_{j}^{\varepsilon}}1_{A}\left(X_{t}^{\varepsilon}\right)dt\right)\right)^{2}$
$\displaystyle\quad+\frac{1}{T^{\varepsilon}}\left(E\left[N^{\varepsilon}\left(T^{\varepsilon}\right)\right]^{2}-EN^{\varepsilon}\left(T^{\varepsilon}\right)\right)\left(E\left(\int_{\tau_{j-1}^{\varepsilon}}^{\tau_{j}^{\varepsilon}}1_{A}\left(X_{t}^{\varepsilon}\right)dt\right)\right)^{2}$
$\displaystyle\geq\frac{EN^{\varepsilon}\left(T^{\varepsilon}\right)}{T^{\varepsilon}}\mathrm{Var}\left(\int_{\tau_{j-1}^{\varepsilon}}^{\tau_{j}^{\varepsilon}}1_{A}\left(X_{t}^{\varepsilon}\right)dt\right)$
(11.3)
$\displaystyle\approx\exp\left(-h_{L}(\delta)/\varepsilon\right)\cdot\exp(2h_{R}/\varepsilon)=\exp(\left(2h_{R}-h_{L}(\delta)\right)/\varepsilon).$
Letting $\delta\rightarrow 0$, we see that the decay rate of variance per unit
time is bounded above by $h_{L}-2h_{R}$, which is the same as lower bound
found for Case I.
For the other three Cases II, III and IV, the process spends only a very small
fraction of the time while in the shallower well in the set $A$. In fact,
using the stopping time arguments of the sort that appear in [12, Chapter 4],
the event that the process enters $A$ during an excursion away from the
neighborhood of $x_{R}$ can be accurately approximated (as far as large
deviation behavior goes) using independent Bernoulli random variables
$\\{B^{\varepsilon}_{i}\\}$ with success parameter $e^{-b/\varepsilon}$, and
when this occurs the process spends an order one amount of time in $A$ before
returning to the neighborhood of $x_{R}$. There is however another sequence of
independent Bernoulli random variables with success parameter
$e^{-h_{R}/\varepsilon}$, and the process accumulates time in $A$ only up
until the time of the first success of this sequence.
Then
$\mathrm{Var}(\int_{\tau_{j-1}^{\varepsilon}}^{\tau_{j}^{\varepsilon}}1_{A}\left(X_{t}^{\varepsilon}\right)dt)$
has the same logarithmic asymptotics as
$\mathrm{Var}(\sum\nolimits_{i=1}^{R^{\varepsilon}}1_{\\{B^{\varepsilon}_{i}=1\\}}),$
where $R^{\varepsilon}$ is geometric with success parameter
$e^{-h_{R}/\varepsilon}$ and independent of the $\\{B^{\varepsilon}_{i}\\}$.
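Concretely, write $S^{\varepsilon}\doteq\sum\nolimits_{i=1}^{R^{\varepsilon}}1_{\\{B^{\varepsilon}_{i}=1\\}}$, $p\doteq e^{-b/\varepsilon}$ and $q\doteq e^{-h_{R}/\varepsilon}$ (with $\bar{b}$ in place of $b$ for Case IV). The variance decomposition for a random sum whose number of terms is independent of the summands gives
$\mathrm{Var}(S^{\varepsilon})=E[R^{\varepsilon}]\mathrm{Var}(1_{\\{B^{\varepsilon}_{1}=1\\}})+\mathrm{Var}(R^{\varepsilon})p^{2}=\frac{p(1-p)}{q}+\frac{(1-q)p^{2}}{q^{2}}.$
For $b<h_{R}$ the second term dominates, so $\mathrm{Var}(S^{\varepsilon})\approx e^{2(h_{R}-b)/\varepsilon}$, while for $\bar{b}>h_{R}$ the first term dominates and $\mathrm{Var}(S^{\varepsilon})\approx e^{(h_{R}-\bar{b})/\varepsilon}$.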
Straightforward calculation using Wald’s identity then gives the exponential
rate of decay $2h_{R}-2b$ for Cases II and III, and $h_{R}-\bar{b}$ for Case
IV, so according to (11.3) we obtain
$\displaystyle
T^{\varepsilon}\mathrm{Var}\left(\frac{1}{T^{\varepsilon}}\int_{0}^{T^{\varepsilon}}1_{A}\left(X_{t}^{\varepsilon}\right)dt\right)\geq\frac{EN^{\varepsilon}\left(T^{\varepsilon}\right)}{T^{\varepsilon}}\mathrm{Var}\left(\int_{\tau_{j-1}^{\varepsilon}}^{\tau_{j}^{\varepsilon}}1_{A}\left(X_{t}^{\varepsilon}\right)dt\right)\approx
e^{\left[\left(2\left(h_{R}-b\right)-h_{L}(\delta)\right)/\varepsilon\right]}$
for Cases II and III and
$\displaystyle
T^{\varepsilon}\mathrm{Var}\left(\frac{1}{T^{\varepsilon}}\int_{0}^{T^{\varepsilon}}1_{A}\left(X_{t}^{\varepsilon}\right)dt\right)\geq\frac{EN^{\varepsilon}\left(T^{\varepsilon}\right)}{T^{\varepsilon}}\mathrm{Var}\left(\int_{\tau_{j-1}^{\varepsilon}}^{\tau_{j}^{\varepsilon}}1_{A}\left(X_{t}^{\varepsilon}\right)dt\right)\approx
e^{\left[\left((h_{R}-\bar{b})-h_{L}(\delta)\right)/\varepsilon\right]}$
for Case IV.
Letting $\delta\rightarrow 0$, this means that the decay rate of variance per
unit time is bounded above by $h_{L}+2\left(b-h_{R}\right)$ for Cases II and
III, and by $h_{L}+(\bar{b}-h_{R})$ for Case IV, which is again the same as
the corresponding lower bound.
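As a quick numerical aside (a minimal sketch, not part of the proof: it assumes exactly the reduced model above, with $R^{\varepsilon}$ geometric with success parameter $e^{-h_{R}/\varepsilon}$ independent of i.i.d. Bernoulli variables with success parameter $e^{-b/\varepsilon}$, and made-up values $h_{R}=1$, $b=0.4$), the variance of $\sum\nolimits_{i=1}^{R^{\varepsilon}}1_{\\{B^{\varepsilon}_{i}=1\\}}$ is available in closed form from the law of total variance, and its logarithmic asymptotics match the rate $2(h_{R}-b)$ used above:

```python
import numpy as np

# Sketch: S = sum_{i=1}^R 1_{B_i=1}, R ~ Geometric(p) on {1,2,...} independent
# of iid Bernoulli(q).  The law of total variance gives
#   Var(S) = E[R] Var(B) + Var(R) (E[B])^2 = q(1-q)/p + (1-p) q**2 / p**2.
h_R, b = 1.0, 0.4                      # toy values with b < h_R (Cases II, III)
for eps in [0.08, 0.04, 0.02, 0.01]:
    p, q = np.exp(-h_R / eps), np.exp(-b / eps)
    var = q * (1.0 - q) / p + (1.0 - p) * q**2 / p**2
    print(eps, eps * np.log(var))      # -> 2*(h_R - b) = 1.2 as eps -> 0
```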
Acknowledgement. We thank the referee for corrections and suggestions that
improved this paper.
## References
* [1] D. Aldous and J. Fill. Reversible Markov chains and random walks on graphs (monograph), 2002. Available at https://www.stat.berkeley.edu/users/aldous/RWG/book.pdf.
* [2] D.G. Aronson. Bounds for the fundamental solution of a parabolic equation. Bulletin of the American Mathematical Society, 73(6):890–896, 1967.
* [3] S. Asmussen and P.W. Glynn. Stochastic Simulation: Algorithms and Analysis. Applications of Mathematics. Springer Science+Business Media, LLC, 2007.
* [4] A. Budhiraja and P. Dupuis. Analysis and Approximation of Rare Events: Representations and Weak Convergence Methods. Number 94 in Probability Theory and Stochastic Modelling. Springer-Verlag, New York, 2019.
* [5] P. Collet, S. Martínez, and J. San Martín. Quasi-Stationary Distributions. Springer-Verlag, Berlin, 2013.
* [6] M.V. Day. On the exponential exit law in the small parameter exit problem. Stochastics, 8(4):297–323, 1983.
* [7] M.D. Donsker and S.R.S. Varadhan. Asymptotic evaluation of certain Markov process expectations for large time, I. Comm. Pure Appl. Math., 28:1–47, 1975.
* [8] M.D. Donsker and S.R.S. Varadhan. Asymptotic evaluation of certain Markov process expectations for large time, III. Comm. Pure Appl. Math., 29:389–461, 1976.
* [9] P. Dupuis, Y. Liu, N. Plattner, and J.D. Doll. On the infinite swapping limit for parallel tempering. SIAM J. Multiscale Model. Simul., 10:986–1022, 2012.
* [10] P. Dupuis and G.-J. Wu. Analysis and optimization of certain parallel Monte Carlo methods in the low temperature limit. Working paper, 2020.
* [11] W. Feller. An Introduction to Probability Theory and Its Applications, Vol. 2. John Wiley, New York, 1971.
* [12] M.I. Freidlin and A.D. Wentzell. Random Perturbations of Dynamical Systems. Springer-Verlag, New York, third edition, 2012.
* [13] D. Gilbarg and N.S. Trudinger. Elliptic Partial Differential Equations of Second Order. Springer-Verlag, Berlin, second edition, 1983.
* [14] T.E. Harris. The Theory of Branching Processes. Die Grundlehren der Mathematischen Wissenschaften in Einzeldarstellungen. Springer-Verlag, 1963.
* [15] J.D. Doll, P. Dupuis, and P. Nyquist. A large deviations analysis of certain qualitative properties of parallel tempering and infinite swapping algorithms. Applied Math. and Opt., 78:103–144, 2018.
* [16] R. Khasminskii. Stochastic Stability of Differential Equations. Springer Berlin Heidelberg, second edition, 2012.
* [17] N. Limnios and G. Oprisan. Semi-Markov Processes and Reliability. Statistics for Industry and Technology. Birkhäuser Boston, 2001.
* [18] S. Ross. Applied Probability Models with Optimization Applications. Dover Publications, 1992.
* [19] A. Shwartz and A. Weiss. Large Deviations for Performance Analysis: Queues, Communication and Computing. Chapman and Hall, New York, 1995.
## 12 Appendix
###### Proof of Lemma 7.8.
Given a function $g$, we define the notation
$I(t_{1},t_{2};g)\doteq\int_{t_{1}}^{t_{2}}g(X^{\varepsilon}_{s})ds,$
for any $0\leq t_{1}\leq t_{2}$. By definition
$\tau_{1}^{\varepsilon}=\tau_{N}$, and we observe that
$\displaystyle I(0,\tau_{N};g)$
$\displaystyle=\sum\nolimits_{\ell=1}^{N}I(\tau_{\ell-1},\tau_{\ell};g)=\sum\nolimits_{\ell=1}^{\infty}I(\tau_{\ell-1},\tau_{\ell};g)\cdot
1_{\left\\{\ell\leq N\right\\}}$
$\displaystyle=\sum\nolimits_{\ell=1}^{\infty}I(\tau_{\ell-1},\tau_{\ell};g)\cdot
1_{\left\\{\ell\leq\hat{N}\right\\}}+\sum\nolimits_{j\in
L\setminus\left\\{1\right\\}}\sum\nolimits_{\ell=1}^{\infty}\left(I(\tau_{\ell-1},\tau_{\ell};g)\cdot
1_{\left\\{\hat{N}+1\leq\ell\leq N,Z_{\ell-1}\in\partial
B_{\delta}(O_{j})\right\\}}\right),$
where the second equality uses that $Z_{\ell-1}\in{\textstyle\cup_{j\in
L\setminus\\{1\\}}}\partial B_{\delta}(O_{j})$ whenever $\hat{N}+1\leq\ell\leq
N$.
Since $\hat{N}$ and $N$ are stopping times with respect to the filtration
$\\{\mathcal{G}_{n}\\}_{n}$, we have
$\\{\ell\leq\hat{N}\\}=\\{\hat{N}\leq\ell-1\\}^{c}\in\mathcal{G}_{\ell-1}$ and
$\\{\hat{N}+1\leq\ell\leq N,Z_{\ell-1}\in\partial
B_{\delta}(O_{j})\\}\in\mathcal{G}_{\ell-1}.$ Let
$\mathfrak{S}_{1}=\sum_{\ell=1}^{\infty}I(\tau_{\ell-1},\tau_{\ell};g)\cdot
1_{\left\\{\ell\leq\hat{N}\right\\}}\text{ and
}\mathfrak{S}_{j}=\sum_{\ell=1}^{\infty}\left(I(\tau_{\ell-1},\tau_{\ell};g)\cdot
1_{\left\\{\hat{N}+1\leq\ell\leq N,Z_{\ell-1}\in\partial
B_{\delta}(O_{j})\right\\}}\right)$
for all $j\in L\setminus\left\\{1\right\\}.$ We find
$\displaystyle E_{x}\left(\mathfrak{S}_{1}\right)$
$\displaystyle=\sum\nolimits_{\ell=1}^{\infty}E_{x}\left(E_{x}\left[\left.I(\tau_{\ell-1},\tau_{\ell};g)\cdot
1_{\left\\{\ell\leq\hat{N}\right\\}}\right|\mathcal{G}_{\ell-1}\right]\right)$
$\displaystyle=\sum\nolimits_{\ell=1}^{\infty}E_{x}\left(1_{\left\\{\ell\leq\hat{N}\right\\}}E_{Z_{\ell-1}}\left[I(0,\tau_{1};g)\right]\right)$
$\displaystyle\leq\sup\nolimits_{y\in\partial
B_{\delta}(O_{1})}E_{y}\left[I(0,\tau_{1};g)\right]\cdot\left(\sum\nolimits_{\ell=1}^{\infty}P_{x}(\hat{N}\geq\ell)\right).$
In addition, for $j\in L\setminus\left\\{1\right\\},$
$\displaystyle E_{x}\left(\mathfrak{S}_{j}\right)$
$\displaystyle=\sum\nolimits_{\ell=1}^{\infty}E_{x}\left(I(\tau_{\ell-1},\tau_{\ell};g)\cdot
1_{\left\\{\hat{N}+1\leq\ell\leq N,Z_{\ell-1}\in\partial
B_{\delta}(O_{j})\right\\}}\right)$
$\displaystyle=\sum\nolimits_{\ell=1}^{\infty}E_{x}\left(E_{x}\left[\left.I(\tau_{\ell-1},\tau_{\ell};g)\cdot
1_{\left\\{\hat{N}+1\leq\ell\leq N,Z_{\ell-1}\in\partial
B_{\delta}(O_{j})\right\\}}\right|\mathcal{G}_{\ell-1}\right]\right)$
$\displaystyle=\sum\nolimits_{\ell=1}^{\infty}E_{x}\left(1_{\left\\{\hat{N}+1\leq\ell\leq
N,Z_{\ell-1}\in\partial
B_{\delta}(O_{j})\right\\}}E_{Z_{\ell-1}}\left[I(0,\tau_{1};g)\right]\right)$
$\displaystyle\leq\sup\nolimits_{y\in\partial
B_{\delta}(O_{j})}E_{y}\left[I(0,\tau_{1};g)\right]\cdot\left(\sum\nolimits_{\ell=1}^{\infty}E_{x}\left(1_{\left\\{\hat{N}+1\leq\ell\leq
N,Z_{\ell-1}\in\partial B_{\delta}(O_{j})\right\\}}\right)\right).$
It is straightforward to see that $\hat{N}=N_{1}$. This implies that
$\sum\nolimits_{\ell=1}^{\infty}P_{x}(\hat{N}\geq\ell)=E_{x}\hat{N}=E_{x}N_{1}.$
Moreover, observe that for any $j\in L\setminus\left\\{1\right\\}$
$\sum_{\ell=1}^{\infty}1_{\left\\{\hat{N}+1\leq\ell\leq
N,Z_{\ell-1}\in\partial B_{\delta}(O_{j})\right\\}}=N_{j},$ which gives that
$\sum_{\ell=1}^{\infty}E_{x}(1_{\\{\hat{N}+1\leq\ell\leq
N,Z_{\ell-1}\in\partial B_{\delta}(O_{j})\\}})=E_{x}N_{j}.$ Hence,
$\displaystyle E_{x}\left(I(0,\tau_{N};g)\right)=\sum\nolimits_{j\in
L}E_{x}\left(\mathfrak{S}_{j}\right)\leq\sum\nolimits_{j\in
L}\left[\sup\nolimits_{y\in\partial
B_{\delta}(O_{j})}E_{y}\left(I(0,\tau_{1};g)\right)\right]\cdot E_{x}N_{j}.$
∎
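To make the bookkeeping above concrete, the following toy computation (a sketch only: the three-state embedded chain, its transition matrix, and the exponential excursion rewards are all made up for illustration) checks the Wald-type relation behind the final bound, namely that the mean reward accumulated up to $N$ equals $\sum_{j}E[\text{reward from }j]\,E_{x}N_{j}$; the displayed inequality then holds with equality because the excursion means here depend only on the current state:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy embedded chain Z on three "boundary sets" {0, 1, 2}, with state 0
# playing the role of the boundary of O_1; the excursion integrals
# I(tau_{l-1}, tau_l; g) are mocked by exponential rewards whose mean
# depends only on Z_{l-1}.
P = np.array([[0.0, 0.6, 0.4],
              [0.5, 0.0, 0.5],
              [0.7, 0.3, 0.0]])
mean_I = np.array([1.0, 2.0, 3.0])

def one_cycle():
    """Accumulate rewards until N: the first return to 0 after visiting {1, 2}."""
    z, total, visits, left = 0, 0.0, np.zeros(3), False
    while True:
        visits[z] += 1                        # counts N_j (initial state included)
        total += rng.exponential(mean_I[z])   # excursion started from z
        z = rng.choice(3, p=P[z])
        if z != 0:
            left = True
        elif left:
            return total, visits

tot, vis = zip(*(one_cycle() for _ in range(100_000)))
print(np.mean(tot), float(mean_I @ np.mean(vis, axis=0)))   # nearly equal
```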
###### Proof of Lemma 7.9.
Let $l=|L|$. For any $j\in L$ and $n\in\mathbb{N}$, let
$\xi_{1}^{(j)}\doteq\inf\\{k\in\mathbb{N}_{0}:$ $Z_{k}\in\partial
B_{\delta}(O_{j})\\}$ and
$\xi_{n}^{(j)}\doteq\inf\\{k\in\mathbb{N}:k>\xi_{n-1}^{(j)}$ and
$Z_{k}\in\partial B_{\delta}(O_{j})\\}$, i.e., $\xi_{n}^{(j)}$ is the $n$-th
hitting time of $\partial B_{\delta}(O_{j}).$ Moreover, we define
$N^{(j)}=\inf\\{n\in\mathbb{N}:\xi_{n}^{(j)}\geq N\\},$ recalling that
$N\doteq\inf\\{n\geq\hat{N}:Z_{n}\in\partial B_{\delta}(O_{1})\\}$ and
$\hat{N}\doteq\inf\\{n\in\mathbb{N}:Z_{n}\in{\textstyle\cup_{j\in
L\setminus\\{1\\}}}\partial B_{\delta}(O_{j})\\}.$ Since $\xi_{n}^{(j)}$ is a
stopping time with respect to $\\{\mathcal{G}_{n}\\}_{n},$ we can define the
filtration $\\{\mathcal{G}_{\xi_{n}^{(j)}}\\},$ and one can verify that
$N^{(j)}$ is a stopping time with respect to
$\\{\mathcal{G}_{\xi_{n}^{(j)}}\\}_{n}.$ As in the proof just given, for any
function $g$ and for any $0\leq t_{1}\leq t_{2}$ we define
$I(t_{1},t_{2};g)\doteq\int_{t_{1}}^{t_{2}}g(X^{\varepsilon}_{s})ds$. With
this notation and since by definition $\tau_{1}^{\varepsilon}=\tau_{N}$, we
can write
$I(0,\tau_{N};g)=\sum\nolimits_{j\in
L}\sum\nolimits_{\ell=1}^{\infty}I(\tau_{\xi_{\ell}^{(j)}},\tau_{\xi_{\ell}^{(j)}+1};g)\cdot
1_{\left\\{\ell\leq N^{(j)}-1\right\\}}.$
Since $(x_{1}+\cdots+x_{l})^{2}\leq l(x_{1}^{2}+\cdots+x_{l}^{2})$ for any
$(x_{1},\ldots,x_{l})\in\mathbb{R}^{l}$ and $l\in\mathbb{N},$
$\displaystyle I(0,\tau_{N};g)^{2}$ $\displaystyle\leq l\sum\nolimits_{j\in
L}\left(\sum\nolimits_{\ell=1}^{\infty}I(\tau_{\xi_{\ell}^{(j)}},\tau_{\xi_{\ell}^{(j)}+1};g)\cdot
1_{\left\\{\ell\leq N^{(j)}-1\right\\}}\right)^{2}.$
Now for any $j\in L$, each squared term on the right can be written as the
sum of two parts: the first is the sum of
$I(\tau_{\xi_{\ell}^{(j)}},\tau_{\xi_{\ell}^{(j)}+1};g)^{2}\cdot
1_{\\{\ell\leq N^{(j)}-1\\}}$ over all $\ell$, and the second is twice the
sum of $I(\tau_{\xi_{\ell}^{(j)}},\tau_{\xi_{\ell}^{(j)}+1};g)\cdot
1_{\\{\ell\leq
N^{(j)}-1\\}}I(\tau_{\xi_{k}^{(j)}},\tau_{\xi_{k}^{(j)}+1};g)\cdot
1_{\\{k\leq N^{(j)}-1\\}}$ over $k,\ell$ with $k<\ell$. For the expected
value of the first sum, since $\\{\ell\leq
N^{(j)}-1\\}=\\{N^{(j)}\leq\ell\\}^{c}\in\mathcal{G}_{\xi_{\ell}^{(j)}}$, we
have
$\displaystyle\sum_{\ell=1}^{\infty}E_{x}\left[I(\tau_{\xi_{\ell}^{(j)}},\tau_{\xi_{\ell}^{(j)}+1};g)^{2}1_{\left\\{\ell\leq
N^{(j)}-1\right\\}}\right]$
$\displaystyle=\sum_{\ell=1}^{\infty}E_{x}\left[1_{\left\\{\ell\leq
N^{(j)}-1\right\\}}E_{x}\left[\left.I(\tau_{\xi_{\ell}^{(j)}},\tau_{\xi_{\ell}^{(j)}+1};g)^{2}\right|\mathcal{G}_{\xi_{\ell}^{(j)}}\right]\right]$
$\displaystyle\leq\sup_{y\in\partial
B_{\delta}(O_{j})}E_{y}I(0,\tau_{1};g)^{2}\sum\nolimits_{\ell=1}^{\infty}P_{x}(N^{(j)}-1\geq\ell)$
$\displaystyle=\sup_{y\in\partial
B_{\delta}(O_{j})}E_{y}I(0,\tau_{1};g)^{2}E_{x}(N_{j}).$
The last equality holds since $N^{(j)}-1=N_{j}$ (recall that $N_{j}$ is the
number of visits of $\\{Z_{n}\\}_{n\in\mathbb{N}_{0}}$ to $\partial
B_{\delta}(O_{j})$ before $N$, including the initial position), which implies
that
$\sum_{\ell=1}^{\infty}P_{x}(N^{(j)}-1\geq\ell)=\sum_{\ell=1}^{\infty}P_{x}(N_{j}\geq\ell)=E_{x}(N_{j}).$
Turning to the expected value of the second sum, conditioning on
$\mathcal{G}_{\xi_{\ell}^{(j)}}$ gives
$\displaystyle\sum_{\ell=2}^{\infty}\sum_{k=1}^{\ell-1}E_{x}\left[I(\tau_{\xi_{\ell}^{(j)}},\tau_{\xi_{\ell}^{(j)}+1};g)\cdot
1_{\left\\{\ell\leq
N^{(j)}-1\right\\}}I(\tau_{\xi_{k}^{(j)}},\tau_{\xi_{k}^{(j)}+1};g)\cdot
1_{\left\\{k\leq N^{(j)}-1\right\\}}\right]$
$\displaystyle\leq\sup_{y\in\partial
B_{\delta}(O_{j})}E_{y}\left(I(0,\tau_{1};g)\right)\sum_{\ell=2}^{\infty}\sum_{k=1}^{\ell-1}E_{x}\left[1_{\left\\{\ell\leq
N^{(j)}-1\right\\}}I(\tau_{\xi_{k}^{(j)}},\tau_{\xi_{k}^{(j)}+1};g)\cdot
1_{\left\\{k\leq N^{(j)}-1\right\\}}\right].$
Now since for any $k\leq\ell-1,$ i.e. $k+1\leq\ell$,
$I(\tau_{\xi_{k}^{(j)}},\tau_{\xi_{k}^{(j)}+1};g)\in\mathcal{G}_{\xi_{k}^{(j)}+1}\text{
and }1_{\left\\{\ell\leq
N^{(j)}-1\right\\}}\in\mathcal{G}_{\xi_{\ell}^{(j)}},$ we have
$\displaystyle E_{x}\left[1_{\left\\{\ell\leq
N^{(j)}-1\right\\}}I(\tau_{\xi_{k}^{(j)}},\tau_{\xi_{k}^{(j)}+1};g)\cdot
1_{\left\\{k\leq N^{(j)}-1\right\\}}\right]$
$\displaystyle=E_{x}\left[I(\tau_{\xi_{k}^{(j)}},\tau_{\xi_{k}^{(j)}+1};g)\cdot
1_{\\{\xi_{1}^{(j)}<N,\ldots,\xi_{\ell}^{(j)}<N\\}}\right]$
$\displaystyle=E_{x}\left[E_{Z_{\xi_{k+1}^{(j)}}}\left[1_{\\{\xi_{1}^{(j)}<N,\ldots,\xi_{\ell-k}^{(j)}<N\\}}\right]1_{\\{\xi_{k+1}^{(j)}<N\\}}I(\tau_{\xi_{k}^{(j)}},\tau_{\xi_{k}^{(j)}+1};g)\cdot
1_{\\{\xi_{1}^{(j)}<N,\ldots,\xi_{k}^{(j)}<N\\}}\right]$
$\displaystyle=E_{x}\left[E_{Z_{\xi_{k+1}^{(j)}}}\left[1_{\left\\{\ell-k\leq
N^{(j)}-1\right\\}}\right]1_{\\{\xi_{k+1}^{(j)}<N\\}}I(\tau_{\xi_{k}^{(j)}},\tau_{\xi_{k}^{(j)}+1};g)\cdot
1_{\left\\{k\leq N^{(j)}-1\right\\}}\right]$
$\displaystyle\leq\sup\nolimits_{y\in\partial
B_{\delta}(O_{j})}P_{y}(\ell-k\leq
N^{(j)}-1)E_{x}\left[I(\tau_{\xi_{k}^{(j)}},\tau_{\xi_{k}^{(j)}+1};g)\cdot
1_{\left\\{k\leq N^{(j)}-1\right\\}}\right]$
$\displaystyle=\sup\nolimits_{y\in\partial
B_{\delta}(O_{j})}P_{y}\left(\ell-k\leq
N_{j}\right)E_{x}\left[E_{x}\left[\left.I(\tau_{\xi_{k}^{(j)}},\tau_{\xi_{k}^{(j)}+1};g)\right|\mathcal{G}_{\xi_{k}^{(j)}}\right]\cdot
1_{\left\\{k\leq N^{(j)}-1\right\\}}\right]$
$\displaystyle\leq\sup\nolimits_{y\in\partial
B_{\delta}(O_{j})}E_{y}\left(I(0,\tau_{1};g)\right)\cdot\sup\nolimits_{y\in\partial
B_{\delta}(O_{j})}P_{y}\left(\ell-k\leq N_{j}\right)\cdot P_{x}\left(k\leq
N_{j}\right).$
This gives that the expected value of the second sum is less than or equal to
$\displaystyle\left(\sup\nolimits_{y\in\partial
B_{\delta}(O_{j})}E_{y}I(0,\tau_{1};g)\right)^{2}\sum\nolimits_{\ell=2}^{\infty}\sum\nolimits_{k=1}^{\ell-1}\sup\nolimits_{y\in\partial
B_{\delta}(O_{j})}P_{y}\left(\ell-k\leq N_{j}\right)\cdot P_{x}\left(k\leq
N_{j}\right)$ $\displaystyle\qquad=\left(\sup\nolimits_{y\in\partial
B_{\delta}(O_{j})}E_{y}I(0,\tau_{1};g)\right)^{2}\sum\nolimits_{k=1}^{\infty}\sup\nolimits_{y\in\partial
B_{\delta}(O_{j})}P_{y}\left(k\leq N_{j}\right)\cdot E_{x}N_{j}.$
Therefore, putting the estimates together gives
$\displaystyle E_{x}I(0,\tau_{1}^{\varepsilon};g)^{2}$ $\displaystyle\leq
2l\sum_{j\in L}\left[\sup_{y\in\partial
B_{\delta}(O_{j})}E_{y}I(0,\tau_{1};g)\right]^{2}\cdot
E_{x}N_{j}\cdot\sum_{\ell=1}^{\infty}\sup_{y\in\partial
B_{\delta}(O_{j})}P_{y}\left(\ell\leq N_{j}\right)$
$\displaystyle\quad+l\sum_{j\in L}\left[\sup_{y\in\partial
B_{\delta}(O_{j})}E_{y}I(0,\tau_{1};g)^{2}\right]\cdot E_{x}N_{j}.$
∎
###### Proof of Lemma 9.2.
The main idea of the proof comes from [18, Theorem 3.16].
Given any $\varepsilon>0,$ we define $g^{\varepsilon}\left(t\right)\doteq
E_{\lambda^{\varepsilon}}S_{N^{\varepsilon}\left(t\right)}^{\varepsilon}$ for
any $t\geq 0.$ Conditioning on $\tau_{1}^{\varepsilon}$ yields
$g^{\varepsilon}\left(t\right)=\int_{0}^{\infty}E_{\lambda^{\varepsilon}}[S_{N^{\varepsilon}\left(t\right)}^{\varepsilon}|\tau_{1}^{\varepsilon}=x]dF^{\varepsilon}\left(x\right),$
where $F^{\varepsilon}\left(\cdot\right)$ is the distribution function of
$\tau_{1}^{\varepsilon}.$ Note that
$E_{\lambda^{\varepsilon}}\left[S_{N^{\varepsilon}\left(t\right)}^{\varepsilon}|\tau_{1}^{\varepsilon}=x\right]=\left\\{\begin{array}[c]{c}g^{\varepsilon}\left(t-x\right)\text{
if }x\leq t\\\
E_{\lambda^{\varepsilon}}\left[S_{1}^{\varepsilon}|\tau_{1}^{\varepsilon}=x\right]\text{
if }x>t\end{array}\right.,$
which implies
$g^{\varepsilon}\left(t\right)=\int_{0}^{t}g^{\varepsilon}\left(t-x\right)dF^{\varepsilon}\left(x\right)+h^{\varepsilon}\left(t\right),$
with
$h^{\varepsilon}\left(t\right)=\int_{t}^{\infty}E_{\lambda^{\varepsilon}}\left[S_{1}^{\varepsilon}|\tau_{1}^{\varepsilon}=x\right]dF^{\varepsilon}\left(x\right).$
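This is a standard renewal equation. As a self-contained numerical aside (illustration only: the Exp(1) inter-arrival law and the forcing $h(t)=e^{-t}$ are toy inputs unrelated to the process studied here), it can be solved by simple time stepping and compared against the solution formula $g(t)=h(t)+\int_{0}^{t}h(t-x)\,da(x)$ invoked below; for Exp(1) arrivals the renewal function is $a(t)=t$, so the exact solution is $g\equiv 1$:

```python
import numpy as np

# Sketch: solve g(t) = h(t) + int_0^t g(t-x) dF(x) by time stepping,
# for F = Exp(1) with density f and h(t) = exp(-t).  Since a(t) = t here,
# g(t) = h(t) + int_0^t h(u) du = 1 exactly.
T, n = 5.0, 4000
dt = T / n
t = np.linspace(0.0, T, n + 1)
h, f = np.exp(-t), np.exp(-t)

g = np.empty_like(t)
g[0] = h[0]
for i in range(1, n + 1):
    conv = np.dot(g[i - 1::-1], f[:i]) * dt   # int_0^{t_i} g(t_i - x) f(x) dx
    g[i] = h[i] + conv

print(np.max(np.abs(g - 1.0)))                # -> O(dt)
```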
Since
$E_{\lambda^{\varepsilon}}S_{1}^{\varepsilon}=\int_{0}^{\infty}E_{\lambda^{\varepsilon}}\left[S_{1}^{\varepsilon}|\tau_{1}^{\varepsilon}=x\right]dF^{\varepsilon}\left(x\right)<\infty,$
we have $h^{\varepsilon}\left(t\right)\leq
E_{\lambda^{\varepsilon}}S_{1}^{\varepsilon}$ for all $t\geq 0.$ Moreover, if
we apply Hölder’s inequality first and then the conditional Jensen’s
inequality, we find that for all $t\geq 0,$
$\displaystyle h^{\varepsilon}\left(t\right)$
$\displaystyle\leq\left(\int_{t}^{\infty}\left(E_{\lambda^{\varepsilon}}\left[S_{1}^{\varepsilon}|\tau_{1}^{\varepsilon}=x\right]\right)^{2}dF^{\varepsilon}\left(x\right)\right)^{\frac{1}{2}}\left(\int_{t}^{\infty}1^{2}dF^{\varepsilon}\left(x\right)\right)^{\frac{1}{2}}$
$\displaystyle\leq\left(1-F^{\varepsilon}\left(t\right)\right)^{\frac{1}{2}}\left(\int_{t}^{\infty}E_{\lambda^{\varepsilon}}[\left(S_{1}^{\varepsilon}\right)^{2}|\tau_{1}^{\varepsilon}=x]dF^{\varepsilon}\left(x\right)\right)^{\frac{1}{2}}\leq\left(1-F^{\varepsilon}\left(t\right)\right)^{\frac{1}{2}}(E_{\lambda^{\varepsilon}}\left(S_{1}^{\varepsilon}\right)^{2})^{\frac{1}{2}}.$
Given $\ell\in(0,c-h_{1})$, let $U^{\varepsilon}\doteq
e^{{\ell}/{\varepsilon}}E_{\lambda^{\varepsilon}}\tau_{1}^{\varepsilon}$.
According to Theorem 8.5, there exists $\varepsilon_{0}\in(0,1)$ and a
constant $\tilde{c}>0$ such that
$1-F^{\varepsilon}\left(U^{\varepsilon}\right)=P_{\lambda^{\varepsilon}}(\tau_{1}^{\varepsilon}/E_{\lambda^{\varepsilon}}\tau_{1}^{\varepsilon}>e^{{\ell}/{\varepsilon}})\leq
e^{-\tilde{c}e^{{\ell}/{\varepsilon}}}$
for any $\varepsilon\in(0,\varepsilon_{0}).$ Also by Theorem 8.5,
$U^{\varepsilon}<T^{\varepsilon}$ for all $\varepsilon$ small enough. Hence
for any $t\geq U^{\varepsilon}$,
$1-F^{\varepsilon}\left(t\right)\leq
1-F^{\varepsilon}\left(U^{\varepsilon}\right)\leq
e^{-\tilde{c}e^{{\ell}/{\varepsilon}}}\text{ and
}h^{\varepsilon}\left(t\right)\leq
e^{-\tilde{c}e^{{\ell}/{\varepsilon}}/2}(E_{\lambda^{\varepsilon}}\left(S_{1}^{\varepsilon}\right)^{2})^{\frac{1}{2}}.$
By Proposition 3.4 in [18], we know that for any $\varepsilon>0$, for
$t\in[0,\infty)$
$g^{\varepsilon}\left(t\right)=h^{\varepsilon}\left(t\right)+\int_{0}^{t}h^{\varepsilon}\left(t-x\right)da^{\varepsilon}\left(x\right),$
where
$a^{\varepsilon}\left(t\right)\doteq\int_{0}^{\infty}E_{\lambda^{\varepsilon}}\left[N^{\varepsilon}\left(t\right)|\tau_{1}^{\varepsilon}=x\right]dF^{\varepsilon}\left(x\right)=E_{\lambda^{\varepsilon}}\left(N^{\varepsilon}\left(t\right)\right).$
This implies
$\displaystyle\frac{E_{\lambda^{\varepsilon}}S_{N^{\varepsilon}\left(T^{\varepsilon}\right)}^{\varepsilon}}{T^{\varepsilon}}$
$\displaystyle=\frac{h^{\varepsilon}\left(T^{\varepsilon}\right)}{T^{\varepsilon}}+\frac{1}{T^{\varepsilon}}\int_{0}^{T^{\varepsilon}-U^{\varepsilon}}h^{\varepsilon}\left(T^{\varepsilon}-x\right)da^{\varepsilon}\left(x\right)+\frac{1}{T^{\varepsilon}}\int_{T^{\varepsilon}-U^{\varepsilon}}^{T^{\varepsilon}}h^{\varepsilon}\left(T^{\varepsilon}-x\right)da^{\varepsilon}\left(x\right),$
$\displaystyle\leq\frac{E_{\lambda^{\varepsilon}}S_{1}^{\varepsilon}}{T^{\varepsilon}}+e^{-\tilde{c}e^{{\ell}/{\varepsilon}}/2}(E_{\lambda^{\varepsilon}}\left(S_{1}^{\varepsilon}\right)^{2})^{\frac{1}{2}}\frac{a^{\varepsilon}\left(T^{\varepsilon}-U^{\varepsilon}\right)}{T^{\varepsilon}}+E_{\lambda^{\varepsilon}}S_{1}^{\varepsilon}\frac{a^{\varepsilon}\left(T^{\varepsilon}\right)-a^{\varepsilon}\left(T^{\varepsilon}-U^{\varepsilon}\right)}{T^{\varepsilon}},$
where we use $h^{\varepsilon}\left(t\right)\leq
E_{\lambda^{\varepsilon}}S_{1}^{\varepsilon}$ to bound the first and third
terms, and $h^{\varepsilon}\left(t\right)\leq
e^{-\tilde{c}e^{{\ell}/{\varepsilon}}/2}(E_{\lambda^{\varepsilon}}\left(S_{1}^{\varepsilon}\right)^{2})^{1/2}$,
valid for $t\geq U^{\varepsilon}$, for the second term.
To calculate the decay rate of the first term, we apply Lemma 7.21 to find
that for any $\eta>0$, there exists $\delta_{0}\in(0,1)$ such that for any
$\delta\in(0,\delta_{0})$
$\displaystyle\liminf_{\varepsilon\rightarrow
0}-\varepsilon\log\frac{E_{\lambda^{\varepsilon}}S_{1}^{\varepsilon}}{T^{\varepsilon}}\geq\inf_{x\in
A}\left[f\left(x\right)+W\left(x\right)\right]-W\left(O_{1}\right)+c-h_{1}-\eta.$
(12.1)
For the decay rate of the second term, note that since $e^{-\tilde{c}e^{{\ell}/{\varepsilon}}/2}\leq e^{-\tilde{c}e^{{\ell}/{\varepsilon}}/4}$, it suffices to bound the decay rate of the following larger quantity: given any $\delta>0$
$\displaystyle\liminf_{\varepsilon\rightarrow
0}-\varepsilon\log\left(e^{-\tilde{c}e^{{\ell}/{\varepsilon}}/4}(E_{\lambda^{\varepsilon}}\left(S_{1}^{\varepsilon}\right)^{2})^{\frac{1}{2}}\frac{a^{\varepsilon}\left(T^{\varepsilon}-U^{\varepsilon}\right)}{T^{\varepsilon}}\right)$
(12.2) $\displaystyle\quad=\frac{\tilde{c}}{4}\liminf_{\varepsilon\rightarrow
0}\varepsilon e^{{\ell}/{\varepsilon}}+\liminf_{\varepsilon\rightarrow
0}-\varepsilon\log\left((E_{\lambda^{\varepsilon}}\left(S_{1}^{\varepsilon}\right)^{2})^{\frac{1}{2}}\frac{a^{\varepsilon}\left(T^{\varepsilon}-U^{\varepsilon}\right)}{T^{\varepsilon}}\right)=\infty,$
where the last equality holds since $\ell>0$ implies
$\liminf_{\varepsilon\rightarrow 0}\varepsilon
e^{{\ell}/{\varepsilon}}=\infty$ and also because Lemma 7.23 and Corollary 8.3
ensure that
$\liminf_{\varepsilon\rightarrow
0}-\varepsilon\log((E_{\lambda^{\varepsilon}}\left(S_{1}^{\varepsilon}\right)^{2})^{1/2}a^{\varepsilon}(T^{\varepsilon}-U^{\varepsilon})/T^{\varepsilon})$
is bounded below by a constant.
For the last term, note that for any $\varepsilon$ fixed, the renewal function
$a^{\varepsilon}\left(t\right)$ is subadditive in $t$ (see for example Lemma
1.2 in [17]), so we have
$a^{\varepsilon}\left(T^{\varepsilon}\right)-a^{\varepsilon}\left(T^{\varepsilon}-U^{\varepsilon}\right)\leq
a^{\varepsilon}\left(U^{\varepsilon}\right).$ Thus we apply Lemma 7.21,
Corollary 8.3 and Theorem 8.5 to find that for any $\eta>0$, there exists
$\delta_{0}\in(0,1)$ such that for any $\delta\in(0,\delta_{0})$,
$\displaystyle\liminf_{\varepsilon\rightarrow
0}-\varepsilon\log\left(E_{\lambda^{\varepsilon}}S_{1}^{\varepsilon}\frac{a^{\varepsilon}\left(T^{\varepsilon}\right)-a^{\varepsilon}\left(T^{\varepsilon}-U^{\varepsilon}\right)}{T^{\varepsilon}}\right)$
$\displaystyle\quad\geq\liminf_{\varepsilon\rightarrow 0}-\varepsilon\log
E_{\lambda^{\varepsilon}}S_{1}^{\varepsilon}+\liminf_{\varepsilon\rightarrow
0}-\varepsilon\log\frac{a^{\varepsilon}\left(U^{\varepsilon}\right)}{U^{\varepsilon}}+\liminf_{\varepsilon\rightarrow
0}-\varepsilon\log\frac{U^{\varepsilon}}{T^{\varepsilon}}$
$\displaystyle\quad\geq\inf_{x\in
A}\left[f\left(x\right)+W\left(x\right)\right]-W\left(O_{1}\right)+\left(c-h_{1}-\ell\right)-\eta.$
(12.3)
Since (12.3) holds for all $\ell>0$, by sending $\ell$ to 0, we know that
(12.3) holds with $\ell=0$.
Putting the bounds (12.1), (12.2) and (12.3) with $\ell=0$ together gives that
for any $\eta>0$, there exists $\delta_{0}\in(0,1)$ such that for any
$\delta\in(0,\delta_{0})$,
$\displaystyle\liminf_{\varepsilon\rightarrow
0}-\varepsilon\log\frac{E_{\lambda^{\varepsilon}}S_{N^{\varepsilon}\left(T^{\varepsilon}\right)}^{\varepsilon}}{T^{\varepsilon}}\geq\inf_{x\in
A}\left[f\left(x\right)+W\left(x\right)\right]-W\left(O_{1}\right)+c-h_{1}-\eta.$
∎
###### Proof of Lemma 9.5.
By the definition of $W(x),$
$\displaystyle 2\inf_{x\in
A}[f\left(x\right)+W\left(x\right)]-2W\left(O_{1}\right)-h_{1}$
$\displaystyle=2\inf_{x\in A}[f\left(x\right)+\min_{j\in
L}\left(V(O_{j},x)+W\left(O_{j}\right)\right)]-2W\left(O_{1}\right)-h_{1}$
$\displaystyle=\min_{j\in L}\\{2\inf_{x\in
A}\left[f\left(x\right)+V\left(O_{j},x\right)\right]+2W\left(O_{j}\right)-2W\left(O_{1}\right)-h_{1}\\}.$
Define $Q_{j}\doteq 2\inf_{x\in
A}\left[f\left(x\right)+V\left(O_{j},x\right)\right]+2W\left(O_{j}\right)-2W\left(O_{1}\right)-h_{1}$.
Then it suffices to show that $Q_{j}\geq R_{j}^{(2)}$ for all $j\in L.$
For $j=1,$ $Q_{1}=2\inf_{x\in
A}\left[f\left(x\right)+V\left(O_{1},x\right)\right]-h_{1}=R_{1}^{(2)}.$ For
$j\in L\setminus\\{1\\},$ $Q_{j}\geq R_{j}^{(2)}$ if and only if
$W\left(O_{j}\right)-h_{1}\geq W\left(O_{1}\cup O_{j}\right).$ Recall that
$W\left(O_{j}\right)=\min_{g\in
G\left(j\right)}\left[{\textstyle\sum_{\left(m\rightarrow n\right)\in
g}}V\left(O_{m},O_{n}\right)\right]\text{ and }W\left(O_{1}\cup
O_{j}\right)=\min_{g\in
G\left(1,j\right)}\left[{\textstyle\sum_{\left(m\rightarrow n\right)\in
g}}V\left(O_{m},O_{n}\right)\right].$
Therefore, for any $\tilde{g}\in G\left(j\right)$ such that
$W\left(O_{j}\right)=\sum\nolimits_{\left(m\rightarrow
n\right)\in\tilde{g}}V\left(O_{m},O_{n}\right),$ if we remove the arrow
starting from $1$, and assume that it goes to $i,$ then it is easy to see that
$\hat{g}\doteq\tilde{g}\setminus\\{(1\rightarrow i)\\}\in G(1,j)$. Since
$V(O_{1},O_{i})\geq h_{1},$ we find that
$\displaystyle W\left(O_{j}\right)-h_{1}$
$\displaystyle=\sum\nolimits_{\left(m\rightarrow
n\right)\in\tilde{g}}V\left(O_{m},O_{n}\right)-h_{1}=\sum\nolimits_{\left(m\rightarrow
n\right)\in\hat{g}}V\left(O_{m},O_{n}\right)+V(O_{1},O_{i})-h_{1}$
$\displaystyle\geq\min_{g\in
G\left(1,j\right)}\left[{\textstyle\sum_{\left(m\rightarrow n\right)\in
g}}V\left(O_{m},O_{n}\right)\right]=W\left(O_{1}\cup O_{j}\right).$
∎
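As a concrete illustration of the tree-surgery step in the preceding proof, the following brute-force sketch (the matrix of increments $V$ is made up for illustration; the graph families $G(j)$ and $G(1,j)$ are realized by enumerating all arrow assignments and discarding those containing cycles) evaluates $W(O_{j})$ and $W(O_{1}\cup O_{j})$ for $l=3$ and confirms $W(O_{j})-h_{1}\geq W(O_{1}\cup O_{j})$:

```python
from itertools import product

# Toy quasipotential increments V(m, n) between l = 3 wells (made-up values).
V = {(1, 2): 1.0, (1, 3): 2.0, (2, 1): 0.5,
     (2, 3): 1.5, (3, 1): 0.7, (3, 2): 0.9}
L = [1, 2, 3]

def reaches(m, g, roots):
    seen = set()
    while m not in roots:          # follow arrows until a root (or a cycle)
        if m in seen:
            return False
        seen.add(m)
        m = g[m]
    return True

def W(roots):
    """min over graphs whose non-root nodes have one outgoing arrow each
    and whose every path leads into the root set."""
    sources = [m for m in L if m not in roots]
    best = float('inf')
    for targets in product(L, repeat=len(sources)):
        g = dict(zip(sources, targets))
        if any(m == g[m] for m in sources):
            continue
        if all(reaches(m, g, roots) for m in sources):
            best = min(best, sum(V[(m, g[m])] for m in sources))
    return best

h1 = min(V[(1, k)] for k in L if k != 1)   # depth of well 1
for j in [2, 3]:
    print(j, W({j}) - h1, W({1, j}))       # first value >= second value
```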
A. D. Kercher, A. Corrigan, and D. A. Kessler (2020), The Moving Discontinuous
Galerkin Finite Element Method with Interface Condition Enforcement for
Compressible Viscous Flows, IJNMF, 2020;00:1–6.
# The Moving Discontinuous Galerkin Finite Element Method with Interface
Condition Enforcement for Compressible Viscous Flows
Andrew D. Kercher, Andrew Corrigan, and David A. Kessler
Laboratories for Computational Physics and Fluid Dynamics, U.S. Naval Research
Laboratory, 4555 Overlook Ave SW, Washington, DC 20375
<EMAIL_ADDRESS>
(February 2020)
###### Abstract
The moving discontinuous Galerkin finite element method with interface
condition enforcement (MDG-ICE) is applied to the case of viscous flows. This
method uses a weak formulation that separately enforces the conservation law,
constitutive law, and the corresponding interface conditions in order to
provide the means to detect interfaces or under-resolved flow features. To
satisfy the resulting overdetermined weak formulation, the discrete domain
geometry is introduced as a variable, so that the method implicitly fits a
priori unknown interfaces and moves the grid to resolve sharp, but smooth,
gradients, achieving a form of anisotropic curvilinear $r$-adaptivity. This
approach avoids introducing low-order errors that arise using shock capturing,
artificial dissipation, or limiting. The utility of this approach is
demonstrated with its application to a series of test problems culminating
with the compressible Navier-Stokes solution to a Mach 5 viscous bow shock for
a Reynolds number of $10^{5}$ in two-dimensional space. Time accurate
solutions of unsteady problems are obtained via a space-time formulation, in
which the unsteady problem is formulated as a higher dimensional steady space-
time problem. The method is shown to accurately resolve and transport viscous
structures without relying on numerical dissipation for stabilization.
###### keywords:
High-order finite elements; Discontinuous Galerkin method; Interface condition
enforcement; MDG-ICE; Implicit shock fitting; Anisotropic curvilinear
$r$-adaptivity; Space-time
Article type: Research Article
Abbreviations: MDG-ICE, Moving Discontinuous Galerkin Finite Element Method
with Interface Condition Enforcement
Distribution A. Approved for public release: distribution unlimited.
## 1 Introduction
The discontinuous Galerkin (DG) method 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 has
become a popular method for simulating flow fields corresponding to a wide
range of physical phenomena, from low speed incompressible flows 11, 12, 13 to
chemically reacting compressible Navier-Stokes flows 14, 15, 16, due to its
ability to achieve high-order accuracy on unstructured grids and its natural
support for local polynomial, $p$, adaptivity. However, the solutions are
known to contain oscillations in under-resolved regions of the flow, e.g.,
shocks, material interfaces, and boundary layers, where stabilization, usually
in the form of shock capturing, or limiting, is required. These ad hoc methods
often lead to inconsistent discretizations that are no longer capable of
achieving high-order accuracy 17. This lack of robustness and accuracy when
treating discontinuities and regions with sharp, but smooth, gradients is one
of the main obstacles preventing widespread adoption of high-order methods for
the simulation of complex high-speed turbulent flow fields.
The Moving Discontinuous Galerkin Method with Interface Condition Enforcement
(MDG-ICE) was introduced by the present authors 18, 19 as a high-order method
for computing solutions to inviscid flow problems, even in the presence of
discontinuous interfaces. The method accurately and stably computes flows with
a priori unknown interfaces without relying on shock capturing. In order to
detect a priori unknown interfaces, MDG-ICE uses a weak formulation that
enforces the conservation law and its interface condition separately, while
treating the discrete domain geometry as a variable. Thus, in contrast to a
standard DG method, MDG-ICE has both the means to detect via interface
condition enforcement and satisfy via grid movement the conservation law and
its associated interface condition. Using this approach, not only does the
grid move to fit interfaces, but also to better resolve smooth regions of the
flow.
In this work, we apply MDG-ICE to the case of viscous flows, and therefore
extend the capability of MDG-ICE to move the grid to resolve otherwise under-
resolved flow features such as boundary layers and viscous shocks. In addition
to enforcing the conservation law and interface (Rankine-Hugoniot) condition,
for the case of viscous flows, we separately enforce a constitutive law and
the corresponding interface condition, which constrains the continuity of the
state variable across the interface. Thus, in contrast to a standard DG
method, MDG-ICE implicitly achieves a form of anisotropic curvilinear
$r$-adaptivity via satisfaction of the weak formulation.
We study the utility of this approach by solving problems involving linear
advection-diffusion, unsteady Burgers flow, and steady compressible Navier-
Stokes flow. The ability of the method to move the grid in order to resolve
boundary layer profiles is studied in the context of one-dimensional linear
advection-diffusion, where convergence under polynomial refinement is also
considered. The problem of space-time viscous shock formation for Burgers flow
is considered in order to assess the ability of the method to accurately
resolve and transport viscous shocks without relying on shock capturing or
limiting. Lastly, a Mach 5 compressible Navier-Stokes flow over a cylindrical
blunt body in two dimensions is studied for a series of increasing Reynolds
numbers to assess the ability of MDG-ICE to simultaneously resolve multiple
viscous structures, i.e., a viscous shock and boundary layer, via anisotropic
curvilinear $r$-adaptivity.
### 1.1 Background
In prior work, MDG-ICE was shown to be a consistent discretization for
discontinuous flows and is therefore capable of using high-order
approximations to achieve extremely accurate solutions for problems containing
discontinuous interfaces on relatively coarse grids 19. The previously
presented test cases demonstrated that MDG-ICE can be used to compute both
steady and unsteady flows with a priori unknown interface topology and point
singularities using higher-order elements in arbitrary-dimensional spaces. For
example, MDG-ICE was applied to fit intersecting oblique planar shocks in
three dimensions. The ability to fit steady shocks extends to unsteady flows
using a space-time formulation, cf. Lowrie et al. 20, 21, that was applied to
compute the solution to a space-time inviscid Burgers shock formation problem,
where a continuous temporal initial condition steepens to form a shock, while
later work presented proof-of-concept results for unsteady flow in three- and
four-dimensional space-time 22. More recently, MDG-ICE was applied to shocked
compressible flow problems of increased complexity, including transonic flow
over a smooth bump over which an attached curved shock forms for which
optimal-order convergence was verified 23, 24.
Earlier attempts at aligning the grid with discontinuous interfaces present in
the flow field have resulted in mixed success, cf. Moretti 25 and Salas 26,
27. These earlier _explicit_ shock fitting, or tracking, approaches were
capable of attaining high-order accuracy in the presence of shocks, but the
general applicability of such methods is limited. A specialized discretization
and implementation strategy is required at discontinuous interfaces making it
difficult to handle discontinuities whose topologies are unknown a priori or
whose topologies evolve in time. In contrast, MDG-ICE is an $implicit$ shock
fitting method, automatically fitting a priori unknown interfaces, their
interactions, and their evolving trajectory in space-time.
Another promising form of implicit shock tracking, or fitting, is the
optimization-based, $r$-adaptive, approach proposed independently by Zahr and
Persson 28, 29, 30, which has been used to compute very accurate solutions to
discontinuous inviscid flows on coarse grids without the use of artificial
stabilization. This approach retains a standard discontinuous Galerkin method
as the state equation, while employing an objective function to detect and fit
interfaces present in the flow. Recently, Zahr et al. 31, 32 have extended
their implicit shock tracking framework with the addition of a new objective
function based on an enriched test space and an improved SQP solver.
Furthermore, the regularization introduced by the current authors 18, 19, 23,
24, 22 has been modified to include a factor proportional to the inverse
volume of an element that accounts for variations in the element sizes of a
given grid. This may be beneficial for obtaining solutions on highly
nonuniform grids.
In the case of viscous flows, regions with sharp, but smooth, gradients
present a unique set of challenges. The resolution required to achieve high-
order convergence, or at a minimum, achieve stability, is such that
computations on uniformly refined grids are prohibitively expensive.
Therefore, the local resolution must be selectively increased in certain
regions of the flow. Identifying these regions is not always obvious and
striking a balance between computational feasibility and accuracy is an
equally challenging task. Traditionally, overcoming these challenges was
viewed as an a priori grid design problem with solutions including anisotropic
grid generation 33, 34, 35, 36, 37, 38, 39, 40, 41, 42 and boundary layer grid
generation 43, 44, 45, 46, 47, 48, 49, 50, 51.
A complementary approach to problem-specific grid generation is a posteriori
grid adaptation 52, 53, 10, 54. This is an iterative process in which regions
of interest are locally refined with the goal of reducing the discretization
error. Anisotropic grid adaptation, which combines a posteriori grid
adaptation with anisotropic grid generation, has been shown to successfully
enhance accuracy for a range of aerodynamic applications as reviewed by
Alauzet and Loseille 55. MDG-ICE seeks to achieve similar anisotropic grid
adaptation as an intrinsic part of the solver, such that the region of
anisotropic refinement evolves with the flow field solution, thereby avoiding
grid coarsening as the viscous layer is more accurately resolved.
In the case of least-squares (LS) methods, the residual is a natural indicator
of the discretization error 56, 57. In particular, for the discontinuous
Petrov-Galerkin (DPG) method introduced by Demkowicz and Gopalakrishnan 58,
59, 60, 61, 62, 63, 64, in which the ultra-weak formulation corresponds to the
best approximation in the polynomial space, a posteriori grid adaptation, for
both the grid resolution $h$ and the polynomial degree $p$, is driven by the
built-in error representation function in the form of the Riesz representation
of the residual 65. In addition to such a posteriori $hp$-adaptivity
strategies, MDG-ICE achieves a form of in situ $r$-adaptivity 66, 67, 68, 69,
70, 71, 72 where the resolution of the flow is continuously improved through
repositioning of the grid points. For a review of $r$-adaptivity the reader is
referred to the work of Budd et al. 73 and the survey of Huang and Russell 74.
## 2 Moving Discontinuous Galerkin Method with Interface Condition
Enforcement for Compressible Viscous Flows
In this section we develop the formulation of MDG-ICE for compressible viscous
flows. We assume that $\Omega\subset\mathbb{R}^{d}$ is a given domain, which
may be either a spatial domain $\Omega\subset\mathbb{R}^{d=d_{x}}$ or a space-
time domain $\Omega\subset\mathbb{R}^{d=d_{x}+1}$. In many cases, the space-
time domain is defined in terms of a fixed spatial domain
$\Omega_{x}\subset\mathbb{R}^{d_{x}}$ and time interval
$T\subset\left\\{t\in\mathbb{R}:t>0\right\\}$ by $\Omega=\Omega_{x}\times T$.
In the remainder of this work, we assume that $\Omega$ is partitioned by
$\mathcal{T}$, consisting of disjoint sub-domains or cells $\kappa$, so that
$\overline{\Omega}=\cup_{\kappa\in\mathcal{T}}\overline{\kappa}$, with
interfaces $\epsilon$, composing a set $\mathcal{E}$ so that
$\cup_{\epsilon\in\mathcal{E}}\epsilon=\cup_{\kappa\in\mathcal{T}}\partial\kappa$.
Furthermore, we assume that each interface $\epsilon$ is oriented so that a
unit normal $n:\epsilon\rightarrow\mathbb{R}^{d}$ is defined. In order to
account for space-time problems, we also consider the spatial normal
$n_{x}:\epsilon\rightarrow\mathbb{R}^{d_{x}}$, which is defined such that
$\left(n_{x,1},\ldots,n_{x,d_{x}}\right)=\left(n_{1},\ldots,n_{d_{x}}\right)$.
### 2.1 Governing equations
Consider a nonlinear conservation law governing the behavior of smooth,
$\mathbb{R}^{m}$-valued, functions $y$,
$\displaystyle\nabla\cdot\mathcal{F}\left(y,\nabla_{x}y\right)=0$
$\displaystyle\textup{ in }\Omega,$ (2.1)
in terms of a given flux function,
$\mathcal{F}:\mathbb{R}^{m}\times\mathbb{R}^{m\times
d_{x}}\rightarrow\mathbb{R}^{m\times d}$ that depends on the flow state
variable $y$ and its $d_{x}$-dimensional spatial gradient,
$\nabla_{x}y=\left(\frac{\partial y}{\partial x_{1}},\ldots,\frac{\partial
y}{\partial x_{d_{x}}}\right).$ (2.2)
The flux function is assumed to be defined in terms of a spatial flux function
$\mathcal{F}^{x}:\mathbb{R}^{m}\times\mathbb{R}^{m\times
d_{x}}\rightarrow\mathbb{R}^{m\times d_{x}}$ that itself is defined in terms
of a convective flux, depending on the state variable only, and the viscous,
or diffusive flux, which also depends on the spatial gradient of the state
variable,
$\mathcal{F}^{x}\left(y,\nabla_{x}y\right)=\mathcal{F}^{c}\left(y\right)-\mathcal{F}^{v}\left(y,\nabla_{x}y\right).$
(2.3)
In the case of a spatial domain, $d=d_{x}$, the flux function $\mathcal{F}$
coincides with the spatial flux,
$\mathcal{F}\left(y,\nabla_{x}y\right)=\mathcal{F}^{x}\left(y,\nabla_{x}y\right),$
(2.4)
so that the divergence operator in Equation (2.1) is defined as the spatial
divergence operator
$\nabla\cdot\mathcal{F}\left(y,\nabla_{x}y\right)=\nabla_{x}\cdot\mathcal{F}^{x}\left(y,\nabla_{x}y\right)=\frac{\partial}{\partial
x_{1}}\mathcal{F}_{1}^{x}\left(y,\nabla_{x}y\right)+\ldots+\frac{\partial}{\partial
x_{d_{x}}}\mathcal{F}_{d_{x}}^{x}\left(y,\nabla_{x}y\right).$ (2.5)
Otherwise, in the case of a space-time domain, $d=d_{x}+1$, the space-time
flux incorporates the state variable as the temporal flux component,
$\mathcal{F}\left(y,\nabla_{x}y\right)=\left(\mathcal{F}_{1}^{x}\left(y,\nabla_{x}y\right),\ldots,\mathcal{F}_{d_{x}}^{x}\left(y,\nabla_{x}y\right),y\right),$
(2.6)
so that the divergence operator in (2.1) is defined as the space-time
divergence operator
$\nabla\cdot\mathcal{F}\left(y,\nabla_{x}y\right)=\nabla_{x}\cdot\mathcal{F}^{x}\left(y,\nabla_{x}y\right)+\frac{\partial}{\partial
t}y.$ (2.7)
In this work, we consider conservation laws corresponding to linear advection-
diffusion, space-time viscous Burgers, and compressible Navier-Stokes flow as
detailed in the following sections.
#### 2.1.1 Linear advection-diffusion
Linear advection-diffusion involves a single-component flow state variable
$y:\Omega\rightarrow\mathbb{R}^{1}$ with a linear diffusive flux,
$\mathcal{F}^{v}\left(y,\nabla_{x}y\right)=\epsilon\nabla_{x}y,$ (2.8)
that is independent of the state $y$, where the coefficient $\epsilon$
corresponds to mass diffusivity. The convective flux is given as
$\mathcal{F}^{c}\left(y\right)=\left(v_{1}y,\ldots,v_{d_{x}}y\right),$ (2.9)
where $\left(v_{1},\ldots,v_{d_{x}}\right)\in\mathbb{R}^{d_{x}}$ is a
prescribed spatial velocity that in the present setting is assumed to be
spatially uniform. The corresponding spatial flux is given by
$\mathcal{F}^{x}\left(y,\nabla_{x}y\right)=\left(\left(v_{1}y,\ldots,v_{d_{x}}y\right)-\epsilon\nabla_{x}y\right).$
(2.10)
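As a one-dimensional sanity check of (2.10) (a sketch assuming the classical boundary-layer model problem with Dirichlet data $y(0)=0$ and $y(1)=1$; this configuration is illustrative and not necessarily the one used in the numerical experiments below), the divergence of the spatial flux vanishes identically for the standard exact solution, whose layer of width $O(\epsilon/v)$ near $x=1$ is exactly the kind of feature the grid must resolve:

```python
import sympy as sp

x, v, eps = sp.symbols('x v epsilon', positive=True)
# Classical boundary-layer solution of v*y' - eps*y'' = 0 on (0, 1)
# with y(0) = 0 and y(1) = 1 (an assumed model configuration).
y = (sp.exp(v * x / eps) - 1) / (sp.exp(v / eps) - 1)
# Divergence of the spatial flux (2.10), F^x = v*y - eps*y':
residual = sp.diff(v * y - eps * sp.diff(y, x), x)
print(sp.simplify(residual))   # -> 0
```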
#### 2.1.2 One-dimensional Burgers flow
As in the case of linear advection-diffusion, one-dimensional Burgers flow
involves a single-component flow state variable
$y:\Omega\rightarrow\mathbb{R}^{1}$ with a linear viscous flux,
$\mathcal{F}^{v}\left(y,\nabla_{x}y\right)=\epsilon\nabla_{x}y,$ (2.11)
which is independent of the state $y$, where the coefficient, $\epsilon$,
corresponds to viscosity. The convective flux is given as
$\mathcal{F}^{c}\left(y\right)=\left(\frac{1}{2}y^{2}\right),$ (2.12)
so that the one-dimensional spatial flux is given by
$\mathcal{F}^{x}\left(y,\nabla_{x}y\right)=\left(\frac{1}{2}y^{2}-\epsilon\nabla_{x}y\right).$
(2.13)
#### 2.1.3 Compressible Navier-Stokes flow
For compressible Navier-Stokes flow, the state variable
$y:\Omega\rightarrow\mathbb{R}^{m}$, where $m=d_{x}+2$, is given by
$y=\left(\rho,\rho v_{1},\ldots,\rho v_{d_{x}},\rho E\right).$ (2.14)
The $i$-th spatial component of the convective flux,
$\mathcal{F}^{c}:\mathbb{R}^{m}\rightarrow\mathbb{R}^{m\times d_{x}}$, is
$\mathcal{F}_{i}^{c}\left(y\right)=\left(\rho v_{i},\rho
v_{i}v_{1}+p\delta_{i1},\ldots,\rho v_{i}v_{d_{x}}+p\delta_{id_{x}},\rho
Hv_{i}\right),$ (2.15)
where $\delta_{ij}$ is the Kronecker delta,
$\rho:\Omega\rightarrow\mathbb{R}_{+}$ is density,
$\left(v_{1},\ldots,v_{d_{x}}\right):\mathbb{R}^{m}\rightarrow\mathbb{R}^{d_{x}}$
is velocity, $\rho E:\Omega\rightarrow\mathbb{R}_{+}$ is stagnation energy per
unit volume, and
$H=\left(\rho E+p\right)/\rho$ (2.16)
is stagnation enthalpy, where $H:\mathbb{R}^{m}\rightarrow\mathbb{R}_{+}$.
Assuming the fluid is a perfect gas, the pressure
$p:\mathbb{R}^{m}\rightarrow\mathbb{R}_{+}$ is defined as
$p=\left(\gamma-1\right)\left(\rho E-\frac{1}{2}\sum_{i=1}^{d_{x}}\rho
v_{i}v_{i}\right),$ (2.17)
where the ratio of specific heats for air is given as $\gamma=1.4$. The $i$-th
spatial component of the viscous flux is given by
$\mathcal{F}_{i}^{\nu}\left(y,\nabla_{x}y\right)=\left(0,\tau_{1i},\ldots,\tau_{d_{x}i},\sum_{j=1}^{d_{x}}\tau_{ij}v_{j}-q_{i}\right),$
(2.18)
where $q:\mathbb{R}^{m}\times\mathbb{R}^{m\times
d_{x}}\rightarrow\mathbb{R}^{d_{x}}$ is the thermal heat flux,
$\tau:\mathbb{R}^{m}\times\mathbb{R}^{m\times
d_{x}}\rightarrow\mathbb{R}^{d_{x}\times d_{x}}$ is the viscous stress tensor.
The $i$-th spatial component of the thermal heat flux is given by
$q_{i}=-k\frac{\partial T}{\partial x_{i}},$ (2.19)
where $T:\mathbb{R}^{m}\rightarrow\mathbb{R}_{+}$ is the temperature and $k$
is thermal conductivity. The temperature $T$ is defined as
$T=\frac{p}{R\rho},$ (2.20)
where $R=287$ is the mixture-specific gas constant for air. The $i$-th spatial
component of the viscous stress tensor is given by
$\tau_{i}=\mu\left(\frac{\partial v_{1}}{\partial x_{i}}+\frac{\partial
v_{i}}{\partial x_{1}}-\delta_{i1}\frac{2}{3}\sum_{j=1}^{d_{x}}\frac{\partial
v_{j}}{\partial x_{j}},\ldots,\frac{\partial v_{d_{x}}}{\partial
x_{i}}+\frac{\partial v_{i}}{\partial
x_{d_{x}}}-\delta_{id_{x}}\frac{2}{3}\sum_{j=1}^{d_{x}}\frac{\partial
v_{j}}{\partial x_{j}}\right),$ (2.21)
where $\mu$ is the dynamic viscosity coefficient.
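For reference, here is a minimal sketch of the convective flux (2.15) together with the perfect-gas closure (2.16)–(2.17) (illustrative only, with $\gamma=1.4$; this is not the implementation used to produce the results in this paper):

```python
import numpy as np

def convective_flux(y, gamma=1.4):
    """Convective flux F^c(y) of (2.15) for y = (rho, rho v_1,...,rho v_dx, rho E).

    Returns an (m, d_x) array whose i-th column is F_i^c(y); a sketch,
    not the paper's implementation.
    """
    rho, rho_v, rho_E = y[0], y[1:-1], y[-1]
    v = rho_v / rho
    p = (gamma - 1.0) * (rho_E - 0.5 * np.dot(rho_v, v))   # pressure, (2.17)
    d = v.size
    F = np.empty((d + 2, d))
    F[0, :] = rho * v                                      # mass
    F[1:-1, :] = np.outer(rho_v, v) + p * np.eye(d)        # momentum
    F[-1, :] = (rho_E + p) * v                             # energy, rho H v_i
    return F

# Example: d_x = 2 state with unit density and unit x-velocity.
print(convective_flux(np.array([1.0, 1.0, 0.0, 2.5])))
```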
### 2.2 Interface conditions for viscous flow
The viscous conservation laws described in the previous sections require a
constraint on the continuity of the state variable, $y$, across an interface,
in addition to the interface condition considered in our previous work 19,
which enforced the continuity of the normal flux across an interface. In order
to deduce the interface conditions governing viscous flow, we revisit the
derivation of the DG formulation for viscous flow, cf. Arnold et al. 5. This
discussion follows Section 6.3 of Hartmann and Leicht 10 and restricts the
presentation to a viscous flux. Upon deducing the governing interface
conditions, we will reintroduce the convective flux in Section 2.3.1. Here, we
consider a spatial conservation law,
$\displaystyle-\nabla_{x}\cdot\left(\mathcal{F}^{v}\left(y,\nabla_{x}y\right)\right)=0$
$\displaystyle\textup{ in }\kappa\qquad\forall\kappa\in\mathcal{T},$ (2.22)
defined in terms of a given viscous flux function
$\mathcal{F}^{v}:\mathbb{R}^{m}\times\mathbb{R}^{m\times
d_{x}}\rightarrow\mathbb{R}^{m\times d_{x}}$, for piecewise smooth functions
$y$ and their spatial gradients $\nabla_{x}y$. We introduce an
$\mathbb{R}^{m\times d_{x}}$-valued auxiliary variable $\sigma$ and rewrite
(2.22) as a first-order system of equations
$\displaystyle-\nabla_{x}\cdot\sigma=0$ $\displaystyle\textup{ in
}\kappa\qquad\forall\kappa\in\mathcal{T},$ (2.23)
$\displaystyle\sigma-G\left(y\right)\nabla_{x}y=0$ $\displaystyle\textup{ in
}\kappa\qquad\forall\kappa\in\mathcal{T}.$ (2.24)
We assume here that $\mathcal{F}^{v}$ is linear with respect to its gradient
argument so that
$G\left(y\right)\nabla_{x}y=\mathcal{F}^{v}\left(y,\nabla_{x}y\right)=\mathcal{F}_{\nabla_{x}y}^{v}\left(y,\nabla_{x}y\right)\nabla_{x}y$
(2.25)
where $G\left(y\right)\in\mathbb{R}^{m\times d_{x}\times m\times d_{x}}$ is a
tensor of rank 4 that is referred to as the _homogeneity tensor_ 10.
We integrate (2.23) and (2.24) against separate test functions and, upon an
application of integration by parts, arrive at the following weak formulation:
find $\left(y,\sigma\right)\in Y\times\Sigma$ such that
$\displaystyle 0=$
$\displaystyle+\sum_{\kappa\in\mathcal{T}}\left(\sigma,\nabla_{x}v\right)_{\kappa}$
$\displaystyle-\sum_{\kappa\in\mathcal{T}}\left(\sigma\cdot
n_{x},v\right)_{\partial\kappa}$
$\displaystyle+\sum_{\kappa\in\mathcal{T}}\left(\sigma,\tau\right)_{\kappa}$
$\displaystyle+\sum_{\kappa\in\mathcal{T}}\left(y,\nabla_{x}\cdot\left(G(y)^{\top}\tau\right)\right)_{\kappa}$
$\displaystyle-\sum_{\kappa\in\mathcal{T}}\left(y\otimes
n_{x},G(y)^{\top}\tau\right)_{\partial\kappa}\qquad\forall\left(v,\tau\right)\in
V_{y}\times V_{\sigma},$ (2.26)
where the solution spaces $Y\times\Sigma$ and test spaces $V_{y}\times
V_{\sigma}$ are broken Sobolev spaces. Since $y$ and $\sigma$ are multi-valued
across element interfaces, in a DG formulation, they are substituted with
single-valued functions of their traces,
$\displaystyle\hat{\sigma}=$
$\displaystyle\hat{\sigma}\left(\sigma^{+},\sigma^{-}\right),$ (2.27)
$\displaystyle\hat{y}=$ $\displaystyle\hat{y}\left(y^{+},y^{-}\right),$ (2.28)
cf. Table 3.1 of Arnold et al. 5 for various definitions of both
$\hat{\sigma}$ and $\hat{y}$. After another application of integration by
parts and transposition of the homogeneity tensor, we obtain: find
$\left(y,\sigma\right)\in Y\times\Sigma$ such that
$\displaystyle 0=$
$\displaystyle-\sum_{\kappa\in\mathcal{T}}\left(\nabla_{x}\cdot\sigma,v\right)_{\kappa}$
$\displaystyle+\sum_{\kappa\in\mathcal{T}}\left(\left(\sigma-\hat{\sigma}\right)\cdot
n_{x},v\right)_{\partial\kappa}$
$\displaystyle+\sum_{\kappa\in\mathcal{T}}\left(\sigma-G(y)\nabla_{x}y,\tau\right)_{\kappa}$
$\displaystyle-\sum_{\kappa\in\mathcal{T}}\left(G(y)\left(\left(\hat{y}-y\right)\otimes
n_{x}\right),\tau\right)_{\partial\kappa}\qquad\forall\left(v,\tau\right)\in
V_{y}\times V_{\sigma}.$ (2.29)
Finally, the auxiliary variable $\sigma$ is substituted with
$G\left(y\right)\nabla_{x}y$, the tensor-valued test function $\tau$ is
substituted with $\nabla_{x}v$, so that upon a final application of
integration by parts we obtain a DG primal formulation: find $y\in Y$ such
that
$\displaystyle 0=$
$\displaystyle\sum_{\kappa\in\mathcal{T}}\left(G(y)\nabla_{x}y,\nabla_{x}v\right)_{\kappa}$
$\displaystyle-\sum_{\kappa\in\mathcal{T}}\left(\hat{\sigma}\cdot
n_{x},v\right)_{\partial\kappa}$
$\displaystyle+\sum_{\kappa\in\mathcal{T}}\left(G(y)\left(\left(\hat{y}-y\right)\otimes
n_{x}\right),\nabla_{x}v\right)_{\partial\kappa}\qquad\forall v\in V_{y},$
(2.30)
cf. Equation (254) and Section 6.6 in the work of Hartmann and Leicht 10.
In contrast, we propose an MDG-ICE formulation that retains the auxiliary
variable and instead makes a different substitution: the test functions $v$
and $\tau$ that appear in the surface integrals of (2.29) are substituted with
separate test functions $w_{y}\in W_{y}$ and $w_{\sigma}\in W_{\sigma}$ from
the single-valued trace spaces of $V_{y}$ and $V_{\sigma}$. Upon accumulating
contributions from adjacent elements in (2.29) to each interface, we obtain:
find $\left(y,\sigma\right)\in Y\times\Sigma$ such that
$\displaystyle 0=$
$\displaystyle-\sum_{\kappa\in\mathcal{T}}\left(\nabla_{x}\cdot\sigma,v\right)_{\kappa}$
$\displaystyle+\sum_{\epsilon\in\mathcal{E}}\left(\left\llbracket
n_{x}\cdot\sigma\right\rrbracket,w_{y}\right)_{\epsilon}$
$\displaystyle+\sum_{\kappa\in\mathcal{T}}\left(\sigma-G(y)\nabla_{x}y,\tau\right)_{\kappa}$
$\displaystyle-\sum_{\epsilon\in\mathcal{E}}\left(\left\\{\\!\\!\left\\{G\left(y\right)\right\\}\\!\\!\right\\}\left\llbracket
y\otimes
n_{x}\right\rrbracket,w_{\sigma}\right)_{\epsilon}\qquad\forall\left(v,\tau,w_{y},w_{\sigma}\right)\in
V_{y}\times V_{\sigma}\times W_{y}\times W_{\sigma}.$ (2.31)
We make use of the relationship
$\displaystyle\left(\sigma^{+}-\hat{\sigma}\right)\cdot
n_{x}^{+}+\left(\sigma^{-}-\hat{\sigma}\right)\cdot n_{x}^{-}$
$\displaystyle=$ $\displaystyle\left(\sigma^{+}-\hat{\sigma}\right)\cdot
n_{x}^{+}-\left(\sigma^{-}-\hat{\sigma}\right)\cdot n_{x}^{+}$
$\displaystyle=$ $\displaystyle\left(\sigma^{+}-\sigma^{-}\right)\cdot
n_{x}^{+}$ $\displaystyle=$ $\displaystyle\left\llbracket
n_{x}\cdot\sigma\right\rrbracket,$ (2.32)
so that contributions from $\hat{\sigma}$ vanish, and
$\displaystyle G(y^{+})\left(\left(\hat{y}-y^{+}\right)\otimes
n_{x}^{+}\right)+G(y^{-})\left(\left(\hat{y}-y^{-}\right)\otimes
n_{x}^{-}\right)$ $\displaystyle=$ $\displaystyle
G(y^{+})\left(\frac{1}{2}\left(y^{-}-y^{+}\right)\otimes
n_{x}^{+}\right)+G(y^{-})\left(\frac{1}{2}\left(y^{+}-y^{-}\right)\otimes
n_{x}^{-}\right)$ $\displaystyle=$
$\displaystyle\frac{1}{2}\left(G(y^{+})\left(\left(y^{-}-y^{+}\right)\otimes
n_{x}^{+}\right)+G(y^{-})\left(\left(y^{+}-y^{-}\right)\otimes
n_{x}^{-}\right)\right)$ $\displaystyle=$
$\displaystyle\frac{1}{2}\left(G(y^{+})+G(y^{-})\right)\left(y^{-}-y^{+}\right)\otimes
n_{x}^{+}$ $\displaystyle=$
$\displaystyle-\left\\{\\!\\!\left\\{G\left(y\right)\right\\}\\!\\!\right\\}\left\llbracket
y\otimes n_{x}\right\rrbracket,$ (2.33)
on interior interfaces (2.49), where we define
$\hat{y}=\left\\{\\!\\!\left\\{y\right\\}\\!\\!\right\\}$, a common choice
among the various DG discretizations 5, 10.
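Identity (2.33) is purely algebraic and holds for any tensor $G$ acting linearly on dyads; the short sketch below (with a made-up constitutive tensor and random traces; none of these values come from a flow solution) verifies it numerically:

```python
import numpy as np

rng = np.random.default_rng(0)
m, dx = 4, 2
yp, ym = rng.normal(size=m), rng.normal(size=m)        # traces y^+, y^-
nxp = rng.normal(size=dx)
nxp /= np.linalg.norm(nxp)                             # n_x^+ = -n_x^-

def G(y):                                              # toy constitutive tensor
    return (1.0 + y[0] ** 2) * np.eye(m * dx).reshape(m, dx, m, dx)

def apply_G(Gt, M):                                    # contract rank-4 G with an m-by-dx dyad
    return np.einsum('ijkl,kl->ij', Gt, M)

yhat = 0.5 * (yp + ym)                                 # hat(y) = {{y}}
lhs = (apply_G(G(yp), np.outer(yhat - yp, nxp))
       + apply_G(G(ym), np.outer(yhat - ym, -nxp)))
jump = np.outer(yp, nxp) + np.outer(ym, -nxp)          # [[y (x) n_x]]
avgG = 0.5 * (G(yp) + G(ym))                           # {{G(y)}}
print(np.max(np.abs(lhs + apply_G(avgG, jump))))       # -> 0 up to round-off
```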
From (2.31), we deduce the strong form of the viscous interface conditions to
be
$\displaystyle\left\llbracket n_{x}\cdot\sigma\right\rrbracket=0$
$\displaystyle\textup{ on }\epsilon\qquad\forall\epsilon\in\mathcal{E},$
(2.34)
$\displaystyle\left\\{\\!\\!\left\\{G\left(y\right)\right\\}\\!\\!\right\\}\left\llbracket
y\otimes n_{x}\right\rrbracket=0$ $\displaystyle\textup{ on
}\epsilon\qquad\forall\epsilon\in\mathcal{E}.$ (2.35)
The first interface condition, Equation (2.34) is the jump or Rankine-Hugoniot
condition 75 that ensures continuity of the normal flux at the interface and
will balance with the jump in the normal convective flux in Equation (2.38).
The second interface condition (2.35) corresponds to the constitutive law
(2.37) and enforces a constraint on the continuity of the state variable at
the interface.
### 2.3 Formulation in physical space with fixed geometry
Having deduced the interface conditions that arise in the case of viscous
flow, we reintroduce the convective flux and write the second order system
(2.1) as a system of first-order equations, incorporating the additional
interface conditions (2.34) and (2.35).
#### 2.3.1 Strong formulation
Consider a nonlinear conservation law, generalized constitutive law, and their
corresponding interface conditions,
$\displaystyle\nabla\cdot\mathcal{F}\left(y,\sigma\right)=0$
$\displaystyle\textup{ in }\kappa\qquad\forall\kappa\in\mathcal{T},$ (2.36)
$\displaystyle\sigma-G(y)\nabla_{x}y=0$ $\displaystyle\textup{ in
}\kappa\qquad\forall\kappa\in\mathcal{T},$ (2.37)
$\displaystyle\left\llbracket
n\cdot\mathcal{F}\left(y,\sigma\right)\right\rrbracket=0$
$\displaystyle\textup{ on }\epsilon\qquad\forall\epsilon\in\mathcal{E},$
(2.38)
$\displaystyle\left\\{\\!\\!\left\\{G\left(y\right)\right\\}\\!\\!\right\\}\left\llbracket
y\otimes n_{x}\right\rrbracket=0$ $\displaystyle\textup{ on
}\epsilon\qquad\forall\epsilon\in\mathcal{E},$ (2.39)
governing the flow state variable $y$ and auxiliary variable $\sigma$. The
interface condition (2.38) corresponding to the conservation law (2.36) is the
jump or Rankine-Hugoniot condition 75, which now accounts for both the
convective and viscous flux, ensuring continuity of the normal flux at the
interface. The interface condition (2.39) corresponding to the constitutive
law (2.37) is unmodified from (2.35) by the inclusion of the convective flux.
The flux $\mathcal{F}\left(y,\sigma\right)$ is defined in terms of the spatial
flux $\mathcal{F}^{x}\left(y,\sigma\right)$ analogously to (2.4) or (2.6). The
spatial flux $\mathcal{F}^{x}\left(y,\sigma\right)$ is defined as
$\mathcal{F}^{x}\left(y,\sigma\right)=\mathcal{F}^{c}\left(y\right)-\mathcal{\tilde{F}}^{v}\left(y,\sigma\right),$
(2.40)
where $\tilde{\mathcal{F}}^{v}:\mathbb{R}^{m}\times\mathbb{R}^{m\times
d_{x}}\rightarrow\mathbb{R}^{m\times d_{x}}$ is the modified viscous flux
defined consistently with the primal formulation of Section 2.1,
$\mathcal{\tilde{F}}^{v}\left(y,G\left(y\right)\nabla_{x}y\right)=\mathcal{F}^{v}\left(y,\nabla_{x}y\right),$
(2.41)
and $G\left(y\right)\in\mathbb{R}^{m\times d_{x}\times m\times d_{x}}$ is now
a generalized _constitutive tensor_ that depends on the specific choice of
constitutive law.
One approach to defining the constitutive law is to define a _gradient
formulation_ , where the constitutive tensor $G\left(y\right)$ is taken as the
identity,
$G\left(y\right)\nabla_{x}y=\nabla_{x}y,$ (2.42)
while the viscous flux remains unmodified,
$\mathcal{\tilde{F}}^{v}\left(y,\sigma\right)=\mathcal{F}^{v}\left(y,\sigma\right).$
(2.43)
The gradient formulation has been used in the context of local discontinuous
Galerkin 76 and hybridized discontinuous Galerkin methods 77. This formulation
results in a constitutive law (2.37) and corresponding interface condition
(2.39) that are linear with respect to the state variable and do not introduce
a coupling between flow variable components 76. In this case, the interface
condition (2.39) reduces to
$\left\llbracket y\otimes n_{x}\right\rrbracket=0,$ (2.44)
which implies $\left\llbracket y\right\rrbracket=0$ at spatial interfaces, an
interface condition arising in the context of elliptic interface problems 78,
79 that directly enforces the continuity of the state variable. While this
choice is reasonable if the solution is smooth, this approach would not be
appropriate for flows that contain discontinuities in the state variable, such
as problems with inviscid sub-systems, cf. Mott et al. 80.
An alternative approach is to define a _flux formulation_ , as in Section
(2.2), where the constitutive tensor $G\left(y\right)$ is defined to be the
homogeneity tensor (2.25) so that
$G\left(y\right)\nabla_{x}y=\mathcal{F}^{v}\left(y,\nabla_{x}y\right)=\mathcal{F}_{\nabla_{x}y}^{v}\left(y,\nabla_{x}y\right)\nabla_{x}y,$
(2.45)
while the modified viscous flux is defined to be the auxiliary variable,
$\mathcal{\tilde{F}}^{v}\left(y,\sigma\right)=\sigma,$ (2.46)
recovering a standard mixed method 81.
A slight modification of the flux formulation for the case of linear
advection-diffusion or viscous Burgers, where
$\mathcal{F}^{v}\left(y,\nabla_{x}y\right)=\epsilon\nabla_{x}y$, is obtained
by setting $G\left(y\right)=\sqrt{\epsilon}$, which recovers the formulation
advocated by Broersen and Stevenson 82, 83 and later Demkowicz and
Gopalakrishnan 65 in the context of Discontinuous Petrov-Galerkin (DPG)
methods for singularly perturbed problems 84. A similar approach was used in
the original description of the LDG method where nonlinear diffusion
coefficients were considered by Cockburn and Shu 81.
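For illustration, the three constitutive-law choices can be compared side by side for scalar advection-diffusion, where the viscous flux is $\epsilon\nabla_{x}y$: each choice of $G$ stores a differently scaled auxiliary variable while recovering the same viscous flux. The following minimal sketch is our own illustration, not the actual implementation.

```python
import numpy as np

# Sketch (ours) of the three constitutive-tensor choices for scalar
# advection-diffusion, where the viscous flux is F^v = eps * dy/dx.
def auxiliary_and_viscous_flux(formulation, y_x, eps):
    """Return (sigma, F_v); F_v equals eps * y_x for every choice of G."""
    if formulation == "gradient":    # G = identity, Eqs. (2.42)-(2.43)
        sigma = y_x
        F_v = eps * sigma
    elif formulation == "flux":      # G = homogeneity tensor, Eqs. (2.45)-(2.46)
        sigma = eps * y_x
        F_v = sigma
    else:                            # G = sqrt(eps), Broersen-Stevenson scaling
        sigma = np.sqrt(eps) * y_x
        F_v = np.sqrt(eps) * sigma
    return sigma, F_v

# All three formulations recover the same viscous flux:
for f in ("gradient", "flux", "sqrt"):
    _, F_v = auxiliary_and_viscous_flux(f, y_x=2.0, eps=1e-3)
    assert np.isclose(F_v, 1e-3 * 2.0)
```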
In the case of compressible Navier-Stokes flow, we take an approach similar to
that of Chan et al. 85 with the scaling advocated by Broersen and Stevenson
also incorporated. The constitutive tensor $G\left(y\right)$ is defined such
that
$\left(G\left(y\right)\nabla_{x}y\right)_{i}=\mu_{\infty}^{-\nicefrac{{1}}{{2}}}\left(0,\tau_{1i},\ldots,\tau_{d_{x}i},-q_{i}\right),$
(2.47)
where $\mu_{\infty}$ is the freestream dynamic viscosity. The viscous flux is
defined in terms of the auxiliary variable as
$\mathcal{F}_{i}^{v}\left(y,\sigma\right)=\mu_{\infty}^{\nicefrac{{1}}{{2}}}\left(\sigma_{1i},\sigma_{2i},\ldots,\sigma_{d_{x}+1,i},\sigma_{i+1,j}v_{j}+\sigma_{mi}\right).$
(2.48)
In this way, the auxiliary variable is defined, up to a factor
$\mu_{\infty}^{-\nicefrac{{1}}{{2}}}$, as the viscous stress tensor, $\tau$,
given by (2.21) and thermal heat flux, $q$, given by (2.19). In contrast to
Chan et al. 85, we do not strongly enforce symmetry of the viscous stress
tensor, $\tau$. However, we may explore this approach in future work as it
could lead to a more computationally efficient and physically accurate
formulation.
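For illustration, the scaling in (2.47) and (2.48) can be exercised directly; the following minimal sketch is our own, and the array layout and helper names are assumptions rather than the actual implementation.

```python
import numpy as np

# Sketch (ours) of Eqs. (2.47)-(2.48): the auxiliary variable stores the
# viscous stress and heat flux scaled by mu_inf^(-1/2); the viscous flux
# applies the inverse scaling.
def auxiliary_from_stress(tau, q, mu_inf):
    """tau: (dx, dx) viscous stress, q: (dx,) heat flux -> sigma: (m, dx)."""
    dx = q.shape[0]
    m = dx + 2                      # mass, dx momentum components, energy
    sigma = np.zeros((m, dx))
    sigma[1:dx + 1, :] = tau        # momentum rows: stress components
    sigma[m - 1, :] = -q            # energy row: minus heat flux
    return sigma / np.sqrt(mu_inf)

def viscous_flux_from_auxiliary(sigma, v, mu_inf):
    """Recover F^v (m, dx) as in Eq. (2.48); v: (dx,) velocity."""
    m, dx = sigma.shape
    F = np.zeros((m, dx))
    F[:m - 1, :] = sigma[:m - 1, :]               # mass and momentum rows
    for i in range(dx):                           # energy row: tau_ij v_j - q_i
        F[m - 1, i] = sigma[i + 1, :] @ v + sigma[m - 1, i]
    return np.sqrt(mu_inf) * F

# Round trip for mu_inf = 1: the energy row recovers tau @ v - q.
tau = np.array([[2.0, 0.5], [0.5, 1.0]]); q = np.array([0.1, -0.2])
v = np.array([1.0, 2.0])
F = viscous_flux_from_auxiliary(auxiliary_from_stress(tau, q, 1.0), v, 1.0)
assert np.allclose(F[3, :], tau @ v - q)
```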
#### 2.3.2 Interior and boundary interfaces
We assume that $\mathcal{E}$ consists of two disjoint subsets: the interior
interfaces
$\left\\{\epsilon_{0}\in\mathcal{E}\,\middle|\,\epsilon_{0}\cap\partial\Omega=\emptyset\right\\}$
(2.49)
and exterior interfaces
$\left\\{\epsilon_{\partial}\in\mathcal{E}\,\middle|\,\epsilon_{\partial}\subset\partial\Omega\right\\},$
(2.50)
so that $\mathcal{E}=\mathcal{E}_{0}\cup\mathcal{E}_{\partial}$. For interior
interfaces $\epsilon_{0}\in\mathcal{E}_{0}$ there exists
$\kappa^{+},\kappa^{-}\in\mathcal{T}$ such that
$\epsilon_{0}=\partial\kappa^{+}\cap\partial\kappa^{-}$. On interior
interfaces Equations (2.38) and (2.39) are defined as
$\displaystyle\left\llbracket
n\cdot\mathcal{F}\left(y,\sigma\right)\right\rrbracket=n^{+}\cdot\mathcal{F}\left(y^{+},\sigma^{+}\right)+n^{-}\cdot\mathcal{F}\left(y^{-},\sigma^{-}\right)=0,$
$\displaystyle\textup{ on }\epsilon\qquad\forall\epsilon\in\mathcal{E}_{0},$
(2.51)
$\displaystyle\left\\{\\!\\!\left\\{G\left(y\right)\right\\}\\!\\!\right\\}\left\llbracket
y\otimes
n_{x}\right\rrbracket=\frac{1}{2}\left(G\left(y^{+}\right)+G\left(y^{-}\right)\right)\left(y^{+}\otimes
n_{x}^{+}+y^{-}\otimes n_{x}^{-}\right)=0,$ $\displaystyle\textup{ on
}\epsilon\qquad\forall\epsilon\in\mathcal{E}_{0},$ (2.52)
where $n^{+},n^{-}$ denote the outward-facing normals of
$\kappa^{+},\kappa^{-}$, respectively, so that $n^{+}=-n^{-}$. For exterior
interfaces,
$\displaystyle\left\llbracket
n\cdot\mathcal{F}\left(y,\sigma\right)\right\rrbracket=n^{+}\cdot\mathcal{F}\left(y^{+},\sigma^{+}\right)-n^{+}\cdot\mathcal{\mathcal{F}}_{\partial}\left(y^{+},\sigma^{+}\right)=0,$
$\displaystyle\textup{ on
}\epsilon\qquad\forall\epsilon\in\mathcal{E}_{\partial},$ (2.53)
$\displaystyle\left\\{\\!\\!\left\\{G\left(y\right)\right\\}\\!\\!\right\\}\left\llbracket
y\otimes
n_{x}\right\rrbracket=G_{\partial}\left(y^{+}\right)\left(y^{+}\otimes
n_{x}^{+}-y_{\partial}\left(y^{+}\right)\otimes n_{x}^{+}\right)=0,$
$\displaystyle\textup{ on
}\epsilon\qquad\forall\epsilon\in\mathcal{E}_{\partial}.$ (2.54)
Here $n^{+}\cdot\mathcal{F}_{\partial}\left(y^{+},\sigma^{+}\right)$ is the
imposed normal boundary flux, $G_{\partial}\left(y^{+}\right)$ is the boundary
modified constitutive tensor, and $y_{\partial}\left(y^{+}\right)$ is the
boundary state, which are functions chosen depending on the type of boundary
condition. Therefore, we further decompose $\mathcal{E}_{\partial}$ into
disjoint subsets of inflow and outflow interfaces
$\mathcal{E}_{\partial}=\mathcal{E}_{\text{in}}\cup\mathcal{E}_{\text{out}},$
so that at an outflow interface $\epsilon_{\textup{out}}$ the boundary flux is
defined as the interior convective flux, and the boundary state is defined as
the interior state,
$\displaystyle
n^{+}\cdot\mathcal{F}_{\partial}\left(y^{+},\sigma^{+}\right)=n^{+}\cdot\mathcal{F}\left(y^{+},\sigma_{\text{out}}=0\right),$
$\displaystyle\textup{ on
}\epsilon\qquad\forall\epsilon\in\mathcal{E}_{\text{out}},$ (2.55)
$\displaystyle G_{\partial}\left(y^{+}\right)=G\left(y^{+}\right),$
$\displaystyle\textup{ on
}\epsilon\qquad\forall\epsilon\in\mathcal{E}_{\text{out}},$ (2.56)
$\displaystyle y_{\partial}\left(y^{+}\right)=y^{+},$ $\displaystyle\textup{
on }\epsilon\qquad\forall\epsilon\in\mathcal{E}_{\text{out}},$ (2.57)
and therefore Equations (2.53) and (2.54) are satisfied trivially. At an
inflow boundary $\epsilon_{\textup{in}}\in\mathcal{E}_{\text{in}}$, the normal
convective boundary flux and boundary state are prescribed values independent
of the interior state $y^{+}$, while the normal viscous boundary flux is
defined as the interior normal viscous flux,
$\displaystyle
n^{+}\cdot\mathcal{\mathcal{F}}_{\partial}\left(y^{+},\sigma^{+}\right)=n^{+}\cdot\mathcal{\mathcal{F}}_{\text{in}}^{c}-n^{+}\cdot\mathcal{\tilde{F}}^{v}\left(y^{+},\sigma^{+}\right),$
$\displaystyle\textup{ on
}\epsilon\qquad\forall\epsilon\in\mathcal{E}_{\text{in}},$ (2.58)
$\displaystyle
G_{\partial}\left(y^{+}\right)=G\left(y_{\partial}\left(y^{+}\right)\right),$
$\displaystyle\textup{ on
}\epsilon\qquad\forall\epsilon\in\mathcal{E}_{\text{in}},$ (2.59)
$\displaystyle y_{\partial}\left(y^{+}\right)=y_{\text{in}},$
$\displaystyle\textup{ on
}\epsilon\qquad\forall\epsilon\in\mathcal{E}_{\text{in}}.$ (2.60)
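For illustration, the boundary data of (2.55)-(2.60) amount to a dispatch on the interface type. The following schematic sketch is ours; the callables standing in for the flux, modified viscous flux, and constitutive tensor are illustrative assumptions.

```python
# Schematic sketch (ours) of the boundary data in Eqs. (2.55)-(2.60).
# F(y, sigma) -> (m, d) total flux, F_v_tilde -> (m, d) modified viscous
# flux, G -> constitutive tensor; F_in_c and y_in are the prescribed
# inflow convective flux and state.
def boundary_data(kind, y_plus, sigma_plus, n_plus, F, F_v_tilde, G,
                  F_in_c=None, y_in=None):
    """Return the imposed normal flux, boundary tensor G_b, and state y_b."""
    if kind == "outflow":                             # Eqs. (2.55)-(2.57)
        flux = F(y_plus, 0.0 * sigma_plus) @ n_plus   # sigma_out = 0
        return flux, G(y_plus), y_plus                # interior state reused
    if kind == "inflow":                              # Eqs. (2.58)-(2.60)
        flux = (F_in_c - F_v_tilde(y_plus, sigma_plus)) @ n_plus
        return flux, G(y_in), y_in                    # prescribed state
    raise ValueError(kind)
```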
#### 2.3.3 Weak formulation
A weak formulation in physical space is obtained by integrating the
conservation law (2.36), the constitutive law (2.37), and the corresponding
interface conditions (2.38), (2.39) for each element and interface against
separate test functions: find $\left(y,\sigma\right)\in Y\times\Sigma$ such
that
$\displaystyle 0=$
$\displaystyle\;\;\;\;\sum_{\kappa\in\mathcal{T}}\left(\nabla\cdot\mathcal{F}\left(y,\sigma\right)-f,v\right)_{\kappa}$
$\displaystyle+\sum_{\kappa\in\mathcal{T}}\left(\sigma-G(y)\nabla_{x}y,\tau\right)_{\kappa}$
$\displaystyle-\sum_{\epsilon\in\mathcal{E}}\left(\left\llbracket
n\cdot\mathcal{F}\left(y,\sigma\right)\right\rrbracket,w_{y}\right)_{\epsilon}$
$\displaystyle-\sum_{\epsilon\in\mathcal{E}}\left(\left\\{\\!\\!\left\\{G\left(y\right)\right\\}\\!\\!\right\\}\left\llbracket
y\otimes
n_{x}\right\rrbracket,w_{\sigma}\right)_{\epsilon}\qquad\forall\left(v,\tau,w_{y},w_{\sigma}\right)\in
V_{y}\times V_{\sigma}\times W_{y}\times W_{\sigma}.$ (2.61)
The solution spaces $Y$ and $\Sigma$ are the broken Sobolev spaces,
$\displaystyle Y$ $\displaystyle=$
$\displaystyle\left\\{y\in\left[L^{2}\left(\Omega\right)\right]^{m\hphantom{\times
d_{x}}}\bigl{|}\forall\kappa\in\mathcal{T},\>\>\hphantom{\nabla_{x}\cdot}\left.y\right|_{\kappa}\in\left[H^{1}\left(\kappa\right)\right]^{m}\right\\},$
(2.62) $\displaystyle\Sigma$ $\displaystyle=$
$\displaystyle\left\\{\sigma\in\left[L^{2}\left(\Omega\right)\right]^{m\times
d_{x}}\bigl{|}\forall\kappa\in\mathcal{T},\left.\nabla_{x}\cdot\sigma\right|_{\kappa}\in\left[L^{2}\left(\Omega\right)\right]^{m}\right\\},$
(2.63)
while the test spaces are defined as
$V_{y}=\left[L^{2}\left(\Omega\right)\right]^{m}$ and
$V_{\sigma}=\left[L^{2}\left(\Omega\right)\right]^{m\times d_{x}}$, with $W_{y}$
and $W_{\sigma}$ defined to be the corresponding single-valued trace spaces,
cf. Carstensen et al. 63.
### 2.4 Formulation in reference space with variable geometry
Analogous to our previous work 19, the grid must be treated as a variable in
order to align discrete grid interfaces with flow interfaces or more generally
to move the grid to resolve under-resolved flow features. Therefore, we
transform the strong formulation (2.36), (2.37), (2.38), (2.39) and weak
formulation (2.61) of the flow equations from physical to reference
coordinates in order to facilitate differentiation with respect to geometry.
#### 2.4.1 Mapping from reference space
We assume that there is a continuous, invertible mapping
$u:\hat{\Omega}\rightarrow\Omega,$ (2.64)
from a reference domain $\hat{\Omega}\subset\mathbb{R}^{d}$ to the physical
domain $\Omega\subset\mathbb{R}^{d}$. We assume that $\hat{\Omega}$ is
partitioned by $\hat{\mathcal{T}}$, so that
$\overline{\hat{\Omega}}=\cup_{\hat{\kappa}\in\hat{\mathcal{T}}}\overline{\hat{\kappa}}$.
Also, we consider the set of interfaces $\hat{\mathcal{E}}$ consisting of
disjoint interfaces $\hat{\epsilon}$, such that
$\cup_{\hat{\epsilon}\in\hat{\mathcal{E}}}\hat{\epsilon}=\cup_{\hat{\kappa}\in\hat{\mathcal{T}}}\partial\hat{\kappa}$.
The mapping $u$ is further assumed to be (piecewise) differentiable with
derivative or Jacobian matrix denoted
$\left.\nabla
u\right|_{\hat{\kappa}}:\hat{\kappa}\rightarrow\mathbb{R}^{d\times
d}\qquad\forall\hat{\kappa}\in\hat{\mathcal{T}}.$ (2.65)
The cofactor matrix $\left.\operatorname{cof}\left(\nabla
u\right)\right|_{\hat{\kappa}}:\hat{\kappa}\rightarrow\mathbb{R}^{d\times d},$
is defined for $\hat{\kappa}\in\hat{\mathcal{T}}$,
$\operatorname{cof}\left(\nabla
u\left(\hat{x}\right)\right)=\operatorname{det}\left(\nabla
u\left(\hat{x}\right)\right)\left(\nabla
u\left(\hat{x}\right)\right)^{-\top}\qquad\forall\hat{x}\in\hat{\kappa},$
(2.66)
where $\left.\operatorname{det}\left(\nabla
u\right)\right|_{\hat{\kappa}}:\hat{\kappa}\rightarrow\mathbb{R}$ is the
determinant of the Jacobian.
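For an invertible Jacobian, (2.66) may be evaluated directly; a minimal sketch (ours):

```python
import numpy as np

# Direct evaluation of Eq. (2.66) for an invertible Jacobian (a sketch;
# practical codes often assemble the cofactor entries without inversion).
def cofactor(grad_u):
    return np.linalg.det(grad_u) * np.linalg.inv(grad_u).T

# Cramer's rule identity: cof(A) @ A.T = det(A) * I.
A = np.array([[2.0, 1.0], [0.5, 3.0]])
assert np.allclose(cofactor(A) @ A.T, np.linalg.det(A) * np.eye(2))
```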
As detailed in our related work 86, assuming that $y$ and $v$ are functions
over reference space, the weak formulation of a conservation law in physical
space can be evaluated in reference space according to
$\left(\nabla\cdot\mathcal{F}\left(y\circ u^{-1}\right),v\circ
u^{-1}\right)_{\kappa}=\left(\left(\operatorname{cof}\left(\nabla
u\right)\nabla\right)\cdot\mathcal{F}\left(y\right),v\right)_{\hat{\kappa}}.$
(2.67)
Likewise, treating $\sigma$ and $\tau$ as functions over reference space, the
constitutive law can be evaluated in reference space according to
$\left(\sigma\circ u^{-1}-G(y\circ u^{-1})\nabla_{x}\left(y\circ
u^{-1}\right),\tau\circ
u^{-1}\right)_{\kappa}=\left(\operatorname{det}\left(\nabla
u\right)\sigma-G(y)\left(\operatorname{cof}\left(\nabla
u\right)\nabla\right)_{x}y,\tau\right)_{\hat{\kappa}}.$ (2.68)
In order to represent the spatial gradient in a space-time setting
($d=d_{x}+1$), we define $\left(\operatorname{cof}\left(\nabla
u\right)\nabla\right)_{x}$ to be the spatial components of
$\left(\operatorname{cof}\left(\nabla u\right)\nabla\right)$, so that if
$\left(\operatorname{cof}\left(\nabla
u\right)\nabla\right)y:\hat{\kappa}\rightarrow\mathbb{R}^{m\times d}$ then
$\left(\operatorname{cof}\left(\nabla
u\right)\nabla\right)_{x}y:\hat{\kappa}\rightarrow\mathbb{R}^{m\times d_{x}}$,
while in a spatial ($d=d_{x}$) setting $\left(\operatorname{cof}\left(\nabla
u\right)\nabla\right)=\left(\operatorname{cof}\left(\nabla
u\right)\nabla\right)_{x}$.
The weak formulation of each interface condition can similarly be evaluated in
reference space according to
$\left(\left\llbracket n\cdot\mathcal{F}\left(y\circ
u^{-1}\right)\right\rrbracket,w_{y}\circ
u^{-1}\right)_{\epsilon}=\left(\left\llbracket s\left(\nabla
u\right)\cdot\mathcal{F}\left(y\right)\right\rrbracket,w_{y}\right)_{\hat{\epsilon}}$
(2.69)
and
$\left(\left(\left\\{\\!\\!\left\\{G\left(y\circ
u^{-1}\right)\right\\}\\!\\!\right\\}\left\llbracket\left(y\circ
u^{-1}\right)\otimes n_{x}\right\rrbracket\right),w_{\sigma}\circ
u^{-1}\right)_{\epsilon}=\left(\left\\{\\!\\!\left\\{G\left(y\right)\right\\}\\!\\!\right\\}\left\llbracket
y\otimes s\left(\nabla
u\right)_{x}\right\rrbracket,w_{\sigma}\right)_{\hat{\epsilon}}.$ (2.70)
In this setting, $\hat{\epsilon}\in\hat{\mathcal{E}}$,
$\epsilon=u\left(\hat{\epsilon}\right)\in\mathcal{E}$, and
$n:\epsilon\rightarrow\mathbb{R}^{d}$ is the unit normal, which can be
evaluated in terms of $u$ according to
$n=\left(\frac{s\left(\nabla u\right)}{\left\|s\left(\nabla
u\right)\right\|}\right)\circ u^{-1},$ (2.71)
where $s\left(\nabla u\right):\hat{\epsilon}\rightarrow\mathbb{R}^{d}$ is
defined as follows. We assume that there exists a parameterization
$\theta_{\hat{\epsilon}}:\hat{D}\rightarrow\hat{\epsilon}$, mapping from
points $\left(\xi_{1},\ldots,\xi_{d-1}\right)$ in parameter space
$\hat{D}\subset\mathbb{R}^{d-1}$ to points $\hat{x}$ on the reference space
interface, such that the reference space tangent plane basis vectors
$\partial_{\xi_{1}}\theta_{\hat{\epsilon}},\ldots,\partial_{\xi_{d-1}}\theta_{\hat{\epsilon}}$
are of unit magnitude. A parameterization of
$\epsilon=u\left(\hat{\epsilon}\right)$ is then given by the composition
$\theta_{\epsilon}=u\circ\theta_{\hat{\epsilon}}:\hat{D}\rightarrow\epsilon$.
Given $\hat{\epsilon}\in\hat{\mathcal{E}}$, the scaled normal
$\left.s\left(\nabla
u\right)\right|_{\hat{\epsilon}}:\hat{\epsilon}\rightarrow\mathbb{R}^{d}$ is
defined for $\hat{x}\in\hat{\epsilon}$ as the scaled normal of the tangent
plane of $\epsilon$ corresponding to the parameter
$\theta_{\hat{\epsilon}}^{-1}\left(\hat{x}\right)$. If $d=3$ and $\xi,\eta$
denote the parametric coordinates, then
$\left.s\left(\nabla
u\right)\right|_{\hat{\epsilon}}=\left(\partial_{\xi}\theta_{\epsilon}\times\partial_{\eta}\theta_{\epsilon}\right)\circ\theta_{\hat{\epsilon}}^{-1}$
(2.72)
where $\partial_{\xi}\theta_{\epsilon}\times\partial_{\eta}\theta_{\epsilon}$
is the cross product of the tangent plane basis vectors.
A general formula for evaluating the cross product of tangent plane basis
vectors is given by the following: let
$\left(\boldsymbol{x}_{1},\ldots,\boldsymbol{x}_{d}\right)$ denote the
coordinate directions in $\mathbb{R}^{d}$, and the parameterization be given
in terms of components
$\theta_{\epsilon}=\left(\theta_{\epsilon}^{1},\ldots,\theta_{\epsilon}^{d}\right),$
then
$\partial_{\xi_{1}}\theta_{\epsilon}\times\cdots\times\partial_{\xi_{d-1}}\theta_{\epsilon}=\operatorname{det}\left(\begin{array}[]{ccc}\partial_{\xi_{1}}\theta_{\epsilon}^{1}&\cdots&\partial_{\xi_{1}}\theta_{\epsilon}^{d}\\\
\vdots&\ddots&\vdots\\\
\partial_{\xi_{d-1}}\theta_{\epsilon}^{1}&\cdots&\partial_{\xi_{d-1}}\theta_{\epsilon}^{d}\\\
\boldsymbol{x}_{1}&\ldots&\boldsymbol{x}_{d}\end{array}\right).$ (2.73)
By the chain rule we can express $\partial_{\xi_{i}}\theta_{\epsilon}$ in
terms of $\nabla u$,
$\partial_{\xi_{i}}\theta_{\epsilon}\left(\xi\right)=\nabla
u\left(\theta_{\hat{\epsilon}}\left(\xi\right)\right)\cdot\partial_{\xi_{i}}\theta_{\hat{\epsilon}}\left(\xi\right),$
(2.74)
so that in general the physical space scaled normal as a function of $\nabla
u$ is
$\left.s\left(\nabla
u\right)\right|_{\hat{\epsilon}}=\left(\partial_{\xi_{1}}\theta_{\epsilon}\times\cdots\times\partial_{\xi_{d-1}}\theta_{\epsilon}\right)\circ\theta_{\hat{\epsilon}}^{-1}.$
(2.75)
In the present work, we have adopted the more standard convention in the
definition of the generalized cross product given by Equation (2.73), which is
used to define the generalized scaled normal given by Equation (2.75). This
definition differs from Equation (3.33) of our previous work 19 by a factor of
$\left(-1\right)^{\left(d-1\right)}$ in order to ensure that (2.73) and (2.75)
are positively oriented, cf. Massey 87.
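Concretely, (2.73) may be evaluated by cofactor expansion along the symbolic last row of basis vectors: component $k$ of the generalized cross product is the signed minor obtained by deleting column $k$. A minimal sketch (ours) of this convention:

```python
import numpy as np

# Generalized cross product of Eq. (2.73): the (d-1) tangent vectors are
# the rows of T (shape (d-1, d)); component k is the signed minor from
# deleting column k (cofactor expansion along the symbolic last row).
def generalized_cross(T):
    d = T.shape[1]
    s = np.empty(d)
    for k in range(d):
        minor = np.delete(T, k, axis=1)
        s[k] = (-1.0) ** (d + 1 + k) * np.linalg.det(minor)
    return s

# For d = 3 this reduces to the ordinary cross product:
t1, t2 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
assert np.allclose(generalized_cross(np.vstack([t1, t2])), np.cross(t1, t2))
```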
#### 2.4.2 Strong and weak formulation in reference space
The strong form in reference space is
$\displaystyle\left(\operatorname{cof}\left(\nabla
u\right)\nabla\right)\cdot\mathcal{F}\left(y,\sigma\right)=0$
$\displaystyle\textup{ in
}\hat{\kappa}\qquad\forall\hat{\kappa}\in\hat{\mathcal{T}},$ (2.76)
$\displaystyle\operatorname{det}\left(\nabla
u\right)\sigma-G(y)\left(\operatorname{cof}\left(\nabla
u\right)\nabla\right)_{x}y=0$ $\displaystyle\textup{ in
}\hat{\kappa}\qquad\forall\hat{\kappa}\in\hat{\mathcal{T}},$ (2.77)
$\displaystyle\left\llbracket s\left(\nabla
u\right)\cdot\mathcal{F}\left(y,\sigma\right)\right\rrbracket=0$
$\displaystyle\textup{ on
}\hat{\epsilon}\qquad\forall\hat{\epsilon}\in\hat{\mathcal{E}},$ (2.78)
$\displaystyle\left\\{\\!\\!\left\\{G\left(y\right)\right\\}\\!\\!\right\\}\left\llbracket
y\otimes s\left(\nabla u\right)_{x}\right\rrbracket=0$ $\displaystyle\textup{
on }\hat{\epsilon}\qquad\forall\hat{\epsilon}\in\hat{\mathcal{E}},$ (2.79)
$\displaystyle b\left(u\right)-u=0$ $\displaystyle\textup{ on
}\hat{\epsilon}\qquad\forall\hat{\epsilon}\in\hat{\mathcal{E}},$ (2.80)
where $\nabla u$ is the Jacobian of the mapping from reference to physical
space, $\operatorname{det}\left(\nabla u\right)$ is its determinant,
$\operatorname{cof}\left(\nabla u\right)$ is its cofactor matrix, and
$s\left(\nabla u\right)$ is the scaled normal as defined in Section 2.4.1.
Equation (2.80) imposes geometric boundary conditions that constrain points to
the boundary of the physical domain via a projection operator $b:U\rightarrow
U$, where $U=\left[H^{1}\left(\hat{\Omega}\right)\right]^{d}$ is the
$\mathbb{R}^{d}$-valued Sobolev space over $\hat{\Omega}$. Examples of
$b\left(u\right)$ are given in earlier work 19, 24. We assume that $Y$ and
$\Sigma$, originally defined for functions over physical space, cf. (2.62) and
(2.63), now consist, respectively, of functions defined in
$\mathbb{R}^{m}$-valued and $\mathbb{R}^{m\times d_{x}}$-valued broken Sobolev
spaces over $\hat{\mathcal{T}}$. We further assume that the test spaces
$V_{y}$, $V_{\sigma},W_{y},W_{\sigma}$ now consist of functions defined over
reference space.
We define a provisional state operator $\tilde{e}:Y\times\Sigma\times
U\rightarrow\left(V_{y}\times V_{\sigma}\times W_{y}\times
W_{\sigma}\right)^{*}$ for $\left(y,\sigma,u\right)\in Y\times\Sigma\times U$,
by
$\displaystyle\tilde{e}\left(y,\sigma,u\right)=\left(v,\tau,w_{y},w_{\sigma}\right)\mapsto$
$\displaystyle\;\;\;\;\sum_{\hat{\kappa}\in\hat{\mathcal{T}}}\left(\left(\operatorname{cof}\left(\nabla
u\right)\nabla\right)\cdot\mathcal{F}\left(y,\sigma\right),v\right)_{\hat{\kappa}}$
$\displaystyle+\sum_{\hat{\kappa}\in\hat{\mathcal{T}}}\left(\operatorname{det}\left(\nabla
u\right)\sigma-G(y)\left(\operatorname{cof}\left(\nabla
u\right)\nabla\right)_{x}y,\tau\right)_{\hat{\kappa}}$
$\displaystyle-\sum_{\hat{\epsilon}\in\hat{\mathcal{E}}}\left(\left\llbracket
s\left(\nabla
u\right)\cdot\mathcal{F}\left(y,\sigma\right)\right\rrbracket,w_{y}\right)_{\hat{\epsilon}}$
$\displaystyle-\sum_{\hat{\epsilon}\in\hat{\mathcal{E}}}\left(\left\\{\\!\\!\left\\{G\left(y\right)\right\\}\\!\\!\right\\}\left\llbracket
y\otimes s\left(\nabla
u\right)_{x}\right\rrbracket,w_{\sigma}\right)_{\hat{\epsilon}}$ (2.81)
which has a Fréchet derivative defined for perturbation $\left(\delta
y,\delta\sigma,\delta u\right)\in Y\times\Sigma\times U$, and test functions
$\left(v,\tau,w_{y},w_{\sigma}\right)\in V_{y}\times V_{\sigma}\times
W_{y}\times W_{\sigma}$, by its partial derivative with respect to the state
variable $y$,
$\displaystyle\tilde{e}_{y}\left(y,\sigma,u\right)\delta
y=\left(v,\tau,w_{y},w_{\sigma}\right)\mapsto$
$\displaystyle\;\;\;\;\sum_{\hat{\kappa}\in\hat{\mathcal{T}}}\left(\left(\operatorname{cof}\left(\nabla
u\right)\nabla\right)\cdot\left(\mathcal{F}_{y}\left(y,\sigma\right)\delta
y\right),v\right)_{\hat{\kappa}}$
$\displaystyle+\sum_{\hat{\kappa}\in\hat{\mathcal{T}}}\left(-\left(\left(G^{\prime}(y)\delta
y\right)\left(\operatorname{cof}\left(\nabla u\right)\nabla\right)_{x}y+G(y)\left(\operatorname{cof}\left(\nabla
u\right)\nabla\right)_{x}\delta y\right),\tau\right)_{\hat{\kappa}}$
$\displaystyle-\sum_{\hat{\epsilon}\in\hat{\mathcal{E}}}\left(\left\llbracket
s\left(\nabla u\right)\cdot\left(\mathcal{F}_{y}\left(y,\sigma\right)\delta
y\right)\right\rrbracket,w_{y}\right)_{\hat{\epsilon}}$
$\displaystyle-\sum_{\hat{\epsilon}\in\hat{\mathcal{E}}}\left(\left\\{\\!\\!\left\\{G^{\prime}\left(y\right)\delta
y\right\\}\\!\\!\right\\}\left\llbracket y\otimes s\left(\nabla
u\right)_{x}\right\rrbracket+\left\\{\\!\\!\left\\{G\left(y\right)\right\\}\\!\\!\right\\}\left\llbracket\delta
y\otimes s\left(\nabla
u\right)_{x}\right\rrbracket,w_{\sigma}\right)_{\hat{\epsilon}},$ (2.82)
its partial derivative with respect to the auxiliary variable $\sigma$,
$\displaystyle\tilde{e}_{\sigma}\left(y,\sigma,u\right)\delta\sigma=\left(v,\tau,w_{y},w_{\sigma}\right)\mapsto$
$\displaystyle\;\;\;\;\sum_{\hat{\kappa}\in\hat{\mathcal{T}}}\left(\left(\operatorname{cof}\left(\nabla
u\right)\nabla\right)\cdot\left(\mathcal{F}_{\sigma}\left(y,\sigma\right)\delta\sigma\right),v\right)_{\hat{\kappa}}$
$\displaystyle+\sum_{\hat{\kappa}\in\hat{\mathcal{T}}}\left(\operatorname{det}\left(\nabla
u\right)\delta\sigma,\tau\right)_{\hat{\kappa}}$
$\displaystyle-\sum_{\hat{\epsilon}\in\hat{\mathcal{E}}}\left(\left\llbracket
s\left(\nabla
u\right)\cdot\left(\mathcal{F}_{\sigma}\left(y,\sigma\right)\delta\sigma\right)\right\rrbracket,w_{y}\right)_{\hat{\epsilon}},$
(2.83)
and its partial derivative with respect to the geometry variable $u$,
$\displaystyle\tilde{e}_{u}\left(y,\sigma,u\right)\delta
u=\left(v,\tau,w_{y},w_{\sigma}\right)\mapsto$
$\displaystyle\;\;\;\;\sum_{\hat{\kappa}\in\hat{\mathcal{T}}}\left(\left(\left(\operatorname{cof}^{\prime}\left(\nabla
u\right)\nabla\delta
u\right)\nabla\right)\cdot\mathcal{F}\left(y,\sigma\right),v\right)_{\hat{\kappa}}$
$\displaystyle+\sum_{\hat{\kappa}\in\hat{\mathcal{T}}}\left(\left(\operatorname{det}^{\prime}\left(\nabla
u\right)\nabla\delta
u\right)\sigma-G(y)\left(\left(\operatorname{cof}^{\prime}\left(\nabla
u\right)\nabla\delta u\right)\nabla\right)_{x}y,\tau\right)_{\hat{\kappa}}$
$\displaystyle-\sum_{\hat{\epsilon}\in\mathcal{\hat{E}}}\left(\left\llbracket\left(s^{\prime}\left(\nabla
u\right)\nabla\delta
u\right)\cdot\mathcal{F}\left(y,\sigma\right)\right\rrbracket,w_{y}\right)_{\hat{\epsilon}}$
$\displaystyle-\sum_{\hat{\epsilon}\in\hat{\mathcal{E}}}\left(\left\\{\\!\\!\left\\{G\left(y\right)\right\\}\\!\\!\right\\}\left\llbracket
y\otimes\left(s^{\prime}\left(\nabla u\right)\nabla\delta
u\right)_{x}\right\rrbracket,w_{\sigma}\right)_{\hat{\epsilon}}.$ (2.84)
The state operator $e:Y\times\Sigma\times U\rightarrow\left(V_{y}\times
V_{\sigma}\times W_{y}\times W_{\sigma}\right)^{*}$, which imposes geometric
boundary conditions (2.80) by composing the provisional state operator (2.81)
with the projection $b\left(u\right)$, is defined by
$e\left(y,\sigma,u\right)=\tilde{e}\left(y,\sigma,b\left(u\right)\right),$
(2.85)
with Fréchet derivative defined, for state $\left(y,\sigma,u\right)\in
Y\times\Sigma\times U$ and perturbation $\left(\delta y,\delta\sigma,\delta
u\right)\in Y\times\Sigma\times U$, by
$e^{\prime}\left(y,\sigma,u\right)=\left(\delta y,\delta\sigma,\delta
u\right)\mapsto\tilde{e}_{y}\left(y,\sigma,b\left(u\right)\right)\delta
y+\tilde{e}_{\sigma}\left(y,\sigma,b\left(u\right)\right)\delta\sigma+\tilde{e}_{u}\left(y,\sigma,b\left(u\right)\right)b^{\prime}\left(u\right)\delta
u.$ (2.86)
The state equation in reference coordinates is $e\left(y,\sigma,u\right)=0.$
The corresponding weak formulation in reference coordinates is: find
$\left(y,\sigma,u\right)\in Y\times\Sigma\times U$ such that
$\left\langle
e\left(y,\sigma,u\right),\left(v,\tau,w_{y},w_{\sigma}\right)\right\rangle=0\qquad\forall\left(v,\tau,w_{y},w_{\sigma}\right)\in
V_{y}\times V_{\sigma}\times W_{y}\times W_{\sigma},$ (2.87)
so that the solution satisfying (2.76)-(2.79) weakly
is therefore given as $\left(y,\sigma,b\left(u\right)\right)\in
Y\times\Sigma\times U$.
### 2.5 Discretization
We choose discrete (finite-dimensional) subspaces $Y_{h}\subset Y$,
$\Sigma_{h}\subset\Sigma$, $U_{h}\subset U$, $V_{y,h}\subset V_{y}$,
$V_{\sigma,h}\subset V_{\sigma}$, $W_{y,h}\subset W_{y}$, and
$W_{\sigma,h}\subset W_{\sigma}$ to discretize the weak formulation (2.87),
which is restricted to the discrete subspaces via the discrete state operator,
$e_{h}:Y_{h}\times\Sigma_{h}\times U_{h}\rightarrow\left(V_{y,h}\times
V_{\sigma,h}\times W_{y,h}\times W_{\sigma,h}\right)^{*}$ (2.88)
defined such that $e_{h}\left(y,\sigma,u\right)=e\left(y,\sigma,u\right)$ for
all $\left(y,\sigma,u\right)\in Y_{h}\times\Sigma_{h}\times U_{h}$ and the
$h$-subscript indicates that discrete subspaces have been selected.
We use standard piecewise polynomials, cf. 10, defined over reference
elements. Let $\mathcal{P}_{p}$ denote the space of polynomials spanned by the
monomials $\boldsymbol{x}^{\alpha}$ with multi-index
$\alpha\in\mathbb{N}_{0}^{d}$, satisfying $\sum_{i=1}^{d}\alpha_{i}\leq p$.
In the case of a simplicial grid,
$\displaystyle Y_{h}$ $\displaystyle=$ $\displaystyle\left\\{y\in
Y\,\middle|\,\forall\hat{\kappa}\in\hat{\mathcal{T}},\left.y\right|_{\hat{\kappa}}\in\left[\mathcal{P}_{p}\right]^{m\hphantom{\times
d_{x}}}\right\\},$ (2.89) $\displaystyle\Sigma_{h}$ $\displaystyle=$
$\displaystyle\left\\{\sigma\in\Sigma\,\middle|\,\forall\hat{\kappa}\in\hat{\mathcal{T}},\left.\sigma\right|_{\hat{\kappa}}\in\left[\mathcal{P}_{p}\right]^{m\times
d_{x}}\right\\}.$ (2.90)
The polynomial degrees of the state space and flux space are in general
distinct. In the present work, we choose $V_{y,h}=Y_{h}$,
$V_{\sigma,h}=\Sigma_{h}$, while $W_{y,h}$ and $W_{\sigma,h}$ are chosen to
be the corresponding single-valued polynomial trace spaces. While the present
approach is a discrete least squares method 88 with a priori chosen test
spaces, future work will investigate a least squares finite element
formulation 89, 90 with optimal test spaces automatically generated using the
discontinuous Petrov–Galerkin methodology of Demkowicz and Gopalakrishnan 58,
59, 65.
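For concreteness, the monomial basis of $\mathcal{P}_{p}$ may be enumerated directly from the multi-index constraint; a minimal sketch (ours; practical implementations typically use orthonormal modal or nodal bases):

```python
from itertools import product
from math import comb

# Enumerate the multi-indices alpha in N_0^d with sum(alpha) <= p that
# span the polynomial space P_p.
def multi_indices(d, p):
    return [a for a in product(range(p + 1), repeat=d) if sum(a) <= p]

# dim P_p = binom(p + d, d); e.g. d = 2, p = 4 gives 15 monomials.
assert len(multi_indices(2, 4)) == comb(4 + 2, 2) == 15
```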
The discrete subspace $U_{h}$ of mappings from reference space to physical
space are also discretized into $\mathbb{R}^{d}$-valued piecewise polynomials,
in the case of a simplicial grid
$U_{h}=\left\\{u\in
U\,\middle|\,\forall\hat{\kappa}\in\hat{\mathcal{T}},\left.u\right|_{\hat{\kappa}}\in\left[\mathcal{P}_{p}\right]^{d}\right\\}.$
(2.91)
The case that the chosen polynomial degree of $U_{h}$ is equal to that of
$Y_{h}$ is referred to as isoparametric. It is also possible to choose the
polynomial degree of $U_{h}$ to be less (sub-parametric) or greater (super-
parametric) than that of $Y_{h}$.
### 2.6 Solver
In general, the dimensionalities of the discrete solution space and discrete
residual space do not match. Therefore, the weak formulation is solved
iteratively using unconstrained optimization to minimize
$\frac{1}{2}\left\|e_{h}\left(y,\sigma,u\right)\right\|^{2}$, by seeking a
stationary point (the stationary point (2.92) and Newton's method (2.94) were
stated incorrectly in previous work, cf. 19, Equations (77) and (80)),
$e_{h}^{\prime}\left(y,\sigma,u\right)^{*}e_{h}\left(y,\sigma,u\right)=0.$
(2.92)
Given an initialization $\left(y,\sigma,u\right)_{0}$ the solution is
repeatedly updated
$\left(y,\sigma,u\right)_{i+1}=\left(y,\sigma,u\right)_{i}+\Delta\left(y,\sigma,u\right)_{i}\qquad
i=0,1,2,\ldots$ (2.93)
until (2.92) is satisfied to a given tolerance. One approach is to use
Newton’s method, which is a second-order method with increment given by,
$\Delta\left(y,\sigma,u\right)=-\left(\left(e_{h}^{\prime\prime}\left(y,\sigma,u\right)^{*}\cdot\right)e_{h}\left(y,\sigma,u\right)+e_{h}^{\prime}\left(y,\sigma,u\right)^{*}e_{h}^{\prime}\left(y,\sigma,u\right)\right)^{-1}\left(e_{h}^{\prime}\left(y,\sigma,u\right)^{*}e_{h}\left(y,\sigma,u\right)\right).$
(2.94)
Alternatively, the Gauss-Newton method neglects second derivatives, yet
recovers the second-order convergence rate of Newton’s method as the residual
vanishes and ensures a positive semi-definite matrix, resulting in an
increment given by
$\Delta\left(y,\sigma,u\right)=-\left(e_{h}^{\prime}\left(y,\sigma,u\right)^{*}e_{h}^{\prime}\left(y,\sigma,u\right)\right)^{-1}\left(e_{h}^{\prime}\left(y,\sigma,u\right)^{*}e_{h}\left(y,\sigma,u\right)\right).$
(2.95)
We employ a Levenberg-Marquardt method to solve (2.92), which augments the
Gauss-Newton method (2.95) with a regularization term,
$\Delta\left(y,\sigma,u\right)=-\left(e_{h}^{\prime}\left(y,\sigma,u\right)^{*}e_{h}^{\prime}\left(y,\sigma,u\right)+I_{\lambda}\left(y,\sigma,u\right)\right)^{-1}\left(e_{h}^{\prime}\left(y,\sigma,u\right)^{*}e_{h}\left(y,\sigma,u\right)\right),$
(2.96)
where the regularization operator,
$I_{\lambda}\left(y,\sigma,u\right):\left(\delta y,\delta\sigma,\delta
u\right)\mapsto\left(\lambda_{y}\delta
y,\lambda_{\sigma}\delta\sigma,\lambda_{u}\delta u\right),$ (2.97)
ensures invertibility and therefore positive definiteness of the linear system
of equations. Separate regularization coefficients
$\lambda_{y},\lambda_{\sigma},\lambda_{u}\geq 0$ are defined for each solution
variable. In practice, the state and auxiliary regularization coefficients can
be set to zero, $\lambda_{y}=\lambda_{\sigma}=0$, while the grid
regularization coefficient $\lambda_{u}>0$ must be positive in order to ensure
rank sufficiency and to limit excessive grid motion. Additional symmetric
positive definite operators can be incorporated into the regularization
operator 24. In the present work, we incorporate a linear elastic grid
regularization, which is a symmetric positive definite operator that has the
effect of distributing the grid motion to neighboring elements. The linear
elastic grid regularization is a variation of the Laplacian grid
regularization, $\delta u\mapsto-\lambda_{\Delta
u}\left(b^{\prime}\left(u\right)^{*}\Delta
b^{\prime}\left(u\right)\right)\delta u$, with $\lambda_{\Delta u}\geq 0$,
that we employed in previous work 24; the linear elastic variant offers the
added benefit of introducing a compressibility effect into the grid motion,
which we have found useful for resolving thin viscous layers. Other possible regularization
strategies include the weighted elliptic regularization proposed by Zahr et
al. 31, 32. The resulting linear system of equations is positive definite and
symmetric. In the present work, we employ a sparse direct solver provided by
Eigen 91.
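At small scale and in dense form, the regularized update (2.96) reduces to a few lines. The following sketch is our own illustration, with a dense solve standing in for the sparse direct solver and a toy scalar residual; it iterates (2.93) until the stationarity condition (2.92) is met.

```python
import numpy as np

# Dense sketch (ours) of the Levenberg-Marquardt update of Eq. (2.96):
# J is the residual Jacobian e_h' and r = e_h at the current iterate;
# lam_diag collects the per-variable regularization coefficients.
def levenberg_marquardt_step(J, r, lam_diag):
    A = J.T @ J + np.diag(lam_diag)        # SPD for lam_diag > 0
    return -np.linalg.solve(A, J.T @ r)

def solve(residual, jacobian, x0, lam_diag, tol=1e-10, max_iter=50):
    x = x0.copy()
    for _ in range(max_iter):
        r, J = residual(x), jacobian(x)
        if np.linalg.norm(J.T @ r) < tol:  # stationarity, Eq. (2.92)
            break
        x += levenberg_marquardt_step(J, r, lam_diag)  # update, Eq. (2.93)
    return x

# Toy example: the residual r(x) = x^2 - 2 converges to sqrt(2).
root = solve(lambda x: np.array([x[0]**2 - 2.0]),
             lambda x: np.array([[2.0 * x[0]]]),
             np.array([1.0]), lam_diag=np.array([1e-8]))
assert np.isclose(root[0], np.sqrt(2.0))
```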
The grid topology may need to be modified by the solver in order to fit a
priori unknown interfaces and ensure element validity while resolving sharp
gradients, for which we employ standard edge refinement and edge collapse
algorithms 92. In the present work, element quality is used as an indicator
for local refinement. Elements that become highly anisotropic as MDG-ICE moves
the grid to resolve thin viscous structures are adaptively split by refining
their longest edge. In the case of nonlinear elements, if the determinant of
the Jacobian at any degree of freedom is negative, we apply a control that
projects the elements to a linear shape representation and locally refines the
element if projecting the cell does not recover a valid grid. Often, the
introduction of the additional grid topology and resolution is sufficient for
the solver to recover a valid grid. The solver does not currently incorporate
any other grid smoothing or optimization terms based on element quality 29,
30, 31.
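For illustration, the two indicators driving these topology modifications reduce to simple checks; the following minimal sketch is ours, and the actual implementation details may differ.

```python
import numpy as np

# Sketch (ours): an element needs repair when det(grad_u) is non-positive
# at any degree of freedom; anisotropic elements are split along their
# longest edge.
def needs_repair(jacobians):
    """jacobians: (n_dof, d, d) mapping Jacobians at the element DOFs."""
    return bool(np.any(np.linalg.det(jacobians) <= 0.0))

def longest_edge(vertices):
    """vertices: (n_vert, d) element vertices -> vertex index pair to split."""
    n = len(vertices)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    return max(pairs, key=lambda e: np.linalg.norm(vertices[e[0]] - vertices[e[1]]))

# For a right triangle, the hypotenuse (vertices 1-2) is refined first.
tri = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
assert longest_edge(tri) == (1, 2)
```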
## 3 Results
We now apply MDG-ICE to compute steady and unsteady solutions for flows
containing sharp, yet smooth, gradients. Unsteady problems are solved
using a space-time formulation. Unless otherwise indicated, the grid is
assumed to consist of isoparametric elements, see Section 2.5.
### 3.1 Linear advection-diffusion
We consider steady, one-dimensional linear advection-diffusion described in
Section 2.1.1, subject to the following boundary conditions
$\displaystyle y\left(x=0\right)$ $\displaystyle=0,$ $\displaystyle
y\left(x=1\right)$ $\displaystyle=1.$ (3.1)
The exact solution is given by
$y\left(x\right)=\frac{1-\exp\left(x\cdot\mathrm{Pe}\right)}{1-\exp\left(\mathrm{Pe}\right)}$
(3.2)
where $\mathrm{Pe}=\frac{1}{\varepsilon}=\frac{v\ell}{\mu}$ is the Péclet
number, $v$ is the characteristic velocity, $\ell$ is the characteristic
length, and $\mu$ is the mass diffusivity. In this case, the solution exhibits
a boundary-layer-like profile at $x=1$ 93.
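For reference, (3.2) may be evaluated in a mathematically equivalent, overflow-safe form; a minimal sketch (ours) also illustrates how thin the layer is at $\mathrm{Pe}=100$.

```python
import numpy as np

# Exact solution (3.2) in an equivalent overflow-safe form:
# (1 - exp(x*Pe)) / (1 - exp(Pe)) = expm1(x*Pe) / expm1(Pe).
def y_exact(x, Pe=100.0):
    return np.expm1(x * Pe) / np.expm1(Pe)

assert np.isclose(y_exact(0.0), 0.0) and np.isclose(y_exact(1.0), 1.0)
# The layer is thin: y(0.9) ~ exp(-0.1 * Pe) ~ 4.5e-5 for Pe = 100.
assert y_exact(0.9) < 1e-4
```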
Figure 3.1: Steady, linear advection-diffusion, $\mathrm{Pe}=100$. The $L^{2}$
projection of the exact solution onto a uniform grid consisting of 8 linear
line cells is compared for various polynomial degrees to MDG-ICE, which
automatically moved the initially uniform grid, indicated by red tick marks,
in order to resolve the boundary layer profile, resulting in the adapted grid
indicated with black tick marks.

Figure 3.2: Steady, linear advection-diffusion, $\mathrm{Pe}=100$. The rate of
convergence with respect to polynomial degree on a log-linear plot is shown,
comparing the $L^{2}$ projection onto a uniform grid to MDG-ICE, which
automatically moved the initially uniform grid to resolve the boundary layer
profile. Reference slopes of $10.7$ and $3.5$ are shown, illustrating the
increased rate of convergence achieved using MDG-ICE.
Figure 3.1 shows the MDG-ICE solution to the linear advection-diffusion
problem with exact solution (3.2) for $\mathrm{Pe}=100$ as well as the
corresponding $L^{2}$ projection of the exact solution onto a uniform grid of
8 linear line cells. The $L^{2}$ projection minimizes the error in the $L^{2}$
norm and therefore provides an upper bound on the accuracy attainable by
methods based on a static grid, e.g., DG. By moving the grid to resolve the
boundary layer profile, MDG-ICE is able to achieve accurate, oscillation-free,
solutions for a range of polynomial degrees.
Figure 3.2 presents the corresponding convergence results with respect to
polynomial degree, i.e., $p$-refinement. The rate of convergence of MDG-ICE
with respect to polynomial degree is compared to the $L^{2}$ projection of the
exact solution onto a uniform grid. These results confirm that MDG-ICE
resolves sharp boundary layers with enhanced accuracy compared to static grid
methods. Even for a $\mathcal{P}_{1}$ approximation, MDG-ICE provides nearly
two orders of magnitude improved accuracy compared to the best approximation
available on a uniform grid, a gap that only widens at higher polynomial
degrees. The MDG-ICE error is plotted on a log-linear plot, with a reference
slope of $10.7$, indicating spectral convergence. This shows that the
$r$-adaptivity provided by MDG-ICE enhances the effectiveness of
$p$-refinement, even in the presence of initially under-resolved flow
features. These results demonstrate the enhanced accuracy of MDG-ICE for
resolving initially under-resolved flow features using high-order finite
element approximation in comparison to traditional static grid methods, such
as DG.
### 3.2 Space-time Burgers viscous shock formation
For a space-time Burgers flow, described in Section 2.1.2, a shock will form
at time $t=t_{s}=0.5$ for the following initial conditions
$y\left(x,t=0\right)=\frac{1}{2\pi t_{s}}\sin\left(2\pi x\right)+y_{\infty},$
(3.3)
where $y_{\infty}=0.2$ is the freestream velocity. The space-time solution was
initialized by extruding the temporal inflow condition, given by Equation
(3.3), throughout the space-time domain.
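The shock-formation time follows from the standard method-of-characteristics estimate $t_{s}=-1/\min_{x}\partial_{x}y\left(x,0\right)$; a quick numerical check (our own verification) confirms $t_{s}=0.5$ for the initial data (3.3).

```python
import numpy as np

# Characteristics of inviscid Burgers cross first at t = -1 / min_x y0'(x).
# For the initial data (3.3), y0'(x) = cos(2*pi*x) / t_s, so the minimum
# slope is -1/t_s and the shock forms at exactly t = t_s = 0.5.
t_s = 0.5
x = np.linspace(0.0, 1.0, 10001)
dy0 = np.cos(2.0 * np.pi * x) / t_s
assert np.isclose(-1.0 / dy0.min(), t_s)
```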
(a) Inviscid MDG-ICE$\left(\mathcal{P}_{5}/\mathcal{P}_{1}\right)$ space-time
solution computed using 200 triangle elements. (b) MDG-
ICE$\left(\mathcal{P}_{5}/\mathcal{P}_{1}\right)$ space-time solution for
$\epsilon=10^{-3}$ computed using 200 triangle elements. (c) MDG-
ICE$\left(\mathcal{P}_{5}/\mathcal{P}_{1}\right)$ space-time solution for
$\epsilon=10^{-4}$ computed using 200 triangle elements.
Figure 3.3: Space-time Burgers shock formation: inviscid, viscous
$\left(\epsilon=10^{-3}\right)$, and viscous $\left(\epsilon=10^{-4}\right)$
solutions computed using $\mathcal{P}_{5}$ linear triangle elements without
shock capturing. Instead, the viscous shock was resolved via anisotropic
space-time $r$-adaptivity. The solver was initialized by extruding the initial
condition at $t=0$ in time.
(a) Inviscid MDG-ICE$\left(\mathcal{P}_{5}/\mathcal{P}_{1}\right)$ at $t=0$
and $t=1$. The corresponding space-time solution is shown in Figure 3.3a. (b)
Viscous MDG-ICE$\left(\mathcal{P}_{5}/\mathcal{P}_{1}\right)$ with
$\epsilon=10^{-3}$ at $t=0$ and $t=1$. The corresponding space-time
solution is shown in Figure 3.3b. (c) Viscous MDG-
ICE$\left(\mathcal{P}_{5}/\mathcal{P}_{1}\right)$ with
$\epsilon=10^{-4}$ at $t=0$ and $t=1$. The corresponding space-time
solution is shown in Figure 3.3c.
Figure 3.4: Burgers shock formation one-dimensional profiles at $t=0$ and
$t=1$: inviscid, viscous $\left(\epsilon=10^{-3}\right)$, and viscous
$\left(\epsilon=10^{-4}\right)$ solutions computed using $\mathcal{P}_{5}$
linear triangle elements without shock capturing. The corresponding space-time
solutions are shown in Figure 3.3.
Figure 3.3 presents the space-time Burgers shock formation solutions for an
inviscid flow, a viscous flow with $\epsilon=10^{-3}$, and a viscous flow with
$\epsilon=10^{-4}$. Figure 3.4 presents the corresponding one-dimensional
profiles at $t=0$ and $t=1$. The space-time solutions were initialized by
extruding the inflow condition, given by Equation (3.3), at $t=0$ in time. The
initial simplicial grid was generated by converting a uniform $10\times 10$
quadrilateral grid into triangles.
In the inviscid case, MDG-ICE fits the point of shock formation and tracks the
shock at the correct speed. In addition to the shock, the inviscid flow
solution has a derivative discontinuity that MDG-ICE also detects and tracks
at the correct speed of $0.2$. For the two viscous flow cases,
$\epsilon=10^{-3}$ and $\epsilon=10^{-4}$, MDG-ICE accurately resolves each
viscous shock as a sharp, yet smooth, profile by adjusting the grid geometry,
without modifying the grid topology. This case demonstrates the inherent
ability of MDG-ICE to achieve anisotropic space-time $r$-adaptivity for
unsteady flow problems.
### 3.3 Mach 5 viscous bow shock
(a) 392 linear triangle cells (b) 392 linear $\mathcal{P}_{2}$ triangle cells
Figure 3.5: The initial linear grid and temperature field corresponding to a
shock captured DG($\mathcal{P}_{2}/\mathcal{P}_{1}$) solution for the viscous
Mach 5 bow shock at $\mathrm{Re}=10^{3}$.
(a) 400 isoparametric $\mathcal{P}_{4}$ triangle cells (b) 400 isoparametric
$\mathcal{P}_{4}$ triangle cells (c) 400 isoparametric $\mathcal{P}_{4}$
triangle cells (d) 400 isoparametric $\mathcal{P}_{4}$ triangle cells
Figure 3.6: The MDG-ICE solution computed using $\mathcal{P}_{4}$
isoparametric triangle elements for the viscous Mach 5 bow shock at
$\mathrm{Re}=10^{3}$. The MDG-ICE grid was initialized by projecting the
linear triangle grid shown in Figure 3.5a to the closest point on the boundary
of the domain. The MDG-ICE field variables were initialized by cell averaging
the interpolated the DG($\mathcal{P}_{2}/\mathcal{P}_{1}$) solution shown in
Figure 3.5b. The MDG-ICE flux variables were initialized to zero for
consistency with the initial piecewise constant field variables. The location
of the shock along the line $x=0$ was computed as $y=1.49995$ for a stand-off
distance of $0.49995$.
(a) The temperature sampled along $x=0$. The exact temperature at the
stagnation point, $T=2.5$, is marked with the symbol $\times$. (b) The normal
velocity, $v_{n}$, sampled along $x=0$. The exact normal velocity at the
stagnation point, $v_{n}=0$, is marked with the symbol $\times$. (c) The
pressure, $p$, sampled along $x=0$. The exact pressure at the stagnation point
for an inviscid flow, $p\approx 23.324$, is marked with the symbol $\times$.
(d) The density, $\rho$, sampled along $x=0$. The density at the stagnation
point, computed using the stagnation pressure corresponding to an inviscid
flow, $\rho\approx 13.061389724919298$, is marked with the symbol $\times$.
(e) The normal component of the normal viscous stress tensor, $\tau_{nn}$,
sampled along $x=0$. (f) The normal thermal heat flux, $q_{n}$, sampled along
$x=0$.
Figure 3.7: Centerline profiles of temperature and normal velocity for the
viscous Mach 5 bow shock at $\mathrm{Re}=10^{3}$ computed with MDG-
ICE($\mathcal{P}_{4}$) compared to ODE and MDG-ICE($\mathcal{P}_{4}$)
approximations of the exact solution for the corresponding one-dimensional
viscous shock. The one-dimensional MDG-ICE($\mathcal{P}_{4}$) approximation
was computed using 16 isoparametric line cells. The location of the shock was
computed as $y=1.49995$ for a stand-off distance of $0.49995$.
(a) The pressure coefficient, $C_{p}$, sampled at each degree of freedom on
the surface of the cylinder. The exact pressure coefficient at the stagnation
point for an inviscid flow, $C_{p}\approx 1.8087699607027568$, is marked with
the symbol $\times$. (b) The Stanton number sampled at each degree of freedom
on the surface of the cylinder. The computed Stanton number at the stagnation
point is marked with the symbol $\times$.
Figure 3.8: Pressure coefficient and Stanton number for the viscous Mach 5 bow
shock at $\mathrm{Re}=10^{3}$ computed with MDG-ICE($\mathcal{P}_{4}$).
(a) 527 isoparametric $\mathcal{P}_{4}$ triangle cells (b) 527 isoparametric
$\mathcal{P}_{4}$ triangle cells (c) 527 isoparametric $\mathcal{P}_{4}$
triangle cells (d) 527 isoparametric $\mathcal{P}_{4}$ triangle cells
Figure 3.9: The MDG-ICE solution computed using $\mathcal{P}_{4}$
isoparametric triangle elements for the viscous Mach 5 bow shock at
$\mathrm{Re}=10^{4}$. The MDG-ICE solution was initialized from the MDG-ICE
solution at $\mathrm{Re}=10^{3}$ shown in Figure 3.6. The location of the
shock along the line $x=0$ was computed as $y=1.48437$ for a stand-off
distance of $0.48437$.
(a) 768 isoparametric $\mathcal{P}_{4}$ triangle cells (b) 768 isoparametric
$\mathcal{P}_{4}$ triangle cells (c) 768 isoparametric $\mathcal{P}_{4}$
triangle cells (d) 768 isoparametric $\mathcal{P}_{4}$ triangle cells
Figure 3.10: The MDG-ICE solution computed using $\mathcal{P}_{4}$
isoparametric triangle elements for the viscous Mach 5 bow shock at
$\mathrm{Re}=10^{5}$. The MDG-ICE solution was initialized from the MDG-ICE
solution at $\mathrm{Re}=10^{4}$ shown in Figure 3.9. The location of the
shock along the line $x=0$ was computed as $y=1.4809125$ for a stand-off
distance of $0.4809125$.
(a) The final grid and temperature fields of the MDG-
ICE$\left(\mathcal{P}_{4}\right)$ solution computed using 400
$\mathcal{P}_{4}$ isoparametric triangle elements for the viscous Mach 5 bow
shock at $\mathrm{Re}=10^{3}$, shown in Figure 3.6a and Figure 3.6b,
respectively.
(b) The final grid and temperature fields of the MDG-
ICE$\left(\mathcal{P}_{4}\right)$ solution computed using 527
$\mathcal{P}_{4}$ isoparametric triangle elements for the viscous Mach 5 bow
shock at $\mathrm{Re}=10^{4}$, shown in Figure 3.9a and Figure 3.9b.
(c) The final grid and temperature fields of the MDG-
ICE$\left(\mathcal{P}_{4}\right)$ solution computed using 768
$\mathcal{P}_{4}$ isoparametric triangle elements for the viscous Mach 5 bow
shock at $\mathrm{Re}=10^{5}$, shown in Figure 3.10a and Figure 3.10b.
Figure 3.11: The final grid and temperature fields corresponding to the MDG-
ICE solution computed using $\mathcal{P}_{4}$ isoparametric triangle elements
for the viscous Mach 5 bow shock at $\mathrm{Re}=10^{3}$,
$\mathrm{Re}=10^{4}$, and $\mathrm{Re}=10^{5}$. Local edge refinement was used
to adaptively split highly anisotropic elements within the viscous structures
as they were resolved by MDG-ICE.
The viscous MDG-ICE discretization is applied to a compressible Navier-Stokes
flow, described in Section 2.1.3, and used to approximate the solution to a
supersonic viscous flow over a cylinder in two dimensions. The solution is
characterized by the Reynolds number $\mathrm{Re}$, and the freestream Mach
number $\mathrm{M}_{\infty}$. The Reynolds number is defined as,
$\mathrm{Re}=\frac{\rho vL}{\mu},$ (3.4)
where $L$ is the characteristic length. The freestream Mach number is defined
as
$\mathrm{M}_{\infty}=\frac{v_{\infty}}{c_{\infty}},$ (3.5)
where $v_{\infty}$ is the freestream velocity,
$c_{\infty}=\sqrt{\nicefrac{{\gamma P_{\infty}}}{{\rho_{\infty}}}}$ is the
freestream speed of sound, $P_{\infty}$ is the freestream pressure, and
$\rho_{\infty}$ is the freestream density. In this work we consider Mach 5
flows at Reynolds numbers of $10^{3}$, $10^{4}$, and $10^{5}$ traveling in the
$\left(0,-1\right)$ direction, i.e., from top to bottom in Figure 3.5, Figure
3.6, Figure 3.9, and Figure 3.10. Supersonic inflow and outflow boundary
conditions are applied at the ellipse and the outflow planes, respectively. An
isothermal no-slip wall is specified at the surface of the cylinder of radius
$r=1$ centered at the origin. The temperature at the isothermal wall is given
as $T_{\mathrm{wall}}=2.5T_{\infty}$, where $T_{\infty}$ is the freestream
temperature.
Figure 3.6 presents the MDG-ICE($\mathcal{P}_{4}$) solution at
$\mathrm{Re}=10^{3}$ computed on a grid of $400$ isoparametric triangle
elements. The MDG-ICE($\mathcal{P}_{4}$) solution was initialized from the
DG($\mathcal{P}_{2}$) solution and the corresponding grid. Figure 3.5a shows
the grid, consisting of 392 linear triangle elements, that was used to
initialize the MDG-ICE($\mathcal{P}_{4}$) solution. The high-order
isoparametric boundary faces were initialized by projecting to the closest
point on the boundary of the domain via the boundary operator (2.80). The
MDG-ICE($\mathcal{P}_{4}$) field variables
were initialized by cell averaging the interpolated DG($\mathcal{P}_{2}$)
solution shown in Figure 3.5b. The MDG-ICE auxiliary variable, with spatial
components given by (2.47), was initialized to zero for consistency with the
initial piecewise constant field variables. As the MDG-ICE solution converged,
the previously uniformly distributed points were relocated in order to resolve
the viscous shock. This resulted in a loss of resolution downstream, which was
conveniently handled via local edge refinement where highly anisotropic
elements were adaptively split as the viscous structures were resolved.
Although this was unnecessary for maintaining a valid solution, i.e., field
variables that are finite and a grid composed only of cells with a positive
determinant of the Jacobian, we found it sufficient for maintaining a
reasonable grid resolution, as compared to the initial grid resolution
downstream of the shock. The location of the shock along the line $x=0$,
estimated as the location corresponding to the minimum normal heat flux, was
computed as $y=1.49995$, giving a stand-off distance of $0.49995$.
In one dimension, the viscous shock is described by a system of ordinary
differential equations that can be solved numerically, cf. 94, 93 for details.
We use this solution to verify that the viscous MDG-ICE formulation predicts
the correct viscous shock profile when diffusive effects are prominent, i.e.,
at low Reynolds number. Figure 3.7 presents a comparison of an approximation
of the exact solution for a one-dimensional viscous shock to the centerline
profiles of the Mach 5 bow shock at $\mathrm{Re}=10^{3}$ for the following
variables: temperature, $T$, normal velocity, $v_{n}$, pressure, $p$, density,
$\rho$, normal component of the normal viscous stress tensor, $\tau_{nn}$, and
normal heat flux, $q_{n}$, where the normal is taken to be in the streamwise
direction.
As expected, the one-dimensional profiles deviate from the two-dimensional bow
shock centerline profiles downstream of the viscous shock. The one-dimensional
solution assumes the viscous and diffusive fluxes are zero outside of the
shock. This is not the case for the two-dimensional bow shock geometry where
the blunt body and corresponding boundary layer produce gradients in the
solution downstream of the shock. For density, in which case the diffusive
flux is zero, the jump across the viscous shock is directly comparable to the
one-dimensional solution. We also directly compare the exact solution to a
one-dimensional viscous shock profile computed by MDG-ICE($\mathcal{P}_{4}$)
using 16 isoparametric line cells. Figure 3.7d shows that MDG-ICE accurately
reproduces the exact shock structure of the density profile with only a few high-
order anisotropic curvilinear cells.
For reference, the exact and approximate values at the stagnation point are
marked on the centerline plots with the symbol $\times$. An approximate value
corresponding to the inviscid solution was used when the exact value for the
viscous flow was unavailable, e.g., the stagnation pressure marked in Figure
3.7c. Although the analytic stagnation pressure for an inviscid flow neglects
viscous effects, it is not expected to differ significantly from the value
corresponding to the viscous solution for the problem considered here, as
shown in Table 1 in the work of Williams et al. 95.
We also report the pressure coefficient and the Stanton number, sampled at the
degrees of freedom, on the cylindrical, no-slip, isothermal surface. The
pressure coefficient at the surface is defined as
$C_{p}=\frac{p-p_{\infty}}{\frac{1}{2}\rho_{\infty}v_{\infty}^{2}},$ (3.6)
where $p_{\infty}$, $\rho_{\infty}$, and $v_{\infty}$ are the freestream
pressure, density, and velocity respectively. The Stanton number at the
surface is defined as
$C_{h}=\frac{q_{\mathrm{n}}}{c_{p}\rho_{\infty}v_{\infty}\left(T_{t,\infty}-T_{\mathrm{wall}}\right)},$
(3.7)
where $q_{n}$ is the normal heat flux, $T_{\mathrm{wall}}$ is the wall
temperature, and $T_{t,\infty}$ is the freestream stagnation temperature. In
Figure 3.8a the pressure coefficient on the cylindrical surface is plotted and
the exact pressure coefficient at the stagnation point for an inviscid flow,
$C_{p}\approx 1.8087699607027568$, is marked with the symbol $\times$.
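For reference, the quoted inviscid stagnation value follows from the standard Rayleigh pitot-tube formula, which combines the normal-shock and isentropic relations; a short verification (our own check, independent of the MDG-ICE solver):

```python
import numpy as np

# Rayleigh pitot-tube formula: stagnation pressure behind a normal shock,
# normalized by the freestream static pressure, for gamma = 1.4, M = 5.
gamma, M = 1.4, 5.0
pt2_over_p = ((gamma + 1)**2 * M**2 / (4 * gamma * M**2 - 2 * (gamma - 1))) \
             ** (gamma / (gamma - 1)) * (1 - gamma + 2 * gamma * M**2) / (gamma + 1)
# Stagnation pressure coefficient, Eq. (3.6), with p = p_t2:
Cp_stag = (pt2_over_p - 1.0) / (0.5 * gamma * M**2)
assert np.isclose(Cp_stag, 1.8087699607027568)
```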
In order to compute solutions at higher Reynolds numbers, continuation of the
freestream viscosity, $\mu_{\infty}$, was employed. Figure 3.9 presents the
MDG-ICE($\mathcal{P}_{4}$) solution at $\mathrm{Re}=10^{4}$ computed on a grid
of $527$ isoparametric triangle elements, which was initialized from the
$\mathrm{Re}=10^{3}$ MDG-ICE($\mathcal{P}_{4}$) solution. Figure 3.10 presents
the MDG-ICE($\mathcal{P}_{4}$) solution at $\mathrm{Re}=10^{5}$ computed on a
grid of $768$ isoparametric triangle elements, which was initialized from the
$\mathrm{Re}=10^{4}$ MDG-ICE($\mathcal{P}_{4}$) solution. As in the
$\mathrm{Re}=10^{3}$ case, local edge refinement was used to adaptively split
highly anisotropic elements within the viscous structures as they were
resolved by MDG-ICE. At these higher Reynolds numbers, local refinement was
necessary to maintain a valid grid. In addition to the splitting of highly
anisotropic cells, local edge refinement was also applied to cells in which
the determinant of the Jacobian became non-positive.
Figure 3.11 compares the MDG-ICE($\mathcal{P}_{4}$) solutions directly
downstream of the viscous shock at $\mathrm{Re}=10^{3}$, $\mathrm{Re}=10^{4}$,
and $\mathrm{Re}=10^{5}$. By adapting the grid to the flow field, MDG-ICE is
able to simultaneously resolve the thin viscous structure over a range of
Reynolds numbers. As the MDG-ICE solution converges, elements within regions
that contain strong gradients become highly anisotropic and warp nonlinearly
to both conform to the curved shock geometry and efficiently resolve the flow
around the curved blunt body. Thus, unlike a posteriori anisotropic mesh
adaptation, MDG-ICE achieves high-order anisotropic curvilinear $r$-adaptivity
as an intrinsic part of the solver. Furthermore, MDG-ICE automatically
repositions the nodes in order to resolve the flow field over different length
scales as the Reynolds number is increased from $10^{3}$ to $10^{5}$. As such,
MDG-ICE overcomes another challenge associated with a posteriori anisotropic
mesh adaptation, which produces regions of excessive refinement on the scale of
the coarse mesh cell size and must therefore rely on grid coarsening to limit
the region of refinement to the more appropriate length scale corresponding to
the feature under consideration.
## 4 Conclusions and future work
The Moving Discontinuous Galerkin Method with Interface Condition Enforcement
(MDG-ICE) has been applied to viscous flow problems, involving both linear and
nonlinear viscous fluxes, where it was shown to detect and resolve previously
under-resolved flow features. In the case of linear advection-diffusion, MDG-
ICE adapted the grid to resolve the initially under-resolved boundary layer,
thereby achieving spectral convergence and a more accurate solution than the
best possible approximation on a uniform static grid, which is given by the
$L^{2}$ projection of the exact solution. Unsteady flows were computed using a
space-time formulation where viscous structures were automatically resolved
via anisotropic space-time $r$-adaptivity. High-speed compressible Navier-
Stokes solutions for a viscous Mach 5 bow shock at Reynolds numbers of
$10^{3}$, $10^{4}$, and $10^{5}$ were presented. The viscous MDG-ICE
formulation was shown to produce the correct viscous shock profile in one
dimension for a Mach 5 flow at $\mathrm{Re}=10^{3}$. The one-dimensional
viscous shock profile was compared to the centerline profile of the two-
dimensional MDG-ICE solution, which was shown to accurately compute both the
shock profile and the boundary layer profile simultaneously using only a few
high-order anisotropic curved cells within each region, thus overcoming an
ongoing limitation of anisotropic mesh adaptation.
used to adaptively split highly anisotropic elements within the viscous
structures as they were resolved by MDG-ICE. Finally, MDG-ICE is a consistent
discretization of the governing equations that does not introduce low-order
errors via artificial stabilization or limiting and treats the discrete grid
as a variable.
It should be noted that the internal structure of a viscous shock may not be
adequately described by the compressible Navier-Stokes equations due to non-
equilibrium effects 96, an issue surveyed by Powers et al. 97. While in the
present work MDG-ICE was shown to provide highly accurate solutions to the
compressible Navier-Stokes equations, future work will apply MDG-ICE to an
improved multi-scale model that incorporates physical effects more adequately
described by the kinetic theory of gases 98. MDG-ICE is a promising method to
apply within such a framework due to its ability to isolate regions in which
enhanced physical modeling is required.
In future work, we will also develop a least-squares MDG-ICE formulation with
optimal test functions by applying the DPG methodology of Demkowicz and
Gopalakrishnan 58, 59, 65. Using this approach we will demonstrate high-order
convergence for both linear and nonlinear problems. We also plan to mitigate
the need for local $h$-refinement by considering alternative methods for
maintaining grid validity. We will explore adaptively increasing the order of
the local polynomial approximation for cells within thin internal and boundary
layers. For instance, Chan et al. 94, 85 used a combination of $h$- and $p$-
refinement to resolve viscous shocks. In their adaptive strategy,
$h$-refinement is used until the local grid resolution is on the order of the
viscous scale, at which point $p$-refinement is used to further enhance
accuracy. Additionally, scaling the regularization by the inverse of the cell
volume, an approach used by Zahr et al. 31, 32, may also be effective for
maintaining grid validity at higher Reynolds numbers. Ultimately, we plan on
maintaining grid validity by incorporating smoothing, or untangling, into the
projection operator, Equation (2.80), which enforces the geometric boundary
conditions.
## Acknowledgements
This work is sponsored by the Office of Naval Research through the Naval
Research Laboratory 6.1 Computational Physics Task Area.
## References
* 1 Bassi F., Rebay S.. A high-order accurate discontinuous finite element method for the numerical solution of the compressible Navier–Stokes equations. Journal of Computational Physics. 1997;131(2):267–279.
* 2 Bassi F., Rebay S.. High-order accurate discontinuous finite element solution of the 2D Euler equations. Journal of Computational Physics. 1997;138(2):251–285.
* 3 Cockburn B., Shu C.-W.. The Runge–Kutta discontinuous Galerkin method for conservation laws V: multidimensional systems. Journal of Computational Physics. 1998;141(2):199–224.
* 4 Cockburn B., Karniadakis G.E., Shu C.-W.. The development of discontinuous Galerkin methods. In: Springer 2000 (pp. 3–50).
* 5 Arnold D.N., Brezzi F., Cockburn B., Marini L.D.. Unified analysis of discontinuous Galerkin methods for elliptic problems. SIAM Journal on Numerical Analysis. 2002;39(5):1749–1779.
* 6 Hartmann R., Houston P.. Adaptive discontinuous Galerkin finite element methods for the compressible Euler equations. Journal of Computational Physics. 2002;183(2):508–532.
* 7 Fidkowski K.J., Oliver T.A, Lu J., Darmofal D.L.. p-Multigrid solution of high-order discontinuous Galerkin discretizations of the compressible Navier–Stokes equations. Journal of Computational Physics. 2005;207(1):92–113.
* 8 Hesthaven J. S., Warburton T.. Nodal discontinuous Galerkin methods: algorithms, analysis, and applications. Springer Science & Business Media; 2007.
* 9 Persson P-O, Peraire J.. Newton-GMRES preconditioning for discontinuous Galerkin discretizations of the Navier–Stokes equations. SIAM Journal on Scientific Computing. 2008;30(6):2709–2733.
* 10 Hartmann R., Leicht T.. Higher order and adaptive DG methods for compressible flows. In: Deconinck H., ed. VKI LS 2014-03: 37th Advanced VKI CFD Lecture Series: Recent developments in higher order methods and industrial application in aeronautics, Dec. 9-12, 2013, Von Karman Institute for Fluid Dynamics, Rhode Saint Genèse, Belgium 2014. Retrieved from https://ganymed.math.uni-heidelberg.de/~hartmann/publications/2014/HL14a.pdf.
* 11 Liu J.-G., Shu C.-W.. A high-order discontinuous Galerkin method for 2D incompressible flows. Journal of Computational Physics. 2000;160(2):577–596.
* 12 Bassi F., Crivellini A., Di Pietro D. A., Rebay S.. An implicit high-order discontinuous Galerkin method for steady and unsteady incompressible flows. Computers & Fluids. 2007;36(10):1529–1546.
* 13 Rhebergen S., Cockburn B.. A space–time hybridizable discontinuous Galerkin method for incompressible flows on deforming domains. Journal of Computational Physics. 2012;231(11):4185–4204.
* 14 Lv Y., Ihme M.. Discontinuous Galerkin method for multicomponent chemically reacting flows and combustion. Journal of Computational Physics. 2014;270:105–137.
* 15 Johnson R. F., Goodwin G. B., Corrigan A. T., Kercher A., Chelliah H. K.. Discontinuous-Galerkin Simulations of Premixed Ethylene-Air Combustion in a Cavity Combustor. In: AIAA , ed. 2019 AIAA SciTech Forum, ; 2019. AIAA-2019-1444.
* 16 Johnson R.F., Kercher A.D.. A Conservative Discontinuous Galerkin Discretization for the Total Energy Formulation of the Reacting Navier Stokes Equations. arXiv preprint arXiv:1910.10544. 2019;.
* 17 Wang Z.J., Fidkowski K., Abgrall R., et al. High-Order CFD Methods: Current Status and Perspective. International Journal for Numerical Methods in Fluids. 2013;.
* 18 Corrigan A., Kercher A.D., Kessler D.A.. A Moving Discontinuous Galerkin Finite Element Method for Flows with Interfaces. NRL/MR/6040–17-9765: U.S. Naval Research Laboratory; 2017. https://apps.dtic.mil/dtic/tr/fulltext/u2/1042881.pdf.
* 19 Corrigan A., Kercher A.D., Kessler D.A.. A Moving Discontinuous Galerkin Finite Element Method for Flows with Interfaces. International Journal for Numerical Methods in Fluids. 2019;89(9):362-406.
* 20 Lowrie R., Roe P., Leer B.. A space-time discontinuous Galerkin method for the time-accurate numerical solution of hyperbolic conservation laws. In: AIAA , ed. 12th Computational Fluid Dynamics Conference, ; 1995. AIAA-1995-1658.
* 21 Lowrie R.B., Roe P.L., Van Leer B.. Space-time methods for hyperbolic conservation laws. In: Springer 1998 (pp. 79–98).
* 22 Corrigan A., Kercher A., Kessler D.. The Moving Discontinuous Galerkin Method with Interface Condition Enforcement for Unsteady Three-Dimensional Flows. In: AIAA , ed. 2019 AIAA SciTech Forum, ; 2019. AIAA-2019-0642.
* 23 Corrigan A., Kercher A., Kessler D., Wood-Thomas D.. Application of the Moving Discontinuous Galerkin Method with Interface Condition Enforcement to Shocked Compressible Flows. In: AIAA , ed. 2018 AIAA AVIATION Forum, ; 2018. AIAA-2018-4272.
* 24 Corrigan A., Kercher A., Kessler D., Wood-Thomas D.. Convergence of the Moving Discontinuous Galerkin Method with Interface Condition Enforcement in the Presence of an Attached Curved Shock. In: AIAA , ed. 2019 AIAA AVIATION Forum, ; 2019. AIAA-2019-3207.
* 25 Moretti G.. Thirty-six years of shock fitting. Computers & Fluids. 2002;31(4):719–723.
* 26 Salas M.D.. A shock-fitting primer. CRC Press; 2009.
* 27 Salas M.D.. A brief history of shock-fitting. In: Springer 2011 (pp. 37–53).
* 28 Zahr M. J., Persson P.-O.. An optimization-based approach for high-order accurate discretization of conservation laws with discontinuous solutions. ArXiv e-prints. 2017;.
* 29 Zahr M.J., Persson P.-O.. An Optimization Based Discontinuous Galerkin Approach for High-Order Accurate Shock Tracking. In: AIAA , ed. 2018 AIAA Aerospace Sciences Meeting, ; 2018. AIAA-2018-0063.
* 30 Zahr M.J., Persson P-O. An optimization-based approach for high-order accurate discretization of conservation laws with discontinuous solutions. Journal of Computational Physics. 2018;.
* 31 Zahr M. J., Shi A., Persson P.-O.. Implicit shock tracking using an optimization-based, $r$-adaptive, high-order discontinuous Galerkin method. ArXiv e-prints. 2019;.
* 32 Zahr M. J., Shi A., Persson P.-O.. An $r$-adaptive, high-order discontinuous Galerkin method for flows with attached shocks. In: AIAA , ed. 2020 AIAA SciTech Forum, ; 2020. AIAA-2020-0537.
* 33 Tam A., Ait-Ali-Yahia D., Robichaud M.P., Moore M., Kozel V., Habashi W.G.. Anisotropic mesh adaptation for 3D flows on structured and unstructured grids. Computer Methods in Applied Mechanics and Engineering. 2000;189(4):1205–1230.
* 34 Pain C.C., Umpleby A.P., De Oliveira C.R.E., Goddard A.J.H.. Tetrahedral mesh optimisation and adaptivity for steady-state and transient finite element calculations. Computer Methods in Applied Mechanics and Engineering. 2001;190(29-30):3771–3796.
* 35 George P.-L.. Gamanic3d, adaptive anisotropic tetrahedral mesh generator. : Technical Report, INRIA; 2002.
* 36 Bottasso C. L.. Anisotropic mesh adaption by metric-driven optimization. International Journal for Numerical Methods in Engineering. 2004;60(3):597–639.
* 37 Li X., Shephard M. S., Beall M. W.. 3D anisotropic mesh adaptation by mesh modification. Computer methods in applied mechanics and engineering. 2005;194(48-49):4915–4950.
* 38 Jones W., Nielsen E., Park M.. Validation of 3D adjoint based error estimation and mesh adaptation for sonic boom prediction. In: AIAA , ed. 44th AIAA Aerospace Sciences Meeting and Exhibit, :1150; 2006. AIAA-2006-1150.
* 39 Dobrzynski C., Frey P.. Anisotropic Delaunay mesh adaptation for unsteady simulations. In: Springer 2008 (pp. 177–194).
* 40 Compere G., Remacle J.-F., Jansson J., Hoffman J.. A mesh adaptation framework for dealing with large deforming meshes. International journal for numerical methods in engineering. 2010;82(7):843–867.
* 41 Loseille A., Löhner R.. Anisotropic adaptive simulations in aerodynamics. In: AIAA , ed. 48th AIAA Aerospace Sciences Meeting Including the New Horizons Forum and Aerospace Exposition, :169; 2010. AIAA-2010-0169.
* 42 Loseille A., Löhner R.. Boundary layer mesh generation and adaptivity. In: AIAA , ed. 49th AIAA Aerospace Sciences Meeting including the New Horizons Forum and Aerospace Exposition, :894; 2011. AIAA-2011-0894.
* 43 Löhner R.. Matching semi-structured and unstructured grids for Navier-Stokes calculations. In: AIAA , ed. 11th Computational Fluid Dynamics Conference, :3348; 1993. AIAA-1993-3348.
* 44 Löhner R.. Generation of unstructured grids suitable for RANS calculations. In: Springer 2000 (pp. 153–163).
* 45 Pirzadeh S.. Viscous unstructured three-dimensional grids by the advancing-layers method. In: AIAA , ed. 32nd Aerospace Sciences Meeting and Exhibit, :417; 1994. AIAA-1994-0417.
* 46 Marcum D. L.. Adaptive unstructured grid generation for viscous flow applications. AIAA journal. 1996;34(11):2440–2443.
* 47 Garimella R. V., Shephard M. S.. Boundary layer mesh generation for viscous flow simulations. International Journal for Numerical Methods in Engineering. 2000;49(1-2):193–218.
* 48 Bottasso C. L., Detomi D.. A procedure for tetrahedral boundary layer mesh generation. Engineering with Computers. 2002;18(1):66–79.
* 49 Ito Y., Nakahashi K.. Unstructured Mesh Generation for Viscous Flow Computations. In: :367–377; 2002.
* 50 Ito Y., Shih A., Soni B., Nakahashi K.. An approach to generate high quality unstructured hybrid meshes. In: AIAA , ed. 44th AIAA Aerospace Sciences Meeting and Exhibit, :530; 2006. AIAA-2006-0530.
* 51 Aubry R., Löhner R.. Generation of viscous grids at ridges and corners. International journal for numerical methods in engineering. 2009;77(9):1247–1289.
* 52 Fidkowski K.J., Darmofal D.L.. Review of output-based error estimation and mesh adaptation in computational fluid dynamics. AIAA journal. 2011;49(4):673–694.
* 53 Yano M., Darmofal D. L.. An optimization-based framework for anisotropic simplex mesh adaptation. Journal of Computational Physics. 2012;231(22):7626–7649.
* 54 Carson H. A., Allmaras S. R., Galbraith M. C., Darmofal D.L.. Mesh optimization via error sampling and synthesis: An update. In: AIAA , ed. 2020 AIAA SciTech Forum, :87; 2020. AIAA-2020-0087.
* 55 Alauzet F., Loseille A.. A decade of progress on anisotropic mesh adaptation for computational fluid dynamics. Computer-Aided Design. 2016;72:13–39.
* 56 Jiang B.-N., Carey G.F.. Adaptive refinement for least-squares finite elements with element-by-element conjugate gradient solution. International journal for numerical methods in engineering. 1987;24(3):569–580.
* 57 Carey G.F., Pehlivanov A.I.. Local error estimation and adaptive remeshing scheme for least-squares mixed finite elements. Computer methods in applied mechanics and engineering. 1997;150(1-4):125–131.
* 58 Demkowicz L., Gopalakrishnan J.. A class of discontinuous Petrov–Galerkin methods. Part I: The transport equation. Computer Methods in Applied Mechanics and Engineering. 2010;199(23-24):1558–1572.
* 59 Demkowicz L., Gopalakrishnan J.. A class of discontinuous Petrov–Galerkin methods. II. Optimal test functions. Numerical Methods for Partial Differential Equations. 2011;27(1):70–105.
* 60 Demkowicz L., Gopalakrishnan J., Niemi Antti H. A class of discontinuous Petrov–Galerkin methods. Part III: Adaptivity. Applied numerical mathematics. 2012;62(4):396–427.
* 61 Gopalakrishnan J.. Five lectures on DPG methods. arXiv preprint arXiv:1306.0557. 2013;.
* 62 Gopalakrishnan J., Qiu W.. An analysis of the practical DPG method. Mathematics of Computation. 2014;83(286):537–552.
* 63 Carstensen C., Demkowicz L., Gopalakrishnan J.. Breaking spaces and forms for the DPG method and applications including Maxwell equations. Computers & Mathematics with Applications. 2016;72(3):494–522.
* 64 Demkowicz L., Gopalakrishnan J., Keith B.. The DPG-star method. Computers & Mathematics with Applications. 2020;.
* 65 Demkowicz L., Gopalakrishnan J.. Discontinuous Petrov-Galerkin (DPG) method. 15-20: ICES; 2015. Retrieved from https://www.oden.utexas.edu/media/reports/2015/1520.pdf.
* 66 Miller K., Miller R.N.. Moving finite elements. I. SIAM Journal on Numerical Analysis. 1981;18(6):1019–1032.
* 67 Miller K.. Moving finite elements. II. SIAM Journal on Numerical Analysis. 1981;18(6):1033–1057.
* 68 Gelinas R.J., Doss S.K., Miller K.. The moving finite element method: applications to general partial differential equations with multiple large gradients. Journal of Computational Physics. 1981;40(1):202–249.
* 69 Bank R.E., Santos R.F.. Analysis of some moving space-time finite element methods. SIAM journal on numerical analysis. 1993;30(1):1–18.
* 70 Bochev P., Liao G., Pena G.. Analysis and computation of adaptive moving grids by deformation. Numerical Methods for Partial Differential Equations: An International Journal. 1996;12(4):489–506.
* 71 Roe P., Nishikawa H.. Adaptive grid generation by minimizing residuals. International Journal for Numerical Methods in Fluids. 2002;40(1-2):121–136.
* 72 Sanjaya D.P., Fidkowski K.J.. Improving High-Order Finite Element Approximation Through Geometrical Warping. AIAA Journal. 2016;54(12):3994–4010.
* 73 Budd C.J., Huang W., Russell R.D.. Adaptivity with moving grids. Acta Numerica. 2009;18:111–241.
* 74 Huang W., Russell R.D.. Adaptive moving mesh methods. Springer Science & Business Media; 2010.
* 75 Majda A.. Compressible fluid flow and systems of conservation laws in several space variables. Springer Science & Business Media; 2012.
* 76 Persson P-O., Peraire J.. Sub-cell shock capturing for discontinuous Galerkin methods. In: AIAA , ed. 44th AIAA Aerospace Sciences Meeting and Exhibit, ; 2006. AIAA-2006-112.
* 77 Peraire J., Nguyen NC, Cockburn B.. A hybridizable discontinuous Galerkin method for the compressible Euler and Navier-Stokes equations. In: AIAA , ed. 48th AIAA Aerospace Sciences Meeting Including the New Horizons Forum and Aerospace Exposition, ; 2010. AIAA-2010-0363.
* 78 Hansbo A., Hansbo P.. An unfitted finite element method, based on Nitsche’s method, for elliptic interface problems. Computer Methods in Applied Mechanics and Engineering. 2002;191(47):5537–5552.
* 79 Massjung R.. An unfitted discontinuous Galerkin method applied to elliptic interface problems. SIAM Journal on Numerical Analysis. 2012;50(6):3134–3162.
* 80 Mott D. R., Kercher A.D., Adams A., et al. Interface-fitted Simulation of Multi-Material Sheath Flow using MDG-ICE. In: AIAA , ed. 2020 AIAA SciTech Forum, ; 2020. AIAA-2020-0562.
* 81 Cockburn B., Shu C.-W.. The local discontinuous Galerkin method for time-dependent convection-diffusion systems. SIAM Journal on Numerical Analysis. 1998;35(6):2440–2463.
* 82 Broersen D., Stevenson R.. A robust Petrov–Galerkin discretisation of convection–diffusion equations. Computers & Mathematics with Applications. 2014;68(11):1605–1618.
* 83 Broersen D., Stevenson R. P.. A Petrov–Galerkin discretization with optimal test space of a mild-weak formulation of convection–diffusion equations in mixed form. IMA Journal of Numerical Analysis. 2015;35(1):39–73.
* 84 Roos H.-G., Stynes M., Tobiska L.. Robust numerical methods for singularly perturbed differential equations: convection-diffusion-reaction and flow problems. Springer Science & Business Media; 2008.
* 85 Chan J., Demkowicz L., Moser R.. A DPG method for steady viscous compressible flow. Computers & Fluids. 2014;98:69–90.
* 86 Corrigan A., Williams D.M., Kercher A.D.. Weak Formulation of a Conservation Law in Reference Space. : U.S. Naval Research Laboratory; 2020.
* 87 Massey W.S.. Cross products of vectors in higher dimensional Euclidean spaces. The American Mathematical Monthly. 1983;90(10):697–701.
* 88 Keith B., Petrides S., Fuentes F., Demkowicz L.. Discrete least-squares finite element methods. Computer Methods in Applied Mechanics and Engineering. 2017;327:226–255.
* 89 Bochev P.B., Gunzburger M.D.. Finite element methods of least-squares type. SIAM review. 1998;40(4):789–837.
* 90 Bochev P.B., Gunzburger M.D.. Least–squares finite element methods. Springer Science & Business Media; 2009.
* 91 Guennebaud Gaël, Jacob Benoît, others. Eigen v3. http://eigen.tuxfamily.org; 2010.
* 92 Löhner R.. Applied CFD Techniques. J. Wiley & Sons; 2008.
* 93 Masatsuka K.. I do Like CFD, vol. 1. Lulu.com; 2013.
* 94 Chan J., Demkowicz L, Moser R., Roberts N.. A new discontinuous Petrov-Galerkin method with optimal test functions. part V: solution of 1D Burgers’ and Navier-Stokes equations. 10-25: ICES; 2010. Retrieved from https://www.oden.utexas.edu/media/reports/2010/1025.pdf.
* 95 Williams D. M., Kamenetskiy D. S., Spalart P. R.. On stagnation pressure increases in calorically perfect, ideal gases. International Journal of Heat and Fluid Flow. 2016;58:40–53.
* 96 Pham-Van-Diep G, Erwin D, Muntz EP. Nonequilibrium molecular motion in a hypersonic shock wave. Science. 1989;245(4918):624–626.
* 97 Powers J.M., Bruns J.D., Jemcov A.. Physical diffusion cures the carbuncle phenomenon. In: AIAA , ed. 53rd AIAA Aerospace Sciences Meeting, ; 2015. AIAA-2015-0579.
* 98 Kessler D.A., Oran E.S., Kaplan C.R.. Towards the development of a multiscale, multiphysics method for the simulation of rarefied gas flows. Journal of fluid mechanics. 2010;661:262–293.
|
2024-09-04T02:54:55.519637 | 2020-02-28T14:50:31 | 2002.12758 | {
"authors": "Alexander Bednyakov, Andrey Pikelner",
"full_text_license": null,
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"provenance": "arxiv-papers-0000.json.gz:25942",
"submitter": "Andrey Pikelner",
"url": "https://arxiv.org/abs/2002.12758"
} | arxiv-papers | # Quark masses: N3LO bridge from ${\rm RI/SMOM}$ to ${\rm\overline{MS}}$
scheme
Alexander Bednyakov<EMAIL_ADDRESS>Bogoliubov Laboratory of Theoretical
Physics, Joint Institute for Nuclear Research, Joliot-Curie 6, Dubna 141980,
Russia P.N. Lebedev Physical Institute of the Russian Academy of Sciences,
Leninskii pr., 5, Moscow 119991, Russia Andrey Pikelner
<EMAIL_ADDRESS>Bogoliubov Laboratory of Theoretical Physics, Joint
Institute for Nuclear Research, Joliot-Curie 6, Dubna 141980, Russia
###### Abstract
We analytically compute the three-loop corrections to the relation between the
renormalized quark masses defined in the minimal-subtraction
(${\rm\overline{MS}}$) and the regularization-invariant symmetric momentum-
subtraction (RI/SMOM) schemes. Our result is valid in the Landau gauge and can
be used to reduce the uncertainty in a lattice determination of the
${\rm\overline{MS}}$ quark masses.
## I Introduction
Quark masses $m_{q}$ arise in the Standard Model (SM) from Yukawa interactions
of the quarks with the Higgs field. Although not being of fundamental origin,
quark masses are usually treated as parameters of the SM and for many years
were the only source of information on the Higgs Yukawa couplings. As a
consequence, precise knowledge of $m_{q}$ is required both to test the SM and
study new physics. The values of the quark masses can be determined in several
ways (for a review see, e.g., Ref. Tanabashi _et al._ (2018)). Since all
colored fermions but the top are confined inside hadrons, there is no unique
(“physical”) definition of the corresponding mass parameters, and one is free
to choose a renormalization scheme that suits better for a problem at hand. To
compare the results of different determinations, it is customary to use
perturbation theory (PT) and convert the obtained values to the short-distance
running mass $m_{q}^{{\rm\overline{MS}}}(\mu)$ in the minimal-subtraction
scheme ${\rm\overline{MS}}$, evaluated at a fixed scale $\mu$.
Figure 1: Momentum flow of a Green function (left), and the three-point vertex
with $O_{S}=\bar{\psi}\psi$ operator insertion (right) considered in the
paper. SMOM kinematics corresponds to $p_{1}^{2}=p_{2}^{2}=q^{2}$, while in
the “exceptional” case $p_{1}^{2}=p_{2}^{2}$, and $q^{2}=0$.
One of the approaches to the quark-mass determination, especially useful in
the case of light quarks, is based on lattice computations (see, e.g., Ref.
Aoki _et al._ (2019)). The resulting values, in this case, are bare quark
masses $m_{\rm bare}$, corresponding to a particular discretization of QCD
with the lattice spacing $a$ acting as the ultraviolet cutoff. While it is, in
principle, possible to directly relate $m_{\rm bare}$ to
$m_{q}^{{\rm\overline{MS}}}$, it turns out to be more convenient to relate
$m_{\rm bare}$ to a mass parameter $m_{q}^{\rm RI}$ defined in a
regularization-independent (RI) momentum-subtraction renormalization scheme,
which can be realized directly in lattice QCD. The continuum PT is used in
this case to convert the finite value $m_{q}^{\rm RI}$ to
$m_{q}^{{\rm\overline{MS}}}$. Among such schemes is the so-called
RI/SMOM Sturm _et al._ (2009), in which certain three-point Green functions
with momenta $p_{1}$, $p_{2}$, and $q=p_{1}+p_{2}$ (see Fig. 1) are
normalized at _symmetric_ kinematics ($p_{1}^{2}=p_{2}^{2}=q^{2}=-\mu^{2}$);
it has advantages over the original RI/MOM Martinelli _et al._ (1995) scheme.
The latter utilizes an “exceptional” momentum configuration with $q^{2}=0$,
$p_{1}^{2}=p_{2}^{2}=-\mu^{2}$ and suffers from enhanced sensitivity to
nonperturbative infrared effects (see, e.g., Ref. Aoki _et al._ (2008) for
details). In addition, the RI/SMOM PT series show a much better convergence
behavior than that of the RI/MOM ones.
Recent state-of-the-art lattice determination Lytle _et al._ (2018) of the
running ${\rm\overline{MS}}$ masses of the charm
($m_{c}^{{\rm\overline{MS}}}(3~{}{\rm GeV})=0.9896(61)$ GeV) and strange
($m_{s}^{{\rm\overline{MS}}}(3~{}{\rm GeV})=0.08536(85)$ GeV) quarks in
$n_{f}{}=4$ QCD heavily relies on the two-loop (next-to-next-to-leading, or
NNLO) conversion factor Gorbahn and Jager (2010); Almeida and Sturm (2010)
relating ${\rm\overline{MS}}$ and ${\rm SMOM}$ schemes. According to the
estimates given in this reference, the uncertainty due to the missing next-to-
next-to-next-to-leading (N3LO) term is comparable with other sources of
uncertainties (e.g., due to continuum extrapolation or condensate effects) and
contributes significantly to the overall error budget (for details, see
Table VI of Ref. Lytle _et al._ (2018)).
In this letter, we report on the analytical computation of the three-loop
contribution, thus providing additional precision for such an analysis.
Recently, a _numerical_ evaluation of the same quantity appeared in Ref.
Kniehl and Veretin (2020). Our result confirms the estimates provided therein.
## II Details of calculation
To calculate the required conversion factor $C_{m}^{{\rm SMOM}}$, we consider
QCD with $n_{f}{}$ flavors and define
$\displaystyle m_{q}^{{\rm\overline{MS}}}=C_{m}^{{\rm SMOM}}m_{q}^{{\rm
SMOM}},\quad C_{m}^{{\rm SMOM}}=\frac{Z_{m}^{{\rm
SMOM}}}{Z_{m}^{{\rm\overline{MS}}}}.$ (1)
The mass parameters in ${\rm\overline{MS}}$ and ${\rm SMOM}$ schemes are
related to the quark bare mass $m_{\rm bare}$ via $Z_{m}^{\rm
R}=\\{Z_{m}^{{\rm\overline{MS}}},Z_{m}^{{\rm SMOM}}\\}$
$\displaystyle m_{\rm bare}=Z_{m}^{\rm R}m_{q}^{\rm
R}=Z_{m}^{{\rm\overline{MS}}}m_{q}^{{\rm\overline{MS}}}=Z_{m}^{{\rm
SMOM}}m_{q}^{{\rm SMOM}}.$ (2)
In continuum QCD the bare mass $m_{\rm bare}$ is usually defined in
dimensional regularization so that each $Z_{m}^{\rm R}$ contains poles in
$\varepsilon=(4-d)/2$. To determine $Z_{m}^{\rm R}$ we do not compute massive
propagators but renormalize the scalar bilinear operator
$O_{S}\equiv\bar{\psi}\psi$ (see Fig. 1) in massless QCD
$\displaystyle\left[\bar{\psi}\psi\right]_{\rm R}=Z_{m}^{\rm
R}(\bar{\psi}\psi)_{\rm bare}.$ (3)
This simplified approach neglects both valence and sea quark masses, but still
provides a reasonable approximation to the conversion factor $C_{m}^{{\rm
SMOM}}$ in a range of renormalization scales utilized in lattice calculations
(see, e.g., Ref. Lytle _et al._ (2018) for numerical studies of the two-loop
corrections due to nonzero quark masses).
We compute $Z_{m}^{{\rm SMOM}}$ and $Z_{m}^{{\rm\overline{MS}}}$ order-by-
order in PT by considering bare three-point one-particle-irreducible vertex
function
$\displaystyle\left.\Lambda_{S}(p_{1},p_{2})\right|_{sym}=\left.\langle\psi(-p_{2})O_{S}(q)\bar{\psi}(-p_{1})\rangle\right|_{p_{1}^{2}=p_{2}^{2}=q^{2}=-\mu^{2}},\quad
q=p_{1}+p_{2}$ (4)
in ${\rm SMOM}$ kinematics. We use Landau gauge and require that
$\displaystyle 1=Z_{m}^{{\rm SMOM}}\cdot Z_{\psi}^{{\rm
SMOM}}\cdot\frac{1}{12}\cdot\left.{\rm tr}\left[\Lambda^{\rm
bare}_{S}\right]\right|_{sym},\quad 1=Z_{\psi}^{{\rm
SMOM}}\cdot\frac{1}{12p^{2}}\left.\cdot{\rm tr}\left[iS_{\rm
bare}^{-1}(p)\hat{p}\right]\right|_{p^{2}=-\mu^{2}},$ (5)
where both $\Lambda_{S}^{\rm bare}$ and the bare quark inverse propagator
$S_{\rm bare}^{-1}$ are reexpanded in terms of ${\rm\overline{MS}}$ strong
coupling $\alpha_{s}^{{\rm\overline{MS}}}=(4\pi)a_{{\rm\overline{MS}}}$ via
the well-known formula $\mu^{-2\varepsilon}a_{\rm
bare}=Z_{a_{{\rm\overline{MS}}}}a_{{\rm\overline{MS}}}$ available with five-
loop accuracy Chetyrkin _et al._ (2017); Luthe _et al._ (2017). In Eq. (5)
the quark field renormalization constants are defined as
$\displaystyle\psi_{\rm bare}=\sqrt{Z_{\psi}^{\rm R}}\psi_{\rm R},\quad{\rm R}=\\{{\rm\overline{MS}},{\rm SMOM}\\}.$ (6)
(It is worth mentioning that, e.g., in Refs. Sturm _et al._ (2009); Almeida and
Sturm (2010); Lytle _et al._ (2018), a different notation can be adopted for
the renormalization constants, and one should make the substitutions
$Z_{\psi}\to Z_{\psi}^{-1}$ and $Z_{m}\to Z^{-1}_{m}$ to compare the results.)
The conditions (5) can be implemented in lattice computations, leading to a
nonperturbative determination Martinelli _et al._ (1995) of $Z_{m}^{{\rm
SMOM}}$. The latter converts the bare lattice mass into $m_{q}^{{\rm SMOM}}$,
providing input for $m_{q}^{{\rm\overline{MS}}}$ calculation via Eq. (1). The
${\rm\overline{MS}}$ counterparts $Z_{m}^{{\rm\overline{MS}}}$,
$Z_{\psi}^{{\rm\overline{MS}}}$ of the renormalization constants in Eq. (5)
required to compute $C_{m}^{{\rm SMOM}}$ are obtained by subtracting only
divergent terms of the corresponding Green functions.
A comment is in order regarding the determination of the wave function
renormalization constant $Z_{\psi}^{{\rm SMOM}}$. Due to Ward identities, the
latter can also be obtained from the (non)renormalization of vector (axial)
quark bilinear operators $O^{\mu}_{V}\equiv\bar{\psi}\gamma^{\mu}\psi$
($O^{\mu}_{A}\equiv\bar{\psi}\gamma^{\mu}\gamma_{5}\psi$). In the continuum, Ward
identities and chiral symmetry guarantee that $Z_{V}=Z_{A}=1$, and it can be
proven Sturm _et al._ (2009) that the condition on $Z_{\psi}^{{\rm SMOM}}$
given in Eq. (5) corresponds to
$\displaystyle 1=Z_{\psi}^{{\rm SMOM}}\cdot\frac{1}{12q^{2}}\cdot\left.{\rm
tr}\left[q_{\mu}\Lambda^{\mu,\rm bare}_{V}\hat{q}\right]\right|_{sym},\quad
1=Z_{\psi}^{{\rm SMOM}}\cdot\frac{1}{12q^{2}}\cdot\left.{\rm
tr}\left[q_{\mu}\Lambda^{\mu,\rm
bare}_{A}\gamma_{5}\hat{q}\right]\right|_{sym}$ (7)
with $\Lambda_{V}^{\mu}$ ($\Lambda_{A}^{\mu}$) being analogs of (4) with
$O_{S}$ replaced by $O_{V}^{\mu}$ ($O_{A}^{\mu}$). It is also possible to use
the so-called RI/SMOM${}_{\gamma_{\mu}}$ Sturm _et al._ (2009) and require
$\displaystyle 1=Z_{\psi}^{{\rm
SMOM}_{\gamma_{\mu}}}\cdot\frac{1}{48}\cdot\left.{\rm
tr}\left[\gamma_{\mu}\Lambda^{\mu,\rm bare}_{V}\right]\right|_{sym},\quad
1=Z_{\psi}^{{\rm SMOM}_{\gamma_{\mu}}}\cdot\frac{1}{48}\cdot\left.{\rm
tr}\left[\Lambda^{\mu,\rm
bare}_{A}\gamma_{5}\gamma_{\mu}\right]\right|_{sym}.$ (8)
Both RI/SMOM and RI/SMOM${}_{\gamma_{\mu}}$ conditions can be implemented on
the lattice (see, e.g., Refs. Blum _et al._ (2016); Aoki _et al._ (2008) for
details and subtleties). In Ref. Almeida and Sturm (2010) it was demonstrated
that the PT series for the quark-mass conversion factor exhibits slightly
better behavior in RI/SMOM than in RI/SMOM${}_{\gamma_{\mu}}$. Given this
argument, we carry out our calculation in RI/SMOM.
Let us mention a few technical details of our calculation. We generate Feynman
graphs with DIANA Tentyukov and Fleischer (2000) and take fermion and color
van Ritbergen _et al._ (1999) traces according to Eq. (5). Resulting scalar
integrals are reduced to the set of master integrals identified in our
previous paper Bednyakov and Pikelner (2020) on $\alpha_{s}$ renormalization
in the ${\rm SMOM}$ scheme. To perform the reduction we make use of the
FIRE6 Smirnov and Chuharev (2019) package. Substituting master integrals
evaluated previously, we end up with expressions valid for a general gauge
group. The number of master integrals and the necessary expansion depth in
the dimensional regularization parameter $\varepsilon=(4-d)/2$ are the same as in
the paper Bednyakov and Pikelner (2020). It is worth noting that, as a cross-
check of our calculation we also consider the renormalization of the
pseudoscalar quark current $O_{P}=\bar{\psi}\gamma_{5}\psi$, which can also be
used to extract $Z_{m}^{{\rm SMOM}}$ from lattice calculations.
## III Results and conclusion
Expressing all the renormalization constants in terms of
$a_{{\rm\overline{MS}}}$, from Eq. (1) we obtain the following N3LO conversion
factor
$\displaystyle C_{m}^{{\rm SMOM}}$
$\displaystyle=1+x_{1}a_{{\rm\overline{MS}}}+x_{2}a_{{\rm\overline{MS}}}^{2}+x_{3}a_{{\rm\overline{MS}}}^{3}$
(9)
with
$\displaystyle x_{1}=$
$\displaystyle{\color[rgb]{0.01171875,0.22265625,0.421875}C_{F}}\bigg{(}-4-\frac{2}{3}\pi^{2}+\psi_{1}\bigg{)}$
(10) $\displaystyle x_{2}=$
$\displaystyle{\color[rgb]{0.01171875,0.22265625,0.421875}n_{f}T_{F}C_{F}}\bigg{(}\frac{83}{6}+\frac{40}{27}\pi^{2}-\frac{20}{9}\psi_{1}\bigg{)}$
$\displaystyle+$
$\displaystyle{\color[rgb]{0.01171875,0.22265625,0.421875}C_{F}^{2}}\bigg{(}\frac{19}{8}+\frac{28}{9}\pi^{2}-\frac{14}{3}\psi_{1}+4\zeta_{3}+\frac{58}{81}\pi^{4}-\frac{52}{27}\psi_{1}\pi^{2}+\frac{13}{9}\psi_{1}^{2}-\frac{1}{36}\psi_{3}\bigg{)}$
$\displaystyle+$
$\displaystyle{\color[rgb]{0.01171875,0.22265625,0.421875}C_{A}C_{F}}\bigg{(}-\frac{1285}{24}-\frac{385}{54}\pi^{2}+\frac{385}{36}\psi_{1}+10\zeta_{3}-\frac{8}{81}\pi^{4}+\frac{8}{27}\pi^{2}\psi_{1}-\frac{2}{9}\psi_{1}^{2}\bigg{)}$
(11) $\displaystyle x_{3}=$
$\displaystyle{\color[rgb]{0.01171875,0.22265625,0.421875}n_{f}^{2}T_{F}^{2}C_{F}}\bigg{(}-\frac{7514}{243}-\frac{800}{243}\pi^{2}+\frac{400}{81}\psi_{1}-\frac{32}{9}\zeta_{3}-\frac{32}{243}\pi^{4}+\frac{4}{81}\psi_{3}\bigg{)}$
$\displaystyle+$
$\displaystyle{\color[rgb]{0.01171875,0.22265625,0.421875}n_{f}T_{F}C_{A}C_{F}}\bigg{(}\frac{95387}{243}+\frac{13172}{243}\pi^{2}-\frac{6586}{81}\psi_{1}-\frac{152}{9}\zeta_{3}+\frac{3952}{3645}\pi^{4}$
$\displaystyle-$
$\displaystyle\frac{320}{243}\psi_{1}\pi^{2}+\frac{80}{81}\psi_{1}^{2}-\frac{23}{162}\psi_{3}+\frac{320}{81}\pi^{2}\zeta_{3}+\frac{16240}{729}\zeta_{5}-\frac{160}{27}\psi_{1}\zeta_{3}+\frac{64}{81}H_{5}\bigg{)}$
$\displaystyle+$
$\displaystyle{\color[rgb]{0.01171875,0.22265625,0.421875}n_{f}T_{F}C_{F}^{2}}\bigg{(}\frac{1109}{9}-\frac{241}{81}\pi^{2}+\frac{241}{54}\psi_{1}-\frac{1384}{9}\zeta_{3}-\frac{15392}{3645}\pi^{4}+\frac{2080}{243}\psi_{1}\pi^{2}$
$\displaystyle-$
$\displaystyle\frac{520}{81}\psi_{1}^{2}+\frac{67}{162}\psi_{3}-\frac{128}{9}\pi^{2}\zeta_{3}-\frac{32480}{729}\zeta_{5}+\frac{64}{3}\psi_{1}\zeta_{3}-\frac{128}{81}H_{5}\bigg{)}$
$\displaystyle+$
$\displaystyle{\color[rgb]{0.01171875,0.22265625,0.421875}C_{F}^{3}}\bigg{(}-\frac{3227}{12}-\frac{191}{12}\pi^{2}+\frac{191}{8}\psi_{1}-58\zeta_{3}-\frac{992}{81}\pi^{4}+\frac{232}{9}\psi_{1}\pi^{2}-\frac{58}{3}\psi_{1}^{2}+\frac{37}{27}\psi_{3}$
$\displaystyle+$
$\displaystyle\frac{80}{9}\pi^{2}\zeta_{3}-\frac{32980}{81}\zeta_{5}-\frac{40}{3}\psi_{1}\zeta_{3}-\frac{112}{9}H_{5}-\frac{131776}{98415}\pi^{6}+\frac{23992}{6561}\psi_{1}\pi^{4}-\frac{394}{81}\psi_{1}^{2}\pi^{2}$
$\displaystyle+$
$\displaystyle\frac{679}{6561}\psi_{3}\pi^{2}-\frac{679}{4374}\psi_{1}\psi_{3}+\frac{197}{81}\psi_{1}^{3}+\frac{1}{135}\psi_{5}+\frac{2}{8505}H_{6}\bigg{)}$
$\displaystyle+$
$\displaystyle{\color[rgb]{0.01171875,0.22265625,0.421875}C_{A}C_{F}^{2}}\bigg{(}\frac{18781}{72}+\frac{23231}{324}\pi^{2}-\frac{23231}{216}\psi_{1}+\frac{2879}{9}\zeta_{3}+\frac{34423}{1458}\pi^{4}-\frac{11306}{243}\psi_{1}\pi^{2}$
$\displaystyle+$
$\displaystyle\frac{5653}{162}\psi_{1}^{2}-\frac{3937}{1296}\psi_{3}-\frac{178}{9}\pi^{2}\zeta_{3}+\frac{379285}{729}\zeta_{5}+\frac{89}{3}\psi_{1}\zeta_{3}+\frac{1840}{81}H_{5}-\frac{1519}{32805}\pi^{6}$
$\displaystyle+$
$\displaystyle\frac{4}{81}\psi_{1}\pi^{4}+\frac{4}{27}\psi_{1}^{2}\pi^{2}+\frac{1}{27}\psi_{3}\pi^{2}-\frac{1}{18}\psi_{1}\psi_{3}-\frac{2}{27}\psi_{1}^{3}-\frac{77}{116640}\psi_{5}\bigg{)}$
$\displaystyle+$
$\displaystyle{\color[rgb]{0.01171875,0.22265625,0.421875}C_{A}^{2}C_{F}}\bigg{(}-\frac{3360023}{3888}-\frac{243283}{1944}\pi^{2}+\frac{243283}{1296}\psi_{1}+\frac{4511}{24}\zeta_{3}-\frac{20513}{5832}\pi^{4}+\frac{5107}{972}\psi_{1}\pi^{2}$
$\displaystyle-$
$\displaystyle\frac{5107}{1296}\psi_{1}^{2}+\frac{3433}{5184}\psi_{3}+\frac{7535}{324}\pi^{2}\zeta_{3}-\frac{1140715}{5832}\zeta_{5}-\frac{7535}{216}\psi_{1}\zeta_{3}-\frac{668}{81}H_{5}+\frac{100133}{314928}\pi^{6}$
$\displaystyle-$
$\displaystyle\frac{10619}{13122}\psi_{1}\pi^{4}+\frac{325}{324}\psi_{1}^{2}\pi^{2}-\frac{461}{13122}\psi_{3}\pi^{2}+\frac{461}{8748}\psi_{1}\psi_{3}-\frac{325}{648}\psi_{1}^{3}-\frac{611}{373248}\psi_{5}-\frac{1}{17010}H_{6}\bigg{)}.$
(12)
Here $\zeta_{i}$ is the Riemann zeta function, and $\psi_{m}=\psi^{(m)}(1/3)$
corresponds to the $(m+1)$th derivative of the logarithm of the gamma function. Additional
constants of uniform transcendental weight $H_{5}$ and $H_{6}$, introduced in
Ref. Bednyakov and Pikelner (2020),
$H_{5}=-23.9316195698,\quad H_{6}=248215.038289$ (13)
are linear combinations of real parts of harmonic polylogarithms with sixth root
of unity argument from the basis constructed in Ref. Kniehl _et al._ (2017).
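As a quick numerical cross-check of these definitions, the one-loop coefficient $x_{1}$ of Eq. (10) can be evaluated directly; the short Python sketch below is our illustration (it assumes SciPy, whose `polygamma` provides $\psi^{(m)}$) and reproduces the leading coefficient quoted in Eq. (14):

```python
import math
from scipy.special import polygamma

CF = 4.0 / 3.0                          # SU(3) color Casimir C_F
psi1 = float(polygamma(1, 1.0 / 3.0))   # psi_1 = psi^(1)(1/3)

# One-loop coefficient of Eq. (10): x1 = C_F * (-4 - 2*pi^2/3 + psi_1)
x1 = CF * (-4.0 - 2.0 * math.pi**2 / 3.0 + psi1)
print(x1)  # -0.6455188..., matching the linear coefficient in Eq. (14)
```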
Our result reproduces the well-known analytic one-loop Sturm _et al._ (2009)
and two-loop Gorbahn and Jager (2010); Almeida and Sturm (2010) expressions,
together with the recent numerical evaluation of Ref. Kniehl and Veretin (2020):
$\displaystyle C_{m}^{{\rm SMOM}}=1$
$\displaystyle-0.6455188560a_{{\rm\overline{MS}}}-(22.60768757-4.013539470n_{f})a_{{\rm\overline{MS}}}^{2}$
$\displaystyle-(860.2874030-164.7423004n_{f}+2.184402262n_{f}^{2})a_{{\rm\overline{MS}}}^{3}.$
(14)
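For practical use, the truncated series (14) can be evaluated directly. The following minimal Python sketch (the function name and interface are ours, not from the paper) computes $C_{m}^{{\rm SMOM}}$ for given $\alpha_{s}^{{\rm\overline{MS}}}$ and $n_{f}$, using $a_{{\rm\overline{MS}}}=\alpha_{s}^{{\rm\overline{MS}}}/(4\pi)$:

```python
import math

def c_m_smom(alpha_s: float, nf: int) -> float:
    """RI/SMOM -> MSbar quark-mass conversion factor, Eq. (14), Landau gauge."""
    a = alpha_s / (4.0 * math.pi)   # a_MSbar = alpha_s / (4*pi)
    x1 = -0.6455188560
    x2 = -(22.60768757 - 4.013539470 * nf)
    x3 = -(860.2874030 - 164.7423004 * nf + 2.184402262 * nf**2)
    return 1.0 + x1 * a + x2 * a**2 + x3 * a**3

# Reference point used in Eq. (22) below: alpha_s(3 GeV) = 0.2545, nf = 4
print(c_m_smom(0.2545, 4))  # 0.982276...
```

The coefficients of the $\alpha_{s}$-expansions in Eqs. (15)-(21) follow from the same numbers after absorbing the corresponding powers of $4\pi$.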
Given this general result (14), we are ready to provide our numerical
estimates of the N3LO contribution for different $n_{f}$. Expanding the
matching factor in powers of
$\alpha_{s}\equiv\alpha_{s}^{{\rm\overline{MS}}}$, we obtain
$\displaystyle n_{f}=0:\quad$ $\displaystyle 1$
$\displaystyle-0.05136875839\alpha_{s}-0.1431648540\alpha_{s}^{2}$
$\displaystyle-0.4335248250\alpha_{s}^{3},$ (15) $\displaystyle n_{f}=1:\quad$
$\displaystyle 1$
$\displaystyle-0.05136875839\alpha_{s}-0.1177488184\alpha_{s}^{2}$
$\displaystyle-0.3516069867\alpha_{s}^{3},$ (16) $\displaystyle n_{f}=2:\quad$
$\displaystyle 1$
$\displaystyle-0.05136875839\alpha_{s}-0.09233278278\alpha_{s}^{2}$
$\displaystyle-0.2718907211\alpha_{s}^{3},$ (17) $\displaystyle n_{f}=3:\quad$
$\displaystyle 1$
$\displaystyle-0.05136875839\alpha_{s}-0.06691674717\alpha_{s}^{2}$
$\displaystyle-0.1943760281\alpha_{s}^{3},$ (18) $\displaystyle n_{f}=4:\quad$
$\displaystyle 1$
$\displaystyle-0.05136875839\alpha_{s}-0.04150071157\alpha_{s}^{2}$
$\displaystyle-0.1190629077\alpha_{s}^{3},$ (19) $\displaystyle n_{f}=5:\quad$
$\displaystyle 1$
$\displaystyle-0.05136875839\alpha_{s}-0.01608467597\alpha_{s}^{2}$
$\displaystyle-0.04595136006\alpha_{s}^{3},$ (20) $\displaystyle
n_{f}=6:\quad$ $\displaystyle 1$
$\displaystyle-0.05136875839\alpha_{s}+0.009331359638\alpha_{s}^{2}$
$\displaystyle+0.02495861498\alpha_{s}^{3}.$ (21)
Given the value $\alpha_{s}^{n_{f}=4}(3\,{\rm GeV})=0.2545$ used by the HPQCD
collaboration Lytle _et al._ (2018) in the determination of charm- and
strange-quark masses, we evaluate the matching factor at the reference scale
$\mu_{\rm ref}=3\,{\rm GeV}$:
$\displaystyle Z^{{\rm\overline{MS}}/{\rm SMOM}}_{m}\equiv C_{m}^{{\rm SMOM}}$
$\displaystyle=1-\underbrace{0.0130733}_{\alpha_{s}}-\underbrace{0.00268801}_{\alpha_{s}^{2}}-\underbrace{0.00196264}_{\alpha_{s}^{3}}=0.982276,\quad
n_{f}=4,~{}\mu=3\,{\rm GeV}.$ (22)
One can see that the three-loop contribution is of the same order as the two-
loop correction and is of the same size as the uncertainty $0.22\%$ quoted in
Ref. Lytle _et al._ (2018) and attributed to the missing N3LO term. The
comparison with the result given in Ref. Lytle _et al._ (2018) also shows
that the effect of the $\alpha_{s}^{3}$ term in Eq. (22) is four times larger
than the two-loop contribution due to massive charm quark in the sea and
becomes an order of magnitude larger if $\mu=5$ GeV is chosen.
It is also worth mentioning that the authors of Ref. Kniehl and Veretin (2020)
also consider vector and tensor quark bilinears. We apply the projector (8) to
the expression for the vector-operator $O_{V}$ matrix element given in Ref.
Kniehl and Veretin (2020), evaluate the quark wave function renormalization in
RI/SMOM${}_{\gamma_{\mu}}$, and obtain the following numeric result for the
corresponding matching factor:
$\displaystyle C_{m}^{{\rm SMOM}_{\gamma_{\mu}}}=1-$ $\displaystyle
1.978852189a_{{\rm\overline{MS}}}-(55.03243483-6.161687618n_{f})a_{{\rm\overline{MS}}}^{2}$
$\displaystyle-$
$\displaystyle(2086.34(14)-362.560(3)n_{f}+6.7220(1)n_{f}^{2})a_{{\rm\overline{MS}}}^{3}.$
(23)
While the two-loop contribution to Eq. (23) is known in analytic form Almeida
and Sturm (2010), the three-loop term is new and, to our knowledge, is not
presented in the literature. One can see that the numerical coefficients in
RI/SMOM${}_{\gamma_{\mu}}$ (23) are indeed larger than those in RI/SMOM (14),
and, e.g., at our reference scale $\mu_{\rm ref}$ we have
$\displaystyle C_{m}^{{\rm SMOM}_{\gamma_{\mu}}}$
$\displaystyle=1-\underbrace{0.04007663}_{\alpha_{s}}-\underbrace{0.012463065}_{\alpha_{s}^{2}}-\underbrace{0.006177}_{\alpha_{s}^{3}}=0.941283,\quad
n_{f}=4,~{}\mu=3\,{\rm GeV}.$ (24)
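The same evaluation carries over to the RI/SMOM${}_{\gamma_{\mu}}$ series (23); a self-contained sketch (again our illustration, not code from the paper) reproducing the decomposition in Eq. (24) reads:

```python
import math

def c_m_smom_gamma(alpha_s: float, nf: int) -> float:
    """RI/SMOM_gamma_mu -> MSbar conversion factor, numerical series (23)."""
    a = alpha_s / (4.0 * math.pi)
    x1 = -1.978852189
    x2 = -(55.03243483 - 6.161687618 * nf)
    x3 = -(2086.34 - 362.560 * nf + 6.7220 * nf**2)
    return 1.0 + x1 * a + x2 * a**2 + x3 * a**3

print(c_m_smom_gamma(0.2545, 4))  # 0.941283..., cf. Eq. (24)
```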
To conclude, we analytically calculate the three-loop correction to the
matching factor in the RI/SMOM scheme required to extract ${\rm\overline{MS}}$
quark masses from nonperturbative lattice computations Blum _et al._ (2016);
Lytle _et al._ (2018). Our numerical evaluation confirms the estimate of
$x_{3}$ given in Ref. Kniehl and Veretin (2020). In addition, we use the
results of Ref. Kniehl and Veretin (2020) to evaluate the three-loop
expression for the corresponding matching factor in
RI/SMOM${}_{\gamma_{\mu}}$. We believe that the obtained N3LO contribution to
$C_{m}^{{\rm SMOM}}$ will increase the precision of the resulting
${\rm\overline{MS}}$ quark masses and/or provide a more reliable estimate of
the uncertainties due to missing high-order terms.
###### Acknowledgements.
We would like to thank Christine Davies for the correspondence regarding Ref.
Lytle _et al._ (2018) and clarifying comments on the sea quark contribution.
The work of A.P. is supported by the Foundation for the Advancement of
Theoretical Physics and Mathematics “BASIS.” The work of A.B. is supported by
the Grant of the Russian Federation Government, Agreement No. 14.W03.31.0026
from 15.02.2018.
## References
* Tanabashi _et al._ (2018) M. Tanabashi _et al._ (Particle Data Group), “Review of particle physics,” Phys.Rev.D 98, 030001 (2018).
* Aoki _et al._ (2019) S. Aoki _et al._ (Flavour Lattice Averaging Group), “FLAG Review 2019,” (2019), arXiv:1902.08191 [hep-lat] .
* Sturm _et al._ (2009) C. Sturm, Y. Aoki, N. H. Christ, T. Izubuchi, C. T. C. Sachrajda, and A. Soni, “Renormalization of quark bilinear operators in a momentum-subtraction scheme with a nonexceptional subtraction point,” Phys. Rev. D80, 014501 (2009), arXiv:0901.2599 [hep-ph] .
* Martinelli _et al._ (1995) G. Martinelli, C. Pittori, Christopher T. Sachrajda, M. Testa, and A. Vladikas, “A General method for nonperturbative renormalization of lattice operators,” Nucl. Phys. B445, 81–108 (1995), arXiv:hep-lat/9411010 [hep-lat] .
* Aoki _et al._ (2008) Y. Aoki _et al._ , “Non-perturbative renormalization of quark bilinear operators and B(K) using domain wall fermions,” Phys. Rev. D78, 054510 (2008), arXiv:0712.1061 [hep-lat] .
* Lytle _et al._ (2018) A. T. Lytle, C. T. H. Davies, D. Hatton, G. P. Lepage, and C. Sturm (HPQCD), “Determination of quark masses from $\mathbf{n_{f}=4}$ lattice QCD and the RI-SMOM intermediate scheme,” Phys. Rev. D98, 014513 (2018), arXiv:1805.06225 [hep-lat] .
* Gorbahn and Jager (2010) Martin Gorbahn and Sebastian Jager, “Precise MS-bar light-quark masses from lattice QCD in the RI/SMOM scheme,” Phys. Rev. D82, 114001 (2010), arXiv:1004.3997 [hep-ph] .
* Almeida and Sturm (2010) Leandro G. Almeida and Christian Sturm, “Two-loop matching factors for light quark masses and three-loop mass anomalous dimensions in the RI/SMOM schemes,” Phys. Rev. D82, 054017 (2010), arXiv:1004.4613 [hep-ph] .
* Kniehl and Veretin (2020) Bernd A. Kniehl and Oleg L. Veretin, “Bilinear quark operators in the RI/SMOM scheme at three loops,” (2020), arXiv:2002.10894 [hep-ph] .
* Chetyrkin _et al._ (2017) K. G. Chetyrkin, G. Falcioni, F. Herzog, and J. A. M. Vermaseren, “Five-loop renormalisation of QCD in covariant gauges,” JHEP 10, 179 (2017), [Addendum: JHEP12,006(2017)], arXiv:1709.08541 [hep-ph] .
* Luthe _et al._ (2017) Thomas Luthe, Andreas Maier, Peter Marquard, and York Schroder, “The five-loop Beta function for a general gauge group and anomalous dimensions beyond Feynman gauge,” JHEP 10, 166 (2017), arXiv:1709.07718 [hep-ph] .
* Blum _et al._ (2016) T. Blum _et al._ (RBC, UKQCD), “Domain wall QCD with physical quark masses,” Phys. Rev. D93, 074505 (2016), arXiv:1411.7017 [hep-lat] .
* Tentyukov and Fleischer (2000) M. Tentyukov and J. Fleischer, “A Feynman diagram analyzer DIANA,” Comput. Phys. Commun. 132, 124–141 (2000), arXiv:hep-ph/9904258 [hep-ph] .
* van Ritbergen _et al._ (1999) T. van Ritbergen, A. N. Schellekens, and J. A. M. Vermaseren, “Group theory factors for Feynman diagrams,” Int. J. Mod. Phys. A14, 41–96 (1999), arXiv:hep-ph/9802376 [hep-ph] .
* Bednyakov and Pikelner (2020) Alexander Bednyakov and Andrey Pikelner, “Four-loop QCD MOM beta functions from the three-loop vertices at the symmetric point,” (2020), arXiv:2002.02875 [hep-ph] .
* Smirnov and Chuharev (2019) A. V. Smirnov and F. S. Chuharev, “FIRE6: Feynman Integral REduction with Modular Arithmetic,” (2019), 10.1016/j.cpc.2019.106877, arXiv:1901.07808 [hep-ph] .
* Kniehl _et al._ (2017) B. A. Kniehl, A. F. Pikelner, and O. L. Veretin, “Three-loop massive tadpoles and polylogarithms through weight six,” JHEP 08, 024 (2017), arXiv:1705.05136 [hep-ph] .
|
2024-09-04T02:54:55.536646 | 2020-02-27T16:06:58 | 2002.12812 | {
"authors": "Pedro Mec\\^e, Elena Gofas-Salas, Michel Paques, Kate Grieve, Serge\n Meimon",
"full_text_license": null,
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"provenance": "arxiv-papers-0000.json.gz:25943",
"submitter": "Pedro Mec\\^e",
"url": "https://arxiv.org/abs/2002.12812"
} | arxiv-papers | Optical Incoherence Tomography: a method to generate tomographic retinal
cross-sections with non-interferometric imaging systems
Pedro Mecê1,∗, Elena Gofas-Salas2, Michel Paques2,3, Kate Grieve2,3, and Serge
Meimon4
1Institut Langevin, ESPCI Paris, CNRS, PSL University, 1 rue Jussieu, 75005
Paris, France
2Quinze-Vingts National Eye Hospital, 28 Rue de Charenton, Paris, 75012,
France
3Institut de la Vision, Sorbonne Université, INSERM, CNRS, F-75012, Paris,
France
4DOTA, ONERA, Université Paris Saclay F-91123 Palaiseau, France
<EMAIL_ADDRESS>
###### Abstract
Optical tomographic cross-sectional images of biological samples were made
possible by interferometric imaging techniques such as Optical Coherence
Tomography (OCT) [1, 2, 3]. Owing to its unprecedented view of the sample, OCT
has become a gold standard, namely for human retinal imaging in the clinical
environment. In this Letter, we present Optical Incoherence Tomography (OIT):
a completely digital method extending the possibility to generate tomographic
retinal cross-sections to non-interferometric imaging systems such as en-face
AO-ophthalmoscopes [4, 5]. We demonstrate that OIT can be applied to different
imaging modalities using back-scattered and multiply-scattered light including
systems without inherent optical sectioning. We show that OIT can be further
used to guide focus position when the user is “blind” focusing, allowing
precise imaging of translucent retinal structures [6], the vascular plexuses
[7] and the retinal pigment epithelium (RPE) [8] using respectively split
detection, motion contrast, and autofluorescence techniques.
High-resolution in-vivo imaging of the human retina can be achieved using
Adaptive Optics (AO) ophthalmoscopes, such as Flood-Illumination
Ophthalmoscopes (FIO) [9] and Scanning Laser Ophthalmoscopes (SLO) [10], owing
to the capacity of AO to measure and correct for static and dynamic
monochromatic ocular aberrations in real-time [11, 12]. Such high-resolution
retinal images play an important role in early-stage retinal disease
diagnosis, monitoring the progression of retinal disease and the effect of new
therapeutic drugs [10, 13].
To explore the retinal volume using AO ophthalmoscopes, the control
of the imaging focal position becomes crucial. At present, the positioning of
the image focal plane is typically done empirically by visualizing the en-face
images displayed in real-time and judging if the retinal structure of interest
is sharp or not. This focus guidance approach seems sufficient when using
confocal AO-SLO to image hyperreflective retinal layers such as
photoreceptors, vasculature, and nerve fiber layer (NFL) [10], especially due
to the optical sectioning capability of such imaging systems. Nevertheless,
the same cannot be said when using nonconfocal imaging modalities such as AO-FIO
and AO-SLO split-detection [6], multi-offset [14], motion contrast [15], and
autofluorescence [8], since displayed images present a weak signal-to-noise
ratio (SNR) and a low contrast, and users are mostly “blind” focusing. As a
result, the acquisition of several image stacks around the retinal layer of
interest, i.e. for different focal planes, and the assessment of the image
quality after the acquisition, to select the best image stack, are mandatory,
time-consuming steps which are not always compatible with the clinical
environment. To avoid these drawbacks, a focus-guidance tool becomes
essential, especially to reveal hypo-reflective or transparent structures such
as cone photoreceptor inner segments (IS) [6], retinal ganglion cells [14],
perfusion in microvasculature [16, 15] or those masked by neighboring
structures of high reflectivity such as RPE lying beneath photoreceptors [8].
Here, we present OIT: a digital method that enables the generation of
tomographic retinal cross-sections in non-interferometric AO-ophthalmoscopes.
We apply OIT to different AO-ophthalmoscope modalities: AO-FIO, confocal AO-
SLO, split-detection AO-SLO, motion contrast AO-SLO. We demonstrate that most
of the retinal layers commonly resolved by OCT cross-sections can also be
resolved in OIT cross-sections. Finally, we use OIT to precisely guide focus
positioning, enabling imaging of all retinal vascular plexuses, photoreceptor
IS and RPE with ease when using respectively motion contrast, split detection,
and autofluorescence techniques.
The OIT procedure is composed of three main steps (see Methods Section for
further details): 1) acquisition of en-face retinal images from different
focal planes, forming a Z-stack; 2) filtering out the low-spatial frequency
content of each image using a high-pass filter; 3) Measurement of the image
sharpness through the computation of the image energy. A crucial step of the
OIT procedure is filtering out the low-spatial frequency content. Indeed,
since nonconfocal imaging systems do not present inherent optical sectioning,
i.e. the capacity to reject out-of-focus photons, the high-pass filter enables
one to take advantage of the fact that high-spatial frequency content is only present
for photons coming from the in-focus plane [17], creating an axial sectioning
effect. To demonstrate the axial sectioning capability of the OIT and the
impact of the choice of the cut-off frequency, we applied the OIT procedure to
a Z-stack obtained from a USAF target using the PARIS AO-FIO. Figures 1(a,b)
present the region of interest (ROI) of one en-face image and OIT cross-
sections for different normalized cut-off frequencies, where two behaviors can
be noticed.
On the one hand, by increasing the cut-off frequency, the axial sectioning
ability of OIT is enhanced. Figure 1(c) outlines this behavior by presenting
the axial sectioning as a function of the normalized cut-off frequency. Not
surprisingly, the sectioning ability is limited by the depth of field (DOF) at
the diffraction limit, as the latter can be defined as the distance from the
best focus where the high spatial frequency content starts to lose contrast,
and, consequently, the image sharpness decreases [18]. On the other hand, as
the cut-off frequency is increased, the OIT cross-section loses contrast and
SNR (Fig. 1(d)). This latter effect happens since the contrast is gradually
reduced towards zero at a point defined by the lateral resolution of the
optical imaging system [18]. We can expect to obtain sufficient OIT image
contrast (higher than 80%) for normalized cut-off frequencies ranging from 1%
to 25%. To distinguish different retinal layers, we decided to use a high-pass
filter with a normalized cut-off frequency of 20%, representing a good trade-
off between the sectioning ability ($1.2\times$ DOF) and the image contrast
(90%). With this cut-off frequency, for a 7-mm diameter pupil and a light
source of 850 nm, we expect to achieve an axial sectioning of
$25\ \mu m$. As the OIT cross-section is given as a
function of the focus position, one can precisely determine beforehand the
position of the imaging plane to extract a sharp USAF target image.
Figure 2(a,c) presents OIT tomographic retinal cross-sections acquired on a
healthy subject at 7° Nasal using the AO-FIO and AO-SLO, both in bright-field
modality. We compare both OIT cross-sections with an OCT image extracted at
the same retinal location with Spectralis OCT (Heidelberg Engineering,
Germany, Fig. 2(b)). Although OCT can achieve an enhanced axial resolution
compared to OIT (i.e. not limited by DOF but by the light source bandwidth),
most of the retinal layers commonly identified in OCT can be identified in
both AO-SLO and AO-FIO OIT cross-sections: 1) the NFL, which gets thicker as
eccentricity increases (blue line); 2) two intermediate layers, most probably
corresponding to the inner plexiform layer (IPL, red line) and outer plexiform
layer (OPL, yellow line); 3) inner/outer segment junction (IS/OS, green line)
and RPE (orange line).
The proposed retinal layer labeling can be further confirmed by looking at the
en-face images acquired when positioning the imaging focal plane at each of
these layers (Figs. 2(d,e)). The RPE labeling is confirmed when applying
autofluorescence with AO-SLO [8] at the given focus position (Figs. 2(f-i)).
Figure 2(j) presents the radial averaged power spectral density (PSD) for both
bright-field and autofluorescence AO-SLO acquired simultaneously, presenting
the typical spatial frequency of, respectively, the photoreceptor and RPE
mosaics. We measured a photoreceptor density of $18\,000\ cells/mm^{2}$ and an
RPE density of $5\,000\ cells/mm^{2}$, which is
consistent with previous studies for the given retinal eccentricity [8, 19].
The ROI where the OIT tomographic retinal cross-sections were generated,
comprising a retinal vessel, is indicated by the white dashed-rectangle in NFL
en-face images in Figs. 2(d,e). Through the OIT cross-sections, it is possible
to identify the vessel present in the ROI and the OCT (red arrows) and
conclude on its axial position, here at the superficial vascular plexus (SVP)
[20]. One main advantage of the OIT compared to OCT is its enhanced lateral
resolution, enabling visualization of some retinal features that cannot be
visualized in OCT, such as interconnecting capillaries linking different
capillary plexuses [7] (Supplementary Video 1).
Multiply scattered light (or nonconfocal) imaging modalities such as dark-
field, offset aperture and split detection, largely applied in AO-SLO [13],
and recently introduced for AO-FIO [21, 16], provide excellent contrast for
blood vessels and mural cells [15, 16] and translucent retinal structures [6,
14], all poorly or not visualized in back-scattered light imaging systems.
Because the coherent detection of OCT limits the use of multiply scattered
light, OCT is not able to generate split detection tomographic retinal cross-
sections. On the other hand, the OIT procedure can be applied to Z-stacks
acquired in multiply scattered light imaging modalities to generate split
detection tomographic retinal cross-sections, revealing a different cross-
sectional view of the retina. Figures 3(a,b) present a comparison between OIT
cross-sections generated through bright-field confocal AO-SLO and nonconfocal
split detection AO-SLO at 7° Nasal of a healthy subject. Colored arrows in OIT
cross-sections indicate the focus position where en-face images, presented in
Fig. 3(d,e), were acquired. White dashed rectangles indicate the ROI where OIT
cross-sections were generated. Full Z-stacks can be visualized in
Supplementary Video 2.
Three main differences can be noticed when producing split detection OIT
cross-sections compared to those generated from bright-field images. The first
difference concerns the NFL layer which seems to disappear in split detection
OIT, indicating, as previously stated, that NFL becomes mostly transparent in
multiply scattered light modalities [15]. Secondly, retinal layers in the
inner retina get brighter with split detection OIT. By producing the OIT
corresponding to a Z-stack of perfusion map images (Fig. 3(c)), we can deduce
that these layers correspond to vascular plexuses, of which we expect there to
be four at 7° Nasal [7, 20]. The focus position was adjusted using split
detection OIT to acquire perfusion map images of each of the four vascular
plexuses (Fig. 3(f)), named according to [7]: radial peripapillary capillary
plexus (RPCP), SVP, intermediate vascular plexus (IVP) and deep vascular
plexus (DVP). Owing to the precise location of focus position, one can
generate depth-color coded perfusion maps with ease, revealing the 3D
organization of the vascular network (Fig. 3(i)). Finally, another interesting
finding is a retinal layer, just above the IS/OS, that gets brighter with
split detection OIT. By using OIT to position the focal plane at this layer we
were able to precisely image photoreceptor IS (Figs. 3(g,h)) [6]. Figure 4 and
Supplementary Video 3 present the same comparison and results but at 7°
Temporal, where NFL is less dense and all three expected vascular plexuses
[20] and the photoreceptor IS layers are visible in split-detection OIT,
enabling focus-guidance and image acquisition of perfusion maps (for vascular
plexuses) and the IS.
Throughout this study, we showed the capacity of OIT to generate tomographic
retinal cross-section for various non-interferometric imaging modalities.
While this Letter discusses the application of OIT to retinal imaging, this
method can be applied to other samples (e.g. cornea [22] and skin [23]) and
for other high-resolution microscopic/imaging techniques that go beyond
biological samples. Since the focus position is known, OIT can be used as a
focus guidance tool to image a retinal layer of interest, even when the user
is “blind” focusing. We demonstrate this asset by extracting perfusion maps
from all vascular plexuses, photoreceptor IS and RPE images assisted by OIT
using respectively motion contrast, split detection, and autofluorescence
techniques. Moreover, the split detection cross-sectional view of the retina,
not possible with OCT, may be a valuable tool to understand the origin of
retinal features observed in multiply scattered light modalities [24]. Since
OIT is a completely digital method, it can be easily implemented in any
optical imaging system. Moreover, an OIT cross-section can be obtained in a
relatively short time, even for AO-SLO, as the ROI of en-face images from the
Z-stack is reduced in one direction (about $50\ \mu m$),
which can coincide with the slow galvanometer scanner axis. Thanks to its easy
implementation, OIT can be used during imaging sessions to help focus
guidance. One can start by acquiring a fly-through focus movie, generate the
patient OIT image, then use the cross-sectional view and focus information
to precisely position the imaging focal plane at the retinal layer of
interest, avoiding loss of time in acquiring images from different depths and
selecting the best image sequences afterward. Finally, laser photocoagulation
retinal surgery can also benefit from OIT, by precisely focusing the
therapeutic laser at the diseased tissue avoiding damaging neighboring healthy
tissues [25].
## Acknowledgements
This work was supported by Agence Nationale de la Recherche CLOVIS3D grant
(ANR-14-CE17-0011), and European Research Council HELMHOLTZ grant (#610110).
The authors want to thank Laurent Mugnier, Cyril Petit, Yann Lai-Tim and
Antoine Chen for fruitful discussions.
## Author contributions
P.M. wrote the software for data processing and generation of OIT cross-
sections, analyzed the results and drafted the manuscript. P.M., E.G. and K.G.
designed the experiment. P.M. and S.M. developed the presented method. P.M.,
E.G., K.G. and M.P. collected data. M.P., K.G. and S.M. provided overall
guidance to the project. All authors discussed the results, reviewed and
edited the manuscript.
## Competing interests
P.M. and S.M. are listed as inventors on a patent application (FR 1904271)
related to the work presented in this manuscript. All other authors have
nothing to disclose.
Figure 1: a. ROI where the OIT method was applied. b. Generated OIT cross-
sections for different normalized cut-off frequencies. c,d Influence of the
cut-off frequency choice on, respectively, the axial sectioning capacity
(given in terms of the DOF at the diffraction limit) and the contrast of the OIT cross-section.
Figure 2: a-c Tomographic retinal cross-sections generated by, respectively,
AO-SLO OIT, OCT and AO-FIO OIT for the same subject and retinal location,
where the main retinal layers can be identified. d-g En-face retinal images obtained when precisely positioning the focal plane at the layers labelled in a-c, guided by OIT images. At the RPE focal plane, the RPE signal is masked by the highly reflective photoreceptor signal due to poor axial resolution, hence the need for autofluorescence. h,i Fourier Transform of en-face zoomed images
(orange dashed-square) f,g at RPE focal plane. j. PSD radial average of h,i
outlining the spatial frequency of photoreceptor and RPE mosaic respectively.
White-dashed rectangle: ROI where OIT cross-sections were extracted. Red
arrows: vessel location. Scale bar: $50\mu m$.
Figure 3: a-c Tomographic retinal cross-sections generated by, respectively,
bright-field, split detection and motion contrast techniques in AO-SLO for the
same subject at 7° Nasal where the NFL is dense and four vascular plexuses can
be seen. d-h En-face retinal images obtained when precisely positioning the
focal plane at the layers labelled in a-c, with the help of OIT method. i
Composite perfusion map image, revealing the 3D organization of the retinal
vascular network. White-dashed rectangle: ROI where OIT cross-sections were
extracted. Scale bar: $100\mu m$.
Figure 4: a-c Tomographic retinal cross-sections generated by, respectively,
bright-field, split detection and motion contrast techniques in AO-SLO for the
same subject at 7° Temporal where the NFL is thinner and three vascular
plexuses can be seen. d-h En-face retinal images obtained when precisely
positioning the focal plane at the layers labelled in a-c, with the help of
OIT method. i Composite perfusion map image, revealing the 3D organization of
the retinal vascular network. White-dashed rectangle: ROI where OIT cross-
sections were extracted. Scale bar: $100\mu m$.
## Methods
### Optical Incoherence Tomography procedure
As mentioned in the Letter, the OIT procedure is composed of three steps:
Z-stack acquisition, image filtering and computation of the image energy.
#### Z-stack acquisition:
The acquisition of en-face images from different focal planes, forming a
Z-stack (or a fly-through movie), can be done in two different ways: step-
wise, by manually changing the focus position and acquiring enough images for
each imaging plane; or continuously, by operating a fly-through focus
(continuous change of focus position) during imaging acquisition. While the
OIT cross-section generated by the former will present a better SNR and
contrast, as multiple images for each focus position can be averaged, the
latter will be faster. Although in this Letter we only used the step-wise
acquisition, the fly-through focus acquisition might be sufficient depending
on the retinal structure of interest or when the SNR is high, as in the case,
for example, of confocal AO-SLO.
#### Image filtering:
After acquiring the Z-stack, each image has its low-spatial frequency content
filtered out. Here, we empirically chose a second-order Butterworth filter to avoid introducing high-spatial-frequency artifacts into the images.
#### Image energy computation:
To obtain OIT cross-sectional images, similar to an OCT "B-scan", each image is divided into an overlapping grid of $n\times m$ pixel ROIs, where each ROI is displaced from the previous by 1 pixel. To favor a trade-off between point-wise accuracy and smoothness, we empirically chose $m=4$ and $n=80$, which is equivalent to $3\,\mu m\times 60\,\mu m$. The image energy of each ROI is then
computed as follows:
$\sum_{i=1}^{m}\sum_{j=1}^{n}|\widetilde{I}(i,j)|^{2}$ (1)
where $\widetilde{I}$ is the Fourier Transform of the filtered ROI of $m\times
n$ dimensions. High energy values will be obtained in ROI presenting a
significant amount of high-spatial frequency content. The axial sectioning
capacity of OIT is limited by the DOF at the diffraction limit, which can be defined, according to [18], as:
$DOF=\frac{n\lambda}{NA^{2}}$ (2)
where $n$ is here the refractive index of the medium, $\lambda$ the imaging wavelength and $NA$ the numerical aperture.
Finally, to facilitate visualization of OIT cross-sections, we used bicubic
interpolation in the axial direction.
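For illustration, the following minimal Python sketch outlines the two digital steps above (Fourier-domain order 2 Butterworth high-pass filtering, then the ROI energy of Eq. (1)) for a registered Z-stack. Function names, the cut-off value and the windowing direction are ours and indicative only; this is a sketch, not the software used in this study.

```python
import numpy as np

def butterworth_highpass(shape, cutoff, order=2):
    """Order-2 Butterworth high-pass transfer function on the FFT grid.
    cutoff is a normalized spatial frequency (cycles/pixel)."""
    fy = np.fft.fftfreq(shape[0])[:, None]
    fx = np.fft.fftfreq(shape[1])[None, :]
    f = np.maximum(np.sqrt(fx**2 + fy**2), 1e-12)   # avoid division by zero at DC
    return 1.0 / (1.0 + (cutoff / f)**(2 * order))

def oit_cross_section(z_stack, cutoff=0.1, m=4, n=80):
    """One row of the OIT cross-section per focal plane of a registered Z-stack."""
    rows = []
    for img in z_stack:                              # one averaged en-face image per focus
        h = butterworth_highpass(img.shape, cutoff)
        filtered = np.real(np.fft.ifft2(np.fft.fft2(img) * h))
        # slide an m x n ROI by 1 pixel; by Parseval's theorem the Fourier-domain
        # energy of Eq. (1) equals the spatial-domain energy of the filtered ROI
        energies = [np.sum(filtered[i:i + m, :n]**2)
                    for i in range(img.shape[0] - m + 1)]
        rows.append(energies)
    return np.asarray(rows)                          # then bicubic-interpolate axially
```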
### Experimental set-up
The Z-stacks necessary to generate OIT cross-sections were obtained using the
PARIS AO-FIO and a modified version of the MAORI (multimodal adaptive optics
retinal imager) AO-SLO (Physical Sciences, Inc., Andover, MA, USA). Both
systems were described in detail elsewhere [9, 8]. The Z-stack from the PARIS
AO-FIO was obtained by translating the imaging camera parallel to the optical
axis with a constant step of 30 $\mu m$ in the retinal plane. AO-SLO Z-stacks
were obtained by adding constant defocus values to the deformable mirror
(equivalent to an axial displacement of 20 $\mu m$ of the retinal plane).
### Subjects
Image acquisition was performed on two healthy subjects aged 25 and 38.
Research procedures followed the tenets of the Declaration of Helsinki.
Informed consent was obtained from subjects after the nature and possible
outcomes of the study were explained. The study was authorized by the
appropriate ethics review boards (CPP and ANSM (IDRCB numbers: 2016-A00704-47
and 2019-A00942-55)). Before the acquisition, pupil dilation and accommodation
paralysis were induced by instilling one drop each of Tropicamide and Phenylephrine 10%, ensuring a constant pupil diameter and minimal interference from defocus dynamics due to natural accommodation during Z-stack acquisition [12, 26]. Subjects were seated in front of the system, stabilized with a chin and forehead rest, and asked to fixate on a target placed at an infinite focal conjugate.
### Imaging acquisition
For each focal plane, a total of 100 images were acquired. When using the AO-
SLO device, four sets of data were recorded simultaneously: bright-field,
split-detection, and autofluorescence imaging; and the deformable mirror focal
plane position. In this configuration, the AO-SLO light level was 1.65 mW. For
the PARIS AO-FIO, the total power entering the eye from the illumination
source and the wavefront sensor laser beacon were respectively 350 $\mu W$ and
1.8 $\mu W$. For both imaging systems, the light level was below the ocular
safety limits established by the ISO standards for group 1 devices.
### Imaging processing
Following the acquisition, to correct for fixational eye movements [27],
strip-based registration (only for the AO-SLO images) and normalized cross-
correlation registration (for both devices) were performed on each image
sequence for a given focal plane. After registration, we selected 20 out of
100 images presenting the best quality (computed through the image energy
[12]). The 20 selected images were then either averaged, or their temporal standard deviation was computed to extract perfusion maps, providing the final Z-stack used
to generate the OIT retinal cross-sections. Before performing the OIT
procedure, images composing the Z-stacks were registered pairwise.
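A minimal sketch of this selection-and-reduction step (assuming an already registered sequence; the quality metric follows the image-energy criterion of [12] and Eq. (1), and all names are illustrative):

```python
import numpy as np

def select_and_reduce(frames, k=20, perfusion=False):
    """From a registered image sequence, keep the k frames of highest image
    energy, then average them (structural image) or take their temporal
    standard deviation (motion-contrast perfusion map)."""
    frames = np.asarray(frames, dtype=float)   # shape (n_frames, H, W)
    # in practice the energy is computed on high-pass-filtered frames,
    # as in the OIT procedure above
    energy = [np.sum(np.abs(np.fft.fft2(f))**2) for f in frames]
    best = frames[np.argsort(energy)[-k:]]
    return np.std(best, axis=0) if perfusion else np.mean(best, axis=0)
```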
### Data availability
The study data are available from the corresponding author upon request.
### Code availability
The software to generate OIT cross-section is available upon request.
## References
* [1] D. Huang, E. A. Swanson, C. P. Lin, J. S. Schuman, W. G. Stinson, W. Chang, M. R. Hee, T. Flotte, K. Gregory, C. A. Puliafito, et al., “Optical coherence tomography,” Science, vol. 254, no. 5035, pp. 1178–1181, 1991.
* [2] P. Mecê, J. Scholler, K. Groux, and C. Boccara, “High-resolution in-vivo human retinal imaging using full-field oct with optical stabilization of axial motion,” Biomedical Optics Express, vol. 11, no. 1, pp. 492–504, 2020.
* [3] P. Mecê, K. Groux, J. Scholler, O. Thouvenin, M. Fink, K. Grieve, and C. Boccara, “Curved-full-field oct for high-resolution imaging of living human retina over a large field-of-view,” arXiv preprint arXiv:2001.06893, 2020.
* [4] J. Liang, D. R. Williams, and D. T. Miller, “Supernormal vision and high-resolution retinal imaging through adaptive optics,” JOSA A, vol. 14, no. 11, pp. 2884–2892, 1997.
* [5] A. Roorda, F. Romero-Borja, W. J. Donnelly III, H. Queener, T. J. Hebert, and M. C. Campbell, “Adaptive optics scanning laser ophthalmoscopy,” Optics express, vol. 10, no. 9, pp. 405–412, 2002.
* [6] D. Scoles, Y. N. Sulai, C. S. Langlo, G. A. Fishman, C. A. Curcio, J. Carroll, and A. Dubra, “In vivo imaging of human cone photoreceptor inner segments,” Investigative ophthalmology & visual science, vol. 55, no. 7, pp. 4244–4251, 2014.
* [7] J. Campbell, M. Zhang, T. Hwang, S. Bailey, D. Wilson, Y. Jia, and D. Huang, “Detailed vascular anatomy of the human retina by projection-resolved optical coherence tomography angiography,” Scientific reports, vol. 7, p. 42201, 2017.
* [8] K. Grieve, E. Gofas-Salas, R. D. Ferguson, J. A. Sahel, M. Paques, and E. A. Rossi, “In vivo near-infrared autofluorescence imaging of retinal pigment epithelial cells with 757 nm excitation,” Biomedical optics express, vol. 9, no. 12, pp. 5946–5961, 2018.
* [9] E. Gofas-Salas, P. Mecê, C. Petit, J. Jarosz, L. M. Mugnier, A. M. Bonnefois, K. Grieve, J. Sahel, M. Paques, and S. Meimon, “High loop rate adaptive optics flood illumination ophthalmoscope with structured illumination capability,” Applied Optics, vol. 57, no. 20, pp. 5635–5642, 2018.
* [10] A. Roorda and J. L. Duncan, “Adaptive optics ophthalmoscopy,” Annual review of vision science, vol. 1, pp. 19–50, 2015.
* [11] J. Jarosz, P. Mecê, J.-M. Conan, C. Petit, M. Paques, and S. Meimon, “High temporal resolution aberrometry in a 50-eye population and implications for adaptive optics error budget,” Biomedical optics express, vol. 8, no. 4, pp. 2088–2105, 2017.
* [12] P. Mecê, E. Gofas-Salas, C. Petit, F. Cassaing, J. Sahel, M. Paques, K. Grieve, and S. Meimon, “Higher adaptive optics loop rate enhances axial resolution in nonconfocal ophthalmoscopes,” Optics letters, vol. 44, no. 9, pp. 2208–2211, 2019.
* [13] S. A. Burns, A. E. Elsner, K. A. Sapoznik, R. L. Warner, and T. J. Gast, “Adaptive optics imaging of the human retina,” Progress in retinal and eye research, vol. 68, pp. 1–30, 2019.
* [14] E. A. Rossi, C. E. Granger, R. Sharma, Q. Yang, K. Saito, C. Schwarz, S. Walters, K. Nozato, J. Zhang, T. Kawakami, et al., “Imaging individual neurons in the retinal ganglion cell layer of the living eye,” Proceedings of the National Academy of Sciences, vol. 114, no. 3, pp. 586–591, 2017.
* [15] T. Y. Chui, D. A. VanNasdale, and S. A. Burns, “The use of forward scatter to improve retinal vascular imaging with an adaptive optics scanning laser ophthalmoscope,” Biomedical optics express, vol. 3, no. 10, pp. 2537–2549, 2012.
* [16] E. Gofas-Salas, P. Mecê, L. Mugnier, A. M. Bonnefois, C. Petit, K. Grieve, J. Sahel, M. Paques, and S. Meimon, “Near infrared adaptive optics flood illumination retinal angiography,” Biomedical optics express, vol. 10, no. 6, pp. 2730–2743, 2019.
* [17] D. Lim, K. K. Chu, and J. Mertz, “Wide-field fluorescence sectioning with hybrid speckle and uniform-illumination microscopy,” Optics letters, vol. 33, no. 16, pp. 1819–1821, 2008.
* [18] M. Born and E. Wolf, Principles of optics: electromagnetic theory of propagation, interference and diffraction of light. Elsevier, 2013.
* [19] R. F. Cooper, M. A. Wilk, S. Tarima, and J. Carroll, “Evaluating descriptive metrics of the human cone mosaic,” Investigative ophthalmology & visual science, vol. 57, no. 7, pp. 2992–3001, 2016.
* [20] C. Lavia, P. Mecê, M. Nassisi, S. Bonnin, J. Marie-Louise, A. Couturier, A. Erginay, R. Tadayoni, and A. Gaudric, “Retinal capillary plexus pattern and density from fovea to periphery measured in healthy eyes with swept-source optical coherence tomography angiography,” Scientific Reports, vol. 10, no. 1, pp. 1–11, 2020.
* [21] S. Meimon, E. G. Salas, P. Mecê, K. Grieve, J. A. Sahel, and M. Paques, “Manipulation of the illumination geometry on adaptive optics (ao) flood illumination ophthalmoscope (fio) for dark field imaging of the retina,” Investigative Ophthalmology & Visual Science, vol. 59, no. 9, pp. 4641–4641, 2018.
* [22] I. Jalbert, F. Stapleton, E. Papas, D. Sweeney, and M. Coroneo, “In vivo confocal microscopy of the human cornea,” British Journal of Ophthalmology, vol. 87, no. 2, pp. 225–236, 2003.
* [23] M. Rajadhyaksha, A. Marghoob, A. Rossi, A. C. Halpern, and K. S. Nehal, “Reflectance confocal microscopy of skin in vivo: From bench to bedside,” Lasers in surgery and medicine, vol. 49, no. 1, pp. 7–19, 2017.
* [24] A. Guevara-Torres, D. Williams, and J. Schallek, “Origin of cell contrast in offset aperture adaptive optics ophthalmoscopy,” Optics Letters, vol. 45, no. 4, pp. 840–843, 2020.
* [25] P. Mecê, C. Petit, E. G. Salas, L. Mugnier, K. Grieve, C. Chabrier, J. A. Sahel, M. Paques, and S. Meimon, “What can adaptive optics do for laser photocoagulation?,” Investigative Ophthalmology & Visual Science, vol. 59, no. 9, pp. 6194–6194, 2018.
* [26] P. Mecê, E. Gofas Salas, C. Petit, K. Grieve, C. Chabrier, M. Paques, and S. Meimon, “Visualizing and enhancing axial resolution in nonconfocal adaptive optics ophthalmoscopy,” Proc. SPIE, vol. 10858, 2019.
* [27] P. Mecê, J. Jarosz, J.-M. Conan, C. Petit, K. Grieve, M. Paques, and S. Meimon, “Fixational eye movement: a negligible source of dynamic aberration,” Biomedical optics express, vol. 9, no. 2, pp. 717–727, 2018.
## 1 Supplementary information
#### Video 1
Interconnecting capillaries are visible in OIT cross-section generated with
AO-FIO. Upper image: ROI used to compute the OIT cross-section. Lower image:
Corresponding OIT cross-section. Arrows highlight the three-dimensional
position of interconnecting capillaries. Blue arrow: capillaries connecting
the superficial vascular plexus (SVP) and the deep vascular plexus (DVP). Red
arrow: capillaries connecting the SVP and the intermediate vascular plexus
(IVP). Green arrow: capillaries connecting the SVP with the radial
peripapillary capillary plexus (RPCP). Retinal image acquired at 7° Nasal.
#### Video 2
The fly-through focus movies for bright-field AO-SLO and split-detection AO-
SLO modalities at 7° Nasal.
#### Video 3
The fly-through focus movies for bright-field AO-SLO and split-detection AO-
SLO modalities at 7° Temporal.
|
2024-09-04T02:54:55.546163 | 2020-02-27T15:31:55 | 2002.12813 | {
"authors": "Jack Morava",
"full_text_license": null,
"license": "Creative Commons Zero - Public Domain - https://creativecommons.org/publicdomain/zero/1.0/",
"provenance": "arxiv-papers-0000.json.gz:25944",
"submitter": "Jack Morava",
"url": "https://arxiv.org/abs/2002.12813"
} | arxiv-papers | # On the canonical formula of C Lévi-Strauss, II
Jack Morava, Department of Mathematics, The Johns Hopkins University, Baltimore, Maryland 21218 (Email<EMAIL_ADDRESS>)
(Date: Imbolc 2020)
§1 Introduction and Organization
> We venture a leap: we grant ab initio that there is ‘something there’ to be
> translated …
>
> George Steiner, quoted in [3] (p 19)
1.1 This is a sequel to, and elaboration of, earlier work [21, 22] on a
possible formal model for the canonical formula
${\sf CF}:F_{x}(a):F_{y}(b)\;\simeq\;F_{x}(b):F_{a^{-1}}(y)$
of C Lévi-Strauss, which he proposed [1, 14, 15, 16] (cf. the Appendix) as a tool
for the structural analysis of mythological systems; see [12] (p 562) for a
discussion of the hair-raising example
${\rm marriage}_{\rm solidarity}:{\rm rape}_{\rm hostility}\simeq{\rm
marriage}_{\rm hostility}:{\rm dissociation}_{\rm rape}\;.$
The model proposed here is based on the study of small finite groups, which
have proved useful in the classification of kinship systems, crystallography,
and other fields [8, 19, 25, 27]; a short account of some of the mathematics
involved is postponed till §3. For clarity, a sketch of the model, with
technical details backgrounded, is displayed immediately below. However,
because translation across the conceptual and cognitive gulfs separating
anthropology and mathematics raises significant questions, a short discussion
(§2) of models and metaphors precedes the (not actually very complicated)
verification of the details of the model, in §4.
1.2 A model
> The first formulation of the real situation would be to state that the
> speaking subject perceives neither idea $a$ nor form $A$, but only the
> relation $a/A$ …What he perceives is the relation between the two relations
> $a/AHZ$ and $abc/A$, or $b/ARS$ and $blr/B$, etc. This is what we term the
> LAST QUATERNION …
>
> F de Saussure, Writings…[26] (p 22), [16]
Proposition: To elements $\\{x,a,y,b\\}$, e.g. $\\{1,i,j,k\\}$ of the group
$Q$ of Lipschitz unit quaternions [3.2.1], the function
$x,a\mapsto\Phi_{x}(a)=2^{-1/2}(x-a)\in 2\cdot O$
assigns elements of the binary octahedral group [3.4.2] such that the anti-
automorphism $\lambda=*I\sigma^{2}$ [3.3iii] of that group transforms the
(noncommutative [3.2.2]) ratio $\\{\Phi_{x}(a):\Phi_{y}(b)\\}$ into
$\\{\Phi_{\lambda(x)}(\lambda(a)):\Phi_{\lambda(y)}(\lambda(b))\\}\;=\;\\{\Phi_{x}(b):\Phi_{a^{-1}}(y)\\}\;.$
Terms such as $x,a,y,b$ or $1,i,j,k$ will be referred to here as ‘values’,
while functions such as $F$ and $\Phi\in 2\cdot O$ of ordered pairs of values
will be called ‘valences’. [Citations such as [1.2] refer to sections of this
paper.]
§2 Wider Questions
> It takes a while to learn that things like tenses and articles …are not
> ‘understood’ in Burmese [as] something not uttered but implied; they just
> aren’t there…
>
> AL Becker, Beyond Translation [3] (p 8)
2.1 For the philologist Becker it is fundamental that languages (and cultures)
have ‘exuberances and deficiencies’ that can make translation problematic; in
the present case these issues are acute.
It seems generally agreed [21] that the ${\sf CF}$ is underdetermined. One of
the first things a mathematician notices (cf. the original statement, included
below as an appendix) is the absence of quantifiers, which usually convey
information about the circumstances under which a proposition holds; moreover,
the reversal of function and term [condition 2] is mysterious. One issue (the assumption that the element $a$ has a natural or canonical dual $a^{-1}$)
can perhaps be resolved by reading the ${\sf CF}$ as asserting, in the context
$\\{x,a,y,b\\}$, the existence of $a^{-1}$: for example, as in [12]. On the
other hand, an exuberance of the paraphrase above is a precise specification
of the binary octahedral group as a repository for the values of our analog
$\Phi$ of Lévi-Strauss’s $F$.
2.2 I make no claim about the uniqueness of the model proposed here: I hope
there are better ones. Perhaps this is the place to say that my concern with
the ${\sf CF}$ is not its validity, or ‘truth-value’; it is rather whether or
not it can be usefully be interpreted as a formal mathematical assertion. Its
interest as an empirical hypothesis, like Bohr’s model for the atom, seems
well-established. In my opinion, the key question is what ‘interpretation’, in this
context, could even mean.
2.3 An answer may lie in the ancient opposition, older than Zeno, between
continuous and discrete. An anthropologist, like William James’ infant [17]
(Ch 13) is ‘assailed by eyes, ears, nose, skin and entrails at once, feels it
all as one great blooming, buzzing confusion’; but ‘nowadays, fundamental
psychological changes occur [to the mathematician]. Instead of sets, clouds of
discrete elements, we envisage some sorts of vague spaces …mapped one to
another. If you want a discrete set, then you pass to the set of connected
components of spaces defined only up to homotopy’ [20] (p 1274), [17].
I believe these remarks of Manin point to an answer to this question, in terms
of cognitive condensation of chaotic clouds of experience into discrete,
classifiable conceptual entities, cf. [4].
§3 A short mathematical glossary
This section summarizes more than enough background from the theory of groups
for the purposes of this paper:
3.1.1 The objects of the category of groups (for example the integers
${\mathbb{Z}}=\\{\dots,-2,-1,0,1,2,\dots\\}$) consist of sets $G$ with an
associated multiplication operation
$\mu_{G}:G\times G\ni(g,h)\mapsto g\cdot h\in G$
and an identity element $1=1_{G}\in G$, subject to familiar rules of
associativity which I will omit. A homomorphism (or map, or morphism)
$\phi:G\to G^{\prime}$ between groups respects multiplication (i.e.
$\phi(g\cdot g^{\prime})=\phi(g)\cdot\phi(g^{\prime})$ etc.); a composition of
homomorphisms is again a homomorphism, thus defining a category. The set of
one-to-one self-maps of a set with $n$ elements, for example, defines the
symmetric group $\Sigma_{n}$.
A group $A$ is commutative if $a\cdot a^{\prime}=a^{\prime}\cdot a$ for any
$a,a^{\prime}\in A$; in such cases the multiplication operation is often
written additively, i.e. with $a+a^{\prime}$ instead of $a\cdot a^{\prime}$,
e.g. as in the additive group ${\mathbb{Z}}$ of integers. The order of a group
is the (not necessarily finite) number of its elements.
Example The set of isomorphisms $\alpha:G\to G$ with itself is similarly a
group ${\rm Aut}(G)$ (of automorphisms of $G$). Any element $h\in G$ defines
an inner automorphism
$\alpha_{h}:G\ni g\mapsto hgh^{-1}\in G$
of $G$; it is a group homomorphism since
$\alpha_{h}(g\cdot g^{\prime})=h(g\cdot g^{\prime})h^{-1}=hgh^{-1}\cdot
hg^{\prime}h^{-1}=\alpha_{h}(g)\cdot\alpha_{h}(g^{\prime})\;,$
and $h\mapsto\alpha_{h}:G\to{\rm Aut}(G)$ is a homomorphism since
$(\alpha_{h}\circ\alpha_{k})(g)=\alpha_{h}(kgk^{-1})=hkgk^{-1}h^{-1}=hk\cdot
g\cdot(hk)^{-1}=\alpha_{hk}(g)\;;$
much mathematics consists of shuffling parentheses. The subgroup ${\rm
In}(G)=\\{\alpha_{g}\in{\rm Aut}(G)\\}$ is normal in that
$(\forall\beta\in{\rm Aut}(G))\;\alpha\in{\rm
In}(G)\Rightarrow\beta\alpha\beta^{-1}\in{\rm In}(G)\;.$
In particular, if $G=A$ is commutative, then ${\rm In}(A)=\\{1\\}$ is the
trivial group.
If a subgroup $H$ of $G$ is normal, then the set $G/H=\\{gH\;|\;g\in G\\}$ (of
‘orbits’ of elements of $G$ under right multiplication by $H$) is again a
group.
3.1.2 A composition
$H\xrightarrow{\;\phi\;}G\xrightarrow{\;\psi\;}K$
of homomorphisms is exact if the image
${\rm im}\;\phi=\\{g\in G\>|\>\exists h\in H,\;\phi(h)=g\\}$
of $\phi$ equals the kernel
$\ker\;\psi=\\{g\in G\>|\>\psi(g)=1_{K}\\}$
of $\psi$; this implies in particular that the composition $\psi\circ\phi$ is
trivial (i.e. maps every element of $H$ to the identity element of $K$), but
is more restrictive. A sequence
$1\to H\xrightarrow{\;\phi\;}G\xrightarrow{\;\psi\;}K\to 1$
of groups and homomorphisms is exact if its consecutive two-term compositions
are exact; this implies that
$\bullet$ $\phi$ is one-to-one (or is a monomorphism, or has trivial kernel),
$\bullet$ $\psi$ is ‘onto’ (or surjective, or $K={\rm im}\;\psi$), and
$\bullet$ ${\rm im}\;H$ is normal in $G$, and $\psi$ factors through an
isomorphism $G/H\cong K$.
3.1.3 Such an exact sequence is said to split, if there is a homomorphism
$\rho:K\to G$ inverse to $\psi$ in the sense that the composition
$\psi\circ\rho$
$K\xrightarrow{\;\rho\;}G\xrightarrow{\;\psi\;}K$
is the identity map of $K$. In that case there is a unique homomorphism
${\varepsilon}:K\to{\rm Aut}(H)$
such that $G$ (the ‘semi-direct product’ $H\rtimes K$) is isomorphic to the group defined on the set product $H\times K$, with twisted multiplication
$(h_{0},k_{0})\cdot(h_{1},k_{1})=(h_{0}\cdot{\varepsilon}(k_{0})h_{1},k_{0}k_{1})\;;$
such a split sequence will usually be displayed below as
$1\to H\to G\cong H\rtimes K\to K\to 1\;.$
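As a concrete illustration, here is a minimal Python sketch (all names ours) of this twisted multiplication for the case $H=V$, $K=C_{3}$ appearing in 3.3 and 3.4.1 below, where $C_{3}$ acts on the Klein group $V$ by cyclically permuting $I\to J\to K\to I$; the twelve resulting pairs form a group isomorphic to $A_{4}$:

```python
V = ['1', 'I', 'J', 'K']

def v_mul(a, b):
    """Multiplication in the Klein four-group V."""
    if a == '1': return b
    if b == '1': return a
    if a == b: return '1'
    return next(c for c in 'IJK' if c not in (a, b))   # IJ = K, JK = I, KI = J

def eps(k, h):
    """Action of k in C3 = {0,1,2} on h in V: cycle I -> J -> K, fixing 1."""
    return h if h == '1' else 'IJK'[('IJK'.index(h) + k) % 3]

def sd_mul(x, y):
    """(h0, k0).(h1, k1) = (h0 . eps(k0)(h1), k0 + k1), the twisted product."""
    (h0, k0), (h1, k1) = x, y
    return (v_mul(h0, eps(k0, h1)), (k0 + k1) % 3)

# e.g. sd_mul(('I', 1), ('I', 1)) == ('K', 2), since I . sigma(I) = I . J = K
```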
3.2 Some topological groups and algebras
3.2.1 The quaternion group
$Q=\\{\pm 1,\pm i,\pm j,\pm k\\}$
of order eight is defined by three elements $i,j,k$ with multiplication
$i^{2}=j^{2}=k^{2}=-1$ and
$ij=k=-ji,\;jk=i=-kj,\;ki=j=-ik\;.$
The (noncommutative) division algebra
${\mathbb{H}}=\\{q=q_{0}+q_{1}i+q_{2}j+q_{3}k\>|\>q_{i}\in{\mathbb{R}}\\}\cong{\mathbb{R}}^{4}$
of Hamiltonian quaternions is the four-dimensional real vector space with
multiplication extended from $Q$; alternately, it is the two-dimensional
complex vector space
${\mathbb{H}}=\\{z_{0}+z_{1}j\>|\>z_{i}\in{\mathbb{C}}\\}\;,$
where $z_{0}=q_{0}+iq_{1},\;z_{1}=q_{2}+iq_{3}$. The quaternions thus extend
the field ${\mathbb{C}}$ of complex numbers much as ${\mathbb{C}}$ extends the
field ${\mathbb{R}}$ of real numbers.
The quaternion conjugate $q^{*}=q_{0}-q_{1}i-q_{2}j-q_{3}k$ to $q$ has
positive product
$q^{*}\cdot q=q\cdot q^{*}=|q|^{2}=\sum q_{i}^{2}>0$
with $q$ if $q\neq 0$, implying the existence of a multiplicative inverse
$q^{-1}=|q|^{-2}q^{*}$. This defines an isomorphism
${\mathbb{H}}^{\times}\ni q\mapsto(|q|^{-1}q,|q|)\in
S^{3}\times{\mathbb{R}}^{\times}_{+}$
making the three-dimensional sphere $S^{3}$ a group under multiplication. This
notation is nonstandard but convenient; note that $q^{**}=q$ and that
quaternion conjugation $*:q\mapsto q^{*}$ is an anti-homomorphism, i.e.
$(u\cdot v)^{*}=v^{*}\cdot u^{*}\;.$
[Similarly,
${\mathbb{R}}^{\times}\cong
S^{0}\times{\mathbb{R}}^{\times}_{+},\;S^{0}=\\{\pm 1\\},$
while
${\mathbb{C}}^{\times}\cong S^{1}\times{\mathbb{R}}^{\times}_{+}$
(where $S^{n}=\\{{\bf x}\in{\mathbb{R}}^{n+1}\>|\>|{\bf x}|^{2}=1\\}$ is the
$n$-dimensional sphere of radius one, e.g. the circle when $n=1$).]
3.2.2 The subalgebra (i.e. closed under addition and multiplication, but not
division) of Lipschitz quaternions in ${\mathbb{H}}$ is the set of $q$ with
integral coordinates ($q_{i}\in{\mathbb{Z}}$), while the subalgebra of Hurwitz
quaternions consists of elements $q$ with all coordinates either integral or
half-integral (i.e. such that each $q_{i}$ is half of an odd integer).
Finally, the subalgebra of Lipschitz (integral) quaternions has an additional
(commutative but nonassociative) Jordan algebra product
$u,v\mapsto\\{u,v\\}={\textstyle{\frac{1}{2}}}(u\cdot v+v\cdot
u)=\\{v,u\\}\;,$
e.g.
$\\{1,1\\}=1,\;\\{i,i\\}=\\{j,j\\}=\\{k,k\\}=-1,\;\\{i,j\\}=\\{j,k\\}=\\{k,i\\}=0\;.$
This allows us to define a (non-commutative) Jordan ratio
$\\{u:v\\}=\\{u,v^{*}\\}=|v|^{2}\\{u,v^{-1}\\}$
for $u,v\in{\mathbb{H}}$, distributive
$\\{u+u^{\prime}:v\\}=\\{u:v\\}+\\{u^{\prime}:v\\},\;\\{u:v+v^{\prime}\\}=\\{u:v\\}+\\{u:v^{\prime}\\}$
in both variables, satisfying
$\\{u:v\\}^{*}=\\{u^{*}:v^{*}\\}\;.$
Remark ${\mathbb{H}}$ can be regarded as a subalgebra of the $2\times 2$
complex matrices $M_{2}({\mathbb{C}})$, in such a way that the quaternion norm
$|q|^{2}$ equals the determinant of $q$, regarded as a matrix. This identifies
the 3-sphere $S^{3}$ with a subgroup of the Lie group ${\rm
Sl}_{2}({\mathbb{C}})$ of complex $2\times 2$ matrices with determinant one;
as such, it is the maximal compact (special unitary) subgroup ${\rm SU}(2)$ of
${\mathbb{H}}^{\times}$. The special orthogonal group ${\rm SO}(3)\cong{\rm
SU}(2)/\\{\pm 1\\}$ (of rotations in three dimensions) is a quotient of this
group, and the various ‘binary’ (tetrahedral, octahedral etc.) groups lift the
symmetry groups of the classical Platonic solids [12] to subgroups of the
three-sphere. See [2], and its comments, for some very pretty animations of a
certain ‘24-cell’ associated [10] (cf. note ar) to the octahedral and
tetrahedral groups. [The noncompact group $SL_{2}({\mathbb{C}})$ is similarly
a double cover of (the identity component of) the physicists’ Lorentz group.]
3.3 The subset $Q\subset{\mathbb{H}}^{\times}$ (of Lipschitz units) is a
finite subgroup of ${\rm SU}(2)$. Similarly, the subset $A_{24}$ (of Hurwitz units), with
$Q\subset A_{24}\subset{\rm SU}(2)\;,$
is the union of $Q$ with the set of sixteen elements of the form
${\textstyle{\frac{1}{2}}}[\pm 1\pm i\pm j\pm k]\;.$
It is also known [5] as the binary tetrahedral group $2\cdot T$.
Klein’s (commutative) ‘Vierergruppe’
$V=\\{1,I,J,K\\}$
with multiplication $I^{2}=J^{2}=K^{2}=1$ and $IJ=JI=K,\;JK=KJ=I,\;KI=IK=J$
can be regarded as the subgroup
$V={\rm In}(Q)\subset{\rm Aut}(Q)$
defined by $\alpha_{i}=I,\;\alpha_{j}=J,\;\alpha_{k}=K$.
Exercise:
i) $I$ sends $i$ to itself, and $j,k$ to $-j,-k$; similarly $J$ sends $j$ to
itself, while $i,k\mapsto-i,-k$, etc. The cyclic permutation
$(abc)=(a\to b\to c\to a)\in\Sigma_{3}$
of three things defines a homomorphism
$C_{3}\cong\\{1,\sigma,\sigma^{2}\\}\to{\rm Aut}(Q)$ sending $\sigma$ to
$(ijk)$.
ii) The map $C_{2}^{2}=C_{2}\times C_{2}\to V$ defined by $(1,0)\mapsto
I,\;(0,1)\mapsto J,\;(1,1)\mapsto K$ (and $1\mapsto(0,0)$) is a (nonunique)
isomorphism.
iii) For example, in $V\rtimes C_{3}=A_{4}$ (i.e. the alternating subgroup of
order 12 (see below) of $\Sigma_{4}\cong{\rm Aut}(Q)$) we have
$(I\sigma^{2})\cdot(I\sigma^{2})=I\sigma^{2}(I)\cdot\sigma^{4}=IK\sigma=J\sigma\;.$
The anti-automorphism $\lambda=*I\sigma^{2}$ of $Q$ satisfies
$\lambda(i)=k,\;\lambda(j)=-i,\;\lambda(k)=j,$
e.g. $\lambda(j)=(I\sigma^{2}(j))^{*}=(Ii)^{*}=i^{*}=-i$, cf. [22, §5].
3.4.1 Some useful small groups
order & name

$n$ : cyclic $C_{n}=\\{0,\dots,n-1\\},\;n\geq 1$, with
$1\to{\mathbb{Z}}\xrightarrow{\;n\;}{\mathbb{Z}}\to{\mathbb{Z}}/n{\mathbb{Z}}\cong C_{n}\to 1$

$n!$ : symmetric $\Sigma_{n},\;n\geq 1$

$4$ : Klein Vierergruppe $V=\\{1,I,J,K\\}\cong C_{2}\times C_{2}$ (non-uniquely)

$6$ : symmetric $\Sigma_{3}$, with
$1\to C_{3}\to\Sigma_{3}\cong C_{3}\rtimes C_{2}\to C_{2}\to 1$

$8$ : (Lipschitz) quaternion units $Q=\\{\pm 1,\pm i,\pm j,\pm k\\}$, with
$1\to C_{2}\to Q\to V\to 1$

$12$ : alternating or tetrahedral ($T=A_{4}$), with
$1\to V\to A_{4}\cong V\rtimes C_{3}\to C_{3}\to 1$ and $1\to V\to\Sigma_{4}\cong V\rtimes\Sigma_{3}\to\Sigma_{3}\to 1$

$24$ : $\Sigma_{4}$ as above; binary tetrahedral $2\cdot T$ = Hurwitz units $A_{24}$, with
$1\to C_{2}\to 2\cdot T=Q\rtimes C_{3}\to A_{4}=V\rtimes C_{3}\to 1$

$48$ : binary octahedral $2\cdot O$, with
$1\to C_{2}\to 2\cdot O\to\Sigma_{4}\to 1$
as well as
$1\to Q\to 2\cdot O\to\Sigma_{3}\to 1\;.$
3.4.2 The binary octahedral group $2\cdot O$, regarded as a subgroup of the
unit quaternions [2, 6], is the disjoint union of $A_{24}$ with the set of
twenty-four special elements
$q=2^{-1/2}[q_{0}+q_{1}i+q_{2}j+q_{3}k]$
in which exactly two of $q_{0},\dots,q_{3}$ are nonzero and equal $\pm 1$; it is in this set of special elements that $\Phi$ takes its values.
3.4.3 We have
${\rm Aut}(Q)\cong\Sigma_{4}\cong{\rm Aut}(2\cdot T)$
and
${\rm Aut}(2\cdot O)\cong C_{2}\times(2\cdot T\cong A_{24})\;.$
3.5 Some of the groups above can be presented in terms of matrices over finite
Galois fields ${\mathbb{F}}$: in particular, $\Sigma_{3}\cong{\rm
Sl}_{2}({\mathbb{F}}_{2})$ and $A_{24}\cong{\rm Sl}_{2}({\mathbb{F}}_{3})$.
Similarly, the binary icosahedral group (which plays no role in this paper) is
isomorphic to ${\rm Sl}_{2}({\mathbb{F}}_{5})$.
It is worth mentioning that the group of $2\times 2$ matrices with entries
from ${\mathbb{Z}}$ and determinant one is a quotient
$1\to{\mathbb{Z}}\xrightarrow{\;t\;}{\mathbb{B}}_{3}\to{\rm Sl}_{2}({\mathbb{Z}})\to 1$
of Artin’s three-strand braid group [7], which thus maps (by
${\mathbb{Z}}\to{\mathbb{Z}}/3{\mathbb{Z}}\cong{\mathbb{F}}_{3}$) to ${\rm
Aut}(2\cdot O)$.
[The set of braids on $n$ strands, imagined for example as displayed on a
loom, defines a group under ‘concatenation’:
Technically, an $n$-strand braid can be defined as a smooth path in the space
of configurations defined by $n$ distinct points in the plane, starting for
example at time $t=0$ at the integral points $(1,0),(2,0),\dots,(n,0)$ and
ending at time $t=1$ at the points $(1,1),(2,1),\dots,(n,1)$, though not
necessarily in that order. Such braids can be composed by concatenation (i.e.
glueing and rescaling), and define elements of $\Sigma_{n}$ (sending $k$ to
$l$ if the strand starting at $(k,0)$ ends at $(l,1)$). The braid group
${\mathbb{B}}_{n}$ is the set of such things under the equivalence relation
roughly described as straightening: thus for example any braid can be parsed
into a composition of elementary moves, in which one strand over- or under-
passes one of its nearest neighbors. For example, ${\mathbb{B}}_{3}$ can be
presented as the group with two generators $a,b$ satisfying the ‘braid
relation’ $aba=bab$, thus the map $t$ above sends 1 to the full twist
$(aba)^{2}=(ab)^{3}$.]
§4 A Calculation
4.1 Lévi-Strauss’s formula is expressed in terms of formal analogies, e.g.
$F_{x}(a):F_{y}(b)$, understood roughly as a ratio, in a sense going back to
Eudoxus; but noncommutative algebra distinguishes the left fraction
$a^{-1}b=a\backslash b$ from the right fraction $ba^{-1}=b/a$. The
noncommutative ratio [3.2.2] splits this difference. Thus if $x,a,y,b\in Q$ we
have
$\\{\Phi_{x}(a):\Phi_{y}(b)\\}={\textstyle{\frac{1}{2}}}\\{x-a:y-b\\}={\textstyle{\frac{1}{2}}}[(\\{x:y\\}+\\{a:b\\})-(\\{a:y\\}+\\{x:b\\})]\;,$
so for example if $x=1,\;a=i,\;y=j,\;b=k$ we have
$\\{\Phi_{x}(a):\Phi_{y}(b)\\}=\\{\Phi_{1}(i):\Phi_{j}(k)\\}={\textstyle{\frac{1}{2}}}\\{1-i:j-k\\}=$
${\textstyle{\frac{1}{2}}}[(\\{1:j\\}+\\{i:k\\})-(\\{i:j\\}+\\{1:k\\})]={\textstyle{\frac{1}{2}}}[(j+0)-(0+k)]={\textstyle{\frac{1}{2}}}(j-k)\;,$
while
$\\{\Phi_{x}(b):\Phi_{a^{-1}}(y)\\}={\textstyle{\frac{1}{2}}}\\{1-k:-i-j\\}=$
${\textstyle{\frac{1}{2}}}[(-\\{1:i\\}-\\{1:j\\})+(\\{k:i\\}+\\{k:j\\})]={\textstyle{\frac{1}{2}}}[-i-j+0]=-{\textstyle{\frac{1}{2}}}(i+j)\;.$
But now, applying the anti-automorphism $\lambda$ of [22] (§5), we have
$\lambda(j-k)=-(i+j)$
by 3.3iii above, and the proposition is verified. $\Box$
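This calculation can also be checked numerically. The following minimal Python sketch (all names ours) implements the quaternion product, the Jordan bracket $\\{u,v\\}$ and the anti-automorphism $\lambda$ in coordinates; here the ratio is taken with the plain bracket, and with the conjugate convention $\\{u,v^{*}\\}$ both sides change sign simultaneously, so the verification is unaffected:

```python
import numpy as np

# quaternions as arrays [q0, q1, q2, q3] ~ q0 + q1*i + q2*j + q3*k
def qmul(u, v):
    a, b, c, d = u
    e, f, g, h = v
    return np.array([a*e - b*f - c*g - d*h,
                     a*f + b*e + c*h - d*g,
                     a*g - b*h + c*e + d*f,
                     a*h + b*g - c*f + d*e])

def jordan(u, v):                                # {u, v} = (uv + vu)/2
    return (qmul(u, v) + qmul(v, u)) / 2

one, i, j, k = np.eye(4)
phi = lambda x, a: (x - a) / np.sqrt(2)          # Phi_x(a), a special element of 2.O

def lam(q):                                      # lambda: 1 -> 1, i -> k, j -> -i, k -> j
    q0, q1, q2, q3 = q
    return np.array([q0, -q2, q3, q1])

lhs = jordan(phi(one, i), phi(j, k))             # = (j - k)/2
rhs = jordan(phi(one, k), phi(-i, j))            # = -(i + j)/2
assert np.allclose(lam(lhs), rhs)                # lambda carries one ratio to the other
```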
Remarks
4.2.1 The automorphism group $\Sigma_{4}$ of $Q$ preserves the set of special
elements [3.4.2] of $2\cdot O$, i.e. the possible values of $\Phi$, as well as
their Jordan ratios; but it differs from the (inner) automorphism group
$A_{24}$ of $2\cdot O$. However, the term $x$ appears in the ${\sf CF}$ in the
same place on both sides of the equation, and can be interpreted as playing
the role of $1\in Q$, fixed by all automorphisms. The canonical formula
appears in variant forms in the literature, but (as far as I know) they all
feature a term in the same place on both sides of the equation, allowing the
variants to be reconciled by a cyclic permutation in
$\Sigma_{3}\subset\Sigma_{4}$, which lies in a quotient of ${\rm Aut}(2\cdot
O)$.
4.2.2 The classic work of Thom on singularity theory [24] has turned out (e.g.
under the influence of Arnol’d, McKay, and others) to have deep connections
with the theory of Platonic symmetry groups. The binary octahedral group, in
particular, seems to be related to a certain ‘symbolic umbilic’ singularity
[9, 11], a special case of Thom’s original classification.
4.2.3 As a closing remark: the model proposed here is in fact not that
complicated. To a mathematician, perhaps the most interesting implication is
its connection with the theory of braids, which is arguably related to the
processing of recursion, and to cognitive evolution.
Acknowledgements and thanks: I am deeply indebted to many people, especially John Baez, Ellen Contini-Morava, Fred Damon, John McKay, Tony Phillips, Emily Riehl, Dale Rolfsen, Lucien Scubla, and Roy Wagner, for conversations, insight, and advice, during the preparation of this paper. Its excesses and deficiencies, though, are my own responsibility.
Appendix From The structural study of myth [14], Journal of American Folklore
68 (1955):
7.30 Finally, when we have succeeded in organizing a whole series of variants
in a kind of permutation group, we are in a position to formulate the law of
that group. Although it is not possible at the present stage to come closer
than an approximate formulation which will certainly need to be made more
accurate in the future, it seems that every myth (considered as the collection
of all its variants) corresponds to a formula of the following type:
$F_{x}(a):F_{y}(b)\;\simeq\;F_{x}(b):F_{a^{-1}}(y)$
where, two terms being given as well as two functions of these terms, it is
stated that a relation of equivalence still exists between two situations when
terms and relations are inverted, under two conditions: 1. that one term be
replaced by its contrary; 2. that an inversion be made between the function
and the term value of the two elements.
## References
* [1] Kwame Anthony Appiah, https://www.nybooks.com/articles/2020/02/13/claude-levi-strauss-key-to-all-mythologies/
* [2] John Carlos Baez, https://johncarlosbaez.wordpress.com/2019/08/29/the-binary-octahedral-group/
* [3] AL Becker, Beyond Translation, U of Michigan Press (1998)
* [4] JL Borges, The analytical language of John Wilkins, in Otras Inquisiciones (1952) Buenos Aires: Sur.
* [5] https://en.wikipedia.org/wiki/Binary_tetrahedral_group
* [6] https://en.wikipedia.org/wiki/Binary_octahedral_group
* [7] https://en.wikipedia.org/wiki/Braid_group
* [8] https://en.wikipedia.org/wiki/Crystallographic_point_group
* [9] https://en.wikipedia.org/wiki/Du_Val_singularity
* [10] https://en.wikipedia.org/wiki/24-cell
* [11] P Dechant, From the trinity $(A3,B3,H3)$ to an ADE correspondence, Proc. R. Soc A 474 (2018), no. 2220, 20180034, https://arxiv.org/abs/1812.02804
* [12] A Doja, Politics of mass rapes in ethnic conflict …, Crime, Law and Social Change (2019) 71 : 541 – 580, https://link.springer.com/article/10.1007/s10611-018-9800-0
* [13] N Epa, N Ganter, Platonic and alternating 2-groups, Higher Structures 11 122 – 146 (2017), https://arxiv.org/abs/1605.09192
* [14] C Lévi-Strauss, The structural study of myth, Journal of American Folklore 68 no 270 (1955) 428–444
* [15] https://fr.wikipedia.org/wiki/Formule_canonique_du_mythe
* [16] https://ncatlab.org/nlab/show/canonical+formula+of+myth
* [17] W James, Principles of Psychology [1890]
* [18] A Joyal, Notes on quasicategories, http://www.math.uchicago.edu/~may/IMA/Joyal.pdf
* [19] https://ncatlab.org/nlab/show/kinship
* [20] Y Manin, We do not choose our profession …, AMS Notices 56 (2009) 1268 -– 1274 (p 1274)
* [21] P. Maranda, The double twist: from ethnography to morphodynamics, U of Toronto Press (2001)
* [22] J Morava, On the canonical formula of C Lévi-Strauss, https://arxiv.org/abs/math/0306174
* [23] —–, From Lévi-Strauss to chaos and complexity, in MS Mosko, FH Damon, On the order of chaos p 47 – 63, Berghan Books (2005)
* [24] J Petitot, A morphodynamical schematization of the canonical formula for myths, in [21], 267 – 311
* [25] AV Phillips, A non-commutative marriage system in the South Pacific, http://www.math.stonybrook.edu/~tony/whatsnew/oct09/vanuatu2.html
* [26] F de Saussure, Writings on General Linguistics, OUP 2006
* [27] A Weil, On the algebraic study of certain types of marriage laws, appendix (221–232) to C Lévi-Strauss, Elementary structures of kinship, Beacon (1969)
|
2024-09-04T02:54:55.556169 | 2020-02-27T15:32:08 | 2002.12814 | {
"authors": "Alexandru Gheorghiu, Matty J. Hoban",
"full_text_license": null,
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"provenance": "arxiv-papers-0000.json.gz:25945",
"submitter": "Alexandru Gheorghiu",
"url": "https://arxiv.org/abs/2002.12814"
} | arxiv-papers | # Estimating the entropy of shallow circuit outputs is hard
Alexandru Gheorghiu (Email<EMAIL_ADDRESS>); Matty J. Hoban (Email<EMAIL_ADDRESS>), Department of Computing, Goldsmiths, University of London
###### Abstract
Calculating relevant entropic quantities of probability distributions and
quantum states is a fundamental task in information processing. The decision
problem version of estimating the Shannon entropy is the _Entropy Difference_
problem (ED): given descriptions of two circuits, determine which circuit
produces more entropy in its output when acting on a uniformly random input.
The analogous problem with quantum circuits (QED) is to determine which
circuit produces the state with greater von Neumann entropy, when acting on a
fixed input state and after tracing out part of the output. Based on plausible
complexity-theoretic assumptions, both of these problems are believed to be
intractable for polynomial-time quantum computation.
In this paper, we investigate the hardness of these problems in the case where
the circuits under consideration have logarithmic (ED${}_{\textrm{log}}$ and
QED${}_{\textrm{log}}$) and constant depth (ED${}_{\textrm{O(1)}}$ and
QED${}_{\textrm{O(1)}}$), respectively. We first show that, relative to an
oracle, these problems cannot be as hard as their counterparts with
polynomial-size circuits. Furthermore, we show that if a certain type of
reduction from QED to QED${}_{\textrm{log}}$ exists, it implies that any
polynomial-time quantum computation can be performed in log depth. While this
suggests that having shallow circuits makes entropy estimation easier, we give
indication that the problem remains intractable for polynomial-time quantum
computation by proving a reduction from _Learning-With-Errors_
($\mathrm{LWE}$) to ED${}_{\textrm{O(1)}}$.
We then consider a potential application of our results to quantum gravity
research via the AdS/CFT correspondence. First, we introduce HQED, a
Hamiltonian version of QED, where one is given two local Hamiltonians and
asked to estimate the _entanglement entropy_ difference in their ground
states. We show that this problem is at least as hard as the circuit version
by providing a reduction from QED to HQED. With these results, we then discuss
a potential experiment that would make use of AdS/CFT in order to solve
$\mathrm{LWE}$ efficiently. We conjecture that unless the AdS/CFT bulk-to-boundary map is exponentially complex, this experiment would violate the
intractability assumption of $\mathrm{LWE}$.
## 1 Introduction
The entropy of a probability distribution or a quantum state is a useful
measure for characterising information content with numerous applications in
information theory. The Shannon and von Neumann entropies both appear in data
compression [Sch95] and asymptotic cryptography [DW05], and in the case of the
von Neumann entropy, entanglement theory [BBPS96]. Furthermore, this link to
entanglement theory has led to the use of the von Neumann entropy in condensed
matter theory [ECP10] and quantum gravity research [RT06].
Given its importance in physics and information theory, it is natural to ask
how difficult it is to estimate the entropy of a process. A natural way to
formalise this question is in terms of _sample complexity_ : this looks at how
many samples from an unknown probability distribution, or how many copies of
an unknown quantum state, are needed to estimate its entropy. Previous work
has considered various algorithms, both quantum and classical, for computing
the entropy of a probability distribution [DKRB02, WY16, JVHW17, VV11], as
well as entropies of quantum states [HGKM10, AISW19, SH19]. More recently, it
has been shown that for multiple entropy measures, computing the relevant
entropy is as hard as full state tomography [AISW19, SH19, OW15]. In other
words, the sample complexity would scale with the support of the probability
distribution, or the dimension of the Hilbert space for the quantum state,
respectively. To provide some intuition for why entropy estimation is
difficult, in Appendix A, we use the computational complexity tools of advice
to give a simple proof that no algorithm exists for efficiently estimating the
entropy of an a priori unknown quantum state. In particular, we show that if
the entropy of a (mixed) quantum state could be estimated within additive
error $\epsilon$ in time that scales polynomially in the number of qubits of
the state and in $\log(1/\epsilon)$, then such an algorithm could be leveraged
to solve _any_ problem. To be more precise, it would imply that polynomial-
time quantum computation with quantum advice could decide all languages,
which is known to be false.
Rather than considering sample complexity, an operational way of capturing the
complexity of entropy estimation is to start with descriptions of two random
processes and ask which process produces more entropy in its output. The
natural decision problem for this task was defined by Goldreich and Vadhan
[GV99] and is known as the _entropy difference_ (ED) problem: given two
classical circuits $C_{1}$ and $C_{2}$, let $C_{1}(x)$ and $C_{2}(x)$ define
the output distributions of the two circuits when acting on an $n$-bit string,
$x$, drawn from the uniform distribution over $\\{0,1\\}^{n}$; the problem is
to decide whether $C_{1}(x)$ has higher entropy than $C_{2}(x)$ or vice versa
(promised that one of these is the case and promised that the entropy
difference is at least $1/poly(n)$).
What can one say about the computational hardness of this problem? In [GV99]
it was shown that the problem is complete for the class $\sf SZK$. This class,
known as _statistical zero-knowledge_ , contains all decision problems for
which a computationally unbounded _prover_ can convince a polynomial-time
_verifier_ that there exists a solution, when one exists (and fail to do so
when a solution does not exist) without revealing anything about the solution
to the verifier333This last condition, known as the zero-knowledge condition,
is formally defined by saying that there exists a polynomial-sized circuit
called a _simulator_ that can approximately produce transcripts of the
protocol for accepting (or “yes”) instances.. Due to oracle separation results
[Aar02] and from the fact that certain cryptographic tasks (such as finding
collisions for a cryptographic hash function) are contained in $\sf SZK$, it
is believed that $\sf SZK$ contains problems that are intractable even for
polynomial-time quantum algorithms. Thus, the fact that ED is complete for
$\sf SZK$ tells us that we should expect a similar intractability for the
problem of distinguishing the entropies of general classical circuits.
Transitioning to the quantum setting, Ben-Aroya, Schwartz and Ta-Shma defined
the analogue of ED known as the _quantum entropy difference_ (QED) problem
[BASTS10]. In this case, the input circuits $C_{1}$ and $C_{2}$ are
polynomial-size quantum circuits acting on a fixed input state (say the state
$\ket{00...0}$), and a fraction of the output qubits are traced out. The
remaining qubits will generally be mixed quantum states. As in the classical
case, the question is which of the two mixed states (the one produced by
$C_{1}$ or the one produced by $C_{2}$) has higher entropy, subject to the
same promise as for ED (that there is a $1/poly(n)$ gap in the entropy
difference). Ben-Aroya et al showed that QED is complete for the class $\sf
QSZK$, the quantum counterpart of $\sf SZK$ of problems admitting a _quantum
statistical zero-knowledge_ proof protocol [BASTS10]. Assuming $\sf QSZK$
strictly contains $\sf SZK$, this would mean that QED is strictly harder than
ED.
For both ED and QED the circuits under consideration were assumed to be
polynomial in the size of their inputs. A natural follow-up question is: does
the hardness of these problems change if we reduce the depth of the circuits?
Specifically, what happens if the circuits have depth that is logarithmic or
even constant in the size of the input? Focusing specifically on the quantum
case, there are number of reasons why one would be interested in answering
these questions. From a complexity-theory perspective, it lets us compare and
contrast $\sf QSZK$ to other classes in the interactive proofs model, such as
$\sf QMA$ or $\sf QIP$. Both of these classes have natural complete problems
with the inputs being circuits and in both cases the problems remain complete
if the circuits are made to have logarithmic depth [Ros08, Ji17]. Furthermore,
from a more practical perspective, given that current and near-term quantum
computing devices are subject to noise and imperfections, it is expected that
the states produced in these experiments will be the result of circuits of low
depth. Estimating the entropy of these states would help in computing other
quantities of interest such as the amount of entanglement [BBPS96] or the
Gibbs free energy [GHR+16]. It is therefore important to know whether entropy
estimation can be performed efficiently for states produced by shallow
circuits.
Lastly, we are motivated to answer these questions by recent connections
between quantum gravity research and quantum information theory, in the form
of the AdS/CFT correspondence [Mal99]. Briefly, AdS/CFT is a correspondence
between a quantum gravity theory in a hyperbolic _bulk_ space-time known as
_Anti-de Sitter_ (AdS) space and a _conformal field theory_ (CFT) on the
_boundary_ of that space-time. The general idea is to compute physical
quantities of interest in the bulk quantum gravity theory by mapping them to
the boundary field theory. A surprising result to come out of this program is
a correspondence between bulk geometry and boundary entanglement known as the
_Ryu-Takayanagi formula_ [RT06]. It states that, to leading order, the area of
a bulk surface is equal to the entropy of the reduced state on the part of the
boundary that encloses that surface. Moreover, for certain families of
boundary field theories, it is conjectured that the underlying states can be
approximated by logarithmic depth tensor networks known as MERA (_multi-scale
entanglement renormalization ansatz_) [Swi12]. Thus, characterising the
complexity of entropy difference for shallow circuits could yield insight into
the complexity of distinguishing different bulk geometries in quantum gravity.
Motivated by all of these various aspects of entropy from shallow circuits, we
thus initiate a study of entropy distinguishability for shallow circuits and
discuss the potential implications of our results in physics. We show that
both the classical and quantum versions of entropy difference are hard
assuming that the Learning-With-Errors ($\mathrm{LWE}$) problem has no
efficient algorithm, even for a quantum computer. Since $\mathrm{LWE}$ serves
as a basis for various schemes of _post-quantum cryptography_ , our result
implies that entropy estimation for shallow circuits is intractable unless
these cryptographic schemes are unsecure. Remarkably, this result also holds
for constant-depth quantum and classical circuits. In contrast, we also show
that both versions of entropy distinguishability for shallow classical and
quantum circuits are unlikely to be $\sf SZK$-complete and $\sf QSZK$-complete
respectively. Therefore, the entropy difference problem for both classical and
quantum shallow circuits occupies an interesting intermediate complexity.
Finally, we consider a version of von Neumann entropy difference where
Hamiltonians are given as input, not circuits, and show that this problem is
at least as hard as the circuit-based version. This last result allows to
relate these computational complexity results to physical systems.
### 1.1 Main results
We start by defining EDδ (QEDδ) as the (quantum) entropy difference problem
where the circuits under consideration have depth $\delta(n)$, for some
monotonically increasing function $\delta:\mathbb{N}\to\mathbb{N}$. We will
denote ED${}_{\textrm{polylog}}$ (QED${}_{\textrm{polylog}}$) and
ED${}_{\textrm{O(1)}}$ (QED${}_{\textrm{O(1)}}$) to be the collection of all
problems EDδ (QEDδ), with $\delta(n)$ in $O(polylog(n))$ and $O(1)$,
respectively. Our first result gives an indication that these problems are
unlikely to be as hard as their poly-size counterparts. To show this, we will
consider $\sf SZK_{\delta}$ ($\sf QSZK_{\delta}$) to be the set of problems
that reduce to EDδ (QEDδ) under polynomial-time reductions444These correspond
to statistical zero-knowledge protocols in which the verifier and simulator
circuits have depth $\delta(n)$.. For $\delta(n)=polylog(n)$, we denote these
classes as $\sf SZK_{polylog}$ and $\sf QSZK_{polylog}$ respectively, and
prove the following:
###### Theorem 1.
There exists an oracle $\mathcal{O}$ such that
$\mathsf{SZK^{\mathcal{O}}_{polylog}}\neq\mathsf{SZK}^{\mathcal{O}}$ and
$\mathsf{QSZK^{\mathcal{O}}_{polylog}}\neq\mathsf{QSZK}^{\mathcal{O}}$.
In particular, this means that
$\mathsf{SZK^{\mathcal{O}}_{log}}\neq\mathsf{SZK}^{\mathcal{O}}$ and
$\mathsf{QSZK^{\mathcal{O}}_{log}}\neq\mathsf{QSZK}^{\mathcal{O}}$. The oracle
we use is the same as the one from the recent work of Chia, Chung and Lai
[CCL19], showing the separation
$\mathsf{BPP}^{\mathsf{QNC}^{\mathcal{O}}}\neq\mathsf{BQP}^{\mathcal{O}}$,
where $\mathsf{QNC}$ denotes the set of problems that can be solved by quantum
circuits of polylogarithmic depth. We conjecture that the oracle of Coudron
and Menda [CM19], showing the same separation, could also be used.
Our second result concerns a direct approach to showing that $\sf QSZK_{log}$ $=$ $\sf QSZK$, and why we believe this approach is unlikely to succeed. To explain this
approach, let us first discuss the class $\sf QIP$, of problems that can be
decided using an interactive proof system having a quantum verifier. A problem
that is complete for this class is the quantum circuit distinguishability
problem (QCD), in which one is given as input two quantum circuits and asked
to determine whether, when restricting to a subset of input and output qubits,
the corresponding channels are close or far in diamond distance. It was shown
by Rosgen that this problem remains $\sf QIP$-complete even when the circuits
under consideration are log-depth [Ros08]. This is achieved by constructing
log-depth circuits that check a “history state” of the original circuits,
which uses a different construction to the Feynman-Kitaev history state
[KSV02]. One can also show that any $\sf QIP$ protocol can be made to have a
log-depth verifier, using the same history state construction. In analogy to
Rosgen’s result, we can now suppose that any $\sf QSZK$ protocol can be made
into a $\sf QSZK_{log}$ protocol by having the prover send history states of
the computations that the verifier in the $\sf QSZK$ protocol would perform.
It is clear from the $\sf QIP$ result that making the verifier have
logarithmic depth in a $\sf QSZK$ protocol does not reduce the set of problems
that can be decided by such a protocol. The question is whether _in addition_
to having a log-depth verifier, the transcript of such a protocol (here we
mean the transcript of the protocol for “yes” instances of the problem) can be
produced by a log-depth circuit. We show that if this is possible, then since
$\sf BQP$ $\subseteq$ $\sf QSZK$ it would be possible to simulate any $\sf
BQP$ computation by “checking” this history state on a log-depth quantum
computer. This result can be stated as follows:
###### Theorem 2.
If there exists a polynomial-time reduction from a $\sf QSZK$ protocol with a
log-depth verifier to a $\sf QSZK_{log}$ protocol which preserves the
transcript of the $\sf QSZK$ protocol, then $\sf BQP=\sf BPP^{\sf QNC^{1}}$.
Based on these results, we conjecture that estimating the output entropy of
circuits having (poly)logarithmic or constant depth should be easier than for
circuits of larger depth. Despite this fact, our next result shows that even
for these shallow circuits, entropy difference is still intractable for
polynomial-time classical and quantum algorithms, assuming the quantum
intractability of the Learning-With-Errors ($\mathrm{LWE}$) problem:
###### Theorem 3.
$\mathrm{LWE}$ $\leq_{P}$ ED${}_{\textrm{O(1)}}$ $\leq_{P}$
QED${}_{\textrm{O(1)}}$.
$\mathrm{LWE}$, a problem defined by Regev in [Reg09], serves as the basis for
recently proposed cryptographic protocols that are believed to be post-quantum
secure. In other words, it is conjectured that no polynomial-time classical or
quantum algorithm is able to solve this problem. Recent results have leveraged
the versatility of this problem to achieve tasks such as verifiable delegation
of quantum computation [Mah18, GV19], certifiable randomness generation
[BCM+18], and self-testing of a single quantum device [MV20]. In all of these
works, the protocols were based on the use of _Extended Trapdoor Claw-Free
Functions_ (ETCF), introduced in [Mah18]. An ETCF family consists of a pair of
one-way functions, $(f,g)$, that are hard to invert assuming the hardness of
$\mathrm{LWE}$. Importantly, $f$ is a $1$-to-$1$ one-way function, whereas $g$
is a $2$-to-$1$ one-way function. An essential property of these functions is
that it should be hard (based on $\mathrm{LWE}$) to determine which is the
$1$-to-$1$ function and which is the $2$-to-$1$ function, given descriptions
of the functions. This property is known as _injective invariance_. Consider
what happens if we evaluate each of these functions on a string $x$ drawn
uniformly at random from $\\{0,1\\}^{n}$. Given that $f$ is a $1$-to-$1$
function, the distribution $f(x)$ is still the uniform distribution over
$n$-bit strings and will therefore have maximum entropy, $S(f)=n$. On the
other hand, since $g$ is a $2$-to-$1$ function, $g(x)$ will be the uniform distribution
on _half_ of the strings in $\\{0,1\\}^{n}$, thus having entropy $S(g)=n-1$.
This means that given descriptions of $f$ and $g$, if one can distinguish the
entropy of their outputs, one can also solve $\mathrm{LWE}$, which effectively
shows that $\mathrm{LWE}$ $\leq_{P}$ ED.
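To make this counting argument concrete, here is a minimal Python sketch; the
identity map and the halving map are deliberately simple hypothetical stand-ins
for a $1$-to-$1$ and a $2$-to-$1$ function (the real ETCF functions are of
course far more structured):

```python
import math
from collections import Counter

def shannon_entropy(outputs):
    """Shannon entropy (in bits) of the output distribution induced
    by a uniformly random input."""
    counts = Counter(outputs)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

n = 10
domain = range(2 ** n)

f = [x for x in domain]       # a 1-to-1 function (identity)
g = [x // 2 for x in domain]  # a 2-to-1 function (each image has 2 preimages)

print(shannon_entropy(f))     # n     = 10.0
print(shannon_entropy(g))     # n - 1 =  9.0
```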
While the above argument shows a reduction from $\mathrm{LWE}$ to ED, to
obtain the result of Theorem 3 one would need an ETCF pair of functions that
can be evaluated in constant depth. Such a construction is not known to exist
and so we adopt a different approach towards proving the result. We start by
showing that the ETCF functions defined in [Mah18] can be performed using log-
depth circuits, thus showing $\mathrm{LWE}$ $\leq_{P}$ ED${}_{\textrm{log}}$.
This follows from the fact that the specific ETCF functions we use involve
only matrix and vector multiplications, which can be parallelized and
performed in logarithmic depth (the functions also require the ability to
sample from a Gaussian distribution, but this can also be done in logarithmic
depth, following a preprocessing step that is done as part of the reduction
[Pei10]). To then go to constant-depth circuits, we use the result of
Applebaum, Ishai and Kushilevitz of compiling log-depth one-way functions to
constant-depth [AIK06]. The main idea is that, for a given function $f$, that
can be evaluated in log-depth, as well as an input string $x$, it is possible
to produce a _randomized encoding_ of $f(x)$ using a circuit of constant
depth. A randomized encoding is an encoded version of $f(x)$ from which $f(x)$
can be efficiently (and uniquely) reconstructed. The main difficulty of the
proof is to show that the constant-depth randomized encoders for $f$ and $g$
remain indistinguishable, based on $\mathrm{LWE}$. This then implies the
desired result, $\mathrm{LWE}$ $\leq_{P}$ ED${}_{\textrm{O(1)}}$. Since
QED${}_{\textrm{O(1)}}$ is simply the quantum generalization of
ED${}_{\textrm{O(1)}}$, the reduction ED${}_{\textrm{O(1)}}$ $\leq_{P}$
QED${}_{\textrm{O(1)}}$ is immediate.
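To illustrate why matrix and vector multiplications parallelize to logarithmic
depth, here is a toy Python sketch of an inner product computed by a balanced
tree of additions; this shows only the parallelization idea, not the
[AIK06] compiler or the actual circuits used in the proof:

```python
def tree_sum(values, q):
    """Sum a list modulo q with a balanced binary tree of additions.
    Returns (sum, depth): the depth grows as O(log len(values))."""
    depth = 0
    while len(values) > 1:
        values = [(values[i] + values[i + 1]) % q if i + 1 < len(values)
                  else values[i] for i in range(0, len(values), 2)]
        depth += 1
    return values[0], depth

def matvec_mod_q(A, x, q):
    """Each output coordinate is an independent inner product, so all of
    them can be computed in parallel at the same O(log m) addition depth."""
    results = [tree_sum([a * xi % q for a, xi in zip(row, x)], q) for row in A]
    return [r for r, _ in results], max(d for _, d in results)

A = [[1, 2, 3, 4], [5, 6, 7, 0]]
x = [1, 1, 2, 3]
y, depth = matvec_mod_q(A, x, 7)
print(y, depth)   # depth = ceil(log2(4)) = 2
```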
Having obtained this characterisation for the hardness of entropy difference
with shallow circuits, we now wish to investigate the potential application of
these results to physics. To make this link, we first define a Hamiltonian
version of the quantum entropy difference problem, which we denote HQED.
Informally, instead of being given as input two quantum circuits and being
asked to determine which produces a state of higher entropy, our input will
consist of the descriptions of two local Hamiltonians $H_{1}$ and $H_{2}$.
Upon tracing out a certain number of qubits from the ground states of these
Hamiltonians, the problem is to decide which of the two reduced states has
higher entropy (this is essentially the same as asking which of the two
ground states has higher entanglement entropy, for a given bipartition of the
qubits in the two states). Unsurprisingly, it can be shown that this problem
is at least at hard as the circuit version:
###### Theorem 4.
QED $\leq_{P}$ HQED.
The proof makes use of the Feynman-Kitaev history state construction to encode
the history state of a quantum circuit in the ground state of a Hamiltonian
[KSV02]. Directly using the history states of the circuits, $C_{1}$ and
$C_{2}$, from an instance of QED would not necessarily yield the desired
result since those states have small overlap with the output state of $C_{1}$
and $C_{2}$. To get around this issue, we make use of the padding construction
from [NVY18], and pad the circuits $C_{1}$ and $C_{2}$ with polynomially-many
identity gates. It can then be shown that the resulting history states for
these new circuits will have overlap $1-1/poly(n)$ with the output states of
$C_{1}$ and $C_{2}$. Correspondingly, up to inverse polynomial factors, the
entropy difference between these states and the original ones is preserved.
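For concreteness, the Feynman–Kitaev history state of a circuit
$C=U_{T}\cdots U_{1}$ on input $\ket{0...0}$ has the standard form
$\ket{\psi_{hist}}=\frac{1}{\sqrt{T+1}}\sum_{t=0}^{T}\ket{t}\otimes
U_{t}\cdots U_{1}\ket{0...0}.$
In a back-of-the-envelope version of the padding argument, appending $N$
identity gates extends the sum to $T+N$ time steps, and for every $t\geq T$
the work register holds the output state $C\ket{0...0}$; the weight of the
padded history state on those steps is
$\frac{N+1}{T+N+1}=1-\frac{T}{T+N+1}$, which is $1-1/poly(n)$ once
$N=poly(n)\gg T$.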
If we now denote as HQED${}_{\textrm{log}}$ and HQED${}_{\textrm{O(1)}}$ the
analogous problems in which the ground states can be approximated by circuits
of logarithmic and constant depth, respectively, it follows from the previous
results that QED${}_{\textrm{log}}$ $\leq_{P}$ HQED${}_{\textrm{log}}$ and
QED${}_{\textrm{O(1)}}$ $\leq_{P}$ HQED${}_{\textrm{O(1)}}$, and therefore
that $\mathrm{LWE}$ $\leq_{P}$ HQED${}_{\textrm{O(1)}}$ $\leq_{P}$
HQED${}_{\textrm{log}}$.
### 1.2 Discussion and open questions
In Theorem 1, as mentioned, the oracle used to separate
$\mathsf{SZK_{polylog}}$ and $\mathsf{SZK}$ ($\mathsf{QSZK_{polylog}}$ and
$\mathsf{QSZK}$) is taken from the work of Chia et al [CCL19]. Interestingly,
Chia et al, along with Coudron and Menda [CM19], speculate that their oracles
could be instantiated based on a cryptographic assumption, such as the
hardness of $\mathrm{LWE}$. We further conjecture that for our specific case
this should also be true. This would have the intriguing implication that the
intractability of $\mathrm{LWE}$ implies ED (QED) is “strictly harder” than
ED${}_{\textrm{log}}$ (QED${}_{\textrm{log}}$), while at the same time, from
Theorem 3, the intractability of $\mathrm{LWE}$ also implies
ED${}_{\textrm{O(1)}}$ (QED${}_{\textrm{O(1)}}$) is hard for polynomial-time
quantum algorithms.
A natural open problem raised by our work is to characterise more precisely
the complexity classes induced by the shallow-depth versions of entropy
difference ($\sf SZK_{log}$, $\sf SZK_{const}$, $\sf QSZK_{log}$, $\sf
QSZK_{const}$). We show that these correspond to zero-knowledge protocols in
which the verifier and simulator are bounded-depth circuits, but can one give
other characterisations of these classes? We have also seen that these classes
appear to be _strictly_ contained in their counterparts with general
polynomial-depth circuits. Thus, if estimating the entropy of shallow circuits
is easier, what is the runtime of the optimal quantum algorithm for deciding
QED (or ED) for log-depth and constant-depth circuits? It is also pertinent to
ask this question not just for worst-case instances of the problems, but also
for the average case. $\mathrm{LWE}$ is assumed to be hard on-average, and
thus we can use the reduction to argue that ED should be hard on average for a
particular range of parameters. Can we say anything more general? What about
the entropy of random quantum circuits? States produced by random circuits
will typically be highly entangled, and thus their subsystems will typically
have close to the maximum amount of entropy. Does this imply that all forms of
entropy calculation for random circuits are easy?
Another open problem has to do with the question of _entropy ratio_ rather
than entropy difference. As mentioned, the entropy difference between two
circuits can be amplified to constant or even polynomial in the input size, by
simply repeating the circuits in parallel. However, this would not affect the
ratio between the output entropies. Could our techniques be used to show that
even the entropy ratio is hard to estimate based on $\mathrm{LWE}$? We
conjecture that the answer is yes and that this could be achieved by extending
the ETCF family to include functions that are $2^{m}$-to-1, with
$m=poly(n+k)$. The same reductions could then be used as in the entropy
difference case.
In our reduction from $\mathrm{LWE}$ to ED${}_{\textrm{O(1)}}$ we used the
results of Applebaum et al in [AIK06] for compiling log-depth circuits to
constant depth. An appealing feature of their construction is that the
resulting circuits are “fairly local”, in that they have output locality 4. In
other words, each output bit depends on at most 4 input bits. They conjecture
that it may also be possible to achieve output locality 3. For the purpose of
constructing one-way functions, this would be optimal since it is impossible
to generate one-way functions having output locality 2. We also conjecture
that ED for circuits with output locality 3 is hard for quantum computers.
Could the compiling technique of Applebaum et al be used to base the hardness
of other natural problems in (quantum) information theory on the hardness of
$\mathrm{LWE}$? The fact that the entropy difference problem is complete for
$\sf SZK$ gives a connection to cryptography, but can $\mathrm{LWE}$ be used
to show that other natural problems are hard for a quantum computer? Going
further, is it possible to have some notion of fine-grained computational
complexity based on $\mathrm{LWE}$?
Let us also comment on the use of $\mathrm{LWE}$ and ETCF functions to derive
Theorem 3. $\mathrm{LWE}$ and lattice problems are leading candidates in the
development of post-quantum cryptographic protocols due to their conjectured
intractability even for quantum algorithms. Moreover, an appealing feature of
$\mathrm{LWE}$ for cryptographic purposes, is that one can reduce _worst-case_
instances of lattice problems to _average-case_ instances of $\mathrm{LWE}$
[Reg09, Pei16]. Combined with Theorem 3, this means that average-case
instances (relative to a certain distribution over input circuits) of
ED${}_{\textrm{O(1)}}$ will also be “at least as hard” as worst-case lattice
problems. Thus, using $\mathrm{LWE}$ gives strong evidence that
ED${}_{\textrm{O(1)}}$ is quantum intractable. ETCF functions then provide a
very natural instance of an entropy difference problem: determine whether a
given function is 1-to-1 or 2-to-1. Such functions differ by one bit in
entropy, under a uniform input, but by the injective invariance property it is
at least as hard as solving $\mathrm{LWE}$ to tell which is which. It is then
only necessary, for our results, to show that these functions can be evaluated
by circuits of constant depth. As mentioned, this can be achieved in a
relatively straightforward manner by first showing that the functions can be
evaluated in log depth and then using the compiling technique of Applebaum,
Ishai and Kushilevitz to achieve constant depth.
Given the relevance of the von Neumann entropy for quantum gravity research,
it is natural to ask what impact our results have for the AdS/CFT
correspondence. As previously mentioned, if states on the boundary are
described by log-depth tensor networks according to MERA, then computing the
entropy for these states should inform the bulk geometry contained within a
boundary. If computing the entropy of the boundary is difficult even for
quantum computers, then this poses a challenge to the quantum version of the
Extended Church–Turing thesis. That is, in some sense, quantum gravitational
systems cannot be simulated efficiently by a quantum computer. Bouland,
Fefferman and Vazirani have also explored this challenge to the thesis from
the perspective of the wormhole growth paradox using the tools of
computational complexity [BFV19]. In particular, they showed that one can
prepare computationally pseudorandom states on the boundary CFT. These are
efficiently preparable ensembles of quantum states that require exponential
complexity to distinguish from each other. Importantly, the states can have
different circuit complexities and under the AdS/CFT correspondence this would
correspond to different wormhole lengths. An observer in the bulk could, in
principle, determine the length of a wormhole, however the circuit complexity
of the boundary states should be computationally intractable to determine.
They conclude that either the map from bulk to boundary in the AdS/CFT duality
is exponentially complex, or a quantum computer cannot efficiently simulate
such a system.
With our final result in Theorem 4 as a basis, we can make tentative
connections to the AdS/CFT duality and reach a similar conclusion to the
Bouland et al result. If one can have ground states of differing entanglement
entropy, based on $\mathrm{LWE}$, for a CFT Hamiltonian then this will
correspond to differing bulk geometries. An observer in the bulk can, in
principle, efficiently differentiate between these geometries. Based on this,
we can devise an experiment that allows for the efficient estimation of the
ground state entanglement entropy of our Hamiltonian. We conjecture that
unless the AdS/CFT bulk to boundary map is exponentially complex, this
experiment would violate the intractability assumption of $\mathrm{LWE}$. We
leave it open whether one can more formally describe our experiment by giving
a Hamiltonian for the CFT whose grounds states encode the hard problem,
instead of just positing that the CFT is prepared in one of those states.
### Acknowledgements
The authors would like to thank Mehmet Burak Şahinoğlu, Thomas Vidick, Grant
Salton, Hrant Gharibyan and Junyu Liu for useful comments and discussions. AG
is supported by MURI Grant FA9550-18-1-0161 and the IQIM, an NSF Physics
Frontiers Center (NSF Grant PHY-1125565) with support of the Gordon and Betty
Moore Foundation (GBMF-12500028). MJH acknowledges the FQXi large grant The
Emergence of Agents from Causal Order.
## 2 Preliminaries
### 2.1 Notation
We write $\mathbb{N}$ for the set of natural numbers, $\mathbb{Z}$ for the set
of integers, $\mathbb{Z}_{q}$ for the set of integers modulo $q$, $\mathbb{R}$
for the set of real numbers, $\mathcal{H}$ for a finite-dimensional Hilbert
space, using indices $\mathcal{H}_{A}$, $\mathcal{H}_{B}$ to specify distinct
spaces. $\mathrm{L}(\mathcal{H})$ is the set of linear operators on
$\mathcal{H}$. We write $\mbox{\rm Tr}(\cdot)$ for the trace, and $\mbox{\rm
Tr}_{B}:\mathrm{L}(\mathcal{H}_{A}\otimes\mathcal{H}_{B})\to\mathrm{L}(\mathcal{H}_{A})$
for the partial trace. $\mathrm{Pos}(\mathcal{H})$ is the set of positive
semidefinite operators and
$\mathrm{D}(\mathcal{H})=\\{X\in\mathrm{Pos}(\mathcal{H}):\mbox{\rm
Tr}(X)=1\\}$ the set of density matrices (also called states).
Given $A,B\in\mathrm{L}(\mathcal{H})$, $\|A\|_{1}=\mbox{\rm
Tr}\sqrt{A^{\dagger}A}$ is the Schatten $1$-norm,
$TD(A,B)=\frac{1}{2}\|A-B\|_{1}$ the trace distance,
$F(A,B)=\left[Tr\sqrt{\sqrt{A}B\sqrt{A}}\right]^{2}$ the fidelity, and
$S(A)=-Tr(A\log A)$ the von Neumann entropy.
Probability distributions will generally be over $n$-bit binary strings and so
will be functions $\mathcal{D}:\\{0,1\\}^{n}\to[0,1]$ for which
$\mathcal{D}(x)\geq 0$, for all $x\in\\{0,1\\}^{n}$ and
$\sum_{x\in\\{0,1\\}^{n}}\mathcal{D}(x)=1$. The Shannon entropy of a
distribution is
$S(\mathcal{D})=-\sum_{x\in\\{0,1\\}^{n}}\mathcal{D}(x)\log(\mathcal{D}(x)).$
(1)
Here we are abusing notation since $S$ denotes both the Shannon entropy and
the von Neumann entropy, and it will be clear from the context which notion of
entropy is being used. We will also use $h:[0,1]\to[0,1]$ to denote the binary
entropy function
$h(\epsilon)=-\epsilon\log(\epsilon)-(1-\epsilon)\log(1-\epsilon)$.
We write $\textsc{Supp}(\mathcal{D})=\\{x\;|\;\mathcal{D}(x)>0\\}$ for the
support of the distribution $\mathcal{D}$. The Hellinger distance between two
distributions is:
$H^{2}(\mathcal{D}_{1},\mathcal{D}_{2})=1-\sum_{x}\sqrt{\mathcal{D}_{1}(x)\mathcal{D}_{2}(x)}.$
(2)
Note that:
$H^{2}(\mathcal{D}_{1},\mathcal{D}_{2})\leq||\mathcal{D}_{1}-\mathcal{D}_{2}||_{TV}=\frac{1}{2}\sum_{x}|\mathcal{D}_{1}(x)-\mathcal{D}_{2}(x)|\leq\sqrt{2H^{2}(\mathcal{D}_{1},\mathcal{D}_{2})},$ (3)
where $||\cdot||_{TV}$ is the total variation distance.
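A quick numerical sanity check of this chain of inequalities on a pair of toy
distributions (a minimal sketch; the two distributions are arbitrary choices):

```python
import math

def tv(p, q):
    """Total variation distance between two distributions on a common support."""
    return 0.5 * sum(abs(p[x] - q[x]) for x in p)

def hellinger_sq(p, q):
    """Squared Hellinger distance H^2 = 1 - sum_x sqrt(p(x) q(x))."""
    return 1.0 - sum(math.sqrt(p[x] * q[x]) for x in p)

p = {0: 0.5, 1: 0.5}
q = {0: 0.6, 1: 0.4}

h2 = hellinger_sq(p, q)
d = tv(p, q)
assert h2 <= d <= math.sqrt(2 * h2)   # the chain of inequalities above
print(h2, d, math.sqrt(2 * h2))       # ~0.005, 0.1, ~0.1005
```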
For a finite set $S$, we write $x\leftarrow_{U}S$ to mean that $x$ is drawn
uniformly at random from $S$. In general, $x\leftarrow_{\chi}S$ will mean that
$x$ was drawn according to the distribution $\chi:S\to[0,1]$ from $S$. We say
that a function $\mu:\mathbb{N}\to\mathbb{R}$ is negligible if it goes to $0$
faster than any inverse-polynomial function, i.e. for any polynomial
$p:\mathbb{N}\to\mathbb{R}$, $p(n)\mu(n)\to_{n\to\infty}0$.
### 2.2 Complexity theory
In this paper we reference the standard complexity classes describing
polynomial-time probabilistic and quantum computation, respectively, denoted
$\sf BPP$ and $\sf BQP$. In addition, we also utilize the complexity classes
for computations that can be performed by polynomial-size circuits having
polylogarithmic depth, $\sf NC$ for classical circuits and $\sf QNC$ for quantum circuits,
logarithmic depth $\sf NC^{1}$ and $\sf QNC^{1}$, as well as constant depth,
$\sf NC^{0}$ and $\sf QNC^{0}$, respectively. For formal definitions of these
classes, as well as others mentioned in this paper, we refer to the Complexity
Zoo [zoo]. Here, we provide the definition of $\sf QSZK$, as it is the focus
of our main results:
###### Definition 1 (Quantum Statistical Zero-Knowledge ($\sf QSZK$)).
A promise problem $(L_{yes},L_{no})$ belongs to $\sf QSZK$ if there exists a
uniform family of polynomial-size quantum circuits $V=\\{V_{n}\\}_{n>0}$,
known as a verifier such that the following conditions are satisfied:
* •
Completeness. For each $x\in L_{yes}$, there exists a prover $P(x)$ exchanging
polynomially-many quantum messages with $V(x)$ that makes $V(x)$ accept with
probability greater than $2/3$,
* •
Soundness. For each $x\in L_{no}$, any prover $P(x)$ exchanging polynomially-
many quantum messages with $V(x)$ will make $V(x)$ accept with probability at
most $1/3$,
* •
Zero-Knowledge. There exists a uniform family of polynomial-size quantum
circuits $S=\\{S_{n}\\}_{n>0}$, known as a simulator, as well as a negligible
function $\varepsilon:\mathbb{N}\rightarrow[0,1]$ such that for all $x\in
L_{yes}$, $S(x)$ produces a state that is $\varepsilon$-close in trace
distance to the transcript of $V(x)\leftrightarrow P(x)$.
As was shown by Ben-Aroya, Schwartz and Ta-Shma [BASTS10], the following
problem is $\sf QSZK$-complete under polynomial-time reductions:
###### Definition 2 (Quantum Entropy Difference (QED) [BASTS10]).
Let $C_{1}$ and $C_{2}$ be polynomial-size quantum circuits acting on $n+k$
qubits. Define
the following $n$-qubit mixed states:
$\rho_{1}=Tr_{k}(C_{1}\ket{00...0}\bra{00...0}C_{1}^{\dagger})\quad\quad\quad\quad\rho_{2}=Tr_{k}(C_{2}\ket{00...0}\bra{00...0}C_{2}^{\dagger})$
(4)
Given $n$, $k$ and descriptions of $C_{1}$ and $C_{2}$ as input, decide
whether:
$S(\rho_{1})\geq S(\rho_{2})+1/poly(n+k)$ (5)
or
$S(\rho_{2})\geq S(\rho_{1})+1/poly(n+k)$ (6)
promised that one of these is the case.
Note that the entropy difference can always be made constant by considering
_parallel_ repeated versions of $C_{1}$ and $C_{2}$ (hence the new circuits
have the same depths as the original ones). Specifically, if the entropy
difference for $C_{1}$ and $C_{2}$ is $D=1/poly(n+k)$, then the difference for
$C_{1}^{\otimes 1/D}$ and $C_{2}^{\otimes 1/D}$, acting on $(n+k)/D$ qubits and
when tracing out $k/D$ qubits, will be $O(1)$. For this reason, throughout this
paper we will consider the entropy difference problem (and all its variants)
with a constant entropy difference.
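A classical toy check of the additivity fact underlying this amplification
(the two distributions below are arbitrary choices; for circuits,
$\mathcal{D}^{\otimes r}$ corresponds to running $r$ parallel copies):

```python
import math

def entropy(dist):
    """Shannon entropy (in bits) of a distribution given as a dict."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def parallel(dist, r):
    """Output distribution of r independent copies (the r-fold tensor power)."""
    out = {(): 1.0}
    for _ in range(r):
        out = {prefix + (x,): p * px
               for prefix, p in out.items() for x, px in dist.items()}
    return out

d1 = {0: 0.5, 1: 0.5}   # entropy 1
d2 = {0: 0.9, 1: 0.1}   # entropy ~0.469, so the single-copy gap is ~0.531
r = 4
gap = entropy(parallel(d1, r)) - entropy(parallel(d2, r))
print(gap)               # r times the single-copy gap, ~2.124
```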
The classical/probabilistic analogues of the above definitions are represented
by the class $\sf SZK$ and the Entropy Difference (ED) problem. In analogy to
QED, ED is defined as the problem of distinguishing the output entropies of
classical circuits having uniformly random inputs. It was shown in [GV99] that
ED is $\sf SZK$-complete.
### 2.3 Learning-With-Errors and Extended Trapdoor Claw-Free Functions
In this section we give definitions for the Learning-With-Errors
($\mathrm{LWE}$) problem, introduced by Regev [Reg09], as well as Extended
Trapdoor Claw-Free Functions (ETCF). Most of this section is taken from
[BCM+18]. An informal description of $\mathrm{LWE}$ is that it is the problem
of solving an approximate system of linear equations. In other words, given a
matrix $A\in\mathbb{Z}_{q}^{n\times m}$, as well as a vector
$y=xA+e\;(mod\;q)$, with $y\in\mathbb{Z}_{q}^{m}$, $x\in\mathbb{Z}_{q}^{n}$ a
row vector and $e\leftarrow_{\chi^{m}}\mathbb{Z}^{m}$ (here $\chi$ denotes a
probability distribution that is described below), the problem is to determine $x$.
More formally, let us start by defining the _truncated Gaussian distribution_.
For a positive real $B$ and positive integer $q$, the truncated discrete
Gaussian distribution over $\mathbb{Z}_{q}$ with parameter $B$ is supported on
$\\{x\in\mathbb{Z}_{q}:\,\|x\|\leq B\\}$ and has density
$D_{\mathbb{Z}_{q},B}(x)\,=\,\frac{e^{\frac{-\pi\lVert
x\rVert^{2}}{B^{2}}}}{\sum\limits_{x\in\mathbb{Z}_{q},\,\|x\|\leq
B}e^{\frac{-\pi\lVert x\rVert^{2}}{B^{2}}}}\;.$ (7)
For a positive integer $m$, the truncated discrete Gaussian distribution over
$\mathbb{Z}_{q}^{m}$ with parameter $B$ is supported on
$\\{x\in\mathbb{Z}_{q}^{m}:\,\|x\|\leq B\sqrt{m}\\}$ and has density
$\forall x=(x_{1},\ldots,x_{m})\in\mathbb{Z}_{q}^{m}\;,\qquad
D_{\mathbb{Z}_{q}^{m},B}(x)\,=\,D_{\mathbb{Z}_{q},B}(x_{1})\cdots
D_{\mathbb{Z}_{q},B}(x_{m})\;.$ (8)
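A direct (and deliberately naive) Python sketch of this density, enumerating
the support and normalizing; the parameters $q=97$ and $B=4$ are arbitrary toy
choices, with elements of $\mathbb{Z}_{q}$ represented by their centered
representatives:

```python
import math, random

def truncated_gaussian_density(q, B):
    """Density of D_{Z_q, B} from Eq. (7): supported on ||x|| <= B, where
    x in Z_q is identified with its representative in (-q/2, q/2]."""
    support = [x for x in range(-(q // 2), q // 2 + 1) if abs(x) <= B]
    weights = [math.exp(-math.pi * x * x / B ** 2) for x in support]
    Z = sum(weights)
    return {x % q: w / Z for x, w in zip(support, weights)}

def sample(density):
    """Draw one element of Z_q according to the given density."""
    return random.choices(list(density), weights=list(density.values()))[0]

D = truncated_gaussian_density(q=97, B=4)
e = [sample(D) for _ in range(8)]   # a noise vector e drawn from chi^m, m = 8
print(e)
```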
###### Definition 3 (Learning-With-Errors [BCM+18]).
For a security parameter $\lambda$, let $n,m,q\in\mathbb{N}$ be integer
functions of $\lambda$. Let $\chi=\chi(\lambda)$ be a distribution over
$\mathbb{Z}$. The $LWE_{n,m,q,\chi}$ problem is to distinguish between the
distributions $(A,sA+e\pmod{q})$ and $(A,u)$, where $A$
is uniformly random in $\mathbb{Z}_{q}^{n\times m}$, $s$ is a uniformly
random row vector in $\mathbb{Z}_{q}^{n}$, $e$ is a row vector drawn at
random from the distribution $\chi^{m}$, and $u$ is a uniformly random
vector in $\mathbb{Z}_{q}^{m}$. Often we consider the hardness of solving
$\mathrm{LWE}$ for any function $m$ such that $m$ is at most a polynomial in
$n\log q$. This problem is denoted $LWE_{n,q,\chi}$.
As shown in [Reg09, PRSD17], for any $\alpha>0$ such that $\sigma=\alpha q\geq
2\sqrt{n}$ the $LWE_{n,q,D_{\mathbb{Z}_{q},\sigma}}$ problem, where
$D_{\mathbb{Z}_{q},\sigma}$ is the discrete Gaussian distribution, is at least
as hard (under a quantum poly-time reduction) as approximating the shortest
independent vector problem (SIVP) to within a factor of
$\gamma=\tilde{O}({n}/\alpha)$ in _worst case_ dimension $n$ lattices.
The so-called “$\mathrm{LWE}$ assumption” is that no quantum polynomial-time
procedure can solve the $LWE_{n,q,\chi}$ problem with more than a negligible
advantage in $\lambda$, even when given access to a quantum polynomial-size
state depending on the parameters $n,m,q$ and $\chi$ of the problem.
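The two distributions in Definition 3 are easy to write down explicitly. The
following minimal Python sketch produces one sample from each; the bounded
uniform noise is a crude stand-in for the truncated Gaussian $\chi$, and the
parameter values are hypothetical toy choices:

```python
import random

def lwe_vs_uniform(n, m, q, B):
    """One sample from each distribution in Definition 3:
    (A, s*A + e mod q) and (A, u).  The noise e is a crude bounded-uniform
    stand-in for the truncated Gaussian chi = D_{Z_q, B}."""
    A = [[random.randrange(q) for _ in range(m)] for _ in range(n)]
    s = [random.randrange(q) for _ in range(n)]          # secret row vector
    e = [random.randint(-B, B) % q for _ in range(m)]    # small noise
    y = [(sum(s[i] * A[i][j] for i in range(n)) + e[j]) % q for j in range(m)]
    u = [random.randrange(q) for _ in range(m)]          # uniform alternative
    return (A, y), (A, u)

lwe_pair, uniform_pair = lwe_vs_uniform(n=4, m=8, q=97, B=2)
```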
For the reduction from $\mathrm{LWE}$ to QED${}_{\textrm{log}}$ we make use of
“extended trapdoor claw-free functions (ETCF),” introduced in [Mah18].
Briefly, an ETCF consists of two families of functions, denoted $\mathcal{F}$
and $\mathcal{G}$. The first family, $\mathcal{F}$, is referred to as a
“trapdoor injective family” and consists of a pair of functions
$(f_{k,0},f_{k,1})$ that have disjoint ranges. The second, $\mathcal{G}$, is
referred to as a “noisy trapdoor claw-free family (NTCF)”, introduced in
[BCM+18], and consists of a pair of functions $(g_{k,0},g_{k,1})$ that have
overlapping ranges (the specific functions we consider will be functions that
output probability distributions; the NTCF function pair will produce
distributions with overlapping supports), essentially acting as a one-way
$2$-to-$1$ function. While we do not require all the properties of ETCF
functions for our results, we give the full definitions of these functions for
completeness.
The following definitions are taken from [Mah18, Section 4], with similar
definitions in [BCM+18, Section 3]. Note that we have flipped the naming
convention of these functions with respect to how they were originally defined
in [Mah18, BCM+18]. In other words, for us the $\mathcal{F}$ family is the
injective family and the $\mathcal{G}$ family is the NTCF one, whereas in the
cited results this is reversed. This was done in order to be consistent with
referenced injective functions denoted $f$ from other results. We start with
the definition of the NTCF family:
###### Definition 4 (NTCF Family [Mah18, BCM+18]).
Let $\lambda$ be a security parameter. Let $\mathcal{X}$ and $\mathcal{Y}$ be
finite sets. Let $\mathcal{K}_{\mathcal{G}}$ be a finite set of keys. A family
of functions
$\mathcal{G}\,=\,\big{\\{}g_{k,b}:\mathcal{X}\rightarrow\mathcal{D}_{\mathcal{Y}}\big{\\}}_{k\in\mathcal{K}_{\mathcal{G}},b\in\\{0,1\\}}$
is called a noisy trapdoor claw-free (NTCF) family if the following conditions
hold:
1. 1.
Efficient Function Generation. There exists an efficient probabilistic
algorithm $\textrm{GEN}_{\mathcal{G}}$ which generates a key
$k\in\mathcal{K}_{\mathcal{G}}$ together with a trapdoor $t_{k}$:
$(k,t_{k})\leftarrow\textrm{GEN}_{\mathcal{G}}(1^{\lambda})\;.$
2. 2.
Trapdoor Injective Pair. For all keys $k\in\mathcal{K}_{\mathcal{G}}$ the
following conditions hold.
1. (a)
Trapdoor: For all $b\in\\{0,1\\}$ and $x\neq x^{\prime}\in\mathcal{X}$,
$\textsc{Supp}(g_{k,b}(x))\cap\textsc{Supp}(g_{k,b}(x^{\prime}))=\emptyset$.
Moreover, there exists an efficient deterministic algorithm
$\textrm{INV}_{\mathcal{G}}$ such that for all $b\in\\{0,1\\}$,
$x\in\mathcal{X}$ and $y\in\textsc{Supp}(g_{k,b}(x))$,
$\textrm{INV}_{\mathcal{G}}(t_{k},b,y)=x$.
2. (b)
Injective pair: There exists a perfect matching
$\mathcal{R}_{k}\subseteq\mathcal{X}\times\mathcal{X}$ such that
$g_{k,0}(x_{0})=g_{k,1}(x_{1})$ if and only if
$(x_{0},x_{1})\in\mathcal{R}_{k}$.
3. 3.
Efficient Range Superposition. For all keys $k\in\mathcal{K}_{\mathcal{G}}$
and $b\in\\{0,1\\}$ there exists a function
$g^{\prime}_{k,b}:\mathcal{X}\mapsto\mathcal{D}_{\mathcal{Y}}$ such that
1. (a)
For all $(x_{0},x_{1})\in\mathcal{R}_{k}$ and
$y\in\textsc{Supp}(g^{\prime}_{k,b}(x_{b}))$,
INV${}_{\mathcal{G}}(t_{k},b,y)=x_{b}$ and INV${}_{\mathcal{G}}(t_{k},b\oplus
1,y)=x_{b\oplus 1}$.
2. (b)
There exists an efficient deterministic procedure $\textrm{CHK}_{\mathcal{G}}$ that, on input $k$,
$b\in\\{0,1\\}$, $x\in\mathcal{X}$ and $y\in\mathcal{Y}$, returns $1$ if
$y\in\textsc{Supp}(g^{\prime}_{k,b}(x))$ and $0$ otherwise. Note that $\textrm{CHK}_{\mathcal{G}}$ is
not provided the trapdoor $t_{k}$.
3. (c)
For every $k$ and $b\in\\{0,1\\}$,
$\textsc{E}_{x\leftarrow_{U}\mathcal{X}}\big{[}\,H^{2}(g_{k,b}(x),\,g^{\prime}_{k,b}(x))\,\big{]}\,\leq\,\mu(\lambda)\;,$
for some negligible function $\mu(\cdot)$. Here $H^{2}$ is the Hellinger
distance. Moreover, there exists an efficient procedure $\textrm{SAMP}_{\mathcal{G}}$ that on input
$k$ and $b\in\\{0,1\\}$ prepares the state
$\frac{1}{\sqrt{|\mathcal{X}|}}\sum_{x\in\mathcal{X},y\in\mathcal{Y}}\sqrt{(g^{\prime}_{k,b}(x))(y)}\ket{x}\ket{y}\;.$
(9)
4. 4.
Adaptive Hardcore Bit. For all keys $k\in\mathcal{K}_{\mathcal{G}}$ the
following conditions hold, for some integer $w$ that is a polynomially bounded
function of $\lambda$.
1. (a)
For all $b\in\\{0,1\\}$ and $x\in\mathcal{X}$, there exists a set
$G_{k,b,x}\subseteq\\{0,1\\}^{w}$ such that
$\Pr_{d\leftarrow_{U}\\{0,1\\}^{w}}[d\notin G_{k,b,x}]$ is negligible, and
moreover there exists an efficient algorithm that checks for membership in
$G_{k,b,x}$ given $k,b,x$ and the trapdoor $t_{k}$.
2. (b)
There is an efficiently computable injection $J:\mathcal{X}\to\\{0,1\\}^{w}$,
such that $J$ can be inverted efficiently on its range, and such that the
following holds. If
$\displaystyle H_{k}$ $\displaystyle=$
$\displaystyle\big{\\{}(b,x_{b},d,d\cdot(J(x_{0})\oplus
J(x_{1})))\,|\;b\in\\{0,1\\},\;(x_{0},x_{1})\in\mathcal{R}_{k},\;d\in
G_{k,0,x_{0}}\cap G_{k,1,x_{1}}\big{\\}}\;,\text{}$
$\displaystyle\overline{H}_{k}$ $\displaystyle=$
$\displaystyle\\{(b,x_{b},d,c)\,|\;(b,x_{b},d,c\oplus 1)\in H_{k}\big{\\}}\;,$
then for any quantum polynomial-time procedure $\mathcal{A}$ there exists a
negligible function $\mu(\cdot)$ such that
$\Big{|}\Pr_{(k,t_{k})\leftarrow\textrm{GEN}_{\mathcal{G}}(1^{\lambda})}[\mathcal{A}(k)\in
H_{k}]-\Pr_{(k,t_{k})\leftarrow\textrm{GEN}_{\mathcal{G}}(1^{\lambda})}[\mathcal{A}(k)\in\overline{H}_{k}]\Big{|}\,\leq\,\mu(\lambda)\;.$
(10)
We now define the trapdoor injective family.
###### Definition 5 (Trapdoor Injective Function Family [Mah18, BCM+18]).
Let $\lambda$ be a security parameter. Let $\mathcal{X}$ and $\mathcal{Y}$ be
finite sets. Let $\mathcal{K}_{\mathcal{F}}$ be a finite set of keys. A family
of functions
$\mathcal{F}\,=\,\big{\\{}f_{k,b}:\mathcal{X}\rightarrow\mathcal{D}_{\mathcal{Y}}\big{\\}}_{b\in\\{0,1\\},k\in\mathcal{K}_{\mathcal{F}}}$
is called a trapdoor injective family if the following conditions hold:
1. 1.
Efficient Function Generation. There exists an efficient probabilistic
algorithm $\textrm{GEN}_{\mathcal{F}}$ which generates a key
$k\in\mathcal{K}_{\mathcal{F}}$ together with a trapdoor $t_{k}$:
$(k,t_{k})\leftarrow\textrm{GEN}_{\mathcal{F}}(1^{\lambda})\;.$
2. 2.
Disjoint Trapdoor Injective Pair. For all keys
$k\in\mathcal{K}_{\mathcal{F}}$, for all $b,b^{\prime}\in\\{0,1\\}$ and
$x,x^{\prime}\in\mathcal{X}$, if $(b,x)\neq(b^{\prime},x^{\prime})$,
$\textsc{Supp}(f_{k,b}(x))\cap\textsc{Supp}(f_{k,b^{\prime}}(x^{\prime}))=\emptyset$.
Moreover, there exists an efficient deterministic algorithm
$\textrm{INV}_{\mathcal{F}}$ such that for all $b\in\\{0,1\\}$,
$x\in\mathcal{X}$ and $y\in\textsc{Supp}(f_{k,b}(x))$,
$\textrm{INV}_{\mathcal{F}}(t_{k},y)=(b,x)$.
3. 3.
Efficient Range Superposition. For all keys $k\in\mathcal{K}_{\mathcal{F}}$
and $b\in\\{0,1\\}$
1. (a)
There exists an efficient deterministic procedure $\textrm{CHK}_{\mathcal{F}}$ that, on input $k$,
$b\in\\{0,1\\}$, $x\in\mathcal{X}$ and $y\in\mathcal{Y}$, outputs $1$ if
$y\in\textsc{Supp}(f_{k,b}(x))$ and $0$ otherwise. Note that $\textrm{CHK}_{\mathcal{F}}$ is not
provided the trapdoor $t_{k}$.
2. (b)
There exists an efficient procedure $\textrm{SAMP}_{\mathcal{F}}$ that on input $k$ and
$b\in\\{0,1\\}$ returns the state
$\frac{1}{\sqrt{|\mathcal{X}|}}\sum_{x\in\mathcal{X},y\in\mathcal{Y}}\sqrt{(f_{k,b}(x))(y)}\ket{x}\ket{y}\;.$
(11)
###### Definition 6 (Injective Invariance [Mah18]).
A noisy trapdoor claw-free family $\mathcal{G}$ is injective invariant if
there exists a trapdoor injective family $\mathcal{F}$ such that:
1. 1.
The algorithms $\textrm{CHK}_{\mathcal{F}}$ and $\textrm{SAMP}_{\mathcal{F}}$ are the same as the algorithms $\textrm{CHK}_{\mathcal{G}}$ and $\textrm{SAMP}_{\mathcal{G}}$.
2. 2.
For all quantum polynomial-time procedures $\mathcal{A}$, there exists a
negligible function $\mu(\cdot)$ such that
$\Big{|}\Pr_{(k,t_{k})\leftarrow\textrm{GEN}_{\mathcal{F}}(1^{\lambda})}[\mathcal{A}(k)=0]-\Pr_{(k,t_{k})\leftarrow\textrm{GEN}_{\mathcal{G}}(1^{\lambda})}[\mathcal{A}(k)=0]\Big{|}\leq\mu(\lambda)$
(12)
###### Definition 7 (Extended Trapdoor Claw-Free Family [Mah18]).
A noisy trapdoor claw-free family $\mathcal{G}$ is an extended trapdoor claw-
free family if:
1. 1.
It is injective invariant.
2. 2.
For all $k\in\mathcal{K}_{\mathcal{G}}$ and $d\in\\{0,1\\}^{w}$, let:
$\displaystyle H^{\prime}_{k,d}$ $\displaystyle=$
$\displaystyle\\{d\cdot(J(x_{0})\oplus
J(x_{1}))|(x_{0},x_{1})\in\mathcal{R}_{k}\\}$ (13)
For all quantum polynomial-time procedures $\mathcal{A}$, there exists a
negligible function $\mu(\cdot)$ and a string $d\in\\{0,1\\}^{w}$ such that
$\Big{|}\Pr_{(k,t_{k})\leftarrow\textrm{GEN}_{\mathcal{G}}(1^{\lambda})}[\mathcal{A}(k)\in
H^{\prime}_{k,d}]-\frac{1}{2}\Big{|}\leq\mu(\lambda)$ (14)
### 2.4 Randomized encodings
To prove the reduction from $\mathrm{LWE}$ to the constant depth version of
the entropy difference problem, we make use of randomized encodings. The
following definitions and results are taken from [AIK06]:
###### Definition 8 (Uniform randomized encoding [AIK06]).
Let $f:\\{0,1\\}^{*}\to\\{0,1\\}^{*}$ be a polynomial-time computable function
and $l(n)$ an output length function such that $|f(x)|=l(|x|)$, for every
$x\in\\{0,1\\}^{*}$. We say that a function
$\hat{f}:\\{0,1\\}^{*}\times\\{0,1\\}^{*}\to\\{0,1\\}^{*}$ is a
$\delta(n)$-correct, $\varepsilon(n)$-private randomized encoding of $f$, if
it satisfies the following properties:
* •
Length regularity. There exist polynomially-bounded and efficiently computable
length functions $m(n)$, $s(n)$ such that for every $x\in\\{0,1\\}^{n}$ and
$r\in\\{0,1\\}^{m(n)}$ we have that $|\hat{f}(x,r)|=s(n)$.
* •
Efficient evaluation. There exists a polynomial-time evaluation algorithm
that, given $x\in\\{0,1\\}^{*}$ and $r\in\\{0,1\\}^{m(n)}$, outputs
$\hat{f}(x,r)$.
* •
$\delta$-correctness. There exists a polynomial-time algorithm $C$, called a
decoder, such that for every input $x\in\\{0,1\\}^{n}$,
$\Pr_{r\leftarrow_{U}\\{0,1\\}^{m(n)}}[C(1^{n},\hat{f}(x,r))\neq
f(x)]\leq\delta(n).$ (15)
* •
$\varepsilon$-privacy. There exists a probabilistic polynomial-time algorithm
$S$, called a simulator such that for every $x\in\\{0,1\\}^{n}$ and
$r\leftarrow_{U}\\{0,1\\}^{m(n)}$,
$||S(1^{n},f(x))-\hat{f}(x,r)||_{TV}\leq\varepsilon(n).$ (16)
Correctness (and privacy, respectively) can be _perfect_ when $\delta=0$
($\varepsilon=0$, respectively), or _statistical_ when $\delta(n)$
($\varepsilon(n)$, respectively) is a negligible function in $n$. Thus, a
_perfect randomized encoding_ is one that has perfect correctness and perfect
privacy (in fact a perfect randomized encoding needs to satisfy two
further properties called _balance_ and _stretch-preservation_ [AIK06], though
we omit them here since they are not important for our results). Similarly, a
_statistical randomized encoding_ is one that has statistical correctness and
statistical privacy.
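To make Definition 8 concrete, the following Python sketch verifies perfect
correctness and perfect privacy, by enumeration, for a deliberately degenerate
encoding $\hat{f}(x,r)=(f(x),r)$ of the two-bit AND. This toy encoding is
cryptographically useless, but it satisfies the definition (and the
injectivity of Lemma 1(a) below) exactly:

```python
from itertools import product

def f(x):
    """Toy function: AND of two bits, returned as a 1-tuple."""
    return (x[0] & x[1],)

def hat_f(x, r):
    """Degenerate perfect randomized encoding: hat_f(x, r) = (f(x), r)."""
    return f(x) + (r,)

def decode(y):
    """Decoder: recover f(x) from the encoding (it ignores r)."""
    return (y[0],)

def simulate(fx):
    """Simulator: given only f(x), reproduce the encoding's distribution."""
    return {fx + (r,): 0.5 for r in (0, 1)}

for x in product((0, 1), repeat=2):
    enc_dist = {}
    for r in (0, 1):
        y = hat_f(x, r)
        assert decode(y) == f(x)           # perfect correctness (delta = 0)
        enc_dist[y] = enc_dist.get(y, 0) + 0.5
    assert enc_dist == simulate(f(x))      # perfect privacy (epsilon = 0)
print("perfect correctness and privacy verified for all inputs")
```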
###### Lemma 1 (Unique randomness [AIK06]).
Suppose $\hat{f}$ is a perfect randomized encoding of $f$. Then,
* (a)
for any input $x$, $\hat{f}(x,\cdot)$ is injective, i.e. there are no distinct
$r,r^{\prime}$, such that $\hat{f}(x,r)=\hat{f}(x,r^{\prime})$,
* (b)
if $f$ is a permutation then so is $\hat{f}$.
###### Theorem 5 ([AIK06]).
Any function $f\in\sf NC^{1}$ admits a perfect randomized encoding in
$\mathsf{NC}^{0}_{4}$. Moreover, constructing the randomized encoding of $f$
can be achieved in time polynomial in the size of the circuit that evaluates
$f$.
Here $\mathsf{NC}^{0}_{4}$ denotes the set of uniform constant-depth circuits
having output locality 4. In other words, each output bit depends on at most 4
input bits.
## 3 Entropy difference for shallow circuits
In this section we examine the hardness of the entropy difference problem for
circuits whose depth scales at most (poly)logarithmically with the size of the
input. We start by formally defining the entropy difference problem, in both
the classical and the quantum cases, for circuits with depth scaling as
$\delta(n)$, for inputs of size $n$ and where
$\delta:\mathbb{N}\to\mathbb{N}$ is a monotonically increasing function:
###### Definition 9 (EDδ).
Let $C_{1}$ and $C_{2}$ be reversible boolean circuits acting on $n+k$ bits
such that $depth(C_{1})\leq\delta(n+k)$, $depth(C_{2})\leq\delta(n+k)$. Let
$\mathcal{D}_{1}(y)=\Pr_{x\leftarrow_{U}\\{0,1\\}^{n+k}}\left[y=Tr_{k}(C_{1}(x))\right]\quad\quad\quad\mathcal{D}_{2}(y)=\Pr_{x\leftarrow_{U}\\{0,1\\}^{n+k}}\left[y=Tr_{k}(C_{2}(x))\right]$
(17)
denote the output distributions of the circuits when restricted to the first
$n$ output bits and with the input chosen uniformly at random.
Given $n$, $k$ and descriptions of $C_{1}$ and $C_{2}$ as input, decide
whether:
$S(\mathcal{D}_{1})\geq S(\mathcal{D}_{2})+1\quad\quad\text{or}\quad\quad
S(\mathcal{D}_{2})\geq S(\mathcal{D}_{1})+1$ (18)
promised that one of these is the case (as mentioned in Subsection 2.2, the
gap in entropy difference can be $1/poly(n+k)$, but this can always be made
constant by simply taking parallel repeated versions of the circuits and using
the fact that entropy is additive; we restrict to the case where the entropy
difference is at least $1$, unless otherwise specified).
###### Definition 10 (QEDδ).
Let $C_{1}$ and $C_{2}$ be quantum circuits acting on $n+k$ qubits such that
$depth(C_{1})\leq\delta(n+k)$, $depth(C_{2})\leq\delta(n+k)$. Define the
following $n$-qubit mixed states:
$\rho_{1}=Tr_{k}(C_{1}\ket{00...0}\bra{00...0}C_{1}^{\dagger})\quad\quad\quad\quad\rho_{2}=Tr_{k}(C_{2}\ket{00...0}\bra{00...0}C_{2}^{\dagger})$
(19)
Given $n$, $k$ and descriptions of $C_{1}$ and $C_{2}$ as input, decide
whether:
$S(\rho_{1})\geq S(\rho_{2})+1\quad\quad\text{or}\quad\quad S(\rho_{2})\geq
S(\rho_{1})+1$ (20)
promised that one of these is the case.
Note that if $\delta(n)=O(\log(n))$ we obtain the definitions for
ED${}_{\textrm{log}}$ and QED${}_{\textrm{log}}$, respectively, and if
$\delta(n)=O(1)$, we obtain the definitions for ED${}_{\textrm{O(1)}}$ and
QED${}_{\textrm{O(1)}}$, respectively (we slightly abuse notation here
since $O(\log(n))$ and $O(1)$ are sets of functions and so, correspondingly,
ED${}_{\textrm{log}}$ (QED${}_{\textrm{log}}$) and ED${}_{\textrm{O(1)}}$
(QED${}_{\textrm{O(1)}}$) will also be sets of problems; however, our results
are valid for any instances of the problems in these classes). In addition,
when $\delta(n)=poly(n)$, we recover the original definitions of ED and QED,
respectively. As those problems are complete for $\sf SZK$ and $\sf QSZK$, we
proceed to define the analogous classes $\sf SZK_{\delta}$ and $\sf
QSZK_{\delta}$ of problems that poly-time reduce to EDδ and QEDδ.
###### Definition 11 ($\sf SZK_{\delta}$, $\sf QSZK_{\delta}$).
We let $\sf SZK_{\delta}$ ($\sf QSZK_{\delta}$) consist of all promise
problems $(L_{yes},L_{no})$ for which there exists a deterministic polynomial-
time reduction to EDδ (QEDδ).
Taking $\delta(n)=polylog(n)$, we now show that these classes have the
following characterisation:
###### Lemma 2.
$\sf SZK_{polylog}$ ($\sf QSZK_{polylog}$) is the set of promise problems that
admit a (quantum) statistical zero-knowledge protocol in which the verifier
circuit and the simulator circuit have depth $polylog(n)$, for inputs of size
$n$.
###### Proof.
To prove this result we need to show that ED${}_{\textrm{polylog}}$
(QED${}_{\textrm{polylog}}$) is contained in and complete for the class of
problems that admit a (quantum) statistical zero-knowledge protocol in which
the verifier circuit and the simulator circuit have depth $polylog(n)$. This
follows immediately from the proofs that ED (QED) is contained in and complete
for $\sf SZK$ ($\sf QSZK$), from [VW16], by replacing all circuits in those
proofs with the corresponding circuits of depth $polylog(n)$. ∎
### 3.1 Entropy difference is easier for shallow circuits
With the above definitions, let us now address the question of whether
ED${}_{\textrm{polylog}}$ (QED${}_{\textrm{polylog}}$) is $\sf SZK$-complete
($\sf QSZK$-complete). From the above lemma, we see that this is the same as
asking whether $\sf SZK_{polylog}=SZK$ ($\sf QSZK_{polylog}=QSZK$). In other
words, can any $\sf SZK$ ($\sf QSZK$) protocol be turned into a protocol
having polylog-depth circuits for the simulator and verifier? Providing an
_unconditional_ negative answer seems difficult without proving explicit
circuit lower-bounds. Instead, we will give “complexity-theoretic evidence”
that the answer is no. We first show that there exists an oracle,
$\mathcal{O}$, such that, relative to $\mathcal{O}$, $\sf SZK_{polylog}$
$\neq$ $\sf SZK$ ($\sf QSZK_{polylog}\neq\sf QSZK$).
###### Theorem 6.
There exists an oracle $\mathcal{O}$ such that
$\mathsf{SZK^{\mathcal{O}}_{polylog}}\neq\mathsf{SZK}^{\mathcal{O}}$ and
$\mathsf{QSZK^{\mathcal{O}}_{polylog}}\neq\mathsf{QSZK}^{\mathcal{O}}$.
###### Proof.
The proof will be primarily for the quantum case, since the classical case is
analogous, though we will specify whenever there is a distinction between the
two. The oracle in our proof will be identical to the one of Chia, Chung and
Lai [CCL19], showing the separation
$\mathsf{BPP}^{\mathsf{QNC}^{\mathcal{O}}}\neq\mathsf{BQP}^{\mathcal{O}}$.
Their oracle provides access to a function $f:\\{0,1\\}^{n}\to\\{0,1\\}^{n}$
that is promised to be either $1$-to-$1$ or $2$-to-$1$. However, the oracle
does not give direct query access to $f$. Instead, the oracle allows for the
querying of $d+1$ functions
$f_{0},f_{1}...,f_{d}:\\{0,1\\}^{n}\to\\{0,1\\}^{n}$ such that
$f=f_{d}\circ...\circ f_{0}$, for some $d>0$. The functions
$f_{0},f_{1},...,f_{d-1}$ are $1$-to-$1$ functions and $f_{d}$ is either a
$1$-to-$1$ function or a $2$-to-$1$ function depending on whether $f$ is
$1$-to-$1$ or $2$-to-$1$. This is referred to as a _$d$ -shuffling oracle_.
Its purpose is to force any algorithm that attempts to query $f$ to first
evaluate the $d+1$ functions. This is achieved by having the image of each
function be a random subset of its co-domain. In other words, each function
will be defined as $f_{i}:S_{i}\to S_{i+1}$, with
$S_{i}\subseteq\\{0,1\\}^{n}$. The input domain, however, will be a set
$S_{0}\subseteq\\{0,1\\}^{n}$ chosen uniformly at random from subsets of
$n$-bit strings. Thus, the image of $f_{0}$ on $S_{0}$ will be
$Im_{S_{0}}(f_{0})=f_{0}(S_{0})=S_{1}$ and in general
$S_{i+1}=Im_{S_{i}}(f_{i})$.
The problem that Chia, Chung and Lai define relative to this oracle is to
determine whether the function $f:S_{0}\to S_{d+1}$ is $1$-to-$1$ or
$2$-to-$1$. In the latter case, the function also has Simon’s property so that
the problem (called _$d$ -shuffling Simon’s problem_, or $d$-SSP) can be
solved efficiently in quantum polynomial time, thus showing containment in
$\sf BQP^{\mathcal{O}}$. Using the properties of the $d$-shuffling oracle it
is possible to show that no quantum circuit of depth smaller than $d$ can
solve the problem, even when alternating these quantum circuits with classical
circuits of polynomial depth. Thus, taking $d=O(n)$ is sufficient to show that
the problem is not contained in ${\sf BPP^{\mathsf{QNC}}}^{\mathcal{O}}$.
For the proof of our result we also consider the $d$-SSP problem. Since we
already know that the problem is in $\sf BQP^{\mathcal{O}}$ this immediately
implies that the problem is also in $\sf QSZK^{\mathcal{O}}$. For the
classical case, we would also need to show that the problem is contained in
$\sf SZK^{\mathcal{O}}$. This follows from the fact that ED is in $\sf SZK$
and the problem of determining whether a function is $1$-to-$1$ or $2$-to-$1$
reduces to ED (this is because $f$ evaluated on a uniformly random $n$-bit
string will have maximum entropy $n$ when $f$ is $1$-to-$1$, and entropy $n-1$
when the function is $2$-to-$1$). We therefore need to show that the problem
is not contained in $\sf SZK_{polylog}^{\mathcal{O}}$ and $\sf
QSZK_{polylog}^{\mathcal{O}}$.
Figure 1: Circuit with oracle calls. The unitaries $U_{j}$ are assumed to be
polylogarithmic depth quantum circuits. The unitaries $U_{\mathcal{O}}$
represent calls to the oracle $\mathcal{O}$. The circuit has polylogarithmic
depth overall.
To do this, consider a $\sf QSZK_{polylog}$ protocol in which the verifier,
the prover and the simulator all have access to the oracle $\mathcal{O}$. In
such a protocol, the verifier and the simulator are circuits of depth
$O(polylog(n))$ that generically consist of alternating sequences of polylog-
depth circuits and calls to the oracle, as shown in Figure 1. For a fixed
input state that starts off uncorrelated with the oracle, let us examine the
output state of such a circuit in the cases when the function is injective and
when it is a $2$-to-$1$ function. We will denote the oracle as
$\mathcal{O}(f)$ in the former case and as $\mathcal{O}(g)$ in the
latter (the interpretation of this notation is that the $d$-shuffling
oracle is providing “shuffled” access to either $f$ or $g$). We also denote
the circuit under consideration, when given access to the oracle, as
$C^{\mathcal{O}(f)}$ and $C^{\mathcal{O}(g)}$, respectively. These can be
expressed as follows
$C^{\mathcal{O}(f)}=U_{m}U_{\mathcal{O}(f)}U_{m-1}...U_{\mathcal{O}(f)}U_{1}\quad\quad
C^{\mathcal{O}(g)}=U_{m}U_{\mathcal{O}(g)}U_{m-1}...U_{\mathcal{O}(g)}U_{1}$
(21)
with $m=polylog(n)$ and where each $U_{i}$ is a circuit of depth one. We
denote the input state to the circuit as $\ket{\psi(0)}$ (the analysis remains
unchanged if the input state is a mixed state). We will also write
$\ket{\psi^{f}(i)}=U_{\mathcal{O}(f)}U_{i}\ket{\psi^{f}(i-1)}$ and
$\ket{\psi^{g}(i)}=U_{\mathcal{O}(g)}U_{i}\ket{\psi^{g}(i-1)}$. Note that
$\ket{\psi^{f}(m)}=C^{\mathcal{O}(f)}\ket{\psi(0)}\quad\quad\ket{\psi^{g}(m)}=C^{\mathcal{O}(g)}\ket{\psi(0)}$
(22)
Following a similar analysis to that of [CCL19], we have that
$TD(\ket{\psi^{f}(m)},\ket{\psi^{g}(m)})\leq
TD(\ket{\psi^{f}(m)},U_{\mathcal{O}(f)}U_{m}\ket{\psi^{g}(m-1)})+
TD(U_{\mathcal{O}(f)}U_{m}\ket{\psi^{g}(m-1)},\ket{\psi^{g}(m)})$ (23)
which can be extended to
$TD(\ket{\psi^{f}(m)},\ket{\psi^{g}(m)})\leq\sum_{i=1}^{m}TD(U_{\mathcal{O}(f)}U_{i}\ket{\psi^{g}(i-1)},\ket{\psi^{g}(i)})$
(24)
Both of these equations follow from the triangle inequality. We next have that
$TD(\ket{\psi^{f}(m)},\ket{\psi^{g}(m)})\leq m\max_{i\leq
m}\;TD(U_{\mathcal{O}(f)}U_{i}\ket{\psi^{g}(i-1)},U_{\mathcal{O}(g)}U_{i}\ket{\psi^{g}(i-1)})$
(25)
Finally, as was shown in [CCL19, Theorem 6.1], for $i\leq d$,
$TD(U_{\mathcal{O}(f)}U_{i}\ket{\psi^{g}(i-1)},U_{\mathcal{O}(g)}U_{i}\ket{\psi^{g}(i-1)})\leq\frac{poly(n)}{2^{n}}$
(26)
and since $m=polylog(n)$ and $d=O(n)$, we have $m<d$ for sufficiently large $n$, hence
$TD(\ket{\psi^{f}(m)},\ket{\psi^{g}(m)})\leq m\frac{poly(n)}{2^{n}}.$ (27)
If we now consider $C^{\mathcal{O}}$ to be the simulator circuit in a $\sf
QSZK_{polylog}$ ($\sf SZK_{polylog}$) protocol, we see that the simulator
produces nearly identical transcripts irrespective of whether the oracle
function is $1$-to-$1$ or $2$-to-$1$. This means that, for the “yes” instances
(the function being $1$-to-$1$), the transcript that the verifier circuit acts
on, upon its interaction with the prover, is _almost completely uncorrelated_
with the oracle itself. Stated differently, the transcript is
$poly(n)/2^{n}$-close in trace distance to a transcript for a “no” instance.
Thus, the interaction with the prover can provide the verifier with at most a
$poly(n)/2^{n}$ advantage in deciding the problem correctly. Since the
verifier circuit itself is polylogarithmic in depth, from the above analysis
(and the result of [CCL19]), it follows that if the oracle function is
equally likely to be $1$-to-$1$ or $2$-to-$1$, the resulting $\sf
QSZK_{polylog}$ ($\sf SZK_{polylog}$) protocol will decide correctly with
probability at most $1/2+poly(n)/2^{n}$. This concludes the proof. ∎
It should be noted that, following [CCL19], the above result extends to
circuits of depth strictly less than $d$. The key insight of the proof is the
fact that the shuffling oracle requires circuits of depth at least $d$ in
order to obtain an output that is non-negligibly correlated with the oracle
type. Intuitively, if we were to look strictly at instances of
ED${}_{\textrm{polylog}}$ and QED${}_{\textrm{polylog}}$ in which the circuits
under consideration can query the oracle, the number of queries is too small
for there to be any noticeable difference in the output entropies. Thus,
relative to the shuffling oracle, ED${}_{\textrm{polylog}}$ and
QED${}_{\textrm{polylog}}$ are strictly weaker than ED and QED.
Let us now consider a different argument for why it is unlikely that entropy
difference with shallow circuits is as hard as with general poly-size
circuits. We will focus specifically on the quantum case for circuits of
logarithmic depth and show the following:
###### Theorem 7.
If there exists a polynomial-time reduction from a $\sf QSZK$ protocol with a
log-depth verifier to a $\sf QSZK_{log}$ protocol which preserves the
transcript of the $\sf QSZK$ protocol, then $\sf BQP=\sf BPP^{\sf QNC^{1}}$.
###### Proof.
It is straightforward to show that $\sf BQP\subseteq\sf QSZK$: the quantum
verifier ignores the prover and can decide any language in $\sf BQP$.
However, one can also give a $\sf QSZK$ protocol for any language in $\sf BQP$
where there is non-trivial interaction between a prover and verifier.
Furthermore, we show that such a protocol only requires the verifier’s circuit
to be log depth. To show this, we adapt the proof by Rosgen that $\sf QIP$
only requires log depth quantum verifiers [Ros08].
Given a polynomial-time quantum circuit on $n$ qubits of the form
$C=U_{m}U_{m-1}...U_{1}$, the following state $\ket{\psi_{U}}$ can be
constructed by another $\sf BQP$ circuit,
$\ket{\psi_{U}}=\ket{0...0}\otimes U_{1}\ket{0...0}\otimes...\otimes
U_{m}U_{m-1}...U_{1}\ket{0...0}.$ (28)
Notice the similarity to a Feynman-Kitaev history state except where one takes
the tensor product of unitaries applied to the input, and not the
superposition. The state $\ket{\psi_{U}}$ stores the final state of the
circuit above in the rightmost $n$ qubits. Therefore, any language in $\sf
BQP$ can be decided by measuring the relevant qubit in these rightmost $n$
qubits in $\ket{\psi_{U}}$.
Now we can have a $\sf QSZK$ protocol where an honest prover gives the
verifier the state $\ket{\psi_{U}}$, and thus a verifier has the ability to
decide any language in $\sf BQP$ when given this state. In the case of a
dishonest prover, we can use the techniques described by Rosgen, to verify
that the state given by the prover is $\ket{\psi_{U}}$. For convenience of
explanation, we divide the state given by the prover up into $m$ “registers”
of $n$ qubits, where the first register should be in the state $\ket{0...0}$,
the second register in the state $U_{1}\ket{0...0}$, and so on. The idea for
verifying that the state is $\ket{\psi_{U}}$ is to pairwise compare the $j$th
and $(j+1)$th registers with SWAP tests. More precisely the verification
circuit will apply $U_{j+1}$ to the $j$th register and perform a SWAP test on
the $j$th and $(j+1)$th registers: if the states are the same then swapping
leaves the states invariant and the circuit accepts, otherwise it rejects.
Therefore, after $n$ SWAP tests of this form, if all tests accept, then with
high probability the state is $\ket{\psi_{U}}$. Importantly, all of the SWAP
tests to compare registers can be done in log depth [Ros08], thus the
verifier’s circuit is a log depth quantum circuit.
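The acceptance statistics the verifier relies on follow the standard SWAP-test
formula $\Pr[\text{accept}]=\frac{1}{2}+\frac{1}{2}|\langle\psi|\phi\rangle|^{2}$,
so equal states always pass while distinct states are caught with probability
$\frac{1}{2}(1-|\langle\psi|\phi\rangle|^{2})$. A minimal numerical sketch,
computing the formula directly rather than simulating the circuit:

```python
import numpy as np

def swap_test_accept_prob(psi, phi):
    """Acceptance probability of the SWAP test on |psi>, |phi>:
    1/2 + |<psi|phi>|^2 / 2.  Equal states are accepted with certainty."""
    overlap = np.vdot(psi, phi)
    return 0.5 + 0.5 * abs(overlap) ** 2

psi = np.array([1, 0], dtype=complex)                 # |0>
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)   # |+>
print(swap_test_accept_prob(psi, psi))                # 1.0
print(swap_test_accept_prob(psi, plus))               # 0.75
```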
In the above protocol we have outlined how the verifier can verify the state
$\ket{\psi_{U}}$, but we have not shown that it satisfies the property of
statistical zero-knowledge. Note that the state $\ket{\psi_{U}}$ produced by
the prover (such that an input is accepted) can be generated by a polynomial
time quantum circuit. Therefore, in the case of a $\sf QSZK$ protocol, the
simulator could produce this state $\ket{\psi_{U}}$, and since this state is
the whole transcript of the protocol, the protocol has the property of zero-
knowledge.
If we assume that there exists a polynomial-time reduction from a $\sf QSZK$
protocol to a $\sf QSZK_{log}$ protocol which preserves the transcript of the
$\sf QSZK$ protocol, then the above $\sf QSZK$ protocol for deciding $\sf BQP$
can be turned into a $\sf QSZK_{log}$ protocol. Therefore, a simulator $S$
must be able to produce a state very close to $\ket{\psi_{U}}$ with a log-
depth quantum circuit, in the case that the input $x$ is in the language
$L_{yes}$. Furthermore, since $\sf BQP$ is closed under complement, we can
take the complement of the language above, and have a $\sf QSZK_{log}$
protocol for this, and thus another simulator $S^{\prime}$ that produces a
state close to $\ket{\psi_{U}}$ for $x\in L_{no}$.
Now we have the situation where if the conditions of the theorem hold, we have
two log-depth quantum circuits corresponding to the simulators $S$ and
$S^{\prime}$ above that can be used to decide membership of a language in $\sf
BQP$: if $x\in L_{yes}$ then $S$ will produce the state $\ket{\psi_{U}}$, but
$S^{\prime}$ could produce anything; if $x\in L_{no}$ then $S^{\prime}$
produces the correct state $\ket{\psi_{U}}$. To decide which is which, a
verifier can apply the SWAP tests outlined above individually on both of the
states generated by $S$ and $S^{\prime}$: at least one of the two states will
satisfy the tests and correctly accept or reject. We can now leverage these
observations to prove the theorem.
To collect the observations together, we have pointed out that if the
conditions of the theorem hold, there are log-depth quantum circuits $S$ and
$S^{\prime}$ that generate states $\ket{\psi_{U}}$, which can be used by a
log-depth quantum circuit to decide membership in $\sf BQP$. Thus we could
decide any language in $\sf BQP$ with a $\sf BPP^{\sf QNC^{1}}$ algorithm in
the following way: a $\sf BPP$ machine computes the reduction from a $\sf
QSZK$ protocol to a $\sf QSZK_{log}$ protocol, feeds the circuit descriptions
of the log-depth simulators $S$ and $S^{\prime}$ to a $\sf QNC^{1}$ oracle,
which can then produce states of the form $\ket{\psi_{U}}$, and carry out the
necessary SWAP tests to verify this state. If the SWAP tests are passed for at
least one of the two states produced by $S$ and $S^{\prime}$, then the
accept/reject measurement is performed on it (again by the oracle), and the
algorithm accepts if the oracle accepts, or rejects otherwise. ∎
It is worthwhile pointing out that this trick of deciding languages in $\sf
BQP$ with a $\sf BPP^{\sf QNC^{1}}$ algorithm does not obviously generalise to
other complexity classes. First, we would need that there are quantum
statistical zero-knowledge protocols for the class, and use the property of
closure under complementation. Naturally $\sf QSZK$ satisfies both of these
properties, but an arbitrary protocol for languages in $\sf QSZK$ has multiple
rounds of communication between prover and verifier. Our construction uses the
fact that a $\sf QSZK$ protocol for any language in $\sf BQP$ has a single
round of communication from prover to verifier, which facilitates verification
of a state in log depth. It is far from obvious how to construct such a
verification procedure for an arbitrary language in $\sf QSZK$.
We can ask about the possibility of a classical analogue of Theorem 7. That
is, if there is a reduction from a $\sf SZK$ protocol to one with a log-depth
verifier and simulator, does this imply that polynomial-time classical
computation can be parallelised to log depth? This would then imply something
about the hardness of entropy distinguishability for probability distributions
from log-depth classical circuits. However, it is not clear how to do this
since we cannot use the history state construction of Rosgen [Ros08] for
classical circuits. For one thing, it is not obvious how an $\sf SZK$ protocol
would work with communication only from the prover to the verifier. Indeed, it
is a feature of quantum interactive proof systems that they can be
parallelised to a constant number of rounds of interaction, but this
parallelisation is not known to be a feature of classical interactive proof
systems.
### 3.2 Hardness based on Learning-With-Errors
In the previous subsection we showed that the polylogarithmic-depth version of
the entropy difference problem, in both the classical and quantum case, is
unlikely to be as hard as the polynomial-depth variant. This raises the
question of whether the problem becomes tractable for polynomial-time
classical or quantum algorithms. Here we give indication that the answer is no
by proving a reduction from $\mathrm{LWE}$ to ED${}_{\textrm{log}}$. Using
techniques from [AIK06], we then strengthen this result by also showing a
reduction from $\mathrm{LWE}$ to ED${}_{\textrm{O(1)}}$. We begin with the
log-depth case:
###### Theorem 8.
$\mathrm{LWE}$ $\leq_{P}$ ED${}_{\textrm{log}}$.
###### Proof.
The proof uses the ETCF functions defined in Subsection 2.3. As mentioned, an
ETCF family consists of an injective function and a $2$-to-$1$ (or claw-free)
function and there exists a reduction from $\mathrm{LWE}$ to the problem of
distinguishing the two functions given their descriptions (this is the
injective invariance property of Definition 6). By showing that the two
functions can be evaluated using circuits of logarithmic depth, we will extend
this reduction to ED${}_{\textrm{log}}$. While it has already been shown that
certain cryptographic functions based on $\mathrm{LWE}$ can be performed in
$\sf NC^{1}$ [BPR12], our result requires that we show this for the circuits
that we construct from an ETCF function family.
The ETCF functions we consider will be the same as the ones from [Mah18,
BCM+18]:
$f(b,x)=Ax+b\cdot u+e\;(mod\;q)\quad\quad\quad
g(b,x)=Ax+b\cdot(As+e^{\prime})+e\;(mod\;q)$ (29)
with $q\geq 2$ a prime integer (note that $q$ itself is a function of $n$; in
general $q$ is taken to be superpolynomial in $n$, though in recent
constructions $q$ can also be polynomial in $n$ while still preserving the
hardness of the $\mathrm{LWE}$ instance [BLP+13]),
$b\in\\{0,1\\}$, $x\in\mathbb{Z}_{q}^{n}$, $s\in\mathbb{Z}_{q}^{n}$,
$A\in\mathbb{Z}^{m\times n}_{q}$, $u,e^{\prime}\in\mathbb{Z}_{q}^{m}$,
$e\leftarrow_{D_{\mathbb{Z}_{q},B}^{m}}\mathbb{Z}^{m}$ as in Definition 3.
These functions will be ETCF even when $A$, $e^{\prime}$ and $u$ are chosen at
random as follows: $A\leftarrow_{U}\mathbb{Z}_{q}^{m\times n}$,
$u\leftarrow_{U}\mathbb{Z}_{q}^{m}$,
$e^{\prime}\leftarrow_{D_{\mathbb{Z}_{q},B^{\prime}}^{m}}\mathbb{Z}^{m}$. The
values $B^{\prime}$ and $B$ are the ones from [Mah18, BCM+18] (there denoted
as $B_{P}$ and $B_{V}$), and determine where the Gaussian distributions are
truncated. Specifically, $B^{\prime}=\frac{q}{C_{T}\sqrt{mn\log(q)}}$, where
$C_{T}$ is a fixed constant and $B$ is chosen so that $B^{\prime}/B$ is super-
polynomial in $n$. Since it was already shown in [Mah18] that the above
functions are ETCF, we need only show that they can be evaluated by log-depth
circuits.
First of all note that the functions from Equation 29 output probability
distributions, whereas the circuits in ED${}_{\textrm{log}}$ need to output
fixed outcomes. We will fix this by making the error, $e$, be part of the
input. We cannot, however, make it directly part of the input since the input
to the circuits in ED${}_{\textrm{log}}$ is distributed uniformly, whereas $e$
must be drawn from a truncated Gaussian distribution,
$D^{m}_{\mathbb{Z}_{q},B}$. Instead, we will have as part of the input a
string $e_{u}$ which we will turn into a Gaussian sample using a log-depth
circuit denoted Gaussify. This procedure can be implemented using, for
instance, the algorithm of Peikert from [Pei10]. Gaussify satisfies the
property that if $e_{u}\leftarrow_{U}\mathbb{Z}_{q}^{m}$ then
$e=\textsc{Gaussify}(e_{u})$ is distributed according to the truncated
Gaussian distribution $D^{m}_{\mathbb{Z}_{q},B}$.
The ED${}_{\textrm{log}}$ circuits we construct will therefore have the form
$C_{f}(b,x,e_{u})=Ax+b\cdot u+\textsc{Gaussify}(e_{u})\;(mod\;q)$ (30)
$C_{g}(b,x,e_{u})=Ax+b\cdot(As+e^{\prime})+\textsc{Gaussify}(e_{u})\;(mod\;q)$
(31)
where $A$, $s$, $e^{\prime}$ and $u$ are fixed. Essentially, one is given $A$,
$u$ and $As+e^{\prime}$ and one has to construct the above circuits and ensure
that they have logarithmic depth. Note that the input length for each circuit
is $(m+n)\log(q)+1$. Following [Mah18, BCM+18] we will assume that
$m=\Omega(n\log(q))$, so that what we need to ensure is that the circuit depth
is $O(\log(m\log(q)))$. The $\mathrm{LWE}$ assumption states that it should be
computationally intractable to distinguish $u$ from $As+e^{\prime}$, when
given $A$. However, as we will show, the above circuits will have different
entropies in their outputs (when the inputs are chosen uniformly at random).
Intuitively this is because one function is $1$-to-$1$ and the other is
_approximately_ $2$-to-$1$ and so the two cases could be distinguished if we
have the ability to solve ED${}_{\textrm{log}}$. This is the essence of the
reduction.
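Before analysing the circuit depth, it may help to see the two maps functionally. The following Python sketch is our own illustration and not part of the reduction: it uses toy parameters, plain linear algebra rather than circuits, and a crude inverse-CDF stand-in for Gaussify in place of Peikert's parallel sampler [Pei10].

```python
import numpy as np

# Toy parameters for illustration only; the actual reduction needs the regime
# of [Mah18, BCM+18] (q prime, m = Omega(n log q), B'/B superpolynomial in n).
n, m, q, B = 4, 16, 97, 4
rng = np.random.default_rng(0)

A = rng.integers(0, q, size=(m, n))        # A uniform in Z_q^{m x n}
s = rng.integers(0, q, size=n)             # LWE secret s in Z_q^n
u = rng.integers(0, q, size=m)             # uniform u in Z_q^m
e_prime = rng.integers(-B, B + 1, size=m)  # crude stand-in for the Gaussian e'

# Toy inverse-CDF table for a B-truncated discrete Gaussian on {-B, ..., B};
# a sequential stand-in for the log-depth Gaussify circuit of [Pei10].
support = np.arange(-B, B + 1)
pmf = np.exp(-support.astype(float) ** 2 / (2.0 * (B / 3.0) ** 2))
cdf = np.cumsum(pmf / pmf.sum())

def gaussify(e_u):
    # Deterministically map uniform e_u in Z_q^m to truncated-Gaussian errors.
    return support[np.searchsorted(cdf, (e_u + 0.5) / q)]

def C_f(b, x, e_u):
    return (A @ x + b * u + gaussify(e_u)) % q

def C_g(b, x, e_u):
    return (A @ x + b * (A @ s + e_prime) + gaussify(e_u)) % q

# The claw structure behind the entropy gap: up to the error terms,
# C_g(0, x, .) collides with C_g(1, x - s, .).
x = rng.integers(0, q, size=n)
e_u = rng.integers(0, q, size=m)
print(np.array_equal(C_g(0, x, e_u), (C_g(1, (x - s) % q, e_u) - e_prime) % q))  # True
```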
Let us take stock of all the operations performed by these circuits and why
they are all in $\sf NC^{1}$:
1. 1.
Addition and multiplication modulo $q$ can be performed in logarithmic depth
with respect to the input size (which in this case is $O(\log(q))$) [Wal64],
so that this operation requires only depth $O(\log(\log(q)))$. For vectors in
$\mathbb{Z}^{m}_{q}$, component-wise addition can be performed in parallel by
increasing the width by a factor of $m$. Thus, the overall depth remains
$O(\log(\log(q)))$.
2. 2.
The dot-product between two vectors in $\mathbb{Z}^{m}_{q}$ requires depth
$O(\log(m\log(q)))$. One first computes the component-wise product of the two
vectors. This is the same as component-wise addition and can be performed in
$O(\log(\log(q)))$ depth. One then adds together all of the results (modulo
$q$) and this can be performed in $O(\log(m\log(q)))$ depth with a divide-and-
conquer strategy (divide the result vector into two vectors of equal length,
recursively compute the sum of their components and add the results; adding
the results requires constant depth and the depth of the recursion tree is
logarithmic in the length of the vectors); see the sketch after this list.
3. 3.
Matrix-vector multiplication with matrices in $\mathbb{Z}_{q}^{m\times n}$ and
vectors in $\mathbb{Z}_{q}^{n}$ can be performed in depth $O(\log(m\log(q)))$.
Start by copying the vector $m$ times (one copy for each row in the matrix).
This can be done in depth $O(\log(m\log(q)))$ with a divide-and-conquer
strategy. Then perform the inner products between each row of the input matrix
and a corresponding copy of the vector. The inner products can be performed in
parallel and each requires $O(\log(m\log(q)))$ depth. Thus, the resulting
circuit will have $O(\log(m\log(q)))$ depth.
4. 4.
Gaussify can be performed in depth $O(\log(m\log(q)))$ as shown in [Pei10].
The procedure from [Pei10] requires a pre-processing step of
$O(m^{3}\log^{2}(q))$ operations. This will be done as part of the polynomial-
time reduction that generates the ED${}_{\textrm{log}}$ instance so that the
circuits $C_{f}$ and $C_{g}$ already contain the results of this pre-
processing step. The actual sampling procedure requires $O(m^{2})$
multiplications and additions modulo $q$ which can be performed in parallel
requiring depth $O(\log(\log(q)))$. Collecting the results will then require
depth at most $O(\log(m\log(q)))$.
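To make the divide-and-conquer bounds of items 2 and 3 concrete, here is a minimal sketch (ours; sequential Python whose recursion tree mirrors the layers of parallel modular adders in the circuit):

```python
def tree_sum_mod(values, q):
    """Sum residues mod q with a balanced recursion tree. Each recursion
    level corresponds to one layer of parallel adders, so the circuit has
    ceil(log2(len(values))) layers, each of depth O(log log q), matching
    the O(log(m log q)) bound from item 2."""
    if len(values) == 1:
        return values[0] % q
    mid = len(values) // 2
    return (tree_sum_mod(values[:mid], q) + tree_sum_mod(values[mid:], q)) % q

def dot_mod(xs, ys, q):
    """Inner product mod q: one layer of parallel multiplications followed
    by the log-depth addition tree."""
    return tree_sum_mod([x * y for x, y in zip(xs, ys)], q)

print(dot_mod([1, 2, 3, 4], [5, 6, 7, 8], 11))  # (5 + 12 + 21 + 32) mod 11 = 4
```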
This shows that $C_{f},C_{g}\in\sf NC^{1}$. We now estimate the entropies
$S(C_{f})$ and $S(C_{g})$ when the inputs of the circuits are chosen uniformly
at random. We will consider $A$ to be a matrix of full rank, $n$ (for
$\mathrm{LWE}$, the matrix $A$ is chosen uniformly at random from
$\mathbb{Z}^{m\times n}_{q}$ and so will be full rank with high probability).
Given that $x\leftarrow_{U}\mathbb{Z}^{n}_{q}$ and since $A$ is full
rank, we have that $Ax\leftarrow_{U}Im(A)$, where $Im(A)$ denotes the image of
$A$. Note that $|Im(A)|=q^{n}$. For $C_{f}$, we can choose $u$ such that the
distributions $Ax+u$ and $Ax$ have no overlap (similar to the choice of $A$,
this will be true for most choices of $u$). Thus, $Ax+b\cdot u$ will be
uniform over the disjoint union $Im(A)\cup(Im(A)+u)$, a set of $2q^{n}$
elements, and so have $n\cdot\log(q)+1$ bits of
entropy. Lastly, we need to account for the Gaussian error. Since this term
appears in both $C_{f}$ and $C_{g}$, we will simply denote its contribution as
$S_{Gaussian}$. We therefore have that
$S(C_{f})=n\cdot\log(q)+1+S_{Gaussian}$.
For the case of $C_{g}$ the difference will be due to the term
$b\cdot(As+e^{\prime})$. Note that this leads to overlap among different
inputs. Specifically $C_{g}(0,x,e_{u})=C_{g}(1,x-s,e^{\prime}_{u})$, where
$e^{\prime}_{u}$ is such that
$\textsc{Gaussify}(e^{\prime}_{u})=\textsc{Gaussify}(e_{u})-e^{\prime}$. Such
an $e^{\prime}_{u}$ exists for all but a negligible fraction of the
error vectors (as a result of taking $B^{\prime}/B$ to be super-polynomial in
$n$) [Mah18, BCM+18]. The circuit $C_{g}$ will therefore behave like a
$2$-to-$1$ function on all but a negligible fraction of the input domain. We
therefore have that $S(C_{g})=n\cdot\log(q)+S_{Gaussian}+\mu(n)$, where
$\mu(n)$ is a negligible function. For sufficiently large $n$, the $\mu(n)$
term will be less than $1$ and we therefore have that the entropy difference
between $C_{f}$ and $C_{g}$ is at least a positive constant, as desired. (The
entropy difference can be made larger by simply repeating the circuits in
parallel. Alternatively, the use of ETCF functions allows one to instead
consider the circuits
$C_{f}(b_{1},b_{2},x,e_{u})=Ax+b_{1}\cdot u_{1}+b_{2}\cdot u_{2}+\textsc{Gaussify}(e_{u})\;(mod\;q)$ (32)
$C_{g}(b_{1},b_{2},x,e_{u})=Ax+b_{1}\cdot(As_{1}+e^{\prime}_{1})+b_{2}\cdot(As_{2}+e^{\prime}_{2})+\textsc{Gaussify}(e_{u})\;(mod\;q)$ (33)
Here $f$ is still a $1$-to-$1$ function, however $g$ is $4$-to-$1$, so that
the entropy difference for these circuits will be $2-\mathrm{negl}(n)$. This
construction can be generalised so that, for any constant $k$, the function
$g$ can be made into a $2^{k}$-to-$1$ function.)
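To make the counting explicit, here is a sketch under the idealisation that $C_{g}$ is exactly $2$-to-$1$ and that the Gaussian contribution is common to both circuits: a function mapping a uniform input over a domain of size $D$ exactly $K$-to-$1$ onto its image produces an output that is uniform over $D/K$ points, and hence has entropy $\log(D/K)=\log D-\log K$. The $1$-to-$1$ circuit $C_{f}$ and the $2$-to-$1$ circuit $C_{g}$ therefore differ by $\log 2=1$ bit, matching the gap computed above up to the negligible correction $\mu(n)$.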
To complete the proof, note that the reduction we have just described
constructs circuits with different entropies starting from an ETCF family (and
specifically starting from an $\mathrm{LWE}$ instance $(A,As+e^{\prime})$ and
a uniformly random vector $u$). However, being able to distinguish between the
output entropies of the circuits allows us to determine which of the two
functions is $1$-to-$1$ and which is $2$-to-$1$. By the injective invariance
property of ETCF functions (Definition 6), this is as hard as $\mathrm{LWE}$,
concluding the proof. ∎
It is worth mentioning that in the above proof we essentially picked “worst-
case” instances of $A$, $s$, $e^{\prime}$ and $u$. However, all of the above
arguments hold with high probability when
$A\leftarrow_{U}\mathbb{Z}_{q}^{m\times n}$,
$s\leftarrow_{U}\mathbb{Z}_{q}^{n}$, $u\leftarrow_{U}\mathbb{Z}_{q}^{m}$ and
$e^{\prime}\leftarrow_{D^{m}_{\mathbb{Z}_{q},B^{\prime}}}\mathbb{Z}_{q}^{m}$
[Mah18, BCM+18]. This means that:
###### Corollary 1.
ED${}_{\textrm{log}}$ is hard-on-average, based on $\mathrm{LWE}$ when the
input circuits have the structure given in Equations 30, 31 and are chosen at
random according to $A\leftarrow_{U}\mathbb{Z}_{q}^{m\times n}$,
$s\leftarrow_{U}\mathbb{Z}_{q}^{n}$, $u\leftarrow_{U}\mathbb{Z}_{q}^{m}$ and
$e^{\prime}\leftarrow_{D^{m}_{\mathbb{Z}_{q},B^{\prime}}}\mathbb{Z}_{q}^{m}$.
In the previous proof we saw that the ETCF functions based on $\mathrm{LWE}$
can be evaluated by circuits of logarithmic depth. It seems unlikely, however,
that the same functions could be evaluated by circuits of constant depth. To
get around this issue, we make use of the compiling techniques from [AIK06]
that can take a one-way function with log-depth circuit complexity and map it
to a corresponding one-way function having constant-depth circuit complexity.
This is achieved through the use of a randomized polynomial encoding, in which
the value of a function is encoded in a series of points that can be computed
using a constant-depth circuit. Formally, we have that,
###### Theorem 9.
$\mathrm{LWE}$ $\leq_{P}$ ED${}_{\textrm{O(1)}}$.
###### Proof.
As shown in Theorem 8, the circuits $C_{f}$ and $C_{g}$ constructed from the
ETCF functions can be evaluated in log depth. Using Theorem 5 from [AIK06],
this means that there exist randomized encodings $\hat{C}_{f}$ and
$\hat{C}_{g}$ for the two circuits, that can be computed in
$\mathsf{NC}^{0}_{4}$. To prove our reduction we need to show two things: that
the randomized encodings of $C_{f}$ and $C_{g}$ preserve the injective
invariance property (i.e. distinguishing between the randomized encodings is
as hard as $\mathrm{LWE}$); that the randomized encodings are $1$-to-$1$ and
$2$-to-$1$ functions, respectively. This last condition is required so that
when we evaluate $\hat{C}_{f}$ and $\hat{C}_{g}$ with uniform inputs, it is
still the case that $\hat{C}_{f}$ has more entropy in its output than
$\hat{C}_{g}$.
Showing that injective invariance is satisfied is immediate. Suppose, for the
sake of contradiction, that there exists an algorithm that has non-negligible
advantage in distinguishing $\hat{C}_{f}$ and $\hat{C}_{g}$. It is easy to see
that this leads to an efficient algorithm for distinguishing $C_{f}$ and
$C_{g}$ with non-negligible advantage. Given instances of $C_{f}$ and $C_{g}$
we can construct the randomized encodings $\hat{C}_{f}$ and $\hat{C}_{g}$.
This can be done in polynomial time, according to Theorem 5. We then use our
distinguisher on the randomized encodings and this then allows us to
distinguish between $C_{f}$ and $C_{g}$ which contradicts the injective
invariance property.
We now show that the randomized encodings are $1$-to-$1$ and $2$-to-$1$,
respectively. Recall first that the randomized encodings take two arguments,
$x$ and $r$. From Lemma 1 we know that for any fixed input $x$, the
encodings are injective in the second argument. In addition, the perfect
correctness property (from Definition 8) guarantees that there are simulators
$S_{f}$ and $S_{g}$ such that $S_{f}(\hat{C}_{f}(x,r))=C_{f}(x)$ and
$S_{g}(\hat{C}_{g}(x,r))=C_{g}(x)$, for all $x$ and $r$. This ensures that if
$C_{f}$ is injective then $\hat{C}_{f}$ will also be injective. It also
ensures that whenever there exists a collision in $C_{g}$, i.e. $x_{1}$,
$x_{2}$ such that $C_{g}(x_{1})=C_{g}(x_{2})$, there will be a corresponding
collision for $\hat{C}_{g}$, i.e.
$\hat{C}_{g}(x_{1},r_{1})=\hat{C}_{g}(x_{2},r_{2})$, where for each $r_{1}$
there is a corresponding $r_{2}$. It follows that the randomized encodings $\hat{C}_{f}$ and
$\hat{C}_{g}$ will have the same entropy difference as $C_{f}$ and $C_{g}$,
concluding the proof. ∎
Just as with the log-depth case, we also have:
###### Corollary 2.
ED${}_{\textrm{O(1)}}$ is hard-on-average, based on $\mathrm{LWE}$ when the
input circuits have the structure given in Equations 30, 31 and are chosen at
random according to $A\leftarrow_{U}\mathbb{Z}_{q}^{m\times n}$,
$s\leftarrow_{U}\mathbb{Z}_{q}^{n}$, $u\leftarrow_{U}\mathbb{Z}_{q}^{m}$ and
$e^{\prime}\leftarrow_{D^{m}_{\mathbb{Z}_{q},B^{\prime}}}\mathbb{Z}_{q}^{m}$.
###### Proof.
The argument is the same as for the log-depth case: for most choices of the
circuit parameters (i.e. the matrix $A$, the vectors $s$ and $u$ and the error
vector $e^{\prime}$), we obtain instances of the circuits that satisfy the
injective invariance property. As the above proof shows, this remains true for
the randomized encodings of these functions as well. ∎
In the above proofs, we didn’t explicitly make use of the fact that the
circuits are reversible, though the same results hold in those cases as well
(provided we trace out the ancilla required to perform the reversible
gates). Since classical reversible circuits are a particular kind of quantum
circuits, these results have the corollary that:
###### Corollary 3.
$\mathrm{LWE}$ $\leq_{P}$ QED${}_{\textrm{O(1)}}$.
###### Proof.
Follows from Theorem 9 together with the fact that ED${}_{\textrm{O(1)}}$
$\leq_{P}$ QED${}_{\textrm{O(1)}}$. ∎
## 4 Hamiltonian quantum entropy difference
In this section we consider a Hamiltonian analogue of QED which we call
Hamiltonian Quantum Entropy Difference (HQED). The problem will be to estimate
the _entanglement entropy_ difference between the ground states of two local
Hamiltonians. Equivalently, if we trace out parts of the ground states and
examine the resulting reduced states, we want to know which of the two has
higher Von Neumann entropy. Formally:
###### Definition 12 (HQED).
Let $H_{1}$ and $H_{2}$ be local Hamiltonians acting on $n+k$ qubits, whose
ground states are $\ket{\psi_{1}}$ and $\ket{\psi_{2}}$. Define the following
$n$-qubit mixed states:
$\rho_{1}=Tr_{k}(\ket{\psi_{1}}\bra{\psi_{1}})\quad\quad\quad\quad\rho_{2}=Tr_{k}(\ket{\psi_{2}}\bra{\psi_{2}})$
(34)
Given $n$, $k$ and descriptions of $H_{1}$ and $H_{2}$ as input, decide
whether:
$S(\rho_{1})\geq S(\rho_{2})+1$ (35)
or
$S(\rho_{2})\geq S(\rho_{1})+1$ (36)
promised that one of these is the case. For the cases where either of the two
Hamiltonians has a degenerate groundspace, $\ket{\psi_{j}}$ will denote a
state in the groundspace for which $S(\rho_{j})$ is minimal.
We will refer to HQED${}_{\textrm{log}}$ and HQED${}_{\textrm{O(1)}}$,
respectively, as instances of HQED in which the input Hamiltonians have the
additional promise that purifications of the states $\rho_{1}$ and $\rho_{2}$
can be approximated (to within a $1/poly(n+k)$ additive error in trace
distance) by quantum circuits of logarithmic and constant depth, respectively.
###### Theorem 10.
There exists a deterministic poly-time reduction from QED to HQED
($\textsc{QED}\leq_{P}\textsc{HQED}$).
###### Proof.
We could start the reduction by constructing Hamiltonians $H_{1}$ and $H_{2}$
such that the respective ground states $\ket{\psi_{1}}$ and $\ket{\psi_{2}}$
are Feynman-Kitaev history states for quantum circuits $C_{1}$ and $C_{2}$
respectively,
$\ket{\psi_{j}}=\frac{1}{\sqrt{T+1}}\sum_{t=0}^{T}U_{t}U_{t-1}...U_{1}\ket{00...0}\ket{t}$
(37)
where $C_{j}=U_{T}U_{T-1}...U_{1}$, $j\in\\{1,2\\}$ and the states $\ket{t}$
are the _clock states_ in the Feynman-Kitaev history state construction.
However, a priori this does not guarantee that determining the entropy
difference between the reduced states of $\ket{\psi_{1}}$ and $\ket{\psi_{2}}$
implies determining the entropy difference between the reduced states created
by $C_{1}$ and $C_{2}$ respectively. This is because the output state of
$C_{j}$ constitutes only one term in the history state superposition.
Furthermore, we have no information about the entropy difference between the
other terms in $\ket{\psi_{1}}$ relative to their counterparts in
$\ket{\psi_{2}}$. To resolve this, we will use a trick from [NVY18], where a
circuit can be padded at the end with identities to give more weight to the
“final term” in the Feynman-Kitaev history state. We will now explain this
construction.
Figure 2: Circuit $C_{j}$ padded with identities.
Given the input circuit $C_{j}$ to the problem QED, first apply $N$ identity
operators at the end of the circuit to each qubit, as depicted in Figure 2; we
will call this the padded circuit. Clearly this does not affect the final
state, but it will be useful in the reduction to HQED. We now construct the
Feynman-Kitaev history state from the padded circuit, which is
$\ket{\psi_{j}}=\frac{1}{\sqrt{T+N+1}}\sum_{t=0}^{T+N}U_{t}U_{t-1}...U_{1}\ket{00...0}\ket{t}$
(38)
Note that all unitaries $U_{i}$ for $T+1\leq i\leq T+N$ satisfy
$U_{i}=\mathbb{I}^{\otimes(n+k)}$, thus we can simplify the state
$\ket{\psi_{j}}$ to be
$\ket{\psi_{j}}=\frac{1}{\sqrt{T+N+1}}\left((N+1)U_{T}U_{T-1}...U_{1}\ket{00...0}\sum_{t^{\prime}=T}^{T+N}\ket{t^{\prime}}+\sum_{t=0}^{T-1}U_{t}U_{t-1}...U_{1}\ket{00...0}\ket{t}\right).$
(39)
By the Feynman-Kitaev construction there is a local Hamiltonian, having
$poly(n+k)$ terms, for which this is a ground state. This Hamiltonian will act
on $n+k+N+T$ qubits due to the clock states $\ket{t}$, which are encoded in
unary. Let us now examine the reduced states obtained by tracing out the clock
register, which we denote as $\sigma_{j}$. In addition, to simplify the
notation, we also denote $\ket{\phi_{j}(t)}=U_{t}U_{t-1}...U_{1}\ket{00...0}$
(with $\ket{\phi_{j}(0)}=\ket{00...0}$), so that the padded history state can
be written as
$\ket{\psi_{j}}=\frac{1}{\sqrt{T+N+1}}\left((N+1)\ket{\phi_{j}(T)}\sum_{t^{\prime}=T}^{T+N}\ket{t^{\prime}}+\sum_{t=0}^{T-1}\ket{\phi_{j}(t)}\ket{t}\right).$
(40)
Note that $\ket{\phi_{j}(T)}$ is the output state of the circuit $C_{j}$. If
we now trace out the clock register, we have
$\sigma_{j}=Tr_{t}(\ket{\psi_{j}}\bra{\psi_{j}})=\frac{N+1}{T+N+1}\ket{\phi_{j}(T)}\bra{\phi_{j}(T)}+\frac{1}{T+N+1}\sum_{t=0}^{T-1}\ket{\phi_{j}(t)}\bra{\phi_{j}(t)}$
(41)
Computing the fidelity between $\sigma_{j}$ and $\ket{\phi_{j}(T)}$ we get
$F(\sigma_{j},\ket{\phi_{j}(T)}\bra{\phi_{j}(T)})=\bra{\phi_{j}(T)}\sigma_{j}\ket{\phi_{j}(T)}=\frac{N+1}{T+N+1}+\frac{1}{T+N+1}\sum_{t=0}^{T-1}|\braket{\phi_{j}(t)}{\phi_{j}(T)}|^{2}$
(42)
hence
$F(\sigma_{j},\ket{\phi_{j}(T)}\bra{\phi_{j}(T)})\geq\frac{N+1}{T+N+1}=1-\frac{T}{T+N+1}.$
(43)
By taking $N=poly(T)$, and given that $T=poly(n+k)$, we get that
$F(\sigma_{j},\ket{\phi_{j}(T)}\bra{\phi_{j}(T)})\geq 1-\frac{1}{poly(n+k)}.$
(44)
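For concreteness, choosing $N=T^{2}$ already gives $F(\sigma_{j},\ket{\phi_{j}(T)}\bra{\phi_{j}(T)})\geq 1-\frac{T}{T^{2}+T+1}\geq 1-\frac{1}{T}$, so, since $T=poly(n+k)$, any desired inverse-polynomial accuracy is reached with a polynomial amount of padding.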
From the relationship between fidelity and trace distance this also means that
$TD(\sigma_{j},\ket{\phi_{j}(T)}\bra{\phi_{j}(T)})\leq\frac{1}{poly(n+k)}.$
(45)
If we now trace out the $k$ qubits from both of these states and use the fact
that the trace distance is non-increasing under this operation, we get
$TD(\rho^{\prime}_{j},\rho_{j})\leq\frac{1}{poly(n+k)},$ (46)
where $\rho_{j}$ is the output state of $C_{j}$ when tracing out the subsystem
of $k$ qubits and $\rho^{\prime}_{j}$ is the analogous state for the
Hamiltonian $H_{j}$. Next, we apply the Fannes-Audanaert inequality [Fan73,
Aud07] relating trace distance and entropy, which says that if
$TD(\rho^{\prime}_{j},\rho_{j})\leq\epsilon$ (47)
then
$|S(\rho^{\prime}_{j})-S(\rho_{j})|\leq\frac{\epsilon}{2}(n-1)+h\left(\frac{\epsilon}{2}\right),$
(48)
where $h$ is the binary entropy function. Given that $\epsilon=1/poly(n+k)$,
it follows that
$|S(\rho^{\prime}_{j})-S(\rho_{j})|\leq\frac{1}{poly(n+k)}.$ (49)
By the triangle inequality we get that if $S(\rho_{1})\geq S(\rho_{2})+O(1)$,
then $S(\rho^{\prime}_{1})\geq S(\rho^{\prime}_{2})+O(1)$ and if
$S(\rho_{2})\geq S(\rho_{1})+O(1)$, then $S(\rho^{\prime}_{2})\geq
S(\rho^{\prime}_{1})+O(1)$.
Thus, from the instance of QED $(n+k,k,C_{1},C_{2})$ we have constructed an
instance of HQED $(n+k+N+T,k+N+T,H_{1},H_{2})$ that preserves the entropy
difference of the original circuits (up to a $1/poly(n+k)$ error). This
concludes the proof. ∎
###### Corollary 4.
QED${}_{\textrm{log}}$ $\leq_{P}$ HQED${}_{\textrm{log}}$,
QED${}_{\textrm{O(1)}}$ $\leq_{P}$ HQED${}_{\textrm{O(1)}}$.
###### Proof.
In the proof of Theorem 10, we constructed Hamiltonians $H_{1}$ and $H_{2}$
for which the reduced ground states, $\rho^{\prime}_{1}$ and
$\rho^{\prime}_{2}$, are close in trace distance to the output states of the
QED circuits $C_{1}$ and $C_{2}$. Furthermore, the padded construction used in
the previous theorem does not alter the depth of the original circuits, since
we are padding with identity gates. Thus, the depth of the circuits required
to approximate $\rho^{\prime}_{1}$ and $\rho^{\prime}_{2}$ (to within additive
error $1/poly(n+k)$ in trace distance) is upper bounded by the depth of
$C_{1}$ and $C_{2}$ (as $\rho_{1}$ and $\rho_{2}$ are approximations of these
states). For the cases in which these circuits are log-depth or constant-
depth, respectively, we obtain instances of HQED${}_{\textrm{log}}$ and
HQED${}_{\textrm{O(1)}}$, respectively. ∎
For the constant depth case, in Appendix B, we give a different construction
for a local Hamiltonian for which the ground state is _exactly_ the output of
a quantum circuit from QED${}_{\textrm{O(1)}}$.
### 4.1 Applications to holography
Holographic duality is an idea inspired by early results of Bekenstein and
Hawking showing that the entropy of a black hole is proportional to its area
[Bek73, Haw71]. This connection between a quantum mechanical property, the Von
Neumann entropy of a quantum state, and geometry, in the form of the black
hole area, was later expanded upon through the AdS/CFT correspondence [Mal99].
Briefly, the AdS/CFT correspondence is a duality between a non-gravitational
quantum field theory (the conformal field theory, or CFT) and a quantum
gravitational theory that takes place in an Anti-de Sitter (AdS) space-time.
The CFT is defined on the boundary of the AdS space. The purpose of the
correspondence is to be able to relate physical observables in the bulk to
observables on the boundary and vice versa through the so-called _AdS
dictionary_ (or bulk-to-boundary and boundary-to-bulk maps). This would allow
for the derivation of predictions in the quantum gravitational bulk theory
purely from a non-gravitational boundary theory.
Similar to the Bekenstein-Hawking result, Ryu and Takayanagi showed a
correspondence between geometry and entanglement in AdS/CFT [RT06]. This is
known as the _Ryu-Takayanagi_ formula and it states that, to leading order,
the entropy of a state on the boundary CFT is given by the area of a minimal
bulk surface that encloses that state.
Since entropy is an important quantity of interest in AdS/CFT, we discuss
potential implications of our result for this duality. (We note that $\sf
QSZK$ has appeared before in holography in the context of the Harlow-Hayden
decoding task [HH13]. Briefly, Harlow and Hayden considered the task of
decoding information from the Hawking radiation of a black hole and showed
that one could encode a $\sf QSZK$-complete problem in this task. The result
was later improved by Aaronson who showed that decoding the information from
the radiation would allow one to invert general one-way functions [Aar16].) In
particular we propose a gedankenexperiment based on our results that gives
evidence for certain instances of AdS/CFT having the AdS dictionary be
computationally intractable to compute, unless $\mathrm{LWE}$ is tractable. A
similar result was obtained by Bouland et al [BFV19], in the context of the
wormhole growth paradox. Their result also uses cryptographic techniques in
the form of pseudorandom quantum states. In contrast to our setting, they only
require that such states exist and are computationally indistinguishable,
whereas we are using the more fine-grained $\mathrm{LWE}$ assumption.
Roughly speaking, the main idea here is that the Ryu-Takayanagi formula
relates a quantity that we have shown is hard to compute even for shallow
circuits (the entropy), to a quantity that seemingly can be efficiently
computed, the area of a surface. Thus assuming that $\mathrm{LWE}$ is hard for
polynomial time quantum computers, we arrive at a contradiction. A potential
resolution is that the AdS dictionary does not efficiently translate from the
bulk to the boundary.
In the following thought experiment, we will assume that CFT states can be
prepared efficiently starting from descriptions of functions $f$ and $g$, such
as the ETCF functions used in Theorem 8, that are $1$-to-$1$ and $2$-to-$1$,
respectively. Furthermore, it should be the case that there is a constant
difference in entanglement entropy for the two types of states242424As alluded
to in the proof of Theorem 8, we can in fact make the entropy difference be
any constant, $k$, by taking $g$ to be a $2^{k}$-to-$1$ function (and changing
$f$ appropriately, though keeping it a $1$-to-$1$ function).. To give
arguments for why we think this is true, first note that, as stated in Theorem
4, we can construct local Hamiltonians for which the ground states will indeed
encode instances of such functions. These ground states will have different
entanglement entropy depending on which function was used.
A second argument is based on the observation that certain quantum error-
correcting codes serve as toy models for the AdS/CFT correspondence [PYHP15].
Specifically, as discussed in [Har17], codes that protect against erasure
errors constitute such toy models. They satisfy the property that encoded
information can be recovered by acting on only a fraction of the qubits in the
encoded state. As an example of this (taken from [Har17]), if we let
$\ket{\tilde{\psi}}_{123}$ be an encoding of $\ket{\psi}$ on three subsystems,
it should be that there exists a unitary $U_{12}$ such that
$\ket{\tilde{\psi}}=U_{12}[\ket{\psi}_{1}\otimes\ket{\chi}_{23}]$ (50)
as well as corresponding unitaries $U_{13}$ and $U_{23}$ that act in a similar
way. Here $\ket{\chi}$ is the maximally entangled state
$\ket{\chi}=\sum_{i}\ket{i}\ket{i}$. As a toy model for AdS/CFT, the state
$\ket{\psi}$ represents the quantum state of a bulk observer and
$\ket{\tilde{\psi}}$ will be the corresponding CFT state that lives on the
boundary of the AdS space. The indices label three different subsystems on the
boundary and Equation 50 simply states that the bulk information (the state
$\ket{\psi}$) can be recovered by acting on only part of the boundary. As
shown in [Har17], these states satisfy a Ryu-Takayanagi formula (in addition
to other properties that are satisfied by AdS/CFT). Specifically, the
entanglement entropy of $\ket{\chi}$ corresponds to the area of a bulk
surface. One could imagine considering the states
$\ket{\chi_{f}}=\sum_{x}\ket{x}\ket{f(x)}\quad\quad\quad\ket{\chi_{g}}=\sum_{x}\ket{x}\ket{g(x)}$
(51)
instead of $\ket{\chi}$, where $f$ and $g$ are $1$-to-$1$ and $2$-to-$1$,
respectively. In this case the difference in entanglement entropy will be
determined by whether the function that was used was $1$-to-$1$ or $2$-to-$1$.
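To make this quantitative, consider the idealised case where the states in Equation 51 are normalised over $x\in\\{0,1\\}^{n}$ and $g$ is exactly $2$-to-$1$ (the ETCF functions satisfy this only up to negligible corrections). If $f$ is $1$-to-$1$, tracing out the second register of $\frac{1}{\sqrt{2^{n}}}\sum_{x}\ket{x}\ket{f(x)}$ leaves the maximally mixed state on $n$ qubits, with entanglement entropy $n$. If $g$ is $2$-to-$1$, the reduced state decomposes into $2^{n-1}$ rank-one blocks $\frac{1}{2^{n}}(\ket{x_{1}}+\ket{x_{2}})(\bra{x_{1}}+\bra{x_{2}})$, one for each claw $\\{x_{1},x_{2}\\}$, each with eigenvalue $2^{1-n}$, giving entanglement entropy $n-1$: a gap of exactly one bit.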
States such as the ones from Equation 51, or even analogous weighted
superpositions of such states, are efficiently preparable (according to the
efficient range superposition properties from Definitions 4 and 5).
Finally, note that since $f$ and $g$ themselves can be implemented by circuits
of constant depth, as shown in Theorem 9, the quantum states derived from
these functions (such as $\ket{\chi_{f}}$ or $\ket{\chi_{g}}$) could also be
prepared by short depth quantum circuits. This would be consistent with a
conjecture by Swingle that the underlying CFT states that lead to the Ryu-
Takayanagi formula are well approximated by states resulting from MERA
(_multi-scale entanglement renormalization ansatz_) tensor networks [Swi12].
Such MERA states essentially have log-depth quantum circuit descriptions. Let
us now describe our thought experiment.
Suppose Alice has a quantum computer and is given the description of a
function, denoted $h$, which is promised to be either $f$ (which is
$1$-to-$1$) or $g$ (which is $2$-to-$1$) from Equation 29. Alice is then
asked whether the function she received is $1$-to-$1$ or $2$-to-$1$. By the
injective invariance property (Definition 6) this is as hard to determine as
solving $\mathrm{LWE}$. Suppose now that Alice uses her quantum computer to do
the following:
1. 1.
She first prepares a state $\ket{\psi^{h}_{CFT}}$ that is supposed to
represent a CFT state whose entanglement entropy is determined by the type of
function of $h$. In other words, if $h$ is $f$, we will say that the state has
high entanglement entropy and if $h$ is $g$ we will say it has low
entanglement entropy. As discussed above, we conjecture that there should
exist an efficient procedure for preparing such a state, given the function
description.
2. 2.
By the AdS/CFT correspondence, $\ket{\psi^{h}_{CFT}}$ should be dual to a
state $\ket{\psi^{h}_{bulk}}$ in the bulk. In this bulk space-time, under the
Ryu-Takayanagi formula the area of a certain surface $\gamma_{h}$ will be
equal252525Or be approximately equal. to the entanglement entropy of
$\ket{\psi^{h}_{CFT}}$. Using the AdS dictionary, Alice then considers a bulk
Hamiltonian $H_{bulk}$ such that the time evolution of $\ket{\psi^{h}_{bulk}}$
under $H_{bulk}$ corresponds to an observer in the bulk measuring the area of
$\gamma_{h}$. If this fictional bulk observer notices that the area of
$\gamma_{h}$ is above a certain threshold (corresponding to the case of high
entropy), it will “reset itself” so that at the end of the evolution it
returns to the state $\ket{\psi^{h}_{bulk}}$ (and so the corresponding CFT
state returns to $\ket{\psi^{h}_{CFT}}$). If, on the other hand the area is
below the threshold (corresponding to low entropy) it should then map itself
into a state for which the dual CFT state is “as close to orthogonal to
$\ket{\psi^{h}_{CFT}}$ as possible”. In other words, we would like this state
to be distinguishable from $\ket{\psi^{h}_{CFT}}$. A schematic illustration of
the fictional bulk observer’s two perspectives is shown in Figure 3. The time
required for the observer to perform the measurement should be proportional to
the area of $\gamma_{h}$ (note that since we don’t know the entanglement
entropy of $\ket{\psi^{h}_{CFT}}$ in advance, we can take the time we evolve
by $H_{bulk}$ to be the longer of the two choices, corresponding to the case
of maximum entropy).
3. 3.
Using the AdS dictionary, Alice computes the boundary CFT Hamiltonian
$H_{CFT}$ that is dual to $H_{bulk}$. She then time-evolves her state
$\ket{\psi^{h}_{CFT}}$ with $H_{CFT}$. Under the AdS/CFT correspondence the
evolution of her state will be dual to the time-evolution of the bulk observer
that is performing the measurement of $\gamma_{h}$. Alice is, in effect,
simulating this process on her quantum computer.
4. 4.
At the end of the evolution, Alice performs SWAP tests to check whether the
state she is left with is $\ket{\psi^{h}_{CFT}}$. If this is the case, she
concludes that the original function was $1$-to-$1$, otherwise she concludes
that it is $2$-to-$1$.
Figure 3: The two situations for the bulk observer. The observer should
measure the area of the surface $\gamma$ to determine which situation it is
in.
If all the steps in the above procedure can be performed efficiently, then
this experiment would violate the injective invariance property of ETCF
functions, since it can efficiently distinguish between the two function
types. Correspondingly, we would have an efficient quantum algorithm for
solving $\mathrm{LWE}$. Since we believe that this is unlikely, we would need
to determine which of the above steps is intractable. As mentioned, we
conjecture that preparing the CFT states should be efficient. The time-
evolution under a Hamiltonian that Alice has to perform should also be
efficient using standard techniques from quantum simulation [BMK10, BCK15].
One step that seems potentially problematic is step 3. Here the bulk observer
needs to affect its space-time so as to map Alice’s state to one of two
efficiently distinguishable states. It certainly seems plausible that the bulk
observer can do very different things depending on the area it measures,
resulting in completely different bulk states. But it is unclear whether the
resulting dual CFT states would then be distinguishable by Alice. The other
possible source of intractability is the use of the AdS dictionary. If the
dictionary is exponentially complex, Alice cannot determine the state that is
dual to her CFT state or what her boundary Hamiltonian should be.
An important observation to make here is that the entropy difference between
the two cases is constant and one could argue that we should not expect bulk
observers to be able to efficiently detect such small differences in geometry.
Indeed, it might be more relevant to have a scenario in which the _entropy
ratio_ is constant instead, since this would correspond to a noticeable change
in area between the two cases (and would be more in line with the portrayal in
Figure 3). As mentioned in the discussion from Section 1, we conjecture that
even estimating the entropy ratio should be hard based on $\mathrm{LWE}$.
Specifically, by considering extensions of the ETCF functions in which the
function $g$ is taken to be $2^{m}$-to-$1$, with $m=poly(n+k)$, we would
achieve a constant (or even polynomial) ratio between the entropies of the two
functions. The results from the previous sections would still allow for these
functions to be evaluated in constant depth, leading to the desired result.
A final comment we make about the above experiment is that it does not require
holographic duality to be true for our own universe. Indeed, as long as
AdS/CFT is true it should in principle be possible to simulate dynamics in a
“virtual” AdS space by constructing CFT states on a quantum computer and
evolving them under the CFT Hamiltonian. Can instances of $\mathrm{LWE}$ and
ETCF functions be encoded in these CFT states? We leave answering this
question and formalizing the above experiment for future work.
## References
* [Aar02] Scott Aaronson. Quantum lower bound for the collision problem. In Proceedings of the Thirty-Fourth Annual ACM Symposium on Theory of Computing, STOC ’02, page 635–642, New York, NY, USA, 2002. Association for Computing Machinery.
* [Aar16] Scott Aaronson. The complexity of quantum states and transformations: from quantum money to black holes. arXiv preprint arXiv:1607.05256, 2016.
* [AIK06] Benny Applebaum, Yuval Ishai, and Eyal Kushilevitz. Cryptography in $\mathsf{NC^{0}}$. SIAM Journal on Computing, 36(4):845–888, 2006.
* [AISW19] Jayadev Acharya, Ibrahim Issa, Nirmal V Shende, and Aaron B Wagner. Measuring quantum entropy. In 2019 IEEE International Symposium on Information Theory (ISIT), pages 3012–3016. IEEE, 2019.
* [Aud07] Koenraad MR Audenaert. A sharp continuity estimate for the von Neumann entropy. Journal of Physics A: Mathematical and Theoretical, 40(28):8127, 2007.
* [BASTS10] Avraham Ben-Aroya, Oded Schwartz, and Amnon Ta-Shma. Quantum expanders: Motivation and construction. Theory of Computing, 6(3):47–79, 2010.
* [BBPS96] C. H. Bennett, H. J. Bernstein, S. Popescu, and B. Schumacher. Concentrating partial entanglement by local operations. Phys. Rev. A, 53:2046, 1996.
* [BCK15] Dominic W Berry, Andrew M Childs, and Robin Kothari. Hamiltonian simulation with nearly optimal dependence on all parameters. In 2015 IEEE 56th Annual Symposium on Foundations of Computer Science, pages 792–809. IEEE, 2015.
* [BCM+18] Z. Brakerski, P. Christiano, U. Mahadev, U. Vazirani, and T. Vidick. A cryptographic test of quantumness and certifiable randomness from a single quantum device. In 2018 IEEE 59th Annual Symposium on Foundations of Computer Science (FOCS), pages 320–331, Oct 2018.
* [Bek73] Jacob D. Bekenstein. Black holes and entropy. Phys. Rev. D, 7:2333–2346, Apr 1973.
* [BFV19] Adam Bouland, Bill Fefferman, and Umesh Vazirani. Computational pseudorandomness, the wormhole growth paradox, and constraints on the AdS/CFT duality, 2019. Eprint:arXiv:1910.14646.
* [BGM19] Sergey Bravyi, David Gosset, and Ramis Movassagh. Classical algorithms for quantum mean values. arXiv preprint arXiv:1909.11485, 2019.
* [BLP+13] Zvika Brakerski, Adeline Langlois, Chris Peikert, Oded Regev, and Damien Stehlé. Classical hardness of learning with errors. In Proceedings of the forty-fifth annual ACM symposium on Theory of computing, pages 575–584, 2013.
* [BMK10] Katherine L Brown, William J Munro, and Vivien M Kendon. Using quantum computers for quantum simulation. Entropy, 12(11):2268–2307, 2010.
* [BPR12] Abhishek Banerjee, Chris Peikert, and Alon Rosen. Pseudorandom functions and lattices. In Annual International Conference on the Theory and Applications of Cryptographic Techniques, pages 719–737. Springer, 2012.
* [CCL19] Nai-Hui Chia, Kai-Min Chung, and Ching-Yi Lai. On the need for large quantum depth, 2019. Eprint:arXiv:1909.10303.
* [CM19] Matthew Coudron and Sanketh Menda. Computations with greater quantum depth are strictly more powerful (relative to an oracle), 2019. Eprint:arXiv:1909.10503.
* [DKRB02] Sanjoy Dasgupta, Ravi Kumar, Ronitt Rubinfeld, and Tugkan Batu. The complexity of approximating the entropy. In Proceedings 17th IEEE Annual Conference on Computational Complexity, pages 0017–0017, 2002.
* [DW05] Igor Devetak and Andreas Winter. Distillation of secret key and entanglement from quantum states. Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences, 461:207–235, 2005.
* [ECP10] J. Eisert, M. Cramer, and M. B. Plenio. Colloquium: Area laws for the entanglement entropy. Rev. Mod. Phys., 82:277–306, Feb 2010.
* [Fan73] Mark Fannes. A continuity property of the entropy density for spin lattice systems. Communications in Mathematical Physics, 31(4):291–294, 1973.
* [GHR+16] John Goold, Marcus Huber, Arnau Riera, Lídia del Rio, and Paul Skrzypczyk. The role of quantum information in thermodynamics—a topical review. Journal of Physics A: Mathematical and Theoretical, 49(14):143001, feb 2016.
* [GV99] Oded Goldreich and Salil Vadhan. Comparing entropies in statistical zero knowledge with applications to the structure of $\mathsf{SZK}$. In Proceedings of the Fourteenth Annual IEEE Conference on Computational Complexity, COCO ’99, page 54, USA, 1999. IEEE Computer Society.
* [GV19] A. Gheorghiu and T. Vidick. Computationally-secure and composable remote state preparation. In 2019 IEEE 60th Annual Symposium on Foundations of Computer Science (FOCS), pages 1024–1033, Nov 2019.
* [Har17] Daniel Harlow. The Ryu–Takayanagi formula from quantum error correction. Communications in Mathematical Physics, 354(3):865–912, 2017.
* [Haw71] S. W. Hawking. Gravitational radiation from colliding black holes. Phys. Rev. Lett., 26:1344–1346, May 1971.
* [HGKM10] Matthew B. Hastings, Iván González, Ann B. Kallin, and Roger G. Melko. Measuring Renyi entanglement entropy in quantum Monte Carlo simulations. Phys. Rev. Lett., 104:157201, Apr 2010.
* [HH13] Daniel Harlow and Patrick Hayden. Quantum computation vs. firewalls. Journal of High Energy Physics, 2013(6):85, 2013.
* [Ji17] Zhengfeng Ji. Compression of quantum multi-prover interactive proofs. In Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing, STOC 2017, page 289–302, New York, NY, USA, 2017. Association for Computing Machinery.
* [JVHW17] J. Jiao, K. Venkat, Y. Han, and T. Weissman. Maximum likelihood estimation of functionals of discrete distributions. IEEE Transactions on Information Theory, 63(10):6774–6798, Oct 2017.
* [KSV02] Alexei Yu Kitaev, Alexander Shen, and Mikhail N Vyalyi. Classical and quantum computation. Number 47. American Mathematical Soc., 2002.
* [Mah18] U. Mahadev. Classical verification of quantum computations. In 2018 IEEE 59th Annual Symposium on Foundations of Computer Science (FOCS), pages 259–267, Oct 2018.
* [Mal99] Juan Maldacena. The large-N limit of superconformal field theories and supergravity. International Journal of Theoretical Physics, 38:1113, 1999.
* [MV20] Tony Metger and Thomas Vidick. Self-testing of a single quantum device under computational assumptions, 2020. Eprint:arXiv:2001.09161.
* [NVY18] Chinmay Nirkhe, Umesh Vazirani, and Henry Yuen. Approximate Low-Weight Check Codes and Circuit Lower Bounds for Noisy Ground States. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018), volume 107 of Leibniz International Proceedings in Informatics (LIPIcs), pages 91:1–91:11, 2018.
* [NY04] Harumichi Nishimura and Tomoyuki Yamakami. Polynomial time quantum computation with advice. Information Processing Letters, 90(4):195 – 204, 2004.
* [OW15] Ryan O’Donnell and John Wright. Quantum spectrum testing. In Proceedings of the forty-seventh annual ACM symposium on Theory of computing, pages 529–538, 2015.
* [Pei10] Chris Peikert. An efficient and parallel Gaussian sampler for lattices. In Annual Cryptology Conference, pages 80–97. Springer, 2010.
* [Pei16] Chris Peikert. A decade of lattice cryptography. Foundations and Trends® in Theoretical Computer Science, 10(4):283–424, 2016.
* [PRSD17] Chris Peikert, Oded Regev, and Noah Stephens-Davidowitz. Pseudorandomness of Ring-LWE for any ring and modulus. In Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing, STOC 2017, page 461–473, New York, NY, USA, 2017. Association for Computing Machinery.
* [PYHP15] Fernando Pastawski, Beni Yoshida, Daniel Harlow, and John Preskill. Holographic quantum error-correcting codes: Toy models for the bulk/boundary correspondence. Journal of High Energy Physics, 2015(6):149, 2015.
* [Reg09] Oded Regev. On lattices, learning with errors, random linear codes, and cryptography. J. ACM, 56(6), September 2009.
* [Ros08] Bill Rosgen. Distinguishing Short Quantum Computations. In 25th International Symposium on Theoretical Aspects of Computer Science, volume 1 of Leibniz International Proceedings in Informatics (LIPIcs), pages 597–608, 2008.
* [RT06] Shinsei Ryu and Tadashi Takayanagi. Holographic derivation of entanglement entropy from the anti–de Sitter space/Conformal Field Theory correspondence. Phys. Rev. Lett., 96:181602, May 2006.
* [Sch95] B. Schumacher. Quantum coding. Phys. Rev. A, 51:2738, 1995.
* [SH19] Sathyawageeswar Subramanian and Min-Hsiu Hsieh. Quantum algorithm for estimating Renyi entropies of quantum states. arXiv preprint arXiv:1908.05251, 2019.
* [Swi12] Brian Swingle. Entanglement renormalization and holography. Phys. Rev. D, 86:065007, Sep 2012.
* [VV11] Gregory Valiant and Paul Valiant. The power of linear estimators. In Proceedings of the 2011 IEEE 52nd Annual Symposium on Foundations of Computer Science, FOCS ’11, page 403–412, USA, 2011. IEEE Computer Society.
* [VW16] Thomas Vidick and John Watrous. Quantum proofs. Foundations and Trends® in Theoretical Computer Science, 11(1-2):1–215, 2016.
* [Wal64] Christopher S Wallace. A suggestion for a fast multiplier. IEEE Transactions on electronic Computers, (1):14–17, 1964.
* [WY16] Y. Wu and P. Yang. Minimax rates of entropy estimation on large alphabets via best polynomial approximation. IEEE Transactions on Information Theory, 62(6):3702–3720, June 2016.
* [zoo] Complexity Zoo. https://complexityzoo.uwaterloo.ca/Complexity_Zoo.
## Appendix A An unconditional lower bound on entropy estimation
A question we can ask concerning entropy estimation is whether the entropy of
a quantum state, $\rho$, on $n$ qubits can be estimated up to additive error
$\epsilon$ in time (and with a number of copies of $\rho$) that scales as
$poly(n,\log(1/\epsilon))$. While it has already been shown in [AISW19, SH19]
that the answer is no, here we give a complexity-theoretic proof of this fact:
###### Theorem 11.
There is no quantum algorithm running in time $poly(n,\log(1/\epsilon))$ for
estimating the entropy of an $n$-qubit quantum state $\rho$ to within additive
error $\epsilon>0$.
###### Proof.
We prove this result by contradiction. Suppose that a
$poly(n,\log(1/\epsilon))$ quantum algorithm for entropy estimation existed.
We will argue that this implies $\mathsf{BQP/qpoly}=\mathsf{ALL}$, where
$\mathsf{BQP/qpoly}$ denotes the set of languages that can be decided by a
polynomial-time quantum algorithm with quantum advice, and $\mathsf{ALL}$
denotes the set of all languages [zoo]. A $\mathsf{BQP/qpoly}$ algorithm is a
quantum algorithm that runs in polynomial time and that receives, in addition
to its input, denoted $x$, a quantum state on $poly(|x|)$ qubits (the advice
state) that depends only on the size of the input and not on the input itself.
We leverage this fact, together with the ability to efficiently estimate
entropy, to provide an algorithm for deciding any language.
For a given language $L\subseteq\\{0,1\\}^{*}$ and input length $n$, let
$L_{n}$ denote the set of strings $x$ of length $n$, such that $x\in L$. We
also let $\bar{L}_{n}$ denote the strings of length $n$ not contained in $L$,
i.e. $\bar{L}_{n}=\\{0,1\\}^{n}\setminus L_{n}$. With this notation, we define
the states
$\ket{\psi_{n}}_{Yes}=\frac{1}{\sqrt{L_{n}}}\sum_{x\in
L_{n}}\ket{x}\quad\quad\quad\ket{\psi_{n}}_{No}=\frac{1}{\sqrt{\bar{L}_{n}}}\sum_{x\in\bar{L}_{n}}\ket{x}$
(52)
to be the equal superpositions over the “yes” instances of length $n$ and the
“no” instances, respectively. Finally, we let $EQ_{x}$ be the following
unitary operation acting on $n+1$ qubits (for $b\in\\{0,1\\}$):
$EQ_{x}\ket{y}\ket{b}=\left\\{\begin{array}[]{ll}\ket{y}\ket{b\oplus 1}&\text{
if }x=y\\\ \ket{y}\ket{b}&\text{ otherwise }\\\ \end{array}\right.$ (53)
Note that $EQ_{x}$ is simply a multi-controlled NOT on the flag qubit, with the $i$'th control negated whenever $x_{i}=0$, and so can be implemented with $poly(|x|)$-many gates.
The $\mathsf{BQP/qpoly}$ algorithm works as follows. Setting
$\epsilon=2^{-n-1}$, the quantum advice will consist of $poly(n)$ copies of
$\ket{\psi_{n}}_{Yes}\ket{\psi_{n}}_{No}$. The algorithm then appends two
qubits in the state $\ket{0}$ to each copy
$\ket{\psi_{n}}_{Yes}\ket{\psi_{n}}_{No}$ to get
$\ket{\psi_{n}}_{Yes}\ket{0}\ket{\psi_{n}}_{No}\ket{0}$. Then for the input
$x$, the algorithm applies $EQ_{x}$ to both $\ket{\psi_{n}}_{Yes}\ket{0}$ and
$\ket{\psi_{n}}_{No}\ket{0}$ individually, for all copies. Consider what
happens if $x$ is a “yes” instance (the “no” instance case is analogous). The
resulting states will be
$EQ_{x}\ket{\psi_{n}}_{Yes}\ket{0}=\frac{1}{\sqrt{L_{n}}}\sum_{z\in
L_{n},z\neq x}\ket{z}\ket{0}+\frac{1}{\sqrt{L_{n}}}\ket{x}\ket{1}$ (54)
$EQ_{x}\ket{\psi_{n}}_{No}\ket{0}=\ket{\psi_{n}}_{No}\ket{0}$ (55)
If we trace out the first $n$ qubits and denote the resulting states as
$\rho_{Y}$ and $\rho_{N}$, we can see that
$S(\rho_{Y})=-\frac{L_{n}-1}{L_{n}}\log\left(\frac{L_{n}-1}{L_{n}}\right)-\frac{1}{L_{n}}\log\left(\frac{1}{L_{n}}\right)$
(56) $S(\rho_{N})=0$ (57)
Since $L_{n}\leq 2^{n}$ (and we may assume $L_{n}\geq 2$, since the degenerate cases $L_{n}\leq 1$ or $\bar{L}_{n}\leq 1$ can be decided directly from classical advice), we have that $S(\rho_{Y})\geq 2^{-n}$. But now, by
assumption, having $poly(n)$-many copies of $\rho_{Y}$ and $\rho_{N}$ we can
estimate the entropies of the two states to within additive error $2^{-n-1}$,
thus being able to determine which of the two has non-zero entropy. The
algorithm is therefore able to decide any language $L$, hence showing that
$\mathsf{BQP/qpoly}=\mathsf{ALL}$. However, we know from [NY04] that
$\mathsf{BQP/qpoly}\neq\mathsf{ALL}$ (since, in particular
$\mathsf{EESPACE}\not\subset\mathsf{BQP/qpoly}$) and this provides the desired
contradiction. ∎
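As a quick numerical sanity check of Equation 56 (our illustration, not part of the proof), the bound $S(\rho_{Y})\geq 2^{-n}$ can be verified directly for all admissible values of $L_{n}$:

```python
import math

def S_rho_Y(L):
    # Entropy of rho_Y = diag((L-1)/L, 1/L) from Equation 56 (base-2 logarithm).
    return -((L - 1) / L) * math.log2((L - 1) / L) - (1 / L) * math.log2(1 / L)

for n in range(2, 11):
    assert all(S_rho_Y(L) >= 2 ** (-n) for L in range(2, 2 ** n + 1))
print("S(rho_Y) >= 2^-n holds for n = 2..10 and all 2 <= L_n <= 2^n")
```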
## Appendix B HQED with constant depth ground state
In the reduction from Theorem 10 we used the history state construction to
_approximately_ map the outputs of circuits $C_{1}$ and $C_{2}$ from an
instance of QED to ground states of Hamiltonians $H_{1}$ and $H_{2}$ in HQED.
Correspondingly, the entanglement entropy difference for the ground states of
$H_{1}$ and $H_{2}$ differed from that of the output states of $C_{1}$ and
$C_{2}$ by an additive term of $1/poly(n+k)$.
Here we give an alternate reduction for the case where $C_{1}$ and $C_{2}$ are
of constant depth $d$, based on a recent result of Bravyi, Gosset and
Movassagh [BGM19]. This reduction has the appealing feature that the resulting
Hamiltonians will have as ground states _exactly_ $C_{1}\ket{00...0}$ and
$C_{2}\ket{00...0}$, rather than approximate versions of these states. This
means that the entanglement entropy difference for the ground states of
$H_{1}$ and $H_{2}$ will be identical to that of the states produced by
$C_{1}$ and $C_{2}$.
###### Lemma 3.
There exists a reduction QED${}_{\textrm{O(1)}}$ $\leq_{P}$
HQED${}_{\textrm{O(1)}}$ that exactly preserves the entropy difference.
###### Proof.
Assuming, as before, that the circuits acting on $n+k$ qubits have the form
$C_{j}=U_{d}U_{d-1}...U_{1}$, $j\in\\{1,2\\}$, where each $U_{i}$ is a layer
of gates acting in parallel and $d$ is constant, we use the Hamiltonians considered in [BGM19]:
$H_{j}=\sum_{i=1}^{n+k}C_{j}\ket{1}\bra{1}_{i}C_{j}^{\dagger}$ (58)
where $\ket{1}\bra{1}_{i}$ acts non-trivially only on the $i$’th qubit. Since
$C_{j}$ is a circuit of depth $d$, the locality of each term is at most $2^{d}$.
Because $d$ is constant, the resulting Hamiltonian is local. As shown in
[BGM19], the unique ground state of $H_{j}$ is $C_{j}\ket{00...0}$. Thus, the
entanglement entropy difference for $H_{1}$ and $H_{2}$ is given by the
entanglement entropy difference of $C_{1}\ket{00...0}$ and
$C_{2}\ket{00...0}$, concluding the proof. ∎
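As a small numerical illustration of this construction (ours, not from [BGM19]), one can check on two qubits that $H=\sum_{i}C\ket{1}\bra{1}_{i}C^{\dagger}$ has $C\ket{00}$ as its unique zero-energy ground state for a random unitary $C$:

```python
import numpy as np

rng = np.random.default_rng(1)
nq = 2
dim = 2 ** nq

# Random unitary C via the QR decomposition of a complex Gaussian matrix.
Z = rng.standard_normal((dim, dim)) + 1j * rng.standard_normal((dim, dim))
C, _ = np.linalg.qr(Z)

P1 = np.array([[0, 0], [0, 1]], dtype=complex)  # |1><1| on a single qubit
I2 = np.eye(2, dtype=complex)

def on_qubit(op, i):
    # Embed a single-qubit operator acting on qubit i of nq qubits.
    out = np.array([[1.0 + 0j]])
    for j in range(nq):
        out = np.kron(out, op if j == i else I2)
    return out

H = sum(C @ on_qubit(P1, i) @ C.conj().T for i in range(nq))

evals, evecs = np.linalg.eigh(H)
print(np.round(evals, 6))                     # expected [0, 1, 1, 2]
overlap = abs(np.vdot(evecs[:, 0], C[:, 0]))  # C|00> is the first column of C
print(np.isclose(overlap, 1.0))               # True: ground state is C|00> up to phase
```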
A Fock Model and the Segal–Bargmann Transform for the Minimal Representation
of the Orthosymplectic Lie Superalgebra $\boldsymbol{\mathfrak{osp}(m,2|2n)}$
Sigiswald BARBIER, Sam CLAEREBOUT and Hendrik DE BIE
Department of Electronics and Information Systems, Faculty of Engineering and Architecture,
Ghent University, Krijgslaan 281, 9000 Gent, Belgium
<EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS>
Received March 17, 2020, in final form August 12, 2020; Published online
August 26, 2020
The minimal representation of a semisimple Lie group is a ‘small’ infinite-
dimensional irreducible unitary representation. It is thought to correspond to
the minimal nilpotent coadjoint orbit in Kirillov’s orbit philosophy. The
Segal–Bargmann transform is an intertwining integral transformation between
two different models of the minimal representation for Hermitian Lie groups of
tube type. In this paper we construct a Fock model for the minimal
representation of the orthosymplectic Lie superalgebra
$\mathfrak{osp}(m,2|2n)$. We also construct an integral transform which
intertwines the Schrödinger model for the minimal representation of the
orthosymplectic Lie superalgebra $\mathfrak{osp}(m,2|2n)$ with this new Fock
model.
Key words: Segal–Bargmann transform; Fock model; Schrödinger model; minimal representations; Lie superalgebras; spherical harmonics; Bessel–Fischer product
2020 Mathematics Subject Classification: 17B10; 17B60; 22E46; 58C50
## 1 Introduction
The classical Segal–Bargmann transform
$\displaystyle\operatorname{SB}f(z):=\exp\left(-\tfrac{1}{2}z^{2}\right)\int_{{\mathbb{R}}^{m}}\exp(2x\cdot
z)\exp\big{(}{-}x^{2}\big{)}f(x)\mathrm{d}x$
is a unitary isomorphism from the space of square integrable functions on
$\mathbb{R}^{m}$ to the Fock space of entire functions on $\mathbb{C}^{m}$
which are square integrable with respect to the weight function
$\exp\big{(}{-}|z|^{2}\big{)}$. The Segal–Bargmann transform is defined in
such a way that it maps the creation (resp. annihilation) operators on the
Schrödinger space to coordinate multiplication (resp. differentiation) on the
Fock space. This implies in particular that the harmonic oscillator becomes
the much simpler Euler operator on the Fock space [7].
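For the normalisation used above this can be verified directly in one dimension ($m=1$): integration by parts gives the intertwining relations $\operatorname{SB}\big(\big(x-\tfrac{1}{2}\partial_{x}\big)f\big)(z)=z\operatorname{SB}f(z)$ and $\operatorname{SB}\big(\big(x+\tfrac{1}{2}\partial_{x}\big)f\big)(z)=\partial_{z}\operatorname{SB}f(z)$, so the number operator $\big(x-\tfrac{1}{2}\partial_{x}\big)\big(x+\tfrac{1}{2}\partial_{x}\big)$ is carried to the Euler operator $z\partial_{z}$, while the ground state $\exp\big{(}{-}x^{2}\big{)}$ is mapped to the constant $\sqrt{\pi/2}$. The Segal–Bargmann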
transform can also be interpreted as an intertwining operator between two
models of the metaplectic representation (also known as the Segal–Shale–Weil
or oscillator representation) of the metaplectic group, a double cover of the
symplectic group. See [14] for more on the classical Segal–Bargmann transform
and the metaplectic representation.
The (even part of the) metaplectic representation is a prominent example of a
minimal representation. Another extensively studied example is the minimal
representation of ${\rm O}(p,q)$ [21, 22, 23, 20]. The minimal representation
of a semisimple Lie group is the irreducible unitary representation that
according to the orbit method should correspond to the minimal nilpotent
coadjoint orbit [15]. The Segal–Bargmann transform has been generalized to
this setting of minimal representations. Namely for a Hermitian Lie group of
tube type, there exists an explicit integral transform which intertwines the
Schrödinger and Fock model of the minimal representation [18].
Lie superalgebras and Lie supergroups are generalisations of Lie algebras and
Lie groups. They were introduced to mathematically describe supersymmetry.
Their representation theory is an active field of study with still a lot of
open questions, for instance a description of the unitary irreducible
representations. Since most ingredients of the orbit method still exist in the
super setting, it is believed that the orbit method should also in the super
setting be a good tool for the study of irreducible representations [19,
Chapter 6.3]. For example, the irreducible unitary representations of
nilpotent Lie supergroups have been classified that way [25, 26].
An ambitious aim in this light would be to construct minimal representations
and the intertwining Segal–Bargmann transform for Lie supergroups. Recently a
first step in that direction has already been taken. Namely the construction
of a minimal representation of the orthosymplectic Lie supergroup ${\rm
OSp}(p,q|2n)$ was accomplished in [6]. In the bosonic case (i.e., $n=0$) this
realisation corresponds to the Schrödinger model for ${\rm O}(p,q)$ of [20].
In this article we achieve two further goals. First we construct a Fock model
for the minimal representation of the Lie superalgebra
$\mathfrak{osp}(m,2|2n)$. We also define an integral transform which
intertwines the Schrödinger model of the minimal representation of
$\mathfrak{osp}(m,2|2n)$ with this Fock model. Note that only for $q=2$ the
Lie group ${\rm O}(p,q)$ is Hermitian of tube type and thus only in that case
do we have a Segal–Bargmann transform. For that reason we have only
constructed a Segal–Bargmann transform in the super setting for
$\mathfrak{osp}(m,2|2n)$. Our main results hold for $m-2n\geq 4$. This
restriction comes from [6], where key properties of the integral we use in the
definition of the Segal–Bargmann transform are only proven for the case
$m-2n\geq 4$.
We will work in this paper always on the algebraic level. So we will work with
representations of the Lie superalgebra $\mathfrak{osp}(m,2|2n)$ instead of
the Lie supergroup ${\rm OSp}(m,2|2n)$, and we will act on super-vector spaces
defined using superpolynomials instead of using global sections of a
supermanifold. This allows us to circumvent the delicate technicalities
associated with supergroups and supermanifolds. Note that in the bosonic case,
the spaces we work with are dense in certain Hilbert spaces. Using standard
techniques one can then integrate the representation to group level and extend
to the whole Hilbert space, see for example [17, Theorem 2.30] or [18, Theorem
2.17]. These techniques no longer work, or do not even exist, in the super case. There does
exist an abstract way to integrate a so-called admissible
$(\mathfrak{g},K)$-module to group level [1] which was for example used in [6]
to integrate the minimal representation of $\mathfrak{osp}(p,q|2n)$ to ${\rm
OSp}(p,q|2n)$. However, this approach is not very concrete.
Explicit examples such as the one constructed in this paper could help to
develop such integration tools and to find the correct definitions in the
super setting. For example our representations ought to be ‘unitary’ in some
sense. A definition of unitary representations does exist in the super setting
[8, Definition 2]. However, a large class of Lie superalgebras, including
$\mathfrak{osp}(p,q|2n)$, do not allow for any super unitary representation in
this sense [25, Theorem 6.2.1]. This highly unsatisfactory situation has
inspired the search for a new or extended definition of a unitary
representation [13, 27]. At the moment it is still unclear what the right
definition should be, but we believe that the construction of explicit
examples which ought to be ‘unitary’ could be useful for this endeavour.
### 1.1 Structure of the paper
We structure the paper as follows. In Section 2 we fix notations and introduce
the spaces and algebras we will use throughout the paper. In Section 3 we
recall the Schrödinger model of the minimal representation of
$\mathfrak{osp}(m,2|2n)$ defined in [6]. We also introduce an integral which
leads to an $\mathfrak{osp}(m,2|2n)$-invariant, non-degenerate, superhermitian
form.
The next three sections contain the main body of this paper. In Section 4 we
construct the Fock space as a quotient of the space of superpolynomials. We
then define the Bessel–Fischer product, which gives us a non-degenerate,
superhermitian form on our Fock space (Propositions 4.7 and 4.12). In the
bosonic case $(n=0)$, this Bessel–Fischer product is equivalent to the inner
product coming from an integral on the Fock space [18, Proposition 2.6]. Since
we no longer have this integral in the super setting, we construct a direct
proof for the superhermitian property. This seems new even in the bosonic
case. We also show that our Fock space has a reproducing kernel (Theorem
4.11).
In Section 5 we endow this Fock space with an $\mathfrak{osp}(m,2|2n)$-module
structure leading to a Fock model of the minimal representation of
$\mathfrak{osp}(m,2|2n)$. We prove that this is an irreducible representation
and obtain a very explicit description (Theorem 5.3). In particular, we have
complete branching rules for the subalgebras $\mathfrak{osp}(m|2n)$ and
$\mathfrak{osp}(m-1|2n)$.
In Section 6, we define an integral transform which maps the space of
functions used in the Schrödinger realisation to the space of functions of the
Fock realisation (Definition 6.1). We show that this integral is an
intertwining isomorphism which preserves the superhermitian form (Theorems 6.3
and 6.6). We also give an explicit inverse (Definition 6.7). As an application
we use the Segal–Bargmann transform to define generalised Hermite functions.
In Appendix A we gather some definitions and results on special functions
which are used throughout the paper. We have also put the technical and
lengthy proof of Theorem 6.3 in Appendix B.
## 2 Preliminaries and notations
In this paper Jordan and Lie algebras will be defined over complex numbers
$\mathbb{C}$ if they have a $\mathbb{C}$ in subscript, otherwise they are
defined over the field of real numbers $\mathbb{R}$. Function spaces will
always be defined over the complex field $\mathbb{C}$. We use the convention
$\mathbb{N}=\\{0,1,2,\ldots\\}$.
A super-vector space is defined as a $\mathbb{Z}_{2}$-graded vector space,
i.e., $V=V_{0}\oplus V_{1}$, with $V_{0}$ and $V_{1}$ vector spaces. An
element $v$ of a super-vector space $V$ is called homogeneous if it belongs to
$V_{i}$, $i\in\mathbb{Z}_{2}$. We call $i$ the parity of $v$ and denote it by
$|v|$. A homogeneous element $v$ is even if $|v|=0$ and odd if $|v|=1$. When
we use $|v|$ in a formula, we are considering homogeneous elements, with the
implicit convention that the formula has to be extended linearly for arbitrary
elements. We denote the super-vector space $V$ with $V_{0}=\mathbb{R}^{m}$ and
$V_{1}=\mathbb{R}^{n}$ as $\mathbb{R}^{m|n}$. We will always assume $m\geq 2$.
### 2.1 Superpolynomials
Let $\mathbb{K}$ be either $\mathbb{R}$ or $\mathbb{C}$.
###### Definition 2.1.
The space of superpolynomials over $\mathbb{K}$ is defined as
$\displaystyle\mathcal{P}\big{(}\mathbb{K}^{m|2n}\big{)}:=\mathcal{P}\big{(}\mathbb{K}^{m}\big{)}\otimes_{\mathbb{C}}\Lambda\big{(}\mathbb{K}^{2n}\big{)},$
where $\mathcal{P}\big{(}\mathbb{K}^{m}\big{)}$ denotes the space of complex-
valued polynomials over the field $\mathbb{K}$ in $m$ variables and
$\Lambda\big{(}\mathbb{K}^{2n}\big{)}$ denotes the Grassmann algebra in $2n$
variables. The variables of $\mathcal{P}(\mathbb{K}^{m})$ and
$\Lambda\big{(}\mathbb{K}^{2n}\big{)}$ are called even and odd variables,
respectively. They satisfy the commutation relations
$\displaystyle x_{i}x_{j}=(-1)^{|i||j|}x_{j}x_{i},$
where $|i|:=|x_{i}|$, $i\in\\{0,1,\ldots,m+2n-1\\}$.
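In particular, an even variable commutes with all variables, while two odd variables anticommute; taking $|i|=|j|=1$ in this relation gives
$\displaystyle x_{i}x_{j}=-x_{j}x_{i},\qquad\text{and in particular}\qquad x_{i}^{2}=0.$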
Let $\langle{\cdot\,,\cdot}\rangle_{\beta}$ be a supersymmetric, non-
degenerate, even bilinear form on $\mathbb{K}^{m|2n}$ with basis
$\\{x_{i}\\}_{i=0}^{m+2n-1}$. We denote the matrix components by
$\beta_{ij}:=\langle{x_{i},x_{j}}\rangle_{\beta}$ and denote the components of
the inverse matrix by $\beta^{ij}$, i.e., $\beta^{ij}$ is defined such that
$\sum_{j}\beta_{ij}\beta^{jk}=\delta_{ik}$. Set
$x^{j}=\sum_{i}x_{i}\beta^{ij}$. The differential operator $\partial^{i}$ is
defined as the unique derivation in
$\operatorname{End}\big{(}\mathcal{P}\big{(}\mathbb{K}^{m|2n}\big{)}\big{)}$
such that $\partial^{i}(x_{j})=\delta_{ij}$, with $\delta_{ij}$ the Kronecker
delta. We also define $\partial_{j}=\sum_{i}\partial^{i}\beta_{ji}$. Then it
holds that $\partial_{i}(x^{j})=\delta_{ij}$.
When we are working with both real and complex polynomials at the same time,
we will denote $\partial_{i}$ and $\partial^{i}$ for the real variable $x_{i}$
as $\partial_{x^{i}}$ and $\partial_{x_{i}}$, respectively. Similarly, we will
denote $\partial_{i}$ and $\partial^{i}$ for the complex variable $z_{i}$ as
$\partial_{z^{i}}$ and $\partial_{z_{i}}$, respectively.
We will make frequent use of the following operators:
$\displaystyle
R^{2}:=\sum_{i,j}\beta^{ij}x_{i}x_{j},\qquad\mathbb{E}:=\sum_{i,j}\beta^{ij}x_{i}\partial_{j}\qquad\mbox{and}\qquad\Delta:=\sum_{ij}\beta^{ij}\partial_{i}\partial_{j}.$
(2.1)
Here, the operator $R^{2}$ is called the square of the radial coordinate and
acts through multiplication. The operators $\mathbb{E}$ and $\Delta$ are
called the Euler operator and the Laplacian, respectively. We have the
following lemma.
###### Lemma 2.2.
The operators $R^{2}$, $\mathbb{E}$ and $\Delta$ satisfy
$\displaystyle[\Delta,R^{2}]=4\mathbb{E}+2M,\qquad[\Delta,\mathbb{E}]=2\Delta,\qquad[R^{2},\mathbb{E}]=-2R^{2},$
where $M=m-2n$ is the superdimension. In particular,
$\big{(}R^{2},\mathbb{E}+\frac{M}{2},-\frac{\Delta}{2}\big{)}$ forms an
$\mathfrak{sl}_{\mathbb{K}}(2)$-triple. Furthermore, they commute in
$\operatorname{End}\big{(}\mathcal{P}\big{(}\mathbb{K}^{m|2n}\big{)}\big{)}$
with the operators
$L_{ij}:=x_{i}\partial_{j}-(-1)^{|i||j|}x_{j}\partial_{i}.$
###### Proof.
This follows from a straightforward calculation; see also, for example, [12]. ∎
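As an illustration in the simplest bosonic case, take $m=1$, $n=0$, a single even variable $x$ and $\beta=(1)$, so that $R^{2}=x^{2}$, $\mathbb{E}=x\partial_{x}$ and $\Delta=\partial_{x}^{2}$; then for any polynomial $f$
$\displaystyle[\Delta,R^{2}]f=\partial_{x}^{2}\big{(}x^{2}f\big{)}-x^{2}\partial_{x}^{2}f=2f+4x\partial_{x}f=(4\mathbb{E}+2M)f,$
in accordance with Lemma 2.2, since here $M=1$.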
If we are working with two sets of variables we will add a variable indicator
to avoid confusion. For example, we denote
$\displaystyle
R^{2}_{x}=\sum_{i,j}\beta^{ij}x_{i}x_{j},\qquad\mathbb{E}_{x}=\sum_{i,j}\beta^{ij}x_{i}\partial_{x^{j}},\qquad\Delta_{x}=\sum_{ij}\beta^{ij}\partial_{x^{i}}\partial_{x^{j}}$
and $L_{ij}^{x}=x_{i}\partial_{x^{j}}-(-1)^{|i||j|}x_{j}\partial_{x^{i}}$ for
the real variables and
$\displaystyle
R^{2}_{z}=\sum_{i,j}\beta^{ij}z_{i}z_{j},\qquad\mathbb{E}_{z}=\sum_{i,j}\beta^{ij}z_{i}\partial_{z^{j}},\qquad\Delta_{z}=\sum_{ij}\beta^{ij}\partial_{z^{i}}\partial_{z^{j}}$
and $L^{z}_{ij}=z_{i}\partial_{z^{j}}-(-1)^{|i||j|}z_{j}\partial_{z^{i}}$ for
the complex variables.
### 2.2 The orthosymplectic Lie superalgebra
Let $\mathbb{K}$ be either $\mathbb{R}$ or $\mathbb{C}$.
###### Definition 2.3.
The orthosymplectic Lie superalgebra $\mathfrak{osp}_{\mathbb{K}}(m|2n,\beta)$
is defined as the subalgebra of $\mathfrak{gl}_{\mathbb{K}}(m|2n)$ preserving
a supersymmetric non-degenerate even bilinear form $\beta$. Thus
$\mathfrak{osp}_{\mathbb{K}}(m|2n,\beta)$ is spanned by
$X\in\mathfrak{gl}_{\mathbb{K}}(m|2n)$ such that
$\displaystyle\langle{X(u),v}\rangle_{\beta}+(-1)^{|u||X|}\langle{u,X(v)}\rangle_{\beta}=0,$
for all $u,v\in\mathbb{K}^{m|2n}$.
The orthosymplectic Lie superalgebra has a differential operator realisation
on $\mathcal{P}\big{(}\mathbb{K}^{m|2n}\big{)}$. A basis in this realisation
is given by
$\displaystyle
L_{ij}:=x_{i}\partial_{j}-(-1)^{|i||j|}x_{j}\partial_{i},\qquad\mbox{for
}i<j,$ $\displaystyle L_{ii}:=2x_{i}\partial_{i},\qquad\mbox{for }|i|=1.$
### 2.3 Spherical harmonics
Let $\mathbb{K}$ be either $\mathbb{R}$ or $\mathbb{C}$. The space of
homogeneous superpolynomials of degree $k$ is denoted by
$\displaystyle\mathcal{P}_{k}\big{(}\mathbb{K}^{m|2n}\big{)}:=\big{\\{}p\in\mathcal{P}\big{(}\mathbb{K}^{m|2n}\big{)}\colon\mathbb{E}p=kp\big{\\}}.$
The space of spherical harmonics of degree $k$ is defined by
$\displaystyle\mathcal{H}_{k}\big{(}\mathbb{K}^{m|2n}\big{)}:=\big{\\{}f\in\mathcal{P}_{k}\big{(}\mathbb{K}^{m|2n}\big{)}\colon\Delta
f=0\big{\\}},$
i.e., it is the space of homogeneous polynomials of degree $k$ which are in
the kernel of the Laplacian.
The Fischer decomposition gives a decomposition of the space of
superpolynomials using these spherical harmonics [12, Theorem 3].
###### Proposition 2.4.
If $m-2n\not\in-2\mathbb{N}$, then $\mathcal{P}\big{(}\mathbb{K}^{m|2n}\big{)}$
decomposes as
$\displaystyle\mathcal{P}\big{(}\mathbb{K}^{m|2n}\big{)}=\bigoplus_{k=0}^{\infty}\mathcal{P}_{k}\big{(}\mathbb{K}^{m|2n}\big{)}=\bigoplus_{k=0}^{\infty}\bigoplus_{j=0}^{\infty}R^{2j}\mathcal{H}_{k}\big{(}\mathbb{K}^{m|2n}\big{)}.$
In [24] the following generalisation of the Fischer decomposition was obtained
that still holds for the exceptional case $M\in-2\mathbb{N}$.
###### Proposition 2.5 (generalised Fischer decomposition).
The superspace $\mathcal{P}\big{(}\mathbb{K}^{m|2n}\big{)}$ decomposes as
$\displaystyle\mathcal{P}\big{(}\mathbb{K}^{m|2n}\big{)}=\bigoplus_{k=0}^{\infty}\mathcal{P}_{k}\big{(}\mathbb{K}^{m|2n}\big{)}=\bigoplus_{k=0}^{\infty}\bigoplus_{j=0}^{\infty}\big{(}R^{2}\Delta
R^{2}\big{)}^{j}\operatorname{\widetilde{\mathcal{H}}}_{k}\big{(}\mathbb{K}^{m|2n}\big{)},$
where
$\displaystyle\operatorname{\widetilde{\mathcal{H}}}_{k}\big{(}\mathbb{K}^{m|2n}\big{)}=\big{\\{}f\in\mathcal{P}_{k}\big{(}\mathbb{K}^{m|2n}\big{)}\colon\Delta
R^{2}\Delta f=0\big{\\}}$
is the space of generalised spherical harmonics of degree $k$.
From [9, Theorem 5.2] we obtain the following.
###### Proposition 2.6.
If $M\not\in-2\mathbb{N}$, then
$\mathcal{H}_{k}\big{(}\mathbb{K}^{m|2n}\big{)}$ is an irreducible
$\mathfrak{osp}_{\mathbb{K}}(m|2n)$-module.
The dimension of the spherical harmonics of degree $k$ is given in [12,
Corollary 1].
###### Proposition 2.7.
The dimension of $\mathcal{H}_{k}\big{(}\mathbb{K}^{m|2n}\big{)}$, for $m\neq
0$ is given by
$\displaystyle\dim\mathcal{H}_{k}\big{(}\mathbb{K}^{m|2n}\big{)}=\dim\mathcal{P}_{k}\big{(}\mathbb{K}^{m|2n}\big{)}-\dim\mathcal{P}_{k-2}\big{(}\mathbb{K}^{m|2n}\big{)},$
with
$\displaystyle\dim\mathcal{P}_{k}\big{(}\mathbb{K}^{m|2n}\big{)}=\sum_{i=0}^{\min(k,2n)}\binom{2n}{i}\binom{k-i+m-1}{m-1}.$
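For example, for $m=3$ and $n=0$ these formulas recover the classical dimension of the space of spherical harmonics on $\mathbb{R}^{3}$:
$\displaystyle\dim\mathcal{H}_{k}\big{(}\mathbb{K}^{3|0}\big{)}=\binom{k+2}{2}-\binom{k}{2}=2k+1.$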
We will also use the following formula for the dimension of the spherical
harmonics of degree $k$.
###### Proposition 2.8.
The dimension of $\mathcal{H}_{k}\big{(}\mathbb{K}^{m|2n}\big{)}$, for $m>1$
is given by
$\displaystyle\dim\mathcal{H}_{k}\big{(}\mathbb{K}^{m|2n}\big{)}=\dim\mathcal{P}_{k}\big{(}\mathbb{K}^{m-1|2n}\big{)}+\dim\mathcal{P}_{k-1}\big{(}\mathbb{K}^{m-1|2n}\big{)}.$
###### Proof.
If we prove
$\displaystyle\dim\mathcal{P}_{k}\big{(}\mathbb{K}^{m|2n}\big{)}-\dim\mathcal{P}_{k-2}\big{(}\mathbb{K}^{m|2n}\big{)}-\dim\mathcal{P}_{k}\big{(}\mathbb{K}^{m-1|2n}\big{)}-\dim\mathcal{P}_{k-1}\big{(}\mathbb{K}^{m-1|2n}\big{)}=0,$
then the proposition follows from Proposition 2.7. First suppose $2n\leq k-2$,
then the above equation becomes
$\displaystyle\sum_{i=0}^{2n}\binom{2n}{i}\left(\binom{k-i+m-1}{m-1}-\binom{k-i+m-3}{m-1}\right.$
$\displaystyle\left.\qquad{}-\binom{k-i+m-2}{m-2}-\binom{k-i+m-3}{m-2}\right)=0,$
which is true since the recursive formula of binomial coefficients gives us
that
$\displaystyle\binom{k-i+m-1}{m-1}-\binom{k-i+m-3}{m-1}-\binom{k-i+m-2}{m-2}-\binom{k-i+m-3}{m-2}=0,$
for all $i\in\\{0,\ldots,2n\\}$. For $2n>k-2$ we have the following extra
terms
$\displaystyle\binom{2n}{k-1}\left(\binom{m}{m-1}-\binom{m-1}{m-2}-\binom{m-2}{m-2}\right)+\binom{2n}{k}\left(\binom{m-1}{m-1}-\binom{m-2}{m-2}\right),$
where we ignore the last term if $2n=k-1$. Using basic binomial properties,
these terms are clearly equal to zero. ∎
For $n=0$ a more insightful reasoning as to why this formula holds is given by
Proposition 5 in [11].
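In the example $m=3$, $n=0$ considered after Proposition 2.7, Proposition 2.8 indeed yields the same value:
$\displaystyle\dim\mathcal{P}_{k}\big{(}\mathbb{K}^{2|0}\big{)}+\dim\mathcal{P}_{k-1}\big{(}\mathbb{K}^{2|0}\big{)}=(k+1)+k=2k+1.$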
### 2.4 The spin factor Jordan superalgebra $\boldsymbol{J}$
To each Hermitian Lie group of tube type corresponds an Euclidean Jordan
algebra. These Jordan algebras were a crucial ingredient in the unified
approach to construct minimal representations and the Segal–Bargmann transform
[17, 18]. More concretely, one can associate with each Jordan algebra certain
Lie algebras (the structure algebra and the TKK-algebra), and the Jordan
algebra is also used in the construction of spaces on which these Lie algebras
act. In this paper we will not use anything directly from Jordan theory, but
introducing $\mathfrak{osp}(m,2|2n)$ via the spin factor Jordan superalgebra
leads to a natural decomposition of $\mathfrak{osp}(m,2|2n)$ as well as to
some interesting subalgebras that we will use.
###### Definition 2.9.
A Jordan superalgebra is a supercommutative superalgebra $J$ satisfying the
Jordan identity
$\displaystyle(-1)^{|x||z|}[L_{x},L_{yz}]+(-1)^{|y||x|}[L_{y},L_{zx}]+(-1)^{|z||y|}[L_{z},L_{xy}]=0\qquad\text{for
all }x,y,z\in J.$
Here the operator $L_{x}$ is (left) multiplication with $x$ and
$[\cdot\,,\cdot]$ is the supercommutator, i.e.,
$[L_{x},L_{y}]:=L_{x}L_{y}-(-1)^{|x||y|}L_{y}L_{x}$.
Let $\mathbb{K}$ be either $\mathbb{R}$ or $\mathbb{C}$. We will define the
spin factor Jordan superalgebra associated with a supersymmetric, non-
degenerate, even, bilinear form. Let $V_{\mathbb{K}}$ be a super-vector space
over $\mathbb{K}$ with $\dim(V_{\mathbb{K}})=(m-1|2n)$ and a supersymmetric,
non-degenerate, even, bilinear form
$\langle\cdot\;,\cdot\rangle_{\tilde{\beta}}$ where, for
$\mathbb{K}=\mathbb{R}$, the even part has signature $(m-1,0)$. Recall that we
always assume $m\geq 2$. We choose a homogeneous basis
$(e_{i})_{i=1}^{m+2n-1}$ of $V_{\mathbb{K}}$. For $u=\sum_{i}u^{i}e_{i}$ and
$v=\sum_{i}v^{i}e_{i}$ we then have
$\displaystyle\langle
u,v\rangle_{\tilde{\beta}}=\sum_{i,j}u^{i}{\tilde{\beta}}_{ij}v^{j}\qquad\mbox{with}\quad{\tilde{\beta}}_{ij}:=\langle
e_{i},e_{j}\rangle_{\tilde{\beta}}.$
###### Definition 2.10.
The spin factor Jordan superalgebra is defined as
$J_{\mathbb{K}}:=\mathbb{K}e_{0}\oplus V_{\mathbb{K}}$ with $|e_{0}|=0$. The
Jordan product is given by
$\displaystyle(\lambda e_{0}+u)(\mu e_{0}+v)=(\lambda\mu+\langle
u,v\rangle_{\tilde{\beta}})e_{0}+\lambda v+\mu u,$
for $u,v\in V_{\mathbb{K}}$ and $\lambda,\mu\in\mathbb{K}$.
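On basis elements this product takes a simple form: for $i,j\in\\{1,\ldots,m+2n-1\\}$ we have
$\displaystyle e_{0}e_{i}=e_{i}\qquad\text{and}\qquad e_{i}e_{j}={\tilde{\beta}}_{ij}e_{0},$
so the multiplicative structure of $J_{\mathbb{K}}$ is completely encoded by the form ${\tilde{\beta}}$.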
Thus $e_{0}$ is the unit of $J_{\mathbb{K}}$. We extend the homogeneous basis
$(e_{i})_{i=1}^{m+2n-1}$ of $V_{\mathbb{K}}$ to a homogeneous basis
$(e_{i})_{i=0}^{m+2n-1}$ of $J_{\mathbb{K}}$ and extend the bilinear form
$\langle\cdot\;,\cdot\rangle_{\tilde{\beta}}$ as follows. Set $\beta_{00}=-1$,
$\beta_{i0}=0=\beta_{0i}$ and $\beta_{ij}={\tilde{\beta}}_{ij}$ for
$i,j\in\\{1,\ldots,m+2n-1\\}$. Then the corresponding form
$\langle\cdot\;,\cdot\rangle_{\beta}$ is a supersymmetric, non-degenerate,
even bilinear form on the super-vector space $J_{\mathbb{K}}$ where, for
$\mathbb{K}=\mathbb{R}$, the even part has signature $(m-1,1)$.
Define $(\beta^{ij})_{ij}$ as the inverse of $(\beta_{ij})_{ij}$. Let
$\big{(}e^{i}\big{)}_{i}$ be the right dual basis of $(e_{i})_{i}$ with
respect to the form $\langle\cdot\;,\cdot\rangle_{\beta}$, i.e.,
$\displaystyle\langle e_{i}\;,e^{j}\rangle_{\beta}=\delta_{ij}.$
Then
$\displaystyle e^{j}=\sum_{i}e_{i}\beta^{ij}.$
In this paper we will assume that the orthosymplectic metric is standardized
such that
$\displaystyle\beta=(\beta_{ij})_{i,j=0}^{m+2n-1}=\left(\begin{array}{cc|cc}-1&&&\\ &I_{m-1}&&\\ \hline&&&-I_{n}\\ &&I_{n}&\end{array}\right),$
with $I_{d}$ the $d$-dimensional identity matrix.
From now on the real spin factor Jordan superalgebra will always be denoted by
$J$ or $J_{\mathbb{R}}$ and its complexified version by $J_{\mathbb{C}}$.
### 2.5 The TKK algebra
With each Jordan (super)algebra one can associate a $3$-graded Lie
(super)algebra via the TKK-construction. There exist different TKK-
constructions in the literature, see [5] for an overview, but for the spin
factor Jordan superalgebra $J_{\mathbb{K}}$ all constructions lead to the
orthosymplectic Lie superalgebras $\mathfrak{osp}_{\mathbb{K}}(m,2|2n)$. We
will quickly review the Koecher construction. First consider
$\operatorname{Inn}(J_{\mathbb{K}})$, the subalgebra of
$\mathfrak{gl}(J_{\mathbb{K}})$ of inner derivations. It is generated by the
operators $[L_{u},L_{v}]$, $u,v\in J_{\mathbb{K}}$. If we add the left
multiplication operators $L_{u}$ to the inner derivations we obtain the inner
structure algebra:
$\displaystyle\mathfrak{istr}(J_{\mathbb{K}}):=\\{L_{u}|u\in
J_{\mathbb{K}}\\}\oplus\operatorname{Inn}(J_{\mathbb{K}})=\operatorname{span}_{\mathbb{K}}\\{L_{u},[L_{u},L_{v}]\colon
u,v\in J_{\mathbb{K}}\\}.$
Let $J_{\mathbb{K}}^{+}$ and $J_{\mathbb{K}}^{-}$ be two copies of
$J_{\mathbb{K}}$. As a vector space we define the TKK-algebra of
$J_{\mathbb{K}}$ as
$\displaystyle\operatorname{TKK}(J_{\mathbb{K}}):=J_{\mathbb{K}}^{-}\oplus\mathfrak{istr}(J_{\mathbb{K}})\oplus
J_{\mathbb{K}}^{+}.$
The Lie bracket on $\operatorname{TKK}(J_{\mathbb{K}})$ is defined as follows.
We interpret $\mathfrak{istr}(J_{\mathbb{K}})$ as a subalgebra of
$\mathfrak{gl}(J_{\mathbb{K}})$ and for homogeneous $x,y\in
J_{\mathbb{K}}^{+}$, $u,v\in J_{\mathbb{K}}^{-}$, $a,b\in J_{\mathbb{K}}$ we
set
$\displaystyle[x,u]=2L_{xu}+2[L_{x},L_{u}],\qquad$
$\displaystyle[x,y]=[u,v]=0,$ $\displaystyle[L_{a},x]=ax,\qquad$
$\displaystyle[L_{a},u]=-au,$
$\displaystyle[[L_{a},L_{b}],x]=[L_{a},L_{b}]x,\qquad$
$\displaystyle[[L_{a},L_{b}],u]=[L_{a},L_{b}]u.$
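As a quick illustration of these brackets, since $e_{0}$ is the unit of $J_{\mathbb{K}}$ one finds
$\displaystyle\big{[}e_{0}^{+},e_{0}^{-}\big{]}=2L_{e_{0}},\qquad\big{[}L_{e_{0}},e_{0}^{+}\big{]}=e_{0}^{+},\qquad\big{[}L_{e_{0}},e_{0}^{-}\big{]}=-e_{0}^{-},$
so $\big{(}e_{0}^{-},2L_{e_{0}},e_{0}^{+}\big{)}$ spans a copy of $\mathfrak{sl}(2)$ inside $\operatorname{TKK}(J_{\mathbb{K}})$.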
From [6, Proposition 3.1] we obtain the following.
###### Theorem 2.11.
We have
$\displaystyle\mathfrak{istr}(J_{\mathbb{K}})=\mathfrak{osp}_{\mathbb{K}}(J_{\mathbb{K}})\oplus\mathbb{K}L_{e_{0}}\cong\mathfrak{osp}_{\mathbb{K}}(m-1,1|2n)\oplus\mathbb{K}L_{e_{0}},\qquad\operatorname{TKK}(J_{\mathbb{K}})\cong\mathfrak{osp}_{\mathbb{K}}(m,2|2n),$
where the direct sum decomposition is as algebras.
Using the bilinear form
$\displaystyle\overline{\beta}=\left(\begin{array}{cccc|c}1&&&&\\ &(\beta_{ij})_{i,j=1}^{m-1}&&&\\ &&\beta_{00}&&\\ &&&-1&\\ \hline&&&&(\beta_{ij})_{i,j=m}^{m+2n-1}\end{array}\right),$
we have the following explicit isomorphism of
$\operatorname{TKK}(J_{\mathbb{K}})$ with the differential operator
realisation of $\mathfrak{osp}_{\mathbb{K}}(m,2|2n)$:
$\displaystyle e_{i}^{-}\mapsto L_{\tilde{i},(m+1)}+L_{\tilde{i},0},\qquad e_{0}^{-}\mapsto L_{m,(m+1)}+L_{m,0},$
$\displaystyle L_{e_{i}}\mapsto L_{\tilde{i},m},\qquad L_{e_{0}}\mapsto L_{0,(m+1)},\qquad[L_{e_{i}},L_{e_{j}}]\mapsto L_{\tilde{i},\tilde{j}},$
$\displaystyle e_{i}^{+}\mapsto L_{\tilde{i},(m+1)}-L_{\tilde{i},0},\qquad e_{0}^{+}\mapsto-L_{m,(m+1)}+L_{m,0}.$
Here $\tilde{i}=i$ if $|i|=0$ and $\tilde{i}=i+2$ if $|i|=1$.
## 3 The Schrödinger model of $\boldsymbol{\mathfrak{osp}(m,2|2n)}$
In the bosonic case (i.e., $n=0$), the Schrödinger model of the minimal
representation of a Hermitian Lie group $G$ of tube type can be seen as a
representation realized on the Hilbert space $L^{2}(C)$ where $C$ is a minimal
orbit for the structure group of $G$, see [18]. In the super setting the
classical definition of minimal orbits no longer works. Indeed,
supermanifolds, in contrast to ordinary manifolds, are not completely
determined by their points. In [6] an orbit was instead defined as the
quotient supermanifold of the structure group by a stabilizer subgroup
together with an embedding. Using this definition, a minimal orbit $C$ was
constructed which can be characterized by $R^{2}=0$, with $R^{2}$ as
introduced in (2.1). We refer to Section 4 of [6] for a detailed description
of this minimal orbit. We will now recall the Schrödinger model constructed in
[6]. A critical role in this construction is played by the Bessel operators.
### 3.1 The Bessel operator
The Bessel operator $\operatorname{\mathcal{B}}_{\lambda}(x_{k})$ is a linear
operator acting on $\mathcal{P}\big{(}\mathbb{R}^{m|2n}\big{)}$. It depends on
a complex parameter $\lambda$ and an explicit expression is given by
$\displaystyle\operatorname{\mathcal{B}}_{\lambda}(x_{k}):=(-\lambda+2\mathbb{E})\partial_{k}-x_{k}\Delta,$
where $\mathbb{E}$ and $\Delta$ are the Euler operator and Laplacian
introduced in (2.1).
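As a small sanity check, which we will not need in what follows, take an even index $k\in\\{1,\ldots,m-1\\}$ and the standardized form of Section 2.4, so that $\beta_{kk}=1$; then
$\displaystyle\operatorname{\mathcal{B}}_{\lambda}(x_{k})\big{(}x_{k}^{2}\big{)}=(-\lambda+2\mathbb{E})(2x_{k})-x_{k}\Delta\big{(}x_{k}^{2}\big{)}=(4-2\lambda)x_{k}-2x_{k}=(2-2\lambda)x_{k},$
which for $\lambda=2-M$ reduces to $(2M-2)x_{k}$.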
From Proposition 4.9 of [6] we obtain the following.
###### Proposition 3.1.
The Bessel operators map $\langle R^{2}\rangle$ into
$\big{\langle}R^{2}\big{\rangle}$ if and only if $\lambda=2-M$, where $M=m-2n$
is the superdimension of $\mathbb{R}^{m|2n}$ and
$\big{\langle}R^{2}\big{\rangle}$ is the ideal in
$\mathcal{P}\big{(}\mathbb{R}^{m|2n}\big{)}$ generated by $R^{2}$.
Therefore we will only use the Bessel operator with the parameter $2-M$ in
this paper and we set
$\operatorname{\mathcal{B}}(x_{i}):=\operatorname{\mathcal{B}}_{2-M}(x_{i})$.
We obtain the following two properties of the Bessel operator from Proposition
4.2 in [4].
###### Proposition 3.2 (supercommutativity).
For all $i\in\\{0,\ldots,m+2n-1\\}$ we have
$\displaystyle\operatorname{\mathcal{B}}(x_{i})\operatorname{\mathcal{B}}(x_{j})=(-1)^{|i||j|}\operatorname{\mathcal{B}}(x_{j})\operatorname{\mathcal{B}}(x_{i}).$
###### Proposition 3.3 (product rule).
For all $i\in\\{0,\ldots,m+2n-1\\}$ we have
$\displaystyle\operatorname{\mathcal{B}}(x_{i})(\phi\psi)=\operatorname{\mathcal{B}}(x_{i})(\phi)\psi+(-1)^{|i||\phi|}\phi\operatorname{\mathcal{B}}(x_{i})(\psi)$
$\displaystyle\hphantom{\operatorname{\mathcal{B}}(x_{i})(\phi\psi)=}{}+2(-1)^{|i||\phi|}\mathbb{E}(\phi)\partial_{i}(\psi)+2\partial_{i}(\phi)\mathbb{E}(\psi)-2x_{i}\sum_{r,s}(-1)^{|\phi||r|}\beta^{rs}\partial_{r}(\phi)\partial_{s}(\psi).$
As a direct result from the product rule we have
$\displaystyle[\operatorname{\mathcal{B}}(x_{i}),x_{j}]=\beta_{ij}(M-2+2\mathbb{E})-2L_{ij},$
(3.1)
for all $i,j\in\\{0,\ldots,m+2n-1\\}$.
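A quick consistency check of (3.1) is obtained by applying both sides to the constant function $1$: since $\partial_{i}(x_{j})=\beta_{ij}$ and $\Delta(x_{j})=0$,
$\displaystyle[\operatorname{\mathcal{B}}(x_{i}),x_{j}](1)=\operatorname{\mathcal{B}}(x_{i})(x_{j})=(M-2)\beta_{ij}=\big{(}\beta_{ij}(M-2+2\mathbb{E})-2L_{ij}\big{)}(1).$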
In what follows we will mostly use the following slightly modified version of
the Bessel operator.
###### Definition 3.4.
The modified Bessel operator
$\operatorname{\widetilde{\mathcal{B}}}_{\lambda}(x_{k})$ is given by
$\displaystyle\operatorname{\widetilde{\mathcal{B}}}(x_{0}):=-\operatorname{\mathcal{B}}(x_{0}),\qquad\operatorname{\widetilde{\mathcal{B}}}(x_{k}):=\operatorname{\mathcal{B}}(x_{k}),$
for $k\in\\{1,\ldots,m+2n-1\\}$.
### 3.2 Radial superfunctions
Let us denote the supervariables $x_{m},\ldots,x_{m+2n-1}$ by
$\theta_{1},\ldots,\theta_{2n}$. We keep the notations for
$x_{0},\ldots,x_{m-1}$. Define $\theta^{I}$ as
$\theta_{1}^{i_{1}}\theta_{2}^{i_{2}}\cdots\theta_{2n}^{i_{2n}}$ for
$I=(i_{1},i_{2},\dots,i_{2n})\in\mathbb{Z}_{2}^{2n}$. Consider a function
$h\colon\mathbb{R}\rightarrow\mathbb{R}$,
$h\in\mathcal{C}^{2n}(\mathbb{R}\setminus\\{0\\})$ and a superfunction
$f=f_{0}+\sum\limits_{I\neq 0}f_{I}\theta^{I}$, with
$f_{0},f_{I}\in\mathcal{C}^{\infty}\big{(}\mathbb{R}^{m}\big{)}$ and where
$f_{0}$ has non-zero values. Then a new superfunction
$\displaystyle h(f):=\sum_{j=0}^{2n}\dfrac{1}{j!}\left(\sum_{I\neq
0}f_{I}\theta^{I}\right)^{j}h^{(j)}(f_{0})$ (3.2)
is defined in [10, Definition 3]. Here $h^{(j)}$ denotes the $j$th derivative
of $h$.
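For example, for $h$ the square root and a superfunction of the form $f=f_{0}+f_{12}\theta_{1}\theta_{2}$ with $f_{0}>0$, the nilpotent part squares to zero, so only the terms $j=0$ and $j=1$ of equation (3.2) survive:
$\displaystyle\sqrt{f}=\sqrt{f_{0}}+\dfrac{f_{12}}{2\sqrt{f_{0}}}\theta_{1}\theta_{2}.$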
Set the supervariable
$\displaystyle r^{2}:=\sum_{i=1}^{m+2n-1}x^{i}x_{i},$
so $R^{2}=-x_{0}^{2}+r^{2}$. We can use equation (3.2) with
$f=\big{(}x_{0}^{2}+r^{2}\big{)}/2$ and $h$ the square root to define the
superfunction $|X|$ as
$\displaystyle|X|:=\sqrt{\dfrac{x_{0}^{2}+r^{2}}{2}}.$
Using equation (3.2) again, but now with $f=|X|$ we define $\exp(|X|)$ and
$\Lambda_{2,j}^{\mu,\nu}(|X|)$ as radial superfunctions. Here
$\Lambda_{2,j}^{\mu,\nu}$ is the generalised Laguerre function introduced in
Appendix A.2.
### 3.3 The $\boldsymbol{(\mathfrak{g},\mathfrak{k})}$-module
$\boldsymbol{W}$
We introduce the notations
$\displaystyle\mathfrak{g}:=\operatorname{TKK}(J)\cong\mathfrak{osp}(m,2|2n),$
$\displaystyle\mathfrak{k}:=\\{(x,I,-x)\colon I\in\operatorname{Inn}(J),\,x\in
J\\},$
$\displaystyle\mathfrak{k}_{0}:=\mathfrak{k}\cap\mathfrak{istr}(J)=\operatorname{Inn}(J).$
###### Theorem 3.5.
The following isomorphisms
$\displaystyle\mathfrak{k}_{0}\cong\mathfrak{osp}(m-1,0|2n),\qquad\mathfrak{k}\cong\mathfrak{osp}(m,0|2n)\oplus\mathbb{R},$
hold as algebras.
###### Proof.
This follows from a straightforward verification by, for example, looking at
the matrix realisation of $\mathfrak{g}$. ∎
In Section 5.2 of [6] the so-called Schrödinger model of $\mathfrak{g}$ was
constructed for $M\geq 4$. Since this model will play an important role in
this paper, we will give a short recall of its construction. Firstly, we
consider a representation $\pi$ of $\mathfrak{g}$ acting on smooth
superfunctions on $J^{-}$. Explicitly, $\pi$ of
$\mathfrak{g}=\operatorname{TKK}(J)=J^{-}\oplus\mathfrak{istr}(J)\oplus J^{+}$
is given as follows:
* •
$\pi(e_{l},0,0)=-2\imath x_{l}$,
* •
$\pi(0,L_{e_{k}},0)=-x_{0}\partial_{k}+x_{k}\partial_{0}$,
* •
$\pi(0,[L_{e_{i}},L_{e_{j}}],0)=x_{i}\partial_{j}-(-1)^{|i||j|}x_{j}\partial_{i}$,
* •
$\pi(0,L_{e_{0}},0)=\frac{2-M}{2}-\mathbb{E}$,
* •
$\pi(0,0,e_{l})=-\frac{1}{2}\imath\operatorname{\widetilde{\mathcal{B}}}(x_{l})$,
with $i,j,k\in\\{1,2,\ldots,m+2n-1\\}$, $l\in\\{0,1,\ldots,m+2n-1\\}$ and
where $\imath$ is the imaginary unit. For $n=0$, our convention corresponds to
the Schrödinger realisation given in [18]. It only differs from the
Schrödinger realisation given in [6] by a change of basis. It can be shown
that all the operators occurring in $\pi$ are tangential to $R^{2}$, with
$R^{2}$ as in equation (2.1). So we can consider the representation obtained
by quotienting out $R^{2}$. This quotient representation has an interesting
subrepresentation consisting of $\mathfrak{k}$-finite vectors. It is generated
by $\exp(-2|X|)$, with $\exp(-2|X|)$ the smooth radial superfunction on
$J^{-}$ as defined in Section 3.2. This subrepresentation will be our
Schrödinger model. So the Schrödinger model is given as follows:
$\displaystyle
W:=U(\mathfrak{g})\exp(-2|X|)\mod\big{\langle}R^{2}\big{\rangle},$
where the $\mathfrak{g}$-module structure is given by the Schrödinger
representation $\pi$.
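One can verify directly on these formulas that $\pi$ respects the brackets of Section 2.5. For instance, using equation (3.1) together with $\beta_{00}=-1$ and $L_{00}=0$, one finds
$\displaystyle[\pi(e_{0},0,0),\pi(0,0,e_{0})]=\big{[}\operatorname{\widetilde{\mathcal{B}}}(x_{0}),x_{0}\big{]}=M-2+2\mathbb{E}=\pi(0,-2L_{e_{0}},0),$
in accordance with $[e_{0}^{-},e_{0}^{+}]=-2L_{e_{0}}$.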
Theorem 5.3 of [6] gives the following decomposition of $W$.
###### Theorem 3.6 (decomposition of $W$).
Assume $M\geq 4$.
* $(1)$
The decomposition of $W$ as a $\mathfrak{k}$-module is given by
$\displaystyle W=\bigoplus_{j=0}^{\infty}W_{j},\qquad\mbox{with}\quad
W_{j}=U(\mathfrak{k})\Lambda_{2,j}^{M-3,-1}(2|X|),$
where $W_{j}$ and thus also $W$ are $\mathfrak{k}$-finite.
* $(2)$
$W$ is a simple $\mathfrak{g}$-module.
* $(3)$
An explicit decomposition of $W_{j}$ into irreducible
$\mathfrak{k}_{0}$-modules is given by
$\displaystyle
W_{j}=\bigoplus_{k=0}^{j}\bigoplus_{l=0}^{1}\Lambda_{2,j-k}^{M-3+2k,2l-1}(2|X|)\big{(}\mathcal{H}_{k}\big{(}\mathbb{R}^{m-1|2n}\big{)}\otimes\mathcal{H}_{l}(\mathbb{R})\big{)}.$
Furthermore, if $m$ is even we also have the following
$\mathfrak{k}$-isomorphism
$\displaystyle
W_{j}\cong\mathcal{H}_{j}\big{(}\mathbb{R}^{m|2n}\big{)}\otimes\mathcal{H}_{\frac{M-2}{2}+j}\big{(}\mathbb{R}^{2}\big{)}.$
### 3.4 The integral and sesquilinear form
In Section 8 of [6] an integral which restricts to $W$ was constructed. We
will use a renormalized version of that integral, restricted to $x_{0}>0$. To
give the integral explicitly we consider spherical coordinates in
$\mathbb{R}^{m-1}$ by setting $x_{i}=\omega_{i}s$ with
$\omega_{i}\in\mathbb{S}^{m-2}$, $i\in\\{1,\ldots,m-1\\}$ and
$\displaystyle s^{2}:=\sum_{i=1}^{m-1}x_{i}^{2}.$
We also introduce the following notations
$\displaystyle\theta^{2}:=\sum_{i,j=m}^{m+2n-1}\beta^{ij}x_{i}x_{j},\qquad
1+\eta:=\sqrt{1-\dfrac{\theta^{2}}{2s^{2}}},\qquad
1+\xi:=\sqrt{1+\dfrac{\theta^{2}}{2x_{0}^{2}}}$
and the morphism
$\displaystyle\phi^{\sharp}(f):=\sum_{j=0}^{n}\dfrac{\theta^{2j}}{j!}\left(\dfrac{1}{4x_{0}}\partial_{x_{0}}-\dfrac{1}{4s}\partial_{s}\right)^{j}(f),$
for
$f\in\mathcal{C}^{\infty}\big{(}\mathbb{R}^{m}\setminus\\{0\\}\big{)}\otimes\Lambda\big{(}\mathbb{R}^{2n}\big{)}$.
We remark that in [6, Lemma 8.2] it is shown that $\phi^{\sharp}$ is actually
an algebra isomorphism. The Berezin integral on
$\Lambda\big{(}\mathbb{R}^{2n}\big{)}$ is defined as
$\displaystyle\int_{B}:=\partial^{m+2n-1}\partial^{m+2n-2}\cdots\partial^{m}.$
###### Definition 3.7.
Suppose $M\geq 4$. For $f\in W$ we define the integral $\int_{W}$ by
$\displaystyle\int_{W}f:=\dfrac{1}{\gamma}\int_{0}^{\infty}\int_{\mathbb{S}^{m-2}}\int_{B}\rho^{m-3}(1+\eta)^{m-3}(1+\xi)^{-1}\phi^{\sharp}(f)_{|s=x_{0}=\rho}\mathrm{d}\rho\mathrm{d}\omega,$
(3.3)
where $\gamma\in\mathbb{C}$ is the renormalisation factor such that
$\int_{W}\exp(-4|X|)=1$.
We can show that the integral is well defined modulo
$\big{\langle}R^{2}\big{\rangle}$. This follows from Section 8 of [6] together
with the fact that $\gamma$ is non-zero.
###### Proposition 3.8.
For $M=m-2n\geq 4$ we have
$\displaystyle\gamma=\dfrac{2^{5-2M}}{n!}\left(\dfrac{3-m}{2}\right)_{n}\dfrac{\pi^{\frac{m-1}{2}}}{\Gamma{\big{(}\frac{m-1}{2}\big{)}}}\Gamma(M-2),$
where we used the Pochhammer symbol $(a)_{k}=a(a+1)(a+2)\cdots(a+k-1)$. Note
that $\big{(}\frac{3-m}{2}\big{)}_{n}=0$ implies $M=m-2n\leq 1$. Therefore
$\gamma$ is non-zero.
###### Proof.
Let us denote the integral $\int_{W}$ not normalized by $\gamma$ as
$\int_{W^{\prime}}$, i.e., $\int_{W^{\prime}}=\gamma\int_{W}$. We wish to
calculate
$\displaystyle\gamma=\int_{W^{\prime}}\exp(-4|X|).$
In [3, Lemma 6.3.5] a similar integral is calculated if one observes that
$\widetilde{K}_{-\frac{1}{2}}(t)=\frac{\sqrt{\pi}}{2}\exp(-t)$, (equation
(A.1) in Appendix A.1) where $\widetilde{K}_{\alpha}$ is the K-Bessel function
introduced in Appendix A.1. Using the same calculations as in the proof of [3,
Lemma 6.3.5], we then obtain
$\displaystyle\gamma=\dfrac{1}{n!}\left(\dfrac{3-m}{2}\right)_{n}\dfrac{\pi^{\frac{m-3}{2}}}{\Gamma{\big{(}\frac{m-1}{2}\big{)}}}\dfrac{\Gamma\big{(}\frac{M}{2}\big{)}\Gamma\big{(}\frac{M}{2}-1\big{)}\Gamma\big{(}\frac{M-1}{2}\big{)}^{2}}{\Gamma(M-1)},$
provided we take into account that we need extra factors $2$ in certain places
and that we restrict ourselves to $x_{0}>0$. Note that in [3, Lemma 6.3.5] the
tilde in $\widetilde{K}_{-\frac{1}{2}}$ sometimes mistakenly disappears. If we
use Legendre’s duplication formula:
$\displaystyle\Gamma\left(\dfrac{z+1}{2}\right)\Gamma\left(\dfrac{z}{2}\right)=2^{1-z}\sqrt{\pi}\Gamma(z),$
on $\Gamma\big{(}\frac{M}{2}\big{)}\Gamma\big{(}\frac{M-1}{2}\big{)}$ and
$\Gamma\big{(}\frac{M-1}{2}\big{)}\Gamma\big{(}\frac{M}{2}-1\big{)}$ the
result follows. ∎
###### Definition 3.9.
For $f,g\in W$ we define the sesquilinear form
$\langle{\cdot\,,\cdot}\rangle_{W}$ as
$\displaystyle\langle{f,g}\rangle_{W}:=\int_{W}f\overline{g}.$
Theorem 8.13 and Lemma 8.14 in [6] give us the following two properties.
###### Proposition 3.10.
Suppose $M=m-2n\geq 4$. The Schrödinger representation $\pi$ on $W$ is skew-
supersymmetric with respect to $\langle{\cdot\,,\cdot}\rangle_{W}$, i.e.,
$\displaystyle\langle{\pi(X)f,g}\rangle_{W}=-(-1)^{|X||f|}\langle{f,\pi(X)g}\rangle_{W},$
for all $X\in\mathfrak{g}$ and $f,g\in W$.
###### Proposition 3.11.
Suppose $M=m-2n\geq 4$. The form $\langle{\cdot\,,\cdot}\rangle_{W}$ defines a
sesquilinear, non-degenerate form on $W$, which is superhermitian, i.e.,
$\displaystyle\langle{f,g}\rangle_{W}=(-1)^{|f||g|}\overline{\langle{g,f}\rangle}_{W},$
for all $f,g\in W$.
Note that for both Theorem 8.13 and Lemma 8.14 in [6] there is an extra
condition saying that $M$ must be even. However, since we are working in the
exceptional case that corresponds to $q=2$ in [6], the proofs still hold
without this extra condition.
## 4 The Fock space
In [18, Section 2.3] an inner product on the polynomial space
$\mathcal{P}(\mathbb{C}^{m})$ was introduced, namely the Bessel–Fischer inner
product
$\displaystyle\langle{p,q}\rangle_{\mathcal{B}}:=p\big{(}\operatorname{\widetilde{\mathcal{B}}}\big{)}\bar{q}(z)\big{|}_{z=0},$
where $\bar{q}(z)=\overline{q(\bar{z})}$ is obtained by conjugating the
coefficients of the polynomial $q$. In [18, Proposition 2.6] it is proven
that, for polynomials, the Bessel–Fischer inner product is equal to the
$L^{2}$-inner product of the Fock space. Since there is no immediate extension
of this $L^{2}$-inner product to the super setting, we will use the
Bessel–Fischer inner product on polynomials as the starting point to
generalize the Fock space to superspace.
### 4.1 Definition and properties
###### Definition 4.1.
We define the polynomial Fock space as the superspace
$\displaystyle\operatorname{\mathcal{F}}:=\mathcal{P}\big{(}\mathbb{C}^{m|2n}\big{)}/\big{\langle}R^{2}\big{\rangle}.$
From the generalised Fischer decomposition of Proposition 2.5 it follows that
$\displaystyle\operatorname{\mathcal{F}}\cong\bigoplus_{l=0}^{\infty}\operatorname{\widetilde{\mathcal{H}}}_{l}\big{(}\mathbb{C}^{m|2n}\big{)}.$
In particular, if the superdimension $M=m-2n$ is such that
$M\not\in-2\mathbb{N}$, then the Fischer decomposition from Proposition 2.4
gives us
$\displaystyle\operatorname{\mathcal{F}}\cong\bigoplus_{l=0}^{\infty}\mathcal{H}_{l}\big{(}\mathbb{C}^{m|2n}\big{)}.$
###### Definition 4.2.
For $p,q\in\mathcal{P}\big{(}\mathbb{C}^{m|2n}\big{)}$ we define the
Bessel–Fischer product of $p$ and $q$ as
$\displaystyle\langle{p,q}\rangle_{\mathcal{B}}:=p\big{(}\operatorname{\widetilde{\mathcal{B}}}\big{)}\bar{q}(z)\big{|}_{z=0},$
where $\bar{q}(z)=\overline{q(\bar{z})}$ is obtained by conjugating the
coefficients of the polynomial $q$ and
$\operatorname{\widetilde{\mathcal{B}}}$ is the complex version of the
modified Bessel operators introduced in Definition 3.4.
Explicitly for $p=\sum_{\alpha}a_{\alpha}z^{\alpha}$ and
$q=\sum_{\beta}b_{\beta}z^{\beta}$ we have
$\displaystyle\langle{p,q}\rangle_{\mathcal{B}}=\sum_{\alpha,\beta}a_{\alpha}\bar{b}_{\beta}(-1)^{\alpha_{0}}\operatorname{\mathcal{B}}(z_{0})^{\alpha_{0}}\cdots\operatorname{\mathcal{B}}(z_{m+2n-1})^{\alpha_{m+2n-1}}z_{0}^{\beta_{0}}\cdots
z_{m+2n-1}^{\beta_{m+2n-1}}\big{|}_{z=0}.$
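For instance, on degree-one monomials a direct computation with the standardized form of Section 2.4 gives
$\displaystyle\langle{z_{i},z_{j}}\rangle_{\mathcal{B}}=\operatorname{\mathcal{B}}(z_{i})z_{j}\big{|}_{z=0}=(M-2)\beta_{ij},\qquad i,j\in\\{1,\ldots,m+2n-1\\},$
while $\langle{z_{0},z_{0}}\rangle_{\mathcal{B}}=-(M-2)\beta_{00}=M-2$.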
Note that it is only an inner product in the bosonic case. However, in [13] a
new definition of Hilbert superspaces was introduced where the preserved form
is no longer an inner product, but rather a non-degenerate, sesquilinear,
superhermitian form. We will prove that the Bessel–Fischer product is such a
form when restricted to $\operatorname{\mathcal{F}}$ with
$M-2\not\in-2\mathbb{N}$.
###### Proposition 4.3 (sesquilinearity).
For $p,q,r,s\in\mathcal{P}\big{(}\mathbb{C}^{m|2n}\big{)}$ and
$\alpha,\beta,\gamma,\delta\in\mathbb{C}$ we have
$\displaystyle\langle{\alpha p+\gamma r,\beta q+\delta
s}\rangle_{\mathcal{B}}=\alpha\bar{\beta}\langle{p,q}\rangle_{\mathcal{B}}+\alpha\bar{\delta}\langle{p,s}\rangle_{\mathcal{B}}+\gamma\bar{\beta}\langle{r,q}\rangle_{\mathcal{B}}+\gamma\bar{\delta}\langle{r,s}\rangle_{\mathcal{B}}.$
###### Proof.
This follows from the linearity of the Bessel operators. ∎
###### Proposition 4.4 (orthogonality).
For $p_{k}\in\mathcal{P}_{k}\big{(}\mathbb{C}^{m|2n}\big{)}$ and
$p_{l}\in\mathcal{P}_{l}\big{(}\mathbb{C}^{m|2n}\big{)}$ with $l\neq k$ we
have $\langle{p_{k},p_{l}}\rangle_{\mathcal{B}}=0$.
###### Proof.
This follows from the fact that Bessel operators lower the degree of
polynomials by one. ∎
###### Proposition 4.5.
For $p,q\in\mathcal{P}\big{(}\mathbb{C}^{m|2n}\big{)}$ and
$i\in\\{0,1,\ldots,m+2n-1\\}$ we have
$\displaystyle\langle{z_{i}p,q}\rangle_{\mathcal{B}}=(-1)^{|i||p|}\big{\langle}p,\operatorname{\widetilde{\mathcal{B}}}(z_{i})q\big{\rangle}_{\mathcal{B}}.$
###### Proof.
This follows immediately from the definition of the Bessel–Fischer product. ∎
To prove that the Bessel–Fischer product is superhermitian we will use
induction on the degree of the polynomials. We will need the following lemma
in the induction step of the proof.
###### Lemma 4.6.
Suppose we have proven that
$\langle{p,q}\rangle_{\mathcal{B}}=(-1)^{|p||q|}\overline{\langle{q,p}\rangle}_{\mathcal{B}}$
for all $p,q\in\mathcal{P}\big{(}\mathbb{C}^{m|2n}\big{)}$ of degree lower
than or equal to $k\in\mathbb{N}$. Then for
$p,q\in\mathcal{P}_{l}\big{(}\mathbb{C}^{m|2n}\big{)}$, $l\leq k$ we have
$\displaystyle\langle{L_{ij}p,q}\rangle_{\mathcal{B}}=-(-1)^{(|i|+|j|)|p|}\langle{p,L_{ij}q}\rangle_{\mathcal{B}}\qquad\text{and}\qquad\langle{L_{0i}p,q}\rangle_{\mathcal{B}}=(-1)^{|i||p|}\langle{p,L_{0i}q}\rangle_{\mathcal{B}},$
for all $i,j\in\\{1,\ldots,m+2n-1\\}$.
###### Proof.
We will prove the $L_{ij}$ case explicitly. The $L_{0i}$ case is entirely
analogous. First note that if we combine the given superhermitian property
with Proposition 4.5, we get
$\displaystyle\big{\langle}\operatorname{\widetilde{\mathcal{B}}}(z_{i})p,q\big{\rangle}_{\mathcal{B}}=(-1)^{|i||p|}\langle{p,z_{i}q}\rangle_{\mathcal{B}},$
for all $p\in\mathcal{P}\big{(}\mathbb{C}^{m|2n}\big{)}$ of degree $k$ or
lower and all $q\in\mathcal{P}\big{(}\mathbb{C}^{m|2n}\big{)}$ of degree $k-1$
or lower. Assume $p,q\in\mathcal{P}_{l}\big{(}\mathbb{C}^{m|2n}\big{)}$,
$l\leq k$. We obtain
$\displaystyle\langle{z_{i}\partial_{j}p,q}\rangle_{\mathcal{B}}=(-1)^{|i||p|}\langle{\partial_{j}p,\operatorname{\widetilde{\mathcal{B}}}(z_{i})q}\rangle_{\mathcal{B}}$
$\displaystyle\hphantom{\langle{z_{i}\partial_{j}p,q}\rangle_{\mathcal{B}}}{}=(-1)^{|i|(|p|+|j|)}\big{(}(M-2+2(l-1))\langle{\partial_{j}p,\partial_{i}q}\rangle_{\mathcal{B}}-\langle{\partial_{j}p,z_{i}\Delta
q}\rangle_{\mathcal{B}}\big{)}$
$\displaystyle\hphantom{\langle{z_{i}\partial_{j}p,q}\rangle_{\mathcal{B}}}{}=(-1)^{|i|(|p|+|j|)}\big{(}\langle{\operatorname{\widetilde{\mathcal{B}}}(z_{j})p,\partial_{i}q}\rangle_{\mathcal{B}}+\langle{z_{j}\Delta
p,\partial_{i}q}\rangle_{\mathcal{B}}-\langle{\partial_{j}p,z_{i}\Delta
q}\rangle_{\mathcal{B}}\big{)}$
$\displaystyle\hphantom{\langle{z_{i}\partial_{j}p,q}\rangle_{\mathcal{B}}}{}=(-1)^{(|i|+|j|)|p|+|i||j|}\langle{p,z_{j}\partial_{i}q}\rangle_{\mathcal{B}}+(-1)^{|i|(|p|+|j|)}\big{(}\langle{z_{j}\Delta
p,\partial_{i}q}\rangle_{\mathcal{B}}-\langle{\partial_{j}p,z_{i}\Delta
q}\rangle_{\mathcal{B}}\big{)}.$
This gives
$\displaystyle\langle{L_{ij}p,q}\rangle_{\mathcal{B}}=\langle{z_{i}\partial_{j}p,q}\rangle_{\mathcal{B}}-(-1)^{|i||j|}\langle{z_{j}\partial_{i}p,q}\rangle_{\mathcal{B}}$
$\displaystyle\hphantom{\langle{L_{ij}p,q}\rangle_{\mathcal{B}}}{}=(-1)^{(|i|+|j|)|p|+|i||j|}\langle{p,z_{j}\partial_{i}q}\rangle_{\mathcal{B}}-(-1)^{(|i|+|j|)|p|}\langle{p,z_{i}\partial_{j}q}\rangle_{\mathcal{B}}$
$\displaystyle\hphantom{\langle{L_{ij}p,q}\rangle_{\mathcal{B}}=}{}+(-1)^{|i|(|p|+|j|)}\big{(}\langle{z_{j}\Delta
p,\partial_{i}q}\rangle_{\mathcal{B}}-\langle{\partial_{j}p,z_{i}\Delta
q}\rangle_{\mathcal{B}}\big{)}$
$\displaystyle\hphantom{\langle{L_{ij}p,q}\rangle_{\mathcal{B}}=}{}-(-1)^{|j||p|}\big{(}\langle{z_{i}\Delta
p,\partial_{j}q}\rangle_{\mathcal{B}}-\langle{\partial_{i}p,z_{j}\Delta
q}\rangle_{\mathcal{B}}\big{)}$
$\displaystyle\hphantom{\langle{L_{ij}p,q}\rangle_{\mathcal{B}}}{}=-(-1)^{(|i|+|j|)|p|}\langle{p,L_{ij}q}\rangle_{\mathcal{B}}+(-1)^{|i|(|p|+|j|)}\big{(}\langle{z_{j}\Delta
p,\partial_{i}q}\rangle_{\mathcal{B}}-\langle{\partial_{j}p,z_{i}\Delta
q}\rangle_{\mathcal{B}}\big{)}$
$\displaystyle\hphantom{\langle{L_{ij}p,q}\rangle_{\mathcal{B}}=}{}-(-1)^{|j||p|}\big{(}\langle{z_{i}\Delta
p,\partial_{j}q}\rangle_{\mathcal{B}}-\langle{\partial_{i}p,z_{j}\Delta
q}\rangle_{\mathcal{B}}\big{)},$
from which the desired result follows if we prove
$\displaystyle 0=(-1)^{|i|(|p|+|j|)}\langle{z_{j}\Delta
p,\partial_{i}q}\rangle_{\mathcal{B}}-(-1)^{|i|(|p|+|j|)}\langle{\partial_{j}p,z_{i}\Delta
q}\rangle_{\mathcal{B}}$
$\displaystyle\hphantom{0=}{}-(-1)^{|j||p|}\langle{z_{i}\Delta
p,\partial_{j}q}\rangle_{\mathcal{B}}+(-1)^{|j||p|}\langle{\partial_{i}p,z_{j}\Delta
q}\rangle_{\mathcal{B}}.$ (4.1)
For the first term in the right-hand side of this equation we have
$\displaystyle\langle{z_{j}\Delta
p,\partial_{i}q}\rangle_{\mathcal{B}}=(-1)^{|p||j|}\big{\langle}\Delta
p,\operatorname{\widetilde{\mathcal{B}}}(z_{j})\partial_{i}q\big{\rangle}_{\mathcal{B}}$
$\displaystyle\hphantom{\langle{z_{j}\Delta
p,\partial_{i}q}\rangle_{\mathcal{B}}}{}=(-1)^{|p||j|}\big{(}(M-2+2(l-1))\langle{\Delta
p,\partial_{j}\partial_{i}q}\rangle_{\mathcal{B}}-\langle{\Delta
p,z_{j}\partial_{i}\Delta q}\rangle_{\mathcal{B}}\big{)},$
such that, using similar calculations for the other three terms, equation
(4.1) can be rewritten as
$\displaystyle\langle{L_{ij}\Delta p,\Delta
q}\rangle_{\mathcal{B}}=-(-1)^{(|i|+|j|)|p|}\langle{\Delta p,L_{ij}\Delta
q}\rangle_{\mathcal{B}}.$
Since $\Delta p$ and $\Delta q$ are polynomials of degree lower than $l$, the
lemma follows from a straightforward induction argument on $l$. ∎
###### Proposition 4.7 (superhermitianity).
The Bessel–Fischer product is superhermitian, i.e.,
$\displaystyle\langle{p,q}\rangle_{\mathcal{B}}=(-1)^{|p||q|}\overline{\langle{q,p}\rangle}_{\mathcal{B}},$
for $p,q\in\mathcal{P}\big{(}\mathbb{C}^{m|2n}\big{)}$.
###### Proof.
Because of the orthogonality we only need to prove the property for $p$, $q$
homogeneous polynomials of the same degree. Because of the sesquilinearity we
may assume that $p$ and $q$ are monomials. We will use induction on the degree
$k$ of the polynomials, the case $k=0$ being trivial. Suppose we have proven
the theorem for all $p,q\in\mathcal{P}_{k}\big{(}\mathbb{C}^{m|2n}\big{)}$. We
now look at $\langle{z_{i}p,z_{j}q}\rangle_{\mathcal{B}}$ for arbitrary
$i,j\in\\{0,1,\ldots,m+2n-1\\}$. To simplify the following calculations we
will restrict ourselves to $i,j\neq 0$, the cases $i=0$ or $j=0$ being
similar. Denote $c=M-2+2k$. Using the commutation relation (3.1) and
Proposition 4.5 we find
$\displaystyle\langle{z_{i}p,z_{j}q}\rangle_{\mathcal{B}}$
$\displaystyle=(-1)^{|i||p|}\langle{p,\operatorname{\mathcal{B}}(z_{i})z_{j}q}\rangle_{\mathcal{B}}$
$\displaystyle=(-1)^{|i||p|+|i||j|}\langle{p,z_{j}\operatorname{\mathcal{B}}(z_{i})q}\rangle_{\mathcal{B}}+(-1)^{|i||p|}\langle{p,[\operatorname{\mathcal{B}}(z_{i}),z_{j}]q}\rangle_{\mathcal{B}}$
$\displaystyle=(-1)^{|i||p|+|i||j|}\langle{p,z_{j}\operatorname{\mathcal{B}}(z_{i})q}\rangle_{\mathcal{B}}+(-1)^{|i||p|}c\beta_{ij}\langle{p,q}\rangle_{\mathcal{B}}-2(-1)^{|i||p|}\langle{p,L_{ij}q}\rangle_{\mathcal{B}}.$
Using the induction hypothesis together with Proposition 4.5 this becomes
$\displaystyle\langle{z_{i}p,z_{j}q}\rangle_{\mathcal{B}}=(-1)^{(|i|+|j|)|p|+|i||j|}\langle{\operatorname{\mathcal{B}}(z_{j})p,\operatorname{\mathcal{B}}(z_{i})q}\rangle_{\mathcal{B}}+(-1)^{|i||p|}c\beta_{ij}\langle{p,q}\rangle_{\mathcal{B}}$
$\displaystyle\hphantom{\langle{z_{i}p,z_{j}q}\rangle_{\mathcal{B}}=}{}-2(-1)^{|i||p|}\langle{p,L_{ij}q}\rangle_{\mathcal{B}}.$
Switching the roles of $z_{i}p$ and $z_{j}q$ we also obtain
$\displaystyle\langle{z_{j}q,z_{i}p}\rangle_{\mathcal{B}}=(-1)^{(|i|+|j|)|q|+|i||j|}\langle{\operatorname{\mathcal{B}}(z_{i})q,\operatorname{\mathcal{B}}(z_{j})p}\rangle_{\mathcal{B}}+(-1)^{|j||q|}c\beta_{ji}\langle{q,p}\rangle_{\mathcal{B}}$
$\displaystyle\hphantom{\langle{z_{j}q,z_{i}p}\rangle_{\mathcal{B}}=}{}-2(-1)^{|j||q|}\langle{q,L_{ji}p}\rangle_{\mathcal{B}}$
$\displaystyle\hphantom{\langle{z_{j}q,z_{i}p}\rangle_{\mathcal{B}}}{}=(-1)^{|i||q|+|i||p|+|q||p|}\langle{\operatorname{\mathcal{B}}(z_{j})p,\operatorname{\mathcal{B}}(z_{i})q}\rangle_{\mathcal{B}}+(-1)^{|j||q|+|i||j|+|p||q|}c\beta_{ij}\langle{p,q}\rangle_{\mathcal{B}}$
$\displaystyle\hphantom{\langle{z_{j}q,z_{i}p}\rangle_{\mathcal{B}}=}{}+2(-1)^{|i||q|+|q||p|+|i||j|}\langle{L_{ij}p,q}\rangle_{\mathcal{B}},$
where we used the induction hypothesis on all three terms of the right hand
side. If we use Lemma 4.6 on the last term and multiply both sides of this
equation with $(-1)^{(|p|+|i|)(|q|+|j|)}$ we get
$\displaystyle(-1)^{(|p|+|i|)(|q|+|j|)}\langle{z_{j}q,z_{i}p}\rangle_{\mathcal{B}}=(-1)^{|i||j|+|i||p|+|j||p|}\langle{\operatorname{\mathcal{B}}(z_{j})p,\operatorname{\mathcal{B}}(z_{i})q}\rangle_{\mathcal{B}}$
$\displaystyle\hphantom{(-1)^{(|p|+|i|)(|q|+|j|)}\langle{z_{j}q,z_{i}p}\rangle_{\mathcal{B}}=}{}+(-1)^{|j||q|+|i||q|+|p||j|}c\beta_{ij}\langle{p,q}\rangle_{\mathcal{B}}-2(-1)^{|i||p|}\langle{p,L_{ij}q}\rangle_{\mathcal{B}}$
$\displaystyle\hphantom{(-1)^{(|p|+|i|)(|q|+|j|)}\langle{z_{j}q,z_{i}p}\rangle_{\mathcal{B}}}{}=\langle{z_{i}p,z_{j}q}\rangle_{\mathcal{B}},$
where we made use of $\beta_{ij}=0$ for $|i|\neq|j|$ in the last step. ∎
Lemma 4.6 can now be extended to the following corollary.
###### Corollary 4.8.
We have
$\displaystyle\langle{L_{ij}p,q}\rangle_{\mathcal{B}}=-(-1)^{(|i|+|j|)|p|}\langle{p,L_{ij}q}\rangle_{\mathcal{B}}\qquad\text{and}\qquad\langle{L_{0i}p,q}\rangle_{\mathcal{B}}=(-1)^{|i||p|}\langle{p,L_{0i}q}\rangle_{\mathcal{B}},$
for all $i,j\in\\{1,\ldots,m+2n-1\\}$ and
$p,q\in\mathcal{P}\big{(}\mathbb{C}^{m|2n}\big{)}$.
We have the following proposition.
###### Proposition 4.9.
For all $p,q\in\mathcal{P}\big{(}\mathbb{C}^{m|2n}\big{)}$ it holds that
$\displaystyle\big{\langle}R^{2}p,q\big{\rangle}_{\mathcal{B}}=0=\big{\langle}p,R^{2}q\big{\rangle}_{\mathcal{B}}.$
Thus the Bessel–Fischer product is well-defined on
$\operatorname{\mathcal{F}}$.
###### Proof.
Because of the superhermitian property we only need to prove
$\big{\langle}p,R^{2}q\big{\rangle}=0$ holds for all
$p,q\in\mathcal{P}\big{(}\mathbb{C}^{m|2n}\big{)}$. The complex version of
Proposition 3.1 implies that for arbitrary
$p,q\in\mathcal{P}\big{(}\mathbb{C}^{m|2n}\big{)}$ there exists a
$q^{\prime}\in\mathcal{P}\big{(}\mathbb{C}^{m|2n}\big{)}$ such that
$\displaystyle\big{\langle}p,R^{2}q\big{\rangle}_{\mathcal{B}}=p\big{(}\operatorname{\widetilde{\mathcal{B}}}\big{)}R^{2}q(z)|_{z=0}=R^{2}q^{\prime}(z)|_{z=0}.$
Since the constant term of $R^{2}q^{\prime}(z)$ is always zero this proves the
theorem. ∎
Note that the above proposition also shows that the Bessel–Fischer product is
degenerate on the space $\mathcal{P}\big{(}\mathbb{C}^{m|2n}\big{)}$. In the
following subsection we look at the non-degeneracy of the Bessel–Fischer
product when we restrict it to $\operatorname{\mathcal{F}}$.
### 4.2 The reproducing kernel and non-degeneracy
In the bosonic case a reproducing kernel for the Fock space was constructed in
Section 2.4 of [18]. We will prove the non-degeneracy of the Bessel–Fischer
product on $\operatorname{\mathcal{F}}$ by first constructing a generalisation
of this reproducing kernel in superspace.
###### Lemma 4.10.
Suppose $M-2\not\in-2\mathbb{N}$ and $k\in\mathbb{N}$. Define the
superfunction $\mathbb{K}^{k}(z,w)$ by
$\displaystyle\mathbb{K}^{k}(z,w):=\dfrac{1}{4^{k}k!}\left(\dfrac{M}{2}-1\right)_{k}^{-1}(z|\overline{w})^{k},$
where we used the Pochhammer symbol $(a)_{k}=a(a+1)(a+2)\cdots(a+k-1)$ and
$z|w$ is defined as
$\displaystyle z|w:=2z_{0}w_{0}+2\sum_{i,j=1}^{m+2n-1}z_{i}\beta^{ij}w_{j}.$
For all $p\in\mathcal{P}_{k}\big{(}\mathbb{C}^{m|2n}\big{)}$ we then have
$\displaystyle\big{\langle}p,\mathbb{K}^{k}(z,w)\big{\rangle}_{\mathcal{B}}=p(w)\mod\big{\langle}R^{2}_{w}\big{\rangle}.$
###### Proof.
We have
$\displaystyle\operatorname{\widetilde{\mathcal{B}}}(z_{l})(z|w)^{k}$
$\displaystyle=(-1)^{\delta_{0l}}\left((M-2+2\mathbb{E})\partial_{l}-z_{l}\Delta\right)(z|w)^{k}$
$\displaystyle=2w_{l}k(M+2k-4)(z|w)^{k-1}-4z_{l}k(k-1)R^{2}_{w}(z|w)^{k-2}$
$\displaystyle=4w_{l}k\left(k+\dfrac{M}{2}-2\right)(z|w)^{k-1}-R^{2}_{w}4z_{l}k(k-1)(z|w)^{k-2}.$
Now suppose $p$ is a monomial, i.e.,
$p(z)=a\prod\limits_{l=0}^{m+2n-1}z_{l}^{\alpha_{l}}$, with $|\alpha|=k$ and
$a\in\mathbb{C}$. Iterating the previous calculation and working modulo
$R^{2}_{w}$ we obtain
$\displaystyle\langle{p,(z|\overline{w})^{k}}\rangle_{\mathcal{B}}$
$\displaystyle=\left.p\big{(}\operatorname{\widetilde{\mathcal{B}}}\big{)}(z|w)^{k}\right|_{z=0}=a\prod_{l=0}^{m+2n-1}\left.\operatorname{\widetilde{\mathcal{B}}}(z_{l})^{\alpha_{l}}(z|w)^{k}\right|_{z=0}$
$\displaystyle=4^{k}(k(k-1)\cdots
1)\left(k+\dfrac{M}{2}-2\right)\cdots\left(\dfrac{M}{2}-1\right)a\prod_{l=0}^{m+2n-1}w_{l}^{\alpha_{l}}$
$\displaystyle=4^{k}k!\left(\dfrac{M}{2}-1\right)_{k}p(w),$
which gives us the desired result. ∎
###### Theorem 4.11 (reproducing kernel of $\operatorname{\mathcal{F}}$).
Suppose $M-2\not\in-2\mathbb{N}$ and define the superfunction
$\mathbb{K}(z,w)$ by
$\displaystyle\mathbb{K}(z,w):=\Gamma\left(\dfrac{M}{2}-1\right)\widetilde{I}_{\frac{M}{2}-2}\big{(}\sqrt{(z|\overline{w})}\big{)}=\sum_{k=0}^{\infty}\dfrac{1}{4^{k}k!}\left(\dfrac{M}{2}-1\right)_{k}^{-1}(z|\overline{w})^{k},$
where $\widetilde{I}_{\alpha}$ is the I-Bessel function introduced in Appendix A.1. For all
$p\in\operatorname{\mathcal{F}}$ we have
$\displaystyle\langle{p,\mathbb{K}(z,w)}\rangle_{\mathcal{B}}=p(w).$
###### Proof.
By Lemma 4.10, the orthogonality property and the Fischer decomposition we
find that
$\displaystyle\sum_{k=0}^{\infty}\mathbb{K}^{k}(z,w)=\sum_{k=0}^{\infty}\dfrac{1}{4^{k}k!}\left(\dfrac{M}{2}-1\right)_{k}^{-1}(z|\overline{w})^{k},$
has the desired property. ∎
The non-degeneracy of the Bessel–Fischer product on
$\operatorname{\mathcal{F}}$ is now an almost immediate result.
###### Proposition 4.12 (non-degenerate case).
For $M-2\not\in-2\mathbb{N}$ the Bessel–Fischer product is non-degenerate on
the polynomial Fock space $\operatorname{\mathcal{F}}$, i.e., if
$\langle{p,q}\rangle_{\mathcal{B}}=0,$ for all
$q\in\operatorname{\mathcal{F}}$, then $p=0$.
###### Proof.
Suppose $p\in\operatorname{\mathcal{F}}$ is such that
$\langle{p,q}\rangle_{\mathcal{B}}=0$ for all
$q\in\operatorname{\mathcal{F}}$. Using the reproducing kernel we obtain
$p(w)=\langle{p,\mathbb{K}(z,w)}\rangle_{\mathcal{B}}=0$. Hence $p=0$. ∎
Note that the previous proposition only works when $M-2\not\in-2\mathbb{N}$.
For the $M-2\in{-}2\mathbb{N}$ case the Bessel–Fischer product will always be
degenerate.
###### Proposition 4.13 (degenerate case).
For $M-2\in-2\mathbb{N}$ the Bessel–Fischer product is degenerate on the
polynomial Fock space $\operatorname{\mathcal{F}}$, i.e., there exists a
$q\in\operatorname{\mathcal{F}}$, $q\neq 0$ such that
$\langle{p,q}\rangle_{\mathcal{B}}=0,$ for all
$p\in\operatorname{\mathcal{F}}$.
###### Proof.
Suppose $M-2\in-2\mathbb{N}$ and
$q\in\mathcal{H}_{2-\frac{M}{2}}\big{(}\mathbb{C}^{m|2n}\big{)}\subset\operatorname{\widetilde{\mathcal{H}}}_{2-\frac{M}{2}}\big{(}\mathbb{C}^{m|2n}\big{)}\subset\operatorname{\mathcal{F}}$.
Then
$\displaystyle\operatorname{\widetilde{\mathcal{B}}}(z_{i})q=\pm((M-2+2\mathbb{E})\partial_{i}q-z_{i}\Delta
q)=0,$
for all $i\in\\{0,1,\ldots,m+2n-1\\}$. For all
$p\in\operatorname{\mathcal{F}}$ it follows then immediately from the
definition that $\langle{p,q}\rangle_{\mathcal{B}}=0$. Since
$\displaystyle\dim\mathcal{H}_{2-\frac{M}{2}}\big{(}\mathbb{C}^{m|2n}\big{)}\geq\dim\mathcal{P}_{2-\frac{M}{2}}\big{(}\mathbb{C}^{m-1|2n}\big{)}\geq
1,$
we conclude that such a $q\neq 0$ exists. ∎
## 5 The Fock model of $\boldsymbol{\mathfrak{osp}(m,2|2n)}$
In [18] the Fock representation $\rho:=\pi_{\mathbb{C}}\circ c$ is obtained by
twisting the complexification of the Schrödinger representation
$\pi_{\mathbb{C}}$ with the Cayley transform $c$. This Cayley transform is an
isomorphism of $\mathfrak{g}_{\mathbb{C}}$ which induces a Lie algebra
isomorphism between $\mathfrak{k}_{\mathbb{C}}$ and
$\mathfrak{istr}(J_{\mathbb{C}})$. We will use a similar approach in our
construction of the Fock model. We start by defining a Cayley transform $c$ in
our setting.
### 5.1 Definition and properties
Let $\mathfrak{g}_{\mathbb{C}}$ and $\mathfrak{k}_{\mathbb{C}}$ be the
complexified versions of $\mathfrak{g}$ and $\mathfrak{k}$ introduced in
Section 3.3, i.e.,
$\displaystyle\mathfrak{g}_{\mathbb{C}}=\operatorname{TKK}(J_{\mathbb{C}})\cong\mathfrak{osp}_{\mathbb{C}}(m+2|2n),$
$\displaystyle\mathfrak{k}_{\mathbb{C}}=\\{(z,I,-z)\colon
I\in\operatorname{Inn}(J_{\mathbb{C}}),z\in
J_{\mathbb{C}}\\}\cong\mathfrak{osp}_{\mathbb{C}}(m|2n)\oplus\mathbb{C}.$
Let $\imath$ denote the complex unit. Define the Cayley transform
$c\in\operatorname{End}(\mathfrak{g}_{\mathbb{C}})$ as
$\displaystyle
c:=\exp\left(\frac{\imath}{2}\operatorname{ad}(e_{0}^{-})\right)\exp\big{(}\imath\operatorname{ad}\big{(}e_{0}^{+}\big{)}\big{)},$
with $e_{0}^{-}$ and $e_{0}^{+}$ the copies of $e_{0}$ in $J^{-}$ and $J^{+}$,
respectively.
###### Proposition 5.1.
Using the decomposition
$\mathfrak{g}_{\mathbb{C}}=J^{-}_{\mathbb{C}}\oplus\mathfrak{istr}(J_{\mathbb{C}})\oplus
J^{+}_{\mathbb{C}}$ we obtain the following explicit expression for the Cayley
transform
* •
$c(a,0,0)=\left(\dfrac{a}{4},\imath L_{a},a\right)$,
* •
$c(0,L_{a}+I,0)=\left(\imath\dfrac{a}{4},I,-\imath a\right)$,
* •
$c(0,0,a)=\left(\dfrac{a}{4},-\imath L_{a},a\right)$,
with $a\in J$ and $I\in\operatorname{Inn}(J_{\mathbb{C}})$. It induces a Lie
superalgebra isomorphism:
$\displaystyle c\colon\
\mathfrak{k}_{\mathbb{C}}\rightarrow\mathfrak{istr}(J_{\mathbb{C}}),\qquad(a,I,-a)\mapsto
I+2\imath L_{a}.$
###### Proof.
Expanding
$\exp\big{(}\frac{\imath}{2}\operatorname{ad}\big{(}e_{0}^{-}\big{)}\big{)}\exp\big{(}\imath\operatorname{ad}\big{(}e_{0}^{+}\big{)}\big{)}$
we obtain
$\displaystyle
c=1+\dfrac{\imath}{2}\operatorname{ad}\big{(}e_{0}^{-}\big{)}-\dfrac{1}{8}\operatorname{ad}\big{(}e_{0}^{-}\big{)}\operatorname{ad}\big{(}e_{0}^{-}\big{)}+\imath\operatorname{ad}\big{(}e_{0}^{+}\big{)}-\dfrac{1}{2}\operatorname{ad}\big{(}e_{0}^{+}\big{)}\operatorname{ad}\big{(}e_{0}^{+}\big{)}-\dfrac{1}{2}\operatorname{ad}\big{(}e_{0}^{-}\big{)}\operatorname{ad}\big{(}e_{0}^{+}\big{)}$
$\displaystyle\hphantom{c=}{}-\dfrac{\imath}{4}\operatorname{ad}\big{(}e_{0}^{-}\big{)}\operatorname{ad}\big{(}e_{0}^{+}\big{)}\operatorname{ad}\big{(}e_{0}^{+}\big{)}-\dfrac{\imath}{8}\operatorname{ad}\big{(}e_{0}^{-}\big{)}\operatorname{ad}\big{(}e_{0}^{-}\big{)}\operatorname{ad}\big{(}e_{0}^{+}\big{)}$
$\displaystyle\hphantom{c=}{}+\dfrac{1}{16}\operatorname{ad}\big{(}e_{0}^{-}\big{)}\operatorname{ad}\big{(}e_{0}^{-}\big{)}\operatorname{ad}\big{(}e_{0}^{+}\big{)}\operatorname{ad}\big{(}e_{0}^{+}\big{)}.$
We have the following straightforward calculations
$\displaystyle c(a,0,0)=a^{-}+0+0+2\imath L_{a}+a^{+}-a^{-}-\imath
L_{a}+0+\dfrac{a^{-}}{4}=\left(\dfrac{a}{4},\imath L_{a},a\right),$
$\displaystyle c(0,L_{a}+I,0)=(L_{a}+I)+\imath\dfrac{a^{-}}{2}+0-\imath
a^{+}+0-L_{a}+0-\imath\dfrac{a^{-}}{4}+0=\left(\imath\dfrac{a}{4},I,-\imath
a\right),$ $\displaystyle c(0,0,a)=a^{+}-\imath
L_{a}+\dfrac{a^{-}}{4}+0+0+0+0+0+0=\left(\dfrac{a}{4},-\imath L_{a},a\right),$
proving the proposition. ∎
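In particular, by linearity these three formulas combine to give the claimed
restriction to $\mathfrak{k}_{\mathbb{C}}$:
$\displaystyle c(a,I,-a)=c(a,0,0)+c(0,I,0)-c(0,0,a)=\left(\dfrac{a}{4},\imath
L_{a},a\right)+(0,I,0)-\left(\dfrac{a}{4},-\imath L_{a},a\right)=(0,I+2\imath
L_{a},0),$
i.e., $(a,I,-a)\mapsto I+2\imath L_{a}$ under the identification of $(0,X,0)$
with $X\in\mathfrak{istr}(J_{\mathbb{C}})$.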
We complexify the Schrödinger representation $\pi$ given in Section 3.3 to
obtain a representation $\pi_{\mathbb{C}}$ of
$\mathfrak{g}_{\mathbb{C}}=J^{-}_{\mathbb{C}}\oplus\mathfrak{istr}(J_{\mathbb{C}})\oplus
J^{+}_{\mathbb{C}}$ acting on $\operatorname{\mathcal{F}}$. Explicitly,
$\pi_{\mathbb{C}}$ is given by
* •
$\pi_{\mathbb{C}}(e_{l},0,0)=-2\imath z_{l}$,
* •
$\pi_{\mathbb{C}}(0,L_{e_{k}},0)=-z_{0}\partial_{k}+z_{k}\partial_{0}$,
* •
$\pi_{\mathbb{C}}(0,[L_{e_{i}},L_{e_{j}}],0)=z_{i}\partial_{j}-(-1)^{|i||j|}z_{j}\partial_{i}$,
* •
$\pi_{\mathbb{C}}(0,L_{e_{0}},0)=\frac{2-M}{2}-\mathbb{E}$,
* •
$\pi_{\mathbb{C}}(0,0,e_{l})=-\frac{1}{2}\imath\operatorname{\widetilde{\mathcal{B}}}(z_{l})$,
with $i,j,k\in\\{1,2,\ldots,m+2n-1\\}$ and $l\in\\{0,1,2,\ldots,m+2n-1\\}$.
Since $L_{ij}$, $\mathbb{E}$ and
$\operatorname{\widetilde{\mathcal{B}}}(z_{l})$ map
$\big{\langle}R^{2}\big{\rangle}$ into $\big{\langle}R^{2}\big{\rangle}$ this
representation is well defined on $\operatorname{\mathcal{F}}$. As in the
bosonic case, we will define the Fock representation $\rho$ as the composition
of $\pi_{\mathbb{C}}$ with the Cayley transform $c$,
$\displaystyle\rho:=\pi_{\mathbb{C}}\circ c.$
So $\rho$ of $\mathfrak{g}=J^{-}\oplus\mathfrak{istr}(J)\oplus J^{+}$ acting
on $\operatorname{\mathcal{F}}$ is given as follows
* •
$\rho(e_{0},0,0)=-\dfrac{\imath}{2}\big{(}z_{0}+\operatorname{\widetilde{\mathcal{B}}}(z_{0})+M-2+2\mathbb{E}\big{)}$,
* •
$\rho(e_{k},0,0)=-\dfrac{\imath}{2}\big{(}z_{k}+\operatorname{\widetilde{\mathcal{B}}}(z_{k})+2(z_{0}\partial_{k}-z_{k}\partial_{0})\big{)}$,
* •
$\rho(0,L_{e_{0}},0)=\dfrac{1}{2}\big{(}z_{0}-\operatorname{\widetilde{\mathcal{B}}}(z_{0})\big{)}$,
* •
$\rho(0,L_{e_{k}},0)=\dfrac{1}{2}\big{(}z_{k}-\operatorname{\widetilde{\mathcal{B}}}(z_{k})\big{)}$,
* •
$\rho(0,[L_{e_{i}},L_{e_{j}}],0)=z_{i}\partial_{j}-(-1)^{|i||j|}z_{j}\partial_{i}$,
* •
$\rho(0,0,e_{k})=-\dfrac{\imath}{2}\big{(}z_{k}+\operatorname{\widetilde{\mathcal{B}}}(z_{k})-2(z_{0}\partial_{k}-z_{k}\partial_{0})\big{)}$,
* •
$\rho(0,0,e_{0})=-\dfrac{\imath}{2}\big{(}z_{0}+\operatorname{\widetilde{\mathcal{B}}}(z_{0})+2-M-2\mathbb{E}\big{)}$,
with $i,j,k\in\\{1,2,\ldots,m+2n-1\\}$.
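As a consistency check of these formulas, the expression for
$\rho(0,L_{e_{0}},0)$ can be recovered directly from Proposition 5.1 and the
list for $\pi_{\mathbb{C}}$ above:
$\displaystyle\rho(0,L_{e_{0}},0)=\pi_{\mathbb{C}}\left(\imath\dfrac{e_{0}}{4},0,-\imath
e_{0}\right)=\dfrac{\imath}{4}(-2\imath
z_{0})-\imath\left(-\dfrac{\imath}{2}\operatorname{\widetilde{\mathcal{B}}}(z_{0})\right)=\dfrac{1}{2}\big{(}z_{0}-\operatorname{\widetilde{\mathcal{B}}}(z_{0})\big{)}.$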
###### Proposition 5.2 ($\mathfrak{osp}(m,2|2n)$-invariance).
The Fock representation $\rho$ on $\operatorname{\mathcal{F}}$ is skew-
supersymmetric with respect to the Bessel–Fischer product, i.e.,
$\displaystyle\langle{\rho(X)p,q}\rangle_{\mathcal{B}}=-(-1)^{|X||p|}\langle{p,\rho(X)q}\rangle_{\mathcal{B}},$
for all $X\in\mathfrak{g}$ and $p,q\in\operatorname{\mathcal{F}}$.
###### Proof.
Suppose $p,q\in\operatorname{\mathcal{F}}$. From Corollary 4.8 we obtain
$\displaystyle\langle{\rho(e_{i},0,-e_{i})p,q}\rangle_{\mathcal{B}}=\langle{-2\imath
L_{0i}p,q}\rangle_{\mathcal{B}}=-(-1)^{|i||p|}\langle{p,2\overline{\imath}L_{0i}q}\rangle_{\mathcal{B}}$
$\displaystyle\hphantom{\langle{\rho(e_{i},0,-e_{i})p,q}\rangle_{\mathcal{B}}}{}=-(-1)^{|i||p|}\langle{p,\rho(e_{i},0,-e_{i})q}\rangle_{\mathcal{B}}$
and
$\displaystyle\langle{\rho(0,[L_{e_{i}},L_{e_{j}}],0)p,q}\rangle_{\mathcal{B}}=\langle{L_{ij}p,q}\rangle_{\mathcal{B}}=-(-1)^{(|i|+|j|)|p|}\langle{p,L_{ij}q}\rangle_{\mathcal{B}}$
$\displaystyle\hphantom{\langle{\rho(0,[L_{e_{i}},L_{e_{j}}],0)p,q}\rangle_{\mathcal{B}}}{}=-(-1)^{(|i|+|j|)|p|}\langle{p,\rho(0,[L_{e_{i}},L_{e_{j}}],0)q}\rangle_{\mathcal{B}},$
for $i,j\in\\{1,\ldots,m+2n-1\\}$. Furthermore, we have
$\displaystyle\langle{\rho(e_{k},0,e_{k})p,q}\rangle_{\mathcal{B}}=\big{\langle}{-}\imath\big{(}z_{k}+\operatorname{\widetilde{\mathcal{B}}}(z_{k})\big{)}p,q\big{\rangle}_{\mathcal{B}}=-\imath\big{(}\langle{z_{k}p,q}\rangle_{\mathcal{B}}+\big{\langle}\operatorname{\widetilde{\mathcal{B}}}(z_{k})p,q\big{\rangle}_{\mathcal{B}}\big{)}$
$\displaystyle\hphantom{\langle{\rho(e_{k},0,e_{k})p,q}\rangle_{\mathcal{B}}}{}=-\imath(-1)^{|k||p|}\big{(}\big{\langle}p,\operatorname{\widetilde{\mathcal{B}}}(z_{k})q\big{\rangle}_{\mathcal{B}}+\langle{p,z_{k}q}\rangle_{\mathcal{B}}\big{)}$
$\displaystyle\hphantom{\langle{\rho(e_{k},0,e_{k})p,q}\rangle_{\mathcal{B}}}{}=-(-1)^{|k||p|}\big{\langle}p,\overline{\imath}\big{(}z_{k}+\operatorname{\widetilde{\mathcal{B}}}(z_{k})\big{)}q\big{\rangle}_{\mathcal{B}}$
$\displaystyle\hphantom{\langle{\rho(e_{k},0,e_{k})p,q}\rangle_{\mathcal{B}}}{}=-(-1)^{|k||p|}\langle{p,\rho(e_{k},0,e_{k})q}\rangle_{\mathcal{B}}$
and similarly
$\displaystyle\langle{\rho(0,L_{e_{k}},0)p,q}\rangle_{\mathcal{B}}=\left\langle\dfrac{1}{2}\big{(}z_{k}-\operatorname{\widetilde{\mathcal{B}}}(z_{k})\big{)}p,q\right\rangle_{\mathcal{B}}=-(-1)^{|k||p|}\langle{p,\rho(0,L_{e_{k}},0)q}\rangle_{\mathcal{B}},$
for $k\in\\{0,\ldots,m+2n-1\\}$. Because of Proposition 4.4 we may assume $p$
and $q$ are homogeneous polynomials of the same degree. We now have
$\displaystyle\langle{\rho(e_{0},0,-e_{0})p,q}\rangle_{\mathcal{B}}$
$\displaystyle=\langle{-2\imath(M-2+2\mathbb{E})p,q}\rangle_{\mathcal{B}}=-\langle{p,-2\imath(M-2+2\mathbb{E})q}\rangle_{\mathcal{B}}$
$\displaystyle=-\langle{p,\rho(e_{0},0,-e_{0})q}\rangle_{\mathcal{B}},$
which proves the proposition. ∎
### 5.2 The $\boldsymbol{(\mathfrak{g},\mathfrak{k})}$-module
$\boldsymbol{F}$
We define
$\displaystyle F:=U(\mathfrak{g})1\mod\big{\langle}R^{2}\big{\rangle},$
where the $\mathfrak{g}$ module structure is given by the Fock representation
$\rho$. We also introduce the notation
$\displaystyle
F_{k}:=\bigoplus_{l=0}^{k}z_{0}^{l}\mathcal{H}_{k-l}\big{(}\mathbb{C}^{m-1|2n}\big{)}\mod\big{\langle}R^{2}\big{\rangle}$
and reintroduce $r^{2}$ from Section 3.2 as its complexified version:
$\displaystyle r^{2}:=\sum_{i=1}^{m+2n-1}z^{i}z_{i}.$
We have $R^{2}=-z_{0}^{2}+r^{2}$ and since we are working modulo
$\big{\langle}R^{2}\big{\rangle}$ this implies $z_{0}^{2}=r^{2}$. In the
following, we will work modulo $\big{\langle}R^{2}\big{\rangle}$ but omit
$\big{\langle}R^{2}\big{\rangle}$ from our notation. For
$M-1\not\in-2\mathbb{N}$ we now find
$\displaystyle F_{k}$
$\displaystyle=\bigoplus_{l=0}^{k}z_{0}^{l}\mathcal{H}_{k-l}\big{(}\mathbb{C}^{m-1|2n}\big{)}=\bigoplus_{l=0}^{\big{\lfloor}\frac{k}{2}\big{\rfloor}}z_{0}^{2l}\mathcal{H}_{k-2l}\big{(}\mathbb{C}^{m-1|2n}\big{)}\oplus\bigoplus_{l=0}^{\big{\lfloor}\frac{k+1}{2}\big{\rfloor}-1}z_{0}^{2l+1}\mathcal{H}_{k-2l-1}\big{(}\mathbb{C}^{m-1|2n}\big{)}$
$\displaystyle=\bigoplus_{l=0}^{\big{\lfloor}\frac{k}{2}\big{\rfloor}}r^{2l}\mathcal{H}_{k-2l}\big{(}\mathbb{C}^{m-1|2n}\big{)}\oplus\bigoplus_{l=0}^{\big{\lfloor}\frac{k+1}{2}\big{\rfloor}-1}z_{0}r^{2l}\mathcal{H}_{k-2l-1}\big{(}\mathbb{C}^{m-1|2n}\big{)}$
$\displaystyle\cong\mathcal{P}_{k}(\mathbb{C}^{m-1|2n})\oplus
z_{0}\mathcal{P}_{k-1}\big{(}\mathbb{C}^{m-1|2n}\big{)},$
where we made use of the Fischer decomposition (Theorem 2.4) in the last step.
In particular, by Proposition 2.8,
$\displaystyle\dim
F_{k}=\dim\mathcal{P}_{k}\big{(}\mathbb{C}^{m-1|2n}\big{)}+\dim\mathcal{P}_{k-1}\big{(}\mathbb{C}^{m-1|2n}\big{)}=\dim\mathcal{H}_{k}\big{(}\mathbb{C}^{m|2n}\big{)}.$
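As a simple illustration, for $k=1$ every linear polynomial is harmonic, so
$\displaystyle\dim
F_{1}=\dim\mathcal{P}_{1}\big{(}\mathbb{C}^{m-1|2n}\big{)}+\dim\mathcal{P}_{0}\big{(}\mathbb{C}^{m-1|2n}\big{)}=(m-1+2n)+1=m+2n=\dim\mathcal{H}_{1}\big{(}\mathbb{C}^{m|2n}\big{)}.$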
If $M\not\in-2\mathbb{N}$, then for $p\in F_{k}$ the Fischer decomposition on
$\mathcal{P}_{k}\big{(}\mathbb{C}^{m|2n}\big{)}$ also gives
$\displaystyle
p=\sum_{l=0}^{\left\lfloor\frac{k}{2}\right\rfloor}R^{2l}h_{k-2l}=h_{k}\mod\big{\langle}R^{2}\big{\rangle},$
with $h_{k-2l}\in\mathcal{H}_{k-2l}\big{(}\mathbb{C}^{m|2n}\big{)}$. This
implies $F_{k}\cong\mathcal{H}_{k}\big{(}\mathbb{C}^{m|2n}\big{)}$ for $M\geq
2$.
###### Theorem 5.3 (decomposition of $F$).
For $M\geq 3$, we have the following:
* $(1)$
$F_{k}$ is an irreducible $\mathfrak{k}$-module.
* $(2)$
$F$ is an irreducible $\mathfrak{g}$-module and its $\mathfrak{k}$-type
decomposition is given by
$\displaystyle F=\bigoplus_{k=0}^{\infty}F_{k}.$
* $(3)$
An explicit decomposition of $F_{k}$ into irreducible
$\mathfrak{k}_{0}$-modules is given by
$\displaystyle
F_{k}=\bigoplus_{l=0}^{k}z_{0}^{l}\mathcal{H}_{k-l}\big{(}\mathbb{C}^{m-1|2n}\big{)}\mod\big{\langle}R^{2}\big{\rangle}.$
###### Proof.
We have the following elements of the action of $\mathfrak{g}$:
$\displaystyle\rho^{+}_{0}:=\rho\left(c^{-1}\left(\frac{-e_{0}}{2},0,0\right)\right)=\imath
z_{0},$
$\displaystyle\rho^{-}_{0}:=\rho\big{(}c^{-1}(0,0,-2e_{0})\big{)}=\imath\operatorname{\widetilde{\mathcal{B}}}(z_{0}),$
$\displaystyle\rho_{0i}:=\rho\left(-\frac{e_{i}}{2},0,\frac{e_{i}}{2}\right)=\imath
L_{0i},$ $\displaystyle\rho_{ij}:=\rho(0,[L_{e_{i}},L_{e_{j}}],0)=L_{ij},$
with $i,j\in\\{1,\ldots,m+2n-1\\}$. The elements $\rho_{ij}$ for
$i,j\in\\{1,\ldots,m+2n-1\\}$, $i\leq j$ give rise to an irreducible
representation of $\mathfrak{k}_{0}$ on
$\mathcal{H}_{k}\big{(}\mathbb{C}^{m-1|2n}\big{)}$ as a result of Proposition
2.6. These elements leave $r^{2}=z_{0}^{2}$ invariant and therefore also leave
powers of $z_{0}$ invariant, which proves $(3)$.
Again by Proposition 2.6 the elements $\rho_{ij}$ and $\rho_{0i}$ give rise to
an irreducible representation of $\mathfrak{k}$ on
$\mathcal{H}_{k}\big{(}\mathbb{C}^{m|2n}\big{)}\cong F_{k}$, which proves
$(1)$. For the first two elements we have
$\displaystyle\rho^{+}_{0}\big{(}z_{0}^{k}\big{)}=\imath z_{0}^{k+1},$
$\displaystyle\rho^{-}_{0}\big{(}z_{0}^{k}\big{)}=\imath\operatorname{\widetilde{\mathcal{B}}}(z_{0})z_{0}^{k}=\imath
k(M+2k-4)z_{0}^{k-1}-\imath k(k-1)z_{0}^{k-1}=\imath k(M+k-3)z_{0}^{k-1},$
which shows that $\rho^{+}_{0}$ allows us to move to polynomials of higher
degree, while $\rho^{-}_{0}$ allows us to go in the other direction: for
$M\geq 3$ the coefficient $k(M+k-3)$ is non-zero for all $k\geq 1$. Therefore
we obtain $(2)$. ∎
The following isomorphism is a direct result of this theorem.
###### Corollary 5.4.
Suppose $M\geq 3$ and let
$\displaystyle\operatorname{\mathcal{F}}=\mathcal{P}\big{(}\mathbb{C}^{m|2n}\big{)}/\big{\langle}R^{2}\big{\rangle}$
be the polynomial Fock space defined in Definition 4.1. We have
$\operatorname{\mathcal{F}}\cong F$.
## 6 The Segal–Bargmann transform
In this section we construct the Segal–Bargmann transform and show that it is
an isomorphism from $W$ as defined in Section 3.3 to $F$ as defined in Section
5.2. It will make use of the integral $\int_{W}$ we defined in Definition 3.7.
This integral is only defined for $M\geq 4$. Therefore we will always assume
$M\geq 4$ throughout this section.
### 6.1 Definition and properties
Let $\widetilde{I}_{\alpha}(t)$ be the I-Bessel function as introduced in
Appendix A.1. We define an entire function
$\operatorname{\mathbb{I}}_{\alpha}$ on $\mathbb{C}$ by
$\displaystyle\operatorname{\mathbb{I}}_{\alpha}(t):=\Gamma\left(\dfrac{M}{2}-1\right)\widetilde{I}_{\frac{M}{2}-2+\alpha}\big{(}2\sqrt{t}\big{)}=\Gamma\left(\dfrac{M}{2}-1\right)\sum_{l=0}^{\infty}\dfrac{1}{l!\Gamma\big{(}l+\frac{M}{2}-1+\alpha\big{)}}t^{l}.$
Clearly we have $\operatorname{\mathbb{I}}_{0}(0)=1$ and
$\displaystyle\partial_{t}^{j}\operatorname{\mathbb{I}}_{0}(t)=\Gamma\left(\dfrac{M}{2}-1\right)\partial_{t}^{j}\left(\widetilde{I}_{\frac{M}{2}-2}\big{(}2\sqrt{t}\big{)}\right)=\Gamma\left(\dfrac{M}{2}-1\right)\widetilde{I}_{\frac{M}{2}-2+j}\big{(}2\sqrt{t}\big{)}=\operatorname{\mathbb{I}}_{j}(t).$
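The case $j=1$ can be verified directly from the defining series:
differentiating term by term and shifting the summation index gives
$\displaystyle\partial_{t}\operatorname{\mathbb{I}}_{0}(t)=\Gamma\left(\dfrac{M}{2}-1\right)\sum_{l=1}^{\infty}\dfrac{1}{(l-1)!\Gamma\big{(}l+\frac{M}{2}-1\big{)}}t^{l-1}=\Gamma\left(\dfrac{M}{2}-1\right)\sum_{l=0}^{\infty}\dfrac{1}{l!\Gamma\big{(}l+\frac{M}{2}\big{)}}t^{l}=\operatorname{\mathbb{I}}_{1}(t),$
and the general case follows by iteration.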
We are now able to state the Segal–Bargmann transform that extends the one
from the bosonic case obtained in [18].
###### Definition 6.1.
For $f\in W$ the Segal–Bargmann transform is defined as
$\displaystyle\operatorname{SB}f(z):=\exp(-z_{0})\int_{W}\operatorname{\mathbb{I}}_{0}(x|z)\exp(-2x_{0})f(x),$
where $x|z$ is defined as
$\displaystyle x|z:=2x_{0}z_{0}+2\sum_{i,j=1}^{m+2n-1}x_{i}\beta^{ij}z_{j}$
and we view $\operatorname{\mathbb{I}}_{\alpha}(x|z)$ as a radial
superfunction in the sense of equation (3.2).
Note that $\operatorname{\mathbb{I}}_{0}(4(x|z))$ is the reproducing kernel
$\mathbb{K}(x,z)$ of the Fock space we found in Theorem 4.11.
###### Proposition 6.2.
For $M\geq 4$ the Segal–Bargmann transform $\operatorname{SB}$ is well
defined.
###### Proof.
We wish to prove that the integral
$\displaystyle\int_{W}\operatorname{\mathbb{I}}_{0}(x|z)\exp(-2x_{0})f(x)$
is convergent for all $f\in W$ and that $\operatorname{SB}(R^{2})=0$. As shown
in the proof of Theorem 8.13 in [6], the elements of $W$ can be decomposed
into elements of the form
$\displaystyle P_{k}\widetilde{K}_{-\frac{1}{2}+\alpha_{1}+\alpha_{2}}(2|X|),$
with $P_{k}$ a homogeneous polynomial of degree $k$. Here
$\widetilde{K}_{\alpha}$ is the K-Bessel function introduced in Appendix A.1
interpreted as a radial superfunction as in Section 3.2. Furthermore
$\alpha_{1},\alpha_{2}\in\mathbb{N}$ are subject to the relations
$k\geq\alpha_{1}+2\alpha_{2}$ and $M\geq 2\alpha_{1}+2$. Also, observe that
$\displaystyle|X|=\sqrt{\dfrac{x_{0}^{2}+r^{2}}{2}}=\sqrt{x_{0}^{2}+\dfrac{R^{2}}{2}}=\sqrt{x_{0}^{2}}\quad\mod\big{\langle}R^{2}\big{\rangle},$
(6.1)
which is equal to $x_{0}$ within the domain of integration of $\int_{W}$.
Because of all this and equation (A.1) it suffices to prove
$\displaystyle\int_{W}P_{k}\operatorname{\mathbb{I}}_{0}(x|z)\widetilde{K}_{-\frac{1}{2}}(2|X|)\widetilde{K}_{-\frac{1}{2}+\alpha_{1}+\alpha_{2}}(2|X|)$
is convergent for all $k\in\mathbb{N}$ and
$\alpha_{1},\alpha_{2}\in\mathbb{N}$ subject to the above mentioned relations.
We will use the explicit description of $\int_{W}$ given in equation (3.3).
The morphism $\phi^{\sharp}$ leaves the degree of a polynomial unchanged.
Hence, we can expand
$\displaystyle\big{(}\phi^{\sharp}(P_{k})\big{)}_{|s=x_{0}=\rho}=\sum_{j=0}^{k}\rho^{k-j}a_{j}(\theta)b_{j}(\omega),$
where $a_{j}(\theta)$ is a polynomial in
$\mathcal{P}\big{(}\mathbb{R}^{0|2n}\big{)}$ of degree $j$ and $b_{j}(\omega)$
is a function depending on the spherical coordinates $\omega$. For
$c\in\mathbb{Z}$ we obtain
$\displaystyle(1+\eta)^{c}=\sum_{j=0}^{n}\dfrac{1}{j!}\left(\dfrac{-c}{2}\right)_{j}\dfrac{\theta^{2j}}{2^{j}s^{2j}},\qquad(1+\xi)^{c}=\sum_{j=0}^{n}\dfrac{(-1)^{j}}{j!}\left(\dfrac{-c}{2}\right)_{j}\dfrac{\theta^{2j}}{2^{j}x_{0}^{2j}}$
and
$\displaystyle\phi^{\sharp}\big{(}\widetilde{K}_{\alpha}(c|X|)\big{)}=\widetilde{K}_{\alpha}(c|X|)=\sum_{j=0}^{n}\dfrac{(-1)^{j}c^{2j}\theta^{2j}}{j!8^{j}}\widetilde{K}_{\alpha+j}(c\rho),$
from the proof of Lemma 8.6 in [6]. Here $\eta$, $\xi$ and $\theta^{2}$ were
defined in Section 3.4. We introduce the notations
$\displaystyle\omega|z:=2\sum_{i=1}^{m-1}\omega_{i}z_{i}\qquad\mbox{and}\qquad\theta|z:=2\sum_{i,j=m}^{m+2n-1}x_{i}\beta^{ij}z_{j}.$
We can use equation (3.2) with $h=\operatorname{\mathbb{I}}_{0}$ and $f=x|z$
as a function in the $x$ variables to obtain
$\displaystyle\operatorname{\mathbb{I}}_{0}(x|z)=\operatorname{\mathbb{I}}_{0}(2x_{0}z_{0}+s\omega|z+\theta|z)=\sum_{j=0}^{2n}\dfrac{1}{j!}(\theta|z)^{j}\operatorname{\mathbb{I}}_{j}(2x_{0}z_{0}+s\omega|z).$
Using the properties of $\phi^{\sharp}$ described in Lemma 8.3 of [6] and the
expansion of $(1+\xi)$ and $(1+\eta)$, we now find
$\displaystyle\phi^{\sharp}(\operatorname{\mathbb{I}}_{0}(x|z))_{|s=x_{0}=\rho}=\sum_{l_{1}=0}^{2n}\dfrac{1}{l_{1}!}(\theta|z)^{l_{1}}\operatorname{\mathbb{I}}_{l_{1}}(2\rho(1+\xi)z_{0}+\rho(1+\eta)\omega|z)_{|s=x_{0}=\rho}$
$\displaystyle\hphantom{\phi^{\sharp}(\operatorname{\mathbb{I}}_{0}(x|z))_{|s=x_{0}=\rho}}{}=\sum_{l_{1}=0}^{2n}\dfrac{1}{l_{1}!}(\theta|z)^{l_{1}}\operatorname{\mathbb{I}}_{l_{1}}\left(\rho\sum_{l_{2}=0}^{n}\dfrac{1}{l_{2}!}\left(-\dfrac{1}{2}\right)_{l_{2}}\dfrac{\theta^{2l_{2}}}{2^{l_{2}}\rho^{2l_{2}}}(2(-1)^{l_{2}}z_{0}+\omega|z)\right).$
We use equation (3.2) again, this time with
$h=\operatorname{\mathbb{I}}_{l_{1}}$ and $f$ equal to the sum over $l_{2}$.
Note that the $l_{2}=0$ term corresponds with $f_{0}$. We obtain
$\displaystyle\phi^{\sharp}(\operatorname{\mathbb{I}}_{0}(x|z))_{|s=x_{0}=\rho}=\sum_{l_{1},l_{3}=0}^{2n}\dfrac{1}{l_{1}!l_{3}!}(\theta|z)^{l_{1}}\left(\rho\sum_{l_{2}=1}^{n}\dfrac{1}{l_{2}!}\left(-\dfrac{1}{2}\right)_{l_{2}}\dfrac{\theta^{2l_{2}}}{2^{l_{2}}\rho^{2l_{2}}}(2(-1)^{l_{2}}z_{0}+\omega|z)\right)^{l_{3}}$
$\displaystyle\hphantom{\phi^{\sharp}(\operatorname{\mathbb{I}}_{0}(x|z))_{|s=x_{0}=\rho}=}{}\times\operatorname{\mathbb{I}}_{l_{1}+l_{3}}(\rho(2z_{0}+\omega|z)).$
Combining all this, we see that
$\displaystyle\dfrac{1}{\gamma}\int_{0}^{\infty}\int_{\mathbb{S}^{m-2}}\int_{B}\rho^{m-3}(1+\eta)^{m-3}(1+\xi)^{-1}$
$\displaystyle\qquad{}\times\phi^{\sharp}(P_{k}\operatorname{\mathbb{I}}_{0}(x|z)\widetilde{K}_{-\frac{1}{2}}(2|X|)\widetilde{K}_{-\frac{1}{2}+\alpha_{1}+\alpha_{2}}(|X|))_{|s=x_{0}=\rho}\mathrm{d}\rho\mathrm{d}\omega,$
converges if
$\displaystyle\int_{0}^{\infty}\int_{B}\rho^{m-3}\sum_{j_{1}=0}^{k}\sum_{j_{2},j_{3},j_{4},j_{5}=0}^{n}\sum_{l_{1},l_{3}=0}^{2n}\dfrac{1}{j_{2}!}\left(\dfrac{3-m}{2}\right)_{j_{2}}\dfrac{\theta^{2j_{2}}}{2^{j_{2}}\rho^{2{j_{2}}}}\dfrac{(-1)^{j_{3}}}{j_{3}!}\left(\dfrac{1}{2}\right)_{j_{3}}\dfrac{\theta^{2{j_{3}}}}{2^{j_{3}}\rho^{2{j_{3}}}}$
$\displaystyle\qquad{}\times\rho^{k-j_{1}}a_{j_{1}}(\theta)\dfrac{1}{l_{1}!l_{3}!}(\theta|z)^{l_{1}}\left(\rho\sum_{l_{2}=1}^{n}\dfrac{1}{l_{2}!}\left(-\dfrac{1}{2}\right)_{l_{2}}\dfrac{\theta^{2l_{2}}}{2^{l_{2}}\rho^{2l_{2}}}c_{1}\right)^{l_{3}}\operatorname{\mathbb{I}}_{l_{1}+l_{3}}(c_{2}\rho)$
$\displaystyle\qquad{}\times\dfrac{(-1)^{j_{4}}\theta^{2{j_{4}}}}{{j_{4}}!2^{j_{4}}}\widetilde{K}_{-\frac{1}{2}+{j_{4}}}(2\rho)\dfrac{(-1)^{j_{5}}\theta^{2{j_{5}}}}{{j_{5}}!2^{j_{5}}}\widetilde{K}_{-\frac{1}{2}+\alpha_{1}+\alpha_{2}+j_{5}}(2\rho)\mathrm{d}\rho$
converges for all $c_{1},c_{2}\in\mathbb{C}$. This in turn converges if
$\displaystyle\int_{0}^{\infty}\int_{B}\rho^{m-3+k-j_{1}+l_{3}-2l_{3}l_{2}-2j_{2}-2j_{3}}a_{j_{1}}(\theta)(\theta|z)^{l_{1}}\theta^{2j_{2}+2j_{3}+2j_{4}+2j_{5}+2l_{3}l_{2}}\operatorname{\mathbb{I}}_{l_{1}+l_{3}}(c\rho)$
$\displaystyle\qquad\times\widetilde{K}_{-\frac{1}{2}+{j_{4}}}(2\rho)\widetilde{K}_{-\frac{1}{2}+\alpha_{1}+\alpha_{2}+j_{5}}(2\rho)\mathrm{d}\rho$
converges for all $0\leq j_{1}\leq k$, $0\leq j_{2},j_{3},j_{4},j_{5}\leq n$,
$0\leq l_{1},l_{3}\leq 2n$, $1\leq l_{2}\leq n$ and all $c\in\mathbb{C}$. The
Berezin integral is zero unless
$j_{1}+l_{1}+2j_{2}+2j_{3}+2j_{4}+2j_{5}+2l_{3}l_{2}=2n$. The integral
$\displaystyle\int_{0}^{\infty}\rho^{\sigma-1}\widetilde{I}_{\beta_{1}}\big{(}\sqrt{a\rho}\big{)}\widetilde{K}_{\beta_{2}}(2\rho)\widetilde{K}_{\beta_{3}}(2\rho)\mathrm{d}\rho,$
with $\beta_{1}\geq 0$ converges if
$\sigma>2\max\\{\beta_{2},0\\}+2\max\\{\beta_{3},0\\}$. This follows from the
asymptotic behaviour of the Bessel functions, see Appendix A.1. Therefore we
get the following condition
$\displaystyle m-2+k-j_{1}+l_{3}-2l_{3}l_{2}-2j_{2}-2j_{3}$
$\displaystyle\qquad{}>2\max\left\\{-\frac{1}{2}+{j_{4}},0\right\\}+2\max\left\\{-\frac{1}{2}+\alpha_{1}+\alpha_{2}+j_{5},0\right\\},$
with $j_{1}+l_{1}+2j_{2}+2j_{3}+2j_{4}+2j_{5}+2l_{3}l_{2}=2n$. Taking into
account $k\geq\alpha_{1}+2\alpha_{2}$ and $M\geq 2\alpha_{1}+2$ the condition
reduces to $M>2$. We still need that $\operatorname{SB}\big{(}R^{2}\big{)}=0$,
but this follows easily from
$\big{(}\phi^{\sharp}\big{(}R^{2}\big{)}\big{)}_{|s=x_{0}=\rho}=\big{(}{-}x_{0}^{2}+s^{2}\big{)}_{|s=x_{0}=\rho}=0$.
∎
We can now show that $\operatorname{SB}$ intertwines the Schrödinger model
with the Fock model.
###### Theorem 6.3 (intertwining property).
For $M\geq 4$ the Segal–Bargmann transform intertwines the action $\pi$ on $W$
with the action $\rho$ on $F$, i.e.,
$\displaystyle\operatorname{SB}\circ\pi(X)=\rho(X)\circ\operatorname{SB},$
for all $X\in\mathfrak{g}$.
###### Proof.
The proof of this theorem is a technical and long but rather straightforward
calculation. We refer to Appendix B for more details. ∎
###### Proposition 6.4.
The Segal–Bargmann transform $\operatorname{SB}$ induces a
$\mathfrak{g}$-module isomorphism between $W$ and $F$.
###### Proof.
From the way we normalized the integral $\int_{W}$ and equation (6.1) it is
clear that
$\displaystyle\operatorname{SB}(\exp(-2|X|))(0)=\int_{W}\exp(-4|X|)=1.$
Therefore the Segal–Bargmann transform maps a non-zero element of $W$ to a
non-zero element of $F$. It also intertwines the actions $\pi$ and $\rho$.
Since Theorems 3.6 and 5.3 give us that $W$ and $F$ are irreducible
$\mathfrak{g}$-modules, we conclude that $\operatorname{SB}$ is an isomorphism
of $\mathfrak{g}$-modules. ∎
###### Lemma 6.5.
We have $\operatorname{SB}(\exp(-2|X|))(z)=1$.
###### Proof.
Since $\exp(-2|X|)$ is in $W_{0}$ Proposition 6.4 implies that
$\operatorname{SB}(\exp(-2|X|))(z)$ is in $F_{0}=\mathbb{C}$. Hence
$\operatorname{SB}(\exp(-2|X|))(z)$ is a constant. From the way we normalized
the integral $\int_{W}$ we have $\operatorname{SB}(\exp(-2|X|))(0)=1$ and
therefore $\operatorname{SB}(\exp(-2|X|))(z)=1$. ∎
###### Theorem 6.6 (unitary property).
For $M\geq 4$ the Segal–Bargmann transform preserves the sesquilinear forms,
i.e.,
$\displaystyle\langle{\operatorname{SB}f,\operatorname{SB}g}\rangle_{\mathcal{B}}=\langle{f,g}\rangle_{W},$
for all $f,g\in W$.
###### Proof.
We first look at the case $f=\exp(-2|X|)$. Because of Lemma 6.5 and the
superhermitian property of the Bessel–Fischer product we have
$\displaystyle\langle{\operatorname{SB}(\exp(-2|X|)),\operatorname{SB}g}\rangle_{\mathcal{B}}=\langle{1,\operatorname{SB}g}\rangle_{\mathcal{B}}=\overline{\langle{\operatorname{SB}g,1}\rangle}_{\mathcal{B}}=\operatorname{SB}(\overline{g(x)})\big{(}\operatorname{\widetilde{\mathcal{B}}}\big{)}1\big{|}_{z=0}$
$\displaystyle\hphantom{\langle{\operatorname{SB}(\exp(-2|X|)),\operatorname{SB}g}\rangle_{\mathcal{B}}}{}=\int_{W}\exp(-2x_{0})\big{(}\operatorname{\mathbb{I}}_{0}(x|\operatorname{\widetilde{\mathcal{B}}}(z))\exp\big{(}{-}\operatorname{\widetilde{\mathcal{B}}}(z_{0})\big{)}1\big{)}\overline{g(x)}\big{|}_{z=0},$
for all $g\in W$. Here
$\operatorname{\mathbb{I}}_{0}\big{(}x|\operatorname{\widetilde{\mathcal{B}}}(z)\big{)}$
and $\exp\big{(}{-}\operatorname{\widetilde{\mathcal{B}}}(z_{0})\big{)}$
should be considered as infinite power sums of the Bessel operator with
$\displaystyle
x|\operatorname{\widetilde{\mathcal{B}}}(z):=2x_{0}\operatorname{\widetilde{\mathcal{B}}}(z_{0})+2\sum_{i,j=1}^{m+2n-1}x_{i}\beta^{ij}\operatorname{\widetilde{\mathcal{B}}}(z_{j})=2\sum_{i,j=0}^{m+2n-1}x_{i}\beta^{ij}\operatorname{\mathcal{B}}(z_{j}).$
Since they act on a constant with respect to the variable $z$ we get
$\displaystyle\operatorname{\mathbb{I}}_{0}\big{(}x|\operatorname{\widetilde{\mathcal{B}}}(z)\big{)}\exp\big{(}{-}\operatorname{\widetilde{\mathcal{B}}}(z_{0})\big{)}1=\operatorname{\mathbb{I}}_{0}(0)\exp(0)1=1,$
which gives
$\displaystyle\langle{\operatorname{SB}(\exp(-2|X|)),\operatorname{SB}g}\rangle_{\mathcal{B}}=\int_{W}\exp(-2x_{0})\overline{g(x)}=\langle{\exp(-2|X|),g}\rangle_{W},$
if we use equation (6.1). Now suppose $f,g\in W$. Since $W$ is an irreducible
$\mathfrak{g}$-module (Theorem 3.6), there exists a $Y\in U(\mathfrak{g})$
such that $f=\pi(Y)\exp(-2|X|)$. Therefore we can reduce the general case to
the previous case using the intertwining property (Theorem 6.3) and the fact
that the sesquilinear forms are skew symmetric for $\pi$ and $\rho$
(Propositions 3.10 and 5.2):
$\displaystyle\langle{\operatorname{SB}f,\operatorname{SB}g}\rangle_{\mathcal{B}}=\langle{\operatorname{SB}(\pi(Y)\exp(-2|X|)),\operatorname{SB}g}\rangle_{\mathcal{B}}=\langle{\rho(Y)\operatorname{SB}(\exp(-2|X|)),\operatorname{SB}g}\rangle_{\mathcal{B}}$
$\displaystyle\hphantom{\langle{\operatorname{SB}f,\operatorname{SB}g}\rangle_{\mathcal{B}}}{}=-\langle{\operatorname{SB}(\exp(-2|X|)),\rho(Y)\operatorname{SB}g}\rangle_{\mathcal{B}}=-\langle{\operatorname{SB}(\exp(-2|X|)),\operatorname{SB}(\pi(Y)g)}\rangle_{\mathcal{B}}$
$\displaystyle\hphantom{\langle{\operatorname{SB}f,\operatorname{SB}g}\rangle_{\mathcal{B}}}{}=-\langle{\exp(-2|X|),\pi(Y)g}\rangle_{W}=\langle{\pi(Y)\exp(-2|X|),g}\rangle_{W}=\langle{f,g}\rangle_{W},$
which proves the theorem. ∎
### 6.2 The inverse Segal–Bargmann transform
###### Definition 6.7.
For $p\in\mathcal{P}\big{(}\mathbb{C}^{m|2n}\big{)}$ the inverse
Segal–Bargmann transform is defined as
$\displaystyle\operatorname{SB}^{-1}p(x):=\exp(-2|X|)\operatorname{\mathbb{I}}_{0}\big{(}x|\operatorname{\widetilde{\mathcal{B}}}(z)\big{)}\exp\big{(}{-}\operatorname{\widetilde{\mathcal{B}}}(z_{0})\big{)}p(z)\big{|}_{z=0},$
with
$\displaystyle
x|\operatorname{\widetilde{\mathcal{B}}}(z):=2x_{0}\operatorname{\widetilde{\mathcal{B}}}(z_{0})+2\sum_{i,j=1}^{m+2n-1}x_{i}\beta^{ij}\operatorname{\widetilde{\mathcal{B}}}(z_{j})=2\sum_{i,j=0}^{m+2n-1}x_{i}\beta^{ij}\operatorname{\mathcal{B}}(z_{j}).$
Note that both
$\operatorname{\mathbb{I}}_{0}\big{(}x|\operatorname{\widetilde{\mathcal{B}}}(z)\big{)}$
and $\exp\big{(}{-}\operatorname{\widetilde{\mathcal{B}}}(z_{0})\big{)}$ are
infinite power sums of the Bessel operator. However, they are well defined
operators on polynomials in $z$ since the Bessel operator lowers the degree of
the polynomials and therefore the power operators become zero after a finite
number of terms. Thus $\operatorname{SB}^{-1}$ is a well-defined operator on
$\mathcal{P}\big{(}\mathbb{C}^{m|2n}\big{)}$. Because of Proposition 3.1 it
maps $\big{\langle}R^{2}\big{\rangle}$ to zero and thus
$\operatorname{SB}^{-1}$ can be restricted to $F$. Moreover, it is also well
defined as the inverse of the Segal–Bargmann transform. This follows from the
following proposition.
###### Proposition 6.8.
The inverse Segal–Bargmann transform is well defined as the inverse of the
Segal–Bargmann transform defined in Definition 6.1.
###### Proof.
From Proposition 6.4, we know that $\operatorname{SB}$ has an inverse. Suppose
the operator $A$ is this inverse. Using Theorem 6.6 we then have the following
calculation:
$\displaystyle\langle{Ap,\psi}\rangle_{W}$
$\displaystyle=\langle{\operatorname{SB}(Ap),\operatorname{SB}\psi}\rangle_{\mathcal{B}}=(-1)^{|p||\psi|}\overline{\langle{\operatorname{SB}\psi,p}\rangle}_{\mathcal{B}}=(-1)^{|p||\psi|}\overline{\operatorname{SB}\psi}\big{(}\operatorname{\widetilde{\mathcal{B}}}\big{)}p(z)\big{|}_{z=0}$
$\displaystyle=(-1)^{|p||\psi|}\left.\left(\exp\big{(}{-}\operatorname{\widetilde{\mathcal{B}}}(z_{0})\big{)}\int_{W}\operatorname{\mathbb{I}}_{0}\big{(}x|\operatorname{\widetilde{\mathcal{B}}}(z)\big{)}\exp(-2x_{0})\overline{\psi(x)}p(z)\right)\right|_{z=0}$
$\displaystyle=\int_{W}\big{(}\exp\big{(}{-}\operatorname{\widetilde{\mathcal{B}}}(z_{0})\big{)}\operatorname{\mathbb{I}}_{0}\big{(}x|\operatorname{\widetilde{\mathcal{B}}}(z)\big{)}\exp(-2x_{0})p(z)\big{)}\big{|}_{z=0}\overline{\psi(x)}$
$\displaystyle=\big{\langle}\exp\big{(}{-}\operatorname{\widetilde{\mathcal{B}}}(z_{0})\big{)}\operatorname{\mathbb{I}}_{0}\big{(}x|\operatorname{\widetilde{\mathcal{B}}}(z)\big{)}\exp(-2|X|)p(z)\big{|}_{z=0},\psi\big{\rangle}_{W},$
where we used equation (6.1) in the last step. Since the sesquilinear form
$\langle{\cdot\,,\cdot}\rangle_{W}$ is non-degenerate we obtain
$A=\operatorname{SB}^{-1}$. ∎
We can make the inverse Segal–Bargmann transform more explicit on the space of
homogeneous polynomials. To the best of our knowledge, this explicit
expression is also new for the bosonic case.
###### Proposition 6.9.
For $p\in\mathcal{P}_{k}\big{(}\mathbb{C}^{m|2n}\big{)}$, $k\in\mathbb{N}$ the
inverse Segal–Bargmann transform $\operatorname{SB}^{-1}$ is given by
$\displaystyle\operatorname{SB}^{-1}p(x)=\exp(-2|X|)\sum_{j=0}^{k}\dfrac{(-1)^{j}}{j!(k-j)!}\dfrac{\Gamma\big{(}\frac{M}{2}-1\big{)}}{\Gamma\big{(}k-j+\frac{M}{2}-1\big{)}}\big{\langle}z_{0}^{j}(x|z)^{k-j},\overline{p}\big{\rangle}.$
###### Proof.
For $p\in\mathcal{P}_{k}\big{(}\mathbb{C}^{m|2n}\big{)}$ we have
$\displaystyle\operatorname{SB}^{-1}p(x)$
$\displaystyle=\exp(-2|X|)\operatorname{\mathbb{I}}_{0}\big{(}x|\operatorname{\widetilde{\mathcal{B}}}(z)\big{)}\exp\big{(}{-}\operatorname{\widetilde{\mathcal{B}}}(z_{0})\big{)}p(z)\big{|}_{z=0}$
$\displaystyle=\exp(-2|X|)\langle{\operatorname{\mathbb{I}}_{0}(x|z)\exp(-z_{0}),\overline{p}}\rangle_{\mathcal{B}}.$
Because of the orthogonality of the Bessel–Fischer product we only need to
look at the homogeneous polynomial term of degree $k$ in the expansion of
$\operatorname{\mathbb{I}}_{0}(x|z)\exp(-z_{0})$. Explicitly, we need the
homogeneous term of degree $k$ in
$\displaystyle\Gamma\left(\dfrac{M}{2}-1\right)\sum_{l=0}^{\infty}\dfrac{1}{l!\Gamma\big{(}l+\frac{M}{2}-1\big{)}}(x|z)^{l}\sum_{j=0}^{\infty}\dfrac{(-1)^{j}}{j!}z_{0}^{j},$
which is
$\displaystyle\Gamma\left(\frac{M}{2}-1\right)\sum_{j=0}^{k}\dfrac{1}{(k-j)!\Gamma\big{(}k-j+\frac{M}{2}-1\big{)}}(x|z)^{k-j}\dfrac{(-1)^{j}}{j!}z_{0}^{j}$
as desired. ∎
### 6.3 The generalized Hermite functions
As a standard application of the Segal–Bargmann transform, we can construct
generalized Hermite functions which extend the ones of the bosonic case given
in [18].
###### Definition 6.10.
The generalized Hermite functions on $W$ are defined by
$\displaystyle
h_{\alpha}(x):=\exp(2|X|)\left(\dfrac{1}{2}\operatorname{\widetilde{\mathcal{B}}}\right)^{\alpha}\exp(-4|X|),\qquad\text{with}\quad\left(\dfrac{1}{2}\operatorname{\widetilde{\mathcal{B}}}\right)^{\alpha}:=\prod_{i}\dfrac{1}{2^{\alpha_{i}}}\operatorname{\widetilde{\mathcal{B}}}(e_{i})^{\alpha_{i}},$
for $\alpha\in\mathbb{N}^{m|2n}$. The generalized Hermite polynomials
$H_{\alpha}$ are defined by the equation
$\displaystyle h_{\alpha}(x)=H_{\alpha}(x)\exp(-2|X|).$
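For example, for $\alpha=0$ the definition immediately gives
$\displaystyle h_{0}(x)=\exp(2|X|)\exp(-4|X|)=\exp(-2|X|),\qquad H_{0}(x)=1,$
so that $\operatorname{SB}h_{0}=1=(2z)^{0}$ by Lemma 6.5, in agreement with
the following proposition.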
###### Proposition 6.11 (Hermite to monomial property).
We have
$\displaystyle\operatorname{SB}h_{\alpha}=(2z)^{\alpha}.$
###### Proof.
We will use the fact that $\int_{W}$ is supersymmetric with respect to the
Bessel operators [6, Proposition 8.9]:
$\displaystyle\int_{W}(\operatorname{\mathcal{B}}(x_{k})f)g=(-1)^{|f||k|}\int_{W}f(\operatorname{\mathcal{B}}(x_{k})g).$
Combining this with equation (B.3) of Lemma B.1 we find
$\displaystyle\operatorname{SB}h_{\alpha}(z)$
$\displaystyle=\exp(-z_{0})\int_{W}\operatorname{\mathbb{I}}_{0}(x|z)\exp(-2x_{0})h_{\alpha}(x)$
$\displaystyle=\exp(-z_{0})\int_{W}\operatorname{\mathbb{I}}_{0}(x|z)\prod_{i}\dfrac{1}{2^{\alpha_{i}}}\operatorname{\widetilde{\mathcal{B}}}(x_{i})^{\alpha_{i}}\exp(-4|X|)$
$\displaystyle=\exp(-z_{0})\int_{W}\prod_{i}\dfrac{1}{2^{\alpha_{i}}}\operatorname{\widetilde{\mathcal{B}}}(x_{i})^{\alpha_{i}}\left(\operatorname{\mathbb{I}}_{0}(x|z)\right)\exp(-4|X|)$
$\displaystyle=\exp(-z_{0})\int_{W}\prod_{i}\dfrac{1}{2^{\alpha_{i}}}(4z_{i})^{\alpha_{i}}\left(\operatorname{\mathbb{I}}_{0}(x|z)\right)\exp(-4|X|)$
$\displaystyle=\exp(-z_{0})\int_{W}(2z)^{\alpha}\operatorname{\mathbb{I}}_{0}(x|z)\exp(-4|X|)$
$\displaystyle=(2z)^{\alpha}\operatorname{SB}(\exp(-2|X|))(z).$
Because of Lemma 6.5, the proposition follows. ∎
## Appendix A Special functions
### A.1 Bessel functions
The I-Bessel function $I_{\alpha}(t)$ (or modified Bessel function of the
first kind) is defined by
$\displaystyle
I_{\alpha}(t):=\left(\dfrac{t}{2}\right)^{\alpha}\sum_{k=0}^{\infty}\dfrac{1}{k!\Gamma(k+\alpha+1)}\left(\dfrac{t}{2}\right)^{2k}$
and the K-Bessel function $K_{\alpha}$ (or modified Bessel function of the
third kind) by
$\displaystyle
K_{\alpha}(t):=\dfrac{\pi}{2\sin(\pi\alpha)}(I_{-\alpha}(t)-I_{\alpha}(t)),$
for $\alpha,t\in\mathbb{C}$, see [2, Section 4.12]. In this paper we will need
the following renormalisations
$\displaystyle\widetilde{I}_{\alpha}(t):=\left(\dfrac{t}{2}\right)^{-\alpha}I_{\alpha}(t),\qquad\widetilde{K}_{\alpha}(t):=\left(\dfrac{t}{2}\right)^{-\alpha}K_{\alpha}(t).$
Remark that we have the following special case [2, equation (4.12.4)]
$\displaystyle\widetilde{K}_{-\frac{1}{2}}(t)=\frac{\sqrt{\pi}}{2}\exp(-t).$
(A.1)
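This special case can also be seen from the classical closed form
$K_{\frac{1}{2}}(t)=K_{-\frac{1}{2}}(t)=\sqrt{\pi/(2t)}\,e^{-t}$, since
$\displaystyle\widetilde{K}_{-\frac{1}{2}}(t)=\left(\dfrac{t}{2}\right)^{\frac{1}{2}}K_{-\frac{1}{2}}(t)=\left(\dfrac{t}{2}\right)^{\frac{1}{2}}\sqrt{\dfrac{\pi}{2t}}\,e^{-t}=\dfrac{\sqrt{\pi}}{2}\exp(-t).$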
The asymptotic behaviour of the I-Bessel function can be deduced from
equations (4.12.7) and (4.12.8) of [2],
$\displaystyle\widetilde{I}_{\alpha}(t)=\dfrac{1}{\sqrt{2\pi}}\big{(}{-}t^{2}\big{)}^{-\frac{2\alpha+1}{4}}\left(\exp\left(-i\left(\dfrac{(2\alpha+1)\pi}{4}-\sqrt{-t^{2}}\right)\right)\left(1+\mathcal{O}\left(\dfrac{1}{t}\right)\right)\right.$
$\displaystyle\left.\hphantom{\widetilde{I}_{\alpha}(t)=}{}+\exp\left(i\left(\dfrac{(2\alpha+1)\pi}{4}-\sqrt{-t^{2}}\right)\right)\left(1+\mathcal{O}\left(\dfrac{1}{t}\right)\right)\right),$
for $|t|\rightarrow+\infty$. The asymptotic behaviour of the K-Bessel function
for $t\in\mathbb{R}$ is given in Appendix B.2 of [6],
$\displaystyle\mbox{for }t\rightarrow
0\colon\quad\widetilde{K}_{\alpha}(t)=\begin{cases}\displaystyle\frac{\Gamma(\alpha)}{2}\left(\frac{t}{2}\right)^{-2\alpha}+o\big{(}t^{-2\alpha}\big{)}&\mbox{if
}\alpha>0,\vspace{1mm}\\\
\displaystyle-\log\left(\frac{t}{2}\right)+o\left(\log\left(\frac{t}{2}\right)\right)&\mbox{if
}\alpha=0,\vspace{1mm}\\\ \displaystyle\frac{\Gamma(-\alpha)}{2}+o(1)&\mbox{if
}\alpha<0,\end{cases}$ $\displaystyle\mbox{for
}t\rightarrow+\infty\colon\quad\widetilde{K}_{\alpha}(t)=\dfrac{\sqrt{\pi}}{2}\left(\dfrac{t}{2}\right)^{-\alpha-\frac{1}{2}}e^{-t}\left(1+\mathcal{O}\left(\dfrac{1}{t}\right)\right).$
### A.2 Generalised Laguerre functions
Consider the generating function
$\displaystyle
G_{2}^{\mu,\nu}(t,x):=\dfrac{1}{(1-t)^{\frac{\mu+\nu+2}{2}}}\widetilde{I}_{\frac{\mu}{2}}\left(\dfrac{tx}{1-t}\right)\widetilde{K}_{\frac{\nu}{2}}\left(\dfrac{x}{1-t}\right),$
for complex parameters $\mu$ and $\nu$. The generalised Laguerre functions
$\Lambda_{2,j}^{\mu,\nu}(x)$ are defined in [16] as the coefficients in the
expansion
$\displaystyle
G_{2}^{\mu,\nu}(t,x)=\sum_{j=0}^{\infty}\Lambda_{2,j}^{\mu,\nu}(x)t^{j}.$
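For instance, setting $t=0$ in the generating function and using
$\widetilde{I}_{\alpha}(0)=1/\Gamma(\alpha+1)$ yields the lowest coefficient,
$\displaystyle\Lambda_{2,0}^{\mu,\nu}(x)=G_{2}^{\mu,\nu}(0,x)=\widetilde{I}_{\frac{\mu}{2}}(0)\widetilde{K}_{\frac{\nu}{2}}(x)=\dfrac{1}{\Gamma\big{(}\frac{\mu}{2}+1\big{)}}\widetilde{K}_{\frac{\nu}{2}}(x).$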
## Appendix B Proof of Theorem 6.3
To give the proof of Theorem 6.3 we first need a few technical lemmas.
###### Lemma B.1 (properties of $\operatorname{\mathbb{I}}_{0}$).
For $k\in\\{1,\ldots,m+2n-1\\}$ we have
$\displaystyle\partial_{z^{k}}\operatorname{\mathbb{I}}_{0}(x|z)=2x_{k}\operatorname{\mathbb{I}}_{1}(x|z),\qquad$
$\displaystyle\partial_{x^{k}}\operatorname{\mathbb{I}}_{0}(x|z)=2z_{k}\operatorname{\mathbb{I}}_{1}(x|z),$
(B.1)
$\displaystyle\partial_{z^{0}}\operatorname{\mathbb{I}}_{0}(x|z)=-2x_{0}\operatorname{\mathbb{I}}_{1}(x|z),\qquad$
$\displaystyle\partial_{x^{0}}\operatorname{\mathbb{I}}_{0}(x|z)=-2z_{0}\operatorname{\mathbb{I}}_{1}(x|z),$
(B.2)
$\displaystyle\mathbb{E}_{z}\operatorname{\mathbb{I}}_{0}(x|z)=(x|z)\operatorname{\mathbb{I}}_{1}(x|z),\qquad$
$\displaystyle\mathbb{E}_{x}\operatorname{\mathbb{I}}_{0}(x|z)=(x|z)\operatorname{\mathbb{I}}_{1}(x|z),$
and for $i\in\\{0,\ldots,m+2n-1\\}$ we have
$\displaystyle\operatorname{\widetilde{\mathcal{B}}}(z_{i})\operatorname{\mathbb{I}}_{0}(x|z)=4x_{i}\operatorname{\mathbb{I}}_{0}(x|z),\qquad\operatorname{\widetilde{\mathcal{B}}}(x_{i})\operatorname{\mathbb{I}}_{0}(x|z)=4z_{i}\operatorname{\mathbb{I}}_{0}(x|z).$
(B.3)
The last equation expresses that $\operatorname{\mathbb{I}}_{0}(x|z)$ is an
eigenfunction of $\operatorname{\widetilde{\mathcal{B}}}$.
###### Proof.
Equation (B.1) follows from the chain rule. Equation (B.2) follows immediately
from equation (B.1) and the definition of the Euler operator. Using the same
calculation as in the proof of Lemma 4.10, we obtain
$\displaystyle\operatorname{\widetilde{\mathcal{B}}}(z_{k})(x|z)^{l}=4x_{k}l\left(l+\frac{M}{2}-2\right)(x|z)^{l-1}.$
We then find
$\displaystyle\operatorname{\widetilde{\mathcal{B}}}(z_{k})\operatorname{\mathbb{I}}_{0}(x|z)$
$\displaystyle=\Gamma\left(\dfrac{M}{2}-1\right)\sum_{l=0}^{\infty}\dfrac{1}{l!\,\Gamma\left(l+\frac{M}{2}-1\right)}\operatorname{\widetilde{\mathcal{B}}}(z_{k})(x|z)^{l},$
$\displaystyle=4x_{k}\Gamma\left(\dfrac{M}{2}-1\right)\sum_{l=1}^{\infty}\dfrac{l\left(l+\frac{M}{2}-2\right)}{l!\,\Gamma\left(l+\frac{M}{2}-1\right)}(x|z)^{l-1}$
$\displaystyle=4x_{k}\Gamma\left(\dfrac{M}{2}-1\right)\sum_{l-1=0}^{\infty}\dfrac{1}{(l-1)!\,\Gamma\left((l-1)+\frac{M}{2}-1\right)}(x|z)^{l-1}$
$\displaystyle=4x_{k}\operatorname{\mathbb{I}}_{0}(x|z).$
The calculation for the case
$\operatorname{\widetilde{\mathcal{B}}}(x_{k})\operatorname{\mathbb{I}}_{0}(x|z)$
is analogous. ∎
###### Lemma B.2 (properties of $\exp$).
For $k\in\\{1,\ldots,m+2n-1\\}$ we have
$\displaystyle\mathbb{E}_{z}\exp(-z_{0})=-z_{0}\exp(-z_{0}),\qquad$
$\displaystyle\mathbb{E}_{x}\exp(-2x_{0})=-2x_{0}\exp(-2x_{0}),$ (B.4)
$\displaystyle\operatorname{\widetilde{\mathcal{B}}}(z_{k})\exp(-z_{0})=z_{k}\exp(-z_{0}),\qquad$
$\displaystyle\operatorname{\widetilde{\mathcal{B}}}(x_{k})\exp(-2x_{0})=4x_{k}\exp(-2x_{0}),$
(B.5)
$\displaystyle\operatorname{\widetilde{\mathcal{B}}}(z_{0})\exp(-z_{0})=(2-M+z_{0})\exp(-z_{0}).\quad$
(B.6)
###### Proof.
Equation (B.4) is immediate. For equation (B.6) we have
$\displaystyle\operatorname{\widetilde{\mathcal{B}}}(z_{0})\exp(-z_{0})$
$\displaystyle=(2-M-2\mathbb{E})\partial_{0}\exp(-z_{0})+z_{0}\Delta\exp(-z_{0})$
$\displaystyle=(2-M-2\mathbb{E})\exp(-z_{0})-z_{0}\partial_{0}\partial_{0}\exp(-z_{0})$
$\displaystyle=(2-M+2z_{0})\exp(-z_{0})-z_{0}\exp(-z_{0})$
$\displaystyle=(2-M+z_{0})\exp(-z_{0}),$
while for equation (B.5) we compute
$\displaystyle\operatorname{\widetilde{\mathcal{B}}}(z_{k})\exp(-z_{0})=0-z_{k}\Delta\exp(-z_{0})=z_{k}\partial_{0}\partial_{0}\exp(-z_{0})=z_{k}\exp(-z_{0}).$
A similar calculation shows
$\operatorname{\widetilde{\mathcal{B}}}(x_{k})\exp(-2x_{0})=4x_{k}\exp(-2x_{0})$.
∎
###### Lemma B.3 (properties of $\operatorname{SB}$).
We have
$\displaystyle\mathbb{E}_{z}\operatorname{SB}f(z)=-z_{0}\operatorname{SB}f(z)+\int_{W}\left(x|z\right)\operatorname{\mathbb{I}}_{1}(x|z)\exp(-z_{0}-2x_{0})f(x),$
(B.7)
$\displaystyle\dfrac{1}{2}\operatorname{\widetilde{\mathcal{B}}}(z_{0})\operatorname{SB}f(z)=\dfrac{1}{2}(2-M+z_{0})\operatorname{SB}f(z)+2\operatorname{SB}(x_{0}f)(z)$
$\displaystyle\hphantom{\dfrac{1}{2}\operatorname{\widetilde{\mathcal{B}}}(z_{0})\operatorname{SB}f(z)=}{}-\int_{W}(x|z)\operatorname{\mathbb{I}}_{1}(x|z)\exp(-z_{0}-2x_{0})f(x),$
(B.8)
$\displaystyle\dfrac{1}{2}\operatorname{\widetilde{\mathcal{B}}}(z_{k})\operatorname{SB}f(z)=\dfrac{1}{2}z_{k}\operatorname{SB}f(z)+2\operatorname{SB}(x_{k}f)(z)$
$\displaystyle\hphantom{\dfrac{1}{2}\operatorname{\widetilde{\mathcal{B}}}(z_{k})\operatorname{SB}f(z)=}{}-2\int_{W}(z_{0}x_{k}+z_{k}x_{0})\operatorname{\mathbb{I}}_{1}(x|z)\exp(-z_{0}-2x_{0})f(x),$
(B.9) $\displaystyle
L_{0k}^{z}\operatorname{SB}f(z)=-z_{k}\operatorname{SB}f(z)+2\int_{W}\left(z_{0}x_{k}+z_{k}x_{0}\right)\operatorname{\mathbb{I}}_{1}(x|z)\exp(-z_{0}-2x_{0})f(x).$
(B.10)
###### Proof.
Equation (B.7) follows from (B.2) and (B.4). To prove equation (B.8) we apply
the product rule (Proposition 3.3) to
$-\mathcal{B}(z_{0})(\exp(-z_{0})\operatorname{\mathbb{I}}_{0}(x|z))$ and
substitute equations (B.3), (B.6), and the following easily verified
identities
$\displaystyle\mathbb{E}_{z}(\exp(-z_{0}))\partial_{z^{0}}(\operatorname{\mathbb{I}}_{0}(x|z))=2z_{0}x_{0}\exp(-z_{0})\operatorname{\mathbb{I}}_{1}(x|z),$
$\displaystyle\partial_{z^{0}}(\exp(-z_{0}))\mathbb{E}_{z}(\operatorname{\mathbb{I}}_{0}(x|z))=\left(x|z\right)\exp(-z_{0})\operatorname{\mathbb{I}}_{1}(x|z),$
$\displaystyle
z_{0}\sum_{r,s}\beta^{rs}\partial_{z^{r}}(\exp(-z_{0}))\partial_{z^{s}}(\operatorname{\mathbb{I}}_{0}(x|z))=2z_{0}x_{0}\exp(-z_{0})\operatorname{\mathbb{I}}_{1}(x|z).$
Equation (B.9) follows in the same way using equations (B.3), (B.5), and
$\displaystyle\mathbb{E}_{z}(\exp(-z_{0}))\partial_{z^{k}}(\operatorname{\mathbb{I}}_{0}(x|z))=-2z_{0}x_{k}\exp(-z_{0})\operatorname{\mathbb{I}}_{1}(x|z),$
$\displaystyle\partial_{z^{k}}(\exp(-z_{0}))\mathbb{E}_{z}(\operatorname{\mathbb{I}}_{0}(x|z))=0,$
$\displaystyle
z_{k}\sum_{r,s}\beta^{rs}\partial_{z^{r}}(\exp(-z_{0}))\partial_{z^{s}}(\operatorname{\mathbb{I}}_{0}(x|z))=2z_{k}x_{0}\exp(-z_{0})\operatorname{\mathbb{I}}_{1}(x|z),$
while equation (B.10) follows immediately from
$\displaystyle
L_{0k}^{z}(\exp(-z_{0})\operatorname{\mathbb{I}}_{0}(x|z))=L_{0k}^{z}(\exp(-z_{0}))\operatorname{\mathbb{I}}_{0}(x|z)+\exp(-z_{0})L_{0k}^{z}(\operatorname{\mathbb{I}}_{0}(x|z))$
$\displaystyle\qquad{}=-z_{k}\exp(-z_{0})\operatorname{\mathbb{I}}_{0}(x|z)+\exp(-z_{0})(z_{0}(2x_{k}\operatorname{\mathbb{I}}_{1}(x|z))-z_{k}(-2x_{0}\operatorname{\mathbb{I}}_{1}(x|z)))$
$\displaystyle\qquad{}=-z_{k}\exp(-z_{0})\operatorname{\mathbb{I}}_{0}(x|z)+2\exp(-z_{0})(z_{0}x_{k}+z_{k}x_{0})\operatorname{\mathbb{I}}_{1}(x|z).$
This proves the lemma. ∎
We can now prove Theorem 6.3. For convenience we restate it here.
###### Theorem B.4 (intertwining property).
For $M\geq 4$ the Segal–Bargmann transform intertwines the action $\pi$ on $W$
with the action $\rho$ on $F$, i.e.,
$\displaystyle\operatorname{SB}\circ\pi(X)=\rho(X)\circ\operatorname{SB},$
(B.11)
for all $X\in\mathfrak{g}$.
###### Proof.
We will use the decomposition
$\mathfrak{g}=J^{-}\oplus\mathfrak{istr}(J)\oplus J^{+}$ to prove equation
(B.11) case by case.
Case 1: $X=(e_{0},0,0)$. We wish to prove
$\displaystyle\operatorname{SB}(2x_{0}f)(z)=\dfrac{1}{2}z_{0}\operatorname{SB}f(z)+\dfrac{1}{2}\operatorname{\widetilde{\mathcal{B}}}(z_{0})\operatorname{SB}f(z)+\left(\dfrac{M}{2}-1\right)\operatorname{SB}f(z)+\mathbb{E}_{z}\operatorname{SB}f(z).$
If we substitute (B.7) and (B.8) into the equation the result follows.
Case 2: $X=(e_{k},0,0)$, $k\neq 0$. Substituting (B.9) and (B.10) in
$\displaystyle\operatorname{SB}(2x_{k}f)(z)=\dfrac{1}{2}z_{k}\operatorname{SB}f(z)+\dfrac{1}{2}\operatorname{\widetilde{\mathcal{B}}}(z_{k})\operatorname{SB}f(z)+L_{0k}^{z}\operatorname{SB}f(z)$
proves equation (B.11) for $X=(e_{k},0,0)$.
Case 3: $X=(0,L_{e_{0}},0)$. We wish to prove
$\displaystyle\dfrac{1}{2}\operatorname{SB}((2-M)f)-\operatorname{SB}(\mathbb{E}_{x}f)(z)=\dfrac{1}{2}z_{0}\operatorname{SB}f(z)-\dfrac{1}{2}\operatorname{\widetilde{\mathcal{B}}}(z_{0})\operatorname{SB}f(z).$
Using (B.8) this becomes
$\displaystyle\operatorname{SB}((2-M)f)=\operatorname{SB}(\mathbb{E}_{x}f)(z)-2\operatorname{SB}(x_{0}f)(z)+\int_{W}(x|z)\operatorname{\mathbb{I}}_{1}(x|z)\exp(-z_{0}-2x_{0})f(x).$
From the proof of [6, Proposition 8.9] it follows that
$\int_{W}(\mathbb{E}_{x}+M-2)f=0$ when $\int_{W}f$ is well defined. This gives
$\displaystyle\operatorname{SB}((2-M)f)$
$\displaystyle=\exp(-z_{0})\int_{W}(2-M)\operatorname{\mathbb{I}}_{0}(x|z)\exp(-2x_{0})f(x)$
$\displaystyle=\exp(-z_{0})\int_{W}\mathbb{E}_{x}(\operatorname{\mathbb{I}}_{0}(x|z)\exp(-2x_{0})f(x))$
$\displaystyle=\int_{W}(x|z)\operatorname{\mathbb{I}}_{1}(x|z)\exp(-z_{0}-2x_{0})f(x)-2\operatorname{SB}(x_{0}f)(z)+\operatorname{SB}(\mathbb{E}_{x}f)(z),$
where we used (B.2) and (B.4) to obtain the last equality.
Case 4: $X=(0,L_{e_{k}},0)$, $k\neq 0$. In a similar fashion to (B.10), we
find
$\displaystyle-\operatorname{SB}\big{(}L_{0k}^{x}f\big{)}(z)$
$\displaystyle=-\exp(-z_{0})\int_{W}\operatorname{\mathbb{I}}_{0}(x|z)\exp(-2x_{0})L_{0k}^{x}f(x)$
$\displaystyle=\exp(-z_{0})\int_{W}L_{0k}^{x}(\operatorname{\mathbb{I}}_{0}(x|z)\exp(-2x_{0}))f(x)$
$\displaystyle=2\int_{W}(x_{0}z_{k}+z_{0}x_{k})\operatorname{\mathbb{I}}_{1}(x|z)\exp(-z_{0}-2x_{0})f(x)-2\operatorname{SB}(x_{k}f)(z).$
Because of (B.9), this is equivalent to
$\displaystyle-\operatorname{SB}\big{(}L_{0k}^{x}f\big{)}(z)=\dfrac{1}{2}z_{k}\operatorname{SB}f(z)-\dfrac{1}{2}\operatorname{\widetilde{\mathcal{B}}}(z_{k})\operatorname{SB}f(z),$
proving equation (B.11) for $X=(0,L_{e_{k}},0)$.
Case 5: $X=(0,[L_{e_{i}},L_{e_{j}}],0)$, $i,j\neq 0$. Using Proposition 3.10,
we obtain
$\displaystyle\operatorname{SB}\big{(}L_{ij}^{x}f\big{)}(z)$
$\displaystyle=\exp(-z_{0})\int_{W}\operatorname{\mathbb{I}}_{0}(x|z)\exp(-2x_{0})L_{ij}^{x}f(x)$
$\displaystyle=-\exp(-z_{0})\int_{W}L_{ij}^{x}(\operatorname{\mathbb{I}}_{0}(x|z)\exp(-2x_{0}))f(x)$
$\displaystyle=-2\exp(-z_{0})\int_{W}\big{(}x_{i}z_{j}-(-1)^{|i||j|}x_{j}z_{i}\big{)}\operatorname{\mathbb{I}}_{1}(x|z)\exp(-2x_{0})f(x)$
$\displaystyle=\exp(-z_{0})\int_{W}L_{ij}^{z}(\operatorname{\mathbb{I}}_{0}(x|z))\exp(-2x_{0})f(x)$
$\displaystyle=L_{ij}^{z}\operatorname{SB}f(z),$
thus equation (B.11) holds.
Case 6: $X=(0,0,e_{0})$. Using (B.7) and (B.8), we can rewrite
$\displaystyle\dfrac{1}{2}z_{0}\operatorname{SB}f(z)+\dfrac{1}{2}\operatorname{\widetilde{\mathcal{B}}}(z_{0})\operatorname{SB}f(z)+\left(1-\dfrac{M}{2}\right)\operatorname{SB}f(z)-\mathbb{E}_{z}\operatorname{SB}f(z)$
as
$\displaystyle
2z_{0}\operatorname{SB}f(z)+2\operatorname{SB}(x_{0}f)(z)+(2-M)\operatorname{SB}f(z)-2\int_{W}(x|z)\operatorname{\mathbb{I}}_{1}(x|z)\exp(-z_{0}-2x_{0})f(x).$
In a similar fashion to (B.8), we find
$\displaystyle\dfrac{1}{2}\operatorname{SB}\big{(}\operatorname{\widetilde{\mathcal{B}}}(x_{0})f\big{)}(z)=\dfrac{1}{2}\exp(-z_{0})\int_{W}\operatorname{\mathbb{I}}_{0}(x|z)\exp(-2x_{0})\operatorname{\widetilde{\mathcal{B}}}(x_{0})f(x)$
$\displaystyle\hphantom{\dfrac{1}{2}\operatorname{SB}\big{(}\operatorname{\widetilde{\mathcal{B}}}(x_{0})f\big{)}(z)}{}=\dfrac{1}{2}\exp(-z_{0})\int_{W}\operatorname{\widetilde{\mathcal{B}}}(x_{0})\left(\operatorname{\mathbb{I}}_{0}(x|z)\exp(-2x_{0})\right)f(x)$
$\displaystyle\hphantom{\dfrac{1}{2}\operatorname{SB}\big{(}\operatorname{\widetilde{\mathcal{B}}}(x_{0})f\big{)}(z)}{}=2z_{0}\operatorname{SB}f(z)+(2-M)\operatorname{SB}f(z)+2\operatorname{SB}(x_{0}f)(z)$
$\displaystyle\hphantom{\dfrac{1}{2}\operatorname{SB}\big{(}\operatorname{\widetilde{\mathcal{B}}}(x_{0})f\big{)}(z)=}{}-2\int_{W}(x|z)\operatorname{\mathbb{I}}_{1}(x|z)\exp(-z_{0}-2x_{0})f(x).$
So we conclude that equation (B.11) holds for $X=(0,0,e_{0})$.
Case 7: $X=(0,0,e_{k})$, $k\neq 0$. To show
$\displaystyle\dfrac{1}{2}\operatorname{SB}\big{(}\operatorname{\widetilde{\mathcal{B}}}(x_{k})f\big{)}(z)=\dfrac{1}{2}z_{k}\operatorname{SB}f(z)+\dfrac{1}{2}\operatorname{\widetilde{\mathcal{B}}}(z_{k})\operatorname{SB}f(z)-L_{0k}^{z}\operatorname{SB}f(z),$
we substitute (B.9) and (B.10) into the equation. So we obtain
$\displaystyle\dfrac{1}{2}\operatorname{SB}\big{(}\operatorname{\widetilde{\mathcal{B}}}(x_{k})f\big{)}(z)=2z_{k}\operatorname{SB}f(z)+2\operatorname{SB}(x_{k}f)(z)$
$\displaystyle\hphantom{\dfrac{1}{2}\operatorname{SB}\big{(}\operatorname{\widetilde{\mathcal{B}}}(x_{k})f\big{)}(z)=}{}-4\int_{W}(z_{0}x_{k}+z_{k}x_{0})\operatorname{\mathbb{I}}_{1}(x|z)\exp(-z_{0}-2x_{0})f(x).$
In a similar fashion to (B.9), we find
$\displaystyle\dfrac{1}{2}\operatorname{SB}\big{(}\operatorname{\widetilde{\mathcal{B}}}(x_{k})f\big{)}(z)=\dfrac{1}{2}\exp(-z_{0})\int_{W}\operatorname{\mathbb{I}}_{0}(x|z)\exp(-2x_{0})\operatorname{\widetilde{\mathcal{B}}}(x_{k})f(x)$
$\displaystyle\hphantom{\dfrac{1}{2}\operatorname{SB}\big{(}\operatorname{\widetilde{\mathcal{B}}}(x_{k})f\big{)}(z)}{}=\dfrac{1}{2}\exp(-z_{0})\int_{W}\operatorname{\widetilde{\mathcal{B}}}(x_{k})(\operatorname{\mathbb{I}}_{0}(x|z)\exp(-2x_{0}))f(x)$
$\displaystyle\hphantom{\dfrac{1}{2}\operatorname{SB}\big{(}\operatorname{\widetilde{\mathcal{B}}}(x_{k})f\big{)}(z)}{}=2z_{k}\operatorname{SB}f(z)+2\operatorname{SB}(x_{k}f)(z)$
$\displaystyle\hphantom{\dfrac{1}{2}\operatorname{SB}\big{(}\operatorname{\widetilde{\mathcal{B}}}(x_{k})f\big{)}(z)=}{}-4\int_{W}(z_{0}x_{k}+z_{k}x_{0})\operatorname{\mathbb{I}}_{1}(x|z)\exp(-z_{0}-2x_{0})f(x).$
This proves the theorem. ∎
### Acknowledgements
SB is supported by a BOF Postdoctoral Fellowship from Ghent University. HDB is
supported by the Research Foundation Flanders (FWO) under Grant EOS 30889451.
The authors would like to thank the anonymous referees for carefully reading
the paper and for their useful comments.
## References
* [1] Alldridge A., Fréchet globalisations of Harish-Chandra supermodules, Int. Math. Res. Not. 2017 (2017), 5182–5232, arXiv:1403.4055.
* [2] Andrews G.E., Askey R., Roy R., Special functions, Encyclopedia of Mathematics and its Applications, Vol. 71, Cambridge University Press, Cambridge, 1999.
* [3] Barbier S., A minimal representation of the orthosymplectic Lie superalgebra, Ph.D. Thesis, Ghent University, 2018, available at http://hdl.handle.net/1854/LU-8560881.
* [4] Barbier S., Coulembier K., Polynomial realisations of Lie (super)algebras and Bessel operators, Int. Math. Res. Not. 2017 (2017), 3148–3179, arXiv:1512.01387.
* [5] Barbier S., Coulembier K., On structure and TKK algebras for Jordan superalgebras, Comm. Algebra 46 (2018), 684–704, arXiv:1609.00271.
* [6] Barbier S., Frahm J., A minimal representation of the orthosymplectic Lie supergroup, Int. Math. Res. Not., to appear, arXiv:1710.07271.
* [7] Bargmann V., On a Hilbert space of analytic functions and an associated integral transform, Comm. Pure Appl. Math. 14 (1961), 187–214.
* [8] Carmeli C., Cassinelli G., Toigo A., Varadarajan V.S., Unitary representations of super Lie groups and applications to the classification and multiplet structure of super particles, Comm. Math. Phys. 263 (2006), 217–258, arXiv:hep-th/0501061.
* [9] Coulembier K., The orthosymplectic superalgebra in harmonic analysis, J. Lie Theory 23 (2013), 55–83, arXiv:1208.3827.
* [10] Coulembier K., De Bie H., Sommen F., Orthosymplectically invariant functions in superspace, J. Math. Phys. 51 (2010), 083504, 23 pages, arXiv:1006.4744.
* [11] De Bie H., Genest V.X., van de Vijver W., Vinet L., A higher rank Racah algebra and the ${\mathbb{Z}}^{n}_{2}$ Laplace–Dunkl operator, J. Phys. A: Math. Theor. 51 (2018), 025203, 20 pages, arXiv:1610.02638.
* [12] De Bie H., Sommen F., Spherical harmonics and integration in superspace, J. Phys. A: Math. Theor. 40 (2007), 7193–7212, arXiv:0705.3148.
* [13] de Goursac A., Michel J.-P., Superunitary representations of Heisenberg supergroups, Int. Math. Res. Not., to appear, arXiv:1601.07387.
* [14] Folland G.B., Harmonic analysis in phase space, Annals of Mathematics Studies, Vol. 122, Princeton University Press, Princeton, NJ, 1989.
* [15] Gan W.T., Savin G., On minimal representations definitions and properties, Represent. Theory 9 (2005), 46–93.
* [16] Hilgert J., Kobayashi T., Mano G., Möllers J., Special functions associated with a certain fourth-order differential equation, Ramanujan J. 26 (2011), 1–34, arXiv:0907.2608.
* [17] Hilgert J., Kobayashi T., Möllers J., Minimal representations via Bessel operators, J. Math. Soc. Japan 66 (2014), 349–414, arXiv:1106.3621.
* [18] Hilgert J., Kobayashi T., Möllers J., Ørsted B., Fock model and Segal–Bargmann transform for minimal representations of Hermitian Lie groups, J. Funct. Anal. 263 (2012), 3492–3563, arXiv:1203.5462.
* [19] Kirillov A.A., Lectures on the orbit method, Graduate Studies in Mathematics, Vol. 64, Amer. Math. Soc., Providence, RI, 2004.
* [20] Kobayashi T., Mano G., The Schrödinger model for the minimal representation of the indefinite orthogonal group ${\rm O}(p,q)$, Mem. Amer. Math. Soc. 213 (2011), vi+132 pages, arXiv:0712.1769.
* [21] Kobayashi T., Ørsted B., Analysis on the minimal representation of ${\rm O}(p,q)$. I. Realization via conformal geometry, Adv. Math. 180 (2003), 486–512, arXiv:math.RT/0111083.
* [22] Kobayashi T., Ørsted B., Analysis on the minimal representation of ${\rm O}(p,q)$. II. Branching laws, Adv. Math. 180 (2003), 513–550, arXiv:math.RT/0111085.
* [23] Kobayashi T., Ørsted B., Analysis on the minimal representation of ${\rm O}(p,q)$. III. Ultrahyperbolic equations on ${\mathbb{R}}^{p-1,q-1}$, Adv. Math. 180 (2003), 551–595, arXiv:math.RT/0111086.
* [24] Lávička R., Šmíd D., Fischer decomposition for polynomials on superspace, J. Math. Phys. 56 (2015), 111704, 9 pages, arXiv:1508.03426.
* [25] Neeb K.-H., Salmasian H., Lie supergroups, unitary representations, and invariant cones, in Supersymmetry in Mathematics and Physics, Lecture Notes in Math., Vol. 2027, Springer, Heidelberg, 2011, 195–239, arXiv:1012.2809.
* [26] Salmasian H., Unitary representations of nilpotent super Lie groups, Comm. Math. Phys. 297 (2010), 189–227, arXiv:0906.2515.
* [27] Tuynman G.M., The left-regular representation of a super Lie group, J. Lie Theory 29 (2019), 1–78.
(1) Department of Astronomy, University of Geneva, Ch. d'Écogia 16, 1290
Versoix, Switzerland (2) APC, University of Paris, CNRS/IN2P3, CEA/IRFU, 10
rue Alice Domon et Leonie Duquet, Paris, France
# An online data analysis system of INTEGRAL telescope
A. Neronov (1,2), V. Savchenko (1), A. Tramacere (1), M. Meharga (1), C.
Ferrigno (1), S. Paltani (1)
###### Abstract
Context. During more than 17 years of operation in space, the INTEGRAL
telescope has accumulated a large data set that contains records of hard
X-ray and soft $\gamma$-ray astronomical sources. These data can be re-used
in the context of
multi-wavelength or multi-messenger studies of astronomical sources and have
to be preserved on long time scales.
Aims. We present a scientific validation of an interactive online INTEGRAL
data analysis system for multi-wavelength studies of hard X-ray and soft
$\gamma$-ray sources.
Methods. The online data analysis system generates publication-quality high-
level data products: sky images, spectra and light-curves in response to user
queries that define analysis parameters, such as source position, time and
energy interval and binning. The data products can be requested via a web
browser interface or via an Application Programming Interface (API) available
as a Python package. The products for the IBIS/ISGRI instrument of INTEGRAL
are generated using the Offline Science Analysis (OSA) software, which is
provided by the instrument teams and is conventionally used to analyse
INTEGRAL data. The analysis workflow is organized to preserve and re-use
various intermediate analysis products, ensuring that frequently requested
results are available without delay. The platform is implemented in a Docker
cluster, which allows operation of the software in a controlled virtual
environment, and can be deployed in any compatible infrastructure. The
scientific results produced by ODA are identical to those produced by OSA,
since ODA simply provides a platform to retrieve the OSA results online,
while leveraging a provenance-indexed database of pre-computed (cached)
results to optimize the computations and reuse the results.
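As an illustration of such an API query, the following minimal sketch uses
the oda_api Python package; the dispatcher URL and the parameter names and
values are indicative of the package interface at the time of writing (they
are not defined in this text) and should be checked against the current
package documentation.

```python
from oda_api.api import DispatcherAPI

# Connect to the ODA dispatcher service; the URL is indicative and may change.
disp = DispatcherAPI(url="https://www.astro.unige.ch/mmoda/dispatch-data")

# Request an ISGRI sky image of the Crab region in the 20-40 keV band.
# get_product polls the server until the (possibly cached) product is ready.
data = disp.get_product(
    instrument="isgri",
    product="isgri_image",
    product_type="Real",
    osa_version="OSA10.2",
    RA=83.63,                      # J2000 right ascension, degrees (Crab)
    DEC=22.01,                     # J2000 declination, degrees
    radius=8.0,                    # search-cone radius, degrees
    T1="2008-01-01T11:11:11.000",  # start of the time interval (UTC)
    T2="2008-06-01T11:11:11.000",  # end of the time interval (UTC)
    E1_keV=20.0,                   # lower bound of the energy band
    E2_keV=40.0,                   # upper bound of the energy band
    detection_threshold=5.0,       # sigma threshold for the source catalog
)

# Inspect the returned collection of data products (FITS images, catalog).
data.show()
```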
Results. We report the functionalities and performance of the online data
analysis system by reproducing the benchmark INTEGRAL results on different
types of sources, including bright steady and transient Galactic sources, and
bright and weak variable extra-galactic sources. We compare the results
obtained with the online data analysis system with previously published
results on these sources. We also discuss limitations of the online analysis
system.
Conclusions. We consider the INTEGRAL online data analysis as a demonstrator
of a more general web-based “data analysis as a service” approach that provides
a promising solution for preservation and maintenance of data analysis tools
of astronomical telescopes on (multi)decade long time scales and facilitates
combination of data in multi-wavelength and multi-messenger studies of
astronomical sources.
###### Key Words.:
Methods: data analysis – X-rays: general
## 1 Introduction
The INTErnational Gamma-Ray Astrophysics Laboratory (INTEGRAL) (Winkler et al., 2003) is an
ESA space astronomy mission that has collected data in orbit since 2002. It
provides observations of astronomical sources in the keV-MeV energy range.
IBIS, the Imager on Board the INTEGRAL Satellite (Ubertini et al., 2003), is a
coded-aperture instrument that provides fine imaging (12′ FWHM) in a nearly
square, partially coded field-of-view (FoV) of 30${}^{\circ}\times$30∘, source
identification and spectral sensitivity to both continuum and broad lines
between 15 keV and 10 MeV. Its focal plane is composed of two detectors,
optimized for two different energy ranges: ISGRI from 15 to $\sim$1000 keV
(Lebrun et al., 2003) and PICsIT from 400 keV to 10 MeV (Labanti et al.,
2003). In the following, we will discuss only the ISGRI detector layer, and
refer to it as the ISGRI instrument. The Joint European X-Ray Monitor (JEM-X;
Lund et al., 2003) provides images, timing, and spectral information in a
lower energy range (3–25 keV). It has a narrower circular FoV of $\sim
10^{\circ}$ diameter. The spectrometer SPI (Vedrenne, G. et al., 2003)
provides high-resolution spectroscopy data with moderate angular resolution.
The SPI instrument is surrounded by an Anti-Coincidence Shield (ACS) that
reduces the level of background in SPI and simultaneously works as an all-sky
Gamma-Ray Burst (GRB) monitor (Savchenko et al., 2012). The wide FoV of IBIS
inherently includes data on a large number of astronomical sources, which are
not necessarily the targets of specific observational proposals. This means
that all source-specific observational campaigns of INTEGRAL possess a large
potential for serendipitous science. All INTEGRAL data become publicly
available after a one-year proprietary period, so that analysis of all the
sources found in the field of view is possible.
Astronomical sources visible in the X-ray and soft $\gamma$-ray band are
variable on different time scales, from milliseconds to years and decades
(van der Klis, 2006; Beckmann et al., 2007; Mészáros et al., 2019). In this respect, the
17-year-long data-set of INTEGRAL is unique, as it contains the information on
long-term variability patterns of a large number of sources. Some sources
spend long periods in quiescent states producing little emission, and exhibit
occasional outbursts or flares on an irregular basis, with duty cycles spread
over years or decades. The INTEGRAL archive contains data taken nearly
continuously since mid-November 2002, with a duty cycle of nearly 90%; the
instruments are switched off at each perigee passage through the Earth
radiation belts. Until March 2003, the on-board settings were continuously
adjusted to optimize the performance, making the scientific analysis
challenging. The general user is normally advised to use archive data taken
after this date, which provides nearly 50 Ms of usable data in the direction
of the Galactic center, the region with the highest exposure. The archive data
thus contain information on the past history of quiescence and activity of sky
sources ($>1200$ detected by the imager so far), including those which might
undergo outbursts in the future.
The INTEGRAL data and the Offline Science Analysis (OSA) software, which
includes data analysis pipelines for all instruments, including ISGRI
(Goldwurm et al., 2003) and JEM-X (Westergaard et al., 2003), are distributed
by the INTEGRAL Science Data Centre (ISDC; Courvoisier et al., 2003). The main components of
OSA have been developed in the period preceding the INTEGRAL launch and are
maintained by the ISDC. The architecture of OSA was optimized for computing
environments that were common more than 20 years ago. INTEGRAL was initially
planned to operate in space for five years and generate relatively small data
sets. At that time, the only solution for data processing was a local
installation of OSA on the user's computer. This approach does not scale well
to present-day data sets spanning 17 years, whose analysis requires a
significant amount of computing resources. Moreover, the maintenance of legacy
software on evolving operating systems poses more and more challenges.
The development of high-performance computing (HPC) and cloud computing (CC)
technologies and their applications to astronomical data management (Banek et
al., 2019; Smith et al., 2019) over the last decades opens up a new
possibility of deployment of OSA-based data analyses without the need for
local installation of the software, and provides access to a large pool of
computing resources, thus significantly reducing the complexity of the
analysis of INTEGRAL data. Such an online data analysis (ODA) system for the
ISGRI instrument has been recently developed at the Department of astronomy of
the University of Geneva111https://www.astro.unige.ch in synergy with the
INTEGRAL Science Data Centre (ISDC)222https://www.isdc.unige.ch/integral/, and
is maintained jointly with the François Arago Centre (FACe) of Astroparticle
and Cosmology laboratory in Paris333http://www.apc.univ-paris7.fr/FACe/home.
This system entirely relies on the official OSA package, provided and
validated by the instrument teams and integrated and distributed by the ISDC:
all of the results are not only equivalent, they are identical. This design
makes it possible to leverage, in principle, the full power of the OSA
software, preserving and maintaining the complete potential of the INTEGRAL
data for future explorers, without making assumptions about which products are
likely to be useful. While this approach may require larger real-time
computing resources, the platform exploits an innovative, dynamic
provenance-based database of precomputed products to eliminate analysis
duplication and flexibly respond to user requests, as detailed in Section 2.2
and Section 5. This means, in principle, that the ODA validation is in part
redundant, since it is equivalent to the OSA validation. On the other hand,
this paper shows how the selection of particular OSA cookbook threads adopted
in ODA makes it possible to reproduce the published results.
In the following, we describe the system and characterize its performance via
comparison of science results obtained using the system with previously
published benchmark INTEGRAL results on a variety of Galactic and extra-
galactic sources. We also demonstrate the usefulness of the system in the
context of multi-wavelength studies and discuss advantages and limitations of
this online data analysis implementation.
Throughout the paper, we provide direct links to the code producing the
figures in the text, showing the potential of this approach for the open
access, reusability and reproducibility of science results.
## 2 Online data analysis interface
Figure 1: Architecture of the INTEGRAL Online Data Analysis system.
The general scheme of the ODA is shown in Fig. 1. The user has several
possibilities to submit requests for analysis of INTEGRAL data:
* •
through a web browser by accessing the ODA
website444https://www.astro.unige.ch/cdci/astrooda_ on their local computer
and entering the analysis parameters into the parameter boxes, or
* •
directly specifying analysis parameters in a URL link to the ODA website
(examples are given in the next section) or
* •
through an ODA Application Programming Interface (API),
oda_api555https://github.com/oda-hub/oda_api, e.g., from a Jupyter Notebook,
also on their local computer.
The full process can be schematized as follows:
1. 1.
The request can be sent using the front-end or the oda_api client. Both
interfaces verify the syntactical correctness and completeness of user
queries.
2. 2.
Requests arrive at the dispatcher, where they are processed by an internal
abstraction layer, which implements classes (interfaces) for
instrument-specific data products, such as spectra or light curves, and for
post-processing products, such as mosaic images, light-curve plots, and
spectral fits.
3. 3.
Each data product interface communicates with a specific backend, i.e. data
server(s) implemented as Docker666https://hub.docker.com/r/odahub/ddosa-
interface containers running OSA in service mode. The containers are currently
deployed locally on the HPC resources of the University of Geneva, but they
could be deployed on any other HPC or CC services.
4. 4.
The request can be either synchronous or asynchronous. If the request is
asynchronous, the backend continuously reports the process status to the
dispatcher until the product is ready; if it is synchronous, a single report
is provided.
5. 5.
Data products provided by the data server upon analysis requests are stored in
a data repository and are made available to the dispatcher.
6. 6.
In the current version of ODA, the dispatcher also performs post-processing
and visualisation of the data, using specific services, providing high-level
products to be displayed on the front-end.
7. 7.
Final products are available to the user either through the front-end or
through the client API. The front-end displays sky images, spectra, light
curves, and source catalogs in a virtual-desktop environment embedded in the
web browser, providing the possibility to download the data products to the
local user computer.
### 2.1 Front-end interface – Dispatcher – Data Server interactions
Figure 2: Left: General parameter space of ODA front-end. Right: example of
instrument-specific parameter field for ISGRI telescope.
IBIS and JEM-X are coded-mask instruments that rely on a dithering pointing
strategy with individual exposures called Science Windows (ScW) lasting 0.1–1
hour. Reconstruction of sky images creates a catalog of detected sources. This
catalog should be used during the extraction of spectra and light curves of
specific sources. Disentangling of signals from different sources in the field
of view requires a sky model which lists the positions of all possible sources
(see Goldwurm et al., 2003, for details). The default ODA workflow allows the
user to select the data set, obtain a catalog of detected sources by
reconstructing an image, and then manipulate this catalog to extract spectra
and light curves.
The data processing is initiated in response to the user queries defining the
analysis parameters (Fig. 2). These queries include at the very minimum:
* •
Source name or sky coordinates
* •
Time interval for the data or Science Window (ScW) list
The front-end is able to resolve astronomical source names by accessing two of
the main astronomical databases (SIMBAD and NED). It accepts time parameters
in several different conventional formats. The list of ScWs can be specified
as a comma-separated list of their unique identifiers. The ScW data base is
separately accessible through the w3browse interface on the web
pages777https://www.isdc.unige.ch/integral/archive.
Apart from these generic query parameters, the front-end allows the user to
specify parameters that are specific to the INTEGRAL instruments: ISGRI,
JEM-X, SPI-ACS. An example of parameter field for ISGRI is shown in the right
panel of Fig. 2. For ISGRI and JEM-X, it is possible to specify:
* •
one of the two currently used versions of OSA: 10.2 and 11.0;
* •
radius of the “region of interest” within which pointing data are selected
(which depends on the instrument field-of-view);
* •
one of the two units of the JEM-X instrument (JEM-X1 or JEM-X2);
* •
type of the data product (image, spectrum or light curve);
* •
energy range of the image and light curve. It should be noted that the
spectrum is always computed in the full energy range with a predefined
resolution (16 channels for JEM-X, 128 channels for ISGRI);
* •
minimal detection significance of sources in the output catalog from imaging;
* •
time binning for the light curve;
* •
source catalog to be used in spectral and timing analyses.
In a similar way, the parameters can also be specified in API requests
using the oda_api Python package. For example, the OSA version can be
specified by setting the parameter osa_version='OSA10.2' in the API
request888See oda_api documentation at https://oda-api.readthedocs.io/en/latest/ for full details..
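As a minimal illustration (the full request example is given in Sect. 3.1; the
parameter values below are indicative only, reusing names that appear
elsewhere in this paper):

from oda_api.api import DispatcherAPI

# Connect to the ODA dispatcher (host as in the example of Sect. 3.1)
disp = DispatcherAPI(
    host='www.astro.unige.ch/cdci/astrooda/dispatch-data')

# Any of the parameters described above can be passed as keyword arguments
image = disp.get_product(instrument='isgri',
                         product='isgri_image',
                         osa_version='OSA10.2',
                         E1_keV=20.0, E2_keV=40.0,
                         T1='2008-03-15T00:00:00.0',
                         T2='2008-03-16T00:00:00.0',
                         RA=83.633080, DEC=22.014500, radius=15)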
The front-end provides a display of the high-level data products (images,
spectra, light curves, source catalogues) through a virtual desktop
environment. It also provides the possibility of performing post-analysis of
the data, like, e.g., fitting spectra with XSPEC spectral models, using public
domain rendering packages999https://bokeh.pydata.org/en/latest/.
### 2.2 Data analysis and storage organization
The ODA infrastructure uses the online archive of INTEGRAL raw data,
provided by the ISDC101010http://www.isdc.unige.ch/integral/archive. The data
server’s task is to provide high-level data products corresponding to the user
requests received through the dispatcher. Running OSA is time consuming (about
50 CPU-hours for a spectrum in a typical short transient observation, or of
the order of 2000 CPU-hours for an analysis of the historic data of a typical
source), so it is desirable, as far as possible, to keep pre-computed products
for future (re-)use. However, it is not possible to store high-level data
products for all imaginable combinations of user input parameters. In ODA, the
permanent archive of the raw data is complemented by an analysis cache
containing high- and intermediate-level data products that are added or
removed depending on the user demands averaged over certain time periods.
The cache storage is organized according to the data lineage (e.g. Ikeda &
Widom, 2009), which is a specific case of data provenance (Gupta, 2009). The
data lineage metadata comprises the information on the sequence of analysis
steps (the analysis or workflow nodes) undertaken to produce given high- or
intermediate-level data products. The ontology of the workflow nodes
prescribes specific metadata associated with each step, and induces the
collection of metadata of the final product.
The lineage metadata of the cache storage contains all relevant properties of
all stored data products, and only this information. This provides a
possibility to re-use previously produced intermediate-stage data products for
the processing of new data analysis requests by users. New data analysis
workflows typically do not start processing the raw data from scratch.
Instead, they are formed from a combination of parts of already available
workflows derived from specific intermediate or high-level data products
stored in the cache, together with the provenance DAG (Directed Acyclic Graph)
metadata. This approach provides an efficient way to speed-up data processing
following user requests if those are repetitive or recursive or if the
requests are nearly identical to those done by previous users with only
moderate parameter modifications.
Efficient reuse of parts of the OSA-based data analysis workflow is enabled by
the re-organisation of OSA into data analysis units expressed as Python
classes, following the Declarative Data Analysis (DDA) approach inspired by
the principles of functional programming. This development was driven by the
need to efficiently manage the data of INTEGRAL together with the information
on the 17-year history of the telescope operations: by 2018, the raw-data
archive contained about $10^{3}$ different types of data, occupying 20 TB in
some $2\times 10^{4}$ files.
The re-factored OSA implementing the DDA approach (called DDOSA111111See
https://github.com/volodymyrss/dda-ddosa/ for implementation details.) follows
a simple logical scheme suitable for reproducibility of the analysis. Each
analysis unit is a pure function of its input data, meaning that it depends
only on its own explicit input. It transforms the input data into other data
products. Any data product is uniquely identified by the tree of connected
analysis units that were used to produce it or, equivalently, by its DAG
“provenance graph”. In other words, DDOSA uses provenance as a data identifier
(see Savchenko, 2020, for more details).
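As an illustration of the provenance-as-identifier idea, a data product can be
keyed by a hash of its (recursively defined) provenance graph. The following
minimal Python sketch only mimics the DDOSA logic; all names in it are
hypothetical:

import hashlib
import json

def provenance_id(unit_name, input_ids):
    """Identify a data product by the analysis unit that produced it
    and by the identifiers of all of its inputs (its provenance DAG)."""
    graph = {"unit": unit_name, "inputs": sorted(input_ids)}
    digest = hashlib.sha256(json.dumps(graph, sort_keys=True).encode())
    return digest.hexdigest()

# Example: an image is identified by the ScW data and the calibration
# used to produce it, each identified in turn by its own provenance.
scw = provenance_id("ScWData", ["scw:066500220010.001"])
calib = provenance_id("ISGRIEnergyCalibration", [scw])
image = provenance_id("ImageBin", [scw, calib])
print(image[:12])  # a short cache key for this exact analysis chain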
High-level data products derived from very long analysis chains may be
associated with very large provenance graphs. An example of the provenance
graph for a high-level data product, a single-ScW image, is shown in Fig. 3.
Figure 3: Example of high-level provenance graph for a sky image derived from
a single INTEGRAL/ISGRI ScW.
The DAG provenance graph approach for data identification at different
analysis levels is optimal not only for caching frequently re-used
intermediate analysis step results, but also for the parallelization of the
analysis. The DAG structure of DDOSA workflows implies the presence of
different independent branches of analysis that can be naturally executed
independently in a distributed environment. This is taken into account in the
system of job scheduling. For each analysis unit, execution requests originate
either from the users (via the dispatcher) or from other analysis units. The
processing of each request starts with the evaluation of the request,
resulting either in the retrieval of the requested products from the storage
cache or in the delegation of the request to a local or remote executor; this
choice is transparent from the point of view of the requester. A simple
scheduling layer has been implemented following this approach. The advantage
of this scheduler is the straightforward treatment of complex dependencies.
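This cache-or-delegate evaluation can be sketched as follows, reusing the
hypothetical provenance_id helper from the previous sketch (the node and
executor objects are illustrative placeholders, not ODA classes):

def evaluate(node, cache, executor):
    """Resolve an analysis node: compute its provenance key from the keys
    of its recursively evaluated inputs, return the cached product if it
    exists, otherwise delegate the computation and cache the result."""
    input_keys = [evaluate(child, cache, executor)[0]
                  for child in node.inputs]
    key = provenance_id(node.name, input_keys)
    if key not in cache:                  # cache miss: run the analysis unit
        cache[key] = executor.run(node)   # locally or on a remote executor
    return key, cache[key]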
## 3 Benchmark analysis results
The web-based ODA interface retains all the functionalities of OSA and can be
used to obtain publication-quality results of the analysis of INTEGRAL
observations, with no difference from what an experienced user can obtain by
running OSA locally. In this section, we demonstrate the performance of the
ODA based analysis on benchmark cases, by showing that the results obtainable
with ODA are compatible with previously published results or improve on them,
owing to upgraded algorithms and calibration.
### 3.1 Crab pulsar and nebula
The Crab pulsar and Nebula complex is one of the brightest sources on the
X-ray sky (Hester, 2008). Because of this property and its flux stability, it
is often used as a “calibration” source in high-energy astronomy. INTEGRAL
observations of Crab are reported in a number of publications (e.g, Mineo et
al., 2006) and, in particular, in the context of the study of the long-term
variability of the source emission (Wilson-Hodge et al., 2011). We verify the
performance of ODA by reproducing the published results on Crab variability
and extending the evolution study over a 15-year time period. It can be noted
that the magnitude and details of the Crab variability as observed by ISGRI
are not identical to those reported by other instruments. The excess
variability is in part due to the systematic uncertainties in the ISGRI
calibration, and in part due to the difference in the instrument energy
ranges. A detailed evaluation of the OSA reconstruction of ISGRI observations,
as well as a discussion of the ISGRI calibration challenges, is beyond the
scope of this work. Here we demonstrate
that ODA reproduces the best available OSA results. ODA will naturally follow
any upgrades to OSA software and calibration files in the future.
Figure 4: Mosaic image of Crab region extracted from a sample of 50 randomly
selected ScWs, together with the display of the catalog of detected sources.
The result could be re-generated using the URL
https://doi.org/10.5281/zenodo.3634648.
The ODA interface is currently limited to single requests for data analysis
based on no more than 50 science windows (ScWs), to limit the waiting time on
the available resources (see Section 4 for details, future plans, and a work-
around). If the requested time span of the data extends over more than 50
ScWs, a random selection of 50 ScWs within the specified time limits is
performed. This
is the case for the results reported in Figs. 4, 8, and 5. The time interval
of the analysis for this specific ScW subset is 2003-03-15 23:27:40.0 to
2018-03-16 00:03:15.0 UTC, spanning more than 15 years. Pointings within
15 degrees from the Crab position are selected.
Our example Crab image could be accessed or re-generated by directly
re-launching the analysis via a URL https://www.astro.unige.ch/cdci/astrooda_
in which the analysis parameters are specified after the “?” sign and
separated by the “&” sign as, for example:
src_name=Crab&RA=83.633080&DEC=22.014500&radius=15 for the source name and/or
sky coordinates and region of interest specification,
&T1=2003-03-15T23:27:40.0 &T2=2018-03-16T00:03:15.0 &T_format=isot for the
time interval, &E1_keV=20 &E2_keV=40 for the energy interval,
&instrument=isgri &osa_version=OSA10.2 &product_type=isgri_image for the
instrument, software version and data product specification (spaces are added
for readability, but should be removed in the actual query). The parameter
detection_threshold=7.5 will result in display of the sources detected at
significance level higher than $7.5$. The analysis with specified parameters
is launched automatically, as soon as the instrument parameter is defined:
&instrument=isgri. The parameters that are not explicitly specified in the
parameter field of the URL are fixed to their default values.
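Such a URL can also be assembled programmatically; the following minimal
Python sketch uses only the parameters listed above (the query encoding is
standard, and the snippet only illustrates the URL format):

from urllib.parse import urlencode

base = "https://www.astro.unige.ch/cdci/astrooda_"
params = {
    "src_name": "Crab", "RA": 83.633080, "DEC": 22.014500, "radius": 15,
    "T1": "2003-03-15T23:27:40.0", "T2": "2018-03-16T00:03:15.0",
    "T_format": "isot", "E1_keV": 20, "E2_keV": 40,
    "instrument": "isgri", "osa_version": "OSA10.2",
    "product_type": "isgri_image", "detection_threshold": 7.5,
}
# Opening the resulting link in a browser launches the analysis
print(base + "?" + urlencode(params))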
The executable URL with all specified parameters for each data product could
be obtained by pressing the “Share” button displayed in the data product
window on ODA frontend (see Fig. 4).
This example of analysis could also be launched from a Python interface on the
user's computer (e.g., from a shell or a Jupyter notebook) by providing the
parameters to the request:
from oda_api.api import DispatcherAPI

# Connect to the ODA dispatcher
disp = DispatcherAPI(
    host='www.astro.unige.ch/cdci/astrooda/dispatch-data')

# Request a 20-40 keV ISGRI mosaic image of the Crab region
image = disp.get_product(RA=83.633080,
                         DEC=22.014500,
                         radius=15,
                         T1='2003-03-15T23:27:40.0',
                         T2='2018-03-16T00:03:15.0',
                         E1_keV=20.0,
                         E2_keV=40.0,
                         instrument='isgri',
                         product='isgri_image',
                         osa_version='OSA10.2')
The API code for each data product can be obtained directly by pressing the
“API code” button in the product window on the ODA front-end (see Fig. 4).
A crucial part of the imaging analysis is the search for significantly
detected sources both in individual ScWs and in the mosaic image. Setting the
source detection threshold to $7.5\sigma$ (a parameter in the web form of ODA)
results, in our example, in the detection of four sources displayed in the
image in Fig. 4. Details of the source detection are available in the catalog
display accessible through a button from the image display panel, as shown in
Fig. 4. Occasionally, sources may have multiple appearances in the catalog
display, because this table combines several output catalogs of the standard
INTEGRAL analysis, namely results of the search of sources in the mosaic and
in individual ScWs. This might be important, because some flaring sources are
detectable in individual ScWs during short time periods, but are not
detectable in mosaic images with longer exposure times (as is typical for
bursting and transient sources; see, e.g., Mereghetti et al. 2020 for a recent
example). The user is advised to carefully inspect the output catalog from the
imaging step and adjust the source selection for the following spectral and
timing analyses.
Imaging, spectral extraction and timing routines of OSA use catalogues of
sources to match the shadow patterns corresponding to these sources on the
detector plane. The catalog used for imaging, spectral or timing analysis
could be explicitly specified in the “User catalog” parameter window in the
parameter panel. If no user catalog is specified, the default general INTEGRAL
catalog is used. This is advisable for the imaging products, but sub-optimal
for the extraction of spectra and light curves, which rely on fitting a sky
model to the shadowgram. If this sky model is redundant, the fitting becomes
more problematic, resulting in unreliable flux determinations. The user can
edit the catalog entries in the display of the catalog output of the imaging
step. This display also has a “Use catalog” button, which would push the
edited catalog to the “User catalog” to be used at the subsequent stages of
analysis. The catalog can also be defined explicitly in the form of a python
“dictionary” in the URL parameter field. The correctly formatted catalogue
embedded in the URL can be obtained by clicking the “Share” button next to the
displayed data product.
The display of results of spectral analysis for all sources listed in the
catalog (user custom catalog or the output catalog from the imaging step
analysis) provides a possibility to choose a spectral model for fitting the
spectrum121212Based on Xspec package fitting
(https://heasarc.gsfc.nasa.gov/xanadu/xspec/).. The display of the spectrum
together with the fitted model also provides the details of the fit, such as
the model parameter values with their uncertainties. Binning of the spectra is
performed only at the plotting stage; the fit is performed on the spectrum at
full resolution, which can be downloaded from the web interface. We notice
that the 20-30 keV band is affected by the long-term evolution of the ISGRI
response, as the ISGRI energy threshold is gradually increasing with time and
low-energy events are lost. For consistency, only data above 30 keV are thus
automatically fitted in the web interface, but data at lower energy are
available, upon download of the FITS-format spectral
file131313https://www.isdc.unige.ch/integral/download/osa/doc/10.2/osa_um_ibis.pdf.
Fig. 5 shows the 30–100 keV light curve of the source during the 15-year time
span, extracted from the same set of 50 random ScWs and binned into ScW-long
time bins. The figure shows the fit of the light-curve with constant and
linearly changing flux models. There is a noticeable decrease of the source
count rate toward later times, which becomes especially pronounced after MJD
56800 (mid-2014). Superimposed onto this instrumental trend, there is the true
variability of the Crab nebula studied by Wilson-Hodge et al. (2011).
Such a rapid decrease of the count rate is due to the decrease of the
instrument response at low energy, and is not corrected in version 10.2 of
OSA, because the calibration algorithms were not able to follow this rapid
evolution and the calibration files were frozen at that time. The correct
instrument response after MJD 56800 is provided by version 11.0 of OSA with
the corresponding calibration files141414It is foreseen that the OSA11
software release will cover the full mission at the end of 2020.. See Section
4 for details on the
instrumental effects contributing to these results.
Figure 5: Crab long-term (15-year time span) lightcurve extracted from a
sample of 50 randomly selected ScWs in the 30-100 keV band, using OSA10.2. The
lightcurve is generated via the Crab_spectra_lc.ipynb notebook on oda-hub.
Fig. 6 shows a comparison of the long-term variability of the Crab flux in the
30–100 and 100–300 keV ranges over 17 years of INTEGRAL operations with that
measured by the Swift/BAT and Fermi/GBM telescopes (Wilson-Hodge et al.,
2011). To produce this figure, we select random sets of 50 ScWs spanning
one-year time intervals and extract ScW-by-ScW lightcurves by specifying 10 ks
time steps in the ODA parameters for lightcurve time binning (this is longer
than the duration of one ScW). These lightcurves are subsequently averaged
into the time bins used by Wilson-Hodge et al. (2011) for comparison.
For this workflow, we exploited the API access to the ODA platform, as coded
in the Crab_lc_longterm.ipynb Jupyter notebook, which is part of the
https://github.com/oda-hub/oda_api_benchmark GitHub repository. In Sect. 3.4,
we detail how to run online Jupyter notebooks as the one used to produce Fig.
6.
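A condensed sketch of this workflow is shown below; the authoritative version
is the Crab_lc_longterm.ipynb notebook. The light-curve product name
isgri_lc and the omission of the 10 ks time-bin parameter are simplifying
assumptions of this sketch:

from oda_api.api import DispatcherAPI

disp = DispatcherAPI(
    host='www.astro.unige.ch/cdci/astrooda/dispatch-data')

# One request per one-year interval: a 30-100 keV ISGRI lightcurve
# extracted from 50 randomly selected ScWs around the Crab position
lightcurves = []
for year in range(2003, 2019):
    lc = disp.get_product(RA=83.633080, DEC=22.014500, radius=15,
                          T1=f'{year}-01-01T00:00:00.0',
                          T2=f'{year + 1}-01-01T00:00:00.0',
                          E1_keV=30.0, E2_keV=100.0,
                          instrument='isgri', product='isgri_lc',
                          osa_version='OSA10.2')
    lightcurves.append(lc)
# The ScW-by-ScW rates are then averaged into the time bins of
# Wilson-Hodge et al. (2011) for the comparison shown in Fig. 6.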
Figure 6: Evolution of the Crab count rate over 17 years of INTEGRAL
operations in 30–100 keV (top panel), 100–300 keV (bottom panel) energy
ranges, compared to those observed by Swift/BAT (magenta) and Fermi/GBM (green
data points) (Wilson-Hodge et al., 2011). Grey data points show the lightcurve
binned in ScW-by-ScW time bins by the ODA. Blue data points show the
rebinned individual ScW measurements. The reader can launch the Jupyter
notebook Crab_lc_longterm.ipynb used to generate these long-term light curves
via API access to ODA.
From Fig. 6 one can see that the lightcurves extracted using OSA10.2 and
OSA11.0 in their time intervals of validity are compatible with the
lightcurves of Swift/BAT and Fermi/GBM, after they have been normalized to
their average level. The intrinsic variability of the Crab nebula (at the
$\sim 5\%$ level) can be appreciated in the general trend of all instruments,
as done by Wilson-Hodge et al. (2011). There are, however, some residual
differences between the INTEGRAL and Swift/BAT or Fermi/GBM lightcurves, which
can be used to estimate the systematic cross-calibration uncertainty; it
amounts to up to 10% in the 100-300 keV energy range. Count-rate light curves
also have variations due to the evolution of the instrument gain; these are
accounted for by using response files to recover the intrinsic source flux at
the spectral extraction stage.
The ODA interface also allows us to extract images, spectra, and lightcurves
of the JEM-X instrument, using the same set of parameters as for ISGRI. Fig. 7
shows the 3-20 keV image of Crab extracted from 50 randomly selected ScWs with
pointing directions within $5^{\circ}$ from Crab (to ensure that the source is
in the field-of-view of JEM-X).
Figure 7: Image of the sky region around Crab obtained by the JEM-X1
instrument in the energy range 3-20 keV. The image can be regenerated via the
URL https://doi.org/10.5281/zenodo.3832080.
Crab is detected with a significance in excess of $10^{3}\sigma$ in the image.
Two sources are detected with a significance larger than $20\sigma$: Crab and
the X-ray binary 1A 0535+262. We use a catalog containing these two sources
for the spectral extraction.
Fig. 8 shows the combined ISGRI $+$ JEM-X1 unfolded spectra of Crab, extracted
from the 50-ScW data sets on a year-by-year basis from 2003 to 2018. We have
modeled the spectra with a broken power law with two break energies fixed at
20 and 100 keV. The former value is meant to catch possible differences
between the JEM-X1 and ISGRI spectral responses; the latter is a reasonable
approximation for the most probable spectral shape of Crab (Jourdain & Roques,
2009). We applied a 2% systematic uncertainty to all spectral data, and
limited the JEM-X1 data to between 5 and 20 keV and the ISGRI data to between
20 and 300 keV. The number of degrees of freedom is 55 for OSA10.2 (before
2016) and 112 after 2016 (OSA11.0). Fig. 9 shows the year-by-year change of
the Crab spectrum measurements: our results are roughly in line with previous
findings (e.g., Mineo et al., 2006). Formally non-acceptable fits are present
in 2003 and 2005, with an anomalous power-law spectral index $\Gamma_{2}$
(20–100 keV), due to a change of the low threshold in the ISGRI detector,
which is currently taken into account in a sub-optimal way by the calibration
files. Similarly, an anomalously low flux in 2017 for ISGRI is due to
calibration issues currently being investigated by part of our team in a
parallel effort.
Figure 8: Display of the Crab unfolded spectra extracted during each year from
2003 to 2018 from samples of 50 randomly selected ScWs. The fitted model (in
black) is a double broken power law with breaks fixed at 20 and 100 keV. The
analysis results could be re-generated through the oda-hub notebook Fit-Crab-
Spectra.ipynb run after Crab_spectra_lc.ipynb. Figure 9: Best-fit spectral
parameters with uncertainty intervals at the 68% confidence level on a single
parameter, and the reduced $\chi^{2}$ fit statistics of the Crab pulsar and
nebula ISGRI $+$ JEM-X1 spectra averaged on a year-by-year basis; the first
point refers to 2003 and the last one to 2018. The data range is from 3.5 to
20 keV for JEM-X and 20–300 keV for ISGRI. The model is a broken power law
with break energies fixed at 20 and 100 keV; $\Gamma_{1-3}$ are the spectral
indices at increasing energies. We allowed the spectra to assume different
normalizations through the flux parameter, which we report for 5–20 keV
(JEM-X1) and 20-100 keV (ISGRI). The resulting number of degrees of freedom is
55 before 2016 and 112 after 2016.
### 3.2 Extremely bright source: V404 Cygni
Figure 10: Significance map of the V404 Cygni region in the 20-40 keV band.
The image can be re-generated via the URL
https://doi.org/10.5281/zenodo.3634669.
V404 Cygni is a microquasar that underwent a spectacular outburst in 2015,
during which the source flux reached 50 Crab (Sánchez-Fernández et al., 2017;
Rodriguez et al., 2015). Such a high flux might pose challenges for the data
analysis because of saturation or pile-up effects that need to be properly
taken into account.
To validate the performance of ODA for the bright source case, we have
reproduced the results of the analysis of V404 Cyg reported by Rodriguez et
al. (2015).
At the first stage, we assess the overall evolution of the source throughout
the activity period that lasted from 2015-06-20T15:50:00.0 UTC to
2015-06-25T04:05:59.0 UTC, analyzed by Rodriguez et al. (2015). Entering this
time interval into the ODA interface via the URL, we let the system randomly
select 50 ScWs for the analysis, with pointing directions within 10 degrees
from the source direction, and produce a mosaic image shown in Fig. 10. The
source is the brightest one in the image detected with significance exceeding
1000$\sigma$. The next brightest source in the field is Cyg X-1. A very strong
source produces “ghost” images due to the specifics of the coded-mask imaging
technique. This can be readily seen in the example of V404 Cyg by extracting
all the sources in the image detectable at a significance threshold above
5$\sigma$: this results in a very large number of “ghosts” that appear in the
resulting catalog as “NEW” sources.
Figure 11: JEM-X1 (top) and ISGRI (bottom) lightcurves of V404 Cygni during
the 2015 flaring period (red data points). Black lines are from Rodriguez et
al. (2015).
Fig. 11 shows the JEM-X1 and ISGRI lightcurves of V 404 Cyg extracted for this
period. Given the problem of detection of “ghost” sources around a very bright
source, it is important to correctly define an input source catalog for the
light curve production. Otherwise, the catalog produced at the imaging step of
the analysis would include all the “ghost” sources in the procedure of fitting
the detector image, leading either to wrong flux calculations or to larger
error estimates compared to flux measurements including only the real sources.
From Fig. 11, one can notice that the source underwent several short time
flares. The overall evolution of the source flux inferred from ODA is
practically indistinguishable from that reported by Rodriguez et al. (2015).
This is clear from the direct comparison of the two results, shown in Fig. 11,
where black shows the lightcurve of Rodriguez et al. (2015) and red shows the
result extracted using ODA.
### 3.3 Crowded field: GX 5$-$1
Figure 12: Zoom of the significance map of the sky region around GX 5$-$1 in
the 20–30 keV energy band obtained from a selection of 44 science windows
belonging to the GPS and GCDE programs carried out in 2003 with a pointing
offset of less than three degrees from GX 5$-$1. This image can be obtained
from the following URL.
To verify the performance of ODA in a crowded field, we consider the example
of GX 5$-$1, a persistent, bright low-mass X-ray binary with a neutron star.
It is located in the inner Galaxy region, in a “crowded” field with many
bright sources. In such a situation, OSA has to take into account all the
bright sources simultaneously while modeling the superposition of the shadows
of different sources on the detector. If this is not done properly, the signal
of the source of interest can be contaminated by the overlapping shadows of
unaccounted sources. Moreover, if bright sources are not included in the sky
model for the spectral or light curve extraction, their photons will be
erroneously assigned to other sources or to the background, hampering a
reliable estimate. However, neglecting weak sources is a minor concern if one
is interested in bright sources, as the contamination can reach at most a few
percent of the contaminant’s flux in the worst cases.
To provide a direct comparison with the published results by Paizis et al.
(2005), we reproduced their selection of 44 science windows, when the source
was observed by the “GPS” and “GCDE” programs in 2003 with a maximum offset of
three degrees. We first made an image in the 20–30 keV band (Fig. 12) to
select sources with a minimum significance of 7 for the subsequent spectral
extraction. This resulted in a catalog of 25 sources. It should be noted that,
owing to the OSA construction, some sources might appear multiple times in the
catalog, so it is necessary to remove duplicates from the catalog used as
input in the following steps.
We performed the spectral extraction of both JEM-X2 and ISGRI data using the
Python API with the same selection of science windows. The dead-time and
vignetting corrected exposures for JEM-X2 and ISGRI are 40.6 and 57.2 ks,
respectively. We added a 1% flux systematic uncertainty to ISGRI and 3% to
JEM-X2; we ignored JEM-X2 data below 3 and above 20 keV, and ISGRI data below
20 and above 70 keV, owing to the particular spectral shape. We modelled the
joint spectrum in Xspec with the same spectral models as Paizis et al. (2005):
the “western” model, made of compTT+bbodyrad, and the “eastern” model, made of
compBB+diskbb. We also introduced a cross-normalization factor, fixed to one
for ISGRI, to account for the non-simultaneity of some data.
We determined the 1$\sigma$ confidence ranges of the parameters using a Markov
Chain Monte Carlo with the Goodman-Weare algorithm (Goodman & Weare, 2010), as
implemented in Xspec v. 12.11.0, and taking the 16, 50, and 84% percentiles of
the posteriors as lower, central, and upper values. For the chain, we used 40
walkers, a length of 26 000, and a burn-in phase length of 6000. Results are
presented in Table 1: both models represent the spectra well. However, the
evolution of the instrument calibration and analysis algorithms has led to
significant differences in the parameter values between the 2005 work, made
with OSA v4.2, and ours, performed with OSA v10.2. This is inherent to OSA and
not directly related to the interface used to extract these spectra. A Python
notebook with the full workflow is available at this URL.
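The same affine-invariant sampling and percentile extraction can be
illustrated outside Xspec with the emcee package, which implements the
Goodman-Weare algorithm; the Gaussian log-probability below is only a
placeholder for the actual spectral fit statistic:

import numpy as np
import emcee

def log_prob(theta):
    # Placeholder posterior: in the real analysis this would be the
    # Xspec fit statistic of the compTT+bbodyrad or compBB+diskbb model
    return -0.5 * np.sum(theta ** 2)

ndim, nwalkers = 6, 40                      # free parameters, walkers
p0 = 0.1 * np.random.randn(nwalkers, ndim)  # initial walker positions
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, 26000)                 # chain length as in the text

# Discard the 6000-step burn-in and take the 16/50/84% percentiles
chain = sampler.get_chain(discard=6000, flat=True)
lo, med, hi = np.percentile(chain, [16, 50, 84], axis=0)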
Table 1: Best-fit parameters of spectral modelling of GX 5$-$1 in analogy with
Paizis et al. (2005).
Western model
---
$kT_{\mathrm{BB}}$ | 2.05 | $\pm$ 0.10 | keV
normBB | 26 | ${}_{-8}^{+12}$ | $(\mathrm{km/d_{10}})^{2}$
$T_{0}$ | 1.06 | $\pm$ 0.03 | keV
$kT_{e}$ | 5.4 | $\pm$ 0.5 | keV
$\tau$ | 1.5 | ${}_{-0.2}^{+0.3}$ |
norm | 1.1 | $\pm$ 0.1 |
factor | 0.76 | $\pm$ 0.03 |
$\chi^{2}$/d.o.f. | 18 | /19 |
Eastern model
$T_{\mathrm{in}}$ | 1.99 | $\pm 0.04$ | keV
normdisk | 131 | ${}_{-10}^{+11}$ | $(\mathrm{km/d_{10}})^{2}$
$kT$ | 3.0 | ${}_{-0.1}^{+0.2}$ | keV
$\tau$ | 0.27 | ${}_{-0.12}^{+0.08}$ |
norm | 4.1 | ${}_{-1.5}^{+2.0}$ |
factor | 0.74 | ${}_{-0.04}^{+0.03}$ |
$\chi^{2}$/d.o.f. | 19 | /20 |
Flux (5–20 keV)$^{a}$ | 1.28 | $\pm 0.02\times 10^{-8}$ | $\mathrm{erg\,s^{-1}\,cm^{-2}}$
Flux (20–100 keV)$^{a}$ | 3.97 | $\pm 0.06\times 10^{-10}$ | $\mathrm{erg\,s^{-1}\,cm^{-2}}$
$^{a}$ Fluxes from both models are compatible within the uncertainties, so we report them only once.
Figure 13: Time-averaged spectrum of GX 5–1 extracted from the same 44 science
windows as Paizis et al. (2005), collected in 2003 with a maximum pointing
offset of $3^{\circ}$ from the source. Red stars and black circles represent
JEM-X2 and ISGRI data, respectively.
### 3.4 3C 279
In spite of its large bolometric luminosity and powerful jet (Hayashida et
al., 2015), the active galactic nucleus (AGN) 3C 279 is a weak source for
INTEGRAL. It has a hard spectrum in the hard X-ray energy band. Its flux is at
the level of the sensitivity limit of ISGRI, and its detectability depends on
the source state. One of the flaring episodes occurred in 2015, and INTEGRAL
observations of 3C 279 during this episode were obtained as a Target-of-
Opportunity (TOO) campaign. The results of the data analysis for this TOO are
described by Bottacini et al. (2016).
Following the same approach as for the other sources, we first performed an
exploration of the source behavior throughout the 15-year time span of the
data. However, this source would likely be only marginally detected in any
50-ScW exposure, and no assessment of the variability pattern is possible in
this way.
Instead, the full dataset has to be explored to find the periods during which
the source is detected. We use the API access to ODA to work with datasets
longer than 50 ScWs in the following way. As for the Crab lightcurve
case in Sect. 3.1, the Jupyter notebooks for the 3C 279 analysis can be
launched from the oda-hub/oda_api_benchmark
GitHub repository, which is integrated with the Binder interactive notebook
service161616https://mybinder.org. Launching Binder using the “launch binder”
button in the oda-hub/oda_api_benchmark repository and choosing the notebooks
for the 3C 279 lightcurve and spectra found in the examples makes it possible
to generate the results described below online.
At the first stage of the analysis, we determine the bright sources in the
source field. We generate a mosaic image of the field and use the output
catalog of the mosaic analysis, explicitly adding 3C 279 to the catalog, as an
input for the timing and spectral analysis. Using the resulting source
catalog, we process all sets of 50 ScWs in sequence to obtain a long-term
lightcurve of
the source, shown in Fig. 14. From this figure one can see that the source is
systematically detected throughout the entire 15-year time span. It shows
moderate (if any) variability from year to year.
Figure 14: Lightcurve of 3C 279 over a 15-yr time span. Grey vertical lines
show the exposure periods of the source. The notebook 3C279_lc.ipynb for the
calculation of the lightcurve can be launched using this URL.
The 2015 flare of the source reported by Bottacini et al. (2016) is
identifiable as the highest flux point in the lightcurve in Fig. 14. A more
detailed view of the lightcurve for the flaring episode discussed by Bottacini
et al. (2016) is shown in Fig. 15, where we plot (red points) the ODA API
lightcurve, extracted for the full range of the flaring period investigated in
Bottacini et al. (2016), together with the ISGRI light curve reported in Fig.
1 of Bottacini et al. (2016). The average flux of the Bottacini et al. (2016)
lightcurve is compatible with our bin overlapping the same time span. The
average count rate is at the level of 1 ct/s, which agrees with the published
value. This lightcurve can be re-generated using the same lightcurve
extraction notebook as for the long-term lightcurve of the source, changing
the time interval to focus on the flaring period, July 2015, and adjusting the
energy range. This shows how the notebook, available for on-the-fly
re-deployment via the oda-hub/oda_api_benchmark web page, can be re-used for
the refinement of the analysis or adapted to different energy ranges or
different sources.
Fig. 16 shows a comparison of the time-averaged spectrum of the source with
the flaring-state spectrum. We have used the same 20-100 keV spectral
extraction range and the same spectral model (the flux-pegged power-law model
pegpwrlw171717Based on Xspec package fitting
(https://heasarc.gsfc.nasa.gov/xanadu/xspec/manual/node207.html).) as in
Bottacini et al. (2016). The spectral fit of the flaring-state spectrum shown
in Fig. 16 has a very hard slope $\Gamma=0.81_{-0.11}^{+1.71}$ and a flux, in
the 18-55 keV band, of $4_{-3}^{+5}\times
10^{-11}\,\mathrm{erg\,cm^{-2}s^{-1}}$ (both reported at 90% c.l.), which are
consistent with Bottacini et al. (2016), where the authors report a photon
index of $\Gamma=1.08_{-0.15}^{+1.98}$ and a flux of $8_{-6}^{+10}\times
10^{-11}\,\mathrm{erg\,cm^{-2}s^{-1}}$.
Figure 15: Lightcurve of 3C 279 in the 20-100 keV range for the flare period
reported by Bottacini et al. (2016) (red points), extracted using the ODA API,
and the ISGRI lightcurve reported in Fig. 1 of Bottacini et al. (2016) (blue
points), corresponding to their ISGRI data time span. The notebook
3C279_lc_flare.ipynb for re-generation of the result can be executed online at
this URL, and is re-executable via the Binder integration of the oda-hub
project.
Figure 16: Comparison of time-averaged spectrum of 3C 279 (black) with the
spectrum of the flaring state observed during the TOO period reported by
Bottacini et al. (2016) in blue; the upper limit is at 3$\sigma$ confidence
level. The respective models are represented as solid lines. The notebook
3C279_spectrum.ipynb for the calculation of the spectra can be launched using
this URL.
### 3.5 NGC 2110
NGC 2110 is an example of a moderately bright Seyfert galaxy, i.e. a
representative of the typical hard-X-ray bright and persistent AGN (Peterson,
1997). AGN of this type are the most abundant extragalactic sources observed
by INTEGRAL.
Fig. 17 shows the significance image of the source region extracted from a set
of 50 random ScWs spread over 15 years of INTEGRAL operations. One can see
that NGC 2110 is the brightest source in the field. Its detection significance
reaches $\simeq 17\sigma$ in the exposure $T\simeq 90$ ks. The dimmer source H
0614+091 is detected at a significance level of $10\sigma$, and there is no
other source in the field detected with a significance exceeding $5\sigma$.
Figure 17: 20-40 keV image of the NGC 2110 region extracted from a set of 50
randomly selected ScWs. The image could be regenerated via the URL
https://doi.org/10.5281/zenodo.3634584.
Fig. 18 shows the long-term lightcurve of the source extracted from the
sequence of 50-ScW datasets pointed within 10 degrees from the source, using
the same notebook as for the 3C 279 analysis, part of the
oda-hub/oda_api_benchmark repository. The statistical uncertainty of the flux
measurement in individual ScWs is too large, and it is not possible to follow
the long-term evolution of the source with ScW-by-ScW binning. Taking this
into account, we have rebinned the lightcurve into wider time bins. Several
variability episodes are clearly identifiable with such binning in the
long-term lightcurve.
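The rebinning itself is a standard inverse-variance weighted average of the
ScW-level rates; a minimal numpy sketch follows (array names are illustrative,
and every bin is assumed to contain at least one measurement):

import numpy as np

def rebin_lightcurve(t, rate, rate_err, bin_edges):
    """Inverse-variance weighted rebinning of a ScW-by-ScW lightcurve
    into the wider time bins defined by bin_edges."""
    t, rate, rate_err = map(np.asarray, (t, rate, rate_err))
    rates, errs = [], []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        m = (t >= lo) & (t < hi)
        w = 1.0 / rate_err[m] ** 2           # inverse-variance weights
        rates.append(np.sum(w * rate[m]) / np.sum(w))
        errs.append(np.sqrt(1.0 / np.sum(w)))
    return np.array(rates), np.array(errs)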
Figure 18: Rebinned lightcurve of NGC 2110 extracted from the sequence of 50
ScW-long intervals using the notebook lc_longterm.ipynb. The result could be
re-generated via notebook deployment on Binder at this URL.
We used the same approach as for 3C 279 for the extraction of the source
spectrum with a long exposure time. We extracted the spectrum of NGC 2110 by
stacking the spectra in sequences of 50-ScW-long exposures and calculating
their weighted average, using the same notebook as for the extraction of the
3C 279 spectrum, available and executable at the oda-hub/oda_api_benchmark
project page. The resulting source spectrum, with a total exposure of 2.4 Ms,
is shown
in Fig. 19. The figure shows two separate spectra for the periods of
applicability of OSA 10.2 and OSA 11.0. The physical origin of the hard X-ray
emission from Seyfert galaxies is thought to be the inverse Compton scattering
of emission from an accretion disk in a hot corona characterized by a certain
temperature (Lubiński et al., 2016). The suppression of the emission above 100
keV expected in this model is seen in the longer (1.8 Ms) exposure spectrum
for the OSA 10.2 applicability period. However, the exposure of the OSA 11.0
period (0.6 Ms) is not sufficiently long to constrain a high-energy cut-off in
the spectrum. In order to compare our results with those reported by Lubiński
et al. (2016), we have extracted ScWs for the same time span and angular
extraction cone radius. We obtain a final exposure of $\simeq 1714$ ks. We
stress that full reproduction of the analysis reported in Lubiński et al.
(2016) is beyond the scope of our work, because we are not using soft X-ray
data to constrain the parameters of the COMPPS Xspec model, used to describe
inverse Compton emission. In our analysis we have used the same model as in
Lubiński et al. (2016), and we have fixed all the parameters to those reported
in their analysis, except for the electron temperature $kT_{e}$, the
$y$-Compton parameter, and the normalization. In detail, we have fixed the
seed photon temperature $T_{bb}$ to a value of 10 eV, the amount of reflection $R$
to 0.63, and the geometry to the spherical case. We have estimated the $90\%$
confidence range of the parameters using the Xspec (v. 12.11.0) MCMC
implementation, based on the Goodman-Weare algorithm (Goodman & Weare, 2010),
with 20 walkers and a Cauchy proposal distribution, running a 10000-step chain
with a burn-in phase of 3000 steps. We have run the chain with an initial state starting
from the Xspec best fit solution. We report the 0.05, 0.5, and 0.95 quantiles
of the posterior distribution as lower, central, and upper values. We find a
temperature $kT_{e}=3.5^{+1.3}_{-2.4}\times 10^{2}$ keV, a normalization
constant $N=2.6^{+1.2}_{-1.0}\times 10^{8}$, and a value of the $y$-Compton
parameter of $y=1.10^{+0.19}_{-0.25}$, compared to the values reported in
Lubiński et al. (2016), of $kT_{e}=2.3\pm 0.5\times 10^{2}$ keV,
$N=1.8^{+0.4}_{-0.2}\times 10^{8}$, and $y=0.94^{+0.10}_{-0.09}$,
respectively. The spectrum and the frequentist best-fit model are reported in
the top panel of Fig. 19. As a further benchmark, we have compared the results
from ODA with those published by Ursini et al. (2019). In this case, the
authors have distributed online the INTEGRAL/ISGRI spectra, extracted with
OSA, that were used for their publication. We have extracted the
INTEGRAL/ISGRI spectra with ODA for
the same time span and spectral window as in Ursini et al. (2019). The results
are shown in the lower panel of Fig. 19.
Figure 19: Top: Spectrum of NGC 2110 extracted from the stacking of spectra
obtained for 50 ScW sets in the time periods used in (Lubiński et al., 2016),
using OSA 10.2. Bottom: Comparison of the spectra of NGC 2110 extracted from
the stacking of spectra obtained for 50 ScW sets in the time periods used in
(Ursini et al., 2019) and the spectra published in (Ursini et al., 2019). The
spectra can be re-generated online using the notebooks spectrum_longterm.ipynb
and spectrum_longterm_Ursini19.ipynb.
## 4 Analysis limitations
In the current ODA implementation, the dispatcher defines which scientific
analysis workflows can be executed through the frontend, and enables two
versions of INTEGRAL analysis software: standard OSA10.2 and OSA11.0. OSA10.2
can only be used for the analysis of the data taken before 2016. OSA11.0 can
only be used for the ISGRI data since 2016 (at the time of writing), while it
can be used for JEM-X during the full mission lifetime. This will change as
soon as updated ISGRI calibration files are made available. It should be
remarked that new releases of OSA will also be made available through ODA as
they are introduced.
The maximum number of ScWs allowed in a single request is set to 50
(corresponding to up to 50 CPU-hours of computing), to reduce the load on the
system from overly long jobs. This might change if the back-end is deployed in
a different location or if more computing resources are made available. Larger
requests will also be available for authenticated users. As of now, it is
possible to overcome this limitation by looping over successive sets of 50
ScWs, as demonstrated in the analysis examples in this paper.
The INTEGRAL data analysis within the ODA platform entirely relies on the
scientific data analysis with OSA, modifying only the mechanism with which
individual components of OSA are invoked (principally to allow for the
preservation and re-use of intermediate analysis results, as explained above).
This means that the INTEGRAL analysis within ODA shares the limitations of
OSA, and any future changes in OSA will be available in ODA.
A detailed scientific validation of the long-term evolution of the INTEGRAL
instruments and of the current status of the OSA data analysis software goes
beyond the scope of this paper.
## 5 Comparison with other Online Analysis Platforms
Several other online data analysis platforms offer similar services, some of
them for INTEGRAL instruments.
HEASARC Hera exposes much of the HEASoft capabilities (with no relevant
support for INTEGRAL) as a web service. This original and promising
development is limited by the available service resources. Hera was, in part,
an inspiration for the general scheme of the INTEGRAL ODA back-end.
Swift/XRT online data analysis181818https://www.swift.ac.uk/user_objects/ is a
particularly successful example of an online analysis and has encouraged the
adoption of this strategy for other instruments. XMM-Newton's
RISA191919https://www.cosmos.esa.int/web/xmm-newton/xsa is conceptually
similar to the Swift/XRT online analysis but is, arguably, less well known,
perhaps due to the different strategy of observation scheduling in XMM-Newton:
while Swift favors a large number of short public observations with quick
impact, XMM-Newton observations are often part of larger campaigns and private
observations, favoring traditional offline analysis.
Both the Swift/XRT and XMM-Newton telescopes feature focusing mirrors, and
their data analysis workflows require much smaller computing resources than
INTEGRAL. Coded-mask telescopes feature a much larger FoV than the focusing
ones, which in turn results in a larger number of sources to be considered by
the decoding process. In addition, their PSF is highly non-trivial and
generally spread over the entire FoV. The combination of the typically long
observations and the dependency on a model with a potentially large number of
sources means that online analysis of coded-mask telescopes is
resource-hungry, and has to adopt a very different approach to handling data
analysis workflows and pre-computed data.
INTEGRAL SPIDAI provides online data analysis for INTEGRAL/SPI. SPI data
analysis requires considerable re-analysis for each new request. This is in
principle similar to the ODA ISGRI and JEM-X workflows, but without the
comparably extensive re-use of pre-computed results. SPIDAI can avoid this
additional complexity since only a relatively small number of sources are
sufficiently bright to be observed by SPI.
HEAVENS202020http://isdc.unige.ch/heavens/ follows a very different approach
from that of INTEGRAL ODA, SPIDAI, Swift/XRT, RISA, or Hera, and pre-computes
the raw data using undisclosed customized procedures, creating a space-energy-
time map of the celestial flux with a fixed, pre-defined resolution in each
dimension. This considerably speeds up certain kinds of analysis, but also
implies that the scientific meaning of the HEAVENS output may be very
different from that provided by the expert-validated OSA. Furthermore, a
particular pre-defined selection of space-time-energy resolutions restricts
certain kinds of analysis (e.g., light curves with time bins shorter than
about 3000 seconds, the INTEGRAL pointing duration). Moreover, the
pre-computing of the results is costly (both in computing time and in human
effort) and does not necessarily privilege popular results, since it is not
done in response to the user needs; as a result, the pre-computed HEAVENS
results are available with a large delay. Finally, HEAVENS imposes additional
severe restrictions on the output results, for example, on the size of the
requested image.
On the contrary, the ODA INTEGRAL analysis runs on-demand OSA analysis based
exclusively on publicly available tools. Any improvements in OSA, officially
validated by the instrument experts and the data center, are immediately
adopted by ODA (by providing an additional OSA version in the parameter
selection).
Conversely, ODA operations expose some of the technical issues of OSA. While
in the traditional offline analysis approach these issues would plague users
with poorly understood error messages, an equivalent message in ODA-wrapped
OSA can be directed straight to the OSA software developers, and the patch
will be made available in the next OSA release (with a corresponding update to
ODA).
INTEGRAL ODA provides a much larger set of results than that available in pre-
computed databases (such as HEAVENS). For example, it is possible to produce
light-curves with small (down to 10 seconds) time bins and high-resolution
spectra. More advanced users may upload custom source lists to use as the sky
model, providing the most reliable INTEGRAL results, as recommended by the OSA
cookbook. Pre-computing the results with this granularity would not be
feasible in platforms like HEAVENS.
The ODA platform is built as a cloud-native solution, designed to be adaptable
to any modern computing environment. It is already open-source, and can be
readily extended via the addition of new components. The platform consists of
a range of independent components following common standards of communication
and interoperability. This means that the platform may grow beyond a single
deployment; for example, it can be deployed as a part of a federated
infrastructure such as ESA DataLabs, https://datalabs.esa.int.
The ODA platform is strongly committed to interoperability and integration
with the European Open Science Cloud (EOSC), https://www.eosc-portal.eu, and
has been validated as an EOSC service212121https://marketplace.eosc-
portal.eu/services/astronomical-online-data-analysis-astrooda/.
## 6 Conclusions
We have presented a new approach to INTEGRAL data analysis which uses cloud
computing technology to enable the deployment of data analysis workflows for
imaging, spectral, and timing analysis online, through a web interface in a
web browser (https://www.astro.unige.ch/cdci/astrooda_) or through a dedicated
API that can be used in Python notebooks, executable locally or remotely from
the oda-hub project GitHub repository. Such an approach provides an important
boost for reproducibility of results extracted from INTEGRAL data and
possibilities of sharing and re-use of data analysis methods. Virtualisation
of the data analysis system also provides a viable solution for the long-term
preservation of the data and analysis results. This paper demonstrates how
reusable astronomical data analysis workflows can be shared and embedded in
publications.
The performance tests presented in this paper validate ODA for use in
scientific publications making use of INTEGRAL data. ODA results are identical
to those obtained with OSA, with parameter choices following the standard
recommendations of the OSA handbooks.
INTEGRAL ODA provides on-demand analysis of any INTEGRAL ISGRI and JEM-X data,
leveraging an almost complete set of OSA capabilities, yielding the results
identical to the expert-validated publicly released OSA. It is especially
useful in the context of multi-wavelength and multi-messenger studies of
variable astronomical sources (Mészáros et al., 2019), because it provides
ready-to-use and flexible data products: images, spectra and light curves which
can be adjusted to the specific details of source variability and to the
observation periods of other instruments. It can also be used to explore the long-term behaviour of
multi-messenger sources prior to their activity periods.
## References
* Banek et al. (2019) Banek, C., Thornton, A., Economou, F., et al. 2019, arXiv e-prints, arXiv:1911.06404
* Beckmann et al. (2007) Beckmann, V., Barthelmy, S. D., Courvoisier, T. J. L., et al. 2007, A&A, 475, 827
* Bottacini et al. (2016) Bottacini, E., Böttcher, M., Pian, E., & Collmar, W. 2016, ApJ, 832, 17
* Courvoisier et al. (2003) Courvoisier, T. J.-L., Walter, R., Beckmann, V., et al. 2003, A&A, 411, L53
* Goldwurm et al. (2003) Goldwurm, A., David, P., Foschini, L., et al. 2003, A&A, 411, L223
* Goodman & Weare (2010) Goodman, J. & Weare, J. 2010, Communications in Applied Mathematics and Computational Science, 5, 65
* Gupta (2009) Gupta, A. 2009, Data Provenance (Boston, MA: Springer US), 608
* Hayashida et al. (2015) Hayashida, M., Nalewajko, K., Madejski, G. M., et al. 2015, The Astrophysical Journal, 807, 79
* Hester (2008) Hester, J. J. 2008, ARA&A, 46, 127
* Ikeda & Widom (2009) Ikeda, R. & Widom, J. 2009, Data Lineage: A Survey, Technical report, Stanford University
* Jourdain & Roques (2009) Jourdain, E. & Roques, J. P. 2009, ApJ, 704, 17
* Labanti et al. (2003) Labanti, C., Di Cocco, G., Ferro, G., et al. 2003, A&A, 411, L149
* Lebrun et al. (2003) Lebrun, F., Leray, J. P., Lavocat, P., et al. 2003, A&A, 411, L141
* Lubiński et al. (2016) Lubiński, P., Beckmann, V., Gibaud, L., et al. 2016, MNRAS, 458, 2454
* Lund et al. (2003) Lund, N., Budtz-Jargensen, C., Westergaard, N. J., et al. 2003, A&A, 411, L231
* Mereghetti et al. (2020) Mereghetti, S., Savchenko, V., Ferrigno, C., et al. 2020, ApJ, 898, L29
* Mészáros et al. (2019) Mészáros, P., Fox, D. B., Hanna, C., & Murase, K. 2019, Nature Reviews Physics, 1, 585
* Mineo et al. (2006) Mineo, T., Ferrigno, C., Foschini, L., et al. 2006, A&A, 450, 617
* Paizis et al. (2005) Paizis, A., Ebisawa, K., Tikkanen, T., et al. 2005, A&A, 443, 599
* Peterson (1997) Peterson, B. M. 1997, An Introduction to Active Galactic Nuclei (Cambridge University Press)
* Rodriguez et al. (2015) Rodriguez, J., Cadolle Bel, M., Alfonso-Garzón, J., et al. 2015, A&A, 581, L9
* Sánchez-Fernández et al. (2017) Sánchez-Fernández, C., Kajava, J. J. E., Motta, S. E., & Kuulkers, E. 2017, A&A, 602, A40
* Savchenko (2020) Savchenko, V. 2020, PoS, Asterics2019, 072
* Savchenko et al. (2012) Savchenko, V., Neronov, A., & Courvoisier, T. J.-L. 2012, A&A, 541, A122
* Smith et al. (2019) Smith, A., Pike, R., O’Mullane, W., et al. 2019, in BAAS, Vol. 51, 55
* Ubertini et al. (2003) Ubertini, P., Lebrun, F., Di Cocco, G., et al. 2003, A&A, 411, L131
* Ursini et al. (2019) Ursini, F., Bassani, L., Malizia, A., et al. 2019, A&A, 629, A54
* van der Klis (2006) van der Klis, M. 2006, Rapid X-ray variability, ed. W. Lewin & M. van der Klis, Cambridge Astrophysics (Cambridge University Press), 39–112
* Vedrenne, G. et al. (2003) Vedrenne, G., Roques, J.-P., Schönfelder, V., et al. 2003, A&A, 411, L63
* Westergaard et al. (2003) Westergaard, N. J., Kretschmar, P., Oxborrow, C. A., et al. 2003, A&A, 411, L257
* Wilson-Hodge et al. (2011) Wilson-Hodge, C. A., Cherry, M. L., Case, G. L., et al. 2011, ApJ, 727, L40
* Winkler et al. (2003) Winkler, C., Courvoisier, T. J.-L., Di Cocco, G., et al. 2003, A&A, 411, L1
|
2024-09-04T02:54:55.620014 | 2020-02-28T18:13:47 | 2002.12903 | {
"authors": "Michael Celentano, Andrea Montanari, Yuchen Wu",
"full_text_license": null,
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"provenance": "arxiv-papers-0000.json.gz:25948",
"submitter": "Michael Celentano",
"url": "https://arxiv.org/abs/2002.12903"
} | arxiv-papers | # The estimation error of general first order methods
Michael Celentano (Department of Statistics, Stanford University), Andrea
Montanari (Department of Statistics and Department of Electrical Engineering,
Stanford University), Yuchen Wu (Department of Statistics, Stanford University)
###### Abstract
Modern large-scale statistical models require estimating thousands to
millions of parameters. This is often accomplished by iterative algorithms
such as gradient descent, projected gradient descent or their accelerated
versions. What are the fundamental limits to these approaches? This question
is well understood from an optimization viewpoint when the underlying
objective is convex. Work in this area characterizes the gap to global
optimality as a function of the number of iterations. However, these results
have only indirect implications in terms of the gap to _statistical_
optimality.
Here we consider two families of high-dimensional estimation problems: high-
dimensional regression and low-rank matrix estimation, and introduce a class
of ‘general first order methods’ that aim at efficiently estimating the
underlying parameters. This class of algorithms is broad enough to include
classical first order optimization (for convex and non-convex objectives), but
also other types of algorithms. Under a random design assumption, we derive
lower bounds on the estimation error that hold in the high-dimensional
asymptotics in which both the number of observations and the number of
parameters diverge. These lower bounds are optimal in the sense that there
exist algorithms whose estimation error matches the lower bounds up to
asymptotically negligible terms. We illustrate our general results through
applications to sparse phase retrieval and sparse principal component
analysis.
## 1 Introduction
High-dimensional statistical estimation problems are often addressed by
constructing a suitable data-dependent cost function
$\mathcal{L}(\boldsymbol{\vartheta})$, which encodes the statistician’s
knowledge of the problem. This cost is then minimized using an algorithm which
scales well to large dimension. The most popular algorithms for high-
dimensional statistical applications are first order methods, i.e., algorithms
that query the cost $\mathcal{L}(\boldsymbol{\vartheta})$ by computing its
gradient (or a subgradient) at a sequence of points
$\boldsymbol{\theta}^{1}$,…$\boldsymbol{\theta}^{t}$. Examples include
(projected) gradient descent, mirror descent, and accelerated gradient
descent.
This raises a fundamental question: _What is the minimal statistical error
achieved by first order methods?_ In particular, we would like to understand
in which cases these methods are significantly sub-optimal (in terms of
estimation) with respect to statistically optimal but potentially intractable
estimators, and what is the optimal tradeoff between number of iterations and
estimation error.
These questions are relatively well understood only from the point of view of
convex optimization, namely if estimation is performed by minimizing a convex
cost function $\mathcal{L}(\boldsymbol{\vartheta})$, see e.g. [CT07, BRT09].
The seminal work of Nemirovsky and Yudin [NY83] characterizes the minimum gap
to global optimality
$\mathcal{L}(\boldsymbol{\theta}^{t})-\min_{\boldsymbol{\vartheta}}\mathcal{L}(\boldsymbol{\vartheta})$,
where $\boldsymbol{\theta}^{t}$ is the algorithm’s output
$\boldsymbol{\theta}^{t}$ after $t$ iterations (i.e., after $t$ gradient
evaluations). For instance, if $\mathcal{L}(\boldsymbol{\theta})$ is a smooth
convex function, there exists a first order algorithm which achieves
$\mathcal{L}(\boldsymbol{\theta}^{t})\leq\min_{\boldsymbol{\vartheta}}\mathcal{L}(\boldsymbol{\vartheta})+O(t^{-2})$.
At the same time, no algorithm can be guaranteed to achieve a better
convergence rate over all functions in this class.
In contrast, if the cost $\mathcal{L}(\boldsymbol{\vartheta})$ is nonconvex,
there cannot be general guarantees of global optimality. Substantial effort
has been devoted to showing that –under suitable assumptions about the data
distribution– certain nonconvex costs $\mathcal{L}(\boldsymbol{\theta})$ can
be minimized efficiently, e.g. by gradient descent [KMO10, LW11, CC15]. This
line of work resulted in upper bounds on the estimation error of first order
methods. Unlike in the convex case, worst case lower bounds are typically
overly pessimistic since non-convex optimization is NP-hard. Our work aims at
developing precise average-case lower bounds for a restricted class of
algorithms, which are applicable both to convex and nonconvex problems.
We are particularly interested in problems that exhibit an information-
computation gap: we know that the optimal statistical estimator has high
accuracy, but existing upper bounds on first order methods are substantially
sub-optimal (see examples below). Is this a limitation of our analysis, of the
specific algorithm under consideration, or of first order algorithms in
general? The main result of this paper is a tight asymptotic characterization
of the minimum estimation error achieved by first order algorithms for two
families of problems. This characterization can be used –in particular– to
delineate information-computation gaps.
Our results are novel even in the case of a convex cost function
$\mathcal{L}(\boldsymbol{\vartheta})$, for two reasons. First, classical
theory [Nes18] lower bounds the objective value
$\mathcal{L}(\boldsymbol{\theta}^{t})-\min_{\boldsymbol{\vartheta}}\mathcal{L}(\boldsymbol{\vartheta})$
after $t$ iterations. This has only indirect implications on estimation error,
e.g., $\|\boldsymbol{\theta}^{t}-\boldsymbol{\theta}_{*}\|_{2}$ (here
$\boldsymbol{\theta}_{*}$ is the true value of the parameters, not the
minimizer of the cost $\mathcal{L}(\boldsymbol{\vartheta})$). Second, the
classical lower bounds on the objective value are worst case with respect to
the function $\mathcal{L}(\boldsymbol{\vartheta})$ and do not take into
account the data distribution.
Concretely, we consider two families of estimation problems:
High-dimensional regression.
Data are i.i.d. pairs $\\{(y_{i},\boldsymbol{x}_{i})\\}_{i\leq n}$, where
$y_{i}\in\mathbb{R}$ is a label and $\boldsymbol{x}_{i}\in\mathbb{R}^{p}$ is a
feature vector. We assume
$\boldsymbol{x}_{i}\sim{\mathsf{N}}(\boldsymbol{0},\boldsymbol{I}_{p}/n)$ and
$y_{i}|\boldsymbol{x}_{i}\sim\mathbb{P}(y_{i}\in\,\cdot\,|\boldsymbol{x}_{i}^{\mathsf{T}}\boldsymbol{\theta})$
for a vector $\boldsymbol{\theta}\in\mathbb{R}^{p}$. Our objective is to
estimate the coefficients $\theta_{j}$ from data
$\boldsymbol{X}\in\mathbb{R}^{n\times p}$ (the matrix whose $i$-th row is
vector $\boldsymbol{x}_{i}$) and $\boldsymbol{y}\in\mathbb{R}^{n}$ (the vector
whose $i$-th entry is label $y_{i}$).
Low-rank matrix estimation.
Data consist of a matrix $\boldsymbol{X}\in\mathbb{R}^{n\times p}$ where
$x_{ij}=\frac{1}{n}\boldsymbol{\lambda}_{i}^{\mathsf{T}}\boldsymbol{\theta}_{j}+z_{ij}$
with $\boldsymbol{\lambda}_{i},\boldsymbol{\theta}_{j}\in\mathbb{R}^{r}$ and
$z_{ij}\stackrel{{\scriptstyle\mathrm{iid}}}{{\sim}}{\mathsf{N}}(0,1/n)$. We
denote by $\boldsymbol{\lambda}\in\mathbb{R}^{n\times r}$ and
$\boldsymbol{\theta}\in\mathbb{R}^{p\times r}$ the matrices whose rows are
$\boldsymbol{\lambda}_{i}^{\mathsf{T}}$ and
$\boldsymbol{\theta}_{j}^{\mathsf{T}}$ respectively. Our objective is to
estimate $\boldsymbol{\lambda},\boldsymbol{\theta}$ from data
$\boldsymbol{X}$.
In order to discuss these two examples in a unified fashion, we will introduce
a dummy vector $\boldsymbol{y}$ (e.g., the all-zeros vector) as part of the
data in the low-rank matrix estimation problem. Let us point out that our
normalizations are somewhat different from, but completely equivalent to, the
traditional ones in statistics.
The first question to address is how to properly define ‘first order methods.’
A moment of thought reveals that the above discussion in terms of a cost
function $\mathcal{L}(\boldsymbol{\theta})$ needs to be revised. Indeed, given
either of the above statistical models, there is no simple way to construct a
‘statistically optimal’ cost function (in particular, maximum likelihood is
not statistically optimal in high dimension [BBEKY13]). Further, it is not
clear that using a faster optimization algorithm for that cost will result in
a faster decrease of the estimation error.
We follow instead a different strategy and introduce the class of _general
first order methods_ (GFOM). In words, these include all algorithms that keep
as state sequences of matrices
$\boldsymbol{u}^{1},\dots,\boldsymbol{u}^{t}\in\mathbb{R}^{n\times r}$, and
$\boldsymbol{v}^{1},\dots,\boldsymbol{v}^{t}\in\mathbb{R}^{p\times r}$, which
are updated by two types of operations: row-wise application of a function, or
multiplication by $\boldsymbol{X}$ or $\boldsymbol{X}^{\top}$. We will then
show that standard first order methods, for common choices of the cost
$\mathcal{L}(\boldsymbol{\theta})$, are in fact special examples of GFOMs.
Formally, a GFOM is defined by sequences of functions
$F^{(1)}_{t},G^{(2)}_{t}:\mathbb{R}^{r(t+1)+1}\to\mathbb{R}^{r}$,
$F^{(2)}_{t},G^{(1)}_{t}:\mathbb{R}^{r(t+1)}\to\mathbb{R}^{r}$, with the $F$’s
indexed by $t\geq 0$ and the $G$’s indexed by $t\geq 1$. In the high-
dimensional regression problem, we set $r=1$. The algorithm produces two
sequences of matrices (vectors for $r=1$) $(\boldsymbol{u}^{t})_{t\geq 1}$,
$\boldsymbol{u}^{t}\in\mathbb{R}^{n\times r}$, and
$(\boldsymbol{v}^{t})_{t\geq 1}$, $\boldsymbol{v}^{t}\in\mathbb{R}^{p\times
r}$,
$\displaystyle\boldsymbol{v}^{t+1}=\boldsymbol{X}^{\top}F^{(1)}_{t}(\boldsymbol{u}^{1},\dots,\boldsymbol{u}^{t};\boldsymbol{y},\boldsymbol{u})+F^{(2)}_{t}(\boldsymbol{v}^{1},\dots,\boldsymbol{v}^{t};\boldsymbol{v})\,,$ (1a)
$\displaystyle\boldsymbol{u}^{t}=\boldsymbol{X}G^{(1)}_{t}(\boldsymbol{v}^{1},\dots,\boldsymbol{v}^{t};\boldsymbol{v})+G^{(2)}_{t}(\boldsymbol{u}^{1},\dots,\boldsymbol{u}^{t-1};\boldsymbol{y},\boldsymbol{u})\,,$ (1b)
where it is understood that each function is applied row-wise. For instance
$\displaystyle F^{(1)}_{t}(\boldsymbol{u}^{1},\dots,\boldsymbol{u}^{t};\boldsymbol{y},\boldsymbol{u})=(F^{(1)}_{t}(\boldsymbol{u}_{i}^{1},\dots,\boldsymbol{u}_{i}^{t};y_{i},\boldsymbol{u}_{i}))_{i\leq n}\in\mathbb{R}^{n\times r}\,,$
where $(\boldsymbol{u}_{i}^{s})^{{\mathsf{T}}}$ is the $i^{\text{th}}$ row of
$\boldsymbol{u}^{s}$. Here $\boldsymbol{u},\boldsymbol{v}$ are either
deterministic or random and independent of everything else. In particular, the
iteration is initialized with
$\boldsymbol{v}^{1}=\boldsymbol{X}^{\mathsf{T}}F_{0}^{(1)}(\boldsymbol{y},\boldsymbol{u})+F_{0}^{(2)}(\boldsymbol{v})$.
The unknown matrices (or vectors) $\boldsymbol{\theta}$ and
$\boldsymbol{\lambda}$ are estimated after $t_{*}$ iterations by
$\hat{\boldsymbol{\theta}}=G_{*}(\boldsymbol{v}^{1},\cdots,\boldsymbol{v}^{t_{*}};\boldsymbol{v})$
and
$\hat{\boldsymbol{\lambda}}=F_{*}(\boldsymbol{u}^{1},\ldots,\boldsymbol{u}^{t_{*}};\boldsymbol{y},\boldsymbol{u})$,
where the latter only applies in the low-rank matrix estimation problem. Let
us point out that the updates also depend on additional information encoded in
the two vectors $\boldsymbol{u}\in\mathbb{R}^{n}$,
$\boldsymbol{v}\in\mathbb{R}^{p}$. This enables us to model side information
provided to the statistician (e.g., an ‘initialization’ correlated with the
true signal) or auxiliary randomness.
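For concreteness, the following is a minimal sketch of the iteration (1a)-(1b) for $r=1$ in Python. The function names, calling conventions, and list-based bookkeeping are ours and purely illustrative; any Lipschitz updates $F^{(1)}_{t},F^{(2)}_{t},G^{(1)}_{t},G^{(2)}_{t}$ and estimator $G_{*}$ can be plugged in.

```python
import numpy as np

def run_gfom(X, y, u, v, F1, F2, G1, G2, G_star, t_star):
    """A sketch of a GFOM run (Eqs. (1a)-(1b)) for r = 1.

    F1(t, us, y, u) -> vector in R^n   (row-wise F_t^{(1)})
    F2(t, vs, v)    -> vector in R^p   (row-wise F_t^{(2)})
    G1(t, vs, v)    -> vector in R^p   (row-wise G_t^{(1)})
    G2(t, us, y, u) -> vector in R^n   (row-wise G_t^{(2)})
    Here us, vs hold the previous iterates u^1..u^t and v^1..v^t.
    """
    us, vs = [], []
    # Initialization: v^1 = X^T F_0^{(1)}(y, u) + F_0^{(2)}(v).
    vs.append(X.T @ F1(0, us, y, u) + F2(0, vs, v))
    for t in range(1, t_star):
        # Eq. (1b): u^t uses v^1..v^t and u^1..u^{t-1}.
        us.append(X @ G1(t, vs, v) + G2(t, us, y, u))
        # Eq. (1a): v^{t+1} uses u^1..u^t and v^1..v^t.
        vs.append(X.T @ F1(t, us, y, u) + F2(t, vs, v))
    # Estimate of theta after t_star iterations.
    return G_star(vs, v)
```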
We study the regime in which $n,p\to\infty$ with $n/p\to\delta\in(0,\infty)$
and $r$ is fixed. We assume the number of iterations $t_{*}$ is fixed, or
potentially $t_{*}\to\infty$ after $n\to\infty$. In other words, we are
interested in linear-time or nearly linear-time algorithms (complexity being
measured relative to the input size $np$). As mentioned above, our main result
is a general lower bound on the minimum estimation error that is achieved by
any GFOM in this regime.
The paper is organized as follows: Section 2 illustrates the setting
introduced above in two examples; Section 3 contains the statement of our
general lower bounds; Section 4 applies these lower bounds to the two
examples; Section 5 presents an outline of the proof, deferring technical
details to appendices.
## 2 Two examples
### Example $\\#1$: M-estimation in high-dimensional regression and phase
retrieval
Consider the high-dimensional regression problem. Regularized M-estimators
minimize a cost
$\displaystyle\mathcal{L}_{n}(\boldsymbol{\vartheta}):=\sum_{i=1}^{n}\ell(y_{i};\langle\boldsymbol{x}_{i},\boldsymbol{\vartheta}\rangle)+\Omega_{n}(\boldsymbol{\vartheta})=\hat{\ell}_{n}(\boldsymbol{y},\boldsymbol{X}\boldsymbol{\vartheta})+\Omega_{n}(\boldsymbol{\vartheta})\,,$
(2)
Here $\ell:\mathbb{R}\times\mathbb{R}\to\mathbb{R}$ is a loss function,
$\hat{\ell}_{n}(\boldsymbol{y},\hat{\boldsymbol{y}}):=\sum_{i=1}^{n}\ell(y_{i},\hat{y}_{i})$
is its empirical average, and $\Omega_{n}:\mathbb{R}^{p}\to\mathbb{R}$ is a
regularizer. It is often the case that $\ell$ is smooth and $\Omega_{n}$ is
separable, i.e.,
$\Omega_{n}(\boldsymbol{\vartheta})=\sum_{i=1}^{p}\Omega_{1}(\vartheta_{i})$.
We will assume this to be the case in our discussion.
The prototypical first order method is proximal gradient [PB13]:
$\displaystyle\boldsymbol{\theta}^{t+1}=\textsf{Prox}_{\gamma_{t}\Omega_{1}}\big{(}\boldsymbol{\theta}^{t}-\gamma_{t}\nabla_{\boldsymbol{\vartheta}}\hat{\ell}_{n}(\boldsymbol{y},\boldsymbol{X}\boldsymbol{\theta}^{t})\big{)}\,,$
$\displaystyle\textsf{Prox}_{\gamma\Omega_{1}}(y):=\arg\min_{\theta\in\mathbb{R}}\left\\{\frac{1}{2}(y-\theta)^{2}+\gamma\Omega_{1}(\theta)\right\\}\,.$
Here $(\gamma_{t})_{t\geq 0}$ is a sequence of step sizes and
$\textsf{Prox}_{\gamma\Omega_{1}}$ acts on a vector coordinate-wise. Notice
that
$\displaystyle\nabla_{\boldsymbol{\vartheta}}\hat{\ell}_{n}(\boldsymbol{y},\boldsymbol{X}\boldsymbol{\theta}^{t})=\boldsymbol{X}^{{\mathsf{T}}}s(\boldsymbol{y},\boldsymbol{X}\boldsymbol{\theta}^{t})\,,\;\;\;\;s(\boldsymbol{y},\hat{\boldsymbol{y}})_{i}\equiv\frac{\partial\ell}{\partial\hat{y}}(y_{i},\hat{y}_{i})\,.$ (3)
Therefore proximal gradient –for the cost function (2)– is an example of a
GFOM. Similarly, mirror descent with a separable Bregman divergence and
accelerated proximal gradient methods are easily shown to fit in the same
framework.
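As a concrete instance, here is a minimal sketch of proximal gradient (ISTA) for the squared loss with $\Omega_{1}(\theta)=\lambda|\theta|$. The squared loss, the constant step size, and the zero initialization are illustrative choices, not prescribed above.

```python
import numpy as np

def soft_threshold(x, gamma):
    # Prox of gamma * |.|: coordinate-wise soft thresholding.
    return np.sign(x) * np.maximum(np.abs(x) - gamma, 0.0)

def ista(X, y, lam, step, n_iter):
    """Proximal gradient for 0.5 * ||y - X theta||^2 + lam * ||theta||_1."""
    theta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        # Gradient of the smooth part in the form X^T s(y, X theta) of
        # Eq. (3); for the squared loss, s(y, yhat) = yhat - y.
        grad = X.T @ (X @ theta - y)
        theta = soft_threshold(theta - step * grad, step * lam)
    return theta
```

Each iteration consists of one multiplication by $\boldsymbol{X}$, one by $\boldsymbol{X}^{{\mathsf{T}}}$, and row-wise maps, so it is indeed a GFOM.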
Among the countless applications of regularized M-estimation, we will focus on
the sparse phase retrieval problem. We want to reconstruct a sparse signal
$\boldsymbol{\theta}\in\mathbb{R}^{p}$ but only have noisy measurements of the
modulus $|\langle\boldsymbol{\theta},\boldsymbol{x}_{i}\rangle|$; that is, we
lose the ‘phase’ of these projections. (We will consider for simplicity the
case of a real-valued signal, but the generalization of our results to the
complex case should be immediate.)
As a concrete model, we will assume that the number of non-zero entries of
$\boldsymbol{\theta}$ is $\|\boldsymbol{\theta}\|_{0}\leq s_{0}$. From an
information-theoretic viewpoint, it is known that $\boldsymbol{\theta}$ can be
reconstructed accurately as soon as the number of measurements satisfies
$n\geq Cs_{0}\log(p/s_{0})$, with $C$ a sufficiently large constant [LV13].
Several groups have investigated practical reconstruction algorithms by
exploiting either semidefinite programming relaxations [LV13] or first order
methods [SR14, CLS15, CLM16]. A standard approach would be to apply a proximal
gradient algorithm to the cost function (2) with
$\Omega_{n}(\boldsymbol{\vartheta})=\lambda\|\boldsymbol{\vartheta}\|_{1}$.
However, all existing global convergence guarantees for these methods require
$n\geq Cs_{0}^{2}\log p$. Is the dependence on $s_{0}^{2}$ due to a
fundamental computational barrier or an artifact of the theoretical analysis?
Recently [Sol19] presented partial evidence towards the possibility of
‘breaking’ this barrier, by proving that a first order method can accurately
reconstruct the signal for $n\geq Cs_{0}\log(p/s_{0})$, if it is initialized
close enough to the true signal $\boldsymbol{\theta}$.
### Example $\\#2$: Sparse PCA
In a simple model for sparse principal component analysis (PCA), we observe a
matrix
$\boldsymbol{X}=\frac{1}{n}\boldsymbol{\lambda}\boldsymbol{\theta}^{{\mathsf{T}}}+\boldsymbol{Z}\in\mathbb{R}^{n\times
p}$, where $\boldsymbol{\lambda}\in\mathbb{R}^{n}$ has entries
$(\lambda_{i})_{i\leq
n}\stackrel{{\scriptstyle\mathrm{iid}}}{{\sim}}{\mathsf{N}}(0,1)$,
$\boldsymbol{\theta}\in\mathbb{R}^{p}$ is a sparse vector with $s_{0}\ll p$
non-zero entries, and $\boldsymbol{Z}$ is a noise matrix with entries
$(z_{ij})_{i\leq n,j\leq
p}\stackrel{{\scriptstyle\mathrm{iid}}}{{\sim}}{\mathsf{N}}(0,1/n)$. Given
data $\boldsymbol{X}$, we would like to reconstruct the signal
$\boldsymbol{\theta}$. From an information-theoretic viewpoint, it is known
that accurate reconstruction of $\boldsymbol{\theta}$ is possible if $n\geq
Cs_{0}\log(p/s_{0})$, with $C$ a sufficiently large constant [AW08].
A number of polynomial time algorithms have been studied, ranging from simple
thresholding algorithms [JL09, DM16] to sophisticated convex relaxations
[AW08, MW15]. Among other approaches, one natural idea is to modify the power
iteration algorithm of standard PCA by computing
$\displaystyle\boldsymbol{\theta}^{t+1}=c_{t}\,\boldsymbol{X}^{{\mathsf{T}}}\boldsymbol{X}\eta(\boldsymbol{\theta}^{t};\gamma_{t})\,.$
(4)
Here $(c_{t})_{t\geq 0}$ is a deterministic normalization, and
$\eta(\;\cdot\;;\gamma)$ is a thresholding function at level $\gamma$, e.g.,
soft thresholding $\eta(x;\gamma)=\operatorname{sign}(x)(|x|-\gamma)_{+}$. It
is immediate to see that this algorithm is a GFOM. More elaborate versions of
non-linear power iteration were developed, for example, by [JNRS10, Ma13], and
are typically equivalent to suitable GFOMs.
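A minimal sketch of the iteration (4) with soft thresholding is given below. The random initialization, the fixed threshold level, and the norm-based normalization (standing in for the deterministic constant $c_{t}$) are illustrative choices of ours.

```python
import numpy as np

def soft_threshold(x, gamma):
    return np.sign(x) * np.maximum(np.abs(x) - gamma, 0.0)

def nonlinear_power_iteration(X, gamma, n_iter, seed=0):
    """Thresholded power iteration, Eq. (4)."""
    rng = np.random.default_rng(seed)
    theta = rng.standard_normal(X.shape[1])
    theta /= np.linalg.norm(theta)
    for _ in range(n_iter):
        theta = X.T @ (X @ soft_threshold(theta, gamma))
        # Normalization plays the role of the constant c_t.
        theta /= np.linalg.norm(theta)
    return theta
```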
Despite these efforts, no algorithm is known to succeed unless $n\geq
Cs_{0}^{2}$. Is this a fundamental barrier or a limitation of present
algorithms or analysis? Evidence towards intractability was provided by [BR13,
BBH18] via reduction from the planted clique problem. Our analysis provides
new evidence towards the same conclusion.
## 3 Main results
In this section we state formally our general results about high-dimensional
regression and low-rank matrix estimation. The next section will apply these
general results to concrete instances. Throughout we make the following
assumptions:
* A1.
The functions
$F^{(1)}_{t},G^{(2)}_{t},F_{*}:\mathbb{R}^{r(t+1)+1}\to\mathbb{R}^{r}$,
$F^{(2)}_{t},G^{(1)}_{t},G_{*}:\mathbb{R}^{r(t+1)}\to\mathbb{R}^{r}$, are
Lipschitz continuous, with the $F$’s indexed by $t\geq 0$ and the $G$’s
indexed by $t\geq 1$.
* A2.
The covariates matrix $\boldsymbol{X}$ (for high-dimensional regression) or
the noise matrix $\boldsymbol{Z}$ (for low-rank estimation) have entries
$x_{ij}\stackrel{{\scriptstyle\mathrm{iid}}}{{\sim}}{\mathsf{N}}(0,1/n)$,
$z_{ij}\stackrel{{\scriptstyle\mathrm{iid}}}{{\sim}}{\mathsf{N}}(0,1/n)$.
Also, we denote by $\mathscrsfs{P}_{q}(\mathbb{R}^{k})$ the set of probability
distributions with finite $q$-th moment on $\mathbb{R}^{k}$ and
$\mathscrsfs{P}_{\mathrm{c}}(\mathbb{R}^{k})$ those with compact support. We
say a function $f:\mathbb{R}^{k}\rightarrow\mathbb{R}$ is _pseudo-Lipschitz of
order 2_ if there exists constant $C$ such that
$|f(\boldsymbol{x})-f(\boldsymbol{x}^{\prime})|\leq
C(1+\|\boldsymbol{x}\|+\|\boldsymbol{x}^{\prime}\|)\|\boldsymbol{x}-\boldsymbol{x}^{\prime}\|$
for all $\boldsymbol{x},\boldsymbol{x}^{\prime}\in\mathbb{R}^{k}$. We call a
function $\ell:(\mathbb{R}^{k})^{2}\rightarrow\mathbb{R}$ a _quadratically-
bounded loss_ if it is non-negative and pseudo-Lipschitz of order 2 and there
exists $C>0$ such that for all
$\boldsymbol{x},\boldsymbol{x}^{\prime},\boldsymbol{d}\in\mathbb{R}^{k}$ we
have
$|\ell(\boldsymbol{x},\boldsymbol{d})-\ell(\boldsymbol{x}^{\prime},\boldsymbol{d})|\leq
C(1+\sqrt{\ell(\boldsymbol{x},\boldsymbol{d})}+\sqrt{\ell(\boldsymbol{x}^{\prime},\boldsymbol{d})})\|\boldsymbol{x}-\boldsymbol{x}^{\prime}\|$.
### 3.1 High-dimensional regression
We make the following additional assumptions for the regression problem:
* R1.
We sample $\\{(w_{i},u_{i})\\}_{i\leq
n}\stackrel{{\scriptstyle\mathrm{iid}}}{{\sim}}\mu_{W,U}$,
$\\{(\theta_{i},v_{i})\\}_{i\leq
p}\stackrel{{\scriptstyle\mathrm{iid}}}{{\sim}}\mu_{\Theta,V}$ for
$\mu_{\Theta,V},\mu_{W,U}\in\mathscrsfs{P}_{2}(\mathbb{R}^{2})$.
* R2.
There exists a measurable function $h:\mathbb{R}^{2}\rightarrow\mathbb{R}$
such that
$y_{i}=h(\boldsymbol{x}_{i}^{{\mathsf{T}}}\boldsymbol{\theta},w_{i})$.
Moreover, there exists constant $C$ such that $|h(x,w)|\leq C(1+|x|+|w|)$ for
all $x,w$.
Notice that the description in terms of a probability kernel
$\mathbb{P}(y_{i}\in\,\cdot\,|\boldsymbol{x}_{i}^{{\mathsf{T}}}\boldsymbol{\theta})$
is equivalent to the one in terms of a ‘noisy’ function
$y_{i}=h(\boldsymbol{x}_{i}^{{\mathsf{T}}}\boldsymbol{\theta},w_{i})$ in most
cases of interest.
Our lower bound is defined in terms of a one-dimensional recursion. Let
$(\Theta,V)\sim\mu_{\Theta,V}$. Let $\textsf{mmse}_{\Theta,V}(\tau^{2})$ be
the minimum mean square error for estimation of $\Theta$ given observations
$V$ and $\Theta+\tau G$ where $G\sim{\mathsf{N}}(0,1)$ independent of
$\Theta$. Set $\tau_{\Theta}^{2}=\mathbb{E}[\Theta^{2}]$ and
$\tau_{0}^{2}=\infty$, and define recursively
$\begin{gathered}\tilde{\tau}_{s}^{2}=\frac{1}{\delta}\,\textsf{mmse}_{\Theta,V}(\tau_{s}^{2}),\;\;\;\;\;\;\sigma_{s}^{2}=\frac{1}{\delta}(\tau_{\Theta}^{2}-\textsf{mmse}_{\Theta,V}(\tau_{s}^{2}))\,,\\\
\frac{1}{\tau_{s+1}^{2}}=\frac{1}{\tilde{\tau}_{s}^{2}}\mathbb{E}\left[\mathbb{E}[G_{1}|Y,G_{0},U]^{2}\right],\\\
\end{gathered}$ (5)
where $Y=h(\sigma_{s}G_{0}+\tilde{\tau}_{s}G_{1},W)$ and the expectation is
with respect to
$G_{0},G_{1}\stackrel{{\scriptstyle\mathrm{iid}}}{{\sim}}{\mathsf{N}}(0,1)$
and $(W,U)\sim\mu_{W,U}$ independent.
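To give a concrete sense of the quantities entering Eq. (5), the following sketch estimates $\textsf{mmse}_{\Theta,V}(\tau^{2})$ by Monte Carlo for the three-point sparse prior used in Section 4, with no side information $V$; the parameters and sample size are illustrative. Note that $\textsf{mmse}_{\Theta,V}(\infty)=\mathbb{E}[\Theta^{2}]=\varepsilon\mu^{2}$, so the first step of the recursion needs no simulation; the channel term $\mathbb{E}[\mathbb{E}[G_{1}|Y,G_{0},U]^{2}]$ would be estimated analogously for a given noise kernel.

```python
import numpy as np

def posterior_mean(x, tau, eps, mu):
    # E[Theta | Theta + tau*G = x] under the prior
    # (1 - eps) delta_0 + (eps/2)(delta_mu + delta_{-mu}).
    phi = lambda z: np.exp(-0.5 * (z / tau) ** 2)
    num = mu * (eps / 2) * (phi(x - mu) - phi(x + mu))
    den = (1 - eps) * phi(x) + (eps / 2) * (phi(x - mu) + phi(x + mu))
    return num / den

def mmse(tau, eps, mu, n_samples=10**6, seed=0):
    # Monte Carlo estimate of E[(Theta - E[Theta | Theta + tau*G])^2].
    rng = np.random.default_rng(seed)
    theta = rng.choice([0.0, mu, -mu],
                       p=[1 - eps, eps / 2, eps / 2], size=n_samples)
    x = theta + tau * rng.standard_normal(n_samples)
    return np.mean((theta - posterior_mean(x, tau, eps, mu)) ** 2)
```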
###### Theorem 1.
Under assumptions A1, A2, R1, R2 in the high-dimensional regression model and
under the asymptotics $n,p\rightarrow\infty$,
$n/p\rightarrow\delta\in(0,\infty)$, let $\hat{\boldsymbol{\theta}}^{t}$ be
the output of any GFOM after $t$ iterations ($2t-1$ matrix-vector
multiplications). Then
$\displaystyle\lim_{n\rightarrow\infty}\frac{1}{p}\|\hat{\boldsymbol{\theta}}^{t}-\boldsymbol{\theta}\|_{2}^{2}\geq\textsf{mmse}_{\Theta,V}(\tau_{t}^{2})\,.$
More generally, for any quadratically-bounded loss
$\ell:\mathbb{R}^{2}\rightarrow\mathbb{R}_{\geq 0}$,
$\displaystyle\lim_{n\rightarrow\infty}\frac{1}{p}\sum_{j=1}^{p}\ell(\theta_{j},\hat{\theta}_{j}^{t})\geq\inf_{\hat{\theta}(\,\cdot\,)}\mathbb{E}\big{\\{}\ell(\Theta,\hat{\theta}(\Theta+\tau_{t}G,V))\big{\\}}\,,$
(6)
where $(\Theta,V)\sim\mu_{\Theta,V}$ independent of $G\sim{\mathsf{N}}(0,1)$,
and the infimum on the right-hand side is over measurable functions
$\hat{\theta}:\mathbb{R}^{2}\to\mathbb{R}$. The limits are in probability and
to a constant, and they are guaranteed to exist. For all $\epsilon>0$, there
exist GFOMs which satisfy these bounds to within tolerance $\epsilon$.
### 3.2 Low-rank matrix estimation
We make the following additional assumption:
* M1.
We sample $\\{(\boldsymbol{\lambda}_{i},\boldsymbol{u}_{i})\\}_{i\leq
n}\stackrel{{\scriptstyle\mathrm{iid}}}{{\sim}}\mu_{\boldsymbol{\Lambda},\boldsymbol{U}}$
and $\\{(\boldsymbol{\theta}_{j},\boldsymbol{v}_{j})\\}_{j\leq
p}\stackrel{{\scriptstyle\mathrm{iid}}}{{\sim}}\mu_{\boldsymbol{\Theta},\boldsymbol{V}}$
for
$\mu_{\boldsymbol{\Lambda},\boldsymbol{U}},\mu_{\boldsymbol{\Theta},\boldsymbol{V}}\in\mathscrsfs{P}_{2}(\mathbb{R}^{2r})$.
Again, our lower bound is defined in terms of a recursion, which this time is
defined over positive semidefinite matrices
$\boldsymbol{Q}_{t},\hat{\boldsymbol{Q}}_{t}\in\mathbb{R}^{r\times r}$,
$\boldsymbol{Q}_{t},\hat{\boldsymbol{Q}}_{t}\succeq\boldsymbol{0}$. Set
$\hat{\boldsymbol{Q}}_{0}=\boldsymbol{0}$, and define recursively
$\displaystyle\boldsymbol{Q}_{t+1}={\boldsymbol{V}}_{\boldsymbol{\Lambda},\boldsymbol{U}}(\hat{\boldsymbol{Q}}_{t})\,,\;\;\;\;\;\;\;\;\;\;\;\hat{\boldsymbol{Q}}_{t}=\frac{1}{\delta}{\boldsymbol{V}}_{\boldsymbol{\Theta},\boldsymbol{V}}(\boldsymbol{Q}_{t})\,,$
(7)
where we define the second moment of the conditional expectation
${\boldsymbol{V}}_{\boldsymbol{\Theta},\boldsymbol{V}}:\mathbb{R}^{r\times
r}\to\mathbb{R}^{r\times r}$ by
$\displaystyle{\boldsymbol{V}}_{\boldsymbol{\Theta},\boldsymbol{V}}(\boldsymbol{Q}):=\mathbb{E}\Big{\\{}\mathbb{E}[\boldsymbol{\Theta}|\boldsymbol{Q}^{1/2}\boldsymbol{\Theta}+\boldsymbol{G}=\boldsymbol{Y};\boldsymbol{V}]\mathbb{E}[\boldsymbol{\Theta}|\boldsymbol{Q}^{1/2}\boldsymbol{\Theta}+\boldsymbol{G}=\boldsymbol{Y};\boldsymbol{V}]^{{\mathsf{T}}}\Big{\\}},$
and analogously for
${\boldsymbol{V}}_{\boldsymbol{\Lambda},\boldsymbol{U}}(\hat{\boldsymbol{Q}})$.
Here the expectation is with respect to
$(\boldsymbol{\Theta},\boldsymbol{V})\sim\mu_{\boldsymbol{\Theta},\boldsymbol{V}}$
and an independent Gaussian vector
$\boldsymbol{G}\sim{\mathsf{N}}(\boldsymbol{0},{\boldsymbol{I}}_{r})$. Notice
in particular that
$\mathbb{E}\\{\boldsymbol{\Theta}\boldsymbol{\Theta}^{{\mathsf{T}}}\\}-{\boldsymbol{V}}_{\boldsymbol{\Theta},\boldsymbol{V}}(\boldsymbol{Q})$
is the vector minimum mean square error when $\boldsymbol{\Theta}$ is observed
in Gaussian noise with covariance $\boldsymbol{Q}^{-1}$. For $r=1$, Eq. (7) is
a simple scalar recursion.
###### Theorem 2.
Under assumptions A1, A2, M1 in the low-rank matrix estimation model and under
the asymptotics
$n,p\rightarrow\infty$, $n/p\rightarrow\delta\in(0,\infty)$, let
$\hat{\boldsymbol{\theta}}^{t}$ be the output of any GFOM after $t$ iterations
($2t-1$ matrix-vector multiplications). Then
$\displaystyle\lim_{n\rightarrow\infty}\frac{1}{p}\|\hat{\boldsymbol{\theta}}^{t}-\boldsymbol{\theta}\|_{\mathsf{F}}^{2}\geq\mathbb{E}\\{\|\boldsymbol{\Theta}\|^{2}\\}-\operatorname{Tr}{\boldsymbol{V}}_{\boldsymbol{\Theta},\boldsymbol{V}}(\boldsymbol{Q}_{t})\,.$
More generally, for any quadratically-bounded loss
$\ell:\mathbb{R}^{2r}\rightarrow\mathbb{R}_{\geq 0}$,
$\displaystyle\lim_{n\rightarrow\infty}\frac{1}{p}\sum_{j=1}^{p}\ell(\boldsymbol{\theta}_{j},\hat{\boldsymbol{\theta}}_{j}^{t})\geq\inf_{\hat{\boldsymbol{\theta}}(\,\cdot\,)}\mathbb{E}\big{\\{}\ell(\boldsymbol{\Theta},\hat{\boldsymbol{\theta}}(\boldsymbol{Q}_{t}^{1/2}\boldsymbol{\Theta}+\boldsymbol{G},\boldsymbol{V}))\big{\\}}\,,$
(8)
where the infimum on the right-hand side is over measurable functions
$\hat{\boldsymbol{\theta}}:\mathbb{R}^{2r}\to\mathbb{R}^{r}$. The limits are in
probability and to a constant, and they are guaranteed to exist. As above, for
all $\epsilon>0$ there exist GFOMs which satisfy these bounds to within
tolerance $\epsilon$.
### 3.3 Discussion
Our motivations are similar to the ones for statistical query (SQ) lower
bounds [FGR+17, FGV17]: we want to provide estimation lower bounds under a
restricted computational model, that are sensitive to the data distribution.
However the scope of our approach is significantly different from SQ
algorithms: the latter can query data distributions and compute approximate
expectations with respect to that distribution. In contrast, our algorithms
work with a fixed sample (the data matrix $\boldsymbol{X}$ and responses
$\boldsymbol{y}$), which is queried multiple times. These queries can be
thought as weighted averages of _both rows and columns_ of $\boldsymbol{X}$
and, as such, cannot be simulated by the SQ oracle. For instance, the proximal
gradient method or the nonlinear power iteration of Section 2 cannot be framed
as SQ algorithms.
The lower bounds of Theorems 1 and 2 are satisfied with equality by a specific
first order method that is an approximate message passing (AMP) algorithm,
with Bayes updates. This can be regarded as a version of belief propagation
(BP) for densely connected graphs [KF09], or an iterative implementation of
the TAP equations from spin glass theory [MPV87].
Our proof builds on the asymptotically exact analysis of AMP algorithms
developed in [Bol14, BM11, JM18, BMN19]. However we need to overcome three
technical obstacles: $(1)$ Show that any GFOM can be reduced (in a suitable
sense) to a certain AMP algorithm, whose behavior can be exactly tracked.
$(2)$ Show that Bayes-AMP is optimal among all AMP algorithms. We achieve this
goal by considering an estimation problem on trees and showing that, in a
suitable large degree limit, it has the same asymptotic behavior as AMP on the
complete graph. On trees it is immediate to see that BP is the optimal local
algorithm. $(3)$ We need to prove that the asymptotic behavior of BP for trees
of large degree is equivalent to the one of Bayes-AMP on the original problem.
This amounts to proving a Gaussian approximation theorem for BP. While similar
results were obtained in the past for discrete models [Sly09, MX16], the
current setting is technically more challenging because the underlying
variables $\theta_{i}$ are continuous and unbounded.
While the line of argument above is –in hindsight– very natural, the
conclusion is broadly useful. For instance, [AFUZ19] study a class of
message passing algorithms inspired by replica symmetry breaking and survey
propagation [MPZ02], and observe that they do not perform better than Bayes
AMP. These algorithms are within the scope of our Theorem 2, which implies
that indeed they cannot outperform Bayes AMP, for any constant number of
iterations.
Finally, a sequence of recent papers characterize the asymptotics of the
Bayes-optimal estimation error in the two models described above [LM19,
BKM+19]. It was conjectured that, in this context, no polynomial-time
algorithm can outperform Bayes AMP, provided these algorithms have access to
an arbitrarily small amount of side information (concretely, side information
can take the form $\boldsymbol{v}=\eta\boldsymbol{\theta}+\boldsymbol{g}$ for
$\eta>0$ arbitrarily small, with
$\boldsymbol{g}\sim{\mathsf{N}}(0,{\boldsymbol{I}}_{p})$).
Theorems 1 and 2 establish this result within the restricted class of GFOMs.
## 4 Applying the general lower bounds
In our two examples, we will refer to the sets
$B^{p}_{0}(k)\subset\mathbb{R}^{p}$ of $k$-sparse vectors and
$B^{p}_{2}(R)\subset\mathbb{R}^{p}$ of vectors with $\ell_{2}$-norm bounded by
$R$.
### Example $\\#1$: Sparse phase retrieval
For the reader’s convenience, we follow the standard normalization in phase
retrieval, whereby the ‘sensing vectors’ (i.e. the rows of the design matrix)
have norm concentrated around one. In other words, we observe $y_{i}\sim
p(\,\cdot\,|\tilde{\boldsymbol{x}}_{i}^{{\mathsf{T}}}\boldsymbol{\theta}){\mathrm{d}}y$,
where $\tilde{\boldsymbol{x}}_{i}\sim{\mathsf{N}}(0,{\boldsymbol{I}}_{p}/p)$.
In order to model the phase retrieval problem, we assume that the conditional
density $p(\,\cdot\,|\,\cdot\,\,)$ satisfies the symmetry condition
$p(y|x)=p(y|-x)$. In words: we only observe a noisy version of the absolute
value $|\langle\tilde{\boldsymbol{x}}_{i},\boldsymbol{\theta}\rangle|$. An
important role is played by the following critical value of the number of
observations per dimension
$\displaystyle\delta_{\mbox{\tiny{sp}}}:=\left(\int_{\mathbb{R}}\frac{\mathbb{E}_{G}[p(y|G)(G^{2}-1)]^{2}}{\mathbb{E}_{G}[p(y|G)]}\,{\mathrm{d}}y\right)^{-1}\,.$
(9)
Here expectation is with respect to $G\sim{\mathsf{N}}(0,1)$. It was proved in
[MM19] that, if $\|\boldsymbol{\theta}\|_{2}=\sqrt{p}$ and
$n>(\delta_{\mbox{\tiny{sp}}}+\eta)p$, for some $\eta$ bounded away from zero,
then there exists a simple spectral estimator
$\hat{\boldsymbol{\theta}}_{\mbox{\tiny{sp}}}$ that achieves weak recovery,
i.e., a positive correlation with the true signal. Namely,
$\frac{|\langle\hat{\boldsymbol{\theta}}_{\mbox{\tiny{sp}}},\boldsymbol{\theta}\rangle|}{\|\hat{\boldsymbol{\theta}}_{\mbox{\tiny{sp}}}\|_{2}\|\boldsymbol{\theta}\|_{2}}$
is bounded away from zero as $p,n\to\infty$.
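As a numerical sanity check on Eq. (9), the following sketch evaluates $\delta_{\mbox{\tiny{sp}}}$ for the Gaussian noise kernel $y=|x|+w$, $w\sim{\mathsf{N}}(0,\sigma^{2})$; the kernel, grid, and quadrature order are illustrative choices of ours. In the noiseless limit $p(y|x)=\delta(y-|x|)$ the integral can be computed in closed form: the integrand reduces to $2\phi(y)(y^{2}-1)^{2}$ on $y>0$, whose integral equals $\mathbb{E}[(G^{2}-1)^{2}]=2$, giving $\delta_{\mbox{\tiny{sp}}}=1/2$.

```python
import numpy as np

def delta_sp(sigma=0.2, n_herm=200, n_y=4001):
    """Numerical evaluation of Eq. (9) for y = |x| + w, w ~ N(0, sigma^2)."""
    # Probabilists' Gauss-Hermite nodes and weights for G ~ N(0, 1).
    nodes, weights = np.polynomial.hermite_e.hermegauss(n_herm)
    weights = weights / weights.sum()
    y = np.linspace(-5 * sigma, 9.0, n_y)
    # p(y|G) on the (y, node) grid for the Gaussian noise kernel.
    pyg = (np.exp(-0.5 * ((y[:, None] - np.abs(nodes)[None, :]) / sigma) ** 2)
           / np.sqrt(2 * np.pi * sigma ** 2))
    num = (pyg * (nodes ** 2 - 1)[None, :]) @ weights  # E_G[p(y|G)(G^2 - 1)]
    den = pyg @ weights                                # E_G[p(y|G)]
    integrand = np.where(den > 1e-14, num ** 2 / np.maximum(den, 1e-14), 0.0)
    return 1.0 / np.trapz(integrand, y)
```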
In the case of a dense signal $\boldsymbol{\theta}$ and observation model
$y_{i}=|\tilde{\boldsymbol{x}}_{i}^{{\mathsf{T}}}\boldsymbol{\theta}|+w_{i},\,w_{i}\sim{\mathsf{N}}(0,\sigma^{2})$,
the oversampling ratio $\delta_{\mbox{\tiny{sp}}}$ is known to be information-
theoretically optimal: for $n<(\delta_{\mbox{\tiny{sp}}}-\eta)p$ no estimator
can achieve a correlation that is bounded away from $0$ [MM19]. On the other
hand, if $\boldsymbol{\theta}$ has at most $p\varepsilon$ nonzero entries, it
is information-theoretically possible to reconstruct it from
$\delta>C\varepsilon\log(1/\varepsilon)$ phaseless measurements per dimension
[LV13].
Our next result implies that no GFOM can achieve reconstruction from
$O(\varepsilon\log(1/\varepsilon))$ measurements per dimension, unless it is
initialized close enough to the true signal. In order to model the additional
information provided by the initialization we assume to be given
$\displaystyle\overline{\boldsymbol{v}}=\sqrt{\alpha}\,\boldsymbol{\theta}/\|\boldsymbol{\theta}\|_{2}+\sqrt{1-\alpha}\,\tilde{\boldsymbol{g}},\;\;\;\;\;\;(\tilde{g}_{i})_{i\leq p}\stackrel{{\scriptstyle\mathrm{iid}}}{{\sim}}{\mathsf{N}}(0,1/p)\,.$ (10)
Notice that with this normalization $\|\overline{\boldsymbol{v}}\|_{2}$
concentrates tightly around $1$, and $\sqrt{\alpha}$ can be interpreted as the
cosine of the angle between $\boldsymbol{\theta}$ and
$\overline{\boldsymbol{v}}$.
###### Corollary 1.
Consider the phase retrieval model, for a sequence of deterministic signals
$\boldsymbol{\theta}\in\mathbb{R}^{p}$, and let
$\mathscrsfs{T}(\varepsilon,R):=B^{p}_{0}(p\varepsilon)\cap B^{p}_{2}(R)$.
Assume the noise kernel $p(\,\cdot\,|x)$ to satisfy the conditions of Theorem
1 and to be twice differentiable with respect to $x$.
Then, for any $\delta<\delta_{\mbox{\tiny{sp}}}$, there exists
$\alpha_{*}=\alpha_{*}(\delta,\varepsilon)>0$ and
$C_{*}=C_{*}(\delta,\varepsilon)$ such that, if $\alpha\leq\alpha_{*}$, then
$\displaystyle\sup_{t\geq
0}\lim_{n,p\to\infty}\inf_{\boldsymbol{\theta}\in\mathscrsfs{T}(\varepsilon,\sqrt{p})}\mathbb{E}\frac{\langle\boldsymbol{\theta},\hat{\boldsymbol{\theta}}^{t}\rangle}{\|\boldsymbol{\theta}\|_{2}\|\hat{\boldsymbol{\theta}}^{t}\|_{2}}\leq
C_{*}\sqrt{\alpha}\,.$ (11)
The same conclusion holds if $\boldsymbol{\theta}$ is drawn randomly with
i.i.d. entries
$\theta_{i}\sim\mu_{\theta}:=(1-\varepsilon)\delta_{0}+(\varepsilon/2)(\delta_{\mu}+\delta_{-\mu})$,
$\mu=1/\sqrt{\varepsilon}$.
### Example $\\#2$: Sparse PCA
For ease of interpretation, we assume the observation model
$\tilde{\boldsymbol{X}}=\boldsymbol{\lambda}\overline{\boldsymbol{\theta}}^{{\mathsf{T}}}+\tilde{\boldsymbol{Z}}$,
where $(\tilde{z}_{ij})_{i\leq n,j\leq p}\sim{\mathsf{N}}(0,1)$ and
$(\lambda_{i})_{i\leq n}\sim{\mathsf{N}}(0,1)$. Equivalently, conditional on
$\overline{\boldsymbol{\theta}}$, the rows of $\tilde{\boldsymbol{X}}$ are
i.i.d. samples
$\tilde{\boldsymbol{x}}_{i}\sim{\mathsf{N}}(0,\boldsymbol{\Sigma})$,
$\boldsymbol{\Sigma}={\boldsymbol{I}}_{p}+\overline{\boldsymbol{\theta}}\overline{\boldsymbol{\theta}}^{{\mathsf{T}}}$.
We also assume to have access to an initialization $\overline{\boldsymbol{v}}$
correlated with $\overline{\boldsymbol{\theta}}$, as per Eq. (10). In order to
apply Theorem 2, we choose a specific distribution for the spike. Defining
$\boldsymbol{\theta}=\overline{\boldsymbol{\theta}}\sqrt{p}$, we assume that
the entries of $\boldsymbol{\theta}$ follow a three-points sparse distribution
$(\theta_{i})_{i\leq
p}\sim\mu_{\theta}:=(1-\varepsilon)\delta_{0}+(\varepsilon/2)(\delta_{+\mu}+\delta_{-\mu})$.
The next lemma specializes Theorem 2.
###### Lemma 1.
Assume the sparse PCA model with the distribution of
$\overline{\boldsymbol{\theta}}$ given above. Define $(q_{t})_{t\geq 0}$ by
$\displaystyle q_{t+1}$
$\displaystyle=\frac{V_{\pm}(q_{t}+\tilde{\alpha})}{1+V_{\pm}(q_{t}+\tilde{\alpha})}\,,\;\;\;\;\;\;q_{0}=0\,,$
(12) $\displaystyle V_{\pm}(q)$ $\displaystyle:=e^{-\delta
q\mu^{2}}\mu^{2}\varepsilon^{2}\mathbb{E}\left\\{\frac{\sinh(\mu\sqrt{\delta
q}G)^{2}}{1-\varepsilon+\varepsilon e^{-\delta
q\mu^{2}/2}\cosh(\mu\sqrt{\delta q}G)}\right\\}\,,$ (13)
where $\tilde{\alpha}=\alpha/(\mu^{2}\varepsilon(1-\alpha))$. Then, for any
GFOM
$\displaystyle\lim_{n,p\to\infty}\frac{\langle\overline{\boldsymbol{\theta}},\hat{\boldsymbol{\theta}}^{t}\rangle}{\|\overline{\boldsymbol{\theta}}\|_{2}\|\hat{\boldsymbol{\theta}}^{t}\|_{2}}\leq\sqrt{\frac{V_{\pm}(q_{t}+\tilde{\alpha})}{\mu^{2}\varepsilon}}\,.$
(14)
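The recursion (12)-(13) is straightforward to evaluate numerically. The sketch below estimates $V_{\pm}$ by Monte Carlo over $G\sim{\mathsf{N}}(0,1)$; parameter values are illustrative, and for large $\delta q\mu^{2}$ a log-domain implementation of $\sinh$ and $\cosh$ would be needed to avoid overflow.

```python
import numpy as np

def V_pm(q, eps, mu, delta, n_samples=10**6, seed=0):
    """Monte Carlo evaluation of V_pm(q) in Eq. (13)."""
    if q <= 0.0:
        return 0.0
    g = np.random.default_rng(seed).standard_normal(n_samples)
    s = mu * np.sqrt(delta * q)
    num = np.sinh(s * g) ** 2
    den = 1 - eps + eps * np.exp(-delta * q * mu ** 2 / 2) * np.cosh(s * g)
    return np.exp(-delta * q * mu ** 2) * mu ** 2 * eps ** 2 * np.mean(num / den)

def iterate_q(eps, mu, delta, alpha, t_max=50):
    """Iterate Eq. (12) from q_0 = 0; return q_t and the bound of Eq. (14)."""
    alpha_tilde = alpha / (mu ** 2 * eps * (1 - alpha))
    q = 0.0
    for _ in range(t_max):
        v = V_pm(q + alpha_tilde, eps, mu, delta)
        q = v / (1 + v)
    return q, np.sqrt(V_pm(q + alpha_tilde, eps, mu, delta) / (mu ** 2 * eps))
```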
The bound in the last lemma holds for random vectors
$\overline{\boldsymbol{\theta}}$ with i.i.d. entries from the three-points
distribution. As a consequence, it implies a minimax bound for non-random
vectors $\overline{\boldsymbol{\theta}}$ with given $\ell_{2}$-norm and
sparsity. We state this bound in the corollary below. In order to develop
explicit expressions, we analyze the recursion of Eqs. (12), (13).
###### Corollary 2.
Assume the sparse PCA model, for
$\overline{\boldsymbol{\theta}}\in\mathbb{R}^{p}$ a deterministic vector and
$\boldsymbol{\lambda}$, $\tilde{\boldsymbol{Z}}$ random, and consider the
parameter space $\mathscrsfs{T}(\varepsilon,R):=B^{p}_{0}(p\varepsilon)\cap
B^{p}_{2}(R)$.
1. $(a)$
If $R^{2}<1/\sqrt{\delta}$, then there exists
$\alpha_{*}=\alpha_{*}(R,\delta,\varepsilon),C_{*}=C_{*}(R,\delta,\varepsilon)$
such that, for $\alpha<\alpha_{*}$, and any GFOM
$\displaystyle\sup_{t\geq
0}\lim_{n,p\to\infty}\inf_{\overline{\boldsymbol{\theta}}\in\mathscrsfs{T}(\varepsilon,R)}\mathbb{E}\frac{\langle\overline{\boldsymbol{\theta}},\hat{\boldsymbol{\theta}}^{t}\rangle}{\|\overline{\boldsymbol{\theta}}\|_{2}\|\hat{\boldsymbol{\theta}}^{t}\|_{2}}\leq
C_{*}\sqrt{\alpha}\,.$ (15)
2. $(b)$
If $R^{2}<\sqrt{(1-\varepsilon)/4\delta}$, then the above statement holds with
$\alpha_{*}=\left(\frac{\varepsilon}{4\delta}\wedge\frac{1}{2}\right)$,
$C_{*}=3/R^{2}$.
In words, the last corollary implies that for $R^{2}\delta<1$, no estimator
achieves a non-vanishing correlation with the true signal
$\overline{\boldsymbol{\theta}}$, unless sufficient side information about
$\overline{\boldsymbol{\theta}}$ is available. Notice that $R^{2}\delta=1$
is the threshold above which the principal eigenvector of the empirical
covariance $\tilde{\boldsymbol{X}}^{{\mathsf{T}}}\tilde{\boldsymbol{X}}/n$
becomes correlated with $\overline{\boldsymbol{\theta}}$. Hence, our result
implies that, if simple PCA fails, then every GFOM will fail.
Vice versa, if simple PCA succeeds, then it can be implemented via a GFOM,
provided arbitrarily weak side information is available. Indeed, assume side
information $\boldsymbol{v}=\eta\boldsymbol{\theta}+\boldsymbol{g}$, with
$\boldsymbol{g}\sim{\mathsf{N}}(0,{\boldsymbol{I}}_{p})$, and an $\eta$
arbitrarily small constant. Then the power method initialized at
$\boldsymbol{v}$ converges to an estimate that has correlation with
$\boldsymbol{\theta}$ bounded away from zero in $O(\log(1/\eta))$ iterations.
## 5 Proof of main results
In this section, we prove Theorems 1 and 2 under stronger assumptions than in
their statements. In the high-dimensional regression model, these assumptions
are as follows.
* R3.
Given $\mu_{\Theta,V}\in\mathscrsfs{P}_{\mathrm{c}}(\mathbb{R}^{2})$ and
$\mu_{\boldsymbol{W},U}\in\mathscrsfs{P}_{4}(\mathbb{R}^{k}\times\mathbb{R})$
for some $k\geq 1$, we sample $\\{(\theta_{i},v_{i})\\}_{i\leq
p}\stackrel{{\scriptstyle\mathrm{iid}}}{{\sim}}\mu_{\Theta,V}$,
$\\{(\boldsymbol{w}_{i},u_{i})\\}_{i\leq
n}\stackrel{{\scriptstyle\mathrm{iid}}}{{\sim}}\mu_{\boldsymbol{W},U}$.
* R4.
There exists Lipschitz function
$h:\mathbb{R}\times\mathbb{R}^{k}\rightarrow\mathbb{R}$ such that
$y_{i}=h(\boldsymbol{x}_{i}^{{\mathsf{T}}}\boldsymbol{\theta},\boldsymbol{w}_{i})$.
Measure $\mu_{\boldsymbol{W},U}$ has regular conditional probability
distribution $\mu_{\boldsymbol{W}|U}(u,\cdot)$ such that, for all fixed $x,u$,
the distribution of $h(x,\boldsymbol{W})$ when
$\boldsymbol{W}\sim\mu_{\boldsymbol{W}|u}(u,\cdot)$ has positive and bounded
density $p(y|x,u)$ with respect to Lebesgue measure. Further,
$\partial_{x}^{k}\log p(y|x,u)$ for $1\leq k\leq 5$ exists and is bounded.
In the low-rank matrix estimation model, this assumption is as follows.
* M2.
Given
$\mu_{\boldsymbol{\Lambda},\boldsymbol{U}},\mu_{\boldsymbol{\Theta},\boldsymbol{V}}\in\mathscrsfs{P}_{\mathrm{c}}(\mathbb{R}^{2r})$,
we sample $\\{(\boldsymbol{\lambda}_{i},\boldsymbol{u}_{i})\\}_{i\leq
n}\stackrel{{\scriptstyle\mathrm{iid}}}{{\sim}}\mu_{\boldsymbol{\Lambda},\boldsymbol{U}}$,
$\\{(\boldsymbol{\theta}_{j},\boldsymbol{v}_{j})\\}_{j\leq
p}\stackrel{{\scriptstyle\mathrm{iid}}}{{\sim}}\mu_{\boldsymbol{\Theta},\boldsymbol{V}}$.
In Appendix E, we show that Theorem 1 (resp. Theorem 2) under assumptions R3
and R4 (resp. M2) implies the theorem under the weaker assumptions R1 and R2
(resp. M1).
### 5.1 Reduction of GFOMs to approximate message passing algorithms
Approximate message passing (AMP) algorithms are a special class of GFOMs that
admit an asymptotic characterization called _state evolution_ [BM11]. We show
that, in both models we consider, any GFOM is equivalent to an AMP algorithm
after a change of variables.
An AMP algorithm is defined by sequences of Lipschitz functions
$(f_{t}:\mathbb{R}^{r(t+1)+1}\rightarrow\mathbb{R}^{r})_{t\geq 0}$,
$(g_{t}:\mathbb{R}^{r(t+1)}\rightarrow\mathbb{R}^{r})_{t\geq 1}$. It generates
sequences $(\boldsymbol{a}^{t})_{t\geq 1}$, $(\boldsymbol{b}^{t})_{t\geq 1}$
of matrices in $\mathbb{R}^{p\times r}$ and $\mathbb{R}^{n\times r}$,
respectively, according to
$\begin{split}\boldsymbol{a}^{t+1}=\boldsymbol{X}^{\mathsf{T}}f_{t}(\boldsymbol{b}^{1},\ldots,\boldsymbol{b}^{t};\boldsymbol{y},\boldsymbol{u})-\sum_{s=1}^{t}g_{s}(\boldsymbol{a}^{1},\ldots,\boldsymbol{a}^{s};\boldsymbol{v})\boldsymbol{\xi}_{t,s}^{\mathsf{T}},\\\
\boldsymbol{b}^{t}=\boldsymbol{X}g_{t}(\boldsymbol{a}^{1},\ldots,\boldsymbol{a}^{t};\boldsymbol{v})-\sum_{s=0}^{t-1}f_{s}(\boldsymbol{b}^{1},\ldots,\boldsymbol{b}^{s};\boldsymbol{y},\boldsymbol{u})\boldsymbol{\zeta}_{t,s}^{\mathsf{T}},\end{split}$
(16)
with initialization
$\boldsymbol{a}^{1}=\boldsymbol{X}^{\mathsf{T}}f_{0}(\boldsymbol{y},\boldsymbol{u})$.
Here $(\boldsymbol{\xi}_{t,s})_{1\leq s\leq t}$,
$(\boldsymbol{\zeta}_{t,s})_{0\leq s<t}$ are deterministic $r\times r$
matrices. We refer to the recursion (16) as an AMP algorithm if and only if
the matrices $(\boldsymbol{\xi}_{t,s})_{1\leq s\leq t}$,
$(\boldsymbol{\zeta}_{t,s})_{0\leq s<t}$ are determined by the functions
$(f_{t})_{t\geq 0}$, $(g_{t})_{t\geq 1}$ in a specific way, which depends on
the model under consideration and which we describe in Appendix B. For this special
choice of the matrices $(\boldsymbol{\xi}_{t,s})_{1\leq s\leq t}$,
$(\boldsymbol{\zeta}_{t,s})_{0\leq s<t}$, the iterates
$\boldsymbol{a}^{t},\boldsymbol{b}^{t}$ are asymptotically Gaussian, with a
covariance that can be determined via the state evolution recursion.
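The general prescription for $(\boldsymbol{\xi}_{t,s})$, $(\boldsymbol{\zeta}_{t,s})$ is deferred to Appendix B. Purely as an illustration, the sketch below implements a memory-one special case for $r=1$, in which $f_{t},g_{t}$ depend only on the most recent iterate and the Onsager coefficients reduce to $1/n$-normalized empirical averages of derivatives, a common choice in the AMP literature under the normalization $x_{ij}\sim{\mathsf{N}}(0,1/n)$ of assumption A2; the general case retains earlier terms as well.

```python
import numpy as np

def amp(X, y, u, v, f0, f, g, f_prime, g_prime, t_star):
    """A memory-one sketch of the AMP recursion (16) for r = 1.

    f(b, y, u) and g(a, v) stand in for f_t and g_t (applied row-wise);
    f_prime and g_prime are their derivatives in the first argument.
    """
    n = X.shape[0]
    r = f0(y, u)                     # f_0(y, u), a vector in R^n
    a = X.T @ r                      # a^1 = X^T f_0(y, u)
    for _ in range(1, t_star):
        q = g(a, v)
        # Onsager correction for the b-update (coefficient zeta).
        b = X @ q - (np.sum(g_prime(a, v)) / n) * r
        r_new = f(b, y, u)
        # Onsager correction for the a-update (coefficient xi).
        a = X.T @ r_new - (np.sum(f_prime(b, y, u)) / n) * q
        r = r_new
    return a   # feed (a^1, ..., a^t; v) into g_* to form the estimate
```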
The next lemma, proved in Appendix B, makes this precise and describes the
state evolution of the resulting AMP algorithm.
###### Lemma 1.
Under assumptions A1, A2, R3, R4 (for high-dimensional regression) or
assumptions A1, A2, M2 (for low-rank matrix estimation), there exist Lipschitz
functions $(f_{t})_{t\geq 0},(g_{t})_{t\geq 1}$ as above and
$(\varphi_{t}:\mathbb{R}^{r(t+1)}\rightarrow\mathbb{R})_{t\geq
1},(\phi_{t}:\mathbb{R}^{r(t+1)+1}\rightarrow\mathbb{R})_{t\geq 1}$, such that
the following holds. Let $(\boldsymbol{\xi}_{t,s})_{1\leq s\leq
t},(\boldsymbol{\zeta}_{t,s})_{0\leq s<t}$ be $r\times r$ matrices determined
by the general AMP prescription (see Appendix B), and define
$\\{\boldsymbol{a}^{s},\boldsymbol{b}^{s}\\}_{s\geq 0}$ via the AMP algorithm
(16). Then we have
$\displaystyle\boldsymbol{v}^{t}=\varphi_{t}(\boldsymbol{a}^{1},\ldots,\boldsymbol{a}^{t};\boldsymbol{v}),\quad
t\geq 1,$
$\displaystyle\boldsymbol{u}^{t}=\phi_{t}(\boldsymbol{b}^{1},\ldots,\boldsymbol{b}^{t};\boldsymbol{y},\boldsymbol{u}),\quad
t\geq 1.$
Further, state evolution determines two collections of $r\times r$ matrices
$(\boldsymbol{T}_{s,t})_{s,t\geq 1},(\boldsymbol{\alpha}_{t})_{t\geq 1}$ such
that for all pseudo-Lipschitz functions
$\psi:\mathbb{R}^{r(t+2)}\rightarrow\mathbb{R}$ of order 2,
$\frac{1}{p}\sum_{j=1}^{p}\psi(\boldsymbol{a}^{1}_{j},\ldots,\boldsymbol{a}^{t}_{j},\boldsymbol{v}_{j},\boldsymbol{\theta}_{j})\stackrel{{\scriptstyle\mathrm{p}}}{{\rightarrow}}\mathbb{E}[\psi(\boldsymbol{\alpha}_{1}\boldsymbol{\Theta}+\boldsymbol{Z}^{1},\ldots,\boldsymbol{\alpha}_{t}\boldsymbol{\Theta}+\boldsymbol{Z}^{t},\boldsymbol{V},\boldsymbol{\Theta})],\\\
$ (17)
where
$(\boldsymbol{\Theta},\boldsymbol{V})\sim\mu_{\boldsymbol{\Theta},\boldsymbol{V}}$
independent of
$(\boldsymbol{Z}^{1},\ldots,\boldsymbol{Z}^{t})\sim\mathsf{N}(\boldsymbol{0},\boldsymbol{T}_{[1:t]})$.
Here $\boldsymbol{T}_{[1:t]}\in\mathbb{R}^{tr\times tr}$ is a positive semi-
definite block matrix with block $(s,s^{\prime})$ given by
$\boldsymbol{T}_{s,s^{\prime}}$ (we emphasize that the construction of all
relevant functions and matrices depends on the model; we describe these
constructions and prove Lemma 1 in Appendix B).
Lemma 1 implies that the estimator $\hat{\boldsymbol{\theta}}^{t}$ in Theorem
1 and 2 can alternatively be viewed as a Lipschitz function
$g_{*}:\mathbb{R}^{r(t+1)}\rightarrow\mathbb{R}^{r}$ of the AMP iterates
$(\boldsymbol{a}^{s})_{s\leq t}$ and side information $\boldsymbol{v}$,
applied row-wise. Thus,
$\ell(\boldsymbol{\theta}_{j},\hat{\boldsymbol{\theta}}_{j}^{t})$ can be
viewed as a pseudo-Lipschitz function of order 2 applied to
$(\boldsymbol{a}_{j}^{s})_{s\leq
t},\boldsymbol{v}_{j},\boldsymbol{\theta}_{j}$; namely,
$\ell(\boldsymbol{\theta}_{j},g_{*}((\boldsymbol{a}_{j}^{s})_{s\leq
t},\boldsymbol{v}_{j}))$. Then, Lemma 1 implies that the limits in Theorems 1
and 2 exist and have lower bound
$\inf
R_{\ell}(g_{*},(\boldsymbol{\alpha}_{s}),(\boldsymbol{T}_{s,s^{\prime}})):=\inf\mathbb{E}[\ell(\boldsymbol{\Theta},g_{*}(\boldsymbol{\alpha}_{1}\boldsymbol{\Theta}+\boldsymbol{Z}^{1},\ldots,\boldsymbol{\alpha}_{t}\boldsymbol{\Theta}+\boldsymbol{Z}^{t},\boldsymbol{V}))],$
(18)
where the infimum is taken over Lipschitz functions $g_{*}$ and matrices
$(\boldsymbol{\alpha}_{s}),(\boldsymbol{T}_{s,s^{\prime}})$ generated by the
state evolution of _some_ AMP algorithm. This lower bound is characterized in
the following sections.
### 5.2 Models and message passing on the computation tree
We introduce two statistical models on trees and a collection of algorithms
which correspond, in a sense we make precise, to the high-dimensional
regression and low-rank matrix estimation models, and AMP algorithms. We
derive lower bounds on the estimation error in these models using information-
theoretic, rather than algorithmic, techniques. We then transfer these to
lower bounds on (18). The models are defined using an infinite connected tree
$\mathcal{T}=(\mathcal{V},\mathcal{F},\mathcal{E})$ consisting of infinite
collections of variable nodes $\mathcal{V}$, factor nodes $\mathcal{F}$, and
edges $\mathcal{E}$. Factor nodes have degree $p$ and have only variables
nodes as neighbors, and variable nodes have degree $n$ and have only factor
nodes as neighbors. These properties define the tree uniquely up to
isomorphism. We denote the set of neighbors of a variable $v$ by $\partial v$,
and similarly define $\partial f$. We call $\mathcal{T}$ the _computation
tree_.
The statistical models are joint distributions over random variables
associated to the nodes and edges of the computation tree.
High-dimensional regression on the computation tree.
The random variables
$\\{(\theta_{v},v_{v})\\}_{v\in\mathcal{V}}\stackrel{{\scriptstyle\mathrm{iid}}}{{\sim}}\mu_{\Theta,V}$,
$\\{(\boldsymbol{w}_{f},u_{f})\\}_{f\in\mathcal{F}}\stackrel{{\scriptstyle\mathrm{iid}}}{{\sim}}\mu_{\boldsymbol{W},U}$,
and
$\\{x_{fv}\\}_{(f,v)\in\mathcal{E}}\stackrel{{\scriptstyle\mathrm{iid}}}{{\sim}}{\mathsf{N}}(0,1/n)$
are generated independently. We assume $\mu_{\Theta,V}$,
$\mu_{\boldsymbol{W},U}$ are as in assumption R3. We define
$y_{f}=h(\sum_{v\in\partial f}x_{fv}\theta_{v},\boldsymbol{w}_{f})$ for $h$ as
in assumption R4. For each $v\in\mathcal{V}$, our objective is to estimate the
coefficient $\theta_{v}$ from data $(y_{f},u_{f})_{f\in\mathcal{F}}$,
$(v_{v})_{v\in\mathcal{V}}$, and $(x_{fv})_{(f,v)\in\mathcal{E}}$.
Low-rank matrix estimation on the computation tree.
The random variables
$\\{(\boldsymbol{\theta}_{v},\boldsymbol{v}_{v})\\}_{v\in\mathcal{V}}\stackrel{{\scriptstyle\mathrm{iid}}}{{\sim}}\mu_{\boldsymbol{\Theta},\boldsymbol{V}}$,
$\\{(\boldsymbol{\lambda}_{f},\boldsymbol{u}_{f})\\}_{f\in\mathcal{F}}$, and
$\\{z_{fv}\\}_{(f,v)\in\mathcal{E}}\stackrel{{\scriptstyle\mathrm{iid}}}{{\sim}}{\mathsf{N}}(0,1/n)$
are generated independently. We assume
$\mu_{\boldsymbol{\Lambda},\boldsymbol{U}}$,
$\mu_{\boldsymbol{\Theta},\boldsymbol{V}}$ are as in assumption M2. For each
$v\in\mathcal{V}$, our objective is to estimate $\boldsymbol{\theta}_{v}$ from
data $(x_{fv})_{(f,v)\in\mathcal{E}}$,
$(\boldsymbol{v}_{v})_{v\in\mathcal{V}}$, and
$(\boldsymbol{u}_{f})_{f\in\mathcal{F}}$.
When ambiguity could result, we will refer to the models of Section 3 as
high-dimensional regression and low-rank matrix estimation _on the graph_
(this terminology is motivated by viewing the models of Section 3 as
equivalent to the tree-based models, except that they are defined with respect
to a finite complete bipartite graph between factor and variable nodes). As on the graph,
we introduce dummy variables $(y_{f})_{f\in\mathcal{F}}$ in the low-rank
matrix estimation problem on the computation tree.
To estimate $\boldsymbol{\theta}_{v}$, we introduce the class of _message
passing algorithms_. A message passing algorithm is defined by sequences of
Lipschitz functions
$(f_{t}:\mathbb{R}^{r(t+1)+1}\rightarrow\mathbb{R}^{r})_{t\geq 0}$,
$(g_{t}:\mathbb{R}^{r(t+1)}\rightarrow\mathbb{R}^{r})_{t\geq 1}$. For each
edge $(f,v)\in\mathcal{E}$, it generates sequences
$(\boldsymbol{a}_{v\rightarrow f}^{t})_{t\geq 1}$,
$(\boldsymbol{q}_{v\rightarrow f}^{t})_{t\geq 1}$,
$(\boldsymbol{b}_{f\rightarrow v}^{t})_{t\geq 1}$, and
$(\boldsymbol{r}_{f\rightarrow v}^{t})_{t\geq 0}$ of vectors in
$\mathbb{R}^{r}$, called _messages_ , according to
$\begin{gathered}\boldsymbol{a}_{v\rightarrow
f}^{t+1}=\sum_{f^{\prime}\in\partial v\setminus
f}x_{f^{\prime}v}\boldsymbol{r}_{f^{\prime}\rightarrow
v}^{t},\qquad\boldsymbol{r}_{f\rightarrow
v}^{t}=f_{t}(\boldsymbol{b}_{f\rightarrow
v}^{1},\ldots,\boldsymbol{b}_{f\rightarrow
v}^{t};y_{f},\boldsymbol{u}_{f}),\\\ \boldsymbol{b}_{f\rightarrow
v}^{t}=\sum_{v^{\prime}\in\partial f\setminus
v}x_{fv^{\prime}}\boldsymbol{q}_{v^{\prime}\rightarrow
f}^{t},\qquad\boldsymbol{q}_{v\rightarrow
f}^{t}=g_{t}(\boldsymbol{a}_{v\rightarrow
f}^{1},\ldots,\boldsymbol{a}_{v\rightarrow
f}^{t};\boldsymbol{v}_{v}),\end{gathered}$ (19)
with initialization $\boldsymbol{r}_{f\rightarrow
v}^{0}=f_{0}(y_{f},\boldsymbol{u}_{f})$ and $\boldsymbol{a}_{v\rightarrow
f}^{1}=\sum_{f^{\prime}\in\partial v\setminus
f}x_{f^{\prime}v}\boldsymbol{r}_{f^{\prime}\rightarrow v}^{0}$. We also define
for every variable and factor node the vectors
$\boldsymbol{a}_{v}^{t+1}=\sum_{f\in\partial
v}x_{fv}\boldsymbol{r}_{f\rightarrow
v}^{t},\qquad\boldsymbol{b}_{f}^{t}=\sum_{v\in\partial
f}x_{fv}\boldsymbol{q}_{v\rightarrow f}^{t}.$ (20)
These are called _beliefs_. The vector $\boldsymbol{\theta}_{v}$ is estimated
after $t$ iterations by
$\hat{\boldsymbol{\theta}}_{v}^{t}=g_{*}(\boldsymbol{a}_{v}^{1},\ldots,\boldsymbol{a}_{v}^{t};\boldsymbol{v}_{v})$.
Message passing algorithms on the computation tree correspond to AMP
algorithms on the graph in the sense that their iterates are asymptotically
characterized by the same state evolution.
###### Lemma 2.
In both the high-dimensional regression and low-rank matrix estimation
problems on the tree, the following is true. For any Lipschitz functions
$(f_{t})_{t\geq 0}$, $(g_{t})_{t\geq 1}$, there exist collections of $r\times
r$ matrices $(\boldsymbol{T}_{s,t})_{s,t\geq
1},(\boldsymbol{\alpha}_{t})_{t\geq 1}$ such that for any node $v$ chosen
independently of the randomness in the model, for fixed $t\geq 1$, and under the
asymptotics $n,p\rightarrow\infty$, $n/p\rightarrow\delta\in(0,\infty)$, the
message passing algorithm (19) generates beliefs at $v$ satisfying
$\displaystyle(\boldsymbol{a}^{1}_{v},\ldots,\boldsymbol{a}^{t}_{v},\boldsymbol{v}_{v},\boldsymbol{\theta}_{v})\stackrel{{\scriptstyle\mathrm{W}}}{{\rightarrow}}(\boldsymbol{\alpha}_{1}\boldsymbol{\Theta}+\boldsymbol{Z}^{1},\ldots,\boldsymbol{\alpha}_{t}\boldsymbol{\Theta}+\boldsymbol{Z}^{t},\boldsymbol{V},\boldsymbol{\Theta}),$
where
$(\boldsymbol{\Theta},\boldsymbol{V})\sim\mu_{\boldsymbol{\Theta},\boldsymbol{V}}$
independent of
$(\boldsymbol{Z}^{1},\ldots,\boldsymbol{Z}^{t})\sim\mathsf{N}(\boldsymbol{0},\boldsymbol{T}_{[1:t]})$,
and $\stackrel{{\scriptstyle\mathrm{W}}}{{\rightarrow}}$ denotes convergence
in the Wasserstein metric of order 2 (see Appendix A). Moreover, the matrices
$(\boldsymbol{T}_{s,t})_{s,t\geq 1},(\boldsymbol{\alpha}_{t})_{t\geq 1}$ agree
with those in Lemma 1 when the functions $(f_{t})_{t\geq 0}$, $(g_{t})_{t\geq
1}$ also agree.
We prove Lemma 2 in Appendix C. Lemma 2 and the properties of convergence in
the Wasserstein metric of order 2 (see Lemma 2, Appendix A) imply that for any
message passing estimator $\hat{\boldsymbol{\theta}}_{v}^{t}$ and loss $\ell$,
the risk
$\mathbb{E}[\ell(\boldsymbol{\theta}_{v},\hat{\boldsymbol{\theta}}_{v}^{t})]=\mathbb{E}[\ell(\boldsymbol{\theta}_{v},g_{*}(\boldsymbol{a}_{v}^{1},\ldots,\boldsymbol{a}_{v}^{t};\boldsymbol{v}_{v}))]$
converges to
$R_{\ell}(g_{*},(\boldsymbol{\alpha}_{s}),(\boldsymbol{T}_{s,s^{\prime}}))$,
in agreement with the asymptotic error of the corresponding AMP estimator on
the graph.
On the computation tree, we may lower bound this limiting risk by information-
theoretic techniques, as we now explain. By induction, the estimate
$\hat{\boldsymbol{\theta}}_{v}^{t}$ is a function only of observations
corresponding to edges and nodes in the ball of radius $2t-1$ centered at $v$
on the computation tree. We denote the observations in this local neighborhood
by $\mathcal{T}_{v,2t-1}$. We lower bound the risk of
$\hat{\boldsymbol{\theta}}_{v}^{t}$ by the optimal risk of any measurable
estimator, possibly intractable, which depends only on $\mathcal{T}_{v,2t-1}$;
we call this the _local Bayes risk_. The following lemma characterizes the
local Bayes risk.
###### Lemma 3.
Consider a quadratically-bounded loss
$\ell:\mathbb{R}^{2r}\rightarrow\mathbb{R}_{\geq 0}$. In the high-dimensional
regression (resp. low-rank matrix estimation) model on the computation tree
and under the asymptotics $n,p\rightarrow\infty$,
$n/p\rightarrow\delta\in(0,\infty)$,
$\liminf_{n\rightarrow\infty}\inf_{\hat{\boldsymbol{\theta}}(\cdot)}\mathbb{E}[\ell(\boldsymbol{\theta}_{v},\hat{\boldsymbol{\theta}}(\mathcal{T}_{v,2t-1}))]\geq
R^{*},$
where the infimum is over all measurable functions of $\mathcal{T}_{v,2t-1}$,
and $R^{*}$ is equal to the right-hand side of Eq. (6) (resp. Eq. (8)).
We prove Lemma 3 in Appendix D. Combining Lemma 3 with the preceding
discussion, we conclude that
$R_{\ell}(g_{*},(\boldsymbol{\alpha}_{s}),(\boldsymbol{T}_{s,s^{\prime}}))\geq
R^{*}$ for all Lipschitz functions $g_{*}$ and matrices
$(\boldsymbol{\alpha}_{s}),(\boldsymbol{T}_{s,s^{\prime}})$ generated by the
state evolution of some message passing or, equivalently, by some AMP
algorithm. The bounds (6) and (8) now follow. Moreover, as we show in Appendix
F, the bounds (6) and (8) are achieved by a certain AMP algorithm. The proof
is complete.
## Acknowledgements
MC is supported by the National Science Foundation Graduate Research
Fellowship under Grant No. DGE – 1656518. AM was partially supported by NSF
grants CCF-1714305, IIS-1741162 and by the ONR grant N00014-18-1-2729.
## References
* [AFUZ19] Fabrizio Antenucci, Silvio Franz, Pierfrancesco Urbani, and Lenka Zdeborová. Glassy nature of the hard phase in inference problems. Physical Review X, 9(1):011020, 2019.
* [AW08] Arash A Amini and Martin J Wainwright. High-dimensional analysis of semidefinite relaxations for sparse principal components. In 2008 IEEE International Symposium on Information Theory, pages 2454–2458. IEEE, 2008.
* [BBEKY13] Derek Bean, Peter J Bickel, Noureddine El Karoui, and Bin Yu. Optimal M-estimation in high-dimensional regression. Proceedings of the National Academy of Sciences of the United States of America, 110(36):14563–8, 9 2013.
* [BBH18] Matthew Brennan, Guy Bresler, and Wasim Huleihel. Reducibility and computational lower bounds for problems with planted sparse structure. arXiv:1806.07508, 2018.
* [Bil12] Patrick Billingsley. Probability and Measure. John Wiley & Sons, Inc., Hoboken, New Jersey, anniversary edition, 2012.
* [BKM+19] Jean Barbier, Florent Krzakala, Nicolas Macris, Léo Miolane, and Lenka Zdeborová. Optimal errors and phase transitions in high-dimensional generalized linear models. Proceedings of the National Academy of Sciences, 116(12):5451–5460, 2019.
* [BM11] Mohsen Bayati and Andrea Montanari. The dynamics of message passing on dense graphs, with applications to compressed sensing. IEEE Transactions on Information Theory, 57(2):764–785, Feb 2011.
* [BMN19] Raphael Berthier, Andrea Montanari, and Phan-Minh Nguyen. State evolution for approximate message passing with non-separable functions. Information and Inference, 01 2019.
* [Bol14] Erwin Bolthausen. An iterative construction of solutions of the TAP equations for the Sherrington–Kirkpatrick model. Communications in Mathematical Physics, 325(1):333–366, 2014.
* [BR13] Quentin Berthet and Philippe Rigollet. Optimal detection of sparse principal components in high dimension. The Annals of Statistics, 41(4):1780–1815, 2013.
* [BRT09] Peter J Bickel, Ya’acov Ritov, and Alexandre B Tsybakov. Simultaneous analysis of lasso and dantzig selector. The Annals of Statistics, 37(4):1705–1732, 2009.
* [BY08] Zhi-Dong Bai and Yong-Qua Yin. Limit of the smallest eigenvalue of a large dimensional sample covariance matrix. In Advances In Statistics, pages 108–127. World Scientific, 2008.
* [CC15] Yuxin Chen and Emmanuel Candes. Solving random quadratic systems of equations is nearly as easy as solving linear systems. In Advances in Neural Information Processing Systems, pages 739–747, 2015.
* [Cha06] Sourav Chatterjee. A generalization of the Lindeberg principle. Ann. Probab., 34(6):2061–2076, 11 2006.
* [CLM16] T Tony Cai, Xiaodong Li, and Zongming Ma. Optimal rates of convergence for noisy sparse phase retrieval via thresholded Wirtinger flow. The Annals of Statistics, 44(5):2221–2251, 2016.
* [CLS15] Emmanuel J Candes, Xiaodong Li, and Mahdi Soltanolkotabi. Phase retrieval via Wirtinger flow: Theory and algorithms. IEEE Transactions on Information Theory, 61(4):1985–2007, 2015.
* [CT07] Emmanuel Candès and Terence Tao. The Dantzig selector: statistical estimation when p is much larger than n. Annals of Statistics, 35:2313–2351, 2007.
* [DM16] Yash Deshpande and Andrea Montanari. Sparse PCA via covariance thresholding. The Journal of Machine Learning Research, 17(1):4913–4953, 2016.
* [Dur10] Rick Durrett. Probability: Theory and Examples. Cambridge University Press, New York, NY, fourth edition, 2010.
* [EG15] Lawrence C. Evans and Ronald F. Gariepy. Measure Theory and Fine Properties of Functions. CRC Press, Taylor & Francis Group, Boca Raton, FL, revised edition, 2015.
* [FGR+17] Vitaly Feldman, Elena Grigorescu, Lev Reyzin, Santosh S Vempala, and Ying Xiao. Statistical algorithms and a lower bound for detecting planted cliques. Journal of the ACM (JACM), 64(2):1–37, 2017.
* [FGV17] Vitaly Feldman, Cristobal Guzman, and Santosh Vempala. Statistical query algorithms for mean vector estimation and stochastic convex optimization. In Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 1265–1277. SIAM, 2017.
* [JL09] Iain M Johnstone and Arthur Yu Lu. On consistency and sparsity for principal components analysis in high dimensions. Journal of the American Statistical Association, 104(486):682–693, 2009.
* [JM13] Adel Javanmard and Andrea Montanari. State evolution for general approximate message passing algorithms, with applications to spatial coupling. Information and Inference: A Journal of the IMA, 2(2):115–144, 2013.
* [JM18] Adel Javanmard and Andrea Montanari. Debiasing the lasso: Optimal sample size for gaussian designs. Ann. Statist., 46(6A):2593–2622, 12 2018.
* [JNRS10] Michel Journée, Yurii Nesterov, Peter Richtárik, and Rodolphe Sepulchre. Generalized power method for sparse principal component analysis. Journal of Machine Learning Research, 11(Feb):517–553, 2010.
* [KF09] Daphne Koller and Nir Friedman. Probabilistic graphical models: principles and techniques. MIT press, 2009.
* [KMO10] Raghunandan H Keshavan, Andrea Montanari, and Sewoong Oh. Matrix completion from noisy entries. Journal of Machine Learning Research, 11(Jul):2057–2078, 2010.
* [LM19] Marc Lelarge and Léo Miolane. Fundamental limits of symmetric low-rank matrix estimation. Probability Theory and Related Fields, 173(3-4):859–929, 2019.
* [LR05] E.L. Lehmann and Joseph P. Romano. Testing Statistical Hypotheses. Springer Science+Business Media, Inc., New York, NY, third edition, 2005.
* [LV13] Xiaodong Li and Vladislav Voroninski. Sparse signal recovery from quadratic measurements via convex programming. SIAM Journal on Mathematical Analysis, 45(5):3019–3033, 2013.
* [LW11] Po-Ling Loh and Martin J Wainwright. High-dimensional regression with noisy and missing data: Provable guarantees with non-convexity. In Advances in Neural Information Processing Systems, pages 2726–2734, 2011.
* [Ma13] Zongming Ma. Sparse principal component analysis and iterative thresholding. The Annals of Statistics, 41(2):772–801, 2013.
* [MM19] Marco Mondelli and Andrea Montanari. Fundamental limits of weak recovery with applications to phase retrieval. Found Comput Math, 19:703–773, 06 2019.
* [MPV87] Marc Mézard, Giorgio Parisi, and Miguel Virasoro. Spin glass theory and beyond: An Introduction to the Replica Method and Its Applications, volume 9. World Scientific Publishing Company, 1987.
* [MPZ02] Marc Mézard, Giorgio Parisi, and Riccardo Zecchina. Analytic and algorithmic solution of random satisfiability problems. Science, 297(5582):812–815, 2002.
* [MW15] Tengyu Ma and Avi Wigderson. Sum-of-squares lower bounds for sparse PCA. In Advances in Neural Information Processing Systems, pages 1612–1620, 2015.
* [MX16] Elchanan Mossel and Jiaming Xu. Local algorithms for block models with side information. In Proceedings of the 2016 ACM Conference on Innovations in Theoretical Computer Science, pages 71–80, 2016.
* [Nes18] Yurii Nesterov. Lectures on convex optimization, volume 137. Springer, 2018.
* [NY83] Arkadii Semenovich Nemirovsky and David Borisovich Yudin. Problem complexity and method efficiency in optimization. 1983.
* [PB13] Neal Parikh and Stephen Boyd. Proximal Algorithms. Foundations and Trends in Optimization, 1(3):123–231, 2013.
* [Sly09] Allan Sly. Reconstruction for the potts model. In Proceedings of the forty-first annual ACM symposium on Theory of computing, pages 581–590, 2009.
* [Sol19] Mahdi Soltanolkotabi. Structured signal recovery from quadratic measurements: Breaking sample complexity barriers via nonconvex optimization. IEEE Transactions on Information Theory, 65(4):2374–2400, 2019.
* [SR14] Philip Schniter and Sundeep Rangan. Compressive phase retrieval via generalized approximate message passing. IEEE Transactions on Signal Processing, 63(4):1043–1055, 2014.
* [Ste81] Charles M. Stein. Estimation of the Mean of a Multivariate Normal Distribution. The Annals of Statistics, 9(6):1135–1151, 11 1981.
* [Vaa98] Aad W. van der Vaart. Asymptotic Statistics. Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge University Press, 1998.
* [Ver12] Roman Vershynin. Introduction to the non-asymptotic analysis of random matrices. In Y. Eldar and G. Kutyniok, editors, Compressed Sensing, Theory and Applications, volume 23, chapter 5, pages 210–268. Cambridge University Press, 2012.
* [Vil10] Cédric Villani. Optimal Transport: Old and New. Springer-Verlag Berlin Heidelberg, New York, NY, 2010.
## Appendix A Technical definitions and lemmas
We collect some useful technical definitions and lemmas, some of which we
state without proof. First, we recall the definition of the Wasserstein metric
of order 2 on the space $\mathscrsfs{P}_{2}(\mathbb{R}^{k})$:
$W_{2}(\mu,\mu^{\prime})^{2}=\inf_{\Pi}\mathbb{E}_{(\boldsymbol{A},\boldsymbol{A}^{\prime})\sim\Pi}[\|\boldsymbol{A}-\boldsymbol{A}^{\prime}\|^{2}]\,,$
where the infimum is over couplings $\Pi$ between $\mu$ and $\mu^{\prime}$.
That is, $\Pi\in\mathscrsfs{P}_{2}(\mathbb{R}^{k}\times\mathbb{R}^{k})$ whose
first marginal is $\mu$ and whose second marginal is $\mu^{\prime}$ (where a
marginal here involves a block of $k$ coordinates). It is well known that
$W_{2}(\mu,\mu^{\prime})$ is a
metric on $\mathscrsfs{P}_{2}(\mathbb{R}^{k})$ [Vil10, pg. 94]. When a
sequence of probability distributions $\mu_{n}$ converges to $\mu$ in the
Wasserstein metric of order 2, we write
$\mu_{n}\stackrel{{\scriptstyle\mathrm{W}}}{{\rightarrow}}\mu$. We also write
$\boldsymbol{A}_{n}\stackrel{{\scriptstyle\mathrm{W}}}{{\rightarrow}}\boldsymbol{A}$
when $\boldsymbol{A}_{n}\sim\mu_{n}$, $\boldsymbol{A}\sim\mu$ for such a
sequence.
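As a brief computational aside (not used in any proof), when $k=1$ and the two
measures are empirical with equally many atoms, the infimum over couplings is
attained by the monotone pairing of sorted samples, so $W_{2}$ can be computed
exactly by sorting. The toy sketch below, with illustrative Gaussian samples,
shows the empirical measures converging in $W_{2}$.

```python
import numpy as np

def w2_empirical_1d(x, y):
    """Exact W2 between two empirical measures on R with the same number of
    atoms: the optimal coupling pairs sorted samples monotonically."""
    return np.sqrt(np.mean((np.sort(x) - np.sort(y)) ** 2))

rng = np.random.default_rng(1)
for n in (10**2, 10**4, 10**6):
    a = rng.normal(size=n)              # samples from mu_n
    b = rng.normal(size=n)              # samples from mu
    print(n, w2_empirical_1d(a, b))     # shrinks toward 0 as n grows
```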
###### Lemma 1.
If $f:\mathbb{R}^{r}\rightarrow\mathbb{R}$ and
$g:\mathbb{R}^{r}\rightarrow\mathbb{R}$ are pseudo-Lipschitz of order $k_{1}$
and $k_{2}$, respectively, then their product is pseudo-Lipschitz of order
$k_{1}+k_{2}$.
###### Lemma 2.
If a sequence of random vectors
$\boldsymbol{X}_{n}\stackrel{{\scriptstyle\mathrm{W}}}{{\rightarrow}}\boldsymbol{X}$,
then for any pseudo-Lipschitz function $f$ of order $2$ we have
$\mathbb{E}[f(\boldsymbol{X}_{n})]\rightarrow\mathbb{E}[f(\boldsymbol{X})]$.
###### Lemma 3.
Consider a sequence of random variables $(A_{n},\boldsymbol{B}_{n})$ with
values in $\mathbb{R}\times\mathbb{R}^{k}$ such that
$(A_{n},\boldsymbol{B}_{n})\stackrel{{\scriptstyle\mathrm{d}}}{{\rightarrow}}(A,\boldsymbol{B})$
and $A_{n}\stackrel{{\scriptstyle\mathrm{d}}}{{=}}A$ for all $n$. Then, for
any bounded measurable function
$f:\mathbb{R}\times\mathbb{R}^{k}\rightarrow\mathbb{R}$ for which
$\boldsymbol{b}\mapsto f(a,\boldsymbol{b})$ is continuous for all $a$, we have
$\mathbb{E}[f(A_{n},\boldsymbol{B}_{n})]\rightarrow\mathbb{E}[f(A,\boldsymbol{B})]$.
Further, for any function
$\phi:\mathbb{R}\times\mathbb{R}^{k}\rightarrow\mathbb{R}^{k^{\prime}}$
(possibly unbounded) which is continuous in all but the first coordinate, we
have
$\phi(A_{n},\boldsymbol{B}_{n})\stackrel{{\scriptstyle\mathrm{d}}}{{\rightarrow}}\phi(A,\boldsymbol{B})$.
###### Proof of Lemma 3.
Without loss of generality, $f$ takes values in $[0,1]$. First we show that
for any set $S\times I$ where $S\subset\mathbb{R}$ is measurable and
$I\subset\mathbb{R}^{k}$ is a rectangle whose boundary has probability 0 under
$\mu_{\boldsymbol{B}}$,
$\mu_{A_{n},\boldsymbol{B}_{n}}(S\times
I)\rightarrow\mu_{A,\boldsymbol{B}}(S\times I)\,.$ (21)
First, we show this is true for $S=K$ a closed set. Fix $\epsilon>0$. Let
$\phi_{K}^{\epsilon}:\mathbb{R}\rightarrow[0,1]$ be a continuous function
which is 1 on $K$ and 0 for all points separated from $K$ by distance at least
$\epsilon$. Similarly define
$\phi_{I}^{\epsilon}:\mathbb{R}^{k}\rightarrow\mathbb{R}$. Then
$\displaystyle\mathbb{E}[\phi_{K}^{\epsilon}(A_{n})\phi_{I}^{\epsilon}(\boldsymbol{B}_{n})]\geq\mu_{A_{n},\boldsymbol{B}_{n}}(K\times
I)\geq\mathbb{E}[\phi_{K}^{\epsilon}(A_{n})\phi_{I}^{\epsilon}(\boldsymbol{B}_{n})]-\epsilon-\mu_{\boldsymbol{B}_{n}}(\mathsf{spt}(\phi_{I}^{\epsilon})\setminus
I)\,.$
Because the boundary of $I$ has measure 0 under $\mu_{\boldsymbol{B}}$, we
have $\lim_{\epsilon\rightarrow
0}\limsup_{n\rightarrow\infty}\mu_{\boldsymbol{B}_{n}}(\mathsf{spt}(\phi_{I}^{\epsilon})\setminus
I)=0$. Also, $\lim_{\epsilon\rightarrow
0}\lim_{n\rightarrow\infty}\mathbb{E}[\phi_{K}^{\epsilon}(A_{n})\phi_{I}^{\epsilon}(\boldsymbol{B}_{n})]=\lim_{\epsilon\rightarrow
0}\mathbb{E}[\phi_{K}^{\epsilon}(A)\phi_{I}^{\epsilon}(\boldsymbol{B})]=\mu_{A,\boldsymbol{B}}(K\times
I)$. Thus, taking $\epsilon\rightarrow 0$ after $n\rightarrow\infty$, the
previous display gives $\mu_{A_{n},\boldsymbol{B}_{n}}(K\times
I)\rightarrow\mu_{A,\boldsymbol{B}}(K\times I)$. For $S=G$ an open set, we can
show $\mu_{A_{n},\boldsymbol{B}_{n}}(G\times
I)\rightarrow\mu_{A,\boldsymbol{B}}(G\times I)$ by a similar argument: take
instead $\phi_{K}^{\epsilon}$ to be 0 outside of $G$ and $1$ for all points in
$G$ separated from the boundary by at least $\epsilon$, and likewise for
$\phi_{I}^{\epsilon}$. By Theorem 12.3 of [Bil12], we can construct $K\subset
S\subset G$ such that $K$ is closed and $G$ is open, and
$\mu_{A}(K)>\mu_{A}(S)-\epsilon$, $\mu_{A}(G)<\mu_{A}(S)+\epsilon$. The
previous paragraph implies that
$\displaystyle\mu_{A,\boldsymbol{B}}(S\times I)-\epsilon$
$\displaystyle\leq\mu_{A,\boldsymbol{B}}(K\times
I)\leq\liminf_{n\rightarrow\infty}\mu_{A_{n},\boldsymbol{B}_{n}}(S\times I)$
$\displaystyle\leq\limsup_{n\rightarrow\infty}\mu_{A_{n},\boldsymbol{B}_{n}}(S\times
I)\leq\mu_{A,\boldsymbol{B}}(G\times I)\leq\mu_{A,\boldsymbol{B}}(S\times
I)+\epsilon\,.$
Taking $\epsilon\rightarrow 0$, we conclude (21).
We now show (21) implies the lemma. Fix $\epsilon>0$. Let $M$ be such that
$\mathbb{P}(\boldsymbol{B}_{n}\in[-M,M]^{k})>1-\epsilon$ for all $n$ and
$\mathbb{P}(\boldsymbol{B}\in[-M,M]^{k})>1-\epsilon$, which we may do by
tightness. For each $a$, let $\delta(a,\epsilon)=\sup\\{0<\Delta\leq
M\mid\|\boldsymbol{b}-\boldsymbol{b}^{\prime}\|_{\infty}<\Delta\Rightarrow|f(a,\boldsymbol{b})-f(a,\boldsymbol{b}^{\prime})|<\epsilon\\}$.
Because continuous functions are uniformly continuous on compact sets, the
supremum is over a non-empty, bounded set. Thus, $\delta(a,\epsilon)$ is
positive and bounded above by $M$ for all $a$. Further, $\delta(a,\epsilon)$
is measurable and non-decreasing in $\epsilon$. Pick $\delta_{*}$ such that
$\mathbb{P}(\delta(A,\epsilon)<\delta_{*})<\epsilon$, which we may do because
$\delta(a,\epsilon)$ is positive for all $a$. We can partition $[-M,M]^{k}$
into rectangles with side-widths smaller than $\delta_{*}$ such that the
probability that $\boldsymbol{B}$ lies on the boundary of one of the
partitioning rectangles is 0. Define
$f_{-}(a,\boldsymbol{b}):=\sum_{\iota}\mathbf{1}\\{\boldsymbol{b}\in
I_{\iota}\\}\inf_{\boldsymbol{b}^{\prime}\in
I_{\iota}}f(a,\boldsymbol{b}^{\prime})$ and
$f_{+}(a,\boldsymbol{b}):=\sum_{\iota}\mathbf{1}\\{\boldsymbol{b}\in
I_{\iota}\\}\sup_{\boldsymbol{b}^{\prime}\in
I_{\iota}}f(a,\boldsymbol{b}^{\prime})$, and note that on
$\\{a:\delta(a,\epsilon)\geq\delta_{*}\\}\times[-M,M]^{k}$, we have
$f_{-}(a,\boldsymbol{b})\leq f(a,\boldsymbol{b})\leq f_{+}(a,\boldsymbol{b})$
and $|f(a,\boldsymbol{b})-f_{-}(a,\boldsymbol{b})|<\epsilon$ and
$|f(a,\boldsymbol{b})-f_{+}(a,\boldsymbol{b})|<\epsilon$. Thus, by the
boundedness of $f$ and the fact that
$\\{\delta(A,\epsilon)\geq\delta_{*}\\}\cap\\{\boldsymbol{B}\in[-M,M]^{k}\\}$
(and its analogue for $(A_{n},\boldsymbol{B}_{n})$) has probability at least
$1-2\epsilon$,
$\begin{gathered}\mathbb{E}[f_{-}(A_{n},\boldsymbol{B}_{n})]-2\epsilon<\mathbb{E}[f(A_{n},\boldsymbol{B}_{n})]<\mathbb{E}[f_{+}(A_{n},\boldsymbol{B}_{n})]+2\epsilon\,,\\\
\mathbb{E}[f_{-}(A,\boldsymbol{B})]-2\epsilon<\mathbb{E}[f(A,\boldsymbol{B})]<\mathbb{E}[f_{+}(A,\boldsymbol{B})]+2\epsilon\,.\end{gathered}$
(22)
We show that
$\mathbb{E}[f_{-}(A_{n},\boldsymbol{B}_{n})]\rightarrow\mathbb{E}[f_{-}(A,\boldsymbol{B})]$.
Fix $\xi>0$. Take $0=x_{0}\leq\ldots\leq x_{N}=1$ such that
$x_{j+1}-x_{j}<\xi$ for all $j$. Let
$S_{j\iota}=\\{a\mid\inf_{\boldsymbol{b}^{\prime}\in
I_{\iota}}f(a,\boldsymbol{b}^{\prime})\in[x_{j},x_{j+1})\\}$. Then
$\sum_{\iota,j}x_{j}\mathbf{1}\\{a\in S_{j\iota},\boldsymbol{b}\in
I_{\iota}\\}+\xi\geq
f_{-}(a,\boldsymbol{b})\geq\sum_{\iota,j}x_{j}\mathbf{1}\\{a\in
S_{j\iota},\boldsymbol{b}\in I_{\iota}\\}\,.$
By (21), we conclude $\mathbb{E}[\sum_{\iota,j}x_{j}\mathbf{1}\\{A_{n}\in
S_{j\iota},\boldsymbol{B}_{n}\in
I_{\iota}\\}]\rightarrow\mathbb{E}[\sum_{\iota,j}x_{j}\mathbf{1}\\{A\in
S_{j\iota},\boldsymbol{B}\in I_{\iota}\\}]$. Combined with the previous
display and taking $\xi\rightarrow 0$, we conclude that
$\mathbb{E}[f_{-}(A_{n},\boldsymbol{B}_{n})]\rightarrow\mathbb{E}[f_{-}(A,\boldsymbol{B})]$.
Similarly, we may argue that
$\mathbb{E}[f_{+}(A_{n},\boldsymbol{B}_{n})]\rightarrow\mathbb{E}[f_{+}(A,\boldsymbol{B})]$.
The first statement in the lemma now follows from taking $\epsilon\rightarrow
0$ after $n\rightarrow\infty$ in (22).
The second statement in the lemma follows by observing that for any bounded
continuous function $f:\mathbb{R}^{k^{\prime}}\rightarrow\mathbb{R}$, we have
that $f\circ\phi$ is bounded and is continuous in all but the first
coordinate, so that we may apply the first part of the lemma to conclude
$\mathbb{E}[f(\phi(A_{n},\boldsymbol{B}_{n}))]\rightarrow\mathbb{E}[f(\phi(A,\boldsymbol{B}))]$.
∎
We will sometimes use the following alternative form of recursion (5) defining
the lower bound in the high-dimensional regression model.
###### Lemma 4.
Consider a family, indexed by $x\in\mathbb{R}$, of bounded probability
densities $p(\cdot|x,u)$ with respect to some base measure $\mu_{Y}$. Then for
$\tilde{\tau}>0$ and $\sigma\geq 0$ we have that
$\frac{1}{\tilde{\tau}^{2}}\mathbb{E}[\mathbb{E}[G_{1}|Y,G_{0},U]^{2}]=\mathbb{E}_{G_{0},Y}\left[\left(\frac{\mathrm{d}\phantom{b}}{\mathrm{d}x}\log\mathbb{E}_{G_{1}}p(Y|x+\sigma
G_{0}+\tilde{\tau}G_{1},U)\Big{|}_{x=0}\right)^{2}\right]\,,$
where
$G_{0},G_{1}\stackrel{{\scriptstyle\mathrm{iid}}}{{\sim}}{\mathsf{N}}(0,1)$
and $Y|G_{0},G_{1},U$ has density $p(\cdot|\sigma G_{0}+\tilde{\tau}G_{1},U)$
with respect to $\mu_{Y}$. In particular, the derivatives exist. (In this
case, we may equivalently generate $Y=h(\sigma
G_{0}+\tilde{\tau}G_{1},\boldsymbol{W})$ for
$(\boldsymbol{W},U)\sim\mu_{\boldsymbol{W},U}$).
The preceding lemma applies, in particular, for $p$ as in R4. It then provides
an alternative form of the second equation in recursion (5).
###### Proof of Lemma 4.
We have
$\mathbb{E}_{G_{1}}p(Y|x+\sigma G_{0}+\tilde{\tau}G_{1},U)=\int p(Y|\sigma
G_{0}+s,U)\frac{1}{\sqrt{2\pi}\tilde{\tau}}e^{-\frac{1}{2\tilde{\tau}^{2}}(s-x)^{2}}{\mathrm{d}}s\,,$
so that
$\displaystyle\frac{\mathrm{d}\phantom{b}}{\mathrm{d}x}\mathbb{E}_{G_{1}}p(Y|x+\sigma
G_{0}+\tilde{\tau}G_{1},U)$ $\displaystyle=\frac{1}{\tilde{\tau}^{2}}\int
p(Y|\sigma
G_{0}+s,U)\frac{(s-x)}{\sqrt{2\pi}\tilde{\tau}}e^{-\frac{1}{2\tilde{\tau}^{2}}(s-x)^{2}}{\mathrm{d}}s\,,$
where the boundedness of $p$ allows us to exchange integration and
differentiation. Thus,
$\frac{\mathrm{d}\phantom{b}}{\mathrm{d}x}\log\mathbb{E}_{G_{1}}p(Y|x+\sigma
G_{0}+\tilde{\tau}G_{1},U)=\frac{1}{\tilde{\tau}}\mathbb{E}[G_{1}|Y,G_{0},U]\,.$
The result follows. ∎
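As a numerical sanity check (not part of the paper), the identity above can be
verified by Monte Carlo for a hypothetical Gaussian channel
$p(y|m,u)={\mathsf{N}}(y;m,1)$, for which both sides equal
$(y-\sigma g_{0})/(1+\tilde{\tau}^{2})$ in closed form.

```python
import numpy as np

rng = np.random.default_rng(5)
sigma, ttau, g0, y = 0.7, 0.4, 0.3, 1.1     # fixed conditioning values
G1 = rng.normal(size=10**6)

# Left side: score of the G1-smoothed likelihood at x = 0, by central finite
# differences; common random numbers keep the estimate stable.
def smoothed(x):
    return np.mean(np.exp(-0.5 * (y - (x + sigma * g0 + ttau * G1)) ** 2))

eps = 1e-4
lhs = (np.log(smoothed(eps)) - np.log(smoothed(-eps))) / (2 * eps)

# Right side: (1/ttau) E[G1 | Y=y, G0=g0], via self-normalized importance weights.
w = np.exp(-0.5 * (y - (sigma * g0 + ttau * G1)) ** 2)
rhs = np.mean(w * G1) / (ttau * np.mean(w))

print(lhs, rhs, (y - sigma * g0) / (1 + ttau**2))   # all three should agree
```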
Finally, we collect some results on the Bayes risk with respect to
quadratically-bounded losses
$\ell:\mathbb{R}^{k}\times\mathbb{R}^{k}\rightarrow\mathbb{R}_{\geq 0}$.
Recall that quadratically-bounded means that $\ell$ is pseudo-Lipschitz of
order 2 and also satisfies
$|\ell(\boldsymbol{\vartheta},\boldsymbol{d})-\ell(\boldsymbol{\vartheta}^{\prime},\boldsymbol{d})|\leq
C\left(1+\sqrt{\ell(\boldsymbol{\vartheta},\boldsymbol{d})}+\sqrt{\ell(\boldsymbol{\vartheta}^{\prime},\boldsymbol{d})}\right)\|\boldsymbol{\vartheta}-\boldsymbol{\vartheta}^{\prime}\|\,.$
(23)
We consider a setting
$(\boldsymbol{\Theta},\boldsymbol{V})\sim\mu_{\boldsymbol{\Theta},\boldsymbol{V}}\in\mathscrsfs{P}_{2}(\mathbb{R}^{k}\times\mathbb{R}^{k})$,
$\boldsymbol{Z}\sim{\mathsf{N}}(0,\boldsymbol{I}_{k})$ independent and
$\tau,K,M\geq 0$. Define $\boldsymbol{\Theta}^{(K)}$ by
$\Theta^{(K)}_{i}=\Theta_{i}\mathbf{1}\\{|\Theta_{i}|\leq K\\}$. Denote by
$\mu_{\boldsymbol{\Theta}^{(K)},\boldsymbol{V}}$ the joint distribution of
$\boldsymbol{\Theta}^{(K)}$ and $\boldsymbol{V}$, and by
$\mu_{\boldsymbol{\Theta}^{(K)}|\boldsymbol{V}}:\mathbb{R}^{k}\times\mathcal{B}\rightarrow[0,1]$
a regular conditional probability distribution for $\boldsymbol{\Theta}^{(K)}$
conditioned on $\boldsymbol{V}$. Define the posterior Bayes risk
$R(\boldsymbol{y},\tau,\boldsymbol{v},K,M):=\inf_{\|\boldsymbol{d}\|_{\infty}\leq
M}\int\frac{1}{Z}\ell(\boldsymbol{\vartheta},\boldsymbol{d})e^{-\frac{1}{2\tau^{2}}\|\boldsymbol{y}-\boldsymbol{\vartheta}\|^{2}}\mu_{\boldsymbol{\Theta}^{(K)}|\boldsymbol{V}}(\boldsymbol{v},{\mathrm{d}}\boldsymbol{\vartheta})\,,$
(24)
where $Z=\int
e^{-\frac{1}{2\tau^{2}}\|\boldsymbol{y}-\boldsymbol{\vartheta}\|^{2}}\mu_{\boldsymbol{\Theta}^{(K)}|\boldsymbol{V}}(\boldsymbol{v},{\mathrm{d}}\boldsymbol{\vartheta})$
is a normalization constant. It depends on
$\boldsymbol{y},\tau,\boldsymbol{v},K$. When required for clarity, we write
$Z(\boldsymbol{y},\tau,\boldsymbol{v},K)$.
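For intuition about the definition (24), the sketch below evaluates the
posterior Bayes risk in the simplest possible instance: a hypothetical
two-point prior with $k=1$, squared loss, no conditioning on $\boldsymbol{v}$,
and $K$, $M$ large enough to be inactive. With squared loss the infimum is of
course attained at the posterior mean; the grid search is kept only to mirror
the definition.

```python
import numpy as np

tau = 0.5
support = np.array([-1.0, 1.0])        # hypothetical prior: Theta in {-1, +1}
prior = np.array([0.5, 0.5])

def posterior_bayes_risk(y):
    # Posterior over theta given y = theta + tau * Z, as in Eq. (24).
    w = prior * np.exp(-(y - support) ** 2 / (2 * tau**2))
    w /= w.sum()
    grid = np.linspace(-2.0, 2.0, 2001)              # candidate decisions d
    risks = np.array([(w * (support - d) ** 2).sum() for d in grid])
    j = int(np.argmin(risks))
    return grid[j], risks[j]

d_star, R = posterior_bayes_risk(y=0.3)
print(d_star, R)     # d_star matches the posterior mean under squared loss
```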
###### Lemma 5.
The following properties hold for the Bayes risk with respect to pseudo-
Lipschitz losses of order 2 satisfying (23).
1. (a)
For any $\tau,K,M$, with $K,M$ possibly equal to infinity, the Bayes risk is
equal to the expected posterior Bayes risk. That is,
$\inf_{\hat{\boldsymbol{\theta}}(\cdot)}\mathbb{E}[\ell(\boldsymbol{\Theta}^{(K)},\hat{\boldsymbol{\theta}}(\boldsymbol{\Theta}^{(K)}+\tau\boldsymbol{Z},\boldsymbol{V}))]=\mathbb{E}[R(\boldsymbol{Y}^{(K)},\tau,\boldsymbol{V},K,M)]\,,$
(25)
where $\boldsymbol{Y}^{(K)}=\boldsymbol{\Theta}^{(K)}+\tau\boldsymbol{Z}$ with
$\boldsymbol{Z}\sim{\mathsf{N}}(\boldsymbol{0},\boldsymbol{I}_{k})$
independent of $\boldsymbol{\Theta}^{(K)}$ and the infimum is taken over all
measurable functions $(\mathbb{R}^{k})^{2}\rightarrow[-M,M]^{k}$. Moreover,
$\displaystyle\mathbb{E}[R(\boldsymbol{\Theta}^{(K)}+\tau\boldsymbol{Z},\tau,\boldsymbol{V},K,\infty)]$
$\displaystyle=\lim_{M\rightarrow\infty}\mathbb{E}[R(\boldsymbol{\Theta}^{(K)}+\tau\boldsymbol{Z},\tau,\boldsymbol{V},K,M)]\,.$
(26)
2. (b)
For a fixed $K<\infty$, the posterior Bayes risk is bounded:
$R(\boldsymbol{y},\tau,\boldsymbol{v},K,M)\leq\bar{R}(K)$ for some function
$\bar{R}$ which does not depend on $\boldsymbol{y},\tau,\boldsymbol{v},M$.
Further, for $K<\infty$ the function $(\boldsymbol{y},\tau)\mapsto
R(\boldsymbol{y},\tau,\boldsymbol{v},K,M)$ is continuous on
$\mathbb{R}^{k}\times\mathbb{R}_{>0}$.
3. (c)
The Bayes risk is jointly continuous in truncation level $K$ and noise
variance $\tau$. This is true also at $K=\infty$:
$\displaystyle\mathbb{E}[R(\boldsymbol{\Theta}^{(K)}+\tau\boldsymbol{Z},\tau,\boldsymbol{V},K,\infty)]$
$\displaystyle=\lim_{\begin{subarray}{c}K\rightarrow\infty\\\
\tau^{\prime}\rightarrow\tau\end{subarray}}\mathbb{E}[R(\boldsymbol{\Theta}^{(K)}+\tau^{\prime}\boldsymbol{Z},\tau^{\prime},\boldsymbol{V},K,\infty)]\,,$
(27)
where the limit holds for any way of taking $K,\tau^{\prime}$ to their limits
(i.e., sequentially or simultaneously).
###### Proof of Lemma 5(a).
For any measurable
$\hat{\boldsymbol{\theta}}:\mathbb{R}^{k}\times\mathbb{R}^{k}\rightarrow[-M,M]^{k}$,
$\displaystyle\mathbb{E}[\ell(\boldsymbol{\Theta}^{(K)},\hat{\boldsymbol{\theta}}(\boldsymbol{\Theta}^{(K)}+\tau\boldsymbol{Z},\boldsymbol{V}))]$
$\displaystyle=\mathbb{E}[\mathbb{E}[\ell(\boldsymbol{\Theta}^{(K)},\hat{\boldsymbol{\theta}}(\boldsymbol{\Theta}^{(K)}+\tau\boldsymbol{Z},\boldsymbol{V}))|\boldsymbol{\Theta}^{(K)}+\tau\boldsymbol{Z},\boldsymbol{V}]]$
$\displaystyle\geq\mathbb{E}[R(\boldsymbol{\Theta}^{(K)}+\tau\boldsymbol{Z},\tau,\boldsymbol{V},K,M)]\,.$
(28)
For $M<\infty$, equality obtains. Indeed, we may define
$\hat{\boldsymbol{\theta}}^{(M)}(\boldsymbol{y},\boldsymbol{v};\tau)=\arg\min_{\|\boldsymbol{d}\|_{\infty}\leq
M}\int\frac{1}{Z}\ell(\boldsymbol{\vartheta},\boldsymbol{d})e^{-\frac{1}{2\tau^{2}}\|\boldsymbol{y}-\boldsymbol{\vartheta}\|^{2}}\mu_{\boldsymbol{\Theta}^{(K)}|\boldsymbol{V}}(\boldsymbol{v},{\mathrm{d}}\boldsymbol{\vartheta})\,,$
(29)
because the integral is continuous in $\boldsymbol{d}$ by dominated
convergence. Then
$\mathbb{E}[\ell({\boldsymbol{\Theta}^{(K)}},\hat{\boldsymbol{\theta}}^{(M)}(\boldsymbol{Y},\boldsymbol{V};\tau))]=\mathbb{E}[R(\boldsymbol{Y},\tau,\boldsymbol{V},K,M)]$
when $\boldsymbol{Y}=\boldsymbol{\Theta}^{(K)}+\tau\boldsymbol{Z}$. Observe
$R(\boldsymbol{y},\tau,\boldsymbol{v},K,M)\downarrow
R(\boldsymbol{y},\tau,\boldsymbol{v},K,\infty)$ as $M\rightarrow\infty$ with
the other arguments fixed. Thus,
$\mathbb{E}[R(\boldsymbol{\Theta}^{(K)}+\tau\boldsymbol{Z},\tau,\boldsymbol{V},K,M)]\downarrow\mathbb{E}[R(\boldsymbol{\Theta}^{(K)}+\tau\boldsymbol{Z},\tau,\boldsymbol{V},K,\infty)]\,$
in this limit. Because
$\mathbb{E}[R(\boldsymbol{\Theta}^{(K)}+\tau\boldsymbol{Z},\tau,\boldsymbol{V},K,\infty)]$
is a lower bound on the Bayes risk at $M=\infty$ by (28) and we may achieve
risk arbitrarily close to this lower bound by taking $M\rightarrow\infty$ in
(29), we conclude (25) at $M=\infty$ as well. ∎
###### Proof of Lemma 5(b).
The quantity $R(\boldsymbol{y},\tau,\boldsymbol{v},K,M)$ is non-negative.
Define $\bar{R}(K)=\max_{\|\boldsymbol{\vartheta}\|_{\infty}\leq
K}\ell(\boldsymbol{\vartheta},\boldsymbol{0})$. Observe that
$R(\boldsymbol{y},\tau,\boldsymbol{v},K,M)\leq\bar{R}(K)$ for all
$\boldsymbol{y},\tau,\boldsymbol{v},K,M$. Let
$p^{*}(\boldsymbol{\vartheta}|\boldsymbol{y},\tau,\boldsymbol{v},K)=\frac{1}{Z}e^{-\frac{1}{2\tau^{2}}\|\boldsymbol{y}-\boldsymbol{\vartheta}\|^{2}}$.
For any fixed $\boldsymbol{d}$, we have
$\displaystyle\left\|\nabla_{\boldsymbol{y}}\int\ell(\boldsymbol{\vartheta},\boldsymbol{d})p^{*}(\boldsymbol{\vartheta}|\boldsymbol{y},\tau,\boldsymbol{v},K)\mu_{\boldsymbol{\Theta}^{(K)}|\boldsymbol{V}}(\boldsymbol{v},\mathrm{d}\boldsymbol{\vartheta})\right\|$
$\displaystyle\leq\int\ell(\boldsymbol{\vartheta},\boldsymbol{d})p^{*}(\boldsymbol{\vartheta}|\boldsymbol{y},\tau,v)\left\|\nabla_{\boldsymbol{y}}\log
p^{*}(\boldsymbol{\vartheta}|\boldsymbol{y},\tau,\boldsymbol{v})\right\|\mu_{\boldsymbol{\Theta}^{(K)}|\boldsymbol{V}}(\boldsymbol{v},\mathrm{d}\boldsymbol{\vartheta})$
$\displaystyle\leq\frac{2K\sqrt{k}}{\tau^{2}}\int\ell(\boldsymbol{\vartheta},\boldsymbol{d})p^{*}(\boldsymbol{\vartheta}|\boldsymbol{y},\tau,\boldsymbol{v})\mu_{\boldsymbol{\Theta}^{(K)}|\boldsymbol{V}}(\boldsymbol{v},\mathrm{d}\boldsymbol{\vartheta})\,,$
where we have used that $\|\nabla_{\boldsymbol{y}}\log
p^{*}(\boldsymbol{\vartheta}|\boldsymbol{y},\tau,\boldsymbol{v})\|=\frac{1}{\tau^{2}}\|\boldsymbol{\vartheta}-\mathbb{E}_{\boldsymbol{\Theta}^{(K)}}[\boldsymbol{\Theta}^{(K)}]\|\leq
2K\sqrt{k}/\tau^{2}$, and the expectation is taken with respect to
$\boldsymbol{\Theta}^{(K)}$ having density
$p^{*}(\boldsymbol{\vartheta}|\boldsymbol{y},\tau,\boldsymbol{v})$ with
respect to $\mu_{\boldsymbol{\Theta}^{(K)}|V}(\boldsymbol{v},\cdot)$. Thus,
for fixed $\tau,\boldsymbol{d},\boldsymbol{v}$ satisfying
$\int\ell(\boldsymbol{\vartheta},\boldsymbol{d})p^{*}(\boldsymbol{\vartheta}|\boldsymbol{y},\tau,\boldsymbol{v})\mu_{\Theta|V}(\boldsymbol{v},\mathrm{d}\boldsymbol{\vartheta})\leq\bar{R}$,
the function
$\boldsymbol{y}\mapsto\int\ell(\boldsymbol{\vartheta},\boldsymbol{d})p^{*}(\boldsymbol{\vartheta}|\boldsymbol{y},\tau,\boldsymbol{v})\mu_{\Theta|V}(\boldsymbol{v},\mathrm{d}\boldsymbol{\vartheta})$
is $2K\sqrt{k}\bar{R}/\tau^{2}$-Lipschitz. Because the infimum defining $R$
can be taken over such $\boldsymbol{d}$ and infima retain a uniform Lipschitz
property, $R(\boldsymbol{y},\tau,\boldsymbol{v},K,M)$ is
$2K\sqrt{k}\bar{R}/\tau^{2}$-Lipschitz in $\boldsymbol{y}$ for fixed
$\tau,\boldsymbol{v},K,M$. By a similar argument, we can establish that
$R(\boldsymbol{y},\tau,\boldsymbol{v},K,M)$ is
$2(K^{2}k+2\|\boldsymbol{y}\|K\sqrt{k})/\bar{\tau}^{3}$-Lipschitz in $\tau$ on
the set $\tau>\bar{\tau}$ for any fixed $\bar{\tau}>0$ and any fixed
$\boldsymbol{y},\boldsymbol{v},K,M$. We conclude $(\boldsymbol{y},\tau)\mapsto
R(\boldsymbol{y},\tau,\boldsymbol{v},K,M)$ is continuous on
$\mathbb{R}^{k}\times\mathbb{R}_{>0}$. Lemma 5(b) has been shown. ∎
###### Proof of Lemma 5(c).
Finally, we prove (27). For any $K>0$, we may write
$\mu_{\boldsymbol{\Theta}^{(K)}|\boldsymbol{V}}(\boldsymbol{v},\cdot)=\mu_{\boldsymbol{\Theta}|\boldsymbol{V}}(\boldsymbol{v},\cdot)|_{[-K,K]^{k}}+\mu_{\boldsymbol{\Theta}|\boldsymbol{V}}(\boldsymbol{v},([-K,K]^{k})^{c})\delta_{\boldsymbol{0}}(\cdot)\,.$
(30)
(Precisely, for any regular conditional probability distribution
$\mu_{\boldsymbol{\Theta}|\boldsymbol{V}}$ for $\boldsymbol{\Theta}$ given
$\boldsymbol{V}$, this formula gives a valid version of a regular conditional
probability distribution for $\boldsymbol{\Theta}^{(K)}$ given
$\boldsymbol{V}$; we use this version throughout the proof.)
Choose $\bar{K},\epsilon^{\prime}>0$ such that
$|\tau^{\prime}-\tau|<\epsilon^{\prime}$ implies
$\int_{[-\bar{K},\bar{K}]^{k}}\frac{1}{Z(\boldsymbol{y},\tau^{\prime},\boldsymbol{v},\infty)}e^{-\frac{1}{2{\tau^{\prime}}^{2}}\|\boldsymbol{y}-\boldsymbol{\vartheta}\|^{2}}\mu_{\boldsymbol{\Theta}|\boldsymbol{V}}(\boldsymbol{v},{\mathrm{d}}\boldsymbol{\vartheta})\geq\frac{1}{2}\int\frac{1}{Z(\boldsymbol{y},\tau,\boldsymbol{v},\infty)}e^{-\frac{1}{2\tau^{2}}\|\boldsymbol{y}-\boldsymbol{\vartheta}\|^{2}}\mu_{\boldsymbol{\Theta}|\boldsymbol{V}}(\boldsymbol{v},{\mathrm{d}}\boldsymbol{\vartheta})\,.$
Fix $\epsilon>0$ and $K^{\prime}>K>0$ with $K^{\prime}$ possibly equal to
infinity. By (24), we may choose $\boldsymbol{d}^{*}$ such that
$\int\frac{1}{Z(\boldsymbol{y},\tau,\boldsymbol{v},K)}\ell(\boldsymbol{\vartheta},\boldsymbol{d}^{*})e^{-\frac{1}{2\tau^{2}}\|\boldsymbol{y}-\boldsymbol{\vartheta}\|^{2}}\mu_{\boldsymbol{\Theta}^{(K)}|\boldsymbol{V}}(\boldsymbol{v},{\mathrm{d}}\boldsymbol{\vartheta})\leq(1+\epsilon)R(\boldsymbol{y},\tau,\boldsymbol{v},K,\infty)\,.$
(31)
By the definition of $\bar{K}$, there exists
$\boldsymbol{\vartheta}^{*}\in[-\bar{K},\bar{K}]^{k}$ such that
$\ell(\boldsymbol{\vartheta}^{*},\boldsymbol{d}^{*})\leq
2(1+\epsilon)R(\boldsymbol{y},\tau,\boldsymbol{v},K,\infty)\,.$
By (23), we conclude that
$\displaystyle\ell(\boldsymbol{\vartheta},\boldsymbol{d}^{*})$
$\displaystyle\leq
C\left(1+\sqrt{2(1+\epsilon)R(\boldsymbol{y},\tau,\boldsymbol{v},K,\infty)}+\sqrt{\ell(\boldsymbol{\vartheta},\boldsymbol{d}^{*})}\right)\|\boldsymbol{\vartheta}-\boldsymbol{\vartheta}^{*}\|\,,$
whence
$\ell(\boldsymbol{\vartheta},\boldsymbol{d}^{*})\leq\left(1+\sqrt{2(1+\epsilon)R(\boldsymbol{y},\tau,\boldsymbol{v},K,\infty)}+3C\|\boldsymbol{\vartheta}-\boldsymbol{\vartheta}^{*}\|\right)^{2}\,.$
(32)
Then
$\displaystyle\left|\int\ell(\boldsymbol{\vartheta},\boldsymbol{d}^{*})e^{-\frac{1}{2{\tau^{\prime}}^{2}}\|\boldsymbol{y}-\boldsymbol{\vartheta}\|^{2}}\mu_{\boldsymbol{\Theta}^{(K^{\prime})}|\boldsymbol{V}}(\boldsymbol{v},{\mathrm{d}}\boldsymbol{\vartheta})-\int\ell(\boldsymbol{\vartheta},\boldsymbol{d}^{*})e^{-\frac{1}{2\tau^{2}}\|\boldsymbol{y}-\boldsymbol{\vartheta}\|^{2}}\mu_{\boldsymbol{\Theta}^{(K)}|\boldsymbol{V}}(\boldsymbol{v},{\mathrm{d}}\boldsymbol{\vartheta})\right|$
$\displaystyle\qquad\leq\left|\int\ell(\boldsymbol{\vartheta},\boldsymbol{d}^{*})e^{-\frac{1}{2{\tau^{\prime}}^{2}}\|\boldsymbol{y}-\boldsymbol{\vartheta}\|^{2}}\mu_{\boldsymbol{\Theta}^{(K^{\prime})}|\boldsymbol{V}}(\boldsymbol{v},{\mathrm{d}}\boldsymbol{\vartheta})-\int\ell(\boldsymbol{\vartheta},\boldsymbol{d}^{*})e^{-\frac{1}{2{\tau^{\prime}}^{2}}\|\boldsymbol{y}-\boldsymbol{\vartheta}\|^{2}}\mu_{\boldsymbol{\Theta}^{(K)}|\boldsymbol{V}}(\boldsymbol{v},{\mathrm{d}}\boldsymbol{\vartheta})\right|$
$\displaystyle\qquad\qquad+\left|\int\ell(\boldsymbol{\vartheta},\boldsymbol{d}^{*})e^{-\frac{1}{2{\tau^{\prime}}^{2}}\|\boldsymbol{y}-\boldsymbol{\vartheta}\|^{2}}\mu_{\boldsymbol{\Theta}^{(K)}|\boldsymbol{V}}(\boldsymbol{v},{\mathrm{d}}\boldsymbol{\vartheta})-\int\ell(\boldsymbol{\vartheta},\boldsymbol{d}^{*})e^{-\frac{1}{2{\tau}^{2}}\|\boldsymbol{y}-\boldsymbol{\vartheta}\|^{2}}\mu_{\boldsymbol{\Theta}^{(K)}|\boldsymbol{V}}(\boldsymbol{v},{\mathrm{d}}\boldsymbol{\vartheta})\right|$
$\displaystyle\qquad\leq\int_{([-K,K]^{k})^{c}}\ell(\boldsymbol{\vartheta},\boldsymbol{d}^{*})e^{-\frac{1}{2{\tau^{\prime}}^{2}}\|\boldsymbol{y}-\boldsymbol{\vartheta}\|^{2}}\mu_{\boldsymbol{\Theta}|\boldsymbol{V}}(\boldsymbol{v},{\mathrm{d}}\boldsymbol{\vartheta})+\ell(\boldsymbol{0},\boldsymbol{d}^{*})e^{-\frac{1}{2{\tau^{\prime}}^{2}}\|\boldsymbol{y}\|^{2}}\mu_{\boldsymbol{\Theta}|\boldsymbol{V}}(\boldsymbol{v},([-K,K]^{k})^{c})$
$\displaystyle\qquad\qquad+\left|\int\ell(\boldsymbol{\vartheta},\boldsymbol{d}^{*})e^{-\frac{1}{2{\tau^{\prime}}^{2}}\|\boldsymbol{y}-\boldsymbol{\vartheta}\|^{2}}\mu_{\boldsymbol{\Theta}^{(K)}|\boldsymbol{V}}(\boldsymbol{v},{\mathrm{d}}\boldsymbol{\vartheta})-\int\ell(\boldsymbol{\vartheta},\boldsymbol{d}^{*})e^{-\frac{1}{2{\tau}^{2}}\|\boldsymbol{y}-\boldsymbol{\vartheta}\|^{2}}\mu_{\boldsymbol{\Theta}^{(K)}|\boldsymbol{V}}(\boldsymbol{v},{\mathrm{d}}\boldsymbol{\vartheta})\right|$
$\displaystyle\qquad\leq\xi(K,\tau^{\prime})(1+R(\boldsymbol{y},\tau,\boldsymbol{v},K,\infty))\,,$
for some $\xi(K,\tau^{\prime})\rightarrow 0$ as $K\rightarrow\infty$,
$\tau^{\prime}\rightarrow\tau$ because the conditional measure
$\mu_{\boldsymbol{\Theta}|\boldsymbol{V}}(\boldsymbol{v},\cdot)$ has finite
second moment and $\ell$ is bounded by (32). Then, by (31),
$\displaystyle
Z(\boldsymbol{y},\tau^{\prime},\boldsymbol{v},K^{\prime})R(\boldsymbol{y},\tau^{\prime},\boldsymbol{v},K^{\prime},\infty)$
$\displaystyle\leq\int\ell(\boldsymbol{\vartheta},\boldsymbol{d}^{*})e^{-\frac{1}{2\tau^{2}}\|\boldsymbol{y}-\boldsymbol{\vartheta}\|^{2}}\mu_{\boldsymbol{\Theta}^{(K^{\prime})}|\boldsymbol{V}}(\boldsymbol{v},{\mathrm{d}}\boldsymbol{\vartheta})$
$\displaystyle\leq(1+\epsilon)Z(\boldsymbol{y},\tau,\boldsymbol{v},K)R(\boldsymbol{y},\tau,\boldsymbol{v},K,\infty)+\xi(K,\tau^{\prime})(1+R(\boldsymbol{y},\tau,\boldsymbol{v},K,\infty))\,.$
By dominated convergence, we have that
$Z(\boldsymbol{y},\tau^{\prime},\boldsymbol{v},K^{\prime})\rightarrow
Z(\boldsymbol{y},\tau,\boldsymbol{v},\infty)$ as
$\tau^{\prime}\rightarrow\tau,K^{\prime}\rightarrow\infty$. Also,
$\bar{R}(K)=\max_{\|\boldsymbol{\vartheta}\|_{\infty}\leq
K}\ell(\boldsymbol{\vartheta},\boldsymbol{0})$ cannot diverge at finite $K$.
Thus, applying the previous display with $K,\epsilon$ fixed allows us to
conclude that $R(\boldsymbol{y},\tau,\boldsymbol{v},K^{\prime},\infty)$ is
uniformly bounded over $K^{\prime}>K$ and $\tau^{\prime}$ in a neighborhood of
$\tau$. Then, taking $K^{\prime}=\infty$ and $K\rightarrow\infty$,
$\tau^{\prime}\rightarrow\tau$ followed by $\epsilon\rightarrow 0$ allows us
to conclude that
$\displaystyle\lim_{\begin{subarray}{c}K\rightarrow\infty\\\
\tau^{\prime}\rightarrow\tau\end{subarray}}R(\boldsymbol{y},\tau^{\prime},\boldsymbol{v},K,\infty)=R(\boldsymbol{y},\tau,\boldsymbol{v},\infty,\infty)$
(33)
for every fixed $\boldsymbol{y},\boldsymbol{v}$. Moreover,
$\displaystyle R(\boldsymbol{y},\tau,\boldsymbol{v},K,M)$
$\displaystyle=\inf_{\|\boldsymbol{d}\|_{\infty}\leq
M}\int\frac{1}{Z}\ell(\boldsymbol{\vartheta},\boldsymbol{d})e^{-\frac{1}{2\tau^{2}}\|\boldsymbol{y}-\boldsymbol{\vartheta}\|^{2}}\mu_{\boldsymbol{\Theta}^{(K)}|\boldsymbol{V}}(\boldsymbol{v},{\mathrm{d}}\boldsymbol{\vartheta})$
$\displaystyle\leq\int\frac{1}{Z}\ell(\boldsymbol{\vartheta},\boldsymbol{0})e^{-\frac{1}{2\tau^{2}}\|\boldsymbol{y}-\boldsymbol{\vartheta}\|^{2}}\mu_{\boldsymbol{\Theta}^{(K)}|\boldsymbol{V}}(\boldsymbol{v},{\mathrm{d}}\boldsymbol{\vartheta})$
$\displaystyle\leq\int\frac{1}{Z}C(1+\|\boldsymbol{\vartheta}\|^{2})e^{-\frac{1}{2\tau^{2}}\|\boldsymbol{y}-\boldsymbol{\vartheta}\|^{2}}\mu_{{\boldsymbol{\Theta}^{(K)}}|\boldsymbol{V}}(\boldsymbol{v},{\mathrm{d}}\boldsymbol{\vartheta})$
$\displaystyle=C(1+\mathbb{E}[\|{\boldsymbol{\Theta}^{(K)}}\|^{2}|{\boldsymbol{\Theta}^{(K)}}+\tau\boldsymbol{Z}=\boldsymbol{y},\boldsymbol{V}=\boldsymbol{v}])\,.$
Thus,
$R(\boldsymbol{\Theta}^{(K)}+\tau\boldsymbol{Z},\tau,\boldsymbol{V},K,M)$ is
uniformly integrable as we vary $\tau,K,M$. Because the total variation
distance between
$(\boldsymbol{\Theta}^{(K)}+\tau^{\prime}\boldsymbol{Z},\boldsymbol{V})$ and
$(\boldsymbol{\Theta}+\tau\boldsymbol{Z},\boldsymbol{V})$ goes to 0 as
$K\rightarrow\infty$ and $\tau^{\prime}\rightarrow\tau$, for any discrete
sequence $(K,\tau^{\prime})\rightarrow(\infty,\tau)$, there exists a
probability space containing variables
$\tilde{\boldsymbol{Y}}^{(K,\tau^{\prime})},\tilde{\boldsymbol{V}},\tilde{\boldsymbol{Y}}$
such that
$(\tilde{\boldsymbol{Y}}^{(K,\tau^{\prime})},\tilde{\boldsymbol{V}})=(\tilde{\boldsymbol{Y}},\tilde{\boldsymbol{V}})$
eventually. Thus, Eq. (33) and uniform integrability imply (27). ∎
## Appendix B Proof for reduction from GFOMs to AMP (Lemma 1)
In this section, we prove Lemma 1.
### B.1 A general change of variables
For any GFOM (1), there is a collection of GFOMs to which it is, up to a
change of variables, equivalent. In this section, we specify these GFOMs and
the corresponding changes of variables.
The change of variables is determined by a collection of $r\times r$ matrices
$(\boldsymbol{\xi}_{t,s})_{t\geq 1,1\leq s\leq t}$,
$(\boldsymbol{\zeta}_{t,s})_{t\geq 1,0\leq s<t}$. We will often omit
subscripts outside of the parentheses. Define recursively the functions
$(f_{t})_{t\geq 0}$, $(\phi_{t})_{t\geq 1}$
$\displaystyle
f_{t}(\boldsymbol{b}^{1},\ldots,\boldsymbol{b}^{t};y,\boldsymbol{u})$
$\displaystyle=F_{t}^{(1)}(\phi_{1}(\boldsymbol{b}^{1};y,\boldsymbol{u}),\ldots,\phi_{t}(\boldsymbol{b}^{1},\ldots,\boldsymbol{b}^{t};y,\boldsymbol{u});y,\boldsymbol{u})$
(34a)
$\displaystyle\phi_{t}(\boldsymbol{b}^{1},\ldots,\boldsymbol{b}^{t};y,\boldsymbol{u})$
$\displaystyle=\boldsymbol{b}^{t}+\sum_{s=0}^{t-1}f_{s}(\boldsymbol{b}^{1},\ldots,\boldsymbol{b}^{s};y,\boldsymbol{u})\boldsymbol{\zeta}_{t,s}^{\mathsf{T}}$
$\displaystyle\qquad\qquad+G_{t}^{(2)}(\phi_{1}(\boldsymbol{b}^{1};y,\boldsymbol{u}),\ldots,\phi_{t-1}(\boldsymbol{b}^{1},\ldots,\boldsymbol{b}^{t-1};y,\boldsymbol{u});y,\boldsymbol{u}),$
initialized by $f_{0}(y,\boldsymbol{u})=F_{0}^{(1)}(y,\boldsymbol{u})$ (here
$\boldsymbol{b}^{s},\boldsymbol{u}\in\mathbb{R}^{r}$), and define recursively
the functions $(g_{t})_{t\geq 1}$, $(\varphi_{t})_{t\geq 1}$
$\displaystyle\varphi_{t+1}(\boldsymbol{a}^{1},\ldots,\boldsymbol{a}^{t+1};\boldsymbol{v})$
$\displaystyle=\boldsymbol{a}^{t+1}+\sum_{s=1}^{t}g_{s}(\boldsymbol{a}^{1},\ldots,\boldsymbol{a}^{s};\boldsymbol{v})\boldsymbol{\xi}_{t,s}^{\mathsf{T}}$
(34b)
$\displaystyle\qquad\qquad+F_{t}^{(2)}(\varphi_{1}(\boldsymbol{a}^{1};\boldsymbol{v}),\ldots,\varphi_{t}(\boldsymbol{a}^{1},\ldots,\boldsymbol{a}^{t};\boldsymbol{v});\boldsymbol{v}),$
$\displaystyle\qquad
g_{t}(\boldsymbol{a}^{1},\ldots,\boldsymbol{a}^{t};\boldsymbol{v})$
$\displaystyle=G_{t}^{(1)}(\varphi_{1}(\boldsymbol{a}^{1};\boldsymbol{v}),\ldots,\varphi_{t}(\boldsymbol{a}^{1},\ldots,\boldsymbol{a}^{t};\boldsymbol{v});\boldsymbol{v}),$
initialized by
$\varphi_{1}(\boldsymbol{a}^{1};\boldsymbol{v})=\boldsymbol{a}^{1}+F_{0}^{(2)}(\boldsymbol{v})$
(here $\boldsymbol{a}^{s},\boldsymbol{v}\in\mathbb{R}^{r}$).
Algebraic manipulation verifies that the iteration
$\displaystyle\boldsymbol{a}^{t+1}$
$\displaystyle=\boldsymbol{X}^{\mathsf{T}}f_{t}(\boldsymbol{b}^{1},\ldots,\boldsymbol{b}^{t};y,\boldsymbol{u})-\sum_{s=1}^{t}g_{s}(\boldsymbol{a}^{1},\ldots,\boldsymbol{a}^{s};\boldsymbol{v})\boldsymbol{\xi}_{t,s}^{\mathsf{T}},$
(35) $\displaystyle\boldsymbol{b}^{t}$
$\displaystyle=\boldsymbol{X}g_{t}(\boldsymbol{a}^{1},\ldots,\boldsymbol{a}^{t};\boldsymbol{v})-\sum_{s=0}^{t-1}f_{s}(\boldsymbol{b}^{1},\ldots,\boldsymbol{b}^{s};y,\boldsymbol{u})\boldsymbol{\zeta}_{t,s}^{\mathsf{T}}$
initialized by
$\boldsymbol{a}^{1}=\boldsymbol{X}^{\mathsf{T}}f_{0}(y,\boldsymbol{u})$
generates sequences $(\boldsymbol{a}^{t})_{t\geq 1}$,
$(\boldsymbol{b}^{t})_{t\geq 1}$ which satisfy
$\displaystyle\boldsymbol{v}^{t}=\varphi_{t}(\boldsymbol{a}^{1},\ldots,\boldsymbol{a}^{t};\boldsymbol{v}),\quad
t\geq 1,$
$\displaystyle\boldsymbol{u}^{t}=\phi_{t}(\boldsymbol{b}^{1},\ldots,\boldsymbol{b}^{t};y,\boldsymbol{u}),\quad
t\geq 1.$
Thus, $(\boldsymbol{\xi}_{t,s})$, $(\boldsymbol{\zeta}_{t,s})$ index a
collection of GFOMs which, up to a change of variables, are equivalent.
### B.2 Approximate message passing and state evolution
We call the iteration (35) an approximate message passing algorithm if the
matrices $(\boldsymbol{\xi}_{t,s}),(\boldsymbol{\zeta}_{t,s})$ satisfy a
certain model-specific recursion involving the functions $f_{t},g_{t}$. The
state evolution characterization of the iterates (see Eq. (17)) holds whenever
the matrices $\boldsymbol{\xi}_{t,s}$, $\boldsymbol{\zeta}_{t,s}$ satisfy this
recursion. In this section, we specify this recursion and the parameters
$(\boldsymbol{\alpha}_{s}),(\boldsymbol{T}_{s,s^{\prime}})$ in both the high-
dimensional regression and low-rank matrix estimation models.
#### B.2.1 High-dimensional regression AMP
In the high-dimensional regression model, $r=1$ and $\xi_{t,s}$,
$\zeta_{t,s}$, $\alpha_{t}$, and $T_{s,s^{\prime}}$ will be scalars (hence,
written with non-bold font). The recursion defining $\xi_{t,s}$, $\zeta_{t,s}$
also defines $(\alpha_{t})$, $(T_{s,s^{\prime}})$ as well as a collection of
scalars $(\Sigma_{s,t})_{s,t\geq 0}$ which did not appear in the statement of
Lemma 1. The recursion, whose lines are implemented in the order in which they
appear, is
$\begin{split}\xi_{t,s}&=\mathbb{E}[\partial_{B^{s}}f_{t}(B^{1},\ldots,B^{t};h(B^{0},W),U)],\quad
1\leq s\leq t,\\\
\alpha_{t+1}&=\mathbb{E}[\partial_{B^{0}}f_{t}(B^{1},\ldots,B^{t};h(B^{0},W),U)],\\\
T_{s+1,t+1}&=\mathbb{E}[f_{s}(B^{1},\ldots,B^{s};h(B^{0},W),U)f_{t}(B^{1},\ldots,B^{t};h(B^{0},W),U)],\quad
0\leq s\leq t,\\\
\zeta_{t,s}&=\frac{1}{\delta}\mathbb{E}[\partial_{Z^{s+1}}g_{t}(\alpha_{1}\Theta+Z^{1},\ldots,\alpha_{t}\Theta+Z^{t};V)],\quad
0\leq s\leq t-1,\\\ \Sigma_{0,t}&=\frac{1}{\delta}\mathbb{E}[\Theta
g_{t}(\alpha_{1}\Theta+Z^{1},\ldots,\alpha_{t}\Theta+Z^{t};V)],\\\
\Sigma_{s,t}&=\frac{1}{\delta}\mathbb{E}[g_{s}(\alpha_{1}\Theta+Z^{1},\ldots,\alpha_{s}\Theta+Z^{s};V)g_{t}(\alpha_{1}\Theta+Z^{1},\ldots,\alpha_{t}\Theta+Z^{t};V)],\quad
1\leq s\leq t,\end{split}$ (36)
where $\Theta\sim\mu_{\Theta}$, $U\sim\mu_{U}$, $V\sim\mu_{V}$,
$W\sim\mu_{W}$,
$(B^{0},\ldots,B^{t})\sim\mathsf{N}(\boldsymbol{0},\boldsymbol{\Sigma}_{[0:t]})$,
$(Z^{1},\ldots,Z^{t})\sim\mathsf{N}(\boldsymbol{0},\boldsymbol{T}_{[1:t]})$,
all independent. We initialize just before the second line with
$\Sigma_{0,0}=\mathbb{E}[\Theta^{2}]$.
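For orientation, the snippet below (illustrative only: the sparse prior,
threshold $\lambda$, and noise level are hypothetical choices, not the
paper's) runs the classical memory-one AMP for the LASSO. In this special case
only $\zeta_{t,t-1}$ is nonzero, and it is estimated by the empirical analogue
of the fourth line of (36).

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, T, lam, sigma = 300, 100, 15, 0.1, 0.1
delta = n / p
theta = rng.choice([0.0, 1.0], size=p, p=[0.7, 0.3])   # hypothetical sparse signal
X = rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, p))
y = X @ theta + sigma * rng.normal(size=n)

def soft(x, thr):                     # soft threshold: the denoiser g_t
    return np.sign(x) * np.maximum(np.abs(x) - thr, 0.0)

theta_hat = np.zeros(p)
z = y.copy()                          # residual, playing the role of f_t
for t in range(T):
    a = theta_hat + X.T @ z           # effective Gaussian observation of theta
    theta_hat = soft(a, lam)
    onsager = (np.abs(a) > lam).mean() / delta   # empirical zeta_{t,t-1}
    z = y - X @ theta_hat + onsager * z
    print(t, float(np.mean((theta_hat - theta) ** 2)))   # estimation error
```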
Eq. (17) for $(\alpha_{s}),(T_{s,s^{\prime}})$ defined in this way is a
special case of Proposition 5 of [JM13], as we now explain. We fix iteration
$t$ and design an algorithm that agrees, after a change of variables, with
iteration (16) up to iteration $t$ and to which we can apply the results of
[JM13]. Because we take $n,p\rightarrow\infty$ before $t\rightarrow\infty$,
this establishes the result.
We view the first $t$ iterations of (16) as acting on matrices
$\tilde{\boldsymbol{a}}^{s}\in\mathbb{R}^{p\times(t+1)}$ and
$\tilde{\boldsymbol{b}}^{s}\in\mathbb{R}^{n\times(t+1)}$ as follows. Define
$\tilde{\boldsymbol{a}}^{s}$ to be the matrix whose first column is
$\boldsymbol{\theta}$ and whose $i^{\text{th}}$ column is
$\boldsymbol{a}^{i-1}$ for $2\leq i\leq s+1$ and is $\boldsymbol{0}$ for
$i>s+1$; define $\tilde{\boldsymbol{b}}^{s}$ to be the matrix whose first
column is $\boldsymbol{X}\boldsymbol{\theta}$ and whose $i^{\text{th}}$ column
is $\boldsymbol{b}^{i-1}$ for $2\leq i\leq s+1$ and is $\boldsymbol{0}$ for
$i>s+1$. The following change of variables transforms (16) into equations (28)
and (29) of Proposition 5 in [JM13]. Our notation is on the right and is
separated from the notation of [JM13] by the symbol “$\leftarrow$”.
$\displaystyle\tilde{A}\leftarrow X,$ $\displaystyle
u^{s}(i)\leftarrow\begin{cases}\boldsymbol{X}\boldsymbol{\theta}&i=1,\\\
\boldsymbol{b}^{i-1}&2\leq i\leq s+1,\\\
\boldsymbol{0}&\text{otherwise},\end{cases}\qquad\text{and}\qquad
v^{s}(i)\leftarrow\begin{cases}\boldsymbol{a}^{i-1}-\alpha_{i-1}\boldsymbol{\theta}&2\leq
i\leq s+1,\\\ \boldsymbol{0}&\text{otherwise},\end{cases}$ $\displaystyle
y(i)\leftarrow\begin{cases}\boldsymbol{v}&i=1,\\\ \boldsymbol{\theta}&i=2,\\\
\boldsymbol{0}&\text{otherwise},\end{cases}\qquad\text{and}\qquad
w(i)\leftarrow\begin{cases}\boldsymbol{u}&i=1,\\\ \boldsymbol{w}&i=2,\\\
\boldsymbol{0}&\text{otherwise},\end{cases}$
$\displaystyle\widehat{e}(v,y;s)(i)\leftarrow\begin{cases}y(2)&i=1,\\\
g_{i-1}(v(2)+\alpha_{1}y(2),\ldots,v(i+1)+\alpha_{i}y(2);y(1))&2\leq i\leq
s+1,\\\ \boldsymbol{0}&\text{otherwise},\end{cases}$
$\displaystyle\widehat{h}(u,w;s)(i)\leftarrow\begin{cases}f_{i-1}(u(2),\ldots,u(i+1);h(u(1),w(2)),w(1)),&1\leq
i\leq s+1,\\\ \boldsymbol{0}&\text{otherwise},\end{cases}$
where the “$(i)$” notation indexes columns of a matrix. The Onsager correction
coefficients $(\xi_{t,s})$ and $(\zeta_{t,s})$ correspond, after a change of
variables, to entries in the matrices $\mathsf{D}_{s}$ and $\mathsf{B}_{s}$ in
[JM13].
$\displaystyle(\mathsf{D}_{s})_{i,j}=\mathbb{E}[\partial_{u(j)}\widehat{h}(U,W;s)]\leftarrow\begin{cases}\mathbb{E}[\partial_{B^{j-1}}f_{i-1}(B^{1},\ldots,B^{i};h(B^{0},W),U)]&1\leq
j-1\leq i\leq s+1,\\\ 0&\text{otherwise},\end{cases},$
$\displaystyle(\mathsf{B}_{s})_{i,j}=\frac{1}{\delta}\mathbb{E}[\partial_{v(j)}\widehat{e}(V,Y;i)]\leftarrow\begin{cases}0&i=1\text{
or }j=1,\\\
\frac{1}{\delta}\mathbb{E}[\partial_{Z^{j-1}}g_{i}(\alpha_{1}\Theta+Z^{1},\ldots,\alpha_{i}\Theta+Z^{i};V)]&2\leq
j\leq i+1\leq s+2,\\\ 0&\text{otherwise}.\end{cases}$
The Onsager coefficients and state evolution coefficients are arrived at
through the change of variables:
$\displaystyle(\mathsf{B}_{s})_{s+1,s^{\prime}+2}\leftarrow\zeta_{s,s^{\prime}},\;\;\;\;(\mathsf{D}_{s})_{s+1,s^{\prime}+1}\leftarrow\xi_{s,s^{\prime}},\;\;\;\;(\mathsf{D}_{s})_{s+1,1}\leftarrow\alpha_{s}.$
We remark that in [JM13] the quantities $(\mathsf{B}_{s})_{s+1,s^{\prime}+2}$,
$(\mathsf{D}_{s})_{s+1,s^{\prime}+1}$, and $(\mathsf{D}_{s})_{s+1,1}$ are
empirical averages. Because they concentrate well around their population
averages, we may replace them with their population averages, as we do here,
without affecting the validity of state evolution. This observation is common
in the AMP literature: see, for example, the relationship between Theorem 1
and Corollary 2 of [BMN19]. The state evolution matrices now correspond to
$\displaystyle\mathbb{E}[V^{s+1}(s+1)V^{s+1}(s^{\prime}+1)]$
$\displaystyle=\mathbb{E}[\widehat{h}(U,W;s)(s)\widehat{h}(U,W;s)(s^{\prime})]$
$\displaystyle\leftarrow\mathbb{E}[f_{s-1}(B^{1},\ldots,B^{s-1};h(B^{0},W),U)f_{s^{\prime}-1}(B^{1},\ldots,B^{s^{\prime}-1};h(B^{0},W),U)]$
$\displaystyle=T_{s,s^{\prime}},$
$\displaystyle\mathbb{E}[U^{s+1}(s+1)U^{s+1}(s^{\prime}+1)]$
$\displaystyle=\frac{1}{\delta}\mathbb{E}[\widehat{e}(V,Y;s+1)(s+1)\widehat{e}(V,Y;s+1)(s^{\prime}+1)]$
$\displaystyle\leftarrow\frac{1}{\delta}\mathbb{E}[g_{s}(\alpha_{1}\Theta+Z^{1},\ldots,\alpha_{s}\Theta+Z^{s};V)g_{s^{\prime}}(\alpha_{1}\Theta+Z^{1},\ldots,\alpha_{s^{\prime}}\Theta+Z^{s^{\prime}};V)]$
$\displaystyle=\Sigma_{s,s^{\prime}}.$
From these changes of variables, Eq. (17) holds in the high-dimensional
regression model from Theorem 1 and Proposition 5 of [JM13].
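As a companion to the sketch above (again with hypothetical parameters), the
diagonal of the recursion (36), i.e. the effective noise variance $T_{t,t}$,
can be evaluated by Monte Carlo; in the memory-one soft-thresholding case it
reduces to the familiar scalar fixed-point iteration.

```python
import numpy as np

rng = np.random.default_rng(3)
delta, lam, sigma, mc = 3.0, 0.1, 0.1, 10**6     # hypothetical parameters

def soft(x, thr):
    return np.sign(x) * np.maximum(np.abs(x) - thr, 0.0)

Theta = rng.choice([0.0, 1.0], size=mc, p=[0.7, 0.3])   # samples from mu_Theta
Z = rng.normal(size=mc)

tau2 = sigma**2 + np.mean(Theta**2) / delta      # variance at the first iterate
for t in range(10):
    mse = np.mean((soft(Theta + np.sqrt(tau2) * Z, lam) - Theta) ** 2)
    tau2 = sigma**2 + mse / delta                # diagonal entry T_{t+1,t+1}
    print(t, tau2)                               # converges to a fixed point
```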
#### B.2.2 Low-rank matrix estimation AMP
In the low-rank matrix estimation model, the recursion defining
$(\boldsymbol{\xi}_{t,s})$, $(\boldsymbol{\zeta}_{t,s})$ also defines
$(\boldsymbol{\alpha}_{t})$, $(\boldsymbol{T}_{s,t})_{s,t\geq 1}$ as well as
collections of $r\times r$ matrices $(\boldsymbol{\gamma}_{t})_{t\geq 1}$,
$(\boldsymbol{\Sigma}_{s,t})_{s,t\geq 0}$ which did not appear in Lemma 1. The
recursion, whose lines are implemented in the order in which they appear, is
$\begin{split}\boldsymbol{\xi}_{t,s}&=\mathbb{E}[\nabla_{\tilde{\boldsymbol{Z}}^{s}}f_{t}(\boldsymbol{\gamma}_{1}\boldsymbol{\Lambda}+\tilde{\boldsymbol{Z}}^{1},\ldots,\boldsymbol{\gamma}_{t}\boldsymbol{\Lambda}+\tilde{\boldsymbol{Z}}^{t};0,\boldsymbol{U})],\quad
1\leq s\leq t,\\\
\boldsymbol{\alpha}_{t+1}&=\mathbb{E}[f_{t}(\boldsymbol{\gamma}_{1}\boldsymbol{\Lambda}+\tilde{\boldsymbol{Z}}^{1},\ldots,\boldsymbol{\gamma}_{t}\boldsymbol{\Lambda}+\tilde{\boldsymbol{Z}}^{t};0,\boldsymbol{U})\boldsymbol{\Lambda}^{\mathsf{T}}],\\\
\boldsymbol{T}_{s+1,t+1}&=\mathbb{E}[f_{s}(\boldsymbol{\gamma}_{1}\boldsymbol{\Lambda}+\tilde{\boldsymbol{Z}}^{1},\ldots,\boldsymbol{\gamma}_{s}\boldsymbol{\Lambda}+\tilde{\boldsymbol{Z}}^{s};0,\boldsymbol{U})f_{t}(\boldsymbol{\gamma}_{1}\boldsymbol{\Lambda}+\tilde{\boldsymbol{Z}}^{1},\ldots,\boldsymbol{\gamma}_{t}\boldsymbol{\Lambda}+\tilde{\boldsymbol{Z}}^{t};0,\boldsymbol{U})^{\mathsf{T}}],\;0\leq s\leq
t,\\\
\boldsymbol{\zeta}_{t,s}&=\frac{1}{\delta}\mathbb{E}[\nabla_{\boldsymbol{Z}^{s+1}}g_{t}(\boldsymbol{\alpha}_{1}\boldsymbol{\Theta}+\boldsymbol{Z}^{1},\ldots,\boldsymbol{\alpha}_{t}\boldsymbol{\Theta}+\boldsymbol{Z}^{t};\boldsymbol{V})],\quad
0\leq s\leq t-1,\\\
\boldsymbol{\gamma}_{t}&=\frac{1}{\delta}\mathbb{E}[g_{t}(\boldsymbol{\alpha}_{1}\boldsymbol{\Theta}+\boldsymbol{Z}^{1},\ldots,\boldsymbol{\alpha}_{t}\boldsymbol{\Theta}+\boldsymbol{Z}^{t};\boldsymbol{V})\boldsymbol{\Theta}^{\mathsf{T}}],\\\
\boldsymbol{\Sigma}_{s,t}&=\frac{1}{\delta}\mathbb{E}[g_{s}(\boldsymbol{\alpha}_{1}\boldsymbol{\Theta}+\boldsymbol{Z}^{1},\ldots,\boldsymbol{\alpha}_{s}\boldsymbol{\Theta}+\boldsymbol{Z}^{s};\boldsymbol{V})g_{t}(\boldsymbol{\alpha}_{1}\boldsymbol{\Theta}+\boldsymbol{Z}^{1},\ldots,\boldsymbol{\alpha}_{t}\boldsymbol{\Theta}+\boldsymbol{Z}^{t};\boldsymbol{V})^{\mathsf{T}}],\quad
1\leq s\leq t,\end{split}$ (37)
where $\boldsymbol{\Lambda}\sim\mu_{\boldsymbol{\Lambda}}$,
$\boldsymbol{U}\sim\mu_{\boldsymbol{U}}$,
$\boldsymbol{\Theta}\sim\mu_{\boldsymbol{\Theta}}$,
$\boldsymbol{V}\sim\mu_{\boldsymbol{V}}$,
$(\tilde{\boldsymbol{Z}}^{1},\ldots,\tilde{\boldsymbol{Z}}^{t})\sim{\mathsf{N}}(\boldsymbol{0},\boldsymbol{\Sigma}_{[1:t]})$,
and
$(\boldsymbol{Z}^{1},\ldots,\boldsymbol{Z}^{t})\sim{\mathsf{N}}(\boldsymbol{0},\boldsymbol{T}_{[1:t]})$,
all independent. Here $\nabla$ denotes the Jacobian with respect to the
subscripted (vectorial) argument, which exists almost everywhere because the
functions involved are Lipschitz and the random variables have density with
respect to Lebesgue measure [EG15, pg. 81]. As with $\boldsymbol{T}_{[1:t]}$,
we define $\boldsymbol{\Sigma}_{[1:t]}$ to be the $rt\times rt$ block matrix
with block $(s,t)$ given by $\boldsymbol{\Sigma}_{s,t}$. We initialize at the
second line with
$\boldsymbol{\alpha}_{1}=\mathbb{E}[f_{0}(0,\boldsymbol{U})\boldsymbol{\Lambda}^{\mathsf{T}}]$.
In addition to (17), we have
$\frac{1}{n}\sum_{i=1}^{n}\psi(\boldsymbol{b}^{1}_{i},\ldots,\boldsymbol{b}^{t}_{i},\boldsymbol{u}_{i},\boldsymbol{\lambda}_{i})\stackrel{{\scriptstyle\mathrm{p}}}{{\rightarrow}}\mathbb{E}[\psi(\boldsymbol{\gamma}_{1}\boldsymbol{\Lambda}+\tilde{\boldsymbol{Z}}^{1},\ldots,\boldsymbol{\gamma}_{t}\boldsymbol{\Lambda}+\tilde{\boldsymbol{Z}}^{t},\boldsymbol{U},\boldsymbol{\Lambda})],$
where we remind the reader that
$\psi:\mathbb{R}^{r(t+2)}\rightarrow\mathbb{R}$ is any pseudo-Lipschitz
function of order 2.
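Before the proof, a small numerical sketch may again help fix ideas. The
snippet below (illustrative only: Rademacher priors, $\tanh$ denoisers,
empirical Onsager coefficients, and an oracle-correlated initialization to
break symmetry are all placeholder choices) runs a memory-one rank-one AMP of
the form (16) for the model
$\boldsymbol{X}=\frac{1}{n}\boldsymbol{\lambda}\boldsymbol{\theta}^{\mathsf{T}}+\boldsymbol{Z}$
used below, with $\xi$ and $\zeta$ estimated as empirical versions of the
corresponding lines of (37).

```python
import numpy as np

rng = np.random.default_rng(4)
n, p, T = 2000, 1000, 10
delta = n / p
lam_vec = rng.choice([-1.0, 1.0], size=n)       # lambda, factor-side signal
theta = rng.choice([-1.0, 1.0], size=p)         # theta, variable-side signal
Z = rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, p))
X = np.outer(lam_vec, theta) / n + Z

f = np.tanh        # factor-side denoiser f_t (placeholder choice)
g = np.tanh        # variable-side denoiser g_t (placeholder choice)

a = X.T @ lam_vec  # oracle-correlated start, only to avoid the trivial fixed point
fb = np.zeros(n)   # f applied to the previous b (none yet)
for t in range(T):
    gb = g(a)
    zeta = np.mean(1.0 - gb**2) / delta         # (1/delta) E[g'], cf. (37)
    b = X @ gb - zeta * fb
    fb = f(b)
    xi = np.mean(1.0 - fb**2)                   # E[f'], cf. (37)
    a = X.T @ fb - xi * gb
    corr = abs(a @ theta) / (np.linalg.norm(a) * np.linalg.norm(theta))
    print(t, round(float(corr), 3))             # overlap of iterate with theta
```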
We now show Eq. (17) for $(\alpha_{s}),(T_{s,s^{\prime}})$ defined in this
way. We consider the $r=1$ case, as $r>1$ is similar but requires more
notational overhead. Because
$\boldsymbol{X}=\frac{1}{n}\boldsymbol{\lambda}\boldsymbol{\theta}^{\mathsf{T}}+\boldsymbol{Z}$,
we have
$\displaystyle\boldsymbol{a}^{t+1}-\frac{1}{n}\langle\boldsymbol{\lambda},f_{t}(\boldsymbol{b}^{1},\ldots,\boldsymbol{b}^{t},0,\boldsymbol{u})\rangle\boldsymbol{\theta}$
$\displaystyle=\boldsymbol{Z}^{\mathsf{T}}f_{t}(\boldsymbol{b}^{1},\ldots,\boldsymbol{b}^{t},0,\boldsymbol{u})-\sum\limits_{s=1}^{t}\xi_{t,s}g_{s}(\boldsymbol{a}^{1},\ldots,\boldsymbol{a}^{s},\boldsymbol{v}),$
$\displaystyle\boldsymbol{b}^{t}-\frac{1}{n}\langle\boldsymbol{\theta},g_{t}(\boldsymbol{a}^{1},\ldots,\boldsymbol{a}^{t},\boldsymbol{v})\rangle\boldsymbol{\lambda}$
$\displaystyle=\boldsymbol{Z}g_{t}(\boldsymbol{a}^{1},\ldots,\boldsymbol{a}^{t},\boldsymbol{v})-\sum\limits_{s=0}^{t-1}\zeta_{t,s}f_{s}(\boldsymbol{b}^{1},\ldots,\boldsymbol{b}^{s},\boldsymbol{y},\boldsymbol{u}).$
We introduce a change of variables:
$\displaystyle\hat{f}_{t}(d^{1},\ldots,d^{t},u,\lambda)\overset{\Delta}{=}f_{t}(d^{1}+\gamma_{1}\lambda,\ldots,d^{t}+\gamma_{t}\lambda,0,u),$
$\displaystyle\boldsymbol{d}^{t}=\boldsymbol{b}^{t}-\gamma_{t}\boldsymbol{\lambda}\in\mathbb{R}^{n},$
$\displaystyle\hat{g}_{t}(c^{1},\ldots,c^{t},v,\theta)\overset{\Delta}{=}g_{t}(c^{1}+\alpha_{1}\theta,\ldots,c^{t}+\alpha_{t}\theta,v),$
$\displaystyle\boldsymbol{c}^{t}=\boldsymbol{a}^{t}-\alpha_{t}\boldsymbol{\theta}\in\mathbb{R}^{p}.$
Because $f_{t}$, $g_{t}$ are Lipschitz continuous, so too are $\hat{f}_{t}$,
$\hat{g}_{t}$. We have
$\displaystyle\boldsymbol{a}^{t+1}-\frac{1}{n}\langle\boldsymbol{\lambda},\hat{f}_{t}(\boldsymbol{d}^{1},\ldots,\boldsymbol{d}^{t},\boldsymbol{u},\boldsymbol{\lambda})\rangle\boldsymbol{\theta}$
$\displaystyle=\boldsymbol{Z}^{\mathsf{T}}\hat{f}_{t}(\boldsymbol{d}^{1},\ldots,\boldsymbol{d}^{t},\boldsymbol{u},\boldsymbol{\lambda})-\sum\limits_{s=1}^{t}\xi_{t,s}\hat{g}_{s}(\boldsymbol{c}^{1},\ldots,\boldsymbol{c}^{s},\boldsymbol{v},\boldsymbol{\theta}),$
$\displaystyle\boldsymbol{b}^{t}-\frac{1}{n}\langle\boldsymbol{\theta},\hat{g}_{t}(\boldsymbol{c}^{1},\ldots,\boldsymbol{c}^{t},\boldsymbol{v},\boldsymbol{\theta})\rangle\boldsymbol{\lambda}$
$\displaystyle=\boldsymbol{Z}\hat{g}_{t}(\boldsymbol{c}^{1},\ldots,\boldsymbol{c}^{t},\boldsymbol{v},\boldsymbol{\theta})-\sum\limits_{s=0}^{t-1}\zeta_{t,s}\hat{f}_{s}(\boldsymbol{d}^{1},\ldots,\boldsymbol{d}^{s},\boldsymbol{u},\boldsymbol{\lambda}).$
Define
$\displaystyle\hat{\boldsymbol{c}}^{t+1}$
$\displaystyle=\boldsymbol{Z}^{\mathsf{T}}\hat{f}_{t}(\hat{\boldsymbol{d}}^{1},\ldots,\hat{\boldsymbol{d}}^{t},\boldsymbol{u},\boldsymbol{\lambda})-\sum\limits_{s=1}^{t}\xi_{t,s}\hat{g}_{s}(\hat{\boldsymbol{c}}^{1},\ldots,\hat{\boldsymbol{c}}^{s},\boldsymbol{v},\boldsymbol{\theta}),$
$\displaystyle\hat{\boldsymbol{d}}^{t}$
$\displaystyle=\boldsymbol{Z}\hat{g}_{t}(\hat{\boldsymbol{c}}^{1},\ldots,\hat{\boldsymbol{c}}^{t},\boldsymbol{v},\boldsymbol{\theta})-\sum\limits_{s=0}^{t-1}\zeta_{t,s}\hat{f}_{s}(\hat{\boldsymbol{d}}^{1},\ldots,\hat{\boldsymbol{d}}^{s},\boldsymbol{u},\boldsymbol{\lambda}).$
We can analyze this iteration via the same techniques we used to analyze AMP
in the high-dimensional regression model in the previous section [JM13]. In
particular, for any pseudo-Lipschitz function
$\psi:\mathbb{R}^{t+2}\rightarrow\mathbb{R}$ of order 2, we have
$\displaystyle\frac{1}{p}\sum\limits_{j=1}^{p}\psi(\hat{c}_{j}^{1},\ldots,\hat{c}_{j}^{t},v_{j},\theta_{j})$
$\displaystyle\stackrel{{\scriptstyle\mathrm{p}}}{{\rightarrow}}\mathbb{E}[\psi(Z^{1},\ldots,Z^{t},V,\Theta)],$
(38)
$\displaystyle\frac{1}{n}\sum\limits_{i=1}^{n}\psi(\hat{d}^{1}_{i},\ldots,\hat{d}^{t}_{i},u_{i},\lambda_{i})$
$\displaystyle\stackrel{{\scriptstyle\mathrm{p}}}{{\rightarrow}}\mathbb{E}[\psi(\tilde{Z}^{1},\ldots,\tilde{Z}^{t},U,\Lambda)].$
Now, to establish (17), it suffices to show
$\frac{1}{n}\|\hat{\boldsymbol{c}}^{t}-\boldsymbol{c}^{t}\|_{2}^{2}\stackrel{{\scriptstyle\mathrm{p}}}{{\rightarrow}}0,\qquad\frac{1}{n}\|\hat{\boldsymbol{d}}^{t}-\boldsymbol{d}^{t}\|_{2}^{2}\stackrel{{\scriptstyle\mathrm{p}}}{{\rightarrow}}0.$
(39)
We proceed by induction. By the weak law of large numbers, we have that
$\frac{1}{n}\langle\boldsymbol{\lambda},\hat{f}_{0}(\boldsymbol{u},\boldsymbol{\lambda})\rangle=\frac{1}{n}\langle\boldsymbol{\lambda},f_{0}(0,\boldsymbol{u})\rangle\stackrel{{\scriptstyle\mathrm{p}}}{{\rightarrow}}\alpha_{1}$.
Therefore,
$\boldsymbol{c}^{1}=\boldsymbol{Z}^{\mathsf{T}}\hat{f}_{0}(\boldsymbol{u},\boldsymbol{\lambda})+o_{p}(1)\boldsymbol{\theta}=\hat{\boldsymbol{c}}^{1}+o_{p}(1)\boldsymbol{\theta}$.
Since
$\frac{1}{p}\|\boldsymbol{\theta}\|_{2}^{2}\stackrel{{\scriptstyle\mathrm{p}}}{{\rightarrow}}\mathbb{E}[\Theta^{2}]$,
we have that
$\frac{1}{n}\|\boldsymbol{c}^{1}-\hat{\boldsymbol{c}}^{1}\|_{2}^{2}\stackrel{{\scriptstyle\mathrm{p}}}{{\rightarrow}}0$.
Because $\hat{g}_{1}$ is Lipschitz and
$\frac{1}{p}\|\boldsymbol{\theta}\|^{2}=O_{p}(1)$, we have
$|\frac{1}{n}\langle\boldsymbol{\theta},\hat{g}_{1}(\boldsymbol{c}^{1},\boldsymbol{v},\boldsymbol{\theta})\rangle-\frac{1}{n}\langle\boldsymbol{\theta},\hat{g}_{1}(\hat{\boldsymbol{c}}^{1},\boldsymbol{v},\boldsymbol{\theta})\rangle|\stackrel{{\scriptstyle\mathrm{p}}}{{\rightarrow}}0$.
By (38), we have that
$\frac{1}{n}\langle\boldsymbol{\theta},\hat{g}_{1}(\hat{\boldsymbol{c}}^{1},\boldsymbol{v},\boldsymbol{\theta})\rangle\stackrel{{\scriptstyle\mathrm{p}}}{{\rightarrow}}\gamma_{1}$.
We have
$\frac{1}{n}\|\hat{g}_{1}(\boldsymbol{c}^{1},\boldsymbol{v},\boldsymbol{\theta})-\hat{g}_{1}(\hat{\boldsymbol{c}}^{1},\boldsymbol{v},\boldsymbol{\theta})\|_{2}^{2}\leq\frac{1}{n}L^{2}\|\boldsymbol{c}^{1}-\hat{\boldsymbol{c}}^{1}\|_{2}^{2}\stackrel{{\scriptstyle\mathrm{p}}}{{\rightarrow}}0,$
where $L$ is a Lipschitz constant for $\hat{g}_{1}$. By [BY08], the maximal
singular value of $\boldsymbol{Z}^{\mathsf{T}}\boldsymbol{Z}$ is $O_{p}(1)$. Therefore,
$\frac{1}{n}\|\boldsymbol{Z}\hat{g}_{1}(\boldsymbol{c}^{1},\boldsymbol{v},\boldsymbol{\theta})-\boldsymbol{Z}\hat{g}_{1}(\hat{\boldsymbol{c}}^{1},\boldsymbol{v},\boldsymbol{\theta})\|_{2}^{2}\stackrel{{\scriptstyle\mathrm{p}}}{{\rightarrow}}0$.
As a result, and using that $\frac{1}{n}\|\boldsymbol{\lambda}\|_{2}^{2}$
converges almost surely to a constant,
$\frac{1}{n}\|\hat{\boldsymbol{d}}^{1}-\boldsymbol{d}^{1}\|_{2}^{2}=\frac{1}{n}\|\boldsymbol{Z}\hat{g}_{1}(\hat{\boldsymbol{c}}^{1},\boldsymbol{v},\boldsymbol{\theta})-\boldsymbol{Z}\hat{g}_{1}(\boldsymbol{c}^{1},\boldsymbol{v},\boldsymbol{\theta})+(\frac{1}{n}\langle\boldsymbol{\theta},\hat{g}_{1}(\boldsymbol{c}^{1},\boldsymbol{v},\boldsymbol{\theta})\rangle-\gamma_{1})\boldsymbol{\lambda}\|_{2}^{2}\stackrel{{\scriptstyle\mathrm{p}}}{{\rightarrow}}0.$
Now assume that (39) holds for $1,2,\ldots,t$. For the $(t+1)$-th iteration,
we have
$|\frac{1}{n}\langle\boldsymbol{\lambda},\hat{f}_{t}(\boldsymbol{d}^{1},\ldots,\boldsymbol{d}^{t},\boldsymbol{u},\boldsymbol{\lambda})\rangle-\frac{1}{n}\langle\boldsymbol{\lambda},\hat{f}_{t}(\hat{\boldsymbol{d}}^{1},\ldots,\hat{\boldsymbol{d}}^{t},\boldsymbol{u},\boldsymbol{\lambda})\rangle|\leq\frac{L}{n}\|\boldsymbol{\lambda}\|_{2}\sum\limits_{s=1}^{t}\|\boldsymbol{d}^{s}-\hat{\boldsymbol{d}}^{s}\|_{2}\stackrel{{\scriptstyle\mathrm{p}}}{{\rightarrow}}0,$
where $L$ is a Lipschitz constant for $\hat{f}_{t}$. By (38), we have
$\frac{1}{n}\langle\boldsymbol{\lambda},\hat{f}_{t}(\hat{\boldsymbol{d}}^{1},\ldots,\hat{\boldsymbol{d}}^{t},\boldsymbol{u},\boldsymbol{\lambda})\rangle\stackrel{{\scriptstyle\mathrm{p}}}{{\rightarrow}}\alpha_{t+1}$.
As a result, we have
$\frac{1}{n}\langle\boldsymbol{\lambda},\hat{f}_{t}(\boldsymbol{d}^{1},\ldots,\boldsymbol{d}^{t},\boldsymbol{u},\boldsymbol{\lambda})\rangle\stackrel{{\scriptstyle\mathrm{p}}}{{\rightarrow}}\alpha_{t+1}$.
Furthermore, for any $1\leq s\leq t$, we have
$\displaystyle\frac{1}{n}\|\hat{f}_{s}(\boldsymbol{d}^{1},\ldots,\boldsymbol{d}^{s},\boldsymbol{u},\boldsymbol{\lambda})-\hat{f}_{s}(\hat{\boldsymbol{d}}^{1},\ldots,\hat{\boldsymbol{d}}^{s},\boldsymbol{u},\boldsymbol{\lambda})\|_{2}^{2}$
$\displaystyle\leq\frac{\hat{L}_{t}^{2}}{n}\sum\limits_{i=1}^{s}\|\boldsymbol{d}^{i}-\hat{\boldsymbol{d}}^{i}\|_{2}^{2}\stackrel{{\scriptstyle\mathrm{p}}}{{\rightarrow}}0,$
$\displaystyle\frac{1}{n}\|\hat{g}_{s}(\boldsymbol{c}^{1},\ldots,\boldsymbol{c}^{s},\boldsymbol{v},\boldsymbol{\theta})-\hat{g}_{s}(\hat{\boldsymbol{c}}^{1},\ldots,\hat{\boldsymbol{c}}^{s},\boldsymbol{v},\boldsymbol{\theta})\|_{2}^{2}$
$\displaystyle\leq\frac{\hat{L}_{t}^{2}}{n}\sum\limits_{i=1}^{s}\|\boldsymbol{c}^{i}-\hat{\boldsymbol{c}}^{i}\|_{2}^{2}\stackrel{{\scriptstyle\mathrm{p}}}{{\rightarrow}}0.$
Again using that the maximal singular value of
$\boldsymbol{Z}^{\mathsf{T}}\boldsymbol{Z}$ is $O_{p}(1)$, we have
$\frac{1}{n}\|\boldsymbol{Z}^{\mathsf{T}}\hat{f}_{t}(\hat{\boldsymbol{d}}^{1},\ldots,\hat{\boldsymbol{d}}^{t},\boldsymbol{u},\boldsymbol{\lambda})-\boldsymbol{Z}^{\mathsf{T}}\hat{f}_{t}(\boldsymbol{d}^{1},\ldots,\boldsymbol{d}^{t},\boldsymbol{u},\boldsymbol{\lambda})\|_{2}^{2}\stackrel{{\scriptstyle\mathrm{p}}}{{\rightarrow}}0\,.$
As a result, we have
$\displaystyle\frac{1}{n}\|\hat{\boldsymbol{c}}^{t+1}-\boldsymbol{c}^{t+1}\|_{2}^{2}$
$\displaystyle=$
$\displaystyle\frac{1}{n}\|(\frac{1}{n}\langle\boldsymbol{\lambda},\hat{f}_{t}(\boldsymbol{d}^{1},\ldots,\boldsymbol{d}^{t},\boldsymbol{u},\boldsymbol{\lambda})\rangle-\alpha_{t+1})\boldsymbol{\theta}+\boldsymbol{Z}^{\mathsf{T}}(\hat{f}_{t}(\hat{\boldsymbol{d}}^{1},\ldots,\hat{\boldsymbol{d}}^{t},\boldsymbol{u},\boldsymbol{\lambda})-\hat{f}_{t}(\boldsymbol{d}^{1},\ldots,\boldsymbol{d}^{t},\boldsymbol{u},\boldsymbol{\lambda}))-$
$\displaystyle\sum\limits_{s=1}^{t}\xi_{t,s}(\hat{g}_{s}(\hat{\boldsymbol{c}}^{1},\ldots,\hat{\boldsymbol{c}}^{s},\boldsymbol{v},\boldsymbol{\theta})-\hat{g}_{s}(\boldsymbol{c}^{1},\ldots,\boldsymbol{c}^{s},\boldsymbol{v},\boldsymbol{\theta}))\|_{2}^{2}\stackrel{{\scriptstyle\mathrm{p}}}{{\rightarrow}}0.$
Similarly, we have
$|\frac{1}{n}\langle\boldsymbol{\theta},\hat{g}_{t+1}(\hat{\boldsymbol{c}}^{1},\ldots,\hat{\boldsymbol{c}}^{t+1},\boldsymbol{v},\boldsymbol{\theta})\rangle-\frac{1}{n}\langle\boldsymbol{\theta},\hat{g}_{t+1}(\boldsymbol{c}^{1},\ldots,\boldsymbol{c}^{t+1},\boldsymbol{v},\boldsymbol{\theta})\rangle|\leq\frac{L}{n}\|\boldsymbol{\theta}\|_{2}\sum\limits_{s=1}^{t+1}\|\hat{\boldsymbol{c}}^{s}-\boldsymbol{c}^{s}\|_{2}\stackrel{{\scriptstyle\mathrm{p}}}{{\rightarrow}}0,$
where $L$ is a Lipschitz constant for $\hat{g}_{t+1}$. By (38), we have that
$\frac{1}{n}\langle\boldsymbol{\theta},\hat{g}_{t+1}(\hat{\boldsymbol{c}}^{1},\ldots,\hat{\boldsymbol{c}}^{t+1},\boldsymbol{v},\boldsymbol{\theta})\rangle\stackrel{{\scriptstyle\mathrm{p}}}{{\rightarrow}}\gamma_{t+1}$.
As a result, we have that
$\frac{1}{n}\langle\boldsymbol{\theta},\hat{g}_{t+1}(\boldsymbol{c}^{1},\ldots,\boldsymbol{c}^{t+1},\boldsymbol{v},\boldsymbol{\theta})\rangle\stackrel{{\scriptstyle\mathrm{p}}}{{\rightarrow}}\gamma_{t+1}$.
Furthermore, for any $1\leq s\leq t$, we have
$\frac{1}{n}\|\hat{f}_{s}(\boldsymbol{d}^{1},\ldots,\boldsymbol{d}^{s},\boldsymbol{u},\boldsymbol{\lambda})-\hat{f}_{s}(\hat{\boldsymbol{d}}^{1},\ldots,\hat{\boldsymbol{d}}^{s},\boldsymbol{u},\boldsymbol{\lambda})\|_{2}^{2}\leq\frac{L^{2}}{n}\sum\limits_{i=1}^{s}\|\boldsymbol{d}^{i}-\hat{\boldsymbol{d}}^{i}\|_{2}^{2}\stackrel{{\scriptstyle\mathrm{p}}}{{\rightarrow}}0.$
Also, for any $1\leq s\leq t+1$, we have
$\frac{1}{n}\|\hat{g}_{s}(\boldsymbol{c}^{1},\ldots,\boldsymbol{c}^{s},\boldsymbol{v},\boldsymbol{\theta})-\hat{g}_{s}(\hat{\boldsymbol{c}}^{1},\ldots,\hat{\boldsymbol{c}}^{s},\boldsymbol{v},\boldsymbol{\theta})\|_{2}^{2}\leq\frac{L^{2}}{n}\sum\limits_{i=1}^{s}\|\boldsymbol{c}^{i}-\hat{\boldsymbol{c}}^{i}\|_{2}^{2}\stackrel{{\scriptstyle\mathrm{p}}}{{\rightarrow}}0.$
Then
$\frac{1}{n}\|\boldsymbol{Z}\hat{g}_{t+1}(\hat{\boldsymbol{c}}^{1},\ldots,\hat{\boldsymbol{c}}^{t+1},\boldsymbol{v},\boldsymbol{\theta})-\boldsymbol{Z}\hat{g}_{t+1}(\boldsymbol{c}^{1},\ldots,\boldsymbol{c}^{t+1},\boldsymbol{v},\boldsymbol{\theta})\|_{2}^{2}\stackrel{{\scriptstyle\mathrm{p}}}{{\rightarrow}}0$.
As a result, we have
$\displaystyle\frac{1}{n}\|\hat{\boldsymbol{d}}^{t+1}-\boldsymbol{d}^{t+1}\|_{2}^{2}$
$\displaystyle\qquad=\frac{1}{n}\|(\frac{1}{n}\langle\boldsymbol{\theta},\hat{g}_{t+1}(\boldsymbol{c}^{1},\ldots,\boldsymbol{c}^{t+1},\boldsymbol{v},\boldsymbol{\theta})\rangle-\gamma_{t+1})\boldsymbol{\lambda}$
$\displaystyle\qquad+\boldsymbol{Z}(\hat{g}_{t+1}(\hat{\boldsymbol{c}}^{1},\ldots,\hat{\boldsymbol{c}}^{t+1},\boldsymbol{v},\boldsymbol{\theta})-\hat{g}_{t+1}(\boldsymbol{c}^{1},\ldots,\boldsymbol{c}^{t+1},\boldsymbol{v},\boldsymbol{\theta}))$
$\displaystyle\qquad-\sum\limits_{s=0}^{t}\zeta_{t,s}(\hat{f}_{s}(\hat{\boldsymbol{d}}^{1},\ldots,\hat{\boldsymbol{d}}^{s},\boldsymbol{u},\boldsymbol{\lambda})-\hat{f}_{s}(\boldsymbol{d}^{1},\ldots,\boldsymbol{d}^{s},\boldsymbol{u},\boldsymbol{\lambda}))\|_{2}^{2}\stackrel{{\scriptstyle\mathrm{p}}}{{\rightarrow}}0.$
Thus, we have proved (39). Therefore, for every pseudo-Lipschitz function $\psi$
of order 2, there exists a constant $L_{\psi}$ such that
$\displaystyle\left|\frac{1}{p}\sum\limits_{j=1}^{p}\psi(c_{j}^{1}+\alpha_{1}\theta_{j},\ldots,c_{j}^{t}+\alpha_{t}\theta_{j},v_{j},\theta_{j})-\frac{1}{p}\sum\limits_{j=1}^{p}\psi(\hat{c}_{j}^{1}+\alpha_{1}\theta_{j},\ldots,\hat{c}_{j}^{t}+\alpha_{t}\theta_{j},v_{j},\theta_{j})\right|$
$\displaystyle\qquad\qquad\leq
L_{\psi}\Big(1+\sum\limits_{s=1}^{t}\frac{\|\boldsymbol{a}^{s}\|_{2}}{\sqrt{p}}+\frac{\|\boldsymbol{\theta}\|_{2}}{\sqrt{p}}+\frac{\|\boldsymbol{v}\|_{2}}{\sqrt{p}}\Big)\sum\limits_{s=1}^{t}\frac{\|\hat{\boldsymbol{c}}^{s}-\boldsymbol{c}^{s}\|_{2}}{\sqrt{p}}\stackrel{{\scriptstyle\mathrm{p}}}{{\rightarrow}}0.$
By (38),
$\dfrac{1}{p}\sum\limits_{j=1}^{p}\psi(\hat{c}_{j}^{1}+\alpha_{1}\theta_{j},\ldots,\hat{c}_{j}^{t}+\alpha_{t}\theta_{j},v_{j},\theta_{j})\stackrel{{\scriptstyle\mathrm{p}}}{{\rightarrow}}\mathbb{E}[\psi(\boldsymbol{\alpha}_{1}\boldsymbol{\Theta}+\boldsymbol{Z}^{1},\ldots,\boldsymbol{\alpha}_{t}\boldsymbol{\Theta}+\boldsymbol{Z}^{t},\boldsymbol{V},\boldsymbol{\Theta})].$
Therefore,
$\frac{1}{p}\sum\limits_{j=1}^{p}\psi(a_{j}^{1},\ldots,a_{j}^{t},v_{j},\theta_{j})\stackrel{{\scriptstyle\mathrm{p}}}{{\rightarrow}}\mathbb{E}[\psi(\boldsymbol{\alpha}_{1}\boldsymbol{\Theta}+\boldsymbol{Z}^{1},\ldots,\boldsymbol{\alpha}_{t}\boldsymbol{\Theta}+\boldsymbol{Z}^{t},\boldsymbol{V},\boldsymbol{\Theta})]$.
Similarly, we can show that
$\frac{1}{n}\sum_{i=1}^{n}\psi(\boldsymbol{b}^{1}_{i},\ldots,\boldsymbol{b}^{t}_{i},\boldsymbol{u}_{i},\boldsymbol{\lambda}_{i})\stackrel{{\scriptstyle\mathrm{p}}}{{\rightarrow}}\mathbb{E}[\psi(\boldsymbol{\gamma}_{1}\boldsymbol{\Lambda}+\tilde{\boldsymbol{Z}}^{1},\ldots,\boldsymbol{\gamma}_{t}\boldsymbol{\Lambda}+\tilde{\boldsymbol{Z}}^{t},\boldsymbol{U},\boldsymbol{\Lambda})]$.
Thus we have finished the proof.
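The state evolution just established is easy to probe numerically. The following is a minimal sketch of ours (not code from the paper): it simulates the low-rank model, forms the first iterate $\boldsymbol{a}^{1}=\boldsymbol{X}^{\mathsf{T}}f_{0}(0,\boldsymbol{u})$ under the hypothetical Lipschitz choice $f_{0}(0,u)=u$ with $u$ correlated with $\lambda$, and checks that the centered iterate $\boldsymbol{c}^{1}$ has variance $T_{1,1}=\mathbb{E}[f_{0}(0,U)^{2}]$ and coefficient $\alpha_{1}=\mathbb{E}[f_{0}(0,U)\Lambda]$.

```python
import numpy as np

# Minimal numerical sketch (ours, not the paper's code) of the first step of
# the state evolution in the low-rank model, with the hypothetical choice
# f_0(0, u) = u and side information u = 0.5*lambda + noise.
rng = np.random.default_rng(0)
n, p = 4000, 2000                        # n/p -> delta = 2
theta = rng.normal(size=p)               # Theta ~ N(0, 1)
lam = rng.normal(size=n)                 # Lambda ~ N(0, 1)
u = 0.5 * lam + rng.normal(size=n)       # U correlated with Lambda
Z = rng.normal(scale=1 / np.sqrt(n), size=(n, p))
X = np.outer(lam, theta) / n + Z         # X = (1/n) lambda theta^T + Z

f0 = u                                   # f_0(0, u) = u
alpha1 = np.mean(f0 * lam)               # (1/n)<lambda, f_0> -> E[U Lambda] = 0.5
a1 = X.T @ f0                            # first iterate a^1 = X^T f_0
c1 = a1 - alpha1 * theta                 # centered iterate c^1

# Predictions: alpha_1 = 0.5 and Var(c^1) = T_{1,1} = E[U^2] = 1.25.
print("alpha_1 estimate  :", round(alpha1, 3))
print("empirical Var(c^1):", round(c1.var(), 3))
```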
### B.3 The AMP change of variables
To prove Lemma 1, all that remains is to show that for any GFOM (1), at least
one choice of the change of variables in Eqs. (34) generates an iteration (35)
which is an AMP iteration. That is, in addition to satisfying Eq. (34), the matrices
$(\boldsymbol{\xi}_{t,s})$, $(\boldsymbol{\zeta}_{t,s})$ and functions
$(f_{t})$, $(g_{t})$ satisfy Eqs. (36) and (37) in the high-dimensional
regression and low-rank matrix estimation models respectively.
To construct such a choice of scalars, we may define
$(\boldsymbol{\xi}_{t,s}),(\boldsymbol{\zeta}_{t,s}),(f_{t}),(g_{t})$ in a
single recursion by interlacing definition (34) with either (36) or (37).
Specifically, in the high-dimensional regression model, we place (34a) before
the first line of (36) and (34b) before the fourth line of (36). In the
combined recursion, all quantities are defined in terms of previously defined
quantities, yielding choices for
$(\boldsymbol{\xi}_{t,s}),(\boldsymbol{\zeta}_{t,s}),(f_{t}),(g_{t})$ which
simultaneously satisfy (34) and (36). Thus, in the high-dimensional regression
model every GFOM is equivalent, up to a change of variables, to a certain AMP
algorithm. The construction in the low-rank matrix estimation model is
analogous: we place (34a) before the first line of (37) and (34b) before the
fourth line of (37).
The proof of Lemma 1 is complete.
## Appendix C Proof of state evolution for message passing (Lemma 2)
In this section, we prove Lemma 2. We restrict ourselves to the case $r=1$ and
$k=1$ (with $k$ the dimensionality of $\boldsymbol{W}$) because the proof for
$r>1$ or $k>1$ is completely analogous but would complicate notation.
Let $\mathcal{T}_{v\rightarrow f}=(\mathcal{V}_{v\rightarrow
f},\mathcal{F}_{v\rightarrow f},\mathcal{E}_{v\rightarrow f})$ be the tree
consisting of edges and nodes in $\mathcal{T}$ which are separated from $f$ by
$v$. By convention, $\mathcal{T}_{v\rightarrow f}$ will also contain the node
$v$. In particular, $f\not\in\mathcal{F}_{v\rightarrow f}$ and
$(f,v)\not\in\mathcal{E}_{v\rightarrow f}$, but $v\in\mathcal{V}_{v\rightarrow
f}$, and $f^{\prime}\in\mathcal{F}_{v\rightarrow f}$ and
$(v,f^{\prime})\in\mathcal{E}_{v\rightarrow f}$ for $f^{\prime}\in\partial
v\setminus f$. We define $\mathcal{T}_{f\rightarrow
v},\mathcal{V}_{f\rightarrow v},\mathcal{F}_{f\rightarrow
v},\mathcal{E}_{f\rightarrow v}$ similarly. With some abuse of notation, we
will sometimes use $\mathcal{T}_{f\rightarrow v},\mathcal{V}_{f\rightarrow
v},\mathcal{F}_{f\rightarrow v},\mathcal{E}_{f\rightarrow v}$ to denote either
the collection of observations corresponding to nodes and edges in these sets
or the $\sigma$-algebra generated by these observations. No confusion should
result. Which random variables we consider to be “observed” will vary with the
model, and will be explicitly described in each part of the proof to avoid
potential ambiguity.
### C.1 Gaussian message passing
We first introduce a message passing algorithm whose behavior is particularly
easy to analyze. We call this message passing algorithm a _Gaussian message
passing_ algorithm. We will see that in both the high-dimensional regression
and low-rank matrix estimation models, the message passing algorithm (19)
approximates a certain Gaussian message passing algorithm.
Gaussian message passing algorithms operate on a computation tree with
associated random variables
$\\{(\theta_{v},v_{v})\\}_{v\in\mathcal{V}}\stackrel{{\scriptstyle\mathrm{iid}}}{{\sim}}\mu_{\Theta,V}$,
$\\{(w_{f},u_{f})\\}_{f\in\mathcal{F}}\stackrel{{\scriptstyle\mathrm{iid}}}{{\sim}}\mu_{W,U}$,
and
$\\{z_{fv}\\}_{(f,v)\in\mathcal{E}}\stackrel{{\scriptstyle\mathrm{iid}}}{{\sim}}{\mathsf{N}}(0,1/n)$,
all independent, where
$\mu_{\Theta,V},\mu_{W,U}\in\mathscrsfs{P}_{4}(\mathbb{R}^{2})$. (We believe
that only $\mu_{\Theta,V},\mu_{W,U}\in\mathscrsfs{P}_{2}(\mathbb{R}^{2})$ is
needed, but the analysis under this weaker assumption would be substantially
more complicated, and the weaker assumption is not necessary for our
purposes.) Gaussian message passing algorithms access all these random
variables, so that all are considered to be “observed.” Thus, for example,
$\mathcal{V}_{f\rightarrow v}$ contains $\theta_{v^{\prime}},v_{v^{\prime}}$
for all nodes $v^{\prime}$ separated from $f$ by $v$ (including, by
convention, $v$).
Gaussian message passing algorithms are defined by sequences of Lipschitz
functions $(\tilde{f}_{t}:\mathbb{R}^{t+3}\rightarrow\mathbb{R})_{t\geq 0}$,
$(\tilde{g}_{t}:\mathbb{R}^{t+2}\rightarrow\mathbb{R})_{t\geq 0}$. We index
Gaussian message passing algorithms differently than the message passing
algorithms in Section 5, in anticipation of notational simplifications that
will occur later. For every
pair of neighboring nodes $v,f$, we generate sequences of messages
$(\tilde{a}_{v\rightarrow f}^{t})_{t\geq 1}$, $(\tilde{q}_{v\rightarrow
f}^{t})_{t\geq 0}$, $(\tilde{b}_{f\rightarrow v}^{t})_{t\geq 0}$,
$(\tilde{r}_{f\rightarrow v}^{t})_{t\geq 0}$ according to the iteration
$\displaystyle\tilde{a}_{v\rightarrow f}^{t+1}=\sum_{f^{\prime}\in\partial
v\setminus f}z_{f^{\prime}v}\tilde{r}_{f^{\prime}\rightarrow
v}^{t},\qquad\tilde{r}_{f\rightarrow
v}^{t}=\tilde{f}_{t}(\tilde{b}_{f\rightarrow
v}^{0},\ldots,\tilde{b}_{f\rightarrow v}^{t};w_{f},u_{f}),$ (40a)
$\displaystyle\tilde{b}_{f\rightarrow v}^{t}=\sum_{v^{\prime}\in\partial
f\setminus v}z_{fv^{\prime}}\tilde{q}_{v^{\prime}\rightarrow
f}^{t},\qquad\tilde{q}_{v\rightarrow
f}^{t}=\tilde{g}_{t}(\tilde{a}_{v\rightarrow
f}^{1},\ldots,\tilde{a}_{v\rightarrow f}^{t};\theta_{v},v_{v}),$ (40b)
with initialization $\tilde{q}_{v\rightarrow f}^{0}=\tilde{g}_{0}(\theta_{v},v_{v})$.
For $t\geq 0$, define the node beliefs
$\tilde{a}_{v}^{t+1}=\sum_{f\in\partial v}z_{fv}\tilde{r}_{f\rightarrow
v}^{t},\qquad\tilde{b}_{f}^{t}=\sum_{v\in\partial
f}z_{fv}\tilde{q}_{v\rightarrow f}^{t}.$ (41)
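To make the indexing in (40)-(41) concrete, the following is a small sketch of ours (not from the paper) that runs one sweep of Gaussian message passing on a toy tree, with the hypothetical Lipschitz updates $\tilde{f}_{t}(\cdots;w,u)=\tanh(\tilde{b}^{t})+w$ and $\tilde{g}_{t}(\cdots;\theta,v)=\theta+\tanh(\tilde{a}^{t})$, $\tilde{g}_{0}(\theta,v)=\theta$.

```python
import numpy as np
from collections import defaultdict

# One sweep of (40)-(41) on a toy tree (a sketch of ours): variable nodes
# {0,1,2}, factor nodes {'a','b'}, with hypothetical updates
# f_t(...; w, u) = tanh(b^t) + w and g_t(...; theta, v) = theta + tanh(a^t).
rng = np.random.default_rng(2)
edges = [('a', 0), ('a', 1), ('b', 1), ('b', 2)]
n = 4                                             # weights z_{fv} ~ N(0, 1/n)
z = {e: rng.normal(scale=1 / np.sqrt(n)) for e in edges}
theta = {v: rng.normal() for v in (0, 1, 2)}
w = {f: rng.normal() for f in ('a', 'b')}

vars_of, facs_of = defaultdict(list), defaultdict(list)
for f, v in edges:
    vars_of[f].append(v)
    facs_of[v].append(f)

q = {(v, f): theta[v] for f, v in edges}          # q^0_{v->f} = g_0(theta_v, v_v)
b = {(f, v): sum(z[(f, v2)] * q[(v2, f)]          # b^0_{f->v}: leave-one-out sum
                 for v2 in vars_of[f] if v2 != v)
     for f, v in edges}
r = {(f, v): np.tanh(b[(f, v)]) + w[f] for f, v in edges}   # r^0_{f->v}
a = {(v, f): sum(z[(f2, v)] * r[(f2, v)]          # a^1_{v->f}: leave-one-out sum
                 for f2 in facs_of[v] if f2 != f)
     for f, v in edges}
beliefs = {v: sum(z[(f, v)] * r[(f, v)] for f in facs_of[v])  # a^1_v, Eq. (41)
           for v in (0, 1, 2)}
print(beliefs)
```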
To compactify notation, denote
$\tilde{\boldsymbol{a}}_{v}^{t}=(\tilde{a}_{v}^{1},\ldots,\tilde{a}_{v}^{t})^{\mathsf{T}}$,
and likewise for $\tilde{\boldsymbol{a}}_{v\rightarrow f}^{t}$,
$\tilde{\boldsymbol{q}}_{v\rightarrow f}^{t}$,
$\tilde{\boldsymbol{b}}_{f}^{t}$, $\tilde{\boldsymbol{b}}_{f\rightarrow
v}^{t},\tilde{\boldsymbol{r}}_{f\rightarrow v}^{t}$ (where the first two of
these are $t$-dimensional, and the last three are $(t+1)$-dimensional). We
will often write $\tilde{f}_{t}(\tilde{\boldsymbol{b}}_{f\rightarrow
v}^{t};w_{f},u_{f})$ in place of $\tilde{f}_{t}(\tilde{b}_{f\rightarrow
v}^{0},\ldots,\tilde{b}_{f\rightarrow v}^{t};w_{f},u_{f})$, and similarly for
$\tilde{g}_{t}$. The reader should not confuse the bold font here with that in
Section 5, in which, for example, $\boldsymbol{a}_{v\rightarrow f}^{t}$
denotes the vectorial message at time $t$ rather than the collection of scalar
messages prior to and including time $t$.
Gaussian message passing obeys a Gaussian state evolution, defined by
covariance matrices
$\Sigma_{s,s^{\prime}}=\mathbb{E}[\tilde{g}_{s}(\tilde{\boldsymbol{A}}^{s};\Theta,V)\tilde{g}_{s^{\prime}}(\tilde{\boldsymbol{A}}^{s^{\prime}};\Theta,V)],\;\;\;T_{s+1,s^{\prime}+1}=\mathbb{E}[\tilde{f}_{s}(\tilde{\boldsymbol{B}}^{s};W,U)\tilde{f}_{s^{\prime}}(\tilde{\boldsymbol{B}}^{s^{\prime}};W,U)],$
(42)
where $s,s^{\prime}\geq 0$,
$\tilde{\boldsymbol{A}}^{s}\sim{\mathsf{N}}(\boldsymbol{0}_{s},\boldsymbol{T}_{[1{:}s]})$,
$\tilde{\boldsymbol{B}}^{s}\sim{\mathsf{N}}(\boldsymbol{0}_{s+1},\boldsymbol{\Sigma}_{[0{:}s]})$,
and $(\Theta,V)\sim\mu_{\Theta,V}$, $(W,U)\sim\mu_{W,U}$ independent of
$\tilde{\boldsymbol{A}}^{s},\tilde{\boldsymbol{B}}^{s}$. The iteration is
initialized by $\Sigma_{0,0}=\mathbb{E}[\tilde{g}_{0}(\Theta,V)^{2}]$.
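As a concrete illustration of (42), the diagonal of the state evolution can be tracked by a short Monte Carlo recursion. The sketch below is ours (not from the paper); it uses the same hypothetical Lipschitz updates as in the sketch above and, for brevity, tracks only the variances $\Sigma_{t,t}$ and $T_{t+1,t+1}$ rather than the full covariance matrices.

```python
import numpy as np

# Monte Carlo sketch (ours) of the diagonal of the state evolution (42):
#   T_{t+1,t+1}     = E[ f_t(B^t; W, U)^2 ],           B^t ~ N(0, Sigma_{t,t}),
#   Sigma_{t+1,t+1} = E[ g_{t+1}(A^{t+1}; Theta, V)^2 ], A^{t+1} ~ N(0, T_{t+1,t+1}),
# with hypothetical updates f_t(b; w, u) = tanh(b) + w and
# g_t(a; theta, v) = theta + tanh(a), and Sigma_{0,0} = E[g_0(Theta, V)^2].
rng = np.random.default_rng(1)
N = 500_000
theta, w = rng.normal(size=N), rng.normal(size=N)

sigma2 = np.mean(theta ** 2)                     # Sigma_{0,0}
for t in range(4):
    B = rng.normal(scale=np.sqrt(sigma2), size=N)
    tau2 = np.mean((np.tanh(B) + w) ** 2)        # T_{t+1,t+1}
    A = rng.normal(scale=np.sqrt(tau2), size=N)
    sigma2 = np.mean((theta + np.tanh(A)) ** 2)  # Sigma_{t+1,t+1}
    print(f"t={t}: T={tau2:.4f}, Sigma={sigma2:.4f}")
```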
###### Lemma 1.
If we choose a variable node $v$ and factor node $f$ independently of the
randomness in our model, then for fixed $t$ and for $n,p\rightarrow\infty$,
$n/p\rightarrow\delta$ we have
$\displaystyle(\tilde{\boldsymbol{a}}_{v}^{t},\theta_{v},v_{v})\stackrel{{\scriptstyle\mathrm{W}}}{{\rightarrow}}\mathsf{N}(\boldsymbol{0}_{t},\boldsymbol{T}_{[1{:}t]})\otimes\mu_{\Theta,V}\;\;\text{and}\;\;(\tilde{\boldsymbol{a}}_{v\rightarrow
f}^{t},\theta_{v},v_{v})\stackrel{{\scriptstyle\mathrm{W}}}{{\rightarrow}}\mathsf{N}(\boldsymbol{0}_{t},\boldsymbol{T}_{[1{:}t]})\otimes\mu_{\Theta,V},$
(43a)
$\displaystyle(\tilde{\boldsymbol{b}}_{f}^{t},w_{f},u_{f})\stackrel{{\scriptstyle\mathrm{W}}}{{\rightarrow}}\mathsf{N}(\boldsymbol{0}_{t+1},\boldsymbol{\Sigma}_{[0{:}t]})\otimes\mu_{W,U}\;\;\text{and}\;\;(\tilde{\boldsymbol{b}}_{f\rightarrow
v}^{t},w_{f},u_{f})\stackrel{{\scriptstyle\mathrm{W}}}{{\rightarrow}}\mathsf{N}(\boldsymbol{0}_{t+1},\boldsymbol{\Sigma}_{[0{:}t]})\otimes\mu_{W,U}.$
(43b)
Further, all the random variables in the preceding displays have bounded
fourth moments and
$\mathbb{E}[\|\tilde{\boldsymbol{a}}_{v}^{t}-\tilde{\boldsymbol{a}}_{v\rightarrow
f}^{t}\|^{2}]\rightarrow 0$ and
$\mathbb{E}[\|\tilde{\boldsymbol{b}}_{f}^{t}-\tilde{\boldsymbol{b}}_{f\rightarrow
v}^{t}\|^{2}]\rightarrow 0$.
The analysis of message passing on the tree is facilitated by the many
independence relationships between messages, which follow from the following
lemma.
###### Lemma 2.
For all $(f,v)\in\mathcal{E}$ and all $t$, the messages
$\tilde{r}_{f\rightarrow v}^{t},\tilde{b}_{f\rightarrow v}^{t}$ are
$\mathcal{T}_{f\rightarrow v}$-measurable, and the messages
$\tilde{q}_{v\rightarrow f}^{t},\tilde{a}_{v\rightarrow f}^{t}$ are
$\mathcal{T}_{v\rightarrow f}$-measurable.
###### Proof of Lemma 2.
The proof is by induction. The base case is that $\tilde{q}_{v\rightarrow
f}^{0}=\tilde{g}_{0}(\theta_{v},v_{v})$ is $\mathcal{T}_{v\rightarrow f}$-measurable.
Then, if $\tilde{q}_{v\rightarrow f}^{s}$ are $\mathcal{T}_{v\rightarrow
f}$-measurable for $0\leq s\leq t$ and all $(f,v)\in\mathcal{E}$, then
$\tilde{b}_{f\rightarrow v}^{t},\tilde{r}_{f\rightarrow v}^{t}$ are
$\mathcal{T}_{f\rightarrow v}$-measurable by (40). Similarly, if
$\tilde{r}_{f\rightarrow v}^{s}$ are $\mathcal{T}_{f\rightarrow v}$-measurable
for $0\leq s\leq t$ and all $(f,v)\in\mathcal{E}$, then
$\tilde{a}_{v\rightarrow f}^{t+1},\tilde{q}_{v\rightarrow f}^{t+1}$ are
$\mathcal{T}_{v\rightarrow f}$-measurable by (40). The induction is complete. ∎
We now prove Lemma 1.
###### Proof of Lemma 1.
The proof is by induction.
Base case:
$(\theta_{v},v_{v})\stackrel{{\scriptstyle\mathrm{W}}}{{\rightarrow}}\mu_{\Theta,V}$.
This is the exact distribution in finite samples by assumption.
Inductive step 1: Eq. (43a) at $t$, bounded fourth moments of
$\tilde{\boldsymbol{a}}_{v}^{t},\tilde{\boldsymbol{a}}_{v\rightarrow f}^{t}$,
and
$\mathbb{E}[\|\tilde{\boldsymbol{a}}_{v}^{t}-\tilde{\boldsymbol{a}}_{v\rightarrow
f}^{t}\|^{2}]\rightarrow 0$ imply Eq. (43b) at $t$, bounded fourth moments of
$\tilde{\boldsymbol{b}}_{f}^{t},\tilde{\boldsymbol{b}}_{f\rightarrow v}^{t}$,
and
$\mathbb{E}[\|\tilde{\boldsymbol{b}}_{f}^{t}-\tilde{\boldsymbol{b}}_{f\rightarrow
v}^{t}\|^{2}]\rightarrow 0$.
The $\sigma$-algebras $(\mathcal{T}_{v\rightarrow f})_{v\in\partial f}$ are
independent of $(z_{fv})_{v\in\partial f}$, which are mutually independent.
Thus, by (41), conditional on $\sigma((\mathcal{T}_{v\rightarrow
f})_{v\in\partial f})$ the beliefs $\tilde{\boldsymbol{b}}_{f}^{t}$ are
jointly normal with covariance
$\widehat{\boldsymbol{\Sigma}}_{[0{:}t]}:=\frac{1}{n}\sum_{v\in\partial
f}\tilde{\boldsymbol{q}}_{v\rightarrow
f}^{t}(\tilde{\boldsymbol{q}}_{v\rightarrow f}^{t})^{\mathsf{T}}$. That is,
$\tilde{\boldsymbol{b}}_{f}^{t}\bigm{|}\sigma((\mathcal{T}_{v\rightarrow
f})_{v\in\partial
f})\sim\mathsf{N}(\boldsymbol{0}_{t+1},\widehat{\boldsymbol{\Sigma}}_{[0{:}t]}).$
Because $(\tilde{\boldsymbol{a}}_{v\rightarrow
f}^{t},\theta_{v},v_{v})\mapsto\tilde{g}_{s}(\tilde{\boldsymbol{a}}_{v\rightarrow
f}^{s};\theta_{v},v_{v})\tilde{g}_{s^{\prime}}(\tilde{\boldsymbol{a}}_{v\rightarrow
f}^{s^{\prime}};\theta_{v},v_{v})$ is uniformly pseudo-Lipschitz of order 2 by
Lemma 1, we have
$\mathbb{E}[\widehat{\Sigma}_{s,s^{\prime}}]=\mathbb{E}[\tilde{q}_{v\rightarrow
f}^{s}\tilde{q}_{v\rightarrow
f}^{s^{\prime}}]=\mathbb{E}[\tilde{g}_{s}(\tilde{\boldsymbol{a}}_{v\rightarrow
f}^{s};\theta_{v},v_{v})\tilde{g}_{s^{\prime}}(\tilde{\boldsymbol{a}}_{v\rightarrow
f}^{s^{\prime}};\theta_{v},v_{v})]\rightarrow\Sigma_{s,s^{\prime}}$ by the
inductive hypothesis, Lemma 2, and (42). The terms in the sum defining
$\widehat{\boldsymbol{\Sigma}}_{[0{:}t]}$ are mutually independent by Lemma 2
and have bounded second moments by the inductive hypothesis and the Lipschitz
continuity of the functions $(\tilde{g}_{s})_{0\leq s\leq t}$. By the weak law
of large numbers,
$\widehat{\boldsymbol{\Sigma}}_{[0{:}t]}\stackrel{{\scriptstyle
L_{1}}}{{\rightarrow}}\boldsymbol{\Sigma}_{[0{:}t]}$, whence by Slutsky’s
theorem,
$\tilde{\boldsymbol{b}}_{f}^{t}\stackrel{{\scriptstyle\mathrm{d}}}{{\rightarrow}}{\mathsf{N}}(\boldsymbol{0}_{t+1},\boldsymbol{\Sigma}_{[0{:}t]})$.
Further,
$\mathbb{E}[\tilde{\boldsymbol{b}}_{f}^{t}(\tilde{\boldsymbol{b}}_{f}^{t})^{\mathsf{T}}]=\mathbb{E}[\widehat{\boldsymbol{\Sigma}}_{[0{:}t]}]\rightarrow\boldsymbol{\Sigma}_{[0{:}t]}$.
Convergence in distribution and in second moment implies convergence in the
Wasserstein space of order 2 [Vil10, Theorem 6.9], so
$\tilde{\boldsymbol{b}}_{f}^{t}\stackrel{{\scriptstyle\mathrm{W}}}{{\rightarrow}}{\mathsf{N}}(\boldsymbol{0}_{t+1},\boldsymbol{\Sigma}_{[0{:}t]})$.
To bound the fourth moments of $\tilde{b}_{f}^{t}$, we compute
$\mathbb{E}[(\tilde{b}_{f}^{t})^{4}]=3\,\mathbb{E}[\widehat{\Sigma}_{t,t}^{2}]=\frac{3}{n^{2}}\sum_{v\in\partial
f}\mathbb{E}[(\tilde{q}_{v\rightarrow f}^{t})^{4}]+\frac{3}{n^{2}}\sum_{v\neq
v^{\prime}\in\partial f}\mathbb{E}[(\tilde{q}_{v\rightarrow
f}^{t})^{2}]\mathbb{E}[(\tilde{q}_{v^{\prime}\rightarrow
f}^{t})^{2}]\rightarrow 3\Sigma_{t,t}^{2},$
where the first term goes to 0 because the fourth moments of
$\tilde{q}_{v\rightarrow f}^{t}$ are bounded by the inductive hypothesis and
Lipschitz continuity of $\tilde{g}_{t}$, and the second term goes to
$3\Sigma_{t,t}^{2}$ by the same argument as in the preceding paragraph. The
boundedness of the fourth moments of $\tilde{b}_{f}^{s}$ for $s<t$ holds
similarly (and, anyway, will have been established earlier in the induction).
Finally, observe $\tilde{b}_{f}^{t}-\tilde{b}_{f\rightarrow
v}^{t}=z_{fv}\tilde{q}_{v\rightarrow f}^{t}$ and
$\mathbb{E}[(z_{fv}\tilde{q}_{v\rightarrow
f}^{t})^{2}]=\mathbb{E}[(\tilde{q}_{v\rightarrow f}^{t})^{2}]/n\rightarrow 0$,
where $\mathbb{E}[(\tilde{q}_{v\rightarrow f}^{t})^{2}]$ is bounded by the
inductive hypothesis and Lipschitz continuity of $\tilde{g}_{t}$. The
convergence $\mathbb{E}[(\tilde{b}_{f}^{s}-\tilde{b}_{f\rightarrow
v}^{s})^{2}]\rightarrow 0$ for $s<t$ holds similarly (and, anyway, will have
been established earlier in the induction). The Wasserstein convergence of
$(\tilde{\boldsymbol{b}}_{f\rightarrow v}^{t},w_{f},u_{f})$ now follows.
The bounded fourth moments of $\tilde{\boldsymbol{b}}_{f\rightarrow v}^{t}$
hold similarly.
Inductive step 2: Eq. (43b) at $t$, bounded fourth moments of
$\tilde{\boldsymbol{b}}_{f}^{t},\tilde{\boldsymbol{b}}_{f\rightarrow v}^{t}$,
and
$\mathbb{E}[\|\tilde{\boldsymbol{b}}_{f}^{t}-\tilde{\boldsymbol{b}}_{f\rightarrow
v}^{t}\|^{2}]\rightarrow 0$ imply Eq. (43a) at $t+1$, bounded fourth moments of
$\tilde{\boldsymbol{a}}_{v}^{t+1},\tilde{\boldsymbol{a}}_{v\rightarrow
f}^{t+1}$, and
$\mathbb{E}[\|\tilde{\boldsymbol{a}}_{v}^{t+1}-\tilde{\boldsymbol{a}}_{v\rightarrow
f}^{t+1}\|^{2}]\rightarrow 0$.
This follows by exactly the same argument as in inductive step 1.
The induction is complete, and Lemma 1 follows. ∎
### C.2 Message passing in the high-dimensional regression model
We prove Lemma 2 for the high-dimensional regression model by showing that the
iteration (19) is well approximated by a Gaussian message passing algorithm
after a change of variables. The functions $\tilde{f}_{t},\tilde{g}_{t}$ in
the Gaussian message passing algorithm are defined in terms of the functions
$f_{t},g_{t}$ of the original message passing algorithm (19) and the function
$h$ used to define the high-dimensional regression model.
$\displaystyle\tilde{f}_{t}(\tilde{b}^{0},\cdots,\tilde{b}^{t},w,u):=f_{t}(\tilde{b}^{1},\cdots,\tilde{b}^{t};h(\tilde{b}^{0},w),u),\;\;t\geq
0,$
$\displaystyle\tilde{g}_{0}(\theta,v)=\theta,\;\;\tilde{g}_{t}(\tilde{a}^{1},\cdots,\tilde{a}^{t};\theta,v):=g_{t}(\alpha_{1}\theta+\tilde{a}^{1},\cdots,\alpha_{t}\theta+\tilde{a}^{t};v),\;\;t\geq
1.$
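In code, this change of variables is nothing more than a pair of function wrappers; the schematic sketch below is ours, with the original updates `f_t`, `g_t`, the link `h`, and the coefficients `alpha` all assumed given.

```python
# Schematic sketch (ours) of the change of variables above, assuming the
# original updates f_t(b^1..b^t; y, u) and g_t(a^1..a^t; v), the link h, and
# the coefficients alpha = (alpha_1, ..., alpha_t) are given.
def make_tilde_f(f_t, h):
    # tilde f_t(b^0, b^1..b^t, w, u) := f_t(b^1..b^t; h(b^0, w), u)
    def tilde_f(b0, bs, w, u):
        return f_t(bs, h(b0, w), u)
    return tilde_f

def make_tilde_g(g_t, alpha):
    # tilde g_t(a^1..a^t; theta, v) := g_t(alpha_1*theta + a^1, ..., alpha_t*theta + a^t; v)
    def tilde_g(As, theta, v):
        return g_t([a + al * theta for a, al in zip(As, alpha)], v)
    return tilde_g
```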
Define $(\tilde{a}_{v\rightarrow f}^{t})_{t\geq 1}$,
$(\tilde{a}_{v}^{t})_{t\geq 1}$, $(\tilde{q}_{v\rightarrow f}^{t})_{t\geq 0}$,
$(\tilde{b}_{f\rightarrow v}^{t})_{t\geq 0}$, $(\tilde{b}_{f}^{t})_{t\geq 0}$,
$(\tilde{r}_{f\rightarrow v}^{t})_{t\geq 0}$ via the Gaussian message passing
algorithm (40) with initial data $\theta_{v},v_{v},w_{f},u_{f}$ and with
$z_{fv}=x_{fv}$. Because $f_{t}$, $g_{t}$, and $h$ are Lipschitz, so too are
$\tilde{f}_{t}$ and $\tilde{g}_{t}$. Under the function definitions
$\tilde{f}_{t},\tilde{g}_{t}$ given above, the definitions of $\Sigma_{s,s}$
and $T_{s,s^{\prime}}$ in (42) and (36) are equivalent. Thus, Lemma 1 holds
for the iterates of this Gaussian message passing algorithm with the
$\boldsymbol{T}_{[1{:}t]}$, $\boldsymbol{\Sigma}_{[0{:}t]}$ defined by (36).
We claim that for fixed $s\geq 1$, as $n\rightarrow\infty$ we have
$\mathbb{E}[(\alpha_{s}\theta_{v}+\tilde{a}_{v\rightarrow f}^{s}-a_{v\rightarrow f}^{s})^{2}]\rightarrow 0\;\;\text{and}\;\;\mathbb{E}[(\tilde{b}_{f\rightarrow v}^{s}-b_{f\rightarrow v}^{s})^{2}]\rightarrow 0,$ (44a)
and
$\mathbb{E}[(a_{v\rightarrow f}^{s})^{4}]\;\;\text{and}\;\;\mathbb{E}[(b_{f\rightarrow v}^{s})^{4}]\;\;\text{are uniformly bounded with respect to }n,$ (44b)
where $(\alpha_{s})$ are defined by (36). These are the same coefficients
appearing in the AMP state evolution (Lemma 1), as claimed. We show (44) by
induction. There is no base case because the inductive steps work for $t=0$ as
written.
Inductive step 1: If (44) holds for $1\leq s\leq t$, then (44a) holds for
$s=t+1$.
We expand
$\displaystyle\alpha_{t+1}\theta_{v}+\tilde{a}_{v\rightarrow
f}^{t+1}-a_{v\rightarrow f}^{t+1}$
$\displaystyle=\alpha_{t+1}\theta_{v}+\sum_{f^{\prime}\in\partial v\setminus
f}z_{f^{\prime}v}(\tilde{f}_{t}(\tilde{\boldsymbol{b}}_{f^{\prime}\rightarrow
v}^{t};w_{f^{\prime}},u_{f^{\prime}})-f_{t}(\boldsymbol{b}_{f^{\prime}\rightarrow
v}^{t};y_{f^{\prime}},u_{f^{\prime}}))$
$\displaystyle=\alpha_{t+1}\theta_{v}+\sum_{f^{\prime}\in\partial v\setminus
f}z_{f^{\prime}v}(\tilde{f}_{t}(\tilde{\boldsymbol{b}}_{f^{\prime}\rightarrow
v}^{t};w_{f^{\prime}},u_{f^{\prime}})-\tilde{f}_{t}(\tilde{b}_{f^{\prime}\rightarrow
v}^{0},\boldsymbol{b}_{f^{\prime}\rightarrow
v}^{t};w_{f^{\prime}},u_{f^{\prime}}))$
$\displaystyle\quad\quad+\sum_{f^{\prime}\in\partial v\setminus
f}z_{f^{\prime}v}(\tilde{f}_{t}(\tilde{b}_{f^{\prime}\rightarrow
v}^{0},\boldsymbol{b}_{f^{\prime}\rightarrow
v}^{t};w_{f^{\prime}},u_{f^{\prime}})-\tilde{f}_{t}(\tilde{b}_{f^{\prime}}^{0},\boldsymbol{b}_{f^{\prime}\rightarrow
v}^{t};w_{f^{\prime}},u_{f^{\prime}}))$
$\displaystyle=:\alpha_{t+1}\theta_{v}+\mathsf{I}+\mathsf{II}.$
(Note that $\tilde{\boldsymbol{b}}_{f^{\prime}\rightarrow v}^{t}$ is
$(t+1)$-dimensional and $\boldsymbol{b}_{f^{\prime}\rightarrow v}^{t}$ is
$t$-dimensional). First we analyze $\mathsf{I}$. We have
$\displaystyle|\tilde{f}_{t}(\tilde{\boldsymbol{b}}_{f^{\prime}\rightarrow
v}^{t};w_{f^{\prime}},u_{f^{\prime}})-\tilde{f}_{t}(\tilde{b}_{f^{\prime}\rightarrow
v}^{0},\boldsymbol{b}_{f^{\prime}\rightarrow
v}^{t};w_{f^{\prime}},u_{f^{\prime}})|\leq
L\sum_{s=1}^{t}|\tilde{b}_{f^{\prime}\rightarrow
v}^{s}-b_{f^{\prime}\rightarrow v}^{s}|,$
where $L$ is a Lipschitz constant of $\tilde{f}_{t}$. The terms in the sum
defining $\mathsf{I}$ are mutually independent, and
$\tilde{b}_{f^{\prime}\rightarrow v}^{s},b_{f^{\prime}\rightarrow v}^{s}$ are
independent of $z_{f^{\prime}v}$. Thus,
$\displaystyle\mathbb{E}[\mathsf{I}^{2}]$
$\displaystyle=\frac{n-1}{n}\mathbb{E}[(\tilde{f}_{t}(\tilde{\boldsymbol{b}}_{f^{\prime}\rightarrow
v}^{t};w_{f^{\prime}},u_{f^{\prime}})-\tilde{f}_{t}(\tilde{b}_{f^{\prime}\rightarrow
v}^{0},\boldsymbol{b}_{f^{\prime}\rightarrow
v}^{t};w_{f^{\prime}},u_{f^{\prime}}))^{2}]$
$\displaystyle\leq\frac{L^{2}(n-1)t}{n}\sum_{s=1}^{t}\mathbb{E}[(\tilde{b}_{f^{\prime}\rightarrow
v}^{s}-b_{f^{\prime}\rightarrow v}^{s})^{2}]\rightarrow 0,$
by the inductive hypothesis.
Next we analyze $\mathsf{II}$. Note that all arguments to the functions in the
sum defining $\mathsf{II}$ are independent of $z_{f^{\prime}v}$ and
$\theta_{v}$ except for
$\tilde{b}_{f^{\prime}}^{0}=z_{f^{\prime}v}\theta_{v}+\sum_{v^{\prime}\in\partial
f^{\prime}\setminus v}z_{f^{\prime}v^{\prime}}\theta_{v^{\prime}}$. Because
$\tilde{f}_{t}$ is Lipschitz, we may apply Stein’s lemma (i.e., Gaussian
integration by parts) [Ste81] to get
$\displaystyle\mathbb{E}[\alpha_{t+1}\theta_{v}+\mathsf{II}\bigm{|}\theta_{v},\sigma((\mathcal{T}_{v^{\prime\prime}\rightarrow
f^{\prime}})_{v^{\prime\prime}\in\partial f^{\prime}\backslash v})]$
$\displaystyle=\alpha_{t+1}\theta_{v}+(n-1)\mathbb{E}\big{[}z_{f^{\prime}v}(\tilde{f}_{t}(\tilde{b}_{f^{\prime}\rightarrow
v}^{0},\boldsymbol{b}_{f^{\prime}\rightarrow
v}^{t};w_{f^{\prime}},u_{f^{\prime}})-\tilde{f}_{t}(\tilde{b}_{f^{\prime}}^{0},\boldsymbol{b}_{f^{\prime}\rightarrow
v}^{t};w_{f^{\prime}},u_{f^{\prime}}))\bigm{|}\theta_{v}\big{]}$
$\displaystyle=\theta_{v}\left(\alpha_{t+1}-\frac{n-1}{n}\mathbb{E}[\partial_{\tilde{b}^{0}}\tilde{f}_{t}(\tilde{b}_{f^{\prime}}^{0},\boldsymbol{b}_{f^{\prime}\rightarrow
v}^{t};w_{f^{\prime}},u_{f^{\prime}})\bigm{|}\theta_{v}]\right),$
where $\partial_{\tilde{b}^{0}}\tilde{f}_{t}$ is the weak-derivative of
$\tilde{f}_{t}$ with respect to its first argument, which is defined almost
everywhere with respect to Lebesgue measure because $\tilde{f}_{t}$ is
Lipschitz [EG15, pg. 81].
We claim the right-hand side of the preceding display converges in $L_{2}$ to
0, as we now show. The random variable
$\mathbb{E}[\partial_{\tilde{b}^{0}}\tilde{f}_{t}(\tilde{b}_{f^{\prime}}^{0},\boldsymbol{b}_{f^{\prime}\rightarrow
v}^{t};w_{f^{\prime}},u_{f^{\prime}})|\theta_{v},(\mathcal{T}_{v^{\prime\prime}\rightarrow
f^{\prime}})_{v^{\prime\prime}\in\partial f^{\prime}\backslash v}]$ is almost
surely bounded because $\tilde{f}_{t}$ is Lipschitz; we will show that it
converges in probability to $\alpha_{t+1}$. The random vector
$(\tilde{b}_{f^{\prime}}^{0},\boldsymbol{b}_{f^{\prime}\rightarrow v}^{t})$
has a Gaussian distribution conditional on
$\sigma((\mathcal{T}_{v^{\prime\prime}\rightarrow
f^{\prime}})_{v^{\prime\prime}\in\partial f^{\prime}\backslash v})$ and
$\theta_{v}$; in particular,
$\displaystyle(\tilde{b}^{0}_{f^{\prime}\rightarrow
v}+z_{f^{\prime}v}\theta_{v},\boldsymbol{b}_{f^{\prime}\rightarrow
v}^{t})|\theta_{v},\sigma((\mathcal{T}_{v^{\prime\prime}\rightarrow
f^{\prime}})_{v^{\prime\prime}\in\partial f^{\prime}\backslash
v})\stackrel{{\scriptstyle\mathrm{d}}}{{=}}{\mathsf{N}}(\mathbf{0},\widehat{\boldsymbol{\Sigma}}),$
where we define
$\widehat{\boldsymbol{\Sigma}}\in\mathbb{R}^{(t+1)\times(t+1)}$ by
$\displaystyle\widehat{\Sigma}_{0,0}=\dfrac{1}{n}\sum\limits_{v^{\prime}\in\partial
f^{\prime}}\theta_{v^{\prime}}^{2}\;\;\text{and}\;\;\widehat{\Sigma}_{s,s^{\prime}}=\dfrac{1}{n}\sum\limits_{v^{\prime}\in\partial
f^{\prime}\setminus v}q_{v^{\prime}\rightarrow
f^{\prime}}^{s}q_{v^{\prime}\rightarrow
f^{\prime}}^{s^{\prime}}\;\;\text{for}\;s\geq 1\mbox{ or }s^{\prime}\geq 1,$
where for the purposes of the preceding display we set
$q_{v^{\prime}\rightarrow f^{\prime}}^{0}=\theta_{v^{\prime}}$. By the
Lipschitz continuity of the functions $(g_{s})$, Lemmas 1 and 2, and the
inductive hypothesis, we have
$\mathbb{E}[\widehat{\boldsymbol{\Sigma}}]\rightarrow\boldsymbol{\Sigma}_{[0{:}t]}$.
The terms in the sums in the previous display have bounded second moments by
the inductive hypothesis (44b) and the Lipschitz continuity of the functions
$(g_{s})$. By the weak law of large numbers, we conclude
$\widehat{\boldsymbol{\Sigma}}\stackrel{{\scriptstyle\mathrm{p}}}{{\rightarrow}}\boldsymbol{\Sigma}_{[0{:}t]}$.
Observe that
$\mathbb{E}[\partial_{\tilde{b}^{0}}\tilde{f}_{t}(\tilde{b}_{f^{\prime}}^{0},\boldsymbol{b}_{f^{\prime}\rightarrow
v}^{t};w_{f^{\prime}},u_{f^{\prime}})|\theta_{v},(\mathcal{T}_{v^{\prime\prime}\rightarrow
f^{\prime}})_{v^{\prime\prime}\in\partial f^{\prime}\backslash
v}]=\mathbb{E}[\partial_{\tilde{b}^{0}}\tilde{f}_{t}(\widehat{\boldsymbol{\Sigma}}^{1/2}\boldsymbol{Z};W,U)]$,
where on the right-hand side the expectation is with respect to
$(W,U)\sim\mu_{W,U}$ and
$\boldsymbol{Z}\sim{\mathsf{N}}(\boldsymbol{0}_{t+1},\boldsymbol{I}_{t+1})$
independent. Because $\partial_{\tilde{b}^{0}}\tilde{f}_{t}$ is almost surely
bounded, by the dominated convergence theorem, the right-hand side is
continuous in $\widehat{\boldsymbol{\Sigma}}$. By the continuous mapping
theorem and (36), we conclude
$\mathbb{E}[\partial_{\tilde{b}^{0}}\tilde{f}_{t}(\tilde{b}_{f^{\prime}}^{0},\boldsymbol{b}_{f^{\prime}\rightarrow
v}^{t};w_{f^{\prime}},u_{f^{\prime}})|\theta_{v},(\mathcal{T}_{v^{\prime\prime}\rightarrow
f^{\prime}})_{v^{\prime\prime}\in\partial f^{\prime}\backslash
v}]\stackrel{{\scriptstyle\mathrm{p}}}{{\rightarrow}}\alpha_{t+1}$. Then, by
dominated convergence,
$\mathbb{E}[\alpha_{t+1}\theta_{v}+\mathsf{II}\bigm{|}\theta_{v}]\stackrel{{\scriptstyle
L_{2}}}{{\rightarrow}}0$. Moreover, because the terms in the sum defining
$\mathsf{II}$ are mutually independent given $\theta_{v}$,
$\displaystyle\operatorname{Var}(\alpha_{t+1}\theta_{v}+\mathsf{II}\mid\theta_{v})$
$\displaystyle\leq(n-1)\mathbb{E}\left[z_{f^{\prime}v}^{2}(\tilde{f}_{t}(\tilde{b}_{f^{\prime}\rightarrow
v}^{0},\boldsymbol{b}_{f^{\prime}\rightarrow
v}^{t};w_{f^{\prime}},u_{f^{\prime}})-\tilde{f}_{t}(\tilde{b}_{f^{\prime}}^{0},\boldsymbol{b}_{f^{\prime}\rightarrow
v}^{t};w_{f^{\prime}},u_{f^{\prime}}))^{2}\mid\theta_{v}\right]$
$\displaystyle\leq
L^{2}(n-1)\mathbb{E}[z_{f^{\prime}v}^{4}\theta_{v}^{2}\mid\theta_{v}]\leq
3L^{2}\theta_{v}^{2}/n,$
where $L$ is the Lipschitz constant of $\tilde{f}_{t}$. We conclude that
$\mathbb{E}[\operatorname{Var}(\alpha_{t+1}\theta_{v}+\mathsf{II}\mid\theta_{v})]\rightarrow
0$. Combined with
$\mathbb{E}[\alpha_{t+1}\theta_{v}+\mathsf{II}\bigm{|}\theta_{v}]\stackrel{{\scriptstyle
L_{2}}}{{\rightarrow}}0$, we get
$\operatorname{Var}(\alpha_{t+1}\theta_{v}+\mathsf{II})=\operatorname{Var}(\mathbb{E}[\alpha_{t+1}\theta_{v}+\mathsf{II}|\theta_{v}])+\mathbb{E}[\operatorname{Var}(\alpha_{t+1}\theta_{v}+\mathsf{II}|\theta_{v})]\rightarrow
0$, so that $\alpha_{t+1}\theta_{v}+\mathsf{II}\stackrel{{\scriptstyle
L_{2}}}{{\rightarrow}}0$. Combining $\mathsf{I}\stackrel{{\scriptstyle
L_{2}}}{{\rightarrow}}0$ and
$\alpha_{t+1}\theta_{v}+\mathsf{II}\stackrel{{\scriptstyle
L_{2}}}{{\rightarrow}}0$ gives
$\mathbb{E}[(\alpha_{t+1}\theta_{v}+\tilde{a}_{v\rightarrow
f}^{t+1}-a_{v\rightarrow f}^{t+1})^{2}]\rightarrow 0$, as desired.
We now expand
$\tilde{b}_{f\rightarrow v}^{t+1}-b_{f\rightarrow
v}^{t+1}=\sum_{v^{\prime}\in\partial f\setminus
v}z_{fv^{\prime}}(g_{t}(\boldsymbol{\alpha}_{t+1}\theta_{v^{\prime}}+\tilde{\boldsymbol{a}}_{v^{\prime}\rightarrow
f}^{t+1};v_{v^{\prime}})-g_{t}(\boldsymbol{a}_{v^{\prime}\rightarrow
f}^{t+1};v_{v^{\prime}})).$
The terms in this sum are mutually independent, and
$\tilde{\boldsymbol{a}}_{v^{\prime}\rightarrow
f}^{t+1},\boldsymbol{a}_{v^{\prime}\rightarrow f}^{t+1},\theta_{v^{\prime}}$
are independent of $z_{f^{\prime}v}$. Thus,
$\displaystyle\mathbb{E}[(\tilde{b}_{f\rightarrow v}^{t+1}-b_{f\rightarrow
v}^{t+1})^{2}]$
$\displaystyle=\frac{p-1}{n}\mathbb{E}[(g_{t}(\boldsymbol{\alpha}_{t+1}\theta_{v^{\prime}}+\tilde{\boldsymbol{a}}_{v^{\prime}\rightarrow
f}^{t+1};v_{v^{\prime}})-g_{t}(\boldsymbol{a}_{v^{\prime}\rightarrow
f}^{t+1};v_{v^{\prime}}))^{2}]$
$\displaystyle\leq\frac{L^{2}(p-1)(t+1)}{n}\sum_{s=1}^{t+1}\mathbb{E}[(\alpha_{s}\theta_{v^{\prime}}+\tilde{a}_{v^{\prime}\rightarrow
f}^{s}-a_{v^{\prime}\rightarrow f}^{s})^{2}]\rightarrow 0.$
This completes the proof of (44a) at $s=t+1$.
Inductive step 2: If (44) holds for $1\leq s\leq t$, then (44b) holds for
$s=t+1$.
By Lipschitz continuity,
$\displaystyle\left|a_{v\rightarrow f}^{t+1}-\sum_{f^{\prime}\in\partial
v\setminus f}z_{f^{\prime}v}\tilde{f}_{t}(\tilde{b}_{f^{\prime}\rightarrow
v}^{0},\boldsymbol{b}_{f^{\prime}\rightarrow
v}^{t};w_{f^{\prime}},u_{f^{\prime}})\right|\leq
L|\theta_{v}|\sum_{f^{\prime}\in\partial v\setminus f}z_{f^{\prime}v}^{2},$
where $L$ is a Lipschitz constant for $\tilde{f}_{t}$. The right-hand side has
bounded fourth moment, so we must only show that the sum in the previous
display has bounded fourth moment. The quantity
$\tilde{f}_{t}(\tilde{b}_{f^{\prime}\rightarrow
v}^{0},\boldsymbol{b}_{f^{\prime}\rightarrow
v}^{t};w_{f^{\prime}},u_{f^{\prime}})$ has bounded fourth moment by the
inductive hypothesis and Lipschitz continuity of $\tilde{f}_{t}$. Because
$z_{f^{\prime}v}$ is independent of the argument to $\tilde{f}_{t}$ and has
fourth moment $3/n^{2}$, the product
$z_{f^{\prime}v}\tilde{f}_{t}(\tilde{b}_{f^{\prime}\rightarrow
v}^{0},\boldsymbol{b}_{f^{\prime}\rightarrow
v}^{t};w_{f^{\prime}},u_{f^{\prime}})$ has mean 0 and fourth moment
$O(1/n^{2})$. Because these products are mean zero and independent across
$f^{\prime}$, their sum has bounded fourth moment. We conclude
$a_{v\rightarrow f}^{t+1}$ has bounded fourth moment as well.
Recall $b_{f\rightarrow v}^{t+1}=\sum_{v^{\prime}\in\partial f\setminus
v}z_{fv^{\prime}}g_{t}(\boldsymbol{a}_{v^{\prime}\rightarrow
f}^{t+1};v_{v^{\prime}})$. The terms in the sum are independent, and
$z_{fv^{\prime}}$ is independent of $(\boldsymbol{a}_{v^{\prime}\rightarrow
f}^{t+1},v_{v^{\prime}})$. Using the Lipschitz continuity of $g_{t}$ and the
inductive hypothesis, we conclude $b_{f\rightarrow v}^{t+1}$ has bounded
fourth moment by the same argument as in the preceding paragraph.
We conclude (44b) at $s=t+1$.
The induction is complete, and (44a) holds for all $s\geq 1$. Lemma 2 follows
by combining Lemma 1 and Eq. (44a).
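As a numerical companion to (44a), the sketch below (ours, not from the paper) checks the $s=1$ decomposition $a^{1}\approx\alpha_{1}\theta+{\mathsf{N}}(0,T_{1,1})$ in the regression model, under the hypothetical link $h(x,w)=x+w$ and update $f_{0}(y,u)=\tanh(y)$; the coefficient $\alpha_{1}$ is estimated by the Stein-type derivative $\mathbb{E}[\tanh^{\prime}(Y)]$.

```python
import numpy as np

# Numeric sanity check (ours) of (44a) at s = 1 in the regression model:
# a^1 decomposes as alpha_1*theta plus approximately Gaussian noise of
# variance E[f_0(Y, U)^2], for the hypothetical choices h(x, w) = x + w and
# f_0(y, u) = tanh(y).
rng = np.random.default_rng(3)
n, p = 3000, 1500
theta = rng.normal(size=p)
w = rng.normal(size=n)
X = rng.normal(scale=1 / np.sqrt(n), size=(n, p))
y = X @ theta + w                        # y_f = h(<x_f, theta>, w_f)

r0 = np.tanh(y)                          # f_0 applied to the observations
a1 = X.T @ r0                            # beliefs a_v^1 = sum_f x_{fv} f_0(y_f)
alpha1 = np.mean(1.0 - np.tanh(y) ** 2)  # Stein coefficient, E[tanh'(Y)]
c1 = a1 - alpha1 * theta                 # centered iterate

print("Var(a^1 - alpha_1*theta):", round(c1.var(), 3))
print("prediction E[tanh(Y)^2] :", round(np.mean(np.tanh(y) ** 2), 3))
```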
### C.3 Message passing in the low-rank matrix estimation model
As in the preceding section, we prove Lemma 2 for the low-rank matrix
estimation model by showing that the iteration (19) is well approximated by a
Gaussian message passing algorithm after a change of variables. The functions
in the Gaussian message passing algorithm are defined in terms of the
functions $f_{t},g_{t}$ of the original message passing algorithm (19).
$\displaystyle\tilde{f}_{t}(\tilde{b}^{0},\cdots,\tilde{b}^{t},w,u):=f_{t}(\tilde{b}^{1}+\gamma_{1}w,\cdots,\tilde{b}^{t}+\gamma_{t}w;0,u),$
$\displaystyle\tilde{g}_{t}(\tilde{a}^{1},\cdots,\tilde{a}^{t};\theta,v):=g_{t}(\tilde{a}^{1}+\alpha_{1}\theta,\cdots,\tilde{a}^{t}+\alpha_{t}\theta;v).$
Note that here $\tilde{f}_{t}$ does not depend on $\tilde{b}^{0}$, and we may
define $\tilde{g}_{0}$ arbitrarily without affecting later iterates. (The
iterate $\tilde{b}^{0}$ only played a role in approximating the
high-dimensional regression message passing algorithm by a Gaussian message
passing algorithm.) Define $(\tilde{a}_{v\rightarrow f}^{t})_{t\geq
1}$, $(\tilde{a}_{v}^{t})_{t\geq 1}$, $(\tilde{q}_{v\rightarrow f}^{t})_{t\geq
0}$, $(\tilde{b}_{f\rightarrow v}^{t})_{t\geq 0}$, $(\tilde{b}_{f}^{t})_{t\geq
0}$, $(\tilde{r}_{f\rightarrow v}^{t})_{t\geq 0}$ via the Gaussian message
passing algorithm (40) with initial data $\theta_{v},v_{v},u_{f},z_{fv}$ and
$w_{f}=\lambda_{f}$. Because $f_{t}$ and $g_{t}$ are Lipschitz, so too are
$\tilde{f}_{t}$ and $\tilde{g}_{t}$. Under the function definitions
$\tilde{f}_{t},\tilde{g}_{t}$ given above and the change of variables
$w_{f}=\lambda_{f}$, the definitions of $\Sigma_{s,s}$ and $T_{s,s^{\prime}}$
in (42) and (37) are equivalent. Thus, Lemma 1 holds for the iterates of this
Gaussian message passing algorithm with the $\boldsymbol{T}_{[1{:}t]}$,
$\boldsymbol{\Sigma}_{[0{:}t]}$ defined by (37).
We claim that for fixed $s\geq 1$, as $n\rightarrow\infty$ we have
$\mathbb{E}[(\alpha_{s}\theta_{v}+\tilde{a}_{v\rightarrow f}^{s}-a_{v\rightarrow f}^{s})^{2}]\rightarrow 0\;\;\text{and}\;\;\mathbb{E}[(\gamma_{s}\lambda_{f}+\tilde{b}_{f\rightarrow v}^{s}-b_{f\rightarrow v}^{s})^{2}]\rightarrow 0,$ (45a)
and
$\mathbb{E}[\theta_{v}^{2}(a_{v\rightarrow f}^{s})^{2}]\;\;\text{and}\;\;\mathbb{E}[\lambda_{f}^{2}(b_{f\rightarrow v}^{s})^{2}]\;\;\text{are bounded for fixed }s.$ (45b)
We show this by induction. There is no base case because the inductive step
works for $t=0$ as written.
Inductive step: If (45) holds for $1\leq s\leq t$, then (45) holds for
$s=t+1$.
We expand
$\displaystyle\alpha_{t+1}\theta_{v}+\tilde{a}_{v\rightarrow
f}^{t+1}-a_{v\rightarrow f}^{t+1}$
$\displaystyle=\alpha_{t+1}\theta_{v}+\sum_{f^{\prime}\in\partial v\setminus
f}z_{f^{\prime}v}(f_{t}(\tilde{\boldsymbol{b}}_{f^{\prime}\rightarrow
v}^{t}+\boldsymbol{\gamma}_{t}\lambda_{f^{\prime}};0,u_{f^{\prime}})-f_{t}(\boldsymbol{b}_{f^{\prime}\rightarrow
v}^{t};0,u_{f^{\prime}}))$
$\displaystyle\quad\quad-\frac{1}{n}\theta_{v}\sum_{f^{\prime}\in\partial
v\setminus f}\lambda_{f^{\prime}}f_{t}(\boldsymbol{b}_{f^{\prime}\rightarrow
v}^{t};0,u_{f^{\prime}})$
$\displaystyle=:\alpha_{t+1}\theta_{v}+\mathsf{I}+\mathsf{II},$
where $\tilde{\boldsymbol{b}}_{f^{\prime}\rightarrow
v}^{t}=(\tilde{b}_{f^{\prime}\rightarrow
v}^{1},\ldots,\tilde{b}_{f^{\prime}\rightarrow v}^{t})$ and
$\boldsymbol{\gamma}_{t}=(\gamma_{1},\ldots,\gamma_{t})$ (note that
$\tilde{b}_{f^{\prime}\rightarrow v}^{0}$ is excluded, which differs from the
notation used in the proof of Lemma 2).
First we analyze $\mathsf{I}$. The terms in the sum defining $\mathsf{I}$ are
mutually independent, and $\tilde{b}_{f^{\prime}\rightarrow v}^{s}$,
$b_{f^{\prime}\rightarrow v}^{s}$, $\lambda_{f^{\prime}}$, $u_{f^{\prime}}$
are independent of $z_{f^{\prime}v}$. Thus,
$\displaystyle\mathbb{E}[\mathsf{I}^{2}]$
$\displaystyle=\frac{n-1}{n}\mathbb{E}[(f_{t}(\tilde{\boldsymbol{b}}_{f^{\prime}\rightarrow
v}^{t}+\boldsymbol{\gamma}_{t}\lambda_{f^{\prime}};0,u_{f^{\prime}})-f_{t}(\boldsymbol{b}_{f^{\prime}\rightarrow
v}^{t};0,u_{f^{\prime}}))^{2}]$
$\displaystyle\leq\frac{L^{2}(n-1)t}{n}\sum_{s=1}^{t}\mathbb{E}[(\tilde{b}_{f^{\prime}\rightarrow
v}^{s}+\gamma_{s}\lambda_{f^{\prime}}-b_{f^{\prime}\rightarrow
v}^{s})^{2}]\rightarrow 0,$
by the inductive hypothesis, where $L$ is a Lipschitz constant of $f_{t}$.
Moreover, because $\theta_{v}$ is independent of $\mathsf{I}$ and has bounded
fourth moment, $\mathbb{E}[\theta_{v}^{2}\mathsf{I}^{2}]\rightarrow 0$ as
well.
Next we analyze $\mathsf{II}$. By the inductive hypothesis and Lemma 1,
$(\boldsymbol{b}_{f^{\prime}\rightarrow
v}^{t},\lambda_{f^{\prime}},u_{f^{\prime}})\stackrel{{\scriptstyle\mathrm{W}}}{{\rightarrow}}(\boldsymbol{\gamma}_{t}\Lambda+\tilde{B}^{t},\Lambda,U),$
where $(\Lambda,U)\sim\mu_{\Lambda,U}$ and
$\tilde{B}^{t}\sim{\mathsf{N}}(\boldsymbol{0}_{t},\boldsymbol{\Sigma}_{[1{:}t]})$
independent. Because $(\boldsymbol{b}^{t},\lambda,u)\mapsto\lambda
f_{t}(\boldsymbol{b}^{t};0,u)$ is uniformly pseudo-Lipschitz of order 2 by
Lemma 1, we have
$\mathbb{E}[\lambda_{f^{\prime}}f_{t}(\boldsymbol{b}_{f^{\prime}\rightarrow
v}^{t};0,u_{f^{\prime}})]\rightarrow\alpha_{t+1}$ by Lemma 2 and the state
evolution recursion (37). Moreover, because $f_{t}$ is Lipschitz, for some
constant $C$
$\displaystyle\mathbb{E}[\lambda_{f^{\prime}}^{2}f_{t}(\boldsymbol{b}_{f^{\prime}\rightarrow
v}^{t};0,u_{f^{\prime}})^{2}]$ $\displaystyle\leq
C\mathbb{E}\left[\lambda_{f^{\prime}}^{2}\left(1+\sum_{s=1}^{t}(b_{f^{\prime}\rightarrow
v}^{s})^{2}+u_{f^{\prime}}^{2}\right)\right]$
$\displaystyle=C\left(\mathbb{E}[\lambda_{f^{\prime}}^{2}]+\sum_{s=1}^{t}\mathbb{E}[\lambda_{f^{\prime}}^{2}(b_{f^{\prime}\rightarrow
v}^{s})^{2}]+\mathbb{E}[\lambda_{f^{\prime}}^{2}u_{f^{\prime}}^{2}]\right),$
which is bounded by the inductive hypothesis and the fourth moment assumption on
$\mu_{\Lambda,U}$. Because the terms in the sum defining $\mathsf{II}$ are
mutually independent, by the weak law of large numbers the preceding
observations imply
$\frac{1}{n}\sum_{f^{\prime}\in\partial v\setminus
f}\lambda_{f^{\prime}}f_{t}(\boldsymbol{b}_{f^{\prime}\rightarrow
v}^{t};0,u_{f^{\prime}})\stackrel{{\scriptstyle
L_{2}}}{{\rightarrow}}\alpha_{t+1}.$
Because $\theta_{v}$ is independent of this sum and has bounded second moment,
we conclude that
$\alpha_{t+1}\theta_{v}+\mathsf{II}=\theta_{v}\left(\alpha_{t+1}-\frac{1}{n}\sum_{f^{\prime}\in\partial
v\setminus f}\lambda_{f^{\prime}}f_{t}(\boldsymbol{b}_{f^{\prime}\rightarrow
v}^{t};0,u_{f^{\prime}})\right)\stackrel{{\scriptstyle
L_{2}}}{{\rightarrow}}0.$
Moreover, because $\theta_{v}$ is independent of the term in parentheses and
has bounded fourth moment,
$\mathbb{E}[\theta_{v}^{2}(\alpha_{t+1}\theta_{v}+\mathsf{II})^{2}]\rightarrow
0$.
Combining the preceding results, we have that
$\mathbb{E}[(\alpha_{t+1}\theta_{v}+\tilde{a}_{v\rightarrow
f}^{t+1}-a_{v\rightarrow f}^{t+1})^{2}]\rightarrow 0$ and
$\mathbb{E}[\theta_{v}^{2}(\alpha_{t+1}\theta_{v}+\tilde{a}_{v\rightarrow
f}^{t+1}-a_{v\rightarrow f}^{t+1})^{2}]$ is bounded. Because $\theta_{v}$ is
independent of $\tilde{a}_{v\rightarrow f}^{t+1}$, the term
$\mathbb{E}[\theta_{v}^{2}(\tilde{a}_{v\rightarrow f}^{t+1})^{2}]$ is bounded,
so also $\mathbb{E}[\theta_{v}^{2}(a_{v\rightarrow f}^{t+1})^{2}]$ is bounded,
as desired.
The argument establishing that
$\mathbb{E}[(\gamma_{t+1}\lambda_{f}+\tilde{b}_{f\rightarrow
v}^{t+1}-b_{f\rightarrow v}^{t+1})^{2}]\rightarrow 0$ and that
$\mathbb{E}[\lambda_{f}^{2}(b_{f\rightarrow v}^{t+1})^{2}]$ is bounded is
entirely analogous. The induction is complete, and (45) holds for all $s$.
Lemma 2 follows by combining Lemma 1 and Eq. (45).
## Appendix D Proof of information-theoretic lower bounds on the computation
tree (Lemma 3)
In this section, we prove Lemma 3 in both the high-dimensional regression and
low-rank matrix estimation models. We restrict ourselves to the case $r=1$ and
$k=1$ (with $k$ the dimensionality of $\boldsymbol{W}$) because the proof for
$r>1$ or $k>1$ is completely analogous but would complicate notation.
For any pair of nodes $u,u^{\prime}$ in the tree $\mathcal{T}$, let
$d(u,u^{\prime})$ denote the length (number of edges) of the shortest path
between nodes $u$ and $u^{\prime}$ in the tree. Let
$\mathcal{T}_{u,k}=(\mathcal{V}_{u,k},\mathcal{F}_{u,k},\mathcal{E}_{u,k})$ be
the radius-$k$ neighborhood of node $u$; that is,
$\displaystyle\mathcal{V}_{u,k}=\\{v\in\mathcal{V}\mid d(u,v)\leq k\\},$
$\displaystyle\mathcal{F}_{u,k}=\\{f\in\mathcal{F}\mid d(u,f)\leq k\\},$
$\displaystyle\mathcal{E}_{u,k}=\\{(f,v)\in\mathcal{E}\mid\max\\{d(u,f),d(u,v)\\}\leq
k\\}.$
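Operationally, $\mathcal{T}_{u,k}$ is computed by a breadth-first search truncated at depth $k$; a minimal sketch of ours, on an adjacency-map representation of the tree:

```python
from collections import deque

# BFS sketch (ours) of the radius-k neighborhood T_{u,k}: all nodes within
# graph distance k of u, for a bipartite tree given as an adjacency map.
def neighborhood(adj, u, k):
    dist, frontier = {u: 0}, deque([u])
    while frontier:
        x = frontier.popleft()
        if dist[x] == k:
            continue                     # do not expand past depth k
        for y in adj[x]:
            if y not in dist:
                dist[y] = dist[x] + 1
                frontier.append(y)
    return set(dist)

adj = {'v0': ['f0'], 'f0': ['v0', 'v1'], 'v1': ['f0', 'f1'], 'f1': ['v1']}
print(neighborhood(adj, 'v0', 2))        # {'v0', 'f0', 'v1'}
```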
With some abuse of notation, we will often use
$\mathcal{T}_{u,k},\mathcal{V}_{u,k},\mathcal{F}_{u,k},\mathcal{E}_{u,k}$ to
denote either the collection of observations corresponding to nodes and edges
in these sets or the $\sigma$-algebra generated by these obervations. No
confusion should result. Note, our convention is that when used to denote a
$\sigma$-algebra or collection of random variables, only observed random
variables are included. Thus, in the high-dimensional regression model,
$\mathcal{T}_{u,k}$ is the $\sigma$-algebra generated by the local
observations $x_{fv}$, $y_{f}$, $v_{v}$, and $u_{f}$; in the low-rank matrix
estimation, it is the $\sigma$-algebra generated by the local observations
$x_{fv}$, $v_{v}$, and $u_{f}$. We also denote by $\mathcal{T}_{v\rightarrow
f}^{t,k}$ the collection of observations associated to edges or nodes of
$\mathcal{T}$ which are separated from $f$ by $v$ by at least $k$ intervening
edges and at most $t$ intervening edges. For example,
$\mathcal{T}_{v\rightarrow f}^{1,1}$ contains only
$(y_{f^{\prime}})_{f^{\prime}\in\partial v\setminus f}$, and
$\mathcal{T}_{v\rightarrow f}^{2,1}$ additionally contains the observations
$v_{v^{\prime}}$ and $x_{f^{\prime}v^{\prime}}$ for $v^{\prime}\in\partial
f^{\prime}\setminus v$ for some $f^{\prime}\in\partial v\setminus f$. The
collections (or $\sigma$-algebras) $\mathcal{V}_{v\rightarrow f}^{t,k}$,
$\mathcal{F}_{v\rightarrow f}^{t,k}$, $\mathcal{E}_{v\rightarrow f}^{t,k}$ are
defined similarly, as are the versions of these where the roles of $v$ and $f$
are reversed.
### D.1 Information-theoretic lower bound in the high-dimensional regression
model
In this section, we prove Lemma 3 in the high-dimensional regression model.
Note that the conditions on the conditional density in assumption R4 are
equivalent to positivity, boundedness, and the existence of finite,
non-negative constants $q_{k}^{\prime}$ such that
$\frac{|\partial_{x}^{k}p(y|x)|}{p(y|x)}\leq q_{k}^{\prime}$ for $1\leq k\leq
5$. We will often use this form of the assumption without further comment.
This implies that for any random variable $A$
$\frac{|\partial_{x}^{k}\mathbb{E}[p(y|x+A)]|}{\mathbb{E}[p(y|x+A)]}\leq\int\frac{|\partial_{x}^{k}p(y|x+a)|}{p(y|x+a)}\frac{p(y|x+a)}{\mathbb{E}[p(y|x+A)]}\mu_{A}({\mathrm{d}}a)\leq
q_{k}^{\prime},$ (46)
because $p(y|x+a)/\mathbb{E}[p(y|x+A)]$ is a probability density with respect
to $\mu_{A}$, the distribution of $A$.
Denote the regular conditional probability of $\Theta$ conditional on $V$ for
the measure $\mu_{\Theta,V}$ by
$\mu_{\Theta|V}:\mathbb{R}\times\mathcal{B}\rightarrow[0,1]$, where
$\mathcal{B}$ denotes the Borel $\sigma$-algebra on $\mathbb{R}$. The
posterior of $\theta_{v}$ given $\mathcal{T}_{v,2t}$ has density with respect
to $\mu_{\Theta|V}(v_{v},\cdot)$ given by
$p_{v}(\vartheta|\mathcal{T}_{v,2t})\propto\int\prod_{f\in\mathcal{F}_{v,2t}}p(y_{f}\mid\sum_{v^{\prime}\in\partial
f}\vartheta_{v^{\prime}}X_{v^{\prime}f},u_{f})\prod_{v^{\prime}\in\mathcal{V}_{v,2t}\setminus
v}\mu_{\Theta|V}(v_{v^{\prime}},\mathrm{d}\vartheta_{v^{\prime}}).$
Asymptotically, the posterior density with respect to
$\mu_{\Theta|V}(v_{v},\cdot)$ behaves like that produced by a Gaussian
observation of $\theta_{v}$ with variance $\tau_{t}^{2}$, where $\tau_{t}$ is
defined by (5).
###### Lemma 1.
In the high-dimensional regression model, there exist
$\mathcal{T}_{v,2t}$-measurable random variables $\tau_{v,t},\chi_{v,t}$ such
that
$p_{v}(\vartheta|\mathcal{T}_{v,2t})\propto\exp\left(-\frac{1}{2\tau_{v,t}^{2}}(\chi_{v,t}-\vartheta)^{2}+o_{p}(1)\right),$
where $o_{p}(1)$ has no $\vartheta$ dependence. Moreover,
$(\chi_{v,t},\tau_{v,t},\theta_{v},v_{v})\stackrel{{\scriptstyle\mathrm{d}}}{{\rightarrow}}(\Theta+\tau_{t}G,\tau_{t},\Theta,V)$
where $(\Theta,V)\sim\mu_{\Theta,V}$, $G\sim{\mathsf{N}}(0,1)$ independent of
$\Theta,V$, and $\tau_{t}$ is given by (5).
###### Proof of Lemma 1.
We compute the posterior density $p_{v}(\vartheta|\mathcal{T}_{v,2t})$ via an
iteration called belief propagation. For each edge $(v,f)\in\mathcal{E}$,
belief propagation generates a pair of sequences of real-valued functions
$(m_{v\rightarrow f}^{t}(\vartheta))_{t\geq 0},(m_{f\rightarrow
v}^{t}(\vartheta))_{t\geq 0}$. The iteration is
$\displaystyle m_{v\rightarrow f}^{0}(\vartheta)=1,$ $\displaystyle
m_{f\rightarrow v}^{s}(\vartheta)\propto\int
p(y_{f}|X_{fv}\vartheta+\sum_{v^{\prime}\in\partial f\setminus
v}X_{fv^{\prime}}\vartheta_{v^{\prime}},u_{f})\prod_{v^{\prime}\in\partial
f\setminus v}m_{v^{\prime}\rightarrow
f}^{s}(\vartheta_{v^{\prime}})\prod_{v^{\prime}\in\partial f\setminus
v}\mu_{\Theta|V}(v_{v^{\prime}},\mathrm{d}\vartheta_{v^{\prime}}),$
$\displaystyle m_{v\rightarrow
f}^{s+1}(\vartheta)\propto\prod_{f^{\prime}\in\partial v\setminus
f}m_{f^{\prime}\rightarrow v}^{s}(\vartheta),$
with normalization $\int m_{f\rightarrow
v}^{t}(\vartheta)\mu_{\Theta|V}(v_{v},\mathrm{d}\vartheta)=\int
m_{v\rightarrow f}^{t}(\vartheta)\mu_{\Theta|V}(v_{v},\mathrm{d}\vartheta)=1$.
For any variable node $v$,
$p_{v}(\vartheta|\mathcal{T}_{v,2t})\propto\prod_{f\in\partial
v}m_{f\rightarrow v}^{t-1}(\vartheta).$ (47)
This equation is exact.
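For intuition, a single update $m_{f\rightarrow v}^{s}$ can be computed exactly when the prior has finite support, since the integral reduces to a finite sum. The toy sketch below is ours (one factor with a single other neighbor, Gaussian likelihood $p(y|x)\propto e^{-(y-x)^{2}/2}$, and a uniform two-point prior).

```python
import numpy as np

# Toy sketch (ours) of one belief propagation update m_{f->v}: factor f with
# neighbors {v, v'}, Gaussian likelihood p(y|x) ~ exp(-(y-x)^2/2), and a
# uniform prior on {-1, +1} (messages stored on the prior's support).
y_f, x_fv, x_fvp = 0.7, 0.9, -0.4        # observation and design entries
support = np.array([-1.0, 1.0])          # support of the prior on Theta
m_vp_to_f = np.array([0.5, 0.5])         # normalized incoming message from v'

def p(y, x):
    return np.exp(-0.5 * (y - x) ** 2)

# m_{f->v}(theta) ~ sum_{theta'} p(y_f | x_fv*theta + x_fv'*theta') m_{v'->f}(theta')
m_f_to_v = np.array([
    sum(p(y_f, x_fv * th + x_fvp * thp) * m_vp_to_f[j]
        for j, thp in enumerate(support))
    for th in support
])
m_f_to_v /= m_f_to_v.sum()               # normalize against the prior weights
print(dict(zip(support, m_f_to_v.round(4))))
```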
We define several quantities related to the belief propagation iteration.
$\displaystyle\mu_{v\rightarrow f}^{s}$ $\displaystyle=\int\vartheta
m_{v\rightarrow f}^{s}(\vartheta)\mu_{\Theta|V}(v_{v},\mathrm{d}\vartheta),$
$\displaystyle(\tilde{\tau}_{v\rightarrow f}^{s})^{2}$
$\displaystyle=\int\vartheta^{2}m_{v\rightarrow
f}^{s}(\vartheta)\mu_{\Theta|V}(v_{v},\mathrm{d}\vartheta)-(\mu_{v\rightarrow
f}^{s})^{2},$ $\displaystyle\mu_{f\rightarrow v}^{s}$
$\displaystyle=\sum_{v^{\prime}\in\partial f\setminus
v}x_{fv^{\prime}}\mu_{v^{\prime}\rightarrow f}^{s},$
$\displaystyle(\tilde{\tau}_{f\rightarrow v}^{s})^{2}$
$\displaystyle=\sum_{v^{\prime}\in\partial f\setminus
v}x_{fv^{\prime}}^{2}(\tilde{\tau}_{v^{\prime}\rightarrow f}^{s})^{2},$
$\displaystyle a_{f\rightarrow v}^{s}$
$\displaystyle=\frac{1}{x_{fv}}\frac{\mathrm{d}\phantom{b}}{\mathrm{d}\vartheta}\log
m_{f\rightarrow v}^{s}(\vartheta)\Big{|}_{\vartheta=0},$ $\displaystyle
b_{f\rightarrow v}^{s}$
$\displaystyle=-\frac{1}{x_{fv}^{2}}\frac{\mathrm{d}^{2}\phantom{b}}{\mathrm{d}\vartheta^{2}}\log
m_{f\rightarrow v}^{s}(\vartheta)\Big{|}_{\vartheta=0},$ $\displaystyle
a_{v\rightarrow f}^{s}$
$\displaystyle=\frac{\mathrm{d}\phantom{b}}{\mathrm{d}\vartheta}\log
m_{v\rightarrow f}^{s}(\vartheta)\Big{|}_{\vartheta=0},$ $\displaystyle
b_{v\rightarrow f}^{s}$
$\displaystyle=-\frac{\mathrm{d}^{2}\phantom{b}}{\mathrm{d}\vartheta^{2}}\log
m_{v\rightarrow f}^{s}(\vartheta)\Big{|}_{\vartheta=0},$
$\displaystyle\chi_{v\rightarrow f}^{s}$ $\displaystyle=a_{v\rightarrow
f}^{s}/b_{v\rightarrow f}^{s},$ $\displaystyle(\tau_{v\rightarrow f}^{s})^{2}$
$\displaystyle=1/b_{v\rightarrow f}^{s}.$
Lemma 1 follows from the following asymptotic characterization of the
quantities in the preceding display in the limit $n,p\rightarrow\infty$,
$n/p\rightarrow\delta$:
$\begin{gathered}\qquad\mathbb{E}[(\mu_{v\rightarrow
f}^{s})^{2}]\rightarrow\delta\sigma_{s}^{2},\qquad\mathbb{E}[(\tilde{\tau}_{v\rightarrow
f}^{s})^{2}]\rightarrow\delta\tilde{\tau}_{s}^{2},\\\ (\mu_{f\rightarrow
v}^{s},u_{f})\stackrel{{\scriptstyle\mathrm{d}}}{{\rightarrow}}{\mathsf{N}}(0,\sigma_{s}^{2})\otimes\mu_{U},\qquad(\tilde{\tau}_{f\rightarrow
v}^{s})^{2}\stackrel{{\scriptstyle\mathrm{p}}}{{\rightarrow}}\tilde{\tau}_{s}^{2},\\\
(\theta_{v},v_{v},a_{v\rightarrow f}^{s}/b_{v\rightarrow
f}^{s},b_{v\rightarrow
f}^{s})\stackrel{{\scriptstyle\mathrm{d}}}{{\rightarrow}}(\Theta,V,\Theta+\tau_{s}G,1/\tau_{s}^{2}),\end{gathered}$
(48)
where in the last line $\Theta\sim\mu_{\Theta}$, $G\sim{\mathsf{N}}(0,1)$
independent, and $\sigma_{s}^{2},\tau_{s}^{2}$ are defined in (5). By
symmetry, the distribution of these quantities does not depend upon $v$ or
$f$, so that the limits holds for all $v,f$ once we establish them for any
$v,f$. We establish the limits inductively in $s$.
Base case: $\mathbb{E}[(\mu_{v\rightarrow
f}^{0})^{2}]\rightarrow\delta\sigma_{0}^{2}$ and
$\mathbb{E}[(\tilde{\tau}_{v\rightarrow
f}^{0})^{2}]\rightarrow\delta\tilde{\tau}_{0}^{2}$.
Observe that $\mu_{v\rightarrow f}^{0}=\int\vartheta\,\mu_{\Theta|V}(v_{v},\mathrm{d}\vartheta)=\mathbb{E}_{\Theta,V}[\Theta|V=v_{v}]$.
Because $v_{v}\sim\mu_{V}$, we have $\mathbb{E}[(\mu_{v\rightarrow f}^{0})^{2}]=\mathbb{E}_{\Theta,V}[\mathbb{E}_{\Theta,V}[\Theta|V]^{2}]=\mathbb{E}[\Theta^{2}]-\textsf{mmse}_{\Theta,V}(\infty)=\delta\sigma_{0}^{2}$.
Similarly, $(\tilde{\tau}_{v\rightarrow f}^{0})^{2}=\operatorname{Var}_{\Theta,V}(\Theta|V=v_{v})$, so that
$\mathbb{E}[(\tilde{\tau}_{v\rightarrow f}^{0})^{2}]=\textsf{mmse}_{\Theta,V}(\infty)=\delta\tilde{\tau}_{0}^{2}$.
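As a concrete instance (illustrative only): if $V$ is independent of $\Theta$ and the prior is centered, then $\mu_{v\rightarrow f}^{0}=\mathbb{E}[\Theta]=0$, so $\delta\sigma_{0}^{2}=\mathbb{E}[\Theta^{2}]-\textsf{mmse}_{\Theta,V}(\infty)=0$ while $\delta\tilde{\tau}_{0}^{2}=\operatorname{Var}(\Theta)$: before any observations, uninformative side information carries no signal and the full prior variance remains.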
Inductive step 1: If $\mathbb{E}[(\mu_{v\rightarrow
f}^{s})^{2}]\rightarrow\delta\sigma_{s}^{2}$, then $(\mu_{f\rightarrow
v}^{s},u_{f})\stackrel{{\scriptstyle\mathrm{d}}}{{\rightarrow}}{\mathsf{N}}(0,\sigma_{s}^{2})\otimes\mu_{U}$.
The quantity $\mu_{v^{\prime}\rightarrow f}^{s}$ is
$\mathcal{T}_{v^{\prime}\rightarrow f}^{2s,0}$-measurable, whence it is
independent of $x_{fv^{\prime}}$ and $u_{f}$. Moreover,
$(\mu_{v^{\prime}\rightarrow f}^{s},x_{fv^{\prime}})$ are mutually independent as we vary
$v^{\prime}\in\partial f\setminus v$. Thus, $\mu_{f\rightarrow
v}^{s}|\mathcal{T}_{f\rightarrow
v}^{2s+1,1}\sim{\mathsf{N}}(0,\frac{1}{n}\sum_{v^{\prime}\in\partial
f\setminus v}(\mu_{v^{\prime}\rightarrow f}^{s})^{2})$. Note that
$\mathbb{E}[\frac{1}{n}\sum_{v^{\prime}\in\partial f\setminus
v}(\mu_{v^{\prime}\rightarrow f}^{s})^{2}]=(p-1)\mathbb{E}[(\mu_{v\rightarrow
f}^{s})^{2}]/n\rightarrow\sigma_{s}^{2}$ by the inductive hypothesis.
Moreover, $\mu_{v\rightarrow f}^{s}$ has bounded fourth moments because it is
bounded by $M$. By the weak law of large numbers,
$\frac{1}{n}\sum_{v^{\prime}\in\partial f\setminus
v}(\mu_{v^{\prime}\rightarrow
f}^{s})^{2}\stackrel{{\scriptstyle\mathrm{p}}}{{\rightarrow}}\sigma_{s}^{2}$.
We conclude by Slutsky’s theorem and independence that $(\mu_{f\rightarrow
v}^{s},u_{f})\stackrel{{\scriptstyle\mathrm{d}}}{{\rightarrow}}{\mathsf{N}}(0,\sigma_{s}^{2})\otimes\mu_{U}$.
Inductive step 2: If $\mathbb{E}[(\tilde{\tau}_{v\rightarrow
f}^{s})^{2}]\rightarrow\delta\tilde{\tau}_{s}^{2}$, then
$(\tilde{\tau}_{f\rightarrow
v}^{s})^{2}\stackrel{{\scriptstyle\mathrm{p}}}{{\rightarrow}}\tilde{\tau}_{s}^{2}$.
The quantity $\tilde{\tau}_{v^{\prime}\rightarrow f}^{s}$ is
$\mathcal{T}_{v^{\prime}\rightarrow f}^{2s,0}$-measurable, whence it is
independent of $x_{fv^{\prime}}$. Therefore,
$\mathbb{E}[\sum_{v^{\prime}\in\partial f\setminus
v}x_{fv^{\prime}}^{2}(\tilde{\tau}_{v^{\prime}\rightarrow
f}^{s})^{2}]=(p-1)\mathbb{E}[(\tilde{\tau}_{v\rightarrow
f}^{s})^{2}]/n\rightarrow\tilde{\tau}_{s}^{2}.$
Moreover, $(\tilde{\tau}_{v^{\prime}\rightarrow f}^{s},x_{fv^{\prime}})$ are mutually
independent as we vary $v^{\prime}\in\partial f\setminus v$, and because
$\tilde{\tau}_{v\rightarrow f}^{s}$ is bounded by $M$, the terms
$nx_{fv^{\prime}}^{2}(\tilde{\tau}_{v^{\prime}\rightarrow f}^{s})^{2}$ have bounded
fourth moments. By the weak law of large numbers, $(\tilde{\tau}_{f\rightarrow
v}^{s})^{2}\stackrel{{\scriptstyle\mathrm{p}}}{{\rightarrow}}\tilde{\tau}_{s}^{2}$.
Inductive step 3: If $(\mu_{f\rightarrow
v}^{s},u_{f},\tilde{\tau}_{f\rightarrow
v}^{s})\stackrel{{\scriptstyle\mathrm{d}}}{{\rightarrow}}{\mathsf{N}}(0,\sigma_{s}^{2})\otimes\mu_{U}\otimes\delta_{\tilde{\tau}_{s}}$,
then $(\theta_{v},v_{v},a_{v\rightarrow f}^{s+1}/b_{v\rightarrow
f}^{s+1},b_{v\rightarrow
f}^{s+1})\stackrel{{\scriptstyle\mathrm{d}}}{{\rightarrow}}(\Theta,V,\Theta+\tau_{s+1}G,1/\tau_{s+1}^{2})$
where $G\sim{\mathsf{N}}(0,1)$ independent of $(\Theta,V)\sim\mu_{\Theta,V}$.
For all $(f,v)\in\mathcal{E}$ and $s\geq 1$, define
$p_{f\rightarrow v}^{s}(y;x)=\int p(y|x+\sum_{v^{\prime}\in\partial f\setminus
v}x_{fv^{\prime}}\vartheta_{v^{\prime}},u_{f})\prod_{v^{\prime}\in\partial
f\setminus v}m_{v^{\prime}\rightarrow
f}^{s}(\vartheta_{v^{\prime}})\prod_{v^{\prime}\in\partial f\setminus
v}\mu_{\Theta|V}(v_{v^{\prime}},\mathrm{d}\vartheta_{v^{\prime}}).$
More compactly, we may write $p_{f\rightarrow v}^{s}(y;x)=\mathbb{E}_{\{\Theta_{v^{\prime}}\}}[p(y|x+\sum_{v^{\prime}\in\partial f\setminus v}x_{fv^{\prime}}\Theta_{v^{\prime}},u_{f})]$, where it is
understood that the expectation is taken over $\Theta_{v^{\prime}}$
independent with densities $m_{v^{\prime}\rightarrow f}^{s}$ with respect to
$\mu_{\Theta|V}(v_{v^{\prime}},\cdot)$. Note that for all $x$, we have
$\int p_{f\rightarrow v}^{s}(y;x){\mathrm{d}}y=1$
everywhere. That is, $p_{f\rightarrow v}^{s}(\cdot;x)$ is a probability
density with respect to Lebesgue measure. We will denote by
$\dot{p}_{f\rightarrow v}^{s}(y;x)=\frac{\mathrm{d}\phantom{b}}{\mathrm{d}\xi}p_{f\rightarrow v}^{s}(y;\xi)\big{|}_{\xi=x}$, and likewise for higher derivatives. These
derivatives exist and may be taken under the integral by R4. Define
$a_{f\rightarrow v}^{s}(y)=\frac{\mathrm{d}\phantom{b}}{\mathrm{d}x}\log
p_{f\rightarrow v}^{s}(y;x)\Big{|}_{x=0}\qquad\text{ and }\qquad
b_{f\rightarrow
v}^{s}(y)=-\frac{\mathrm{d}^{2}\phantom{b}}{\mathrm{d}x^{2}}\log
p_{f\rightarrow v}^{s}(y;x)\Big{|}_{x=0}.$
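For orientation, consider the special case of a Gaussian channel $p(y|x,u)={\mathsf{N}}(y;x,\sigma_{w}^{2})$ (an illustrative assumption, not one made in the text). Replacing $p_{f\rightarrow v}^{s}(y;x)$ by its Gaussian surrogate $\mathbb{E}_{G_{1}}[p(y|x+\mu_{f\rightarrow v}^{s}+\tilde{\tau}_{f\rightarrow v}^{s}G_{1},u_{f})]$, which here is the density of ${\mathsf{N}}(x+\mu_{f\rightarrow v}^{s},\sigma_{w}^{2}+(\tilde{\tau}_{f\rightarrow v}^{s})^{2})$, gives
$a_{f\rightarrow v}^{s}(y)=\frac{y-\mu_{f\rightarrow v}^{s}}{\sigma_{w}^{2}+(\tilde{\tau}_{f\rightarrow v}^{s})^{2}}\qquad\text{and}\qquad b_{f\rightarrow v}^{s}(y)=\frac{1}{\sigma_{w}^{2}+(\tilde{\tau}_{f\rightarrow v}^{s})^{2}},$
so the score is linear in $y$ and the curvature is deterministic; if $y$ is drawn from the surrogate density, then $\mathbb{E}[a_{f\rightarrow v}^{s}(y)^{2}]=\mathbb{E}[b_{f\rightarrow v}^{s}(y)]$, anticipating the identities established in the limit below.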
For fixed $y$, the quantity $a_{f^{\prime}\rightarrow v}^{s}(y)$ is
independent of $x_{f^{\prime}v}$, and $(a_{f^{\prime}\rightarrow
v}^{s}(y),x_{f^{\prime}v})$ are mutually independent for
$f^{\prime}\in\partial v\setminus f$. Observe that
$\displaystyle a_{f\rightarrow v}^{s}=a_{f\rightarrow
v}^{s}(y_{f})\qquad\text{and}\qquad a_{v\rightarrow
f}^{s+1}=\sum_{f^{\prime}\in\partial v\setminus
f}x_{f^{\prime}v}a_{f^{\prime}\rightarrow v}^{s}(y_{f^{\prime}}),$
$\displaystyle b_{f\rightarrow v}^{s}=b_{f\rightarrow
v}^{s}(y_{f})\qquad\text{and}\qquad b_{v\rightarrow
f}^{s+1}=\sum_{f^{\prime}\in\partial v\setminus
f}x_{f^{\prime}v}^{2}b_{f^{\prime}\rightarrow v}^{s}(y_{f^{\prime}}).$
We will study the distributions of $a_{f\rightarrow v}^{s},a_{v\rightarrow
f}^{s+1},b_{f\rightarrow v}^{s}$, and $b_{v\rightarrow f}^{s+1}$ under several
measures, which we now introduce. Define $P_{v,\theta}$ to be the
distribution of the regression model with $\theta_{v}$ forced to be $\theta$
and $v_{v}$ forced to be 0. That is, under $P_{v,\theta}$, we have
$(\theta_{v^{\prime}},v_{v^{\prime}})\stackrel{{\scriptstyle\mathrm{iid}}}{{\sim}}\mu_{\Theta,V}$
for $v^{\prime}\neq v$, $v_{v}=0$ and $\theta_{v}=\theta$, the features are
distributed independently
$x_{fv^{\prime}}\stackrel{{\scriptstyle\mathrm{iid}}}{{\sim}}{\mathsf{N}}(0,1/n)$
for all $f,v^{\prime}$, and the observations $y_{f}$ are drawn independently
from $p(\cdot|\sum_{v^{\prime}\in\partial f}x_{fv^{\prime}}\theta_{v^{\prime}},u_{f})$ for all $f$. We will consider the
distribution of $a_{f\rightarrow v}^{s},a_{v\rightarrow
f}^{s+1},b_{f\rightarrow v}^{s}$, and $b_{v\rightarrow f}^{s+1}$ under
$P_{v,\theta}$ for $\theta\in[-M,M]$.
We require the following lemmas, whose proofs are deferred to Section D.1.1.
###### Lemma 2.
Under $P_{v,\theta}$ for any $\theta\in[-M,M]$, we have for all fixed $y$ that
$\displaystyle p_{f\rightarrow
v}^{s}(y;0)-\mathbb{E}_{G_{1}}[p(y|\mu_{f\rightarrow
v}^{s}+\tilde{\tau}_{f\rightarrow v}^{s}G_{1},u_{f})]=o_{p}(1),$
$\displaystyle\dot{p}_{f\rightarrow
v}^{s}(y;0)-\mathbb{E}_{G_{1}}[\dot{p}(y|\mu_{f\rightarrow
v}^{s}+\tilde{\tau}_{f\rightarrow v}^{s}G_{1},u_{f})]=o_{p}(1),$
$\displaystyle\ddot{p}_{f\rightarrow
v}^{s}(y;0)-\mathbb{E}_{G_{1}}[\ddot{p}(y|\mu_{f\rightarrow
v}^{s}+\tilde{\tau}_{f\rightarrow v}^{s}G_{1},u_{f})]=o_{p}(1),$
where the expectation is over $G_{1}\sim{\mathsf{N}}(0,1)$. Further, for any
$u$, the functions
$(\mu,\tilde{\tau})\mapsto\mathbb{E}_{G_{1}}[p(y|\mu+\tilde{\tau}G_{1},u)]$,
$(\mu,\tilde{\tau})\mapsto\mathbb{E}_{G_{1}}[\dot{p}(y|\mu+\tilde{\tau}G_{1},u)]$,
and
$(\mu,\tilde{\tau})\mapsto\mathbb{E}_{G_{1}}[\ddot{p}(y|\mu+\tilde{\tau}G_{1},u)]$
are continuous.
###### Lemma 3.
Under $P_{v,\theta}$ for any $\theta\in[-M,M]$, we have for any fixed $s$
$\log\frac{m_{v\rightarrow f}^{s+1}(\vartheta)}{m_{v\rightarrow
f}^{s+1}(0)}=\vartheta a_{v\rightarrow
f}^{s+1}-\frac{1}{2}\vartheta^{2}b_{v\rightarrow f}^{s+1}+O_{p}(n^{-1/2}),$
where $O_{p}(n^{-1/2})$ has no $\vartheta$ dependence, and the statement holds
for $\vartheta\in[-M,M]$.
First we study the distribution of $a_{v\rightarrow f}^{s+1},b_{v\rightarrow
f}^{s+1}$ under $P_{v,0}$. Because $\mu_{f^{\prime}\rightarrow
v}^{s},\tilde{\tau}_{f^{\prime}\rightarrow v}^{s}$ are independent of
$\theta_{v},v_{v}$ for all $f^{\prime}\in\partial v$, their distribution is the
same under $P_{v,\theta}$ for all $\theta\in[-M,M]$ and is equal to their
distribution under the original model. Thus, the inductive hypothesis implies
$(\mu_{f\rightarrow v}^{s},\tilde{\tau}_{f\rightarrow
v}^{s})\xrightarrow[P_{v,0}]{\mathrm{d}}{\mathsf{N}}(0,\sigma_{s}^{2})\times\delta_{\tilde{\tau}_{s}}$.
By Lemma 2, the inductive hypothesis, and Lemma 3, we have for fixed $y$
$\begin{pmatrix}\mathbb{E}_{G_{1}}[p(y|\mu_{f\rightarrow
v}^{s}+\tilde{\tau}_{f\rightarrow v}^{s}G_{1},u_{f})]\\\
\mathbb{E}_{G_{1}}[\dot{p}(y|\mu_{f\rightarrow
v}^{s}+\tilde{\tau}_{f\rightarrow v}^{s}G_{1},u_{f})]\\\
\mathbb{E}_{G_{1}}[\ddot{p}(y|\mu_{f\rightarrow
v}^{s}+\tilde{\tau}_{f\rightarrow
v}^{s}G_{1},u_{f})]\end{pmatrix}\xrightarrow[P_{v,0}]{\mathrm{d}}\begin{pmatrix}\mathbb{E}_{G_{1}}[p(y|\sigma_{s}G_{0}+\tilde{\tau}_{s}G_{1},U)]\\\
\mathbb{E}_{G_{1}}[\dot{p}(y|\sigma_{s}G_{0}+\tilde{\tau}_{s}G_{1},U)]\\\
\mathbb{E}_{G_{1}}[\ddot{p}(y|\sigma_{s}G_{0}+\tilde{\tau}_{s}G_{1},U)]\end{pmatrix},$
where $G_{0},G_{1}\sim{\mathsf{N}}(0,1)$ and $U\sim\mu_{U}$ are independent.
Applying Lemma 2 and Slutsky’s Theorem, we have that
$\begin{pmatrix}p_{f\rightarrow v}^{s}(y;0)\\\ \dot{p}_{f\rightarrow
v}^{s}(y;0)\\\ \ddot{p}_{f\rightarrow
v}^{s}(y;0)\end{pmatrix}\xrightarrow[P_{v,0}]{\mathrm{d}}\begin{pmatrix}\mathbb{E}_{G_{1}}[p(y|\sigma_{s}G_{0}+\tilde{\tau}_{s}G_{1},U)]\\\
\mathbb{E}_{G_{1}}[\dot{p}(y|\sigma_{s}G_{0}+\tilde{\tau}_{s}G_{1},U)]\\\
\mathbb{E}_{G_{1}}[\ddot{p}(y|\sigma_{s}G_{0}+\tilde{\tau}_{s}G_{1},U)]\end{pmatrix}.$
By the Continuous Mapping Theorem,
$\displaystyle p_{f\rightarrow
v}^{s}(y;0)\xrightarrow[P_{v,0}]{\mathrm{d}}\mathbb{E}_{G_{1}}[p(y|\sigma_{s}G_{0}+\tilde{\tau}_{s}G_{1},U)],$
$\displaystyle a_{f\rightarrow v}^{s}(y)\xrightarrow[P_{v,0}]{\mathrm{d}}\frac{\mathrm{d}\phantom{b}}{\mathrm{d}x}\log\mathbb{E}_{G_{1}}[p(y|x+\sigma_{s}G_{0}+\tilde{\tau}_{s}G_{1},U)]\Big{|}_{x=0},$
$\displaystyle b_{f\rightarrow v}^{s}(y)\xrightarrow[P_{v,0}]{\mathrm{d}}-\frac{\mathrm{d}^{2}\phantom{b}}{\mathrm{d}x^{2}}\log\mathbb{E}_{G_{1}}[p(y|x+\sigma_{s}G_{0}+\tilde{\tau}_{s}G_{1},U)]\Big{|}_{x=0}.$
Because the quantity $p(y|x)$ is bounded (assumption R4) and the quantities
$a_{f\rightarrow v}^{s}(y),b_{f\rightarrow v}^{s}(y)$ are bounded by (46), we
have
$\displaystyle\mathbb{E}_{P_{v,0}}[p_{f\rightarrow v}^{s}(y;0)]\rightarrow\mathbb{E}_{G_{0},G_{1},U}[p(y|\sigma_{s}G_{0}+\tilde{\tau}_{s}G_{1},U)],$
$\displaystyle\mathbb{E}_{P_{v,0}}[a_{f\rightarrow v}^{s}(y)^{2}]\rightarrow\mathbb{E}_{G_{0},U}\left[\left(\frac{\mathrm{d}\phantom{b}}{\mathrm{d}x}\log\mathbb{E}_{G_{1}}[p(y|x+\sigma_{s}G_{0}+\tilde{\tau}_{s}G_{1},U)]\Big{|}_{x=0}\right)^{2}\right],$
$\displaystyle\mathbb{E}_{P_{v,0}}[b_{f\rightarrow v}^{s}(y)]\rightarrow-\mathbb{E}_{G_{0},U}\left[\frac{\mathrm{d}^{2}\phantom{b}}{\mathrm{d}x^{2}}\log\mathbb{E}_{G_{1}}[p(y|x+\sigma_{s}G_{0}+\tilde{\tau}_{s}G_{1},U)]\Big{|}_{x=0}\right].$
Under $P_{v,0}$, we have for all $f^{\prime}\in\partial v$ that the random
variable $y_{f^{\prime}}$ is independent of $x_{f^{\prime}v}$. Thus,
conditional on $\mathcal{T}_{v\rightarrow f}^{2s+2,1}$, the random variable
$\sum_{f^{\prime}\in\partial v\setminus
f}x_{f^{\prime}v}a_{f^{\prime}\rightarrow v}^{s}(y_{f^{\prime}})$ is normally
distributed. Specifically,
$\sum_{f^{\prime}\in\partial v\setminus
f}x_{f^{\prime}v}a_{f^{\prime}\rightarrow
v}^{s}(y_{f^{\prime}})\bigm{|}\mathcal{T}_{v\rightarrow
f}^{2s+2,1}\underset{P_{v,0}}{\sim}{\mathsf{N}}\left(0,\frac{1}{n}\sum_{f^{\prime}\in\partial
v\setminus f}(a_{f^{\prime}\rightarrow v}^{s}(y_{f^{\prime}}))^{2}\right).$
Because $(a_{f^{\prime}\rightarrow v}^{s}(y_{f^{\prime}}))^{2}$ is bounded by
(46), if we show $\mathbb{E}_{P_{v,0}}[(a_{f\rightarrow
v}^{s}(y_{f}))^{2}]\rightarrow 1/\tau_{s+1}^{2}$, then the weak law of large
numbers and Slutsky’s theorem will imply that
$a_{v\rightarrow f}^{s+1}=\sum_{f^{\prime}\in\partial v\setminus
f}x_{f^{\prime}v}a_{f^{\prime}\rightarrow
v}^{s}(y_{f^{\prime}})\xrightarrow[P_{v,0}]{\mathrm{d}}{\mathsf{N}}\left(0,1/\tau_{s+1}^{2}\right).$
(49)
We compute
$\displaystyle\mathbb{E}_{P_{v,0}}[(a_{f\rightarrow v}^{s}(y_{f}))^{2}]$
$\displaystyle=\mathbb{E}_{P_{v,0}}[\mathbb{E}_{P_{v,0}}[(a_{f\rightarrow v}^{s}(y_{f}))^{2}|\sigma(\mathcal{T}_{f\rightarrow v}^{2s+1,1},(x_{fv^{\prime}})_{v^{\prime}\in\partial f\setminus v},u_{f})]]$
$\displaystyle=\mathbb{E}_{P_{v,0}}\left[\int a_{f\rightarrow
v}^{s}(y)^{2}p_{f\rightarrow v}^{s}(y;0){\mathrm{d}}y\right]$
$\displaystyle=\int\mathbb{E}_{P_{v,0}}\left[a_{f\rightarrow v}^{s}(y)^{2}p_{f\rightarrow v}^{s}(y;0)\right]{\mathrm{d}}y,$
where the second equation holds because under $P_{v,0}$ we have
$y_{f}\mid\sigma(\mathcal{T}_{f\rightarrow
v}^{2s+1,1},(x_{fv^{\prime}})_{v^{\prime}\in\partial f\setminus v},u_{f})$ has
density $p_{f\rightarrow v}^{s}(\cdot;0)$ with respect to Lebesgue measure,
and the last equation follows by Fubini’s theorem (using the non-negativity of
the integrand). Because $a_{f\rightarrow v}^{s}(y)^{2}\leq(q_{1}^{\prime})^{2}$ is bounded and the probability densities $\mathbb{E}_{P_{v,0}}[p_{f\rightarrow v}^{s}(y;0)]$ converge pointwise to the probability density $\mathbb{E}_{G_{0},G_{1},U}[p(y|\sigma_{s}G_{0}+\tilde{\tau}_{s}G_{1},U)]$, we conclude that
$\displaystyle\mathbb{E}_{P_{v,0}}[(a_{f\rightarrow v}^{s}(y_{f}))^{2}]$
$\displaystyle\rightarrow\int\mathbb{E}_{G_{0},U}\left[\frac{\mathbb{E}_{G_{1}}[\dot{p}(y|\sigma_{s}G_{0}+\tilde{\tau}_{s}G_{1},U)]^{2}}{\mathbb{E}_{G_{1}}[p(y|\sigma_{s}G_{0}+\tilde{\tau}_{s}G_{1},U)]}\right]{\mathrm{d}}y$
$\displaystyle=\mathbb{E}_{G_{0},U}\left[\int\frac{\mathbb{E}_{G_{1}}[\dot{p}(y|\sigma_{s}G_{0}+\tilde{\tau}_{s}G_{1},U)]^{2}}{\mathbb{E}_{G_{1}}[p(y|\sigma_{s}G_{0}+\tilde{\tau}_{s}G_{1},U)]}{\mathrm{d}}y\right]=\frac{1}{\tau_{s+1}^{2}},$
where we have used the alternative characterization of the recursion (5) from
Lemma 4. We conclude (49).
Now we compute the asymptotic behavior of $b_{v\rightarrow f}^{s+1}$ under
$P_{v,0}$. Under $P_{v,0}$, $x_{f^{\prime}v}$ is independent of
$y_{f^{\prime}}$, and $(x_{f^{\prime}v},b_{f^{\prime}\rightarrow
v}^{s}(y_{f^{\prime}}))$ are mutually independent for $f^{\prime}\in\partial
v\setminus f$. Thus,
$\mathbb{E}_{P_{v,0}}[x_{f^{\prime}v}^{2}b_{f^{\prime}\rightarrow
v}^{s}(y_{f^{\prime}})]=\mathbb{E}_{P_{v,0}}[b_{f^{\prime}\rightarrow
v}^{s}(y_{f^{\prime}})]/n$. Because $b_{f^{\prime}\rightarrow
v}^{s}(y_{f^{\prime}})$ is bounded by (46), if we can show that
$\mathbb{E}_{P_{v,0}}[b_{f^{\prime}\rightarrow
v}^{s}(y_{f^{\prime}})]\rightarrow 1/\tau_{s+1}^{2}$, then $b_{v\rightarrow
f}^{s+1}\xrightarrow[P_{v,0}]{\mathrm{p}}1/\tau_{s+1}^{2}$ will follow by the
weak law of large numbers. We compute
$\displaystyle\mathbb{E}_{P_{v,0}}[b_{f\rightarrow v}^{s}(y_{f})]$
$\displaystyle=\mathbb{E}_{P_{v,0}}[\mathbb{E}_{P_{v,0}}[b_{f\rightarrow
v}^{s}(y_{f})|\sigma(\mathcal{T}_{f\rightarrow
v}^{2s+1,1},(x_{fv^{\prime}})_{v^{\prime}\in\partial f\setminus v},u_{f})]]$
$\displaystyle=\mathbb{E}_{P_{v,0}}\left[\int b_{f\rightarrow
v}^{s}(y)p_{f\rightarrow v}^{s}(y;0){\mathrm{d}}y\right]$
$\displaystyle=\int\mathbb{E}_{P_{v,0}}\left[b_{f\rightarrow
v}^{s}(y)p_{f\rightarrow v}^{s}(y;0)\right]{\mathrm{d}}y$
where the last equation follows by Fubini’s theorem (using that the integrand
is bounded by the integrable function
$q_{2}\mathbb{E}_{P_{v,0}}[p_{f\rightarrow v}^{s}(y;0)]$). The integrands
converge point-wise, so that
$\displaystyle\mathbb{E}_{P_{v,0}}[b_{f\rightarrow v}^{s}(y_{f})]$
$\displaystyle\rightarrow\mathbb{E}_{G_{0},U}\left[\int\frac{\mathbb{E}_{G_{1}}[\dot{p}(y|\sigma_{s}G_{0}+\tilde{\tau}_{s}G_{1},U)]^{2}}{\mathbb{E}_{G_{1}}[p(y|\sigma_{s}G_{0}+\tilde{\tau}_{s}G_{1},U)]}{\mathrm{d}}y\right]-\int\mathbb{E}_{G_{0},G_{1},U}[\ddot{p}(y|\sigma_{s}G_{0}+\tilde{\tau}_{s}G_{1},U)]{\mathrm{d}}y$
$\displaystyle=\frac{1}{\tau_{s+1}^{2}},$
where we have concluded that the second integral is zero because
$x\mapsto\mathbb{E}_{G_{0},G_{1},U}[p(y|x+\sigma_{s}G_{0}+\tilde{\tau}_{s}G_{1},U)]$
parameterizes a statistical model whose scores up to order 3 are bounded by
(46). Thus, we conclude that $b_{v\rightarrow
f}^{s+1}\xrightarrow[P_{v,0}]{\mathrm{p}}1/\tau_{s+1}^{2}$.
Now we compute the asymptotic distribution of $(a_{v\rightarrow
f}^{s+1},b_{v\rightarrow f}^{s+1})$ under $P_{v,\theta}$ for any
$\theta\in[-M,M]$. The log-likelihood ratio between $P_{v,\theta}$ and
$P_{v,0}$ is
$\displaystyle\sum_{f^{\prime}\in\partial v}\log\frac{p_{f^{\prime}\rightarrow v}^{s}(y_{f^{\prime}};x_{f^{\prime}v}\theta)}{p_{f^{\prime}\rightarrow v}^{s}(y_{f^{\prime}};0)}$ $\displaystyle=\log\frac{m_{v\rightarrow f}^{s+1}(\theta)}{m_{v\rightarrow f}^{s+1}(0)}+\log\frac{p_{f\rightarrow v}^{s}(y_{f};x_{fv}\theta)}{p_{f\rightarrow v}^{s}(y_{f};0)}$
$\displaystyle=\theta a_{v\rightarrow
f}^{s+1}-\frac{1}{2}\theta^{2}b_{v\rightarrow f}^{s+1}+O_{p}(n^{-1/2}),$
where we have used Lemma 3 and that $\left|\log\frac{p_{f\rightarrow v}^{s}(y_{f};x_{fv}\theta)}{p_{f\rightarrow v}^{s}(y_{f};0)}\right|\leq Mq_{1}|x_{fv}|=O_{p}(n^{-1/2})$. Thus,
$\left(a_{v\rightarrow f}^{s+1},b_{v\rightarrow f}^{s+1},\log\frac{\mathrm{d}P_{v,\theta}}{\mathrm{d}P_{v,0}}\right)\xrightarrow[P_{v,0}]{\mathrm{d}}\left(Z,\frac{1}{\tau_{s+1}^{2}},\theta Z-\frac{1}{2}\frac{\theta^{2}}{\tau_{s+1}^{2}}\right),$
where $Z\sim{\mathsf{N}}(0,1/\tau_{s+1}^{2})$. By Le Cam’s third lemma [Vaa98, Example 6.7], the limit acquires a mean shift equal to its covariance with the limiting log-likelihood ratio, here $\operatorname{Cov}(Z,\theta Z)=\theta/\tau_{s+1}^{2}$, so that
$(a_{v\rightarrow f}^{s+1},b_{v\rightarrow f}^{s+1})\xrightarrow[P_{v,\theta}]{\mathrm{d}}\left(Z^{\prime},\frac{1}{\tau_{s+1}^{2}}\right),$
where $Z^{\prime}\sim{\mathsf{N}}(\theta/\tau_{s+1}^{2},1/\tau_{s+1}^{2})$. By
the Continuous Mapping Theorem [Vaa98, Theorem 2.3], we conclude
$(a_{v\rightarrow f}^{s+1}/b_{v\rightarrow f}^{s+1},b_{v\rightarrow
f}^{s+1})\xrightarrow[P_{v,\theta}]{\mathrm{d}}{\mathsf{N}}(\theta,\tau_{s+1}^{2})\otimes\delta_{1/\tau_{s+1}^{2}}$.
Denote by $P^{*}$ the distribution of the original model. Consider a
continuous bounded function $f$ of $(\theta,\nu,\chi,b)\in\mathbb{R}^{4}$, and
define
$\hat{f}_{n}(\theta,\nu)=\mathbb{E}_{P_{v,\theta}}[f(\theta,\nu,a_{v\rightarrow
f}^{s+1}/b_{v\rightarrow f}^{s+1},b_{v\rightarrow f}^{s+1})]$. Under $P^{*}$,
the random variables $a_{v\rightarrow f}^{s+1},b_{v\rightarrow f}^{s+1}$ are
functions of $\theta_{v}$ and of the random vector
$\boldsymbol{D}:=\mathcal{T}_{v,2t}\setminus\{\theta_{v},v_{v}\}$, which is
independent of $(\theta_{v},v_{v})$. In particular, we may write
$\mathbb{E}_{P^{*}}[f(\theta_{v},v_{v},a_{v\rightarrow
f}^{s+1}/b_{v\rightarrow f}^{s+1},b_{v\rightarrow
f}^{s+1})]=\mathbb{E}_{P^{*}}[f(\theta_{v},v_{v},\chi(\theta_{v},\boldsymbol{D}),B(\theta_{v},\boldsymbol{D}))],$
for some measurable functions $\chi,B$. We see that
$\mathbb{E}_{P^{*}}[f(\theta_{v},v_{v},a_{v\rightarrow
f}^{s+1}/b_{v\rightarrow f}^{s+1},b_{v\rightarrow
f}^{s+1})\mid\theta_{v},v_{v}]=\hat{f}_{n}(\theta_{v},v_{v})$
where
$\hat{f}_{n}(\theta,\nu)=\mathbb{E}_{\boldsymbol{D}}[f(\theta,\nu,\chi(\theta,\boldsymbol{D}),B(\theta,\boldsymbol{D}))],$
with $\boldsymbol{D}$ distributed as it is under $P^{*}$ (see e.g., [Dur10,
Example 5.1.5]). Because $\boldsymbol{D}$ has the same distribution on $P^{*}$
as under $P_{v,\theta}$, we see that in fact
$\hat{f}_{n}(\theta,\nu)=\mathbb{E}_{P_{v,\theta}}[f(\theta,\nu,a_{v\rightarrow
f}^{s+1}/b_{v\rightarrow f}^{s+1},b_{v\rightarrow f}^{s+1})]$. Because
$(a_{v\rightarrow f}^{s+1}/b_{v\rightarrow f}^{s+1},b_{v\rightarrow
f}^{s+1})\xrightarrow[P_{v,\theta}]{\mathrm{d}}{\mathsf{N}}(\theta,\tau_{s+1}^{2})\otimes\delta_{1/\tau_{s+1}^{2}}$,
we conclude that
$\hat{f}_{n}(\theta,\nu)\rightarrow\mathbb{E}_{G}[f(\theta,\nu,\theta+\tau_{s+1}G,\tau_{s+1}^{-2})]$
for all $\theta,\nu$. By bounded convergence and the tower property,
$\mathbb{E}_{\Theta,V}[\hat{f}_{n}(\Theta,V)]\rightarrow\mathbb{E}_{\Theta,V,G}[f(\Theta,V,\Theta+\tau_{s+1}G,\tau_{s+1}^{-2})]$
where $(\Theta,V)\sim\mu_{\Theta,V}$ independent of $G\sim{\mathsf{N}}(0,1)$.
Also by the tower property, we have
$\mathbb{E}_{\Theta,V}[\hat{f}_{n}(\Theta,V)]=\mathbb{E}_{P^{*}}[f(\theta_{v},v_{v},\chi(\theta_{v},\boldsymbol{D}),B(\theta_{v},\boldsymbol{D}))]=\mathbb{E}_{P^{*}}[f(\theta_{v},v_{v},a_{v\rightarrow
f}^{s+1}/b_{v\rightarrow f}^{s+1},b_{v\rightarrow f}^{s+1})].$
We conclude
$\mathbb{E}_{P^{*}}[f(\theta_{v},v_{v},a_{v\rightarrow
f}^{s+1}/b_{v\rightarrow f}^{s+1},b_{v\rightarrow
f}^{s+1})]\rightarrow\mathbb{E}_{\Theta,V,G}[f(\Theta,V,\Theta+\tau_{s+1}G,\tau_{s+1}^{-2})].$
Thus, we conclude that $(\theta_{v},v_{v},a_{v\rightarrow
f}^{s+1}/b_{v\rightarrow f}^{s+1},b_{v\rightarrow
f}^{s+1})\xrightarrow[P^{*}]{\mathrm{d}}(\Theta,V,\Theta+\tau_{s+1}G,1/\tau_{s+1}^{2})$,
as desired.
Inductive step 4: If $(\theta_{v},v_{v},a_{v\rightarrow f}^{s}/b_{v\rightarrow f}^{s},b_{v\rightarrow f}^{s})\stackrel{{\scriptstyle\mathrm{d}}}{{\rightarrow}}(\Theta,V,\Theta+\tau_{s}G,1/\tau_{s}^{2})$
where $G\sim{\mathsf{N}}(0,1)$ independent of $(\Theta,V)\sim\mu_{\Theta,V}$,
then $\mathbb{E}[(\mu_{v\rightarrow f}^{s})^{2}]\rightarrow\delta\sigma_{s}^{2}$ and
$\mathbb{E}[(\tilde{\tau}_{v\rightarrow f}^{s})^{2}]\rightarrow\textsf{mmse}_{\Theta,V}(\tau_{s}^{2})$.
Define
$\epsilon_{v\rightarrow
f}^{s}=\sup_{\vartheta\in[-M,M]}\left|\log\frac{m_{v\rightarrow
f}^{s}(\vartheta)}{m_{v\rightarrow f}^{s}(0)}-\left(\vartheta a_{v\rightarrow
f}^{s}-\frac{1}{2}\vartheta^{2}b_{v\rightarrow f}^{s}\right)\right|,$
where because all the terms are continuous in $\vartheta$, the random variable
$\epsilon_{v\rightarrow f}^{s}$ is measurable and finite. We have that
$\mu_{v\rightarrow f}^{s}\geq\frac{\int\vartheta\exp(\vartheta a_{v\rightarrow f}^{s}-\vartheta^{2}b_{v\rightarrow f}^{s}/2-\epsilon_{v\rightarrow f}^{s})\mu_{\Theta|V}(v_{v},\mathrm{d}\vartheta)}{\int\exp(\vartheta a_{v\rightarrow f}^{s}-\vartheta^{2}b_{v\rightarrow f}^{s}/2+\epsilon_{v\rightarrow f}^{s})\mu_{\Theta|V}(v_{v},\mathrm{d}\vartheta)}\geq e^{-2\epsilon_{v\rightarrow f}^{s}}\eta_{\Theta,V}(a_{v\rightarrow f}^{s}/b_{v\rightarrow f}^{s},v_{v};1/b_{v\rightarrow f}^{s})$
where
$\eta_{\Theta,V}(y,v;\tau^{2})=\mathbb{E}_{\Theta,V,G}[\Theta|\Theta+\tau
G=y;V=v]$ with $(\Theta,V)\sim\mu_{\Theta,V}$ and $G\sim{\mathsf{N}}(0,1)$
independent. Likewise,
$\mu_{v\rightarrow f}^{s}\leq e^{2\epsilon_{v\rightarrow
f}^{s}}\eta_{\Theta,V}(a_{v\rightarrow f}^{s}/b_{v\rightarrow
f}^{s},v_{v};1/b_{v\rightarrow f}^{s}).$
Because $\eta_{\Theta,V}$ takes values in the bounded interval $[-M,M]$ and
$\epsilon_{v\rightarrow f}^{s}=o_{p}(1)$ by Lemma 3, we conclude that
$\mu_{v\rightarrow f}^{s}=\eta_{\Theta,V}(a_{v\rightarrow
f}^{s}/b_{v\rightarrow f}^{s},v_{v};1/b_{v\rightarrow f}^{s})+o_{p}(1).$
For a fixed $v_{v}$, the Bayes estimator $\eta_{\Theta,V}$ is continuous in
the observation and the noise variance on
$\mathbb{R}\times\mathbb{R}_{>0}$. (This commonly known fact holds, for
example, by [LR05, Theorem 2.7.1], because the posterior mean can be viewed as
the mean in an exponential family parameterized by the observation and noise
variance.) Thus, by the inductive hypothesis and the fact that
$v_{v}\sim\mu_{V}$ for all $n$, we have
$\mathbb{E}[\eta_{\Theta,V}(a_{v\rightarrow f}^{s}/b_{v\rightarrow f}^{s},v_{v};1/b_{v\rightarrow f}^{s})^{2}]=\mathbb{E}[\eta_{\Theta,V}(a_{v\rightarrow f}^{s}/b_{v\rightarrow f}^{s},v_{v};1/b_{v\rightarrow f}^{s})^{2}\wedge M^{2}]\rightarrow\mathbb{E}_{\Theta,V,G}[\eta_{\Theta,V}(\Theta+\tau_{s}G,V;\tau_{s}^{2})^{2}]=\mathbb{E}[\Theta^{2}]-\textsf{mmse}_{\Theta,V}(\tau_{s}^{2})=\delta\sigma_{s}^{2}$
by Lemma 3. By the previous display and the boundedness of $\mu_{v\rightarrow
f}^{s}$ and $\eta_{\Theta,V}$, we conclude $\mathbb{E}[(\mu_{v\rightarrow
f}^{s})^{2}]\rightarrow\delta\sigma_{s}^{2}$, as desired.
Similarly, we may derive that
$\displaystyle e^{-2\epsilon_{v\rightarrow
f}^{s}}s_{\Theta,V}^{2}(a_{v\rightarrow f}^{s}/b_{v\rightarrow
f}^{s},v_{v};1/b_{v\rightarrow f}^{s})$
$\displaystyle\leq\int\vartheta^{2}m_{v\rightarrow f}^{s}(\vartheta)\mu_{\Theta|V}(v_{v},\mathrm{d}\vartheta)$ $\displaystyle\leq
e^{2\epsilon_{v\rightarrow f}^{s}}s_{\Theta,V}^{2}(a_{v\rightarrow
f}^{s}/b_{v\rightarrow f}^{s},v_{v};1/b_{v\rightarrow f}^{s}),$
where
$s_{\Theta,V}^{2}(y,v;\tau^{2})=\mathbb{E}_{\Theta,V,G}[\Theta^{2}|\Theta+\tau
G=y,V=v]$ where $(\Theta,V)\sim\mu_{\Theta,V}$, $G\sim{\mathsf{N}}(0,1)$
independent. For fixed $v_{v}$, the posterior second moment is continuous
in the observation and the noise variance. Further, it is bounded by $M^{2}$.
Thus, by exactly the same argument as in the previous paragraph, we have that
$\mathbb{E}[(\tilde{\tau}_{v\rightarrow
f}^{s})^{2}]\rightarrow\mathbb{E}_{\Theta,V,G}[s_{\Theta,V}^{2}(\Theta+\tau_{s}G,V;\tau_{s}^{2})-\eta_{\Theta,V}(\Theta+\tau_{s}G,V;\tau_{s}^{2})^{2}]=\textsf{mmse}_{\Theta,V}(\tau_{s}^{2})$,
as desired.
The inductive argument is complete, and (48) is established.
To complete the proof of Lemma 1, first observe by (47) that we may express
$\log p_{v}(\vartheta|\mathcal{T}_{v,2t})$ as, up to a constant,
$\log\frac{m_{v\rightarrow f}^{t}(\vartheta)}{m_{v\rightarrow
f}^{t}(0)}+\log\frac{m_{f\rightarrow v}^{t-1}(\vartheta)}{m_{f\rightarrow
v}^{t-1}(0)}$. Note that
$\left|\log\frac{m_{f\rightarrow v}^{t-1}(\vartheta)}{m_{f\rightarrow
v}^{t-1}(0)}\right|\leq
M|x_{fv}|\sup_{x\in\mathbb{R}}\left|\frac{\dot{p}_{f\rightarrow
v}^{t-1}(y_{f};x)}{p_{f\rightarrow v}^{t-1}(y_{f};x)}\right|\leq
Mq_{1}|x_{fv}|=o_{p}(1).$
By Lemma 3, we have that, up to a constant, $\log\frac{m_{v\rightarrow f}^{t}(\vartheta)}{m_{v\rightarrow f}^{t}(0)}=-\frac{1}{2}b_{v\rightarrow f}^{t}\left(a_{v\rightarrow f}^{t}/b_{v\rightarrow f}^{t}-\vartheta\right)^{2}+o_{p}(1)$. The lemma follows from (48). ∎
We complete the proof of Lemma 3 for the high-dimensional regression model.
Consider any estimator $\hat{\theta}:\mathcal{T}_{v,2t}\mapsto[-M,M]$ on the
computation tree. We compute
$\displaystyle\mathbb{E}[\ell(\theta_{v},\hat{\theta}(\mathcal{T}_{v,2t}))]=\mathbb{E}[\mathbb{E}[\ell(\theta_{v},\hat{\theta}(\mathcal{T}_{v,2t}))|\mathcal{T}_{v,2t}]]$
$\displaystyle\qquad=\mathbb{E}\left[\int\ell(\vartheta,\hat{\theta}(\mathcal{T}_{v,2t}))\frac{1}{Z(\mathcal{T}_{v,2t})}\exp\left(-\frac{1}{2\tau_{v,t}^{2}}(\chi_{v,t}-\vartheta)^{2}+o_{p}(1)\right)\mu_{\Theta|V}(v_{v},\mathrm{d}\vartheta)\right]$
$\displaystyle\qquad\geq\mathbb{E}\left[\exp(-2\epsilon_{v})\int\ell(\vartheta,\hat{\theta}(\mathcal{T}_{v,2t}))\frac{1}{Z(\chi_{v,t},\tau_{v,t},v_{v})}\exp\left(-\frac{1}{2\tau_{v,t}^{2}}(\chi_{v,t}-\vartheta)^{2}\right)\mu_{\Theta|V}(v_{v},\mathrm{d}\vartheta)\right]$
$\displaystyle\qquad\geq\mathbb{E}\left[\exp(-2\epsilon_{v})R(\chi_{v,t},\tau_{v,t},v_{v})\right],$
where
$Z(\mathcal{T}_{v,2t})=\int\exp\left(-\frac{1}{2\tau_{v,t}^{2}}(\chi_{v,t}-\vartheta)^{2}+o_{p}(1)\right)\mu_{\Theta|V}(v_{v},\mathrm{d}\vartheta)$,
$R(\chi,\tau,v):=\inf_{d\in\mathbb{R}}\int\frac{1}{Z}\ell(\vartheta,d)e^{-\frac{1}{2\tau^{2}}(\chi-\vartheta)^{2}}\mu_{\Theta|V}(v,{\mathrm{d}}\vartheta)\,,$
and
$\epsilon_{v}=\sup_{\vartheta\in[-M,M]}\left|\log\frac{p_{v}(\vartheta|\mathcal{T}_{v,2t})}{p_{v}(0|\mathcal{T}_{v,2t})}-\vartheta\chi_{v,t}/\tau_{v,t}^{2}+\vartheta^{2}/(2\tau_{v,t}^{2})\right|.$
Because $\Theta$ has bounded support, by Lemma 5(b), $R(\chi,\tau,v)$ is
continuous in $(\chi,\tau)$ on $\mathbb{R}\times\mathbb{R}_{>0}$. By Lemma 1,
$\epsilon_{v}=o_{p}(1)$. The quantity on the right-hand side does not depend
on $\hat{\theta}$, so provides a uniform lower bound over the performance of
any estimator. Because
$(v_{v},\chi_{v,t},\tau_{v,t},\epsilon_{v})\stackrel{{\scriptstyle\mathrm{d}}}{{\rightarrow}}(V,\Theta+\tau_{t}G,\tau_{t},0)$,
$v_{v}\stackrel{{\scriptstyle\mathrm{d}}}{{=}}V$ for all $n$, and
$\tau_{t}>0$, we have
$\mathbb{E}\left[\exp(-2\epsilon_{v})R(\chi_{v,t},\tau_{v,t},v_{v})\right]\rightarrow\mathbb{E}[R(\Theta+\tau_{t}G,\tau_{t},V)]=\inf_{\hat{\theta}(\cdot)}\mathbb{E}[\ell(\Theta,\hat{\theta}(\Theta+\tau_{t}G,V))]$,
where the convergence holds by Lemma 3 and the equality holds by Lemma 5(a).
Thus,
$\liminf_{n\rightarrow\infty}\inf_{\hat{\theta}(\cdot)}\mathbb{E}[\ell(\theta_{v},\hat{\theta}(\mathcal{T}_{v,2t}))]\geq\inf_{\hat{\theta}(\cdot)}\mathbb{E}[\ell(\Theta,\hat{\theta}(\Theta+\tau_{t}G,V))].$
The proof of Lemma 3 in the high-dimensional regression model is complete.
#### D.1.1 Technical tools
###### Proof of Lemma 2.
By Lindeberg’s principle (see, e.g., [Cha06]) and using that $\mu_{\Theta}$ is
supported on $[-M,M]$, we have
$\displaystyle|p_{f\rightarrow
v}^{s}(y;0)-\mathbb{E}_{G_{1}}[p(y|\mu_{f\rightarrow
v}^{s}+\tilde{\tau}_{f\rightarrow
v}^{s}G_{1},u_{f})]|\leq\frac{M^{3}\sup_{x\in\mathbb{R}}|\partial_{x}^{3}p(y|x,u_{f})|}{3}\sum_{v^{\prime}\in\partial
f\setminus v}|x_{fv^{\prime}}|^{3},$ $\displaystyle|\dot{p}_{f\rightarrow
v}^{s}(y;0)-\mathbb{E}_{G_{1}}[\dot{p}(y|\mu_{f\rightarrow
v}^{s}+\tilde{\tau}_{f\rightarrow
v}^{s}G_{1},u_{f})]|\leq\frac{M^{3}\sup_{x\in\mathbb{R}}|\partial_{x}^{4}p(y|x,u_{f})|}{3}\sum_{v^{\prime}\in\partial
f\setminus v}|x_{fv^{\prime}}|^{3},$ $\displaystyle|\ddot{p}_{f\rightarrow
v}^{s}(y;0)-\mathbb{E}_{G_{1}}[\ddot{p}(y|\mu_{f\rightarrow
v}^{s}+\tilde{\tau}_{f\rightarrow
v}^{s}G_{1},u_{f})]|\leq\frac{M^{3}\sup_{x\in\mathbb{R}}|\partial_{x}^{5}p(y|x,u_{f})|}{3}\sum_{v^{\prime}\in\partial
f\setminus v}|x_{fv^{\prime}}|^{3}.$
Using that $\sup_{x\in\mathbb{R}}|\partial_{x}^{k}p(y|x,u)|\leq
q_{k}^{\prime}\sup_{x\in\mathbb{R}}|p(y|x,u)|<\infty$ for $k=3,4,5$ by R4, we
have that for fixed $y$ the expectations of the right-hand sides go to 0 as
$n\rightarrow\infty$, whence the required expressions are $o_{p}(1)$.
Further,
$|\mathbb{E}_{G_{1}}[p(y|\mu+\tilde{\tau}G_{1},u)]-\mathbb{E}_{G_{1}}[p(y|\mu^{\prime}+\tilde{\tau}^{\prime}G_{1},u)]|\leq(|\mu-\mu^{\prime}|+|\tilde{\tau}-\tilde{\tau}^{\prime}|\sqrt{2/\pi})\sup_{x\in\mathbb{R}}|\dot{p}(y|x,u)|$,
whence $\mathbb{E}_{G_{1}}[p(y|\mu+\tilde{\tau}G_{1},u)]$ is continuous in
$(\mu,\tilde{\tau})$ by R4. The remaining continuity results follow similarly.
∎
###### Proof of Lemma 3.
Fix any $\vartheta\in[-M,M]$. By Taylor’s theorem, there exist
$\vartheta_{f^{\prime}}\in[-M,M]$ (in fact, between $0$ and $\vartheta$) such that
$\displaystyle\log$ $\displaystyle\frac{m_{v\rightarrow
f}^{s+1}(\vartheta)}{m_{v\rightarrow f}^{s+1}(0)}=\sum_{f^{\prime}\in\partial
v\setminus f}\log\frac{m_{f^{\prime}\rightarrow
v}^{s}(\vartheta)}{m_{f^{\prime}\rightarrow v}^{s}(0)}$
$\displaystyle=\vartheta a_{v\rightarrow
f}^{s+1}-\frac{1}{2}\vartheta^{2}b_{v\rightarrow
f}^{s+1}+\frac{1}{6}\vartheta^{3}\sum_{f^{\prime}\in\partial v\setminus
f}\left(\frac{\mathrm{d}^{3}}{\mathrm{d}\vartheta^{3}}\log\mathbb{E}_{\hat{G}_{f^{\prime}}}[p(y_{f^{\prime}}|x_{f^{\prime}v}\vartheta+\hat{G}_{f^{\prime}},u_{f^{\prime}})]\bigg{|}_{\vartheta=\vartheta_{f^{\prime}}}\right).$
where it is understood that expectation is taken with respect to
$\hat{G}_{f^{\prime}}\stackrel{{\scriptstyle\mathrm{d}}}{{=}}\sum_{v^{\prime}\in\partial
f^{\prime}\setminus v}x_{f^{\prime}v^{\prime}}\Theta_{v^{\prime}\rightarrow
f^{\prime}}$ where $x_{f^{\prime}v^{\prime}}$ is considered fixed and
$\Theta_{v^{\prime}\rightarrow f^{\prime}}$ are drawn independently with
densities $m_{v^{\prime}\rightarrow f^{\prime}}^{s}$ with respect to
$\mu_{\Theta|V}(v_{v^{\prime}},\cdot)$. We bound the sum using assumption R4:
$\displaystyle\left|\sum_{f^{\prime}\in\partial v\setminus f}\left(\frac{\mathrm{d}^{3}}{\mathrm{d}\vartheta^{3}}\log\mathbb{E}_{\hat{G}_{f^{\prime}}}[p(y_{f^{\prime}}|x_{f^{\prime}v}\vartheta+\hat{G}_{f^{\prime}},u_{f^{\prime}})]\bigg{|}_{\vartheta=\vartheta_{f^{\prime}}}\right)\right|$
$\displaystyle\leq q_{3}\sum_{f^{\prime}\in\partial v\setminus f}|x_{f^{\prime}v}|^{3}=O_{p}(n^{-1/2}).$
The proof is complete. ∎
### D.2 Information-theoretic lower bound in the low-rank matrix estimation
model
In this section, we prove Lemma 3 in the low-rank matrix estimation model.
Recall that the conditions on the conditional density in assumption R4 are
equivalent to positivity, boundedness, and the existence of finite, non-negative
constants $q_{k}^{\prime}$ such that
$\frac{|\partial_{x}^{k}p(y|x)|}{p(y|x)}\leq q_{k}^{\prime}$ for $1\leq k\leq
5$. In particular, we have (46) for any random variable $A$.
Denote the regular conditional probability of $\Theta$ conditional on $V$ for
the measure $\mu_{\Theta,V}$ by
$\mu_{\Theta|V}:\mathbb{R}\times\mathcal{B}\rightarrow[0,1]$, where
$\mathcal{B}$ denotes the Borel $\sigma$-algebra on $\mathbb{R}$, similarly
for $\mu_{\Lambda|U}$. The posterior of $\theta_{v}$ given
$\mathcal{T}_{v,2t-1}$ has density with respect to $\mu_{\Theta|V}(v_{v},\cdot)$
given by
$p_{v}(\vartheta_{v}|\mathcal{T}_{v,2t-1})\propto\int\prod\exp\left(-\frac{n}{2}(x_{f^{\prime}v^{\prime}}-\frac{1}{n}\ell_{f^{\prime}}\vartheta_{v^{\prime}})^{2}\right)\prod\mu_{\Lambda|U}(u_{f},\mathrm{d}\ell_{f})\prod\mu_{\Theta|V}(v_{v^{\prime}},\mathrm{d}\vartheta_{v^{\prime}}),$
where the products are over $(f^{\prime},v^{\prime})\in\mathcal{E}_{v,2t-1}$,
$f\in\mathcal{F}_{v,2t-1}$, and $v^{\prime}\in\mathcal{V}_{v,2t-1}$,
respectively. Asymptotically, the posterior behaves like that produced by a
Gaussian observation of $\theta_{v}$ with variance $\tau_{t}^{2}$.
###### Lemma 4.
In the low-rank matrix estimation model, there exist
$\mathcal{T}_{v,2t-1}$-measurable random variables $q_{v,t},\chi_{v,t}$ such
that for fixed $t\geq 1$
$p_{v}(\vartheta|\mathcal{T}_{v,2t-1})\propto\exp\left(-\frac{1}{2}(\chi_{v,t}-q_{v,t}^{1/2}\vartheta)^{2}+o_{p}(1)\right),$
where $o_{p}(1)$ has no $\vartheta$ dependence. Moreover,
$(\theta_{v},v_{v},\chi_{v,t},q_{v,t})\stackrel{{\scriptstyle\mathrm{d}}}{{\rightarrow}}(\Theta,V,q_{t}^{1/2}\Theta+G,q_{t})$
where $(\Theta,V)\sim\mu_{\Theta,V},G\sim{\mathsf{N}}(0,1)$ independent of
$\Theta,V$, and $q_{t}$ is given by (7).
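As with the regression model, a concrete instance of the recursion (7) may help before the proof. The sketch below is purely illustrative: it assumes $\Theta,\Lambda\sim{\mathsf{N}}(m,1)$ with uninformative side variables and takes the recursion in the form $\hat{q}_{s}=(\mathbb{E}[\Theta^{2}]-\textsf{mmse}_{\Theta,V}(1/q_{s}))/\delta$ and $q_{s+1}=\mathbb{E}[\Lambda^{2}]-\textsf{mmse}_{\Lambda,U}(1/\hat{q}_{s})$, which is consistent with the limits in (50) below but need not match the normalization of (7).

```python
# Minimal sketch (not the paper's definition) of an overlap recursion for the
# rank-one matrix estimation model. Assumptions: Theta, Lambda ~ N(m, 1) with
# uninformative side variables, so mmse(tau^2) = tau^2 / (1 + tau^2), and the
# hypothesized recursion qhat_s = (E[Theta^2] - mmse(1/q_s)) / delta,
# q_{s+1} = E[Lambda^2] - mmse(1/qhat_s).

def mmse_unit_gaussian(tau2: float) -> float:
    """Posterior variance of a N(m, 1) scalar observed with noise variance tau2."""
    return tau2 / (1.0 + tau2)

def overlap_recursion(delta: float, m: float = 0.5, iters: int = 50) -> float:
    second_moment = 1.0 + m * m  # E[Theta^2] = E[Lambda^2] = 1 + m^2
    q = m * m                    # q_1 = E[E[Lambda|U]^2] with U uninformative
    for _ in range(iters):
        qhat = (second_moment - mmse_unit_gaussian(1.0 / q)) / delta
        q = second_moment - mmse_unit_gaussian(1.0 / qhat)
    return q

if __name__ == "__main__":
    for delta in (0.5, 1.0, 2.0):
        print(f"delta={delta}: q_t converges to {overlap_recursion(delta):.4f}")
```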
###### Proof of Lemma 4.
As in the proof of Lemma 1, we compute the posterior density
$p_{v}(\vartheta|\mathcal{T}_{v,2t-1})$ via belief propagation. The belief
propagation iteration is
$\displaystyle m_{f\rightarrow v}^{0}(\ell)=1,$ $\displaystyle m_{v\rightarrow
f}^{s+1}(\vartheta)\propto\int\prod_{f^{\prime}\in\partial v\setminus
f}\left(\exp\left(-\frac{n}{2}(x_{f^{\prime}v}-\frac{1}{n}\ell_{f^{\prime}}\vartheta)^{2}\right)m_{f^{\prime}\rightarrow
v}^{s}(\ell_{f^{\prime}})\mu_{\Lambda|U}(u_{f^{\prime}},\mathrm{d}\ell_{f^{\prime}})\right),$
$\displaystyle m_{f\rightarrow
v}^{s}(\ell)\propto\int\prod_{v^{\prime}\in\partial f\setminus
v}\left(\exp\left(-\frac{n}{2}(x_{fv^{\prime}}-\frac{1}{n}\ell\vartheta_{v^{\prime}})^{2}\right)m_{v^{\prime}\rightarrow
f}^{s}(\vartheta_{v^{\prime}})\mu_{\Theta|V}(v_{v^{\prime}},\mathrm{d}\vartheta_{v^{\prime}})\right),$
with normalization $\int m_{f\rightarrow
v}^{s}(\ell)\mu_{\Lambda|U}(u_{f},\mathrm{d}\ell)=\int m_{v\rightarrow
f}^{s}(\vartheta)\mu_{\Theta|V}(v_{v},\mathrm{d}\vartheta)=1$. For $t\geq 1$
$\displaystyle
p_{v}(\vartheta|\mathcal{T}_{v,2t-1})\propto\int\prod_{f\in\partial
v}\left(\exp\left(-\frac{n}{2}(x_{fv}-\frac{1}{n}\ell_{f}\vartheta)^{2}\right)m_{f\rightarrow
v}^{t-1}(\ell_{f})\mu_{\Lambda|U}(u_{f},\mathrm{d}\ell_{f})\right).$
This equation is exact.
We define several quantities related to the belief propagation iteration.
$\displaystyle\mu_{f\rightarrow v}^{s}$ $\displaystyle=\int\ell
m_{f\rightarrow v}^{s}(\ell)\mu_{\Lambda|U}(u_{f},\mathrm{d}\ell),$
$\displaystyle s_{f\rightarrow v}^{s}$
$\displaystyle=\int\ell^{2}m_{f\rightarrow
v}^{s}(\ell)\mu_{\Lambda|U}(u_{f},\mathrm{d}\ell),$
$\displaystyle\alpha_{v\rightarrow f}^{s+1}$
$\displaystyle=\frac{1}{n}\sum_{f^{\prime}\in\partial v\setminus
f}\mu_{f^{\prime}\rightarrow v}^{s}\lambda_{f^{\prime}},$
$\displaystyle(\tau_{v\rightarrow f}^{s+1})^{2}$
$\displaystyle=\frac{1}{n}\sum_{f^{\prime}\in\partial v\setminus
f}(\mu_{f^{\prime}\rightarrow v}^{s})^{2},$ $\displaystyle a_{v\rightarrow
f}^{s}$ $\displaystyle=\frac{\mathrm{d}\phantom{b}}{\mathrm{d}\vartheta}\log
m_{v\rightarrow f}^{s}(\vartheta)\Big{|}_{\vartheta=0},$ $\displaystyle
b_{v\rightarrow f}^{s}$
$\displaystyle=-\frac{\mathrm{d}^{2}\phantom{b}}{\mathrm{d}\vartheta^{2}}\log
m_{v\rightarrow f}^{s}(\vartheta)\Big{|}_{\vartheta=0},$
$\displaystyle\mu_{v\rightarrow f}^{s}$ $\displaystyle=\int\vartheta
m_{v\rightarrow f}^{s}(\vartheta)\mu_{\Theta|V}(v_{v},\mathrm{d}\vartheta),$
$\displaystyle s_{v\rightarrow f}^{s}$
$\displaystyle=\int\vartheta^{2}m_{v\rightarrow
f}^{s}(\vartheta)\mu_{\Theta|V}(v_{v},\mathrm{d}\vartheta),$
$\displaystyle\alpha_{f\rightarrow v}^{s}$
$\displaystyle=\frac{1}{n}\sum_{v^{\prime}\in\partial f\setminus
v}\mu_{v^{\prime}\rightarrow f}^{s}\theta_{v^{\prime}},$
$\displaystyle(\hat{\tau}_{f\rightarrow v}^{s})^{2}$
$\displaystyle=\frac{1}{n}\sum_{v^{\prime}\in\partial f\setminus
v}(\mu_{v^{\prime}\rightarrow f}^{s})^{2},$ $\displaystyle a_{f\rightarrow
v}^{s}$ $\displaystyle=\frac{\mathrm{d}\phantom{b}}{\mathrm{d}\ell}\log
m_{f\rightarrow v}^{s}(\ell)\Big{|}_{\ell=0},$ $\displaystyle b_{f\rightarrow
v}^{s}$
$\displaystyle=-\frac{\mathrm{d}^{2}\phantom{b}}{\mathrm{d}\ell^{2}}\log
m_{f\rightarrow v}^{s}(\ell)\Big{|}_{\ell=0}.$
Lemma 4 follows from the following asymptotic characterization of the
quantities in the preceding display in the limit $n,p\rightarrow\infty$,
$n/p\rightarrow\delta$:
$\begin{gathered}\mathbb{E}[\mu_{f\rightarrow v}^{s}\lambda_{f}]\rightarrow
q_{s+1},\qquad\mathbb{E}[(\mu_{f\rightarrow v}^{s})^{2}]\rightarrow
q_{s+1},\\\ \alpha_{v\rightarrow
f}^{s+1}\stackrel{{\scriptstyle\mathrm{p}}}{{\rightarrow}}q_{s+1},\qquad(\tau_{v\rightarrow
f}^{s+1})^{2}\stackrel{{\scriptstyle\mathrm{p}}}{{\rightarrow}}q_{s+1},\\\
(\theta_{v},v_{v},a_{v\rightarrow f}^{s},b_{v\rightarrow
f}^{s})\stackrel{{\scriptstyle\mathrm{d}}}{{\rightarrow}}(\Theta,V,q_{s}\Theta+q_{s}^{1/2}G,q_{s}),\\\
\mathbb{E}[\mu_{v\rightarrow
f}^{s}\theta_{v}]\rightarrow\delta\hat{q}_{s},\qquad\mathbb{E}[(\mu_{v\rightarrow
f}^{s})^{2}]\rightarrow\delta\hat{q}_{s},\\\ \alpha_{f\rightarrow
v}^{s}\stackrel{{\scriptstyle\mathrm{p}}}{{\rightarrow}}\hat{q}_{s},\qquad(\hat{\tau}_{f\rightarrow
v}^{s})^{2}\stackrel{{\scriptstyle\mathrm{p}}}{{\rightarrow}}\hat{q}_{s},\\\
(\lambda_{f},u_{f},a_{f\rightarrow v}^{s},b_{f\rightarrow
v}^{s})\stackrel{{\scriptstyle\mathrm{d}}}{{\rightarrow}}(\Lambda,U,\hat{q}_{s}\Lambda+\hat{q}_{s}^{1/2}G,\hat{q}_{s}).\end{gathered}$
(50)
As in the proof of Lemma 1, the distribution of these quantities does not
depend upon $v$ or $f$, so that the limits hold for all $v,f$ once we
establish them for any $v,f$. We establish the limits inductively in $s$.
Base case: $\mathbb{E}[\mu_{f\rightarrow v}^{0}\lambda_{f}]\rightarrow q_{1}$
and $\mathbb{E}[(\mu_{f\rightarrow v}^{0})^{2}]\rightarrow q_{1}$.
Note $\mu_{f\rightarrow v}^{0}=\mathbb{E}[\lambda_{f}|u_{f}]$. Thus
$\mathbb{E}[\mu_{f\rightarrow
v}^{0}\lambda_{f}]=\mathbb{E}[\mathbb{E}[\lambda_{f}|u_{f}]^{2}]=V_{\Lambda,U}(0)=q_{1}$
exactly in finite samples, so also asymptotically. The expectation
$\mathbb{E}[(\mu_{f\rightarrow v}^{0})^{2}]$ has the same value.
Inductive step 1: If $\mathbb{E}[\mu_{f\rightarrow
v}^{s}\lambda_{f}]\rightarrow q_{s+1}$ and $\mathbb{E}[(\mu_{f\rightarrow
v}^{s})^{2}]\rightarrow q_{s+1}$, then $\alpha_{v\rightarrow
f}^{s+1}\stackrel{{\scriptstyle\mathrm{p}}}{{\rightarrow}}q_{s+1}$ and
$(\tau_{v\rightarrow
f}^{s+1})^{2}\stackrel{{\scriptstyle\mathrm{p}}}{{\rightarrow}}q_{s+1}$.
By the inductive hypothesis, $\mathbb{E}[\alpha_{v\rightarrow
f}^{s+1}]=(n-1)\mathbb{E}[\mu_{f\rightarrow v}^{s}\lambda_{f}]/n\rightarrow
q_{s+1}$ and $\mathbb{E}[(\tau_{v\rightarrow
f}^{s+1})^{2}]=(n-1)\mathbb{E}[(\mu_{f\rightarrow v}^{s})^{2}]/n\rightarrow
q_{s+1}$. Moreover, $\mu_{f^{\prime}\rightarrow v}^{s}\lambda_{f^{\prime}}$
are mutually independent as we vary $f^{\prime}\in\partial v\setminus f$, and
likewise for $\mu_{f^{\prime}\rightarrow v}^{s}$. We have
$\mathbb{E}[(\mu_{f^{\prime}\rightarrow v}^{s}\lambda_{f^{\prime}})^{2}]\leq
M^{4}$ and $\mathbb{E}[(\mu_{f^{\prime}\rightarrow v}^{s})^{4}]\leq M^{4}$
because the integrands are bounded by $M^{4}$. By the weak law of large
numbers, $\alpha_{v\rightarrow
f}^{s+1}\stackrel{{\scriptstyle\mathrm{p}}}{{\rightarrow}}q_{s+1}$ and
$(\tau_{v\rightarrow
f}^{s+1})^{2}\stackrel{{\scriptstyle\mathrm{p}}}{{\rightarrow}}q_{s+1}$.
Inductive step 2: If $\alpha_{v\rightarrow
f}^{s+1}\stackrel{{\scriptstyle\mathrm{p}}}{{\rightarrow}}q_{s+1}$ and
$(\tau_{v\rightarrow
f}^{s+1})^{2}\stackrel{{\scriptstyle\mathrm{p}}}{{\rightarrow}}q_{s+1}$, then
$(\theta_{v},v_{v},a_{v\rightarrow f}^{s+1},b_{v\rightarrow
f}^{s+1})\stackrel{{\scriptstyle\mathrm{d}}}{{\rightarrow}}(\Theta,V,q_{s+1}\Theta+q_{s+1}^{1/2}G,q_{s+1})$.
We may express
$\log m_{v\rightarrow
f}^{s+1}(\vartheta)=\mathsf{const}+\sum_{f^{\prime}\in\partial v\setminus
f}\log\mathbb{E}_{\Lambda_{f^{\prime}}}\left[\exp\left(-\frac{1}{2n}\Lambda_{f^{\prime}}^{2}\vartheta^{2}+x_{f^{\prime}v}\Lambda_{f^{\prime}}\vartheta\right)\right],$
where $\Lambda_{f^{\prime}}$ has density $m_{f^{\prime}\rightarrow v}^{s}$
with respect to $\mu_{\Lambda|U}(u_{f^{\prime}},\cdot)$. We compute
$\displaystyle\frac{\mathrm{d}\phantom{b}}{\mathrm{d}\vartheta}\mathbb{E}_{\Lambda_{f^{\prime}}}\left[\exp\left(-\frac{1}{2n}\Lambda_{f^{\prime}}^{2}\vartheta^{2}+x_{f^{\prime}v}\Lambda_{f^{\prime}}\vartheta\right)\right]\Big{|}_{\vartheta=0}$
$\displaystyle=\mathbb{E}_{\Lambda_{f^{\prime}}}\left[x_{f^{\prime}v}\Lambda_{f^{\prime}}\right]=x_{f^{\prime}v}\mu_{f^{\prime}\rightarrow
v}^{s},$
$\displaystyle\frac{\mathrm{d}^{2}\phantom{b}}{\mathrm{d}\vartheta^{2}}\mathbb{E}_{\Lambda_{f^{\prime}}}\left[\exp\left(-\frac{1}{2n}\Lambda_{f^{\prime}}^{2}\vartheta^{2}+x_{f^{\prime}v}\Lambda_{f^{\prime}}\vartheta\right)\right]\Big{|}_{\vartheta=0}$
$\displaystyle=\mathbb{E}_{\Lambda_{f^{\prime}}}\left[x_{f^{\prime}v}^{2}\Lambda_{f^{\prime}}^{2}-\frac{1}{n}\Lambda_{f^{\prime}}^{2}\right]=\left(x_{f^{\prime}v}^{2}-\frac{1}{n}\right)s_{f^{\prime}\rightarrow
v}^{s}.$
Then
$\displaystyle a_{v\rightarrow f}^{s+1}=\sum_{f^{\prime}\in\partial v\setminus f}x_{f^{\prime}v}\mu_{f^{\prime}\rightarrow v}^{s}\;\;\text{and}\;\;b_{v\rightarrow f}^{s+1}=\sum_{f^{\prime}\in\partial v\setminus f}\left(x_{f^{\prime}v}^{2}(\mu_{f^{\prime}\rightarrow v}^{s})^{2}-\left(x_{f^{\prime}v}^{2}-\frac{1}{n}\right)s_{f^{\prime}\rightarrow v}^{s}\right).$
We compute
$\displaystyle a_{v\rightarrow
f}^{s+1}=\left(\frac{1}{n}\sum_{f^{\prime}\in\partial v\setminus
f}\mu_{f^{\prime}\rightarrow
v}^{s}\lambda_{f^{\prime}}\right)\theta_{v}+\sum_{f^{\prime}\in\partial
v\setminus f}z_{f^{\prime}v}\mu_{f^{\prime}\rightarrow v}^{s}.$
Because $(z_{f^{\prime}v})_{f^{\prime}\in\partial v\setminus f}$ are
independent of $\mu_{f^{\prime}\rightarrow v}^{s}$ and are mutually
independent from each other, conditional on $\mathcal{T}_{v\rightarrow f}^{1}$
the quantity $\sum_{f^{\prime}\in\partial v\setminus
f}z_{f^{\prime}v}\mu_{f^{\prime}\rightarrow v}^{s}$ is distributed
${\mathsf{N}}(0,(\tau_{v\rightarrow f}^{s+1})^{2})$. By the inductive
hypothesis, $(\tau_{v\rightarrow f}^{s+1})^{2}\stackrel{{\scriptstyle\mathrm{p}}}{{\rightarrow}}q_{s+1}$, so
that $\sum_{f^{\prime}\in\partial v\setminus
f}z_{f^{\prime}v}\mu_{f^{\prime}\rightarrow
v}^{s}\stackrel{{\scriptstyle\mathrm{d}}}{{\rightarrow}}{\mathsf{N}}(0,q_{s+1})$.
Further, $z_{f^{\prime}v}$ and $\mu_{f^{\prime}\rightarrow v}^{s}$ are
independent of $\theta_{v}$, and by the inductive hypothesis, the coefficient
of $\theta_{v}$ converges in probability to $q_{s+1}$. By the Continuous
Mapping Theorem [Vaa98, Theorem 2.3], we conclude that
$(\theta_{v},v_{v},a_{v\rightarrow
f}^{s+1})\stackrel{{\scriptstyle\mathrm{d}}}{{\rightarrow}}(\Theta,V,q_{s+1}\Theta+q_{s+1}^{1/2}G)$
where $G\sim{\mathsf{N}}(0,1)$ independent of $\Theta$, as desired.
Now we show that $b_{v\rightarrow
f}^{s+1}\stackrel{{\scriptstyle\mathrm{p}}}{{\rightarrow}}q_{s+1}$. We expand
$b_{v\rightarrow f}^{s+1}=A-B$ where $A=\sum_{f^{\prime}\in\partial v\setminus
f}x_{f^{\prime}v}^{2}(\mu_{f^{\prime}\rightarrow v}^{s})^{2}$ and
$B=\sum_{f^{\prime}\in\partial v\setminus
f}(x_{f^{\prime}v}^{2}-1/n)s_{f^{\prime}\rightarrow v}^{s}$. We have
$A=\frac{1}{n^{2}}\sum_{f^{\prime}\in\partial v\setminus f}\lambda_{f^{\prime}}^{2}\theta_{v}^{2}(\mu_{f^{\prime}\rightarrow v}^{s})^{2}+\frac{2}{n}\sum_{f^{\prime}\in\partial v\setminus f}\lambda_{f^{\prime}}\theta_{v}z_{f^{\prime}v}(\mu_{f^{\prime}\rightarrow v}^{s})^{2}+\sum_{f^{\prime}\in\partial v\setminus f}z_{f^{\prime}v}^{2}(\mu_{f^{\prime}\rightarrow v}^{s})^{2}.$
Observe
$\mathbb{E}[\lambda_{f^{\prime}}^{2}\theta_{v}^{2}(\mu_{f^{\prime}\rightarrow
v}^{s})^{2}]\leq M^{6}$, so that the expectation of the first term is bounded
by $M^{6}(n-1)/n^{2}\rightarrow 0$. Thus, the first term converges to 0 in
probability. Because $z_{f^{\prime}v}$ is independent of
$\mu_{f^{\prime}\rightarrow v}^{s}$,
$\mathbb{E}[|\lambda_{f^{\prime}}\theta_{v}z_{f^{\prime}v}(\mu_{f^{\prime}\rightarrow
v}^{s})^{2}|]\leq M^{4}\sqrt{2/(\pi n)}$, so that the absolute value of the
expectation of the second term is bounded by $2M^{4}\sqrt{2/(\pi
n)}\rightarrow 0$. Thus, the second term converges to 0 in probability.
Because $\mu_{f^{\prime}\rightarrow v}^{s}$ is independent of
$z_{f^{\prime}v}$, the expectation of the last term is
$(n-1)\mathbb{E}[(\mu_{f^{\prime}\rightarrow v}^{s})^{2}]/n\rightarrow q_{s+1}$
(we have used here the assumption of inductive step 1). The terms
$(z_{f^{\prime}v}^{2}(\mu_{f^{\prime}\rightarrow
v}^{s})^{2})_{f^{\prime}\in\partial v\setminus f}$ are mutually independent
and $\mathbb{E}[z_{f^{\prime}v}^{4}(\mu_{f^{\prime}\rightarrow
v}^{s})^{4}]\leq 3M^{4}/n^{2}$, so that by the weak law of large numbers we
have that the last term converges to $q_{s+1}$ in probability. Thus,
$A\stackrel{{\scriptstyle\mathrm{p}}}{{\rightarrow}}q_{s+1}$.
We have
$B=\frac{1}{n^{2}}\sum_{f^{\prime}\in\partial v\setminus f}\lambda_{f^{\prime}}^{2}\theta_{v}^{2}s_{f^{\prime}\rightarrow v}^{s}+\frac{2}{n}\sum_{f^{\prime}\in\partial v\setminus f}\lambda_{f^{\prime}}\theta_{v}z_{f^{\prime}v}s_{f^{\prime}\rightarrow v}^{s}+\sum_{f^{\prime}\in\partial v\setminus f}(z_{f^{\prime}v}^{2}-1/n)s_{f^{\prime}\rightarrow v}^{s}.$
As in the analysis of the first two terms of $A$, we may use that
$s_{f^{\prime}\rightarrow v}^{s}\leq M^{2}$ to argue that the first two terms
of $B$ converge to 0 in probability. Further, because $z_{f^{\prime}v}$ is
independent of $s_{f^{\prime}\rightarrow v}^{s}$, the expectation of the last
term is 0. Further,
$\mathbb{E}[(z_{f^{\prime}v}^{2}-1/n)^{2}(s_{f^{\prime}\rightarrow v}^{s})^{2}]\leq
2\mathbb{E}[z_{f^{\prime}v}^{4}+1/n^{2}]\mathbb{E}[(s_{f^{\prime}\rightarrow
v}^{s})^{2}]\leq 8M^{4}/n^{2}$, so that by the weak law of large numbers, the
final term converges to 0 in probability. Thus,
$B\stackrel{{\scriptstyle\mathrm{p}}}{{\rightarrow}}0$. Because, as we have
shown, $A\stackrel{{\scriptstyle\mathrm{p}}}{{\rightarrow}}q_{s+1}$, we
conclude $b_{v\rightarrow
f}^{s+1}\stackrel{{\scriptstyle\mathrm{p}}}{{\rightarrow}}q_{s+1}$.
Combining with $(\theta_{v},v_{v},a_{v\rightarrow
f}^{s+1})\stackrel{{\scriptstyle\mathrm{d}}}{{\rightarrow}}(\Theta,V,q_{s+1}\Theta+q_{s+1}^{1/2}G)$
and applying the Continuous Mapping Theorem [Vaa98, Theorem 2.3], we have
$(\theta_{v},v_{v},a_{v\rightarrow f}^{s+1},b_{v\rightarrow f}^{s+1})\stackrel{{\scriptstyle\mathrm{d}}}{{\rightarrow}}(\Theta,V,q_{s+1}\Theta+q_{s+1}^{1/2}G,q_{s+1})$.
Inductive step 3: If $(\theta_{v},v_{v},a_{v\rightarrow f}^{s},b_{v\rightarrow
f}^{s})\stackrel{{\scriptstyle\mathrm{d}}}{{\rightarrow}}(\Theta,V,q_{s}\Theta+q_{s}^{1/2}G,q_{s})$,
then $\mathbb{E}[\mu_{v\rightarrow
f}^{s}\theta_{v}]\rightarrow\delta\hat{q}_{s}$ and
$\mathbb{E}[(\mu_{v\rightarrow f}^{s})^{2}]\rightarrow\delta\hat{q}_{s}$.
We will require the following lemma, whose proof is deferred to section D.2.1.
###### Lemma 5.
For any fixed $s$, we have for all $\vartheta,\ell\in[-M,M]$
$\displaystyle\log\frac{m_{v\rightarrow f}^{s}(\vartheta)}{m_{v\rightarrow
f}^{s}(0)}=\vartheta a_{v\rightarrow
f}^{s}-\frac{1}{2}\vartheta^{2}b_{v\rightarrow f}^{s}+O_{p}(n^{-1/2}),$
$\displaystyle\log\frac{m_{f\rightarrow v}^{s}(\ell)}{m_{f\rightarrow
v}^{s}(0)}=\ell a_{f\rightarrow v}^{s}-\frac{1}{2}\ell^{2}b_{f\rightarrow
v}^{s}+O_{p}(n^{-1/2}),$
where $O_{p}(n^{-1/2})$ has no $\vartheta$ (or $\ell$) dependence.
Define
$\epsilon_{v\rightarrow
v}^{s}=\sup_{\vartheta\in[-M,M]}\left|\log\frac{m_{v\rightarrow
f}^{s}(\vartheta)}{m_{v\rightarrow f}^{s}(0)}-\left(\vartheta a_{v\rightarrow
f}^{s}-\frac{1}{2}\vartheta^{2}b_{v\rightarrow f}^{s}\right)\right|.$
By Lemma 5, we have $\epsilon_{v\rightarrow f}^{s}=o_{p}(1)$. Moreover, using
the same argument as in inductive step 4 of the proof of Lemma 1, we have
that
$e^{-2\epsilon_{v\rightarrow f}^{s}}\eta_{\Theta,V}(a_{v\rightarrow f}^{s}(b_{v\rightarrow f}^{s})^{-1/2},v_{v};b_{v\rightarrow f}^{s})\leq\mu_{v\rightarrow f}^{s}\leq e^{2\epsilon_{v\rightarrow f}^{s}}\eta_{\Theta,V}(a_{v\rightarrow f}^{s}(b_{v\rightarrow f}^{s})^{-1/2},v_{v};b_{v\rightarrow f}^{s}),$
where
$\eta_{\Theta,V}(y,v;q)=\mathbb{E}_{\Theta,V,G}[\Theta|q^{1/2}\Theta+G=y;V=v]$. Because $\eta_{\Theta,V}$ takes values in the bounded interval
$[-M,M]$ and $\epsilon_{v\rightarrow f}^{s}=o_{p}(1)$ by Lemma 5, we conclude
that
$\mu_{v\rightarrow f}^{s}=\eta_{\Theta,V}(a_{v\rightarrow f}^{s}(b_{v\rightarrow f}^{s})^{-1/2},v_{v};b_{v\rightarrow f}^{s})+o_{p}(1).$
For a fixed $v_{v}$, the Bayes estimator $\eta_{\Theta,V}$ is continuous in
the observation and in the coefficient $q$. Thus, by the inductive hypothesis
and the fact that $v_{v}\sim\mu_{V}$ for all $n$, we have that
$\mathbb{E}[\Theta\eta_{\Theta,V}(a_{v\rightarrow f}^{s}(b_{v\rightarrow f}^{s})^{-1/2},v_{v};b_{v\rightarrow f}^{s})]$ has limit
$\mathbb{E}[\Theta\eta_{\Theta,V}(q_{s}^{1/2}\Theta+G,V;q_{s})]=\delta\hat{q}_{s}$
and $\mathbb{E}[\eta_{\Theta,V}(a_{v\rightarrow f}^{s}(b_{v\rightarrow f}^{s})^{-1/2},v_{v};b_{v\rightarrow f}^{s})^{2}]$ has limit
$\mathbb{E}_{\Theta,V,G}[\eta_{\Theta,V}(q_{s}^{1/2}\Theta+G,V;q_{s})^{2}]=\delta\hat{q}_{s}$.
Because $|\theta_{v}|,|\mu_{v\rightarrow f}^{s}|,|\eta_{\Theta,V}(a_{v\rightarrow f}^{s}(b_{v\rightarrow f}^{s})^{-1/2},v_{v};b_{v\rightarrow f}^{s})|\leq M$, by bounded convergence, we
conclude $\mathbb{E}[\mu_{v\rightarrow f}^{s}\theta_{v}]\rightarrow\delta\hat{q}_{s}$ and
$\mathbb{E}[(\mu_{v\rightarrow f}^{s})^{2}]\rightarrow\delta\hat{q}_{s}$.
The remaining inductive steps are completely analogous to those already shown.
We list them here for completeness.
Inductive step 4: If $\mathbb{E}[\mu_{v\rightarrow
f}^{s}\theta_{v}]\rightarrow\delta\hat{q}_{s}$ and
$\mathbb{E}[(\mu_{v\rightarrow f}^{s})^{2}]\rightarrow\delta\hat{q}_{s}$, then
$\alpha_{f\rightarrow
v}^{s}\stackrel{{\scriptstyle\mathrm{p}}}{{\rightarrow}}\hat{q}_{s}$ and
$(\hat{\tau}_{f\rightarrow
v}^{s})^{2}\stackrel{{\scriptstyle\mathrm{p}}}{{\rightarrow}}\hat{q}_{s}$.
Inductive step 5: If $\alpha_{f\rightarrow
v}^{s}\stackrel{{\scriptstyle\mathrm{p}}}{{\rightarrow}}\hat{q}_{s}$ and
$(\hat{\tau}_{f\rightarrow
v}^{s})^{2}\stackrel{{\scriptstyle\mathrm{p}}}{{\rightarrow}}\hat{q}_{s}$,
then $(\lambda_{f},u_{f},a_{f\rightarrow v}^{s},b_{f\rightarrow
v}^{s})\stackrel{{\scriptstyle\mathrm{d}}}{{\rightarrow}}(\Lambda,U,\hat{q}_{s}\Lambda+\hat{q}_{s}^{1/2}G,\hat{q}_{s})$.
Inductive step 6: If $(\lambda_{f},u_{f},a_{f\rightarrow
v}^{s},b_{f\rightarrow
v}^{s})\stackrel{{\scriptstyle\mathrm{d}}}{{\rightarrow}}(\Lambda,U,\hat{q}_{s}\Lambda+\hat{q}_{s}^{1/2}G,\hat{q}_{s})$,
then $\mathbb{E}[\mu_{f\rightarrow v}^{s}\lambda_{f}]\rightarrow q_{s+1}$ and
$\mathbb{E}[(\mu_{f\rightarrow v}^{s})^{2}]\rightarrow q_{s+1}$.
The induction is complete, and we conclude (50).
To complete the proof of Lemma 4, first observe that we may express
$\log\frac{p_{v}(\vartheta|\mathcal{T}_{v,2t-1})}{p_{v}(0|\mathcal{T}_{v,2t-1})}$
as $\log\frac{m_{v\rightarrow f}^{t}(\vartheta)}{m_{v\rightarrow f}^{t}(0)}+\log\mathbb{E}_{\Lambda_{f}}[\exp(\vartheta x_{fv}\Lambda_{f}-\vartheta^{2}\Lambda_{f}^{2}/(2n))]$. Note that
$\left|\log\mathbb{E}_{\Lambda_{f}}[\exp(\vartheta x_{fv}\Lambda_{f}-\vartheta^{2}\Lambda_{f}^{2}/(2n))]\right|\leq M^{2}|x_{fv}|+M^{4}/(2n)=o_{p}(1).$
By Lemma 5, we have that, up to a constant, $\log\frac{m_{v\rightarrow f}^{t}(\vartheta)}{m_{v\rightarrow f}^{t}(0)}=-\frac{1}{2}(a_{v\rightarrow f}^{t}(b_{v\rightarrow f}^{t})^{-1/2}-(b_{v\rightarrow f}^{t})^{1/2}\vartheta)^{2}+o_{p}(1)$. The
lemma follows from (50) and Slutsky’s theorem. ∎
Lemma 3 in the low-rank matrix estimation model follows from Lemma 4 by
exactly the same argument that derived Lemma 3 in the high-dimensional
regression model from Lemma 1.
#### D.2.1 Technical tools
###### Proof of Lemma 5.
Fix any $\vartheta\in[-M,M]$. By Taylor’s theorem, there exist
$\vartheta_{f^{\prime}}\in[-M,M]$ (in fact, between $0$ and $\vartheta$) such
that
$\displaystyle\log\frac{m_{v\rightarrow f}^{s}(\vartheta)}{m_{v\rightarrow
f}^{s}(0)}$ $\displaystyle=\sum_{f^{\prime}\in\partial v\setminus
f}\log\frac{\mathbb{E}_{\Lambda_{f^{\prime}}}[\exp(-n(x_{f^{\prime}v}-\Lambda_{f^{\prime}}\vartheta/n)^{2}/2)]}{\mathbb{E}_{\Lambda_{f^{\prime}}}[\exp(-nx_{f^{\prime}v}^{2}/2)]}$
$\displaystyle=\vartheta a_{v\rightarrow
f}^{s+1}-\frac{1}{2}\vartheta^{2}b_{v\rightarrow
f}^{s+1}+\frac{1}{6}\vartheta^{3}\sum_{f^{\prime}\in\partial v\setminus
f}\frac{\mathrm{d}^{3}}{\mathrm{d}\vartheta^{3}}\log\mathbb{E}_{\Lambda_{f^{\prime}}}[\exp(-n(x_{f^{\prime}v}-\Lambda_{f^{\prime}}\vartheta/n)^{2}/2)]\Big{|}_{\vartheta=\vartheta_{f^{\prime}}},$
where it is understood that
$\Lambda_{f^{\prime}}\sim\mu_{\Lambda|U}(u_{f^{\prime}},\cdot)$. Denote
$\psi(\vartheta,\ell,x)=-n(x-\ell\vartheta/n)^{2}/2$. By the
same argument that allowed us to derive (46) from R4 in the proof of Lemma
3(a), we conclude
$\displaystyle\left|\frac{\mathrm{d}^{3}}{\mathrm{d}\vartheta^{3}}\log\mathbb{E}_{\Lambda}[\exp(\psi(\vartheta,\Lambda,x))]\Big{|}_{\vartheta=\vartheta_{f^{\prime}}}\right|$
$\displaystyle\qquad\qquad\leq
C\sup_{\ell,\vartheta\in[-M,M]}\max\\{|\partial_{\vartheta}\psi(\vartheta,\ell,x)|^{3},|\partial_{\vartheta}\psi(\vartheta,\ell,x)\partial_{\vartheta}^{2}\psi(\vartheta,\ell,x)|,|\partial_{\vartheta}^{3}\psi(\vartheta,\ell,x)|\\}$
$\displaystyle\qquad\qquad\leq
C\max\left\\{M^{3}|M^{2}/n+x_{f^{\prime}v}|^{3},(M^{2}/n)M|M^{2}/n+x_{f^{\prime}v}|,0\right\\},$
where $C$ is a universal constant. The expectation of the right-hand side is
$O(n^{-3/2})$, whence we get
$\frac{1}{6}\vartheta^{3}\sum_{f^{\prime}\in\partial v\setminus
f}\frac{\mathrm{d}^{3}}{\mathrm{d}\vartheta^{3}}\log\mathbb{E}_{\Lambda_{f^{\prime}}}[\exp(-n(x_{f^{\prime}v}-\Lambda_{f^{\prime}}\vartheta/n)^{2}/2)]\Big{|}_{\vartheta=\vartheta_{f^{\prime}}}=O_{p}(n^{-1/2}),$
where, because $\vartheta\in[-M,M]$, we may take $O_{p}(n^{-1/2})$ to have no
$\vartheta$-dependence.
The expansion of $\log\frac{m_{f\rightarrow v}^{s}(\ell)}{m_{f\rightarrow
v}^{s}(0)}$ is proved similarly. ∎
## Appendix E Weakening the assumptions
Section 5 and the preceding appendices establish under the assumptions A1, A2
and either R3, R4 or M2 all claims in Theorems 1 and 2 except that the lower
bound may be achieved. In this section we show that if these claims hold under
assumptions A1, A2, R3, R4, then they also hold under assumptions A1, A2, R1,
R2 in the high-dimensional regression model; and similarly for the low-rank
matrix estimation model. In the next section we prove we can achieve the lower
bounds under the weaker assumptions A1, A2 and either R1, R2 or M1.
### E.1 From strong to weak assumptions in the high-dimensional regression
model
To prove the reduction from the stronger assumptions in the high-dimensional
regression model, we need the following lemma, whose proof is given at the end
of this section.
###### Lemma 1.
Consider on a single probability space random variables $A,B,(B_{n})_{n\geq
1}$, and $Z\sim{\mathsf{N}}(0,1)$ independent of the $A$’s and $B$’s, all with
finite second moment. Assume $\mathbb{E}[(B-B_{n})^{2}]\rightarrow 0$. Let
$Y=B+\tau Z$ and $Y_{n}=B_{n}+\tau Z$ for $\tau>0$. Then
$\mathbb{E}[\mathbb{E}[A|Y_{n}]^{2}]\rightarrow\mathbb{E}[\mathbb{E}[A|Y]^{2}]\,.$
We now establish the reduction.
Consider $\mu_{W,U}$, $\mu_{\Theta,V}$, and $h$ satisfying R1 and R2. For any
$\epsilon>0$, we construct $\mu_{\tilde{\boldsymbol{W}},\tilde{U}}$,
$\mu_{\tilde{\Theta},\tilde{V}}$, and $\tilde{h}$ satisfying R3 and R4 for
$k=3$ as well as data $\boldsymbol{X}\in\mathbb{R}^{n\times p}$,
$\boldsymbol{\theta},\tilde{\boldsymbol{\theta}},\boldsymbol{v},\tilde{\boldsymbol{v}}\in\mathbb{R}^{p}$,
and
$\boldsymbol{y},\tilde{\boldsymbol{y}},\boldsymbol{w},\boldsymbol{u},\tilde{\boldsymbol{u}}\in\mathbb{R}^{n}$
and $\tilde{\boldsymbol{w}}\in\mathbb{R}^{n\times 3}$ such that the following
all hold.
1. 1.
$(\boldsymbol{X},\boldsymbol{\theta},\boldsymbol{v},\boldsymbol{u},\boldsymbol{w},\boldsymbol{y})$
and
$(\boldsymbol{X},\tilde{\boldsymbol{\theta}},\tilde{\boldsymbol{v}},\tilde{\boldsymbol{u}},\tilde{\boldsymbol{w}},\tilde{\boldsymbol{y}})$
are generated according to their respective regression models: namely,
$(\theta_{j},v_{j})\stackrel{{\scriptstyle\mathrm{iid}}}{{\sim}}\mu_{\Theta,V}$
and $(w_{i},u_{i})\stackrel{{\scriptstyle\mathrm{iid}}}{{\sim}}\mu_{W,U}$
independent;
$(\tilde{\theta}_{j},\tilde{v}_{j})\stackrel{{\scriptstyle\mathrm{iid}}}{{\sim}}\mu_{\tilde{\Theta},\tilde{V}}$
and
$(\tilde{\boldsymbol{w}}_{i},\tilde{\boldsymbol{u}}_{i})\stackrel{{\scriptstyle\mathrm{iid}}}{{\sim}}\mu_{\tilde{\boldsymbol{W}},\tilde{U}}$
independent;
$x_{ij}\stackrel{{\scriptstyle\mathrm{iid}}}{{\sim}}{\mathsf{N}}(0,1/n)$
independent of everything else; and
$\boldsymbol{y}=h(\boldsymbol{X}\boldsymbol{\theta},\boldsymbol{w})$ and
$\tilde{\boldsymbol{y}}=\tilde{h}(\boldsymbol{X}\tilde{\boldsymbol{\theta}},\tilde{\boldsymbol{v}})$.
Here $\tilde{\boldsymbol{w}}_{i}^{\mathsf{T}}$ is the $i^{\text{th}}$ row of
$\tilde{\boldsymbol{w}}$. We emphasize that the data from the two models are
not independent.
2. 2.
We have
$\displaystyle\mathbb{P}\left(\frac{1}{n}\|\boldsymbol{y}-\tilde{\boldsymbol{y}}\|^{2}>\epsilon\right)\rightarrow
0,\;\mathbb{P}\left(\frac{1}{p}\|\boldsymbol{v}-\tilde{\boldsymbol{v}}\|^{2}>\epsilon\right)\rightarrow
0,\;\mathbb{P}\left(\frac{1}{n}\|\boldsymbol{u}-\tilde{\boldsymbol{u}}\|^{2}>\epsilon\right)\rightarrow
0\,.$ (51)
Note that because in any GFOM the functions
$F_{t}^{(1)},F_{t}^{(2)},G_{t}^{(1)},G_{t}^{(2)},G_{*}$ are Lipschitz and
$\|\boldsymbol{X}\|_{\mathsf{op}}\stackrel{{\scriptstyle\mathrm{p}}}{{\rightarrow}}C_{\delta}<\infty$
as $n,p\rightarrow\infty,n/p\rightarrow 0$ [Ver12, Theorem 5.31], the previous
display and the iteration (1) imply
$\mathbb{P}\left(\frac{1}{p}\|\hat{\boldsymbol{\theta}}^{t}-\tilde{\hat{\boldsymbol{\theta}}}^{t}\|^{2}>c(\epsilon,t)\right)\rightarrow
0\,,$ (52)
for some $c(\epsilon,t)<\infty$ which goes to 0 as $\epsilon\rightarrow 0$ for
fixed $t$.
3. 3.
We have
$\displaystyle|\textsf{mmse}_{\Theta,V}(\tau_{s}^{2})-\textsf{mmse}_{\tilde{\Theta},\tilde{V}}(\tau_{s}^{2})|<\epsilon,$
(53)
$\displaystyle\left|\mathbb{E}\left[\mathbb{E}[G_{1}|h(G,W)+\epsilon^{1/2}Z,G_{0}]^{2}\right]-\mathbb{E}\left[\mathbb{E}[G_{1}|\tilde{h}(G,\tilde{\boldsymbol{W}}),G_{0}]^{2}\right]\right|<\tilde{\tau}_{s}^{2}\epsilon\,,$
(54)
for all $s\leq t$ where
$G_{0},G_{1},Z\stackrel{{\scriptstyle\mathrm{iid}}}{{\sim}}{\mathsf{N}}(0,1)$,
$W\sim\mu_{W}$, and $\tilde{\boldsymbol{W}}\sim\mu_{\tilde{\boldsymbol{W}}}$
independent, and $G=\sigma_{s}G_{0}+\tilde{\tau}_{s}G_{1}$.
We now describe the construction and prove it has the desired
properties. Let $\mu_{A}$ be a smoothed Laplace distribution with mean zero
and variance 1; namely, $\mu_{A}$ has a $C_{\infty}$ positive density
$p_{A}(\cdot)$ with respect to Lebesgue measure which satisfies
$\partial_{a}\log p_{A}(a)=-c\cdot\mathsf{sgn}(a)$ when $|a|>1$ for some
positive constant $c$. This implies that $|\partial_{a}^{k}\log p_{A}(a)|\leq
q_{k}$ for all $k$ and some constants $q_{k}$, and that $\mu_{A}$ has moments
of all orders.
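For concreteness, one way to realize such a $\mu_{A}$ (an assumption of ours, not the authors' prescribed construction) is to convolve a Laplace law with a narrow Gaussian; the sketch below checks numerically that the resulting log-density derivative stays bounded.

```python
import numpy as np
from scipy import integrate

# A minimal sketch, assuming mu_A = Laplace convolved with a narrow Gaussian
# (a convenient C-infinity, strictly positive choice; the text only requires
# the stated properties, not this particular realization).
def smoothed_laplace_pdf(a, sigma=0.1):
    b = np.sqrt((1.0 - sigma**2) / 2.0)  # Laplace scale so total variance = 1
    # p_A(a) = E_U[ Laplace(a + sigma*U) ] with U ~ N(0,1)
    integrand = lambda u: (np.exp(-abs(a + sigma * u) / b) / (2 * b)
                           * np.exp(-u**2 / 2) / np.sqrt(2 * np.pi))
    val, _ = integrate.quad(integrand, -np.inf, np.inf)
    return val

# check that |d/da log p_A(a)| stays bounded (about 1/b for |a| >> 1)
h = 1e-4
for a in [0.0, 1.0, 3.0, 6.0]:
    grad = (np.log(smoothed_laplace_pdf(a + h))
            - np.log(smoothed_laplace_pdf(a - h))) / (2 * h)
    print(f"a = {a:3.1f}:  d/da log p_A = {grad:+.3f}")
```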
First we construct $\tilde{h}$ and $\tilde{\boldsymbol{W}}$. For a $\xi>0$ to
be chosen, let $\hat{h}$ be a Lipschitz function such that
$\mathbb{E}[(\hat{h}(G,W)-h(G,W))^{2}]<\xi$ for $(G,W)$ as above, which is
permitted by assumption R2. Let $L>0$ be a Lipschitz constant for $\hat{h}$.
Choose $M>0$ such that $\mathbb{E}[W^{2}\mathbf{1}\\{|W|>M\\}]<\xi/L^{2}$.
Define $\bar{W}=W\mathbf{1}\\{|W|\leq M\\}$. Note that
$\mathbb{E}[(h(G,W)-\hat{h}(G+\xi^{1/2}A,\bar{W}))^{2}]\leq
2\mathbb{E}[(h(G,W)-\hat{h}(G,W))^{2}]+2\mathbb{E}[(\hat{h}(G,W)-\hat{h}(G+\xi^{1/2}A,\bar{W}))^{2}]<4\xi$.
By Lemma 1, we may pick $0<\xi<\min\\{\epsilon/4,\epsilon/L^{2}\\}$
sufficiently small that
$\left|\mathbb{E}\left[\mathbb{E}[G_{1}|h(G,W)+\epsilon^{1/2}Z,G_{0}]^{2}\right]-\mathbb{E}\left[\mathbb{E}[G_{1}|\hat{h}(G+\xi^{1/2}A,\bar{W})+\epsilon^{1/2}Z,G_{0}]^{2}\right]\right|<\tilde{\tau}_{s}^{2}\epsilon\,.$
In fact, because $t$ is finite, we may choose $\xi>0$ small enough that this
holds for all $s\leq t$. Define $\tilde{\boldsymbol{W}}=(\bar{W},A,Z)$ and
$\tilde{h}(x,\tilde{\boldsymbol{w}})=\hat{h}(x+\xi^{1/2}a,\bar{w})+\epsilon^{1/2}z$
where $\tilde{\boldsymbol{w}}=(\bar{w},a,z)$. Then $\tilde{h}$ is Lipschitz,
Eq. (54) holds for all $s\leq t$, and
$\mathbb{E}[(h(G,W)-\tilde{h}(G,\tilde{\boldsymbol{W}}))^{2}]<\epsilon$ (the
last because $\xi<\epsilon/4$).
Now choose $K>0$ large enough that
$\displaystyle\mathbb{E}[\Theta^{2}\mathbf{1}\\{|\Theta|>K\\}]<\delta\epsilon/L^{2},\;\;\mathbb{E}[U^{2}\mathbf{1}\\{|U|>K\\}]<\epsilon/2,\;\;\mathbb{E}[V^{2}\mathbf{1}\\{|V|>K\\}]<\epsilon/2\,.$
(55)
Define $\tilde{\Theta}=\bar{\Theta}=\Theta\mathbf{1}\\{|\Theta|\leq K\\}$,
$\tilde{V}=\bar{V}=V\mathbf{1}\\{|V|\leq K\\}$,
$\tilde{U}=\bar{U}=U\mathbf{1}\\{|U|\leq K\\}$, and let
$\mu_{\tilde{\Theta},\tilde{V}},\mu_{\tilde{\boldsymbol{W}},\tilde{U}}$ be the
corresponding distributions; namely, $\mu_{\tilde{\Theta},\tilde{V}}$ is the
distribution of $(\Theta\mathbf{1}\\{|\Theta|\leq K\\},V\mathbf{1}\\{|V|\leq
K\\})$ when $(\Theta,V)\sim\mu_{\Theta,V}$, and
$\mu_{\tilde{\boldsymbol{W}},\tilde{U}}$ is the distribution of
$(W\mathbf{1}\\{|W|\leq M\\},A,Z)$ when $(W,U)\sim\mu_{W,U}$ and
$(A,Z)\sim\mu_{A}\otimes{\mathsf{N}}(0,1)$ independent. Because the Bayes risk
converges as $K\rightarrow\infty$ to the Bayes risk with respect to the
untruncated prior, we may choose $K$ large enough that also (53) holds for
these truncated distributions.
The distributions
$\mu_{\tilde{\Theta},\tilde{V}},\mu_{\tilde{\boldsymbol{W}},\tilde{U}}$
satisfy assumption R3. We now show that $\tilde{h}$ and
$\tilde{\boldsymbol{W}}$ constructed in this way satisfy assumption R4. The
function $\tilde{h}$ is Lipschitz because $\hat{h}$ is Lipschitz. The random
variable $\tilde{Y}:=\hat{h}(x+\xi^{1/2}A,\bar{W})+\epsilon^{1/2}Z$ has
density with respect to Lebesgue measure given by
$p(y|x)=\int\int
p_{\xi^{1/2}A}\left(s-x\right)p_{{\mathsf{N}}(0,\epsilon)}(y-\hat{h}(s,\bar{w}))\mu_{\bar{W}}({\mathrm{d}}\bar{w}){\mathrm{d}}s,$
where $p_{{\mathsf{N}}(0,\epsilon)}$ is the density of
${\mathsf{N}}(0,\epsilon)$ and $p_{\xi^{1/2}A}\left(s-x\right)$ the density of
$\xi^{1/2}A$ with respect to Lebesgue measure. We have
$p(y|x)\leq\sup_{y}p_{{\mathsf{N}}(0,\epsilon)}(y)=1/\sqrt{2\pi\epsilon}$, so
$p(y|x)$ is bounded, as desired. Moreover
$\left|\frac{\int\int\partial_{x}p_{\xi^{1/2}A}\left(s-x\right)p_{{\mathsf{N}}(0,\epsilon)}(y-\hat{h}(s,\bar{w}))\mu_{\bar{W}}({\mathrm{d}}\bar{w}){\mathrm{d}}s}{p(y|x)}\right|\leq\sup_{s}\left|\frac{\dot{p}_{\xi^{1/2}A}(s)}{p_{\xi^{1/2}A}(s)}\right|.$
Because $A$ has a smoothed Laplace distribution, the right-hand side is
finite. Thus, by bounded convergence, we may exchange differentiation and
integration and the preceding display is equal to $\partial_{x}\log p(y|x)$.
We conclude that $|\partial_{x}\log p(y|x)|$ is bounded. The boundedness of
all higher derivatives holds similarly. Thus, R4 holds.
We now generate the appropriate joint distribution over
$(\boldsymbol{X},\boldsymbol{\theta},\boldsymbol{v},\boldsymbol{u},\boldsymbol{w},\boldsymbol{y})$
and
$(\boldsymbol{X},\tilde{\boldsymbol{\theta}},\tilde{\boldsymbol{v}},\tilde{\boldsymbol{u}},\tilde{\boldsymbol{w}},\tilde{\boldsymbol{y}})$.
First, generate
$(\boldsymbol{X},\boldsymbol{\theta},\boldsymbol{v},\boldsymbol{u},\boldsymbol{w},\boldsymbol{y})$
from the original high-dimensional regression model. Then generate
$\boldsymbol{a},\boldsymbol{z}$ independent and with entries
$a_{i}\stackrel{{\scriptstyle\mathrm{iid}}}{{\sim}}\mu_{A}$ and
$z_{i}\stackrel{{\scriptstyle\mathrm{iid}}}{{\sim}}{\mathsf{N}}(0,1)$. Define
$\tilde{\boldsymbol{\theta}},\tilde{\boldsymbol{v}},\tilde{\boldsymbol{u}}$ by
truncating $\boldsymbol{\theta},\boldsymbol{v},\boldsymbol{u}$ at threshold
$K$; define $\tilde{\boldsymbol{w}}$ by truncating $\boldsymbol{w}$ at
threshold $M$ to form $\bar{\boldsymbol{w}}$ and concatenating to it the
vectors $\boldsymbol{a},\boldsymbol{z}$ to form a matrix in
$\mathbb{R}^{n\times 3}$; and define
$\tilde{\boldsymbol{y}}=\tilde{h}(\boldsymbol{X}\tilde{\boldsymbol{\theta}},\tilde{\boldsymbol{w}})$.
All that remains is to show (51) holds for the model generated in this way.
The bounds on $\|\boldsymbol{v}-\tilde{\boldsymbol{v}}\|^{2}$ and
$\|\boldsymbol{u}-\tilde{\boldsymbol{u}}\|^{2}$ hold by the weak law of large
numbers and (55). To control $\|\boldsymbol{y}-\tilde{\boldsymbol{y}}\|$, we
bound
$\displaystyle\|\boldsymbol{y}-\tilde{\boldsymbol{y}}\|=\|h(\boldsymbol{X}\boldsymbol{\theta},\boldsymbol{w})-\tilde{h}(\boldsymbol{X}\tilde{\boldsymbol{\theta}},\tilde{\boldsymbol{w}})\|$
$\displaystyle\qquad\leq\|h(\boldsymbol{X}\boldsymbol{\theta},\boldsymbol{w})-\hat{h}(\boldsymbol{X}\boldsymbol{\theta},\boldsymbol{w})\|+\|\hat{h}(\boldsymbol{X}\boldsymbol{\theta},\boldsymbol{w})-\hat{h}(\boldsymbol{X}\tilde{\boldsymbol{\theta}},\boldsymbol{w})\|+\|\hat{h}(\boldsymbol{X}\tilde{\boldsymbol{\theta}},\boldsymbol{w})-\tilde{h}(\boldsymbol{X}\tilde{\boldsymbol{\theta}},\tilde{\boldsymbol{w}})\|$
$\displaystyle\qquad\leq\|h(\boldsymbol{X}\boldsymbol{\theta},\boldsymbol{w})-\hat{h}(\boldsymbol{X}\boldsymbol{\theta},\boldsymbol{w})\|+L\|\boldsymbol{X}(\boldsymbol{\theta}-\tilde{\boldsymbol{\theta}})\|+L\xi^{1/2}\|\boldsymbol{a}\|+L\|\boldsymbol{w}-\bar{\boldsymbol{w}}\|+\epsilon^{1/2}\|\boldsymbol{z}\|\,.$
Because $|h(x,w)|\leq C(1+|x|+|w|)$ by R2 and $\hat{h}$ is Lipschitz, there
exist $C>0$ such that $|h(x,w)-\hat{h}(x,w)|\leq C(1+|x|+|w|)$. Then,
$\mathbb{E}[(h(\tau Z,w)-\hat{h}(\tau
Z,w))^{2}]=\int(h(x,w)-\hat{h}(x,w))^{2}\frac{1}{\sqrt{2\pi}\tau}e^{-\frac{1}{2\tau^{2}}x^{2}}{\mathrm{d}}x<C(1+\tau^{2}+w^{2})$
and is continuous in $\tau^{2}$ for $\tau>0$ by dominated convergence, and is
uniformly continuous for $\tau$ bounded away from 0 and
infinity and $w_{i}$ restricted to a compact set. Because
$\boldsymbol{x}_{i}^{\mathsf{T}}\boldsymbol{\theta}|\boldsymbol{\theta}\sim{\mathsf{N}}(0,\|\boldsymbol{\theta}\|^{2}/n)$
and
$\|\boldsymbol{\theta}\|^{2}/n\stackrel{{\scriptstyle\mathrm{p}}}{{\rightarrow}}\tau_{\Theta}^{2}/\delta$,
we have that
$\mathbb{E}[(h(\boldsymbol{x}_{i}^{\mathsf{T}}\boldsymbol{\theta},w_{i})-\hat{h}(\boldsymbol{x}_{i}^{\mathsf{T}}\boldsymbol{\theta},w_{i}))^{2}|\boldsymbol{\theta},w_{i}]=\mathbb{E}[(h(\tau_{\Theta}\boldsymbol{x}_{i}^{\mathsf{T}}\boldsymbol{\theta}/\|\boldsymbol{\theta}\|,w_{i})-\hat{h}(\tau_{\Theta}\boldsymbol{x}_{i}^{\mathsf{T}}\boldsymbol{\theta}/\|\boldsymbol{\theta}\|,w_{i}))^{2}|\boldsymbol{\theta},w_{i}]+o_{p}(1)\,.$
The right-hand side is a constant equal to
$\mathbb{E}[(h(G,W)-\hat{h}(G,W))^{2}]$ and the left-hand side is uniformly
integrable. Thus,
$\limsup_{n\rightarrow\infty}\mathbb{E}[(h(\boldsymbol{x}_{i}^{\mathsf{T}}\boldsymbol{\theta},w_{i})-\hat{h}(\boldsymbol{x}_{i}^{\mathsf{T}}\boldsymbol{\theta},w_{i}))^{2}]\leq\mathbb{E}[(h(G,w_{i})-\hat{h}(G,w_{i}))^{2}]<\xi\,.$
Markov’s inequality proves the first convergence in (51) because
$\xi<\epsilon$. Further, by the weak law of large numbers
$\frac{L^{2}}{n}\|\boldsymbol{X}(\boldsymbol{\theta}-\tilde{\boldsymbol{\theta}})\|^{2}\leq\frac{L^{2}\|\boldsymbol{X}\|_{\mathsf{op}}^{2}}{n}\|\boldsymbol{\theta}-\tilde{\boldsymbol{\theta}}\|^{2}\stackrel{{\scriptstyle\mathrm{p}}}{{\rightarrow}}L^{2}C_{\delta}\delta^{-1}\mathbb{E}[\Theta^{2}\mathbf{1}\\{|\Theta|>M\\}]<C_{\delta}\epsilon\,,$
where $C_{\delta}$ is the constant satisfying
$\|\boldsymbol{X}\|_{\mathsf{op}}^{2}\stackrel{{\scriptstyle\mathrm{p}}}{{\rightarrow}}C_{\delta}$
[Ver12, Theorem 5.31]. Similarly, by the weak law of large numbers
$\frac{L^{2}\xi}{n}\|\boldsymbol{a}\|^{2}\stackrel{{\scriptstyle\mathrm{p}}}{{\rightarrow}}L^{2}\xi<\epsilon,\;\;\frac{L^{2}}{n}\|\boldsymbol{w}-\bar{\boldsymbol{w}}\|^{2}\stackrel{{\scriptstyle\mathrm{p}}}{{\rightarrow}}L^{2}\mathbb{E}[W^{2}\mathbf{1}\\{|W|>M\\}]<\xi<\epsilon,\;\;\frac{\epsilon}{n}\|\boldsymbol{z}\|^{2}\stackrel{{\scriptstyle\mathrm{p}}}{{\rightarrow}}\epsilon\,.$
We conclude that
$\mathbb{P}\left(\frac{1}{n}\|\boldsymbol{y}-\tilde{\boldsymbol{y}}\|^{2}>5(C_{\delta}+4)\epsilon\right)\rightarrow
0.$
Because $\epsilon$ was arbitrary, we can in fact achieve (51) by considering a
smaller $\epsilon$ (without affecting the validity of (53)).
This completes the construction. To summarize, we have two models: the first
satisfying R1 and R2, and the second satisfying R3 and R4.
With the construction now complete, we explain why it establishes the
reduction. Let $\tau_{s}^{(\epsilon)},\tilde{\tau}_{s}^{(\epsilon)}$ be the
state evolution parameters generated by (5) with
$\mu_{\tilde{\boldsymbol{W}},\tilde{U}}$, $\mu_{\tilde{\Theta},\tilde{V}}$,
and $\tilde{h}$ in place of $\mu_{W,U},\mu_{\Theta,V}$, and $h$. First, we
claim that Eqs. (53) and (54) imply, by induction, that as
$\epsilon\rightarrow 0$, we have
$\tau_{t}^{(\epsilon)}\rightarrow\tau_{t}.$
Indeed, to show this, we must only establish that
$\mathbb{E}\left[\mathbb{E}[G_{1}|h(G,W)+\epsilon^{1/2}Z,G_{0}]^{2}\right]$
converges to $\mathbb{E}[\mathbb{E}[G_{1}|h(G,W),G_{0}]^{2}]$ as
$\epsilon\rightarrow 0$. Without loss of generality, we may assume that on the
same probability space there exists a Brownian motion
$(B_{\epsilon})_{\epsilon>0}$ independent of everything else. We see that
$\mathbb{E}[G_{1}|h(G,W)+\epsilon^{1/2}Z,G_{0}]\stackrel{{\scriptstyle\mathrm{d}}}{{=}}\mathbb{E}[G_{1}|h(G,W)+B_{\epsilon},G_{0}]=\mathbb{E}[G_{1}|(h(G,W)+B_{s})_{s\geq\epsilon},G_{0}]$.
By Lévy’s upward theorem [Dur10, Theorem 5.5.7], we have that
$\mathbb{E}[G_{1}|(h(G,W)+B_{s})_{s\geq\epsilon},G_{0}]$ converges to
$\mathbb{E}[G_{1}|(h(G,W)+B_{s})_{s\geq
0},G_{0}]=\mathbb{E}[G_{1}|h(G,W),G_{0}]$ almost surely. By uniform
integrability, we conclude that
$\mathbb{E}[\mathbb{E}[G_{1}|(h(G,W)+B_{s})_{s\geq\epsilon},G_{0}]^{2}]\rightarrow\mathbb{E}[\mathbb{E}[G_{1}|h(G,W),G_{0}]^{2}]$,
as claimed. Thus, we conclude the previous display.
We now show that as $\epsilon\rightarrow 0$, we have
$\inf_{\hat{\theta}(\cdot)}\mathbb{E}[\ell(\tilde{\Theta},\hat{\theta}(\tilde{\Theta}+\tau_{t}^{(\epsilon)}G,V))]\rightarrow\inf_{\hat{\theta}(\cdot)}\mathbb{E}[\ell(\Theta,\hat{\theta}(\Theta+\tau_{t}G,V))]\,.$
Because the truncation level $K$ can be taken to $\infty$ as
$\epsilon\rightarrow 0$, this holds by combining Lemma 5(a) and (c), and
specifically, Eqs. (25) and (27).
Because the lower bound of Theorem 1 holds under assumptions R3 and R4, which
are satisfied by $\mu_{\tilde{\boldsymbol{W}},\tilde{U}}$,
$\mu_{\tilde{\Theta},\tilde{V}}$, and $\tilde{h}$, we conclude that
$\lim_{n\rightarrow\infty}\frac{1}{p}\sum_{j=1}^{p}\ell(\theta_{j},\hat{\theta}_{j}^{t})\geq\inf_{\hat{\theta}(\cdot)}\mathbb{E}[\ell(\tilde{\Theta},\hat{\theta}(\tilde{\Theta}+\tau_{t}^{(\epsilon)}G,V))].$
Taking $\epsilon\rightarrow 0$ and applying (52), we conclude that (6) holds
for $\hat{\boldsymbol{\theta}}^{t}$, as desired.
The reduction in the high-dimensional regression model is complete.
###### Proof of Lemma 1.
It is enough to prove the result for $\tau=1$. Note
$\mathbb{E}[A|Y=y]=\frac{\int
ae^{-(y-b)^{2}/2}\mu({\mathrm{d}}a,{\mathrm{d}}b)}{\int
e^{-(y-b)^{2}/2}\mu({\mathrm{d}}a,{\mathrm{d}}b)},\qquad\mathbb{E}[A|Y_{n}=y]=\frac{\int
ae^{-(y-b)^{2}/2}\mu_{n}({\mathrm{d}}a,{\mathrm{d}}b)}{\int
e^{-(y-b)^{2}/2}\mu_{n}({\mathrm{d}}a,{\mathrm{d}}b)},$
where $\mu$ and $\mu_{n}$ denote the joint laws of $(A,B)$ and $(A,B_{n})$,
respectively. Because $\mathbb{E}[(B-B_{n})^{2}]\rightarrow 0$ implies
$\mu_{n}\stackrel{{\scriptstyle\mathrm{W}}}{{\rightarrow}}\mu$, we have
$\frac{\int ae^{-(y-b)^{2}/2}\mu_{n}({\mathrm{d}}a,{\mathrm{d}}b)}{\int
e^{-(y-b)^{2}/2}\mu_{n}({\mathrm{d}}a,{\mathrm{d}}b)}\rightarrow\frac{\int
ae^{-(y-b)^{2}/2}\mu({\mathrm{d}}a,{\mathrm{d}}b)}{\int
e^{-(y-b)^{2}/2}\mu({\mathrm{d}}a,{\mathrm{d}}b)},$
for all $y$, and moreover, this convergence is uniform on compact sets.
Moreover, one can check that the stated functions are Lipschitz (with uniform
Lipschitz constant) in $y$ on compact sets. This implies that
$\mathbb{E}[A|Y_{n}]\rightarrow\mathbb{E}[A|Y]$ almost surely. Because the
$\mathbb{E}[A|Y_{n}]^{2}$ are uniformly integrable, the lemma follows. ∎
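A quick way to see the stability asserted by this lemma is to evaluate the two ratio formulas by Monte Carlo; the sketch below is our own illustration for $\tau=1$, with an arbitrary choice of $(A,B,B_{n})$.

```python
import numpy as np

rng = np.random.default_rng(0)

# E[A | Y = y] via the ratio formula in the proof (tau = 1), estimated
# from i.i.d. draws (a_k, b_k) from the joint law of (A, B).
def cond_mean(y, a, b):
    w = np.exp(-0.5 * (y - b) ** 2)   # Gaussian kernel exp(-(y-b)^2/2)
    return np.sum(a * w) / np.sum(w)

# illustration: B = A^3, and B_n = B + noise with E[(B - B_n)^2] -> 0
a = rng.standard_normal(200_000)
b = a ** 3
y = 0.7
for eps in [1.0, 0.1, 0.01, 0.0]:
    b_n = b + eps * rng.standard_normal(a.size)
    print(eps, cond_mean(y, a, b_n))  # converges to cond_mean(y, a, b)
```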
### E.2 From strong to weak assumptions in the low-rank matrix estimation
model
Consider
$\mu_{\boldsymbol{\Lambda},\boldsymbol{U}},\mu_{\boldsymbol{\Theta},\boldsymbol{V}}$
satisfying M1. Fix $M>0$. For
$(\boldsymbol{\Lambda},\boldsymbol{U})\sim\mu_{\boldsymbol{\Lambda},\boldsymbol{U}}$,
define $\tilde{\Lambda}$ by setting
$\tilde{\Lambda}_{i}=\Lambda_{i}\mathbf{1}\\{|\Lambda_{i}|\leq M\\}$ for
$1\leq i\leq k$. Define $\tilde{\boldsymbol{U}}$ similarly, and let
$\mu_{\tilde{\boldsymbol{\Lambda}},\tilde{\boldsymbol{U}}}$ be the
distribution of $(\tilde{\boldsymbol{\Lambda}},\tilde{\boldsymbol{U}})$ so
constructed. Define $\mu_{\tilde{\boldsymbol{\Theta}},\tilde{\boldsymbol{V}}}$
similarly.
Consider $\\{(\boldsymbol{\lambda}_{i},\boldsymbol{u}_{i})\\}_{i\leq
n}\stackrel{{\scriptstyle\mathrm{iid}}}{{\sim}}\mu_{\boldsymbol{\Lambda},\boldsymbol{U}}$
and $\\{(\boldsymbol{\theta}_{j},\boldsymbol{v}_{j})\\}_{j\leq
p}\stackrel{{\scriptstyle\mathrm{iid}}}{{\sim}}\mu_{\boldsymbol{\Theta},\boldsymbol{V}}$
and $\boldsymbol{Z}\in\mathbb{R}^{n\times p}$ independent with
$z_{ij}\stackrel{{\scriptstyle\mathrm{iid}}}{{\sim}}{\mathsf{N}}(0,1/n)$.
Construct
$\tilde{\boldsymbol{\lambda}}_{i},\tilde{\boldsymbol{u}}_{i},\tilde{\boldsymbol{\theta}}_{j},\tilde{\boldsymbol{v}}_{j}$
by truncating each coordinate at level $M$ as above. Define
$\boldsymbol{X},\tilde{\boldsymbol{X}}\in\mathbb{R}^{n\times p}$ by
$x_{ij}=\frac{1}{n}\boldsymbol{\lambda}_{i}^{\mathsf{T}}\boldsymbol{\theta}_{j}+z_{ij}$
and
$\tilde{x}_{ij}=\frac{1}{n}\tilde{\boldsymbol{\lambda}}_{i}^{\mathsf{T}}\tilde{\boldsymbol{\theta}}_{j}+z_{ij}$.
As in the previous section, we have for any $\epsilon>0$ that
$\mathbb{P}(\|\boldsymbol{X}-\tilde{\boldsymbol{X}}\|_{\mathsf{op}}>\epsilon)\rightarrow
0,\;\;\mathbb{P}\left(\frac{1}{p}\|\boldsymbol{v}-\tilde{\boldsymbol{v}}\|^{2}>\epsilon\right)\rightarrow
0,\;\;\mathbb{P}\left(\frac{1}{n}\|\boldsymbol{u}-\tilde{\boldsymbol{u}}\|^{2}>\epsilon\right)\rightarrow
0.$
As in the previous section, this implies that the iterates of the GFOMs before
and after the truncation become arbitrarily close with high probability at a
fixed iterate $t$ as we take $M\rightarrow\infty$.
Further, as $M\rightarrow\infty$ we have
$\boldsymbol{V}_{\tilde{\boldsymbol{\Theta}},\tilde{\boldsymbol{V}}}(\boldsymbol{Q})\rightarrow\boldsymbol{V}_{\boldsymbol{\Theta},\boldsymbol{V}}(\boldsymbol{Q})$
for all $\boldsymbol{Q}$, and likewise for
$\tilde{\boldsymbol{\Lambda}},\tilde{\boldsymbol{U}}$. Further,
$\boldsymbol{V}_{\tilde{\boldsymbol{\Theta}},\tilde{\boldsymbol{V}}}(\boldsymbol{Q})$
is jointly continuous in $\boldsymbol{Q}$ and $M$ (where $M$ is implicit in
the truncation used to generate
$\tilde{\boldsymbol{\Theta}},\tilde{\boldsymbol{V}}$). Thus, as we take
$M\rightarrow\infty$, the state evolution (7) after the truncation converges
to the state evolution with no truncation.
The reduction now occurs exactly as in the previous section.
## Appendix F Achieving the bound
All that remains to prove Theorems 1 and 2 under assumptions A1, A2 and either
R1, R2 or M1, respectively, is to show that the lower bounds in Eqs. (6) and
(8) can be achieved. In both cases, we can achieve the bound up to tolerance
$\epsilon$ using a certain AMP algorithm.
### F.1 Achieving the bound in the high-dimensional regression model
We first derive certain monotonicity properties of the parameters
$\tau_{s},\sigma_{s},\tilde{\tau}_{s}$ defined in the state evolution
recursion (5). As we saw in Appendix D.1, and in particular in Lemma 1, the
posterior of $\theta_{v}$ on the computation tree given observations in the
local neighborhood $\mathcal{T}_{v,2s}$ behaves like that from an observation
under Gaussian noise with variance $\tau_{s}^{2}$.
Moreover, we saw in the same section that a consequence of Lemma 1 is that the
asymptotic limiting Bayes risk with respect to loss $\ell$ for estimating
$\theta_{v}$ given observations in $\mathcal{T}_{v,2s}$ is given by the
corresponding risk for estimating $\Theta$ given $\Theta+\tau_{s}G$, $V$ with
$(\Theta,V)\sim\mu_{\Theta,V}$ and $G\sim{\mathsf{N}}(0,1)$ independent. In
particular, this applies to the minimum mean square error. On the computation
tree, minimum mean square error can only decrease as $s$ grows because as $s$
grows we receive strictly more information. If
$\mathbb{E}[\operatorname{Var}(\Theta|V)]>0$, then
$\textsf{mmse}_{\Theta,V}(\tau^{2})$ is strictly increasing in $\tau$, so that
we conclude that $\tau_{s}$ is non-increasing in $s$. Thus, by (5), we have
also $\tilde{\tau}_{s}$ is non-increasing in $s$ and $\sigma_{s}$ is non-
decreasing in $s$. In the complementary case that
$\mathbb{E}[\operatorname{Var}(\Theta|V)]=0$, we compute
$\sigma_{s}^{2}=\tau_{\Theta}^{2}/\delta$ and $\tilde{\tau}_{s}^{2}=0$ for all
$s\geq 0$, and $\tau_{s}^{2}=0$ for all $s\geq 1$. Thus, the same monotonicity
results hold in this case. These monotonicity results will imply the needed
structural properties of the state evolution matrices
$(T_{s,s^{\prime}}),(\Sigma_{s,s^{\prime}})$ used below.
For all $s\leq t$, define
$\alpha_{s}=\frac{1}{\tilde{\tau}_{s}}\mathbb{E}[\mathbb{E}[G_{1}|Y,G_{0},U]^{2}],\;\;T_{s,t}=\mathbb{E}[\mathbb{E}[G_{1}|Y,G_{0},U]^{2}],\;\;\Sigma_{s,t}=\sigma_{t}^{2},$
where $Y=h(\sigma_{s}G_{0}+\tilde{\tau}_{s}G_{1},W)$ and
$G_{0},G_{1}\stackrel{{\scriptstyle\mathrm{iid}}}{{\sim}}{\mathsf{N}}(0,1)$
and $W\sim\mu_{W}$ independent. By the monotonicity properties stated,
$(T_{s,t}),(\Sigma_{s,t})$ define positive definite arrays. Define
$\displaystyle
f_{t}(b^{t};y,u)=\mathbb{E}[B^{0}-B^{t}|h(B^{0},W)=y,\,B^{t}=b^{t},\,U=u]/\tilde{\tau}_{t},$
$\displaystyle
g_{t}(a^{t};v)=\mathbb{E}[\Theta|V=v,\,\alpha_{t}\Theta+Z^{t}=a^{t}],$
where $(\Theta,V)\sim\mu_{\Theta,V}$, $(W,U)\sim\mu_{W,U}$,
$(B^{0},\ldots,B^{t})\sim{\mathsf{N}}(\boldsymbol{0},\boldsymbol{\Sigma}_{[0{:}t]})$,
$(Z^{1},\ldots,Z^{t})\sim{\mathsf{N}}(\boldsymbol{0},\boldsymbol{T}_{[1{:}t]})$,
all independent. With these definitions,
$(B^{t},B^{0}-B^{t})\stackrel{{\scriptstyle\mathrm{d}}}{{=}}(\sigma_{t}G_{0},\tilde{\tau}_{t}G_{1})$
where
$G_{0},G_{1}\stackrel{{\scriptstyle\mathrm{iid}}}{{\sim}}{\mathsf{N}}(0,1)$.
In particular, $(B^{t})$ form a backwards Gaussian random walk. We thus
compute
$\displaystyle\mathbb{E}[(B^{0}-B^{t})f_{t}(B^{t};h(B^{0},W),U)]/\tilde{\tau}_{t}^{2}=\mathbb{E}[(\mathbb{E}[B^{0}-B^{t}|Y,B^{t},U]/\tilde{\tau}_{t})^{2}]/\tilde{\tau}_{t}=\alpha_{t},$
$\displaystyle\mathbb{E}[f_{s}(B^{s};h(B^{0},W),U)f_{t}(B^{t};h(B^{0},W),U)]$
$\displaystyle\qquad\qquad=\mathbb{E}[\mathbb{E}[B^{0}-B^{s}|Y,B^{s},U]\mathbb{E}[B^{0}-B^{t}|Y,B^{t},U]]/\tilde{\tau}_{t}^{2}$
$\displaystyle\qquad\qquad=\mathbb{E}[(B^{0}-B^{t})^{2}|Y,B^{t},U]/\tilde{\tau}_{t}^{2}=T_{s,t},$
$\displaystyle\frac{1}{\delta}\mathbb{E}[\Theta
g_{t}(\alpha_{t}\Theta+Z^{t};V)]=\frac{1}{\delta}\mathbb{E}[\mathbb{E}[\Theta|\Theta+Z^{t}/\alpha_{t},V]^{2}]=\sigma_{t}^{2},$
$\displaystyle\frac{1}{\delta}\mathbb{E}[g_{s}(\alpha_{s}\Theta+Z^{s};V)g_{t}(\alpha_{t}\Theta+Z^{t};V)]=\frac{1}{\delta}\mathbb{E}[\mathbb{E}[\Theta|\Theta+Z^{t}/\alpha_{t},V]^{2}].$
If $f_{t},g_{t}$ are Lipschitz, then, because $h$ is also Lipschitz, Stein’s
lemma [Ste81] implies that the first line is equivalent to
$\mathbb{E}[\partial_{B^{0}}f_{t}(B^{t};h(B^{0},W),U)]=\alpha_{t}$. (Here, we
have used that $B^{0}-B^{t}$ is independent of $B^{t}$). Thus,
$(\alpha_{s}),(T_{s,t}),(\Sigma_{s,t})$ are exactly the state evolution
parameters determined by (36), and Lemma 1 implies that AMP with these
$(f_{s}),(g_{s})$ achieves the lower bound.
If the $f_{t},g_{t}$ are not Lipschitz, we proceed as follows. Fix
$\epsilon>0$. First, pick Lipschitz $\hat{f}_{0}$ such that
$\mathbb{E}[(\hat{f}_{0}(B^{0},W)-f_{0}(B^{0},W))^{2}]<\epsilon$, which is
possible because Lipschitz functions are dense in $L_{2}$. Define
$\hat{\alpha}_{0}$ and $\hat{T}_{1,1}$ via (36) with $\hat{f}_{0}$ in place of
$f_{0}$. Note that $\lim_{\epsilon\rightarrow 0}\hat{\alpha}_{0}=\alpha_{0}$
and $\lim_{\epsilon\rightarrow 0}\hat{T}_{1,1}=T_{1,1}$. Next, pick Lipschitz
$\hat{g}_{0}$ such that
$\mathbb{E}[(\hat{g}_{0}(\hat{\alpha}_{0}\Theta+\hat{T}_{1,1}^{1/2}G;V)-\mathbb{E}[\Theta|\hat{\alpha}_{0}\Theta+\hat{T}_{1,1}^{1/2}G,V])^{2}]<\epsilon$,
which is again possible because Lipschitz functions are dense in $L_{2}$.
Define
$\hat{\Sigma}_{0,1}=\frac{1}{\delta}\mathbb{E}[\Theta\hat{g}_{0}(\hat{\alpha}_{0}\Theta+\hat{T}_{1,1}^{1/2}G;V)]$
and
$\hat{\Sigma}_{1,1}=\frac{1}{\delta}\mathbb{E}[\hat{g}_{0}(\hat{\alpha}_{0}\Theta+\hat{T}_{1,1}^{1/2}G;V)^{2}]$.
Because as $\alpha\rightarrow\alpha_{0}$ and $\tau\rightarrow T_{1,1}^{1/2}$,
we have $\mathbb{E}[\Theta|\alpha\Theta+\tau G,V]\stackrel{{\scriptstyle
L_{2}}}{{\rightarrow}}\mathbb{E}[\Theta|\alpha_{0}\Theta+T_{1,1}^{1/2}G,V]$,
we conclude that, as $\epsilon\rightarrow 0$,
$\hat{\Sigma}_{0,1}\rightarrow\Sigma_{0,1}$ and
$\hat{\Sigma}_{1,1}\rightarrow\Sigma_{1,1}$. Continuing in this way, we are
able, by taking $\epsilon$ sufficiently small, to construct Lipschitz functions
$(\hat{f}_{t}),(\hat{g}_{t})$ which track the state evolution of the previous
paragraph arbitrarily closely up to a fixed time $t^{*}$. Thus, we may come
arbitrarily close to achieving the lower bound of Theorem 1.
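The density argument used above can be made concrete. The sketch below (our own minimal construction, not the paper's) replaces a possibly non-Lipschitz scalar denoiser by a piecewise-linear interpolant, which is Lipschitz by construction and $L_{2}$-close on the bulk of a Gaussian input when the grid is fine and wide enough; for a general $L_{2}$ function one would mollify first.

```python
import numpy as np

# A minimal sketch: approximate a (here merely continuous, non-Lipschitz)
# denoiser f in L2 of a Gaussian input by piecewise-linear interpolation.
def lipschitz_approx(f, lo=-10.0, hi=10.0, n_knots=2001):
    grid = np.linspace(lo, hi, n_knots)
    vals = f(grid)
    # np.interp clamps outside [lo, hi], so f_hat is globally Lipschitz
    return lambda x: np.interp(x, grid, vals)

f = lambda x: np.sign(x) * np.abs(x) ** 0.3   # not Lipschitz at 0
f_hat = lipschitz_approx(f)

x = np.random.default_rng(1).standard_normal(500_000)
print(np.mean((f(x) - f_hat(x)) ** 2))        # shrinks as n_knots grows
```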
### F.2 Achieving the bound in the low-rank matrix estimation model
Let $\boldsymbol{\gamma}_{t}=\hat{\boldsymbol{Q}}_{t}$ for $t\geq 0$ and
$\boldsymbol{\alpha}_{t}=\boldsymbol{Q}_{t}$,
$\boldsymbol{\Sigma}_{t,t}=\hat{\boldsymbol{Q}}_{t}$,
$\boldsymbol{T}_{t,t}=\boldsymbol{Q}_{t}$ for $t\geq 1$. Define
$\displaystyle
f_{t}(\boldsymbol{b}^{t};\boldsymbol{u})=\mathbb{E}[\boldsymbol{\Lambda}|\boldsymbol{\gamma}_{t}\boldsymbol{\Lambda}+\boldsymbol{\Sigma}_{t,t}^{1/2}\boldsymbol{G}=\boldsymbol{b}^{t};\boldsymbol{U}],$
$\displaystyle
g_{t}(\boldsymbol{a}^{t};\boldsymbol{v})=\mathbb{E}[\boldsymbol{\Theta}|\boldsymbol{\alpha}_{t}\boldsymbol{\Theta}+\boldsymbol{T}_{t,t}^{1/2}\boldsymbol{G}=\boldsymbol{a}^{t};\boldsymbol{V}].$
We check that the parameters so defined satisfy the AMP state evolution (37).
Note that by (7),
$\displaystyle\boldsymbol{T}_{t+1,t+1}$
$\displaystyle=\boldsymbol{Q}_{t+1}=\mathbb{E}[\mathbb{E}[\boldsymbol{\Lambda}|\hat{\boldsymbol{Q}}_{t}^{1/2}\boldsymbol{\Lambda}+\boldsymbol{G};\boldsymbol{U}]\mathbb{E}[\boldsymbol{\Lambda}|\hat{\boldsymbol{Q}}_{t}^{1/2}\boldsymbol{\Lambda}+\boldsymbol{G};\boldsymbol{U}]^{\mathsf{T}}]$
$\displaystyle=\mathbb{E}[\mathbb{E}[\boldsymbol{\Lambda}|\hat{\boldsymbol{Q}}_{t}\boldsymbol{\Lambda}+\hat{\boldsymbol{Q}}_{t}^{1/2}\boldsymbol{G};\boldsymbol{U}]\mathbb{E}[\boldsymbol{\Lambda}|\hat{\boldsymbol{Q}}_{t}\boldsymbol{\Lambda}+\hat{\boldsymbol{Q}}_{t}^{1/2}\boldsymbol{G};\boldsymbol{U}]^{\mathsf{T}}]$
$\displaystyle=\mathbb{E}[\mathbb{E}[\boldsymbol{\Lambda}|\boldsymbol{\gamma}_{t}\boldsymbol{\Lambda}+\boldsymbol{\Sigma}_{t,t}^{1/2}\boldsymbol{G};\boldsymbol{U}]\mathbb{E}[\boldsymbol{\Lambda}|\boldsymbol{\gamma}_{t}\boldsymbol{\Lambda}+\boldsymbol{\Sigma}_{t,t}^{1/2}\boldsymbol{G};\boldsymbol{U}]^{\mathsf{T}}],$
$\displaystyle\boldsymbol{\alpha}_{t+1}$
$\displaystyle=\mathbb{E}[\mathbb{E}[\boldsymbol{\Lambda}|\hat{\boldsymbol{Q}}_{t}^{1/2}\boldsymbol{\Lambda}+\boldsymbol{G};\boldsymbol{U}]\mathbb{E}[\boldsymbol{\Lambda}|\hat{\boldsymbol{Q}}_{t}^{1/2}\boldsymbol{\Lambda}+\boldsymbol{G};\boldsymbol{U}]^{\mathsf{T}}]$
$\displaystyle=\mathbb{E}[\mathbb{E}[\boldsymbol{\Lambda}|\boldsymbol{\gamma}_{t}\boldsymbol{\Lambda}+\boldsymbol{\Sigma}_{t,t}^{1/2}\boldsymbol{G};\boldsymbol{U}]\boldsymbol{\Lambda}^{\mathsf{T}}]$
where
$(\boldsymbol{\Theta},\boldsymbol{V})\sim\mu_{\boldsymbol{\Theta},\boldsymbol{V}}$
and
$(\boldsymbol{\Lambda},\boldsymbol{U})\sim\mu_{\boldsymbol{\Lambda},\boldsymbol{U}}$.
The state evolution equations (7) for $\boldsymbol{\Sigma}_{t,t}$ and
$\boldsymbol{\gamma}_{t}$ hold similarly.
If $f_{t},g_{t}$ so defined are Lipschitz, then
$(\alpha_{s}),(\boldsymbol{T}_{s,t}),(\boldsymbol{\Sigma}_{s,t})$ are exactly
the state evolution parameters determined by (37), and Lemma 1 implies that
AMP with these $(f_{s}),(g_{s})$ achieves the lower bound. If the
$f_{t},g_{t}$ so defined are not Lipschitz, then the same strategy used in the
previous section allows us to achieve the lower bound within tolerance
$\epsilon>0$.
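As a sanity check of these fixed-point equations, the scalar ($k=1$) state-evolution map can be evaluated by Monte Carlo; below is a sketch with an illustrative Rademacher prior (our choice; the model allows general priors with bounded support).

```python
import numpy as np

rng = np.random.default_rng(1)

# One step of the k = 1 state evolution map
#   q -> E[ E[Lambda | q*Lambda + sqrt(q)*G]^2 ]
# for Lambda uniform on {+1, -1}, where E[Lambda | y] = tanh(y).
def se_step(q, n_mc=400_000):
    lam = rng.choice([-1.0, 1.0], size=n_mc)
    y = q * lam + np.sqrt(q) * rng.standard_normal(n_mc)
    return np.mean(np.tanh(y) ** 2)

q = 0.05
for t in range(30):
    q = se_step(q)
print(q)   # approaches a fixed point of the recursion
```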
## Appendix G Proofs for sparse phase retrieval and sparse PCA
### G.1 Proof of Lemma 1
Note that $\|\overline{\boldsymbol{\theta}}_{0}\|_{2}$ is tightly concentrated
around $\mu^{2}\varepsilon$. As a consequence, we can replace the side
information $\overline{\boldsymbol{v}}$ by
$\boldsymbol{v}=\sqrt{\tilde{\alpha}}\boldsymbol{\theta}_{0}+\boldsymbol{g}$.
We apply Theorem 2 with $r=1$, and loss
$\ell_{\lambda}(\theta,\hat{\theta})=(\hat{\theta}-\theta_{0}/\lambda)^{2}$,
where $\lambda\in\mathbb{R}_{\geq 0}$ will be adjusted below. Setting
$\boldsymbol{Q}_{t}=q_{t}$, $\hat{\boldsymbol{Q}}_{t}=\hat{q}_{t}$, we obtain
the iteration
$\displaystyle
q_{t+1}=\frac{\hat{q}_{t}}{1+\hat{q}_{t}}\,,\;\;\;\;\hat{q}_{t}=\frac{1}{\delta}\mathbb{E}\big{\\{}\mathbb{E}[\sqrt{\delta}\Theta_{0}|(\delta
q_{t})^{1/2}\Theta_{0}+G;V]^{2}\big{\\}}\ ,$ (56)
where $\Theta_{0}\sim\mu_{\theta}$, and
$V=\sqrt{\delta\tilde{\alpha}}\,\Theta_{0}+G^{\prime}$, $G^{\prime}\sim{\mathsf{N}}(0,1)$.
Notice that the additional factors $\sqrt{\delta}$ are due to the different
normalization of the vector $\boldsymbol{\theta}_{0}$ with respect to the
statement in Theorem 2. Also note that the second moment of the conditional
expectation above is equal to
$\mathbb{E}\big{\\{}\mathbb{E}[\sqrt{\delta}\Theta_{0}|(\delta(q_{t}+\tilde{\alpha}))^{1/2}\Theta_{0}+G]^{2}\big{\\}}$
and a simple calculation yields
$\displaystyle\hat{q}_{t+1}=V_{\pm}(q_{t}+\tilde{\alpha})\,,\;\;\;\;q_{t}=\frac{\hat{q}_{t}}{1+\hat{q}_{t}}\,,$
(57)
which is equivalent to Eqs. (12), (13).
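Reading $V_{\pm}$ (defined in the main text) as the map $q\mapsto\mathbb{E}\{\mathbb{E}[\Theta_{0}|\sqrt{\delta q}\,\Theta_{0}+G]^{2}\}$ for the three-point prior, the recursion is easy to iterate numerically; the sketch below is our own Monte Carlo reading of it, with illustrative parameter values, and it reproduces the small-$q$ expansion $V_{\pm}(q)=\mu^{4}\varepsilon^{2}\delta\,q+O(q^{2})$ used in the proof of Corollary 2.

```python
import numpy as np

rng = np.random.default_rng(0)

def V_pm(q, mu, eps, delta, n_mc=400_000):
    # V_pm(q) = E{ E[Theta0 | sqrt(delta*q)*Theta0 + G]^2 } for the
    # three-point prior P(Theta0 = +-mu) = eps/2, P(Theta0 = 0) = 1 - eps.
    th = rng.choice([0.0, mu, -mu], size=n_mc, p=[1 - eps, eps / 2, eps / 2])
    s = np.sqrt(delta * q)
    y = s * th + rng.standard_normal(n_mc)
    w0 = (1 - eps) * np.exp(-0.5 * y**2)
    wp = 0.5 * eps * np.exp(-0.5 * (y - s * mu) ** 2)
    wm = 0.5 * eps * np.exp(-0.5 * (y + s * mu) ** 2)
    post = mu * (wp - wm) / (w0 + wp + wm)   # Bayes posterior mean
    return np.mean(post**2)

# iterate: qhat_t = V_pm(q_t + alpha_tilde), q_{t+1} = qhat_t / (1 + qhat_t)
mu, eps, delta, alpha_tilde = 1.5, 0.1, 0.8, 1e-3   # illustrative values
q = 0.0
for t in range(50):
    qhat = V_pm(q + alpha_tilde, mu, eps, delta)
    q = qhat / (1.0 + qhat)
print(q)
```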
Let $Y=\sqrt{\delta(q_{t}+\tilde{\alpha})}\Theta_{0}+G$,
$G\sim{\mathsf{N}}(0,1)$. Theorem 2 then yields
$\displaystyle\frac{1}{p}\|\hat{\boldsymbol{\theta}}^{t}-\boldsymbol{\theta}_{0}/\lambda\|_{2}^{2}$
$\displaystyle\geq\inf_{\hat{\theta}(\,\cdot\,)}\mathbb{E}\big{\\{}\big{(}\hat{\theta}(Y)-\Theta_{0}/\lambda\big{)}^{2}\big{\\}}+o_{p}(1)$
(58)
$\displaystyle=\frac{1}{\lambda^{2}}\mathbb{E}\big{\\{}\big{(}\mathbb{E}(\Theta_{0}|Y)-\Theta_{0}\big{)}^{2}\big{\\}}+o_{p}(1)\,.$
(59)
In order to prove the upper bound (14), it is sufficient to consider
$\|\hat{\boldsymbol{\theta}}^{t}\|^{2}_{2}\leq p$. Then, for any $\lambda\geq
0$,
$\displaystyle\frac{1}{p}\langle\hat{\boldsymbol{\theta}}^{t},\boldsymbol{\theta}_{0}\rangle$
$\displaystyle\leq\frac{1}{p}\langle\hat{\boldsymbol{\theta}}^{t},\boldsymbol{\theta}_{0}\rangle-\frac{\lambda}{2p}(\|\hat{\boldsymbol{\theta}}^{t}\|^{2}_{2}-p)$
(60) $\displaystyle=\frac{\lambda}{2}+\frac{1}{2\lambda
p}\|\boldsymbol{\theta}_{0}\|_{2}^{2}-\frac{\lambda}{2p}\|\hat{\boldsymbol{\theta}}^{t}-\boldsymbol{\theta}_{0}/\lambda\|_{2}^{2}$
(61)
$\displaystyle\leq\frac{\lambda}{2}+\frac{1}{2\lambda}\mathbb{E}\\{\Theta_{0}^{2}\\}-\frac{1}{2\lambda}\mathbb{E}\big{\\{}\big{(}\mathbb{E}(\Theta_{0}|Y)-\Theta_{0}\big{)}^{2}\big{\\}}+o(1)$
(62)
$\displaystyle\leq\frac{\lambda}{2}+\frac{1}{2\lambda}V_{\pm}(q_{t}+\tilde{\alpha})+o(1)\,.$
(63)
The claim follows by choosing $\lambda=V_{\pm}(q_{t}+\tilde{\alpha})^{1/2}$,
and noting that $\|\boldsymbol{\theta}_{0}\|^{2}_{2}/p\to\mu^{2}\varepsilon$,
almost surely.
### G.2 Proof of Corollary 2
Choose $\mu=R/\sqrt{\varepsilon}$, and let $\mu^{\prime}<\mu$,
$\varepsilon^{\prime}<\varepsilon$,
$R^{\prime}=\mu^{\prime}\sqrt{\varepsilon^{\prime}}$. Draw the coordinates of
$\boldsymbol{\theta}_{0}=\overline{\boldsymbol{\theta}}_{0}\sqrt{p}$ according
to the three points distribution with parameters
$\mu^{\prime},\varepsilon^{\prime}$. Then, with probability one, we have
$\overline{\boldsymbol{\theta}}_{0}\in\mathscrsfs{T}(\varepsilon,R)$ for all
$n$ large enough. Applying Lemma 1, we get
$\displaystyle\lim_{n\to\infty}\inf_{\overline{\boldsymbol{\theta}}_{0}\in\mathscrsfs{T}(\varepsilon,R)}\mathbb{E}\left\\{\frac{\langle\overline{\boldsymbol{\theta}}_{0},\hat{\boldsymbol{\theta}}^{t}\rangle}{\|\overline{\boldsymbol{\theta}}_{0}\|_{2}\|\hat{\boldsymbol{\theta}}^{t}\|_{2}}\right\\}\leq\sqrt{\frac{V_{\pm}(q^{\prime}_{t}+\tilde{\alpha}^{\prime})}{(\mu^{\prime})^{2}\varepsilon^{\prime}}}\,,$
(64)
where we used dominated convergence to pass from the limit in probability to
limit in expectation, and $q^{\prime}_{t},\tilde{\alpha}^{\prime}$ are
computed with parameters $\mu^{\prime}$, $\varepsilon^{\prime}$. By letting
$\varepsilon^{\prime}\to\varepsilon$, $\mu^{\prime}\to\mu$, and since
$\tilde{\alpha}^{\prime},q^{\prime}_{t}$ are continuous in these parameters by
an induction argument, Eq. (64) also holds with $\mu^{\prime}$,
$\varepsilon^{\prime}$, $q^{\prime}_{t}$ replaced by $\mu$, $\varepsilon$,
$q_{t}$:
$\displaystyle\lim_{n\to\infty}\inf_{\overline{\boldsymbol{\theta}}_{0}\in\mathscrsfs{T}(\varepsilon,R)}\mathbb{E}\left\\{\frac{\langle\overline{\boldsymbol{\theta}}_{0},\hat{\boldsymbol{\theta}}^{t}\rangle}{\|\overline{\boldsymbol{\theta}}_{0}\|_{2}\|\hat{\boldsymbol{\theta}}^{t}\|_{2}}\right\\}\leq\sqrt{\frac{V_{\pm}(q_{t}+\tilde{\alpha})}{\mu^{2}\varepsilon}}\,,$
(65)
Claims $(a)$ and $(b)$ follow by upper bounding the right-hand side of the
last equation.
First notice that $V_{\pm}(q)=\mu^{4}\varepsilon^{2}\delta\,q+O(q^{2})$ and
hence Eqs. (12), (13) imply that, for any $\eta>0$ there exists $q_{*}>0$ such
that, if $q_{t}+\tilde{\alpha}\leq q_{*}$, then
$\displaystyle
q_{t+1}\leq(\mu^{4}\varepsilon^{2}\delta+\eta)(q_{t}+\tilde{\alpha})\,.$ (66)
If $\mu^{4}\varepsilon^{2}\delta<1$, choosing
$\eta=(1-\mu^{4}\varepsilon^{2}\delta)/2$, this inequality implies $q_{t}\leq
2\tilde{\alpha}/(1-\mu^{4}\varepsilon^{2}\delta)$, which proves claim $(a)$.
For the second claim, we use the bounds $e^{-\delta
q\mu^{2}/2}\cosh(\mu\sqrt{\delta q}G)\geq 0$ and $x/(1+x)\leq x$ in Eq. (13)
to get $q_{t}\leq\overline{q}_{t}$ for all $t$, where $\overline{q}_{0}=0$ and
$\displaystyle\overline{q}_{t+1}$
$\displaystyle=F_{0}(\overline{q}_{t}+\tilde{\alpha})\,,\;\;\;\;\;\;\;F_{0}(q):=\frac{\mu^{2}\varepsilon^{2}}{1-\varepsilon}\sinh(\mu^{2}\delta
q)\,.$ (67)
Further Eq. (65) implies
$\displaystyle\lim_{n\to\infty}\inf_{\overline{\boldsymbol{\theta}}_{0}\in\mathscrsfs{T}(\varepsilon,R)}\mathbb{E}\left\\{\frac{\langle\overline{\boldsymbol{\theta}}_{0},\hat{\boldsymbol{\theta}}^{t}\rangle}{\|\overline{\boldsymbol{\theta}}_{0}\|_{2}\|\hat{\boldsymbol{\theta}}^{t}\|_{2}}\right\\}\leq\sqrt{\frac{\overline{q}_{t+1}}{\mu^{2}\varepsilon}}\,.$
(68)
Define $x_{t}:=\mu^{2}\delta\overline{q}_{t}$,
$a:=\mu^{4}\varepsilon^{2}\delta/(1-\varepsilon)$,
$b:=\mu^{2}\delta\tilde{\alpha}=(\delta/\varepsilon)(\alpha/(1-\alpha))$. Then
$x_{t}$ obeys the recursion
$\displaystyle x_{t+1}=a\sinh(x_{t}+b)\,.$ (69)
Since $a=R^{4}\delta/(1-\varepsilon)$, we know that $a<1/4$. Using the fact
that $\sinh(u)\leq 2u$ for $u\leq 1$, this implies $x_{t}\leq b$ for all $t$
provided $b<1/2$. Substituting this bound in Eq. (68), we obtain the desired
claim.
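The invariance argument here (if $x_{t}\leq b$ then $x_{t}+b\leq 2b\leq 1$, so $x_{t+1}\leq a\sinh(2b)\leq 4ab\leq b$ when $a<1/4$ and $b<1/2$) is easy to confirm numerically:

```python
import numpy as np

# numerical check that x_{t+1} = a*sinh(x_t + b) stays below b
# whenever a < 1/4 and b < 1/2 (so that x_t + b <= 2b <= 1)
for a in [0.05, 0.15, 0.24]:
    for b in [0.1, 0.3, 0.49]:
        x = 0.0
        for _ in range(1000):
            x = a * np.sinh(x + b)
        assert x <= b, (a, b, x)
print("bound x_t <= b verified on the grid")
```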
### G.3 Proof of Corollary 1
Consider first the case of a random vector $\boldsymbol{\theta}_{0}$ with
i.i.d. entries $\theta_{0,i}\sim\mu_{\theta}$. Define, for
$\Theta_{0}\sim\mu_{\theta}$,
$\displaystyle F_{\varepsilon}(q)$
$\displaystyle:=\mathbb{E}\big{\\{}\mathbb{E}[\Theta_{0}|\sqrt{q}\Theta_{0}+G]^{2}\big{\\}}$
(70)
$\displaystyle=e^{-q\mu^{2}}\mu^{2}\varepsilon^{2}\mathbb{E}\left\\{\frac{\sinh(\mu\sqrt{q}G)^{2}}{1-\varepsilon+\varepsilon
e^{-q\mu^{2}/2}\cosh(\mu\sqrt{q}G)}\right\\}\,.$ (71)
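Eq. (71) reduces $F_{\varepsilon}$ to a single one-dimensional Gaussian average, which can be evaluated by Gauss-Hermite quadrature; the sketch below is ours, and the printed small-$q$ check uses the linear coefficient $\mu^{4}\varepsilon^{2}$ implied by Eq. (71) as written.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

def F_eps(q, mu, eps, n_nodes=80):
    # Eq. (71): an expectation over G ~ N(0,1), computed with probabilists'
    # Gauss-Hermite quadrature, E[f(G)] ~ sum_k w_k f(x_k) / sqrt(2*pi).
    x, w = hermegauss(n_nodes)
    num = np.sinh(mu * np.sqrt(q) * x) ** 2
    den = 1 - eps + eps * np.exp(-q * mu**2 / 2) * np.cosh(mu * np.sqrt(q) * x)
    E = np.sum(w * num / den) / np.sqrt(2 * np.pi)
    return np.exp(-q * mu**2) * mu**2 * eps**2 * E

# small-q check: by Eq. (71), F_eps(q) ~ mu^4 * eps^2 * q as q -> 0
mu, eps = 1.5, 0.1
for q in [1e-3, 1e-2, 1e-1]:
    print(q, F_eps(q, mu, eps), q * mu**4 * eps**2)
```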
Setting $q_{t}=\tau_{t}^{-2}$, $\hat{q}_{t}=\sigma_{t}^{2}$, and
$\tilde{\alpha}=\alpha/(1-\alpha)$, and referring to Lemma 4, the state
evolution recursion (5) takes the form
$\displaystyle\hat{q}_{t}$
$\displaystyle=F_{\varepsilon}(q_{t}+\tilde{\alpha})\,,\;\;\;\;q_{t+1}=\delta\,H(\hat{q}_{t})\,,$
(72) $\displaystyle H(q)$
$\displaystyle:=\mathbb{E}_{G_{0},Y}\left[\left(\frac{\mathbb{E}_{G_{1}}\partial_{x}p(Y|\sqrt{q}\,G_{0}+\sqrt{1-q}G_{1})}{\mathbb{E}_{G_{1}}p(Y|\sqrt{q}\,G_{0}+\sqrt{1-q}G_{1})}\right)^{2}\right]\,.$
(73)
Notice the change in factors $\delta$ with respect to Eq. (5), which is due to
the different normalization of the design matrix.
By the same argument used in the proof of Lemma 1, Theorem 1 implies that, for
any GFOM, with output $\hat{\boldsymbol{\theta}}_{t}$, we have
$\displaystyle\lim_{n,p\to\infty}\mathbb{E}\frac{\langle\overline{\boldsymbol{\theta}}_{0},\hat{\boldsymbol{\theta}}^{t}\rangle}{\|\overline{\boldsymbol{\theta}}_{0}\|_{2}\|\hat{\boldsymbol{\theta}}^{t}\|_{2}}\leq\sqrt{\hat{q}_{t}}\,.$
(74)
We next compute the first order Taylor-expansion of the iteration (72), and
obtain $F_{\varepsilon}(q)=q+O(q^{2})$,
$H(q)=q/\delta_{\mbox{\tiny{sp}}}+O(q^{2})$ (the first order Taylor expansion
of $H(q)$ was already computed in [MM19]). As a consequence, for any $\eta>0$,
there exists $\alpha_{0}$ such that, if $\tilde{\alpha}<\alpha_{0}$,
$q_{t}<\alpha_{0}$, then
$\displaystyle
q_{t+1}\leq(\frac{\delta}{\delta_{\mbox{\tiny{sp}}}}+\eta)(q_{t}+\tilde{\alpha})\,.$
The claim follows by taking
$\eta=\eta(\delta):=(\delta_{\mbox{\tiny{sp}}}-\delta)/(2\delta_{\mbox{\tiny{sp}}})$,
whence $q_{t}\leq\tilde{\alpha}/\eta(\delta)$ for all $t$, provided
$\tilde{\alpha}<\alpha_{*}:=\alpha_{0}\eta(\delta)$. The deterministic
argument follows in the same way as Corollary 2.
|
2024-09-04T02:54:55.653372 | 2020-02-28T18:23:05 | 2002.12907 | {
"authors": "Prashanta K. Mukharjee, K. M. Ranjith, M. Baenitz, Y. Skourski, A. A.\n Tsirlin, and R. Nath",
"full_text_license": null,
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"provenance": "arxiv-papers-0000.json.gz:25949",
"submitter": "Ramesh Chandra Nath",
"url": "https://arxiv.org/abs/2002.12907"
} | arxiv-papers | # Two types of alternating spin-$\frac{1}{2}$ chains
and their field-induced transitions in $\varepsilon$-LiVOPO4
Prashanta K. Mukharjee School of Physics, Indian Institute of Science
Education and Research, Thiruvananthapuram-695551, India K. M. Ranjith M.
Baenitz Max Planck Institute for Chemical Physics of Solids,
Nöthnitzer Str. 40, 01187 Dresden, Germany Y. Skourski Dresden High
Magnetic Field Laboratory (HLD-EMFL), Helmholtz-Zentrum Dresden-Rossendorf,
01314 Dresden, Germany A. A. Tsirlin<EMAIL_ADDRESS>Experimental
Physics VI, Center for Electronic Correlations and Magnetism, Institute of
Physics, University of Augsburg, 86135 Augsburg, Germany R. Nath
<EMAIL_ADDRESS>School of Physics, Indian Institute of Science Education
and Research, Thiruvananthapuram-695551, India
###### Abstract
Thermodynamic properties, 31P nuclear magnetic resonance (NMR) measurements,
and density-functional band-structure calculations for $\varepsilon$-LiVOPO4
are reported. This quantum magnet features a singlet ground state and
comprises two types of alternating spin-$\frac{1}{2}$ chains that manifest
themselves by the double maxima in the susceptibility and magnetic specific
heat, and by the two-step magnetization process with an intermediate
$\frac{1}{2}$-plateau. From thermodynamic data and band-structure
calculations, we estimate the leading couplings of $J_{1}\simeq 20$ K and
$J_{2}\simeq 60$ K and the alternation ratios of
$\alpha_{1}=J_{1}^{\prime}/J_{1}\simeq 0.6$ and
$\alpha_{2}=J_{2}^{\prime}/J_{2}\simeq 0.3$ within the two chains,
respectively. The zero-field spin gap $\Delta_{0}/k_{\rm B}\simeq 7.3$ K
probed by thermodynamic and NMR measurements is caused by the
$J_{1}$-$J_{1}^{\prime}$ spin chains and can be closed in the applied field of
$\mu_{0}H_{\rm c1}\simeq 5.6$ T, giving rise to a field-induced long-range
order. The NMR data reveal predominant three-dimensional spin-spin
correlations at low temperatures. Field-induced magnetic ordering transition
observed above $H_{c1}$ is attributed to the Bose-Einstein condensation of
triplons in the sublattice formed by the $J_{1}$-$J_{1}^{\prime}$ chains with
weaker exchange couplings.
## I Introduction
Field-induced quantum phase transitions in magnets set a link between
fermionic spin systems and lattice boson gas Matsubara and Matsuda (1956);
Giamarchi _et al._ (2008); Zapf _et al._ (2014). In this context, spin-dimer
compounds possessing a gap in the excitation spectrum are extensively studied
Giamarchi _et al._ (2008); Zapf _et al._ (2014). Their triplet excitations
(triplons) are equivalent to lattice bosons and can be tuned by applying
magnetic field. Gap closing in the applied field will usually lead to magnetic
ordering that can be understood as Bose-Einstein condensation (BEC) of
triplons Mukhopadhyay _et al._ (2012); Sebastian _et al._ (2006); Giamarchi
_et al._ (2008); Nikuni _et al._ (2000). If interactions between the dimers
are one-dimensional (1D) in nature, one further expects a non-Fermi-liquid-
type Tomonaga-Luttinger Liquid (TLL) state realized at intermediate
temperatures before the BEC state is reached Klanjšek _et al._ (2008);
Matsushita _et al._ (2017); Willenberg _et al._ (2015); Thielemann _et al._
(2009).
The ground-state spin configuration in the applied field may also depend on
the delicate balance between the kinetic energy and repulsive interaction of
the triplons or $S_{\rm z}=+1$ bosons Rice (2002). The dominance of repulsive
interaction would lead to the formation of superlattices, which result in
magnetization plateaus Narumi _et al._ (1998); Shiramura _et al._ (1998).
This has been experimentally verified in the celebrated Shastry-Sutherland
compound SrCu2(BO3)2 Kodama _et al._ (2002); Kageyama _et al._ (1999). On
the other hand, when the kinetic energy dominates over repulsive interactions,
the triplons become delocalized, and the ground state is a superposition of
singlet-triplet states, which can be approximated as a BEC of triplons. The
phenomenon of BEC has been studied in detail for spin-dimer compounds
TlCuCl3 Rüegg _et al._ (2003), BaCuSi2O6 Jaime _et al._ (2004), (Ba,Sr)3Cr2O8
Aczel _et al._ (2009a, b), etc. The transition from TLL as a 1D quantum
critical state to the three-dimensional (3D) BEC state has often been observed
in quasi-1D spin systems e.g. spin-$1/2$ ladders (C7H10N)2CuBr4 Jeong _et
al._ (2013, 2017); Möller _et al._ (2017) and (C5H12N)2CuBr4 Thielemann _et
al._ (2009); Rüegg _et al._ (2008); Klanjšek _et al._ (2008) and spin-$1/2$
alternating spin chains Cu(NO3)${}_{2}\cdot$2.5D2O and F5PNN Willenberg _et
al._ (2015); Matsushita _et al._ (2017). Thus, quasi-1D spin gap materials
provide ample opportunities to tune the spin gap and study the field-induced
quantum phase transitions.
The V4+-based compounds offer an excellent playground to study gapped quantum
magnets and related phenomena. Several of these compounds were already
reported in the past in the context of spin gap physics Ueda (1998); Yamauchi
_et al._ (1999); Johnston _et al._ (1987); Ghoshray _et al._ (2005).
Recently, we studied magnetic properties of the $A$VO$X$O4 series ($A$ = Na,
Ag; $X$ = P, As), where all compounds were found to host alternating
spin-$\frac{1}{2}$ chains not matching the structural chains of the VO6
octahedra Mukharjee _et al._ (2019); Arjun _et al._ (2019); Ahmed _et al._
(2017); Tsirlin _et al._ (2011). In these systems, long-range superexchange
couplings via two oxygen atoms play a central role. They are highly tunable
and allow the variation of the zero-field spin gap $\Delta_{\rm 0}/k_{\rm B}$
from 21 K in NaVOAsO4 to $\sim 2$ K in NaVOPO4. External magnetic field closes
the gap and leads to a field-induced magnetic transition, which is explained
in terms of the triplon BEC Mukharjee _et al._ (2019); Weickert _et al._
(2019).
Herein, we report ground-state properties of the chemically similar, but
structurally different LiVOPO4. Unlike the $A$VO$X$O4 compounds, which are all
monoclinic ($P2_{1}/c$), LiVOPO4 crystallizes in several polymorphs with
different symmetries and atomic arrangements Harrison and Manthiram (2013);
Hidalgo _et al._ (2019). We shall focus on the triclinic
$\varepsilon$-LiVOPO4 ($P\bar{1}$) that can be seen as a distorted version of
monoclinic $A$VO$X$O4 (footnote: the triclinic crystal structure is derived from
$\varepsilon$-VOPO4; alternatively, triclinic LiVOPO4 is sometimes referred to
as the $\alpha$-phase, because it was the earliest discovered LiVOPO4
polymorph) and has been actively studied in the context of battery research
Yang _et al._ (2008); Quackenbush _et al._ (2015); Lin _et al._ (2016); Shi
_et al._ (2018); Chung _et al._ (2019), but not as a quantum magnet. In its
crystal structure, each of the Li, V, and P reside at two nonequivalent sites,
whereas the O atoms have ten nonequivalent sites. The magnetic V4+ ions form
chains of the VO6 octahedra with the alternation of V1 and V2, leading to two
alternating V–V distances of 3.599 and 3.629 Å along these structural chains
(Fig. 1).
Assuming that strongest magnetic interactions run along the structural chains,
one expects the magnetic behavior of alternating spin-$\frac{1}{2}$ chains
that was indeed proposed by Onoda and Ikeda Onoda and Ikeda (2013) who
reported, among others Yang _et al._ (2008); Chung _et al._ (2019), magnetic
susceptibility of $\varepsilon$-LiVOPO4. On the other hand, our recent results
for the monoclinic $A$VO$X$O4 compounds suggest that spin chains may not
coincide with the structural chains, because leading interactions occur
through the double bridges of the $X$O4 tetrahedra. In this case,
$\varepsilon$-LiVOPO4 should feature two types of alternating
spin-$\frac{1}{2}$ chains, one formed by V(1) and the other one formed by
V(2), each with different interactions and different spin gaps (Fig. 1).
Below, we report experimental fingerprints of these two nonequivalent spin
chains and thus directly confirm the proposed microscopic scenario. Moreover,
we detect a field-induced phase transition, which is reminiscent of triplon
BEC.
Table 1: Atomic distances and bond angles along the superexchange paths involving P(1) and P(2) in the two alternating spin chains of $\varepsilon$-LiVOPO4 at $300$ K, highlighting the coupling of the P atoms with the V atoms.

P(1) site:
- Bond lengths (Å): V(1)-O(1) = 1.96, O(1)-P(1) = 1.53, P(1)-O(2) = 1.54, O(2)-V(1) = 1.97; V(2)-O(7) = 1.98, O(7)-P(1) = 1.53, P(1)-O(8) = 1.53, O(8)-V(2) = 2.01
- Bond angles (deg.): $\angle$V(1)-O(1)-O(2) = 152.16, $\angle$V(1)-O(2)-O(1) = 109.37, $\angle$V(2)-O(8)-O(7) = 140.38, $\angle$V(2)-O(7)-O(8) = 124.64; average $\simeq$ 131.63

P(2) site:
- Bond lengths (Å): V(1)-O(3) = 1.96, O(3)-P(2) = 1.53, P(2)-O(4) = 1.55, O(4)-V(1) = 2.02; V(2)-O(9) = 1.98, O(9)-P(2) = 1.53, P(2)-O(10) = 1.52, O(10)-V(2) = 1.94
- Bond angles (deg.): $\angle$V(1)-O(4)-O(3) = 118.59, $\angle$V(1)-O(3)-O(4) = 149.06, $\angle$V(2)-O(9)-O(10) = 134.87, $\angle$V(2)-O(10)-O(9) = 132.66; average $\simeq$ 133.79
Figure 1: (a) Crystal structure of $\varepsilon$-LiVOPO4 projected onto the
$ac$-plane. The deep blue and light blue solid circles represent the V(1) and
V(2) sites, respectively. Chain 1 with the couplings $J_{1}$ and
$J_{1}^{\prime}$ is formed by V(1) and chain 2 is formed by V(2) atoms with
the couplings $J_{2}$ and $J_{2}^{\prime}$ via the extended V-O$\ldots$O-V
paths. These chains are nearly orthogonal to each other, whereas the
structural chains are parallel comprising both V1 and V2 atoms at the same
time. These chains run perpendicular to the $ac$-plane. (b) A segment of the
chain 1 formed by the V(1)O6 octahedra with the intrachain couplings $J_{1}$
and $J_{1}^{\prime}$. The distances $d_{1}$ and $d_{1}^{\prime}$ are the
lateral displacements of the VO6 octahedra in chain 1. (c) A section of the
structural chain with the V(1)-O-V(2) paths along the $b$-axis. (d) An
empirical/qualitative sketch of the spin model with all possible exchange
interactions involving chain 1 ($J_{1}$, $J_{1}^{\prime}$) and chain 2
($J_{2},J_{2}^{\prime}$).
## II Methods
A polycrystalline sample of $\varepsilon$-LiVOPO4 was prepared by the
conventional solid-state reaction method from stoichiometric mixtures of LiPO3
and VO2 (Aldrich, 99.995%). LiPO3 was obtained by heating LiH2PO4·H2O
(Aldrich, 99.995%) for 4 h at 400 ∘C in air. The reactants were ground
thoroughly, pelletized, and fired at 740 ∘C for two days in flowing argon
atmosphere with two intermediate grindings. Phase purity of the sample was
confirmed by powder x-ray diffraction (XRD) recorded at room temperature using
a PANalytical powder diffractometer (CuKα radiation, $\lambda_{\rm avg}\simeq
1.5418$ Å). Rietveld refinement of the acquired data was performed using
FULLPROF software package Carvajal (1993) taking the initial cell parameters
from Ref. [A. Mba _et al._ , 2012]. The low-temperature XRD data down to $15$
K were recorded using a low-temperature attachment (Oxford Phenix) to the
x-ray diffractometer.
Magnetization ($M$) was measured as a function of temperature (2 K $\leq
T\leq$ 380 K) using the vibrating sample magnetometer (VSM) attachment to the
Physical Property Measurement System [PPMS, Quantum Design]. A 3He attachment
to the SQUID [MPMS-7T, Quantum Design] magnetometer was used for magnetization
measurements in the low-temperature range ($0.5$ K $\leq T\leq$ $2$ K).
Specific heat ($C_{\rm p}$) as a function of temperature was measured down to
$0.35$ K using the thermal relaxation technique in PPMS under magnetic fields
up to $14$ T. For $T\leq 2$ K, measurements were performed using an additional
3He attachment to PPMS. High-field magnetization was measured in pulsed
magnetic field up to $60$ T at the Dresden High Magnetic Field Laboratory
Tsirlin _et al._ (2009); Skourski _et al._ (2011).
The NMR experiments on the 31P nucleus (nuclear spin $I=1/2$ and gyromagnetic
ratio $\gamma/2\pi=17.235$ MHz/T) were carried out using pulsed NMR technique
in the temperature range 1.6 K $\leq T\leq$ 230 K. The 31P NMR spectra as a
function of temperature were obtained either by sweeping the field at a fixed
frequency or by taking the Fourier transform (FT) of the echo signal, keeping
the magnetic field fixed. The NMR shift $K(T)$ = [$H_{\rm ref}-H(T)$]/$H(T)$
was determined by measuring the resonant field $H(T)$ of the sample with
respect to the standard H3PO4 sample (resonance field $H_{\rm ref}$). The
31P nuclear spin-lattice relaxation rate ($1/T_{1}$) was measured using the
inversion recovery technique at different temperatures. Details of the
positions of both P sites with respect to the magnetic V4+ centers are given
in Table 1.
Density-functional (DFT) band-structure calculations were performed in the
FPLO code Koepernik and Eschrig (1999) using experimental crystal structure
from Ref. [Lavrov _et al._ , 1982] and the Perdew-Burke-Ernzerhof (PBE)
flavor of the exchange-correlation potential Perdew _et al._ (1996). Exchange
couplings were obtained within superexchange theory or by a mapping procedure
Tsirlin (2014) using total energies of collinear spin configurations
calculated within DFT+$U$, where the on-site Coulomb repulsion $U_{d}=5$ eV
and Hund’s coupling $J_{d}=1$ eV Nath _et al._ (2008a); Tsirlin _et al._
(2008) were used to account for strong correlations in the V $3d$ shell. All
calculations are performed for the $4\times 4\times 4$ $k$-mesh. To resolve
all pertinent exchange couplings, the
$(\mathbf{a}_{t}+\mathbf{c}_{t})\times\mathbf{b}_{t}\times(\mathbf{a}_{t}-\mathbf{c}_{t})$,
$\mathbf{a}_{t}\times 2\mathbf{b}_{t}\times\mathbf{c}_{t}$, and
$(\mathbf{a}_{t}+\mathbf{b}_{t})\times 2\mathbf{b}_{t}\times\mathbf{c}_{t}$
supercells were used, where $\mathbf{a}_{t}$, $\mathbf{b}_{t}$, and
$\mathbf{c}_{t}$ are lattice vectors of the triclinic unit cell given in Ref.
[Lavrov _et al._ , 1982].
## III Results
### III.1 X-ray Diffraction
Figure 2: X-ray powder diffraction patterns of $\varepsilon$-LiVOPO4 at two
different temperatures ($T=300$ K and $15$ K). The solid lines denote the
Rietveld refinement fit of the data. The Bragg-peak positions are indicated by
green vertical bars and bottom blue line indicates the difference between the
experimental and calculated intensities.
Figure 3: Lattice parameters (a) $a$, (b) $b$, (c) $c$, (d) $\alpha$, (e)
$\beta$, and (f) $\gamma$ vs temperature. The unit cell volume ($V_{\rm
cell}$) is plotted as a function of temperature in right $y$-axis of (c) and
the solid line represents the fit using Eq. (1).
In order to confirm the phase purity and to study the temperature variation of
the crystal structure, the powder XRD patterns are analyzed for $15$ K $\leq$
$T$ $\leq$ $300$ K. The XRD patterns at two end temperatures, $T=300$ K and 15
K, along with the refinement are shown in Fig. 2. At room temperature, all the
peaks could be indexed based on the triclinic (space group: $P\bar{1}$)
structure, implying phase purity of the sample. The refined lattice parameters
[$a=6.729(1)$ Å, $b=7.194(1)$ Å, $c=7.920(2)$ Å, $\alpha=89.82(2)^{\circ}$,
$\beta=91.2288(2)^{\circ}$, and $\gamma=116.8799(2)^{\circ}$] at room
temperature are in good agreement with the previous report A. Mba _et al._
(2012). Identical XRD patterns with no extra peaks are observed in the whole
measured temperature range, which excludes any structural phase transition or
lattice deformation in $\varepsilon$-LiVOPO4, unlike other spin gap compounds
NaTiSi2O6 Isobe _et al._ (2002); Popović _et al._ (2004), CuGeO3 Hirota _et
al._ (1994), NaV2O5Fujii _et al._ (1997), and K-TCNQ Lépine _et al._ (1978).
The variation of both lattice parameters and unit cell volume ($V_{\rm cell}$)
as a function of temperature are shown in Fig. 3. The cell parameters $b$ and
$c$ decrease systematically, while the rest of the parameters ($a$, $\alpha$,
$\beta$, and $\gamma$) rise with decreasing temperature. However, the overall
unit cell volume shrinks upon cooling. The temperature variation of $V_{\rm
cell}$ was fitted by the equation Kittel (1986); Bag _et al._ (2018)
$V_{\rm cell}(T)=\gamma U(T)/K_{0}+V_{0},$ (1)
where $V_{0}$ is the unit cell volume at $T$ = $0$ K, $K_{0}$ is the bulk
modulus, and $\gamma$ is the Grüneisen parameter. The internal
energy $U(T)$ can be expressed in terms of the Debye approximation as
$U(T)=9pk_{\rm B}T\left(\frac{T}{\theta_{\rm
D}}\right)^{3}\int_{0}^{\theta_{\rm D}/T}\dfrac{x^{3}}{e^{x}-1}dx.$ (2)
In the above, $p$ stands for the total number of atoms in the unit cell and
$k_{\rm B}$ is the Boltzmann constant. The best fit of the data down to 15 K
[see Fig. 3(c)] was obtained with the Debye temperature $\theta_{D}\simeq 530$
K, $\frac{\gamma}{K_{0}}\simeq 5.78\times 10^{-5}$ Pa-1, and $V_{0}\simeq
340.5$ Å3.
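In practice, the fit of Eqs. (1) and (2) is a one-dimensional numerical integration inside a least-squares loop. Below is a minimal Python sketch (scipy); $p=32$ atoms per unit cell (four formula units) is an assumption here, and the arrays T_data and V_data are placeholders for the refined volumes:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import curve_fit

kB = 1.380649e-23  # J/K
P_ATOMS = 32       # atoms per unit cell, assuming Z = 4 formula units

def debye_energy(T, theta_D):
    """Internal energy U(T) per unit cell, Eq. (2)."""
    integral, _ = quad(lambda x: x**3 / np.expm1(x), 0.0, theta_D / T)
    return 9.0 * P_ATOMS * kB * T * (T / theta_D) ** 3 * integral

def v_cell(T, theta_D, gamma_over_K0, V0):
    """Eq. (1): V in A^3, gamma/K0 in Pa^-1 (J * Pa^-1 = m^3 = 1e30 A^3)."""
    U = np.array([debye_energy(t, theta_D) for t in np.atleast_1d(T)])
    return gamma_over_K0 * U * 1e30 + V0

# With (T_data, V_data) from the Rietveld refinements (15-300 K):
# popt, _ = curve_fit(v_cell, T_data, V_data, p0=(530.0, 5.8e-5, 340.5))
```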
### III.2 Magnetization
Figure 4: Upper panel: $\chi(T)$ measured in $\mu_{0}H=0.5$ T. The two
shoulders of the broad maximum are indicated by the vertical arrows. Inset:
low-$T$ $\chi(T)$ measured in different applied fields. Lower panel: $1/\chi$
vs $T$ and the solid line is the CW fit using Eq. (3).
The temperature-dependent bulk magnetic susceptibility $\chi$ ($\equiv M/H$)
of $\varepsilon$-LiVOPO4 measured in an applied field of $\mu_{0}H=0.5$ T is
shown in the upper panel of Fig. 4. It is consistent with earlier reports
Chung _et al._ (2019); Yang _et al._ (2008); Onoda and Ikeda (2013) and
exhibits a very broad maximum with two shoulders at around $T_{\chi}^{\rm
max1}\simeq 11$ K and $T_{\chi}^{\rm max2}\simeq 27$ K. Such a broad maximum
should not be mistaken for a magnetic transition Chung _et al._ (2019) and
mimics the short-range ordering of a low-dimensional quantum magnet. However,
already the fact that two shoulders are observed in the susceptibility
indicates the presence of two nonequivalent spin chains in
$\varepsilon$-LiVOPO4. Since the peak position is related to the intrachain
exchange coupling Johnston _et al._ (2000), we can estimate the relative
strengths of the interactions in the two chains,
$\bar{J}_{2}/\bar{J}_{1}\simeq 2.45$, where
$\bar{J}_{1}=(J_{1}+J_{1}^{\prime})/2$ and
$\bar{J}_{2}=(J_{2}+J_{2}^{\prime})/2$ are average couplings in the chain 1
and chain 2, respectively. The susceptibility alone does not give information
on which of the chains features stronger couplings, but our DFT calculations
(Sec. III.5) suggest that stronger interactions occur within chain 2.
Below $T_{\chi}^{\rm max1}$, $\chi$ decreases rapidly suggesting the opening
of a spin gap. However, below $2$ K, a large upturn is observed, which can be
attributed to the effect of extrinsic paramagnetic contributions. As shown in
the inset of the upper panel of Fig. 4, with the application of magnetic field
this low-temperature Curie tail gets suppressed. Moreover, our powder XRD
suggests high purity of the sample. Therefore, this low-temperature upturn in
$\chi(T)$ may be due to the uncorrelated V4+ free spins or chain-end effects
that largely affect $\chi(T)$ at low temperatures Eggert and Affleck (1995).
To extract the magnetic parameters, we first fitted the $1/\chi(T)$ data (see
the lower panel of Fig. 4) above 150 K by the modified Curie-Weiss (CW) law,
$\chi(T)=\chi_{0}+\frac{C}{T-\theta_{\rm CW}},$ (3)
where $\chi_{\rm 0}$ represents the temperature-independent susceptibility,
which includes the Van Vleck paramagnetic and core diamagnetic contributions,
$C$ is the Curie constant, and $\theta_{\rm CW}$ is the CW temperature. The
resulting fitting parameters are $\chi_{0}\simeq 7.76\times 10^{-5}$
cm3/mol-V4+, $C\simeq 0.383$ cm3K/mol-V4+, and $\theta_{\rm CW}\simeq-13.4$ K.
Negative value of $\theta_{\rm CW}$ indicates that the dominant interactions
between the V4+ spins are antiferromagnetic (AFM) in nature. The effective
moment was calculated by using the experimental value of $C$ in the relation
$\mu_{\rm eff}=\sqrt{3k_{\rm B}C/N_{\rm A}}$, where $N_{\rm A}$ is
Avogadro's number. The calculated value of $\mu_{\rm eff}\simeq 1.72\mu_{\rm
B}$/V4+ is very close to the theoretical spin-only value [$\mu_{\rm
eff}=g\sqrt{S(S+1)}\simeq 1.73\mu_{\rm B}$] for $S=1/2$, assuming the
Landé $g$ factor $g=2$. The experimental value of $\mu_{\rm eff}$
corresponds to a slightly reduced $g$ value of $g\simeq 1.98$ which is
identical to the case of other V4+-based compounds Mukharjee _et al._ (2019);
Tsirlin _et al._ (2011); Yogi _et al._ (2015). The core diamagnetic
susceptibility ($\chi_{\rm core}$) of $\varepsilon$-LiVOPO4 was estimated to
be $-6.8\times 10^{-5}$ cm3/mol by adding the $\chi_{\rm core}$ of the
individual ions Li1+, V4+, P5+, and O2- Selwood (2013). The Van Vleck
paramagnetic susceptibility ($\chi_{\rm VV}$) of $\sim 14.6\times 10^{-5}$
cm3/mol is obtained by subtracting $\chi_{\rm core}$ from $\chi_{\rm 0}$.
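The Curie-Weiss analysis is a three-parameter fit of Eq. (3) above 150 K followed by the standard CGS conversion $\mu_{\rm eff}=\sqrt{8C}\,\mu_{\rm B}$ (for $\chi$ in cm3/mol). A minimal sketch, with T_data and chi_data as placeholders for the measured susceptibility:

```python
import numpy as np
from scipy.optimize import curve_fit

def chi_cw(T, chi0, C, theta_cw):
    """Modified Curie-Weiss law, Eq. (3); chi in cm^3/mol-V, T in K."""
    return chi0 + C / (T - theta_cw)

# High-temperature fit (T > 150 K):
# mask = T_data > 150.0
# (chi0, C, theta_cw), _ = curve_fit(chi_cw, T_data[mask], chi_data[mask],
#                                    p0=(1e-4, 0.38, -10.0))

C = 0.383                     # cm^3 K / mol-V, from the fit above
mu_eff = np.sqrt(8.0 * C)     # ~1.7 mu_B per V4+, close to the spin-only value
```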
Figure 5: Magnetization vs field measured at $T=1.5$ K using pulsed magnetic
field. Inset: $dM$/$dH$ vs $H$ to highlight the critical fields $H_{\rm c1}$,
$H_{\rm c2}$, and $H_{\rm c3}$.
Two alternating spin chains with different exchange parameters should also
manifest themselves in the magnetization process. Indeed, the experimental
magnetization curve measured at the base temperature of $T=1.5$ K (Fig. 5)
shows a kink due to the gap closing at $\mu_{0}H_{\rm c1}\simeq 5.6$ T that
corresponds to the spin gap of $\Delta_{\rm 0}^{\rm M}/k_{\rm
B}=g\mu_{0}\mu_{\rm B}H_{\rm c1}/k_{\rm B}\simeq 7.3$ K. At $\mu_{0}H_{\rm
c2}\simeq 25$ T, half of the spins are saturated, with the
$\frac{1}{2}$-plateau observed up to $\mu_{0}H_{\rm c3}\simeq 35$ T. At even
higher fields, the remaining spins are gradually polarized, with nearly 80% of
the saturation magnetization reached at 58 T, the highest field of our
experiment.
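As a quick arithmetic check, the quoted gap follows directly from the critical field via $\Delta_{0}^{\rm M}/k_{\rm B}=g\mu_{0}\mu_{\rm B}H_{\rm c1}/k_{\rm B}$:

```python
g = 1.98               # from the Curie-Weiss analysis
muB_over_kB = 0.6717   # mu_B / k_B in K/T
H_c1 = 5.6             # first critical field in T
gap = g * muB_over_kB * H_c1   # ~7.4 K, in line with the ~7.3 K quoted above
```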
The two-step increase of the magnetization with the spin gap and intermediate
$\frac{1}{2}$-plateau is common for dimers formed by two spin-1 ions Samulon
_et al._ (2009). However, $\varepsilon$-LiVOPO4 contains spin-$\frac{1}{2}$ ions,
so this behavior should have a different origin. We infer that at
$\mu_{0}H_{\rm c1}$ the spin gap corresponding to chain 1 is closed. This
chain is fully polarized at $\mu_{0}H_{\rm c2}$, whereas at $\mu_{0}H_{\rm
c3}$ the gap corresponding to chain 2 is closed, and the magnetization
increases further. Such a scenario is confirmed by our quantitative analysis
in Sec. III.5.
### III.3 Specific heat
Figure 6: Upper panel: specific heat $C_{\rm p}$ vs $T$ of
$\varepsilon$-LiVOPO4 in zero applied field. The dashed line stands for the
phonon contribution to the specific heat ($C_{\rm ph}$) using Debye fit [Eq.
(4)]. The solid line is the magnetic contribution to the specific heat
($C_{\rm mag}$). Inset: $C_{\rm mag}$ vs $T$ in the low-temperature region.
The solid line is an exponential fit as described in the text. Lower panel:
$C_{\rm mag}/T$ and $S_{\rm mag}$ vs $T$ in the left and right $y$-axes,
respectively. Inset: $C_{\rm p}/T$ vs $T$ in different magnetic fields in the
low-temperature regime.
The specific-heat ($C_{\rm p}$) data measured under zero field are shown in
the upper panel of Fig. 6. No sharp anomaly/peak was noticed down to $T=0.35$
K, thus ruling out the possibility of any magnetic or structural transition. A
broad hump associated with low-dimensional short-range order is observed
around $T_{C}^{\rm max}\simeq 6$ K, which moves weakly toward low
temperatures with increasing magnetic field (see the inset of the lower panel
of Fig. 6), reflecting the closing of the spin gap. The gap was estimated by
fitting the data below 4 K with the activated behavior, $C_{\rm
mag}\propto\exp(-\Delta_{0}^{\rm C}/k_{\rm B}T)$. The fit, as illustrated in
the inset of the upper panel of Fig. 6, returns the zero-field spin gap of
$\Delta_{0}^{\rm C}/k_{\rm B}\simeq 7.3$ K that matches nicely the value
obtained from the high-field magnetization data.
Typically, in magnetic insulators, the high-temperature part of $C_{\rm p}$ is
dominated by the phonon contribution, whereas the magnetic contribution
becomes prominent at low temperatures. To estimate the phonon contribution
($C_{\rm ph}$), the experimental data at high temperatures (60 K $\leq T\leq$
110 K) were fitted by a linear combination of three Debye functions Nath _et
al._ (2008b)
$C_{\rm ph}(T)=9R\displaystyle\sum\limits_{\rm n=1}^{3}c_{\rm
n}\left(\frac{T}{\theta_{\rm Dn}}\right)^{3}\int_{0}^{\frac{\theta_{\rm
Dn}}{T}}\frac{x^{4}e^{x}}{(e^{x}-1)^{2}}dx.$ (4)
In the above, $R$ is the universal gas constant, the coefficients $c_{\rm n}$
stand for the groups of different atoms present in the crystal, and
$\theta_{\rm Dn}$ are the corresponding Debye temperatures. The $C_{\rm ph}$
was extrapolated down to low temperatures and subtracted from the total
specific heat to obtain the magnetic contribution to the specific heat
($C_{\rm mag}$). The obtained $C_{\rm mag}(T)$ is presented in the upper panel
of Fig. 6. The accuracy of the above fitting procedure was further verified by
calculating the magnetic entropy $S_{\rm mag}$ obtained by integrating $C_{\rm
mag}/T$ (see the lower panel of Fig. 6). The value of $S_{\rm mag}$ is
calculated to be $\sim 5.9$ J/mol K at $T\simeq 100$ K, which is close to
$S_{\rm mag}=R\ln 2=5.76$ J/mol K expected for spin $\frac{1}{2}$.
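A sketch of this phonon-subtraction procedure, with T_data, Cp_data, etc. as placeholders; the initial guesses for the coefficients $c_{n}$ and Debye temperatures $\theta_{\rm Dn}$ are illustrative only:

```python
import numpy as np
from scipy.integrate import quad, cumulative_trapezoid
from scipy.optimize import curve_fit

R = 8.314  # J / (mol K)

def debye_cv(T, theta_D):
    """Single Debye heat-capacity term; integrand written with exp(-x)
    so that it stays finite for large theta_D / T."""
    f = lambda x: x**4 * np.exp(-x) / (1.0 - np.exp(-x)) ** 2
    return 9.0 * R * (T / theta_D) ** 3 * quad(f, 0.0, theta_D / T)[0]

def c_ph(T, c1, c2, c3, th1, th2, th3):
    """Eq. (4): linear combination of three Debye functions."""
    return np.array([c1 * debye_cv(t, th1) + c2 * debye_cv(t, th2)
                     + c3 * debye_cv(t, th3) for t in np.atleast_1d(T)])

# Fit in 60-110 K, extrapolate down, subtract, and integrate C_mag/T:
# popt, _ = curve_fit(c_ph, T_fit, Cp_fit, p0=(2, 3, 3, 200, 500, 1000))
# C_mag = Cp_data - c_ph(T_data, *popt)
# S_mag = cumulative_trapezoid(C_mag / T_data, T_data, initial=0.0)  # -> R ln 2
```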
As shown in the upper panel of Fig. 6, $C_{\rm mag}$ develops two broad maxima
at $T_{C}^{\rm max1}\simeq 7$ K and $T_{C}^{\rm max2}\simeq 27$ K, similar to
the two shoulders in the $\chi(T)$ data. The clear separation of these maxima
indicates the different interaction strengths in chains 1 and 2.
Figure 7: $C_{\rm p}/T$ vs $T$ measured in different applied magnetic fields
up to 14 T.
With the spin gap closed around $\mu_{0}H_{\rm c1}\simeq 5.6$ T, the system
may enter a long-range order (LRO) state. This state is indeed observed in the
specific-heat data measured above $\mu_{0}H_{\rm c1}$. As shown in Fig. 7, no
anomaly in $C_{\rm p}$ is found down to $T=0.35$ K and up to $\mu_{0}H=5$ T.
However, for $\mu_{0}H>7$ T a clear anomaly appears, indicating the onset of a
field-induced magnetic LRO ($T_{\rm N}$). This peak shifts toward higher
temperature with increasing magnetic field.
### III.4 31P NMR
As mentioned previously, $\chi(T)$ does not show an exponential decrease at
low temperatures anticipated for a gapped spin system, and may be influenced
by extrinsic contributions. To access the intrinsic susceptibility of
$\varepsilon$-LiVOPO4, we performed NMR measurements on the 31P nuclei.
#### III.4.1 NMR Shift
Figure 8: Typical 31P field-sweep NMR spectra of $\varepsilon$-LiVOPO4
measured at 24.96 MHz at different temperatures. The dashed lines track the
shifting of NMR line positions. The vertical line corresponds to the 31P
reference line ($H_{\rm ref}$) measured for H3PO4. The solid line is the fit
of the spectra at $T=50$ K using a double Gaussian function.
Figure 8 presents the field-sweep 31P NMR spectra measured over a wide
temperature range. At high temperatures, a narrow and symmetric spectral line
typical for a $I=1/2$ nucleus is observed. As the temperature is lowered, the
line shape becomes asymmetric, followed by a complete splitting of the two
spectral lines below about 120 K. This suggests the existence of two
nonequivalent P sites, P(1) and P(2), with different crystallographic
environments, which is consistent with the structural data (Table 1). Both
lines shift with temperature and merge below about 4 K. The absence of drastic
line broadening and/or line splitting down to 1.6 K, except for the one caused
by the two nonequivalent P sites, rules out the occurrence of any structural
or magnetic transitions. The line with the lower intensity shifts more strongly
than the one with the higher intensity. The former can be assigned to P(2) and
the latter to P(1), because the P(2)O4 tetrahedra mediate stronger exchange
interactions, $J_{1}$ in chain 1 and $J_{2}$ in chain 2, whereas the P(1)O4
tetrahedra mediate weaker interactions $J_{1}^{\prime}$ and $J_{2}^{\prime}$,
respectively (see Sec. III.5). At $T=1.6$ K, the position of the peak is very
close to the zero shift value suggesting that the ground state of
$\varepsilon$-LiVOPO4 is nonmagnetic.
The temperature-dependent NMR shift $K(T)$ for both 31P sites was extracted by
fitting each spectrum to a sum of two Gaussian functions. The results are
shown in the upper panel of Fig. 9. One advantage of the NMR shift over the
bulk $\chi(T)$ is that the Curie-Weiss term due to foreign phases and/or
defects does not appear. The randomly distributed defects/impurities only
broaden the NMR line but do not contribute to the NMR shift Walstedt and
Walker (1974). Therefore, $K(T)$ is more favorable than bulk $\chi(T)$ data
for a reliable determination of magnetic parameters.
The $K(T)$ curves corresponding to the P(1) and P(2) sites pass through a very
broad maximum, similar to the $\chi(T)$ data. The overall temperature
dependence is similar for P(1) and P(2), but the absolute values differ due to
the different hyperfine couplings. The presence of several distinct P sites in
the structure is reminiscent of the ambient-pressure polymorph of (VO)2P2O7
with its two non-equivalent alternating spin-$\frac{1}{2}$ chains. In that
case, different phosphorous sites probe the behavior of different spin chains
in the structure Yamauchi _et al._ (1999); Kikuchi _et al._ (1999). In
contrast, each of the P sites in $\varepsilon$-LiVOPO4 is coupled to both spin
chains, so the two chains cannot be probed separately through $K(T)$. Two distinct
shoulders are observed in $K(T)$ at $T_{K}^{\rm max1}\simeq 10$ K and
$T_{K}^{\rm max2}\simeq 26$ K and closely resemble bulk magnetic
susceptibility (Fig. 4).
At low temperatures, both shifts decrease rapidly toward zero, suggesting the
opening of a spin gap between the singlet ($S=0$) ground state and triplet
($S=1$) excited states. As the NMR shift is insensitive to impurities and
defects, one can use it to accurately measure the intrinsic spin
susceptibility. In the upper panel of Fig. 9, we show that for both P sites,
$K(T)$ decreases toward zero, which is in contrast to the upturn observed in
the low-temperature $\chi(T)$ data. This confirms the extrinsic nature of the
low-temperature upturn observed in $\chi(T)$. In powder samples, the defects
often break spin chains, with the unpaired spins at the ends of finite chains
giving rise to the staggered magnetization, which also appears as the low-
temperature Curie tail in $\chi(T)$.
Figure 9: Upper panel: temperature-dependent 31P NMR shift $K(T)$ measured at
24.96 MHz as a function of temperature for both P(1) and P(2) sites. Lower
panel: $K$ vs $\chi$ (measured at 1.5 T) with temperature as an implicit
parameter. The solid lines are the linear fits.
The direct relation between $K(T)$ and spin susceptibility $\chi_{\rm spin}$
can be written as
$K(T)=K_{0}+\frac{A_{\rm hf}}{N_{A}\mu_{\rm B}}\chi_{\rm spin},$ (5)
where $K_{0}$ is the temperature-independent NMR shift and $A_{\rm hf}$ is the
total hyperfine coupling constant between the 31P nuclei and V4+ spins. The
$A_{\rm hf}$ consists of the contributions due to transferred hyperfine
coupling and nuclear dipolar coupling constants. Since both of the
aforementioned couplings are temperature-independent, $K(T)$ is a direct
measure of $\chi_{\rm spin}$. Using Eq. (5), $A_{\rm hf}$ can be calculated
from the slope of the linear $K$ vs $\chi$ plot with temperature as an
implicit parameter. The lower panel of Fig. 9 presents the $K$ vs $\chi$ plots
for both P sites showing linear behavior at high temperatures. From the linear
fit, the hyperfine coupling constants $A_{\rm hf}^{\rm P(1)}\simeq 3290$
Oe/$\mu_{\rm B}$ and $A_{\rm hf}^{\rm P(2)}\simeq 7068$ Oe/$\mu_{\rm B}$ are
estimated for the P(1) and P(2) sites, respectively. Thus, the P(2) site is
coupled to the V4+ ions about twice as strongly as the P(1) site. These values are
comparable to the $A_{\rm hf}$ values reported for other spin chains with
similar interaction geometry Mukharjee _et al._ (2019); Nath _et al._
(2008c, 2005).
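The couplings follow from a straight-line fit of $K$ vs $\chi$, whose slope is $A_{\rm hf}/(N_{A}\mu_{\rm B})$ by Eq. (5); in CGS units $N_{A}\mu_{\rm B}\simeq 5585$, so the slope (with $K$ as a fraction and $\chi$ in cm3/mol) converts directly to Oe/$\mu_{\rm B}$. A sketch, with chi_highT and K_highT as placeholders:

```python
import numpy as np

NA_muB = 6.022e23 * 9.274e-21   # ~5585 in CGS; converts the slope to Oe/mu_B

# K (as a fraction, not %) and chi (cm^3/mol) at the same high temperatures:
# slope, K0 = np.polyfit(chi_highT, K_highT, 1)
# A_hf = slope * NA_muB          # ~3.3e3 and ~7.1e3 Oe/mu_B for P(1) and P(2)
```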
Figure 10: Upper panel: $K$ vs $T$ for both P(1) and P(2) sites. The solid
lines are the fits below 5 K by $K=K_{0}+mT^{(d/2-1)}e^{-\Delta/k_{\rm B}T}$,
fixing $\Delta/k_{\rm B}\simeq 5.5$ K. Lower panel: $(K-K_{0})e^{5.5/T}$ vs
$T$ is shown in the low-temperature regime. The solid lines are the fits with
$d\simeq 2.83$ and 2.73 for the P(1) and P(2) sites, respectively.
It is also possible to estimate the value of the spin gap and the effective
lattice dimensionality ($d$) by analyzing the low-temperature $K(T)$ data.
Typically, the non-negligible interchain couplings are inevitably present in
real materials and significantly influence the $K(T)$ data at low
temperatures. For a $d$-dimensional system, the susceptibility at $k_{\rm
B}T\ll\Delta$ can be approximated as Taniguchi _et al._ (1995)
$\chi_{d}\propto T^{(d/2)-1}\times e^{-\Delta/k_{\rm B}T}.$ (6)
Our NMR experiments were carried out in the magnetic field of $\mu_{0}H=1.4$
T. Assuming a linear variation of $\Delta/k_{\rm B}$ with $H$, the zero-field
spin gap $\Delta_{\rm 0}/k_{\rm B}\simeq 7.3$ K determined from the specific
heat and magnetization is expected to be reduced to $\Delta_{\rm
1.4~{}T}/k_{\rm B}\simeq 5.5$ K at $\mu_{0}H=1.4$ T. In the upper panel of
Fig. 10, the $K(T)$ data below 5 K are fitted by
$K=K_{0}+mT^{(d/2-1)}e^{-\Delta/k_{\rm B}T}$, fixing $\Delta/k_{\rm B}\simeq
5.5$ K, where $m$ is a proportionality constant. In order to highlight the low-
temperature linear regime and the fit, in the lower panel of Fig. 10, we have
plotted $(K-K_{0})e^{5.5/T}$ vs $T$ in the log-log plot. The fit in this
region returns $K_{0}\simeq 0.04694$%, $m\simeq 0.4654$ %/K1/2, and $d\simeq
2.83$ for the P(1) site and $K_{0}\simeq 0.0462$%, $m\simeq 1.1663$ %/K1/2,
and $d\simeq 2.73$ for the P(2) site. The average value of $d\simeq 2.78$,
which is very close to 3, suggests the dominant role of 3D spin-spin
correlations at low temperatures. Indeed, we find rather strong interchain
couplings from DFT (Sec. III.5).
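A sketch of this fit, with the field-reduced gap held fixed and the dimensionality $d$ left free (T_low and K_low are placeholders for the data below 5 K):

```python
import numpy as np
from scipy.optimize import curve_fit

DELTA = 5.5  # K, gap at mu0 H = 1.4 T, fixed from C_p and M(H)

def K_gapped(T, K0, m, d):
    """K(T) = K0 + m T^(d/2-1) exp(-Delta / kB T), cf. Eq. (6)."""
    return K0 + m * T ** (d / 2.0 - 1.0) * np.exp(-DELTA / T)

# Fit below 5 K for each P site:
# popt, _ = curve_fit(K_gapped, T_low, K_low, p0=(0.05, 0.5, 3.0))
```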
#### III.4.2 Spin-lattice relaxation rate $1/T_{1}$
Figure 11: Upper panel: recovery of the longitudinal magnetization as a
function of waiting time $t$ at three different temperatures for the P(1)
site. Solid lines are the fits using Eq. (7). Lower panel: temperature
variation of $1/T_{1}$ measured in different magnetic fields. For
$\mu_{0}H=1.4$ T, measurements are done on both P(1) and P(2) sites, while for
other fields only the P(1) site is probed. The solid line represents the fit
using $1/T_{1}\propto T^{\alpha}$ to the 10 T data at low temperatures with
$\alpha\simeq-0.6$.
The spin-lattice relaxation rate $1/T_{1}$ measures the dynamic
susceptibility, which provides direct access to the low-energy spin
excitations or spin-spin correlation function Moriya (1956). $1/T_{1}$ was
measured at the central peak position of the spectra at each temperature using
an inversion pulse sequence down to $T=1.6$ K. Since 31P has the nuclear spin
$I=1/2$, the value of $T_{1}$ at each temperature was estimated by fitting the
recovery curve of the longitudinal magnetization to a single exponential
function
$\frac{1}{2}\left[\frac{M(0)-M(t)}{M(0)}\right]=Ae^{-(t/T_{1})}.$ (7)
Here, $M(t)$ is the nuclear magnetization at a time $t$ after the inversion
pulse and $M(0)$ is the equilibrium magnetization. The upper panel of Fig. 11
shows the recovery curves at three different temperatures probed for the P(1)
site at $\mu_{0}H=1.4$ T. The recovery curves show linearity over one and a half
decades when the $y$ axis is plotted on a log scale. $1/T_{1}$ was estimated by
fitting Eq. (7) in this linear regime.
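A sketch of the recovery analysis, with t_delay, M0, and Mt as placeholders for the measured magnetization:

```python
import numpy as np
from scipy.optimize import curve_fit

def recovery(t, A, T1):
    """0.5 [M(0) - M(t)] / M(0) = A exp(-t / T1), Eq. (7)."""
    return A * np.exp(-t / T1)

# y = 0.5 * (M0 - Mt) / M0, restricted to the initial linear regime:
# (A, T1), _ = curve_fit(recovery, t_delay, y, p0=(1.0, 1e-2))
# rate = 1.0 / T1   # spin-lattice relaxation rate
```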
$1/T_{1}$ estimated from the above fit is shown in the lower panel of Fig. 11.
Our measurements are done at different field values ranging from 1.4 T to 10
T, above and below $H_{\rm c1}$. For $\mu_{0}H=1.4$ T, the measurements are
done at both P(1) and P(2) sites and over the whole temperature range, while
for other fields only the P(1) site is probed and the measurements are
restricted to low temperatures ($T<30$ K). Since there is a large difference
in the magnitude of $A_{\rm hf}$ for the P(1) and P(2) sites, they experience
different local magnetic fields induced by the V4+ spins. Therefore, it is
expected that the resulting temperature-dependent $1/T_{1}$ will have
different values accordingly. Indeed, for $\mu_{0}H=1.4$ T, $1/T_{1}$ of the
P(2) site has larger magnitude than that of the P(1) site, as $A_{\rm hf}$ of
P(2) is larger than that of P(1). For both P sites, $1/T_{1}$ is temperature
independent at high temperatures due to the random fluctuations of the
paramagnetic moments Moriya (1956). At lower
temperatures, $1/T_{1}$ starts to decrease and below about 10 K it drops
rapidly towards zero. The 1.6 K value is almost two orders of magnitude lower
than the room-temperature one, indicating the opening of a spin gap. In higher
fields, the low-temperature values of $1/T_{1}$ increase and show an upward
curvature.
The spin gap should be closed at $H_{\rm c1}$ and thereafter an AFM LRO sets
in. Therefore, we measured $1/T_{1}$ at different fields above $H_{\rm c1}$.
The increase in the low-temperature values of $1/T_{1}$ confirms the closing
of the spin gap and the growth of 3D AFM correlations due to the field-induced
LRO Klanjšek _et al._ (2008). Since our measurements are limited down to 1.6
K, we are unable to detect the field-induced LRO from the $1/T_{1}$ data.
Nevertheless, the systematic increase of $1/T_{1}$ with field implies that
$T_{\rm N}$ shifts toward higher temperatures with increasing $H$, in good
agreement with our $C_{\rm p}(T)$ data, where field-induced LRO is detected
above $H_{\rm c1}$.
The data sets for the P(1) site at various field values exhibit a fanlike
pattern in the low-temperature regime. Similar behavior has been reported for
spin-$1/2$ ladder compound (C5H12N)2CuBr4 (BPCB) and spin-1 chain compound
NiCl2-4SC(NH2)2 (DTN) Mukhopadhyay _et al._ (2012). It is expected that
$1/T_{1}(T)$ data for different fields, around the quantum critical point
(QCP) (i.e., around $H_{\rm c1}$) should follow a power-law behavior,
$1/T_{1}\propto T^{\alpha}$. The exponent $\alpha$ should vary across $H_{\rm
c1}$, resulting in a fanlike pattern of the $1/T_{1}$ data. For instance, in
the gapless TLL region ($H>H_{\rm c1}$), the power law has the form
$1/T_{1}\propto T^{1/(2K)-1}$, with $K$ being the TLL exponent Klanjšek _et
al._ (2008); Giamarchi and Tsvelik (1999). The value of $K$ increases
gradually as one approaches the QCP from the TLL regime and reaches the maximum
value of 1, which corresponds to $\alpha[=1/(2K)-1]=-0.5$. In order to test this
formalism, we fitted the $1/T_{1}$ data below 5 K measured at 10 T by the
power law. As shown in the lower panel of Fig. 11, our fit yields
$\alpha\simeq-0.6$, which is very close to the predicted value ($\alpha=-0.5$)
in the TLL regime. This indicates the pronounced 1D character of the compound.
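The fitted exponent translates into a TLL parameter via $\alpha=1/(2K)-1$; a two-line check (T_low and invT1_low are placeholders for the 10 T data):

```python
import numpy as np

# Power law below 5 K at 10 T: a log-log fit gives the exponent directly
# alpha, log_prefactor = np.polyfit(np.log(T_low), np.log(invT1_low), 1)

alpha = -0.6
K_TLL = 1.0 / (2.0 * (alpha + 1.0))   # = 1.25, i.e. alpha is close to the
                                      # K = 1 limit (alpha = -0.5) at the QCP
```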
Figure 12: $1/T_{1}e^{5.5/T}$ vs $T$ in the log-log plot. The solid lines are
the fits, as described in the text with $d\simeq 2.74$ and 3.2, for the P(1)
and P(2) sites, respectively.
On the other hand, from analyzing the $K(T)$ data in magnetic fields below
$H_{c1}$ we already inferred that 3D spin-spin correlations caused by the
interchain coupling may be relevant. According to Mukhopadhyay _et al._
(2012), in the gapped region ($H\leq H_{c1}$) with 3D
correlations $1/T_{1}$ follows an activated behavior of the form,
$1/T_{1}\propto T^{\alpha_{\rm 0}}\exp\left[\frac{g\mu_{B}(H-H_{\rm
c1})}{k_{\rm B}T}\right].$ (8)
The exponent $\alpha_{0}$ in the above equation depends on the effective
dimension of the magnon dispersion relation as set by the thermal fluctuations
$k_{\rm B}T$. With increasing temperature, $\alpha_{0}$ slowly decreases from
2 for $k_{\rm B}T<J_{\rm 3D}$ (3D regime) to 0 for $J_{\rm 3D}<k_{\rm B}T<J_{\rm 1D}$
(1D regime). In order to detect the effective dimensionality of the spin-spin
correlations, Eq. (8) was further simplified as $1/T_{1}\propto
T^{d-1}e^{-\Delta_{\rm 1.4\,T}/k_{\rm B}T}$ by putting $\alpha_{\rm 0}=d-1$
and $\Delta_{\rm 1.4~{}T}/k_{\rm B}=g\mu_{\rm B}(H_{\rm c1}-H)/k_{\rm B}$ and
fitted to the low-temperature $1/T_{1}$ data. The fit for $T\leq 5$ K, taking
$\Delta_{\rm 1.4\,T}/k_{\rm B}\simeq 5.5$ K results in $d\simeq 2.74$ and 3.2
for the P(1) and P(2) sites, respectively. The average value of $d\simeq 2.97$
is consistent with the value obtained from the $K(T)$ analysis. This further
confirms the importance of interchain couplings at low temperatures, where
activated behavior is observed. Figure 12 presents the $1/T_{1}e^{5.5/T}$ vs
$T$ plot for the data measured at $\mu_{0}H=1.4$ T along with the fit using
Eq. (8). Both the $x$ and $y$ axes are shown on a log scale to highlight the
power-law prefactor to the activated behavior at low temperatures.
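The fit parallels the one used for the NMR shift, with the power-law prefactor exponent $\alpha_{0}=d-1$ and the gap held fixed; a sketch with T_low and invT1_low as placeholders:

```python
import numpy as np
from scipy.optimize import curve_fit

DELTA = 5.5  # K, gap at mu0 H = 1.4 T

def invT1_gapped(T, a, d):
    """1/T1 = a T^(d-1) exp(-Delta / kB T), the simplified form of Eq. (8)."""
    return a * T ** (d - 1.0) * np.exp(-DELTA / T)

# Fit for T <= 5 K at each P site (yields d ~ 2.7-3.2 in the text):
# (a, d), _ = curve_fit(invT1_gapped, T_low, invT1_low, p0=(1.0, 3.0))
```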
### III.5 Microscopic magnetic model
Similar to Refs. Tsirlin _et al._ (2011); Arjun _et al._ (2019); Mukharjee
_et al._ (2019), we use two complementary computational methods to derive
exchange couplings in $\varepsilon$-LiVOPO4. For a single magnetic orbital of
V4+, superexchange theory yields antiferromagnetic exchange couplings
$J_{i}^{\rm AFM}=4t_{i}^{2}/U_{\rm eff}$, where $t_{i}$ are V–V hoppings
extracted from the uncorrelated (PBE) band structure, and $U_{\rm eff}$ is an
effective Coulomb repulsion in the V $3d$ bands. On the other hand, exchange
couplings $J_{i}$ can be obtained from DFT+$U$ by a mapping procedure, where
both ferromagnetic and antiferromagnetic contributions are taken into account.
Table 2: Exchange couplings in $\varepsilon$-LiVOPO4. The $t_{i}$ values are the V–V hoppings extracted from the uncorrelated band structure and show the relative strengths of the AFM contributions to the exchange couplings, $J_{i}^{\rm AFM}\sim t_{i}^{2}$. The $J_{i}$ are the exchange interactions obtained by the mapping procedure within DFT+$U$.

 | $d_{\rm V-V}$ (Å) | bond | $t_{i}$ (meV) | $J_{i}$ (K)
---|---|---|---|---
$J_{1}$ | 5.250 | V1–V1 | $-72$ | 33
$J_{1}^{\prime}$ | 5.101 | V1–V1 | $-55$ | 23
$J_{2}$ | 5.275 | V2–V2 | $-117$ | 63
$J_{2}^{\prime}$ | 5.303 | V2–V2 | $-78$ | 22
$J_{c1}$ | 3.599 | V1–V2 | 0 | $-15$
$J_{c2}$ | 3.629 | V1–V2 | 0 | $-15$
$J_{a1}$ | 6.018 | V1–V2 | $-21$ | 12
$J_{a2}$ | 6.070 | V1–V2 | $-32$ | 7
In Table 2, we list the $t_{i}$ values for the uncorrelated band structure and
the exchange couplings $J_{i}$ obtained from DFT+$U$. The two methods are in
excellent agreement and consistently show the stronger couplings within chain
2. Moreover, we find that within each spin chain the stronger couplings
involve the P(2) bridges and the weaker couplings involve the P(1) bridges. On
the structural level, this difference should be traced back to the lateral
displacements $d_{i}$ of the VO6 octahedra within the spin chain [Fig. 1(b)],
where smaller displacement leads to a stronger coupling Mukharjee _et al._
(2019); Roca _et al._ (1998). Indeed, we find $d_{1}=0.71$ Å for $J_{1}$ vs
$d_{1}^{\prime}=0.97$ Å for $J_{1}^{\prime}$ and $d_{2}=0.08$ Å for $J_{2}$ vs
$d_{2}^{\prime}=0.23$ Å for $J_{2}^{\prime}$. The smaller lateral
displacements $d_{2}$ and $d_{2}^{\prime}$ could also explain the overall
stronger couplings in chain 2, although in this case other geometrical
parameters Roca _et al._ (1998) are relevant as well, because $J_{1}$ is
about as strong as $J_{2}^{\prime}$, despite the fact that
$d_{1}>d_{2}^{\prime}$.
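To make the superexchange estimate concrete, one can evaluate $J_{i}^{\rm AFM}=4t_{i}^{2}/U_{\rm eff}$ from the hoppings in Table 2. The $U_{\rm eff}$ below is an assumed value for illustration only (a few eV is typical for the V $3d$ bands), and the results overestimate the $J_{i}$ of Table 2 because ferromagnetic contributions are neglected:

```python
kB_eV = 8.617e-5   # Boltzmann constant in eV/K
U_eff = 4.0        # eV, assumed effective repulsion (illustrative only)

hoppings_meV = {"J1": -72, "J1'": -55, "J2": -117, "J2'": -78}  # Table 2
for name, t in hoppings_meV.items():
    J_afm = 4.0 * (t * 1e-3) ** 2 / U_eff / kB_eV   # in K
    print(f"{name}: J_AFM ~ {J_afm:.0f} K")   # AFM part only; cf. Table 2
```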
Regarding the interchain couplings, the microscopic scenario is very similar
to that of monoclinic $A$VO$X$O4 with $A$ = Ag, Na and $X$ = P, As Tsirlin
_et al._ (2011); Arjun _et al._ (2019); Mukharjee _et al._ (2019). Shorter
V–O–V bridges render $J_{c1}$ and $J_{c2}$ ferromagnetic, whereas the long-
range couplings $J_{a1}$ and $J_{a2}$ are weakly antiferromagnetic. These
ferromagnetic and antiferromagnetic interactions compete and make the spin
lattice of $\varepsilon$-LiVOPO4 frustrated.
Table 3: Exchange couplings (in K) extracted from the $\chi(T)$ and $M(H)$ data using the fits shown in Fig. 13. The susceptibility fit using Eq. (9) returns $\chi_{0}=2.7\times 10^{-5}$ cm3/mol, $C_{\rm imp}=0.013$ cm3K/mol (3.5% of paramagnetic impurities), $\theta_{\rm imp}=0.9$ K, and $g=2.03$. This $g$ value is slightly higher than the 1.98 obtained from the Curie-Weiss fit, probably because the interchain couplings were not taken into account. For the magnetization data, $g=1.98$ was used as a fixed parameter.

 | $J_{1}$ | $J_{1}^{\prime}$ | $J_{2}$ | $J_{2}^{\prime}$
---|---|---|---|---
$\chi(T)$ | 20 | 12 | 70 | 20
$M(H)$ | 19 | 12 | 63 | 22
Our DFT results suggest that chain 1 shows only a moderate degree of
alternation ($\alpha_{1}=J_{1}^{\prime}/J_{1}\simeq 0.6$) that, together with
the lower energy scale of the couplings, leads to a relatively small spin gap
closed at $H_{\rm c1}$. In contrast, the alternation ratio of
$\alpha_{2}=J_{2}^{\prime}/J_{2}\simeq 0.3$ renders chain 2 strongly dimerized
with a larger spin gap that is closed at the much higher field $H_{\rm c3}$.
The model of two alternating spin-$\frac{1}{2}$ chains was further used to
calculate temperature-dependent magnetic susceptibility and field-dependent
magnetization of $\varepsilon$-LiVOPO4 (Fig. 13). For the susceptibility, we
used the analytical expression from Ref. Johnston _et al._ (2000) augmented
by additional terms that account for the temperature-independent ($\chi_{0}$)
and Curie-type impurity contributions,
$\chi=\chi_{0}+\frac{C_{\rm imp}}{T+\theta_{\rm imp}}+\chi_{\rm ch1}+\chi_{\rm
ch2},$ (9)
where $\chi_{\rm ch1}$ and $\chi_{\rm ch2}$ are susceptibilities of two
nonequivalent alternating spin-$\frac{1}{2}$ chains with the interaction
parameters $J_{1}$, $J_{1}^{\prime}$ and $J_{2}$, $J_{2}^{\prime}$,
respectively (the same $g$ factor is used for both chains). The magnetization
curve of $\varepsilon$-LiVOPO4 was modeled as the sum of simulated magnetization curves
for the V(1) and V(2) sublattices obtained by the method described in Ref.
Tsirlin _et al._ (2011).
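Schematically, the fit combines Eq. (9) with the alternating-chain expression of Johnston _et al._ (2000). That expression is lengthy, so the sketch below substitutes the dimer ($\alpha\to 0$, Bleaney-Bowers) limit as a crude stand-in, only to show the structure of the model:

```python
import numpy as np

NA_MUB2_KB = 0.3749  # NA muB^2 / kB in cm^3 K / mol (CGS)

def chi_chain(T, J, g):
    """Dimer (Bleaney-Bowers) susceptibility per mole of V: the alpha -> 0
    limit of an alternating chain, standing in for the full expression of
    Johnston et al. (2000) used for the actual fit."""
    return NA_MUB2_KB * g**2 / (T * (3.0 + np.exp(J / T)))

def chi_total(T, J1, J2, chi0, C_imp, theta_imp, g=2.03):
    """Structure of Eq. (9): two chains plus constant and impurity terms."""
    return (chi0 + C_imp / (T + theta_imp)
            + chi_chain(T, J1, g) + chi_chain(T, J2, g))
```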
Figure 13: Temperature-dependent susceptibility (top) and field-dependent
magnetization (bottom) of $\varepsilon$-LiVOPO4 fitted using the model of two
non-equivalent alternating spin-$\frac{1}{2}$ chains, as explained in the
text. See Table 3 for the fitting parameters.
The fitting results listed in Table 3 show good agreement between the fits to
the susceptibility and magnetization. They are also consistent with the
exchange parameters calculated by DFT (Table 2). We chose not to include
interchain couplings in the susceptibility fit, because the fit becomes
ambiguous when more than four exchange parameters are involved. The effect of the interchain
couplings can be seen from the fact that for an isolated chain 1 one finds,
using $J_{1}$ and $J_{1}^{\prime}$ from the susceptibility fit, the zero-field
spin gap of 11.3 K, which is somewhat larger than 7.3 K obtained
experimentally [Note (2)].
## IV Discussion and summary
$\varepsilon$-LiVOPO4 features two types of alternating spin-$\frac{1}{2}$
chains that manifest themselves in the double maxima of the susceptibility and
magnetic specific heat and in the two-step magnetization process. This unusual
microscopic scenario is reminiscent of the ambient-pressure polymorph of
(VO)2P2O7 Johnston _et al._ (2001), where two spin gaps corresponding to two
types of spin chains were directly observed by NMR Yamauchi _et al._ (1999)
and inelastic neutron scattering Garrett _et al._ (1997). On the other hand,
the large size of these gaps (35 K and 68 K, respectively) and the high critical
fields associated with them preclude experimental access to field-induced
transitions, where triplon excitations of the spin chains consecutively
condense leading to a long-range magnetic order.
Figure 14: $H-T$ phase diagram of $\varepsilon$-LiVOPO4 obtained using the
data points from $C_{\rm p}(T)$ measurements. The solid line corresponds to
Eq. (10). Inset: $T_{\rm N}$ vs ($H-H_{\rm c1})^{1/1.5}$ to highlight the low-
temperature linear regime and the agreement of the simulated curve with the
experimental data points.
With its lower critical fields, $\varepsilon$-LiVOPO4 offers a much better
platform for studying these transitions experimentally. Indeed, we observed
field-induced magnetic order already above $\mu_{0}H_{\rm c1}\simeq 5.6$ T.
The transition temperature systematically increases with field and tracks the
$H$-$T$ phase boundary shown in Fig. 14.
The field-induced transition in gapped quantum magnets is often understood as
triplon BEC. In this case, the phase boundary close to $H_{\rm c1}$ should
follow the universal power law Zapf _et al._ (2014); Giamarchi and Tsvelik
(1999); Nohadani _et al._ (2004),
$T_{\rm N}\propto(H-H_{\rm c1})^{\frac{1}{\phi}},$ (10)
where $\phi=d/2$ is the critical exponent reflecting the universality class of
the quantum phase transition at $H_{\rm c1}$, and $d$ is the lattice
dimensionality. In the absence of low-temperature data immediately above
$H_{\rm c1}$, we simulate the anticipated critical behavior by choosing $d=3$,
$\phi=\frac{3}{2}$, and $\mu_{0}H_{\rm c1}\simeq 5.6$ T in Eq. (10). The
resulting curves match our data up to 1.2 K (Fig. 14), although this agreement
may be partly accidental, because the temperature of 1.2 K is rather high
compared to the temperature scale of the BEC transition given by
$T_{N}^{\max}$ at the tip of the BEC dome. Since 14 T is roughly the midpoint
between $H_{\rm c1}$ and $H_{\rm c2}$, we expect $T_{N}^{\max}\simeq 1.7$ K,
and 1.2 K is more than half of this temperature. Further measurements of the
phase boundary around $H_{\rm c1}$ and below 0.7 K would be clearly
interesting to confirm the tentative BEC critical behavior in
$\varepsilon$-LiVOPO4. Impurity effects witnessed by the low-temperature
upturn in the magnetic susceptibility (Fig. 4) may also be important.
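The simulated boundary of Fig. 14 is just the power law of Eq. (10) with $\phi=3/2$. A sketch, anchoring the prefactor (as an illustrative assumption) to the expected $T_{N}^{\max}\simeq 1.7$ K near 14 T:

```python
import numpy as np

H_c1, phi = 5.6, 1.5                     # T; phi = d/2 with d = 3
a = 1.7 / (14.0 - H_c1) ** (1.0 / phi)   # prefactor (illustrative anchoring)
H = np.linspace(H_c1, 14.0, 200)
T_N = a * (H - H_c1) ** (1.0 / phi)      # Eq. (10); linear in (H - H_c1)^(1/1.5)
```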
The fate of the ordered phase in fields above 14 T is of significant interest
too. The two-step increase in the magnetization suggests that the transition
at $H_{\rm c1}$ corresponds to chain 1 and will lead to a BEC dome between
$H_{\rm c1}$ and $H_{\rm c2}$, while chain 2 remains in the singlet state up
to $H_{\rm c3}$, where another BEC dome should appear. Phenomenologically,
this behavior would be similar to the two-dome $H$-$T$ phase diagrams of
spin-1 dimer magnets Samulon _et al._ (2009), although in
$\varepsilon$-LiVOPO4 with local spin $\frac{1}{2}$ it should have a very
different microscopic origin. It does in fact arise from the coexistence of
the nonequivalent magnetic sublattices. We also point out that the
$\frac{1}{2}$-magnetization plateau in this compound is not caused by Wigner
crystallization and is thus dissimilar to the magnetization plateaus in
SrCu2(BO3)2. Because $H_{\rm c3}$ lies above $H_{\rm c2}$, magnetic
orders related to chain 1 and chain 2 should be mostly decoupled. This is
different from other systems with multiple spin gaps, where intertwined BEC
transitions lead to an unusual critical behavior, as in the dimer magnet
BaCuSi2O6 Mazurenko _et al._ (2014); Allenspach _et al._ (2020).
In summary, we have shown that $\varepsilon$-LiVOPO4 is a gapped quantum
magnet that features a singlet ground state in zero field. With two
nonequivalent alternating spin-$\frac{1}{2}$ chains, it shows double maxima in
the susceptibility and magnetic specific heat and a two-step increase in the
magnetization. Chain 1 features weaker couplings and a weaker alternation
($J_{1}\simeq 20$ K, $\alpha_{1}\simeq 0.6$), whereas chain 2 reveals stronger
couplings and lies closer to the dimer limit ($J_{2}\simeq 60$ K,
$\alpha_{2}\simeq 0.3$). The zero-field spin gap of $\Delta_{0}/k_{B}\simeq
7.3$ K is closed at $\mu_{0}H_{\rm c1}\simeq 5.6$ T. The magnetization
increases up to $\mu_{0}H_{\rm c2}\simeq 25$ T, flattens out within the
$\frac{1}{2}$ plateau, and increases again above $\mu_{0}H_{\rm c3}\simeq 35$
T. The gap closing above $H_{\rm c1}$ leads to a field-induced LRO that can be
understood as Bose-Einstein condensation of triplons.
###### Acknowledgements.
P.K.M. and R.N. acknowledge BRNS, India for financial support bearing sanction
No.37(3)/14/26/2017-BRNS. We also thank C. Klausnitzer (MPI-CPfS) for the
technical support. A.A.T. was funded by the Federal Ministry for Education and
Research through the Sofja Kovalevskaya Award of Alexander von Humboldt
Foundation. We also acknowledge the support of the HLD at HZDR, member of
European Magnetic Field Laboratory (EMFL).
## References
* Matsubara and Matsuda (1956) T. Matsubara and H. Matsuda, “A Lattice Model of Liquid Helium, I,” Prog. Theor. Phys. 16, 569 (1956).
* Giamarchi _et al._ (2008) T. Giamarchi, Ch. Rüegg, and O. Tchernyshyov, “Bose-Einstein condensation in magnetic insulators,” Nat. Phys. 4, 198 (2008).
* Zapf _et al._ (2014) V. Zapf, M. Jaime, and C. D. Batista, “Bose-Einstein condensation in quantum magnets,” Rev. Mod. Phys. 86, 563 (2014).
* Mukhopadhyay _et al._ (2012) S. Mukhopadhyay, M. Klanjšek, M. S. Grbić, R. Blinder, H. Mayaffre, C. Berthier, M. Horvatić, M. A. Continentino, A. Paduan-Filho, B. Chiari, and O. Piovesana, “Quantum-critical spin dynamics in quasi-one-dimensional antiferromagnets,” Phys. Rev. Lett. 109, 177206 (2012).
* Sebastian _et al._ (2006) S. E. Sebastian, N. Harrison, C. D. Batista, L. Balicas, M. Jaime, P. A. Sharma, N. Kawashima, and I. R. Fisher, “Dimensional reduction at a quantum critical point,” Nature 441, 617 (2006).
* Nikuni _et al._ (2000) T. Nikuni, M. Oshikawa, A. Oosawa, and H. Tanaka, “Bose-Einstein condensation of dilute magnons in TlCuCl3,” Phys. Rev. Lett. 84, 5868 (2000).
* Klanjšek _et al._ (2008) M. Klanjšek, H. Mayaffre, C. Berthier, M. Horvatić, B. Chiari, O. Piovesana, P. Bouillot, C. Kollath, E. Orignac, R. Citro, and T. Giamarchi, “Controlling luttinger liquid physics in spin ladders under a magnetic field,” Phys. Rev. Lett. 101, 137207 (2008).
* Matsushita _et al._ (2017) T. Matsushita, N. Hori, S. Takata, N. Wada, N. Amaya, and Y. Hosokoshi, “Direct three-dimensional ordering of quasi-one-dimensional quantum dimer system near critical fields,” Phys. Rev. B 95, 020408(R) (2017).
* Willenberg _et al._ (2015) B. Willenberg, H. Ryll, K. Kiefer, D. A. Tennant, F. Groitl, K. Rolfs, P. Manuel, D. Khalyavin, K. C. Rule, A. U. B. Wolter, and S. Süllow, “Luttinger liquid behavior in the alternating spin-chain system copper nitrate,” Phys. Rev. B 91, 060407(R) (2015).
* Thielemann _et al._ (2009) B. Thielemann, Ch. Rüegg, K. Kiefer, H. M. Rønnow, B. Normand, P. Bouillot, C. Kollath, E. Orignac, R. Citro, T. Giamarchi, A. M. Läuchli, D. Biner, K. W. Krämer, F. Wolff-Fabris, V. S. Zapf, M. Jaime, J. Stahn, N. B. Christensen, B. Grenier, D. F. McMorrow, and J. Mesot, “Field-controlled magnetic order in the quantum spin-ladder system (Hpip)2CuBr4,” Phys. Rev. B 79, 020408(R) (2009).
* Rice (2002) T. M. Rice, “To condense or not to condense,” Science 298, 760 (2002).
* Narumi _et al._ (1998) Y. Narumi, M. Hagiwara, R. Sato, K. Kindo, H. Nakano, and M. Takahashi, “High field magnetization in a $S=1$ antiferromagnetic chain with bond alternation,” Physica B: Cond. Mat. 246, 509 (1998).
* Shiramura _et al._ (1998) B. Shiramura, K. Takatsu, B. Kurniawan, H. Tanaka, H. Uekusa, Y. Ohashi, K. Takizawa, H. Mitamura, and T. Goto, “Magnetization plateaus in NH4CuCl3,” J. Phys. Soc. Jpn. 67, 1548 (1998).
* Kodama _et al._ (2002) K. Kodama, M. Takigawa, M. Horvatić, C. Berthier, H. Kageyama, Y. Ueda, S. Miyahara, F. Becca, and F. Mila, “Magnetic superstructure in the two-dimensional quantum antiferromagnet SrCu2(BO3)2,” Science 298, 395 (2002).
* Kageyama _et al._ (1999) H. Kageyama, K. Yoshimura, R. Stern, N. V. Mushnikov, K. Onizuka, M. Kato, K. Kosuge, C. P. Slichter, T. Goto, and Y. Ueda, “Exact dimer ground state and quantized magnetization plateaus in the two-dimensional spin system SrCu2(BO3)2,” Phys. Rev. Lett. 82, 3168 (1999).
* Rüegg _et al._ (2003) C. Rüegg, N. Cavadini, A. Furrer, H. U. Güdel, K. Krämer, H. Mutka, A. Wildes, K. Habicht, and P. Vorderwisch, “Bose-Einstein condensation of the triplet states in the magnetic insulator TlCuCl3,” Nature 423, 62 (2003).
* Jaime _et al._ (2004) M. Jaime, V. F. Correa, N. Harrison, C. D. Batista, N. Kawashima, Y. Kazuma, G. A. Jorge, R. Stern, I. Heinmaa, S. A. Zvyagin, Y. Sasago, and K. Uchinokura, “Magnetic-field-induced condensation of triplons in han purple pigment BaCuSi2O6,” Phys. Rev. Lett. 93, 087203 (2004).
* Aczel _et al._ (2009a) A. A. Aczel, Y. Kohama, C. Marcenat, F. Weickert, M. Jaime, O. E. Ayala-Valenzuela, R. D. McDonald, S. D. Selesnic, H. A. Dabkowska, and G. M. Luke, “Field-induced Bose-Einstein condensation of triplons up to 8 K in Sr3Cr2O8,” Phys. Rev. Lett. 103, 207203 (2009a).
* Aczel _et al._ (2009b) A. A. Aczel, Y. Kohama, M. Jaime, K. Ninios, H. B. Chan, L. Balicas, H. A. Dabkowska, and G. M. Luke, “Bose-Einstein condensation of triplons in Ba3Cr2O8,” Phys. Rev. B 79, 100409(R) (2009b).
* Jeong _et al._ (2013) M. Jeong, H. Mayaffre, C. Berthier, D. Schmidiger, A. Zheludev, and M. Horvatić, “Attractive Tomonaga-Luttinger liquid in a quantum spin ladder,” Phys. Rev. Lett. 111, 106404 (2013).
* Jeong _et al._ (2017) M. Jeong, H. Mayaffre, C. Berthier, D. Schmidiger, A. Zheludev, and M. Horvatić, “Magnetic-order crossover in coupled spin ladders,” Phys. Rev. Lett. 118, 167206 (2017).
* Möller _et al._ (2017) J. S. Möller, T. Lancaster, S. J. Blundell, F. L. Pratt, P. J. Baker, F. Xiao, R. C. Williams, W. Hayes, M. M. Turnbull, and C. P. Landee, “Quantum-critical spin dynamics in a Tomonaga-Luttinger liquid studied with muon-spin relaxation,” Phys. Rev. B 95, 020402(R) (2017).
* Rüegg _et al._ (2008) Ch. Rüegg, K. Kiefer, B. Thielemann, D. F. McMorrow, V. Zapf, B. Normand, M. B. Zvonarev, P. Bouillot, C. Kollath, T. Giamarchi, S. Capponi, D. Poilblanc, D. Biner, and K. W. Krämer, “Thermodynamics of the spin Luttinger liquid in a model ladder material,” Phys. Rev. Lett. 101, 247202 (2008).
* Ueda (1998) Y. Ueda, “Vanadate family as spin-gap systems,” Chem. Mater. 10, 2653 (1998).
* Yamauchi _et al._ (1999) T. Yamauchi, Y. Narumi, J. Kikuchi, Y. Ueda, K. Tatani, T. C. Kobayashi, K. Kindo, and K. Motoya, “Two gaps in (VO)2P2O7: Observation using high-field magnetization and NMR,” Phys. Rev. Lett. 83, 3729 (1999).
* Johnston _et al._ (1987) D. C. Johnston, J. W. Johnson, D. P. Goshorn, and A. J. Jacobson, “Magnetic susceptibility of (VO)2P2O7: A one-dimensional spin-1/2 Heisenberg antiferromagnet with a ladder spin configuration and a singlet ground state,” Phys. Rev. B 35, 219 (1987).
* Ghoshray _et al._ (2005) K. Ghoshray, B. Pahari, B. Bandyopadhyay, R. Sarkar, and A. Ghoshray, “${}^{51}\rm{V}$ NMR study of the quasi-one-dimensional alternating chain compound ${\mathrm{BaCu}}_{2}{\mathrm{V}}_{2}{\mathrm{O}}_{8}$,” Phys. Rev. B 71, 214401 (2005).
* Mukharjee _et al._ (2019) P. K. Mukharjee, K. M. Ranjith, B. Koo, J. Sichelschmidt, M. Baenitz, Y. Skourski, Y. Inagaki, Y. Furukawa, A. A. Tsirlin, and R. Nath, “Bose-Einstein condensation of triplons close to the quantum critical point in the quasi-one-dimensional spin-$\frac{1}{2}$ antiferromagnet NaVOPO4,” Phys. Rev. B 100, 144433 (2019).
* Arjun _et al._ (2019) U. Arjun, K. M. Ranjith, B. Koo, J. Sichelschmidt, Y. Skourski, M. Baenitz, A. A. Tsirlin, and R. Nath, “Singlet ground state in the alternating spin-$\frac{1}{2}$ chain compound NaVOAsO4,” Phys. Rev. B 99, 014421 (2019).
* Ahmed _et al._ (2017) N. Ahmed, P. Khuntia, K. M. Ranjith, H. Rosner, M. Baenitz, A. A. Tsirlin, and R. Nath, “Alternating spin chain compound AgVOAsO4 probed by ${}^{75}\mathrm{As}$ NMR,” Phys. Rev. B 96, 224423 (2017).
* Tsirlin _et al._ (2011) A. A. Tsirlin, R. Nath, J. Sichelschmidt, Y. Skourski, C. Geibel, and H. Rosner, “Frustrated couplings between alternating spin-$\frac{1}{2}$ chains in AgVOAsO4,” Phys. Rev. B 83, 144412 (2011).
* Weickert _et al._ (2019) F. Weickert, A. A. Aczel, M. B. Stone, V. O. Garlea, C. Dong, Y. Kohama, R. Movshovich, A. Demuer, N. Harrison, M. B. Gamża, A. Steppke, M. Brando, H. Rosner, and A. A. Tsirlin, “Field-induced double dome and Bose-Einstein condensation in the crossing quantum spin chain system AgVOAsO4,” Phys. Rev. B 100, 104422 (2019).
* Harrison and Manthiram (2013) K.L. Harrison and A. Manthiram, “Microwave-assisted solvothermal synthesis and characterization of various polymorphs of LiVOPO4,” Chem. Mater. 25, 1751 (2013).
* Hidalgo _et al._ (2019) M.F.V. Hidalgo, Y.-C. Lin, A. Grenier, D. Xiao, J. Rana, R. Tran, H. Xin, M. Zuba, J. Donohue, F.O. Omenya, I.-H. Chu, Z. Wang, X.G. Li, N.A. Chernova, K.W. Chapman, G. Zhou, L. Piper, S.P. Ong, and M.S. Whittingham, “Rational synthesis and electrochemical performance of LiVOPO4 polymorphs,” J. Mater. Chem. A 7, 8423 (2019).
* Note (1) Triclinic crystal structure is derived from $\varepsilon$-VOPO4. Alternatively, triclinic LiVOPO4 is sometimes referred to as the $\alpha$-phase, because it was the earliest discovered LiVOPO4 polymorph.
* Yang _et al._ (2008) Y. Yang, H. Fang, J. Zheng, L. Li, G. Li, and G. Yan, “Towards the understanding of poor electrochemical activity of triclinic LiVOPO4: Experimental characterization and theoretical investigations,” Solid State Sci. 10, 1292 (2008).
* Quackenbush _et al._ (2015) N. F. Quackenbush, L. Wangoh, D. O. Scanlon, R. Zhang, Y. Chung, Z. Chen, B. Wen, Y. Lin, J. C. Woicik, N. A. Chernova, S. P. Ong, M.S. Whittingham, and L. F. J. Piper, “Interfacial effects in $\varepsilon$-LixVOPO4 and evolution of the electronic structure,” Chem. Mater. 27, 8211 (2015).
* Lin _et al._ (2016) Y.-C. Lin, B. Wen, K. M. Wiaderek, S. Sallis, H. Liu, S. H. Lapidus, O. J. Borkiewicz, N. F. Quackenbush, N. A. Chernova, K. Karki, F. Omenya, P. J. Chupas, L. F. J. Piper, M. S. Whittingham, K. W. Chapman, and S. P. Ong, “Thermodynamics, kinetics and structural evolution of $\epsilon$-LiVOPO4 over multiple lithium intercalation,” Chem. Mater. 28, 1794 (2016).
* Shi _et al._ (2018) Y. Shi, H. Zhou, I.D. Seymour, S. Britto, J. Rana, L.W. Wangoh, Y. Huang, Q. Yin, P.J. Reeves, M. Zuba, Y. Chung, F. Omenya, N.A. Chernova, G. Zhou, L.F.J. Piper, C.P. Grey, and M.S. Whittingham, “Electrochemical performance of nanosized disordered LiVOPO4,” ACS Omega 3, 7310 (2018).
* Chung _et al._ (2019) Y. Chung, E. Cassidy, K. Lee, C. Siu, Y. Huang, F. Omenya, J. Rana, K.M. Wiaderek, N.A. Chernova, K.W. Chapman, L.F.J. Piper, and M.S. Whittingham, “Nonstoichiometry and defects in hydrothermally synthesized $\varepsilon$-LiVOPO4,” ACS Appl. Energy Mater. 2, 4792 (2019).
* Onoda and Ikeda (2013) M. Onoda and S. Ikeda, “Crystal structure and spin-singlet state of the LixVOPO4 insertion electrode system with alternating-bond chain,” J. Phys. Soc. Jpn. 82, 053801 (2013).
* Carvajal (1993) J. R. Carvajal, “Recent advances in magnetic structure determination by neutron powder diffraction,” Physica B: Cond. Mat. 192, 55 (1993).
* A. Mba _et al._ (2012) J. A. Mba, C. Masquelier, E. Suard, and L. Croguennec, “Synthesis and crystallographic study of homeotypic LiVPO4F and LiVOPO4,” Chem. Mater. 24, 1223 (2012).
* Tsirlin _et al._ (2009) A. A. Tsirlin, B. Schmidt, Y. Skourski, R. Nath, C. Geibel, and H. Rosner, “Exploring the spin-$\frac{1}{2}$ frustrated square lattice model with high-field magnetization studies,” Phys. Rev. B 80, 132407 (2009).
* Skourski _et al._ (2011) Y. Skourski, M. D. Kuz’min, K. P. Skokov, A. V. Andreev, and J. Wosnitza, “High-field magnetization of Ho2Fe17,” Phys. Rev. B 83, 214420 (2011).
* Koepernik and Eschrig (1999) K. Koepernik and H. Eschrig, “Full-potential nonorthogonal local-orbital minimum-basis band-structure scheme,” Phys. Rev. B 59, 1743 (1999).
* Lavrov _et al._ (1982) A.V. Lavrov, V.P. Nikolaev, G.G. Sadikov, and M.A. Poray-Koshits, “Synthesis and crystal structure of mixed vanadyl and lithium orthophosphate LiVOPO4,” Dokl. Akad. Nauk SSSR 266, 343 (1982).
* Perdew _et al._ (1996) J. P. Perdew, K. Burke, and M. Ernzerhof, “Generalized gradient approximation made simple,” Phys. Rev. Lett. 77, 3865 (1996).
* Tsirlin (2014) A.A. Tsirlin, “Spin-chain magnetism and uniform Dzyaloshinsky-Moriya anisotropy in BaV3O8,” Phys. Rev. B 89, 014405 (2014).
* Nath _et al._ (2008a) R. Nath, A.A. Tsirlin, E.E. Kaul, M. Baenitz, N. Büttgen, C. Geibel, and H. Rosner, “Strong frustration due to competing ferromagnetic and antiferromagnetic interactions: Magnetic properties of M(VO)2(PO${}_{4})_{2}$ (M = Ca and Sr),” Phys. Rev. B 78, 024418 (2008a).
* Tsirlin _et al._ (2008) A.A. Tsirlin, R. Nath, C. Geibel, and H. Rosner, “Magnetic properties of Ag2VOP2O7: An unexpected spin dimer system,” Phys. Rev. B 77, 104436 (2008).
* Isobe _et al._ (2002) M. Isobe, E. Ninomiya, A. N. Vasil’ev, and U. Yutaka, “Novel phase transition in spin-$\frac{1}{2}$ linear chain systems: NaTiSi2O6 and LiTiSi2O6,” J. Phys. Soc. Jpn. 71, 1423 (2002).
* Popović _et al._ (2004) Z. S. Popović, Z. V. Šljivančanin, and F. R. Vukajlović, “Sodium pyroxene NaTiSi2O6: Possible Haldane spin-1 chain system,” Phys. Rev. Lett. 93, 036401 (2004).
* Hirota _et al._ (1994) K. Hirota, D. E. Cox, J. E. Lorenzo, G. Shirane, J. M. Tranquada, M. Hase, K. Uchinokura, H. Kojima, Y. Shibuya, and I. Tanaka, “Dimerization of CuGeO3 in the spin-Peierls state,” Phys. Rev. Lett. 73, 736 (1994).
* Fujii _et al._ (1997) Y. Fujii, H. Nakao, T. Yosihama, M. Nishi, K. Nakajima, K. Kakurai, M. Isobe, Y. Ueda, and H. Sawa, “New inorganic spin-Peierls compound NaV2O5 evidenced by x-ray and neutron scattering,” J. Phys. Soc. Jpn. 66, 326 (1997).
* Lépine _et al._ (1978) Y. Lépine, A. Caillé, and V. Larochelle, “Potassium-tetracyanoquinodimethane (K-TCNQ): A spin-Peierls system,” Phys. Rev. B 18, 3585 (1978).
* Kittel (1986) Charles Kittel, _Introduction to Solid State Physics_ , 6th ed. (John Wiley & Sons, Inc., New York, 1986).
* Bag _et al._ (2018) P. Bag, P. R. Baral, and R. Nath, “Cluster spin-glass behavior and memory effect in Cr0.5Fe0.5Ga,” Phys. Rev. B 98, 144436 (2018).
* Johnston _et al._ (2000) D.C. Johnston, R. K. Kremer, M. Troyer, X. Wang, A. Klümper, S. L. Budko, A. F. Panchula, and P. C. Canfield, “Thermodynamics of spin $S$= $\frac{1}{2}$ antiferromagnetic uniform and alternating-exchange Heisenberg chains,” Phys. Rev. B 61, 9558 (2000).
* Eggert and Affleck (1995) S. Eggert and I. Affleck, “Impurities in $S=\frac{1}{2}$ Heisenberg antiferromagnetic chains: Consequences for neutron scattering and knight shift,” Phys. Rev. Lett. 75, 934 (1995).
* Yogi _et al._ (2015) A. Yogi, N. Ahmed, R. Nath, A. A. Tsirlin, S. Kundu, A. V. Mahajan, J. Sichelschmidt, B. Roy, and Y. Furukawa, “Antiferromagnetism of Zn2VO(PO4)2 and the dilution with Ti4+,” Phys. Rev. B 91, 024413 (2015).
* Selwood (2013) P. W. Selwood, _Magnetochemistry_ (Read Books Ltd, 2013).
* Samulon _et al._ (2009) E.C. Samulon, Y. Kohama, R.D. McDonald, M.C. Shapiro, K.A. Al-Hassanieh, C.D. Batista, M. Jaime, and I.R. Fisher, “Asymmetric quintuplet condensation in the frustrated $S=1$ spin-dimer compound Ba3Mn2O8,” Phys. Rev. Lett. 103, 047202 (2009).
* Nath _et al._ (2008b) R. Nath, A. A. Tsirlin, H. Rosner, and C. Geibel, “Magnetic properties of $\text{BaCdVO}{({\text{PO}}_{4})}_{2}$: A strongly frustrated spin-$\frac{1}{2}$ square lattice close to the quantum critical regime,” Phys. Rev. B 78, 064422 (2008b).
* Walstedt and Walker (1974) R. E. Walstedt and L. R. Walker, “Nuclear-resonance line shapes due to magnetic impurities in metals,” Phys. Rev. B 9, 4857 (1974).
* Kikuchi _et al._ (1999) J. Kikuchi, K. Motoya, T. Yamauchi, and Y. Ueda, “Coexistence of double alternating antiferromagnetic chains in (VO)2P2O7:NMR study,” Phys. Rev. B 60, 6731 (1999).
* Nath _et al._ (2008c) R. Nath, D. Kasinathan, H. Rosner, M. Baenitz, and C. Geibel, “Electronic and magnetic properties of K2CuP2O7: A model $S=\frac{1}{2}$ Heisenberg chain system,” Phys. Rev. B 77, 134451 (2008c).
* Nath _et al._ (2005) R. Nath, A. V. Mahajan, N. Büttgen, C. Kegler, A. Loidl, and J. Bobroff, “Study of one-dimensional nature of $S=\frac{1}{2}$ of (Sr,Ba)2Cu(PO4)2 and BaCuP2O7 via 31P NMR,” Phys. Rev. B 71, 174436 (2005).
* Taniguchi _et al._ (1995) S. Taniguchi, T. Nishikawa, Y. Yasui, Y. Kobayashi, M. Sato, T. Nishioka, M. Kontani, and K. Sano, “Spin gap behavior of $S$= $\frac{1}{2}$ quasi-two-dimensional system CaV4O9,” J. Phys. Soc. Jpn. 64, 2758 (1995).
* Moriya (1956) T. Moriya, “Nuclear magnetic relaxation in antiferromagnetics,” Prog. Theor. Phys. 16, 23 (1956).
* Giamarchi and Tsvelik (1999) T. Giamarchi and A. M. Tsvelik, “Coupled ladders in a magnetic field,” Phys. Rev. B 59, 11398 (1999).
* Roca _et al._ (1998) M. Roca, P. Amorós, J. Cano, M. Dolores Marcos, J. Alamo, A. Beltrán-Porter, and D. Beltrán-Porter, “Prediction of magnetic properties in oxovanadium(IV) phosphates: The role of the bridging PO4 anions,” Inorg. Chem. 37, 3167 (1998).
* Note (2) Similar to Ref. Tsirlin _et al._ (2011), magnetization curve was simulated by including a weak interchain coupling of $J_{\perp}/J_{1}=-0.05$ in order to reproduce the experimental data around $H_{\rm c1}$. This interchain coupling is much lower than estimated by DFT (Table 2), possibly because $J_{\perp}$ reflects a cumulative effect of the frustrated couplings $J_{a1}$, $J_{a2}$ and $J_{c1}$, $J_{c2}$.
* Johnston _et al._ (2001) D.C. Johnston, T. Saito, M. Azuma, M. Takano, T. Yamauchi, and Y. Ueda, “Modeling of the magnetic susceptibilities of the ambient- and high-pressure phases of (VO)2P2O7,” Phys. Rev. B 64, 134403 (2001).
* Garrett _et al._ (1997) A. W. Garrett, S. E. Nagler, D. A. Tennant, B. C. Sales, and T. Barnes, “Magnetic excitations in the $S$ = $1/2$ alternating chain compound (VO)2P2O7,” Phys. Rev. Lett. 79, 745 (1997).
* Nohadani _et al._ (2004) O. Nohadani, S. Wessel, B. Normand, and S. Haas, “Universal scaling at field-induced magnetic phase transitions,” Phys. Rev. B 69, 220402(R) (2004).
* Mazurenko _et al._ (2014) V.V. Mazurenko, M.V. Valentyuk, R. Stern, and A.A. Tsirlin, “Nonfrustrated interlayer order and its relevance to the Bose-Einstein condensation of magnons in BaCuSi2O6,” Phys. Rev. Lett. 112, 107202 (2014).
* Allenspach _et al._ (2020) S. Allenspach, A. Biffin, U. Stuhr, G. S. Tucker, S. Ohira-Kawamura, M. Kofu, D. J. Voneshen, M. Boehm, B. Normand, N. Laflorencie, F. Mila, and Ch. Rüegg, “Multiple magnetic bilayers and unconventional criticality without frustration in BaCuSi2O6,” Phys. Rev. Lett. 124, 177205 (2020).
|
2024-09-04T02:54:55.679893 | 2020-02-29T00:15:46 | 2003.00008 | {
"authors": "Andres Fernandez Herrero",
"full_text_license": null,
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"provenance": "arxiv-papers-0000.json.gz:25950",
"submitter": "Andres Fernandez Herrero",
"url": "https://arxiv.org/abs/2003.00008"
} | arxiv-papers |
# Reduction theory for connections over the formal punctured disc
Andres Fernandez Herrero
###### Abstract
We give a purely algebraic treatment of reduction theory for connections over
the formal punctured disc. Our proofs apply to arbitrary connected linear
algebraic groups over an algebraically closed field of characteristic $0$. We
also state and prove some new quantitative results.
###### Contents
1. 1 Introduction
2. 2 Some notation and definitions
1. 2.1 Preliminaries on formal connections
2. 2.2 Adjoint orbits in semisimple Lie algebras
3. 3 Regular connections
1. 3.1 Regular connections for semisimple groups
2. 3.2 Regular connections for tori and reductive groups
3. 3.3 Connections for unipotent groups
4. 3.4 Regular connections for solvable groups
5. 3.5 Regular connections for arbitrary linear algebraic groups
6. 3.6 Descent for gauge equivalence classes
4. 4 Irregular connections for $\mathbf{G}$ reductive
1. 4.1 Connections in canonical form
2. 4.2 The case when $A_{r}$ is not nilpotent
3. 4.3 The case when $A_{r}$ is nilpotent
4. 4.4 Algorithm for reductive groups and some quantitative results
5. 5 Irregular connections for arbitrary linear algebraic groups
1. 5.1 Irregular connections for solvable groups
2. 5.2 Irregular connections for arbitrary linear algebraic groups
3. 5.3 Galois cohomology for irregular connections
## 1 Introduction
Let $\mathbf{k}$ be an algebraically closed field of characteristic $0$. Fix a
connected linear algebraic group $\mathbf{G}$ over $\mathbf{k}$. Let
$D^{*}\vcentcolon=\text{Spec}\,\mathbf{k}(({t}))$ denote the formal punctured
disc over $\mathbf{k}$.
In this paper we give an algebraic classification of formal
$\mathbf{G}$-connections over $D^{*}$ up to gauge equivalence. The regular
singular case is more explicit and is presented first (see Subsection 3.6).
The more general irregular case is Proposition 5.9. These constitute the two
main results of the paper.
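For orientation, with one common normalization (the paper fixes its own conventions in Section 2) a trivialized formal $\mathbf{G}$-connection is an operator

$\nabla=d+A\,\frac{dt}{t},\qquad A\in\mathfrak{g}\bigl(\mathbf{k}((t))\bigr),$

and an element $g\in\mathbf{G}(\mathbf{k}((t)))$ acts by the gauge transformation

$g\cdot A=\operatorname{Ad}(g)\,A-t\,\frac{dg}{dt}\,g^{-1}.$

Two connections are gauge equivalent when their matrices are related in this way; signs and the $dt/t$ versus $dt$ normalization vary between references.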
In order to get our parametrizations, we first prove that every connection can
be put into canonical form [BV83] after passing to a ramified cover (Theorems
3.32 and 5.3).
Next, we describe a set of representatives for canonical forms for which we
can develop a clean description of Galois cohomology cocycles (see the set of
conditions in Theorem 5.3). We then proceed to describe the Galois cocycles in
Subsections 3.6 and 5.3. As a consequence of our arguments we obtain some new
quantitative results in Subsection 4.4.
Our approach to the existence of canonical forms is based on the work of
Babbitt and Varadarajan [BV83]. Some of the crucial parts in their argument
are analytic in nature, so they only apply when the ground field is
$\mathbb{C}$. We sidestep those parts to provide a completely algebraic proof.
In addition, we simplify the global structure of their inductive arguments.
Our treatment of uniqueness of canonical forms is substantially different from
the one in [BV83]. We choose a different set of representatives for canonical
classes in order to set up our Galois cohomology argument (see the list of
properties in Theorem 5.3). This allows us to avoid the use of the complex
exponential map. In our setup the proof of uniqueness and the identification
of the gauge transformation centralizer become elementary power series
arguments.
We develop separate treatments of reduction theory depending on whether
$\mathbf{G}$ is reductive, unipotent or solvable. This allows us to give
sharper separate statements, including some new determinacy results in the
unipotent and solvable cases (see Propositions 3.25, 3.29 and 5.2).
There is some related work by Schnürer on the subject. [Sch07] gives a purely
algebraic proof of the reduction of formal connections when the group
$\mathbf{G}$ is reductive and the connection has a regular singularity. In
contrast, our arguments apply more generally to arbitrary linear algebraic
groups and irregular singular connections.
Let us briefly describe the body of the paper. Section 2 sets up the notation
and some of the facts that are used throughout.
In Section 3 we develop reduction theory for regular singular connections.
Section 3 culminates with the first main result of this paper: an explicit
parametrization of regular singular connections over the formal punctured
disc. This is achieved in Subsection 3.6. This parametrization is rather
concrete in the case of classical groups, see Example 3.42.
Section 4 treats reduction theory of irregular connections for reductive
groups. Section 4 also includes some of the quantitative results mentioned in
the abstract. In Subsection 4.4 we give an explicit description of the
reduction algorithm for reductive groups. An analysis of this algorithm yields
determinacy results for both the irregular and the regular part of the
canonical form in the reductive case (Proposition 4.21). We also prove a new
uniform bound on the ramification needed to put connections into canonical
form (see Propositions 4.19 and 4.23).
Section 5 develops reduction theory for irregular connections in the case of
an arbitrary linear algebraic group. Subsection 5.3 gives a parametrization of
irregular connections over $D^{*}$ up to gauge equivalence; this is the second
main result of this paper. Precisely, this parametrization consists of two
pieces of data. The first is a connection $B$ in canonical form satisfying all
five conditions listed in Theorem 5.3. $B$ determines the connection up to
ramified gauge equivalence (i.e. when we allow passing to ramified covers of
the formal disc). The second piece of data is a $B$-twisted $\mu_{b}$-cocycle,
which is an element in the centralizer of the residue of $B$ satisfying
a certain additional condition. See Definition 5.7 and the paragraph following
it for an explanation. Section 5 ends with Proposition 5.10, where we use this
parametrization to give another proof of a result found in [CK17].
## 2 Some notation and definitions
### 2.1 Preliminaries on formal connections
We will always work over a fixed algebraically closed field $\mathbf{k}$ of
characteristic $0$. An undecorated product of $\mathbf{k}$-schemes (e.g.
$X\times S$) should always be interpreted as a fiber product over
$\mathbf{k}$. $\mathbf{G}$ will be a connected linear algebraic group over
$\mathbf{k}$ and $\mathfrak{g}=\text{Lie}(\mathbf{G})$ will be the
corresponding Lie algebra. We let $\mathcal{O}=\mathbf{k}[[{t}]]$ denote the
ring of formal power series over $\mathbf{k}$ and $F=\mathbf{k}(({t}))$ denote
the corresponding field of Laurent series. $\mathcal{O}$ is a discrete
valuation ring with maximal ideal $t\mathcal{O}$.
Recall that the module of Kähler differentials
$\Omega^{1}_{\mathcal{O}/\mathbf{k}}$ classifies $\mathbf{k}$-derivations from
$\mathcal{O}$. It is spanned as an $\mathcal{O}$-module by formal elements
$df$ for every $f\in\mathcal{O}$, subject to the relations $d(fg)=fdg+gdf$. We
will work with the module of continuous Kähler differentials
$\hat{\Omega}_{\mathcal{O}/\mathbf{k}}^{1}$ , which is defined to be the
completion
$\hat{\Omega}_{\mathcal{O}/\mathbf{k}}^{1}\vcentcolon=\underset{n}{\varprojlim}\,\,\,{\Omega}_{\mathcal{O}/\mathbf{k}}^{1}\,/\,\,t^{n}\,{\Omega}_{\mathcal{O}/\mathbf{k}}^{1}$
This is a free $\mathcal{O}$-module of rank $1$. The natural completion map
$\widehat{(-)}:\Omega_{\mathcal{O}/\mathbf{k}}^{1}\rightarrow\hat{\Omega}_{\mathcal{O}/\mathbf{k}}^{1}$
can be thought of as the projection onto the quotient obtained by adding the
extra relations coming from allowing termwise differentiation of power series.
###### Remark 2.1.
The module of ordinary Kähler differentials
$\Omega^{1}_{\mathcal{O}/\mathbf{k}}$ is not finitely generated as an
$\mathcal{O}$-module. We don’t want to work with
$\Omega^{1}_{\mathcal{O}/\mathbf{k}}$, because the relations above do not
include classical intuitive identities like $d(e^{t})=e^{t}dt$. That is the
reason why we use continuous Kähler differentials instead.
For any positive natural number $b$, let
$F_{b}\vcentcolon=\mathbf{k}(({t^{\frac{1}{b}}}))$. This is a finite Galois
extension of $F$ with Galois group canonically isomorphic to
$\mathbf{\mu}_{b}$, the group of $b$-roots of unity in $\mathbf{k}$. Under
this isomorphism, we have that $\gamma\in\mu_{b}$ acts by $\gamma\cdot
t^{\frac{1}{b}}=\gamma^{-1}t^{\frac{1}{b}}$. Notice that the choice of a
primitive root of unity yields an identification
$\mu_{b}\cong\mathbb{Z}/b\,\mathbb{Z}$, since we are working in characteristic
$0$. A well-known theorem of Puiseux states that the algebraic closure of $F$
is $\overline{F}=\bigcup_{b\geq 1}F_{b}$.
In this paper we will work with a (right) $\mathbf{G}$-torsor $P$ over the
formal punctured disc $D^{*}\vcentcolon=\text{Spec}\,F$. We know that $P$ can
be trivialized, meaning that $P\cong\text{Spec}\,F\times\mathbf{G}$ as right
$\mathbf{G}$-torsors. This follows from theorems of Tsen and Springer, see
[Ser02] page 80 - 3.3(b) and page 132 - 2.3(c). A formal connection $A$ on $P$
is a function from the set of trivializations of $P$ into
$\mathfrak{g}\,\otimes_{\mathbf{k}}\,\hat{\Omega}_{\mathcal{O}/\mathbf{k}}^{1}\left[\frac{1}{t}\right]$
that satisfies a certain transformation law. In order to describe the
transformation law we need some notation.
Let $T_{\mathbf{G}}$ be the tangent sheaf of $\mathbf{G}$. There is a natural
trivialization
$T_{\mathbf{G}}\cong\mathfrak{g}\,\otimes_{\mathbf{k}}\mathcal{O}_{\mathbf{G}}$
given by left translation. Therefore, we get an isomorphism
$\mathfrak{g}\,\otimes_{\mathbf{k}}\,\Omega^{1}_{\mathbf{G}/\mathbf{k}}\cong\mathfrak{g}\,\otimes_{\mathbf{k}}\,\text{Hom}_{\mathcal{O}_{\mathbf{G}}}(T_{\mathbf{G}},\mathcal{O}_{\mathbf{G}})\cong\text{Hom}_{\mathbf{k}}(\mathfrak{g},\mathfrak{g})\otimes_{\mathbf{k}}\,\mathcal{O}_{\mathbf{G}}$.
The invariant $\mathfrak{g}$-valued 1-form on $\mathbf{G}$ that corresponds to
$\text{id}_{\mathfrak{g}}\otimes 1$ under this isomorphism is called the
Maurer-Cartan form. We will denote it by
$\omega\in\mathfrak{g}\,\otimes_{\mathbf{k}}\Omega^{1}_{\mathbf{G}/\mathbf{k}}$.
Suppose that we are given an element $g\in\mathbf{G}(F)$. We can think of it
as a map $g:\text{Spec}\,F\longrightarrow\mathbf{G}$. We can use $g$ to pull
back the Maurer-Cartan form to $\text{Spec}\,F$ in order to obtain
$g^{*}\omega\in\mathfrak{g}\,\otimes_{\mathbf{k}}\,\Omega_{F/\mathbf{k}}^{1}=\mathfrak{g}\,\otimes_{\mathbf{k}}\,\Omega_{\mathcal{O}/\mathbf{k}}^{1}\left[\frac{1}{t}\right]$.
By applying the completion map
$\widehat{(-)}:\Omega_{\mathcal{O}/\mathbf{k}}^{1}\rightarrow\hat{\Omega}_{\mathcal{O}/\mathbf{k}}^{1}$,
we get an element
$\widehat{{g}^{*}\omega}\in\mathfrak{g}\,\otimes_{\mathbf{k}}\,\hat{\Omega}_{\mathcal{O}/\mathbf{k}}^{1}\left[\frac{1}{t}\right]$.
Now we can define the gauge action of $\mathbf{G}\left(F\right)$ on
$\mathfrak{g}\,\otimes_{\mathbf{k}}\,\hat{\Omega}_{\mathcal{O}/\mathbf{k}}^{1}\left[\frac{1}{t}\right]$.
For any $g\in\mathbf{G}\left(F\right)$ and
$B\in\mathfrak{g}\,\otimes_{\mathbf{k}}\,\hat{\Omega}_{\mathcal{O}/\mathbf{k}}^{1}\left[\frac{1}{t}\right]$,
we set $g\cdot B\vcentcolon=\text{Ad}(g)B+\widehat{{g}^{*}\omega}$.
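For $\mathbf{G}=\mathbf{GL_{n}}$ this recovers the classical change-of-frame rule for linear systems of differential equations. Writing connections as matrices via the identification $\hat{\Omega}_{\mathcal{O}/\mathbf{k}}^{1}=\mathcal{O}\,dt\cong\mathcal{O}$ fixed below, one checks that the gauge action becomes
$g\cdot B\;=\;g\,B\,g^{-1}\;+\;\frac{dg}{dt}\,g^{-1}$
This is the form in which the action appears in the matrix computations of Propositions 3.11 and 3.14 below.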
###### Definition 2.2.
By a formal connection $A$ for $P$ we mean a function
$A\;:\;\left\\{\text{trivializations}\;\;P\xrightarrow{\sim}\text{Spec}\,F\times\mathbf{G}\right\\}\;\;\longrightarrow\;\;\mathfrak{g}\,\otimes_{\mathbf{k}}\,\hat{\Omega}_{\mathcal{O}/\mathbf{k}}^{1}\left[\frac{1}{t}\right]$
satisfying the following transformation law. Let
$\phi_{1},\,\phi_{2}\,:\,P\xrightarrow{\sim}\text{Spec}\,F\times\mathbf{G}$ be
two trivializations of $P$. We know that $\phi_{2}\circ\phi_{1}^{-1}$ is given
by left multiplication by a unique element $g\in\mathbf{G}\left(F\right)$. We
then require $A(\phi_{2})=g\cdot A(\phi_{1})$.
###### Remark 2.3.
The reader might have encountered a different definition of formal connection.
Using the action of $\mathbf{G}$ on
$\mathfrak{g}\otimes_{\mathbf{k}}\Omega^{1}_{\mathcal{O}/\mathbf{k}}\left[\frac{1}{t}\right]$
one can define a formal version of the Atiyah sequence [Ati57]. Splittings of
this sequence will then correspond to formal connections as we have defined them.
Such a connection $A$ is completely determined by its value at any given
trivialization. We will often assume that we have chosen a fixed
trivialization of $P$. Hence we can think of $P$ as the trivial bundle, and
think of $A$ as the element of
$\mathfrak{g}\,\otimes_{\mathbf{k}}\,\hat{\Omega}_{\mathcal{O}/\mathbf{k}}^{1}\left[\frac{1}{t}\right]$
given by the image of this trivialization. Note that we have implicitly fixed
a choice of uniformizer $t$ for $\mathcal{O}$. This yields an isomorphism
$\hat{\Omega}_{\mathcal{O}/\mathbf{k}}^{1}=\mathcal{O}\,dt\cong\mathcal{O}$.
We will often think of connections as elements of
$\mathfrak{g}_{F}\vcentcolon=\mathfrak{g}\otimes_{\mathbf{k}}F$ obtained under
the induced isomorphism
$\Omega_{\mathcal{O}/\mathbf{k}}^{1}\left[\frac{1}{t}\right]=F\,dt\cong F$.
All of the discussion above also applies over any finite field extension
$F_{b}$ of $F$. The choice of a uniformizer $u\vcentcolon=t^{\frac{1}{b}}$ for
$F_{b}$ yields an isomorphism from $F$ onto $F_{b}$ sending $t$ to $u$. This
allows us to “lift” $\mathbf{G}$-bundles and trivializations from
$\text{Spec}\,F_{b}$ to $\text{Spec}\,F$ by transport of structure. We can
therefore lift connections from $F_{b}$ to $F$.
There are some subtleties for the lift of connections when we think of them as
elements of $\mathfrak{g}_{F}$. We generally take derivatives with respect to
$t$, and not $u=t^{\frac{1}{b}}$. That is, we fix the transcendental element
$t=u^{b}$ of $F_{b}$ in order to get the isomorphism
$\hat{\Omega}_{\mathcal{O}_{b}/\mathbf{k}}^{1}\left[\frac{1}{u}\right]=\left(\mathcal{O}_{b}\,dt\right)\left[\frac{1}{u}\right]\cong\mathcal{O}_{b}\left[\frac{1}{u}\right]=F_{b}$.
Under this identification, the lift of a $\mathbf{G}$-connection is not the
obvious one given by replacing $u$ by $t$. Instead, the lift of a connection
$A=\sum_{j=r}^{\infty}A_{j}\,t^{\frac{j}{b}}\in\mathfrak{g}_{F_{b}}$ is given
by $\tilde{A}\vcentcolon=bt^{b-1}\sum_{j=r}^{\infty}A_{j}\,t^{j}$. This is
called the $b$-lift of the connection.
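As a sanity check, take $b=2$ and $A=A_{-1}\,t^{-\frac{1}{2}}\in\mathfrak{g}_{F_{2}}$. The formula gives $\tilde{A}=2t\cdot A_{-1}t^{-1}=2A_{-1}$, which has no pole at all. In terms of differential forms this is the substitution $A_{-1}t^{-\frac{1}{2}}\,dt=2A_{-1}\,du$ with $u=t^{\frac{1}{2}}$, followed by the transport $u\mapsto t$.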
Let $\mathbf{T}\subset\mathbf{G}$ be a maximal torus in $\mathbf{G}$. We will
denote by $X_{*}(\mathbf{T})$ (resp. $X^{*}(\mathbf{T})$) the cocharacter
(resp. character) lattice of $\mathbf{T}$. We will write
$\langle-,-\rangle:X_{*}(\mathbf{T})\otimes
X^{*}(\mathbf{T})\longrightarrow\mathbb{Z}$ for the canonical pairing. There
is a natural inclusion $X_{*}(\mathbf{T})\subset\text{Lie}(\mathbf{T})$ given
by taking differentials at the identity. We will freely use this
identification without further notice. Note that a cocharacter
$\lambda:\mathbb{G}_{m}\longrightarrow\mathbf{T}\subset\mathbf{G}$ yields a
point $\lambda\in\mathbf{G}(\mathbf{k}[t,t^{-1}])$. We denote by $t^{\lambda}$
the element of $\mathbf{G}(F)$ obtained via the natural inclusion
$\mathbf{k}[t,t^{-1}]\hookrightarrow F$.
We will make use of the algebraic exponential map, as in [DG80] pg. 315. For
$X\in t\mathfrak{gl}_{n}(\mathcal{O})$ we have an exponential
$\text{exp}(X)\in\mathbf{GL_{n}}(\mathcal{O})$ defined by
$\text{exp}(X)\vcentcolon=\sum_{i=0}^{\infty}\frac{1}{i!}X^{i}$. By choosing a
closed embedding $\mathbf{G}\hookrightarrow\mathbf{GL_{n}}$ we can similarly
define an exponential map $\text{exp}\vcentcolon
t\mathfrak{g}(\mathcal{O})\longrightarrow\mathbf{G}(\mathcal{O})$. It can be
checked that this does not depend on the choice of embedding. We will only use
one property of this map: for any $X\in\mathfrak{g}$, the image of
$\text{exp}(t^{n}\,X)$ when we reduce modulo $t^{n+1}$ is given by
$1+t^{n}X\in\mathbf{G}\left(\mathcal{O}/t^{n+1}\mathcal{O}\right)$.
### 2.2 Adjoint orbits in semisimple Lie algebras
Here we include some facts about semisimple algebraic groups and their Lie
algebras. Most of these results are standard and can be found in the book
[CM93]. For the rest of this section we will assume that $\mathbf{G}$ is
connected semisimple.
Recall that an element of a semisimple Lie algebra is called semisimple (resp.
nilpotent) if the image under the adjoint representation is semisimple
(resp. nilpotent) as a linear transformation of $\mathfrak{g}$. It turns out
that we can check these conditions on any faithful representation. This fact
follows from the following theorem.
###### Theorem 2.4 (Additive Jordan Decomposition).
Let $\mathfrak{g}$ be semisimple. For any $A\in\mathfrak{g}$ there exist unique elements
$A_{s}$ semisimple and $A_{n}$ nilpotent such that
1. (i)
$A=A_{s}+A_{n}$
2. (ii)
$[A_{s},A_{n}]=0$
###### Remark 2.5.
For a reductive Lie algebra, all elements of the center are considered
semisimple. For the Lie algebra of an arbitrary linear algebraic group, we
will usually fix a Levi subgroup $\mathbf{L}$ and speak of semisimple elements
inside $\text{Lie}(\mathbf{L})$.
Recall that $\mathfrak{sl}_{2}=\\{X\in\mathfrak{gl}_{2}\mid\text{tr}(X)=0\\}$.
The Lie bracket is given by the matrix commutator. Define
$H=\begin{bmatrix}1&0\\\ 0&-1\end{bmatrix}$, $X=\begin{bmatrix}0&1\\\
0&0\end{bmatrix}$ and $Y=\begin{bmatrix}0&0\\\ 1&0\end{bmatrix}$. Then we have
$\mathfrak{sl}_{2}=\mathbf{k}H\oplus\mathbf{k}X\oplus\mathbf{k}Y$ as a
$\mathbf{k}$-vector space.
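These elements satisfy the commutation relations $[H,X]=2X$, $[H,Y]=-2Y$ and $[X,Y]=H$. This is the normalization we use for $\mathfrak{sl}_{2}$-triples below.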
###### Definition 2.6.
A $\mathfrak{sl}_{2}$-triple in $\mathfrak{g}$ is a nonzero Lie algebra map
$\phi:\mathfrak{sl}_{2}\longrightarrow\mathfrak{g}$. We will often abuse
notation and denote the images of $H,X,Y$ with the same letters.
###### Theorem 2.7 (Jacobson-Morozov).
Let $\mathbf{G}$ be a connected semisimple algebraic group with Lie algebra
$\mathfrak{g}$. Let $U\in\mathfrak{g}$ be a nilpotent element. Then there
exists a homomorphism $\Phi\vcentcolon\mathbf{SL}_{2}\longrightarrow\mathbf{G}$ such that the
$\mathfrak{sl}_{2}$-triple corresponding to the differential
$d\Phi:\mathfrak{sl}_{2}\longrightarrow\mathfrak{g}$ satisfies $d\Phi(Y)=U$.
Moreover such a homomorphism is uniquely determined up to conjugation by an
element of the centralizer $Z_{\mathbf{G}}(U)(\mathbf{k})$.
If $Y\neq 0$ is a nilpotent element in $\mathfrak{g}$, we will denote by
$(H,X,Y)$ the $\mathfrak{sl}_{2}$-triple granted by Jacobson-Morozov. For any
element $X\in\mathfrak{g}$, we will write $\mathfrak{g}_{X}$ for the
centralizer of $X$ in $\mathfrak{g}$.
Let $G=\mathbf{G}(\mathbf{k})$ denote the $\mathbf{k}$-rational points of
$\mathbf{G}$. Recall that for any $Y\in\mathfrak{g}$, the orbit under the
adjoint action $\mathbf{G}\cdot Y$ can be equipped with the structure of a
smooth locally closed subvariety of $\mathfrak{g}$. We will often harmlessly
identify it with its closed points $G\cdot Y$. The following proposition is
going to be the essential technical tool for the induction argument in the
reductive case. The proof can be found in [BV83] pages 17-18.
###### Proposition 2.8.
Let $Y\neq 0$ be nilpotent in $\mathfrak{g}$. Let $(H,X,Y)$ be the
corresponding $\mathfrak{sl}_{2}$-triple. Then the affine space
$Y+\mathfrak{g}_{X}$ meets the orbit $G\cdot Y$ exactly at $Y$. For any other
nilpotent $U\in Y+\mathfrak{g}_{X}$ with $U\neq Y$, we have $\text{dim}(G\cdot
U)>\text{dim}(G\cdot Y)$.
###### Example 2.9.
If $Y$ is regular nilpotent, then it is the unique nilpotent element in
$Y+\mathfrak{g}_{X}$.
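As a concrete check in $\mathfrak{g}=\mathfrak{sl}_{2}$: for the triple $(H,X,Y)$ above we have $\mathfrak{g}_{X}=\mathbf{k}X$, and the element $Y+cX=\begin{bmatrix}0&c\\\ 1&0\end{bmatrix}$ has characteristic polynomial $\lambda^{2}-c$. It is therefore nilpotent only when $c=0$, as predicted by Proposition 2.8.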
Fix a maximal torus $\mathbf{T}\subset\mathbf{G}$. Let $\Phi$ be the set of
roots of $\mathbf{G}$ with respect to $\mathbf{T}$. The coweight lattice
$Q_{\mathbf{G}}$ of $\mathbf{G}$ with respect to $\mathbf{T}$ is defined to be
$Q_{\mathbf{G}}\vcentcolon=\text{Hom}(\mathbb{Z}\Phi,\,\mathbb{Z})$. Since
$\mathbf{G}$ is semisimple, the cocharacter lattice $X_{*}(\mathbf{T})$ has
finite index in the coweight lattice $Q_{\mathbf{G}}$.
###### Definition 2.10.
The index $I(\mathbf{G})$ is defined to be the exponent of the finite group
$Q_{\mathbf{G}}/\,X_{*}(\mathbf{T})$.
Let $\Phi^{\vee}$ be the set of coroots of $\mathbf{G}$ with respect to
$\mathbf{T}$. We have the following chain of inclusions
$\mathbb{Z}\Phi^{\vee}\,\subset\,X_{*}(\mathbf{T})\subset\,Q_{\mathbf{G}}$
###### Definition 2.11.
$J(\mathbf{G})$ is the exponent of the finite group
$Q_{\mathbf{G}}/\,\mathbb{Z}\Phi^{\vee}$.
###### Remark 2.12.
Since all maximal tori in $\mathbf{G}$ are conjugate, both $I(\mathbf{G})$ and
$J(\mathbf{G})$ do not depend on the choice of $\mathbf{T}$.
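For instance, for $\mathbf{G}=\mathbf{SL}_{n}$ we have $X_{*}(\mathbf{T})=\mathbb{Z}\Phi^{\vee}$ and $Q_{\mathbf{G}}/\,\mathbb{Z}\Phi^{\vee}$ is cyclic of order $n$, so that $I(\mathbf{SL}_{n})=J(\mathbf{SL}_{n})=n$. At the other extreme, $X_{*}(\mathbf{T})=Q_{\mathbf{G}}$ for the adjoint group $\mathbf{PGL}_{n}$, so that $I(\mathbf{PGL}_{n})=1$ while $J(\mathbf{PGL}_{n})=n$.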
Let us fix a Borel subgroup $\mathbf{B}\subset\mathbf{G}$ containing
$\mathbf{T}$. This amounts to a choice of positive roots $\Phi^{+}$. We let
$\Delta$ be the corresponding subset of simple roots.
###### Definition 2.13.
Let $\alpha=\sum_{\beta\in\Delta}m_{\beta}\beta$ be a positive root. The
height of $\alpha$ is defined to be
$\text{hgt}\,(\alpha)\vcentcolon=\sum_{\beta\in\Delta}\;m_{\beta}$. The height
of the Lie algebra $\mathfrak{g}$ is
$\text{hgt}\,(\mathfrak{g})\vcentcolon=\text{sup}_{\alpha\in\Phi^{+}}\;\text{hgt}\,(\alpha)$.
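For example, in type $A_{n-1}$ the highest root is $\alpha_{1}+\alpha_{2}+\dots+\alpha_{n-1}$, so $\text{hgt}\,(\mathfrak{sl}_{n})=n-1$.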
To conclude this section, we define a function that measures the “size” of the
semisimple element $H$ in the Jacobson-Morozov triple corresponding to a
nilpotent $Y\in\mathfrak{g}$. We can always arrange $H\in X_{*}(\mathbf{T})$.
We will implicitly assume this from now on.
###### Definition 2.14.
Let $Y\in\mathfrak{g}$ be a nilpotent element. Let $H$ be the corresponding
semisimple element in the Jacobson-Morozov triple of $Y$. Then, we define
$\Lambda(Y)\vcentcolon=\text{sup}_{\alpha\in\Phi}\;\left(\frac{1}{2}\alpha(H)+1\right)$.
This function $\Lambda$ is constant on nilpotent orbits.
###### Example 2.15.
Suppose that $Y$ is regular nilpotent. We can choose $H$ so that $\alpha(H)=2$
for every $\alpha\in\Delta$ (see [CM93] Chapter 3). Therefore,
$\Lambda(Y)=\text{hgt}\,(\mathfrak{g})+1$ in this case. It turns out that this
is the biggest possible value for $\Lambda$. In other words
$\Lambda(Y)\leq\text{hgt}\,(\mathfrak{g})+1$ for any nilpotent
$Y\in\mathfrak{g}$.
## 3 Regular connections
Fix a connected linear algebraic group $\mathbf{G}$ over $\mathbf{k}$. What we
call regular connections are also known as connections with at worst regular
singularities.
###### Definition 3.1.
A connection $A=\sum_{j=r}^{\infty}A_{j}\,t^{j}\in\mathfrak{g}_{F}$ is said to
be of the first kind if it has at worst a simple pole (i.e. $r\geq-1$). A
connection $A$ is called regular if there exists
$x\in\mathbf{G}(\overline{F})$ such that $x\cdot A$ is of the first kind.
In the analytic context, regular connections are classified by topological
data. Indeed, such connections are determined by their monodromy
representation. Our goal in this section is to classify formal regular
connections over an arbitrary ground field. This will be achieved in
Subsection 3.6. It should be noted that the development of the regular case is
a necessary preliminary step in our treatment of the general irregular
singular case in Sections 4 and 5.
As mentioned in the introduction, the regular singular case for
$\mathbf{k}=\mathbb{C}$ is treated in [BV83] using transcendental methods. The
case of the group $\text{GL}_{n}$ was known well before. See Deligne’s book
[Del70][II §1] for a discussion of regular singular connections for
$\text{GL}_{n}$ before the paper [BV83]. It should be noted that Levelt
[Lev75] gave a proof of the existence of canonical forms for $\text{GL}_{n}$
that applies to any algebraically closed field $\mathbf{k}$.
### 3.1 Regular connections for semisimple groups
We will start with the semisimple case. A regular connection is said to be in
canonical form if it can be written as $t^{-1}C$ for some $C\in\mathfrak{g}$.
In order to prove Theorem 3.6 we can assume that $A$ is of first kind, because
of the definition of regular connection. We first need the following
definition and lemma from [BV83, 8.5], which actually work for arbitrary
$\mathbf{G}$. We include a detailed proof of the lemma in order to keep the
exposition self-contained.
###### Definition 3.2.
Let $\mathbf{G}$ be a connected linear algebraic group. Let
$A=\sum_{j=-1}^{\infty}A_{j}\,t^{j}$ be a connection of the first kind in
$\mathfrak{g}_{F}$. The endomorphism
$\text{ad}\,(A_{-1})\,\in\text{GL}_{n}(\mathfrak{g})$ yields a decomposition
of $\mathfrak{g}$ into generalized eigenspaces
$\mathfrak{g}=\bigoplus_{\lambda}\mathfrak{g}_{\lambda}$. We say that $A$ is
aligned if $A_{j}\in\mathfrak{g}_{j+1}$ for all $j$.
###### Lemma 3.3.
Let $\mathbf{G}$ be a connected linear algebraic group and
$A=\sum_{j=-1}^{\infty}A_{j}\,t^{j}$ a formal connection of the first kind in
$\mathfrak{g}_{F}$. Then there exists $x\in\mathbf{G}(\mathcal{O})$ such that
$x\cdot A$ is aligned.
###### Proof.
We will inductively build a sequence $(B_{j})_{j=1}^{\infty}$ of elements of
$\mathfrak{g}$ such that the change of trivialization by
$x\vcentcolon=\lim_{n\rightarrow\infty}\prod_{j=0}^{n-1}\text{exp}(t^{n-j}\,B_{n-j})$
puts $A$ in aligned form. Let $k\in\mathbb{N}$. Suppose that we have chosen
$B_{j}\in\mathfrak{g}$ for all $j\leq k$ such that the connection
$A^{(k)}=\sum_{l=-1}^{\infty}A^{(k)}_{l}\,t^{l}$ defined by
$A^{(k)}\vcentcolon=\prod_{j=0}^{k-1}\text{exp}(t^{k-j}\,B_{k-j})\cdot A$
satisfies $A_{l}^{(k)}\in\mathfrak{g}_{l+1}$ for all $l<k$. Notice that the
base case $k=0$ is trivial and that we have $A^{(k)}_{-1}=A_{-1}$. Let’s try
to determine $B_{k+1}$.
Recall that $\text{exp}(t^{k+1}\,B_{k+1})\equiv
1+t^{k+1}\,B_{k+1}\;(\text{mod}\;t^{k+2})$. By an elementary matrix
computation (choose an embedding of
$\mathbf{G}\hookrightarrow\mathbf{\text{GL}_{n}}$), we can see that
$\text{exp}(t^{k+1}B_{k+1})\cdot
A^{(k)}\equiv\sum_{l=-1}^{k-1}A^{(k)}_{l}\,t^{l}+[A^{(k)}_{k}-(ad(A_{-1})-(k+1))B_{k+1}]\,t^{k}\;\;(\text{mod}\;t^{k+1})$
Decompose $\mathfrak{g}$ into generalized $ad(A_{-1})$ eigenspaces
$\mathfrak{g}=\bigoplus_{\lambda}\mathfrak{g}_{\lambda}$. By definition the
operator $ad(A_{-1})-(k+1)$ restricts to an automorphism of
$\mathfrak{g}_{\lambda}$ for all $\lambda\neq k+1$. In particular, we can
choose $B_{k+1}\in\mathfrak{g}$ such that
$A^{(k)}_{k}-(ad(A_{-1})-(k+1))B_{k+1}$ is in $\mathfrak{g}_{k+1}$. This
concludes the induction step. It follows by construction that the gauge
transformation by
$x\vcentcolon=\lim_{n\rightarrow\infty}\prod_{j=0}^{n-1}\text{exp}(t^{n-j}\,B_{n-j})$
puts $A$ in aligned form. ∎
###### Remark 3.4.
Any aligned connection is actually in
$\mathfrak{g}\otimes\mathbf{k}[t,t^{-1}]$. The coefficient with largest
exponent is $\left(x\cdot A\right)_{j}t^{j}$, where $j+1$ is the biggest
integer eigenvalue of $ad\left(\,(A_{-1})_{s}\,\right)$. We denote this number
by $k(A_{-1})\vcentcolon=j+1$ for further reference. In order to determine the
resulting aligned connection, we only need to multiply by $k(A_{-1})$-many
exponentials in the proof above. Therefore the aligned form only depends on
$A_{j}$ for $-1\leq j\leq k(A_{-1})$. Note that $k(A_{-1})$ can drastically
change if we multiply $A$ by a scalar in $\mathbf{k}$. This reflects the fact
that gauge transformations are not $\mathbf{k}$-linear.
###### Example 3.5.
Suppose that $\text{ad}\left(\,\left(A_{-1}\right)_{s}\,\right)$ does not have
any integer eigenvalues. Then the aligned connection will be in canonical
form.
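To make the notion of alignment concrete, take $\mathbf{G}=\mathbf{SL}_{2}$ and $A_{-1}=H$ as in Subsection 2.2. The eigenvalues of $\text{ad}(H)$ are $-2,0,2$, with eigenspaces $\mathbf{k}Y$, $\mathbf{k}H$ and $\mathbf{k}X$. A connection of the first kind with polar part $t^{-1}H$ is therefore aligned exactly when it has the shape $A=t^{-1}H+a\,tX$ for some $a\in\mathbf{k}$.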
###### Theorem 3.6.
Let $\mathbf{G}$ be a connected semisimple algebraic group. Let
$A\in\mathfrak{g}_{F}$ be a regular connection. Then, there exists
$x\in\mathbf{G}(\overline{F})$ such that $x\cdot A=t^{-1}C$ for some
$C\in\mathfrak{g}$.
###### Proof.
This is a special case of [Sch07][Thm. 4.2]. We include the proof with some
modifications that will suit our needs in subsequent subsections. By Lemma
3.3, we can assume that $A$ is an aligned connection in $\mathfrak{g}_{F}$.
Let $(A_{-1})_{s}$ be the semisimple part of $A_{-1}$. Choose a maximal torus
$\mathbf{T}$ of $\mathbf{G}$ such that the corresponding Cartan subalgebra
$\text{Lie}(\mathbf{T})$ contains $(A_{-1})_{s}$. Fix a choice of positive
roots $\Phi^{+}$ of $\mathbf{G}$ relative to $\mathbf{T}$. Let $\Delta$ be the
subset of simple roots. Choose a basis for $\mathbf{k}$ as a vector space over
$\mathbb{Q}$. Suppose that $1$ is one of the basis elements. Let
$\pi\vcentcolon\mathbf{k}\longrightarrow\mathbb{Q}$ be the corresponding
projection. We can define $\tau$ in $\text{Lie}(\mathbf{T})$ given by
$\tau(\alpha)=\pi(\alpha((A_{-1})_{s}))$ for all $\alpha\in\Delta$.
There exists $b\in\mathbb{N}$ such that $b\tau$ is in the cocharacter lattice
of $\mathbf{T}$. We let $\mu\vcentcolon=b\tau$ be the corresponding
cocharacter. Recall from the preliminaries that we have a $b$-lift
$\tilde{A}=\sum_{j=-1}^{\infty}bA_{j}\,t^{bj+b-1}$. We can assume that we are
working with $\tilde{A}$ by passing to the $b$-ramified cover. We claim that
$t^{-\mu}\cdot\tilde{A}$ is in canonical form. In order to show this, it is
convenient to use the $ad$ representation and view everything as matrices in
$\text{End}(\mathfrak{g})$. The root decomposition
$\mathfrak{g}=\bigoplus_{\alpha\in\Phi}\mathfrak{g}_{\alpha}$ gives us the
spectral decomposition of $({A}_{-1})_{s}$.
We can view $\text{Ad}(t^{-\mu})$ as a matrix in
$\text{GL}(\mathfrak{g}_{F})$. $\text{Ad}(t^{-\mu})$ acts as the scalar
$t^{-\langle\mu,\beta\rangle}$ on the root space $\mathfrak{g}_{\beta}$. By
assumption $A$ is aligned. This means that $A_{j}$ is in a sum of root spaces
$\mathfrak{g}_{\beta}$ on which $\text{ad}\left(\,(A_{-1})_{s}\,\right)$ has eigenvalue $j+1$. These are
the root spaces $\mathfrak{g}_{\beta}$ where $\beta((A_{-1})_{s})=j+1$. By the
construction of $\mu$, we know that
$\langle\mu,\beta\rangle=b\beta((A_{-1})_{s})$ whenever $\beta((A_{-1})_{s})$
is an integer. Therefore, $\text{Ad}(t^{-\mu})\,A_{j}=t^{-bj-b}A_{j}$. We
conclude that
$t^{-\mu}\cdot\tilde{A}=t^{-\mu}\cdot\left(\sum_{j=-1}^{\infty}bA_{j}\,t^{bj+b-1}\right)=\left(\sum_{j=-1}^{\infty}bA_{j}\right)\,t^{-1}\,+\,\frac{d}{dt}\,\left(t^{-\mu}\right)\,t^{\mu}$
For the last term $\frac{d}{dt}\,\left(t^{-\mu}\right)\,t^{\mu}$ we are
performing the calculation in $\text{End}(\mathfrak{g}_{F})$. A matrix
computation yields
$\frac{d}{dt}\,\left(t^{-\mu}\right)\,t^{\mu}=-\mu\,t^{-1}$. The theorem
follows. ∎
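Let us illustrate the twisting step with the aligned $\mathbf{SL}_{2}$ connection $A=t^{-1}H+a\,tX$ from the example above. Here we may take $\tau=\mu=H$ and $b=1$, and indeed
$t^{-H}\cdot A\;=\;\text{Ad}(t^{-H})\,A\,-\,H\,t^{-1}\;=\;t^{-1}H\,+\,a\,t^{-1}X\,-\,t^{-1}H\;=\;a\,t^{-1}X$
which is in canonical form with nilpotent residue $aX$.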
###### Remark 3.7.
Babbitt and Varadarajan prove the theorem over the ground field
$\mathbf{k}=\mathbb{C}$ using the analytic theory of regular singular
connections (see section 8 of [BV83]). In the analytic setting, we need to
pass to a ramified cover only when the monodromy class of the connection is
not in the image of the exponential map. For example all conjugacy classes in
$\mathbf{GL}_{n}$ are exponential, so we don’t need to pass to a ramified
cover to reduce regular $\mathbf{GL_{n}}$-connections. This latter fact can
also be proven algebraically using the center of $\mathbf{GL}_{n}$. See the
argument in pages 19-22 of [BV83].
###### Remark 3.8.
We only need to fix a rational basis of
$\text{span}_{\mathbb{Q}}\\{\alpha((A_{-1})_{s})\,\vcentcolon\,\alpha\in\Delta\\}$
in the proof above. So the argument is constructive.
We can be a bit more careful in the proof of Theorem 3.6. This way we can get
a uniform bound for the ramification needed. We record this as a small lemma.
###### Lemma 3.9.
We can always choose $b\leq\text{hgt}(\mathfrak{g})\cdot I(\mathbf{G})$ in the
proof of Theorem 3.6.
###### Proof.
Set $\tau(\alpha)$ to be the best approximation of $\pi(\alpha((A_{-1})_{s}))$
in $\frac{1}{\text{hgt}(\mathfrak{g})}\mathbb{Z}$. By the definition of
$\text{hgt}(\mathfrak{g})$, it follows that $\tau(\beta)=\beta((A_{-1})_{s})$
whenever $\beta((A_{-1})_{s})$ is an integer. So the proof of Theorem 3.6
still goes through with this choice of $\tau$. By construction we have
$\text{hgt}(\mathfrak{g})\tau\in Q_{\mathbf{G}}$. Then the definition of
$I(\mathbf{G})$ implies that $\text{hgt}(\mathfrak{g})I(\mathbf{G})\tau\in
X_{*}(\mathbf{T})$. Hence we can choose
$b=\text{hgt}(\mathfrak{g})I(\mathbf{G})$. ∎
Choose a maximal torus $\mathbf{T}\subset\mathbf{G}$. Let $W$ be the Weyl
group of $\mathbf{G}$ with respect to $\mathbf{T}$. Fix a projection
$\pi:\mathbf{k}\longrightarrow\mathbb{Q}$ as in the proof above. We can extend
this projection to a natural map $\pi:\text{Lie}(\mathbf{T})\cong
X_{*}(\mathbf{T})\otimes\mathbf{k}\longrightarrow
X_{*}(\mathbf{T})\otimes\mathbb{Q}$. We will once and for all fix a
fundamental domain $\mathfrak{D}$ for the action of $W$ on the set
$\Xi\vcentcolon=\left\\{C\in\text{Lie}(\mathbf{T})\,\mid\,\pi(C)=0\right\\}$.
Notice that $\Xi$ is a set of representatives for the quotient
$\text{Lie}(\mathbf{T})/\,X_{*}(\mathbf{T})\otimes\mathbb{Q}$.
It turns out that we can always choose $x$ in Theorem 3.6 so that the
semisimple part $C_{s}$ is in $\mathfrak{D}$.
###### Corollary 3.10.
Let $\mathbf{G}$ be connected semisimple with a choice of maximal torus
$\mathbf{T}\subset\mathbf{G}$. Let $A\in\mathfrak{g}_{F}$ be a regular
connection. Then, there exists $x\in\mathbf{G}(\overline{F})$ such that
$x\cdot A=t^{-1}C$ for some $C\in\mathfrak{g}$ satisfying
$C_{s}\in\mathfrak{D}$.
###### Proof.
By Theorem 3.6, we can assume that $A=t^{-1}\,C$ for some $C\in\mathfrak{g}$.
Since $\mathbf{k}$ is algebraically closed, we can conjugate the semisimple
element $C_{s}$ to the torus $\mathbf{T}$. By applying the gauge
transformation $t^{-\pi(C_{s})}$, we can assume that $\pi(C_{s})=0$. Finally,
we can conjugate by an element of $W$ to obtain $C_{s}\in\mathfrak{D}$. ∎
The following proposition will be crucial in establishing uniqueness of
canonical reductions in general.
###### Proposition 3.11.
Let $\mathbf{G}$ be connected semisimple with a choice of maximal torus
$\mathbf{T}\subset\mathbf{G}$. Let $C,D\in\mathfrak{g}$ with
$C_{s},D_{s}\in\mathfrak{D}$. Suppose that there exists
$x\in\mathbf{G}(\overline{F})$ such that
$x\cdot\left(t^{-1}\,C\right)=t^{-1}\,D$. Then we have $C_{s}=D_{s}$. Moreover
$x$ is a $\mathbf{k}$-point in the centralizer
$Z_{\mathbf{G}}(C_{s})(\mathbf{k})$.
###### Proof.
By lifting everything to a ramified cover, we can assume for simplicity that
$x\in\mathbf{G}(F)$. Choose a faithful representation
$\mathbf{G}\hookrightarrow\mathbf{\text{GL}_{n}}$. We can view
$x\in\mathbf{\text{GL}_{n}}(F)$ and $C,D\in\mathfrak{gl}_{n}$. Let’s consider
the linear transformation $U$ in $\text{End}(\mathfrak{gl}_{n})$ given by
$U\,v=Dv\,-\,vC$ for all $v\in\mathfrak{gl}_{n}$. Notice that we can write
$U=U_{s}+U_{n}$, where
$U_{s}\,v\vcentcolon=D_{s}v-vC_{s}\,,\qquad U_{n}\,v\vcentcolon=D_{n}v-vC_{n}$
We know that $C_{s}$ and $D_{s}$ can be simultaneously diagonalized. Therefore
$U_{s}$ is semisimple. The eigenvalues of $U_{s}$ are differences of
eigenvalues of $C_{s}$ and $D_{s}$. Since $\pi(C_{s})=\pi(D_{s})=0$, we
conclude that $0$ is the only possible rational eigenvalue of $U_{s}$. By
definition, we have that $U_{n}$ is nilpotent and $[U_{s},U_{n}]=0$. We
conclude that $U=U_{s}+U_{n}$ is the additive Jordan decomposition of $U$. In
particular the set of eigenvalues of $U$ is the same as the set of eigenvalues
of $U_{s}$. Therefore, $0$ is the only possible rational eigenvalue of $U$.
The condition $x\cdot\left(t^{-1}\,C\right)=t^{-1}\,D$ can be expressed as
$\frac{d}{dt}x=t^{-1}U\,x$. Here we are viewing $x$ as an invertible matrix in
$\mathfrak{gl}_{n}(F)$. Set $x=\sum_{j=r}^{\infty}x_{j}\,t^{j}$. Then this
condition reads
$\sum_{j=r}^{\infty}jx_{j}\,t^{j-1}=\sum_{j=r}^{\infty}U\,x_{j}\,t^{j-1}$
Hence we have $jx_{j}=U\,x_{j}$ for all $j$. Since $0$ is the only possible
rational eigenvalue of $U$, we conclude that $x_{j}=0$ for all $j\neq 0$.
Therefore, $x=x_{0}\in\mathbf{G}(\mathbf{k})$. Hence the relation
$x\cdot\left(t^{-1}\,C\right)=t^{-1}\,D$ implies that $\text{Ad}(x)\,C=D$. By
uniqueness of Jordan decomposition for $\mathbf{\text{GL}_{n}}$, this means
that $\text{Ad}(x)\,C_{s}=D_{s}$.
It is well-known that $\text{Lie}(\mathbf{T})/W$ parametrizes semisimple
conjugacy classes in $\mathfrak{g}$ (see [CM93] Chapter 2). In particular,
$\mathfrak{D}$ is a set of representatives of conjugacy classes of semisimple
elements that map to $0$ under $\pi$. We conclude that we must have
$C_{s}=D_{s}$. Then $\text{Ad}(x)\,C_{s}=D_{s}$ implies that $x\in
Z_{\mathbf{G}}(C_{s})(\mathbf{k})$. ∎
### 3.2 Regular connections for tori and reductive groups
###### Proposition 3.12.
Let $\mathbf{G}$ be a torus and $A=\sum_{j=-1}^{\infty}A_{j}\,t^{j}$ a formal
connection of the first kind. Then there exists $x\in\mathbf{G}(\mathcal{O})$
such that $x\cdot A=t^{-1}A_{-1}$. Moreover, there is a unique such $x$ with
$x\equiv 1\,\left(mod\;t\right)$.
###### Proof.
Since $\mathbf{k}$ is algebraically closed, $\mathbf{G}$ is split. Therefore
the theorem follows from the special case $\mathbf{G}=\mathbb{G}_{m}$. We are
reduced to an elementary computation. Let
$v=\sum_{j=0}^{\infty}A_{j}t^{j}\;\in\,\mathcal{O}$. It suffices to find
$u=\sum_{j=0}^{\infty}B_{j}t^{j}\in\mathcal{O}^{\times}$ with
$\frac{d}{dt}(u)=-vu$. By expanding we see that we want
$(j+1)B_{j+1}=-\sum_{l=0}^{j}A_{l}B_{j-l}$ for all $j\geq 0$. This is a
recurrence we can solve, because we are in characteristic $0$. We can set the
initial condition $B_{0}=1$ and then the rest of the coefficients are uniquely
determined. ∎
###### Example 3.13.
For $\mathbf{G}=\mathbb{G}_{m}$ we can phrase this result concretely in terms
of differential equations. In this case we have an equation
$\frac{d}{dt}x=Ax$, where $A\in\mathbf{k}(({t}))$ is a Laurent series with at
worst a simple pole. The statement says that we can do a multiplicative change
of variables $y=Bx$ for some power series $B\in\mathcal{O}^{\times}$ such that
the equation becomes $\frac{d}{dt}y=\frac{a}{t}y$ for some scalar
$a\in\mathbf{k}$.
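The recurrence in the proof of Proposition 3.12 is directly computable. The following Python sketch (an illustration only; the function name and the truncation convention are ours) produces the first coefficients $B_{j}$ of the gauge transformation $u$ for $\mathbb{G}_{m}$:

```python
from fractions import Fraction

def gauge_coefficients(A, n_terms):
    """Solve (j+1) B_{j+1} = -sum_{l=0}^{j} A_l B_{j-l} with B_0 = 1.

    A lists the coefficients A_0, A_1, ... of the non-polar part v of the
    connection (coefficients beyond len(A) are treated as 0).  Returns the
    first n_terms coefficients of the series u with du/dt = -v u.
    """
    B = [Fraction(1)]
    for j in range(n_terms - 1):
        s = sum(A[l] * B[j - l] for l in range(min(j + 1, len(A))))
        B.append(-s / Fraction(j + 1))
    return B

# For v = 1 + t the solution is u = exp(-t - t^2/2) = 1 - t + (1/3) t^3 + ...
print(gauge_coefficients([Fraction(1), Fraction(1)], 4))
```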
Let’s state a uniqueness result for canonical forms of regular formal
connections for tori.
###### Proposition 3.14.
Let $\mathbf{G}$ be a torus, and let $C_{1},C_{2}\in\mathfrak{g}$. Suppose
that there exists $x\in\mathbf{G}(F)$ with
$x\cdot\left(t^{-1}\,C_{1}\right)=t^{-1}\,C_{2}$. Then, we have $x=g\,t^{\mu}$
for some cocharacter $\mu\in X_{*}(\mathbf{G})$ and some
$g\in\mathbf{G}(\mathbf{k})$. Moreover $C_{1}=C_{2}-\mu$.
###### Proof.
We will do the computation for $\mathbf{G}=\mathbb{G}_{m}$. The general case
follows from the same argument. Write $x=k\,t^{r}\,y$, where
$k\in\mathbf{k}^{\times}$ and $y=1+\sum_{j=1}^{\infty}a_{j}\,t^{j}$. Then,
$x\cdot\left(t^{-1}\,C_{1}\right)\;=\;t^{-1}\;C_{1}+rt^{-1}+dy\,y^{-1}\;=\;t^{-1}\,C_{2}$
Notice that $dy\,y^{-1}$ is in $\mathbf{k}[[{t}]]$. By looking at the
nonnegative coefficients in the equation above, we conclude that $dy=0$.
Therefore we have $y=1$. Hence $x=k\,t^{r}$, and the result follows. ∎
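In particular the residue of a canonical form for a torus is only well defined up to the cocharacter lattice: for $\mathbf{G}=\mathbb{G}_{m}$ and $n\in\mathbb{Z}$ we have $t^{n}\cdot\left(t^{-1}\,C\right)=t^{-1}\,\left(C+n\right)$. This is the reason for the normalization $\pi(C_{s})=0$ imposed in the previous subsection.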
We can patch together some of the previous results to get canonical forms
for regular connections when the group is reductive. The following corollary
is [Sch07][Thm. 4.2].
###### Corollary 3.15.
Let $\mathbf{G}$ be reductive and $A\in\mathfrak{g}_{F}$ a regular formal
connection. Then there exists $x\in\mathbf{G}(\overline{F})$ such that $x\cdot
A=t^{-1}C$ for some $C\in\mathfrak{g}$.
###### Proof.
For completeness we explain how to deduce the corollary from previous
propositions. We can assume that $A$ is of the first kind. Let
$\mathbf{Z}^{0}_{\mathbf{G}}$ be the neutral component of the center of $\mathbf{G}$.
Set $\mathfrak{z}\vcentcolon=\text{Lie}(\mathbf{Z}^{0}_{\mathbf{G}})$. Let
$\mathbf{G}_{\text{der}}$ be the derived subgroup of $\mathbf{G}$.
$\mathbf{G}_{\text{der}}$ is semisimple with Lie algebra
$\mathfrak{g}_{\text{der}}\vcentcolon=[\mathfrak{g},\mathfrak{g}]$. We have
$\mathfrak{g}=\mathfrak{g}_{\text{der}}\oplus\mathfrak{z}$. Decompose
$A=A_{\mathfrak{g}_{\text{der}}}+A_{\mathfrak{z}}$. By the semisimple case
there exists $x\in\mathbf{G}_{\text{der}}(\overline{F})$ such that $x\cdot
A_{\mathfrak{g}_{\text{der}}}$ is in canonical form. Now $x\cdot A=x\cdot
A_{\mathfrak{g}_{\text{der}}}+A_{\mathfrak{z}}$. Use the result for tori to
put $A_{\mathfrak{z}}$ in canonical form and conclude. ∎
###### Remark 3.16.
By Remark 3.4, we only need to know
$k\left(\,(A_{\mathfrak{g}_{\text{der}}})_{-1}\,\right)$-many coefficients of
a connection of the first kind in order to determine its canonical form. The
bound for the ramification needed in this case is reduced to the bound for the
semisimple group $\mathbf{G}_{\text{der}}$ as explained in Lemma 3.9.
Notice that the setup before Corollary 3.10 applies to the reductive case. We
formulate the analogous statement for convenience.
###### Corollary 3.17.
Let $\mathbf{G}$ connected reductive with maximal torus
$\mathbf{T}\subset\mathbf{G}$.
1. (i)
Let $A\in\mathfrak{g}_{F}$ be a regular connection. Then there exists
$x\in\mathbf{G}(\overline{F})$ such that $x\cdot A=t^{-1}C$ for some
$C\in\mathfrak{g}$ satisfying $C_{s}\in\mathfrak{D}$.
2. (ii)
Assume that $C,D\in\mathfrak{g}$ satisfy $C_{s},D_{s}\in\mathfrak{D}$. Suppose
that there exists $x\in\mathbf{G}(\overline{F})$ such that
$x\cdot\left(t^{-1}\,C\right)=t^{-1}\,D$. Then, we have $C_{s}=D_{s}$.
Moreover $x$ is in the centralizer $Z_{\mathbf{G}}(C_{s})(\mathbf{k})$.
###### Proof.
Part (i) follows by combining Proposition 3.12 for tori and Corollary 3.10 for
semisimple groups. Part (ii) follows from the same argument as in Proposition
3.11. ∎
This corollary allows us to give a concrete parametrization of regular
$\mathbf{G}(\overline{F})$-gauge equivalence classes of formal connections.
Let $A\in\mathfrak{g}_{F}$ be a regular formal connection. Suppose that
$B=t^{-1}C$ is a connection in canonical form that is
$\mathbf{G}(\overline{F})$-gauge equivalent to $A$. Assume that
$C_{s}\in\mathfrak{D}$. By Corollary 3.17, $C_{s}$ does not depend on the
choice of canonical form $B$. Let $W$ denote the Weyl group of $\mathbf{G}$
with respect to $\mathbf{T}$. Recall that $\mathfrak{D}$ is a set of
representatives for
$\left(X_{*}(\mathbf{T})\otimes\mathbf{k}/\,\mathbb{Q}\right)/\,W$. In
particular we get a well defined element in
$\left(X_{*}(\mathbf{T})\otimes\mathbf{k}/\,\mathbb{Q}\right)/\,W$
corresponding to $C_{s}$.
###### Definition 3.18.
Let $A\in\mathfrak{g}_{F}$ be a regular formal connection as above. We define
the semisimple $\overline{F}$-monodromy of $A$ to be the element
$m^{s}_{A,\,\overline{F}}\in\left(X_{*}(\mathbf{T})\otimes\mathbf{k}/\,\mathbb{Q}\right)/\,W$
corresponding to $C_{s}$ as described above.
Let $m\in\left(X_{*}(\mathbf{T})\otimes\mathbf{k}/\,\mathbb{Q}\right)/\,W$. We
define $Z_{\mathbf{G}}(m)$ to be the centralizer in $\mathbf{G}$ of the unique
representative of $m$ in $\mathfrak{D}$. It turns out that $Z_{\mathbf{G}}(m)$
is a Levi subgroup of a parabolic in $\mathbf{G}$. It is well-known that the
Lie algebra centralizer $\text{Lie}(Z_{\mathbf{G}}(m))=\mathfrak{g}_{m}$ of a
semisimple element is the Levi component of a
parabolic subalgebra of $\mathfrak{g}$. For connectedness, we can pass to an
isogenous cover $p:\tilde{\mathbf{G}}\longrightarrow\mathbf{G}$ with simply
connected derived subgroup. Notice that
$p(Z_{\tilde{\mathbf{G}}}(m))=Z_{\mathbf{G}}(m)$. So it suffices to prove
connectedness of $Z_{\tilde{\mathbf{G}}}(m)$, which follows from [Hum95] pg.
33. Note that the isomorphism class of $Z_{\mathbf{G}}(m)$ does not depend on
the choice of projection $\pi:\mathbf{k}\longrightarrow\mathbb{Q}$ and
fundamental domain $\mathfrak{D}$. In fact, $Z_{\mathbf{G}}(m)\cong
Z_{\mathbf{G}}(C)$ for any representative $C$ such that $\text{ad}(C)$ has no
nonzero rational eigenvalues.
Corollary 3.17 implies that the nilpotent part of a canonical form $B$ that is
$\mathbf{G}(\overline{F})$-gauge equivalent to $A$ is uniquely determined up
to $Z_{\mathbf{G}}(m^{s}_{A,\,\overline{F}})$-conjugacy. We record this as a
corollary.
###### Corollary 3.19.
Fix $m\in\left(X_{*}(\mathbf{T})\otimes\mathbf{k}/\,\mathbb{Q}\right)/\,W$.
Let $\mathcal{N}_{Z_{\mathbf{G}}(m)}$ denote the nilpotent cone in the Lie
algebra of $Z_{\mathbf{G}}(m)$. There is a natural correspondence
$\left\\{\text{regular}\;A\in\mathfrak{g}_{\overline{F}}\;\text{with}\;m^{s}_{A,\,\overline{F}}=m\right\\}/\,\mathbf{G}(\overline{F})\;\;\;\longleftrightarrow{\;\;\;\;}\mathcal{N}_{Z_{\mathbf{G}}(m)}/\,Z_{\mathbf{G}}(m)$
Let $S\subset\Delta$ be a subset of simple roots. Each $\alpha\in S$ induces a
linear functional
$X_{*}(\mathbf{T})\otimes\mathbf{k}\longrightarrow\mathbf{k}$. Let
$H_{\alpha}$ be the hyperplane of $X_{*}(\mathbf{T})\otimes\mathbf{k}$ where
this functional vanishes. We denote by $\overline{H}_{\alpha}$ the image of
$H_{\alpha}$ in
$\left(X_{*}(\mathbf{T})\otimes\mathbf{k}/\,\mathbb{Q}\right)/\,W$. Define
$\overline{H}_{S}\vcentcolon=\bigcap_{\alpha\in S}\overline{H}_{\alpha}$. We
say that
$m\in\left(X_{*}(\mathbf{T})\otimes\mathbf{k}/\,\mathbb{Q}\right)/\,W$ is of
type $S$ if we have $m\in\overline{H}_{S}$ and $m\notin\overline{H}_{V}$ for
all $S\subsetneq V\subset\Delta$. Let $Q_{S}$ be the set of all $m$ of type
$S$. For any $m\in Q_{S}$, the centralizer $Z_{\mathbf{G}}(m)$ is conjugate to
the standard Levi $\mathbf{M}_{S}$ associated to the subset of simple roots
$S\subset\Delta$. We get the following rewording of the corollary above.
###### Corollary 3.20.
There is a natural correspondence
$\left\\{\text{regular formal
connections}\right\\}/\,\mathbf{G}(\overline{F})\;\;\;\longleftrightarrow{\;\;\;\;}\bigsqcup_{S\subset\Delta}Q_{S}\times\mathcal{N}_{\mathbf{M}_{S}}/\,\mathbf{M}_{S}$
This gives us a procedure to describe all regular
$\mathbf{G}(\overline{F})$-gauge equivalence classes of formal connections.
For each $S\subset\Delta$, the group $\mathbf{M}_{S}$ is connected and
reductive. The set of nilpotent orbits
$\mathcal{N}_{\mathbf{M}_{S}}/\,\mathbf{M}_{S}$ is a finite set which admits
many well studied parametrizations. For example, nilpotent orbits for
classical groups can be classified by partition diagrams as in [CM93] Chapter
5. This yields explicit canonical block decompositions for
$\mathbf{G}(\overline{F})$-gauge equivalence classes of regular connections
for classical groups.
### 3.3 Connections for unipotent groups
All connections in a unipotent group are regular. They admit canonical forms.
###### Proposition 3.21.
Let $\mathbf{G}$ be connected unipotent and let $A\in\mathfrak{g}_{F}$ be a
connection. Then, there exists $x\in\mathbf{G}(F)$ such that $x\cdot
A=t^{-1}C$ for some $C\in\mathfrak{g}$.
###### Proof.
We proceed by induction on $\text{dim}(\mathbf{G})$. Suppose that
$\text{dim}(\mathbf{G})=1$. Since $\text{char}(\mathbf{k})=0$, we know that
$\mathbf{G}\cong\mathbb{G}_{a}$. In this case the theorem follows from an
elementary computation. See Example 3.23 below.
Now assume $\text{dim}\,(\mathbf{G})\geq 2$. Since $\mathbf{k}$ is of
characteristic 0, $\mathbf{G}$ is split unipotent. In particular, the center
$\mathbf{Z}_{\mathbf{G}}$ contains a closed subgroup $\mathbf{H}$ isomorphic
to $\mathbb{G}_{a}$. Let $\text{Lie}(\mathbf{H})=\mathfrak{h}$. By induction
there exists $\overline{x}\in\mathbf{G}/\mathbf{H}(F)$ such that
$\overline{x}\cdot\overline{A}\in\mathfrak{g}/\mathfrak{h}$ is in canonical
form. We can lift $\overline{x}$ to an element $x\in\mathbf{G}(F)$ because
$H^{1}\left(F,\,\mathbf{H}(\overline{F})\right)=H^{1}\left(F,\,\overline{F}\right)=0$
(the additive version of Hilbert’s Theorem 90). By construction, we have
$x\cdot A=t^{-1}C+B$ for some $C\in\mathfrak{g}$ and $B\in\mathfrak{h}_{F}$.
Now we can use the base case for $\mathbf{H}\cong\mathbb{G}_{a}$ to put $B$
into regular canonical form. ∎
###### Remark 3.22.
Here we didn’t need to pass to a ramified cover in order to find a good
trivialization.
###### Example 3.23.
In the case of $\mathbf{G}=\mathbb{G}_{a}$, we can phrase this result
concretely in terms of differential equations. We use the embedding
$\mathbb{G}_{a}\hookrightarrow\text{GL}_{2}$, so that we can interpret the
connection as a system of differential equations
$\frac{d}{dt}x_{1}=Ax_{2},\qquad\frac{d}{dt}x_{2}=0$
Set $x_{2}=c$, where $c\in\mathbf{k}$ is a constant. We are left with the
nonhomogeneous equation $\frac{d}{dt}x_{1}=cA$ for some Laurent series $cA$.
The statement of the proposition reduces to the obvious fact that we can find
a formal antiderivative for any Laurent series with residue $0$ (i.e.
$A_{-1}=0$).
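Explicitly, if $A=\sum_{j\neq-1}A_{j}\,t^{j}$ has $A_{-1}=0$, then the gauge transformation $u\vcentcolon=-\sum_{j\neq-1}\frac{1}{j+1}A_{j}\,t^{j+1}\in\mathbb{G}_{a}(F)$ satisfies $u\cdot A=A+du=0$.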
We now prove uniqueness of the canonical form up to conjugacy.
###### Proposition 3.24.
Let $\mathbf{U}$ be a unipotent group. Let $C_{1},C_{2}$ be two elements of
the Lie algebra $\mathfrak{u}\vcentcolon=\text{Lie}(\mathbf{U})$. Suppose that
there exists $x\in\mathbf{U}(F)$ such that
$x\cdot\left(t^{-1}C_{1}\right)=t^{-1}C_{2}$. Then, we have
$x\in\mathbf{U}(\mathbf{k})$.
###### Proof.
We will argue by induction on the dimension of $\mathbf{U}$. If
$\text{dim}(\mathbf{U})=1$, then $\mathbf{U}\cong\mathbb{G}_{a}$. We can write
$x=\sum_{j=m}^{\infty}a_{j}\,t^{j}$ for some $a_{j}\in\mathbf{k}$. The
hypothesis then becomes
$x\cdot\left(t^{-1}C_{1}\right)=t^{-1}C_{1}+dx=t^{-1}C_{2}$
This means that
$dx=\sum_{j=m}^{\infty}ja_{j}\,t^{j-1}=t^{-1}\left(C_{2}-C_{1}\right)$. In
particular we must have $ja_{j}=0$ for all $j\neq 0$. Hence
$x=a_{0}\in\mathbf{k}$.
Suppose that $\mathbf{U}$ is an arbitrary unipotent group. Assume that the
result holds for all unipotent groups of smaller dimension. Let $\mathbf{H}$
be a subgroup of the center $Z_{\mathbf{U}}$ of $\mathbf{U}$ such that
$\mathbf{H}\cong\mathbb{G}_{a}$ (this is possible because
$\text{char}(\mathbf{k})=0$). Let $\overline{x}\in\mathbf{U}/\mathbf{H}\,(F)$
be the image of $x$ in the quotient. By the induction hypothesis, the
proposition holds for $\mathbf{U}/\,\mathbf{H}$. Hence we have that
$\overline{x}\in\mathbf{U}/\mathbf{H}\,(\mathbf{k})$. We can lift
$\overline{x}$ to an element $v\in\mathbf{U}(\mathbf{k})$, since $\mathbf{k}$
is algebraically closed.
We can therefore write $x=vu$, with $u\in\mathbf{H}(F)$. Our assumption thus
becomes
$x\cdot\left(t^{-1}C_{1}\right)\;=\;t^{-1}\text{Ad}(v)\,\text{Ad}(u)\,C_{1}+\text{Ad}(v)du\;=\;t^{-1}C_{2}$
Since $u\in\mathbf{H}(F)\subset Z_{\mathbf{U}}(F)$, we have
$\text{Ad}(u)C_{1}=C_{1}$. After rearranging we get
$du\;=\;t^{-1}\left(\text{Ad}(v^{-1})\,C_{2}-C_{1}\right)$
The computation for $\mathbb{G}_{a}$ above implies that
$u\in\mathbf{H}(\mathbf{k})$. Therefore $x\in\mathbf{U}(\mathbf{k})$. ∎
We end this section with a determinacy result for canonical forms in the
unipotent case. Recall that the nilpotency class of a unipotent group is the
length of the upper central series. For example, a commutative unipotent group
has nilpotency class $0$.
###### Proposition 3.25.
Let $\mathbf{U}$ be a unipotent group of nilpotency class $n$. Let
$A=\sum_{j=m}^{\infty}A_{j}\,t^{j}\in\mathfrak{u}_{F}$ be a connection with
$A_{m}\neq 0$.
1. (i)
If $m>-1$, then there exists $x\in\mathbf{U}(\mathcal{O})$ such that $x\equiv
1\;\left(mod\;t^{m+1}\right)$ and $x\cdot A=0$.
2. (ii)
If $m\leq-1$, then the gauge equivalence class of $A$ is determined by $A_{j}$
for $m\leq j<n(|m|-1)$. More precisely, suppose that $B$ is another connection
with $B\equiv A\;\left(mod\;t^{k}\right)$ for some $k\geq n(|m|-1)$. Then
there exists $x\in\mathbf{U}(\mathcal{O})$ with $x\equiv
1\left(mod\;t^{k-n|m|+n+1}\right)$ such that $x\cdot A=B$.
###### Proof.
We will induct on the nilpotency class $n$. The base case $n=0$ means that
$\mathbf{U}\cong\mathbb{G}_{a}^{l}$ for some $l$. Here we can make use of the
explicit computation we have done for $\mathbb{G}_{a}$ a few times already.
Define $u_{A}\vcentcolon=-\sum_{j=0}^{\infty}\frac{1}{j+1}A_{j}\,t^{j+1}$.
We have
$u_{A}\cdot A=A+du_{A}=\sum_{j=m}^{-1}A_{j}\,t^{j}$
Now both (i) and (ii) are clear by taking $x=-u_{B}+u_{A}$ (we use $B=0$ for
part (i)).
For the induction step, let $\mathbf{U}$ be an arbitrary unipotent group of
nilpotency class $n$. We will think of $\mathbf{U}$ as embedded in the group
of upper triangular matrices of $\text{GL}_{p}$ for some $p$. By definition,
the quotient $\mathbf{U}/\,Z_{\mathbf{U}}$ of $\mathbf{U}$ by its center
$Z_{\mathbf{U}}$ has nilpotency class $n-1$. It follows from the matrix
description that we can choose a section $s$ over $\mathbf{k}$ for the
$Z_{\mathbf{U}}$-torsor $\mathbf{U}\longrightarrow\mathbf{U}/\,Z_{\mathbf{U}}$
such that $s(1)=1$ (this is a section as $\mathbf{k}$-schemes, it is not a
homomorphism).
Let’s address part (i). Let $\overline{A}$ be the image of $A$ in the quotient
$\text{Lie}(\mathbf{U}/\,Z_{\mathbf{U}})_{F}$. By the induction hypothesis,
there exists $\overline{x}\in\mathbf{U}/\,Z_{\mathbf{U}}(\mathcal{O})$ such
that $\overline{x}\equiv 1\;\left(mod\;t^{m+1}\right)$ and
$\overline{x}\cdot\overline{A}=0$. Therefore, we have $s(\overline{x})\cdot
A\in\text{Lie}(Z_{\mathbf{U}})_{F}$. Notice that we have
$s(\overline{x})\equiv s(\overline{x})^{-1}\equiv
1\;\left(mod\;t^{m+1}\right)$. It follows that
$s(\overline{x})\cdot
A\;=\;s(\overline{x})\,A\,s(\overline{x})^{-1}\;+\;d\,s(\overline{x})\;s(\overline{x})^{-1}\;\equiv
0\;\left(mod\;t^{m}\right)$
Now we can conclude by using the base case for $Z_{\mathbf{U}}$. For part
(ii), let $\overline{A}$ and $\overline{B}$ denote the images of $A$ and $B$
in the quotient. By the induction hypothesis, there exists
$\overline{x}\in\mathbf{U}/\,Z_{\mathbf{U}}(\mathcal{O})$ with
$\overline{x}\equiv 1\;\left(mod\;t^{k-(n-1)|m|+n}\right)$ such that
$\overline{x}\cdot\overline{A}=\overline{B}$. We can now write
$s(\overline{x})\cdot A=ds\left(\overline{x}\cdot\overline{A}\right)+C$ and
$B=ds\left(\overline{x}\cdot\overline{A}\right)+D$ for some
$C,D\in\text{Lie}(Z_{\mathbf{U}})_{F}$.
Notice that $s(\overline{x})\equiv s(\overline{x})^{-1}\equiv
1\;\left(mod\;t^{k-(n-1)|m|+n}\right)$. Therefore,
$s(\overline{x})\cdot
A\;=\;s(\overline{x})\,A\,s(\overline{x})^{-1}\;+\;d\,s(\overline{x})\;s(\overline{x})^{-1}\;\equiv\;A\;\left(mod\;t^{k-n|m|+n}\right)$
Since $A\equiv B\;\left(mod\;t^{k-n|m|+n}\right)$, it follows that $C\equiv
D\;\left(mod\;t^{k-n|m|+n}\right)$. Now by the base case we can find $y\in
Z_{\mathbf{U}}(\mathcal{O})$ with $y\equiv
1\;\left(mod\;t^{k-n|m|+n+1}\right)$ such that $y\cdot C=D$. We conclude that
$y\,s(\overline{x})\cdot A=B$, because $y$ is in the center. We clearly have
$y\,s(\overline{x})\equiv 1\;\left(mod\;t^{k-n|m|+n+1}\right)$, as desired. ∎
### 3.4 Regular connections for solvable groups
We fix a projection $\pi:\mathbf{k}\longrightarrow\mathbb{Q}$ as in the proof
of Theorem 3.6. For $\mathbf{T}$ a torus, we extend this projection to a
map $\pi:\text{Lie}(\mathbf{\mathbf{T}})\cong
X_{*}(\mathbf{T})\otimes\mathbf{k}\longrightarrow
X_{*}(\mathbf{T})\otimes\mathbb{Q}$.
###### Proposition 3.26.
Let $\mathbf{G}$ be of the form $\mathbf{T}\ltimes\mathbf{U}$, where
$\mathbf{T}$ is a torus and $\mathbf{U}$ is unipotent. Let
$A=A_{\mathbf{T}}+A_{\mathbf{U}}$ be a formal connection with
$A_{\mathbf{T}}\in\text{Lie}(\mathbf{T})_{F}$ a connection of the first kind
and $A_{\mathbf{U}}\in\text{Lie}(\mathbf{U})_{F}$. Let $b$ be a positive
integer such that $b\,\pi\left(\,(A_{\mathbf{T}})_{-1}\,\right)\in
X_{*}(\mathbf{T})$. Then there exists $x\in\mathbf{G}(F_{b})$ such that
$x\cdot A=t^{-1}\,C_{\mathbf{T}}+t^{-1}\,C_{\mathbf{U}}$ for some
$C_{\mathbf{T}}\in\text{Lie}(\mathbf{T})$ and
$C_{\mathbf{U}}\in\text{Lie}(\mathbf{U})$. Moreover, we can arrange that
$\pi(C_{\mathbf{T}})=0$ and $[C_{\mathbf{T}},C_{\mathbf{U}}]=0$.
###### Proof.
By the proof of Proposition 3.12, we can find $g\in\mathbf{T}(F)$ with $g\cdot
A_{\mathbf{T}}=t^{-1}\,(A_{\mathbf{T}})_{-1}$. Set
$\mu\vcentcolon=b\,\pi\left(\,(A_{\mathbf{T}})_{-1}\,\right)\in
X_{*}(\mathbf{T})$. Then we have $(t^{\frac{1}{b}\mu}\,g)\cdot
A_{\mathbf{T}}=t^{-1}\,C_{\mathbf{T}}$ for some
$C_{\mathbf{T}}\in\text{Lie}(\mathbf{T})$ with $\pi(C_{\mathbf{T}})=0$.
We can replace $A$ with $B\vcentcolon=(t^{\frac{1}{b}\,\mu}\,g)\cdot A$. We
know that $B=t^{-1}\,C_{\mathbf{T}}+B_{\mathbf{U}}$ for some
$B_{\mathbf{U}}\in\text{Lie}(\mathbf{U})_{F_{b}}$. By lifting to the
$b$-ramified cover, we can assume that
$B_{\mathbf{U}}\in\text{Lie}(\mathbf{U})_{F}$. We claim that we can find
$u\in\mathbf{U}(F)$ such that $u\cdot
B=t^{-1}\,C_{\mathbf{T}}+t^{-1}\,C_{\mathbf{U}}$ with
$C_{\mathbf{U}}\in\text{Lie}(\mathbf{U})$ and
$[C_{\mathbf{T}},C_{\mathbf{U}}]=0$. We will show this by induction on the
dimension of $\mathbf{U}$.
The base case is $\mathbf{U}=\mathbb{G}_{a}$. Then, $\mathbf{T}$ acts on
$\mathbf{U}$ by a character $\chi:\mathbf{T}\longrightarrow\mathbb{G}_{m}$.
Write
$B_{\mathbf{U}}=\sum_{j=r}^{\infty}\left(B_{\mathbf{U}}\right)_{j}\,t^{j}$.
For any $u=\sum_{j=r}^{\infty}u_{j}\,t^{j}\in\mathbf{U}(F)$, we have
$u\cdot
B=t^{-1}\,C_{\mathbf{T}}\,+\,B_{\mathbf{U}}\,-\,\sum_{j=r}^{\infty}\left(d\chi(C_{\mathbf{T}})-j\right)u_{j}\,t^{j-1}$
Since $\pi(C_{\mathbf{T}})=0$, we have
$\pi\left(d\chi(C_{\mathbf{T}})\right)=0$, so $d\chi(C_{\mathbf{T}})$ is either irrational or zero. There are two options:
1. (1)
$d\chi(C_{\mathbf{T}})\notin\mathbb{Q}$. Then, setting
$u_{j}=\frac{1}{d\chi(C_{\mathbf{T}})-j}\left(B_{\mathbf{U}}\right)_{j-1}$ we
get $u\cdot B=t^{-1}\,C_{\mathbf{T}}$.
2. (2)
$d\chi(C_{\mathbf{T}})=0$. We can set
$u_{j}=\frac{1}{d\chi(C_{\mathbf{T}})\,-\,j}\left(B_{\mathbf{U}}\right)_{j-1}$
for $j\neq 0$ and $u_{0}=0$. Then $u\cdot
B=t^{-1}\,C_{\mathbf{T}}+t^{-1}\,\left(B_{\mathbf{U}}\right)_{-1}$. Notice
that we have
$[C_{\mathbf{T}},\,\left(B_{\mathbf{U}}\right)_{-1}]=d\chi(C_{\mathbf{T}})\left(B_{\mathbf{U}}\right)_{-1}=0$.
This concludes the proof of the base case.
Let’s proceed with the induction step. We can decompose the action of the
split torus $\mathbf{T}$ on the vector space $Z_{\mathbf{U}}$ into one-
dimensional spaces. Let $\mathbf{H}\cong\mathbb{G}_{a}\leq Z_{\mathbf{U}}$ be
one of these eigenspaces. The eigenspace decomposition yields a natural
$\mathbf{T}$-equivariant section of the quotient map
$Z_{\mathbf{U}}\longrightarrow Z_{\mathbf{U}}/\,\mathbf{H}$. We claim that we
can extend this to a $\mathbf{T}$-equivariant section $s$ of the morphism of
schemes $\mathbf{U}\longrightarrow\mathbf{U}/\,\mathbf{H}$. In order to see
this claim, we can use induction on the nilpotency class to reduce to the case
when $\mathbf{U}$ has nilpotency class $1$. Notice that we can find a section
which is not necessarily $\mathbf{T}$-equivariant, since everything is
isomorphic to an affine space. Then we can use the argument in Lemma 9.4 of
[BS68] to obtain a $\mathbf{T}$-equivariant section. We can arrange that
$s$ preserves the identity by subtracting the image
$s(1_{\mathbf{U}/\,\mathbf{H}})$. Let us denote by $ds$ the induced map of
tangent spaces at the identity.
Let $\overline{B}$ be the image of $B$ in the quotient
$\text{Lie}(\mathbf{T}\ltimes\mathbf{U}/\,\mathbf{H})_{F}$. By the induction
hypothesis, we can find $\overline{u}\in\mathbf{U}/\,\mathbf{H}(F)$ such that
$\overline{u}\cdot\overline{B}=t^{-1}\,C_{\mathbf{T}}+t^{-1}\,\overline{D}$
for some $\overline{D}\in\text{Lie}\left(\mathbf{U}/\,\mathbf{H}\right)$ with
$[C_{\mathbf{T}},\overline{D}]=0$. We can then write
$s(\overline{u})\cdot
B=t^{-1}\,C_{\mathbf{T}}+t^{-1}\,ds(\overline{D})+B_{\mathbf{H}}$
for some $B_{\mathbf{H}}\in\text{Lie}(\mathbf{H})_{F}$. Since $s$ is
$\mathbf{T}$-equivariant, we have $[ds(\overline{D}),\,C_{\mathbf{T}}]=0$. We
can now use the base case for $\mathbf{H}$ in order to conclude. ∎
###### Remark 3.27.
We can decompose the Lie algebra
$\mathfrak{u}\vcentcolon=\text{Lie}(\mathbf{U})$ into weight spaces
$\mathfrak{u}=\bigoplus_{i=1}^{l}\mathfrak{u}_{\chi_{i}}$. Here each $\chi_{i}$ is a
character of $\mathbf{T}$. Fix a basis $\\{\alpha_{j}\\}$ for the character
lattice $X^{*}(\mathbf{T})$. For each $i$ we can write
$\chi_{i}=\sum_{j}m^{i}_{j}\alpha_{j}$ for some integers $m^{i}_{j}$. Define
$\text{hgt}(\chi_{i})=\sum_{j}|m^{i}_{j}|$. Set $b=\underset{1\leq i\leq
l}{\text{max}}\,\text{hgt}(\chi_{i})$. If we don’t require
$\pi(C_{\mathbf{T}})=0$ and $[C_{\mathbf{T}},C_{\mathbf{U}}]=0$ in Proposition
3.26, then it suffices to pass to a $b$-ramified cover. So there is a uniform
upper bound on the ramification needed to put any regular
$\mathbf{G}$-connection into canonical form. It only depends on the solvable
group $\mathbf{G}$.
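As a toy illustration of the remark: let $\mathbf{G}=\mathbb{G}_{m}\ltimes\mathbb{G}_{a}$, where $s\in\mathbb{G}_{m}$ acts on $\mathbb{G}_{a}$ by $s\cdot u=s^{2}u$. With $\alpha$ the tautological character of $\mathbb{G}_{m}$, the only weight is $\chi_{1}=2\alpha$, so that $\text{hgt}(\chi_{1})=2$ and $b=2$. The remark then says that a $2$-ramified cover always suffices to bring a regular connection on this group into the weaker canonical form.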
Let us prove a uniqueness result for regular canonical forms in the solvable
case.
###### Proposition 3.28.
Let $\mathbf{G}$ be of the form $\mathbf{T}\ltimes\mathbf{U}$ as above. Let
$C=t^{-1}\,C_{\mathbf{T}}+t^{-1}\,C_{\mathbf{U}}$ and
$D=t^{-1}\,D_{\mathbf{T}}+t^{-1}\,D_{\mathbf{U}}$ be two regular canonical
connections with $C_{\mathbf{T}},D_{\mathbf{T}}\in\text{Lie}(\mathbf{T})$ and
$C_{\mathbf{U}},D_{\mathbf{U}}\in\text{Lie}(\mathbf{U})$. Suppose that
$\pi(C_{\mathbf{T}})=\pi(D_{\mathbf{T}})=0$ and
$[C_{\mathbf{T}},C_{\mathbf{U}}]=[D_{\mathbf{T}},D_{\mathbf{U}}]=0$. If there
exists $x\in\mathbf{G}(\overline{F})$ such that $x\cdot C=D$, then in fact
$C_{\mathbf{T}}=D_{\mathbf{T}}$. Moreover, $x$ is in the centralizer
$Z_{\mathbf{G}}(C_{\mathbf{T}})(\mathbf{k})$ of $C_{\mathbf{T}}$.
###### Proof.
By lifting to a ramified cover, we can assume that $x\in\mathbf{G}(F)$. Write
$x=x_{\mathbf{U}}\,x_{\mathbf{T}}$ with $x_{\mathbf{U}}\in\mathbf{U}(F)$ and
$x_{\mathbf{T}}\in\mathbf{T}(F)$. By the computation in Proposition 3.14
applied to $\mathbf{T}$, we get that $x_{\mathbf{T}}\in\mathbf{T}(\mathbf{k})$
and $C_{\mathbf{T}}=D_{\mathbf{T}}$. The same proof as in Proposition 3.11
implies that $x\in\mathbf{G}(\mathbf{k})$ and
$\text{Ad}(x)C_{\mathbf{T}}=D_{\mathbf{T}}$. Since
$C_{\mathbf{T}}=D_{\mathbf{T}}$, this means that $x\in
Z_{\mathbf{G}}(C_{\mathbf{T}})(\mathbf{k})$. ∎
We conclude this section with a determinacy result for regular connections in
the case of solvable groups. But first we need to set up some notation. Let
$\mathbf{G}=\mathbf{T}\ltimes\mathbf{U}$ be solvable. We have an action of the
split torus $\mathbf{T}$ on the Lie algebra
$\mathfrak{u}\vcentcolon=\text{Lie}(\mathbf{U})$ via the adjoint
representation. We can decompose this representation into weight spaces
$\mathfrak{u}=\bigoplus_{i=1}^{l}\mathfrak{u}_{\chi_{i}}$ for some finite set
$\\{\chi_{1},\chi_{2},...,\chi_{l}\\}$ of characters
$\chi_{i}:\mathbf{T}\longrightarrow\mathbb{G}_{m}$.
Suppose that we have a formal connection $A=A^{\mathbf{T}}+A^{\mathbf{U}}$,
with $A^{\mathbf{T}}\in\text{Lie}(\mathbf{T})_{F}$ a connection of the first
kind and $A^{\mathbf{U}}\in\text{Lie}(\mathbf{U})_{F}$. We can write
$A^{\mathbf{T}}=t^{-1}A^{\mathbf{T}}_{-1}\,+\,\sum_{j=p}^{\infty}A^{\mathbf{T}}_{j}\,t^{j}$
for some $p\geq 0$ and
$A^{\mathbf{U}}=\sum_{j=m}^{\infty}A^{\mathbf{U}}_{j}\,t^{j}$ for some
$m\in\mathbb{Z}$. Let $b$ be a positive integer such that
$\mu\vcentcolon=b\,\pi\left(\,A^{\mathbf{T}}_{-1}\,\right)$ is in
$X_{*}(\mathbf{T})$. Define $L$ to be
$L\vcentcolon=\text{max}\left(\left\\{\frac{1}{b}\langle\mu,\,\chi_{i}\rangle\right\\}_{i=1}^{l}\cup\\{0\\}\right)$
###### Proposition 3.29.
Keep the same notation as in the paragraph above. Assume that $\mathbf{U}$
has nilpotency class $n$.
1. (i)
Suppose that $m>L-1$. Then there exists $x\in\mathbf{G}(\mathcal{O})$ with
$x\cdot A=t^{-1}A^{\mathbf{T}}_{-1}$. More precisely, there exists
$x_{\mathbf{T}}\in\mathbf{T}(\mathcal{O})$ with $x_{\mathbf{T}}\equiv
1_{\mathbf{T}}\,\left(mod\;t^{p+1}\right)$ and
$x_{\mathbf{U}}\in\mathbf{U}(\mathcal{O})$ with $x_{\mathbf{U}}\equiv
1_{\mathbf{U}}\,\left(mod\;t^{m+1}\right)$ such that
$(x_{\mathbf{U}}x_{\mathbf{T}})\cdot A=t^{-1}A^{\mathbf{T}}_{-1}$.
2. (ii)
Suppose that $m\leq L-1$. The $\mathbf{G}(F)$-gauge equivalence class of $A$
is determined by the coefficients $A_{j}^{\mathbf{T}}$ for $-1\leq
j<(n+1)(|m|-1)+L$ and $A_{j}^{\mathbf{U}}$ for $m\leq j<n(|m|-1)+L$. More
precisely, suppose that there is another connection $B$ and positive integer
$k\geq n(|m|-1)+L$ with $B^{\mathbf{T}}\equiv
A^{\mathbf{T}}\,\left(mod\;t^{k+|m|}\right)$ and $B^{\mathbf{U}}\equiv
A^{\mathbf{U}}\,\left(mod\;t^{k}\right)$. Then, there exists
$x\in\mathbf{G}(\mathcal{O})$ with $x\equiv
1\,\left(mod\;t^{k-n|m|+n+1}\right)$ such that $x\cdot A=B$.
###### Proof.
1. (i)
By assumption $A^{\mathbf{T}}\equiv
t^{-1}A^{\mathbf{T}}_{-1}\,\left(mod\;t^{p}\right)$. The proof of Proposition
3.12 shows that there exists $x_{\mathbf{T}}\in\mathbf{T}(\mathcal{O})$ with
$x_{\mathbf{T}}\equiv 1\,\left(mod\;t^{p+1}\right)$ such that
$x_{\mathbf{T}}\cdot A^{\mathbf{T}}=t^{-1}A^{\mathbf{T}}_{-1}$. Set
$C\vcentcolon=x_{\mathbf{T}}\cdot A$. We can write
$C=t^{-1}A_{-1}^{\mathbf{T}}+\text{Ad}(x_{\mathbf{T}})A^{\mathbf{U}}$. In
order to ease notation, set
$C^{\mathbf{U}}\vcentcolon=\text{Ad}(x_{\mathbf{T}})A^{\mathbf{U}}$. Since
$A^{\mathbf{U}}\equiv 0\,\left(mod\;t^{m}\right)$, we have
$C^{\mathbf{U}}\equiv 0\,\left(mod\;t^{m}\right)$. We claim that there exists
$x\in\mathbf{U}(\mathcal{O})$ with $x\equiv
1_{\mathbf{U}}\,\left(mod\;t^{m+1}\right)$ such that $x\cdot
C=t^{-1}A^{\mathbf{T}}_{-1}$. This claim finishes the proof of part (i).
In order to prove the claim, we will induct on the nilpotency class of
$\mathbf{U}$. The base case $n=0$ means that
$\mathbf{U}\cong\mathbb{G}_{a}^{d}$ for some $d$. We can decompose into
eigenspaces and look at each coordinate separately in order to reduce to the
case $d=1$. Then there is a single weight space $\mathfrak{u}_{\chi_{i}}$.
This case amounts to solving a recurrence as in the base case for the proof of
Proposition 3.26. We want to find $x=\sum_{j=0}^{\infty}t^{j}u_{j}$ satisfying
$C^{\mathbf{U}}_{j-1}=\left(d\chi_{i}(A_{-1}^{\mathbf{T}})-j\right)u_{j}$
If $j\leq m$ then $C^{\mathbf{U}}_{j-1}=0$ by assumption. So we can set
$u_{j}=0$. If $j\geq m+1$, then we have
$\pi\left(d\chi_{i}(A^{\mathbf{T}}_{-1})\right)-j=\frac{1}{b}\langle\mu,\chi_{i}\rangle-j\leq
L-m-1$
By assumption $L-m-1<0$, so we must have $d\chi_{i}(A^{\mathbf{T}}_{-1})-j\neq
0$. Hence we can set
$u_{j}=\frac{1}{d\chi_{i}(A^{\mathbf{T}}_{-1})\,-\,j}\,C^{\mathbf{U}}_{j-1}$.
The base case follows.
For the induction step, let $Z_{\mathbf{U}}$ denote the center of
$\mathbf{U}$. Let $s$ be a $\mathbf{T}$-equivariant section of the quotient
$\mathbf{U}\longrightarrow\mathbf{U}/\,Z_{\mathbf{U}}$, as in the proof of
Proposition 3.26. Let $\overline{C}$ be the image of $C$ in the quotient
$\text{Lie}(\mathbf{T}\ltimes\mathbf{U}/\,Z_{\mathbf{U}})_{F}$. By the
induction hypothesis, there exists
$\overline{x}\in\mathbf{U}/\,Z_{\mathbf{U}}(\mathcal{O})$ such that
$\overline{x}\equiv 1\;\left(mod\;t^{m+1}\right)$ and
$\overline{x}\cdot\overline{C}=t^{-1}A^{\mathbf{T}}_{-1}$. We must then have
$s(\overline{x})\cdot C=t^{-1}A^{\mathbf{T}}_{-1}+D_{Z_{\mathbf{U}}}$ for some
$D_{Z_{\mathbf{U}}}\in\text{Lie}(Z_{\mathbf{U}})_{F}$. By definition
$s(\overline{x})\cdot
C\;=\;t^{-1}\text{Ad}(s(\overline{x}))A^{\mathbf{T}}_{-1}\,+\,\text{Ad}(s(\overline{x}))C^{\mathbf{U}}\,+\,ds(\overline{x})s(\overline{x})^{-1}$
We know that $s(\overline{x})\equiv s(\overline{x})^{-1}\equiv
1\;\left(mod\;t^{m+1}\right)$. Also by assumption
$C^{\mathbf{U}}\in\mathfrak{u}_{\mathcal{O}}$. It follows that
$s(\overline{x})\cdot
C\;\equiv\;t^{-1}A^{\mathbf{T}}_{-1}\,+\,C^{\mathbf{U}}\;\equiv\;t^{-1}A^{\mathbf{T}}_{-1}\;\left(mod\;t^{m}\right)$
Therefore $D_{Z_{\mathbf{U}}}\equiv 0\;\left(mod\;t^{m}\right)$. Now we can
conclude by using the base case for $Z_{\mathbf{U}}$.
2. (ii)
The hypothesis implies that $B^{\mathbf{T}}_{-1}=A^{\mathbf{T}}_{-1}$. The
proof of Proposition 3.12 shows that there exists
$x_{\mathbf{T}}\in\mathbf{T}(\mathcal{O})$ with $x_{\mathbf{T}}\equiv
1\,\left(mod\;t^{k+|m|}\right)$ such that $x_{\mathbf{T}}\cdot
A^{\mathbf{T}}=B^{\mathbf{T}}$. Set $C\vcentcolon=x_{\mathbf{T}}\cdot A$. We
have $C=B^{\mathbf{T}}+\text{Ad}(x_{\mathbf{T}})A^{\mathbf{U}}$. Define
$C^{\mathbf{U}}\vcentcolon=\text{Ad}(x_{\mathbf{T}})A^{\mathbf{U}}$.
We know that $C^{\mathbf{U}}\equiv A^{\mathbf{U}}\,\left(mod\;t^{k}\right)$,
because $x_{\mathbf{T}}\equiv 1\,\left(mod\;t^{k+|m|}\right)$ and
$A^{\mathbf{U}}\in t^{m}\mathfrak{u}_{\mathcal{O}}$. Therefore
$C^{\mathbf{U}}\equiv B^{\mathbf{U}}\,\left(mod\;t^{k}\right)$ by assumption.
We claim that there exists $u\in\mathbf{U}(\mathcal{O})$ with $u\equiv
1\;\left(mod\;t^{k-n|m|+n+1}\right)$ such that $u\cdot C=B$. This claim
concludes the proof of part (ii). In order to prove the claim, we will induct
on the nilpotency class of $\mathbf{U}$. The base case $n=0$ follows from an
argument similar to the one for part (i), we omit the details.
For the induction step, let $Z_{\mathbf{U}}$ and $s$ be as in part (i). Let
$\overline{C}$ and $\overline{B}$ denote the images of $C$ and $B$ in the
quotient $\text{Lie}(\mathbf{T}\ltimes\mathbf{U}/\,Z_{\mathbf{U}})_{F}$. By
the induction hypothesis, there exists
$\overline{x}\in\mathbf{U}/\,Z_{\mathbf{U}}(\mathcal{O})$ with
$\overline{x}\equiv 1\;\left(mod\;t^{k-(n-1)|m|+n}\right)$ such that
$\overline{x}\cdot\overline{C}=\overline{B}$. We can now write
$s(\overline{x})\cdot C=ds\left(\overline{B}\right)+E_{Z_{\mathbf{U}}}$ and
$B=ds\left(\overline{B}\right)+K_{Z_{\mathbf{U}}}$ for some
$E_{Z_{\mathbf{U}}},K_{Z_{\mathbf{U}}}\in\text{Lie}(Z_{\mathbf{U}})_{F}$.
By definition
$s(\overline{x})\cdot
C\;=\;t^{-1}\text{Ad}(s(\overline{x}))B^{\mathbf{T}}\,+\,\text{Ad}(s(\overline{x}))C^{\mathbf{U}}\,+\,ds(\overline{x})s(\overline{x})^{-1}$
We know that $s(\overline{x})\equiv s(\overline{x})^{-1}\equiv
1\;\left(mod\;t^{k-(n-1)|m|+n}\right)$. Since $|m|\geq 1$, we conclude that
$ds\left(\overline{B}\right)+E_{Z_{\mathbf{U}}}\;=\;s(\overline{x})\cdot
C\;\equiv\;B^{\mathbf{T}}\,+\,C^{\mathbf{U}}\;=\;C\;\left(mod\;t^{k-n|m|+n}\right)$
Since $k\geq k-n|m|+n$, we have $C\equiv B\;\left(mod\;t^{k-n|m|+n}\right)$.
It follows that $E_{Z_{\mathbf{U}}}\equiv
K_{Z_{\mathbf{U}}}\;\left(mod\;t^{k-n|m|+n}\right)$. Now by the base case we
can find $y\in Z_{\mathbf{U}}(\mathcal{O})$ with $y\equiv
1\;\left(mod\;t^{k-n|m|+n+1}\right)$ such that $(y\,s(\overline{x}))\cdot
C=B$. We have $y\,s(\overline{x})\equiv 1\;\left(mod\;t^{k-n|m|+n+1}\right)$,
as desired.
∎
###### Remark 3.30.
Suppose that $\langle\mu,\chi_{i}\rangle>0$ for all $i$. It follows from the
proof above that we can relax further the conditions on the coefficients of
$A^{\mathbf{U}}$. Similarly, we can obtain sharper conditions for the
coefficients of $A^{\mathbf{T}}$ in the case $0\leq m\leq L-1$. We leave the
details of these refinements to the interested reader.
###### Remark 3.31.
If $L=0$, then the statement simplifies and we recover conditions similar to
the unipotent case (Proposition 3.25).
### 3.5 Regular connections for arbitrary linear algebraic groups
###### Theorem 3.32.
Let $\mathbf{G}$ be a connected linear algebraic group. Let
$A\in\mathfrak{g}_{F}$ be a regular connection. Fix a Levi subgroup
$\mathbf{L}$ and maximal torus $\mathbf{T}\subset\mathbf{L}$. Then there
exists $x\in\mathbf{G}(\overline{F})$ such that $x\cdot A=t^{-1}C$ for some
$C\in\mathfrak{g}$. Moreover, such $x$ can be chosen so that the semisimple
part $C_{s}$ of the Levi component satisfies $C_{s}\in\mathfrak{D}$ and
$[C_{s},C]=0$.
###### Proof.
We may assume that $A$ is of the first kind. Let $\mathbf{U}\subset\mathbf{G}$ be the
unipotent radical of $\mathbf{G}$ with Lie algebra $\mathfrak{u}$. Let
$\mathfrak{l}$ be the Lie algebra of $\mathbf{L}$. We know that
$\mathbf{G}=\mathbf{L}\ltimes\mathbf{U}$, and so
$\mathfrak{g}=\mathfrak{l}\oplus\mathfrak{u}$. Decompose
$A=A_{\mathfrak{l}}+A_{\mathfrak{u}}$. By the reductive group case, there
exists $x\in\mathbf{L}(\overline{F})$ such that $x\cdot
A_{\mathfrak{l}}=t^{-1}C$ for some $C\in\mathfrak{l}$ satisfying
$C_{s}\in\mathfrak{D}$.
Let $C_{n}\in\mathfrak{l}$ denote the nilpotent part of $C$. Let $\mathbf{E}$
be the neutral component of the centralizer $Z_{\mathbf{T}}(C_{n})$ of $C_{n}$
in $\mathbf{T}$. Note that $\mathbf{E}$ is a subtorus of $\mathbf{T}$ and
$C_{s}\in\text{Lie}(\mathbf{E})$. Since $\text{char}(\mathbf{k})=0$, there is
a unique connected one-dimensional unipotent subgroup $\mathbf{N}$ of
$\mathbf{L}$ with $C_{n}\in\text{Lie}(\mathbf{N})$. We have that $x\cdot A$ is
a formal connection for the solvable group
$(\mathbf{E}\times\mathbf{N})\ltimes\mathbf{U}$. Now the result follows from
the solvable case (Proposition 3.26). ∎
###### Remark 3.33.
In the beginning of the proof above, let $X$ denote the semisimple part of
$(A_{\mathfrak{l}})_{-1}$. After conjugating by an element of
$\mathbf{L}(\mathbf{k})$, we can suppose that $X\in\text{Lie}(\mathbf{T})$.
Let $b$ be a positive integer such that $\mu\vcentcolon=b\,\pi(X)$ is in
$X_{*}(\mathbf{T})$. Then, we can take $x\in G(F_{b})$ in the proof above. In
order to see this we can first apply $t^{-\frac{1}{b}\,\mu}$. So we can assume
that $\pi(X)=0$. By the proofs of Theorem 3.6 and Proposition 3.26, it follows
that we don’t need any further ramification to put $A$ into canonical form.
###### Proposition 3.34.
Let $\mathbf{G}$ be a connected linear algebraic group. Fix a Levi subgroup
$\mathbf{L}$ and maximal torus $\mathbf{T}\subset\mathbf{L}$. Let
$C,D\in\mathfrak{g}$. Write $C_{s},D_{s}$ for the semisimple parts of the Levi
components $C_{\mathfrak{l}},D_{\mathfrak{l}}$. Assume that
$C_{s},D_{s}\in\mathfrak{D}$ and $[C_{s},C]=[D_{s},D]=0$. Suppose that there
exists $x\in\mathbf{G}(\overline{F})$ such that
$x\cdot\left(t^{-1}\,C\right)=t^{-1}\,D$. Then, we have $C_{s}=D_{s}$.
Moreover $x$ is in the centralizer $Z_{\mathbf{G}}(C_{s})(\mathbf{k})$.
###### Proof.
Write $x=x_{\mathbf{U}}\,x_{\mathbf{L}}$ with
$x_{\mathbf{U}}\in\mathbf{U}(\overline{F})$ and
$x_{\mathbf{L}}\in\mathbf{L}(\overline{F})$. By Corollary 3.17 applied to
$\mathbf{L}$, we get that $x_{\mathbf{L}}\in
Z_{\mathbf{L}}(C_{s})(\mathbf{k})$ and $C_{s}=D_{s}$. The same proof as in
Proposition 3.11 shows that $x\in\mathbf{G}(\mathbf{k})$ and
$\text{Ad}(x)C_{s}=D_{s}$. Since $C_{s}=D_{s}$, we conclude that $x\in
Z_{\mathbf{G}}(C_{s})(\mathbf{k})$. ∎
Let $A\in\mathfrak{g}_{F}$ be a regular formal connection. Proposition 3.34
implies that we can define the semisimple $\overline{F}$-monodromy
$m^{s}_{A}\in\left(X_{*}(\mathbf{T})\otimes\mathbf{k}/\,\mathbb{Q}\right)/\,W$
just as we did in Definition 3.18. The same reasoning as in the reductive case
yields the following.
###### Corollary 3.35.
Fix $m\in\left(X_{*}(\mathbf{T})\otimes\mathbf{k}/\,\mathbb{Q}\right)/\,W$.
Let $\mathcal{N}_{Z_{\mathbf{G}}(m)}$ denote the nilpotent cone in the Lie
algebra of $Z_{\mathbf{G}}(m)$. There is a natural correspondence
$\left\\{\text{regular}\;A\in\mathfrak{g}_{\overline{F}}\;\text{with}\;m^{s}_{A}=m\right\\}/\,\mathbf{G}(\overline{F})\;\;\;\longleftrightarrow{\;\;\;\;}\mathcal{N}_{Z_{\mathbf{G}}(m)}/\,Z_{\mathbf{G}}(m)$
Since $\mathbf{T}$ is a split torus, we have a weight decomposition
$\mathfrak{g}=\bigoplus_{\chi\in V}\mathfrak{g}_{\chi}$ of $\mathfrak{g}$
under the adjoint action of $\mathbf{T}$. Here $V$ is a set of characters of
$\mathbf{T}$. Let $W$ be the Weyl group of $\mathbf{L}$ with respect to
$\mathbf{T}$. There is a natural action of $W$ on $V$. For any subset $S$ of
the quotient $V/\,W$, we can define the set $Q_{S}$ of elements in
$\left(X_{*}(\mathbf{T})\otimes\mathbf{k}/\,\mathbb{Q}\right)/\,W$ of type
$S$, just as we did for reductive groups. Let $Z_{\mathbf{G}}(S)$ be the
centralizer of any element in $Q_{S}$. The same reasoning as in the reductive
case yields the following concrete parametrization.
###### Corollary 3.36.
There is a natural correspondence
$\left\\{\text{regular formal
connections}\right\\}/\,\mathbf{G}(\overline{F})\;\;\;\longleftrightarrow{\;\;\;\;}\bigsqcup_{S\subset
V/\,W}Q_{S}\times\mathcal{N}_{Z_{\mathbf{G}}(S)}/\,Z_{\mathbf{G}}(S)$
### 3.6 Descent for gauge equivalence classes
All of the theorems above give us a description of connections up to
trivializations over $\text{Spec}\,\overline{F}$. We would like to get a
classification over $\text{Spec}\,F$. This amounts to a problem in Galois
cohomology.
We have an action of $\mathbf{G}(\overline{F})$ on
$\mathfrak{g}_{\overline{F}}$ that is compatible with the action of the
absolute Galois group $\text{Gal}(F)$. Choose a regular connection in
canonical form $B=t^{-1}C$ with $C_{s}\in\mathfrak{D}$ and $[C_{s},C]=0$. It
is a direct consequence of Proposition 3.34 that the centralizer of $B$ in
$\mathbf{G}(\overline{F})$ is
$Z_{G}(C)\vcentcolon=Z_{\mathbf{G}}(C)(\mathbf{k})$. Therefore, we get an
exact sequence of sheaves of sets over the etale site of $\text{Spec}\,F$
$1\longrightarrow
Z_{G}(C)\longrightarrow\mathbf{G}\longrightarrow\mathbf{G}\cdot
B\longrightarrow 1$
Here $Z_{G}(C)$ is the constant sheaf associated to the group of
$\mathbf{k}$-points of the centralizer $Z_{\mathbf{G}}(C)$ of $C$. This yields
a long exact sequence of pointed sets:
$1\longrightarrow
Z_{G}(C)\longrightarrow\mathbf{G}(F)\longrightarrow\mathbf{G}\cdot
B(F)\longrightarrow H^{1}_{\text{Gal}(F)}(Z_{G}(C))\longrightarrow
H^{1}_{\text{Gal}(F)}(\mathbf{G})$
The theorems of Tsen and Springer mentioned in the preliminaries imply that
the right-most Galois cohomology group vanishes. This means that the set of
connections over $\text{Spec}\,F$ that admit a trivialization over
$\text{Spec}\,\overline{F}$ with canonical form $t^{-1}C$ is in bijection with
$H^{1}_{\text{Gal}(F)}(Z_{G}(C))$. Since the action of $\text{Gal}(F)$ on
$Z_{G}(C)$ is trivial, $H^{1}_{\text{Gal}(F)}(Z_{G}(C))$ is in (noncanonical)
bijection with the set of conjugacy classes of elements of finite order in
$Z_{G}(C)$. Such bijection comes from the choice of a topological generator of
$\text{Gal}(F)\cong\hat{\mathbb{Z}}$. Such a generator corresponds to the
compatible choice of a generator $\omega_{b}$ of $\mu_{b}$ for all positive
integers $b$. Here is a summary of the classification we have obtained.
###### Proposition 3.37 (Regular Connections over $D^{*}$).
Let $B=t^{-1}\,C$ be a regular canonical connection with
$C_{s}\in\mathfrak{D}$ and $[C_{s},C]=0$. The set of $\mathbf{G}$-connections
over $\text{Spec}(F)$ that become gauge equivalent to $B$ over
$\text{Spec}(\overline{F})$ is in (noncanonical) bijection with the set of
conjugacy classes of elements of finite order in
$Z_{\mathbf{G}}(C)(\mathbf{k})$ as described below.
The correspondence goes as follows. Let $x\in Z_{\mathbf{G}}(C)(\mathbf{k})$
of order $b$. By the vanishing of $H^{1}_{\text{Gal}(F)}(\mathbf{G})$, we can
find an element $y\in\mathbf{G}(F_{b})$ such that $\omega_{b}\cdot y=y\,x$.
The connection associated to $x$ will be
$A=y\cdot\left(t^{-1}\,C\right)\;\in\mathfrak{g}_{F}$. Conversely, suppose
that $A=y\cdot B$ is a connection in $\mathfrak{g}_{F}$ for some
$y\in\mathbf{G}(F_{b})$. We set $x\vcentcolon=y^{-1}\,\left(\omega_{b}\cdot
y\right)$.
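As a sanity check of the correspondence, take $\mathbf{G}=\mathbb{G}_{m}$ and $B=t^{-1}\,C$ with $C\in\mathbf{k}$ (a toy computation, not taken from the references). For $x=\omega_{b}$, an element of order $b$, we can choose $y=t^{\frac{1}{b}}\in\mathbb{G}_{m}(F_{b})$, since $\omega_{b}\cdot t^{\frac{1}{b}}=t^{\frac{1}{b}}\,\omega_{b}$. The associated connection is
$A\;=\;y\cdot\left(t^{-1}\,C\right)\;=\;t^{-1}\,C+\frac{dy}{dt}\,y^{-1}\;=\;t^{-1}\left(C+\tfrac{1}{b}\right)$
Running over all roots of unity, the connections over $\text{Spec}(F)$ that become equivalent to $t^{-1}\,C$ over $\text{Spec}(\overline{F})$ are therefore, up to $\mathbf{G}(F)$-gauge equivalence, the $t^{-1}\left(C+q\right)$ with $q\in\mathbb{Q}/\,\mathbb{Z}$, in line with Remark 4.5 below.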
Using the descriptions of regular $\mathbf{G}(\overline{F})$-gauge
equivalence classes we have given previously, we can parametrize regular
formal connections. Let $(m,u)$ be a pair with
$m\in\left(X_{*}(\mathbf{T})\otimes\mathbf{k}/\,\mathbb{Q}\right)$ and $u$ a
nilpotent element in $\mathcal{N}_{Z_{\mathbf{G}}(m)}$. A cohomology cocycle
as described above is given by an element $x$ of finite order in the
centralizer $Z_{\mathbf{G}}(m,u)$ of $u$ in $Z_{\mathbf{G}}(m)$. Since
$Z_{\mathbf{G}}(m)$ is connected, we can conjugate by an element of
$Z_{\mathbf{G}}(m)$ in order to assume that the semisimple element $x$ lies in
$\mathbf{T}\subset Z_{\mathbf{G}}(m)$. It follows that the set of regular
formal $\mathbf{G}$ connections over $D^{*}$ is in natural correspondence with
equivalence classes of triples $(m,x,u)$, where
1. (i)
$m\in\left(X_{*}(\mathbf{T})\otimes\mathbf{k}/\,\mathbb{Q}\right)$.
2. (ii)
$x$ is an element of finite order in $\mathbf{T}(\mathbf{k})$.
3. (iii)
$u\in\mathcal{N}_{Z_{\mathbf{G}}(m)}$ with $\text{Ad}(x)(u)=u$.
Two such triples are considered equivalent if they can be conjugated by an
element of $\mathbf{G}(\mathbf{k})$.
Recall that there is a canonical isomorphism $\mathbf{T}(\mathbf{k})\cong
X_{*}(\mathbf{T})\otimes\mathbf{k}^{\times}$. Under this identification, the
set $\mathbf{T}(\mathbf{k})^{tor}$ of elements of finite order in
$\mathbf{T}(\mathbf{k})$ corresponds to $X_{*}(\mathbf{T})\otimes\mu_{\infty}$.
The compatible choice of primitive roots of unity $\omega_{b}$ yields an
isomorphism $\mu_{\infty}\cong\mathbb{Q}/\,\mathbb{Z}$. Hence we get an
identification $\mathbf{T}(\mathbf{k})^{tor}\cong
X_{*}(\mathbf{T})\otimes\,\mathbb{Q}/\,\mathbb{Z}$. This means that the set of
pairs
$(m,x)\in(X_{*}(\mathbf{T})\otimes\mathbf{k}/\,\mathbb{Q})\times\mathbf{T}(\mathbf{k})^{tor}$
is in natural bijection with
$X_{*}(\mathbf{T})\otimes\,\mathbf{k}/\,\mathbb{Z}$.
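One way to see this last identification: choose a $\mathbb{Q}$-linear complement $V$ of $\mathbb{Q}$ in $\mathbf{k}$, so that $\mathbf{k}=\mathbb{Q}\oplus V$. Then
$\mathbf{k}/\,\mathbb{Z}\;\cong\;\mathbb{Q}/\,\mathbb{Z}\,\oplus\,V\;\cong\;\mathbb{Q}/\,\mathbb{Z}\,\oplus\,\mathbf{k}/\,\mathbb{Q}$
and tensoring with the free $\mathbb{Z}$-module $X_{*}(\mathbf{T})$ matches the pairs $(m,x)$ with the elements of $X_{*}(\mathbf{T})\otimes\,\mathbf{k}/\,\mathbb{Z}$.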
For an element $v\in X_{*}(\mathbf{T})\otimes\,\mathbf{k}/\,\mathbb{Z}$, we
will let $Z_{\mathbf{G}}(v)$ denote the centralizer $Z_{\mathbf{G}}(m,x)$ of
the corresponding pair
$(m,x)\in(X_{*}(\mathbf{T})\otimes\mathbf{k}/\,\mathbb{Q})\times\mathbf{T}(\mathbf{k})^{tor}$.
Conjugate elements of $X_{*}(\mathbf{T})\otimes\,\mathbf{k}/\,\mathbb{Z}$
yield isomorphic centralizers, so it makes sense to define $Z_{\mathbf{G}}(v)$
for $v\in(X_{*}(\mathbf{T})\otimes\,\mathbf{k}/\,\mathbb{Z})/\,W$. We end up with
the following parametrization of regular formal connections.
###### Corollary 3.38.
There is a natural bijection between regular formal connections over $D^{*}$
and pairs $(v,O)$, where
1. (i)
$v\in(X_{*}(\mathbf{T})\otimes\,\mathbf{k}/\,\mathbb{Z})/\,W$.
2. (ii)
$O$ is a nilpotent orbit in
$\mathcal{N}_{Z_{\mathbf{G}}(v)}/\,Z_{\mathbf{G}}(v)$.
###### Definition 3.39.
Let $A$ be a regular formal connection over $D^{*}$. We will denote by
$(m_{A}^{s},m_{A}^{n})$ the corresponding pair granted by Corollary 3.38.
$m_{A}^{s}$ (resp. $m_{A}^{n}$) is called the semisimple (resp. unipotent)
monodromy of $A$.
###### Example 3.40.
Suppose that $\mathbf{k}=\mathbb{C}$. The set of pairs $(m_{A}^{s},m_{A}^{n})$
as above is in correspondence with the set of conjugacy classes in
$\mathbf{G}(\mathbb{C})$. For a representative
$(C,U)\in\text{Lie}(\mathbf{T})\times\mathcal{N}_{\mathbf{G}}$ of the pair
$(m_{A}^{s},m_{A}^{n})$, the corresponding element of $\mathbf{G}(\mathbb{C})$
is given by $\text{exp}(2\pi iC+U)$. This is just the monodromy class of the
regular formal connection.
We can use the theorem in [Hum95] pg. 26 to give a description of
$Z_{\mathbf{G}}(v)$. We can decompose
$\mathfrak{g}=\text{Lie}(\mathbf{T})\oplus\bigoplus_{i=1}^{l}\mathfrak{u}_{\chi_{i}}$,
where each $\mathfrak{u}_{\chi_{i}}$ is a one-dimensional eigenspace of $\mathbf{T}$
consisting of nilpotent elements (we allow repetition in the $\chi_{i}$).
Suppose that $\mathbf{T}$ acts on $\mathfrak{u}_{\chi_{i}}$ through the character
$\chi_{i}:\mathbf{T}\longrightarrow\mathbb{G}_{m}$. Since we are working in
characteristic $0$, each $\mathfrak{u}_{\chi_{i}}$ is the Lie algebra of a unique
unipotent subgroup $\mathbf{U}_{i}$ isomorphic to $\mathbb{G}_{a}$. Let
$v\in(X_{*}(\mathbf{T})\otimes\,\mathbf{k}/\,\mathbb{Z})$. For each character
$\chi\in X^{*}(\mathbf{T})$ it makes sense to ask whether $\langle
v,\chi\rangle\in\mathbb{Z}$, even though $v$ is only defined up to an element
of $X_{*}(\mathbf{T})$. The connected component of $Z_{\mathbf{G}}(v)$ is
generated by $\mathbf{T}$ and those unipotent weight spaces $\mathbf{U}_{i}$
such that $\langle v,\chi_{i}\rangle\in\mathbb{Z}$. The full group
$Z_{\mathbf{G}}(v)$ is generated by its neutral component and the reflections
$w_{\alpha}$ in the Weyl group $W$ of $\mathbf{L}$ corresponding to roots
$\alpha\in\Phi$ such that $\langle v,\,\alpha\rangle\in\mathbb{Z}$.
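For example, take $\mathbf{G}=\text{SL}_{2}$ with maximal torus $\mathbf{T}$ and positive root $\alpha$ (a small check of the description above). For $v\in X_{*}(\mathbf{T})\otimes\,\mathbf{k}/\,\mathbb{Z}$ we get
$Z_{\text{SL}_{2}}(v)\;=\;\begin{cases}\text{SL}_{2}&\text{if }\langle v,\,\alpha\rangle\in\mathbb{Z}\\ \mathbf{T}&\text{otherwise}\end{cases}$
since the root groups $\mathbf{U}_{\pm\alpha}$ and the reflection $w_{\alpha}$ all appear exactly when $\langle v,\,\alpha\rangle\in\mathbb{Z}$.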
In order to classify formal regular connections, it is convenient to group
them depending on the type of their semisimple monodromy. For each
$v\in(X_{*}(\mathbf{T})\otimes\,\mathbf{k}/\,\mathbb{Z})/\,W$, we can define
the type of $v$ to be a subset $S\subset V/\,W$ just as we did when working
over $\overline{F}$. For any $S\subset V/\,W$, let us denote by
$P_{S}\subset(X_{*}(\mathbf{T})\otimes\,\mathbf{k}/\,\mathbb{Z})/\,W$ the set
of all elements of type $S$. By the description given in the last paragraph, it
follows that the isomorphism class of $Z_{\mathbf{G}}(v)$ is the same for all
$v\in P_{S}$. We will denote this group by $Z_{\mathbf{G}}(S)_{F}$. We can now
rewrite the last corollary.
###### Corollary 3.41.
There is a natural correspondence
$\left\\{\text{regular formal connections over
$D^{*}$}\right\\}\;\;\;\longleftrightarrow{\;\;\;\;}\bigsqcup_{S\subset
V/\,W}P_{S}\times\mathcal{N}_{Z_{\mathbf{G}}(S)_{F}}/\,Z_{\mathbf{G}}(S)_{F}$
This yields a procedure to describe all regular $\mathbf{G}(F)$-gauge
equivalence classes of formal connections. The different types $S$ can be
established by declaring the subset of characters that take integer values
on $v$. For each $S$, one needs to parametrize the nilpotent orbits in
the Lie algebra of $\mathbf{M}_{S}$. This works especially well when the group
is reductive.
###### Example 3.42.
Suppose that $\mathbf{G}$ is reductive. Then each $\mathbf{M}_{S}$ is
connected reductive. If the group $\mathbf{G}$ is classical, we can
parametrize the finite set of nilpotent orbits
$\mathcal{N}_{\mathbf{M}_{S}}/\,\mathbf{M}_{S}$ using partition diagrams as in
[CM93] Chapter 5. This yields explicit canonical block decompositions of
regular connections for classical groups.
## 4 Irregular connections for $\mathbf{G}$ reductive
### 4.1 Connections in canonical form
Let $\mathbf{G}$ be connected reductive over $\mathbf{k}$.
###### Definition 4.1 (Canonical form).
A connection $B\in\mathfrak{g}_{\overline{F}}$ is said to be in canonical form
if we have $B=\sum_{j=1}^{l}D_{j}\,t^{r_{j}}+t^{-1}\,C$, where
1. (1)
$r_{j}\in\mathbb{Q}$ for all $j$, and they satisfy
$r_{1}<r_{2}<\cdots<r_{l}<-1$.
2. (2)
$D_{j}\neq 0$ is a semisimple element in $\mathfrak{g}$ for all $j$.
3. (3)
$D_{1},D_{2},...,C$ are pairwise commuting (the brackets vanish).
The $r_{j}$ above are called the levels of the canonical form. The smallest of
them $r_{1}$ is called the principal level. The initial sum
$\sum_{j=1}^{l}D_{j}\,t^{r_{j}}$ is called the irregular part of the
connection; we denote it by $B_{\text{irr}}$.
###### Remark 4.2.
The irregular part could be $0$ if the summation is empty. We then recover the
notion of canonical form for a regular connection.
We prove the existence of canonical form for reductive groups first.
###### Theorem 4.3 (Reduction Theory for Reductive Groups).
Let $\mathbf{G}$ be connected reductive and $A\in\mathfrak{g}_{\overline{F}}$.
Then there exists $x\in\mathbf{G}(\overline{F})$ such that $x\cdot
A=\sum_{j=1}^{l}D_{j}\,t^{r_{j}}+t^{-1}\,C$ is in canonical form.
The argument proceeds by induction on $\text{dim}\,\mathbf{G}$. The base case
(when $\mathbf{G}=\mathbb{G}_{m}$) follows from the computation done in the
proof of Proposition 3.12. We state this result for future reference.
###### Proposition 4.4.
Let $\mathbf{G}$ be a torus.
1. (a)
Let $A=\sum_{j=r}^{\infty}A_{j}\,t^{j}$ be a formal connection in
$\mathfrak{g}_{F}$. Then there exists $x\in\mathbf{G}(\mathcal{O})$ such that
$x\cdot A=\sum_{j=r}^{-1}A_{j}\,t^{j}$. Moreover there is a unique such $x$
with $x\equiv 1\,\left(mod\;t\right)$.
2. (b)
Let $B=\sum_{j=r}^{-1}B_{j}\,t^{j}$ and $C=\sum_{j=r}^{-1}C_{j}\,t^{j}$ be two
connections in canonical form. Suppose that there exists $x\in\mathbf{G}(F)$
such that $x\cdot C=B$. Then $x=g\,t^{\mu}$ for some cocharacter $\mu\in
X_{*}(\mathbf{G})$ and some $g\in\mathbf{G}(\mathbf{k})$. In this case, we
have $B_{j}=C_{j}$ for all $r\leq j<-1$ and $C_{-1}=B_{-1}-\mu$.
###### Proof.
This is the same computation as in Propositions 3.12 and 3.14. We omit the
details. ∎
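For instance, for $\mathbf{G}=\mathbb{G}_{m}$ part (a) is completely explicit (a sketch; the gauge action on a torus is $x\cdot A=A+\frac{dx}{dt}\,x^{-1}$ because the adjoint action is trivial). Writing $x=\text{exp}(f)$ with $f\in t\,\mathbf{k}[[t]]$, we have $x\cdot A=A+\frac{df}{dt}$, so the choice
$f\;=\;-\sum_{j=0}^{\infty}\frac{A_{j}}{j+1}\,t^{j+1}$
removes exactly the nonnegative part of $A$ and satisfies $x\equiv 1\,\left(mod\;t\right)$. Uniqueness holds because $\frac{df}{dt}=0$ forces $f=0$ when $f$ has no constant term.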
###### Remark 4.5.
In particular we see that two canonical connections
$B=\sum_{j=r}^{-1}B_{j}\,t^{j}$ and $C=\sum_{j=r}^{-1}C_{j}\,t^{j}$ for a
torus are gauge equivalent over $F$ if and only if $B_{j}=C_{j}$ for all
$r\leq j<-1$ and $C_{-1}-B_{-1}\in X_{*}(\mathbf{G})$. By lifting, we conclude
that they are equivalent over $\overline{F}$ if and only if $B_{j}=C_{j}$ for
all $r\leq j<-1$ and $C_{-1}-B_{-1}\in X_{*}(\mathbf{G})\otimes\mathbb{Q}$.
Let’s get started with the argument for Theorem 4.3. By the structure theory
of reductive groups, we know that $\mathbf{G}$ admits an isogeny from the
product of its maximal central torus and its derived subgroup
$\mathbf{G}_{der}$. By Proposition 4.4, we can deal with the central part. We
may therefore assume that $\mathbf{G}$ is semisimple.
By lifting to a ramified cover, we can assume
$A=\sum_{j=r}^{\infty}A_{j}\,t^{j}\in\mathfrak{g}_{F}$ with $A_{r}\neq 0$. If
$r\geq-1$ we can use the theory for regular connections developed in Section
3. So we can assume $r<-1$. There are two substantially different
possibilities: $A_{r}$ could be nilpotent or not. The case when $A_{r}$ is not
nilpotent turns out to be the easiest; we do it first.
### 4.2 The case when $A_{r}$ is not nilpotent
We need the following lemma (cf. [BV83, 9.3]).
###### Lemma 4.6.
Let $A=\sum_{j=r}^{\infty}A_{j}\,t^{j}$ be a connection in $\mathfrak{g}_{F}$ with $r<-1$.
Let $V=(A_{r})_{s}$ be the semisimple part of $A_{r}$. Then, there exists
$x\in\mathbf{G}(F)$ such that $x\cdot A$ is in $\mathfrak{g}_{V}(F)$.
###### Proof.
This is very similar to Lemma 3.3. We will inductively build a sequence
$(B_{j})_{j=1}^{\infty}$ of elements of $\mathfrak{g}$ such that the gauge
transformation
$x\vcentcolon=\lim_{n\rightarrow\infty}\prod_{j=0}^{n-1}\text{exp}(t^{n-j}\,B_{n-j})$
satisfies the conclusion of the lemma. Suppose that we have chosen $B_{j}$ for
$j\leq k$ such that the connection
$A^{(k)}=\sum_{l=r}^{\infty}A^{(k)}_{l}\,t^{l}$ defined by
$A^{(k)}\vcentcolon=\prod_{j=0}^{k-1}\text{exp}(t^{k-j}\,B_{k-j})\cdot A$
satisfies $A_{l}^{(k)}\in\mathfrak{g}_{V}$ for all $l\leq k+r$. Notice that
the base case $k=0$ is trivial and $A^{(k)}_{r}=A_{r}$. Let’s try to determine
$B_{k+1}$.
Recall that $\text{exp}(t^{k+1}\,B_{k+1})\equiv
1+t^{k+1}\,B_{k+1}\;(\text{mod}\;t^{k+2})$. By an elementary matrix
computation (choose an embedding of
$\mathbf{G}\hookrightarrow\mathbf{\text{GL}_{n}}$), one can see that
$\text{exp}(t^{k+1}B_{k+1})\cdot
A^{(k)}\equiv\sum_{l=r}^{k+r}A^{(k)}_{l}\,t^{l}+[A^{(k)}_{k+1+r}-ad(A_{r})B_{k+1}]\,t^{k+1+r}\;\;(\text{mod}\;t^{k+2+r})$
Let $\mathfrak{g}=\bigoplus_{\lambda}\mathfrak{g}_{\lambda}$ be the spectral
decomposition of $ad(V)=(ad(A_{r}))_{s}$. By definition the operator
$ad(A_{r})$ restricts to an automorphism of $\mathfrak{g}_{\lambda}$ for all
$\lambda\neq 0$. In particular, we can choose $B_{k+1}\in\mathfrak{g}$ such
that $A^{(k)}_{k+1+r}-ad(A_{r})B_{k+1}$ is in
$\mathfrak{g}_{0}=\mathfrak{g}_{V}$. This concludes the construction of the
sequence $(B_{j})_{j=1}^{\infty}$. It follows from the construction that
$x\vcentcolon=\lim_{n\rightarrow\infty}\prod_{j=0}^{n-1}\text{exp}(t^{n-j}\,B_{n-j})$
satisfies $x\cdot A\;\in\,\mathfrak{g}_{V}(F)$. ∎
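As a quick illustration of the first step of this recursion in $\mathfrak{sl}_{2}$ (a toy computation, with the gauge action $x\cdot A=\text{Ad}(x)\,A+\frac{dx}{dt}\,x^{-1}$): take $A=H\,t^{-2}+X\,t^{-1}$, where $H=\begin{bmatrix}1&0\\ 0&-1\end{bmatrix}$ and $X=\begin{bmatrix}0&1\\ 0&0\end{bmatrix}$, so that $V=(A_{r})_{s}=H$ and $X\in\mathfrak{g}_{2}$. The recursion picks $B_{1}=\frac{1}{2}X$, and a direct computation gives
$\text{exp}\left(\tfrac{t}{2}\,X\right)\cdot A\;=\;\left(H-t\,X\right)t^{-2}+X\,t^{-1}+\tfrac{1}{2}\,X\;=\;H\,t^{-2}+\tfrac{1}{2}\,X$
so the coefficients in degrees $\leq-1$ now lie in $\mathfrak{g}_{V}$, and the error outside the centralizer has been pushed to higher order, exactly as in the proof.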
Let us continue with the proof of Theorem 4.3. Suppose that $A_{r}$ is not
nilpotent. Then the semisimple part $V=(A_{r})_{s}$ is not $0$. Since we are
assuming that $\mathbf{G}$ is semisimple, the connected reductive centralizer
$\mathbf{Z}_{G}(V)$ is a proper subgroup of $\mathbf{G}$. By Lemma 4.6 we can
assume that $A\in\mathfrak{g}_{V}(F)$. We win by induction, because
$\text{dim}\,\mathbf{Z}_{G}(V)\,<\,\text{dim}\,\mathbf{G}$. This settles the
case when $A_{r}$ is not nilpotent in the proof of Theorem 4.3.
Recall that the principal level of a canonical connection is defined to be the
order $r_{1}$ (see the paragraph after Definition 4.1). We can define the
principal level of a connection $A$ to be the principal level of any canonical
connection equivalent to $A$. This is well defined by Lemma 5.4 in the next
section. The inductive argument given above implies the following interesting
fact.
###### Proposition 4.7.
Suppose that $A=\sum_{j=r}^{\infty}A_{j}t^{j}$ with $r<-1$ and $A_{r}$ not
nilpotent. Then $r$ is the principal level of A.
###### Proof.
We induct on the dimension of the group. The base case $\mathbb{G}_{m}$ is
clear by direct computation. Notice that in the proof above we have that
$A_{r}$ is still not nilpotent in the smaller group $\mathbf{Z}_{G}(V)$, since
its semisimple part $V$ is not $0$. We can then conclude by induction. ∎
###### Remark 4.8.
As we will soon see, this is not necessarily the case when $A_{r}$ is
nilpotent. In the nilpotent case the principal level can be larger than $r$.
### 4.3 The case when $A_{r}$ is nilpotent
In this section we conclude the proof of Theorem 4.3 by dealing with the case
when $A_{r}$ is nilpotent. We closely follow an argument sketched in [BV83],
which we include here for the sake of completeness. For this section
$\mathbf{G}$ will be semisimple, as we may assume for our proof of Theorem
4.3. Let’s set up the notation we will use. Suppose that we have
$A=\sum_{j=r}^{\infty}A_{j}\,t^{j}$ with $A_{r}$ nilpotent and $r<-1$. Let
$(H,X,\,Y=A_{r})$ be a Jacobson-Morozov $\mathfrak{sl}_{2}$-triple coming from
an algebraic homomorphism $\Phi:\text{SL}_{2}\longrightarrow\mathbf{G}$. For
an integer $n$, we will denote by $t^{nH}$ the element $t^{\mu}$, where $\mu$
is the natural composition
$\mathbb{G}_{m}\xrightarrow{[n]}\mathbb{G}_{m}\hookrightarrow\text{SL}_{2}\xrightarrow{\Phi}\mathbf{G}$.
###### Lemma 4.9.
With notation as above, there exists $x\in\mathbf{G}(F)$ such that $x\cdot
A=\sum_{j=r}^{\infty}B_{j}\,t^{j}$ satisfies
1. (1)
$B_{r}=A_{r}$.
2. (2)
$B_{j}\in\mathfrak{g}_{X}$ for all $j>r$.
###### Proof.
This is a carbon copy of the proof of Lemma 4.6. The only difference is that
in the last paragraph we have to use the fact that the range of $ad(A_{r})$ is
complementary to $\mathfrak{g}_{X}$. This follows from the theory of
representations of $\mathfrak{sl}_{2}$. We omit the details. ∎
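Concretely, the complementarity invoked here is the standard decomposition
$\mathfrak{g}\;=\;\text{im}\left(ad(Y)\right)\,\oplus\,\mathfrak{g}_{X}$
for an $\mathfrak{sl}_{2}$-triple $(H,X,Y)$: in each irreducible $\mathfrak{sl}_{2}$-summand of $\mathfrak{g}$, the kernel of $ad(X)$ is spanned by the highest weight vector, while the image of $ad(Y)$ is spanned by all the lower weight vectors.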
By Lemma 4.9, we can assume that $A_{j}\in\mathfrak{g}_{X}$ for $j>r$. For the
purpose of having an algorithm that works in finitely many steps, we won’t
actually use the full force of the lemma. Instead, we will use a weaker
hypothesis as an input for the next proposition. Let
$\Lambda\vcentcolon=\Lambda\left(A_{r}\right)$ be as in Definition 2.14. We
will henceforth suppose that $A_{r+m}\in\mathfrak{g}_{X}$ for $1\leq
m<\Lambda(|r|-1)$.
Let $(Z_{l})_{l=1}^{q}$ be a basis of eigenvectors of $ad(H)$ acting on
$\mathfrak{g}_{X}$. This means that
$\mathfrak{g}_{X}=\bigoplus_{l=1}^{q}\mathbf{k}\,Z_{l}$ and that there exist
$\lambda_{l}$ such that $[H,Z_{l}]=\lambda_{l}\,Z_{l}$. It turns out that the
$\lambda_{l}$s are nonnegative integers, by the theory of representations of
$\mathfrak{sl}_{2}$. By the assumption on $A$, we can write
$A_{r+m}=\sum_{l=1}^{q}a_{r+m,\,l}\,Z_{l}$ for all $1\leq m<\Lambda(|r|-1)$
and some constants $a_{r+m,\,l}\in\mathbf{k}$.
###### Definition 4.10.
In the situation above, define $\delta=\delta(A)$ to be given by:
$\delta=\text{inf}\;\left\\{\frac{m}{\frac{1}{2}\lambda_{l}+1}\;\vcentcolon\;\;1\leq
m<\Lambda(|r|-1),\;\;1\leq l\leq q,\;\;a_{r+m,\,l}\neq 0\right\\}$
We set $\delta=\infty$ if $A_{r+m}=0$ for all $1\leq m<\Lambda(|r|-1)$. We also
define the set
$P\vcentcolon=\left\\{(m,l)\;\vcentcolon\;\;1\leq m<\Lambda(|r|-1),\;\;1\leq
l\leq q,\;\;a_{r+m,\,l}\neq
0,\;\;\frac{m}{\frac{1}{2}\lambda_{l}+1}=\delta\right\\}$
In plain words, $P$ is the set of pairs $(m,l)$ of indices in the definition
of $\delta$ where the infimum is actually achieved.
###### Remark 4.11.
By the definition of $\Lambda(A_{r})$, it follows that the denominators
appearing in the set defining $\delta$ are always less than $\Lambda$. This
implies that there exists a positive integer $b\leq 2\Lambda-1$ such that
$b\delta\in\mathbb{Z}$. This fact will be used later to determine a bound for
the ramification needed to put $A$ in canonical form.
The following proposition is one of the main steps in the argument in [BV83].
We are going to get a step closer to canonical form by applying a
transformation of the type $t^{nH}$. These elements are called shearing
transformations. The statement and proof in the case of $\mathbf{GL_{n}}$ can
be found in [BV83] pages 33-34. We have decided to include a detailed
treatment of the general case for the convenience of the reader.
###### Proposition 4.12 (Main Proposition for the Induction Step).
Let the notation/set-up be as discussed above.
1. (C1)
Suppose $|r|-1\leq\delta\leq\infty$. Let $\tilde{A}$ be the $2$-lift of $A$.
Then $B\vcentcolon=t^{(r+1)H}\cdot\tilde{A}$ is of the first kind, and
$B_{-1}$ only depends on $A_{r+m}$ for $0\leq m\leq\Lambda(|r|-1)$.
2. (C2)
Suppose $0<\delta<|r|-1$. We know that $b\delta\in\mathbb{Z}$ for some
$b\in\mathbb{N}$. Let $\tilde{A}$ be the $2b$-lift of $A$. We have that
$B\vcentcolon=t^{-b\delta H}\cdot\tilde{A}$ has order
$r^{\prime}\vcentcolon=2br+2b\delta+2b-1<-1$. Moreover,
$B_{r^{\prime}}=2bA_{r}+2b\sum_{(m,l)\in P}a_{r+m,\,l}\,Z_{l}\;\neq\;2bA_{r}$
In particular we have that $B_{r^{\prime}}$ is determined by $A_{r+m}$ for
$0\leq m<\Lambda(|r|-1)$. If $B_{r^{\prime}}$ is nilpotent, then
$\text{dim}\,(G\cdot B_{r^{\prime}})\,>\,\text{dim}\,(G\cdot A_{r})$.
###### Proof.
The computation is similar to the one we did in the proof of Theorem 3.6.
Recall from the discussion in that proof that for all
$W\in\mathfrak{g}_{\beta}$ we have
$\text{Ad}(t^{nH})\,W=t^{n\beta(H)}W\;\;\;(*)$
1. (C1)
By using the definitions and expanding
$\displaystyle t^{(r+1)H}\cdot\tilde{A}$
$\displaystyle=\;\;2\sum_{m=0}^{\Lambda(|r|-1)-1}\text{Ad}(t^{(r+1)H})A_{r+m}\,t^{2(r+m)+1}$
$\displaystyle\quad\;+\,\text{Ad}(t^{(r+1)H})A_{r+\Lambda(|r|-1)}\,t^{2(r+\Lambda(|r|-1))+1}$
$\displaystyle\quad+\,2\sum_{m=\Lambda(|r|-1)+1}^{\infty}\text{Ad}(t^{(r+1)H})A_{r+m}\,t^{2(r+m)+1}$
$\displaystyle\quad\;+\,\frac{d}{dt}\,(t^{(r+1)H})\,t^{-(r+1)H}$
The fourth summand is just $(r+1)Ht^{-1}$, which is of the first kind. We can
see that the third summand is actually in $\mathfrak{g}(\mathcal{O})$ by using
$(*)$ and the fact that $(r+1)\beta(H)\geq(2\Lambda-2)(r+1)$ for all roots
$\beta$. The same reasoning implies that the second summand is of the first
kind. For the first summand, we can write
$A_{r+m}=\sum_{l=1}^{q}a_{r+m,\,l}\,Z_{l}$. We can expand and use $(*)$ plus
the definition of $\lambda_{l}$. We get that the first summand is:
$2\sum_{m=0}^{\Lambda(|r|-1)-1}\sum_{l=1}^{q}a_{r+m,\,l}\,Z_{l}\,t^{2(r+m)+1+(r+1)\lambda_{l}}$
This expression is also of the first kind. This can be shown by doing some
algebra with the exponents of $t$, keeping in mind the definition of $\delta$
and the fact that $\delta\geq|r|-1$. The remark about $B_{-1}$ follows plainly
from the argument, because the third summand did not contribute to $B_{-1}$.
2. (C2)
This is very similar to the first case. We expand:
$\displaystyle t^{-b\delta H}\cdot\tilde{A}$
$\displaystyle=\;\;2b\sum_{m=0}^{\Lambda(|r|-1)-1}\text{Ad}(t^{-b\delta
H})A_{r+m}\,t^{2b(r+m)+2b-1}$
$\displaystyle\quad+\,2b\sum_{m=\Lambda(|r|-1)}^{\infty}\text{Ad}(t^{-b\delta
H})A_{r+m}\,t^{2b(r+m)+2b-1}$
$\displaystyle\quad\;+\,\frac{d}{dt}\,(t^{(-b\delta)H})\,t^{(b\delta)H}$
The third summand is $-b\delta Ht^{-1}$, which is of the first kind. We can
therefore ignore the third summand. The bound
$-b\delta\beta(H)\geq-2b\delta(\Lambda-1)$ and equation $(*)$ show that the
order of the second summand is at least $r^{\prime}+1=2br+2b\delta+2b$. The
computation is almost the same as for the third summand in Case 1 above. For
the first summand, we can again use $A_{r+m}=\sum_{l=1}^{q}a_{r+m,\,l}\,Z_{l}$
and expand using $(*)$ to get:
$2b\sum_{m=0}^{\Lambda(|r|-1)-1}\sum_{l=1}^{q}a_{r+m,\,l}\,Z_{l}\,t^{2b(r+m)+2b-1-b\delta\lambda_{l}}$
We are reduced to check that the exponent of $t$ in the sum above has minimal
value $r^{\prime}=2br+2b\delta+2b-1$ exactly for the pairs $(m,l)$ in $P$.
Indeed, for a pair $(m,l)$ the exponent of $t$ in the sum satisfies
$2b(r+m)+2b-1-b\delta\lambda_{l}\;-\;r^{\prime}\;=\;2b\left(m-\delta\left(\tfrac{1}{2}\lambda_{l}+1\right)\right)\;\geq\;0$
by the definition of $\delta$, with equality precisely when $\frac{m}{\frac{1}{2}\lambda_{l}+1}=\delta$, that is, when $(m,l)\in P$.
The claim about $B_{r^{\prime}}$ follows from the argument, because the second
summand does not contribute to $B_{r^{\prime}}$. The claim about the increase
of the dimension of nipotent orbits is a direct consequence of Proposition
2.8.
∎
###### Remark 4.13.
The claim about the dimension of the orbit in $C2$ is essential. This
guarantees that the process of applying shearing transformations eventually
stops. Hence we are provided with a terminating algorithm. See the proof of
Theorem 4.3 given below for details.
###### Example 4.14.
Let’s see how this works in the case of $\text{SL}_{2}$. Up to inner
automorphism, we can assume that $A_{r}=Y=\begin{bmatrix}0&0\\ 1&0\end{bmatrix}$. Then $X=\begin{bmatrix}0&1\\ 0&0\end{bmatrix}$ and
$H=\begin{bmatrix}1&0\\ 0&-1\end{bmatrix}$. In this case $\Lambda=2$, and
$\mathfrak{g}_{X}=\mathbf{k}X$. So there is a single eigenvalue $\lambda=2$.
Our assumption just says that $A$ is of the form
$A=Y\,t^{r}+\sum_{m=1}^{2|r|-3}a_{r+m}X\,t^{r+m}+\text{higher order terms}$
We have that $\delta$ is $\frac{n}{2}$, where $n$ is the smallest index such
that $a_{r+n}\neq 0$. The set $P$ only contains this index $n$. So in fact $A$
can be written in the form
$A=Y\,t^{r}+\sum_{m=n}^{2|r|-3}a_{r+m}X\,t^{r+m}+\text{higher order terms}$
There are two cases.
1. (C1)
The first case is $n\geq 2(|r|-1)$. This just means that all $a_{i}$ above are
$0$. Then we can use the change of trivialization
$t^{\frac{r+1}{2}H}=\begin{bmatrix}t^{\frac{r+1}{2}}&0\\ 0&t^{-\frac{r+1}{2}}\end{bmatrix}$ to transform $A$ into a connection of the
first kind.
2. (C2)
The second case is when $n<2(|r|-1)$. Here at least some of the $a_{i}$ are
not $0$. We can apply the transformation
$t^{-\frac{n}{4}H}=\begin{bmatrix}t^{-\frac{n}{4}}&0\\ 0&t^{\frac{n}{4}}\end{bmatrix}$. The resulting connection will have order
$r+\frac{n}{2}$. The principal term will be $B_{r+\frac{n}{2}}=Y+a_{r+n}X$,
which is semisimple. Hence we can use Lemma 4.6 to reduce to the group
$\mathbb{G}_{m}$. We can then apply Proposition 4.4 to find the canonical
form.
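To make this fully concrete, take $r=-3$ and $A=Y\,t^{-3}+X\,t^{-2}$ (a worked instance of the second case; the computation is elementary). Then $n=1<2(|r|-1)=4$ and $\delta=\frac{1}{2}$. Using $\text{Ad}(t^{-\frac{1}{4}H})\,Y=t^{\frac{1}{2}}\,Y$, $\text{Ad}(t^{-\frac{1}{4}H})\,X=t^{-\frac{1}{2}}\,X$ and $\frac{d}{dt}(t^{-\frac{1}{4}H})\,t^{\frac{1}{4}H}=-\frac{1}{4}\,H\,t^{-1}$, we get
$t^{-\frac{1}{4}H}\cdot A\;=\;\left(Y+X\right)t^{-\frac{5}{2}}\,-\,\tfrac{1}{4}\,H\,t^{-1}$
The new leading term $Y+X$ has eigenvalues $\pm 1$, hence is semisimple, and Lemma 4.6 reduces us to the diagonal torus, where Proposition 4.4 produces the canonical form.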
###### Proof of Theorem 4.3.
By Lemma 4.9, we can put ourselves in the situation of Proposition 4.12 above.
We have three possibilities:
1. (i)
If $|r|-1\leq\delta\leq\infty$, then we can use Proposition 4.12 Case 1. We
are done by the theory of regular connections we have already developed.
2. (ii)
If $0<\delta<|r|-1$, we can use Proposition 4.12 Case 2. Suppose that $B_{r^{\prime}}$ is
not nilpotent. Then we are in the case worked out in Subsection 4.2.
3. (iii)
Suppose that $0<\delta<|r|-1$ and $B_{r^{\prime}}$ is nilpotent with
$\text{dim}\,(G\cdot B_{r^{\prime}})\,>\,\text{dim}\,(G\cdot A_{r})$. We can
apply Proposition 4.12 Case 2 again with $B$ instead of $A$. We can keep
iterating this procedure until we are in one of the first two possibilities
above. Notice that this process cannot go on indefinitely, because the
dimensions of nilpotent orbits in $\mathbf{G}$ are bounded.
∎
###### Remark 4.15.
The dimension of adjoint nilpotent orbits in $\mathbf{G}$ is always even
[CM93]. Therefore we need to apply at most
$\left\lfloor{\frac{1}{2}\text{dim}(\mathbf{G})}\right\rfloor$ shearing
transformations as in Proposition 4.12 Case 2 before we land in one of the
first two possibilities.
### 4.4 Algorithm for reductive groups and some quantitative results
Let’s give a detailed description of the reduction algorithm that we obtain
from the proof of Theorem 4.3.
###### Algorithm 4.16 (Algorithm for reduction of a formal connection for a
reductive group).
There is a set of six possible operations that we will use as steps in our
algorithm.
1. (i)
Apply Lemma 4.6.
2. (ii)
Apply Lemma 4.9.
3. (iii)
Apply Proposition 4.12 Case 1.
4. (iv)
Apply Proposition 4.12 Case 2.
5. (v)
Find the canonical form of a connection in a torus.
6. (vi)
Find the canonical form for a connection of the first kind in a semisimple
group (as in Theorem 3.6 of Section 3).
The algorithm proceeds as follows. The input is a given reductive group
$\mathbf{G}$ and a formal connection $A=\sum_{j=r}^{\infty}A_{j}\,t^{j}$.
First, we know that $\mathbf{G}$ is isogenous to the product
$Z^{0}\left(\mathbf{G}\right)\times\mathbf{G}_{\text{der}}$ of its maximal
central torus and its derived subgroup. Apply operation (v) to the central
part of the connection $A_{Z^{0}(\mathbf{G})}$. We can record the (canonical
form) output of (v) and ignore it from now on, since it is not going to be
altered by the subsequent steps in the algorithm. Replace $\mathbf{G}$ by
$\mathbf{G}_{\text{der}}$ and $A$ by $A_{\text{der}}$. We have two cases.
1. (1)
If $A_{der}$ is of the first kind, apply step (vi) to reduce this connection
to canonical form. Add any “central” parts we might have split off earlier in
the algorithm and output the result. End of the algorithm.
2. (2)
If $A_{der}$ is not of the first kind, check whether $A_{r}$ is nilpotent or
not. There are now two ways to proceed:
1. (2-1)
If $A_{r}$ is not nilpotent, use operation (i). Replace $\mathbf{G}$ by
$Z_{\mathbf{G}}\left(\,\left(A_{r}\right)_{s}\,\right)$ and replace $A$ by the
output of operation (i). Return to the beginning of the algorithm with this
new input.
2. (2-2)
If $A_{r}$ is nilpotent, compute $\Lambda\left(A_{r}\right)$. Apply operation
$(ii)$ and replace $A$ with the output. Now compute $\delta$.
1. (2-2a)
If $|r|-1\leq\delta$, apply operation (iii) and replace $A$ with the output.
This is a connection of the first kind. Return to the beginning of the
algorithm.
2. (2-2b)
If $\delta<|r|-1$, apply operation (iv). Go to the beginning of the algorithm.
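The nested case analysis above can be summarized by the following control-flow sketch in Python. This is purely illustrative pseudocode: every method name (`central_part`, `is_first_kind`, `shear_by_delta`, and so on) is an invented stub standing in for the corresponding operation or test of the algorithm, and no Lie-theoretic computation is actually modeled.

```python
def reduce_connection(G, A, ops):
    """Control-flow skeleton of Algorithm 4.16.

    `ops` is assumed to provide the six operations (i)-(vi) as methods;
    all names are illustrative stubs, not a real API.
    """
    canonical_parts = []
    while True:
        # Beginning of the algorithm: split off the maximal central torus
        # and put its part of the connection in canonical form.
        canonical_parts.append(ops.canonical_form_torus(A.central_part(G)))  # (v)
        G, A = G.derived_subgroup(), A.derived_part(G)

        if A.is_first_kind():                                    # case (1)
            canonical_parts.append(ops.canonical_form_first_kind(G, A))  # (vi)
            return canonical_parts
        if not A.leading_coefficient().is_nilpotent():           # case (2-1)
            G, A = ops.reduce_to_centralizer(G, A)               # (i)
            continue
        A = ops.move_into_centralizer_of_X(G, A)                 # (ii), case (2-2)
        delta = ops.compute_delta(G, A)
        if abs(A.order()) - 1 <= delta:                          # case (2-2a)
            A = ops.shear_to_first_kind(G, A)                    # (iii)
        else:                                                    # case (2-2b)
            A = ops.shear_by_delta(G, A, delta)                  # (iv)
        # Each pass through (iv) strictly increases the dimension of the
        # nilpotent orbit of the leading term (Remark 4.13), so the loop
        # terminates in case (1) or case (2-1).
```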
###### Remark 4.17.
In the algorithm above the order of the pole of $A$ only ever gets smaller
(taking into account $b$-lifting whenever passing to a ramified cover). This
shows that the principal level of $A$ determines the mildest pole in the
$\mathbf{G}(\overline{F})$-gauge equivalence class of $A$.
A careful analysis of Algorithm 4.16 yields a bound for the ramification
needed to put a given connection into canonical form. We can get a uniform
bound that only depends on the group, and not on the connection. Before giving
the proof of such a bound, we need a lemma.
Note that in each step of Algorithm 4.16 we are ultimately working with a
semisimple subgroup of $\mathbf{G}$. These subgroups are of the form
$\mathbf{H}_{der}$, where $\mathbf{H}$ is the centralizer of a finite set
$\\{D_{1},D_{2},...,D_{l}\\}$ of pairwise commuting semisimple elements in
$\mathfrak{g}$. We will first try to understand $J(\mathbf{H}_{der})$ (see
Definition 2.11).
###### Lemma 4.18.
Let $\mathbf{G}$ be a connected semisimple group. Let $R$ be the rank of
$\mathbf{G}$. Suppose that $\\{D_{1},D_{2},...,D_{l}\\}$ is a finite set of
pairwise commuting semisimple elements in $\mathfrak{g}$. Set
$\mathbf{H}=Z_{\mathbf{G}}\left(\\{D_{1},D_{2},...,D_{l}\\}\right)$, the
centralizer of all $D_{i}$. Let $\mathbf{H}_{der}$ be the derived subgroup of
$\mathbf{H}$. Then we have
$J(\mathbf{H}_{der})\leq\text{hgt}(\mathfrak{g})^{2R-2}\cdot J(\mathbf{G})$.
###### Proof.
The lemma is clearly true if $\mathbf{H}=\mathbf{G}$. We can therefore assume
that $\mathbf{H}\neq\mathbf{G}$. Let $\mathbf{T}$ be a maximal torus of
$\mathbf{G}$ such that $\text{Lie}(\mathbf{T})$ contains the set $\\{D_{i}\\}$
of pairwise commuting semisimple elements. Note that
$\mathbf{T}\subset\mathbf{H}$ is a maximal torus. Let $\Phi$ (resp. $\Sigma$)
be the set of roots of $\mathbf{G}$ (resp. $\mathbf{H}$) with respect to
$\mathbf{T}$. We will denote by $\Sigma^{\vee}$ and $\Phi^{\vee}$ the
corresponding sets of coroots. It follows by definition that
$\Sigma\subset\Phi$ and $\Sigma^{\vee}\subset\Phi^{\vee}$.
Write $Q_{\mathbf{H}_{der}}$ for the coweight lattice of $\mathbf{H}_{der}$.
Let $\lambda\in Q_{\mathbf{H}_{der}}$. We want to show that there exists
$b\leq\text{hgt}(\mathfrak{g})^{2R-2}\cdot J(\mathbf{G})$ such
that $b\lambda\in\mathbb{Z}\Sigma^{\vee}$.
Fix a choice of positive roots $\Phi^{+}$ in $\Phi$. Let $\Delta_{\Phi}$ be
the corresponding set of simple roots. By definition $|\Delta_{\Phi}|=R$.
Notice that this induces a set of positive roots
$\Sigma^{+}\vcentcolon=\Phi^{+}\cap\Sigma$. Let $\Delta_{\Sigma}$ be the
corresponding set of simple roots in $\Sigma$. Set
$c\vcentcolon=|\Delta_{\Sigma}|$. We know that $c\leq R-1$ because
$\mathbf{H}\neq\mathbf{G}$. Consider the short exact sequence
$0\longrightarrow\mathbb{Z}\Sigma\xrightarrow{\;\;M\;\;}\mathbb{Z}\Phi\longrightarrow\mathbb{Z}\Phi/\,\mathbb{Z}\Sigma\longrightarrow
0$
The theory of Smith normal form implies that
$\mathbb{Z}\Phi/\,\mathbb{Z}\Sigma\cong E\oplus\mathbb{Z}^{R-c}$, where $E$ is
a finite group. The exponent of $E$ is given by the biggest elementary divisor
$d$ of the inclusion $M$ of free $\mathbb{Z}$-modules. Applying the functor
$\text{Hom}(-,\mathbb{Z})$ to the short exact sequence yields an exact
sequence
$0\longrightarrow\mathbb{Z}^{R-c}\longrightarrow Q_{\mathbf{G}}\longrightarrow
Q_{\mathbf{H}_{der}}\longrightarrow E\longrightarrow 0$
Hence we have that $d\lambda$ can be extended to an element of
$Q_{\mathbf{G}}$. By the definition of $J(\mathbf{G})$, it follows that
$d\,J(\mathbf{G})\,\lambda$ extends to an element of $\mathbb{Z}\Phi^{\vee}$.
Let $\varphi:\mathbb{Z}\Phi^{\vee}\longrightarrow Q_{\mathbf{H}_{der}}$ be the
composition
$\varphi:\mathbb{Z}\Phi^{\vee}\,\hookrightarrow\,Q_{\mathbf{G}}\,\longrightarrow
Q_{\mathbf{H}_{der}}$
Set $L\vcentcolon=\text{Im}\,\varphi$ and $K\vcentcolon=\text{Ker}\,\varphi$.
The discussion above implies that the exponent of the finite group
$Q_{\mathbf{H}_{der}}/\,L$ is bounded by $d\,J(\mathbf{G})$. By definition we
have a short exact sequence
$0\,\longrightarrow\,K\,\longrightarrow\,\mathbb{Z}\Phi^{\vee}\,\longrightarrow\,L\,\longrightarrow\,0$
Since $L$ is a torsion-free $\mathbb{Z}$-module, the above exact sequence
splits. Fix a splitting $\mathbb{Z}\Phi^{\vee}\cong K\oplus L$. Let’s look now
at the inclusion of lattices
$\mathbb{Z}\Sigma^{\vee}\subset\mathbb{Z}\Phi^{\vee}$. The composition
$\mathbb{Z}\Sigma^{\vee}\,\hookrightarrow\,\mathbb{Z}\Phi^{\vee}\,\xlongequal{\;}\,K\oplus
L\,\xrightarrow{\;\;pr_{2}\;\;}\,L\,\xhookrightarrow{\;\;}\,Q_{\mathbf{H}_{der}}$
is the natural inclusion $\mathbb{Z}\Sigma^{\vee}\hookrightarrow
Q_{\mathbf{H}_{der}}$. Hence the morphism $\psi$ given by the composition
$\psi\,:\;\;\mathbb{Z}\Sigma^{\vee}\,\hookrightarrow\,\mathbb{Z}\Phi^{\vee}\,\xlongequal{\;}\,K\oplus
L\,\xrightarrow{\;\;pr_{2}\;\;}\,L$
is injective. So we have an inclusion
$\psi:\mathbb{Z}\Sigma^{\vee}\hookrightarrow L$. Let $e$ denote the exponent
of the finite group $L/\,\mathbb{Z}\Sigma^{\vee}$. By definition $e$ is the
biggest elementary divisor of the inclusion
$\psi:\mathbb{Z}\Sigma^{\vee}\hookrightarrow L$. Notice that this is also the
biggest elementary divisor of the natural inclusion
$\mathbb{Z}\Sigma^{\vee}\subset\mathbb{Z}\Phi^{\vee}\xlongequal{}K\oplus L$.
The discussion up to now implies that $J(\mathbf{H}_{der})\leq
e\,d\,J(\mathbf{G})$.
We are left to compute the elementary divisors $d$ and $e$ of the inclusions
of the root and coroot lattices. We first claim that
$d\leq\text{hgt}(\mathfrak{g})^{R-1}$. In order to prove the claim, we will
use $\Delta_{\Sigma}$ and $\Delta_{\Phi}$ as bases for the root lattices. For
each $\alpha\in\Delta_{\Sigma}$, we can write
$\alpha=\sum_{\beta\in\Delta_{\Phi}}m_{\beta}^{\alpha}\,\beta$ for some
nonnegative integers $m_{\beta}^{\alpha}$. Set
$M\vcentcolon=(m_{\beta}^{\alpha})_{\beta\in\Delta_{\Phi},\,\alpha\in\Delta_{\Sigma}}$.
This is an $R\times c$ matrix representing the inclusion
$\mathbb{Z}\Sigma\hookrightarrow\mathbb{Z}\Phi$. By the theory of Smith normal
form, $d$ divides all $c\times c$-minors of $M$. Since all
$m_{\beta}^{\alpha}$ are nonnegative, such $c\times c$-minor is bounded by
$\prod_{\alpha\in\Delta_{\Sigma}}\left(\sum_{\beta\in\Delta_{\Phi}}m_{\beta}^{\alpha}\right)\,\leq\,\text{hgt}(\mathfrak{g})^{c}\,\leq\,\text{hgt}(\mathfrak{g})^{R-1}$
The claim $d\leq\text{hgt}(\mathfrak{g})^{R-1}$ follows. We can apply the same
argument to the inclusion
$\mathbb{Z}\Sigma^{\vee}\subset\mathbb{Z}\Phi^{\vee}$. The maximal height in
the dual root system $\Phi^{\vee}$ is also $\text{hgt}(\mathfrak{g})$.
Therefore the same proof yields a bound
$e\leq\text{hgt}(\mathfrak{g})^{R-1}$. This implies
$J(\mathbf{H}_{der})\,\leq\,e\,d\,J(\mathbf{G})\,\leq\,\text{hgt}(\mathfrak{g})^{2R-2}\cdot
J(\mathbf{G})$
∎
###### Proposition 4.19.
Let $\mathbf{G}$ be connected reductive. Let $A\in\mathfrak{g}_{F}$ be a
connection. Then there exist $x\in\mathbf{G}(F_{b})$ for some positive integer
$b$ such that $x\cdot A$ is in canonical form. If we set
$R\vcentcolon=\text{rank}(\mathbf{G}_{der})$, then $b$ can be chosen so that
$b\;\leq\;2\,\text{hgt}(\mathfrak{g})^{2R-1}\cdot
J(\mathbf{G}_{\text{der}})\;\cdot\prod_{j=0}^{\left\lfloor{\frac{\text{dim}(\mathbf{G}_{\text{der}})}{3}}\right\rfloor}\left(4\,\text{hgt}(\mathfrak{g})+2\right)^{\left\lfloor{}\frac{1}{2}\left(\text{dim}(\mathbf{G}_{\text{der}})-3j\right)\right\rfloor}$
###### Proof.
We have to keep track of how much ramification is needed to perform each of
the steps in Algorithm 4.16. Recall our six operations:
1. (i)
Apply Lemma 4.6. No ramification is needed for this operation, as is apparent
from the proof of the lemma.
2. (ii)
Apply Lemma 4.9. No ramification is needed for this operation. This also
follows directly from the proof of the lemma.
3. (iii)
Apply Proposition 4.12 Case 1. We need to pass to a $2$-cover.
4. (iv)
Apply Proposition 4.12 Case 2. We need to pass to a $2b$-cover, where $b$ is
such that $b\delta\in\mathbb{Z}$. By Remark 4.11, we know that we can choose
$b\leq 2\Lambda-1\leq 2\text{hgt}(\mathfrak{g})+1$.
5. (v)
Find the canonical form of a connection in a torus. No ramification is needed
to perform this operation, by the proof of Proposition 3.12.
6. (vi)
Find the canonical form for a connection of the first kind (as in Theorem
3.6). By Lemma 3.9, we can perform this operation after passing to a $b$-cover
with $b\leq\text{hgt}(\mathfrak{g})\cdot I(\mathbf{G})$.
We know that operations (iii) and (vi) will be used only once, at the end of
the algorithm. This gives us a factor of $2\,\text{hgt}(\mathfrak{g})\cdot
I(\mathbf{H}_{\text{der}})$, where $\mathbf{H}$ is the centralizer
$Z_{\mathbf{G}_{der}}\left(\\{D_{1},D_{2},...,D_{l}\\}\right)$ of a finite set
of pairwise commuting semisimple elements $D_{i}$ in $\mathfrak{g}_{der}$.
Since $I(\mathbf{H}_{der})\leq J(\mathbf{H}_{der})$, this is bounded by
$2\,\text{hgt}(\mathfrak{g})\cdot J(\mathbf{H}_{\text{der}})$. By Lemma 4.18
we known that
$J(\mathbf{H}_{\text{der}})\leq\text{hgt}(\mathfrak{g})^{2R-2}\cdot
J(\mathbf{G}_{\text{der}})$. This yields the first factor in the bound above.
We are now left to count the amount of times that we need to apply operation
(iv) in our algorithm. Each time we apply it we need to pass to a cover of
ramification at most $4\,\text{hgt}(\mathfrak{g})+2$. By the remark after the
proof of Theorem 4.3, we need to apply operation (iv) at most
$\left\lfloor{\frac{1}{2}\text{dim}(\mathbf{G}_{\text{der}})}\right\rfloor$
times before we are in the case when $A_{r}$ is not nilpotent. We therefore
pick up a ramification of at most
$\left(4\,\text{hgt}(\mathfrak{g})+2\right)^{\left\lfloor{}\frac{1}{2}\text{dim}(\mathbf{G}_{\text{der}})\right\rfloor}$.
After that we change our group.
We can apply operation (i) and split off the central part in order to pass to
a proper semisimple subgroup
$\mathbf{H}_{der}\vcentcolon=\left(Z_{\mathbf{G}}\left(\,(A_{r})_{s}\,\right)\right)_{\text{der}}$.
Notice that
$\text{dim}(\mathbf{H}_{der})\leq\text{dim}(\mathbf{G}_{\text{der}})-3$,
because we are removing at least two root spaces (positive and negative pair)
and the nontrivial central torus of the centralizer $\mathbf{H}$. Now we start
all over again. We know that we need to apply operation (iv) at most
$\left\lfloor{\frac{1}{2}\left(\text{dim}(\mathbf{G}_{\text{der}})-3\right)}\right\rfloor$-many
times until $A_{r}$ is not nilpotent. So we pick up a ramification of at most
$\left(4\,\text{hgt}(\mathfrak{g})+2\right)^{\left\lfloor{\frac{1}{2}\left(\text{dim}(\mathbf{G}_{\text{der}})-3\right)}\right\rfloor}$.
Iterating this procedure, we get the product appearing in the bound above. ∎
###### Remark 4.20.
In terms of dimension, the right hand side is $J(\mathbf{G}_{\text{der}})\cdot
e^{O\left(\text{dim}(\mathbf{G}_{\text{der}})^{2}\,\text{log}\,\text{dim}(\mathbf{G}_{\text{der}})\right)}$.
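To get a feel for the numbers: for $\mathbf{G}=\text{SL}_{2}$ (a rough evaluation, using $R=1$, $\text{hgt}(\mathfrak{g})=1$ and $\text{dim}(\mathbf{G}_{\text{der}})=3$) the product contributes $\left(4+2\right)^{1}\cdot\left(4+2\right)^{0}=6$, so the bound reads $b\leq 12\,J(\text{SL}_{2})$. Compare with Example 4.14, where a single shearing step only needs a $4$-ramified cover; the uniform bound is far from sharp.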
We proceed to establish a quantitative refinement of Theorem 4.3. It
essentially follows from keeping track of some indices in the operations for
the algorithm above. It makes sense once we know that the irregular part of a
canonical form is unique up to conjugacy, as stated in Theorem 5.5 below. The
following result over $\mathbb{C}$ can already be found in the work of Babbitt
and Varadarajan [BV83, 9.7].
###### Proposition 4.21 (Determinacy for the Irregular part of the Canonical
Form).
Let $\mathbf{G}$ be connected reductive. Let $A=\sum_{j=r}^{\infty}A_{j}t^{j}$
be a connection in $\mathfrak{g}_{F}$. The irregular part of the canonical
form of $A$ depends only on $A_{r+m}$ for $0\leq
m<\left(\text{hgt}(\mathfrak{g})+1\right)(|r|-1)$.
###### Proof.
It suffices to check the steps in Algorithm 4.16. In some steps we replace the
group $\mathbf{G}$ by a proper subgroup (either a centralizer or the derived
subgroup). This can only decrease the quantity
$\left(\text{hgt}(\mathfrak{g})+1\right)(|r|-1)$, so we can safely ignore
these changes of groups. We are left to study the effect of operations
(i)-(vi) in Algorithm 4.16.
The last operation (vi) has no effect on the irregular part of the connection,
so there is nothing to do here. Step (v) takes a connection
$A=\sum^{\infty}_{j=r}A_{j}\,t^{j}$ and outputs its truncation
$A=\sum^{-1}_{j=r}A_{j}\,t^{j}$ (see the proof of Proposition 3.12). The
output is therefore determined by the coefficients given in the statement of
the proposition.
Step (iii) outputs a connection with no irregular part. Notice that in
Proposition 4.12 we can determine whether we are in Case 1 (i.e. when we have
to perform Step (iii)) based on the value of $\delta$. This depends only on
the $A_{r+m}$ for $0\leq m<\Lambda\left(A_{r}\right)(|r|-1)$. Since
$\Lambda\left(A_{r}\right)\leq\text{hgt}(\mathfrak{g})+1$ by Example 2.15,
this case can be determined by the coefficients provided.
For the remaining operations (i), (ii) and (iv), we start with a given
connection $A$ with lowest coefficient $A_{r}$ and output an irregular
connection $B$ with lowest coefficient $B_{r^{\prime}}$. The proposition will
follow if we can prove that for each of these operations the coefficients
$B_{r^{\prime}+m}$ for $0\leq
m<\left(\text{hgt}(\mathfrak{g})+1\right)\left(|r^{\prime}|-1\right)$ are completely
determined by the coefficients $A_{r+m}$ for $0\leq
m<\left(\text{hgt}(\mathfrak{g})+1\right)(|r|-1)$.
Operations (i) and (ii) are very similar. In this case we have $r=r^{\prime}$.
Let $m$ be an integer. From the proofs of Lemma 4.6 and Lemma 4.9, it follows
that $B_{m}$ is determined by $A_{j}$ for $j\leq m$. So we are done with these
operations.
We are left with operation (iv). Recall from the proof of Proposition 4.12
Case 2 that we have
$\displaystyle B=t^{-b\delta H}\cdot\tilde{A}$
$\displaystyle=\;\;2b\sum_{m=0}^{\infty}\text{Ad}(t^{-b\delta
H})A_{r+m}\,t^{2b(r+m)+2b-1}\;-\;b\delta Ht^{-1}$
The term $b\delta Ht^{-1}$ is determined by the knowledge of $\delta$ and
$H=\left(A_{r}\right)_{s}$. For the infinite sum, we can write the root
decompositions $A_{r+m}=\sum_{\beta\in\Phi}A^{\beta}_{r+m}$ and use that
$\text{Ad}(t^{-b\delta
H})\,A^{\beta}_{r+m}=t^{-b\delta\beta(H)}\,A^{\beta}_{r+m}$ in order to get
$\displaystyle\text{Ad}(t^{-b\delta
H})A^{\beta}_{r+m}\,t^{2b(r+m)+2b-1}\;=A^{\beta}_{r+m}\,t^{-b\delta\beta(H)+2b(r+m)+2b-1}=A^{\beta}_{r+m}\,t^{r^{\prime}+2bm-b\delta\beta(H)}$
By Example 2.15 we know $\beta\left(H\right)\leq 2\,\text{hgt}(\mathfrak{g})$.
Suppose that a positive integer $m$ satisfies
$2bm-b\delta\beta(H)<\left(\text{hgt}(\mathfrak{g})+1\right)(|r^{\prime}|-1)$.
Some algebraic manipulations show that
$m<\left(\text{hgt}(\mathfrak{g})+1\right)\left(|r|-1\right)$. So indeed the coefficients
$B_{r^{\prime}+m}$ for $0\leq
m<\left(\text{hgt}(\mathfrak{g})+1\right)\left(|r^{\prime}|-1\right)$ are completely
determined by the coefficients $A_{r+m}$ for $0\leq
m<\left(\text{hgt}(\mathfrak{g})+1\right)(|r|-1)$. ∎
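As a concrete illustration of this bound: for $\mathbf{G}=\mathrm{SL}_{2}$ the highest root of $\mathfrak{sl}_{2}$ has height $1$, so a connection with lowest term at $r=-3$ satisfies $\left(\text{hgt}(\mathfrak{g})+1\right)(|r|-1)=2\cdot 2=4$, and the irregular part of its canonical form depends only on $A_{-3},A_{-2},A_{-1},A_{0}$. For $r=-1$ the bound is $0$, consistent with the fact that a connection of the first kind has no irregular part.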
###### Remark 4.22.
We can think of Proposition 4.21 as a continuity statement. It says that a
small perturbation of the original connection will not alter the irregular
part of its canonical form. This is analogous to the finite determinacy
theorem for analytic singularities, as in [dJP00] Theorem 9.4 in page 313.
One can obtain a similar continuity statement for the residue of the canonical
connection (i.e. the coefficient of $t^{-1}$ in the canonical form). However
the explicit bound for the number of terms needed is complicated and not very
illuminating. We refrain from including a formula for the bound.
###### Proposition 4.23.
Let $\mathbf{G}$ be connected reductive and let $A\in\mathfrak{g}_{F}$ be a
connection. There exists a positive integer $n$ such that all connections
$C\in\mathfrak{g}_{F}$ satisfying $C\equiv A\;(mod\;t^{n})$ are
$\mathbf{G}(\overline{F})$-gauge equivalent to $A$.
###### Proof.
In Algorithm 4.16 we apply operations (iii) and (vi) exactly once at the very
end. Suppose that we are given the coefficients $A_{r+m}$ for $0\leq
m\leq\left(\text{hgt}(\mathfrak{g})+1\right)(|r|-1)$ in a given connection
$A$. Let $D$ be the output of applying one of the operations (i), (ii), (iv)
or (v) to $A$. The proof of Proposition 4.21 implies that we can determine the
corresponding coefficients $D_{r^{\prime}+m}$ for $0\leq
m\leq\left(\text{hgt}(\mathfrak{g})+1\right)(|r^{\prime}|-1)$.
We can iterate this reasoning. Suppose that
$D=\sum_{j=r^{\prime}}^{\infty}D_{j}\,t^{j}$ is the output of the algorithm
before applying the last two steps (operations (iii) and (vi)). Then we see
that the coefficients $D_{r^{\prime}+m}$ for $0\leq
m<\left(\text{hgt}(\mathfrak{g})+1\right)(|r^{\prime}|-1)$ are completely
determined by $A_{r+m}$ for $0\leq
m\leq\left(\text{hgt}(\mathfrak{g})+1\right)(|r|-1)$, where $A$ is the
original connection we start with. The number of steps needed in the algorithm
is also completely determined.
By the statement of Proposition 4.12 Case 1, we will be able to determine the
residue (i.e. the coefficient of $t^{-1}$) when we apply operation (iii) to
$D$. The output of operation (iii) will be a connection of the first kind
$B=\sum_{j=-1}^{\infty}B_{j}\,t^{j}$, and we can compute
$k\left(B_{-1}\right)$ (see the remark after Lemma 3.3).
Now we need to determine the result of applying operation (vi) to $B$ as
above. By Remark 3.4, this will be determined by $B_{j}$ for $-1\leq j\leq
k(B_{-1})$. We can then work backwards using an argument similar to the proof
of Proposition 4.21 to find a number $n$ big enough so that the coefficients
$B_{j}$ for $-1\leq j\leq k(B_{-1})$ are determined by $A_{j}$ for $r\leq
j<n$. ∎
## 5 Irregular connections for arbitrary linear algebraic groups
We proceed as in the regular case. Just as before, we start with solvable
groups.
### 5.1 Irregular connections for solvable groups
We will again make use of the map $\pi:\text{Lie}(\mathbf{\mathbf{T}})\cong
X_{*}(\mathbf{T})\otimes\mathbf{k}\longrightarrow
X_{*}(\mathbf{T})\otimes\mathbb{Q}$ as in Proposition 3.26.
###### Proposition 5.1.
Let $\mathbf{G}$ be of the form $\mathbf{T}\ltimes\mathbf{U}$, where
$\mathbf{T}$ is a torus and $\mathbf{U}$ is unipotent. Let
$A\in\mathfrak{g}_{F}$ be a formal connection. Write
$A=A_{\mathbf{T}}+A_{\mathbf{U}}$ for some
$A_{\mathbf{T}}\in\text{Lie}(\mathbf{T})_{F}$ and
$A_{\mathbf{U}}\in\text{Lie}(\mathbf{U})_{F}$. Let $b$ be a positive integer
such that $b\,\pi\left(\,(A_{\mathbf{T}})_{-1}\,\right)\in X_{*}(\mathbf{T})$.
Then there exists $x\in\mathbf{G}(F_{b})$ such that $x\cdot
A=\sum_{j=1}^{l}D_{j}\,t^{r_{j}}\,+\,t^{-1}\,C$ with:
1. (1)
$r_{j}\in\mathbb{\mathbb{Z}}_{<-1}$ for all $j$.
2. (2)
$D_{j}\in\text{Lie}(\mathbf{T})$ for all $j$.
3. (3)
$[D_{j},\,C]=0$ for all $j$.
4. (4)
$\pi(C_{s})=0$.
###### Proof.
The global structure of the proof is very similar to the argument in
Proposition 3.26. Write $A_{\mathbf{T}}=\sum_{j=-q}^{\infty}t^{j}\,D_{j}$. By
Proposition 4.4 (a), we can find $g\in\mathbf{T}(F)$ with $g\cdot
A_{\mathbf{T}}=\sum_{j=-q}^{-1}t^{j}\,D_{j}$. Set
$\mu\vcentcolon=b\,\pi\left(D_{-1}\right)\in X_{*}(\mathbf{T})$. Then we have
$(t^{\frac{1}{b}\mu}\,g)\cdot
A_{\mathbf{T}}=\sum_{j=-q}^{-2}D_{j}\,t^{j}\,+\,t^{-1}\,C_{\mathbf{T}}$ for
some $C_{\mathbf{T}}\in\text{Lie}(\mathbf{T})$ with $\pi(C_{\mathbf{T}})=0$.
Replace $A$ with $B\vcentcolon=(t^{\frac{1}{b}\,\mu}\,g)\cdot A$. We know that
$B=\sum_{j=-q}^{-2}D_{j}\,t^{j}\,+\,t^{-1}\,C_{\mathbf{T}}+B_{\mathbf{U}}$ for
some $B_{\mathbf{U}}\in\text{Lie}(\mathbf{U})_{F_{b}}$. By lifting to the
$b$-ramified cover, we can assume that
$B_{\mathbf{U}}\in\text{Lie}(\mathbf{U})_{F}$. We claim that we can find
$u\in\mathbf{U}(F)$ such that $u\cdot
B=\sum_{j=-q}^{-2}D_{j}\,t^{j}\,+\,t^{-1}\,C_{\mathbf{T}}+t^{-1}\,C_{\mathbf{U}}$
with $C_{\mathbf{U}}\in\text{Lie}(\mathbf{U})$ and
$[C_{\mathbf{T}},C_{\mathbf{U}}]=[D_{j},C_{\mathbf{U}}]=0$ for all $j$. We
will show this by induction on the dimension of $\mathbf{U}$.
The base case is $\mathbf{U}=\mathbb{G}_{a}$. Then, $\mathbf{T}$ acts on
$\mathbf{U}$ by a character $\chi:\mathbf{T}\longrightarrow\mathbb{G}_{m}$.
For $u=\sum_{j=r}^{\infty}u_{j}\,t^{j}\in\mathbf{U}(F)$, we have
$u\cdot
B=t^{-1}\,C_{\mathbf{T}}\,+\,B_{\mathbf{U}}\,-\,\sum_{j=r-q}^{\infty}\left[\left(d\chi(C_{\mathbf{T}})-j\right)u_{j}+\sum_{i=2}^{q}d\chi(D_{i})u_{j+i-1}\right]\,t^{j-1}$
We have two cases
1. (1)
Suppose that $d\chi(D_{j})\neq 0$ for some $j$. Then, we can solve the
recurrence
$\left(d\chi(C_{\mathbf{T}})-j\right)u_{j}+\sum_{i=2}^{q}d\chi(D_{i})u_{j+i}=B_{j-1}$
with initial values $u_{j}=0$ for $j\ll 0$. This yields an element
$u\in\mathbf{U}(F)$ with $u\cdot
B=\sum_{j=-q}^{-2}D_{j}\,t^{j}\,+\,t^{-1}\,C_{\mathbf{T}}$.
2. (2)
Suppose that $d\chi(D_{j})=0$ for all $j$. The argument for the base case in
Proposition 3.26 shows that there is an element $u\in\mathbf{U}(F)$ such that
$u\cdot
B=\sum_{j=-q}^{-2}D_{j}\,t^{j}\,+\,t^{-1}\,C_{\mathbf{T}}\,+\,t^{-1}\,C_{\mathbf{U}}$
for some $C_{\mathbf{U}}\in\text{Lie}(\mathbf{U})$ satisfying
$[C_{\mathbf{T}},C_{\mathbf{U}}]=0$. Notice that we have
$[D_{j},\,C_{\mathbf{U}}]=d\chi(D_{j})\,C_{\mathbf{U}}=0$ by assumption. So we
are done in this case.
This concludes the proof of the base case.
Let’s proceed with the induction step. We can decompose the action of the
split torus $\mathbf{T}$ on the vector space $Z_{\mathbf{U}}$ into one-
dimensional spaces. Let $\mathbf{H}\cong\mathbb{G}_{a}\leq Z_{\mathbf{U}}$ be
one of these eigenspaces. Let $s$ be a $\mathbf{T}$-equivariant section of the
morphism of schemes $\mathbf{U}\longrightarrow\mathbf{U}/\,\mathbf{H}$ as in
the proof of Proposition 3.26.
Let $\overline{B}$ be the image of $B$ in the quotient
$\text{Lie}(\mathbf{U}/\,\mathbf{H})_{F}$. By the induction hypothesis, we can
find $\overline{u}\in\mathbf{U}/\,\mathbf{H}(F)$ such that
$\overline{u}\cdot\overline{B}=\sum_{j=-q}^{-2}D_{j}\,t^{j}\,+\,t^{-1}\,C_{\mathbf{T}}+t^{-1}\,\overline{E}$
for some $\overline{E}\in\text{Lie}\left(\mathbf{U}/\,\mathbf{H}\right)$ with
$[D_{j},\overline{E}]=[C_{\mathbf{T}},\overline{E}]=0$. We can then write
$s(\overline{u})\cdot
B=\sum_{j=-q}^{-2}D_{j}\,t^{j}\,+\,t^{-1}\,C_{\mathbf{T}}+t^{-1}\,ds(\overline{E})+B_{\mathbf{H}}$
for some $B_{\mathbf{H}}\in\text{Lie}(\mathbf{H})_{F}$. Since $s$ is
$\mathbf{T}$-equivariant, we have
$[ds(\overline{E}),D_{j}]=[ds(\overline{E}),\,C_{\mathbf{T}}]=0$. We can use
the base case for $\mathbf{H}$ in order to conclude. ∎
We end with a generalization of Proposition 3.29. We will use the same
notation as in the regular case. Let $A=A^{\mathbf{T}}+A^{\mathbf{U}}$ be a
formal connection with $A^{\mathbf{T}}\in\text{Lie}(\mathbf{T})_{F}$ and
$A^{\mathbf{U}}\in\text{Lie}(\mathbf{U})_{F}$. Write
$A^{\mathbf{T}}=\sum_{j=-q}^{-1}A^{\mathbf{T}}_{j}\,t^{j}\,+\,\sum_{j=p}^{\infty}A^{\mathbf{T}}_{j}\,t^{j}$
for some $q,p\geq 0$. Also, write
$A^{\mathbf{U}}=\sum_{j=m}^{\infty}A^{\mathbf{U}}_{j}\,t^{j}$.
###### Proposition 5.2.
Keep the same notation as above. Assume that $\mathbf{U}$ has nilpotency class
$n$.
1. (i)
Suppose that $m>L-1$. Then there exists $x\in\mathbf{G}(\mathcal{O})$ such
that $x\cdot A=\sum_{-q}^{-1}A^{\mathbf{T}}_{j}\,t^{j}$. More precisely, there
exist $x_{\mathbf{T}}\in\mathbf{T}(\mathcal{O})$ with $x_{\mathbf{T}}\equiv
1_{\mathbf{T}}\,\left(mod\;t^{p+1}\right)$ and
$x_{\mathbf{U}}\in\mathbf{U}(\mathcal{O})$ with $x_{\mathbf{U}}\equiv
1_{\mathbf{U}}\,\left(mod\;t^{m+1}\right)$ such that
$(x_{\mathbf{U}}x_{\mathbf{T}})\cdot
A=\sum_{-q}^{-1}A^{\mathbf{T}}_{j}\,t^{j}$.
2. (ii)
Suppose that $m\leq L-1$. Then the $\mathbf{G}(F)$-gauge equivalence class of
$A$ is determined by the coefficients $A^{\mathbf{T}}_{j}$ for $-q\leq
j<(n+1)(|m|-1)+L$ and $A^{\mathbf{U}}_{j}$ for $-q\leq j<n(|m|-1)+L$. More
precisely, suppose that there is another connection $B$ and an integer $k\geq
n(|m|-1)+L$ satisfying $A^{\mathbf{T}}\equiv
B^{\mathbf{T}}\,\left(mod\;t^{k+|m|-1}\right)$ and $A^{\mathbf{U}}\equiv
B^{\mathbf{U}}\,\left(mod\;t^{k}\right)$. Then, there exists
$x\in\mathbf{G}(\mathcal{O})$ with $x\equiv
1\,\left(mod\;t^{k-n|m|+n+1}\right)$ such that $x\cdot A=B$.
###### Proof.
The proof is similar in spirit to the argument for Proposition 3.29, but it
involves an extra subtle twist to deal with the negative powers.
1. (i)
Just as in Proposition 3.29, we can find
$x_{\mathbf{T}}\in\mathbf{T}(\mathcal{O})$ with $x_{\mathbf{T}}\equiv
1_{\mathbf{T}}\,\left(mod\;t^{p+1}\right)$ such that
$C\vcentcolon=x_{\mathbf{T}}\cdot
A=\sum_{-q}^{-1}A^{\mathbf{T}}_{j}\,t^{j}\,+\,C^{\mathbf{U}}$
for some $C^{\mathbf{U}}\in\mathfrak{u}_{\mathcal{O}}$. Moreover we have
$C^{\mathbf{U}}\equiv 0\,\left(mod\;t^{m}\right)$. We claim that there exists
$u\in\mathbf{U}(\mathcal{O})$ with $u\equiv
1_{\mathbf{U}}\,\left(mod\;t^{m+1}\right)$ such that $u\cdot
C=\sum_{-q}^{-1}A^{\mathbf{T}}_{j}\,t^{j}$. This claim finishes the proof of
part (i).
In order to prove the claim, we will actually show something stronger. Let us
fix some notation. By [BS68] Corollary 9.12, there is a
$\mathbf{T}$-equivariant map of $\mathbf{k}$-schemes
$\psi_{\mathbf{U}}:\mathbf{U}\longrightarrow\mathfrak{u}$. We can define this
map so that the following diagram commutes
$\begin{CD}\mathbf{U} @>>> \mathbf{U}/\,Z_{\mathbf{U}}\\ @V{\psi_{\mathbf{U}}}VV @VV{\psi_{\mathbf{U}/\,Z_{\mathbf{U}}}}V\\ \mathfrak{u} @>>> \mathfrak{u}/\,\mathfrak{z}\end{CD}$
Here $Z_{\mathbf{U}}$ is the center of $\mathbf{U}$ and
$\mathfrak{z}=\text{Lie}(Z_{\mathbf{U}})$. Notice that $Z_{\mathbf{U}}$ is
just a direct sum of copies of $\mathbb{G}_{a}$. The corresponding map
$\psi_{Z_{\mathbf{U}}}$ can be taken to be the usual identification of a
vector space with its tangent space at the identity. By iterating, we can
arrange so that we get a corresponding compatibility at each step of the upper
central series of $\mathbf{U}$.
Recall that we have a weight decomposition
$\mathfrak{u}=\bigoplus_{i=1}^{l}\mathfrak{u}_{\chi_{i}}$. Via the isomorphism
$\psi_{\mathbf{U}}$, we can get a decomposition
$\mathbf{U}=\prod_{\chi_{i}}\mathbf{U}_{\chi_{i}}$ as a product of schemes.
For $u\in\mathbf{U}(\mathbf{k})$, we will denote by $u_{\chi_{i}}$ the
corresponding component in $\mathbf{U}_{\chi_{i}}$.
For each $i$, define $a_{i}$ to be the biggest positive integer $j$ such that
$d\chi_{i}\left(A^{\mathbf{T}}_{-j}\right)\neq 0$. If
$d\chi_{i}\left(A^{\mathbf{T}}_{-j}\right)=0$ for all $j>0$, we set $a_{i}=1$.
Then, we claim that we can find $u\in\mathbf{U}(\mathcal{O})$ with
$u_{\chi_{i}}\equiv 1_{\mathbf{U}}\,\left(mod\;t^{m+a_{i}}\right)$ such that
$u\cdot C=\sum_{-q}^{-1}A^{\mathbf{T}}_{j}\,t^{j}$. We will prove this
stronger claim by induction on the nilpotency class of $\mathbf{U}$.
For the base case $n=0$, we have $\mathbf{U}\cong\mathbb{G}_{a}^{d}$ for some
$d$. By decomposing into one-dimensional $\mathbf{T}$-modules and looking at
each coordinate, we can reduce to the case $d=1$. So we have a single weight
space $\mathfrak{u}_{\chi_{i}}$. This case amounts to solving a recurrence as
in the computation for the base case in Proposition 5.1. We want to find
$u=\sum_{j=0}^{\infty}u_{j}\,t^{j}$ satisfying
$\left(d\chi_{i}(A^{\mathbf{T}}_{-1})-j\right)u_{j}+\sum_{k=2}^{q}d\chi_{i}(A^{\mathbf{T}}_{-k})u_{j+k}=C^{\mathbf{U}}_{j-1}$
By the definition of $a_{i}$, this is the same as
$\left(d\chi_{i}(A^{\mathbf{T}}_{-1})-j\right)u_{j}+\sum_{k=2}^{a_{i}}d\chi_{i}(A^{\mathbf{T}}_{-k})u_{j+k}=C^{\mathbf{U}}_{j-1}$
There are two different cases.
1. (1)
If $a_{i}=1$, then the recurrence reduces to
$\left(d\chi_{i}(A^{\mathbf{T}}_{-1})-j\right)u_{j}=C^{\mathbf{U}}_{j-1}$
The claim follows by the argument for the base case in Proposition 3.29.
2. (2)
Suppose that $a_{i}\neq 1$. We know that
$d\chi_{i}(A^{\mathbf{T}}_{-a_{i}})\neq 0$. We can solve the recurrence by
rewriting
$d\chi_{i}(A^{\mathbf{T}}_{-a_{i}})\,u_{j+a_{i}}=C^{\mathbf{U}}_{j-1}-\left(d\chi_{i}(A^{\mathbf{T}}_{-1})-j\right)u_{j}-\sum_{k=2}^{a_{i}-1}d\chi_{i}(A^{\mathbf{T}}_{-k})u_{j+k}$
Since $C^{\mathbf{U}}_{j}=0$ for all $j\leq m-1$, we can set $u_{j}=0$ for all
$j\leq m+a_{i}$. Then we can solve for the rest of the $u_{j}$ using the
recursion formula above.
Let’s proceed with the induction step. Notice that $\mathfrak{z}$ is a direct
sum of some one-dimensional $\mathbf{T}$-submodules of $\mathfrak{u}$. We can
get an identification of $\mathfrak{u}/\,\mathfrak{z}$ with the direct sum of
some choice of remaining one-dimensional $\mathbf{T}$-submodules. This way we
get a $\mathbf{T}$-equivariant inclusion
$\mathfrak{u}/\,\mathfrak{z}\hookrightarrow\mathfrak{u}$. We can get a
$\mathbf{T}$-equivariant section
$s:\mathbf{U}/\,Z_{\mathbf{U}}\longrightarrow\mathbf{U}$ defined by the
composition
$s:\mathbf{U}/\,Z_{\mathbf{U}}\,\xrightarrow{\;\psi_{\mathbf{U}/\,Z_{\mathbf{U}}}\;}\,\mathfrak{u}/\,\mathfrak{z}\,\xhookrightarrow{\;\;\;\;}\,\mathfrak{u}\,\xrightarrow{\;\psi_{\mathbf{U}}^{-1}\;}\,\mathbf{U}$
Let $\overline{C}$ be the image of $C$ in the quotient
$\text{Lie}(\mathbf{T}\ltimes\mathbf{U}/\,Z_{\mathbf{U}})_{F_{b}}$. By the
induction hypothesis, there exists
$\overline{x}\in\mathbf{U}/\,Z_{\mathbf{U}}(\mathcal{O})$ such that
$\overline{x}_{\chi_{i}}\equiv 1\;\left(mod\;t^{m+a_{i}}\right)$ and
$\overline{x}\cdot\overline{C}=\sum_{-q}^{-1}A^{\mathbf{T}}_{j}\,t^{j}$. By
the $\mathbf{T}$-equivariance of $s$, we must then have $s(\overline{x})\cdot
C=\sum_{-q}^{-1}A^{\mathbf{T}}_{j}\,t^{j}\,+\,D_{Z_{\mathbf{U}}}$ for some
$D_{Z_{\mathbf{U}}}\in\text{Lie}(Z_{\mathbf{U}})_{F}$. By definition
$s(\overline{x})\cdot
C\;=\;\sum_{-q}^{-1}t^{j}\text{Ad}(s(\overline{x}))A^{\mathbf{T}}_{j}\,+\,\text{Ad}(s(\overline{x}))C^{\mathbf{U}}\,+\,ds(\overline{x})s(\overline{x})^{-1}$
Since $s(\overline{x})\equiv s(\overline{x})^{-1}\equiv
1\;\left(mod\;t^{m+1}\right)$, it follows that
$ds(\overline{x})s(\overline{x})^{-1}\equiv 0\,\left(mod\;t^{m+1}\right)$.
Also $\text{Ad}(s(\overline{x}))C^{\mathbf{U}}\equiv
C^{\mathbf{U}}\,\left(mod\;t^{m+1}\right)$, because by assumption
$C_{\mathbf{U}}\in\mathfrak{u}_{\mathcal{O}}$. We are left to study
$\text{Ad}(s(\overline{x}))A^{\mathbf{T}}_{j}$.
Consider the map of $\mathbf{k}$-schemes
$\varphi_{j}:\mathbf{U}\longrightarrow\mathfrak{u}$ given by
$\varphi_{j}(u)\vcentcolon=\text{Ad}(u)A^{\mathbf{T}}_{j}-A^{\mathbf{T}}_{j}$.
By construction $\varphi_{j}$ is $\mathbf{T}$-equivariant. This means that it
must respect the decomposition into weight spaces. In other words, the
$\chi_{i}$-coordinate of $\varphi_{j}(u)$ is given by
$\varphi_{j}(u_{\chi_{i}})$. In particular, this means that
$\text{Ad}(s(\overline{x}))A^{\mathbf{T}}_{j}=A^{\mathbf{T}}_{j}\,+\,\sum_{i=1}^{l}\left(\,\text{Ad}(s(\overline{x})_{\chi_{i}})A^{\mathbf{T}}_{j}-A^{\mathbf{T}}_{j}\,\right)$
We have that
$\text{Ad}(s(\overline{x})_{\chi_{i}})A^{\mathbf{T}}_{j}=A^{\mathbf{T}}_{j}$
whenever $d\chi_{i}\left(A^{\mathbf{T}}_{j}\right)=0$. By definition this
happens whenever $-j>a_{i}$. So we get
$\text{Ad}(s(\overline{x}))A^{\mathbf{T}}_{j}=A^{\mathbf{T}}_{j}\,+\,\sum_{-j\leq
a_{i}}\left(\,\text{Ad}(s(\overline{x})_{\chi_{i}})A^{\mathbf{T}}_{j}-A^{\mathbf{T}}_{j}\,\right)$
Suppose that $-j\leq a_{i}$. By assumption $s(\overline{x})_{\chi_{i}}\equiv
1\;\left(mod\;t^{m+a_{i}}\right)$, so in particular
$s(\overline{x})_{\chi_{i}}\equiv 1\;\left(mod\;t^{m-j}\right)$. Hence we have
$\text{Ad}(s(\overline{x})_{\chi_{i}})A^{\mathbf{T}}_{j}\,\equiv\,A^{\mathbf{T}}_{j}\,\left(mod\;t^{m-j}\right)$.
The sum above becomes
$\text{Ad}(s(\overline{x}))A^{\mathbf{T}}_{j}\;\equiv\;A^{\mathbf{T}}_{j}\;\left(mod\;t^{m-j}\right)$
Hence
$t^{j}\,\text{Ad}(s(\overline{x}))A^{\mathbf{T}}_{j}\,\equiv\,t^{j}\,A^{\mathbf{T}}_{j}\;\left(mod\;t^{m}\right)$.
We can put together all of the discussion above to conclude that
$s(\overline{x})\cdot
C\;\equiv\;\sum_{-q}^{-1}A^{\mathbf{T}}_{j}\,t^{j}\,+C^{\mathbf{U}}\;=\;C\;\left(mod\;t^{m}\right)$
Therefore $D_{Z_{\mathbf{U}}}\equiv 0\;\left(mod\;t^{m}\right)$. Now we can
conclude by using the base case for $Z_{\mathbf{U}}$.
2. (ii)
The hypothesis implies that we have equality of singular parts
$\sum_{j=-q}^{-1}B^{\mathbf{T}}_{j}\,t^{j}=\sum_{j=-q}^{-1}A^{\mathbf{T}}_{j}\,t^{j}$.
The proof of Proposition 3.12 shows that there exist
$x_{\mathbf{T}}\in\mathbf{T}(\mathcal{O})$ with $x_{\mathbf{T}}\equiv
1_{\mathbf{T}}\,\left(mod\;t^{p+1}\right)$ such that $x_{\mathbf{T}}\cdot
A^{\mathbf{T}}=B^{\mathbf{T}}$. Set $C\vcentcolon=x_{\mathbf{T}}\cdot A$. We
have $C=B^{\mathbf{T}}\,+\,\text{Ad}(x_{\mathbf{T}})A^{\mathbf{U}}$. Define
$C^{\mathbf{U}}\vcentcolon=\text{Ad}(x_{\mathbf{T}})A^{\mathbf{U}}$. We know
that $C^{\mathbf{U}}\equiv A^{\mathbf{U}}\,\left(mod\;t^{k}\right)$, because
$x_{\mathbf{T}}\equiv 1\,\left(mod\;t^{k+|m|}\right)$ and $A^{\mathbf{U}}\in
t^{m}\mathfrak{u}_{\mathcal{O}}$. Therefore $C^{\mathbf{U}}\equiv
B^{\mathbf{U}}\ \left(mod\;t^{k}\right)$ by assumption.
Let $s$, $\mathbf{U}_{\chi_{i}}$ and $a_{i}$ be defined as in part (i). We
claim that there exists $u\in\mathbf{U}(\mathcal{O})$ with $u_{\chi_{i}}\equiv
1\;\left(mod\;t^{k-n|m|+n+a_{i}}\right)$ such that $u\cdot C=B$. This implies
that $u\equiv 1\,\left(mod\;t^{k-n|m|+n+1}\right)$, so this claim concludes
the proof of part (ii). In order to prove the claim, we will induct on the
nilpotency class of $\mathbf{U}$. The base case $n=0$ follows again from the
explicit computation done in Proposition 5.1, we omit the details.
Let’s proceed with the induction step. Let $\overline{C}$ and $\overline{B}$
denote the images of $C$ and $B$ in the quotient
$\text{Lie}(\mathbf{T}\ltimes\mathbf{U}/\,Z_{\mathbf{U}})_{F}$. By the
induction hypothesis, there exists
$\overline{x}\in\mathbf{U}/\,Z_{\mathbf{U}}(\mathcal{O})$ with
$\overline{x}_{\chi_{i}}\equiv 1\;\left(mod\;t^{k-(n-1)|m|+n-1+a_{i}}\right)$
such that $\overline{x}\cdot\overline{C}=\overline{B}$. We can now write
$s(\overline{x})\cdot C=ds\left(\overline{B}\right)+E_{Z_{\mathbf{U}}}$ and
$B=ds\left(\overline{B}\right)+K_{Z_{\mathbf{U}}}$ for some
$E_{Z_{\mathbf{U}}},K_{Z_{\mathbf{U}}}\in\text{Lie}(Z_{\mathbf{U}})_{F}$. By
definition
$s(\overline{x})\cdot
C\;=\;\sum_{j=-q}^{\infty}t^{j}\text{Ad}(s(\overline{x}))B^{\mathbf{T}}_{j}\,+\,\text{Ad}(s(\overline{x}))C^{\mathbf{U}}\,+\,ds(\overline{x})s(\overline{x})^{-1}$
Since $s(\overline{x})\equiv 1\;\left(mod\;t^{k-(n-1)|m|+n}\right)$, it
follows that
$t^{j}\,\text{Ad}(s(\overline{x}))B^{\mathbf{T}}_{j}\,\equiv\,t^{j}\,B^{\mathbf{T}}_{j}\,\left(mod\;t^{k-(n-1)|m|+n}\right)$
for all $j\geq 0$. The same reasoning as in part (i) shows that
$t^{j}\,\text{Ad}(s(\overline{x}))B^{\mathbf{T}}_{j}\,\equiv\,t^{j}\,B^{\mathbf{T}}_{j}\,\left(mod\;t^{k-(n-1)|m|+n-1}\right)$
for all $j<0$. Also we know that
$\text{Ad}(s(\overline{x}))C^{\mathbf{U}}\,\equiv\,C^{\mathbf{U}}\,\left(mod\;t^{k-n|m|+n}\right)$,
because $s(\overline{x})\equiv 1\;\left(mod\;t^{k-(n-1)|m|+n}\right)$ and
$C_{\mathbf{U}}\in t^{m}\mathfrak{u}_{\mathcal{O}}$. We conclude that
$ds\left(\overline{B}\right)+E_{Z_{\mathbf{U}}}\;=\;s(\overline{x})\cdot
C\;\equiv\;B^{\mathbf{T}}\,+\,C^{\mathbf{U}}\;=\;C\;\left(mod\;t^{k-n|m|+n}\right)$
Since $k\geq k-n|m|$, we have $C\equiv B\;\left(mod\;t^{k-n|m|}\right)$. It
follows that $E_{Z_{\mathbf{U}}}\equiv
K_{Z_{\mathbf{U}}}\;\left(mod\;t^{k-n|m|+n}\right)$. Now by the base case we
can find $y\in Z_{\mathbf{U}}(\mathcal{O})$ with $y_{\chi_{i}}\equiv
1\;\left(mod\;t^{k-n|m|+n+a_{i}}\right)$ such that $(y\,s(\overline{x}))\cdot
C=B$. By the definition of $\mathbf{U}_{\chi_{i}}$ and its compatibility with
the center, we can see that $(y\,s(\overline{x}))_{\chi_{i}}\equiv
1\;\left(mod\;t^{k-n|m|+n+a_{i}}\right)$. The claim follows.
∎
### 5.2 Irregular connections for arbitrary linear algebraic groups
###### Theorem 5.3.
Let $\mathbf{G}$ be a connected linear algebraic group. Fix a Levi subgroup
$\mathbf{L}$ and a maximal torus $\mathbf{T}\subset\mathbf{L}$. Let
$A\in\mathfrak{g}_{\overline{F}}$ be a formal connection. Then there exists
$x\in\mathbf{G}(\overline{F})$ such that $x\cdot
A=\sum_{j=1}^{l}D_{j}\,t^{r_{j}}\,+\,t^{-1}\,C$ with
1. (1)
$r_{j}\in\mathbb{Q}_{<-1}$ for all $j$.
2. (2)
$D_{j}\in\text{Lie}(\mathbf{T})$ for all $j$.
3. (3)
$[D_{j},\,C]=0$ for all $j$.
4. (4)
$C_{s}\in\mathfrak{D}$.
5. (5)
$[C_{s},C]=0$.
###### Proof.
The same steps as in the proof of Theorem 3.32 reduce the result to the
solvable case (Proposition 5.1). ∎
A connection of the form $B=\sum_{j=1}^{l}D_{j}\,t^{r_{j}}\,+\,t^{-1}\,C$
satisfying conditions (1)-(3) above is said to be in canonical form. Let’s
formulate some uniqueness results for such irregular canonical forms. Before
doing this, we need a lemma.
###### Lemma 5.4.
Let $B=\sum_{j=1}^{l}D_{j}\,t^{r_{j}}+t^{-1}\,C$ and
$B^{\prime}=\sum_{j=1}^{s}D^{\prime}_{j}\,t^{r^{\prime}_{j}}+t^{-1}\,C^{\prime}$
be two connections in canonical form. Suppose that
$x\in\mathbf{G}(\overline{F})$ satisfies $x\cdot B=B^{\prime}$. Then all the
following statements are true
1. (1)
$l=s$ and $r_{j}=r^{\prime}_{j}$.
2. (2)
$\text{Ad}(x)D_{j}=D^{\prime}_{j}$ for all $j$.
3. (3)
$x\cdot\,(t^{-1}\,C)=t^{-1}\,C^{\prime}$.
###### Proof.
If we know both $(1)$ and $(2)$, then part $(3)$ follows. So we will focus on
the first couple of statements. By lifting everything to a ramified cover, we
can assume that $x\in\mathbf{G}(F)$. Choose a faithful representation
$\mathbf{G}\hookrightarrow\mathbf{\text{GL}_{n}}$. We can view
$x\in\mathbf{\text{GL}_{n}}(F)$ and
$B,B^{\prime}\in\mathfrak{gl}_{n}(\overline{F})$.
To simplify notation, let us add some trivial $D_{j}$ and $D^{\prime}_{j}$ so
that we have the same indexes and exponents for both $B_{\text{irr}}$ and
$B^{\prime}_{\text{irr}}$. We therefore write
$B=\sum_{j=1}^{l}D_{j}\,t^{r_{j}}+t^{-1}\,C$ and
$B^{\prime}=\sum_{j=1}^{l}D^{\prime}_{j}\,t^{r_{j}}+t^{-1}\,C^{\prime}$. Now
$D_{j}$ and $D^{\prime}_{j}$ are (possibly $0$) semisimple elements in
$\mathfrak{g}$. We claim that $\text{Ad}(x)D_{j}=D^{\prime}_{j}$ for all $j$.
This claim would imply that none of the new $D_{j}$ and $D^{\prime}_{j}$ are
$0$. This would mean that we didn’t actually add any extra terms. So both (1)
and (2) would follow. We are left to show the claim.
Let us consider the linear transformation $W$ in
$\text{End}(\mathfrak{gl}_{n})\,(\overline{F})$ given by
$W\,v\vcentcolon=B^{\prime}v\,-\,vB$ for all $v\in\mathfrak{gl}_{n}$. We can
write $W=\sum_{j=1}^{l}W_{j}\,t^{r_{j}}\,+\,t^{-1}\,U$, where
$\displaystyle W_{j}\in\text{End}(\mathfrak{gl}_{n})\;\text{is given
by}\;W_{j}\,v\vcentcolon=D^{\prime}_{j}v-vD_{j}$ $\displaystyle
U\in\text{End}(\mathfrak{gl}_{n})\;\text{is given
by}\;U\,v\vcentcolon=C^{\prime}v-vC$
Each $W_{j}$ is semisimple by definition. Also we have that the $W_{j}$s and
$U$ pairwise commute. Therefore there is a simultaneous spectral decomposition
$\mathfrak{gl}_{n}=\bigoplus_{\vec{\lambda}}(\mathfrak{gl}_{n})_{\vec{\lambda}}$
for the $W_{j}$s, where $\vec{\lambda}=(\lambda_{j})_{j=1}^{l}$ ranges over a
set of $l$-tuples of eigenvalues of the $W_{j}$s. Note that $W$ preserves this
spectral decomposition, because $U$ commutes with all $W_{j}$s.
The condition $x\cdot B=B^{\prime}$ can be expressed as $\frac{d}{dt}x=W\,x$.
Here we are viewing $x$ as an invertible matrix in $\mathfrak{gl}_{n}(F)$. We
can restrict to the $\vec{\lambda}$-eigenspace and use the decomposition for
$W$ in order to see that the component
$x_{\vec{\lambda}}\in(\mathfrak{gl}_{n})_{\vec{\lambda}}$ of $x$ satisfies
$\frac{d}{dt}x_{\vec{\lambda}}=\sum_{j=1}^{l}\lambda_{j}\,t^{r_{j}}\,x_{\vec{\lambda}}\,+\,t^{-1}\,U\,x_{\vec{\lambda}}$
Recall that $r_{j}<-1$ for all $j$. By comparing the smallest exponent of $t$
in both sides, we conclude that $x_{\vec{\lambda}}=0$ unless
$\vec{\lambda}=\vec{0}$. Hence $x\in(\mathfrak{gl}_{n})_{\vec{0}}\,(F)$. This
means that $\text{Ad}(x)D_{j}=D^{\prime}_{j}$ for all $j$. ∎
As a consequence, we get the following uniqueness result for all irregular
canonical forms that satisfy (1)-(5) as in Theorem 5.3.
###### Theorem 5.5.
Let $\mathbf{G}$ be a connected linear algebraic group. Fix a Levi subgroup
$\mathbf{L}$ and a maximal torus $\mathbf{T}\subset\mathbf{L}$. Let
$A=\sum_{j=1}^{l}D_{j}\,t^{r_{j}}\,+\,t^{-1}\,C$ and
$B=\sum_{j=1}^{l}D^{\prime}_{j}\,t^{r^{\prime}_{j}}\,+\,t^{-1}\,C^{\prime}$ be
two connections in canonical form. Suppose that
$C_{s},\,C^{\prime}_{s}\in\mathfrak{D}$ and
$[C_{s},C]=[C^{\prime}_{s},C^{\prime}]=0$. If there exists
$x\in\mathbf{G}(\overline{F})$ with $x\cdot A=B$, then we have
1. (1)
$C_{s}=C^{\prime}_{s}$.
2. (2)
$x\in Z_{\mathbf{G}}(C_{s})(\mathbf{k})$.
3. (3)
$\text{Ad}(x)\,D_{j}=D^{\prime}_{j}$ for all $j$.
###### Proof.
This follows from Lemma 5.4 combined with Proposition 3.34. ∎
We conclude this section with a determinacy result for arbitrary linear
algebraic groups.
###### Proposition 5.6.
Let $\mathbf{G}$ be a connected linear algebraic group. Let
$A\in\mathfrak{g}_{F}$ be a connection. There exists a positive integer $n$
such that all connections $C\in\mathfrak{g}_{F}$ satisfying $C\equiv
A\;(mod\;t^{n})$ are $\mathbf{G}(\overline{F})$-gauge equivalent to $A$.
###### Proof.
This follows from the corresponding determinacy results for reductive groups
(Proposition 4.23) and solvable groups (Proposition 5.2) via a reduction as in
the proof of Theorem 5.3. ∎
### 5.3 Galois cohomology for irregular connections
For this section $\mathbf{G}$ will be a connected linear algebraic group. We
will fix a choice of Levi subgroup $\mathbf{L}\subset\mathbf{G}$ and maximal
torus $\mathbf{T}\subset\mathbf{L}$. Let
$B=\sum_{j=1}^{l}D_{j}\,t^{r_{j}}+t^{-1}C$ be a connection in canonical form
with $C_{s}\in\mathfrak{D}$ and $[C_{s},C]=0$, as in the statement of Theorem
5.5. If $B_{\text{irr}}\neq 0$, then we don’t necessarily have
$B\in\mathfrak{g}_{F}$. Suppose that $B$ is in $\mathfrak{g}_{F_{b}}$, with
$b$ a given positive integer. Then we have a Galois action of
$\mathbf{\mu}_{b}\cong\text{Gal}(F_{b}/\,F)$ on $B$ by the formula
$\gamma\cdot
B=\sum_{j=1}^{l}\gamma^{-br_{j}}\,D_{j}\,t^{r_{j}}\,+\,t^{-1}\,C$. Because
this action is not necessarily trivial, we have to consider twisted cocycles in
order to classify connections over $\text{Spec}\,F$ with canonical form $B$.
###### Definition 5.7.
Let $b$ be a natural number. Let
$B=\sum_{j=1}^{l}D_{j}\,t^{r_{j}}+t^{-1}C\;\in\mathfrak{g}_{F_{b}}$ be a
connection in canonical form. A $B$-twisted $\mu_{b}$-cocycle is a map
$\phi\vcentcolon\mathbf{\mu}_{b}\longrightarrow Z_{G}(C)$ satisfying
1. (i)
$\text{Ad}(\phi_{\gamma})B=\gamma\cdot B$ for all $\gamma\in\mathbf{\mu}_{b}$.
2. (ii)
$\phi_{\gamma\gamma^{\prime}}=\phi_{\gamma}\phi_{\gamma^{\prime}}$ for all
$\gamma,\gamma^{\prime}\in\mathbf{\mu}_{b}$.
Fix a compatible choice of generators $\omega_{b}$ of $\mu_{b}$ for all $b$
positive, just as we did in the regular case. Note that a $B$-twisted
$\mu_{b}$ cocycle $\phi$ is completely determined by $\phi_{\omega_{b}}\in
Z_{G}(C)$. This is an element of finite order dividing $b$, and it satisfies
$\text{Ad}(\phi_{\omega_{b}})B=\omega_{b}\cdot B$. Conversely, for any element
$\phi_{\omega_{b}}\in Z_{G}(C)$ satisfying
$\text{Ad}(\phi_{\omega_{b}})B=\omega_{b}\cdot B$ we can get a corresponding
$B$-twisted cocycle.
Notice that the centralizer $Z_{G}(\{D_{1},...,D_{l},C\})$ acts on the set
of $B$-twisted $\mu_{b}$-cocycles by conjugation. By the same type of general
Galois cohomology argument as in the regular case, we get the following couple
of propositions. The proofs are omitted.
###### Proposition 5.8 (Criterion for Descent to $D^{*}$).
Let $b$ be a natural number. Let
$B=\sum_{j=1}^{l}D_{j}\,t^{r_{j}}+t^{-1}C\;\in\mathfrak{g}_{F_{b}}$ be a
connection in canonical form with $C_{s}\in\mathfrak{D}$ and $[C_{s},C]=0$.
Then $B$ is equivalent to a connection in $\mathfrak{g}_{F}$ via an element of
$\mathbf{G}(F_{b})$ if and only if there exists a $B$-twisted $\mu_{b}$-cocycle.
###### Proposition 5.9 (Classification of Connections over $D^{*}$).
Let $B=\sum_{j=1}^{l}D_{j}\,t^{r_{j}}+t^{-1}C\;\in\mathfrak{g}_{F_{b}}$ be a
connection in canonical form with $C_{s}\in\mathfrak{D}$ and $[C_{s},C]=0$.
Suppose that $B$ satisfies the equivalent statements in Proposition 5.8 above
for some $b$. Then the set of equivalence classes of $\mathbf{G}$-connections
over $D^{*}$ that become gauge equivalent over $\text{Spec}\,F_{b}$ are in
bijection with the set of $B$-twisted $\mu_{b}$-cocycles up to
$Z_{G}(\{D_{1},\,...,D_{l},\,C\})$-conjugacy.
The correspondence in Proposition 5.9 can be described as follows. Let
$\phi_{\omega_{b}}\in Z_{\mathbf{G}}(C)(\mathbf{k})$ be such that
$\text{Ad}(\phi_{\omega_{b}})B=\omega_{b}\cdot B$. By the vanishing of
$H^{1}_{\text{Gal}(F)}(\mathbf{G})$, we can find an element
$y\in\mathbf{G}(F_{b})$ such that $\omega_{b}\cdot y=y\,\phi_{\omega_{b}}$.
Then the connection associated to $\phi_{\omega_{b}}$ will be $A=y\cdot
B\;\in\mathfrak{g}_{F}$. Conversely, suppose that $A=y\cdot B$ is a connection
in $\mathfrak{g}_{F}$ for some $y\in\mathbf{G}(F_{b})$. We set
$\phi_{\omega_{b}}\vcentcolon=y^{-1}\,\left(\omega_{b}\cdot y\right)$.
As a consequence of this Galois cohomology classification, we can put a bound
on the denominators of the levels $r_{j}$ of a canonical form for a connection
in $\mathfrak{g}_{F}$. Let $W$ denote the Weyl group of $\mathbf{L}$ with
respect to $\mathbf{T}$. A Coxeter element of $W$ is an element of largest
length in $W$. All Coxeter elements are conjugate to each other. The Coxeter
number $h_{\mathbf{L}}$ of $\mathbf{L}$ is the order of a Coxeter element in
$W$.
###### Proposition 5.10.
Let $A\in\mathfrak{g}_{F}$ be a formal connection. Let
$B=\sum_{j=1}^{l}D_{j}\,t^{r_{j}}+t^{-1}C$ be a connection in canonical form
with $C_{s}\in\mathfrak{D}$ and $[C_{s},C]=0$. Suppose that $B$ is
$\mathbf{G}(\overline{F})$-gauge equivalent to $A$. Let $b$ be the smallest
positive integer such that $B\in\mathfrak{g}_{F_{b}}$. Then
1. (1)
$b$ divides a fundamental degree of $\text{Lie}(\textbf{L})$. In particular
$b\leq h_{\mathbf{L}}$.
2. (2)
If $b=h_{\mathbf{L}}$, then $C_{s}\in\text{Lie}(Z_{\mathbf{L}}^{0})$.
###### Proof.
Recall the notation $B_{irr}=\sum_{j=1}^{l}D_{j}\,t^{r_{j}}$. We have
$\mathbf{G}=\mathbf{L}\ltimes\mathbf{U}$, where $\mathbf{U}$ is the unipotent
radical. Write $\mathfrak{l}\vcentcolon=\text{Lie}(\mathbf{L})$ and
$\mathfrak{u}\vcentcolon=\text{Lie}(\mathbf{U})$. We can decompose
$A=A_{\mathfrak{l}}+A_{\mathfrak{u}}$. It follows from the proof of Theorem
5.3 that $B_{irr}$ is given by the irregular part of the canonical form of
$A_{\mathfrak{l}}$. Therefore, we can assume without loss of generality that
$\mathbf{G}=\mathbf{L}$.
By assumption, we have $B=\mathbf{G}(F_{d})\cdot A$ for some $d$ dividing $b$.
By Proposition 5.8, we know that there exists a $B$-twisted $\mu_{d}$-cocycle
$\phi$. This means in particular that
$\text{Ad}(\phi_{\omega_{d}})(B_{irr}+t^{-1}C_{s})=\omega_{d}\cdot
B_{irr}+t^{-1}C_{s}$. We can consider $B_{irr}+t^{-1}C_{s}$ as an element of
$\text{Lie}(\mathbf{T}_{\overline{F}})$. Also $\phi_{\omega_{d}}$ can be
viewed as an element of $\mathbf{G}(\overline{F})$. This means that
$B_{irr}+t^{-1}C_{s}$ and $\omega_{d}\cdot B_{irr}+t^{-1}C_{s}$ are
$\mathbf{G}(\overline{F})$-conjugate elements of
$\text{Lie}(\mathbf{T}_{\overline{F}})$. By [CM93] Chapter 2, there is an
element $w\in W$ such that $w\cdot(B_{irr}+t^{-1}C_{s})=\omega_{d}\cdot
B_{irr}+t^{-1}C_{s}$. By definition, $b$ is the least positive integer such
that $(\omega_{d})^{b}\cdot B_{irr}=B_{irr}$. We conclude that some of the
eigenvalues of $w$ are primitive $b$-th roots of unity. It follows that $b$
divides a fundamental degree of $\mathfrak{l}$ by [Spr74] Theorem 3.4. If
$b=h_{\mathbf{L}}$, then $w$ must be a Coxeter element by [Kan01] Theorem
32.2-C. Since $w\cdot C_{s}=C_{s}$, we must have
$C_{s}\in\text{Lie}(Z_{\mathbf{L}}^{0})$ by the lemma in page 76 of [Hum90]. ∎
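As an illustration: for $\mathbf{L}=\mathrm{SL}_{n}$ the fundamental degrees of $\mathfrak{sl}_{n}$ are $2,3,\dots,n$ and $h_{\mathbf{L}}=n$, so the smallest $b$ with $B\in\mathfrak{g}_{F_{b}}$ divides one of $2,\dots,n$. In the extremal case $b=h_{\mathbf{L}}=n$, part (2) combined with the semisimplicity of $\mathrm{SL}_{n}$ (so that $\text{Lie}(Z_{\mathrm{SL}_{n}}^{0})=0$) forces $C_{s}=0$, i.e. the residue $C$ is nilpotent.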
###### Remark 5.11.
This does not yield a bound on the ramification needed to put $A$ into
canonical form. For example, if $A$ is regular then $B\in\mathfrak{g}_{F}$.
But we have seen that it is sometimes necessary to pass to a ramified cover in
order to put a regular connection into canonical form.
We remark that part (1) of Proposition 5.10 was proven in [CK17] via the
existence of oper structures for any connection [FZ10]. Here we note that
there is a direct argument using some facts about Coxeter groups.
### Acknowledgements
This paper grew out of a suggestion from Nicolas Templier to write a modern
exposition to [BV83]. I am happy to thank him for his very valuable input on
the redaction of the manuscript.
## References
* [Ati57] M. F. Atiyah. Complex analytic connections in fibre bundles. Trans. Amer. Math. Soc., 85:181–207, 1957.
* [BS68] A. Borel and T. A. Springer. Rationality properties of linear algebraic groups. II. Tohoku Math. J. (2), 20:443–497, 1968.
* [BV83] Donald G. Babbitt and V. S. Varadarajan. Formal reduction theory of meromorphic differential equations: a group theoretic view. Pacific J. Math., 109(1):1–80, 1983.
* [CK17] Tsao-Hsien Chen and Masoud Kamgarpour. Preservation of depth in the local geometric Langlands correspondence. Trans. Amer. Math. Soc., 369(2):1345–1364, 2017.
* [CM93] David H. Collingwood and William M. McGovern. Nilpotent orbits in semisimple Lie algebras. Van Nostrand Reinhold Mathematics Series. Van Nostrand Reinhold Co., New York, 1993.
* [Del70] Pierre Deligne. Équations différentielles à points singuliers réguliers. Lecture Notes in Mathematics, Vol. 163. Springer-Verlag, Berlin-New York, 1970.
* [DG80] Michel Demazure and Peter Gabriel. Introduction to algebraic geometry and algebraic groups, volume 39 of North-Holland Mathematics Studies. North-Holland Publishing Co., Amsterdam-New York, 1980. Translated from the French by J. Bell.
* [dJP00] Theo de Jong and Gerhard Pfister. Local analytic geometry. Advanced Lectures in Mathematics. Friedr. Vieweg & Sohn, Braunschweig, 2000. Basic theory and applications.
* [FZ10] Edward Frenkel and Xinwen Zhu. Any flat bundle on a punctured disc has an oper structure. Math. Res. Lett., 17(1):27–37, 2010.
* [Hum90] James E. Humphreys. Reflection groups and Coxeter groups, volume 29 of Cambridge Studies in Advanced Mathematics. Cambridge University Press, Cambridge, 1990.
* [Hum95] James E. Humphreys. Conjugacy classes in semisimple algebraic groups, volume 43 of Mathematical Surveys and Monographs. American Mathematical Society, Providence, RI, 1995.
* [Kan01] Richard Kane. Reflection groups and invariant theory, volume 5 of CMS Books in Mathematics/Ouvrages de Mathématiques de la SMC. Springer-Verlag, New York, 2001.
* [Lev75] A. H. M. Levelt. Jordan decomposition for a class of singular differential operators. Ark. Mat., 13:1–27, 1975.
* [Sch07] Olaf M. Schnürer. Regular connections on principal fiber bundles over the infinitesimal punctured disc. J. Lie Theory, 17(2):427–448, 2007.
* [Ser02] Jean-Pierre Serre. Galois cohomology. Springer Monographs in Mathematics. Springer-Verlag, Berlin, english edition, 2002. Translated from the French by Patrick Ion and revised by the author.
* [Spr74] T. A. Springer. Regular elements of finite reflection groups. Invent. Math., 25:159–198, 1974.
|
2024-09-04T02:54:55.704509 | 2020-02-28T20:03:43 | 2003.00045 | {
"authors": "Pamela Bilo Thomas, Rachel Krohn, Tim Weninger",
"full_text_license": null,
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"provenance": "arxiv-papers-0000.json.gz:25951",
"submitter": "Tim Weninger PhD",
"url": "https://arxiv.org/abs/2003.00045"
} | arxiv-papers |
Library Adoption Dynamics in Software Teams
Pamela Bilo Thomas
University of Notre Dame
Rachel Krohn
University of Notre Dame
Tim Weninger
University of Notre Dame
When a group of people strives to understand new information, struggle ensues as various ideas compete for attention. Steep learning curves are surmounted as teams learn together. To understand how these team dynamics play out in software development, we explore Git logs, which provide a complete change history of software repositories. In these repositories, we observe code additions, which represent successfully implemented ideas, and code deletions, which represent ideas that have failed or been superseded. By examining the patterns between these commit types, we can begin to understand how teams adopt new information. We specifically study what happens after a software library is adopted by a project, i.e., when a library is used for the first time in the project. We find that a variety of factors, including team size, library popularity, and prevalence on Stack Overflow, are associated with how quickly teams learn and successfully adopt new software libraries.
§ INTRODUCTION
The process of learning new information and adopting new technology can be challenging. When new information is acquired, a learning curve often exists until an individual or group becomes proficient in the new technology. These challenges are present throughout society but are especially prevalent in computing and software development, where existing technologies are ever-changing and new innovations are constantly being developed. Further complicating these issues, most major software development occurs in teams where multiple parties collaborate on a project together, and where teammates may or may not know each other. This can lead to communication issues, which can in turn result in inefficient and uncooperative teams. Online collaboration systems like GitHub provide a powerful setting in which to study this process, because by analyzing the history of commits, whether removed, added, or otherwise, we can reconstruct a story of what happened between the teammates as they created the finalized code. By retelling this story, we can investigate the struggles that occurred as the teammates tried to learn new information together, and see how long it takes for a team to become fully proficient at working together using the same technology.
The present work introduces a new approach to study this problem. By investigating how software developers adopt and use software libraries in this specific context, we may better understand how humans learn new technical information and incorporate concepts previously unknown to the user or group. The findings from this study can be generalized to understand how humans work, learn, and fight together, since GitHub provides a rich dataset which approximates the collaborative process.
Previous work by Kula et al. found that programmers adopt new libraries only when they can be assured of the library's quality and functional correctness [1]. But what happens after the adoption event? When are other group members receptive to new libraries, and when do they resist? What tools can help a team find and learn how to use new libraries? How long does it take a group of people to be successful at learning new information together? And finally, what does it even mean for that group to be successful?
To answer these and other questions, we explore the circumstances surrounding a library adoption, including the number of commits, the size of commits (measured by lines of code), and other related information, including availability of online resources, such as Stack Overflow. We present an in-depth look at commit addition and deletions, and analyze these questions from a variety of different angles.
Additionally, we ask if there exists any competition among team members over the inclusion of a library. When teammates have disagreements about what should be included in a GitHub project, there will ultimately be a winner. Uncovering which user eventually wins these code fights is an interesting research question. We explore competition by examining code fights where two users revert each other’s code over the course of several commits. Like edit-wars on Wikipedia [2], by looking at the users and libraries that participate in code fights we can learn a great deal about the adoption and diffusion of information. Although the present work focuses specifically on code contributed to public Python projects hosted on GitHub, we hope that the methodology developed here can be applied to explorations of information adoption in other domains.
In addition to competition, we also aim to find the characteristics of fast adoption. It is preferable to have a short adoption period - in that way, teams can become productive more quickly, since a longer adoption time results in periods of lost production. By finding what combinations of repositories and libraries result in quick adoption times, we can begin to understand how team members work together to learn new information. Therefore, we hope to identify the qualities that create a good team, and what the characteristics of a bad team are. We hope that the findings from our research can be applied to help improve team dynamics and create groups that can work well together.
To summarize, this work aims to answer the following research questions:
What are the events that happen when a team adopts a library for the first time?
Learning new information together can be challenging. As a team strives to use a new library efficiently and correctly, one aim of this work is to uncover the events that happen as the group learns, and what causes groups to work together well (or poorly!). In the data, we hope to find when library adoptions are successful, and what leads team members to abandon library adoptions in favor of other libraries.
Are commits containing new libraries more likely to have deletions than other types of commits?
One of our research questions is to understand how teams learn to use these new libraries. We expect to find that as teams struggle to use new libraries, commits that contain new libraries will contain many deletions as users attempt to understand how the library works. Eventually, as team members become proficient in using a library, and the library becomes established in a project, we expect to see fewer deletions in the project repository code relative to when the library was first adopted. This ratio of positive to negative commits will help us define what it means for a library to be adopted.
Do the answers to these questions vary by library type, team size, or the amount of information available on Stack Overflow?
Not all libraries and teams are alike, and we expect to find that as the types of the teams and the libraries change, so will the adoption time. Therefore, we will analyze library adoptions along the axes of team size and library type, and discover the relationship between adoption speeds and resources available on sites like Stack Overflow. We attempt to show how the speed of library adoptions changes with the availability of easy-to-use online resources, which we use as a proxy for library popularity. We hypothesize that if there exist few resources to learn how to use a library, the time to adoption for that library will be longer than for libraries that are better documented.
Do team members fight over the adoption and usage of a new library?
We cannot assume that all teams work together well. Fights between team members might result as teammates have differing opinions about what libraries to use or how to implement new code. Therefore, we analyze code fights as part of this work. We wish to uncover what happens in teams when teammates remove code, or remove entire libraries. We examine which libraries are most often fought over, and which team members end up winning code fights. We hypothesize that more experienced team members will win these code fights.
To answer these questions we cloned 259,613 repositories from GitHub. This data provided us with entire commit histories, including userids, timestamps, and code-diffs, across all of the projects. We also downloaded, indexed, and cross-referenced library usage within Stack Overflow. Library adoption in software repositories reveals a natural experiment from which causal relationships can be inferred. By following a careful extraction and experimental methodology, the following pages present answers to these questions and an exploration of this rich resource.
An example of a Git history log where lines which begin with a `+' represent additions, and lines which begin with a `-' represent deletions.
§ DATA AND METHODOLOGY
Characteristics of GitHub.
Since GitHub's rise to popularity and the growth of the open source software movement, developers have become increasingly comfortable contributing to public repositories [3]. Riding this wave of easily accessible data, researchers have done considerable investigation into GitHub, but many of these studies are limited to a few dozen project repositories in small-scale experiments, thereby limiting their potential generalizability [4]. Instead, our goal is to gather as much data as possible, which, as we shall see, presents different challenges. We summarize our findings from this large-scale collection of data in the subsequent sections of this paper. As future work, further, more specific analysis can be done on smaller datasets, but for the purposes of our research, we wish to treat this work as a big data problem, and try to ingest as much data as possible to answer these questions.
Project Repositories on GitHub
The confluence of GitHub's online social system and Git's complete history of pull and commit behavior provides an unprecedented source of information on adoption behavior [5, 6]. We focus on commits that contain imported libraries and their functions. The example code snippet in Fig. <ref> shows how a single commit can add and/or remove one or more lines of code. In this example, the 5 additions and 2 deletions result in a net commit size of +3 lines, including the adoption of . We consider usages of libraries $\ell$ (, , , ) that are imported into Python code through an import statement. An important assumption that we make is that submodule statements represent the parent library. Therefore, submodules like are considered equivalent to in the present work, because the function is included as a part of the parent library, and we are interested in functions which refer to the adopted library.
We also consider direct function calls on the imported library. It is important to carefully consider the scoping of library aliases (, , , in Fig. <ref>) so that we can mine these library functions accurately. Thankfully, the library aliases are included in the import statement, so we can easily find these aliases in the code. In the above example, the functions from the library ( and ) are referenced in two of the added lines.
Indirect library usage is exemplified by the function of the object and by the function of the object in Fig. <ref>. These objects were created from some ancestor call to the library (, ) in some other location, but do not directly reference the library themselves. We do not consider indirect library usage in the present work, and only focus on direct library calls.
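For concreteness, the following simplified sketch (with illustrative names, not a full pipeline) counts direct uses of a library once the aliases in scope are known:

```python
import re

def count_direct_uses(code, aliases):
    """Count direct references to imported libraries via their aliases.

    `aliases` maps an in-scope alias (e.g. "np") to its top-level
    library (e.g. "numpy"). Only direct calls such as "np.array(...)"
    are counted; calls through intermediate objects are ignored.
    """
    counts = {}
    for alias, library in aliases.items():
        # \b ensures "np." matches but "tnp." does not
        pattern = re.compile(r"\b" + re.escape(alias) + r"\.\w+")
        counts[library] = counts.get(library, 0) + len(pattern.findall(code))
    return counts

print(count_direct_uses("x = np.zeros(3)\ny = np.sum(x)", {"np": "numpy"}))
# {'numpy': 2}
```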
Data Collection
First, we issued a query to the GitHub search API for projects written primarily in Python. GitHub returned repository IDs of the 1,000 most popular Python projects on the site. We then found all GitHub users who made at least one commit to a repository in this set and retrieved all their Python projects. We did this breadth-first style crawling two additional times, culminating in 259,923 projects with 89,311 contributing GitHub users.
Of these, we were able to clone 259,613 Python repositories to disk; the remainder were made private or deleted between the time we crawled their project URL and the time that we performed the clone operation. Each cloned repository includes all files, branches, and versions, along with a complete edit history. These repositories constitute about 13% of all Python repositories on GitHub as of September 2018. The full dataset of cloned projects occupies about 8 TB of disk space. For analysis, we parsed the commits to only contain imported functions and libraries, which drastically reduced the size of the dataset.
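In outline, this collection step can be sketched as follows (a simplification with illustrative names; authentication, rate limiting, and error handling are omitted):

```python
import subprocess
import requests

def popular_python_repos(pages=3):
    """Query the public GitHub search API for popular Python repositories."""
    urls = []
    for page in range(1, pages + 1):
        resp = requests.get(
            "https://api.github.com/search/repositories",
            params={"q": "language:Python", "sort": "stars", "page": page},
        )
        urls.extend(item["clone_url"] for item in resp.json()["items"])
    return urls

def clone_all(urls, dest="repos"):
    """Clone each repository with its complete history and branches."""
    for url in urls:
        name = url.rstrip("/").split("/")[-1].replace(".git", "")
        subprocess.run(["git", "clone", url, f"{dest}/{name}"], check=False)
```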
Because we sampled 13% of the total available public Python projects available on GitHub, it is important to be wary of sampling biases. Our initial GitHub query returned the most popular projects, so our dataset may over-represent highly active repositories compared to the population. It is not our intention to faithfully survey GitHub use nor to represent all Python projects; instead, our goal is to understand how programmers adopt and use new software libraries. Our findings can be applied to projects of all sizes, and small projects are well-represented in our data. However, it is important to remember that private software repositories are not included in our dataset, so we can only investigate team interactions in a public environment.
Additionally, we downloaded all question posts from Stack Overflow from its inception until September 2018. Appropriate tagging tends to increase viewership of questions [7], so we filtered out any posts that were not tagged as Python posts, then extracted all libraries from any code block using the same pattern matching technique as used in the library extraction from Python projects. Only top-level libraries were included. Because Stack Overflow is a free-form text entry platform, this pattern matching procedure may occasionally count extraneous words and misspellings as libraries. Additionally, it is possible that some libraries might not be returned by our query because the library name might have been misspelled, or the question did not include Python code containing a library import.
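A simplified sketch of this filtering step, assuming the Posts.xml layout of the public Stack Exchange data dump (question rows carry PostTypeId="1" and a Tags attribute, with code HTML-escaped inside the post body):

```python
import re
import xml.etree.ElementTree as ET
from html import unescape

IMPORT_RE = re.compile(r"^\s*(?:from|import)\s+([A-Za-z_]\w*)", re.M)

def python_post_libraries(posts_xml_path):
    """Yield top-level libraries found in code blocks of python-tagged questions."""
    for _, row in ET.iterparse(posts_xml_path):
        if row.tag != "row":
            continue
        if row.get("PostTypeId") == "1" and "<python>" in (row.get("Tags") or ""):
            body = row.get("Body") or ""
            for code in re.findall(r"<code>(.*?)</code>", body, re.S):
                for lib in IMPORT_RE.findall(unescape(code)):
                    yield lib
        row.clear()  # keep memory bounded on the multi-gigabyte dump
```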
Recreating the Project Workflow
Each project repository contains files, commits, and users. Each commit is marked with a timestamp $t$, a message, one (or many) parent-commits (in the event of a merge operation), and a specifying lines that were added, edited, or deleted within each file.
An important complication that arises in Git commit histories is that stashes, reverts, branches, and other Git commands can result in a non-monotonic chronology of edits. Because of this, we should not compare commits across different branches to each other until they are merged. Reversions introduce an additional complication. For example, if a user (1) commits code on Monday, (2) commits some new code on Wednesday, and then (3) on Friday reverts the project's code to Monday's commit, then the chronology of the repository flows non-monotonically from Monday to Wednesday to Monday again despite the reversion being made on Friday.
Fortunately, each commit keeps a pointer to its parent(s), so we can create the actual lineage of the commits by following the graph of each commit's parent rather than blindly following the commit timestamps. Because the order of actions is more important than exact times, we enforce a monotonic chronology according to the commit graph. Future work can attempt to explore how this problem can be approached by using timestamps to analyze how the project changes over time.
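A minimal sketch of this graph-based reordering, modeling commits as hash-to-parents mappings rather than relying on any particular Git library:

```python
from collections import deque

def commit_order(parents):
    """Topologically order commits so that every parent precedes its children.

    `parents` maps a commit hash to the list of its parent hashes (merge
    commits simply have more than one parent). The result follows the
    commit graph rather than the possibly non-monotonic timestamps.
    """
    children, pending = {}, {}
    for commit, ps in parents.items():
        pending[commit] = len(ps)
        for p in ps:
            children.setdefault(p, []).append(commit)
    queue = deque(c for c, n in pending.items() if n == 0)  # root commits
    order = []
    while queue:
        commit = queue.popleft()
        order.append(commit)
        for child in children.get(commit, []):
            pending[child] -= 1
            if pending[child] == 0:
                queue.append(child)
    return order

print(commit_order({"a": [], "b": ["a"], "c": ["a"], "d": ["b", "c"]}))
# ['a', 'b', 'c', 'd']
```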
(a) Number of commits per project follows a shifted power law distribution, i.e., most projects have few commits, and few projects have many commits. (b) Number of libraries used per project also follows a shifted power law distribution, i.e., most projects adopt few libraries, and few libraries are adopted by many repositories. (c) When libraries are adopted into repositories, it tends to occur early in the repository history.
Text Mining for Libraries
After downloading the data, we combed through each Git commit log to find which libraries had been imported, using regex pattern matching to identify import statements such as "from $\ell$ import $f$" and "import $\ell$ as $f$". Once we identified which libraries were used in a Git project, we searched the log to find the lines which referenced the functions contained in a library import. To do this, we used pattern matching to search for the libraries $\ell$ and functions $f$, along with indicators such as and , to indicate that the library or function was being used. We then stored this information, including the library used, author name, and commit type, in a database for further analysis. This parsing allowed us to perform a relatively quick analysis on a much smaller dataset than the entire commit log.
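In outline, the extraction can be sketched as follows (a simplification; the two regexes correspond to the two import forms, and dotted submodules are truncated to their parent library):

```python
import re

IMPORT_RE = re.compile(r"^\s*import\s+([\w\.]+)(?:\s+as\s+(\w+))?", re.M)
FROM_RE = re.compile(r"^\s*from\s+([\w\.]+)\s+import\s+", re.M)

def imported_libraries(code):
    """Return {alias_or_name: top_level_library} for one commit's code."""
    libs = {}
    for dotted, alias in IMPORT_RE.findall(code):
        parent = dotted.split(".")[0]  # e.g. sklearn.svm -> sklearn
        libs[alias or dotted] = parent
    for dotted in FROM_RE.findall(code):
        libs[dotted.split(".")[0]] = dotted.split(".")[0]
    return libs

print(imported_libraries("import numpy as np\nfrom sklearn.svm import SVC"))
# {'np': 'numpy', 'sklearn': 'sklearn'}
```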
§ LIBRARY ADOPTION
Formally, for a project $p$, a library $\ell$, and a time $t$, we define a library adoption to be an event ($p$, $\ell$, $t$) representing the first time $t$ that $\ell$ is found in $p$.
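Given commits in lineage order, adoption events fall out of a single pass; a sketch, reusing the `extract_libraries` sketch above, where `commit_source` is a hypothetical accessor returning the code text touched by a commit:

```python
def adoption_events(project, ordered_commits):
    """Yield (p, l, t): the first time t that library l appears in project p."""
    seen = set()
    for commit in ordered_commits:          # parent-before-child order
        source = commit_source(commit)      # hypothetical accessor, for illustration
        for lib in extract_libraries(source):
            if lib not in seen:
                seen.add(lib)
                yield (project, lib, commit.timestamp)
```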
Some project repositories are simple, containing only a single commit, while others are extremely complex with multiple branches, hundreds of users, and thousands of commits. In 2014, Kalliamvakou et al. found that the median number of commits per project on a small sample of GitHub projects was 6, and 90 percent of projects had fewer than 50 commits [5]. Our dataset shows a slightly broader distribution of commits, though as mentioned earlier, it is possible that our GitHub data over-represents active projects. The distribution of commit-activity per project, illustrated in Fig. <ref>, resembles a shifted power law distribution. Because of this dynamic, 50% of projects were found to have 10 or fewer commits (i.e., a median of 10) and 90% of projects have 100 or fewer commits.
The distribution of the number of libraries adopted per project, illustrated in Fig. <ref>, also resembles a shifted power law distribution, albeit with a larger offset than the commit-activity distribution of Fig. <ref>. However, the number of adoptions is less evenly distributed: 54% of projects adopted 10 or fewer distinct libraries and 98% of projects adopted 100 or fewer libraries.
Across all commits of all projects, we find that library adoptions occur more frequently within the first few commits. Figure <ref> shows that a project's first commit adopts 6.4 libraries on average (with a median of 2, not illustrated). A project's second through fifth commits adopt 3.3, 1.1, 0.8, and 0.65 libraries on average (with median values of 0, not illustrated). In general, the average number of adoptions per commit appears to follow a Zipfian decay, and adoptions tend to occur early in a project repository's history.
§ ACTIVITY CHANGE AFTER ADOPTION
A simple (albeit imperfect) indicator of productivity in software projects is the number of lines of code (LOC) that are added and/or removed in a commit. While it can be argued that not every line added to a repository is important, productive, or useful, and that inefficient code tends to have more lines, we use this indicator because it is the simplest to understand and provides a good summary statistic. Further analysis could measure the effectiveness of lines of code added to a repository. Within a single project, the addition of code lines typically indicates positive activity like the addition of new functionality or features. Conversely, the removal of code typically indicates some negative activity like reversions or mistakes that are removed by the user. Oftentimes software bugs or other types of mistakes occur, requiring edits that both add and remove code. Our data contain the number of lines that are added or deleted in each commit, along with the libraries involved.
We begin our analysis of library adoptions by parsing each code commit over each project repository. For each project starting from the first commit, we retrieve all imported libraries, their commit number, and the scope of any aliases. Users reference libraries and use them in several different ways. We define two classes of library use for the purposes of this study: (1) direct use, and (2) indirect use.
Direct library use, as the name implies, references the library name or alias directly; in the example of Fig. <ref> above, a function called directly on the library's alias is an example of direct use. Indirect library use references library calls made through a variable or other intermediary; again from the example in Fig. <ref>, a function called through a variable that holds a library object is an indirect use of the library, because it reaches the library's functions through that variable. Taking an accurate accounting of indirect library use, especially in a language as fluid and dynamic as Python, would require an interpretation or partial compile of each of the 23 million commits in our dataset. Therefore, in the present work, we limit our analysis to the import and direct use of libraries.
Library additions, deletions, and net usage (in LOC) after the adoption event.
Statistics surrounding a newly adopted library $\ell$:
- Avg LOC that reference $\ell$: 31.34
- Median LOC that reference $\ell$: 4
- Avg inserted LOC after first adoption that reference $\ell$: 2.09
- Avg deleted LOC after first adoption that reference $\ell$: 1.62
Table <ref> shows there is a wide gap between the average and median LOC that reference a library $\ell$, indicating skew caused by large commits. This matches our earlier analysis, which showed that many of the statistics surrounding commits follow a power law distribution. Additionally, average LOC drops quickly after the first commit. Fig. <ref> shows the average direct use of an adopted library in lines added and deleted (in green and red respectively) as well as the net change, i.e., insertions minus deletions (in black).
After the initial commit, we find that most of the following commits have only a small positive net productivity. We also find that the volume of activity of lines of code referencing $\ell$ in Fig. <ref> tends towards zero rather quickly after the adoption. This indicates that, on average, the activity surrounding the adoption of a library is brief and oftentimes contradicted quickly. Recall from Fig. <ref> that most repositories only adopt a few libraries, with more than half adopting 10 or fewer libraries. Therefore, we can safely deduce that in most repositories, when adoptions occur, they occur early in a repository's history.
Stack Overflow
The popularity of software-centric question-and-answer Web sites has spread rapidly. Stack Overflow was created in 2008 [8], which is the same year that GitHub was launched [9]. Because these resources have grown in popularity together, we expect that they have influenced each other in some way. Much of the previous work in this area has focused on understanding user behaviors across Stack Overflow and GitHub [10], [11], modelling user interest [12], or mining for code that users have imported from Stack Overflow posts into GitHub projects [13]. In other cases, researchers aim to leverage posts and their answers in order to build tools that aid software development [14], [15]. Further research needs to be done to understand data flows from Stack Overflow to GitHub, and vice-versa.
Number of users of $\ell$ in GitHub dataset as a function of the number of Stack Overflow posts about $\ell$, showing that each library type has a statistically significant positive correlation.
Here we see how library adoption has become faster as the number of posts that appear on Stack Overflow has increased.
Growth of library usage after adoption grouped by Stack Overflow usage. Q1, Median, and Q3 growth in lines of code referencing $\ell$ are represented from bottom to top respectively.
We plot the number of users of $\ell$ against the mean number of Stack Overflow posts (across all adoption times) that existed when $\ell$ was referenced in Fig. <ref>. This illustration also groups libraries that are (1) included in PyPi, the default library repository used by the pip installer, (2) part of Python's standard suite of libraries, e.g., os, json, time, and (3) all other libraries. We observe a strong positive correlation between the number of library users and the number of Stack Overflow mentions for standard libraries ($R^2$=0.625, $p<$0.001) and PyPi libraries ($R^2$=0.410, $p<$0.001). There is a small positive correlation between usage of unknown libraries and Stack Overflow posts ($R^2$=0.08, $p<$0.001), most likely due to individuals naming libraries like words and phrases that also happen to appear on Stack Overflow, or perhaps users sharing GitHub repositories that have not made it to PyPi.
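For reference, the per-group correlation can be computed as sketched below; `linregress` from scipy is one standard choice, and the log scaling of the heavy-tailed counts is our assumption rather than a detail stated in the text:

```python
import numpy as np
from scipy.stats import linregress

def usage_vs_posts_r2(lib_users, so_posts):
    """R^2 and p-value between GitHub usage and Stack Overflow mentions."""
    x = np.log10(np.asarray(so_posts, dtype=float) + 1)   # +1 guards against zero counts
    y = np.log10(np.asarray(lib_users, dtype=float) + 1)
    fit = linregress(x, y)
    return fit.rvalue ** 2, fit.pvalue
```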
Exemplar libraries are called out in Fig. <ref>. Several standard libraries are among the most widely used on GitHub, and three PyPi libraries are also highlighted. One library, used to create source code documentation files, is relatively popular on GitHub but has only a few dozen posts on Stack Overflow. This seems to indicate that users have few questions about this library relative to its use (perhaps a library that produces source code documentation is itself well-documented!). Conversely, a Web-crawling library and a data-analysis library have many questions, potentially indicating that these libraries are complicated to use. Despite being rather popular on Stack Overflow and GitHub, the “contrib” library is not found in the standard Python libraries nor on PyPi. This is a bit of a misnomer because the use of a “contrib” folder/module is a standard way to encourage open source developers to contribute to various projects. As a result, contrib is counted as a common library simply because of its use as a naming convention in many distinct projects; it is mentioned quite frequently on Stack Overflow as a “local” library, but the functions defined in local “contrib” modules across GitHub differ widely, since each user writes their own “contrib” library for different purposes.
Our next task is to understand what differences, if any, exist in the net productivity of these various libraries. To help understand the dynamics of library adoption, we calculate the median percentage growth (in LOC) of an adopted library, i.e., the change in the number of added lines containing $\ell$ in a commit minus the number of deleted lines containing $\ell$ in a commit, for the first 100 commits after the adoption. This provides us with a simple way to compare growth across teams of different sizes.
Formally, we compute the growth of a library $\ell$ within a project as follows. If $x=0$, then let $y_{x} = 1$; otherwise
$$y_{x} = y_{x-1} \left(\frac{\sum\limits_{i=0}^{x-1}{(n_i)} + n_x}{\sum\limits_{i=0}^{x-1}{(n_i)}}\right),$$
where $n_i$ is the number of changed lines of code (net additions minus deletions) in commit $i$ that contain $\ell$. From this equation $y_x$ contains the percentage change at commit $x$ relative to the adoption event ($x=0$).
Consider as an example the following series of commits $n$ = [+2, +1, +4, -1]. Here, the adoption commit ($x=0$) introduces two lines of code that reference $\ell$. We set $y_{0} = 1$. The next commit contains a net change of +1, rendering $y_1 = 1((2 + 1)/2) = 1.5$. In other words, in the second commit the number of lines of code referencing $\ell$ within this example project grew such that it is now 150% of its original size. The next commit contains a net change of +4, rendering $y_2 = 1.5((3 + 4)/3) = 3.5$, indicating that after the third commit the use of $\ell$ within this example project grew to be 350% of its original size, i.e., from 2 lines to 7. The final commit contains a net change of -1, rendering $y_3 = 3.5((7 - 1)/7) = 3$. This normalization of median growth rate helps us compare larger teams to smaller ones.
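The growth measure is straightforward to implement; the sketch below reproduces the worked example above exactly:

```python
def growth_series(n):
    """Return y_x, the relative size of library usage after each commit.

    n: net change in lines referencing the library per commit; n[0] is the
    adoption commit.
    """
    y = [1.0]                 # y_0 = 1 at the adoption commit
    running_total = n[0]      # lines referencing the library so far
    for net_change in n[1:]:
        y.append(y[-1] * (running_total + net_change) / running_total)
        running_total += net_change
    return y

print(growth_series([2, 1, 4, -1]))  # [1.0, 1.5, 3.5, 3.0]
```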
We plot the median growth (in LOC referencing $\ell$) as a function of the number of commits after the adoption in Fig. <ref>. Note that the adoption commit is not shown; instead, each commit either net-adds or net-subtracts from the initial adoption. Columns represent four groups of Stack Overflow mention sizes: no mention, between 1 and 100, 100 and 1000, and greater than 1000 from left to right respectively. Within each plot, solid lines represent the median, and the dashed lines on top and bottom represent the 3rd and 1st quartiles respectively.
We observe that the use of an adopted library has a complicated relationship with its popularity on Stack Overflow. The primary distinction is in the growth rates for libraries with more than 1000 Stack Overflow posts. One hundred commits after the adoption, a library that is highly mentioned on Stack Overflow will show approximately 350% growth (on average), compared to only 250% growth for less mentioned libraries. We can assume that having over 1000 Stack Overflow posts means that the library is highly successful and highly popular. Over time, it might be easier for users to find new resources about the library online - and add more functionality as the library becomes fully integrated into the project.
When a library does not appear on Stack Overflow, the growth rate is similar to libraries that have over 1000 posts. Libraries that do not appear at all in Stack Overflow mostly consist of libraries that were written by developers who are also the authors committing the library to the repository. This may explain why growth is large in unknown libraries – the adopters know how to use the library because they wrote it. We could also propose that programmers who are using libraries for which there are no online resources available might be more experienced than those that are using more popular libraries, so their growth rate is faster.
Project Team Size
Next, we investigate differences in library adoptions as a function of team size. Because library adoptions occur from the perspective of a project, studying how various team sizes adopt libraries is important as we attempt to understand how teams form and work together. Researchers have studied GitHub previously for its team formation attributes. Git and GitHub directly store how team members collaborate and the types of activities that they perform [16]. For example, researchers have found that diversity and team makeup have a significant impact on the productivity of GitHub teams [17], and larger teams tend to process more pull requests on average [18].
Like the commit and adoption distributions illustrated in Figs. <ref> and <ref>, the team size distribution follows a power law.
Median percentage change in lines of code referencing $\ell$ after adoption.
We calculate a project's team size by counting the number of distinct committers. We observe in Fig. <ref> that the distribution of team sizes has a power law-like heavy tail wherein 59% of projects have only a single committer; 24% and 7% of projects have two and three distinct committers respectively. Projects with small teams therefore dominate the GitHub community. For the 59% of projects with a single committer, we do not even need to consider team dynamics when a library adoption occurs, because only one individual is adopting a new library for use in the project.
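The team-size measure itself is a one-liner; a sketch, assuming each commit record exposes its author name as in the earlier `Commit` sketch:

```python
def team_size(commits):
    """Team size = number of distinct committers over a project's history."""
    return len({commit.author for commit in commits})
```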
As in the Stack Overflow analysis, we calculate the median growth over the first 100 commits after a library adoption for various team sizes, which we can see in Figure <ref>. We see that smaller teams add more lines of code after the first adoption event than larger teams. A possible explanation for the slower growth of library usage in larger teams is perspective differences between two or more committers to a project. Users might feel more comfortable making more commits or experimenting with newly adopted libraries in smaller teams, or when working alone, because there are fewer team members to consult before a commit is made; in larger teams there is a greater need to ensure that all team members understand the purpose of each commit. Also, more communication might be necessary between teammates before large commits are made, which would appear to cause slower growth rates for bigger teams. It is possible that in larger teams the first adoption event is more substantial, with library use then growing more slowly afterwards.
§ CODE FIGHTS
Fights between committers to a project occur whenever there is a disagreement about how others should structure code, how they should implement features, or any other decision impacting code production. We use this analysis of code fights to understand who wins these arguments, by tracking whose committed code ultimately stays in the project - or is removed. Researchers have long analyzed the diffusion and change of information and conventions, including work on the adoption of word use in offline textbooks [19], on Twitter [20], and in other domains. The experience of the individuals in the group also plays a key role in what ideas are adopted offline [21] and in online social systems [22].
For example, Sumi et al. describe edit wars on Wikipedia where two or more Wikipedia editors constantly edit each other's changes to some article [23]. Investigators have also found fights in collaborative science writing, where researchers often use and adapt various LaTeX macros and vocabulary. Specifically, based on LaTeX files obtained from arXiv, Rotabi et al. showed that user experience is a large factor in determining who will win a fight. Less experienced researchers tended to win invisible fights, i.e., fights over LaTeX macros that did not have high visibility. More experienced researchers, e.g., advisors and senior PIs, tended to win highly visible fights such as fights over the conventions used in the title of the paper [24]. In the software development paradigm, norms tend to develop in software teams, to which developers eventually learn and conform [25].
In the context of library adoptions in collaborative projects, we informally define a code fight as a series of commits that include back-and-forth additions and deletions of the same code containing a newly adopted $\ell$. For clarity, in the current work we restrict a fight to occur between two committers $u$ and $v$, but we encourage follow-up research that lifts this restriction in future analysis. In this context, a fight occurs when user $v$ removes all (or almost all) of the code that user $u$ committed that references $\ell$. Occasionally, the adopting user $u$ will recommit the original code, which $v$ may then revert.
A user may add or remove code over a series of contiguous commits, rather than in a single large commit. Therefore, rather than thinking of fights as one commit after another, we model a fight in rounds. A round is a series of commits by one user that is uninterrupted by the other user. For example, if $v$ deletes 5 lines of code in commit 2 and then another 6 lines of code in commit 3, then we represent these two commits as a single round with -11 lines deleted.
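The round-collapsing step can be sketched as follows; the example reproduces the 5-then-6-line deletion above being merged into a single -11 round:

```python
def collapse_into_rounds(commits):
    """commits: list of (user, net_loc) in lineage order -> list of rounds."""
    rounds = []
    for user, net_loc in commits:
        if rounds and rounds[-1][0] == user:        # same user as the last round
            rounds[-1] = (user, rounds[-1][1] + net_loc)
        else:
            rounds.append((user, net_loc))
    return rounds

print(collapse_into_rounds([("u", 8), ("v", -5), ("v", -6), ("u", 4)]))
# [('u', 8), ('v', -11), ('u', 4)]
```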
We formally define a fight as follows. Let $n^{(r)}$ represent the net change in lines of code referencing $\ell$ in round $r$; $r=0$ indicates the round of the adoption event. Also let $n^{\le r}$ be the sum of all lines of code referencing $\ell$ up to and including round $r$, i.e., the running total.
A code fight occurs if there exists any $r$ such that $n^{\le r} \le \epsilon\, n^{\le (r-1)}$, i.e., round $r$ removes at least $100(1-\epsilon)$ percent of the accumulated lines referencing $\ell$, where we vary the threshold $\epsilon \in \{.10, .20, .30, .40, .50\}$. Once a fight starts it will continue until there are no more rounds, regardless of the size of the change in each round, i.e., further rounds within the same fight do not have to add or remove that large a fraction of LOC.
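A sketch of this fight-detection rule, under the reconstruction above in which round $r$ starts a fight when it leaves at most $\epsilon$ of the accumulated lines referencing $\ell$:

```python
def find_fight_start(rounds, eps=0.1):
    """rounds: net LOC referencing the library per round; rounds[0] is adoption.

    Returns the index of the round that starts a fight, or None.
    """
    total = rounds[0]
    for r, net in enumerate(rounds[1:], start=1):
        new_total = total + net
        if new_total <= eps * total:   # large enough reduction -> fight begins
            return r
        total = new_total
    return None

print(find_fight_start([20, 3, -22, 18]))  # 2: round 2 removes ~96% of the code
```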
The probability of a fight, for various sizes of $\epsilon$ and by project team size, is illustrated in Fig. <ref>. We observe that fights are relatively rare, occurring between 1 and 3 times for every 100,000 commits on average. We also observe that the choice for $\epsilon$ has a limited effect on the probability of a fight. The probability of a fight increases with team size, but with diminishing returns that resemble a Zipfian Distribution. In other words, because there are more interactions in a larger team, it is more likely that a fight will occur.
Next, we analyze what happens during a two-person fight. Technically speaking, the first round of a fight is the adoption event, and the second round of the fight is the removal of at least 100(1 - $\epsilon$) percent of the lines of the adopter's code. After this point, the two fighters (the adopter and the deleter) may continue with more rounds of back-and-forth commits.
Despite the dropoff in the number of fights, the adopter tends to fight back with more lines of code. In Fig. <ref> we observe that odd-numbered rounds, corresponding to the adopter, have more net LOC referencing $\ell$ per round than the deleter's round that comes afterwards. Also, we see that the larger the original deletion of the code was, the less likely the adopter is to fight back with lines of code.
We can see that in the majority of fights, the deleters - the people who originally delete code from a repository - win the fight as the last committer. Non-adopters, i.e., parties who enter a fight later and were involved as neither the original deleter nor the adopter, win fights slightly more often than the adopters themselves.
We define a fight's winner as the user who was the last committer referencing $\ell$. In some cases, the adopter may acquiesce to the deleter and allow the code to remain deleted. In other cases, the deleter may allow the adopter's reassertion of the library's addition to stand. By our definition of rounds, it is clear that the deleter wins approximately 90% of the fights because the adopter only fights back 10% of the time. This shows that it is relatively rare for users to counter and re-add code. Perhaps in some cases the “fight” was not contentious and was the result of a mutual agreement to remove a library. Further research is needed to find out how many of these fights are a result of team friction.
What role does experience play in winning a fight? To answer this question, we must first ask how best to define experience. Two options are to count (1) the number of commits by a user (in any project) or (2) the time elapsed since the user's first commit (in any project). Although these two options obtain similar results, the current work maintains the standard set by prior studies [24] and therefore defines experience as the time since the user's first commit (in any project). We observe in Fig. <ref>, which plots only results from $\epsilon=0.1$, that the more experienced committer wins the fights between 70% and 80% of the time. Results from alternative $\epsilon$ values were nearly identical to $\epsilon = 0.1$ and are omitted for clarity and brevity. The experience difference groupings were selected so that each contained a similar number of fights. Interestingly, the more experienced users have about a 75% win probability even when the experience differences are less than a week or even a day (not illustrated). This suggests that even slightly more familiarity with the project (perhaps an indication of project leadership) results in the more experienced user winning the fight. It appears to be common for new people working on a team to defer to the person who has been working in the codebase the longest.
Probability of a fight as a function of team size for various $\epsilon$. An increase in team size is directly correlated with the probability of a fight (Spearman $R^2=0.14$, p$\le0.01$ if $\epsilon$ = 0.1).
When a two-person fight occurs, the adopter $u$ (indicated by odd-numbered $x$) tends to commit more code than $v$ on average.
The more experienced user is more likely to win a code fight regardless of the experience gap.
Finally, we ask: which libraries are the most fought over? To answer this question, we counted the occurrences of $\ell$ within each two-person fight and normalized by the number of times $\ell$ was adopted. Here we set $\epsilon=0.1$, but results for other $\epsilon$ were similar. Table <ref> shows the top eight libraries involved in the most fights.
Library | Prob. in fight
pdb | 12.4
pprint | 11.3
telnetlib | 10.5
syslog | 10.3
distutils | 9.1
glob | 9.0
poplib | 8.4
imp | 7.8
Libraries most likely to be involved in a fight.
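The normalization behind Table <ref> amounts to a per-library ratio of fight involvements to adoption events; a sketch:

```python
from collections import Counter

def fight_probability(fight_libs, adopted_libs):
    """Per-library fight involvement, normalized by the number of adoptions.

    fight_libs: one entry per two-person fight, naming the library involved.
    adopted_libs: one entry per adoption event, naming the adopted library.
    """
    fights, adoptions = Counter(fight_libs), Counter(adopted_libs)
    return {lib: fights[lib] / adoptions[lib] for lib in adoptions}
```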
We observe that the common debugging-related libraries pdb, pprint, and syslog comprise three of the top four most common causes of fights. In these instances, we assume that adopters might be implementing or testing some functionality, they use the debugging libraries, and they commit changes with the debugging code still in place. This code is then removed by another team member, thereby instigating a fight.
It is not surprising to see the distutils library counted among the top fight starters. This particular library is used to generate source code distributions, i.e., code releases, but users are strongly encouraged to use the setuptools library instead. So in most cases importing distutils is likely an error, and the usage of that library is deleted by others who are updating their code base, which appears in our analysis as a code “fight.” However, we can suppose that for some teams the discussion of using setuptools in favor of distutils is indeed a fight, especially if a team member is reluctant to switch libraries.
Fight threshold ($\epsilon$) | Median net commit size (LOC)
0.5 | 24.58
0.4 | 28.49
0.3 | 27.22
0.2 | 24.09
0.1 | 18.12
Median net commit size (in lines) immediately before a fight occurs, i.e., before the commit that drops the net lines of code below the set fight threshold.
§ DISCUSSION
Let us return to our original questions and summarize our findings.
What does it look like when a team adopts a library for the first time?
In Fig. <ref> we observe that library adoptions tend to happen early in a project's history. Hence, the probability of a new library being adopted later is lower. We can expect that it is difficult to adopt a new library once a project has matured. Perhaps this is because new libraries may introduce instability into a repository, or because the primary innovation within a project occurs early on in its lifespan. Further research is needed to understand this more clearly.
While Fig. <ref> shows us that these adoptions happen early on, it would also be interesting to approach this problem from the perspective of the percentage of a project repository's history. The commit distribution follows a power law and many projects have few commits, so while we measure commits in sequential order in Fig. <ref>, further research should examine when library adoptions occur as a measure of project completeness. Early in a project's history, does it take more time for a library to be adopted? Or does a team take longer to adopt a new library after it has been working with established methods? Further research is needed to answer these questions. Even so, we can see that library adoptions tend to be events that happen in the first few commits; understanding when they occur in relation to the length of a project would give us another dimension along which to analyze this problem.
Are commits containing new libraries more likely to have deletions than other types of commits?
Once an adoption has occurred, we track how long it takes for library usage to become stable within the project by examining how many additions and deletions occur in the commits after a library is first used. In Fig. <ref>, we observe that activity involving a newly adopted library is relatively high shortly after the adoption occurs. Over time, the number of lines of code referencing the adopted library stabilizes, which indicates that the library has been fully adopted and incorporated into the project. We can safely conclude that users tend to write most lines of code that involve a newly adopted library relatively soon (within 10-15 commits) after library adoption. However, in many instances we also find that some libraries never stabilize and end up being deleted.
While we currently measure when adoptions occur, it would also be interesting to see when libraries are completely deleted from project repositories. When would those deletions occur? Would we find that libraries are more likely to be deleted early on in a project repository history, or later? Would these deletions follow a similar distribution over time as library adoptions? To answer these questions, we need more research from the perspective of library deletions. As with Research Question 1, while we are currently asking this question from the perspective of commit number, we can also use the percentage of project completeness to understand when these deletions are occurring over the lifespan of a project. An analysis such as this would not skew the averages towards smaller projects since the project timeline would be measured from the perspective of percentage of a project completed, instead of absolute commit number.
Additionally, we could ask further questions about the coding history of the users who are on the project. To add to this work, we could track the libraries used by the individuals who are contributing to these projects. Other work has shown that user activity rates are higher soon after an adoption event [26]. Tracking user history across GitHub repositories could give us more information about what occurs in an adoption event, and help us learn more about how individuals learn and retain new information.
Do the answers to these questions vary by library type, team size, or the amount of information available on Stack Overflow?
When team sizes are larger, the lines of library code do not grow as quickly relative to the first adoption commit as they do when team sizes are smaller. This may be because larger team projects require more communication and planning and are therefore less agile than small teams or individual projects. As mentioned previously, this is one of the drawbacks of using lines of code as a measurement tool, since it could be argued that the code committed by larger teams is more valuable than the lines of code committed by smaller teams. Additionally, this work only looked at public software repositories. Perhaps an analysis of private software repositories would lead to different conclusions about how various team sizes work together, because the interaction between teammates who know each other personally may be different from that between teammates who do not, or who operate in a public environment.
Further in-depth research is necessary here. While we defined `large' team sizes as those with 10+ members, it would be interesting to see how teams whose members number in the hundreds or thousands would differ. We would expect these teams to have different adoption behavior. Also, a team size and lifespan analysis would provide unique insights into how long large team repositories survive versus smaller teams. What is the distribution of smaller team projects' lifespans compared to larger teams?
In addition to team size, we showed that the number of times a library appears on Stack Overflow is highly correlated with the number of adoptions in our data set. Further questions about Stack Overflow still need to be answered. In particular, more questions need to be answered about which way Stack Overflow and GitHub grow - does Stack Overflow influence GitHub, does GitHub influence Stack Overflow, or is there information flowing in both directions? How do individuals find new libraries on Stack Overflow? What is the distribution of attention given to different Stack Overflow questions? Each of these questions requires further analysis as we attempt to understand how groups utilize outside resources to learn about new information.
What does it look like when team members fight over new library usage?
When working on a team, there is bound to be conflict. Different team members have various opinions about which library is best to use in a repository. The probability of these fights occurring increases with team size, as shown in Fig. <ref>. The winner of these fights tends to be more experienced as shown in Fig. <ref>.
While we have done an analysis of team size, length of experience, and the libraries used in fights, there are still questions to be answered regarding fights. Further research directions could include analyzing when fights occur in a project history. It would be interesting to see whether fights occur early or late in a project repository's history. Additionally, this research focused solely on fights between two individuals; further analysis needs to be done on larger team sizes. Would an analysis of large team fights yield groups of people who are fighting for control of a project? Would larger teams result in more or fewer fights?
It makes intuitive sense that more experienced teammates would win GitHub fights, since they have more seniority in a project. However, it would also be interesting to investigate the projects in which the individual with less project experience wins the fight. What characteristics do these new, inexperienced team members have that cause them to win code fights? Is there another interesting quality that gives them more influence over their teammates? More research is needed to answer these questions.
Additionally, in future work we could uncover who brings new libraries into a project. We have seen that more experienced programmers tend to win these code fights; however, we could learn more about how teammates work together by uncovering which team members are the first to bring in a new library. Would the teammates who have worked longest on the project be the ones to try new approaches with other libraries, or would new teammates be the ones to suggest new ideas? Additionally, is there a divide between who first uses a new library and the popularity of that library on Stack Overflow? We could hypothesize that common libraries would be used for the first time equally by both experienced and inexperienced programmers, but that a more specialized library would only be introduced by veteran team members. Since we have suggested that some libraries have steep learning curves, given the number of Stack Overflow posts about them, we could also track how team members fight over how to use complicated libraries, and whether those libraries take longer to adopt than others.
There are some important caveats to the findings presented in the present work. We were only able to crawl 13% of all public Python GitHub projects; even if we could obtain all Python projects, these public projects only represent a subset of all Git projects. Therefore, we must temper conclusions to represent a case study of this domain, and we caution the reader against drawing broad conclusions about user behavior outside of Python projects from public GitHub repositories. We can hypothesize that private software repositories are written by teams that have closer interpersonal relationships, and library adoption will appear different in these groups. Therefore, our conclusions cannot be generalized to all GitHub projects, though they provide a good overview of how libraries are adopted in public GitHub repositories. In particular, fights might look different in private repositories because we can assume that there would be some sort of relationship between the committers in those repositories, which might affect who wins those code fights. Additionally, we might find that people who post only in private repositories are very different from those who have public GitHub accounts. Understanding the differences between these types of users would yield more interesting takeaways about how individuals work together in teams, as we analyze the difference between people who are more guarded about their work and those who are more open.
From our work, we found some interesting takeaways. We see that the number of commits and adoptions per project, along with team size, follow a power law distribution. We can conclude that power law distributions are common in social behavior - there are many people who contribute little, and only a few who contribute in large amounts. We found positive correlations between the number of times libraries appear on Stack Overflow and on GitHub. We discovered that libraries popular on Stack Overflow have faster rates of adoption for projects in Git. Additionally, smaller teams are more agile and can grow more quickly than larger teams, when productivity is measured as a function of median percentage growth. We also find that code fights are rare, but when they occur, they tend to be won by more experienced coders and to involve libraries used for debugging purposes. We can therefore conclude that the availability and proliferation of online resources has helped improve productivity for programmers, and that team characteristics, including size and seniority, have interesting implications for team dynamics. We can only expect resources such as Stack Overflow and GitHub to continue growing as they attract new users.
However, just analyzing commit histories of varied team sizes does not tell the whole story. While we see in Figure <ref> that larger teams do not grow as quickly, further research needs to be done to conclude whether these larger teams are actually more efficient. Smaller teams may be 'moving fast and breaking things,' while larger teams could be more cautious in their execution, since larger teams by definition have more interpersonal relationships which need to be managed. It could be that larger teams have a higher percentage of their code make its way into the production version, while the multiple, minor commits that smaller teams make might be junk code that winds up being deleted. Therefore, this research needs to be continued to investigate the problem of team productivity from many different angles. This raises the question of what 'productivity' means, which could fuel many different research questions.
While this work has focused on the library-centric approach to understanding adoptions, more work needs to be done to understand how individuals work to adopt information. Epidemiological models have been used to understand how information spreads across a social network [27], with an SIR model (susceptible, infected, recovered) being used to mimic one's potential to become 'infected' (by viewing a post) and to 'infect' others (by sharing it). In these models, individuals have varying levels of susceptibility. Further research could apply these models to GitHub users. Do we see that there are some individuals who have high susceptibility rates, which could enable them to adopt new libraries more quickly? Also, are there some users who are more 'infectious' than others, such that when they are present on a project, their introduced libraries are adopted more quickly than those introduced by individuals who are less 'good' at spreading 'infections'? This could be an interesting research topic that attempts to apply epidemiological models to GitHub projects, where a library could be considered a virus. This type of approach could also help us create a model that finds highly influential GitHub users who are highly successful in implementing new libraries in varied projects. We could also view new libraries as a 'virus' that spreads throughout GitHub - and track the users that are instrumental in helping them spread.
From this work, we can conclude that the proliferation of online coding resources such as GitHub and Stack Overflow has been a positive development for programmers who wish to learn how to use new libraries to accomplish their coding tasks. Observing the growth in the number of individuals who use Stack Overflow and GitHub shows that both platforms have grown tremendously among those hoping to learn to program, and we expect continued high growth rates for both.
Future Work
We uncovered some interesting findings, but ultimately end up with more questions than conclusive answers. We encourage the community to explore specific questions raised by our results using the methodology developed in the present work. Specifically, we encourage further probing into how Stack Overflow contributes to the growth of library adoption and popularity. We have uncovered patterns that exist when team members fight over library adoption, and we look forward to further research which investigates code fights at an even greater depth.
We present several facets of analysis, but due to the complexity of varying team sizes and GitHub repositories, there exist virtually limitless possibilities in exploring the data. Since the commit and adoption distributions follow a power law, it might be interesting to investigate what occurs only in projects where these values are very large. This paper attempted to account for this variety of team sizes, but more focused work investigating either small or large team sizes would yield very interesting results. Additionally, tracking the growth of team sizes over time could yield some very interesting research.
While this work ignores when commits occurred as a matter of date and time, gathering time stamps of commits might also yield interesting research. There are several questions that could be answered by analyzing time stamps, such as which time of year projects are more productive (students trying to finish semester projects? Teams being more productive at the end of the month?).
Additionally, this work does not attempt to track users across projects. Further research could be done to discover whether a person's prior programming experience results in lower time to adoption, or whether sufficiently complex libraries remain hard even for experienced programmers. Some of these questions have been answered by Krohn et al. [26]. Another interesting research topic would be to track how quickly libraries are adopted as a function of time spent in the project, since many projects have short lifespans. Further research could attempt to find out whether libraries that are adopted later in a project repository's history are adopted more quickly, as teams learn how to work together more cohesively.
We have presented an analysis of the dynamics of library adoptions in Python projects pushed to GitHub. We find that when teams attempt to learn new information together, it can be challenging to apply these new concepts, and there is often a learning curve before the new information can fully stabilize within the group project. We further find that even though learning curves are unavoidable, it helps to have teammates and other online resources that can guide groups towards learning how to adopt new information. When conflict arises, the more experienced team members usually end up winning disagreements, as measured by whose code ends up in the final version. Through this work, we confirmed that learning new information together can be a difficult process, and that many of the statistics surrounding GitHub projects follow power law distributions (including team size, commits, and adoptions). This work provides a first glimpse into an analysis of teamwork on GitHub, though there are still more in-depth questions to be answered to uncover more information about more complex interactions.
[1] R. G. Kula, D. M. German, T. Ishio, and K. Inoue, “Trusting a library: A study of the latency to adopt the latest maven release,” in 2015 IEEE 22nd International Conference on Software Analysis, Evolution, and Reengineering (SANER). IEEE, 2015, pp. 520–524.
[2] F. B. Viégas, M. Wattenberg, and K. Dave, “Studying cooperation and conflict between authors with history flow visualizations,” in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 2004, pp. 575–582.
[3] L. F. Dias, I. Steinmacher, G. Pinto, D. A. da Costa, and M. Gerosa, “How does the shift to github impact project collaboration?” in Software Maintenance and Evolution (ICSME), 2016 IEEE International Conference on. IEEE, 2016, pp. 473–477.
[4] V. Cosentino, J. Luis, and J. Cabot, “Findings from github: methods, datasets and limitations,” in Proceedings of the 13th International Conference on Mining Software Repositories. ACM, 2016, pp. 137–141.
[5] E. Kalliamvakou, G. Gousios, K. Blincoe, L. Singer, D. M. German, and D. Damian, “The promises and perils of mining github,” in Proceedings of the 11th Working Conference on Mining Software Repositories. ACM, 2014, pp. 92–101.
[6] T. McDonnell, B. Ray, and M. Kim, “An empirical study of api stability and adoption in the android ecosystem,” in 2013 IEEE International Conference on Software Maintenance. IEEE, 2013, pp. 70–79.
[7] A. K. Saha, R. K. Saha, and K. A. Schneider, “A discriminative model approach for suggesting tags automatically for stack overflow questions,” in Proceedings of the 10th Working Conference on Mining Software Repositories. IEEE Press, 2013.
[8] A. Anderson, D. Huttenlocher, J. Kleinberg, and J. Leskovec, “Discovering value from community activity on focused question answering sites: a case study of stack overflow,” in Proceedings of the 18th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 2012, pp. 850–858.
[9] D.-C. Yan, Z.-W. Wei, X.-P. Han, and B.-H. Wang, “Empirical analysis on the human dynamics of blogging behavior on github,” Physica A: Statistical Mechanics and its Applications, vol. 465, pp. 775–781, 2017.
[10] Y. Xiong, Z. Meng, B. Shen, and W. Yin, “Mining developer behavior across github and stackoverflow,” in SEKE, 2017, pp. 578–583.
[11] G. Silvestri, J. Yang, A. Bozzon, and A. Tagarelli, “Linking accounts across social networks: the case of stackoverflow, github and twitter,” in KDWeb, 2015, pp. 41–52.
[12] R. K.-W. Lee and D. Lo, “Github and stack overflow: Analyzing developer interests across multiple social collaborative platforms,” in International Conference on Social Informatics. Springer, 2017, pp. 245–256.
[13] M. Gharehyazie, B. Ray, and V. Filkov, “Some from here, some from there: Cross-project code reuse in github,” in Mining Software Repositories (MSR), 2017 IEEE/ACM 14th International Conference on. IEEE, 2017, pp. 291–301.
[14] L. Ponzanelli, G. Bavota, M. Di Penta, R. Oliveto, and M. Lanza, “Mining stackoverflow to turn the ide into a self-confident programming prompter,” in Proceedings of the 11th Working Conference on Mining Software Repositories. ACM, 2014.
[15] S. P. Reiss, “Semantics-based code search,” in Proceedings of the 31st International Conference on Software Engineering. IEEE Computer Society, 2009, pp. 243–253.
[16] J. Middleton, E. Murphy-Hill, D. Green, A. Meade, R. Mayer, D. White, and S. McDonald, “Which contributions predict whether developers are accepted into github teams,” 2018.
[17] B. Vasilescu, A. Serebrenik, and V. Filkov, “A data set for social diversity studies of github teams,” in Proceedings of the 12th Working Conference on Mining Software Repositories. IEEE Press, 2015, pp. 514–517.
[18] B. Vasilescu, Y. Yu, H. Wang, P. Devanbu, and V. Filkov, “Quality and productivity outcomes relating to continuous integration in github,” in Proceedings of the 2015 10th Joint Meeting on Foundations of Software Engineering. ACM, 2015, pp. 805–816.
[19] F. Perek, “Vector spaces for historical linguistics: Using distributional semantics to study syntactic productivity in diachrony,” in Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), vol. 2, 2014.
[20] R. Goel, S. Soni, N. Goyal, J. Paparrizos, H. Wallach, F. Diaz, and J. Eisenstein, “The social dynamics of language change in online networks,” in International Conference on Social Informatics. Springer, 2016, pp. 41–57.
[21] D. Krackhardt, “Organizational viscosity and the diffusion of controversial innovations,” Journal of Mathematical Sociology, vol. 22, no. 2, pp. 177–199, 1997.
[22] C. Danescu-Niculescu-Mizil, R. West, D. Jurafsky, J. Leskovec, and C. Potts, “No country for old members: User lifecycle and linguistic change in online communities,” in Proceedings of the 22nd International Conference on World Wide Web. ACM, 2013.
[23] R. Sumi, T. Yasseri et al., “Edit wars in wikipedia,” in 2011 IEEE Third International Conference on Privacy, Security, Risk and Trust (PASSAT) and 2011 IEEE Third International Conference on Social Computing (SocialCom). IEEE, 2011, pp. 724–727.
[24] R. Rotabi, C. Danescu-Niculescu-Mizil, and J. Kleinberg, “Competition and selection among conventions,” in Proceedings of the 26th International Conference on World Wide Web. International World Wide Web Conferences Steering Committee, 2017.
[25] D. Avery, H. K. Dam, B. T. R. Savarimuthu, and A. Ghose, “Externalization of software behavior by the mining of norms,” in Proceedings of the 13th International Conference on Mining Software Repositories. ACM, 2016, pp. 223–234.
[26] R. Krohn and T. Weninger, “Library adoption in public software repositories,” Journal of Big Data, vol. 6, no. 1, p. 36, 2019.
[27] L. Zhao, H. Cui, X. Qiu, X. Wang, and J. Wang, “Sir rumor spreading model in the new media age,” Physica A: Statistical Mechanics and its Applications, vol. 392, no. 4, pp. 995–1003, 2013.
# Bio-Inspired Modality Fusion for Active Speaker Detection
Gustavo Assunção1,2,†, Nuno Gonçalves1,2, and Paulo Menezes1,2

1 Institute of Systems and Robotics (ISR), University of Coimbra, R. Silvio Lima, 3030-194 Coimbra, Portugal
2 Department of Electrical and Computer Engineering, University of Coimbra, Portugal
† Corresponding author: <EMAIL_ADDRESS>

This work was supported by OE - national funds of FCT/MCTES (PIDDAC) under project UID/EEA/00048/2019.
###### Abstract
Human beings have developed fantastic abilities to integrate information from various sensory sources, exploiting their inherent complementarity. Perceptual capabilities are therefore heightened, enabling, for instance, the well-known “cocktail party” and McGurk effects, i.e., speech disambiguation from a panoply of sound signals. This fusion ability is also key in refining the perception
of sound source location, as in distinguishing whose voice is being heard in a
group conversation. Furthermore, Neuroscience has successfully identified the
superior colliculus region in the brain as the one responsible for this
modality fusion, with a handful of biological models having been proposed to
approach its underlying neurophysiological process. Deriving inspiration from
one of these models, this paper presents a methodology for effectively fusing
correlated auditory and visual information for active speaker detection. Such
an ability can have a wide range of applications, from teleconferencing
systems to social robotics. The detection approach initially routes auditory
and visual information through two specialized neural network structures. The
resulting embeddings are fused via a novel layer based on the superior
colliculus, whose topological structure emulates spatial neuron cross-mapping
of unimodal perceptual fields. The validation process employed two publicly
available datasets, with achieved results confirming and greatly surpassing
initial expectations.
## I INTRODUCTION
The ability to discern between activity and passivity in humans may well
benefit social robots and autonomous cognition systems by enabling a more
accurate and natural response in interactive scenarios. Accordingly, machines
capable of this task would be able to measure a user’s engagement level and
infer valuable conclusions. Moreover, successfully solving this task would
expedite a research shift towards other problems whose solutions depend on the
output of a human activity-passivity recognizer. Recent work on data mining
and collection [1] and speaker diarization [2], which heavily rely on the use
of active speaker detection, are exemplary of the topic’s importance.
Active speaker detection (ASD), or the ability to recognize who in a group is
speaking at any given time, is a subset of human activity-passivity
recognition. This subset focuses on mapping utterances that make up a speech
signal, such as in a conversation, to either a closed or open set of
intervening speakers. In turn, techniques from the subset may take advantage
of audio-only, visual-only or audiovisual information depending on their
nature. Though studies do exist taking advantage of other types of perceptual
input, such as [3], these are far less common and somewhat less successful.
In the past, extensive research effort has been put towards the audio-only
section of ASD as a side tool for other problems [4], [5]. Techniques
generally relying on sequential speaker clustering [6] or speech source
localization [7], [8] were often enough to achieve real-time diarization, some
occasionally aided by supervised learning. Unfortunately these were
constrained by predetermined assumptions – mic location, static speakers,
controlled environment. Furthermore, audio-only approaches remain unable to
match segmented and clustered speech with visual speakers without additional
data or restrictions, limiting their use on audiovisual data. Hence, visual
approaches to the topic become useful, though not without their own drawbacks.
Unreservedly interpreting lip movement [9], or other facial feature variations
[10], [11] as vocal activity without accounting for similar motions stemming
from normal human interaction – emotional expression, coughing, chewing – will
of course negatively impact the detection process. An exemplary output of an
active speaker detection system is shown in Fig. 1, for easier understanding.
Figure 1: Exemplary output of a multi-speaker ASD system where the
intervening speaker(s) is(are) denoted in green and speaker passiveness is
encircled in red.
In terms of multi-modal approaches, by means of complementing a visual
approach with its audio counterpart or vice-versa, performances are generally
higher thanks to increased robustness. This improvement is obtained by
exploiting correlations between the two types of data (e.g. [12], [13]) which
may decrease ambiguous situations prone to cause model confusion – speaker
overlap, unrelated movement. Self-evidently, techniques requiring both a
visual and audio input are unable to deal with situations where either of the
modalities is missing. These situations, however, are scarce, and do not
correspond to the immensely vast amount of audiovisual data available
nowadays. Thus multi-modality remains as the preferred manner of addressing
active speaker detection [14], [15], [16].
In spite of the recent technological advancements which have allowed deep
learning research to flourish, such improvements would likely not have been
possible without key biological contributions to the area. The success of
remarkable architectures such as convolutional neural networks, whose earliest
iterations [17] strived to emulate neuron behavior in the visual cortex of
primates [18], would likely have been halted without this biological
inspiration. However, Engineering maintains a collaborative yet stubbornly
segregated relationship with Biology by disregarding several other promising
models which the latter has to offer.
In this study our contribution is two-fold as we propose a new multi-modal
approach to visual ASD as well as a novel bio-inspired modality fusion layer
which strives to emulate the superior colliculus structure of the brain. The
approach is designed to work in real-time and makes no prior assumptions
regarding noise, number of participants or spatial configuration. This is
because it was developed also with the intent of being used in parallel
projects, where it is required to recognize speakers and the moments they
intervene in. The newly designed modality fusion layer performs successful
integration of multi-source uni-sensory information through feedback
stimulation of spatially proximal neural regions. The obtained results were
highly positive, having exceeded expectations in terms of current state-of-
the-art standards.
The presented paper is structured as follows. Initially, examination and
insight into previous research is presented in Section II, introducing the reader
into the topic and contextualizing our approach. The proposed technique is
described in Section III, followed by an overview of the experiments carried
out in Section IV and the obtained results. These are then examined and
critically discussed in Section V. Finally, a conclusion is provided in
Section VI.
## II RELATED WORK
The overview presented is branched into two sub-sections. To begin with,
recent research on the topic is summarized in order to give further insight
into active speaker detection (ASD), and contextualize our own approach to the
matter. Following that, we analyze a number of the few available datasets and
their suitability as an integrating part of ASD techniques.
### II-A ASD Recent Research
Given the broad array of early audio-centered approaches, and their lingering
success with nonvisual data, current efforts have turned more towards vision-
oriented issues. This is not to say audio has stopped being a key aspect of
current approaches to ASD. In fact, multi-modal state-of-the-art techniques
have used machine learning to realize audiovisual correlations in data. Such
techniques are successful, for instance, as a means to recognize speech
related facial motion while also discerning it from other unrelated movements.
In [16], [19] Stefanov et al. used the outputs of a face detector, along with
audio-derived voice activity detection (VAD) labels, to train a CNN task
specific feature extractor together with a Perceptron classifier. For transfer
learning comparison, known CNN architectures were used as pre-trained feature
extractors whose outputs were employed in training temporal (LSTM net) and
nontemporal (Perceptron) classifiers. In a related work [20], Ren et al.
tackled situations prone to clutter and unrelated motion by extending conventional LSTM models to share learned weights across the audio and visual modalities, thus learning audiovisual correlations that improve model robustness. With a real implementation in a NAO robot, Cech et al. [21] used
time difference of arrival (TDOA) over a set of microphone pairs to
simultaneously evince potential sound sources in 2D space. A facial detector
then estimated mouth locations in 3D space, to be statistically mapped to the
former 2D sound sources and TDOAs. In a similar fashion, Gebru et al. [22]
mapped located sound sources to the 2D image plane as well as estimated
potential speaker lip positions. However, this cross-modal weighting approach
employed a GMM variation to assign weights to audio observations based on
their spatial proximity to visual observations, and vice-versa. Audiovisual
clustering then achieved active speaker detection. To give another example,
Hoover et al. [23] attempted correlating the degree of occurrence of
individual faces in a video with single speaker clusters of speech segments,
obtained using vectors of locally aggregated descriptors (VLADs) [24].
In addition to the usual reliance on high-end equipment or ideal settings –
consistent number of speakers, frontal facing and nonoverlapping speech,
mic/camera location – which is common to clustering and other methods, further
issues also occur with current state-of-the-art techniques. Particularly,
unidirectional dependency between modalities is recurrent among machine
learning approaches. The lack of carefully labeled big data on both the audio
and visual modalities for ASD evidently forces resorting to alternatives
which transfer labels generated from one modality to its sibling (typically
from audio to video). Such processes unavoidably lead to propagation of error,
originating from the single modality prediction techniques, which could be
avoided were large ASD datasets existent and applicable to robust machine
learning architectures.
### II-B Datasets
In order to validate audiovisual ASD techniques (and train when required),
some data encompassing all considered modalities must be available.
Furthermore, such data is required to include some form of labeling so as to
enable its application in the aforementioned techniques. Unfortunately, the
multi-modal nature and challenges inherent to audiovisual ASD constitute a
major setback in terms of acquiring labeled data. As such, small and
constrained datasets are frequently built by research teams just to evaluate
their own approaches, which of course restricts their comparability and
reduces the credibility of their success. Yet very recently a few datasets
have been created and made available, which address this current necessity in
ASD research.
In [2] the AVSpeech dataset was presented, an automatically collected large-
scale corpus made up of several lecture recordings available on YouTube. These
recordings, amounting to around 4700 hours of audiovisual data, always show a
single clearly visible face to which the ongoing speech corresponds. This
dataset could be labeled for active speaker detection using some voice
activity detection (VAD) or speech-silence discerning technique, and then
applied for training and testing of ASD approaches. However, this may lead to
model confusion later on in multi-speaker scenarios and as such AVSpeech is
perhaps best left for ASD testing. A potentially better example of a suitable
corpus is given in [15], where the Columbia dataset was first introduced.
Here, the recording of a panel discussion between 7 speakers was overlapped
with upper body bounding boxes for all participants, each labeled with
speak/no-speak markers. By always including more than one speaker in each
frame, this dataset has the benefit over AVSpeech to enable dealing with
multi-speaker situations in inference models. As for its downside, the dataset
is of reduced size and its data is considerably homogeneous. This evidently
means it may be harder to extrapolate relevant features only and generalize a
model trained on this data, which constrains the dataset’s usage in machine
learning approaches.
The largest and most heterogeneous active speaker detection dataset available
at the moment is presumably AVA-ActiveSpeaker [25], developed by Google for
the 4th ActivityNet challenge at CVPR 2019. This hand labeled set of segments
obtained from 160 YouTube videos amounts to around 38.5 hours of audiovisual
data, where each of the covered 3.65 million frames is detailed with bounding
box locations for all detected speaker faces as well as with three types of
labels for each present bounding box – speaking and audible, speaking but not
audible, not speaking. Such a corpus should therefore be perfect for learning
ASD as it covers all previously mentioned requirements for a suitable
audiovisual dataset and presents no major drawbacks. To support this, the team
behind AVA-ActiveSpeaker used it to train and test both static and recurrent
prediction models, applying modality fusion and consistently achieving great
performances. Unfortunately however, this dataset encompasses an extremely
high amount of low quality videos in terms of image resolution, audio
intelligibility and noise. The creators of the dataset even went so far as to
include several voice-over videos and dubbed versions of movies not in their
original language. This naturally means a copious amount of audio clips are out of sync with their supposedly corresponding facial image sequences; moreover, the lengths of these clips frequently do not match those of their sequences, since the same phrase usually varies in duration across languages. Taking this into account, each of the 160 videos was manually inspected for quality assessment: although such videos may be appropriate for a challenge under those conditions, it would be senseless to use great amounts of noisy, incorrect data to train a model and expect it to perform successfully on correct data in the real world. A total of only 20 videos, whose list is available on our GitHub repository (https://github.com/gustavomiguelsa/SCF), were deemed to meet today's audiovisual capture standards. The repository also contains an exemplary demonstration video of the technique described in this paper, for easier understanding.
## III METHODOLOGY
This section describes the path taken to develop the proposed technique aimed
at performing active speaker detection, including detailed descriptions of
each individual component of the overall system.
The developed method is capable of dealing with either audio data, visual data
or audiovisual data thanks to its split structure. Put simply, each
modality is dealt with independently through separate preprocessing and the
use of dedicated architectures for feature extraction. In order to achieve
this, audiovisual raw data is divided into audio segments for which narrowband
spectrograms are generated and propagated through the VGGVox model [1], and
corresponding sequences of facial images, whose features and corresponding
temporal correlations are obtained through the employment of an extended
ResNet50 architecture [26] with incorporated recurrence henceforth designated
as RN-LSTM. The convolutional stage of the former has been pretrained with the
VGGFace2 dataset [27] and is available in [28]. Detail is further provided for
each model component in the following sections.
### III-A Audio Clips and VGGVox
In [1], the VGGVox model was first introduced as a modified VGG-M CNN
architecture [29] aimed at speaker recognition through examination of spectral
representations of audio clips. Given its extensive training with over 100,000
utterances by 7000+ distinct speakers, the observed excellent performance has
been recognized as a state-of-the-art standard in recent research. Thus, the model is ideal for extrapolating speaker-specific cues and discerning multiple distinct speakers in an active speaker detection scenario.
In terms of the actual audio clips to be analyzed, these should undergo only minimal preprocessing while still being able to progress through a machine learning model. In the past, many techniques have been proposed to obtain audio representations which achieve this goal, such as the well-known Mel-frequency cepstral coefficients [30]. Despite their success, such techniques are noted to introduce increased overhead and require manual tuning, which somewhat devalues the subsequent model's positive performance. Therefore, for each raw audio clip in our approach, a sliding Hamming window of width 25ms
was applied with a step of 10ms in order to generate a respective narrowband
spectrogram, as well as means and variances being normalized at each frequency
bin of the spectrum. These operations, following VGGVox's preprocessing stage, were the only ones performed on the audio data.
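For illustration, a minimal sketch of this preprocessing step is given below; the 16 kHz sample rate is an assumption on our part (the text does not specify one), and SciPy's spectrogram routine merely stands in for whatever implementation was actually used.

```python
import numpy as np
from scipy import signal

def narrowband_spectrogram(audio, sr=16000):
    """Sliding 25 ms Hamming window with a 10 ms step, followed by
    mean/variance normalization at each frequency bin."""
    nperseg = int(0.025 * sr)                 # 25 ms window
    noverlap = nperseg - int(0.010 * sr)      # 10 ms step
    _, _, spec = signal.spectrogram(audio, fs=sr, window="hamming",
                                    nperseg=nperseg, noverlap=noverlap,
                                    mode="magnitude")
    mu = spec.mean(axis=1, keepdims=True)     # per-bin mean
    sd = spec.std(axis=1, keepdims=True) + 1e-8
    return (spec - mu) / sd                   # normalized spectrogram
```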
### III-B Image Sequences and RN-LSTM
The ResNet-50 residual network architecture is one based on the deep residual
learning process proposed in the work [26], which may shortcut the connections
between the inputs and outputs of a network’s intermediate layers – creating
identity mappings. This procedure is performed in order to attempt
approximation of residual functions in lieu of fitting a more complex mapping,
thanks to the ease of learning advantage caused by occasional skipping of
layers considered less relevant. Through skipping, increasingly deeper models may be created without the inherent accuracy degradation previously observed in [31], while still benefiting from improved generalization. In addition, the
identity mappings also imply a loss similar to that of their shallow non-
residual equivalent models. For these reasons, residual networks should be
capable of achieving the level of generalization necessary for extracting
activity/inactivity cues from face images.
Whilst residual networks may be suitable for the task, these were not
originally trained for facial analysis. Furthermore, ASD approaches must
unavoidably deal with human faces in real world interactions which are
obviously unconstrained by pose or expression, besides potentially
encompassing several and extremely diversified physical attributes. As a means
of dealing with these face issues and other common image problems (e.g.
varying light/exposure) while still benefiting from the advantages of residual
architectures, the ResNet-50 model trained with VGGFace2 made available by
[27] was chosen for our approach.
Considering how little to no conclusions pertaining to speech activity may be
drawn from solo face images, analyzing the sequential patterns between them
becomes a requirement for success. Undoubtedly, recurrent neural networks and
more specifically LSTMs are highly suitable for approaching that requirement
given their demonstrated past success when dealing with the temporal
correlation of facial embeddings [32], [33]. Thus a wide double-layered LSTM was appended to the trained ResNet-50 architecture, creating the aforementioned RN-LSTM. This new recurrent section was trained on its own, with the weights of the preceding ResNet-50 layers kept frozen.
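A minimal Keras sketch of this RN-LSTM is shown below. The LSTM width (512) and the 224x224 input resolution are our assumptions, as the text only states that the LSTM is wide and double-layered; `keras_vggface` is the package from [28] providing the VGGFace2-pretrained ResNet-50.

```python
from keras.models import Model
from keras.layers import Input, TimeDistributed, LSTM
from keras_vggface.vggface import VGGFace  # pretrained ResNet-50 from [28]

def build_rn_lstm(lstm_units=512):
    """Frozen VGGFace2 ResNet-50 applied frame-wise, followed by a
    trainable double-layered LSTM (the RN-LSTM)."""
    resnet = VGGFace(model="resnet50", include_top=False,
                     input_shape=(224, 224, 3), pooling="avg")
    for layer in resnet.layers:        # freeze the convolutional stage
        layer.trainable = False

    frames = Input(shape=(None, 224, 224, 3))  # variable-length clips
    x = TimeDistributed(resnet)(frames)        # per-frame embeddings
    x = LSTM(lstm_units, return_sequences=True)(x)
    x = LSTM(lstm_units)(x)                    # final visual embedding
    return Model(frames, x)
```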
### III-C Bio-Inspired Embedding Fusion
Integration of multi-modal information is a considerably complex task
characterized by deeply convoluted data correlations. In the animal kingdom, such a task is unconsciously performed in certain brain regions, allowing animals and humans alike to derive multi-sensory conclusions. Inspired by the
work of Ursino et al. in [34] and their model of the brain’s superior
colliculus (SC) for multi-sensory integration, we present a novel bio-inspired
neural network layer for audio and visual embedding fusion. This section
describes its structure and behavior.
The proposed SC fusion layer, henceforth abbreviated to SCF, is composed of two upstream uni-modal neural areas corresponding to the audio and visual
modalities, plus the additional downstream multi-modal area in charge of
audiovisual integration. Each upstream area comprises $N\times M$ basic neuron
units interlinked within that same area, but not between upstream areas, and
topologically organized to communicate inputs to corresponding downstream
neurons whose receptive fields (RF) – kernels – are in the same spatial
positions. The downstream area in turn, also made up of $N\times M$ neurons, feeds back to each upstream area in the same spatially organized fashion. This
serves to emulate how proximal biological neurons respond to stimuli in
proximal position of space, and allows for visual area neurons to indirectly
excite audio area neurons, or vice-versa, through the multi-modal area. Lateral synapses between intra-area neurons are modelled after a Mexican hat (Ricker) wavelet, so they may be both excitatory (central region) and inhibitory
(surrounding region). The overall described structure is depicted in Fig. 2.
Figure 2: Overall SCF layer structure, where the synapses of the central darker neuron (representing neuron $ij$ in an $N\times M$ neural area) are modelled after a Mexican hat wavelet. Each neuron is laterally linked to its
area $s$ neighbors by excitatory $ex$ and inhibitory $in$ synapses (red
arrows). The described spatially conditional connection between uni-modal
neurons (visual/auditory) and their multi-modal counterparts is shown on the
right.
Consider a neural area $s\in S=\{a,v,m\}$, pertaining to the auditory, visual and multi-modal factors, respectively. Assuming $s$ to be composed of $N\times M$ neurons as previously described, each neuron $ij$ or $hk$ ($i,h=1,\ldots,N;\ j,k=1,\ldots,M$) receives a composite input $u^{s}_{ij}[n]$ as in (1).
$u^{s}_{ij}[n]=\begin{cases}r^{s}_{ij}[n]+l^{s}_{ij}[n]+f^{s}_{ij}[n]&s\in\{a,v\}\\ k^{a}\cdot z^{a}_{ij}[n]+k^{v}\cdot z^{v}_{ij}[n]+l^{s}_{ij}[n]&s=m\end{cases}$ (1)
Where $r^{s}_{ij}[n]$ denotes the input at neuron $ij$ triggered by an
external stimulus, $l^{s}_{ij}[n]$ represents the input caused by lateral
synapses between intra-area neurons, $f^{s}_{ij}[n]$ is the feedback input
from the multi-modal SC neurons up to the uni-modal neurons, and
$z^{s}_{ij}[n]$ describes an arbitrary neuron’s activity whose continuous
differential model is discretized in our implementation as the signal in (2)
for any $s\in S$.
$\tau_{s}z^{s}_{ij}[n+1]=(\tau_{s}-1)z^{s}_{ij}[n]+\phi(p^{s}(u^{s}_{ij}[n]-\vartheta^{s}))$ (2)
Figure 3: Structure overview of the proposed model. From a video, the image
sequence of a face is extracted as well as the corresponding audio. A
narrowband spectrogram is generated from the audio and progressed through
VGGVox. The image sequence is progressed through the RN-LSTM without any
preprocessing. The obtained audio and visual embeddings are then classified
either separately or merged by concatenation or the SCF layer.
For such a response $\tau_{s}$ refers to the speed with which a neuron
responds to some stimulus, and $\phi$ is the sigmoidal activation of some
input $u^{s}_{ij}[n]$ normalized by $\vartheta^{s}$ and $p^{s}$ – respectively
the input value at the central point of neuron activity and its slope. As a
consequence, the input to a multi-modal neuron can be seen as the weighted sum
of the input activities of each of its spatial uni-sensory counterparts
($k^{a}$ and $k^{v}$ being inter-area synaptic connection intensity) plus the
area’s own lateral synapses.
The lateral synaptic input received by some neuron $ij$ of an area $s$ is defined simply as the weighted sum of the neighboring neurons' activities, following (3). Here $L^{s}_{ij,hk}$ represents the strength of the synaptic connection from neuron $hk$ to neuron $ij$, which follows the Mexican hat wavelet in (4) as previously explained.
$l^{s}_{ij}[n]=\sum_{h,k}L^{s}_{ij,hk}\cdot z^{s}_{hk}[n],\quad s\in S$ (3)
$L^{s}_{ij,hk}=L_{ex}^{s}\cdot e^{-\frac{d_{x}^{2}+d_{y}^{2}}{2(\sigma_{ex}^{s})^{2}}}-L_{in}^{s}\cdot e^{-\frac{d_{x}^{2}+d_{y}^{2}}{2(\sigma_{in}^{s})^{2}}},\quad s\in S$ (4)
Here the $L_{ex},\sigma_{ex}$ and $L_{in},\sigma_{in}$ parameters define the strength and spatial disposition of excitatory and inhibitory lateral synapses, respectively. As for $d_{x}$ and $d_{y}$, these refer to the vertical and horizontal distances between the intervening neurons within an area, calculated circularly to prevent border issues.
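For illustration, the sketch below builds the full lateral weight matrix of one neural area according to (4), with circular distances; zeroing the self-connection is our own assumption rather than something stated in the text.

```python
import numpy as np

def lateral_weights(N, M, L_ex, sig_ex, L_in, sig_in):
    """Mexican-hat lateral synapse strengths (4) over circular
    vertical/horizontal distances; returns an (N*M, N*M) matrix."""
    ii, jj = np.meshgrid(np.arange(N), np.arange(M), indexing="ij")
    pos = np.stack([ii.ravel(), jj.ravel()], axis=1).astype(float)
    d = np.abs(pos[:, None, :] - pos[None, :, :])
    d[..., 0] = np.minimum(d[..., 0], N - d[..., 0])  # circular d_x
    d[..., 1] = np.minimum(d[..., 1], M - d[..., 1])  # circular d_y
    d2 = (d ** 2).sum(axis=-1)
    L = (L_ex * np.exp(-d2 / (2.0 * sig_ex ** 2))
         - L_in * np.exp(-d2 / (2.0 * sig_in ** 2)))
    np.fill_diagonal(L, 0.0)  # assumed: no self-synapse
    return L
```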
In terms of the external stimulus input at each neuron $r^{s}_{ij}[n]$, this
was obtained through the inner product of the stimulus $I_{s}$ with the
neuron’s own receptive field $R_{ij}^{s}$, made trainable and adaptive at each
epoch, as in (5). This was also the case for the feedback input
$f^{s}_{ij}[n]$, being calculated as the inner product of a multi-modal
neuron’s activity $z^{m}_{ij}[n]$ and $F_{s}$, the strength of the feedback
synaptic connection, as in (6).
$r^{s}_{ij}[n]=R_{ij}^{s}\cdot I_{s},\quad s\in\{a,v\}$ (5)
$f^{s}_{ij}[n]=F_{s}\cdot z^{m}_{ij}[n],\quad s\in\{a,v\}$ (6)
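Putting (1)–(6) together, one discretized update of a single neural area can be sketched as follows; variable names are ours and the code is an illustration, not the authors' implementation.

```python
import numpy as np

def scf_step(z, r_plus_f, L, tau, p, theta):
    """One update (2) of an area's activities z (flattened grid).
    r_plus_f collects the external and feedback inputs r + f in (1),
    or the weighted uni-sensory sum k^a z^a + k^v z^v for the
    multi-modal area; L is the lateral weight matrix from (4)."""
    u = r_plus_f + L @ z                          # composite input (1), (3)
    phi = 1.0 / (1.0 + np.exp(-p * (u - theta)))  # sigmoidal activation
    return ((tau - 1.0) * z + phi) / tau          # discretized dynamics (2)
```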
The motivation behind the proposed layer structure comes from the need to
effectively fuse single modality embeddings. The ideal result is an enhanced
multi-modal embedding of the most useful audio-video correlations and
features. This is achieved thanks to the SC emulation area which upon
receiving a multi-modal stimulus reinforces each of its uni-sensory fractions
upstream to spatially correlated neuron regions. This concept has greater
reach than the one demonstrated on this paper, as there is absolutely no
restrictions in terms of number of modalities for fusion – a posture analysis
neural area could be added by extending (1), behaving similarly to the
auditory and visual areas. Moreover, prior sub-fusion areas could also be
integrated in the model to combine more closely correlated information – the
vision neural area could be divided in two in the case of using a stereo
rather than mono visual capture technique. Finally, the structure’s described
non-trainable parameters were made fine tunable to application specific goals
but we kept their default values in our implementation, being supported by the
literature pointed in [34].
Evidently the embeddings used for the concatenation approach as well as the
input auditory + visual stimulus for the novel SCF layer method were obtained
using the previously described VGGVox and RN-LSTM networks. A comprehensive
diagram of the overall structure is shown in Fig. 3.
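As a rough sketch of the baseline fusion path in Fig. 3 (the concatenation variant), and assuming hypothetical 512-dimensional embeddings on both branches (the text does not report the embedding sizes), the classifier head might look as follows:

```python
from keras.models import Model
from keras.layers import Input, Dense, Concatenate

# Embedding sizes are assumptions; the paper does not report them.
audio_in = Input(shape=(512,))               # VGGVox audio embedding
video_in = Input(shape=(512,))               # RN-LSTM visual embedding
fused = Concatenate()([audio_in, video_in])  # naive baseline fusion
hidden = Dense(256, activation="relu")(fused)
out = Dense(1, activation="sigmoid")(hidden)
baseline = Model([audio_in, video_in], out)  # Concat + MLP head
```

The SCF variant replaces the concatenation with the bio-inspired layer described above.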
## IV EXPERIMENTS
The experimental portion of our work consisted of two distinct parts: (1) data
preparation, given how the condition of the available data was not suitable
for proper immediate testing; (2) modality experimentation, where each
modality was tested first individually and then fused together.
### IV-A Data preparation
The evaluation of the proposed ASD system required a considerably large amount
of data, preferably with a high degree of heterogeneity and with respect to
natural conversations. Yet as previously noted, no single dataset encompasses
all these characteristics and so the testing phase examined data from two
sources: the Columbia dataset [15]; and the 20 videos considered acceptable
from the AVA-ActiveSpeaker dataset [25]. All of the latter’s sequences were
already under the 10 second mark and so were left untouched, as opposed to the
former’s which were originally only 136 and ranged from less than 1 second to
several minutes in duration. Consequently, and considering the standard 30 fps frame rate of the original video, all instances above the 30-frame mark were segmented into 30-frame-long sequences. This was performed in
order to obtain a much larger number of instances and take better advantage of
all the available data for training and testing. Sequences from both datasets below the 30-frame mark were discarded, as it was considered that samples under 1 second in length would likely suffer from the same problems as static images with respect to active speaker detection. Given how ultimately the obtained
sequences for training and testing were of varying lengths, masking was
employed in order to enable all available sequences to be accepted by the RN-
LSTM. Considering this, training was independent of frame window size, making it arbitrary (between 1 and 10 seconds) for model applications. As for the AVA-
ActiveSpeaker sequences pertaining to the third label ’SPEAKING_NOT_AUDIBLE’,
these were considered for the visual only portion of the experiments and
disregarded for the rest. The overall characteristics of the evaluated data
are specified in Table I.
TABLE I: Characteristics of the prepared data

Dataset | Sequences | Frames | Unique Facial Trackings² | Image Conditions
---|---|---|---|---
Columbia | 4914 | 30 | 34 (6 speakers) | Identical
AVA-ActiveSpeaker | 5510 (3 labels), 5470 (2 labels) | [30,300] | 4321 (20 videos) | Varied

²This factor is provided in place of the number of speakers, given that AVA-ActiveSpeaker does not report speaker counts. However, the same speaker has been manually verified not to appear in two different videos.
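Returning to the masking mentioned above, it can be sketched as follows: frame-embedding sequences of varying length are zero-padded to a common length and a Keras Masking layer lets the LSTM skip the padding. The dimensions and dummy data below are illustrative assumptions only.

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Masking, LSTM
from keras.preprocessing.sequence import pad_sequences

MAX_LEN, FEAT = 300, 2048  # assumed max length and embedding size
# Dummy variable-length sequences of per-frame embeddings.
seqs = [np.random.randn(np.random.randint(30, 301), FEAT)
        for _ in range(8)]
batch = pad_sequences(seqs, maxlen=MAX_LEN, dtype="float32",
                      padding="post", value=0.0)

model = Sequential([
    Masking(mask_value=0.0, input_shape=(MAX_LEN, FEAT)),
    LSTM(512, return_sequences=True),  # padded steps are ignored
    LSTM(512),
])
embeddings = model.predict(batch)      # shape (8, 512)
```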
### IV-B Modality Experimentation
Before attempting classification of the fused multi-modal embeddings, an
initial testing of each individual modality was performed to assess the
extrapolation quality and obtain a term of comparison for later on.
Accordingly, multilayer perceptrons (MLP) were attached to the end of the
feature extraction layers of VGGVox and the developed RN-LSTM, for
classification. Additionally, the level of model generalization was also assessed by comparing dependence and independence scenarios in terms of speaker for the Columbia dataset and in terms of video for AVA-ActiveSpeaker – a solution to the unknown speaker count issue². Essentially, for dependent
scenario experiments (closed set) the test data includes instances from
speakers who are also represented in the training data, albeit by completely
distinct instances. On the contrary, for independent scenario experiments
(open set) it is guaranteed that the speakers from the test data are
completely different and never before seen by the model during training. These
two types of testing allow for a better evaluation of the model’s
generalization ability, by examining how much the model’s performance
decreases from dependency to independency. The obtained results are presented
in Tables II and III, respectively.
TABLE II: Audio/video results using the Columbia dataset. Mean Accuracy (Standard Deviation)

| Speaker Dependent | Speaker Independent
---|---|---
$VGGVox+MLP_{2}$ (Audio Only) | 47.37 (0.52) [%] | 17.34 (9.14) [%]
$RN$-$LSTM+MLP_{1}$ (Video Only) | 97.68 (0.39) [%] | 64.89 (14.44) [%]
TABLE III: Audio/video results using the AVA-ActiveSpeaker videos. Mean Accuracy (Standard Deviation)

| Video Dependent | Video Independent
---|---|---
$VGGVox+MLP_{2}$ (Audio Only) | 87.73 (0.87) [%] | 84.96 (1.78) [%]
$RN$-$LSTM+MLP_{1}$ (Video Only) | 78.68 (1.79) [%] | 72.48 (2.17) [%]
The novel SCF layer, as well as the entirety of the implemented experiments, was realized using the Keras framework [35]. In order to assess its
superiority against state-of-the-art standards and suitability to the task at
hand, we compared its performance to a naive baseline. This was composed of an
audiovisual concatenated embedding array fed directly to a multi-layer
perceptron, as seen in Fig. 3. The results of this experiment and those
obtained with the SCF layer alternative are shown in Tables IV and V for the
Columbia and AVA datasets, respectively.
TABLE IV: Audiovisual results using the Columbia dataset. Mean Accuracy (Standard Deviation)

| Speaker Dependent | Speaker Independent
---|---|---
$Concat+MLP_{3}$ (Baseline) | 97.60 (0.49) [%] | 41.78 (15.46) [%]
$SCF+MLP_{3}$ (New Approach) | 98.53 (0.21) [%] | 58.50 (7.71) [%]
TABLE V: Audiovisual results using the AVA-ActiveSpeaker videos. Mean Accuracy (Standard Deviation)

| Video Dependent | Video Independent
---|---|---
$Concat+MLP_{3}$ (Baseline) | 89.10 (0.41) [%] | 87.52 (2.49) [%]
$SCF+MLP_{3}$ (New Approach) | 91.68 (0.51) [%] | 88.46 (2.06) [%]
The SCF layer’s neural areas were kept at a $17\times 17$ dimension, for the
sake of computational simplicity. Each of the presented values in all tables
was obtained using 5-fold cross validation for increased robustness.
Furthermore, the adaptive moment estimation (Adam) optimizer [36] was employed
during the training phase of the models for weight adjustment. Batch
normalization [37] was also integrated in the training, in addition to minor dropout [38]. The number of epochs was kept constant across transversal testing, with each ideal value obtained through an early-stopping mechanism for the three experimental phases (video-only, audio-only and audiovisual).
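For reference, a minimal sketch of this training setup could look like the following; the hidden width, dropout rate and patience are hypothetical values, not ones reported above.

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Dense, BatchNormalization, Dropout
from keras.callbacks import EarlyStopping
from sklearn.model_selection import KFold

def make_classifier(in_dim):
    """MLP head trained with Adam, batch normalization and minor
    dropout, as described above; layer sizes are assumptions."""
    m = Sequential([
        Dense(256, activation="relu", input_shape=(in_dim,)),
        BatchNormalization(),
        Dropout(0.2),
        Dense(1, activation="sigmoid"),
    ])
    m.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
    return m

def five_fold_accuracy(X, y):
    """5-fold cross-validated accuracy with early stopping."""
    accs = []
    for tr, te in KFold(n_splits=5, shuffle=True).split(X):
        m = make_classifier(X.shape[1])
        m.fit(X[tr], y[tr], validation_split=0.1, epochs=100,
              verbose=0, callbacks=[EarlyStopping(patience=5)])
        accs.append(m.evaluate(X[te], y[te], verbose=0)[1])
    return np.mean(accs), np.std(accs)
```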
## V DISCUSSION
As a first observation, it can be noted how the audio-only approach to active
speaker detection (Tables II, III) performed surprisingly well for the AVA-
ActiveSpeaker videos but was rather unsuccessful for the Columbia dataset.
This is attributed to the mentioned data homogeneity of the latter,
considering it encompasses a single speaker panel video. The fact that there
is constant speech in the video (one speaker picks up after the other)
prevents the model from learning even the most basic factor of discerning silence from speech. Hence the results match random decision in the speaker-dependent case and fail outright in the independent one.
Still regarding the Columbia dataset’s homogeneity, it proves to be
advantageous for the dependent section of the video-only testing phase, but
again not so much for the independent one. Thus the model naturally performs
better with previously seen subjects but fails to extrapolate suitable ASD
features by training with such a reduced number of speakers. Not unlike the audio testing, the AVA videos led to a considerably high performance using images only, proving data heterogeneity and speaker abundance to be ultimately better for training, as expected.
Audiovisual results greatly surpassed expectations, with the baseline
concatenation approach already demonstrating state-of-the-art performance for
either dataset considered. An increase in standard deviation from the dependent to the independent testing was observed, as expected, considering how the model suffers a natural decrease of confidence in its predictions when analyzing previously unseen speakers. Relatively poor results are of course
still obtained for the independent part of the Columbia dataset testing due to
the constant speech issue, which as expected severely hinders the model’s
performance. Nonetheless, results are remarkably positive for either
concatenation or SCF layer approaches with respect to speaker dependent
testing. Moreover, the SCF layer still showed its superiority by majorly
improving over concatenation and even surpassing a random choice performance
despite having to deal with crippling audio data. As for the AVA video testing, both the concatenation and SCF results beat those presented in the dataset paper [25] (even though testing was carried out with only 20 of the original videos, the performance gain was still exemplary). Even more so, the SCF layer successfully improved on the already excellent baseline performance. This serves to validate its bio-inspired
concept – mutual neuron excitation of spatially proximal multi-modal neural
regions for integration of uni-sensory information – here emulated for
modality fusion. Undoubtedly even greater performance gains could be achieved
if all the layer’s hyperparameters were finely tuned to our application,
rather than using default values. Nonetheless, results were highly successful
and warrant further research in this and other areas encompassing several
types of modalities.
## VI CONCLUSIONS
This work proposed a novel modality fusion layer (SCF), bio-inspired by the
brain’s superior colliculus region and incorporated into a new method for
active speaker detection (ASD) in real-world scenarios. This proposed method
deals with audio and video information jointly and makes no assumptions
regarding input data. The audiovisual technique was extensively tested against
data from two tried and tested ASD databases, specifically the Columbia [15]
and the AVA-ActiveSpeaker [25] datasets. The obtained results were compared to
those of audio-only and video-only approaches, as well as those of a naive
concatenation multi-modal baseline. In all cases the SCF technique demonstrated state-of-the-art performance, surpassing its baseline and uni-modal counterparts. It was shown how mutual neuron excitation of a spatially
proximal multi-modal region by uni-modal neurons can be a remarkably
successful modality fusion technique.
This observed SCF layer’s success in terms of uni-sensory data integration,
even despite doing so with default settings, evidently warrants a greater
exploration of its capabilities. As such, in the future we intend to further
examine the layer’s behavior in terms of not only audiovisual fusion but also
combination of data of other natures (e.g. tactile). In addition, the
development of a real-time ASD demo based on the system presented in this work
is underway.
## References
* [1] A. Nagrani, J. S. Chung, and A. Zisserman. VoxCeleb: a large-scale speaker identification dataset. In Proc. Interspeech, 2018.
* [2] A. Ephrat, I. Mosseri, O. Lang, T. Dekel, K. Wilson, A. Hassidim, W. T. Freeman, and M. Rubinstein. Looking to listen at the cocktail party: A speaker-independent audio-visual model for speech separation. ACM Trans. Graph., 37(112), 2018.
* [3] H. Vajaria, S. Sarkar, and R. Kasturi, “Exploring co-occurrence between speech and body movement for audio-guided video localization,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 18,no. 11, pp. 1608–1617, 2008.
* [4] S. E. Tranter and D. A. Reynolds, ”An overview of automatic speaker diarization systems,” in IEEE Transactions on Audio, Speech, and Language Processing, vol. 14, no. 5, pp. 1557-1565, Sept. 2006.
* [5] R. Le Bouquin-Jeannès and G. Faucon, ”Study of a voice activity detector and its influence on a noise reduction system,” Speech Comm., vol. 16, pp. 245-254, 1995.
* [6] D. Liu and F. Kubala,“Online speaker clustering”, in Proc. IEEE Int. Conf. Acoust., Speech, Signal Processing, vol. I, Hong Kong, China, Apr.2003, pp. 572–575.
* [7] S. Maraboina, D. Kolossa, P. K. Bora and R. Orglmeister, ”Multi-speaker voice activity detection using ICA and beampattern analysis,” 2006 14th European Signal Processing Conference, Florence, 2006, pp. 1-5.
* [8] A. Bertrand and M. Moonen, ”Energy-based multi-speaker voice activity detection with an ad hoc microphone array,” 2010 IEEE International Conference on Acoustics, Speech and Signal Processing, Dallas, TX, 2010, pp. 85-88.
* [9] S. Siatras, N. Nikolaidis, M. Krinidis and I. Pitas, ”Visual Lip Activity Detection and Speaker Detection Using Mouth Region Intensities,” in IEEE Transactions on Circuits and Systems for Video Technology, vol. 19, no. 1, pp. 133-137, Jan. 2009.
* [10] R. Ahmad, S. P. Raza and H. Malik, ”Visual Speech Detection Using an Unsupervised Learning Framework,” 2013 12th International Conference on Machine Learning and Applications, Miami, FL, 2013, pp. 525-528. doi: 10.1109/ICMLA.2013.171.
* [11] K. Stefanov, A. Sugimoto, and J. Beskow. Look who’s talking: Visual identification of the active speaker in multi-party human-robot interaction. In Proc. Workshop on Advancements in Social Signal Processing for Multimodal Interaction, pages 22–27. ACM, 2016.
* [12] V. P. Minotto, C. R. Jung and B. Lee, ”Simultaneous-Speaker Voice Activity Detection and Localization Using Mid-Fusion of SVM and HMMs,” in IEEE Transactions on Multimedia, vol. 16, no. 4, pp. 1032-1044, June 2014.
* [13] R. Cutler and L. Davis, ”Look who’s talking: speaker detection using video and audio correlation,” 2000 IEEE International Conference on Multimedia and Expo. ICME2000. Proceedings. Latest Advances in the Fast Changing World of Multimedia (Cat. No.00TH8532), New York, NY, 2000, pp. 1589-1592 vol.3.
* [14] P. Chakravarty, S. Mirzaei, T. Tuytelaars, and H. Vanhamme. Who’s speaking? Audio-supervised classification of active speakers in video. In Proc. ACM Int. Conf. Multimodal Interaction, pages 1–5, 2015.
* [15] P. Chakravarty and T. Tuytelaars. Cross-modal supervision for learning active speaker detection in video. In Proc. European Conference on Computer Vision, pages 1–5, 2016.
* [16] K. Stefanov, J. Beskow, and G. Salvi, “Vision-based active speaker detection in multiparty interaction,” in Proceedings of GLU 2017 International Workshop on Grounding Language Understanding, 2017, pp. 47–51.
* [17] K. Fukushima. ”Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position”, Biological Cybernetics, vol. 36(4):193–202, 1980.
* [18] D. H. Hubel, and T. N. Wiesel. ”Receptive fields and functional architecture of monkey striate cortex”, The Journal of physiology, vol. 195 (1):215-243, 1968.
* [19] K. Stefanov, J. Beskow and G. Salvi, ”Self-Supervised Vision-Based Detection of the Active Speaker as Support for Socially-Aware Language Acquisition”, in IEEE Transactions on Cognitive and Developmental Systems, 2019.
* [20] J. Ren, Y. Hu, Y.-W. Tai, C. Wang, L. Xu, W. Sun, and Q. Yan, “Look, listen and learn - a multimodal LSTM for speaker identification”, in Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, 2016, pp. 3581–3587.
* [21] J. Cech, R. Mittal, A. Deleforge, J. Sanchez-Riera, X. Alameda-Pineda and R. Horaud, ”Active-speaker detection and localization with microphones and cameras embedded into a robotic head”, 2013 13th IEEE-RAS International Conference on Humanoid Robots (Humanoids), Atlanta, GA, 2013, pp. 203-210.
* [22] I. D. Gebru, X. Alameda-Pineda, R. Horaud and F. Forbes, ”Audio-visual speaker localization via weighted clustering”, 2014 IEEE International Workshop on Machine Learning for Signal Processing (MLSP), Reims, 2014, pp. 1-6.
* [23] K. Hoover, S. Chaudhuri, C. Pantofaru, I. Sturdy and M. Slaney, ”Using audio-visual information to understand speaker activity: Tracking active speakers on and off screen”, 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Calgary, AB, 2018, pp. 6558-6562.
* [24] H. Jegou, F. Perronnin, M. Douze, J. Sanchez, P. Perez, and C. Schmid. ”Aggregating local image descriptors into compact codes”, TPAMI, 2012.
* [25] J. Roth, S. Chaudhuri, O. Klejch, R. Marvin, A. C. Gallagher, L. Kaver, S. Ramaswamy, A. Stopczynski, C. Schmid, Z. Xi and C. Pantofaru, ”AVA-ActiveSpeaker: An Audio-Visual Dataset for Active Speaker Detection”, 2019. ArXiv, abs/1901.01342.
* [26] K. He, X. Zhang, S. Ren, and J. Sun, ”Deep residual learning for image recognition”, In Proceedings of the Conference on Computer Vision and Pattern Recognition, pp 770–778, 2016.
* [27] Q. Cao, L. Shen, W. Xie, O. Parkhi, A. Zisserman, ”VGGFace2: A Dataset for Recognising Faces across Pose and Age”, In Proceedings of the IEEE International Conference on Automatic Face & Gesture Recognition, 2018.
* [28] The Keras-VGGFace Package. Available at: https://pypi.org/project/keras-vggface/.
* [29] K. Chatfield, K. Simonyan, A. Vedaldi, and A. Zisserman, ”Return of the devil in the details: Delving deep into convolutional nets”. In Proceedings of the British Machine Vision Conference, 2014.
* [30] P. Mermelstein, ”Distance measures for speech recognition, psychological and instrumental,” in Pattern Recognition and Artificial Intelligence, 1976, C. H. Chen Ed., pp. 374–388, Academic, New York.
* [31] K. He and J. Sun, ”Convolutional neural networks at constrained time cost”, In CVPR, 2015.
* [32] S. Kankanamge, C. Fookes and S. Sridharan, ”Facial analysis in the wild with LSTM networks”, 2017 IEEE International Conference on Image Processing (ICIP), Beijing, 2017, pp. 1052-1056.
* [33] Z. Xu, S. Li and W. Deng, ”Learning temporal features using LSTM-CNN architecture for face anti-spoofing”, 2015 3rd IAPR Asian Conference on Pattern Recognition (ACPR), Kuala Lumpur, 2015, pp. 141-145.
* [34] M. Ursino, C. Cuppini, E. Magosso, A. Serino, and G. di Pellegrino. ”Multisensory integration in the superior colliculus: a neural network model”, Journal of Computational Neuroscience, 26(1):55–73, May 2008.
* [35] F. Chollet and others. Keras. https://keras.io, 2015.
* [36] D. P. Kingma and J. Ba, ”Adam: A method for stochastic optimization”, arXiv preprint arXiv:1412.6980, 2014.
* [37] S. Ioffe and C. Szegedy, ”Batch normalization: Accelerating deep network training by reducing internal covariate shift”, arXiv preprint arXiv:1502.03167, 2015.
* [38] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov. ”Dropout: A simple way to prevent neural networks from overfitting”, Journal of Machine Learning Research, 15:1929–1958, 2014.
|
2024-09-04T02:54:55.735176 | 2020-02-28T21:52:49 | 2003.00083 | {
"authors": "Heejong Bong, Wanshan Li, Shamindra Shrotriya, Alessandro Rinaldo",
"full_text_license": null,
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"provenance": "arxiv-papers-0000.json.gz:25953",
"submitter": "Heejong Bong",
"url": "https://arxiv.org/abs/2003.00083"
} | arxiv-papers | Nonparametric Estimation in the Dynamic Bradley-Terry Model
Heejong Bong* Wanshan Li* Shamindra Shrotriya* Alessandro Rinaldo
Department of Statistics and Data Science Carnegie Mellon University
###### Abstract
We propose a time-varying generalization of the Bradley-Terry model that
allows for nonparametric modeling of dynamic global rankings of distinct
teams. We develop a novel estimator that relies on kernel smoothing to pre-
process the pairwise comparisons over time and is applicable in sparse
settings where the Bradley-Terry may not be fit. We obtain necessary and
sufficient conditions for the existence and uniqueness of our estimator. We
also derive time-varying oracle bounds for both the estimation error and the
excess risk in the model-agnostic setting where the Bradley-Terry model is not
necessarily the true data generating process. We thoroughly test the practical
effectiveness of our model using both simulated and real world data and
suggest an efficient data-driven approach for bandwidth tuning.
## 1 Introduction and Prior Work
### 1.1 Pairwise Comparison Data and the Bradley-Terry Model
Pairwise comparison data are very common in daily life, especially in cases
where the goal is to rank several objects. Rather than directly ranking all
objects simultaneously, it is usually much easier and more efficient to first
obtain results of pairwise comparisons and then use them to derive a global
ranking across all individuals in a principled manner. Since the required
global rankings are not directly observable, developing a statistical
framework for the estimating rankings is a challenging unsupervised learning
problem. One such statistical model for deriving global rankings using
pairwise comparisons was presented in the classic paper (Bradley and Terry,,
1952), and thereafter commonly referred to as the Bradley-Terry model in the
literature. A similar model was also studied by Zermelo (Zermelo,, 1929). The
Bradley-Terry model is one of the most popular models to analyze paired
comparison data due to its interpretability and computational efficiency in
parameter estimation. The Bradley-Terry model along with its variants has been
studied and applied in various ranking applications across many domains. This
includes the ranking of sports teams (Masarotto and Varin,, 2012; Cattelan et
al.,, 2013; Fahrmeir and Tutz,, 1994), scientific journals (Stigler,, 1994;
Varin et al.,, 2016), and the quality of several brands (Agresti,, 2013;
Radlinski and Joachims,, 2007), to name a few.
In order to introduce the Bradley-Terry model, suppose that we have $N$
distinct teams, each with a positive score $s_{i}$,
$i\in[N]:=\\{1,\ldots,N\\}$, quantifying its propensity to be picked or win
over other items. The model postulates that the comparisons between different
pairs are independent and the results of the comparisons between a given pair,
say team $i$ and team $j$, are independent and identically distributed
Bernoulli random variables, with winning probability defined as
$\mathbb{P}\\!\left(i\ \text{ beats
}j\right)=\frac{s_{i}}{s_{i}+s_{j}},\>\forall\;i,j\in[N]$ (1)
A common way to parametrize the model is to set, for each $i$,
$s_{i}=\exp(\beta_{i})$, where $(\beta_{1},\ldots,\beta_{N})$ are real
parameters such that $\sum_{i\in[N]}\beta_{i}=0$ (this latter condition is to
ensure identifiability). In this case, equation (1) is usually expressed as
${\sf{logit}}(\mathbb{P}\\!\left(i\ \text{ beats
}j\right))=\beta_{i}-\beta_{j}$, where, for $x\in(0,1)$,
${\sf{logit}}(x):=\log\frac{x}{1-x}$.
### 1.2 The time-varying (dynamic) Bradley-Terry Model
In many applications it is very common to observe paired comparison data
spanning over multiple (discrete) time periods. A natural question of interest
is then to understand how the global rankings vary over time i.e. dynamically.
For example, in sports analytics the performance of teams often changes across
match rounds and thus explicitly incorporating the time-varying dependence
into the model is crucial. In particular the paper (Fahrmeir and Tutz,, 1994)
considers a state-space generalization of the Bradley-Terry model to analyze
sports tournaments data. In a similar manner, bayesian frameworks for the
dynamic Bradley-Terry model are studied further in (Glickman,, 1993; Glickman
and Stern,, 1998; Lopez et al.,, 2018). Such dynamic ranking analysis is
becoming increasingly important because of the rapid growth of openly
available time-dependent paired comparison data.
Our main focus in this paper is to tackle the problem of generalizing the
Bradley-Terry model to the time-varying setting with statistical guarantees.
Our approach estimates the changes in the Bradley-Terry model parameters over
time nonparametrically. This enables the derivation of time-varying global
dynamic rankings with minimal parametric assumptions. Unlike previous time-
varying Bradley-Terry estimation approaches, we seek to establish guarantees
in the model-agnostic setting where the Bradley-Terry model is not the true
data generating process. This is in contrast to more assumption-heavy
parametric frequentist dynamic Bradley-Terry models including (Cattelan et
al.,, 2013). Our method is also computationally efficient compared to some
state-space methods including but not limited to (Fahrmeir and Tutz,, 1994;
Glickman and Stern,, 1998; Maystre et al.,, 2019).
## 2 Time-varying Bradley-Terry Model
### 2.1 Model Setup
In our time-varying generalization of the original Bradley-Terry model we
assume that $N$ distinct teams play against each other at possibly different
times, over a given time interval which, without loss of generality, is taken
to be $[0,1]$. The result of a game between team $i$ and team $j$ at time $t$
is determined by the timestamped winning probability
$p_{ij}(t):=\mathbb{P}\\!\left(i\text{ defeats }j\text{ at time }t\right)$
which we assume arises from a distinct Bradley-Terry model, one for each time
point. In detail, for each $i$, $j$, and $t$
${\sf{logit}}(p_{ij}(t))=\beta_{i}(t)-\beta_{j}(t),\>\forall\;i,j\in[N],t\in[0,1]$
(2)
where
$\bm{\beta}(t)=\text{vec}(\beta_{1}(t),\beta_{2}(t),\dots,\beta_{N}(t))\in\mathbb{R}^{N}$
is an unknown vector such that $\sum_{i}\beta_{i}(t)=0$.
We observe the outcomes of $M$ timestamped pairwise matches among the $N$
teams $\\{(i_{m},j_{m},t_{m}):m\in[M]\\}$. Here $(i_{m},j_{m},t_{m})$
indicates that team $i_{m}$ and team $j_{m}$ played at time $t_{m}$, where
$t_{1}\leq t_{2}\leq\ldots\leq t_{M}$. The result of the $m$-th match can be
expressed in a $N\times N$ data matrix $X^{(m)}$ as follows:
$X_{ij}^{(m)}=\begin{cases}\mathbf{1}(i\text{ defeats }j\text{ at }t_{m})\sim\text{Bernoulli}(p_{ij}(t_{m}))&\text{for }i=i_{m}\text{ and }j=j_{m}\\ 1-X_{ji}^{(m)}&\text{for }i=j_{m}\text{ and }j=i_{m}\\ 0&\text{elsewhere}\end{cases}$ (3)
Our goal is to estimate the underlying parameters $\bm{\beta}(t)$ where
$t\in[0,1]$, and then derive the corresponding global ranking of teams. In
order to make the estimation problem tractable we assume that the parameters
$\bm{\beta}(t)$ vary smoothly as a function of time $t\in[0,1]$. It is worth
noting that the naive strategy of estimating the model parameters separately
at each observed discrete time point on the original data will suffer from two
major drawbacks: (i) it will in general not guarantee smoothly varying
estimates and, perhaps more importantly, (ii) computing the maximum likelihood
estimator (MLE) of the parameters in the static Bradley-Terry model may be
infeasible due to sparsity in the data (e.g., at each time point we may
observe only one match), as we discuss below in Section 4. To overcome these issues we
propose a nonparametric methodology which involves kernel-smoothing the
observed data over time.
### 2.2 Estimation
Our approach in estimating time-varying global rankings is described in the
following three step procedure:
1. 1.
Data pre-processing: Kernel smooth the pairwise comparison data across all
time periods. This is used to obtain the smoothed pairwise data at each time
$t$:
$\tilde{X}(t)=\sum_{m=1}^{M}W_{h}(t_{m},t)X^{(m)},$ (4)
where $W_{h}$ is an appropriate kernel function with bandwidth $h$, which
controls the extent of data smoothing. The higher the value of $h$ is, the
smoother $\tilde{X}_{ij}(t)$ is over time.
2. 2.
Model fitting: Fit the regular Bradley-Terry model on the smoothed data
$\tilde{X}_{ij}(t)$. The model estimates the performance of each team at time
$t$ using the estimated score vector
$\widehat{\bm{\beta}}(t)={\arg\min}_{\bm{\beta}:\sum_{i}\beta_{i}(t)=0}\widehat{\mathcal{R}}(\bm{\beta};t)$
(5)
minimizing the negative log-likelihood risk
$\widehat{\mathcal{R}}(\bm{\beta};t)=\sum_{i,j:i\neq
j}\frac{\tilde{X}_{ij}(t)\log(1+\exp(\beta_{j}(t)-\beta_{i}(t)))}{\sum_{i,j:i\neq
j}\tilde{X}_{ij}(t)}$ (6)
3. 3.
Derive global rankings: Rank each team at time $t$ by its score from
$\hat{\bm{\beta}}(t)$.
We observe that if $t=t_{1}=t_{2}=\dots=t_{M}$ then step 1 reduces to the
original (static) Bradley-Terry model. In this case,
$\tilde{X}_{ij}(t)=W_{h}(t,t)\sum_{m=1}^{M}\mathbf{1}(i_{m}=i,j_{m}=j)=W_{h}(t,t)X_{ij}$ (7)
where $X_{ij}=\\#\\{i\text{ defeated }j\\}$. Thus, fitting the model on
$\tilde{X}(t)$ in step 2 is equivalent to the original method on data $X$. In
this sense our proposed ranking approach represents a time-varying
generalization of the original Bradley-Terry model.
This data pre-processing is a critical step in our method and is similar to
the approach adopted in (Zhou et al.,, 2010) where it was used in the context
of estimating smoothed time varying undirected graphs. This approach has two
main advantages. First, applying kernel smoothing on the input pairwise
comparison data enables borrowing of information across timepoints. In sparse
settings, this reduces the data requirement at each time point to meet
necessary and sufficient conditions required for the Bradley-Terry model to
have a unique solution as detailed in Section 4. Second, kernel smoothing is
computationally efficient in a single dimension and is readily available in
open source scientific libraries.
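To make the procedure concrete, the following is a minimal sketch of steps 1 and 2, using a Gaussian kernel for (4) and the classic minorization-maximization (MM) updates for the Bradley-Terry likelihood in (5)–(6); it assumes the smoothed data satisfy the existence condition discussed in Section 4.

```python
import numpy as np

def smooth_data(X, times, t, h):
    """Step 1, eq. (4): Gaussian-kernel smoothing of the stacked
    outcome matrices X (shape (M, N, N)) at query time t."""
    w = np.exp(-0.5 * ((times - t) / h) ** 2) / (h * np.sqrt(2 * np.pi))
    return np.tensordot(w, X, axes=1)            # \tilde{X}(t)

def fit_bradley_terry(X_tilde, iters=1000, tol=1e-10):
    """Step 2, eq. (5): MM updates for the Bradley-Terry scores on
    the smoothed counts; assumes Condition 4.1 of Section 4 holds."""
    n = X_tilde + X_tilde.T                      # smoothed match counts
    s = np.ones(X_tilde.shape[0])                # scores s_i = exp(beta_i)
    for _ in range(iters):
        denom = (n / (s[:, None] + s[None, :])).sum(axis=1)
        s_new = X_tilde.sum(axis=1) / denom      # classic MM step
        s_new /= np.exp(np.log(s_new).mean())    # enforce sum(beta) = 0
        if np.max(np.abs(s_new - s)) < tol:
            s = s_new
            break
        s = s_new
    return np.log(s)                             # \hat{\beta}(t)
```

Step 3 then amounts to sorting the entries of $\hat{\bm{\beta}}(t)$ to obtain the global ranking at time $t$.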
## 3 Our Contributions
Our main contributions in this paper are summarized as follows:
1. 1.
We obtain necessary and sufficient conditions for the existence and uniqueness of our time-varying estimator $\widehat{\bm{\beta}}(t)$ for all $t\in[0,1]$ simultaneously. See Section 4.
2. 2.
We extend the results of Simons and Yao, (1999) and obtain statistical
guarantees for our proposed method in the form of convergence results of the
estimated model parameters uniformly over all times. We express such
guarantees in the form of oracle inequalities in the model-agnostic setting
where the Bradley-Terry model is not necessarily the true data generating
process. See Section 5.
3. 3.
We successfully apply our estimator, with a data-driven hyperparameter tuned by LOOCV, to simulations and to real-life applications, including a comparison to 5 seasons of NFL ELO ratings. See Section 6 and Section 7.
## 4 Existence and uniqueness of solution
The existence and uniqueness of solutions for model (5) is not guaranteed in
general. This is an innate property of the original Bradley-Terry model
(Bradley and Terry,, 1952). As pointed out in (Ford,, 1957) existence of the
MLE for the Bradley-Terry model parameters demands a sufficient amount of
pairwise comparison data so that there is enough information on the relative performance between any pair of teams for parameter estimation purposes.
For example, if there is a team which has never been compared to the others,
there is no information which the model can exploit to assign a score for the
team. As such its derived rank could be arbitrary. In addition if there are
several teams which have never outperformed the others then the Bradley-Terry
model would assign negative infinity for the performance of these teams. It
would not be possible to compare amongst them for global ranking purposes. In
all such cases, the model parameters are not estimable.
Ford, (1957) derived the necessary and sufficient condition for the existence
and uniqueness of the MLE in the original Bradley-Terry model. Below we show
how this condition can also be adapted to guarantee the existence and
uniqueness of the solution in our time-varying Bradley-Terry model. The
condition can be stated for each time $t$ in terms of the corresponding
kernel-smoothed data $\tilde{X}(t)$ as follows:
###### Condition 4.1.
In every possible partition of the teams into two nonempty subsets, some team
$i$ in the first set and some team $j$ in the second set satisfy
$\tilde{X}_{ij}(t)>0$. Or equivalently, for each ordered pair $(i,j)$, there
exists a sequence of indices $i_{0}=i,i_{1},\dots,i_{n}=j$ such that
$\tilde{X}_{i_{k-1}i_{k}}(t)>0$ for $k=1,\dots,n$.
###### Remark 1.
If we regard $[|\tilde{X}(t)|_{ij}]$ as the adjacency matrix of a (weighted)
directed graph, then Condition 4.1 is equivalent to the strong connectivity of the
graph.
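In view of Remark 1, Condition 4.1 can be checked with a strong-connectivity test; a small sketch using SciPy's graph routines:

```python
import numpy as np
from scipy.sparse.csgraph import connected_components

def condition_4_1_holds(X_tilde):
    """Condition 4.1 at a given time: the directed graph with
    adjacency 1{X_tilde_ij > 0} must be strongly connected."""
    adj = (np.asarray(X_tilde) > 0).astype(int)
    n_comp, _ = connected_components(adj, directed=True,
                                     connection="strong")
    return n_comp == 1
```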
Under Condition 4.1 we obtain the following existence and uniqueness theorem
on the solution set of the time-varying Bradley-Terry model.
###### Theorem 4.1.
If the smoothed data $\tilde{X}(t)$ satisfies Condition 4.1, then the solution
of (5) uniquely exists at time $t$.
Hence, in the proposed time-varying Bradley-Terry model we do not require the
strong conditions of (Ford,, 1957) to be met at each time point, but simply
require the aggregated conditions in Theorem 4.1 to hold. This is a
significant weakening of the data requirement. For example, even if one team
did not play any game in a match round – a situation that would preclude the
MLE in the standard Bradley-Terry model – it is still possible to assign a
rank to this team in their missing round, as long a game is recorded in
another round (with at least one win and one loss). In this sense, the kernel-
smoothing of the data in our time-varying Bradley-Terry model reduces the
required sample complexity for a unique solution.
## 5 Statistical Properties of the Time-varying Bradley-Terry Model
### 5.1 Preliminaries
Existing results (Simons and Yao,, 1999; Negahban et al.,, 2017) demonstrate
the consistency of the estimated static Bradley-Terry scores provided that the
data were generated from the Bradley-Terry model. However this assumption may
be too restrictive for data generation processes in real world applications.
In the rest of this section, we will consider model-agnostic time-varying
settings where the Bradley-Terry model is not necessarily the true pairwise
data generating model. In order to investigate the statistical properties of
the proposed estimator, we impose the following relatively mild assumptions.
###### Assumption 5.1.
Each pair of teams $(i,j)$ play $T^{(i,j)}$ times at time points
$\\{t_{k}^{(i,j)},k=1,2,\dots,T^{(i,j)}\\}$, where each $T^{(i,j)}>0$ satisfy
the following conditions, for fixed constants $T>0$ and $0<D_{m}\leq 1\leq
D_{M}$:
1. 1.
$T^{(i,j)}>T$ ;
2. 2.
for every interval $(a,b)\subset[0,1]$,
$\lfloor D_{m}(b-a)T^{(i,j)}\rfloor\leq\lvert\{k:t_{k}^{(i,j)}\in(a,b)\}\rvert\leq\lceil D_{M}(b-a)T^{(i,j)}\rceil.$ (8)
We remark that the second condition further implies that
$t_{1}^{(i,j)}\leq\frac{1}{D_{m}T^{(i,j)}},\quad t_{T^{(i,j)}}^{(i,j)}\geq 1-\frac{1}{D_{m}T^{(i,j)}},\quad\frac{1}{D_{M}T^{(i,j)}}\leq t_{k+1}^{(i,j)}-t_{k}^{(i,j)}\leq\frac{1}{D_{m}T^{(i,j)}}$ (9)
for $k=1,2,\dots,T^{(i,j)}-1$.
Assumption 5.1 allows for different team pairs to play against each other a different
number of times and for the game times to be spaced irregularly, though in a
controlled manner. To enable statistical analyses on time-varying quantities,
we further require that the winning probabilities satisfy a minimal degree of
smoothness and that their rate of decay is controlled.
###### Assumption 5.2.
For any $i,j$, the function $t\in[0,1]\mapsto p_{ij}(t)$ is Lipschitz with universal constant $L_{p}$ and uniformly bounded below by $p_{\min}>0$, which may depend on $N$ and $T$.
The quantity $p_{\min}$ need not be bounded away from $0$ as a function of $T$
and $N$. However, in order to guarantee estimability of the model parameters
in time-varying settings, we will need to control the rate at which it is
allowed to vanish. See Theorem 5.1 below.
Finally, we assume that the kernel used to smooth over time satisfies the
following regularity conditions, which are quite standard in the nonparametric
literature.
###### Assumption 5.3.
The kernel function $W:(-\infty,\infty)\rightarrow(0,\infty)$ is a symmetric
function such that
$\int_{-\infty}^{\infty}W(x)\,dx=1,\quad\int_{-\infty}^{\infty}|x|W(x)\,dx<\infty,\quad\mathcal{V}(W)<\infty,\quad\mathcal{V}(|\cdot|W)<\infty$ (10)
where $\mathcal{V}(f(x))$ is the total variation of a function $f$. For each
$s,t\in[0,1]$ we further write
$W_{h}(s,t)=\frac{1}{h}W\left(\frac{s-t}{h}\right).$ (11)
It is easy to see that these conditions imply
$\|W\|_{\infty}=\sup_{x}W(x)<\infty.$ (12)
Thus, without loss of generality, we assume that $\|W\|_{\infty}\leq 1$; the
general case can be handled by modifying the constants accordingly. The use of
kernels satisfying the above assumptions is standard in nonparametric problems
involving Hölder continuous functions of order $1$, such as the winning
probabilities function of Assumption 5.2.
### 5.2 Existence and uniqueness of solutions
Simons and Yao, (1999) showed that the necessary and sufficient condition for
the existence and uniqueness of the MLE in the original Bradley-Terry model is
satisfied asymptotically almost surely under minimal assumptions. Below, we
show that this type of result can be extended to our more general time-varying
settings.
###### Theorem 5.1.
$\mathbb{P}(\text{Condition 4.1 is satisfied at every }t)\geq 1-4N\exp\left(-\frac{NT}{2}p_{\text{min}}\right).$ (13)
###### Remark 2.
As we remarked above, $p_{\min}$ need not be bounded away from zero, but is
allowed to vanish slowly enough in relation to $N$ and $T$: by Theorem 5.1,
Condition 4.1 is fulfilled with probability tending to one as long as
$\frac{1}{p_{\text{min}}}=o\left(\frac{NT}{2\log N}\right)$.
### 5.3 Oracle Properties
In our general agnostic time-varying setting the Bradley-Terry model is not
assumed to be the true data generating process. It follows that, for each $t$,
there is no true parameter to which to compare the estimator defined in (5).
Instead, we may compare it to the projection parameter
$\bm{\beta}^{*}(t)\in\mathbb{R}^{N}$, which is the best approximation to the
winning probabilities at time $t$ using the dynamic Bradley-Terry model; see
(2). In detail, the oracle parameter is defined as
$\bm{\beta}^{*}(t)={\arg\min}_{\bm{\beta}:\sum_{i}\beta_{i}(t)=0}\mathcal{R}(\bm{\beta};t)$
(14)
where
$\mathcal{R}(\bm{\beta};t)=\frac{1}{\binom{N}{2}}\sum_{i,j:i\neq
j}p_{ij}(t)\log(1+\exp(\beta_{j}(t)-\beta_{i}(t)))$ (15)
We note that, when the winning probabilities obey a Bradley-Terry model, the
projection parameter corresponds to the true model parameters.
Next, for each fixed time $t\in(0,1)$ and $h>0$, we introduce two quantities,
namely $M(t)$ and $\delta_{h}(t)$, that can be thought of as condition
numbers of sorts, directly affecting both the estimation and prediction
accuracy of the proposed estimator. In detail, we set
$M(t)=\max_{i,j:i\neq j}\exp(\beta_{i}^{*}(t)-\beta_{j}^{*}(t))$
and
$\delta_{h}(t)=\max_{i}\sum_{j:j\neq
i}\left|\frac{\tilde{T}_{ij}(t)}{\tilde{T}_{i}(t)}-\frac{1}{N-1}\right|.$
where $\tilde{T}_{ij}(t)=\tilde{X}_{ij}(t)+\tilde{X}_{ji}(t)$ and
$\tilde{T}_{i}(t)=\sum_{j:j\neq i}\tilde{T}_{ij}(t)$. The ratio $M(t)$
quantifies the maximal discrepancy in winning scores among all possible pairs
at time $t$, and, as shown in Simons and Yao, (1999), determines the
consistency rate of the MLE in the traditional Bradley-Terry model (see also
the comments following Remark 3 below). The quantity $\delta_{h}(t)$ is instead a
measure of regularity in how evenly the teams play against each other. In
particular, $\delta_{h}(t)=0$ for all $t$ and $h$ when each pair of teams
plays a constant number of matches at each time. Since we allow for the
possibility of a different number of matches between teams and across time,
it becomes necessary to quantify such degree of design irregularity.
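Both quantities are directly computable from the data and the projection parameter. The sketch below assumes an $(N,N)$ NumPy array `X_tilde` holding the kernel-smoothed win counts $\tilde{X}_{ij}(t)$ at a fixed $t$, with every team having played at least once (the names and array layout are illustrative):

```python
import numpy as np

def design_irregularity(X_tilde):
    # delta_h(t): max over i of the total deviation of T_ij / T_i from the
    # balanced-design value 1/(N-1); zero for a perfectly even schedule.
    N = X_tilde.shape[0]
    T_pair = X_tilde + X_tilde.T                # \tilde{T}_{ij}(t)
    np.fill_diagonal(T_pair, 0.0)
    T_row = T_pair.sum(axis=1, keepdims=True)   # \tilde{T}_{i}(t), assumed > 0
    dev = np.abs(T_pair / T_row - 1.0 / (N - 1))
    np.fill_diagonal(dev, 0.0)                  # exclude j = i from the sum
    return float(dev.sum(axis=1).max())

def max_score_ratio(beta_star):
    # M(t) = max_{i != j} exp(beta_i^*(t) - beta_j^*(t)).
    return float(np.exp(beta_star.max() - beta_star.min()))
```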
In order to verify the quality of the proposed estimator
$\widehat{\bm{\beta}}(t)$, we will consider a high-probability oracle bound on
the estimation error
$\lVert\widehat{\bm{\beta}}(t)-\bm{\beta}^{*}(t)\rVert_{\infty}$. In the
following theorems, we present both a point-wise and a uniform-in-$t$ version
of this bound in the asymptotic regime of $T,N\rightarrow\infty$ and under
only minimal assumptions on the ground-truth winning probabilities
$p_{ij}(t)$’s.
###### Theorem 5.2.
Let $\gamma=\gamma(T,N,p_{\min})$ be the probability that Condition 4.1 fails and
suppose that the kernel bandwidth is chosen as
$h=\max\left\\{\frac{1}{T^{1+\eta}},\left(\frac{36(1-p_{\min})\log N}{C_{s}^{2}D_{m}(N-1)T}\right)^{\frac{1}{3}}\right\\},$
for any $\eta>0$ and some universal constant $C_{s}$ depending only on
$D_{m}$, $D_{M}$, and $W$. Then, for each fixed time $t\in(0,1)$ and
sufficiently large $N$ and $T$,
$\|\hat{\bm{\beta}}(t)-\bm{\beta}^{*}(t)\|_{\infty}\leq 48M(t)\left(\delta_{h}(t)+C_{s}h\right)$ (16)
with probability at least $1-\frac{2}{N}-\gamma$ as long as the right hand
side is smaller than $\frac{1}{3}$.
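For illustration, the bandwidth prescribed by Theorem 5.2 can be evaluated directly. In the sketch below, the universal constant $C_{s}$ (left unspecified by the theory) and the exponent $\eta$ are user-supplied placeholders, not values from the paper:

```python
import numpy as np

def theorem_5_2_bandwidth(N, T, p_min, D_m, C_s=1.0, eta=0.1):
    # h = max{ T^{-(1+eta)}, (36 (1-p_min) log N / (C_s^2 D_m (N-1) T))^{1/3} }.
    h_smooth = (36 * (1 - p_min) * np.log(N)
                / (C_s ** 2 * D_m * (N - 1) * T)) ** (1 / 3)
    return max(1.0 / T ** (1 + eta), h_smooth)
```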
Next, we strengthen our previous result, which is valid for each fixed time
$t$, to a uniform guarantee over the entire time course.
###### Theorem 5.3.
Let $\gamma=\gamma(T,N,p_{\min})$ be the probability that Condition 4.1 fails and
suppose that the kernel bandwidth is chosen as
$h=\max\left\\{\frac{1}{T^{1+\eta}},\left(\frac{36(1-p_{\min})\log(NT^{3+3\eta})}{C_{s}^{2}D_{m}(N-1)T}\right)^{\frac{1}{3}}\right\\}$
and that $W$ is $L_{W}$-Lipschitz. Then, for sufficiently large $N$ and $T$,
$\sup_{t\in[0,1]}\|\hat{\bm{\beta}}(t)-\bm{\beta}^{*}(t)\|_{\infty}\leq 48\sup_{t\in[0,1]}M(t)\left(\delta_{h}(t)+C_{s}h\right)$ (17)
with probability at least $1-\frac{2h^{3}}{N}-\gamma$ as long as the right
hand side is smaller than $\frac{1}{3}$.
###### Remark 3.
The rate of point-wise convergence for the estimation error implied by the
previous result is
$M(t)\left(\delta_{h}(t)+\max\left\\{\frac{1}{T^{1+\eta}},\left(\sqrt{\frac{\log N}{NT}}\right)^{\frac{2}{3}}\right\\}\right),$ (18)
while the rate for uniform convergence is
$M(t)\left(\delta_{h}(t)+\max\left\\{\frac{1}{T^{\frac{1}{2}+\eta}},\left(\sqrt{\frac{\log(NT^{1+\eta})}{NT}}\right)^{\frac{2}{3}}\right\\}\right).$ (19)
Importantly, as we can see from the previous results, the proposed time-varying
estimator $\hat{\bm{\beta}}(t)$ is consistent only provided that the design
regularity parameter $\delta_{h}(t)$ goes to zero. Of course, if all teams
play each other a constant number of times, then $\delta_{h}(t)=0$
automatically. In general, however, the impact of the design on the estimation
accuracy needs to be assessed on a case-by-case basis.
The rate (18) should be compared with the convergence rate to the true
parameters under the static Bradley-Terry model, which Simons and Yao, (1999)
show to be $O_{p}\left(\max_{i,j:i\neq j}\exp(\beta_{i}^{*}-\beta_{j}^{*})\sqrt{\log N/(NT)}\right)$. Thus,
not surprisingly, in the more challenging dynamic settings with smoothly
varying winning probabilities the estimation accuracy decreases. The exponent
of $\frac{2}{3}$ in the rate (18) matches the familiar rate for estimating
Hölder continuous function of order $1$.
From (18) and (19) we observe that the desired oracle property for the estimated
parameters requires rate constraints on $M(t)$. These constraints appear to be
strong assumptions without a direct connection to the $p_{ij}(t)$'s in our
model-agnostic setting. Instead, we circumvent this issue by introducing a more
interpretable condition number $K$ (equivalently, $p_{\min}$), dependent on $N$ and $T$, given by
$K=\exp\left(\frac{1}{p_{\min}}\right)$
and proving that, for each fixed time $t\in(0,1)$, our desired oracle property
follows from bounding $M(t)$ in terms of $K$.
###### Theorem 5.4.
Under the conditions of Theorem 5.2,
$\|\hat{\bm{\beta}}(t)-\bm{\beta}^{*}(t)\|_{\infty}\leq 72K\left(\delta_{h}(t)+C_{s}h\right)$ (20)
and under the conditions of Theorem 5.3,
$\sup_{t\in[0,1]}\|\hat{\bm{\beta}}(t)-\bm{\beta}^{*}(t)\|_{\infty}\leq 72K\sup_{t\in[0,1]}\left(\delta_{h}(t)+C_{s}h\right)$ (21)
with probability at least $1-\frac{2}{N}-\gamma$.
###### Remark 4.
We note that assuming $p_{\text{min}}$ to be bounded away from $0$ ensures
$\sup_{t\in[0,1]}\|\hat{\bm{\beta}}(t)-\bm{\beta}^{*}(t)\|_{\infty}\to 0$,
with high probability. This assumption means that no team uniformly dominates,
or is uniformly dominated by, the others (since it implies that
$1-p_{\text{min}}$ is bounded away from $1$). This is a reasonable assumption
in real-world data such as sports match histories, where teams are screened to be competitive with each
as sports match histories where teams are screened to be competitive with each
other. Therefore it is reasonable to only consider matches between teams that
do not have vastly different skills, which is reflected in winning
probabilities that are bounded away from $\\{0,1\\}$.
In summary, our proposed method achieves high-probability oracle bounds on the
estimation error in our general model-agnostic time-varying setting. We
provide the proofs for the stated theorems in Appendix Section 9.1.
## 6 Experiments
We compare our method with some other methods on synthetic data (code
available at https://github.com/shamindras/bttv-aistats2020). We consider both
the case where the pairwise comparison data are generated from the
Bradley-Terry model and the case where they come from a different, nonparametric model.
### 6.1 Bradley-Terry Model as the True Model
First we conduct simulation experiments in which the Bradley-Terry model is
the true model. Given the number of teams $N$ and the number of time points
$M$, the synthetic data generation process is as follows:
1. For $i\in[N]$, simulate $\bm{\beta}_{i}\in\mathbb{R}^{M}$ as described below;
2. For $1\leq i<j\leq N$ and $t\in[M]$, set $n_{ij}(t)$ and simulate $X(t)$ by
$x_{ij}(t)\sim\text{Binom}\big(n_{ij}(t),1/(1+\exp[\beta_{j}(t)-\beta_{i}(t)])\big)$
and $x_{ji}(t)=n_{ij}(t)-x_{ij}(t)$.
For each $i\in[N]$, we generate $\bm{\beta}_{i}\in\mathbb{R}^{M}$ from a
Gaussian process ${\rm GP}({\mu}_{i}(t),\sigma_{i}(t,s))$ as follows:
1. Set the values of the mean process ${\mu}_{i}(t)$ for all $t\in[M]$ to get the
mean vector $\bm{\mu}_{i}=(\mu_{i}(1),\ldots,\mu_{i}(M))$;
2. Set the values of the covariance process $\sigma_{i}(t,s)$ at $(s,t)\in[M]^{2}$
to derive $\Sigma_{i}\in\mathbb{R}^{M\times M}$;
3. Generate a sample $\bm{\beta}_{i}$ from ${\rm Normal}(\bm{\mu}_{i},\Sigma_{i})$.
In our experiment, we generate the parameter $\bm{\beta}$ via a Gaussian
process for $N=50$, $M=50$ and $n_{ij}(t)=1$ for all $t$. See appendix for
full details. We compare the true $\bm{\beta}$, the win rate, and
$\hat{\bm{\beta}}$ by different methods in Fig. 1.
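A minimal sketch of step 2 of this data-generating process, assuming an $(N,M)$ array `beta` has already been drawn as above (names and shapes are illustrative):

```python
import numpy as np

def simulate_matches(beta, n=1, seed=0):
    # Draw x_ij(t) ~ Binom(n_ij(t), 1 / (1 + exp(beta_j(t) - beta_i(t)))) and
    # set x_ji(t) = n_ij(t) - x_ij(t); here n_ij(t) = n for all pairs and times.
    rng = np.random.default_rng(seed)
    N, M = beta.shape
    X = np.zeros((M, N, N), dtype=int)
    for t in range(M):
        for i in range(N):
            for j in range(i + 1, N):
                p_ij = 1.0 / (1.0 + np.exp(beta[j, t] - beta[i, t]))
                X[t, i, j] = rng.binomial(n, p_ij)
                X[t, j, i] = n - X[t, i, j]
    return X
```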
Figure 1: Comparison of $\bm{\beta}$ and different estimators. First row: true
$\bm{\beta}$ (left), $\hat{\bm{\beta}}$ with our dynamic BT (right); second
row: win rate (left), original BT (right).
Estimator | Rank Diff | LOO Prob | LOO nll
---|---|---|---
Win Rate | 3.75 | 0.44 | -
Original BT | 3.75 | 0.37 | 0.56
Dynamic BT | 2.29 | 0.37 | 0.55
Table 1: Comparison of Different estimators with results based on 20 repeats.
Rank Diff. is the average absolute difference between the estimated and true
rankings of teams. LOO Prob means average leave-one-out prediction error of
win/loss. LOO nll means average leave-one-out negative log-likelihood.
We use LOOCV to select the kernel parameter $h$; the CV curve can be found in
Section 9.3.1 of the Appendix. In this relatively sparse simulated data we see
that our dynamic Bradley-Terry estimator $\hat{\bm{\beta}}$ recovers the
comprehensive global ranking of each team in line with the original
Bradley-Terry model. We also observe that, thanks to the kernel smoothing, our
estimator has relatively more stable paths over time.
Table 1 compares the three estimators across key metrics. The results are
averaged over 20 repeats. As expected, in this sparse data setting, our
dynamic Bradley-Terry method performs better than the original Bradley-Terry
model.
### 6.2 Model-Agnostic Setting
In our second experiment we adopt a model-agnostic setting where we assume
that Bradley-Terry model is not necessarily the true data generating model, as
described in Section 5.1. With the same notation, for $1\leq i<j\leq N$ and
$t\in[M]$ we first simulate $p_{ij}(t)$, and then set $n_{ij}{(t)}$ and
simulate $X{(t)}$ by
$x_{ij}{(t)}\sim\text{Binom}\Big{(}n_{ij}{(t)},p_{ij}(t)\Big{)}$ and
$x_{ji}{(t)}=n_{ij}{(t)}-x_{ij}{(t)}$. To generate smoothly changing
$p_{ij}(t)$'s, we again use a Gaussian process. Specifically, we first generate
$p_{ij}(t)$ for $1\leq i<j\leq N$ and $t\in[M]$ from a Gaussian process. Then
we scale those $p_{ij}(t)$’s uniformly to make the values fall within a range
$[p_{l},p_{u}]$. In our experiment we set $M=50$, $N=50$,
$[p_{l},p_{u}]=[0.05,0.95]$ and $n_{ij}(t)=1$ for all $t$. The projection
parameter $\bm{\beta}^{*}$, the win rate, and $\widehat{\bm{\beta}}$ by
different methods are compared in Fig. 2. Again the kernel parameter $h$ in
our model is selected by LOOCV.
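The paper scales the raw Gaussian-process draws "uniformly" into $[p_{l},p_{u}]$; one natural reading is an affine rescaling, sketched below (the exact transformation is a design detail, so this is an assumption):

```python
import numpy as np

def scale_to_range(P_raw, p_l=0.05, p_u=0.95):
    # Affinely map raw GP draws P_raw[i, j, t] so all values land in [p_l, p_u];
    # the complementary probabilities are then p_ji(t) = 1 - p_ij(t).
    lo, hi = P_raw.min(), P_raw.max()
    return p_l + (P_raw - lo) * (p_u - p_l) / (hi - lo)
```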
Figure 2: Comparison of $\bm{\beta}^{*}$ and different estimators when the
underlying model is not the Bradley-Terry model. First row: projection
$\bm{\beta}^{*}$ (left), our dynamic BT (right); second row: win rate (left),
original BT (right).
Estimator | Rank Diff | LOO Prob | LOO nll
---|---|---|---
Win Rate | 10.68 | 0.49 | -
Original BT | 10.70 | 0.49 | 0.71
Dynamic BT | 5.48 | 0.49 | 0.68
Table 2: Comparison of different estimators when the underlying model is not
the Bradley-Terry model.
By comparing curves in Fig. 2, we note that our estimator $\hat{\bm{\beta}}$
recovers the global rankings better than the win rate and the original
Bradley-Terry model, and produces relatively more stable paths over time. The
same conclusion is confirmed by Table 2, which compares the three estimators
on the same metrics, averaged over 20 repetitions.
###### Remark 5.
In this sparse data setting where $n_{ij}(t)$ is fairly small, the original
Bradley-Terry model performs worse than our model for two reasons: (1) Condition
4.1 can fail to hold at some time points, whence the MLE does not exist;
(2) even when the MLE exists, it can fluctuate significantly over time because of the
relatively small sample size at each time point. As we show in Section 9.3.4
in the appendix, when $M=50$, $N=50$ and $n_{ij}(t)=1$, the MLE exists with
fairly high frequency. Still, our model performs much better than the original
Bradley-Terry model.
###### Remark 6.
Since our method is aimed at obtaining accurate estimates of smoothly changing
beta/rankings with strong prediction power, it may fail to capture some
changes in rankings, especially when these changes are relatively small (as in
the present case). However, the win rate and original Bradley-Terry methods
perform much worse, as they appear to miss some true ranking changes while
returning many false change points.
Additional details about experiments, including running time efficiency in
simulated settings, can be found in the Appendix Section 9.3.
rank | 2011 | 2012 | 2013 | 2014 | 2015
---|---|---|---|---|---
 | ELO | BT | ELO | BT | ELO | BT | ELO | BT | ELO | BT
1 | GB* | GB* | NE† | HOU† | SEA* | SEA* | SEA† | DEN† | SEA† | CAR†
2 | NE† | SF† | DEN† | ATL† | SF† | DEN† | NE† | ARI† | CAR† | DEN†
3 | NO* | NO* | GB† | SF† | NE† | NO† | DEN† | NE† | ARI† | NE†
4 | PIT† | NE† | SF† | CHI | DEN† | KC† | GB† | SEA† | KC† | CIN†
5 | BAL† | DET† | ATL† | GB† | CAR† | SF† | DAL* | DAL* | DEN† | ARI†
6 | SF† | BAL† | SEA† | NE† | CIN† | NE† | PIT | GB† | NE† | GB†
7 | ATL† | PIT† | NYG | DEN† | NO† | IND† | BAL | PHI | PIT† | MIN†
8 | PHI | HOU† | CIN | SEA† | ARI† | CAR† | IND | SD | CIN† | KC†
9 | SD† | CHI | BAL* | BAL* | IND† | ARI† | ARI† | DET | GB† | PIT†
10 | HOU† | ATL† | HOU† | IND | SD† | CIN† | CIN | KC | MIN† | SEA†
Av. Diff. | 4.2 | 5.0 | 3.5 | 4.3 | 3.4
Table 3: BT within season vs. ELO NFL top 10 rankings. *: perfect match,
†: top-10 match. Our dynamic BT model is fitted on 16 rounds of each
season, and the ranking of a season is based on the ranking at the last round.
## 7 Application - NFL Data
In order to test our model in practical settings we consider ranking National
Football League (NFL) teams over multiple seasons. Specifically we source 5
seasons of openly available NFL data from 2011-2015 (inclusive) using the
nflWAR package (Yurko et al., 2018). Each NFL season comprises $N=32$
teams playing $M=16$ games each over the season. This means that at each point
in time $t$ the pairwise comparison matrix based on scores across all 32 teams
is sparsely populated with only 16 entries. We fit our time-varying Bradley
Terry estimator over all 16 rounds in the season using a standard Gaussian
kernel and tune $h$ using the LOOCV approach described in Section 9.2. In
order to gauge whether the rankings produced by our model are reasonable we
compare our season-ending rankings (fit over all games played in that season)
with the relevant openly available NFL ELO ratings (Paine, 2015). The top 10
season-ending rankings from each method across NFL seasons 2011-2015 are
summarized in Table 3.
Based on Table 3 we observe that we roughly match between 6 and 10 of the top
10 ELO teams consistently over all 5 seasons. There is often misalignment in
the specific ranking values across the two methods. We note that, unlike
our method, the NFL ELO rankings use not only pairwise match data but also additional
features, including an adjustment for margin of victory. This demonstrates an
advantage of our model in only requiring the minimal time-varying pairwise
match data and smoothness assumptions to deliver comparable results to this
more feature rich ELO ranking method. Furthermore, since our model aggregates
data across time it can provide a reasonable minimalist ranking benchmark in
modern sparse time-varying data settings with limited “expert knowledge” e.g.
e-sports.
## 8 Conclusion
We propose a time-varying generalization of the Bradley-Terry model that
captures temporal dependencies in a nonparametric fashion. This enables the
modeling of dynamic global rankings of distinct teams in settings in which the
parameters of the ordinary Bradley-Terry model would not be estimable.
From a theoretical perspective, we adapt the results of Ford, (1957) to obtain
the necessary and sufficient condition for the existence and uniqueness of our
Bradley-Terry estimator in the time-varying setting. We extend the previous
analysis of Simons and Yao, (1999) to derive oracle inequalities for our
proposed method, for both the estimation error and the excess risk, under
smoothness conditions on the winning probabilities. The resulting rates of
consistency are of nonparametric type.
From an implementation perspective we provide a general strategy for tuning
the kernel bandwidth hyperparameter using an efficient data-driven approach
specific to our unsupervised time-varying setting. Finally, we test the
practical effectiveness of our estimator under both simulated and real world
settings. In the latter case we separately rank 5 consecutive seasons of open
National Football League (NFL) team data (Yurko et al., 2018) from 2011-2015.
Our NFL ranking results compare favourably to the well-accepted NFL ELO model
rankings (Paine, 2015). We thus view our nonparametric time-varying Bradley-
Terry estimator as a useful benchmarking tool for other feature-rich time-
varying ranking models since it simply relies on the minimalist time-varying
score information for modeling.
## References
* Agresti, (2013) Agresti, A. (2013). Categorical data analysis. Wiley Series in Probability and Statistics. Wiley-Interscience [John Wiley & Sons], Hoboken, NJ, third edition.
* Bradley and Terry, (1952) Bradley, R. A. and Terry, M. E. (1952). Rank analysis of incomplete block designs. I. The method of paired comparisons. Biometrika, 39:324–345.
* Cattelan et al., (2013) Cattelan, M., Varin, C., and Firth, D. (2013). Dynamic Bradley-Terry modelling of sports tournaments. J. R. Stat. Soc. Ser. C. Appl. Stat., 62(1):135–150.
* Fahrmeir and Tutz, (1994) Fahrmeir, L. and Tutz, G. (1994). Dynamic stochastic models for time-dependent ordered paired comparison systems. Journal of the American Statistical Association, 89(428):1438–1449.
* Ford, (1957) Ford, Jr., L. R. (1957). Solution of a ranking problem from binary comparisons. Amer. Math. Monthly, 64(8, part II):28–33.
* Glickman, (1993) Glickman, M. E. (1993). Paired comparison models with time-varying parameters. ProQuest LLC, Ann Arbor, MI. Thesis (Ph.D.)–Harvard University.
* Glickman and Stern, (1998) Glickman, M. E. and Stern, H. S. (1998). A state-space model for national football league scores. Journal of the American Statistical Association, 93(441):25–35.
* Lopez et al., (2018) Lopez, M. J., Matthews, G. J., and Baumer, B. S. (2018). How often does the best team win? A unified approach to understanding randomness in North American sport. Ann. Appl. Stat., 12(4):2483–2516.
* Masarotto and Varin, (2012) Masarotto, G. and Varin, C. (2012). The ranking lasso and its application to sport tournaments. Ann. Appl. Stat., 6(4):1949–1970.
* Maystre et al., (2019) Maystre, L., Kristof, V., and Grossglauser, M. (2019). Pairwise comparisons with flexible time-dynamics. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 1236–1246.
* Negahban et al., (2017) Negahban, S., Oh, S., and Shah, D. (2017). Rank centrality: ranking from pairwise comparisons. Oper. Res., 65(1):266–287.
* Paine, (2015) Paine, N. (2015). NFL Elo Ratings Are Back! https://fivethirtyeight.com/features/nfl-elo-ratings-are-back/.
* Radlinski and Joachims, (2007) Radlinski, F. and Joachims, T. (2007). Active exploration for learning rankings from clickthrough data. In Proceedings of the 13th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’07, pages 570–579, New York, NY, USA. ACM.
* Raghavan, (1988) Raghavan, P. (1988). Probabilistic construction of deterministic algorithms: approximating packing integer programs. Journal of Computer and System Sciences, 37(2):130–143.
* Simons and Yao, (1999) Simons, G. and Yao, Y.-C. (1999). Asymptotics when the number of parameters tends to infinity in the Bradley-Terry model for paired comparisons. Ann. Statist., 27(3):1041–1060.
* Stigler, (1994) Stigler, S. M. (1994). Citation patterns in the journals of statistics and probability. Statist. Sci., 9:94–108.
* Varin et al., (2016) Varin, C., Cattelan, M., and Firth, D. (2016). Statistical modelling of citation exchange between statistics journals. J. Roy. Statist. Soc. Ser. A, 179(1):1–63.
* von Luxburg, (2007) von Luxburg, U. (2007). A tutorial on spectral clustering. Stat. Comput., 17(4):395–416.
* Yurko et al., (2018) Yurko, R., Ventura, S., and Horowitz, M. (2018). nflwar: A reproducible method for offensive player evaluation in football. arXiv preprint arXiv:1802.00998.
* Zermelo, (1929) Zermelo, E. (1929). Die Berechnung der Turnier-Ergebnisse als ein Maximumproblem der Wahrscheinlichkeitsrechnung. Math. Z., 29(1):436–460.
* Zhou et al., (2010) Zhou, S., Lafferty, J., and Wasserman, L. (2010). Time varying undirected graphs. Mach. Learn., 80(2-3):295–319.
## 9 Appendices
### 9.1 Proofs of Theorems
#### 9.1.1 Proof of Theorem 4.1
##### Uniqueness of the solution
The elements of the Hessian for $\hat{\mathcal{R}}(\bm{\beta};t)$ in (6) are:
$\begin{split}H(\hat{\mathcal{R}})_{ii}=&\sum_{j:j\neq
i}(\tilde{X}_{ij}(t)+\tilde{X}_{ji}(t))\frac{\exp{\beta_{i}}\exp{\beta_{j}}}{(\exp{\beta_{i}}+\exp{\beta_{j}})^{2}}\\\
H(\hat{\mathcal{R}})_{ij}=&-(\tilde{X}_{ij}(t)+\tilde{X}_{ji}(t))\frac{\exp{\beta_{i}}\exp{\beta_{j}}}{(\exp{\beta_{i}}+\exp{\beta_{j}})^{2}}\end{split}$
(22)
Note that the Hessian has positive diagonal elements, non-positive off-
diagonal elements, and zero column sums. With Condition 4.1, this implies that
the Hessian can be regarded as a graph Laplacian for a connected graph.
Following the classical proof of the property of graph Laplacian (von
Luxburg,, 2007),
$v^{T}H(\hat{\mathcal{R}})v=\sum_{i<j}\frac{|\tilde{X}_{ij}(t)+\tilde{X}_{ji}(t)|}{2}(v_{i}-v_{j})^{2}\geq
0$ (23)
Then, Condition 4.1 guarantees that “=” is achieved if and only if
$v=c\mathbf{1}$. This proves the uniqueness up to constant shifts.
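Condition 4.1 is also easy to verify computationally. Assuming it is Ford's condition (in every partition of the teams into two nonempty sets, some team in each set beats some team in the other), it is equivalent to strong connectivity of the directed graph with an edge $i\to j$ whenever $\tilde{X}_{ij}(t)>0$; a minimal sketch of the check, with illustrative names:

```python
import numpy as np

def condition_4_1_holds(X_tilde):
    # Strong connectivity of the "beats" digraph: every team is reachable from
    # team 0, and team 0 is reachable from every team.
    N = X_tilde.shape[0]
    adj = X_tilde > 0

    def all_reachable_from_0(A):
        seen, stack = {0}, [0]
        while stack:
            u = stack.pop()
            for v in np.flatnonzero(A[u]):
                if int(v) not in seen:
                    seen.add(int(v))
                    stack.append(int(v))
        return len(seen) == N

    return all_reachable_from_0(adj) and all_reachable_from_0(adj.T)
```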
##### Existence of solution
Plugging in $\bm{\beta}=\mathbf{0}$, we get an upper bound for the minimum of the
loss function
$\hat{\mathcal{R}}^{\star}(t):=\hat{\mathcal{R}}(\hat{\bm{\beta}};t)$:
$\hat{\mathcal{R}}^{\star}(t)\leq\log 2$ (24)
As $\hat{\mathcal{R}}(\bm{\beta};t)$ is continuous with respect to
$\bm{\beta}$, it suffices to show that the level set of
$\hat{\mathcal{R}}(\cdot;t)$ at $\log 2$ within
$\\{\bm{\beta}:\sum_{i=1}^{N}\beta_{i}=0\\}$ is bounded so that it is compact.
Suppose that $\bm{\beta}$ is in the intersection between the level set and
$\\{\bm{\beta}:\sum_{i=1}^{N}\beta_{i}=0\\}$. Since each summand of
$\hat{\mathcal{R}}(\bm{\beta})$ in (6) is non-negative, i.e.,
$\frac{\tilde{X}_{ij}(t)}{\sum_{i^{\prime},j^{\prime}:i^{\prime}\neq
j^{\prime}}\tilde{X}_{i^{\prime}j^{\prime}}(t)}\log(1+\exp(\beta_{j}-\beta_{i}))\geq
0$ (25)
if $i$ and $j$ satisfy $\tilde{X}_{ij}(t)>0$, then the corresponding summand
should be smaller than $\log 2$, so that:
$\begin{split}\beta_{j}-\beta_{i}\leq&\log(1+\exp(\beta_{j}-\beta_{i}))\\\
\leq&\log 2\frac{\sum_{i^{\prime},j^{\prime}:i^{\prime}\neq
j^{\prime}}\tilde{X}_{i^{\prime}j^{\prime}}(t)}{\tilde{X}_{ij}(t)}\end{split}$
(26)
By Condition 4.1, for any distinct $i$ and $j$, there exists an index sequence
$(i=i_{0},i_{1},\dots,i_{n}=j)$ such that $X_{i_{k-1}i_{k}}>0$ for
$k=1,2,\dots,n$. Hence,
$\begin{split}\beta_{j}-\beta_{i}\leq&\log
2\sum_{k=1}^{n}\frac{\sum_{i^{\prime},j^{\prime}:i^{\prime}\neq
j^{\prime}}\tilde{X}_{i^{\prime}j^{\prime}}(t)}{\tilde{X}_{i_{k-1}i_{k}}(t)}\\\
\leq&\log 2\sum_{i^{\prime},j^{\prime}:i^{\prime}\neq
j^{\prime}}\tilde{X}_{i^{\prime}j^{\prime}}(t)\sum_{i^{\prime},j^{\prime}:i^{\prime}\neq
j^{\prime}}\frac{1}{\tilde{X}_{i^{\prime}j^{\prime}}(t)}=:B\end{split}$ (27)
where $B\in(0,\infty)$.
In sum,
$\|\bm{\beta}\|_{\infty}\leq\max_{i,j:i\neq j}|\beta_{i}-\beta_{j}|\leq B$
(28)
and this proves the existence part of the theorem.
#### 9.1.2 Proof of Theorem 5.1
The proof of this theorem is based on the proof of Lemma 1 in Simons and Yao,
(1999).
Since the kernel function $W$ in Assumption 5.3 has support
$(-\infty,\infty)$, $\tilde{X}_{ij}(t)>0$ if and only if team $i$ defeated
team $j$ at least once at any time. In other words, if Condition 4.1 holds for at
least one time point, then it holds for every time point. Here, we prove
that the probability that Condition 4.1 holds at at least one time point
converges to $1$ as $N,T\rightarrow\infty$.
Given $p_{\text{min}}$ instead of $\max_{i,j:i\neq
j}\exp(\beta^{*}_{i}-\beta^{*}_{j})$, the probability of the event
$\mathcal{S}$ that no team in a subset $S$ loses against a team not in $S$ is
no larger than
$(1-p_{\text{min}})^{|S|(N-|S|-1)T}$ (29)
Hence, we bound the probability that data does not meet Condition 4.1 by a
union bound
$\begin{split}&\mathbb{P}\\!\left(\text{Condition 4.1 fails}\right)\leq\sum_{S\subset[N]:S\neq\emptyset}\mathbb{P}\\!\left(\mathcal{S}\right)\\\
&\leq\sum_{n=1}^{N-1}\binom{N}{n}(1-p_{\text{min}})^{n(N-n-1)T}\\\ &\leq
2\sum_{n=1}^{\lceil N/2\rceil}\binom{N}{n}(1-p_{\text{min}})^{n(N-n-1)T}\\\
&\leq 2\sum_{n=1}^{\lceil N/2\rceil}\binom{N}{n}(1-p_{\text{min}})^{nNT/2}\\\
&\leq 2\left[(1+(1-p_{\text{min}})^{NT/2})^{N}-1\right]\\\ &\leq
2\left[(1+e^{-\frac{NTp_{\text{min}}}{2}})^{N}-1\right]\\\ &\leq
4Ne^{-\frac{NTp_{\text{min}}}{2}}\end{split}$ (30)
as long as $e^{-\frac{NTp_{\text{min}}}{2}}\leq\frac{\log 2}{N}$. We note that
$(1+\frac{\log 2}{N})^{N}\leq e^{\log 2}=2\leq 2\log 2+1=2N\frac{\log
2}{N}+1$. Hence,
$\mathbb{P}\\!\left(\text{Condition 4.1 fails}\right)\leq 4Ne^{-\frac{NTp_{\text{min}}}{2}}$ (31)
as long as $Ne^{-\frac{NTp_{\text{min}}}{2}}\leq\log 2$.
Since $Ne^{-\frac{NTp_{\text{min}}}{2}}\geq\log 2$ implies
$4Ne^{-\frac{NTp_{\text{min}}}{2}}$ to be larger than $1$, the probability
bound holds for any $N$, $T$, and $p_{\text{min}}$.
#### 9.1.3 Proof of Theorem 5.2
For readability, in our notation we will omit the dependence on the time point
$t$ in the expressions for $\hat{\bm{\beta}}(t)$ and $\bm{\beta}^{*}(t)$,
unless required for clarity.
In our proofs we rely on the results and arguments of Simons and Yao, (1999)
to demonstrate consistency for the maximum likelihood estimator in the static
Bradley-Terry model with an increasing number of parameters. In that setting,
the authors parametrize the winning probabilities as
$p_{i,j}=\frac{u^{*}_{i}}{u^{*}_{i}+u^{*}_{j}}$, where
$u^{*}_{i}\equiv\exp(\beta^{*}_{i})$, with $\bm{\beta}^{*}\in\mathbb{R}^{N}$
such that $\beta^{*}_{1}=0$. Then, setting $\Delta
u_{i}=\frac{\hat{u}_{i}-u^{*}_{i}}{u^{*}_{i}}$, where $\hat{u}_{i}$ is the MLE
of $u^{*}_{i}$ (with $\hat{u}_{1}=1$ by convention), it follows from the proof
of Lemma 3 of Simons and Yao, (1999) that
$\begin{split}&\max_{i}\frac{\lvert\Delta u_{i}\rvert}{\lvert\Delta
u_{i}\rvert+1}\\\
&\leq\frac{8}{N-1}\max_{i,j}\frac{u_{i}^{*}}{u_{j}^{*}}\max_{i}\sum_{j:j\neq
i}\left\\{\frac{\hat{u}_{i}}{\hat{u}_{i}+\hat{u}_{j}}-\frac{u^{*}_{i}}{u^{*}_{i}+u^{*}_{j}}\right\\}\\\
\end{split}$ (32)
where $u^{*}_{i}=\exp(\beta_{i}^{*})$. Next, the authors derived a high
probability upper bound on
$\begin{split}&\max_{i}\sum_{j:j\neq
i}\left\\{\frac{\hat{u}_{i}}{\hat{u}_{i}+\hat{u}_{j}}-\frac{u^{*}_{i}}{u^{*}_{i}+u^{*}_{j}}\right\\}\end{split}$
(33)
using the facts that
$\sum_{j:j\neq i}p_{ij}=\sum_{j:j\neq i}\frac{u^{*}_{i}}{u^{*}_{i}+u^{*}_{j}}$
(34)
and
$\begin{split}\sum_{j:j\neq i}\frac{X_{ij}}{T}=\sum_{j:j\neq
i}\frac{\hat{u}_{i}}{\hat{u}_{i}+\hat{u}_{j}},\end{split}$ (35)
where $X_{ij}$ is the number of matches in which $i$ defeated $j$. The second
identity comes from the first order optimality condition of
$\hat{\bm{\beta}}$.
In our time-varying setting, however, the subgradient optimality of
$\hat{\bm{\beta}}(t)$ for $\hat{\mathcal{R}}(\bm{\beta};t)$ only implies that,
for each $i$,
$\sum_{j:j\neq i}\tilde{X}_{ij}(t)=\sum_{j:j\neq
i}\tilde{T}_{ij}(t)\frac{e^{\hat{\beta}_{i}}}{e^{\hat{\beta}_{j}}+e^{\hat{\beta}_{i}}}.$
(36)
Thus, Eq. 35 does not hold in the dynamic setting, due to different
$\tilde{X}_{ij}(t)+\tilde{X}_{ji}(t)$ across all $j\neq i$. Instead, we have
that
$\begin{split}&\frac{1}{N-1}\left(\sum_{j:j\neq
i}\frac{\tilde{X}_{ij}(t)}{\tilde{T}_{ij}(t)}-\sum_{j:j\neq
i}\frac{e^{\hat{\beta}_{i}}}{e^{\hat{\beta}_{j}}+e^{\hat{\beta}_{i}}}\right)\\\
&=\left(\begin{split}&\sum_{j:j\neq
i}\left(\frac{1}{N-1}-\frac{\tilde{T}_{ij}(t)}{\tilde{T}_{i}(t)}\right)\frac{\tilde{X}_{ij}(t)}{\tilde{T}_{ij}(t)}\\\
&+\sum_{j:j\neq
i}\left(\frac{\tilde{T}_{ij}(t)}{\tilde{T}_{i}(t)}-\frac{1}{N-1}\right)\frac{e^{\hat{\beta}_{i}}}{e^{\hat{\beta}_{j}}+e^{\hat{\beta}_{i}}}\end{split}\right)\end{split}$
(37)
Since
$\frac{\tilde{X}_{ij}(t)}{\tilde{T}_{ij}(t)},\frac{e^{\hat{\beta}_{i}}}{e^{\hat{\beta}_{j}}+e^{\hat{\beta}_{i}}}<1$,
$\begin{split}&\left|\begin{split}&\frac{1}{N-1}\left(\sum_{j:j\neq
i}\frac{\tilde{X}_{ij}(t)}{\tilde{T}_{ij}(t)}-\sum_{j:j\neq
i}\frac{e^{\hat{\beta}_{i}}}{e^{\hat{\beta}_{j}}+e^{\hat{\beta}_{i}}}\right)\end{split}\right|\\\
&\leq 2\delta_{h}(t)\end{split}$ (38)
and
$\begin{split}&\frac{1}{N-1}\sum_{j:j\neq
i}\left\\{\frac{e^{\hat{\beta}_{i}}}{e^{\hat{\beta}_{i}}+e^{\hat{\beta}_{j}}}-\frac{e^{\beta^{*}_{i}}}{e^{\beta^{*}_{i}}+e^{\beta^{*}_{j}}}\right\\}\\\
&\leq 2\delta_{h}(t)+\frac{1}{N-1}\sum_{j:j\neq
i}\left\\{\frac{\tilde{X}_{ij}(t)}{\tilde{T}_{ij}(t)}-p_{ij}(t)\right\\}.\end{split}$
(39)
To make the bias-variance trade-off due to kernel smoothing more explicit, we
decompose the term
$\begin{split}\sum_{j:j\neq
i}\left\\{\frac{\tilde{X}_{ij}(t)}{\tilde{T}_{ij}(t)}-p_{ij}(t)\right\\}\end{split}$
(40)
as
$\begin{split}&\sum_{j:j\neq
i}\left(\begin{split}&\frac{\sum_{k}W_{h}(t_{k},t)(\mathbf{1}_{ij}(t_{k})-p_{ij}(t_{k}))}{\sum_{k}W_{h}(t_{k},t)}\\\
\end{split}\right)\\\ &+\sum_{j:j\neq
i}\left(\frac{\sum_{k}W_{h}(t_{k},t)p_{ij}(t_{k})}{\sum_{k}W_{h}(t_{k},t)}-p_{ij}(t)\right)\\\
&=:\Delta^{(var)}_{i}+\Delta^{(bias)}_{i}\end{split}$ (41)
where, for brevity, $t_{k}$ and $\mathbf{1}_{ij}(t_{k})$ here stand for
$t_{k}^{(i,j)}$ and $\mathbf{1}(i\text{ defeats }j\text{ at }t_{k})$,
respectively.
For the first term, we have that
$\begin{split}&\mathbb{P}\\!\left(\left|\Delta^{(var)}_{i}\right|\geq\epsilon\right)\\\
&=\mathbb{P}\\!\left(\left|\sum_{j:j\neq
i}\frac{\sum_{k}W_{h}(t_{k},t)(\mathbf{1}_{ij}(t_{k})-p_{ij}(t_{k}))}{\sum_{k}W_{h}(t_{k},t)}\right|\geq\epsilon\right)\\\
&=\mathbb{P}\\!\left(\begin{split}&\left|\sum_{j,k}hW_{h}(t_{k},t)\frac{s_{\min}}{s_{j}}(\mathbf{1}_{ij}(t_{k})-p_{ij}(t_{k}))\right|\\\
&\geq\epsilon\cdot h\cdot s_{\min}\end{split}\right)\\\ \end{split}$ (42)
where $s_{j}=\sum_{k}W_{h}(t_{k},t)$ and $s_{\min}=\min_{j:j\neq i}~{}s_{j}$.
Next,
$hW_{h}(t_{k},t)\frac{s_{\min}}{s_{j}}=W\left(\frac{t_{k}-t}{h}\right)\frac{s_{\min}}{s_{j}}\leq
1$, and hence the multiplicative Chernoff bound (see, e.g., Raghavan, 1988)
yields that
$\begin{split}&\mathbb{P}\\!\left(\left|\Delta^{(var)}_{i}\right|\geq\epsilon\right)\\\
&\leq 2\exp\left(-\frac{(\epsilon\cdot h\cdot
s_{\min})^{2}}{3\sum_{j,k}hW_{h}(t_{k},t)\frac{s_{\min}}{s_{j}}p_{ij}(t_{k})}\right)\\\
&\leq
2\exp\left(-\frac{\epsilon^{2}hD_{m}T}{18(N-1)(1-p_{\text{min}})}\right)\end{split}$
(43)
for each $i$ as long as
$\begin{split}\frac{\epsilon}{\sum_{j,k}\frac{W_{h}(t_{k},t)}{\sum_{k^{\prime}}W_{h}(t_{k^{\prime}},t)}p_{ij}(t_{k})}\leq
1.\end{split}$ (44)
This condition holds for $\epsilon\leq p_{\min}$.
We note that we have also used the bounds
$\frac{1}{6}D_{m}T\leq\sum_{k}W_{h}(t_{k},t)\leq D_{M}T$ (45)
for any $i,j$ and sufficiently small $h$, which are shown below in Section 9.1.4.
Then using the union bound,
$\begin{split}&\mathbb{P}\\!\left(\max_{i}\left|\Delta^{(var)}_{i}\right|\geq\epsilon\right)\\\
&\leq
2N\exp\left(-\frac{\epsilon^{2}hD_{m}T}{18(1-p_{\text{min}})(N-1)}\right)\end{split}$
(46)
Hence, plugging in
$\epsilon=\sqrt{\frac{36(1-p_{\text{min}})(N-1)\log N}{hD_{m}T}}$, we get
that, with probability at least $1-\frac{2}{N}$,
$\max_{i}|\Delta^{(var)}_{i}|\leq\sqrt{\frac{36(1-p_{\text{min}})(N-1)\log N}{hD_{m}T}}$ (47)
To handle the deterministic bias terms $\Delta^{(bias)}_{i}$, we rely on the
following bound, whose proof is given below in Section 9.1.4.
###### Lemma 9.1.
Suppose that
1. $t_{1},t_{2},\dots,t_{T}$ satisfy Eq. 8 and
2. $\frac{1}{T}=o(h)$ as $T\rightarrow\infty$.
Then, for an $L_{f}$-Lipschitz function $f:[0,1]\rightarrow\mathbb{R}$,
$\sup_{t\in[0,1]}\left|\sum_{k=1}^{T}\frac{W_{h}(t_{k},t)}{\sum_{k^{\prime}}W_{h}(t_{k^{\prime}},t)}f(t_{k})-f(t)\right|\leq C_{s}h$ (48)
with a universal constant $C_{s}$ depending only on $D_{m},D_{M},W$ and
$L_{f}$.
Accordingly,
$\begin{split}&\max_{i}|\Delta^{(bias)}_{i}|\\\ &\leq\max_{i}\sum_{j:j\neq
i}\left\\{\frac{\sum_{k}W_{h}(t_{k},t)p_{ij}(t_{k})}{\sum_{k}W_{h}(t_{k},t)}-p_{ij}(t)\right\\}\\\
&\leq C_{s}(N-1)h\end{split}$ (49)
for some constant $C_{s}$ depending only on $D_{m},D_{M},W$ and $L_{p}$.
Thus, combining all the pieces,
$\max_{i}\frac{|e^{\hat{\beta}_{i}-\beta^{*}_{i}}-1|}{|e^{\hat{\beta}_{i}-\beta^{*}_{i}}-1|+1}\leq 8M(t)\left(2\delta_{h}(t)+\max_{i}\frac{|\Delta_{i}^{(var)}(t)|+|\Delta_{i}^{(bias)}(t)|}{N-1}\right)\leq 8M(t)\left(2\delta_{h}(t)+\sqrt{\frac{36(1-p_{\min})\log N}{hD_{m}(N-1)T}}+C_{s}h\right)$ (50)
with probability at least $1-\frac{2}{N}$ as long as $\epsilon\leq p_{\min}$.
Plugging in
$h=\max\left\\{\left(\frac{1}{T}\right)^{1+\eta},\left(\frac{36(1-p_{\min})\log N}{C_{s}^{2}D_{m}(N-1)T}\right)^{\frac{1}{3}}\right\\}$ leads to the bound
$\max_{i}\frac{|e^{\hat{\beta}_{i}-\beta^{*}_{i}}-1|}{|e^{\hat{\beta}_{i}-\beta^{*}_{i}}-1|+1}\leq 8M(t)\left(2\delta_{h}(t)+\left(\frac{36C_{s}(1-p_{\min})\log N}{D_{m}(N-1)T}\right)^{\frac{1}{3}}+C_{s}h\right)\leq 16M(t)\left(\delta_{h}(t)+C_{s}h\right)$ (51)
with probability at least $1-\frac{2}{N}$ when $\epsilon\leq p_{\min}$. We
note that, given our choice for $h$,
$\epsilon=\sqrt{\frac{36(1-p_{\text{min}})(N-1)\log N}{hD_{m}T}}\leq C_{s}h$. Hence, for a sufficiently
small $h$, if the right hand side is smaller than, say, $\frac{1}{3}$, then
$\|\hat{\bm{\beta}}-\bm{\beta}^{*}\|_{\infty}\leq 3\max_{i}\frac{|e^{\hat{\beta}_{i}-\beta^{*}_{i}}-1|}{|e^{\hat{\beta}_{i}-\beta^{*}_{i}}-1|+1}\leq 48M(t)\left(\delta_{h}(t)+C_{s}h\right)$ (52)
with probability at least $1-\frac{2}{N}$ since
$\frac{|e^{x}-1|}{|e^{x}-1|+1}\geq\frac{|x|}{3}$ for $|x|\leq 1$.
#### 9.1.4 Proof of Lemma 9.1
Since $f$ is $L_{f}$-Lipschitz,
$\begin{split}&\left|\sum_{k=1}^{T}\frac{W_{h}(t_{k},t)}{\sum_{k^{\prime}}W_{h}(t_{k^{\prime}},t)}f(t_{k})-f(t)\right|\\\
&\leq\sum_{k=1}^{T}\frac{W_{h}(t_{k},t)}{\sum_{k^{\prime}}W_{h}(t_{k^{\prime}},t)}|f(t_{k})-f(t)|\\\
&\leq
L_{f}\sum_{k=1}^{T}\frac{W_{h}(t_{k},t)}{\sum_{k^{\prime}}W_{h}(t_{k^{\prime}},t)}|t_{k}-t|.\end{split}$
(53)
Let
$I_{1}=\left[0,\frac{t_{1}+t_{2}}{2}\right],I_{2}=\left[\frac{t_{1}+t_{2}}{2},\frac{t_{2}+t_{3}}{2}\right],\dots,I_{T}=\left[\frac{t_{T-1}+t_{T}}{2},1\right]$
and $l_{k}$ be the length of $I_{k}$. We note that $\frac{1}{D_{M}T}\leq
l_{k}\leq\frac{2}{D_{m}T}$ by Eq. 9. Then,
$\int_{0}^{1}|x-t|W_{h}(x,t)dx=\sum_{k}\int_{I_{k}}|x-t|W_{h}(x,t)dx=\sum_{k}l_{k}|t_{k}-t|W_{h}(t_{k},t)+\sum_{k}\int_{I_{k}}\left(\left|\frac{x-t}{h}\right|W\left(\frac{x-t}{h}\right)-\left|\frac{t_{k}-t}{h}\right|W\left(\frac{t_{k}-t}{h}\right)\right)dx.$ (54)
Since $|\cdot|W$ has a finite total variation,
$\sum_{k}\int_{I_{k}}\left|\left|\frac{x-t}{h}\right|W\left(\frac{x-t}{h}\right)-\left|\frac{t_{k}-t}{h}\right|W\left(\frac{t_{k}-t}{h}\right)\right|dx\leq\sum_{k}\frac{1}{D_{m}T}\sup_{x,y\in I_{k}}\left|\left|\frac{x-t}{h}\right|W\left(\frac{x-t}{h}\right)-\left|\frac{y-t}{h}\right|W\left(\frac{y-t}{h}\right)\right|\leq\frac{\mathcal{V}(|\cdot|W)}{D_{m}T}.$ (55)
Hence,
$\int_{0}^{1}|x-t|W_{h}(x,t)dx\geq\frac{1}{D_{M}T}\sum_{k}|t_{k}-t|W_{h}(t_{k},t)-\frac{\mathcal{V}(|\cdot|W)}{D_{m}T}.$ (56)
As a result,
$\begin{split}&\sum_{k}|t_{k}-t|W_{h}(t_{k},t)\\\ &\leq
D_{M}Th\int_{-\infty}^{\infty}|x|W(x)dx+\frac{D_{M}\mathcal{V}(|\cdot|W)}{D_{m}}.\end{split}$
(57)
On the other hand, with a similar argument,
$\int_{0}^{1}W_{h}(x,t)dx=\sum_{k}\int_{I_{k}}W_{h}(x,t)dx=\sum_{k}l_{k}W_{h}(t_{k},t)+\sum_{k}\int_{I_{k}}\left(\frac{1}{h}W\left(\frac{x-t}{h}\right)-\frac{1}{h}W\left(\frac{t_{k}-t}{h}\right)\right)dx\leq\frac{2}{D_{m}T}\sum_{k}W_{h}(t_{k},t)+\frac{\mathcal{V}(W)}{D_{m}Th},$ (58)
implying that
$\begin{split}\sum_{k}W_{h}(t_{k},t)&\geq\frac{D_{m}T}{2}\int_{-t/h}^{(1-t)/h}W(x)dx-\frac{D_{M}\mathcal{V}(W)}{2D_{m}h}.\\\
\end{split}$ (59)
As long as $h\rightarrow 0$ and $\frac{1}{T}=o(h)$,
$\inf_{t\in[0,1]}\int_{-t/h}^{(1-t)/h}W(x)dx$ (60)
is bounded away from $0$ (in particular, we consider a small enough $h$ so
that it is bounded below by, say, $\frac{1}{3}$), and the terms
$\frac{D_{M}\mathcal{V}(|\cdot|W)}{D_{m}}$ and
$\frac{D_{M}\mathcal{V}(W)}{2D_{m}h}$ in Eqs. 57 and 59 become
asymptotically negligible. As a result,
$\begin{split}\sum_{k=1}^{T}\frac{W_{h}(t_{k},t)}{\sum_{k^{\prime}}W_{h}(t_{k^{\prime}},t)}|t_{k}-t|\leq
C^{\prime}h\end{split}$ (61)
where $C^{\prime}$ is a universal constant depending only on $D_{m}$, $D_{M}$,
and $W$, and furthermore
$\begin{split}\left|\sum_{k=1}^{T}\frac{W_{h}(t_{k},t)}{\sum_{k^{\prime}}W_{h}(t_{k^{\prime}},t)}f(t_{k})-f(t)\right|\leq
C_{s}h\end{split}$ (62)
for a universal constant $C_{s}$ depending only on $D_{m}$, $D_{M}$, $W$, and
$L_{f}$.
#### 9.1.5 Proof of Theorem 5.3
In Section 9.1.3, we showed that
$\begin{split}&\max_{i}\frac{|e^{\hat{\beta}_{i}(t)-\beta^{*}_{i}(t)}-1|}{|e^{\hat{\beta}_{i}(t)-\beta^{*}_{i}(t)}-1|+1}\\\
&\leq
8M(t)\left(2\delta_{h}(t)+\max_{i}\frac{|\Delta_{i}^{(var)}(t)|+|\Delta_{i}^{(bias)}(t)|}{N-1}\right)\\\
\end{split}$ (63)
Since the bound for $\Delta^{(bias)}_{i}(t)$ depends only on $D_{m}$, $D_{M}$,
$W$, and $L_{f}$, it is sufficient to find a bound for
$\sup_{t\in[0,1]}\max_{i}|\Delta^{(var)}_{i}(t)|.$ (64)
We use a covering approach. For $L\in\mathbb{N}$, let
$\overline{t}_{l}=\frac{2l-1}{2L}$ for $l=1,2,\dots,L$. Then for any
$t\in[0,1]$ there exists $l^{*}$ such that
$|t-\overline{t}_{l^{*}}|\leq\frac{1}{2L}$ and
$\begin{split}&\max_{i}\left\\{\Delta^{(var)}_{i}(t)\right\\}\\\
&\leq\max_{i}\left\\{\begin{split}&\sum_{j:j\neq
i}\frac{\sum_{k}W_{h}(t_{k},t)\mathbf{1}(i\text{ defeats }j\text{ at
}t_{k})}{\sum_{k}W_{h}(t_{k},t)}\\\ &-\sum_{j:j\neq
i}\frac{\sum_{k}W_{h}(t_{k},\overline{t}_{l^{*}(t)})\mathbf{1}(i\text{ defeats
}j\text{ at
}t_{k})}{\sum_{k}W_{h}(t_{k},\overline{t}_{l^{*}(t)})}\end{split}\right\\}\\\
&+\max_{i}\Delta^{(var)}_{i}(\overline{t}_{l^{*}})\end{split}$ (65)
where $t_{k}$ here stands for $t_{k}^{(i,j)}$ for brevity.
In order to bound the second term in the curly brackets, we bound each of its
summands as follows:
$\begin{split}&\left|\frac{W_{h}(t_{k},t)}{\sum_{k}W_{h}(t_{k},t)}-\frac{W_{h}(t_{k},\overline{t}_{l^{*}(t)})}{\sum_{k}W_{h}(t_{k},\overline{t}_{l^{*}(t)})}\right|\\\
&\leq\left|\frac{W_{t}S_{\overline{t}}-W_{\overline{t}}S_{t}}{S_{t}S_{\overline{t}}}\right|\\\
&\leq\frac{|W_{t}-W_{\overline{t}}|}{S_{\overline{t}}}+\frac{|S_{\overline{t}}-S_{t}|W_{\overline{t}}}{S_{t}S_{\overline{t}}}\\\
\end{split}$ (66)
where we denote $W_{t}=W_{h}(t_{k},t)$,
$W_{\overline{t}}=W_{h}(t_{k},\overline{t}_{l^{*}(t)})$,
$S_{t}=\sum_{k}W_{h}(t_{k},t)$, and
$S_{\overline{t}}=\sum_{k}W_{h}(t_{k},\overline{t}_{l^{*}(t)})$ for brevity.
We have seen in Section 9.1.4 that, for any sufficiently small $h$,
$S_{t},~{}S_{\overline{t}}\geq\frac{D_{m}T}{6}.$ (67)
Thus,
$\begin{split}&\frac{|W_{t}-W_{\overline{t}}|}{S_{\overline{t}}}+\frac{|S_{\overline{t}}-S_{t}|W_{\overline{t}}}{S_{t}S_{\overline{t}}}\\\
&\leq\frac{6}{D_{m}T}\frac{L_{W}}{h}|t-\overline{t}_{l^{*}(t)}|+\left(\frac{6}{D_{m}T}\right)^{2}T\frac{L_{W}}{h}|t-\overline{t}_{l^{*}(t)}|\\\
&\leq\frac{36}{D_{m}^{2}Lh^{2}T}\end{split}$ (68)
as $D_{m}\leq 1$ and $W$ is $L_{W}$-Lipschitz by assumption. Hence,
$\begin{split}&\max_{i}\left\\{\begin{split}&\sum_{j:j\neq
i}\frac{\sum_{k}W_{h}(t_{k},t)\mathbf{1}(i\text{ defeats }j\text{ at
}t_{k})}{\sum_{k}W_{h}(t_{k},t)}\\\ &-\sum_{j:j\neq
i}\frac{\sum_{k}W_{h}(t_{k},\overline{t}_{l^{*}(t)})\mathbf{1}(i\text{ defeats
}j\text{ at
}t_{k})}{\sum_{k}W_{h}(t_{k},\overline{t}_{l^{*}(t)})}\end{split}\right\\}\\\
&\leq\frac{36(N-1)}{D_{m}^{2}Lh^{2}}\end{split}$ (69)
On the other hand,
$\max_{i}\Delta^{(var)}_{i}(\overline{t}_{l^{*}})\leq\max_{l,i}\Delta^{(var)}_{i}(\overline{t}_{l})=\max_{l,i}\left\\{\sum_{j:j\neq i}\frac{\sum_{k}W_{h}(t_{k},t)(\mathbf{1}_{ij}(t_{k})-p_{ij}(t_{k}))}{\sum_{k}W_{h}(t_{k},t)}\right\\}$ (70)
where, again, $\mathbf{1}_{ij}(t_{k})$ stands for $\mathbf{1}(i\text{ defeats
}j\text{ at }t_{k})$ for simplicity.
Using Eq. 42 and a union bound, we get that
$\begin{split}&\mathbb{P}\\!\left(\max_{l,i}\Delta^{(var)}_{i}(\overline{t}_{l})\geq\epsilon\right)\\\
&\leq
2NL\exp\left(-\frac{\epsilon^{2}hD_{m}T}{18(1-p_{\text{min}})(N-1)}\right),\end{split}$
(71)
for $\epsilon\leq p_{\min}$.
Next we plug in
$\epsilon=\sqrt{\frac{36(1-p_{\min})(N-1)\log(NL)}{hD_{m}T}}$ to obtain the bound
$\left|\max_{l,i}\Delta^{(var)}_{i}(\overline{t}_{l^{*}})\right|\leq\sqrt{\frac{36(1-p_{\text{min}})(N-1)\log(NL)}{hD_{m}T}}$ (72)
and, in turn,
$\sup_{t,i}\frac{|e^{\hat{\beta}_{i}(t)-\beta^{*}_{i}(t)}-1|}{|e^{\hat{\beta}_{i}(t)-\beta^{*}_{i}(t)}-1|+1}\leq\sup_{t}8M(t)\left(2\delta_{h}(t)+\frac{36}{D_{m}^{2}Lh^{2}}+\sqrt{\frac{36(1-p_{\min})\log(NL)}{hD_{m}(N-1)T}}+C_{s}h\right)$ (73)
with probability at least $1-\frac{2}{NL}$ and as long as $\epsilon\leq
p_{\min}$.
Plugging in
$h=\max\left\\{\left(\frac{1}{T}\right)^{1+\eta},\left(\frac{36(1-p_{\min})\log(NT^{3+3\eta})}{C_{s}^{2}D_{m}(N-1)T}\right)^{\frac{1}{3}}\right\\}$
and $L=\lceil h^{-3}\rceil$, we conclude that
$\begin{split}\sup_{t,i}\frac{|e^{\hat{\beta}_{i}(t)-\beta^{*}_{i}(t)}-1|}{|e^{\hat{\beta}_{i}(t)-\beta^{*}_{i}(t)}-1|+1}&\leq\sup_{t}8M(t)\left(2\delta_{h}(t)+\frac{36}{D_{m}^{2}Lh^{2}}+C_{s}h+\sqrt{\frac{36(1-p_{\min})\log(NL)}{hD_{m}(N-1)T}}\right)\\\ &\leq\sup_{t}8M(t)\left(2\delta_{h}(t)+\left(C_{s}+\frac{72}{D_{m}^{2}}\right)h+\sqrt{\frac{36(1-p_{\min})\log(Nh^{-3})}{hD_{m}(N-1)T}}\right)\\\ &\leq\sup_{t}16M(t)\left(\delta_{h}(t)+\left(C_{s}+\frac{36}{D_{m}^{2}}\right)h\right)\end{split}$ (74)
with probability at least $1-\frac{2h^{3}}{N}$ when $\epsilon\leq p_{\min}$.
Since $\epsilon\leq\sqrt{3(1+\eta)}\,C_{s}h$ given the choice of $h$, this
bound holds for all sufficiently small $h$.
#### 9.1.6 Proof of Theorem 5.4
For convenience, we omit the time index $t$ for $\hat{\bm{\beta}}(t)$,
$\bm{\beta}^{*}(t)$, and $p_{\text{min}}(t)$, unless it is required for
clarification.
We seek to replace $M(t)$ in Eq. 16 by a term depending on $p_{\text{min}}$. This
requires $\exp(\beta^{*}_{i}-\beta^{*}_{j})$ to be bounded above by a function
of $p_{\text{min}}$. The following lemma provides the desired bound; the proof
is in Section 9.1.7.
###### Lemma 9.2.
$\max_{i,j:i\neq j}|\beta_{i}^{*}-\beta_{j}^{*}|-\frac{1}{p_{\text{min}}}$
(75)
is upper-bounded by a universal constant, and hence
$\max_{i,j:i\neq j}\exp(|\beta_{i}^{*}-\beta_{j}^{*}|)\leq
C_{p}\exp\left(\frac{1}{p_{\text{min}}}\right)$ (76)
for some universal constant $1<C_{p}<1.5$.
Plugging in the new bound on $\exp(\beta^{*}_{i}-\beta^{*}_{j})$, we get
$\|\hat{\bm{\beta}}(t)-\bm{\beta}^{*}(t)\|_{\infty}\leq 72K\left(\delta_{h}(t)+C_{s}h\right)$ (77)
instead of $48M(t)\left(\delta_{h}(t)+C_{s}h\right)$ in Eq. 16. This result
easily extends to the uniform case, Eq. 21.
#### 9.1.7 Proof of Lemma 9.2
Let $d_{0}$ be the difference in scores which implies a bias of probability
$\frac{p_{\text{min}}}{2}$:
$\frac{1}{1+\exp(d_{0})}=\frac{p_{\text{min}}}{2}$ (78)
Suppose that
$i_{\text{max}}={\arg\max}_{i}\beta_{i}^{*}\text{ and
}i_{\text{min}}={\arg\min}_{i}\beta_{i}^{*}$ (79)
and that
$\beta^{*}_{\text{max}}=\max_{i}\beta_{i}^{*}\text{ and
}\beta^{*}_{\text{min}}=\min_{i}\beta_{i}^{*}$ (80)
Then, the maximal difference $d_{\text{max}}$ between the $\beta_{i}^{*}$'s is
$d_{\text{max}}=\max_{i,j:i\neq
j}\beta_{i}^{*}-\beta_{j}^{*}=\beta^{*}_{\text{max}}-\beta^{*}_{\text{min}}$
(81)
Let $I_{1}=\\{i:\beta_{i}<\beta_{\text{min}}+d_{0}\\}$. Plugging in
$i=i_{\text{min}}$, Eq. 34 implies
$\begin{split}&(N-1)p_{\text{min}}\leq\sum_{j:j\neq
i_{\text{min}}}p_{i_{\text{min}}j}(t)\\\ &=\sum_{j:j\neq
i_{\text{min}}}\frac{1}{1+\exp(\beta^{*}_{\text{j}}-\beta^{*}_{\text{min}})}\\\
&\leq\frac{|I_{1}|-1}{2}+(N-1)\frac{p_{\text{min}}}{2}\end{split}$ (82)
Hence, $|I_{1}|\geq(N-1)p_{\text{min}}+1$.
Now, let $I_{2}=\\{i:\beta_{i}<\beta_{\text{min}}+2d_{0}\\}$. Summing Eq. 34
plugged in over $i\in I_{1}$, we get
$\begin{split}&(N-|I_{1}|)|I_{1}|p_{\text{min}}\leq\sum_{j\in
I_{1}^{C}}\sum_{i\in I_{1}}p_{ij}(t)\\\ &=\sum_{j\in I_{1}^{C}}\sum_{i\in
I_{1}}\frac{1}{1+\exp(\beta^{*}_{j}-\beta^{*}_{i})}\\\
&\leq\frac{(|I_{2}|-|I_{1}|)|I_{1}|}{2}+(N-|I_{1}|)|I_{1}|\frac{p_{\text{min}}}{2}\end{split}$
(83)
and hence
$\begin{split}|I_{2}|\geq&|I_{1}|+(N-|I_{1}|)p_{\text{min}}\\\
\geq&Np_{\text{min}}+|I_{1}|(1-p_{\text{min}})\\\
\geq&(N-1)(1-(1-p_{\text{min}})^{2})+1\\\ \end{split}$ (84)
Similarly for $I_{k}=\\{i:\beta_{i}<\beta_{\text{min}}+kd_{0}\\}$ and
$J_{k}=\\{j:\beta_{j}>\beta_{\text{max}}-kd_{0}\\}$,
$\begin{split}&|I_{k}|\geq(N-1)(1-(1-p_{\text{min}})^{k})+1,\\\
&|J_{k}|\geq(N-1)(1-(1-p_{\text{min}})^{k})+1.\end{split}$ (85)
Now, without loss of generality we assume that $d_{\text{max}}>2kd_{0}$. Then,
by the optimality of $\bm{\beta}^{*}$ for $\mathcal{R}(\bm{\beta})$,
$\begin{split}\log 2=&\mathcal{R}(\mathbf{0})\geq\mathcal{R}(\beta^{*})\\\
=&\frac{1}{\binom{N}{2}}\sum_{i,j:i\neq
j}p_{ij}(t)\log(1+\exp(\beta_{j}^{*}-\beta_{i}^{*}))\\\
\geq&\frac{1}{\binom{N}{2}}\sum_{i\in I_{k}}\sum_{j\in
J_{k}}p_{\text{min}}\log(1+\exp(d_{\text{max}}-2kd_{0}))\\\
\geq&2p_{\text{min}}(1-(1-p_{\text{min}})^{k})^{2}(d_{\text{max}}-2kd_{0}).\end{split}$
(86)
Thus, $d_{\text{max}}\leq\frac{\log
2}{2p_{\text{min}}(1-(1-p_{\text{min}})^{k})^{2}}+2kd_{0}$ for any $k$.
Plugging in $k=\lceil\log(\frac{1}{p_{\text{min}}})\rceil$, we get that
$\begin{split}d_{\text{max}}\leq&\frac{\log 2}{2(1-1/e)^{2}p_{\text{min}}}\\\
&+2\log\left(\frac{2}{p_{\text{min}}}-1\right)\left(\log\left(\frac{1}{p_{\text{min}}}\right)+1\right)\\\
\leq&\frac{1}{p_{\text{min}}}+C,\end{split}$ (87)
for some universal constant $C$ since the derivative of
$2\log\left(2x-1\right)\left(\log x+1\right)$ is positive and converges to $0$
as $x\rightarrow\infty$. Then $2\log\left(2x-1\right)\left(\log x+1\right)$
has an upper-bounding tangent line with slope $1-\frac{\log 2}{2(1-1/e)^{2}}$,
and $C$ is its $y$-intercept. This also yields that
$\max_{i,j:i\neq j}\exp(\beta^{*}_{i}-\beta^{*}_{j})\leq
C_{p}\exp\left(\frac{1}{p_{\text{min}}}\right),$ (88)
for some universal constant $C_{p}$. In particular, $1<C_{p}<1.5$.
### 9.2 Tuning kernel bandwidth in practical settings
As noted in Section 2.2, the bandwidth $h\in\mathbb{R}_{>0}$ of the kernel
function serves as an effective global smoothing parameter between subsequent
time periods and allows us to borrow information across contiguous time points.
Increasing $h$, all else held constant, leads to parameter estimates (and
hence the derived global rankings) becoming “smoothed” together across time.
Naturally, the question remains of how to tune $h$ in practical applications in
a principled, data-driven manner. This is a fundamentally challenging question,
not just for our problem but, more generally, in nonparametric regression. Here
we present a way of tuning $h$ with some degree of objectivity based on
leave-one-out cross-validation (LOOCV).
In general settings where we have independent and identically distributed
(i.i.d.) samples, LOOCV assesses the performance of a predictive model on a
single held-out i.i.d. sample. In our case, each pairwise comparison can be
considered an i.i.d. sample if we take the compared teams and the time point
on which they are compared as covariates. Recall that $(i_{m},j_{m},t_{m})$
denotes the $m$-th pairwise comparison, in which team $i_{m}$ won against team
$j_{m}$ at time point $t_{m}$, for $m=1,\dots,M$. Then, for a given smoothing penalty
parameter $h$, LOOCV is adapted to our estimation approach as follows:
1. 1.
For $m=1,\dots,M$, given $h>0$:
1. (a)
fit our model with kernel bandwidth $h$ on the dataset with the $m$-th
comparison held-out;
2. (b)
calculate the negative log-likelihood (nll) of the fitted model on the
held-out comparison $(i_{m},j_{m},t_{m})$.
2. 2.
Take the average of the negative log-likelihoods to obtain $\text{nll}_{h}$, a
measure of the predictive loss of the time-varying Bradley-Terry estimator
for the given $h$ on our dataset.
3. 3.
Choose the bandwidth $h^{*}$ with the smallest $\text{nll}_{h}$ value.
We apply this data-driven methodology to the experiments and real-life
application in Section 6 and Section 7.
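To make the procedure concrete, the following is a minimal sketch of the LOOCV loop above. The fitting routine `fit_dynamic_bt` is a hypothetical placeholder (the actual implementation lives in the repository referenced in Section 9.3.3); the held-out loss is the Bradley-Terry negative log-likelihood $\log(1+\exp(\hat{\beta}_{j_{m}}(t_{m})-\hat{\beta}_{i_{m}}(t_{m})))$.

```python
import numpy as np

def loocv_bandwidth(comparisons, bandwidths, fit_dynamic_bt):
    """Select the kernel bandwidth h by leave-one-out cross-validation.

    comparisons:    list of (i, j, t) tuples; team i beat team j at time t.
    bandwidths:     iterable of candidate h > 0.
    fit_dynamic_bt: hypothetical fitting routine fit_dynamic_bt(data, h)
                    returning a matrix beta[i, t] of estimated scores.
    """
    best_h, best_nll = None, np.inf
    for h in bandwidths:
        losses = []
        for m in range(len(comparisons)):
            i, j, t = comparisons[m]
            train = comparisons[:m] + comparisons[m + 1:]
            beta = fit_dynamic_bt(train, h)
            # nll of the held-out outcome "i beats j at t" under the
            # Bradley-Terry probability 1 / (1 + exp(beta_j - beta_i)).
            losses.append(np.log1p(np.exp(beta[j, t] - beta[i, t])))
        nll_h = np.mean(losses)
        if nll_h < best_nll:
            best_h, best_nll = h, nll_h
    return best_h, best_nll
```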
### 9.3 Details of Experiments
Here we explain some details of the setting of the numerical experiments in
Section 6.
#### 9.3.1 Bradley-Terry Model as the True Model
We set the number of teams $N=50$ and the number of time points $M=50$. We set
$n_{ij}(t)=1$ for all $i,j\in[N]$ and $t\in[M]$.
For the Gaussian process to generate a path for $\bm{\beta}^{*}_{i}$ at
$t=1,\ldots,M$, we use the same covariance matrix
$\Sigma_{i}=\Sigma\in\mathbb{R}^{M\times M}$ for all $i\in[N]$, and $\Sigma$
is set to be a Toeplitz matrix defined by
$\Sigma_{ij}=1-M^{-\alpha}|i-j|^{r},$
and in our experiment we set $(\alpha,r)=(1,1)$. The mean vector is set to be
a constant over time, i.e., $\mu_{i}(t)=u_{i}$ for $t=1,\ldots,M$, where
$u_{1},\ldots,u_{N}$ are generated i.i.d. from the uniform distribution on
$[0,1]$.
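For concreteness, a small sketch of this generation step (illustrative, not the code used for the paper; `generate_true_scores` is a hypothetical name):

```python
import numpy as np
from scipy.linalg import toeplitz

def generate_true_scores(N=50, M=50, alpha=1.0, r=1.0, rng=None):
    """Draw beta*_i(t), i=1..N, t=1..M, from the Gaussian process above."""
    rng = np.random.default_rng() if rng is None else rng
    # Toeplitz covariance Sigma_{st} = 1 - M^{-alpha} |s - t|^r.
    first_col = 1.0 - M ** (-alpha) * np.arange(M, dtype=float) ** r
    Sigma = toeplitz(first_col)
    # Constant-in-time means u_i ~ Uniform[0, 1].
    u = rng.uniform(0.0, 1.0, size=N)
    return np.stack([rng.multivariate_normal(np.full(M, ui), Sigma) for ui in u])
```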
Figure 3: LOOCV curve of our Dynamic Bradley-Terry model fitted with Gaussian
kernel. $y$-axis: averaged negative log-likelihood. The optimal $h^{*}$ is
0.03.
Fig. 3 shows the LOOCV curve of our dynamic Bradley-Terry model fitted with
a Gaussian kernel in one repetition of our experiment. The curve corresponds
to the setting described here; for the agnostic model setting, the CV curve
has a similar shape, typical of cross-validation curves for a tuning
parameter. The kernel bandwidth with the smallest $\mathrm{nll}_{h}$ is
$h^{*}=0.03$. The LOOCV procedure is described in Section 9.2.
#### 9.3.2 Agnostic Model Setting
Again we set the number of teams $N=50$, the number of time points $M=50$, and
$n_{ij}(t)=1$ for all $i,j\in[N]$ and $t\in[M]$. The covariance matrix is also
the same as in Section 9.3.1. The only difference lies in the mean vector: it
is still constant over time, i.e., $\mu_{i}(t)=u_{i}$ for
$t=1,\ldots,M$, but the $u_{i}$’s are generated in the following group-wise way:
1. 1.
Set the number of groups $G$ and the index set of each group
$I_{1},\ldots,I_{G}$ so that $\sum_{i}|I_{i}|=N$. Set the base support to be
$[0,b]$ and the group gap to be $a$.
2. 2.
For each $i\in\\{1,\ldots,G\\}$, generate $u_{j}$ from ${\rm
Uniform}(a(i-1),a(i-1)+b)$ for all $j\in I_{i}$.
In our experiment we set $G=5$ with each group containing two randomly picked
indices, $b=0.5$ and $a=1.5$. Such group-wise generation is intended to ensure
that different teams have distinguishable performance in pairwise comparisons,
so that the ranking is more reasonable.
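A minimal sketch of this group-wise draw (illustrative only; we split the indices into equal-size groups, which matches the description above only up to the group sizes):

```python
import numpy as np

def groupwise_means(N=50, G=5, b=0.5, a=1.5, rng=None):
    """Draw u_1..u_N group-wise: group g is supported on [a*(g-1), a*(g-1)+b]."""
    rng = np.random.default_rng() if rng is None else rng
    perm = rng.permutation(N)  # random assignment of indices to groups
    u = np.empty(N)
    for g, block in enumerate(np.array_split(perm, G)):
        u[block] = rng.uniform(a * g, a * g + b, size=block.size)
    return u
```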
#### 9.3.3 Running Time
Fig. 4 compares the time it takes to fit our model and the original Bradley-
Terry model under 3 different settings, where $N$ is the number of teams and
$M$ is the number of time points:
* •
Fix $N$, vary $M$.
* •
Fix $M$, vary $N$.
* •
Vary $N$ and $M$ together while keeping $N=M$.
Figure 4: Comparison of running time of original Bradley Terry model (oBT) and
our Dynamic Bradley Terry model (DBT). The values are averaged over 20
repetitions.
For our dynamic Bradley-Terry model, the running time here is measured for
fitting the model with a given kernel parameter $h$; hence it includes both
the kernel smoothing step and the optimization step. In real applications, if
one wants to select the best $h$ from a range of values with cross-validation,
the total computation time would be approximately the running time reported
here multiplied by the number of cross-validation fits.
The results in Fig. 4 show that, for all the advantages our model brings, it
does not cost much more in terms of computation time. Furthermore, when the
number of time points $M$ is large while $N$ is relatively small, our model
can take even less time than the original Bradley-Terry model.
If one wants to do LOOCV to select $h$ when $N$ and $M$ are huge, then it
could take a long time to finish the whole procedure. However, in this case we
observed in some extended experiments that, with a pre-determined $h$ in a
reasonable range, our model gives a fairly good estimate, close to the one
given by the best $h^{*}$ selected by LOOCV. The supporting files of our
experiments can be found in our GitHub repository†††Code available at
https://github.com/shamindras/bttv-aistats2020.
#### 9.3.4 MLE of the Bradley-Terry Model
Table 4 shows the frequency with which 4.1 holds at a single time point for
the original pairwise comparison data for different $M$ and $N$, where
$n_{ij}(t)=1$ for all $i,j,t$. To be clear, here we just regard the matrix
$\tilde{X}(t)$ in 4.1 as the original data rather than the smoothed data, as
it originally was in Ford (1957). Given $\\{X(t),t\in[M]\\}$, the frequency
here refers to $\\#\\{t:\text{The condition holds for }X(t)\\}/M$.
The data are generated as described in Section 6.1, and the frequency is
averaged over 50 repetitions. When $N=M=10$ and $n_{ij}(t)=4$ for all $i,j,t$,
the frequency rises to 0.988, illustrating how $n_{ij}(t)$ controls the
sparsity of the game matrix and consequently whether 4.1 holds or not.
$\mathbf{(N,M)}$ | (5,5) | (10,10) | (20,10) | (30,10) | (40,10) | (50,10)
---|---|---|---|---|---|---
Freq. | 0.248 | 0.622 | 0.902 | 0.950 | 0.984 | 0.984
Table 4: Frequency that 4.1 holds at a single time point for the original
pairwise comparison data. $n_{ij}(t)=1$.
As a comparison, under the same setting, for the kernel-smoothed pairwise
comparison data, 4.1 always holds in the experiment. This fact demonstrates
the advantage of kernel smoothing, and partly explains why our model performs
best in our experiments, where the data are sparse.
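These frequencies can be computed mechanically. Assuming condition 4.1 amounts to Ford's (1957) requirement that the directed “$i$ beat $j$” graph be strongly connected at the given time point (an assumption of this sketch, not a statement proved here), the check looks as follows:

```python
import networkx as nx

def ford_condition_frequency(X):
    """Fraction of time points whose win matrix satisfies the condition.

    X: array of shape (M, N, N); X[t, i, j] = #{times i beat j at time t}.
    Assumes 4.1 is strong connectivity of the directed "i beat j" graph.
    """
    M, N, _ = X.shape
    count = 0
    for t in range(M):
        G = nx.DiGraph()
        G.add_nodes_from(range(N))  # keep teams with no recorded games
        G.add_edges_from((i, j) for i in range(N) for j in range(N)
                         if i != j and X[t, i, j] > 0)
        count += nx.is_strongly_connected(G)
    return count / M
```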
The frequencies in Table 4 seem high for $N>20$, but from a global
perspective, the induced frequency that 4.1 holds for all $M$ time points
could be much lower. Table 5 shows such frequency in some settings. Again the
values are averaged over 50 repetitions. Remember that in these settings the
condition always holds for kernel-smoothed data.
$\mathbf{(N,M)}$ | (10,10) | (20,10) | (30,10) | (40,10) | (50,10) | (60,10)
---|---|---|---|---|---|---
Freq. | 0.02 | 0.44 | 0.62 | 0.86 | 0.84 | 0.88
Table 5: Frequency that 4.1 holds for all $M$ time points for the original
pairwise comparison data. $n_{ij}(t)=1$.
To make it clearer how $n_{ij}(t)$ affects the global connectivity, we report
Table 6, in which we fix $(N,M)=(10,10)$.
$n_{ij}(t)$ | 1 | 2 | 4 | 6 | 8 | 10
---|---|---|---|---|---|---
Freq. | 0.02 | 0.48 | 0.92 | 0.94 | 0.96 | 1.0
Table 6: Frequency that 4.1 holds for all $M$ time points for the original
pairwise comparison data. $(N,M)=(10,10)$.

Figure 5: Divergence of $\max|\hat{\beta}|$ when the MLE does not exist at
some time points for the original Bradley-Terry model.
By direct inspection of the likelihood of the original Bradley-Terry model, it
can be seen that, when the MLE does not exist, the norm of $\hat{\beta}$ will
go to infinity if one uses gradient descent without any regularization. Fig. 5
shows an example where $N=M=10$ and $n_{ij}(t)=1$ for all $i,j,t$.
Convexity in Multivalued Harmonic Functions
IMMANUEL BEN PORAT
We investigate variants of a Three Circles type Theorem in the context
of $\mathcal{Q}-$valued functions. We prove some convexity inequalities
related to the $L^{2}$ growth function in the $\mathcal{Q}-$valued
settings. Optimality of these inequalities and comparison to the case
of real valued harmonic functions is also discussed.
§ INTRODUCTION
§.§ Background
The study of multivalued harmonic functions was originated in the
pioneering work of Almgren [5] on Plateau's problem, which asks
for a surface of minimal area among all surfaces stretched accross
a given closed contour. Almgren's theory was further extended and
simplified in [1]. The profound geometric applications of Almgren's
theory to minimal surfaces are not addressed here. Instead, we shall
connect the theory of $\mathcal{Q}-$valued functions to a classical
result from complex analysis, which has some modern reflections.
Let us begin by providing some background to the material that motivated
this work. Let $0\neq u$ be harmonic on the unit ball $B_{1}(0)$.
Then one can associate to $u$ real valued functions $H_{u},D_{u},\overline{H}_{u},I_{u}:\mathrm{(0,1)\rightarrow\mathbb{R}}$
by letting
\[
H_{u}(r)=\underset{\partial B_{r}(0)}{\int}u^{2}(x)d\sigma,D_{u}(r)=\underset{B_{r}(0)}{\int}|\nabla u|^{2}dx,\overline{H}_{u}(r)=\frac{1}{|\partial B_{r}(0)|}H_{u}(r),I_{u}(r)=\frac{rD_{u}(r)}{H_{\mathit{u}}(r)}
\]
$\overline{H}_{u}(r)$ is called the $L^{2}-$growth function of $u$
and $I_{u}(r)$ is called the frequency function of $u$. The motivation
behind the definition of the function $\overline{H}_{u}$ comes from
a classical result in complex analysis known as the Three Circles Theorem.
Given a holomorphic function $f$ on the unit ball $B_{1}$, let $M(r)=\underset{B_{r}(0)}{\max}|f|$.
The Three Circles Theorem, proved by Hadamard, states that the function
$\widetilde{M}:(-\infty,0)\rightarrow\mathbb{R}$ defined by $\widetilde{M}(t)=M(e^{t})$
is $\log$ convex, that is $\log\widetilde{M}(t)$ is convex. It is
therefore natural to seek for a Three Circles type Theorem for real
harmonic functions $u:B_{1}(0)\subset\mathbb{R}^{n}\rightarrow\mathbb{R}$.
It was first observed by Agmon [6] that such a theorem holds
if the function $M$ is replaced by an appropriate $L^{2}$-version
on the sphere. Namely, Agmon proves that the function $t\mapsto\overline{H}_{u}(e^{t})$
is $\log$ convex.
In 2015, Lippner and Mangoubi observed the following stronger result:
Let
$u:B_{1}(0)\rightarrow\mathbb{R}$ be harmonic. Then $\overline{H}_{u}$
is absolutely monotonic, that is $\overline{H}_{u}^{(k)}\geq0$ for
all $k\in\mathbb{N}$. In particular, $H_{u}^{(k)}\geq0$ for all $k\in\mathbb{N}$.
Since Theorem <ref> in [3] is carried out in a
discrete setting, for the sake of completeness a proof of the continuous
version (as stated above) is presented in the appendix. The second
statement in Theorem <ref> is an immediate consequence
of the first one. It is an exercise to verify that absolute monotonicity
of $\overline{H}$ implies $\log$ convexity of $t\mapsto\overline{H}(e^{t})$
(see [7], II, Problem 123). Roughly speaking, we are interested
in the question whether a Lippner-Mangoubi type theorem can be obtained
in the more general setting of multivalued harmonic functions. Let
us emphasize this could have fascinating applications in the regularity
theory of these objects, since absolutely monotonic functions are
real analytic (due to a celebrated theorem of Bernstein. See [2]).
In some sense, the nonlinear nature of the problem is the main obstacle
in obtaining elliptic regularity type results for multivalued harmonic
functions. We hope that approaching the problem via Bernstein's theorem
may be useful in overcoming some of the difficulties that are created
by the lack of linearity.
§.§ Main Results
Given some $P\in\mathbb{R}^{n}$ we denote by $[[P]]$ the Dirac mass
in $P\in\mathbb{R}^{n}$ and define
\[
\mathcal{A}_{\mathcal{Q}}(\mathbb{R}^{n}):=\{\stackrel[i=1]{\mathcal{Q}}{\sum}[[P_{i}]]\mid P_{i}\in\mathbb{R}^{n},1\le i\le\mathcal{Q}\}
\]
The set $\mathcal{A}_{\mathcal{Q}}(\text{\ensuremath{\mathbb{R}}}^{n})$
is endowed with a metric $\mathcal{G}$, not specified for the moment,
such that the space $(\mathcal{A}_{\mathcal{Q}}(\text{\ensuremath{\mathbb{R}}}^{n}),\mathcal{G})$
is a complete metric space. We then consider functions $f:\Omega\subset\mathbb{R}^{m}\rightarrow\mathcal{A}_{\mathcal{Q}}(\text{\ensuremath{\mathbb{R}}}^{n})$,
where $\Omega$ is some domain in $\mathbb{R}^{m}$. We call such
functions $\mathcal{Q}$-valued functions. One key fact is the existence
of a notion of a harmonic $\mathcal{Q}$-valued function.
We adapt the terminology of [1] and call such functions Dir-minimizing.
As their name suggests, Dir-minimizing functions are defined as functions
minimizing a certain class of integrals, by analogy with the classical
Dirichlet principle. For each $f:B_{1}(0)\subset\mathbb{R}^{m}\rightarrow\mathcal{A}_{\mathcal{Q}}(\mathbb{R}^{n})$
Dir-minimizing we associate a real valued function $\overline{H}_{f}:(0,1)\rightarrow\mathbb{R}$
by letting: $\overline{H}_{f}(r)=\frac{1}{|\partial B_{r}(0)|}\underset{\partial B_{r}(0)}{\int}|f|{}^{2}d\sigma$.
The function $\overline{H}_{f}$ is a generalization of the function
introduced in the beginning. Our first aim would be to generlize Agmon's
Theorem to the multivalued case. We will prove
Let $f:B_{1}(0)\subset\mathbb{R^{\mathit{m}}}\rightarrow\mathcal{A}_{\mathcal{Q}}(\text{\ensuremath{\mathbb{R}}}^{n})$
be Dir minimizing such that $H(r)>0$. Define $a:(-\infty,0)\rightarrow\mathbb{R}$
by $a(t)=\log\overline{H}(e^{t})$. Then
(i) $a'(t)\geq0$ for all $t\in(-\infty,0)$;
(ii) $a''(t)\geq0$ for a.e. $t\in(-\infty,0)$.
Furthermore, $a$ is convex.
Since (ii) holds merely up to a null set, the convexity of $a$ does
not follow directly. This requires an additional consideration. The
following theorem, is the main result of this work
Let $f:B_{1}(0)\subset\mathbb{R}^{2}\rightarrow\mathcal{A_{Q}}(\mathbb{R}^{n})$
be a Dir minimizing function. Suppose $f|_{\partial B_{r}}\in W^{1,2}(\partial B_{r}(0),\mathcal{A_{Q}}(\text{\ensuremath{\mathbb{R}}}^{n}))$
for a.e. $0<r<1$. For each $N>0$ define
$\overline{h}_{N,f}:(0,1)\rightarrow\mathbb{R}$ by $\overline{h}_{N,f}(r)=\overline{H}_{f}(r^{N}).$
(i) $\overline{h}'_{\frac{\mathcal{Q}}{2},f}(r)\geq0$ for all $r\in(0,1)$;
(ii) $\overline{h}''_{\frac{\mathcal{Q}}{2},f}(r)\geq0$ for a.e. $r\in(0,1)$.
Furthermore, $\overline{h}_{\frac{\mathcal{Q}}{2}}$ is convex.
The proof of Theorem <ref> will be followed by a higher
dimensional version, that is, when the domain is the $m-$dimensional
unit baḷl for arbitrary $m>2$. In the higher dimensional version,
the constant $\frac{\mathcal{Q}}{2}$ will be replaced by some constant
depending on $m$ which does not have a simple closed formula. It
should be remarked that unlike in the scenario of Theorem <ref>,
the fact that $\overline{H}_{f}$ (and hence $a$ and $\overline{h}_{N,f}$
) is a.e. twice differentiable (and moreover $C^{1}$) is nontrivial.
A naive version of Theorem <ref> for $\mathcal{Q}$-valued
functions is not valid, as witnessed by the example $f(z)=\underset{w^{3}=z}{\sum}[[w]]$,
for which the associated $\overline{H}$ function has a negative second
derivative for all $0<r<1$. In addition, the following proposition
demonstrates that we do not have an obvious third derivative version
of Theorem <ref>:
Define $f:B_{1}(0)\subset\mathbb{R}^{2}\mathbb{\rightarrow\mathcal{A}_{\mathrm{2}}\mathrm{(\mathbb{R^{\mathrm{2}}\mathrm{)}}}}$
by $f(z)=\underset{w^{2}=2z-1}{\sum}[[w]]$. Then $f$ is Dir minimizing,$f|_{\partial B_{r}}\in W^{1,2}(\partial B_{r},\mathcal{A_{Q}}(\text{\ensuremath{\mathbb{R}}}^{n}))$
for all $r\neq\frac{1}{2}$ and $\overline{h}'''_{1,f}(r)=\mathcal{\mathit{\overline{H}_{f}'''}}(r)<0$
for all $\frac{1}{2}<r<1$.
Both Proposition <ref> and the abovementioned example
will be proved and explained in Section <ref>. In Proposition
<ref>, our main contribution is performing the formal
computation which shows that $\overline{H}_{f}'''<0$ for all $\frac{1}{2}<r<1$
and showing that the boundary condition $f|_{\text{\ensuremath{\partial}}B_{r}}\text{\ensuremath{\in}}W^{1,2}(\text{\ensuremath{\partial}}B_{r},\mathcal{A_{Q}}(\text{\ensuremath{\mathbb{R}}}^{n}))$
is indeed satisfied for all $r\neq\frac{1}{2}$. Proving that $f$
is Dir minimizing is a difficult task, and relies on some rather heavy
machinery from geometric measure theory. It should be emphasized that
the domain of $f$ in both counterexamples is the planar unit disk.
Thus, we did not rule out the possibillity that in higher dimensions
the $L^{2}$- growth function of $f$ is more well behaved.
Organization of the paper. In Section <ref>, we
fix some notation and briefly review the frequency function and its
relatives. A detailed exposition may be found in [1]. Section
<ref> is devoted mainly to the proof of Proposition <ref>,
Theorem <ref> and other related convexity inequalities.
In Section <ref> we perform the calculations required to
establish the counterexample given in Proposition <ref>.
In the same context we will prove that the boundary condition $f|_{\text{\ensuremath{\partial}}B_{r}}\text{\ensuremath{\in}}W^{1,2}(\text{\ensuremath{\partial}}B_{r}(0),\mathcal{A_{Q}}(\text{\ensuremath{\mathbb{R}}}^{n}))$
in Theorem <ref> is in fact verified for a certain class
of Dir-minimizing functions on the unit disk.
§ ACKNOWLEDGEMENT
This work is part of the author's MA thesis. I wish to express my
most deep gratitude towards my supervisor Dan Mangoubi for suggesting
and guiding me in this problem, for many fruitful and stimulating
disscusions and for carefully reading previous versions of this manuscript
and providing insightful comments. In particular, for explaining the
proof of Theorem <ref>. I would also like to thank Camillo
De Lellis for his interest in this work. In particular, for suggesting
the counterexample that appears in Proposition <ref>
and for explaining the proof of Theorem <ref>.
§ PRELIMINARIES
§.§ Notations
$d\sigma=$The surface measure
$\Delta u=$The Laplacian of $u$
$\cdot=$The standard scalar product on $\mathbb{R}^{m}$
$\nu=$The unit normal to the sphere
$C_{m}=$Surface area of the $m-$dimensional unit sphere
$B_{R}(x)=$The ball of radius $R$ centered at $x$
We assume that the reader is familiar with the basic theory of $\mathcal{Q}$-valued
functions. Following [1], we recall some basic notions and terminology.
Given some $P\in\mathbb{R}^{n}$ we denote by $[[P]]$ the Dirac mass
in $P\in\mathbb{R}^{n}$ and define $\mathcal{A}_{\mathcal{Q}}(\mathbb{R}^{n}):=\{\stackrel[i=1]{\mathcal{Q}}{\sum}[[P_{i}]]|P_{i}\in\mathbb{R}^{n},1\le i\le\mathcal{Q}\}$.
We endow $\mathcal{A}_{\mathcal{Q}}(\text{\ensuremath{\mathbb{R}}}^{n})$
with a metric $\mathcal{G}$, defined as follows: For each $T_{1},T_{2}\in\mathcal{A}_{\mathcal{Q}}(\text{\ensuremath{\mathbb{R}}}^{n})$
with $T_{1}=\stackrel[i=1]{\mathcal{Q}}{\sum}[[P_{i}]],T_{2}=\stackrel[i=1]{\mathcal{Q}}{\sum}[[S_{i}]]$
we define $\mathcal{G}(T_{1},T_{2})=\underset{\sigma\in\mathscr{P}_{\mathcal{Q}}}{\min}\sqrt{\stackrel[i=1]{\mathcal{Q}}{\sum}|P_{i}-S_{\sigma(i)}|^{2}}$,
where $\mathscr{P_{\mathcal{Q}}}$ is the permutation group on $\{1,...,\mathcal{Q}\}$.
The space $(\mathcal{A}_{\mathcal{Q}}(\text{\ensuremath{\mathbb{R}}}^{n}),\mathcal{G})$
is a complete metric space. A $\mathcal{Q}$- valued function is a
function $f:\Omega\subset\mathbb{R}^{m}\rightarrow\mathcal{A}_{\mathcal{Q}}(\text{\ensuremath{\mathbb{R}}}^{n})$.
Of course, this formalism was designed to capture the notion of a
function attaining multiple values at each point. A regularity theory
can be developed for $\mathcal{Q}$-valued functions. In particular,
the notion of a Sobolev space $W^{1,p}(\Omega,\mathcal{A}_{\mathcal{Q}}(\text{\ensuremath{\mathbb{R}}}^{n}))$
and the notion of an approximate differential denoted by $Df$.
Suppose $\Omega\subset\mathbb{R}^{m}$ is a bounded domain with smooth
boundary. By analogy with the Dirichlet principle we say that a function
$f\in W^{1,2}(\Omega,\mathcal{A}_{\mathcal{Q}}(\mathbb{R}^{n}))$ is Dir-minimizing if
\[
\underset{\Omega}{\int}|Df|^{2}\leq\underset{\Omega}{\int}|Dg|^{2}
\]
for all $g\in W^{1,2}(\Omega,\mathcal{A}_{\mathcal{Q}}(\mathbb{R}^{n}))$
whose trace on $\partial\Omega$ agrees with that of $f$. We shall
always assume $m\geq2$.
§.§ Frequency Function
We recall the frequency function and its relatives in the context
of $\mathcal{Q}-$valued functions. We have the following Hölder regularity
type theorem for Dir-minimizing functions:
There are constants $\alpha=\alpha(m,\mathcal{Q})\in(0,1)$ and $C=C(m,n,\mathcal{Q},\delta)$
with the following property. If $f:B_{1}(0)\subset\mathbb{R}^{m}\rightarrow\mathcal{A}_{\mathcal{Q}}(\mathbb{R}^{n})$
is Dir-minimizing, then $\underset{x\neq y\in\overline{B_{\delta}(0)}}{\sup}\frac{\mathcal{G}(f(x),f(y))}{|x-y|^{\alpha}}\leq C\,\mathrm{Dir}(f)^{\frac{1}{2}}$
for all $0<\delta<1$.
§ PROOF OF MAIN RESULTS
§.§ Variants of the Three Circles Theorem
In this section we give a proof of Proposition <ref>
and Theorem <ref>. The following identities will play
a crucial role in the study of convexity of the frequency function
and its relatives
Let $f:B_{R}(x)\subset\mathbb{R^{\mathit{m}}\rightarrow\mathcal{A_{Q}\mathrm{(\mathbb{R}^{\mathit{n}})}}}$
be Dir-minimizing.
Then for a.e. $0<r<R$ we have: (i) $(m-2)$$\underset{B_{r}(x)}{\int}|Df|^{2}dx=r\underset{\partial B_{r}(x)}{\int}|Df|^{2}d\sigma-2r\underset{\partial B_{r}(x)}{\int}\stackrel[i=1]{\mathcal{Q}}{\sum}|\partial_{\nu}f_{i}|^{2}d\sigma$
(ii) $\underset{B_{r}(x)}{\int}|Df|^{2}dx=\underset{\partial B_{r}(x)}{\int}\stackrel[i=1]{\mathcal{Q}}{\sum}(\partial_{\nu}f_{i})\cdot f_{i}d\sigma$
Our starting point is the following theorem.
Let $f:B_{1}(0)\subset\mathbb{R}^{m}\rightarrow\mathcal{A}_{\mathcal{Q}}(\mathbb{R}^{n})$
be Dir minimizing. Then:
(a) $H\in C^{1}(0,1)$ and the following identity holds for
all $r\in(0,1):$
\begin{equation}
H'(r)=\frac{m-1}{r}H(r)+2D(r)\label{eq:1}
\end{equation}
(b) If $H(r)>0$ then $I(r)$ is absolutely continuous and
nondecreasing. In particular, $I'(r)\geq0$ for a.e. $r$.
Statement (b) is known as Almgren's monotonicity formula. Since $D$
is absolutely continuous, it is a.e. differentiable. Therefore, in
view of equation <ref>, we see that $H'$ is a.e. differentiable.
Otherwise put, the second derivative of $H$ exists a.e. The regularity
properties of $H$ clearly apply for $\overline{H}$ as well. With
the aid of Almgren's monotonicity formula we are able to extend Agmon's
convexity result ([6]) in the context of $\mathcal{Q}-$valued
functions. This method of proof differs from Agmon's original approach
for real valued harmonic functions, which involves ODEs on Banach
Proof of Proposition 1.2. In light of the above discussion
it is clear that $a$ is $C^{1}(-\infty,0)$ and that $a''$ exists
almost everywhere. For (i), note that
\[
C_{m}\overline{H}'(r)=\left(\frac{H(r)}{r^{m-1}}\right)'=\frac{H'(r)}{r^{m-1}}-(m-1)\frac{H(r)}{r^{m}}=
\]
\[
\frac{1}{r^{m-1}}(\frac{m-1}{r}H(r)+2D(r))-(m-1)\frac{1}{r^{m}}H(r)=\frac{2D(r)}{r^{m-1}}\geq0
\]
Where the second equality is due to equation <ref>. So
\begin{equation}
\overline{H}'(r)=\frac{2D(r)}{C_{m}r^{m-1}}\geq0\label{eq:7}
\end{equation}
Therefore, for all $t\in(-\infty,0)$
\begin{equation}
a'(t)=\frac{\overline{H}'(e^{t})e^{t}}{\overline{H}(e^{t})}\geq0\label{eq:8}
\end{equation}
For (ii), start by noting that $a(t)=\log(\overline{H}(e^{t}))=\log(H(e^{t}))-\log(C_{m}e^{t(m-1)})=\log(H(e^{t}))-\log(C_{m})-(m-1)t$.
Therefore, to prove $a''(t)\geq0$ it suffices to prove $(\log(H(e^{t})))''\geq0$.
By Theorem <ref>, we have that $I'(r)\geq0$ for a.e.
$0<r<1.$ To spare some space, all equalities and inequalities from
now on should be interpreted up to a null set. By virtue of equation <ref>,
\[
0\leq I'(r)=(\frac{rD(r)}{H(r)})'=(\frac{D(r)}{r^{m-2}\overline{H}(r)})'=\frac{1}{2}(\frac{r(H'(r)-(\frac{m-1}{r})H(r))}{H(r)})'=\frac{1}{2}(\frac{rH'(r)-(m-1)H(r)}{H(r)})'=
\]
\[
\frac{1}{2}(\frac{rH'(r)}{H(r)})'=\frac{1}{2}(\frac{(H'(r)+rH''(r))H(r)-r(H'(r))^{2}}{H^{2}(r)})
\]
Thus, we get
\begin{equation}
\frac{(H'(r)+rH''(r))H(r)-r(H'(r))^{2}}{H^{2}(r)}\geq0\label{eq:9}
\end{equation}
On the other hand by a straightforward calculation:
\begin{equation}
e^{-t}\left(\log(H(e^{t}))\right)''=\left.\frac{(H'(r)+rH''(r))H(r)-r(H'(r))^{2}}{H^{2}(r)}\right|_{r=e^{t}}\label{eq:10}
\end{equation}
Combining inequality <ref> with equation <ref> we arrive
at $e^{-t}(\log(H(e^{t}))''\geq0$ which is the same as $(\log(H(e^{t}))''\geq0$.
We are left to explain why $a$ is convex. It is classical that a
continuously differentiable function is convex iff its derivative
is nondecreasing, and so our task reduces to showing that $a'$ is
nondecreasing. By equations <ref> and <ref> we get $a'(t)=\frac{\overline{H}'(e^{t})e^{t}}{\overline{H}(e^{t})}=\frac{2D(e^{t})}{C_{m}e^{t(m-2)}\overline{H}(e^{t})}$.
Since $D(e^{t})$ is a composition of an absolutely continuous function
with a nondecreasing smooth function, it is absolutely continuous.
In addition, $\frac{1}{C_{m}e^{t(m-2)}\overline{H}(e^{t})}$ is differentiable.
So $a'(t)$ is absolutely continuous function on any closed subinterval
of $(-\infty,0)$, as a product of such function. Therefore, the fundamental
theorem of calculus is applicable: if $t_{1},t_{2}\in(-\infty,0),t_{1}<t_{2}$
then $a'(t_{2})-a'(t_{1})=\stackrel[t_{1}]{t_{2}}{\int}a''(t)dt\geq0$.
We draw the reader's attention to a somewhat delicate point, which
will be also relevant in what will come next . The implication “nonnegative
derivative a.e.$\Rightarrow$nondecreasing” is not true in general.
In Proposition <ref> we employed the fact that the
first derivative of $a$ is absolutely continuous in order to deduce
that it is convex. The absolute continuity of the derivatives is a
consequence of equation <ref>. Hence, we implicitly relied
here on the Dir-minimization property. In the case of 1 valued harmonic
functions, this technicality is not created because all functions
involved are smooth. To the best of our knowledge, improved regularity
for the frequency function and its relatives in the multivalued settings
is still an open problem.
The inequality “$\overline{H}''(r)\geq0$ a.e.” is not true in
general, as witnessed by the counterexample in section <ref>.
Nevertheless, we are still able to obtain a convexity result by reducing
the power of the normalization of $H$. More precisely we observe
the weaker
Let $f:B_{1}(0)\subset\mathbb{R^{\mathit{m}}}\rightarrow\mathcal{A}_{\mathcal{Q}}(\text{\ensuremath{\mathbb{R}}}^{n})$
be Dir minimizing. Then
(i) $(r\overline{H}(r))'\geq0$ for all $r\in(0,1)$ (ii) $(r\overline{H}(r))''\geq0$
for a.e. $r\in(0,1)$. Furthermore $r\mapsto r\overline{H}(r)$ is convex.
Proof. As for (i) we can in fact derive a stronger result,
namely $\overline{H}'(r)\geq0$ for all $r\in(0,1)$. We compute:
\begin{equation}
\overline{H}'(r)=\frac{1}{C_{m}}[-(m-1)r^{-m}H(r)+r^{-(m-1)}H'(r)]=\frac{r^{-(m-1)}}{C_{m}}[H'(r)-\frac{m-1}{r}H(r)]=\frac{2D(r)r^{-(m-1)}}{C_{m}}\geq0\label{eq:3.4}
\end{equation}
where the last equality is by equation <ref>. For (ii), start
by noting that equation <ref> implies the following equality
for a.e. $r$
\begin{equation}
\overline{H}''(r)=\frac{2}{C_{m}r^{m-1}}[D'(r)-\frac{(m-1)D(r)}{r}]\label{eq:3.4'}
\end{equation}
Gathering our calculations one readily checks that $(r\overline{H}(r))''\geq0\iff rD'(r)-(m-3)D(r)\geq0$.
\[
rD'(r)-(m-3)D(r)\geq rD'(r)-(m-2)D(r)=
\]
\[
r\underset{\partial B_{r}(0)}{\int}|Df|^{2}-(m-2)\underset{B_{r}(0)}{\int}|Df|^{2}=2r\underset{\partial B_{r}(0)}{\int}\stackrel[i=1]{\mathcal{Q}}{\sum}|\partial_{\nu}f_{i}|^{2}\geq0
\]
Where the last equality is thanks to Proposition <ref>, (i).
The convexity of $r\mapsto r\overline{H}(r)$ follows by a similar
argument to the one demonstrated in Proposition <ref>.
It is clear that Proposition <ref> implies in particular
that the same conclusion holds true for $H$ (and this can also be
derived directly from iterating equation <ref>).
We present now a proof of Theorem <ref>, including a
higher dimensional analog. The main ingredients of the proof are the
variational formulas provided by Proposition <ref>
and the following estimates
Let $f:B_{1}(0)\subset\mathbb{R^{\mathit{m}}}\rightarrow\mathcal{A}_{\mathcal{Q}}\mathrm{(\mathbb{R}^{\mathit{n}})}$
be Dir-minimizing and suppose that $g_{r}:=f|_{\partial B_{r}(0)}\in W^{1,2}(\partial B_{r}(0),\mathcal{A}_{\mathcal{Q}}\mathrm{(\mathbb{R}^{\mathit{n}}))}$
for a.e $0<r<1$. Then for a.e $r$: (i) If $m=2$,
$\mathrm{Dir}(f,B_{r}(0))\leq\mathcal{Q}r\mathrm{Dir}(g_{r},\partial B_{r}(0))$
(ii) If $m>2$, $\mathrm{Dir}(f,B_{r}(0))\leq c(m)r\mathrm{Dir}(g_{r},\partial B_{r}(0))$,
where $c(m)<\frac{1}{m-2}$.
Before going into the proof of Theorem <ref>, let us
heuristically explain why it is reasonable to expect the validity
of such a result through the following example.
We recall that a function $f:B_{1}(0)\rightarrow\mathcal{A}_{\mathcal{Q}}\mathrm{(\mathbb{R}^{\mathit{n}})}$
is $\alpha-$homogeneous ($\alpha>0)$ if $\forall y\in B_{1},y\neq0:f(y)=|y|^{\alpha}f(\frac{y}{|y|})$.
Denote by $\eta:\mathcal{A}_{\mathcal{Q}}\mathrm{(\mathbb{R}^{\mathit{n}})}\rightarrow\mathbb{R}^{n}$
the center of mass map defined by $\eta(\stackrel[i=1]{\mathcal{Q}}{\sum}[[P_{i}]])=\frac{\stackrel[i=1]{\mathcal{Q}}{\sum}P_{i}}{\mathcal{Q}}$.
We can derive an explicit formula for the $L^{2}-$growth function
of a continuous $\alpha-$homogeneous map:
\[
H(r)=\underset{\partial B_{r}}{\int}|f|^{2}d\sigma=r^{m-1}\underset{\partial B_{1}}{\int}|f|^{2}(ry)d\sigma=
\]
\[
r^{m-1}\underset{\partial B_{1}}{\int}\stackrel[i=1]{\mathcal{Q}}{\sum}|f_{i}|^{2}(ry)d\sigma=r^{2\alpha+m-1}\underset{\partial B_{1}}{\int}\stackrel[i=1]{\mathcal{Q}}{\sum}|f_{i}|^{2}(y)d\sigma=\kappa r^{2\alpha+m-1}
\]
For some constant $\kappa\geq0$. Hence $\overline{H}$ takes the
form $\overline{H}(r)=\kappa r^{2\alpha}$. Assume now that $m=2$ and that $f$
is a Dir-minimizing, nontrivial, $\alpha$-homogeneous map with $\eta\circ f=0$
(the simplest example of such a map is the 2-valued function $f:B_{1}(0)\rightarrow\mathcal{A}_{2}(\mathbb{R}^{2})$
defined by $f(z)=\underset{w^{2}=z}{\sum}[[w]]$. See Theorem <ref>).
It is proved in [1], Proposition 8.2 that in this case necessarily
$\alpha=\frac{p}{q}\in\mathbb{Q}$ for some $q\leq\mathcal{Q}$. So
$\overline{H}(r)=\kappa r^{\frac{2p}{q}}$, which implies $\overline{H}(r^{\frac{q}{2}})=\kappa r^{p}$,
and the latter function is obviously absolutely monotonic. We are
thus led to speculate that, more generally, composing $\overline{H}$
with a suitable $\frac{\mathcal{Q}}{2}$-power produces a function
which is more well behaved. Theorem <ref> partially confirms
this speculation.
Proof of Theorem <ref>. That $\overline{h}'_{\frac{\mathcal{Q}}{2}}(r)\geq0$
follows immediately from equation <ref>. Henceforth all equalities
and inequalities should be interpreted up to a null set. A direct calculation gives
\[
\overline{h}_{N}''(r)=N(N-1)r^{N-2}\overline{H}'(r^{N})+N^{2}r^{2N-2}\overline{H}''(r^{N})
\]
Writing $\xi=r^{N}$, we see that $\overline{h}_{N}''(r)\geq0\Leftrightarrow(\frac{N-1}{N})\overline{H}'(\xi)+\overline{H}''(\xi)\xi\geq0$.
Taking $N=\frac{\mathcal{Q}}{2}$ we get
\[
\overline{h}_{\frac{\mathcal{Q}}{2}}''(\xi)\geq0\Leftrightarrow(\mathcal{Q}-2)\overline{H}'(\xi)+\mathcal{Q}\overline{H}''(\xi)\xi\geq0
\]
Owing to equation <ref> we can express $\overline{H}'$ and $\overline{H}''$ as follows:
\[
\overline{H}'(\xi)=\frac{2}{C\xi}\underset{B_{\xi}(0)}{\int}|Df|^{2}
\]
\[
\overline{H}''(\xi)=\frac{2}{C\xi}[\underset{\partial B_{\xi}(0)}{\int}|Df|^{2}-\frac{1}{\xi}\underset{B_{\xi}(0)}{\int}|Df|^{2}]
\]
We now have
\begin{equation}
C\xi[(\mathcal{Q}-2)\overline{H}'(\xi)+\mathcal{Q}\overline{H}''(\xi)\xi]=2(\mathcal{Q}-2)\underset{B_{\xi}(0)}{\int}|Df|^{2}+2\mathcal{Q}\xi[\underset{\partial B_{\xi}(0)}{\int}|Df|^{2}-\frac{1}{\xi}\underset{B_{\xi}(0)}{\int}|Df|^{2}]=2\mathcal{Q}\xi\underset{\partial B_{\xi}(0)}{\int}|Df|^{2}-4\underset{B_{\xi}(0)}{\int}|Df|^{2}\label{eq:3.7}
\end{equation}
Proposition <ref>, (i) combined with Proposition
<ref> yield the following estimate:
\[
\underset{B_{\xi}(0)}{\int}|Df|^{2}dx\leq\xi\mathcal{Q}\mathrm{Dir}(g_{\xi},\partial B_{\xi}(0))
\]
\[
2\underset{B_{\xi}(0)}{\int}|Df|^{2}\leq2\xi\mathcal{Q}\mathrm{Dir}(g_{\xi},\partial B_{\xi}(0))=2\xi\mathcal{Q}[\underset{\partial B_{\xi}(0)}{\int}|Df|^{2}-\stackrel[i=1]{\mathcal{Q}}{\sum}|\partial_{\nu}f_{i}|^{2}d\sigma]=\mathcal{Q}\xi\underset{\partial B_{\xi}(0)}{\int}|Df|^{2}
\]
Where the last equality is due to <ref>, (i).
Combining the last inequality with equation <ref> gives $(\mathcal{Q}-2)\overline{H}'(\xi)+\mathcal{Q}\overline{H}''(\xi)\xi\geq0$,
as wanted. Finally, note that $\overline{h}'_{\frac{\mathcal{Q}}{2}}(r)=\frac{\mathcal{Q}}{2}\cdot\frac{2D(r^{\frac{\mathcal{Q}}{2}})r^{-\frac{\mathcal{Q}}{2}(m-1)}}{C_{m}}r^{\frac{\mathcal{Q}}{2}-1}=\frac{\mathcal{Q}D(r^{\frac{\mathcal{Q}}{2}})r^{\frac{\mathcal{Q}}{2}(2-m)-1}}{C_{m}}$.
Since $D(r^{\frac{\mathcal{Q}}{2}})$ is a composition of absolutely
continuous nondecreasing functions, it is absolutely continuous. In
view of the previous equation we see that $\overline{h}'_{\frac{\mathcal{Q}}{2}}$
is absolutely continuous. Thus, proceeding as in Proposition <ref>,
it follows that $\overline{h}{}_{\frac{\mathcal{Q}}{2}}$ is convex.
We finish this section by stating a higher dimensional analog of Theorem <ref>.
Let $m>2$ and $f:B_{1}(0)\subset\mathbb{R}^{m}\rightarrow\mathcal{A}_{\mathcal{Q}}(\mathbb{R}^{\mathit{n}})$
be a Dir minimizing function. Suppose $f|_{\partial B_{r}}\in W^{1,2}(\partial B_{r},\mathcal{A}_{\mathcal{Q}}(\text{\ensuremath{\mathbb{R}}}^{n}))$
for a.e. $0<r<1$. Let $c(m)=\frac{1}{m-2}-\epsilon_{m}$
, $0<\epsilon_{m}<\frac{1}{m-2}$ be the constant obtained via proposition
<ref>. Let $\alpha_{m}=\frac{1-\epsilon_{m}(m-2)}{\epsilon_{m}(m-2)^{2}}$.
(i) $\overline{h}'_{\frac{\alpha_{m}}{2},f}(r)\geq0$ for all $r\in(0,1)$;
(ii) $\overline{h}''_{\frac{\alpha_{m}}{2},f}(r)\geq0$ for a.e. $r\in(0,1)$.
Furthermore, $\overline{h}_{\frac{\alpha_{m}}{2}}$ is convex.
The proof is identical to that of Theorem <ref>, using
estimate (ii) in Proposition <ref> instead of (i).
We can now conclude that nontrivial Dir minimizing, $\alpha$-homogeneous
functions have exponents $\alpha$ bounded away from $0$.
There are constants $\beta_{m}>0$ with the
following property. Let $f:B_{1}(0)\subset\mathbb{R}^{m}\rightarrow\mathcal{A}_{\mathcal{Q}}(\mathbb{R}^{\mathit{n}})$
be a nontrivial Dir minimizing, $\alpha-$homogeneous function.
Suppose $f|_{\partial B_{r}}\in W^{1,2}(\partial B_{r},\mathcal{A}_{\mathcal{Q}}(\text{\ensuremath{\mathbb{R}}}^{n}))$
for a.e. $0<r<1$. Then $\alpha\geq\beta_{m}$.
Proof. Put $\beta_{2}=\frac{1}{\mathcal{Q}}$ and $\beta_{m}=\frac{1}{\alpha_{m}},m>2$.
According to the computation performed in Example <ref>,
$\overline{H}(r)=\kappa r^{2\alpha}$ for some $\kappa>0$. According
to Theorem <ref> and Theorem <ref> we see
that $0\leq\kappa\frac{\alpha}{\beta_{m}}(\frac{\alpha}{\beta_{m}}-1)r^{\frac{\alpha}{\beta_{m}}-2}$
for a.e. $r$, which in particular gives $\alpha\geq\beta_{m}$.
Note that in the specific case that $m=2$ and $\eta\circ f=0$, Corollary
<ref> recovers a weaker form of Proposition 8.2 of [1],
which was already mentioned in Example <ref>.
§ COUNTEREXAMPLES
The following theorem allows us to produce nontrivial examples of
Dir-minimizing functions:
Let $a\neq0,b\in\mathbb{R}$. Define $u:B_{1}(0)\subset\mathbb{R}^{2}\rightarrow\mathcal{A}_{\mathcal{Q}}(\mathbb{R}^{2})$
by $u(z)=\underset{w^{\mathcal{Q}}=az+b}{\sum}[[w]]$. Then $u$ is Dir-minimizing.
Define $f:B_{1}(0)\subset\mathbb{R}^{2}\mathbb{\rightarrow\mathcal{A}_{\mathrm{3}}\mathrm{(\mathbb{R^{\mathrm{2}}\mathrm{)}}}}$
by $f(z)=\underset{w^{3}=z}{\sum}[[w]]$. That $f$ is Dir minimizing
follows from Theorem <ref>. We compute:
\[
\underset{B_{\rho}(0)}{\int}|f|^{2}dx=\stackrel[0]{2\pi}{\int}\stackrel[0]{\rho}{\int}3r^{\frac{5}{3}}drd\theta=3\stackrel[0]{2\pi}{\int}\stackrel[0]{\rho}{\int}r^{\frac{5}{3}}drd\theta=\frac{9\pi\rho^{\frac{8}{3}}}{4}=C\rho^{\frac{8}{3}}
\]
where $C>0$. Therefore, $H(\rho)=C\rho^{\frac{5}{3}}$ for some
constant $C>0$, and so $\overline{H}(\rho)=\frac{C\rho^{\frac{5}{3}}}{2\pi\rho}=C\rho^{\frac{2}{3}}$,
hence $\overline{H}''(\rho)<0$.
Our next aim is to show that the boundary regularity condition appearing
in Proposition <ref> is indeed verified for a certain class of
Dir-minimizing functions on the planar unit disk. This will be the
content of Lemma <ref>, which is of interest in its own
right. Any $z\in\mathbb{C}-\{0\}$ admits a representation of the
form $z=Re^{i\omega}$ for some $R>0,\omega\in[0,2\pi)$. We shall
use the convention $\sqrt{z}=\sqrt{R}e^{\frac{i\omega}{2}}$.
Fix $r\in(0,\frac{1}{2})\cup(\frac{1}{2},1)$
and define $h:(0,2\pi)\rightarrow\mathbb{C}$ by $h(\theta)=\sqrt{2re^{i\theta}-1}$.
Then $h\in W^{1,2}((0,2\pi),\mathbb{C})$.
Proof. Write $g(\theta)=2re^{i\theta}-1$. For $r\neq\frac{1}{2}$ we have $|g(\theta)|\geq|2r-1|>0$, and $g(\theta)$ lies on the positive real axis (the branch cut of our square root) only if $\sin\theta=0$ and $2r\cos\theta>1$, which cannot happen for $\theta\in(0,2\pi)$. Hence $h$ is differentiable on $(0,2\pi)$, with $2h(\theta)h'(\theta)=g'(\theta)=2ire^{i\theta}$, so $|h'(\theta)|=\frac{r}{\sqrt{|g(\theta)|}}\leq\frac{r}{\sqrt{|2r-1|}}$. Since $h$ and $h'$ are bounded, $h\in W^{1,2}((0,2\pi),\mathbb{C})$. $\square$
We recall that $f\in W^{1,2}(\Omega,\mathcal{A}_{\mathcal{Q}}(\mathbb{R}^{n}))$
if there are $\{\varphi_{j}\}_{j=1}^{m}\subset L^{2}(\Omega,\mathbb{R}_{\geq0})$
such that 1. $x\mapsto\mathcal{G}(f(x),T)\in W^{1,2}(\Omega)$ for
all $T\in\mathcal{A}_{\mathcal{Q}}(\mathbb{R}^{n})$ and 2. $|\partial_{j}\mathcal{G}(f(x),T)|\leq\varphi_{j}$
a.e. for all $T\in\mathcal{A}_{\mathcal{Q}}(\mathbb{R}^{n})$ and
$1\leq j\leq m$.
Fix $r\in(0,\frac{1}{2})\cup(\frac{1}{2},1)$ and
define $f:(0,2\pi)\rightarrow\mathcal{A}_{2}(\mathbb{R}^{2})$ by
$f(\theta)=\underset{z^{2}=2re^{i\theta}-1}{\sum}[[z]]$. Then $f\in W^{1,2}((0,2\pi),\mathcal{A}_{2}(\mathbb{R}^{2}))$.
Proof. Let $T=\stackrel[i=1]{2}{\sum}[[T_{i}]]\in\mathcal{A}_{2}(\mathbb{R}^{2})$.
Set $h(\theta)=\sqrt{2re^{i\theta}-1}$ and for $P=(P_{1},P_{2})\in\mathbb{R}^{2}\times\mathbb{R}^{2}$
denote by $d_{P}:\mathbb{R}^{2}\rightarrow\mathbb{R}$ the function
$d_{P}(x)=\sqrt{|x-P_{1}|^{2}+|x-P_{2}|^{2}}$. It is not difficult
to see that for any fixed $P$, $d_{P}$ is Lipschitz.
Put $\alpha_{T}(\theta)=(d_{(-T_{1},T_{2})}\circ h)(\theta),\beta_{T}(\theta)=(d_{(T_{1},-T_{2})}\circ h)(\theta)$.
Note that
\begin{equation}
\mathcal{G}(f(\theta),T)=\frac{\alpha_{T}(\theta)+\beta_{T}(\theta)-|\alpha_{T}(\theta)-\beta_{T}(\theta)|}{2}\label{eq:}
\end{equation}
By Lemma <ref>, $\alpha_{T},\beta_{T}$ are a composition
of a Lipschitz function on a $W^{1,2}(0,2\pi)$ function. Therefore
$\alpha_{T},\beta_{T}$ are also $W^{1,2}(0,2\pi)$. In view of Equation
<ref> it is now apparent that $\theta\mapsto\mathcal{G}(f(\theta),T)\in W^{1,2}(0,2\pi)$.
In addition, there is at most one $\theta_{0}\in(0,2\pi)$ for which
$\alpha_{T}(\theta_{0})=0$. An elementary calculation shows that
the following estimate is obeyed for $\theta\in(0,2\pi)-\{\theta_{0}\}$:
\[
|\partial_{\theta}\alpha_{T}(\theta)|\leq4(|(\partial_{\theta}h_{1})(\theta)|+|(\partial_{\theta}h_{2})(\theta)|)
\]
where $h_{1},h_{2}$ denote the real and imaginary parts of $h$,
and $\partial_{\theta}h_{i}\in L^{2}(0,2\pi)$, $i=1,2$ by Lemma <ref>.
The same is true for $\beta_{T}$. Combining these estimates with
equation <ref> we see that
\[
|\partial_{\theta}\mathcal{G}(f(\theta),T)|\leq8(|(\partial_{\theta}h_{1})(\theta)|+|(\partial_{\theta}h_{2})(\theta)|)\in L^{2}(0,2\pi)
\]
for all but finitely many $\theta\in(0,2\pi)$. So strictly by definition,
$f\in W^{1,2}((0,2\pi),\mathcal{A}_{2}(\mathbb{R}^{2}))$. $\square$
Proof of Proposition <ref>. That
$f$ is Dir minimizing follows from Theorem <ref>. Furthermore,
by Lemma <ref> $f|_{\partial B_{r}}\in W^{1,2}(\partial B_{r},\mathcal{A}_{\mathrm{2}}\mathrm{(\mathbb{R^{\mathrm{2}}\mathrm{)}}})$
for all $r\neq\frac{1}{2}$.
We compute:
\[
\underset{B_{\rho}(0)}{\int}|f|^{2}dx=2\stackrel[0]{\rho}{\int}\stackrel[0]{2\pi}{\int}r\sqrt{(2r\cos\theta-1)^{2}+4r^{2}\sin^{2}(\theta)}d\theta dr
\]
\[
H(\rho)=2\rho\stackrel[0]{2\pi}{\int}\sqrt{(2\rho\cos\theta-1)^{2}+4\rho^{2}\sin^{2}(\theta)}d\theta
\]
which implies
\[
\overline{H}(\rho)=\frac{1}{\pi}\stackrel[0]{2\pi}{\int}\sqrt{(2\rho\cos\theta-1)^{2}+4\rho^{2}\sin^{2}(\theta)}d\theta=\frac{1}{\pi}\stackrel[0]{2\pi}{\int}\sqrt{1+4\rho^{2}-4\rho\cos\theta}d\theta\coloneqq\frac{1}{\pi}A(\rho)
\]
We compute $A'''(\rho)$ for all $\frac{1}{2}<\rho<1$:
\[
A'(\rho)=\stackrel[0]{2\pi}{\int}\frac{4\rho-2\cos\theta}{\sqrt{1+4\rho^{2}-4\rho\cos\theta}}d\theta
\]
\[
A''(\rho)=\stackrel[0]{2\pi}{\int}\frac{4(1+4\rho^{2}-4\rho\cos\theta)-(4\rho-2\cos\theta)^{2}}{(1+4\rho^{2}-4\rho\cos\theta)^{\frac{3}{2}}}d\theta=\stackrel[0]{2\pi}{\int}\frac{4\sin^{2}\theta}{(1+4\rho^{2}-4\rho\cos\theta)^{\frac{3}{2}}}d\theta
\]
\begin{equation}
A'''(\rho)=-\stackrel[0]{2\pi}{\int}\frac{24\sin^{2}\theta\,(2\rho-\cos\theta)}{(1+4\rho^{2}-4\rho\cos\theta)^{\frac{5}{2}}}d\theta\label{eq:4.1}
\end{equation}
We note that as long as $\frac{1}{2}<\rho<1$ we have $2\rho-\cos\theta>0$ for all
$\theta$, so the RHS of equation <ref> is strictly negative, hence the claim. $\square$
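As a numerical sanity check of this computation (a sketch for the reader's convenience, not part of the argument), one can evaluate the last integral by quadrature and confirm its sign on $(\frac{1}{2},1)$:

```python
import numpy as np
from scipy.integrate import quad

def A_third_derivative(rho):
    """Evaluate A'''(rho) as the integral derived above, by quadrature."""
    integrand = lambda t: (-24.0 * np.sin(t) ** 2 * (2.0 * rho - np.cos(t))
                           / (1.0 + 4.0 * rho ** 2 - 4.0 * rho * np.cos(t)) ** 2.5)
    value, _ = quad(integrand, 0.0, 2.0 * np.pi)
    return value

for rho in (0.6, 0.75, 0.9):
    assert A_third_derivative(rho) < 0.0  # A''' < 0 on (1/2, 1)
```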
§ APPENDIX
This section was explained to me by Dan Mangoubi. We present here
a proof of Theorem <ref>.
The converse statement of Theorem <ref> is clearly not
true. As an example we can take $m=1$ and $u(x)=x^{2}$. Clearly
$u$ is not harmonic. However,
\[
\overline{H}_{u}(r)=\frac{u^{2}(r)+u^{2}(-r)}{2}=r^{4},
\]
which is obviously absolutely monotonic.
Let $u$ be harmonic on $B_{1}(0)$.
Let $\phi:[0,1]\rightarrow\mathbb{R}$ be smooth and nondecreasing.
Define $d(r)=\underset{B_{1}(0)}{\int}u^{2}(rx)\phi'(||x||^{2})dx$.
Then $d$ is absolutely monotonic.
We denote by $u_{i}$ the derivative of $u$ with respect to the $i-$th
variable. Define $\psi(\xi)=\phi(\frac{||\xi||^{2}}{r^{2}})$.
\[
d'(r)=\underset{B_{1}(0)}{\int}[2\stackrel[i=1]{m}{\sum}u(rx)u_{i}(rx)x_{i}]\phi'(||x||^{2})dx=\underset{B_{r}(0)}{\int}[2\stackrel[i=1]{m}{\sum}u(\xi)u_{i}(\xi)\frac{\xi_{i}}{r^{m+1}}]\phi'(\frac{||\xi||^{2}}{r^{2}})d\xi=
\]
\[
=\frac{1}{r^{m-1}}\underset{B_{r}(0)}{\int}[2\stackrel[\text{\ensuremath{i}=1}]{m}{\sum}u(\xi)u_{i}(\xi)\frac{\xi_{i}}{r^{2}}]\phi'(\frac{||\xi||^{2}}{r^{2}})d\xi=\frac{1}{2r^{m-1}}\underset{B_{r}(0)}{\int}\nabla u^{2}\cdot\nabla\psi d\xi\overset{\ast}{=}
\]
\[
\frac{1}{2r^{m-1}}[\underset{\partial B_{r}(0)}{\int}\psi\nabla_{\nu}u^{2}d\sigma-\underset{B_{r}(0)}{\int}\psi\Delta u^{2}d\xi]=\frac{1}{2r^{m-1}}[\underset{\partial B_{r}(0)}{\int}\phi(1)\nabla_{\nu}u^{2}d\sigma-\underset{B_{r}(0)}{\int}\psi\Delta u^{2}d\xi]\overset{\ast\ast}{=}
\]
\[
=\frac{1}{2r^{m-1}}[\underset{B_{r}(0)}{\int}\phi(1)\Delta u^{2}d\xi-\underset{B_{r}(0)}{\int}\psi\Delta u^{2}d\xi]=\frac{1}{2r^{m-1}}\underset{B_{r}(0)}{\int}(\phi(1)-\psi)\Delta u^{2}d\xi
\]
The equality $\ast$ is by Green's identity and the equality $\ast\ast$
is by the divergence theorem. Taking advantage of the identity $\Delta u^{2}=2|\nabla u|^{2}$
for $u$ harmonic we obtain
\[
d'(r)=\frac{1}{r^{m-1}}\underset{B_{r}(0)}{\int}(\phi(1)-\psi)|\nabla u|^{2}d\xi=r\underset{B_{1}(0)}{\int}(\phi(1)-\phi(||x||^{2}))|\nabla u|^{2}(rx)dx
\]
Let $\Phi$ be some antiderivative of $\phi$ and define $\varphi:[0,1]\rightarrow\mathbb{R}$
by $\varphi(\rho)=\phi(1)\rho-\Phi(\rho)$. Evidently $\varphi$ is
nondecreasing and according to the computation we performed
\[
d'(r)=r\stackrel[i=1]{m}{\sum}\underset{B_{1}(0)}{\int}u_{i}^{2}(rx)\varphi'(||x||^{2})dx
\]
Since each $u_{i}$ is harmonic we can iterate the same argument
in order to obtain $d^{(k)}(r)\ge0$ for all $k$.
As a corollary we obtain a proof of Theorem <ref>:
Let $u$ be harmonic on $B_{1}(0)$. Then $\overline{H}$
is absolutely monotonic.
\[
\overline{H}(r)=\frac{1}{C_{m}r^{m-1}}\underset{\partial B_{r}(0)}{\int}u^{2}(x)d\sigma=\frac{1}{C_{m}r^{m-1}}\frac{d}{dr}[\underset{B_{r}(0)}{\int}u^{2}(x)dx]=
\]
\[
\frac{1}{C_{m}r^{m-1}}\frac{d}{dr}[r^{m}\underset{B_{1}(0)}{\int}u^{2}(rx)dx]=
\]
\[
\frac{1}{C_{m}}[m\underset{B_{1}(0)}{\int}u^{2}(rx)dx+r\frac{d}{dr}[\underset{B_{1}(0)}{\int}u^{2}(rx)dx]]
\]
Taking $\phi(t)=t$ in Proposition <ref>, we see
that last expression is a sum of absolutely monotonic functions, and
hence absolutely monotonic. $\square$
[1] Camillo De Lellis and Emanuele Nunzio Spadaro. Q-valued functions and approximation of minimal currents. PhD Thesis. 2009.
[2] Sergei Bernstein. Sur la définition et les propriétés des fonctions analytiques d'une variable réelle. Math. Ann. 75 (1914), no. 4, 449–468 (French).
[3] Gabor Lippner and Dan Mangoubi. Harmonic functions on the lattice: Absolute monotonicity and propagation of smallness. Duke Math. J. 164, no. 13. 2015.
[4] Camillo De Lellis. Private communication.
[5] Frederick J. Almgren. Almgren's big regularity paper. World Scientific. 2000.
[6] S. Agmon. Unicité et convexité dans les problèmes différentiels. Séminaire de Mathématiques Supérieures, No. 13 (Été, 1965). Les Presses de l'Université de Montréal, Montreal, Que. 1966.
[7] G. Pólya and G. Szegő. Problems and theorems in analysis. Vol. I: Series, integral calculus, theory of functions, Springer-Verlag, New York, 1972. Translated from the German by D. Aeppli; Die Grundlehren der mathematischen Wissenschaften, Band 193.
Immanuel Ben Porat
Einstein Institute of Mathematics, The Hebrew University of Jerusalem
Edmond J. Safra Campus; Givat Ram, Jerusalem, Israel; 9190401
# Improved Algorithm for Min-Cuts in Distributed Networks
Mohit Daga
(OCTOBER 2017)
###### Abstract
KEYWORDS: Connectivity, Reliability, Edge Min-Cuts
In this thesis, we present a fast deterministic algorithm to find small cuts in
distributed networks. Finding small min-cuts for a network is essential for
ensuring the quality of service and reliability. Throughout this thesis, we
use the CONGEST model which is a typical message passing model used to design
and analyze algorithms in distributed networks. We survey various algorithmic
techniques in the CONGEST model and give an overview of the recent results to
find cuts. We also describe elegant graph theoretic ideas like cut spaces and
cycle spaces that provide useful intuition upon which our work is built.
Our contribution is a novel fast algorithm to find small cuts. Our algorithm
relies on a new characterization of trees and cuts introduced in this thesis.
Our algorithm is built upon several new algorithmic ideas that, when coupled
with our characterization of trees and cuts, help us to find the required min-
cuts. Our novel techniques include a _tree restricted semigroup function
(TRSF)_ , a novel _sketching_ technique, and a _layered algorithm_. TRSF is
defined with respect to a spanning tree and is based on a commutative
semigroup. This simple yet powerful technique helps us to deterministically
find min-cuts of size one (bridges) and min-cuts of size two optimally. Our
sketching technique samples a small but relevant vertex set which is enough to
find small min-cuts in certain cases. Our layered algorithm finds min-cuts in
smaller sub-graphs _pivoted_ by nodes at different levels in a spanning tree
and uses them to make the decision about the min-cuts in the complete graph.
This is interesting because it enables us to show that even for a global
property like finding min-cuts, local information can be exploited in a
coordinated manner.
This is to certify that the thesis titled Improved Algorithm for Min-Cuts in
Distributed Networks, submitted by Mohit Daga, to the Indian Institute of
Technology, Madras, for the award of the degree of Master of Science, is a
bona fide record of the research work done by him under my supervision. The
contents of this thesis, in full or in parts, have not been submitted to any
other Institute or University for the award of any degree or diploma.
Dr. John Augustine
Research Guide
Associate Professor
Dept. of CSE
IIT-Madras, 600 036
Place: Chennai
Date:
###### Acknowledgements.
First and foremost, I would like to extend my sincere gratitude to my thesis
supervisor Dr. John Augustine. I am thankful to John for his constant
encouragement, guidance and optimism. John actively helped me in improving my
technical writing and presentation with lot of patience. Our countless
discussions about research (we literally burnt the midnight oil) enabled the
timely completion of this thesis. His opinion and vision about a career in
research and academia are well placed. I also closely collaborated with Dr.
Danupon Nanongkai, who introduced me to the problem covered in this thesis. I
thank him for inviting me to KTH in April’17. Danupon also reviewed a draft of
this work and gave me valuable inputs. During the initial days at IIT-Madras,
I interacted with Dr. Rajseakar Manokaran, in his capacity as my faculty
adviser. He inspired me to pursue research in Theoretical Computer Science. I
thank Dr. N.S. Narayanaswamy, chairperson of my General Test Committee (GTC)
for being critical of my seminar presentations and helping me to grow as a
researcher. I also thank Dr. Krishna Jagannathan and Dr. Shweta Agrawal for
accepting to be part of my GTC. I feel proud to be part of IIT Madras and CSE
Department. I thank the administration here for establishing tranquility and
serenity of campus after the devastating 2016 cyclone. I thank the support
staff of my department. Special thanks to Mr. Mani from the CSE office for
scheduling the required meetings and helping me to understand the procedures
and requirements with patience. During my stay at IIT Madras, I was part of
two groups: ACT Lab and TCS Lab. I thank my lab mates for maintaining good
work culture. My various discussions on a wide range of topics with my lab
mates were enjoyable and enriching. This experience will be helpful when I
move to Stockholm for my Ph.D. studies. Last but not the least, I thank my
mother Beena, my father Manohar and my sisters: Kaushal and Sheetal. This
thesis is dedicated to them.
###### Contents
1. NOTATION
2. 1 Introduction
1. 1.1 Notations and Conventions
2. 1.2 Distributed Model
3. 1.3 Classification of network problems based on locality
4. 1.4 Overview of past results
5. 1.5 Organization of this thesis
3. 2 Preliminaries and Background
1. 2.1 Simple algorithms in Distributed Networks
1. 2.1.1 Tree Constructions and Notations
2. 2.1.2 Convergecast and Broadcast
2. 2.2 Cuts Space and Cycle Space
3. 2.3 Min-Cuts in Distributed Networks
1. 2.3.1 Using Greedy Tree Packing
2. 2.3.2 Using Random Circulations
4. 3 Technical Overview
1. 3.1 Characterization of Trees and Cuts
2. 3.2 Our Contribution
5. 4 Tree Restricted Semigroup Function
1. 4.1 Min-Cuts of size 1
2. 4.2 Min-Cuts of size 2
6. 5 Min-Cuts of size three
1. 5.1 Graph Sketching
1. 5.1.1 Definition of Sketch
2. 5.1.2 Algorithm to Compute Sketch
3. 5.1.3 Application of graph sketch
2. 5.2 Layered Algorithm
1. 5.2.1 Definition of Layered Algorithm
2. 5.2.2 Application of layered algorithm
3. 5.3 Summary
7. 6 Future Work
###### List of Tables
1. 5.1 Overview of the case-structure of min-cut of size $3$
###### List of Figures
1. 3.1 Roadmap for finding small min-cuts
2. 5.1 Different cases of min-cuts of size three
3. (a) CASE-2 (Either of $\alpha$ or $\beta$ is 1 and the other is $0$)
4. (b) CASE-3 (Either of $\alpha$ or $\beta$ is 1 and the other is $0$)
5. (c) CASE-4 (Both $\alpha$ and $\beta$ are non-zero)
6. (d) CASE-5 (At least two of $\alpha$, $\beta$ and $\gamma({v_{2}}^{\downarrow},{v_{3}}^{\downarrow})$ are non-zero)
7. (e) CASE-6 (at least two of $\gamma({v_{1}}^{\downarrow},{v_{2}}^{\downarrow})$, $\gamma({v_{3}}^{\downarrow},{v_{2}}^{\downarrow})$, $\gamma({v_{1}}^{\downarrow},{v_{3}}^{\downarrow})$ are non-zero)
8. (f) CASE-7 (Both $\alpha$ and $\beta$ are non-zero)
9. 5.2 Motivation for Sketching Technique in CASE-3
10. 5.3 Motivation for Sketching Technique in CASE-7
11. 5.4 Branching Number
12. 5.5 Different types of sub-cases which occur when there exists a min-cut of size $3$ as in CASE-3
13. 5.6 Two different non-isomorphic sub-cases of a min-cut as given by CASE-6
14. (a) One of $\gamma({v_{1}}^{\downarrow},{v_{2}}^{\downarrow}),\gamma({v_{1}}^{\downarrow},{v_{3}}^{\downarrow}),\gamma({v_{2}}^{\downarrow},{v_{3}}^{\downarrow})$ is zero. WLOG let $\gamma({v_{2}}^{\downarrow},{v_{3}}^{\downarrow})=0$
15. (b) All three of $\gamma({v_{1}}^{\downarrow},{v_{2}}^{\downarrow}),\gamma({v_{1}}^{\downarrow},{v_{3}}^{\downarrow}),\gamma({v_{2}}^{\downarrow},{v_{3}}^{\downarrow})$ are non-zero
16. 5.7 3-Sketch of different cases as viewed from node $v$
17. 5.8 Reduced sketch ${\mathcal{S}}^{2}(v_{2},v_{3})$ when for some $v_{1},v_{2},v_{3}\in V\setminus r$ as in CASE-7
18. 5.9 Min-cut of size $3$ as given by CASE-5
IITM | Indian Institute of Technology, Madras
---|---
TRSF | Tree Restricted Semigroup function
## NOTATION
${v}^{\downarrow{T}}$ | set of descendants of vertex $v$ (including itself) in some tree $T$
---|---
$\mathcal{A}_{T}\left(v\right)$ | set of ancestors of vertex $v$ (including itself) in some tree $T$
$\pi_{{T}}\left(v\right)$ | parent of vertex $v$ in some tree $T$
$\operatorname{children}_{T}(v)$ | children of node $v$ in some tree $T$
$\delta(A)$ | cut set induced by vertex set $A$
$\eta_{T}(v)$ | $|\delta({v}^{\downarrow{T}})|$
$\gamma(A,B)$ | $|\delta(A)\cap\delta(B)|$, where $A,B$ are vertex set
$\ell_{T}\left(v\right)$ | level of node $v$ in a rooted tree $T$, root at level $0$
$\alpha_{T}(x,l)$ | ancestor of vertex $x$ at level $l$
$R_{T}(v)$ | canonical tree of a node $v$ with respect to the spanning $T$
$\mathcal{S}_{T}^{k}(v)$ | $k$-Sketch of node $v$ w.r.t a spanning tree $T$
$\mathcal{\xi}_{{T}}\left(v\right)$ | branching number of node $v$ w.r.t some tree $T$
${\mathcal{S}}_{T}^{k}(v,a)$ | reduced $k$-Sketch of a node $v$ and some $a\in{v}^{\downarrow{T}}\setminus v$ w.r.t a spanning tree $T$
$\rho_{T}(v)$ | a path from the root to node $v$ in the tree $T$
$\operatorname{LCA}_{T}(v_{1},v_{2})$ | Lowest Common Ancestor of two node $v_{1},v_{2}$ in some tree T
$\overline{\mathcal{N}}_{T}({A})$ | $\left\\{y\mid x\in A,y\notin A,(x,y)\in E,(x,y)\ \text{is a non-tree edge in tree }T\right\\}$
$\mathcal{P}_{T}(A)$ | $\left\\{\rho_{T}(a)\mid a\in A\right\\}$
## Chapter 1 Introduction
Networks of massive sizes are all around us. From large computer networks
which are the backbone of the Internet to sensor networks of any manufacturing
unit, the ubiquity of these networks is unquestionable. Each node in these
kinds of networks is an independent compute node or abstracted as one.
Further, nodes in these networks only have knowledge of their immediate
neighbors, but not the whole topology. There are many applications which
require us to solve problems related to such networks. To solve such problems,
a trivial idea is as follows: Each node sends its adjacency list to one
particular node through the underlying communication network; that node then
solves the required problem using the corresponding sequential algorithm. Such
an idea may not be cost effective due to the constraints of the capacity of
links and large sizes of these networks. The question then is: Does there
exist a faster cost-effective algorithm through which nodes in a network can
coordinate among each other to find the required network property? Such a
question is part of a more generalized spectrum of _Distributed Algorithms_.
In this thesis, we give fast distributed algorithm for finding small min-cuts.
An edge cut is the set of edges whose removal disconnects the network and the
one with a minimum number of edges is called as a _min-cut_. Finding small
min-cuts has applications in ensuring reliability and quality of service. This
enables the network administrator to know about the critical edges in the
network.
Our approach to finding the min-cut is based on our novel distributed
algorithmic techniques and backed by new characterizations of cuts and trees.
Our algorithmic techniques are as follows:
* •
_Tree Restricted Semigroup Function_ wherein a semigroup function is computed
for each node in the network based on a rooted spanning tree
* •
_Sketching Technique_ : which allows us to collect small but relevant
information of the network in a fast and efficient manner
* •
_Layered Algorithm_ which finds min-cuts in smaller sub-graphs and
strategically uses the information to make the decision about min-cuts in the
complete graph.
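As a first taste of these techniques, the following toy sequential sketch (not the distributed algorithm itself, which appears in Chapter 4) aggregates a commutative-semigroup value over a rooted spanning tree, in the spirit of a TRSF:

```python
def aggregate_up_tree(children, value, op, root):
    """Combine each node's value with those of its descendants under a
    commutative semigroup operation `op` (sequential sketch of a TRSF)."""
    agg = {}
    def visit(v):
        acc = value[v]
        for c in children.get(v, []):
            acc = op(acc, visit(c))
        agg[v] = acc
        return acc
    visit(root)
    return agg

# Example: subtree sizes, taking op to be addition over positive integers.
children = {0: [1, 2], 1: [3, 4]}
sizes = aggregate_up_tree(children, {v: 1 for v in range(5)}, lambda a, b: a + b, 0)
assert sizes[0] == 5 and sizes[1] == 3
```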
### 1.1 Notations and Conventions
We consider an undirected, unweighted and simple graph $G=(V,E)$ where $V$ is
the vertex set and $E$ is the edge set. We use $n$ and $m$ to denote the
cardinalities of $V$ and $E$ respectively. Edges are the underlying physical
links and vertices are the compute nodes in the network. We will use the term
vertex and node interchangeably. Similarly, the terms edges and links will be
used interchangeably.
A cut $C=(A,V\setminus A)$, where $\emptyset\subsetneq A\subsetneq V$ is a
partition of the vertex set into two parts. The cut-set is the set of edges
crossing these two parts. A min-cut is a cut-set with a minimum number of edges. Let
$d_{G}(u,v)$ be the distance between any two nodes $u,v$ in the network which
is defined by the number of edges in the shortest path between them. We will
consider connected networks, so $d_{G}(u,v)$ is guaranteed to be well-defined
and finite. The diameter of the graph $G$ is defined as $\max\limits_{u,v\in
V}d_{G}(u,v)$. We will use $D$ to denote the diameter of the graph.
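As a toy illustration of these definitions (not an algorithm used in this thesis), the following sketch enumerates all cuts $(A,V\setminus A)$ of a small graph and returns a min-cut; it is exponential in $n$ and serves only to make the definitions concrete:

```python
from itertools import combinations

def brute_force_min_cut(vertices, edges):
    """Return (cut_set, A) minimizing the cut size over all nonempty A != V."""
    vertices = list(vertices)
    best_cut, best_side = None, None
    for size in range(1, len(vertices)):
        for A in combinations(vertices, size):
            side = set(A)
            cut = [(u, v) for (u, v) in edges if (u in side) != (v in side)]
            if best_cut is None or len(cut) < len(best_cut):
                best_cut, best_side = cut, side
    return best_cut, best_side

# Example: a 4-cycle with one chord; the min-cut has size 2.
cut, side = brute_force_min_cut(range(4), [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)])
assert len(cut) == 2
```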
In a typical communication network, the diameter $D$ is much smaller than the
number of vertices $n$. This is because the networks are often designed in
such a way that there is an assurance that a message sent from any node to any
other node should not be required to pass through too many links.
### 1.2 Distributed Model
Different types of communication networks have different parameters. This is
primarily based on how communication takes place between any two nodes in a
distributed system. For the algorithm designers, these details are hidden and
abstracted by appropriate modeling of the real world scenario. An established
book in this domain titled ‘Distributed Computing: A Locality-Sensitive
Approach’ [23] remarks: “ _The number of different models considered in the
literature is only slightly lower than the number of papers published in this
field_ ”. But certainly, over the years, the research community working in
this area has narrowed down to a handful of models. Out of them, the CONGEST
model has been quite prominent and captures several key aspects of typical
communication networks.
In the CONGEST model, as in most other distributed networks, each vertex acts
as an independent compute node. Two nodes may communicate with each other in
synchronous rounds only if they have a physical link between them. The message
size is restricted to $O(\log n)$ bits across each link in any given round. We
will focus our attention on communication, which is assumed to be much more
expensive than local computation.
In the CONGEST model, the complexity of an algorithm is typically measured in
terms of the number of rounds required for an algorithm to terminate.
Parallels can be drawn to the time complexity of sequential algorithms. Much
of the focus has been on the study of the distributed round complexity of
algorithms, but recently there has been some renewed interest in studying the
message complexity (the sum total of messages transmitted across all edges) as
well [22, 4]. Apart from these, some recent research is also being directed
towards the study of the asymptotic memory required at each node during the
execution of the algorithm [5]. In this thesis, we will limit our focus to the
analysis of round complexity.
### 1.3 Classification of network problems based on locality
Any problem pertaining to a communication network may depend either on local
information or global information. Local information of a node comprises
knowledge of immediate neighbors or nodes which are just a constant number of
hops away. Global information includes knowledge of nodes across the network.
One of the prominent examples of a problem which just requires local
information is of finding a _maximal independent set_ (MIS). An _independent
set_ is a set of vertices such that no two vertices in the set have an edge
between them. An MIS is an independent set such that no vertex can be added to
it without violating independence. It has been shown by [18] that an MIS can
be computed in $O(\log n)$ rounds.
On the other hand, gathering global information at all nodes takes at least
$D$ rounds, because the farthest node could be as much as distance $D$ away.
Thus the complexity of algorithms which require global information is
$\Omega(D)$.
Simple problems like counting the number of nodes in the network, constructing
a BFS tree, and leader election require global information and take $O(D)$
rounds. Apart from this, there are some problems
which require global information and have been proved to take at least
$\Omega(D+\sqrt{n})$ rounds. A prominent example is that of finding a Minimum
Spanning Tree (MST). It has been shown that even finding an approximate MST
requires $\Omega(D+\sqrt{n})$ rounds [3]. One of the key ideas used in most of
the algorithms which take $O(D+\sqrt{n})$ rounds is a result from [17], which
gives a distributed algorithm that can partition the nodes of any spanning
tree $T$ into $O(\sqrt{n})$ subtrees such that each subtree has $O(\sqrt{n})$
diameter. Basically, this approach can be considered a classical
divide-and-conquer paradigm for distributed algorithms. To solve a problem
using this idea, the nodes are divided into $O(\sqrt{n})$ clusters, the
problem is solved in each of the clusters, and subsequently a mechanism is
given to combine the results.
### 1.4 Overview of past results
In this section, we will give an overview of the current progress in finding a
min-cut. A more detailed technical review of the relevant historical results
will be given in the next chapter.
In the centralized setting, the problem of finding a min-cut has seen a lot of
advances [11, 13, 12, 15, 20, 19, 6, 27, 14]. Recently, even a linear-time
centralized algorithm has been proposed for finding a min-cut by [16], further
improved by [9].
A few attempts at finding min-cuts in the distributed setting (in the CONGEST
model in particular) have been made in the past. These include work by [1, 29,
30, 24, 25, 21, 7]. A decade-old result by [25] currently stands out as the
most efficient way to find min-cuts of small size, in particular of size one.
They gave an $O(D)$-round algorithm using the random circulation technique for
finding min-cuts of size $1$. Their result has also been extended to give a
Las Vegas randomized algorithm to find min-cuts of size 2 in $O(D)$ rounds,
but a deterministic analogue of the same has so far been elusive.
More recently, [21] have given an
$O((\sqrt{n}\log^{*}n+D)k^{4}\log^{2}n)$-round algorithm to find a min-cut of
size $k$; here $\log^{*}n$ (usually read “log star”) is the number of times
the logarithm function must be iteratively applied before the result is less
than or equal to $1$. There have also been results to find an approximate
min-cut. [8] gave a distributed algorithm that, for any weighted graph and any
$\epsilon\in(0,1)$, with high probability finds a cut of size at most
$O(\epsilon^{-1}k)$ in $O(D)+\tilde{O}(\sqrt{n})$ rounds, where $k$ is the
size of the minimum cut and $\widetilde{O}$ hides logarithmic factors. [21]
improves this approximation factor and gives a $(1+\epsilon)$-approximation
algorithm in $O((\sqrt{n}\log^{*}n+D)\epsilon^{-5}\log^{3}n)$ time. Also, [7]
have given a min-cut algorithm for planar graphs which finds a
$(1-\epsilon)$-approximate min-cut in $\widetilde{O}(D)$ rounds.
While a linear-time algorithm for finding a min-cut exists in the centralized
setting, in the distributed setting the best known exact algorithm still takes
$O((\sqrt{n}\log^{*}n+D)k^{4}\log^{2}n)$ rounds to deterministically find a
min-cut of size $k>1$. Moreover, a $\widetilde{O}(D)$-round algorithm exists
to find a min-cut in planar graphs. The following question arises: is
$\Omega(\sqrt{n}+D)$ a lower bound for finding min-cuts, or can we save the
large factor of $\sqrt{n}$, as is known for planar graphs, at least for small
min-cuts? We answer this question in this thesis. We present a new algorithm
to deterministically find all the min-cuts of size one, two, and three. For
min-cuts of size one or two our algorithm takes $O(D)$ rounds. For min-cuts of
size three our algorithm takes $O(D^{2})$ rounds.
### 1.5 Organization of this thesis
In Chapter 2, we will survey simple distributed algorithms for the CONGEST
model and also review graph properties like the cuts spaces and cycle spaces.
We also give a brief overview of previously known techniques to find min-cuts
in the distributed setting, in particular the techniques based on greedy tree
packing and random circulations.
In Chapter 3, we will present our characterization of trees and cuts. Here we
will also give a road-map of the novel distributed techniques introduced in
this thesis. In Chapter 4 and Chapter 5, we will present distributed
algorithms which ensure that, whenever a min-cut of the corresponding kind
exists, the quantities required by our characterization are communicated to at
least one node in the network. In Chapter 4, we introduce the _Tree Restricted
Semigroup Function_ and show that this is enough for finding min-cuts of size
$1$ and $2$ optimally. Further, in Chapter 5, we will introduce two new
techniques, _Sketching_ and the _Layered Algorithm_, to find min-cuts of size $3$.
## Chapter 2 Preliminaries and Background
In this chapter, we will review fundamental network algorithms in the
distributed setting. Among the distributed algorithms, we will discuss the
construction of various types of spanning trees including Breadth First Search
(BFS) Tree, Depth First Search (DFS) tree and Minimum Spanning Tree (MST).
Further, we will discuss some simple tree based algorithmic paradigms
generally used in distributed networks. We also review important and relevant
concepts like cut spaces and cycle spaces and show that these are vector
spaces and orthogonal to each other. Finally, we will survey two important
recent techniques used to find min-cuts in distributed networks.
### 2.1 Simple algorithms in Distributed Networks
As mentioned in the earlier chapter, we use the CONGEST model of distributed
networks which considers point-to-point, synchronous communications between
any two nodes in a network. In any given synchronous round, communication is
only allowed between nodes with a physical link and is restricted to $O(\log
n)$ bits.
The network will be modeled as a simple undirected graph $G=(V,E)$. As stated
earlier, we will use $n$ to denote the number of nodes, $m$ to denote the
number of edges and $D$ as the diameter of the network. Each node in this
network has a unique $id$ which is of size $O(\log n)$ bits. Whenever we say
node $x$ in the network, we mean a node with $x$ as its $id$.
Various algorithms in the distributed setting start by constructing a tree. A
tree gives structure to the problem by defining a relationship between nodes
as descendants, as ancestors, or neither. Moreover, it also serves as a
medium for information flow between nodes in a structured way using the tree
edges. In this section, we state results about the construction of various
types of spanning trees and review simple distributed algorithms on trees.
#### 2.1.1 Tree Constructions and Notations
Let ${T}$ be any spanning tree rooted at node $r_{T}$ (or just $r$ when clear
from the context). Let $\ell_{T}\left(x\right)$ be the level of node $x$ in
$T$ such that the root $r$ is at level $0$ and for all other nodes $x$,
$\ell_{T}\left(x\right)$ is just the distance from root following the tree
edges. For $x\neq r$, let $\pi_{{T}}\left(x\right)$ denote the parent of the
vertex $x$ in $T$. For every node $v$, let $\operatorname{children}_{T}(v)$ be
the set of children of $v$ in the tree $T$. Further for any node $v$, we will
use $\mathcal{A}_{T}\left(v\right)$ to denote the set of all ancestors of node
$v$ in tree $T$ including $v$ and ${v}^{\downarrow{T}}$ as the set of vertices
which are descendants of $v$ in the tree $T$, including $v$. We will briefly
review the construction of the different types of trees; for details, the
reader is referred to [23].
##### BFS Tree.
Our approach to finding min-cuts uses an arbitrary BFS tree. Although our
technique is invariant to the type of spanning tree chosen, a BFS tree is
ideal because it can be constructed in just $O(D)$ rounds. The following lemma
states the guarantees regarding the construction of a BFS tree.
###### Lemma 2.1.
There exists a distributed algorithm for construction of a BFS tree which
requires at most $O(D)$ rounds.
A brief description of an algorithm to construct a BFS tree is as follows: an
arbitrary node $r$ initiates the construction of the BFS tree by assigning
itself level $\ell\left(r\right)=0$. Then the node $r$ advertises its level to
all its adjacent neighbors, which then join the BFS tree at level $1$ as
children of $r$. Subsequently, each node at level $1$ also advertises its
level to all its neighbors, but only those nodes which are not already part of
the BFS tree join as children of nodes at level $1$. Here, ties are broken
arbitrarily. This process goes on until all the nodes are part of the BFS
tree.
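For concreteness, the following is a minimal centralized Python sketch of this round-by-round construction; the adjacency map `adj` and the function name are our own illustrative choices, and the distributed algorithm itself runs one level per synchronous round.

```python
from collections import deque

def build_bfs_tree(adj, root):
    """Simulate the round-by-round distributed BFS construction.

    adj: dict mapping each node to an iterable of its neighbors.
    Returns (parent, level); parent[root] is None. Ties between candidate
    parents are broken by visiting order, mirroring the arbitrary
    tie-breaking of the distributed algorithm.
    """
    parent, level = {root: None}, {root: 0}
    frontier = deque([root])              # nodes advertising their level this round
    while frontier:
        next_frontier = deque()
        for u in frontier:                # all nodes at the current level act in parallel
            for v in adj[u]:
                if v not in level:        # v is not yet part of the BFS tree
                    parent[v], level[v] = u, level[u] + 1
                    next_frontier.append(v)
        frontier = next_frontier          # one synchronous round elapses per level
    return parent, level
```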
##### DFS Tree.
Unlike a BFS tree, the depth of a DFS tree is oblivious to the diameter. In
fact, two of the early approaches to find cut edges in distributed networks
used a DFS tree [10, 1]. The following lemma states the guarantees regarding
the construction of a DFS tree.
###### Lemma 2.2.
There exists a distributed algorithm for construction of a DFS tree which
requires at most $O(n)$ rounds.
##### MST Tree.
The following lemma is known regarding the complexity of constructing an MST.
In fact, it is also known that we cannot do better in terms of round
complexity for the construction of an MST [3].
###### Lemma 2.3.
For a weighted network, there exists a distributed algorithm for construction
of MST tree which requires $O(\sqrt{n}+D)$ rounds.
A recent technique to find min-cuts given by [21] assigns weights to each edge
of an unweighted graph and computes an MST; these weights are then updated and
a new MST is constructed. This goes on for $\text{polylog}\,n$ iterations. We
review this technique in Section 2.3.
#### 2.1.2 Convergecast and Broadcast
We will now give brief details about simple distributed algorithms. In
particular, we will describe _Broadcasts_ and _Convergecasts_. These are
mechanisms to move data along the edges of any fixed tree. In this thesis, we
will often use these techniques to compute properties of the given network
which will, in turn, help us to find min-cuts.
##### Convergecasts
In distributed network algorithms, there are applications where it is required
to collect information upwards on some tree; in particular, this is a flow of
information from nodes to their ancestors in the tree. This type of technique
is called a _convergecast_. In a BFS tree $T$, any node has at most $D$
ancestors, but for any node $v$ the number of nodes in the descendant set
${v}^{\downarrow{T}}$ is not bounded by $D$. Thus, during a convergecast, all
nodes in the set ${v}^{\downarrow{T}}$ might want to send information upwards
to an ancestor node $v$. This may cause a lot of contention and is a costly
endeavor. In this thesis, whenever we use a convergecast technique, we couple
it with either an _aggregation based strategy_ or a _contention resolution
mechanism_.
In an aggregation based strategy, during the convergecast, the flow of
information from nodes at a deeper level is synchronized and aggregated
together to be sent upwards. In Chapter 4, we introduce the _tree restricted
semigroup function_, which is such an aggregation based technique. Moreover,
our algorithm to compute the specially designed _Sketch_ also uses this kind
of convergecasting. Further, in a contention resolution mechanism, only
certain information is allowed to move upwards based on some specific
criteria, thus limiting contention. In Chapter 5, we use such a contention
resolution mechanism to convergecast details found by our _Layered Algorithm_.
##### Broadcasts
In this technique, the flow of information takes place in the downward
direction along the tree edges. When such information is to be communicated to
all the nodes in the descendant set, it is called a _broadcast_. In the
general broadcast algorithm, for some spanning tree $T$, the root is required
to send a message of size $O(\log n)$ bits to all the nodes. We state the
following lemma from [23], whose proof uses a simple flooding based algorithm.
###### Lemma 2.4 (Lemma 3.2.1 from [23]).
For any spanning tree $T$, let the root have a message of size $O(\log n)$
bits, then it can be communicated to all the nodes in $O(Depth(T))$ rounds.
###### Proof.
Here we assume that there exists a spanning tree $T$ and each node is aware of
the spanning tree edges incident to it. Now, all we need is to flood the
message through the entire tree. In the first round, the root sends this
message to all its children, who are at level $1$. Further, in the second
round, the nodes at level $1$ send the message to all their children at level
$2$, and so on. Thus, in all, we require $O(Depth(T))$ rounds. ∎
At various stages of the algorithms presented in this thesis, we require two
variants of the general broadcast techniques. They are described below. We
also give the respective algorithm for them in Lemma 2.5.
1. _Broadcast Type-1_
Given a spanning tree $T$, all nodes $v\in V$ have a message of size $O(\log
n)$ bits which is required to be communicated to all the nodes in the vertex
set ${v}^{\downarrow{T}}$.
2. _Broadcast Type-2_
Given a spanning tree $T$, all nodes $v\in V$ have $Depth(T)$ messages of size
$O(\log n)$ bits which is required to be communicated to all the nodes in the
vertex set ${v}^{\downarrow{T}}$.
###### Lemma 2.5.
There exist distributed algorithms which require $O(Depth(T))$ and
$O(Depth(T)^{2})$ rounds respectively for Broadcast Type-1 and Broadcast
Type-2.
###### Proof.
First, let us consider _Broadcast Type-1_. Here we will use Lemma 2.4 coupled
with _pipelining_. Each node $v$ initiates, in the first round, a broadcast of
its $O(\log n)$-bit message to the subtree rooted at node $v$ using the
algorithm from Lemma 2.4. If $v$ is not the root of the spanning tree, then in
the first round it will have received the message broadcast by its parent, and
node $v$ must forward this message to all its children. Note that this does
not interfere with the broadcast initiated by node $v$ to the subtree rooted
at it. In all the subsequent rounds, a train of messages is _pipelined_ one
after another through the tree edges, enabling the broadcasts initiated by the
nodes to run without any hindrance or congestion. Thus it only takes
$O(Depth(T))$ rounds for this type of broadcast. For _Broadcast Type-2_, we
just need to run $Depth(T)$ iterations of _Broadcast Type-1_, taking
$O(Depth(T)^{2})$ rounds. ∎
### 2.2 Cuts Space and Cycle Space
In this section, we will discuss the _cut space_ and the _cycle space_. We
will first define them and prove that they are vector spaces. Further, we will
show that they are orthogonal. All these facts are well known and are part of
standard graph theory books; for example, see [2].
As defined earlier, $E$ and $V$ are edge set and vertex set respectively. We
will work with the field $\mathbb{Z}_{2}$ and the space
$\mathbb{Z}_{2}^{|E|}$. Let $S\subseteq E$; its characteristic vector
$\chi^{S}$ is defined as follows: $\chi_{e}^{S}=1$ (at coordinate $e$) for
$e\in S$ and $\chi_{f}^{S}=0$ for $f\notin S$. Here we will use the operator
$\oplus$ (symmetric difference or XOR) for the space $\mathbb{Z}_{2}^{|E|}$.
Let $R,S\subseteq E$. Note that $R,S$ are edge sets whereas $\chi^{R}$ and
$\chi^{S}$ are binary vectors in $\mathbb{Z}_{2}^{|E|}$. When we write
$R\oplus S$, we mean the symmetric difference of the two sets, and when we
write $\chi^{R}\oplus\chi^{S}$, we mean the XOR operation between the two
vectors. It is easy to see that $\chi^{R}\oplus\chi^{S}=\chi^{R\oplus S}$.
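The identity $\chi^{R}\oplus\chi^{S}=\chi^{R\oplus S}$ can be sanity-checked in a few lines of Python; the edge ordering and the sets below are an arbitrary toy example of our own.

```python
def chi(S, edges):
    """Characteristic vector of the edge set S over a fixed edge ordering."""
    return [1 if e in S else 0 for e in edges]

edges = [('a', 'b'), ('b', 'c'), ('c', 'a'), ('c', 'd')]
R = {('a', 'b'), ('b', 'c')}
S = {('b', 'c'), ('c', 'd')}

xor_of_vectors = [x ^ y for x, y in zip(chi(R, edges), chi(S, edges))]
vector_of_xor = chi(R ^ S, edges)   # R ^ S is Python's set symmetric difference
assert xor_of_vectors == vector_of_xor
```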
Let $\phi\subseteq E$. If every vertex of the subgraph $(V,\phi)$ has even
degree, then $\phi$ is a _binary circulation_. The set of all binary
circulations of a graph is called the _cycle space_ of the graph. Similarly,
let $A\subseteq V$. Define an induced cut $\delta(A)$ as the set of edges with
exactly one endpoint in $A$ or essentially the cut set $(A,V\setminus A)$. The
set of all the induced cuts is called the _cut space_ of the graph.
###### Theorem 2.6.
The cut space and cycle space are vector subspaces of $\mathbb{Z}_{2}^{|E|}$.
###### Proof.
To show that both structures are vector spaces over $\mathbb{Z}_{2}$, it is
sufficient to show that each contains the zero vector and is closed under the
operator $\oplus$ (over $\mathbb{Z}_{2}$, every element is its own inverse).
We will show that $\emptyset_{E}$ is the required $0$ of the vector space and
that $\oplus$ is the operator. For the cut space, take a set $A=\emptyset$.
Then $\delta(A)=\emptyset_{E}$, that is, $\chi^{\delta(A)}=\vec{0}$.
Similarly, for the cycle space, if we take $\phi=\emptyset\subseteq E$ then
every vertex in the graph $(V,\phi)$ has degree $0$, which is even. For
closure of the cut space, let $A,B\subseteq V$, and let $\delta(A)$ and
$\delta(B)$ be the cuts induced by $A$ and $B$. By definition, $\delta(A)$ is
the set of edges with exactly one end in $A$, and similarly $\delta(B)$ is the
set of edges with exactly one end in $B$. Now $A\oplus B$ is the symmetric
difference of the sets $A,B$, and $\delta(A\oplus B)$ is the set of edges with
exactly one end in $A\oplus B$. To see that $\delta(A)\oplus\delta(B)$ equals
$\delta(A\oplus B)$, note that an edge lies in $\delta(A)\oplus\delta(B)$
exactly when it crosses exactly one of $A$ and $B$, which happens exactly when
it has one endpoint in $A\oplus B$; this can be checked in the two cases
$A\cap B=\emptyset$ and $A\cap B\not=\emptyset$.
Now let us turn our attention to the cycle space. Let $R,S\subseteq E$. And
let $G_{1}=(V,R)$ and $G_{2}=(V,S)$ be two sub-graphs such that all vertices
are of even degrees in them; that is both $R,S$ are binary circulations. Now
consider the subgraph $G_{1,2}=(V,R\oplus S)$ and a vertex $v\in V$. The
degree of vertex $v$ in $G_{1,2}$ is $\deg_{R}(v)+\deg_{S}(v)-2\deg_{R\cap
S}(v)$ which is even. Hence $R\oplus S$ is also a binary circulation. ∎
The following corollary will be useful for proving our characterization of
cuts and trees given in Chapter 3.
###### Corollary 2.7.
Let $A_{1},A_{2},\ldots,A_{j}$ be sets of vertices such that $\forall i,\
A_{i}\subset V$. Then $\delta(A_{1}\oplus A_{2}\oplus\ldots\oplus
A_{j})=\delta(A_{1})\oplus\delta(A_{2})\oplus\ldots\oplus\delta(A_{j})$.
###### Theorem 2.8.
The cut space and cycle space are orthogonal.
###### Proof.
Let $A\subset V$ and $\phi$ be a binary circulation. Let $\chi^{\phi}$ and
$\chi^{\delta(A)}$ be the vectors corresponding to $\phi$ and the induced cut
$\delta(A)$. We have to show that $\chi^{\delta(A)}\cdot\chi^{\phi}=0\
(\mathrm{mod}\ 2)$, which is equivalent to showing that $|\phi\cap\delta(A)|$
is even. Now $\sum_{a\in A}\deg_{\phi}(a)=\sum_{a\in A}|\phi\cap\delta(a)|$.
Observe that $\sum_{a\in A}\deg_{\phi}(a)$ is even because $\phi$ is a binary
circulation. Further, $\sum_{a\in A}|\phi\cap\delta(a)|$ counts each edge of
$\phi$ with both ends in $A$ twice, counts every edge in $\phi\cap\delta(A)$
once, and does not count any other edge. Hence $|\phi\cap\delta(A)|$ is even. ∎
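As a quick numerical sanity check of this orthogonality, the following Python snippet verifies that $|\phi\cap\delta(A)|$ is even for a binary circulation $\phi$ and several induced cuts; the toy graph is our own example.

```python
def induced_cut(A, edges):
    """delta(A): edges with exactly one endpoint in A."""
    return {(u, w) for (u, w) in edges if (u in A) != (w in A)}

# Triangle a-b-c plus a pendant edge c-d; the triangle is a binary circulation.
edges = [('a', 'b'), ('b', 'c'), ('c', 'a'), ('c', 'd')]
phi = {('a', 'b'), ('b', 'c'), ('c', 'a')}   # every degree in (V, phi) is even

for A in [{'a'}, {'a', 'b'}, {'d'}, {'b', 'd'}]:
    assert len(phi & induced_cut(A, edges)) % 2 == 0   # even intersection
```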
### 2.3 Min-Cuts in Distributed Networks
In this section we will review some past results for finding min-cuts, in
particular the ideas employed in two recent works [26, 21]. While [21] relied
on the greedy tree packing technique introduced by [28], [26] gave a novel
technique of random circulations.
#### 2.3.1 Using Greedy Tree Packing
First, we will define the concept of greedy tree packing. A _tree packing_
$\mathbb{T}$ is a multiset of spanning trees. Let the load of any edge $e$
w.r.t a tree packing $\mathbb{T}$, denoted by $\mathcal{L}^{\mathbb{T}}(e)$,
be defined as the number of trees in $\mathbb{T}$ containing $e$. A tree
packing $\mathbb{T}=\left\\{T_{1},\ldots,T_{j}\right\\}$ is greedy if each
$T_{i}$ is a minimum spanning tree (MST) with respect to the loads induced by
$\left\\{T_{1},\ldots,T_{i-1}\right\\}$. [28] has given the following results
related to tree packing:
###### Theorem 2.9 ([28]).
Let $G=(V,E)$ be an unweighted graph. Let $m$ be the number of edges in $G$
and $k$ be the size of min-cut. Then a greedy tree-packing with
$\omega(k^{7}\log^{3}m)$ contains a tree crossing some min-cut exactly once.
Based on the above theorem, [21] construct a greedy tree packing of
$\omega(k^{7}\log^{3}m)$ trees. There is a well-known algorithm to find MST in
$O(D+\sqrt{n})$ rounds. Further, in each tree, the authors find the size of
the smallest edge cut which shares only one edge with the tree. They give an
algorithm to do the same in $O(D+\sqrt{n})$ rounds for each tree. Note that
they cannot find all the min-cuts because of the limitations of Theorem 2.9
which only guarantees that there will exists some (but not all) min-cut which
will share exactly one edge with at least one tree in tree packing
$\mathbb{T}$.
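For intuition, here is a minimal centralized Python sketch of greedy tree packing: each tree is a Kruskal MST with respect to the loads induced by the previously chosen trees. This deliberately ignores the distributed $O(D+\sqrt{n})$-round implementation details, and all names are illustrative.

```python
def mst_wrt_loads(nodes, edges, load):
    """Kruskal's MST where the weight of an edge is its current load."""
    parent = {v: v for v in nodes}

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path halving
            v = parent[v]
        return v

    tree = []
    for e in sorted(edges, key=lambda e: load[e]):
        ru, rv = find(e[0]), find(e[1])
        if ru != rv:
            parent[ru] = rv
            tree.append(e)
    return tree

def greedy_tree_packing(nodes, edges, num_trees):
    """Each tree is an MST w.r.t. the loads induced by the earlier trees."""
    load = {e: 0 for e in edges}
    packing = []
    for _ in range(num_trees):
        t = mst_wrt_loads(nodes, edges, load)
        for e in t:
            load[e] += 1                    # tree t now contributes to e's load
        packing.append(t)
    return packing
```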
#### 2.3.2 Using Random Circulations
The random circulation technique is founded on the well-known fact that the
cut space and cycle space are orthogonal to each other [2]. Based on this
technique, [25] gave the following result.
###### Theorem 2.10 ([25]).
There is a Las Vegas distributed algorithm to compute all the min-cuts of size
1 and 2 (cut edges and cut pairs) in $O(D)$ time.
The above result cannot be made deterministic for min-cuts of size 2.
Moreover, extending the random circulation technique to min-cuts of size $3$
or more does not seem plausible due to its fundamental limitations. But, as
shown in [24], 1-cuts can be found deterministically in $O(D)$ rounds.
## Chapter 3 Technical Overview
We give deterministic algorithms for finding min-cuts of size one, two, and
three. We find min-cuts of size one and two in $O(D)$ time and of size three
in $O(D^{2})$ time. We recreate the optimal result for min-cuts of size one.
For min-cuts of size two and three, our results resolve the open problem from
[26]. We give a new characterization involving trees and min-cuts. We also
introduce a new algorithmic technique named the Tree Restricted Semigroup
Function (TRSF), which is defined with respect to a tree $T$. The TRSF is a
natural approach to find min-cuts of size one. We also show a non-trivial
application of TRSF in finding min-cuts of size two, which is quite
intriguing. To determine whether there exists a min-cut of size $3$, we
introduce a new _sketching technique_ where each node finds a view of the
graph with respect to a tree by getting rid of edges which may not be part of
a min-cut. The sketching idea, coupled with our characterization results,
helps us find the required min-cut.
### 3.1 Characterization of Trees and Cuts
In this section, we establish some fundamental relationships between trees and
cuts which will help us find the required min-cuts. When a min-cut shares $k$
edges with a spanning tree, we say that it $k$-respects the tree. Every
min-cut shares at least one edge with any spanning tree: otherwise, removing
the cut-set would leave a spanning tree connecting all the nodes, so the set
would not be a cut. We give the following simple observation in this regard.
###### Observation 3.1.
For any induced cut of size $k$ and a spanning tree $T$, there exists some
$j\in[1,\min(k,n)]$ such that the induced cut $j$-respects the tree $T$.
The above simple observation lets us break the problem of finding small min-
cuts into different cases. A min-cut of size $1$ will always share one edge
with any spanning tree. A min-cut of size $2$ shares either $1$ or $2$ edges
with any spanning tree. Similarly a min-cut of size $3$ either shares $1,2$ or
$3$ edges with any spanning tree.
We now make a simple observation along with a quick proof sketch
###### Observation 3.2.
When an induced cut of size $k$ 1-respects a spanning tree $T$, then there
exists at least one node $v$ such that $|\delta({v}^{\downarrow{T}})|=k$.
###### Proof Sketch.
Consider the cut edge $(u,v)$ that is in $T$ with $u$ being closer to the root
$r_{T}$. Since the tree only 1-respects the cut, ${v}^{\downarrow{T}}$ remains
on one side of the cut, while $r_{T}$ is on the other. Moreover,
$V\setminus{v}^{\downarrow{T}}$ fully remains on the side with $r_{T}$;
otherwise, $T$ will not be limited to 1-respecting the cut. Thus,
${v}^{\downarrow{T}}$ will serve our required purpose. ∎
We will now give a distributed algorithm such that each node $v$ finds
$|\delta({v}^{\downarrow{T}})|$ in $O(D)$ rounds. For the remaining cases, we
will give lemmas to characterize the induced cuts and tree. These
characterizations follow from Corollary 2.7. We begin with the
characterization of an induced cut of size 2 when it 2-respects a tree $T$.
###### Lemma 3.3 (2-cut, 2-respects a spanning tree $T$).
Let $T$ be any spanning tree, and let $u\neq v$ with $u,v\in V\setminus r$.
Then the following two statements are equivalent.
1. $P_{1}:$
$\left\\{(\pi_{{T}}\left(u\right),u),(\pi_{{T}}\left(v\right),v)\right\\}$ is
a cut set induced by ${{u}^{\downarrow{T}}}\oplus{{v}^{\downarrow{T}}}$.
2. $P_{2}:$
$|\delta({u}^{\downarrow{T}})|=|\delta({v}^{\downarrow{T}})|=|\delta({u}^{\downarrow{T}})\cap\delta\left({v}^{\downarrow{T}}\right)|+1$.
To prove the above lemma the following simple observation will be helpful.
###### Observation 3.4.
For any $u\neq v$ and $u,v\in V\setminus r$, and a spanning tree $T$ the edge
$(\pi_{{T}}\left(v\right),v)\in\delta({{v}^{\downarrow{T}}})$ but
$(\pi_{{T}}\left(v\right),v)\notin\delta({{u}^{\downarrow{T}}})$; also
$(\pi_{{T}}\left(u\right),u)\in\delta({{u}^{\downarrow{T}}})$ but
$(\pi_{{T}}\left(u\right),u)\notin\delta({{v}^{\downarrow{T}}})$.
###### Proof.
Consider the given spanning tree $T$. Here the edge
$(\pi_{{T}}\left(u\right),u)$ has one end in the vertex set
${u}^{\downarrow{T}}$ and the other end in the vertex set
$V\setminus{u}^{\downarrow{T}}$. Thus the edge $(u,\pi_{{T}}\left(u\right))$
is part of the cut set
$({u}^{\downarrow{T}},V\setminus{u}^{\downarrow{T}})=\delta({u}^{\downarrow{T}})$.
Now, consider any other node $v$. Either $v\in{u}^{\downarrow{T}}$ or not.
Take the first case, $v\in{u}^{\downarrow{T}}$. Since $v\neq u$, both
endpoints of the edge $(\pi_{{T}}\left(u\right),u)$ are outside the set
${v}^{\downarrow{T}}$, hence
$(\pi_{{T}}\left(u\right),u)\notin\delta({v}^{\downarrow{T}})$.
When $v\notin{u}^{\downarrow{T}}$, a similar argument holds. If
$v\in\mathcal{A}_{T}\left(u\right)$, then both endpoints are in the set
${v}^{\downarrow{T}}$; otherwise, both of them are outside the set
${v}^{\downarrow{T}}$. In either case, the edge
$(\pi_{{T}}\left(u\right),u)\notin\delta({v}^{\downarrow{T}})$. ∎
###### Proof of Lemma 3.3.
Consider the forward direction,
$\left\\{(\pi_{{T}}\left(u\right),u),(\pi_{{T}}\left(v\right),v)\right\\}$ is
a cut set induced by ${{u}^{\downarrow{T}}}\oplus{{v}^{\downarrow{T}}}$.
Therefore
$\delta({{v}^{\downarrow{T}}}\oplus{{u}^{\downarrow{T}}})=\left\\{(\pi_{{T}}\left(u\right),u),(\pi_{{T}}\left(v\right),v)\right\\}$.
Further from Corollary 2.7, we have
$\delta({{v}^{\downarrow{T}}}\oplus{{u}^{\downarrow{T}}})=\delta({{u}^{\downarrow{T}}})\oplus\delta({{v}^{\downarrow{T}}})$.
Therefore
$\left\\{(\pi_{{T}}\left(u\right),u),(\pi_{{T}}\left(v\right),v)\right\\}=\delta({{u}^{\downarrow{T}}})\oplus\delta({{v}^{\downarrow{T}}})$
which along with Observation 3.4 implies that
$\delta({{u}^{\downarrow{T}}})\setminus(\pi_{{T}}\left(u\right),u)=\delta({{v}^{\downarrow{T}}})\setminus(\pi_{{T}}\left(v\right),v)$
thus
$|\delta({{v}^{\downarrow{T}}})|-1=|\delta({{v}^{\downarrow{T}}})\cap\delta({{u}^{\downarrow{T}}})|$
and
$|\delta({{u}^{\downarrow{T}}})|-1=|\delta({{v}^{\downarrow{T}}})\cap\delta({{u}^{\downarrow{T}}})|$.
For the other direction, given that
$|\delta({{u}^{\downarrow{T}}})\cap\delta({{v}^{\downarrow{T}}})|+1=|\delta({{v}^{\downarrow{T}}})|=|\delta({{u}^{\downarrow{T}}})|$,
which along with Observation 3.4 implies that
$\delta({{u}^{\downarrow{T}}})\setminus(\pi_{{T}}\left(u\right),u)=\delta({{v}^{\downarrow{T}}})\setminus(\pi_{{T}}\left(v\right),v)$.
Hence
$\delta({{u}^{\downarrow{T}}})\oplus\delta({{v}^{\downarrow{T}}})=\left\\{(\pi_{{T}}\left(u\right),u),(\pi_{{T}}\left(v\right),v)\right\\}$.
Using the fact that
$\delta({{v}^{\downarrow{T}}})\oplus\delta({{u}^{\downarrow{T}}})=\delta({{v}^{\downarrow{T}}}\oplus{{u}^{\downarrow{T}}})$,
the edge set
$\left\\{(\pi_{{T}}\left(u\right),u),(\pi_{{T}}\left(v\right),v)\right\\}$ is
a cut induced by ${{v}^{\downarrow{T}}}\oplus{{u}^{\downarrow{T}}}$. ∎
When an induced cut of size $2$ 2-respects a spanning tree, there are two
sub-cases. Consider nodes $u,v$ as in the above lemma. Either
${u}^{\downarrow{T}}\cap{v}^{\downarrow{T}}=\emptyset$ or
${u}^{\downarrow{T}}\subseteq{v}^{\downarrow{T}}$ (or, symmetrically,
${v}^{\downarrow{T}}\subseteq{u}^{\downarrow{T}}$). The above lemma unifies
these two cases. In the next chapter, we will show how one of the nodes in the
network finds all three quantities required by the above lemma whenever a
min-cut of this kind exists.
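To illustrate how condition $P_{2}$ is used, the following centralized Python sketch enumerates all pairs $(u,v)$ satisfying $P_{2}$ of Lemma 3.3, given a rooted tree and the edge set; by the lemma, the corresponding pairs of tree edges are exactly the induced 2-cuts that 2-respect the tree. The helper names and representation are our own illustrative choices, not the distributed procedure.

```python
from itertools import combinations

def subtree(children, v):
    """Vertex set v-down: descendants of v in the tree, including v."""
    stack, out = [v], set()
    while stack:
        x = stack.pop()
        out.add(x)
        stack.extend(children.get(x, []))
    return out

def induced_cut(A, edges):
    """delta(A): edges with exactly one endpoint in A."""
    return {(u, w) for (u, w) in edges if (u in A) != (w in A)}

def two_cuts_2respecting(nodes, edges, children, root):
    """Pairs (u, v) satisfying P2 of Lemma 3.3:
    |delta(u_down)| = |delta(v_down)| = |delta(u_down) & delta(v_down)| + 1."""
    cut = {v: induced_cut(subtree(children, v), edges)
           for v in nodes if v != root}
    return [(u, v) for u, v in combinations(cut, 2)
            if len(cut[u]) == len(cut[v]) == len(cut[u] & cut[v]) + 1]
```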
Now we will see similar characterizations for an induced cut of size 3 when it
2-respects and when it 3-respects the tree.
###### Lemma 3.5 (3-cut, 2-respects a spanning tree $T$).
Let $T$ be any tree. Let $v_{1},v_{2}\in V\setminus r$ be two different nodes
and $e$ be a non-tree edge. For any $A,B\subseteq V$, let
$\gamma(A,B)=|\delta(A)\cap\delta(B)|$. Then the following two are equivalent.
1. $P_{1}:$
$\left\\{(\pi_{{T}}\left(v_{1}\right),v_{1}),(\pi_{{T}}\left(v_{2}\right),v_{2}),e\right\\}$
is a cut set induced by the vertex set
${v_{1}}^{\downarrow{T}}\oplus{v_{2}}^{\downarrow{T}}$.
2. $P_{2}:$
$|\delta({v_{1}}^{\downarrow{T}})|-2=|\delta({v_{2}}^{\downarrow{T}})|-1=\gamma({{v_{1}}^{\downarrow{T}}},{{v_{2}}^{\downarrow{T}}})$
or
$|\delta({v_{1}}^{\downarrow{T}})|-1=|\delta({v_{2}}^{\downarrow{T}})|-2=\gamma({{v_{1}}^{\downarrow{T}}},{{v_{2}}^{\downarrow{T}}})$.
###### Proof.
The proof is similar to Lemma 3.3. Consider the forward direction,
$\left\\{(\pi\left(v_{1}\right),v_{1}),(\pi\left(v_{2}\right),v_{2}),e\right\\}$
is a cut set induced by ${{v_{1}}^{\downarrow}}\oplus{{v_{2}}^{\downarrow}}$.
Therefore
$\delta({{v_{2}}^{\downarrow}}\oplus{{v_{1}}^{\downarrow}})=\left\\{(\pi\left(v_{1}\right),v_{1}),(\pi\left(v_{2}\right),v_{2}),e\right\\}$.
Further from Corollary 2.7, we have
$\delta({{v_{1}}^{\downarrow}}\oplus{{v_{2}}^{\downarrow}})=\delta({{v_{1}}^{\downarrow}})\oplus\delta({{v_{2}}^{\downarrow}})$.
Therefore
$\left\\{(\pi\left(v_{1}\right),v_{1}),(\pi\left(v_{2}\right),v_{2}),e\right\\}=\delta({{v_{1}}^{\downarrow}})\oplus\delta({{v_{2}}^{\downarrow}})$.
Since $e\in\delta({v_{1}}^{\downarrow})\oplus\delta({v_{2}}^{\downarrow})$,
the edge $e$ is in $\delta({v_{1}}^{\downarrow})$ or in
$\delta({v_{2}}^{\downarrow})$ but not in both, due to the symmetric
difference operator. Without loss of generality, let
$e\in\delta({v_{1}}^{\downarrow})$. Now Observation 3.4 implies that
$\delta({{v_{1}}^{\downarrow}})\setminus\left\\{(\pi\left(v_{1}\right),v_{1}),e\right\\}=\delta({{v_{2}}^{\downarrow}})\setminus(\pi\left(v_{2}\right),v_{2})$,
thus
$|\delta({{v_{1}}^{\downarrow}})|-2=|\delta({{v_{1}}^{\downarrow}})\cap\delta({{v_{2}}^{\downarrow}})|$
and
$|\delta({{v_{2}}^{\downarrow}})|-1=|\delta({{v_{1}}^{\downarrow}})\cap\delta({{v_{2}}^{\downarrow}})|$.
Similarly, when $e\in\delta({v_{2}}^{\downarrow})$, then
$|\delta({{v_{1}}^{\downarrow}})|-1=|\delta({{v_{1}}^{\downarrow}})\cap\delta({{v_{2}}^{\downarrow}})|$
and
$|\delta({{v_{2}}^{\downarrow}})|-2=|\delta({{v_{1}}^{\downarrow}})\cap\delta({{v_{2}}^{\downarrow}})|$.
For the other direction, without loss in generality, let us choose one of the
two statements. In particular, let
$|\delta({{v_{1}}^{\downarrow}})\cap\delta({{v_{2}}^{\downarrow}})|=|\delta({{v_{1}}^{\downarrow}})|-1=|\delta({{v_{2}}^{\downarrow}})|-2$,
which along with Observation 3.4 implies that
$|\delta({{v_{1}}^{\downarrow}})\setminus(\pi\left(v_{1}\right),v_{1})|=|\delta({{v_{2}}^{\downarrow}})\setminus(\pi\left(v_{2}\right),v_{2})|-1$.
Thus there exists exactly one edge $e$ which is not in
$\delta({v_{1}}^{\downarrow})$ but is in $\delta({v_{2}}^{\downarrow})$. Hence
$\delta({{v_{1}}^{\downarrow}})\oplus\delta({{v_{2}}^{\downarrow}})=\left\\{(\pi\left(v_{1}\right),v_{1}),(\pi\left(v_{2}\right),v_{2}),e\right\\}$.
Using the fact that
$\delta({{v_{2}}^{\downarrow}})\oplus\delta({{v_{1}}^{\downarrow}})=\delta({{v_{2}}^{\downarrow}}\oplus{{v_{1}}^{\downarrow}})$;
the edge set
$\left\\{(\pi\left(v_{1}\right),v_{1}),(\pi\left(v_{2}\right),v_{2}),e\right\\}$
is a cut induced by ${{v_{1}}^{\downarrow}}\oplus{{v_{2}}^{\downarrow}}$. ∎
###### Lemma 3.6 (3-cut, 3-respects a spanning tree $T$).
Let $T$ be any tree, and let $v_{1},v_{2},v_{3}\in V\setminus r$ be three
distinct nodes. Then the following two statements are equivalent:
1. $P_{1}:$
$\left\\{(v_{1},\pi_{{T}}\left(v_{1}\right)),(v_{2},\pi_{{T}}\left(v_{2}\right)),(v_{3},\pi_{{T}}\left(v_{3}\right))\right\\}$
is a cut-set induced by
${v_{1}}^{\downarrow{T}}\oplus{v_{2}}^{\downarrow{T}}\oplus{v_{3}}^{\downarrow{T}}$.
2. $P_{2}:$
$|\delta({v_{i}}^{\downarrow{T}})|-1=\sum\limits_{j\in\left\\{1,2,3\right\\}\setminus\left\\{i\right\\}}\gamma({v_{i}}^{\downarrow{T}},{v_{j}}^{\downarrow{T}}),\
\forall i\in\left\\{1,2,3\right\\}$.
###### Proof.
Consider the forward direction $\delta({v_{1}}^{\downarrow}\ \oplus\
{v_{2}}^{\downarrow}\ \oplus\
{v_{3}}^{\downarrow})=\\{(v_{1},\pi\left(v_{1}\right)),(v_{2},\pi\left(v_{2}\right)),$
$(v_{3},\pi\left(v_{3}\right))\\}$. Also, by Corollary 2.7, we know that
$\delta({v_{1}}^{\downarrow})\oplus\delta({v_{2}}^{\downarrow})\oplus\delta({v_{3}}^{\downarrow})=\delta({v_{1}}^{\downarrow}\oplus{v_{2}}^{\downarrow}\oplus{v_{3}}^{\downarrow})$.
Notice that there are $3$ equations in $P_{2}$, one for each $i\in[1,3]$. We
will show the case $i=1$; the argument for the rest follows similarly. Every
edge $e\neq(v_{1},\pi\left(v_{1}\right))$ with $e\in\delta({v_{1}}^{\downarrow})$
must be present exactly once in either $\delta({v_{2}}^{\downarrow})$ or
$\delta({v_{3}}^{\downarrow})$. Indeed, if it is present in neither of them,
then among the three sets
$\delta({v_{1}}^{\downarrow}),\delta({v_{2}}^{\downarrow})$ and
$\delta({v_{3}}^{\downarrow})$, $e$ is in exactly one, namely
$\delta({v_{1}}^{\downarrow})$; thus
$e\in\delta({v_{1}}^{\downarrow}\oplus{v_{2}}^{\downarrow}\oplus{v_{3}}^{\downarrow})$,
which is not true. Also, if it is present in both
$\delta({v_{2}}^{\downarrow})$ and $\delta({v_{3}}^{\downarrow})$, then
similarly
$e\in\delta({v_{1}}^{\downarrow}\oplus{v_{2}}^{\downarrow}\oplus{v_{3}}^{\downarrow})$,
which again is not true. Thus, apart from the edge
$(\pi\left(v_{1}\right),v_{1})$, whenever an edge
$e\in\delta({v_{1}}^{\downarrow})$ exists, it is also present in exactly one
of $\delta({v_{2}}^{\downarrow})$ or $\delta({v_{3}}^{\downarrow})$. Thus
$|\delta({v_{1}}^{\downarrow})|-1=|\delta({v_{1}}^{\downarrow})\cap\delta({v_{2}}^{\downarrow})|+|\delta({v_{1}}^{\downarrow})\cap\delta({v_{3}}^{\downarrow})|$.
For the backward direction, we again focus our attention on the three edge
sets $\delta({v_{1}}^{\downarrow}),\delta({v_{2}}^{\downarrow})$ and
$\delta({v_{3}}^{\downarrow})$. We know that the edge
$(\pi\left(v_{1}\right),v_{1})\in\delta({v_{1}}^{\downarrow})$ but is in
neither $\delta({v_{2}}^{\downarrow})$ nor $\delta({v_{3}}^{\downarrow})$;
similarly for $(\pi\left(v_{2}\right),v_{2})$ and
$(\pi\left(v_{3}\right),v_{3})$. Let the edge set
$C=\delta({v_{1}}^{\downarrow})\oplus\delta({v_{2}}^{\downarrow})\oplus\delta({v_{3}}^{\downarrow})$.
Then $C$ is sure to contain the edges
$(v_{1},\pi\left(v_{1}\right)),(v_{2},\pi\left(v_{2}\right))$ and
$(v_{3},\pi\left(v_{3}\right))$ because of the symmetric difference operator
between the three sets. Now, based on the condition in $P_{2}$, we show that
there is no other edge $e$ in $C$. Suppose there is an edge
$e\neq(\pi\left(v_{1}\right),v_{1})$ with $e\in\delta({v_{1}}^{\downarrow})$
and $e\in C$. Then $e$ is either in both $\delta({v_{2}}^{\downarrow})$ and
$\delta({v_{3}}^{\downarrow})$ or in neither of them. If $e$ is in both
$\delta({v_{2}}^{\downarrow})$ and $\delta({v_{3}}^{\downarrow})$, then
$|\delta({v_{1}}^{\downarrow})|-2=|\delta({v_{1}}^{\downarrow})\cap\delta({v_{2}}^{\downarrow})|+|\delta({v_{1}}^{\downarrow})\cap\delta({v_{3}}^{\downarrow})|$
which contradicts $P_{2}$. If $e$ is in neither $\delta({v_{2}}^{\downarrow})$
nor $\delta({v_{3}}^{\downarrow})$, then
$|\delta({v_{1}}^{\downarrow})|=|\delta({v_{1}}^{\downarrow})\cap\delta({v_{2}}^{\downarrow})|+|\delta({v_{1}}^{\downarrow})\cap\delta({v_{3}}^{\downarrow})|$
which again contradicts $P_{2}$. ∎
Lemmas 3.5 and 3.6 characterize an induced cut of size 3 when it shares 2
edges and 3 edges with a spanning tree, respectively. Similar to a min-cut of
size 2, here again we have a few different cases. These lemmas unify all the
different cases, which we will discuss in Chapter 5.
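Analogously to the 2-cut sketch above, condition $P_{2}$ of Lemma 3.6 can be checked centrally over all triples; `down_cuts` is a hypothetical precomputed table, e.g. built with the helpers from the previous sketch.

```python
from itertools import combinations

def three_cuts_3respecting(down_cuts):
    """Triples (v1, v2, v3) satisfying P2 of Lemma 3.6, given
    down_cuts[v] = delta(v_down) for every node v != r."""
    found = []
    for trio in combinations(down_cuts, 3):
        ok = all(
            len(down_cuts[vi]) - 1 ==
            sum(len(down_cuts[vi] & down_cuts[vj]) for vj in trio if vj != vi)
            for vi in trio
        )
        if ok:
            found.append(trio)
    return found
```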
We will work with an arbitrary BFS tree $\mathcal{T}$ of the given network.
Whenever a quantity is computed with respect to this BFS tree $\mathcal{T}$,
we will skip the redundant $\mathcal{T}$ from the subscript or superscript,
for example $\pi\left(v\right)$ instead of $\pi_{{\mathcal{T}}}\left(v\right)$
and ${v}^{\downarrow}$ instead of ${v}^{\downarrow{\mathcal{T}}}$. At the
beginning each node $v\neq r$ knows $\ell\left(v\right)$, $\pi\left(v\right)$
and the ancestor set $\mathcal{A}\left(v\right)$. The BFS tree and these
associated quantities can be computed in the beginning in $O(D)$ rounds. For
any node $v$ which is not the root of the BFS tree, $\ell\left(v\right)$ and
$\pi\left(v\right)$ are known to $v$ at the time of construction of the BFS
tree, as shown in the proof of Lemma 2.1. Further, the ancestor set
$\mathcal{A}\left(v\right)$ can also be made known to node $v$ in $O(D)$
rounds using _Broadcast Type-1_. This can be done as follows: each node $a$
tells all the nodes in the set ${a}^{\downarrow}$ that it is one of their
ancestors. This information is just of size $O(\log n)$ bits.
### 3.2 Our Contribution
In this thesis, we present deterministic algorithms to find min-cuts of size
$1$, $2$ and $3$.
Figure 3.1: Roadmap for finding small min-cuts
For min-cuts of size $1$ and $2$, our algorithm takes $O(D)$ rounds. For min-
cuts of size $3$, our algorithm takes $O(D^{2})$ rounds.
We use our characterization given in Lemma 3.3, Lemma 3.5 and Lemma 3.6.
Further, in subsequent chapters, we will give communication strategies which
will ensure that at least one node in the network knows about the required
information as per these lemmas whenever a min-cut of the kind exists. The
idea is basically to break the problem into smaller parts. Recall from
Observation 3.1 that a min-cut of size 1 shares an edge with any spanning
tree, a min-cut of size 2 may share 1 or 2 edges, and finally a min-cut of
size $3$ may share 1, 2 or 3 edges with the tree. In each of these cases, our
communication strategies ensure that at least one of the nodes will have the
required information as per the aforementioned characterization lemmas. We
give a road-map of this division in Figure 3.1.
Our results of finding min-cuts are split into two chapters. In Chapter 4, we
present a new technique called the _tree restricted semigroup function_. We
will show that this is enough to optimally find the min-cuts of size $1$ and
$2$. In Chapter 5, we will give an algorithm to find min-cuts of size $3$. We
introduce two techniques, _sketching_ and the _layered algorithm_, to do so.
## Chapter 4 Tree Restricted Semigroup Function
In this chapter, we will define the tree restricted semigroup function and
demonstrate its utility in finding induced cuts of size 1 and 2. A tree
restricted semigroup function is defined with respect to a tree $T$ and is
based on a commutative semigroup. Recall that a commutative semigroup is a set
$\mathcal{X}$ together with a commutative and associative binary operation
$\boxplus$, which is a function
$\boxplus:\mathcal{X}\times\mathcal{X}\rightarrow\mathcal{X}$. The tree
restricted semigroup function is inspired by the semigroup function defined in
[23, Def. 3.4.3].
Let $\mathcal{X}$ be a commutative semigroup with operator $\boxplus$. Let $T$
be any spanning tree. We will formally define $f:V\rightarrow\mathcal{X}$ to
be a tree restricted semigroup function with respect to the tree $T$ in
Definition 4.1. For any node $v$, $f(v)$ will only depend on the vertex set
${v}^{\downarrow{T}}$. Let $a\in V$ be any node, for each
$v\in\mathcal{A}_{T}\left(a\right)$, node $a$ computes the value $X_{a}^{v}$
through a pre-processing step which is the contribution by node $a$ for
calculating $f(v)$.
###### Definition 4.1 (Tree Restricted Semigroup function).
Let $\mathcal{X}$ be a commutative semigroup with the operator $\boxplus$.
Then, the function $f:V\rightarrow\mathcal{X}$ is a tree restricted semigroup
function if
$f(v)=\mathop{\boxplus}\limits_{a\in{v}^{\downarrow{T}}}X_{a}^{v}$
Further for any $a\in V$ and $v\in\mathcal{A}_{T}\left(a\right)$, define
$X_{{a}^{\downarrow{T}}}^{v}=\mathop{\boxplus}\limits_{a^{\prime}\in{a}^{\downarrow{T}}}X_{a^{\prime}}^{v}$.
Note that $X_{{a}^{\downarrow{T}}}^{v}\in\mathcal{X}$. We will say that
$X_{{a}^{\downarrow{T}}}^{v}$ is the contribution of nodes in the set
${a}^{\downarrow{T}}$ for computing $f(v)$.
###### Observation 4.2.
Let $a$ be an internal node in the tree $T$ and
$v\in\mathcal{A}_{T}\left(a\right)$ then
$X_{{a}^{\downarrow{T}}}^{v}=X_{a}^{v}\boxplus\left(\mathop{\boxplus}\limits_{c\in\operatorname{children}_{T}(a)}X_{{c}^{\downarrow{T}}}^{v}\right)$
###### Observation 4.3.
For any internal node $v$
$f(v)=X_{v}^{v}\boxplus\left(\mathop{\boxplus}\limits_{c\in\operatorname{children}_{T}(v)}X_{{c}^{\downarrow{T}}}^{v}\right)$
Pre-Processing: for any node $a$ and all $v\in\mathcal{A}_{T}\left(a\right)$, node $a$ knows $X_{a}^{v}$ through a pre-processing step.
Phase 1 (Aggregation phase, run at every node $a$; aggregates $X_{{a}^{\downarrow{T}}}^{v}$ for all $v\in\mathcal{A}_{T}\left(a\right)$):
for rounds $t=1$ to $Depth(T)-\ell_{T}\left(a\right)$: wait
$l\leftarrow 0$
for rounds $t=Depth(T)-\ell_{T}\left(a\right)+1$ to $Depth(T)$:
  $v\leftarrow$ ancestor of node $a$ at level $l$
  if $a$ is a leaf node: $X_{a^{\downarrow T}}^{v}\leftarrow X_{a}^{v}$
  else:
    for each $c\in\operatorname{children}_{T}(a)$, in parallel, collect $\langle l,X_{c^{\downarrow T}}^{v}\rangle$
    $X_{a^{\downarrow T}}^{v}\leftarrow X_{a}^{v}\boxplus\left(\mathop{\boxplus}\limits_{c\in\operatorname{children}_{T}(a)}X_{c^{\downarrow T}}^{v}\right)$
  send $\langle l,X_{a^{\downarrow T}}^{v}\rangle$ to the parent
  $l\leftarrow l+1$
Phase 2 (Computation phase, run at every node $v\in V$; finds $f(v)$):
Available info: each node $v$ knows $X_{c^{\downarrow T}}^{v}$ for all $c\in\operatorname{children}_{T}(v)$
if $v$ is a leaf node: $f(v)\leftarrow X_{v}^{v}$
else: $f(v)\leftarrow X_{v}^{v}\boxplus\left(\mathop{\boxplus}\limits_{c\in\operatorname{children}_{T}(v)}X_{c^{\downarrow T}}^{v}\right)$
Algorithm 1 Tree Restricted Semigroup Function
###### Definition 4.4.
A tree restricted semigroup function $f(\cdot)$ is efficiently computable if
for all $v$, $f(v)$ can be computed in $O(Depth(T))$ time in the CONGEST
model.
In Lemma 4.5, we will give sufficient conditions for a tree restricted
semigroup function to be efficiently computable.
###### Lemma 4.5.
For all $a\in V$ and $v\in\mathcal{A}_{T}\left(a\right)$ if
$X_{a}^{v},X_{{a}^{\downarrow{T}}}^{v},f(v)$ are all of size $O(\log n)$ bits
and if $X_{a}^{v}$ can be computed in $O(Depth(T))$ time by node $a$ then the
tree restricted semigroup function $f(\cdot)$ is efficiently computable.
###### Proof.
For any node $v$, tree restricted function $f(v)$ depends on the value
$X_{a}^{v}$ for all $a\in{v}^{\downarrow{T}}$, thus each such node $a$
_convergecasts_ (see Sec. 2.1.2) the required information up the tree which is
supported by aggregation of the values. We will give an algorithmic proof for
this lemma. The algorithm to compute an efficiently computable semigroup
function $f(\cdot)$ is given in Algorithm 1.
The aggregation phase of the algorithm, given in Phase 1, runs for at most
$Depth(T)$ time and facilitates a coordinated aggregation of the required
values, convergecasting them in a synchronized fashion. In Phase 1, each node
$a$ sends $\ell_{T}\left(a\right)$ messages of size $O(\log n)$ to its parent;
each message includes $X_{a^{\downarrow T}}^{v}$ for some
$v\in\mathcal{A}_{T}\left(a\right)$, which, as defined earlier, is the
contribution of the nodes in $a^{\downarrow T}$ to $f(v)$. This message passing
takes $O(1)$ time since $X_{a^{\downarrow T}}^{v}\in\mathcal{X}$ is of size
$O(\log n)$ bits. For brevity, we assume this takes exactly $1$ round, this
enables us to talk about each round more appropriately as follows: Any node
$a$ at level $\ell_{T}\left(a\right)$ waits for round $t=1$ to
$Depth(T)-\ell_{T}\left(a\right)$. For any $l\in[0,\ell_{T}\left(a\right)-1]$,
in round $t=Depth(T)-\ell_{T}\left(a\right)+l+1$ node $a$ sends to its parent
$\langle l,X_{a^{\downarrow T}}^{v}\rangle$ where $v$ is the ancestor of $a$
at level $l$. When node $a$ is an internal node then as per Observation 4.2,
$X_{{a}^{\downarrow{T}}}^{v}$ depends on $X_{a}^{v}$ which can be pre-
calculated in $O(Depth(T))$ rounds. Also, $X_{a^{\downarrow T}}^{v}$ depends
on $X_{c^{\downarrow T}}^{v}$ for all $c\in\operatorname{children}_{T}(a)$,
which are at level $\ell_{T}\left(a\right)+1$ and have sent to $a$ (their
parent) the message $\langle l,X^{v}_{c^{\downarrow T}}\rangle$ in the
$(Depth(T)-\ell_{T}\left(a\right)+l)^{\text{th}}$ round. For a leaf node $a$,
$X_{a^{\downarrow T}}^{v}=X_{a}^{v}$, which again is covered in the
pre-processing step.
In Phase 2, node $v$ computes the tree restricted semigroup function $f(v)$.
As per Observation 4.3, each internal node $v$ requires
$X_{{c}^{\downarrow{T}}}^{v}\ \forall c\in\operatorname{children}_{T}(v)$ and
$X_{v}^{v}$. The value $X_{v}^{v}$ is computed in the pre-processing step, and
$X_{{c}^{\downarrow{T}}}^{v}$ is received by $v$ in the aggregation phase.
When node $v$ is a leaf node, $f(v)$ depends only on $X_{v}^{v}$. ∎
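A centralized Python sketch of Phases 1 and 2, instantiated for the semigroup of integers under addition, may help fix ideas; the table `X[a][v]` is assumed to be supplied by the pre-processing step, and all names are illustrative.

```python
def compute_trsf(children, root, ancestors, X):
    """Centralized simulation of Algorithm 1 for the semigroup (Z, +).

    X[a][v]: pre-processed contribution of node a to f(v), defined for
    every v in ancestors[a] (which includes a itself).
    Returns f with f(v) = sum of X[a][v] over all a in the subtree of v.
    """
    f = {}

    def aggregate(a):
        # agg plays the role of X_{a_down}^{v} for each v in ancestors[a]
        agg = {v: X[a][v] for v in ancestors[a]}
        for c in children.get(a, []):
            child_agg = aggregate(c)       # Phase 1: children report upwards
            for v in agg:
                agg[v] += child_agg[v]
        f[a] = agg[a]                      # Phase 2: f(a) is its own aggregate
        return agg

    aggregate(root)
    return f
```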
For any node $v$ and a tree $T$, let us define
$\eta_{T}(v)=|\delta({v}^{\downarrow{T}})|$. In Subsection 4.1, we will show
that $\eta_{T}(\cdot)$ is an efficiently computable tree restricted semigroup
function. This will be enough to find if there exists a min-cut of size $1$ or
a min-cut of size $2,3$ which 1-respects our fixed spanning tree.
Further, in Subsection 4.2, we will define a tree restricted semigroup
function $\zeta_{T}(\cdot)$ which will enable us to find induced cuts of size
2. As mentioned earlier, we will work with a fixed BFS tree $\mathcal{T}$ and
skip the redundant $\mathcal{T}$ in the notations, writing, for example,
$\eta(\cdot)$ instead of $\eta_{\mathcal{T}}(\cdot)$ and $\zeta(\cdot)$
instead of $\zeta_{\mathcal{T}}(\cdot)$.
### 4.1 Min-Cuts of size 1
In this subsection we will prove that the function $\eta(\cdot)$ is an
efficiently computable tree restricted semigroup function. Here we will work
with our fixed BFS tree $\mathcal{T}$. Recall that the function
$\eta:V\rightarrow[m]$ where $\eta(v)=|\delta(v^{\downarrow})|$ and
$\delta(v^{\downarrow})$ is the cut induced by the vertex set
$v^{\downarrow}$. For any node $a\in v^{\downarrow}$, we will use
$H_{a}^{v}\triangleq|\delta(a)\cap\delta({v}^{\downarrow})|$. This is equal to
the number of vertices adjacent to $a$ which are not in the vertex set
$v^{\downarrow}$. The commutative semigroup associated here is the set of
non-negative integers with the addition operator. Thus, for a node $a$ with
$v\in\mathcal{A}\left(a\right)$,
$H_{{a}^{\downarrow}}^{v}=\sum_{a^{\prime}\in{a}^{\downarrow}}H_{a^{\prime}}^{v}$.
We give the pre-processing steps in Algorithm 2, which calculates $H_{a}^{v}$
for all $v\in\mathcal{A}\left(a\right)$.
for each $b$ adjacent to $a$, in parallel: send the ancestor set $\mathcal{A}\left(a\right)$ to $b$
for each $b$ adjacent to $a$, in parallel: receive the ancestor set $\mathcal{A}\left(b\right)$
for $v\in\mathcal{A}\left(a\right)$:
  $H_{a}^{v}\leftarrow|\left\\{b\mid(a,b)\in E,\ v\notin\mathcal{A}\left(b\right)\right\\}|$
  /* node $a$ can execute this step locally because it knows $\mathcal{A}\left(b\right)$ for all $(a,b)\in E$ */
Algorithm 2 Pre-Processing for computing the function $\eta(\cdot)$ (run at every node $a$ to find $H^{v}_{a}$ for all $v\in\mathcal{A}\left(a\right)$)
###### Observation 4.6.
Pre-processing as given in Algorithm 2 takes $O(D)$ time.
###### Proof.
Any node $a$ has at most $D$ ancestors in $\mathcal{T}$. So for every node $b$
adjacent to $a$, it takes $O(D)$ time to communicate the set
$\mathcal{A}\left(a\right)$ to it. Similarly, it takes $O(D)$ time to receive
the set $\mathcal{A}\left(b\right)$ from node $b$. Now $H_{a}^{v}$ for any
$v\in\mathcal{A}\left(a\right)$ will just be an internal computation at node
$a$. ∎
###### Lemma 4.7.
$\eta(\cdot)$ is an efficiently computable tree restricted semigroup function.
###### Proof.
Let $v\in V$. Then $\eta(v)$ is the number of edges going out of the vertex
set ${v}^{\downarrow}$; that is, $\eta(v)$ is the sum over all
$a\in{v}^{\downarrow}$ of the number of vertices adjacent to $a$ which are not
in the set ${v}^{\downarrow}$. Thus
$\eta(v)=\sum_{a\in{v}^{\downarrow}}H_{a}^{v}$. By Observation 4.6, for all
$v\in\mathcal{A}\left(a\right)$, $H_{a}^{v}$ can be computed in $O(D)$ time.
Also, for any $v$, $\eta(v)$ could be as big as the number of edges $m$.
Therefore, for any $a\in V$ and $v\in\mathcal{A}\left(a\right)$ we have $0\leq
H_{a}^{v}\leq H_{{a}^{\downarrow}}^{v}\leq\eta(v)\leq m$. Thus $H_{a}^{v}$,
$H_{{a}^{\downarrow}}^{v}$ and $\eta(v)$ can be represented in $O(\log n)$
bits. Hence, by Lemma 4.5, $\eta(\cdot)$ is an efficiently computable tree
restricted semigroup function. ∎
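Combining the pre-processing of Algorithm 2 with the semigroup aggregation, the following centralized Python sketch computes $\eta(v)$ for every node; the representation and helper names are illustrative assumptions, not the distributed procedure.

```python
def eta_all(nodes, edges, children, root, ancestors):
    """Centralized sketch of eta(v) = |delta(v_down)| for every node v.

    Mirrors Algorithm 2 plus the aggregation: H[a][v] counts the
    neighbors of a lying outside v_down (i.e., v not in their ancestor
    set), and eta(v) sums H[a][v] over all a in v_down.
    """
    adj = {v: set() for v in nodes}
    for u, w in edges:
        adj[u].add(w)
        adj[w].add(u)
    # Pre-processing (Algorithm 2): neighbors exchange ancestor sets.
    H = {a: {v: sum(1 for b in adj[a] if v not in ancestors[b])
             for v in ancestors[a]}
         for a in nodes}
    eta = {}

    def aggregate(a):
        agg = dict(H[a])                   # contribution of a's subtree
        for c in children.get(a, []):
            child_agg = aggregate(c)
            for v in agg:
                agg[v] += child_agg[v]
        eta[a] = agg[a]
        return agg

    aggregate(root)
    return eta
```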
To compute $\eta(\cdot)$, we use Algorithm 1 given in Lemma 4.5. Further,
during the computation of $\eta(\cdot)$, each node $a$ computes
$H_{{a}^{\downarrow}}^{v}$ for all $v\in\mathcal{A}\left(a\right)$, which is
the aggregated value over all the descendants of node $a$. We summarize these
results in the following lemmas.
###### Lemma 4.8.
The function $\eta(\cdot)$ can be computed in $O(D)$ time for every node $v\in
V$.
###### Proof.
In Lemma 4.7, we showed that $\eta(\cdot)$ is an efficiently computable tree
restricted semigroup function defined with respect to the BFS tree
$\mathcal{T}$. Also, $Depth(\mathcal{T})=O(D)$. Thus, by Definition 4.4,
$\eta(\cdot)$ can be computed in $O(D)$ rounds. ∎
###### Lemma 4.9.
For each node $a$ and $v\in\mathcal{A}\left(a\right)$, node a knows
$H_{{a}^{\downarrow}}^{v}=|\delta\left({a}^{\downarrow}\right)\cap\delta\left({v}^{\downarrow}\right)|$
in $O(D)$ rounds.
###### Proof.
During the computation of the tree restricted semigroup function
$\eta(\cdot)$, each node $a$, for every $v\in\mathcal{A}\left(a\right)$,
computes the aggregated value
$H_{{a}^{\downarrow}}^{v}=\sum_{a^{\prime}\in{a}^{\downarrow}}H_{a^{\prime}}^{v}$,
which is the contribution of the nodes in ${a}^{\downarrow}$ towards the
computation of $\eta(v)$. Thus the lemma follows. ∎
###### Theorem 4.10.
Min-cuts of size one (bridge edges) can be found in $O(D)$ time.
###### Proof.
For any node $v\neq r$, the edge
$(\pi\left(v\right),v)\in\delta(v^{\downarrow})$. Thus when $\eta(v)=1$, then
$(\pi\left(v\right),v)$ is the only edge in $\delta(v^{\downarrow})$ and is a
cut edge. ∎
###### Lemma 4.11.
If $\eta(v)=k$ then $\delta(v^{\downarrow})$ is a cut-set of size $k$.
Having found $\eta(\cdot)$ for all the nodes, we downcast the values: each
node $v$ downcasts its $\eta(v)$ value to the vertex set ${v}^{\downarrow}$.
This kind of broadcast is exactly _Broadcast Type-1_ defined in Chapter 2, so
by Lemma 2.5 it can be done in $O(D)$ time. That is, all nodes
$a\in{v}^{\downarrow}$ will have the value of $\eta(v)$ in $O(D)$ time. This
will be useful for the algorithm to find induced cuts of size 2 given in the
next subsection. We quantify the same in the following lemma.
###### Lemma 4.12.
For any node $v$, the nodes in the vertex set ${v}^{\downarrow}$ know
$\eta(v)$ in $O(D)$ rounds.
### 4.2 Min-Cuts of size 2
In this subsection, we will give an algorithm to find min-cuts of size 2. The
theme here will be the use of the tree restricted semigroup function. We will
define a new tree restricted semigroup function $\zeta(\cdot)$ which will be
based on a specially designed semigroup.
Before we get into the details of $\zeta(\cdot)$, we will give details about
cuts of size $2$. Let $A\subset V$ and $|\delta(A)|=2$. The question here is to
find $\delta(A)$. Here, $\delta(A)$ could share either one edge with the tree
$\mathcal{T}$, or it could share $2$ edges with the tree. When $\delta(A)$
shares one edge with the tree then it can be found in $O(D)$ rounds as
described by Lemma 4.11. We make the following observation for the case when
$\delta(A)$ 2-respects the tree. In this case,
$\delta(A)=\left\\{(\pi\left(a\right),a),(\pi\left(w\right),w)\right\\}$ for
some $a,w\in V\setminus r$ and $a\neq w$.
###### Observation 4.13.
Let $A\subset V$ and $|\delta(A)|=2$. Also, let $\delta(A)$ 2-respect the tree
$\mathcal{T}$ then
$\delta(A)=\delta({a}^{\downarrow})\oplus\delta({w}^{\downarrow})$ for some
$a\neq w$ and $a,w\in V\setminus r$. Further either
${{a}^{\downarrow}}\subset{{w}^{\downarrow}}$ (nested) or
${{a}^{\downarrow}}\cap{{w}^{\downarrow}}=\emptyset$ (mutually disjoint).
###### Proof.
Let $\delta(A)=\left\\{(\pi\left(a\right),a),(\pi\left(w\right),w)\right\\}$
such that $a\neq w$ and $a,w\in V\setminus r$. WLOG let
$\ell\left(a\right)\geq\ell\left(w\right)$. Because of the tree structure,
either ${a}^{\downarrow}\subset{w}^{\downarrow}$ or
${a}^{\downarrow}\cap{w}^{\downarrow}=\emptyset$. When
${a}^{\downarrow}\subset{w}^{\downarrow}$ then the induced cut $\delta(A)$ is
the cut set
$({w}^{\downarrow}\setminus{a}^{\downarrow},V\setminus({w}^{\downarrow}\setminus{a}^{\downarrow}))=\delta({w}^{\downarrow}\setminus{a}^{\downarrow})$.
Since ${a}^{\downarrow}\subset{w}^{\downarrow}$ thus
${w}^{\downarrow}\setminus{a}^{\downarrow}={w}^{\downarrow}\oplus{a}^{\downarrow}$.
Hence
$\delta({w}^{\downarrow}\setminus{a}^{\downarrow})=\delta({w}^{\downarrow}\oplus{a}^{\downarrow})$
and finally by Corollary 2.7, we have
$\delta({a}^{\downarrow}\oplus{w}^{\downarrow})=\delta({a}^{\downarrow})\oplus\delta({w}^{\downarrow})$.
Similarly, when ${a}^{\downarrow}\cap{w}^{\downarrow}=\emptyset$ then the
induced cut $\delta(A)$ is the cut set
$({a}^{\downarrow}\cup{w}^{\downarrow},V\setminus({a}^{\downarrow}\cup{w}^{\downarrow}))=\delta({a}^{\downarrow}\cup{w}^{\downarrow})=\delta({a}^{\downarrow}\oplus{w}^{\downarrow})=\delta({a}^{\downarrow})\oplus\delta({w}^{\downarrow})$.
∎
The above observation states that there are two different cases when an
induced cut of size $2$ shares both of its edges with the tree. For any
$A,B\subseteq V$, let $\gamma(A,B)=|\delta(A)\cap\delta(B)|$. In this
subsection, we will prove the following lemma, which is enough to show that
min-cuts of size $2$ can be deterministically found in $O(D)$ rounds.
Moreover, it will also help us find min-cuts of size 3 as given in Chapter 5.
###### Lemma 4.14.
Let $a,w$ be two nodes. WLOG let $\ell\left(a\right)\geq\ell\left(w\right)$.
If $\left\\{(\pi\left(a\right),a),(\pi\left(w\right),w)\right\\}$ is a cut set
induced by ${a}^{\downarrow}\oplus{w}^{\downarrow}$ and if
$\gamma({a}^{\downarrow},{w}^{\downarrow})>0$ then such an induced cut can be
found in $O(D)$ rounds by node $a$.
Using the above lemma we now prove the following theorem.
###### Theorem 4.15.
Min-cuts of size $2$ can be found in $O(D)$ rounds.
###### Proof.
When there is a min-cut of size 2, it either 1-respects the tree or 2-respects
the tree. When it 1-respects the tree, by Lemma 4.11, we know that it can be
found in $O(D)$ rounds because this only requires computation of
$\eta(\cdot)$.
When a min-cut of size 2 2-respects the tree, then by Observation 4.13 we know
that the cut is of the form
$\delta({w}^{\downarrow})\oplus\delta({a}^{\downarrow})$ for some nodes $a$
and $w$. Moreover $\gamma({a}^{\downarrow},{w}^{\downarrow})>0$ since there is
no cut of size $1$. From Lemma 4.14 we know that this can be found in $O(D)$
rounds. ∎
We will now prove Lemma 4.14. When an induced cut of size 2 2-respects the
tree, we know by Observation 4.13 that there are two different cases. For
the easy case when ${{a}^{\downarrow}}\subset{{w}^{\downarrow}}$, we know from
Lemma 4.9 that
$H_{{a}^{\downarrow}}^{w}=|\delta({a}^{\downarrow})\cap\delta({w}^{\downarrow})|=\gamma({{a}^{\downarrow}},{{w}^{\downarrow}})$
is known by node $a$ in $O(D)$ rounds. Also from Lemma 4.12, $\eta(w)$ is
known by all the vertices $a\in{{w}^{\downarrow}}$. Hence, if
$\left\\{(\pi\left(w\right),w),(\pi\left(a\right),a)\right\\}$ is an induced
cut with $a\in{w}^{\downarrow}$ and $a\neq w$, then as per Lemma 3.3 node $a$
has the required information to find the induced cut. The following lemma
summarizes this.
###### Lemma 4.16.
Let $a,w$ be two vertices such that ${a}^{\downarrow}\subset{w}^{\downarrow}$.
Let
$\delta({a}^{\downarrow}\oplus{w}^{\downarrow})=\left\\{(\pi\left(a\right),a),(\pi\left(w\right),w)\right\\}$
be an induced cut, then node $a$ can find such a cut in $O(D)$ rounds.
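For illustration, the local test behind Lemma 4.16 can be sketched as follows; this is a hypothetical sequential rendering in which `eta` and `H_down` stand for the quantities of Lemmas 4.12 and 4.9 that node $a$ already holds.

```python
def nested_two_cuts_at(a, proper_ancestors, eta, H_down):
    """Local test at node a for the nested case a_down subset of v_down:
    the tree edges above a and above v form an induced 2-cut exactly when
    eta(v) - H == 1 and eta(a) - H == 1, where H = H_{a_down}^{v}."""
    cuts = []
    for v in proper_ancestors:             # ancestors of a, excluding a and r
        H = H_down[(a, v)]                 # |delta(a_down) ∩ delta(v_down)|
        if eta[v] - H == 1 and eta[a] - H == 1:
            cuts.append((v, a))            # cut = {(pi(v), v), (pi(a), a)}
    return cuts
```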
The other case is when ${a}^{\downarrow}$ and ${w}^{\downarrow}$ are disjoint
sets. This is the non-trivial part of finding an induced cut of size 2.
Recall from Lemma 3.3 that to make a decision about an induced cut of the
form $\left\\{(\pi\left(a\right),a),(\pi\left(w\right),w)\right\\}$ we require
one of the nodes in the network to know
$\gamma({a}^{\downarrow},{w}^{\downarrow}),\eta(a)$ and $\eta(w)$. The idea
here is for node $a$ (when $\ell\left(a\right)\geq\ell\left(w\right)$) to find
$\eta(w)$ and $\gamma({a}^{\downarrow},{w}^{\downarrow})$, which is quite a
challenge because there does not exist a straightforward way through
broadcast or convergecast. Moreover, there may not even exist an edge between
the vertices $a$ and $w$. To deal with this, we introduce a new tree
restricted semigroup function $\zeta(\cdot)$.
The tree restricted semigroup function $\zeta(\cdot)$ is based on a specially
defined semigroup $\mathcal{Z}$. Before we give further details and intuition
about the function $\zeta(\cdot)$, let us define the semigroup $\mathcal{Z}$.
There are two special elements $\mathbb{e}$ and $\nparallel$ in the semigroup
$\mathcal{Z}$. Apart from these special elements, every other
$Z\in\mathcal{Z}$ is a four-tuple, written as $Z=\langle
Z[1],Z[2],Z[3],Z[4]\rangle$, where $Z[1],Z[2]\in V$ and
$Z[3],Z[4]\in\mathbb{Z}^{+}$.
The operator associated with the semigroup $\mathcal{Z}$ is $\odot$. Special
elements $\mathbb{e}$ and $\nparallel$ are the identity and the absorbing
element with respect to the operator $\odot$ of the semigroup $\mathcal{Z}$.
These special elements are defined such that for any
$Z\in\mathcal{Z}$, $Z\odot\mathbb{e}=Z=\mathbb{e}\odot Z$ and $Z\
\odot\nparallel\ =\ \nparallel\ =\ \nparallel\odot Z$. (Here the symbols
$\mathbb{0},\mathbb{1}$ are not chosen for the identity and absorbing
elements because $\mathbb{e}$ corresponds to zero edges, while $\nparallel$
acts as a zero of the semigroup, as we will see in Property 4.18.) In
Algorithm 3, we define the operator $\odot$.
// for $i\in\left\\{1,2,3,4\right\\}$ let $Z_{1}[i]$ be the $i^{\text{th}}$
element in 4-tuple of $Z_{1}$. Similarly for $Z_{2}$
1 if _one of $Z_{1},Z_{2}$ is $\nparallel$_ then return $\nparallel$
2 else if _$Z_{1}=\mathbb{e}$_ then return $Z_{2}$
3 else if _$Z_{2}=\mathbb{e}$_ then return $Z_{1}$
4 else if _$Z_{1}[1:3]=Z_{2}[1:3]$_ then return $\langle
Z_{1}[1],Z_{1}[2],Z_{1}[3],Z_{1}[4]+Z_{2}[4]\rangle$
5 else return $\nparallel$
Algorithm 3 $Z_{1}\odot Z_{2}$ (Both $Z_{1},Z_{2}\in\mathcal{Z}$)
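For concreteness, here is a direct Python transcription of the operator $\odot$ of Algorithm 3; representing $\mathbb{e}$ and $\nparallel$ by sentinel objects is our implementation choice, not part of the definition.

```python
IDENTITY = object()   # the identity element e (zero edges seen so far)
ABSORB = object()     # the absorbing element (no single common ancestor w)

def zeta_op(z1, z2):
    """Operator of Algorithm 3: four-tuples <w, pi(w), eta(w), count> merge
    by adding their counts when the first three entries agree; any
    disagreement collapses the result to the absorbing element."""
    if z1 is ABSORB or z2 is ABSORB:
        return ABSORB
    if z1 is IDENTITY:
        return z2
    if z2 is IDENTITY:
        return z1
    if z1[:3] == z2[:3]:
        return (z1[0], z1[1], z1[2], z1[3] + z2[3])
    return ABSORB
```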
###### Observation 4.17.
$\odot\ (\texttt{zeta-operator})$ is commutative and associative.
###### Proof.
Commutativity of $\odot$ is straightforward and is implied by the
commutativity of addition over $\mathbb{Z}^{+}$. For associativity, let’s
imagine there exist $Z_{1},Z_{2},Z_{3}\in\mathcal{Z}$. Further let’s imagine
that $Z_{1},Z_{2},Z_{3}\notin\left\\{\nparallel,\mathbb{e}\right\\}$. Now when
the first two elements of each of the four tuples $Z_{1},Z_{2},Z_{3}$ is same,
then associativity is trivial and implied by the associativity of the addition
operator over $\mathcal{Z}^{+}$. When the first two elements are not equal in
$Z_{1},Z_{2},Z_{3}$ then $Z_{1}\odot(Z_{2}\odot Z_{3})=(Z_{1}\odot Z_{2})\odot
Z_{3}=\ \nparallel$. Similarly, If one of $Z_{1},Z_{2},Z_{3}$ is $\nparallel$,
then also $Z_{1}\odot(Z_{2}\odot Z_{3})=(Z_{1}\odot Z_{2})\odot Z_{3}=\
\nparallel$, hence associativity is implied. And lastly when one of
$Z_{1},Z_{2},Z_{3}$ is $\mathbb{e}$, then $Z_{1}\odot(Z_{2}\odot Z_{3})$
becomes an operation between just two elements and associativity is implied
directly. ∎
For any node $v$, define the edge set
$E\left\\{\zeta(v)\right\\}\triangleq\delta({v}^{\downarrow})\setminus(\pi(v),v)$.
The value of $\zeta(v)$ depends on the edge set $E\left\\{\zeta(v)\right\\}$.
For any node $a$ and $v\in\mathcal{A}\left(a\right)$, let
$E\left\\{Z_{a}^{v}\right\\}\triangleq
E\left\\{\zeta(v)\right\\}\cap\delta(a)$. We define $Z_{a}^{v}$ as per the
Property 4.18.
###### Property 4.18.
For any node $a$ and $v\in\mathcal{A}\left(a\right)$, $Z_{a}^{v}$ takes one of
the following three values:
1. i.
$\mathbb{e}$ when $E\left\\{Z_{a}^{v}\right\\}=\emptyset$
2. ii.
${\langle w,\pi\left(w\right),\eta(w),\gamma(a,{{w}^{\downarrow}})\rangle}$
when there exists a node $w$ at level $\ell\left(v\right)$ such that all the
edges in $E\left\\{Z_{a}^{v}\right\\}$ have one endpoint in
${{w}^{\downarrow}}$
3. iii.
$\nparallel$ otherwise
Similar to the previous subsection, the semigroup function $\zeta(\cdot)$ is
defined as $\zeta(v)\triangleq\bigodot_{a\in{v}^{\downarrow}}Z_{a}^{v}$. We
will say that $Z_{{{a}^{\downarrow}}}^{v}$ is the contribution of the nodes in
vertex set ${a}^{\downarrow}$ to the computation of $\zeta(v)$. We will first
prove, in Lemma 4.19, that for every node $a\in V$ and
$v\in\mathcal{A}\left(a\right)$, $Z_{a}^{v}$ can be computed in $O(D)$ time;
then, using Definition 4.1 and Lemma 4.5, we will prove that $\zeta(\cdot)$ is
an efficiently computable tree restricted semigroup function.
For any $l<\ell\left(a\right)$, let $\alpha(a,l)$ be the ancestor of node $a$
at level $l$. For notational convenience let $\alpha(a,\ell\left(a\right))=a$.
###### Lemma 4.19.
For every node $a\in V$ and $v\in\mathcal{A}\left(a\right)$, $Z_{a}^{v}$ can be
computed in $O(D)$ time as defined in Property 4.18.
###### Proof.
Here we will give an algorithmic proof. We describe the steps in Algorithm 4,
and we will prove that this algorithm correctly finds $Z_{a}^{v}$ and takes
$O(D)$ time.
1 $\mathcal{N}(a)\leftarrow\left\\{b\mid(a,b)\in E,(a,b)\ \text{is a non-tree
edge}\right\\}$
2 for _all $b\in\mathcal{N}(a)$_ parallely send tuples
$\langle\ell\left(u\right),\eta(u),u\rangle$ for all
$u\in\mathcal{A}\left(a\right)$ to $b$
3 for _all $b\in\mathcal{N}(a)$_ parallely receive tuples
$\langle\ell\left(u\right),\eta(u),u\rangle$ for all
$u\in\mathcal{A}\left(b\right)$
4 $l_{\min}\leftarrow\min(\left\\{\ell\left(b\right)\ |\
b\in\mathcal{N}(a)\right\\}\cup\ell(a))$
5 if _$l_{\min}\neq\ell(a)$_ then
6 for _$v\in\left\\{\alpha(a,l)\mid l\in(l_{\min},\ell(a)]\right\\}$_ do
$Z_{a}^{v}=\ \nparallel$
7
8for _$l=l_{\min}$ to $1$_ do
9 $v\leftarrow\alpha(a,l)$
// $A^{l}$ is set of ancestors at level $l$ of nodes in $\mathcal{N}(a)$
except $v$
10 $A^{l}\leftarrow\left\\{\alpha(b,l)\mid
b\in\mathcal{N}(a)\right\\}\setminus v$
11 if _$A^{l}=\emptyset$_ then $Z^{v}_{a}\leftarrow\mathbb{e}$
12 else if _$|A^{l}|=1$_ then
13 $w\leftarrow$ element in singleton $A^{l}$
14 $\gamma({{w}^{\downarrow}},a)\leftarrow|\left\\{b\mid
b\in\mathcal{N}(a),\alpha(b,l)=w\right\\}|$
15 $Z^{v}_{a}\leftarrow\langle
w,\pi\left(w\right),\eta(w),\gamma({{w}^{\downarrow}},a)\rangle$
16 else $Z^{v}_{a}\leftarrow\nparallel$
17
Algorithm 4 Pre-Processing step for $\zeta$ (run at all node $a$ for finding
$Z_{a}^{v}$ for all $v\in\mathcal{A}\left(a\right)$)
As per Property 4.18, $Z_{a}^{v}$ depends only on the nodes adjacent to the
node $a$. The required information is sent and received from the adjacent
nodes in Algorithm 4 at lines 2 and 3. Each of these takes only $O(D)$ time
because at most $O(D)$ messages of $O(\log n)$ bits are communicated.
$\mathcal{N}(a)$ is the set of non-tree neighbors of node $a$, found in line
1. In Algorithm 4, the decision in regard to $Z_{a}^{v}$ is taken based on
$A^{\ell(v)}$, which is the set of ancestors at level $\ell\left(v\right)$ of
the nodes in $\mathcal{N}(a)$, excluding the node $v$.
When $A^{\ell(v)}=\emptyset$, then in line 11, $Z_{a}^{v}$ is set to
$\mathbb{e}$. For bullet (i) of Property 4.18, we need to prove that
$E\left\\{Z_{a}^{v}\right\\}=\emptyset\implies A^{\ell(v)}=\emptyset$. Here
$E\left\\{Z_{a}^{v}\right\\}=(\delta({v}^{\downarrow})\setminus(\pi\left(v\right),v))\cap\delta(a)$.
When $a\neq v$, $E\left\\{Z_{a}^{v}\right\\}$ is the set of all the edges
which are incident on $a$ and go out of the vertex set ${v}^{\downarrow}$; if
$E\left\\{Z_{a}^{v}\right\\}=\emptyset$, then all edges incident on $a$ have
the other endpoint in the vertex set ${v}^{\downarrow}$, and thus
$A^{\ell(v)}=\emptyset$. When $a=v$, $E\left\\{Z_{v}^{v}\right\\}$ is the set
of all edges other than $(\pi\left(v\right),v)$ which are incident on $v$ and
go out of the vertex set ${v}^{\downarrow}$. Note that
$(\pi\left(v\right),v)$ is a tree edge, so
$\pi\left(v\right)\notin\mathcal{N}(v)$ since line 1 collects only non-tree
neighbors; thus if $E\left\\{Z_{v}^{v}\right\\}=\emptyset$, then no edge other
than $(\pi\left(v\right),v)$ incident on $v$ goes out of ${v}^{\downarrow}$,
and again $A^{\ell(v)}=\emptyset$. Hence in both cases $Z_{a}^{v}$ is
correctly set to $\mathbb{e}$ in line 11.
For the other two bullets in Property 4.18, we employ the same idea. If
$A^{\ell\left(v\right)}=\left\\{w\right\\}$ (for some node $w$) then all the
neighbors adjacent to node $a$ have a common ancestor $w$ other than $v$, and
thus $Z_{a}^{v}$ captures the required information about node $w$. And when
$|A^{\ell\left(v\right)}|>1$ it simply means there is more than one such node
at level $\ell\left(v\right)$, as given in bullet $(iii)$ of Property 4.18. ∎
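The pre-processing of Algorithm 4 can be simulated sequentially as below. This is a hedged sketch: `parent`, `level`, `eta` and `nontree_nbrs` are assumed inputs (gathered in the distributed algorithm through the message exchanges of lines 2 and 3), the root is taken to be at level $0$, and node $a$ is assumed to have at least one non-tree neighbor.

```python
def ancestors_by_level(x, parent, level):
    """Map level -> alpha(x, l), the ancestor of x at level l (includes x)."""
    out = {level[x]: x}
    while x in parent:                     # walk up to the root r
        x = parent[x]
        out[level[x]] = x
    return out

def preprocess_Z(a, parent, level, eta, nontree_nbrs):
    """Sketch of Algorithm 4 at node a: Z_a^v for every ancestor v of a."""
    alpha_a = ancestors_by_level(a, parent, level)
    alpha_n = {b: ancestors_by_level(b, parent, level) for b in nontree_nbrs}
    Z = {}
    l_min = min(min(level[b] for b in nontree_nbrs), level[a])
    # Some neighbor lies at or above level l: no ancestor w at level l works.
    for l in range(l_min + 1, level[a] + 1):
        Z[alpha_a[l]] = ABSORB
    for l in range(l_min, 0, -1):
        v = alpha_a[l]
        A_l = {alpha_n[b][l] for b in nontree_nbrs} - {v}
        if not A_l:
            Z[v] = IDENTITY
        elif len(A_l) == 1:
            (w,) = A_l
            cnt = sum(1 for b in nontree_nbrs if alpha_n[b][l] == w)
            Z[v] = (w, parent[w], eta[w], cnt)  # <w, pi(w), eta(w), gamma>
        else:
            Z[v] = ABSORB
    return Z
```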
Similar to the previous subsection, we define
$Z_{{a}^{\downarrow}}^{{v}}\triangleq\bigodot_{a^{\prime}\in{a}^{\downarrow}}Z_{a^{\prime}}^{v}$.
Since for any $a^{\prime}\in{a}^{\downarrow}$, $Z_{{{a^{\prime}}}}^{v}$
depends on $E\left\\{\zeta(v)\right\\}\cap\delta({a^{\prime}})$ thus
$Z_{{{a}^{\downarrow}}}^{v}$ depends on
$E\left\\{Z_{{{a}^{\downarrow}}}^{v}\right\\}=\bigcup_{a^{\prime}\in{a}^{\downarrow}}E\left\\{Z_{{{a^{\prime}}}}^{v}\right\\}=E\left\\{\zeta(v)\right\\}\cap\delta({a}^{\downarrow})$.
The following lemma about $Z_{{a}^{\downarrow}}^{v}$ is now implicit and
immediately follows from Property 4.18.
###### Lemma 4.20.
For any node $a$ and $v\in\mathcal{A}\left(a\right)$,
$Z_{{a}^{\downarrow}}^{v}$ depends on the edge set
$E\left\\{Z_{{a}^{\downarrow}}^{v}\right\\}=\bigcup\limits_{a^{\prime}\in{a}^{\downarrow}}E\left\\{Z_{a^{\prime}}^{v}\right\\}$
and takes one of the following values
1. a)
$\mathbb{e}$ when $E\left\\{Z_{{a}^{\downarrow}}^{v}\right\\}=\emptyset$
2. b)
${\langle
w,\pi\left(w\right),\eta(w),\gamma({a}^{\downarrow},{{w}^{\downarrow}})\rangle}$
when there exists a node $w$ at level $\ell\left(v\right)$ such that all the
edges in $E\left\\{Z_{{a}^{\downarrow}}^{v}\right\\}$ have one endpoint in
${{w}^{\downarrow}}$
3. c)
$\nparallel$ otherwise
Basically, $Z_{{a}^{\downarrow}}^{v}$ captures if there exists some node $w$
at level $\ell\left(v\right)$ such that all the edges which are in
$\delta({a}^{\downarrow})$ and go out of the vertex set ${v}^{\downarrow}$,
have the other end point in the vertex set ${w}^{\downarrow}$.
###### Lemma 4.21.
$\zeta(\cdot)$ is an efficiently computable tree restricted semigroup function.
###### Proof.
In Lemma 4.19, it was shown that there exists an $O(D)$ time algorithm to
compute $Z_{a}^{v}$ for any node $a$ and $v\in\mathcal{A}\left(a\right)$.
Further, each such $Z_{a}^{v}$ is either one of the special symbols
$\mathbb{e},\nparallel$ or a four-tuple. This four-tuple is of $O(\log n)$
bits because its first two elements are node ids and the last two are integers
which cannot exceed the number of edges, so $O(\log n)$ bits suffice to
represent them. Now invoking Lemma
4.5, we know that $\zeta(\cdot)$ is an efficiently computable tree restricted
semigroup function. ∎
###### Lemma 4.22.
For all nodes $a$ and $v\in\mathcal{A}\left(a\right)$,
$Z_{{a}^{\downarrow}}^{v}$ can be computed in $O(D)$ time.
###### Proof.
Since $\zeta(\cdot)$ is an efficiently computable tree restricted semigroup
function, it can be computed in $O(D)$ rounds. Moreover, during the
computation of $\zeta(\cdot)$, for every node $a$ and
$v\in\mathcal{A}\left(a\right)$, $Z_{{a}^{\downarrow}}^{v}$ is also computed. ∎
###### Lemma 4.23.
Let $a,w$ be two vertices such that
${w}^{\downarrow}\cap{a}^{\downarrow}=\emptyset$. WLOG, let
$\ell\left(a\right)\geq\ell\left(w\right)$. Let
$\left\\{(\pi\left(a\right),a),(\pi\left(w\right),w)\right\\}$ be an induced
cut such that $\gamma({w}^{\downarrow},{a}^{\downarrow})>0$, then node $a$ can
find it in $O(D)$ rounds.
###### Proof.
Here we have to prove that node $a$ will have access to $\eta(w)$ and
$\gamma({w}^{\downarrow},{a}^{\downarrow})$ if such a min cut occurs. Then
confirming the min-cut is easy by Lemma 3.3. Let $v$ be the ancestor of node
$a$ at level $\ell\left(w\right)$. By Lemma 4.20, we know that
$Z_{{a}^{\downarrow}}^{v}=\langle
w,\pi\left(w\right),\eta(w),\gamma({w}^{\downarrow},{a}^{\downarrow})\rangle$
in this case. Thus, the required information will be available at node $a$. ∎
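Combined with the scan of Algorithm 5 below, the decision at node $a$ reduces to a small local test, sketched here with illustrative names (`Z_down_a` for the map $v\mapsto Z_{{a}^{\downarrow}}^{v}$, together with the sentinels of the earlier $\odot$ sketch).

```python
def disjoint_two_cut_at(a, Z_down_a, eta):
    """Local test at node a for the disjoint case (Lemma 4.23): a usable
    aggregate Z_{a_down}^{v} names a candidate w together with eta(w) and
    gamma(a_down, w_down); the pair of tree edges above w and a forms an
    induced 2-cut when exactly one further edge leaves each side."""
    for v, Zv in Z_down_a.items():         # v ranges over ancestors of a
        if Zv is IDENTITY or Zv is ABSORB:
            continue
        w, _pi_w, eta_w, gamma = Zv
        if eta[a] - gamma == 1 and eta_w - gamma == 1:
            return (w, a)                  # cut = {(pi(w), w), (pi(a), a)}
    return None
```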
###### Proof of Lemma 4.14.
Suppose the induced cut 2-respects the tree, that is, it is a symmetric
difference of two tree cuts $\delta({{a}^{\downarrow}})$ and
$\delta({{w}^{\downarrow}})$ for some $a,w\in V\setminus r$. As per
Observation 4.13, two cases can occur: the nested case when
${{a}^{\downarrow}}\subset{{w}^{\downarrow}}$ and the mutually disjoint case
when ${{a}^{\downarrow}}\cap{{w}^{\downarrow}}=\emptyset$; results regarding
them are given in Lemma 4.16 and Lemma 4.23. In lines 5 and 8 of Algorithm 5,
we give details about the actual search.
1 Available Info: __Each node $a$, $\forall v\in\mathcal{A}\left(a\right)$
knows $\eta(v),H_{{a}^{\downarrow}}^{v}$ and $Z_{{a}^{\downarrow}}^{v}$ (Lemma
4.12,4.9,4.22)
1 for _$l=1$ to $\ell(a)$_ do
$v\leftarrow$ $\alpha(a,l)$ // ancestor of node $a$ at level $l$
2 if
_$Z_{{{a}^{\downarrow}}}^{v}\notin\left\\{\mathbb{e},\nparallel\right\\}$_
then
3 Let $Z_{{{a}^{\downarrow}}}^{v}=\langle
w,\pi\left(w\right),\eta(w),\gamma({{w}^{\downarrow}},{{a}^{\downarrow}})\rangle$
4 if _$\eta(a)-\gamma({{w}^{\downarrow}},{{a}^{\downarrow}})=1$ &
$\eta(w)-\gamma({{w}^{\downarrow}},{{a}^{\downarrow}})=1$_ then
5 $\left\\{(\pi\left(w\right),w),(\pi\left(a\right),a)\right\\}$ is a 2-cut
6
7 if _$v\neq a$ & $\eta(v)-H_{{{a}^{\downarrow}}}^{v}=1$ &
$\eta(a)-H_{{{a}^{\downarrow}}}^{v}=1$_ then
8 $\left\\{(\pi\left(v\right),v),(\pi\left(a\right),a)\right\\}$ is a 2-cut
9
Algorithm 5 Algorithm to find 2-cut for node $a$
∎
In this chapter, we formally defined tree restricted semigroup functions.
Further, we introduced two different tree restricted semigroup functions
$\eta(\cdot)$ and $\zeta(\cdot)$. We showed that these are enough to find
min-cuts of size 1 and 2. In the next chapter, we will give algorithms to
find min-cuts of size $3$.
## Chapter 5 Min-Cuts of size three
In this chapter, we will give an algorithm to find a min-cut of size three.
The idea here is to use Lemma 3.5 and Lemma 3.6 given in Chapter 3, which
characterize the min-cuts of size $3$. Having laid down these characterization
lemmas, the critical aspect which remains in order to find a min-cut of size 3
(if it exists) is to communicate the quantities required by the characterizing
lemmas to at least one node in the network whenever a min-cut of the kind
occurs. For this chapter, we will assume that min-cuts of size $1$ or $2$ do
not occur.
Recall that we fixed a BFS tree $\mathcal{T}$ in the beginning. If there
exists a min-cut of size 3 in the network, there could be 7 different cases.
These cases differ from one another based on the relation of the min-cut to
the fixed BFS tree $\mathcal{T}$. We enumerate these cases in Lemma 5.1 and
give an algorithmic outline of how they can be found in Table 5.1.
###### Lemma 5.1.
If there exists a min-cut of size 3 then the following cases may arise for
some $v_{1},v_{2},v_{3}\in V\setminus r$ and $e,f$ as non-tree edge.
1. CASE-1
$\left\\{(\pi\left(v_{1}\right),v_{1}),e,f\right\\}$ is a min-cut
2. CASE-2
$\left\\{(\pi\left(v_{1}\right),v_{1}),(\pi\left(v_{2}\right),v_{2}),e\right\\}$
is a min-cut such that ${v_{2}}^{\downarrow}\subset{v_{1}}^{\downarrow}$
3. CASE-3
$\left\\{(\pi\left(v_{1}\right),v_{1}),(\pi\left(v_{2}\right),v_{2}),e\right\\}$
is a min-cut such that
${v_{2}}^{\downarrow}\cap{v_{1}}^{\downarrow}=\emptyset$
4. CASE-4
$\left\\{(\pi\left(v_{1}\right),v_{1}),(\pi\left(v_{2}\right),v_{2}),(\pi\left(v_{3}\right),v_{3})\right\\}$
is a min-cut and
${v_{3}}^{\downarrow}\subset{v_{2}}^{\downarrow}\subset{v_{1}}^{\downarrow}$
5. CASE-5
$\left\\{(\pi\left(v_{1}\right),v_{1}),(\pi\left(v_{2}\right),v_{2}),(\pi\left(v_{3}\right),v_{3})\right\\}$
is a min-cut such that ${v_{2}}^{\downarrow}\subset{v_{1}}^{\downarrow}$,
${v_{3}}^{\downarrow}\subset{v_{1}}^{\downarrow}$ and
${v_{2}}^{\downarrow}\cap{v_{3}}^{\downarrow}=\emptyset$
6. CASE-6
$\left\\{(\pi\left(v_{1}\right),v_{1}),(\pi\left(v_{2}\right),v_{2}),(\pi\left(v_{3}\right),v_{3})\right\\}$
is a min-cut and ${v_{3}}^{\downarrow},{v_{2}}^{\downarrow}$ and
${v_{1}}^{\downarrow}$ are pairwise mutually disjoint
7. CASE-7
$\left\\{(\pi\left(v_{1}\right),v_{1}),(\pi\left(v_{2}\right),v_{2}),(\pi\left(v_{3}\right),v_{3})\right\\}$
is a min-cut and ${v_{3}}^{\downarrow}\subset{v_{2}}^{\downarrow}$,
${v_{1}}^{\downarrow}\cap{v_{2}}^{\downarrow}=\emptyset$ and
${v_{1}}^{\downarrow}\cap{v_{3}}^{\downarrow}=\emptyset$
###### Proof.
From Observation 3.1, we know that if there exists a min-cut of size $3$
then it shares either $1,2$ or $3$ edges with the tree $\mathcal{T}$. When a
min-cut of size $3$ shares one edge with the tree then CASE-1 applies; here
two edges are non-tree edges.
When a min-cut of size $3$ shares 2 edges with the tree $\mathcal{T}$ then
there exist two tree edges. For some nodes $v_{1},v_{2}\in V\setminus r$ let
these edges be $(\pi\left(v_{1}\right),v_{1}),(\pi\left(v_{2}\right),v_{2})$.
WLOG let $\ell\left(v_{1}\right)\leq\ell\left(v_{2}\right)$. Similar to
Observation 4.13, there could be two cases here: either
${v_{2}}^{\downarrow}\subset{v_{1}}^{\downarrow}$ or
${v_{1}}^{\downarrow}\cap{v_{2}}^{\downarrow}=\emptyset$, which are described
in CASE-2 and CASE-3 respectively.
The non-trivial part is when the min-cut of size 3 shares all 3 edges with
the tree. Here we will have 4 different cases. Let these cut edges be
$(\pi\left(v_{1}\right),v_{1}),(\pi\left(v_{2}\right),v_{2})$ and
$(\pi\left(v_{3}\right),v_{3})$, and to begin with let
$\ell\left(v_{1}\right)<\ell\left(v_{2}\right)<\ell\left(v_{3}\right)$. We
start with the case when
${v_{3}}^{\downarrow}\subset{v_{2}}^{\downarrow}\subset{v_{1}}^{\downarrow}$,
which is described in CASE-4. When
${v_{3}}^{\downarrow}\not\subset{v_{2}}^{\downarrow}$, then since
$v_{2},v_{3}$ are distinct we have
${v_{2}}^{\downarrow}\cap{v_{3}}^{\downarrow}=\emptyset$, which is CASE-5.
Notice that in both CASE-4 and CASE-5 we have
$({v_{2}}^{\downarrow}\cup{v_{3}}^{\downarrow})\subset{v_{1}}^{\downarrow}$.
Now we move to the different scenario where
$({v_{2}}^{\downarrow}\cup{v_{3}}^{\downarrow})\not\subset{v_{1}}^{\downarrow}$.
Here also we may have two cases: when
${v_{2}}^{\downarrow}\cap{v_{3}}^{\downarrow}=\emptyset$ we get CASE-6, and
when ${v_{3}}^{\downarrow}\subset{v_{2}}^{\downarrow}$ we get CASE-7. Note
that for CASE-6 and CASE-7 we may not require
$\ell\left(v_{1}\right)\leq\ell\left(v_{2}\right)$ and
$\ell\left(v_{1}\right)\leq\ell\left(v_{3}\right)$. ∎
| Relation to $\mathcal{T}$ | Characterization | Case | Technique |
|---|---|---|---|
| 1-respects $\mathcal{T}$ | - | CASE-1 | check if for some $v$, $\eta(v)=3$ |
| 2-respects $\mathcal{T}$ | Lemma 3.5 | CASE-2 | _Broadcast Type - 1_ Section 2.1.2 |
| | | CASE-3 | 2-Sketch Section 5.1 |
| 3-respects $\mathcal{T}$ | Lemma 3.6 | CASE-4 | _Broadcast Type - 2_ Section 2.1.2 |
| | | CASE-5 | Layered Algorithm Section 5.2 |
| | | CASE-6 | 3-Sketch Section 5.1 |
| | | CASE-7 | Reduced 2-Sketch Section 5.1 |
Table 5.1: Overview of the case-structure of min-cut of size $3$. A min-cut of
size $3$ if exists may share $1,2$ or $3$ edges with the fixed BFS tree
$\mathcal{T}$.
The different cases as mentioned in Lemma 5.1 are pictorially shown in Figure
5.1.
(a) CASE-2 (Either of $\alpha$ or $\beta$ is 1 and the other is $0$)
(b) CASE-3 (Either of $\alpha$ or $\beta$ is 1 and the other is $0$)
(c) CASE-4 (Both $\alpha$ and $\beta$ are non-zero)
(d) CASE-5 (At least two of $\alpha$, $\beta$ and
$\gamma({v_{2}}^{\downarrow},{v_{3}}^{\downarrow})$ are non-zero)
(e) CASE-6 (At least two of
$\gamma({v_{1}}^{\downarrow},{v_{2}}^{\downarrow})$,
$\gamma({v_{3}}^{\downarrow},{v_{2}}^{\downarrow})$,
$\gamma({v_{1}}^{\downarrow},{v_{3}}^{\downarrow})$ are non-zero)
(f) CASE-7 (Both $\alpha$ and $\beta$ are non-zero)
Figure 5.1: Different cases of a min-cut of size three. Each figure in the
above examples is a snippet of the tree and shows a different case of a
min-cut of size 3. Red edges are cut edges. Each shaded region corresponds to
a vertex set; sets with the same color correspond to one side of the cut.
Thick edges with arrow ends may correspond to zero or more edges between two
vertex sets; the label on these edges represents the actual number of edges.
Among these cases, CASE-1 is simple: for some node $v_{1}$ the induced cut is
$(V\setminus{v_{1}}^{\downarrow},{v_{1}}^{\downarrow})$, so we just need to
find the size of the tree cut $\eta(v_{1})$; if there exists such a min-cut of
size $3$, then $\eta(v_{1})=3$. This takes $O(D)$ time as shown in Lemma 4.11.
Among the other cases, only CASE-2 and CASE-4 have a simple broadcast-based
algorithm which is enough to let at least one of the nodes know the required
quantities as per Lemma 3.5 and Lemma 3.6. We give the details in the
following lemmas.
###### Lemma 5.2.
When there exists a min-cut of size 3 as given in CASE-2 (for some nodes
$v_{1},v_{2}\in V\setminus r$ and a non-tree edge $e$
$\left\\{(\pi\left(v_{1}\right),v_{1}),(\pi\left(v_{2}\right),v_{2}),e\right\\}$
is a min-cut such that ${v_{2}}^{\downarrow}\subset{v_{1}}^{\downarrow}$),
then the cut edges can be found in $O(D)$ time.
###### Proof.
From Chapter 4, we know that each node $v$ knows
$\eta(v)=|\delta({v}^{\downarrow})|$. Also for each
$u\in\mathcal{A}\left(v\right)$, $v$ knows $\eta(u)$ and
$H^{u}_{{v}^{\downarrow}}$ by Lemma 4.12 and 4.9. Thus if there exists an
induced cut as in CASE-2 then we just need to make a simple evaluation based
on Lemma 3.5. We give the details of the evaluation in Algorithm 6.
1 Available Info: __Each node $x$, $\forall v\in\mathcal{A}\left(x\right)$
knows $\eta(v),H_{{x}^{\downarrow}}^{v}$ (Lemma 4.12 ,4.9)
1 for _$v\in\mathcal{A}\left(x\right)\setminus\left\\{x,r\right\\}$_ do
2 if _$\eta(v)-1=H_{{x}^{\downarrow}}^{v}=\eta(x)-2$ OR
$\eta(v)-2=H_{{x}^{\downarrow}}^{v}=\eta(x)-1$_ then
3 $\delta({v}^{\downarrow}\oplus{x}^{\downarrow})$ is an induced cut of size 3
4
5
Algorithm 6 Algorithm to find an induced cut of size $3$ as given in CASE-2
(for some nodes $v_{1},v_{2}\in V\setminus r$ and a non-tree edge $e$
$\left\\{(\pi\left(v_{1}\right),v_{1}),(\pi\left(v_{2}\right),v_{2}),e\right\\}$
is a min-cut such that ${v_{2}}^{\downarrow}\subset{v_{1}}^{\downarrow}$) run
on all node $x\in V\setminus r$
∎
###### Lemma 5.3.
When there exists a min-cut of size 3 as given in CASE-4 (For some
$v_{1},v_{2},v_{3}\in V\setminus r$,
$\left\\{(\pi\left(v_{1}\right),v_{1}),(\pi\left(v_{2}\right),v_{2}),(\pi\left(v_{3}\right),v_{3})\right\\}$
is a min-cut and
${v_{3}}^{\downarrow}\subset{v_{2}}^{\downarrow}\subset{v_{1}}^{\downarrow}$)
then the cut edges can be found in $O(D^{2})$ time.
###### Proof.
By Lemma 4.9, each node $a$ knows
$H_{{a}^{\downarrow}}^{v}=\gamma({a}^{\downarrow},{v}^{\downarrow})$ for all
$v\in\mathcal{A}\left(a\right)$. Further, for any node $a\in V\setminus r$ we
want to make sure that for any two nodes
$v,u\in\mathcal{A}\left(a\right)\setminus\left\\{a,r\right\\}$, node $a$ knows
$H_{{u}^{\downarrow}}^{v}$ if $\ell\left(u\right)>\ell\left(v\right)$. For
this, each node $x$ at level $i$ has $i-1$ such quantities to broadcast to its
descendants in ${x}^{\downarrow}$. This is similar to _Broadcast Type-2_ and
takes $O(D^{2})$ rounds. After this step every node $x$ knows
$H_{{y}^{\downarrow}}^{z}$ for all
$y,z\in\mathcal{A}\left(x\right)\setminus\left\\{r,x\right\\}$ and
$\ell\left(y\right)>\ell\left(z\right)$. Now at each node $x$, to determine if
$\left\\{(\pi\left(x\right),x),(\pi\left(y\right),y),(\pi\left(z\right),z)\right\\}$
is a min-cut, node $x$ checks whether
$\eta(x)-1=H_{{x}^{\downarrow}}^{y}+H_{{x}^{\downarrow}}^{z}$,
$\eta(y)-1=H_{{y}^{\downarrow}}^{z}+H_{{x}^{\downarrow}}^{y}$ and
$\eta(z)-1=H_{{x}^{\downarrow}}^{z}+H_{{y}^{\downarrow}}^{z}$. ∎
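The final check of Lemma 5.3 is a purely local evaluation; a minimal sketch, with the hypothetical map `H[(u, v)]` standing for $H_{{u}^{\downarrow}}^{v}$, could read:

```python
def case4_is_cut(x, y, z, eta, H):
    """Test at node x for CASE-4, with x_down ⊂ y_down ⊂ z_down: the three
    tree edges above x, y, z form an induced cut of size 3 iff each side
    contributes exactly the expected crossing edges (Lemma 3.6)."""
    return (eta[x] - 1 == H[(x, y)] + H[(x, z)]
            and eta[y] - 1 == H[(y, z)] + H[(x, y)]
            and eta[z] - 1 == H[(x, z)] + H[(y, z)])
```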
For the other cases the problem boils down to efficiently computing the
required quantities as per Lemma 3.5 and 3.6 and communicating them to at
least one node in the network; this node can then make the required decision
about the min-cut of size 3. Unfortunately, simple broadcast and convergecast
techniques do not seem plausible for the remaining cases, because of the
arbitrary placement of the nodes in the tree.
In the remaining part of this chapter, we introduce two new techniques which
take care of this. In Section 5.1, we give the sketching technique for
CASE-3, CASE-6 and CASE-7. Further, in Section 5.2, we give the layered
algorithm, which is enough to find the min-cut given by CASE-5 if it exists.
### 5.1 Graph Sketching
In this section, we will introduce our graph sketching technique. Recall from
Lemma 3.5 and Lemma 3.6 that to make a decision about a min-cut of size $3$,
a node requires certain information about other nodes. The whole graph has as
many as $n$ nodes, and it would be cost ineffective for every node to know the
details about each of the $n$ nodes. We introduce a sketching technique, which
reduces the number of nodes any particular node has to scan in order to find
the required quantities to make a decision about the min-cut.
A sketch is defined for all the nodes in the network. If there exists a
min-cut, at least one node $v$ can make the decision regarding it using its
sketch or appropriate sketches of other nodes communicated to it. In this
section, first we will give the motivation behind the use of the sketch; then
in Subsection 5.1.1, we will define the sketch and introduce related notation,
and prove that the size of the sketch is not large. Later, in Subsection
5.1.2, we will give the algorithm to compute the sketch, and in Subsection
5.1.3, we will showcase the application of the graph sketch in finding a
min-cut as given by CASE-3, CASE-6 and CASE-7. Before going to the formal
notation of the sketch definition, we will describe the algorithmic idea for
these cases.
First, we begin with CASE-3. Imagine that
$\left\\{(\pi\left(u\right),u),(\pi\left(v\right),v),e\right\\}$ is a min-cut
of size $3$, for some nodes $v,u\in V\setminus r$ such that
${v}^{\downarrow}\cap{u}^{\downarrow}=\emptyset$ and a non-tree edge $e$ as
given by CASE-3 and shown in Fig. 5.2 (A). Now imagine node $u$ has to make a
decision about this min-cut; then as per Lemma 3.5, it will require
information about $\eta(v),\gamma({u}^{\downarrow},{v}^{\downarrow})$. But
upfront, node $u$ has no idea that it is part of such a min-cut or that there
exists some other node $v$, because it only has local information. Moreover,
there are as many as $n$ nodes in the whole network, which is a lot of
information for node $u$ to look at and is cost inefficient. Our sketching
technique brings down the size of the set of nodes which any node $u$ has to
scan to make a decision about a min-cut as given by CASE-3. Let
$\overline{\mathcal{N}}(A)\triangleq\left\\{y\mid x\in A,y\notin A,(x,y)\in E\
\text{is a non-tree edge}\right\\}$. We make two simple observations:
1. i)
node $u$ needs to search only the paths from the root $r$ to all nodes
$y\in\overline{\mathcal{N}}({u}^{\downarrow})$ shown in Fig. 5.2 (B).
2. ii)
such paths can be significantly trimmed: for instance as shown in Fig. 5.2
(A), $\left\\{(\pi\left(u\right),u),(v,v_{1}),e\right\\}$ cannot be a min-cut
because removing these edges does not partition the vertex set into two.
Figure 5.2: Demonstration of the sketching technique for CASE-3. Each part in
the above figure is a snippet of the tree. Edges with dashed stroke style are
non-tree edges. Red edges are cut edges. Shaded regions with vertices denote
vertex sets.
Based on the above two simple observations, we can see that node $u$ can limit
its scan for some node $x$, and subsequently for
$\eta(x),\gamma({x}^{\downarrow},{u}^{\downarrow})$, to the bold path shown in
Fig. 5.2 (C). Our sketch computes exactly this. We will give details about it
in the later part of this section. The idea for CASE-6 is similar to the one
demonstrated here.
To find a min-cut as given by CASE-7, we will use a different idea called
_reduced sketch_. Recall that a min-cut as given by CASE-7 is as follows: for
$v_{1},v_{2},v_{3}\in V\setminus r$,
$\left\\{(\pi\left(v_{1}\right),v_{1}),(\pi\left(v_{2}\right),v_{2}),(\pi\left(v_{3}\right),v_{3})\right\\}$
is a min-cut such that ${v_{3}}^{\downarrow}\subset{v_{2}}^{\downarrow}$,
${v_{1}}^{\downarrow}\cap{v_{2}}^{\downarrow}=\emptyset$ and
${v_{1}}^{\downarrow}\cap{v_{3}}^{\downarrow}=\emptyset$. Here we use the
characterization given in Lemma 3.6 which requires that at least one node
knows 6 quantities
$\eta(v_{1}),\eta(v_{2}),\eta(v_{3}),\gamma({v_{1}}^{\downarrow},{v_{2}}^{\downarrow}),\gamma({v_{2}}^{\downarrow},{v_{3}}^{\downarrow}),\gamma({v_{1}}^{\downarrow},{v_{3}}^{\downarrow})$.
In this case, we use a modified sketch. For any node $v$, our algorithm
ensures that each node $c\in{v}^{\downarrow}$ has information about
strategically truncated and trimmed paths from the root $r$ to all the
vertices in the set
$\overline{\mathcal{N}}({v}^{\downarrow}\setminus{{c}^{\downarrow}})$.
The same is illustrated in Fig. 5.3: the pictorial representation of this case
is shown in Fig. 5.3 (A). Our algorithm makes sure that node $v_{3}$ (note
that $v_{3}\in v_{2}^{\downarrow}$) has information about the nodes in the
bold path (specially truncated and trimmed paths from the root to nodes in
$\overline{\mathcal{N}}(v_{2}^{\downarrow}\setminus v_{3}^{\downarrow})$)
shown in Fig. 5.3 (C). The intermediate step is shown in Fig. 5.3 (B). Also,
coupling it with Lemma 4.9 node $v_{3}$ knows
$H_{{v_{3}}^{\downarrow}}^{v_{2}}=\gamma({v_{3}}^{\downarrow},{v_{2}}^{\downarrow})$.
Figure 5.3: Motivation for the sketching technique in CASE-7. Edges with
dashed stroke style are non-tree edges. Shaded regions with vertices denote
vertex sets.
In the next subsection, we will give a formal definition of _sketch_ and
_reduced sketch_. We will give the definition for a general spanning tree $T$.
The sketch is defined for a parameter $k$ which governs the number of branches
which can be included in the sketch. Further, we will give distributed
algorithms to compute sketch and reduced sketch.
#### 5.1.1 Definition of Sketch
For any node $x$, let $\rho_{T}(x)$ represent the unique path from root $r$ to
the node $x$ in tree $T$. Further for any vertex set $A\subseteq V$, let
$\mathcal{P}_{T}(A)\triangleq\left\\{\rho_{T}(x)\mid x\in A\right\\}$.
Basically, $\mathcal{P}_{T}(A)$ is a set of paths. We say that a tree path
$\rho_{T}(x)$ is parallel to a tree edge $e=(a,b)$, if
${x}^{\downarrow{T}}\cap{a}^{\downarrow{T}}=\emptyset$ and
${x}^{\downarrow{T}}\cap{b}^{\downarrow{T}}=\emptyset$. Also, for any vertex
set $A\subset V$ and a tree $T$ recall that
$\overline{\mathcal{N}}_{T}(A)=\left\\{y\mid x\in A,(x,y)\in E,(x,y)\text{ is
a non-tree edge}\right\\}$.
Now, we define the canonical tree, which is the first structure towards
defining the sketch; the sketch will be nothing but a truncation of this
canonical tree. For any node $v$, the canonical tree is the graph-union of
the paths from the root $r$ to the non-tree neighbors of node $v$. The
notation for a canonical tree is also overloaded for a vertex set, as
formally defined below.
###### Definition 5.4 (Canonical Tree).
Canonical tree of a node $v$ is a subtree of some spanning tree $T$, denoted
by $R_{T}(v)$ and formed by union (graph-union operation) of tree paths in
$\mathcal{P}_{T}\left(\left\\{v\cup\overline{\mathcal{N}}_{T}\left(\left\\{v\right\\}\right)\right\\}\right)$.
Canonical tree of a vertex set ${v}^{\downarrow{T}}$ is denoted by
$R_{T}({v}^{\downarrow{T}})$ and formed by union of the paths in
$\mathcal{P}_{T}\left(\left\\{v\cup\overline{\mathcal{N}}_{T}({v}^{\downarrow{T}})\right\\}\right)$.
Further, we also define a reduced canonical tree. We use the same notation
since the idea is the same.
###### Definition 5.5.
Let $v$ be an internal (non-leaf and non-root) node of a tree $T$. Let
$c\in{v}^{\downarrow{T}}$. The reduced canonical tree, denoted by
$R_{T}({v}^{\downarrow{T}}\setminus{c}^{\downarrow{T}})$, is formed by the
union of the paths in
$\mathcal{P}_{T}\left(\left\\{v\cup\overline{\mathcal{N}}_{T}({v}^{\downarrow{T}}\setminus{c}^{\downarrow{T}})\right\\}\right)$.
We define the sketch as a truncation of the canonical tree and give an
algorithm that can compute it in $O(D^{2})$ rounds. A canonical tree could be
of very large size, thus its truncation is required. To characterize this
truncation we will use the branching number as defined in Definition 5.6. Let
$\operatorname{firstBranchNode}(T^{\prime})$ of any rooted tree $T^{\prime}$
be the branch node (a node which has at least two children in the tree
$T^{\prime}$) closest to the root. If the tree $T^{\prime}$ has no branch node
then $\operatorname{firstBranchNode}(T^{\prime})$ is the root itself.
###### Definition 5.6 (Branching Number).
For any tree $T^{\prime}$, the branching number of a node $b$ in tree
$T^{\prime}$ is denoted by $\mathcal{\xi}_{{T^{\prime}}}\left(b\right)$. It is
defined as
$\mathcal{\xi}_{T^{\prime}}(b)\triangleq\begin{cases}1 & \ell_{T^{\prime}}(b)\leq\ell_{T^{\prime}}(x)\ \text{and}\ x\neq root(T^{\prime})\\ 2 & b=x=root(T^{\prime})\\ deg_{T^{\prime}}(\pi_{T^{\prime}}(b))+\mathcal{\xi}_{T^{\prime}}(\pi_{T^{\prime}}(b))-2 & \ell_{T^{\prime}}(b)>\ell_{T^{\prime}}(x)\end{cases}$
where $x=\operatorname{firstBranchNode}(T^{\prime})$.
The aforementioned definition is illustrated through examples in Figure 5.4.
Basically, for any given tree $T^{\prime}$, the branching number of a node is
a function of the number of splits on the path from the root to that node.
Figure 5.4: Illustration of branching number on two separate trees $T_{1}$ and
$T_{2}$
We will now make a simple observation about branching number and give a
characterizing lemma regarding the size of the canonical tree.
###### Observation 5.7.
Let $v$ be a node and $c\in\operatorname{children}_{T}(v)$. Let $b\in
R_{T}({c}^{\downarrow{T}})$, then
$\xi_{R_{T}({c}^{\downarrow{T}})}(b)\leq\xi_{R_{T}({v}^{\downarrow{T}})}(b)$.
###### Proof.
The canonical tree $R_{T}({c}^{\downarrow{T}})$ of any node
$c\in\operatorname{children}_{T}(v)$ is a subtree of the canonical tree
$R_{T}({v}^{\downarrow{T}})$. Hence the observation. ∎
###### Lemma 5.8.
For any tree $T^{\prime}$, the number of nodes in the tree that has branching
number less than $k$ is $O(2^{k}Depth(T^{\prime}))$.
###### Proof.
In the worst case $T^{\prime}$ may be a binary tree. Then each branch node in
the tree has degree $3$, and the nodes on the path beyond that branch node
have branching number one more than it (by the definition of the branching
number). At every branch node two paths split in the tree $T^{\prime}$, so we
may have as many as $O(2^{k})$ different branching paths. Each path may be
$O(Depth(T^{\prime}))$ long, thus we have $O(2^{k}Depth(T^{\prime}))$ such
nodes. ∎
We now define graph-sketch of a node which is defined based on the canonical
tree and comes with a parameter $k$ on which the truncation is based. We
define the truncation of a tree as below.
###### Definition 5.9.
For some tree $T^{\prime}$ and a number $k$,
$\operatorname{trunc}(T^{\prime},k)$ is the sub-tree of $T^{\prime}$ induced
by vertices with branching number less than or equal to $k$.
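Definitions 5.6 and 5.9 are straightforward to realize centrally; the sketch below (assuming the tree is given by hypothetical `children`/`parent` maps and `level`, with the root at level $0$) computes branching numbers with parents processed before children and then truncates.

```python
def branching_numbers(root, children, parent, level):
    """xi_{T'}(b) of Definition 5.6, computed top-down over the tree T'."""
    def deg(u):                            # tree degree of u in T'
        return len(children.get(u, ())) + (0 if u == root else 1)
    x = root                               # locate firstBranchNode(T')
    while len(children.get(x, ())) == 1:
        x = children[x][0]
    if not children.get(x):                # a bare path: no branch node
        x = root
    order = [root]
    for u in order:                        # BFS order: parents come first
        order.extend(children.get(u, ()))
    xi = {}
    for b in order:
        if b == x == root:
            xi[b] = 2
        elif level[b] <= level[x] and x != root:
            xi[b] = 1
        else:
            xi[b] = deg(parent[b]) + xi[parent[b]] - 2
    return xi

def trunc(tree_nodes, xi, k):
    """Definition 5.9: keep only the vertices with branching number <= k."""
    return {u for u in tree_nodes if xi[u] <= k}
```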
Our graph sketch will be called the $k$-Sketch because it comes with a
parameter $k$ on which the truncation is based. For every node in the
$k$-Sketch, meta information is also added. We define the $k$-Sketch of a node
as below.
###### Definition 5.10 ($k$-Sketch).
For any node $v$ and a spanning tree $T$, let
$R^{\prime}=\operatorname{trunc}(R_{T}({v}^{\downarrow{T}}),k)$. The
$k$-Sketch of a node $v$ w.r.t. the spanning tree $T$ is denoted as
$\mathcal{S}_{T}^{k}(v)$ and it is defined as
$\mathcal{S}_{T}^{k}(v)\triangleq\left\\{R^{\prime}\cup\left\\{\langle
u:\eta_{T}(u),\pi_{{T}}\left(u\right),\gamma\left({v}^{\downarrow{T}},{u}^{\downarrow{T}}\right)\rangle\
\forall u\in R^{\prime}\right\\}\right\\}$
Basically $k$-Sketch is a truncated canonical tree packaged along with the
meta information $\langle
u:\eta_{T}(u),\pi_{{T}}\left(u\right),\gamma\left({v}^{\downarrow{T}},{u}^{\downarrow{T}}\right)\rangle$
for each node $u$ in the truncated tree.
We will give an algorithm to compute the $k$-Sketch for every node $v\in V$ in
the next sub-section and further showcase the application of the $k$-Sketch to
find a min-cut, if it exists, as given by CASE-3 and CASE-6. Similar to the
$k$-Sketch of a node $v$, we define the reduced $k$-Sketch, which is based on
the reduced canonical tree. This will be used to find a min-cut, if it exists,
as given by CASE-7.
###### Definition 5.11 (Reduced $k$-Sketch).
Let $v$ be an internal node of a spanning tree $T$. Let
$c\in{v}^{\downarrow{T}}$ and let
$R^{\prime}=\operatorname{trunc}(R_{T}({v}^{\downarrow{T}}\setminus{c}^{\downarrow{T}}),k)$.
The reduced $k$-Sketch of a node $v$ and $c$ w.r.t. the spanning tree $T$ is
denoted as ${\mathcal{S}}_{T}^{k}(v,c)$ and it is defined as
${\mathcal{S}}_{T}^{k}(v,c)\triangleq\left\\{R^{\prime}\cup\left\\{\langle
u:\eta_{T}(u),\pi_{{T}}\left(u\right),\gamma\left({v}^{\downarrow{T}}\setminus{c}^{\downarrow{T}},{u}^{\downarrow{T}}\right)\rangle\
\forall u\in R^{\prime}\right\\}\right\\}$
We now give the following lemma about the size of the $k$-Sketch.
###### Lemma 5.12.
For any spanning tree $T$, the k-Sketch of a node $v$,
$\mathcal{S}_{T}^{k}(v)$ w.r.t. $T$ is of size $O(2^{k}Depth(T)\log n)$ bits.
###### Proof.
For any arbitrary node $u\in\mathcal{S}_{T}^{k}(v)$, the sketch contains a
three tuple
$\langle\eta_{T}(u),\pi_{{T}}\left(u\right),\gamma\left({v}^{\downarrow{T}},{u}^{\downarrow{T}}\right)\rangle$.
Here
$\eta_{T}(u),\gamma\left({v}^{\downarrow{T}},{u}^{\downarrow{T}}\right)\leq|E|=O(n^{2})$
and can be represented in $O(\log n)$ bits. Thus the three tuple is of $O(\log
n)$ bits. Now by Lemma 5.8 it is clear that $\mathcal{S}_{T}^{k}(v)$ is of
size $O(2^{k}Depth(T)\log n)$ bits. ∎
###### Corollary 5.13.
For any spanning tree $T$ and an internal node $v$ and some
$c\in{v}^{\downarrow{T}}$ the reduced $k$-Sketch ${\mathcal{S}}_{T}^{k}(v,c)$
w.r.t. $T$ is of size $O(2^{k}Depth(T)\log n)$ bits.
The $k$-Sketch of a node will be used to find a min-cut as given by CASE-3 and
CASE-6, whereas the reduced $k$-Sketch will be used to find a min-cut as given
by CASE-7. In the subsequent subsection, we will give algorithms to compute
the $k$-Sketch and the reduced $k$-Sketch. We will work with the fixed BFS
tree $\mathcal{T}$, and for simplicity of notation $\mathcal{T}$ will be
dropped from subscripts and superscripts.
#### 5.1.2 Algorithm to Compute Sketch
In this subsection, we will give distributed algorithms to compute the
$k$-Sketch and the reduced $k$-Sketch. We will prove that our algorithm takes
$O(D^{2})$ rounds. The idea to compute the sketch is as follows: each node
computes its own $k$-Sketch (which is of size $O(D\log n)$ bits) and
communicates the same to its parent. The parent node, after receiving the
sketches from all its children, computes its own sketch and communicates it
further up. This process continues and at the end each node has its
$k$-Sketch. Here we will use Observation 5.7 to argue that the sketches
received from the children are enough for a node to compute its sketch.
###### Lemma 5.14.
For all $v\in V$, $\mathcal{S}^{k}(v)$ can be computed in $O(D^{2})$ rounds.
###### Proof.
We describe a detailed algorithm to compute this in Algorithm 7. In its first
phase, Algorithm 7 calculates
$\mathcal{P}\left(\left\\{v\cup\overline{\mathcal{N}}(v)\right\\}\right)$
which, as per the definitions, is the set of tree paths to the non-tree
neighbors of $v$ together with $\rho(v)$.
1 _Algorithm to be run on each node $a$_
Output: k-Sketch $\mathcal{S}^{k}(a)$
2 _Past Knowledge_
3 Each node $u\in V$ knows $\ell\left(u\right)$ and $\eta(u)$ from previous
section
4 For each $u\in\mathcal{A}\left(a\right)$, $a$ knows $H_{{a}}^{u}$ and
$\eta(u)$
5 $\overline{\mathcal{N}}(a)\leftarrow\left\\{b\mid(a,b)\in E,(a,b)\ \text{is
a non-tree edge}\right\\}$
6 for _all $b\in\overline{\mathcal{N}}(a)$_ parallely do
7 send tuples $\langle\ell\left(u\right),\eta(u),u\rangle$ for all
$u\in\mathcal{A}\left(a\right)$ to $b$
$\mathcal{P}\leftarrow\left\\{\rho(a)\right\\}$ // $\rho(a)$ can be computed
easily because $a$ has all nodes in $\mathcal{A}\left(a\right)$
8 for _all $b\in\overline{\mathcal{N}}(a)$_ parallely do
9 for _all $u\in\mathcal{A}\left(b\right)$_ do receive tuples
$\langle\ell\left(u\right),\eta(u),u\rangle$
10 Construct the path $\rho(b)$ using the information
11 $\mathcal{P}\leftarrow\mathcal{P}\cup\rho(b)$
12
13 perform graph union of all the paths in $\mathcal{P}$ and form canonical
tree $R(a)$
14 for _all nodes $u\in R(a)$_ do
15 if _$u\in\mathcal{A}\left(a\right)$_ then
$\gamma(a,{u}^{\downarrow})=H_{a}^{u}$
16 else $\gamma(a,{u}^{\downarrow})=|\left\\{b\mid
b\in\overline{\mathcal{N}}(a),u\in\mathcal{A}\left(b\right)\right\\}|$
17 include the tuple $\langle
u:\eta(u),\pi\left(u\right),\gamma(a,{u}^{\downarrow})\rangle$ for the node
$u\in R(a)$
18 if _$a$ is an internal node_ then wait until $\mathcal{S}^{k}(c)$ is
received for all $c\in\operatorname{children}(a)$
19 $S\leftarrow R(a)\cup_{c\in\operatorname{children}(a)}\mathcal{S}^{k}(c)$
20 perform graph union of all trees in $S$ to form a tree $T$
21 for _node $u\in T$_ do
22 compute branching number $\mathcal{\xi}_{{T}}\left(u\right)$ as per the
definition
23 if _$\mathcal{\xi}_{{T}}\left(u\right) >k$ and $u\in{v}^{\downarrow}$_
then
24 remove node $u$ from $T$
25
26 for _node $u\in T$_ do
27 compute $\gamma({a}^{\downarrow},{u}^{\downarrow})$ by adding appropriately
from all $T^{\prime}\in S$
28
29 Construct $\mathcal{S}^{k}(a)$ using $T$ by including $\langle
u:\eta(u),\pi\left(u\right),\gamma({a}^{\downarrow},{u}^{\downarrow})\rangle$
for all node $u\in T$
30 Send $\mathcal{S}^{k}(a)$ to $\pi\left(a\right)$
Algorithm 7 Distributed Algorithm for Computing $k$-Sketch
This can be computed easily because each non-tree neighbor sends all its
ancestors and their associated meta information. Having calculated
$\mathcal{P}\left(\left\\{v\cup\overline{\mathcal{N}}(v)\right\\}\right)$,
it is easy to compute the sketch tree $R(v)$ by Definition 5.4. Now if a
node is an internal node, we again need to perform a graph union with the
other sketches received from its children. This is also straightforward, and
it is guaranteed that we will not lose any node here because of Observation
5.7. Further, the branching number is computed for all the nodes, and those
nodes which do not satisfy the branching number condition are removed to form
the sketch. Also, among the three tuples of meta information, two remain fixed
from the sketches of the children, and $\gamma({u}^{\downarrow},{v}^{\downarrow})$
for a node $u\in\mathcal{S}^{k}(v)$ can be computed by appropriate addition.
##### Time Requirements
In Algorithm 7, communication between any two nodes occurs only in lines 7, 9,
18 and 30. In line 7, only $O(D)$ rounds are taken because a node has at most
$O(D)$ ancestors in the BFS tree $\mathcal{T}$. Similarly, line 9 also takes
$O(D)$ rounds because each of the neighbors also has $O(D)$ ancestors and the
transfer of the three-tuples from each of the neighbors happens in parallel.
Further, in line 18, a node $a$ waits to receive all the sketches from its
children. From Lemma 5.12, we know that the size of the sketch is $O(D\log n)$
bits when the tree in action is a BFS tree. Now a node at level $l$ will wait
for all the sketches from its children which are at level $l+1$, and they in
turn depend on all their children at level $l+2$, and so on. Thus line 18
takes at most $O(D^{2})$ rounds. ∎
In Lemma 5.14, we gave an algorithm for computing the $k$-Sketch. We will now
move toward an algorithm to find the reduced sketch. There are two steps
towards this, details of which are given in Observation 5.15 and Lemma 5.16.
###### Observation 5.15.
For a constant $k$, for any internal node $a$ and
$c^{\prime}\in\operatorname{children}(a)$, ${\mathcal{S}}^{k}(a,c^{\prime})$
can be computed at node $a$ in $O(D^{2})$ rounds.
###### Proof.
Basically, the idea here is not to include the sketch received from the child
$c^{\prime}$ while computing the sketch of node $a$; the result is the reduced
sketch ${\mathcal{S}}^{k}(a,c^{\prime})$. To do this, we just need to change
line 19 in Algorithm 7 to $S\leftarrow
R(a)\cup_{c\in\operatorname{children}(a)\setminus\left\\{c^{\prime}\right\\}}\mathcal{S}^{k}(c)$
for each $c^{\prime}\in\operatorname{children}(a)$, and this enables the
computation of ${\mathcal{S}}^{k}(a,c^{\prime})$ at node $a$. ∎
###### Lemma 5.16.
For a fixed $k$, there exists a $O(D^{2})$ round algorithm such that all nodes
$x$ can compute ${\mathcal{S}}^{k}(v,c)$ for all
$v\in\mathcal{A}\left(x\right)$.
###### Proof.
For any internal node $v\in V$, we ensure that for all
$c\in\operatorname{children}(v)$ the reduced k-Sketch ${\mathcal{S}}^{k}(v,c)$
is downcast to all the nodes in ${c}^{\downarrow}$. This takes $O(D^{2})$
rounds. After this step, every node $x$ has the sketch ${\mathcal{S}}^{k}(a,b)$
for any nodes $a,b\in\mathcal{A}\left(x\right)$ at levels $i$ and $i+1$, for
all $i\in[1,\ell\left(x\right)-1]$. Now based on this we will show that node
$x$ can compute ${\mathcal{S}}^{k}(v,x)$ for all
$v\in\mathcal{A}\left(x\right)$ as per Algorithm 8.
Each node $x\in V\setminus r$ has to perform some local computation based on
the various reduced sketches received earlier. For $1\leq
i\leq\ell\left(x\right)-1$ let $\mathcal{S}_{i}={\mathcal{S}}^{k}(a,b)$ where
$a=\alpha(x,i)$ and $b=\alpha(x,i+1)$ (recall that $\alpha(x,l)$ is the
ancestor of node $x$ at level $l$). For some node $v=\alpha(x,l)$ which is the
ancestor of node $x$ at some level $l<\ell\left(x\right)$, to compute
${\mathcal{S}}^{k}(v,x)$ node $x$ uses $\mathcal{S}=\cup_{l\leq
j\leq\ell\left(x\right)-1}\mathcal{S}_{j}$. Here $\mathcal{S}$ is basically a
set of sketches. Note that any two sketches in the set $\mathcal{S}=\cup_{l\leq
j\leq\ell\left(x\right)-1}\mathcal{S}_{j}$ are non-overlapping, that is, they
carry information about disjoint vertex sets. The rest of the steps are exactly
the same as in Algorithm 7 and are described in detail in Algorithm 8.
Input: ${\mathcal{S}}^{k}(a,b)$ for any node $a,b$ at level $i$ and $i+1$ such
that $1\leq i\leq\ell\left(x\right)-1$
Output: ${\mathcal{S}}^{k}(v,x)$ for all $v\in\mathcal{A}\left(x\right)$
1 for _$1\leq i\leq\ell\left(x\right)-1$_ do
$\mathcal{S}_{i}\leftarrow{\mathcal{S}}^{k}(a,b)$ when $a=\alpha(x,i)$ and
$b=\alpha(x,i+1)$
// recall that $\alpha(x,l)$ is the ancestor of node $x$ at level $l$
2 for _$1\leq i\leq\ell\left(x\right)-1$_ do
3 $v=\alpha(x,i)$
4 $S\leftarrow\cup_{i\leq j\leq\ell\left(x\right)-1}\mathcal{S}_{j}$
5 perform graph union of all trees in $\mathcal{S}$ to form a tree $T$
6 compute branching number $\mathcal{\xi}_{{T}}\left(u\right)\ \forall u\in T$
as per the definition
7 for _node $u\in T$_ do
8 if _$\mathcal{\xi}_{{T}}\left(u\right) >k$ and $u\in{v}^{\downarrow}$_ then
9 remove $u$ from $T$
10
11 for _node $u\in T$_ do
12 compute
$\gamma({v}^{\downarrow}\setminus{x}^{\downarrow},{u}^{\downarrow})$ by adding
appropriately from all $T^{\prime}\in S$
13
14
15 Construct ${\mathcal{S}}^{k}(v,x)$ using $T$ by including $\langle
u:\eta(u),\pi\left(u\right),\gamma({v}^{\downarrow}\setminus{x}^{\downarrow},{u}^{\downarrow})\rangle$
for all node $u\in T$
16
Algorithm 8 To be run on all node $x\in V\setminus\left\\{r\right\\}$
∎
#### 5.1.3 Application of graph sketch
Now we will describe the application of the graph sketch to find a min-cut of
size $3$, if it exists, as given by CASE-3, CASE-6 and CASE-7. Here, CASE-3 is
a direct application of the $3$-Sketch; in CASE-6 we will be required to use
the $3$-Sketch in a strategic way, and in CASE-7 the reduced $2$-Sketch will
be used. We give the details of each of the cases in Lemmas 5.17, 5.18 and
5.20.
For any two nodes $v_{1},v_{2}$, let $\operatorname{LCA}_{T}(v_{1},v_{2})$ be
the lowest common ancestor of node $v_{1}$ and $v_{2}$ in tree $T$.
###### Lemma 5.17.
For some $v_{1},v_{2}\in V\setminus r$ and a non-tree edge $e$, if
$\left\\{(\pi\left(v_{1}\right),v_{1}),(\pi\left(v_{2}\right),v_{2}),e\right\\}$
is a min-cut as given in CASE-3, then node $v_{1}$ can make a decision about
the said min-cut using $\mathcal{S}^{3}(v_{1})$.
###### Proof.
As per Lemma 3.5, we know that node $v_{1}$ can decide for such a min-cut if
it knows three quantities: $\eta(v_{1}),\eta(v_{2})$ and
$\gamma({v_{1}}^{\downarrow},{v_{2}}^{\downarrow})$. Also, for any nodes $u,v$,
we know that if $u\in\mathcal{S}^{3}(v)$ then the sketch also contains
$\eta(u),\gamma({u}^{\downarrow},{v}^{\downarrow})$. And every node $v$ in the
network knows $\eta(v)$ from the previous section. Thus to prove this lemma we
have to prove that $v_{2}\in\mathcal{S}^{3}(v_{1})$. Then node $v_{1}$ can
enumerate through all the nodes in $\mathcal{S}^{3}(v_{1})$ which are not in
$\mathcal{A}\left(v_{1}\right)$ and apply the condition of Lemma 3.5 to test
if there exists such a min-cut.
Figure 5.5: Illustrating different types of sub-cases which might occur when
there exists a min-cut of size $3$ as in CASE-3. The figures presented here
are tree snippets. In each of the sub-cases we demonstrate the $3$-Sketch as
computed by the node $v_{1}$. Solid black lines denote the tree paths in the
sketch of $v_{1}$. Dashed lines represent other paths in the tree
$\mathcal{T}$. Edges in
red are cut edges. Here all the edges that go out of the vertex set
${v_{1}}^{\downarrow}$ have their other end points in the vertex set
${v_{2}}^{\downarrow}$ barring one non-tree edge $e$ in (B) and (C). The
important fact here is that in all the different cases
$v_{2}\in\mathcal{S}^{3}(v_{1})$.
We will now show that if such a min-cut exists then node
$v_{2}\in\mathcal{S}^{3}(v_{1})$. As illustrated in Fig. 5.5 for all the
different sub-cases node $v_{2}\in\mathcal{S}^{3}(v_{1})$. In (A) when the
other non-tree edge $e$ has one end-point in ${v_{2}}^{\downarrow}$ then the
branching number of $v_{2}$ in the sketch-tree $R({v_{1}}^{\downarrow})$ is
$\mathcal{\xi}_{{R({v_{1}}^{\downarrow})}}\left(v_{2}\right)=2$ thus
$v_{2}\in\mathcal{S}^{3}(v_{1})$. Both (B) and (C) are similar in terms of the
fact that the non-tree cut edge $e$ has the other endpoint in
${v_{1}}^{\downarrow}$, but differ in terms of the branching node.
Nevertheless, here also
$\mathcal{\xi}_{{R({v_{1}}^{\downarrow})}}\left(v_{2}\right)=3$, thus
$v_{2}\in\mathcal{S}^{3}(v_{1})$. ∎
###### Lemma 5.18.
For some $v_{1},v_{2},v_{3}\in V\setminus r$, let
${v_{1}}^{\downarrow},{v_{2}}^{\downarrow},{v_{3}}^{\downarrow}$ be pairwise
mutually disjoint. If there exists a min-cut as in CASE-6 such that
$\left\\{(\pi\left(v_{1}\right),v_{1}),(\pi\left(v_{2}\right),v_{2}),(\pi\left(v_{3}\right),v_{3})\right\\}$
is a min-cut, then it can be found in $O(D^{2})$ time.
###### Proof.
For such a min-cut to exist, at least two of
$\gamma({v_{1}}^{\downarrow},{v_{2}}^{\downarrow}),\gamma({v_{1}}^{\downarrow},{v_{3}}^{\downarrow}),\gamma({v_{2}}^{\downarrow},{v_{3}}^{\downarrow})$
need to be non-zero, otherwise the vertex set
${v_{1}}^{\downarrow}\cup{v_{2}}^{\downarrow}\cup{v_{3}}^{\downarrow}$ would
not form a connected component. WLOG we may have two non-isomorphic cases as
illustrated in Fig. 5.6.
(a) One of
$\gamma({v_{1}}^{\downarrow},{v_{2}}^{\downarrow}),\gamma({v_{1}}^{\downarrow},{v_{3}}^{\downarrow}),\gamma({v_{2}}^{\downarrow},{v_{3}}^{\downarrow})$
is zero. WLOG let $\gamma({v_{2}}^{\downarrow},{v_{3}}^{\downarrow})=0$
(b) All three of
$\gamma({v_{1}}^{\downarrow},{v_{2}}^{\downarrow}),\gamma({v_{1}}^{\downarrow},{v_{3}}^{\downarrow}),\gamma({v_{2}}^{\downarrow},{v_{3}}^{\downarrow})$
are non-zero
Figure 5.6: Two different non-isomorphic sub-cases of a min-cut as in CASE-6.
Here a circle corresponds to a vertex set, and a double-arrow line between
two circles indicates that there are edges with one endpoint in each of the
two vertex sets.
Also, as per Lemma 3.6 we need
$\eta(v_{1}),\eta(v_{2}),\eta(v_{3}),\gamma({v_{1}}^{\downarrow},{v_{2}}^{\downarrow}),\gamma({v_{2}}^{\downarrow},{v_{3}}^{\downarrow})$
and $\gamma({v_{1}}^{\downarrow},{v_{3}}^{\downarrow})$ to be known together at
some node in order to decide for a min-cut as given by CASE-6.
We will first work with the case in Fig. 5.6(a). Here,
$\gamma({v_{2}}^{\downarrow},{v_{3}}^{\downarrow})=0$. In this sub-case node
$v_{1}$ can make the decision based on $\mathcal{S}^{3}(v_{1})$. We will prove
that if such a min-cut exists then $v_{2},v_{3}\in\mathcal{S}^{3}(v_{1})$. We
demonstrate the different cases in Figure 5.7; they differ in terms of the
intersections of the paths $\rho(v_{1}),\rho(v_{2}),\rho(v_{3})$. In all the
sub-cases demonstrated in Fig. 5.7 we can see that the branching numbers
satisfy
$\mathcal{\xi}_{{R({v_{1}}^{\downarrow})}}\left(v_{2}\right),\mathcal{\xi}_{{R({v_{1}}^{\downarrow})}}\left(v_{3}\right)\leq 3$.
Thus $v_{2},v_{3}\in\mathcal{S}^{3}(v_{1})$. Here, $v_{1}$ just needs to
pick pairs of nodes $a,b\in\mathcal{S}^{3}(v_{1})$ such that
${a}^{\downarrow},{b}^{\downarrow},{v_{1}}^{\downarrow}$ are pairwise
disjoint and compare
$\eta(a),\eta(b),\eta(v_{1}),\gamma({a}^{\downarrow},{v_{1}}^{\downarrow})$
and $\gamma({b}^{\downarrow},{v_{1}}^{\downarrow})$ as per Lemma 3.6.
Figure 5.7: Illustrating different sub-cases. We demonstrate the
$3$-Sketch as per the node $v_{1}$. Solid black lines denote the tree paths in
the sketch of $v_{1}$. Dashed lines represent other paths in the tree $\mathcal{T}$.
Edges in red are cut edges. The different sub-cases occur based on the
intersection of the three paths $\rho(v_{1}),\rho(v_{2}),\rho(v_{3})$. In sub-case
(A),
$\operatorname{LCA}(v_{1},v_{3})=\operatorname{LCA}(v_{2},v_{3})\neq\operatorname{LCA}(v_{1},v_{2})$.
In sub-case (B),
$\operatorname{LCA}(v_{1},v_{2})=\operatorname{LCA}(v_{1},v_{3})\neq\operatorname{LCA}(v_{2},v_{3})$.
In sub-case (C),
$\operatorname{LCA}(v_{1},v_{2})=\operatorname{LCA}(v_{1},v_{3})=\operatorname{LCA}(v_{2},v_{3})$.
Now, we consider the case in Fig. 5.6(b). By the same argument
as above we can say that $v_{1},v_{2}\in\mathcal{S}^{3}(v_{3})$ and
$v_{1},v_{3}\in\mathcal{S}^{3}(v_{2})$. In this case
$\gamma({v_{2}}^{\downarrow},{v_{3}}^{\downarrow})\neq 0$, and
$\mathcal{S}^{3}(v_{1})$ does not contain
$\gamma({v_{2}}^{\downarrow},{v_{3}}^{\downarrow})$, which is required to make
a decision about the said min-cut as per Lemma 3.6. Thus node $v_{1}$ cannot
make the required decision based only on $\mathcal{S}^{3}(v_{1})$.
Despite this challenge, there is an easy way around it. Each node
$v\in V$ downcasts $\mathcal{S}^{3}(v)$ (its 3-Sketch) to all nodes in
${v}^{\downarrow}$. Since a 3-Sketch is of size $O(D\log n)$, this takes
$O(D^{2})$ rounds. After this step every node $u$ has $\mathcal{S}^{3}(v)\
\forall v\in\mathcal{A}\left(u\right)$. Now each node $u$ sends
$\mathcal{S}^{3}(a)\ \forall a\in\mathcal{A}\left(u\right)$ to its non-tree
neighbors. Since there are as many as $D$ such sketches, this also takes
$O(D^{2})$ time. Also, $\gamma({v_{1}}^{\downarrow},{v_{2}}^{\downarrow})>0$,
so we know that there is at least one node
$v_{1}^{\prime}\in{v_{1}}^{\downarrow}$ and
$v_{2}^{\prime}\in{v_{2}}^{\downarrow}$ such that
$(v_{1}^{\prime},v_{2}^{\prime})$ is a non-tree edge. After the above-mentioned
steps, node $v_{1}^{\prime}$ as well as node $v_{2}^{\prime}$ will
have both $\mathcal{S}^{3}(v_{1})$ and $\mathcal{S}^{3}(v_{2})$. Now both
${v_{1}^{\prime}}$ and ${v_{2}^{\prime}}$ can make the decision about the said
min-cut. We discuss the steps undertaken at node $v_{1}^{\prime}$ for the
same: for each $v\in\mathcal{A}\left(v_{1}^{\prime}\right)$, $v_{1}^{\prime}$
locally iterates through all the parallel paths of $\mathcal{S}^{3}(v)$ and
picks all possible $x,y$ such that
${x}^{\downarrow},{y}^{\downarrow},{v}^{\downarrow}$ are mutually disjoint.
Notice that through $\mathcal{S}^{3}(v)$, $v_{1}^{\prime}$ has
$\gamma({x}^{\downarrow},{v}^{\downarrow}),\gamma({y}^{\downarrow},{v}^{\downarrow}),\eta(x),\eta(y)$.
It also has $\eta(v)$ from previous calculations. Now, from the sketches
received through its non-tree neighbors, $v_{1}^{\prime}$ looks for
$\gamma({x}^{\downarrow},{y}^{\downarrow})$. If
$\gamma({x}^{\downarrow},{v}^{\downarrow}),\gamma({y}^{\downarrow},{v}^{\downarrow}),\eta(x),\eta(y)$
and $\gamma({x}^{\downarrow},{y}^{\downarrow})$ satisfy the condition of Lemma
3.6, then it can make the decision about the required min-cut. ∎
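The following schematic (again centrally-simulated Python, with illustrative names only) renders the communication pattern of the workaround above for the sub-case of Fig. 5.6(b); the round-by-round pipelining that yields the $O(D^{2})$ bound is omitted.

```python
def exchange_sketches(nodes, ancestors, non_tree_neighbors, sketch3):
    # Step 1: every node v downcasts S^3(v); afterwards each node u knows
    # S^3(v) for all v in A(u).  (Pipelined along tree paths: O(D^2) rounds.)
    known = {u: {v: sketch3[v] for v in ancestors[u]} for u in nodes}
    # Step 2: each node forwards its ancestors' sketches over its non-tree
    # edges; an endpoint v1' of a non-tree edge (v1', v2') thus obtains both
    # S^3(v1) and S^3(v2), as needed for the test of Lemma 3.6.
    received = {u: {} for u in nodes}
    for u in nodes:
        for w in non_tree_neighbors[u]:
            received[w].update(known[u])
    return known, received
```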
Next, we move to CASE-7. Here we will use the reduced sketch as given in
Definition 5.11; specifically, the reduced $2$-Sketch.
###### Observation 5.19.
For some $v_{1},v_{2},v_{3}\in V\setminus r$, let
${v_{3}}^{\downarrow}\subset{v_{2}}^{\downarrow}$,
${v_{1}}^{\downarrow}\cap{v_{2}}^{\downarrow}=\emptyset$ and
${v_{1}}^{\downarrow}\cap{v_{3}}^{\downarrow}=\emptyset$. If there exists a
min-cut as in CASE-7 such that
$\left\\{(\pi\left(v_{1}\right),v_{1}),(\pi\left(v_{2}\right),v_{2}),(\pi\left(v_{3}\right),v_{3})\right\\}$
is a min-cut, then $v_{1}\in{\mathcal{S}}^{2}(v_{2},v_{3})$.
###### Proof.
If such a min-cut exists then all the edges going out of the vertex set
${v_{2}}^{\downarrow}\setminus{v_{3}}^{\downarrow}$, apart from
$(\pi\left(v_{2}\right),v_{2})$ and $(\pi\left(v_{3}\right),v_{3})$, go to
the vertex set ${v_{1}}^{\downarrow}$. Thus the reduced canonical tree
$R({v_{2}}^{\downarrow}\setminus{v_{3}}^{\downarrow})$ contains node $v_{1}$.
We showcase this scenario in Fig. 5.8. Also, the branching number of $v_{1}$
will be
$\mathcal{\xi}_{{R({v_{2}}^{\downarrow}\setminus{v_{3}}^{\downarrow})}}\left(v_{1}\right)=2$
by definition. Thus $v_{1}\in{\mathcal{S}}^{2}(v_{2},v_{3})$.
Figure 5.8: Reduced sketch ${\mathcal{S}}^{2}(v_{2},v_{3})$ when, for some
$v_{1},v_{2},v_{3}\in V\setminus r$ with
${v_{3}}^{\downarrow}\subset{v_{2}}^{\downarrow}$,
${v_{1}}^{\downarrow}\cap{v_{2}}^{\downarrow}=\emptyset$ and
${v_{1}}^{\downarrow}\cap{v_{3}}^{\downarrow}=\emptyset$, the edge set
$\left\\{(\pi\left(v_{1}\right),v_{1}),(\pi\left(v_{2}\right),v_{2}),(\pi\left(v_{3}\right),v_{3})\right\\}$
is a min-cut.
∎
###### Lemma 5.20.
For some $v_{1},v_{2},v_{3}\in V\setminus r$, let
${v_{3}}^{\downarrow}\subset{v_{2}}^{\downarrow}$,
${v_{1}}^{\downarrow}\cap{v_{2}}^{\downarrow}=\emptyset$ and
${v_{1}}^{\downarrow}\cap{v_{3}}^{\downarrow}=\emptyset$. If there exists a
min-cut as in CASE-7 such that
$\left\\{(\pi\left(v_{1}\right),v_{1}),(\pi\left(v_{2}\right),v_{2}),(\pi\left(v_{3}\right),v_{3})\right\\}$
is a min-cut, then it can be found in $O(D^{2})$ time.
###### Proof.
Here we run Algorithm 9 on each node.
1 _Past Knowledge_
2 ${\mathcal{S}}^{2}(v,x)$ for all $v\in\mathcal{A}\left(x\right)$ using Lemma 5.16
3 For each $u\in\mathcal{A}\left(x\right)$, $x$ knows $H_{{x}^{\downarrow}}^{u}$ and $\eta(u)$
1 for _$v\in\mathcal{A}\left(x\right)\setminus x$_ do
2 for _$u\in{\mathcal{S}}^{2}(v,x)$ such that $u\notin\mathcal{A}\left(x\right)$_ do
3 if _$\eta(v)-1=H_{{x}^{\downarrow}}^{v}+\gamma({u}^{\downarrow},{v}^{\downarrow}\setminus{x}^{\downarrow})$ and $\eta(u)-1=\gamma({u}^{\downarrow},{v}^{\downarrow}\setminus{x}^{\downarrow})$ and $\eta(x)-1=H_{{x}^{\downarrow}}^{v}$_ then
4 $\left\\{(\pi\left(x\right),x),(\pi\left(v\right),v),(\pi\left(u\right),u)\right\\}$ is a min-cut
Algorithm 9 Algorithm to find an induced cut of size $3$ as given in CASE-7, to be run on every node $x\in V\setminus r$
By Lemma 3.6 the reported min-cut is correct. Further, when Algorithm 9 is run
on node $v_{3}$, it will be able to make a decision about the required min-cut
because, by Observation 5.19, if there exists such a min-cut then
$v_{1}\in{\mathcal{S}}^{2}(v_{2},v_{3})$.
Also, this requires only $O(D^{2})$ time, because computing the reduced sketch
${\mathcal{S}}^{2}(v,x)$ at any node $x$ for all
$v\in\mathcal{A}\left(x\right)$ takes $O(D^{2})$ rounds as per Lemma
5.16. ∎
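For concreteness, here is a direct transcription of the local test of Algorithm 9 in centrally-simulated Python. `H[(x, v)]` plays the role of $H_{{x}^{\downarrow}}^{v}$ and `gamma_red[(u, v, x)]` the role of $\gamma({u}^{\downarrow},{v}^{\downarrow}\setminus{x}^{\downarrow})$; both are assumed available at $x$ after the reduced-sketch computation of Lemma 5.16.

```python
def case7_test(x, ancestors, reduced_sketch2, eta, H, gamma_red):
    """Run at node x: scan the reduced 2-Sketches of its ancestors."""
    for v in ancestors[x]:
        if v == x:
            continue
        for u in reduced_sketch2[(v, x)]:
            if u in ancestors[x]:
                continue
            g = gamma_red[(u, v, x)]
            # The three equalities of Algorithm 9 (condition of Lemma 3.6).
            if (eta[v] - 1 == H[(x, v)] + g and
                    eta[u] - 1 == g and
                    eta[x] - 1 == H[(x, v)]):
                # {(pi(x), x), (pi(v), v), (pi(u), u)} is a min-cut
                return (x, v, u)
    return None
```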
### 5.2 Layered Algorithm
In the last section, we gave a technique to find a min-cut of size 3 as in
CASE-3, CASE-6 and CASE-7 using a special graph-sketch. In this section, we
will give an algorithm to find the min-cut as given by CASE-5. A $k$-Sketch
cannot be used to find a min-cut as in CASE-5 because here it is challenging
for any one node to learn the six quantities required by Lemma 3.6 using
a $k$-Sketch. To resolve this challenge, we give a layered algorithm, where we
solve for min-cuts iteratively many times.
Recall that a min-cut as given by CASE-5 is as follows: for some nodes
$v_{1},v_{2},v_{3}\in V\setminus r$,
$\left\\{(\pi\left(v_{1}\right),v_{1}),(\pi\left(v_{2}\right),v_{2}),(\pi\left(v_{3}\right),v_{3})\right\\}$
is a min-cut such that ${v_{2}}^{\downarrow}\subset{v_{1}}^{\downarrow}$ and
${v_{3}}^{\downarrow}\subset{v_{1}}^{\downarrow}$ and
${v_{2}}^{\downarrow}\cap{v_{3}}^{\downarrow}=\emptyset$. For the introduction
of layered algorithm, let us assume that such a min-cut exists. Further let
$v_{1},v_{2}$ and $v_{3}$ be these specific nodes. In Fig. 5.9, we show a
pictorial representation of such a min-cut.
Figure 5.9: Min-cut of size $3$ as given by CASE-5
Similar to the previous section, we will use the characterization Lemma 3.6
which requires six quantities to make a decision about the min-cut. These are
$\eta(v_{1}),\eta(v_{2}),\eta(v_{3}),\gamma({v_{1}}^{\downarrow},{v_{2}}^{\downarrow}),\gamma({v_{3}}^{\downarrow},{v_{2}}^{\downarrow})$
and $\gamma({v_{1}}^{\downarrow},{v_{3}}^{\downarrow})$.
In this subsection, we will show that node $x=\operatorname{LCA}(v_{2},v_{3})$
can find all these six quantities. The challenge here is the fact that some of
these quantities have to come from node $v_{2}$ and/or $v_{3}$. Since node $x$
is higher up in the tree $\mathcal{T}$ than $v_{2},v_{3}$, the information
from nodes $v_{2},v_{3}$, when it travels up the tree, may face contention from
other nodes in ${x}^{\downarrow}$. Recall (from Chapter 2) that
convergecast is a technique to send data from nodes at a lower level to nodes
which are higher up in the tree. For convergecast to be efficient here we
need to couple it with a contention-resolution mechanism. The idea is as
follows:
* •
We calculate min-cuts of size one and two in sub-graphs _pivoted_ at nodes at
different level in the tree $\mathcal{T}$ (details are deferred to Section
5.2.1)
* •
After the above step, each node computes its _one-cut detail_ and _two-cut
detail_ (definitions are deferred to Section 5.2.2)
* •
Based on the above computation, if a min-cut exists as given by
CASE-5, then nodes $v_{2},v_{3}$ can designate themselves as special
nodes using Lemma 5.24. Thus, the information from node $v_{2}$ and node
$v_{3}$ can reach $\operatorname{LCA}(v_{2},v_{3})$ efficiently.
#### 5.2.1 Definition of Layered Algorithm
For any node $v\in V\setminus r$, let ${E}_{{v}^{\downarrow}}$ be the set of
edges whose both endpoints are in the vertex set $v^{\downarrow}$. Let
subgraph pivoted at any vertex $v\neq r$ be
$G_{v}\triangleq(v^{\downarrow}\cup\pi\left(v\right),{E}_{{v}^{\downarrow}}\cup(\pi\left(v\right),v))$.
Note that this definition is different from the subgraph rooted at $v$ because
here we add the extra edge $(\pi\left(v\right),v)$ and vertex $\pi(v)$.
_Layered Algorithm_ is a special type of algorithm where we solve for
induced cuts of size one and two in a layered fashion. As usual, we work
with the fixed BFS tree $\mathcal{T}$. Here we solve for induced cuts of
size $1$ and $2$ repeatedly, $Depth(\mathcal{T})=D$ times. In the first
iteration, our algorithm solves for these induced cuts in the whole graph $G$.
Then it solves for the min-cuts of size $1$ and $2$ in all the sub-graphs
pivoted at nodes at level $1$, that is, in all sub-graphs in the set
$\left\\{G_{v}\mid\ell\left(v\right)=1\right\\}$; subsequently in the sub-
graphs pivoted at all nodes at level $2$, and so on until level $D$. This is
the _Layered Algorithm for min-cut_. We will showcase the utility of this
algorithm later. In Observation 5.21, we argue that the layered algorithm
for min-cut takes $O(D^{2})$ rounds.
###### Observation 5.21.
The layered algorithm for min-cut runs in $O(D^{2})$ rounds.
###### Proof.
From Theorem 4.10 and Lemma 4.14, we know that cuts of size 1,2 can be found
in $O(D)$ rounds. For all graphs
$\left\\{G_{v}\mid\ell\left(v\right)=i\right\\}$ pivoted at any level $1\leq
i\leq D$ the algorithm to compute min-cut of size $1,2$ can be run
independently because none of the sub-graphs in the set
$\left\\{G_{v}\mid\ell\left(v\right)=i\right\\}$ share an edge. Thus it takes
$O(D)$ round to run the min-cut algorithm in all such graphs. Now to run for
all graphs pivoted at nodes at all levels it takes $O(D^{2})$ rounds. ∎
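A sketch of the schedule just described, assuming hypothetical helpers `level`, `tree.root`, `tree.descendants`, `tree.parent` and a routine `find_cuts_1_2` implementing Theorem 4.10 and Lemma 4.14; since the sub-graphs pivoted at a fixed level are edge-disjoint, one level costs $O(D)$ rounds in the distributed model and all $D$ levels cost $O(D^{2})$.

```python
def pivoted_subgraph(G, tree, v):
    # G_v: vertex set v_down together with pi(v); edge set: the edges with
    # both endpoints inside v_down, plus the single tree edge (pi(v), v).
    V = set(tree.descendants(v)) | {v}
    E = {(a, b) for (a, b) in G.edges if a in V and b in V}
    E.add((tree.parent(v), v))
    return (V | {tree.parent(v)}, E)

def layered_mincut(G, tree, D, level, find_cuts_1_2):
    results = {tree.root: find_cuts_1_2(G)}  # first iteration: whole graph G
    for i in range(1, D + 1):
        for v in (u for u in tree.nodes if level[u] == i):
            results[v] = find_cuts_1_2(pivoted_subgraph(G, tree, v))
    return results
```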
We will now define notation for sequences of subgraphs. This is a shorthand
notation to say that a property is satisfied for a set of pivoted sub-
graphs. Let $r\rightarrow w_{1}\rightarrow w_{2}\rightarrow\cdots\rightarrow
w_{t}$ be a path in the BFS tree. Then we know that
$G_{w_{1}},G_{w_{2}},\ldots,G_{w_{t}}$ are subgraphs of $G$ as defined
earlier. Let $i,j\in[1,t]$ and $i\leq j$, then let
$G_{\left({w_{i}},{w_{j}}\right)}$ be a set of subgraphs such that
$G_{\left({w_{i}},{w_{j}}\right)}\triangleq\left\\{G_{w_{h}}\mid i\leq h\leq
j\right\\}$. We say a property is true for $G_{\left({w_{i}},{w_{j}}\right)}$
if it is true for all the subgraphs in the set.
###### Observation 5.22.
For some nodes $x$ and $v$, let $x\in{v}^{\downarrow}$. If
$(\pi\left(x\right),x)$ is a min-cut for the sub-graph $G_{v}$ then
$(\pi\left(x\right),x)$ is a min-cut for all sub-graphs in
$G_{\left({x},{v}\right)}$. Similarly, let $a,b$ be two nodes, let node $z$ be
the LCA of $a,b$ in the tree $\mathcal{T}$, and let $v$ be some node such that
${v}\in\mathcal{A}\left(z\right)$. If
$\left\\{(\pi\left(a\right),a),(\pi\left(b\right),b)\right\\}$ is a min-cut
for the sub-graph $G_{v}$ then it is a min-cut for all the sub-graphs in
$G_{\left({z},{v}\right)}$.
#### 5.2.2 Application of layered algorithm
We will now showcase the application of the above-mentioned layered algorithm
in finding a min-cut as given by CASE-5. First, we give a simple process
in Algorithm 10 to be run on individual nodes after the run of the _Layered
Algorithm for min-cut_. For each node $a$, this collects two quantities, the
1-cut detail $\mathcal{D}^{1}(a)$ and the 2-cut detail $\mathcal{D}^{2}(a)$,
which carry the relevant information. After this information is collected by
each node $a$, it will be sent upwards via a convergecast technique described
later.
Further, we will characterize CASE-5 in Lemma 5.24. Recall that in CASE-5, for
some nodes $v_{1},v_{2},v_{3}\in V\setminus r$,
$\left\\{(\pi\left(v_{1}\right),v_{1}),(\pi\left(v_{2}\right),v_{2}),(\pi\left(v_{3}\right),v_{3})\right\\}$
is a min-cut such that ${v_{2}}^{\downarrow}\subset{v_{1}}^{\downarrow}$,
${v_{3}}^{\downarrow}\subset{v_{1}}^{\downarrow}$ and
${v_{2}}^{\downarrow}\cap{v_{3}}^{\downarrow}=\emptyset$. This will help us
prove that at least one of $\mathcal{D}^{2}\left(v_{2}\right)$,
$\mathcal{D}^{2}\left(v_{3}\right)$, or the pair
$\left\\{\mathcal{D}^{1}\left(v_{2}\right),\mathcal{D}^{1}\left(v_{3}\right)\right\\}$
will be received by $\operatorname{LCA}(v_{2},v_{3})$ after the convergecast
algorithm. This is enough to make a decision about the min-cut based
on the characterization given in Lemma 3.6.
1 _computing 1-cut detail_
2 $v\leftarrow$ closest node to $r$ such that $(\pi\left(a\right),a)$ is a
min-cut in $G_{v}$
3 $\mathcal{D}^{1}(a)=\langle
node:a,parent:\pi\left(a\right),eta:\eta(a),pivotnode:v,numedges:H_{{a}^{\downarrow}}^{v},pivotnodelevel:\ell\left(v\right)\rangle$
1 _computing 2-cut detail_
2 $vSet\leftarrow\\{u\mid\exists\text{ node }x\ s.t.\ {x}^{\downarrow}\cap{a}^{\downarrow}=\emptyset;\ \gamma({x}^{\downarrow},{a}^{\downarrow})>0;\ \ell\left(a\right)\geq\ell\left(x\right);\left\\{(\pi\left(a\right),a),(\pi\left(x\right),x)\right\\}\text{ is an induced cut in }G_{u}\\}$
3 if _ $vSet=\emptyset$ _ then $\mathcal{D}^{2}(a)=\phi$
4 else
5 $v\leftarrow\underset{u\in vSet}{argmin}\ \ell\left(u\right)$
6
7 $bSet\leftarrow\\{x\mid{x}^{\downarrow}\cap{a}^{\downarrow}=\emptyset;\ \gamma({x}^{\downarrow},{a}^{\downarrow})>0;\ \ell\left(a\right)\geq\ell\left(x\right);\left\\{(\pi\left(a\right),a),(\pi\left(x\right),x)\right\\}\text{ is an induced cut in }G_{v}\\}$
8 $b\leftarrow\underset{x\in bSet}{argmin}\ \ell\left(x\right)$
9 $z\leftarrow\operatorname{LCA}\left(a,b\right)$
10 $\mathcal{D}^{2}(a)\triangleq\langle node1:a,parent1:\pi\left(a\right),eta1:\eta(a),node1PivotOutEdges:H^{v}_{{a}^{\downarrow}},node2:b,parent2:\pi\left(b\right),eta2:\eta(b),node2PivotOutEdges:H^{v}_{{b}^{\downarrow}},betweenedges:\gamma({a}^{\downarrow},{b}^{\downarrow}),lca:z,lcalevel:\ell\left(z\right),pivotnode:v,pivotnodelevel:\ell\left(v\right)\rangle$
11
Algorithm 10 1-cut details and 2-cut details (to be run after layered
algorithm for min-cut on all nodes $a$)
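The two records collected by Algorithm 10, written as plain Python data classes for reference; the field names mirror the tuples in the text (this is only a convenient container for exposition, not the wire format of the thesis).

```python
from dataclasses import dataclass

@dataclass
class OneCutDetail:                # D^1(a)
    node: int                      # a
    parent: int                    # pi(a)
    eta: int                       # eta(a)
    pivot_node: int                # v: closest-to-root pivot keeping the 1-cut
    num_edges: int                 # H_{a_down}^{v}
    pivot_node_level: int          # l(v), the last element of the tuple

@dataclass
class TwoCutDetail:                # D^2(a)
    node1: int                     # a
    parent1: int                   # pi(a)
    eta1: int                      # eta(a)
    node1_pivot_out_edges: int     # H^{v}_{a_down}
    node2: int                     # b
    parent2: int                   # pi(b)
    eta2: int                      # eta(b)
    node2_pivot_out_edges: int     # H^{v}_{b_down}
    between_edges: int             # gamma(a_down, b_down)
    lca: int                       # z = LCA(a, b)
    lca_level: int                 # l(z)
    pivot_node: int                # v
    pivot_node_level: int          # l(v), the last element of the tuple
```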
###### Observation 5.23.
For each node $x$, $\mathcal{D}^{1}(x)$ (1-cut detail) and
$\mathcal{D}^{2}(x)$ (2-cut detail) can be computed in $O(D^{2})$ time as
given in Algorithm 10.
###### Proof.
In Observation 5.21, we saw that the layered algorithm for min-cut runs in
$O(D^{2})$ time. We claim that after the layered algorithm for min-cut is
executed we have all the information to compute both 1-cut detail and 2-cut
detail. For 1-cut detail of a node $a$ we require $\eta(a)$ which node $a$
knows from the first section. Further, we require node $v$ closest to the root
such that the edge $(\pi\left(a\right),a)$ persists as a min-cut in the sub-
graph $G_{v}$. This node can be found after layered algorithm for min-cut is
run. Also $H_{{a}^{\downarrow}}^{v}$ is available with node $a$ from Section
4.1.
Similarly for 2-cut detail. Here the layered algorithm uses Lemma 4.23, which
takes $O(D)$ rounds. At the end of the layered algorithm for the min-cut every
node $a$ knows all the required induced-cut of size $2$ that it participates
in subgraph $G_{v}$ for all $v\in\mathcal{A}\left(a\right)$. Based on that, it
can calculate 2-cut detail. ∎
Now, we will give a characterization of CASE-5. This characterization
will help us to designate nodes $v_{2},v_{3}$ as special nodes in
${x}^{\downarrow}$.
###### Lemma 5.24.
Let the graph $G$ be 3-connected (there are no min-cuts of size $1$ or $2$ in
$G$). Let $v_{1},v_{2},v_{3}\in V\setminus\left\\{r\right\\}$ with
${v_{2}}^{\downarrow}\subset{v_{1}}^{\downarrow}$,
${v_{3}}^{\downarrow}\subset{v_{1}}^{\downarrow}$ and
${v_{2}}^{\downarrow}\cap{v_{3}}^{\downarrow}=\emptyset$, and let
$\left\\{(\pi\left(v_{1}\right),v_{1}),(\pi\left(v_{2}\right),v_{2}),(\pi\left(v_{3}\right),v_{3})\right\\}$
be a min-cut (as in CASE-5). Then the following statements are true:
1. i)
Let
$x\in{v_{1}}^{\downarrow}\setminus\left({v_{2}}^{\downarrow}\cup{v_{3}}^{\downarrow}\right)$
with $x$ not on either of the paths $\rho(v_{2}),\rho(v_{3})$; then the edge
$(\pi\left(x\right),x)$ is not a min-cut in $G_{v_{1}}$
2. ii)
When $\gamma({v_{2}}^{\downarrow},{v_{3}}^{\downarrow})=0$,
$(\pi\left(v_{2}\right),v_{2})$ and $(\pi\left(v_{3}\right),v_{3})$ are 1-cuts
for the sub-graphs in $G_{\left({v_{2}},{v_{1}}\right)}$ and
$G_{\left({v_{3}},{v_{1}}\right)}$ respectively
3. iii)
Let
$x\in{v_{1}}^{\downarrow}\setminus\left({v_{2}}^{\downarrow}\cup{v_{3}}^{\downarrow}\right)$
be a node which is not on either of the two paths
$\rho(v_{2}),\rho(v_{3})$ and satisfies
$\ell\left(x\right)\in\left\\{\ell\left(v_{2}\right),\ell\left(v_{3}\right)\right\\}$.
When $\gamma({v_{2}}^{\downarrow},{v_{3}}^{\downarrow})>0$, there does
not exist a node $y$ such that
${x}^{\downarrow}\cap{y}^{\downarrow}=\emptyset$,
$\gamma({x}^{\downarrow},{y}^{\downarrow})>0$ and the edge set
$\left\\{(\pi\left(x\right),x),(\pi\left(y\right),y)\right\\}$ is an induced
cut of size $2$ in $G_{v_{1}}$
4. iv)
Let node $z$ be the LCA of $v_{2},v_{3}$. When
$\gamma({v_{2}}^{\downarrow},{v_{3}}^{\downarrow})>0$,
$\left\\{(\pi\left(v_{2}\right),v_{2}),(\pi\left(v_{3}\right),v_{3})\right\\}$
is a 2-cut for the sub-graph sequence $G_{\left({z},{v_{1}}\right)}$
5. v)
When $\gamma({v_{2}}^{\downarrow},{v_{3}}^{\downarrow})>0$, there does not
exist a node $x$ on the path $\rho(v_{2})$ such that
$\ell\left(x\right)<\ell\left(v_{2}\right)$,
${x}^{\downarrow}\cap{v_{3}}^{\downarrow}=\emptyset$ and
$\left\\{(\pi\left(x\right),x),(\pi\left(v_{3}\right),v_{3})\right\\}$ is an
induced cut of $G_{v_{1}}$
###### Observation 5.25.
If there is an induced cut as given in Lemma 5.24, then no node in the set
${v_{1}}^{\downarrow}\setminus\left({v_{2}}^{\downarrow}\cup{v_{3}}^{\downarrow}\right)$
is connected to any node in $V\setminus{v_{1}}^{\downarrow}$. Also, apart from
the edges $(\pi\left(v_{2}\right),v_{2})$ and $(\pi\left(v_{3}\right),v_{3})$,
nodes in $\left({v_{2}}^{\downarrow}\cup{v_{3}}^{\downarrow}\right)$ do not
have any edge to nodes in
${v_{1}}^{\downarrow}\setminus\left({v_{2}}^{\downarrow}\cup{v_{3}}^{\downarrow}\right)$.
###### Proof.
If such an edge existed, then
$\left\\{(\pi\left(v_{1}\right),v_{1}),(\pi\left(v_{2}\right),v_{2}),(\pi\left(v_{3}\right),v_{3})\right\\}$
would not be a min-cut. We will prove the claims of Lemma 5.24 using this
simple observation. ∎
###### Proof of Lemma 5.24.
1. i)
As per the given condition, $v_{2}\notin{x}^{\downarrow}$ and
$v_{3}\notin{x}^{\downarrow}$. By Observation 5.25, the only nodes in
${v_{1}}^{\downarrow}$ which have an adjacent edge to the nodes in
$V\setminus{v_{1}}^{\downarrow}$ are in the vertex sets ${v_{2}}^{\downarrow}$
or ${v_{3}}^{\downarrow}$; hence no node in ${x}^{\downarrow}$ has an adjacent
edge to a node in $V\setminus{v_{1}}^{\downarrow}$. Thus if
$(\pi\left(x\right),x)$ is a min-cut for the sub-graph $G_{v_{1}}$ then it is
also a min-cut edge in the whole graph $G$, which is a contradiction because
the graph is 3-connected.
2. ii)
Since $\gamma({v_{2}}^{\downarrow},{v_{3}}^{\downarrow})=0$, there are no
edges between vertices in the sets ${v_{2}}^{\downarrow}$ and
${v_{3}}^{\downarrow}$. Also, there are no edges between vertices in the vertex
set ${v_{1}}^{\downarrow}\setminus{v_{2}}^{\downarrow}$ and
${v_{2}}^{\downarrow}$. Thus $(\pi\left(v_{2}\right),v_{2})$ is a cut-edge in
the subgraph $G_{v_{1}}$, and in all the subgraphs in
$G_{\left({v_{2}},{v_{1}}\right)}$ by Observation 5.22. Similarly for the edge
$(\pi\left(v_{3}\right),v_{3})$.
3. iii)
Let us first consider the simple case when $y$ is in either ${v_{2}}^{\downarrow}$ or
${v_{3}}^{\downarrow}$. Here
$\gamma\left({x}^{\downarrow},{y}^{\downarrow}\right)=0$ by Observation 5.25.
Consider next the case when $(\pi\left(y\right),y)$ is on one of the paths
$\rho(v_{2})$ or $\rho(v_{3})$. WLOG let $(\pi\left(y\right),y)$ be on the
path $\rho(v_{2})$, so that ${v_{2}}^{\downarrow}\subseteq{y}^{\downarrow}$. Given
that $\gamma({v_{2}}^{\downarrow},{v_{3}}^{\downarrow})>0$, the edge set
$\left\\{(\pi\left(x\right),x),(\pi\left(y\right),y)\right\\}$ cannot be a cut
induced by $\delta({x}^{\downarrow}\oplus{y}^{\downarrow})$ in $G_{v_{1}}$:
since ${v_{2}}^{\downarrow}\subseteq{y}^{\downarrow}$ and
${v_{3}}^{\downarrow}\subseteq{v_{1}}^{\downarrow}\setminus({x}^{\downarrow}\cup{y}^{\downarrow})$,
the vertex sets ${x}^{\downarrow}\cup{y}^{\downarrow}$ and
${v_{1}}^{\downarrow}\setminus({x}^{\downarrow}\cup{y}^{\downarrow})$ have at
least one edge between them.
Finally, let $y$ be on neither of the paths $\rho(v_{2})$, $\rho(v_{3})$ and
in neither of the vertex sets ${v_{2}}^{\downarrow}$ and
${v_{3}}^{\downarrow}$. By assumption, $(\pi\left(x\right),x)$
is not on either of the paths $\rho(v_{2})$, $\rho(v_{3})$ either. Since both
edges avoid the paths $\rho(v_{2})$ and $\rho(v_{3})$, neither
$v_{2}$ nor $v_{3}$ is in the vertex set ${x}^{\downarrow}$ or
${y}^{\downarrow}$. If here
$\left\\{(\pi\left(x\right),x),(\pi\left(y\right),y)\right\\}$ were a min-cut,
then there would be edges between vertices of the two sets ${x}^{\downarrow}$ and
${y}^{\downarrow}$, and no other edge from them to the vertices in
${v_{1}}^{\downarrow}\setminus({x}^{\downarrow}\cup{y}^{\downarrow})$. Also,
by Observation 5.25, nodes in ${x}^{\downarrow}\cup{y}^{\downarrow}$ do
not have any edge to $V\setminus{v_{1}}^{\downarrow}$. By these two
statements, nodes in ${x}^{\downarrow}\cup{y}^{\downarrow}$ have no
adjacent edges to nodes in $V\setminus({x}^{\downarrow}\cup{y}^{\downarrow})$;
hence the edge set
$\left\\{(\pi\left(x\right),x),(\pi\left(y\right),y)\right\\}$ persists as a
min-cut in the whole graph, which is a contradiction because the graph is
3-connected.
4. iv)
Since $\gamma({v_{2}}^{\downarrow},{v_{3}}^{\downarrow})>0$, there are
some edges between vertices of the sets ${v_{2}}^{\downarrow}$ and
${v_{3}}^{\downarrow}$. Also, by Observation 5.25, there are no edges between
vertices in the vertex set
${v_{1}}^{\downarrow}\setminus({v_{2}}^{\downarrow}\cup{v_{3}}^{\downarrow})$
and ${v_{2}}^{\downarrow}$, and similarly there are no edges between
${v_{1}}^{\downarrow}\setminus({v_{2}}^{\downarrow}\cup{v_{3}}^{\downarrow})$
and ${v_{3}}^{\downarrow}$. Here $z$ is the LCA of $v_{2},v_{3}$ and there are
edges which go between ${v_{2}}^{\downarrow}$ and ${v_{3}}^{\downarrow}$.
Also, by Observation 5.25, no edges other than
$\left\\{(\pi\left(v_{2}\right),v_{2}),(\pi\left(v_{3}\right),v_{3})\right\\}$
connect nodes in
${z}^{\downarrow}\setminus({v_{2}}^{\downarrow}\cup{v_{3}}^{\downarrow})$ to nodes in
${v_{2}}^{\downarrow}\cup{v_{3}}^{\downarrow}$. Thus
$\left\\{(\pi\left(v_{2}\right),v_{2}),(\pi\left(v_{3}\right),v_{3})\right\\}$
is a 2-cut in $G_{z}$. A similar argument can be given for all sub-graphs in the
graph sequence $G_{\left({z},{v_{1}}\right)}$.
5. v)
As per the given condition,
$x\in{v_{1}}^{\downarrow}\setminus({v_{2}}^{\downarrow}\cup{v_{3}}^{\downarrow})$
and $v_{2}\in{x}^{\downarrow}$. We focus on the vertex set
${x}^{\downarrow}\setminus{v_{2}}^{\downarrow}$, which is not empty because
$\ell\left(x\right)<\ell\left(v_{2}\right)$. If
$\left\\{(\pi\left(x\right),x),(\pi\left(v_{3}\right),v_{3})\right\\}$ is an
induced cut of $G_{v_{1}}$ then vertices in the set
${x}^{\downarrow}\cup{v_{3}}^{\downarrow}$ do not have an adjacent edge to
vertices in
${v_{1}}^{\downarrow}\setminus({x}^{\downarrow}\cup{v_{3}}^{\downarrow})$.
Thus vertices in ${x}^{\downarrow}\setminus{v_{2}}^{\downarrow}$ do not have
an adjacent edge to vertices in
${v_{1}}^{\downarrow}\setminus({x}^{\downarrow}\cup{v_{3}}^{\downarrow})$.
Further,
${x}^{\downarrow}\setminus{v_{2}}^{\downarrow}\subset{v_{1}}^{\downarrow}\setminus({v_{2}}^{\downarrow}\cup{v_{3}}^{\downarrow})$,
so by Observation 5.25 vertices in
${x}^{\downarrow}\setminus{v_{2}}^{\downarrow}$ do not have an adjacent edge
to vertices in $V\setminus{v_{1}}^{\downarrow}$. Hence
$\delta({x}^{\downarrow}\setminus{v_{2}}^{\downarrow})=\left\\{(\pi\left(x\right),{x}),(\pi\left(v_{2}\right),{v_{2}})\right\\}$
is an induced cut in the whole graph, which is a contradiction.
∎
We now give a variant of a convergecast algorithm, Algorithm 11, for
communicating the 1-cut detail and the 2-cut detail of a node to its
ancestors. Note that a node $v$ will not receive
$\mathcal{D}^{1}(x)$ and $\mathcal{D}^{2}(x)$ for all $x\in{v}^{\downarrow}$,
but only the relevant ones. We characterize this in Observation 5.26.
1 _convergecasting 1-cut detail_
2 if _node $a$ is a leaf node_ then
3 send $<\mathcal{D}^{1}(a),\ell\left(a\right)>$ to $\pi\left(a\right)$
4 else if _node $a$ is an internal node_ then
5 for _round $t=0$_ do send $<\mathcal{D}^{1}(a),\ell\left(a\right)>$ to $\pi\left(a\right)$
6 for _each subsequent round $t=1$ to $t=D-\ell\left(a\right)$_ do
7 collect message $\operatorname{msg}_{c}$ from all nodes $c\in\operatorname{children}(a)$
8 if _$\operatorname{msg}_{c}=\phi\ \forall c\in\operatorname{children}(a)$_ then send $<\phi,\ell\left(a\right)+t>$ to $\pi\left(a\right)$
9 else
10 $\mathcal{D}\leftarrow$ among $\operatorname{msg}_{c}\ \forall c\in\operatorname{children}(a)$, choose the 1-cut detail whose 1-cut persists for the sub-graph closest to the root, that is, the one with the lowest $6^{th}$ (last) element of the 1-cut detail tuple
11 send $<\mathcal{D},\ell\left(a\right)+t>$ to $\pi\left(a\right)$
1
2 _convergecasting 2-cut detail_
3 if _node $a$ is a leaf node_ then
4 send $<\mathcal{D}^{2}(a),\ell\left(a\right)>$ to $\pi\left(a\right)$
5 else if _node $a$ is an internal node_ then
6 for _round $t=0$_ do send $<\mathcal{D}^{2}(a),\ell\left(a\right)>$ to $\pi\left(a\right)$
7 for _each subsequent round $t=1$ to $t=D-\ell\left(a\right)$_ do
8 collect message $\operatorname{msg}_{c}$ from all nodes $c\in\operatorname{children}(a)$
9 if _$\operatorname{msg}_{c}=\phi\ \forall c\in\operatorname{children}(a)$_ then send $<\phi,\ell\left(a\right)+t>$ to $\pi\left(a\right)$
10 else
11 $\mathcal{D}\leftarrow$ among $\operatorname{msg}_{c}\ \forall c\in\operatorname{children}(a)$, choose the 2-cut detail whose 2-cut persists for the sub-graph closest to the root, that is, the one with the lowest $pivotnodelevel$ (the last element of the 2-cut detail tuple)
12 send $<\mathcal{D},\ell\left(a\right)+t>$ to $\pi\left(a\right)$
Algorithm 11 Convergecast algorithm to be run at every node $a$
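The contention-resolution step of Algorithm 11, isolated as a small Python helper for illustration: among the details received from its children, an internal node keeps only the one whose pivot node is closest to the root (smallest `pivot_node_level`) and forwards that one upwards.

```python
def resolve_contention(child_msgs):
    """child_msgs: one OneCutDetail/TwoCutDetail (or None, i.e. phi) per child."""
    candidates = [m for m in child_msgs if m is not None]
    if not candidates:
        return None  # forward the empty message phi
    return min(candidates, key=lambda m: m.pivot_node_level)
```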
###### Observation 5.26.
For some nodes $v_{1},v_{2},v_{3}\in V\setminus\left\\{r\right\\}$, let
${v_{2}}^{\downarrow}\subset{v_{1}}^{\downarrow}$,
${v_{3}}^{\downarrow}\subset{v_{1}}^{\downarrow}$ and
${v_{2}}^{\downarrow}\cap{v_{3}}^{\downarrow}=\emptyset$. Let
$\left\\{(\pi\left(v_{1}\right),v_{1}),(\pi\left(v_{2}\right),v_{2}),(\pi\left(v_{3}\right),v_{3})\right\\}$
be a min-cut (as in CASE-5). If
$\gamma({v_{2}}^{\downarrow},{v_{3}}^{\downarrow})=0$ then at the end of
Algorithm 11, $\operatorname{LCA}(v_{2},v_{3})$ has both
${\mathcal{D}^{1}(v_{2})}$ and $\mathcal{D}^{1}(v_{3})$; and when
$\gamma({v_{2}}^{\downarrow},{v_{3}}^{\downarrow})>0$ then it has either
$\mathcal{D}^{2}(v_{2})$ or $\mathcal{D}^{2}(v_{3})$. Also,
Algorithm 11 takes $O(D)$ rounds.
###### Proof.
Let $z$ be the LCA of nodes $v_{2},v_{3}$. Let us first consider
$\gamma({v_{2}}^{\downarrow},{v_{3}}^{\downarrow})=0$. Here, based on Lemma
5.24 (i and ii), there does not exist a node
$x\in{v_{1}}^{\downarrow}\setminus\left({v_{2}}^{\downarrow}\cup{v_{3}}^{\downarrow}\right)$
such that $(\pi\left(x\right),x)$ persists as a min-cut in $G_{v_{1}}$. Thus,
at line 11 of Algorithm 11, $\mathcal{D}^{1}(v_{2})$ will not face
contention at any node $u\in\mathcal{A}\left(v_{2}\right)$ with
$\ell\left(u\right)>\ell\left(z\right)$, and similarly for $v_{3}$. The first
contention happens at node $z$, which will thus have both $\mathcal{D}^{1}(v_{2})$ and
$\mathcal{D}^{1}(v_{3})$, received from two different children
$c_{1},c_{2}\in\operatorname{children}(z)$.
Now consider $\gamma({v_{2}}^{\downarrow},{v_{3}}^{\downarrow})>0$. Here, by Lemma
5.24 (pt. iv), we know that
$\left\\{(\pi\left(v_{2}\right),{v_{2}}),(\pi\left(v_{3}\right),{v_{3}})\right\\}$
is an induced 2-cut. WLOG let us assume
$\ell\left(v_{2}\right)\leq\ell\left(v_{3}\right)$. Thus, using Algorithm 7,
node $v_{2}$ can make a decision about the said induced cut of size 2. Also,
$\mathcal{D}^{2}(v_{2})$ (the 2-cut detail of node $v_{2}$) contains information
about the induced cut
$\left\\{(\pi\left(v_{2}\right),{v_{2}}),(\pi\left(v_{3}\right),{v_{3}})\right\\}$,
because by Lemma 5.24 (pt. v) there does not exist any other node $x$
such that $\ell\left(x\right)<\ell\left(v_{3}\right)$ and
$\left\\{(\pi\left(x\right),{x}),(\pi\left(v_{2}\right),{v_{2}})\right\\}$ is
an induced cut of size 2 in $G_{v_{1}}$. Further, using Lemma 5.24 (pt. ii),
there will not be any contention for $\mathcal{D}^{2}(v_{2})$ to reach
$\operatorname{LCA}(v_{2},v_{3})$.
Also, both convergecast modules in Algorithm 11 run for $O(D)$ time,
because both the 1-cut detail and the 2-cut detail are of size $O(\log n)$ bits
and at each node the modules run for at most $O(D)$ rounds. ∎
###### Lemma 5.27.
For some nodes $v_{1},v_{2},v_{3}\in V\setminus\left\\{r\right\\}$, let
${v_{2}}^{\downarrow}\subset{v_{1}}^{\downarrow}$ and
${v_{3}}^{\downarrow}\subset{v_{1}}^{\downarrow}$ and
${v_{2}}^{\downarrow}\cap{v_{3}}^{\downarrow}=\emptyset$ . Let
$\left\\{(\pi\left(v_{1}\right),v_{1}),(\pi\left(v_{2}\right),v_{2}),(\pi\left(v_{3}\right),v_{3})\right\\}$
be a min-cut (as in CASE-5). Then we can find this min-cut in $O(D^{2})$
rounds.
###### Proof.
Here we run Algorithm 12 on each node $x$, after the convergecast of the 1-cut
details and 2-cut details. This algorithm has two parts: finding a min-cut
from the 1-cut details and from the 2-cut details as delivered by Algorithm
11. We argue that all the min-cuts of the form given by CASE-5 are
correctly found by this algorithm, and that this process takes
$O(D^{2})$ rounds.
We claim that when Algorithm 12 is run on the LCA of $v_{2},v_{3}$, it can
find the required min-cut. First, consider the case where
$\gamma({v_{2}}^{\downarrow},{v_{3}}^{\downarrow})=0$: here, as per Observation
5.26, the LCA of $v_{2},v_{3}$ has both $\mathcal{D}^{1}(v_{2})$ and
$\mathcal{D}^{1}(v_{3})$. Let $v$ be the pivotnode from
$\mathcal{D}^{1}(v_{2})$ and $u$ be the pivotnode from
$\mathcal{D}^{1}(v_{3})$. The information regarding
$H_{{v_{2}}^{\downarrow}}^{v}$ and $H_{{v_{3}}^{\downarrow}}^{u}$ is available
from the respective 1-cut details. WLOG let
$\ell\left(v\right)\geq\ell\left(u\right)$. Then
$H_{{v_{3}}^{\downarrow}}^{u}=H_{{v_{3}}^{\downarrow}}^{v}$, because the number
of edges going out of the vertex set ${v_{3}}^{\downarrow}$ which also go out
of the vertex set ${u}^{\downarrow}$ is the same as the number of edges going
out of ${v_{3}}^{\downarrow}$ which also go out of the vertex set
${v}^{\downarrow}$. Now, if the condition in Line 12 is satisfied, the min-cut
is correctly found as per Lemma 3.6.
Further, moving to the case
$\gamma({v_{2}}^{\downarrow},{v_{3}}^{\downarrow})>0$: here, as per Observation
5.26, one of $\mathcal{D}^{2}(v_{2})$ or $\mathcal{D}^{2}(v_{3})$ is present at
$\operatorname{LCA}(v_{2},v_{3})$. And when the condition at Line 12 is
satisfied, the min-cut is correctly found as per Lemma 3.6.
1 _from 1-cut detail_
2 $nodePair\leftarrow\left\\{(a,b)\mid\mathcal{D}^{1}(a),\mathcal{D}^{1}(b)\text{ were received from two different children }c_{1},c_{2}\in\operatorname{children}(x)\right\\}$
3 for _each $(a,b)\in nodePair$_ do
4 $v\leftarrow pivotnode$ from $\mathcal{D}^{1}(a),u\leftarrow pivotnode$ from
$\mathcal{D}^{1}(b)$
5 $z\leftarrow\underset{z^{\prime}\in\left\\{v,u\right\\}}{argmax\
}\ell\left(z^{\prime}\right)$
6 if _$\eta(z)-1=H_{{a}^{\downarrow}}^{z}+H_{{b}^{\downarrow}}^{z}$ &
$\eta(a)-1=H_{{a}^{\downarrow}}^{z}$ & $\eta(b)-1=H_{{b}^{\downarrow}}^{z}$_
then
7
$\left\\{(\pi\left(z\right),{z}),(\pi\left(a\right),{a}),(\pi\left(b\right),{b})\right\\}$
is a min-cut
8
9
1
2 _from 2-cut detail_
3 for _all node $a$ such that $\mathcal{D}^{2}(a)$ was received_ do
4 let $a,b$ be $node1$ and $node2$ in $\mathcal{D}^{2}(a)$, and $v$ be the $pivotnode$ from $\mathcal{D}^{2}(a)$
5 if _$\eta(v)-1=H_{{a}^{\downarrow}}^{v}+H_{{b}^{\downarrow}}^{v}$ &
$\eta(a)-1=H_{{a}^{\downarrow}}^{v}+\gamma({a}^{\downarrow},{b}^{\downarrow})$&
$\eta(b)-1=H_{{b}^{\downarrow}}^{v}+\gamma({a}^{\downarrow},{b}^{\downarrow})$_
then
6
$\left\\{(\pi\left(v\right),{v}),(\pi\left(a\right),{a}),(\pi\left(b\right),{b})\right\\}$
is a min-cut
7
8
Algorithm 12 Algorithm to find min-cut as given by CASE-5 (to be run at each
internal node $x$)
Also, this whole process takes just $O(D^{2})$ rounds, because the layered
algorithm for min-cut takes $O(D^{2})$ rounds and the convergecast algorithm
given in Algorithm 11 takes only $O(D)$ rounds. ∎
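For reference, a centrally-simulated Python rendering of the first half of Algorithm 12: at an internal node, 1-cut details that arrived via two different children are paired up and tested against the condition of Lemma 3.6. `H[(a, z)]` stands for $H_{{a}^{\downarrow}}^{z}$; the 2-cut-detail half is analogous, additionally using `between_edges`.

```python
def decide_from_one_cut_details(pairs, eta, H):
    # `pairs` holds (d_a, d_b): OneCutDetails received from two different
    # children of the current node.
    for d_a, d_b in pairs:
        # z is the deeper of the two pivot nodes (the one of largest level).
        z = max((d_a.pivot_node, d_a.pivot_node_level),
                (d_b.pivot_node, d_b.pivot_node_level),
                key=lambda t: t[1])[0]
        a, b = d_a.node, d_b.node
        if (eta[z] - 1 == H[(a, z)] + H[(b, z)] and
                eta[a] - 1 == H[(a, z)] and
                eta[b] - 1 == H[(b, z)]):
            return (z, a, b)  # {(pi(z),z),(pi(a),a),(pi(b),b)} is a min-cut
    return None
```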
### 5.3 Summary
In this chapter, we gave a novel deterministic algorithm to find min-cuts of
size $3$. The backbone of our algorithm is the characterization presented in
Lemma 3.5 and Lemma 3.6. We showed that whenever there exists a min-cut of
size $3$, at least one node in the network receives the quantities required by
the characterization within $O(D^{2})$ rounds, and thus the min-cut is found.
The communication strategies introduced in this chapter are of two flavours:
the _Sketching_ technique, where we collect small but relevant information,
and the _Layered Algorithm_, which is a convergecast technique based on a
contention-resolution mechanism.
## Chapter 6 Future Work
We have discussed techniques for finding min-cuts. Before our result, the
state of the art for finding small min-cuts was a decade-old work by
Pritchard [25], who gave an algebraic technique based on random circulations
for finding min-cuts of size $1$ and $2$. In contrast, we give deterministic
and purely combinatorial techniques for finding min-cuts of size $1,2$ and
$3$. Our algorithm takes $O(D)$ rounds to find min-cuts of size $1,2$ and
$O(D^{2})$ rounds to find a min-cut of size $3$. An immediate future goal
could be to prove a matching $\Omega(D^{2})$ lower bound for finding a min-cut
of size $3$, which is not trivial.
Currently, there are some fundamental obstacles to finding min-cuts of size $4$
and above using our techniques. It would be interesting to extend our
techniques to min-cuts of size $4$. We are hopeful about this and give the
following conjecture:
###### Conjecture 1.
There exists a distributed algorithm to find if a min-cut of size $k=o(\log
n)$ exists in $\tilde{O}(D^{k})$ rounds.
## References
* Ahuja and Zhu [1989] Ahuja, M. and Y. Zhu, An efficient distributed algorithm for finding articulation points, bridges, and biconnected components in asynchronous networks. In Foundations of Software Technology and Theoretical Computer Science, FSTTCS. 1989.
* Bondy and Murty [1976] Bondy, J. A. and U. S. R. Murty, Graph theory with applications. Macmillan, London, 1976.
* Das Sarma et al. [2011] Das Sarma, A., S. Holzer, L. Kor, A. Korman, D. Nanongkai, G. Pandurangan, D. Peleg, and R. Wattenhofer, Distributed verification and hardness of distributed approximation. In Proceedings of the 43rd Annual ACM Symposium on Theory of Computing, STOC. 2011.
* Elkin [2017] Elkin, M., A simple deterministic distributed mst algorithm, with near-optimal time and message complexities. In Proceedings of the ACM Symposium on Principles of Distributed Computing, PODC. 2017.
* Elkin and Neiman [2017] Elkin, M. and O. Neiman (2017). Linear-size hopsets with small hopbound, and distributed routing with low memory. URL http://arxiv.org/abs/1704.08468.
* Gabow [1991] Gabow, H. N., A matroid approach to finding edge connectivity and packing arborescences. In Proceedings of the Twenty-third Annual ACM Symposium on Theory of Computing, STOC ’91. ACM, New York, NY, USA, 1991. ISBN 0-89791-397-3. URL http://doi.acm.org/10.1145/103418.103436.
* Ghaffari and Haeupler [2016] Ghaffari, M. and B. Haeupler, Distributed algorithms for planar networks - ii: Low-congestion shortcuts, mst, and min-cut. In Proceedings of the Twenty-Seventh Annual ACM-SIAM Symposium on Discrete Algorithms, SODA. 2016.
* Ghaffari and Kuhn [2013] Ghaffari, M. and F. Kuhn, Distributed minimum cut approximation. In International Symposium on Distributed Computing. Springer, 2013.
* Henzinger et al. [2017] Henzinger, M., S. Rao, and D. Wang, Local flow partitioning for faster edge connectivity. In Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA. 2017.
* Hohberg [1990] Hohberg, W. (1990). How to find biconnected components in distributed networks. J. Parallel Distrib. Comput., 9(4), 374–386. ISSN 0743-7315. URL http://dx.doi.org/10.1016/0743-7315(90)90122-6.
* Karger [1993] Karger, D. R., Global min-cuts in rnc, and other ramifications of a simple min-out algorithm. In Proceedings of the Fourth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA ’93. Society for Industrial and Applied Mathematics, Philadelphia, PA, USA, 1993. ISBN 0-89871-313-7. URL http://dl.acm.org/citation.cfm?id=313559.313605.
* Karger [1994a] Karger, D. R., Random sampling in cut, flow, and network design problems. In Proceedings of the Twenty-sixth Annual ACM Symposium on Theory of Computing, STOC ’94. ACM, New York, NY, USA, 1994a. ISBN 0-89791-663-8. URL http://doi.acm.org/10.1145/195058.195422.
* Karger [1994b] Karger, D. R., Using randomized sparsification to approximate minimum cuts. In Proceedings of the Fifth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA ’94. Society for Industrial and Applied Mathematics, Philadelphia, PA, USA, 1994b. ISBN 0-89871-329-3. URL http://dl.acm.org/citation.cfm?id=314464.314582.
* Karger [2000] Karger, D. R. (2000). Minimum cuts in near-linear time. J. ACM, 47(1), 46–76. ISSN 0004-5411. URL http://doi.acm.org/10.1145/331605.331608.
* Karger and Stein [1993] Karger, D. R. and C. Stein, An õ (n 2) algorithm for minimum cuts. In Proceedings of the twenty-fifth annual ACM symposium on Theory of computing. ACM, 1993.
* Kawarabayashi and Thorup [2015] Kawarabayashi, K.-i. and M. Thorup, Deterministic global minimum cut of a simple graph in near-linear time. In Proceedings of the forty-seventh annual ACM symposium on Theory of computing, STOC. 2015.
* Kutten and Peleg [1998] Kutten, S. and D. Peleg (1998). Fast distributed construction of small $k$-dominating sets and applications. J. Algorithms, 28(1).
* Luby [1985] Luby, M., A simple parallel algorithm for the maximal independent set problem. In Proceedings of the Seventeenth Annual ACM Symposium on Theory of Computing, STOC. 1985.
* Matula [1993] Matula, D. W., A linear time $2+\epsilon$ approximation algorithm for edge connectivity. In Proceedings of the Fourth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA ’93. Society for Industrial and Applied Mathematics, Philadelphia, PA, USA, 1993. ISBN 0-89871-313-7. URL http://dl.acm.org/citation.cfm?id=313559.313872.
* Nagamochi and Ibaraki [1992] Nagamochi, H. and T. Ibaraki (1992). Computing edge-connectivity in multigraphs and capacitated graphs. SIAM Journal on Discrete Mathematics, 5(1), 54–66.
* Nanongkai and Su [2014] Nanongkai, D. and H.-H. Su, Almost-tight distributed minimum cut algorithms. In 24th International Symposium on Distributed Computing, DISC. 2014.
* Pandurangan et al. [2017] Pandurangan, G., P. Robinson, and M. Scquizzato, A time- and message-optimal distributed algorithm for minimum spanning trees. In Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing, STOC. 2017.
* Peleg [2000] Peleg, D., Distributed computing: a locality-sensitive approach. SIAM, 2000.
* Pritchard [2006] Pritchard, D. (2006). An optimal distributed edge-biconnectivity algorithm. Accepted as a Poster at 25th ACM symposium on Principles of distributed computing. PODC.. URL http://arxiv.org/abs/cs/0602013.
* Pritchard [2008] Pritchard, D., Fast distributed computation of cuts via random circulations. In International Colloquium on Automata, Languages, and Programming, ICALP. 2008.
* Pritchard and Thurimella [2011] Pritchard, D. and R. Thurimella (2011). Fast computation of small cuts via cycle space sampling. ACM Transactions on Algorithms (TALG), 7(4), 46.
* Stoer and Wagner [1997] Stoer, M. and F. Wagner (1997). A simple min-cut algorithm. J. ACM, 44(4), 585–591. ISSN 0004-5411. URL http://doi.acm.org/10.1145/263867.263872.
* Thorup [2001] Thorup, M., Fully-dynamic min-cut. In Proceedings of the thirty-third annual ACM symposium on Theory of computing. ACM, 2001.
* Thurimella [1995] Thurimella, R., Sub-linear distributed algorithms for sparse certificates and biconnected components. In Proceedings of the fourteenth annual ACM symposium on Principles of distributed computing, PODC. 1995.
* Tsin [2006] Tsin, Y. H. (2006). An efficient distributed algorithm for 3-edge-connectivity. International Journal of Foundations of Computer Science, 17(03), 677–701.
## CURRICULUM VITAE
1. NAME : Mohit Daga
2. DATE OF BIRTH : July 5, 1993
3. EDUCATION QUALIFICATIONS
1. (a)
Bachelor of Technology
Institute : The LNM Institute of Information Technology, Jaipur
Specialization : Communication and Computer Engineering
2. (b)
M.S. By Research
Institute : Indian Institute of Technology - Madras
Registration Date : January 6, 2016
## GENERAL TEST COMMITTEE
CHAIRPERSON : Prof. N.S. Narayanaswamy, Professor, Department of Computer Science and Engineering
GUIDE : Dr. John Augustine, Associate Professor, Department of Computer Science and Engineering
MEMBERS : Dr. Shweta Agrawal, Assistant Professor, Department of Computer Science and Engineering
Dr. Krishna Jagannathan, Assistant Professor, Department of Electrical Engineering
2024-09-04T02:54:55.786571 | 2020-02-28T22:53:30 | 2003.00102 | {
"authors": "Israel Morales, Ferran Valdez",
"full_text_license": null,
"license": "Creative Commons Zero - Public Domain - https://creativecommons.org/publicdomain/zero/1.0/",
"provenance": "arxiv-papers-0000.json.gz:25956",
"submitter": "Ferran Valdez",
"url": "https://arxiv.org/abs/2003.00102"
} | arxiv-papers | # Loxodromic elements in big mapping class groups via the Hooper-Thurston-
Veech construction
Israel Morales and Ferrán Valdez Ferrán Valdez
Centro de Ciencias Matemáticas, UNAM, Campus Morelia, C.P. 58190, Morelia,
Michoacán, México<EMAIL_ADDRESS>Israel Morales
Centro de Ciencias Matemáticas, UNAM, Campus Morelia, C.P. 58190, Morelia,
Michoacán, México<EMAIL_ADDRESS>
###### Abstract.
Let $S$ be an infinite-type surface and $p\in S$. We show that the Thurston-
Veech construction for pseudo-Anosov elements, adapted for infinite-type
surfaces, produces infinitely many loxodromic elements for the action of
$\mathrm{Mod}(S;p)$ on the loop graph $L(S;p)$ that do not leave any finite-
type subsurface $S^{\prime}\subset S$ invariant. Moreover, in the language of
[BaWa18B], Thurston-Veech’s construction produces loxodromic elements of any
weight. As a consequence of Bavard and Walker’s work, any subgroup of
$\mathrm{Mod}(S;p)$ containing two "Thurston-Veech loxodromics" of different
weight has an infinite-dimensional space of non-trivial quasimorphisms.
## 1\. Introduction
Let $S$ be an orientable infinite-type surface, $p\in S$ a marked point
(throughout this text we think of $p$ as either a marked point or a marked
puncture) and $\mathrm{Mod}(S;p)$ the quotient of ${\rm Homeo}^{+}(S;p)$ by
isotopies which fix $p$ for all times. This group is related to the (big)
mapping class group $\mathrm{Mod}(S)$ of $S$ (that is, ${\rm Homeo}^{+}(S)$
modulo isotopy) via Birman’s exact sequence:
$1\longrightarrow\pi_{1}(S,p)\overset{Push}{\longrightarrow}\mathrm{Mod}(S;p)\overset{Forget}{\longrightarrow}\mathrm{Mod}(S)\longrightarrow
1.$
The group $\mathrm{Mod}(S;p)$ acts by isometries on the (Gromov-hyperbolic,
infinite-diameter) loop graph $L(S;p)$, see [BaWa18A] and [BaWa18B]. To date,
the only known examples of loxodromic elements for this action are:
1. (1)
The loxodromic element $h$ defined by J. Bavard in [Bavard16], where
$S=\mathbb{S}^{2}\setminus(\\{\infty\\}\cup{\rm Cantor\ set})$ and $p=\infty$.
In this case $\mathrm{Mod}(S;p)=\mathrm{Mod}(S)$, $h$ is not in the pure
mapping class group $\mathrm{PMod}(S)$, does not preserve any finite-type
subsurface and, in the language of [BaWa18B], has weight 1.
2. (2)
Elements defined by pseudo-Anosov homeomorphisms supported in a finite-type
subsurface $S^{\prime}\subset S$ containing $p$. All these live in
$\mathrm{PMod}(S,p)$. Moreover, for any given $m\in\mathbb{N}$, $S^{\prime}$
can be chosen so that the loxodromic in question has weight $m$.
In [BaWa18B], the authors remark that _it would be interesting to construct
examples of loxodromic elements of weight greater than 1 which do not preserve
any finite type subsurface (up to isotopy)_.
The purpose of this article is to show that such examples can be obtained by
adapting the Thurston-Veech construction for pseudo-Anosov elements (see
[Thurston88], [Veech89] or [FarbMArgalit12]) to the context of infinite-type
surfaces. This adaptation is an extension of Thurston and Veech’s ideas built
upon previous work by Hooper [Hooper15], hence we call it the Hooper-Thurston-
Veech construction. Roughly speaking, we show that if one takes as input an
appropriate pair of multicurves $\alpha$, $\beta$ whose union fills $S$, then
the subgroup of $\mathrm{Mod}(S,p)$ generated by the (right) multitwists
$T_{\alpha}$ and $T_{\beta}$ contains infinitely many loxodromic elements.
More precisely, our main result is:
###### Theorem 1.1.
Let $S$ be an orientable infinite-type surface, $p\in S$ a marked point and
$m\in\mathbb{N}$. Let $\alpha=\\{\alpha_{i}\\}_{i\in I}$ and
$\beta=\\{\beta_{j}\\}_{j\in J}$ be multicurves in minimal position whose
union fills $S$ and such that:
1. (1)
the configuration graph $\mathcal{G}(\alpha\cup\beta)$ is of finite valence
(the vertices of this graph are $I\cup J$, with an edge between $i\in I$ and
$j\in J$ for every point of intersection between $\alpha_{i}$ and $\beta_{j}$;
valence here means the supremum of $degree(v)$ over the vertices $v$ of the
graph),
2. (2)
for some fixed $N\in\mathbb{N}$, every connected component $D$ of
$S\setminus\alpha\cup\beta$ is a polygon or a once-punctured polygon with at
most $N$ sides (the boundary of any connected component of
$S\setminus\alpha\cup\beta$ is formed by arcs contained in the curves forming
$\alpha\cup\beta$, and hence we can think of these components as topological
polygons) and
3. (3)
the connected component of $S\setminus\alpha\cup\beta$ containing $p$ is a
$2m$-polygon.
If $T_{\alpha}$, $T_{\beta}\in\mathrm{Mod}(S;p)$ are the (right) multitwists
w.r.t. $\alpha$ and $\beta$ respectively then any $f\in\mathrm{Mod}(S;p)$ in
the positive semigroup generated by $T_{\alpha}$ and $T_{\beta}^{-1}$ given by
a word on which both generators appear is a loxodromic element of weight $m$
for the action of $\mathrm{Mod}(S;p)$ on the loop graph $L(S;p)$.
###### Remark 1.2.
During the 2019 AIM-Workshop on Surfaces of Infinite Type we learned that
Abbott, Miller and Patel have also a construction of loxodromic mapping
classes whose support is an infinite type surface [AbMiPa]. Their examples are
obtained via a composition of handle shifts and live in the complement of the
closure of the subgroup of $\mathrm{Mod}(S,p)$ defined by homeomorphisms with
compact support. In contrast, loxodromic elements given by Theorem 1.1 are
limits of compactly supported mapping classes.
As we explain in Section 2.3, the weight of a loxodromic element
$f\in\mathrm{Mod}(S;p)$ is defined by Bavard and Walker using a precise
description of the (Gromov) boundary $\partial L(S;p)$ of the loop graph. For
finite-type surfaces, if $f$ is a pseudo-Anosov having a singularity at $p$,
this number corresponds to the number of separatrices based at $p$ of an
$f$-invariant transverse measured foliation. This quantity is relevant
because, as shown in [BaWa18B] and using the language of Bestvina and Fujiwara
[BestvinaFujiwara02], loxodromic elements with different weights are
independent and anti-aligned. This has several consequences, for example the
work of Bavard-Walker [BaWa18A] gives for free the following:
###### Corollary 1.3.
Let $f,g$ be two loxodromic elements in $\mathrm{Mod}(S;p)$ as in Theorem 1.1
and suppose that their weights are $m\neq m^{\prime}$. Then any subgroup of
$\mathrm{Mod}(S;p)$ containing them has an infinite-dimensional space of non-
trivial quasimorphisms.
Applications for random-walks on the loop graph with respect to probability
distributions supported on countable subgroups of $\mathrm{Mod}(S;p)$ can be
easily deduced from recent work of Maher-Tiozzo [MaherTiozzo18].
On the other hand, recent work by Rasmussen [Rass19] implies that the mapping
classes given by Theorem 1.1 are not WWPD in the language of Bestvina-
Bromberg-Fujiwara [BeBroFu15].
_About the proof of Theorem 1.1_. As in the case of Thurston’s work, our proof
relies on the existence of a flat surface structure $M$ on $S$, having a
conical singularity at $p$, for which the Dehn-twists $T_{\alpha}$ and
$T_{\beta}$ are affine automorphisms. In the case of finite-type surfaces,
this structure is unique (up to scaling) and its existence is guaranteed by
the Perron-Frobenius theorem. For infinite-type surfaces the presence of such
a flat surface structure is guaranteed once one can find a positive
eigenfunction of the adjacency operator on the (infinite bipartite)
configuration graph $\mathcal{G}(\alpha\cup\beta)$. Luckily, the spectral
theory of infinite graphs in this context secures the existence of uncountably
many flat surface structures (which are not related by scaling) on which the
Dehn-twists $T_{\alpha}$ and $T_{\beta}$ are affine automorphisms. The main
difficulty we encounter is that the description of the Gromov boundary
$\partial L(S;p)$ needed to certify that $f$ is a loxodromic element depends
on a hyperbolic structure on $S$ which, a priori, is not quasi-isometric to
any of the aforementioned flat surface structures. To overcome this difficulty
we propose arguments which are mostly of topological nature.
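To fix ideas, let us sketch the mechanism behind Thurston's original construction in the form it is used here; this is only an informal summary under a standard normalization (the precise statements, adapted to infinite-type surfaces, are given in Section 3). Suppose $h$ is a positive function on the vertices of $\mathcal{G}(\alpha\cup\beta)$ with $Ah=\lambda h$, where $A$ denotes the adjacency operator. One builds a translation surface out of rectangles $R_{x}$, one for each intersection point $x\in\alpha_{i}\cap\beta_{j}$, of width $h(\beta_{j})$ and height $h(\alpha_{i})$. Every horizontal cylinder (around some $\alpha_{i}$) and every vertical cylinder (around some $\beta_{j}$) then has modulus $1/\lambda$, so that $T_{\alpha}$ and $T_{\beta}$ act affinely with derivatives

$DT_{\alpha}=\begin{pmatrix}1&\lambda\\ 0&1\end{pmatrix},\qquad DT_{\beta}=\begin{pmatrix}1&0\\ -\lambda&1\end{pmatrix}.$

In particular $DT_{\alpha}DT_{\beta}^{-1}$ has trace $2+\lambda^{2}>2$ and is hence hyperbolic; this is the source of the loxodromic behaviour of the positive words in $T_{\alpha}$ and $T_{\beta}^{-1}$ appearing in Theorem 1.1.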
We strongly believe that Theorem 1.1 does not describe all possible
loxodromics living in the group generated by $T_{\alpha}$ and $T_{\beta}$.
###### Question 1.4.
Let $\alpha$ and $\beta$ be as in Theorem 1.1. Is every element $f$ in the
group generated by $T_{\alpha}$ and $T_{\beta}$ given by a word on which both
generators appear loxodromic? In particular, is $T_{\alpha}T_{\beta}$
loxodromic?
We spend a considerable part of this text in the proof of the next result,
which guarantees that Theorem 1.1 is not vacuous.
###### Theorem 1.5.
Let $S$ be an infinite-type surface, $p\in S$ a marked point and
$m\in\mathbb{N}$. Then there exist two multicurves $\alpha$ and $\beta$ whose
union fills $S$ and such that:
1. (1)
the configuration graph $\mathcal{G}(\alpha\cup\beta)$ is of finite valence,
2. (2)
every connected component $D_{i}$ of $S\setminus\alpha\cup\beta$ which does
not contain the point $p$ is a polygon or a once-punctured polygon with $n_{i}$
sides, where $n_{i}\leq N:=\max\\{8,m\\}$, and
3. (3)
$p$ is contained in a connected component of $D_{j}$ of
$S\setminus\alpha\cup\beta$ whose boundary $\partial D_{j}$ is a $2m$-polygon.
A crucial part of the proof of this result is to find, for _any_ infinite-type
surface $S$, a simple way to portray $S$. We call this a topological normal
form. Once this is achieved, we give a recipe to construct the curves $\alpha$
and $\beta$ explicitly.
On the other hand, we find phenomena proper to big mapping class groups:
###### Corollary 1.6.
Let $S$ be the Loch-Ness monster (the infinite-genus surface with exactly one
end) and consider the action of $\mathrm{Mod}(S;p)$ on the loop graph. Then
there exists a sequence of loxodromic elements $(f_{n})$ in
$\mathrm{Mod}(S;p)$ which converges in the compact-open topology to a
non-trivial elliptic element.
###### Theorem 1.7.
There exists a family of translation surface structures
$\\{M_{\lambda}\\}_{\lambda\in[2,+\infty]}$ on a Loch Ness monster $S$ and
$f\in\mathrm{Mod}(S)$ such that:
* •
For every $k\in\mathbb{N}$, $f^{k}$ does not fix any isotopy class of
essential simple closed curve in $S$.
* •
If $\tau_{\lambda}:\mathrm{Aff}(M_{\lambda})\hookrightarrow\mathrm{Mod}(S)$ is
the natural map sending an affine homeomorphism to its mapping class, then
$D\tau_{\lambda}^{-1}(f)\in\mathrm{PSL}(2,\mathbb{R})$ is parabolic if
$\lambda=2$ and hyperbolic for every $\lambda>2$.
Recall that for finite-type surfaces $S$ a class $f\in\mathrm{Mod}(S)$ such
that for every $k\in\mathbb{N}$, $f^{k}$ does not fix any isotopy class of
essential simple closed curve in $S$ is necessarily pseudo-Anosov. In
particular, the derivative of any affine representative $\phi\in f$ is
hyperbolic.
We want to stress that for many infinite-type surfaces $\mathrm{Mod}(S)$ does
not admit an action on a metric space with unbounded orbits. For a more
detailed discussion on this fact and the large scale geometry of big mapping
class groups we refer the reader to [DurhamFanoniVlamis18], [MahnRafi20] and
references therein.
_Outline_. Section 2 is devoted to preliminaries about the loop graph, its
boundary and infinite-type flat surfaces. In Section 3 we present the Hooper-
Thurston-Veech construction (more precisely, the construction presented here
is a particular case of a more general construction built upon work of P.
Hooper by the second author and V. Delecroix; the first version of this more
general construction had mistakes that were pointed out by the first author).
In Section 3 we also prove Theorem 1.7. Finally, Section 4 is devoted to the
proof of Theorems 1.1, 1.5 and Corollary 1.6 (in this order).
_Acknowledgements_. We are grateful to Vincent Delecroix, Alexander Rasmussen
and Patrick Hooper for providing crucial remarks that lead to the proof of
Theorem 1.1. We also want to thank Vincent Delecroix for letting us include a
particular case of the Hooper-Thurston-Veech’s construction and Figures 2, 3
and 4, which are work in collaboration with the second author (see [DHV19]).
We are grateful to Juliette Bavard and Alden Walker for taking the time to
explain the details of their work, and to Chris Leininger for answering all
our questions. We want to thank the following institutions and grants for
their hospitality and financial support: Max-Planck Institut für Mathematik,
American Institute of Mathematics, PAPIIT IN102018, UNAM, PASDA de la DGAPA,
CONACYT and Fondo CONACYT-Ciencia Básica’s project 283960.
## 2\. Preliminaries
### 2.1. Infinite-type surfaces
Any orientable (topological) surface $S$ with empty boundary is determined up
to homeomorphism by its genus (possibly infinite) and a pair of nested,
totally disconnected, separable, metrizable topological spaces
$\mathrm{Ends}_{\infty}(S)\subset\mathrm{Ends}(S)$ called the space of ends
accumulated by genus and the space of ends of $S$, respectively. Any such pair
of nested topological spaces occurs as the space of ends of some orientable
surface, see [Richards63]. On the other hand, $S\cup\mathrm{Ends}(S)$ can be
endowed with a natural topology which makes the corresponding space compact.
This space is called the Freudenthal end-point compactification of $S$, see
[Raymond60].
A surface $S$ is of finite (topological) type if its fundamental group is
finitely generated. In any other case we say that $S$ is an _infinite-type_
surface. All surfaces considered henceforth are orientable.
### 2.2. Multicurves
Let $S$ be an infinite-type surface. A collection of essential curves $l$ in
$S$ is locally finite if for every $x\in S$ there exists a neighborhood $U$ of
$x$ which intersects only finitely many elements of $l$. As surfaces are second-
countable topological spaces, any locally finite collection of essential
curves is countable.
A _multicurve_ in $S$ is a locally finite, pairwise disjoint, and pairwise
non-isotopic collection of essential curves in $S$.
Let $\alpha$ be a multicurve in $S$. We say that $\alpha$ _bounds a
subsurface_ $\Sigma$ of $S$, if the elements of $\alpha$ are exactly all the
boundary curves of the closure of $\Sigma$ in $S$. Also, we say that $\Sigma$
is _induced_ by $\alpha$ if there exists a subset
$\alpha^{\prime}\subset\alpha$ that bounds $\Sigma$ and there are no elements
of $\alpha\smallsetminus\alpha^{\prime}$ in its interior.
A multicurve $\alpha$ in $S$ is of _finite type_ if every connected component
of $S\smallsetminus\alpha$ is a finite-type surface.
Finite multicurves (that is, those formed by a finite number of curves) are not
necessarily of finite type. On the other hand, there are infinite multicurves
which are not of finite type, _e.g._ the blue ("vertical") curves in the
right-hand side of Figure 1.
Let $\alpha$ and $\beta$ be two multicurves in $S$. We say that $\alpha$ and
$\beta$ are in _minimal position_ if for every $\alpha_{i}\in\alpha$ and
$\beta_{j}\in\beta$, $|\alpha_{i}\cap\beta_{j}|$ realizes the minimal number
of (geometric) intersection points between a representative of the free
isotopy class of $\alpha_{i}$ and a representative of the free isotopy class
of $\beta_{j}$. For every pair of multicurves one can find representatives in
their isotopy classes which are in minimal position.
Let $\alpha$ and $\beta$ be two multicurves in $S$ in minimal position. We say
that $\alpha$ and $\beta$ _fill_ $S$ if every connected component of
$S\smallsetminus\left(\alpha\cup\beta\right)$ is either a disk or a once-
punctured disk.
###### Remark 2.1.
Let $\alpha$ and $\beta$ be multicurves. Then:
1. (1)
If $\alpha$ and $\beta$ are of finite type and fill $S$, then every
complementary connected component of $\alpha\cup\beta$ in $S$ is a polygon
with finitely many sides. The converse is not true, see the left-hand side of
Figure 1.
2. (2)
There are pairs of multicurves $\alpha$ and $\beta$ so that
$S\smallsetminus\left(\alpha\cup\beta\right)$ has a connected component that
is a polygon with infinitely many sides, see the right-hand side of Figure 1.
Figure 1.
### 2.3. The loop graph and its boundary
In [BaWa18A] and [BaWa18B], Bavard and Walker introduced the loop graph
$L(S;p)$ and proved that it is a hyperbolic graph on which $\mathrm{Mod}(S,p)$
acts by isometries. Moreover, they gave a precise description of the Gromov
boundary of $L(S;p)$ in terms of (hyperbolic) geodesics on the Poincaré disk.
We recall the general aspects of their work in what follows. The exposition
follows largely [BaWa18A], [BaWa18B] and [Rass19].
_The loop graph_. Let $S$ be an infinite-type surface and $p\in S$. In what
follows we think of $p$ as a marked puncture in $S$. The isotopy class of a
topological embedding $\gamma:(0,1)\hookrightarrow S$ is said to be a
_loop_ if it can be continuously extended to the Freudenthal end-point
compactification $S\cup\mathrm{Ends}(S)$ of $S$ with $\gamma(0)=\gamma(1)=p$.
On the other hand, if the continuous extension of $\gamma$ satisfies that
$\gamma(0)=p$ and $\gamma(1)\in\mathrm{Ends}(S)\setminus\\{p\\}$ we call it a
_short ray_. The loop graph $L(S;p)$ has as vertex set the isotopy classes
(relative to $p$) of loops, adjacency is defined by disjointness (modulo
isotopy), and it is hyperbolic w.r.t. the combinatorial distance, see
[BaWa18B]. In order to
describe the Gromov boundary of $L(S;p)$ we need to introduce the completed
ray graph.
_Long rays and the completed ray graph_. From now on we fix a hyperbolic
metric $\mu$ on $S$ of the first kind (that is, the Fuchsian group appearing
in the uniformization $\mathbb{D}\to S$ has as limit set the whole boundary of
the Poincaré disk) for which the marked point $p$ is a cusp. Every short ray
or loop has a unique geodesic representative in its isotopy class. We denote
by $\pi:\hat{S}\to S$ the infinite cyclic cover of $S$ defined by the
(conjugacy class of) cyclic subgroup of $\pi_{1}(S,\cdot)$ generated by a
simple loop around the cusp $p$ and call it _the conical cover of $S$_. The
surface $\hat{S}$ is conformally equivalent to a punctured disc and its unique
cusp $\hat{p}$ projects to $p$. We denote by $\partial\hat{S}$ the Gromov
boundary of $\hat{S}$ from which $\hat{p}$ has been removed. This cover
is useful because for every geodesic representative of a short ray or loop in
$S$ there is a unique lift to $\hat{S}$ which is a geodesic with one end in
$\hat{p}$ and the other in $\partial\hat{S}$.
A _long ray_ on $S$ is a _simple_ bi-infinite geodesic of the form
$\pi(\hat{\delta})$, where $\hat{\delta}\subset\hat{S}$ is a geodesic from
$\hat{p}$ to $\partial\hat{S}$ which is not a short ray or a loop. By
definition, each long ray limits to $p$ at one end and does not limit to any point of
$\mathrm{Ends}(S)$ on the other end. The vertices of the completed ray graph
$\mathcal{R}(S;p)$ are isotopy classes of loops and short rays, and long rays.
Two vertices are adjacent if their geodesic representatives in $(S,\mu)$ are
disjoint. As before, we consider the combinatorial metric on
$\mathcal{R}(S;p)$ defined by declaring that each edge has length 1.
###### Theorem 2.2.
[BaWa18B] The completed ray graph $\mathcal{R}(S;p)$ is disconnected. There
exists a component containing all loops and short rays, which is of infinite
diameter and quasi-isometric to the loop graph. All other connected components
are (possibly infinite) cliques and each of them is formed exclusively by long
rays.
The component of $\mathcal{R}(S;p)$ containing all loops and short rays is
called the _main component_ of the completed ray graph. Long rays not
contained in the main component are called _high-filling_ rays, and each of
them belongs to a clique formed exclusively of high-filling rays.
_The Gromov boundary of the loop graph_. Let us denote by $\mathcal{H}(S;p)$
the set of all high-filling rays in $\mathcal{R}(S;p)$. Bavard and Walker
endow $\mathcal{H}(S;p)$ with a topology. This topology is based on the notion
of two rays $k$-beginning like each other, see Section 4.1 and Definition
5.2.4 in [BaWa18B]. On the other hand, they define a
$\mathrm{Mod}(S;p)$-action on $\mathcal{H}(S;p)$ by homeomorphisms. We sketch
this action in what follows. First they show that endpoints of lifts of loops
and short rays to the conical cover $\hat{S}$ are dense in $\partial\hat{S}$.
Using this, and the fact that mapping classes in $\mathrm{Mod}(S;p)$ permute
loops and short rays, they prove that any $\phi\in\mathrm{Mod}(S;p)$ lifts to
a homeomorphism of $\hat{S}$ which admits a unique continuous extension to a
homeomorphism of $\partial\hat{S}$. Finally, they show that this extension
preserves the subset of $\partial\hat{S}$ formed by endpoints of high-filling
rays, hence inducing the aforementioned action by homeomorphisms.
###### Theorem 2.3.
Let $\mathcal{E}(S;p)=\mathcal{H}(S;p)/\sim$, where $\sim$ identifies all
high-filling rays in the same clique, and endow this set with the quotient
topology. Then there exists a $\mathrm{Mod}(S;p)$-equivariant homeomorphism
$F:\mathcal{E}(S;p)\to\partial L(S;p)$, where $\partial L(S;p)$ is the Gromov
boundary of the loop graph.
In consequence any loxodromic element $\phi\in\mathrm{Mod}(S;p)$ fixes two
cliques of high-filling rays $\mathcal{C}^{-}(\phi)$ and
$\mathcal{C}^{+}(\phi)$.
###### Theorem 2.4 ([BaWa18B], Theorem 7.1.1).
The cliques $\mathcal{C}^{-}(\phi)$ and $\mathcal{C}^{+}(\phi)$ are finite and
of the same cardinality.
This allows one to define the _weight_ of a loxodromic element $\phi$ as the
cardinality of either $\mathcal{C}^{-}(\phi)$ or $\mathcal{C}^{+}(\phi)$. As
said in the introduction, the importance of the weight of a loxodromic
element is given by the following fact:
###### Lemma 2.5 ([BaWa18B], Lemma 9.2.7).
Let $g,h\in\mathrm{Mod}(S;p)$ be two loxodromic elements with different
weights. Then in the language of Bestvina-Fujiwara [BestvinaFujiwara02], $g$
and $h$ are independent and anti-aligned.
### 2.4. Flat surfaces
In this section we recall only basic concepts about flat surfaces needed for
the rest of the paper. It is important to remark that most of the flat
surfaces considered in this text are of infinite type. For a detailed
discussion on infinite-type flat surfaces we refer the reader to [DHV19].
We use $x,y$ for the standard coordinates in $\mathbb{R}^{2}$, $z=x+iy$ the
corresponding number in $\mathbb{C}$ and $(r,\theta)$ for polar coordinates
$x=r\cos(\theta)$ and $y=r\sin(\theta)$ (or $z=r\exp(i\theta)$). The Euclidean
metric $dx^{2}+dy^{2}$ can also be written as $(dr)^{2}+(rd\theta)^{2}$.
Let $S$ be an orientable surface and $g$ a metric defined on the complement
of a discrete set $\Sigma\subset S$. A point $p\in S$ is called a _conical
singularity of angle $\pi n$_ for some $n\in\mathbb{N}$ if there exists an
open neighbourhood $U$ of $p$ such that $(U,g)$ is isometric to $(V,g_{n})$,
where $V\subset\mathbb{C}^{*}$ is a (punctured) neighborhood of the origin and
$g_{n}=(dr)^{2}+(\frac{n}{2}rd\theta)^{2}$. If $n=2$ we call $p$ a _regular point_. In
general, regular points are not considered as singularities, though as we see
in the proof of Theorem 1.1 sometimes it is convenient to think of them as
marked points.
A _flat surface structure_ on a topological surface $S$ is the maximal atlas
$\mathcal{T}=\\{\phi_{i}:U_{i}\to\mathbb{C}\\}$ where
$(U_{i})_{i\in\mathbb{N}}$ forms an open covering of $S$, each $\phi_{i}$ is a
homeomorphism from $U_{i}$ to $\phi_{i}(U_{i})$ and for each $i,j$ the transition
map $\phi_{j}\circ\phi_{i}^{-1}:\phi_{i}(U_{i}\cap U_{j})\to\phi_{j}(U_{i}\cap
U_{j})$ is a translation in $\mathbb{C}$ or a map of the form
$z\to-z+\lambda$ for some $\lambda\in\mathbb{C}$ (these kinds of maps are also
called _half-translations_, and for this reason flat surfaces whose transition
maps contain half-translations are also known as half-translation surfaces).
###### Definition 2.6.
A _flat surface_ is a pair $M=(S,\mathcal{T})$ made of a connected topological
surface $S$ and a flat surface structure $\mathcal{T}$ on $S\setminus\Sigma$,
where:
1. (1)
$\Sigma$ is a discrete subset of $S$ and
2. (2)
every $z\in\Sigma$ is a conical singularity.
If the transition functions of $\mathcal{T}$ are all translations we call the
pair $(S,\mathcal{T})$ a translation surface.
###### Remark 2.7.
In the preceding definition $S$ can be of infinite topological type and
$\Sigma$ can be infinite. All points in $M\setminus\Sigma$ are regular. Every
flat surface carries a flat metric given by pulling back the Euclidean metric
in $\mathbb{C}$. We denote by $\widehat{M}$ the corresponding metric
completion and by ${\rm Sing}(M)\subset\widehat{M}$ the set of non-regular
points, which can be thought of as singularities of the flat metric. We stress
that the structure of $M$ near a non-regular point is not well understood in
full generality, see [BowmanValdez13] and [Randecker18].
Every flat surface $M$ which is not a translation surface has a (ramified)
double covering $\pi:\widetilde{M}\to M$ such that $\widetilde{M}$ is a
translation surface whose atlas is obtained by pulling back via $\pi$ the flat
surface structure of $M$. This is called the orientation double covering. If
$z_{0}\in M$ is a conical singularity of angle $n\pi$ then, if $n$ is even,
$\pi^{-1}(z_{0})$ is formed by two conical singularities of total angle
$n\pi$; whereas if $n$ is odd, $\pi^{-1}(z_{0})$ is a conical singularity of
total angle $2n\pi$. Hence the branching points of the orientation double
covering are the conical singularities in $M$ of angle $n\pi$, with $n$ odd.
This will be used in the proof of Theorem 1.1.
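For instance, a conical singularity of angle $3\pi$ ($n=3$, odd) has a single
preimage in the orientation double covering, a conical singularity of total
angle $6\pi$, and is therefore a branching point, whereas a conical
singularity of angle $4\pi$ ($n=4$, even) has two preimages of total angle
$4\pi$ each and the covering is unramified there.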
On the other hand, flat surfaces can be defined using the language of complex
geometry in terms of Riemann surfaces and quadratic differentials or by
glueing (possibly infinite) families of polygons along their edges. A detailed
discussion on these other definitions can be found in the first Chapter of
[DHV19].
Affine maps. A map $f\in\mathrm{Homeo}(M)$ with $f(\Sigma)\subset\Sigma$ is called an
_affine automorphism_ if the restriction $f:M\setminus\Sigma\to
M\setminus\Sigma$ reads in flat charts as an $\mathbb{R}$-affine map. We denote by
$\mathrm{Aff}(M)$ the group of affine homeomorphisms of $M$ and by
$\mathrm{Aff}^{+}(M)$ the subgroup of $\mathrm{Aff}(M)$ made of orientation
preserving affine automorphisms (_i.e_ their linear part has positive
determinant). Remark that the derivative $Df$ of an element
$f\in\mathrm{Aff}(M)$ is an element of ${\mathrm{GL}}_{2}(\mathbb{R})/\pm Id$.
Translation flows. For each direction $\theta\in\mathbb{R}/2\pi\mathbb{Z}$ we
have a well-defined translation flow
$F_{\mathbb{C},\theta}^{t}:\mathbb{C}\to\mathbb{C}$ given by
$F_{\mathbb{C},\theta}^{t}(z)=z+te^{i\theta}$. This flow defines a constant
vector field $X_{\mathbb{C},\theta}(z):=\frac{\partial
F_{\mathbb{C},\theta}^{t}}{\partial t}|_{t=0}(z)$. Now let $M$ be a
translation surface and $X_{M,\theta}$ the vector field on
$\widehat{M}\setminus{\rm Sing}(M)$ obtained by pulling back
$X_{\mathbb{C},\theta}$ using the charts of the structure. For every $z\in
M\setminus\mathrm{Sing}(M)$ let us denote by $\gamma_{z}:I\to M$, where
$I\subset\mathbb{R}$ is an interval containing zero, the maximal integral
curve of $X_{M,\theta}$ with initial condition $\gamma_{z}(0)=z$. We define
$F_{M,\theta}^{t}(z):=\gamma_{z}(t)$ and call it the _translation flow_ on $M$
in direction $\theta$. Let us remark that formally speaking $F^{t}_{M,\theta}$
is not a flow because the trajectory of the curve $\gamma_{z}(t)$ may reach a
point of $\mathrm{Sing}(M)$ in finite time. A trajectory of the translation
flow whose maximal domain of definition is a bounded interval (on both sides)
is called a _saddle connection_. When there is no need to distinguish
translation flows in different translation surfaces we abbreviate
$F_{M,\theta}^{t}$ and $X_{M,\theta}$ by $F_{\theta}^{t}$ and $X_{\theta}$
respectively. If $M$ is a flat surface but not a translation surface the pull
back of the vector field $e^{i\theta}$ to $M$ does not define a global vector
field on $M$ but it does define a _direction field_. In both cases the
integral curves define a foliation $\mathcal{F}_{\theta}$ on
$M\setminus\mathrm{Sing}(M)$.
###### Definition 2.8 (Cylinders and strips).
A _horizontal cylinder_ $C_{c,I}$ is a translation surface of the form
$([0,c]\times I)/\sim$, where $I\subset\mathbb{R}$ is open (but not
necessarily bounded), connected, and where $(0,s)$ is identified with $(c,s)$
for all $s\in I$. The numbers $c$ and $h=|I|$ are called the _circumference_
and _height_ of the cylinder respectively. The _modulus_ of $C_{c,I}$ is the
number $\frac{h}{c}$.
A _horizontal strip_ $C_{\infty,I}$ is a translation surface of the form
$\mathbb{R}\times I$, where $I$ is a bounded open interval. Analogously, the
height of the horizontal strip is $h=|I|$.
An open subset of a translation surface $M$ is called a _cylinder_
(respectively a _strip_) in direction $\theta$ (or parallel to $\theta$) if it
is isomorphic, as a translation surface, to $e^{-i\theta}C_{c,I}$ (respect. to
$e^{-i\theta}C_{\infty,I}$).
One can think of strips as cylinders of infinite circumference and finite
height. For flat surfaces which are not translation surfaces the definition of
cylinder still makes sense, though its direction is well defined only up to
change of sign.
###### Definition 2.9.
Let $M$ be a flat surface and $\theta\in\mathbb{R}/2\pi\mathbb{Z}$ a fixed
direction. A collection of maximal cylinders $\\{C_{i}\\}_{i\in I}$ parallel
to $\theta$ such that $\cup_{i\in I}C_{i}$ is dense in $M$ is called a
_cylinder decomposition_ in direction $\theta$.
## 3\. The Hooper-Thurston-Veech construction
The main result of this section is a generalization of the Thurston-Veech
construction for infinite-type surfaces. The key ingredient for this
generalization is the following:
###### Lemma 3.1.
Let $M$ be a flat surface for which there is a cylinder decomposition in the
horizontal direction. Suppose that every maximal cylinder in this
decomposition has modulus equal to $\frac{1}{\lambda}$ for some $\lambda>0$.
Then there exists a unique affine automorphism $\phi_{h}$ which fixes the
boundaries of the cylinders and whose derivative (in
$\mathrm{PSL}(2,\mathbb{R})$) is given by the matrix
$\left(\begin{smallmatrix}1&\lambda\\\ 0&1\end{smallmatrix}\right)$. Moreover,
the automorphism $\phi_{h}$ acts as a Dehn twist along the core curve of each
cylinder.
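The content of Lemma 3.1 can be checked in the local model of a single
cylinder; the following computation is a sketch of the standard argument.
Write a horizontal cylinder of modulus $\frac{1}{\lambda}$ as
$C=([0,c]\times[0,h])/\sim$ with $\lambda h=c$ and consider the affine map
$\phi(x,y)=(x+\lambda y,\,y)$, taken modulo $c$ in the first coordinate, whose
derivative is $\left(\begin{smallmatrix}1&\lambda\\\ 0&1\end{smallmatrix}\right)$. Then
$\phi(x,0)=(x,0)\qquad\text{and}\qquad\phi(x,h)=(x+\lambda h,\,h)=(x+c,\,h)\equiv(x,h),$
so both boundary circles are fixed pointwise while each vertical segment is
wound once around the core curve, that is, $\phi$ is a (right) Dehn twist on
$C$. Since these local maps fix the boundaries pointwise, they glue along the
boundaries of the cylinders of the decomposition to the global automorphism
$\phi_{h}$.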
In general, if $M$ is a flat surface having a cylinder decomposition in
direction $\theta\in\mathbb{R}/2\pi\mathbb{Z}$ for which every cylinder has
modulus equal to $\frac{1}{\lambda}$ for some $\lambda\in\mathbb{R}^{*}$, one
can apply to $M$ the rotation $R_{\theta}\in\mathrm{SO}(2,\mathbb{R})$ that
takes $\theta$ to the horizontal direction and apply the preceding lemma. For
example, if $\theta=\frac{\pi}{2}$, then there exists a unique affine
automorphism $\varphi_{v}$ which fixes the boundaries of the vertical
cylinders, acts on each cylinder as a Dehn twist, and whose derivative (in
$\mathrm{PSL}(2,\mathbb{R})$) is given by the matrix
$\left(\begin{smallmatrix}1&0\\\ -\lambda&1\end{smallmatrix}\right)$. In
particular, if $M$ is a flat surface having cylinder decompositions in the
horizontal and vertical directions such that each cylinder involved has
modulus $\frac{1}{\lambda}$, then $\mathrm{Aff}(M)$ has two affine multitwists
$\phi_{h}$ and $\phi_{v}$ with
$D\phi_{h}=\left(\begin{smallmatrix}1&\lambda\\\ 0&1\end{smallmatrix}\right)$
and $D\phi_{v}=\left(\begin{smallmatrix}1&0\\\
-\lambda&1\end{smallmatrix}\right)$ in $\mathrm{PSL}(2,\mathbb{R})$.
Let us recall now the Thurston-Veech construction (see [Farb-Margalit],
Theorem 14.1):
###### Theorem 3.2 (Thurston-Veech construction).
Let $\alpha=\\{\alpha_{i}\\}_{i=1}^{n}$ and $\beta=\\{\beta_{j}\\}_{j=1}^{m}$
be two multicurves filling a finite type surface $S$. Then there exists
$\lambda=\lambda(\alpha,\beta)\in\mathbb{R}^{*}$ and a representation $\rho:\langle
T_{\alpha},T_{\beta}\rangle\to\mathrm{PSL}(2,\mathbb{R})$ given by:
$T_{\alpha}\to\begin{pmatrix}1&\lambda\\\ 0&1\end{pmatrix},\qquad T_{\beta}\to\begin{pmatrix}1&0\\\ -\lambda&1\end{pmatrix}.$
Moreover, an element $f\in\langle T_{\alpha},T_{\beta}\rangle$ is periodic,
reducible or pseudo-Anosov according to whether $\rho(f)$ is elliptic,
parabolic or hyperbolic.
The proof of Theorem 3.2 uses Lemma 3.1. More precisely, one needs to find a
flat surface structure on $S$ which admits horizontal and vertical cylinder
decompositions $\\{H_{i}\\}_{i=1}^{n}$ and $\\{V_{j}\\}_{j=1}^{m}$ such that
each cylinder has modulus equal to $\frac{1}{\lambda}$ and for which
$\alpha_{i}$ and $\beta_{j}$ are the core curves of $H_{i}$ and $V_{j}$ for
each $i=1,\ldots,n$ and $j=1,\ldots,m$, respectively. By the Perron-Frobenius
theorem, such a flat structure always exists and is unique up to scaling.
###### Definition 3.3.
Let $\alpha=\cup_{i\in I}\alpha_{i}$ and $\beta=\cup_{j\in J}\beta_{j}$ be two
multicurves in a topological surface $S$ (in minimal position, not necessarily
filling). The _configuration graph_ of the pair $(\alpha,\beta)$ is the
bipartite graph $\mathcal{G}(\alpha\cup\beta)$ whose vertex set is $I\sqcup J$
and where there is an edge between two vertices $i\in I$ and $j\in J$ for
every intersection point between $\alpha_{i}$ and $\beta_{j}$.
Cylinder decompositions, bipartite graphs and harmonic functions. Let $M$ be a
flat surface having horizontal and vertical cylinder decompositions
$\mathcal{H}=\\{H_{i}\\}_{i\in I}$ and $\mathcal{V}=\\{V_{j}\\}_{j\in J}$ such
that each cylinder has modulus $\frac{1}{\lambda}$ for some $\lambda>0$. For
every $i\in I$ let $\alpha_{i}$ be the core curve of $H_{i}$ and for every
$j\in J$ let $\beta_{j}$ be the core curve of $V_{j}$. Then
$\alpha=\\{\alpha_{i}\\}_{i\in I}$ and $\beta=\\{\beta_{j}\\}_{j\in J}$ are
multicurves whose union fills $M$. Let $\textbf{h}:I\sqcup J\to\mathbb{R}_{>0}$
be the function which to an index associates the height of the corresponding
cylinder. Then $A\textbf{h}=\lambda\textbf{h}$ where $A$ is the adjacency
operator of the graph $\mathcal{G}(\alpha\cup\beta)$, that is:
(1) $(A\textbf{h})(v):=\sum_{w\sim v}\textbf{h}(w)$
where the sum above is taken over the edges $\\{v,w\\}$ having $v$ as one
endpoint, that is, the summand $\textbf{h}(w)$ appears as many times as there
are edges between the vertices $v$ and $w$.
###### Definition 3.4.
Let $\mathcal{G}=(V,E)$ be a graph with vertices of finite degree and let
$A:\mathbb{R}^{V}\to\mathbb{R}^{V}$ be as in (1). A function
$\textbf{h}:V\to\mathbb{R}$ that satisfies $A\textbf{h}=\lambda\textbf{h}$ is
called a _$\lambda$-harmonic function_ of $\mathcal{G}$.
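As an illustration of this definition (the example anticipates the
configuration graph of Figure 4), let $\mathcal{G}$ be the bi-infinite line
graph with vertex set $\mathbb{Z}$ and one edge between $n$ and $n+1$ for every
$n\in\mathbb{Z}$. For $\textbf{h}(n)=r^{n}$ with $r>0$ we get
$(A\textbf{h})(n)=r^{n-1}+r^{n+1}=(r^{-1}+r)\,r^{n},$
so $\textbf{h}$ is $\lambda$-harmonic exactly when $r+r^{-1}=\lambda$.
Positive solutions $r=\frac{\lambda\pm\sqrt{\lambda^{2}-4}}{2}$ exist
precisely for $\lambda\geq 2$ (the constant function, $r=1$, appears at
$\lambda=2$), which is consistent with the threshold $\lambda_{0}\geq 2$
appearing in Theorem 3.5 below.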
In summary: the existence of a horizontal and a vertical cylinder
decomposition where each cylinder has modulus $\frac{1}{\lambda}$ implies the
existence of a _positive_ $\lambda$-harmonic function of the configuration
graph of the multicurves given by the core curves of the cylinders in the
decomposition.
The idea to generalize Thurston-Veech’s construction for infinite-type
surfaces is to reverse this process: given a pair of multicurves $\alpha$ and
$\beta$ whose union fills $S$, every positive $\lambda$-harmonic function of
$\mathcal{G}(\alpha\cup\beta)$ can be used to construct horizontal
and vertical cylinder decompositions of $S$ where all cylinders have the same
modulus.
###### Theorem 3.5 (Hooper-Thurston-Veech construction).
Let $S$ be an infinite-type surface. Suppose that there exist two multicurves
$\alpha=\\{\alpha_{i}\\}_{i\in I}$ and $\beta=\\{\beta_{j}\\}_{j\in J}$
filling $S$ such that:
1. (1)
there is a uniform upper bound on the degree of the vertices of the
configuration graph $\mathcal{G}(\alpha\cup\beta)$ and
2. (2)
every connected component of $S\setminus\alpha\cup\beta$ is a polygon
with a finite number of sides (that is, each component of
$S\setminus\alpha\cup\beta$ is a disc whose boundary consists of finitely many
subarcs of curves in $\alpha\cup\beta$).
Then there exists $\lambda_{0}\geq 2$ such that for every
$\lambda\geq\lambda_{0}$ there exists a positive $\lambda$-harmonic function h
on $\mathcal{G}(\alpha\cup\beta)$ which defines a flat surface structure
$M=M(\alpha,\beta,\textbf{h})$ on $S$ admitting horizontal and vertical
cylinder decompositions $\mathcal{H}=\\{H_{i}\\}_{i\in I}$ and
$\mathcal{V}=\\{V_{j}\\}_{j\in J}$ where each cylinder has modulus
$\frac{1}{\lambda}$. Moreover, for every $i\in I$ and $j\in J$ the core curves
of $H_{i}$ and $V_{j}$ are $\alpha_{i}$ and $\beta_{j}$, respectively. In
particular, we have (right) multitwists $T_{\alpha}$ and $T_{\beta}$ in
$\mathrm{Aff}(M)$ which fix the boundary of each cylinder in $\mathcal{H}$ and
$\mathcal{V}$, respectively. For each $\lambda\geq\lambda_{0}$, the
derivatives of these multitwists define a representation $\rho:\langle
T_{\alpha},T_{\beta}\rangle\to\mathrm{PSL}(2,\mathbb{R})$ given by:
$T_{\alpha}\to\begin{pmatrix}1&\lambda\\\ 0&1\end{pmatrix},\qquad T_{\beta}\to\begin{pmatrix}1&0\\\ -\lambda&1\end{pmatrix}.$
###### Remark 3.6.
Theorem 3.5 is a particular case of a more general version of Hooper-Thurston-
Veech’s construction due to V. Delecroix and the second author whose final
form was achieved after discussions with the first author, see [DHV19]. We do
not need this more general version for the proof of our main results. On the
other hand, the second assumption on the multicurves in Theorem 3.5 makes the
proof simpler than in the general case and for this reason we decided to
sketch it. Many of the key ideas in the proof of the result above and its
general version appear already in the work of P. Hooper [Hooper15]. The main
difference is that P. Hooper starts with a bipartite infinite graph with
uniformly bounded valence and then, using a positive harmonic function $h$ on
that graph, constructs a translation surface. We take as input an infinite-
type topological surface $S$ and a pair of filling multicurves to construct a
flat surface structure on $S$, which is not _a priori_ a translation surface
structure.
Proof of Theorem 3.5. The union $\alpha\cup\beta$ of the multicurves $\alpha$
and $\beta$ defines a graph embedded in $S$: the vertices are points in
$\bigcup_{(i,j)\in I\times J}\alpha_{i}\cap\beta_{j}$ and edges are the
segments forming the connected components of
$\alpha\cup\beta\setminus\bigcup_{(i,j)\in I\times J}\alpha_{i}\cap\beta_{j}$.
Abusing notation we write $\alpha\cup\beta$ to refer to this graph. It is
important not to confuse the (geometric) graph $\alpha\cup\beta$ with the
(abstract) configuration graph $\mathcal{G}(\alpha\cup\beta)$. To define the
flat structure $M$ on $S$ we consider a dual graph $(\alpha\cup\beta)^{*}$
defined as follows. If $S$ has no punctures then $(\alpha\cup\beta)^{*}$ is
just the dual graph of $\alpha\cup\beta$. If $S$ has punctures (we think
of punctures also as isolated ends or points at infinity), we make the
following convention to define the vertices of $(\alpha\cup\beta)^{*}$: for
every connected component $D$ of $S\setminus\alpha\cup\beta$ homeomorphic to a
disc choose a unique point $v_{D}$ inside the connected component. If $D$ is a
punctured disc, then choose $v_{D}$ to be the puncture.
Figure 2. The graph $(\alpha\cup\beta)^{*}$.
The points $v_{D}$ chosen above are the vertices of $(\alpha\cup\beta)^{*}$.
Vertices in this graph are joined by an edge in $S$ if the closures of the
corresponding connected components of $S\setminus\alpha\cup\beta$ share an
edge of $\alpha\cup\beta$. Edges are chosen to be pairwise disjoint. Remark
that $(\alpha\cup\beta)^{*}$ might have loops. See Figure 2.
Given that $\alpha\cup\beta$ fills, every connected component of
$S\setminus(\alpha\cup\beta)^{*}$ is a topological quadrilateral which
contains a unique vertex of $\alpha\cup\beta$. Hence there is a well-defined
bijection between edges in the abstract graph $\mathcal{G}(\alpha\cup\beta)$
and the set of these quadrilaterals. This way, for every edge $e\in
E(\mathcal{G}(\alpha\cup\beta))$ we denote by $R_{e}$ the closure in $S$ of
the corresponding topological quadrilateral with the convention to add to
$R_{e}$ vertices $v_{D}$ corresponding to punctures in $S$.
Note that there are only two sides of $R_{e}$ intersecting the multicurve
$\alpha$, which henceforth are called _vertical sides_. The other two sides
are in consequence called _horizontal_ , see Figure 3.
We now build a flat surface structure on $S$ by declaring the topological
quadrilaterals $R_{e}$ of the dual graph $(\alpha\cup\beta)^{*}$ to be
Euclidean rectangles. Given that there is a uniform upper bound on the degree
of the vertices of the configuration graph $\mathcal{G}(\alpha\cup\beta)$
there exists (for a more detailed discussion on $\lambda$-harmonic functions
we recommend Appendix C in [Hooper15] and the references therein)
$\lambda_{0}\geq 2$ such that for every $\lambda\geq\lambda_{0}$ there exists
a positive $\lambda$-harmonic function
$\textbf{h}:\mathcal{G}(\alpha\cup\beta)\to\mathbb{R}_{>0}$. We use this
function to define compatible heights of horizontal and vertical cylinders.
More precisely, let us define the maps:
$p_{\alpha}:E(\mathcal{G}(\alpha\cup\beta))\to
V(\mathcal{G}(\alpha\cup\beta))\qquad\text{and}\qquad
p_{\beta}:E(\mathcal{G}(\alpha\cup\beta))\to V(\mathcal{G}(\alpha\cup\beta))$
which to an edge $e$ of the configuration graph $\mathcal{G}(\alpha\cup\beta)$
associate its endpoints $p_{\alpha}(e)$ in $I$ and $p_{\beta}(e)$ in $J$. The
desired flat structure is defined by declaring $R_{e}$ to be the rectangle
$[0,\textbf{h}\circ p_{\beta}(e)]\times[0,\textbf{h}\circ p_{\alpha}(e)]$, see
Figure 3 (for a formal description of how to identify $R_{e}$ with this
rectangle we refer the reader to [DHV19]).
Figure 3. Transforming topological rectangles into Euclidean rectangles.
We denote the resulting flat surface by $M(\alpha,\beta,\textbf{h})$. Remark
that by construction a vertex $v_{D}$ of the dual graph $(\alpha\cup\beta)^{*}$
of valence $k$ defines a conical singularity of angle $\frac{\pi k}{2}$ in the
metric completion of $M(\alpha,\beta,\textbf{h})$. Given that $k$ is always an
even number, we have that $M(\alpha,\beta,\textbf{h})$ is a half-translation
surface (_i.e._ given by a quadratic differential) when some vertex has
valence $k=2(2n-1)$ for some $n\in\mathbb{Z}_{\geq 1}$.
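For example, if $D$ is a quadrilateral component then $v_{D}$ has valence
$k=4$ and the angle $\frac{\pi k}{2}=2\pi$ makes $v_{D}$ a regular point,
whereas a hexagonal component ($k=6$) produces a conical singularity of angle
$3\pi$, an odd multiple of $\pi$, around which the transition maps are
necessarily half-translations.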
Now, for every $i\in I$, the curve $\alpha_{i}$ is the core curve of the
horizontal cylinder $H_{i}:=\cup_{e\in p_{\alpha}^{-1}(i)}R_{e}$. Because h is
$\lambda$-harmonic we have
$\sum_{e\in p_{\alpha}^{-1}(i)}\textbf{h}(p_{\beta}(e))=\sum_{j\sim
i}\textbf{h}(j)=\lambda\textbf{h}(i).$
This equation says that the circumference $\sum_{e\in
p_{\alpha}^{-1}(i)}\textbf{h}(p_{\beta}(e))$ of $H_{i}$ is $\lambda$ times its
height $\textbf{h}(i)$, hence the modulus of $H_{i}$ is equal to
$\frac{1}{\lambda}$. The same computation with $\beta_{j}$ shows that the
vertical cylinders $V_{j}:=\cup_{e\in p_{\beta}^{-1}(j)}R_{e}$ have core curve
$\beta_{j}$ and modulus $\frac{1}{\lambda}$.∎
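As a concrete instance of this computation, take the line configuration graph
of Figure 4 (B) with the constant function $\textbf{h}\equiv 1$ and
$\lambda=2$: every rectangle $R_{e}$ is a unit square and every vertex has
degree $2$, so each horizontal and each vertical cylinder is made of two unit
squares and has circumference $2$, height $1$ and modulus
$\frac{1}{2}=\frac{1}{\lambda}$; the resulting surface is the infinite
staircase of Remark 3.7 below.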
###### Remark 3.7.
As said before, the Hooper-Thurston-Veech construction can be applied to more
general pairs of multicurves $\alpha,\beta$. Consider for example the case in
the Loch Ness monster depicted in Figure 4: here the graph
$\mathcal{G}(\alpha\cup\beta)$ has finite valence but there exist four
connected components $\\{C_{i}\\}_{i=1}^{4}$ of $S\setminus\alpha\cup\beta$
which are infinite polygons, that is, whose boundary is formed by infinitely
many segments belonging to curves in $\alpha$ and in $\beta$. In this
situation the convention is to consider vertices in the dual graph
$(\alpha\cup\beta)^{*}$ of infinite degree as points at infinity (that is, not
in $S$). With this convention the Hooper-Thurston-Veech construction produces a
translation surface structure on $S$, because each $\partial C_{i}$ is
connected. In Figure 4 we illustrate the case of a $2$-harmonic function; the
resulting flat surface is a translation surface known as the infinite
staircase.
(a) Two oriented multicurves $\alpha$ (in blue) and $\beta$ (in red) in the
Loch Ness monster for which the Hooper-Thurston-Veech construction produces
the infinite staircase.
(b) The graph $\mathcal{G}(\alpha\cup\beta)$
Figure 4. The infinite staircase as a Hooper-Thurston-Veech surface.
_Proof of Theorem 1.7_. Let $\lambda\geq 2$ and consider the subgroup of
$\mathrm{SL}(2,\mathbb{R})$:
(2) $G_{\lambda}:=\langle\left(\begin{smallmatrix}1&\lambda\\\
0&1\end{smallmatrix}\right),\left(\begin{smallmatrix}1&0\\\
-\lambda&1\end{smallmatrix}\right)\rangle$
This group is free and its elements are matrices of the form
$\left(\begin{smallmatrix}1+k_{11}\lambda^{2}&k_{12}\lambda\\\
k_{21}\lambda&1+k_{22}\lambda^{2}\end{smallmatrix}\right)$,
$k_{ij}\in\mathbb{Z}$, such that the determinant is 1 and
$|\frac{1+k_{11}\lambda^{2}}{k_{12}\lambda}|$ does not belong to the interval
$(t^{-1},t)$, where $t=\frac{1}{2}(\lambda+\sqrt{\lambda^{2}-4})$, see
[Brenner55]. On the other hand, since
$\\{-\frac{\lambda}{2}<\Re(z)\leq\frac{\lambda}{2}\\}\cap\\{|z+\frac{1}{\lambda}|>\frac{1}{\lambda}\\}\cap\\{|z-\frac{1}{\lambda}|\geq\frac{1}{\lambda}\\}\subset\mathbb{H}^{2}$
is a fundamental domain in the hyperbolic plane for $G_{\lambda}$, this group
has no elliptic elements. Moreover, if $\lambda>2$ there are only two
conjugacy classes of parabolics (corresponding to the generators of
$G_{\lambda}$) and if $\lambda=2$ then $\left(\begin{smallmatrix}1&\lambda\\\
0&1\end{smallmatrix}\right)\left(\begin{smallmatrix}1&0\\\
-\lambda&1\end{smallmatrix}\right)=\left(\begin{smallmatrix}1-\lambda^{2}&\lambda\\\
-\lambda&1\end{smallmatrix}\right)$ and
$\left(\begin{smallmatrix}1&-\lambda\\\
0&1\end{smallmatrix}\right)\left(\begin{smallmatrix}1&0\\\
\lambda&1\end{smallmatrix}\right)=\left(\begin{smallmatrix}1-\lambda^{2}&-\lambda\\\
\lambda&1\end{smallmatrix}\right)$ determine, together with the generators of
$G_{\lambda}$, the only 4 conjugacy classes of parabolics in $G_{\lambda}$.
Remark that $\left(\begin{smallmatrix}1-\lambda^{2}&\lambda\\\
-\lambda&1\end{smallmatrix}\right)$ and
$\left(\begin{smallmatrix}1-\lambda^{2}&-\lambda\\\
\lambda&1\end{smallmatrix}\right)$ are hyperbolic if $\lambda>2$.
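The dichotomy in the last sentence is a one-line trace computation, which we
include for the reader's convenience:
$\mathrm{tr}\begin{pmatrix}1-\lambda^{2}&\lambda\\\ -\lambda&1\end{pmatrix}=2-\lambda^{2},$
and a non-trivial element of $\mathrm{PSL}(2,\mathbb{R})$ is parabolic when
its trace has absolute value $2$ and hyperbolic when it is larger. For
$\lambda\geq 2$ we have $|2-\lambda^{2}|=\lambda^{2}-2$, which equals $2$
exactly at $\lambda=2$ and is larger than $2$ for every $\lambda>2$.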
If $\alpha$ and $\beta$ are the multicurves depicted in Figure 4 (A) on the
Loch Ness monster then $\mathcal{G}(\alpha\cup\beta)$ is the infinite
bipartite graph on Figure 4 (B). Let us index the vertices of this graph by
the integers as in the Figure so that $\textbf{h}_{2}(n)=1$, for all
$n\in\mathbb{Z}$, is a positive $2$-harmonic function on
$\mathcal{G}(\alpha\cup\beta)$. If $\lambda>2$ and
$r_{+}=\frac{\lambda+\sqrt{\lambda^{2}-4}}{2}$, the positive function
$\textbf{h}_{\lambda}(n)=r_{+}^{n}$, $n\in\mathbb{Z}$, is
$\lambda$-harmonic on $\mathcal{G}(\alpha\cup\beta)$. The desired family of
translation surfaces $\\{M_{\lambda}\\}_{\lambda\in[2,+\infty)}$ is obtained
by applying the Hooper-Thurston-Veech construction to the multicurves $\alpha$,
$\beta$ and the family of positive $\lambda$-harmonic functions
$\\{\textbf{h}_{\lambda}\\}_{\lambda\in[2,+\infty)}$. The desired class
$f\in\mathrm{Mod}(S)$ is given by the product of (right) multitwists
$T_{\alpha}T_{\beta}$. No positive power of $f$ fixes an isotopy class of a
simple closed curve in $S$ because, on the one hand, if $\lambda=2$ the
translation flow in the eigendirection of the parabolic matrix
$D\tau_{2}^{-1}(f)$ decomposes $M_{2}$ into two strips and, on the other hand,
if $\lambda>2$ then $D\tau_{\lambda}^{-1}(f)$ is hyperbolic. ∎
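For concreteness: at $\lambda=2$ the computation above gives
$D\tau_{2}^{-1}(f)=\begin{pmatrix}-3&2\\\ -2&1\end{pmatrix},$
a parabolic matrix with trace $-2$, double eigenvalue $-1$ and eigendirection
spanned by $(1,1)$; the two strips mentioned in the proof are thus the strips
of the slope-one translation flow on the infinite staircase $M_{2}$.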
###### Remark 3.8.
There are only 3 infinite graphs which admit $2$-harmonic functions; these are
depicted in Figure 5 together with their corresponding positive $2$-harmonic
functions (which are unique up to rescaling). None of them comes from a pair
of multicurves satisfying (2) in Theorem 3.5.
Figure 5. Graphs with $2$-harmonic functions.
Renormalizable directions. The main results of Hooper’s work [Hooper15] deal
with the dynamical properties of the translation flow in _renormalizable
directions_.
###### Definition 3.9.
Consider the action of $G_{\lambda}$ as defined in (2) by homographies on the
real projective line $\mathbb{RP}^{1}$. We say that a direction
$\theta\in\mathbb{R}/2\pi\mathbb{Z}$ is $\lambda$-renormalizable if its
projectivization lies in the limit set of $G_{\lambda}$ and is not an
eigendirection of any matrix conjugate in $G_{\lambda}$ to a matrix of the
form:
$\begin{pmatrix}1&\lambda\\\ 0&1\end{pmatrix},\qquad\begin{pmatrix}1&0\\\ \lambda&1\end{pmatrix}\qquad\text{or}\qquad\begin{pmatrix}1&0\\\ -\lambda&1\end{pmatrix}\cdot\begin{pmatrix}1&\lambda\\\ 0&1\end{pmatrix}.$
We use two of Hooper’s results in the proof of Theorem 1.1. Recall that in
Hooper’s work one takes as input an infinite bipartite graph and a positive
$\lambda$-harmonic function on this graph to produce a translation surface.
###### Theorem 3.10 (Theorem 6.2, [Hooper15]).
Let $M$ be a translation surface obtained from an infinite bipartite graph as
in [_Ibid_.] using a positive $\lambda$-harmonic function and let $\theta$ be
a $\lambda$-renormalizable direction. Then the translation flow
$F_{\theta}^{t}$ on $M$ does not have saddle connections.
###### Theorem 3.11 (Theorem 6.4, [Hooper15]).
Let $M$ be a translation surface obtained from an infinite bipartite graph as
in [_Ibid_.] using a positive $\lambda$-harmonic function and let $\theta$ be
a $\lambda$-renormalizable direction. Then the translation flow
$F_{\theta}^{t}$ is conservative, that is, given $A\subset M$ of positive
measure and any $T>0$, for Lebesgue almost every $x\in M$ there is a $t>T$
such that $F_{\theta}^{t}(x)\in A$
## 4\. Proof of results
### 4.1. Proof of Theorem 1.1
The proof is divided into two parts. In the first part we use the Hooper-
Thurston-Veech construction (see Section 3) to find two transverse measured
$f$-invariant foliations $\mathcal{F}^{u}$ and $\mathcal{F}^{s}$ on $S$ for
which $p$ is a singular point and for which each foliation has $m$
separatrices based at $p$. We prove that each separatrix based at $p$ is dense
in $S$. Then, we consider a hyperbolic metric on $S$ of the first kind
(allowing us to talk about the completed ray graph $\mathcal{R}(S;p)$). We stretch each
separatrix of $\mathcal{F}^{u}$ and $\mathcal{F}^{s}$ based at $p$ to a
geodesic with respect to this metric. This defines two sets $\Gamma^{+}$ and
$\Gamma^{-}$ of geodesics, each having cardinality $m$. In the second part of
the proof, we show that $\Gamma^{+}$ and $\Gamma^{-}$ are the only cliques of
high-filling rays fixed by $f$ in the Gromov boundary of the loop graph.
_Flat structures_. We use the Hooper-Thurston-Veech construction (Section 3)
for this part of the proof. Let $\alpha$ and $\beta$ be two multicurves
satisfying the hypothesis of Theorem 1.1. Fix
$\mathbf{h}:\mathcal{G}(\alpha\cup\beta)\to\mathbb{R}_{>0}$ a positive
$\lambda$-harmonic function on the configuration graph
$\mathcal{G}(\alpha\cup\beta)$. Let $M=M(\alpha,\beta,\mathbf{h})$ be the flat
structure on $S$ given by the Hooper-Thurston-Veech construction and
$\rho:\langle T_{\alpha},T_{\beta}\rangle\to\mathrm{PSL}(2,\mathbb{R})$
the corresponding representation. Here, we have chosen $p$ as one of the
vertices of the dual graph $(\alpha\cup\beta)^{*}$ (see the proof of Theorem
3.5) and therefore it makes sense to consider the classes that the affine
multitwists $T_{\alpha}$, $T_{\beta}$ define in $\mathrm{Mod}(S;p)$. We abuse
notation and denote also by $T_{\alpha}$, $T_{\beta}$ these classes.
The eigenspaces of the hyperbolic matrix $\rho(f)$ define two transverse
($f$-invariant) measured foliations $(\mathcal{F}^{u},\mu_{u})$ and
$(\mathcal{F}^{s},\mu_{s})$ (for unstable and stable, respectively) on $M$.
Moreover, we have that
$f\cdot(\mathcal{F}^{u},\mu_{u})=(\mathcal{F}^{u},\eta\mu_{u})$ and
$f\cdot(\mathcal{F}^{s},\mu_{s})=(\mathcal{F}^{s},\eta^{-1}\mu_{s})$, where
$\eta>1$ is (up to sign) an eigenvalue of $\rho(f)$. For simplicity we
abbreviate the notation for these foliations by $\mathcal{F}^{u}$ and
$\mathcal{F}^{s}$.
_The set $\mathfrak{V}$_. Recall that $M=M(\alpha,\beta,\textbf{h})$ is
constructed by glueing a family of rectangles $\\{R_{e}\\}_{e\in E}$, where
$E=E(\mathcal{G}(\alpha\cup\beta))$ is the set of edges of the configuration
graph $\mathcal{G}(\alpha\cup\beta)$, along their edges using translations and
half-translations. By the way the Hooper-Thurston-Veech construction is
carried out, sometimes the corners of these rectangles are not part of the
surface $S$: this is the case when there are connected components of
$S\setminus\alpha\cup\beta$ which are punctured discs. However, every corner
of a rectangle $\\{R_{e}\\}_{e\in E}$ belongs to the metric completion
$\widehat{M}$ of $M$ (w.r.t. the natural flat metric). We define
$\mathfrak{V}\subset\widehat{M}$ to be the set of points that are corners of
rectangles in $\\{R_{e}\\}_{e\in E}$ (after glueings). Remark that since all
connected components of $S\setminus\alpha\cup\beta$ are (topological) polygons
with a uniformly bounded number of sides, points in $\mathfrak{V}$ are
regular points or conical singularities of $\widehat{M}$ whose total angle is
uniformly bounded. Moreover the set $\mathrm{Fix}(f)$ of fixed points of the
continuous extension of $f$ to $\widehat{M}$ contains $\mathfrak{V}$. Indeed,
if $\mathcal{H}=\\{H_{i}\\}$ and $\mathcal{V}=\\{V_{j}\\}$ denote the
horizontal and vertical (maximal) cylinder decompositions of $M$, then
$\mathfrak{V}=\cup_{(i,j)\in I\times J}(\partial H_{i}\cap\partial V_{j})$, where the
boundary of each cylinder is taken in the metric completion $\widehat{M}$. The
claim follows from the fact that for every $i\in I$ and $j\in J$, $T_{\alpha}$
fixes $\partial H_{i}$ and $T_{\beta}$ fixes $\partial V_{j}$.
For each $q\in\mathfrak{V}$ we denote by $\mathrm{Sep}_{q}(*)$ the set of
leaves of $*\in\\{\mathcal{F}^{u},\mathcal{F}^{s}\\}$ based at $q$. We call
such a leaf a _separatrix based at $q$_. Remark that if the total angle of the
flat structure $M$ around $q$ is $k\pi$ then
$|\mathrm{Sep}_{q}(\mathcal{F}^{u})|=|\mathrm{Sep}_{q}(\mathcal{F}^{s})|=k$.
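For instance, if $q$ is a regular point of $\widehat{M}$, then the total angle
is $2\pi$, so $k=2$ and the two separatrices of $\mathcal{F}^{u}$ based at $q$
are the two half-leaves of the unique leaf of $\mathcal{F}^{u}$ through $q$.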
The following fact is essential for the second part of the proof.
###### Proposition 4.1.
Let $q\in\mathfrak{V}$. Then any separatrix in
$\mathrm{Sep}_{q}(\mathcal{F}^{u})\cup\mathrm{Sep}_{q}(\mathcal{F}^{s})$ is
dense in $M$.
###### Proof.
We consider first the case when $M$ is a translation surface. At the end of
the proof we deal with the case when $M$ is a half-translation surface.
We show that any separatrix in $\mathrm{Sep}_{q}(\mathcal{F}^{u})$ is dense.
The arguments for separatrices in $\mathrm{Sep}_{q}(\mathcal{F}^{s})$ are
analogous.
_Claim_ : $\cup_{q\in\mathfrak{V}}\mathrm{Sep}_{q}(\mathcal{F}^{u})$, the
union of all separatrices of $\mathcal{F}^{u}$, is dense in $M$. To prove this
claim we strongly use the work of Hooper [Hooper15] (Hooper only deals
with the case when $M$ is a translation surface; this is why when $M$ is a
half-translation surface we consider its orientation double cover). In
particular, we use the fact that leaves in $\mathcal{F}^{u}$ are parallel to a
renormalizable direction, see Definition 3.9. We proceed by contradiction by
assuming that the complement of the closure of
$\cup_{q\in\mathfrak{V}}\mathrm{Sep}_{q}(\mathcal{F}^{u})$ in $\widehat{M}$ is non-empty. Let $U$ be
a connected component of this complement. Then $U$ is
$\mathcal{F}^{u}$-invariant. If $U$ contains a closed leaf of
$\mathcal{F}^{u}$ then it has to be a cylinder, but this cannot happen because
there are no saddle connections parallel to renormalizable directions, see
Theorem 6.2 in [Hooper15]. Then $U$ contains a transversal to
$\mathcal{F}^{u}$ to which leaves never return. In other words, $U$ contains
an infinite strip, _i.e._ a set which (up to rotation) is isometric to
$(a,b)\times\mathbb{R}$ for some $a<b$. This is impossible since the
translation flow on $M$ in a renormalizable direction is conservative (in the
sense recalled in Theorem 3.11 above), see Theorem 6.4 in [Hooper15].
The claim follows.
We strongly recommend that the reader use Figure 6 as a guide for the next
paragraphs.
Figure 6.
Henceforth if $\gamma$ is a separatrix of $\mathcal{F}^{u}$, we denote by
$\gamma(t)$, $t>0$ the parametrization for which $\lim_{t\to
0}\gamma(t)\in\mathfrak{V}$ and such that $|\gamma^{\prime}|=1$ (w.r.t. the
flat metric on $M$).
For each horizontal cylinder $H_{k}$ in $M$ and
$\xi\in\mathfrak{V}\cap\partial H_{k}$ we denote by
$\gamma^{u}_{\xi,H_{k}}\subset M$ (respectively $\gamma^{s}_{\xi,H_{k}}$) the
unique separatrix of $\mathcal{F}^{u}$ (respect. of $\mathcal{F}^{s}$) based
at $\xi$ within $H_{k}$, that is, for which $\gamma^{u}_{\xi,H_{k}}(t)\in
H_{k}$ for all $t$ in a small neighbourhood of $0$. For a vertical cylinder
$V_{l}$, $\gamma^{u}_{\xi,V_{l}}$ and $\gamma^{s}_{\xi,V_{l}}$ are defined in
a similar way. Let $\mathfrak{V}^{b}(H_{k})$ and $\mathfrak{V}^{t}(H_{k})$
denote the points in $\mathfrak{V}\cap\partial H_{k}$ in the bottom and in the
top connected component of $\partial H_{k}$ respectively (we pull back the
standard orientation of the Euclidean plane to $M$ to make sense of the
east-west and bottom-top sides of a cylinder); and for any vertical cylinder
$V_{l}$ let $\mathfrak{V}^{e}(V_{l})$ and $\mathfrak{V}^{w}(V_{l})$ denote the
points in $\mathfrak{V}\cap\partial V_{l}$ in the east and west connected
component of $\partial V_{l}$ respectively.
Without loss of generality we suppose that
$q\in\mathfrak{V}^{b}(H_{k})\cap\mathfrak{V}^{e}(V_{l})$. We denote by
$\omega(\gamma_{q,H_{k}}^{u})$ the $\omega$-limit
set of $\gamma_{q,H_{k}}^{u}$.
_Claim_ : the union of all separatrices of $\mathcal{F}^{u}$ based at points
in $\partial H_{k}\cup\partial V_{l}$ within $H_{k}$ and $V_{l}$ respectively
(3) $\left(\bigcup_{\xi\in\mathfrak{V}\cap\partial
H_{k}}\gamma^{u}_{\xi,H_{k}}\right)\cup\left(\bigcup_{\xi\in\mathfrak{V}\cap\partial
V_{l}}\gamma^{u}_{\xi,V_{l}}\right)$
is contained in $\omega(\gamma_{q,H_{k}}^{u})$.
_Proof of claim_. Remark that since $H_{k}$ is tiled by rectangles
corresponding to points of intersection of the core curve $\alpha_{k}$ with
curves in $\beta$, $|\mathfrak{V}^{b}(H_{k})|=|\mathfrak{V}^{t}(H_{k})|$ and
for each $\xi\in\mathfrak{V}^{b}(H_{k})$ there is exactly one point in
$\mathfrak{V}^{t}(H_{k})$ just above. Hence, using the east-west orientation
of $H_{k}$, we can order the elements of
$\mathfrak{V}^{b}(H_{k})\cup\mathfrak{V}^{t}(H_{k})$ cyclically: we write
$\mathfrak{V}^{b}(H_{k})=\\{q_{j}^{b}\\}_{j\in\mathbb{Z}/N\mathbb{Z}}$,
$\mathfrak{V}^{t}(H_{k})=\\{q_{j}^{t}\\}_{j\in\mathbb{Z}/N\mathbb{Z}}$ for
some $N\geq 1$. The sets
$\mathfrak{V}^{e}(V_{l})=\\{q_{j}^{e}\\}_{j\in\mathbb{Z}/M\mathbb{Z}}$,
$\mathfrak{V}^{w}(V_{l})=\\{q_{j}^{w}\\}_{j\in\mathbb{Z}/M\mathbb{Z}}$, for
some $M\geq 1$, are defined in a similar way.
We suppose that the labeling is such that above $q_{j}^{b}$ lies $q_{j}^{t}$
for all $j\in\mathbb{Z}/N\mathbb{Z}$, and that $q=q_{0}^{b}=q_{0}^{e}$. Recall
that $DT_{\alpha}=\left(\begin{smallmatrix}1&\lambda\\\
0&1\end{smallmatrix}\right)$, $DT_{\beta}^{-1}=\left(\begin{smallmatrix}1&0\\\
\lambda&1\end{smallmatrix}\right)$, hence $Df=\left(\begin{smallmatrix}a&b\\\
c&d\end{smallmatrix}\right)$, with $a,b,c,d\in\mathbb{R}_{>0}$. In particular
$Df$ sends the positive quadrant $\mathbb{R}_{x\geq 0,y\geq 0}$ into itself.
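For example, for the shortest word in which both generators appear,
$f=T_{\alpha}T_{\beta}^{-1}$, one gets
$Df=\begin{pmatrix}1&\lambda\\\ 0&1\end{pmatrix}\begin{pmatrix}1&0\\\ \lambda&1\end{pmatrix}=\begin{pmatrix}1+\lambda^{2}&\lambda\\\ \lambda&1\end{pmatrix},$
whose entries are all positive and whose unstable eigendirection, of slope
$\frac{\sqrt{\lambda^{2}+4}-\lambda}{2}>0$, indeed lies in the interior of the
positive quadrant.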
If we suppose, without loss of generality, that the unstable eigenspace of
$Df$ (without its zero) lies in the interior of $\mathbb{R}_{x\geq 0,y\geq
0}\cup\mathbb{R}_{x\leq 0,y\leq 0}$, then the stable eigenspace of $Df$
(without its zero) has to lie in the interior of $\mathbb{R}_{x\geq 0,y\leq
0}\cup\mathbb{R}_{x\leq 0,y\geq 0}$. Hence, for every
$j\in\mathbb{Z}/N\mathbb{Z}$ we have that $\gamma^{u}_{q_{j}^{b},H_{k}}$
intersects $\gamma^{s}_{\xi,H_{k}}$, for every
$\xi\in\\{q_{j}^{t},q_{j+1}^{b}\\}$ and
$\gamma^{s}_{q_{j+1}^{t},V_{l^{\prime}}}$, where $V_{l^{\prime}}$ is the
vertical cylinder intersecting $H_{k}$ and having
$\\{q_{j}^{b},q_{j}^{t},q_{j+1}^{b},q_{j+1}^{t}\\}$ in its
boundary (remark that in $M$ these points need not be all different
from each other; for example $q_{j}^{b}=q_{j}^{t}$ and
$q_{j+1}^{b}=q_{j+1}^{t}$ if the core curve of $V_{l^{\prime}}$ only
intersects the core curve of $H_{k}$; in any case the claims remain valid).
From Figure 6 we can see that some of these points of intersection are
actually in $H_{k}\cup V_{l^{\prime}}$. By applying repeatedly $f$ to all
these points of intersection of separatrices we obtain that
$\xi\in\omega(\gamma_{q_{j}^{b},H_{k}}^{u})$ for every
$j\in\mathbb{Z}/N\mathbb{Z}$ and
$\xi\in\\{q_{j+1}^{b},q_{j}^{t},q_{j+1}^{t}\\}$. This implies that
$\gamma^{u}_{\xi,H_{k}}\subset\omega(\gamma_{q_{j}^{b},H_{k}}^{u})$ for every
$j\in\mathbb{Z}/N\mathbb{Z}$ and
$\xi\in\\{q_{j+1}^{b},q_{j}^{t},q_{j+1}^{t}\\}$. In particular, we get that
$\omega(\gamma_{q=q_{0}^{b},H_{k}}^{u})$ contains $\gamma^{u}_{\xi,H_{k}}$ for
every $\xi\in\\{q_{1}^{b},q_{1}^{t},q_{0}^{t}\\}$. As a consequence, we have
that $\omega(\gamma_{q,H_{k}}^{u})$ contains
$\omega(\gamma^{u}_{q_{1}^{b},H_{k}})$ (here we are using the
following general principle: if $\gamma_{1},\gamma_{2}$ are trajectories of a
vector field on a translation surface and $\gamma_{1}$ is contained in
$\omega(\gamma_{2})$, then $\omega(\gamma_{1})\subset\omega(\gamma_{2})$),
which in turn contains
$\\{\gamma^{u}_{q_{2}^{b},H_{k}},\gamma^{u}_{q_{1}^{t},H_{k}},\gamma^{u}_{q_{2}^{t},H_{k}}\\}$.
Proceeding inductively we get that $\omega(\gamma_{q,H_{k}}^{u})$ contains
$\bigcup_{\xi\in\mathfrak{V}\cap\partial H_{k}}\gamma^{u}_{\xi,H_{k}}.$
The positivity of the matrix $Df$ and the fact that its unstable eigenspace
lies in $\mathbb{R}_{x\geq 0,y\geq 0}\cup\mathbb{R}_{x\leq 0,y\leq 0}$ also
imply that for every $j\in\mathbb{Z}/M\mathbb{Z}$ the separatrix
$\gamma_{q_{j}^{e},V_{l}}^{u}$ intersects $\gamma_{\xi,V_{l}}^{s}$ for
$\xi\in\\{q_{j}^{w},q_{j+1}^{e},q_{j+1}^{w}\\}$. From here on, the logic to
show that $\omega(\gamma_{q,H_{k}}^{u})$ contains
$\bigcup_{\xi\in\mathfrak{V}\cap\partial V_{l}}\gamma^{u}_{\xi,V_{l}}$ is the
same as the one presented in the preceding paragraph and the claim follows.
The arguments in the proof of the preceding claim are local so they can be
used to show that:
* •
For every $j\in\mathbb{Z}/N\mathbb{Z}$, the limit set
$\omega(\gamma_{q_{j}^{b},H_{k}}^{u})$ contains all separatrices:
$\left(\bigcup_{\xi\in\mathfrak{V}\cap\partial
H_{k}}\gamma^{u}_{\xi,H_{k}}\right)\cup\left(\bigcup_{\xi\in\mathfrak{V}\cap\partial
V_{l^{\prime}}}\gamma^{u}_{\xi,V_{l^{\prime}}}\right)$
where $V_{l^{\prime}}$ is such that $q_{j}^{b}\in\partial H_{k}\cap\partial
V_{l^{\prime}}$.
* •
For every $j\in\mathbb{Z}/M\mathbb{Z}$, the limit set
$\omega(\gamma_{q_{j}^{e},V_{l}}^{u})$ contains all separatrices:
$\left(\bigcup_{\xi\in\mathfrak{V}\cap\partial
V_{l}}\gamma^{u}_{\xi,V_{l}}\right)\cup\left(\bigcup_{\xi\in\mathfrak{V}\cap\partial
H_{k^{\prime}}}\gamma^{u}_{\xi,H_{k^{\prime}}}\right)$
where $H_{k^{\prime}}$ is a horizontal cylinder such that $q_{j}^{e}\in\partial
V_{l}\cap\partial H_{k^{\prime}}$.
If we now denote by $\alpha_{k}\in\alpha$ the core curve of $H_{k}$ then the
preceding discussion can be summarized as follows:
$\omega(\gamma_{q,H_{k}}^{u})$ contains all separatrices of $\mathcal{F}^{u}$
based at points in the boundary of cylinders (and stemming within those
cylinders) whose core curves belong to the link of $\alpha_{k}$ in the
configuration graph $\mathcal{G}(\alpha\cup\beta)$; moreover, if
$\beta_{l}\in{\rm link}(\alpha_{k})$ then $\omega(\gamma_{q,H_{k}}^{u})$
contains all separatrices of $\mathcal{F}^{u}$ based at points in the boundary
of cylinders (and stemming within those cylinders) whose core curves belong to
${\rm link}(\beta_{l})$. This way we can extend the arguments above to the
whole configuration graph $\mathcal{G}(\alpha\cup\beta)$ to conclude that
$\omega(\gamma_{q,H_{k}}^{u})$ contains
$\cup_{q\in\mathfrak{V}}\mathrm{Sep}_{q}(\mathcal{F}^{u})$. Since the latter is
dense in $M$ we conclude that $\gamma_{q,H_{k}}^{u}$ is dense in $M$.
We now suppose that $M$ is a half-translation surface (_i.e._ given by a
quadratic differential). Let $\pi:\widetilde{M}\to M$ be the orientation
double cover of $M$.
We claim that for every horizontal cylinder $H_{i}$ in $M$ the lift
$\pi^{-1}(H_{i})$ is formed by two disjoint isometric copies
$\widetilde{H}_{i_{1}}$, $\widetilde{H}_{i_{2}}$ of $H_{i}$ and these are
maximal horizontal cylinders in $\widetilde{M}$. Recall that if
$p\in\mathfrak{V}$ is a conical singularity of angle $n\pi$, then $\pi^{-1}(p)$
is formed by two conical singularities of angle $n\pi$ if $n$ is even, whereas
if $n$ is odd $\pi^{-1}(p)$ is a conical singularity of angle $2n\pi$. Given
that the multicurves $\alpha$ and $\beta$ are in minimal position, points in
$\mathfrak{V}\cap\partial H_{i}$ which are conical singularities of angle
$\pi$ are actually punctures of $S$. This implies that
$\widetilde{H}_{i_{1}}\cup\widetilde{H}_{i_{2}}$ cannot be merged within
$\widetilde{M}$ into a flat cylinder. The same conclusion holds when
$\mathfrak{V}\cap\partial H_{i}$ has conical singularities of angle different
from $\pi$ and the claim follows. Analogously, we have that for every vertical
cylinder $V_{j}$ in $M$ the lift $\pi^{-1}(V_{j})$ is formed by two disjoint
isometric copies $\widetilde{V}_{j_{1}}$, $\widetilde{V}_{j_{2}}$ of $V_{j}$
and these are maximal vertical cylinders in $\widetilde{M}$. The families
$\widetilde{\mathcal{H}}=\\{\widetilde{H}_{i_{1}},\widetilde{H}_{i_{2}}\\}_{i\in I}$
and
$\widetilde{\mathcal{V}}=\\{\widetilde{V}_{j_{1}},\widetilde{V}_{j_{2}}\\}_{j\in J}$
define horizontal and vertical (maximal) cylinder decompositions of
$\widetilde{M}$, respectively.
Let $\tilde{\alpha}$, $\tilde{\beta}$ denote the lifts to the orientation
double cover of $\alpha$ and $\beta$ respectively. Given that the moduli of
cylinders downstairs and upstairs are the same, we have a pair of affine
multitwists
$\widetilde{T_{\tilde{\alpha}}},\widetilde{T_{\tilde{\beta}}}\in\mathrm{Aff}(\widetilde{M})$
with $DT_{\alpha}=D\widetilde{T_{\tilde{\alpha}}}$ and
$DT_{\beta}=D\widetilde{T_{\tilde{\beta}}}$ in $\mathrm{PSL}(2,\mathbb{R})$.
If we rewrite the word defining $f$ replacing each appearance of $T_{\alpha}$
with $\widetilde{T_{\tilde{\alpha}}}$ and each appearance of $T_{\beta}^{-1}$
with $\widetilde{T_{\tilde{\beta}}}^{-1}$, the result is an affine
automorphism $\tilde{f}$ on $\widetilde{M}$ with $D\tilde{f}=Df$ in
$\mathrm{PSL}(2,\mathbb{R})$. The eigendirections of $D\tilde{f}$ define a pair
of transverse $\tilde{f}$-invariant measured foliations
$\widetilde{\mathcal{F}}^{u}$ and $\widetilde{\mathcal{F}}^{s}$. Moreover, we
have that $\widetilde{\mathcal{F}}^{u}=\pi^{-1}(\mathcal{F}^{u})$ and
$\widetilde{\mathcal{F}}^{s}=\pi^{-1}(\mathcal{F}^{s})$ (_i.e._ the projection
$\pi$ sends leaves to leaves). Let
$\hat{\pi}:\widehat{\widetilde{M}}\to\widehat{M}$ be the continuous extension
of the projection $\pi$ to the metric completions of $M$ and $\widetilde{M}$
and define $\tilde{\mathfrak{V}}:=\hat{\pi}^{-1}(\mathfrak{V})$. Remark that
$\tilde{\mathfrak{V}}=\cup(\partial\widetilde{H_{i}}\cap\partial\widetilde{V_{j}})$,
where the boundaries of the cylinders are taken in $\widehat{\widetilde{M}}$.
As with $M$, for every $q\in\widetilde{\mathfrak{V}}$ we define
$\mathrm{Sep}_{q}(*)$ as the set of leaves of
$*\in\\{\widetilde{\mathcal{F}}^{u},\widetilde{\mathcal{F}}^{s}\\}$ based at
$q$. In this context, the proof of Proposition 4.1 for translation surfaces
then applies to $\widetilde{M}$ and we get the following:
###### Corollary 4.2.
Let $q\in\widetilde{\mathfrak{V}}$. Then any separatrix in
$\mathrm{Sep}_{q}(\widetilde{\mathcal{F}}^{u})\cup\mathrm{Sep}_{q}(\widetilde{\mathcal{F}}^{s})$
is dense in $\widetilde{M}$.
If separatrices are dense upstairs they are dense downstairs. This ends the
proof of Proposition 4.1.
∎
Let now $p\in S$ be the marked puncture and
$\mathrm{Sep}_{p}(\mathcal{F}^{u})=\\{\gamma_{1},\ldots,\gamma_{m}\\}$. We
denote by $S_{\mu}$ a fixed complete hyperbolic structure on $S$ ($\mu$ stands
for the metric) of the first kind and define the completed ray graph $R(S;p)$
with respect to $\mu$. Remark that in $S_{\mu}$ the point $p$ becomes a cusp,
_i.e._ a point at infinity. In what follows we associate to each $\gamma_{i}$
a simple geodesic in $S_{\mu}$ based at $p$. For elements in
$\mathrm{Sep}_{p}(\mathcal{F}^{s})$ the arguments are analogous. The ideas we
present are largely inspired by the work of P. Levitt [Levitt83].
Henceforth $\pi:\mathbb{D}\to S_{\mu}$ denotes the universal cover,
$\Gamma<{\rm PSL(2,\mathbb{R})}$ the Fuchsian group for which
$S_{\mu}=\mathbb{D}/\Gamma$, $\tilde{p}\in\partial\mathbb{D}$ a chosen point
in the lift of the cusp $p$ to $\partial\mathbb{D}$, and
$\tilde{\gamma_{i}}=\tilde{\gamma_{i}}(\tilde{p})$ the unique lift of
$\gamma_{i}$ to $\mathbb{D}$ based at $\tilde{p}$.
_Claim:_ $\tilde{\gamma_{i}}$ converges to two distinct points in
$\partial\mathbb{D}$.
First remark that, since $\gamma_{i}$ is not a loop, it is sufficient to
show that $\tilde{\gamma_{i}}(t)$ converges to a point when considering the
parametrization that begins at $\tilde{p}$ and letting $t\to\infty$. Recall that in
$S$, the point $p$ is in a region bounded by a $2m$-polygon whose sides belong
to closed curves ${\alpha_{1},\ldots,\alpha_{m},\beta_{1},\ldots,\beta_{m}}$
in $\alpha\cup\beta$. Each of these curves is transverse to the leaves of
$\mathcal{F}^{u}$ and of $\mathcal{F}^{s}$. Up to making an isotopy, we can
suppose without loss of generality that the first element in
${\alpha_{1},\ldots,\alpha_{m},\beta_{1},\ldots,\beta_{m}}$ intersected by
$\gamma_{i}(t)$ (for the natural parametrization used in the proof of
Proposition 4.1) is $\alpha_{j}$. Given that $\alpha_{j}$ is transverse to
$\mathcal{F}^{u}$ and $\gamma_{i}$ is dense in $M$ we have that
$\gamma_{i}\cap\alpha_{j}$ is dense in $\alpha_{j}$. In consequence
$\tilde{\gamma_{i}}$ intersects $\pi^{-1}(\alpha_{j})$ infinitely often.
Remark that $\tilde{\gamma_{i}}$ intersects a connected component of
$\pi^{-1}(\alpha_{j})$ at most once. Indeed, if this were not the case, there
would exist a disc $D$ embedded in $\mathbb{D}$ whose boundary $\partial D$
is formed by an arc in $\tilde{\gamma_{i}}$ and an arc contained in
$\pi^{-1}(\alpha_{j})$ transverse to
$\widetilde{\mathcal{F}^{u}}:=\pi^{-1}(\mathcal{F}^{u})$. This is impossible
because all singularities of $\widetilde{\mathcal{F}^{u}}$ are saddles (in
particular only a finite number of separatrices can stem from each one of
them) and there is a finite number of them inside $D$. Then, all limit points
of $\widetilde{\gamma_{i}}$ in $\mathbb{D}\cup\partial\mathbb{D}$ different
from $\tilde{p}$ are in the intersection of an infinite family of nested domains
in $\mathbb{D}\cup\partial\mathbb{D}$ whose boundaries in $\mathbb{D}$ are
components of $\pi^{-1}(\alpha_{j})$. Moreover, this intersection is a single
point $q_{i}\in\partial\mathbb{D}$ because it has to be connected and the
endpoints in $\partial\mathbb{D}$ of components in $\pi^{-1}(\alpha_{j})$ are
dense in $\partial\mathbb{D}$ since $\Gamma$ is Fuchsian of the first kind. This
finishes the proof of our claim above.
We define thus $\widetilde{\delta_{i}}=\widetilde{\delta_{i}}(\tilde{p})$ to
be the geodesic in $\mathbb{D}$ whose endpoints are $\tilde{p}$ and $q_{i}$ as
above and $\delta_{i}:=\pi(\widetilde{\delta_{i}})$. The geodesic $\delta_{i}$
is well defined: it does not depend on the lift of $\gamma_{i}$ based at
$\tilde{p}$ we have chosen, and if we changed $\tilde{p}$ by some
$\tilde{p}^{\prime}=g\tilde{p}$ for some $g\in\Gamma$ then by continuity
$q_{i}^{\prime}=gq_{i}$. On the other hand $\delta_{i}$ is simple: if this was
not the case two components of $\pi^{-1}(\delta_{i})$ would intersect and this
implies that two components of $\pi^{-1}(\gamma_{i})$ intersect, which is
impossible since $\widetilde{\mathcal{F}^{u}}$ is a foliation. Remark that if
$\gamma_{i}\neq\gamma_{j}$ then $\delta_{i}$ and $\delta_{j}$ are disjoint. If
this was not the case then there would be a geodesic in $\pi^{-1}(\delta_{i})$
intersecting a geodesic in $\pi^{-1}(\delta_{j})$, but this would imply that a
connected component of $\pi^{-1}(\gamma_{i})$ intersects a connected component
of $\pi^{-1}(\gamma_{j})$, which is impossible since
$\widetilde{\mathcal{F}}^{u}$ is a foliation.
Hence we can associate to the set of separatrices
$\\{\gamma_{1},\ldots,\gamma_{m}\\}$ a set of pairwise distinct simple
geodesics $\\{\delta_{1},\ldots,\delta_{m}\\}$ based at $p$. Remark that by
construction this set is $f$-invariant. In what follows we show that
$\\{\delta_{1},\ldots,\delta_{m}\\}$ is a clique of high-filling rays. By
applying the same arguments to the separatrices of $\mathcal{F}^{s}$ based at
$p$ one obtains a different $f$-invariant clique of $m$ high-filling rays.
These correspond to the only two points in the Gromov boundary of the loop
graph $L(S;p)$ fixed by $f$.
Let $\tilde{\delta}$ be a geodesic in $\mathbb{D}$ based at $\tilde{p}$ such
that $\delta:=\pi(\tilde{\delta})$ is a simple geodesic in $S_{\mu}$ which
does not belong to $\\{\delta_{1},\ldots,\delta_{m}\\}$. We denote by $q$ the
endpoint of $\tilde{\delta}$ which is different from $\tilde{p}$. Since every
short ray or loop has a geodesic representative, it is sufficient to show that
for every $i=1,\ldots,m$ there exists a (geodesic) component of
$\pi^{-1}(\delta_{i})$ which intersects $\tilde{\delta}$. We recommend to use
Figure 7 as a guide for the next paragraph.
Figure 7.
All components of $\pi^{-1}(\mathrm{Sep}_{p}(\mathcal{F}^{u}))$ with one
endpoint in $\tilde{p}$ are of the form $\\{g^{k}\tilde{\gamma}_{1},\ldots
g^{k}\tilde{\gamma}_{m}\\}_{k\in\mathbb{Z}}$, with $g\in\Gamma$ parabolic
fixing $\tilde{p}$. Hence, there exists a closed disc
$D\subset\mathbb{D}\cup\partial\mathbb{D}$ whose boundary is formed by
$\tilde{p}\cup\tilde{\gamma}_{k}\cup\tilde{\gamma}_{l}\cup A$, where $A$ is a
closed arc in $\partial\mathbb{D}$ containing $q$; $\tilde{\gamma}_{k}$,
$\tilde{\gamma}_{l}$ are (lifts of) separatrices and there is no element of
$\pi^{-1}(\mathrm{Sep}_{p}(\mathcal{F}^{u}))$ with one endpoint in $\tilde{p}$
in the interior of $D$. Remark that the endpoints of $A$ are $q_{k}$ and
$q_{l}$ (the endpoints of $\tilde{\gamma}_{k}$ and $\tilde{\gamma}_{l}$
respectively). Given that $p$ is a saddle-type singularity of the foliation
$\mathcal{F}^{u}$, there exists a neighbourhood of $\tilde{p}$ in $D$ which
contains a segment $\Sigma$ (as a matter of fact, this segment can be taken to
live in one of the lifts of the curves in $\alpha\cup\beta$ forming the
boundary of the disc in $S\setminus\alpha\cup\beta$ containing $p$) with one
endpoint in $\tilde{\gamma}_{k}$ and the other in $\tilde{\gamma}_{l}$, and
which is transverse to $\widetilde{\mathcal{F}^{u}}$ except at one point $\xi$
in its interior. Moreover, since $p$ is an isolated singularity of
$\mathcal{F}^{u}$, we can suppose that the closure of the connected component
of $D\setminus\Sigma$ in $\mathbb{D}$ having $\tilde{p}$ in its boundary does
not contain singular points of $\widetilde{\mathcal{F}^{u}}$ different from
$\tilde{p}$. The point $\xi$ divides $\Sigma$ into two connected components
$\Sigma_{L}$ and $\Sigma_{R}$. On the other hand, $q$ divides $A$ into two
connected components $A_{L}$ and $A_{R}$. Now let $i=1,\ldots,m$ be fixed.
Since $\gamma_{i}$ is dense in $M$ we have that $\pi^{-1}(\gamma_{i})$ is
dense in $\mathbb{D}$ and in particular $\pi^{-1}(\gamma_{i})\cap\Sigma$ is
dense in $\Sigma$. Hence we can pick a leaf $\tilde{\gamma}_{i}^{\prime}$ in
$\pi^{-1}(\gamma_{i})$ passing through a point $\xi_{L}\in\Sigma_{L}$ and
suppose without loss of generality that one of its endpoints $q_{i}^{\prime}$
is in $A_{L}$. Then
$\tilde{\gamma}_{i}^{\prime}\cap\Sigma=\\{\xi_{L},\xi_{R}\\}$ with
$\xi_{R}\in\Sigma_{R}$. Again, given that $\pi^{-1}(\gamma_{i})\cap\Sigma$ is
dense in $\Sigma$, we can find a leaf
$\tilde{\gamma_{i}}^{\prime\prime}\in\pi^{-1}(\gamma_{i})$ which intersects
$\Sigma$ transversally at a point $\eta_{R}$ between $\xi_{R}$ and
$\tilde{\gamma}_{l}$, and which has an endpoint $t_{R}$ in $A_{R}$ arbitrarily
close to $q_{l}$. This is true because of the way $q_{l}$ was found: there is
a family of connected components of $\pi^{-1}(\alpha)$, for some closed curve
$\alpha$ in $M$ transverse to $\mathcal{F}^{u}$, bounding domains in
$\mathbb{D}\cup\partial\mathbb{D}$ whose intersection is $q_{l}$. Now, by the
way $\Sigma$ was chosen we have that
$\tilde{\gamma}_{i}^{\prime\prime}\cap\Sigma=\\{\eta_{L},\eta_{R}\\}$ with
$\eta_{L}\in\Sigma_{L}$. This implies that $\tilde{\gamma}_{i}^{\prime\prime}$
has an endpoint $t_{L}$ in $A_{L}$. Hence the geodesic
$\tilde{\delta}^{\prime}$ determined by the endpoints $t_{L}$ and $t_{R}$
intersects $\tilde{\delta}$ and $\delta_{i}=\pi(\tilde{\delta}^{\prime})$
intersects $\delta=\pi(\tilde{\delta})$.∎
###### Remark 4.3.
In the proof of Theorem 1.1 we made use of the fact that every short-ray or
loop has a geodesic representative, but this is not necessary. As a matter of
fact the following is true: if $\tilde{\delta}$ is any curve in $\mathbb{D}$
based at $\widetilde{p}$ whose extremities define two different points in
$\partial\mathbb{D}$ and such that $\pi(\tilde{\delta})=\delta$ is simple and
does not belong to the set of separatrices
$\\{\gamma_{1},\ldots,\gamma_{m}\\}$, then for any $j=1,\ldots,m$ the geodesic
$\delta_{j}$ intersects $\delta$.
On the other hand, in the proof of Theorem 1.1 the density of each separatrix
of $\mathcal{F}^{u}$ or $\mathcal{F}^{s}$ on the _whole_ surface $S$ is not
used. The proof remains valid if we only require separatrices of
$\mathcal{F}^{u}$ and $\mathcal{F}^{s}$ to be dense on a subsurface
$S^{\prime}\subset S$ of finite type with enough topology, _e.g._ such that
all curves defining the polygon on which $p$ lives are essential in
$S^{\prime}$. In particular we have the following:
###### Corollary 4.4.
Let $S^{\prime}\subset S$ be an essential subsurface of finite topological
type containing $p$ and $h\in\mathrm{Mod}(S^{\prime})$ a pseudo-Anosov element
for which $p$ is a $k$-prong for some $k\in\mathbb{N}$. Let $\hat{h}$ be the
extension (as the identity) of $h$ to $S$. Then $\hat{h}$ is a loxodromic
element of weight $k$. Moreover, the separatrices of the invariant transverse
measured foliations of $h$ based at $p$ define (by a stretching process as
described in the proof of Theorem 1.1) the cliques of high-filling rays fixed
by $\hat{h}$.
This result already appears in the work of Bavard and Walker [BaWa18B], see
Lemma 7.2.1 and Theorem 8.3.1.
### 4.2. Proof of Theorem 1.5
#### 4.2.1. Preliminaries
In this section we present, for each infinite type surface, a model that is
convenient for the proof of Theorem 1.5.
Normal forms for infinite-type surfaces. In what follows we detail how to
construct, for any infinite type surface $S$, a graph
$T(S)\subset\mathbb{H}^{3}$ having a regular neighbourhood whose boundary is
homeomorphic to $S$ (our choice of ambient space obeys illustrative purposes
only). There are many ways to construct such graph. The one we present is
intended to make the proof of Theorem 1.5 more transparent.
Let $2^{\mathbb{N}}:=\prod_{j\in\mathbb{N}}\\{0,1\\}_{j}$ be the Cantor set.
In general terms, the construction is as follows. We consider a rooted binary
tree $T2^{\mathbb{N}}$, a homeomorphism
$f:\prod_{j\in\mathbb{N}}\\{0,1\\}\to\mathrm{Ends}(T2^{\mathbb{N}})$ from the
standard binary Cantor set to the space of ends of this tree, and we choose a
topological embedding
$i:\mathrm{Ends}(S)\hookrightarrow\prod_{j\in\mathbb{N}}\\{0,1\\}$. We show
that there exists a subtree of $T2^{\mathbb{N}}$ whose space of ends is
precisely $f\circ i(\mathrm{Ends}(S))$. For our purposes it is important that
this subtree is _simple_ (see Definition 4.5 below). Then, if $S$ has genus,
we perform a surgery on vertices of the aforementioned subtree of
$T2^{\mathbb{N}}$ belonging to rays starting at the root and having one end on
$f\circ i(\mathrm{Ends}_{\infty}(S))$.
The rooted binary tree. For every $n\in\mathbb{N}$ let
$2^{(n)}:=\prod_{i=1}^{n}\\{0,1\\}$ and $\pi_{i}:2^{(n)}\to\\{0,1\\}$ the
projection onto the $i^{th}$ coordinate. The rooted binary tree is the graph
$T2^{\mathbb{N}}$ whose vertex set $V(T2^{\mathbb{N}})$ is the union of the
symbol $\mathfrak{r}$ (this will be the root of the tree) with the set
$\\{D:D\in 2^{(n)}\ \text{for some $n\in\mathbb{N}$}\\}$. The
edges $E(T2^{\mathbb{N}})$ are $\\{(\mathfrak{r},0)$, $(\mathfrak{r},1)\\}$
together with:
$\\{(D,D^{\prime}):D\in 2^{(n)},\ D^{\prime}\in 2^{(n+1)}\ \text{for some
$n\in\mathbb{N}$},\ \text{and}\ \pi_{i}(D)=\pi_{i}(D^{\prime})\ \forall 1\leq
i\leq n\\}$
Henceforth $T2^{\mathbb{N}}$ is endowed with the combinatorial distance. For
every $\hat{x}=(x_{n})\in 2^{\mathbb{N}}$ we define
$r(\hat{x})=(\mathfrak{r},a_{1},\ldots,a_{n},\ldots)$, where
$a_{n}:=(x_{1},\ldots,x_{n})$, to be the infinite geodesic ray in
$T2^{\mathbb{N}}$ starting from $\mathfrak{r}$ and
ending in $\mathrm{Ends}(T2^{\mathbb{N}})$. Then, the map
(4) $f:\prod_{i\in\mathbb{N}}\\{0,1\\}\to\mathrm{Ends}(T2^{\mathbb{N}})$
which associates to each infinite sequence $\hat{x}=(x_{n})_{n\in\mathbb{N}}$
the end $f(\hat{x})$ of $T2^{\mathbb{N}}$ defined by the infinite geodesic ray
$r(\hat{x})$ is a homeomorphism.
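To make the combinatorics concrete, the following minimal Python sketch (an illustration with names of our choosing, not part of the construction) enumerates the vertices of $T2^{\mathbb{N}}$ up to a finite depth and produces the initial segment of a ray $r(\hat{x})$:

```python
# Vertices of the rooted binary tree: the empty tuple () plays the role of
# the root, and a vertex at level n is a tuple of n bits. The ray r(x)
# associated to a sequence x is the chain of its finite prefixes.
from itertools import islice
from typing import Iterable, Iterator, Tuple

Vertex = Tuple[int, ...]

def ray(x: Iterable[int], depth: int) -> Iterator[Vertex]:
    """Yield the first vertices of the geodesic ray r(x): root, (x1), (x1, x2), ..."""
    prefix: Vertex = ()
    yield prefix
    for bit in islice(x, depth):
        prefix = prefix + (bit,)
        yield prefix

def edges_up_to(n: int) -> Iterator[Tuple[Vertex, Vertex]]:
    """Edges of T2^N between levels 0..n: each vertex D is joined to D+(0,) and D+(1,)."""
    level = [()]
    for _ in range(n):
        nxt = []
        for d in level:
            for bit in (0, 1):
                child = d + (bit,)
                yield (d, child)
                nxt.append(child)
        level = nxt

# Two sequences agreeing on a long prefix give rays sharing many initial
# vertices, i.e. nearby ends of the tree, mirroring the Cantor set topology.
print(list(ray([0, 1, 1, 0], depth=4)))
print(len(list(edges_up_to(3))))  # 2 + 4 + 8 = 14 edges
```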
###### Definition 4.5.
Let $v$ and $v^{*}$ be two different vertices in a subtree $\mathcal{T}$ of
$T2^{\mathbb{N}}$. If $v$ is contained in the geodesic which connects $v^{*}$
with $\mathfrak{r}$, then we say that $v^{*}$ is a _descendant_ of $v$. A
connected rooted subtree of $\mathcal{T}$ without leaves is _simple_ if all
descendants of a vertex $v\neq\mathfrak{r}$ of degree two also have degree
two.
###### Lemma 4.6.
Let $F\subset\mathrm{Ends}(T2^{\mathbb{N}})$ be closed. Then there exists a
_simple_ subtree $T$ of $T2^{\mathbb{N}}$ rooted at $\mathfrak{r}$ such that
$F$ is homeomorphic to $\mathrm{Ends}(T)$.
We postpone the proof of this lemma to the end of the section.
###### Definition 4.7.
Given a subset $F$ of $\mathrm{Ends}(T2^{\mathbb{N}})$ we define
$T_{F}:=\bigcup_{\hat{x}\in f^{-1}(F)}r(\hat{x})\subseteq T2^{\mathbb{N}}$ and
call it the _tree induced by $F$_.
Surgery. Let $T$ be a subtree of $T2^{\mathbb{N}}$ rooted at $\mathfrak{r}$
and having no leaves different from this vertex, if any. Let $L$ be a subset
of the vertex set of $T$. We denote by $\Gamma_{T,L}$ the graph obtained from
$T$ and $L$ after performing the following operations on each vertex $v\in L$
(a combinatorial sketch follows the list):
1. (1)
If $v$ has degree 3 with adjacent descendants $v^{\prime},v^{\prime\prime}$, we
delete first the edges $\\{(v,v^{\prime}),(v,v^{\prime\prime})\\}$. Then we
add two new vertices $v_{*}^{\prime},v_{*}^{\prime\prime}$ and the edges
$\\{(v,v_{*}^{\prime}),(v,v_{*}^{\prime\prime}),(v_{*}^{\prime},v^{\prime}),(v_{*}^{\prime\prime},v^{\prime\prime}),(v^{\prime}_{*},v_{*}^{\prime\prime})\\}$.
2. (2)
If $v$ has degree 2 and $v^{\prime}$ is its adjacent descendant, we delete
first the edge $(v,v^{\prime})$. Then we add two new vertices
$v_{*}^{\prime},v_{*}^{\prime\prime}$ and the edges
$\\{(v,v_{*}^{\prime}),(v,v_{*}^{\prime\prime}),(v_{*}^{\prime},v^{\prime}),(v^{\prime}_{*},v_{*}^{\prime\prime})\\}$.
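As announced above, here is a minimal Python sketch of these two operations on a finite tree (our encoding: undirected edges as frozensets, each tree edge listed as a (parent, child) pair; only the original vertices of $L$ are processed):

```python
# Surgery producing Gamma_{T,L}: each processed vertex of L receives a small
# triangle (case of two descendants) or a pendant triangle (one descendant);
# in the boundary surface of a regular neighbourhood this creates genus.
from collections import defaultdict

def surgery(parent_edges, L):
    """parent_edges: iterable of (v, child) pairs of the tree T.
    Returns the edge set of Gamma_{T,L} as a set of frozensets."""
    children = defaultdict(list)
    for v, c in parent_edges:
        children[v].append(c)
    edges = {frozenset(e) for e in parent_edges}
    fresh = 0
    for v in list(L):                      # snapshot: new vertices are not reprocessed
        desc = children[v]
        if len(desc) == 2:                 # v of degree 3 in T: two descendants + parent
            v1, v2 = desc
            edges -= {frozenset((v, v1)), frozenset((v, v2))}
            a, b = ("new", fresh), ("new", fresh + 1)
            fresh += 2
            edges |= {frozenset((v, a)), frozenset((v, b)), frozenset((a, v1)),
                      frozenset((b, v2)), frozenset((a, b))}
        elif len(desc) == 1:               # v of degree 2 in T: one descendant + parent
            (v1,) = desc
            edges.discard(frozenset((v, v1)))
            a, b = ("new", fresh), ("new", fresh + 1)
            fresh += 2
            edges |= {frozenset((v, a)), frozenset((v, b)), frozenset((a, v1)),
                      frozenset((a, b))}
    return edges

# Example: the path r -> u -> w with L = {u} acquires one triangle at u.
for e in surgery([("r", "u"), ("u", "w")], {"u"}):
    print(set(e))
```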
###### Definition 4.8.
Let $S$ be a surface of infinite type of genus
$g\in\mathbb{N}\cup\\{\infty\\}$ and
$\mathrm{Ends}_{\infty}(S)\subset\mathrm{Ends}(S)\subset 2^{\mathbb{N}}$ its
space of ends accumulated by genus and space of ends, respectively. We define
the graph $T(S)$ according to the following cases. In all of them we suppose
w.l.o.g that $T_{f(\mathrm{Ends}(S))}$ is simple.
1. (1)
If $g=0$ let $T(S):=T_{f(\mathrm{Ends}(S))}$,
2. (2)
if $g\in\mathbb{Z}_{>0}$ let $T(S):=\Gamma_{T,L}$ where
$T=T_{f(\mathrm{Ends}(S))}$ and $L=\\{a_{1},a_{2},\ldots,a_{g}\\}$ and
$(\mathfrak{r},a_{1},\ldots,a_{g},\ldots)=r(\hat{x})$ for some
$\hat{x}\in\mathrm{Ends}(S)\subset 2^{\mathbb{N}}$, and
3. (3)
if $g=\infty$ let $T(S):=\Gamma_{T,L}$ where $T=T_{f(\mathrm{Ends}(S))}$ and
$L$ is the set of vertices of the subtree
$T_{f(\mathrm{Ends}_{\infty}(S))}\subset T_{f(\mathrm{Ends}(S))}$.
By construction, there exists a geometric realization for $T(S)$ as a graph in
the plane $\\{(x,0,z)\in\mathbb{H}^{3}:z>0\\}$ in 3-dimensional hyperbolic
space, which we denote again by $T(S)$. Moreover there exists a closed regular
neighbourhood $N(T(S))$ so that $S$ is homeomorphic to $S^{\prime}=\partial
N(T(S))$, see Figure 8. Observe that $T(S)$ is a strong deformation retract of
$N(T(S))$. We identify $S$ with $S^{\prime}$, and we say that $S$ is in
_normal form_ , and $T(S)$ is the _underlying_ graph that induces $S$.
Figure 8. Embedding of the tree $T(S)$.
###### Remark 4.9.
In [BaWa18B], A. Walker and J. Bavard carry out a similar construction. For
this they introduce the notion of _rooted core tree_ $T$ from which they
construct a surface $\Sigma(T)$ homeomorphic to a given infinite type surface
$S$, see Lemma 2.3.1 in [BaWa18B]. The graph $T_{f(\mathrm{Ends}(S))}$ (see
Definition 4.8) is turned into a rooted core tree by declaring that the
vertices of $T_{f(\mathrm{Ends}_{\infty}(S))}$ are the marked vertices. The
main difference with the work of Walker and Bavard is that the normal form we
are proposing comes from a _simple_ tree. This property is strongly used in
the proof of Theorem 1.5.
_Proof of Lemma 4.6_. Let $T_{F}$ be the subtree of $T2^{\mathbb{N}}$ induced
by $F$. Let $V^{\prime}$ be the set of vertices of $T_{F}$ of degree 2,
different from $\mathfrak{r}$, having at least one descendant of degree 3.
Then $V^{\prime}=\sqcup_{i\in I}V^{\prime}_{i}$, where:
1. (1)
$V_{i}^{\prime}$ is a subset of the vertices of a ray $r(\hat{x})$ for some
$\hat{x}\in f^{-1}(F)$,
2. (2)
for every $i\in I$, one can label
$V^{\prime}_{i}=\\{a_{i,1},\ldots,a_{i,k_{i}}\\}$ so that $a_{i,l+1}$ is a
descendant of $a_{i,l}$ adjacent to $a_{i,l}$.
3. (3)
for every $i\in I$, the vertex $A_{i}$ of $T_{F}$ adjacent to $a_{i,1}$ other
than $a_{i,2}$ is either the root $\mathfrak{r}$ or a vertex of degree 3.
Similarly, the vertex $B_{i}$ of $T_{F}$ adjacent to $a_{i,k_{i}}$ other than
$a_{i,k_{i}-1}$ is of degree 3.
Replacing the finite simple path from $A_{i}$ to $B_{i}$ by an edge
$(A_{i},B_{i})$ does not modify the space of ends of $T_{F}$. By doing this
for every $i\in I$ we obtain a simple tree as desired. ∎
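For illustration, here is a minimal Python sketch (ours, for finite trees only; the lemma itself concerns infinite trees) of this contraction of degree-two chains:

```python
# Replace every maximal chain of degree-2 vertices between two kept vertices
# by a single edge; kept vertices are the root and all vertices of degree
# other than 2. The space of ends of the tree is unchanged.
def contract_chains(adj, root):
    """adj: dict vertex -> set of neighbours of a finite tree."""
    new_adj = {v: set() for v in adj if v == root or len(adj[v]) != 2}
    for a in new_adj:                      # walk away from each kept vertex
        for first in adj[a]:
            prev, cur = a, first
            while cur not in new_adj:      # slide along the degree-2 chain
                nxt = next(w for w in adj[cur] if w != prev)
                prev, cur = cur, nxt
            new_adj[a].add(cur)
    return new_adj

# A path r-x-y-b with a branching vertex b: the chain x, y collapses.
adj = {"r": {"x"}, "x": {"r", "y"}, "y": {"x", "b"},
       "b": {"y", "u", "w"}, "u": {"b"}, "w": {"b"}}
print(contract_chains(adj, "r"))
```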
_Proof of Theorem 1.5_. This proof is divided into two parts. First we show the
existence of a pair of multicurves of finite type whose union fills $S$ and
which satisfy (1) and (2). In the second part we use these to construct the
desired multicurves $\alpha$ and $\beta$.
First part: Let $S$ be an infinite-type surface in its normal form, and
$T(S)\subset\mathbb{H}^{3}$ the underlying graph that induces $S$. We are
supposing that $T(S)$ is obtained after surgery from a simple tree as
described above. The idea here is to construct two disjoint collections $A$
(blue curves) and $B$ (red curves) of pairwise disjoint curves in $S$ such
that after forgetting the non-essential curves in $A\cup B$, we get the pair
of multicurves $\alpha$ and $\beta$ which satisfy (1) and (2) as in Theorem
1.5.
Let $T_{g}(S)$ be the full subgraph of $T(S)$ generated by all the vertices
which define a triangle in $T(S)$. Observe that, since $T(S)$ is constructed
by performing a surgery on a simple tree, the graph $T_{g}(S)$ is connected.
Let $T_{g}^{\prime}(S)$ be the subset obtained as the union of $T_{g}(S)$ with
all the edges in $T(S)$ adjacent to $T_{g}(S)$. As $T_{g}(S)$ is connected,
$T_{g}^{\prime}(S)$ is also connected. Let $\Delta$ be a triangle in
$T_{g}(S)$, and $\Delta^{\prime}$ be the disjoint union of $\Delta$ with all
the edges in $T_{g}^{\prime}(S)$ adjacent to $\Delta$. We notice that
$\Delta^{\prime}$ is one of the following two possibilities: (1) the disjoint
union of $\Delta$ with exactly three edges adjacent to it, or (2) the disjoint
union of $\Delta$ with exactly two edges adjacent to it. For each case, we
choose blue and red curves in $S$ as indicated in Figure 9.
Figure 9. Blue and red curves associated to the neighborhood $\Delta^{\prime}$
of a triangle $\Delta$.
For each edge $e$ in $T_{g}^{\prime}(S)$ which connects two triangles in
$T_{g}^{\prime}(S)$, we choose a blue curve in $S$ as indicated in Figure 10.
Figure 10. Blue curve associated to an edge which connects two triangles.
We consider the following cases.
$\mathrm{Ends}(S)=\mathrm{Ends}_{\infty}(S)$. In this case $A$ and $B$ are the
multicurves formed by the blue and red curves as chosen above respectively.
$\mathrm{Ends}(S)\neq\mathrm{Ends}_{\infty}(S)$. Let $C$ be a connected
component of $T(S)-T_{g}^{\prime}(S)$. Given that $T(S)$ is obtained from a
simple tree, $C$ is a tree with infinitely many vertices. Let $v$ be the only
vertex in $C$ which is adjacent to an edge $e(v)$ in $T_{g}^{\prime}(S)$. If
$v$ has degree one in $C$, then every vertex of $C$ different from $v$ has
degree two because $T_{f(\mathrm{Ends}(S))}$ is a simple subtree of
$T2^{\mathbb{N}}$. In this case, we have that the subsurface $S(C)\subset S$
induced by $C$ is homeomorphic to a punctured disc. In particular, the red
curve in $S$ associated to the edge $e(v)$ chosen as depicted in Figure 9 is
not essential in $S$.
Suppose now that $v$ has degree two in $C$. We color with blue all the edges
in $C$ having vertices at combinatorial distances $k$ and $k+1$ from $v$ for
every even $k\in\mathbb{Z}_{\geq 0}$. We color all other edges in $C$ in red,
see the left-hand side in Figure 11. Let $e$ and $e^{\prime}$ be two edges in
$C$ of the same color and suppose that they share a vertex $v$. Suppose that
all vertices of $e\cup e^{\prime}$ different from $v$ have degree three. If
$e$ and $e^{\prime}$ are marked with blue color (respectively red color), we
choose the red curve (respect. blue curve) in $S$ as in the right-hand side of
Figure 11.
Figure 11. (Left) Edges of $C$ colored with blue and red alternating in
levels. (Right) The corresponding curve in $S$ for the pair of edges $e$ and
$e^{\prime}$ marked with blue color.
For the edge $e(v)\in T^{\prime}_{g}(S)$, we choose the blue curve in $S$ as
in the left-hand side of Figure 12. Finally, for each edge $e$ of $C$, we take
a curve in $S$ with the same marked color as $e$, as shown in the right-
hand side of Figure 12.
Figure 12. (Left) The corresponding curve for the edge $e$ which connects $F$ with
$T_{g}^{\prime}(S)$. (Right) Blue (red) curve associated to an edge $e$ of $F$.
If $\mathrm{Ends}(S)$ has at most one isolated planar end, the construction of
the multicurves $A$ and $B$ ends here.
Now suppose that $\mathrm{Ends}(S)$ has more than one isolated planar end,
that is, $S$ has at least two punctures. Let $R$ be the full subgraph of
$T(S)$ generated by all the vertices of degree 3 in $T(S)$ together with the
root vertex $\mathfrak{r}$, and define $T_{g}^{\prime\prime}(S)$ as the full
subgraph of $T(S)$ generated by all the vertices in $T(S)$ at distance at most
1 from $R$. The graph $T_{g}^{\prime\prime}(S)$ is connected (again, because
$T_{f(\mathrm{Ends}(S))}$ is simple) and contains $T_{g}^{\prime}(S)$. It
also has at least two leaves, i.e., vertices of degree one, which we denote by
$v_{1}$ and $v_{2}$. If $v_{1}$ and $v_{2}$ are at distance 3 in
$T_{g}^{\prime\prime}(S)$, there exists a single edge $e$ in
$T_{g}^{\prime\prime}(S)$ whose adjacent vertices are at distance 1 from
$v_{1}$ or $v_{2}$. Let us suppose that this is not an edge of a triangle in
$T(S)$. Then $e$ is contained in a connected component of $T(S)\smallsetminus
T_{g}^{\prime}(S)$. In this case, if $e$ is marked with red color (blue
color), we choose the blue curve (red curve) in $S$ as shown in Figure 13.
In all other cases we do nothing and this finishes the construction of the
multicurves $A$ and $B$.
Figure 13. The corresponding blue curve in $S$ for the red edge $e$.
We define $\alpha=\\{\alpha_{i}\\}_{i\in I}$ and $\beta=\\{\beta_{j}\\}_{j\in
J}$ as the set of essential curves in $A$ and $B$, respectively. By
construction, $\alpha$ and $\beta$ are multicurves of finite type in minimal
position and $\rm{i}(\alpha_{i},\beta_{j})\leq 2$ for every $i\in I$ and $j\in
J$. Upon investigation, one can observe that each connected component of
$S\setminus\alpha\cup\beta$ is a disc or a punctured disc whose boundary is
formed by 2, 4, 6 or 8 segments.
Second part: Let $m\in\mathbb{N}$. Take a finite multicurve $\delta$ in $S$
such that, if $Q$ is the connected component of $S\smallsetminus\delta$ which
contains $p$, we have that $Q\setminus p$ is homeomorphic to $S_{0}^{m+3}$,
i.e., a genus zero surface with $m+3$ punctures. In $Q$ we choose blue and
red curves to form a chain as in Figure 14 and color them in blue and red so
that no two curves of the same color intersect. We denote the blue and red
curves in $Q$ by $\alpha^{\prime}$ and $\beta^{\prime}$ respectively. Remark
that the connected component of
$Q\smallsetminus(\alpha^{\prime}\cup\beta^{\prime})$ containing the point $p$
is a $2m$-polygon.
Figure 14. The multicurves $\alpha^{\prime}$ (blue) and $\beta^{\prime}$
(red).
The idea is to extend $\alpha^{\prime}$ and $\beta^{\prime}$ to multicurves
$\alpha$ and $\beta$, respectively, which satisfy all the desired properties.
We consider two cases for the rest of the proof.
$m$ even. Without loss of generality we suppose that all punctures in $Q$
different from $p$ are encircled by elements of $\alpha^{\prime}$. Now, let
$F$ be a connected component of $S\smallsetminus\alpha^{\prime}$ not
containing the point $p$. Then the closure $\overline{F}$ of $F$ in $S$ is a
surface with $b>0$ boundary components. Moreover,
$\overline{F}\cap\beta^{\prime}$ is a finite collection of disjoint essential
arcs in $\overline{F}$ with end points in $\partial\overline{F}$, and the end
points of an arc in $\overline{F}\cap\beta^{\prime}$ are in a common connected
component of $\partial\overline{F}$. We denote by $\theta_{F}$ the collection
of arcs in $\overline{F}\cap\beta^{\prime}$, and by $\delta_{F}$ the set
of curves in $\delta$ contained in $F$.
Claim: There exists a pair of multicurves $\alpha_{F}^{\prime\prime}$ and
$\beta_{F}^{\prime\prime}$ whose union fills $F$, which satisfy (1) & (2) in
Theorem 1.5 and such that $\theta_{F}\cap\beta_{F}^{\prime\prime}=\emptyset$.
Remark that if we define
$\alpha:=\alpha^{\prime}\bigcup\left(\bigcup_{F\subset
S\setminus\alpha^{\prime}}\alpha_{F}^{\prime\prime}\right)$ and
$\beta:=\beta^{\prime}\bigcup\left(\bigcup_{F\subset
S\setminus\alpha^{\prime}}\beta_{F}^{\prime\prime}\right)$, then $\alpha$ and
$\beta$ are the desired pair of multicurves.
We divide the proof of our claim into two cases: $b=1$ and $b>1$.
Case $b=1$. If $F$ is a finite-type surface, it is not difficult to find the
multicurves $\alpha_{F}^{\prime\prime}$ and $\beta_{F}^{\prime\prime}$. If $F$
is of infinite-type let $\alpha^{\prime\prime}_{F}$ and
$\beta^{\prime\prime}_{F}$ be the blue and red curves obtained from applying
the first part of the proof of Theorem 1.5 to $F$. Remark that by construction
all arcs in $\theta_{F}$ intersect only one curve in
$\alpha_{F}^{\prime\prime}\cup\beta_{F}^{\prime\prime}$, hence, up to
interchanging the colors of $\alpha^{\prime\prime}_{F}$ and
$\beta_{F}^{\prime\prime}$, we can get that
$\beta_{F}^{\prime\prime}\cap\theta_{F}=\emptyset$.
Case $b>1$. Again, the case when $F$ is a finite-type surface is left to the
reader. If $F$ is of infinite type, let $\gamma$ be the separating curve in
$\overline{F}$ which bounds a subsurface $W\subset\overline{F}$ of genus 0
with one puncture, $b$ boundary components and such that $\partial
W=\partial\overline{F}$ and write $\overline{F}\smallsetminus\gamma=W\sqcup
F_{1}$. Let $\theta_{F_{1}}$ be the set of arcs given by
$\theta_{F}\cap\overline{F_{1}}$. Let $\eta_{1}$ and $\eta_{2}$ be two curves
in $F_{1}$ (not necessarily essential) such that $\gamma,\eta_{1}$ and
$\eta_{2}$ bound a pair of pants $P$ in $F_{1}$. If an element of
$\theta_{F}$ intersects $\eta_{1}\cup\eta_{2}$ then we replace it with one
that doesn’t and which is disjoint from all other arcs in $\theta_{F}$. Up to
making these replacements, we can assume that $\theta_{F}$ does not intersect
$\eta_{1}\cup\eta_{2}$, see Figure 15. Hence,
$\theta_{F_{1}}\subseteq\overline{P}$. As $\overline{F_{1}}$ has one boundary
component, by the case $b=1$ above, there exists a pair of multicurves
$\alpha_{F_{1}}^{\prime\prime}$ and $\beta_{F_{1}}^{\prime\prime}$ whose union
fills $F_{1}$ and such that
$\theta_{F_{1}}\cap\beta_{F_{1}}^{\prime\prime}=\emptyset$. Define then
$\alpha_{F}^{{}^{\prime\prime}}:=\\{\gamma\\}\cup\alpha_{F_{1}}^{{}^{\prime\prime}}$
and $\beta_{F}^{{}^{\prime\prime}}:=\beta_{F_{1}}^{{}^{\prime\prime}}$.
Figure 15. Case $b>1$.
$m$ odd. Without loss of generality we suppose that $\alpha^{\prime}$ encircles
all punctures in $Q$ different from $p$ except one. We add the curves
$\alpha_{1}$ and $\beta_{1}$ to $\alpha^{\prime}$ and $\beta^{\prime}$ as
depicted in Figure 16 respectively. Then we consider each connected component
$F$ of $S\setminus\alpha^{\prime}$ and proceed as in the preceding case. ∎
Figure 16. Case $m$ odd.
###### Remark 4.10.
If $\alpha$ and $\beta$ are multicurves as constructed in the proof of Theorem
1.5, then $S\setminus\alpha\cup\beta$ is a family of polygons, each of which
has either 2, 4, 6 or 8 sides. Hence, if $M=M(\alpha,\beta,\textbf{h})$ is
given by the Hooper-Thurston-Veech construction, then the set $\mathfrak{V}$
defined in the proof of Theorem 1.1 is formed by regular points and conic
singularities of total angle $\pi$, $3\pi$ or $4\pi$.
### 4.3. Proof of Corollary 1.6
Our arguments use Figure 17. Let $\alpha$ and $\beta$ be the multicurves in
blue and red illustrated in the figure; the union of these fills the surface
$S$ in question (a Loch Ness monster). Let us write
$\beta=\beta^{\prime}\sqcup\\{a_{i}\\}_{i\in\mathbb{N}}\sqcup\\{b_{i}\\}_{i\in\mathbb{N}}$,
and for each $n\in\mathbb{N}$ let
$\beta_{n}:=\beta^{\prime}\sqcup\\{a_{i}\\}_{i\geq n}\sqcup\\{b_{i}\\}_{i\geq
n}$. Theorem 1.1 implies that $f_{n}:=T_{\alpha}\circ T_{\beta_{n}}^{-1}$ acts
loxodromically on the loop graph $L(S;p)$ and hence it acts loxodromically on
the main component of the completed ray graph $\mathcal{R}(S;p)$ (see Theorem
2.2). Remark that $f_{n}$ converges to $f=T_{\alpha}\circ
T_{\beta^{\prime}}^{-1}$ in the compact-open topology. On the other hand, $f$
fixes the short rays $l$ and $l^{\prime}$ and hence it acts elliptically on
both $\mathcal{R}(S;p)$ and $L(S;p)$. ∎
###### Remark 4.11.
Using techniques similar to the ones presented in the proof of Theorem 1.5 one
can construct explicit sequences $(f_{n})$ as above for any infinite-type
surface $S$.
Figure 17.
## References
* [1] Abbott, Carolyn; Miller, Nick; Patel, Priyam. Infinite-type loxodromic isometries of the relative arc graph. In preparation, 2020.
* [3] Bavard, Juliette. Hyperbolicité du graphe des rayons et quasi-morphismes sur un gros groupe modulaire. Geom. Topol. 20 (2016), no. 1, 491–535.
* [5] Bestvina, Mladen; Bromberg, Ken; Fujiwara, Koji. Constructing group actions on quasi-trees and applications to mapping class groups. Publ. Math. Inst. Hautes Études Sci. 122 (2015), 1–64.
* [7] Bestvina, Mladen; Fujiwara, Koji. Bounded cohomology of subgroups of mapping class groups. Geom. Topol. 6 (2002), 69–89.
* [9] Brenner, Joël Lee. Quelques groupes libres de matrices. C. R. Acad. Sci. Paris 241 (1955), 1689–1691.
* [11] Bowman, Joshua P.; Valdez, Ferrán. Wild singularities of flat surfaces. Israel J. Math. 197 (2013), no. 1, 69–97.
* [13] Bavard, Juliette; Walker, Alden. The Gromov boundary of the ray graph. Trans. Amer. Math. Soc. 370 (2018), no. 11, 7647–7678.
* [15] Bavard, Juliette; Walker, Alden. Two simultaneous actions of big mapping class groups. Preprint, arXiv:1806.10272, 2018.
* [17] Durham, Matthew Gentry; Fanoni, Federica; Vlamis, Nicholas G. Graphs of curves on infinite-type surfaces with mapping class group actions. Ann. Inst. Fourier (Grenoble) 68 (2018), no. 6, 2581–2612.
* [19] Delecroix, Vincent; Hubert, Pascal; Valdez, Ferrán. Translation surfaces on the wild. Preprint, 2019. https://www.labri.fr/perso/vdelecro/infinite-translation-surfaces-in-the-wild.html
* [21] Farb, Benson; Margalit, Dan. A primer on mapping class groups. Princeton Mathematical Series, vol. 49, Princeton University Press, Princeton, NJ, 2012.
* [23] Hooper, W. Patrick. The invariant measures of some infinite interval exchange maps. Geom. Topol. 19 (2015), no. 4, 1895–2038.
* [25] Levitt, Gilbert. Foliations and laminations on hyperbolic surfaces. Topology 22 (1983), 119–135.
* [27] Mann, Kathryn; Rafi, Kasra. Large scale geometry of big mapping class groups. Preprint, arXiv:1912.10914, 2019.
* [29] Maher, Joseph; Tiozzo, Giulio. Random walks on weakly hyperbolic groups. J. Reine Angew. Math. 742 (2018), 187–239.
* [31] Randecker, Anja. Wild translation surfaces and infinite genus. Algebr. Geom. Topol. 18 (2018), no. 5, 2661–2699.
* [33] Rasmussen, Alexander. WWPD elements of big mapping class groups. Preprint, arXiv:1909.06680, 2019.
* [35] Raymond, Frank. The end point compactification of manifolds. Pacific J. Math. 10 (1960), 947–963.
* [37] Richards, Ian. On the classification of noncompact surfaces. Trans. Amer. Math. Soc. 106 (1963), 259–269.
* [39] Thurston, William P. On the geometry and dynamics of diffeomorphisms of surfaces. Bull. Amer. Math. Soc. (N.S.) 19 (1988), no. 2, 417–431.
* [41] Veech, W. A. Teichmüller curves in moduli space, Eisenstein series and an application to triangular billiards. Invent. Math. 97 (1989), no. 3, 553–583.
* [43]
|
2024-09-04T02:54:55.811777 | 2020-02-29T07:08:56 | 2003.00196 | {
"authors": "Aliaksandr Siarohin, St\\'ephane Lathuili\\`ere, Sergey Tulyakov, Elisa\n Ricci and Nicu Sebe",
"full_text_license": null,
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"provenance": "arxiv-papers-0000.json.gz:25957",
"submitter": "Aliaksandr Siarohin",
"url": "https://arxiv.org/abs/2003.00196"
} | arxiv-papers | # First Order Motion Model for Image Animation
Aliaksandr Siarohin
DISI, University of Trento
<EMAIL_ADDRESS>and Stéphane Lathuilière
DISI, University of Trento
LTCI, Télécom Paris, Institut polytechnique de Paris
<EMAIL_ADDRESS>and Sergey Tulyakov
Snap Inc.
<EMAIL_ADDRESS>and Elisa Ricci
DISI, University of Trento
Fondazione Bruno Kessler
<EMAIL_ADDRESS>
and Nicu Sebe
DISI, University of Trento
Huawei Technologies Ireland
<EMAIL_ADDRESS>
###### Abstract
Image animation consists of generating a video sequence so that an object in a
source image is animated according to the motion of a driving video. Our
framework addresses this problem without using any annotation or prior
information about the specific object to animate. Once trained on a set of
videos depicting objects of the same category (e.g. faces, human bodies), our
method can be applied to any object of this class. To achieve this, we
decouple appearance and motion information using a self-supervised
formulation. To support complex motions, we use a representation consisting of
a set of learned keypoints along with their local affine transformations. A
generator network models occlusions arising during target motions and combines
the appearance extracted from the source image and the motion derived from the
driving video. Our framework scores best on diverse benchmarks and on a
variety of object categories. Our source code is publicly available at
https://github.com/AliaksandrSiarohin/first-order-model.
## 1 Introduction
Generating videos by animating objects in still images has countless
applications across areas of interest including movie production, photography,
and e-commerce. More precisely, image animation refers to the task of
automatically synthesizing videos by combining the appearance extracted from a
source image with motion patterns derived from a driving video. For instance,
a face image of a certain person can be animated following the facial
expressions of another individual (see Fig. 1). In the literature, most
methods tackle this problem by assuming strong priors on the object
representation (e.g. 3D model) [4] and resorting to computer graphics
techniques [6, 34]. These approaches can be referred to as object-specific
methods, as they assume knowledge about the model of the specific object to
animate.
Recently, deep generative models have emerged as effective techniques for
image animation and video retargeting [2, 42, 3, 43, 28, 29, 38, 41, 32, 22].
In particular, Generative Adversarial Networks (GANs) [14] and Variational
Auto-Encoders (VAEs) [21] have been used to transfer facial expressions [38]
or motion patterns [3] between human subjects in videos. Nevertheless, these
approaches usually rely on pre-trained models in order to extract object-
specific representations such as keypoint locations. Unfortunately, these pre-
trained models are built using costly ground-truth data annotations [2, 28,
32] and are not available in general for an arbitrary object category. To
address this issue, recently Siarohin et al. [29] introduced Monkey-Net, the
first object-agnostic deep model for image animation. Monkey-Net encodes
motion information via keypoints learned in a self-supervised fashion. At test
time, the source image is animated according to the corresponding keypoint
trajectories estimated in the driving video. The major weakness of Monkey-Net
is that it poorly models object appearance transformations in the keypoint
neighborhoods assuming a zeroth order model (as we show in Sec. 3.1). This
leads to poor generation quality in the case of large object pose changes (see
Fig. 4). To tackle this issue, we propose to use a set of self-learned
keypoints together with local affine transformations to model complex motions.
We therefore call our method a first-order motion model. Second, we introduce
an occlusion-aware generator, which adopts an occlusion mask automatically
estimated to indicate object parts that are not visible in the source image
and that should be inferred from the context. This is especially needed when
the driving video contains large motion patterns and occlusions are typical.
Third, we extend the equivariance loss commonly used for keypoints detector
training [18, 45], to improve the estimation of local affine transformations.
Fourth, we experimentally show that our method significantly outperforms
state-of-the-art image animation methods and can handle high-resolution
datasets where other approaches generally fail. Finally, we release a new high
resolution dataset, Tai-Chi-HD, which we believe could become a reference
benchmark for evaluating frameworks for image animation and video generation.
Figure 1: Example animations produced by our method trained on different
datasets: _VoxCeleb_ [23] (top left), _Tai-Chi-HD_ (top right), _Fashion-
Videos_ [42] (bottom left) and _MGif_ [29] (bottom right). We use relative
motion transfer for _VoxCeleb_ and _Fashion-Videos_ and absolute transfer for
_MGif_ and _Tai-Chi-HD_ (see Sec. 3.4). Check our project page for more
qualitative results: https://aliaksandrsiarohin.github.io/first-order-model-website/.
## 2 Related work
Video Generation. Earlier works on deep video generation discussed how spatio-
temporal neural networks could render video frames from noise vectors [37,
27]. More recently, several approaches tackled the problem of conditional
video generation. For instance, Wang et al. [39] combine a recurrent neural
network with a VAE in order to generate face videos. Considering a wider range
of applications, Tulyakov et al. [35] introduced MoCoGAN, a recurrent
architecture adversarially trained in order to synthesize videos from noise,
categorical labels or static images. Another typical case of conditional
generation is the problem of future frame prediction, in which the generated
video is conditioned on the initial frame [12, 24, 31, 36, 45]. Note that in
this task, realistic predictions can be obtained by simply warping the initial
video frame [1, 12, 36]. Our approach is closely related to these previous
works since we use a warping formulation to generate video sequences. However,
in the case of image animation, the applied spatial deformations are not
predicted but given by the driving video.
Image Animation. Traditional approaches for image animation and video re-
targeting [6, 34, 13] were designed for specific domains such as faces [46,
43], human silhouettes [8, 38, 28] or gestures [32] and required a strong
prior of the animated object. For example, in face animation, the method of
Zollhofer et al. [46] produced realistic results at the expense of relying on
a 3D morphable model of the face. In many applications, however, such models are
not available. Image animation can also be treated as a translation problem
from one visual domain to another. For instance, Wang et al. [38] transferred
human motion using the image-to-image translation framework of Isola et al.
[16]. Similarly, Bansal et al. [3] extended conditional GANs by incorporating
spatio-temporal cues in order to improve video translation between two given
domains. In order to animate a single person, such approaches require hours of
videos of that person labelled with semantic information, and therefore have
to be retrained for each individual. In contrast to these works, we neither
rely on labels, prior information about the animated objects, nor on specific
training procedures for each object instance. Furthermore, our approach can be
applied to any object within the same category (e.g., faces, human bodies,
robot arms etc).
Several approaches were proposed that do not require priors about the object.
X2Face [41] uses a dense motion field in order to generate the output video
via image warping. Similarly to us they employ a reference pose that is used
to obtain a canonical representation of the object. In our formulation, we do
not require an explicit reference pose, leading to significantly simpler
optimization and improved image quality. Siarohin et al. [29] introduced
Monkey-Net, a self-supervised framework for animating arbitrary objects by
using sparse keypoint trajectories. In this work, we also employ sparse
trajectories induced by self-supervised keypoints. However, we model object
motion in the neighbourhood of each predicted keypoint by a local affine
transformation. Additionally, we explicitly model occlusions in order to
indicate to the generator network the image regions that can be generated by
warping the source image and the occluded areas that need to be inpainted.
## 3 Method
We are interested in animating an object depicted in a source image
$\mathbf{S}$ based on the motion of a similar object in a driving video
$\mathcal{D}$. Since direct supervision is not available (pairs of videos in
which objects move similarly), we follow a self-supervised strategy inspired
from Monkey-Net [29]. For training, we employ a large collection of video
sequences containing objects of the same object category. Our model is trained
to reconstruct the training videos by combining a single frame and a learned
latent representation of the motion in the video. Observing frame pairs, each
extracted from the same video, it learns to encode motion as a combination of
motion-specific keypoint displacements and local affine transformations. At
test time we apply our model to pairs composed of the source image and of each
frame of the driving video and perform image animation of the source object.
Figure 2: Overview of our approach. Our method assumes a source image
$\mathbf{S}$ and a frame of a driving video frame $\mathbf{D}$ as inputs. The
unsupervised keypoint detector extracts first order motion representation
consisting of sparse keypoints and local affine transformations with respect
to the reference frame $\mathbf{R}$. The dense motion network uses the motion
representation to generate dense optical flow
$\hat{\mathcal{T}}_{\mathbf{S}\leftarrow\mathbf{D}}$ from $\mathbf{D}$ to
$\mathbf{S}$ and occlusion map
$\hat{\mathcal{O}}_{\mathbf{S}\leftarrow\mathbf{D}}$. The source image and the
outputs of the dense motion network are used by the generator to render the
target image.
An overview of our approach is presented in Fig. 2. Our framework is composed
of two main modules: the motion estimation module and the image generation
module. The purpose of the motion estimation module is to predict a dense
motion field from a frame $\mathbf{D}\in\mathbb{R}^{3\times H\times W}$ of
dimension $H\times W$ of the driving video $\mathcal{D}$ to the source frame
$\mathbf{S}\in\mathbb{R}^{3\times H\times W}$. The dense motion field is later
used to align the feature maps computed from $\mathbf{S}$ with the object pose
in $\mathbf{D}$. The motion field is modeled by a function
$\mathcal{T}_{\mathbf{S}\leftarrow\mathbf{D}}:\mathbb{R}^{2}\rightarrow\mathbb{R}^{2}$
that maps each pixel location in $\mathbf{D}$ with its corresponding location
in $\mathbf{S}$. $\mathcal{T}_{\mathbf{S}\leftarrow\mathbf{D}}$ is often
referred to as backward optical flow. We employ backward optical flow, rather
than forward optical flow, since back-warping can be implemented efficiently
in a differentiable manner using bilinear sampling [17]. We assume there
exists an abstract reference frame $\mathbf{R}$. We independently estimate two
transformations: from $\mathbf{R}$ to $\mathbf{S}$
($\mathcal{T}_{\mathbf{S}\leftarrow\mathbf{R}}$) and from $\mathbf{R}$ to
$\mathbf{D}$ ($\mathcal{T}_{\mathbf{D}\leftarrow\mathbf{R}}$). Note that
unlike X2Face [41] the reference frame is an abstract concept that cancels out
in our derivations later. Therefore it is never explicitly computed and cannot
be visualized. This choice allows us to independently process $\mathbf{D}$ and
$\mathbf{S}$. This is desired since, at test time the model receives pairs of
the source image and driving frames sampled from a different video, which can
be very different visually. Instead of directly predicting
$\mathcal{T}_{\mathbf{D}\leftarrow\mathbf{R}}$ and
$\mathcal{T}_{\mathbf{S}\leftarrow\mathbf{R}}$, the motion estimator module
proceeds in two steps.
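As a side note on implementation, the following minimal PyTorch sketch (ours, with assumed shapes; not the released code) shows why backward flow is convenient: warping reduces to a single differentiable `grid_sample` call:

```python
# Differentiable back-warping: each output pixel (aligned with D) samples the
# source S at the location given by the backward flow, with bilinear
# interpolation, so gradients flow through both the image and the flow field.
import torch
import torch.nn.functional as F

def backward_warp(source: torch.Tensor, flow_grid: torch.Tensor) -> torch.Tensor:
    """source: (N, 3, H, W); flow_grid: (N, H, W, 2), the sampling locations
    T_{S<-D}(z) normalized to [-1, 1]. Returns S warped towards D."""
    return F.grid_sample(source, flow_grid, mode="bilinear", align_corners=True)

# The identity grid leaves the image unchanged; a learned deformation is a
# perturbation of this grid.
N, H, W = 1, 64, 64
ys, xs = torch.meshgrid(torch.linspace(-1, 1, H), torch.linspace(-1, 1, W),
                        indexing="ij")
identity = torch.stack((xs, ys), dim=-1).unsqueeze(0)  # (1, H, W, 2), (x, y) order
warped = backward_warp(torch.rand(N, 3, H, W), identity)
```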
In the first step, we approximate both transformations from sets of sparse
trajectories, obtained by using keypoints learned in a self-supervised way.
The locations of the keypoints in $\mathbf{D}$ and $\mathbf{S}$ are separately
predicted by an encoder-decoder network. The keypoint representation acts as a
bottleneck resulting in a compact motion representation. As shown by Siarohin
et al. [29], such sparse motion representation is well-suited for animation as
at test time, the keypoints of the source image can be moved using the
keypoints trajectories in the driving video. We model motion in the
neighbourhood of each keypoint using local affine transformations. Compared to
using keypoint displacements only, the local affine transformations allow us
to model a larger family of transformations. We use Taylor expansion to
represent $\mathcal{T}_{\mathbf{D}\leftarrow\mathbf{R}}$ by a set of keypoint
locations and affine transformations. To this end, the keypoint detector
network outputs keypoint locations as well as the parameters of each affine
transformation.
During the second step, a dense motion network combines the local
approximations to obtain the resulting dense motion field
$\hat{\mathcal{T}}_{\mathbf{S}\leftarrow\mathbf{D}}$. Furthermore, in addition
to the dense motion field, this network outputs an occlusion mask
$\hat{\mathcal{O}}_{\mathbf{S}\leftarrow\mathbf{D}}$ that indicates which
image parts of $\mathbf{D}$ can be reconstructed by warping of the source
image and which parts should be inpainted, i.e. inferred from the context.
Finally, the generation module renders an image of the source object moving as
provided in the driving video. Here, we use a generator network $G$ that warps
the source image according to
$\hat{\mathcal{T}}_{\mathbf{S}\leftarrow\mathbf{D}}$ and inpaints the image
parts that are occluded in the source image. In the following sections we
detail each of these steps and the training procedure.
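As a minimal sketch of this last step (ours; module and tensor shapes are assumptions, not the authors' architecture), the generator can combine warping with the occlusion map as follows, leaving the masked-out regions to be inpainted by the decoding layers:

```python
# Occlusion-aware feature alignment: source features are back-warped with the
# predicted dense flow and multiplied by the occlusion map, so that regions
# not visible in S contribute nothing and must be hallucinated from context.
import torch
import torch.nn.functional as F

def occlusion_aware_features(src_feats, flow_grid, occlusion_map):
    """src_feats: (N, C, H, W); flow_grid: (N, H, W, 2) in [-1, 1];
    occlusion_map: (N, 1, H, W) with values in [0, 1]."""
    warped = F.grid_sample(src_feats, flow_grid, mode="bilinear",
                           align_corners=True)
    return warped * occlusion_map  # zeroed-out parts are left to inpainting
```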
### 3.1 Local Affine Transformations for Approximate Motion Description
The motion estimation module estimates the backward optical flow
$\mathcal{T}_{\mathbf{S}\leftarrow\mathbf{D}}$ from a driving frame
$\mathbf{D}$ to the source frame $\mathbf{S}$. As discussed above, we propose
to approximate $\mathcal{T}_{\mathbf{S}\leftarrow\mathbf{D}}$ by its first
order Taylor expansion in a neighborhood of the keypoint locations. In the
rest of this section, we describe the motivation behind this choice, and
detail the proposed approximation of
$\mathcal{T}_{\mathbf{S}\leftarrow\mathbf{D}}$.
We assume there exists an abstract reference frame $\mathbf{R}$. Therefore,
estimating $\mathcal{T}_{\mathbf{S}\leftarrow\mathbf{D}}$ consists in
estimating $\mathcal{T}_{\mathbf{S}\leftarrow\mathbf{R}}$ and
$\mathcal{T}_{\mathbf{R}\leftarrow\mathbf{D}}$. Furthermore, given a frame
$\mathbf{X}$, we estimate each transformation
$\mathcal{T}_{\mathbf{X}\leftarrow\mathbf{R}}$ in the neighbourhood of the
learned keypoints. Formally, given a transformation
$\mathcal{T}_{\mathbf{X}\leftarrow\mathbf{R}}$, we consider its first order
Taylor expansions in $K$ keypoints $p_{1},\dots p_{K}$. Here, $p_{1},\dots
p_{K}$ denote the coordinates of the keypoints in the reference frame
$\mathbf{R}$. Note that for the sake of simplicity in the following the point
locations in the reference pose space are all denoted by $p$ while the point
locations in the $\mathbf{X}$, $\mathbf{S}$ or $\mathbf{D}$ pose spaces are
denoted by $z$. We obtain:
$\mathcal{T}_{\mathbf{X}\leftarrow\mathbf{R}}(p)=\mathcal{T}_{\mathbf{X}\leftarrow\mathbf{R}}(p_{k})+\left(\frac{d}{dp}\mathcal{T}_{\mathbf{X}\leftarrow\mathbf{R}}(p)\middle|_{p=p_{k}}\right)(p-p_{k})+o(\|p-p_{k}\|),$
(1)
In this formulation, the motion function
$\mathcal{T}_{\mathbf{X}\leftarrow\mathbf{R}}$ is represented by its values in
each keypoint $p_{k}$ and its Jacobians computed in each $p_{k}$ location:
$\mathcal{T}_{\mathbf{X}\leftarrow\mathbf{R}}(p)\simeq\left\\{\left\\{\mathcal{T}_{\mathbf{X}\leftarrow\mathbf{R}}(p_{1}),\frac{d}{dp}\mathcal{T}_{\mathbf{X}\leftarrow\mathbf{R}}(p)\middle|_{p=p_{1}}\right\\},\dots\left\\{\mathcal{T}_{\mathbf{X}\leftarrow\mathbf{R}}(p_{K}),\frac{d}{dp}\mathcal{T}_{\mathbf{X}\leftarrow\mathbf{R}}(p)\middle|_{p=p_{K}}\right\\}\right\\}.$
(2)
Furthermore, in order to estimate
$\mathcal{T}_{\mathbf{R}\leftarrow\mathbf{X}}=\mathcal{T}^{-1}_{\mathbf{X}\leftarrow\mathbf{R}}$,
we assume that $\mathcal{T}_{\mathbf{X}\leftarrow\mathbf{R}}$ is locally
bijective in the neighbourhood of each keypoint. We need to estimate
$\mathcal{T}_{\mathbf{S}\leftarrow\mathbf{D}}$ near the keypoint $z_{k}$ in
$\mathbf{D}$, given that $z_{k}$ is the pixel location corresponding to the
keypoint location $p_{k}$ in $\mathbf{R}$. To do so, we first estimate the
transformation $\mathcal{T}_{\mathbf{R}\leftarrow\mathbf{D}}$ near the point
$z_{k}$ in the driving frame $\mathbf{D}$, i.e.
$p_{k}=\mathcal{T}_{\mathbf{R}\leftarrow\mathbf{D}}(z_{k})$. Then we estimate
the transformation $\mathcal{T}_{\mathbf{S}\leftarrow\mathbf{R}}$ near $p_{k}$
in the reference $\mathbf{R}$. Finally
$\mathcal{T}_{\mathbf{S}\leftarrow\mathbf{D}}$ is obtained as follows:
$\mathcal{T}_{\mathbf{S}\leftarrow\mathbf{D}}=\mathcal{T}_{\mathbf{S}\leftarrow\mathbf{R}}\circ\mathcal{T}_{\mathbf{R}\leftarrow\mathbf{D}}=\mathcal{T}_{\mathbf{S}\leftarrow\mathbf{R}}\circ\mathcal{T}^{-1}_{\mathbf{D}\leftarrow\mathbf{R}},$
(3)
After computing again the first order Taylor expansion of Eq. (3) (see Sup.
Mat.), we obtain:
$\mathcal{T}_{\mathbf{S}\leftarrow\mathbf{D}}(z)\approx\mathcal{T}_{\mathbf{S}\leftarrow\mathbf{R}}(p_{k})+J_{k}(z-\mathcal{T}_{\mathbf{D}\leftarrow\mathbf{R}}(p_{k}))$
(4)
with:
$J_{k}=\left(\frac{d}{dp}\mathcal{T}_{\mathbf{S}\leftarrow\mathbf{R}}(p)\middle|_{p=p_{k}}\right)\left(\frac{d}{dp}\mathcal{T}_{\mathbf{D}\leftarrow\mathbf{R}}(p)\middle|_{p=p_{k}}\right)^{-1}$
(5)
In practice, $\mathcal{T}_{\mathbf{S}\leftarrow\mathbf{R}}(p_{k})$ and
$\mathcal{T}_{\mathbf{D}\leftarrow\mathbf{R}}(p_{k})$ in Eq. (4) are predicted
by the keypoint predictor. More precisely, we employ the standard U-Net
architecture that estimates $K$ heatmaps, one for each keypoint. The last
layer of the decoder uses softmax activations in order to predict heatmaps
that can be interpreted as keypoint detection confidence maps. Each expected
keypoint location is estimated as the spatial average weighted by the
corresponding confidence map, as in [29, 25].
Note that if we set $J_{k}=\mathbb{1}$ (where $\mathbb{1}$ is the $2\times 2$ identity
matrix), we recover the motion model of Monkey-Net. Therefore, Monkey-Net uses a
zeroth-order approximation of
$\mathcal{T}_{\mathbf{S}\leftarrow\mathbf{D}}(z)-z$.
For both frames $\mathbf{S}$ and $\mathbf{D}$, the keypoint predictor network
also outputs four additional channels for each keypoint. From these channels,
we obtain the coefficients of the matrices
$\frac{d}{dp}\mathcal{T}_{\mathbf{S}\leftarrow\mathbf{R}}(p)|_{p=p_{k}}$ and
$\frac{d}{dp}\mathcal{T}_{\mathbf{D}\leftarrow\mathbf{R}}(p)|_{p=p_{k}}$ in
Eq. (5) by computing a spatial weighted average, using the corresponding
keypoint confidence map as weights.
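To make this concrete, here is a minimal PyTorch sketch of the two extraction steps; the function names, the tensor shapes and the $[-1,1]$ coordinate convention are our assumptions, not the released implementation.

```python
import torch
import torch.nn.functional as F

def heatmaps_to_keypoints(raw_heatmaps):
    # raw_heatmaps: (B, K, H, W); a spatial softmax gives the confidence maps
    B, K, H, W = raw_heatmaps.shape
    probs = F.softmax(raw_heatmaps.view(B, K, -1), dim=-1).view(B, K, H, W)
    ys = torch.linspace(-1.0, 1.0, H, device=raw_heatmaps.device)
    xs = torch.linspace(-1.0, 1.0, W, device=raw_heatmaps.device)
    grid_y, grid_x = torch.meshgrid(ys, xs, indexing="ij")
    # expected keypoint location = spatial average weighted by the confidence map
    kp_x = (probs * grid_x).sum(dim=(2, 3))
    kp_y = (probs * grid_y).sum(dim=(2, 3))
    return torch.stack([kp_x, kp_y], dim=-1), probs   # (B, K, 2), (B, K, H, W)

def channels_to_jacobians(jac_channels, probs):
    # jac_channels: (B, K, 4, H, W) raw coefficients; probs: (B, K, H, W)
    # spatial weighted average of the four raw channels, as described above
    jac = (jac_channels * probs.unsqueeze(2)).sum(dim=(3, 4))   # (B, K, 4)
    return jac.view(jac.shape[0], jac.shape[1], 2, 2)           # (B, K, 2, 2)
```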
Combining Local Motions. We employ a convolutional network $P$ to estimate
$\mathcal{\hat{T}}_{\mathbf{S}\leftarrow\mathbf{D}}$ from the set of Taylor
approximations of $\mathcal{T}_{\mathbf{S}\leftarrow\mathbf{D}}(z)$ in the
keypoints and the original source frame $\mathbf{S}$. Importantly, since
$\mathcal{\hat{T}}_{\mathbf{S}\leftarrow\mathbf{D}}$ maps each pixel location
in $\mathbf{D}$ to its corresponding location in $\mathbf{S}$, the local
patterns in $\mathcal{\hat{T}}_{\mathbf{S}\leftarrow\mathbf{D}}$, such as
edges or texture, are pixel-to-pixel aligned with $\mathbf{D}$ but not with
$\mathbf{S}$. This misalignment makes it harder for the network to
predict $\mathcal{\hat{T}}_{\mathbf{S}\leftarrow\mathbf{D}}$ from
$\mathbf{S}$. In order to provide inputs already roughly aligned with
$\mathcal{\hat{T}}_{\mathbf{S}\leftarrow\mathbf{D}}$, we warp the source frame
$\mathbf{S}$ according to local transformations estimated in Eq. (4). Thus, we
obtain $K$ transformed images $\mathbf{S}^{1},\dots\mathbf{S}^{K}$ that are
each aligned with $\mathcal{\hat{T}}_{\mathbf{S}\leftarrow\mathbf{D}}$ in the
neighbourhood of a keypoint. Importantly, we also consider an additional image
$\mathbf{S}^{0}=\mathbf{S}$ for the background.
For each keypoint $p_{k}$ we additionally compute heatmaps $\mathbf{H}_{k}$
indicating to the dense motion network where each transformation happens. Each
$\mathbf{H}_{k}(z)$ is implemented as the difference of two heatmaps centered
at $\mathcal{T}_{\mathbf{D}\leftarrow\mathbf{R}}(p_{k})$ and
$\mathcal{T}_{\mathbf{S}\leftarrow\mathbf{R}}(p_{k})$:
$\mathbf{H}_{k}(z)=\exp\left(-\frac{\left(\mathcal{T}_{\mathbf{D}\leftarrow\mathbf{R}}(p_{k})-z\right)^{2}}{\sigma}\right)-\exp\left(-\frac{\left(\mathcal{T}_{\mathbf{S}\leftarrow\mathbf{R}}(p_{k})-z\right)^{2}}{\sigma}\right).$
(6)
In all our experiments, we employ $\sigma=0.01$ following Jakab et al. [18].
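A short sketch of Eq. (6) is given below; the dense coordinate `grid` and the tensor shapes are assumptions of ours.

```python
import torch

def keypoint_heatmaps(kp_d, kp_s, grid, sigma=0.01):
    # kp_d, kp_s: (K, 2) keypoints T_{D<-R}(p_k), T_{S<-R}(p_k); grid: (H, W, 2)
    def gaussian(kp):
        sq_dist = ((grid.unsqueeze(0) - kp.view(-1, 1, 1, 2)) ** 2).sum(-1)
        return torch.exp(-sq_dist / sigma)
    # difference of Gaussians, one per keypoint: (K, H, W)
    return gaussian(kp_d) - gaussian(kp_s)
```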
The heatmaps $\mathbf{H}_{k}$ and the transformed images
$\mathbf{S}^{0},\dots\mathbf{S}^{K}$ are concatenated and processed by a U-Net
[26]. $\mathcal{\hat{T}}_{\mathbf{S}\leftarrow\mathbf{D}}$ is estimated using
a part-based model inspired by Monkey-Net [29]. We assume that an object is
composed of $K$ rigid parts and that each part is moved according to Eq. (4).
Therefore, we estimate $K+1$ masks $\mathbf{M}_{k}$, $k=0,\dots,K$, that indicate
where each local transformation holds. The final dense motion prediction
$\mathcal{\hat{T}}_{\mathbf{S}\leftarrow\mathbf{D}}(z)$ is given by:
$\mathcal{\hat{T}}_{\mathbf{S}\leftarrow\mathbf{D}}(z)=\mathbf{M}_{0}z+\sum_{k=1}^{K}{\mathbf{M}_{k}\left(\mathcal{T}_{\mathbf{S}\leftarrow\mathbf{R}}(p_{k})+J_{k}(z-\mathcal{T}_{\mathbf{D}\leftarrow\mathbf{R}}(p_{k}))\right)}$
(7)
Note that the term $\mathbf{M}_{0}z$ is included in order to model non-moving
parts such as the background.
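Eq. (7) can be sketched as follows, assuming the masks are a softmax over the $K+1$ channels; shapes and helper names are ours.

```python
import torch

def combine_local_motions(masks, kp_s, kp_d, jacobians, grid):
    # masks: (B, K+1, H, W); kp_s, kp_d: (B, K, 2); jacobians: (B, K, 2, 2)
    # grid: (H, W, 2) holds the identity coordinates z
    B, _, H, W = masks.shape
    z = grid.view(1, 1, H, W, 2)
    # local motions: T_{S<-R}(p_k) + J_k (z - T_{D<-R}(p_k))
    diff = z - kp_d.view(B, -1, 1, 1, 2)
    local = kp_s.view(B, -1, 1, 1, 2) + \
        torch.einsum("bkij,bkhwj->bkhwi", jacobians, diff)
    # prepend the identity motion M_0 z for the background
    motions = torch.cat([z.expand(B, 1, H, W, 2), local], dim=1)
    return (masks.unsqueeze(-1) * motions).sum(dim=1)   # (B, H, W, 2)
```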
### 3.2 Occlusion-aware Image Generation
As mentioned in Sec. 3, the source image $\mathbf{S}$ is not pixel-to-pixel
aligned with the image to be generated $\hat{\mathbf{D}}$. In order to handle
this misalignment, we use a feature warping strategy similar to [30, 29, 15].
More precisely, after two down-sampling convolutional blocks, we obtain a
feature map $\boldsymbol{\xi}\in\mathbb{R}^{H^{\prime}\times W^{\prime}}$ of
dimension $H^{\prime}\times W^{\prime}$. We then warp $\boldsymbol{\xi}$
according to $\mathcal{\hat{T}}_{\mathbf{S}\leftarrow\mathbf{D}}$. In the
presence of occlusions in $\mathbf{S}$, optical flow may not be sufficient to
generate $\hat{\mathbf{D}}$. Indeed, the occluded parts in $\mathbf{S}$ cannot
be recovered by image-warping and thus should be inpainted. Consequently, we
introduce an occlusion map
$\mathcal{\hat{O}}_{\mathbf{S}\leftarrow\mathbf{D}}\in[0,1]^{H^{\prime}\times
W^{\prime}}$ to mask out the feature map regions that should be inpainted.
Thus, the occlusion mask diminishes the impact of the features corresponding
to the occluded parts. The transformed feature map is written as:
$\boldsymbol{\xi}^{\prime}=\mathcal{\hat{O}}_{\mathbf{S}\leftarrow\mathbf{D}}\odot
f_{w}(\boldsymbol{\xi},\mathcal{\hat{T}}_{\mathbf{S}\leftarrow\mathbf{D}})$
(8)
where $f_{w}(\cdot,\cdot)$ denotes the back-warping operation and $\odot$
denotes the Hadamard product. We estimate the occlusion mask from our sparse
keypoint representation, by adding a channel to the final layer of the dense
motion network. Finally, the transformed feature map
$\boldsymbol{\xi}^{\prime}$ is fed to subsequent network layers of the
generation module (see _Sup. Mat._) to render the sought image.
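Eq. (8) amounts to a backward warp followed by a pointwise product; a minimal sketch using `F.grid_sample` as the warping primitive (our choice) is:

```python
import torch.nn.functional as F

def occlusion_aware_warp(features, flow, occlusion_mask):
    # features: (B, C, H', W'); flow: (B, H', W', 2) in [-1, 1]
    # occlusion_mask: (B, 1, H', W') with values in [0, 1]
    warped = F.grid_sample(features, flow, align_corners=True)  # f_w(xi, T_hat)
    return occlusion_mask * warped                              # Hadamard product
```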
### 3.3 Training Losses
We train our system in an end-to-end fashion, combining several losses. First,
as our main driving loss, we use a reconstruction loss based on the perceptual
loss of Johnson et al. [19] with a pre-trained VGG-19 network; the loss follows
the implementation of Wang et al. [38]. With the input driving frame
$\mathbf{D}$ and the corresponding reconstructed frame $\mathbf{\hat{D}}$, the
reconstruction loss is written as:
$L_{rec}(\mathbf{\hat{D}},\mathbf{D})=\sum_{i=1}^{I}\left|N_{i}(\mathbf{\hat{D}})-N_{i}(\mathbf{D})\right|,$
(9)
where $N_{i}(\cdot)$ is the $i^{th}$ channel feature extracted from a specific
VGG-19 layer and $I$ is the number of feature channels in this layer.
Additionally, we apply this loss at several resolutions, forming a pyramid
obtained by down-sampling $\mathbf{\hat{D}}$ and $\mathbf{D}$, similarly to
MS-SSIM [40, 33]. The resolutions are $256\times 256$, $128\times 128$,
$64\times 64$ and $32\times 32$, giving 20 loss terms in total.
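A sketch of the resulting pyramid loss is given below; the VGG-19 layer indices are illustrative (the paper does not list them here) and `vgg_features` is a hypothetical helper.

```python
import torch
import torch.nn.functional as F
import torchvision

VGG = torchvision.models.vgg19(pretrained=True).features.eval()

def vgg_features(x, layer_ids):
    # collect intermediate activations at the selected layers
    feats, out = [], x
    for i, layer in enumerate(VGG):
        out = layer(out)
        if i in layer_ids:
            feats.append(out)
    return feats

def pyramid_perceptual_loss(d_hat, d, layer_ids=(2, 7, 12, 21, 30)):
    # 4 resolutions x 5 feature layers = 20 loss terms in total
    loss = 0.0
    for scale in (1.0, 0.5, 0.25, 0.125):
        x, y = d_hat, d
        if scale != 1.0:
            x = F.interpolate(x, scale_factor=scale, mode="bilinear", align_corners=False)
            y = F.interpolate(y, scale_factor=scale, mode="bilinear", align_corners=False)
        for fx, fy in zip(vgg_features(x, layer_ids), vgg_features(y, layer_ids)):
            loss = loss + torch.abs(fx - fy).mean()   # mean absolute difference
    return loss
```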
Imposing Equivariance Constraint. Our keypoint predictor does not require any
keypoint annotations during training, which may lead to unstable performance.
The equivariance constraint is one of the most important factors driving the
discovery of unsupervised keypoints [18, 44]. It forces the model to predict
keypoints that are consistent with respect to known geometric transformations.
We use thin plate spline deformations, as they were previously used in
unsupervised keypoint detection [18, 44] and are similar to natural image
deformations.
Since our motion estimator predicts not only the keypoints but also the
Jacobians, we extend the well-known equivariance loss to additionally include
constraints on the Jacobians.
We assume that an image $\mathbf{X}$ undergoes a known spatial deformation
$\mathcal{T}_{\mathbf{X}\leftarrow\mathbf{Y}}$. In this case
$\mathcal{T}_{\mathbf{X}\leftarrow\mathbf{Y}}$ can be an affine transformation
or a thin plate spline deformation. After this deformation, we obtain a new
image $\mathbf{Y}$. Now by applying our extended motion estimator to both
images, we obtain a set of local approximations for
$\mathcal{T}_{\mathbf{X}\leftarrow\mathbf{R}}$ and
$\mathcal{T}_{\mathbf{Y}\leftarrow\mathbf{R}}$. The standard equivariance
constraint writes as:
$\mathcal{T}_{\mathbf{X}\leftarrow\mathbf{R}}\equiv\mathcal{T}_{\mathbf{X}\leftarrow\mathbf{Y}}\circ\mathcal{T}_{\mathbf{Y}\leftarrow\mathbf{R}}$
(10)
After computing the first order Taylor expansions of both sides, we obtain the
following constraints (see derivation details in Sup. Mat.):
$\mathcal{T}_{\mathbf{X}\leftarrow\mathbf{R}}(p_{k})\equiv\mathcal{T}_{\mathbf{X}\leftarrow\mathbf{Y}}\circ\mathcal{T}_{\mathbf{Y}\leftarrow\mathbf{R}}(p_{k}),$
(11)
$\left(\frac{d}{dp}\mathcal{T}_{\mathbf{X}\leftarrow\mathbf{R}}(p)\middle|_{p=p_{k}}\right)\equiv\left(\frac{d}{dp}\mathcal{T}_{\mathbf{X}\leftarrow\mathbf{Y}}(p)\middle|_{p=\mathcal{T}_{\mathbf{Y}\leftarrow\mathbf{R}}(p_{k})}\right)\left(\frac{d}{dp}\mathcal{T}_{\mathbf{Y}\leftarrow\mathbf{R}}(p)\middle|_{p=p_{k}}\right),$
(12)
Note that the constraint Eq. (11) is strictly the same as the standard
equivariance constraint for the keypoints [18, 44]. During training, we
constrain every keypoint location using a simple $L_{1}$ loss between the two
sides of Eq. (11). However, implementing the second constraint from Eq. (12)
with an $L_{1}$ loss would force the magnitude of the Jacobians towards zero
and would lead to numerical problems. To avoid this, we reformulate the
constraint in the following way:
$\mathbb{1}\equiv\left(\frac{d}{dp}\mathcal{T}_{\mathbf{X}\leftarrow\mathbf{R}}(p)\middle|_{p=p_{k}}\right)^{-1}\left(\frac{d}{dp}\mathcal{T}_{\mathbf{X}\leftarrow\mathbf{Y}}(p)\middle|_{p=\mathcal{T}_{\mathbf{Y}\leftarrow\mathbf{R}}(p_{k})}\right)\left(\frac{d}{dp}\mathcal{T}_{\mathbf{Y}\leftarrow\mathbf{R}}(p)\middle|_{p=p_{k}}\right),$
(13)
where $\mathbb{1}$ is the $2\times 2$ identity matrix. Then, the $L_{1}$ loss is
employed similarly to the keypoint location constraint. Finally, in our
preliminary experiments, we observed that our model shows low sensitivity to
the relative weights of the reconstruction and the two equivariance losses.
Therefore, we use equal loss weights in all our experiments.
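The two constraints translate into two $L_{1}$ penalties; in the sketch below, `tps` is a hypothetical object exposing the known deformation $\mathcal{T}_{\mathbf{X}\leftarrow\mathbf{Y}}$ and its Jacobian.

```python
import torch

def equivariance_losses(kp_x, jac_x, kp_y, jac_y, tps):
    # kp_*: (B, K, 2); jac_*: (B, K, 2, 2)
    # Eq. (11): keypoints of X must match the warped keypoints of Y
    loss_kp = torch.abs(kp_x - tps.transform(kp_y)).mean()
    # Eq. (13): the product of the three Jacobians must be the identity
    product = torch.inverse(jac_x) @ tps.jacobian(kp_y) @ jac_y
    identity = torch.eye(2, device=product.device).expand_as(product)
    loss_jac = torch.abs(product - identity).mean()
    return loss_kp + loss_jac   # equal weights, as in all our experiments
```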
### 3.4 Testing Stage: Relative Motion Transfer
At this stage our goal is to animate an object in a source frame
$\mathbf{S}_{1}$ using the driving video $\mathbf{D}_{1},\dots\mathbf{D}_{T}$.
Each frame $\mathbf{D}_{t}$ is independently processed to obtain
$\mathbf{S}_{t}$. Rather than transferring the motion encoded in
$\mathcal{T}_{\mathbf{S}_{1}\leftarrow\mathbf{D}_{t}}(p_{k})$ to $\mathbf{S}_{1}$,
we transfer the relative motion between $\mathbf{D}_{1}$ and $\mathbf{D}_{t}$
to $\mathbf{S}_{1}$. In other words, we apply a transformation
$\mathcal{T}_{\mathbf{D}_{t}\leftarrow\mathbf{D}_{1}}(p)$ to the neighbourhood
of each keypoint $p_{k}$:
$\mathcal{T}_{\mathbf{S}_{1}\leftarrow\mathbf{S}_{t}}(z)\approx\mathcal{T}_{\mathbf{S}_{1}\leftarrow\mathbf{R}}(p_{k})+J_{k}(z-\mathcal{T}_{\mathbf{S}_{1}\leftarrow\mathbf{R}}(p_{k})+\mathcal{T}_{\mathbf{D}_{1}\leftarrow\mathbf{R}}(p_{k})-\mathcal{T}_{\mathbf{D}_{t}\leftarrow\mathbf{R}}(p_{k}))$
(14)
with
$J_{k}=\left(\frac{d}{dp}\mathcal{T}_{\mathbf{D}_{1}\leftarrow\mathbf{R}}(p)\middle|_{p=p_{k}}\right)\left(\frac{d}{dp}\mathcal{T}_{\mathbf{D}_{t}\leftarrow\mathbf{R}}(p)\middle|_{p=p_{k}}\right)^{-1}$
(15)
Detailed mathematical derivations are provided in Sup. Mat. Intuitively, we
transform the neighbourhood of each keypoint $p_{k}$ in $\mathbf{S}_{1}$
according to its local deformation in the driving video. Indeed, transferring
relative motion rather than absolute coordinates allows us to transfer only the
relevant motion patterns, while preserving global object geometry. Conversely,
when transferring absolute coordinates, as in X2Face [41], the generated frame
inherits the object proportions of the driving video. It is important to note
that one limitation of transferring relative motion is that we need to assume
that the objects in $\mathbf{S}_{1}$ and $\mathbf{D}_{1}$ have similar poses
(see [29]). Without initial rough alignment, Eq. (14) may lead to absolute
keypoint locations physically impossible for the object of interest.
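A sketch of the per-keypoint quantities entering Eqs. (14) and (15) follows; shapes and the helper name are assumptions of ours.

```python
import torch

def relative_motion_params(kp_s1, kp_d1, kp_dt, jac_d1, jac_dt):
    # kp_*: (K, 2) values T_{S1<-R}(p_k), T_{D1<-R}(p_k), T_{Dt<-R}(p_k)
    # jac_*: (K, 2, 2) the corresponding Jacobians
    j_k = jac_d1 @ torch.inverse(jac_dt)   # Eq. (15)
    # Eq. (14) reads T(z) ~ kp_s1 + J_k (z - anchor), with this anchor point:
    anchor = kp_s1 - kp_d1 + kp_dt
    return anchor, j_k
```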
## 4 Experiments
Datasets. We train and test our method on four different datasets containing
various objects. Our model is capable of rendering videos of much higher
resolution compared to [29] in all our experiments.
* •
The _VoxCeleb_ dataset [23] is a face dataset of 22496 videos, extracted from
YouTube videos. For pre-processing, we extract an initial bounding box in the
first video frame. We track this face until it is too far away from the
initial position. Then, we crop the video frames using the smallest crop
containing all the bounding boxes. The process is repeated until the end of
the sequence. We filter out sequences that have resolution lower than
$256\times 256$ and the remaining videos are resized to $256\times 256$
preserving the aspect ratio. Note that, compared to X2Face [41], we obtain
more natural videos, where faces move freely within the bounding box. Overall,
we obtain 19522 training videos and 525 test videos,
with lengths varying from 64 to 1024 frames.
Table 1: Quantitative ablation study for video reconstruction on _Tai-Chi-HD_.

| Method | $\mathcal{L}_{1}$ | (AKD, MKR) | AED |
|---|---|---|---|
| _Baseline_ | 0.073 | (8.945, 0.099) | 0.235 |
| _Pyr._ | 0.069 | (9.407, 0.065) | 0.213 |
| _Pyr._ +$\mathcal{O}_{\mathbf{S}\leftarrow\mathbf{D}}$ | 0.069 | (8.773, 0.050) | 0.205 |
| _Jac. w/o Eq. (12)_ | 0.073 | (9.887, 0.052) | 0.220 |
| _Full_ | 0.063 | (6.862, 0.036) | 0.179 |
Table 2: Paired user study: user preferences in favour of our approach.

| Dataset | X2Face [41] | Monkey-Net [29] |
|---|---|---|
| _Tai-Chi-HD_ | 92.0% | 80.6% |
| _VoxCeleb_ | 95.8% | 68.4% |
| _Nemo_ | 79.8% | 60.6% |
| _Bair_ | 95.0% | 67.0% |
Figure 3: Qualitative ablation on _Tai-Chi-HD_ (image grid; rows: input $\mathbf{D}$, _Baseline_, _Pyr._, _Pyr._ +$\mathcal{O}_{\mathbf{S}\leftarrow\mathbf{D}}$, _Jac. w/o Eq. (12)_, _Full_).
* •
The UvA-_Nemo_ dataset [9] is a facial analysis dataset that consists of 1240
videos. We apply the exact same pre-processing as for _VoxCeleb_. Each video
starts with a neutral expression. Similar to Wang et al. [39], we use 1116
videos for training and 124 for evaluation.
* •
The _BAIR_ robot pushing dataset [10] contains videos collected by a Sawyer
robotic arm pushing diverse objects over a table. It consists of 42880
training and 128 test videos. Each video is 30 frames long and has a $256\times
256$ resolution.
* •
Following Tulyakov et al. [35], we collected 280 tai-chi videos from YouTube.
We use 252 videos for training and 28 for testing. Each video is split in
short clips, as described in the pre-processing of the _VoxCeleb_ dataset. We
retain only high-quality videos and resize all the clips to $256\times 256$
pixels (instead of $64\times 64$ pixels in [35]). Finally, we obtain 3049 and 285
video chunks for training and testing respectively with video length varying
from 128 to 1024 frames. This dataset is referred to as the _Tai-Chi-HD_
dataset. The dataset will be made publicly available.
Evaluation Protocol. Evaluating the quality of image animation is not obvious,
since ground truth animations are not available. We follow the evaluation
protocol of Monkey-Net [29]. First, we quantitatively evaluate each method on
the "proxy" task of video reconstruction. This task consists of reconstructing
the input video from a representation in which appearance and motion are
decoupled. In our case, we reconstruct the input video by combining the sparse
motion representation in (2) of each frame and the first video frame. Second,
we evaluate our model on image animation according to a user study. In all
experiments, we use $K=10$ as in [29]. Other implementation details are given
in _Sup. Mat._
Metrics. To evaluate video reconstruction, we adopt the metrics proposed in
Monkey-Net [29]:
* •
$\mathcal{L}_{1}$. We report the average $\mathcal{L}_{1}$ distance between
the generated and the ground-truth videos.
* •
_Average Keypoint Distance (AKD)_. For the _Tai-Chi-HD_ , _VoxCeleb_ and
_Nemo_ datasets, we use 3rd-party pre-trained keypoint detectors in order to
evaluate whether the motion of the input video is preserved. For the
_VoxCeleb_ and _Nemo_ datasets we use the facial landmark detector of Bulat et
al. [5]. For the _Tai-Chi-HD_ dataset, we employ the human-pose estimator of
Cao et al. [7]. These keypoints are independently computed for each frame. AKD
is obtained by computing the average distance between the detected keypoints
of the ground truth and of the generated video.
* •
_Missing Keypoint Rate (MKR)_. In the case of _Tai-Chi-HD_ , the human-pose
estimator returns an additional binary label for each keypoint indicating
whether or not the keypoints were successfully detected. Therefore, we also
report the MKR defined as the percentage of keypoints that are detected in the
ground truth frame but not in the generated one. This metric assesses the
appearance quality of each generated frame.
* •
_Average Euclidean Distance (AED)_. Considering an externally trained image
representation, we report the average Euclidean distance between the ground
truth and generated frame representation, similarly to Esser et al. [11]. We
employ the feature embedding used in Monkey-Net [29].
Ablation Study. We compare the following variants of our model. _Baseline_:
the simplest model, trained without the occlusion mask
($\mathcal{O}_{\mathbf{S}\leftarrow\mathbf{D}}$=1 in Eq. (8)) and without
Jacobians ($J_{k}=\mathbb{1}$ in Eq. (4)), supervised with $L_{rec}$ at the
highest resolution only; _Pyr._: the pyramid loss is added to _Baseline_;
_Pyr._ +$\mathcal{O}_{\mathbf{S}\leftarrow\mathbf{D}}$: with respect to _Pyr._,
we replace the generator network with the occlusion-aware network; _Jac. w/o
Eq. (12)_: our model with local affine transformations but without the
equivariance constraint on the Jacobians, Eq. (12); _Full_: the full model
including the local affine transformations described in Sec. 3.1.
In Tab. 1, we report the quantitative ablation. First, the pyramid loss leads
to better results according to all the metrics except _AKD_. Second, adding
$\mathcal{O}_{\mathbf{S}\leftarrow\mathbf{D}}$ to the model consistently
improves all the metrics with respect to _Pyr._. This illustrates the benefit
of explicitly modeling occlusions. We found that without equivariance
constraint over the jacobians, $J_{k}$ becomes unstable which leads to poor
motion estimations. Finally, our _Full_ model further improves all the
metrics. In particular, we note that, with respect to the _Baseline_ model,
the MKR of the full model is smaller by a factor of 2.75. This shows that our
rich motion representation helps generate more realistic images. These results
are confirmed by our qualitative evaluation in Fig. 3, where we compare the
_Baseline_ and the _Full_ models. In these experiments, each frame
$\mathbf{D}$ of the input video is reconstructed from its first frame (first
column) and the estimated keypoint trajectories. We note that the _Baseline_
model does not locate any keypoints in the arms area. Consequently, when the
pose difference with the initial pose increases, the model cannot reconstruct
the video (columns 3,4 and 5). In contrast, the _Full_ model learns to detect
a keypoint on each arm, and therefore, to more accurately reconstruct the
input video even in the case of complex motion.
Comparison with State of the Art. We now compare our method with state of the
art for the video reconstruction task as in [29]. To the best of our
knowledge, X2Face [41] and Monkey-Net [29] are the only previous approaches
for model-free image animation. Quantitative results are reported in Tab. 3.
We observe that our approach consistently improves every single metric for
each of the four different datasets. Even on the two face datasets, _VoxCeleb_
and _Nemo_ datasets, our approach clearly outperforms X2Face that was
originally proposed for face generation. The better performance of our
approach compared to X2Face is especially impressive since X2Face exploits a
larger motion embedding (128 floats) than our approach (60=K*(2+4) floats). Compared
to Monkey-Net that uses a motion representation with a similar dimension
(50=K*(2+3)), the advantages of our approach are clearly visible on the _Tai-
Chi-HD_ dataset, which contains highly non-rigid objects (i.e., the human body).
Table 3: Video reconstruction: comparison with the state of the art on four
different datasets.

| | _Tai-Chi-HD_ | | | _VoxCeleb_ | | | _Nemo_ | | | _Bair_ |
|---|---|---|---|---|---|---|---|---|---|---|
| Method | $\mathcal{L}_{1}$ | (AKD, MKR) | AED | $\mathcal{L}_{1}$ | AKD | AED | $\mathcal{L}_{1}$ | AKD | AED | $\mathcal{L}_{1}$ |
| X2Face [41] | 0.080 | (17.654, 0.109) | 0.272 | 0.078 | 7.687 | 0.405 | 0.031 | 3.539 | 0.221 | 0.065 |
| Monkey-Net [29] | 0.077 | (10.798, 0.059) | 0.228 | 0.049 | 1.878 | 0.199 | 0.018 | 1.285 | 0.077 | 0.034 |
| Ours | 0.063 | (6.862, 0.036) | 0.179 | 0.043 | 1.294 | 0.140 | 0.016 | 1.119 | 0.048 | 0.027 |
We now report a qualitative comparison for image animation. Generated
sequences are reported in Fig. 4. The results are well in line with the
quantitative evaluation in Tab. 3. Indeed, in both examples, X2Face and
Monkey-Net are not able to correctly transfer the body motion in the driving
video, instead warping the human body in the source image as a blob.
Conversely, our approach is able to generate significantly better looking
videos in which each body part is independently animated. This qualitative
evaluation illustrates the potential of our rich motion description. We
complete our evaluation with a user study. We ask users to select the most
realistic image animation. Each question consists of the source image, the
driving video, and the corresponding results of our method and a competitive
method. We require each question to be answered by 10 AMT workers. This
evaluation is repeated on 50 different input pairs. Results are reported in
Tab. 2. We observe that our method is clearly preferred over the
competitor methods. Interestingly, the largest difference with the state of
the art is obtained on _Tai-Chi-HD_ : the most challenging dataset in our
evaluation due to its rich motions.
Figure 4: Qualitative comparison with state of the art for the task of image animation on two sequences and two source images from the _Tai-Chi-HD_ dataset (image grids; rows: X2Face [41], Monkey-Net [29], Ours).
## 5 Conclusions
We presented a novel approach for image animation based on keypoints and local
affine transformations. Our novel mathematical formulation describes the
motion field between two frames and is efficiently computed by deriving a
first order Taylor expansion approximation. In this way, motion is described
as a set of keypoints displacements and local affine transformations. A
generator network combines the appearance of the source image and the motion
representation of the driving video. In addition, we proposed to explicitly
model occlusions in order to indicate to the generator network which image
parts should be inpainted. We evaluated the proposed method both
quantitatively and qualitatively, and showed that our approach clearly
outperforms the state of the art on all the benchmarks.
## References
* [1] Mohammad Babaeizadeh, Chelsea Finn, Dumitru Erhan, Roy H Campbell, and Sergey Levine. Stochastic variational video prediction. In ICLR, 2017.
* [2] Guha Balakrishnan, Amy Zhao, Adrian V Dalca, Fredo Durand, and John Guttag. Synthesizing images of humans in unseen poses. In CVPR, 2018.
* [3] Aayush Bansal, Shugao Ma, Deva Ramanan, and Yaser Sheikh. Recycle-gan: Unsupervised video retargeting. In ECCV, 2018.
* [4] Volker Blanz and Thomas Vetter. A morphable model for the synthesis of 3d faces. In SIGGRAPH, 1999.
* [5] Adrian Bulat and Georgios Tzimiropoulos. How far are we from solving the 2d & 3d face alignment problem? (and a dataset of 230,000 3d facial landmarks). In ICCV, 2017.
* [6] Chen Cao, Qiming Hou, and Kun Zhou. Displaced dynamic expression regression for real-time facial tracking and animation. TOG, 2014.
* [7] Zhe Cao, Tomas Simon, Shih-En Wei, and Yaser Sheikh. Realtime multi-person 2d pose estimation using part affinity fields. In CVPR, 2017.
* [8] Caroline Chan, Shiry Ginosar, Tinghui Zhou, and Alexei A Efros. Everybody dance now. In ECCV, 2018.
* [9] Hamdi Dibeklioğlu, Albert Ali Salah, and Theo Gevers. Are you really smiling at me? spontaneous versus posed enjoyment smiles. In ECCV, 2012.
* [10] Frederik Ebert, Chelsea Finn, Alex X Lee, and Sergey Levine. Self-supervised visual planning with temporal skip connections. In CoRL, 2017.
* [11] Patrick Esser, Ekaterina Sutter, and Björn Ommer. A variational u-net for conditional appearance and shape generation. In CVPR, 2018.
* [12] Chelsea Finn, Ian Goodfellow, and Sergey Levine. Unsupervised learning for physical interaction through video prediction. In NIPS, 2016.
* [13] Zhenglin Geng, Chen Cao, and Sergey Tulyakov. 3d guided fine-grained face manipulation. In CVPR, 2019.
* [14] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In NIPS, 2014.
* [15] Artur Grigorev, Artem Sevastopolsky, Alexander Vakhitov, and Victor Lempitsky. Coordinate-based texture inpainting for pose-guided image generation. In CVPR, 2019.
* [16] Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A Efros. Image-to-image translation with conditional adversarial networks. In CVPR, 2017.
* [17] Max Jaderberg, Karen Simonyan, Andrew Zisserman, et al. Spatial transformer networks. In NIPS, 2015.
* [18] Tomas Jakab, Ankush Gupta, Hakan Bilen, and Andrea Vedaldi. Unsupervised learning of object landmarks through conditional image generation. In NIPS, 2018.
* [19] Justin Johnson, Alexandre Alahi, and Li Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. In ECCV, 2016.
* [20] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2014.
* [21] Diederik P Kingma and Max Welling. Auto-encoding variational bayes. In ICLR, 2014.
* [22] Yahui Liu, Marco De Nadai, Gloria Zen, Nicu Sebe, and Bruno Lepri. Gesture-to-gesture translation in the wild via category-independent conditional maps. ACM MM, 2019.
* [23] A. Nagrani, J. S. Chung, and A. Zisserman. Voxceleb: a large-scale speaker identification dataset. In INTERSPEECH, 2017.
* [24] Junhyuk Oh, Xiaoxiao Guo, Honglak Lee, Richard L Lewis, and Satinder Singh. Action-conditional video prediction using deep networks in atari games. In NIPS, 2015.
* [25] Joseph P Robinson, Yuncheng Li, Ning Zhang, Yun Fu, and Sergey Tulyakov. Laplace landmark localization. In ICCV, 2019.
* [26] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In MICCAI, 2015.
* [27] Masaki Saito, Eiichi Matsumoto, and Shunta Saito. Temporal generative adversarial nets with singular value clipping. In ICCV, 2017.
* [28] Aliaksandra Shysheya, Egor Zakharov, Kara-Ali Aliev, Renat Bashirov, Egor Burkov, Karim Iskakov, Aleksei Ivakhnenko, Yury Malkov, Igor Pasechnik, Dmitry Ulyanov, Alexander Vakhitov, and Victor Lempitsky. Textured neural avatars. In CVPR, June 2019.
* [29] Aliaksandr Siarohin, Stéphane Lathuilière, Sergey Tulyakov, Elisa Ricci, and Nicu Sebe. Animating arbitrary objects via deep motion transfer. In CVPR, 2019.
* [30] Aliaksandr Siarohin, Enver Sangineto, Stéphane Lathuilière, and Nicu Sebe. Deformable gans for pose-based human image generation. In CVPR, 2018.
* [31] Nitish Srivastava, Elman Mansimov, and Ruslan Salakhudinov. Unsupervised learning of video representations using lstms. In ICML, 2015.
* [32] Hao Tang, Wei Wang, Dan Xu, Yan Yan, and Nicu Sebe. Gesturegan for hand gesture-to-gesture translation in the wild. In ACM MM, 2018.
* [33] Hao Tang, Dan Xu, Wei Wang, Yan Yan, and Nicu Sebe. Dual generator generative adversarial networks for multi-domain image-to-image translation. In ACCV, 2018.
* [34] Justus Thies, Michael Zollhofer, Marc Stamminger, Christian Theobalt, and Matthias Nießner. Face2face: Real-time face capture and reenactment of rgb videos. In CVPR, 2016.
* [35] Sergey Tulyakov, Ming-Yu Liu, Xiaodong Yang, and Jan Kautz. Mocogan: Decomposing motion and content for video generation. In CVPR, 2018.
* [36] Joost Van Amersfoort, Anitha Kannan, Marc’Aurelio Ranzato, Arthur Szlam, Du Tran, and Soumith Chintala. Transformation-based models of video sequences. arXiv preprint arXiv:1701.08435, 2017.
* [37] Carl Vondrick, Hamed Pirsiavash, and Antonio Torralba. Generating videos with scene dynamics. In NIPS, 2016.
* [38] Ting-Chun Wang, Ming-Yu Liu, Jun-Yan Zhu, Guilin Liu, Andrew Tao, Jan Kautz, and Bryan Catanzaro. Video-to-video synthesis. In NIPS, 2018.
* [39] Wei Wang, Xavier Alameda-Pineda, Dan Xu, Pascal Fua, Elisa Ricci, and Nicu Sebe. Every smile is unique: Landmark-guided diverse smile generation. In CVPR, 2018.
* [40] Zhou Wang, Eero P Simoncelli, and Alan C Bovik. Multiscale structural similarity for image quality assessment. In ACSSC, 2003.
* [41] Olivia Wiles, A Sophia Koepke, and Andrew Zisserman. X2face: A network for controlling face generation using images, audio, and pose codes. In ECCV, 2018.
* [42] Polina Zablotskaia, Aliaksandr Siarohin, Bo Zhao, and Leonid Sigal. Dwnet: Dense warp-based network for pose-guided human video generation. In BMVC, 2019.
* [43] Egor Zakharov, Aliaksandra Shysheya, Egor Burkov, and Victor Lempitsky. Few-shot adversarial learning of realistic neural talking head models. In ICCV, 2019.
* [44] Yuting Zhang, Yijie Guo, Yixin Jin, Yijun Luo, Zhiyuan He, and Honglak Lee. Unsupervised discovery of object landmarks as structural representations. In CVPR, 2018.
* [45] Long Zhao, Xi Peng, Yu Tian, Mubbasir Kapadia, and Dimitris Metaxas. Learning to forecast and refine residual motion for image-to-video generation. In ECCV, 2018.
* [46] Michael Zollhöfer, Justus Thies, Pablo Garrido, Derek Bradley, Thabo Beeler, Patrick Pérez, Marc Stamminger, Matthias Nießner, and Christian Theobalt. State of the art on monocular 3d face reconstruction, tracking, and applications. In Computer Graphics Forum, 2018.
## A Detailed Derivations
### A.1 Approximating Motion with Local Affine Transformations
Here, we detail the derivation leading to the approximation of
$\mathcal{T}_{\mathbf{S}\leftarrow\mathbf{D}}$ near the keypoint $z_{k}$ in
Eq. (4). Using first order Taylor expansion we can obtain:
$\mathcal{T}_{\mathbf{S}\leftarrow\mathbf{D}}(z)=\mathcal{T}_{\mathbf{S}\leftarrow\mathbf{D}}(z_{k})+\left(\frac{d}{dz}\mathcal{T}_{\mathbf{S}\leftarrow\mathbf{D}}(z)\middle|_{z=z_{k}}\right)(z-z_{k})+o(\|z-z_{k}\|)$
(16)
$\mathcal{T}_{\mathbf{S}\leftarrow\mathbf{D}}$ can be written as the
composition of two transformations:
$\mathcal{T}_{\mathbf{S}\leftarrow\mathbf{D}}=\mathcal{T}_{\mathbf{S}\leftarrow\mathbf{R}}\circ\mathcal{T}_{\mathbf{R}\leftarrow\mathbf{D}}$
(17)
In order to compute the zeroth order term, we estimate the transformation
$\mathcal{T}_{\mathbf{R}\leftarrow\mathbf{D}}$ near the point $z_{k}$ in the
driving frame $\mathbf{D}$, i.e.
$p_{k}=\mathcal{T}_{\mathbf{R}\leftarrow\mathbf{D}}(z_{k})$. Then we can
estimate the transformation $\mathcal{T}_{\mathbf{S}\leftarrow\mathbf{R}}$
near $p_{k}$ in the reference $\mathbf{R}$. Since
$p_{k}=\mathcal{T}_{\mathbf{R}\leftarrow\mathbf{D}}(z_{k})$ and
$\mathcal{T}^{-1}_{\mathbf{R}\leftarrow\mathbf{D}}=\mathcal{T}_{\mathbf{D}\leftarrow\mathbf{R}}$,
we can write $z_{k}=\mathcal{T}_{\mathbf{D}\leftarrow\mathbf{R}}(p_{k})$.
Consequently, we obtain:
$\mathcal{T}_{\mathbf{S}\leftarrow\mathbf{D}}(z_{k})=\mathcal{T}_{\mathbf{S}\leftarrow\mathbf{R}}\circ\mathcal{T}_{\mathbf{R}\leftarrow\mathbf{D}}(z_{k})=\mathcal{T}_{\mathbf{S}\leftarrow\mathbf{R}}\circ\mathcal{T}^{-1}_{\mathbf{D}\leftarrow\mathbf{R}}(z_{k})=\mathcal{T}_{\mathbf{S}\leftarrow\mathbf{R}}\circ\mathcal{T}^{-1}_{\mathbf{D}\leftarrow\mathbf{R}}\circ\mathcal{T}_{\mathbf{D}\leftarrow\mathbf{R}}(p_{k})=\mathcal{T}_{\mathbf{S}\leftarrow\mathbf{R}}(p_{k}).$ (18)
Concerning the first order term, we apply the function composition rule in Eq.
(17) and obtain:
$\left(\frac{d}{dz}\mathcal{T}_{\mathbf{S}\leftarrow\mathbf{D}}(z)\middle|_{z=z_{k}}\right)=\left(\frac{d}{dp}\mathcal{T}_{\mathbf{S}\leftarrow\mathbf{R}}(p)\middle|_{p=\mathcal{T}_{\mathbf{R}\leftarrow\mathbf{D}}(z_{k})}\right)\left(\frac{d}{dz}\mathcal{T}^{-1}_{\mathbf{D}\leftarrow\mathbf{R}}(z)\middle|_{z=z_{k}}\right)$
(19)
Since the matrix inverse of the Jacobian is equal to the Jacobian of the
inverse function, and since
$p_{k}=\mathcal{T}_{\mathbf{R}\leftarrow\mathbf{D}}(z_{k})$, Eq. (19) can be
rewritten:
$\left(\frac{d}{dz}\mathcal{T}_{\mathbf{S}\leftarrow\mathbf{D}}(z)\middle|_{z=z_{k}}\right)=\left(\frac{d}{dp}\mathcal{T}_{\mathbf{S}\leftarrow\mathbf{R}}(p)\middle|_{p=p_{k}}\right)\left(\frac{d}{dp}\mathcal{T}_{\mathbf{D}\leftarrow\mathbf{R}}(p)\middle|_{p=p_{k}}\right)^{-1}$
(20)
After injecting Eqs. (18) and (20) into (16), we finally obtain:
$\mathcal{T}_{\mathbf{S}\leftarrow\mathbf{D}}(z)\approx\mathcal{T}_{\mathbf{S}\leftarrow\mathbf{R}}(p_{k})+\left(\frac{d}{dp}\mathcal{T}_{\mathbf{S}\leftarrow\mathbf{R}}(p)\middle|_{p=p_{k}}\right)\left(\frac{d}{dp}\mathcal{T}_{\mathbf{D}\leftarrow\mathbf{R}}(p)\middle|_{p=p_{k}}\right)^{-1}(z-\mathcal{T}_{\mathbf{D}\leftarrow\mathbf{R}}(p_{k}))$
(21)
### A.2 Equivariance Loss
At training time, we use equivariance constraints that enforces:
$\mathcal{T}_{\mathbf{X}\leftarrow\mathbf{R}}\equiv\mathcal{T}_{\mathbf{X}\leftarrow\mathbf{Y}}\circ\mathcal{T}_{\mathbf{Y}\leftarrow\mathbf{R}}$
(22)
After applying first order Taylor expansion on the left-hand side, we obtain:
$\mathcal{T}_{\mathbf{X}\leftarrow\mathbf{R}}(p)=\mathcal{T}_{\mathbf{X}\leftarrow\mathbf{R}}(p_{k})+\left(\frac{d}{dp}\mathcal{T}_{\mathbf{X}\leftarrow\mathbf{R}}(p)\middle|_{p=p_{k}}\right)(p-p_{k})+o(\|p-p_{k}\|).$
(23)
After applying first order Taylor expansion on the right-hand side in Eq.
(22), we obtain:
$\mathcal{T}_{\mathbf{X}\leftarrow\mathbf{Y}}\circ\mathcal{T}_{\mathbf{Y}\leftarrow\mathbf{R}}(p)=\mathcal{T}_{\mathbf{X}\leftarrow\mathbf{Y}}\circ\mathcal{T}_{\mathbf{Y}\leftarrow\mathbf{R}}(p_{k})+\left(\frac{d}{dp}\mathcal{T}_{\mathbf{X}\leftarrow\mathbf{Y}}\circ\mathcal{T}_{\mathbf{Y}\leftarrow\mathbf{R}}\middle|_{p=p_{k}}\right)(p-p_{k})+o(\|p-p_{k}\|),$
(24)
We can further simplify this expression using derivative of function
composition:
$\left(\frac{d}{dp}\mathcal{T}_{\mathbf{X}\leftarrow\mathbf{Y}}\circ\mathcal{T}_{\mathbf{Y}\leftarrow\mathbf{R}}\middle|_{p=p_{k}}\right)=\left(\frac{d}{dp}\mathcal{T}_{\mathbf{X}\leftarrow\mathbf{Y}}(p)\middle|_{p=\mathcal{T}_{\mathbf{Y}\leftarrow\mathbf{R}}(p_{k})}\right)\left(\frac{d}{dp}\mathcal{T}_{\mathbf{Y}\leftarrow\mathbf{R}}(p)\middle|_{p=p_{k}}\right).$
(25)
Eq. (22) holds only when the corresponding coefficients in the Taylor
expansions of the left- and right-hand sides are equal. Thus, we obtain the
following constraints:
$\mathcal{T}_{\mathbf{X}\leftarrow\mathbf{R}}(p_{k})\equiv\mathcal{T}_{\mathbf{X}\leftarrow\mathbf{Y}}\circ\mathcal{T}_{\mathbf{Y}\leftarrow\mathbf{R}}(p_{k}),$
(26)
and
$\left(\frac{d}{dp}\mathcal{T}_{\mathbf{X}\leftarrow\mathbf{R}}(p)\middle|_{p=p_{k}}\right)\equiv\left(\frac{d}{dp}\mathcal{T}_{\mathbf{X}\leftarrow\mathbf{Y}}(p)\middle|_{p=\mathcal{T}_{\mathbf{Y}\leftarrow\mathbf{R}}(p_{k})}\right)\left(\frac{d}{dp}\mathcal{T}_{\mathbf{Y}\leftarrow\mathbf{R}}(p)\middle|_{p=p_{k}}\right).$
(27)
### A.3 Transferring Relative Motion
In order to transfer only relative motion patterns, we propose to estimate
$\mathcal{T}_{\mathbf{S}_{t}\leftarrow\mathbf{R}}(p)$ near the keypoint
$p_{k}$ by shifting the motion in the driving video to the location of
keypoint $p_{k}$ in the source. To this aim, we introduce
$\mathcal{V}_{\mathbf{S}_{1}\leftarrow\mathbf{D_{1}}}(p_{k})=\mathcal{T}_{\mathbf{S}_{1}\leftarrow\mathbf{R}}(p_{k})-\mathcal{T}_{\mathbf{D}_{1}\leftarrow\mathbf{R}}(p_{k})\in\mathbb{R}^{2}$
that is the 2D vector from the landmark position $p_{k}$ in $\mathbf{D}_{1}$
to its position in $\mathbf{S}_{1}$. We proceed as follows. First, we shift
point coordinates according to
$-\mathcal{V}_{\mathbf{S}_{1}\leftarrow\mathbf{D_{1}}}(p_{k})$ in order to
obtain coordinates in $\mathbf{D_{1}}$. Second, we apply the transformation
$\mathcal{T}_{\mathbf{D}_{t}\leftarrow\mathbf{D}_{1}}$. Finally, we translate
the points back in the original coordinate space using
$\mathcal{V}_{\mathbf{S}_{1}\leftarrow\mathbf{D_{1}}}(p_{k})$. Formally, it
can be written:
$\mathcal{T}_{\mathbf{S}_{t}\leftarrow\mathbf{R}}(p)=\mathcal{T}_{\mathbf{D}_{t}\leftarrow\mathbf{D}_{1}}\big{(}\mathcal{T}_{\mathbf{S}_{1}\leftarrow\mathbf{R}}(p)-\mathcal{V}_{\mathbf{S}_{1}\leftarrow\mathbf{D}_{1}}(p_{k})\big{)}+\mathcal{V}_{\mathbf{S}_{1}\leftarrow\mathbf{D}_{1}}(p_{k})$
Now, we can compute the value and Jacobian in the $p_{k}$:
$\mathcal{T}_{\mathbf{S}_{t}\leftarrow\mathbf{R}}(p_{k})=\mathcal{T}_{\mathbf{D}_{t}\leftarrow\mathbf{D}_{1}}\circ\mathcal{T}_{\mathbf{D}_{1}\leftarrow\mathbf{R}}(p_{k})-\mathcal{T}_{\mathbf{D}_{1}\leftarrow\mathbf{R}}(p_{k})+\mathcal{T}_{\mathbf{S}_{1}\leftarrow\mathbf{R}}(p_{k})$
and:
$\left(\frac{d}{dp}\mathcal{T}_{\mathbf{S}_{t}\leftarrow\mathbf{R}}(p)\middle|_{p=p_{k}}\right)=\left(\frac{d}{dp}\mathcal{T}_{\mathbf{D}_{t}\leftarrow\mathbf{R}}(p)\middle|_{p=p_{k}}\right)\left(\frac{d}{dp}\mathcal{T}_{\mathbf{D}_{1}\leftarrow\mathbf{R}}(p)\middle|_{p=p_{k}}\right)^{-1}\left(\frac{d}{dp}\mathcal{T}_{\mathbf{S}_{1}\leftarrow\mathbf{R}}(p)\middle|_{p=p_{k}}\right).$
Now using Eq. (21) and treating $\mathbf{S}_{1}$ as source and
$\mathbf{S}_{t}$ as driving frame, we obtain:
$\mathcal{T}_{\mathbf{S}_{1}\leftarrow\mathbf{S}_{t}}(z)\approx\mathcal{T}_{\mathbf{S}_{1}\leftarrow\mathbf{R}}(p_{k})+J_{k}(z-\mathcal{T}_{\mathbf{S}_{1}\leftarrow\mathbf{R}}(p_{k})+\mathcal{T}_{\mathbf{D}_{1}\leftarrow\mathbf{R}}(p_{k})-\mathcal{T}_{\mathbf{D}_{t}\leftarrow\mathbf{R}}(p_{k}))$
(28)
with
$J_{k}=\left(\frac{d}{dp}\mathcal{T}_{\mathbf{D}_{1}\leftarrow\mathbf{R}}(p)\middle|_{p=p_{k}}\right)\left(\frac{d}{dp}\mathcal{T}_{\mathbf{D}_{t}\leftarrow\mathbf{R}}(p)\middle|_{p=p_{k}}\right)^{-1}.$
(29)
Note that, here,
$\left(\frac{d}{dp}\mathcal{T}_{\mathbf{S}_{1}\leftarrow\mathbf{R}}(p)\middle|_{p=p_{k}}\right)$
canceled out.
## B Implementation details
### B.1 Architecture details
In order to reduce the memory and computational requirements of our model, the
keypoint detector and the dense motion predictor both work at a resolution of
$64\times 64$ (instead of $256\times 256$). For the two networks of the motion
module, we employ an architecture based on U-Net [26] with five
$conv_{3\times 3}$-$bn$-$relu$-$avgpool_{2\times 2}$ blocks in the encoders and
five $upsample_{2\times 2}$-$conv_{3\times 3}$-$bn$-$relu$ blocks in the
decoders. In the generator network, we use the Johnson architecture [19] with
two down-sampling blocks, six residual blocks and two up-sampling blocks.
We train our network using the Adam [20] optimizer with learning rate
$2\times 10^{-4}$ and batch size 20. We employ learning rate decay, dropping
the learning rate at $\frac{T}{2}$ and $\frac{3T}{4}$ iterations, where $T$ is
the total number of iterations. We chose $T\approx 100k$ for _Tai-Chi-HD_ and
_VoxCeleb_, and $T\approx 40k$ for _Nemo_ and _Bair_. The model converges in
approximately 2 days using 2 TitanX GPUs for _Tai-Chi-HD_ and _VoxCeleb_.
### B.2 Equivariance loss implementation
As explained above, our equivariance losses force the keypoint detector to be
equivariant to some transformations
$\mathcal{T}_{\mathbf{X}\leftarrow\mathbf{Y}}$. In our experiments,
$\mathcal{T}_{\mathbf{X}\leftarrow\mathbf{Y}}$ is implemented using randomly
sampled thin plate splines. We sample the spline parameters from normal
distributions with zero mean and variance equal to 0.005 for the deformation
component and 0.05 for the affine component. For the deformation component, we
use a uniform $5\times 5$ grid of control points.
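A sketch of the random deformation sampling is given below; the function name and return layout are our assumptions, and the values above are used directly as spread parameters.

```python
import torch

def sample_tps_parameters(batch, grid_points=5, sigma_affine=0.05, sigma_deform=0.005):
    # near-identity affine part: identity plus Gaussian noise
    affine = torch.eye(2, 3).unsqueeze(0) + \
        torch.normal(0.0, sigma_affine, size=(batch, 2, 3))
    # thin plate spline weights on a uniform grid_points x grid_points control grid
    control = torch.normal(0.0, sigma_deform, size=(batch, grid_points ** 2, 2))
    return affine, control
```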
## C Additional experiments
### C.1 Image Animation
In this section, we report additional qualitative results.
We compare our approach with X2face [41] and Monkey-Net [29]. In Fig. 5, we
show three animation examples from the _VoxCeleb_ dataset. First, X2face is
not capable of generating realistic video sequences, as we can see, for
instance, in the last frame of the last sequence. Then, Monkey-Net generates
realistic frames but fails to generate specific facial expressions as in the
third frame of the first sequence or in transferring the eye movements as in
the last two frames of the second sequence.
In Fig. 6, we show three animation examples from the _Nemo_ dataset. First, we
observe that this dataset is simpler than _VoxCeleb_ since the persons are
facing a uniformly black background. With this simpler dataset, X2Face
generates realistic videos. However, it is not capable of inpainting image
parts that are not visible in the source image. For instance, X2Face does not
generate the teeth. Our approach also performs better than Monkey-Net, as we
can see by comparing the generated teeth in the first sequence or the closed
eyes in the fourth frames of the second and third sequences.
In Fig. 7, we report additional examples for the _Tai-Chi-HD_ dataset. These
examples are well in line with what is reported in the main paper. Both X2Face
and Monkey-Net completely fail to generate realistic videos. The source images
are warped without respecting human body structure. Conversely, our approach
is able to deform the person in foreground without affecting the background.
Even though we can see few minor artifacts, our model is able to move each
body part independently following the body motion in the driving video.
Finally, in Fig. 8 we show three image animation examples on the _Bair_
dataset. Again, we see that X2Face is not able to transfer motion, since it
constantly returns frames almost identical to the source images. Compared to
Monkey-Net, our approach performs slightly better, since it better preserves
the robot arm, as we can see in the second frame of the first sequence or in
the fourth frame of the last sequence.
### C.2 Keypoint detection
We now illustrate the keypoints that are learned by our self-supervised
approach in Fig. 9. On the _Tai-Chi-HD_ dataset, the keypoints are
semantically consistent since each of them corresponds to a body part: light
green for the right foot, and blue and red for the face for instance. Note
that, a light green keypoint is constantly located in the bottom left corner
in order to model background or camera motion. On _VoxCeleb_ , we observe
that, overall, the obtained keypoints are semantically consistent except for
the yellow and green keypoints. For instance, the red and purple keypoints
constantly correspond to the nose and the chin respectively. We observe a
similar consistency for the _Nemo_ dataset. For the _Bair_ dataset, we note that
two keypoints (dark blue and light green) correspond to the robotic arm.
### C.3 Visualizing occlusion masks
In Fig. 10, we visualize the predicted occlusion masks
$\hat{\mathcal{O}}_{\mathbf{S}\leftarrow\mathbf{D}}$ on the _Tai-Chi-HD_ ,
_VoxCeleb_ and _Nemo_ datasets. In the first sequence, when the person in the
driving video is moving backward (second to fourth frames), the occlusion mask
becomes black (corresponding to 0) in the background regions that are occluded
in the source frame. It indicates that these parts cannot be generated by
warping the source image features and must be inpainted. A similar observation
can be made on the example sequence of _VoxCeleb_. Indeed, we see that when
the face is rotating, the mask has low values (dark grey) in the neck region
and on the right side of the face (in the left-hand side of the image), which
are not visible in the source frame. Then, since the driving video example from _Nemo_
contains only little motion, the predicted mask is almost completely white.
Overall, these three examples show that the occlusion masks truly indicate
occluded regions even if no specific training loss is employed in order to
lead to this behaviour. Finally, the predicted occlusion masks are more
difficult to interpret in the case of the _Bair_ dataset. Indeed, the robotic
arm is masked out in every frame whereas we could expect that the model
generates it by warping. A possible explanation is that, since in this
particular dataset the moving object is always the same, the network can
generate it without warping the source image. We also observe that the masks
have low values for the regions corresponding to the arm shadow. This is
explained by the fact that shadows cannot be obtained by image warping and
that they need to be
added by the generator.
Figure 5: Qualitative comparison with state of the art for the task of image animation on different sequences from the _VoxCeleb_ dataset (image grids; rows: X2face [41], Monkey-Net [29], Ours).
Figure 6: Qualitative comparison with state of the art for the task of image animation on different sequences from the _Nemo_ dataset (image grids; rows: X2face [41], Monkey-Net [29], Ours).
Figure 7: Qualitative comparison with state of the art for the task of image animation on different sequences from the _Tai-Chi-HD_ dataset (image grids; rows: X2face [41], Monkey-Net [29], Ours).
Figure 8: Qualitative comparison with state of the art for the task of image animation on different sequences from the _Bair_ dataset (image grids; rows: X2face [41], Monkey-Net [29], Ours).
Figure 9: Keypoint visualization for the four datasets (_Tai-Chi-HD_, _VoxCeleb_, _Nemo_ and _Bair_).
Figure 10: Visualization of occlusion masks and images obtained after deformation on the _Tai-Chi-HD_, _VoxCeleb_, _Nemo_ and _Bair_ datasets (per dataset: occlusion-mask and output rows).
Braiding quantum gates from partition algebras

Pramod Padmanabhan$^{a}$, Fumihiko Sugino$^{b}$ and Diego Trancanelli$^{c,\star}$

$^{\star}$ On leave of absence from the Institute of Physics at the University of São Paulo, São Paulo, Brazil.

$^{a}$Center for Theoretical Physics of Complex Systems, Institute for Basic Science, Daejeon, South Korea
$^{b}$Center for Theoretical Physics of the Universe, Institute for Basic Science, Daejeon, South Korea
$^{c}$Dipartimento di Scienze Fisiche, Informatiche e Matematiche, Università di Modena e Reggio Emilia, via Campi 213/A, 41125 Modena, Italy & INFN Sezione di Bologna, via Irnerio 46, 40126 Bologna, Italy

pramod23phys, fusugino<EMAIL_ADDRESS>
###### Contents

1. Introduction
2. Set partitions and partition algebras
    2.1. Representations
3. Equivalence classes of $R$-matrices and SLOCC classes
4. Generalized $R$-matrices
    4.1. 2-qubits
    4.2. 3-qubits
    4.3. 4-qubits
    4.4. 4-qubits from 2-qubits via Temperley-Lieb generators
    4.5. Algorithm for multi-qubit generalized $R$-matrices
5. Comparison with known generalized $R$-matrices
6. Outlook
## 1 Introduction
The fragile nature of quantum entanglement is a central issue in quantum
computing, which can in principle be alleviated by the use of topology.
Drawing inspiration from the Aravind hypothesis [1, 2, 3], it has been
proposed that braiding operators – operators that obey braiding relations and
create knots from unlinked strands – could be thought of as quantum
entanglers, i.e. gates that create entanglement from product states [4, 5, 6,
7, 8]. These initial studies about the relation between entangling gates and
knots were then pushed forward in [9, 10, 11, 12, 13], paving the way to the
proposal of topological quantum circuits with gates given by braiding
operators [14, 15]. It is expected that a physical realization of these
braiding operators/entangling gates could be obtained using anyons.
One way to get braiding operators is by solving the parameter-independent
Yang-Baxter equation (YBE), which we briefly review. (For details about the
case in which the YBE depends on a so-called spectral parameter, see e.g. [16]
and references therein.) The YBE is an operator equation for an invertible
matrix, $R:V\otimes V\rightarrow V\otimes V$, given by
$\left(R\otimes I\right)\left(I\otimes R\right)\left(R\otimes
I\right)=\left(I\otimes R\right)\left(R\otimes I\right)\left(I\otimes
R\right),$ (1.1)
where $V$ is a $d$-dimensional complex vector space and $I$ is the identity
operator on $V$. We use the terms Yang-Baxter operator and $R$-matrix
interchangeably for the operator $R$. Solutions to (1.1) for some cases are
presented in [17, 18, 19].
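Since (1.1) is a finite set of polynomial identities, a candidate $R$-matrix can be checked numerically. The following sketch (ours, in NumPy) verifies the permutation operator, which solves (1.1):

```python
import numpy as np

def ybe_residual(R, d=2):
    """Residual of (1.1) for R acting on V x V with dim V = d."""
    I = np.eye(d)
    RI = np.kron(R, I)          # R tensor I
    IR = np.kron(I, R)          # I tensor R
    return np.linalg.norm(RI @ IR @ RI - IR @ RI @ IR)

SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=float)
assert ybe_residual(SWAP) < 1e-12   # the swap of two qubits solves (1.1)
```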
The $R$-matrix can be seen as a generalization of the permutation operator
that swaps two vector spaces. This point of view is useful if one notices that
the $R$-matrices can be used to construct representations of the Artin braid
group $B_{n}$ on $n$-strands, with generators $\sigma_{i}$ satisfying
$\sigma_{i}\sigma_{i+1}\sigma_{i}=\sigma_{i+1}\sigma_{i}\sigma_{i+1},$ (1.2)
$\sigma_{i}\sigma_{j}=\sigma_{j}\sigma_{i},~{}~{}|i-j|>1,$ (1.3)
for $i=1,\ldots,n-1$. The first relation above is called the braid relation,
whereas the second relation is the far-commutativity condition.
Representations for $\sigma_{i}$ can be constructed out of the $R$-matrices
that solve (1.1) as follows
$\rho(\sigma_{i})=I^{\otimes i-1}\otimes R_{i,i+1}\otimes I^{\otimes n-i-1}.$
(1.4)
Notice that this representation satisfies far-commutativity trivially. This
implies that every $R$-matrix that solves (1.1) can be used to construct a
representation of the braid group, denoted a braiding operator.
The distinction between $R$-matrices and braiding operators becomes essential
when introducing a natural generalization of the YBE [10, 11] which involves
two extra parameters, $m$ and $l$. The linear invertible operator
$R:V^{\otimes m}\rightarrow V^{\otimes m}$ now acts on $m$ copies of the
$d$-dimensional vector space $V$ with $l$ identity operators, and obeys
$\left(R\otimes I^{\otimes l}\right)\left(I^{\otimes l}\otimes
R\right)\left(R\otimes I^{\otimes l}\right)=\left(I^{\otimes l}\otimes
R\right)\left(R\otimes I^{\otimes l}\right)\left(I^{\otimes l}\otimes
R\right),$ (1.5)
prompting the notation $(d,m,l)$-gYBE, as used in [21]. We refer to this
generalized operator as either the generalized Yang-Baxter operator or the
generalized $R$-matrix. This generalization is important for quantum
information processes that involve more than two qubits.
Unlike the $R$-matrix that solves the YBE in (1.1), not all generalized
$R$-matrices that solve the $(d,m,l)$-gYBE in (1.5) provide a representation
of the braid group, as they do not always satisfy the far-commutativity
condition in (1.3). However, for the cases when $2l\geq m$ (assuming $l<m$)
far-commutativity is trivially satisfied, just as in the case of the Yang-
Baxter operators. This is seen through the representations of the braid group
given in terms of the generalized $R$-matrices by
$\rho(\sigma_{i})=\left(I^{\otimes l}\right)^{\otimes i-1}\otimes
R_{i,\cdots,i+m-1}\otimes\left(I^{\otimes l}\right)^{\otimes n-i-m+1}.$ (1.6)
We will then be interested in finding the generalized Yang-Baxter operators
that satisfy the $(d,m,l)$-gYBE when $2l\geq m$, thus automatically leading
to representations of the braid group. (A different approach to
representations of the braid group is discussed in [22].)
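The same numerical check extends directly to (1.5); the helper below, an assumption of ours building on `ybe_residual` above, takes $d$, $m$ and $l$ as parameters:

```python
def gybe_residual(R, d, m, l):
    """Residual of the (d, m, l)-gYBE (1.5); R acts on V^{tensor m}, dim V = d."""
    Il = np.eye(d ** l)                 # l identity factors
    A = np.kron(R, Il)                  # R tensor I^{tensor l}
    B = np.kron(Il, R)                  # I^{tensor l} tensor R
    return np.linalg.norm(A @ B @ A - B @ A @ B)

# (d, m, l) = (2, 2, 1) reduces to the ordinary YBE on qubits:
assert gybe_residual(SWAP, d=2, m=2, l=1) < 1e-12
```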
The $(d,m,l)$-gYBE in (1.5) involves $d^{2m+2l}$ cubic polynomial relations
for $d^{2m}$ unknowns (the entries of the generalized $R$-matrix) and is in
general hard to solve. In this work we use algebraic methods to solve for the
$R$-matrices and generalized $R$-matrices using partition algebras [23, 24,
25, 26, 27, 28]. We obtain families of both unitary and non-unitary operators,
with the former finding use as quantum gates in quantum computing and the
latter being useful to investigate topological aspects of the gYBE that
include the study of knot invariants. We focus on the quantum entangling
aspects of these operators.
The paper is structured as follows. Set partitions and partition algebras are
reviewed in Sec. 2, along with representations (the Qubit and the Temperley-
Lieb representations) of their modified versions. In Sec. 3 we recall the
equivalence classes of the Yang-Baxter operators and discuss how they relate
to the notion of Stochastic Local Operations and Classical Communication
(SLOCC) classes of entangled quantum states in quantum information theory. Our
main results are in Sec. 4, where we obtain and discuss in detail $R$-matrices
for the 2-, 3-, and 4-qubit cases. We also study the SLOCC classes of
entangled states generated by these matrices. The structure of these
generalized $R$-matrices allows to find an algorithm to systematically
generate solutions of the $(d,m,l)$-gYBE. There are three kinds of generalized
$R$-matrices known in the 3-qubit case [10, 21, 12]. In Sec. 5 we show that
the 3-qubit solutions obtained in this paper are inequivalent to the known
solutions. We conclude with some open questions and an outlook in Sec. 6.
## 2 Set partitions and partition algebras
We review the notion of set partitions and partition algebras following [29].
We present just the bare minimum needed in this work, pointing the reader to
that reference for more details. The set of set partitions, denoted $A_{k}$,
consists of the partitions of the union of two sets: $\\{1,2,\cdots,k\\}$ and
$\\{1^{\prime},2^{\prime},\cdots,k^{\prime}\\}$. As an example, consider the
diagram in Fig. 1 for $k=7$, which represents the set partition
$\left\\{\left\\{1,3,5,4^{\prime},5^{\prime}\right\\},\left\\{2,3^{\prime}\right\\},\left\\{4,6,7,6^{\prime}\right\\},\left\\{1^{\prime},2^{\prime}\right\\},\left\\{7^{\prime}\right\\}\right\\}.$
Figure 1: A diagram representing
$\left\\{\left\\{1,3,5,4^{\prime},5^{\prime}\right\\},\left\\{2,3^{\prime}\right\\},\left\\{4,6,7,6^{\prime}\right\\},\left\\{1^{\prime},2^{\prime}\right\\},\left\\{7^{\prime}\right\\}\right\\}\in
A_{7}$.
In the diagram, vertices $i$ and $j$ are connected by a path if $i$ and $j$
belong to the same block of the partition. Note that the diagram for a given
element of $A_{k}$ is not unique. For example the diagram in Fig. 2 also
represents the same element represented in the diagram Fig. 1.
Figure 2: Another diagram representing the same element of $A_{7}$ shown in
Fig. 1.
To compose two diagrams, $d_{1}\circ d_{2}$, place $d_{1}$ above $d_{2}$ and
then trace the lines to obtain the new partition. An example of such a
composition is given for the case of $k=6$ in Fig. 3.
Figure 3: Composition of elements of $A_{6}$.
The elements of $A_{k}$ are generated by successive compositions of
$\displaystyle p_{i},\;\;\;\;\qquad\textrm{for}~{}i\in\\{1,\cdots,k\\},$ (2.1)
$\displaystyle p_{i+\frac{1}{2}},\qquad\textrm{for}~{}i\in\\{1,\cdots,k-1\\},$
(2.2) $\displaystyle
s_{i},\;\;\;\;\qquad\textrm{for}~{}i\in\\{1,\cdots,k-1\\},$ (2.3)
whose action can be represented diagrammatically, see Fig. 4.
Figure 4: Generators of $A_{k}$.
An example of composition of these generators is shown in Fig. 5.
Figure 5: Example of composition in $A_{k}$. The nodes $1,\cdots,i-1,i+3,\cdots,k$ on which the generators act trivially are suppressed.
Using these diagrams one can easily verify that the generators satisfy the
following relations
$p_{i}^{2}=p_{i},\qquad p^{2}_{i+\frac{1}{2}}=p_{i+\frac{1}{2}},$ (2.4)
$p_{i}p_{i\pm\frac{1}{2}}p_{i}=p_{i},\qquad
p_{i\pm\frac{1}{2}}p_{i}p_{i\pm\frac{1}{2}}=p_{i\pm\frac{1}{2}},$ (2.5)
$p_{i}p_{j}=p_{j}p_{i},~{}\textrm{for}~{}|i-j|>\frac{1}{2},$ (2.6)
$s_{i}^{2}=1,\qquad s_{i}s_{i+1}s_{i}=s_{i+1}s_{i}s_{i+1},\qquad
s_{i}s_{j}=s_{j}s_{i},~{}\textrm{for}~{}|i-j|>1.$ (2.7)
Here and below, we simply write $d_{1}\circ d_{2}$ as $d_{1}d_{2}$ for
notational simplicity.
Note that $p_{i}$ and $p_{i+\frac{1}{2}}$ generate planar diagrams. Non-
planarity is introduced by the permutation group generators $s_{i}$. The mixed
relations are
$s_{i}p_{i}p_{i+1}=p_{i}p_{i+1}s_{i}=p_{i}p_{i+1},\quad
s_{i}p_{i}s_{i}=p_{i+1},\quad
s_{i}p_{i+j}=p_{i+j}s_{i},~{}\textrm{for}~{}j\neq 0,1,$ (2.8)
and
$s_{i}p_{i+\frac{1}{2}}=p_{i+\frac{1}{2}}s_{i}=p_{i+\frac{1}{2}},\quad
s_{i}s_{i+1}p_{i+\frac{1}{2}}s_{i+1}s_{i}=p_{i+\frac{3}{2}},\quad
s_{i}p_{i+j}=p_{i+j}s_{i},~{}\textrm{for}~{}j\neq-\frac{1}{2},\frac{3}{2}.$
(2.9)
To emphasize that $s_{i}$ swaps the elements on the vector spaces at sites $i$
and $i+1$, one could also write it as $s_{i,i+1}$, but we will stick to the
notation $s_{i}$ to avoid cluttering. The second relations in (2.8)-(2.9) can
be understood as the fundamental property of the permutation operator:
$\displaystyle s_{i}p_{i}s_{i}=p_{i+1},\qquad
s_{i+1}p_{i+\frac{1}{2}}s_{i+1}=s_{i}p_{i+\frac{3}{2}}s_{i}.$ (2.10)
The index swapping is obvious in the first relation and it becomes obvious
also in the second one, when one notices that $p_{i+\frac{1}{2}}$ has non-
trivial support on sites $i$ and $i+1$ whereas $p_{i+\frac{3}{2}}$ has non-
trivial support on sites $i+1$ and $i+2$. To make this more transparent, one
can change notation by identifying $p_{i+\frac{1}{2}}$ with $p_{i,i+1}$ and
$p_{i+\frac{3}{2}}$ with $p_{i+1,i+2}$, prompting the definition
$s_{i+1}p_{i,i+1}s_{i+1}=s_{i}p_{i+1,i+2}s_{i}\equiv p_{i,i+2},$ (2.11)
whose diagrammatic representation is shown in Fig. 6 and can be worked out
using the composition laws in Fig. 5.
Figure 6: The element $p_{i,i+2}$.
The figure suggests one can generalize the $p_{i+\frac{1}{2}}\equiv p_{i,i+1}$
operators to the cases with support on two arbitrary sites, $i$ and $i+j$, as
$p_{i,i+j}=s_{i+j-1}s_{i+j-2}\cdots s_{i+1}p_{i,i+1}s_{i+1}\cdots
s_{i+j-2}s_{i+j-1},$ (2.12)
satisfying the relations
$\displaystyle p_{i,i+j}^{2}$ $\displaystyle=$ $\displaystyle p_{i,i+j},$
(2.13) $\displaystyle p_{i,i+j_{1}}p_{i,i+j_{2}}$ $\displaystyle=$
$\displaystyle p_{i,i+j_{1}}p_{i+j_{1},i+j_{2}},~{}~{}j_{1}<j_{2},$ (2.14)
$\displaystyle p_{i+l,i+j}p_{i,i+j}$ $\displaystyle=$ $\displaystyle
p_{i,i+j}p_{i+l,i+j}=p_{i,i+l}p_{i+l,i+j},~{}~{}l<j,$ (2.15)
which can be verified diagrammatically. Henceforth, we shall use $p_{i,i+1}$
instead of $p_{i+\frac{1}{2}}$.
Linear combinations of elements of $A_{k}$ with complex coefficients form the
partition algebra $\mathbb{C}A_{k}(1)$.
### 2.1 Representations
We use a slightly modified form of the relations in (2.4), where we scale
either one or both of the relations by a factor $d$; we call these the
asymmetric and symmetric scalings, respectively. To this end we employ hermitian
representations for the generators, which come in two kinds: the generators of
the planar diagrams can be rescaled either asymmetrically (Qubit
representation) or symmetrically (Temperley-Lieb representation). Strictly
speaking, these representations do not give the representations of the
relations (2.4)-(2.9), but the representations of their deformed versions.
##### Qubit representation
In this representation one of the relations satisfied by $p_{i,i+1}$ is
modified to
$p_{i,i+1}^{2}=d~{}p_{i,i+1},$ (2.16)
with the other relations in (2.4), (2.5) and (2.6) unchanged. Here $d$ is the
dimension of the local Hilbert space on the site $i$ acted upon by the
generators of $A_{k}$.
The relations in the non-planar part of $A_{k}$ involving $p_{i,i+1}$ and
$s_{i}$, see (2.9), and the relations in (2.14) and (2.15) are unchanged. The
relation in (2.13) is modified to
$p_{i,i+j}^{2}=d~{}p_{i,i+j},$ (2.17)
corresponding to the scaling of $p_{i,i+1}$.
In this paper we deal with qubits and hence the case of $d=2$. The qudit
realizations can be similarly obtained through an appropriate generalization.
Qubit representations are given by
$p_{i}=\frac{1+Z_{i}}{2},\qquad p_{i,j}=1+X_{i}X_{j},\qquad
s_{i}=\frac{1+X_{i}X_{i+1}+Y_{i}Y_{i+1}+Z_{i}Z_{i+1}}{2},$ (2.18)
where 1 is the identity operator acting on the relevant Hilbert space and
$X_{i},Y_{i},Z_{i}$ are the usual Pauli matrices acting on the qubit space on
site $i$,
$X=\left(\begin{array}[]{cc}0&1\\\ 1&0\end{array}\right),\quad
Y=\left(\begin{array}[]{cc}0&-\mathrm{i}\\\
\mathrm{i}&0\end{array}\right),\quad Z=\left(\begin{array}[]{cc}1&0\\\
0&-1\end{array}\right),$ (2.19)
written in the basis $\\{{\left|{0}\right>}$, ${\left|{1}\right>}\\}$ where
$Z$ is diagonal. Another representation which is unitarily equivalent to the
above is given by
$p_{i}=\frac{1+X_{i}}{2},\qquad p_{i,j}=1+Z_{i}Z_{j},\qquad
s_{i}=\frac{1+X_{i}X_{i+1}+Y_{i}Y_{i+1}+Z_{i}Z_{i+1}}{2}.$ (2.20)
The qubit representation gives the representation of the relations (2.4)-(2.9)
with the normalization of $p_{i,i+1}\equiv p_{i+\frac{1}{2}}$ changed to
(2.16) with $d=2$.
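These relations are easy to verify numerically. The following is a minimal sketch (ours, using numpy; not part of the original text) that checks the deformed relations for the representation (2.18) on two sites:

```python
import numpy as np

# Pauli matrices
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Qubit representation (2.18) on the two sites i and i+1
p_i = np.kron((I2 + Z) / 2, I2)                    # p_i = (1 + Z_i)/2
p_i1 = np.kron(I2, (I2 + Z) / 2)                   # p_{i+1}
p_ii1 = np.eye(4) + np.kron(X, X)                  # p_{i,i+1} = 1 + X_i X_{i+1}
s = (np.eye(4) + np.kron(X, X) + np.kron(Y, Y) + np.kron(Z, Z)) / 2  # SWAP

assert np.allclose(p_i @ p_i, p_i)                 # p_i^2 = p_i, as in (2.4)
assert np.allclose(p_ii1 @ p_ii1, 2 * p_ii1)       # deformed relation (2.16), d = 2
assert np.allclose(s @ s, np.eye(4))               # s_i^2 = 1, as in (2.7)
assert np.allclose(s @ p_i @ s, p_i1)              # s_i p_i s_i = p_{i+1}, as in (2.8)
assert np.allclose(s @ p_ii1, p_ii1)               # s_i p_{i,i+1} = p_{i,i+1}, as in (2.9)
```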
##### Temperley-Lieb representation
Now both generators of the planar diagrams are rescaled by the same factor,
$p_{i,i+1}^{2}=(Q+Q^{-1})p_{i,i+1},\qquad p_{i}^{2}=(Q+Q^{-1})p_{i},\qquad
Q\in\mathbb{R}-\\{0\\}$ (2.21)
with the rest of the relations of the planar part of the partition algebra
unchanged. The planar part of $A_{k}$ can be realized using the generators of
the Temperley-Lieb algebra with doubled dimensions,
$e_{1},\cdots,e_{2k-1}\in{\rm TL}_{2k}$,
$p_{i}=e_{2i-1},\qquad p_{i,i+1}=e_{2i},$ (2.22)
which satisfy the relations in (2.5)-(2.6), see [29]. Notice the doubling of
the number of sites in this realization, as shown in Fig. 7.
Figure 7: Temperley-Lieb representation of $p_{i}$ and $p_{i+1}$. The white
dots, obtained by doubling the original sites for $A_{k}$ (the black dots),
are sites on which the Temperley-Lieb generators ($e_{2i-1},e_{2i+1}$) act.
In this representation the introduction of non-planarity through the
permutation generators $s_{i}$ affects some of the mixed relations (2.8) and
(2.9). Let us consider the case that $s_{i}$ is realized as an appropriate
permutation operator given by
$s_{i}=s_{2i-1,2i+1}~{}s_{2i,2i+2},$ (2.23)
or by the unitarily equivalent
$s_{i}=s_{2i-1,2i+2}~{}s_{2i,2i+1},$ (2.24)
with $s_{i,j}$ being the operator that swaps the indices $i$ and $j$. This
realization lives on the doubled lattice as shown in Fig. 8.
Figure 8: Temperley-Lieb representation of $s_{i}$ in (2.23).
Using this diagram the partition algebra relations in (2.8) can be easily
verified, whereas (2.9) does not hold. Thus the Temperley-Lieb representation
gives the representation of the relations (2.21) and (2.5)-(2.8).
The doubling of the sites in this representation implies that one shall obtain
$R$-matrices and generalized $R$-matrices on twice the number of sites, i.e.
if one obtains the generalized $R$-matrix that solves the $(d,m,l)$-gYBE, then
this representation will yield another solution that solves the
$(d,2m,2l)$-gYBE.
In section 4, generalized $R$-matrices are constructed as linear combinations
of the above representations of deformed set partitions that are analogous to
elements of the partition algebras.
## 3 Equivalence classes of $R$-matrices and SLOCC classes
In quantum information theory the idea of SLOCC was introduced to classify
entangled states. It states that two quantum states are equivalent when there
exists an Invertible Local Operator (ILO) that maps one state into the other:
${\left|{\psi_{1}}\right>}=\left(A_{1}\otimes\cdots\otimes
A_{n}\right){\left|{\psi_{2}}\right>},$ (3.1)
where the states ${\left|{\psi_{1}}\right>}$ and ${\left|{\psi_{2}}\right>}$
live in the Hilbert space $\otimes_{i=1}^{n}~{}\mathcal{H}_{i}$ [31]. $A_{i}$
is an ILO acting only at the site $i$. This equivalence relation appeals to
the intuition of entangled states, as one expects local operations to not
disturb the non-local entanglement property of the state.
One can also define an equivalence class of the $R$-matrices satisfying the
parameter-independent YBE. To identify this class, one observes that if $R$ is
a solution to the $(d,m,l)$-gYBE, then so are $\alpha R$ (with $\alpha$ a
constant), $R^{-1}$, and $\left(A_{1}\otimes\cdots\otimes
$A_{m}\right)R\left(A_{1}^{-1}\otimes\cdots\otimes A^{-1}_{m}\right)$, where
$A_{1}\otimes\cdots\otimes A_{m}$ is an ILO of the kind that also appears in
the definition of the SLOCC classes of quantum states. We can now prove the
following theorem:
##### Theorem
Two entangling $R$-matrices $R_{1}$ and $R_{2}$, which are equivalent under
ILO, produce entangled states of the same SLOCC class.
##### Proof
$R_{1}$ produces an entangled state ${\left|{E}\right>}$ acting on the product
state ${\left|{P}\right>}$,
$R_{1}~{}{\left|{P}\right>}={\left|{E}\right>}.$ (3.2)
By assumption, one can express $R_{1}=AR_{2}A^{-1}$, where $A$ is an ILO, so
that
$AR_{2}A^{-1}~{}{\left|{P}\right>}={\left|{E}\right>}.$ (3.3)
This means $R_{2}A^{-1}~{}{\left|{P}\right>}=A^{-1}~{}{\left|{E}\right>}$. By
definition, both $A^{-1}~{}{\left|{P}\right>}$ and
$A^{-1}~{}{\left|{E}\right>}$ are in the same SLOCC classes as
${\left|{P}\right>}$ and ${\left|{E}\right>}$, respectively, which proves the
assertion. $\blacksquare$
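As a concrete illustration (our own numerical sketch, not part of the proof), one can conjugate an entangling 2-qubit operator by a random ILO and confirm that the entangled outputs stay in the same SLOCC class, here detected by the Schmidt rank:

```python
import numpy as np

rng = np.random.default_rng(0)

def schmidt_rank(psi, tol=1e-10):
    # For 2 qubits: rank 2 <=> entangled (Bell class), rank 1 <=> product state
    return int(np.sum(np.linalg.svd(psi.reshape(2, 2), compute_uv=False) > tol))

# An entangling 2-qubit operator R1 (it reappears as the (0,0,-2) gate in Sec. 4)
R1 = np.array([[-1, 0, 0, 0],
               [ 0, 0, 1, 0],
               [ 0, 1, 0, 0],
               [ 0, 0, 0, 1]], dtype=complex)
A = np.kron(rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2)),
            rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2)))
R2 = np.linalg.inv(A) @ R1 @ A                     # ILO-equivalent operator

P = np.kron([1, 1], [1, 1]).astype(complex) / 2    # product state |+>|+>
E = R1 @ P                                         # entangled output of R1
lhs = R2 @ (np.linalg.inv(A) @ P)                  # R2 acting on inv(A)|P>
assert np.allclose(lhs, np.linalg.inv(A) @ E)      # equals inv(A)|E>, as in the proof
assert schmidt_rank(E) == schmidt_rank(lhs) == 2   # same SLOCC class
```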
This theorem naturally implies that if two $R$-matrices produce states of two
different SLOCC classes, then they cannot be related by an ILO. However, the
converse of the theorem is not always true: if two entangled states
${\left|{E_{1}}\right>}$ and ${\left|{E_{2}}\right>}$ belonging to the same
SLOCC class are generated by two entangling $R$-matrices $R_{1}$ and $R_{2}$,
respectively, then they need not be related by an ILO. One has in fact
$\displaystyle R_{1}~{}{\left|{P_{1}}\right>}={\left|{E_{1}}\right>},\qquad
R_{2}~{}{\left|{P_{2}}\right>}={\left|{E_{2}}\right>}.$ (3.4)
As ${\left|{E_{2}}\right>}=A{\left|{E_{1}}\right>}$ and
${\left|{P_{2}}\right>}=L{\left|{P_{1}}\right>}$, where $A$ and $L$ are two
ILOs, one obtains
$A^{-1}R_{2}L~{}{\left|{P_{1}}\right>}={\left|{E_{1}}\right>}.$
Note that the ILOs $A$ and $L$ need not be the same. For unitary $R$-matrices,
this relation holds on all the product states that span the Hilbert space, so
that one can identify
$R_{1}=A^{-1}R_{2}L.$
We shall use the definitions and this result to determine the classes of
$R$-matrices.
##### 2-qubit SLOCC classes
There are two SLOCC classes in the 2-qubit case, the Bell state class and the
product state class.
##### 3-qubit SLOCC classes
There are six SLOCC classes in the 3-qubit case [31]: two tri-partite
entangled classes, the GHZ state class and the W state class, both denoted
$ABC$ to symbolize the three parties of the state; three partially entangled
state classes, $A-BC$, $AC-B$ and $AB-C$; and finally the product state class,
$A-B-C$.
##### 4-qubit SLOCC classes
In the 4-qubit case, it was discussed in [31] that there are infinitely many
SLOCC classes. Later, it was shown in [32] that there are nine families, in
the sense of nine different ways of entangling four qubits. On the other hand,
it was reported in [33, 34] that the number of families is eight instead of
nine for genuinely entangled states.333The classification in [32] contains no
genuinely entangled state with canonical state
${\left|{0000}\right>}+{\left|{0111}\right>}$. Due to this difference, [33,
34] does not contradict [32]. Furthermore, it was discovered in [35] that the
nine families in [32] further split into 49 SLOCC entanglement classes by
looking at SLOCC invariant quantities.
## 4 Generalized $R$-matrices
The generators of the permutation group $s_{i}$ solve the $(d,2,1)$-gYBE. In
fact, the transposition operators $s_{i,i+l}$ solve the $(d,m,l)$-gYBE,
assuming $l\leq m$, with non-trivial support on the sites $i$ and $i+l$. The
ansatze with non-trivial support on all the $m$ sites used in this paper are
modifications of these transposition operators, with generators given by the
planar part of the partition algebra. In the language of quantum gates, these
ansatze are generalized SWAP gates.
In the following we discuss the 2-qubit and 3-qubit cases in detail before
writing down the answers for the 4-qubit case and outlining an algorithm for
an arbitrary multi-qubit generalized $R$-matrix.
### 4.1 2-qubits
On two sites $i$ and $i+1$, there are various choices of the generators
$p_{i}$, $p_{i+1}$, $p_{i,i+1}$ and $s_{i}$ to construct the $R$-matrices. We
consider the different possibilities separately.
##### Using $s_{i}$, $p_{i}$ and $p_{i+1}$
Consider the following ansatz for the Yang-Baxter operator with support on
sites $i$ and $i+1$,
$R_{i}=s_{i}\left(1+\alpha~{}p_{i}+\beta~{}p_{i+1}+\gamma~{}p_{i}p_{i+1}\right),$
(4.1)
with constants $\alpha,\beta,\gamma\in\mathbb{C}$. This operator satisfies the
$(d,2,1)$-gYBE for all $\alpha,\beta,\gamma$, as seen by evaluating the two
sides of the YBE
$\displaystyle R_{i}R_{i+1}R_{i}$ $\displaystyle=$
$\displaystyle\left(1+\alpha~{}p_{i+1}+\beta~{}p_{i}+\gamma~{}p_{i}p_{i+1}\right)\left(1+\alpha~{}p_{i+2}+\beta~{}p_{i}+\gamma~{}p_{i}p_{i+2}\right)$
(4.3)
$\displaystyle\times\left(1+\alpha~{}p_{i+2}+\beta~{}p_{i+1}+\gamma~{}p_{i+1}p_{i+2}\right)\left(s_{i}s_{i+1}s_{i}\right),$
where the swapping property of the permutation operator in (2.10) has been
applied repeatedly. In a similar manner one can compute the right hand side
$R_{i+1}R_{i}R_{i+1}$, which turns out to be equal to (4.3). Using
$p_{i}^{2}=p_{i}$, $p_{i}p_{i+1}=p_{i+1}p_{i}$ and $s_{i}^{2}=1$, we can show
that the inverse is given by
$R_{i}^{-1}=\left(1-\frac{\alpha}{1+\alpha}~{}p_{i}-\frac{\beta}{1+\beta}~{}p_{i+1}+\frac{\alpha\beta(2+\alpha+\beta+\gamma)-\gamma}{(1+\alpha)(1+\beta)(1+\alpha+\beta+\gamma)}~{}p_{i}p_{i+1}\right)s_{i}.$
(4.4)
This expression is needed to check for which values of the parameters the
$R$-matrix is unitary.
It is also easy to check that the operators in (4.1) satisfy far-commutativity
for braid operators, $\sigma_{i}\sigma_{j}=\sigma_{j}\sigma_{i}\;(|i-j|>1)$,
by noting that
$R_{i+j}=s_{i+j}\left(1+\alpha~{}p_{i+j}+\beta~{}p_{i+j+1}+\gamma~{}p_{i+j}p_{i+j+1}\right)$
(4.5)
has trivial common support with the operator in (4.1) for all $j>1$.
In general, these solutions are non-unitary and generate the infinite-
dimensional braid group, i.e. the image of braid group representations built
using these $R$-matrices is infinite. This is seen by computing the powers of
$R_{i}$,
$R_{i}^{n}=s_{i}^{n}\left(1+\alpha_{n}~{}p_{i}+\beta_{n}~{}p_{i+1}+\gamma_{n}~{}p_{i}p_{i+1}\right),$
(4.6)
where the parameters are defined recursively as
$\displaystyle\alpha_{n}$ $\displaystyle=$
$\displaystyle\alpha_{1}+\beta_{n-1}+\alpha_{1}\beta_{n-1},\qquad\beta_{n}=\alpha_{n-1}+\beta_{1}+\alpha_{n-1}\beta_{1},$
$\displaystyle\gamma_{n}$ $\displaystyle=$
$\displaystyle\alpha_{1}\alpha_{n-1}+\beta_{1}\beta_{n-1}+\gamma_{1}\gamma_{n-1}+\gamma_{1}\left(1+\alpha_{n-1}+\beta_{n-1}\right)+\gamma_{n-1}\left(1+\alpha_{1}+\beta_{1}\right),$
(4.7)
after identifying $\alpha_{1}$, $\beta_{1}$ and $\gamma_{1}$ with $\alpha$,
$\beta$ and $\gamma$ in (4.1).
By equating the inverse (4.4) with $R_{i}^{\dagger}$, the conditions
$\displaystyle\alpha^{*}=-\frac{\alpha}{1+\alpha},\qquad\beta^{*}=-\frac{\beta}{1+\beta},\qquad\gamma^{*}=\frac{\alpha\beta(2+\alpha+\beta+\gamma)-\gamma}{(1+\alpha)(1+\beta)(1+\alpha+\beta+\gamma)}.$
(4.8)
give a family of unitary solutions that generate the infinite-dimensional
braid group, just as in the non-unitary case. The conditions (4.8) are
explicitly solved by
$\alpha=e^{\mathrm{i}\theta}-1,\qquad\beta=e^{\mathrm{i}\varphi}-1,\qquad\gamma=e^{\mathrm{i}\phi}-e^{\mathrm{i}\theta}-e^{\mathrm{i}\varphi}+1$
(4.9)
with $\theta$, $\varphi$ and $\phi$ angles between 0 and $2\pi$. The recursive
definitions (4.7) of the parameters of $R_{i}^{n}$ show that $R_{i}^{n}\neq 1$
for any finite $n$ when $\theta$, $\varphi$ and $\phi$ are generic. This can
also be seen from the eigenvalues $\\{1,\pm
e^{\frac{\mathrm{i}}{2}(\theta+\varphi)},e^{\mathrm{i}\phi}\\}$ of the unitary
solutions.
There are eight real unitary solutions, four of which are shown in Table 1 in
the qubit representation (2.18). (The remaining four generate the same
SLOCC classes as these four.)
| $(\alpha,\beta,\gamma)$ | $R_{i}$ | Eigenvalues | $n|R^{n}=1$
---|---|---|---|---
1. | $(0,0,0)$ | $\tiny{\left(\begin{array}[]{cccc}1&0&0&0\\\ 0&0&1&0\\\ 0&1&0&0\\\ 0&0&0&1\end{array}\right)}$ | $(-1_{(1)},1_{(3)})$ | 2
2. | $(0,0,-2)$ | $\tiny{\left(\begin{array}[]{cccc}-1&0&0&0\\\ 0&0&1&0\\\ 0&1&0&0\\\ 0&0&0&1\end{array}\right)}$ | $(-1_{(2)},1_{(2)})$ | 2
3. | $(0,-2,0)$ | $\tiny{\left(\begin{array}[]{cccc}-1&0&0&0\\\ 0&0&-1&0\\\ 0&1&0&0\\\ 0&0&0&1\end{array}\right)}$ | $(-1,\mathrm{i},-\mathrm{i},1)$ | 4
4. | $(0,-2,2)$ | $\tiny{\left(\begin{array}[]{cccc}1&0&0&0\\\ 0&0&-1&0\\\ 0&1&0&0\\\ 0&0&0&1\end{array}\right)}$ | $(\mathrm{i},-\mathrm{i},1_{(2)})$ | 4
Table 1: Unitary solutions in the 2-qubit case using operators $s_{i}$,
$p_{i}$ and $p_{i+1}$. The $k$ in $a_{(k)}$ denotes the multiplicity of the
eigenvalue $a$. In the last column, $n|R^{n}=1$ means the lowest positive
integer $n$ satisfying $R^{n}=1$.
In the qubit representation, the $(2,2,1)$-Yang-Baxter operator takes the
explicit form
$R_{i}=\left(\begin{array}[]{cccc}1+\alpha+\beta+\gamma&0&0&0\\\
0&0&1+\beta&0\\\ 0&1+\alpha&0&0\\\ 0&0&0&1\end{array}\right).$ (4.10)
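A quick numerical check (a sketch of ours, not from the original text) confirms that (4.10) solves the YBE for arbitrary complex parameters and becomes unitary at the points (4.9):

```python
import numpy as np

rng = np.random.default_rng(1)

def R_matrix(a, b, g):
    # Explicit form (4.10) in the qubit representation
    return np.array([[1 + a + b + g, 0, 0, 0],
                     [0, 0, 1 + b, 0],
                     [0, 1 + a, 0, 0],
                     [0, 0, 0, 1]], dtype=complex)

# YBE holds for arbitrary complex alpha, beta, gamma
alpha, beta, gamma = rng.standard_normal(3) + 1j * rng.standard_normal(3)
R = R_matrix(alpha, beta, gamma)
I2 = np.eye(2)
R12, R23 = np.kron(R, I2), np.kron(I2, R)
assert np.allclose(R12 @ R23 @ R12, R23 @ R12 @ R23)

# Unitarity at the points (4.9)
theta, varphi, phi = rng.uniform(0, 2 * np.pi, 3)
Ru = R_matrix(np.exp(1j * theta) - 1,
              np.exp(1j * varphi) - 1,
              np.exp(1j * phi) - np.exp(1j * theta) - np.exp(1j * varphi) + 1)
assert np.allclose(Ru.conj().T @ Ru, np.eye(4))
```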
The unitary operators can act as quantum gates; however, not all of them may
lead to a universal set of gates. According to a theorem by Brylinski [30],
for a 2-qubit space a gate helps build a universal set if and only if it is
entangling. We can use this criterion to check which of the operators in
Table 1 are entangling and can potentially lead to a universal set.
A quantum gate is entangling if there is a vector
${\left|{v_{1}}\right>}\otimes{\left|{v_{2}}\right>}\in\mathbb{C}^{2}\otimes\mathbb{C}^{2}$
that gets mapped to an entangled state by the quantum gate. With this
definition the gates corresponding to $(0,0,-2)$ and $(0,-2,2)$ are
entangling. This assertion can be checked by seeing that these gates map the
most general product state in $\mathbb{C}^{2}\otimes\mathbb{C}^{2}$, given by
$a_{1}a_{2}{\left|{00}\right>}+a_{1}b_{2}{\left|{01}\right>}+b_{1}a_{2}{\left|{10}\right>}+b_{1}b_{2}{\left|{11}\right>}$,
to an entangled state. For example, the operator corresponding to $(0,0,-2)$
maps this product state to
$-a_{1}a_{2}{\left|{00}\right>}+a_{1}b_{2}{\left|{01}\right>}+b_{1}a_{2}{\left|{10}\right>}+b_{1}b_{2}{\left|{11}\right>}$,
which is entangled.
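This criterion is easy to test numerically; the sketch below (ours) applies the $(0,0,-2)$ gate to a random product state and confirms that the output has Schmidt rank 2, i.e. is entangled:

```python
import numpy as np

rng = np.random.default_rng(2)

# Gate for (alpha, beta, gamma) = (0, 0, -2) from Table 1
R = np.array([[-1, 0, 0, 0],
              [ 0, 0, 1, 0],
              [ 0, 1, 0, 0],
              [ 0, 0, 0, 1]], dtype=complex)

# Random normalized product state (a1|0> + b1|1>) x (a2|0> + b2|1>)
v1 = rng.standard_normal(2) + 1j * rng.standard_normal(2)
v2 = rng.standard_normal(2) + 1j * rng.standard_normal(2)
psi = R @ np.kron(v1 / np.linalg.norm(v1), v2 / np.linalg.norm(v2))

# Two nonzero Schmidt coefficients <=> entangled output
schmidt = np.linalg.svd(psi.reshape(2, 2), compute_uv=False)
assert np.all(schmidt > 1e-12)
print("Schmidt coefficients:", schmidt)
```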
##### Using $s_{i}$ and $p_{i,i+1}$
The operator
$R_{i}=s_{i}\left(1+\alpha~{}p_{i,i+1}\right)$ (4.11)
satisfies the $(d,2,1)$-gYBE for all values of $\alpha\in\mathbb{C}$, as can
be checked by computing $R_{i}R_{i+1}R_{i}$ and $R_{i+1}R_{i}R_{i+1}$. The
inverse, when $d=2$, is given by
$R_{i}^{-1}=s_{i}\left(1-\frac{\alpha}{1+2\alpha}~{}p_{i,i+1}\right).$ (4.12)
The image of the braid group representation built using the
$(d,2,1)$-$R$-matrix in (4.11) is infinite, as seen through its powers:
$R_{i}^{n}=s_{i}^{n}\left(1+\alpha_{n}~{}p_{i,i+1}\right),$ (4.13)
with the parameters, when $d=2$, defined recursively as
$\alpha_{n}=\alpha_{1}+\alpha_{n-1}+2\alpha_{1}\alpha_{n-1}$, i.e.
$1+2\alpha_{n}=(1+2\alpha_{1})^{n}$, after identifying $\alpha_{1}$ with
$\alpha$ in (4.11).
This is unitary for $\alpha^{*}=-\frac{\alpha}{1+2\alpha}$, i.e.
$\alpha=\frac{1}{2}(e^{\mathrm{i}\theta}-1)$ for arbitrary angle $\theta$,
which gives real solutions $\alpha=0,-1$. From the above recursion formula,
unitary solutions with generic $\theta$ generate the infinite-dimensional
braid group ($R_{i}^{n}\neq 1$ for any finite $n$). For the representation of
the partition algebra generators in (2.18) one obtains
$R_{i}=\left(\begin{array}[]{cccc}1+\alpha&0&0&\alpha\\\
0&\alpha&1+\alpha&0\\\ 0&1+\alpha&\alpha&0\\\
\alpha&0&0&1+\alpha\end{array}\right).$ (4.14)
The case $\alpha=0$ is just the permutation operator $s_{i}$, which is not an
entangler by previous considerations. That leaves us with $\alpha=-1$, which
is not an entangler either.
##### Comparison of the unitary solutions to known cases in [19, 20]
There are five families of unitary 2-qubit solutions to the Yang-Baxter
equation, as found in [19] and analyzed in [20]. The solution in (4.1) is
mapped to one of the five families, whose representative is given by
$\left(\begin{array}[]{cccc}1&0&0&0\\\ 0&0&\psi_{1}&0\\\ 0&\psi_{2}&0&0\\\
0&0&0&\psi_{3}\end{array}\right)\qquad\mbox{with}\qquad|\psi_{1}|=|\psi_{2}|=|\psi_{3}|=1.$
(4.15)
Actually, the solution of the form (4.10) with (4.9) becomes
$\left(\begin{array}[]{cccc}1&0&0&0\\\ 0&0&e^{\mathrm{i}(\varphi-\phi)}&0\\\
0&e^{\mathrm{i}(\theta-\phi)}&0&0\\\
0&0&0&e^{-\mathrm{i}\phi}\end{array}\right)$ after scaling with
$e^{-\mathrm{i}\phi}$, which implies that the solution belongs to the family
(4.15) with $\psi_{1}=e^{\mathrm{i}(\varphi-\phi)}$,
$\psi_{2}=e^{\mathrm{i}(\theta-\phi)}$ and $\psi_{3}=e^{-\mathrm{i}\phi}$. In
particular, the four real unitary solutions listed in Table 1 belong to the
same family with $\psi_{1}=\psi_{2}=\psi_{3}=1$,
$\psi_{1}=\psi_{2}=\psi_{3}=-1$ (after scaling with $-1$),
$\psi_{1}=1,\psi_{2}=\psi_{3}=-1$ (after scaling with $-1$) and
$\psi_{1}=-1,\psi_{2}=\psi_{3}=1$, respectively.
In addition, the unitary solution in (4.11), more explicitly (4.14) with
$\alpha=\frac{1}{2}(e^{\mathrm{i}\theta}-1)$, can also be mapped to (4.15) up
to the overall phase factor $e^{\mathrm{i}\theta}$, where
$\psi_{1}=\psi_{2}=e^{-\mathrm{i}\theta}$, $\psi_{3}=1$, and the mapping is
done by the ILO $Q\otimes Q$ with $Q=\begin{pmatrix}1&1\\\ 1&-1\end{pmatrix}$.
Thus, all the unitary 2-qubit solutions we obtained belong to the single
family (4.15) among the five described in [19, 20].
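This mapping can be confirmed numerically; the following sketch (ours, not from [19, 20]) conjugates (4.14) at a generic unitary point by $Q\otimes Q$ and compares with the representative of the family (4.15):

```python
import numpy as np

theta = 0.7                                        # generic angle
alpha = 0.5 * (np.exp(1j * theta) - 1)

a = 1 + alpha
R = np.array([[a, 0, 0, alpha],
              [0, alpha, a, 0],
              [0, a, alpha, 0],
              [alpha, 0, 0, a]], dtype=complex)    # the matrix (4.14)

Q = np.array([[1, 1], [1, -1]], dtype=complex)
QQ = np.kron(Q, Q)
Rp = QQ @ R @ np.linalg.inv(QQ)

# Representative (4.15) with psi1 = psi2 = e^{-i theta}, psi3 = 1,
# up to the overall phase e^{i theta}
target = np.array([[1, 0, 0, 0],
                   [0, 0, np.exp(-1j * theta), 0],
                   [0, np.exp(-1j * theta), 0, 0],
                   [0, 0, 0, 1]], dtype=complex)
assert np.allclose(Rp, np.exp(1j * theta) * target)
```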
### 4.2 3-qubits
The possible operators on the three sites $i$, $i+1$ and $i+2$ are
$p_{i}$, $p_{i+1}$, $p_{i+2}$, $p_{i,i+1}$, $p_{i+1,i+2}$ and $p_{i,i+2}$,
along with the permutation generators $s_{i}$ and $s_{i+1}$. In order to
obtain valid representations of the braid group we look for solutions of the
$(d,3,2)$-gYBE.
##### Using $s_{i}$, $p_{i}$, $p_{i+1}$ and $p_{i+2}$
As an ansatz we propose the natural generalization of (4.1) from the 2-qubit
case:
$R_{i}=s_{i,i+2}\left(1+\alpha_{1}~{}p_{i}+\alpha_{2}~{}p_{i+1}+\alpha_{3}~{}p_{i+2}+\beta_{1}~{}p_{i}p_{i+1}+\beta_{2}~{}p_{i+1}p_{i+2}+\beta_{3}~{}p_{i}p_{i+2}+\gamma~{}p_{i}p_{i+1}p_{i+2}\right),$
(4.16)
where $s_{i,i+2}=s_{i}s_{i+1}s_{i}$ and the parameters are complex. This
operator does not satisfy the $(d,3,2)$-gYBE for all values of the parameters.
One can however use the identities in (2.8) to check that
$R_{i}R_{i+2}R_{i}=R_{i+2}R_{i}R_{i+2}$ when
$\alpha_{2}=0,~{}~{}\beta_{2}=-\frac{\beta_{1}\left(1+\alpha_{3}\right)}{1+\alpha_{1}+\beta_{1}},~{}~{}\gamma=\frac{\beta_{1}\left(\alpha_{3}-\alpha_{1}-\beta_{1}\right)}{1+\alpha_{1}+\beta_{1}}.$
(4.17)
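These constraints can be verified numerically. The sketch below (ours) builds (4.16) with (4.17) in the qubit representation $p_{i}=(1+Z_{i})/2$ of (2.18) and tests the $(2,3,2)$-gYBE on five qubits for random parameters:

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
Z = np.diag([1.0, -1.0]).astype(complex)
SWAP = np.array([[1, 0, 0, 0], [0, 0, 1, 0], [0, 1, 0, 0], [0, 0, 0, 1]], dtype=complex)

def embed(op, site, n):
    # Embed an operator on consecutive sites starting at `site` into n qubits
    k = int(round(np.log2(op.shape[0])))
    return reduce(np.kron, [I2] * site + [op] + [I2] * (n - site - k))

p = [embed((I2 + Z) / 2, i, 3) for i in range(3)]          # p_i = (1 + Z_i)/2
s01, s12 = embed(SWAP, 0, 3), embed(SWAP, 1, 3)
s02 = s01 @ s12 @ s01                                      # s_{i,i+2} = s_i s_{i+1} s_i

rng = np.random.default_rng(3)
a1, a3, b1, b3 = rng.standard_normal(4)
b2 = -b1 * (1 + a3) / (1 + a1 + b1)                        # constraints (4.17)
g = b1 * (a3 - a1 - b1) / (1 + a1 + b1)                    # (alpha_2 = 0 is built in)
R = s02 @ (np.eye(8) + a1 * p[0] + a3 * p[2] + b1 * p[0] @ p[1]
           + b2 * p[1] @ p[2] + b3 * p[0] @ p[2] + g * p[0] @ p[1] @ p[2])

# (2,3,2)-gYBE: R on sites 0-2 versus R shifted to sites 2-4 (five qubits)
I4 = np.eye(4)
R1, R2 = np.kron(R, I4), np.kron(I4, R)
assert np.allclose(R1 @ R2 @ R1, R2 @ R1 @ R2)
```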
The inverse is given by
$R_{i}^{-1}=\left(1+\alpha_{1}^{\prime}~{}p_{i}+\alpha_{3}^{\prime}~{}p_{i+2}+\beta_{1}^{\prime}~{}p_{i}p_{i+1}+\beta_{2}^{\prime}~{}p_{i+1}p_{i+2}+\beta_{3}^{\prime}~{}p_{i}p_{i+2}+\gamma^{\prime}~{}p_{i}p_{i+1}p_{i+2}\right)s_{i,i+2},$
(4.18)
where
$\displaystyle\alpha_{1}^{\prime}$ $\displaystyle=$
$\displaystyle-\frac{\alpha_{1}}{1+\alpha_{1}},~{}~{}\alpha_{3}^{\prime}=-\frac{\alpha_{3}}{1+\alpha_{3}},$
$\displaystyle\beta_{1}^{\prime}$ $\displaystyle=$
$\displaystyle-\frac{\beta_{1}}{(1+\alpha_{1})(1+\alpha_{1}+\beta_{1})},~{}~{}\beta_{2}^{\prime}=-\frac{\beta_{2}}{(1+\alpha_{3})(1+\alpha_{3}+\beta_{2})},$
$\displaystyle\beta_{3}^{\prime}$ $\displaystyle=$
$\displaystyle\frac{\alpha_{1}\alpha_{3}(2+\alpha_{1}+\alpha_{3})-\beta_{3}(1-\alpha_{1}\alpha_{3})}{(1+\alpha_{1})(1+\alpha_{3})(1+\alpha_{1}+\alpha_{3}+\beta_{3})},$
$\displaystyle\gamma^{\prime}$ $\displaystyle=$
$\displaystyle-\frac{\beta_{1}(\alpha_{1}-\alpha_{3}+\beta_{1})}{(1+\alpha_{1})(1+\alpha_{3})(1+\alpha_{1}+\beta_{1})}.$
(4.19)
The image of the braid group representation constructed out of (4.16) with
parameters satisfying (4.17) is infinite, as seen by computing the powers of
the generalized $R$-matrix
$R_{i}^{n}=s_{i,i+2}^{n}\left(1+\alpha_{1}^{(n)}~{}p_{i}+\alpha_{3}^{(n)}~{}p_{i+2}+\beta_{1}^{(n)}~{}p_{i}p_{i+1}+\beta_{2}^{(n)}~{}p_{i+1}p_{i+2}+\beta_{3}^{(n)}~{}p_{i}p_{i+2}+\gamma^{(n)}~{}p_{i}p_{i+1}p_{i+2}\right),$
(4.20)
with the parameters defined recursively as
$\displaystyle\alpha_{1}^{(n)}=\alpha_{1}^{(1)}+\alpha_{3}^{(n-1)}+\alpha_{1}^{(1)}\alpha_{3}^{(n-1)},\qquad\alpha_{3}^{(n)}=\alpha_{1}^{(n-1)}+\alpha_{3}^{(1)}+\alpha_{1}^{(n-1)}\alpha_{3}^{(1)},$
(4.21)
and
$\displaystyle\beta_{1}^{(n)}$ $\displaystyle=$
$\displaystyle\beta_{1}^{(1)}+\beta_{2}^{(n-1)}+\beta_{1}^{(1)}\beta_{2}^{(n-1)}+\alpha_{1}^{(1)}\beta_{2}^{(n-1)}+\alpha_{3}^{(n-1)}\beta_{1}^{(1)},$
$\displaystyle\beta_{2}^{(n)}$ $\displaystyle=$
$\displaystyle\beta_{1}^{(n-1)}+\beta_{2}^{(1)}+\beta_{1}^{(n-1)}\beta_{2}^{(1)}+\alpha_{3}^{(1)}\beta_{1}^{(n-1)}+\alpha_{1}^{(n-1)}\beta_{2}^{(1)},$
$\displaystyle\beta_{3}^{(n)}$ $\displaystyle=$
$\displaystyle\beta_{3}^{(1)}+\beta_{3}^{(n-1)}+\beta_{3}^{(1)}\beta_{3}^{(n-1)}+\beta_{3}^{(1)}\left(\alpha_{1}^{(n-1)}+\alpha_{3}^{(n-1)}\right)+\beta_{3}^{(n-1)}\left(\alpha_{1}^{(1)}+\alpha_{3}^{(1)}\right),$
$\displaystyle\gamma^{(n)}$ $\displaystyle=$
$\displaystyle\gamma^{(1)}+\gamma^{(n-1)}+\gamma^{(1)}\gamma^{(n-1)}+\gamma^{(1)}\left(\alpha_{1}^{(n-1)}+\alpha_{3}^{(n-1)}+\beta_{1}^{(n-1)}+\beta_{2}^{(n-1)}+\beta_{3}^{(n-1)}\right)$
(4.22)
$\displaystyle+\gamma^{(n-1)}\left(\alpha_{1}^{(1)}+\alpha_{3}^{(1)}+\beta_{1}^{(1)}+\beta_{2}^{(1)}+\beta_{3}^{(1)}\right)$
$\displaystyle+\beta_{1}^{(n-1)}\left(\beta_{1}^{(1)}+\beta_{3}^{(1)}\right)+\beta_{2}^{(n-1)}\left(\beta_{2}^{(1)}+\beta_{3}^{(1)}\right)+\beta_{3}^{(n-1)}\left(\beta_{1}^{(1)}+\beta_{2}^{(1)}\right)$
$\displaystyle+\alpha_{1}^{(n-1)}\beta_{1}^{(1)}+\alpha_{3}^{(n-1)}\beta_{2}^{(1)}+\alpha_{1}^{(1)}\beta_{1}^{(n-1)}+\alpha_{3}^{(1)}\beta_{2}^{(n-1)},$
after identifying $\alpha_{1}^{(1)}$, $\alpha_{3}^{(1)}$, $\beta_{1}^{(1)}$,
$\beta_{2}^{(1)}$, $\beta_{3}^{(1)}$ and $\gamma^{(1)}$ with $\alpha_{1}$,
$\alpha_{3}$, $\beta_{1}$, $\beta_{2}$, $\beta_{3}$ and $\gamma$ in (4.16).
Unitary solutions occur when $\alpha_{1}^{\prime}=\alpha_{1}^{*}$,
$\alpha_{3}^{\prime}=\alpha_{3}^{*}$, $\beta_{1}^{\prime}=\beta_{1}^{*}$,
$\beta_{2}^{\prime}=\beta_{2}^{*}$, $\beta_{3}^{\prime}=\beta_{3}^{*}$, and
$\gamma^{\prime}=\gamma^{*}$ with $\alpha_{1}^{\prime},\cdots,\gamma^{\prime}$
given by (4.19). Their explicit form is given by
$\alpha_{1}=e^{\mathrm{i}\theta_{1}}-1,\qquad\alpha_{3}=e^{\mathrm{i}\theta_{3}}-1,\qquad\beta_{1}=e^{\mathrm{i}\varphi_{1}}-e^{\mathrm{i}\theta_{1}},\qquad\beta_{3}=e^{\mathrm{i}\varphi_{3}}-e^{\mathrm{i}\theta_{1}}-e^{\mathrm{i}\theta_{3}}+1,$
(4.23)
where $\theta_{1}$, $\theta_{3}$, $\varphi_{1}$ and $\varphi_{3}$ are
arbitrary angles, and $\beta_{2}$ and $\gamma$ are obtained from (4.17). These
operators with generic angles generate the infinite-dimensional braid group
just as the non-unitary operators. This is further seen from the eigenvalues
at these unitary solutions given by $\\{e^{-\mathrm{i}\varphi}_{(2)},\pm
e^{-\frac{\mathrm{i}}{2}\left(\theta_{1}+\theta_{3}\right)}_{(2)},1_{(2)}\\}$,
with the $n$ in $a_{(n)}$ denoting the multiplicity of the eigenvalue $a$.
There are 16 real unitary points for the parameters in (4.17), of which we
discuss eight. (The remaining eight fall into the same SLOCC classes as the
chosen eight.) These eight unitary solutions are not equivalent to each other
and generate the GHZ, $AC-B$ and product state SLOCC classes as shown in
Tables 2, 3 and 4, respectively.
| $(\alpha_{1},\alpha_{3},\beta_{1},\beta_{3})$ | $R$ | Eigenvalues | $n|R^{n}=1$
---|---|---|---|---
1. | $(-2,-2,2,2)$ | $\frac{1}{2}\tiny{\left(\begin{array}[]{cccccccc}0&-1&1&0&-1&0&0&-1\\\ -1&0&0&-1&0&-1&1&0\\\ 1&0&0&-1&0&-1&-1&0\\\ 0&-1&-1&0&1&0&0&-1\\\ -1&0&0&1&0&-1&-1&0\\\ 0&-1&-1&0&-1&0&0&1\\\ 0&1&-1&0&-1&0&0&-1\\\ -1&0&0&-1&0&1&-1&0\end{array}\right)}$ | $(-1_{(4)},1_{(4)})$ | 2
2. | $(-2,-2,2,4)$ | $\frac{1}{2}\tiny{\left(\begin{array}[]{cccccccc}1&0&1&0&0&1&0&-1\\\ 0&1&0&-1&1&0&1&0\\\ 1&0&1&0&0&-1&0&1\\\ 0&-1&0&1&1&0&1&0\\\ 0&1&0&1&1&0&-1&0\\\ 1&0&-1&0&0&1&0&1\\\ 0&1&0&1&-1&0&1&0\\\ -1&0&1&0&0&1&0&1\end{array}\right)}$ | $(-1_{(2)},1_{(6)})$ | 2
3. | $(-2,0,2,0)$ | $\frac{1}{2}\tiny{\left(\begin{array}[]{cccccccc}0&-1&0&-1&-1&0&1&0\\\ -1&0&1&0&0&-1&0&-1\\\ 0&-1&0&-1&1&0&-1&0\\\ 1&0&-1&0&0&-1&0&-1\\\ -1&0&-1&0&0&-1&0&1\\\ 0&-1&0&1&-1&0&-1&0\\\ -1&0&-1&0&0&1&0&-1\\\ 0&1&0&-1&-1&0&-1&0\end{array}\right)}$ | $(-1_{(2)},1_{(2)},\mathrm{i}_{(2)},-\mathrm{i}_{(2)})$ | 4
4. | $(-2,0,2,2)$ | $\frac{1}{2}\tiny{\left(\begin{array}[]{cccccccc}1&0&0&-1&0&1&1&0\\\ 0&1&1&0&1&0&0&-1\\\ 0&-1&1&0&1&0&0&1\\\ 1&0&0&1&0&-1&1&0\\\ 0&1&-1&0&1&0&0&1\\\ 1&0&0&1&0&1&-1&0\\\ -1&0&0&1&0&1&1&0\\\ 0&1&1&0&-1&0&0&1\end{array}\right)}$ | $(\mathrm{i}_{(2)},-\mathrm{i}_{(2)},1_{(4)})$ | 4
Table 2: 3-qubit unitary generalized $R$-matrices generating the GHZ SLOCC
class.
The generalized $R$-matrices in Table 2 generate the following entangled
states in the GHZ SLOCC class
$\displaystyle\frac{1}{2}\left[-{\left|{001}\right>}+{\left|{010}\right>}-{\left|{100}\right>}-{\left|{111}\right>}\right],\quad\frac{1}{2}\left[{\left|{000}\right>}+{\left|{010}\right>}+{\left|{101}\right>}-{\left|{111}\right>}\right],$
$\displaystyle\frac{1}{2}\left[-{\left|{001}\right>}+{\left|{011}\right>}-{\left|{100}\right>}-{\left|{110}\right>}\right],\quad\frac{1}{2}\left[{\left|{000}\right>}+{\left|{011}\right>}+{\left|{101}\right>}-{\left|{110}\right>}\right],$
(4.24)
which are equivalent to the standard GHZ state,
${\left|{000}\right>}+{\left|{111}\right>}$, by the application of appropriate
ILOs, such as, for example,
$\left(\begin{array}[]{cc}a_{1}&b_{1}\\\
\mathrm{i}a_{1}&-\mathrm{i}b_{1}\end{array}\right)\otimes\left(\begin{array}[]{cc}a_{2}&b_{2}\\\
-\mathrm{i}a_{2}&\mathrm{i}b_{2}\end{array}\right)\otimes\left(\begin{array}[]{cc}\frac{\mathrm{i}}{4a_{1}a_{2}}&-\frac{\mathrm{i}}{4b_{1}b_{2}}\\\
-\frac{1}{4a_{1}a_{2}}&-\frac{1}{4b_{1}b_{2}}\end{array}\right).$
The generalized $R$-matrices in Table 3 generate the partially entangled
states $AC-B$ given by
$\displaystyle\frac{1}{2}\left[-{\left|{000}\right>}-{\left|{001}\right>}-{\left|{100}\right>}+{\left|{101}\right>}\right],\quad\frac{1}{2}\left[{\left|{000}\right>}-{\left|{001}\right>}+{\left|{100}\right>}+{\left|{101}\right>}\right],$
(4.25)
respectively. The product state SLOCC class is instead reported in Table 4.
| $(\alpha_{1},\alpha_{3},\beta_{1},\beta_{3})$ | $R$ | Eigenvalues | $n|R^{n}=1$
---|---|---|---|---
1. | $(-2,-2,0,2)$ | $\frac{1}{2}\tiny{\left(\begin{array}[]{cccccccc}-1&-1&0&0&-1&1&0&0\\\ -1&1&0&0&-1&-1&0&0\\\ 0&0&-1&-1&0&0&-1&1\\\ 0&0&-1&1&0&0&-1&-1\\\ -1&-1&0&0&1&-1&0&0\\\ 1&-1&0&0&-1&-1&0&0\\\ 0&0&-1&-1&0&0&1&-1\\\ 0&0&1&-1&0&0&-1&-1\end{array}\right)}$ | $(-1_{(4)},1_{(4)})$ | 2
2. | $(-2,0,0,2)$ | $\frac{1}{2}\tiny{\left(\begin{array}[]{cccccccc}1&1&0&0&-1&1&0&0\\\ -1&1&0&0&1&1&0&0\\\ 0&0&1&1&0&0&-1&1\\\ 0&0&-1&1&0&0&1&1\\\ 1&1&0&0&1&-1&0&0\\\ 1&-1&0&0&1&1&0&0\\\ 0&0&1&1&0&0&1&-1\\\ 0&0&1&-1&0&0&1&1\end{array}\right)}$ | $(\mathrm{i}_{(2)},-\mathrm{i}_{(2)},1_{(4)})$ | 4
Table 3: 3-qubit unitary generalized $R$-matrices generating the $AC-B$ SLOCC class.
| $(\alpha_{1},\alpha_{3},\beta_{1},\beta_{3})$ | $R$ | Eigenvalues | $n|R^{n}=1$
---|---|---|---|---
1. | $(-2,-2,0,4)$ | $\tiny{\left(\begin{array}[]{cccccccc}0&0&0&0&0&1&0&0\\\ 0&1&0&0&0&0&0&0\\\ 0&0&0&0&0&0&0&1\\\ 0&0&0&1&0&0&0&0\\\ 0&0&0&0&1&0&0&0\\\ 1&0&0&0&0&0&0&0\\\ 0&0&0&0&0&0&1&0\\\ 0&0&1&0&0&0&0&0\end{array}\right)}$ | $(-1_{(2)},1_{(6)})$ | 2
2. | $(-2,0,0,0)$ | $\tiny{\left(\begin{array}[]{cccccccc}0&0&0&0&-1&0&0&0\\\ -1&0&0&0&0&0&0&0\\\ 0&0&0&0&0&0&-1&0\\\ 0&0&-1&0&0&0&0&0\\\ 0&0&0&0&0&-1&0&0\\\ 0&-1&0&0&0&0&0&0\\\ 0&0&0&0&0&0&0&-1\\\ 0&0&0&-1&0&0&0&0\end{array}\right)}$ | $(\mathrm{i}_{(2)},-\mathrm{i}_{(2)},-1_{(2)},1_{(2)})$ | 4
Table 4: 3-qubit unitary generalized $R$-matrices generating the product state
SLOCC class.
##### Using $s_{i}$, $p_{i,i+1}$ and $p_{i+1,i+2}$
The ansatz
$R_{i}=s_{i,i+2}\left(1+\alpha~{}p_{i,i+1}+\beta~{}p_{i+1,i+2}+\gamma~{}p_{i,i+1}p_{i+1,i+2}+\delta~{}p_{i,i+2}\right),$
(4.26)
with $s_{i,i+2}=s_{i}s_{i+1}s_{i}$, satisfies the $(d,3,2)$-gYBE when
$\gamma=-\frac{\alpha+\beta}{2}$. This is seen after simplifying the
expressions in the $(d,3,2)$-gYBE using the swapping property of the
permutation operator in (2.10), as done before.
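The same numerical strategy applies here; a sketch (ours), with $p_{i,j}=1+X_{i}X_{j}$ as in (2.18):

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
SWAP = np.array([[1, 0, 0, 0], [0, 0, 1, 0], [0, 1, 0, 0], [0, 0, 0, 1]], dtype=complex)

def xx(i, j, n):
    # p_{i,j} = 1 + X_i X_j on n qubits
    mats = [I2] * n
    mats[i] = X
    mats[j] = X
    return np.eye(2 ** n) + reduce(np.kron, mats)

p01, p12, p02 = xx(0, 1, 3), xx(1, 2, 3), xx(0, 2, 3)
s02 = np.kron(SWAP, I2) @ np.kron(I2, SWAP) @ np.kron(SWAP, I2)   # s_{i,i+2}

rng = np.random.default_rng(4)
alpha, beta, delta = rng.standard_normal(3)
gamma = -(alpha + beta) / 2                                        # the constraint
R = s02 @ (np.eye(8) + alpha * p01 + beta * p12
           + gamma * p01 @ p12 + delta * p02)                      # ansatz (4.26)

I4 = np.eye(4)
R1, R2 = np.kron(R, I4), np.kron(I4, R)
assert np.allclose(R1 @ R2 @ R1, R2 @ R1 @ R2)                     # (2,3,2)-gYBE
```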
The inverse, when $d=2$ and $\gamma=-\frac{\alpha+\beta}{2}$, is given by
$R_{i}^{-1}=\left(1+\alpha^{\prime}~{}p_{i,i+1}+\beta^{\prime}~{}p_{i+1,i+2}+\gamma^{\prime}~{}p_{i,i+1}p_{i+1,i+2}+\delta^{\prime}~{}p_{i,i+2}\right)s_{i,i+2},$
(4.27)
with
$\displaystyle\alpha^{\prime}=-\frac{\alpha}{1+2\alpha},\quad\beta^{\prime}=-\frac{\beta}{1+2\beta},\quad\delta^{\prime}=-\frac{\delta}{1+2\delta},\quad\gamma^{\prime}=\frac{\alpha+\beta+4\alpha\beta}{2+4(\alpha+\beta+2\alpha\beta)},$
(4.28)
which results in a unitary family when $\alpha^{\prime}=\alpha^{*}$,
$\beta^{\prime}=\beta^{*}$, $\gamma^{\prime}=\gamma^{*}$, and
$\delta^{\prime}=\delta^{*}$; these conditions are solved by
$\alpha=\frac{1}{2}(e^{\mathrm{i}\theta}-1),\qquad\beta=\frac{1}{2}(e^{\mathrm{i}\varphi}-1),\qquad\delta=\frac{1}{2}(e^{\mathrm{i}\phi}-1)$
(4.29)
with $\gamma=-\frac{\alpha+\beta}{2}$ for $\theta$, $\varphi$ and $\phi$
arbitrary angles. The eigenvalues of this operator at the unitary family are
given by $\\{e^{\mathrm{i}\phi}_{(4)},\pm
e^{\frac{\mathrm{i}}{2}\left(\theta+\varphi\right)}_{(2)}\\}$.
There are eight real unitary solutions, out of which four inequivalent unitary
solutions are shown in Table 5.
| $(\alpha,\beta,\delta)$ | $R_{i}$ | Eigenvalues | $n|R^{n}=1$
---|---|---|---|---
1. | $(-1,-1,-1)$ | $\tiny{\left(\begin{array}[]{cccccccc}-1&0&0&0&0&0&0&0\\\ 0&0&0&0&-1&0&0&0\\\ 0&0&-1&0&0&0&0&0\\\ 0&0&0&0&0&0&-1&0\\\ 0&-1&0&0&0&0&0&0\\\ 0&0&0&0&0&-1&0&0\\\ 0&0&0&-1&0&0&0&0\\\ 0&0&0&0&0&0&0&-1\end{array}\right)}$ | $(-1_{(6)},1_{(2)})$ | 2
2. | $(-1,-1,0)$ | $\tiny{\left(\begin{array}[]{cccccccc}0&0&0&0&0&1&0&0\\\ 0&1&0&0&0&0&0&0\\\ 0&0&0&0&0&0&0&1\\\ 0&0&0&1&0&0&0&0\\\ 0&0&0&0&1&0&0&0\\\ 1&0&0&0&0&0&0&0\\\ 0&0&0&0&0&0&1&0\\\ 0&0&1&0&0&0&0&0\end{array}\right)}$ | $(-1_{(2)},1_{(6)})$ | 2
3. | $(-1,0,-1)$ | $\frac{1}{2}\tiny{\left(\begin{array}[]{cccccccc}-1&0&0&1&0&-1&-1&0\\\ 0&-1&-1&0&-1&0&0&1\\\ 0&1&-1&0&-1&0&0&-1\\\ -1&0&0&-1&0&1&-1&0\\\ 0&-1&1&0&-1&0&0&-1\\\ -1&0&0&-1&0&-1&1&0\\\ 1&0&0&-1&0&-1&-1&0\\\ 0&-1&-1&0&1&0&0&-1\end{array}\right)}$ | $(-1_{(4)},\mathrm{i}_{(2)},-\mathrm{i}_{(2)})$ | 4
4. | $(-1,0,0)$ | $\frac{1}{2}\tiny{\left(\begin{array}[]{cccccccc}1&0&0&1&0&1&-1&0\\\ 0&1&-1&0&1&0&0&1\\\ 0&1&1&0&-1&0&0&1\\\ -1&0&0&1&0&1&1&0\\\ 0&1&1&0&1&0&0&-1\\\ 1&0&0&-1&0&1&1&0\\\ 1&0&0&1&0&-1&1&0\\\ 0&-1&1&0&1&0&0&1\end{array}\right)}$ | $(\mathrm{i}_{(2)},-\mathrm{i}_{(2)},1_{(4)})$ | 4
Table 5: Unitary solutions in the 3-qubit case for the generalized Yang–Baxter
operator in (4.26) with $\gamma=-\frac{\alpha+\beta}{2}$.
The image of the braid group representations constructed out of the
generalized $R$-matrix in (4.26) for $d=2$ and
$\gamma=-\frac{\alpha+\beta}{2}$, for both the unitary and non-unitary cases,
is once again infinite as in the previous cases, as seen by computing the
powers of the generalized $R$-matrix,
$R_{i}^{n}=s_{i,i+2}^{n}\left(1+\alpha_{n}~{}p_{i,i+1}+\beta_{n}~{}p_{i+1,i+2}+\gamma_{n}~{}p_{i,i+1}p_{i+1,i+2}+\delta_{n}~{}p_{i,i+2}\right),$
(4.30)
with the parameters defined recursively as
$\displaystyle\alpha_{n}$ $\displaystyle=$
$\displaystyle\alpha_{1}+\beta_{n-1}+2\alpha_{1}\beta_{n-1},\qquad\beta_{n}=\alpha_{n-1}+\beta_{1}+2\alpha_{n-1}\beta_{1},$
(4.31) $\displaystyle\gamma_{n}$ $\displaystyle=$
$\displaystyle\gamma_{1}+\gamma_{n-1}+4\gamma_{1}\gamma_{n-1}+2\delta_{1}\gamma_{n-1}+2\gamma_{1}\delta_{n-1}+\delta_{n-1}\left(\alpha_{1}+\beta_{1}\right)+\delta_{1}\left(\alpha_{n-1}+\beta_{n-1}\right)$
(4.32)
$\displaystyle+2\gamma_{n-1}\left(\alpha_{1}+\beta_{1}\right)+2\gamma_{1}\left(\alpha_{n-1}+\beta_{n-1}\right)+\alpha_{1}\alpha_{n-1}+\beta_{1}\beta_{n-1},$
$\displaystyle\delta_{n}$ $\displaystyle=$
$\displaystyle\delta_{1}+\delta_{n-1}+2\delta_{1}\delta_{n-1},$ (4.33)
after identifying $\alpha_{1}$, $\beta_{1}$, $\gamma_{1}$ and $\delta_{1}$
with $\alpha$, $\beta$, $\gamma$ and $\delta$ in (4.26).
The generalized $R$-matrices in the first two rows of Table 5 are not
entanglers, as they generate only the product state SLOCC class. The
generalized $R$-matrices in the last two rows, on the other hand, are both
entanglers, generating the GHZ SLOCC class. In particular, the states they
generate by acting on ${\left|{000}\right>}$ are
$\displaystyle\frac{1}{2}\left[-{\left|{000}\right>}-{\left|{011}\right>}-{\left|{101}\right>}+{\left|{110}\right>}\right],\qquad\frac{1}{2}\left[{\left|{000}\right>}-{\left|{011}\right>}+{\left|{101}\right>}+{\left|{110}\right>}\right],$
(4.34)
which are equivalent to the standard GHZ state
${\left|{000}\right>}+{\left|{111}\right>}$ by appropriate ILOs, such as, for
example,
$\left(\begin{array}[]{cc}a_{1}&b_{1}\\\
\mathrm{i}a_{1}&-\mathrm{i}b_{1}\end{array}\right)\otimes\left(\begin{array}[]{cc}a_{2}&b_{2}\\\
\mathrm{i}a_{2}&-\mathrm{i}b_{2}\end{array}\right)\otimes\left(\begin{array}[]{cc}-\frac{1}{4a_{1}a_{2}}&-\frac{1}{4b_{1}b_{2}}\\\
\frac{\mathrm{i}}{4a_{1}a_{2}}&-\frac{\mathrm{i}}{4b_{1}b_{2}}\end{array}\right).$
### 4.3 4-qubits
As one increases the number of qubits, the analytic computation for the
generalized $R$-matrices gets more tedious. To illustrate the feasibility of
the method, we write down the answers for the 4-qubit case as well. We use the
operators $s_{i}$, $p_{i}$, $p_{i+1}$, $p_{i+2}$ and $p_{i+3}$ to build the
generalized $R$-matrices. The generalized $R$-matrices that satisfy the
$(d,4,2)$-gYBE and the $(d,4,3)$-gYBE also satisfy far-commutativity yielding
braid group representations. These are the cases we focus on.
#### The $(d,4,2)$-generalized $R$-matrix
The generalized $R$-matrix
$\displaystyle R_{i}$ $\displaystyle=$ $\displaystyle
s_{i,i+2}\left(1+\alpha_{1}~{}p_{i}+\alpha_{3}~{}p_{i+2}+\beta_{1}~{}p_{i}p_{i+1}+\beta_{2}~{}p_{i}p_{i+2}\right.$
(4.35)
$\displaystyle+\beta_{3}~{}p_{i}p_{i+3}+\beta_{4}~{}p_{i+1}p_{i+2}+\beta_{6}~{}p_{i+2}p_{i+3}+\gamma_{1}~{}p_{i}p_{i+1}p_{i+2}$
$\displaystyle+\left.\gamma_{2}~{}p_{i}p_{i+1}p_{i+3}+\gamma_{3}~{}p_{i}p_{i+2}p_{i+3}+\gamma_{4}~{}p_{i+1}p_{i+2}p_{i+3}+\delta~{}p_{i}p_{i+1}p_{i+2}p_{i+3}\right),$
with $s_{i,i+2}=s_{i}s_{i+1}s_{i}$, satisfies the $(d,4,2)$-gYBE for
$\displaystyle\beta_{4}$ $\displaystyle=$
$\displaystyle-\frac{\beta_{1}\left(1+\alpha_{3}\right)}{1+\alpha_{1}+\beta_{1}},~{}~{}\beta_{6}=-\frac{\beta_{3}\left(1+\alpha_{3}\right)}{1+\alpha_{1}+\beta_{3}},$
$\displaystyle\gamma_{1}$ $\displaystyle=$
$\displaystyle-\frac{\beta_{1}\left(\alpha_{1}+\beta_{1}-\alpha_{3}\right)}{1+\alpha_{1}+\beta_{1}},~{}~{}\gamma_{3}=-\frac{\beta_{3}\left(\alpha_{1}+\beta_{3}-\alpha_{3}\right)}{1+\alpha_{1}+\beta_{3}},$
$\displaystyle\gamma_{4}$ $\displaystyle=$
$\displaystyle\left(1+\alpha_{3}\right)\left[\frac{\beta_{1}}{1+\alpha_{1}+\beta_{1}}-\frac{\left(1+\alpha_{1}\right)\left(\beta_{1}+\gamma_{2}\right)}{\left(1+\alpha_{1}+\beta_{3}\right)\left(1+\alpha_{1}+\beta_{1}+\beta_{3}+\gamma_{2}\right)}\right],$
$\displaystyle\delta$ $\displaystyle=$
$\displaystyle-\gamma_{2}+\frac{\left(1+\alpha_{3}\right)\left[-\beta_{1}\beta_{3}\left(2\alpha_{1}+\beta_{1}+\beta_{3}+2\right)+\gamma_{2}\left(\left(1+\alpha_{1}\right)^{2}-\beta_{1}\beta_{3}\right)\right]}{\left(1+\alpha_{1}+\beta_{1}\right)\left(1+\alpha_{1}+\beta_{3}\right)\left(1+\alpha_{1}+\beta_{1}+\beta_{3}+\gamma_{2}\right)}.$
(4.36)
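As before, these constraints can be checked numerically; the sketch below (ours) assembles (4.35)-(4.36) with the diagonal representation $p_{i}=(1+Z_{i})/2$ (the constraints are representation-independent) and tests the $(2,4,2)$-gYBE on six qubits:

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
Z = np.diag([1.0, -1.0]).astype(complex)
SWAP = np.array([[1, 0, 0, 0], [0, 0, 1, 0], [0, 1, 0, 0], [0, 0, 0, 1]], dtype=complex)

def embed(op, site, n):
    k = int(round(np.log2(op.shape[0])))
    return reduce(np.kron, [I2] * site + [op] + [I2] * (n - site - k))

p = [embed((I2 + Z) / 2, i, 4) for i in range(4)]                  # p_i on 4 qubits
s02 = embed(SWAP, 0, 4) @ embed(SWAP, 1, 4) @ embed(SWAP, 0, 4)    # s_{i,i+2}

rng = np.random.default_rng(5)
a1, a3, b1, b2, b3, g2 = rng.standard_normal(6)
b4 = -b1 * (1 + a3) / (1 + a1 + b1)                                # relations (4.36)
b6 = -b3 * (1 + a3) / (1 + a1 + b3)
g1 = -b1 * (a1 + b1 - a3) / (1 + a1 + b1)
g3 = -b3 * (a1 + b3 - a3) / (1 + a1 + b3)
g4 = (1 + a3) * (b1 / (1 + a1 + b1)
     - (1 + a1) * (b1 + g2) / ((1 + a1 + b3) * (1 + a1 + b1 + b3 + g2)))
d = -g2 + (1 + a3) * (-b1 * b3 * (2 * a1 + b1 + b3 + 2)
     + g2 * ((1 + a1) ** 2 - b1 * b3)) / ((1 + a1 + b1) * (1 + a1 + b3)
                                          * (1 + a1 + b1 + b3 + g2))

R = s02 @ (np.eye(16) + a1 * p[0] + a3 * p[2]                      # ansatz (4.35)
           + b1 * p[0] @ p[1] + b2 * p[0] @ p[2] + b3 * p[0] @ p[3]
           + b4 * p[1] @ p[2] + b6 * p[2] @ p[3]
           + g1 * p[0] @ p[1] @ p[2] + g2 * p[0] @ p[1] @ p[3]
           + g3 * p[0] @ p[2] @ p[3] + g4 * p[1] @ p[2] @ p[3]
           + d * p[0] @ p[1] @ p[2] @ p[3])

I4 = np.eye(4)
R1, R2 = np.kron(R, I4), np.kron(I4, R)                            # sites 0-3 and 2-5
assert np.allclose(R1 @ R2 @ R1, R2 @ R1 @ R2)                     # (2,4,2)-gYBE
```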
The solutions become unitary when
$\displaystyle\alpha_{1}^{*}$ $\displaystyle=$
$\displaystyle-\frac{\alpha_{1}}{1+\alpha_{1}},~{}~{}\alpha_{3}^{*}=-\frac{\alpha_{3}}{1+\alpha_{3}},$
$\displaystyle\beta_{1}^{*}$ $\displaystyle=$
$\displaystyle-\frac{\beta_{1}}{(1+\alpha_{1})(1+\alpha_{1}+\beta_{1})},~{}~{}\beta_{3}^{*}=-\frac{\beta_{3}}{(1+\alpha_{1})(1+\alpha_{1}+\beta_{3})},$
$\displaystyle\beta_{2}^{*}$ $\displaystyle=$
$\displaystyle\frac{\alpha_{1}}{1+\alpha_{1}}-\frac{1}{1+\alpha_{3}}+\frac{1}{1+\alpha_{1}+\alpha_{3}+\beta_{2}},$
$\displaystyle\gamma_{2}^{*}$ $\displaystyle=$
$\displaystyle\frac{\beta_{1}}{(1+\alpha_{1})(1+\alpha_{1}+\beta_{1})}-\frac{1}{1+\alpha_{1}+\beta_{3}}+\frac{1}{1+\alpha_{1}+\beta_{1}+\beta_{3}+\gamma_{2}},$
(4.37)
which is solved by
$\displaystyle\alpha_{1}$ $\displaystyle=$ $\displaystyle
e^{\mathrm{i}\theta_{1}}-1,~{}~{}\alpha_{3}=e^{\mathrm{i}\theta_{3}}-1,$
$\displaystyle\beta_{1}$ $\displaystyle=$ $\displaystyle
e^{\mathrm{i}\phi_{1}}-e^{\mathrm{i}\theta_{1}},~{}~{}\beta_{3}=e^{\mathrm{i}\phi_{3}}-e^{\mathrm{i}\theta_{1}},$
$\displaystyle\beta_{2}$ $\displaystyle=$ $\displaystyle
e^{\mathrm{i}\phi_{2}}-e^{\mathrm{i}\theta_{1}}-e^{\mathrm{i}\theta_{3}}+1,~{}~{}\gamma_{2}=e^{\mathrm{i}\varphi_{2}}-e^{\mathrm{i}\phi_{1}}-e^{\mathrm{i}\phi_{3}}+e^{\mathrm{i}\theta_{1}},$
(4.38)
for arbitrary angles $\theta_{1}$, $\theta_{3}$, $\phi_{1}$, $\phi_{3}$ and
$\varphi_{2}$. The eigenvalues at these unitary solutions are given by
$\\{1_{(4)},\pm
e^{\frac{\mathrm{i}}{2}(\theta_{1}+\theta_{3})}_{(4)},e^{\mathrm{i}\phi_{2}}_{(4)}\\}$.
There are 64 real unitary generalized $R$-matrices in this case. They
encompass four sets of eigenvalues, with 16 unitary generalized $R$-matrices
in each of these sets. The unitary generalized $R$-matrices are inequivalent
when they belong to different eigenvalue sets; when they belong to the
same eigenvalue set they may or may not be equivalent. We write down one
unitary solution from each set of eigenvalues.
##### Eigenvalues $\in\\{-1_{(8)},1_{(8)}\\}$
The solution with
$\left(\alpha_{1},\alpha_{3},\beta_{1},\beta_{2},\beta_{3},\gamma_{2}\right)=\left(-2,-2,0,2,0,0\right)$
reads explicitly
$\frac{1}{2}\left(\tiny{\begin{array}[]{cccccccc|cccccccc}-1&0&-1&0&0&0&0&0&-1&0&1&0&0&0&0&0\\\
0&-1&0&-1&0&0&0&0&0&-1&0&1&0&0&0&0\\\ -1&0&1&0&0&0&0&0&-1&0&-1&0&0&0&0&0\\\
0&-1&0&1&0&0&0&0&0&-1&0&-1&0&0&0&0\\\ 0&0&0&0&-1&0&-1&0&0&0&0&0&-1&0&1&0\\\
0&0&0&0&0&-1&0&-1&0&0&0&0&0&-1&0&1\\\ 0&0&0&0&-1&0&1&0&0&0&0&0&-1&0&-1&0\\\
0&0&0&0&0&-1&0&1&0&0&0&0&0&-1&0&-1\\\
\hline\cr-1&0&-1&0&0&0&0&0&1&0&-1&0&0&0&0&0\\\
0&-1&0&-1&0&0&0&0&0&1&0&-1&0&0&0&0\\\ 1&0&-1&0&0&0&0&0&-1&0&-1&0&0&0&0&0\\\
0&1&0&-1&0&0&0&0&0&-1&0&-1&0&0&0&0\\\ 0&0&0&0&-1&0&-1&0&0&0&0&0&1&0&-1&0\\\
0&0&0&0&0&-1&0&-1&0&0&0&0&0&1&0&-1\\\ 0&0&0&0&1&0&-1&0&0&0&0&0&-1&0&-1&0\\\
0&0&0&0&0&1&0&-1&0&0&0&0&0&-1&0&-1\end{array}}\right).$ (4.39)
This generates
$\frac{1}{2}\left[-{\left|{0000}\right>}-{\left|{0010}\right>}-{\left|{1000}\right>}+{\left|{1010}\right>}\right]$
from ${\left|{0000}\right>}$.
##### Eigenvalues $\in\\{-1_{(4)},1_{(12)}\\}$
The solution for
$\left(\alpha_{1},\alpha_{3},\beta_{1},\beta_{2},\beta_{3},\gamma_{2}\right)=\left(-2,-2,0,4,0,2\right)$
reads
$\frac{1}{4}\left(\tiny{\begin{array}[]{cccccccc|cccccccc}1&1&0&0&1&1&0&0&0&0&3&-1&0&0&-1&-1\\\
1&1&0&0&1&1&0&0&0&0&-1&3&0&0&-1&-1\\\ 0&0&3&-1&0&0&-1&-1&1&1&0&0&1&1&0&0\\\
0&0&-1&3&0&0&-1&-1&1&1&0&0&1&1&0&0\\\ 1&1&0&0&1&1&0&0&0&0&-1&-1&0&0&3&-1\\\
1&1&0&0&1&1&0&0&0&0&-1&-1&0&0&-1&3\\\ 0&0&-1&-1&0&0&3&-1&1&1&0&0&1&1&0&0\\\
0&0&-1&-1&0&0&-1&3&1&1&0&0&1&1&0&0\\\ \hline\cr
0&0&1&1&0&0&1&1&3&-1&0&0&-1&-1&0&0\\\ 0&0&1&1&0&0&1&1&-1&3&0&0&-1&-1&0&0\\\
3&-1&0&0&-1&-1&0&0&0&0&1&1&0&0&1&1\\\ -1&3&0&0&-1&-1&0&0&0&0&1&1&0&0&1&1\\\
0&0&1&1&0&0&1&1&-1&-1&0&0&3&-1&0&0\\\ 0&0&1&1&0&0&1&1&-1&-1&0&0&-1&3&0&0\\\
-1&-1&0&0&3&-1&0&0&0&0&1&1&0&0&1&1\\\
-1&-1&0&0&-1&3&0&0&0&0&1&1&0&0&1&1\end{array}}\right).$ (4.40)
This generates
$\frac{1}{4}\left[{\left|{0000}\right>}+{\left|{0001}\right>}+{\left|{0100}\right>}+{\left|{0101}\right>}+3{\left|{1010}\right>}-{\left|{1011}\right>}-{\left|{1110}\right>}-{\left|{1111}\right>}\right]$
from ${\left|{0000}\right>}$.
##### Eigenvalues $\in\\{-\mathrm{i}_{(4)},\mathrm{i}_{(4)},1_{(8)}\\}$
The solution with
$\left(\alpha_{1},\alpha_{3},\beta_{1},\beta_{2},\beta_{3},\gamma_{2}\right)=\left(-2,0,0,2,2,0\right)$
reads
$\frac{1}{2}\left(\tiny{\begin{array}[]{cccccccc|cccccccc}1&0&0&-1&0&0&0&0&0&1&1&0&0&0&0&0\\\
0&1&-1&0&0&0&0&0&1&0&0&1&0&0&0&0\\\ 0&1&1&0&0&0&0&0&1&0&0&-1&0&0&0&0\\\
1&0&0&1&0&0&0&0&0&1&-1&0&0&0&0&0\\\ 0&0&0&0&1&0&0&-1&0&0&0&0&0&1&1&0\\\
0&0&0&0&0&1&-1&0&0&0&0&0&1&0&0&1\\\ 0&0&0&0&0&1&1&0&0&0&0&0&1&0&0&-1\\\
0&0&0&0&1&0&0&1&0&0&0&0&0&1&-1&0\\\ \hline\cr
0&-1&1&0&0&0&0&0&1&0&0&1&0&0&0&0\\\ -1&0&0&1&0&0&0&0&0&1&1&0&0&0&0&0\\\
1&0&0&1&0&0&0&0&0&-1&1&0&0&0&0&0\\\ 0&1&1&0&0&0&0&0&-1&0&0&1&0&0&0&0\\\
0&0&0&0&0&-1&1&0&0&0&0&0&1&0&0&1\\\ 0&0&0&0&-1&0&0&1&0&0&0&0&0&1&1&0\\\
0&0&0&0&1&0&0&1&0&0&0&0&0&-1&1&0\\\
0&0&0&0&0&1&1&0&0&0&0&0&-1&0&0&1\end{array}}\right).$ (4.41)
This generates
$\frac{1}{2}\left[{\left|{0000}\right>}+{\left|{0011}\right>}-{\left|{1001}\right>}+{\left|{1010}\right>}\right]$
from ${\left|{0000}\right>}$.
##### Eigenvalues
$\in\\{-\mathrm{i}_{(4)},\mathrm{i}_{(4)},-1_{(4)},1_{(4)}\\}$
The solution with
$\left(\alpha_{1},\alpha_{3},\beta_{1},\beta_{2},\beta_{3},\gamma_{2}\right)=\left(-2,0,0,0,2,0\right)$
reads
$\frac{1}{2}\left(\tiny{\begin{array}[]{cccccccc|cccccccc}0&0&-1&-1&0&0&0&0&-1&1&0&0&0&0&0&0\\\
0&0&-1&-1&0&0&0&0&1&-1&0&0&0&0&0&0\\\ -1&1&0&0&0&0&0&0&0&0&-1&-1&0&0&0&0\\\
1&-1&0&0&0&0&0&0&0&0&-1&-1&0&0&0&0\\\ 0&0&0&0&0&0&-1&-1&0&0&0&0&-1&1&0&0\\\
0&0&0&0&0&0&-1&-1&0&0&0&0&1&-1&0&0\\\ 0&0&0&0&-1&1&0&0&0&0&0&0&0&0&-1&-1\\\
0&0&0&0&1&-1&0&0&0&0&0&0&0&0&-1&-1\\\
\hline\cr-1&-1&0&0&0&0&0&0&0&0&-1&1&0&0&0&0\\\
-1&-1&0&0&0&0&0&0&0&0&1&-1&0&0&0&0\\\ 0&0&-1&1&0&0&0&0&-1&-1&0&0&0&0&0&0\\\
0&0&1&-1&0&0&0&0&-1&-1&0&0&0&0&0&0\\\ 0&0&0&0&-1&-1&0&0&0&0&0&0&0&0&-1&1\\\
0&0&0&0&-1&-1&0&0&0&0&0&0&0&0&1&-1\\\ 0&0&0&0&0&0&-1&1&0&0&0&0&-1&-1&0&0\\\
0&0&0&0&0&0&1&-1&0&0&0&0&-1&-1&0&0\end{array}}\right).$ (4.42)
This generates
$\frac{1}{2}\left[-{\left|{0010}\right>}+{\left|{0011}\right>}-{\left|{1000}\right>}-{\left|{1001}\right>}\right]$
from ${\left|{0000}\right>}$.
#### The $(d,4,3)$-generalized $R$-matrix
The generalized $R$-matrix
$\displaystyle R_{i}$ $\displaystyle=$ $\displaystyle
s_{i,i+3}\left(1+\alpha_{1}~{}p_{i}+\alpha_{4}~{}p_{i+3}+\beta_{1}~{}p_{i}p_{i+1}\right.$
(4.43)
$\displaystyle+\left.\beta_{2}~{}p_{i}p_{i+2}+\beta_{3}~{}p_{i}p_{i+3}+\beta_{5}~{}p_{i+1}p_{i+3}+\beta_{6}~{}p_{i+2}p_{i+3}\right.$
$\displaystyle+\left.\gamma_{1}~{}p_{i}p_{i+1}p_{i+2}+\gamma_{2}~{}p_{i}p_{i+1}p_{i+3}+\gamma_{3}~{}p_{i}p_{i+2}p_{i+3}+\gamma_{4}~{}p_{i+1}p_{i+2}p_{i+3}\right.$
$\displaystyle+\left.\delta~{}p_{i}p_{i+1}p_{i+2}p_{i+3}\right),$
with $s_{i,i+3}=s_{i+2}s_{i+1}s_{i}s_{i+1}s_{i+2}$, satisfies the
$(d,4,3)$-gYBE for
$\displaystyle\beta_{5}$ $\displaystyle=$
$\displaystyle-\frac{\beta_{1}\left(1+\alpha_{4}\right)}{1+\alpha_{1}+\beta_{1}},~{}~{}\beta_{6}=-\frac{\beta_{2}\left(1+\alpha_{4}\right)}{1+\alpha_{1}+\beta_{2}},$
$\displaystyle\gamma_{2}$ $\displaystyle=$
$\displaystyle-\frac{\beta_{1}\left(\alpha_{1}+\beta_{1}-\alpha_{4}\right)}{1+\alpha_{1}+\beta_{1}},~{}~{}\gamma_{3}=-\frac{\beta_{2}\left(\alpha_{1}+\beta_{2}-\alpha_{4}\right)}{1+\alpha_{1}+\beta_{2}},$
$\displaystyle\gamma_{4}$ $\displaystyle=$
$\displaystyle\left(1+\alpha_{4}\right)\left[\frac{\beta_{1}}{1+\alpha_{1}+\beta_{1}}-\frac{\left(1+\alpha_{1}\right)\left(\beta_{1}+\gamma_{1}\right)}{\left(1+\alpha_{1}+\beta_{2}\right)\left(1+\alpha_{1}+\beta_{1}+\beta_{2}+\gamma_{1}\right)}\right],$
$\displaystyle\delta$ $\displaystyle=$
$\displaystyle-\gamma_{1}+\frac{\left(1+\alpha_{4}\right)\left[-\beta_{1}\beta_{2}\left(2\alpha_{1}+\beta_{1}+\beta_{2}+2\right)+\gamma_{1}\left(\left(1+\alpha_{1}\right)^{2}-\beta_{1}\beta_{2}\right)\right]}{\left(1+\alpha_{1}+\beta_{1}\right)\left(1+\alpha_{1}+\beta_{2}\right)\left(1+\alpha_{1}+\beta_{1}+\beta_{2}+\gamma_{1}\right)}.$
(4.44)
The solutions become unitary when
$\displaystyle\alpha_{1}^{*}$ $\displaystyle=$
$\displaystyle-\frac{\alpha_{1}}{1+\alpha_{1}},~{}~{}\alpha_{4}^{*}=-\frac{\alpha_{4}}{1+\alpha_{4}},$
$\displaystyle\beta_{1}^{*}$ $\displaystyle=$
$\displaystyle-\frac{\beta_{1}}{(1+\alpha_{1})(1+\alpha_{1}+\beta_{1})},~{}~{}\beta_{2}^{*}=-\frac{\beta_{2}}{(1+\alpha_{1})(1+\alpha_{1}+\beta_{2})},$
$\displaystyle\beta_{3}^{*}$ $\displaystyle=$
$\displaystyle\frac{\alpha_{1}}{1+\alpha_{1}}-\frac{1}{1+\alpha_{4}}+\frac{1}{1+\alpha_{1}+\alpha_{4}+\beta_{3}},$
$\displaystyle\gamma_{1}^{*}$ $\displaystyle=$
$\displaystyle\frac{\beta_{1}}{(1+\alpha_{1})(1+\alpha_{1}+\beta_{1})}-\frac{1}{1+\alpha_{1}+\beta_{2}}+\frac{1}{1+\alpha_{1}+\beta_{1}+\beta_{2}+\gamma_{1}},$
(4.45)
which is solved by
$\displaystyle\alpha_{1}$ $\displaystyle=$ $\displaystyle
e^{\mathrm{i}\theta_{1}}-1,~{}~{}\alpha_{4}=e^{\mathrm{i}\theta_{4}}-1,$
$\displaystyle\beta_{1}$ $\displaystyle=$ $\displaystyle
e^{\mathrm{i}\phi_{1}}-e^{\mathrm{i}\theta_{1}},~{}~{}\beta_{2}=e^{\mathrm{i}\phi_{2}}-e^{\mathrm{i}\theta_{1}},$
$\displaystyle\beta_{3}$ $\displaystyle=$ $\displaystyle
e^{\mathrm{i}\phi_{3}}-e^{\mathrm{i}\theta_{1}}-e^{\mathrm{i}\theta_{4}}+1,~{}~{}\gamma_{1}=e^{\mathrm{i}\varphi_{1}}-e^{\mathrm{i}\phi_{1}}-e^{\mathrm{i}\phi_{2}}+e^{\mathrm{i}\theta_{1}},$
(4.46)
for arbitrary angles $\theta_{1}$, $\theta_{4}$, $\phi_{1}$, $\phi_{2}$ and
$\varphi_{1}$. The eigenvalues at these unitary solutions are given by
$\\{1_{(4)},\pm
e^{\frac{\mathrm{i}}{2}(\theta_{1}+\theta_{4})}_{(4)},e^{\mathrm{i}\phi_{3}}_{(4)}\\}$.
As in the $(2,4,2)$ case, there are 64 real unitary generalized $R$-matrices,
and we write down one unitary solution from each set of eigenvalues. When the
parameters are complex we obtain a family of unitary solutions that generates
the infinite-dimensional braid group.
##### Eigenvalues $\in\\{-1_{(4)},1_{(12)}\\}$
When
$\left(\alpha_{1},\alpha_{4},\beta_{1},\beta_{2},\beta_{3},\gamma_{1}\right)=\left(-2,-2,0,0,4,2\right)$,
the matrix reads
$\frac{1}{4}\left(\tiny{\begin{array}[]{cccccccc|cccccccc}1&0&1&0&1&0&1&0&0&3&0&-1&0&-1&0&-1\\\
0&3&0&-1&0&-1&0&-1&1&0&1&0&1&0&1&0\\\ 1&0&1&0&1&0&1&0&0&-1&0&3&0&-1&0&-1\\\
0&-1&0&3&0&-1&0&-1&1&0&1&0&1&0&1&0\\\ 1&0&1&0&1&0&1&0&0&-1&0&-1&0&3&0&-1\\\
0&-1&0&-1&0&3&0&-1&1&0&1&0&1&0&1&0\\\ 1&0&1&0&1&0&1&0&0&-1&0&-1&0&-1&0&3\\\
0&-1&0&-1&0&-1&0&3&1&0&1&0&1&0&1&0\\\ \hline\cr
0&1&0&1&0&1&0&1&3&0&-1&0&-1&0&-1&0\\\ 3&0&-1&0&-1&0&-1&0&0&1&0&1&0&1&0&1\\\
0&1&0&1&0&1&0&1&-1&0&3&0&-1&0&-1&0\\\ -1&0&3&0&-1&0&-1&0&0&1&0&1&0&1&0&1\\\
0&1&0&1&0&1&0&1&-1&0&-1&0&3&0&-1&0\\\ -1&0&-1&0&3&0&-1&0&0&1&0&1&0&1&0&1\\\
0&1&0&1&0&1&0&1&-1&0&-1&0&-1&0&3&0\\\
-1&0&-1&0&-1&0&3&0&0&1&0&1&0&1&0&1\end{array}}\right).$ (4.47)
This generates
$\frac{1}{4}\left[{\left|{0000}\right>}+{\left|{0010}\right>}+{\left|{0100}\right>}+{\left|{0110}\right>}+3{\left|{1001}\right>}-{\left|{1011}\right>}-{\left|{1101}\right>}-{\left|{1111}\right>}\right]$
from ${\left|{0000}\right>}$.
##### Eigenvalues $\in\\{-1_{(8)},1_{(8)}\\}$
The solution for
$\left(\alpha_{1},\alpha_{4},\beta_{1},\beta_{2},\beta_{3},\gamma_{1}\right)=\left(-2,-2,0,0,2,0\right)$
reads
$\frac{1}{2}\left(\tiny{\begin{array}[]{cccccccc|cccccccc}-1&-1&0&0&0&0&0&0&-1&1&0&0&0&0&0&0\\\
-1&1&0&0&0&0&0&0&-1&-1&0&0&0&0&0&0\\\ 0&0&-1&-1&0&0&0&0&0&0&-1&1&0&0&0&0\\\
0&0&-1&1&0&0&0&0&0&0&-1&-1&0&0&0&0\\\ 0&0&0&0&-1&-1&0&0&0&0&0&0&-1&1&0&0\\\
0&0&0&0&-1&1&0&0&0&0&0&0&-1&-1&0&0\\\ 0&0&0&0&0&0&-1&-1&0&0&0&0&0&0&-1&1\\\
0&0&0&0&0&0&-1&1&0&0&0&0&0&0&-1&-1\\\
\hline\cr-1&-1&0&0&0&0&0&0&1&-1&0&0&0&0&0&0\\\
1&-1&0&0&0&0&0&0&-1&-1&0&0&0&0&0&0\\\ 0&0&-1&-1&0&0&0&0&0&0&1&-1&0&0&0&0\\\
0&0&1&-1&0&0&0&0&0&0&-1&-1&0&0&0&0\\\ 0&0&0&0&-1&-1&0&0&0&0&0&0&1&-1&0&0\\\
0&0&0&0&1&-1&0&0&0&0&0&0&-1&-1&0&0\\\ 0&0&0&0&0&0&-1&-1&0&0&0&0&0&0&1&-1\\\
0&0&0&0&0&0&1&-1&0&0&0&0&0&0&-1&-1\end{array}}\right).$ (4.48)
This generates
$\frac{1}{2}\left[-{\left|{0000}\right>}-{\left|{0001}\right>}-{\left|{1000}\right>}+{\left|{1001}\right>}\right]$
from ${\left|{0000}\right>}$.
##### Eigenvalues $\in\\{-\mathrm{i}_{(4)},\mathrm{i}_{(4)},1_{(8)}\\}$
The solution for
$\left(\alpha_{1},\alpha_{4},\beta_{1},\beta_{2},\beta_{3},\gamma_{1}\right)=\left(-2,0,0,0,2,2\right)$
reads
$\frac{1}{4}\left(\tiny{\begin{array}[]{cccccccc|cccccccc}2&1&0&-1&0&-1&0&-1&-1&2&1&0&1&0&1&0\\\
-1&2&1&0&1&0&1&0&2&1&0&-1&0&-1&0&-1\\\ 0&-1&2&1&0&-1&0&-1&1&0&-1&2&1&0&1&0\\\
1&0&-1&2&1&0&1&0&0&-1&2&1&0&-1&0&-1\\\ 0&-1&0&-1&2&1&0&-1&1&0&1&0&-1&2&1&0\\\
1&0&1&0&-1&2&1&0&0&-1&0&-1&2&1&0&-1\\\ 0&-1&0&-1&0&-1&2&1&1&0&1&0&1&0&-1&2\\\
1&0&1&0&1&0&-1&2&0&-1&0&-1&0&-1&2&1\\\ \hline\cr
1&2&-1&0&-1&0&-1&0&2&-1&0&1&0&1&0&1\\\ 2&-1&0&1&0&1&0&1&1&2&-1&0&-1&0&-1&0\\\
-1&0&1&2&-1&0&-1&0&0&1&2&-1&0&1&0&1\\\ 0&1&2&-1&0&1&0&1&-1&0&1&2&-1&0&-1&0\\\
-1&0&-1&0&1&2&-1&0&0&1&0&1&2&-1&0&1\\\ 0&1&0&1&2&-1&0&1&-1&0&-1&0&1&2&-1&0\\\
-1&0&-1&0&-1&0&1&2&0&1&0&1&0&1&2&-1\\\
0&1&0&1&0&1&2&-1&-1&0&-1&0&-1&0&1&2\end{array}}\right).$ (4.49)
This generates
$\frac{1}{4}\left[2{\left|{0000}\right>}-{\left|{0001}\right>}+{\left|{0011}\right>}+{\left|{0101}\right>}+{\left|{0111}\right>}+{\left|{1000}\right>}+2{\left|{1001}\right>}-{\left|{1010}\right>}-{\left|{1100}\right>}-{\left|{1110}\right>}\right]$
from ${\left|{0000}\right>}$.
##### Eigenvalues
$\in\\{-\mathrm{i}_{(4)},\mathrm{i}_{(4)},-1_{(4)},1_{(4)}\\}$
The solution for
$\left(\alpha_{1},\alpha_{4},\beta_{1},\beta_{2},\beta_{3},\gamma_{1}\right)=\left(-2,0,0,2,0,0\right)$
reads
$\frac{1}{2}\left(\tiny{\begin{array}[]{cccccccc|cccccccc}0&-1&0&-1&0&0&0&0&-1&0&1&0&0&0&0&0\\\
-1&0&1&0&0&0&0&0&-1&0&-1&0&0&0&0&0\\\ 0&-1&0&-1&0&0&0&0&1&0&-1&0&0&0&0&0\\\
1&0&-1&0&0&0&0&0&0&-1&0&-1&0&0&0&0\\\ 0&0&0&0&0&-1&0&-1&0&0&0&0&-1&0&1&0\\\
0&0&0&0&-1&0&1&0&0&0&0&0&0&-1&0&-1\\\ 0&0&0&0&0&-1&0&-1&0&0&0&0&1&0&-1&0\\\
0&0&0&0&1&0&-1&0&0&0&0&0&0&-1&0&-1\\\
\hline\cr-1&0&-1&0&0&0&0&0&0&-1&0&1&0&0&0&0\\\
0&-1&0&1&0&0&0&0&-1&0&-1&0&0&0&0&0\\\ -1&0&-1&0&0&0&0&0&0&1&0&-1&0&0&0&0\\\
0&1&0&-1&0&0&0&0&-1&0&-1&0&0&0&0&0\\\ 0&0&0&0&-1&0&-1&0&0&0&0&0&0&-1&0&1\\\
0&0&0&0&0&-1&0&1&0&0&0&0&-1&0&-1&0\\\ 0&0&0&0&-1&0&-1&0&0&0&0&0&0&1&0&-1\\\
0&0&0&0&0&1&0&-1&0&0&0&0&-1&0&-1&0\end{array}}\right).$ (4.50)
This generates
$\frac{1}{2}\left[-{\left|{0001}\right>}+{\left|{0011}\right>}-{\left|{1000}\right>}-{\left|{1010}\right>}\right]$
from ${\left|{0000}\right>}$.
### 4.4 4-qubits from 2-qubits via Temperley-Lieb generators
Up to this point we used the qubit representations of (2.20) in writing down
the $(d,2,1)$-, $(d,3,2)$-, $(d,4,2)$- and $(d,4,3)$-generalized Yang-Baxter
operators. As noted in Sec. 2, if we instead use the Temperley-Lieb
representation we would obtain the generalized Yang-Baxter operators that
solve the $(d,4,2)$-, $(d,6,4)$-, $(d,8,4)$-, and $(d,8,6)$-gYBEs. In what
follows we discuss the $(d,4,2)$-generalized $R$-matrices in detail. Note that
far-commutativity is satisfied for each of these operators, thus leading to
braid group representations.
By applying the realization (2.22) and (2.23) to (4.1) for the 2-qubit case,
we obtain the generalized Yang-Baxter operator for 4-qubits at the sites
$2i-1,2i,2i+1,2i+2$:
$R_{i}=s_{2i-1,\,2i+1}s_{2i,\,2i+2}\left(1+\alpha e_{2i-1}+\beta
e_{2i+1}+\gamma e_{2i-1}e_{2i+1}\right).$ (4.51)
It is clear that this solves $R_{i}R_{i+1}R_{i}=R_{i+1}R_{i}R_{i+1}$, but this
should be recognized as the $(d,4,2)$-gYBE instead of the $(d,4,1)$-gYBE. Note
that $R_{i}$ has a nontrivial support on the sites $2i-1$ to $2i+2$, whereas
$R_{i+1}$ has that on the sites $2i+1$ to $2i+4$. A shift of the index $i$ of
the $R$-matrix by one corresponds to a shift of the sites by two. The far-
commutativity relation is satisfied in the sense of $R_{i}R_{j}=R_{j}R_{i}$
for $|i-j|>1$.
If the Temperley-Lieb generators are expressed by hermitian matrices, we can
see that the solution is unitary for
$\alpha=\frac{1}{\Delta}(e^{\mathrm{i}\theta}-1),\qquad\beta=\frac{1}{\Delta}(e^{\mathrm{i}\varphi}-1),\qquad\gamma=\frac{1}{\Delta^{2}}(e^{\mathrm{i}\phi}-e^{\mathrm{i}\theta}-e^{\mathrm{i}\varphi}+1)$
(4.52)
with $\Delta\equiv Q+Q^{-1}$, and $\theta$, $\varphi$ and $\phi$ angles taking
any value. For real $\alpha$, $\beta$ and $\gamma$, there are 8 unitary
points:
$\displaystyle(\alpha,\beta,\gamma)$ $\displaystyle=$
$\displaystyle\large\\{(0,0,0),\left(0,0,-\frac{2}{\Delta^{2}}\right),\left(0,-\frac{2}{\Delta},0\right),\left(-\frac{2}{\Delta},0,0\right),$
(4.53)
$\displaystyle\left(0,-\frac{2}{\Delta},\frac{2}{\Delta^{2}}\right),\left(-\frac{2}{\Delta},0,\frac{2}{\Delta^{2}}\right),\left(-\frac{2}{\Delta},-\frac{2}{\Delta},\frac{2}{\Delta^{2}}\right),\left(-\frac{2}{\Delta},-\frac{2}{\Delta},\frac{4}{\Delta^{2}}\right)\large\\}.$
As a representation of the Temperley-Lieb generators for qubits at the sites
$i$ and $i+1$, we use the following matrix [36]
$e_{i}=\left(Q{\left|{01}\right>}-{\left|{10}\right>}\right)\left({\left<{01}\right|}-Q^{-1}{\left<{10}\right|}\right)=\begin{pmatrix}0&0&0&0\\\
0&Q&-1&0\\\ 0&-1&Q^{-1}&0\\\ 0&0&0&0\end{pmatrix}.$ (4.54)
Then, the eigenvalues of the $R$-matrix (4.51) with (4.52) are
$\\{1_{(6)},\,-1,\,e^{\mathrm{i}\phi},\,\pm
e^{\frac{\mathrm{i}}{2}(\theta+\varphi)}_{(3)}\\}$, showing that the
$R$-matrix generates the infinite-dimensional braid group. Note that these are
independent of the parameter $Q$. In what follows, we discuss the
corresponding entangled states for the non-trivial cases of (4.53). We obtain
$R$-matrices which are not equivalent to those of the $(d,4,2)$-cases in
the previous subsection.
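Since the claims above reduce to simple matrix identities, they can be checked numerically. The following minimal sketch (ours, not part of the original derivation; it assumes Python with numpy, and $Q=1.7$ is an arbitrary test value) verifies that the representation (4.54) satisfies the Temperley-Lieb relation $e_{i}^{2}=\Delta e_{i}$ and that $1-\frac{2}{\Delta}e_{i}$ is unitary, which, together with the unitarity of the swap operators, underlies the real unitary points in (4.53).

```python
# Sanity check of the Temperley-Lieb representation (4.54): e^2 = (Q+1/Q) e,
# and unitarity of 1 - (2/Delta) e at the real unitary points of (4.53).
import numpy as np

Q = 1.7                      # arbitrary nonzero real test value (assumption)
Delta = Q + 1.0 / Q

e = np.array([[0.0,  0.0,     0.0, 0.0],
              [0.0,    Q,    -1.0, 0.0],
              [0.0, -1.0, 1.0 / Q, 0.0],
              [0.0,  0.0,     0.0, 0.0]])

assert np.allclose(e @ e, Delta * e)    # Temperley-Lieb relation e^2 = Delta*e
R = np.eye(4) - (2.0 / Delta) * e       # building block of the R-matrix
assert np.allclose(R @ R.T, np.eye(4))  # unitary (real orthogonal, since e is symmetric)
assert np.allclose(R @ R, np.eye(4))    # and an involution, R^2 = 1
print("Temperley-Lieb relation and unitarity hold for Q =", Q)
```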
##### Case I: $\left(0,0,-\frac{2}{\Delta^{2}}\right)$
The solution satisfies $R_{i}^{2}=1$, and generates entangled states as
$\displaystyle R_{i}{\left|{0101}\right>}$ $\displaystyle=$
$\displaystyle\left(1-\frac{2Q^{2}}{\Delta^{2}}\right){\left|{0101}\right>}-\frac{2}{\Delta^{2}}{\left|{1010}\right>}+\frac{2Q}{\Delta^{2}}\left({\left|{0110}\right>}+{\left|{1001}\right>}\right),$
$\displaystyle R_{i}{\left|{1010}\right>}$ $\displaystyle=$
$\displaystyle\left(1-\frac{2Q^{-2}}{\Delta^{2}}\right){\left|{1010}\right>}-\frac{2}{\Delta^{2}}{\left|{0101}\right>}+\frac{2Q^{-1}}{\Delta^{2}}\left({\left|{0110}\right>}+{\left|{1001}\right>}\right),$
$\displaystyle R_{i}{\left|{0110}\right>}$ $\displaystyle=$
$\displaystyle\frac{Q^{2}+Q^{-2}}{\Delta^{2}}{\left|{1001}\right>}-\frac{2}{\Delta^{2}}{\left|{0110}\right>}+\frac{2Q}{\Delta^{2}}{\left|{0101}\right>}+\frac{2Q^{-1}}{\Delta^{2}}{\left|{1010}\right>},$
$\displaystyle R_{i}{\left|{1001}\right>}$ $\displaystyle=$
$\displaystyle\frac{Q^{2}+Q^{-2}}{\Delta^{2}}{\left|{0110}\right>}-\frac{2}{\Delta^{2}}{\left|{1001}\right>}+\frac{2Q}{\Delta^{2}}{\left|{0101}\right>}+\frac{2Q^{-1}}{\Delta^{2}}{\left|{1010}\right>}.$
(4.55)
For the other states, $R_{i}$ does not generate entanglement. We can see that
each of the four states in (4.55) is SLOCC equivalent to
${\left|{0000}\right>}+{\left|{1111}\right>}+\lambda\left({\left|{0011}\right>}+{\left|{1100}\right>}\right),$
(4.56)
with some coefficient $\lambda$. This falls into what is called $G_{abcd}$ in
[32] with $a=1+\lambda,b=c=0,d=1-\lambda$, or into what is called the class of
${\rm span}\\{0_{k}\Psi,0_{k}\Psi\\}$ in [34]. For example, the first state in
(4.55) is mapped to (4.56) by successively operating the following two ILOs:
$1\otimes X\otimes 1\otimes X,\quad{\rm
diag}\left(\left(1-\frac{2Q^{2}}{\Delta^{2}}\right)^{-1/4},\left(-\frac{2}{\Delta^{2}}\right)^{-1/4}\right)^{\otimes
4}.$ (4.57)
The matrix $R_{i}$ has eigenvalues $1_{(9)}$ and $-1_{(7)}$, which is
inequivalent to any of (4.39)-(4.42).
##### Case II: $\left(0,-\frac{2}{\Delta},0\right)$
The solution satisfies $R_{i}^{4}=1$ (but $R_{i}^{2},R_{i}^{3}\neq 1$), and
generates entangled states as the form of $(\mbox{the Bell
state})\otimes(\mbox{separable 2-qubit state})$. The eigenvalues of $R_{i}$
are $\\{-\mathrm{i}_{(3)},\mathrm{i}_{(3)},-1_{(4)},1_{(6)}\\}$, which is
inequivalent to any of (4.39)-(4.42).
##### Case III: $\left(-\frac{2}{\Delta},0,0\right)$
This case provides essentially the same entanglement as the case II, since the
two $R$-matrices are connected by swapping sites as
$\left.R_{i}\right|_{{\rm
case\,III}}=s_{2i-1,2i+1}s_{2i,2i+1}\left(\left.R_{i}\right|_{{\rm
case\,II}}\right)s_{2i-1,2i+1}s_{2i,2i+1}.$ (4.58)
##### Case IV: $\left(0,-\frac{2}{\Delta},\frac{2}{\Delta^{2}}\right)$
The solution satisfies $R_{i}^{4}=1$ (but $R_{i}^{2},R_{i}^{3}\neq 1$), and
gives entangled states as
$\displaystyle R_{i}{\left|{0101}\right>}$ $\displaystyle=$
$\displaystyle\left(1-\frac{2Q}{\Delta}+\frac{2Q^{2}}{\Delta^{2}}\right){\left|{0101}\right>}+\frac{2}{\Delta^{2}}{\left|{1010}\right>}-\frac{2Q}{\Delta^{2}}{\left|{0110}\right>}+\frac{2Q^{-1}}{\Delta^{2}}{\left|{1001}\right>},$
$\displaystyle R_{i}{\left|{1010}\right>}$ $\displaystyle=$
$\displaystyle\left(1-\frac{2Q^{-1}}{\Delta}+\frac{2Q^{-2}}{\Delta^{2}}\right){\left|{1010}\right>}+\frac{2}{\Delta^{2}}{\left|{0101}\right>}+\frac{2Q}{\Delta^{2}}{\left|{0110}\right>}-\frac{2Q^{-1}}{\Delta^{2}}{\left|{1001}\right>},$
$\displaystyle R_{i}{\left|{0110}\right>}$ $\displaystyle=$
$\displaystyle\left(1-\frac{2Q^{-1}}{\Delta}+\frac{2}{\Delta^{2}}\right){\left|{1001}\right>}+\frac{2}{\Delta^{2}}{\left|{0110}\right>}+\frac{2Q^{-1}}{\Delta^{2}}\left({\left|{0101}\right>}-{\left|{1010}\right>}\right),$
$\displaystyle R_{i}{\left|{1001}\right>}$ $\displaystyle=$
$\displaystyle\left(1-\frac{2Q}{\Delta}+\frac{2}{\Delta^{2}}\right){\left|{0110}\right>}+\frac{2}{\Delta^{2}}{\left|{1001}\right>}+\frac{2Q}{\Delta^{2}}\left({\left|{1010}\right>}-{\left|{0101}\right>}\right).$
(4.59)
Operating $R_{i}$ on any of the four states
${\left|{0001}\right>},{\left|{0010}\right>},{\left|{1101}\right>},{\left|{1110}\right>}$
generates entangled states like $(\mbox{the Bell
state})\otimes(\mbox{separable 2-qubit state})$. For the other states, $R_{i}$
gives product states. The four states in (4.59) are SLOCC equivalent to
(4.56). For example, successive operations of the three ILOs
$1\otimes X\otimes 1\otimes X,\quad 1\otimes\begin{pmatrix}iQ^{-1}&\\\
&1\end{pmatrix}\otimes\begin{pmatrix}-iQ&\\\ &1\end{pmatrix}\otimes
1,\quad{\rm
diag}\left(\left(1-\frac{2Q}{\Delta}+\frac{2Q^{2}}{\Delta^{2}}\right)^{-1/4},\left(\frac{2}{\Delta^{2}}\right)^{-1/4}\right)^{\otimes
4}$ (4.60)
map the first state in (4.59) to (4.56). The eigenvalues of $R_{i}$ are
$\\{-\mathrm{i}_{(3)},\mathrm{i}_{(3)},-1_{(3)},1_{(7)}\\}$, which is
inequivalent to any of (4.39)-(4.42).
##### Case V: $\left(-\frac{2}{\Delta},0,\frac{2}{\Delta^{2}}\right)$
This case is essentially the same as the case IV, because
$\left.R_{i}\right|_{{\rm
case\,V}}=s_{2i-1,2i+1}s_{2i,2i+1}\left(\left.R_{i}\right|_{{\rm
case\,IV}}\right)s_{2i-1,2i+1}s_{2i,2i+1}.$ (4.61)
##### Case VI:
$\left(-\frac{2}{\Delta},-\frac{2}{\Delta},\frac{2}{\Delta^{2}}\right)$
The solution satisfies $R_{i}^{2}=1$, and generates entangled states as
$\displaystyle R_{i}{\left|{0101}\right>}$ $\displaystyle=$
$\displaystyle\left(1-\frac{4Q}{\Delta}+\frac{2Q^{2}}{\Delta^{2}}\right){\left|{0101}\right>}+\frac{2}{\Delta^{2}}{\left|{1010}\right>}+\frac{2Q^{-1}}{\Delta^{2}}\left({\left|{0110}\right>}+{\left|{1001}\right>}\right),$
$\displaystyle R_{i}{\left|{1010}\right>}$ $\displaystyle=$
$\displaystyle\left(1-\frac{4Q^{-1}}{\Delta}+\frac{2Q^{-2}}{\Delta^{2}}\right){\left|{1010}\right>}+\frac{2}{\Delta^{2}}{\left|{0101}\right>}+\frac{2Q}{\Delta^{2}}\left({\left|{0110}\right>}+{\left|{1001}\right>}\right),$
$\displaystyle R_{i}{\left|{0110}\right>}$ $\displaystyle=$
$\displaystyle\left(-1+\frac{2}{\Delta^{2}}\right){\left|{1001}\right>}+\frac{2}{\Delta^{2}}{\left|{0110}\right>}+\frac{2Q}{\Delta^{2}}{\left|{1010}\right>}+\frac{2Q^{-1}}{\Delta^{2}}{\left|{0101}\right>},$
$\displaystyle R_{i}{\left|{1001}\right>}$ $\displaystyle=$
$\displaystyle\left(-1+\frac{2}{\Delta^{2}}\right){\left|{0110}\right>}+\frac{2}{\Delta^{2}}{\left|{1001}\right>}+\frac{2Q}{\Delta^{2}}{\left|{1010}\right>}+\frac{2Q^{-1}}{\Delta^{2}}{\left|{0101}\right>},$
(4.62)
which are SLOCC equivalent to (4.56). The states ${\left|{0001}\right>}$,
${\left|{1110}\right>}$ and their permutations with respect to the sites
provide the direct product of the Bell state and a separable 2-qubit state by
acting with $R_{i}$. For the other states, $R_{i}$ gives product states. The
$R_{i}$ has the eigenvalues $1_{(9)}$ and $-1_{(7)}$, which is inequivalent to
any of (4.39)-(4.42).
##### Case VII:
$\left(-\frac{2}{\Delta},-\frac{2}{\Delta},\frac{4}{\Delta^{2}}\right)$
Since the solution in this case is factorized as
$R_{i}=s_{2i-1,\,2i+1}\left(1-\frac{2}{\Delta}e_{2i-1}\right)s_{2i,\,2i+2}\left(1-\frac{2}{\Delta}e_{2i+1}\right),$
(4.63)
it is easy to see that $R_{i}^{2}=1$, and the entangled states obtained fall
into a class of the direct product of the two Bell states or the direct
product of the Bell state and a separable 2-qubit state. The eigenvalues of
$R_{i}$ are $1_{(10)}$ and $-1_{(6)}$, which is inequivalent to any of
(4.39)-(4.42).
Although various inequivalent solutions would be obtained by choosing other
representations of the Temperley-Lieb algebra, we leave this issue for future
work.
### 4.5 Algorithm for multi-qubit generalized $R$-matrices
The ansätze in (4.1), (4.16), (4.35), and (4.43) act as a guide for
constructing the generalized $R$-matrices in the multi-qubit case, while using
the available operators $s_{j}$, $j=i,\cdots,i+m-2$, and
$p_{j}$, $j=i,\cdots,i+m-1$. Throughout we assume that $2l\geq m$ in order to
ensure far-commutativity in addition to obeying the $(d,m,l)$-gYBE.
The generalized $R$-matrix that satisfies the $(d,m,l)$-gYBE and is made up of
products of the $p_{i}$ operators is
$\displaystyle R_{i}$ $\displaystyle=$ $\displaystyle
s_{i,i+l}\Big{[}1+\sum_{r=1}^{m-1}\Big{(}\sum_{\begin{subarray}{c}k_{1},\cdots,k_{r-1}=1\\\
0<k_{1}<\cdots<k_{r-1}<l\end{subarray}}^{m-1}\alpha^{(r)}_{0,k_{1},\cdots,k_{r-1}}p_{i}\prod_{j=1}^{r-1}p_{i+k_{j}}+\sum_{\begin{subarray}{c}k_{1},\cdots,k_{r-1}=1\\\
0<k_{1}<\cdots<k_{r-1}<l\end{subarray}}^{m-1}\alpha^{(r)}_{l,k_{1},\cdots,k_{r-1}}p_{i+l}\prod_{j=1}^{r-1}p_{i+k_{j}}$
(4.64) $\displaystyle\hskip
85.35826pt+\sum_{\begin{subarray}{c}k_{1},\cdots,k_{r-2}=1\\\
0<k_{1}<\cdots<k_{r-2}<l\end{subarray}}^{m-1}\alpha^{(r)}_{0,k_{1},\cdots,k_{r-2},l}p_{i}p_{i+l}\prod_{j=1}^{r-2}p_{i+k_{j}}\Big{)}+\alpha^{(m)}_{0,1,\cdots,m-1}\prod_{j=0}^{m-1}p_{i+j}\Big{]},$
where $s_{i,i+l}=s_{i+l-1}\cdots s_{i+1}s_{i}s_{i+1}\cdots s_{i+l-1}$ swaps
the sites $i$ and $i+l$. For $r=1$, $\prod_{j=1}^{r-1}p_{i+k_{j}}$ and
$\prod_{j=1}^{r-2}p_{i+k_{j}}$ should be regarded as $1$ and $0$,
respectively.
The coefficients can be determined by requiring this to satisfy
$R_{i}R_{i+l}R_{i}=R_{i+l}R_{i}R_{i+l}$. This computation can be done
analytically, but it is tedious. At the moment we do not have a general
analytic expression for the generalized $R$-matrix, but this algorithm works
as studied in the cases of three and four qubits. As seen in those cases, we
expect to obtain a family of both unitary and non-unitary generalized
$R$-matrices. The image of the braid group representations using the non-
unitary matrices and the complex unitary matrices will be infinite, whereas
the image of the braid group representations of the real unitary matrices is
expected to be finite.
Analogous generalized $R$-matrices solving the $(d,m,l)$-gYBE can be
constructed using the operators $s_{i,i+l}$ and the powers of
$p_{k_{1},k_{2}}$ where at least one of $k_{1},k_{2}$ is either $i$ or $i+l$.
We omit the general expression for such an operator here as it is
straightforward.
## 5 Comparison with known generalized $R$-matrices
The known unitary 3-qubit generalized $R$-matrices are the GHZ matrix obtained
from the extraspecial 2-group generators in [10], the generalization of the
Rowell solutions in [21], and the solutions obtained from ribbon fusion
categories in [12, 13]. We now show that the four unitary 3-qubit solutions in
Table 2 are inequivalent to all of the solutions above.
##### Non-equivalence to the GHZ matrix
The GHZ matrix that solves the $(2,3,2)$-gYBE is given by
$R_{\textrm{GHZ}}=\frac{1}{\sqrt{2}}~{}\left(\begin{array}[]{cccccccc}1&0&0&0&0&0&0&1\\\
0&1&0&0&0&0&1&0\\\ 0&0&1&0&0&1&0&0\\\ 0&0&0&1&1&0&0&0\\\ 0&0&0&-1&1&0&0&0\\\
0&0&-1&0&0&1&0&0\\\ 0&-1&0&0&0&0&1&0\\\ -1&0&0&0&0&0&0&1\end{array}\right)$
(5.1)
and has eigenvalues $e^{\pm\mathrm{i}\frac{\pi}{4}}$, both with multiplicity
4. The complex unitary 3-qubit matrices of the form (4.16) have eigenvalues
that cannot be mapped to the eigenvalues of the GHZ matrix either by an
inversion or by a scalar multiplication, and thus we conclude that at the
unitary points these solutions are inequivalent to the GHZ matrix.
At the same time the eigenvalues of the four real unitary 3-qubit matrices in
Table 2 cannot be mapped to these eigenvalues by a scalar multiplication or an
inversion, leading us to the conclusion that those matrices cannot possibly be
equivalent to the GHZ matrix.
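As a numerical sanity check (ours; it assumes Python with numpy and is not part of the original argument), the spectrum of (5.1) can be computed directly, confirming the eigenvalues $e^{\pm\mathrm{i}\frac{\pi}{4}}$, each with multiplicity 4:

```python
# Eigenvalues of the GHZ matrix (5.1): four copies each of exp(+-i*pi/4).
import numpy as np

R_GHZ = (1 / np.sqrt(2)) * np.array([
    [ 1,  0,  0,  0,  0,  0,  0,  1],
    [ 0,  1,  0,  0,  0,  0,  1,  0],
    [ 0,  0,  1,  0,  0,  1,  0,  0],
    [ 0,  0,  0,  1,  1,  0,  0,  0],
    [ 0,  0,  0, -1,  1,  0,  0,  0],
    [ 0,  0, -1,  0,  0,  1,  0,  0],
    [ 0, -1,  0,  0,  0,  0,  1,  0],
    [-1,  0,  0,  0,  0,  0,  0,  1]])

eigs = np.sort_complex(np.linalg.eigvals(R_GHZ))
print(np.round(eigs, 6))  # four times exp(-i*pi/4), four times exp(+i*pi/4)
```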
##### Comparison with the generalized Rowell solutions
In [21] the Rowell solutions of the form $\left(\begin{array}[]{cc}X&0\\\
0&Y\end{array}\right)=X\oplus Y$ are generalized to obtain three families of
solutions. They have eigenvalues from the sets
$\\{e^{-\mathrm{i}\frac{\pi}{12}},e^{-\mathrm{i}\frac{\pi}{12}},e^{\mathrm{i}\frac{7\pi}{12}},e^{\mathrm{i}\frac{7\pi}{12}}\\}$,
$\\{e^{-\mathrm{i}\frac{\pi}{4}},-e^{-\mathrm{i}\frac{\pi}{4}},e^{\mathrm{i}\frac{\pi}{4}},e^{\mathrm{i}\frac{\pi}{4}}\\}$
and
$\\{e^{-\mathrm{i}\frac{\pi}{4}},e^{-\mathrm{i}\frac{\pi}{4}},e^{\mathrm{i}\frac{\pi}{4}},e^{\mathrm{i}\frac{\pi}{4}}\\}$,
respectively. However, these solve the $(2,3,1)$-gYBEs and hence cannot be
compared to the solutions in this paper, which solve instead the
$(2,3,2)$-gYBE.
##### Comparison with the Kitaev-Wang solutions
The Kitaev-Wang solutions in [12] and their qubit realizations in [13] solve
the $(d,3,1)$-gYBE, whereas the methods presented in this paper generate the
generalized Yang-Baxter operators that solve the $(d,3,2)$-gYBE. Thus we
cannot compare the two generalized $R$-matrices.
## 6 Outlook
In this paper we have used the notion of partition algebras to introduce a
solution-generating technique to obtain parameter-independent generalized
$R$-matrices. This is quite remarkable, since solving the gYBE is a
notoriously difficult task. This is especially true for the parameter-
independent gYBE, for which very few solutions are known in the literature. In
some recent work [16], we have focused on the parameter-dependent case, using
supersymmetry algebras instead of partition algebras. In that case, the
relation between $R$-matrices and braid group representations was not very
clear, so that the present work should be considered as an improvement of our
previous analysis.
Improved as it may be, we need however to remark that the method based on
partition algebras has certain limitations. The main issue is that not all
SLOCC classes of entangled states seem to be obtainable from the generalized
$R$-matrices via partition algebras. In particular, we do not obtain the
W-state SLOCC class of a 3-qubit system in the case that $R$ is a unitary
braid operator. We suspect that this absence of the W-state class is not
peculiar to 3-qubit systems, and it might extend to the multi-qubit case as
well. It would be interesting to check whether this is true, although it would
represent a very laborious computation.
Understanding such a distinction of the W states, in particular from the GHZ
states, would be relevant to various applications in quantum information and
quantum computing. The GHZ states become unentangled bipartite mixed states
after tracing out one qubit, while the W states are still entangled. Thus, the
GHZ states and their multipartite analogs are fragile against particle loss
and suitable for quantum secret sharing. On the other hand, the W states and
their multiqubit versions are robust against loss with possible application to
quantum memories, multiparty quantum network protocols and universal quantum
cloning machines.
A new technique to construct solutions to the generalized Yang-Baxter equation
via an interplay with $k$-graphs is presented in [37]. It would also be
interesting to perform a similar analysis of the entanglement generated by
possible novel solutions obtained from that technique.
Of course, the physical realization of braiding operators is of the utmost
interest. A natural next step would then be to try to identify the anyons
corresponding to these representations and study their computational power.
This could possibly help identifying the unitary modular tensor categories
that describe these anyons [38].
On a complementary direction, one could construct new integrable spin chains
upon Baxterizing the 2-qubit $R$-matrices. In particular, using the Temperley-
Lieb representations of the 2-qubit $R$-matrices one could obtain new 4-site
interaction spin chains that are integrable. This is something on which we
hope to report at some point in the future.
##### Note added
After the conclusion of this work, we have found nonunitary braid operators
corresponding to W states [39], which were left out of the present analysis.
We achieved this by using partition algebras for W states in a 3-qubit space
and extraspecial 2-groups in the 4-qubit case.
### Acknowledgements
We thank Z. Wang for his comments on the manuscript. PP and FS are supported
by the Institute for Basic Science in Korea (IBS-R024-Y1, IBS-R018-D1). DT is
supported in part by the INFN grant Gauge and String Theory (GAST).
## References
* [1] P. K. Aravind, Borromean Entanglement of the GHZ state, in R. S. Cohen, M. Horne, J. Stachel (eds.), Potentiality, Entanglement and Passion-at-a-Distance, Boston Studies in the Philosophy of Science 194, Springer (1997), DOI:10.1007/978-94-017-2732-7.
* [2] A. Sugita, Borromean Entanglement Revisited, arXiv:0704.1712 [quant-ph].
* [3] G. M. Quinta and R. André, Classifying quantum entanglement through topological links, Phys. Rev. A 97, no. 4, 042307 (2018), DOI:10.1103/PhysRevA.97.042307 [arXiv:1803.08935 [quant-ph]].
* [4] L. H. Kauffman, S. J. Lomonaco, Quantum Entanglement and Topological Entanglement, New J. Phys. 4 73 (2002), DOI:10.1088/1367-2630/4/1/373 [arXiv:quant-ph/0205137].
* [5] L. H. Kauffman, S. J. Lomonaco Jr, Braiding Operators are Universal Quantum Gates, New J. Phys. 6 134 (2004), DOI:10.1088/1367-2630/6/1/134 [arXiv:quant-ph/0401090].
* [6] Y. Zhang, L. H. Kauffman, M.-L. Ge, Universal Quantum Gate, Yang-Baxterization and Hamiltonian, Int. J. of Quantum Inf., Vol. 3, No. 4 (2005) 669-678, DOI:10.1142/S0219749905001547 [arXiv:quant-ph/0412095].
* [7] Y. Zhang, L.H. Kauffman and M.L. Ge, Yang-Baxterizations, Universal Quantum Gates and Hamiltonians, Quant. Inf. Proc. 4 (2005) 159-197, DOI:10.1007/s11128-005-7655-7 [arXiv: quant-ph/0502015].
* [8] L. H. Kauffman, E. Mehrotra, Topological Aspects of Quantum Entanglement, Quantum Inf. Process (2019) 18: 76, DOI:10.1007/s11128-019-2191-z [arXiv:1611.08047 [math.GT]].
* [9] G. Alagic, M. Jarret, S. P. Jordan, Yang-Baxter operators need quantum entanglement to distinguish knots, J. Phys. A, 49 075203 (2016), DOI:10.1088/1751-8113/49/7/075203 [arXiv:1507.05979 [quant-ph]].
* [10] E. C. Rowell, Y. Zhang, Y. S. Wu, M. L. Ge, Extraspecial Two-Groups, Generalized Yang-Baxter Equations and Braiding Quantum Gates, Quant. Inf. Comput.10:685-702, 2010 DOI:10.26421/QIC10.7-8 [arXiv:0706.1761 [quant-ph]].
* [11] J. Franko, E. C. Rowell and Z. Wang, Extraspecial 2-Groups and Images of Braid Group Representations, J. Knot Theory Ramifications, 15 (2006) 413-428, DOI:10.1142/S0218216506004580 [arXiv:math.RT/0503435].
* [12] A. Kitaev, Z. Wang, Solutions to generalized Yang-Baxter equations via ribbon fusion categories, GTM 18 (2012) 191-197, DOI:10.2140/gtm.2012.18.191 [arXiv:1203.1063 [math.QA]].
* [13] J. F. Vasquez, Z. Wang, H. M. Wong, Qubit representations of the braid groups from generalized Yang-Baxter matrices, Quantum Information Processing 15(7) 2016, DOI:10.1007/s11128-016-1313-0 [arXiv:1602.08536 [math.QA]].
* [14] L. H. Kauffman, Knot Logic and Topological Quantum Computing with Majorana Fermions, in J. Chubb, A. Eskandarian, and V. Harizanov (eds.), Lecture Notes in Logic, Cambridge University Press, DOI:10.1017/CBO9781139519687.012 [arXiv:1301.6214 [quant-ph]].
* [15] L. H. Kauffman, S. J. Lomonaco Jr, $q$-Deformed Spin Networks, Knot Polynomials and Anyonic Topological Quantum Computation, J. Knot Theory Ramifications 16(3) 2007, DOI:10.1142/S0218216507005282 [arXiv:quant-ph/0606114v3].
* [16] P. Padmanabhan, F. Sugino, D. Trancanelli, Quantum entanglement, supersymmetry, and the generalized Yang-Baxter equation, Quantum Information and Computation Vol. 20, No. 1& 2, (2020) 0037-0064, DOI:10.26421/QIC20.1-2 [arXiv:1911.02577 [quant-ph]].
* [17] J. Hietarinta, All solutions to the constant quantum Yang-Baxter equation in two dimensions, Physics Letters A 165 (1992) 245-251, DOI:10.1016/0375-9601(92)90044-M.
* [18] J. Hietarinta, The upper triangular solutions to the three-state constant quantum Yang-Baxter equation, J. Physics A: General Physics 26(23):7077 (1999), DOI:10.1088/0305-4470/26/23/044 [arXiv:solv-int/9306001].
* [19] H. A. Dye, Unitary Solutions to the Yang-Baxter Equation in Dimension Four, Quantum Information Processing 2, 117-152 (2003), DOI:10.1023/A:1025843426102 [arXiv:quant-ph/0211050].
* [20] J. M. Franko, Braid Group Representations arising from the Yang Baxter Equation, J. Knot Theory Ramifications 19(4) 2010, DOI:10.1142/S021821651000798X [arXiv:0807.4138 [math.GT]].
* [21] R. Chen, Generalized Yang-Baxter Equations and Braiding Quantum Gates, J. Knot Theory Ramifications 21(9) 2011, DOI:10.1142/S0218216512500873 [arXiv:1108.5215 [math.QA]].
* [22] P. Gustafson, A. Kimball, E. C. Rowell, Q. Zhang, Braid group representations from twisted tensor products of algebras, Peking Math. Journal, DOI:10.1007/s42543-020-00023-5 [arXiv:1906.08153 [math.QA]].
* [23] V. F. R. Jones, The Potts model and the symmetry group, in Subfactors: Proceedings of the Taniguchi Symposium on Operator Algebras, Kyuzeso 1993, World Scientific Publishing (1994) 259-267.
* [24] P. Martin, Potts models and related problems in statistical mechanics, in Series on Advances in Statistical Mechanics, World Scientific Publishing (1991), DOI:10.1142/0983.
* [25] P. Martin, Temperley-Lieb Algebras for Non-Planar Statistical Mechanics - The Partition Algebra Construction, J. Knot Theory Ramifications, 3 (1994) 51-82, DOI:10.1142/S0218216594000071.
* [26] P. Martin, The structure of the partition algebras, J. Algebra 183 (1996) 319-358, DOI:10.1006/jabr.1996.022.
* [27] P. Martin, The partition algebra and the Potts model transfer matrix spectrum in high dimensions, J. Phys. A: Math. Gen. 33 (2000) 3669-3695, DOI:10.1088/0305-4470/33/19/304.
* [28] P. Martin, G. Rollet, The Potts model representation and a Robinson-Schensted correspondence for the partition algebra, Compositio Math. 112 (1998) 237-254, DOI:10.1023/A:100040041473.
* [29] T. Halverson, A. Ram, Partition Algebras, European J. of Combinatorics 26, Issue 6 (2005) 869-921, DOI:10.1016/j.ejc.2004.06.005 [arXiv:math/0401314 [math.RT]].
* [30] J. L. Brylinski and R. Brylinski, Universal Quantum Gates, in Mathematics of Quantum Computation, Chapman & Hall/CRC Press (2002), DOI:10.1201/9781420035377.pt2.
* [31] W. Dur, G. Vidal, J. I. Cirac, Three qubits can be entangled in two inequivalent ways, Phys. Rev. A 62, 062314 (2000), DOI:10.1103/PhysRevA.62.062314 [arXiv:quant-ph/0005115].
* [32] F. Verstraete, J. Dehaene, B. De Moor, H. Verschelde, Four qubits can be entangled in nine different ways, Phys. Rev. A 65, 052112 (2002), DOI:10.1103/PhysRevA.65.052112 [arXiv:quant-ph/0109033].
* [33] L. Lamata, J. Leon, D. Salgado, E. Solano, Inductive classification of multipartite entanglement under SLOCC, Phys. Rev. A 74, 052336 (2006), DOI:10.1103/PhysRevA.74.052336 [arXiv:quant-ph/0603243].
* [34] L. Lamata, J. Leon, D. Salgado, E. Solano, Inductive Entanglement Classification of Four Qubits under SLOCC, Phys. Rev. A 75, 022318 (2007), DOI:10.1103/PhysRevA.74.052336 [arXiv:quant-ph/0610233].
* [35] D. Li, X. Li, H. Huang, X. Li, SLOCC classification for nine families of four-qubits, Quantum Information and Computation, Vol. 9, No. 9 & 10 (2009) 0778-0800 DOI:10.26421/QIC9.9-10 [arXiv:0712.1876 [quant-ph]].
* [36] M. T. Batchelor, L. Mezincescu, R. I. Nepomechie, V. Rittenberg, q-deformations of the O(3) symmetric spin-1 Heisenberg chain, J. Phys. A 23, L141 (1990), DOI:10.1088/0305-4470/23/4/003.
* [37] D. Yang, The interplay between k-graphs and the Yang-Baxter equation, Journal of Algebra 451 (2016) 494-525, DOI:10.1016/j.jalgebra.2016.01.001 [arXiv:1506.03117 [math.QA]].
* [38] E. Rowell, R. Stong, Z. Wang, On classification of modular tensor categories, Comm. Math. Phys. 292 (2009) no. 2, 343-389, DOI:10.1007/s00220-009-0908-z [arXiv:0712.1377 [math.QA]].
* [39] P. Padmanabhan, F. Sugino and D. Trancanelli, Generating W states with braiding operators, arXiv:2007.05660 [quant-ph].
|
2024-09-04T02:54:55.858575 | 2020-02-29T14:09:34 | 2003.00262 | {
"authors": "Emeka Abakasanga, Nir Shlezinger, Ron Dabora",
"full_text_license": null,
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"provenance": "arxiv-papers-0000.json.gz:25959",
"submitter": "Emeka Abakasanga Godswill",
"url": "https://arxiv.org/abs/2003.00262"
} | arxiv-papers | # The Rate Distortion Function of Asynchronously Sampled Memoryless
Cyclostationary Gaussian Processes
Emeka Abakasanga, Nir Shlezinger, Ron Dabora
E. Abakasanga and R. Dabora are with the department of ECE, Ben-Gurion
University, Israel (e-mail<EMAIL_ADDRESS><EMAIL_ADDRESS>N.
Shlezinger is with the faculty of Math and CS, Weizmann Institute of Science,
Israel (e-mail: [email protected]). This work was supported by the
Israel Science Foundation under Grants 1685/16 and 0100101, and by the Israeli
Ministry of Economy through the HERON 5G consortium.
###### Abstract
Man-made communications signals are typically modelled as continuous-time (CT)
wide-sense cyclostationary (WSCS) processes. As modern processing is digital,
it operates on sampled versions of the CT signals. When sampling is applied to
a CT WSCS process, the statistics of the resulting discrete-time (DT) process
depends on the relationship between the sampling interval and the period of
the statistics of the CT process: When these two parameters have a common
integer factor, then the DT process is WSCS. This situation is referred to as
synchronous sampling. When this is not the case, which is referred to as
asynchronous sampling, the resulting DT process is wide-sense almost
cyclostationary (WSACS). Such acquired CT processes are commonly encoded using
a source code to facilitate storage or transmission over multi-hop networks
using compress-and-forward relaying. In this work, we study the fundamental
tradeoff of source codes applied to sampled CT WSCS processes, namely, their
rate-distortion function (RDF). We note that while RDF characterization for
the case of synchronous sampling directly follows from classic information-
theoretic tools utilizing ergodicity and the law of large numbers, when
sampling is asynchronous, the resulting process is not information stable. In
such cases, commonly used information-theoretic tools are inapplicable to RDF
analysis, which poses a major challenge. Using the information spectrum
framework, we show that the RDF for asynchronous sampling in the low
distortion regime can be expressed as the limit superior of a sequence of RDFs
in which each element corresponds to the RDF of a synchronously sampled WSCS
process (but their limit is not guaranteed to exist). The resulting
characterization allows us to introduce novel insights on the relationship
between sampling synchronization and RDF. For example, we demonstrate that,
differently from stationary processes, small differences in the sampling rate
and the sampling time offset can notably affect the RDF of sampled CT WSCS
processes.
## I Introduction
Man-made signals are typically generated using a repetitive procedure, which
takes place at fixed intervals. The resulting signals are thus commonly
modeled as continuous-time (CT) random processes exhibiting periodic
statistical properties [1, 2, 3], which are referred to as wide-sense
cyclostationary (WSCS) processes. In digital communications, where the
transmitted waveforms commonly obey the WSCS model [3], the received CT signal
is first sampled to obtain a discrete-time (DT) received signal. In the event
that the sampling interval is commensurate with the period of the statistics
of the CT WSCS signal, cyclostationarity is preserved in DT [3, Sec. 3.9]. In
this work, we refer to this situation as synchronous sampling. However, it is
practically common to encounter scenarios in which the sampling rate at the
receiver and symbol rate of the received CT WSCS process are incommensurate,
which is referred to as asynchronous sampling. The resulting sampled process
in such cases is a DT wide-sense almost cyclostationary (WSACS) stochastic
process [3, Sec. 3.9].
This research aims at investigating lossy source coding for asynchronously
sampled CT WSCS processes. In the source coding problem, every sequence of
information symbols from the source is mapped into a sequence of code symbols,
referred to as codewords, taken from a predefined codebook. In lossy source
coding, the source sequence is recovered up to a predefined distortion
constraint, within an arbitrary small tolerance of error. The figure-of-merit
for lossy source coding is the rate-distortion function (RDF) which
characterizes the minimum number of bits per symbol required to compress the
source sequence such that it can be reconstructed at the decoder within the
specified maximal distortion [4]. For an independent and identically
distributed (IID) random source process, the RDF can be expressed as the
minimum mutual information between the source variable and the reconstruction
variable, such that for the corresponding conditional distribution of the
reconstruction symbol given the source symbol, the distortion constraint is
satisfied [5, Ch. 10]. The source coding problem has been further studied in
multiple different scenarios, including the reconstruction of a single source
at multiple destinations [6] and the reconstruction of multiple correlated
stationary Gaussian sources at a single destination [7, 8, 9].
For stationary source processes, ergodicity theory and the asymptotic
equipartition property (AEP) [5, Ch. 3] were applied for characterizing the
RDF for different scenarios [10, Ch. 9], [4, Sec. I], [11]. However, as in a
broad range of applications, including digital communication networks, most CT
signals are WSCS, the sampling operation results in a DT source signal whose
statistics depends on the relationship between the sampling rate and the
period of the statistics of the source signal. When sampling is synchronous,
the resulting DT source signal is WSCS [3, Sec. 3.9]. The RDF for lossy
compression of DT WSCS Gaussian sources with memory was studied in [12]. This
work used the fact that any WSCS signal can be transformed into a set of
stationary subprocesses [2], thereby facilitating the application of
information-theoretic results obtained for multivariate stationary sources to
the derivation of the RDF. Nonetheless, in many digital communications
scenarios, the sampling rate
and the symbol rate of the CT WSCS process are not related in any way, and are
possibly incommensurate, resulting in a sampled process which is a DT WSACS
stochastic process [3, Sec. 3.9]. Such situations can occur as a result of the
a-priori determined values of the sampling interval and the symbol duration of
the WSCS source signal, as well as due to sampling clock jitters resulting
from hardware impairments. A comprehensive review of trends and applications
for almost cyclostationary signals can be found in [13]. Despite their
apparent frequent occurrences, the RDF for lossy compression of WSACS sources
was not characterized, which is the motivation for the current research. A
major challenge associated with characterizing fundamental limits for
asynchronously sampled WSCS processes stems from the fact that the resulting
processes are not information stable, in the sense that their conditional
distributions are not ergodic [14, Page X], [15], [16]. As a result, the
standard information-theoretic tools cannot be employed, making the
characterization of the RDF a very challenging problem.
Our recent study in [17] on channel coding reveals that for the case of
additive CT WSCS Gaussian noise, capacity varies significantly with sampling
rates, whether the Nyquist criterion is satisfied or not. In particular, it
was observed that the capacity can change dramatically with minor variations
in the sampling rate, causing it to switch from synchronous sampling to
asynchronous sampling. This is in direct contrast to the results obtained for
wide-sense stationary noise for which the capacity remains unchanged for any
sampling rate above the Nyquist rate [18]. A natural fundamental question that
arises from this result is how the RDF of a sampled Gaussian source process
varies with the sampling rate. As a motivating example, one may consider
compress-and-forward (CF) relaying, where the relay samples at a rate which
can be incommensurate with the symbol rate of the incoming communications
signal.
In this work, we employ the information spectrum framework [14] in
characterizing the RDF of asynchronously sampled memoryless Gaussian WSCS
processes, as this framework is applicable to the information-theoretic
analysis of non information-stable processes [14, Page VII]. We further note
that while rate characterizations obtained using information spectrum tools
and its associated quantities may be difficult to evaluate [14, Remark 1.7.3],
here we obtain a numerically computable characterization of the RDF. In
particular, we focus on the mean squared error (MSE) distortion measure in the
low distortion regime, namely, source codes for which the average MSE of the
difference between the source and the reproduction process is not larger than
the minimal source variance. The results of this research lead to accurate
modelling of signal compression in current and future digital communications
systems. Furthermore, we utilize our characterization of the RDF RDF for a
sampled CT WSCS Gaussian source with different sampling rates and sampling
time offsets. We demonstrate that, differently from stationary signals, when
applying a lossy source code a sampled WSCS process, the achievable rate-
distortion tradeoff can be significantly affected by minor variations in the
sampling time offset and the sampling rate. Our results thus allow identifying
the sampling rate and sampling time offsets which minimize the RDF in systems
involving asynchronously sampled WSCS processes.
The rest of this work is organised as follows: Section II provides a
scientific background on cyclostationary processes, and on rate-distortion
analysis of DT WSCS Gaussian sources. Section III presents the problem
formulation and auxiliary results, and Section IV details the main result of
RDF characterization for sampled WSCS Gaussian process. Numerical examples and
discussions are addressed in Section V, and Section VI concludes the paper.
## II Preliminaries and Background
In the following we review the main tools and framework used in this work: In
Subsection II-A, we detail the notations. In Subsection II-B we review the
basics of cyclostationary processes and the statistical properties of a DT
process resulting from sampling a CT WSCS process. In Subsection II-C, we
recall some preliminaries in rate-distortion theory, and present the RDF for a
DT WSCS Gaussian source process. This background creates a premise for the
statement of the main result provided in Section IV of this paper.
### II-A Notations
In this paper, random vectors are denoted by boldface uppercase letters, e.g.,
${\boldsymbol{X}}$; boldface lowercase letters denote deterministic column
vectors, e.g., ${\boldsymbol{x}}$. Scalar RVs and deterministic values are
denoted via standard uppercase and lowercase fonts respectively, e.g., $X$ and
$x$. Scalar random processes are denoted with $X(t),t\in\mathcal{R}$ for CT
and with $X[n],n\in\mathcal{Z}$ for DT. Uppercase Sans-Serif fonts represent
matrices, e.g., $\mathsf{A}$, and the element at the $i^{th}$ row and the
$l^{th}$ column of $\mathsf{A}$ is denoted with $(\mathsf{A})_{i,l}$. We use
$|\cdot|$ to denote the absolute value, $\lfloor d\rfloor,d\in\mathcal{R}$, to
denote the floor function, and $d^{+},d\in\mathcal{R}$, to denote
$\mathop{\max}\\{0,d\\}$. $\delta[\cdot]$ denotes the Kronecker delta
function: $\delta[n]=1$ for $n=0$ and $\delta[n]=0$ otherwise, and
$\mathbb{E}\\{\cdot\\}$ denotes the stochastic expectation. The sets of
positive integers, integers, rational numbers, real numbers, positive numbers,
and complex numbers are denoted by $\mathcal{N},\mathcal{Z}$, $\mathcal{Q}$,
$\mathcal{R}$, $\mathcal{R}^{++}$, and $\mathcal{C}$, respectively. The
cumulative distribution function (CDF) is denoted by
$F_{X}(x)\triangleq\Pr{(X\leq x)}$ and the probability density function (PDF)
of a CT random variable (RV) is denoted by $p_{X}(x)$. We represent a real
Gaussian distribution with mean $\mu$ and variance $\sigma^{2}$ by the
notation $\mathcal{N}(\mu,\sigma^{2})$. All logarithms are taken to base-2,
and $j=\sqrt{-1}$. Lastly, for any sequence $y[i]$, $i\in\mathcal{N}$, and
positive integer $k\in\mathcal{N}$, ${{\boldsymbol{y}}}^{(k)}$ denotes the
column vector $\big{(}{y}[1],\ldots,{y}[k]\big{)}^{T}$.
### II-B Wide-Sense Cyclostationary Random Processes
Here, we review some preliminaries in the theory of cyclostationarity. We
begin by recalling the definition of wide-sense cyclostationary processes:
###### Definition 1 (Wide-sense cyclostationary processes [2, Sec. 17.2]).
A scalar stochastic process $\\{S(t)\\}_{t\in\mathcal{T}}$, where
$\mathcal{T}$ is either discrete ($\mathcal{T}=\mathcal{Z}$) or continuous
($\mathcal{T}=\mathcal{R}$) is called WSCS if both its first-order and its
second-order moments are periodic with respect to $t\in\mathcal{T}$ with some
period $N_{p}\in\mathcal{T}$.
WSCS signals are thus random processes whose first and second-order moments are
periodic functions. To define WSACS signals, we first recall the definition of
almost-periodic functions:
###### Definition 2 (Almost-periodic-function [19]).
A function $x(t),t\in\mathcal{T}$ where $\mathcal{T}$ is either discrete
($\mathcal{T}=\mathcal{Z}$) or continuous ($\mathcal{T}=\mathcal{R}$), is
called an almost-periodic function if for every $\epsilon>0$ there exists a
number $l(\epsilon)>0$ with the property that any interval in $\mathcal{T}$ of
length $l(\epsilon)$ contains a $\tau$, such that
$|x(t+\tau)-x(t)|<\epsilon,\quad\forall t\in\mathcal{T}.$
###### Definition 3 (Wide-sense almost-cyclostationary processes [2, Def.
17.2]).
A scalar stochastic process $\left\\{S(t)\right\\}_{t\in\mathcal{T}}$ where
$\mathcal{T}$ is either discrete ($\mathcal{T}=\mathcal{Z}$) or continuous
($\mathcal{T}=\mathcal{R}$), is called WSACS if its first and its second order
moments are almost-periodic functions with respect to $t\in\mathcal{T}$.
The DT WSCS model is commonly used in the communications literature, as it
facilitates the analysis of many problems of interest, such as fundamental
rate limits analysis [20, 21, 22], channel identification [23],
synchronization [24], and noise mitigation [25]. However, in many scenarios,
the considered signals are WSACS rather than WSCS. To see how the WSACS model
is obtained in the context of sampled signals, we briefly recall the
discussion in [17] on sampled WSCS processes (please refer to [17, Sec. II.B]
for more details): Consider a CT WSCS random process $S(t)$, which is sampled
uniformly with a sampling interval of $T_{\rm s}$ and sampling time offset
$\phi$, resulting in a DT random process $S[i]=S(i\cdot T_{\rm s}+\phi)$. It
is well known that contrary to stationary processes, which have time-
invariant statistical characteristics, the values of $T_{\rm s}$ and $\phi$
have a significant effect on the statistics of sampled WSCS processes [17,
Sec. II.B]. To demonstrate this point, consider a CT WSCS process with
variance $\sigma_{s}^{2}(t)=\frac{1}{2}\cdot\sin\left(2\pi t/T_{\rm
sym}\right)+2$ for some $T_{\rm sym}>0$. The sampled process for $\phi=0$ (no
symbol time offset) and $T_{\rm s}=\frac{T_{\rm sym}}{3}$ has a variance
function whose period is $N_{p}=3$: $\sigma_{s}^{2}(iT_{\rm
s})=\\{2,2.433,1.567,2,2.433,1.567,\ldots\\}$, for $i=0,1,2,3,4,5,\ldots$;
while the DT process obtained with the same sampling interval and the sampling
time offset of $\phi=\frac{T_{\rm s}}{2\pi}$ has a periodic variance with
$N_{p}=3$ with values $\sigma_{s}^{2}(iT_{\rm
s}+\phi)=\\{2.155,2.335,1.510,2.155,2.335,1.510,\ldots\\}$, for
$i=0,1,2,3,4,5,\ldots$, which are different from the values of the DT variance
for $\phi=0$. It follows that both variances are periodic in discrete-time
with the same period $N_{p}=3$, although with different values within the
period, which is a result of the sampling time offset; nonetheless, both DT
processes correspond to two instances of synchronous sampling. Lastly, consider the
sampled variance obtained by sampling without a time offset (i.e., $\phi=0$)
at a sampling interval of $T_{\rm s}=(1+\frac{1}{2\pi})\frac{T_{\rm sym}}{3}$.
For this case, $T_{\rm s}$ is not an integer divisor of $T_{\rm sym}$ or of
any of its integer multiples (i.e., $\frac{T_{\rm sym}}{T_{\rm
s}}=2+\frac{2\pi-2}{2\pi+1}\equiv 2+\epsilon$; where
$\epsilon\not\in\mathcal{Q}$ and $\epsilon\in[0,1)$ ) resulting in the
variance values $\sigma_{s}^{2}(iT_{\rm
s})=\\{2,2.335,1.5027,2.405,1.896,1.75,\ldots\\}$, for $i=0,1,2,3,4,5\ldots$.
For this scenario, the DT variance is not periodic but is almost-periodic,
corresponding to asynchronous sampling and the resulting DT process is not
WSCS but WSACS [3, Sec. 3.2]. The example above demonstrates that the
statistical properties of sampled WSCS signals depend on the sampling rate and
the sampling time offset, implying that the RDF of such processes should also
depend on these quantities, as we demonstrate in the sequel.
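The three scenarios above can be reproduced with the following short sketch (ours, assuming Python with numpy; the normalization $T_{\rm sym}=1$ is arbitrary), which evaluates the sampled variance for synchronous sampling without offset, synchronous sampling with a time offset, and asynchronous sampling:

```python
# Sampled variance sequences for sigma_s^2(t) = 0.5*sin(2*pi*t/T_sym) + 2.
import numpy as np

T_sym = 1.0                               # arbitrary normalization
var = lambda t: 0.5 * np.sin(2 * np.pi * t / T_sym) + 2.0
i = np.arange(9)

Ts = T_sym / 3                            # synchronous, phi = 0: period N_p = 3
print(np.round(var(i * Ts), 3))

phi = Ts / (2 * np.pi)                    # synchronous with offset: still
print(np.round(var(i * Ts + phi), 3))     # period 3, different values

Ts_a = (1 + 1 / (2 * np.pi)) * T_sym / 3  # asynchronous: T_sym/Ts_a is
print(np.round(var(i * Ts_a), 3))         # irrational, no repeating pattern
```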
### II-C The Rate-Distortion Function for DT WSCS Processes
Figure 1: Source coding block diagram: the encoder $f_{S}$ maps the source block $\\{S[i]\\}_{i=1}^{l}$ to an index in $\\{1,2,\ldots,2^{lR}\\}$, from which the decoder $g_{S}$ produces the reconstruction $\\{\hat{S}[i]\\}_{i=1}^{l}$.
In this subsection we review the source coding problem and the existing
results on the RDF of WSCS processes. We begin by recalling the definition of
a source coding scheme, see, e.g., [26, ch. 3], [5, Ch.10]:
###### Definition 4 (Source coding scheme).
A source coding scheme with blocklength $l$ consists of:
1. 1.
An encoder $f_{S}$ which maps a block of $l$ source samples
$\\{S[i]\\}^{l}_{i=1}$ into an index from a set of $M=2^{lR}$ indexes,
$f_{S}:\\{S[i]\\}_{i=1}^{l}\mapsto\\{1,2,\ldots,M\\}$.
2. 2.
A decoder $g_{S}$ which maps the received index into a reconstructed sequence
of length $l$, $\left\\{\hat{S}[i]\right\\}_{i=1}^{l}$,
$g_{S}:\\{1,2,\ldots,M\\}\mapsto\left\\{\hat{S}[i]\right\\}_{i=1}^{l}$
The encoder-decoder pair is referred to as an $(R,l)$ source code, where $R$
is the rate of the code in bits per source sample, defined as:
$R=\frac{1}{l}\log_{2}M$ (1)
The RDF characterizes the minimal average number of bits per source sample,
denoted $R(D)$, that can be used to encode a source process such that it can
be reconstructed from its encoded representation with a recovery distortion
not larger than $D>0$ [5, Sec. 10.2]. In the current work, we use the MSE
distortion measure, which measures the cost of decoding a source symbol $S$
into $\hat{S}$ via $d(S,\hat{S})=\left\|S-\hat{S}\right\|^{2}$. The distortion
for a sequence of source samples ${\boldsymbol{S}}^{(l)}$ decoded into a
reproduction sequence $\hat{{\boldsymbol{S}}}^{(l)}$ is given by
$d\left({\boldsymbol{S}}^{(l)},\hat{{\boldsymbol{S}}}^{(l)}\right)=\frac{1}{l}\sum\limits_{i=1}^{l}\left(S[i]-\hat{S}[i]\right)^{2}$
and the average distortion in decoding a random source sequence
${\boldsymbol{S}}^{(l)}$ into a random reproduction sequence
$\hat{{\boldsymbol{S}}}^{(l)}$ is defined as:
$\bar{d}\left({\boldsymbol{S}}^{(l)},\hat{{\boldsymbol{S}}}^{(l)}\right)\triangleq\mathds{E}\left\\{d\left({\boldsymbol{S}}^{(l)},\hat{{\boldsymbol{S}}}^{(l)}\right)\right\\}=\frac{1}{l}\sum\limits_{i=1}^{l}\mathds{E}\left\\{\left(S[i]-\hat{S}[i]\right)^{2}\right\\},$
(2)
where the expectation in (2) is taken with respect to the joint probability
distributions on the source $S[i]$ and its reproduction $\hat{S}[i]$. Using
Def. 4 we can now formulate the achievable rate-distortion pair for a source
$S[i]$, as stated in the following definition [10, Pg. 471]:
###### Definition 5 (Achievable rate-distortion pair).
A rate-distortion pair $(R,D)$ is achievable for a process
$\\{S[i]\\}_{i\in\mathcal{N}}$ if for any $\eta>0$ and for all sufficiently
large $l$ one can construct an $\left(R_{s},l\right)$ source code such that
$R_{s}\leq R+\eta.$ (3)
and
$\bar{d}\left({\boldsymbol{S}}^{(l)},\hat{{\boldsymbol{S}}}^{(l)}\right)\leq
D+\eta.$ (4)
###### Definition 6.
The rate-distortion function $R(D)$ is defined as the infimum of all
achievable rates $R$ for a given maximum allowed distortion $D$.
Def. 5 defines a rate-distortion pair as one that is achievable using source codes
with any sufficiently large blocklength. In the following lemma, which is
required to characterize the RDF of DT WSCS signals, we state that it is
sufficient to consider only source codes whose blocklength is an integer
multiple of some fixed integer:
###### Lemma 1.
Consider the process $\\{S[i]\\}_{i\in\mathcal{N}}$ with a finite and bounded
variance. For a given maximum allowed distortion $D$, the optimal reproduction
process $\\{\hat{S}[i]\\}_{i\in\mathcal{N}}$ is also the optimal reproduction
process when restricted to using source codes whose blocklengths are integer
multiples of some fixed positive integer $r$.
###### Proof.
The proof of the lemma is detailed in Appendix A. ∎
This lemma facilitates switching between multivariate and scalar
representations of the source and the reproduction processes.
The RDF obviously depends on the distribution of the source
$\\{S[i]\\}_{i\in\mathcal{N}}$. Thus, modifying the source yields a different
RDF. However, when a source is scaled by some positive constant, the RDF of
the scaled process with the MSE criterion can be inferred from that of the
original process, as stated in the following theorem:
###### Theorem 1.
Let $\\{S[i]\\}_{i\in\mathcal{N}}$ be a source process for which the rate-
distortion pair $(R,D)$ is achievable under the MSE distortion. Then, for
every $\alpha\in\mathcal{R}^{++}$, it holds that the rate-distortion pair
$(R,\alpha^{2}\cdot D)$ is achievable for the source $\\{\alpha\cdot
S[i]\\}_{i\in\mathcal{N}}$.
###### Proof.
The proof to the theorem is detailed in Appendix B. ∎
Lastly, in the proof of our main result, we make use of the RDF for DT WSCS
sources derived in [12, Thm. 1], repeated below for ease of reference. Prior
to the statement of the theorem, we recall that for blocklengths which are
integer multiples of $N_{p}$, a WSCS process $S[i]$ with period $N_{p}>0$ can
be represented as an equivalent $N_{p}$-dimensional process
${\boldsymbol{S}}^{(N_{p})}[i]$ via the decimated component decomposition
(DCD) [2, Sec. 17.2]. The power spectral density (PSD) of the process
${\boldsymbol{S}}^{(N_{p})}$ is defined as [12, Sec. II]:
$\bigg{(}{\boldsymbol{\rho}}_{{\boldsymbol{S}}}\left(e^{j2\pi
f}\right)\bigg{)}_{u,v}=\sum\limits_{\Delta\in\mathcal{Z}}\bigg{(}\mathsf{R}_{{\boldsymbol{S}}}[\Delta]\bigg{)}_{u,v}e^{-j2\pi
f\Delta}\qquad-\frac{1}{2}\leq f\leq\frac{1}{2},\quad u,v\in\\{1,2,\ldots
N_{p}\\}$ (5)
where
$\mathsf{R}_{{\boldsymbol{S}}}[\Delta]\triangleq\mathds{E}\left\\{{\boldsymbol{S}}^{(N_{p})}[i]\cdot\left({\boldsymbol{S}}^{(N_{p})}[i+\Delta]\right)^{T}\right\\}$
[2, Sec. 17.2]. We now proceed to the statement of [12, Thm. 1]:
###### Theorem 2.
[12, Thm. 1] Consider a zero-mean real DT WSCS Gaussian source
$S[i],i\in\mathcal{N}$ with memory, and let $N_{p}\in\mathcal{N}$ denote the
period of its statistics. The RDF is expressed as:
$R(D)=\frac{1}{2N_{p}}\sum\limits_{m=1}^{N_{p}}\int_{f=-0.5}^{0.5}\left(\log\left(\frac{\lambda_{m}\left(e^{j2\pi
f}\right)}{\theta}\right)\right)^{+}\mathrm{d}f,$ (6a)
where $\lambda_{m}\left(e^{j2\pi f}\right)$, $m=1,2,\ldots,N_{p}$, denote the eigenvalues
of the PSD matrix of the process ${\boldsymbol{S}}^{(N_{p})}[i]$, which is
obtained from $S[i]$ by applying $N_{p}$-dimensional DCD, and $\theta$ is
selected such that
$D=\frac{1}{N_{p}}\sum\limits_{m=1}^{N_{p}}\int_{f=-0.5}^{0.5}\min\left\\{\lambda_{m}\left(e^{j2\pi
f}\right),\theta\right\\}\mathrm{d}f.$ (6b)
We note that ${\boldsymbol{S}}^{(N_{p})}[i]$ corresponds to a vector of
stationary processes whose elements are not identically distributed; hence
the variance is different for each element. Using [12, Thm. 1], we can
directly obtain the RDF for the special case of a DT memoryless WSCS Gaussian
process. This is stated in the following corollary:
###### Corollary 1.
Let $\\{S[i]\\}_{i\in\mathcal{N}}$ be a zero-mean DT memoryless real WSCS
Gaussian source with period $N_{p}\in\mathcal{N}$, and set
$\sigma^{2}_{m}=\mathds{E}\\{S^{2}[m]\\}$ for $m=1,2,\ldots,N_{p}$.
The RDF for compression of $S[i]$ is stated as: $\displaystyle
R(D)=\begin{cases}\frac{1}{2N_{p}}\sum\limits_{m=1}^{N_{p}}\log\left(\frac{\sigma^{2}_{m}}{D_{m}}\right)&D\leq\frac{1}{N_{p}}\sum\limits_{m=1}^{N_{p}}\sigma^{2}_{m}\\\
0&D>\frac{1}{N_{p}}\sum\limits_{m=1}^{N_{p}}\sigma^{2}_{m},\end{cases}$ (7a)
where $D_{m}\triangleq\min\left\\{\sigma^{2}_{m},\theta\right\\}$, and
$\theta$ is defined such that
$D=\frac{1}{N_{p}}\sum\limits_{m=1}^{N_{p}}D_{m}.$ (7b)
###### Proof.
Applying Equations (6a) and (6b) to our specific case of a memoryless WSCS
source, we obtain equations (7a) and (7b) as follows: First note that the
corresponding DCD components for a zero-mean memoryless WSCS process are also
zero-mean and memoryless; hence the PSD matrix for the multivariate process
${\boldsymbol{S}}^{(N_{p})}[i]$ is a diagonal matrix, whose eigenvalues are
the constant diagonal elements such that the $m^{th}$ diagonal element is
equal to the variance $\sigma_{m}^{2}$: $\lambda_{m}\left(e^{j2\pi
f}\right)=\sigma_{m}^{2}$. Now, writing Eqn. (6a) for this case we obtain:
$\displaystyle R(D)$
$\displaystyle=\frac{1}{2N_{p}}\sum\limits_{m=1}^{N_{p}}\int_{f=-0.5}^{0.5}\left(\log\left(\frac{\lambda_{m}\left(e^{j2\pi
f}\right)}{\theta}\right)\right)^{+}\mathrm{d}f$
$\displaystyle=\frac{1}{2N_{p}}\sum\limits_{m=1}^{N_{p}}\left(\log\left(\frac{\sigma^{2}_{m}}{\theta}\right)\right)^{+}.$
(8)
Since
$\left(\log\left(\frac{\sigma^{2}_{m}}{\theta}\right)\right)^{+}=\max\left\\{0,\log\left(\frac{\sigma^{2}_{m}}{\theta}\right)\right\\}\equiv\log\left(\frac{\sigma^{2}_{m}}{D_{m}}\right)$
it follows that (8) coincides with (7a). Next, expressing Eqn. (6b) for the
memoryless source process, we obtain:
$D=\frac{1}{N_{p}}\sum\limits_{m=1}^{N_{p}}\int_{f=-0.5}^{0.5}\min\left\\{\lambda_{m}\left(e^{j2\pi
f}\right),\theta\right\\}\mathrm{d}f=\frac{1}{N_{p}}\sum\limits_{m=1}^{N_{p}}\min\left\\{\sigma^{2}_{m},\theta\right\\},$
(9)
proving Eqn. (7b). ∎
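The reverse waterfilling solution (7a)-(7b) is straightforward to evaluate numerically. The following sketch (ours, assuming Python with numpy; the variances and distortion level are illustrative values) finds the water level $\theta$ by bisection, and also illustrates the scaling invariance implied by Theorem 1:

```python
# Reverse waterfilling for a memoryless WSCS Gaussian source, Eqns. (7a)-(7b).
import numpy as np

def wscs_rdf(sigma2, D):
    """RDF in bits per sample for per-phase variances sigma2 and MSE target D."""
    sigma2 = np.asarray(sigma2, dtype=float)
    if D >= sigma2.mean():
        return 0.0                      # zero rate suffices, see (7a)
    lo, hi = 0.0, sigma2.max()          # bisection on theta in (7b)
    for _ in range(100):
        theta = 0.5 * (lo + hi)
        if np.minimum(sigma2, theta).mean() < D:
            lo = theta
        else:
            hi = theta
    Dm = np.minimum(sigma2, theta)      # per-phase distortions D_m
    return 0.5 * np.mean(np.log2(sigma2 / Dm))

sigma2 = np.array([2.0, 2.433, 1.567])  # N_p = 3 phases (illustrative)
print(wscs_rdf(sigma2, D=0.5))
# Scaling check consistent with Theorem 1: the rate is invariant under
# (sigma2, D) -> (alpha^2 * sigma2, alpha^2 * D).
print(wscs_rdf(4.0 * sigma2, D=4.0 * 0.5))
```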
Now, from Lemma 1, we conclude that the RDF for compression of source
sequences whose blocklength is an integer multiple of $N_{p}$ is the same as
the RDF for compressing source sequences whose blocklength is arbitrary. We
recall that from [5, Ch. 10.3.3] it follows that for the zero-mean memoryless
Gaussian DCD vector source process ${\boldsymbol{S}}^{(N_{p})}[i]$ the optimal
reproduction process which achieves the RDF is an $N_{p}\times 1$ memoryless
process whose covariance matrix is diagonal with non-identically distributed
elements. From [2], we can apply the inverse DCD to obtain a WSCS process.
Hence, from Lemma 1 we can conclude that the optimal reproduction process for
the DT WSCS Gaussian source is a DT WSCS Gaussian process.
## III Problem Formulation and Auxiliary Results
Our objective is to characterize the RDF for compression of asynchronously
sampled CT WSCS Gaussian sources when the sampling interval is larger than the
memory of the source. In particular, we focus on the minimal rate required to
achieve a high fidelity reproduction, representing the RDF curve for
distortion values not larger than the variance of the source. Such
characterization of the RDF for asynchronous sampling is essential for
comprehending the relationship between the minimal required number of bits and
the sampling rate at a given distortion. Our analysis constitutes an important
step towards constructing joint source-channel coding schemes in scenarios in
which the symbol rate of the transmitter is not necessarily synchronized with
the sampling rate of the source to be transmitted. Such scenarios arise, for
example, when recording a communications signal for storage or processing, or
in compress-and-forward relaying [26, Ch. 16.7], [27] in which the relay
compresses the sampled received signal, which is then forwarded to the
assisted receiver. As the relay operates with its own sampling clock, which
need not necessarily be synchronized with the symbol rate of the assisted
transmitter, sampling at the relay may result in a DT WSACS source signal. In
the following we first characterize the sampled source model in Subsection
III-A. Then, as a preliminary step towards our characterization of the RDF for
asynchronously sampled CT WSCS Gaussian processes stated in Section IV, we
recall in Subsection III-B the definitions of some information-spectrum
quantities used in this study. Finally, in Subsection III-C, we recall an
auxiliary result relating the information spectrum quantities of a collection
of sequences of RVs to the information spectrum quantities of its limit
sequence of RVs. This result will be applied in the derivation of the RDF with
asynchronous sampling.
### III-A Source Model
Consider a real CT, zero-mean WSCS Gaussian random process $S_{c}(t)$ with
period $T_{\rm ps}$. Let the variance function of $S_{c}(t)$ be defined as
$\sigma^{2}_{S_{\rm c}}(t)\triangleq\mathds{E}\big{\\{}S_{\rm
c}^{2}(t)\big{\\}}$, and assume it is both upper bounded and lower bounded
away from zero, and that it is continuous in $t\in\mathcal{R}$. Let
$\tau_{m}>0$ denote the maximal correlation length of $S_{c}(t)$, i.e.,
$r_{S_{c}}(t,\tau)\triangleq\mathds{E}\big{\\{}S_{c}(t)S_{c}(t-\tau)\big{\\}}=0,\forall|\tau|>\tau_{m}$.
By the cyclostationarity of $S_{\rm c}(t)$, we have that
$\sigma_{S_{c}}^{2}(t)=\sigma_{S_{c}}^{2}(t+T_{\rm ps}),\forall
t\in\mathcal{R}$. Let $S_{c}(t)$ be sampled uniformly with the sampling
interval $T_{\rm s}>0$ such that $T_{\rm ps}=(p+\epsilon)\cdot T_{\rm s}$ for
$p\in\mathcal{N}$ and $\epsilon\in[0,1)$ yielding $S_{\epsilon}[i]\triangleq
S_{\rm c}(i\cdot T_{\rm s})$, where $i\in\mathcal{Z}$. The variance of
$S_{\epsilon}[i]$ is given by $\sigma^{2}_{S_{\epsilon}}[i]\triangleq
r_{S_{\epsilon}}[i,0]=\sigma^{2}_{S_{\rm c}}\left(\frac{i\cdot T_{\rm
ps}}{p+\epsilon}\right)$.
In this work, as in [17], we assume that the duration of temporal correlation
of the CT signal is shorter than the sampling interval $T_{\rm s}$, namely,
$\tau_{m}<T_{\rm s}$. Consequently, the DT Gaussian process $S_{\epsilon}[i]$
is a memoryless zero-mean Gaussian process and its autocorrelation function is
given by:
$r_{S_{\epsilon}}[i,\Delta]=\mathds{E}\left\\{S_{\epsilon}[i]S_{\epsilon}[i+\Delta]\right\\}=\mathds{E}\left\\{S_{\rm c}\left(\frac{i\cdot T_{\rm ps}}{p+\epsilon}\right)\cdot S_{\rm c}\left(\frac{(i+\Delta)\cdot T_{\rm ps}}{p+\epsilon}\right)\right\\}=\sigma^{2}_{S_{\rm c}}\left(\frac{i\cdot T_{\rm ps}}{p+\epsilon}\right)\cdot\delta[\Delta]=\sigma^{2}_{S_{\epsilon}}[i]\cdot\delta[\Delta].$ (10)
While we do not explicitly account for sampling time offsets in our definition
of the sampled process $S_{\epsilon}[i]$, it can be incorporated by replacing
$\sigma^{2}_{S_{\rm c}}(t)$ with a time-shifted version, i.e.,
$\sigma^{2}_{S_{\rm c}}(t-\phi)$, see also [17, Sec. II.C].
It can be noted from (10) that if $\epsilon$ is a rational number, i.e.,
$\exists u,v\in\mathcal{N}$, $u$ and $v$ are relatively prime, such that
$\epsilon=\frac{u}{v}$, then
$\left\\{S_{\epsilon}[i]\right\\}_{i\in\mathcal{Z}}$ is a DT memoryless WSCS
process with the period $p_{u,v}=p\cdot v+u\in\mathcal{N}$ [17, Sec. II.C].
For this class of processes, the RDF can be obtained from [12, Thm. 1] as
stated in Corollary 1. On the other hand, if $\epsilon$ is an irrational
number, then sampling becomes asynchronous and leads to a WSACS process whose
RDF has not been characterized to date.
### III-B Definitions of Relevant Information Spectrum Quantities
Conventional information theoretic tools for characterizing RDFs are based on
an underlying ergodicity of the source. Consequently, these techniques cannot
be applied to characterize the RDF of asynchronously sampled WSCS
processes. To tackle this challenge, we use information spectrum methods. The
information spectrum framework [14] can be utilized to obtain general formulas
for rate limits for any arbitrary class of processes. The resulting
expressions are not restricted to specific statistical models of the
considered processes, and in particular, do not require information stability
or stationarity. In the following, we recall the definitions of several
information-spectrum quantities used in this study, see also [14, Def.
1.3.1-1.3.2]:
###### Definition 7.
The limit-inferior in probability of a sequence of real RVs
$\\{Z_{k}\\}_{k\in\mathcal{N}}$ is defined as
${\rm
p-}\mathop{\lim\inf}\limits_{k\rightarrow\infty}Z_{k}\triangleq\sup\left\\{\alpha\in\mathcal{R}\big{|}\mathop{\lim}\limits_{k\rightarrow\infty}\Pr\left(Z_{k}<\alpha\right)=0\right\\}\triangleq\alpha_{0}.$
(11)
Hence, $\alpha_{0}$ is the largest real number satisfying that
$\forall\tilde{\alpha}<\alpha_{0}$ and $\forall\mu>0$ there exists
$k_{0}(\mu,\tilde{\alpha})\in\mathcal{N}$ such that
$\Pr(Z_{k}<\tilde{\alpha})<\mu$, $\forall k>k_{0}(\mu,\tilde{\alpha})$.
###### Definition 8.
The limit-superior in probability of a sequence of real RVs
$\\{Z_{k}\\}_{k\in\mathcal{N}}$ is defined as
${\rm
p-}\mathop{\lim\sup}\limits_{k\rightarrow\infty}Z_{k}\triangleq\inf\left\\{\beta\in\mathcal{R}\big{|}\mathop{\lim}\limits_{k\rightarrow\infty}\Pr\left(Z_{k}>\beta\right)=0\right\\}\triangleq\beta_{0}.$
(12)
Hence, $\beta_{0}$ is the smallest real number satisfying that
$\forall\tilde{\beta}>\beta_{0}$ and $\forall\mu>0$, there exists
$k_{0}(\mu,\tilde{\beta})\in\mathcal{N}$, such that
$\Pr(Z_{k}>\tilde{\beta})<\mu$, $\forall k>k_{0}(\mu,\tilde{\beta})$.
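As a numerical illustration of Defs. 7 and 8, the following self-contained Monte Carlo sketch (the sequence $Z_{k}$ is an artificial construction of our own, not a quantity from the source model) estimates the probabilities appearing in (11) and (12) for a sequence whose limit-inferior and limit-superior in probability differ.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_Z(k: int, n_mc: int) -> np.ndarray:
    """Z_k = c_k + (mean of k standard normals), with c_k = 1 for odd k and
    c_k = 2 for even k, so p-liminf Z_k = 1 while p-limsup Z_k = 2."""
    c_k = 1.0 if k % 2 else 2.0
    return c_k + rng.standard_normal((n_mc, k)).mean(axis=1)

for k in [11, 101, 1001, 2002]:
    z = sample_Z(k, n_mc=5000)
    # Pr(Z_k < alpha) -> 0 for every alpha < 1 and Pr(Z_k > beta) -> 0 for
    # every beta > 2, while Pr(Z_k < 1.1) does not vanish along odd k --
    # confirming that alpha_0 = 1 is the supremum in Def. 7.
    print(f"k={k:5d}: Pr(Z_k<0.9)={np.mean(z < 0.9):.4f}  "
          f"Pr(Z_k<1.1)={np.mean(z < 1.1):.4f}  Pr(Z_k>2.1)={np.mean(z > 2.1):.4f}")
```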
The notion of uniform integrability of a sequence of RVs is a basic property
in probability [28, Ch. 12], which is not directly related to information
spectrum methods. However, since it plays an important role in the information
spectrum characterization of RDFs, we include its statement in the following
definition:
###### Definition 9 (Uniform Integrability [28, Def. 12.1],[14, Eqn.
(5.3.2)]).
The sequence of real-valued random variables $\\{Z_{k}\\}_{k=1}^{\infty}$ is
said to satisfy uniform integrability if
$\mathop{\lim}\limits_{u\rightarrow\infty}\mathop{\sup}\limits_{k\geq
1}\mathop{\int}\limits_{z:|z|\geq u}p_{Z_{k}}\left(z\right)|z|\mathrm{d}z=0$
(13)
The aforementioned quantities allow characterizing the RDF of arbitrary
sources. Consider a general source process $\\{S[i]\\}_{i=1}^{\infty}$
(stationary or non-stationary) taking values from the source alphabet
$S[i]\in\mathcal{S}$ and a reproduction process
$\\{\hat{S}[i]\\}_{i=1}^{\infty}$ with values from the reproduction alphabet
$\hat{S}[i]\in\hat{\mathcal{S}}$. It follows from [14, Sec. 5.5] that for a
distortion measure which satisfies the uniform integrability criterion, i.e.,
that there exists a deterministic sequence $\\{r[i]\\}_{i=1}^{\infty}$ such
that the sequence of RVs
$\\{d\big{(}{\boldsymbol{S}}^{(k)},{\boldsymbol{r}}^{(k)}\big{)}\\}_{k=1}^{\infty}$
satisfies Def. 9 [14, Pg. 336], then the RDF is expressed as [14, Eqn.
(5.4.2)]:
$R(D)=\mathop{\inf}\limits_{F_{S,\hat{S}}:\bar{d}_{S}({\boldsymbol{S}}^{(k)},\hat{{\boldsymbol{S}}}^{(k)})\leq
D}\bar{I}\left({\boldsymbol{S}}^{(k)};\hat{{\boldsymbol{S}}}^{(k)}\right),$
(14)
where
$\bar{d}_{S}({\boldsymbol{S}}^{(k)},\hat{{\boldsymbol{S}}}^{(k)})=\mathop{\lim\sup}\limits_{k\rightarrow\infty}\frac{1}{k}\mathds{E}\left\\{d\left({\boldsymbol{S}}^{(k)},\hat{{\boldsymbol{S}}}^{(k)}\right)\right\\}$,
$F_{S,\hat{S}}$ denotes the joint CDF of $\\{S[i]\\}_{i=1}^{\infty}$ and
$\\{\hat{S}[i]\\}_{i=1}^{\infty}$, and
$\bar{I}\left({\boldsymbol{S}}^{(k)};\hat{{\boldsymbol{S}}}^{(k)}\right)$
represents the limit superior in probability of the mutual information rate of
${\boldsymbol{S}}^{(k)}$ and $\hat{{\boldsymbol{S}}}^{(k)}$, given by:
$\bar{I}\left({\boldsymbol{S}}^{(k)};\hat{{\boldsymbol{S}}}^{(k)}\right)\triangleq{\rm
p-}\mathop{\lim\sup}\limits_{k\rightarrow\infty}\frac{1}{k}\log\frac{p_{{\boldsymbol{S}}^{(k)}|\hat{{\boldsymbol{S}}}^{(k)}}\left({\boldsymbol{S}}^{(k)}|\hat{{\boldsymbol{S}}}^{(k)}\right)}{p_{{\boldsymbol{S}}^{(k)}}\left({\boldsymbol{S}}^{(k)}\right)}$
(15)
In order to use the RDF characterization in (14), the distortion measure must
satisfy the uniform integrability criterion. For the considered class of
sources detailed in Subsection III-A, the MSE distortion satisfies this
criterion, as stated in the following lemma:
###### Lemma 2.
For any real memoryless zero-mean Gaussian source $\\{S[i]\\}_{i=1}^{\infty}$
with bounded variance, i.e., $\exists\sigma_{\max}^{2}<\infty$ such that
$\mathds{E}\\{S^{2}[i]\\}\leq\sigma_{\max}^{2}$ for all $i\in\mathcal{N}$, the
MSE distortion satisfies the uniform integrability criterion.
###### Proof.
Set the deterministic sequence $\\{{r}[i]\\}_{i=1}^{\infty}$ to be the all-
zero sequence. Under this setting and the MSE distortion, it holds that
$d\big{(}{\boldsymbol{S}}^{(k)},{\boldsymbol{r}}^{(k)}\big{)}=\frac{1}{k}\sum_{i=1}^{k}S^{2}[i]$.
To prove the lemma, we show that the sequence of RVs
$\big{\\{}d\big{(}{\boldsymbol{S}}^{(k)},{\boldsymbol{r}}^{(k)}\big{)}\big{\\}}_{k=1}^{\infty}$
has a bounded $\ell_{2}$ norm, which proves that it is uniformly integrable by
[28, Cor. 12.8]. The $\ell_{2}$ norm of
$d\big{(}{\boldsymbol{S}}^{(k)},{\boldsymbol{r}}^{(k)}\big{)}$ satisfies
$\displaystyle\mathds{E}\big{\\{}d\big{(}{\boldsymbol{S}}^{(k)},{\boldsymbol{r}}^{(k)}\big{)}^{2}\big{\\}}$
$\displaystyle=\frac{1}{k^{2}}\mathds{E}\left\\{\sum_{i=1}^{k}S^{2}[i]\sum_{j=1}^{k}S^{2}[j]\right\\}$
$\displaystyle=\frac{1}{k^{2}}\sum_{i=1}^{k}\sum_{j=1}^{k}\mathds{E}\left\\{S^{2}[i]S^{2}[j]\right\\}\stackrel{{\scriptstyle(a)}}{{\leq}}\frac{1}{k^{2}}\sum_{i=1}^{k}\sum_{j=1}^{k}3\sigma_{\max}^{4}=3\sigma_{\max}^{4},$
(16)
where $(a)$ follows since $\mathds{E}\\{S^{2}[i]S^{2}[j]\\}=\mathds{E}\\{S^{2}[i]\\}\mathds{E}\\{S^{2}[j]\\}\leq\sigma_{\max}^{4}$ for $i\neq j$ by the independence of the samples, while $\mathds{E}\\{S^{4}[i]\\}=3\big{(}\mathds{E}\\{S^{2}[i]\\}\big{)}^{2}\leq 3\sigma_{\max}^{4}$ [29, Ch. 5.4]. Eqn. (16) proves that the second moment of $d\big{(}{\boldsymbol{S}}^{(k)},{\boldsymbol{r}}^{(k)}\big{)}$ is bounded by $3\sigma_{\max}^{4}<\infty$ for all $k\in\mathcal{N}$,
which in turn implies that the MSE distortion is uniformly integrable for the
source $\\{S[i]\\}_{i=1}^{\infty}$. ∎
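The moment bound in (16) is also easy to check numerically. The sketch below (a toy verification; the time-varying variance profile is a hypothetical choice of ours) compares an empirical estimate of $\mathds{E}\big{\\{}d\big{(}{\boldsymbol{S}}^{(k)},{\boldsymbol{r}}^{(k)}\big{)}^{2}\big{\\}}$ with the bound $3\sigma_{\max}^{4}$.

```python
import numpy as np

rng = np.random.default_rng(1)

k, n_mc = 500, 2000
# Hypothetical bounded time-varying variance profile (our own choice).
sigma2 = 1.0 + 0.8 * np.sin(2.0 * np.pi * np.arange(k) / 50.0)
sigma_max2 = sigma2.max()

S = rng.standard_normal((n_mc, k)) * np.sqrt(sigma2)   # S[i] ~ N(0, sigma2[i]), independent
d = (S ** 2).mean(axis=1)                              # d(S^(k), 0) = (1/k) sum_i S^2[i]

# Empirical second moment of the distortion vs. the bound in (16).
print(f"E{{d^2}} ~= {np.mean(d ** 2):.3f}  <=  3*sigma_max^4 = {3.0 * sigma_max2 ** 2:.3f}")
```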
Since, as detailed in Subsection III-A, we focus in the following on
memoryless zero-mean Gaussian sources, Lemma 2 implies that the RDF of the
source can be characterized using (14). However, (14) is in general difficult
to evaluate, and thus does not lead to a meaningful understanding of how the
RDF of sampled WSCS sources behaves, motivating our analysis in Section IV.
### III-C Information Spectrum Limits
The following theorem, originally stated in [17, Thm. 1], presents a
fundamental result which is directly useful for the derivation of the RDF:
###### Theorem 3.
[17, Thm. 1] Let $\left\\{\tilde{Z}_{k,n}\right\\}_{n,k\in\mathcal{N}}$ be a
set of sequences of real scalar RVs satisfying two assumptions:
1. AS1
For every fixed $n\in\mathcal{N}$, every convergent subsequence of
$\left\\{\tilde{Z}_{k,n}\right\\}_{k\in\mathcal{N}}$ converges in
distribution, as $k\rightarrow\infty$, to a finite deterministic scalar. Each
subsequence may converge to a different scalar.
2. AS2
For every fixed $k\in\mathcal{N}$, the sequence
$\big{\\{}\tilde{Z}_{k,n}\big{\\}}_{n\in\mathcal{N}}$ converges uniformly in
distribution, as $n\rightarrow\infty$, to a scalar real-valued RV $Z_{k}$.
Specifically, letting $\tilde{F}_{k,n}(\alpha)$ and $F_{k}(\alpha)$,
$\alpha\in\mathcal{R}$, denote the CDFs of $\tilde{Z}_{k,n}$ and of $Z_{k}$,
respectively, then by AS2 it follows that $\forall\eta>0$, there exists
$n_{0}(\eta)$ such that for every $n>n_{0}(\eta)$
$\left|\tilde{F}_{k,n}(\alpha)-F_{k}(\alpha)\right|<\eta,$
for each $\alpha\in\mathcal{R}$, $k\in\mathcal{N}$.
Then, for $\big{\\{}\tilde{Z}_{k,n}\big{\\}}_{n,k\in\mathcal{N}}$ it holds
that
$\displaystyle{\rm p-}\mathop{\lim\inf}\limits_{k\rightarrow\infty}Z_{k}$
$\displaystyle=$
$\displaystyle\mathop{\lim}\limits_{n\rightarrow\infty}\Big{(}{\rm
p-}\mathop{\lim\inf}\limits_{k\rightarrow\infty}\tilde{Z}_{k,n}\Big{)},$ (17a)
$\displaystyle{\rm p-}\mathop{\lim\sup}\limits_{k\rightarrow\infty}Z_{k}$
$\displaystyle=$
$\displaystyle\mathop{\lim}\limits_{n\rightarrow\infty}\Big{(}{\rm
p-}\mathop{\lim\sup}\limits_{k\rightarrow\infty}\tilde{Z}_{k,n}\Big{)}.$ (17b)
###### Proof.
In Appendix C we explicitly prove Eqn. (17b). This complements the proof in
[17, Appendix A] which explicitly considers only (17a). ∎
## IV Rate-Distortion Characterization for Sampled CT WSCS Gaussian Sources
### IV-A Main Result
Using the information spectrum based characterization of the RDF (14) combined
with the characterization of the limit of a sequence of information spectrum
quantities in Theorem 3, we now analyze the RDF of asynchronously sampled WSCS
processes. Our analysis is based on formulating a sequence of synchronously
sampled WSCS processes, whose RDF is given in Corollary 1. Then, we show that
the RDF of the asynchronously sampled process can be obtained as the limit
superior of the computable RDFs of the sequence of synchronously sampled
processes. We begin by letting $\epsilon_{n}\triangleq\frac{\lfloor
n\cdot\epsilon\rfloor}{n}$ for $n\in\mathcal{N}$ and defining a Gaussian
source process $S_{n}[i]=S_{\rm c}\left(\frac{i\cdot T_{\rm
ps}}{p+\epsilon_{n}}\right)$. From the discussion in Subsection III-A (see
also [17, Sec. II.C]), it follows that since $\epsilon_{n}$ is rational,
$S_{n}[i]$ is a WSCS process and its period is given by $p_{n}=p\cdot
n+\lfloor n\cdot\epsilon\rfloor$. Accordingly, the periodic correlation
function of $S_{n}[i]$ can be obtained similarly to (10) as:
$r_{S_{n}}[i,\Delta]=\mathds{E}\bigg{\\{}S_{n}[i]S_{n}[i+\Delta]\bigg{\\}}=\sigma^{2}_{S_{\rm
c}}\left(\frac{i\cdot T_{\rm ps}}{p+\epsilon_{n}}\right)\cdot\delta[\Delta].$
(18)
Due to cyclostationarity of $S_{n}[i]$, we have that
$r_{S_{n}}[i,\Delta]=r_{S_{n}}[i+p_{n},\Delta]$, $\forall
i,\Delta\in\mathcal{Z}$, and let $\sigma^{2}_{S_{n}}[i]\triangleq
r_{S_{n}}[i,0]$ denote its periodic variance.
We next restate Corollary 1 in terms of $\epsilon_{n}$ as follows:
###### Proposition 1.
Consider a DT, memoryless, zero-mean, WSCS Gaussian random process $S_{n}[i]$
with a variance $\sigma^{2}_{S_{n}}[i]$, obtained from $S_{\rm c}(t)$ by
sampling with a sampling interval of $T_{s}(n)=\frac{T_{\rm
ps}}{p+\epsilon_{n}}$. Let ${\boldsymbol{S}}_{n}^{(p_{n})}[i]$ denote the memoryless stationary multivariate random process obtained by applying the DCD to $S_{n}[i]$ and let $\sigma^{2}_{S_{n}}[m]$, $m=1,2,\ldots,p_{n}$, denote the variance of the $m^{th}$ component of ${\boldsymbol{S}}_{n}^{(p_{n})}[i]$. The rate-
distortion function is given by:
$\displaystyle
R_{n}(D)=\begin{cases}\frac{1}{2p_{n}}\sum\limits_{m=1}^{p_{n}}\left(\log\left(\frac{\sigma^{2}_{S_{n}}[m]}{D_{n}[m]}\right)\right)&D\leq\frac{1}{p_{n}}\sum\limits_{m=1}^{p_{n}}\sigma^{2}_{S_{n}}[m]\\\
0&D>\frac{1}{p_{n}}\sum\limits_{m=1}^{p_{n}}\sigma^{2}_{S_{n}}[m]\end{cases},$
(19a)
where for
$D\leq\frac{1}{p_{n}}\sum\limits_{m=1}^{p_{n}}\sigma^{2}_{S_{n}}[m]\;$ we let
$D_{n}[m]\triangleq\min\big{\\{}\sigma^{2}_{S_{n}}[m],\theta_{n}\big{\\}}$,
and $\theta_{n}$ is selected such that
$D=\frac{1}{p_{n}}\sum\limits_{m=1}^{p_{n}}D_{n}[m].$ (19b)
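For reference, the reverse waterfilling in (19) is straightforward to evaluate numerically. The following sketch (our own implementation outline; the function name, the bisection depth, and the use of $\log_{2}$ to report rates in bits are all choices of ours) computes $R_{n}(D)$ from the per-component variances $\sigma^{2}_{S_{n}}[m]$.

```python
import numpy as np

def rdf_wscs(sigma2: np.ndarray, D: float) -> float:
    """Evaluate R_n(D) of Eq. (19): sigma2 holds the p_n per-component
    variances of the DCD decomposition; the rate is returned in bits per sample."""
    if D >= sigma2.mean():
        return 0.0                    # second case of (19a)
    # Bisection for the water level theta_n: mean(min(sigma2, theta)) is
    # continuous and nondecreasing in theta, and (19b) pins theta_n down.
    lo, hi = 0.0, float(sigma2.max())
    for _ in range(60):
        theta = 0.5 * (lo + hi)
        if np.minimum(sigma2, theta).mean() < D:
            lo = theta
        else:
            hi = theta
    Dm = np.minimum(sigma2, 0.5 * (lo + hi))   # D_n[m] = min(sigma2[m], theta_n)
    return float(0.5 * np.mean(np.log2(sigma2 / Dm)))

# Toy usage with hypothetical per-component variances:
print(rdf_wscs(np.array([0.2, 5.0, 5.0, 0.2, 2.5]), D=0.18))
```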
We recall that the RDF of $S_{n}[i]$ is characterized in Corollary 1 via the
RDF of the multivariate stationary process ${\boldsymbol{S}}_{n}^{(p_{n})}[i]$
obtained via a $p_{n}$-dimensional DCD applied to $S_{n}[i]$. Next, we recall
that the relationship between the source process
${\boldsymbol{S}}_{n}^{(p_{n})}[i]$ and the optimal reconstruction process,
denoted by $\hat{{\boldsymbol{S}}}_{n}^{(p_{n})}[i]$, is characterized in [5,
Ch. 10.3.3] via a linear, multivariate, time-invariant backward channel with a
$p_{n}\times 1$ additive vector noise process
${\boldsymbol{W}}_{n}^{(p_{n})}[i]$, and is given by:
${\boldsymbol{S}}_{n}^{(p_{n})}[i]=\hat{{\boldsymbol{S}}}_{n}^{(p_{n})}[i]+{\boldsymbol{W}}_{n}^{(p_{n})}[i],\quad
i\in\mathcal{N}.$ (20)
It also follows from [5, Sec. 10.3.3] that for the IID Gaussian multivariate
process whose entries are independent and distributed via
$\big{(}{\boldsymbol{S}}_{n}^{(p_{n})}[i]\big{)}_{m}\sim\mathcal{N}(0,\sigma^{2}_{S_{n}}[m])$,
$m\in\\{1,2,\ldots,p_{n}\\}$, the optimal reconstruction vector process
$\hat{{\boldsymbol{S}}}_{n}^{(p_{n})}[i]$ and the corresponding noise vector
process ${\boldsymbol{W}}_{n}^{(p_{n})}[i]$ each follow a multivariate
Gaussian distribution:
$\hat{{\boldsymbol{S}}}_{n}^{(p_{n})}[i]\sim\mathcal{N}\left({\bf
0},\left[\begin{matrix}\sigma^{2}_{\hat{S}_{n}}[1]&\cdots&0\\\
\vdots&\ddots&\vdots\\\
0&\cdots&\sigma^{2}_{\hat{S}_{n}}[p_{n}]\end{matrix}\right]\right)\quad\mathrm{and}\quad{\boldsymbol{W}}_{n}^{(p_{n})}[i]\sim\mathcal{N}\left({\bf
0},\left[\begin{matrix}D_{n}[1]&\cdots&0\\\ \vdots&\ddots&\vdots\\\
0&\cdots&D_{n}[p_{n}]\end{matrix}\right]\right),$
where
$D_{n}[m]\triangleq\min\left\\{\sigma^{2}_{S_{n}}[m],\theta_{n}\right\\}$;
$\theta_{n}$ denotes the reverse waterfilling threshold defined in Prop. 1 for
the index $n$, and is selected such that
$D=\frac{1}{p_{n}}\sum\limits_{m=1}^{p_{n}}D_{n}[m]$. The optimal
reconstruction process, $\hat{{\boldsymbol{S}}}_{n}^{(p_{n})}[i]$ and the
noise process ${\boldsymbol{W}}_{n}^{(p_{n})}[i]$ are mutually independent,
and for each $m\in\\{1,2,\ldots,p_{n}\\}$ it holds that
$\mathds{E}\left\\{\left(S_{n}^{(p_{n})}[i]-\hat{S}_{n}^{(p_{n})}[i]\right)_{m}^{2}\right\\}=D_{n}[m]$,
see [5, Ch. 10.3.2-10.3.3]. The multivariate relationship between stationary
processes in (20) can be transformed into an equivalent linear relationship
between cyclostationary Gaussian memoryless processes via the inverse DCD
transformation [2, Sec 17.2] applied to each of the processes, resulting in:
$S_{n}[i]=\hat{S}_{n}[i]+W_{n}[i],\quad i\in\mathcal{N}.$ (21)
We are now ready to state our main result, which is the RDF of the
asynchronously sampled DT source $S_{\epsilon}[i],\epsilon\not\in\mathcal{Q}$,
in the low MSE regime, i.e., at a given distortion $D$ which is not larger
than the source variance. The RDF is stated in the following theorem, which
applies to both synchronous sampling as well as asynchronous sampling:
###### Theorem 4.
Consider a DT source $\\{S_{\epsilon}[i]\\}_{i=1}^{\infty}$ obtained by
sampling a CT WSCS source, whose period of statistics is $T_{\rm ps}$, at
intervals $T_{\rm s}$. Then, for any distortion constraint $D$ such that
$D<\mathop{\min}\limits_{0\leq t\leq T_{\rm ps}}\sigma^{2}_{S_{\rm c}}(t)$ and
any $\epsilon\in[0,1)$, the RDF $R_{\epsilon}(D)$ for compressing
$\\{S_{\epsilon}[i]\\}_{i=1}^{\infty}$ can be obtained as the limit:
$R_{\epsilon}(D)=\mathop{\lim\sup}\limits_{n\rightarrow\infty}R_{n}(D),$ (22)
where $R_{n}(D)$ is defined in Prop. 1.
###### Proof.
The detailed proof is provided in Appendix D. Here, we give a brief outline:
The derivation of the RDF with asynchronous sampling follows three steps:
First, we define a sequence of rational numbers $\epsilon_{n}$ such that $\epsilon_{n}\rightarrow\epsilon$ as $n\rightarrow\infty$, and note that the resulting sampling interval $T_{\rm s}(n)=\frac{T_{\rm ps}}{p+\epsilon_{n}}$, used to obtain the sequence of DT WSCS sources $\\{S_{n}[i]\\}_{i\in\mathcal{N},n\in\mathcal{N}}$, asymptotically approaches the sampling interval for irrational $\epsilon$, given by $T_{\rm s}=\frac{T_{\rm ps}}{p+\epsilon}$, as $n\rightarrow\infty$. Building upon this
insight, we prove that the RDF with $T_{\rm s}$ can be stated as a double
limit where the outer limit is with respect to the blocklength and the inner
limit is with respect to $\epsilon_{n}$. Lastly, we use Theorem 3 to show that
the limits can be exchanged, obtaining a limit of expressions which are
computable. ∎
###### Remark 1.
Theorem 4 focuses on the low distortion regime, defined as the values of $D$
satisfying $D<\mathop{\min}\limits_{0\leq t\leq T_{\rm ps}}\sigma^{2}_{S_{\rm
c}}(t)$. This implies that $\theta_{n}$ has to be smaller than
$\mathop{\min}\limits_{0\leq t\leq T_{\rm ps}}\sigma^{2}_{S_{\rm c}}(t)$;
hence, from Prop. 1 it follows that for the corresponding stationary noise
vector ${\boldsymbol{W}}_{n}^{(p_{n})}[i]$ in (20),
$D_{n}[m]=\min\left\\{\sigma^{2}_{S_{n}}[m],\theta_{n}\right\\}=\theta_{n}$
and
$D=\frac{1}{p_{n}}\mathop{\sum}\limits_{m=1}^{p_{n}}D_{n}[m]=\theta_{n}=D_{n}[m]$.
We note that since every element of the vector
$\left({\boldsymbol{W}}_{n}^{(p_{n})}[i]\right)_{m}$ has the same variance
$D_{n}[m]=D$ for all $n\in\mathcal{N}$ and $m=1,2,\ldots,p_{n}$ then by
applying the inverse DCD to ${\boldsymbol{W}}_{n}^{(p_{n})}[i]$, the resulting
scalar DT process $W_{n}[i]$ is wide sense stationary; and in fact IID with
$\mathds{E}\left\\{\big{(}W_{n}[i]\big{)}^{2}\right\\}=D$.
### IV-B Discussion and Relationship with Capacity Derivation in [17]
Theorem 4 provides a meaningful and computable characterization for the RDF of
sampled WSCS signals. We note that the proof of the main theorem uses some of
the steps used in our recent study on the capacity of memoryless channels with
sampled CT WSCS Gaussian noise [17]. It should be emphasized, however, that
there are several fundamental differences between the two studies, which
require the introduction of new treatments and derivations original to the
current work. First, it is important to note that in the study on capacity, a
physical channel model exists, and therefore the conditional PDF of the output
signal given the input signal can be characterized explicitly for both
synchronous sampling and asynchronous sampling for every input distribution.
For the current study of the RDF we note that the relationship (21), commonly
referred to as the backward channel [30], [5, Ch.10.3.2], characterizes the
relationship between the source process and the optimal reproduction process,
and hence is valid only for synchronous sampling and the optimal reproduction
process. Consequently, in the RDF analysis the limiting relationship (21) as
$n\rightarrow\infty$ is not even known to exist and, in fact, we can show it
exists under a rather strict condition on the distortion (namely, the
condition $D<\mathop{\min}\limits_{0\leq t\leq T_{\rm ps}}\sigma^{2}_{S_{\rm
c}}(t)$ stated in Theorem 4). In particular, to prove the statement in Theorem
4, we had to show that from the backward channel (21), we can define an
asymptotic relationship, as $n\rightarrow\infty$, which corresponds to the
asynchronously sampled source process, denoted by $S_{\epsilon}[i]$, and
relates $S_{\epsilon}[i]$ with its optimal reconstruction process
$\hat{S}_{\epsilon}[i]$. This is done by showing that the PDFs for the
reproduction process $\hat{S}_{n}[i]$ and noise process $W_{n}[i]$ from (21),
each converge uniformly as $n\rightarrow\infty$ to a respective limiting PDF,
which has to be defined as well. This enabled us to relate the RDFs for the
synchronous sampling and for the asynchronous sampling cases using Theorem 3,
eventually leading to (22). Accordingly, in our detailed proof of Theorem 4
given in Appendix D, Lemmas D.4 and D.6 as well as a significant part of Lemma
D.2 are largely new, addressing the special aspects of the proof arising from
the fundamental differences between the current setup and the setup in [17], while the derivations of Lemmas D.1 and D.5 follow similarly to [17, Lemma B.1] and
[17, Lemma B.5], respectively, and parts of Lemma D.2 coincide with [17, Lemma
B.2].
## V Numerical Examples
In this section we demonstrate the insights arising from our RDF
characterization via numerical examples. Recalling that Theorem 4 states the
RDF for asynchronously sampled CT WSCS Gaussian process, $R_{\epsilon}(D)$, as
the limit supremum of a sequence of RDFs corresponding to DT memoryless WSCS
Gaussian source processes $\left\\{R_{n}(D)\right\\}_{n\in\mathcal{N}}$, we
first consider the convergence of $\\{R_{n}(D)\\}_{n\in\mathcal{N}}$ in
Subsection V-A. Next, in Subsection V-B we study the variation of the RDF of
the sampled CT process due to changes in the sampling rate and in the sampling
time offset.
Similarly to [17, Sec. IV], define a periodic continuous pulse function,
denoted by $\Pi_{t_{\rm dc},t_{\rm rf}}(t)$, with equal rise/fall time $t_{\rm
rf}=0.01$, duty cycle $t_{\rm dc}\in[0,0.98]$, and period of $1$, i.e.,
$\Pi_{t_{\rm dc},t_{\rm rf}}(t+1)=\Pi_{t_{\rm dc},t_{\rm rf}}(t)$ for all
$t\in\mathcal{R}$. Specifically, for $t\in[0,1)$ the function $\Pi_{t_{\rm
dc},t_{\rm rf}}(t)$ is given by
$\Pi_{t_{\rm dc},t_{\rm rf}}(t)=\begin{cases}\frac{t}{t_{\rm
rf}}&t\in[0,t_{\rm rf}]\\\ 1&t\in(t_{\rm rf},t_{\rm dc}+t_{\rm rf})\\\
1-\frac{t-t_{\rm dc}-t_{\rm rf}}{t_{\rm rf}}&t\in[t_{\rm dc}+t_{\rm rf},t_{\rm
dc}+2\cdot t_{\rm rf}]\\\ 0&t\in(t_{\rm dc}+2\cdot t_{\rm rf},1).\end{cases}$
(23)
In the following, we model the time-varying variance of the WSCS source $\sigma^{2}_{S_{\rm c}}(t)$ as a linear periodic function of $\Pi_{t_{\rm dc},t_{\rm rf}}(t)$. To that aim, we define a time offset between the first sample and the rise start time of the periodic continuous pulse function, denoted by $\phi\in[0,1)$; this corresponds to the sampling time offset normalized to the period $T_{\rm ps}$. The variance of $S_{\rm c}(t)$ is a periodic function with period $T_{\rm ps}=5$ $\mu$secs, given by
$\sigma^{2}_{S_{\rm c}}(t)=0.2+4.8\cdot\Pi_{t_{\rm dc},t_{\rm rf}}\left(\frac{t}{T_{\rm ps}}-\phi\right),\qquad t\in[0,T_{\rm ps}).$ (24)
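A direct implementation of (23) and (24) is given below (a sketch; the helper names `pulse` and `var_Sc` are ours, and the difference of two clipped ramps is an equivalent rewriting of the four cases in (23)).

```python
import numpy as np

T_PS = 5e-6     # period of the CT statistics, T_ps = 5 microseconds
T_RF = 0.01     # rise/fall time t_rf (with the pulse period normalized to 1)

def pulse(t, t_dc, t_rf=T_RF):
    """Periodic pulse of Eq. (23), period 1: a rising clipped ramp minus a
    delayed one reproduces the rise / high / fall / low cases."""
    t = np.mod(t, 1.0)
    return np.clip(t / t_rf, 0.0, 1.0) - np.clip((t - t_dc - t_rf) / t_rf, 0.0, 1.0)

def var_Sc(t, t_dc, phi=0.0):
    """Time-varying variance of Eq. (24): 0.2 + 4.8 * Pi(t / T_ps - phi)."""
    return 0.2 + 4.8 * pulse(t / T_PS - phi, t_dc)

t = np.linspace(0.0, T_PS, 1000, endpoint=False)
print(var_Sc(t, t_dc=0.45).min(), var_Sc(t, t_dc=0.45).max())   # -> 0.2, 5.0
```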
### V-A Convergence of $R_{n}(D)$ in $n$
Figure 2: $R_{n}(D)$ versus $n$; offset $\phi=0$.
Figure 3: $R_{n}(D)$ versus $n$; offset $\phi=\frac{1}{16}$.
From Theorem 4 it follows that if the distortion satisfies
$D<\mathop{\min}\limits_{0\leq t\leq T_{\rm ps}}\sigma^{2}_{S_{\rm c}}(t)$,
the RDF of the asynchronously sampled CT WSCS Gaussian process is given by the
limit superior of the sequence $\\{R_{n}(D)\\}_{n\in\mathcal{N}}$; where
$R_{n}(D)$ is obtained via Corollary 1. In this subsection, we study the
sequence of RDFs $\\{R_{n}(D)\\}_{n\in\mathcal{N}}$ as $n$ increases. For this
evaluation setup, we fixed the distortion constraint at $D=0.18$ and set
$\epsilon=\frac{\pi}{7}$ and $p=2$. Let the variance of the CT WSCS Gaussian
source process $\sigma^{2}_{S_{\rm c}}(t)$ be modelled by Eq. (24) for two
sampling time offsets $\phi\in\\{0,\frac{1}{16}\\}$. For each offset $\phi$, four duty cycle values were considered: $t_{\rm dc}\in\\{20,45,75,98\\}\%$. For each
$n$ we obtain the synchronous sampling mismatch
$\epsilon_{n}\triangleq\frac{\lfloor n\cdot\epsilon\rfloor}{n}$, which
approaches $\epsilon$ as $n\rightarrow\infty$, where $n\in\mathcal{N}$. Since
$\epsilon_{n}$ is a rational number, corresponding to a sampling period of
$T_{\rm s}(n)=\frac{T_{\rm ps}}{p+\epsilon_{n}}$, then for each $n$, the
resulting DT process is WSCS with the period $p_{n}=p\cdot n+\lfloor
n\cdot\epsilon\rfloor$ and its RDF follows from Corollary 1.
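This convergence study can be reproduced along the following lines (a sketch that assumes `rdf_wscs`, `var_Sc`, and `T_PS` from the two sketches above are in scope; the printed values indicate the trend rather than the exact curves of the figures).

```python
import numpy as np

D, p, eps, phi, t_dc = 0.18, 2, np.pi / 7.0, 0.0, 0.45   # D < min variance = 0.2

for n in [1, 5, 50, 230, 500]:
    eps_n = np.floor(n * eps) / n                 # rational mismatch eps_n
    p_n = p * n + int(np.floor(n * eps))          # DT period p_n = p*n + floor(n*eps)
    Ts_n = T_PS / (p + eps_n)                     # sampling interval T_s(n)
    # One DT period of the sampled variance supplies the p_n DCD components of Prop. 1.
    sig2 = var_Sc(np.arange(p_n) * Ts_n, t_dc, phi)
    print(f"n={n:4d}, p_n={p_n:5d}: R_n(D) ~= {rdf_wscs(sig2, D):.4f} bits/sample")
```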
Figures 2 and 3 depict $R_{n}(D)$ for $n\in[1,500]$ with the specified duty cycles and sampling time offsets, where in Fig. 2 there is no sampling time offset, i.e., $\phi=0$, and in Fig. 3 the sampling time offset is set to $\phi=\frac{1}{16}$. We observe that in both figures the RDF values are higher
for higher $t_{\rm dc}$. This can be explained by noting that for higher
$t_{\rm dc}$ values, the resulting time averaged variance of the DT source
process increases, hence, a higher number of bits per sample is required to
encode the source process maintaining the same distortion value. Also, in all
configurations, $R_{n}(D)$ varies significantly for smaller values of $n$.
Comparing Figures 2 and 3, we see that the pattern of these variations depends
on the sampling time offset $\phi$. For example, when $t_{\rm dc}=45\%$ at
$n\in[4,15]$, then for $\phi=0$ the RDF varies in the range $[1.032,1.143]$
bits per sample, while for $\phi=\frac{1}{16}$ the RDF varies in the range
$[1.071,1.237]$ bits per sample. However, as we increase $n$ above $230$, the
variations in $R_{n}(D)$ become smaller and are less dependent on the sampling
time offset, and the resulting values of $R_{n}(D)$ are approximately in the
same range in both Figures 2 and 3 for $n\geq 230$. This behaviour can be
explained by noting that as $n$ varies, the period $p_{n}$ also varies and
hence the statistics of the DT variance differs over its respective period.
This consequently affects the resulting RDF (especially for small periods). As
$n$ increases $\epsilon_{n}$ approaches the asynchronous sampling mismatch
$\epsilon$ and the period $p_{n}$ takes a sufficiently large value such that
the samples of the DT variance over the period are similarly distributed
irrespective of the value of $\phi$, leading to a negligible variation in the
RDF as seen in the above figures.
### V-B The Variation of the RDF with the Sampling Rate
Figure 4: $R_{n}(D)$ versus $\frac{T_{\rm ps}}{T_{\rm s}}$; offset $\phi=0$.
Figure 5: $R_{n}(D)$ versus $\frac{T_{\rm ps}}{T_{\rm s}}$; offset
$\phi=\frac{1}{16}$.
Next, we observe the dependence of the RDF for the sampled memoryless WSCS
Gaussian process on the value of the sampling interval $T_{\rm s}$. For this
setup, we fix the distortion constraint to $D=0.18$ and set the duty cycle in
the source process (24) to $t_{\rm dc}\in\\{45,75\\}\%$. Figures 4 and 5 demonstrate the
numerically evaluated values for $R_{n}(D)$ at sampling intervals in the range
$2<\frac{T_{\rm ps}}{T_{\rm s}}<4$ with the sampling time offsets $\phi=0$ and
$\phi=\frac{1}{16}$, respectively. A very important insight which arises from
the figures is that the sequence of RDFs $R_{n}(D)$ is not convergent; hence,
for example, one cannot approach the RDF for $\frac{T_{\rm ps}}{T_{\rm
s}}=2.5$ by simply taking rational values of $\frac{T_{\rm ps}}{T_{\rm s}}$
which approach $2.5$. This verifies that the RDF for asynchronous sampling
cannot be obtained by straightforward application of previous results, and
indeed, the entire analysis carried in the manuscript is necessary for the
desired characterization. We observe in Figures 4 and 5 that when $\frac{T_{\rm
ps}}{T_{\rm s}}$ has a fractional part with a relatively small integer
denominator, the variations in the RDF are significant, and the variations
depend on the sampling time offset. However, when $\frac{T_{\rm ps}}{T_{\rm
s}}$ approaches an irrational number, the period of the sampled variance
function becomes very long, and consequently, the RDF is approximately
constant and independent of the sampling time offset. As an example, consider
$\frac{T_{\rm ps}}{T_{\rm s}}=2.5$ and $t_{\rm dc}=75\%$: For sampling time
offset $\phi=0$ the RDF takes a value of $1.469$ bits per sample, as shown in Figure 4, while for the offset $\phi=\frac{1}{16}$ the RDF peaks at $1.934$ bits per sample, as seen in Figure 5. On the other hand, when approaching
asynchronous sampling, the RDF takes an approximately constant value $1.85$
bits per sample for all the considered values of $\frac{T_{\rm ps}}{T_{\rm
s}}$ and this value is invariant to the offsets of $\phi$. This follows since
when the denominator of the fractional part of $\frac{T_{\rm ps}}{T_{\rm s}}$
increases, then the DT period of the resulting sampled variance, $p_{n}$,
increases and practically captures the entire set of values of the CT variance
regardless of the sampling time offset. In a similar manner as with the study
on capacity in [17], we conjecture that since asynchronous sampling captures
the entire set of values of the CT variance, the respective RDF represents the
RDF of the analog source, which does not depend on the specific sampling rate
and offset. Figures 4 and 5 demonstrate how slight variations in the sampling rate can result in significant changes in the RDF. For instance, at $\phi=0$ we notice in Figure 4 that when the sampling ratio switches from $\frac{T_{\rm ps}}{T_{\rm s}}=2.25$ to $\frac{T_{\rm ps}}{T_{\rm s}}=2.26$, i.e., the sampling rate switches from being synchronous to being nearly asynchronous, the RDF changes from $1.624$ bits per sample to $1.859$ bits per sample for $t_{\rm dc}=75\%$; similarly, we observe in Figure 4 for $t_{\rm dc}=45\%$ that when the sampling ratio switches from $\frac{T_{\rm ps}}{T_{\rm s}}=2.5$ to $\frac{T_{\rm ps}}{T_{\rm s}}=2.51$, the sampling rate again switches from being synchronous to being nearly asynchronous, and the RDF changes from $1.005$ bits per sample to $1.154$ bits per sample.
Figure 6: $R_{n}(D)$ versus $D$; offset $\phi=0$.
Figure 7: $R_{n}(D)$ versus $D$; offset $\phi=\frac{1}{16}$.
Lastly, Figures 6 and 7 numerically evaluate the RDF versus the distortion
constraint $D\in[0.05,0.19]$ for the sampling time offsets of $0$ and
$\frac{1}{16}$ respectively. At each $\phi$, the result is evaluated at three
different values of synchronization mismatch $\epsilon$. For this setup, we
fix $t_{\rm dc}=75\%$, $p=2$ and $\epsilon\in\\{0.5,\frac{5\pi}{32},0.6\\}$.
The only mismatch value that refers to the asynchronous sampling case is
$\epsilon=\frac{5\pi}{32}$ and its corresponding sampling interval is
approximately $2.007$ $\mu$secs, which is a negligible variation from the
sampling intervals corresponding to $\epsilon\in\\{0.5,0.6\\}$, which are
$2.000$ $\mu$secs and $1.923$ $\mu$secs, respectively. Observing both figures,
we see that the RDF may vary significantly for a very slight variation in the sampling rate. For instance, as shown in Figure 6 for $\phi=0$, at $D=0.18$, a
slight change in the synchronization mismatch from $\epsilon=\frac{5\pi}{32}$
(i.e., $T_{\rm s}\approx 2.007\mu$secs) to $\epsilon=0.5$ (i.e., $T_{\rm
s}=2.000\mu$secs) results in an approximately $20\%$ decrease in the RDF. For
$\phi=\frac{1}{16}$ the same change in the sampling synchronization mismatch
at $D=0.18$ results in a rise of the RDF by roughly $4\%$. These results
demonstrate the unique and counter-intuitive characteristics of the RDF of
sampled WSCS signals which arise from our derivation.
## VI Conclusions
In this work the RDF of a sampled CT WSCS Gaussian source process was
characterized for scenarios in which the resulting DT process is memoryless.
This characterization shows the relationship between the sampling rate and the
minimal number of bits required for compression at a given distortion. For
cases in which the sampling rate is synchronized with the period of the
statistics of the source process, the resulting DT process is WSCS and the standard information-theoretic framework can be used for deriving its RDF. For
asynchronous sampling, information stability does not hold, and hence we
resort to the information spectrum framework to obtain a characterization. To
that aim we derived a relationship between some relevant information spectrum
quantities for uniformly convergent sequences of RVs. This relationship was
further applied to characterize the RDF of an asynchronously sampled CT WSCS
Gaussian source process as the limit superior of a sequence of RDFs, each
corresponding to the synchronous sampling of the CT WSCS Gaussian process. The
results were derived in the low distortion regime, i.e., under the condition
that the distortion constraint $D$ is less than the minimum variance of the
source, and for sampling intervals which are larger than the correlation
length of the CT process. Our numerical examples give rise to non-intuitive
insights which follow from the derivations. In particular, the numerical
evaluation demonstrates that the RDF for a sampled CT WSCS Gaussian source can
change dramatically with minor variations in sampling rate and sampling time
offset. In particular, when the sampling rate switches from being synchronous
to being asynchronous and vice versa, the RDF may change considerably as the
statistical model of the source switches between WSCS and WSACS. The resulting
analysis enables determining the sampling system parameters in order to
facilitate accurate and efficient source coding of acquired CT signals.
## Appendix A Proof of Lemma 1
###### Proof.
To prove that the minimum achievable rate at a given maximum distortion for a
code with arbitrary blocklength can be achieved by considering only codes
whose blocklength is an integer multiple of $r$, we apply the following
approach: We first show that every rate-distortion pair achievable when
restricted to using source codes whose blocklength is an integer multiple of
$r$ is also achievable when using arbitrary blocklengths; we then prove that
every achievable rate-distortion pair is also achievable when restricted to
using codes whose blocklength is an integer multiple of $r$. Combining these
two assertions proves that the rate-distortion function of the source
$\\{S[i]\\}_{i\in\mathcal{N}}$ can be obtained when restricting the
blocklengths to be an integer multiple of $r$. Consequently, a reproduction
signal $\\{\hat{S}[i]\\}_{i\in\mathcal{N}}$ which achieves the minimal rate
for a given $D$ under the restriction to use only blocklengths which are an
integer multiple of $r$ is also the reproduction signal achieving the minimal
rate without this restriction, and vice versa, thus proving the lemma.
To prove the first assertion, consider a rate-distortion pair $(R,D)$ which is
achievable when using codes whose blocklength is an integer multiple of $r$.
It thus follows directly from Def. 5 that for every $\eta>0$, $\exists
b_{0}\in\mathcal{N}$ such that for all $b>b_{0}$ there exists a source code
$\left(R_{(b\cdot r)},b\cdot r\right)$ with rate $R_{(b\cdot r)}\leq R+\eta$
satisfying $\bar{d}\big{(}{\boldsymbol{S}}^{(b\cdot
r)},\hat{{\boldsymbol{S}}}^{(b\cdot r)}\big{)}\leq D+\frac{\eta}{2}$. We now
show that we can construct a code with an arbitrary blocklength $l=b\cdot r+j$
where $0<j<r$ (i.e., the blocklength $l$ is not an integer multiple of $r$)
satisfying Def. 5 for all $j\in\\{1,\ldots,r-1\\}$ as follows: Apply the code
$\left(R_{(b\cdot r)},b\cdot r\right)$ to the first $b\cdot r$ samples of
$S[i]$ and then concatenate each codeword by $j$ zeros to obtain a source code
having codewords of length $b\cdot r+j$. The average distortion (see
(2)) of the resulting $\left(R_{(b\cdot r+j)},b\cdot r+j\right)$ code is given
by:
$\displaystyle\bar{d}\left({\boldsymbol{S}}^{(b\cdot
r+j)},\hat{{\boldsymbol{S}}}^{(b\cdot r+j)}\right)$
$\displaystyle=\frac{1}{b\cdot r+j}\left(\sum\limits_{i=1}^{b\cdot
r}\mathds{E}\left\\{\left(S[i]-\hat{S}[i]\right)^{2}\right\\}+\sum\limits_{i=b\cdot
r+1}^{b\cdot r+j}\mathds{E}\left\\{\left(S[i]\right)^{2}\right\\}\right)$
$\displaystyle=\frac{1}{b\cdot r+j}\left(b\cdot
r\cdot\bar{d}\left({\boldsymbol{S}}^{(b\cdot
r)},\hat{{\boldsymbol{S}}}^{(b\cdot
r)}\right)+\sum\limits_{i=1}^{j}\sigma_{S}^{2}[i]\right)$
$\displaystyle=\frac{b\cdot r}{b\cdot
r+j}\cdot\bar{d}\left({\boldsymbol{S}}^{(b\cdot
r)},\hat{{\boldsymbol{S}}}^{(b\cdot r)}\right)+\frac{1}{b\cdot
r+j}\sum\limits_{i=1}^{j}\sigma_{S}^{2}[i].$ (A.1)
Thus $\exists b>b_{0}$ such that $\frac{1}{b\cdot
r+j}\sum\limits_{i=1}^{j}\sigma_{S}^{2}[i]<\frac{\eta}{2}$ and
$\displaystyle\bar{d}\left({\boldsymbol{S}}^{(b\cdot
r+j)},\hat{{\boldsymbol{S}}}^{(b\cdot r+j)}\right)$
$\displaystyle=\frac{b\cdot r}{b\cdot
r+j}\cdot\bar{d}\left({\boldsymbol{S}}^{(b\cdot
r)},\hat{{\boldsymbol{S}}}^{(b\cdot r)}\right)+\frac{1}{b\cdot
r+j}\sum\limits_{i=1}^{j}\sigma_{S}^{2}[i]$ $\displaystyle\leq\frac{b\cdot
r}{b\cdot r+j}\cdot\bar{d}\left({\boldsymbol{S}}^{(b\cdot
r)},\hat{{\boldsymbol{S}}}^{(b\cdot r)}\right)+\frac{\eta}{2}$
$\displaystyle\leq\bar{d}\left({\boldsymbol{S}}^{(b\cdot
r)},\hat{{\boldsymbol{S}}}^{(b\cdot r)}\right)+\frac{\eta}{2}\leq D+\eta.$
(A.2)
The rate $R_{(b\cdot r+j)}$ satisfies:
$R_{(b\cdot r+j)}=\frac{1}{b\cdot r+j}\cdot\log_{2}M=R_{(b\cdot
r)}\cdot\frac{b\cdot r}{b\cdot r+j}\leq\left(R+\eta\right)\cdot\frac{b\cdot
r}{b\cdot r+j}\leq R+\eta.$ (A.3)
Consequently, any rate-distortion pair achievable with codes whose blocklength
is an integer multiple of $r$ can be achieved by codes with arbitrary
blocklengths.
Next, we prove that any achievable rate-distortion pair $(R,D)$ can be
achieved by codes whose blocklength is an integer multiple of $r$. To that
aim, we fix $\eta>0$. By Def. 5, it holds that there exists a code of
blocklength $l$ satisfying (3)-(4). To show that $(R,D)$ is achievable using
codes whose blocklength is an integer multiple of $r$, we assume here that $l$
is not an integer multiple of $r$, hence, there exist some positive integers
$b$ and $j$ such that $j<r$ and $l=b\cdot r+j$. We denote this code by
$\left(R_{(b\cdot r+j)},b\cdot r+j\right)$. It follows from Def. 5 that
$R_{(b\cdot r+j)}\leq R+\eta$ and $\bar{d}\left({\boldsymbol{S}}^{(b\cdot
r+j)},\hat{{\boldsymbol{S}}}^{(b\cdot r+j)}\right)\leq D+\frac{\eta}{2}$.
Next, we construct a code $\left(R_{(b+1)\cdot r},(b+1)\cdot r\right)$ with
codewords whose length is $(b+1)\cdot r$, i.e., an integer multiple of $r$, by
adding $r-j$ zeros at the end of each codeword of the code $\left(R_{(b\cdot
r+j)},b\cdot r+j\right)$. The average distortion can now be computed as
follows:
$\displaystyle\bar{d}\left({\boldsymbol{S}}^{((b+1)\cdot
r)},\hat{{\boldsymbol{S}}}^{((b+1)\cdot r)}\right)$
$\displaystyle=\frac{1}{(b+1)\cdot r}\left(\sum\limits_{i=1}^{b\cdot
r+j}\mathds{E}\left\\{\left(S[i]-\hat{S}[i]\right)^{2}\right\\}+\sum\limits_{i=b\cdot
r+j+1}^{(b+1)\cdot r}\mathds{E}\left\\{\left(S[i]\right)^{2}\right\\}\right)$
$\displaystyle=\frac{1}{(b+1)\cdot r}\left((b\cdot
r+j)\cdot\bar{d}\left({\boldsymbol{S}}^{(b\cdot
r+j)},\hat{{\boldsymbol{S}}}^{(b\cdot r+j)}\right)+\sum\limits_{i=b\cdot
r+j+1}^{(b+1)\cdot r}\sigma_{S}^{2}[i]\right)$ $\displaystyle=\frac{b\cdot
r+j}{(b+1)\cdot r}\cdot\bar{d}\left({\boldsymbol{S}}^{(b\cdot
r+j)},\hat{{\boldsymbol{S}}}^{(b\cdot r+j)}\right)+\frac{\sum\limits_{i=b\cdot
r+j+1}^{(b+1)\cdot r}\sigma_{S}^{2}[i]}{(b+1)\cdot r},$ (A.4)
and again $\exists b>b_{0}$ such that $\frac{\sum\limits_{i=b\cdot
r+j+1}^{(b+1)\cdot r}\sigma_{S}^{2}[i]}{(b+1)\cdot r}<\frac{\eta}{2}$, hence
$\displaystyle\bar{d}\left({\boldsymbol{S}}^{((b+1)\cdot
r)},\hat{{\boldsymbol{S}}}^{((b+1)\cdot r)}\right)$
$\displaystyle\leq\frac{b\cdot r+j}{(b+1)\cdot
r}\cdot\bar{d}\left({\boldsymbol{S}}^{(b\cdot
r+j)},\hat{{\boldsymbol{S}}}^{(b\cdot r+j)}\right)+\frac{\eta}{2}$
$\displaystyle\leq\bar{d}\left({\boldsymbol{S}}^{(b\cdot
r+j)},\hat{{\boldsymbol{S}}}^{(b\cdot r+j)}\right)+\frac{\eta}{2}\leq D+\eta.$
(A.5)
The rate $R_{(b+1)\cdot r}$ can be expressed as follows:
$R_{(b+1)\cdot r}=\frac{1}{(b+1)\cdot r}\cdot\log_{2}M=R_{(b\cdot
r+j)}\cdot\frac{b\cdot r+j}{(b+1)\cdot
r}\leq\left(R+\eta\right)\cdot\frac{b\cdot r+j}{(b+1)\cdot r}<R+\eta.$ (A.6)
It follows that $R_{(b+1)\cdot r}\leq R+\eta$ for any arbitrary $\eta$ by
selecting a sufficiently large $b$. This proves that every rate-distortion
pair achievable with arbitrary blocklengths (e.g., $l=b\cdot r+j,j<r$) is also
achievable when considering source codes whose blocklength is an integer
multiple of $r$ (e.g., $l=b\cdot r$). This concludes the proof. ∎
## Appendix B Proof of Theorem 1
Recall that $\alpha\in\mathcal{R}^{++}$. To prove the theorem, we fix a rate-
distortion pair $(R,D)$ that is achievable for the source
$\\{{S}[i]\\}_{i\in\mathcal{N}}$. By Def. 5 this implies that for all $\eta>0$
there exists $l_{0}(\eta)\in\mathcal{N}$ such that for all $l>l_{0}(\eta)$
there exists a source code $\mathcal{C}_{l}$ with rate $R_{l}\leq R+\eta$ and
MSE distortion
$D_{l}=\mathds{E}\big{\\{}\frac{1}{l}\big{\|}{\boldsymbol{S}}^{(l)}-\hat{{\boldsymbol{S}}}^{(l)}\big{\|}^{2}\big{\\}}\leq
D+\eta$. Next, we use the code $\mathcal{C}_{l}$ to define the source code
$\mathcal{C}_{l}^{(\alpha)}$, which operates in the following manner: The
encoder first scales its input block by $1/\alpha$. Then, the block is encoded
using the source code $\mathcal{C}_{l}$. Finally, the selected codeword is
scaled by $\alpha$. Since the $\mathcal{C}_{l}^{(\alpha)}$ has the same number
of codewords and the same blocklength as $\mathcal{C}_{l}$, it follows that
its rate, denoted $R_{l}^{(\alpha)}$, satisfies $R_{l}^{(\alpha)}=R_{l}\leq
R+\eta$. Furthermore, by the construction of $\mathcal{C}_{l}^{(\alpha)}$, it
holds that its reproduction vector when applied to
$\alpha\cdot{\boldsymbol{S}}^{(l)}$ is equal to the output of
$\mathcal{C}_{l}$ applied to ${\boldsymbol{S}}^{(l)}$ scaled by $\alpha$,
i.e., $\alpha\cdot\hat{{\boldsymbol{S}}}^{(l)}$. Consequently, the MSE of
$\mathcal{C}_{l}^{(\alpha)}$ when applied to the source
$\\{\alpha\cdot{S}[i]\\}_{i\in\mathcal{N}}$, denoted $D_{l}^{(\alpha)}$,
satisfies
$D_{l}^{(\alpha)}=\mathds{E}\big{\\{}\frac{1}{l}\big{\|}\alpha\cdot{\boldsymbol{S}}^{(l)}-\alpha\cdot\hat{{\boldsymbol{S}}}^{(l)}\big{\|}^{2}\big{\\}}=\alpha^{2}D_{l}\leq\alpha^{2}D+\alpha^{2}\eta$.
It thus follows that for all $\tilde{\eta}>0$ there exists
$\tilde{l}_{0}(\tilde{\eta})=l_{0}\big{(}\min(\tilde{\eta},\tilde{\eta}/\alpha^{2})\big{)}$
such that for all $l>\tilde{l}_{0}(\tilde{\eta})$ there exists a code
$\mathcal{C}_{l}^{(\alpha)}$ with rate $R_{l}^{(\alpha)}\leq R+\tilde{\eta}$
which achieves an MSE distortion of $D_{l}^{(\alpha)}\leq\alpha^{2}\cdot
D+\tilde{\eta}$ when applied to the compression of
$\\{\alpha\cdot{S}[i]\\}_{i\in\mathcal{N}}$. Hence, $(R,\alpha^{2}D)$ is
achievable for compression of $\\{\alpha\cdot{S}[i]\\}_{i\in\mathcal{N}}$ by
Def. 5, proving the theorem.
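The scaling construction in this proof can be sanity-checked numerically: with a nearest-codeword encoder, scaling the input by $1/\alpha$ and the selected codeword by $\alpha$ leaves the rate untouched and multiplies the MSE by exactly $\alpha^{2}$. The toy sketch below (a random codebook of our own choosing, illustrating the construction rather than an optimal code) makes this explicit.

```python
import numpy as np

rng = np.random.default_rng(3)
l, M, alpha = 8, 256, 3.0                      # blocklength, codebook size, scaling
codebook = rng.standard_normal((M, l))         # toy code C_l; rate = log2(M) / l

def encode(x, cb):
    """Nearest-codeword (minimum-MSE) encoder: returns the selected codewords."""
    idx = ((x[:, None, :] - cb[None, :, :]) ** 2).sum(axis=2).argmin(axis=1)
    return cb[idx]

S = rng.standard_normal((2000, l))
D_l = ((S - encode(S, codebook)) ** 2).mean()
# C_l^(alpha): scale the input by 1/alpha, encode with C_l, scale the codeword by alpha.
D_l_a = ((alpha * S - alpha * encode((alpha * S) / alpha, codebook)) ** 2).mean()
print(f"D_l={D_l:.4f}  D_l^(alpha)={D_l_a:.4f}  alpha^2 * D_l={alpha**2 * D_l:.4f}")
```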
## Appendix C Proof of Theorem 3
In this appendix, we prove (17b) by applying a similar approach as used for
proving (17a) in [17, Appendix A]. We first note that Def. 8 can also be
written as follows:
${\rm p-}\mathop{\lim\sup}\limits_{k\rightarrow\infty}Z_{k}\stackrel{{\scriptstyle(a)}}{{=}}\inf\left\\{\beta\in\mathcal{R}\Big{|}\mathop{\lim\sup}\limits_{k\rightarrow\infty}\Pr\left(Z_{k}>\beta\right)=0\right\\}\stackrel{{\scriptstyle(b)}}{{=}}\inf\left\\{\beta\in\mathcal{R}\Big{|}\mathop{\lim\inf}\limits_{k\rightarrow\infty}F_{k}(\beta)=1\right\\}.$ (C.1)
For the equality $(a)$, we note that the set of probabilities
$\\{\Pr\left(Z_{k}>\beta\right)\\}_{k\in\mathcal{N}}$ is non-negative and
bounded in $[0,1]$; hence, for any $\beta\in\mathcal{R}$ for which
$\mathop{\lim\sup}\limits_{k\rightarrow\infty}\Pr\left(Z_{k}>\beta\right)=0$,
it also holds from [31, Thm. 3.17] that the limit of any subsequence of
$\left\\{\Pr\left(Z_{k}>\beta\right)\right\\}_{k\in\mathcal{N}}$ is also $0$,
since non-negativity of the probability implies
$\mathop{\lim\inf}\limits_{k\rightarrow\infty}\Pr\left(Z_{k}>\beta\right)\geq
0$. Then, combined with the relationship
$\mathop{\lim\inf}\limits_{k\rightarrow\infty}\Pr\left(Z_{k}>\beta\right)\leq\mathop{\lim\sup}\limits_{k\rightarrow\infty}\Pr\left(Z_{k}>\beta\right)$,
we conclude:
$\displaystyle
0\leq\mathop{\lim\inf}\limits_{k\rightarrow\infty}\Pr\left(Z_{k}>\beta\right)\leq\mathop{\lim\sup}\limits_{k\rightarrow\infty}\Pr\left(Z_{k}>\beta\right)=0$
$\displaystyle\implies\mathop{\lim\inf}\limits_{k\rightarrow\infty}\Pr\left(Z_{k}>\beta\right)=\mathop{\lim\sup}\limits_{k\rightarrow\infty}\Pr\left(Z_{k}>\beta\right)\stackrel{{\scriptstyle(a)}}{{=}}\mathop{\lim}\limits_{k\rightarrow\infty}\Pr\left(Z_{k}>\beta\right)=0,$
where $(a)$ follows from [31, Example 3.18(c)]. This implies
$\mathop{\lim}\limits_{k\rightarrow\infty}\Pr\left(Z_{k}>\beta\right)$ exists
and is equal to 0.
In the opposite direction, if
$\mathop{\lim}\limits_{k\rightarrow\infty}\Pr\left(Z_{k}>\beta\right)=0$ then
it follows from [31, Example 3.18(c)] that
$\mathop{\lim\sup}\limits_{k\rightarrow\infty}\Pr\left(Z_{k}>\beta\right)=0$.
Next, we note that since $F_{k}(\beta)$ is bounded in $[0,1]$ then
$\mathop{\lim\inf}\limits_{k\rightarrow\infty}F_{k}(\beta)$ is finite
$\forall\beta\in\mathcal{R}$, even if
$\mathop{\lim}\limits_{k\rightarrow\infty}F_{k}(\beta)$ does not exist.
Equality $(b)$ follows since
$\mathop{\lim\sup}\limits_{k\rightarrow\infty}\Pr\left(Z_{k}>\beta\right)=\mathop{\lim\sup}\limits_{k\rightarrow\infty}\left(1-\Pr\left(Z_{k}\leq\beta\right)\right)$
which according to [32, Thm. 7.3.7] is equal to
$1+\mathop{\lim\sup}\limits_{k\rightarrow\infty}\left(-\Pr\left(Z_{k}\leq\beta\right)\right)$.
By [33, Ch. 1, page 29], this quantity is also equal to
$1-\mathop{\lim\inf}\limits_{k\rightarrow\infty}\left(\Pr\left(Z_{k}\leq\beta\right)\right)=1-\mathop{\lim\inf}\limits_{k\rightarrow\infty}F_{k}(\beta)$.
Next, we state the following lemma:
###### Lemma C.1.
Given assumption AS2, for all $\beta\in\mathcal{R}$ it holds that
$\mathop{\lim\inf}\limits_{k\rightarrow\infty}F_{k}(\beta)=\mathop{\lim}\limits_{n\rightarrow\infty}\mathop{\lim\inf}\limits_{k\rightarrow\infty}\tilde{F}_{k,n}(\beta).$
(C.2)
###### Proof.
To prove the lemma we first show that
$\mathop{\lim\inf}\limits_{k\rightarrow\infty}F_{k}(\beta)\leq\mathop{\lim}\limits_{n\rightarrow\infty}\mathop{\lim\inf}\limits_{k\rightarrow\infty}\tilde{F}_{k,n}(\beta)$,
and then we show
$\mathop{\lim\inf}\limits_{k\rightarrow\infty}F_{k}(\beta)\geq\mathop{\lim}\limits_{n\rightarrow\infty}\mathop{\lim\inf}\limits_{k\rightarrow\infty}\tilde{F}_{k,n}(\beta)$.
Recall that by AS2, for all $\beta\in\mathcal{R}$ and $k\in\mathcal{N}$,
$\tilde{F}_{k,n}(\beta)$ converges as $n\rightarrow\infty$ to $F_{k}(\beta)$,
uniformly over $k$ and $\beta$, i.e., for all $\eta>0$ there exists
$n_{0}(\eta)\in\mathcal{N}$,
$k_{0}\big{(}n_{0}(\eta),\eta\big{)}\in\mathcal{N}$ such that for every
$n>n_{0}(\eta)$, $\beta\in\mathcal{R}$ and
$k>k_{0}\big{(}n_{0}(\eta),\eta\big{)}$, it holds that
$\big{|}\tilde{F}_{k,n}(\beta)-F_{k}(\beta)\big{|}<\eta$. Consequently, for
every subsequence $0<k_{1}<k_{2}<\ldots$ such that
$\mathop{\lim}\limits_{l\rightarrow\infty}\tilde{F}_{k_{l},n}(\beta)$ exists
for any $n>n_{0}(\eta)$, it follows from [31, Thm. 7.11] (which states that if $f_{n}\rightarrow f$ uniformly on a set $E$ in a metric space, $x$ is a limit point of $E$, and $\mathop{\lim}\limits_{t\rightarrow x}f_{n}(t)=A_{n}$ for $n=1,2,3,\ldots$, then $\\{A_{n}\\}$ converges and $\mathop{\lim}\limits_{t\rightarrow x}\mathop{\lim}\limits_{n\rightarrow\infty}f_{n}(t)=\mathop{\lim}\limits_{n\rightarrow\infty}\mathop{\lim}\limits_{t\rightarrow x}f_{n}(t)$) that, as the convergence over $k$ is uniform, the limits over $n$ and $l$ are interchangeable:
$\mathop{\lim}\limits_{n\rightarrow\infty}\mathop{\lim}\limits_{l\rightarrow\infty}\tilde{F}_{k_{l},n}(\beta)=\mathop{\lim}\limits_{l\rightarrow\infty}\mathop{\lim}\limits_{n\rightarrow\infty}\tilde{F}_{k_{l},n}(\beta)=\mathop{\lim}\limits_{l\rightarrow\infty}F_{k_{l}}(\beta).$
(C.3)
The existence of such a convergent subsequence is guaranteed by the Bolzano-Weierstrass theorem [31, Thm. 2.42] (every bounded infinite subset of $\mathcal{R}^{k}$ has a limit point in $\mathcal{R}^{k}$), as $\tilde{F}_{k,n}(\beta)\in[0,1]$.
From the properties of the limit inferior [31, Thm. 3.17] (for a sequence $\\{s_{n}\\}$ of real numbers, letting $E$ denote the set of all subsequential limits of $\\{s_{n}\\}$ in the extended real number system, it holds that $\mathop{\lim\inf}\limits_{n\rightarrow\infty}s_{n}\in E$), it follows that
there exists a subsequence of
$\big{\\{}F_{k}(\beta)\big{\\}}_{k\in\mathcal{N}}$, denoted
$\big{\\{}F_{k_{m}}(\beta)\big{\\}}_{m\in\mathcal{N}}$, such that
$\mathop{\lim}\limits_{m\rightarrow\infty}F_{k_{m}}(\beta)=\mathop{\lim\inf}\limits_{k\rightarrow\infty}F_{k}(\beta)$.
Consequently,
$\displaystyle\mathop{\lim\inf}\limits_{k\rightarrow\infty}F_{k}(\beta)$
$\displaystyle=\mathop{\lim}\limits_{m\rightarrow\infty}F_{k_{m}}(\beta)\stackrel{{\scriptstyle(a)}}{{=}}\mathop{\lim}\limits_{n\rightarrow\infty}\mathop{\lim}\limits_{m\rightarrow\infty}\tilde{F}_{k_{m},n}(\beta)$
$\displaystyle\stackrel{{\scriptstyle(b)}}{{\geq}}\mathop{\lim}\limits_{n\rightarrow\infty}\mathop{\lim\inf}\limits_{k\rightarrow\infty}\tilde{F}_{k,n}(\beta),$
(C.4)
where $(a)$ follows from (C.3), and $(b)$ follows from the definition of the
limit inferior [31, Def. 3.16]. Similarly, by [31, Thm. 3.17], for any
$n\in\mathcal{N}$ there exists a subsequence of
$\\{\tilde{F}_{k,n}(\beta)\\}_{k\in\mathcal{N}}$ which we denote by
$\big{\\{}\tilde{F}_{k_{l},n}(\beta)\big{\\}}_{l\in\mathcal{N}}$ where
$\\{k_{l}\\}_{l\in\mathcal{N}}$ satisfy $0<k_{1}<k_{2}<\ldots$, such that
$\mathop{\lim}\limits_{l\rightarrow\infty}\tilde{F}_{k_{l},n}(\beta)=\mathop{\lim\inf}\limits_{k\rightarrow\infty}\tilde{F}_{k,n}(\beta)$.
Therefore,
$\displaystyle\mathop{\lim}\limits_{n\rightarrow\infty}\mathop{\lim\inf}\limits_{k\rightarrow\infty}\tilde{F}_{k,n}(\beta)$
$\displaystyle=\mathop{\lim}\limits_{n\rightarrow\infty}\mathop{\lim}\limits_{l\rightarrow\infty}\tilde{F}_{k_{l},n}(\beta)$
$\displaystyle\stackrel{{\scriptstyle(a)}}{{=}}\mathop{\lim}\limits_{l\rightarrow\infty}F_{k_{l}}(\beta)\stackrel{{\scriptstyle(b)}}{{\geq}}\mathop{\lim\inf}\limits_{k\rightarrow\infty}F_{k}(\beta),$
(C.5)
where $(a)$ follows from (C.3), and $(b)$ follows from the definition of the
limit inferior [31, Def. 3.16]. Therefore,
$\mathop{\lim\inf}\limits_{k\rightarrow\infty}F_{k}(\beta)\leq\mathop{\lim}\limits_{n\rightarrow\infty}\mathop{\lim\inf}\limits_{k\rightarrow\infty}\tilde{F}_{k,n}(\beta)$.
Combining (C.4) and (C.5) proves (C.2) in the statement of the lemma. ∎
###### Lemma C.2.
Given assumptions AS1-AS2, the sequence of RVs
$\big{\\{}\tilde{Z}_{k,n}\big{\\}}_{k,n\in\mathcal{N}}$ satisfies
$\displaystyle\mathop{\lim}\limits_{n\rightarrow\infty}\left({\rm
p-}\mathop{\lim\sup}\limits_{k\rightarrow\infty}\tilde{Z}_{k,n}\right)$
$\displaystyle=\inf\left\\{\beta\in\mathcal{R}\Big{|}\mathop{\lim}\limits_{n\rightarrow\infty}\mathop{\lim\inf}\limits_{k\rightarrow\infty}\tilde{F}_{k,n}(\beta)=1\right\\}.$
(C.6)
###### Proof.
Since by assumption AS1, for every $n\in\mathcal{N}$, every convergent
subsequence of $\big{\\{}\tilde{Z}_{k,n}\big{\\}}_{k\in\mathcal{N}}$ converges
in distribution as $k\rightarrow\infty$ to a deterministic scalar, it follows
that every convergent subsequence of $\tilde{F}_{k,n}(\beta)$ converges as
$k\rightarrow\infty$ to a step function, which is the CDF of the corresponding
sublimit of $\tilde{Z}_{k,n}$. In particular, the limit
$\mathop{\lim\inf}\limits_{k\rightarrow\infty}\tilde{F}_{k,n}(\beta)$ is a
step function representing the CDF of the deterministic scalar $\zeta_{n}$,
i.e.,
$\mathop{\lim\inf}\limits_{k\rightarrow\infty}\tilde{F}_{k,n}(\beta)=\begin{cases}0&\beta<\zeta_{n}\\\
1&\beta\geq\zeta_{n}.\end{cases}$ (C.7)
Since, by Lemma C.1, AS2 implies that the limit
$\mathop{\lim}\limits_{n\rightarrow\infty}\mathop{\lim\inf}\limits_{k\rightarrow\infty}\tilde{F}_{k,n}(\beta)$
exists (convergence to a discontinuous limit function is understood in the sense of [31, Ex. 7.3]), then $\mathop{\lim}\limits_{n\rightarrow\infty}\zeta_{n}$ exists.
Hence, we obtain that
$\mathop{\lim}\limits_{n\rightarrow\infty}\mathop{\lim\inf}\limits_{k\rightarrow\infty}\tilde{F}_{k,n}(\beta)=\begin{cases}0&\beta<\mathop{\lim}\limits_{n\rightarrow\infty}\zeta_{n}\\\
1&\beta\geq\mathop{\lim}\limits_{n\rightarrow\infty}\zeta_{n},\end{cases}$
(C.8)
and from the right-hand side of (C.6) we have that
$\inf\left\\{\beta\in\mathcal{R}\Big{|}\mathop{\lim}\limits_{n\rightarrow\infty}\mathop{\lim\inf}\limits_{k\rightarrow\infty}\tilde{F}_{k,n}(\beta)=1\right\\}=\mathop{\lim}\limits_{n\rightarrow\infty}\zeta_{n}.$
(C.9)
Next, from (C.1) and (C.7) we note that
$\displaystyle{\rm
p-}\mathop{\lim\sup}\limits_{k\rightarrow\infty}\tilde{Z}_{k,n}=\inf\left\\{\beta\in\mathcal{R}\Big{|}\mathop{\lim\inf}\limits_{k\rightarrow\infty}\tilde{F}_{k,n}(\beta)=1\right\\}=\zeta_{n}.$
Consequently, the left-hand side of (C.6) is equal to
$\mathop{\lim}\limits_{n\rightarrow\infty}\zeta_{n}$. Combining with (C.9) we
arrive at the equality (C.6) in the statement of the lemma. ∎
Substituting (C.2) into (C.1) results in
$\displaystyle{\rm p-}\mathop{\lim\sup}\limits_{k\rightarrow\infty}Z_{k}$
$\displaystyle=\inf\left\\{\beta\in\mathcal{R}\Big{|}\mathop{\lim}\limits_{n\rightarrow\infty}\mathop{\lim\inf}\limits_{k\rightarrow\infty}\tilde{F}_{k,n}(\beta)=1\right\\}\stackrel{{\scriptstyle(a)}}{{=}}\mathop{\lim}\limits_{n\rightarrow\infty}\left({\rm
p-}\mathop{\lim\sup}\limits_{k\rightarrow\infty}\tilde{Z}_{k,n}\right),$
(C.10)
where $(a)$ follows from (C.6). Eq. (C.10) concludes the proof for (17b).
## Appendix D Proof of Theorem 4
In this appendix we detail the proof of Theorem 4. The outline of the proof is
given as follows:
* •
We first show in Subsection D-A that for any $k\in\mathcal{N}$, the PDF of the
random vector ${\boldsymbol{S}}_{n}^{(k)}$, representing the first $k$ samples
of the CT WSCS source $S_{\rm c}(t)$ sampled at time instants $T_{\rm
s}(n)=\frac{T_{\rm ps}}{p+\epsilon_{n}}$, converges in the limit as
$n\rightarrow\infty$ and for any $k\in\mathcal{N}$ to the PDF of
${\boldsymbol{S}}_{\epsilon}^{(k)}$, which represents the first $k$ samples of
the CT WSCS source $S_{\rm c}(t)$, sampled at time instants $T_{\rm
s}=\frac{T_{\rm ps}}{p+\epsilon}$. We prove that this convergence is uniform
in $k\in\mathcal{N}$ and in the realization vector
${\boldsymbol{s}}^{(k)}\in\mathcal{R}^{k}$. This is stated in Lemma D.1.
* •
Next, in Subsection D-B we apply Theorem 3 to relate the mutual information
density rates for the random source vector ${\boldsymbol{S}}_{n}^{(k)}$ and
its reproduction $\hat{{\boldsymbol{S}}}_{n}^{(k)}$ with that of the random
source vector ${\boldsymbol{S}}_{\epsilon}^{(k)}$ and its reproduction
$\hat{{\boldsymbol{S}}}_{\epsilon}^{(k)}$. To that aim, let the functions
$F_{S_{n},\hat{S}_{n}}$ and $F_{S_{\epsilon},\hat{S}_{\epsilon}}$ denote the
joint distributions of an arbitrary dimensional source and reproduction
vectors corresponding to the synchronously sampled and to the asynchronously
sampled source process respectively. We define the following mutual
information density rates:
$\tilde{Z}_{k,n}^{\prime}\left(F_{S_{n},\hat{S}_{n}}\right)\triangleq\frac{1}{k}\log\frac{p_{{\boldsymbol{S}}_{n}^{(k)}|\hat{{\boldsymbol{S}}}_{n}^{(k)}}\left({\boldsymbol{S}}_{n}^{(k)}|\hat{{\boldsymbol{S}}}_{n}^{(k)}\right)}{p_{{\boldsymbol{S}}_{n}^{(k)}}\left({\boldsymbol{S}}_{n}^{(k)}\right)},$
(D.1a) and
$Z_{k,\epsilon}^{\prime}\left(F_{S_{\epsilon},\hat{S}_{\epsilon}}\right)\triangleq\frac{1}{k}\log\frac{p_{{\boldsymbol{S}}_{\epsilon}^{(k)}|\hat{{\boldsymbol{S}}}_{\epsilon}^{(k)}}\left({\boldsymbol{S}}_{\epsilon}^{(k)}\big{|}\hat{{\boldsymbol{S}}}_{\epsilon}^{(k)}\right)}{p_{{\boldsymbol{S}}_{\epsilon}^{(k)}}\left({\boldsymbol{S}}_{\epsilon}^{(k)}\right)},$
(D.1b)
$k,n\in\mathcal{N}$. The RVs
$\tilde{Z}_{k,n}^{\prime}\left(F_{S_{n},\hat{S}_{n}}\right)$ and
$Z_{k,\epsilon}^{\prime}\left(F_{S_{\epsilon},\hat{S}_{\epsilon}}\right)$ in
(D.1) denote the mutual information density rates [14, Def. 3.2.1] between the
DT source process and the corresponding reproduction process for the case of
synchronous sampling and for the case of asynchronous sampling, respectively.
We then show that if the pairs of source process and optimal reproduction
process $\big{\\{}S_{n}[i],\hat{S}_{n}[i]\big{\\}}_{i\in\mathcal{N}}$ and
$\big{\\{}S_{\epsilon}[i],\hat{S}_{\epsilon}[i]\big{\\}}_{i\in\mathcal{N}}$
satisfy that
$p_{\hat{{\boldsymbol{S}}}_{n}^{(k)}}\left(\hat{{\boldsymbol{s}}}^{(k)}\right)\mathop{\longrightarrow}\limits_{n\rightarrow\infty}p_{\hat{{\boldsymbol{S}}}_{\epsilon}^{(k)}}\left(\hat{{\boldsymbol{s}}}^{(k)}\right)$
uniformly with respect to $\hat{{\boldsymbol{s}}}^{(k)}\in\mathcal{R}^{k}$ and
$k\in\mathcal{N}$, and that
$p_{{\boldsymbol{S}}_{n}^{(k)}|\hat{{\boldsymbol{S}}}_{n}^{(k)}}\left({\boldsymbol{s}}^{(k)}\big{|}\hat{{\boldsymbol{s}}}^{(k)}\right)\mathop{\longrightarrow}\limits_{n\rightarrow\infty}p_{{\boldsymbol{S}}_{\epsilon}^{(k)}|\hat{{\boldsymbol{S}}}_{\epsilon}^{(k)}}\left({\boldsymbol{s}}^{(k)}\big{|}\hat{{\boldsymbol{s}}}^{(k)}\right)$
uniformly in
$\left(\big{(}\hat{{\boldsymbol{s}}}^{(k)}\big{)}^{T},\big{(}{\boldsymbol{s}}^{(k)}\big{)}^{T}\right)^{T}\in\mathcal{R}^{2k}$
and $k\in\mathcal{N}$, then
$\tilde{Z}_{k,n}^{\prime}\left(F_{S_{n},\hat{S}_{n}}\right)\mathop{\longrightarrow}\limits^{(dist.)}_{n\rightarrow\infty}Z_{k,\epsilon}^{\prime}\left(F_{S_{\epsilon},\hat{S}_{\epsilon}}\right)$
uniformly in $k\in\mathcal{N}$. In addition, Lemma D.3 proves that every
subsequence of
$\left\\{\tilde{Z}_{k,n}^{\prime}\left(F_{S_{n},\hat{S}_{n}}\right)\right\\}_{k\in\mathcal{N}}$
with respect to $k$, indexed by $k_{l}$, converges in distribution, in the
limit $l\rightarrow\infty$, to a deterministic scalar.
* •
Lastly, in Subsection D-C we combine the above results to show in Lemmas D.5
and D.6 that
$R_{\epsilon}(D)\leq\mathop{\lim\sup}\limits_{n\rightarrow\infty}R_{n}(D)$
and $R_{\epsilon}(D)\geq\mathop{\lim\sup}\limits_{n\rightarrow\infty}R_{n}(D)$
respectively; implying that
$R_{\epsilon}(D)=\mathop{\lim\sup}\limits_{n\rightarrow\infty}R_{n}(D)$, which
proves the theorem.
To facilitate our proof, we will need the uniform convergence in $k\in\mathcal{N}$
of $p_{{\boldsymbol{S}}_{n}^{(k)}}\left({\boldsymbol{s}}^{(k)}\right)$,
$p_{\hat{{\boldsymbol{S}}}_{n}^{(k)}}\left(\hat{{\boldsymbol{s}}}^{(k)}\right)$
and
$p_{{\boldsymbol{S}}_{n}^{(k)}|\hat{{\boldsymbol{S}}}_{n}^{(k)}}\left({\boldsymbol{s}}^{(k)}\big{|}\hat{{\boldsymbol{s}}}^{(k)}\right)$
to $p_{{\boldsymbol{S}}_{\epsilon}^{(k)}}\left({\boldsymbol{s}}^{(k)}\right)$,
$p_{\hat{{\boldsymbol{S}}}_{\epsilon}^{(k)}}\left(\hat{{\boldsymbol{s}}}^{(k)}\right)$
and
$p_{{\boldsymbol{S}}_{\epsilon}^{(k)}|\hat{{\boldsymbol{S}}}_{\epsilon}^{(k)}}\left({\boldsymbol{s}}^{(k)}\big{|}\hat{{\boldsymbol{s}}}^{(k)}\right)$,
respectively. To that aim, we will make the following scaling assumption
w.l.o.g.:
###### Assumption D.1.
The variance of the source and the allowed distortion are scaled by some
factor $\alpha^{2}$ such that
$\alpha^{2}\cdot\min\left\\{D,\left(\mathop{\min}\limits_{0\leq t\leq T_{\rm
ps}}\sigma^{2}_{S_{\rm c}}(t)-D\right)\right\\}>\frac{1}{2\pi}.$ (D.2)
Note that this assumption has no effect on the generality of the RDF for
multivariate stationary processes detailed in [5, Sec. 10.3.3], [34, Sec. IV].
Moreover, by Theorem 1, for every $\alpha>0$ it holds that any rate $R$
achievable when compressing the original source $S_{\rm c}(t)$ with distortion
not larger than $D$ is also achievable when compressing the scaled source
$\alpha\cdot S_{\rm c}(t)$ with distortion not larger than $\alpha^{2}\cdot
D$. Note that if for the source $S_{\rm c}(t)$ the distortion satisfies
$D<\mathop{\min}\limits_{0\leq t\leq T_{\rm ps}}\sigma^{2}_{S_{\rm c}}(t)$,
then for the scaled source and distortion we have $\alpha^{2}\cdot
D<\mathop{\min}\limits_{0\leq t\leq T_{\rm
ps}}\alpha^{2}\cdot\sigma^{2}_{S_{\rm c}}(t)$.
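To make the scaling in (D.2) concrete, the following minimal Python sketch computes an admissible scaling factor $\alpha^{2}$; the periodic variance profile and the distortion level used below are illustrative assumptions, not quantities from this work.

```python
import numpy as np

# Minimal sketch of Assumption D.1 with an assumed (hypothetical) variance
# profile sigma2(t) = 2 + sin(2*pi*t/T_ps) and distortion D = 0.5.
T_ps, D = 1.0, 0.5
t = np.linspace(0.0, T_ps, 10_001)
sigma2 = 2.0 + np.sin(2.0 * np.pi * t / T_ps)   # min over t is 1.0

m = min(D, sigma2.min() - D)                    # the quantity scaled in (D.2)
alpha2 = 1.01 * (1.0 / (2.0 * np.pi)) / m       # any alpha^2 above this threshold works
assert alpha2 * m > 1.0 / (2.0 * np.pi)         # (D.2) holds for the scaled source
assert alpha2 * D < alpha2 * sigma2.min()       # scaled D stays below the scaled min variance
print("admissible alpha^2:", alpha2)
```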
### D-A Convergence in Distribution of ${{\boldsymbol{S}}_{n}^{(k)}}$ to
${{\boldsymbol{S}}_{\epsilon}^{(k)}}$ Uniformly with Respect to
${k\in\mathcal{N}}$
In order to prove the uniform convergence in distribution,
${\boldsymbol{S}}_{n}^{(k)}\mathop{\longrightarrow}\limits^{(dist.)}_{n\rightarrow\infty}{\boldsymbol{S}}_{\epsilon}^{(k)}$,
uniformly with respect to $k\in\mathcal{N}$, we first prove, in Lemma D.1,
that as $n\rightarrow\infty$ the sequence of PDFs of
${\boldsymbol{S}}_{n}^{(k)}$,
$p_{{\boldsymbol{S}}_{n}^{(k)}}\left({\boldsymbol{s}}^{(k)}\right)$, converges
to the PDF of ${\boldsymbol{S}}_{\epsilon}^{(k)}$,
$p_{{\boldsymbol{S}}_{\epsilon}^{(k)}}\left({\boldsymbol{s}}^{(k)}\right)$,
uniformly in ${\boldsymbol{s}}^{(k)}\in\mathcal{R}^{k}$ and in
$k\in\mathcal{N}$. Next, we show in Corollary D.1 that
${\boldsymbol{S}}_{n}^{(k)}\mathop{\longrightarrow}\limits^{(dist.)}_{n\rightarrow\infty}{\boldsymbol{S}}_{\epsilon}^{(k)}$
uniformly in $k\in\mathcal{N}$.
To that aim, let us define the set $\mathcal{K}\triangleq\\{1,2,\ldots,k\\}$
and consider the $k$-dimensional zero-mean, memoryless random vectors
${\boldsymbol{S}}_{n}^{(k)}$ and ${\boldsymbol{S}}_{\epsilon}^{(k)}$ with
their respective diagonal correlation matrices expressed below:
$\mathsf{R}_{n}^{(k)}\triangleq\mathds{E}\big{\\{}\big{(}{\boldsymbol{S}}_{n}^{(k)}\big{)}\big{(}{\boldsymbol{S}}_{n}^{(k)}\big{)}^{T}\big{\\}}=\textrm{diag}\big{(}\sigma^{2}_{S_{n}}[1],\ldots,\sigma^{2}_{S_{n}}[k]\big{)},$
(D.3a)
$\mathsf{R}_{\epsilon}^{(k)}\triangleq\mathds{E}\big{\\{}\big{(}{\boldsymbol{S}}_{\epsilon}^{(k)}\big{)}\big{(}{\boldsymbol{S}}_{\epsilon}^{(k)}\big{)}^{T}\big{\\}}=\textrm{diag}\big{(}\sigma^{2}_{S_{\epsilon}}[1],\ldots,\sigma^{2}_{S_{\epsilon}}[k]\big{)}.$
(D.3b)
Since $\epsilon_{n}\triangleq\frac{\lfloor n\cdot\epsilon\rfloor}{n}$ it holds
that $\frac{n\cdot\epsilon-1}{n}\leq\epsilon_{n}\leq\frac{n\cdot\epsilon}{n}$;
therefore
$\mathop{\lim}\limits_{n\rightarrow\infty}\epsilon_{n}=\epsilon.$ (D.4)
Now we note that since $\sigma_{S_{\rm c}}^{2}(t)$ is uniformly continuous,
then by the definition of a uniformly continuous function, for each
$i\in\mathcal{N}$, the limit in (D.4) implies that
$\mathop{\lim}\limits_{n\rightarrow\infty}\sigma^{2}_{S_{n}}[i]\equiv\mathop{\lim}\limits_{n\rightarrow\infty}\sigma^{2}_{S_{\rm
c}}\left(i\cdot\frac{T_{\rm ps}}{p+\epsilon_{n}}\right)=\sigma^{2}_{S_{\rm
c}}\left(i\cdot\frac{T_{\rm
ps}}{p+\epsilon}\right)\equiv\sigma^{2}_{S_{\epsilon}}[i].$ (D.5)
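The convergence in (D.4)-(D.5) is easy to observe numerically. In the sketch below, the variance profile is an illustrative assumption, and the check is for a range of fixed indices $i$:

```python
import numpy as np

# Sketch of (D.4)-(D.5) with an assumed variance profile: eps_n -> eps,
# hence sigma2_{S_n}[i] -> sigma2_{S_eps}[i] for each fixed index i,
# by the uniform continuity of sigma2_{S_c}(t).
T_ps, p, eps = 1.0, 2, np.sqrt(2.0) - 1.0        # irrational eps (asynchronous sampling)
sigma2 = lambda tt: 2.0 + np.sin(2.0 * np.pi * tt / T_ps)

i = np.arange(1, 50)                             # a few fixed sample indices
target = sigma2(i * T_ps / (p + eps))            # sigma2_{S_eps}[i]
for n in (10, 100, 10_000):
    eps_n = np.floor(n * eps) / n                # eps_n = floor(n*eps)/n
    gap = np.abs(sigma2(i * T_ps / (p + eps_n)) - target)
    print(n, gap.max())                          # the gap shrinks as n grows
```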
From Assumption D.1, it follows that $\sigma^{2}_{S_{n}}[i]$ satisfies
$\sigma^{2}_{S_{n}}[i]>\frac{1}{2\pi}$; hence, we can state the following lemma:
###### Lemma D.1.
The PDF of ${\boldsymbol{S}}_{n}^{(k)}$,
$p_{{\boldsymbol{S}}_{n}^{(k)}}\left({\boldsymbol{s}}^{(k)}\right)$, converges
as $n\rightarrow\infty$ to the PDF of ${\boldsymbol{S}}_{\epsilon}^{(k)}$,
$p_{{\boldsymbol{S}}_{\epsilon}^{(k)}}\left({\boldsymbol{s}}^{(k)}\right)$,
uniformly in ${\boldsymbol{s}}^{(k)}\in\mathcal{R}^{k}$ and in
$k\in\mathcal{N}$:
$\mathop{\lim}\limits_{n\rightarrow\infty}p_{{\boldsymbol{S}}_{n}^{(k)}}\left({\boldsymbol{s}}^{(k)}\right)=p_{{\boldsymbol{S}}_{\epsilon}^{(k)}}\left({\boldsymbol{s}}^{(k)}\right),\quad\forall{\boldsymbol{s}}^{(k)}\in\mathcal{R}^{k},\forall
k\in\mathcal{N}.$
###### Proof.
The proof of the lemma directly follows from the steps in the proof of [17,
Lemma B.1], which was applied for a Gaussian noise process with independent
entries and variance above $\frac{1}{2\pi}$. ([17, Lemma B.1] considers a CT
memoryless WSCS Gaussian noise process ${{\boldsymbol{W}}}_{c}(t)$ sampled
synchronously and asynchronously to yield ${{\boldsymbol{W}}}_{n}^{(k)}$ and
${{\boldsymbol{W}}}_{\epsilon}^{(k)}$, respectively, both having independent
entries. That lemma proves that, for both Gaussian vectors of independent
entries, if the variance of ${{\boldsymbol{W}}}_{n}^{(k)}$ converges as
$n\rightarrow\infty$ to the variance of ${{\boldsymbol{W}}}_{\epsilon}^{(k)}$,
then the PDF of ${{\boldsymbol{W}}}_{n}^{(k)}$, which in this case is a
continuous mapping of the variance, also converges to the PDF of
${{\boldsymbol{W}}}_{\epsilon}^{(k)}$. For simplicity, the assumption
$\frac{1}{2\pi}<\sigma^{2}_{W_{\rm c}}(t)<\infty$ for all $t\in\mathcal{R}$
was used there to prove uniform convergence of the PDF of
${{\boldsymbol{W}}}_{n}^{(k)}$. A similar setup is applied in this work with
$\frac{1}{2\pi}<\sigma^{2}_{S_{c}}(t)<\infty$ for all $t\in\mathcal{R}$, for
the memoryless CT WSCS process $S_{\rm c}(t)$; ${\boldsymbol{S}}_{n}^{(k)}$
and ${\boldsymbol{S}}_{\epsilon}^{(k)}$ also have independent entries, for the
synchronously and asynchronously sampled cases, respectively. The proof of
uniform convergence in $k\in\mathcal{N}$ from [17, Lemma B.1] also applies
here.) ∎
Lemma D.1 gives rise to the following corollary:
###### Corollary D.1.
For any $k\in\mathcal{N}$ it holds that
${\boldsymbol{S}}_{n}^{(k)}\mathop{\longrightarrow}\limits^{(dist.)}_{n\rightarrow\infty}{\boldsymbol{S}}_{\epsilon}^{(k)}$,
and convergence is uniform over $k$.
###### Proof.
The corollary holds due to [35, Thm. 1] (if, for a sequence
$\\{p_{n}(x)\\}_{n\in\mathcal{N}}$ of densities,
$\mathop{\lim}\limits_{n\rightarrow\infty}p_{n}(x)=p(x)$ for almost all $x$ in
$\mathcal{R}$, then a sufficient condition for
$\mathop{\lim}\limits_{n\rightarrow\infty}\mathop{\int}\limits_{\mathcal{S}}p_{n}(x)dx=\mathop{\int}\limits_{\mathcal{S}}p(x)dx$,
uniformly for all Borel sets $\mathcal{S}$ in $\mathcal{R}$, is that $p(x)$ be
a density): since
$p_{{\boldsymbol{S}}_{n}^{(k)}}\left({\boldsymbol{s}}^{(k)}\right)$ converges
to $p_{{\boldsymbol{S}}_{\epsilon}^{(k)}}\left({\boldsymbol{s}}^{(k)}\right)$
then
${\boldsymbol{S}}_{n}^{(k)}\mathop{\longrightarrow}\limits^{(dist.)}_{n\rightarrow\infty}{\boldsymbol{S}}_{\epsilon}^{(k)}$.
In addition, since the convergence of the PDFs is uniform in
$k\in\mathcal{N}$, the convergence of the CDFs is also uniform in
$k\in\mathcal{N}$. ∎
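The mechanism behind Corollary D.1 can be illustrated in one dimension; the variances below are assumed values used only for illustration. The probability gap, uniformly over Borel sets, is bounded by half the $L_{1}$ distance between the densities:

```python
import numpy as np

# One-dimensional sketch of [35, Thm. 1] with assumed variances v_n -> v_eps:
# sup over Borel sets S of |P_n(S) - P_eps(S)| is at most half the L1 distance
# between the densities, which vanishes as v_n -> v_eps.
pdf = lambda x, v: np.exp(-x**2 / (2.0 * v)) / np.sqrt(2.0 * np.pi * v)
x = np.linspace(-10.0, 10.0, 200_001)
v_eps = 1.0
for v_n in (1.5, 1.1, 1.01, 1.001):
    l1 = np.trapz(np.abs(pdf(x, v_n) - pdf(x, v_eps)), x)
    print(v_n, l1 / 2.0)   # uniform-over-sets probability gap, tending to 0
```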
### D-B Showing that
${\tilde{Z}_{k,n}^{\prime}\left(F_{S_{n},\hat{S}_{n}}^{\rm opt}\right)}$ and
${Z_{k,\epsilon}^{\prime}\left(F_{S_{\epsilon},\hat{S}_{\epsilon}}\right)}$
Satisfy the Conditions of Thm. 3
Let $F_{S_{n},\hat{S}_{n}}^{\rm opt}$ denote the joint distribution for the
source process and the corresponding optimal reproduction process satisfying
the distortion constraint $D$. We next prove that for
$F_{S_{n},\hat{S}_{n}}^{\rm
opt}\mathop{\longrightarrow}\limits^{(dist.)}_{n\rightarrow\infty}F_{S_{\epsilon},\hat{S}_{\epsilon}}$,
then $\tilde{Z}_{k,n}^{\prime}\left(F_{S_{n},\hat{S}_{n}}^{\rm opt}\right)$
and $Z_{k,\epsilon}^{\prime}\left(F_{S_{\epsilon},\hat{S}_{\epsilon}}\right)$
satisfy AS1-AS2. In particular, in Lemma D.2 we prove that
$\tilde{Z}_{k,n}^{\prime}\left(F_{S_{n},\hat{S}_{n}}^{\rm
opt}\right)\mathop{\longrightarrow}\limits^{(dist.)}_{n\rightarrow\infty}Z_{k,\epsilon}^{\prime}\left(F_{S_{\epsilon},\hat{S}_{\epsilon}}\right)$
uniformly in $k\in\mathcal{N}$ for the optimal zero-mean Gaussian reproduction
vectors with independent entries. Lemma D.3 proves that for any fixed $n$,
$\tilde{Z}_{k,n}^{\prime}\left(F_{S_{n},\hat{S}_{n}}^{\rm opt}\right)$
converges in distribution to a deterministic scalar as $k\rightarrow\infty$.
###### Lemma D.2.
Let $\\{\hat{{\boldsymbol{S}}}_{n}^{(k)}\\}_{n\in\mathcal{N}}$ and
$\\{{\boldsymbol{W}}_{n}^{(k)}\\}_{n\in\mathcal{N}}$ be two sets of mutually
independent sequences of $k\times 1$ zero-mean Gaussian random vectors related
via the backward channel (20), each having independent entries, and let
$p_{\hat{{\boldsymbol{S}}}_{n}^{(k)}}\left(\hat{{\boldsymbol{s}}}^{(k)}\right)$
and $p_{{\boldsymbol{W}}_{n}^{(k)}}\left({\boldsymbol{w}}^{(k)}\right)$ denote
their respective PDFs. Consider two other zero-mean Gaussian random
vectors $\hat{{\boldsymbol{S}}}_{\epsilon}^{(k)}$ and
${\boldsymbol{W}}_{\epsilon}^{(k)}$ each having independent entries with the
PDFs
$p_{\hat{{\boldsymbol{S}}}_{\epsilon}^{(k)}}\left(\hat{{\boldsymbol{s}}}^{(k)}\right)$
and
$p_{{\boldsymbol{W}}_{\epsilon}^{(k)}}\left({\boldsymbol{w}}^{(k)}\right)$,
respectively, such that
$\mathop{\lim}\limits_{n\rightarrow\infty}p_{\hat{{\boldsymbol{S}}}_{n}^{(k)}}\left(\hat{{\boldsymbol{s}}}^{(k)}\right)=p_{\hat{{\boldsymbol{S}}}_{\epsilon}^{(k)}}\left(\hat{{\boldsymbol{s}}}^{(k)}\right)$
uniformly in $\hat{{\boldsymbol{s}}}^{(k)}\in\mathcal{R}^{k}$ and uniformly
with respect to $k\in\mathcal{N}$, and
$\mathop{\lim}\limits_{n\rightarrow\infty}p_{{\boldsymbol{W}}_{n}^{(k)}}\left({\boldsymbol{w}}^{(k)}\right)=p_{{\boldsymbol{W}}_{\epsilon}^{(k)}}\left({\boldsymbol{w}}^{(k)}\right)$
uniformly in ${\boldsymbol{w}}^{(k)}\in\mathcal{R}^{k}$ and uniformly with
respect to $k\in\mathcal{N}$. Then, the RVs
$\tilde{Z}_{k,n}^{\prime}\left(F_{S_{n},\hat{S}_{n}}^{\rm opt}\right)$ and
$Z_{k,\epsilon}^{\prime}\left(F_{S_{\epsilon},\hat{S}_{\epsilon}}\right)$,
defined via (D.1) satisfy
$\tilde{Z}_{k,n}^{\prime}\left(F_{S_{n},\hat{S}_{n}}^{\rm
opt}\right)\mathop{\longrightarrow}\limits^{(dist.)}_{n\rightarrow\infty}Z_{k,\epsilon}^{\prime}\left(F_{S_{\epsilon},\hat{S}_{\epsilon}}\right)$
uniformly over $k\in\mathcal{N}$.
###### Proof.
To begin the proof, for
$\left({\boldsymbol{s}}^{(k)},\hat{{\boldsymbol{s}}}^{(k)}\right)\in\mathcal{R}^{2k}$,
define
$f_{k,n}\left({\boldsymbol{s}}^{(k)},\hat{{\boldsymbol{s}}}^{(k)}\right)\triangleq\frac{p_{{\boldsymbol{S}}_{n}^{(k)}|\hat{{\boldsymbol{S}}}_{n}^{(k)}}\left({\boldsymbol{s}}^{(k)}\big{|}\hat{{\boldsymbol{s}}}^{(k)}\right)}{p_{{\boldsymbol{S}}_{n}^{(k)}}\left({\boldsymbol{s}}^{(k)}\right)},\qquad
f_{k,\epsilon}\left({\boldsymbol{s}}^{(k)},\hat{{\boldsymbol{s}}}^{(k)}\right)\triangleq\frac{p_{{\boldsymbol{S}}_{\epsilon}^{(k)}|\hat{{\boldsymbol{S}}}_{\epsilon}^{(k)}}\left({\boldsymbol{s}}^{(k)}\big{|}\hat{{\boldsymbol{s}}}^{(k)}\right)}{p_{{\boldsymbol{S}}_{\epsilon}^{(k)}}\left({\boldsymbol{s}}^{(k)}\right)}.$
(D.6)
Now, we recall the backward channel relationship (20):
${\boldsymbol{S}}_{n}^{(k)}=\hat{{\boldsymbol{S}}}_{n}^{(k)}+{\boldsymbol{W}}_{n}^{(k)},$
(D.7)
where $\hat{{\boldsymbol{S}}}_{n}^{(k)}$ and ${\boldsymbol{W}}_{n}^{(k)}$ are
mutually independent zero-mean, Gaussian random vectors with independent
entries, corresponding to the optimal compression process and its respective
distortion. From this relationship we obtain
$\displaystyle
p_{{\boldsymbol{S}}_{n}^{(k)}|\hat{{\boldsymbol{S}}}_{n}^{(k)}}\left({\boldsymbol{s}}^{(k)}\big{|}\hat{{\boldsymbol{s}}}^{(k)}\right)$
$\displaystyle\stackrel{{\scriptstyle(a)}}{{=}}p_{\hat{{\boldsymbol{S}}}_{n}^{(k)}+{\boldsymbol{W}}_{n}^{(k)}|\hat{{\boldsymbol{S}}}_{n}^{(k)}}\left({\boldsymbol{s}}^{(k)}\big{|}\hat{{\boldsymbol{s}}}^{(k)}\right)$
$\displaystyle=p_{{\boldsymbol{W}}_{n}^{(k)}|\hat{{\boldsymbol{S}}}_{n}^{(k)}}\left({\boldsymbol{s}}^{(k)}-\hat{{\boldsymbol{s}}}^{(k)}\big{|}\hat{{\boldsymbol{s}}}^{(k)}\right)$
$\displaystyle\stackrel{{\scriptstyle(b)}}{{=}}p_{{\boldsymbol{W}}_{n}^{(k)}}\left({\boldsymbol{s}}^{(k)}-\hat{{\boldsymbol{s}}}^{(k)}\right),$
(D.8)
where $(a)$ follows since
${\boldsymbol{S}}_{n}^{(k)}=\hat{{\boldsymbol{S}}}_{n}^{(k)}+{\boldsymbol{W}}_{n}^{(k)}$,
see (D.7), and $(b)$ follows since ${\boldsymbol{W}}_{n}^{(k)}$ and
$\hat{{\boldsymbol{S}}}_{n}^{(k)}$ are mutually independent. The joint PDF of
${\boldsymbol{S}}_{n}^{(k)}$ and $\hat{{\boldsymbol{S}}}_{n}^{(k)}$ can be
expressed via the conditional PDF as:
$\displaystyle
p_{{\boldsymbol{S}}_{n}^{(k)},\hat{{\boldsymbol{S}}}_{n}^{(k)}}\left({\boldsymbol{s}}^{(k)},\hat{{\boldsymbol{s}}}^{(k)}\right)\\!=\\!p_{{\boldsymbol{S}}_{n}^{(k)}|\hat{{\boldsymbol{S}}}_{n}^{(k)}}\left({\boldsymbol{s}}^{(k)}\big{|}\hat{{\boldsymbol{s}}}^{(k)}\right)\cdot
p_{\hat{{\boldsymbol{S}}}_{n}^{(k)}}\left(\hat{{\boldsymbol{s}}}^{(k)}\right)\\!\stackrel{{\scriptstyle(a)}}{{=}}\\!p_{{\boldsymbol{W}}_{n}^{(k)}}\left({\boldsymbol{s}}^{(k)}\\!-\\!\hat{{\boldsymbol{s}}}^{(k)}\right)\cdot
p_{\hat{{\boldsymbol{S}}}_{n}^{(k)}}\left(\hat{{\boldsymbol{s}}}^{(k)}\right),$
(D.9)
where $(a)$ follows from (D.8). Since $\hat{{\boldsymbol{S}}}_{n}^{(k)}$ and
${\boldsymbol{W}}_{n}^{(k)}$ are Gaussian and mutually independent and since
the product of two multivariate Gaussian PDFs is also a multivariate Gaussian
PDF [36, Sec. 3], it follows from (D.9) that ${\boldsymbol{S}}_{n}^{(k)}$ and
$\hat{{\boldsymbol{S}}}_{n}^{(k)}$ are jointly Gaussian. Following the mutual
independence of ${\boldsymbol{W}}_{n}^{(k)}$ and
$\hat{{\boldsymbol{S}}}_{n}^{(k)}$, the right hand side (RHS) of (D.9) is also
equivalent to the joint PDF of
$\left[\left({\boldsymbol{W}}_{n}^{(k)}\right)^{T},\left(\hat{{\boldsymbol{S}}}_{n}^{(k)}\right)^{T}\right]^{T}$
denoted by
$p_{{\boldsymbol{W}}_{n}^{(k)},\hat{{\boldsymbol{S}}}_{n}^{(k)}}\left({\boldsymbol{s}}^{(k)}-\hat{{\boldsymbol{s}}}^{(k)},\hat{{\boldsymbol{s}}}^{(k)}\right)$.
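The factorization (D.8)-(D.9) can be sanity-checked by a one-dimensional Monte Carlo experiment; the variances below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Monte Carlo sketch of (D.8): with S = S_hat + W and W independent of S_hat,
# the conditional density of S given S_hat = a equals the density of W at s - a.
var_hat, var_w = 1.5, 0.5                        # assumed variances of S_hat and W
n_samp = 2_000_000
s_hat = rng.normal(0.0, np.sqrt(var_hat), n_samp)
s = s_hat + rng.normal(0.0, np.sqrt(var_w), n_samp)   # backward channel (D.7)

a, s0 = 0.5, 1.2                                 # condition near S_hat = a, evaluate at s0
sel = np.abs(s_hat - a) < 0.02
hist, edges = np.histogram(s[sel], bins=140, range=(-3.0, 4.0), density=True)
empirical = hist[np.searchsorted(edges, s0) - 1]
pdf_w = np.exp(-(s0 - a)**2 / (2 * var_w)) / np.sqrt(2 * np.pi * var_w)
print(empirical, pdf_w)                          # the two values should be close
```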
Now, from (D.8), the assumption
$\mathop{\lim}\limits_{n\rightarrow\infty}p_{{\boldsymbol{W}}_{n}^{(k)}}\left({\boldsymbol{w}}^{(k)}\right)=p_{{\boldsymbol{W}}_{\epsilon}^{(k)}}\left({\boldsymbol{w}}^{(k)}\right)$
implies that a limit exists for the conditional PDF
$p_{{\boldsymbol{S}}_{n}^{(k)}\mid\hat{{\boldsymbol{S}}}_{n}^{(k)}}\left({\boldsymbol{s}}^{(k)}\mid\hat{{\boldsymbol{s}}}^{(k)}\right)$,
which we denote by
$p_{{\boldsymbol{S}}_{\epsilon}^{(k)}\mid\hat{{\boldsymbol{S}}}_{\epsilon}^{(k)}}\left({\boldsymbol{s}}^{(k)}\mid\hat{{\boldsymbol{s}}}^{(k)}\right)$.
Combining this with the assumption
$\mathop{\lim}\limits_{n\rightarrow\infty}p_{\hat{{\boldsymbol{S}}}_{n}^{(k)}}\left(\hat{{\boldsymbol{s}}}^{(k)}\right)=p_{\hat{{\boldsymbol{S}}}_{\epsilon}^{(k)}}\left(\hat{{\boldsymbol{s}}}^{(k)}\right)$,
we have that,
$\displaystyle\mathop{\lim}\limits_{n\rightarrow\infty}p_{{\boldsymbol{S}}_{n}^{(k)},\hat{{\boldsymbol{S}}}_{n}^{(k)}}\left({\boldsymbol{s}}^{(k)},\hat{{\boldsymbol{s}}}^{(k)}\right)$
$\displaystyle=\mathop{\lim}\limits_{n\rightarrow\infty}\left(p_{{\boldsymbol{S}}_{n}^{(k)}|\hat{{\boldsymbol{S}}}_{n}^{(k)}}\left({\boldsymbol{s}}^{(k)}\big{|}\hat{{\boldsymbol{s}}}^{(k)}\right)\cdot
p_{\hat{{\boldsymbol{S}}}_{n}^{(k)}}\left(\hat{{\boldsymbol{s}}}^{(k)}\right)\right)$
$\displaystyle\stackrel{{\scriptstyle(a)}}{{=}}\mathop{\lim}\limits_{n\rightarrow\infty}\left(p_{{\boldsymbol{W}}_{n}^{(k)}}\left({\boldsymbol{s}}^{(k)}-\hat{{\boldsymbol{s}}}^{(k)}\right)\cdot
p_{\hat{{\boldsymbol{S}}}_{n}^{(k)}}\left(\hat{{\boldsymbol{s}}}^{(k)}\right)\right)$
$\displaystyle\stackrel{{\scriptstyle(b)}}{{=}}\mathop{\lim}\limits_{n\rightarrow\infty}\left(p_{{\boldsymbol{W}}_{n}^{(k)}}\left({\boldsymbol{s}}^{(k)}-\hat{{\boldsymbol{s}}}^{(k)}\right)\right)\cdot\mathop{\lim}\limits_{n\rightarrow\infty}\left(p_{\hat{{\boldsymbol{S}}}_{n}^{(k)}}\left(\hat{{\boldsymbol{s}}}^{(k)}\right)\right)$
$\displaystyle=p_{{\boldsymbol{S}}_{\epsilon}^{(k)}|\hat{{\boldsymbol{S}}}_{\epsilon}^{(k)}}\left({\boldsymbol{s}}^{(k)}\big{|}\hat{{\boldsymbol{s}}}^{(k)}\right)\cdot
p_{\hat{{\boldsymbol{S}}}_{\epsilon}^{(k)}}\left(\hat{{\boldsymbol{s}}}^{(k)}\right)$
$\displaystyle=p_{{\boldsymbol{S}}_{\epsilon}^{(k)},\hat{{\boldsymbol{S}}}_{\epsilon}^{(k)}}\left({\boldsymbol{s}}^{(k)},\hat{{\boldsymbol{s}}}^{(k)}\right),$
(D.10)
where $(a)$ follows from (D.8), and $(b)$ follows since the limit of each
sequence in the product exists [31, Thm. 3.3]. The convergence is uniform in
$\left(\left(\hat{{\boldsymbol{s}}}^{(k)}\right)^{T},\left({\boldsymbol{s}}^{(k)}\right)^{T}\right)^{T}\in\mathcal{R}^{2k}$
and $k\in\mathcal{N}$, as each sequence converges uniformly in
$k\in\mathcal{N}$ [31, Page 165, Ex. 2] (the solution to this exercise shows
that if two sequences of bounded functions $\\{f_{n}\\}$ and $\\{g_{n}\\}$
converge uniformly on a set $E$, then $\\{f_{n}g_{n}\\}$ converges uniformly
on $E$).
Observe that the joint PDF for the zero-mean Gaussian random vectors
$\left[{\boldsymbol{S}}_{n}^{(k)},\hat{{\boldsymbol{S}}}_{n}^{(k)}\right]$ is
given by the general expression:
$p_{{\boldsymbol{S}}_{n}^{(k)},\hat{{\boldsymbol{S}}}_{n}^{(k)}}\\!\left({\boldsymbol{s}}^{(k)},\hat{{\boldsymbol{s}}}^{(k)}\right)\\!=\\!\Big{(}{\rm
Det}\big{(}2\pi\tilde{\mathsf{C}}_{n}^{(2k)}\big{)}\Big{)}^{-\frac{1}{2}}\\!\exp\left(-\frac{1}{2}\left[\left(\hat{{\boldsymbol{s}}}^{(k)}\right)^{T}\\!\\!,\left({\boldsymbol{s}}^{(k)}\right)^{T}\right]\big{(}\tilde{\mathsf{C}}_{n}^{(2k)}\big{)}^{-1}\\!\left[\left(\hat{{\boldsymbol{s}}}^{(k)}\right)^{T}\\!\\!,\left({\boldsymbol{s}}^{(k)}\right)^{T}\right]^{T}\right),$
(D.11)
where $\tilde{\mathsf{C}}_{n}^{(2k)}$ denotes the joint covariance matrix of
$\left[\left(\hat{{\boldsymbol{S}}}_{n}^{(k)}\right)^{T},\left({\boldsymbol{S}}_{n}^{(k)}\right)^{T}\right]^{T}$.
From (D.11) we note that
$p_{{\boldsymbol{S}}_{n}^{(k)},\hat{{\boldsymbol{S}}}_{n}^{(k)}}\left({\boldsymbol{s}}^{(k)},\hat{{\boldsymbol{s}}}^{(k)}\right)$
is a continuous mapping of $\tilde{\mathsf{C}}_{n}^{(2k)}$ with respect to the
index $n$, see [17, Lemma B.1]. Hence the convergence in (D.10) of
$p_{{\boldsymbol{S}}_{n}^{(k)},\hat{{\boldsymbol{S}}}_{n}^{(k)}}\left({\boldsymbol{s}}^{(k)},\hat{{\boldsymbol{s}}}^{(k)}\right)$
as $n\rightarrow\infty$ directly implies the convergence of
$\tilde{\mathsf{C}}_{n}^{(2k)}$ as $n\rightarrow\infty$ to a limit which we
denote by $\tilde{\mathsf{C}}_{\epsilon}^{(2k)}$. It therefore follows that
the limit function
$p_{{\boldsymbol{S}}_{\epsilon}^{(k)},\hat{{\boldsymbol{S}}}_{\epsilon}^{(k)}}\left({\boldsymbol{s}}^{(k)},\hat{{\boldsymbol{s}}}^{(k)}\right)$
corresponds to the PDF of a Gaussian vector with the covariance matrix
$\tilde{\mathsf{C}}_{\epsilon}^{(2k)}$.
The joint PDF for the zero-mean Gaussian random vectors
$\left[{\boldsymbol{W}}_{n}^{(k)},\hat{{\boldsymbol{S}}}_{n}^{(k)}\right]$ can
be obtained using their mutual independence as:
$\displaystyle
p_{{\boldsymbol{W}}_{n}^{(k)},\hat{{\boldsymbol{S}}}_{n}^{(k)}}\left({\boldsymbol{s}}^{(k)}-\hat{{\boldsymbol{s}}}^{(k)},\hat{{\boldsymbol{s}}}^{(k)}\right)$
$\displaystyle=\\!\Big{(}{\rm
Det}\big{(}2\pi\Sigma_{n}^{(2k)}\big{)}\Big{)}^{-\frac{1}{2}}\\!\exp\left(-\frac{1}{2}\left[\left({\boldsymbol{s}}^{(k)}\\!-\\!\hat{{\boldsymbol{s}}}^{(k)}\right)^{T},\left(\hat{{\boldsymbol{s}}}^{(k)}\right)^{T}\right]\big{(}\Sigma_{n}^{(2k)}\big{)}^{-\\!1}\\!\left[\left({\boldsymbol{s}}^{(k)}\\!-\\!\hat{{\boldsymbol{s}}}^{(k)}\right)^{T},\left(\hat{{\boldsymbol{s}}}^{(k)}\right)^{T}\right]^{T}\right),$
(D.12)
where $\Sigma_{n}^{(2k)}$ denotes the joint covariance matrix of
$\left[\left({\boldsymbol{W}}_{n}^{(k)}\right)^{T},\left(\hat{{\boldsymbol{S}}}_{n}^{(k)}\right)^{T}\right]^{T}$.
Since the vectors ${\boldsymbol{W}}_{n}^{(k)}$ and
$\hat{{\boldsymbol{S}}}_{n}^{(k)}$ are zero-mean, mutually independent and, by
the relationship (20), each vector has independent entries, it follows that
$\Sigma_{n}^{(2k)}$ is a diagonal matrix with each diagonal element taking the
value of the corresponding temporal variance at the respective index
$i\in\\{1,2,\ldots,k\\}$, i.e.,
$\displaystyle\Sigma_{n}^{(2k)}$
$\displaystyle\triangleq\mathds{E}\left\\{\left(\left({\boldsymbol{W}}_{n}^{(k)}\right)^{T},\left(\hat{{\boldsymbol{S}}}_{n}^{(k)}\right)^{T}\right)^{T}\cdot\left(\left({\boldsymbol{W}}_{n}^{(k)}\right)^{T},\left(\hat{{\boldsymbol{S}}}_{n}^{(k)}\right)^{T}\right)\right\\}$
$\displaystyle=\textrm{diag}\big{(}\mathds{E}\left\\{\left(W_{n}[1]\right)^{2}\right\\},\mathds{E}\left\\{\left(W_{n}[2]\right)^{2}\right\\},\ldots,\mathds{E}\left\\{\left(W_{n}[k]\right)^{2}\right\\},\sigma^{2}_{\hat{S}_{n}}[1],\sigma^{2}_{\hat{S}_{n}}[2]\ldots,\sigma^{2}_{\hat{S}_{n}}[k]\big{)}.$
(D.13)
The convergence of
$p_{{\boldsymbol{W}}_{n}^{(k)},\hat{{\boldsymbol{S}}}_{n}^{(k)}}\left({\boldsymbol{s}}^{(k)}-\hat{{\boldsymbol{s}}}^{(k)},\hat{{\boldsymbol{s}}}^{(k)}\right)$,
from (D.10), implies a convergence of the diagonal elements in (D.13) as
$n\rightarrow\infty$. Hence $\Sigma_{n}^{(2k)}$ converges as
$n\rightarrow\infty$ to a diagonal joint covariance matrix which we denote by
$\Sigma_{\epsilon}^{(2k)}$. This further implies that the limiting vectors
${\boldsymbol{W}}_{\epsilon}^{(k)}$ and
$\hat{{\boldsymbol{S}}}_{\epsilon}^{(k)}$ are zero-mean, mutually independent
and each vector has independent entries indexed by $i\in\\{1,2,\ldots,k\\}$.
Relationship (D.10) implies that the joint limit distribution satisfies
$p_{{\boldsymbol{S}}_{\epsilon}^{(k)},\hat{{\boldsymbol{S}}}_{\epsilon}^{(k)}}\left({\boldsymbol{s}}^{(k)},\hat{{\boldsymbol{s}}}^{(k)}\right)=p_{\hat{{\boldsymbol{S}}}_{\epsilon}^{(k)}}\left(\hat{{\boldsymbol{s}}}^{(k)}\right)\cdot
p_{{\boldsymbol{W}}_{\epsilon}^{(k)}}\left({\boldsymbol{s}}^{(k)}-\hat{{\boldsymbol{s}}}^{(k)}\right)$.
Consequently, we can define an asymptotic backward channel that satisfies
(D.10) via the expression:
${\boldsymbol{S}}_{\epsilon}^{(k)}[i]=\hat{{\boldsymbol{S}}}_{\epsilon}^{(k)}[i]+{\boldsymbol{W}}_{\epsilon}^{(k)}[i].$
(D.14)
Next, by convergence of the joint PDF
$p_{{\boldsymbol{W}}_{n}^{(k)}}\left({\boldsymbol{s}}^{(k)}-\hat{{\boldsymbol{s}}}^{(k)}\right)\cdot
p_{\hat{{\boldsymbol{S}}}_{n}^{(k)}}\left(\hat{{\boldsymbol{s}}}^{(k)}\right)$
uniformly in $k\in\mathcal{N}$ and in
$\left(\left({\boldsymbol{s}}^{(k)}\right)^{T},\left(\hat{{\boldsymbol{s}}}^{(k)}\right)^{T}\right)^{T}\in\mathcal{R}^{2k}$,
it follows from [35, Thm. 1] (restated in the proof of Corollary D.1 above) that
$\left[\big{(}\hat{{\boldsymbol{S}}}_{n}^{(k)}\big{)}^{T},\big{(}{\boldsymbol{W}}_{n}^{(k)}\big{)}^{T}\right]^{T}\mathop{\longrightarrow}\limits^{(dist.)}_{n\rightarrow\infty}\left[\big{(}\hat{{\boldsymbol{S}}}_{\epsilon}^{(k)}\big{)}^{T},\big{(}{\boldsymbol{W}}_{\epsilon}^{(k)}\big{)}^{T}\right]^{T}$
and the convergence is uniform in $k\in\mathcal{N}$ and in
$\left(\left({\boldsymbol{s}}^{(k)}\right)^{T},\left(\hat{{\boldsymbol{s}}}^{(k)}\right)^{T}\right)^{T}\in\mathcal{R}^{2k}$.
Then, by the continuous mapping theorem (CMT) [37, Thm. 7.7], we have
$\displaystyle\left[\big{(}{\boldsymbol{S}}_{n}^{(k)}\big{)}^{T},\big{(}\hat{{\boldsymbol{S}}}_{n}^{(k)}\big{)}^{T}\right]^{T}\\!=\\!\left[\big{(}\hat{{\boldsymbol{S}}}_{n}^{(k)}\\!+\\!{\boldsymbol{W}}_{n}^{(k)}\big{)}^{T},\big{(}\hat{{\boldsymbol{S}}}_{n}^{(k)}\big{)}^{T}\right]^{T}\mathop{\longrightarrow}\limits^{(dist.)}_{n\rightarrow\infty}\left[\big{(}\hat{{\boldsymbol{S}}}_{\epsilon}^{(k)}\\!+\\!{\boldsymbol{W}}_{\epsilon}^{(k)}\big{)}^{T},\big{(}\hat{{\boldsymbol{S}}}_{\epsilon}^{(k)}\big{)}^{T}\right]^{T}\\!=\\!\left[\big{(}{\boldsymbol{S}}_{\epsilon}^{(k)}\big{)}^{T},\big{(}\hat{{\boldsymbol{S}}}_{\epsilon}^{(k)}\big{)}^{T}\right]^{T}.$
Now, using the extended CMT [37, Thm. 7.24] (extended continuous mapping: let
$\mathbb{D}_{n}\subset\mathbb{D}$ and $g_{n}:\mathbb{D}_{n}\mapsto\mathbb{E}$
satisfy the following: if $x_{n}\rightarrow x$ with $x_{n}\in\mathbb{D}_{n}$
for all $n\geq 1$ and $x\in\mathbb{D}_{0}$, then $g_{n}(x_{n})\rightarrow
g(x)$, where $\mathbb{D}_{0}\subset\mathbb{D}$ and
$g:\mathbb{D}_{0}\mapsto\mathbb{E}$; let $X_{n}$ be maps taking values in
$\mathbb{D}_{n}$, and let $X$ be Borel measurable and separable; then (i)
$X_{n}\rightsquigarrow X$ implies $g_{n}(X_{n})\rightsquigarrow g(X)$, (ii)
$X_{n}\mathop{\rightarrow}\limits^{P}X$ implies
$g_{n}(X_{n})\mathop{\rightarrow}\limits^{P}g(X)$, and (iii)
$X_{n}\mathop{\rightarrow}\limits^{as*}X$ implies
$g_{n}(X_{n})\mathop{\rightarrow}\limits^{as*}g(X)$), we will show that
$f_{k,n}\big{(}{\boldsymbol{S}}_{n}^{(k)},\hat{{\boldsymbol{S}}}_{n}^{(k)}\big{)}\mathop{\longrightarrow}\limits^{(dist.)}_{n\rightarrow\infty}f_{k,\epsilon}\big{(}{\boldsymbol{S}}_{\epsilon}^{(k)},\hat{{\boldsymbol{S}}}_{\epsilon}^{(k)}\big{)}$
for each $k\in\mathcal{N}$, following the same approach as the proof of [17,
Lemma B.2] (that lemma states: consider a sequence of $k\times 1$ zero-mean
Gaussian random vectors with independent entries
$\\{{{\boldsymbol{X}}}_{n}^{(k)}\\}_{n\in\mathcal{N}}$ and a zero-mean
Gaussian random vector with independent entries ${{\boldsymbol{X}}}^{(k)}$,
such that
${{\boldsymbol{X}}}_{n}^{(k)}\mathop{\longrightarrow}\limits^{(dist.)}_{n\rightarrow\infty}{{\boldsymbol{X}}}^{(k)}$
uniformly with respect to $k\in\mathcal{N}$; then the RVs
$\tilde{Z}_{k,n}^{\prime}\left(F_{{{\boldsymbol{X}}}_{n}}\right)$ and
$Z_{k}^{\prime}\left(F_{{{\boldsymbol{X}}}}\right)$ defined in [17, Eqn.
(B.1)] satisfy
$\tilde{Z}_{k,n}^{\prime}\left(F_{{{\boldsymbol{X}}}_{n}}\right)\mathop{\longrightarrow}\limits^{(dist.)}_{n\rightarrow\infty}Z_{k}^{\prime}\left(F_{{{\boldsymbol{X}}}}\right)$
uniformly over $k\in\mathcal{N}$). Then, since
$\tilde{Z}_{k,n}^{\prime}\left(F_{S_{n},\hat{S}_{n}}^{\rm
opt}\right)=\frac{1}{k}\log
f_{k,n}\left({\boldsymbol{S}}_{n}^{(k)},\hat{{\boldsymbol{S}}}_{n}^{(k)}\right)$
and
$Z_{k}^{\prime}\left(F_{S_{\epsilon},\hat{S}_{\epsilon}}\right)=\frac{1}{k}\log
f_{k,\epsilon}\left({\boldsymbol{S}}_{\epsilon}^{(k)},\hat{{\boldsymbol{S}}}_{\epsilon}^{(k)}\right)$,
we conclude that $\tilde{Z}_{k,n}^{\prime}\left(F_{S_{n},\hat{S}_{n}}^{\rm
opt}\right)\mathop{\longrightarrow}\limits^{(dist.)}_{n\rightarrow\infty}Z_{k}^{\prime}\left(F_{S_{\epsilon},\hat{S}_{\epsilon}}\right)$,
where it also follows from the proof of [17, Lemma B.2] that the convergence
is uniform in $k\in\mathcal{N}$. Specifically, to prove that
$f_{k,n}\left({\boldsymbol{S}}_{n}^{(k)},\hat{{\boldsymbol{S}}}_{n}^{(k)}\right)\mathop{\longrightarrow}\limits^{(dist.)}_{n\rightarrow\infty}f_{k,\epsilon}\left({\boldsymbol{S}}_{\epsilon}^{(k)},\hat{{\boldsymbol{S}}}_{\epsilon}^{(k)}\right)$,
we will show that the following two properties hold:
1. P1
The distribution of
$\left[\left({\boldsymbol{S}}_{\epsilon}^{(k)}\right)^{T},\left(\hat{{\boldsymbol{S}}}_{\epsilon}^{(k)}\right)^{T}\right]^{T}$
is separable (by [37, Pg. 101], an RV $X\in\mathcal{X}$ is separable if
$\forall\eta>0$ there exists a compact set
$\mathcal{K}(\eta)\subset\mathcal{X}$ such that
$\Pr\left(X\in\mathcal{K}(\eta)\right)\geq 1-\eta$).
2. P2
For any convergent sequence
$\left(\left({\boldsymbol{s}}_{n}^{(k)}\right)^{T},\left(\hat{{\boldsymbol{s}}}_{n}^{(k)}\right)^{T}\right)^{T}\in\mathcal{R}^{2k}$
such that
$\mathop{\lim}\limits_{n\rightarrow\infty}\left({\boldsymbol{{\boldsymbol{s}}}}_{n}^{(k)},\hat{{\boldsymbol{s}}}_{n}^{(k)}\right)=\left({\boldsymbol{{\boldsymbol{s}}}}_{\epsilon}^{(k)},\hat{{\boldsymbol{s}}}_{\epsilon}^{(k)}\right)$,
then
$\mathop{\lim}\limits_{n\rightarrow\infty}f_{k,n}\left({\boldsymbol{s}}_{n}^{(k)},\hat{{\boldsymbol{s}}}_{n}^{(k)}\right)=f_{k,\epsilon}\left({\boldsymbol{s}}_{\epsilon}^{(k)},\hat{{\boldsymbol{s}}}_{\epsilon}^{(k)}\right)$.
To prove property P1, we show that
${U}^{(k)}\triangleq\left[\big{(}{\boldsymbol{S}}_{\epsilon}^{(k)}\big{)}^{T},\big{(}\hat{{\boldsymbol{S}}}_{\epsilon}^{(k)}\big{)}^{T}\right]^{T}$
is separable [37, Pg. 101] (we point out that the dimension notation is
misused here, as $U^{(k)}$ denotes a $2k$-dimensional vector; $k$ refers to
the dimension of the compression problem and not of the vector), i.e., we show
that $\forall\eta>0$ there exists $\beta>0$ such that
$\Pr\left(\|U^{(k)}\|^{2}>\beta\right)<\eta$. To that aim, recall first that
by Markov’s inequality [29, Pg. 114], it follows that
$\Pr\left(\left\|U^{(k)}\right\|^{2}>\beta\right)<\frac{1}{\beta}\mathds{E}\left\\{\left\|U^{(k)}\right\|^{2}\right\\}$.
For the asynchronously sampled source process, we note that
$\sigma^{2}_{S_{\epsilon}}[i]\triangleq\mathds{E}\left\\{\left(S_{\epsilon}[i]\right)^{2}\right\\}\in[0,\mathop{\max}\limits_{0\leq
t\leq T_{\rm ps}}\sigma^{2}_{S_{\rm c}}(t)]$. By the independence of
${\boldsymbol{W}}_{\epsilon}^{(k)}$ and
$\hat{{\boldsymbol{S}}}_{\epsilon}^{(k)}$, and by the fact that their mean is
zero, we have, from (D.14) that
$\mathds{E}\left\\{\left(S_{\epsilon}[i]\right)^{2}\right\\}=\mathds{E}\left\\{\left(\hat{S}_{\epsilon}[i]\right)^{2}\right\\}+\mathds{E}\left\\{\left(W_{\epsilon}[i]\right)^{2}\right\\}\leq\mathop{\max}\limits_{0\leq
t\leq T_{\rm ps}}\sigma^{2}_{S_{\rm c}}(t)$; hence
$\mathds{E}\left\\{\left(\hat{S}_{\epsilon}[i]\right)^{2}\right\\}\leq\mathop{\max}\limits_{0\leq
t\leq T_{\rm ps}}\sigma^{2}_{S_{\rm c}}(t)$, and
$\mathds{E}\left\\{\left(W_{\epsilon}[i]\right)^{2}\right\\}\leq\mathop{\max}\limits_{0\leq
t\leq T_{\rm ps}}\sigma^{2}_{S_{\rm c}}(t)$. This further implies that
$\mathds{E}\left\\{\left\|U^{(k)}\right\|^{2}\right\\}=\mathds{E}\left\\{\left\|\left[\big{(}{\boldsymbol{S}}_{\epsilon}^{(k)}\big{)}^{T},\big{(}\hat{{\boldsymbol{S}}}_{\epsilon}^{(k)}\big{)}^{T}\right]^{T}\right\|^{2}\right\\}\leq
2\cdot k\cdot\mathop{\max}\limits_{0\leq t\leq T_{\rm ps}}\sigma^{2}_{S_{\rm
c}}(t)$ ; therefore for each
$\beta>\frac{1}{\eta}\mathds{E}\left\\{\left\|U^{(k)}\right\|^{2}\right\\}$ we
have that $\Pr\left(\left\|U^{(k)}\right\|^{2}>\beta\right)<\eta$, and thus
$U^{(k)}$ is separable.
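The Markov-inequality step above admits a quick numerical check; the per-entry variances below are hypothetical stand-ins for the bounded variances of $U^{(k)}$:

```python
import numpy as np

rng = np.random.default_rng(1)

# Numeric check of the separability bound: Pr(||U||^2 > beta) <= E{||U||^2}/beta,
# with E{||U||^2} <= 2*k*max_t sigma2_{S_c}(t) (assumed bound sig_max = 3).
k, sig_max, trials = 8, 3.0, 200_000
variances = rng.uniform(0.5, sig_max, size=2 * k)     # hypothetical entry variances
U = rng.normal(0.0, np.sqrt(variances), size=(trials, 2 * k))
norms2 = np.sum(U**2, axis=1)

eta = 0.05
beta = 2 * k * sig_max / eta                          # any beta > E{||U||^2}/eta works
print(np.mean(norms2 > beta), "<", eta)               # empirical tail is far below eta
```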
By the assumption in this lemma it follows that $\forall\eta>0$ there exists
$n_{0}(\eta)>0$ such that for all $n>n_{0}(\eta)$ we have that
$\forall{\boldsymbol{w}}^{(k)}\in\mathcal{R}^{k}$,
$\big{|}p_{{\boldsymbol{W}}_{n}^{(k)}}\left({\boldsymbol{w}}^{(k)}\right)-p_{{\boldsymbol{W}}_{\epsilon}^{(k)}}\left({\boldsymbol{w}}^{(k)}\right)\big{|}<\eta$,
for all sufficiently large $k\in\mathcal{N}$. Consequently, for all
$\left(\left({\boldsymbol{s}}^{(k)}\right)^{T},\left(\hat{{\boldsymbol{s}}}^{(k)}\right)^{T}\right)^{T}\in\mathcal{R}^{2k}$,
$n>n_{0}(\eta)$ and a sufficiently large $k\in\mathcal{N}$, it follows from
(D.8) that
$\displaystyle\left|p_{{\boldsymbol{S}}_{n}^{(k)}|\hat{{\boldsymbol{S}}}_{n}^{(k)}}\left({\boldsymbol{s}}^{(k)}\big{|}\hat{{\boldsymbol{s}}}^{(k)}\right)-p_{{\boldsymbol{S}}_{\epsilon}^{(k)}|\hat{{\boldsymbol{S}}}_{\epsilon}^{(k)}}\left({\boldsymbol{s}}^{(k)}\big{|}\hat{{\boldsymbol{s}}}^{(k)}\right)\right|$
$\displaystyle=\left|p_{{\boldsymbol{W}}_{n}^{(k)}}\left({\boldsymbol{s}}^{(k)}-\hat{{\boldsymbol{s}}}^{(k)}\right)-p_{{\boldsymbol{W}}_{\epsilon}^{(k)}}\left({\boldsymbol{s}}^{(k)}-\hat{{\boldsymbol{s}}}^{(k)}\right)\right|<\eta.$
(D.15)
Following the continuity of
$p_{{\boldsymbol{S}}_{n}^{(k)}|\hat{{\boldsymbol{S}}}_{n}^{(k)}}\left({\boldsymbol{s}}^{(k)}\big{|}\hat{{\boldsymbol{s}}}^{(k)}\right)$
and of $p_{{\boldsymbol{S}}_{n}^{(k)}}({\boldsymbol{s}}^{(k)})$,
$f_{k,n}\left({\boldsymbol{s}}^{(k)},\hat{{\boldsymbol{s}}}^{(k)}\right)$ is
also continuous [31, Thm. 4.9] (let $f$ and $g$ be complex continuous
functions on a metric space $X$; then $f+g$, $fg$ and $f/g$ are continuous on
$X$, where in the last case we must assume that $g(x)\neq 0$ for all $x\in
X$); hence, when
$\mathop{\lim}\limits_{n\rightarrow\infty}\big{(}{\boldsymbol{s}}_{n}^{(k)},\hat{{\boldsymbol{s}}}_{n}^{(k)}\big{)}=\big{(}{\boldsymbol{s}}^{(k)},\hat{{\boldsymbol{s}}}^{(k)}\big{)}$,
then
$\mathop{\lim}\limits_{n\rightarrow\infty}f_{k,n}\left({\boldsymbol{s}}_{n}^{(k)},\hat{{\boldsymbol{s}}}_{n}^{(k)}\right)=f_{k,\epsilon}\left({\boldsymbol{s}}^{(k)},\hat{{\boldsymbol{s}}}^{(k)}\right)$.
This satisfies condition P2 for the extended CMT; therefore, by the extended
CMT, we have that
$f_{k,n}\left({\boldsymbol{S}}_{n}^{(k)},\hat{{\boldsymbol{S}}}_{n}^{(k)}\right)\mathop{\longrightarrow}\limits^{(dist.)}_{n\rightarrow\infty}f_{k,\epsilon}\left({\boldsymbol{S}}_{\epsilon}^{(k)},\hat{{\boldsymbol{S}}}_{\epsilon}^{(k)}\right)$.
Since the RVs $\tilde{Z}_{k,n}^{\prime}\left(F_{S_{n},\hat{S}_{n}}^{\rm
opt}\right)$ and
$Z_{k,\epsilon}^{\prime}\left(F_{S_{\epsilon},\hat{S}_{\epsilon}}\right)$,
defined in (D.1), are also continuous mappings of
$f_{k,n}\left({\boldsymbol{S}}_{n}^{(k)},\hat{{\boldsymbol{S}}}_{n}^{(k)}\right)$
and of
$f_{k,\epsilon}\left({\boldsymbol{S}}_{\epsilon}^{(k)},\hat{{\boldsymbol{S}}}_{\epsilon}^{(k)}\right)$,
respectively, it follows from the CMT [37, Thm. 7.7] that
$\tilde{Z}_{k,n}^{\prime}\left(F_{S_{n},\hat{S}_{n}}^{\rm
opt}\right)\mathop{\longrightarrow}\limits^{(dist.)}_{n\rightarrow\infty}Z_{k,\epsilon}^{\prime}\left(F_{S_{\epsilon},\hat{S}_{\epsilon}}\right)$.
Finally, to prove that the convergence
$\tilde{Z}_{k,n}^{\prime}\left(F_{S_{n},\hat{S}_{n}}^{\rm
opt}\right)\mathop{\longrightarrow}\limits^{(dist.)}_{n\rightarrow\infty}Z_{k,\epsilon}^{\prime}\left(F_{S_{\epsilon},\hat{S}_{\epsilon}}\right)$
is uniform in $k\in\mathcal{N}$, we note that
$\hat{{\boldsymbol{S}}}_{n}^{(k)}$ and
$\hat{{\boldsymbol{S}}}_{\epsilon}^{(k)}$ have independent entries, and that
the backward channels (21) and (D.14) are memoryless. Hence, it follows from
the proof of [17, Lemma B.2] that the characteristic function of the RV
$k\cdot\tilde{Z}_{k,n}^{\prime}\left(F_{S_{n},\hat{S}_{n}}^{\rm opt}\right)$
which is denoted by
$\Phi_{k\cdot\tilde{Z}_{k,n}}(\alpha)\triangleq\mathds{E}\left\\{e^{j\cdot\alpha\cdot
k\cdot\tilde{Z}_{k,n}}\right\\}$ converges to the characteristic function of
$k\cdot
Z_{k,\epsilon}^{\prime}\left(F_{S_{\epsilon},\hat{S}_{\epsilon}}\right)$,
denoted by $\Phi_{k\cdot Z_{k,\epsilon}}(\alpha)$, uniformly over
$k\in\mathcal{N}$. Thus, for all sufficiently small $\eta>0$, $\exists
k_{0}\in\mathcal{N},n_{0}(\eta,k_{0})\in\mathcal{N}$ such that $\forall
n>n_{0}(\eta,k_{0})$, and $\forall k>k_{0}$
$\big{|}\Phi_{k\cdot\tilde{Z}_{k,n}}(\alpha)-\Phi_{k\cdot
Z_{k,\epsilon}}(\alpha)\big{|}<\eta,\quad\forall\alpha\in\mathcal{R}.$
(D.16)
Hence, following Lévy’s convergence theorem [38, Thm. 18.1] (let $(F_{n})$ be
a sequence of distribution functions and let $\phi_{n}$ denote the
characteristic function of $F_{n}$; suppose that
$g(\theta)\coloneqq\lim\phi_{n}(\theta)$ exists for all
$\theta\in\mathcal{R}$ and that $g(\cdot)$ is continuous at $0$; then
$g=\phi_{F}$ for some distribution function $F$, and
$F_{n}\mathop{\longrightarrow}\limits^{(dist.)}_{n\rightarrow\infty}F$), we
conclude that $k\cdot\tilde{Z}_{k,n}^{\prime}\left(F_{S_{n},\hat{S}_{n}}^{\rm
opt}\right)\mathop{\longrightarrow}\limits^{(dist.)}_{n\rightarrow\infty}k\cdot
Z_{k,\epsilon}^{\prime}\left(F_{S_{\epsilon},\hat{S}_{\epsilon}}\right)$ and
that this convergence is uniform for sufficiently large $k$. Finally, since
the CDFs of $k\cdot\tilde{Z}_{k,n}^{\prime}\left(F_{S_{n},\hat{S}_{n}}^{\rm
opt}\right)$ and $k\cdot
Z_{k,\epsilon}^{\prime}\left(F_{S_{\epsilon},\hat{S}_{\epsilon}}\right)$
obtained at $\alpha\in\mathcal{R}$ are equivalent to the CDFs of
$\tilde{Z}_{k,n}^{\prime}\left(F_{S_{n},\hat{S}_{n}}^{\rm opt}\right)$ and
$Z_{k,\epsilon}^{\prime}\left(F_{S_{\epsilon},\hat{S}_{\epsilon}}\right)$
obtained at $\frac{\alpha}{k}\in\mathcal{R}$ respectively, we can conclude
that $\tilde{Z}_{k,n}^{\prime}\left(F_{S_{n},\hat{S}_{n}}^{\rm
opt}\right)\mathop{\longrightarrow}\limits^{(dist.)}_{n\rightarrow\infty}Z_{k,\epsilon}^{\prime}\left(F_{S_{\epsilon},\hat{S}_{\epsilon}}\right)$,
uniformly in $k\in\mathcal{N}$. ∎
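The uniform characteristic-function convergence in (D.16) mirrors a simple scalar Gaussian fact, sketched below with assumed variances:

```python
import numpy as np

# Scalar analogue of (D.16) with assumed variances: the characteristic
# functions of N(0, v_n) converge uniformly in alpha to that of N(0, v).
alpha = np.linspace(-50.0, 50.0, 100_001)
v = 1.0
for v_n in (1.2, 1.02, 1.002):
    gap = np.max(np.abs(np.exp(-v_n * alpha**2 / 2) - np.exp(-v * alpha**2 / 2)))
    print(v_n, gap)   # the sup-gap over alpha shrinks, as Levy's theorem requires
```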
The following convergence lemma corresponds to [17, Lemma B.3]:
###### Lemma D.3.
Let $n\in\mathcal{N}$ be given. Every subsequence of
$\left\\{\tilde{Z}_{k,n}^{\prime}\left(F_{\hat{S}_{n},S_{n}}^{\rm
opt}\right)\right\\}_{k\in\mathcal{N}}$, indexed by $k_{l}$, converges in
distribution, in the limit as $l\rightarrow\infty$, to a finite deterministic
scalar.
###### Proof.
Recall that the RVs $\tilde{Z}_{k,n}^{\prime}\left(F_{\hat{S}_{n},S_{n}}^{\rm
opt}\right)$ represent the mutual information density rate between $k$ samples
of the source process $S_{n}[i]$ and the corresponding samples of its
reproduction process $\hat{S}_{n}[i]$, where these processes are jointly
distributed via the Gaussian distribution measure $F_{\hat{S}_{n},S_{n}}^{\rm
opt}$. Further, recall that the relationship between the source signal and the
reproduction process which achieves the RDF can be described via the backward
channel in (21) for a Gaussian source. The channel (21) is a memoryless
additive WSCS Gaussian noise channel with period $p_{n}$, thus, by [21], it
can be equivalently represented as a $p_{n}\times 1$ multivariate memoryless
additive stationary Gaussian noise channel, which is an information stable
channel [39, Sec. 1.5] (information stable channels can be described as having
the property that the input that maximizes mutual information and its
corresponding output behave ergodically [15]; information stability was
further defined in [16, Sec. IV] by applying the fact that ergodic theory is
consequential to the law of large numbers; by [14, Eq. (3.9.2)], a general
source $V=\left\\{V^{n}\right\\}_{n=1}^{\infty}$ is said to be
information-stable if
$\frac{\frac{1}{n}\log\frac{1}{P_{v^{n}}\left(V^{n}\right)}}{H_{n}\left(V^{n}\right)}\rightarrow
1$, where $H_{n}\left(V^{n}\right)=\frac{1}{n}H\left(V^{n}\right)$ and
$H\left(V^{n}\right)$ stands for the entropy of $V^{n}$). For such channels in
which the source and its reproduction obey the RDF-achieving joint
distribution $F_{S_{n},\hat{S}_{n}}^{\rm opt}$, the mutual information density
rate converges as $k$ increases almost surely to the finite and deterministic
mutual information rate [14, Thm. 5.9.1] (this theorem holds for a subadditive
distortion measure [14, Eqn. (5.9.2)]; the MSE distortion measure used in this
work is additive, and thus also subadditive).
Since almost sure convergence implies convergence in distribution [37, Lemma
7.21], this proves the lemma. ∎
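For a memoryless stationary (rather than WSCS) Gaussian pair, the concentration asserted in Lemma D.3 is easy to observe numerically; the variances below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Sketch of Lemma D.3 for an i.i.d. Gaussian pair with assumed variances:
# the mutual information density rate (D.1) concentrates, as k grows, around
# the deterministic mutual information rate 0.5*log(var_s/var_w) [nats].
var_hat, var_w = 1.5, 0.5
var_s = var_hat + var_w
for k in (10, 100, 10_000):
    s_hat = rng.normal(0.0, np.sqrt(var_hat), k)
    s = s_hat + rng.normal(0.0, np.sqrt(var_w), k)    # backward channel
    log_cond = -0.5 * np.log(2*np.pi*var_w) - (s - s_hat)**2 / (2*var_w)
    log_marg = -0.5 * np.log(2*np.pi*var_s) - s**2 / (2*var_s)
    print(k, np.mean(log_cond - log_marg), 0.5 * np.log(var_s / var_w))
```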
### D-C Showing that
${R_{\epsilon}(D)=\mathop{\lim\sup}\limits_{n\rightarrow\infty}R_{n}(D)}$
This section completes the proof to Theorem 4. We note from (14) that the RDF
for the source process $S_{n}[i]$ (for fixed length coding and MSE distortion
measure) is given by:
$R_{n}(D)=\mathop{\inf}\limits_{F_{\hat{S}_{n},S_{n}}:\bar{d}_{S}\left(F_{\hat{S}_{n},S_{n}}\right)\leq
D}\left\\{{\rm
p-}\mathop{\lim\sup}\limits_{k\rightarrow\infty}\tilde{Z}_{k,n}^{\prime}\left(F_{\hat{S}_{n},S_{n}}\right)\right\\},$ (D.17)
where
$\bar{d}_{S}\left(F_{\hat{S}_{n},S_{n}}\right)=\mathop{\lim\sup}\limits_{k\rightarrow\infty}\frac{1}{k}\mathds{E}\big{\\{}\big{\|}{\boldsymbol{S}}_{n}^{(k)}-\hat{{\boldsymbol{S}}}_{n}^{(k)}\big{\|}^{2}\big{\\}}$.
We now state the following lemma characterizing the asymptotic statistics of
the optimal reconstruction $\hat{{\boldsymbol{S}}}_{n}^{(k)}$ process and the
respective noise process ${\boldsymbol{W}}_{n}^{(k)}$ used in the backward
channel relationship (21):
###### Lemma D.4.
Consider the RDF-achieving distribution with distortion $D$ for compression of
a vector Gaussian source process ${\boldsymbol{S}}_{n}^{(k)}$ characterized by
the backward channel (21). Then, there exists a subsequence in the index
$n\in\mathcal{N}$ denoted $n_{1}<n_{2}<\ldots$, such that for the RDF-
achieving distribution, the sequences of reproduction vectors
$\\{\hat{{\boldsymbol{S}}}_{n_{l}}^{(k)}\\}_{l\in\mathcal{N}}$ and backward
channel noise vectors $\\{{\boldsymbol{W}}_{n_{l}}^{(k)}\\}_{l\in\mathcal{N}}$
satisfy that
$\mathop{\lim}\limits_{l\rightarrow\infty}p_{\hat{{\boldsymbol{S}}}_{n_{l}}^{(k)}}\left(\hat{{\boldsymbol{s}}}^{(k)}\right)=p_{\hat{{\boldsymbol{S}}}_{\epsilon}^{(k)}}\left(\hat{{\boldsymbol{s}}}^{(k)}\right)$
uniformly in ${\boldsymbol{\hat{s}}}^{(k)}\in\mathcal{R}^{k}$ and uniformly
with respect to $k\in\mathcal{N}$, as well as
$\mathop{\lim}\limits_{l\rightarrow\infty}p_{{\boldsymbol{W}}_{n_{l}}^{(k)}}\left({\boldsymbol{w}}^{(k)}\right)=p_{{\boldsymbol{W}}_{\epsilon}^{(k)}}\left({\boldsymbol{w}}^{(k)}\right)$
uniformly in ${\boldsymbol{w}}^{(k)}\in\mathcal{R}^{k}$ and uniformly with
respect to $k\in\mathcal{N}$, where
$p_{\hat{{\boldsymbol{S}}}_{\epsilon}^{(k)}}\left(\hat{{\boldsymbol{s}}}^{(k)}\right)$
and $p_{{\boldsymbol{W}}_{\epsilon}^{(k)}}\left({\boldsymbol{w}}^{(k)}\right)$
are Gaussian PDFs.
###### Proof.
Recall from the analysis of the RDF for WSCS processes that for each
$n\in\mathcal{N}$, the marginal distributions of the RDF-achieving
reproduction process $\hat{S}_{n}[i]$ and the backward channel noise
$W_{n}[i]$ are Gaussian, memoryless, zero-mean, and with variances
$\sigma_{\hat{S}_{n}}^{2}[i]\triangleq\mathds{E}\left\\{\big{(}\hat{S}_{n}[i]\big{)}^{2}\right\\}$
and
$\mathds{E}\left\\{\big{(}W_{n}[i]\big{)}^{2}\right\\}=\sigma_{{S}_{n}}^{2}[i]-\sigma_{\hat{S}_{n}}^{2}[i],$
(D.18)
respectively. Consequently, the sequences of reproduction vectors
$\\{\hat{{\boldsymbol{S}}}_{n}^{(k)}\\}_{n\in\mathcal{N}}$ and backward
channel noise vectors $\\{{\boldsymbol{W}}_{n}^{(k)}\\}_{n\in\mathcal{N}}$ are
zero-mean Gaussian with independent entries for each $k\in\mathcal{N}$. Since
$\sigma_{{S}_{n}}^{2}[i]\leq\mathop{\max}\limits_{t\in\mathcal{R}}\sigma_{S_{c}}^{2}(t)$,
then, from (D.18), it follows that $\sigma^{2}_{\hat{S}_{n}}[i]$ is also
bounded in the interval
$[0,\mathop{\max}\limits_{t\in\mathcal{R}}\sigma_{S_{c}}^{2}(t)]$ for all
$n\in\mathcal{N}$. Therefore, by the Bolzano-Weierstrass theorem [31, Thm.
2.42] (every bounded infinite subset of $\mathcal{R}^{k}$ has a limit point in
$\mathcal{R}^{k}$), $\sigma^{2}_{\hat{S}_{n}}[i]$ has a convergent
subsequence, and we let $n_{1}<n_{2}<\ldots$ denote the indexes of this
convergent subsequence and let the limit of the subsequence be denoted by
$\sigma_{\hat{S}_{\epsilon}}^{2}[i]$. From the CMT, as applied in the proof of
[17, Lemma B.1], the convergence
$\sigma_{\hat{S}_{n_{l}}}^{2}[i]\mathop{\longrightarrow}\limits_{l\rightarrow\infty}\sigma_{\hat{S}_{\epsilon}}^{2}[i]$
for each $i\in\mathcal{N}$ implies that the subsequence of PDFs
$p_{\hat{{\boldsymbol{S}}}_{n_{l}}^{(k)}}\left(\hat{{\boldsymbol{s}}}^{(k)}\right)$
corresponding to the memoryless Gaussian random vectors
$\\{\hat{{\boldsymbol{S}}}_{n_{l}}^{(k)}\\}_{l\in\mathcal{N}}$ converges as
$l\rightarrow\infty$ to a Gaussian PDF which we denote by
$p_{\hat{{\boldsymbol{S}}}_{\epsilon}^{(k)}}\left(\hat{{\boldsymbol{s}}}^{(k)}\right)$,
and the convergence of
$p_{\hat{{\boldsymbol{S}}}_{n_{l}}^{(k)}}\left(\hat{{\boldsymbol{s}}}^{(k)}\right)$
is uniform in $\hat{{\boldsymbol{s}}}^{(k)}$ for any fixed $k\in\mathcal{N}$. By
Remark 1, it holds that $W_{n}[i]$ is a memoryless stationary process with
variance $\mathds{E}\left\\{\left(W_{n}[i]\right)^{2}\right\\}=D$ and by Eq.
(D.18), $\sigma^{2}_{\hat{S}_{n}}[i]=\sigma^{2}_{S_{n}}[i]-D$. Hence by
Assumption D.1 and by the proof of [17, Lemma B.1], it follows that for a
fixed $\eta>0$ and $k_{0}\in\mathcal{N}$, $\exists n_{0}(\eta,k_{0})$ such
that for all $n>n_{0}(\eta,k_{0})$ and for all sufficiently large $k$, it
holds that
$\big{|}p_{\hat{{\boldsymbol{S}}}_{n_{l}}^{(k)}}\big{(}\hat{{\boldsymbol{s}}}^{(k)}\big{)}-p_{\hat{{\boldsymbol{S}}}_{\epsilon}^{(k)}}\big{(}\hat{{\boldsymbol{s}}}^{(k)}\big{)}\big{|}<\eta$
for every $\hat{{\boldsymbol{s}}}^{(k)}\in\mathcal{R}^{k}$. Since
$n_{0}(\eta,k_{0})$ does not depend on $k$ (only on the fixed $k_{0}$), this
implies that the convergence is uniform with respect to $k\in\mathcal{N}$.
The fact that $W_{n}[i]$ is a zero-mean stationary Gaussian process with
variance $D$ for each $n\in\mathcal{N}$, implies that the sequence of PDFs
$p_{{\boldsymbol{W}}_{n}^{(k)}}\left({\boldsymbol{w}}^{(k)}\right)$ converges
as $n\rightarrow\infty$ to a Gaussian PDF which we denote by
$p_{{\boldsymbol{W}}^{(k)}}\left({\boldsymbol{w}}^{(k)}\right)$, hence its
subsequence with indices $n_{1}<n_{2}<\ldots$ also converges to
$p_{{\boldsymbol{W}}^{(k)}}\left({\boldsymbol{w}}^{(k)}\right)$. Since
$D>\frac{1}{2\pi}$ by Assumption D.1 combined with the proof of [17, Lemma
B.1] it follows that this convergence is uniform in ${\boldsymbol{w}}^{(k)}$
and in $k\in\mathcal{N}$ to
$p_{{\boldsymbol{W}}_{\epsilon}^{(k)}}\left({\boldsymbol{w}}^{(k)}\right)$.
Following the proof of Corollary D.1, it holds that the subsequences of the
memoryless Gaussian random vectors
$\left\\{\hat{{\boldsymbol{S}}}_{n_{l}}^{(k)}\right\\}$ and
$\left\\{{\boldsymbol{W}}_{n_{l}}^{(k)}\right\\}$ converge in distribution as
$l\rightarrow\infty$ to a Gaussian distribution, and the convergence is
uniform in $k\in\mathcal{N}$. Hence, as shown in Lemma D.2, the joint
distribution satisfies
$\left[\big{(}{\boldsymbol{S}}_{n_{l}}^{(k)}\big{)}^{T},\big{(}\hat{{\boldsymbol{S}}}_{n_{l}}^{(k)}\big{)}^{T}\right]^{T}\mathop{\longrightarrow}\limits^{(dist.)}_{l\rightarrow\infty}\left[\big{(}{\boldsymbol{S}}_{\epsilon}^{(k)}\big{)}^{T},\big{(}\hat{{\boldsymbol{S}}}_{\epsilon}^{(k)}\big{)}^{T}\right]^{T}$,
and the limit distribution is jointly Gaussian. ∎
###### Lemma D.5.
The RDF of $\\{S_{\epsilon}[i]\\}$ satisfies
$R_{\epsilon}(D)\leq\mathop{\lim\sup}\limits_{n\rightarrow\infty}R_{n}(D)$,
and the rate $\mathop{\lim\sup}\limits_{n\rightarrow\infty}R_{n}(D)$ is
achievable for the source $\\{S_{\epsilon}[i]\\}$ with distortion $D$ when the
reproduction process obeys a Gaussian distribution.
###### Proof.
According to Lemma D.4, we note that the sequence of joint distributions
$\\{F_{S_{n},\hat{S}_{n}}^{\rm opt}\\}_{n\in\mathcal{N}}$ has a convergent
subsequence, i.e., there exists a set of indexes $n_{1}<n_{2}<\ldots$ such
that the sequence of distributions with independent entries
$\\{F_{S_{n_{l}},\hat{S}_{n_{l}}}^{\rm opt}\\}_{l\in\mathcal{N}}$ converges in
the limit $l\rightarrow\infty$ to a joint Gaussian distribution
$F_{S_{\epsilon},\hat{S}_{\epsilon}}^{\prime}$ and the convergence is uniform
in $k\in\mathcal{N}$. Hence, this satisfies the condition of Lemma D.2; this
implies that
$\tilde{Z}_{k,n_{l}}^{\prime}\left(F_{S_{n_{l}},\hat{S}_{n_{l}}}^{\rm
opt}\right)\mathop{\longrightarrow}\limits^{(dist.)}_{l\rightarrow\infty}Z_{k}^{\prime}\left(F_{S_{\epsilon},\hat{S}_{\epsilon}}^{\prime}\right)$
uniformly in $k\in\mathcal{N}$. Also, by Lemma D.3 every subsequence of
$\big{\\{}\tilde{Z}_{k,n_{l}}^{\prime}\big{(}F_{S_{n_{l}},\hat{S}_{n_{l}}}^{\rm
opt}\big{)}\big{\\}}_{l\in\mathcal{N}}$ converges in distribution to a finite
deterministic scalar as $k\rightarrow\infty$. Therefore, by Theorem 3 it holds
that
$\displaystyle\mathop{\lim}\limits_{l\rightarrow\infty}\left({\rm
p-}\mathop{\lim\sup}\limits_{k\rightarrow\infty}\tilde{Z}_{k,n_{l}}^{\prime}\left(F_{S_{n_{l}},\hat{S}_{n_{l}}}^{\rm
opt}\right)\right)$ $\displaystyle={\rm
p-}\mathop{\lim\sup}\limits_{k\rightarrow\infty}Z_{k,\epsilon}^{\prime}\left(F_{S_{\epsilon},\hat{S}_{\epsilon}}^{\prime}\right)$
$\displaystyle\geq\mathop{\inf}\limits_{F_{S_{\epsilon},\hat{S}_{\epsilon}}}\left\\{{\rm
p-}\mathop{\lim\sup}\limits_{k\rightarrow\infty}Z_{k,\epsilon}^{\prime}\left(F_{S_{\epsilon},\hat{S}_{\epsilon}}\right)\right\\}=R_{\epsilon}(D).$
(D.19)
From (14) we have that $R_{n}(D)={\rm
p-}\mathop{\lim\sup}\limits_{k\rightarrow\infty}\tilde{Z}_{k,n}^{\prime}\left(F_{S_{n},\hat{S}_{n}}^{\rm
opt}\right)$, then from (D.19), it follows that
$R_{\epsilon}(D)\leq\mathop{\lim}\limits_{l\rightarrow\infty}R_{n_{l}}(D)\stackrel{{\scriptstyle(a)}}{{\leq}}\mathop{\lim\sup}\limits_{n\rightarrow\infty}R_{n}(D),$
(D.20)
where $(a)$ follows since, by [31, Def. 3.16], the limit of every subsequence
is not greater than the limit superior. Noting that
$F_{S_{\epsilon},\hat{S}_{\epsilon}}^{\prime}$ is Gaussian by Lemma D.4
concludes the proof. ∎
###### Lemma D.6.
The RDF of $\\{S_{\epsilon}[i]\\}$ satisfies
$R_{\epsilon}(D)\geq\mathop{\lim\sup}\limits_{n\rightarrow\infty}R_{n}(D)$.
###### Proof.
To prove this lemma, we first show that for a joint distribution
$F_{S_{\epsilon},\hat{S}_{\epsilon}}$ which achieves a rate-distortion pair
$(R_{\epsilon},D)$ it holds that
$R_{\epsilon}\geq\mathds{E}\\{Z_{k,\epsilon}^{\prime}(F_{S_{\epsilon},\hat{S}_{\epsilon}}^{\prime})\\}$:
Recall that $(R_{\epsilon},D)$ is an achievable rate-distortion pair for the
source $\\{S_{\epsilon}[i]\\}$, namely, there exists a sequence of codes
$\\{\mathcal{C}_{l}\\}$ whose rate and distortion approach $(R_{\epsilon},D)$
when applied to $\\{S_{\epsilon}[i]\\}$. This implies that for any $\eta>0$
there exists $l_{0}(\eta)$ such that $\forall l>l_{0}(\eta)$ it holds that
$\mathcal{C}_{l}$ has a code rate $R_{l}=\frac{1}{l}\log_{2}M_{l}$ satisfying
$R_{l}\leq R_{\epsilon}+\eta$ by (3). Recalling Def. 4, the source code maps
${\boldsymbol{S}}_{\epsilon}^{(l)}$ into a discrete index
$J_{l}\in\\{1,2,\ldots,M_{l}\\}$, which is in turn mapped into
$\hat{{\boldsymbol{S}}}_{\epsilon}^{(l)}$, i.e.,
${{\boldsymbol{S}}}_{\epsilon}^{(l)}\mapsto
J_{l}\mapsto\hat{{\boldsymbol{S}}}_{\epsilon}^{(l)}$ form a Markov chain.
Since $J_{l}$ is a discrete random variable taking values in
$\\{1,2,\ldots,M_{l}\\}$, it holds that
$\displaystyle\log_{2}M_{l}$ $\displaystyle\geq H(J_{l})$
$\displaystyle\stackrel{{\scriptstyle(a)}}{{\geq}}I({{\boldsymbol{S}}}_{\epsilon}^{(l)};J_{l})$
$\displaystyle\stackrel{{\scriptstyle(b)}}{{\geq}}I({{\boldsymbol{S}}}_{\epsilon}^{(l)};\hat{{\boldsymbol{S}}}_{\epsilon}^{(l)}),$
(D.21)
where $(a)$ follows since
$I({{\boldsymbol{S}}}_{\epsilon}^{(l)};J_{l})=H(J_{l})-H(J_{l}|{{\boldsymbol{S}}}_{\epsilon}^{(l)})$
which is not larger than $H(J_{l})$ as $J_{l}$ takes discrete values; while
$(b)$ follows from the data processing inequality [5, Ch. 2.8]. Now, (D.21)
implies that for each $l>l_{0}(\eta)$, the reproduction obtained using the
code $\mathcal{C}_{l}$ satisfies
$\frac{1}{l}I({{\boldsymbol{S}}}_{\epsilon}^{(l)};\hat{{\boldsymbol{S}}}_{\epsilon}^{(l)})\leq\frac{1}{l}\log
M_{l}\leq R_{\epsilon}+\eta$. Since for every arbitrarily small
$\eta\rightarrow 0$, this inequality holds for all $l>l_{0}(\eta)$, i.e., for
all sufficiently large $l$, it follows that
$R_{\epsilon}\geq\mathop{\lim\sup}\limits_{l\rightarrow\infty}\frac{1}{l}I({{\boldsymbol{S}}}_{\epsilon}^{(l)};\hat{{\boldsymbol{S}}}_{\epsilon}^{(l)})$.
Hence, replacing the blocklength symbol $l$ with $k$, as
$\frac{1}{k}I({\boldsymbol{S}}_{\epsilon}^{(k)},\hat{{\boldsymbol{S}}}_{\epsilon}^{(k)})=\mathds{E}\\{Z_{k,\epsilon}^{\prime}(F_{S_{\epsilon},\hat{S}_{\epsilon}}^{\prime})\\}$[5,
Eqn. (2.3)], we conclude that
$R_{\epsilon}(D)\geq\mathop{\lim\sup}\limits_{k\rightarrow\infty}\mathds{E}\\{Z_{k,\epsilon}^{\prime}(F_{S_{\epsilon},\hat{S}_{\epsilon}}^{\prime})\\}.$
(D.22)
Next, we consider
$\mathop{\lim\sup}\limits_{k\rightarrow\infty}\mathds{E}\left\\{Z_{k,\epsilon}^{\prime}\left(F_{S_{\epsilon},\hat{S}_{\epsilon}}^{\prime}\right)\right\\}$:
Let
$\left\\{\mathds{E}\left\\{Z_{k_{l},\epsilon}^{\prime}\left(F_{S_{\epsilon},\hat{S}_{\epsilon}}^{\prime}\right)\right\\}\right\\}_{l\in\mathcal{N}}$
be a subsequence of
$\left\\{\mathds{E}\left\\{Z_{k,\epsilon}^{\prime}\left(F_{S_{\epsilon},\hat{S}_{\epsilon}}^{\prime}\right)\right\\}\right\\}_{k\in\mathcal{N}}$,
with indexes $k_{1}<k_{2}<\ldots$ chosen such that its limit equals the limit
superior, i.e.,
$\mathop{\lim}\limits_{l\rightarrow\infty}\mathds{E}\left\\{Z_{k_{l},\epsilon}^{\prime}\left(F_{S_{\epsilon},\hat{S}_{\epsilon}}^{\prime}\right)\right\\}=\mathop{\lim\sup}\limits_{k\rightarrow\infty}\mathds{E}\left\\{Z_{k,\epsilon}^{\prime}\left(F_{S_{\epsilon},\hat{S}_{\epsilon}}^{\prime}\right)\right\\}$.
Since by Lemma D.2, the sequence of non-negative RVs
$\left\\{\tilde{Z}_{k_{l},n}^{\prime}\left(F_{S_{n},\hat{S}_{n}}^{\rm
opt}\right)\right\\}_{n\in\mathcal{N}}$ convergences in distribution to
$Z_{k_{l},\epsilon}^{\prime}\left(F_{S_{\epsilon},\hat{S}_{\epsilon}}^{\prime}\right)$
as $n\rightarrow\infty$ uniformly in $k\in\mathcal{N}$, it follows from [40,
Thm. 3.5] (which states that if $X_{n}$ are uniformly integrable and
$X_{n}\mathop{\longrightarrow}\limits^{(dist.)}_{n\rightarrow\infty}X$, then
$\mathds{E}\\{X_{n}\\}\mathop{\longrightarrow}\limits_{n\rightarrow\infty}\mathds{E}\\{X\\}$)
that
$\mathds{E}\left\\{Z_{k_{l},\epsilon}^{\prime}\left(F_{S_{\epsilon},\hat{S}_{\epsilon}}^{\prime}\right)\right\\}=\mathop{\lim}\limits_{n\rightarrow\infty}\mathds{E}\left\\{\tilde{Z}_{k_{l},n}^{\prime}\left(F_{S_{n},\hat{S}_{n}}^{\rm
opt}\right)\right\\}$. Also, we define a family of distributions
$\mathcal{F}(D)$ such that
${\mathcal{F}(D)=\\{F_{S,\hat{S}}:\mathsf{D}\left(F_{S,\hat{S}}\right)\leq
D\\}}$. Consequently, Eq. (D.22) can now be written as:
$\displaystyle
R_{\epsilon}(D)\geq\mathop{\lim\sup}\limits_{k\rightarrow\infty}\mathds{E}\left\\{Z_{k,\epsilon}^{\prime}\left(F_{S_{\epsilon},\hat{S}_{\epsilon}}^{\prime}\right)\right\\}$
$\displaystyle=\mathop{\lim}\limits_{l\rightarrow\infty}\mathop{\lim}\limits_{n\rightarrow\infty}\mathds{E}\left\\{\tilde{Z}_{k_{l},n}^{\prime}\left(F_{S_{n},\hat{S}_{n}}^{\rm
opt}\right)\right\\}$
$\displaystyle\stackrel{{\scriptstyle(a)}}{{=}}\mathop{\lim}\limits_{n\rightarrow\infty}\mathop{\lim}\limits_{l\rightarrow\infty}\mathds{E}\left\\{\tilde{Z}_{k_{l},n}^{\prime}\left(F_{S_{n},\hat{S}_{n}}^{\rm
opt}\right)\right\\}$
$\displaystyle\stackrel{{\scriptstyle(b)}}{{=}}\mathop{\lim\sup}\limits_{n\rightarrow\infty}\mathop{\lim}\limits_{l\rightarrow\infty}\mathds{E}\left\\{\tilde{Z}_{k_{l},n}^{\prime}\left(F_{S_{n},\hat{S}_{n}}^{\rm
opt}\right)\right\\}$
$\displaystyle\geq\mathop{\lim\sup}\limits_{n\rightarrow\infty}\mathop{\lim}\limits_{l\rightarrow\infty}\mathop{\inf}\limits_{F_{S,\hat{S}}\in\mathcal{F}(D)}\mathds{E}\left\\{\tilde{Z}_{k_{l},n}^{\prime}\left(F_{S,\hat{S}}\right)\right\\}$
$\displaystyle\stackrel{{\scriptstyle(c)}}{{=}}\mathop{\lim\sup}\limits_{n\rightarrow\infty}\mathop{\lim}\limits_{l\rightarrow\infty}\mathop{\inf}\limits_{F_{S,\hat{S}}\in\mathcal{F}(D)}\frac{1}{k_{l}}I\left(\hat{{\boldsymbol{S}}}_{n}^{(k_{l})};{\boldsymbol{S}}_{n}^{(k_{l})}\right),$
(D.23)
where $(a)$ follows since the convergence
$\tilde{Z}_{k_{l},n}^{\prime}\left(F_{S_{n},\hat{S}_{n}}^{\rm
opt}\right)\mathop{\longrightarrow}\limits^{(dist.)}_{n\rightarrow\infty}Z_{k_{l},\epsilon}^{\prime}\left(F_{S_{\epsilon},\hat{S}_{\epsilon}}^{\prime}\right)$
is uniform with respect to $k_{l}$, thus the limits are interchangeable [31,
Thm. 7.11] (suppose $f_{n}\rightarrow f$ uniformly on a set $E$ in a metric
space, and let $x$ be a limit point of $E$; then
$\mathop{\lim}\limits_{t\rightarrow
x}\mathop{\lim}\limits_{n\rightarrow\infty}f_{n}(t)=\mathop{\lim}\limits_{n\rightarrow\infty}\mathop{\lim}\limits_{t\rightarrow
x}f_{n}(t)$); $(b)$ follows since the limit of the subsequence
$\mathds{E}\left\\{\tilde{Z}_{k_{l},n}^{\prime}\left(F_{S_{n},\hat{S}_{n}}^{\rm
opt}\right)\right\\}$ exists in the index $n$, and is therefore equal to
the limit superior,
$\mathop{\lim\sup}\limits_{n\rightarrow\infty}\mathds{E}\left\\{\tilde{Z}_{k_{l},n}^{\prime}\left(F_{S_{n},\hat{S}_{n}}^{\rm
opt}\right)\right\\}$ [31, Page 57]; and $(c)$ holds since mutual information
is the expected value of the mutual information density rate [5, Eqn. (2.30)].
Finally, we recall that in the proof of Lemma D.3 it was established that the
backward channel for the RDF at the distortion constraint $D$, defined in
(21), is information stable; hence, for such backward channels, we have from
[41, Thm. 1] that the minimum rate is given by
$R_{n}(D)=\mathop{\lim}\limits_{k\rightarrow\infty}\mathop{\inf}\limits_{{F_{S,\hat{S}}\in\mathcal{F}(D)}}\frac{1}{k}I\left(\hat{{\boldsymbol{S}}}_{n}^{(k)};{\boldsymbol{S}}_{n}^{(k)}\right)$
and the limit exists; hence,
$\mathop{\lim}\limits_{k\rightarrow\infty}\mathop{\inf}\limits_{{F_{S,\hat{S}}\in\mathcal{F}(D)}}\frac{1}{k}I\left(\hat{{\boldsymbol{S}}}_{n}^{(k)};{\boldsymbol{S}}_{n}^{(k)}\right)=\mathop{\lim}\limits_{l\rightarrow\infty}\mathop{\inf}\limits_{{F_{S,\hat{S}}\in\mathcal{F}(D)}}\frac{1}{k_{l}}I\left(\hat{{\boldsymbol{S}}}_{n}^{(k_{l})};{\boldsymbol{S}}_{n}^{(k_{l})}\right)$
in the index $k$. Substituting this into equation (D.23) yields the result:
$R_{\epsilon}(D)\geq\mathop{\lim\sup}\limits_{n\rightarrow\infty}R_{n}(D).$
(D.24)
This proves the lemma. ∎
Combining Lemmas D.5 and D.6 proves that
$R_{\epsilon}(D)=\mathop{\lim\sup}\limits_{n\rightarrow\infty}R_{n}(D)$ and
the rate is achievable with Gaussian inputs, completing the proof of the
theorem.
## References
* [1] W. Gardner, W. Brown, and C.-K. Chen, “Spectral correlation of modulated signals: Part II-digital modulation,” _IEEE Transactions on Communications_ , vol. 35, no. 6, pp. 595–601, 1987.
* [2] G. B. Giannakis, “Cyclostationary signal analysis,” _Digital Signal Processing Handbook_ , pp. 17–1, 1998.
* [3] W. A. Gardner, A. Napolitano, and L. Paura, “Cyclostationarity: Half a century of research,” _Signal processing_ , vol. 86, no. 4, pp. 639–697, 2006.
* [4] T. Berger and J. D. Gibson, “Lossy source coding,” _IEEE Transactions on Information Theory_ , vol. 44, no. 6, pp. 2693–2723, 1998.
* [5] T. M. Cover and J. A. Thomas, _Elements of Information Theory_. John Wiley & Sons, 2006.
* [6] J. K. Wolf, A. D. Wyner, and J. Ziv, “Source coding for multiple descriptions,” _The Bell System Technical Journal_ , vol. 59, no. 8, pp. 1417–1426, 1980.
* [7] A. Wyner and J. Ziv, “The rate-distortion function for source coding with side information at the decoder,” _IEEE Transactions on Information Theory_ , vol. 22, no. 1, pp. 1–10, 1976.
* [8] Y. Oohama, “Gaussian multiterminal source coding,” _IEEE Transactions on Information Theory_ , vol. 43, no. 6, pp. 1912–1923, 1997.
* [9] A. Pandya, A. Kansal, G. Pottie, and M. Srivastava, “Lossy source coding of multiple Gaussian sources: m-helper problem,” in _Proceedings of the Information Theory Workshop_. IEEE, Oct. 2004, pp. 34–38.
* [10] R. G. Gallager, _Information Theory and Reliable Communication_. Springer, 1968, vol. 588.
* [11] M. T. Harrison, “The generalized asymptotic equipartition property: Necessary and sufficient conditions,” _IEEE Transactions on Information Theory_ , vol. 54, no. 7, pp. 3211–3216, 2008.
* [12] A. Kipnis, A. J. Goldsmith, and Y. C. Eldar, “The distortion rate function of cyclostationary Gaussian processes,” _IEEE Transactions on Information Theory_ , vol. 64, no. 5, pp. 3810–3824, 2018.
* [13] A. Napolitano, “Cyclostationarity: New trends and applications,” _Signal Processing_ , vol. 120, pp. 385–408, 2016.
* [14] T. Han, _Information-Spectrum Methods in Information Theory_. Springer, 2003, vol. 50.
* [15] S. Verdú and T. Han, “A general formula for channel capacity,” _IEEE Transactions on Information Theory_ , vol. 40, no. 4, pp. 1147–1157, 1994.
* [16] W. Zeng, P. Mitran, and A. Kavcic, “On the information stability of channels with timing errors,” in _Proceedings of the IEEE International Symposium on Information Theory (ISIT)_. IEEE, July 2006, pp. 1885–1889.
  * [17] N. Shlezinger, E. Abakasanga, R. Dabora, and Y. C. Eldar, “The capacity of memoryless channels with sampled cyclostationary Gaussian noise,” _IEEE Transactions on Communications_ , vol. 68, no. 1, pp. 106–121, 2020.
* [18] C. E. Shannon, “Communication in the presence of noise,” _Proceedings of the IEEE_ , vol. 86, no. 2, pp. 447–457, 1998.
* [19] Y. Guan and K. Wang, “Translation properties of time scales and almost periodic functions,” _Mathematical and Computer Modelling_ , vol. 57, no. 5, pp. 1165 – 1174, 2013.
  * [20] N. Shlezinger and R. Dabora, “On the capacity of narrowband PLC channels,” _IEEE Transactions on Communications_ , vol. 63, no. 4, pp. 1191–1201, 2015.
* [21] ——, “The capacity of discrete-time Gaussian MIMO channels with periodic characteristics,” in _Proceedings of the IEEE International Symposium on Information Theory (ISIT)_ , July 2016, pp. 1058–1062.
* [22] N. Shlezinger, D. Zahavi, Y. Murin, and R. Dabora, “The secrecy capacity of Gaussian MIMO channels with finite memory,” _IEEE Transactions on Information Theory_ , vol. 63, no. 3, pp. 1874–1897, 2017.
  * [23] R. W. Heath and G. B. Giannakis, “Exploiting input cyclostationarity for blind channel identification in OFDM systems,” _IEEE Transactions on Signal Processing_ , vol. 47, no. 3, pp. 848–856, 1999.
  * [24] R. Shaked, N. Shlezinger, and R. Dabora, “Joint estimation of carrier frequency offset and channel impulse response for linear periodic channels,” _IEEE Transactions on Communications_ , vol. 66, no. 1, pp. 302–319, 2017.
* [25] N. Shlezinger and R. Dabora, “Frequency-shift filtering for OFDM signal recovery in narrowband power line communications,” _IEEE Transactions on Communications_ , vol. 62, no. 4, pp. 1283–1295, 2014.
* [26] A. El Gamal and Y.-H. Kim, _Network Information Theory_. Cambridge University Press, 2011.
* [27] X. Wu and L.-L. Xie, “On the optimal compressions in the compress-and-forward relay schemes,” _IEEE Transactions on Information Theory_ , vol. 59, no. 5, pp. 2613–2628, 2013.
* [28] G. Zitkovic, “Lecture notes on theory of probability,” _Lecture Notes for M385D (UT)_ , 2015.
* [29] A. Papoulis, _Probability, Random Variables, and Stochastic Processes_. McGraw-Hill, 2002.
* [30] R. Zamir, Y. Kochman, and U. Erez, “Achieving the Gaussian rate–distortion function by prediction,” _IEEE Transactions on Information Theory_ , vol. 54, no. 7, pp. 3354–3364, 2008.
* [31] W. Rudin, _Principles of Mathematical Analysis_ , ser. International series in pure and applied mathematics. McGraw-Hill, 1976.
* [32] J. Dixmier, _General Topology_. Springer Science & Business Media, 2013.
* [33] E. M. Stein and R. Shakarchi, _Real Analysis: Measure Theory, Integration, and Hilbert spaces_. Princeton University Press, 2009.
* [34] A. Kolmogorov, “On the Shannon theory of information transmission in the case of continuous signals,” _IRE Transactions on Information Theory_ , vol. 2, no. 4, pp. 102–108, 1956.
  * [35] H. Scheffé, “A useful convergence theorem for probability distributions,” _The Annals of Mathematical Statistics_ , vol. 18, no. 3, pp. 434–438, 1947.
* [36] P. Bromiley, “Products and convolutions of Gaussian probability density functions,” _Tina-Vision Memo_ , vol. 3, no. 4, p. 1, 2003.
* [37] M. R. Kosorok, _Introduction to Empirical Processes and Semiparametric Inference._ Springer, 2008.
  * [38] D. Williams, _Probability with Martingales_ , ser. Cambridge mathematical textbooks. Cambridge University Press, 1991. [Online]. Available: https://books.google.co.il/books?id=e9saZ0YSi-AC
* [39] R. L. Dobrushin, “A general formulation of the fundamental theorem of Shannon in the theory of information,” _Uspekhi Matematicheskikh Nauk_ , vol. 14, no. 6, pp. 3–104, 1959.
* [40] P. Billingsley, _Convergence of probability measures_. John Wiley & Sons, 2013.
* [41] R. Venkataramanan and S. S. Pradhan, “Source coding with feed-forward: Rate-distortion theorems and error exponents for a general source,” _IEEE Transactions on Information Theory_ , vol. 53, no. 6, pp. 2154–2179, June 2007.
*[CT]: continuous-time
*[WSCS]: wide-sense cyclostationary
*[DT]: discrete-time
*[WSACS]: wide-sense almost cyclostationary
*[RDF]: rate-distortion function
*[IID]: independent and identically distributed
*[AEP]: asymptotic equipartition property
*[CF]: compress-and-forward
*[MSE]: mean squared error
*[CDF]: cumulative distribution function
*[PDF]: probability density function
*[RV]: random variable
*[DCD]: decimated component decomposition
*[PSD]: power spectral density
*[RDFs]: rate-distortion functions
*[RVs]: random variables
*[CDFs]: cumulative distribution functions
*[PDFs]: probability density functions
*[RHS]: right hand side
*[CMT]: continuous mapping theorem
|
2024-09-04T02:54:55.878179 | 2020-02-29T15:02:16 | 2003.00274 | {
"authors": "Ajaz A. Bhat and Vishwanathan Mohan",
"full_text_license": null,
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"provenance": "arxiv-papers-0000.json.gz:25960",
"submitter": "Ajaz Bhat",
"url": "https://arxiv.org/abs/2003.00274"
} | arxiv-papers | # Causal Learning by a Robot with Semantic-Episodic Memory in an Aesop’s Fable
Experiment
Ajaz A. Bhat
School of Psychology
University of East Anglia
Norwich, NR47TJ, UK
<EMAIL_ADDRESS>
&Vishwanathan Mohan
School of Computer Science and Electronic Engineering
University of Essex
Wivenhoe Park Colchester, CO4 3SQ, UK
<EMAIL_ADDRESS>
###### Abstract
Corvids, apes, and children solve “The Crow and The Pitcher” task (from
Aesop’s Fables) indicating a causal understanding of the task. By cumulatively
interacting with different objects, how can cognitive agents abstract the
underlying cause-effect relations to predict affordances of novel objects? We
address this question by re-enacting the Aesop’s Fable task on a robot and
present a) a brain-guided neural model of semantic-episodic memory, with b)
four task-agnostic learning rules that compare expectations from recalled past
episodes with the current scenario to progressively extract the hidden causal
relations. The ensuing robot behaviours illustrate causal learning; and
predictions for novel objects converge to Archimedes’ principle, independent
of both the objects explored during learning and the order of their cumulative
exploration.
## 1 Introduction
The ability to learn causal regularities and object affordances allows
organisms to exploit objects in the service of their needs and wants, an
obvious adaptive advantage in the race for survival. Experiments exploring
paradigms like ‘floating peanut’ task (Hanus et al., 2011), trap-tube task
(Martin-Ordas et al., 2008) and more recently the Crow and the Pitcher task
from Aesop’s Fable (Jelbert et al., 2014; Cheke et al., 2012) provide evidence
that children, primates and corvids can reason to varying degrees about the
causally task-relevant properties of objects. A pertinent question therefore
is how through cumulative and explorative interactions with different objects
in the world, cognitive agents learn task-relevant physical and causal
relations and then exploit these flexibly in novel contexts. Accordingly,
theoretical works have investigated causal learning in cognitive psychology
(Gopnik et al., 2004; Griffiths & Tenenbaum, 2005), computer science (Pearl,
2009; Shimizu et al., 2006), recently in machine learning (Battaglia et al.,
2016; Iten et al., 2020; Baradel et al., 2020) but less so in robotics (Xiong
et al., 2016). We explore this question in the context of robotics to show how
a semantic-episodic memory system along with a set of four learning rules
endows a robot, iCub, with the capability to cumulatively extract and infer
causal relations between objects and actions in “the Crow and the
task. In designing this model, we connect some major trends from neuroscience
on distributed hub-based semantic memory representation (Kiefer &
Pulvermüller, 2012; Ralph et al., 2017), small-world properties (Sporns, 2011),
episodic memory circuitry (Allen & Fortin, 2013; Lee et al., 2015) to
hypothesize possible mechanisms of causal learning.
### 1.1 Causal Learning Task
The task is inspired by an Aesop’s fable (see analogous empirical works
(Jelbert et al., 2014; Bird & Emery, 2009)), in which a thirsty crow drops
pebbles into a half-filled water pitcher, raising the water level high enough
to drink. In a comparable scenario (Figure 1 A), iCub has a set of objects (on
a reachable stand) and a jar of water containing a floating target (green
ball) in front. With the goal of reaching the green ball (which as such is
unreachable), iCub explores whether the available objects can help it realize
this (otherwise unrealizable) goal. A priori, there is no previous experience with
any of the objects. Hence, iCub knows nothing a priori concerning the causal
nature of the task.
Figure 1: Panel A shows the setup. Each episode involves robot either dropping
a single object or making a choice between multiple objects. Objects vary in
their physical properties: color, size, shape and weight. Panel B shows the
block diagram of the proposed model. Panel C is a flowchart of how the
learning rules are applied.
## 2 Model Description
Hub-based Semantic Memory: Figure 1 B shows the proposed model. At the bottom
is the sensory layer to analyze object properties: colour, shape, size and
weight. Word labels are provided by the teacher either to issue goals to iCub
or teach names of new objects. Output from the sensory/physical layer is passed
upwards to a set of property-specific self-organizing maps (SOMs) that
encode object properties as concept features. Neural connectivity between the
sensory layer and the property-specific maps is learnt using the standard SOM
procedure (see Bhat et al. (2016)). From there onwards, perceptual analysis of an
object (e.g. a large heavy blue cylinder) leads to activations in different
property-specific SOMs coding for color, shape, weight and size respectively.
In this sense, layer 1 emulates the distributed property-specific organization
of perceptual information in the brain (Patterson et al., 2007; Martin, 2016).
Activity in these maps forms the bottom-up input to the layer-2 SOM, i.e. the
object hub. The learning rule for the dual-dyad (Park & Friston, 2013)
connectivity matrix $W$ from SOMs to the object hub (and its counterpart
$W^{\prime}$ backwards) is: if the net activity due to a neuron $i$ and a
neuron $j$ winning in the maps manages to activate a neuron $k$ in the object
hub, set $W_{ik}=1$ and $W_{jk}=1$. Thereby, the object hub facilitates
integration and multimodal object representation, as evidenced neuro-
scientifically (Ralph et al., 2017; Kiefer & Pulvermüller, 2012). This small-
world network (Sporns, 2011) of maps and the object hub is complemented with
dynamics that let the neural activity in one map retro-activate other members of
the network, hence allowing information to move top down, bottom up, or in a
cross-modal fashion. So, a word like “blue cylinder” activates the word map, which
forwards activations to the object hub, which thereafter retro-activates the
shape and colour maps, as if expecting top down what a “blue cylinder” might
be perceptually.
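To make the dual-dyad wiring concrete, the following minimal sketch (in Python, with illustrative map and hub sizes; the actual SOM training procedure is the one in Bhat et al. (2016)) shows the binary connectivity update and the resulting top-down retro-activation:

```python
import numpy as np

N_MAP, N_HUB = 64, 128  # illustrative sizes for the SOMs and the object hub

# Forward (map -> hub) and backward (hub -> map) connectivity per property.
props = ("colour", "shape", "size", "weight")
W = {p: np.zeros((N_MAP, N_HUB)) for p in props}       # bottom-up
W_back = {p: np.zeros((N_HUB, N_MAP)) for p in props}  # top-down

def learn_object(winners, hub_winner):
    """Dual-dyad rule: if map winners i, j, ... activate hub neuron k,
    set the forward and backward weights between them to 1."""
    for prop, i in winners.items():
        W[prop][i, hub_winner] = 1.0
        W_back[prop][hub_winner, i] = 1.0

def retro_activate(hub_winner):
    """Top-down pass: given an active hub neuron, reconstruct the
    expected activity in each property-specific map."""
    return {prop: W_back[prop][hub_winner] for prop in props}

# A 'blue cylinder' wins neuron 3 (colour map) and 17 (shape map) and
# recruits hub neuron 42; activating neuron 42 later (e.g. from the word
# map) retro-activates those same map neurons, top down.
learn_object({"colour": 3, "shape": 17}, hub_winner=42)
expected = retro_activate(42)
assert expected["colour"][3] == 1.0 and expected["shape"][17] == 1.0
```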
In an embodied robot, _objects_ in the environment are employed via _actions_.
Here, actions are represented at an abstract level (“what can be done with an
object”) separated from the action planning details (“how to do”). While the
former relates to the motor affordance of an object, the latter relates to
motion planning, and the interested reader may refer to Bhat et al. (2017)
for the relevant details. In layer 2, the abstract
representation corresponds to the action hub and consists of single neurons
coding for different action goals such as reach, grasp, etc. In this sense,
action hub neurons are similar to canonical neurons in the pre-motor cortex
(Rizzolatti & Craighero, 2004) that are activated at the sight of objects to
which specific actions are applicable. Finally, the consequences of both
perceptions and actions alter the state of the body itself. The body hub,
another layer-2 neural map, explicitly encodes these states of the body (like
failing to reach an object, etc.). The reward of an experience is either given
by the user or evaluated by the robot itself through observation. In this task,
the reward is the volume/level of water raised by dropping an object into the jar
of water.
Episodic memory and its link to the Hubs: Practically, when a robot interacts
with its environment, it is the ongoing sequences of actions on various
objects, the ensuing consequences, internal body state, and rewards received
that mainly form the content of its experiences. Thus, in our model, it is the
temporal sequence of activations in the different hubs (object, action,
reward, body) during an episode that make up the episodic memory content. The
episodic memory is realized using a excitatory-inhibitory neural network of
auto-associative memory (Mohan et al., 2014; Hopfieid, 2008). It consists of
1000 neurons, organized in a sheet like structure with 20 rows each containing
50 neurons. Every row is an event in time (indicating activation in object
hub, action hub, body hub or reward) and the complete memory as an episode of
experience. For example, being unable to reach a floating object in a jar of
water (Body hub state), perceiving a red cylinder (Object hub state), dropping
it in water (Action hub state), fetching a reward of 100 (end reward). In the
future, if the robot perceives the red cylinder, the object hub state serves
as a partial cue to reconstruct the full experience. Importantly, in the
memory network of 1000 neurons, multiple episodic memories $(\approx 230)$ can
be represented and retrieved (Hopfield, 2008). See Bhat et al. (2016) for
methods to encode new experiences in episodic memory and recall past ones from
partial cues.
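The paper's memory is an excitatory-inhibitory network following Mohan et al. (2014) and Hopfield (2008); as a stand-in, the minimal sketch below uses a plain Hopfield-style auto-associative network to illustrate one-shot encoding of a 1000-unit episode and its recall from a partial cue (the sizes and the cue fraction are illustrative):

```python
import numpy as np

N = 1000  # 20 rows x 50 neurons, flattened into one pattern

class EpisodicMemory:
    """Minimal auto-associative memory (a plain Hopfield network, used as a
    stand-in for the excitatory-inhibitory network of the paper)."""
    def __init__(self, n=N):
        self.J = np.zeros((n, n))

    def encode(self, episode):
        """One-shot Hebbian storage of a +/-1 pattern (one episode)."""
        p = np.asarray(episode, dtype=float)
        self.J += np.outer(p, p) / len(p)
        np.fill_diagonal(self.J, 0.0)

    def recall(self, cue, steps=5):
        """Iteratively complete a partial cue (0 marks unknown entries)."""
        s = np.asarray(cue, dtype=float)
        for _ in range(steps):
            h = self.J @ s
            s = np.where(h != 0, np.sign(h), s)
        return s

rng = np.random.default_rng(0)
episode = rng.choice([-1, 1], size=N)  # body/object/action/reward rows
mem = EpisodicMemory()
mem.encode(episode)

cue = episode.copy()
cue[500:] = 0  # only the first ten rows (e.g. the object-hub events) given
assert np.array_equal(mem.recall(cue), episode)
```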
Learning Rules for Causal Abstraction: As Figure 1 C shows, these rules
compare what the robot has experienced in the past against what is happening
in the present situation. Let $\bigtriangleup Property$ be the difference in
activity in a property-specific map when activated bottom up (through the sensory
layer) and when activated top down through recall of a past event from episodic
memory. Let $\bigtriangleup Contradiction$ be the difference between the
robot’s anticipation of how an object might behave (expected reward due to the
recalled past experience) and the real observed behaviour. Then the learning
rules are as follows:
(E)limination rule : If ${\bigtriangleup
Property}\land{\neg{\bigtriangleup}Contradiction}$, then that property is not
causally dominant and hence drastically reduce the connection strength between
the object hub and the associated property-specific map.
(G)rowth rule : If ${\bigtriangleup Property}\land{\bigtriangleup
Contradiction}$, then that property is causally dominant. Hence, the connectivity
between the object hub and the associated property-specific map is strengthened,
and the experience is encoded in episodic memory. Contradiction in the robot’s
anticipation implies that there is something new to learn.
(U)ncertainty rule : If ${\neg{\bigtriangleup}Property}\land{\bigtriangleup
Contradiction}$, the connection strength between the object hub and the map is
marginally reduced. In this condition it is not possible to infer whether the
property is causally dominant or not, unless further experience is gained.
(S)tatus Quo rule : If
${\neg{\bigtriangleup}Property}\land{\neg{\bigtriangleup}Contradiction}$,
nothing new to learn, so no change in connectivity.
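In implementation terms, the four rules reduce to a dispatch on the boolean pair ($\bigtriangleup Property$, $\bigtriangleup Contradiction$). The minimal Python sketch below illustrates this; the specific update constants (full elimination, a 0.5 increment, a 10% decay) are illustrative stand-ins for the ‘drastic’, ‘strengthened’ and ‘marginal’ changes described above:

```python
def apply_rule(delta_property: bool, delta_contradiction: bool,
               weight: float):
    """Return (rule name, updated hub-map connection weight,
    whether to encode the episode in episodic memory)."""
    if delta_property and not delta_contradiction:
        return "Elimination", 0.0, False                 # causally irrelevant
    if delta_property and delta_contradiction:
        return "Growth", min(1.0, weight + 0.5), True    # causally dominant
    if not delta_property and delta_contradiction:
        return "Uncertainty", 0.9 * weight, False        # likely irrelevant
    return "StatusQuo", weight, False                    # nothing new to learn

# Episode 2 of the text: colour changed, no contradiction -> Elimination.
print(apply_rule(delta_property=True, delta_contradiction=False, weight=1.0))
```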
## 3 Results
Given that the robot is learning cumulatively, this section presents
successive episodes of learning (a video playlist of these experiments is
available _online_). All episodes share a set of computational processes: 1)
bottom-up experience/interaction with the world and hence activation of
various maps; 2) recall of past experiences from memory (if any); 3) use of
recalled past experiences to anticipate; and 4) application of the learning
rules.
Learning that colour of objects is causally irrelevant to the Aesop’s fable
task
Episode 1: In the first episode, iCub is given the goal to reach the green
ball (in the jar of water). The motion planning system of iCub provides the
information that the goal is unreachable. A large heavy red cylinder is
available and detected (see Figure 2 left, _Object 1_ ). Bottom up sensory
streams activate property-specific maps related to (red) color, (cylinder)
shape, $(11.5cm)$ size and $(420g)$ weight properties leading to a distributed
representation of the object in the maps and object hub. The object hub
activity leads to generation of a partial cue for the recall of any related
past episodes. Since there is no previous experience in the episodic memory,
nothing is recalled. So, there is no top down activity in the object hub nor
any reward expected. With exploration as the only option, the robot picks and drops
the object into the jar of water. The object sinks in the water, displacing a
volume of about $365cm^{3}$, enough to make the floating green sphere
reachable. This experience is encoded into the episodic memory: an unreachable
goal (body hub state), dropping a large heavy red cylinder (object hub
activity), a volume of water displaced $365cm^{3}$ (reward) and goal realized
successfully (body hub state). Note that this is a rapid, one-shot encoding of
the event into memory.
Figure 2: Left panel plots growing causal knowledge as the robot explores objects
over successive episodes. Causal knowledge regarding a property is either
Unknown (depicted by 0 in the plot); or has been learnt to be Dominant or
Irrelevant (depicted by 1); or the system expects the property to be Likely
Irrelevant (a certainty value between 0 and 1). Right panel plots error in
iCub’s prediction about the volume of water displaced on dropping an object
against actual volume displacement calculated using Archimedes’ principle. The
four curves correspond to four different orders in which the objects were
presented to the robot for exploration.
Episode 2: iCub is presented with a blue cylinder of the same size and weight
as the one in episode 1. Object hub activity (with partial similarity to _Object
1_) generates a partial cue that leads to recall of the only past
experience (i.e. episode 1; Figure 2 left, _Object 2_). So iCub anticipates
that the large heavy blue cylinder would displace $365cm^{3}$ of water, and this
turns out to be the case once the robot actually drops the object into water.
Comparing the expected behaviour of the object (reward hub activity in the
recalled past experience) and the observed behaviour, the robot finds no
contradiction. In sum, there is a change in a property (colour), but it did not
cause any contradiction in the expected behaviour. The Elimination rule applies
here, implying colour is not a causally dominant property. The connectivity
between the colour map and the object hub is drastically reduced, so they will
no longer retro-activate each other. This episode is not encoded into
episodic memory.
Learning that weight is a causally dominant property
Episode 3: iCub is presented with a very light cylinder $(14g)$ with the other
properties the same as in episode 1 (see Figure 2 left, _Object 3_). Bottom-up
activity recalls episode 1. A high reward of $365cm^{3}$ is anticipated.
However, only a small amount $(24cm^{3})$ of water is displaced after the robot’s
action, leading to a contradiction between expected and observed behaviours. A
comparison of the bottom up activity and the reconstructed top down activity
reveals there is a difference in the weight map. The Growth rule is applied because a
change in the weight property causes a contradiction. The new experience is
encoded into the episodic memory, which can be recalled next time for better
prediction. Furthermore, activity in shape and size maps showed no change even
though there was a contradiction between the expected and the observed
behaviour. Hence the Uncertainty rule applies too, as the robot still has no
experience or complete knowledge of the causal relevance of object-size or
shape. The robot partially believes at this point that ‘shape and size’ of the
object may not be relevant in causing water rise.
Accumulating over a set of such experiences (Figure 2 left) with objects of
different properties, the robot grows its causal knowledge (certainty) of
properties relevant to the task. Furthermore, in cases where the system is
presented with a cylinder of a weight never experienced before, all the past
experiences (due to the same shape) will be recalled and a weighted averaging
of the rewards expected due to these past experiences is used as an estimate
of net anticipated reward (see Bhat et al. (2016) for more details). As the
number of experiences with objects of different weights increases, the
accuracy in the prediction of reward increases systematically. Figure 2
(right) shows the robot’s predictions for four different random orders of 8 objects.
Results show that the causal knowledge is the same at the end of explorations in
all cases and error in prediction (i.e., difference between robot’s prediction
and Archimedes’ principle) rapidly decreases in all cases.
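A minimal sketch of this prediction step, assuming the recalled episodes are available as (object weight, displaced volume) pairs and using inverse-distance similarity weights in the causally dominant property (the exact weighting scheme is the one in Bhat et al. (2016)):

```python
def predict_reward(query_weight, recalled_episodes, eps=1e-6):
    """Weighted average of past rewards, weighted by the similarity of the
    object weight (the causally dominant property) to the query object."""
    sims = [1.0 / (abs(query_weight - w) + eps) for w, _ in recalled_episodes]
    rewards = [r for _, r in recalled_episodes]
    return sum(s * r for s, r in zip(sims, rewards)) / sum(sims)

# Past episodes from the text: (weight in g, displaced volume in cm^3).
episodes = [(420.0, 365.0), (14.0, 24.0)]
print(predict_reward(200.0, episodes))  # interpolates between the two rewards
```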
## 4 Conclusion
The work takes the topic of affordances from the level of object-action to the
level of property-action, in line with emerging studies in the neurosciences,
and suggests a possible mechanism for causal learning in animals and robots.
The work emphasizes that reasoning and learning always have to go hand-in-hand
and grow cumulatively and continuously in the lifetime of a learner, be it a
natural or an artificial cognitive agent.
## References
* Allen & Fortin (2013) Timothy A. Allen and Norbert J. Fortin. The evolution of episodic memory. _Proceedings of the National Academy of Sciences of the United States of America_ , 110(SUPPL2):10379–10386, 2013. ISSN 00278424. doi: 10.1073/pnas.1301199110. URL http://www.pnas.org/content/110/Supplement{_}2/10379.full.
* Baradel et al. (2020) Fabien Baradel, Natalia Neverova, Julien Mille, Greg Mori, and Christian Wolf. COPHY: Counterfactual Learning of Physical Dynamics. In _ICLR_ , apr 2020. URL http://arxiv.org/abs/1909.12000.
* Battaglia et al. (2016) Peter Battaglia, Razvan Pascanu, Matthew Lai, Danilo Rezende, and Koray Kavukcuoglu. Interaction networks for learning about objects, relations and physics. In _Advances in Neural Information Processing Systems_ , pp. 4509–4517, 2016.
* Bhat et al. (2016) Ajaz A. Bhat, Vishwanathan Mohan, Giulio Sandini, and Pietro Morasso. Humanoid infers Archimedes’ principle: Understanding physical relations and object affordances through cumulative learning experiences. _Journal of the Royal Society Interface_ , 13(120):20160310, 2016. ISSN 17425662. doi: 10.1098/rsif.2016.0310. URL http://rsif.royalsocietypublishing.org/lookup/doi/10.1098/rsif.2016.0310.
* Bhat et al. (2017) Ajaz A. Bhat, Sharath C. Akkaladevi, Vishwanathan Mohan, Christian Eitzinger, and Pietro Morasso. Towards a learnt neural body schema for dexterous coordination of action in humanoid and industrial robots. _Autonomous Robots_ , 41(4):945–966, 2017. ISSN 15737527. doi: 10.1007/s10514-016-9563-3. URL http://link.springer.com/10.1007/s10514-016-9563-3.
* Bird & Emery (2009) Christopher David Bird and Nathan John Emery. Rooks Use Stones to Raise the Water Level to Reach a Floating Worm. _Current Biology_ , 19(16):1410–1414, 2009. ISSN 09609822. doi: 10.1016/j.cub.2009.07.033. URL http://dx.doi.org/10.1016/j.cub.2009.07.033.
* Cheke et al. (2012) Lucy G. Cheke, Elsa Loissel, and Nicola S. Clayton. How do children solve Aesop’s fable? _PLoS ONE_ , 7(7):e40574, 2012. ISSN 19326203. doi: 10.1371/journal.pone.0040574. URL http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=3405099{&}tool=pmcentrez{&}rendertype=abstract.
* Gopnik et al. (2004) Alison Gopnik, David M. Sobel, David Danks, Clark Glymour, Laura E. Schulz, and Tamar Kushnir. A Theory of Causal Learning in Children: Causal Maps and Bayes Nets. _Psychological Review_ , 111(1):3–32, 2004. ISSN 0033295X. doi: 10.1037/0033-295X.111.1.3.
* Griffiths & Tenenbaum (2005) Thomas L. Griffiths and Joshua B. Tenenbaum. Structure and strength in causal induction. _Cognitive Psychology_ , 51(4):334–384, dec 2005\. ISSN 00100285. doi: 10.1016/j.cogpsych.2005.05.004.
  * Hanus et al. (2011) Daniel Hanus, Natacha Mendes, Claudio Tennie, and Josep Call. Comparing the performances of apes (Gorilla gorilla, Pan troglodytes, Pongo pygmaeus) and human children (Homo sapiens) in the floating peanut task. _PLoS ONE_ , 6(6), 2011. ISSN 19326203. doi: 10.1371/journal.pone.0019555.
  * Hopfield (2008) J. J. Hopfield. Searching for memories, Sudoku, implicit check bits, and the iterative use of not-always-correct rapid neural computation. _Neural Computation_ , 20(5):1119–1164, may 2008. ISSN 08997667. doi: 10.1162/neco.2008.09-06-345. URL http://www.mitpressjournals.org/doi/abs/10.1162/neco.2007.09-06-345.
* Iten et al. (2020) Raban Iten, Tony Metger, Henrik Wilming, Lídia del Rio, and Renato Renner. Discovering Physical Concepts with Neural Networks. _Physical Review Letters_ , 124(1), jan 2020. ISSN 0031-9007. doi: 10.1103/physrevlett.124.010508.
  * Jelbert et al. (2014) Sarah A. Jelbert, Alex H. Taylor, Lucy G. Cheke, Nicola S. Clayton, and Russell D. Gray. Using the Aesop’s fable paradigm to investigate causal understanding of water displacement by New Caledonian crows. _PLoS ONE_ , 9(3):e92895, jan 2014. ISSN 19326203. doi: 10.1371/journal.pone.0092895. URL http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0092895.
* Kiefer & Pulvermüller (2012) Markus Kiefer and Friedemann Pulvermüller. Conceptual representations in mind and brain: Theoretical developments, current evidence and future directions. _Cortex_ , 48(7):805–825, 2012. ISSN 00109452. doi: 10.1016/j.cortex.2011.04.006.
* Lee et al. (2015) Sang Wan Lee, John P. O’Doherty, and Shinsuke Shimojo. Neural Computations Mediating One-Shot Learning in the Human Brain. _PLoS Biology_ , 13(4), 2015. ISSN 15457885. doi: 10.1371/journal.pbio.1002137.
* Martin (2016) Alex Martin. GRAPES—Grounding representations in action, perception, and emotion systems: How object properties and categories are represented in the human brain. _Psychonomic Bulletin and Review_ , 23(4):979–990, aug 2016. ISSN 15315320. doi: 10.3758/s13423-015-0842-3.
* Martin-Ordas et al. (2008) Gema Martin-Ordas, Josep Call, and Fernando Colmenares. Tubes, tables and traps: Great apes solve two functionally equivalent trap tasks but show no evidence of transfer across tasks. _Animal Cognition_ , 11(3):423–430, 2008. ISSN 14359448. doi: 10.1007/s10071-007-0132-1.
  * Mohan et al. (2014) Vishwanathan Mohan, Giulio Sandini, and Pietro Morasso. A neural framework for organization and flexible utilization of episodic memory in cumulatively learning baby humanoids. _Neural Computation_ , 26(12):2692–2734, dec 2014. ISSN 1530-888X. doi: 10.1162/NECO_a_00664. URL http://www.mitpressjournals.org/doi/abs/10.1162/NECO_a_00664.
  * Park & Friston (2013) Hae Jeong Park and Karl Friston. Structural and functional brain networks: From connections to cognition. _Science_ , 342(6158):1238411, nov 2013. ISSN 10959203. doi: 10.1126/science.1238411. URL http://science.sciencemag.org/content/342/6158/1238411.abstract.
* Patterson et al. (2007) Karalyn Patterson, Peter J Nestor, and Timothy T Rogers. Where do you know what you know? The representation of semantic knowledge in the human brain. _Nature reviews. Neuroscience_ , 8(12):976–87, dec 2007. ISSN 1471-0048. doi: 10.1038/nrn2277. URL http://dx.doi.org/10.1038/nrn2277.
* Pearl (2009) Judea Pearl. _Causality_. Cambridge University Press, 2009.
  * Ralph et al. (2017) Matthew A. Lambon Ralph, Elizabeth Jefferies, Karalyn Patterson, and Timothy T. Rogers. The neural and computational bases of semantic cognition. _Nature Reviews Neuroscience_ , 18(1):42–55, dec 2017. ISSN 14710048. doi: 10.1038/nrn.2016.150. URL http://dx.doi.org/10.1038/nrn.2016.150.
* Rizzolatti & Craighero (2004) Giacomo Rizzolatti and Laila Craighero. The Mirror-Neuron System. _Annual Review of Neuroscience_ , 27(1):169–192, jul 2004. ISSN 0147-006X. doi: 10.1146/annurev.neuro.27.070203.144230.
* Shimizu et al. (2006) Shohei Shimizu, Patrik O. Hoyer, Aapo Hyvärinen, and Antti Kerminen. A linear non-gaussian acyclic model for causal discovery. _Journal of Machine Learning Research_ , 7:2003–2030, 2006\. ISSN 15337928.
* Sporns (2011) Olaf Sporns. The human connectome: a complex network. _Annals of the New York Academy of Sciences_ , 1224:109–25, apr 2011. ISSN 1749-6632. doi: 10.1111/j.1749-6632.2010.05888.x. URL http://www.ncbi.nlm.nih.gov/pubmed/21251014.
* Xiong et al. (2016) Caiming Xiong, Nishant Shukla, Wenlong Xiong, and Song Chun Zhu. Robot learning with a spatial, temporal, and causal and-or graph. In _Proceedings - IEEE International Conference on Robotics and Automation_ , volume 2016-June, pp. 2144–2151. Institute of Electrical and Electronics Engineers Inc., jun 2016. ISBN 9781467380263. doi: 10.1109/ICRA.2016.7487364.
|
2024-09-04T02:54:55.889507 | 2020-02-29T15:26:48 | 2003.00277 | {
"authors": "Emilio Pisanty, Marcelo F. Ciappina and Maciej Lewenstein",
"full_text_license": null,
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"provenance": "arxiv-papers-0000.json.gz:25961",
"submitter": "Emilio Pisanty",
"url": "https://arxiv.org/abs/2003.00277"
} | arxiv-papers |
# The imaginary part of the high-harmonic cutoff
Emilio Pisanty <EMAIL_ADDRESS>
ICFO – Institut de Ciencies Fotoniques, The Barcelona Institute of Science and Technology, 08860 Castelldefels (Barcelona)
Marcelo F. Ciappina
ICFO – Institut de Ciencies Fotoniques, The Barcelona Institute of Science and Technology, 08860 Castelldefels (Barcelona)
Maciej Lewenstein
ICFO – Institut de Ciencies Fotoniques, The Barcelona Institute of Science and Technology, 08860 Castelldefels (Barcelona); ICREA, Passeig de Lluís Companys, 23, 08010 Barcelona, Spain
(22 July 2020)
###### Abstract
High-harmonic generation – the emission of high-frequency radiation by the
ionization and subsequent recombination of an atomic electron driven by a
strong laser field – is widely understood using a quasiclassical trajectory
formalism, derived from a saddle-point approximation, where each saddle
corresponds to a complex-valued trajectory whose recombination contributes to
the harmonic emission. However, the classification of these saddle points into
individual quantum orbits remains a high-friction part of the formalism. Here
we present a scheme to classify these trajectories, based on a natural
identification of the (complex) time that corresponds to the harmonic cutoff.
This identification also provides a natural complex value for the cutoff
energy, whose imaginary part controls the strength of quantum-path
interference between the quantum orbits that meet at the cutoff. Our
construction gives an efficient method to evaluate the location and brightness
of the cutoff for a wide class of driver waveforms by solving a single saddle-
point equation. It also allows us to explore the intricate topologies of the
Riemann surfaces formed by the quantum orbits induced by nontrivial waveforms.
Accepted Manuscript for J. Phys. Photonics 2, 034013 (2020), available as
arXiv:2003.00277.
© The authors, 2020. Licensed under CC BY 4.0.
High-harmonic generation (HHG) is an extremely nonlinear optical process in
which a strong laser field drives the emission of a train of short bursts of
high-frequency radiation Krausz2009 ; Corkum2007 , which can cover hundreds of
harmonic orders of the driving field, over a broad plateau that terminates at
a cutoff. This emission comes from a three-step process in which the laser
ionizes the target atom via tunnel ionization, and then propels the released
electron back to the parent ion, where it recombines with the hole it left
behind, releasing its kinetic energy as a photon Corkum1993 ; Kulander1993 .
This emission can be modelled using a wide range of approaches, from classical
heuristics Corkum1993 ; Kulander1993 to intensive numerical computations
Scrinzi2014 , but the quantitative models that most closely follow the overall
intuition are quasiclassical methods Lewenstein1994 ; Amini2019 , known as the
Strong-Field Approximation (SFA), where the emission amplitude is given by a
path-integral sum over discrete emission events. These are known as quantum
orbits Salieres2001 ; Paulus2000 ; Kopold2002 , i.e., quasiclassical
trajectories whose start and end times are complex Ivanov2014 ; Nayak2019 .
The quantum-orbit formalism arises naturally under the approximation that the
electron’s motion in the continuum is completely controlled by the driving
laser, which allows an exact solution in terms of highly oscillatory
integrals. These are then reduced, using a steepest-descent method known as
the saddle-point approximation (SPA), to discrete contributions coming from
the saddle points of the integrand BruijnAsymptotics ; BleisteinIntegrals ;
GerlachSPAonline . These saddle points represent the quasiclassical
trajectories, and they typically come in pairs – most notably the ‘short’ and
‘long’ trajectories Lewenstein1995 ; Zair2008 . Each pair of trajectories
approaches each other over the harmonic plateau and then performs a Stokes
transition at the harmonic cutoff Figueira2002 ; Milosevic2002 , giving way to
an exponential drop where only one of the saddles is used. In practice,
however, classifying the saddle points into these pairs of trajectories is one
of the highest-friction points when applying this method Chipperfield2007 ;
Hoffmann2011 ; Das2017 , particularly since the saddles tend to move quickly,
and approach each other very closely, at the harmonic cutoff.
Figure 1: Solving the HHG saddle-point equations returns a discrete set of
saddle points as a function of the harmonic photon energy $\Omega$, shown in
(b) for the monochromatic field in (a). Our main result is the trajectory
classification in (c), which consists of the harmonic-cutoff points
$t_{\mathrm{hc}}$ (triangles) and the separatrices through them, which allow
the cloud of saddle-point solutions in (b) to be organized into individual
trajectories; the different colors correspond to different quantum orbits. We
model neon in a field of wavelength $800\,\mathrm{nm}$ and intensity
$2\times 10^{14}\,\mathrm{W/cm^{2}}$.
In this work we construct a quantum-orbit classification scheme based on a
natural notion of complex-valued harmonic-cutoff times $t_{\mathrm{hc}}$.
These are points – given by zeros of the second derivative of the action –
which sit between the two saddle points as they approach each other, and which
provide an organic separation between the two. Thus, an unordered set of
saddle-point solutions like the one shown in Fig. 1b can be cleanly organized
programmatically into families of trajectories, as shown in Fig. 1c, in a
flexible and general fashion which is robust to changes in the quantum orbits
over broad families of optical fields.
Once these second-order saddle points have been identified, they naturally
fill in the role of the quantum orbits corresponding to the high-harmonic
cutoff, and they give the photon energy at which it occurs – a quantity which
can often be hard to pin down precisely – as the real part of the time
derivative of the action at the harmonic-cutoff time. We benchmark this
understanding of the harmonic cutoff against the standard ‘cutoff law’, which
describes the scaling as $\Omega_{\mathrm{max}}\sim I_{p}+3.17U_{p}$ with the
ionization potential $I_{p}$ and the ponderomotive energy $U_{p}$ of the field
(previously derived both classically Corkum1993 ; Kulander1993 and via a
systematic expansion in $I_{p}/U_{p}$ Lewenstein1994 ). However, our method
extends trivially to drivers with higher harmonic content as well as to
tailored polarizations.
Moreover, the information at $t_{\mathrm{hc}}$, which can be obtained by
solving a single second-order saddle point equation, is sufficient to
calculate an accurate evaluation of the harmonic yield at the cutoff, as well
as a good qualitative estimate (which we term the Harmonic-Cutoff
Approximation) for the shape of the spectrum at the upper plateau. The
efficiency of this approach makes it a good tool when optimizing both the
position and the brightness of the cutoff over the high-dimensional parameter
spaces available on current optical synthesizers Manzoni2015 .
This understanding of the high-harmonic cutoff $\Omega_{\mathrm{hc}}$ also
assigns a natural value to its imaginary part, whose direct impact is to
control the closeness of the approach between the pair of saddle points at the
cutoff. Since this closeness regulates, in turn, the distinguishability
between the two trajectories, the imaginary part of the harmonic cutoff
ultimately controls the strength of quantum-path interference (QPI) Zair2008
between the pair of orbits.
In particular, the zeros of the imaginary part of the harmonic cutoff energy
$\imaginary(\Omega_{\mathrm{hc}})$ pinpoint the configurations where the
saddle points for the two quantum orbits have an exact coalescence at the
cutoff, in which case they cannot be distinguished from each other. For
tailored polarizations, as well as other polychromatic fields with one or more
nontrivial shape parameters, $\imaginary(\Omega_{\mathrm{hc}})$ will
generically oscillate, indicating that the quantum orbits have reconnection
events – where, in essence, the evanescent post-cutoff saddle point will
transfer from the ‘short’ quantum orbit to the ‘long’ one.
We showcase this behaviour in bichromatic counter-rotating circularly-
polarized ‘bicircular’ fields Fleischer2014 ; Kfir2015 ; Eichmann1995 ;
Milosevic1996 ; Milosevic2000bicircular ; Baykusheva2016 ; Dorney2017 ;
JimenezGalan2018 ; Pisanty2014 ; Pisanty2017 ; Milovsevic2019 as the relative
intensity between the two components changes: the various quantum orbits then
recombine with each other, making for a complicated landscape within a unified
topology. This illustrates the fact that the various saddle points are simply
different branches of a single, unified Riemann surface, with our harmonic-
cutoff times $t_{\mathrm{hc}}$ taking on the role of the branch points of this
Riemann surface. More practically, the reconnections make the quantum-orbit
classification challenging, but we show that our algorithm can seamlessly
handle these changes in the trajectories.
In the more structural terms of catastrophe theory Poston1978 , the harmonic
cutoff is a spectral caustic, in the ‘fold’ class of diffraction catastrophes
Raz2012 . In this sense, the complex harmonic-cutoff energy
$\Omega_{\mathrm{hc}}$ is the bifurcation set for the catastrophe – suitably
generalized to the complex variables involved – and it marks the location of
the caustic. The formal study of caustics has seen increasing interest within
attoscience Raz2012 ; Goulielmakis2012 ; Austin2014 ; Facciala2016 ;
Facciala2018 ; Hamilton2017 ; Birulia2019 ; Uzan2020 ; Kelvich2017 , and our
approach provides a useful tool for exploring and describing the higher-
dimensional bifurcation sets at the heart of the higher-order catastrophes
that become available as our control over short pulses of light becomes more
finely tuned.
This work is structured as follows. In Section I we construct the harmonic-
cutoff times and examine their structure, first summarizing the standard SFA
in Section I.1 and constructing a model with a cubic action in Section I.2,
which we explore in depth in Sections I.3 and I.4, where we construct the
classification algorithm; in Section I.5 we explore the quantum-orbit Riemann
surface uncovered by this perspective. In Section II we construct the
Harmonic-Cutoff Approximation, by explicitly integrating the Airy integral of
the cubic action of our model, and we benchmark it against the standard
approximations. In Section III we examine how the complex cutoff energy
$\Omega_{\mathrm{hc}}$ scales with the field parameters, and show that it
agrees with the known cutoff law and that it extends it to the complex domain.
Finally, in Section IV, we explore the branch-cut classification for
bicircular fields, showing that our algorithm can handle the quantum-orbit
reconnections, as well as how these reconnections give rise to a nontrivial
topology for the quantum orbits.
The functionality we describe here has been integrated into the RBSFA package
for Mathematica RBSFA , and our specific implementation is available from Ref.
FigureMaker . Interactive versions of 3D Fig. 1 and 3D Fig. 2, as well as
3D-printable models of those surfaces, are available as Supplementary Material
SupplementaryMaterial .
## I The complex harmonic-cutoff times
### I.1 The Strong-Field Approximation
In the SFA, high-harmonic emission is calculated as the expectation value of
the dipole operator, under the assumption that there is a single active
electron that is either in the atomic potential’s ground state $\ket{g}$ or in
a laser-driven continuum where the effect of the atomic potential is
negligible. The calculation Ivanov2014 ; Nayak2019 ; Amini2019 ;
Lewenstein1994 then gives the harmonic dipole in the form
$\displaystyle\mathbf{D}(\Omega)$
$\displaystyle=\int_{-\infty}^{\infty}\\!\\!\\!\\!\\!\\!\textrm{d}t\int_{-\infty}^{t}\\!\\!\\!\\!\\!\\!\textrm{d}t^{\prime}\\!\int\\!\textrm{d}\mathbf{p}\,\mathbf{d}(\mathbf{p}+\mathbf{A}(t))\Upsilon(\mathbf{p}+\mathbf{A}(t^{\prime}))$
$\displaystyle\qquad\qquad\qquad\qquad\times
e^{-iS_{V}\\!(\mathbf{p},t,t^{\prime})+i\Omega t}+\mathrm{c.c.},$ (1)
where the three integrals over the times of ionization and recollision,
$t^{\prime}$ and $t$, and the intermediate momentum, $\mathbf{p}$, form a
reduced Feynman path integral summing over a restricted set of relevant orbits
Salieres2001 . This Feynman sum is modulated by ionization and recollision
amplitudes given by the transition dipole matrix element
$\mathbf{d}(\mathbf{k})=\matrixelement{\mathbf{k}}{\hat{r}}{g}$ and the scalar
function
$\Upsilon(\mathbf{k})=(I_{p}+\tfrac{1}{2}\mathbf{k}^{2})\innerproduct{\mathbf{k}}{g}$,
evaluated at the ionization and recollision velocities: $\mathbf{A}(t)$ is the
vector potential of the laser and $\mathbf{p}$ is the canonical momentum, so
$\mathbf{v}(t)=\mathbf{p}+\mathbf{A}(t)$ is the recollision velocity and
$m_{e}\mathbf{v}(t)=\mathbf{p}+\mathbf{A}(t)$ is the kinematic momentum. (We use atomic
units with $m_{e}=\hbar=1$ throughout, and we consider neon in a linearly-
polarized field $\mathbf{F}(t)=F_{0}\hat{\mathbf{e}}_{z}\cos(\omega t)$ of
wavelength $800\,\mathrm{nm}$ and intensity
$2\times 10^{14}\,\mathrm{W/cm^{2}}$
unless otherwise stated.)
More importantly, the contribution from each orbit in the Feynman sum in (1)
is modulated by a phase given by the total action accumulated during
propagation,
$\displaystyle S(\mathbf{p},t,t^{\prime})$
$\displaystyle=S_{V}(\mathbf{p},t,t^{\prime})-\Omega t$ (2a)
$\displaystyle\text{for}\ S_{V}(\mathbf{p},t,t^{\prime})$
$\displaystyle=\frac{1}{2}\int_{t^{\prime}}^{t}\left(\mathbf{p}+\mathbf{A}(\tau)\right)^{2}\textrm{d}\tau+I_{p}(t-t^{\prime}),$
(2b)
with $I_{p}$ the ionization potential of the atom, which is normally dominated
by the Volkov component $S_{V}$. For a field with amplitude $F_{0}$ and
frequency $\omega$, the action scales with the ratio $z=U_{p}/\omega$ of the
field’s ponderomotive energy $U_{p}=F_{0}^{2}/4\omega^{2}$ to its frequency.
This is typically large, so the exponential term $e^{-iS}$ changes much faster
than the amplitudes in the prefactor, making the integral in (1) highly
oscillatory.
The highly oscillatory nature of this amplitude then allows us to deal with
this integral using the method of steepest descents BruijnAsymptotics ;
BleisteinIntegrals ; GerlachSPAonline , also known as the saddle-point
approximation (SPA). In this method, we deform the integration contours in (1)
into the complex plane, so that they will pass through the saddle points of
the exponent (2). There the exponent is locally quadratic, so the integral can
be reformulated as a gaussian and integrated exactly, under the assumption
that the prefactor is slow. These points can be found through the saddle-point
equations
$\displaystyle 0$ $\displaystyle=\frac{\partial S}{\partial
t}=\frac{1}{2}\left(\mathbf{p}+\mathbf{A}(t)\right)^{2}+I_{p}-\Omega,$ (3a)
$\displaystyle 0$ $\displaystyle=\frac{\partial S}{\partial
t^{\prime}}=\frac{1}{2}\left(\mathbf{p}+\mathbf{A}(t^{\prime})\right)^{2}+I_{p},$
(3b) $\displaystyle 0$ $\displaystyle=\frac{\partial
S}{\partial\mathbf{p}}=\int_{t^{\prime}}^{t}\left(\mathbf{p}+\mathbf{A}(\tau)\right)\textrm{d}\tau,$
(3c)
which can be interpreted on physical grounds as encoding the requirements of
energy conservation at recollision and ionization ((3a) and (3b), resp.) as
well as the return of the quasiclassical laser-driven trajectory
$\boldsymbol{\alpha}(t)=\int_{t^{\prime}}^{t}\mathbf{v}(\tau)\textrm{d}\tau$
to the ion at the time of recollision. (We use the term ‘quasiclassical’ here
to distinguish this formalism from more general semiclassical approaches,
which use Feynman sums over allowed classical trajectories and add quantum
corrections coming from the gaussian (or other similar) spread of the integral
around them.) One key feature of these conditions is that the ionization
equation (3b), in particular, cannot be satisfied with real variables, forcing
all of the variables involved to take complex values.
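Explicitly, (3b) demands
$\frac{1}{2}\left(\mathbf{p}+\mathbf{A}(t^{\prime})\right)^{2}=-I_{p}$: for
real arguments the left-hand side is nonnegative while the right-hand side is
strictly negative whenever $I_{p}>0$, so for a linearly-polarized field the
ionization velocity must be imaginary,
$p_{z}+A_{z}(t^{\prime})=\pm i\sqrt{2I_{p}},$
which can only be achieved at a complex ionization time $t^{\prime}$.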
Normally, the return equation (3c) is solved separately, since its linearity
in $\mathbf{p}$ guarantees a unique solution,
$\mathbf{p}_{s}(t,t^{\prime})=-\frac{1}{t-t^{\prime}}\int_{t^{\prime}}^{t}\mathbf{A}(\tau)\textrm{d}\tau,$
(4)
for any arbitrary pair $(t,t^{\prime})$ of ionization and recollision times.
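As a concrete example, for a monochromatic linearly-polarized field
$\mathbf{A}(t)=A_{0}\hat{\mathbf{e}}_{z}\cos(\omega t)$ (the case of Fig. 1),
the integral in (4) can be performed in closed form, giving
$\mathbf{p}_{s}(t,t^{\prime})=-\frac{A_{0}}{\omega}\,\frac{\sin(\omega t)-\sin(\omega t^{\prime})}{t-t^{\prime}}\,\hat{\mathbf{e}}_{z},$
and, since this expression is analytic in both arguments, it holds equally for
complex $t$ and $t^{\prime}$.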
Once the momentum integral has been performed in this way, the expression for
the harmonic dipole takes the form
$\displaystyle\mathbf{D}(\Omega)$
$\displaystyle=\int_{-\infty}^{\infty}\\!\\!\\!\\!\\!\textrm{d}t\int_{-\infty}^{t}\\!\\!\\!\\!\\!\\!\textrm{d}t^{\prime}\>\mathbf{d}(\mathbf{p}_{s}(t,t^{\prime}){+}\mathbf{A}(t))\Upsilon(\mathbf{p}_{s}(t,t^{\prime}){+}\mathbf{A}(t^{\prime}))$
$\displaystyle\qquad\times\left(\frac{2\pi}{i(t-t^{\prime})}\right)^{3/2}e^{-iS_{V}\\!(t,t^{\prime})+i\Omega
t},$ (5)
where the fractional power of the excursion time $\tau=t-t^{\prime}$
represents the dispersion of the released wavepacket in position space. (If
the time integrals are performed numerically, this factor needs to be
regularized at $\tau\to 0$, to account for a breakdown of the saddle-point
approximation in that limit Pisanty2016 ; if the time integrals are also
evaluated via saddle-point methods, however, this is not required, as that
limit is not used.) We notate
$S(t,t^{\prime})=S(\mathbf{p}_{s}(t,t^{\prime}),t,t^{\prime})$ where it does
not lead to confusion, and we drop the added complex conjugate for simplicity.
The resulting two-dimensional integral, (5), is now in its minimal form; the
saddle-point equations for the amended action read
$\displaystyle 0$ $\displaystyle=\frac{\partial S}{\partial
t}=\frac{1}{2}\left(\mathbf{p}_{s}(t,t^{\prime})+\mathbf{A}(t)\right)^{2}+I_{p}-\Omega,$
(6a) $\displaystyle 0$ $\displaystyle=\frac{\partial S}{\partial
t^{\prime}}=\frac{1}{2}\left(\mathbf{p}_{s}(t,t^{\prime})+\mathbf{A}(t^{\prime})\right)^{2}+I_{p},$
(6b)
and they can only be solved numerically. (This is often done by gradient
descent on the modulus of the right-hand side Ivanov2014 ; Nayak2019 , but it
is also possible to use Newton’s method directly, as it readily extends to
multiple complex variables Chipperfield2007 ; RBSFA .) A typical set of
solutions for these saddle-point equations is shown in Fig. 1b: the solutions
form a discrete set of points, which shift when the harmonic frequency
$\Omega$ changes. These discrete points thus trace out individual curves on
the complex $t$ and $t^{\prime}$ planes, which form the individual quantum
orbits.
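As a minimal illustration of the Newton approach, the Python sketch below solves (6a) and (6b) for the monochromatic field of Fig. 1 and tracks one quantum orbit over the plateau by continuation, reusing each solution as the starting guess for the next harmonic; the numerical parameters are approximate, and the hand-tuned initial guess selects one particular orbit (other guesses converge to other saddles):

```python
import numpy as np

# Atomic units; roughly the parameters of Fig. 1 (800 nm, 2e14 W/cm^2, neon).
omega = 0.0570                 # photon energy of an 800 nm field
F0 = 0.0755                    # peak field for ~2e14 W/cm^2
A0, Ip = F0 / omega, 0.793     # vector-potential amplitude; neon Ip

def A(t):                      # linearly polarized vector potential
    return A0 * np.cos(omega * t)

def p_s(t, tp):                # stationary momentum, Eq. (4)
    return -(A0 / omega) * (np.sin(omega * t) - np.sin(omega * tp)) / (t - tp)

def eqs(x, Omega):             # Eqs. (6a) and (6b) as a complex 2-vector
    t, tp = x
    p = p_s(t, tp)
    return np.array([0.5 * (p + A(t))**2 + Ip - Omega,
                     0.5 * (p + A(tp))**2 + Ip])

def newton(x, Omega, tol=1e-12, h=1e-6, max_iter=200):
    """Newton's method in two complex variables; the finite-difference
    Jacobian with a real step is valid since the action is analytic."""
    x = np.asarray(x, dtype=complex)
    for _ in range(max_iter):
        f = eqs(x, Omega)
        if np.max(np.abs(f)) < tol:
            break
        J = np.column_stack([(eqs(x + np.array([h, 0]), Omega) - f) / h,
                             (eqs(x + np.array([0, h]), Omega) - f) / h])
        x = x + np.linalg.solve(J, -f)
    return x

# Continuation in Omega over the plateau (the cutoff lies near harmonic 38;
# there the saddle pair approaches closely and tracking becomes delicate,
# which is precisely the classification problem addressed in this paper).
T = 2 * np.pi / omega
x = np.array([0.75 * T + 5j, 0.30 * T + 15j])  # rough, hand-tuned guess
for Omega in np.arange(20, 36) * omega:
    x = newton(x, Omega)
    print(f"H{Omega/omega:4.0f}: t_s = {complex(x[0]):.2f}, "
          f"t_s' = {complex(x[1]):.2f}")
```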
Figure 2: (a,b) Detail of the first pair of quantum orbits from Fig. 1,
labelled by harmonic order, $\Omega/\omega$. At the avoided crossing of the
two saddles, they go through a Stokes and then an anti-Stokes transition
(solid and hollow arrow, resp.), at the points where $\real(S)$ and
$\imaginary(S)$ of the two are equal, resp. (c) Saddle-point approximation to
the core elements of the harmonic dipole, $\tau^{-3/2}e^{-iS}$, for the first
pair of quantum orbits (green and blue curves). At the anti-Stokes transition
(dot-dashed line), one of the two starts growing exponentially, but it must be
discarded at the Stokes transition (solid line), when the required
integration-contour topology changes. This produces a discontinuous change in
the total SPA dipole (dashed line), which can be fixed by using the Uniform
Approximation (solid line). The oscillations over the plateau are quantum path
interference between the two trajectories.
Once the saddle points have been found, the SPA gives the harmonic dipole as
$\displaystyle\mathbf{D}(\Omega)$
$\displaystyle=\sum_{s}\frac{2\pi}{\sqrt{-\det[S^{\prime\prime}(t_{s},t^{\prime}_{s})]}}\left(\frac{2\pi}{i(t_{s}-t^{\prime}_{s})}\right)^{3/2}$
$\displaystyle\qquad\qquad\times\mathbf{d}(\mathbf{p}_{s}{+}\mathbf{A}(t_{s}))\Upsilon(\mathbf{p}_{s}{+}\mathbf{A}(t^{\prime}_{s}))\,$
(7) $\displaystyle\qquad\qquad\times
e^{-iS_{V}\\!(t_{s},t^{\prime}_{s})+i\Omega t_{s}},$
with
$\det[S^{\prime\prime}]=\partial_{tt}^{2}S\,\partial_{t^{\prime}t^{\prime}}^{2}S-(\partial_{tt^{\prime}}^{2}S)^{2}$
the Hessian of the action, thus giving the harmonic yield as a coherent sum of
discrete contributions coming from each of the quantum orbits. Fig. 2c shows a
typical example of how the individual contributions (blue and green curves)
get combined into a total harmonic yield, with quantum-path interference
beatings in the plateau as the two contributions go in and out of phase
Lewenstein1995 ; Zair2008 .
Within the SPA expression (7), the key component is the summation: this runs
over all the saddle points of the action which are compatible with a suitable
steepest-descent integration contour, a property that can be nontrivial to
determine. Fig. 2b shows a typical configuration, with the short- and long-
trajectory saddle points approaching each other over the harmonic plateau,
close to the real axis, performing an ‘avoided crossing’ at the harmonic
cutoff, and then advancing into imaginary time.
At this avoided crossing, the saddle-point pair experiences two important
transitions, known as the Stokes and anti-Stokes lines Figueira2002 ;
Milosevic2002 ; Chipperfield2007 . At the anti-Stokes transition, which is
defined as the point where $\imaginary(S)$ for the two orbits is equal, the
short-trajectory contribution begins to grow exponentially, and it must be
discarded from the summation in order to keep reasonably physical results.
This elimination is enforced by the Stokes transition, which typically happens
earlier, defined as the point where $\real(S)$ for the two orbits is equal and
then changes order. This is an important change, as the steepest-descent
method requires integration contours to follow the contour lines of
$\real(S)$: thus, the change in the ordering of
$\real(S(t_{s},t_{s}^{\prime}))$ for the long- and short-trajectory saddle
points means that, after the Stokes transition, the short-trajectory saddle
point (in this example) can no longer form part of a suitable integration
contour, and it needs to be discarded from the summation.
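Once the orbits have been classified, locating these transitions from the actions $S_{1}(\Omega)$ and $S_{2}(\Omega)$ of a saddle pair sampled on a frequency grid reduces to finding sign changes of $\real(S_{1}-S_{2})$ and $\imaginary(S_{1}-S_{2})$; a minimal sketch:

```python
import numpy as np

def transitions(Omega, S1, S2):
    """Locate Stokes (Re S1 = Re S2) and anti-Stokes (Im S1 = Im S2)
    transitions of a saddle pair via sign changes of the differences,
    with linear interpolation for the crossing frequency."""
    dS = np.asarray(S1) - np.asarray(S2)
    def crossings(y):
        idx = np.nonzero(np.sign(y[:-1]) != np.sign(y[1:]))[0]
        return [Omega[i] - y[i] * (Omega[i + 1] - Omega[i]) / (y[i + 1] - y[i])
                for i in idx]
    return {"Stokes": crossings(dS.real), "anti-Stokes": crossings(dS.imag)}
```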
This means, however, that the SPA harmonic yield at the cutoff is a
discontinuous function of $\Omega$, coming from the discrete jump at the
points where one of the trajectories is eliminated, and this discontinuity is
clearly incompatible with the initial, obviously continuous, expressions for
$\mathbf{D}(\Omega)$ in (1) and (5). This apparent paradox is resolved by
noting that the SPA is not valid when saddles are close together, as the
quadratic approximation to the exponent fails. Instead, one must use a cubic
form for the exponent, which can be integrated in terms of Airy or fractional-
order Bessel functions. This gives a continuous spectrum from the saddle-point
pair, known as the Uniform Approximation Figueira2002 ; Milosevic2002 (UA),
which is shown in Fig. 2c.
From a more general viewpoint, it is important to stress that applying these
considerations is only possible once the various saddle points have been
classified into continuously-connected quantum orbits, as without that step it
is impossible to even define the objects that will be included in the
summation or discarded from it. (Similarly, keeping good track of the saddle
points in continuous quantum orbits is also essential to ensure that the
Hessian square root in (7), $\sqrt{-\det[S^{\prime\prime}(t_{s},t^{\prime}_{s})]}$, does not cross any
branch cuts.) In practice, this is a high-friction point in the calculation,
requiring expensively tight grid spacings in $\Omega$ to accurately resolve
the avoided crossings, or the additional design of an adaptive grid to
increase the energy resolution there Chipperfield2007 ; Hoffmann2011 ; Das2017
. Moreover, this problem is compounded in more complex polychromatic fields,
where the saddle-point structure changes depending on the details of the laser
pulse. It is the goal of this manuscript to provide a simple and effective
method to classify these quantum orbits, separating the points in Fig. 2b
along the diagonal into the two curves shown.
### I.2 A model for the saddle points at the cutoff
In order to build this method, we first consider a simple model for the saddle
points at and near the harmonic cutoff, in order to isolate the key features
of the problem. In essence, Fig. 2b shows us two saddle points approaching
each other and then receding, so we consider the simplest model action with
only two saddles, of the polynomial form
$\frac{\textrm{d}S}{\textrm{d}t}=A(t-t_{s,1})(t-t_{s,2}).$ (8)
(For simplicity, we restrict our attention for now to actions with only one
variable, $t$, which corresponds to solving (6b) first for
$t_{s}^{\prime}=t_{s}^{\prime}(t)$, giving $S(t)=S(t,t_{s}^{\prime}(t))$ and
then examining (6a). We shall lift this restriction later.) Moreover, we use
the clear symmetry evident in Fig. 2b, with both saddles (approximately)
symmetrically placed about the center of the plot, and enforce the condition
$t_{s,1}=-t_{s,2}$ in our model, so that its action obeys
$\frac{\textrm{d}S}{\textrm{d}t}=A(t-t_{s})(t+t_{s})=A(t^{2}-t_{s}^{2}),$ (9)
which can be integrated to find
$S(t)=\frac{1}{3}A\,t^{3}-At_{s}^{2}\,t+C$ (10)
as a cubic polynomial, where we set $C=0$ for simplicity.
Turning now to the oscillatory integral for this action, we can define it in
the form
$\int e^{-iS(t)}\,\textrm{d}t=\int e^{-\frac{i}{3}At^{3}+iAt_{s}^{2}\,t}\,\textrm{d}t$ (11)
and then compare the linear term, $e^{+iAt_{s}^{2}\,t}$, with the Fourier
kernel $e^{+i\Omega t}$ from (1), which has the same structure. Thus, in order
to turn (11) into a form that is clearly analogous to (1), we separate
$At_{s}^{2}=\Omega-(\Omega_{c}+i\eta)$ into a variable and a constant part,
and thus define
$D(\Omega)=\int e^{-\frac{i}{3}At^{3}+i(\Omega-\Omega_{c}-i\eta)t}\,\textrm{d}t.$ (12)
Here the constant part $\Omega_{\mathrm{hc}}=\Omega_{c}+i\eta$ has a nonzero
imaginary part $+i\eta$, as we do not have any guarantees that $At_{s}^{2}$ is
real, but its real part $\Omega_{c}$ can be set to zero if desired, since it
simply acts as an offset for the variable $\Omega$. Here the functional
dependence
$At_{s}^{2}=\Omega-(\Omega_{c}+i\eta)=\Omega-\Omega_{\mathrm{hc}}$ (13)
on $\Omega$ is an additional postulate, justified only by analogy with the
Fourier kernel of the full integral which our model attempts to mimic, but, as
we shall see shortly, the ‘quantum orbits’ $t_{s}(\Omega)$ that result from
this identification form a good model for the avoided-crossing behaviour in
Fig. 2b.
In addition, if we now separate $S(t)=S_{V}(t)-\Omega t$ as we did in (2)
above, the saddle-point equation (9) can now be rephrased as the requirement
that
$\frac{\textrm{d}S_{V}}{\textrm{d}t}=At^{2}+(\Omega_{c}+i\eta)=\Omega,$ (14a)
with $\Omega$ running over the real axis, in direct analogy to (3a) and
(6a). This is our first key insight: the curves traced out by the solutions of
the saddle-point equation (9) can also be described as the contour line
$\imaginary\mathopen{}\left[\frac{\textrm{d}S_{V}}{\textrm{d}t}\right]\mathclose{}=0$
(14b)
of the derivative of the model Volkov action $S_{V}$ (and also of the full
action $S$, since they differ by a real number).
---
Figure 3: Contour map of
$\imaginary\mathopen{}\left[\frac{\textrm{d}S_{V}}{\textrm{d}t}\right]\mathclose{}$
for the model action in (10), with blue (yellow) indicating negative
(positive) values, and with the
$\imaginary\mathopen{}\left[\frac{\textrm{d}S_{V}}{\textrm{d}t}\right]\mathclose{}=0$
contour highlighted in black. The gray dot-dashed lines are the contours of
$\real\mathopen{}\left[\frac{\textrm{d}S_{V}}{\textrm{d}t}\right]\mathclose{}$
passing through the central saddle point. We set $\eta=-1$ and
$A=e^{10^{\circ}i}$.
This insight then lights the way further: our principal task is to look for
objects between the two curves in Fig. 2b that will help us separate them, and
the contour map of
$\imaginary\mathopen{}\left[\frac{\textrm{d}S_{V}}{\textrm{d}t}\right]\mathclose{}$
is clearly the place to look. We show this contour map in Fig. 3, with the
zero contour of (14b) highlighted as a thick black curve, clearly showing the
avoided-crossing structure of Fig. 2b that we want to model. More importantly,
this plot shows us our second key insight: there is indeed a nontrivial object
separating the two quantum orbits, in the form of a saddle point in this
contour map. This is the central object we are after: the harmonic-cutoff time
$t_{\mathrm{hc}}$ for this model. Like all saddle points, this can be found as
a zero of the derivative of the contour map’s original function, or in other
words, via
$\frac{\textrm{d}^{2}S_{V}}{\textrm{d}t^{2}}(t_{\mathrm{hc}})=0.$ (15)
In the particular case of our model action (14a), for which
$\frac{\textrm{d}^{2}S_{V}}{\textrm{d}t^{2}}=2A\,t$, this saddle point lies at
the origin, due to the explicit choice of center of symmetry made by setting
$t_{s,1}=-t_{s,2}$ above. In the general case, (15) will find the centerpoint
between
$\imaginary\mathopen{}\left[\frac{\textrm{d}S_{V}}{\textrm{d}t}\right]\mathclose{}$
contours whenever they approach each other.
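In practice, (15) can be solved with a standard complex Newton iteration on the second derivative. The following is a minimal Python sketch; the callables `d2S` and `d3S`, and all numerical values, are illustrative assumptions rather than part of any released implementation:

```python
import numpy as np

# Minimal sketch: solve Eq. (15), d^2 S_V/dt^2 (t_hc) = 0, by complex Newton
# iteration; d2S and d3S are assumed callables (illustrative names).
def find_t_hc(d2S, d3S, t0, tol=1e-12, max_iter=50):
    t = complex(t0)
    for _ in range(max_iter):
        step = d2S(t) / d3S(t)     # Newton step for the zero of d2S
        t -= step
        if abs(step) < tol:
            return t
    raise RuntimeError("Newton iteration did not converge")

# For the model action (10), d2S = 2At and d3S = 2A, so t_hc = 0 exactly:
A = np.exp(1j * np.deg2rad(10.0))
print(find_t_hc(lambda t: 2 * A * t, lambda t: 2 * A, t0=0.3 + 0.2j))
```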
The appearance of a saddle point at the midpoint between contour lines in
close proximity is a generic feature of contour landscapes. Similar structures
have been explored previously Pisanty2016slalom ; Keil2016 , in the context of
tunnel ionization.
### I.3 The landscape at the harmonic-cutoff point
Having found the saddle point, we can now use it directly, since we can fully
reconstruct the model action $S_{V}(t)$ using only its behaviour at the
center, by taking derivatives:
$\frac{\textrm{d}S_{V}}{\textrm{d}t}(t_{\mathrm{hc}})=\Omega_{\mathrm{hc}}=\Omega_{c}+i\eta,$ (16a)
$\frac{\textrm{d}^{3}S_{V}}{\textrm{d}t^{3}}(t_{\mathrm{hc}})=2A,$ (16b)
with the second derivative vanishing by construction.
---
Figure 4: Contour map of $\imaginary\mathopen{}\left[\frac{\partial
S_{V}}{\partial t}(t,t_{s}^{\prime}(t))\right]$ over the complex recollision-
time plane for the full Volkov action from (2b). The saddle-point trajectories
of Fig. 1 appear clearly as the zero contour (thick lines), i.e. as the
complex times where $\frac{\partial S_{V}}{\partial t}(t,t_{s}^{\prime}(t))$
is real-valued. The harmonic-cutoff points of Fig. 1c (triangles) are,
correspondingly, the saddle points of this derivative; the coloured lines show
the contours at these saddle points.
We begin by focusing on the first derivative, which gives the linear term in
the action, whose coefficient is directly connected to the harmonic frequency
via $At_{s}^{2}=\Omega-(\Omega_{c}+i\eta)$ as set above. In particular, this
term directly encodes the saddle-point solutions, and we can recover these
explicitly by inverting that relationship:
$t_{s}=\pm\sqrt{\frac{\Omega-(\Omega_{c}+i\eta)}{A}}.$ (17)
Here the square in $At_{s}^{2}$ implies that $t_{s}$ appears as an explicit
square root, with the sign ambiguity giving us the two saddle-point solutions
of the problem.
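To make this concrete, the two branches of (17) can be traced numerically over a grid of real harmonic frequencies; the following is a minimal Python sketch, with all parameter values chosen purely for illustration:

```python
import numpy as np

# Minimal sketch of the model quantum orbits of Eq. (17),
# t_s = +-sqrt((Omega - Omega_hc)/A), traced over real Omega.
# All parameter values below are illustrative assumptions.
A = np.exp(1j * np.deg2rad(10.0))      # cubic coefficient, as in Fig. 3
Omega_c, eta = 3.17, -1.0              # real cutoff and its imaginary part
Omega_hc = Omega_c + 1j * eta          # complex harmonic-cutoff energy

Omega = np.linspace(0.0, 2 * Omega_c, 500)
root = np.sqrt((Omega - Omega_hc) / A)   # principal branch of the square root
t_plus, t_minus = root, -root            # the two saddle-point branches

# For eta != 0 the branches perform an avoided crossing near Omega = Omega_c:
gap = np.min(np.abs(t_plus - t_minus))
print(f"closest approach of the two branches: {gap:.3f}")
```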
This is also a key structural insight, which has largely remained unobserved
in the literature: the various solutions of saddle-point equations are simply
different branches of a unified Riemann surface, examined at the real axis of
the target space. This observation is made explicit in the branch choice given
by the $\pm$ sign in (17), but the same is true in the full problem: if we
approach the coupled equations in (6) by solving (6b) first for
$t_{s}^{\prime}=t_{s}^{\prime}(t)$, then (6a) can be expressed in the form
$\frac{\partial S_{V}}{\partial
t}(t,t_{s}^{\prime}(t))=\frac{1}{2}\left(\mathbf{p}_{s}(t,t_{s}^{\prime}(t))+\mathbf{A}(t)\right)^{2}+I_{p}=\Omega,$
(18)
where it explicitly involves the inverse of the many-to-one, complex-valued
function $\frac{\partial S_{V}}{\partial t}(t,t_{s}^{\prime}(t))$, and this is
precisely the type of problem encoded by Riemann surfaces. (We explore this
perspective in detail in Section I.5 below.)
Moreover, as far as the full HHG problem is concerned, our insight in (14b)
above that the saddle-point trajectories are simply contour lines of a
derivative of the action remains unchanged, with the obvious alterations to
$\imaginary\mathopen{}\left[\frac{\partial S_{V}}{\partial
t}(t,t_{s}^{\prime}(t))\right]\mathclose{}=0.$ (19)
We exemplify this directly in Fig. 4, showing the contour map of
$\imaginary\mathopen{}\left[\frac{\partial S_{V}}{\partial
t}(t,t_{s}^{\prime}(t))\right]\mathclose{}$ for the full HHG problem as in
Fig. 1, and there the zero contour lines (highlighted in black) are indeed the
curves followed by the typical quantum orbits shown in Fig. 1c. Similarly, the
harmonic-cutoff times $t_{\mathrm{hc}}$ used there are found as the saddle
points of the $\frac{\partial S_{V}}{\partial t}(t,t_{s}^{\prime}(t))$
landscape in Fig. 4.
That said, the quantum-orbit curves of Fig. 4 are not simply unstructured
curves, as they have an explicit parametrization in terms of the harmonic
frequency $\Omega$, but the same is true for the contours of
$\imaginary\mathopen{}\left[\frac{\textrm{d}S_{V}}{\textrm{d}t}\right]\mathclose{}$,
which can also be seen as parametrized by the real part of the function. This
brings physical content to the first of the derivatives we found in (16),
where the real part of $\frac{\textrm{d}S_{V}}{\textrm{d}t}(t_{\mathrm{hc}})$
is simply the frequency offset $\Omega_{c}$. This offset is carried from the
saddle point $t_{\mathrm{hc}}$ to the quantum-orbit lines by means of the
contour lines of
$\real\mathopen{}\left[\frac{\textrm{d}S_{V}}{\textrm{d}t}\right]\mathclose{}$.
These are shown dot-dashed in Fig. 4, and they clearly intersect the quantum-
orbit tracks at the point where they are closest to each other, and thus at
the transition itself. This is then what allows us to identify
$\Omega_{c}=\real\mathopen{}\left[\frac{\textrm{d}S_{V}}{\textrm{d}t}(t_{\mathrm{hc}})\right]\mathclose{}$ (20a)
as the frequency of the harmonic cutoff.
Having made this leap, we must now confront the fact that
$\frac{\textrm{d}S_{V}}{\textrm{d}t}(t_{\mathrm{hc}})$ as given in (16a) also
has a nonzero imaginary part,
$\eta=\imaginary\mathopen{}\left[\frac{\textrm{d}S_{V}}{\textrm{d}t}(t_{\mathrm{hc}})\right]\mathclose{},$
(20b)
which similarly needs to be addressed. This imaginary part has already played
a role, and it is explicitly shown in Fig. 3 as the height of the
$\frac{\textrm{d}S_{V}}{\textrm{d}t}$ saddle with respect to the zero contour
where the quantum orbits lie. As such, the imaginary part $\eta$ of the
harmonic cutoff, defined by (16a), controls how closely the two quantum orbits
approach each other at the cutoff, and thus how distinguishable they are. As
we shall see in Section II.1 below, this distinguishability further controls
the strength of quantum-path interference between them.
Moreover, the sign of $\eta$ controls the direction in which the quantum
orbits turn when they approach each other, and thus which of the two post-
cutoff evanescent solutions, at positive and negative imaginary time,
corresponds to the ‘short’ and ‘long’ trajectory to the left and right of
$t_{\mathrm{hc}}$. Generally, this imaginary part of
$\frac{\textrm{d}S_{V}}{\textrm{d}t}$ is fixed by the problem and it can be
either positive or negative, but, as we shall see in Section IV, sign changes
in $\eta$ are generic behaviour when the driving pulse shape changes depending
on one or more parameters, which implies reconnections and changes in topology
for the curves traced out by the quantum orbits. This further emphasizes the
fact that the different saddle points are essentially one and the same object,
corresponding only to different branches of one unified Riemann surface.
However, this point can be observed more cleanly, by simply allowing the
harmonic frequency $\Omega$ to acquire complex values, i.e., by considering
the analytical continuation of $D(\Omega)$. Doing this amounts to directly
affecting the value of $\eta$ in (14a), and if $\Omega$ is taken at the
complex harmonic cutoff itself, $\Omega_{c}+i\eta$, then the constant term in
the saddle-point equation vanishes, leaving a double zero of the form
$At^{2}=0$ (21)
at $t=t_{\mathrm{hc}}$. In other words, the harmonic-cutoff times
$t_{\mathrm{hc}}$ are the locations where the saddle points for the two
quantum orbits fully coalesce into a single point.
### I.4 The orientation of the harmonic-cutoff saddle and quantum-orbit
classification
Having examined the first derivative of $S_{V}$ at the harmonic-cutoff time,
we now turn to the role of the third derivative, given in (16b). This term
gives the coefficient of the quadratic term in
$\frac{\textrm{d}S_{V}}{\textrm{d}t}$ and, as such, it controls the
orientation of the saddle in $\frac{\textrm{d}S_{V}}{\textrm{d}t}$ at
$t_{\mathrm{hc}}$, as shown in Fig. 3. (Indeed, we chose a nonzero phase for
$A=e^{10^{\circ}i}$ for that configuration to
emphasize that this saddle need not be neatly oriented, as observed in Fig.
4.) Retrieving this orientation is essential in order to use our harmonic-
cutoff times $t_{\mathrm{hc}}$ to classify the saddle points into quantum
orbits: in order to turn the clouds of points in Fig. 1b into the ordered
curves of Fig. 1c, we also require the direction of the separatrix that goes
through the missed approach.
To obtain this direction, we look for the vector that goes from
$t_{\mathrm{hc}}$ to the closest point on the
$\imaginary\mathopen{}\left[\frac{\textrm{d}S_{V}}{\textrm{d}t}\right]\mathclose{}=0$
contour that the quantum orbits follow. As we noted above, this point occurs
at $\Omega=\Omega_{c}$, which means that we simply need to solve the saddle-
point equation (14a) at that frequency, i.e.,
$\displaystyle At_{\mathrm{sep}}^{2}+i\eta=0\ \implies\
t_{\mathrm{sep}}=\sqrt{-i\eta/A},$ (22)
or, in terms of the explicit derivatives of the action and with an explicit
relation to the saddle center $t_{\mathrm{hc}}$,
$\displaystyle\delta
t_{\mathrm{sep}}=t_{\mathrm{sep}}-t_{\mathrm{hc}}=\sqrt{-2i\frac{\imaginary\mathopen{}\left[\displaystyle\frac{\textrm{d}S_{V}}{\textrm{d}t}(t_{\mathrm{hc}})\right]\mathclose{}}{\displaystyle\frac{\textrm{d}^{3}S_{V}}{\textrm{d}t^{3}}(t_{\mathrm{hc}})}}.$
(23)
We can then write the explicit condition for the separatrix by treating
$\delta t_{\mathrm{sep}}$ and $t_{s}-t_{\mathrm{hc}}$ as vectors and asking
for the sign of their inner product. Thus, the criterion that separates the two
quantum orbits on either side of a given harmonic-cutoff time
$t_{\mathrm{hc}}$ is the sign comparison
$\real\mathopen{}\Big{[}(t_{s}-t_{\mathrm{hc}})^{*}\,\delta
t_{\mathrm{sep}}\Big{]}\mathclose{}\lessgtr 0,$ (24)
with $t_{\mathrm{hc}}$ defined as in (15) and $\delta t_{\mathrm{sep}}$
defined as in (23).
In a practical calculation, there will typically be multiple harmonic-cutoff
times, alternating between near-threshold harmonic frequencies, where the
quantum orbits first appear, and high-frequency cutoffs. To complete our
classification scheme – shown as the background coloured zones in Fig. 1c,
which we repeat in some additional detail in Fig. 5 – we separate the complex
time plane into strips using the mid-points between the real parts of the
successive harmonic-cutoff times, and then divide each strip in two using the
criterion in (24).
Here it is important to note that the definition in (23) contains a sign
ambiguity coming from the choice of sign for the square root. In practice, the
principal branch of the square root tends to work well for most cases, but the
branch choice there does occasionally require dedicated attention,
particularly for the first low-energy harmonic cutoff at the start of the
series.
The results of our classification procedure are shown in Fig. 5: the
separatrices obtained from the sign-comparison criterion (24) break up the
complex plane into trapezoids that contain one and only one quantum orbit
each, and they can thus be used to classify the saddle-point solutions into
well-defined families in a uniform and robust fashion.
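For concreteness, the following is a minimal Python sketch of this classification procedure; the harmonic-cutoff times and separatrix directions are assumed to come from solvers for (15) and (23), and the values in the demonstration call are purely illustrative:

```python
import numpy as np

# Minimal sketch of the classification scheme: saddles are binned into
# vertical strips at the midpoints between the real parts of the t_hc,
# and each strip is split in two by the sign test of Eq. (24).
def classify(saddles, t_hc_list, dt_sep_list):
    t_hc = np.asarray(t_hc_list, dtype=complex)
    order = np.argsort(t_hc.real)
    t_hc = t_hc[order]
    dt_sep = np.asarray(dt_sep_list, dtype=complex)[order]
    edges = 0.5 * (t_hc.real[:-1] + t_hc.real[1:])   # strip boundaries
    labels = []
    for t_s in saddles:
        strip = int(np.searchsorted(edges, t_s.real))
        # Eq. (24): the sign of Re[(t_s - t_hc)* dt_sep] picks the side
        side = np.real(np.conj(t_s - t_hc[strip]) * dt_sep[strip]) > 0
        labels.append(2 * strip + int(side))         # quantum-orbit index
    return labels

# Illustrative values only: two strips, two saddles on opposite sides
print(classify(saddles=[0.8 + 0.2j, 1.2 - 0.2j],
               t_hc_list=[1.0 + 0.05j, 3.0 + 0.05j],
               dt_sep_list=[0.2j, 0.2j]))            # -> [1, 0]
```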
---
Figure 5: Results of our saddle-point classification procedure, as shown
previously in Fig. 1c. We find harmonic-cutoff times $t_{\mathrm{hc}}$
(triangles) as solutions of (15), with the separatrix through each
$t_{\mathrm{hc}}$ given by the sign comparison in (24) with $\delta
t_{\mathrm{sep}}$ defined in (23); we apply this criterion on vertical strips
obtained by taking the midpoints between the real parts of the
$t_{\mathrm{hc}}$. This then defines clear regions occupied by each of the
quantum orbits, which can be unambiguously labelled.
---
3D Figure 1: Topology of the Riemann surface of the quantum orbits.
This is the manifold
$\mathcal{S}=\{(t,\Omega)\in\mathbb{C}^{2}:\Omega=\frac{\textrm{d}S_{V}}{\textrm{d}t}(t)\}$,
which has complex dimension 1 and is embedded in the space $\mathbb{C}^{2}$,
with real dimension 4, so we can only plot it by projecting out one component:
we show its projections to $\real(\omega t)$ in (a, b) and to
$\imaginary(\omega t)$ in (c-f). The projection to $\real(\omega t)$ shows the
topology as a single sheet which wraps around itself, covering the
$\Omega\in\mathbb{C}$ plane multiple times when the $\real(\omega t)$
coordinate is projected out, so that $\frac{\textrm{d}S_{V}}{\textrm{d}t}$ has
a multi-valued inverse. To separate this multi-valued inverse into valid
single-valued functions, the sheet needs to be cut into separate branches:
this is the role of the separation into coloured regions from the
classification scheme in Fig. 5, which carry over to the multiple coloured
sections in (a). The sheets connect to each other via square-root-type branch
cuts, and we show a detail of the first such connection in (b). The projection
onto $\imaginary(\omega t)$ forms a self-intersecting surface which loops
around itself multiple times, so we plot each of the branches separately (with
half of each of its neighbours shown half-transparently) in (c-f). Interactive
versions and 3D-printable models of these plots are available at
imaginary-harmonic-cutoff.github.io SupplementaryMaterial .
### I.5 The quantum-orbit Riemann surface
We now return to an observation we made earlier: the saddle-point equations
for HHG represent the inverse of a many-to-one analytical function,
$\frac{\partial S_{V}}{\partial t}$ as given in (18), evaluated on the real
axis in $\Omega$, and this is precisely the definition of a Riemann surface.
As such, it is pertinent to study this Riemann surface as a whole, since its
topology and geometry are the key factors that govern the quantum-orbit
classification.
The Riemann surface here is the set
$\mathcal{S}=\left\{(t,\Omega)\in\mathbb{C}^{2}:\Omega=\frac{\textrm{d}S_{V}}{\textrm{d}t}(t)\right\},$ (25)
and it forms a manifold of complex dimension 1 (i.e., a manifold where each
point has a neighbourhood homeomorphic with the complex unit disk) embedded in
a space of complex dimension 2 (and therefore of real dimension 4). This space
is too big to be visualized directly, so we approach it by projecting down to
the real and imaginary parts of $t$, i.e., by projecting the surface to the
spaces $(\real(\Omega),\imaginary(\Omega),\real(\omega t))$ and
$(\real(\Omega),\imaginary(\Omega),\imaginary(\omega t))$, which we show in 3D
Fig. 1.
The topology is most clearly displayed in the $\real(\omega t)$ projection,
shown in 3D Fig. 1(a, b): the Riemann surface consists of a series of connected
sheets which connect sequentially to each other, one by one, as $\real(\omega
t)$ increases. In essence, the topology and the overall geometry here are
those of the Riemann surface of $\sin(z)$, with the sheets connected pairwise
by square-root branch points (locally homeomorphic to the quadratic model
discussed in Section I.2), as shown in detail in 3D Fig. 1(b).
This Riemann surface carries an image of the $\omega t$ complex plane (so, in
particular, the mesh on the surface represents a square grid on that plane),
but it forms a multiple cover of the image plane $\Omega\in\mathbb{C}$. To get
the inverse, then, the surface must be split into separate branches, and this
is precisely the role of the coloured regions shown in Fig. 5 as produced by
our saddle-point classification algorithm: this colouring is retained in the
Riemann surface as displayed in 3D Fig. 1, and each individual region, when
projected down to the $\Omega$ plane, forms a single full cover of the complex
plane. (This single-cover property is approximate, as there is still a small
amount of double-cover overlap in regions next to the separatrix when away
from the branch points at the $t_{\mathrm{hc}}$. This can be fixed if
necessary, but it is not central to our argument here.) In other words, each
of the coloured regions in Fig. 5 forms the image of a single-valued branch of
the inverse of $\frac{\textrm{d}S_{V}}{\textrm{d}t}(t)$.
The imaginary-part projection onto $\imaginary(\omega t)$ is slightly harder
to represent and visualize, as the corresponding surface folds back on itself,
with multiple self-intersections. However, the separation of this surface into
individual branches also solves this problem: when plotting one branch at a
time, as we show in 3D Fig. 1(c–f), the self-intersections disappear – as they
must if the surface represents a single-valued function – and they are only
present as intersections between different branches.
Physically, the most important part of this Riemann surface is its
intersection with the $\Omega\in\mathbb{R}^{+}$ positive real axis in the
$\Omega$ plane, which, as discussed in Section I.3 above, forms the quantum
orbits themselves. These are shown highlighted as curves in 3D Fig. 1, and
their role here is most clearly apparent in 3D Fig. 1(a): this intersection
forms the $(\real(\omega t),\real(\Omega))$ energy-time mapping for the
quantum orbits, a well-known (and deeply physical) part of the theory
Ivanov2014 ; Nayak2019 . The quantum-orbit Riemann surface, together with its
topology, is nothing more than the analytical continuation of this standard
energy-time mapping.
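For the model action, this surface is also trivial to sample numerically, as it is just the image of a complex-$t$ grid under $t\mapsto\frac{\textrm{d}S_{V}}{\textrm{d}t}(t)$; the following minimal Python sketch uses illustrative parameter values:

```python
import numpy as np

# Minimal sketch: sample the Riemann surface of Eq. (25) for the model
# action (10), i.e. the image of a complex-t grid under
# Omega(t) = dS_V/dt = A t^2 + Omega_c + i eta.  Values are illustrative.
A, Omega_c, eta = np.exp(1j * np.deg2rad(10.0)), 3.17, -1.0
re, im = np.meshgrid(np.linspace(-3, 3, 201), np.linspace(-2, 2, 201))
t = re + 1j * im
Omega = A * t**2 + (Omega_c + 1j * eta)

# The projections plotted in 3D Fig. 1 are (Re Omega, Im Omega, Re t) and
# (Re Omega, Im Omega, Im t); the slice Im(Omega) ~ 0 with Re(Omega) > 0
# is where the physical quantum orbits live.
on_orbit = np.abs(Omega.imag) < 2e-2
print(f"{on_orbit.sum()} grid points lie near the quantum-orbit slice")
```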
### I.6 Derivatives of the action in the two-variable case
Before moving on, it is important to define in more detail the relationship
between our simple model and the full-blown HHG integral, whose action has two
variables instead of one. This is possible, as mentioned above, by using (6b)
to define $t_{s}^{\prime}=t_{s}^{\prime}(t)$ and with that the single-variable
$S_{V}(t)=S_{V}(t,t_{s}^{\prime}(t))$, but that only works explicitly for the
values of the action, and its derivatives need to be considered more
carefully.
The first derivative is not affected, since
$\frac{\textrm{d}S_{V}}{\textrm{d}t}(t)=\frac{\textrm{d}}{\textrm{d}t}S_{V}(t,t_{s}^{\prime}(t))=\frac{\partial S_{V}}{\partial t}(t,t_{s}^{\prime}(t))+\frac{\textrm{d}t_{s}^{\prime}}{\textrm{d}t}(t)\frac{\partial S_{V}}{\partial t^{\prime}}(t,t_{s}^{\prime}(t)),$ (26)
and here the partial derivative in the second term, $\frac{\partial
S_{V}}{\partial t^{\prime}}(t,t_{s}^{\prime}(t))$, vanishes by the definition
of $t_{s}^{\prime}(t)$. However, if we now turn to the second derivative, the
procedure no longer works, and the equivalent calculation,
$\frac{\textrm{d}^{2}S_{V}}{\textrm{d}t^{2}}(t)=\frac{\textrm{d}}{\textrm{d}t}\frac{\partial S_{V}}{\partial t}(t,t_{s}^{\prime}(t))=\frac{\partial^{2}S_{V}}{\partial t^{2}}(t,t_{s}^{\prime}(t))+\frac{\textrm{d}t_{s}^{\prime}}{\textrm{d}t}(t)\frac{\partial^{2}S_{V}}{\partial t\,\partial t^{\prime}}(t,t_{s}^{\prime}(t)),$ (27)
returns a term that includes
$\frac{\textrm{d}t_{s}^{\prime}}{\textrm{d}t}(t)$. This cannot be evaluated
explicitly within this system, as $t_{s}^{\prime}(t)$ itself is only defined
implicitly.
To resolve this, we use the implicit definition of $t_{s}^{\prime}(t)$,
$\frac{\partial S_{V}}{\partial t^{\prime}}(t,t_{s}^{\prime}(t))\equiv 0,$
(28)
and then differentiate with respect to $t$, to obtain
$\displaystyle\frac{\partial^{2}S_{V}}{\partial t\,\partial
t^{\prime}}(t,t_{s}^{\prime}(t))+\frac{\textrm{d}t_{s}^{\prime}}{\textrm{d}t}(t)\frac{\partial^{2}S_{V}}{\partial
t^{\prime 2}}(t,t_{s}^{\prime}(t))\equiv 0,$ (29)
which can then be solved for the implicit derivative as
$\displaystyle\frac{\textrm{d}t_{s}^{\prime}}{\textrm{d}t}(t)=-\frac{\partial^{2}S_{V}}{\partial
t\,\partial
t^{\prime}}(t,t_{s}^{\prime}(t))\bigg{/}\frac{\partial^{2}S_{V}}{\partial
t^{\prime 2}}(t,t_{s}^{\prime}(t)).$ (30)
Finally, this can be substituted into (27) to give the form
$\displaystyle\frac{\textrm{d}^{2}S_{V}}{\textrm{d}t^{2}}(t)$
$\displaystyle=\frac{\displaystyle\frac{\partial^{2}S_{V}}{\partial
t^{2}}\frac{\partial^{2}S_{V}}{\partial t^{\prime
2}}-\left(\frac{\partial^{2}S_{V}}{\partial t\,\partial
t^{\prime}}\right)^{2}}{\displaystyle\frac{\partial^{2}S_{V}}{\partial
t^{\prime 2}}}.$ (31)
Here we have dropped the right-hand side evaluations at
$(t,t_{s}^{\prime}(t))$ for simplicity, but it is also possible to understand
the right-hand side as a function of $(t,t^{\prime})$ as independent
variables, whose zeros are to be found in conjunction with those of (6b), and
this is an easier approach in practice.
Finally, to obtain the higher-order derivatives (in particular, the third
derivative, which gives the cubic coefficient $A$) we use symbolic computation
RBSFA ; FigureMaker , using the substitution rule (30) at each step to retain
explicit formulations.
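As an indicative cross-check of this substitution chain, the constrained second derivative (31) can be verified symbolically on a toy action whose constraint (28) is explicitly solvable; the following is a minimal sympy sketch, where the polynomial action is an arbitrary illustrative choice rather than a physical one:

```python
import sympy as sp

# Minimal sketch: check Eq. (31) on an arbitrary polynomial action S(t, tp)
# whose constraint dS/dtp = 0 can be solved explicitly for tp(t).
t, tp = sp.symbols('t tp')
S = t**3 - 3*t**2*tp + 2*t*tp**2 + tp**3          # illustrative action

St, Stp = sp.diff(S, t), sp.diff(S, tp)
Stt, Stptp, Sttp = sp.diff(St, t), sp.diff(Stp, tp), sp.diff(St, tp)

rhs31 = (Stt * Stptp - Sttp**2) / Stptp           # Eq. (31): constrained d2S/dt2

branch = sp.solve(Stp, tp)[1]                     # one branch of tp(t)
direct = sp.diff(S.subs(tp, branch), t, 2)        # brute-force d2/dt2 on-shell

diff_expr = direct - rhs31.subs(tp, branch)
print(diff_expr.subs(t, 2).evalf())               # -> 0 (to machine precision)
```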
### I.7 The threshold harmonics for linear fields
For a general driving field, the harmonic cutoffs given by our definition will
appear at both ends of the harmonic plateau, oscillating between low
frequencies near the ionization threshold and high energies at the classical
cutoff. In general, moreover, all of these harmonic cutoffs will have nonzero
imaginary parts, corresponding to missed approaches at a finite distance
between the quantum orbits involved. However, as can be seen in Fig. 4 and
Fig. 1c, this is not the case for the linearly-polarized field we use above,
since half of the harmonic-cutoff saddles $t_{\mathrm{hc}}$ lie directly on
the quantum-orbit line, i.e., the quantum orbits have a full crossing there.
This behaviour is generic to all linearly-polarized fields, whether
monochromatic or not, and it implies that the low-energy cutoffs always lie at
exactly $\Omega_{c}+i\eta=I_{p}$, and the threshold harmonics at
$\Omega=I_{p}$ always involve an exact saddle coalescence. In other words, for
threshold harmonics, the saddles of the action landscape are not Gaussian:
instead, they are exact monkey saddles, as shown in Fig. 7a below; this was
noticed as early as Ref. Lewenstein1994, but has not drawn much attention
since then. (This is largely because the approximations that produced the SFA
integral are known to be inaccurate for the lower plateau and threshold
harmonics.)
To understand this behaviour, we return to the saddle-point equations (6),
which for linearly-polarized fields can be rephrased in the simpler, scalar
form
$\frac{\partial S_{V}}{\partial t}=\frac{1}{2}\left(p_{s}(t,t^{\prime})+A(t)\right)^{2}+I_{p}=\Omega,$ (32a)
$-\frac{\partial S_{V}}{\partial t^{\prime}}=\frac{1}{2}\left(p_{s}(t,t^{\prime})+A(t^{\prime})\right)^{2}+I_{p}=0.$ (32b)
Here the first equation is the crucial one, since at $\Omega=I_{p}$ it tells
us that
$\frac{1}{2}v_{r}(t,t^{\prime})^{2}=\frac{1}{2}\left(p_{s}(t,t^{\prime})+A(t)\right)^{2}=0,$
(33)
i.e., the recollision velocity $v_{r}(t,t^{\prime})=p_{s}(t,t^{\prime})+A(t)$
vanishes exactly, with a double zero which still vanishes after taking one
derivative.
This becomes particularly important when we consider the second derivative of
the action, (31), whose zeros determine the harmonic-cutoff times. The partial
derivatives involved in (31) can be evaluated by differentiating (32a) with
respect to $t$ and $t^{\prime}$, which yields
$\frac{\partial^{2}S_{V}}{\partial t^{2}}=\left(p_{s}+A(t)\right)\left[\frac{\partial p_{s}}{\partial t}+\dot{A}(t)\right],$ (34a)
$\frac{\partial^{2}S_{V}}{\partial t\,\partial t^{\prime}}=\left(p_{s}+A(t)\right)\frac{\partial p_{s}}{\partial t^{\prime}}.$ (34b)
Both of these derivatives share a common factor of
$v_{r}(t,t^{\prime})=p_{s}(t,t^{\prime})+A(t)$, and this then means that the
second derivative $\frac{\textrm{d}^{2}S_{V}}{\textrm{d}t^{2}}(t)$ will always
vanish when evaluated at solutions of the first-order saddle-point equations
(32) at $\Omega=I_{p}$, completing the proof.
It is important to note that, when a saddle-point coalescence like this
occurs, the classification scheme embodied by our test in (24) breaks down
since, at the double zero,
$\frac{\textrm{d}S_{V}}{\textrm{d}t}(t_{\mathrm{hc}})$ also vanishes, so the
direction vector $\delta t_{\mathrm{sep}}$ from (23) also vanishes. In
practice, this can cause numerical instability, with
$\imaginary\mathopen{}\left[\frac{\textrm{d}S_{V}}{\textrm{d}t}(t_{\mathrm{hc}})\right]\mathclose{}$
evaluating to machine precision with a fluctuating sign, and this needs to be
handled explicitly – which could be as simple as assigning an artificial
nonzero
$\imaginary\mathopen{}\left[\frac{\textrm{d}S_{V}}{\textrm{d}t}(t_{\mathrm{hc}})\right]\mathclose{}$
to be used there, as either direction will work well for the separatrix. (For
the plots in Fig. 4 and Fig. 1c, we used a small ellipticity of
$\epsilon=0.01\%$ to break the degeneracy and avoid this instability.)
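In code, this guard can be as simple as clamping $\eta$ away from zero before forming $\delta t_{\mathrm{sep}}$; a minimal sketch follows, where the threshold and the function name are illustrative:

```python
# Minimal sketch: pin eta = Im[dS_V/dt(t_hc)] away from machine-precision
# noise before it is used in Eq. (23); either sign works for the separatrix.
def safe_eta(eta, scale, rel_tol=1e-10):
    if abs(eta) < rel_tol * abs(scale):
        return rel_tol * abs(scale)    # artificial nonzero imaginary part
    return eta

print(safe_eta(1e-17, scale=1.0), safe_eta(-0.2, scale=1.0))  # -> 1e-10 -0.2
```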
At an exact coalescence, it is impossible to classify the saddles in a unique
way: two saddles come in and two saddles come out, but there is no unique way
to match either of the outgoing saddles with either of the incoming ones. This
happens for all linearly-polarized fields at the $\Omega=I_{p}$ threshold, but
it can also happen at the harmonic cutoff in isolated cases when the field
depends on a variable parameter, as we will exemplify in Section IV below.
## II The Harmonic-Cutoff Approximation
Having examined the structure of the action at the harmonic-cutoff times
$t_{\mathrm{hc}}$, in this section we will explore the behaviour of the
oscillatory integral around it, which we will employ to build the Harmonic-
Cutoff Approximation (HCA), an efficient method for estimating the harmonic
dipole around the cutoff using only information at $t_{\mathrm{hc}}$ itself.
### II.1 Airy-function representation for the model case
To do this, we return to the model integral from (12), taking as the object of
interest the integral of the exponential of our model action (10),
$\displaystyle
D(\Omega)=\int_{C}e^{-\frac{i}{3}At^{3}+i(\Omega-\Omega_{\mathrm{hc}})t}\,\textrm{d}t,$
(35)
over an integration contour $C$ which should start at the ionization time
$t^{\prime}$ (or some suitably compatible valley of $\imaginary(S)$ to the
left of $t_{\mathrm{hc}}$) and end at $t\to\infty$. This integral is
essentially in Airy form, and it is almost structurally identical to the Airy
function’s integral representation (NIST_handbook, Eq. (9.5.E4)),
$\operatorname{Ai}(z)=\frac{1}{2\pi i}\int_{\infty e^{-i\pi/3}}^{\infty
e^{i\pi/3}}\exp(\tfrac{1}{3}t^{3}-zt)\textrm{d}t.$ (36)
This is expected, since the harmonic cutoff is a ‘fold’ catastrophe, whose
associated diffraction-catastrophe integral is the Airy function Berry1980 .
To bring our representation (35) into the canonical form (36), the core
transformation is to eliminate the coefficient in front of the cubic term,
$-\frac{i}{3}At^{3}\longmapsto\frac{1}{3}\tilde{t}\,^{3}.$ (37)
In a sense, this is relatively simple, as it boils down to a change in
integration variable to $\tilde{t}=iA^{1/3}\,t,$ but there is an added
complication in that the radical $A^{1/3}$ of the cubic coefficient admits
three separate branches,
$\tilde{t}=ie^{2\pi ik/3}A^{1/3}\,t,$ (38)
for $k\in\\{0,1,2\\}$, which requires dedicated attention.
The variable change itself is essentially trivial, and (35) transforms under
(38) to
$D(\Omega)=\frac{e^{-2\pi ik/3}}{iA^{1/3}}\int_{\tilde{C}}\exp\mathopen{}\left(\frac{1}{3}\tilde{t}\,^{3}-\frac{\Omega_{\mathrm{hc}}-\Omega}{e^{2\pi ik/3}A^{1/3}}\tilde{t}\right)\mathclose{}\,\textrm{d}\tilde{t},$ (39a)
with $\tilde{C}=ie^{2\pi ik/3}A^{1/3}\,C.$ (39b)
This is essentially in explicit Airy form, so long as the contour is correct,
and thus we can substitute in the Airy function as
$D(\Omega)=\frac{2\pi}{e^{2\pi ik/3}A^{1/3}}\operatorname{Ai}\mathopen{}\left(\frac{\Omega_{\mathrm{hc}}-\Omega}{e^{2\pi ik/3}A^{1/3}}\right)\mathclose{},$ (40)
where $k$ needs to be chosen such that the altered contour $\tilde{C}=ie^{2\pi
ik/3}A^{1/3}\,C$ is compatible with the standard contour in (36), which we
depict in Fig. 6.
---
Figure 6: Contour map of the exponent $\real(\tfrac{1}{3}t^{3}-zt)$ of the
integral representation of the Airy function, (36), with the standard contour
shown as the black arrows, for (a) coalescent saddles, (b) the oscillatory
regime, and (c) the evanescent part. The details of the contour can be altered
as required (such as e.g. to pass through the two saddle points in (b) along
steepest-descent contours), under the constraints that the contour start at
infinity at $-\tfrac{\pi}{2}<\arg(t)<-\tfrac{\pi}{6}$ and end at
$\tfrac{\pi}{6}<\arg(t)<\tfrac{\pi}{2}$, i.e., that it go down the valleys
that surround $e^{-i\pi/3}$ and $e^{i\pi/3}$.
---
Figure 7: Contour map of $\imaginary[S(t,t_{s}^{\prime}(t))-\Omega t]$,
presented as in Fig. 4, taking a complex $\Omega$ equal to $\Omega_{c}+i\eta$
at the various harmonic-cutoff times $t_{\mathrm{hc}}$, causing the nearby
saddles to fully coalesce, and marking a complete breakdown of the SPA. To
apply the Harmonic-Cutoff Approximation, each monkey saddle (marked by the
converging coloured contours) must be rotated by $\tilde{C}=ie^{2\pi
ik/3}A^{1/3}\,C$, choosing the branch index $k$ so that the integration
contour matches the canonical Airy contours of Fig. 6. The phase $\arg(A)$ is
inset in (b) and (c), and it equals $-87.97^{\circ}$, $-0.011^{\circ}$ and
$-0.155^{\circ}$, resp., for the three monkey saddles shown in (a),
consistently with their different orientations.
In principle, a rigorous approach to the contour-choice problem requires a
careful examination of the constraints on the contour of the original integral,
as shown in Fig. 7, to determine the correct integration contour $C$ that is
compatible with the original integration limits; this is then rotated by
$iA^{1/3}$ and any additional factors of $e^{2\pi ik/3}$ necessary for the
contour to be in canonical form. (Moreover, care must be taken when taking the
radical $A^{1/3}$ over a parameter scan, since, as mentioned in Fig. 7, $A$
can lie very close to commonly-used choices for the branch cut of the
radical.)
In practice, if there are known constraints on the behaviour of the integral
(in particular, exponential decay at $\Omega>\Omega_{c}$), there will
typically only be one choice of $k$ compatible with the constraints, which can
then be selected on physical grounds. As a general rule, exponential decay at
$\Omega>\Omega_{c}$ requires $e^{2\pi ik/3}A^{1/3}$ to have a negative real
part.
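Together with this branch-selection rule, the model result (40) amounts to only a few lines of code; the following is a minimal Python sketch using the Airy function from scipy, with all parameter values chosen purely for illustration:

```python
import numpy as np
from scipy.special import airy

# Minimal sketch of the model result, Eq. (40):
#   D(Omega) = (2 pi / c) Ai((Omega_hc - Omega)/c),  c = e^{2 pi i k/3} A^{1/3},
# choosing the branch k so that Re(c) < 0 (exponential decay past the cutoff).
def hca_model(Omega, Omega_hc, A):
    for k in range(3):
        c = np.exp(2j * np.pi * k / 3) * A**(1 / 3)   # principal cube root
        if c.real < 0:
            break
    z = (Omega_hc - Omega) / c
    Ai = airy(z)[0]            # airy() returns (Ai, Ai', Bi, Bi')
    return 2 * np.pi / c * Ai

# Illustrative parameters: oscillatory plateau below Omega_c, decay above it
Omega = np.linspace(0.0, 6.0, 400)
D = hca_model(Omega, Omega_hc=3.17 - 0.2j, A=np.exp(1j * np.deg2rad(10.0)))
print(np.abs(D[0]), np.abs(D[-1]))   # plateau-scale value vs. decayed tail
```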
Our result for the model integral, (40), now allows us to have a closer look
at the role of the imaginary part of the complex harmonic-cutoff energy
$\Omega_{\mathrm{hc}}$, which we introduced in (20b). As we mentioned then,
this imaginary part controls the strength of quantum-path interference between
the two quantum orbits that meet at the relevant cutoff, and we show this in
Fig. 8. The HCA approximates the integral (35) as an explicit Airy function in
$\Omega$ (red line), which transitions from oscillatory to exponential-decay
behaviour at $\Omega_{c}=\real(\Omega_{\mathrm{hc}})$.
However, in order for the interference fringes in the oscillatory regime to
have a full contrast and pass through the zeros of the Airy function, the
argument must be real, and this is generally impossible if
$\eta=\imaginary(\Omega_{\mathrm{hc}})$ is nonzero. To increase the
interference features, thus, we can artificially reduce (blue line) or
eliminate (green line) this imaginary part; conversely, increasing $\eta$
(purple and magenta lines) further damps the interference, and it introduces
exponential behaviour into the plateau. That said, to get full contrast in the
interference it is also necessary for the denominator to be real-valued, which
we show as the gray line by artificially adjusting it.
The nonzero imaginary parts of the constants in the argument of the Airy
function also make this approximation distinct from previous approaches that
regularize the cutoff in terms of single Airy functions Ivanov2014 ;
Frolov2009 ; Frolov2010 , making it better able to capture the QPI contrast in
the neighbourhood of the cutoff. On the other hand, our approximation sits one
level below the Uniform Approximation Figueira2002 ; Milosevic2002 ; Wong2001
(as well as the equivalent constructions in Refs. Frolov2012 ;
Sarantseva2013 ; Okajima2012 ), which expands the prefactor to subleading
order and is thus able to capture variations in the QPI contrast coming from
the prefactor’s effect on the saddles. That said, the HCA is, at least in
principle, strictly local to the cutoff, and this makes it especially suited
for efficient evaluation and clear analysis of the harmonic strength (as well
as phase properties Khokhlova2016 ) at the cutoff.
### II.2 The full HHG integral
We now return to the full two-dimensional integral for HHG, (5), and transform
it into the model form (35) so that the HCA can be used to estimate it. For
simplicity, we write the integral in explicit prefactor-action form,
$\displaystyle\mathbf{D}(\Omega)$
$\displaystyle=\int_{-\infty}^{\infty}\\!\\!\\!\textrm{d}t\int_{-\infty}^{t}\\!\\!\\!\textrm{d}t^{\prime}\
\mathbf{f}(t,t^{\prime})\,e^{-iS_{V}\\!(t,t^{\prime})+i\Omega t}$ (41a)
$\displaystyle\mathbf{f}(t,t^{\prime})$
$\displaystyle=\frac{(2\pi/i)^{3/2}}{(t-t^{\prime})^{3/2}}\mathbf{d}(\mathbf{p}_{s}(t,t^{\prime}){+}\mathbf{A}(t))\Upsilon(\mathbf{p}_{s}(t,t^{\prime}){+}\mathbf{A}(t^{\prime})).$
(41b)
To transform this into the model form (35), we need to reduce the integral to
a single dimension, and then translate the origin to the relevant harmonic-
cutoff time $t_{\mathrm{hc}}$. (However, it is important to point out that
the process is rather more general than this. As was pointed out in the
construction of the Uniform Approximation Figueira2002 (as well as its
earlier analogues in a semiclassical context Schomerus1997 ), the coordinate
separation done here is in essence an application of the splitting lemma of
catastrophe theory Poston1978 , which allows the ‘fold’ catastrophe encoded by
the Airy function to be isolated into a single coordinate axis. As such, the
simple model of Section I.2 is not a ‘toy’ model in any sense: instead, it is
a universal model, which is fully capable of capturing the (local) behaviour
of the integral.)
---
Figure 8: The effect of the imaginary part of the harmonic cutoff on
quantum-path interference. The red line shows the Airy-function dependence of the
Harmonic-Cutoff Approximation, as in (40), for $\Omega_{\mathrm{hc}}$ and $A$
corresponding to the first-return cutoff in Fig. 1. The coloured lines have
the imaginary part $\eta=\imaginary(\Omega_{\mathrm{hc}})$ artificially
reduced or amplified by a variable factor $r$, which respectively amplifies or
damps the QPI features; this corresponds to choosing a lower or higher
$\imaginary(\frac{\partial S_{V}}{\partial t})$ contour to follow in Fig. 3.
To obtain full interference, $r$ must be set to $0$, but the denominator must
also be real and negative, i.e., we replace $e^{2\pi ik/3}A^{1/3}$ with
$-|A|^{1/3}$. The black vertical line is at $\Omega=\Omega_{c}$.
The first is achieved, as mentioned earlier, by doing a saddle-point
approximation on the $t^{\prime}$ integral only, and this returns the harmonic
dipole in the form
$\displaystyle\mathbf{D}(\Omega)$
$\displaystyle=\sum_{s}\int_{-\infty}^{\infty}\textrm{d}t\,\sqrt{\frac{2\pi}{i\frac{\partial^{2}S_{V}}{\partial
t^{\prime 2}}(t,t_{s}^{\prime}(t))}}\mathbf{f}(t,t_{s}^{\prime}(t))$ (42)
$\displaystyle\qquad\qquad\qquad\qquad\times
e^{-iS_{V}\\!(t,t_{s}^{\prime}(t))+i\Omega t},$
with $t^{\prime}_{s}(t)$ defined implicitly via (6b). After this, we perform a
Taylor expansion of the Volkov action $S_{V}(t,t_{s}^{\prime}(t))$ at the
harmonic-cutoff time $t_{\mathrm{hc}}$ up to third order,
$\displaystyle\mathbf{D}(\Omega)$
$\displaystyle=\sum_{s,\mathrm{hc}}\int_{C}\!\!\textrm{d}t\sqrt{\frac{2\pi/i}{\frac{\partial^{2}S_{V}}{\partial t^{\prime 2}}(t,t_{s}^{\prime}(t))}}\mathbf{f}(t,t_{s}^{\prime}(t))\,e^{-iS_{V}\!(t_{\mathrm{hc}},t_{s}^{\prime}(t_{\mathrm{hc}}))}$ (43)
$\displaystyle\qquad\times\exp(-\frac{i}{3}A(t-t_{\mathrm{hc}})^{3}-i\Omega_{\mathrm{hc}}(t-t_{\mathrm{hc}})+i\Omega t),$
where $A$ and $\Omega_{\mathrm{hc}}$ are the third and first derivatives of
$S_{V}(t,t_{s}^{\prime}(t))$, as set in (16) and evaluated as constructed in
Section I.6; here we have also allowed the integration contour $C$ to vary so
that it can pass through $t_{\mathrm{hc}}$ as appropriate. Finally, we move
the integration origin to $t_{\mathrm{hc}}$,
$\displaystyle\mathbf{D}(\Omega)$
$\displaystyle=\sum_{s,\mathrm{hc}}\int_{\bar{C}}\!\!\textrm{d}\bar{t}\sqrt{\frac{2\pi/i}{\frac{\partial^{2}S_{V}}{\partial t^{\prime 2}}(t,t_{s}^{\prime}(t))}}\mathbf{f}(t,t_{s}^{\prime}(t))\,e^{-iS_{V}\!(t_{\mathrm{hc}},t_{s}^{\prime}(t_{\mathrm{hc}}))}$ (44)
$\displaystyle\qquad\times\exp(-\frac{i}{3}A\bar{t}\,^{3}-i\Omega_{\mathrm{hc}}\bar{t}+i\Omega(\bar{t}+t_{\mathrm{hc}})),$
keeping the explicit $t=\bar{t}+t_{\mathrm{hc}}$ for notational simplicity.
To apply the Harmonic-Cutoff Approximation, we now assume that the prefactor
varies slowly enough at $t_{\mathrm{hc}}$ that it can be pulled out of the
integral (i.e., that it changes slower than the decay into the valleys shown
in Fig. 7). (On a more rigorous footing, this amounts to a zeroth-order
Taylor expansion of the prefactor at $t_{\mathrm{hc}}$, as is done in the SPA
GerlachSPAonline . Derivatives of the prefactor can be added as required, and
the corresponding terms will change the Airy function to its derivatives, as
can be seen by differentiating (36) with respect to $z$, which brings down a
factor of $t$ into the prefactor. For the cases we plot here, these terms are
about two orders of magnitude weaker than the leading-order contribution.)
This gives
$\displaystyle\mathbf{D}(\Omega)$
$\displaystyle=\sum_{\mathrm{hc}}\sqrt{\frac{2\pi}{i\frac{\partial^{2}S_{V}}{\partial
t^{\prime
2}}(t_{\mathrm{hc}}^{\phantom{a}},t^{\prime}_{\mathrm{hc}})}}\mathbf{f}(t_{\mathrm{hc}}^{\phantom{a}},t^{\prime}_{\mathrm{hc}})e^{-iS_{V}\\!(t_{\mathrm{hc}}^{\phantom{a}},t_{\mathrm{hc}}^{\prime})}$
(45) $\displaystyle\qquad\times e^{i\Omega
t_{\mathrm{hc}}}\int_{\bar{C}}\exp(-\frac{i}{3}A\bar{t}\,^{3}+i(\Omega-\Omega_{\mathrm{hc}})\bar{t})\textrm{d}\bar{t}.$
The integral is now in the form of (35), with an added functional dependence
on $\Omega$ coming from the term $e^{i\Omega t_{\mathrm{hc}}}$. This term
results from the translation of the time origin, and it can have a sizeable
effect on the harmonic yield if $\imaginary(t_{\mathrm{hc}})$ is nonzero. That
said, we can now use the result (40) for the model form directly.
We obtain thus our final result for the HCA,
$\displaystyle\mathbf{D}(\Omega)$
$\displaystyle=\sum_{\mathrm{hc}}\sqrt{\frac{2\pi}{i\frac{\partial^{2}S_{V}}{\partial
t^{\prime
2}}(t_{\mathrm{hc}}^{\phantom{a}},t^{\prime}_{\mathrm{hc}})}}\frac{2\pi}{e^{2\pi
ik/3}A_{\mathrm{hc}}^{1/3}}$ (46)
$\displaystyle\quad\times\mathbf{f}(t_{\mathrm{hc}}^{\phantom{a}},t^{\prime}_{\mathrm{hc}})e^{-iS_{V}\\!(t_{\mathrm{hc}}^{\phantom{a}},t_{\mathrm{hc}}^{\prime})+i\Omega
t_{\mathrm{hc}}}\operatorname{Ai}\mathopen{}\left(\frac{\Omega_{\mathrm{hc}}-\Omega}{e^{2\pi
ik/3}A_{\mathrm{hc}}^{1/3}}\right)\mathclose{},$
where the harmonic-cutoff time pairs
$(t_{\mathrm{hc}}^{\phantom{a}},t^{\prime}_{\mathrm{hc}})$ are found by
solving the simultaneous equations
$\displaystyle\frac{\partial^{2}S_{V}}{\partial
t^{2}}\frac{\partial^{2}S_{V}}{\partial t^{\prime
2}}-\left(\frac{\partial^{2}S_{V}}{\partial t\,\partial
t^{\prime}}\right)^{2}$ $\displaystyle=0$ (47a) $\displaystyle\frac{\partial
S_{V}}{\partial t^{\prime}}$ $\displaystyle=0$ (47b)
and where the coefficients $\Omega_{\mathrm{hc}}$ and $A_{\mathrm{hc}}$ for
each pair of harmonic-cutoff times
$(t_{\mathrm{hc}}^{\phantom{a}},t^{\prime}_{\mathrm{hc}})$ are given, as in
(16), by
$\displaystyle\Omega_{\mathrm{hc}}=\frac{\partial S_{V}}{\partial
t}(t_{\mathrm{hc}}^{\phantom{a}},t^{\prime}_{\mathrm{hc}})\ \text{and}\
A_{\mathrm{hc}}=\frac{1}{2}\frac{\textrm{d}^{3}S_{V}}{\textrm{d}t^{3}}(t_{\mathrm{hc}}^{\phantom{a}},t^{\prime}_{\mathrm{hc}}),$
(48)
with the third derivative $\frac{\textrm{d}^{3}}{\textrm{d}t^{3}}$ understood
in the constrained sense of Section I.6. (An added wrinkle can appear if the
prefactor, and particularly the dipole moments it includes, have singularities
at the solutions of (47), which is often the case in the regular SPA. If this
occurs, a regularization scheme like the one described in Ref. Popruzhenko2014
will be required, but this is likely to extend relatively cleanly.) Our
implementation of this approximation, in the Wolfram Language, is available in
the RBSFA software package RBSFA .
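Independently of that package, the workflow can be sketched with an off-the-shelf root finder; the following rough Python sketch solves (47) for a monochromatic linearly-polarized field in atomic units, where the laser parameters, the finite-difference step and the initial guess are all illustrative assumptions:

```python
import numpy as np
from scipy.optimize import root

# Rough sketch: solve Eqs. (47) for one harmonic-cutoff pair (t_hc, t'_hc)
# of a monochromatic linearly-polarized field, in atomic units.
omega, F, Ip = 0.057, 0.053, 0.5            # ~800 nm, ~1e14 W/cm^2, Ip = 0.5
A0 = F / omega
A    = lambda t: A0 * np.cos(omega * t)                 # vector potential
intA = lambda t: A0 * np.sin(omega * t) / omega         # its antiderivative
p_s  = lambda t, tp: -(intA(t) - intA(tp)) / (t - tp)   # stationary momentum

St  = lambda t, tp:  0.5 * (p_s(t, tp) + A(t))**2 + Ip    # dS_V/dt,  Eq. (32a)
Stp = lambda t, tp: -0.5 * (p_s(t, tp) + A(tp))**2 - Ip   # dS_V/dt', Eq. (32b)

def residual(x, h=1e-6):
    t, tp = x[0] + 1j * x[1], x[2] + 1j * x[3]
    # second partials by central differences of the analytic first partials
    Stt   = (St(t + h, tp) - St(t - h, tp)) / (2 * h)
    Sttp  = (St(t, tp + h) - St(t, tp - h)) / (2 * h)
    Stptp = (Stp(t, tp + h) - Stp(t, tp - h)) / (2 * h)
    det = Stt * Stptp - Sttp**2                 # Eq. (47a)
    con = Stp(t, tp)                            # Eq. (47b)
    return [det.real, det.imag, con.real, con.imag]

# guess: classical coalescence (return ~4.4 rad, birth ~0.3 rad) plus some
# imaginary time for the tunnelling exit
guess = [4.4 / omega, 5.0, 0.3 / omega, 15.0]
sol = root(residual, guess, tol=1e-12)
t_hc, tp_hc = sol.x[0] + 1j * sol.x[1], sol.x[2] + 1j * sol.x[3]
print(sol.success, St(t_hc, tp_hc))   # the last value is Omega_hc, Eq. (48)
```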
---
Figure 9: Approximations to (a) the pure action term $e^{-iS}$, (b) the
action term with wavepacket diffusion, $\tau^{-3/2}e^{-iS}$, and (c) the full
harmonic dipole, via the Saddle-Point Approximation for the long and short
trajectories (green and blue lines) and their coherent combination (dashed
gray lines), the Uniform Approximation (black lines), and the Harmonic-Cutoff
Approximation (red lines). The HCA is quantitatively accurate at the harmonic
cutoff, and reasonably qualitatively accurate in the upper plateau, but it
only requires solving a single saddle-point equation, instead of one per
harmonic frequency over a tight grid in $\Omega$.
We show the results of this approximation, compared to the SPA and the UA (as
described in Refs. Figueira2002 ; Milosevic2002 ), in Fig. 9, plotted as in
Fig. 2c. When the prefactor is fully ignored, as in Fig. 9a, the HCA is
extremely accurate at the harmonic cutoff, and it retains a good qualitative
accuracy as $\Omega$ descends from $\Omega_{c}$ and into the plateau: it
shifts vertically from the UA in the lower plateau and the predicted period
for the QPI beatings is too long, but the QPI contrast is mostly well
reproduced.
The qualitative accuracy of the HCA, particularly regarding the QPI contrast,
gets degraded when the wavepacket-diffusion dilution factor of $\tau^{-3/2}$
is included, as shown in Fig. 9b. This happens because the $\tau^{-3/2}$
factor affects the long trajectories (green line) much more than it does the
short ones (blue line), bringing them closer together and allowing for better
interference between them; the HCA, with its information coming from a single
point, is blind to this effect, which only acts as a global vertical shift on
the curve. The same is true for the full harmonic dipole
$\mathbf{D}(\Omega)$ (here we use the ground-state wavefunction and dipole
transition matrix element for the $s$ state of a short-range potential, as
described in Ref. Pisanty2017 ), which we plot in Fig. 9c – the HCA has
good quantitative accuracy at the cutoff, but it becomes more of a qualitative
estimation for the middle and lower plateau.
However, this weakness of the HCA is also its biggest strength: precisely
because it is able to describe the harmonic yield using only information
coming from a single solution of a set of saddle-point equations (47), it can
be calculated in a small fraction of the time taken to produce SPA or UA
calculations of the harmonic spectrum, since those require the solution of one
system of saddle-point equations – the system in (6) – for each harmonic
frequency $\Omega$ of interest, and these will typically form a grid with
hundreds of instances.
In this sense, the HCA sits at the far end of the tradeoff spectrum between
numerical accuracy and computational complexity, which is typically understood
as running from full simulations of the time-dependent Schrödinger equation
(TDSE) Scrinzi2014 , passing through explicit numerical integration of the SFA
dipole (5), to the SPA and UA quantum-orbit approaches. Normally, the quantum-
orbit methods are considered to be fast enough that no further optimization is
necessary, but this is not always the case when large scans are required
spanning a high-dimensional parameter space (particularly if the waveforms
involved cause nontrivial behaviour in the quantum orbits), such as when
optimizing the length or strength of the harmonic plateau and cutoff
Chipperfield2009 ; Chipperfield2010 ; Haessler2014 . Moreover, when the
primary focus is on the harmonic cutoff, the HCA will typically be
sufficiently accurate.
## III Parameter scaling and the cutoff law
As we have seen, the formalism we have presented allows us to extract both a
unique and natural definition for the (complex) times that correspond to the
harmonic cutoff, and also a natural definition of the harmonic frequency
$\Omega_{c}$ at which the cutoff occurs. This identification is a valuable
connection, since the high-harmonic cutoff is often understood as a vague
term, referring to a spectral region where the behaviour changes, instead of a
concrete point—largely because of the difficulty in pinning down a specific
frequency at which this change occurs.
This is particularly important, since the scaling of the cutoff frequency with
the field parameters is one of the key hallmarks of HHG, in terms of the so-
called ‘cutoff law’,
$\Omega_{\mathrm{max}}\approx 3.17\,U_{p}+I_{p},$ (49)
which relates the cutoff frequency $\Omega_{\mathrm{max}}$ to the target’s
ionization potential $I_{p}$ and the ponderomotive energy
$U_{p}=F^{2}/4\omega^{2}$ of the field. This relationship was first uncovered
in numerical simulations Krause1992 and subsequently explained using the
classical three-step model Corkum1993 ; Kulander1993 : the numerical factor of
$3.17$ arises as the maximal kinetic energy that can be achieved at the return
to the origin by an electron released at zero velocity, with the $I_{p}$ term
representing quantum energy conservation at the recombination, with an
essentially heuristic justification.
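The classical factor itself is straightforward to reproduce numerically; the following is a minimal Python sketch in scaled units $F=\omega=1$ (so that $U_{p}=1/4$), scanning ionization phases of an electron released at rest:

```python
import numpy as np
from scipy.optimize import brentq

# Minimal sketch: recover the classical 3.17 factor of Eq. (49).  An electron
# released at rest at phase t0 in the field E(t) = cos(t) returns to the
# origin when x(t) = 0; we maximize its return kinetic energy in units of Up.
def return_energy(t0):
    x = lambda t: np.cos(t) - np.cos(t0) + (t - t0) * np.sin(t0)
    ts = np.linspace(t0 + 1e-3, t0 + 2 * np.pi, 2000)
    sign_change = np.where(np.diff(np.sign(x(ts))))[0]
    if len(sign_change) == 0:
        return 0.0                                  # no return to the origin
    tr = brentq(x, ts[sign_change[0]], ts[sign_change[0] + 1])  # first return
    v = -(np.sin(tr) - np.sin(t0))                  # return velocity
    return 0.5 * v**2 / 0.25                        # kinetic energy over Up

phases = np.linspace(0.0, np.pi / 2, 400)
print(max(return_energy(t0) for t0 in phases))      # ~3.17
```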
The fully-quantum quasiclassical theory Lewenstein1994 re-derives this cutoff
law, giving also a quantum correction to the $I_{p}$ term of the form
$\Omega_{\mathrm{max}}\approx 3.17\,U_{p}+1.32\,I_{p}.$ (50)
To obtain this formulation, the cutoff frequency is defined as the maximum of
the recollision kinetic energy, $\real(\mathbf{v}(t_{r})^{2})$, taken under
the restriction that its imaginary part $\imaginary(\mathbf{v}(t_{r})^{2})$
vanish. The dynamics are then analyzed using a systematic expansion on
$I_{p}/U_{p}$, using the purely-classical case at $I_{p}=0$ for the leading
$3.17U_{p}$ term, after which the cutoff law can be derived in the exact form
$\Omega_{\mathrm{max}}=3.17\,U_{p}+F(I_{p}/U_{p})\,I_{p},$ (51)
in terms of a universal function $F(I_{p}/U_{p})$ which can be calculated
numerically as $F(0)=1.32$.
This prescription can be computed exactly for linearly-polarized monochromatic
fields, and it can be extended to elliptical polarizations Milosevic2000 ;
Flegel2004 , but it is laborious to apply for polychromatic combinations
Figueira1999 (where the complex waveform makes analytical calculations
challenging) and for tailored field polarizations Milosevic1996 , especially
for field shapes whose recolliding orbits do not ionize at zero velocity, as
required by the classical theory. In any case, this definition of the harmonic
cutoff has not seen wide adoption in the literature.
Instead, a wide variety of other methods have been used in its place,
including relaxing the $\imaginary(\mathbf{v}(t_{r})^{2})=0$ condition to
$\imaginary(t_{r})=0$ Milosevic2000bicircular , focusing only on the classical
recollision velocity Chipperfield2009 ; Chipperfield2010 , examining the
change in the intensity scaling of the harmonic yield at a fixed harmonic
order Lewenstein1995 , the use of graphical methods based on tangents to the
optical waveform Figueira1999 , and extracting the cutoff from the Bessel-
function expansion of the oscillatory integral Averbukh2001 ; Avetissian2014 ,
as well as direct numerical methods used to extract the cutoff from the
results of TDSE simulations Neufeld2019timescales . None of these definitions,
however, is particularly satisfactory, and – like the original definition –
none of the analytical approaches have been used very widely in the
literature.
---
Figure 10: Scaling of the harmonic-cutoff frequency $\Omega_{\mathrm{hc}}$
with respect to the field parameters. (a) The affine dependence of the cutoff
law $\real(\Omega_{\mathrm{hc}})\approx 3.17U_{p}+I_{p}$ is reproduced well;
the dots at the axis mark the position of $I_{p}$, and the mismatch to the
lines comes from the quantum corrections. (b) The imaginary part
$\imaginary(\Omega_{\mathrm{hc}})$ decays with $U_{p}$ as
$I_{p}^{3/2}/U_{p}^{1/2}$. These scalings indicate a relationship of the form
$\Omega_{\mathrm{hc}}=3.17U_{p}+F(\gamma)I_{p}$, with the real and imaginary
parts of $F(\gamma)$, shown in (c) and (d), suggesting that it is a power
series in $i\gamma$.
As we argued earlier, the real part $\Omega_{c}$ of our complex-valued
$\Omega_{\mathrm{hc}}$ forms a natural candidate as a precise definition of
the harmonic-cutoff frequency $\Omega_{\mathrm{max}}$, with the added
advantage that it can be trivially adapted to complicated waveforms and to
tailored polarizations without significantly complicating the calculation, a
useful feature as the complexity of the field shapes under consideration
continues to increase Neufeld2019 ; Javsarevic2020 . To strengthen this
identification, though, it is important to verify that it reproduces the
cutoff law, including the quantum corrections coming from the SFA as captured
by the original definition. We show this in Fig. 10a: the variation of
$\real(\Omega_{\mathrm{hc}})$ with $U_{p}$ is linear with an $I_{p}$-dependent
offset, which closely follows the quasiclassical quantum-corrected cutoff law
(50).
However, our formalism allows us to go beyond this level, since
$\Omega_{\mathrm{hc}}$ also has an imaginary part, whose scaling with $U_{p}$
and $I_{p}$ is shown in Fig. 10b. As rough behaviour,
$\imaginary(\Omega_{\mathrm{hc}})$ decreases with $U_{p}$ and increases with
$I_{p}$, and simple testing shows that the leading asymptotic component is the
behaviour
$\imaginary(\Omega_{\mathrm{hc}})\sim{I_{p}^{3/2}}\big{/}{U_{p}^{1/2}}.$ (52)
At first glance, this shows very different behaviour to the real part, which
follows (51). However, once a factor of $I_{p}$ has been set apart, as we did
for the real part, the scaling for $\imaginary(\Omega_{\mathrm{hc}})$ is
simply the Keldysh parameter, $\gamma=\sqrt{I_{p}/2U_{p}}$. This therefore
indicates that the parameter in the quantum scaling law (51) should be
amended from $I_{p}/U_{p}$ to its square root, so it should read
$\Omega_{\mathrm{hc}}=3.17\,U_{p}+F(\gamma)\,I_{p}.$ (53)
In this form, (53) essentially acts as a definition for
$F(\gamma)=(\Omega_{\mathrm{hc}}-3.17U_{p})/I_{p}$, but this definition can
only work if that value is independent of what combination of $I_{p}$ and
$U_{p}$ gives rise to $\gamma$. This is indeed the case, as we show in Figs.
10c and 10d for the real and imaginary parts, respectively: plotting
$F(\gamma)$ by changing $I_{p}$ reveals the same universal curve for a range
of different values of $U_{p}$. Moreover, we can extract the low-$\gamma$
behaviour here to obtain
$\displaystyle F(\gamma)$ $\displaystyle\approx 1.323+i\,0.07361\,\gamma$ (54)
$\displaystyle\quad\ -0.068\,\gamma^{2}-i\,0.025\,\gamma^{3}+\cdots,$
with the first two terms shown as the gray dashed line in Figs. 10c and 10d.
This understanding coincides with the existing theory (so, in particular, Fig.
10c coincides with Fig. 5 of Ref. Lewenstein1994, when plotted as a function
of $\gamma^{2}$), but it also helps extend it and clarify its structure,
though a formal analysis of the existence and convergence of the power series
in $i\gamma$ for $F(\gamma)$, as defined here for $\Omega_{\mathrm{hc}}$, is
still lacking. We summarize our parameter-scaling results, and their
relationship to the existing theories, in Table 1.
Theory | Cutoff law
---|---
Corkum Corkum1993 ; Kulander Kulander1993 | $\Omega_{\mathrm{max}}\approx 3.17U_{p}+I_{p}$
Lewenstein Lewenstein1994 | $\begin{aligned} \Omega_{\mathrm{max}}&=3.17U_{p}+F(I_{p}/U_{p})I_{p}\\\ F(0)&=1.32\end{aligned}$
This work | $\begin{aligned} \Omega_{\mathrm{hc}}&=3.17U_{p}+F(\gamma)I_{p}\\\ F(\gamma)&=1.323+i\,0.07361\,\gamma\\\ &\quad\ -0.068\,\gamma^{2}-i\,0.025\,\gamma^{3}+\cdots\end{aligned}$
Table 1: Scaling of the harmonic cutoff with $I_{p}$ and $U_{p}$, for the
different understandings of the cutoff.
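As a minimal numerical sketch of this scaling law (assuming atomic units, and using only the fitted coefficients quoted in (54)), the cutoff energy of (53) can be evaluated directly; the example values in the comment are illustrative, and the truncated series is only reliable at small $\gamma$:

```python
import numpy as np

def cutoff_frequency(Up, Ip):
    """Complex harmonic-cutoff energy Omega_hc = 3.17*Up + F(gamma)*Ip
    of Eq. (53), with F(gamma) truncated to the fitted series (54).
    Atomic units throughout; a sketch, valid at small gamma."""
    gamma = np.sqrt(Ip / (2 * Up))          # Keldysh parameter
    F = (1.323 + 1j * 0.07361 * gamma
         - 0.068 * gamma**2 - 1j * 0.025 * gamma**3)
    return 3.17 * Up + F * Ip

# Illustrative values only: Up = 0.22 a.u., Ip = 0.579 a.u. (argon-like)
# give Re(Omega_hc) ~ 1.4 a.u. plus a small positive imaginary part.
```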
## IV Nontrivial quantum-orbit topology for bicircular fields
To conclude our (current) explorations of the role of the complex harmonic-
cutoff times in HHG, we return to the classification scheme for separating
saddle-point solutions into quantum orbits, which we first showcased in Fig. 1
and constructed in detail in Section I.4. One crucial aspect of this
classification scheme is the orientation of the separatrix, which indicates
the direction that the saddle points take in their avoided crossing at the
cutoff: i.e., whether the short-trajectory saddle, coming in from the left,
will go up into positive (or down into negative) imaginary time when it meets
the long-trajectory saddle. Our construction shows that this direction is
given, via (23), by
$\displaystyle\delta t_{\mathrm{sep}}=\sqrt{-i\eta/A_{\mathrm{hc}}},$ (55)
and thus that it has a sensitive dependence on the sign of
$\eta=\imaginary\frac{\partial S_{V}}{\partial
t}(t_{\mathrm{hc}}^{\phantom{a}},t_{\mathrm{hc}}^{\prime}),$ (56)
the imaginary part of the time derivative of the Volkov action at the
harmonic-cutoff time.
This imaginary part is essentially controlled by the internal details of the
action and, as such, its sign can be either positive or negative depending on
the precise particulars of the optical waveform of the driving laser. If the
shape of the driving laser pulse remains essentially constant (say, when doing
an intensity scan as in Fig. 10) then the sign of the imaginary part is
unlikely to change (as was seen to be the case in Fig. 10b).
However, if one has a more complicated driving field with a nontrivial shape
parameter that affects the waveform of the pulse, and thus the details of the
Volkov action, then the sign of $\eta$ will change, and the direction of the
separatrix, $\delta t_{\mathrm{sep}}$, will switch to one
$45^{\circ}$ away. This will, in turn, switch the
direction of the avoided crossing, and with that the identity (say, as ‘short’
or ‘long’ trajectories) of the post-cutoff branches of the orbit. This
behaviour is generic and commonplace: as a simple example, it can be observed
in monochromatic fields once an envelope is introduced, where changes in the
carrier-envelope phase can produce this type of topological change in the
quantum-orbit layout.
In this section we showcase a concrete example of this behaviour, and we
explore the nontrivial topologies that it induces in the set of quantum-orbit
saddle points. We focus, in particular, on tailored polarization states known
as ‘bicircular’ fields Fleischer2014 ; Kfir2015 ; Eichmann1995 ; Milosevic1996
; Milosevic2000bicircular ; Baykusheva2016 ; Dorney2017 ; JimenezGalan2018 ;
Pisanty2014 ; Pisanty2017 ; Milovsevic2019 , formed by the combination of two
counter-rotating circularly-polarized strong laser pulses at frequencies
$\omega$ and $2\omega$,
$\mathbf{F}(t)=F_{1}\begin{pmatrix}\cos(\omega t)\\\ \sin(\omega
t)\end{pmatrix}+F_{2}\begin{pmatrix}\phantom{-}\cos(2\omega t)\\\
-\sin(2\omega t)\end{pmatrix}.$ (57)
These fields have been the focus of widespread interest in recent years
because they are subject to a spin selection rule Fleischer2014 ; Pisanty2014
which ensures that the harmonics of the driver are circularly polarized
Kfir2015 . For our purposes, however, we select them as an example of a
reasonably well-understood waveform with a nontrivial shape parameter, namely,
the relative intensity between the two pulses; this is normally kept at unity,
but the effects of its variation have seen some exploration
Milosevic2000bicircular ; Baykusheva2016 ; Dorney2017 ; JimenezGalan2018 . If
we keep the total intensity constant, at
$I_{\mathrm{tot}}=2\times 10^{14}\,\mathrm{W/cm^{2}}$
for concreteness, then the shift in intensity is best described by the mixing
angle $\theta$, which we use to define the individual field amplitudes as
$\displaystyle F_{1}=F\cos(\theta),\qquad F_{2}=F\sin(\theta),$ (58)
where $F=\sqrt{2}\times 0.053\,\mathrm{a.u.}$ is the peak field
strength.
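For concreteness, the waveform of (57) with the amplitude split (58) can be written down directly; the following is a minimal sketch (atomic units), with the default peak field strength matching the value just quoted:

```python
import numpy as np

def bicircular_field(t, omega, theta, F=np.sqrt(2) * 0.053):
    """Counter-rotating omega/2*omega bicircular field of Eqs. (57)-(58):
    the total amplitude F is split between the two colours by the mixing
    angle theta (radians).  Returns (F_x, F_y) in atomic units."""
    F1, F2 = F * np.cos(theta), F * np.sin(theta)
    Fx = F1 * np.cos(omega * t) + F2 * np.cos(2 * omega * t)
    Fy = F1 * np.sin(omega * t) - F2 * np.sin(2 * omega * t)
    return Fx, Fy

# At theta = pi/4 (equal intensities) the Lissajous figure of the field
# is the familiar three-fold-symmetric trefoil.
```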
Figure 11: Topological transition and reconnection in the quantum orbits of a
bicircular field as the mixing angle $\theta$ changes. Here we show a detail
of the quantum-orbit layout for the short and long trajectories, plotted as in
Fig. 1c, with the triangles marking the harmonic-cutoff times
$t_{\mathrm{hc}}$ and the coloured separatrices marking the regions occupied
by the different orbits, as described in Section I.4. In (b) we show the
topological transition, where
$\eta=\imaginary(\tfrac{\textrm{d}S_{V}}{\textrm{d}t})$ vanishes and the
separatrix becomes undefined, which we show by highlighting the
$t_{\mathrm{hc}}$ with a black border and plotting both options (using an
artificial $\eta=\pm 1$) as the gray cross. In this case, both choices of
separatrix are acceptable for saddle classification, since the saddles fully
coalesce.
We show in Fig. 11 a representative topological reconnection transition,
between the short and long orbits (blue and green curves, respectively) of the
bicircular field. Before the transition, in Fig. 11a, the short orbit connects
up to the positive-imaginary saddle, while after the transition, in Fig. 11c,
it connects to negative imaginary time. Between these two there must always
lie a transition, which we show in Fig. 11b: here the saddles fully coalesce,
producing a monkey-saddle landscape like the ones shown in Fig. 7, with a
purely-real HCA Airy function corresponding to enhanced quantum path
interference between the two orbits.
Figure 12: Topological transitions and reconnections in the quantum orbits of
a bicircular field as a function of the mixing angle $\theta$, plotted as in
Fig. 11. As $\theta$ varies over a wider range, the various pairs of
neighbouring quantum orbits go through reconnection transitions with each
other (with two separate transitions, in (b) and (f), for the first pair of
orbits), so that all of the quantum orbits are connected into a single
topology, which we show below in 3D Fig. 2. In each panel the inset at bottom
right shows the shape of the electric field at the relevant mixing angle.
More importantly, however, this transition separates the instances with
$\eta>0$ from those with $\eta<0$, so at the transition itself $\eta$ must
vanish – and, as with the exact coalescences we studied in Section I.7, no
(nonzero) unique separatrix direction $\delta t_{\mathrm{sep}}$ can be found.
That said, this failure is fairly benign since, at the transition, either of
the separatrices obtained by the artificial choices of $\eta=\pm 1$ will work
correctly. At the transition the missed approach becomes a full coalescence,
and any identification of pre- and post-cutoff orbits is artificial, so both
signs of $\eta$ will work equally well. In Fig. 11b, we mark the transition by
showing both possible separatrices in gray, taking an arbitrary choice between
the two as to which one to use for the actual classification. We also
highlight the triangle at the harmonic-cutoff time $t_{\mathrm{hc}}$ with a
black border for further clarity.
On a more practical footing, the fact that
$\eta=\imaginary(\tfrac{\textrm{d}S_{V}}{\textrm{d}t})$ vanishes at the
transition is especially useful, since it can be used to look for the
parameters where the transitions happen, by looking for zeros of $\eta$ as a
function of $\theta$.
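A minimal sketch of that search, assuming a user-supplied callable `eta` (hypothetical here, since evaluating $\eta$ requires the saddle-point machinery described earlier) that returns $\eta$ at a given mixing angle:

```python
def find_transition(eta, th_lo, th_hi, tol=1e-10):
    """Bisect for a zero of eta(theta) = Im(dS_V/dt at t_hc), which marks
    a topological transition.  `eta` is a user-supplied callable (an
    assumption here); eta(th_lo) and eta(th_hi) must bracket a sign change."""
    f_lo = eta(th_lo)
    assert f_lo * eta(th_hi) < 0, "no sign change bracketed"
    while th_hi - th_lo > tol:
        mid = 0.5 * (th_lo + th_hi)
        f_mid = eta(mid)
        if f_lo * f_mid <= 0:        # zero lies in [th_lo, mid]
            th_hi = mid
        else:                        # zero lies in [mid, th_hi]
            th_lo, f_lo = mid, f_mid
    return 0.5 * (th_lo + th_hi)
```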
That said, the most important aspect of these topological transitions becomes
apparent when we survey the quantum-orbit landscape on a wider perspective, in
terms of the multiple quantum orbits present in the dynamics, as well as for a
wider interval in the mixing angle, which we show in Fig. 12. Here we see that
all of the pairs of quantum orbits undergo one or more reconnection
transitions with each other, so that no pair of quantum orbits, however
apparently distant, can actually be separated: in the same way that the
evanescent orbits of Fig. 11b have ‘confused’ identities between the short and
long orbits, the short orbit ‘mingles’ at low $\Omega$ with the short-time
orbit shown in cyan, at two separate transitions (shown in Fig. 12b and Fig.
12f), the long orbit connects with the second return (with the transition
shown in Fig. 12d), and so on.
3D Figure 2: Unified surface formed by the quantum orbits in Fig. 12,
when the $\theta$ dependence is unfolded as a third dimension. (Thus, a cut
through the surface at constant $\theta$ will return a panel from Fig. 12.) At
the transitions, the horizontal parts of the surface (the plateau harmonics)
change from going up to going down and vice versa, but from a global
perspective, the saddles form a single surface with a unified topology, and
the separation of this surface into individual regions corresponding to the
different quantum orbits is somewhat artificial. This highlights the fact that
the various quantum orbits, as inverse images under the multivalued inverse of
an analytical function, are simply different branches of a single Riemann
surface. An interactive version and a 3D-printable model of this plot are
available at imaginary-harmonic-cutoff.github.io SupplementaryMaterial .
Figure 13: Energy-time relations (a, c, e, g) and indicative SPA harmonic
spectra, including the factors from the action and the wavepacket dilution (b,
d, f, h) for four of the quantum-orbit transitions from Fig. 12. For
bicircular fields, attention is generally focused on the short quantum orbit
(blue curve), since at equal intensities
($\theta=45^{\circ}$) it dominates the spectrum
(as shown in (b), which closely follows Fig. 8 of Ref.
Milosevic2000bicircular). However, this is no longer the case at mixing
angles with a high intensity for the $2\omega$ field. At the long-short
topological transition (e, f), where the two become indistinguishable at the
cutoff, the latter is intensified by this effect, while at large mixing
angles, as shown in (g, h), the contribution of the short trajectory plummets.
In other words, the quantum orbits here form a single unified topology, and
they should be understood as such. We show this topology in 3D Fig. 2, by
unfolding the $\theta$ dependence into a third dimension for the plot: this
reveals that the quantum orbits form a single surface, with discontinuous
changes in colour where the (apparent) reconnection transitions force us to
choose where to split this unified surface into individual components.
The fundamental topological object here, as we explored in depth in Section
I.5, is the Riemann surface formed by the saddle points: that is, the surface
defined by the equation $\frac{\textrm{d}S_{V}}{\textrm{d}t}(t_{s})=\Omega$,
encoding the multiple inverses of the action’s derivative, whose intersection
with the real $\Omega$ axis gives the quantum orbits themselves. Within that
perspective, the difference between the two types of connections between the
quantum orbits boils down to what side of the real $\Omega$ axis the branch
point (i.e., the harmonic-cutoff time $t_{\mathrm{hc}}$) falls on.
For fixed field shapes, this branch-cut structure can essentially be ignored,
if so desired, since we are only looking for the inverse images for $\Omega$
over the real axis, and those inverse images will normally be well separated.
In 3D Fig. 2, however, as well as in any other situations where the field
shape changes over a control parameter, we see how these changes to the
position and shape of the Riemann surface bring these once-separate images
into collision (and reconnection) with each other, and this structure can no
longer be ignored.
In more general terms, the reconnection phenomenon we have demonstrated in
this section entails that, ultimately, attempting to attach labels to
individual quantum orbits is fundamentally impossible, since they are
essentially just different instances of one and the same object. Saddle-point
tagging schemes of this type, in terms of multi-indices typically notated as
$\alpha\beta m$, have been used widely in the literature, both for
monochromatic fields Chipperfield2007 ; Milosevic2002 ; Odzak2005 ;
Chipperfield2005 ; Milosevic2017chapter as well as more complex waveforms
Hasovic2016 ; Milosevic2019xray ; Milovsevic2019quantum ; Milosevic2016 . The
topological features we have explored here imply that these schemes should be
regarded as labelling the principal branches of the quantum-orbit Riemann
surface.
These schemes can, in principle, be applied to changing driver waveforms with
nontrivial connections between the various branches involved, but then those
branch changes must be explicitly tracked – in which case, the branch-point
machinery we have developed here (both for finding the branch points
$t_{\mathrm{hc}}$ as well as the parameters where their branch cuts cross the
real $\Omega$ axis) is essential.
As a final note here, it is important to remark that, while some of the
dynamics we have explored here occurs for quantum orbits with weak or
negligible contributions to the spectrum, that is not always the case. Indeed,
as we show in Fig. 13, the equal-intensities viewpoint that the spectrum is
completely dominated by the short-trajectory contribution breaks down at large
mixing angles, where the $2\omega$ contribution is significant, and where many
of the topological transitions we have discussed in this section take place.
## V Discussion and conclusions
As we have shown, the problem of saddle-point classification can be solved in
a robust and flexible way by using the harmonic-cutoff times $t_{\mathrm{hc}}$
to locate the center of the missed approach and implement a separatrix that
lies between the two quantum orbits that perform an avoided crossing at each
cutoff. These harmonic-cutoff times can be easily found as the zeros of a
second-order saddle-point equation. In a clear sense, the $t_{\mathrm{hc}}$’s
form the centerpiece of the quasiclassical structure at the cutoff, and they
are a useful and flexible tool to understand a range of questions about the
quantum-orbit dynamics.
The strongest implication of this is that the harmonic-cutoff times can be
used to provide a natural identification for the energy of the cutoff,
$\Omega_{\mathrm{hc}}=\frac{\partial S_{V}}{\partial t}(t_{\mathrm{hc}})$.
This now takes on a complex value, with the imaginary part controlling the
strength of quantum-path interference between the quantum orbits that meet at
that cutoff, as well as the closeness and direction of the approach between
them.
Similarly, our approach provides an efficient method for finding the position
of the cutoff as well as the harmonic yield there, which will work uniformly
and efficiently for a broad range of optical waveforms for the driving laser.
This extends to a full estimation of the spectrum, the Harmonic-Cutoff
Approximation, which uses only quantities local to $t_{\mathrm{hc}}$, and
which accurately captures the cutoff as well as the qualitative shape of the
spectrum down into the middle of the plateau.
On a more abstract note, our search for structures that can be used to
classify the solutions of the saddle-point equations into individual quantum
orbits yields a fresh perspective on the quasiclassical theory of strong-field
phenomena. The quantum orbits are thus revealed as the
$\imaginary(\frac{\partial S_{V}}{\partial t})=0$ contour of the time
derivative of the action, with the rest of its contour map holding crucial
information – most notably via its saddles, the harmonic-cutoff times
$t_{\mathrm{hc}}$. Likewise, the usual saddle points are re-understood as
individual branches of a larger Riemann surface, which encompasses all of the
quantum orbits. The branch points that separate these branches are again the
harmonic-cutoff times, which can thus be used to watch for changes in the
quantum-orbit topology as the driving field’s waveform changes.
The location of the cutoff at a zero of the second derivative of the action
also admits a rather more pedestrian interpretation, though. If the emission
is modelled using only the classical-trajectory level, then the energy of
return $E_{\mathrm{return}}=\frac{\partial S}{\partial t}$ is a function of
the return time, and if we want to find its extrema, then we simply need to
solve the equation $\frac{\partial^{2}S}{\partial t^{2}}=0$. When the theory
is upgraded to the quasiclassical formalism, that equation might seem to lose
its meaning, since the action and the trajectories are complex, as would be
the solutions to the extremum equation $\frac{\partial^{2}S}{\partial
t^{2}}=0$. Our solution in this work is entirely in line with the spirit of
the quasiclassical formalism for strong-field physics: we solve this extremum
equation in the same form it takes classically, allowing for complex values
wherever necessary, and then look at the underlying oscillatory integral for
the correct interpretation of those complex quantities.
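As a sanity check of this classical reading, the extremum of the return energy can be found numerically in the simple-man model; the following sketch (scaled units, unit field amplitude, monochromatic field — assumptions made here for illustration) recovers the familiar ratio of about $3.17$:

```python
import numpy as np

def classical_cutoff_ratio(omega=1.0, n_birth=400, n_t=2000):
    """Simple-man model for F(t) = cos(omega*t) (unit amplitude, so
    Up = 1/(4*omega^2)): for each birth time, integrate the free
    trajectory born at rest at the origin, detect the first return to
    the origin, and maximize the return kinetic energy over births."""
    Up = 1.0 / (4.0 * omega**2)
    A = lambda t: -np.sin(omega * t) / omega     # vector potential
    best = 0.0
    for tb in np.linspace(1e-4, np.pi / omega, n_birth):
        t = np.linspace(tb, tb + 2.0 * np.pi / omega, n_t)
        v = A(t) - A(tb)                         # zero canonical momentum
        dx = 0.5 * (v[1:] + v[:-1]) * np.diff(t) # trapezoid steps
        x = np.concatenate(([0.0], np.cumsum(dx)))
        hits = np.nonzero(x[1:] * x[:-1] < 0)[0] # sign change: return
        if hits.size:
            best = max(best, 0.5 * v[hits[0] + 1]**2)
    return best / Up   # approaches ~3.17 as the grids are refined
```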
More physically, the key structure at play is the fact that the harmonic
cutoff is a caustic: the quantum-orbit analysis of the underlying matter-wave
dynamics is exactly analogous to the geometric-optics analysis of wave optics
in terms of rays, and in this analogy the harmonic cutoff corresponds to
caustics where two families of rays interfere and then meet and fold into each
other, giving way to evanescent-wave behaviour analogous to the post-cutoff
decay in the harmonic spectrum. In one dimension, this caustic can be
precisely localized to a point, known as the ‘fold’ (more technically, the
‘bifurcation set’), which marks the transition between the two regimes, and
our harmonic-cutoff times are the precise embodiment of that understanding of
the caustic.
This view of the harmonic cutoff as a caustic has been used for some time
Raz2012 ; Goulielmakis2012 ; Austin2014 , but recent years have seen a marked
increase in interest in that perspective on the dynamics Facciala2016 ;
Facciala2018 ; Hamilton2017 ; Birulia2019 ; Uzan2020 ; Kelvich2017 , as laser
sources achieve better control over polychromatic combinations with high
relative intensities (which is slated to continue to increase Mitra2020 ).
This has opened the door to the observation of higher-order catastrophes in
strong-field observables, involving higher-dimensional bifurcation sets;
however, the theoretical understanding remains somewhat behind, and a
quantitative understanding that reflects those structures has not yet
followed. In this work we have precisely pinned down the embodiment of the
bifurcation set in the quasiclassical formalism for the ‘fold’ catastrophe,
and this can then be used to analyze in detail the more complicated
configurations involved in newer experiments.
Similarly, our work suggests that the harmonic-cutoff times we have discussed
here should have analogous structures in ionization experiments (most notably
high-order above-threshold ionization Figueira2019 ), where the
experimentally-measurable variable, momentum, has multiple dimensions. This
allows greater freedom to the theory (while at the same time substantially
complicating its analysis), and this in turn allows for higher-dimensional
singular matter-wave structures to be contained in the experimental results.
The tools we have demonstrated here for understanding the one-dimensional
caustic formed at the harmonic cutoff should then provide a useful basis for
understanding those configurations.
###### Acknowledgements.
We thank Przemyslaw Grzybowski for essential and generous help in
understanding the structures presented here. E.P. acknowledges Cellex-ICFO-MPQ
Fellowship funding. We acknowledge funding from the Spanish Ministry MINECO
(National Plan 15 Grant: FISICATEAMO No. FIS2016-79508-P, SEVERO OCHOA No.
SEV-2015-0522, FPI), European Social Fund, Fundació Cellex, Fundació Mir-Puig,
Generalitat de Catalunya (AGAUR Grant No. 2017 SGR 1341, CERCA/Program), ERC
AdG NOQIA, EU FEDER, European Union Regional Development Fund – ERDF
Operational Program of Catalonia 2014-2020 (Operation Code: IU16-011424),
MINECO-EU QUANTERA MAQS (funded by The State Research Agency (AEI)
PCI2019-111828-2 / 10.13039/501100011033), and the National Science Centre,
Poland, Symfonia Grant No. 2016/20/W/ST4/00314.
## Author ORCIDs
* Emilio Pisanty: 0000-0003-0598-8524
* Marcelo F. Ciappina: 0000-0002-1123-6460
* Maciej Lewenstein: 0000-0002-0210-7800
## References
* (1) F. Krausz and M. Ivanov. Attosecond physics. _Rev. Mod. Phys._ 81 no. 1, pp. 163–234 (2009).
* (2) P. B. Corkum and F. Krausz. Attosecond science. _Nat. Phys._ 3 no. 6, pp. 381–387 (2007).
* (3) P. B. Corkum. Plasma perspective on strong field multiphoton ionization. _Phys. Rev. Lett._ 71 no. 13, pp. 1994–1997 (1993).
* (4) K. C. Kulander, K. J. Schafer and J. L. Krause. Dynamics of short-pulse excitation, ionization and harmonic conversion. In B. Piraux, A. L’Huillier and K. Rzążewski (eds.), _Super-Intense Laser Atom Physics_ , vol. 316 of _NATO Advanced Studies Institute Series B: Physics_ , pp. 95–110 (Plenum, New York, 1993).
* (5) A. Scrinzi. Time-dependent Schrödinger equation. In T. Schultz and M. Vrakking (eds.), _Attosecond and XUV Physics: Ultrafast Dynamics and Spectroscopy_ , pp. 257–292 (Wiley-VCH, Weinheim, 2014).
* (6) M. Lewenstein, P. Balcou, M. Y. Ivanov, A. L’Huillier and P. B. Corkum. Theory of high-harmonic generation by low-frequency laser fields. _Phys. Rev. A_ 49 no. 3, pp. 2117–2132 (1994).
* (7) K. Amini, J. Biegert, F. Calegari et al. Symphony on strong field approximation. _Rep. Progr. Phys._ 82 no. 11, p. 116001 (2019). arXiv:1812.11447.
* (8) P. Salières, B. Carré, L. Le Déroff et al. Feynman’s path-integral approach for intense-laser-atom interactions. _Science_ 292 no. 5518, pp. 902–905 (2001).
* (9) G. G. Paulus, F. Grasbon, A. Dreischuh et al. Above-threshold ionization by an elliptically polarized field: Interplay between electronic quantum trajectories. _Phys. Rev. Lett._ 84 no. 17, pp. 3791–3794 (2000).
* (10) R. Kopold, W. Becker and D. B. Milošević. Quantum orbits: a space-time picture of intense-laser-induced processes in atoms. _J. Mod. Opt._ 49 no. 12, pp. 1987–1999 (2002).
* (11) M. Ivanov and O. Smirnova. Multielectron high harmonic generation: simple man on a complex plane. In T. Schultz and M. Vrakking (eds.), _Attosecond and XUV Physics: Ultrafast Dynamics and Spectroscopy_ , pp. 201–256 (Wiley-VCH, Weinheim, 2014). arXiv:1304.2413.
* (12) A. Nayak, M. Dumergue, S. Kühn et al. Saddle point approaches in strong field physics and generation of attosecond pulses. _Phys. Rep._ 833, pp. 1–52 (2019). U Szeged eprint.
* (13) N. G. de Bruijn. _Asymptotic methods in analysis_ (Dover Publications, New York, 1981).
* (14) N. Bleistein and R. A. Handelsman. _Asymptotic Expansions of Integrals_ (Dover, New York, 1975).
* (15) U. H. Gerlach. _Linear Mathematics in Infinite Dimensions: Signals, Boundary Value Problems and Special Functions_ (Ohio State University, 2017), pp. 370–380. 2nd Beta Edition. Available online at http://people.math.osu.edu/gerlach.1/math/BVtypset. Retrieved 12 February 2019.
* (16) M. Lewenstein, P. Salières and A. L’Huillier. Phase of the atomic polarization in high-order harmonic generation. _Phys. Rev. A_ 52 no. 6, pp. 4747–4754 (1995).
* (17) A. Zaïr, M. Holler, A. Guandalini et al. Quantum path interferences in high-order harmonic generation. _Phys. Rev. Lett._ 100 no. 14, p. 143902 (2008).
* (18) C. Figueira de Morisson Faria, H. Schomerus and W. Becker. High-order above-threshold ionization: The uniform approximation and the effect of the binding potential. _Phys. Rev. A_ 66 no. 4, p. 043413 (2002). UCL eprint.
* (19) D. B. Milošević and W. Becker. Role of long quantum orbits in high-order harmonic generation. _Phys. Rev. A_ 66 no. 6, p. 063417 (2002).
* (20) L. E. Chipperfield. _High Harmonic Generation with Few-Cycle Pulses_. PhD thesis, Imperial College London (2007).
* (21) D. Hoffmann. _High Harmonic Generation Using Multicolour Fields_. PhD thesis, Imperial College London (2011).
* (22) T. Das. _Quantum-orbit analysis of laser-matter interactions in intense orthogonally polarised fields_. PhD thesis, University College London (2017).
* (23) C. Manzoni, O. D. Mücke, G. Cirmi et al. Coherent pulse synthesis: towards sub-cycle optical waveforms. _Laser Photonics Rev._ 9 no. 2, pp. 129–171 (2015). PoliMi eprint.
* (24) A. Fleischer, O. Kfir, T. Diskin, P. Sidorenko and O. Cohen. Spin angular momentum and tunable polarization in high-harmonic generation. _Nat. Photon._ 8 no. 7, pp. 543–549 (2014). arXiv:1310.1206.
* (25) O. Kfir, P. Grychtol, E. Turgut et al. Generation of bright phase-matched circularly-polarized extreme ultraviolet high harmonics. _Nat. Photon._ 9 no. 2, pp. 99 – 105 (2015).
* (26) H. Eichmann, A. Egbert, S. Nolte et al. Polarization-dependent high-order two-color mixing. _Phys. Rev. A_ 51 no. 5, pp. R3414–R3417 (1995).
* (27) D. B. Milošević and B. Piraux. High-order harmonic generation in a bichromatic elliptically polarized laser field. _Phys. Rev. A_ 54 no. 2, pp. 1522–1531 (1996).
* (28) D. B. Milošević, W. Becker and R. Kopold. Generation of circularly polarized high-order harmonics by two-color coplanar field mixing. _Phys. Rev. A_ 61 no. 6, p. 063403 (2000).
* (29) D. Baykusheva, M. S. Ahsan, N. Lin and H. J. Wörner. Bicircular high-harmonic spectroscopy reveals dynamical symmetries of atoms and molecules. _Phys. Rev. Lett._ 116 no. 12, p. 123001 (2016). ETH eprint.
* (30) K. M. Dorney, J. L. Ellis, C. Hernández-García et al. Helicity-selective enhancement and polarization control of attosecond high harmonic waveforms driven by bichromatic circularly polarized laser fields. _Phys. Rev. Lett._ 119 no. 6, p. 063201 (2017). JILA eprint.
* (31) A. Jiménez-Galán, N. Zhavoronkov, D. Ayuso et al. Control of attosecond light polarization in two-color bicircular fields. _Phys. Rev. A_ 97 no. 2, p. 023409 (2018). arXiv:1805.02250.
* (32) E. Pisanty, S. Sukiasyan and M. Ivanov. Spin conservation in high-order-harmonic generation using bicircular fields. _Phys. Rev. A_ 90 no. 4, p. 043829 (2014). arXiv:1404.6242.
* (33) E. Pisanty and A. Jiménez-Galán. Strong-field approximation in a rotating frame: high-order harmonic emission from $p$ states in bicircular fields. _Phys. Rev. A_ 96 no. 6, p. 063401 (2017). arXiv:1709.00397.
* (34) D. B. Milošević. Quantum-orbit analysis of high-order harmonic generation by bicircular field. _J. Mod. Opt._ 66 no. 1, pp. 47–58 (2019).
* (35) T. Poston and I. N. Stewart. _Catastrophe Theory and its Applications_ (Pitman, London, 1978).
* (36) O. Raz, O. Pedatzur, B. D. Bruner and N. Dudovich. Spectral caustics in attosecond science. _Nat. Photon._ 6 no. 3, p. 170 (2012).
* (37) E. Goulielmakis. Attosecond photonics: Extreme ultraviolet catastrophes. _Nat. Photon._ 6 no. 3, p. 142 (2012).
* (38) D. R. Austin and J. Biegert. Attosecond pulse shaping using partial phase matching. _New J. Phys._ 16 no. 11, p. 113011 (2014).
* (39) D. Faccialà, S. Pabst, B. D. Bruner et al. Probe of multielectron dynamics in xenon by caustics in high-order harmonic generation. _Phys. Rev. Lett._ 117 no. 9, p. 093902 (2016). handle:11311/996104.
* (40) D. Faccialà, S. Pabst, B. Bruner et al. High-order harmonic generation spectroscopy by recolliding electron caustics. _J. Phys. B: At. Mol. Opt. Phys._ 51 no. 13, p. 134002 (2018).
* (41) K. R. Hamilton, H. W. van der Hart and A. C. Brown. Pulse-shape control of two-color interference in high-order-harmonic generation. _Phys. Rev. A_ 95 no. 1, p. 013408 (2017). arXiv:1701.01640.
* (42) V. A. Birulia and V. V. Strelkov. Spectral caustic in two-color high-order harmonic generation: Role of Coulomb effects. _Phys. Rev. A_ 99 no. 4, p. 043413 (2019). arXiv:1901.10518.
* (43) A. J. Uzan, G. Orenstein, Á. Jiménez-Galán et al. Attosecond spectral singularities in solid-state high-harmonic generation. _Nat. Photon._ 14 no. 3, pp. 183–187 (2020). arXiv:1812.02498.
* (44) S. A. Kelvich, W. Becker and S. P. Goreslavski. Caustics and catastrophes in above-threshold ionization. _Phys. Rev. A_ 96 no. 2, p. 023427 (2017).
* (45) E. Pisanty. RB-SFA: High Harmonic Generation in the Strong Field Approximation via Mathematica. GitHub, https://github.com/episanty/RB-SFA, v2.1.2 (2016).
* (46) E. Pisanty. Figure-maker code for The imaginary part of the high-harmonic cutoff. Zenodo:3692563, doi:10.5281/zenodo.3692563 (2020).
* (47) Supplementary Material available online at imaginary-harmonic-cutoff.github.io, or for local use at doi:10.5281/zenodo.3758483 (2020).
* (48) E. Pisanty. _Electron dynamics in complex time and complex space_. PhD thesis, Imperial College London (2016).
* (49) E. Pisanty and M. Ivanov. Slalom in complex time: emergence of low-energy structures in tunnel ionization via complex time contours. _Phys. Rev. A._ 93 no. 4, p. 043408 (2016). arXiv:1507.00011.
* (50) T. Keil, S. V. Popruzhenko and D. Bauer. Laser-driven recollisions under the Coulomb barrier. _Phys. Rev. Lett._ 117 no. 24, p. 243003 (2016). arXiv:1608.03844.
* (51) F. W. J. Olver, D. W. Lozier, R. F. Boisvert and C. W. Clark (eds.). _NIST Handbook of Mathematical Functions_ (Cambridge University Press NIST, Cambridge, New York, 2010). Available online as the Digital Library of Mathematical Functions.
* (52) M. V. Berry and C. Upstill. Catastrophe optics: morphologies of caustics and their diffraction patterns. In E. Wolf (ed.), _Progress in Optics_ , vol. 18, chap. IV, pp. 257–346 (Elsevier, 1980).
* (53) M. Frolov, N. Manakov, T. Sarantseva and A. F. Starace. Analytic formulae for high harmonic generation. _J. Phys. B: At. Mol. Opt. Phys._ 42 no. 3, p. 035601 (2009). UNL eprint.
* (54) M. V. Frolov, N. L. Manakov, A. A. Silaev and N. V. Vvedenskii. Analytic description of high-order harmonic generation by atoms in a two-color laser field. _Phys. Rev. A_ 81 no. 6, p. 063407 (2010).
* (55) R. Wong. _Asymptotic approximations of integrals_ (SIAM, Philadelphia, 2001), pp. 366–372.
* (56) M. V. Frolov, N. L. Manakov, T. S. Sarantseva and A. F. Starace. High-order-harmonic-generation spectroscopy with an elliptically polarized laser field. _Phys. Rev. A_ 86 no. 6, p. 063406 (2012). UNL eprint.
* (57) T. Sarantseva, M. Frolov, N. Manakov, M. Y. Ivanov and A. F. Starace. Harmonic generation spectroscopy with a two-colour laser field having orthogonal linear polarizations. _J. Phys. B: At. Mol. Opt. Phys._ 46 no. 23, p. 231001 (2013). UNL eprint.
* (58) Y. Okajima, O. I. Tolstikhin and T. Morishita. Adiabatic theory of high-order harmonic generation: One-dimensional zero-range-potential model. _Phys. Rev. A_ 85 no. 6, p. 063406 (2012).
* (59) M. A. Khokhlova and V. V. Strelkov. Phase properties of the cutoff high-order harmonics. _Phys. Rev. A_ 93 no. 4, p. 043416 (2016). arXiv:1511.02133.
* (60) H. Schomerus and M. Sieber. Bifurcations of periodic orbits and uniform approximations. _J. Phys. A: Math. Gen._ 30 no. 13, p. 4537 (1997). arXiv:chao-dyn/9701022.
* (61) S. Popruzhenko. Keldysh theory of strong field ionization: history, applications, difficulties and perspectives. _J. Phys. B: At. Mol. Opt. Phys._ 47 no. 20, p. 204001 (2014).
* (62) L. E. Chipperfield, J. S. Robinson, J. W. G. Tisch and J. P. Marangos. Ideal waveform to generate the maximum possible electron recollision energy for any given oscillation period. _Phys. Rev. Lett._ 102 no. 6, p. 063003 (2009).
* (63) L. E. Chipperfield, J. W. Tisch and J. P. Marangos. Derivation of the perfect wave for optimising the strong-field driven electron recollision energy. _J. Mod. Opt._ 57 no. 11, pp. 992–998 (2010).
* (64) S. Haessler, T. Balčiunas, G. Fan et al. Optimization of quantum trajectories driven by strong-field waveforms. _Phys. Rev. X_ 4 no. 2, p. 021028 (2014).
* (65) J. L. Krause, K. J. Schafer and K. C. Kulander. High-order harmonic generation from atoms and ions in the high intensity regime. _Phys. Rev. Lett._ 68 no. 24, pp. 3535–3538 (1992). zenodo:1233893.
* (66) D. B. Milošević. Cut-off law for high-harmonic generation by an elliptically polarized laser field. _J. Phys. B: At. Mol. Opt. Phys._ 33 no. 13, p. 2479 (2000).
* (67) A. Flegel, M. Frolov, N. Manakov and A. F. Starace. Cutoffs of high-energy plateaux for atomic processes in an intense elliptically polarized laser field. _J. Phys. B: At. Mol. Opt. Phys._ 38 no. 1, p. L27 (2004). UNL eprint.
* (68) C. Figueira de Morisson Faria, M. Dörr, W. Becker and W. Sandner. Time-frequency analysis of two-color high-harmonic generation. _Phys. Rev. A_ 60 no. 2, pp. 1377–1384 (1999).
* (69) V. Averbukh, O. E. Alon and N. Moiseyev. High-order harmonic generation by molecules of discrete rotational symmetry interacting with circularly polarized laser field. _Phys. Rev. A_ 64 no. 3, p. 033411 (2001).
* (70) H. K. Avetissian, B. R. Avchyan and G. F. Mkrtchian. Generation of harmonics via multiphoton resonant excitation of hydrogenlike ions in an x-ray free-electron-laser field. _Phys. Rev. A_ 90 no. 5, p. 053812 (2014).
* (71) O. Neufeld, A. Fleischer and O. Cohen. High-order harmonic generation of pulses with multiple timescales: selection rules, carrier envelope phase and cutoff energy. _Mol. Phys._ 117 no. 15-16, pp. 1956–1963 (2019).
* (72) O. Neufeld, D. Podolsky and O. Cohen. Floquet group theory and its application to selection rules in harmonic generation. _Nat. Commun._ 10 no. 1, pp. 1–9 (2019).
* (73) A. Jašarević, E. Hasović, R. Kopold, W. Becker and D. B. Milošević. Application of the saddle-point method to strong-laser-field ionization. _J. Phys. A: Math. Gen._ In press (2020).
* (74) S. Odžak and D. B. Milošević. High-order harmonic generation in the presence of a static electric field. _Phys. Rev. A_ 72 no. 3, p. 033407 (2005).
* (75) L. E. Chipperfield, L. N. Gaier, P. L. Knight, J. P. Marangos and J. W. G. Tisch. Conditions for the reliable production of attosecond pulses using ultra-short laser-generated high harmonics. _J. Mod. Opt._ 52 no. 2–3, pp. 243–260 (2005).
* (76) D. B. Milošević. Strong-field approximation and quantum orbits. In D. Bauer (ed.), _Computational Strong-Field Quantum Dynamics: Intense Light–Matter Interactions_ , pp. 203–225 (De Gruyter, Berlin, 2017).
* (77) E. Hasović, W. Becker and D. B. Milošević. Electron rescattering in a bicircular laser field. _Opt. Express_ 24 no. 6, pp. 6413–6424 (2016).
* (78) D. B. Milošević and W. Becker. X-ray harmonic generation by orthogonally polarized two-color fields: Spectral shape and polarization. _Phys. Rev. A_ 100 no. 3, p. 031401 (2019).
* (79) D. B. Milošević. Quantum-orbit analysis of high-order harmonic generation by bicircular field. _J. Mod. Opt._ 66 no. 1, pp. 47–58 (2019).
* (80) D. B. Milošević and W. Becker. Improved strong-field approximation and quantum-orbit theory: Application to ionization by a bicircular laser field. _Phys. Rev. A_ 93 no. 6, p. 063418 (2016).
* (81) S. Mitra, S. Biswas, J. Schötz et al. Peak suppression in two-colour high harmonic generation. _J. Phys. B: At. Mol. Opt. Phys._ 53 no. 13, p. 134004 (2020).
* (82) C. Figueira de Morisson Faria and A. S. Maxwell. It is all about phases: ultrafast holographic photoelectron imaging. _Rep. Progr. Phys._ 83, p. 034401 (2019). arXiv:1906.11781.
|
2024-09-04T02:54:55.909409 | 2020-02-29T17:30:26 | 2003.00314 | {
"authors": "J. Maurice Rojas and Yuyu Zhu",
"full_text_license": null,
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"provenance": "arxiv-papers-0000.json.gz:25962",
"submitter": "J. Maurice Rojas",
"url": "https://arxiv.org/abs/2003.00314"
} | arxiv-papers | #
A Complexity Chasm for Solving Sparse Polynomial Equations Over $p$-adic
Fields
J. Maurice Rojas <EMAIL_ADDRESS> TAMU 3368, College Station, TX 77843-3368 and Yuyu Zhu <EMAIL_ADDRESS> TAMU 3368, College Station, TX 77843-3368
###### Abstract.
We reveal a complexity chasm, separating the trinomial and tetranomial cases,
for solving univariate sparse polynomial equations over certain local fields.
First, for any fixed field
$K\\!\in\\!\\{\mathbb{Q}_{2},\mathbb{Q}_{3},\mathbb{Q}_{5},\ldots\\}$, we
prove that any polynomial $f\\!\in\\!\mathbb{Z}[x_{1}]$ with exactly $3$
monomial terms, degree $d$, and all coefficients having absolute value at most
$H$, can be solved over $K$ in deterministic time $\log^{9}(dH)$ in the
classical Turing model. (The best previous algorithms were of complexity
exponential in $\log d$, even for just counting roots in $\mathbb{Q}_{p}$.) In
particular, our algorithm generates approximations in $\mathbb{Q}$ with bit-
length $\log^{8}(dH)$ to all the roots of $f$ in $K$, and these approximations
converge quadratically under Newton iteration. On the other hand, we give a
unified family of tetranomials requiring $\Omega(d\log H)$ bits to distinguish
the base-$b$ expansions of their roots in $K$.
Partially supported by NSF grants CCF-1900881 and CCF-1409020.
## 1\. Introduction
The applications of solving systems of polynomial equations are legion: The
real case permeates all of non-linear optimization as well as numerous
problems in engineering. The $p$-adic case leads to many classical questions
in number theory, and is close to many applications in cryptography, coding
theory, and computational number theory. As such, it is important to
understand the complexity of solving systems of polynomial equations over
local fields. Furthermore, the complexity of solving structured systems — such
as those with a fixed number of monomial terms or invariance with respect to a
group action — arises naturally in many computational geometric applications
and is closely related to a deeper understanding of circuit complexity (see,
e.g., [16]). Clearly, if we are to fully understand the complexity of solving
sparse polynomial systems, then we should at least be able to settle the
univariate case, e.g., classify when it is possible to separate and
approximate roots in deterministic time polynomial in the input size.
Our first main result settles the univariate case, over a fundamental family
of local fields admitting an analogue of Descartes’ Rule. More precisely,
thanks to 17th century work of Descartes, and 20th century work of Lenstra
[18] and Poonen [22], it is known that univariate polynomials with exactly $t$
monomial terms have at most $t^{O(1)}$ roots in a fixed field $K$ only when
$K$ is $\mathbb{R}$ or a finite algebraic extension of $\mathbb{Q}_{p}$ for
some prime $p\\!\in\\!\mathbb{N}$. For instance, $\mathbb{C}$ is ruled out
because $x^{d}-1$ has just $2$ monomial terms but $d$ distinct complex roots.
Also, the Laurent series fields $\mathbb{F}_{p}((\theta))$ are ruled out by an
elementary calculation [22] showing
that $\prod\limits_{z_{0},\ldots,z_{t-2}\in\mathbb{F}_{p}}(x_{1}-z_{0}-z_{1}\theta-\cdots-
z_{t-2}\theta^{t-2})$ has exactly $t$ monomial terms as a polynomial in
$\mathbb{F}_{p}[\theta][x_{1}]$, but exactly $p^{t-1}$ roots in
$\mathbb{F}_{p}[\theta]$.
We’ll use $|\cdot|_{p}$ for the usual absolute value on $\mathbb{C}_{p}$,
normalized so that $|p|_{p}\\!=\\!\frac{1}{p}$. Recall also that for any
function $f$ analytic on $K$, the corresponding Newton endomorphism is
$N_{f}(z):=z-\frac{f(z)}{f^{\prime}(z)}$, and the corresponding sequence of
Newton iterates of a $z_{0}\\!\in\\!K$ is the sequence
$(z_{i})^{\infty}_{i=0}$ where $z_{i+1}\\!:=\\!N_{f}(z_{i})$ for all
$i\\!\geq\\!0$. Finally, we call any polynomial in
$\mathbb{Z}[x_{1},\ldots,x_{n}]$ having exactly $t$ terms in its monomial term
expansion an $n$-variate $t$-nomial.
###### Theorem 1.1.
Suppose $K\\!=\\!\mathbb{Q}_{p}$ for some fixed111We clarify the dependence of
our complexity bounds on $p$ in Section 5. prime $p\\!\in\\!\mathbb{N}$. Then
there is an algorithm which, for any input trinomial
$f\\!\in\\!\mathbb{Z}[x_{1}]$ with degree $d$ and all coefficients of
(Archimedean) absolute value at most $H$, outputs a set
$\left\\{\frac{a_{1}}{b_{1}},\ldots,\frac{a_{m}}{b_{m}}\right\\}\\!\subset\\!\mathbb{Q}$
of cardinality $m\\!=\\!m(K,f)$ such that:
1\. For all $j$ we have
$a_{j}\\!\neq\\!0\Longrightarrow\log|a_{j}|,\log|b_{j}|=O\\!\left(\log^{8}(dH)\right)$.
2\. There is a $\mu\\!=\\!\mu(d,H)\\!>\\!1$ such that for all $j$ we have that
$z_{0}\\!:=\\!a_{j}/b_{j}$ implies that $f$ has a root $\zeta_{j}\in K$ such
that the corresponding sequence of Newton iterates satisfies
$|z_{i}-\zeta_{j}|_{p}\\!\leq\\!\mu^{-2^{i-1}}|z_{0}-\zeta_{j}|_{p}$ for all
$i\\!\geq\\!1$.
3\. $m$ is exactly the number of roots of $f$ in $K$ and the cardinality of
$\\{\zeta_{1},\ldots,\zeta_{m}\\}$.
Moreover, the underlying algorithm (Algorithm 5.7 in Section 5 below) takes
deterministic time $O\\!\left(\log^{9}(dH)\right)$ on a Turing machine.
We prove Theorem 1.1 in Section 5. (An analogue of Theorem 1.1 in fact holds
for $K\\!=\\!\mathbb{R}$ as well, and this will be presented in a sequel to
this paper.) We will call the convergence condition on $z_{0}$ above being an
approximate root (in the sense of Smale222This terminology has only been
applied over $\mathbb{C}$ with $\mu\\!=\\!2$ so far [28], so we take the
opportunity here to extend it to $\mathbb{Q}_{p}$ for $p$ prime.), with
associated true root $\zeta_{j}$. This type of convergence provides an
efficient encoding of an approximation that can be quickly tuned to any
desired accuracy.
###### Remark 1.2.
Defining the input size of a univariate polynomial
$f(x_{1})\\!:=\\!\sum^{t}_{i=1}c_{i}x^{a_{i}}_{1}\\!\in\\!\mathbb{Z}[x_{1}]$ as
$\sum^{t}_{i=1}\log((|c_{i}|+2)(|a_{i}|+2))$ we see that Theorem 1.1 implies
that one can solve univariate trinomial equations, over any fixed $p$-adic
field, in deterministic time polynomial in the input size. $\diamond$
###### Remark 1.3.
Efficiently solving univariate $t$-nomial equations over $K$ in the sense of
Theorem 1.1 is easier for $t\\!\leq\\!2$: The case $t\\!=\\!1$ is clearly
trivial (with $0$ the only possible root) while the case
$(K,t)\\!=\\!(\mathbb{R},2)$ is implicit in work on computer arithmetic from
the 1970s (see, e.g., [7]). We review the case
$(K,t)\\!=\\!(\mathbb{Q}_{p},2)$ with $p$ prime in Theorem 2.15 of Section 2
below. $\diamond$
Despite much work on factoring univariate polynomials over $\mathbb{Q}_{p}$
(see, e.g., [9, 12, 4, 5]), all known general algorithms for solving (or even
just counting the solutions of) arbitrary degree $d$ polynomial equations over
$\mathbb{Q}_{p}$ have complexity exponential in $\log d$. So Theorem 1.1
presents a new speed-up, and extends earlier work in [24] where it was shown
that detecting roots in $\mathbb{Q}_{p}$ for univariate trinomials can be done
in $\mathbf{NP}$ for fixed $p$. We’ll see in Section 3 how our speed-up
depends on applying Yu’s Theorem on linear forms in $p$-adic logarithms [32].
The key new ingredient in proving Theorem 1.1 is an efficient encoding of
roots in $\mathbb{Z}/\langle p^{k}\rangle$ from [17] (with an important
precursor in [5]).
### 1.1. The Chasm at Four Terms via Root Separation
Unfortunately, there are obstructions to attaining complexity polynomial in
the input size, for solving univariate polynomial equations with sufficiently
many terms. Indeed, our second main result is that the underlying root spacing
changes dramatically already at $4$ terms.
###### Theorem 1.4.
Consider the family of tetranomials
$\displaystyle f_{d,\varepsilon}(x_{1}):=x^{d}_{1}-\frac{1}{\varepsilon^{2h}}x^{2}_{1}+\frac{2}{\varepsilon^{h+1}}x_{1}-\frac{1}{\varepsilon^{2}}$
with $h\\!\in\\!\mathbb{N}$, $h\\!\geq\\!3$, and
$d\\!\in\\!\left\\{4,\ldots,\left\lfloor e^{h}\right\rfloor\right\\}$ even.
Let $H\\!:=\\!\max\\{\varepsilon^{\pm 2h}\\}$. Then $f_{d,\varepsilon}$ has
distinct roots $\zeta_{1},\zeta_{2}\\!\in\\!K$ with
$|\log|\zeta_{1}-\zeta_{2}||$ or $|\log|\zeta_{1}-\zeta_{2}|_{p}|$ of order
$\Omega(d\log H)$, according as
$(K,\varepsilon)\\!=\\!\left(\mathbb{R},\frac{1}{2}\right)$ or
$(K,\varepsilon)\\!=\\!(\mathbb{Q}_{p},p)$. In particular, the coefficients of
$f_{d,\frac{1}{2}}$ and $p^{2h}f_{d,p}$ all lie in $\mathbb{Z}$ and have bit-
length $O(\log H)$.
We prove Theorem 1.4 in Section 4. The special case $K\\!=\\!\mathbb{R}$ was
derived earlier (in different notation) by Mignotte [20]. (See also [25].) The
case $K\\!=\\!\mathbb{Q}_{p}$ with $p$ prime appears to be new, and our proof
unifies the Archimedean and non-Archimedean cases via tropical geometry
(Newton polygons in particular). Note that Theorem 1.4 implies that the roots
in $K$ of a tetranomial can be so close that one needs $\Omega(d\log H)$ many
digits to distinguish their base-$b$ expansions (for any fixed $b$) in the
worst case.
Mignotte used the tetranomial $f_{d,\frac{1}{2}}$ in [20] to show that an
earlier root separation bound of Mahler [19], for arbitrary degree $d$
polynomials in $\mathbb{Z}[x_{1}]$, is asymptotically near-optimal. We recall
the following paraphrased version:
###### Mahler’s Theorem .
Suppose $f\\!\in\\!\mathbb{Z}[x_{1}]$ has degree $d$, all coefficients of
(Archimedean) absolute value at most $H$, and is irreducible in
$\mathbb{Z}[x_{1}]$. Let $\zeta_{1},\zeta_{2}\\!\in\\!\mathbb{C}$ be distinct
roots of $f$. Then $|\log|\zeta_{1}-\zeta_{2}||\\!=\\!O(d\log(dH))$.
$\blacksquare$
Our new algorithmic results are enabled by our third and final main result:
Mahler’s bound is far from optimal for $t$-nomials with $t\\!\leq\\!3$.
###### Theorem 1.5.
Suppose $p$ is prime and $f\\!\in\\!\mathbb{Z}[x_{1}]$ is irreducible, has
exactly $3$ monomial terms, degree $d$, and all coefficients of (Archimedean)
absolute value at most $H$. Let $\zeta_{1},\zeta_{2}\\!\in\\!\mathbb{C}_{p}$
be distinct roots of $f$. Then
$|\log|\zeta_{1}-\zeta_{2}|_{p}|\\!=\\!O\\!\left(p\log^{4}(dH)\right)$.
We prove Theorem 1.5 (which arose from the unpublished Ph.D. thesis of author
Zhu) in Section 3. Theorem 1.5 is in fact a $p$-adic analogue of a separation
bound of Koiran [15]. Even sharper bounds can be derived for binomials and we
review these bounds in Section 2.2.
### 1.2. Arbitrary Sparse Polynomials
It is worth recalling that merely deciding the existence of roots over
$\mathbb{Q}_{p}$ for univariate polynomials (with an arbitrary number of
monomial terms) is $\mathbf{NP}$-hard with respect to randomized
($\mathbf{ZPP}$, a.k.a. Las Vegas) reductions [24]. Put another way, this
means that a polynomial-time algorithm for this problem would imply that
$\mathbf{NP}\\!\subseteq\\!\mathbf{ZPP}$ — a containment doubted by complexity
theorists just as much as the equality $\mathbf{P}\\!=\\!\mathbf{NP}$ [31].
Our paper is devoted to showing that a harder problem (giving a set of
approximate roots over $\mathbb{Q}_{p}$ of the correct cardinality) is doable
in polynomial-time for trinomials, but not for tetranomials, for fixed $p$.
Based on our results, we conjecture the following:
###### Conjecture 1.6.
For any fixed $p$ and $t$, there is a polynomial-time algorithm that counts
the roots in $\mathbb{Q}_{p}$ of any input $t$-nomial
$f\\!\in\\!\mathbb{Z}[x_{1}]$.
In particular, we expect the complexity to be exponential in $t+\log p$, but
polynomial in $\log(dH)$.
We also note that detecting roots in $\mathbb{Q}_{p}$ for $n$-variate
$(n+1)$-nomials is known to be doable in $\mathbf{NP}$ [24]. However, speeding
this up to polynomial-time, even for $n\\!=\\!2$ and fixed $p$, hinges upon
detecting roots in $(\mathbb{Z}/\langle p^{k}\rangle)^{2}$ for bivariate
trinomials of degree $d$ in time $(k+\log d)^{O(1)}$. The latter problem
remains open, but some progress has been made in author Zhu’s Ph.D. thesis. On
a related note, even counting points on trinomial curves over the prime fields
$\mathbb{F}_{p}$ in time $\log^{O(1)}(pd)$ remains a challenging open
question.
### 1.3. Acknowledgements
We thank Erich Bach and Bjorn Poonen for informative discussions on Hensel’s
Lemma.
Let us now review the necessary background for our proofs.
## 2\. Background
### 2.1. Newton Polygons and Newton Iteration: Archimedean and Non-
Archimedean
Definitive sources for $p$-adic arithmetic and analysis include [27, 26, 23].
We will only review a few facts necessary for our development.
We use $\operatorname{ord}_{p}:\mathbb{C}_{p}\longrightarrow\mathbb{Q}$ for
the standard $p$-adic valuation on $\mathbb{C}_{p}$, normalized so that
$\operatorname{ord}_{p}p\\!=\\!1$. The most significant ($p$-adic) digit of
$\sum\limits^{\infty}_{j=s}a_{j}p^{j}\\!\in\\!\mathbb{Q}_{p}$ is simply
$a_{s}$, assuming $a_{j}\\!\in\\!\\{0,\ldots,p-1\\}$ for all $j$ and
$a_{s}\\!\neq\\!0$, i.e., $a_{s}$ is the digit being multiplied by the power
of $p$ with largest $p$-adic norm.
The notion of Newton polygon goes back to 17th century work of Newton on
Puiseux series solutions to polynomial equations. We will need variants of
this notion over $\mathbb{C}_{p}$ and $\mathbb{C}$. (See, e.g., [30] for the
$p$-adic case and [21, 1] for the complex case.):
###### Definition 2.1.
Suppose $p$ is prime and
$f(x_{1})\\!:=\\!\sum_{i=1}^{t}c_{i}x^{a_{i}}_{1}\\!\in\\!\mathbb{Z}[x_{1}]$ with
$c_{i}\\!\neq 0$ for all $i$ and $a_{1}\\!<\\!\cdots\\!<\\!a_{t}$. We then
define $\operatorname{Newt}_{p}(f)$ (resp. $\operatorname{Newt}_{\infty}(f)$)
to be the convex hull of the set of points
$\\{(a_{i},\operatorname{ord}_{p}c_{i})\;|\;i\\!\in\\!\\{1,\ldots,t\\}\\}$
(resp. the convex hull of
$\\{(a_{i},-\log|c_{i}|)\;|\;i\\!\in\\!\\{1,\ldots,t\\}\\}$). We call an edge
$E$ of a polygon in $\mathbb{R}^{2}$ lower if and only if $E$ has an inner
normal with positive last coordinate. We also define the horizontal length of
a line segment $E$ connecting $(r,s)$ and $(u,v)$ to be
$\lambda(E)\\!:=\\!|u-r|$. $\diamond$
###### Theorem 2.2.
Following the notation above, when $p$ is prime, the number of roots of $f$ in
$\mathbb{C}_{p}$ of valuation $v$ is exactly the horizontal length of the face
of $\operatorname{Newt}_{p}(f)$ with inner normal $(v,1)$. Furthermore, if
$\operatorname{Newt}_{\infty}(f)$ has a lower edge $E$ with slope $v$, and no
other lower edges with slope in the open interval $(v-\log 3,v+\log 3)$, then
the number of roots $\zeta\\!\in\\!\mathbb{C}$ of $f$ with
$\log|\zeta|\\!\in\\!(v-\log 3,v+\log 3)$ is exactly $\lambda(E)$, counting
multiplicity. $\blacksquare$
The first portion of Theorem 2.2 goes back to early 20th century work of
Hensel, while the second portion is an immediate consequence of [1, Thm. 1.5]
(and has an important precursor in [21]).
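For readers who wish to experiment, the first part of Theorem 2.2 translates into a short computation: build the lower hull of $\operatorname{Newt}_{p}(f)$ and read off slopes and horizontal lengths. A minimal sketch, assuming $f$ is given as a list of (exponent, coefficient) pairs with nonzero integer coefficients:

```python
from fractions import Fraction

def ord_p(n, p):
    """p-adic valuation of a nonzero integer n."""
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v

def padic_root_valuations(terms, p):
    """terms = [(a_i, c_i), ...] encodes f = sum c_i * x^{a_i}.  Returns
    [(v, L), ...]: f has exactly L roots of valuation v in C_p, counted
    with multiplicity, per the first part of Theorem 2.2."""
    pts = sorted((a, ord_p(c, p)) for a, c in terms)
    hull = []                        # lower convex hull of Newt_p(f)
    for q in pts:
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            # pop hull[-1] unless it lies strictly below segment hull[-2]->q
            if (x2 - x1) * (q[1] - y1) <= (q[0] - x1) * (y2 - y1):
                hull.pop()
            else:
                break
        hull.append(q)
    # a lower edge of slope -v and horizontal length L <-> L roots of ord v
    return [(Fraction(y1 - y2, x2 - x1), x2 - x1)
            for (x1, y1), (x2, y2) in zip(hull, hull[1:])]

# Example: x^3 + 9*x + 3 over Q_3:
# padic_root_valuations([(0, 3), (1, 9), (3, 1)], 3) == [(Fraction(1, 3), 3)],
# i.e., three roots of valuation 1/3 (hence none in Q_3).
```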
We will also use the following version of Hensel’s famous criterion for the
rapid convergence of Newton’s method over $\mathbb{C}_{p}$:
###### Hensel’s Lemma .
(See, e.g., [23, Thm., pg. 48, Sec. 6.4].) Suppose $p$ is prime,
$f\in\mathbb{Z}[x]$, $j\\!\geq\\!1$, $\zeta\\!\in\\!\mathbb{Z}_{p}$,
$\ell\\!=\\!\operatorname{ord}_{p}f^{\prime}(\zeta)\\!<\\!\infty$, and
$f(\zeta)\equiv 0\mod p^{2\ell+j}$. Let
$\zeta^{\prime}\\!:=\\!\zeta-\frac{f(\zeta)}{f^{\prime}(\zeta)}$. Then
$f(\zeta^{\prime})\\!=\\!0$ mod $p^{2\ell+2j}$,
$\operatorname{ord}_{p}f^{\prime}(\zeta^{\prime})\\!=\\!\ell$, and
$\zeta\\!=\\!\zeta^{\prime}$ mod $p^{\ell+j}$. $\blacksquare$
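The quadratic convergence in Hensel's Lemma is straightforward to realize in code. A minimal sketch for the nondegenerate case $\ell\\!=\\!0$ (so $f^{\prime}(\zeta)$ is a unit mod $p$), where each Newton step doubles the working precision:

```python
def hensel_lift(f, zeta, p, steps):
    """Newton/Hensel lifting for the nondegenerate case ell = 0 of
    Hensel's Lemma: if f(zeta) = 0 mod p and f'(zeta) is a unit mod p,
    each iteration doubles the p-adic precision of the root.
    f is a coefficient list [c_0, c_1, ..., c_D] over Z."""
    def horner(cs, x, m):
        r = 0
        for c in reversed(cs):
            r = (r * x + c) % m
        return r
    df = [i * c for i, c in enumerate(f)][1:]       # coefficients of f'
    m = p
    for _ in range(steps):
        m *= m                                       # p^e -> p^(2e)
        zeta = (zeta - horner(f, zeta, m)
                     * pow(horner(df, zeta, m), -1, m)) % m
    return zeta                                      # root mod p^(2^steps)

# Example: a square root of 2 in Z_7, starting from 3^2 = 2 mod 7:
# hensel_lift([-2, 0, 1], 3, 7, 4) is a root of x^2 - 2 mod 7^16.
```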
### 2.2. Separating Roots of Binomials
When $f\\!\in\\!\mathbb{Z}[x_{1}]$ is a binomial, all of its roots in
$\mathbb{C}$ are multiples of roots of unity that are evenly spaced on a
circle. The same turns out to be true over $\mathbb{C}_{p}$, but the root
spacing then depends more subtly on the degree and $p$. For convenience, we
will sometimes write $|\cdot|_{\infty}$ instead of $|\cdot|$ for the standard
norm on $\mathbb{C}$. In summary, we have the following:
###### Proposition 2.3.
Suppose $f(x_{1})\\!:=\\!c_{1}+c_{2}x^{d}_{1}\\!\in\\!\mathbb{Z}[x_{1}]$,
$c_{1}c_{2}\\!\neq\\!0$, and $|c_{1}|,|c_{2}|\\!\leq\\!H$. Also let
$p\\!\in\\!\\{\infty,2,3,5,\ldots\\}$ and let $\overline{K}_{p}$ denote
$\mathbb{C}$ or $\mathbb{C}_{p}$, according as $p\\!=\\!\infty$ or $p$ is
prime. Then for any distinct roots
$\zeta_{1},\zeta_{2}\\!\in\\!\overline{K}_{p}$ of $f$, we have that
$|\log|\zeta_{1}-\zeta_{2}|_{p}|$ is bounded from above by
$\begin{cases}\frac{3}{2}\log(3)+\log\\!\left(dH^{1/d}\right);&\text{for }p\\!=\\!\infty\text{ and }d\\!\geq\\!3,\\\ \frac{\log p}{d}\log H;&\text{for }p\text{ prime and }p\nmid d,\\\ \frac{\log p}{d}\log(H)+\frac{\log p}{p-1};&\text{for }p\text{ prime and }p\mid d.\end{cases}$ $\blacksquare$
The case $p\\!=\\!\infty$ is immediate (using a very rough estimate for the
distance between the vertices of a regular $d$-gon), while the case of prime
$p$ follows easily from the ultrametric inequality and classical facts on the
spacing of $p$-adic roots of unity (see, e.g., [23, Cor. 1, Pg. 105, Sec. 4.3
& Thm. Pg. 107, Sec. 4.4]). While we won’t need Proposition 2.3 for our
upcoming algorithms, it is an important precursor to our trinomial root
separation bound (Theorem 1.5), and is algorithmically relevant for more
general problems like approximating roots in angular sectors in $\mathbb{C}$
or finite extensions of $\mathbb{Q}_{p}$.
### 2.3. Counting Roots of Binomials Over $\mathbb{Q}_{p}$
To efficiently solve a binomial $f$ over $\mathbb{Q}_{p}$, it helps to first
find a fast way to count the roots of $f$ in $\mathbb{Q}_{p}$, and then to
find a fast way to incrementally compute the base-$p$ expansions of the roots
of $f$ in $\mathbb{Q}_{p}$. Counting roots over $\mathbb{Q}_{p}$ is more
involved than counting roots over $\mathbb{R}$, but is still quite efficiently
doable. For any ring $R$ we will let $R^{*}$ denote the multiplicatively
invertible elements of $R$.
###### Lemma 2.4.
Suppose $p$ is an odd prime and
$f(x_{1})\\!:=\\!c_{1}+c_{2}x^{d}_{1}\\!\in\\!\mathbb{Z}[x_{1}]$ with
$|c_{1}|,|c_{2}|\\!\leq\\!H$, $c_{1}c_{2}\\!\neq\\!0$, and
$\ell\\!:=\\!\operatorname{ord}_{p}d$. Then the number of roots of $f$ in
$\mathbb{Q}_{p}$ is either $0$ or $\gcd(d,p-1)$. In particular, $f$ has roots
in $\mathbb{Q}_{p}$ if and only if both of the following conditions hold:
1\. $d|\operatorname{ord}_{p}(c_{1}/c_{2})$.
2\.
$\left(-\frac{c_{1}}{c_{2}}p^{\operatorname{ord}_{p}(c_{2}/c_{1})}\right)^{p^{\ell}(p-1)/\gcd(d,p-1)}\\!=\\!1$
mod $p^{2\ell+1}$. $\blacksquare$
Lemma 2.4 is classical and follows from basic group theory and [24, Cor. 3.2].
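Both conditions of Lemma 2.4 reduce to modular arithmetic, with the exponentiation in condition 2 done by recursive squaring (Python's built-in `pow`). A minimal sketch for odd primes:

```python
from math import gcd

def ord_p(n, p):
    """p-adic valuation of a nonzero integer n."""
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v

def count_binomial_roots(c1, c2, d, p):
    """Number of roots of c1 + c2*x^d in Q_p for an odd prime p, by
    checking conditions 1 and 2 of Lemma 2.4; the answer is either 0
    or gcd(d, p-1)."""
    assert p > 2 and c1 != 0 and c2 != 0
    if (ord_p(c1, p) - ord_p(c2, p)) % d != 0:        # condition 1
        return 0
    ell = ord_p(d, p)
    m = p ** (2 * ell + 1)
    u1 = c1 // p ** ord_p(c1, p)                      # unit parts
    u2 = c2 // p ** ord_p(c2, p)
    u = (-u1 * pow(u2, -1, m)) % m                    # unit part of -c1/c2
    e = p ** ell * (p - 1) // gcd(d, p - 1)
    return gcd(d, p - 1) if pow(u, e, m) == 1 else 0  # condition 2

# Example: x^2 + 1 has two roots in Q_5: count_binomial_roots(1, 1, 2, 5) == 2.
```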
###### Remark 2.5.
A criterion similar to Lemma 2.4 also holds for detecting roots in
$\mathbb{Q}_{2}$ for binomials (see, e.g., [24, Cor. 3.2]). However, for
brevity’s sake, we will not pursue the case $p\\!=\\!2$ further in the present
version of this paper. $\diamond$
Since powers in finite groups are easily computable via recursive squaring
(see, e.g., [3, pp. 102–103]), we easily obtain the following via Hensel’s
Lemma, Proposition 2.3, Lemma 2.4, and standard complexity bounds on modular
arithmetic [3, 29]:
###### Corollary 2.6.
Following the notation and assumptions of Lemma 2.4, one can count exactly the
number of roots of $f$ in $\mathbb{Q}_{p}$ in time
$O\\!\left(\log^{3}(pdH)\right)$. Furthermore, for any such root
$\zeta\\!\in\\!\mathbb{Q}_{p}$ there is an
$x_{0}\\!\in\\!\mathbb{Z}\left/\left\langle p^{2\ell+1}\right\rangle\right.$
that is a root of the mod $p^{2\ell+1}$ reduction of
$\frac{c_{1}}{p^{\operatorname{ord}_{p}c_{1}}}+\frac{c_{2}}{p^{\operatorname{ord}_{p}c_{2}}}x^{d}_{1}$,
and with
$z_{0}\\!:=\\!p^{\operatorname{ord}_{p}(c_{2}/c_{1})/d}x_{0}\\!\in\\!\mathbb{Q}$
an approximate root of $f$ with associated true root $\zeta$. In particular,
the logarithmic height of $z_{0}$ is
$O\\!\left(\log\left(dH^{1/d}\right)\right)$. $\blacksquare$
At this point, one may wonder how one can find a suitable $x_{0}$ as in
Corollary 2.6. Naive brute-force search can take time super-linear in
$p^{\ell}$, but in the next section we’ll see a combinatorial encoding
enabling a much better time bound of
$O\\!\left(p\log^{2}(p)\log\gcd(d,p-1)+\log^{3}(pdH)\gcd(d,p-1)\right)$.
### 2.4. Trees and Roots in $\mathbb{Z}/\langle p^{k}\rangle$ and
$\mathbb{Z}_{p}$
A key tool we will need is a tree structure that will enable us to easily
approximate the first few base-$p$ digits of the roots of an arbitrary
univariate polynomial.
###### Definition 2.7.
[17] Let $p\\!\in\\!\mathbb{N}$ be prime. For any $f\in\mathbb{Z}[x_{1}]$ let
$\tilde{f}$ denote the mod $p$ reduction of $f$. For any degenerate root
$\zeta_{0}$ of $\tilde{f}$, we define
$\displaystyle
s(f,\zeta_{0}):=\min_{i\geq 0}\left\\{i+\operatorname{ord}_{p}\frac{f^{(i)}(\zeta_{0})}{i!}\right\\}.$
Fixing $k\in\mathbb{N}$, for $i\geq 1$, let us inductively define a set
$T_{p,k}(f)$ of pairs
$(f_{i-1,\mu},k_{i-1,\mu})\in\mathbb{Z}[x_{1}]\times\mathbb{N}$. We set
$(f_{0,0},k_{0,0}):=(f,k)$. Then for any $i\geq 1$ with
$(f_{i-1,\mu},k_{i-1,\mu})\in T_{p,k}(f)$, and any degenerate root
$\zeta_{i-1}\in\mathbb{Z}/p\mathbb{Z}$ of $\tilde{f}_{i-1,\mu}$ with
$\displaystyle
s_{i-1}:=s(f_{i-1,\mu},\zeta_{i-1})\in\\{2,\cdots,k_{i-1,\mu}-1\\},$
we define
$\zeta:=\mu+p^{i-1}\zeta_{i-1},k_{i,\zeta}:=k_{i-1,\mu}-s(f_{i-1,\mu},\zeta_{i-1})$,
and
$\displaystyle
f_{i,\zeta}(x):=\left[\frac{1}{p^{s(f_{i-1,\mu},\zeta_{i-1})}}f_{i-1,\mu}(\zeta_{i-1}+px)\right]\mod
p^{k_{i,\zeta}}.\text{ \ $\diamond$}$
Note that $\frac{f^{(i)}(\zeta_{0})}{i!}$ is the coefficient of $x^{i}$ in the
Taylor expansion of $f(x+\zeta_{0})$ about $0$, and is thus an integer since
$f\\!\in\\!\mathbb{Z}[x_{1}]$.
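To make the recursion concrete, the following Python sketch carries out one step of the construction above by a brute-force Taylor shift (the function names are ours, and the complexity is far from the $(\log p)^{1+o(1)}$ achievable later):

```python
from math import comb

def ord_p(n, p):                       # p-adic valuation of a nonzero integer
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v

def taylor_shift(coeffs, a):
    """Coefficients of f(x + a); coeffs[i] is the coefficient of x^i in f."""
    out = [0] * len(coeffs)
    for i, c in enumerate(coeffs):
        for j in range(i + 1):
            out[j] += c * comb(i, j) * a ** (i - j)
    return out

def next_nodal(coeffs, zeta0, p, k):
    """One step of Definition 2.7: returns s(f, zeta0), the coefficients of
    f_{1,zeta0} mod p^{k-s}, and the new precision k - s."""
    g = taylor_shift(coeffs, zeta0)
    s = min(j + ord_p(gj, p) for j, gj in enumerate(g) if gj != 0)
    m = p ** (k - s)
    h = [(gj * p ** j // p ** s) % m for j, gj in enumerate(g)]
    return s, h, k - s

# e.g. f = x^10 - 10x + 738, p = 3, zeta0 = 1, k = 7 (cf. Example 5.3 below):
f = [738, -10] + [0] * 8 + [1]
print(next_nodal(f, 1, 3, 7))   # s = 4 and f_{1,1} = 21x^4 + 13x^3 + 5x^2 + 9 mod 27
```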
The collection of pairs $(f_{i,\zeta},k_{i,\zeta})$ admits a tree structure
that will give us a way of extending Hensel lifting to degenerate roots.
###### Definition 2.8.
[17] Let us identify the elements of $T_{p,k}(f)$ with nodes of a labelled
rooted directed tree $\mathcal{T}_{p,k}(f)$ defined inductively as
follows (this definition differs slightly from the original in [17]: there
are no edge labels in our version here since we won’t need them for root
counting in $\mathbb{Z}_{p}$):
* i.
We set $f_{0,0}\\!:=\\!f$, $k_{0,0}\\!:=\\!k$, and let $(f_{0,0},k_{0,0})$ be
the label of the root node of $\mathcal{T}_{p,k}(f)$.
* ii.
The non-root nodes of $\mathcal{T}_{p,k}(f)$ are uniquely labelled by each
$(f_{i,\zeta},k_{i,\zeta})\in T_{p,k}(f)$ with $i\\!\geq\\!1$.
* iii.
There is an edge from node
$(f_{i^{\prime},\zeta^{\prime}},k_{i^{\prime},\zeta^{\prime}})$ to node
$(f_{i,\zeta},k_{i,\zeta})$ if and only if $i^{\prime}\\!=\\!i-1$ and there is
a degenerate root $\zeta_{i-1}\\!\in\\!\mathbb{Z}/(p)$ of
$\tilde{f}_{i^{\prime},\zeta^{\prime}}$ with
$s(f_{i^{\prime},\zeta^{\prime}},\zeta_{i-1})\in\\{2,\ldots,k_{i^{\prime},\zeta^{\prime}}-1\\}$
and
$\zeta\\!=\\!\zeta^{\prime}+p^{i-1}\zeta_{i-1}\\!\in\\!\mathbb{Z}/(p^{i})$.
In particular, the labels of the nodes lie in $\mathbb{Z}[x]\times\mathbb{N}$.
$\diamond$
We call each $f_{i,\zeta}$ a nodal polynomial of $\mathcal{T}_{p,k}(f)$. It is
in fact possible to easily read off the roots of $f$ in $\mathbb{Z}/\langle
p^{k}\rangle$ from $\mathcal{T}_{p,k}(f)$ [17]. We will instead use
$\mathcal{T}_{p,k}(f)$, with $k$ chosen via our root separation bounds, to
efficiently count the roots of $f$ in $\mathbb{Z}_{p}$ (and thus in
$\mathbb{Q}_{p}$ via rescaling).
###### Example 2.9.
Let $f(x_{1})\\!=\\!1-x^{397}_{1}$. Then $\mathcal{T}_{17,k}(f)$, for any
$k\\!\geq\\!1$, consists of a single node, labelled $(1-x^{397}_{1},k)$. Note
that $1$ is the unique root of $f$ in $\mathbb{Q}_{17}$, $1$ is a non-
degenerate root of $\tilde{f}$, and $1$ is the only root of $\tilde{f}$ in
$\mathbb{F}_{17}$. $\diamond$
###### Example 2.10.
Let $f(x_{1})\\!=\\!1-x^{340}_{1}$. Then, when $k\\!\in\\!\\{1,2\\}$, the tree
$\mathcal{T}_{17,k}(f)$ consists of a single root node, labelled
$(1-x^{340}_{1},k)$. However, when $k\\!\geq\\!3$, the tree
$\mathcal{T}_{17,k}(f)$ has depth $1$, and consists of the aforementioned root
node and exactly $4$ child nodes. The child nodes are labelled
$(f_{1,\zeta},k-2)$ where the $\tilde{f}_{1,\zeta}$ are, respectively,
$3x_{1}$, $5x_{1}+7$, $12x_{1}+2$, and $14x_{1}+14$. Note that this $f$ has
exactly $4$ roots in $\mathbb{Q}_{17}$: $1$, $4+2\cdot 17+\cdots$, $13+14\cdot
17+\cdots$, and $16+16\cdot 17+\cdots$. In particular, the most significant
digits ($1$, $4$, $13$, and $16$) of these roots are exactly the roots in
$\mathbb{F}_{17}$ of $\tilde{f}_{0,0}$ (the mod $17$ reduction of $f$); and
the next digits ($0$, $2$, $14$, and $16$) of these roots in
$\mathbb{Q}_{17}$ are exactly the roots in $\mathbb{F}_{17}$ of the nodal
polynomials $f_{1,\zeta_{0}}$ of the $4$ child nodes. $\diamond$
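A quick numerical sanity check of the digits listed above (our own illustration): since each degenerate root here has $s\\!=\\!2$, a prefix agreeing with a true root to two base-$17$ digits must annihilate $f$ mod $17^{3}$:

```python
p, d = 17, 340
for prefix in (1, 4 + 2 * 17, 13 + 14 * 17, 16 + 16 * 17):
    print(prefix, (1 - pow(prefix, d, p ** 3)) % p ** 3 == 0)   # expect True each time
```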
It will be important to recall the following properties of nodal polynomials:
###### Lemma 2.11.
[17, Lemmata 2.2 & 3.6] Following the preceding notation, suppose
$i\\!\geq\\!1$, $\zeta_{i-1}\\!\in\\!\mathbb{F}_{p}$ is a degenerate root of
$\tilde{f}_{i-1,\mu}$ with multiplicity $m$, and $\zeta\\!=\\!\mu+p^{i-1}\zeta_{i-1}$.
Then $\mathcal{T}_{p,k}(f)$ has depth
$\\!\leq\\!\left\lfloor(k-1)/2\right\rfloor$,
$\deg\tilde{f}_{i,\zeta}\\!\leq\\!s(f_{i-1,\mu},\zeta_{i-1})$, and
$s(f_{i-1,\mu},\zeta_{i-1})\\!\in\\!\\{2,\ldots,m\\}$. Also,
$f_{i,\zeta}(x)\\!=\\!\frac{1}{p^{s}}f(\zeta_{0}+\zeta_{1}p+\cdots+\zeta_{i-1}p^{i-1}+p^{i}x)$
where
$s\\!:=\\!\sum\limits^{i-1}_{j=0}s(f_{j,\zeta_{0}+\cdots+\zeta_{j-1}p^{j-1}},\zeta_{j})$.
$\blacksquare$
Let $n_{p}(f)$ denote the number of non-degenerate roots in $\mathbb{F}_{p}$
of the mod $p$ reduction of $f$.
###### Lemma 2.12.
If $f\\!\in\\!\mathbb{Z}[x_{1}]$, $D$ is the maximum of
$\operatorname{ord}_{p}(\zeta_{1}-\zeta_{2})$ over all
$\zeta_{1},\zeta_{2}\\!\in\\!\mathbb{Z}_{p}$ with
$f(\zeta_{1})\\!=\\!f(\zeta_{2})\\!=\\!0\\!\neq\\!f^{\prime}(\zeta_{1})f^{\prime}(\zeta_{2})$,
and $k\\!\geq\\!1+D$, then $f$ has exactly
$\displaystyle{\sum\limits_{(f_{i,\zeta},k_{i,\zeta})\in\mathcal{T}_{p,k}(f)}\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!n_{p}(f_{i,\zeta})}$
non-degenerate roots in $\mathbb{Z}_{p}$.
Proof: First note that $\tilde{f}_{i,\zeta}$ having $\zeta_{i}$ as a non-
degenerate root in $\mathbb{F}_{p}$ implies that
$\operatorname{ord}_{p}f^{\prime}_{i,\zeta}(\zeta_{i})\\!=\\!0$. By Lemma
2.11,
$f_{i,\zeta}(x)\\!=\\!\frac{1}{p^{s}}f(\zeta_{0}+\cdots+\zeta_{i-1}p^{i-1}+p^{i}x)$.
So by Hensel’s Lemma, $\zeta+\zeta_{i}p^{i}$ lifts to a unique non-degenerate
root of $f$ in $\mathbb{Z}_{p}$. In particular, these lifted roots in
$\mathbb{Z}_{p}$ are distinct because they differ somewhere in their $i+1$
most significant digits.
In other words, we have shown that
$\sum\limits_{(f_{i,\zeta},k_{i,\zeta})\in\mathcal{T}_{p,k}(f)}n_{p}(f_{i,\zeta})$
is a lower bound on the number of non-degenerate roots of $f$ in
$\mathbb{Z}_{p}$. To see that we obtain all non-degenerate roots of $f$ in
$\mathbb{Z}_{p}$ this way, simply note that any two distinct non-degenerate roots of
$f$ have distinct mod $p^{k}$ reductions, by our definition of $k$. So we are
done. $\blacksquare$
### 2.5. Trees and Solving Binomial Equations Digit by Digit
The following proposition is elementary upon counting the powers of $p$ that
divide the factors of the expression $\frac{d\cdot(d-1)\cdots(d-j+1)}{1\cdot
2\cdots j}$.
###### Proposition 2.13.
If $p$ is an odd prime, $d\\!\in\\!\mathbb{N}$ with $d\\!\geq\\!2$,
$\ell\\!:=\\!\operatorname{ord}_{p}d$, and $j\\!\in\\!\\{2,\ldots,d-1\\}$,
then $j+\operatorname{ord}_{p}\binom{d}{j}\\!\geq\\!2+\ell$. $\blacksquare$
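For the skeptical reader, the inequality is easy to spot-check exhaustively on a small range (this verifies only the sampled cases, of course, not the proposition itself):

```python
from math import comb

def ord_p(n, p):                 # p-adic valuation of a nonzero integer
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v

for p in (3, 5, 7, 11):
    for d in range(2, 80):
        ell = ord_p(d, p)
        assert all(j + ord_p(comb(d, j), p) >= 2 + ell for j in range(2, d))
print("Proposition 2.13 holds on the sampled range")
```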
The following lemma is a useful property of $\mathcal{T}_{p,k}(f)$ when $f$ is
a binomial.
###### Lemma 2.14.
Suppose $p$ is an odd prime,
$f(x_{1})\\!=\\!c_{1}+c_{2}x^{d}_{1}\\!\in\\!\mathbb{Z}[x_{1}]$ with $p\nmid
c_{1}c_{2}$, and $\ell\\!:=\\!\operatorname{ord}_{p}d$. Then
$k\\!\geq\\!\ell+2$ implies that $\mathcal{T}_{p,k}(f)$ has depth $0$ or $1$,
according as $\ell$ is $0$ or not.
Proof: Note that any root $\zeta_{0}\\!\in\\!\mathbb{F}_{p}$ of $\tilde{f}$
must be nonzero.
So, for $j\\!\geq\\!1$, $\operatorname{ord}_{p}\frac{f^{(j)}(\zeta_{0})}{j!}=\operatorname{ord}_{p}\\!\left(\binom{d}{j}c_{2}\zeta_{0}^{d-j}\right)=\operatorname{ord}_{p}\binom{d}{j}$.
We then obtain by Proposition 2.13 that
$s(f,\zeta_{0})\\!=\\!\min\\{\operatorname{ord}_{p}f(\zeta_{0}),1+\ell\\}\\!\leq\\!k-1$,
and thus
$\tilde{f}_{1,\zeta_{0}}(x_{1})\\!=\\!\frac{f(\zeta_{0})}{p^{s(f,\zeta_{0})}}+\frac{c_{2}d\zeta^{d-1}_{0}}{p^{s(f,\zeta_{0})}}x_{1}$.
So then, no $\tilde{f}_{1,\zeta_{0}}$ can have a degenerate root and we are
done. $\blacksquare$
With our tree-based encoding of $p$-adic roots in place, we can now prove that
it is easy to find approximate roots in $\mathbb{Q}_{p}$ for binomials when
$p$ is fixed.
###### Theorem 2.15.
Suppose $f\\!\in\\!\mathbb{Z}[x_{1}]$ is a binomial of degree $d$ with
coefficients of absolute value at most $H$, $f(0)\\!\neq\\!0$,
$\gamma\\!=\\!\gcd(d,p-1)$, and $\\{\zeta_{1},\ldots,\zeta_{\gamma}\\}$ is the
set of roots of $f$ in $\mathbb{Q}_{p}$. Then in time
$O\\!\left(p\log^{2}(p)\log\gamma+\log^{3}(pdH)\gamma\right)$,
we can find, for all $j\\!\in\\!\\{1,\ldots,\gamma\\}$, a
$z_{0}\\!\in\\!\mathbb{Q}$ of logarithmic height
$O\\!\left(\log\left(dH^{1/d}\right)\right)$ that is an approximate root of
$f$ with associated true root $\zeta_{j}$.
An algorithm that proves Theorem 2.15, and more, can be outlined as follows:
###### Algorithm 2.16.
(Solving Binomial Equations Over $\boldsymbol{\mathbb{Q}_{p}}$)
Input. An odd prime $p$ and $c_{1},c_{2},d\\!\in\\!\mathbb{Z}\setminus\\{0\\}$
with $|c_{i}|\\!\leq\\!H$ for all $i$.
Output. A true declaration that $f(x_{1})\\!:=\\!c_{1}+c_{2}x^{d}$ has no
roots in $\mathbb{Q}_{p}$, or $z_{1},$$\ldots,z_{\gamma}\\!\in\\!\mathbb{Q}$
with logarithmic height $O\\!\left(\log\left(dH^{1/d}\right)\right)$ such that
$\gamma\\!=\\!\gcd(d,p-1)$, $z_{j}$ is an approximate root of $f$ with
associated true root $\zeta_{j}\\!\in\\!\mathbb{Q}_{p}$ for all $j$, and the
$\zeta_{j}$ are pair-wise distinct.
Description.
1: If $\operatorname{ord}_{p}c_{1}\\!\neq\\!\operatorname{ord}_{p}c_{2}$ mod
$d$ then say ‘‘No roots in $\mathbb{Q}_{p}$!’’ and STOP.
2: Rescale roots so that we may replace $c_{i}$ by
$\frac{c_{i}}{p^{\operatorname{ord}_{p}c_{i}}}$ for all $i$.
3: Invert roots if necessary so we may replace $d$ by $|d|$.
4: Let $\ell\\!:=\\!\operatorname{ord}_{p}d$.
5: If $\left(-\frac{c_{1}}{c_{2}}\right)^{p^{\ell}(p-1)/\gamma}\\!\neq\\!1$
mod $p^{2\ell+1}$ then say ‘‘No roots in $\mathbb{Q}_{p}$!’’ and STOP.
6: Let $r\\!:=\\!(d/\gamma)^{-1}$ mod $\frac{p-1}{\gamma}$,
$c\\!:=\\!(-c_{1}/c_{2})^{r}$ mod $p$, and let
$\tilde{h}(x_{1})\\!:=\\!x^{\gamma}_{1}-c$.
7: Find a root $x^{\prime}\\!\in\\!\\{1,\ldots,p-1\\}$ of $\tilde{h}$ via
brute-force search and then let
$x_{1}\\!:=\\!(x^{\prime})^{d/\gamma}$ mod $p$.
8: For all $j\\!\in\\!\\{2,\ldots,\gamma\\}$ let
$x_{j}\\!:=\\!x_{j-1}g^{(p-1)/\gamma}$ mod $p$, where $g$ is a generator of
$\mathbb{F}^{*}_{p}$ (also found via brute-force search).
9: If $\ell\\!\geq\\!1$ then, for each $j\\!\in\\!\\{1,\ldots,\gamma\\}$,
replace $x_{j}$ by
$x_{j}-\frac{f(x_{j})/p^{\ell}}{f^{\prime}(x_{j})/p^{\ell}}\\!\in\\!\mathbb{Z}/\langle
p^{2\ell+1}\rangle$.
10: Output
$\left\\{x_{1}p^{\operatorname{ord}_{p}(c_{2}/c_{1})/d},\ldots,x_{\gamma}p^{\operatorname{ord}_{p}(c_{2}/c_{1})/d}\right\\}$.
###### Remark 2.17.
While we have strived for simplicity in Algorithm 2.16, its complexity is
obviously super-linear in $p$, vis à vis Step 7. However, since our main goal
is to prove polynomial-time deterministic complexity for fixed $p$, the
dependence on $p$ does not matter for our theoretical purposes. If one allows
randomization then there are techniques that can considerably improve the
practical complexity (see, e.g., [3, Ch. 7, Sec. 3]). $\diamond$
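As a concrete illustration of the easy case, the sketch below handles $\ell\\!=\\!0$ (so $p\nmid c_{1}c_{2}d$ and every root of $\tilde{f}$ is non-degenerate): brute-force the leading digit as in Step 7, then refine by Newton/Hensel iteration. The function names and the test instance are ours; this is a sketch, not the full Algorithm 2.16.

```python
from math import gcd

def binomial_digits_mod_p(c1, c2, d, p):
    """Leading base-p digits of the roots of f = c1 + c2*x^d in Z_p,
    assuming p is an odd prime with p not dividing c1*c2*d."""
    roots = [x for x in range(1, p) if (c1 + c2 * pow(x, d, p)) % p == 0]
    assert len(roots) in (0, gcd(d, p - 1))              # cf. Lemma 2.4
    return roots

def hensel_lift(c1, c2, d, p, x0, k):
    """Newton-lift a non-degenerate root x0 (mod p) to a root mod p^k;
    the precision roughly doubles at each iteration."""
    m, x = p, x0
    while m < p ** k:
        m = min(m * m, p ** k)
        fx = (c1 + c2 * pow(x, d, m)) % m
        dfx = (c2 * d * pow(x, d - 1, m)) % m
        x = (x - fx * pow(dfx, -1, m)) % m
    return x

# e.g. the 7-adic 4th roots of 2: leading digits 2 and 5, lifted to 5 digits
for x0 in binomial_digits_mod_p(-2, 1, 4, 7):
    z = hensel_lift(-2, 1, 4, 7, x0, 5)
    print(z, (z ** 4 - 2) % 7 ** 5)                      # each second entry is 0
```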
Proof of Theorem 2.15: It clearly suffices to prove the correctness of
Algorithm 2.16, and then to establish a sufficiently good complexity bound.
Correctness: Theorem 2.2 implies that Step 1 merely checks whether the
valuations of the roots of $f$ in $\mathbb{C}_{p}$ in fact lie in
$\mathbb{Z}$, which is necessary for $f$ to have roots in $\mathbb{Q}_{p}$.
Steps 2 and 3 merely involve substitutions of the form $x_{1}\leftarrow
p^{\delta}x_{1}$ and $x_{1}\leftarrow x^{-1}_{1}$, and it is elementary to
check that the bit length of any resulting approximate roots in $\mathbb{Q}$
remains within $O\\!\left(\log\left(dH^{1/d}\right)\right)$.
Lemma 2.4 implies that Steps 4–5 merely check that the coset of roots of $f$
in $\mathbb{C}_{p}$ intersects $\mathbb{Z}_{p}$.
Step 6 is merely the application of an automorphism of $\mathbb{F}^{*}_{p}$
(that preserves the existence of roots of $\tilde{f}$ in $\mathbb{F}^{*}_{p}$)
that enables us to work with a binomial of degree $\gamma$ ($<\\!d$).
Steps 7–10 then clearly find the correct coset of $\mathbb{F}^{*}_{p}$ that
makes $f$ vanish. In particular, since $\mathbb{F}^{*}_{p}$ is cyclic, Step 9
clearly gives the correct output if $\ell\\!=\\!0$.
If $\ell\\!>\\!0$ then each nodal polynomial $f_{1,\zeta_{0}}$ is of degree
$\leq\\!1$ (thanks to Lemma 2.14) and, by Definition 2.7, its unique root in
$\mathbb{Z}/\langle p^{2\ell+1-s}\rangle$ determines the next $2\ell+1-s$
base-$p$ digits of a unique root of $f$ in $\mathbb{Z}_{p}$. (See also the
proof of [17, Lemma 1.5].) So Steps 8–10 indeed give us suitable approximants
in $\mathbb{Q}$ to all the roots in $\mathbb{Z}_{p}$ of (our renormalized)
$f$. So our algorithm is correct.
Note also that the outputs, being numbers in $\mathbb{Z}/\langle
p^{2\ell+1}\rangle$, rescaled by a factor of
$p^{\operatorname{ord}_{p}(c_{2}/c_{1})/d}$, clearly each have bit-length
$O\\!\left(\operatorname{ord}_{p}(d)\log(p)+\frac{|\log(c_{2}/c_{1})|}{d\log
p}\log p\right)\\!=\\!O(\log(d)+\frac{\log H}{d})\\!=\\!O(\log(dH^{1/d}))$.
$\blacksquare$
Complexity Analysis: Via Corollary 2.6, and some additional elementary bit
complexity estimates for modular arithmetic, it is clear that, save for Steps
7–10, Algorithm 2.16 has complexity $O(\gamma\log^{3}(pdH))$. Evaluating a
$\gamma$th power mod $p$ takes time $O(\log(\gamma)\log^{2}p)$ via recursive
squaring (using grade-school multiplication). Step 7 (whose complexity
dominates that of Steps 8–10), consists of no more than $p-1$ evaluations of a
$\gamma$th power mod $p$. This takes time
$O\\!\left(p\log^{2}(p)\log\gamma\right)$ so we are done. $\blacksquare$
## 3\. Separating Roots of Trinomials in $\mathbb{Q}_{p}$: Proving Theorem
1.5
In the last section, we saw that we could pick a moderately-sized $\ell$ and
then lift roots of a binomial in $\mathbb{Z}/\langle p^{2\ell+1}\rangle$ to
roots in $\mathbb{Z}_{p}$. The same strategy will work for trinomials, but it
will be much harder to prove that a moderately-sized $\ell$ exists.
Toward this end, let us first recall the following version of Yu’s Theorem:
###### Theorem 3.1.
[32, Cor. 1] Suppose $p$ is any prime; let $\alpha_{1},\ldots,\alpha_{n}$ be
$n$ ($\geq\\!2$) nonzero rational numbers, with $\alpha_{i}=r_{i}/s_{i}$ the
reduced fraction for each $i$; and $b_{1},\cdots,b_{n}$ are integers not all
zero. Then $\alpha_{1}^{b_{1}}\cdots\alpha_{n}^{b_{n}}\neq 1$ implies that
$\alpha_{1}^{b_{1}}\cdots\alpha_{n}^{b_{n}}-1$ has $p$-adic valuation bounded
from above by
$11145\left(\frac{24(n+1)^{2}}{\log
p}\right)^{n+2}(p-1)\left(\prod^{n}_{i=1}\log A_{i}\right)\log(4B)\\\
\max\left\\{\log(2^{12}\cdot 3n(n+1)\log A_{n}),\frac{\log p}{n}\right\\},$
where $B\\!:=\\!\max\\{|b_{1}|,\ldots,|b_{n}|,3\\}$, and $A_{1},\ldots,A_{n}$
are real numbers such that $A_{1}\leq\cdots\leq A_{n}$ and, for each $j$,
$A_{j}\geq\max\\{|r_{j}|,|s_{j}|,p\\}$. $\blacksquare$
In order to prove our separation bound for roots of trinomials, we will follow
three steps for a given trinomial $f$:
1. 1.
Use Yu’s Theorem on linear forms in $p$-adic logarithms to prove that for
$m\in\mathbb{C}_{p}$ such that $f^{\prime}(m)=0$, $\left|f(m)\right|_{p}$
cannot be too small.
2. 2.
Show that $\left|f^{\prime}(m)\right|_{p}$ cannot be too large for
$\left|m\right|_{p}\leq 1$, since $f(0)\neq 0$ by construction.
3. 3.
Use a $p$-adic adaptation of Rolle’s Theorem to obtain our bound.
To expand on this approach, first note that by a simple computation, we obtain
the following useful formula:
###### Proposition 3.2.
Fix a prime $p$ and consider a
trinomial $f(x_{1})=c_{1}+c_{2}x^{a_{2}}_{1}+c_{3}x^{a_{3}}_{1}\\!\in\\!\mathbb{Z}_{p}[x_{1}]$
where $a_{3}\\!>\\!a_{2}\\!\geq\\!1$. If $f^{\prime}(m)=0$ at some point
$m\in\mathbb{C}_{p}$, then $m^{a_{3}-a_{2}}=-\frac{a_{2}c_{2}}{a_{3}c_{3}}$.
Moreover, $f(m)=c_{1}+c_{2}m^{a_{2}}\left(1-\frac{a_{2}}{a_{3}}\right)$.
$\blacksquare$
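The computation is short enough to verify symbolically; here is a sympy spot-check for one choice of exponents (our own illustration, with $(a_{2},a_{3})\\!=\\!(2,5)$; the general case is the same one-line elimination):

```python
import sympy as sp

c1, c2, c3, m = sp.symbols('c1 c2 c3 m')
a2, a3 = 2, 5                                   # a concrete (a2, a3) with a3 > a2 >= 1
f = c1 + c2 * m**a2 + c3 * m**a3
# f'(m) = 0 forces m^(a3-a2) = -a2*c2/(a3*c3); eliminate m^a3 accordingly:
fm = f.subs(m**a3, m**a2 * (-sp.Rational(a2, a3)) * c2 / c3)
print(sp.simplify(fm - (c1 + c2 * m**a2 * (1 - sp.Rational(a2, a3)))))   # 0
```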
We want to show that if $m$ is as in Proposition 3.2 then
$\left|f(m)\right|_{p}$ cannot be too small if it is nonzero.
###### Lemma 3.3.
Let
$f(x_{1})=c_{1}+c_{2}x^{a_{2}}_{1}+c_{3}x^{a_{3}}_{1}\\!\in\\!\mathbb{Z}_{p}[x_{1}]$
be a square-free trinomial and $s$ the input size of $f$. If $f^{\prime}(m)=0$
at some point $m\in\mathbb{C}_{p}$
then $\left|f(m)\right|_{p}\geq\exp(-O(ph_{p}^{4}/(\log p)^{3}))$, for
$h_{p}=\max\\{s,\log p\\}$.
Proof: By Proposition 3.2,
$\displaystyle\operatorname{ord}_{p}(f(m))$
$\displaystyle=\operatorname{ord}_{p}(c_{1}+c_{2}m^{a_{2}}(1-a_{2}/a_{3}))$ (*)
$\displaystyle=\operatorname{ord}_{p}(c_{1})+\operatorname{ord}_{p}\left(\frac{-c_{2}(a_{3}-a_{2})}{c_{1}a_{3}}\left(-\frac{a_{2}c_{2}}{a_{3}c_{3}}\right)^{a_{2}/(a_{3}-a_{2})}-1\right)$
Clearly $\operatorname{ord}_{p}(c_{1})\leq s/\log p$. In order to bound the
second summand of (*), we claim that it suffices to bound
$\displaystyle
M:=\operatorname{ord}_{p}\left(\left(\frac{-c_{2}(a_{3}-a_{2})}{c_{1}a_{3}}\right)^{a_{3}-a_{2}}\left(-\frac{a_{2}c_{2}}{a_{3}c_{3}}\right)^{a_{2}}-1\right).$
Let $\alpha_{1}=-c_{2}\frac{a_{3}-a_{2}}{c_{1}a_{3}}$,
$\alpha_{2}=-\frac{a_{2}c_{2}}{a_{3}c_{3}}$, $b_{1}=a_{3}-a_{2}$, and
$b_{2}=a_{2}$. To apply Yu’s Theorem on
$M=\operatorname{ord}_{p}(\alpha_{1}^{b_{1}}\alpha_{2}^{b_{2}}-1)$, it
suffices to take $n\\!=\\!2$, $A_{i}=\max\\{e^{2s},p\\}$ for
$i\\!\in\\!\\{1,2\\}$, and $B=\max\\{e^{s},3\\}$. Then $\log A_{1},\log
A_{2},\log B=O(h_{p})$. Thus
$\displaystyle\operatorname{ord}_{p}(f(m))\leq s/\log p+M\leq s/\log
p+C_{2}ph_{p}^{4}/(\log p)^{4},$
for some absolute constant $C_{2}$. So then $\left|f(m)\right|_{p}\geq
p^{-\operatorname{ord}_{p}(f(m))}$$=\\!\exp(-s-C_{2}ph_{p}^{4}/(\log
p)^{3})\\!\geq\\!\exp(-C_{1}ph_{p}^{4}/(\log p)^{3})$, by picking $C_{1}\geq
C_{2}(1+\frac{s}{M})$.
Now to prove the claim first observe that
$x^{a_{3}-a_{2}}-1=\prod_{j=1}^{a_{3}-a_{2}}(x-\zeta^{j})$ for
$\zeta\in\mathbb{C}_{p}$ a root of unity of order $a_{3}-a_{2}$. Taking
$T=-\frac{c_{2}(a_{3}-a_{2})}{c_{1}a_{3}}\left(-\frac{a_{2}c_{2}}{c_{3}a_{3}}\right)^{a_{2}/(a_{3}-a_{2})}$,
we get
$M=\operatorname{ord}_{p}(T^{a_{3}-a_{2}}-1)=\sum_{j=1}^{a_{3}-a_{2}}\operatorname{ord}_{p}(T-\zeta^{j})$.
The second summand of (*) is exactly
$\operatorname{ord}_{p}(T-\zeta^{a_{3}-a_{2}})$. Suppose
$\operatorname{ord}_{p}T<0$. Then for each
$j\\!\in\\!\\{1,\ldots,a_{3}-a_{2}\\}$ we have
$\operatorname{ord}_{p}(T-\zeta^{j})=\operatorname{ord}_{p}T<0$. This is
because roots of unity always have $p$-adic valuation $0$ regardless of the
choice of $p$. Then we must have
$\operatorname{ord}_{p}(f(m))\leq\operatorname{ord}_{p}(c_{1})+\operatorname{ord}_{p}(T-\zeta^{a_{3}-a_{2}})\leq
s/\log p$, and the conclusion of the lemma is immediate.
Now suppose $\operatorname{ord}_{p}T\geq 0$. Then for each $j$,
$\operatorname{ord}_{p}(T-\zeta^{j})\geq\min\\{\operatorname{ord}_{p}T,j\operatorname{ord}_{p}(\zeta)\\}\geq 0$,
and so $M\geq\operatorname{ord}_{p}(T-\zeta^{a_{3}-a_{2}})$, thus proving the claim.
$\blacksquare$
Thanks to the ultrametric inequality we can effectively bound
$\left|f^{\prime}(x)\right|_{p}$ for $\left|x\right|_{p}\leq 1$ with
$f^{\prime}(x)\neq 0$.
###### Lemma 3.4.
Following the notation above, assume $f$ is a square-free trinomial. Then
$\left|x\right|_{p}\leq 1\Longrightarrow\left|f^{\prime}(x)\right|_{p}\leq 1$.
Proof: As $c_{2},c_{3}\\!\in\\!\mathbb{Z}_{p}$,
$a_{2},a_{3}\\!\in\\!\mathbb{Z}$, $a_{3}\\!>\\!a_{2}\\!\geq\\!1$, and
$\left|x\right|_{p}\leq 1$, we have
$\displaystyle\operatorname{ord}_{p}(c_{2}a_{2}x^{a_{2}-1})=\operatorname{ord}_{p}(c_{2})+\operatorname{ord}_{p}(a_{2})+(a_{2}-1)\operatorname{ord}_{p}(x)\geq
0.$
Similarly, $\operatorname{ord}_{p}(c_{3}a_{3}x^{a_{3}-1})\geq 0$. Then simply
observe that:
$\displaystyle\left|f^{\prime}(x)\right|_{p}=\left|c_{2}a_{2}x^{a_{2}-1}+c_{3}a_{3}x^{a_{3}-1}\right|_{p}\leq\max\left\\{\left|c_{2}a_{2}x^{a_{2}-1}\right|_{p},\left|c_{3}a_{3}x^{a_{3}-1}\right|_{p}\right\\}\leq
1.\text{ $\blacksquare$}$
To conclude our approach we state the following scaled version of $p$-adic
Rolle’s Theorem, which follows immediately from [23, Thm., pg. 316, Sec. 2.4].
###### Theorem 3.5.
Let $f\in\mathbb{C}_{p}[x]$ be a polynomial with two distinct roots
$x_{1},x_{2}\in\mathbb{C}_{p}$, having
$\left|x_{1}-x_{2}\right|_{p}=cp^{-1/(p-1)}$ for some constant $c>0$. Then its
derivative $f^{\prime}$ has a root $m$ within $p$-adic distance $c$ of both
$x_{1}$ and $x_{2}$. $\blacksquare$
We now prove our third and final main result.
Proof of Theorem 1.5: Suppose $x_{1},x_{2}$ are two distinct roots
of $f(x)=c_{1}+c_{2}x^{a_{2}}+c_{3}x^{a_{3}}$. In particular, as we assume the constant
term is nonzero, both the roots $x_{1},x_{2}$ are nonzero. Then we proceed by
cases.
Case 1: (Both roots are small:
$\boldsymbol{\left|x_{1}\right|_{p},\left|x_{2}\right|_{p}\leq 1}$.)
Suppose $\left|x_{1}-x_{2}\right|_{p}>p^{-2/(p-1)}$. Then
$\left|x_{1}-x_{2}\right|_{p}>\exp(-2\log p/(p-1))$$>\exp(-Cph_{p}^{4}/(\log
p)^{3})$ for large $C$.
Now assume that $\left|x_{1}-x_{2}\right|_{p}\leq p^{-2/(p-1)}$. Then by
Theorem 3.5 there exists $m\in\mathbb{C}_{p}$ such that
$\left|m\right|_{p}\leq p^{-1/(p-1)}$ and $f^{\prime}(m)=0$. Moreover,
$\left|x_{i}-m\right|_{p}\leq p^{1/(p-1)}\left|x_{1}-x_{2}\right|_{p}\leq
p^{-1/(p-1)}$ for $i\\!\in\\!\\{1,2\\}$. Without loss of generality, assume
that $i=1$. So it suffices to bound $\left|m-x_{1}\right|_{p}$ from below.
As we assume $f$ is square-free, $f(m)\neq 0$. Then by Lemma 3.3,
$\left|f(m)\right|_{p}\geq\exp(-C_{1}ph_{p}^{4}/(\log p)^{3})$, for
$h_{p}=\max\\{s,\log p\\}$ and some constant $C_{1}$. By the $p$-adic Mean Value
Theorem (see, e.g., [23]), there exists a $\zeta$ with $\left|\zeta\right|_{p}\leq 1$
and such that
$\displaystyle f(m)=f(m)-f(x_{1})=f^{\prime}(\zeta)(m-x_{1}).$
As $f(m)\neq 0$, then $f^{\prime}(\zeta)\neq 0$ and $m\neq x_{1}$. Recall from
Lemma 3.4 that $\left|f^{\prime}(\zeta)\right|_{p}\leq 1$, so then
$\left|m-x_{1}\right|_{p}=\frac{\left|f(m)\right|_{p}}{\left|f^{\prime}(\zeta)\right|_{p}}\geq\frac{\exp(-C_{1}ph_{p}^{4}/(\log
p)^{3})}{1}=\exp(-C_{1}ph_{p}^{4}/(\log p)^{3})$. This in turn implies
$\left|x_{1}-x_{2}\right|_{p}\geq
p^{-1/(p-1)}\left|m-x_{1}\right|_{p}\geq\exp\\!\left(-C_{1}ph_{p}^{4}/\log^{3}(p)-\frac{1}{p-1}\log
p\right)\geq\exp(-C_{1}ph_{p}^{4}/\log^{3}p-1)$.
By picking an appropriate $C$ depending on $C_{1}$, the conclusion follows in
this case.
Case 2: (Both roots are large:
$\boldsymbol{\left|x_{1}\right|_{p},\left|x_{2}\right|_{p}>1}$.)
In this case, consider
$g(x)=x^{a_{3}}f(\frac{1}{x})=c_{1}x^{a_{3}}+c_{2}x^{a_{3}-a_{2}}+c_{3}$. Correspondingly,
$\frac{1}{x_{1}},\frac{1}{x_{2}}$ are the roots of $g$ with
$\left|\frac{1}{x_{1}}\right|_{p},\left|\frac{1}{x_{2}}\right|_{p}<1$. By
using the same argument in Case 1,
$\left|\frac{1}{x_{1}}-\frac{1}{x_{2}}\right|_{p}\geq\exp(-C^{\prime}ph_{p}^{4}/(\log
p)^{3})$ for some absolute constant $C^{\prime}$. Hence
$\displaystyle\left|x_{1}-x_{2}\right|_{p}=\left|x_{1}\right|_{p}\left|x_{2}\right|_{p}\left|\frac{1}{x_{1}}-\frac{1}{x_{2}}\right|_{p}\geq\exp(-Cph_{p}^{4}/(\log
p)^{3})$
by picking $C=C^{\prime}$.
Case 3: (Only one root has norm $\boldsymbol{>1}$.)
Without loss of generality, we may assume that
$|x_{1}|_{p}\\!\leq\\!1\\!<\\!|x_{2}|_{p}$. We then simply note that, as
$\left|x_{1}\right|_{p}\neq\left|x_{2}\right|_{p}$, we have
$\left|x_{1}-x_{2}\right|_{p}=\max\left\\{\left|x_{1}\right|_{p},\left|x_{2}\right|_{p}\right\\}\\!>\\!1$
and we are done. $\blacksquare$
## 4\. Tetranomials: The Proof of Theorem 1.4
### 4.1. The Case of Prime $\boldsymbol{p}$
We will see that
$f(x_{1}):=f_{d,p}(x_{1})=x^{d}_{1}-\left(\frac{x_{1}}{p^{h}}-\frac{1}{p}\right)^{2}$
has two roots $x_{1},x_{2}\in\mathbb{Q}_{p}$ such that
$\left|x_{1}-x_{2}\right|_{p}<p^{-\Omega(dh)}$.
Let
$g(x)=p^{2h}f(x+p^{h-1})=p^{2h}(x+p^{h-1})^{d}-p^{2h}\left(\frac{x+p^{h-1}}{p^{h}}-\frac{1}{p}\right)^{2}$$=p^{2h}(x+p^{h-1})^{d}-x^{2}$.
Then $g$ has the same roots as $f$, save for a “small” shift by $p^{h-1}$.
Rescaling, we get:
$\displaystyle G(x)$ $\displaystyle:=\frac{g(p^{(h-1)d/2+h}x)}{p^{(h-1)d+2h}}$
$\displaystyle=p^{-(h-1)d-2h}\left[p^{2h}(p^{(h-1)d/2+h}x+p^{h-1})^{d}-p^{(h-1)d+2h}x^{2}\right]$
$\displaystyle=\sum_{i=0}^{d}{d\choose i}p^{(h-1)(di/2-i)+ih}x^{i}-x^{2}$
$\displaystyle=1-x^{2}\mod p^{d(h-1)/2+1},$
which is square-free for odd prime $p$.
When $p\\!=\\!2$, as $h\\!>\\!2$, we have $p^{d(h-1)/2+1}\geq 8$. Then, as
$G(x)\equiv 1-x^{2}\equiv(3-x)(x-5)\mod 8$, we obtain that $G$ is also square-free.
Hensel’s Lemma then implies that there are roots
$\zeta_{1},\zeta_{2}\in\mathbb{Z}_{p}$ of $G$ such that $\zeta_{1}\equiv 1\mod
p^{d(h-1)/2+1}$ and $\zeta_{2}\equiv-1\mod p^{d(h-1)/2+1}$. Moreover,
$\left|\zeta_{1}\right|_{p}=\left|\zeta_{2}\right|_{p}=1$. For each
$i\\!\in\\!\\{1,2\\}$, $y_{i}=p^{(h-1)d/2+h}\zeta_{i}$ is the corresponding
root of $g$. Then $x_{1}=y_{1}+p^{h-1}$ and
$x_{2}=y_{2}+p^{h-1}$ are two roots of $f$ in $\mathbb{Z}_{p}$ such that
$\displaystyle\left|x_{1}-x_{2}\right|_{p}$
$\displaystyle=\left|(y_{1}+p^{h-1})-(y_{2}+p^{h-1})\right|_{p}$
$\displaystyle=\left|y_{1}-y_{2}\right|_{p}\leq\max\left\\{\left|y_{1}\right|_{p},\left|y_{2}\right|_{p}\right\\}=p^{-(h-1)d/2-h}=p^{-\Omega(dh)}.\text{
$\blacksquare$}$
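The estimate is easy to witness numerically; the Python sketch below lifts the two roots of $G$ for a small instance of our own choosing ($p\\!=\\!3$, $d\\!=\\!4$, $h\\!=\\!3$) and confirms that $\operatorname{ord}_{p}(x_{1}-x_{2})\\!=\\!(h-1)d/2+h$:

```python
from math import comb

def ordp(n, p):                                   # p-adic valuation, n nonzero
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v

p, d, h = 3, 4, 3                                 # illustrative parameters (d even)
v = (h - 1) * d // 2 + h                          # predicted ord_p(x1 - x2)
M = p ** (4 * v)                                  # working precision

# G(x) = sum_i C(d,i) * p^{(h-1)(di/2 - i) + ih} * x^i  -  x^2
Gc = [comb(d, i) * p ** ((h - 1) * (d * i // 2 - i) + i * h) for i in range(d + 1)]
Gc[2] -= 1

def G(x, m):  return sum(c * pow(x, i, m) for i, c in enumerate(Gc)) % m
def dG(x, m): return sum(i * c * pow(x, i - 1, m) for i, c in enumerate(Gc) if i) % m

def lift(x0):                                     # Hensel/Newton lifting mod M
    m, x = p, x0
    while m < M:
        m = min(m * m, M)
        x = (x - G(x, m) * pow(dG(x, m), -1, m)) % m
    return x

z1, z2 = lift(1), lift(p - 1)                     # the roots of G near +1 and -1
x1 = p ** (h - 1) + p ** v * z1
x2 = p ** (h - 1) + p ** v * z2
print(ordp(x1 - x2, p), "==", v)                  # 7 == 7
```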
### 4.2. The Case $\boldsymbol{p\\!=\\!\infty}$
Shifting by $\frac{1}{2^{h-1}}$, we get:
$\displaystyle g(x)$
$\displaystyle:=f_{d,\frac{1}{2}}(x+2^{1-h})=(x+2^{1-h})^{d}-2^{2h}x^{2}$
$\displaystyle=2^{d(1-h)}+d2^{(d-1)(1-h)}x+\left({d\choose
2}2^{(d-2)(1-h)}-2^{2h}\right)x^{2}$ $\displaystyle+{d\choose
3}2^{(d-3)(1-h)}x^{3}+\cdots+x^{d}.$
When computing $\operatorname{Newt}_{\infty}(g)$, the three lowest order terms
of $g$ contribute the points
$\displaystyle p_{0}=(0,d(h-1)\log 2),$ $\displaystyle p_{1}=(1,(d-1)(h-1)\log
2-\log d),\text{ and }$ $\displaystyle
p_{2}=\left(2,-\log\left(4^{h}-\frac{{d\choose
2}}{2^{(d-2)(h-1)}}\right)\right)$
as potential vertices of $\operatorname{Newt}_{\infty}(g)$. In particular,
observe that $\frac{{d\choose 2}}{2^{(d-2)(h-1)}}\\!<\\!0.059$ for all
$h\\!\geq\\!3$ and $d\\!\geq\\!4$, and thus $p_{2}$ is the only point of
$\operatorname{Newt}_{\infty}(g)$ with negative $y$-coordinate. So $p_{2}$ is
a vertex of $\operatorname{Newt}_{\infty}(g)$, and all edges with vertices to
the right of $p_{2}$ have positive slope. Furthermore, the slopes of the line
segments $\overline{p_{0}p_{1}}$ and $\overline{p_{0}p_{2}}$ are respectively
$-(h-1)\log(2)-\log d$ and a number less than
$-\frac{1}{2}\log(4^{h}-0.059)-\frac{1}{2}d(h-1)\log 2$.
Since $2^{h-1}\\!<\\!\sqrt{4^{h}-0.059}$ and $\log
d\\!<\\!\frac{1}{2}d(h-1)\log 2$ for all $d\\!\geq\\!4$ and $h\\!\geq\\!3$, we
thus see that the slope of $\overline{p_{0}p_{2}}$ is more negative. So the
leftmost lower edge of $\operatorname{Newt}_{\infty}(g)$ has vertices $p_{0}$
and $p_{2}$. It is easily checked that the slope of this edge is less than
$-10.3$, which is in turn clearly $<\\!-2\log 3$. So by Theorem 2.2, there are
two roots $z_{1},z_{2}$ of $g$ such that
$\displaystyle\log|z_{i}|\leq\frac{1}{2}\left[-\log\left(2^{2h}-{d\choose
2}2^{(d-2)(1-h)}\right)-d(h-1)\log 2\right].$
These two roots thus satisfy $|z_{i}|=2^{-\Omega(dh)}$. Now, for
$i\\!\in\\!\\{1,2\\}$, $\zeta_{i}=z_{i}+2^{1-h}$ yields roots of
$f_{d,\frac{1}{2}}$ with
$|\zeta_{1}-\zeta_{2}|=|z_{1}+2^{1-h}-(z_{2}+2^{1-h})|\leq|z_{1}|+|z_{2}|<2^{-\Omega(dh)}$.
$\blacksquare$
## 5\. Solving Trinomial Equations over $\mathbb{Q}_{p}$
Unlike the binomial case, the tree $\mathcal{T}_{p,k}(f)$ can have high depth
for $f$ an arbitrary trinomial. However, its structure will nevertheless be
simple: For $k$ sufficiently large, $\mathcal{T}_{p,k}(f)$ will be a subtree
of the union of $L$ chains of length $\left\lfloor(k-1)/2\right\rfloor$
emanating from a single root node. The number $L$ will be the number of
degenerate roots in $\mathbb{F}_{p}$ of $\tilde{f}$, and will turn out to be
congruent to $0$ or $1$ mod $p-1$. The lower limit for $k$ to be “large” will
be unwieldy, particularly as a function of $p$, but the lower limit will be
small enough for us to obtain reasonable dependence on $d$ and $H$: We will
attain complexity $p^{2+o(1)}\log^{9}(dH)$ in our algorithm.
We begin with an earlier result, also derived via Yu’s Theorem:
###### Theorem 5.1.
[24, Sec. 5] If
$f(x_{1})\\!=\\!c_{1}+c_{2}x^{a_{2}}_{1}+c_{3}x^{a_{3}}_{1}\\!\in\\!\mathbb{Z}[x_{1}]$
is a trinomial of degree $d\\!=\\!a_{3}\\!>\\!a_{2}\\!\geq\\!1$, with
coefficients of absolute value at most $H$, $p$ is any prime, $p\nmid
c_{1}c_{2}$, and $p|c_{3}$, then the sum of
$\operatorname{ord}_{p}f^{\prime}(\zeta)$ over all roots
$\zeta\\!\in\\!\mathbb{C}_{p}$ of valuation $0$ is $O(p\log^{8}(dH))$.
$\blacksquare$
We point out that while only the case $\gcd(a_{2},a_{3})\\!=\\!1$ is
derived in [24, Sec. 5], the general case follows immediately from the
$\gcd(a_{2},a_{3})\\!=\\!1$ case by an application of the chain rule. It is
also worth noting that while certain special cases (such as when $a_{2}$,
$a_{3}-a_{2}$, and $a_{3}$ are all relatively prime to $p-1$) admit better
bounds like $\log^{O(1)}(pdH)$, there are several other vexing cases where
there is no obvious technique other than to apply Yu’s Theorem.
###### Lemma 5.2.
Following the notation and assumptions of Theorem 5.1, suppose $D$ is the
maximum of $\operatorname{ord}_{p}f^{\prime}(\zeta)$ over all roots
$\zeta\\!\in\\!\mathbb{Z}_{p}$ of valuation $0$, $\tilde{f}$ has at least one
degenerate root in $\mathbb{F}_{p}$, and $k\\!\geq\\!2D+1$. Then
$\mathcal{T}_{p,k}(f)$ has depth $\geq\\!1$ and is a union of $L$ chains of
length $\leq\\!\left\lfloor(k-1)/2\right\rfloor$, where $L$ is the number of
degenerate roots of $\tilde{f}$. Moreover, the degree of any non-root nodal
polynomial is at most $3$.
###### Example 5.3.
Suppose $f(x_{1})=x^{10}_{1}-10x_{1}+738$ and $p\\!=\\!3$. Then
$\mathcal{T}_{3,7}(f)$ is a chain with depth $2$. In particular,
$f_{0,0}\\!:=\\!f$ and $\tilde{f}_{0,0}(x_{1})\\!=\\!x_{1}(x_{1}-1)^{9}$. So
$1$ is a degenerate root of $\tilde{f}_{0,0}$ and $s(f_{0,0},1)\\!=\\!4$. So
then $f_{1,1}(x_{1})\\!=\\!21x^{4}_{1}+13x^{3}_{1}+5x^{2}_{1}+9$ mod $27$ and
$\tilde{f}_{1,1}(x_{1})\\!=x^{2}_{1}(x_{1}-1)$. We then see that $0$ is a
degenerate root of $\tilde{f}_{1,1}$ and $s(f_{1,1},0)\\!=\\!2$. So
$f_{2,1}(x_{1})\\!=\\!2(x_{1}-1)(x_{1}-2)$ mod $3$.
There are a total of $4$ non-degenerate roots for the nodal polynomials: $1$
for $f_{0,0}$, $1$ for $f_{1,1}$, and $2$ for $f_{2,1}$. These non-degenerate
roots in $\mathbb{F}_{3}$ then lift to the following roots of $f$ in
$\mathbb{Z}_{3}$: $0+O(3^{1})$, $1+1\cdot 3+O(3^{2})$, $1+0\cdot 3+1\cdot
3^{2}+O(3^{3})$, and $1+0\cdot 3+2\cdot 3^{2}+O(3^{3})$. A quick calculation
via Maple’s rootp command tells us that these are all the $3$-adic rational
roots of $f$.
As far as we are aware, this is the first example of a trinomial with $4$
roots in $\mathbb{Q}_{3}$, each with most significant digit $1$. Via a more
careful look at $\mathcal{T}_{3,k}(f)$ for trinomial $f$, one can in fact
prove that no trinomial in $\mathbb{Z}[x_{1}]$ has more than $4$ roots in
$\mathbb{Q}_{3}$ with most significant digit $1$ (see author Zhu’s Ph.D.
thesis). For $p\\!\geq\\!5$ the optimal upper bound is $3$ [2], while for
$p\\!=\\!2$, the optimal upper bound is $6$ [18]. $\diamond$
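The claimed digits are easy to sanity-check numerically (a necessary-condition test only): a prefix that agrees with a true root to the listed digits must kill $f$ modulo the matching power of $3$:

```python
f = lambda x: x**10 - 10*x + 738
for prefix, mod in [(0, 3), (1 + 1*3, 9), (1 + 0*3 + 1*9, 27), (1 + 0*3 + 2*9, 27)]:
    print(prefix, f(prefix) % mod == 0)      # expect True each time
```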
###### Remark 5.4.
It is worth noting that, over a finite field, trinomials can only vanish on a
small number of cosets in $\mathbb{F}_{q}$: Building on earlier results from
[8, 6, 13], Kelley and Owen have proved [14, Thm. 1.2] that
$c_{1}+c_{2}x^{a_{2}}_{1}+c_{3}x^{a_{3}}_{1}\\!\in\\!\mathbb{F}_{q}[x_{1}]$,
with $q$ a prime power, vanishes at no more than
$\left\lfloor\frac{1}{2}+\sqrt{\frac{q-1}{\delta}}\right\rfloor$ cosets of the
size $\delta$ subgroup of $\mathbb{F}^{*}_{q}$, where
$\delta\\!=\\!\gcd(a_{2},a_{3},q-1)$. In particular, this bound is optimal for
$\mathbb{F}_{q}$ an even degree extension of a prime field. For $q$ prime,
there is computational evidence (for all $q\leq\\!292,837$) that the number
of such cosets might in fact just be $O(\log q)$ [10]. $\diamond$
Proof of Lemma 5.2: First note that Lemma 2.11 implies that $k\\!\geq\\!2D+1$
guarantees that $s(f_{0,0},\zeta_{0})\\!\leq\\!k-1$, so there is a child node
corresponding to each degenerate root of $\tilde{f}_{0,0}$. Now suppose
$\zeta_{0}$ is a nonzero degenerate root of $\tilde{f}$ of multiplicity $m\geq
2$, so $\zeta_{0}\in\\{1,\ldots,p-1\\}$. Let $r:=\min_{0\leq i\leq
m}\\{\operatorname{ord}_{p}f^{(i)}(\zeta_{0})\\}$.
Consider the matrix
(1) $A_{m}:=\begin{bmatrix}1&1&1\\\ 0&a_{2}&a_{3}\\\
0&a_{2}(a_{2}-1)&a_{3}(a_{3}-1)\\\ \vdots&\vdots&\vdots\\\
0&a_{2}\cdots(a_{2}-m+2)&a_{3}\cdots(a_{3}-m+2)\end{bmatrix}\mod p.$
Then $[c_{1},c_{2}\zeta^{a_{2}}_{0},c_{3}\zeta^{a_{3}}_{0}]^{T}$ must be a
right null vector of $A_{m}$ modulo $p$ and $A_{m}\begin{bmatrix}c_{1}\\\
c_{2}\zeta_{0}^{a_{2}}\\\ c_{3}\zeta_{0}^{a_{3}}\end{bmatrix}=0\mod p^{r}$.
For each $i\geq 3$, $\operatorname{ord}_{p}f^{(i-1)}(\zeta_{0})\geq r$ if and
only if $\operatorname{ord}_{p}(c_{3}\cdot a_{3}(a_{3}-a_{2})C_{i})\geq r$,
where $C_{i}$ is such that $[0,0,a_{3}(a_{3}-a_{2})C_{i}]^{T}$ lies in the row
space of $A_{i}$. (It is easily checked that $C_{3}=1$.) So $\min_{2\leq i\leq
m}\operatorname{ord}_{p}(f^{(i)}(\zeta_{0}))=\operatorname{ord}_{p}f^{\prime\prime}(\zeta_{0})$,
and for $i,p\geq 3$ we have
$\operatorname{ord}_{p}\frac{f^{\prime\prime}(\zeta_{0})}{2}+2\leq\operatorname{ord}_{p}\frac{f^{(i)}(\zeta_{0})}{i!}+i$,
with equality holding only when $p=3$. Therefore, $s_{0}:=s(f,\zeta_{0})=$
$\begin{cases}\min\\{\operatorname{ord}_{p}f(\zeta_{0}),\operatorname{ord}_{p}f^{\prime}(\zeta_{0})+1,\operatorname{ord}_{p}\frac{f^{\prime\prime}(\zeta_{0})}{2}+2,\operatorname{ord}_{p}\frac{f^{(3)}(\zeta_{0})}{3!}+3\\}&\text{if
}p=3\\\
\min\\{\operatorname{ord}_{p}f(\zeta_{0}),\operatorname{ord}_{p}f^{\prime}(\zeta_{0})+1,\operatorname{ord}_{p}\frac{f^{\prime\prime}(\zeta_{0})}{2}+2\\}&\text{if
}p\neq 3\end{cases}$
This says that $s_{0}$, and thus $\tilde{f}_{1,\zeta_{0}}$, is simply
determined by the coefficients of the first three or four terms of the Taylor
series expansion of $f(\zeta_{0}+px)$ about $0$. Moreover,
$\tilde{f}_{1,\zeta_{0}}(x)$ has degree $\leq\\!2$ and at most one degenerate
root $\zeta_{1}$ (of multiplicity $2$) when $p\geq 5$, or degree $\leq\\!3$
and at most one degenerate root $\zeta_{1}$ (of multiplicity $3$) when $p=3$.
So by Lemma 2.11, $s_{1}:=s(f_{1,\zeta_{0}},\zeta_{1})$ is bounded from above
by $2$ or $3$ respectively. So, inductively, any nodal polynomial
$\tilde{f}_{i,\zeta}$ of $\mathcal{T}_{p}(f_{1,\zeta_{0}})$ has degree
$\leq\\!2$ when $p\neq 3$, and $\leq\\!3$ when $p=3$. Moreover,
$\tilde{f}_{1,\zeta_{0}}$ can have at most one degenerate root. So
$\mathcal{T}_{p,k}(f_{1,\zeta_{0}})$ is a path and we are done. $\blacksquare$
Generalizing our automorphism trick from Algorithm 2.16 for lowering the
degree of a binomial, we can efficiently do the same for trinomials if we
apply a fast algorithm for the Shortest Vector Problem in a lattice (see,
e.g., [11]). A very special case of this approach is the following:
###### Lemma 5.5.
[6, Special Case of Lemma 1.9] Given any prime $p$, and
$a_{2},a_{3}\\!\in\\!\mathbb{N}$ with $0<a_{2}<a_{3}<p-1$ and
$\gamma\\!:=\\!\gcd(a_{2},a_{3},p-1)$, one can find within $\log^{1+o(1)}p$
bit operations, an integer $e$ such that for all $i\in\\{2,3\\}$,
$ea_{i}\equiv m_{i}$ mod $p-1$ and $|m_{i}|\leq\gamma\sqrt{2(p-1)}$.
$\blacksquare$
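For intuition only, one can find such an $e$ for small $p$ by an $O(p)$ scan; the fast version of Lemma 5.5 replaces this scan with lattice basis reduction. The helper below, including its parameter choices, is our own illustration:

```python
from math import gcd

def reduce_exponents(a2, a3, p):
    """Brute-force an e with e*a_i = m_i mod p-1 and |m_i| <= gamma*sqrt(2(p-1)),
    for i in {2, 3}.  O(p) time -- illustration only."""
    g = gcd(gcd(a2, a3), p - 1)
    bound = 2 * (p - 1) * g * g                               # (gamma*sqrt(2(p-1)))^2
    for e in range(1, p - 1):
        ms = [(e * a) % (p - 1) for a in (a2, a3)]
        ms = [m if 2 * m <= p - 1 else m - (p - 1) for m in ms]   # signed representatives
        if all(m * m <= bound for m in ms):
            return e, ms

print(reduce_exponents(3, 500, 1009))         # some e making both exponents small mod 1008
```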
It is then simple to prove that computing the nodal polynomials of
$\mathcal{T}_{p,k}(f)$ can be done efficiently for trinomial $f$.
###### Lemma 5.6.
Following the preceding notation, suppose $p$ is an odd prime, $\zeta_{0}$ is
a nonzero degenerate root of $\tilde{f}$ mod $p$, and
$\mathcal{T}_{p,k}(f_{1,\zeta_{0}})$ has depth greater than $1$. Then, given
$f_{1,\zeta_{0}}$, we can compute each successive nodal polynomial
of $\mathcal{T}_{p,k}(f_{1,\zeta_{0}})$ in time $(\log p)^{1+o(1)}$.
Proof: Suppose $\mathcal{T}_{p,k}(f_{1,\zeta_{0}})$ has depth
$\delta\\!\geq\\!2$, and suppose for each integer
$j\\!\in\\!\\{1,\ldots,\delta-1\\}$ that $\zeta_{j}$ is the unique degenerate
root of the $(j-1)$-th nodal polynomial $\tilde{f}_{j,\sigma_{j-1}}$ of
$\mathcal{T}_{p,k}(f_{1,\zeta_{0}})$, where $\sigma_{0}:=\zeta_{0}$ and
$\sigma_{j}:=\sum_{1\leq i\leq j}\zeta_{i}p^{i-1}$. For each $j$ let
$s_{j}:=s(f_{j,\sigma_{j-1}},\zeta_{j})$. Then $s_{j}\leq 3$ and in fact
$s_{j}\\!\geq\\!2$ as otherwise, by Lemma 2.11, the node corresponding to
$f_{j-1,\sigma_{j-2}}$ would have no child and then $f_{j,\sigma_{j-1}}$
wouldn’t be a nodal polynomial.
Moreover, each $s_{j}\\!\leq\\!k_{j,\sigma_{j-1}}-1$, as otherwise, by the
definition of the $f_{i,\zeta}$, the tree $\mathcal{T}_{p,k}(f_{1,\zeta_{0}})$
would terminate at depth $j<\delta$. So
$f_{j+1,\sigma_{j}}(x)=\frac{1}{p^{s_{j}}}f_{j,\sigma_{j-1}}(\zeta_{j}+px)=\frac{1}{p^{\sum_{i=1}^{j}s_{i}}}f_{1,\zeta_{0}}(\sigma_{j}+p^{j}x)$
and $\tilde{f}_{j+1,\sigma_{j}}(x)$ is a polynomial of degree at most $3$, and thus we can
determine if a degenerate root $\zeta_{j+1}$ exists by simply computing its
discriminant. Via Horner’s method and finite ring arithmetic, this takes time
no worse than $(\log p)^{1+o(1)}$ (see [3]).
Suppose such a $\zeta_{j+1}$ exists. Then
$\displaystyle f_{j+1,\sigma_{j}}(\zeta_{j+1}+px)$
$\displaystyle=\frac{1}{p^{\sum_{i=1}^{j}s_{i}}}f_{1,\zeta_{0}}(\sigma_{j+1}+p^{j+1}x)$
$\displaystyle=\frac{1}{p^{\sum_{i=1}^{j}s_{i}}}\left[f_{1,\zeta_{0}}(\sigma_{j+1})+p^{j+1}f^{\prime}_{1,\zeta_{0}}(\sigma_{j+1})x+p^{2j+2}\frac{f^{\prime\prime}_{1,\zeta_{0}}(\sigma_{j+1})}{2}x^{2}\right.$
$\displaystyle\left.+p^{3j+3}\frac{f^{(3)}_{1,\zeta_{0}}(\sigma_{j+1})}{3!}x^{3}+\text{higher
powers of }p\right].$
As $\sum_{i=1}^{j+1}s_{i}\leq 3(j+1)$, and $s_{j+1}\leq 3$, to determine
$\tilde{f}_{j+2,\sigma_{j+1}}$, it really suffices to compute the coefficients
of the first four terms. Thus for $j>1$, the computation at the $j$-th
recursion step takes time no worse than $(\log p)^{1+o(1)}$. $\blacksquare$
We are now ready to outline the algorithm that proves Theorem 1.1.
###### Algorithm 5.7.
(Solving Trinomial Equations Over $\boldsymbol{\mathbb{Q}_{p}}$)
Input. An odd prime $p$ and
$c_{1},c_{2},c_{3},a_{2},a_{3}\\!\in\\!\mathbb{Z}\setminus\\{0\\}$ with
$|c_{i}|\\!\leq\\!H$ for all $i$ and $1\\!\leq\\!a_{2}\\!<\\!a_{3}\\!=:\\!d$.
Output. A true declaration that
$f(x_{1})\\!:=\\!c_{1}+c_{2}x^{a_{2}}+c_{3}x^{a_{3}}$ has no roots in
$\mathbb{Q}_{p}$, or $z_{1},\ldots,z_{m}\\!\in\\!\mathbb{Q}$ with logarithmic
height $O\\!\left(p\log^{8}(dH)\right)$ such that $m$ is the number of roots
of $f$ in $\mathbb{Q}_{p}$, $z_{j}$ is an approximate root of $f$ with
associated true root $\zeta_{j}\\!\in\\!\mathbb{Q}_{p}$ for all $j$, and the
$\zeta_{j}$ are pair-wise distinct.
Description.
1: If $\operatorname{ord}_{p}c_{1}\\!\neq\\!\operatorname{ord}_{p}c_{2}$ mod
$a_{2}$ and $\operatorname{ord}_{p}c_{2}\\!\neq\\!\operatorname{ord}_{p}c_{3}$
mod $a_{3}-a_{2}$ then say ‘‘No roots in $\mathbb{Q}_{p}$!’’ and STOP.
2: Rescale and invert roots if necessary, so that we may assume $p\nmid
c_{1}c_{2}$ and $\operatorname{ord}_{p}c_{3}\\!\geq\\!0$.
3: Compute all the nodal polynomials for $T_{p,k}(f)$ for
$k\\!=\\!2D^{\prime}+1$ where $D^{\prime}$ is at least as large as the
parameter $D$ in Lemma 5.2.
4: Use Hensel Lifting to find the first $2D^{\prime}+1$ base-$p$ digits of all
the non-degenerate roots of $f$ in $\mathbb{Z}_{p}$ of valuation $0$.
5: Via Algorithm 2.16 find the first $O(\log(dH))$ base-$p$ digits of all the
degenerate roots of $f$.
6: If $p|c_{3}$ then rescale and invert roots to compute approximants for the
remaining roots of $f$ in $\mathbb{Q}_{p}$ by computing roots of valuation $0$
for a rescaled version of $f$ with coefficients in reverse order.
Proof of Theorem 1.1: We have described Algorithm 5.7 at a higher level of
abstraction than Algorithm 2.16, so that we may more easily isolate the key
ingredients of the proof.
In particular, the height bound for our approximate roots from Assertion (1)
follows directly from Lemma 5.2, which is used in Step 3.
Assertion (2) follows easily from Theorem 5.1: Since Steps 3 and 4 (which use
Hensel’s Lemma) will imply a decay rate of
$O\\!\left(\left(\frac{1}{p}\right)^{2D+2^{i}}\right)$ for the $p$-adic
distance of the $i$th Newton iterate to a true root, we merely observe that
this is no worse than a decay rate of
$O\\!\left(\left(\frac{1}{p^{1/(2D+1)}}\right)^{2^{i}}\right)$. So Assertion
(2) holds with $\mu\\!=\\!p^{1/O(p\log^{8}(dH))}$.
As for Assertion (3) on correctly counting the roots of $f$ in
$\mathbb{Q}_{p}$, this follows immediately from Steps 3–5.
So all that remains is to prove correctness (including elaborating Step 5) and
to do a sufficiently good complexity analysis.
Correctness: Thanks to Theorem 2.2, Step 1 merely guarantees that $f$ has
roots of integral valuation, which is a necessary condition for there to be
roots in $\mathbb{Q}_{p}$.
Step 2 merely involves simple substitutions that only negligibly affect the
heights of the coefficients, similar to the binomial case.
Steps 3 and 4 correctly count the number of non-degenerate roots of $f$ in
$\mathbb{Z}_{p}$ of valuation $0$, thanks to Lemma 2.12.
Step 5 can be elaborated as follows: First note that $0$ cannot be a
degenerate root since $f(0)\\!\neq\\!0$. Next, rearranging the equations
$f(\zeta)\\!=\\!f^{\prime}(\zeta)\\!=\\!0$, it is easy to see that
$\zeta\\!\in\\!\mathbb{Q}^{*}_{p}$ is a degenerate root of $f$ if and only if
$[c_{1},c_{2}\zeta^{a_{2}},c_{3}\zeta^{a_{3}}]^{T}$ is a right null-vector for
the matrix $B\\!:=\\!\begin{bmatrix}1&1&1\\\ 0&a_{2}&a_{3}\end{bmatrix}$.
Since the right null-space of $B$ is generated by
$[a_{3}-a_{2},-a_{3},a_{2}]^{T}$, we see that such a $\zeta$ is defined by two
binomial equations: $(a_{3}-a_{2})c_{2}\zeta^{a_{2}}\\!=\\!-c_{1}a_{3}$ and
$-a_{3}c_{3}\zeta^{a_{3}-a_{2}}\\!=\\!c_{2}a_{2}$. Via an application of the
Extended Euclidean Algorithm, we can find $R,S\\!\in\\!\mathbb{Z}$ with
$Ra_{2}+S(a_{3}-a_{2})\\!=\\!\gcd(a_{2},a_{3})$ and the logarithmic heights of
$R$ and $S$ of order $O(\log d)$. So by multiplying and dividing suitable
powers of our binomial equations, we get that $\zeta$ must satisfy the single
equation
$((a_{3}-a_{2})c_{2})^{R}(-a_{3}c_{3})^{S}\zeta^{\gcd(a_{2},a_{3})}\\!=\\!(-c_{1}a_{3})^{R}(c_{2}a_{2})^{S}$.
The latter equation can be solved easily, within our overall time bound, via
Algorithm 2.16. Note in particular that while the coefficient heights look
much larger, any root $\zeta$ ultimately satisfies the original pair of
binomials, thus implying $\zeta$ must have low logarithmic height.
Step 6 merely takes care of the remaining roots, with negligible effect on the
coefficient heights.
Note that we do need to renormalize the roots in the end, due to the various
rescalings, but this adds a negligible summand of $O(\log H)$ to the logarithmic
heights of the roots. So we are done. $\blacksquare$
Complexity Analysis: The only part of the algorithm going beyond basic
arithmetic operations modulo $p$ or $p^{\ell}$ is Steps 3–4. In particular,
Theorem 5.1 tells us that we can take
$D^{\prime}\\!=\\!O\\!\left(p\log^{8}(dH)\right)$, and Lemma 5.6 tells us that
the complexity of Steps 3–4 is no worse than
$O\\!\left(p^{2}\log^{2}(p)\log^{9}(dH)\right)$, assuming we employ brute-
force search to find the roots in $\mathbb{F}_{p}$ of the mod $p$ reductions
of the nodal polynomials. So the final overall complexity of our algorithm is
$O\\!\left(p^{2}\log^{2}(p)\log^{9}(dH)\right)$. $\blacksquare$
###### Remark 5.8.
We can apply degree reduction to lower the exponent of $p$ slightly in our
complexity estimate. Also, if we apply randomness, we can lower the exponent a
bit more. Of course, the main source of the size of our complexity bound is
our use of Yu’s Theorem. With a slightly more careful application, we could
possibly reduce the exponents of $9$ and $8$ to $5$. $\diamond$
## References
* [1] Martín Avendano, Roman Kogan, Mounir Nisse, and J. Maurice Rojas. Metric estimates and membership complexity for Archimedean amoebae and tropical hypersurfaces. Journal of Complexity, 46:45–65, 2018.
* [2] Martín Avendaño and Teresa Krick. Sharp bounds for the number of roots of univariate fewnomials. Journal of Number Theory, 131(7):1209 – 1228, 2011.
* [3] Eric Bach and Jeffrey Shallit. Algorithmic number theory, volume 1: efficient algorithms. MIT Press, Cambridge, Massachusetts, 1996.
* [4] Jens-Dietrich Bauch, Enric Nart, and Hayden D. Stainsby. Complexity of OM factorizations of polynomials over local fields. LMS Journal of Computation and Mathematics, 16:139–171, 2013.
* [5] Jérémy Berthomieu, Grégoire Lecerf, and Guillaume Quintin. Polynomial root finding over local rings and application to error correcting codes. Appl. Algebra Eng. Commun. Comput., 24:413–443, 2013.
* [6] Jingguo Bi, Qi Cheng, and J. Maurice Rojas. Sub-linear root detection, and new hardness results, for sparse polynomials over finite fields. In Proceedings of the 38th International Symposium on Symbolic and Algebraic Computation, ISSAC ’13, pages 61–68, New York, NY, USA, 2013. Association for Computing Machinery.
* [7] J. M. Borwein and P. B. Borwein. On the complexity of familiar functions and numbers. SIAM Rev., 30(4):589–601, 1988.
* [8] Ran Canetti, John Friedlander, Sergei Konyagin, Michael Larsen, Daniel Lieman, and Igor Shparlinski. On the statistical properties of Diffie-Hellman distributions. Israel Journal of Mathematics, 120(1):23–46, Dec 2000.
* [9] David G. Cantor and Daniel M. Gordon. Factoring polynomials over $p$-adic fields. In Wieb Bosma, editor, Algorithmic Number Theory, pages 185–208, Berlin, Heidelberg, 2000. Springer Berlin Heidelberg.
* [10] Qi Cheng, Shuhong Gao, J. Maurice Rojas, and Daqing Wan. Sparse univariate polynomials with many roots over finite fields. Finite Fields and Their Applications, 46:235 – 246, 2017.
* [11] Daniel Dadush, Chris Peikert, and Santosh Vempala. Enumerative lattice algorithms in any norm via M-ellipsoid coverings. In 2011 IEEE 52nd Annual Symposium on Foundations of Computer Science—FOCS 2011, pages 580–589. IEEE Computer Soc., Los Alamitos, CA, 2011.
* [12] Jordi Guàrdia, Enric Nart, and Sebastian Pauli. Single-factor lifting and factorization of polynomials over local fields. Journal of Symbolic Computation, 47(11):1318 – 1346, 2012.
* [13] Zander Kelley. Roots of sparse polynomials over a finite field. LMS Journal of Computation and Mathematics, 19(A):196–204, 2016.
* [14] Zander Kelley and Sean W. Owen. Estimating the number of roots of trinomials over finite fields. Journal of Symbolic Computation, 79:108 – 118, 2017. SI: MEGA 2015.
* [15] Pascal Koiran. Root separation for trinomials. J. Symbolic Comput., 95:151–161, 2019.
* [16] Pascal Koiran, Natacha Portier, and Sébastien Tavenas. A Wronskian approach to the real $\tau$-conjecture. J. Symbolic Comput., 68(part 2):195–214, 2015.
* [17] Leann Kopp, Natalie Randall, J. Maurice Rojas, and Yuyu Zhu. Randomized Polynomial-Time Root Counting in Prime Power Rings. Mathematics of Computation, 89(321):373–385, January 2020.
* [18] Hendrik W Lenstra. On the factorization of lacunary polynomials. Number Theory in Progress, 1:277–291, 1999.
* [19] Kurt Mahler. An inequality for the discriminant of a polynomial. The Michigan Mathematical Journal, 11(3):257–262, 1964.
* [20] Maurice Mignotte. On the distance between the roots of a polynomial. Appl. Algebra Eng. Commun. Comput., 6:327–332, 11 1995.
* [21] Alexandre Ostrowski. Recherches sur la méthode de Graeffe et les zéros des polynomes et des séries de Laurent. Acta Math., 72:99–155, 1940.
* [22] Bjorn Poonen. Zeros of sparse polynomials over local fields of characteristic $p$. Math. Res. Lett., 5(3):273–279, 1998.
* [23] Alain M. Robert. A Course in p-adic Analysis. Springer-Verlag New York, 2000.
* [24] Martín Avendaño, Ashraf Ibrahim, J. Maurice Rojas, and Korben Rusek. Faster $p$-adic feasibility for certain multivariate sparse polynomials. Journal of Symbolic Computation, 47(4):454–479, 2012.
* [25] Michael Sagraloff. A near-optimal algorithm for computing real roots of sparse polynomials. In ISSAC 2014 (39th International Symposium on Symbolic and Algebraic Computation ), pages 359–366, 2014.
* [26] W. H. Schikhof. Ultrametric calculus, volume 4 of Cambridge Studies in Advanced Mathematics. Cambridge University Press, Cambridge, 2006. An introduction to $p$-adic analysis, Reprint of the 1984 original [MR0791759].
* [27] J.-P. Serre. A course in arithmetic. Springer-Verlag, New York-Heidelberg, 1973. Translated from the French, Graduate Texts in Mathematics, No. 7.
* [28] Steve Smale. Newton’s method estimates from data at one point. In The merging of disciplines: new directions in pure, applied, and computational mathematics (Laramie, Wyo., 1985), pages 185–196. Springer, New York, 1986.
* [29] Joachim von zur Gathen and Jürgen Gerhard. Modern computer algebra. Cambridge University Press, Cambridge, third edition, 2013.
* [30] Edwin Weiss. Algebraic number theory. International series in pure and applied mathematics. McGraw-Hill, 1963\.
* [31] R. Ryan Williams. Some Estimated Likelihoods for Computational Complexity, pages 9–26. Springer International Publishing, Cham, 2019.
* [32] Kunrui Yu. Linear forms in $p$-adic logarithms III. Compositio Mathematica, 91(3):241–276, 1994.
# Three-dimensional matching is NP-Hard
Shrinu Kushagra (Email: <EMAIL_ADDRESS>). The results in this note first
appeared in [Kushagra et al., 2019], where the reduction was used to prove
query lower bounds for a clustering problem.
###### Abstract
The standard proof of NP-Hardness of 3DM provides a power-$4$ reduction of
3SAT to 3DM. In this note, we provide a linear-time reduction. Under the
exponential time hypothesis, this reduction improves the runtime lower bound
from $2^{o(\sqrt[4]{m})}$ (under the standard reduction) to $2^{o(m)}$.
Keywords: 3DM, 3SAT, ETH, X3C, NP-Hard
## 1 Introduction
In this note, we first establish the hardness of the following decision
problem.
###### Definition 1 (3DM).
Input: Sets $W,X$ and $Y$ and a set of matches $M\subseteq W\times X\times Y$
of size $m$.
Output: YES if there exists $M^{\prime}\subseteq M$ such that each element of
$W,X,Y$ appears exactly once in $M^{\prime}$. NO otherwise.
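For testing purposes, 3DM can be decided by exhaustive search on tiny instances; the following Python helper (our own, not part of the reduction) will be used below to check reduced instances:

```python
def has_perfect_matching(W, X, Y, M):
    """Decide 3DM by brute force (exponential time; tiny instances only).
    W, X, Y are sets and M is a list of (w, x, y) triples."""
    if not W:
        return not X and not Y           # every element covered exactly once
    w = next(iter(W))                    # some W-element still to be covered
    return any(a == w and b in X and c in Y
               and has_perfect_matching(W - {a}, X - {b}, Y - {c}, M)
               for (a, b, c) in M)

# a toy instance with a perfect matching:
print(has_perfect_matching({1, 2}, {'a', 'b'}, {'u', 'v'},
                           [(1, 'a', 'u'), (2, 'b', 'v'), (2, 'a', 'v')]))   # True
```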
To prove that 3DM is NP-Hard, we reduce an instance of 3SAT to the given
problem. Next, we define the 3SAT decision problem.
###### Definition 2 (3-SAT).
Input: A boolean formula $\phi$ in 3CNF form with $n$ literals and $m$
clauses.
Output: YES if $\phi$ is satisfiable, NO otherwise.
Given an instance of 3SAT with $n$ literals and $m$ clauses, [Garey and
Johnson, 1979] construct a graph with $\Theta(nm)$ vertices and
$\Theta(n^{2}m^{2})$ edges. Thus, this is a power-$4$ reduction. In this note,
we use a similar but a more efficient gadget and provide a linear time
reduction of the 3SAT instance to the given problem.
## 2 Hardness of 3DM
###### Theorem 3.
Three-dimensional matching is an NP-Hard problem.
Figure 1: Part of graph $G$ constructed for the literal $x_{1}$. The figure is
an illustration for when $x_{1}$ is part of four different clauses. The
triangles (or hyper-edge) $(a_{i},b_{i},c_{i})$ capture the case when $x_{1}$
is true and the other triangle $(b_{i},c_{i}^{\prime},a_{i+1})$ captures the
case when $x_{1}$ is false. Assuming that a clause
$C_{j}=\\{x_{1},x_{2},x_{3}\\}$, the hyper-edges containing
$tf_{i},tf_{i}^{\prime}$ and $t_{1},t_{1}^{\prime}$ capture different
settings. The hyper-edges containing $t_{1},t_{1}^{\prime}$ ensure that
atleast one of the literals in the clause is true. The other two ensure that
two variables can take either true or false values.
Our reduction is described in Fig. 1. For each literal $x_{i}$, let $m_{i}$ be
the number of clauses in which the literal is present. We construct a
“truth-setting” component containing $2m_{i}$ hyper-edges (or triangles). We
add the following hyper-edges to $M$.
$\displaystyle\\{(a_{k}[i],b_{k}[i],c_{k}[i]):1\leq k\leq m_{i}\\}$
$\displaystyle\cup\\{(a_{k+1}[i],b_{k}[i],c_{k}^{\prime}[i]):1\leq k\leq
m_{i}\\}$
Note that one of $(a_{k},b_{k},c_{k})$ or $(a_{k+1},b_{k},c_{k}^{\prime})$
has to be selected in a matching $M^{\prime}$. If the former is selected,
that corresponds to the variable $x_{i}$ being assigned true, the latter
corresponds to false. This part is the same as the standard construction.
For every clause $C_{j}=\\{x_{1},x_{2},x_{3}\\}$ we add three types of hyper-
edges. The first type ensures that atleast one of the literals is true.
$\\{(c_{k}[i],t_{1}[j],t_{1}^{\prime}[j]):x_{i}^{\prime}\in
C_{j}\\}\cup\\{(c_{k}^{\prime}[i],t_{1}[j],t_{1}^{\prime}[j]):x_{i}\in
C_{j}\\}$
The other two types of hyper-edges (connected to the $tf_{i}$’s) say that two
of the literals can be either true or false. Hence, we connect them to both
$c_{k}$ and $c_{k}^{\prime}$
$\displaystyle\\{(c_{k}[i],tf_{1}[j],tf_{1}^{\prime}[j]):x_{i}^{\prime}\text{
or }x_{i}\in C_{j}\\}$
$\displaystyle\cup\\{(c_{k}[i],tf_{2}[j],tf_{2}^{\prime}[j]):x_{i}\text{ or
}x_{i}^{\prime}\in C_{j}\\}$
$\displaystyle\cup\\{(c_{k}^{\prime}[i],tf_{1}[j],tf_{1}^{\prime}[j]):x_{i}^{\prime}\text{
or }x_{i}\in C_{j}\\}$
$\displaystyle\cup\\{(c_{k}^{\prime}[i],tf_{2}[j],tf_{2}^{\prime}[j]):x_{i}\text{
or }x_{i}^{\prime}\in C_{j}\\}$
Note that in the construction $k$ refers to the index of the clause $C_{j}$ in
the truth-setting component corresponding to the literal $x_{i}$. Using the
above construction, we get that
$\displaystyle W=\\{c_{k}[i],c_{k}^{\prime}[i]\\}$ $\displaystyle
X=\\{a_{k}[i]\\}\cup\\{t_{1}[j],tf_{1}[j],tf_{2}[j]\\}$ $\displaystyle
Y=\\{b_{k}[i]\\}\cup\\{t_{1}^{\prime}[j],tf_{1}^{\prime}[j],tf_{2}^{\prime}[j]\\}$
Hence, we see that $|W|=2\sum_{i}m_{i}=6m$. Now,
$|X|=|Y|=\sum_{i}m_{i}+3m=6m$. And, we have that $|M|=2\sum_{i}m_{i}+15m=21m$.
Thus, we see that this construction is linear in the number of clauses.
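The construction is mechanical enough to script; the sketch below builds the instance from a 3CNF formula (the element names and the DIMACS-style input convention are ours) and confirms the counts $|W|=|X|=|Y|=6m$ and $|M|=21m$:

```python
from collections import defaultdict

def sat_to_3dm(clauses):
    """Build (W, X, Y, M) from a list of 3-literal clauses; literals are
    nonzero ints, -i meaning the negation of x_i.  Triples are ordered
    (W-element, X-element, Y-element)."""
    occ = defaultdict(int)                       # occurrence index k per variable
    M = []
    for j, clause in enumerate(clauses):
        for lit in clause:
            i = abs(lit)
            k = occ[i]
            occ[i] += 1
            c, cp = ('c', i, k), ('cp', i, k)
            # "at least one literal is true" edge:
            M.append((cp if lit > 0 else c, ('t1', j), ('t1p', j)))
            # the two free components tf_1, tf_2 accept either truth value:
            for r in (1, 2):
                M.append((c, ('tf', r, j), ('tfp', r, j)))
                M.append((cp, ('tf', r, j), ('tfp', r, j)))
    for i, mi in occ.items():                    # truth-setting components,
        for k in range(mi):                      # cyclic in k
            M.append((('c', i, k), ('a', i, k), ('b', i, k)))
            M.append((('cp', i, k), ('a', i, (k + 1) % mi), ('b', i, k)))
    W = {t[0] for t in M}
    X = {t[1] for t in M}
    Y = {t[2] for t in M}
    return W, X, Y, M

W, X, Y, M = sat_to_3dm([(1, 2, -3), (-1, 2, 3)])    # m = 2 clauses
print(len(W), len(X), len(Y), len(M))                # 12 12 12 42
# has_perfect_matching(W, X, Y, M) -> True (checker above; the formula is satisfiable)
```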
Now, if the 3-SAT formula $\phi$ is satisfiable then there exists a matching
$M^{\prime}$ for the 3DM problem. If a variable $x_{i}=T$ in the assignment
then add $(c_{k}[i],a_{k}[i],b_{k}[i])$ to $M^{\prime}$ for all $1\leq k\leq m_{i}$; else add
$(c_{k}^{\prime}[i],a_{k+1}[i],b_{k}[i])$ for all $1\leq k\leq m_{i}$. For every clause $C_{j}$, let
$x_{i}$ (or $x_{i}^{\prime}$) be the variable which is set to true in that
clause. Add $(c_{k}^{\prime}[i],t_{1}[j],t_{1}^{\prime}[j])$ (or
$(c_{k}[i],t_{1}[j],t_{1}^{\prime}[j])$) to $M^{\prime}$. For the remaining
two clauses, add the hyper-edges containing $tf_{1}[j]$ and $tf_{2}[j]$
depending upon their assignments. Clearly, $M^{\prime}$ is a matching.
Now, the proof for the other direction is similar. If there exists a matching,
then one of $(a_{k},b_{k},c_{k})$ or $(a_{k+1},b_{k},c_{k}^{\prime})$ has to
be selected in a matching $M^{\prime}$. This defines a truth assignment of the
variables. Now, the construction of the clause hyper-edges ensures that every
clause is satisfiable.
## 3 Exponential Time Hypothesis for 3DM
Before we start the discussion in this section, let us review the definition
of the exponential time hypothesis.
Exponential Time Hypothesis (ETH)
There does not exist an algorithm which decides 3-SAT and runs in $2^{o(m)}$
time.
If the exponential time hypothesis is true, the standard reduction of 3-SAT to
3DM [Garey and Johnson, 1979] only implies that no algorithm for 3DM runs in
$2^{o(m^{1/4})}$ time. However, using the reduction in Section 2, we get a
tighter dependence on $m$, stated as a theorem below.
###### Theorem 4.
If the exponential time hypothesis holds then there does not exist an
algorithm which decides the three-dimensional matching problem (3DM) and runs
in time $2^{o(m)}$.
###### Proof.
For the sake of contradiction, suppose that such an algorithm $\mathcal{A}$
exists. Then, using the reduction from Section 2 and $\mathcal{A}$, we get an
algorithm for 3-SAT that runs in $2^{o(m)}$ time, which contradicts the
exponential time hypothesis. ∎
An immediate corollary of this result applies to another popular problem,
Exact Cover by 3-Sets.
###### Definition 5 (X3C).
Input: $U=\\{x_{1},\ldots,x_{3q}\\}$. A collection of subsets
$S=\\{S_{1},\ldots,S_{m}\\}$ such that each $S_{i}\subset U$ and contains
exactly three elements.
Output: YES if there exist $S^{\prime}\subseteq S$ such that each element of
$U$ occurs exactly once in $S^{\prime}$, NO otherwise.
###### Corollary 6.
If the exponential time hypothesis holds then there does not exist an
algorithm which decides exact cover by 3-sets problem (X3C) and runs in time
$2^{o(m)}$.
###### Proof.
The proof follows from the trivial reduction of 3DM to X3C where $U=W\cup
X\cup Y$ and $S=M$. ∎
## References
* [Garey and Johnson, 1979] Garey, M. R. and Johnson, D. S. (1979). Computers and intractability, volume 174. freeman San Francisco.
* [Kushagra et al., 2019] Kushagra, S., Ben-David, S., and Ilyas, I. F. (2019). Semi-supervised clustering for de-duplication. In The 22nd International Conference on Artificial Intelligence and Statistics, AISTATS 2019, 16-18 April 2019, Naha, Okinawa, Japan, pages 1659–1667.
|
2024-09-04T02:54:55.932988 | 2020-02-29T20:17:56 | 2003.00341 | {
"authors": "Raghav G. Jha",
"full_text_license": null,
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"provenance": "arxiv-papers-0000.json.gz:25964",
"submitter": "Raghav Govind Jha",
"url": "https://arxiv.org/abs/2003.00341"
} | arxiv-papers |
Finite $N$ unitary matrix model
Raghav G. Jha
Perimeter Institute for Theoretical Physics, Waterloo, Ontario N2L 2Y5, Canada
🖄<EMAIL_ADDRESS>
Abstract: We consider the one-plaquette unitary matrix model at finite $N$
using the exact expression of the partition function for both SU($N$) and
U($N$) groups.
###### Contents
1. 1 Introduction
2. 2 Partition function and observables
3. 3 Results & Conclusions
## 1 Introduction
The study of matrix models has been important in understanding and gaining
deep insights into the structure of gauge theories. Their range of
applicability is wide and some examples include the geometry of random
surfaces, two-dimensional quantum gravity, and the statistical mechanics of spin
systems. They also have connections to supersymmetric gauge theories and non-
perturbative formulations of superstring theory. A class of matrix models has
also been studied in the context of exhibiting properties similar to the four-
dimensional theory of strong interactions (QCD). The techniques of strong
coupling expansion and lattice methods [1] along with approximate recursion
relations [2] were pioneered to understand the properties of QCD. Around the
same time, another important direction was pursued by ’t Hooft in finding an
expansion parameter, independent of the scale, to study QCD by considering
the 1/$N$ expansion [3]. In this setting, the Feynman diagrams drawn in the
double-line notation can be organized to follow an expansion in powers of
$N^{\chi=2-2g}$ and were argued to be analogous to the topological expansion
in string theory. The large $N$ limit was then used to solve the two-
dimensional model of QCD in the light-cone gauge which is now also known as ’t
Hooft model [4]. This model has many similarities with the four-dimensional
theory but is exactly solvable in the planar limit. In higher dimensions, the
success has been scarce but it is at least known that the large $N$ limit does
not alter a distinctive feature of QCD – ultraviolet freedom, which is obvious
from the $N\to\infty$ limit of the two-loop QCD $\beta$-function which has the
leading coefficient with the correct sign. Following these developments, the
large $N$ limit has been extensively explored for a wide
class of gauge theories. In this topological limit, a remarkable
simplification occurs, where only the planar diagrams survive, when we keep
$\lambda=g_{\text{YM}}^{2}N$ fixed and take $N\to\infty$. The interplay
between the large $N$ limit of gauge theories and string theory has
significantly improved our understanding of strongly coupled gauge theories
and resulted in the AdS/CFT conjecture. In this work, we consider a unitary
matrix model that is equivalent to two-dimensional pure QCD. Two-dimensional
models such as this one have rich behavior but are trivial, since the gauge
field is non-dynamical (absence of degrees of freedom), and the expectation
value of observables can be calculated exactly as done in [5, 6, 7]. In this
model, the Wilson loop obeys an area law for all couplings, since the Coulomb
potential is always linear, and the model exhibits both ultraviolet freedom
and infrared prison. The two-dimensional lattice gauge theory with Wilson
action was shown to exhibit a third-order phase transition at large $N$. This transition
came as a surprise when it was first observed, but we now understand, through
the AdS/CFT correspondence, that such phase transitions at large $N$ and
finite volume occur in several holographic models and are related to
transitions between different black hole solutions in supergravity
[8, 9]. The occurrence of this phase transition in the large $N$ limit
signifies that non-analytic behaviour can occur even in the simplest of models
which makes it clear that the strong coupling expansions cannot be usually
smoothly continued to the limit of weak coupling. This two-dimensional lattice
gauge theory with SU($N$) or U($N$) gauge group is often described in terms of
single plaquette action over compact unitary group manifold and will hereafter
be referred to as the GWW model after the authors of [6, 5]. The unitary
matrix models of this kind have also been studied from the holographic point
of view [10, 11]. The interest in this model stems from two main reasons, 1)
This model admits an exact solution for any $N$ and coupling, 2) This model is
closely related to one of the simplest strings (minimal superstring theory)
with manageable non-perturbative description (Type 0 strings in one dimension)
[12]. In fact, it was shown that unitary matrix models in the double scaling
limit ($N\to\infty$ & $\lambda\to\lambda_{\text{critical}}$ with some well-
defined relation between $N$ and $|\lambda-\lambda_{\text{critical}}|$) is
described by the Painlevé II equation [13].
The GWW model and several of its modifications have been well-studied using
various techniques in the $N\to\infty$ limit. The finite $N$ limit has been
surprisingly less explored, although it has recently attracted some attention
[14]. Even in these explorations, the gauge group considered was U($N$),
since it is a little easier to handle and, in the planar limit, which is
mostly of interest, there is no distinction from SU($N$). However, at finite
$N$ they have _different_ behaviour, and independent studies of both
$\mathbb{G}=$ SU($N$) and U($N$) gauge groups are desirable; to our knowledge,
no treatment of this unitary matrix model at finite $N$ for the SU($N$) gauge
group has been done yet. This
paper aims to fill this gap in the literature. We derive an expression for the
partition function considering SU($N$) gauge group and study observables in
the finite $N$ limit. If the qualitative behaviour is not severely altered by
the addition of matter fields, i.e., presuming this phase transition is not
turned into a smooth crossover, then these studies may be useful in
understanding black hole solutions and stringy corrections by considering the
finite $N$ limit of matrix models [15], since the transition at
$\lambda_{\text{critical}}=2$ is supposed to separate the supergravity regime
from the perturbative gauge theory regime. There are only a few matrix models
where the finite $N$ regime is exactly solvable for any coupling, and this is
one such model. In this respect and various others, a finite $N$ study of
this class of matrix models would be useful in probing the quantum effects of
gravity using holography.
We briefly outline the structure of the paper. In Section 2, we write down the
exact expression of the partition function for the _special unitary matrix_
model at any coupling and $N$ in terms of the determinant of a Toeplitz
matrix (named after the German mathematician Otto Toeplitz (1881 - 1940);
each descending diagonal from left to right in such a matrix is constant,
i.e., the matrix element $(j,k)$ depends only on $j-k$), and sketch the
well-known phase structure of this
model at large $N$. In Section 3, we numerically calculate the corrections to
the planar result for the free energy in both phases and show that it is
simpler to deduce the instanton corrections in the strong coupling phase,
since the $1/N$ (or genus) expansion terminates at genus-zero for the U($N$)
model. However, this is not the case for the SU($N$) model, which has
contributions to the free energy at small $N$ in both phases. We also
provide results for the Wilson loop in the SU($N$) matrix model using our
exact expression and show that in the large $N$ limit, they converge to the
known results for the U($N$) unitary model. We end with a brief mention of
some open questions which can be addressed in the context of SU($N$) one-
plaquette matrix model and other related unitary matrix models.
## 2 Partition function and observables
The central object in the study of matrix models is the partition function
which is determined by integration over some compact group manifold. However,
there exist only a handful of models which can be solved exactly [16], and
most of these proceed by reducing the $N^{2}$ matrix degrees of freedom to
$N$ by exploiting some symmetry in the large $N$ limit. The simplest among all
such matrix models is the well-studied one-matrix model. For a nice review of
unitary matrix models, we refer the reader to [17]. The lattice Wilson action
in two dimensions is equivalent to the one-plaquette model upon integration
over the unitary group. This has also been studied using character expansion
(it is sometimes useful to think of the character expansion as a Fourier
expansion on a compact group manifold), which is discussed in [18, 19]. It was
shown [2] that in two dimensions, the expansion in terms of characters
constitutes the recursion relations through which one can exactly solve the
model. For the large $N$ analysis, saddle point methods [20] are often
used, as was done in [6]. However, saddle point methods are not useful for
extracting sub-leading orders in the 1/$N$ expansion, for which the method of
orthogonal polynomials is usually employed, as was done in [7]. The general
partition function can be schematically written as:
$Z=\frac{1}{\text{Vol}(\mathbb{G})}\int\prod_{\text{links}\leavevmode\nobreak\
l}\mathscr{D}U_{l}\prod_{\Box}Z_{\Box},$ (2.1)
where $\Box$ denotes the plaquette on whose perimeter we take the ordered
product of the link matrices $U$ of size $N\times N$. In order to compute
expectation values in the two-dimensional model, one has to make a choice
between the one-plaquette and heat kernel actions. The partition function of
the two-dimensional Yang-Mills theory based on the heat kernel action is
written in terms of a sum over all irreducible representations of the unitary
gauge group. We will use the one-plaquette action in this work, similar to
[5, 6] (some authors use $g$ to denote $1/\lambda$ or $2/\lambda$; this
distinction is clear from the coupling at which the phase transition occurs).
It can be expressed as:
$S(U)=\frac{N}{\lambda}\Big{[}\mathrm{Tr}\Big{(}\prod_{\Box}U\Big{)}+\mathrm{Tr}\Big{(}\prod_{\Box}U^{\dagger}\Big{)}\Big{]},$
(2.2)
where $\prod_{\Box}U$ is the product of links around a plaquette and shorthand
for
$U_{\mu}(\textbf{n})U_{\nu}(\mathbf{n}+\widehat{\boldsymbol{\mu}})U^{\dagger}_{\mu}(\mathbf{n}+\widehat{\boldsymbol{\mu}}+\widehat{\boldsymbol{\nu}})U^{\dagger}_{\nu}(\mathbf{n}+\widehat{\boldsymbol{\nu}})$.
The convention used is such that $U_{\mu}(\textbf{n})$ denotes a link starting
from site n and going in $\mu$-direction.
It was found that for this model in the $N\to\infty$ limit with fixed
$\lambda=g_{\text{YM}}^{2}N$, one observes a discontinuity in the third
derivative of the free energy corresponding to a third-order phase transition.
This means that one loses the analytic structure for even simple actions in
the large $N$ limit and the continuation from the weak coupling to strong
coupling is non-trivial. This transition is not like the usual phase
transitions in statistical mechanics which occur in the infinite volume limit.
Here, it occurs in a finite volume (single plaquette) for an infinite rank
gauge group.
We denote the full partition function of the model by $\mathcal{Z}$; since in
two dimensions we can treat all the plaquettes as independent, we deal with
just a single plaquette [21, 6], which we will refer to as the one-plaquette
partition function, $Z=\mathcal{Z}^{1/N_{s}N_{t}}$, where $N_{s}N_{t}$ is the
number of sites in the lattice and equals the number of plaquettes. It is
given by:
$Z=\int_{\begin{subarray}{c}{\sf U(N)}\leavevmode\nobreak\ \\\
\text{or\leavevmode\nobreak\ }{\sf SU(N)}\leavevmode\nobreak\
\end{subarray}}\mathscr{D}U\exp\Bigg{[}\frac{N}{\lambda}\mathrm{Tr}\Big{(}U^{\dagger}+U\Big{)}\Bigg{]}.$
(2.3)
It is known that the partition function for the U($N$) matrix model given
above can be written in terms of a Toeplitz determinant (the determinant of an
infinite-size Toeplitz matrix has a well-defined asymptotic behaviour due to a
theorem by Szegő), given by [22, 7]:
$Z(N,\lambda)=\text{Det}\Bigg{[}I_{j-k}\Big{(}\frac{2N}{\lambda}\Big{)}\Bigg{]}_{j,k=1\cdots
N},$ (2.4)
where $I_{\nu}(x)$ is the modified Bessel function of the first kind (the
Bessel function of imaginary argument) of order $\nu$. Note that the argument
of the Bessel function is twice the prefactor of the action in (2.3). The
appearance of the partition function in terms of a determinant has deep
connections to the notion of integrability and special differential equations.
For instance, determinants of Toeplitz matrices also play a role in the
context of the Ising model; for a partial set of references, see [23, 24].
Instead of working with
the partition function given in (2.3), one can also consider a more general
unitary matrix model with an arbitrary $N\times N$ source matrix $A$, as was
considered in [25]. The action for this model is given by:
$Z=\int\mathscr{D}U\exp\mathrm{Tr}\Big{[}UA^{\dagger}+AU^{\dagger}\Big{]},$
(2.5)
where $U$ is the ordered product of four links. The exact form of $Z$ was
derived in the large $N$ limit and it was shown that the strong and weak
coupling regimes in this model are characterized by
$(1/N)\mathrm{Tr}(A^{\dagger}A)^{-1/2}$. One gets back the usual GWW model by
setting $A=(N/\lambda)\,\mathbb{I}$, as follows from comparing (2.5) with
(2.3). The model in (2.5) was related to a specific
$N\times N$ Hermitian matrix model following some parameter tuning in [26] and
an exact solution was found. In this work we will only consider (2.3) and
derive the exact expression for the partition function when the integration is
over SU($N$) group in (2.3).
The analysis for SU($N$) is similar to the one for U($N$) except that now we
have the constraint that the product of eigenvalues should satisfy
$\prod_{j=1}^{N}e^{i\alpha_{j}}=1$. We start with the one-plaquette partition
function:
$Z=\int_{SU(N)}\mathscr{D}U\exp\Bigg{[}\frac{N}{\lambda}\Bigg{(}\mathrm{Tr}\prod_{\text{single\leavevmode\nobreak\
}\Box}U+\mathrm{Tr}\prod_{\text{single\leavevmode\nobreak\
}\Box}U^{\dagger}\Bigg{)}\Bigg{]},$ (2.6)
where the measure is given by:
$\mathscr{D}U=\frac{1}{N!}\prod_{j=1}^{N}\underbrace{\sum_{q}\delta\Big{(}\sum_{m=1}^{N}\alpha_{m}-2q\pi\Big{)}}_{\text{{\sf
SU}($N$)\leavevmode\nobreak\
constraint}}\frac{d\alpha_{j}}{2\pi}\prod_{j<k}\Big{(}e^{i\alpha_{j}}-e^{i\alpha_{k}}\Big{)}\Big{(}e^{-i\alpha_{j}}-e^{-i\alpha_{k}}\Big{)},$
(2.7)
and the constraint can be written as:
$\frac{1}{2\pi}\sum_{p=-\infty}^{\infty}e^{ip\sum\alpha_{m}}.$ (2.8)
By using the representation of the modified Bessel function as:
$I_{k-j-p}\Big{(}\frac{2N}{\lambda}\Big{)}=I_{j-k+p}\Big{(}\frac{2N}{\lambda}\Big{)}=\frac{1}{2\pi}\int_{0}^{2\pi}\leavevmode\nobreak\
e^{\frac{2N}{\lambda}\cos\alpha}e^{i(j-k+p)\alpha}d\alpha,$ (2.9)
where $e^{i\alpha}$ are the eigenvalues of $U$. The integral (2.9) gives the
corresponding $(j,k)$ element of the matrix
$\mathcal{M}_{\alpha}=I_{j-k+\alpha}(\frac{2N}{\lambda})$; using (2.6)
through (2.9), we obtain the partition function for the SU($N$) matrix model
as:
$\displaystyle
Z(N,\lambda)=\sum_{p=-\infty}^{\infty}\text{Det}\Bigg{[}I_{j-k+p}\Big{(}\frac{2N}{\lambda}\Big{)}\Bigg{]}_{j,k=1\cdots
N}.$ (2.10)
In contrast to the expression for the exact partition function for U($N$)
matrix model i.e. (2.4), there is an additional sum over the index $p$ from
the constraint in (2.7). In this paper, we study different observables using
this partition function. However, for practical purposes of computation using
Mathematica, we simply replace the $\infty$ in (2.10) by a large number. We
checked that $\left|p\right|\leq 15$ suffices i.e.,
$Z(N,\lambda)=\underbrace{\sum_{p=-15,\neq
0}^{15}\text{Det}\Bigg{[}I_{j-k+p}\Big{(}\frac{2N}{\lambda}\Big{)}\Bigg{]}}_{\text{due
to {\sf SU}($N$)\leavevmode\nobreak\
}}+\text{Det}\Bigg{[}I_{j-k}\Big{(}\frac{2N}{\lambda}\Big{)}\Bigg{]}.$ (2.11)
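As an illustration, (2.4) and the truncated sum (2.11) can be evaluated in a few lines. The sketch below is ours (the paper's computations use Mathematica); it relies on SciPy's `iv` for the modified Bessel function of the first kind and on the sign convention $F=-\ln Z$ implied by (2.13) and (2.15).

```python
# A sketch of (2.4) and (2.11); scipy.special.iv(nu, x) is the modified
# Bessel function of the first kind. Illustrative only, not the paper's code.
import numpy as np
from scipy.special import iv

def toeplitz_det(N, lam, p=0):
    """Det[ I_{j-k+p}(2N/lam) ]_{j,k=1..N}."""
    j, k = np.indices((N, N))
    return np.linalg.det(iv(j - k + p, 2.0 * N / lam))

def Z_UN(N, lam):                     # eq. (2.4)
    return toeplitz_det(N, lam)

def Z_SUN(N, lam, pmax=15):           # eq. (2.11), truncated at |p| <= 15
    return sum(toeplitz_det(N, lam, p) for p in range(-pmax, pmax + 1))

N, lam = 10, 4.0
# free energy per N^2 with the sign convention of (2.13); F_0 = -1/lam^2 here
print(-np.log(Z_UN(N, lam)) / N**2, -1.0 / lam**2)
print(Z_SUN(N, lam) / Z_UN(N, lam))   # SU(N)/U(N) ratio, tends to 1 at large N
```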
This partition function enables us to evaluate free energy which can be
written as a sum over genus expansion for this model:
$F(N,\lambda)=\sum_{g\in\mathbb{N}}F_{g}N^{2-2g}+\mathcal{O}(e^{-N}),$ (2.12)
where $\mathbb{N}$ denotes the set of non-negative integers. The exact result
for the leading coefficient, $F_{0}$, in the planar limit is given by:
$F_{0}=\hskip 5.69054pt\begin{cases}\frac{-1}{\lambda^{2}}\hskip
108.12047pt\text{$\lambda\geq 2$}\\\
\frac{-2}{\lambda}-\frac{1}{2}\ln\frac{\lambda}{2}+\frac{3}{4}\hskip
48.36967pt\text{$\lambda<2$}\end{cases}.$ (2.13)
Another important observable in unitary matrix models, and the one we
consider, is the Wilson loop. The finite $N$ analysis of Wilson loops in
various representations for the U($N$) gauge theory was recently done in [14].
An
expression for the expectation value of normalized winding Wilson loops
defined as:
$\mathcal{W}_{k}(N,\lambda)=\frac{1}{N}\Big{\langle}\mathrm{Tr}\Big{(}\prod_{\mathcal{C}}U\Big{)}^{k}\Big{\rangle},$
(2.14)
was given in terms of $\mathrm{Tr}(\mathcal{M}_{k}/\mathcal{M}_{0})$, with
$k\in\mathbb{Z}^{+}$ denoting the winding number; the expectation value is
computed over a closed contour $\mathcal{C}$. A similar expression for the
SU($N$) case is not yet known. Note that, like the partition function, we can
write $\mathcal{W}_{k,\mathcal{C}}(N,\lambda)$ as
$W_{k}(N,\lambda)^{N_{s}N_{t}}$, where the contour is of time extent $aN_{t}$
and spatial extent $aN_{s}$. The single winding ($k=1$) Wilson loop is related
to the derivative of the free energy and is given by:
$W_{1}(N,\lambda)=\frac{-\lambda^{2}}{2N^{2}}\frac{\partial\ln
Z}{\partial\lambda}=\hskip 5.69054pt\begin{cases}\frac{1}{\lambda}\hskip
48.36967pt\text{$\lambda\geq 2$}\\\ 1-\frac{\lambda}{4}\hskip
28.45274pt\text{$\lambda\leq 2$}\end{cases}.$ (2.15)
For this U($N$) matrix model as mentioned above, there is another equivalent
definition of $W_{1}$ given by:
$W_{1}(N,\lambda)=\mathrm{Tr}\,\Big{(}\frac{\mathcal{M}_{1}}{\mathcal{M}_{0}}\Big{)},$ (2.16)
where $\mathcal{M}_{\alpha}$ is defined below (2.9). We will present results
for the free energy and Wilson loops in Section 3 for the SU($N$) and U($N$)
models.
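For U($N$), (2.16) can be evaluated along the same lines; in the sketch below the explicit $1/N$ prefactor is our addition, required for consistency with the normalization of (2.14) and the planar values in (2.15).

```python
# A sketch of the U(N) Wilson loop W_1 from the Toeplitz matrices M_a; the
# 1/N prefactor is our assumption, matching the normalization of (2.14).
import numpy as np
from scipy.special import iv

def M(N, lam, a):
    j, k = np.indices((N, N))
    return iv(j - k + a, 2.0 * N / lam)

def W1_UN(N, lam):
    # Tr(M_1 / M_0) = Tr(M_0^{-1} M_1), computed without an explicit inverse
    return np.trace(np.linalg.solve(M(N, lam, 0), M(N, lam, 1))) / N

for N in (4, 10, 40):
    print(N, W1_UN(N, 4.0))   # approaches 1/lam = 0.25 for lam >= 2
```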
Even though this matrix model is one of the simplest, it has a wide range of
interesting features. In [27], by considering the trans-series solutions of
the pre-string equation to obtain instanton corrections, it was deduced that
the instanton action vanishes at the critical point, i.e., $\lambda=2$, and it
was concluded that the GWW transition is caused by the effect of instantons.
(Usually, when one thinks of the large $N$ limit, it seems that the instanton
corrections, which go as $\exp(-A/g_{s})\sim\exp(-AN/\lambda)$, are
insignificant, with $A$ denoting the instanton action. In fact, a more general
form is $\exp(-F(\lambda)N)$, where $F$ is some non-negative function of the
coupling $\lambda$ and proportional to $A$. When $F(\lambda)=0$, corresponding
to vanishing action, the exponentially suppressed instanton contribution to
the $1/N$ expansion becomes important. It turns out that for the GWW model
this happens exactly at $\lambda_{\text{critical}}=2$, where the third-order
phase transition takes place. Therefore, one also refers to these as
instanton-driven large $N$ phase transitions and physically relates them to a
condensation of instantons. We thank M. Mariño for email correspondence
regarding the one-plaquette model and his book ‘Instantons and Large $N$’ for
clarification regarding the instanton contributions.) There exist other
examples where a similar phenomenon occurs [28]. The behaviour of the
corrections to the planar result due to $1/N$ effects and instantons has been
well-studied. This model also exhibits resurgence behaviour, thoroughly
explored in [29]. One of the striking features of these studies, which is
clearly evident in our results, is that the strong coupling phase has no
$1/N$ corrections [7] but only $\mathcal{O}(e^{-N})$ corrections from the
contributions due to instantons. In the GWW model, both gapped and ungapped
phases have instanton corrections, albeit
of different nature. In the ungapped phase, the eigenvalues of the holonomy
matrix fill the circle while in the gapped phase they are distributed over
some interval. When the eigenvalue distribution is restricted to some domain,
it is called a one-cut (or single interval) distribution/solution. In the
ungapped phase, the instanton contribution to the free energy can simply be
evaluated by subtracting genus-zero contribution from the total free energy.
As we will see, this does not hold for the SU($N$) model. The distribution of
eigenvalues, which is one of the central objects in these matrix models, has
been studied in both phases for the U($N$) model and is given by:
$\rho(\lambda,\theta)=\hskip
5.69054pt\begin{cases}\frac{2}{\pi\lambda}\cos\Big{(}\frac{\theta}{2}\Big{)}\sqrt{\frac{\lambda}{2}-\sin^{2}\frac{\theta}{2}}\hskip
48.36967pt\text{$\lambda<2$}\\\
\frac{1}{2\pi}\Big{(}1+\frac{2}{\lambda}\cos\theta\Big{)}\hskip
82.51299pt\text{$\lambda>2$}\end{cases}.$ (2.17)
In the gapped phase, the distribution is supported only on some interval
$\theta\in[-a,a]$, while in the ungapped phase it covers the whole circle. For
the SU($N$) unitary matrix model, there are corrections to the distributions
above, which were discussed in [30] and will not be considered further in this
work.
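As a quick numerical sanity check (ours), both densities in (2.17) integrate to one over their supports, with the weak-coupling edge at $a=2\arcsin\sqrt{\lambda/2}$:

```python
# Normalization check for the planar densities (2.17); the support edge
# a = 2*arcsin(sqrt(lam/2)) follows from the vanishing of the square root.
import numpy as np
from scipy.integrate import quad

def rho_weak(theta, lam):    # gapped phase, lam < 2
    s = lam / 2.0 - np.sin(theta / 2.0) ** 2
    return (2.0 / (np.pi * lam)) * np.cos(theta / 2.0) * np.sqrt(max(s, 0.0))

def rho_strong(theta, lam):  # ungapped phase, lam > 2
    return (1.0 + (2.0 / lam) * np.cos(theta)) / (2.0 * np.pi)

lam = 4.0 / 3.0
a = 2.0 * np.arcsin(np.sqrt(lam / 2.0))
print(quad(rho_weak, -a, a, args=(lam,))[0])            # ~ 1.0
print(quad(rho_strong, -np.pi, np.pi, args=(4.0,))[0])  # ~ 1.0
```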
Figure 1: A telegraphic summary describing the phase diagram of the one-
plaquette matrix model at large $N$ and the exact expression of the partition
function for SU($N$) model as given in the main text.
## 3 Results & Conclusions
In this section we present the results obtained using (2.11) and (2.4) for the
SU($N$) and U($N$) groups, respectively. We primarily focus on the free energy
for different couplings to emphasize the behaviour in both phases and at the
critical coupling. Our results converge to the expected results in (2.13) when
we take large $N$, while also probing the finite $N$ coefficients, i.e.,
$F_{g}$ with $g\neq 0$ according to (2.12). In the weak coupling limit,
$\lambda<2$,
the free energy up to genus-two was calculated in [7] and given by:
$\displaystyle F(\lambda,N)=F_{0}-\frac{1}{N^{2}}\Big{(}\frac{1}{12}-\ln
A-\frac{1}{12}\ln
N-\frac{1}{8}\ln(1-(\lambda/2))\Big{)}-\frac{1}{N^{4}}\Big{(}\frac{3\lambda^{3}}{1024}\frac{1}{(1-(\lambda/2))^{3}}+\frac{1}{240}\Big{)},$
(3.1)
where,
$A=2^{\frac{7}{36}}\pi^{\frac{-1}{6}}\exp\Big{[}\frac{1}{3}+\frac{2}{3}\int_{0}^{\frac{1}{2}}\ln\Gamma(1+x)dx\Big{]}=1.2824271291\cdots$
is the Glaisher-Kinkelin constant, related to the derivative of the zeta
function by $\zeta^{\prime}(-1)=\frac{1}{12}-\ln A$. A general expression for
genus $g$ free energy $F_{g}(\lambda)$ is also known [7, 13, 27, 14] and can
be written as:
$F_{g}(\lambda)=\frac{B_{2g}}{2g(2g-2)}+\frac{1}{\Big{(}\frac{2}{\lambda}-1\Big{)}^{3g-3}}\sum_{n=0}^{g-2}C_{n}^{g}\lambda^{-n},$
(3.2)
where $B_{2g}$ is a Bernoulli number. (We note that if we calculate the
orbifold Euler characteristic of the moduli space of Riemann surfaces of genus
$g$ with $n$ marked points, i.e., $\chi(\mathcal{M}_{g,n})$, using the
Harer-Zagier formula and set $n=0$, we also obtain $\frac{B_{2g}}{2g(2g-2)}$.)
A general expression for the SU($N$) case is unknown, but it is expected to
have some similarity, since these Bernoulli numbers originate in the volume of
the U($N$) group, which is similar to that of SU($N$). In Figure 2, we compute
the free energy for $\lambda=4/3$ and show the results for the SU($N$) and
U($N$) models.
Figure 2: The free energy (normalized by $N^{2}$) for $\lambda=4/3$ plotted
against 1/$N$. The dashed green line is (3.1) from [7] and the dashed black
line is the $N\to\infty$ value. The analytic expression corresponding to the
dashed yellow line is yet unknown. Figure 3: The dependence of the free energy
(normalized by $N^{2}$) for SU($N$) and U($N$) model at $\lambda=2$ on $N$. In
the $N\to\infty$ limit, the exact value is $F_{0}=-0.25$.
We then consider $\lambda_{\text{critical}}=2$ and plot the results in Figure
3. The free energy shows a noticeable difference between the SU($N$) and
U($N$) groups for $N<15$ (which is not the case for other couplings), and we
have explored up to $N=100$ for this case. One plausible explanation for this
behaviour is that at the critical coupling the instanton contributions are
more important than at any other $\lambda$ (for fixed $N$), and the difference
between the SU($N$) and U($N$) instanton sectors is therefore most
significant. It might be possible to explain this by a systematic study of
finite $N$ instanton contributions for the SU($N$) group and a comparison with
the known results for the U($N$) model [27].
Figure 4: The dependence of the free energy (normalized by $N^{2}$) on $N$ for
$\lambda=4$ (strong coupling). There are no $1/N$ corrections for the U($N$)
model while for SU($N$) the genus expansion does not terminate at genus-zero.
We then consider strong coupling ($\lambda>2$). This analysis is more
interesting since the genus expansion terminates at genus-zero in the case of
U($N$), as first discussed in [7]. Our results shown in Figure 4 are in
complete agreement with this, and we note that there are no $1/N$ corrections
in this ungapped phase when the SU($N$) constraint is not imposed. For the
SU($N$) model, our study of the partition function signals that there are
corrections; the genus expansion is subtle in this case (and certainly not
genus-zero exact up to instanton effects) and deserves further study.
Finally, in Figure 5, we study the Wilson loop by taking the numerical
derivative of the free energy for a range of couplings with $N=4,10$ for both
SU($N$) and U($N$) groups.
Figure 5: The expectation value of Wilson loop against coupling for $N=4,10$
around the critical coupling. The dashed lines (different colours) are the
planar limit result in different phases.
The most distinct feature is that the planar result is approached from
different sides for the SU($N$) and U($N$) models (similar to the free energy
behaviour), signifying that the $1/N$ corrections to the planar values come
with opposite signs. It will be interesting to compute the exact expression
for the Wilson loop winding around $k$ times, as done for the U($N$) model in
[14].
A related model to the one we studied here is the ‘double trace model’ for
which the action is $\mathrm{Tr}U\mathrm{Tr}{U}^{\dagger}$ and can be written
in terms of the partition function of GWW model. This model is closely related
to a truncated limit of $\mathcal{N}$ = 4 SYM and in the double scaling limit
exhibits the Hagedorn/deconfinement phase transition. It would be interesting
to understand the finite $N$ limit of this model while not restricting to the
U($N$) integral, as done in [31].
In this paper, we have given an exact expression for the partition function of
the SU($N$) one-plaquette matrix model, valid for all $N$ and couplings, and
computed exact results for the free energy and Wilson loop at finite $N$ for
several couplings. We concluded that the $1/N$ corrections to the free energy
vanish for the U($N$) matrix model in the ungapped (strongly coupled) phase,
where the only corrections to the planar result come from instantons, while
for the SU($N$) matrix model the genus expansion contribution to the free
energy does not terminate at genus-zero. Our results suggest that the finite
$N$ behaviour of SU($N$) is _very_ different from that of the U($N$) matrix
model and deserves further analysis. It would be interesting to understand
results for multiply wound Wilson loops and the 1/$N$ expansion of the free
energy for SU($N$), along the lines of what was done for the U($N$) group.
### Acknowledgements
Research at Perimeter Institute is supported in part by the Government of
Canada through the Department of Innovation, Science and Economic Development
Canada and by the Province of Ontario through the Ministry of Economic
Development, Job Creation and Trade.
## References
* [1] K. G. Wilson, “Confinement of Quarks,” Phys. Rev. D10 (1974) 2445–2459.
* [2] A. A. Migdal, “Recursion Equations in Gauge Theories,” Sov. Phys. JETP 42 (1975) 413.
* [3] G. ’t Hooft, “A Planar Diagram Theory for Strong Interactions,” Nucl. Phys. B72 (1974) 461. [,337(1973)].
* [4] G. ’t Hooft, “A Two-Dimensional Model for Mesons,” Nucl. Phys. B75 (1974) 461–470.
* [5] S. R. Wadia, “A Study of U(N) Lattice Gauge Theory in 2-dimensions,” arXiv:1212.2906 [hep-th].
* [6] D. J. Gross and E. Witten, “Possible Third Order Phase Transition in the Large N Lattice Gauge Theory,” Phys. Rev. D21 (1980) 446–453.
* [7] Y. Y. Goldschmidt, “1/$N$ Expansion in Two-dimensional Lattice Gauge Theory,” J. Math. Phys. 21 (1980) 1842.
* [8] E. Witten, “Anti-de Sitter space, thermal phase transition, and confinement in gauge theories,” Adv. Theor. Math. Phys. 2 (1998) 505–532, arXiv:hep-th/9803131 [hep-th].
* [9] S. Catterall, R. G. Jha, D. Schaich, and T. Wiseman, “Testing holography using lattice super-Yang-Mills theory on a 2-torus,” Phys. Rev. D97 no. 8, (2018) 086020, arXiv:1709.07025 [hep-th].
* [10] D. N. Kabat and G. Lifschytz, “Approximations for strongly coupled supersymmetric quantum mechanics,” Nucl. Phys. B571 (2000) 419–456, arXiv:hep-th/9910001 [hep-th].
* [11] D. N. Kabat, G. Lifschytz, and D. A. Lowe, “Black hole entropy from nonperturbative gauge theory,” Phys. Rev. D64 (2001) 124015, arXiv:hep-th/0105171 [hep-th].
* [12] I. R. Klebanov, J. M. Maldacena, and N. Seiberg, “Unitary and complex matrix models as 1-d type 0 strings,” Commun. Math. Phys. 252 (2004) 275–323, arXiv:hep-th/0309168 [hep-th].
* [13] V. Periwal and D. Shevitz, “Unitary Matrix models as exactly solvable String Theories,” Phys. Rev. Lett. 64 (1990) 1326.
* [14] K. Okuyama, “Wilson loops in unitary matrix models at finite $N$,” JHEP 07 (2017) 030, arXiv:1705.06542 [hep-th].
* [15] L. Susskind, “Matrix theory black holes and the Gross-Witten transition,” arXiv:hep-th/9805115 [hep-th].
* [16] V. A. Kazakov, “Solvable matrix models,” 2000. arXiv:hep-th/0003064 [hep-th].
* [17] P. Rossi, M. Campostrini, and E. Vicari, “The Large N expansion of unitary matrix models,” Phys. Rept. 302 (1998) 143–209, arXiv:hep-lat/9609003 [hep-lat].
* [18] A. B. Balantekin, “Character expansions, Itzykson-Zuber integrals, and the QCD partition function,” Phys. Rev. D62 (2000) 085017, arXiv:hep-th/0007161 [hep-th].
* [19] A. Bazavov, S. Catterall, R. G. Jha, and J. Unmuth-Yockey, “Tensor renormalization group study of the non-Abelian Higgs model in two dimensions,” Phys. Rev. D99 no. 11, (2019) 114507, arXiv:1901.11443 [hep-lat].
* [20] F. Green and S. Samuel, “Calculating the large- N phase transition in gauge and matrix models,” Nuclear Physics B 194 no. 1, (Jan, 1982) 107–156.
* [21] J. M. Drouffe and C. Itzykson, “Lattice Gauge Fields,” Phys. Rept. 38 (1978) 133–175.
* [22] I. Bars and F. Green, “Complete Integration of U ($N$) Lattice Gauge Theory in a Large $N$ Limit,” Phys. Rev. D20 (1979) 3311.
* [23] P. Deift, A. Its, and I. Krasovsky, “Toeplitz matrices and toeplitz determinants under the impetus of the ising model. some history and some recent results,” arXiv:1207.4990 [math.FA].
* [24] T. T. Wu, “Theory of Toeplitz Determinants and the Spin Correlations of the Two-Dimensional Ising Model. I,” Physical Review 149 no. 1, (Sep, 1966) 380–401.
* [25] E. Brezin and D. J. Gross, “The External Field Problem in the Large N Limit of QCD,” Phys. Lett. 97B (1980) 120–124.
* [26] E. Brezin and S. Hikami, “Duality and replicas for a unitary matrix model,” JHEP 07 (2010) 067, arXiv:1005.4730 [hep-th].
* [27] M. Marino, “Nonperturbative effects and nonperturbative definitions in matrix models and topological strings,” JHEP 12 (2008) 114, arXiv:0805.3033 [hep-th].
* [28] D. J. Gross and A. Matytsin, “Instanton induced large N phase transitions in two-dimensional and four-dimensional QCD,” Nucl. Phys. B429 (1994) 50–74, arXiv:hep-th/9404004 [hep-th].
* [29] A. Ahmed and G. V. Dunne, “Transmutation of a Trans-series: The Gross-Witten-Wadia Phase Transition,” JHEP 11 (2017) 054, arXiv:1710.01812 [hep-th].
* [30] M. Campostrini, P. Rossi, and E. Vicari, “Large N phase transition in lattice 2-d principal chiral models,” Phys. Rev. D52 (1995) 395–401, arXiv:hep-lat/9412102 [hep-lat].
* [31] H. Liu, “Fine structure of Hagedorn transitions,” arXiv:hep-th/0408001 [hep-th].
|
2024-09-04T02:54:55.944001 | 2020-02-29T22:09:46 | 2003.00351 | {
"authors": "Nicolae-Catalin Ristea and Liviu Cristian Dutu and Anamaria Radoi",
"full_text_license": null,
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"provenance": "arxiv-papers-0000.json.gz:25965",
"submitter": "Anamaria Radoi",
"url": "https://arxiv.org/abs/2003.00351"
} | arxiv-papers | # Emotion Recognition System from Speech and Visual Information based on
Convolutional Neural Networks
Nicolae-Cătălin Ristea University Politehnica of Bucharest
Bucharest, Romania
<EMAIL_ADDRESS>Liviu Cristian Dutu Fotonation Romania, member of
Xperi Group
Bucharest, Romania
<EMAIL_ADDRESS>Anamaria Radoi University Politehnica of Bucharest
Bucharest, Romania
<EMAIL_ADDRESS>
###### Abstract
Emotion recognition has become an important field of research in the human-
computer interaction domain. The latest advancements in the field show that
combining visual with audio information leads to better results if compared to
the case of using a single source of information separately. From a visual
point of view, a human emotion can be recognized by analyzing the facial
expression of the person. More precisely, the human emotion can be described
through a combination of several Facial Action Units. In this paper, we
propose a system that is able to recognize emotions with a high accuracy rate
and in real time, based on deep Convolutional Neural Networks. In order to
increase the accuracy of the recognition system, we also analyze the speech
data and fuse the information coming from both sources, i.e., visual and
audio. Experimental results show the effectiveness of the proposed scheme for
emotion recognition and the importance of combining visual with audio data.
###### Index Terms:
Emotion Recognition, Facial Action Units, Spectrogram, Convolutional Neural
Network
## I Introduction
Facial attribute recognition, including facial action units and emotions, has
been a topic of interest among computer vision researchers for over a decade.
Being able to recognize and understand the emotion of a subject could be a key
factor in a wide range of fields such as public security, healthcare
(including therapeutic treatments) or entertainment. Moreover, the ability of
today’s systems to recognize and express emotions would lower the barriers to
obtaining a "natural" interaction between systems and humans.
The interest of researchers in this subject led to the development of the
Facial Action Coding System (FACS) [1]. Following the FACS encoding system,
each emotion can be modelled as a finite group of Facial Action Units (FAUs).
Indeed, Ekman identifies several facial attributes that allow emotion
recognition from facial expressions (e.g., morphology, symmetry, duration,
coordination of facial muscles) [2].
The recent success of deep learning techniques in many Computer Vision-related
applications influenced the emotion recognition field as well. Due to the
release of labeled large datasets for emotion recognition from images, as well
as the advances made in the design of convolutional neural networks, error
rates have significantly dropped.
An interesting approach towards emotion recognition is to use multimodal
systems of recognition. It is well known that visual and audio information are
very important when building emotion recognition systems because the usage of
combined sound and video information leads to a better understanding of the
emotion context than having access to a single source of information [3]. In
this regard, several databases have been recently developed (e.g., CREMA-D
[4], OMG-Emotion [5]), but still the lack of multimodal data is a major
problem. Therefore, despite the success of multimodal emotion recognition
systems, there are still problems regarding emotion classification in real
world scenarios that benefit from the existence of both visual and audio
information.
In this paper, we propose a novel approach towards emotion recognition using
multiple sources of information. Our approach involves both visual and speech
information and it is based on convolutional neural networks. The experiments
prove the effectiveness of the proposed approach and show that having both
images and sounds helps to achieve high classification accuracy.
In Section II, we review several of the state-of-the-art approaches in emotion
recognition, whereas Section III describes several databases related to
emotion recognition. The proposed approach is presented in Section IV and the
corresponding results are discussed in Section V. Section VI concludes the
paper.
## II Related Work
### II-A Emotion recognition from images
A wide range of approaches have been developed for the recognition of emotions
from still images. The recognition system proposed in [6] (called DeXpression)
is based on a Convolutional Neural Network (CNN) architecture inspired by the
well-known GoogleNet arhitecture [7]. The architecture contains two blocks of
feature extraction composed by convolution, Max Pooling (MaxPool), Rectified
Linear Units (ReLU) and concatenation layers. The end part of the network is a
fully connected layer which performs the actual emotion classification. The
authors use standard datasets (e.g. Extended Cohn-Kanade (CKP) and MMI Facial
Expression Database) in their experimental setup and report that the system
has better performance than previous approaches. With an accuracy of 99.6%
for CKP and 98.63% for MMI, the DeXpression architecture is robust, small and
suitable for real-time applications when emotions are found in the pictures.
However, the system fails to recognize an emotion from the first few frames,
but these misclassification errors do not affect the overall accuracy of the
system since the first corresponding instances can be considered as neutral.
Another method, proposed by Z. Yu and C. Zhang [8], is based on learning
multiple deep neural networks. The authors describe a more complex face
detector composed by three state-of-the-art detectors, followed by a
classification module made by combining multiple CNNs. The combining method
considers minimizing both the log-likelihood loss and the hinge loss. The
approach achieved state-of-the-art results on the Facial Expression
Recognition (FER) Challenge 2013 dataset, whereas the classification accuracy
reported on the validation and test set of Static Facial Expressions in the
Wild (SFEW) dataset 2.0 was 55.96% and 61.29%, respectively.
A comprehensive review of the methods related to facial expression recognition
can be found in [9]. However, as the authors mention, two key issues appear
when dealing with facial expression recognition systems. Firstly, training
deep convolutional neural networks requires large volumes of annotated data.
Secondly, variations such as illumination, head pose and person identity might
lead to inconsistent recognition results. Bringing audio information into the
recognition system may alleviate several of these drawbacks.
### II-B Emotion recognition using visual and audio information
Combining more sources of information leads to a higher accuracy rate if
compared to the case of using a single source of information, audio or visual.
EmoNets, proposed in [10], is an emotion recognition system that considers
visual features extracted using Convolutional Neural Networks, whereas a deep
belief network is intended for building the representation of the audio
stream. The spatio-temporal relations in the video streams are tackled by
using a relational autoencoder. The authors stress that combining visual and
audio features leads to a classifier that achieves a high accuracy rate. The
method proposed in [10] won the 2013 Emotion Recognition in the Wild Challenge
(EmotiW) [11], reporting an accuracy level on the test set of 47.67% for the
2014 dataset.
A different approach is presented in [12] which considers using a CNN to
extract features from the speech, whilst, in order to represent the visual
information, a deep residual network ResNet-50 is used. The features from the
before-mentioned networks are concatenated and inserted into a two-layers Long
Short-Term Memory (LSTM) module. Two continuous values are predicted at each
moment, namely arousal and valence. The method outperforms other previous
methods on the RECOLA database of the Audio-Visual Emotion Challenge (AVEC)
2016.
Figure 1: Examples of frames from the CREMA-D dataset along with the
corresponding emotion labels.
## III Database
Several databases have been developed for emotion recognition purposes.
However, regarding multimodal emotion recognition, the lack of data is a major
problem. In this work, we focus on the CREMA-D database, which is commonly
used in multimodal emotion recognition [4, 13, 14]. Some examples are provided
in Fig. 1.
The CREMA-D database was published in 2015 and contains 7442 clips of 91
actors (48 male and 43 female) with different ethnic backgrounds, coordinated
by professional theater directors. The actors were asked to convey particular
emotions while producing, with different intonations, 12 particular sentences
that evoke the target emotions. There were six labeled emotions (neutral,
happy, anger, disgust, fear, sad) and four different emotion levels (low,
medium, high, unspecified).
The recordings were performed in three different modes, namely audio, visual,
and audiovisual. The labels corresponding to each recording were collected
using crowdsourcing. More precisely, 2443 participants were asked to label the
perceived emotion and its intensity in three modes, depending on the
information put at their disposal: video, audio and combined audiovisual. The
human accuracy reported for each mode is presented in Table I. In this case,
human training was achieved through participants’ previous experiences,
whereas the results mentioned in Table I refer to the accuracy over the whole
dataset.
TABLE I: Human accuracy rate on CREMA-D [4] Mode | Audio | Video | Audio + Video
---|---|---|---
Accuracy | 40.9% | 58.2% | 63.6%
Figure 2: Proposed scheme for emotion recognition. The first part of the
network deals with visual information extraction, whilst, the second part of
the network handles the sound information. The last part of the network is a
stack of two fully-connected (FC) layers, representing the classifier.
## IV Proposed method
### IV-A Spectrogram
The spectrogram represents a traditional approach towards the time-frequency
analysis of signals with many applications in the fields of music, radar and
speech processing. More precisely, the spectrogram of a signal shows the
temporal evolution of its short-time spectral amplitude. From a visual
representation point of view, the spectrogram is a two-dimensional plot with
the x axis representing time, the y axis representing frequency, and the
magnitude of the signal encoded by the pixel value.
The most common method of computing a spectrogram is with the discrete Short-
Time Fourier Transform (STFT), described by the following formula:
$STFT\\{x[n]\\}(m,k)=\sum_{n=-\infty}^{\infty}x[n]\cdot w[n-m]e^{-j\frac{2\pi}{N_{x}}kn}$ (1)
where $w[n]$ is the analysis window and $N_{x}$ is the number of samples.
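As an illustration of (1), a spectrogram can be computed with SciPy as sketched below; the sampling rate, window type, window length and overlap are our assumptions, as the paper does not report these parameters.

```python
# A minimal sketch of an STFT-based spectrogram as in (1); all parameter
# values here (fs, window, nperseg, noverlap) are illustrative assumptions.
import numpy as np
from scipy.signal import spectrogram

fs = 16000                                   # assumed sampling rate (Hz)
t = np.arange(0, 1.0, 1.0 / fs)
x = np.sin(2 * np.pi * 440 * t)              # stand-in for a speech signal

f, times, Sxx = spectrogram(x, fs=fs, window="hann",
                            nperseg=512, noverlap=384)
log_spec = 10.0 * np.log10(Sxx + 1e-10)      # dB scale, as usually plotted
print(log_spec.shape)                        # (frequency bins, time frames)
```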
Moreover, several other methods of computing a spectrogram have been developed
over time. The Wigner-Ville distribution is one of the techniques used to
obtain information about a signal which varies in time [15]. Wavelet analysis,
through the _continuous wavelet transform_ with different wavelet bases, can
also be used for this purpose. Nevertheless, in this study we restrict
ourselves to the Short-Time Fourier Transform as a way to compute the
spectrogram, and leave the other methods presented above for future
investigations.
While all these techniques extract a time-frequency representation of the
signal’s energy density, we note that the resolution accuracy of all these
representations is bounded by the time-frequency uncertainty principle (see
[16], Chapter 3) which forbids arbitrarily accurate energy density
representations in both time and frequency simultaneously. This is why we
believe that in order to extract relevant features for the emotions
classification task, these time-frequency representations should be followed
by a learned hierarchical feature extractor, such as a convolutional neural
network, as detailed below.
### IV-B Preprocessing
The video sequence is divided into frames and we keep a fixed number of images
$N$, equally spaced throughout the video. The following step consists in
detecting the faces of the subjects and resizing them to equal size. In order
to prevent the overfitting caused by the insufficient number of training
samples, data augmentation techniques are applied, e.g., rotation, random
cropping, different illumination and flips. In this sense, for each video
sequence, 30 additional videos were generated following the methods presented
above.
The raw audio signal is processed with the discrete Short-Time Fourier
Transform (STFT) described above. In the majority of cases, the audio files do
not have the same length. For this reason, the spectrograms do not have the
same width. In order to overcome this aspect, the spectrograms are reshaped to
a fixed dimension (the height is the same for all signals, so the reshaping is
done along the $x$ axis). Two examples of spectrograms for two audio signals
expressing different types of emotions are shown in Fig. 3.
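A sketch of these two preprocessing steps is given below; the use of OpenCV for resizing is our assumption, since the paper does not name a specific library.

```python
# Preprocessing sketch: keep N equally spaced frames and bring every
# spectrogram to the fixed 192 x 120 shape. OpenCV is an assumed choice.
import numpy as np
import cv2

def sample_frames(frames, n=20):
    """Keep n frames equally spaced over the whole clip."""
    idx = np.linspace(0, len(frames) - 1, n).astype(int)
    return [frames[i] for i in idx]

def fix_spectrogram_shape(spec, height=192, width=120):
    """Resize to a fixed shape; the height is already common to all
    signals, so in practice only the time (x) axis is rescaled."""
    return cv2.resize(spec, (width, height), interpolation=cv2.INTER_LINEAR)
```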
(a) Angry
(b) Happy
Figure 3: Resized spectrograms for two different types of emotions.
Figure 4: Loss function scores on the training and test sets.
Figure 5: Accuracy scores on the training and test sets.
### IV-C Proposed Architecture
In this paper, we propose a deep Convolutional Neural Network (CNN)-based
architecture, composed of three parts. The first part of the network deals
with the feature extraction from the image sequences, the second part with the
feature extraction from the audio signals, whereas the last part performs the
emotion recognition part. The entire architecture is depicted in Fig. 2.
An important component of the proposed network architecture is the convolution
operation, which is defined, for the audio (1-D) and visual (2-D) signals as
follows:
$(f*h)[i]=\sum_{k=-T}^{T}h[k]\cdot f[i-k]$ (2)
$(f*h)[i,j]=\sum_{k=-T}^{T}\sum_{m=-T}^{T}h[k,m]\cdot f[i-k,j-m]$ (3)
where $h[k]$ and $h[k,m]$ are the 1-D and 2-D kernels whose parameters are
learned during the training phase, and $f$ is the 1-D or 2-D signal at hand.
The first part of the network is represented by the lower branch of the scheme
presented in Fig. 2 and it is designed to receive a sequence of images and to
output abstract features which encode the information from frames. The input
dimension is $B\times N\times W\times H$, where $B$ is the batch size, $N$ is
the length of the video sequence, whereas $W$ is the width and $H$ is the
corresponding height. The video frames are analysed one after another.
The second part of the network is also a CNN-based architecture whose aim is
to process the spectrograms of the audio signals and to extract meaningful
features from them. Two remarks should be made at this point. Firstly, the
number of convolutional layers is rather small. Secondly, the last layer
contains a smaller number of neurons compared to the first part of the
network, which deals with image analysis. This is justified by the fact that
images contain more information than the audio signals (e.g., the position of
the eyebrows, eyes, mouth or cheeks may indicate a person’s emotional state).
More specifically, non-verbal information is essential in discovering the
emotional state of an individual. For example, happiness can be unambiguously
determined from the facial expression of a person. However, vocal expression
is needed to detect anger [4].
The third part of the network shown in Fig. 2 is a classifier composed of two
fully-connected (FC) layers. After inference through the aforementioned parts
of the network, the abstract features are concatenated and the resulting
feature vector is fed into the last part of the network, which provides the
probability distribution of the emotion in the analyzed video. As already
mentioned, the lengths of the feature vectors for frames and sound differ, in
a proportion of 4:1, because the importance of the visual data is greater.
The output is a tensor of $L$ elements $\\{y_{x,1},y_{x,2},\ldots,y_{x,L}\\}$
that indicates the emotional state of the person ($L$ being the number of
emotional states taken into consideration). In order to obtain the probability
distribution for the emotional states, we use a SoftMax function that
transforms the vector of $L$ scores
$\\{y_{x,1},y_{x,2},\ldots,y_{x,L}\\}$ into a normalized vector of
probabilities $\\{p_{x,1},p_{x,2},\ldots,p_{x,L}\\}$, using the formula:
$p_{x,c}=\frac{e^{y_{x,c}}}{\sum_{c^{\prime}=1}^{L}e^{y_{x,c^{\prime}}}}$ (4)
for $c\in\\{1,...,L\\}$.
The loss function considered for training is the Cross-Entropy, defined, for
an observation $x$, as:
$\mathcal{L}(x)=-\sum_{c=1}^{L}\delta_{x,c}\log(p_{x,c})$ (5)
where $\delta_{x,c}$ is the binary indicator (0 or 1) showing if class label
$c$ is correct for observation $x$ or not and $p_{x,c}$ is the predicted
probability of observation $x$ pertaining to class $c$.
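A condensed PyTorch sketch of this three-part design is given below. The channel counts and the 256/64 feature split (realizing the 4:1 ratio) are illustrative assumptions rather than the exact configuration of Fig. 2, and stacking the $N$ frames as input channels is one possible reading of the frame-by-frame analysis.

```python
# A sketch (layer sizes are assumptions) of the two-branch network with a
# fully-connected classifier head; nn.CrossEntropyLoss implements (4)-(5).
import torch
import torch.nn as nn

class EmotionNet(nn.Module):
    def __init__(self, n_frames=20, n_classes=6):
        super().__init__()
        # part 1, visual branch: input B x N x 98 x 80, frames as channels
        self.video = nn.Sequential(
            nn.Conv2d(n_frames, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 256))
        # part 2, audio branch: input B x 1 x 192 x 120, deliberately smaller
        self.audio = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 64))
        # part 3: classifier on the concatenated features (4:1 split)
        self.head = nn.Sequential(nn.Linear(256 + 64, 128), nn.ReLU(),
                                  nn.Linear(128, n_classes))

    def forward(self, frames, spec):
        z = torch.cat([self.video(frames), self.audio(spec)], dim=1)
        return self.head(z)   # logits; SoftMax (4) turns them into p_{x,c}

model = EmotionNet()
logits = model(torch.randn(2, 20, 98, 80), torch.randn(2, 1, 192, 120))
print(logits.shape)   # torch.Size([2, 6])
```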
## V Experimental Setup and Results
In order to prove the effectiveness of the proposed classification
architecture and to show that the audio data is helpful for classification, we
performed two experiments. In the first experiment, we considered as input
data just the sequence of images and removed the part of the network which
processes spectrograms. In the second experiment, we used the entire network
architecture described in the previous section, which receives data from two
different sources, i.e., audio and visual. Several setups were considered
during the implementation of the proposed neural network. As mentioned above,
the spectrograms of the signals must have equal dimensions even if the
analysed audio signals differ in length. We chose to resize all the
spectrograms to $192\times 120$. Similarly, the images containing the detected
faces are resized to $98\times 80$ pixels.
The experiments were implemented in PyTorch on a computer with NVIDIA GTX
1080Ti graphics board and an Intel i7 processor. We tried several optimizers,
but the best performance regarding convergence time and network accuracy was
achieved by Adam. The learning rate was $10^{-4}$, with a weight decay of
$5\cdot 10^{-5}$.
During the training phase, we set the batch size to 32 and kept 20 equally
spaced frames from every video. Therefore, the input data size is $32\times
20\times 98\times 80$ for the first part of the network and $1\times 1\times
192\times 120$ for the second part of the network. In order to learn the
parameters that minimize the loss function, we used a learning rate of $0.001$
and Adam as the method for stochastic optimization [17].
The testing method is leave-one-out. More precisely, we trained the network on
all actors except one, then tested the network on the actor that was removed
from the dataset. This procedure was repeated for all 91 actors in the
database and the obtained results were averaged to obtain the final
classification accuracy score.
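The protocol can be summarized by the following sketch, where `train_fn` and `eval_fn` are hypothetical placeholders for the training and evaluation routines described in this section:

```python
# Leave-one-actor-out sketch; all helper names are hypothetical placeholders.
import numpy as np

def leave_one_actor_out(videos, labels, actor_ids, train_fn, eval_fn):
    accuracies = []
    for actor in np.unique(actor_ids):
        held_out = actor_ids == actor
        model = train_fn(videos[~held_out], labels[~held_out])  # 90 actors
        accuracies.append(eval_fn(model, videos[held_out], labels[held_out]))
    return np.mean(accuracies), np.std(accuracies)  # mean over the 91 actors
```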
The loss function values obtained on the training and test sets are shown for
different epoch numbers in Fig. 4, whereas the corresponding accuracy values
are given in Fig. 5. It can be easily observed that the loss function,
computed for the training set, has a fast decay in the first epoch, followed
by a slower decay in the next epochs, as shown in Fig. 4.
We mention that the network training was stopped after four epochs on average,
just before overfitting occurs. Moreover, as shown in Fig. 5, the accuracy
starts to decay at the fifth epoch. Also, the validation loss was greater in
some cases because the Cross-Entropy loss function is known to punish the
network significantly when the classification returns wrong results. Because
overfitting was a constant problem in the training process, new methods of
data augmentation could be added, such as near infra-red (NIR) generated data
[18].
The obtained results on the test set are reported in Table II for two
different modes of the recordings, namely video and audiovisual, respectively.
It is worth mentioning that the accuracy is reported considering the leave-
one-out strategy, whereas in Table I the human accuracy was computed over the
whole dataset.
TABLE II: Classification results on CREMA-D | Mean accuracy over 91 actors | Standard deviation
---|---|---
Video | 62.84% | 14.06
Audio + Video | 69.42% | 13.25
## VI Conclusion
In this article, we present a network architecture that is able to recognize
emotions by combining visual with audio information. The output of the network
is a distribution of probabilities over the types of emotion considered for
training.
The architecture is CNN-based, whilst the time series of frames is processed
in a pipeline that takes into consideration multiple channels for each image.
This approach captures the variation of the data in time, which is a key point
in emotion recognition, because each emotion evolves in intensity over time.
The results support the idea that having more information about a subject is
important for determining the emotion with increased accuracy. In this sense,
it can be observed that adding the audio information (which is a signature of
each subject) to the network yields an improvement of almost 6.57% in the
accuracy results.
Also, we can observe that with multimodal information about a subject the
standard deviation of the predictions is lower, which supports the fact that
the network performs better when the input carries more complex information,
in our case combined video and audio.
Moreover, our results, in both experiments, outperform the human accuracy
reported by the authors of the CREMA-D dataset, by 4.64% in video mode and by
5.82% in video combined with audio.
As future development, the emotion intensity levels reported on CREMA-D
samples could be taken into consideration to further increase the accuracy of
the emotion recognition system. This information may help the training process
by weighting the relative importance of every sample. Other future research
directions include the use of different techniques for time-frequency analysis,
such as the Wigner-Ville distribution or the Continuous Wavelet Transform,
which could lead to improved information extraction from the audio signal.
## Acknowledgment
This work has been partially supported by the Ministry of Innovation and
Research, UEFISCDI, project SPIA-VA, agreement 2SOL/2017, grant PN-
III-P2-2.1-SOL-2016-02-0002.
## References
* [1] Y.-I. Tian, T. Kanade, and J. F. Cohn, “Recognizing action units for facial expression analysis,” _IEEE Transactions on pattern analysis and machine intelligence_ , vol. 23, no. 2, pp. 97–115, 2001.
* [2] P. Ekman, “Facial expression and emotion.” _American psychologist_ , vol. 48, no. 4, p. 384, 1993.
* [3] C. Marechal, D. Mikołajewski, K. Tyburek, P. Prokopowicz, L. Bougueroua, C. Ancourt, and K. Wegrzyn-Wolska, _Survey on AI-Based Multimodal Methods for Emotion Detection_. Springer International Publishing, 2019, pp. 307–324.
* [4] H. Cao, D. G. Cooper, M. K. Keutmann, R. C. Gur, A. Nenkova, and R. Verma, “Crema-d: Crowd-sourced emotional multimodal actors dataset,” _IEEE transactions on affective computing_ , vol. 5, no. 4, pp. 377–390, 2014.
* [5] P. Barros, N. Churamani, E. Lakomkin, H. Siqueira, A. Sutherland, and S. Wermter, “The omg-emotion behavior dataset,” in _2018 International Joint Conference on Neural Networks (IJCNN)_. IEEE, 2018, pp. 1–7.
* [6] P. Burkert, F. Trier, M. Z. Afzal, A. Dengel, and M. Liwicki, “Dexpression: Deep convolutional neural network for expression recognition,” _arXiv preprint arXiv:1509.05371_ , 2015.
* [7] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, “Going deeper with convolutions,” in _Computer Vision and Pattern Recognition (CVPR)_ , 2015. [Online]. Available: http://arxiv.org/abs/1409.4842
* [8] Z. Yu and C. Zhang, “Image based static facial expression recognition with multiple deep network learning,” in _Proceedings of the 2015 ACM on International Conference on Multimodal Interaction_. ACM, 2015, pp. 435–442.
* [9] S. Li and W. Deng, “Deep facial expression recognition: A survey,” _CoRR_ , vol. abs/1804.08348, 2018. [Online]. Available: http://arxiv.org/abs/1804.08348
* [10] S. E. Kahou, X. Bouthillier, P. Lamblin, C. Gulcehre, V. Michalski, K. Konda, S. Jean, P. Froumenty, Y. Dauphin, N. Boulanger-Lewandowski _et al._ , “Emonets: Multimodal deep learning approaches for emotion recognition in video,” _Journal on Multimodal User Interfaces_ , vol. 10, no. 2, pp. 99–111, 2016.
* [11] A. Dhall, R. Goecke, J. Joshi, M. Wagner, and T. Gedeon, “Emotion recognition in the wild challenge (emotiw) challenge and workshop summary,” in _Proceedings of the 15th ACM on International conference on multimodal interaction_. ACM, 2013, pp. 371–372.
* [12] P. Tzirakis, G. Trigeorgis, M. A. Nicolaou, B. W. Schuller, and S. Zafeiriou, “End-to-end multimodal emotion recognition using deep neural networks,” _IEEE Journal of Selected Topics in Signal Processing_ , vol. 11, no. 8, pp. 1301–1309, Dec 2017.
* [13] K. Vougioukas, S. Petridis, and M. Pantic, “Realistic speech-driven facial animation with gans,” _ArXiv_ , vol. abs/1906.06337, 2019.
* [14] R. Beard, R. Das, R. W. M. Ng, P. G. K. Gopalakrishnan, L. Eerens, P. Swietojanski, and O. Miksik, “Multi-modal sequence fusion via recursive attention for emotion recognition,” in _Proceedings of the 22nd Conference on Computational Natural Language Learning_. Brussels, Belgium: Association for Computational Linguistics, Oct. 2018, pp. 251–259. [Online]. Available: https://www.aclweb.org/anthology/K18-1025
* [15] L. Cohen, “Time-frequency distributions-a review,” _Proceedings of the IEEE_ , vol. 77, no. 7, pp. 941–981, 1989.
* [16] ——, _Time-frequency analysis_. Prentice hall, 1995, vol. 778.
* [17] D. Kingma and J. Ba, “Adam: A method for stochastic optimization,” _International Conference on Learning Representations_ , pp. 1–15, 2015.
* [18] A. Mălăescu, L. C. Duţu, A. Sultana, D. Filip, and M. Ciuc, “Improving in-car emotion classification by nir database augmentation,” in _2019 14th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2019)_. IEEE, 2019, pp. 1–5.
|
2024-09-04T02:54:55.966086 | 2020-03-01T12:01:19 | 2003.00467 | {
"authors": "Benjamin Ward-Cherrier, Nicholas Pestell and Nathan F. Lepora",
"full_text_license": null,
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"provenance": "arxiv-papers-0000.json.gz:25966",
"submitter": "Benjamin Ward-Cherrier",
"url": "https://arxiv.org/abs/2003.00467"
} | arxiv-papers | # NeuroTac: A Neuromorphic Optical Tactile Sensor
applied to Texture Recognition
Benjamin Ward-Cherrier, Member, IEEE, Nicholas Pestell, Student Member, IEEE,
Nathan F. Lepora, Member, IEEE BWC was supported by a University of Bristol
Vice-Chancellor’s fellowship, NP was supported by an EPSRC DTP studentship and
NL was supported in part by a Leverhulme Trust Research Leadership Award on ’A
biomimetic forebrain for robot touch’ (RL-2016-039).Authors are with the
Department of Engineering Mathematics, University of Bristol and Bristol
Robotics Laboratory, University of Bristol, UK.
Email: {b.ward-cherrier, n.pestell<EMAIL_ADDRESS>
###### Abstract
Developing artificial tactile sensing capabilities that rival human touch is a
long-term goal in robotics and prosthetics. Gradually more elaborate
biomimetic tactile sensors are being developed and applied to grasping and
manipulation tasks to help achieve this goal. Here we present the neuroTac, a
novel neuromorphic optical tactile sensor. The neuroTac combines the
biomimetic hardware design of the TacTip sensor, which mimics the layered
papillae structure of human glabrous skin, with an event-based camera
(DAVIS240, iniVation) and algorithms which transduce contact information in
the form of spike trains. The performance of the sensor is evaluated on a
texture classification task, with four spike coding methods being implemented
and compared: Intensive, Spatial, Temporal and Spatiotemporal. We found
timing-based coding methods performed with the highest accuracy over both
artificial and natural textures. The spike-based output of the neuroTac could
enable the development of biomimetic tactile perception algorithms in robotics
as well as non-invasive and invasive haptic feedback methods in prosthetics.
## I INTRODUCTION
The long-term scientific goal of human-like artificial touch is being worked
towards with the creation of gradually more elaborate biomimetic tactile
sensors. The development of sensors which mimic aspects of biological touch
could lead to safer robots and more intuitive and versatile prosthetic
devices. An example of such sensors are neuromorphic tactile sensors, which
aim to replicate the spike-based representation of information found in the
nervous system.
The principal objective of neuromorphic engineering is to develop technologies
which can exploit efficient representations of information and operate in a
rapid, power-efficient manner. Using neuromorphic technologies with event-
based outputs also holds potential for integration with the human nervous
system, as has been demonstrated with artificial retinas [1]. Optical tactile
sensors have also recently demonstrated progress on a number of tactile tasks
[2, 3], and have the advantage of capitalizing on advances in image
recognition and machine learning techniques. These technologies will be
combined here to develop a tactile sensor for robotic manipulation and
prosthetics applications.
The aims of this paper are as follows:
* •
Develop a novel neuromorphic optical tactile sensor to enable the development
and investigation of biomimetic spike-based information processing methods.
* •
Validate this sensor on a texture classification task.
* •
Investigate 4 spike coding methods (intensive, spatial, temporal and
spatiotemporal) and their effect on texture recognition performance.
Figure 1: Transduction, encoding and decoding mechanisms for the neuroTac
sensor. The sensor mimics biological processes by accumulating pixel events
(Potentials) from an event-based camera and combining them into taxel events
(Spikes).
The neuroTac sensor, described here, follows the tradition of neuromorphic
technologies [4] in seeking to produce and decode spike-based information
(Fig. 1). The spike-based sensor output is coded using 4 different methods:
intensive (overall number of spikes), spatial (number of spikes per taxel),
temporal (number of spikes per time window) and spatiotemporal (Van Rossum
metric [5]). Although neuromorphic devices may present certain advantages over
their non-neuromorphic counterparts (speed and energy efficiency), our
principal objective here is linked to biomimetism. The TacTip sensor emulates
the internal structure of human skin [6], and the neuroTac adds to that
biomimetic morphology by producing a neuromorphic, spike-based output. We aim
to develop the neuroTac and associated spike-based information processing
methods to investigate the advantages this approach might provide biological
organisms.
The performance of the sensor and its associated coding methods is validated
on texture recognition tasks, in which artificial and natural textures are
identified using a K-nearest-neighbour (KNN) algorithm. The sensor
successfully performs texture recognition, with temporal methods producing the
highest classification accuracy. This underlines the importance of spike
timing in texture recognition tasks.
## II BACKGROUND AND RELATED WORK
One of the strongest motivations for studying the human sense of touch is its
important role in our interactions and in the manipulation of our environment.
Artificial tactile sensing thus often mimics particular features of biological
touch through biomimetic hardware or perception algorithms [7]. An emerging
area of biomimetic tactile sensing aims to replicate biological spike-based
signalling through event-based systems and study the encoding of tactile
information [8]. This is generally referred to as neuromorphic sensing [4].
Neuromorphic sensing arose from seminal work on silicon neurons [9] and in the
area of vision it has led to the successful integration of artificial retinas
with the human nervous system [1]. Evidence suggests that a neuromorphic
tactile sensor could similarly be used to restore a natural sense of touch to
amputees [10, 11].
Large-scale event-based tactile sensors have been developed for use as robotic
skins [12], with a focus on the efficient processing of large quantities of
asynchronous data. Bartolozzi et al. also developed an architecture for use
with distributed off-the-shelf tactile sensors, transforming raw data into
event-based signals [13]. Another example of a large-scale system for the
investigation of spiking outputs in neuromorphic touch is that developed by
Lee et al. [14]. These systems represent crucial technological developments
in the area of event-based systems, and will be essential in the creation of
rapidly reacting fully tactile robot systems.
Here, our focus is not on large-scale event-based robotic skins or platforms
for investigating spike-based encoding but rather on an artificial fingertip
with high spatial resolution for fine in-hand manipulation tasks. Oddo et al.
developed a tactile sensor more in line with this design objective [15]. Their
neuromorphic tactile fingertip comprises 16 taxels combining raw outputs from
4 microelectromechanical system (MEMS) sensors, fed to an Izhikevich model to
produce spikes. This system simulates the outputs from biological SA1
afferents, and has been proven capable of accurately distinguishing natural
textures [16]. A crucial difference with the sensor presented here is that the
neuroTac emulates the fast-adapting responses of FA1 afferents to dynamic
contact (rather than the slowly adapting SA1 afferent responses). Recently, a
sensor has been developed using similar hardware, with an event-based camera
capturing the deformations of a deformable membrane [17]. The system was
calibrated for use as a force sensor and to measure material hardness. The
neuroTac sensor design aligns more closely both with traditional tactile
sensor designs comprising taxels (internal markers which transduce contact),
and biological fingertip structures (with taxels functioning as artificial
mechanoreceptors).
We validate the neuroTac on a texture classification task, in which 3d-printed
and natural textures are classified by sliding the sensor horizontally across
each stimulus. Texture identification is a common task in tactile sensing due
to its important implications in object recognition and manipulation. Studies
on the tactile sensing of textures generally involve naturally occurring
textures [18, 16, 19, 20]. Here, we initially wish to investigate the sensor’s
response to highly structured textures. To achieve this, we utilize purposely
designed 3d-printed stimuli with regular grids of cylindrical bumps as has
been done in past studies of biological touch [21]. Following this, we
evaluate the sensor’s classification performance on a set of 20 natural
textures.
## III METHODS
### III-A NeuroTac design and operation
Figure 2: The neuroTac sensor. The tip contains internal pins treated as
mechanoreceptors, which produce pixel events at the event-based camera. These
are pooled and converted to taxel events (akin to biological spikes) upstream.
##### Sensor design
The NeuroTac is based on the TacTip sensor [6], a 3d-printed optical tactile
sensor with a compliant dome-shaped outer membrane comprising biomimetic
internal markers which emulate the internal structure of human fingertips
[22]. To convert the TacTip into a neuromorphic sensor, we replace its camera
module (ELP, USBFHD01M-L21) with an event-based camera (iniVation, DAVIS240)
which processes only dynamic events within the image frame. The DAVIS240 is an
evolution of the DVS128 [23] and comprises 240x180 pixels, which process
changes in brightness through independent electronic circuits to produce
events in the address-event representation (AER) format [24]. These events are
combined by taxel and transmitted by the sensor, analogously to biological
spike trains (see Section III-A for more details on the sensor operation).
Like the TacTip, the NeuroTac is made up of 3 main hardware elements (Fig. 2):
* •
Tip: This is a compliant, 3d-printed modular part whose outer membrane comes
into contact with the environment. Its internal surface comprises white-tipped
pins which are displaced during contact. It is filled with silicone gel
(Techsil, RTV27905) and covered with an acrylic lens to protect electronic
components.
* •
LED ring: This illuminates the tip’s internal surface.
* •
Camera: Housed within the main part of the sensor (the 3d-printed Body), this
camera is event-based (iniVation, DAVIS240) and produces AER outputs in
response to movements of the sensor’s internal markers.
Figure 3: Sensor operation. Pixel events produced by the camera (iniVation,
DAVIS240) are initially filtered, then pooled into a single taxel event.
Finally, the position of taxels is updated (see Section III-A).
Figure 4:
Experimental setup: The NeuroTac is attached to a 6-dof industrial robot arm
(ABB, IRB120) which is used to slide the sensor horizontally across the
3d-printed textures.
##### Sensor operation
As the NeuroTac’s compliant membrane deforms through contact, its 49 white-
tipped internal pins deflect, and their movement triggers events in the camera
(iniVation, Davis240) in the Address-Event Representation (AER) format. These
events are produced by thresholding brightness changes at each photodiode in
parallel [23, 24], leading to fast data transmission and high temporal
precision. We designate these events ’pixel events’ as they are created at the
pixel level. Data transduction by the neuroTac then occurs through the
following three steps (illustrated in Fig. 3; a code sketch of this pipeline
is given after the list):
* •
Noise filtering: each of the sensor’s 49 pins represents a taxel, and has a
receptive field assigned to it (6 px diameter). Pixel events that occur
outside a taxel’s receptive field, or which do not have another pixel event
occur within a given spatiotemporal window (neighbouring pixels, 5 ms) are
filtered out.
* •
Pooling: pixel events are pooled over a short duration (20 ms) and combined
into a ’taxel event’ based on the receptive field they are located within.
Each taxel event is an array comprising 3 numbers: the number of pixel events
it contains, their average location and their average timing. Note that each
taxel event is interpreted as a spike associated with an artificial
mechanoreceptor.
* •
Position update: Receptive fields are re-centred around each taxel by shifting
towards detected pixel events, to account for each pin’s movement across the
image.
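The sketch below illustrates this three-step pipeline in Python with synthetic pixel events; the event format, the coincidence test (here crudely replaced by requiring at least two events per window) and all constants are our simplifications of the description above, not the sensor's actual driver code.

```python
import numpy as np

rng = np.random.default_rng(1)
centers = rng.uniform(20, 220, size=(49, 2))   # initial taxel centres (px)
times = np.sort(rng.uniform(0, 1000, 500))     # pixel-event times (ms)
coords = rng.uniform(0, 240, size=(500, 2))    # pixel-event positions (px)

RADIUS, POOL = 3.0, 20.0   # 6 px receptive field, 20 ms pooling window

taxel_events = [[] for _ in range(49)]         # spike train of each taxel
for t0 in np.arange(0.0, 1000.0, POOL):
    in_window = (times >= t0) & (times < t0 + POOL)
    for n in range(49):
        dist = np.linalg.norm(coords - centers[n], axis=1)
        hit = in_window & (dist <= RADIUS)
        if hit.sum() >= 2:                     # crude stand-in for the 5 ms
            taxel_events[n].append(times[hit].mean())  # coincidence filter
            centers[n] = coords[hit].mean(axis=0)      # position update
```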
### III-B Experimental setup
The first experiment with artificial textures involves sliding the NeuroTac
across 11 3d-printed textures. Artificial textures consist of rectangular
grids of cylindrical bumps (1 mm height) with equal spacing and diameter. The
texture grid size varies from 0 mm (smooth surface) to 5 mm in steps of 0.5 mm
(Fig. 4).
Figure 5: Natural textures used for classification. 20 textures were used,
with a full list presented in Table II.
Data is collected by mounting the NeuroTac on a 6-dof robotic arm (ABB,
IRB120) and sliding it horizontally across the textures (Fig. 4). The robot
slides the NeuroTac across each texture at a speed of 15 mm/s, over a distance
of 60 mm, comprising one data sample. We collect 100 samples for each texture,
to obtain a dataset of 100 (number of runs) $\times$ 11 (number of textures)
$\times$ 49 (number of taxels) spike trains.
In the second experiment on natural texture classification, the data
collection procedure is repeated for 50 runs over 20 natural textures (see
Table II).
### III-C Spike train encoding methods
The multi-taxel spike trains obtained from the sensor are denoted as a matrix
of spike times $t_{n}^{i}$, where $n=1,2...,N$ (N denotes the number of
taxels, $N=49$ in this case) and $i=1...I_{n}$ ($I_{n}$ denotes the number of
spikes for taxel $n$). A sample is denoted as the multi-taxel spike train
resulting from the sensor sliding 60 mm horizontally across one texture.
In the following, we describe the four coding methods applied to transform the
spike trains $t_{n}^{i}$ into the encoded representation $R$.
#### III-C1 Intensive coding
This encoding consists of the average spike count per taxel for a given data
sample. We name it intensive, analogously to work on biological touch [25],
since it can be interpreted as an overall intensity of the sample signal with
an absence of any spatial or temporal resolution. The encoding produces a
single average spike count per sample, which is used for texture
classification.
$R=\frac{1}{N}\sum_{n=1}^{N}\sum_{i=1}^{I_{n}}t_{n}^{i}$ (1)
where $N$ is the total number of taxels and $I_{n}$ denotes the number of
spikes for taxel n.
#### III-C2 Spatial coding
Here we consider the topology of the sensor, and sum the spike counts
separately for each of the sensor’s 49 taxels. The resulting array of 49 spike
rates is used as an encoded representation of the texture
$R_{n}=\sum_{i=1}^{I_{n}}t_{n}^{i}$ (2)
#### III-C3 Temporal coding
Temporal coding considers a rolling window of width $\Delta t$ over each data
sample, which is rolled forward over the time domain in timesteps of 1 ms.
Within the window $\Delta t$, the average spikes per taxel are recorded before
proceeding to the next timestep.
$R_{n}(t)=\frac{1}{N}\sum_{t=t}^{t+\Delta t}t_{n}^{i}(t)$ (3)
The $\Delta t$ parameter in this encoding method affects the classification
accuracy, therefore we optimize it through brute-force optimization within a
limited range of 1-200 ms (see Section IV-A).
#### III-C4 Spatiotemporal coding
Spatiotemporal coding uses the spatial and temporal features of spike trains
produced by the sensor. A multi-neuron Van Rossum distance [26] is used as a
metric in the texture classification, to ensure that both spatial and temporal
dimensions of the data are considered.
The spatiotemporal encoded representation can be considered the convolution of
the multi-taxel spike train with an exponential kernel used in the Van Rossum
distance calculation (see Section III-D).
$R_{n}^{i}(t)=t_{n}^{i}h(t-t_{i})$ (4)
where
$h=\begin{cases}0,&t<0\\\ \frac{1}{\tau}e^{-t/\tau},&t\geq 0\end{cases}$
(5)
$\tau$ is a time constant parameter which can be optimized to create the most
accurate clustering of texture representations.
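To make the first three encodings concrete, the short sketch below computes them from per-taxel lists of spike times; the synthetic spike trains and variable names are ours, and the resulting feature vectors would then be fed to the classifier of Section III-D.

```python
import numpy as np

N, T, DT = 49, 4000.0, 159.0     # taxels, sample length (ms), window width (ms)
rng = np.random.default_rng(2)
spikes = [np.sort(rng.uniform(0, T, rng.integers(5, 40))) for _ in range(N)]

counts = np.array([len(s) for s in spikes])
intensive = counts.mean()        # average spike count over all taxels
spatial = counts                 # spike count per taxel (49-dim vector)

# Temporal: average spikes per taxel in a window of width DT, slid in 1 ms steps.
starts = np.arange(0.0, T - DT, 1.0)
temporal = np.array([
    sum(((s >= t0) & (s < t0 + DT)).sum() for s in spikes) / N
    for t0 in starts
])
```

Here `intensive` and `spatial` are implemented as spike counts, following the verbal descriptions of the encodings.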
### III-D Classification
Texture classification is performed through a KNN algorithm (k=4), which
assigns a class (texture grid size 0-5 mm in 0.5 mm steps here) to a test
sample based on its 4 closest training samples. For the intensive, spatial and
temporal coding methods, a standard Euclidean distance is used to calculate
distances between samples.
For the spatiotemporal coding, we use a multi-neuron Van Rossum distance [26]
which involves the convolution of spike trains with an exponential kernel
(Section III-C4) before applying a distance metric.
In the original Van Rossum metric [5], which applies to single neuron spike
trains, the distance metric between two spike trains is simply:
$d_{2}(t_{1},t_{2};h)=\sqrt{\int dt(f_{1}-f_{2})^{2}}$ (6)
where $f_{1}$ and $f_{2}$ are the convolved functions.
The extension of the Van Rossum metric to multi-neuron cases, as described by
Houghton and Sen [26], introduces an additional parameter $\theta$ which
represents the correlation between neurons. The parameter’s effect is best
described by taking its cosine, which varies from 0 to 1 with:
* •
$cos\theta=0$. Labelled line code: each neuron is considered independent and
their distances are summed.
* •
$cos\theta=1$. Summed population code: spike trains are superimposed before
calculating the distance metric.
* •
$1>cos\theta>0$: an intermediate level of inter-neuron correlation between
these two extremes.
The $\theta$ angle is an optimizable parameter, and thus we perform a two
parameter Bayesian optimization of $\tau$ (exponential kernel time constant)
and $\theta$ for spatiotemporal encoding as described in section IV-B.
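A discretized sketch of this multi-taxel Van Rossum distance is given below: each spike train is convolved with the causal kernel $h$, and the per-taxel distances are combined so that $\cos\theta=0$ recovers the labelled-line code and $\cos\theta=1$ the summed-population code; the discretization and function names are our own.

```python
import numpy as np

def vr_multi(trains_a, trains_b, tau=76.0, cos_theta=0.4,
             t_max=600.0, dt=1.0):
    """Multi-taxel Van Rossum distance (discrete-time sketch)."""
    t = np.arange(0.0, t_max, dt)

    def embed(trains):
        # Convolve each train with h(t) = exp(-t / tau) / tau for t >= 0.
        f = np.zeros((len(trains), t.size))
        for n, train in enumerate(trains):
            for ti in train:
                m = t >= ti
                f[n, m] += np.exp(-(t[m] - ti) / tau) / tau
        return f

    g = embed(trains_a) - embed(trains_b)    # per-taxel difference functions
    gram = (g @ g.T) * dt                    # pairwise inner products
    same = np.trace(gram)                    # labelled-line part
    cross = gram.sum() - same                # inter-taxel correlation part
    return np.sqrt(max(same + cos_theta * cross, 0.0))

a = [np.array([10.0, 50.0]), np.array([200.0])]
b = [np.array([12.0, 55.0]), np.array([400.0])]
print(vr_multi(a, b))
```

In practice the pairwise distances can be precomputed into a matrix and passed to scikit-learn's `KNeighborsClassifier` with `metric="precomputed"`.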
## IV RESULTS
### IV-A Inspection of data - Artificial textures
Figure 6: Examples of the spike trains produced by the NeuroTac when sliding
across three different 3d-printed textures. From left to right, textures are 0
mm grid (smooth), 2.5 mm grid and 5 mm grid, with the corresponding spike
trains, spatial and temporal distributions displayed below them.
Data is gathered by sliding the sensor horizontally across the set of 11
artificial textures (see Section III-B). The output of the NeuroTac consists
of 49 spike trains which represent the events being produced at each taxel.
The spike trains will vary in overall intensity for different textures, as
well as having distinct spatial and temporal signatures.
Examples of spike trains obtained for 3 textures of grid sizes 0 mm (smooth),
2.5 mm and 5 mm are displayed here (Fig. 6, second row). It is visually
noticeable that these 3 multi-neuron spike trains differ, with the number of
spikes produced increasing with texture coarseness. As expected, applying
intensive coding to these samples gives readily distinguishable average spike
counts over all taxels of $4.63$ spikes/taxel (0 mm grid), $9.84$ spikes/taxel
(2.5 mm grid) and $34.73$ spikes/taxel (5 mm grid).
Spatial coding (spike frequency per taxel) reveals an additional topological
structure to the data (Fig. 6, third row), wherein certain taxels seem to fire
at higher rates than others (taxels 10, 17 and 27 for the 2.5 mm texture for
instance). This could be due to their location on the sensor being directly in
the path of the texture’s raised bumps, leading to more events being produced
for those taxels.
We also illustrate temporal coding with a rolling window of size $\Delta
t=159\,ms$ (Fig. 6, bottom row), in which the increased response with texture
grid size is visibly apparent. There is also a sharp increase in activity at
the beginning of each sample which corresponds to the start of the sensor’s
horizontal sliding motion, and is likely linked to the coefficient of static
friction between the sensor and texture.
### IV-B Texture classification - Artificial textures
Figure 7: Confusion matrices for artificial texture classification with each
of the 4 encoding methods: Intensive (Top Left), Spatial (Top Right), Temporal
(Bottom Left) and Spatiotemporal (Bottom Right).
For each encoding method (see Section III-C), the resulting data is classified
with a KNN algorithm to identify the 11 3d-printed textures.
Note that temporal and spatiotemporal encodings are both parameterised (see
Section III-C3, III-C4), and their parameters ($\Delta t$ for temporal
encoding, $cos\theta$ and $\tau$ for spatiotemporal encoding) are optimised to
maximize classification accuracy. Temporal encoding contains a single
parameter $\Delta t$, allowing us to perform a brute-force optimization over a
range of values from 1 to 200 ms. We set an upper threshold on the $\Delta t$
parameter both to accelerate the optimization procedure and ensure the method
is distinct from intensive coding (which corresponds to $\Delta t=T$, where T
is the full duration of the sample). This optimization process gives us a value
of $\Delta t=159\,ms$.
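This one-dimensional brute-force search is just a grid search; a hedged sketch, where `cv_accuracy` is a toy surrogate standing in for the actual cross-validated KNN accuracy of the temporally encoded dataset:

```python
import numpy as np

def cv_accuracy(dt):
    """Placeholder: would re-encode all samples with window dt (ms)
    and return the cross-validated KNN accuracy."""
    return np.exp(-((dt - 159.0) / 80.0) ** 2)   # toy unimodal surrogate

grid = np.arange(1, 201)                          # 1..200 ms, 1 ms steps
best_dt = max(grid, key=cv_accuracy)
print(best_dt)                                    # -> 159 with this surrogate
```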
In the case of the spatiotemporal encoding, we perform a Bayesian optimization
over the two parameters $cos\theta$ and $\tau$. The domain space we apply
optimization over is $cos\theta=0-1$ and $\tau=10-100ms$. We place an upper
limit on the $\tau$ parameter for faster convergence and to ensure
spatiotemporal coding is distinct from spatial coding. The parameters converge
over 1000 optimization epochs to values of $cos\theta=0.4$ and $\tau=76ms$.
Coding method | Performance - Artificial textures (%)
---|---
Intensive | 78.5 $\pm$ 41.1
Spatial | 95.5 $\pm$ 20.6
Temporal | 98.1 $\pm$ 13.7
Spatiotemporal | 98.3 $\pm$ 13
TABLE I: Comparison of different encoding methods using leave-one-out cross-
validation for texture classification.
First, we run a simple 80/20 train-test split on the gathered dataset, and
attempt to classify the textures using a KNN classifier (k=4). The results are
presented in the form of confusion matrices (Fig. 7), and provide insight into
each coding method’s performance. Visually, we can observe that the intensive
method is the least effective, particularly for the smoother textures (bump
diameter 0-2 mm). The spatial coding method seems more accurate, though there
is still some slight inaccuracy in the classification of smoother textures
(bump diameter 0-1.5 mm). Temporal and spatiotemporal coding appears to
provide near-perfect classification accuracy, indicating that sliding the
sensor over the textures produces discriminable time-dependent features.
To obtain a more complete picture of the performance of each coding method, we
perform a leave-one-out cross-validation over the gathered dataset. Results
are presented in table I and mirror the results displayed in the train-test
split confusion matrices. Intensive coding performs worst, with spatial
information improving performance significantly. However both temporal and
spatiotemporal coding have the highest accuracy, indicating the sensor
produces features in the temporal domain which are likely most representative
of the classified textures.
### IV-C Texture classification - Natural textures
Here we seek to test the sensor’s performance more thoroughly on texture
recognition by replacing the 3d-printed textures with natural textures, listed
below in Table II.
We again run a KNN classifier (k=4) over 20 runs with an 80/20 train-test
split (giving 4 test runs for each texture). The results are presented in the
form of confusion matrices for each coding method (Fig. 8).
Figure 8: Confusion matrices for natural texture classification with each of
the 4 encoding methods: Intensive (Top Left), Spatial (Top Right), Temporal
(Bottom Left) and Spatiotemporal (Bottom Right).
Classification accuracy appears generally strong, with Intensive coding the
weakest method, and Temporal and Spatiotemporal coding performing the best, as
was the case for artificial textures. This is confirmed through a leave-one-
out cross-validation, with results displayed in Table III. One significant
difference between the natural and artificial textures is that natural
textures appear more irregular in their misclassifications. The artificial
textures lie along a regularly designed gradient of coarseness, and thus
classification errors occur most often between neighbouring classes. In the
case of natural textures, there is no such regularity or order and
classification errors appear to be spread more widely across the confusion
matrices (Fig. 8).
It is also interesting to note that classification errors vary between coding
methods. For instance for Intensive and Spatial coding, fake fur and felt
(textures 6 and 17) are confused, whereas Temporal and Spatiotemporal coding
can distinguish them. Equally, liquid satin (texture 7) is misclassified as
foam (texture 1) only when Spatiotemporal coding is applied (Fig. 8). The fact
that these misclassifications vary between coding methods likely stems from
the fact that each method produces a distinct set of features.
## V Discussion
The neuroTac sensor developed here is a neuromorphic optical tactile sensor,
with a data output consisting of multi-taxel spike trains. The spike trains
were encoded using four different bio-inspired encoding mechanisms: Intensive,
Spatial, Temporal and Spatiotemporal. We validated the sensor through a
texture classification task, in which 11 3d-printed textures (grid sizes 0-5
mm in steps of 0.5 mm) and 20 natural textures were discriminated using a KNN
classification algorithm.
We chose texture discrimination as a validation task as it has been a key
experimental procedure in psychophysical and neuroscientific studies of touch.
Here we will attempt to contrast our results with existing theories in these
fields, and identify limitations as well as gaps for further investigation.
We found that applying spatial coding to the neuroTac data produced high
classification accuracy for coarser textures (2.5-5 mm grid size), but
performed less well for smoother textures (0-2.5 mm grid size). This result
seems to concur with Katz’s duplex theory of human tactile perception, with
spatial resolution being important for rougher textures, and vibration cues
taking over as textures get smoother [27]. The stronger performance of the
temporal and spatiotemporal coding methods for smooth textures could be linked
to their detection of high frequency vibrational cues.
Texture | Texture | Texture | Texture
---|---|---|---
number | name | number | name
1 | Foam | 11 | Flarefree net
2 | Plywood | 12 | Embroidered cotton
3 | MDF | 13 | Shirt canvas
4 | Acrylic | 14 | Fairydust organza
5 | Wool | 15 | Sequins allover
6 | Fake fur | 16 | Metallic mesh
7 | Liquid satin | 17 | Felt
8 | Shimmer organza | 18 | Needlecord
9 | Tweed | 19 | Fleece
10 | Lace | 20 | Microdot foil
TABLE II: Natural textures and their corresponding number.
Coding method | Performance - natural textures (%)
---|---
Intensive | 69.8
Spatial | 85.5
Temporal | 93.0
Spatiotemporal | 92.8
TABLE III: Comparison of the 4 different coding methods using leave-one-out
cross-validation for texture classification of the 20 natural textures.
It has also been suggested in a recent study that the specific timing of
spikes carries significance for texture discrimination, as it could encode
spatial frequency features of the textures being contacted [28]. The
spatiotemporal coding method described here could capture these precise spike
timings, with a resolution dependent on the time constant parameter $\tau$.
Here $\tau$ was optimized to be 76 ms, which could indicate an approximate
timescale for the frequency features of the textures considered.
Human touch and proprioception provide much richer and more complex data than
that explored here, and most theories suggest that human texture recognition
relies on a complex coding strategy involving inputs from several
mechanoreceptors (Pacinian corpuscles, Merkel cells) [28]. Further studies
with the neuroTac could include methods for simulating input from these
biological afferents, for instance by feeding taxel positions as an input to
spiking neuron models. This would open the path to more complex coding
algorithms to be applied to the sensor’s neuromorphic data which could improve
generalizability and robustness.
## VI Conclusion
We presented a neuromorphic optical tactile sensor and demonstrated its
performance on a texture classification task. Four bio-inspired spike encoding
mechanisms were investigated, which suggested information about texture
coarseness is encoded in the timing of spikes. The neuroTac’s fast spike-based
output could lead to a step forward in the areas of robotic manipulation and
prosthetics.
## References
* [1] K. Zaghloul and K. Boahen. A silicon retina that reproduces signals in the optic nerve. Journal of neural engineering, 3(4):257, 2006.
* [2] R. Li and E. Adelson. Sensing and recognizing surface textures using a gelsight sensor. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1241–1247, 2013.
* [3] Jasper Wollaston James, Nicholas Pestell, and Nathan F Lepora. Slip detection with a biomimetic tactile sensor. IEEE Robotics and Automation Letters, 3(4):3340–3346, 2018.
* [4] S. Liu and T. Delbruck. Neuromorphic sensory systems. Current opinion in neurobiology, 20(3):288–295, 2010.
* [5] M. van Rossum. A novel spike distance. Neural computation, 13(4):751–763, 2001.
* [6] B. Ward-Cherrier, N. Pestell, L. Cramphorn, B. Winstone, M. Giannaccini, J. Rossiter, and N. Lepora. The tactip family: Soft optical tactile sensors with 3d-printed biomimetic morphologies. Soft robotics, 5(2):216–227, 2018.
* [7] R. Dahiya, G. Metta, M. Valle, and G. Sandini. Tactile sensing—from humans to humanoids. Robotics, IEEE Trans. on, 26(1):1–20, 2010.
* [8] R. Johansson and R. Flanagan. Coding and use of tactile signals from the fingertips in object manipulation tasks. Nature Reviews Neuroscience, 10(5):345–359, 2009.
* [9] G. Indiveri, B. Linares-Barranco, T. Hamilton, A. Van Schaik, R. Etienne-Cummings, T. Delbruck, S. Liu, P. Dudek, P. Häfliger, S. Renaud, et al. Neuromorphic silicon neuron circuits. Frontiers in neuroscience, 5:73, 2011.
* [10] S. Raspopovic, M. Capogrosso, F. Petrini, M. Bonizzato, J. Rigosa, G. Di Pino, J. Carpaneto, M. Controzzi, T. Boretius, E. Fernandez, et al. Restoring natural sensory feedback in real-time bidirectional hand prostheses. Science translational medicine, 6(222):222ra19–222ra19, 2014.
* [11] L. Osborn, A. Dragomir, J. Betthauser, C. Hunt, H. Nguyen, R. Kaliki, and N. Thakor. Prosthesis with neuromorphic multilayered e-dermis perceives touch and pain. Science Robotics, 3(19):eaat3818, 2018.
* [12] F. Bergner, P. Mittendorfer, E. Dean-Leon, and G. Cheng. Event-based signaling for reducing required data rates and processing power in a large-scale artificial robotic skin. In 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 2124–2129. IEEE, 2015.
* [13] C. Bartolozzi, P. Ros, F. Diotalevi, N. Jamali, L. Natale, M. Crepaldi, and D. Demarchi. Event-driven encoding of off-the-shelf tactile sensors for compression and latency optimisation for robotic skin. In 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 166–173. IEEE, 2017.
* [14] W. Lee, S. Kukreja, and N. Thakor. A kilohertz kilotaxel tactile sensor array for investigating spatiotemporal features in neuromorphic touch. In 2015 IEEE Biomedical Circuits and Systems Conference (BioCAS), pages 1–4. IEEE, 2015.
* [15] C. Oddo, L. Beccai, M. Felder, F. Giovacchini, and M. Carrozza. Artificial roughness encoding with a bio-inspired mems-based tactile sensor array. Sensors, 9(5):3161–3183, 2009.
* [16] U. Rongala, A. Mazzoni, and C. Oddo. Neuromorphic artificial touch for categorization of naturalistic textures. IEEE transactions on neural networks and learning systems, 28(4):819–829, 2015.
* [17] Fariborz Baghaei Naeini, Aamna Alali, Raghad Al-Husari, Amin Rigi, Mohammad K AlSharman, Dimitrios Makris, and Yahya Zweiri. A novel dynamic-vision-based approach for tactile sensing applications. IEEE Transactions on Instrumentation and Measurement, 2019.
* [18] Nawid Jamali and Claude Sammut. Majority voting: Material classification by tactile sensing using surface texture. IEEE Transactions on Robotics, 27(3):508–521, 2011.
* [19] Alison I Weber, Hannes P Saal, Justin D Lieber, Ju-Wen Cheng, Louise R Manfredi, John F Dammann, and Sliman J Bensmaia. Spatial and temporal codes mediate the tactile perception of natural textures. Proceedings of the National Academy of Sciences, 110(42):17107–17112, 2013.
* [20] Danfei Xu, Gerald E Loeb, and Jeremy A Fishel. Tactile identification of objects using bayesian exploration. In 2013 IEEE International Conference on Robotics and Automation, pages 3056–3061. IEEE, 2013.
* [21] Mandayam A Srinivasan, JM Whitehouse, and Robert H LaMotte. Tactile detection of slip: surface microgeometry and peripheral neural codes. Journal of neurophysiology, 63(6):1323–1332, 1990.
* [22] C. Chorley, C. Melhuish, T. Pipe, and J. Rossiter. Development of a tactile sensor based on biologically inspired edge encoding. In Advanced Robotics, ICAR Int. Conf. on, pages 1–6. IEEE, 2009\.
* [23] P. Lichtsteiner, C. Posch, and T. Delbruck. A 128x128 120 db 15 mu s latency asynchronous temporal contrast vision sensor. IEEE journal of solid-state circuits, 43(2):566–576, 2008.
* [24] C. Brandli, R. Berner, M. Yang, S. Liu, and T. Delbruck. A 240$\times$ 180 130 db 3 $\mu$s latency global shutter spatiotemporal vision sensor. IEEE Journal of Solid-State Circuits, 49(10):2333–2341, 2014.
* [25] Tetsu Miyaoka, Tadaaki Mano, and Masahiro Ohka. Mechanisms of fine-surface-texture discrimination in human tactile sensation. The journal of the acoustical society of America, 105(4):2485–2492, 1999.
* [26] C. Houghton and K. Sen. A new multineuron spike train metric. Neural computation, 20(6):1495–1511, 2008.
* [27] M. Hollins and R. Risner. Evidence for the duplex theory of tactile texture perception. Perception & psychophysics, 62(4):695–705, 2000.
* [28] E. Mackevicius, M. Best, H. Saal, and S. Bensmaia. Millisecond precision spike timing shapes tactile perception. Journal of Neuroscience, 32(44):15309–15317, 2012.
|
2024-09-04T02:54:55.977705 | 2020-03-01T12:40:54 | 2003.00479 | {
"authors": "Lijia Ding and Kai Wang",
"full_text_license": null,
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"provenance": "arxiv-papers-0000.json.gz:25967",
"submitter": "Lj Ding",
"url": "https://arxiv.org/abs/2003.00479"
} | arxiv-papers |
# The $L^{p}$-$L^{q}$ problems of Bergman-type operators
Lijia Ding School of Mathematical Sciences, Peking University, Beijing,
100086, P. R. China School of Mathematical Sciences, Fudan University,
Shanghai, 200433, P. R. China <EMAIL_ADDRESS> and Kai Wang, School of
Mathematical Sciences, Fudan University, Shanghai, 200433, P. R. China
<EMAIL_ADDRESS>
###### Abstract.
Let $\mathbb{B}^{d}$ be the unit ball on the complex space $\mathbb{C}^{d}$
with normalized Lebesgue measure $dv.$ For $\alpha\in\mathbb{R},$ denote
$k_{\alpha}(z,w)=\frac{1}{(1-\langle z,w\rangle)^{\alpha}},$ the Bergman-type
integral operator $K_{\alpha}$ on $L^{1}(\mathbb{B}^{d},dv)$ is defined by
$K_{\alpha}f(z)=\int_{\mathbb{B}^{d}}k_{\alpha}(z,w)f(w)dv(w).$
It is an important class of operators in the holomorphic function space theory
over the unit ball. We also consider the integral operator $K_{\alpha}^{+}$ on
$L^{1}(\mathbb{B}^{d},dv)$ which is given by
$K_{\alpha}^{+}f(z)=\int_{\mathbb{B}^{d}}|k_{\alpha}(z,w)|f(w)dv(w).$
In this paper, we completely characterize the $L^{p}$-$L^{q}$ boundedness of
$K_{\alpha},K_{\alpha}^{+}$ and $L^{p}$-$L^{q}$ compactness of $K_{\alpha}.$
The boundedness results form in fact a Hardy-Littlewood-Sobolev theorem and
also prove the conjecture of [4] in the case of the bounded domain
$\mathbb{B}^{d}.$ Meanwhile, a trace formula and some sharp norm estimates of
$K_{\alpha},K_{\alpha}^{+}$ are given.
###### Key words and phrases:
Bergman projection; Embedding theorem; Compact operator; Hardy-Littlewood-
Sobolev theorem; Norm estimate
###### 2010 Mathematics Subject Classification:
Primary 47G10; Secondary 47A30; 47B05
The first author was partially supported by Fudan University Exchange Program
(2018017). The second author was partially supported by NSFC (11722102), the
Alexander von Humboldt Foundation (1151823), Shanghai Pujiang Program
(16PJ1400600).
## 1\. Introduction
Let $\mathbb{B}^{d}$ be the unit ball on the complex space $\mathbb{C}^{d}$
with the normalized Lebesgue measure $dv.$ For $\alpha\in\mathbb{R},$ denote
$\alpha$-order Bergman-type kernel function $k_{\alpha}(z,w)$ on
$\mathbb{B}^{d}\times\mathbb{B}^{d}$ by
$k_{\alpha}(z,w)=\frac{1}{(1-\langle z,w\rangle)^{\alpha}}.$
Clearly the $(d+1)$-order Bergman-type kernel function $k_{d+1}(z,w)$ is the
standard Bergman kernel on $\mathbb{B}^{d}.$ Denote Bergman-type integral
operator $K_{\alpha}$ on $L^{1}(\mathbb{B}^{d},dv)$ by
$K_{\alpha}f(z)=\int_{\mathbb{B}^{d}}k_{\alpha}(z,w)f(w)dv(w).$
Such operators $K_{\alpha}$ play an important role in complex analysis of
several variables and operator theory; in particular, when $\alpha=d+1,$
$K_{d+1}$ is the standard Bergman projection over the unit ball
$\mathbb{B}^{d}.$ Indeed, for any $\alpha>0,$ if restrict $K_{\alpha}$ to the
holomorphic function space $H(\mathbb{B}^{d}),$ then every $K_{\alpha}$ is a
spacial form of fractional radial differential operator $R^{s,t},$ which is a
kind of very useful operators in the Bergman space theory on the unit ball,
see Lemma 2.8; many key results on Bergman spaces can be deduced from the
fractional radial differential operators, see example for [24, 25]. On the
other hand, the operators $K_{\alpha}$ play a significant role in the
characterization of weighted Bloch spaces and Lipschitz spaces over the unit
ball $\mathbb{B}^{d},$ see [24, 25, 26]. We also consider the kernel integral
operator $K_{\alpha}^{+}$ on $L^{1}(\mathbb{B}^{d},dv),$ which is given by
$K_{\alpha}^{+}f(z)=\int_{\mathbb{B}^{d}}\frac{f(w)}{|1-\langle
z,w\rangle|^{\alpha}}dv(w).$
The operators $K_{\alpha}^{+}$ can be regarded as Riesz potential operators
over the bounded domain $\mathbb{B}^{d}.$ Comparing to the classical Riesz
potential operators over real Euclidian space $\mathbb{R}^{d},$ whose basic
result concerning mapping properties is the Hardy-Littlewood-Sobolev theorem,
see [13, 16, 19, 21] and references therein. For convenience, we write
$L^{p}(\mathbb{B}^{d},dv)$ in the simple form $L^{p}(\mathbb{B}^{d})$ or
$L^{p}$ for any $1\leq p\leq\infty$ without confusion arises. In the present
paper, we mainly concern the $L^{p}$-$L^{q}$ problem for $K_{\alpha}$ and
$K_{\alpha}^{+},$ namely we consider the boundedness and compactness of
$K_{\alpha}$ and $K_{\alpha}^{+},$
$K_{\alpha},K_{\alpha}^{+}:L^{p}\rightarrow L^{q},$
for $1\leq p,q\leq\infty.$ Indeed, the results of $L^{p}$-$L^{q}$ boundedness
are the Hardy-Littlewood-Sobolev theorem with respect to $K_{\alpha}^{+}$ over
the unit ball $\mathbb{B}^{d}.$
Actually, on more general bounded domain $\Omega$ with the normalized Lebesgue
measure $dv$ in $\mathbb{C}^{d},$ the $L^{p}$-$L^{q}$ boundedness for Bergman-
type operators and in particular $L^{p}$-$L^{p}$ boundedness for the standard
Bergman projection has attracted much interest in the past decades; the target
spaces are even Bloch spaces, Lipschitz spaces and Sobolev spaces [12, 18]. As
we all know, it is trivial that the standard Bergman projection $P$ is bounded
for any bounded domain when $p=q=2.$ However, the problem becomes very
complicated for general $1\leq p,q\leq\infty.$ Nevertheless, the known results
show that the answer depends strongly on the properties of the domain $\Omega.$ When
$\Omega$ is a strongly pseudoconvex domain with sufficiently smooth boundary,
then the standard Bergman projection $P:L^{p}(\Omega)\rightarrow
L^{p}(\Omega)$ is bounded for any $1<p<\infty;$ the conclusion is also true
for the Bergman-type integral operators with the order of the kernel function
no more than $d+1;$ we refer the reader to [9, 15, 18] along this line.
Indeed, in the case of the unit ball, the boundedness of more general
Bergman-type integral operators was considered in [9, 23, 24, 25, 26]. However, if
$\Omega$ is a bounded symmetric domain of tube type with rank $\geq 2$, M.
Stein conjectured that the standard Bergman projection
$P:L^{p}(\Omega)\rightarrow L^{p}(\Omega)$ is bounded only when $p$ belongs to
a finite interval around
$p=2;$ we refer the reader to [1, 2] and references therein. Although the
$L^{p}$-$L^{q}$ boundedness for standard Bergman projection $P$ over tube type
domains with rank $\geq 2$ has been considered for a long time, it is still an
open problem.
Now we return to our unit ball setting. In [10], X. Fang and Z. Wang established
a relation between the boundedness of standard Bergman projection and Berezin
transform on the weighted Bergman spaces over the unit disc
$\mathbb{D}=\mathbb{B}^{1}.$ The compactness of standard Bergman projection
$K_{2}:L^{\infty}(\mathbb{D})\rightarrow L^{q}(\mathbb{D})$ for $1\leq
q<\infty$ was observed by K. Zhu in Section 3.6 of [26]. Recently, X. Fang and
G. Cheng et al [4] completely solved the $L^{p}$-$L^{q}$ boundedness problem
of $K_{\alpha}$ over the unit disc $\mathbb{D};$ they also considered the
$L^{p}$-$L^{q}$ boundedness of Bergman-type operator over the upper half plane
$\mathbb{U}=\\{z\in\mathbb{C}:\text{Im}(z)>0\\}.$ Not long afterward, G. Cheng
et al [5] solved the $L^{p}$-$L^{q}$ boundedness problem of $K_{\alpha}$ in
the special case $\alpha=1$ over the unit ball $\mathbb{B}^{d}$ for general
$d\geq 1.$ The main difficulty in the case of high dimensional ball
$\mathbb{B}^{d}$ is how to determine the critical exponents $d+1$ and $d+2,$
see the following theorems. In the present paper we completely describe both
the $L^{p}$-$L^{q}$ boundedness of $K_{\alpha},K_{\alpha}^{+}$ and the
$L^{p}$-$L^{q}$ compactness of $K_{\alpha}$ over the unit ball
$\mathbb{B}^{d}(d\geq 1).$ The boundedness results for $K_{\alpha}$ not only
completely prove the conjecture of [4] but also extend some classical results
[7, 14, 18, 23, 25, 26] in the case of the unit ball; the boundedness results
for $K_{\alpha}^{+}$ are essentially the Hardy-Littlewood-Sobolev theorem as
mentioned before, whereas the compactness results are almost entirely new. Firstly,
it is trivial that $K_{\alpha},K_{\alpha}^{+}:L^{p}\rightarrow L^{q}$ are
compact for any $1\leq p,q\leq\infty$ when $\alpha\leq 0.$ Thus we only
consider the case $\alpha>0.$ The following five theorems are our main results.
Theorem 1. If $d+1<\alpha<d+2,$ then the following conditions are equivalent:
1. (1)
$K_{\alpha}:L^{p}\rightarrow L^{q}$ is bounded;
2. (2)
$K_{\alpha}^{+}:L^{p}\rightarrow L^{q}$ is bounded;
3. (3)
$K_{\alpha}:L^{p}\rightarrow L^{q}$ is compact;
4. (4)
$p,q$ satisfy one of the following inequalities:
1. (a)
$\frac{1}{d+2-\alpha}<p<\infty,\frac{1}{q}>\frac{1}{p}+\alpha-(d+1);$
2. (b)
$p=\infty,q<\frac{1}{\alpha-(d+1)}.$
As a consequence of Theorem 1, the following Hardy-Littlewood-Sobolev
inequality (HLS) is established over the bounded domain $\mathbb{B}^{d}.$
HLS 1. For any $1<p,s<\infty,\frac{1}{s}+\frac{1}{p}+\alpha<d+2$ and
$d+1<\alpha<d+2,$ there exists a constant $C$ which depends only on
$p,\alpha,d,s$ such that
$\left|\int_{\mathbb{B}^{d}}\int_{\mathbb{B}^{d}}\frac{f(w)g(z)}{|1-\langle
z,w\rangle|^{\alpha}}dv(w)dv(z)\right|\leq C\|f\|_{L^{p}}\|g\|_{L^{s}},$ (1.1)
for all $f\in L^{p}(\mathbb{B}^{d}),g\in L^{s}(\mathbb{B}^{d}).$
Theorem 2. If $0<\alpha\leq d+1,$ then the following conditions are
equivalent:
1. (1)
$K_{\alpha}:L^{p}\rightarrow L^{q}$ is bounded;
2. (2)
$K_{\alpha}^{+}:L^{p}\rightarrow L^{q}$ is bounded;
3. (3)
$p,q$ satisfy one of the following inequalities:
1. (a)
$p=1,q<\frac{d+1}{\alpha};$
2. (b)
$1<p<\frac{d+1}{d+1-\alpha},\frac{1}{q}\geq\frac{1}{p}+\frac{\alpha}{d+1}-1;$
3. (c)
$p=\frac{d+1}{d+1-\alpha},q<\infty;$
4. (d)
$\frac{d+1}{d+1-\alpha}<p\leq\infty.$
In particular, $K_{\alpha},K_{\alpha}^{+}:L^{p}\rightarrow L^{p}$ are both
bounded for any $1\leq p\leq\infty$ when $0<\alpha<d+1,$ which is actually a
more precise conclusion than Lemma 5 of [18] in the case of the unit ball.
Although $K_{\alpha},K_{\alpha}^{+}:L^{1}\rightarrow L^{\frac{d+1}{\alpha}}$
are both unbounded under the condition of Theorem 2, it turns out that
$K_{\alpha}$ is of weak type $(1,\frac{d+1}{\alpha}),$ i.e.
$K_{\alpha},K_{\alpha}^{+}:L^{1}\rightarrow L^{\frac{d+1}{\alpha},\infty}$ are
both bounded over $\mathbb{B}^{d};$ see the following Corollary 4.7, which
generalizes the result that the standard Bergman projection is of weak
type (1,1) over some bounded domains [7, 14]. More importantly, Theorem 2
implies the following Hardy-Littlewood-Sobolev inequality over the unit
ball $\mathbb{B}^{d}.$
HLS 2. For any $1<p,s<\infty,\frac{1}{s}+\frac{1}{p}+\frac{\alpha}{d+1}\leq 2$
and $\alpha\leq d+1,$ there exists a constant $C$ that depends only on
$p,\alpha,d,s$ such that (1.1) holds for all $f\in
L^{p}(\mathbb{B}^{d}),g\in L^{s}(\mathbb{B}^{d}).$
Comparing HLS 1 and HLS 2 to the classical Hardy-Littlewood-Sobolev inequality
[13, 16, 19, 21] over $\mathbb{R}^{d},$ it is surprising that HLS 1 is a new
type of Hardy-Littlewood-Sobolev inequality.
Theorem 3. If $0<\alpha\leq d+1,$ then the following conditions are
equivalent:
1. (1)
$K_{\alpha}:L^{p}\rightarrow L^{q}$ is compact;
2. (2)
$p,q$ satisfy one of the following inequalities:
1. (a)
$p=1,q<\frac{d+1}{\alpha};$
2. (b)
$1<p<\frac{d+1}{d+1-\alpha},\frac{1}{q}>\frac{1}{p}+\frac{\alpha}{d+1}-1;$
3. (c)
$p=\frac{d+1}{d+1-\alpha},q<\infty;$
4. (d)
$\frac{d+1}{d+1-\alpha}<p\leq\infty.$
Theorem 4. For $\alpha\in\mathbb{R},$ then the following conditions are
equivalent:
1. (1)
$\alpha<d+2;$
2. (2)
there exist $1\leq p,q\leq\infty$ such that $K_{\alpha}:L^{p}\rightarrow
L^{q}$ is bounded;
3. (3)
there exist $1\leq p,q\leq\infty$ such that $K_{\alpha}^{+}:L^{p}\rightarrow
L^{q}$ is bounded;
4. (4)
there exist $1\leq p,q\leq\infty$ such that $K_{\alpha}:L^{p}\rightarrow
L^{q}$ is compact.
Theorem 5. If $\alpha<\frac{d+2}{2},$ then the following holds.
1. (1)
$K_{\alpha},K_{\alpha}^{+}:L^{2}\rightarrow L^{2}$ are Hilbert-Schmidt.
2. (2)
Moreover, if $d=1$ and $0<\alpha<\frac{3}{2},$ then we have the trace formula,
$Tr(K_{\alpha}^{*}K_{\alpha})=\|K_{2\alpha}^{+}\|_{L^{\infty}\rightarrow
L^{1}}=\frac{1}{(\alpha-1)^{2}}\left(\frac{\Gamma(3-2\alpha)}{\Gamma^{2}(2-\alpha)}-1\right),$
where $\Gamma$ is the usual Gamma function. When $\alpha=1,$ the quantity on
the right side should be interpreted as $\frac{\pi^{2}}{6}.$
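This interpretation can be checked numerically: expanding $\log\Gamma$ around $1$ shows that the right-hand side tends to $\zeta(2)=\pi^{2}/6$ as $\alpha\to 1$, and a short computation with Python's `mpmath` (our own sanity check, not part of the proof) confirms the convergence.

```python
from mpmath import mp, gamma, pi

mp.dps = 30  # working precision (decimal digits)

def rhs(a):
    # Right-hand side of the trace formula for d = 1.
    return (gamma(3 - 2 * a) / gamma(2 - a) ** 2 - 1) / (a - 1) ** 2

for a in ["0.9", "0.99", "0.999", "0.9999"]:
    print(a, rhs(mp.mpf(a)))   # values approach pi^2 / 6 as a -> 1
print(pi ** 2 / 6)             # limiting value 1.6449...
```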
The above theorems show that $K_{\alpha}:L^{p}\rightarrow L^{q}$ is bounded if
and only if $K_{\alpha}^{+}:L^{p}\rightarrow L^{q}$ is bounded. From Theorem
1, it is amazing to know that, when $d+1<\alpha<d+2,$
$K_{\alpha}:L^{p}\rightarrow L^{q}$ is compact if and only if
$K_{\alpha}:L^{p}\rightarrow L^{q}$ is bounded. However, it is very different
when $0<\alpha\leq d+1$ by Theorems 2 and 3. In particular, the standard
Bergman projection $K_{d+1}:L^{p}\rightarrow L^{q}$ is compact if and only if
$1\leq q<p\leq\infty$ over $\mathbb{B}^{d}.$
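As an illustration of how the theorems partition the square of exponents, the following helper (our own, not part of the paper) decides the boundedness of $K_{\alpha}:L^{p}\rightarrow L^{q}$ from the conditions of Theorems 1, 2 and 4, with `math.inf` encoding $p=\infty$ or $q=\infty$.

```python
import math

def k_alpha_bounded(p, q, alpha, d):
    """Boundedness of K_alpha : L^p -> L^q on B^d (Theorems 1, 2, 4)."""
    inv = lambda x: 0.0 if x == math.inf else 1.0 / x
    if alpha <= 0:
        return True                      # trivially bounded
    if alpha >= d + 2:                   # Theorem 4: never bounded
        return False
    if alpha <= d + 1:                   # Theorem 2
        p0 = math.inf if alpha == d + 1 else (d + 1) / (d + 1 - alpha)
        return ((p == 1 and q < (d + 1) / alpha) or
                (1 < p < p0 and inv(q) >= inv(p) + alpha / (d + 1) - 1) or
                (p == p0 and q < math.inf) or
                (p > p0))
    # d + 1 < alpha < d + 2: Theorem 1
    return ((1 / (d + 2 - alpha) < p < math.inf and
             inv(q) > inv(p) + alpha - (d + 1)) or
            (p == math.inf and q < 1 / (alpha - (d + 1))))

print(k_alpha_bounded(2, 2, 2, 1))   # Bergman projection on the disc: True
```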
Let us consider the above boundedness problem from the following viewpoint.
Denote by $G(K_{\alpha})$ the set of $(\frac{1}{p},\frac{1}{q})\in E$ such
that $K_{\alpha}:L^{p}\rightarrow L^{q}$ is bounded, where $E$ is given by
$E=\\{(x,y)\in\mathbb{R}^{2}:0\leq x,y\leq 1\\},$
i.e. $E$ is a unit square in the real plane $\mathbb{R}^{2}.$ Following T.
Tao [22], $G(K_{\alpha})$ is called the type diagram of the operator
$K_{\alpha},$ see Figure 1. By a classical interpolation result, it follows
immediately that every $G(K_{\alpha})$ is convex. The adjointness of
$K_{\alpha}$ implies that $G(K_{\alpha})$ is axisymmetric on the inside of
$E$. Proving the above theorems is equivalent to solving the corresponding type
diagrams. The above theorems show that the type diagram $G(K_{\alpha})$ is
determined by the corresponding inequalities. Conversely, the inequalities in
the above theorem are determined by the type diagram $G(K_{\alpha}).$ The
convexity and axisymmetry of the type diagram will make the solving process
simpler. Similarly, we can define the type diagrams $G(K_{\alpha}^{+})$ for
operators $K_{\alpha}^{+},$ which are also convex and axisymmetric on the
inside of $E$; see Figure 1. Note that $|K_{\alpha}(f)|\leq
K_{\alpha}^{+}(|f|),$ which implies immediately that $G(K_{\alpha}^{+})\subset
G(K_{\alpha}).$ Then, combining several embedding theorems of holomorphic
function spaces with some estimates of the Bergman kernel over the unit ball, we
completely characterize $L^{p}$-$L^{q}$ boundedness and $L^{p}$-$L^{q}$
compactness of $K_{\alpha}.$ The above main theorems show in fact that
$G(K_{\alpha}^{+})=G(K_{\alpha})$ for every $\alpha\in\mathbb{R}.$ After
characterizing the boundedness and compactness of $K_{\alpha},$ by using
hypergeometric function theory and interpolation theory, we give some
sharp norm estimates of $K_{\alpha},K_{\alpha}^{+}.$ In fact, we
estimate the upper bounds of the best constants in the inequalities HLS 1 and
HLS 2.
The results of this paper can be generalized to cover weighted Lebesgue
integrable spaces and more general kernel operators over the unit ball.
Another promising idea is the study of the boundedness of Bergman projection
over the bounded symmetric domains of tube type [2] with rank $\geq 2.$
The paper is organized as follows. In Section 2, we give some basic properties
of the operators $K_{\alpha}.$ In Section 3, we prove Theorem 1. The proof of
Theorem 2 is given in Section 4. In Section 5, we prove Theorem 3 and Theorem
4. Finally, we give some sharp norm estimates of the operators
$K_{\alpha},K_{\alpha}^{+}.$
Figure 1. Type diagrams $G(K_{\alpha}),G(K_{\alpha}^{+}).$
## 2\. Basic properties of $K_{\alpha}$
In this section, we prove some results for later use. We first take a rough
look at the properties of the type diagram $G(K_{\alpha})$ of the operator
$K_{\alpha}.$ We prove that every $G(K_{\alpha})$ is convex and is
axisymmetric on the inside of $E$ as mentioned before. Let $l_{E}$ be the
diagonal line of the square $E$ which connects points $(0,1)$ and $(1,0).$
Clearly $G(K_{\alpha})\subset E$ for any $\alpha\in\mathbb{R}.$
###### Proposition 2.1.
1. (1)
If $G(K_{\alpha})\neq\emptyset,$ then $(0,1)\in G(K_{\alpha})$; if $(1,0)\in
G(K_{\alpha}),$ then $G(K_{\alpha})=E.$
2. (2)
For any $\alpha\in\mathbb{R},$ the type diagram $G(K_{\alpha})$ is convex and
is axisymmetric about $l_{E}$ on the inside of $E.$
###### Proof.
(1) It follows from the continuous embedding of $L$-integrable spaces,
i.e. $L^{p}\subset L^{q}$ whenever $p\geq q.$
(2) To show that $G(K_{\alpha})$ is convex, it suffices to show that if
$(\frac{1}{p_{1}},\frac{1}{q_{1}}),(\frac{1}{p_{2}},\frac{1}{q_{2}})\in
G(K_{\alpha}),$ then
$\theta(\frac{1}{p_{1}},\frac{1}{q_{1}})+(1-\theta)(\frac{1}{p_{2}},\frac{1}{q_{2}})\in
G(K_{\alpha})$ for any $0\leq\theta\leq 1.$ Indeed, it is a direct corollary
of the following Lemma 2.2, a classical complex interpolation result. Now we
turn to the symmetry. By Fubini’s theorem, $K_{\alpha}$ coincides with its own
adjoint under the duality pairing. Then, for $1<p,q<\infty,$ the boundedness of
$K_{\alpha}:L^{p}\rightarrow L^{q}$ is equivalent to the boundedness of
$K_{\alpha}:L^{q^{\prime}}\rightarrow L^{p^{\prime}},$ where
$p^{\prime},q^{\prime}$ are the conjugate numbers of $p,q$, respectively. It
means that $(\frac{1}{p},\frac{1}{q})\in G(K_{\alpha})$ if and only if
$(\frac{1}{q^{\prime}},\frac{1}{p^{\prime}})\in G(K_{\alpha}).$ It is easy to
check that $(\frac{1}{p},\frac{1}{q})$ and
$(\frac{1}{q^{\prime}},\frac{1}{p^{\prime}})$ are symmetric about $l_{E}$ by
the conjugate relationship. ∎
###### Lemma 2.2.
[25] Suppose $1\leq p_{1},p_{2},q_{1},q_{2}\leq\infty.$ If a linear operator
$T$ such that $T:L^{p_{1}}\rightarrow L^{q_{1}}$ is bounded with norm $M_{1}$
and $T:L^{p_{2}}\rightarrow L^{q_{2}}$ is bounded with norm $M_{2}.$ Then
$T:L^{p}\rightarrow L^{q}$ is bounded with norm no more than
$M_{1}^{\theta}M_{2}^{1-\theta},$ if there exists $\theta\in(0,1)$ such that
$\frac{1}{p}=\frac{\theta}{p_{1}}+\frac{1-\theta}{p_{2}},\frac{1}{q}=\frac{\theta}{q_{1}}+\frac{1-\theta}{q_{2}}.$
###### Remark 2.3.
Proposition 2.1 shows that the type diagram $G(K_{\alpha})$ is a bounded
convex set in the plane $\mathbb{R}^{2},$ so to determine $G(K_{\alpha})$ it
suffices to find all extreme points or boundary points of $G(K_{\alpha}).$
The symmetry of $G(K_{\alpha})$ shows that it suffices to determine half of
it. On the other hand, Proposition 2.1 holds for more general domains and
self-adjoint operators.
###### Corollary 2.4.
1. (1)
If $G(K_{\alpha}^{+})\neq\emptyset,$ then $(0,1)\in G(K_{\alpha}^{+})$; if
$(1,0)\in G(K_{\alpha}^{+}),$ then $G(K_{\alpha}^{+})=E.$
2. (2)
For any $\alpha\in\mathbb{R},$ the type diagram $G(K_{\alpha}^{+})$ is convex
and is axisymmetric about $l_{E}$ on the inside of $E.$
###### Corollary 2.5.
If $\alpha\leq 0,$ then $G(K_{\alpha})=G(K_{\alpha}^{+})=E.$
Corollary 2.5 means that $K_{\alpha},K_{\alpha}^{+}:L^{p}\rightarrow L^{q}$
are bounded for any $1\leq p,q\leq\infty$ if $\alpha\leq 0.$ For any
$\beta>-1,$ denote $dv_{\beta}(z)=c_{\beta}(1-|z|^{2})^{\beta}dv(z),$ where
$c_{\beta}=\frac{\Gamma(d+\beta+1)}{\Gamma(d+1)\Gamma(\beta+1)}.$ For $1\leq
p\leq\infty,$ let $A_{\beta}^{p}=H(\mathbb{B}^{d})\cap L^{p}(dv_{\beta})$ be
the weighted Bergman space on $\mathbb{B}^{d};$ in particular,
$A_{\beta}^{\infty}=H^{\infty}$ is just the space of bounded holomorphic
functions. Recall that $K_{d+1}$ is the Bergman projection from $L^{p}$ onto
$A_{0}^{p};$ a well-known result is that $K_{d+1}(L^{p})=A_{0}^{p}$ for
$1<p<\infty.$ Now we establish a general result for $\alpha\geq d+1.$
###### Proposition 2.6.
Suppose that $\alpha\geq d+1$ and $1<p<\infty,$ then
$K_{\alpha}(L^{p})=K_{\alpha}(A_{0}^{p})=A_{p(\alpha-d-1)}^{p}.$
To prove Proposition 2.6, we need some lemmas. The following Lemma 2.7 was
proved in [4] in the case $d=1;$ by the same method, it can be proved in the
general case, see Lemma 11 of [4] for more details.
###### Lemma 2.7.
If $\alpha>0$ and $1<p<\infty,$ then
$K_{\alpha}K_{d+1}=K_{\alpha}~{}\text{on}~{}L^{p}.$
Lemma 2.7 shows that for $1<p,q<\infty$, $K_{\alpha}:L^{p}\rightarrow L^{q}$
is bounded if and only if $K_{\alpha}:A_{0}^{p}\rightarrow A_{0}^{q}$ is
bounded. Now we turn to the behavior of $K_{\alpha}$ on holomorphic function
spaces. We first recall the definition of the fractional radial differential operator
$R^{s,t}$ on $H(\mathbb{B}^{d}).$
For any two real parameters $s$ and $t$ with the property that neither $d+s$
nor $d+s+t$ is a negative integer, the invertible operator $R^{s,t}$ is given
by
$R^{s,t}f(z)=\sum_{n=0}^{\infty}\frac{\Gamma(d+1+s)\Gamma(d+1+n+s+t)}{\Gamma(d+1+s+t)\Gamma(d+1+n+s)}f_{n}(z),$
for any $f=\sum_{n=0}^{\infty}f_{n}\in H(\mathbb{B}^{d})$ with homogeneous
expansion. In fact, it can be checked by direct calculation that the inverse
operator of $R^{s,t}$ is just $R^{s+t,-t}.$ Note that invertibility here is
meant merely in the linear sense on $H(\mathbb{B}^{d}).$
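As a quick sanity check (our addition), the inverse relation can be verified
directly on each homogeneous piece $f_{n}:$ the multiplier of $R^{s+t,-t}$ is
obtained from that of $R^{s,t}$ by the substitution $(s,t)\mapsto(s+t,-t),$
so
$R^{s+t,-t}R^{s,t}f_{n}=\frac{\Gamma(d+1+s+t)\Gamma(d+1+n+s)}{\Gamma(d+1+s)\Gamma(d+1+n+s+t)}\cdot\frac{\Gamma(d+1+s)\Gamma(d+1+n+s+t)}{\Gamma(d+1+s+t)\Gamma(d+1+n+s)}f_{n}=f_{n}.$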
###### Lemma 2.8.
For $\alpha>0$ and $1<p<\infty,$ the following holds on $A_{0}^{p},$
$K_{\alpha}=R^{0,\alpha-d-1}.$
###### Proof.
Suppose $f=\sum_{n=0}^{\infty}f_{n}\in A_{0}^{p}$ with homogeneous expansion.
By direct calculation, we obtain
$K_{\alpha}f=\sum_{n=0}^{\infty}\frac{\Gamma(d+1)\Gamma(\alpha+n)}{\Gamma(\alpha)\Gamma(d+1+n)}f_{n}.$
(2.1)
Comparing the multiplier in (2.1) with the definition of $R^{0,\alpha-d-1}$
yields the desired result. ∎
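As a consistency check (our addition), setting $\alpha=d+1$ in (2.1) gives
the multiplier
$\frac{\Gamma(d+1)\Gamma(d+1+n)}{\Gamma(d+1)\Gamma(d+1+n)}=1,$
so $K_{d+1}$ acts as the identity on $A_{0}^{p},$ in accordance with the fact
that $K_{d+1}$ is the Bergman projection.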
Proof of Proposition 2.6. Lemma 2.7 implies that
$K_{\alpha}(L^{p})=K_{\alpha}(A_{0}^{p}).$ Now we prove
$K_{\alpha}(A_{0}^{p})=A_{p(\alpha-d-1)}^{p}.$ Theorem 14 of [24], a
characterization of Bergman spaces, shows that $f\in A_{0}^{p}$ if and only
if $R^{0,\alpha-d-1}f\in L^{p}(dv_{p(\alpha-d-1)}),$ namely $f\in A_{0}^{p}$
if and only if $R^{0,\alpha-d-1}f\in A_{p(\alpha-d-1)}^{p}.$ Since
$K_{\alpha}=R^{0,\alpha-d-1}$ by Lemma 2.8, it follows that $f\in A_{0}^{p}$
if and only if $K_{\alpha}f\in A_{p(\alpha-d-1)}^{p}.$ This shows that
$K_{\alpha}(A_{0}^{p})\subset A_{p(\alpha-d-1)}^{p}.$ To prove the other
direction, suppose that $g\in A_{p(\alpha-d-1)}^{p}.$ Since
$K_{\alpha}=R^{0,\alpha-d-1}$ is invertible on $H(\mathbb{B}^{d}),$ there
exists $f\in H(\mathbb{B}^{d})$ such that $K_{\alpha}f=R^{0,\alpha-d-1}f=g.$
From Theorem 2.19 of [25], there exists a positive constant $c,$ depending
only on $\alpha,d,p,$ such that
$\|f\|_{L^{p}}\leq c\|g\|_{A_{p(\alpha-d-1)}^{p}}.$
This means that $f\in A_{0}^{p}.$ Thus $A_{p(\alpha-d-1)}^{p}\subset
K_{\alpha}(A_{0}^{p}).$ It completes the proof. ∎
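As another consistency check (our addition), taking $\alpha=d+1$ in
Proposition 2.6 gives
$K_{d+1}(L^{p})=A_{p((d+1)-d-1)}^{p}=A_{0}^{p},$
recovering the well-known identity recalled before the proposition.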
###### Corollary 2.9.
Suppose that $\alpha\geq d+1$ and $1<p<\infty,$ then for any $\gamma>-1,$ the
following holds,
$K_{\alpha}(L^{p}(dv_{\gamma}))=K_{\alpha}(A_{\gamma}^{p})=A_{\gamma+p(\alpha-d-1)}^{p}.$
The following Proposition 2.10 gives the image of $K_{\alpha}$ in the case
$p=\infty.$ Denote by $\mathcal{B}_{\beta}$ the weighted Bloch space on
$\mathbb{B}^{d};$ see Section 7.1 of [25] for the definition.
###### Proposition 2.10.
If $\alpha\geq d+1,$ then $K_{\alpha}(H^{\infty})\subsetneq
K_{\alpha}(L^{\infty})=\mathcal{B}_{\alpha-d}.$
###### Proof.
Note that $K_{\alpha}(L^{\infty})=\mathcal{B}_{\alpha-d}$ by Theorem 7.1 of
[25]. If $\alpha=d+1,$ then $K_{d+1}(H^{\infty})=H^{\infty},$ thus
$K_{d+1}(H^{\infty})\subsetneq\mathcal{B}_{\alpha-d}.$ Now we turn to the case
$\alpha>d+1.$ Note that $K_{\alpha}(H^{\infty})\subset K_{\alpha}(A_{0}^{p})$
for any $1<p<\infty,$ then it implies by Proposition 2.6 that
$K_{\alpha}(H^{\infty})\subset\bigcap_{1<p<\infty}A_{p(\alpha-d-1)}^{p}.$
(2.2)
On the other hand, from Theorem 2.1 of [25], a pointwise estimates for
functions in weighted Bergman spaces, we know that
$A_{\gamma}^{p}\subset\mathcal{B}_{\frac{d+1+\gamma}{p}}.$ (2.3)
Combining (2.2) with (2.3), it follows that
$K_{\alpha}(H^{\infty})\subset\bigcap_{1<p<\infty}\mathcal{B}_{(\alpha-d)+\frac{d+1}{p}-1}.$
Together with the fact that the weighted Bloch spaces are strictly increasing
in the parameter, namely
$\mathcal{B}_{\beta}\subsetneq\mathcal{B}_{\beta^{\prime}}$ whenever
$0<\beta<\beta^{\prime},$ it implies that
$K_{\alpha}(H^{\infty})\subsetneq\mathcal{B}_{\alpha-d}.$ ∎
###### Remark 2.11.
The monotonicity of the weighted Bloch spaces can be obtained as follows. It
is easy to see that the weighted Bloch spaces are increasing, so it suffices
to show that the inclusions are strict. For any $0<\beta<\beta^{\prime},$
there exist $p>1$ and $\varepsilon>0$ such that
$\beta<\beta-1+\frac{d+\varepsilon}{p}<\beta^{\prime}.$
Combining (2.3) and the following Lemma 3.2, it implies that
$\mathcal{B}_{\beta}\subsetneq
A_{p(\beta-1)-1+\varepsilon}^{p}\subset\mathcal{B}_{\beta^{\prime}}.$
## 3\. Proof of Theorem 1
In this section, we prove Theorem 1. We need several embedding theorems for
holomorphic function spaces on the unit ball $\mathbb{B}^{d}.$ For
convenience, we state them with references in place of detailed proofs.
###### Lemma 3.1.
[24] Let $0<q<p<\infty.$ Then $A_{\beta}^{p}\subset A_{\gamma}^{q}$ if and
only if $\frac{\beta+1}{p}<\frac{\gamma+1}{q}.$ In this case the inclusions
are strict.
###### Proof.
See proof of Theorem 70 of [24]. ∎
###### Lemma 3.2.
[17, 24] Suppose that $\beta>0,\gamma>-1,p\geq 1;$ then
$\mathcal{B}_{\beta}\subset A_{\gamma}^{p}$ if and only if
$\beta-1<\frac{1+\gamma}{p}.$ In this case the inclusions are strict.
###### Proof.
See proofs in [17] or Theorem 66 of [24]. ∎
We also need the following lemmas.
###### Lemma 3.3.
If $d+1<\alpha<d+2,$ then $K_{\alpha}:L^{\infty}\rightarrow L^{q}$ is bounded
if and only if $q<\frac{1}{\alpha-(d+1)}.$
###### Proof.
We first show that $K_{\alpha}:L^{\infty}\rightarrow L^{q}$ is bounded if
$q<\frac{1}{\alpha-(d+1)}.$ For $f\in L^{\infty},$ Proposition 1.4.10 of [20]
and Hölder’s inequality imply that
$\begin{split}|K_{\alpha}f(z)|\leq\|f\|_{\infty}\int_{\mathbb{B}^{d}}\frac{1}{|1-\langle
z,w\rangle|^{\alpha}}dv(w)\leq
C_{d,\alpha}\|f\|_{\infty}(1-|z|^{2})^{d+1-\alpha},|z|\rightarrow
1^{-},\end{split}$ (3.1)
where $C_{d,\alpha}$ is a constant. The condition $q<\frac{1}{\alpha-(d+1)}$
means that $q((d+1)-\alpha)>-1.$ Then (3.1) implies that $K_{\alpha}f(z)\in
L^{q}$ and $K_{\alpha}:L^{\infty}\rightarrow L^{q}$ is bounded. Now we turn to
prove that $K_{\alpha}:L^{\infty}\rightarrow L^{q}$ is unbounded if
$q\geq\frac{1}{\alpha-(d+1)}.$ By Hölder’s inequality, it is enough to prove
that $K_{\alpha}:L^{\infty}\rightarrow L^{\frac{1}{\alpha-(d+1)}}$ is
unbounded. It suffices to show that $K_{\alpha}(L^{\infty})\not\subset
L^{\frac{1}{\alpha-(d+1)}}.$ Since
$K_{\alpha}(L^{\infty})=\mathcal{B}_{\alpha-d}$, it suffices to show that
$\mathcal{B}_{\alpha-d}\not\subset A_{0}^{\frac{1}{\alpha-(d+1)}}.$ Indeed,
this follows from Lemma 3.2. ∎
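The blow-up rate in (3.1) can also be checked numerically. The following
sketch (our addition; it relies on the closed form from Lemma 5.2 below and
assumes the mpmath library) tests, for $d=1,$ that
$\int_{\mathbb{B}^{1}}|1-\langle z,w\rangle|^{-\alpha}dv(w)$ grows exactly
like $(1-|z|^{2})^{d+1-\alpha}$ as $|z|\rightarrow 1^{-}:$

```python
# Numerical sanity check of the growth rate (3.1) for d = 1 and d+1 < alpha < d+2.
# By Lemma 5.2 below, int_D dv(w)/|1 - <z,w>|^alpha = 2F1(alpha/2, alpha/2; 2; |z|^2).
from mpmath import mp, hyp2f1, gamma

mp.dps = 30
alpha = mp.mpf("2.5")  # any value in (d+1, d+2) = (2, 3) for d = 1

for r in ["0.9", "0.99", "0.999", "0.9999"]:
    x = mp.mpf(r) ** 2
    # The ratio should stabilize, confirming the rate (1 - |z|^2)^{d+1-alpha}.
    print(r, hyp2f1(alpha / 2, alpha / 2, 2, x) * (1 - x) ** (alpha - 2))

# Expected limit, via Euler's transformation and Gauss summation (Lemma 5.3):
print("limit:", gamma(2) * gamma(alpha - 2) / gamma(alpha / 2) ** 2)
```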
###### Corollary 3.4.
If $d+1<\alpha<d+2,$ then $K_{\alpha}:L^{p}\rightarrow L^{1}$ is bounded if
and only if $p>\frac{1}{(d+2)-\alpha}.$
###### Proof.
First, suppose that $p>\frac{1}{(d+2)-\alpha}.$ From Lemma 3.3 and the fact
that $K_{\alpha}$ is self-adjoint, we know that
$K_{\alpha}:L^{p}\rightarrow(L^{\infty})^{*}$ is bounded if
$p>\frac{1}{(d+2)-\alpha}.$ Proposition 2.6 implies that
$K_{\alpha}(L^{p})=A_{p(\alpha-d-1)}^{p}.$ Since
$\frac{p(\alpha-d-1)+1}{p}<(\alpha-d-1)+(d+2)-\alpha=1,$ it follows by Lemma
3.1 that $A_{p(\alpha-d-1)}^{p}\subset A_{0}^{1}.$ Thus
$K_{\alpha}(L^{p})\subset L^{1}.$ Since $L^{1}\subset(L^{\infty})^{*},$ it
follows that $K_{\alpha}:L^{p}\rightarrow L^{1}$ is bounded.
Conversely, suppose that $K_{\alpha}:L^{p}\rightarrow L^{1}$ is bounded with
$p\neq\infty.$ Then $K_{\alpha}:L^{\infty}\rightarrow L^{p^{\prime}}$ is
bounded, where $p^{\prime}=\frac{p}{p-1}.$ From Lemma 3.3, it follows that
$\frac{p}{p-1}=p^{\prime}<\frac{1}{\alpha-(d+1)},$ which means that
$p>\frac{1}{(d+2)-\alpha}.$ The case $p=\infty$ is trivial by Lemma 3.3. ∎
###### Corollary 3.5.
If $d+1<\alpha<d+2,$ then
1. (1)
$K_{\alpha}^{+}:L^{\infty}\rightarrow L^{q}$ is bounded if and only if
$q<\frac{1}{\alpha-(d+1)};$
2. (2)
$K_{\alpha}^{+}:L^{p}\rightarrow L^{1}$ is bounded if and only if
$p>\frac{1}{(d+2)-\alpha}.$
###### Proof.
(1) For $f\in L^{\infty},$ by Proposition 1.4.10 of [20] and Hölder’s
inequality, it implies that
$\begin{split}|K_{\alpha}^{+}f(z)|\leq\|f\|_{\infty}\int_{\mathbb{B}^{d}}\frac{1}{|1-\langle
z,w\rangle|^{\alpha}}dv(w)\leq
C_{d,\alpha}\|f\|_{\infty}(1-|z|^{2})^{d+1-\alpha},|z|\rightarrow
1^{-},\end{split}$ (3.2)
where $C_{d,\alpha}$ is a constant. So, if $q<\frac{1}{\alpha-(d+1)},$ i.e.
$q((d+1)-\alpha)>-1,$ then (3.2) implies that $K_{\alpha}^{+}f\in L^{q}$ and
$K_{\alpha}^{+}:L^{\infty}\rightarrow L^{q}$ is bounded. This means that
$\\{(0,\frac{1}{q}):\frac{1}{q}>\alpha-(d+1)\\}\subset G(K_{\alpha}^{+}).$
On the other hand, Lemma 3.3 implies that the point $(0,\frac{1}{q})\in
G(K_{\alpha})$ if and only if $\frac{1}{q}>\alpha-(d+1).$ Combining this with
$G(K_{\alpha}^{+})\subset G(K_{\alpha}),$ it follows that
$(0,\frac{1}{q})\in G(K_{\alpha}^{+})$ if and only if
$\frac{1}{q}>\alpha-(d+1).$ This yields the desired result.
(2) The proof is similar to (1). ∎
###### Lemma 3.6.
Suppose that $d+1<\alpha<d+2$ and $\frac{1}{q}\leq\frac{1}{p}+\alpha-(d+1),$
then $K_{\alpha}:L^{p}\rightarrow L^{q}$ is unbounded.
###### Proof.
By the continuous embedding of Lebesgue spaces, it suffices to show that
$K_{\alpha}:L^{p}\rightarrow L^{q}$ is unbounded if
$d+1<\alpha<d+2,\frac{1}{q}=\frac{1}{p}+\alpha-(d+1).$ The cases $p=\infty$
and $q=1$ have been proved in Lemma 3.3 and Corollary 3.4. For the case
$1<p,q<\infty,$ it suffices to show that $K_{\alpha}(L^{p})\not\subset L^{q}.$
On the other hand, Proposition 2.6 shows that
$K_{\alpha}(L^{p})=A_{p(\alpha-d-1)}^{p},$ a holomorphic function space. Thus,
it suffices to show that
$K_{\alpha}(L^{p})=A_{p(\alpha-d-1)}^{p}\not\subset A_{0}^{q}.$ (3.3)
Since $\frac{p(\alpha-d-1)+1}{p}=\frac{1}{q},$ it follows that (3.3) holds by
Lemma 3.1. It completes the proof. ∎
Proof of Theorem 1.
$Step~{}1$. To prove that (1)$\Leftrightarrow$(2)$\Leftrightarrow$(4).
First, we prove that (1) is equivalent to (4). As mentioned before, it is
equivalent to prove that $G(K_{\alpha})$ is exactly the triangular region
$D_{1}\subset E$ determined by the equations in (4) of Theorem 1, namely
$G(K_{\alpha})=D_{1}$. Lemma 3.3, Corollary 3.4 and the convexity of
$G(K_{\alpha})$ imply that $D_{1}\subset G(K_{\alpha}).$ On the other hand,
Lemma 3.6 and the convexity of $G(K_{\alpha})$ imply that $E-D_{1}\subset
E-G(K_{\alpha}),$ and it follows that $G(K_{\alpha})\subset D_{1}.$ Thus
$G(K_{\alpha})=D_{1}.$ Now we turn to prove that (2) is equivalent to (4),
which amounts to proving that $G(K_{\alpha}^{+})=D_{1}.$ Corollary 3.5 and
the convexity of $G(K_{\alpha}^{+})$ imply that $D_{1}\subset
G(K_{\alpha}^{+}).$ Combining this with the fact that
$G(K_{\alpha}^{+})\subset G(K_{\alpha})=D_{1},$ we get
$G(K_{\alpha}^{+})=D_{1}.$ It completes the proof.
$Step~{}2$. To prove that (1)$\Leftrightarrow$(3).
Since compact operators must be bounded, it suffices to prove that
$K_{\alpha}:L^{p}\rightarrow L^{q}~{}\text{is compact, if
}(\frac{1}{p},\frac{1}{q})\in G(K_{\alpha}).$
We first prove the following claim.
$Claim:$ $K_{\alpha}:L^{\infty}\rightarrow L^{q}$ is compact if and only if
$q<\frac{1}{\alpha-(d+1)}.$
If $K_{\alpha}:L^{\infty}\rightarrow L^{q}$ is compact, then it is bounded,
and it is immediate from Lemma 3.3 that $q<\frac{1}{\alpha-(d+1)}.$ Now we
prove the converse, that is, that $K_{\alpha}:L^{\infty}\rightarrow L^{q}$ is
compact if $q<\frac{1}{\alpha-(d+1)}.$ We need to show that for any bounded
sequence in $L^{\infty}$, there is a subsequence whose image under
$K_{\alpha}$ converges in $L^{q}.$ Suppose that $\\{f_{n}\\}\subset
L^{\infty}$ is an arbitrary bounded sequence and $K$ is an arbitrary compact
subset of $\mathbb{B}^{d}.$ Moreover, we assume that $\|f_{n}\|_{\infty}\leq
C$ for any $n\geq 1,$ where $C$ is a positive constant. Then we obtain
$\begin{split}\sup_{z\in
K}|K_{\alpha}f_{n}(z)|&\leq\|f_{n}\|_{\infty}\sup_{z\in
K}\int_{\mathbb{B}^{d}}\frac{1}{|1-\langle z,w\rangle|^{\alpha}}dv(w)\\\
&\leq\|f_{n}\|_{\infty}\sup_{z\in
K}\frac{1}{(1-|z|)^{\alpha}}<\infty.\end{split}$
Combining this with the fact that each $K_{\alpha}f_{n}$ is holomorphic, it
follows that $\\{K_{\alpha}f_{n}\\}$ is a normal family. Hence
$\\{f_{n}\\}$ has a
subsequence $\\{f_{n_{j}}\\}$ such that $K_{\alpha}f_{n_{j}}$ converges
uniformly on compact subsets of $\mathbb{B}^{d}$ to a holomorphic function
$g$. By Fatou’s Lemma and boundedness of $K_{\alpha}$, it follows that
$\int_{\mathbb{B}^{d}}|g|^{q}dv\leq\varliminf_{j\rightarrow\infty}\int_{\mathbb{B}^{d}}|K_{\alpha}f_{n_{j}}|^{q}dv\leq\|K_{\alpha}\|_{L^{\infty}\rightarrow
L^{q}}^{q}\varliminf_{j\rightarrow\infty}\|f_{n_{j}}\|_{\infty}^{q}<\infty.$
(3.4)
This means that $g\in L^{q}.$ Now we prove that there exists a positive
function $g_{1}\in L^{q}$ such that $|K_{\alpha}f_{n_{j}}|\leq g_{1}.$ We
first observe
that Proposition 1.4.10 of [20] and the condition $q<\frac{1}{\alpha-(d+1)}$
imply that
$\left(\int_{\mathbb{B}^{d}}\frac{1}{|1-\langle
z,w\rangle|^{\alpha}}dv(w)\right)^{q}\in L^{1}.$
Then a direct estimate yields that
$\begin{split}|K_{\alpha}f_{n_{j}}(z)|&\leq\|f_{n_{j}}\|_{\infty}\int_{\mathbb{B}^{d}}\frac{1}{|1-\langle
z,w\rangle|^{\alpha}}dv(w)\\\ &\leq C\int_{\mathbb{B}^{d}}\frac{1}{|1-\langle
z,w\rangle|^{\alpha}}dv(w).\\\ \end{split}$ (3.5)
Thus (3.5) shows that it is enough to take
$g_{1}=C\int_{\mathbb{B}^{d}}\frac{1}{|1-\langle z,w\rangle|^{\alpha}}dv(w).$
Combining (3.4) with (3.5), it implies that
$|K_{\alpha}f_{n_{j}}-g|^{q}\leq(g_{1}+|g|)^{q}\in L^{1},\forall j\geq 1.$
By the dominated convergence theorem, it gives that
$\lim_{j\rightarrow\infty}\|K_{\alpha}f_{n_{j}}-g\|_{q}=\lim_{j\rightarrow\infty}\left(\int_{\mathbb{B}^{d}}|K_{\alpha}f_{n_{j}}-g|^{q}dv\right)^{\frac{1}{q}}=\left(\int_{\mathbb{B}^{d}}\lim_{j\rightarrow\infty}|K_{\alpha}f_{n_{j}}-g|^{q}dv\right)^{\frac{1}{q}}=0,$
which completes the proof of the claim.
Combining the last claim with the fact that an operator is compact if and
only if its adjoint operator is compact, we get that
$K_{\alpha}:L^{p}\rightarrow L^{1}$ is compact if and only if
$p>\frac{1}{d+2-\alpha}.$ Then by the following Lemma 3.7, an interpolation
result for compact operators, it implies that $K_{\alpha}:L^{p}\rightarrow
L^{q}~{}\text{is compact if }(\frac{1}{p},\frac{1}{q})\in
G(K_{\alpha}).$ ∎
###### Lemma 3.7.
[6, 11] Suppose that $1\leq p_{1},p_{2},q_{1},q_{2}\leq\infty$ and
$q_{1}\neq\infty.$ If a linear operator $T$ is such that
$T:L^{p_{1}}\rightarrow L^{q_{1}}$ is bounded and $T:L^{p_{2}}\rightarrow
L^{q_{2}}$ is compact, then $T:L^{p}\rightarrow L^{q}$ is compact, provided
there exists $\theta\in(0,1)$ such that
$\frac{1}{p}=\frac{\theta}{p_{1}}+\frac{1-\theta}{p_{2}},\frac{1}{q}=\frac{\theta}{q_{1}}+\frac{1-\theta}{q_{2}}.$
###### Remark 3.8.
The compactness of $K_{\alpha}:L^{p}\rightarrow L^{q}$ for $1<p,q<\infty$ can
also be proved by the Carleson type measure theory on Bergman spaces; see
[24, 25] for the definitions. This strategy will be adopted under appropriate
circumstances in Section 5.
## 4\. Proof of Theorem 2
In this section we give the proof of Theorem 2. We first establish several
lemmas. Denote $k_{\alpha}(z,w)=\frac{1}{(1-\langle
z,w\rangle)^{\alpha}},k_{\alpha}^{+}(z,w)=\frac{1}{|1-\langle
z,w\rangle|^{\alpha}},z,w\in\mathbb{B}^{d}.$ Then $k_{\alpha},k_{\alpha}^{+}$
are the integral kernels of the operators $K_{\alpha},K_{\alpha}^{+},$
respectively.
###### Lemma 4.1.
If $0<\alpha\leq d+1,$ then
1. (1)
$K_{\alpha}:L^{1}\rightarrow L^{q}$ is bounded if and only if
$q<\frac{d+1}{\alpha};$
2. (2)
$K_{\alpha}^{+}:L^{1}\rightarrow L^{q}$ is bounded if and only if
$q<\frac{d+1}{\alpha}.$
###### Proof.
(1) From Proposition 5.2 of [22], we know that
$\|K_{\alpha}\|_{L^{1}\rightarrow
L^{q}}=\sup_{z\in\mathbb{B}^{d}}\|k_{\alpha}(z,\cdot)\|_{L^{q}}=\sup_{z\in\mathbb{B}^{d}}\left(\int\frac{dv(w)}{|1-\langle
z,w\rangle|^{q\alpha}}\right)^{\frac{1}{q}}.$
Then, combining this with Proposition 1.4.10 of [20], we see that
$\|K_{\alpha}\|_{L^{1}\rightarrow L^{q}}<\infty$ is equivalent to
$q\alpha<d+1.$ This yields the desired result.
(2) It is similar to (1). ∎
Dually, we have the following lemma.
###### Lemma 4.2.
If $0<\alpha\leq d+1,$ then
1. (1)
$K_{\alpha}:L^{p}\rightarrow L^{\infty}$ is bounded if and only if
$p>\frac{d+1}{(d+1)-\alpha};$
2. (2)
$K_{\alpha}^{+}:L^{p}\rightarrow L^{\infty}$ is bounded if and only if
$p>\frac{d+1}{(d+1)-\alpha}.$
###### Proof.
(1) From Proposition 5.4 of [22], we know that
$\|K_{\alpha}\|_{L^{p}\rightarrow
L^{\infty}}=\sup_{z\in\mathbb{B}^{d}}\|k_{\alpha}(z,\cdot)\|_{L^{p^{\prime}}}=\sup_{z\in\mathbb{B}^{d}}\left(\int\frac{dv(w)}{|1-\langle
z,w\rangle|^{\frac{p\alpha}{p-1}}}\right)^{\frac{p-1}{p}}.$ (4.1)
Then, combining this with Proposition 1.4.10 of [20], we see that
$\|K_{\alpha}\|_{L^{p}\rightarrow L^{\infty}}<\infty$ is equivalent to
$\frac{p\alpha}{p-1}<d+1.$ This yields the desired result.
(2) It is similar to (1). ∎
###### Lemma 4.3.
If $1<p<\frac{d+1}{d+1-\alpha},$ then $K_{\alpha}:L^{p}\rightarrow L^{q}$ is
bounded if and only if $\frac{1}{q}\geq\frac{1}{p}+\frac{\alpha}{d+1}-1.$
Before proving Lemma 4.3, we make some preparations. For $p\geq 1$, denote by
$L^{p,\infty}$ the weak (Lorentz) space on $\mathbb{B}^{d},$
$L^{p,\infty}=\\{f:\sup_{\lambda>0}\lambda
d_{f}^{\frac{1}{p}}(\lambda)<\infty\\},$
where $d_{f}(\lambda)=v\\{z\in\mathbb{B}^{d}:|f(z)|>\lambda\\}.$ Note that
$L^{p,\infty}\subset L^{q,\infty}$ if $p>q,$ and the inclusion is continuous.
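For orientation, here is the standard one-dimensional example (our addition,
not needed in the sequel): on $(0,1)$ with Lebesgue measure, the function
$f(t)=t^{-\frac{1}{p}}$ satisfies
$d_{f}(\lambda)=\min(1,\lambda^{-p}),\qquad\sup_{\lambda>0}\lambda
d_{f}^{\frac{1}{p}}(\lambda)=1,$
so $f\in L^{p,\infty}$ although
$\int_{0}^{1}|f(t)|^{p}dt=\int_{0}^{1}t^{-1}dt=\infty,$ i.e. $f\notin L^{p}.$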
###### Lemma 4.4.
There exists a constant $C$ that only depends on $\alpha$ and $d$ such that,
for every $z\in\mathbb{B}^{d},$
$\|k_{\alpha}(z,\cdot)\|_{L^{\frac{d+1}{\alpha},\infty}}=\|k_{\alpha}(\cdot,z)\|_{L^{\frac{d+1}{\alpha},\infty}}<C.$
###### Proof.
By the unitary invariance of Lebesgue measure, we only need to consider the
case of $z=(|z|,0,\cdots,0).$ Note that
$d_{k_{\alpha}(\cdot,z)}(\lambda)=v\\{w\in\mathbb{B}^{d}:\frac{1}{|1-\langle
w,z\rangle|^{\alpha}}>\lambda\\}=v\\{w:|\frac{1}{|z|}-w_{1}|<\frac{1}{|z|}\lambda^{-\frac{1}{\alpha}}\\}$
(4.2)
If $|z|<\frac{1}{2},$ then $\frac{1}{|1-\langle
w,z\rangle|^{\alpha}}<2^{\alpha}.$ It follows that
$d_{k_{\alpha}(\cdot,z)}(\lambda)=0,$ if $\lambda\geq 2^{\alpha}.$ Thus
$\|k_{\alpha}(\cdot,z)\|_{L^{\frac{d+1}{\alpha},\infty}}\leq 2^{\alpha}.$
Now we turn to the case $\frac{1}{2}\leq|z|<1.$ The conclusion follows
immediately from the following estimate:
$\lambda
d_{k_{\alpha}(\cdot,z)}^{\frac{\alpha}{d+1}}(\lambda)\leq\begin{cases}1,&\lambda\leq
1,\\\ (d\cdot
2^{3d-1})^{\frac{\alpha}{d+1}},&1<\lambda<\frac{1}{(1-|z|)^{\alpha}},\\\
0,&\lambda\geq\frac{1}{(1-|z|)^{\alpha}}.\end{cases}$ (4.3)
Now we prove (4.3). Let $dV(w)=(\frac{i}{2})^{d}\prod_{n=1}^{d}dw_{n}\wedge
d\bar{w}_{n}.$ Then $dV=\frac{\pi^{d}}{\Gamma(d+1)}dv.$ If $\lambda\leq 1,$
then $\lambda
d_{k_{\alpha}(\cdot,z)}^{\frac{\alpha}{d+1}}(\lambda)\leq\lambda\leq 1,$
since $d_{k_{\alpha}(\cdot,z)}(\lambda)\leq v(\mathbb{B}^{d})=1.$ Denote by
$I$ the subset of the unit disc such that
$\begin{split}I=\\{w_{1}\in\mathbb{D}:|\frac{1}{|z|}-w_{1}|<\frac{1}{|z|}\lambda^{-\frac{1}{\alpha}}\\}.\end{split}$
(4.4)
When $1<\lambda<\frac{1}{(1-|z|)^{\alpha}},$ by (4.2) and Fubini’s theorem, we
have that
$\begin{split}d_{k_{\alpha}(\cdot,z)}(\lambda)&=v\\{w:|\frac{1}{|z|}-w_{1}|<\frac{1}{|z|}\lambda^{-\frac{1}{\alpha}}\\}\\\
&\leq\frac{\Gamma(d+1)}{\pi^{d}}(\frac{i}{2})^{d}\int_{I}dw_{1}\wedge
d\bar{w}_{1}\int_{|w_{2}|^{2}+\cdots+|w_{d}|^{2}<1-|w_{1}|^{2}}\prod_{n=2}^{d}dw_{n}\wedge
d\bar{w}_{n}\\\ &=d\int_{I}(1-|w_{1}|^{2})^{d-1}dv(w_{1})\\\
&<d(1-\frac{1}{|z|^{2}}+2\frac{1}{|z|^{2}}\frac{1}{\lambda^{\frac{1}{\alpha}}}-\frac{1}{|z|^{2}\lambda^{\frac{2}{\alpha}}})^{d-1}\int_{I}dv(w_{1})\\\
&<d\cdot
2^{3d-3}\frac{1}{\lambda^{\frac{d-1}{\alpha}}}\frac{4}{\lambda^{\frac{2}{\alpha}}}\\\
&=\frac{d\cdot 2^{3d-1}}{\lambda^{\frac{d+1}{\alpha}}}\\\ \end{split}$ (4.5)
Then (4.5) implies that $\lambda
d_{k_{\alpha}(\cdot,z)}^{\frac{\alpha}{d+1}}(\lambda)<(d\cdot
2^{3d-1})^{\frac{\alpha}{d+1}}$ if $1<\lambda<\frac{1}{(1-|z|)^{\alpha}}.$
When $\lambda\geq\frac{1}{(1-|z|)^{\alpha}},$ it is easy to see that
$d_{k_{\alpha}(\cdot,z)}(\lambda)=0.$ So $\lambda
d_{k_{\alpha}(\cdot,z)}^{\frac{\alpha}{d+1}}(\lambda)=0,$ if
$\lambda\geq\frac{1}{(1-|z|)^{\alpha}}.$∎
###### Corollary 4.5.
There exists a constant $C$ that only depends on $\alpha$ and $d$ such that,
for every $z\in\mathbb{B}^{d},$
$\|k_{\alpha}^{+}(z,\cdot)\|_{L^{\frac{d+1}{\alpha},\infty}}=\|k_{\alpha}^{+}(\cdot,z)\|_{L^{\frac{d+1}{\alpha},\infty}}<C.$
Now we modify Proposition 6.1 of [22] to suit our setting.
###### Lemma 4.6.
[22] Suppose that $k:\mathbb{B}^{d}\times\mathbb{B}^{d}\rightarrow\mathbb{C}$
is measurable such that
$\|k(z,\cdot)\|_{L^{r,\infty}}\leq C,z\in\mathbb{B}^{d},a.e.$
and
$\|k(\cdot,w)\|_{L^{r,\infty}}\leq C,w\in\mathbb{B}^{d},a.e.$
for some $1<r<\infty$ and $C>0.$ Then the operator $T$ defined as
$Tf(z)=\int_{\mathbb{B}^{d}}k(z,w)f(w)dv(w)$
is bounded from $L^{1}$ to $L^{r,\infty}.$ Moreover, if $1<p<q<\infty$ such
that $\frac{1}{p}+\frac{1}{r}=\frac{1}{q}+1,$ then $T$ is bounded from $L^{p}$
to $L^{q}.$
###### Corollary 4.7.
If $0<\alpha\leq d+1,$ then $K_{\alpha},K_{\alpha}^{+}:L^{1}\rightarrow
L^{\frac{d+1}{\alpha},\infty}$ are bounded.
###### Proof.
When $\alpha=d+1,$ $K_{d+1}$ is the Bergman projection, and
$K_{d+1}:L^{1}\rightarrow L^{1,\infty}$ is bounded by the proof of Theorem 6
of [14]. Indeed, similar to the proof of Theorem 6 of [14], by the Calderón-
Zygmund decomposition, it can be proved that $K_{d+1}^{+}:L^{1}\rightarrow
L^{1,\infty}$ is bounded. When $0<\alpha<d+1,$ Lemma 4.4 and Lemma 4.6 imply
that $K_{\alpha},K_{\alpha}^{+}:L^{1}\rightarrow
L^{\frac{d+1}{\alpha},\infty}$ are bounded. It completes the proof. ∎
Sufficiency part of Lemma 4.3. We need to prove that if
$1<p<\frac{d+1}{d+1-\alpha}$ and
$\frac{1}{q}\geq\frac{1}{p}+\frac{\alpha}{d+1}-1,$ then
$K_{\alpha}:L^{p}\rightarrow L^{q}$ is bounded. By the continuous embedding
of Lebesgue spaces, it suffices to show that $K_{\alpha}:L^{p}\rightarrow
L^{q}$ is bounded if $\frac{1}{q}=\frac{1}{p}+\frac{\alpha}{d+1}-1.$ Taking
$r=\frac{d+1}{\alpha}$ in Lemma 4.6, which is justified by Lemma 4.4, the
relation $\frac{1}{p}+\frac{1}{r}=\frac{1}{q}+1$ becomes exactly
$\frac{1}{q}=\frac{1}{p}+\frac{\alpha}{d+1}-1,$ and hence
$K_{\alpha}:L^{p}\rightarrow L^{q}$ is bounded. ∎
###### Corollary 4.8.
If $1<p<\frac{d+1}{d+1-\alpha},$ then $K_{\alpha}^{+}:L^{p}\rightarrow L^{q}$
is bounded if $\frac{1}{q}\geq\frac{1}{p}+\frac{\alpha}{d+1}-1.$
For the necessity part of Lemma 4.3, we need to find a function that belongs
to $L^{p}$ whose image under $K_{\alpha}$ is not in $L^{q}.$ To this end, we
establish an isometry from $L^{p}(\mathbb{D},dv_{d-1})$ to
$L^{p}(\mathbb{B}^{d}),$ where $\mathbb{D}$ is the unit disc, i.e.
$\mathbb{D}=\mathbb{B}^{1}.$ Let
$I_{p}:L^{p}(\mathbb{D},dv_{d-1})\rightarrow
L^{p}(\mathbb{B}^{d}),I_{p}(f)(z)=f(z_{1}).$
Denote $L_{1}^{p}(\mathbb{B}^{d})=\\{f\in
L^{p}(\mathbb{B}^{d}):f(z)=f(z_{1},0,\cdots,0),\forall
z\in\mathbb{B}^{d}\\}.$ If $f\in L_{1}^{p}(\mathbb{B}^{d})$, we always write
$f(z_{1})$ without ambiguity. Denote by $A_{0,1}^{p}$ the set of holomorphic
functions in $L_{1}^{p}(\mathbb{B}^{d}).$ Then we have the following lemma.
###### Lemma 4.9.
$I_{p}$ is an isometry from $A_{d-1}^{p}(\mathbb{D})$ onto
$A_{0,1}^{p}(\mathbb{B}^{d}).$
###### Proof.
Suppose that $f(z_{1})\in L_{1}^{p}(\mathbb{B}^{d}),$ then
$\begin{split}\|f\|_{L_{1}^{p}(\mathbb{B}^{d})}^{p}&=\int_{\mathbb{B}^{d}}|f(z_{1})|^{p}dv\\\
&=\frac{\Gamma(d+1)}{\pi^{d}}(\frac{i}{2})^{d}\int_{\mathbb{D}}|f(z_{1})|^{p}dz_{1}\wedge
d\bar{z}_{1}\int_{|z_{2}|^{2}+\cdots+|z_{d}|^{2}<1-|z_{1}|^{2}}\prod_{n=2}^{d}dz_{n}\wedge
d\bar{z}_{n}\\\
&=d\int_{\mathbb{D}}|f(z_{1})|^{p}(1-|z_{1}|^{2})^{d-1}dv(z_{1})\\\
&=\|f\|_{L^{p}(\mathbb{D},dv_{d-1})}^{p}.\\\ \end{split}$ (4.6)
It leads to the desired result. ∎
###### Corollary 4.10.
$A_{0,1}^{p}(\mathbb{B}^{d})\simeq A_{d-1}^{p}(\mathbb{D}).$
###### Lemma 4.11.
Suppose that $t\in\mathbb{R},$ then
$f(z_{1})=\sum_{n=1}^{\infty}n^{t}z_{1}^{n}\in A_{0,1}^{p}(\mathbb{B}^{d})$ if
and only if $p(t+1)<d+1.$
###### Proof.
From Corollary 3.5 of [3], Corollary 4.10 and Proposition 1.4.10 of [20], and
noting that $\frac{\Gamma(n+1+t)}{\Gamma(n+1)\Gamma(t+1)}\asymp n^{t}$ as
$n\rightarrow\infty$ by Stirling’s formula, it follows that
$f(z_{1})=\sum_{n=1}^{\infty}n^{t}z_{1}^{n}\in
A_{0,1}^{p}(\mathbb{B}^{d})\simeq A_{d-1}^{p}(\mathbb{D})$ if and only if
$\sum_{n=1}^{\infty}\frac{\Gamma(n+1+t)}{\Gamma(n+1)\Gamma(t+1)}z_{1}^{n}=\frac{1}{(1-z_{1})^{t+1}}\in
L_{a}^{p}(\mathbb{D},dv_{d-1})$ if and only if $p(t+1)<2+(d-1)=d+1.$ ∎
Necessity part of Lemma 4.3. We need to prove that if
$1<p<\frac{d+1}{d+1-\alpha}$ and
$\frac{1}{q}<\frac{1}{p}+\frac{\alpha}{d+1}-1,$ then
$K_{\alpha}:L^{p}\rightarrow L^{q}$ is unbounded. Assume, to the contrary,
that $K_{\alpha}:L^{p}\rightarrow L^{q}$ is bounded; this is equivalent to
the boundedness of $K_{\alpha}:A_{0}^{p}\rightarrow A_{0}^{q}.$ Then
$K_{\alpha}(A_{0}^{p})\subset A_{0}^{q}.$ Choose any $t$ such that
$\frac{d+1}{q}+d-\alpha<t<\frac{d+1}{p}-1,$ (4.7)
and denote $f_{t}(z)=\sum_{n=1}^{\infty}n^{t}z_{1}^{n};$ then condition (4.7)
and Lemma 4.11 imply that $f_{t}\in A_{0}^{p}.$ By the boundedness
assumption, $K_{\alpha}f_{t}\in A_{0}^{q}.$ Then
$\begin{split}K_{\alpha}f_{t}(z)&=\sum_{n=1}^{\infty}\frac{\Gamma(n+\alpha)\Gamma(d+1)}{\Gamma(n+d+1)\Gamma(\alpha)}n^{t}z_{1}^{n}\\\
&=\frac{\Gamma(d+1)}{\Gamma(\alpha)\Gamma(d+1-\alpha)}\sum_{n=1}^{\infty}\frac{\Gamma(n+\alpha)\Gamma(d+1-\alpha)}{\Gamma(n+d+1)}n^{t}z_{1}^{n}\\\
&=\frac{\Gamma(d+1)}{\Gamma(\alpha)\Gamma(d+1-\alpha)}\sum_{n=1}^{\infty}B(n+\alpha,d+1-\alpha)n^{t}z_{1}^{n}\\\
&\in A_{0,1}^{q}(\mathbb{B}^{d})\simeq A_{d-1}^{q}(\mathbb{D}),\\\
\end{split}$ (4.8)
where $B(\cdot,\cdot)$ is the Beta function. On the other hand, by Lemma 3.2
of [3], and arguing as in the proof of Lemma 3.4 of [3], it can be proved
that
$\sum_{n=1}^{\infty}\frac{n^{\alpha-d-1}}{B(n+\alpha,d+1-\alpha)}a_{n}z^{n}\in
A_{d-1}^{q}(\mathbb{D}),\forall\sum_{n=1}^{\infty}a_{n}z^{n}\in
A_{d-1}^{q}(\mathbb{D}).$ (4.9)
Combining (4.8) with (4.9), it implies that $\sum
n^{\alpha-d-1}n^{t}z_{1}^{n}\in A_{d-1}^{q}(\mathbb{D})\simeq
A_{0,1}^{q}(\mathbb{B}^{d}).$ So, by Lemma 4.11, it follows that
$q(\alpha-d+t)<d+1,$ namely $t<\frac{d+1}{q}+d-\alpha,$ contradicting the
condition $\frac{d+1}{q}+d-\alpha<t.$ It completes the proof. ∎
Proof of Theorem 2. First, we prove that (1) is equivalent to (3). Denote by
$D_{2}$ the region determined by the equations in (3) of Theorem 2; it is
equivalent to prove that $G(K_{\alpha})=D_{2}.$ Lemma 4.1, Lemma 4.2, Lemma
4.3 and the convexity of $G(K_{\alpha})$ imply that $G(K_{\alpha})=D_{2}.$
Lemma 4.1, Lemma 4.2, Corollary 4.8 and the convexity of $G(K_{\alpha}^{+})$
imply that $D_{2}\subset G(K_{\alpha}^{+}).$ From the above we know that
$G(K_{\alpha}^{+})\subset G(K_{\alpha})=D_{2}.$ Then
$G(K_{\alpha}^{+})=D_{2}.$ It completes the proof. ∎
## 5\. Proofs of Theorem 3 and Theorem 4
In Section 4, we completely characterized the $L^{p}$-$L^{q}$ boundedness of
$K_{\alpha},K_{\alpha}^{+}$ in the case $0<\alpha\leq d+1.$ In the present
section, we characterize completely the $L^{p}$-$L^{q}$ compactness of
$K_{\alpha}$ when $0<\alpha\leq d+1.$ This amounts to determining the set
$F(K_{\alpha}),$ where $F(K_{\alpha})$ is defined by
$F(K_{\alpha})=\\{(\frac{1}{p},\frac{1}{q})\in E:K_{\alpha}:L^{p}\rightarrow
L^{q}~{}\text{is compact}\\}.$
It is easy to see that $F(K_{\alpha})$ is a subset of $G(K_{\alpha}).$
Theorem 3 in fact shows that $F(K_{\alpha})$ and $G(K_{\alpha})$ differ only
by a segment on the boundary of $G(K_{\alpha}).$ Thus we always show first
that $K_{\alpha}$ is compact on the remaining part of the boundary of
$G(K_{\alpha}).$ At the end of this section, we give the proof of Theorem 4.
###### Proposition 5.1.
$K_{d+1}:L^{p}\rightarrow L^{q}$ is compact if and only if $1\leq
q<p\leq\infty.$
###### Proof.
From Theorem 2, we know that $K_{d+1}:L^{p}\rightarrow L^{q}$ is bounded if
and only if $q\leq p.$ Since $K_{d+1}$ is the standard Bergman projection, it
is easy to see that $K_{d+1}:L^{p}\rightarrow L^{p}$ is not compact for any
$1<p<\infty.$ Thus it suffices to show that $K_{d+1}:L^{p}\rightarrow L^{q}$
is compact if $q<p.$ Indeed, this can be proved by a method similar to the
one used in Step 2 of the proof of Theorem 1; we omit the details. ∎
Now, we recall some results from hypergeometric function theory for later
use. For complex numbers $\alpha,\beta,\gamma$ and a complex variable $z,$ we
use the classical notation
$\tensor[_{2}]{F}{{}_{1}}(\alpha,\beta;\gamma;z)$ to denote
$\tensor[_{2}]{F}{{}_{1}}(\alpha,\beta;\gamma;z)=\sum_{j=0}^{\infty}\frac{(\alpha)_{j}(\beta)_{j}}{j!(\gamma)_{j}}z^{j},$
with $\gamma\neq 0,-1,-2,\ldots,$ where
$(\alpha)_{j}=\Pi_{k=0}^{j-1}(\alpha+k)$ is the Pochhammer symbol of a
complex number $\alpha.$ The following lemma is in fact a restatement of
Proposition 1.4.10 of [20].
###### Lemma 5.2.
[20] Suppose $\beta\in\mathbb{R}$ and $\gamma>-1,$ then
$\int_{\mathbb{B}^{d}}\frac{(1-|w|^{2})^{\gamma}}{|1-\langle
z,w\rangle|^{2\beta}}dv(w)=\frac{\Gamma(1+d)\Gamma(1+\gamma)}{\Gamma(1+d+\gamma)}\tensor[_{2}]{F}{{}_{1}}(\beta,\beta;1+d+\gamma;|z|^{2}).$
We also need the following lemma.
###### Lemma 5.3.
[8, Chapter 2] The following three identities hold.
1. (1)
$\tensor[_{2}]{F}{{}_{1}}(\alpha,\beta;\gamma;z)=(1-z)^{\gamma-\alpha-\beta}\tensor[_{2}]{F}{{}_{1}}(\gamma-\alpha,\gamma-\beta;\gamma;z);$
2. (2)
$\tensor[_{2}]{F}{{}_{1}}(\alpha,\beta;\gamma;1)=\frac{\Gamma(\gamma)\Gamma(\gamma-\alpha-\beta)}{\Gamma(\gamma-\alpha)\Gamma(\gamma-\beta)},$
if $Re(\gamma-\alpha-\beta)>0;$
3. (3)
$\frac{d}{dz}~{}\tensor[_{2}]{F}{{}_{1}}(\alpha,\beta;\gamma;z)=\frac{\alpha\beta}{\gamma}~{}\tensor[_{2}]{F}{{}_{1}}(\alpha+1,\beta+1;\gamma+1;z).$
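The identities in Lemma 5.3 are classical; for the reader's convenience, here
is a small numerical sanity check (our addition, assuming the mpmath
library):

```python
# Numerical check of identities (1) and (2) of Lemma 5.3.
from mpmath import mp, hyp2f1, gamma

mp.dps = 30
a, b, c, z = mp.mpf("0.7"), mp.mpf("1.3"), mp.mpf("3.1"), mp.mpf("0.4")

# (1) Euler's transformation formula: the difference should vanish.
print(hyp2f1(a, b, c, z) - (1 - z) ** (c - a - b) * hyp2f1(c - a, c - b, c, z))

# (2) Gauss summation at z = 1, valid here since Re(c - a - b) = 1.1 > 0.
print(hyp2f1(a, b, c, 1) - gamma(c) * gamma(c - a - b) / (gamma(c - a) * gamma(c - b)))
```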
###### Lemma 5.4.
If $0<\alpha<d+1,$ then $K_{\alpha}:L^{\infty}\rightarrow L^{q}$ is compact
for any $1\leq q\leq\infty.$
###### Proof.
By the continuous embedding of Lebesgue spaces, it suffices to prove that
$K_{\alpha}:L^{\infty}\rightarrow L^{\infty}$ is compact. We first prove
that, for any $f\in L^{\infty},$ we have $K_{\alpha}f\in A(\mathbb{B}^{d}),$
where $A(\mathbb{B}^{d})=H(\mathbb{B}^{d})\cap C(\overline{\mathbb{B}^{d}})$
is the ball algebra. For $f\in L^{\infty},$ it is clear that $K_{\alpha}f$ is
holomorphic on the ball, i.e. $K_{\alpha}f\in H(\mathbb{B}^{d}).$ Now we
prove that $K_{\alpha}f$ is also continuous on the closed ball
$\overline{\mathbb{B}^{d}}.$ From Lemma 5.2 and (2) of Lemma 5.3, it follows
that $K_{\alpha}f(\eta)$ exists for any $\eta\in\partial\mathbb{B}^{d}$ and
$|K_{\alpha}f(\eta)|\leq\|f\|_{\infty}\frac{\Gamma(d+1)\Gamma(d+1-\alpha)}{\Gamma^{2}(d+1-\frac{\alpha}{2})}.$
We now turn to prove that $K_{\alpha}f$ is continuous on
$\partial\mathbb{B}^{d}.$ It suffices to prove that, for any
$\eta\in\partial\mathbb{B}^{d}$ and for any point sequence $\\{z_{n}\\}$ in
$\mathbb{B}^{d}$ satisfying that $z_{n}\rightarrow\eta,$ we have
$K_{\alpha}f(z_{n})\rightarrow K_{\alpha}f(\eta)$ as $n\rightarrow\infty.$ By
Lemma 5.2 and (2) of Lemma 5.3 again, we have
$\begin{split}|K_{\alpha}f(z)|&\leq\|f\|_{\infty}\int_{\mathbb{B}^{d}}\frac{1}{|1-\langle
z,w\rangle|^{\alpha}}dv(w)\\\
&\leq\|f\|_{\infty}\int_{\mathbb{B}^{d}}\frac{1}{|1-\langle\eta,w\rangle|^{\alpha}}dv(w)\\\
&=\|f\|_{\infty}\frac{\Gamma(d+1)\Gamma(d+1-\alpha)}{\Gamma^{2}(d+1-\frac{\alpha}{2})},\end{split}$
(5.1)
for any $z\in\mathbb{B}^{d}.$ By the absolute continuity of the integral, for
any $\varepsilon>0$ there exists $0<\delta<1$ such that
$\int_{F}\frac{dv(w)}{|1-\langle\eta,w\rangle|^{\alpha}}\leq\frac{\varepsilon}{4},$
(5.2)
whenever $v(F)<\delta.$ Denote
$F_{\delta}=\\{z\in\mathbb{B}^{d}:\sqrt[d]{1-\frac{\delta}{2}}<|z|<1\\}.$ Note
that $v(F_{\delta})<\delta$ and
$\frac{1}{(1-\langle
z_{n},w\rangle)^{\alpha}}\rightarrow\frac{1}{(1-\langle\eta,w\rangle)^{\alpha}}~{}\text{uniformly~{}on
}~{}\mathbb{B}^{d}\setminus F_{\delta},\text{~{}as}~{}n\rightarrow\infty.$
Then there exists $N>0$ such that, for any $n>N,$
$\int_{\mathbb{B}^{d}\setminus F_{\delta}}\left|\frac{1}{(1-\langle
z_{n},w\rangle)^{\alpha}}-\frac{1}{(1-\langle\eta,w\rangle)^{\alpha}}\right|dv(w)\leq\frac{\varepsilon}{2}.$
Combining this with (5.1) and (5.2), it follows that, for any $n>N,$
$\begin{split}|K_{\alpha}f(z_{n})-K_{\alpha}f(\eta)|&\leq\|f\|_{\infty}\int_{\mathbb{B}^{d}\setminus
F_{\delta}}\left|\frac{1}{(1-\langle
z_{n},w\rangle)^{\alpha}}-\frac{1}{(1-\langle\eta,w\rangle)^{\alpha}}\right|dv(w)\\\
&~{}~{}~{}~{}~{}~{}~{}~{}~{}+\|f\|_{\infty}\int_{F_{\delta}}\left|\frac{1}{(1-\langle
z_{n},w\rangle)^{\alpha}}-\frac{1}{(1-\langle\eta,w\rangle)^{\alpha}}\right|dv(w)\\\
&\leq\|f\|_{\infty}\int_{\mathbb{B}^{d}\setminus
F_{\delta}}\left|\frac{1}{(1-\langle
z_{n},w\rangle)^{\alpha}}-\frac{1}{(1-\langle\eta,w\rangle)^{\alpha}}\right|dv(w)\\\
&~{}~{}~{}~{}~{}~{}~{}~{}~{}+2\|f\|_{\infty}\int_{F_{\delta}}\frac{1}{|1-\langle\eta,w\rangle|^{\alpha}}dv(w)\\\
&\leq\|f\|_{\infty}\frac{\varepsilon}{2}+2\|f\|_{\infty}\frac{\varepsilon}{4}\\\
&=\varepsilon\|f\|_{\infty}.\\\ \end{split}$ (5.3)
This proves that $K_{\alpha}f$ is continuous on the closed ball
$\overline{\mathbb{B}^{d}}.$ Now we prove that, for any bounded sequence in
$L^{\infty},$ there exists a subsequence whose image under $K_{\alpha}$
converges in $L^{\infty}.$ Suppose that $\\{f_{n}\\}$ is a bounded sequence
in $L^{\infty}.$ Then $\\{K_{\alpha}f_{n}\\}$ lies in
$C(\overline{\mathbb{B}^{d}})$ and is uniformly bounded by (5.1). Now we
prove that $\\{K_{\alpha}f_{n}\\}$ is also equicontinuous. From (5.3), we
know that
$\lim_{\mathbb{B}^{d}\ni
z\rightarrow\eta}\int_{\mathbb{B}^{d}}\left|\frac{1}{(1-\langle
z,w\rangle)^{\alpha}}-\frac{1}{(1-\langle\eta,w\rangle)^{\alpha}}\right|dv(w)=0,$
(5.4)
for arbitrary fixed $\eta\in\partial\mathbb{B}^{d}.$ Combining (5.4) with the
unitary invariance of Lebesgue measure and the symmetry of the unit ball, it
follows that, for any $\epsilon>0,$ there exists $0<\delta^{\prime}<1$ such
that
$\int_{\mathbb{B}^{d}}\left|\frac{1}{(1-\langle
z,w\rangle)^{\alpha}}-\frac{1}{(1-\langle\eta,w\rangle)^{\alpha}}\right|dv(w)\leq\frac{\epsilon}{2}$
(5.5)
whenever $z\in\mathbb{B}^{d},\eta\in\partial\mathbb{B}^{d}$ and
$|z-\eta|<\delta^{\prime}.$ Denote
$B_{1-\frac{\delta^{\prime}}{2}}=\\{z\in\mathbb{C}^{d}:|z|\leq
1-\frac{\delta^{\prime}}{2}\\}$ and
$C_{\frac{\delta^{\prime}}{2}}=\\{z\in\mathbb{C}^{d}:1-\frac{\delta^{\prime}}{2}<|z|\leq
1\\}.$ Then the closed ball $\overline{\mathbb{B}^{d}}$ has the following
decomposition,
$\overline{\mathbb{B}^{d}}=B_{1-\frac{\delta^{\prime}}{2}}\cup
C_{\frac{\delta^{\prime}}{2}}\text{~{}and~{}}B_{1-\frac{\delta^{\prime}}{2}}\cap
C_{\frac{\delta^{\prime}}{2}}=\emptyset.$ (5.6)
Since the function $\frac{1}{(1-\langle z,w\rangle)^{\alpha}}$ is uniformly
continuous on compact set
$B_{1-\frac{\delta^{\prime}}{2}}\times\overline{\mathbb{B}^{d}},$ then there
exists $0<\delta^{\prime\prime}<1$ such that
$\left|\frac{1}{(1-\langle z_{1},w\rangle)^{\alpha}}-\frac{1}{(1-\langle
z_{2},w\rangle)^{\alpha}}\right|\leq\epsilon,$ (5.7)
whenever $(z_{1},w),(z_{2},w)\in
B_{1-\frac{\delta^{\prime}}{2}}\times\overline{\mathbb{B}^{d}}$ and
$|z_{1}-z_{2}|<\delta^{\prime\prime}.$ Take
$\delta^{\prime\prime\prime}=\min\\{\frac{\delta^{\prime}}{2},\delta^{\prime\prime}\\}.$
Now we prove that, for any $z_{1},z_{2}\in\overline{\mathbb{B}^{d}}$ such that
$|z_{1}-z_{2}|<\delta^{\prime\prime\prime},$ then we have
$\int_{\mathbb{B}^{d}}\left|\frac{1}{(1-\langle
z_{1},w\rangle)^{\alpha}}-\frac{1}{(1-\langle
z_{2},w\rangle)^{\alpha}}\right|dv(w)\leq\epsilon.$ (5.8)
In fact, there are two cases to consider. The first case is $z_{1}\in
C_{\frac{\delta^{\prime}}{2}}$ or $z_{2}\in C_{\frac{\delta^{\prime}}{2}}.$
Without loss of generality, we may assume that $z_{1}\in
C_{\frac{\delta^{\prime}}{2}};$ then there exists an
$\eta\in\partial\mathbb{B}^{d}$ satisfying that
$|z_{1}-\eta|<\delta^{\prime\prime\prime}\leq\frac{\delta^{\prime}}{2}.$ By
triangle inequality, it implies that
$|z_{2}-\eta|\leq|z_{2}-z_{1}|+|z_{1}-\eta|<\delta^{\prime}.$ Together with
(5.5), it implies that
$\begin{split}&\int_{\mathbb{B}^{d}}\left|\frac{1}{(1-\langle
z_{1},w\rangle)^{\alpha}}-\frac{1}{(1-\langle
z_{2},w\rangle)^{\alpha}}\right|dv(w)\\\
&\leq\int_{\mathbb{B}^{d}}\left|\frac{1}{(1-\langle
z_{1},w\rangle)^{\alpha}}-\frac{1}{(1-\langle\eta,w\rangle)^{\alpha}}\right|dv(w)\\\
&~{}\vspace{0.2cm}+\int_{\mathbb{B}^{d}}\left|\frac{1}{(1-\langle\eta,w\rangle)^{\alpha}}-\frac{1}{(1-\langle
z_{2},w\rangle)^{\alpha}}\right|dv(w)\\\ &\leq\epsilon\end{split}$
The second case is $z_{1},z_{2}\in B_{1-\frac{\delta^{\prime}}{2}}.$ By (5.7),
it implies that
$\begin{split}\int_{\mathbb{B}^{d}}\left|\frac{1}{(1-\langle
z_{1},w\rangle)^{\alpha}}-\frac{1}{(1-\langle
z_{2},w\rangle)^{\alpha}}\right|dv(w)\leq\epsilon\int_{\mathbb{B}^{d}}dv=\epsilon.\end{split}$
This proves (5.8). Combining this with
$|K_{\alpha}f_{n}(z_{1})-K_{\alpha}f_{n}(z_{2})|\leq\|f_{n}\|_{\infty}\int_{\mathbb{B}^{d}}\left|\frac{1}{(1-\langle
z_{1},w\rangle)^{\alpha}}-\frac{1}{(1-\langle
z_{2},w\rangle)^{\alpha}}\right|dv(w),$
it implies that $\\{K_{\alpha}f_{n}\\}$ is equicontinuous. Then, by the
Arzelà–Ascoli theorem, $\\{K_{\alpha}f_{n}\\}$ has a convergent subsequence
in the supremum norm. ∎
###### Corollary 5.5.
If $0<\alpha<d+1,$ then the following holds:
1. (1)
$K_{\alpha}:L^{p}\rightarrow L^{1}$ is compact for any $1\leq p\leq\infty.$
2. (2)
$K_{\alpha}:L^{1}\rightarrow L^{q}$ is compact if and only if
$q<\frac{d+1}{\alpha}.$
3. (3)
$K_{\alpha}:L^{p}\rightarrow L^{\infty}$ is compact if and only if
$p>\frac{d+1}{d+1-\alpha}.$
###### Proof.
This follows from Lemma 3.7, Lemma 5.4 and the fact that $K_{\alpha}$ is self-adjoint.
∎
In the following, we deal with the case $1<p,q<\infty.$ For this, we need the
following result about Carleson type measures for the Bergman spaces over
the unit ball.
###### Lemma 5.6.
[24] Suppose $1\leq p\leq q<\infty$ and $\mu$ is a positive Borel measure on
$\mathbb{B}^{d}$. Then the following conditions are equivalent:
1. (1)
If $\\{f_{n}\\}$ is a bounded sequence in $A_{0}^{p}$ and $f_{n}(z)\rightarrow
0$ for every $z\in\mathbb{B}^{d}$, then
$\lim_{n\rightarrow\infty}\int_{\mathbb{B}^{d}}|f_{n}|^{q}d\mu=0.$
2. (2)
For every (or some) $s>0,$ we have
$\lim_{|z|\rightarrow
1^{-}}\int_{\mathbb{B}^{d}}\frac{(1-|z|^{2})^{s}}{|1-\langle
z,w\rangle|^{s+\frac{q(d+1)}{p}}}d\mu(w)=0.$
The Borel measures in Lemma 5.6 are in fact the so-called vanishing Carleson
measures. If we denote by $A^{q}(d\mu)$ the weighted Bergman space
$A^{q}(d\mu)=H(\mathbb{B}^{d})\cap L^{q}(\mathbb{B}^{d},d\mu),$ then (1) of
Lemma 5.6 is equivalent to the compactness of the embedding
$Id:A^{p}_{0}\rightarrow A^{q}(d\mu).$
###### Proposition 5.7.
If $0<\alpha<d+1$ and $1<p\leq q<\infty,$ then the following conditions are
equivalent.
1. (1)
$K_{\alpha}:L^{p}\rightarrow L^{q}$ is compact.
2. (2)
$K_{\alpha}:A_{0}^{p}\rightarrow A_{0}^{q}$ is compact.
3. (3)
The embedding $Id:A_{0}^{p}\rightarrow A_{q(d+1-\alpha)}^{q}$ is compact.
4. (4)
$\frac{1}{q}>\frac{1}{p}+\frac{\alpha}{d+1}-1.$
###### Proof.
We first prove that (1) is equivalent to (2). Clearly (1) implies (2). To
prove the converse, note that, since $1<p<\infty,$ Theorem 2 (applied to
$K_{d+1}$) gives that $K_{d+1}:L^{p}\rightarrow A_{0}^{p}$ is bounded.
Suppose that $\\{f_{n}\\}$ is an arbitrary bounded sequence in $L^{p};$ then
$\\{K_{d+1}f_{n}\\}$ is a bounded sequence in $A_{0}^{p}.$ The compactness
of the operator $K_{\alpha}:A_{0}^{p}\rightarrow A_{0}^{q}$ then implies
that there exists a subsequence $\\{f_{n_{j}}\\}$ such that
$\\{K_{\alpha}(K_{d+1}f_{n_{j}})\\}$ is convergent in $A_{0}^{q}.$ Combining
this with Lemma 2.7 yields that $\\{K_{\alpha}f_{n_{j}}\\}$ is convergent in
$A_{0}^{q}.$ This proves that (2) implies (1).
Now we prove that (2) is equivalent to (3). As in the proof of Proposition
2.6, by Theorem 14 of [24] and Theorem 2.19 of [25], it can be proved that
the operator
$R^{\alpha-d-1,d+1-\alpha}:A_{0}^{q}\rightarrow A_{q(d+1-\alpha)}^{q}$
and its inverse operator are bounded. Note that $K_{\alpha}=R^{0,\alpha-d-1}$
on $A_{0}^{p}$ and
$R^{\alpha-d-1,d+1-\alpha}R^{0,\alpha-d-1}f=f,\forall f\in A_{0}^{p}.$
Then we have the following decomposition for embedding $Id,$
$\begin{split}Id:A_{0}^{p}&\xrightarrow{K_{\alpha}=R^{0,\alpha-d-1}}A_{0}^{q}\xrightarrow{R^{\alpha-d-1,d+1-\alpha}}A_{q(d+1-\alpha)}^{q},\\\
&Id=R^{\alpha-d-1,d+1-\alpha}K_{\alpha}.\end{split}$ (5.9)
Combining (5.9) with the fact that $R^{\alpha-d-1,d+1-\alpha}$ has a bounded
inverse, it implies that $K_{\alpha}:A_{0}^{p}\rightarrow A_{0}^{q}$ is
compact if and only if the embedding $Id:A_{0}^{p}\rightarrow
A_{q(d+1-\alpha)}^{q}$ is compact.
Next, we prove that (3) is equivalent to (4). Suppose that $\\{f_{n}\\}$ is
an arbitrary bounded sequence in $A_{0}^{p};$ then by Theorem 20 of [24], a
local estimate for functions in $A_{0}^{p},$ the sequence $\\{f_{n}\\}$ is a
normal family. Hence, by Fatou’s lemma, similar to the
proof of Theorem 1, there exists a subsequence $\\{f_{n_{j}}\\}$ and $g\in
A_{0}^{p}$ such that $f_{n_{j}}$ converges uniformly to $g$ on any compact
subset of $\mathbb{B}^{d}.$ Then $\\{f_{n_{j}}-g\\}$ is in $A_{0}^{p}$ and
$f_{n_{j}}-g\rightarrow 0$ pointwise as $j\rightarrow\infty.$ Together with
Lemma 5.6, it yields that the embedding $Id:A_{0}^{p}\rightarrow
A_{q(d+1-\alpha)}^{q}$ is compact if and only if
$\lim_{|z|\rightarrow
1^{-}}\int_{\mathbb{B}^{d}}\frac{(1-|z|^{2})^{s}}{|1-\langle
z,w\rangle|^{s+\frac{q(d+1)}{p}}}dv_{q(d+1-\alpha)}(w)=0,$ (5.10)
for any $s>0.$ On the other hand, by Proposition 1.4.10 of [20], it implies
that (5.10) is equivalent to $\frac{1}{q}>\frac{1}{p}+\frac{\alpha}{d+1}-1.$
It completes the proof. ∎
Proof of Theorem 3. When $\alpha=d+1,$ Theorem 3 degenerates into Proposition
5.1.
Now we turn to the case $0<\alpha<d+1.$ We first prove that (2) implies (1).
In fact, this is an immediate corollary of Lemma 3.7, Lemma 5.4 and Corollary
5.5. Conversely, Theorem 2 combined with Proposition 5.7 shows that (1) fails
whenever (2) fails; hence (1) implies (2), completing the proof. ∎
Proof of Theorem 4. By Theorems 1, 2 and 3, it is easy to see that
$(1)\Rightarrow(4)\Rightarrow(2)\Leftrightarrow(3).$ Thus we only need to
show that $(2)\Rightarrow(1),$ which is equivalent to proving that $(2)$
fails if $(1)$ fails. It suffices to show that
$K_{\alpha}:L^{\infty}\rightarrow L^{1}$ is not bounded if $\alpha\geq d+2.$
Suppose that $\alpha\geq d+2.$ In view of Theorem 7.1 of [25],
$K_{\alpha}(L^{\infty})=\mathcal{B}_{\alpha-d}.$ From Lemma 3.2, we know that
$\mathcal{B}_{\alpha-d}\not\subset L^{1},$ which means that
$K_{\alpha}:L^{\infty}\rightarrow L^{1}$ is not bounded. It completes the
proof. ∎
## 6\. Norm estimates for $K_{\alpha},K_{\alpha}^{+}$
In the previous sections, we completely characterized the $L^{p}$-$L^{q}$
boundedness of $K_{\alpha},K_{\alpha}^{+}$ and the compactness of
$K_{\alpha}.$ In the present section, we state and prove some sharp norm
estimates for $K_{\alpha},K_{\alpha}^{+},$ which essentially give upper
bounds for the best constants in the Hardy-Littlewood-Sobolev inequalities.
###### Proposition 6.1.
If $d+1<\alpha<d+2$ and $K_{\alpha}:L^{p}\rightarrow L^{q}$ is bounded, then
$\|K_{\alpha}\|_{L^{p}\rightarrow
L^{q}}\leq\frac{\Gamma(d+1)^{1+\frac{1}{q}-\frac{1}{p}}\Gamma(\alpha-(d+1))\Gamma(\frac{1}{q^{-1}-p^{-1}}(d+1-\alpha)+1)^{\frac{1}{q}-\frac{1}{p}}}{\Gamma(\frac{\alpha}{2})^{2}\Gamma(\frac{1}{q^{-1}-p^{-1}}(d+1-\alpha)+d+1)^{\frac{1}{q}-\frac{1}{p}}}.$
(6.1)
###### Lemma 6.2.
Suppose that $d+1<\alpha<d+2$ and $(0,\frac{1}{q})\in
G(K_{\alpha})=G(K_{\alpha}^{+}),$ then the following holds.
1. (1)
$\|K_{\alpha}\|_{L^{\infty}\rightarrow
L^{q}}\leq\|K_{\alpha}^{+}\|_{L^{\infty}\rightarrow
L^{q}}=\|\int_{\mathbb{B}^{d}}k_{\alpha}^{+}(\cdot,w)dv(w)\|_{L^{q}}.$
2. (2)
In particular, when $d=1,$
$\|K_{\alpha}\|_{L^{\infty}\rightarrow
L^{1}}\leq\|K_{\alpha}^{+}\|_{L^{\infty}\rightarrow
L^{1}}=\frac{4}{(\alpha-2)^{2}}\left(\frac{\Gamma(3-\alpha)}{\Gamma^{2}(2-\frac{\alpha}{2})}-1\right).$
(6.2)
3. (3)
In general, for any $(0,\frac{1}{q})\in G(K_{\alpha})=G(K_{\alpha}^{+}),$
$\|K_{\alpha}\|_{L^{\infty}\rightarrow
L^{q}}\leq\|K_{\alpha}^{+}\|_{L^{\infty}\rightarrow
L^{q}}\leq\frac{\Gamma(d+1)^{1+\frac{1}{q}}\Gamma(\alpha-(d+1))\Gamma(q(d+1-\alpha)+1)^{\frac{1}{q}}}{\Gamma(\frac{\alpha}{2})^{2}\Gamma(q(d+1-\alpha)+d+1)^{\frac{1}{q}}}.$
(6.3)
###### Proof.
(1) Since $|K_{\alpha}(f)|\leq K_{\alpha}^{+}(|f|),$ we have
$\|K_{\alpha}\|_{L^{\infty}\rightarrow
L^{q}}\leq\|K_{\alpha}^{+}\|_{L^{\infty}\rightarrow L^{q}}$ whenever
$K_{\alpha}$ and $K_{\alpha}^{+}$ are bounded. Note that
$|K_{\alpha}^{+}f|(z)\leq\|f\|_{\infty}\int_{\mathbb{B}^{d}}\frac{1}{|1-\langle
z,w\rangle|^{\alpha}}dv(w)$ for any $f\in L^{\infty},$ it yields that
$\|K_{\alpha}\|_{L^{\infty}\rightarrow
L^{q}}\leq\|K_{\alpha}^{+}\|_{L^{\infty}\rightarrow
L^{q}}\leq\|\int_{\mathbb{B}^{d}}k_{\alpha}^{+}(\cdot,w)dv(w)\|_{L^{q}}.$
For the reverse inequality, note that
$\|K_{\alpha}^{+}\|_{L^{\infty}\rightarrow
L^{q}}\geq\|K_{\alpha}^{+}1\|_{L^{q}}=\|\int_{\mathbb{B}^{d}}k_{\alpha}^{+}(\cdot,w)dv(w)\|_{L^{q}}.$
It leads to the desired result.
(2) Now we turn to the computation of the norm in the case $d=1.$ From Lemma
5.2 and what we have proved in (1), it follows that
$\begin{split}\|K_{\alpha}\|_{L^{\infty}\rightarrow
L^{1}}&\leq\|K_{\alpha}^{+}\|_{L^{\infty}\rightarrow L^{1}}\\\
&=\int_{\mathbb{B}^{d}}\tensor[_{2}]{F}{{}_{1}}(\frac{\alpha}{2},\frac{\alpha}{2};d+1;|z|^{2})dv(z)\\\
&=d\int_{0}^{1}\tensor[_{2}]{F}{{}_{1}}(\frac{\alpha}{2},\frac{\alpha}{2};d+1;r)r^{d-1}dr,\\\
\end{split}$ (6.4)
in the last equality we apply integration in polar coordinates, see Lemma 1.8
of [25], together with the fact that
$\tensor[_{2}]{F}{{}_{1}}(\frac{\alpha}{2},\frac{\alpha}{2};d+1;|z|^{2})$ is
a radial function. Now we use the differentiation property listed in Lemma
5.3 to calculate the integral in the case $d=1.$ By (3) of Lemma 5.3,
$\frac{d}{dr}\left(\tensor[_{2}]{F}{{}_{1}}(\frac{\alpha}{2}-1,\frac{\alpha}{2}-1;1;r)\right)=(\frac{\alpha}{2}-1)^{2}\tensor[_{2}]{F}{{}_{1}}(\frac{\alpha}{2},\frac{\alpha}{2};2;r).$
Integrating both sides of this equality over $[0,1]$, we get
$\int_{0}^{1}\tensor[_{2}]{F}{{}_{1}}(\frac{\alpha}{2},\frac{\alpha}{2};2;r)dr=\frac{4}{(\alpha-2)^{2}}\left(\tensor[_{2}]{F}{{}_{1}}(\frac{\alpha}{2}-1,\frac{\alpha}{2}-1;1;1)-1\right).$
Together with (2) of Lemma 5.3, this yields the desired result.
(3) Combining (1) with Lemma 5.2 and (1),(2) of Lemma 5.3, it follows that
$\begin{split}\|K_{\alpha}^{+}\|_{L^{\infty}\rightarrow
L^{q}}&=\left(\int_{\mathbb{B}^{d}}\left(\int_{\mathbb{B}^{d}}\frac{1}{|1-\langle
z,w\rangle|^{\alpha}}dv(w)\right)^{q}dv(z)\right)^{\frac{1}{q}}\\\
&=\left(\int_{\mathbb{B}^{d}}\tensor[_{2}]{F}{{}_{1}}(\frac{\alpha}{2},\frac{\alpha}{2};d+1;|z|^{2})^{q}dv(z)\right)^{\frac{1}{q}}\\\
&=\left(\int_{\mathbb{B}^{d}}(1-|z|^{2})^{q(d+1-\alpha)}\tensor[_{2}]{F}{{}_{1}}(d+1-\frac{\alpha}{2},d+1-\frac{\alpha}{2};d+1;|z|^{2})^{q}dv(z)\right)^{\frac{1}{q}}\\\
&\leq\tensor[_{2}]{F}{{}_{1}}(d+1-\frac{\alpha}{2},d+1-\frac{\alpha}{2};d+1;1)\left(\int_{\mathbb{B}^{d}}(1-|z|^{2})^{q(d+1-\alpha)}dv(z)\right)^{\frac{1}{q}}\\\
&=\frac{\Gamma(d+1)^{1+\frac{1}{q}}\Gamma(\alpha-(d+1))\Gamma(q(d+1-\alpha)+1)^{\frac{1}{q}}}{\Gamma(\frac{\alpha}{2})^{2}\Gamma(q(d+1-\alpha)+d+1)^{\frac{1}{q}}}.\\\
\end{split}$
It leads to (6.3). ∎
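The closed form (6.2) is easy to test numerically; the following sketch (our
addition, assuming the mpmath library) compares the integral from (6.4) for
$d=1$ with the right-hand side of (6.2):

```python
# Numerical check of the closed-form norm (6.2) for d = 1.
from mpmath import mp, hyp2f1, gamma, quad

mp.dps = 25
alpha = mp.mpf("2.5")  # any value with d+1 = 2 < alpha < d+2 = 3

# ||K_alpha^+||_{L^infty -> L^1} = int_0^1 2F1(alpha/2, alpha/2; 2; r) dr, by (6.4).
numeric = quad(lambda r: hyp2f1(alpha / 2, alpha / 2, 2, r), [0, 1])
closed = 4 / (alpha - 2) ** 2 * (gamma(3 - alpha) / gamma(2 - alpha / 2) ** 2 - 1)
print(numeric, closed)  # the two values should agree
```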
Proof of Proposition 6.1. Suppose that $K_{\alpha}^{+}:L^{p}\rightarrow
L^{q}$ is bounded, which is equivalent to $(\frac{1}{p},\frac{1}{q})\in
G(K_{\alpha}^{+}).$ Then (4) of Theorem 1 guarantees
$\frac{1}{q}-\frac{1}{p}>\alpha-(d+1).$ Applying (4) of Theorem 1 again
yields that
$(0,\frac{1}{q}-\frac{1}{p}),(1-(\frac{1}{q}-\frac{1}{p}),1)\in
G(K_{\alpha}^{+})$ (6.5)
and there exists $0\leq\theta\leq 1$ satisfying that
$(\frac{1}{p},\frac{1}{q})=\theta\cdot(0,\frac{1}{q}-\frac{1}{p})+(1-\theta)\cdot(1-(\frac{1}{q}-\frac{1}{p}),1).$
(6.6)
Combining (6.5) and (6.6) with Lemma 2.2, it follows that
$\|K_{\alpha}^{+}\|_{L^{p}\rightarrow
L^{q}}\leq\|K_{\alpha}^{+}\|_{L^{\infty}\rightarrow
L^{\frac{1}{q^{-1}-p^{-1}}}}^{\theta}\|K_{\alpha}^{+}\|_{L^{\frac{1}{1-(q^{-1}-p^{-1})}}\rightarrow
L^{1}}^{1-\theta}$ (6.7)
We observe that the adjoint operator of $K_{\alpha}^{+}:L^{\infty}\rightarrow
L^{\frac{1}{q^{-1}-p^{-1}}}$ is exactly the operator
$K_{\alpha}^{+}:L^{\frac{1}{1-(q^{-1}-p^{-1})}}\rightarrow L^{1},$ which
means that
$\|K_{\alpha}^{+}\|_{L^{\infty}\rightarrow
L^{\frac{1}{q^{-1}-p^{-1}}}}=\|K_{\alpha}^{+}\|_{L^{\frac{1}{1-(q^{-1}-p^{-1})}}\rightarrow
L^{1}}.$
Applying this to (6.7) yields that
$\|K_{\alpha}^{+}\|_{L^{p}\rightarrow
L^{q}}\leq\|K_{\alpha}^{+}\|_{L^{\infty}\rightarrow
L^{\frac{1}{q^{-1}-p^{-1}}}}.$ (6.8)
Combining (6.8) with (6.5) and applying (3) of Lemma 6.2, we arrive at the
desired conclusion. ∎
###### Corollary 6.3.
Suppose that $C_{1}$ is the best constant in HLS 1; then
$C_{1}\leq\frac{\Gamma(d+1)^{2-\frac{1}{s}-\frac{1}{p}}\Gamma(\alpha-(d+1))\Gamma(\frac{1}{1-s^{-1}-p^{-1}}(d+1-\alpha)+1)^{1-\frac{1}{s}-\frac{1}{p}}}{\Gamma(\frac{\alpha}{2})^{2}\Gamma(\frac{1}{1-s^{-1}-p^{-1}}(d+1-\alpha)+d+1)^{1-\frac{1}{s}-\frac{1}{p}}}.$
###### Proposition 6.4.
If $0<\alpha<d+1$ and
$\frac{1}{p}-(1-\frac{\alpha}{d+1})<\frac{1}{q}\leq\frac{1}{p},$ then
$\|K_{\alpha}\|_{L^{p}\rightarrow
L^{q}}\leq\|K_{\alpha}^{+}\|_{L^{p}\rightarrow
L^{q}}\leq\left(\frac{\Gamma(d+1)\Gamma(d+1-\frac{\alpha}{1-(p^{-1}-q^{-1})})}{\Gamma^{2}(d+1-\frac{\alpha}{2(1-(p^{-1}-q^{-1}))})}\right)^{1-(\frac{1}{p}-\frac{1}{q})}.$
(6.9)
In particular, when $q=\infty,$ the inequality (6.9) is an equality.
###### Proof.
We first prove that (6.9) is in fact an equality in the case $q=\infty.$ From
(4.1), we know that
$\|K_{\alpha}\|_{L^{p}\rightarrow
L^{\infty}}=\|K_{\alpha}^{+}\|_{L^{p}\rightarrow
L^{\infty}}=\sup_{z\in\mathbb{B}^{d}}\left(\int\frac{dv(w)}{|1-\langle
z,w\rangle|^{\frac{p\alpha}{p-1}}}\right)^{\frac{p-1}{p}}.$ (6.10)
On the other hand, Lemma 5.2 and (2) of Lemma 5.3 yield
$\begin{split}\int_{\mathbb{B}^{d}}\frac{dv(w)}{|1-\langle
z,w\rangle|^{\frac{p\alpha}{p-1}}}&=\tensor[_{2}]{F}{{}_{1}}(\frac{p\alpha}{2(p-1)},\frac{p\alpha}{2(p-1)};d+1;|z|^{2})\\\
&\leq\tensor[_{2}]{F}{{}_{1}}(\frac{p\alpha}{2(p-1)},\frac{p\alpha}{2(p-1)};d+1;1)\\\
&=\frac{\Gamma(d+1)\Gamma(d+1-\frac{p\alpha}{p-1})}{\Gamma^{2}(d+1-\frac{p\alpha}{2(p-1)})}.\end{split}$
(6.11)
Combining (6.10) and (6.11), it follows that
$\|K_{\alpha}\|_{L^{p}\rightarrow
L^{\infty}}=\|K_{\alpha}^{+}\|_{L^{p}\rightarrow
L^{\infty}}=\left(\frac{\Gamma(d+1)\Gamma(d+1-\frac{p\alpha}{p-1})}{\Gamma^{2}(d+1-\frac{p\alpha}{2(p-1)})}\right)^{\frac{p-1}{p}}.$
(6.12)
Now we turn to the proof of (6.9) in the general case. Note first that
$|K_{\alpha}(f)|\leq K_{\alpha}^{+}(|f|),$ which implies that
$\|K_{\alpha}\|_{L^{p}\rightarrow
L^{q}}\leq\|K_{\alpha}^{+}\|_{L^{p}\rightarrow L^{q}}$ whenever $K_{\alpha}$
and $K_{\alpha}^{+}$ are bounded. Since
$\frac{1}{p}-(1-\frac{\alpha}{d+1})<\frac{1}{q}\leq\frac{1}{p},$ Theorem 2
implies that
$(\frac{1}{p},\frac{1}{q}),(\frac{1}{p}-\frac{1}{q},0),(1,1-(\frac{1}{p}-\frac{1}{q}))\in
G(K_{\alpha}^{+})$ (6.13)
and there exists $0\leq\theta\leq 1$ satisfying that
$(\frac{1}{p},\frac{1}{q})=\theta\cdot(\frac{1}{p}-\frac{1}{q},0)+(1-\theta)\cdot(1,1-(\frac{1}{p}-\frac{1}{q}))$
(6.14)
Combining (6.13) and (6.14) with Lemma 2.2, it follows that
$\|K_{\alpha}^{+}\|_{L^{p}\rightarrow
L^{q}}\leq\|K_{\alpha}^{+}\|_{L^{\frac{1}{p^{-1}-q^{-1}}}\rightarrow
L^{\infty}}^{\theta}\|K_{\alpha}^{+}\|_{L^{1}\rightarrow
L^{\frac{1}{1-(p^{-1}-q^{-1})}}}^{1-\theta}$ (6.15)
We observe that the adjoint operator of
$K_{\alpha}^{+}:L^{\frac{1}{p^{-1}-q^{-1}}}\rightarrow L^{\infty}$ is exactly
the operator $K_{\alpha}^{+}:L^{1}\rightarrow
L^{\frac{1}{1-(p^{-1}-q^{-1})}},$ which means that
$\|K_{\alpha}^{+}\|_{L^{\frac{1}{p^{-1}-q^{-1}}}\rightarrow
L^{\infty}}=\|K_{\alpha}^{+}\|_{L^{1}\rightarrow
L^{\frac{1}{1-(p^{-1}-q^{-1})}}}.$ (6.16)
Thus by (6.15) and (6.16), it follows that
$\|K_{\alpha}^{+}\|_{L^{p}\rightarrow
L^{q}}\leq\|K_{\alpha}^{+}\|_{L^{\frac{1}{p^{-1}-q^{-1}}}\rightarrow
L^{\infty}}.$
Together with (6.12), this completes the proof. ∎
###### Corollary 6.5.
Suppose that $C_{2}$ is the best constant in HLS 2; then the following hold.
1. (1)
If $\frac{1}{p}<1-\frac{1}{s},$ then
$C_{2}\leq\frac{\Gamma(d+1)\Gamma(d+1-\alpha)}{\Gamma^{2}(d+1-\frac{\alpha}{2})}.$
2. (2)
If $\frac{1}{p}-(1-\frac{\alpha}{d+1})<1-\frac{1}{s}\leq\frac{1}{p},$ then
$C_{2}\leq\left(\frac{\Gamma(d+1)\Gamma(d+1-\frac{\alpha}{2-p^{-1}-s^{-1}})}{\Gamma^{2}(d+1-\frac{\alpha}{2(2-p^{-1}-s^{-1})})}\right)^{2-(\frac{1}{p}+\frac{1}{s})}.$
Proof of Theorem 5. When $\alpha<\frac{d+2}{2},$ Proposition 1.4.10 of [20]
implies that the kernel function $k_{\alpha}^{+}\in
L^{2}(\mathbb{B}^{d}\times\mathbb{B}^{d},dv\times dv),$ thus
$K_{\alpha},K_{\alpha}^{+}:L^{2}\rightarrow L^{2}$ are Hilbert-Schmidt. Note
that
$Tr(K_{\alpha}^{*}K_{\alpha})=\int_{\mathbb{B}^{d}}\int_{\mathbb{B}^{d}}\frac{1}{|1-\langle
z,w\rangle|^{2\alpha}}dv(w)dv(z).$ (6.17)
When $\alpha\neq 1,$ a computation similar to (6.2) yields the trace formula.
Now we deal with the special case $\alpha=1.$ Combining Lemma 5.2 with
(6.17), it implies that
$Tr(K_{1}^{*}K_{1})=\int_{0}^{1}\tensor[_{2}]{F}{{}_{1}}(1,1;2;r)dr=\sum_{j=1}^{\infty}\frac{1}{j^{2}}=\frac{\pi^{2}}{6}.$
∎
###### Remark 6.6.
By (3) of Lemma 5.3 and induction, one can obtain explicit trace formulas in
every dimension $d\geq 1.$
As a consequence of Theorem 5 we obtain the following generalized Euler-Jacobi
identity.
###### Corollary 6.7.
Suppose that $0<\alpha<\frac{3}{2},$ then
$\sum_{j=0}^{\infty}\left(\frac{\Gamma(\alpha+j)}{\Gamma(\alpha)\Gamma(2+j)}\right)^{2}=\frac{1}{(\alpha-1)^{2}}\left(\frac{\Gamma(3-2\alpha)}{\Gamma^{2}(2-\alpha)}-1\right).$
(6.18)
When $\alpha=1,$ the identity (6.18), interpreted as a limit as
$\alpha\rightarrow 1,$ is the well-known Euler-Jacobi identity
$\sum_{j=1}^{\infty}\frac{1}{j^{2}}=\frac{\pi^{2}}{6}.$
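Identity (6.18) can be tested numerically; the following sketch (our
addition, assuming the mpmath library) compares the two sides at a sample
value of $\alpha$:

```python
# Numerical check of the generalized Euler-Jacobi identity (6.18).
from mpmath import mp, gamma, nsum, inf

mp.dps = 30
alpha = mp.mpf("0.5")  # any value in (0, 3/2) with alpha != 1

lhs = nsum(lambda j: (gamma(alpha + j) / (gamma(alpha) * gamma(2 + j))) ** 2, [0, inf])
rhs = (gamma(3 - 2 * alpha) / gamma(2 - alpha) ** 2 - 1) / (alpha - 1) ** 2
print(lhs, rhs)  # the two values should agree
```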
When $d=1$ and $0<\alpha<\frac{3}{2},$ we know that
$K_{\alpha}:L^{2}\rightarrow L^{2}$ is compact by Theorem 3 or Theorem 5.
Thus the spectrum $\sigma(K_{\alpha})$ of the operator $K_{\alpha}$ is
exactly the point spectrum. Note that every $K_{\alpha}$ is self-adjoint;
combining (2.1) and (6.18) with Stirling’s formula, we have the following.
###### Corollary 6.8.
Suppose that $d=1$ and $0<\alpha<\frac{3}{2},$ then
$K_{\alpha}:L^{2}\rightarrow L^{2}$ is compact and
$\sigma(K_{\alpha})=\bigcup_{j=0}^{\infty}\\{\frac{\Gamma(\alpha+j)}{\Gamma(\alpha)\Gamma(2+j)}\\}.$
Moreover, in this case,
$\|K_{\alpha}\|_{L^{2}\rightarrow L^{2}}=\max_{j\geq
0}\frac{\Gamma(\alpha+j)}{\Gamma(\alpha)\Gamma(2+j)}.$
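For concreteness (our addition, assuming the mpmath library), the eigenvalues
in Corollary 6.8 are easy to tabulate; note that
$\lambda_{j+1}/\lambda_{j}=\frac{\alpha+j}{2+j}<1$ for $\alpha<2,$ so the
maximum is attained at $j=0$:

```python
# Eigenvalues of K_alpha on L^2 over the unit disc (d = 1), per Corollary 6.8.
from mpmath import mp, gamma

mp.dps = 15
alpha = mp.mpf("1.2")  # any value in (0, 3/2)

eigs = [gamma(alpha + j) / (gamma(alpha) * gamma(2 + j)) for j in range(10)]
print(eigs)       # a decreasing sequence tending to 0 (compactness)
print(max(eigs))  # the operator norm; here lambda_0 = 1/Gamma(2) = 1
```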
Acknowledgements. The first author would like to thank Professor G. Zhang for
his helpful discussions and warm hospitality when the author visited Chalmers
University of Technology.
## References
* [1] D. Békollé, A. Bonami, _Estimates for the Bergman and Szegö projections in two symmetric domains of $\mathbb{C}^{n}$,_ Colloq. Math. 68, 81-100 (1995)
* [2] A. Bonami, G. Garrigós, C. Nana, _$L^{p}$ -$L^{q}$ estimates for Bergman projections in bounded symmetric domains of tube type,_ J. Geom. Anal. 24, 1737-1769 (2014)
* [3] M. Buckley, P. Koskela, D. Vukotić, _Fractional integration, differentiation, and weighted Bergman spaces,_ Math. Proc. Cambridge Philos. Soc. 126, 369-385 (1999)
* [4] G. Cheng, X. Fang, Z. Wang, J. Yu, _The hyper-singular cousin of the Bergman projection,_ Trans. Amer. Math. Soc. 369, 8643-8662 (2017)
* [5] G. Cheng, X. Hou, C. Liu, _The singular integral operator induced by Drury-Arveson kernel_ , Complex Anal. Oper. Theory, 12, 917-929 (2018)
* [6] F. Cobos, J. Peetre, _Interpolation of compactness using Aronszajn-Gagliardo functors,_ Israel J. Math. 68, 220-240 (1989)
* [7] Y. Deng, L. Huang, T. Zhao, D. Zheng, _Bergman projection and Bergman spaces,_ J. Operator Theory, 41, 3-24 (2001)
* [8] A. Erdélyi, W. Magnus, F. Oberhettinger, F. G. Tricomi, _Higher Transcendental Functions, Vols. I, II_ , McGraw-Hill, New York (1953)
* [9] F. Forelli, W. Rudin, _Projections on spaces of holomorphic functions in balls,_ Indiana Univ. Math. J. 24, 593-602 (1974)
* [10] X. Fang, Z. Wang, _Two weight inequalities for the Bergman projection with doubling measures_ , Taiwanese J. Math. 19, 919-926 (2015)
* [11] A. Krasnoselskii, _On a theorem of M. Riesz,_ Soviet Math. Dokl. 1 229-231 (1960)
* [12] D. Kalaj, D. Vujadinović, _Norm of the Bergman projection onto the Bloch space,_ J. Operator Theory, 73, 113-126 (2017)
* [13] H. Lieb, _Sharp constants in the Hardy-Littlewood-Sobolev and related inequalities,_ Ann. Math. 118, 349-374 (1983)
* [14] D. Mcneal, _The Bergman projection as a singular integral operator,_ J. Geom. Anal. 1, 91-103 (1994)
* [15] D. Mcneal, M. Stein, _Mapping properties of the Bergman projection on convex domains of finite type,_ Duke Math. J. 73, 177-199 (1994)
* [16] A. Nowak, L. Roncal, _Potential operators associated with Jacobi and Fourier-Bessel expansions,_ J. Math. Anal. Appl. 422, 148-184 (2015)
* [17] C. Ouyang, W. Yang, _Exact location of $\alpha$-Bloch spaces in $L_{a}^{p}$ and $H^{p}$ of a complex unit ball,_ Rocky Mountain J. Math. 30, 1151-1169 (2000)
* [18] H. Phong, M. Stein, _Estimates for the Bergman and Szegö projections on strongly pseudo-convex domains_ , Duke Math. J. 44, 695-704 (1977)
* [19] N. Plessis, _Some theorems about the Riesz fractional integral,_ Trans. Amer. Math. Soc. 80, 124-134 (1955)
* [20] W. Rudin, _Function Theory in the Unit Ball of $\mathbb{C}^{n}$,_ Grundlehren der Math. Springer, New York, 1980
* [21] M. Stein, G. Weiss, _Fractional Integrals on n-Dimensional Euclidean Space,_ J. Math. Mech. 7, 503-514 (1958)
* [22] T. Tao, _Harmonic Analysis,_ Lecture notes at UCLA, http://www.math.ucla.edu/$\sim$tao/ 247a.1.06f/notes2.pdf
* [23] R. Zhao, _Generalization of Schur’s test and its application to a class of integral operators on the unit ball of $\mathbb{C}^{n}$,_ Integr. Equ. Oper. Theory, 82, 519-532 (2015)
* [24] R. Zhao, K. Zhu, _Theory of Bergman Spaces in the Unit Ball of $\mathbb{C}^{n}$,_ Mém. Soc. Math. Fr. 115 (2009)
* [25] K. Zhu, _Spaces of Holomorphic Functions in the Unit Ball_ , Graduate Texts in Mathematics, 226, Springer-Verlag, New York (2005)
* [26] K. Zhu, _Operator Theory in Function Spaces. Second Edition_ , Mathematical Surveys and Monographs, 138, American Mathematical Society, Providence, RI, 2007.
# An Algorithm for Consensus Trees
Pongsaphol Pongsawakul
###### Abstract
We consider the tree consensus problem, an important problem in
bioinformatics. Given a rooted tree $t$ and another rooted tree $T$, one would like to incorporate compatible information from $T$ into $t$. This problem is a subproblem of the tree refinement problem called the RF-Optimal Tree Refinement Problem defined by Christensen, Molloy, Vachaspati, and Warnow [WABI’19], who employ the greedy algorithm by Gawrychowski, Landau, Sung, and
Weimann [ICALP’18] that runs in time $O(n^{1.5}\log n)$. We give a faster
algorithm for this problem that runs in time $O(n\log n)$. Our key ingredient
is a bipartition compatibility criteria based on amortized-time leaf counters.
While this is an improvement, the fastest solution is an algorithm by Jansson,
Shen, and Sung [JACM’16] which runs in time $O(n)$.
## 1 Introduction
We consider the tree consensus problem, an important problem in
bioinformatics. Given a rooted tree $t$ and another rooted tree $T$, we would
like to combine “information” from $T$ into $t$. Moreover, we would like to greedily take only the information that is consistent with the current $t$. (See definitions below.) This problem is a subproblem of the tree refinement problem called the RF-Optimal Tree Refinement Problem defined by Christensen, Molloy, Vachaspati, and Warnow [1], who employ the greedy algorithm
by Gawrychowski, Landau, Sung, and Weimann [2] that runs in time
$O(n^{1.5}\log n)$. We give a faster algorithm for this problem that runs in
time $O(n\log n)$. Our key ingredient is a bipartition compatibility criteria
based on amortized-time leaf counters. While this is an improvement, the
fastest solution is an algorithm by Jansson, Shen, and Sung [3] which runs in
time $O(n)$.
The algorithm by Gawrychowski et al. [2] works in a more general case where the goal is to find the greedy consensus tree of $k$ trees. In this case, their algorithm runs in time $O(kn^{1.5}\log n)$, an improvement over the $O(kn^{2})$ of Jansson et al. [3]. For this problem, Sung [4] also presents an algorithm that runs in time $O(k^{2}n)$, improving over Gawrychowski et al. [2] when $k=O(\sqrt{n}\log n)$.
In an earlier version, we erroneously claimed that our algorithm works for the
case with many trees. We thank Pawel Gawrychowski and Oren Weimann for pointing this out. Jittat Fakcharoenphol, who helped advise the author on this manuscript, would like to take full responsibility for this mistake.
## 2 Definitions
We start with definitions related to trees and consistency.
Let $V(T)$, $E(T)$, and $R(T)$ denote the vertex set, the edge set, and the root of a tree $T$, respectively. For every vertex $u\in V(T)-\\{R(T)\\}$, let $par(u)$ be the parent of node $u$. For every vertex $u\in V(T)$, let $depth(u)$ be the depth of node $u$, defined by $depth(R(T))=1$ and $depth(u)=depth(par(u))+1$ for every $u\in V(T)-\\{R(T)\\}$. For every vertex $u\in V(T)$, let $L(u)$ be the set of all leaves in the subtree rooted at $u$, and let $size(u)=|L(u)|$. For each node $u\in V(T)-\\{R(T)\\}$, let $\Lambda(u)$ denote the bipartition induced by the edge $(u,par(u))$; it can be represented by the two clusters $A=L(u)$ and $B=L(R(T))-L(u)$, denoted $A|B$. The set of bipartitions of $T$ is denoted by $C(T)=\\{\Lambda(u):u\in V(T)-\\{R(T)\\}\\}$. Let $RF(T_{a},T_{b})$ denote the Robinson-Foulds distance between trees $T_{a}$ and $T_{b}$, defined by $RF(T_{a},T_{b})=|C(T_{a})-C(T_{b})|$.
A set $S$ of bipartitions is compatible if there exists a tree $T$ such that $C(T)=S$. A leaf set $A$ is compatible with $t^{\prime}$ if there is a node $u$ in $t^{\prime}$ such that for every $v\in child(u)$, either $L(v)\cap A=\emptyset$ or $L(v)\subseteq A$.
## 3 The $O(n^{2})$ algorithm
In this section, we describe a simpler version of the algorithm and prove its
correctness and its running time (in Subsection 3.1). We improve its running
time in Section 4.
We assume that both $T$ and the current $t^{\prime}$ are equipped with a data
structure that, given the id of a bipartition $b$, finds the vertex $u$ in the tree such that $\Lambda(u)=b$. Since $t^{\prime}$ changes over time, we assume that
our data structures can handle the tree update efficiently. We discuss this in
Subsection 3.2.
The main loop of the algorithm iterates over all bipartitions of $T$ and, if possible, adds each bipartition to $t^{\prime}$.
We define the variables used in the main loop. For each bipartition of $T$, let $z$ denote the minimum-depth node of $t^{\prime}$ reached while adding the leaves of that bipartition. The main loop is described below.
Algorithm 1 Main loop
1:for each node $u\in V(T)$ do
2: ClearCounter()
3: $z\leftarrow$ UpdateCounterSubtree($u$) $\triangleright$ Update Step,
referred to in Section 4
4: if IsCompatible($t,u,z$) then
5: update $t^{\prime}$
6: end if
7:end for
The main loop uses the following function.
Algorithm 2 UpdateCounterSubtree($u$)
1:$z\leftarrow null$
2:for each node $v\in L(u)$ do
3: $p\leftarrow$ UpdateCounterLeaf($v,t^{\prime}$) $\triangleright$ After this
step we say that $v$ has been added.
4: if $z=null$ or $depth(p)<depth(z)$ then
5: $z\leftarrow p$
6: end if
7:end for
8:return $z$
Our algorithm maintains a variable $counter$ for each vertex in $t^{\prime}$. We also keep a list of dirty vertices so that ClearCounter only resets the counters touched since the last clearing. Variable $counter$ is updated in function UpdateCounterLeaf. Note that $counter$ changes over time as UpdateCounterSubtree keeps adding leaves in $L(u)$. The algorithm ensures that $counter(u)$ is exactly as follows. If $u$ is a leaf vertex, we let
$counter(u)=\begin{cases}1&\text{if }u\text{ has been added}\\\ 0&\text{otherwise}\end{cases}$
For other internal vertex $u$, we let
$counter(u)=\sum_{v\in child(u)}\begin{cases}counter(v)&\text{if
}counter(v)=|L(v)|\\\ 0&\text{otherwise}\end{cases}$
Each call to this function takes amortized $O(1)$ time (to be proved later). The following algorithm describes function UpdateCounterLeaf.
Algorithm 3 UpdateCounterLeaf($v,t^{\prime}$)
1:$counter(v)\leftarrow 1$
2:while $counter(v)=size(v)$ do
3: $p\leftarrow par(v)$
4: $counter(p)\leftarrow counter(p)+counter(v)$
5: $v\leftarrow p$
6:end while
7:return $v$
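To make the counter bookkeeping concrete, the following is a minimal Python sketch of UpdateCounterLeaf and ClearCounter; the `Node` class, its fields, and the explicit `dirty` list are illustrative assumptions of this sketch (with a guard added at the root), not part of the pseudocode above.

```python
class Node:
    """A vertex of t' (or of T); the fields are assumptions of this sketch."""
    def __init__(self, parent=None):
        self.parent = parent      # None for the root
        self.children = []
        self.depth = 1 if parent is None else parent.depth + 1
        self.size = 0             # |L(v)|, precomputed (1 for a leaf)
        self.counter = 0
        self.twin = None          # for a leaf of T: the matching leaf in t'

def update_counter_leaf(v, dirty):
    """Add leaf v of t' and propagate complete counters upward.
    Returns the last node whose counter was increased."""
    v.counter = 1
    dirty.append(v)
    while v.parent is not None and v.counter == v.size:
        p = v.parent
        p.counter += v.counter    # v is complete: propagate to its parent
        dirty.append(p)
        v = p
    return v

def clear_counter(dirty):
    """Reset only the counters touched since the last clearing."""
    for v in dirty:
        v.counter = 0
    dirty.clear()
```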
Next, we have an algorithm that checks whether the leaf set added by the previous algorithms is compatible with $t^{\prime}$. We can track the node $z$ of minimum depth with $counter(z)>0$ through the while loop of the previous algorithm.
Algorithm 4 IsCompatible($t^{\prime},u,z$)
1:if $counter(z)=|L(u)|$ then
2: return YES
3:else
4: return NO
5:end if
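Continuing the sketch, UpdateCounterSubtree and the main loop with the IsCompatible test inlined could look as follows; `subtree_leaves`, the `twin` links between the leaves of $T$ and $t^{\prime}$, and `refine_at` (a stand-in for the $t^{\prime}$ update of Section 3.2) are assumptions of the sketch.

```python
def subtree_leaves(u):
    """Iterate over the leaves of the subtree of T rooted at u."""
    stack = [u]
    while stack:
        v = stack.pop()
        if v.children:
            stack.extend(v.children)
        else:
            yield v

def update_counter_subtree(u, dirty):
    """Add (the t'-twins of) all leaves in L(u); return the shallowest
    node of t' reached by the propagation."""
    z = None
    for v in subtree_leaves(u):
        p = update_counter_leaf(v.twin, dirty)
        if z is None or p.depth < z.depth:
            z = p
    return z

def main_loop(T_nodes, dirty):
    """O(n^2) version: try every bipartition Lambda(u) of T."""
    for u in T_nodes:                 # the non-root nodes of T
        clear_counter(dirty)
        z = update_counter_subtree(u, dirty)
        if z is not None and z.counter == u.size:   # IsCompatible
            refine_at(z, u)           # update t' as in Section 3.2
```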
We prove the correctness of the algorithm. We note that it considers all
bipartitions.
###### Lemma 1.
The main loop considers all bipartitions in $T$.
###### Proof.
By definition, every bipartition of $T$ equals $\Lambda(u)$ for some node $u$, and the main loop iterates over every node $u\in V(T)$. Hence all bipartitions of $T$ are considered. ∎
For clarity, for each vertex $w\in t^{\prime}$ we denote by
$L_{t^{\prime}}(w)$ its leaf set in $t^{\prime}$. Note each call to
UpdateCounterLeaf increases $counter$ for each vertex by at most 1, and we
call UpdateCounterLeaf exactly $|L(u)|$ times. This implies the next lemma,
which can be formally proven by induction.
###### Claim 1.
During the call of UpdateCounterSubtree($u$), for any vertex $v\in
t^{\prime}$, $counter(v)\leq|L(u)\cap L_{t^{\prime}}(v)|$. Moreover, if
$L_{t^{\prime}}(v)\subseteq L(u)$, $counter(v)=|L(u)\cap
L_{t^{\prime}}(v)|=|L_{t^{\prime}}(v)|$, i.e., the counter attains its
maximum.
The following is the key lemma.
###### Lemma 2.
Let $z^{\prime}$ be the least common ancestor of leaf vertices in $L(u)$ in
$t^{\prime}$. After UpdateCounterSubtree($u$) is called, leaf set $L(u)$ is
compatible with the current $t^{\prime}$ if and only if
$counter(z^{\prime})=|L(u)|$.
###### Proof.
Note that after the call to UpdateCounterSubtree($u$), we have called
UpdateCounterLeaf($v$) for every leaf $v$ in $L(u)$. From Claim 1, we always have $counter(z^{\prime})\leq|L(u)|$.
The algorithm maintains vertex $z$, which is the vertex closest to the root
that UpdateCounterLeaf has touched. We first consider the case that
$z=z^{\prime}$.
We show that if $counter(z^{\prime})=|L(u)|$, then for each child $w\in
children(z^{\prime})$, $L_{t^{\prime}}(w)\cap L(u)=\emptyset$ or
$L_{t^{\prime}}(w)\subseteq L(u)$. This implies that $L(u)$ is compatible with
$t^{\prime}$.
We note that $z^{\prime}$ is not a leaf. Consider each $w\in children(z^{\prime})$. We only need to consider $w$ such that $L_{t^{\prime}}(w)\cap L(u)\neq\emptyset$. The only way to have $counter(z^{\prime})=|L(u)|$ is that every such $w$ is “complete”, i.e., $counter(w)=|L_{t^{\prime}}(w)|$. Since $counter(w)\leq|L(u)\cap L_{t^{\prime}}(w)|$ (from Claim 1), we know that $L_{t^{\prime}}(w)\subseteq L(u)$.
On the other hand, if $L(u)$ is compatible with $t^{\prime}$, we show that $counter(z^{\prime})=|L(u)|$. We prove a stronger statement: if $L(u)$ is compatible with $t^{\prime}$, then for every vertex $v\in t^{\prime}$ in the subtree rooted at $z^{\prime}$,
$counter(v)=|L(u)\cap L_{t^{\prime}}(v)|,$
i.e., the upper bound in Claim 1 attains its maximum. To do so, we prove
inductively on the structure of $t^{\prime}$. Clearly, the claim is true when
$v$ is a leaf. Consider vertex $v\neq z^{\prime}$ in the subtree of
$t^{\prime}$ rooted at $z^{\prime}$. If $L_{t^{\prime}}(v)\cap
L(u)=\emptyset$, $counter(v)=0$; thus the property follows. Now, consider $v$
such that $L_{t^{\prime}}(v)\cap L(u)\neq\emptyset$. Let $w$ be a child of $z^{\prime}$ such that $v$ belongs to the subtree rooted at $w$. Since
$L_{t^{\prime}}(v)\cap L(u)\neq\emptyset$ and $L_{t^{\prime}}(v)\subseteq
L_{t^{\prime}}(w)$, we know that $L_{t^{\prime}}(w)\cap L(u)\neq\emptyset$.
Since $L(u)$ is compatible with $t^{\prime}$, we have that
$L_{t^{\prime}}(w)\subseteq L(u),$
implying that $L_{t^{\prime}}(v)\subseteq L(u)$; thus
$counter(v)=|L_{t^{\prime}}(v)|=|L_{t^{\prime}}(v)\cap L(u)|$, from Claim 1.
Finally, consider $z^{\prime}$. Note that since $z^{\prime}$ is the common
ancestor of leaves in $L(u)$, $L_{t^{\prime}}(z^{\prime})\supseteq L(u)$. For
each child $w$ of $z^{\prime}$, when $L_{t^{\prime}}(w)\cap L(u)\neq\emptyset$, $w$ is complete and propagates $|L_{t^{\prime}}(w)\cap L(u)|$ to $counter(z^{\prime})$. Summing over all children of $z^{\prime}$, we have that $counter(z^{\prime})=|L(u)|$.
This completes the proof of the lemma.
∎
###### Lemma 3.
The tree compatibility condition is correct.
By Lemma 2, after all leaves of $L(u)$ have been added through the $counter$ updates, the leaf set $L(u)$ is compatible with $t^{\prime}$ (i.e., fully resolvable at the tracked node $z$) exactly when $counter(z)=|L(u)|$, which is the condition tested by IsCompatible.
### 3.1 Running time analysis
We first analyze the running time of the algorithm except the calls to
UpdateCounterSubtree. We show that this part runs in linear time.
We start with UpdateCounterLeaf.
###### Lemma 4.
Function UpdateCounterLeaf($v,t^{\prime}$) runs in amortized $O(1)$ time.
###### Proof.
We use the potential method. Our data structure consists of variables
$counter$ for all vertices in $t^{\prime}$. Denote the data structure at time
$i$ by $D_{i}$. We say that a vertex $u\in t^{\prime}$ is incomplete if
$0<counter(u)<|L(u)|$. Let the potential function $\Phi(D_{i})$ be the number of incomplete vertices in $t^{\prime}$ at time $i$. Using the potential method, when
the data structure changes from $D_{i-1}$ to $D_{i}$, the amortized cost of an
operation is $\hat{c}=c+\Delta\Phi$, where $c$ is an actual cost, and
$\Delta\Phi=\Phi(D_{i})-\Phi(D_{i-1})$. Let $D_{0}$ be initial data structure
after ClearCounter is called; thus $\Phi(D_{0})=0$. Note that
$\Phi(D_{i})\geq\Phi(D_{0})=0$ for any $i$.
When invoking UpdateCounterLeaf at time $i$, let $k$ be number of times the
while loop in Lines 2 - 6 is executed. Clearly, the actual cost $c$ of the
operation is $k+1$. Let $\Delta\Phi=\Phi(D_{i})-\Phi(D_{i-1})$.
We claim that $\Phi(D_{i})=\Phi(D_{i-1})-k+1$, i.e., the number of incomplete
vertices decreases by $k-1$. Let $v^{\prime}$ be the actual leaf that
UpdateCounterLeaf is called on. Note that each time the loop body is executed, $counter(v)=size(v)=|L(v)|$. Except when $v=v^{\prime}$, at time $i-1$ we had $0<counter(v)<|L(v)|$, because $v$ is an internal vertex with at least 2 children; hence $v$ was incomplete at time $i-1$. Since
$counter(v)=|L(v)|$ at time $i$, $v$ is no longer incomplete; thus the number
of incomplete vertices decreases by $k-1$ as claimed.
Thus the amortized cost $\hat{c}=k+1+\Delta\Phi=k+1+(-k+1)=2=O(1)$.
∎
We now analyze the running time of UpdateCounterSubtree.
###### Lemma 5.
Function UpdateCounterSubtree($u$) runs in time $O(|L(u)|)$.
###### Proof.
Since UpdateCounterSubtree($u$) calls UpdateCounterLeaf once per leaf in $L(u)$ and each call takes amortized $O(1)$ time (Lemma 4), UpdateCounterSubtree($u$) runs in time $O(|L(u)|)=O(n)$. It is invoked $O(n)$ times, so the total running time over all calls is $O(n^{2})$. Combining the two parts, we get that the algorithm runs in $O(n^{2})$ time. ∎
### 3.2 Updating $t$
In this section, we show that when the bipartition defined by $L(u)$ is
compatible with $t^{\prime}$, we can update $t^{\prime}$ to include that
bipartition efficiently in time $deg(u)$. When $t^{\prime}$ is compatible with
a bipartition defined by $L(u)$ for $u\in T$, to update $t^{\prime}$ we have
to create a new child of $z$ that consists of only children of $z$ that
corresponds to the bipartition $L(u)$. Note that these children are exactly those with “full” counters that propagated their counter to $z$. Therefore, whenever a child propagates its counter to a vertex, we record it in a list kept at that vertex. When we need to update $t^{\prime}$ at $z$, we can take every vertex in this list, and
create a new child $z^{\prime}$ of $z$ with these vertices as $z^{\prime}$’s
children and update their counter accordingly. This can be done in time
$O(deg(u))$.
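In the running Python sketch this update may be written as follows, assuming each vertex of $t^{\prime}$ keeps a hypothetical list `propagators` of the children that propagated full counters to it during the current Update Step:

```python
def refine_at(z, u):
    """Insert the bipartition L(u) as a new child of z in t' (sketch)."""
    new_child = Node(parent=z)
    for w in list(z.propagators):     # the "full" children forming L(u)
        z.children.remove(w)
        w.parent = new_child
        new_child.children.append(w)
        new_child.size += w.size
        new_child.counter += w.counter
    z.children.append(new_child)
    z.propagators = [new_child]
    # depths of the moved subtrees are left stale in this sketch
```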
## 4 The faster algorithm: heavy child optimization
In this section, we describe a simple method to speed up the algorithm from
the previous section. Note that the only bottleneck to a nearly linear time algorithm is the counting procedure.
As a preprocessing, we assume that for each vertex $u\in T$, we know $|L(u)|$.
This can be computed in $O(n)$ time. The improved algorithm is described
below.
Algorithm 5 Solve($u$)
if $u$ is not leaf then
let $c_{1},c_{2}$ be children of $u$ $\triangleright$ There are exactly two
children, since $T$ is binary
if $|L(c_{1})|>|L(c_{2})|$ then
swap node $c_{1}$ and node $c_{2}$
end if
Solve($c_{1}$) $\triangleright$ Solve smaller subtree
ClearCounter()
Solve($c_{2}$) $\triangleright$ Solve larger subtree
end if
$z\leftarrow$ UpdateCounterSubtree($c_{1}$) $\triangleright$ Update the
counter for the smaller subtree
if IsCompatible($t,u,z$) then
update $t^{\prime}$
end if
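In the running Python sketch, Solve can be written as below; the leaf case, which the pseudocode leaves implicit, simply adds the single leaf without testing the trivial bipartition.

```python
def solve(u, dirty):
    """After this call, the counters in t' reflect exactly the leaves of
    L(u), and the bipartition Lambda(u) has been tried on t'."""
    if not u.children:                          # leaf: just add it
        update_counter_leaf(u.twin, dirty)
        return
    c1, c2 = sorted(u.children, key=lambda c: c.size)   # T is binary
    solve(c1, dirty)                            # solve smaller subtree first
    clear_counter(dirty)
    solve(c2, dirty)                            # counters now hold exactly L(c2)
    z = update_counter_subtree(c1, dirty)       # re-add only the light side
    if z is not None and z.counter == u.size:   # IsCompatible
        refine_at(z, u)                         # update t' (Section 3.2)
```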
To see that this algorithm is the correct implementation of the Main loop, we
essentially need to show that at the end of Solve($u$), variable $counter$ is
exactly equal to variable $counter$ right after the “Update Step” in Line 3 in
the Main loop while processing $u$, i.e., variable $counter$ is exactly equal
to the case when every leaf in $L(u)$ has been added while no other leaves
have been added. This can be shown by induction on the calls of Solve. We omit
the proof in this version of the manuscript.
We are left to analyze its running time.
###### Theorem 1.
The algorithm Solve runs in $O(n\log n)$ time.
###### Proof.
Note that the running time for all other operations in Solve is $O(1)$ per
invocation. Since Solve is called $O(n)$ times, the total running time of
these operations is $O(n)$. Also, the running time of ClearCounter() can be
amortized to the running time of UpdateCounterSubtree, where the counters are
updated.
Therefore, we are left to analyze the running time of UpdateCounterSubtree.
Note that UpdateCounterSubtree($u$) for $u\in T$ runs in time linear in the number of leaves, $|L(u)|$, from Lemma 5. Hence, we can charge the cost to
these leaves.
We analyze the running time by counting the number of times each leaf is
involved in this charging scheme. Note that we only call UpdateCounterSubtree
at $c_{1}$, which is the lighter subtree. Clearly, each leaf $u$ belongs to at most $O(\log n)$ light subtrees; hence, it is charged at most $O(\log n)$ times. Summing over all leaves, we have that the total running time for UpdateCounterSubtree is $O(n\log n)$. ∎
## 5 Acknowledgements
We would like to thank Pawel Gawrychowski and Oren Weimann for pointing out our erroneous claim and also giving us a reference to Sung’s result [4]. As mentioned earlier, Jittat Fakcharoenphol, who helped advise the author on this manuscript, would like to take full responsibility for this mistake. We would like to
thank Jittat Fakcharoenphol for suggesting this problem to work on and for his
help in editing this manuscript.
## References
* [1] Sarah Christensen, Erin K. Molloy, Pranjal Vachaspati, and Tandy Warnow. TRACTION: Fast Non-Parametric Improvement of Estimated Gene Trees. In Katharina T. Huber and Dan Gusfield, editors, 19th International Workshop on Algorithms in Bioinformatics (WABI 2019), volume 143 of Leibniz International Proceedings in Informatics (LIPIcs), pages 4:1–4:16, Dagstuhl, Germany, 2019. Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik.
* [2] Pawel Gawrychowski, Gad M. Landau, Wing-Kin Sung, and Oren Weimann. A faster construction of greedy consensus trees. In Ioannis Chatzigiannakis, Christos Kaklamanis, Dániel Marx, and Donald Sannella, editors, 45th International Colloquium on Automata, Languages, and Programming, ICALP 2018, July 9-13, 2018, Prague, Czech Republic, volume 107 of LIPIcs, pages 63:1–63:14. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2018.
* [3] Jesper Jansson, Chuanqi Shen, and Wing-Kin Sung. Improved algorithms for constructing consensus trees. J. ACM, 63(3), June 2016.
* [4] Wing-Kin Sung. Greedy consensus tree and maximum greedy consensus tree problems. In Gautam K. Das, Partha S. Mandal, Krishnendu Mukhopadhyaya, and Shin-ichi Nakano, editors, WALCOM: Algorithms and Computation, pages 305–316, Cham, 2019. Springer International Publishing.
JME-18-001
# Pileup mitigation at CMS in 13 TeV data
###### Abstract
With increasing instantaneous luminosity at the LHC come additional
reconstruction challenges. At high luminosity, many collisions occur
simultaneously within one proton-proton bunch crossing. The isolation of an
interesting collision from the additional “pileup” collisions is needed for
effective physics performance. In the CMS Collaboration, several techniques
capable of mitigating the impact of these pileup collisions have been
developed. Such methods include charged-hadron subtraction, pileup jet
identification, isospin-based neutral particle “$\delta\beta$” correction,
and, most recently, pileup per particle identification. This paper surveys the
performance of these techniques for jet and missing transverse momentum
reconstruction, as well as muon isolation. The analysis makes use of data corresponding to 35.9 fb$^{-1}$ collected with the CMS experiment in 2016 at a center-of-mass energy of 13 TeV. The performance of each algorithm is discussed for up to 70
simultaneous collisions per bunch crossing. Significant improvements are found
in the identification of pileup jets, the jet energy, mass, and angular
resolution, missing transverse momentum resolution, and muon isolation when
using pileup per particle identification.
## 0.1 Introduction
At the CERN LHC, instantaneous luminosities of up to $1.5\times 10^{34}\,\text{cm}^{-2}\,\text{s}^{-1}$ [1] are sufficiently large for multiple proton-
proton ($\Pp\Pp$) collisions to occur in the same time window in which proton
bunches collide. This leads to overlapping of particle interactions in the
detector. To study a specific $\Pp\Pp$ interaction, it is necessary to
separate this single interaction from the overlapping ones. The additional
collisions, known as pileup (PU), will result in additional particles
throughout the detector that confuse the desired measurements. With PU
mitigation techniques, we can minimize the impact of PU and better isolate the
single collision of interest. With increasing beam intensity over the past
several years, identification of interesting $\Pp\Pp$ collisions has become an
ever-growing challenge at the LHC. The number of additional collisions that
occur when two proton bunches collide was, on average, 23 in 2016 and
subsequently increased to 32 in 2017 and 2018. At this level of collision
density, the mitigation of the PU effects is necessary to enable physics
analyses at the LHC.
The CMS Collaboration has developed various widely used techniques for PU
mitigation. One technique, charged-hadron subtraction (CHS) [2], has been the
standard method to mitigate the impact of PU on the jet reconstruction for the
last few years. It works by excluding charged particles associated with
reconstructed vertices from PU collisions from the jet clustering procedure.
In this technique, to mitigate the impact of neutral PU particles in jets, an
event-by-event jet-area-based correction [3, 4, 5] is applied to the jet four-
momenta. Further, a PU jet identification (PU jet ID) technique [6] is used to
reject jets largely composed of particles from PU interactions.
These techniques have limitations when attempting to remove PU contributions
due to neutral particles. For the jet-area-based correction, the jet four-
momentum correction acts on a whole jet and is therefore not capable of
removing PU contributions from jet shape or jet substructure observables. To
overcome this limitation, a new technique for PU mitigation, pileup per
particle identification (PUPPI) [7], is introduced that operates at the
particle level. The PUPPI algorithm builds on the existing CHS algorithm. In
addition, it calculates a probability that each neutral particle originates
from PU and scales the energy of these particles based on their probability.
As a consequence, objects clustered from hadrons, such as jets, missing
transverse momentum (), and lepton isolation are expected to be less
susceptible to PU when PUPPI is utilized.
In this paper, the performance of PU mitigation techniques, including the
commissioning of PUPPI in $\Pp\Pp$ collision data, is summarized. After a
short description of the CMS detector in Section 0.2 and definitions of the
data set and Monte Carlo (MC) simulations used in these studies in Section
0.3, the CHS and PUPPI algorithms are described in Section 0.4. In Section
0.5.1 performance in terms of jet resolution at a high number of interactions
is presented. Section 0.5.2 summarizes the impact on noise rejection of PU
mitigation techniques. Section 0.5.3 presents the rejection of jets
originating from PU with PU jet ID and PUPPI. Jets reconstructed with a larger
cone size are often used to identify the decay of Lorentz-boosted heavy
particles such as $\PW$, $\PZ$, and Higgs bosons, and top quarks. Pileup significantly
degrades the reconstruction performance, and the gain from PU mitigation
techniques for such large-size jets is discussed in Section 0.6. The
measurement of $p_{\mathrm{T}}^{\text{miss}}$ also benefits from PU mitigation techniques, which is discussed
in Section 0.7. Mitigation of PU for muon isolation variables is presented in
Section 0.8.
## 0.2 The CMS detector
The central feature of the CMS apparatus is a superconducting solenoid of 6 m internal diameter, providing a magnetic field of 3.8 T. Within the solenoid
volume are a silicon pixel and strip tracker, a lead tungstate crystal
electromagnetic calorimeter (ECAL), and a brass and scintillator hadron
calorimeter (HCAL), each composed of a barrel and two endcap sections. The
ECAL covers the pseudorapidity range $\abs{\eta}<3$, while the HCAL is
extended with forward calorimeters up to $\abs{\eta}<5$. Muons are detected in
gas-ionization chambers embedded in the steel flux-return yoke outside the
solenoid. The silicon tracker measures charged particles within
$\abs{\eta}<2.5$. It consists of 1440 silicon pixel and 15 148 silicon strip
detector modules. For nonisolated particles with transverse momentum of
$1<\pt<10\GeV$ and $\abs{\eta}<1.4$, the track resolutions are typically 1.5% in $\pt$ and 25–90 (45–150) $\mu$m in the transverse (longitudinal) impact parameter [8]. A
more detailed description of the CMS detector, together with a definition of
the coordinate system used and the relevant kinematic variables, can be found
in Ref. [9].
The particle-flow (PF) event reconstruction [2] reconstructs and identifies
each individual particle in an event, with an optimized combination of all
subdetector information. In this process, the identification of the particle
type (photon, electron, muon, charged or neutral hadron) plays an important
role in the determination of the particle direction and energy. Photons (e.g., coming from $\pi^{0}$ decays or from electron bremsstrahlung) are identified as ECAL energy clusters not linked to the extrapolation of any charged particle trajectory to the ECAL. Electrons (e.g., coming from photon conversions in the
tracker material or from hadron semileptonic decays) are identified as a
primary charged-particle track and potentially many ECAL energy clusters
corresponding to this track extrapolation to the ECAL and to possible
bremsstrahlung photons emitted along the way through the tracker material.
Muons are identified as tracks in the central tracker consistent with either
tracks or several hits in the muon system, and associated with calorimeter
deposits compatible with the muon hypothesis. Charged hadrons are identified
as charged particle tracks neither identified as electrons, nor as muons.
Finally, neutral hadrons are identified as HCAL energy clusters not linked to
any charged-hadron trajectory, or as a combined ECAL and HCAL energy excess
with respect to the expected charged-hadron energy deposit.
The energy of photons is obtained from the ECAL measurement, corrected for
zero-suppression effects. The energy of electrons is determined from a
combination of the track momentum at the main interaction vertex, the
corresponding ECAL cluster energy, and the energy sum of all bremsstrahlung
photons attached to the track. The energy of muons is obtained from the
corresponding track momentum. The energy of charged hadrons is determined from
a combination of the track momentum and the corresponding ECAL and HCAL
energy, corrected for zero-suppression effects and for the response function
of the calorimeters to hadronic showers. Finally, the energy of neutral
hadrons is obtained from the corresponding corrected ECAL and HCAL energy.
The collision rate is 40MHz, and the events of interest are selected using a
two-tiered trigger system [10]. The first level (L1), composed of custom
hardware processors, uses information from the calorimeters and muon detectors
to select events at a rate of around 100kHz within a fixed time interval of
less than 4$\mu$s. The second level, known as the high-level trigger (HLT),
consists of a farm of processors running a version of the full event
reconstruction software optimized for fast processing, and reduces the event
rate to around 1kHz before data storage.
All detector subsystems have dedicated techniques to reject signals from
electronic noise or from particles that do not originate from the $\Pp\Pp$
collisions in the bunch crossing of interest, such as particles arriving from
$\Pp\Pp$ collisions that occur in adjacent bunch crossings before or after the
bunch crossing of interest (so called out-of-time PU). While these rejection
techniques are not the focus of this paper, some false signals can pass these
filters and affect the PF reconstruction. Particularly relevant is residual
noise from ECAL and HCAL electronics that may add to the energy of
reconstructed photons, electrons, and hadrons. Algorithms for the rejection of
this noise are further discussed in Section 0.5.2.
## 0.3 Data and simulated samples
In this paper, data corresponding to an integrated luminosity of 35.9 fb$^{-1}$ [1]
taken in 2016 are used. Figure 1 shows the PU conditions in the years
2016–2018. The number of $\Pp\Pp$ interactions is calculated from the
instantaneous luminosity based on an estimated inelastic $\Pp\Pp$ collision
cross section of 69.2 mb. This number is obtained using the PU counting method
described in the inelastic cross section measurements [11, 12]. In the
following sections of this paper, we distinguish between two definitions:
“mean number of interactions per crossing” (abbreviated “number of
interactions” and denoted $\mu$) and “number of vertices” (denoted
$N_{\text{vertices}}$). Vertices are reconstructed through track clustering
using a deterministic annealing algorithm [8]. The number of interactions is
used to estimate the amount of PU in simulation. The number of vertices can be
determined in both data and simulation. Further details on the relationship
between $\mu$ and $N_{\text{vertices}}$ are provided in Section 0.5.3. The
studies presented in this paper focus on the PU conditions in 2016, though the
trends towards higher PU scenarios with up to 70 simultaneous interactions are
explored as well. The trigger paths used for the data taking are mentioned in
each section.
Figure 1: Distribution of the mean number of inelastic interactions per
crossing (pileup) in data for $\Pp\Pp$ collisions in 2016 (dotted orange
line), 2017 (dotted dashed light blue line), 2018 (dashed navy blue line), and
integrated over 2016–2018 (solid grey line). A total inelastic $\Pp\Pp$
collision cross section of 69.2 mb is chosen. The mean number of inelastic
interactions per bunch crossing is provided in the legend for each year.
Samples of simulated events are used to evaluate the performance of the PU
mitigation techniques discussed in this paper. The simulation of standard
model events composed uniquely of jets produced through the strong
interaction, referred to as quantum chromodynamics (QCD) multijet events, is
performed with PYTHIA v8.212 [13] in standalone mode using the Lund string fragmentation model [14, 15] for jets. For studies of lepton isolation, dedicated QCD multijet samples that are enriched in events containing electrons or muons (e.g., from heavy-flavor meson decays) are used. The $\PW$ and $\PZ$ boson production in association with jets is simulated at leading order (LO) with the MadGraph5_aMC@NLO v2.2.2 [16] generator. Production of top quark-antiquark pair ($\ttbar$) events is simulated with POWHEG (v2) [17, 18, 19]. Single top quark production via the $s$- and $t$-channels, and tW processes, are simulated at next-to-leading order (NLO) with POWHEG interfaced with PYTHIA. For Lorentz-boosted $\PW$ boson studies [20], MC simulations of a high mass bulk graviton resonance [21, 22, 23] decaying to $\PW\PW$ boson pairs are generated at LO with MadGraph5_aMC@NLO. All parton shower simulations are performed using PYTHIA. For $\ttbar$+jets production, an additional sample is generated using POWHEG interfaced with HERWIG++ v2.7.1 [24, 25] with the UE-EE-5C underlying event tune [26] to assess systematic uncertainties related to the modeling of the parton showering and hadronization.
The LO and NLO NNPDF 3.0 [27] parton distribution functions (PDF) are used in
all generated samples matching the QCD order of the respective process. The
parameters for the underlying event are set according to the CUETP8M1 tune
[28, 29], except for the $\ttbar$ sample, which uses CUETP8M2 [30]. All
generated samples are passed through a detailed simulation of the CMS detector
using [31]. To simulate the effect of additional $\Pp\Pp$ collisions within
the same or adjacent bunch crossings, additional inelastic events are
generated using with the same underlying event tune as the main interaction
and superimposed on the hard-scattering events. The MC simulated events are
weighted to reproduce the distribution of the number of interactions observed
in data.
## 0.4 The CHS and PUPPI algorithms
A detailed description of the CHS algorithm and its performance is found in
Ref. [2]. In the following, we summarize the salient features and differences
with respect to the PUPPI algorithm. Both algorithms use the information of
vertices reconstructed from charged-particle tracks. The physics objects
considered for selecting the primary $\Pp\Pp$ interaction vertex are track
jets, clustered using the anti-algorithm [32, 33] with the tracks assigned to
the vertex as inputs, and the associated
$\vec{p}_{\mathrm{T},\text{tracks}}^{\text{miss}}$, which is the negative vector sum of the $\pt$ of those jets. The reconstructed vertex with the largest value of
summed physics-object $\pt^{2}$ is selected as the primary $\Pp\Pp$
interaction vertex or “leading vertex” (LV). Other reconstructed collision
vertices are referred to as PU vertices.
The CHS algorithm makes use of tracking information to identify particles
originating from PU after PF candidates have been reconstructed and before any
jet clustering. The procedure removes charged-particle candidates that are
associated with a reconstructed PU vertex. A charged particle is associated
with a PU vertex if it has been used in the fit to that PU vertex [8]. Charged
particles not associated with any PU vertex and all neutral particles are
kept.
The PUPPI [7] algorithm aims to use information related to local particle
distribution, event PU properties, and tracking information to mitigate the
effect of PU on observables of clustered hadrons, such as jets, , and lepton
isolation. The PUPPI algorithm operates at the particle candidate level,
before any clustering is performed. It calculates a weight in a range from 0
to 1 for each particle, exploiting information about the surrounding
particles, where a value of 1 is assigned to particles considered to originate
from the LV. These per-particle weights are used to rescale the particle four-
momenta to correct for PU at particle-level, and thus reduces the contribution
of PU to the observables of interest.
For charged particles, the PUPPI weight is assigned based on tracking
information. Charged particles used in the fit of the LV are assigned a weight
of 1, while those associated with a PU vertex are assigned a weight of 0. A
weight of 1 is assigned to charged particles not associated with any vertex
provided the distance of closest approach to the LV along the $z$ axis
($d_{z}$) is smaller than 0.3; a weight of 0 is applied in all other
scenarios. The threshold of 0.3corresponds to about 15 standard deviations of
the vertex reconstruction resolution in the $z$ direction at an average PU of
10 [8], and it works as an additional filter against undesirable objects, such
as accidentally reconstructed particles from detector noise.
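For illustration, the charged-particle weight assignment can be sketched in Python as follows; the field names are assumptions of the sketch, not CMS software interfaces.

```python
def charged_weight(p, dz_cut_cm=0.3):
    """Sketch of the PUPPI weight for a charged particle: 1 if used in the
    fit of the LV, 0 if used in the fit of a PU vertex, and otherwise
    decided by the d_z distance of closest approach to the LV."""
    if p.used_in_lv_fit:
        return 1.0
    if p.pu_vertex is not None:       # associated with a PU vertex
        return 0.0
    return 1.0 if abs(p.dz_cm) < dz_cut_cm else 0.0
```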
Neutral particles are assigned a weight based on a discriminating variable
$\alpha$. In general, the $\alpha$ variable is used to calculate a weight,
which encodes the probability that an individual particle originates from a PU
collision. As discussed in Ref. [7], various definitions of $\alpha$ are
possible. Within CMS, the $\alpha$ variable for a given particle $i$ is
defined as
$\alpha_{i}=\log\sum_{j\neq i,\,\Delta
R_{ij}<R_{0}}\left(\frac{p_{\mathrm{T},\,j}}{\Delta
R_{ij}}\right)^{2}\begin{cases}\text{for }\abs{\eta_{i}}<2.5,&j\text{ are
charged particles from LV,}\\\ \text{for }\abs{\eta_{i}}>2.5,&j\text{ are all
kinds of reconstructed particles,}\\\ \end{cases}$ (1)
where $i$ refers to the particle in question, $j$ are other particles,
$p_{\mathrm{T},\,j}$ is the transverse momentum of particle $j$ in , and
$\Delta R_{ij}=\sqrt{\smash[b]{(\Delta\eta_{ij})^{2}+(\Delta\phi_{ij})^{2}}}$
(where $\phi$ is the azimuthal angle in radians) is the distance between the
particles $i$ and $j$ in the $\eta$-$\phi$ plane. The summation runs over the
particles $j$ in the cone of particle $i$ with a radius of $R_{0}=0.4$. A
value of $\alpha_{i}=0$ is assigned when there are no particles in the cone.
The choice of the cone radius $R_{0}$ in the range of 0.2–0.6 has a weak
impact on the performance. The value of 0.4 was chosen as a compromise between
the performance when used in the definition of the isolation variable
(preferring larger cones) and jet performance (preferring smaller cones). In
$\abs{\eta}<2.5$, where tracking information is available, only charged
particles associated with the LV are included as particle $j$, whereas all
particles with $\abs{\eta}>2.5$ are included. The variable $\alpha$ contrasts
the collinear structure of QCD in parton showers with the soft diffuse
radiation coming from PU interactions. A particle from a shower is expected to
be close to other particles from the same shower, whereas PU particles can be
distributed more homogeneously. The $\alpha$ variable is designed such that a
particle gets a large value of $\alpha$ if it is close to either particles
from the LV or, in $\abs{\eta}>2.5$, close to highly energetic particles.
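As an illustration, a simplified per-particle evaluation of Eq. (1) might look as follows in Python; the particle fields (`pt`, `eta`, `phi`, `charge`, `from_lv`) are assumptions of this sketch rather than CMS software.

```python
import math

def alpha(i, particles, r0=0.4):
    """Simplified alpha of Eq. (1) for particle i."""
    s = 0.0
    for j in particles:
        if j is i:
            continue
        # inside the tracker acceptance, only charged LV particles enter
        if abs(i.eta) < 2.5 and not (j.charge != 0 and j.from_lv):
            continue
        dphi = math.pi - abs(math.pi - abs(i.phi - j.phi))  # wrap to [0, pi]
        dr = math.hypot(i.eta - j.eta, dphi)
        if 0.0 < dr < r0:
            s += (j.pt / dr) ** 2
    return math.log(s) if s > 0.0 else 0.0   # alpha_i = 0 for an empty cone
```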
To translate $\alpha_{i}$ of each particle into a probability, charged
particles assigned to PU vertices are used to generate the expected PU
distribution in an event. From this expected distribution a median and root-
mean-square (RMS) of the $\alpha$ values are computed. The $\alpha_{i}$ of
each neutral particle is compared with the computed median and RMS of the
$\alpha$ distribution of the charged PU particles using a signed $\chi^{2}$
approximation:
$\text{signed
}\chi^{2}_{i}=\frac{(\alpha_{i}-\overline{\alpha}_{\text{PU}})\abs{\alpha_{i}-\overline{\alpha}_{\text{PU}}}}{(\alpha_{\text{PU}}^{\text{RMS}})^{2}},$
(2)
where $\overline{\alpha}_{\text{PU}}$ is the median value of the $\alpha_{i}$
distribution for charged PU particles in the event and
$\text{RMS}_{\text{PU}}$ is the corresponding RMS. If signed $\chi^{2}_{i}$ is
large, the particle most likely originates from the LV. The sign of the
numerator is sensitive to the direction of the deviation of $\alpha_{i}$ from
$\overline{\alpha}_{\text{PU}}$. For the detector region where
$\abs{\eta}>2.5$ and tracking is not available, the values
$\overline{\alpha}_{\text{PU}}$ and $\text{RMS}_{\text{PU}}$ can not be
calculated directly. Therefore, $\overline{\alpha}_{\text{PU}}$ and
$\text{RMS}_{\text{PU}}$ are taken from the detector region where
$\abs{\eta}<2.5$ and extrapolated to the region where $\abs{\eta}>2.5$ by
multiplying with transfer factors (see Tab. 0.4) derived from MC simulation.
The transfer factors are necessary, since the granularity of the detector
varies with $\eta$ and leads to a variation of $\alpha$ with $\eta$,
particularly outside of the tracker coverage ($\abs{\eta}=2.5$) and ECAL
coverage ($\abs{\eta}=3.0$). Lastly, to compute the weight of the particles,
the signed $\chi^{2}_{i}$ for PU particles is assumed to be approximately
distributed according to a $\chi^{2}$ distribution for $\chi^{2}_{i}>0$. The
weight is given by $w_{i}=F_{\chi^{2},\,\text{NDF}=1}(\text{signed
}\chi^{2}_{i})$ where $F_{\chi^{2},\,\text{NDF}=1}$ is the cumulative
distribution function of the $\chi^{2}$ distribution with one degree of
freedom. Particles with weights $w_{i}$ smaller than 0.01, i.e., those with a probability greater than 99% to originate from PU, are rejected; this last rejection removes remaining high-energy noise deposits. In addition, neutral
particles that fulfill the following condition:
$w_{i}\,p_{\mathrm{T},\,i}<(A+B\,N_{\text{vertices}})\GeV$, where
$N_{\text{vertices}}$ is the number of vertices in the event, get a weight of
0. This selection reduces the residual dependence of jet energies on the
number of interactions. The parameters $A$ and $B$ are tunable parameters. To
perform the tuning of these parameters, jets clustered from PUPPI-weighted
particles in the regions $\abs{\eta}<2.5$ and $2.5<\abs{\eta}<3.0$ are
adjusted to have near-unity jet response, as a function of the number of
interactions, i.e., the reconstructed jet energy matches the true jet energy
regardless of the amount of PU. In the region $\abs{\eta}>3$, the parameters
are chosen such that resolution is optimized. Table 0.4 summarizes the
resulting parameters that have been obtained using QCD multijet simulation
with an average number of interactions of 23 and a significant amount of
events beyond 30 interactions reflecting the 2016 data (orange curve in Fig.
1). The parameters $A$ and $B$ are smaller in $\abs{\eta}<2.5$ (where the
majority of particles are reconstructed with the tracker) than in
$\abs{\eta}>2.5$ (where the measurement comes solely from the calorimeters
that have a coarser granularity and thus collect more PU energy per cell).
The tunable parameters of PUPPI optimized for application in 2016 data analysis. The transfer factors used to extrapolate the $\overline{\alpha}_{\text{PU}}$ and $\alpha_{\text{PU}}^{\text{RMS}}$ to $\abs{\eta}>2.5$ are denoted TF.

$\abs{\eta}$ of particle | $A$ [GeV] | $B$ [GeV] | TF $\overline{\alpha}_{\text{PU}}$ | TF $\alpha_{\text{PU}}^{\text{RMS}}$
$[0,2.5]$ | 0.2 | 0.015 | 1 | 1
$[2.5,3]$ | 2.0 | 0.13 | 0.9 | 1.2
$[3,5]$ | 2.0 | 0.13 | 0.75 | 0.95
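Putting Eq. (2) and the cuts above together, the neutral-particle weighting can be sketched as follows (taking the RMS about the median, which is an assumption of the sketch, and using SciPy's $\chi^{2}$ cumulative distribution function):

```python
import statistics
from scipy.stats import chi2

def neutral_weight(alpha_i, alphas_pu_charged, pt_i, n_vertices, A, B):
    """Sketch of the neutral-particle PUPPI weight of Eq. (2) plus cuts;
    alphas_pu_charged holds the alpha values of charged PU particles."""
    med = statistics.median(alphas_pu_charged)
    rms2 = statistics.fmean([(a - med) ** 2 for a in alphas_pu_charged])
    signed_chi2 = (alpha_i - med) * abs(alpha_i - med) / rms2
    w = chi2.cdf(signed_chi2, df=1)       # 0 for signed_chi2 <= 0
    if w < 0.01:                          # >99% PU probability: reject
        return 0.0
    if w * pt_i < A + B * n_vertices:     # residual-PU pT cut, in GeV
        return 0.0
    return w
```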
### 0.4.1 Data-to-simulation comparison for variables used within PUPPI
Figure 2: Data-to-simulation comparison for three different variables of the
PUPPI algorithm. The markers show a subset of the data taken in 2016 of the
jet sample and the PU sample, while the solid lines are QCD multijet
simulations or PU-only simulation. The lower panel of each plot shows the
ratio of data to simulation. Only statistical uncertainties are displayed. The
upper left plot shows the $\alpha$ distribution in the jet sample for charged
particles associated with the LV (red triangles), charged particles associated
with PU vertices (blue circles), and neutral particles (black crosses) for
$\abs{\eta}<2.5$. The upper right plot shows the $\alpha$ distribution in the
PU sample for charged (blue circles) and neutral (orange diamond) particles.
The lower left plot shows the signed
$\chi^{2}=(\alpha-\overline{\alpha}_{\text{PU}})\abs{\alpha-\overline{\alpha}_{\text{PU}}}/(\alpha_{\text{PU}}^{\text{RMS}})^{2}$
for neutral particles with $\abs{\eta}<2.5$ in the jet sample (black crosses)
and in the PU sample (orange diamonds). The lower right plot shows the PUPPI
weight distribution for neutral particles in the jet sample (black crosses)
and the PU sample (orange diamonds). The error bars correspond to the
statistical uncertainty.
The behavior of the variables used in PUPPI has been studied in two
complementary data samples. A subset of the data taken in 2016, corresponding
to an integrated luminosity of 0.36and selected using trigger paths based on
the scalar sum () of the of jets with $\pt>30\GeV$ and $\abs{\eta}<3$,
requiring an offline selection of $\HT>1500\GeV$, is referred to as the jet
sample. The details of jet reconstruction and performance are discussed in
Section 0.5. Here, we present comparisons of data and QCD multijet simulation
based on all PF candidates in the event, rather than clustered jets. As a
reference, a data sample enriched in events containing mainly particles from
PU collisions is compared with PU-only simulation and is referred to as the PU
sample. The PU data sample is recorded with a zero-bias trigger that randomly
selects a fraction of the collision events, corresponding to an integrated
luminosity of 3.18 fb$^{-1}$. The distribution of the number of PU interactions in both
subsets of data is comparable to the one in the whole data sample collected in
2016.
Figure 2 shows the distribution of the three main variables used in PUPPI for
data and simulation. The upper left plot presents the distribution of $\alpha$
for charged particles from the LV and the PU vertices and for neutral
particles with $\abs{\eta}<2.5$ in the jet sample. The separation power of the
variable $\alpha$ between particles from the LV and PU vertices for charged
particles can be deduced from this figure. The majority of the charged
particles from PU vertices have an $\alpha$ value below 8, whereas only a
small fraction of particles have higher values. Charged particles from the LV
exhibit a double-peak structure. The first peak at large $\alpha$ is
characteristic of particles within jets originating from the LV. The second
peak at lower $\alpha$ consists of charged particles that are isolated from
other particles originating from the LV. With the exception of particles from
lepton decays, which are directly addressed later, isolated particles have
limited physics impact and consequently a low $\alpha$ value has a negligible
impact on the algorithm performance on physics objects.
The $\alpha$ distribution of neutral PU particles can be compared to charged
PU particles in the PU sample shown in Figure 2 (upper right). It becomes
clear that the median and RMS of the $\alpha$ distribution are similar for
charged and neutral particles originating from PU. This similarity confirms
one of the primary assumptions of PUPPI, namely that
$\overline{\alpha}_{\mathrm{PU}}$ and $\text{RMS}_{\mathrm{PU}}$, which are
computed for charged particles, can be used to compute weights for neutral
particles with a discrimination power between PU and LV particles. Although
the qualitative features of the $\alpha$ distribution in data are reproduced
by the simulation, a disagreement between data and simulation is observed,
which is most pronounced for neutral particles from PU with large values of
$\alpha$.
The $\chi^{2}$ distribution shown in Fig. 2 (lower left) shows two peaks for
both the jet sample and the PU sample. The first peak results from particles
without any neighbor and an $\alpha$ value of zero. The second peak at zero
represents all PU particles. The jet sample (black curve) shows a third peak
for all LV particles. Additionally, the shape of the resulting PUPPI weight
distribution, shown in Fig. 2 (lower right) is well modeled by simulation for
particles with high weights (, those likely originating from the LV). A
considerable mismodeling is observed at low values of PUPPI weight, where low-$\pt$ particles from PU interactions dominate. This mismodeling does not propagate
to further observables, because these particles receive small weights, and as
a consequence have a negligible contribution. Although both samples have a
similar distribution of number of interactions, the weight distribution of the
jet sample has more events at higher values of the weight compared to the PU
sample because of the selection of a high-$\pt$ jet.
## 0.5 Jet reconstruction
Jets are clustered from PF candidates using the anti-algorithm [32] with the
FastJet software package [33]. Distance parameters of 0.4 and 0.8 are used for
the clustering. While jets with $R=0.4$ (AK4 jets) are mainly used in CMS for
reconstruction of showers from light-flavor quarks and gluons, jets with
$R=0.8$ (AK8 jets) are mainly used for reconstruction of Lorentz-boosted , ,
and Higgs bosons, and for top quark identification, as discussed in detail in
Section 0.6. Before jet clustering, CHS- or PUPPI-based PU mitigation is
applied to the PF candidates. Reconstructed jets with the respective PU
mitigation technique applied are referred to as CHS and PUPPI jets,
respectively.
Jet momentum is determined as the vectorial sum of all particle momenta in the
jet, and from simulation is, on average, within 5 to 20% of the true momentum
over the whole spectrum and detector acceptance. For CHS jets, an event-by-
event jet-area-based correction [3, 4, 5] is applied to the jet four-momenta
to remove the remaining energy due to neutral and charged particles
originating from PU vertices, while no such correction is necessary for PUPPI
jets. Although CHS removes charged particles associated with a PU vertex,
charged particles not associated with any vertex are kept and can add charged
PU energy to the jet. The remaining energy from PU particles subtracted from
the jet energy is assumed proportional to the jet area and parametrized as a
function of the median energy density in the event, the jet area, $\eta$, and $\pt$. In addition, jet energy corrections are derived from simulation for CHS and
PUPPI to bring the measured response of jets to that of generated particle-
level jets on average. In situ measurements of the momentum balance in dijet,
photon+jets, +jets, and multijet events are used to correct any residual
differences in jet energy scale between data and simulation [5].
In the following, only jets with $\pt>15\GeV$ are used, which is the lowest jet $\pt$ used in physics analyses in CMS. The presentation of jet performance
focuses on $\abs{\eta}<2.5$, covered by the tracking detector, ECAL, and HCAL,
and the forward region, $\abs{\eta}>3$, where only the hadron forward
calorimeter is present. The intermediate region, $2.5<\abs{\eta}<3.0$, which
is covered by ECAL and HCAL resembles the forward region in sensitivity to PU
and is not discussed in this paper. For Sec. 0.5.1 the focus is set on
$\abs{\eta}<0.5$, as the region $0.5<\abs{\eta}<2.5$ provides no further
information and shows a similar performance.
### 0.5.1 Jet energy and angular resolutions
The performance of the jet four-momentum reconstruction is evaluated in QCD
multijet simulation by comparing the kinematics of jets clustered from
reconstructed PF candidates (reconstruction-level jets) to jets clustered from
stable (lifetime $c\tau>1\cm$) particles excluding neutrinos before any
detector simulation (particle-level jets). Particle-level jets are clustered
without simulation of PU collisions whereas the reconstruction-level jets
include simulation of PU collisions. Jet energy corrections are applied to the
reconstruction-level jets such that the ratio of reconstruction and particle-
level jet (the response) is on average 1. The jet energy resolution (JER) is
defined as the spread of the response distribution, which is Gaussian to a
good approximation. The resolution is defined as the $\sigma$ of a Gaussian
fit to the distribution in the range $[m-2\sigma,m+2\sigma]$, where $m$ and
$\sigma$ are the mean and width of the Gaussian fit, determined with an
iterative procedure. The cutoff at $\pm 2\sigma$ is set so that the evaluation
is not affected by outliers in the tails of the distribution. Figure 3 shows the JER as a function of jet $\pt$ for jets reconstructed from all of the PF
candidates (PF jets), CHS jets, and PUPPI jets, simulated with on average
20–30 PU interactions. For AK4 jets, the performance of the CHS and PUPPI
algorithms is similar. Jet resolution for PUPPI is slightly degraded below 30
PU, since PUPPI has been optimized for overall performance, including
resolution and stability, beyond 30 PU interactions. This behavior at low PU
can in principle be overcome through a special treatment in the limit of small
amount of PU, where the number of particles to compute
$\overline{\alpha}_{\text{PU}}$ and $\text{RMS}_{\text{PU}}$ is limited. The
PF jets in the detector region of $\abs{\eta}<0.5$ exhibit a worse
performance, particularly at low , since these jets are more affected by PU.
In the region of $3.2<\abs{\eta}<4.7$, PF jets show the same performance as
CHS jets, because no tracking is available. For AK8 jets, PUPPI provides
better performance than the CHS and PF algorithms, since neutral particles
from PU interactions contribute significantly to such jets.
Figure 3: Jet energy resolution as a function of the particle-level jet $\pt$ for PF
jets (orange circles), PF jets with CHS applied (red triangles), and PF jets
with PUPPI applied (blue squares) in QCD multijet simulation. The number of
interactions is required to be between 20 and 30. The resolution is shown for
AK4 jets with $\abs{\eta}<0.5$ (upper left) and $3.2<\abs{\eta}<4.7$ (upper
right), as well as for AK8 jets with $\abs{\eta}<0.5$ (lower). The error bars
correspond to the statistical uncertainty in the simulation.
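The iterative resolution extraction described above can be approximated in a few lines; this sketch replaces the Gaussian fit by a truncated mean and standard deviation over the $[m-2\sigma,m+2\sigma]$ window, which is an assumption, not the exact fitting procedure.

```python
import numpy as np

def iterative_width(response, n_iter=10):
    """Approximate the jet response resolution by iterating mean/width
    estimates over entries within m +/- 2*sigma of the current values."""
    m, s = float(np.mean(response)), float(np.std(response))
    for _ in range(n_iter):
        sel = response[(response > m - 2 * s) & (response < m + 2 * s)]
        m, s = float(np.mean(sel)), float(np.std(sel))
    return s
```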
Figure 4 demonstrates how the JER scales with the number of interactions. At
more than 30 interactions, JER for AK4 jets with $\abs{\eta}<0.5$ and
$\pt=30\GeV$ is better with the PUPPI than with the CHS PU mitigation.
However, JER for AK4 jets with $3.2<\abs{\eta}<4.7$ and $\pt=30\GeV$ is better
with the CHS than with the PUPPI PU mitigation, which is a result of the PUPPI algorithm being tuned to yield the best $p_{\mathrm{T}}^{\text{miss}}$ resolution rather than the best jet energy resolution in the $\abs{\eta}>3$ region. This is achieved with a low PU particle rate rather than a high LV particle efficiency. At $\pt>100\GeV$, PUPPI jets have a resolution that is
slightly worse than that of CHS jets with $\abs{\eta}<0.5$, while in
$3.2<\abs{\eta}<4.7$ PUPPI and CHS performances are comparable. For AK8 jets
at low , PUPPI yields a better JER than CHS; this improvement is present
through the high-PU scenarios, , at 50 or 60 interactions. The jet energy
resolution becomes worse with PUPPI than with CHS for jets with $\pt>200\GeV$.
The behavior of PUPPI at high $\pt$ is to a large extent limited by the quality of track-vertex association using $d_{z}$ for high-$\pt$ charged hadrons. The effect is not visible in CHS because the $d_{z}$ requirement for charged particles that
are not associated to any vertex is not used, but instead CHS keeps all
charged particles not associated with any vertex.
Figure 4: Jet energy resolution as a function of the number of interactions
for jets with CHS (solid red line) and with PUPPI (dashed blue line)
algorithms applied in QCD multijet simulation for different jet $\pt$ values
(different markers). The resolution is shown for AK4 jets with
$\abs{\eta}<0.5$ (upper left) and $3.2<\abs{\eta}<4.7$ (upper right), as well
as for AK8 jets with $\abs{\eta}<0.5$ (lower). The error bars correspond to
the statistical uncertainty in the simulation.
Figure 5 shows the jet $\eta$ angular resolution simulated with 20–30
interactions. The same qualitative conclusions also hold for the resolution in
$\phi$, since $\phi$ and $\eta$ segmentation of the detector are similar. The
resolution is evaluated as the width of a Gaussian function fit to the
distribution of the $\eta$-difference between the generator- and
reconstruction-level jets. The same conclusions as for JER also hold for jet
angular resolution. The CHS and PUPPI algorithms perform similarly for AK4
jets with $\abs{\eta}<0.5$. However, significant improvements from PUPPI are
observed for AK8 jets for $\abs{\eta}<0.5$. Angular resolution of large-size
jets is particularly sensitive to PU as the clustered energy from PU particles
increases with the jet size. Hence, the improvements are larger when PUPPI
jets are considered.
Figure 5: Jet $\eta$ resolution as a function of the particle-level jet $\pt$ for PF
jets (orange circles), PF jets with CHS applied (red triangles), and PF jets
with PUPPI applied (blue squares) in QCD multijet simulation. The number of
interactions is required to be between 20 and 30. The resolution is shown for
AK4 jets with $\abs{\eta}<0.5$ (upper left) and $3.2<\abs{\eta}<4.7$ (upper
right) as well as for AK8 jets with $\abs{\eta}<0.5$ (lower). The error bars
correspond to the statistical uncertainty in the simulation.
### 0.5.2 Noise jet rejection
The identification and rejection of jets originating from noise and
reconstruction failures are critical to all CMS analyses where a jet or $p_{\mathrm{T}}^{\text{miss}}$ is
used as part of the selection. To further reject noise after detector signal
processing and jet clustering, a set of criteria on the PF candidates within a
jet are applied [6]. The criteria listed in Table 0.5.2 are based on jet
constituent energy fractions and multiplicities. They reject residual noise
from the HCAL and ECAL, retaining 98–99% of genuine jets, i.e., jets initiated by
genuine particles rather than detector noise. Although PU mitigation
algorithms are not designed to have an effect on detector noise, they could,
in principle, affect the rejection capability of the noise jet ID.
Figure 6 (upper left/right and lower left) shows the distribution of the
charged and neutral constituent multiplicities comparing genuine jet enriched
(dijet) and noise jet enriched (minimum bias) data, demonstrating the
separation power. For the dijet selection, data are selected with an HLT
requirement of at least one jet having a $\pt>400\GeV$, two offline
reconstructed jets with $\pt$ greater than 60 and 30\GeV, respectively, and an
opening in azimuthal angle greater than 2.7. For the minimum bias selection, jets with
$\pt>30\GeV$ passing the minimum bias trigger path are used. The noise jet ID
requires at least one charged constituent for jets with $\abs{\eta}<2.4$ and
at least two constituents (neutral or charged) for $\abs{\eta}<2.7$. The
charged constituent multiplicity is smaller for PUPPI than for CHS jets
because PUPPI rejects additional charged particles by applying a $d_{z}$
requirement on tracks not associated with any vertex. The PUPPI weighted
neutral constituent multiplicity, defined as the sum of PUPPI weights of all
neutral particles in the jet, is also smaller than the neutral constituent
multiplicity for CHS. In $3<\abs{\eta}<5$, the PUPPI neutral constituent
multiplicity is significantly lower than for CHS. Thus, the ability to
separate noise is reduced. With CHS, noise jets are rejected by requiring a
minimum of 10 neutral particles. With PUPPI, a minimum of 3 is required for
the PUPPI scaled neutral multiplicity. Figure 6 (lower right) demonstrates the
PU dependence of the neutral constituent multiplicity. While for CHS, the
average multiplicity changes by 30–40% going from 20–30 to 50–60 reconstructed
vertices, the PUPPI scaled multiplicities do not change significantly, making
noise jet rejection independent of PU.
The efficiency of the jet ID criteria for genuine jets is measured in data
using a tag-and-probe procedure in dijet events [6]. The background rejection
is estimated using a noise-enriched minimum bias event selection. The fraction
of rejected noise jets after applying jet ID criteria that yield a 99%
efficiency for genuine jets is summarized in Table 0.5.2 for different regions
in $\eta$. The number of noise jets reconstructed with the CHS and PUPPI
algorithms is not the same, because the PUPPI reconstruction criteria reject
particles that would otherwise give rise to a fraction of noise jets before
jet ID criteria are applied. The absolute number of noise jets remaining after
PU mitigation and jet ID together differs by less than 20% between CHS and
PUPPI jets.
Jet ID criteria for CHS and PUPPI jets yielding a genuine jet efficiency of
99% in different regions of $\abs{\eta}$.
| Region of $\abs{\eta}$ | Variable | Requirement (CHS) | Requirement (PUPPI) |
|---|---|---|---|
| $\abs{\eta}<2.4$ | Charged hadron energy fraction | $>$0 | $>$0 |
| | Charged multiplicity | $>$0 | $>$0 |
| $\abs{\eta}<2.7$ | Neutral hadron energy fraction | $<$0.90 | $<$0.90 |
| | Neutral EM energy fraction | $<$0.90 | $<$0.90 |
| | Number of constituents | $>$1 | $>$1 |
| $2.7<\abs{\eta}<3$ | Neutral EM energy fraction | $>$0.02 and $<$0.99 | |
| | Number of neutral particles | $>$2 | |
| | Neutral hadron energy fraction | | $<$0.99 |
| $\abs{\eta}>3$ | Neutral EM energy fraction | $<$0.90 | $<$0.90 |
| | Neutral hadron energy fraction | $>$0.02 | $>$0.02 |
| | Number of neutral particles | $>$10 | $>$3 |
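For illustration, the CHS column of the table above can be expressed as a small selection function; a sketch only, with hypothetical attribute names rather than the actual CMS software interface:

```python
def passes_jet_id_chs(jet):
    """Sketch of the CHS noise jet ID from the table above.
    `jet` is assumed to expose energy fractions and multiplicities
    (hypothetical attribute names)."""
    a = abs(jet.eta)
    if a < 2.4 and not (jet.charged_had_frac > 0 and jet.n_charged > 0):
        return False
    if a < 2.7:
        return (jet.neutral_had_frac < 0.90
                and jet.neutral_em_frac < 0.90
                and jet.n_constituents > 1)
    if a < 3.0:
        return 0.02 < jet.neutral_em_frac < 0.99 and jet.n_neutral > 2
    return (jet.neutral_em_frac < 0.90
            and jet.neutral_had_frac > 0.02
            and jet.n_neutral > 10)
```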
Fraction of noise jets rejected when applying jet ID criteria to PUPPI and CHS
jets yielding a genuine jet efficiency of 99% in different regions of
$\abs{\eta}$.
| Region of $\abs{\eta}$ | Fraction of noise jets rejected |
|---|---|
| $\abs{\eta}<2.7$ | 99.9% |
| $2.7<\abs{\eta}<3.0$ | 97.6% |
| $3<\abs{\eta}<5$ | 15% (PUPPI), 35% (CHS) |
Figure 6: The charged- and neutral-particle multiplicities for CHS and PUPPI
in a dijet (genuine jets) and minimum bias (noise jets) selection in data. The
multiplicities are shown for AK4 jets using CHS reconstructed genuine jets (red
dashed), CHS reconstructed noise jets (black long dashed), PUPPI reconstructed
genuine jets (blue circles), and PUPPI reconstructed noise jets (orange
triangles). The upper plots show the charged (left) and neutral particle
multiplicities (right) for jets with $\abs{\eta}<0.5$. The lower left plot
shows the neutral particle multiplicity for jets with $3<\abs{\eta}<5$. The
lower right plot shows the neutral particle multiplicity of AK4 jets with
$\abs{\eta}<0.5$ in a dijet selection in data using CHS and PUPPI for 15–20
and 35–50 interactions. The error bars correspond to the statistical
uncertainty.
### 0.5.3 Pileup jet rejection
Particles resulting from PU collisions will introduce additional jets that do
not originate from the LV. These jets are referred to as PU jets. PU jets can
be classified in two categories: QCD-like PU jets, originating from PU
particles from a single PU vertex, and stochastic PU jets, originating from PU
particles from multiple different PU vertices. Both PU mitigation techniques,
PUPPI and CHS, remove the charged tracks associated with PU vertices, reducing
the $\pt$ of QCD-like PU jets to roughly 1/3 of their original value, such that
they can be largely reduced by selections on the jet $\pt$. In CMS, a multivariate technique
to reject the remaining PU jets (dominated by stochastic PU jets) has been
developed and applied for CHS jets [6], whereas PUPPI intrinsically suppresses
PU jets better by rejecting more charged and neutral particles from PU
vertices before jet clustering. Both techniques suppress both QCD-like and
stochastic PU jets, though the observables used for neutral particle rejection
are primarily sensitive to stochastic PU jets.
The performance of the PU jet rejection for both PUPPI and CHS is evaluated in
Z+jets events in data and simulation. The jet recoiling against the $\PZ$ boson
provides a pure sample of LV jets, whereas additional jets are often from PU
collisions. The Z+jets events are selected by requiring two oppositely charged
muons with $\pt>20\GeV$ and $\abs{\eta}<2.4$ whose combined invariant mass is
between 70 and 110\GeV. Jets that overlap with leptons within $\Delta
R(\text{lepton},\,\text{jet})<0.4$ from the $\PZ$ boson decay are removed from
the collections of particle- and reconstruction-level jets.
In simulation, jets are categorized into four groups based on the separation
from particle-level jets and their constituents. If a reconstruction-level jet
has a particle-level jet within $\Delta R<0.4$, it is regarded as originating
from the LV. Jet flavors are defined by associating generated particles to
reconstructed jets. This is done by clustering a new jet with the generated
and reconstructed particles together where, in this case, the four-momenta of
generated particles are scaled by a very small number. Newly reconstructed
jets in this way are almost identical to the original jets because the added
particles, with extremely small energy, do not affect the jet reconstruction.
If a jet originating from the LV contains generated quarks or gluons, it is
regarded as a jet of quark or gluon origin, depending on the label of the
highest-$\pt$ particle-level particle. If a jet not originating from the LV
does not contain any generated particles from the hard scattering, it is
regarded as a jet originating from a PU vertex, i.e., a PU jet. The remaining
jets, which do not have nearby particle-level jets but contain particle-level
particles (from the LV), are labeled as unassigned.
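The flavor association by reclustering with momentum-scaled generated particles can be sketched as follows; the tuple layout and scale factor are illustrative assumptions, not the CMS implementation:

```python
def add_ghost_particles(reco_particles, gen_particles, scale=1e-18):
    """Sketch of the association described above: generated particles are
    added to the clustering input with four-momenta scaled to a negligible
    value, so they follow the clustering without changing the jets.
    Particles are (px, py, pz, E, label) tuples (illustrative layout)."""
    ghosts = [(px * scale, py * scale, pz * scale, e * scale, label)
              for (px, py, pz, e, label) in gen_particles]
    return reco_particles + ghosts  # feed this list to the jet clustering
```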
List of variables used in the PU jet ID for CHS jets.
| Input variable | Definition |
|---|---|
| $\text{LV}\sum\pt$ fraction | Fraction of $\pt$ of charged particles associated with the LV, defined as $\sum_{i\in\text{LV}}p_{\mathrm{T},\,i}/\sum_{i}p_{\mathrm{T},\,i}$, where $i$ iterates over all charged PF particles in the jet |
| $N_{\text{vertices}}$ | Number of vertices in the event |
| $\langle\Delta R^{2}\rangle$ | Squared distance from the jet axis, $\pt^{2}$-weighted over the jet constituents: $\sum_{i}\Delta R^{2}p_{\mathrm{T},\,i}^{2}/\sum_{i}p_{\mathrm{T},\,i}^{2}$ |
| $f_{\text{ringX}}$, $X=1,2,3,\text{ and }4$ | Fraction of $\pt$ of the constituents ($\sum p_{\mathrm{T},\,i}/\pt^{\text{jet}}$) in the region $R_{i}<\Delta R<R_{i+1}$ around the jet axis, where $R_{i}=0,0.1,0.2,$ and 0.3 for $X=1,2,3$, and 4 |
| $\pt^{\text{lead}}/\pt^{\text{jet}}$ | $\pt$ fraction carried by the leading PF candidate |
| $\pt^{\text{l. ch.}}/\pt^{\text{jet}}$ | $\pt$ fraction carried by the leading charged PF candidate |
| $\abs{\vec{m}}$ | Pull magnitude, defined as $\abs{(\sum_{i}\pt^{i}\abs{r_{i}}\vec{r}_{i})}/\pt^{\text{jet}}$, where $\vec{r_{i}}$ is the direction of particle $i$ from the direction of the jet |
| $N_{\text{total}}$ | Number of PF candidates |
| $N_{\text{charged}}$ | Number of charged PF candidates |
| $\sigma_{1}$ | Major axis of the jet ellipsoid in the $\eta$-$\phi$ space |
| $\sigma_{2}$ | Minor axis of the jet ellipsoid in the $\eta$-$\phi$ space |
| $\pt^{\text{D}}$ | Jet fragmentation distribution, defined as $\sqrt{\sum_{i}p^{2}_{\mathrm{T},\,i}}/\sum_{i}p_{\mathrm{T},\,i}$ |
This identification of PU jets is based on two observations: (i) the majority
of tracks associated with PU jets do not come from the LV, and (ii) PU jets
contain particles originating from multiple PU collisions and therefore tend
to be more broad and diffuse than jets originating from one single quark or
gluon. Table 0.5.3 summarizes the input variables for a multivariate analysis.
Track-based variables include the $\text{LV}\sum\pt$ fraction and
$N_{\text{vertices}}$, where the $\text{LV}\sum\pt$ fraction is the summed $\pt$ of
all charged PF candidates in the jet originating from the LV, divided by the
summed $\pt$ of all charged candidates in the jet. The $\text{LV}\sum\pt$ fraction
variable provides the strongest discrimination of any variable included in the
discriminator, but is available only within the tracking volume. The inclusion
of the $N_{\text{vertices}}$ variable allows the multivariate analysis to
determine the optimal discriminating variables as the PU is increased. Jet
shape variables included in the multivariate discriminant are as follows:
$\langle\Delta R^{2}\rangle$, $f_{\text{ring0}}$, $f_{\text{ring1}}$,
$f_{\text{ring2}}$, $f_{\text{ring3}}$, $\pt^{\text{lead}}/\pt^{\text{jet}}$,
$\abs{\vec{m}}$, $N_{\text{total}}$, $N_{\text{charged}}$, major axis
($\sigma_{1}$), minor axis ($\sigma_{2}$), and $\pt^{\mathrm{D}}$, with their
definitions given in Table 0.5.3. Pileup jets tend to have larger values of
$\langle\Delta R^{2}\rangle$ than genuine jets. For the set of
$f_{\text{ringX}}$ variables, PU jets tend to have large values for those with
large $R$, reflecting the characteristic of PU jets having a large fraction of
their energy deposited in the outer annuli. Most of the other variables are
included to distinguish quark jets from gluon jets, and thus enhance the
separation from PU jets. In particular, the variable $\pt^{\mathrm{D}}$ tends
to be larger for quark jets than for gluon jets, and smaller for PU jets than
for both quark and gluon jets. The $N_{\text{total}}$, $\pt^{\mathrm{D}}$,
and $\sigma_{2}$ variables have previously been used for a dedicated quark-
and gluon-separation technique; more details on their definition and
performance are found in Ref. [6].
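Two of these inputs are simple to compute directly from the jet constituents; a minimal sketch, assuming constituents are represented as (pt, eta, phi) tuples:

```python
import math

def jet_shape_variables(constituents, jet_eta, jet_phi):
    """Sketch of <DeltaR^2> and pT^D from the table above, for
    constituents given as (pt, eta, phi) tuples."""
    def dr2(eta, phi):
        dphi = math.atan2(math.sin(phi - jet_phi), math.cos(phi - jet_phi))
        return (eta - jet_eta) ** 2 + dphi ** 2
    sum_pt = sum(pt for pt, _, _ in constituents)
    sum_pt2 = sum(pt * pt for pt, _, _ in constituents)
    # pT^2-weighted average squared distance from the jet axis
    mean_dr2 = sum(dr2(eta, phi) * pt * pt
                   for pt, eta, phi in constituents) / sum_pt2
    # jet fragmentation distribution
    pt_d = math.sqrt(sum_pt2) / sum_pt
    return mean_dr2, pt_d
```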
Figure 7 shows the distribution of the $\text{LV}\sum\pt$ fraction and the
charged-particle multiplicity of jets with $30<\pt<50\GeV$ and $\abs{\eta}<1$
in data and simulation. The distributions of the variables in selected data
events agree with simulation within the uncertainties, with a clear separation
in the discriminating variables between LV and PU jets.
Figure 7: Data-to-simulation comparison for two input variables to the PU jet
ID calculation for CHS jets with $30<\pt<50\GeV$: the $\text{LV}\sum\pt$
fraction (left) and charged-particle multiplicity (right). Black markers
represent the data while the colored areas are Z+jets simulation events. The
simulation sample is split into jets originating from quarks (red), gluons
(purple), PU (green), and jets that could not be assigned (gray). The
distributions are normalized to unity. The shape of a sample showered with
HERWIG++ is superimposed. The lower panels show the data-to-simulation ratio along with
a gray band corresponding to the one-sided uncertainty, which is the
difference between simulated Z+jets events showered with the PYTHIA parton
shower and those showered with the HERWIG++ parton shower. Also included in
the ratio panel is the PU rate uncertainty (dark gray).
The set of 15 variables listed in Table 0.5.3 is used to train a boosted
decision tree (BDT) to distinguish between jets from the LV and
PU jets. For the BDT training, Z+jets simulation events are used. To perform
the training, reconstruction-level jets that are within a distance of $\Delta
R<0.4$ from any particle-level jet are regarded as jets from the LV, and the
remaining jets are identified as PU jets. A jet is considered to satisfy the
PU jet ID if it passes certain thresholds on the output of the BDT
discriminator; the thresholds depend on the $\eta$ and $\pt$ of the jet.
Three working points are considered in the following resulting in different
efficiencies and misidentification rates. These working points are defined by
their average efficiency on quark-initiated jets. The definitions are:
* •
tight working point: 80% efficient for quark jets,
* •
medium working point: 90% efficient for quark jets,
* •
loose working point: 99% efficient for quark jets in $\abs{\eta}<2.5$, 95%
efficient for quark jets in $\abs{\eta}>2.5$.
Since 92% of the PU jets tend to occur at $\pt<50\GeV$, the contamination from
PU jets with $\pt>50\GeV$ is small. Thus, the PU jet ID is designed to act
only on jets with $\pt<50\GeV$.
The fraction of PU jets in simulation passing this kinematic event selection
is 10% for $\abs{\eta}<2.5$, 48% for $2.50<\abs{\eta}<2.75$, 59% for
$2.75<\abs{\eta}<3.00$, and 65% for $3<\abs{\eta}<5$. The distribution of the
output BDT discriminator in selected data events and simulation is shown in
Fig. 8. Some disagreement is present between the data and simulation. This
disagreement is largest for $\abs{\eta}>2.5$ and at low discrimination values,
where PU jets dominate. The difference between data and simulation is roughly
comparable to the total uncertainty in simulation, considering the uncertainty
in the number of interactions and the difference to an alternative
HERWIG++-based parton shower prediction.
Figure 8: Data-to-simulation comparison of the PU jet ID boosted decision tree
(BDT) output for AK4 CHS jets with $30<\pt<50\GeV$ for the detector region
within the tracker volume (left) and $3<\abs{\eta}<5$ (right). Black markers
represent the data while the colored areas are Z+jets simulation events. The
simulation sample is split into jets originating from quarks (red), gluons
(purple), PU (green), and jets that could not be assigned (gray). The
distributions are normalized to unity. The shape of a sample showered with
HERWIG++ is superimposed. The lower panels show the data-to-simulation ratio
along with a gray band corresponding to the one-sided uncertainty that is the
difference between simulated Z+jets events showered with the PYTHIA parton
shower and those showered with the HERWIG++ parton shower. Also included in
panel is the PU rate uncertainty (dark gray).
When studying jet performance with PU, it is clear that jet reconstruction and
selection, including PU mitigation, affect the relationship between the number
of reconstructed vertices and the mean number of interactions per crossing.
The mean number of vertices as a function of the number of interactions can be
seen in Fig. 9 (left). Without jet selection, the number of vertices is on
average 30% smaller [34, 8] than the number of interactions, because the
vertex reconstruction and identification efficiency is about 70% (although it
is nearly 100% for hard-scattering interactions). When introducing a selection
on the jet $\pt$, the mean number of vertices for a given number of interactions is
reduced. This effect is largest for CHS jets, where no treatment of jets
composed of mostly PU particles is present. If a PU vertex is close to or
overlaps with the LV, jets composed of PU particles end up in the event
reconstruction and cause the observed bias. When applying a technique to
reduce the number of additional jets composed of mostly PU particles (PUPPI or
CHS+tight PU jet ID), the relationship shows a behavior more similar to the
one without selection. The mean number of interactions as a function of the
number of vertices is presented in Fig. 9 (right). This relationship depends
on the assumed distribution of pileup interactions in data and is adjusted to
match the 2016 data taking. The largest difference between events with and
without a jet $\pt$ cut is observed for a high number of vertices, while the different
PU mitigation techniques show a similar behavior.
Figure 9: Left: distribution of mean number of reconstructed vertices as a
function of the mean number of interactions in Z+jets simulation. Right:
distribution of the mean number of interactions as a function of the number of
vertices in Z+jets simulation. The black open circles show the behavior without
applying any event selection, while for the other markers a selection on jets
of $\pt>20\GeV$ is applied using the CHS (full red triangles), CHS+tight PU
jet ID (violet open squares), and PUPPI (full blue squares) algorithms. The
error bars correspond to the statistical uncertainty in the simulation.
Figure 10 shows the LV jet efficiency and purity in Z+jets simulation as a
function of the number of interactions for CHS jets, CHS jets with a PU jet ID
applied, and PUPPI jets. The efficiency is defined as the fraction of
particle-level jets with $\pt>30\GeV$ that match within $\Delta R<0.4$ with a
reconstruction-level jet with $\pt>20\GeV$. The purity is defined as the
fraction of reconstruction-level jets with $\pt>30\GeV$ that match within
$\Delta R<0.4$ with a particle-level jet with $\pt>20\GeV$ from the main
interaction. The cuts at reconstruction and generator level are chosen to be
different to remove any significant JER effects on this measurement.
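Both definitions reduce to a $\Delta R$-matching fraction; a minimal sketch with jets as (pt, eta, phi) tuples (an assumed representation):

```python
import math

def match_fraction(denominator_jets, reference_jets, max_dr=0.4):
    """Fraction of `denominator_jets` that have a `reference_jets`
    partner within DeltaR < max_dr, as used for the efficiency and
    purity definitions above (sketch)."""
    def delta_r(a, b):
        dphi = math.atan2(math.sin(a[2] - b[2]), math.cos(a[2] - b[2]))
        return math.hypot(a[1] - b[1], dphi)
    if not denominator_jets:
        return 0.0
    matched = sum(1 for d in denominator_jets
                  if any(delta_r(d, r) < max_dr for r in reference_jets))
    return matched / len(denominator_jets)
```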
For CHS jets, the efficiency is larger than 95% in the entire detector region up
to $\abs{\eta}<5$ regardless of the number of interactions. However, the
purity drops strongly with the number of interactions down to 70 and 18% at 50
interactions for the regions of $\abs{\eta}<2.5$ and $\abs{\eta}>2.5$,
respectively. The PU jet ID applied on top of CHS reduces the efficiency with
respect to using only CHS, but at the same time improves the purity,
especially for low-$\pt$ jets. In $\abs{\eta}<2.5$, the loose working point has only
a slightly reduced efficiency compared to CHS alone. In $\abs{\eta}>2.5$, the
efficiency drops to roughly 80% at high PU for the loose working point. In
$\abs{\eta}<2.5$, the purity remains constant at around 98% over the whole
range of PU scenarios. In $\abs{\eta}>2.5$, the purity is PU-dependent, but
improves over CHS alone by a factor of 1.7 at high PU for the loose working
point. The tight PU jet ID achieves the best purity in $\abs{\eta}>2.5$ at 40%
with collisions at 50 interactions and a jet efficiency of 45%. PUPPI also
reduces the efficiency with respect to CHS by removing neutral particles. At
the same time, PUPPI improves the purity by removing PU jets from the event
without the need of a PU jet ID. At low PU (below 10 interactions), the purity
of PUPPI jets is equal to that of CHS. At high PU, the purity of PUPPI jets
is significantly higher than that of CHS jets. PUPPI
has a constant efficiency above 95% in $\abs{\eta}<2.5$, and a purity
compatible with the tight PU jet ID working point at high PU. In
$\abs{\eta}>2.5$, above 30 interactions the efficiency of PUPPI is better than
the loose PU jet ID, whereas the purity is compatible within a few percent
with the loose PU jet ID. In summary, PUPPI shows an intrinsically good balance
between efficiency and purity compared to CHS, but if purity in
$\abs{\eta}>2.5$ is crucial to an analysis, CHS+tight PU jet ID yields better
performance. Using variables designed to distinguish quark jets from gluon
jets results in a $<1\%$ difference for $20<\mathrm{PU}<30$ in efficiency for
PUPPI and CHS in $\abs{\eta}<2.5$ and range up to 5% (12%) in $\abs{\eta}>3$
for PUPPI (CHS) with tight PU ID.
Figure 10: The LV jet efficiency (upper) and purity (lower) in Z+jets
simulation as a function of the number of interactions for PUPPI (blue closed
squares), CHS (red closed triangles), CHS+tight PU jet ID (magenta open
squares), CHS+medium PU jet ID (orange crosses), and CHS+loose PU jet ID
(black triangles). Plots are shown for AK4 jets with $\pt>20\GeV$, and (left)
$\abs{\eta}<2.5$ and (right) $\abs{\eta}>3$. The LV jet efficiency is defined
as the number of matched reconstruction-level jets with $\pt>20\GeV$ divided
by the number of particle-level jets with $\pt>30\GeV$ that originate from the
main interaction. For the lower plots, the purity is defined as the number of
matched particle-level jets with $\pt>20\GeV$ divided by the number of
reconstructed jets that have $\pt>30\GeV$. The error bars correspond to the
statistical uncertainty in the simulation.
To evaluate the performance of PU jet identification in data, the ratio of PU
jets to genuine jets for the leading jet in the event is studied. Events are
split into two categories to compare both PU and LV jets. The categorization
is performed utilizing the difference between the azimuthal angles $\phi$ of the
leading jet and the $\PZ$ boson. The PU-enriched events are required to have
$\Delta\phi(\PZ\,\text{boson},\text{jet})<1.5$, while events enriched in LV
jets are required to have $\Delta\phi(\PZ\,\text{boson},\text{jet})>2.5$.
Figure 11 shows the rate of events in the PU-enriched region divided by the
rate of events in the LV-enriched region, as a function of the number of
vertices for CHS jets, CHS jets with medium PU jet ID applied, and PUPPI jets
in Z+jets simulation and data. The rate of PU-enriched events selecting CHS
jets alone exhibits a strong dependence on the number of vertices in detector
regions where $\abs{\eta}<2.5$. This dependence increases from 8 to 25% when
going from 5 to 40 vertices. The dependence is strongly reduced when the PU
jet ID is applied or PUPPI is utilized. PUPPI shows a stable behavior across
the whole range in $\abs{\eta}<2.5$ for both data and simulation. For
$\abs{\eta}>2.5$, all three algorithms show a PU dependence with CHS jets
having the worst performance. Furthermore, categorization with PUPPI jets has
a PU-enriched rate between that of events categorized with CHS and CHS+medium
PU jet ID. For reference, the rate of jets that are matched to a particle-
level jet in simulation is also shown for CHS jets (simulation, CHS LV). This
line shows the expected ratio of events in the two regions when only the LV
jets are used for the categorization. This curve shows a slight PU dependence
because of the high matching parameter of generator- with reconstruction-level
jets ($\Delta R<0.4$).
Scale factors for the efficiency of data and simulation for both matched jets
from the LV and PU jets for various PU jet ID working points are derived using
the event categories enriched in genuine jets and PU jets. Scale factors are
within a few percent of unity in the detector region where $\abs{\eta}<2.5$.
In $\abs{\eta}>2.5$, they are farther from unity, with differences up to 10%
for jets with $2.5<\abs{\eta}<3.0$ and the tight working point applied. The
scale factor for PU jets is significantly larger and leads to a visible
disagreement in Fig. 11. This disagreement is found to be as large as 30% for
low-$\pt$ jets with $\abs{\eta}>2.5$. The difference in modeling when using
HERWIG++ instead of PYTHIA for parton showering, shown in the lower panel of
Fig. 11, is considered as an additional uncertainty. The difference of data
with respect to PYTHIA-showered jets is contained within the total variation
when considering both HERWIG++ and PYTHIA based parton showers.
Figure 11: Rate of jets in the PU-enriched region divided by the rate of jets
in the LV-enriched region as a function of the number of vertices for CHS jets
(red triangles), CHS jets with medium PU jet ID applied (orange crosses) and
PUPPI jets (blue squares) in Z+jets simulation (open markers), and data (full
markers). For reference, the rate of jets that are matched to a particle-level
jet in simulation is also shown for CHS jets (black solid line). The plots
show the ratio for events with $\abs{\eta}<2.5$ (left) and $\abs{\eta}>2.5$
(right). The lower panels show the data-to-simulation ratio along with a gray
band corresponding to the one-sided uncertainty that is the difference between
simulated Z+jets events showered with the PYTHIA parton shower and those
showered with the HERWIG++ parton shower.
## 0.6 $\PW$, $\PZ$, Higgs boson, and top quark identification
### 0.6.1 Jet substructure reconstruction
In various searches for new physics phenomena and measurements of standard
model properties, top quarks and $\PW$, $\PZ$, and Higgs bosons are important
probes. They can be produced with a large Lorentz boost, $\gamma$, such that
the directions of their decay particles become very collinear. The spatial separation
between the decay products in the $\eta$-$\phi$ plane is approximately $\Delta
R\approx 2/\gamma$. In such cases, it is difficult to reconstruct the decay
products of the hadronically decaying objects of interest properly with
traditional jets of size 0.4. As a result, techniques to reconstruct all decay
products within one jet with a larger size of 0.8 have been widely studied and
used [20, 35]. The invariant mass and substructure of the reconstructed jets
are typically used to identify the different bosons and top quarks. These
larger cone size jets tend to collect more PU, hence PU mitigation techniques
are relevant across a larger $\pt$ range, extending to well beyond $\pt>100\GeV$. In
addition, the jet mass and substructure variables are particularly affected by
soft and wide-angle radiation. A grooming technique is applied on top of CHS
and PUPPI to remove soft radiation from the jet-clustering algorithm and
thereby mitigate the effects from PU, underlying event, and initial-state
radiation. The main grooming algorithm used in CMS is the soft drop or
modified mass drop tagger [36, 37]. It reclusters a jet with the
Cambridge–Aachen algorithm [38], and splits the jet in two subjets by undoing
the last step of the jet clustering. It regards the jet as the final soft drop
jet if the two subjets satisfy the condition
$\frac{\min(\pt^{1},\pt^{2})}{\pt^{1}+\pt^{2}}>z_{\text{cut}}\Big{(}\frac{\Delta
R_{12}}{R_{0}}\Big{)}^{\beta},$ (3)
where $\pt^{1}$ and $\pt^{2}$ are the transverse momenta of the two subjets,
$R_{0}$ is the size parameter of the jet, $\Delta
R_{12}=\sqrt{\smash[b]{(\Delta\eta_{12})^{2}+(\Delta\phi_{12})^{2}}}$ is the
distance between the two subjets, and $z_{\text{cut}}$ and $\beta$ are tunable
parameters of soft drop set to $z_{\text{cut}}=0.1$ and $\beta=0$ here. If the
soft drop condition is not met, the declustering procedure is repeated with
the subjet that has the larger $\pt$ of the two, and the other subjet is rejected.
The soft drop jet mass is computed from the sum of the four-momenta of the
constituents passing the grooming algorithm. The mass is then corrected by a
factor derived in simulated $\PW$ boson samples to ensure a $\pt$- and
$\eta$-independent jet mass distribution [6].
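The recursive declustering of Eq. (3) can be sketched as follows; the `.subjets()` interface is a hypothetical stand-in for access to the Cambridge–Aachen clustering history:

```python
import math

def soft_drop(jet, z_cut=0.1, beta=0.0, R0=0.8):
    """Sketch of the soft drop procedure of Eq. (3): decluster, test the
    condition, and follow the harder branch until it is satisfied.
    `jet` is assumed to expose .subjets() -> (j1, j2) or None, and
    .pt, .eta, .phi attributes (hypothetical interface)."""
    while True:
        parents = jet.subjets()
        if parents is None:           # single constituent left
            return jet
        j1, j2 = parents
        dphi = math.atan2(math.sin(j1.phi - j2.phi),
                          math.cos(j1.phi - j2.phi))
        dr12 = math.hypot(j1.eta - j2.eta, dphi)
        z = min(j1.pt, j2.pt) / (j1.pt + j2.pt)
        if z > z_cut * (dr12 / R0) ** beta:
            return jet                # soft drop condition met
        jet = j1 if j1.pt > j2.pt else j2   # keep the harder subjet
```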
Additional separation of boosted $\PW$, $\PZ$, and Higgs bosons, and top quarks from
quark and gluon jet background can be achieved with a substructure observable.
A commonly used observable in CMS is $N$-subjettiness [39], defined as
$\tau_{N}=\frac{1}{d_{0}}\sum_{k}p_{\mathrm{T}k}\,\min(\Delta R_{1,k},\Delta
R_{2,k},\ldots,\Delta R_{N,k}),$ (4)
with the normalization factor $d_{0}$:
$d_{0}=\sum_{k}p_{\mathrm{T}k}\,R_{0},$ (5)
where $R_{0}$ is the size parameter used in the clustering process,
$p_{\mathrm{T}k}$ is the transverse momentum of the $k$-th constituent of the
jet, and $\Delta R_{n,k}$ estimates the angular separation of the constituents
of the jet to the closest subjet axis. We use a one-step optimization of the
exclusive $\kt$ axes as a definition for the subjet axes. The ratio
$\tau_{2}/\tau_{1}$, which is called $\tau_{21}$, has excellent capability in
separating jets with bipolar structures, originating from boosted $\PW$, $\PZ$, and
Higgs bosons, from jets coming from quarks and gluons. The ratio
$\tau_{32}=\tau_{3}/\tau_{2}$ can be used to discriminate top quark jets from
$\PW$, $\PZ$, and Higgs boson jets, or quark and gluon jets.
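Equations (4) and (5) translate directly into code; a sketch in which constituents and candidate subjet axes are (pt, eta, phi) tuples, with the axes assumed to come from the exclusive-$\kt$ clustering mentioned above:

```python
import math

def n_subjettiness(constituents, axes, R0=0.8):
    """Sketch of tau_N (Eqs. 4-5): pt-weighted minimal distance of each
    constituent to the candidate subjet axes, normalized by d0."""
    def delta_r(a, b):
        dphi = math.atan2(math.sin(a[2] - b[2]), math.cos(a[2] - b[2]))
        return math.hypot(a[1] - b[1], dphi)
    d0 = R0 * sum(pt for pt, _, _ in constituents)            # Eq. (5)
    tau = sum(pt * min(delta_r((pt, eta, phi), axis) for axis in axes)
              for pt, eta, phi in constituents)               # Eq. (4)
    return tau / d0

# e.g. tau21 = n_subjettiness(parts, axes2) / n_subjettiness(parts, axes1)
```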
### 0.6.2 Identification performance and pileup
The variation as a function of pileup of the median soft drop jet mass, median
$\tau_{21}$, and the soft drop jet mass resolution is shown in Fig. 12 for
jets from boosted $\PW$ bosons with $400<\pt<600\GeV$ using simulation of bulk
gravitons decaying into $\PW\PW$ pairs. The soft drop jet mass resolution is
defined as the spread of the ratio of reconstruction- and particle-level jet
mass (the response) divided by the mean of the response. The response
distribution is, to a very good approximation, Gaussian, and the resolution is
determined using the same procedure as for the JER described in Section 0.5.1.
The CHS jets exhibit a PU dependence for the soft drop jet mass and
$\tau_{21}$ observables. The PUPPI jets, on the other hand, entirely remove
the PU dependence of the soft drop jet mass and $\tau_{21}$ medians. The soft
drop jet mass resolution is similar for the CHS and PUPPI algorithms, though a
slightly better resolution is observed for the CHS algorithm for fewer than 20
interactions, while the PUPPI algorithm shows less dependence on PU leading to
an improved resolution for more than 30 interactions, for which it has been
optimized.
Figure 12: Median soft drop jet mass (upper left), median $\tau_{21}$ (upper
right), and soft drop jet mass resolution (lower) for AK8 jets from boosted
bosons with $400<\pt<600\GeV$ for CHS (red triangles) and PUPPI (blue squares)
jets in a bulk graviton decaying to $\PW\PW$ signal sample, as a function of the number
of vertices. The error bars correspond to the statistical uncertainty in the
simulation.
The performance of a typical $\PW$ or $\PZ$ boson tagger with respect to the PU
contamination is studied using simulation of bulk gravitons decaying into
$\PW\PW$ pairs for the tagging efficiency and QCD multijet production for the
misidentification rate. Reconstructed jets are required to have $\pt$ larger than
200\GeV and $\abs{\eta}<2$, and not to overlap with any well-reconstructed leptons.
In addition, jets are required to have a reconstructed mass compatible with the
$\PW$ boson mass (within 65–105\GeV). Figure 13 shows the evaluated efficiency and
misidentification rate of the tagger with CHS and PUPPI jets operated at two
cutoff values on $\tau_{21}$ (0.6 and 0.45 for CHS jets, and 0.55 and 0.40 for
PUPPI jets, which give a comparable efficiency to that for CHS jets). The
tagger with PUPPI provides stable performance for both the efficiency and the
misidentification rate, whereas the one with CHS shows decreasing efficiency and
misidentification rate as the PU increases. This behavior of the tagger with
CHS results from the linear dependence of the median $\tau_{21}$ on the number of
vertices for both $\PW/\PZ$ jets and QCD jets (see Fig. 12).
Figure 13: $\PW$ boson identification performance using a selection on $\tau_{21}$
for CHS (red triangles and dark red crosses) and PUPPI (blue squares and
circles) AK8 jets as a function of the number of vertices for loose and tight
selections, respectively. Shown on the left is the $\PW$ boson identification
efficiency evaluated in simulation for a bulk graviton decaying to a $\PW$ boson
pair and on the right the misidentification rate evaluated with QCD multijet
simulation. The error bars correspond to the statistical uncertainty in the
simulation.
Figure 14: Top quark identification efficiency (left) and misidentification
rate (right) as a function of the number of vertices for CHS (open symbols)
and PUPPI (closed symbols) jets, using different combinations of substructure
variables: soft drop mass cut between 105 and 210\GeV (blue rectangles),
$\tau_{32}<0.54$ (orange circles), and both requirements together (red
triangles). The error bars correspond to the statistical uncertainty in the
simulation.
The same stability of the PUPPI algorithm is seen in top quark jet
identification, which is performed by selecting jets originating from top
quarks in simulation that have a soft drop mass within 105–210\GeV and
$\tau_{32}<0.54$. Figure 14 shows the tagging performance using the CHS and
PUPPI algorithms with the soft drop mass and $\tau_{32}$ conditions applied
separately, and with both of them together. Although the efficiency is
slightly different between the application of PUPPI or CHS, the same stability
is observed with respect to PU as for $\PW$ boson tagging.
The performance of the $\PW$ boson tagger with the CHS and PUPPI algorithms is
compared in data and simulation following the procedure described in Ref.
[20]. The $\PW$ boson identification efficiency is measured in a region enriched
in $\ttbar$ events, where one top quark decays to a final state with a lepton,
neutrino, and a bottom quark and is used to tag the event. The other top quark
is required to decay to a bottom quark and a $\PW$ boson that further decays to
a quark-antiquark pair. The AK8 jet with the highest $\pt$ in the event is
probed as the $\PW$ boson jet candidate and required to have $\pt>200\GeV$ and
$\abs{\eta}<2.4$. Data collected by single-lepton triggers are compared with
simulation samples of top quark pair production and backgrounds from single
top quark, $\PW$ boson, and diboson production. The soft drop jet mass scale and
resolution, as well as the $\tau_{21}$ selection efficiency with the CHS and
PUPPI algorithms, are well modeled by the simulation. The data-to-simulation
scale factors for jet mass scale, jet mass resolution, and $\tau_{21}$
selection efficiency are found in Table 0.6.2. The leading systematic effects
include parton showering and variations of the fit model (treatment of nearby
jets) as detailed in Ref. [20].
Data-to-simulation scale factors for the jet mass scale, jet mass resolution,
and the $\tau_{21}$ selection efficiency for the CHS and PUPPI algorithms.
| Parameter | Data/simulation (CHS) | Data/simulation (PUPPI) |
|---|---|---|
| Mass scale | $1.007\pm 0.009\stat\pm 0.005\syst$ | $0.998\pm 0.007\stat\pm 0.006\syst$ |
| Mass resolution | $1.15\pm 0.04\stat\pm 0.04\syst$ | $1.08\pm 0.02\stat\pm 0.09\syst$ |
| $\tau_{21}$ selection efficiency | $1.00\pm 0.06\stat\pm 0.07\syst$ ($\tau_{21}<0.45$) | $1.01\pm 0.06\stat\pm 0.05\syst$ ($\tau_{21}<0.4$) |
## 0.7 Missing transverse momentum resolution
The imbalance of momentum for all reconstructed objects in the transverse
plane, called missing transverse momentum $\vec{p}_{T}^{\text{miss}}$ with
magnitude $p_{\mathrm{T}}^{\text{miss}}$, is a signature of neutrino production. It also plays an important
role in searches for unknown stable neutral particles. In CMS,
$\vec{p}_{T}^{\text{miss}}$ is calculated as the negative vector sum of all PF
candidates (called PF $\vec{p}_{T}^{\text{miss}}$ in the following). The
$\vec{p}_{T}^{\text{miss}}$ thus relies on the accurate measurement of the
reconstructed physics objects, namely muons, electrons, photons, hadronically
decaying taus, jets, and unclustered energy. The unclustered energy is the
contribution from the PF candidates not associated with any of the previous
physics objects. The CHS procedure is not suitable for
$\vec{p}_{T}^{\text{miss}}$ computation since it selectively removes only
particles within the tracker volume ($\abs{\eta}<2.5$). PU events that spread
across the tracker volume boundary are thus partially removed leading to a
degradation in the $\vec{p}_{T}^{\text{miss}}$ resolution. The
$\vec{p}_{T}^{\text{miss}}$ is corrected with the difference between the
vector sum of all reconstructed jets in the event calibrated to the particle
level and the vector sum of all uncalibrated jet momenta (called type-1
correction), to account for the detector response of the jet objects. Anomalous
high-$p_{\mathrm{T}}^{\text{miss}}$ events can be due to a variety of
reconstruction failures, detector malfunctions, or noncollision backgrounds.
Such events are rejected by event filters that are designed to identify more
than 85–90% of the spurious high-$p_{\mathrm{T}}^{\text{miss}}$ events with a
mistagging rate less than 0.1% [40]. The performance of the
$\vec{p}_{T}^{\text{miss}}$ reconstruction in CMS (covering $\PZ\to\Pe\Pe$,
$\PZ\to\PGm\PGm$ and $\gamma$+jets data samples) is discussed in detail in
Ref. [40].
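A sketch of the PF $\vec{p}_{T}^{\text{miss}}$ computation with the type-1 correction described above; the `px_corrected`/`px_raw` attribute names are illustrative assumptions:

```python
def type1_corrected_met(pf_candidates, jets):
    """Sketch: negative vector sum of all PF candidates, corrected by the
    difference between calibrated and raw jet momenta (type-1)."""
    mex = -sum(c.px for c in pf_candidates)
    mey = -sum(c.py for c in pf_candidates)
    for jet in jets:
        mex -= jet.px_corrected - jet.px_raw
        mey -= jet.py_corrected - jet.py_raw
    return mex, mey  # magnitude: (mex**2 + mey**2) ** 0.5
```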
The PUPPI algorithm can be used for the computation of
$\vec{p}_{T}^{\text{miss}}$ by scaling the PF candidates by their PUPPI weight
(PUPPI $\vec{p}_{T}^{\text{miss}}$), and then applying the type-1 correction
using PUPPI jets. The PUPPI metric as defined in Eq. 1 in Section 0.4 treats
charged leptons and charged hadrons in the same way, i.e., charged leptons get
a weight of 0 or 1 depending on their vertex association and enter into the
computation of the weight of their surrounding particles. This causes prompt
leptons, e.g., leptons from the decay of the $\PZ$ boson, to create a PU dependence
by giving PU particles around the prompt lepton a higher weight. Therefore, a
second PUPPI metric is defined in which charged leptons are excluded from the
calculation. In this definition, it is assumed that all leptons in the event
are prompt. This results in PU particles surrounding a prompt lepton having a
lower weight consistent with the PU hypothesis. In the following discussion,
the metric defined with the default PUPPI weight, including the leptons, is
referred to as “PUPPI-with-lepton” and the metric, which excludes the leptons,
as “PUPPI-no-lepton.” For the purpose of the PUPPI $\vec{p}_{T}^{\text{miss}}$
computation, PUPPI-no-lepton collection is combined with the collection of
leptons given a weight of 1. In addition, a PUPPI weight of 1 is automatically
assigned to photons reconstructed in the tracker region ($\abs{\eta}<2.5$)
with $\pt>20\GeV$. These photons are required to pass certain identification
and isolation criteria ensuring an efficiency of above 80% and a purity of
above 95%.
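The weighting step itself is simple; a sketch, with the type-1 correction of the kind shown earlier applied afterwards using PUPPI jets:

```python
def puppi_met(pf_candidates, puppi_weights):
    """Sketch of the PUPPI missing transverse momentum: each PF candidate
    enters the negative vector sum scaled by its PUPPI weight."""
    mex = -sum(w * c.px for c, w in zip(pf_candidates, puppi_weights))
    mey = -sum(w * c.py for c, w in zip(pf_candidates, puppi_weights))
    return mex, mey
```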
The resolution of $\vec{p}_{T}^{\text{miss}}$ is quantified by measuring the
resolution of the hadronic recoil in $\PZ$ boson events. The recoil is defined
as the vector sum of the momenta of all the objects (with the same PU
mitigation applied as for $\vec{p}_{T}^{\text{miss}}$) in the event except the
$\PZ$ boson. The transverse momenta of the recoil and of the $\PZ$ boson are
balanced against each other, such that their difference allows the
determination of the momentum resolution. The momentum of the $\PZ$ boson decaying
into charged leptons can be reconstructed with high resolution such that it
can serve as a reference for the measurement of the energy resolution of the
hadronic recoil. The momentum of the recoil is projected to axes parallel and
perpendicular to the momentum of the reconstructed $\PZ$ boson. The resolution of
the former is sensitive to the energy resolution and the latter to the PU
contribution.
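The projections can be written explicitly; a sketch using the relation $\vec{u}=-(\vec{p}_{T}^{\text{miss}}+\vec{q}_{\mathrm{T}})$ between the recoil, the missing transverse momentum, and the $\PZ$ boson transverse momentum:

```python
import math

def recoil_components(met_x, met_y, z_px, z_py):
    """Sketch: project the hadronic recoil u = -(met + qT(Z)) onto axes
    parallel and perpendicular to the Z boson transverse momentum."""
    ux, uy = -met_x - z_px, -met_y - z_py
    qt = math.hypot(z_px, z_py)
    u_par = (ux * z_px + uy * z_py) / qt    # sensitive to energy scale
    u_perp = (ux * z_py - uy * z_px) / qt   # sensitive to PU
    return u_par, u_perp
```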
The $\Pp\Pp$ collision data collected with a dielectron trigger are used to
evaluate the performance. Events with two isolated electrons, within
$\abs{\eta}<2.5$, with the leading (subleading) electron $\pt>25\,(20)\GeV$,
and the invariant mass of the two electrons within a 20\GeV window centered
around the $\PZ$ boson mass are selected. The four-momentum of the $\PZ$ boson
is reconstructed from the four-momenta of the two electrons. The recoil is
calculated as the vector sum of the momenta of all particles except the two
electrons.
Figure 15 shows the response, i.e., the ratio of the recoil component parallel
to the $\PZ$ boson momentum ($u_{\parallel}$) to the $\PZ$ boson transverse
momentum ($q_{\mathrm{T}}$), as a function of $q_{\mathrm{T}}$ for PUPPI
$\vec{p}_{T}^{\text{miss}}$ and PF $\vec{p}_{T}^{\text{miss}}$. The PUPPI
reconstruction tends to have a smaller response in events with a low-momentum
recoil. This is because of the removal of PF
candidates that are wrongly assigned to the PU vertex by the PUPPI algorithm.
Deviations from unity indicate imperfect calibration of the hadronic energy
scale.
Figure 16 shows the resolution of the recoil, parallel ($\sigma_{\parallel}$),
and perpendicular ($\sigma_{\perp}$) to the boson momentum, as a function of
the number of vertices for PUPPI $\vec{p}_{T}^{\text{miss}}$ and PF
$\vec{p}_{T}^{\text{miss}}$. The scale of the recoil is corrected as a
function of the $\PZ$ boson momentum for comparison. The PUPPI
$\vec{p}_{T}^{\text{miss}}$ resolution for both components is consistently
better than the PF $\vec{p}_{T}^{\text{miss}}$ resolution above a number of
vertices of 10. In addition, PUPPI $\vec{p}_{T}^{\text{miss}}$ provides a more
stable performance with respect to PU than PF $\vec{p}_{T}^{\text{miss}}$, up to
at least 50 vertices.
Figure 15: The hadronic recoil response ($-\langle
u_{\parallel}\rangle/\langle q_{\mathrm{T}}\rangle$) of the $\PZ$ boson computed for
PUPPI and PF $\vec{p}_{T}^{\text{miss}}$, as a function of $q_{\mathrm{T}}$ in $\PZ\to\Pe\Pe$ events in
collision data. The lower panel shows the data-to-simulation ratio. A gray
shaded band is added in the lower panel showing the systematic uncertainties
resulting from jet energy scale and jet energy resolution variations, and
variations in the unclustered energy added in quadrature.
Figure 16: Resolution of the hadronic recoil components $u_{\parallel}$ (left) and
$u_{\perp}$ (right) for PUPPI and PF $\vec{p}_{T}^{\text{miss}}$ as a function of the number of
vertices in $\PZ\to\Pe\Pe$ events in collision data. The lower panel shows the
data-to-simulation ratio. A gray shaded band is added in the lower panel
showing the systematic uncertainties resulting from jet energy scale and jet
energy resolution variations, and variations in the unclustered energy added
in quadrature.
## 0.8 Muon isolation
Muons are reconstructed through a fit to hits in both the inner tracking
system and the muon spectrometer [41, 42]. Muons must satisfy identification
and reconstruction requirements on the impact parameters of the track, the
number of hits reconstructed in both the silicon tracker and the muon
detectors, and the uncertainty in the measurement. These quality criteria
ensure a precise measurement of the four-momentum, and rejection of badly
reconstructed muons.
To distinguish prompt charged leptons from those originating from semileptonic
decays of hadrons, the lepton isolation provides a powerful handle. Lepton
isolation is defined as the $\pt$ sum of all surrounding particles in a cone around
the lepton. In this study, PUPPI is investigated in the context of muon
isolation and compared with other techniques commonly used in CMS. While not
shown here, these techniques are also applicable to electron isolation.
Various techniques exist to limit the impact of PU on isolation. A widely used
variable within CMS is the $\delta\beta\text{-corrected}$ isolation [41]. This
variable is used to estimate the contribution of neutral particles based on
the nearby contributions of charged particles, defined by:
$\text{$\delta\beta$-Iso}^{\mu^{i}}=\sum_{\Delta R(i,j)<0.4}^{\text{Ch-
LV}}\pt^{j}+\max\left(0,\sum_{\Delta
R(i,j)<0.4}^{\mathrm{Nh}}\pt^{j}+\sum_{\Delta
R(i,j)<0.4}^{\mathrm{Ph}}\pt^{j}-\frac{1}{2}\sum_{\Delta
R(i,j)<0.4}^{\text{Ch-PU}}\pt^{j}\right),$ (6)
where each sum runs over the particles, indexed with $j$, with $\Delta R<0.4$
of the muon, $\pt^{j}$ is the transverse momentum of each surrounding
particle, Ch-LV and Ch-PU are charged particles associated with the LV and PU
vertices, respectively, and Nh and Ph are neutral hadrons and photons
reconstructed with the PF algorithm, respectively. The subtraction by one half
of the amount of Ch-PU corresponds to the subtraction of the PU contamination.
It is motivated by isospin symmetry, which yields a ratio of charged to
neutral pion production of two, so that jets are composed of roughly one-third
neutral pions and two-thirds charged pions [5].
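Equation (6) maps directly to a per-muon sum over nearby PF candidates; a sketch with a flat candidate representation (the dict keys are illustrative assumptions):

```python
def delta_beta_isolation(pf_candidates, cone=0.4):
    """Sketch of Eq. (6). Candidates are dicts with 'pt', 'dr' (distance
    to the muon), and 'type' in {'ch_lv', 'ch_pu', 'nh', 'ph'}."""
    in_cone = [c for c in pf_candidates if c['dr'] < cone]
    def pt_sum(kind):
        return sum(c['pt'] for c in in_cone if c['type'] == kind)
    # neutral contribution, corrected by half of the charged-PU pt sum
    neutral = pt_sum('nh') + pt_sum('ph') - 0.5 * pt_sum('ch_pu')
    return pt_sum('ch_lv') + max(0.0, neutral)
```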
An alternative isolation can be constructed using PUPPI. The simplest
definition of PUPPI muon isolation is:
$\text{Iso}^{\mu^{i}}=\sum_{\Delta R(i,j)<0.4}\pt^{j}\omega^{j},$ (7)
where $\pt^{j}$ and $\omega^{j}$ are the transverse momentum and the PUPPI
weight of particle $j$, respectively. The PUPPI weight is either determined
from PUPPI-with-lepton or PUPPI-no-lepton as described in Section 0.7. In
addition, a combined isolation defined as the mean of the two isolation
quantities is referred to as “PUPPI-combined”:
$\text{Iso}_{\text{combined}}=\frac{\text{Iso}_{\text{no-lepton
}}+\text{Iso}_{\text{with-lepton}}}{2}.$ (8)
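Equations (7) and (8) can be sketched in the same representation, with a per-candidate weight taken from either PUPPI metric:

```python
def puppi_isolation(pf_candidates, cone=0.4):
    """Sketch of Eq. (7): PUPPI-weighted pt sum inside the cone.
    Candidates are dicts with 'pt', 'dr', and a PUPPI 'weight' from
    either the with-lepton or the no-lepton metric."""
    return sum(c['pt'] * c['weight']
               for c in pf_candidates if c['dr'] < cone)

def puppi_combined_isolation(iso_no_lepton, iso_with_lepton):
    """Sketch of Eq. (8): the mean of the two PUPPI isolation variants."""
    return 0.5 * (iso_no_lepton + iso_with_lepton)
```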
The performance of muon isolation is tested using simulated $\PZ$ boson (prompt
muons) and QCD multijet (nonprompt muons) events with a PU distribution having
a mean of 20 interactions comparable to the 2016 PU conditions. For
comparison, the relative isolation algorithm, defined as the isolation divided
by the muon $\pt$, is used. Muons are selected if the relative isolation is below a
certain threshold. The threshold value for the relative isolation (0.156 for
PUPPI-combined and 0.15 for $\delta\beta\text{-corrected}$) is defined such
that each isolation quantity gives an inclusive misidentification rate of 12%
for the muons selected in QCD multijet simulation. The fraction of muons
passing the criteria is referred to as isolation efficiency for prompt muons
and as misidentification rate for nonprompt muons. The efficiency is
calculated with respect to reconstructed prompt muons with $\pt>20\GeV$ and
$\abs{\eta}<2.4$.
As explained before, PUPPI-with-lepton has the shortcoming that PU particles
around a prompt lepton get too high a weight because of the $\pt$ of the lepton in
the $\alpha_{i}$ calculation. Therefore, the application of the weights from
PUPPI-with-lepton for the muon isolation leads to a PU-dependent efficiency
for prompt muons and a PU-independent misidentification rate. The
misidentification rate is PU-independent, because LV particles, which drive
the isolation of nonprompt leptons, get a reasonable weight. Conversely,
PUPPI-no-lepton has the shortcoming that LV particles near a nonprompt lepton
get a reduced weight because the $\pt$ of the nonprompt lepton is excluded when
calculating $\alpha_{i}$ for these particles. The weight of LV particles
contributing to the isolation is thus less stable against their surroundings.
PU particles around leptons, however, get reasonable weights, resulting in a
good estimate of the isolation for prompt leptons. Therefore, using PUPPI-no-
lepton for the isolation calculation yields a stable efficiency and a less PU-
resilient misidentification rate.
Figure 17 shows the isolation efficiency and the misidentification rate as a
function of the number of vertices. All three PUPPI isolation quantities are
observed to be more stable across PU when compared with the
$\delta\beta\text{-corrected}$ isolation in terms of misidentification rate.
In terms of efficiency, the PUPPI-no-lepton shows a more stable behavior
compared with $\delta\beta\text{-corrected}$ isolation whereas PUPPI-with-
lepton shows a stronger dependence on the number of vertices. The stability of
the PUPPI-combined isolation efficiency is between the two PUPPI isolation
variants and similar to the $\delta\beta\text{-corrected}$ isolation.
Figure 17: The identification efficiency for prompt muons in simulated Z+jets
events (left) and the misidentification rate for nonprompt muons in QCD
multijet simulated events (right) for the different definitions of the
isolation: $\delta\beta\text{-corrected}$ isolation (black circles), PUPPI-
with-lepton (blue triangles), PUPPI-no-lepton (red crosses), PUPPI-combined
(green squares), as a function of the number of vertices. The threshold of
each isolation is set to yield a 12% misidentification rate for reconstructed
muons in QCD multijet simulation. The error bars correspond to the statistical
uncertainty in the simulation.
Figure 18 shows a receiver operating characteristic (ROC) curve, i.e., the
efficiency as a function of the misidentification rate, when using different
definitions of the isolation. The combined PUPPI isolation provides the best
performance over the typical analysis working points.
Figure 18: The identification efficiency for prompt muons in simulated Z+jets
events as a function of the misidentification rate for nonprompt muons in QCD
multijet simulated events for the different definitions of the isolation:
$\delta\beta\text{-corrected}$ isolation (black solid line), PUPPI-with-lepton
(blue dashed line), PUPPI-no-lepton (red mixed dashed), PUPPI-combined (green
long mixed dashed). The average number of interactions is 27.
The PUPPI isolation is further investigated in collision data collected with a
single-muon trigger path requiring an isolated muon with $\pt>24\GeV$. Two
levels of muons are defined: loose muons are required to have $\pt>15\GeV$ and
$\abs{\eta}<2.4$ with no isolation requirement and tight muons $\pt>26\GeV$
and $\abs{\eta}<2.1$ with a $\delta\beta\text{-corrected}$ isolation
corresponding to an efficiency of 95% (threshold of 0.15). One tight and one
loose muon, with the invariant mass of the two muons within a 10window
centered around the boson mass are selected. The performance is measured using
a tag-and-probe method, with the tight muon as the tag muon and the loose muon
as the probe muon. The behavior of the isolation variables in data is
compared with Z+jets simulation. Other backgrounds are neglected.
Figure 19 shows the mean fractions of the contributions of charged hadrons,
neutral hadrons, and photons to the relative isolation variable, as a function
of the number of vertices for the two types of PUPPI isolation in data and
Z+jets simulation. The neutral hadrons and photons make up a large contribution
to the total isolation and show a clear PU dependence for the PUPPI-with-
lepton isolation, whereas this is not the case for the PUPPI-no-lepton
isolation. The trend in data is well described by simulation.
Figure 19: Mean relative isolation for PUPPI-with-lepton (left) and PUPPI-no-
lepton (right) in data compared to Z+jets simulation. The relative isolation is
split into separate charged hadron (Ch, green squares), neutral hadron (Nh,
blue circles), photon (Ph, red crosses) components, and combined (black
triangles). Data and simulation are shown using full and open markers,
respectively. The lower panels show the data-to-simulation ratio of each
component. The error bars correspond to the statistical uncertainty.
The isolation efficiency of the PUPPI-combined isolation is evaluated using
the same tag-and-probe method, and is compared to the
$\delta\beta\text{-corrected}$ isolation. The threshold for PUPPI-combined
isolation (0.15) is chosen such that the isolation efficiencies are roughly
equal for muons with $15<\pt<20\GeV$, where $\delta\beta\text{-corrected}$
isolation is applied.
Figure 20: The identification efficiency for prompt muon isolation selection
in $\PZ\to\PGm\PGm$ events in data compared to Z+jets simulation, as a function
of the number of vertices for PUPPI-combined (green circles) and
$\delta\beta\text{-corrected}$ isolation (black squares). Data and simulation
are shown using full and open markers, respectively. The lower panel shows the
data-to-simulation ratio. The error bars correspond to the statistical
uncertainty. The threshold for PUPPI-combined isolation (0.15) is chosen such
that the isolation efficiencies are roughly equal for muons with
$15<\pt<20\GeV$, where $\delta\beta\text{-corrected}$ isolation is applied,
leading to an approximately 1% higher efficiency for $\pt>15\GeV$ with
variations as a function of the number of vertices.
Figure 20 shows the efficiency of the chosen PUPPI and
$\delta\beta\text{-corrected}$ isolation variables as a function of the number
of vertices. The ratio of efficiency in data to that in simulation is 0.99.
Although the PU dependence of the efficiency of the PUPPI-combined isolation
is stronger than that of the $\delta\beta\text{-corrected}$ isolation, this
does not mean PUPPI-combined isolation is more susceptible to PU, because the
misidentification rate is stable against PU (see Fig. 21). The PUPPI-combined
isolation outperforms $\delta\beta\text{-corrected}$ isolation across the PU
conditions studied.
The misidentification rate of the PUPPI isolation is evaluated in data by
selecting $\PZ\to\PGm\PGm$ events passing a dimuon trigger path ($\pt>17$ and
$8\GeV$ for the leading and subleading muons, respectively). To obtain the
$\PZ$ boson candidates, two oppositely charged muons are selected within a
10\GeV window centered around the $\PZ$ boson mass and passing loose isolation
criteria. In addition to the two muons from the $\PZ$ boson decay, a third muon
is required and
labeled as the misidentified muon. This additional muon is either a third
prompt muon initiated by leptonic decays of $\PW\PZ$ and $\PZ\PZ$ processes or,
as is usually the case, a nonprompt muon from a semileptonic hadron decay. To
further reduce the prompt-muon contribution from $\PW\PZ$ production, the
transverse mass (as defined in Ref. [40]) obtained from the muon with the
third-highest $\pt$ and $\vec{p}_{T}^{\text{miss}}$ needs to be less than
40\GeV. Both $\PW\PZ$ and $\PZ\PZ$ production are well measured and generally
well modeled. The
difference in agreement between data and simulation is thus ascribed to
nonprompt-lepton events.
The misidentification rate shown in Fig. 21 is defined as the number of events
with a third isolated muon divided by the total number of events after
subtracting the background. The misidentification rate of the
$\delta\beta\text{-corrected}$ isolation is $(5.4\pm 0.4)\%$ while that of
PUPPI-combined isolation is $(4.2\pm 0.4)\%$. The uncertainty is statistical
only. The ratio of the misidentification rate of PUPPI isolation to the
$\delta\beta\text{-corrected}$ isolation is $(77\pm 4)\%$, where the
correlation is included in the uncertainty computation. The performance
improvements from PUPPI-combined isolation expected from simulation studies
are thus confirmed by data measurements.
Figure 21: The misidentification rate defined as the number of events with a
third isolated muon divided by the total number of events with a third muon in
$\PZ\to\PGm\PGm$ data for PUPPI-combined (blue closed circles) and
$\delta\beta\text{-corrected}$ isolation (red open circles). The lower panel
shows the ratio of PUPPI-combined and $\delta\beta\text{-corrected}$
isolation, taking the correlation of their uncertainties into account. The
threshold for PUPPI-combined isolation (0.15) is chosen such that the
isolation efficiencies are roughly equal for muons with $15<\pt<20\GeV$, where
$\delta\beta\text{-corrected}$ isolation is applied.
## 0.9 Summary
The impact of pileup (PU) mitigation techniques on object reconstruction
performance in the CMS experiment has been presented. The main techniques
under study are charged-hadron subtraction (CHS) and pileup per particle
identification (PUPPI), which both exploit particle-level information. The
performance of these techniques is evaluated in the context of the
reconstruction of jets and missing transverse momentum ($p_{\mathrm{T}}^{\text{miss}}$), lepton isolation,
and the calculation of jet substructure observables for boosted object
tagging. The CHS and PUPPI algorithms are further compared with other
algorithmic approaches that act on jet, $\ptmiss$, and lepton objects. While CHS
rejects charged particles associated with PU vertices, PUPPI applies a more
stringent selection to charged particles and rescales the four-momentum of
neutral particles according to their probability to originate from the leading
vertex. Both techniques reduce the dependence on PU interactions across all
objects. A stronger reduction is achieved with PUPPI, especially for events
with more than 30 interactions. The PUPPI algorithm provides the best
performance for jet mass and substructure observables, $\ptmiss$ resolution, and
rejection of misidentified muons. With respect to jet-momentum resolution and
PU jet rejection, the preferred algorithm depends on the physics process under
study: the PUPPI algorithm provides a better jet momentum resolution for jets
with $\pt<100\GeV$, whereas CHS does so for $\pt>100\GeV$. The highest
rejection rate for jets originating purely from PU is obtained when using a
dedicated PU jet identification in addition to CHS. However, when a looser
working point for the PU jet identification is chosen such that its efficiency
for selecting jets coming from the leading vertex is similar to that of PUPPI,
both provide a similar rejection power. The PU suppression techniques studied
in this paper are proven to maintain reasonable object performance up to 70
interactions. Their use will be crucial for future running of the LHC, where
even more challenging PU conditions up to 200 interactions per bunch crossing
are expected.
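To make the distinction between the two particle-level strategies concrete,
the following minimal sketch contrasts them (illustrative only, not the CMS
implementation; the particle attributes and the `weight` function, which PUPPI
derives from a local shape variable [7], are assumptions):

```python
def chs(particles):
    # CHS: remove charged particles associated with pileup vertices
    return [p for p in particles
            if not (p.is_charged and p.from_pileup_vertex)]

def puppi(particles, weight):
    # PUPPI: keep leading-vertex charged particles, reject pileup ones,
    # and rescale each neutral four-momentum by a weight w in [0, 1]
    # estimating its probability to come from the leading vertex
    out = []
    for p in particles:
        if p.is_charged:
            w = 0.0 if p.from_pileup_vertex else 1.0
        else:
            w = weight(p)
        if w > 0.0:
            out.append(p.scaled(w))  # four-momentum scaled by w
    return out
```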
###### Acknowledgements.
We congratulate our colleagues in the CERN accelerator departments for the
excellent performance of the LHC and thank the technical and administrative
staffs at CERN and at other CMS institutes for their contributions to the
success of the CMS effort. In addition, we gratefully acknowledge the
computing centers and personnel of the Worldwide LHC Computing Grid for
delivering so effectively the computing infrastructure essential to our
analyses. Finally, we acknowledge the enduring support for the construction
and operation of the LHC and the CMS detector provided by the following
funding agencies: BMBWF and FWF (Austria); FNRS and FWO (Belgium); CNPq,
CAPES, FAPERJ, FAPERGS, and FAPESP (Brazil); MES (Bulgaria); CERN; CAS, MoST,
and NSFC (China); COLCIENCIAS (Colombia); MSES and CSF (Croatia); RPF
(Cyprus); SENESCYT (Ecuador); MoER, ERC IUT, PUT and ERDF (Estonia); Academy
of Finland, MEC, and HIP (Finland); CEA and CNRS/IN2P3 (France); BMBF, DFG,
and HGF (Germany); GSRT (Greece); NKFIA (Hungary); DAE and DST (India); IPM
(Iran); SFI (Ireland); INFN (Italy); MSIP and NRF (Republic of Korea); MES
(Latvia); LAS (Lithuania); MOE and UM (Malaysia); BUAP, CINVESTAV, CONACYT,
LNS, SEP, and UASLP-FAI (Mexico); MOS (Montenegro); MBIE (New Zealand); PAEC
(Pakistan); MSHE and NSC (Poland); FCT (Portugal); JINR (Dubna); MON, RosAtom,
RAS, RFBR, and NRC KI (Russia); MESTD (Serbia); SEIDI, CPAN, PCTI, and FEDER
(Spain); MOSTR (Sri Lanka); Swiss Funding Agencies (Switzerland); MST
(Taipei); ThEPCenter, IPST, STAR, and NSTDA (Thailand); TUBITAK and TAEK
(Turkey); NASU (Ukraine); STFC (United Kingdom); DOE and NSF (USA).
Individuals have received support from the Marie-Curie program and the
European Research Council and Horizon 2020 Grant, contract Nos. 675440,
752730, and 765710 (European Union); the Leventis Foundation; the A.P. Sloan
Foundation; the Alexander von Humboldt Foundation; the Belgian Federal Science
Policy Office; the Fonds pour la Formation à la Recherche dans l’Industrie et
dans l’Agriculture (FRIA-Belgium); the Agentschap voor Innovatie door
Wetenschap en Technologie (IWT-Belgium); the F.R.S.-FNRS and FWO (Belgium)
under the “Excellence of Science – EOS” – be.h project n. 30820817; the
Beijing Municipal Science & Technology Commission, No. Z191100007219010; the
Ministry of Education, Youth and Sports (MEYS) of the Czech Republic; the
Deutsche Forschungsgemeinschaft (DFG) under Germany’s Excellence Strategy –
EXC 2121 “Quantum Universe” – 390833306; the Lendület (“Momentum”) Program and
the János Bolyai Research Scholarship of the Hungarian Academy of Sciences,
the New National Excellence Program ÚNKP, the NKFIA research grants 123842,
123959, 124845, 124850, 125105, 128713, 128786, and 129058 (Hungary); the
Council of Science and Industrial Research, India; the HOMING PLUS program of
the Foundation for Polish Science, cofinanced from European Union, Regional
Development Fund, the Mobility Plus program of the Ministry of Science and
Higher Education, the National Science Center (Poland), contracts Harmonia
2014/14/M/ST2/00428, Opus 2014/13/B/ST2/02543, 2014/15/B/ST2/03998, and
2015/19/B/ST2/02861, Sonata-bis 2012/07/E/ST2/01406; the National Priorities
Research Program by Qatar National Research Fund; the Ministry of Science and
Education, grant no. 14.W03.31.0026 (Russia); the Tomsk Polytechnic University
Competitiveness Enhancement Program and “Nauka” Project FSWW-2020-0008
(Russia); the Programa Estatal de Fomento de la Investigación Científica y
Técnica de Excelencia María de Maeztu, grant MDM-2015-0509 and the Programa
Severo Ochoa del Principado de Asturias; the Thalis and Aristeia programs
cofinanced by EU-ESF and the Greek NSRF; the Rachadapisek Sompot Fund for
Postdoctoral Fellowship, Chulalongkorn University and the Chulalongkorn
Academic into Its 2nd Century Project Advancement Project (Thailand); the
Kavli Foundation; the Nvidia Corporation; the SuperMicro Corporation; the
Welch Foundation, contract C-1845; and the Weston Havens Foundation (USA).
## References
* [1] CMS Collaboration, “CMS luminosity measurements for the 2016 data taking period”, CMS Physics Analysis Summary CMS-PAS-LUM-17-001, 2017.
* [2] CMS Collaboration, “Particle-flow reconstruction and global event description with the CMS detector”, JINST 12 (2017) P10003, 10.1088/1748-0221/12/10/P10003, arXiv:1706.04965.
* [3] M. Cacciari, G. P. Salam, and G. Soyez, “The catchment area of jets”, JHEP 04 (2008) 005, 10.1088/1126-6708/2008/04/005, arXiv:0802.1188.
* [4] M. Cacciari and G. P. Salam, “Pileup subtraction using jet areas”, Phys. Lett. B 659 (2008) 119, 10.1016/j.physletb.2007.09.077, arXiv:0707.1378.
* [5] CMS Collaboration, “Jet energy scale and resolution in the CMS experiment in pp collisions at 8 TeV”, JINST 12 (2017) P02014, 10.1088/1748-0221/12/02/P02014, arXiv:1607.03663.
* [6] CMS Collaboration, “Jet algorithms performance in 13 TeV data”, CMS Physics Analysis Summary CMS-PAS-JME-16-003, 2017.
* [7] D. Bertolini, P. Harris, M. Low, and N. Tran, “Pileup per particle identification”, JHEP 10 (2014) 059, 10.1007/JHEP10(2014)059, arXiv:1407.6013.
* [8] CMS Collaboration, “Description and performance of track and primary-vertex reconstruction with the CMS tracker”, JINST 9 (2014) P10009, 10.1088/1748-0221/9/10/P10009, arXiv:1405.6569.
* [9] CMS Collaboration, “The CMS experiment at the CERN LHC”, JINST 3 (2008) S08004, 10.1088/1748-0221/3/08/S08004.
* [10] CMS Collaboration, “The CMS trigger system”, JINST 12 (2017) P01020, 10.1088/1748-0221/12/01/P01020, arXiv:1609.02366.
* [11] CMS Collaboration, “Measurement of the inclusive production cross sections for forward jets and for dijet events with one forward and one central jet in $\Pp\Pp$ collisions at $\sqrt{s}=7$ TeV”, JHEP 06 (2012) 036, 10.1007/JHEP06(2012)036, arXiv:1202.0704.
* [12] CMS Collaboration, “Measurement of the inelastic $\Pp\Pp$ cross section at $\sqrt{s}=7$ TeV”, CMS Physics Analysis Summary CMS-PAS-QCD-11-002, 2011.
* [13] T. Sjöstrand et al., “An introduction to PYTHIA 8.2”, Comput. Phys. Commun. 191 (2015) 159, 10.1016/j.cpc.2015.01.024, arXiv:1410.3012.
* [14] B. Andersson, G. Gustafson, G. Ingelman, and T. Sjöstrand, “Parton fragmentation and string dynamics”, Phys. Rept. 97 (1983) 31, 10.1016/0370-1573(83)90080-7.
* [15] T. Sjöstrand, “The merging of jets”, Phys. Lett. B 142 (1984) 420, 10.1016/0370-2693(84)91354-6.
* [16] J. Alwall et al., “The automated computation of tree-level and next-to-leading order differential cross sections, and their matching to parton shower simulations”, JHEP 07 (2014) 079, 10.1007/JHEP07(2014)079, arXiv:1405.0301.
* [17] P. Nason, “A new method for combining NLO QCD with shower Monte Carlo algorithms”, JHEP 11 (2004) 040, 10.1088/1126-6708/2004/11/040, arXiv:hep-ph/0409146.
* [18] S. Frixione, P. Nason, and C. Oleari, “Matching NLO QCD computations with parton shower simulations: the POWHEG method”, JHEP 11 (2007) 070, 10.1088/1126-6708/2007/11/070, arXiv:0709.2092.
* [19] S. Alioli, P. Nason, C. Oleari, and E. Re, “A general framework for implementing NLO calculations in shower Monte Carlo programs: the POWHEG BOX”, JHEP 06 (2010) 043, 10.1007/JHEP06(2010)043, arXiv:1002.2581.
* [20] CMS Collaboration, “Identification techniques for highly boosted W bosons that decay into hadrons”, JHEP 12 (2014) 017, 10.1007/JHEP12(2014)017, arXiv:1410.4227.
* [21] K. Agashe, H. Davoudiasl, G. Perez, and A. Soni, “Warped gravitons at the LHC and beyond”, Phys. Rev. D 76 (2007) 036006, 10.1103/PhysRevD.76.036006, arXiv:hep-ph/0701186.
* [22] A. L. Fitzpatrick, J. Kaplan, L. Randall, and L.-T. Wang, “Searching for the Kaluza-Klein graviton in bulk RS models”, JHEP 09 (2007) 013, 10.1088/1126-6708/2007/09/013, arXiv:hep-ph/0701150.
* [23] O. Antipin, D. Atwood, and A. Soni, “Search for RS gravitons via W(L)W(L) decays”, Phys. Lett. B 666 (2008) 155, 10.1016/j.physletb.2008.07.009, arXiv:0711.3175.
* [24] M. Bähr et al., “Herwig++ physics and manual”, Eur. Phys. J. C 58 (2008) 639, 10.1140/epjc/s10052-008-0798-9, arXiv:0803.0883.
* [25] J. Bellm et al., “Herwig++ 2.7 release note”, (2013). arXiv:1310.6877.
* [26] M. H. Seymour and A. Siodmok, “Constraining MPI models using $\sigma_{eff}$ and recent Tevatron and LHC underlying event data”, JHEP 10 (2013) 113, 10.1007/JHEP10(2013)113, arXiv:1307.5015.
* [27] NNPDF Collaboration, “Parton distributions for the LHC Run II”, JHEP 04 (2015) 040, 10.1007/JHEP04(2015)040, arXiv:1410.8849.
* [28] P. Skands, S. Carrazza, and J. Rojo, “Tuning PYTHIA 8.1: the Monash 2013 tune”, Eur. Phys. J. C 74 (2014) 3024, 10.1140/epjc/s10052-014-3024-y, arXiv:1404.5630.
* [29] CMS Collaboration, “Event generator tunes obtained from underlying event and multiparton scattering measurements”, Eur. Phys. J. C 76 (2016) 155, 10.1140/epjc/s10052-016-3988-x, arXiv:1512.00815.
* [30] CMS Collaboration, “Investigations of the impact of the parton shower tuning in PYTHIA 8 in the modelling of $\mathrm{t\overline{t}}$ at $\sqrt{s}=8$ and 13 TeV”, CMS Physics Analysis Summary CMS-PAS-TOP-16-021, 2016.
* [31] GEANT4 Collaboration, “GEANT4—a simulation toolkit”, Nucl. Instrum. Meth. A 506 (2003) 250, 10.1016/S0168-9002(03)01368-8.
* [32] M. Cacciari, G. P. Salam, and G. Soyez, “The anti-$k_{\mathrm{T}}$ jet clustering algorithm”, JHEP 04 (2008) 063, 10.1088/1126-6708/2008/04/063, arXiv:0802.1189.
* [33] M. Cacciari, G. P. Salam, and G. Soyez, “FastJet user manual”, Eur. Phys. J. C 72 (2012) 1896, 10.1140/epjc/s10052-012-1896-2, arXiv:1111.6097.
* [34] CMS Collaboration, “2017 tracking performance plots”, CMS Detector Performance Note CMS-DP-2017-015, 2017.
* [35] CMS Collaboration, “Identification of heavy, energetic, hadronically decaying particles using machine-learning techniques”, JINST 15 (2020) P06005, 10.1088/1748-0221/15/06/P06005, arXiv:2004.08262.
* [36] A. J. Larkoski, S. Marzani, G. Soyez, and J. Thaler, “Soft drop”, JHEP 05 (2014) 146, 10.1007/JHEP05(2014)146, arXiv:1402.2657.
* [37] M. Dasgupta, A. Fregoso, S. Marzani, and G. P. Salam, “Towards an understanding of jet substructure”, JHEP 09 (2013) 029, 10.1007/JHEP09(2013)029, arXiv:1307.0007.
* [38] Y. L. Dokshitzer, G. D. Leder, S. Moretti, and B. R. Webber, “Better jet clustering algorithms”, JHEP 08 (1997) 001, 10.1088/1126-6708/1997/08/001, arXiv:hep-ph/9707323.
* [39] J. Thaler and K. Van Tilburg, “Maximizing boosted top identification by minimizing $N$-subjettiness”, JHEP 02 (2012) 093, 10.1007/JHEP02(2012)093, arXiv:1108.2701.
* [40] CMS Collaboration, “Performance of missing transverse momentum reconstruction in proton-proton collisions at $\sqrt{s}=$ 13 TeV using the CMS detector”, JINST 14 (2019) P07004, 10.1088/1748-0221/14/07/P07004, arXiv:1903.06078.
* [41] CMS Collaboration, “Performance of the CMS muon detector and muon reconstruction with proton-proton collisions at $\sqrt{s}=$ 13 TeV”, JINST 13 (2018) P06015, 10.1088/1748-0221/13/06/P06015, arXiv:1804.04528.
* [42] CMS Collaboration, “Performance of CMS muon reconstruction in pp collision events at $\sqrt{s}=7$ TeV”, JINST 7 (2012) P10002, 10.1088/1748-0221/7/10/P10002, arXiv:1206.4071.
## .10 The CMS Collaboration
Yerevan Physics Institute, Yerevan, Armenia
A.M. Sirunyan${}^{\textrm{\textdagger}}$, A. Tumasyan Institut für
Hochenergiephysik, Wien, Austria
W. Adam, F. Ambrogi, T. Bergauer, M. Dragicevic, J. Erö, A. Escalante Del
Valle, M. Flechl, R. Frühwirth1, M. Jeitler1, N. Krammer, I. Krätschmer, D.
Liko, T. Madlener, I. Mikulec, N. Rad, J. Schieck1, R. Schöfbeck, M. Spanring,
W. Waltenberger, C.-E. Wulz1, M. Zarucki Institute for Nuclear Problems,
Minsk, Belarus
V. Drugakov, V. Mossolov, J. Suarez Gonzalez Universiteit Antwerpen,
Antwerpen, Belgium
M.R. Darwish, E.A. De Wolf, D. Di Croce, X. Janssen, A. Lelek, M. Pieters, H.
Rejeb Sfar, H. Van Haevermaet, P. Van Mechelen, S. Van Putte, N. Van Remortel
Vrije Universiteit Brussel, Brussel, Belgium
F. Blekman, E.S. Bols, S.S. Chhibra, J. D’Hondt, J. De Clercq, D. Lontkovskyi,
S. Lowette, I. Marchesini, S. Moortgat, Q. Python, K. Skovpen, S. Tavernier,
W. Van Doninck, P. Van Mulders Université Libre de Bruxelles, Bruxelles,
Belgium
D. Beghin, B. Bilin, B. Clerbaux, G. De Lentdecker, H. Delannoy, B. Dorney, L.
Favart, A. Grebenyuk, A.K. Kalsi, L. Moureaux, A. Popov, N. Postiau, E.
Starling, L. Thomas, C. Vander Velde, P. Vanlaer, D. Vannerom Ghent
University, Ghent, Belgium
T. Cornelis, D. Dobur, I. Khvastunov2, M. Niedziela, C. Roskas, M. Tytgat, W.
Verbeke, B. Vermassen, M. Vit Université Catholique de Louvain, Louvain-la-
Neuve, Belgium
O. Bondu, G. Bruno, C. Caputo, P. David, C. Delaere, M. Delcourt, A.
Giammanco, V. Lemaitre, J. Prisciandaro, A. Saggio, M. Vidal Marono, P.
Vischia, J. Zobec Centro Brasileiro de Pesquisas Fisicas, Rio de Janeiro,
Brazil
F.L. Alves, G.A. Alves, G. Correia Silva, C. Hensel, A. Moraes, P. Rebello
Teles Universidade do Estado do Rio de Janeiro, Rio de Janeiro, Brazil
E. Belchior Batista Das Chagas, W. Carvalho, J. Chinellato3, E. Coelho, E.M.
Da Costa, G.G. Da Silveira4, D. De Jesus Damiao, C. De Oliveira Martins, S.
Fonseca De Souza, L.M. Huertas Guativa, H. Malbouisson, J. Martins5, D. Matos
Figueiredo, M. Medina Jaime6, M. Melo De Almeida, C. Mora Herrera, L. Mundim,
H. Nogima, W.L. Prado Da Silva, L.J. Sanchez Rosas, A. Santoro, A. Sznajder,
M. Thiel, E.J. Tonelli Manganote3, F. Torres Da Silva De Araujo, A. Vilela
Pereira Universidade Estadual Paulista a, Universidade Federal do ABC b, São
Paulo, Brazil
C.A. Bernardesa, L. Calligarisa, T.R. Fernandez Perez Tomeia, E.M. Gregoresb,
D.S. Lemos, P.G. Mercadanteb, S.F. Novaesa, SandraS. Padulaa Institute for
Nuclear Research and Nuclear Energy, Bulgarian Academy of Sciences, Sofia,
Bulgaria
A. Aleksandrov, G. Antchev, R. Hadjiiska, P. Iaydjiev, M. Misheva, M. Rodozov,
M. Shopova, G. Sultanov University of Sofia, Sofia, Bulgaria
M. Bonchev, A. Dimitrov, T. Ivanov, L. Litov, B. Pavlov, P. Petkov, A. Petrov
Beihang University, Beijing, China
W. Fang7, X. Gao7, L. Yuan Department of Physics, Tsinghua University,
Beijing, China
M. Ahmad, Z. Hu, Y. Wang Institute of High Energy Physics, Beijing, China
G.M. Chen8, H.S. Chen8, M. Chen, C.H. Jiang, D. Leggat, H. Liao, Z. Liu, A.
Spiezia, J. Tao, E. Yazgan, H. Zhang, S. Zhang8, J. Zhao State Key Laboratory
of Nuclear Physics and Technology, Peking University, Beijing, China
A. Agapitos, Y. Ban, G. Chen, A. Levin, J. Li, L. Li, Q. Li, Y. Mao, S.J.
Qian, D. Wang, Q. Wang Zhejiang University, Hangzhou, China
M. Xiao Universidad de Los Andes, Bogota, Colombia
C. Avila, A. Cabrera, C. Florez, C.F. González Hernández, M.A. Segura Delgado
Universidad de Antioquia, Medellin, Colombia
J. Mejia Guisao, J.D. Ruiz Alvarez, C.A. Salazar González, N. Vanegas Arbelaez
University of Split, Faculty of Electrical Engineering, Mechanical Engineering
and Naval Architecture, Split, Croatia
D. Giljanović, N. Godinovic, D. Lelas, I. Puljak, T. Sculac University of
Split, Faculty of Science, Split, Croatia
Z. Antunovic, M. Kovac Institute Rudjer Boskovic, Zagreb, Croatia
V. Brigljevic, D. Ferencek, K. Kadija, B. Mesic, M. Roguljic, A. Starodumov9,
T. Susa University of Cyprus, Nicosia, Cyprus
M.W. Ather, A. Attikis, E. Erodotou, A. Ioannou, M. Kolosova, S. Konstantinou,
G. Mavromanolakis, J. Mousa, C. Nicolaou, F. Ptochos, P.A. Razis, H.
Rykaczewski, D. Tsiakkouri Charles University, Prague, Czech Republic
M. Finger10, M. Finger Jr.10, A. Kveton, J. Tomsa Escuela Politecnica
Nacional, Quito, Ecuador
E. Ayala Universidad San Francisco de Quito, Quito, Ecuador
E. Carrera Jarrin Academy of Scientific Research and Technology of the Arab
Republic of Egypt, Egyptian Network of High Energy Physics, Cairo, Egypt
Y. Assran11,12, S. Khalil13 National Institute of Chemical Physics and
Biophysics, Tallinn, Estonia
S. Bhowmik, A. Carvalho Antunes De Oliveira, R.K. Dewanjee, K. Ehataht, M.
Kadastik, M. Raidal, C. Veelken Department of Physics, University of Helsinki,
Helsinki, Finland
P. Eerola, L. Forthomme, H. Kirschenmann, K. Osterberg, M. Voutilainen
Helsinki Institute of Physics, Helsinki, Finland
F. Garcia, J. Havukainen, J.K. Heikkilä, V. Karimäki, M.S. Kim, R. Kinnunen,
T. Lampén, K. Lassila-Perini, S. Laurila, S. Lehti, T. Lindén, P. Luukka, T.
Mäenpää, H. Siikonen, E. Tuominen, J. Tuominiemi Lappeenranta University of
Technology, Lappeenranta, Finland
T. Tuuva IRFU, CEA, Université Paris-Saclay, Gif-sur-Yvette, France
M. Besancon, F. Couderc, M. Dejardin, D. Denegri, B. Fabbro, J.L. Faure, F.
Ferri, S. Ganjour, A. Givernaud, P. Gras, G. Hamel de Monchenault, P. Jarry,
C. Leloup, B. Lenzi, E. Locci, J. Malcles, J. Rander, A. Rosowsky, M.Ö. Sahin,
A. Savoy-Navarro14, M. Titov, G.B. Yu Laboratoire Leprince-Ringuet,
CNRS/IN2P3, Ecole Polytechnique, Institut Polytechnique de Paris
S. Ahuja, C. Amendola, F. Beaudette, P. Busson, C. Charlot, B. Diab, G.
Falmagne, R. Granier de Cassagnac, I. Kucher, A. Lobanov, C. Martin Perez, M.
Nguyen, C. Ochando, P. Paganini, J. Rembser, R. Salerno, J.B. Sauvan, Y.
Sirois, A. Zabi, A. Zghiche Université de Strasbourg, CNRS, IPHC UMR 7178,
Strasbourg, France
J.-L. Agram15, J. Andrea, D. Bloch, G. Bourgatte, J.-M. Brom, E.C. Chabert, C.
Collard, E. Conte15, J.-C. Fontaine15, D. Gelé, U. Goerlach, M. Jansová, A.-C.
Le Bihan, N. Tonon, P. Van Hove Centre de Calcul de l’Institut National de
Physique Nucleaire et de Physique des Particules, CNRS/IN2P3, Villeurbanne,
France
S. Gadrat Université de Lyon, Université Claude Bernard Lyon 1, CNRS-IN2P3,
Institut de Physique Nucléaire de Lyon, Villeurbanne, France
S. Beauceron, C. Bernet, G. Boudoul, C. Camen, A. Carle, N. Chanon, R.
Chierici, D. Contardo, P. Depasse, H. El Mamouni, J. Fay, S. Gascon, M.
Gouzevitch, B. Ille, Sa. Jain, F. Lagarde, I.B. Laktineh, H. Lattaud, A.
Lesauvage, M. Lethuillier, L. Mirabito, S. Perries, V. Sordini, L. Torterotot,
G. Touquet, M. Vander Donckt, S. Viret Georgian Technical University, Tbilisi,
Georgia
T. Toriashvili16 Tbilisi State University, Tbilisi, Georgia
Z. Tsamalaidze10 RWTH Aachen University, I. Physikalisches Institut, Aachen,
Germany
C. Autermann, L. Feld, K. Klein, M. Lipinski, D. Meuser, A. Pauls, M. Preuten,
M.P. Rauch, J. Schulz, M. Teroerde, B. Wittmer RWTH Aachen University, III.
Physikalisches Institut A, Aachen, Germany
M. Erdmann, B. Fischer, S. Ghosh, T. Hebbeker, K. Hoepfner, H. Keller, L.
Mastrolorenzo, M. Merschmeyer, A. Meyer, P. Millet, G. Mocellin, S. Mondal, S.
Mukherjee, D. Noll, A. Novak, T. Pook, A. Pozdnyakov, T. Quast, M. Radziej, Y.
Rath, H. Reithler, J. Roemer, A. Schmidt, S.C. Schuler, A. Sharma, S.
Wiedenbeck, S. Zaleski RWTH Aachen University, III. Physikalisches Institut B,
Aachen, Germany
G. Flügge, W. Haj Ahmad17, O. Hlushchenko, T. Kress, T. Müller, A. Nowack, C.
Pistone, O. Pooth, D. Roy, H. Sert, A. Stahl18 Deutsches Elektronen-
Synchrotron, Hamburg, Germany
M. Aldaya Martin, P. Asmuss, I. Babounikau, H. Bakhshiansohi, K. Beernaert, O.
Behnke, A. Bermúdez Martínez, D. Bertsche, A.A. Bin Anuar, K. Borras19, V.
Botta, A. Campbell, A. Cardini, P. Connor, S. Consuegra Rodríguez, C.
Contreras-Campana, V. Danilov, A. De Wit, M.M. Defranchis, C. Diez Pardos, D.
Domínguez Damiani, G. Eckerlin, D. Eckstein, T. Eichhorn, A. Elwood, E. Eren,
E. Gallo20, A. Geiser, A. Grohsjean, M. Guthoff, M. Haranko, A. Harb, A.
Jafari, N.Z. Jomhari, H. Jung, A. Kasem19, M. Kasemann, H. Kaveh, J. Keaveney,
C. Kleinwort, J. Knolle, D. Krücker, W. Lange, T. Lenz, J. Lidrych, K. Lipka,
W. Lohmann21, R. Mankel, I.-A. Melzer-Pellmann, A.B. Meyer, M. Meyer, M.
Missiroli, J. Mnich, A. Mussgiller, V. Myronenko, D. Pérez Adán, S.K.
Pflitsch, D. Pitzl, A. Raspereza, A. Saibel, M. Savitskyi, V. Scheurer, P.
Schütze, C. Schwanenberger, R. Shevchenko, A. Singh, H. Tholen, O. Turkot, A.
Vagnerini, M. Van De Klundert, R. Walsh, Y. Wen, K. Wichmann, C. Wissing, O.
Zenaiev, R. Zlebcik University of Hamburg, Hamburg, Germany
R. Aggleton, S. Bein, L. Benato, A. Benecke, V. Blobel, T. Dreyer, A.
Ebrahimi, F. Feindt, A. Fröhlich, C. Garbers, E. Garutti, D. Gonzalez, P.
Gunnellini, J. Haller, A. Hinzmann, A. Karavdina, G. Kasieczka, R. Klanner, R.
Kogler, N. Kovalchuk, S. Kurz, V. Kutzner, J. Lange, T. Lange, A. Malara, J.
Multhaup, C.E.N. Niemeyer, A. Perieanu, A. Reimers, O. Rieger, C. Scharf, P.
Schleper, S. Schumann, J. Schwandt, J. Sonneveld, H. Stadie, G. Steinbrück,
F.M. Stober, B. Vormwald, I. Zoi Karlsruher Institut fuer Technologie,
Karlsruhe, Germany
M. Akbiyik, C. Barth, M. Baselga, S. Baur, T. Berger, E. Butz, R. Caspart, T.
Chwalek, W. De Boer, A. Dierlamm, K. El Morabit, N. Faltermann, M. Giffels, P.
Goldenzweig, A. Gottmann, M.A. Harrendorf, F. Hartmann18, U. Husemann, S.
Kudella, S. Mitra, M.U. Mozer, D. Müller, Th. Müller, M. Musich, A. Nürnberg,
G. Quast, K. Rabbertz, M. Schröder, I. Shvetsov, H.J. Simonis, R. Ulrich, M.
Wassmer, M. Weber, C. Wöhrmann, R. Wolf, S. Wozniewski Institute of Nuclear
and Particle Physics (INPP), NCSR Demokritos, Aghia Paraskevi, Greece
G. Anagnostou, P. Asenov, G. Daskalakis, T. Geralis, A. Kyriakis, D. Loukas,
G. Paspalaki National and Kapodistrian University of Athens, Athens, Greece
M. Diamantopoulou, G. Karathanasis, P. Kontaxakis, A. Manousakis-katsikakis,
A. Panagiotou, I. Papavergou, N. Saoulidou, A. Stakia, K. Theofilatos, E.
Tziaferi, K. Vellidis, E. Vourliotis National Technical University of Athens,
Athens, Greece
G. Bakas, K. Kousouris, I. Papakrivopoulos, G. Tsipolitis University of
Ioánnina, Ioánnina, Greece
I. Evangelou, C. Foudas, P. Gianneios, P. Katsoulis, P. Kokkas, S. Mallios, K.
Manitara, N. Manthos, I. Papadopoulos, J. Strologas, F.A. Triantis, D.
Tsitsonis MTA-ELTE Lendület CMS Particle and Nuclear Physics Group, Eötvös
Loránd University, Budapest, Hungary
M. Bartók22, R. Chudasama, M. Csanad, P. Major, K. Mandal, A. Mehta, M.I.
Nagy, G. Pasztor, O. Surányi, G.I. Veres Wigner Research Centre for Physics,
Budapest, Hungary
G. Bencze, C. Hajdu, D. Horvath23, F. Sikler, T.Á. Vámi, V. Veszpremi, G.
Vesztergombi${}^{\textrm{\textdagger}}$ Institute of Nuclear Research ATOMKI,
Debrecen, Hungary
N. Beni, S. Czellar, J. Karancsi22, J. Molnar, Z. Szillasi Institute of
Physics, University of Debrecen, Debrecen, Hungary
P. Raics, D. Teyssier, Z.L. Trocsanyi, B. Ujvari Eszterhazy Karoly University,
Karoly Robert Campus, Gyongyos, Hungary
T. Csorgo, W.J. Metzger, F. Nemes, T. Novak Indian Institute of Science
(IISc), Bangalore, India
S. Choudhury, J.R. Komaragiri, P.C. Tiwari National Institute of Science
Education and Research, HBNI, Bhubaneswar, India
S. Bahinipati25, C. Kar, G. Kole, P. Mal, V.K. Muraleedharan Nair Bindhu, A.
Nayak26, D.K. Sahoo25, S.K. Swain Panjab University, Chandigarh, India
S. Bansal, S.B. Beri, V. Bhatnagar, S. Chauhan, N. Dhingra27, R. Gupta, A.
Kaur, M. Kaur, S. Kaur, P. Kumari, M. Lohan, M. Meena, K. Sandeep, S. Sharma,
J.B. Singh, A.K. Virdi, G. Walia University of Delhi, Delhi, India
A. Bhardwaj, B.C. Choudhary, R.B. Garg, M. Gola, S. Keshri, Ashok Kumar, M.
Naimuddin, P. Priyanka, K. Ranjan, Aashaq Shah, R. Sharma Saha Institute of
Nuclear Physics, HBNI, Kolkata, India
R. Bhardwaj28, M. Bharti28, R. Bhattacharya, S. Bhattacharya, U. Bhawandeep28,
D. Bhowmik, S. Dutta, S. Ghosh, B. Gomber29, M. Maity30, K. Mondal, S. Nandan,
A. Purohit, P.K. Rout, G. Saha, S. Sarkar, T. Sarkar30, M. Sharan, B. Singh28,
S. Thakur28 Indian Institute of Technology Madras, Madras, India
P.K. Behera, P. Kalbhor, A. Muhammad, P.R. Pujahari, A. Sharma, A.K. Sikdar
Bhabha Atomic Research Centre, Mumbai, India
D. Dutta, V. Jha, V. Kumar, D.K. Mishra, P.K. Netrakanti, L.M. Pant, P. Shukla
Tata Institute of Fundamental Research-A, Mumbai, India
T. Aziz, M.A. Bhat, S. Dugad, G.B. Mohanty, N. Sur, RavindraKumar Verma Tata
Institute of Fundamental Research-B, Mumbai, India
S. Banerjee, S. Bhattacharya, S. Chatterjee, P. Das, M. Guchait, S. Karmakar,
S. Kumar, G. Majumder, K. Mazumdar, N. Sahoo, S. Sawant Indian Institute of
Science Education and Research (IISER), Pune, India
S. Dube, B. Kansal, A. Kapoor, K. Kothekar, S. Pandey, A. Rane, A. Rastogi, S.
Sharma Institute for Research in Fundamental Sciences (IPM), Tehran, Iran
S. Chenarani31, E. Eskandari Tadavani, S.M. Etesami31, M. Khakzad, M.
Mohammadi Najafabadi, M. Naseri, F. Rezaei Hosseinabadi University College
Dublin, Dublin, Ireland
M. Felcini, M. Grunewald INFN Sezione di Bari a, Università di Bari b,
Politecnico di Bari c, Bari, Italy
M. Abbresciaa,b, R. Alya,b,32, C. Calabriaa,b, A. Colaleoa, D. Creanzaa,c, L.
Cristellaa,b, N. De Filippisa,c, M. De Palmaa,b, A. Di Florioa,b, W.
Elmetenaweea,b, L. Fiorea, A. Gelmia,b, G. Iasellia,c, M. Incea,b, S.
Lezkia,b, G. Maggia,c, M. Maggia, J.A. Merlin, G. Minielloa,b, S. Mya,b, S.
Nuzzoa,b, A. Pompilia,b, G. Pugliesea,c, R. Radognaa, A. Ranieria, G.
Selvaggia,b, L. Silvestrisa, F.M. Simonea,b, R. Vendittia, P. Verwilligena
INFN Sezione di Bologna a, Università di Bologna b, Bologna, Italy
G. Abbiendia, C. Battilanaa,b, D. Bonacorsia,b, L. Borgonovia,b, S. Braibant-
Giacomellia,b, R. Campaninia,b, P. Capiluppia,b, A. Castroa,b, F.R. Cavalloa,
C. Cioccaa, G. Codispotia,b, M. Cuffiania,b, G.M. Dallavallea, F. Fabbria, A.
Fanfania,b, E. Fontanesia,b, P. Giacomellia, C. Grandia, L. Guiduccia,b, F.
Iemmia,b, S. Lo Meoa,33, S. Marcellinia, G. Masettia, F.L. Navarriaa,b, A.
Perrottaa, F. Primaveraa,b, A.M. Rossia,b, T. Rovellia,b, G.P. Sirolia,b, N.
Tosia INFN Sezione di Catania a, Università di Catania b, Catania, Italy
S. Albergoa,b,34, S. Costaa,b, A. Di Mattiaa, R. Potenzaa,b, A. Tricomia,b,34,
C. Tuvea,b INFN Sezione di Firenze a, Università di Firenze b, Firenze, Italy
G. Barbaglia, A. Cassesea, R. Ceccarellia,b, V. Ciullia,b, C. Civininia, R.
D’Alessandroa,b, F. Fioria,c, E. Focardia,b, G. Latinoa,b, P. Lenzia,b, M.
Meschinia, S. Paolettia, G. Sguazzonia, L. Viliania INFN Laboratori Nazionali
di Frascati, Frascati, Italy
L. Benussi, S. Bianco, D. Piccolo INFN Sezione di Genova a, Università di
Genova b, Genova, Italy
M. Bozzoa,b, F. Ferroa, R. Mulargiaa,b, E. Robuttia, S. Tosia,b INFN Sezione
di Milano-Bicocca a, Università di Milano-Bicocca b, Milano, Italy
A. Benagliaa, A. Beschia,b, F. Brivioa,b, V. Cirioloa,b,18, M.E. Dinardoa,b,
P. Dinia, S. Gennaia, A. Ghezzia,b, P. Govonia,b, L. Guzzia,b, M. Malbertia,
S. Malvezzia, D. Menascea, F. Montia,b, L. Moronia, M. Paganonia,b, D.
Pedrinia, S. Ragazzia,b, T. Tabarelli de Fatisa,b, D. Valsecchia,b, D.
Zuoloa,b INFN Sezione di Napoli a, Università di Napoli ’Federico II’ b,
Napoli, Italy, Università della Basilicata c, Potenza, Italy, Università G.
Marconi d, Roma, Italy
S. Buontempoa, N. Cavalloa,c, A. De Iorioa,b, A. Di Crescenzoa,b, F.
Fabozzia,c, F. Fiengaa, G. Galatia, A.O.M. Iorioa,b, L. Listaa,b, S.
Meolaa,d,18, P. Paoluccia,18, B. Rossia, C. Sciaccaa,b, E. Voevodinaa,b INFN
Sezione di Padova a, Università di Padova b, Padova, Italy, Università di
Trento c, Trento, Italy
P. Azzia, N. Bacchettaa, D. Biselloa,b, A. Bolettia,b, A. Bragagnoloa,b, R.
Carlina,b, P. Checchiaa, P. De Castro Manzanoa, T. Dorigoa, U. Dossellia, F.
Gasparinia,b, U. Gasparinia,b, A. Gozzelinoa, S.Y. Hoha,b, P. Lujana, M.
Margonia,b, A.T. Meneguzzoa,b, J. Pazzinia,b, M. Presillab, P. Ronchesea,b, R.
Rossina,b, F. Simonettoa,b, A. Tikoa, M. Tosia,b, M. Zanettia,b, P. Zottoa,b,
G. Zumerlea,b INFN Sezione di Pavia a, Università di Pavia b, Pavia, Italy
A. Braghieria, D. Fiorinaa,b, P. Montagnaa,b, S.P. Rattia,b, V. Rea, M.
Ressegottia,b, C. Riccardia,b, P. Salvinia, I. Vaia, P. Vituloa,b INFN Sezione
di Perugia a, Università di Perugia b, Perugia, Italy
M. Biasinia,b, G.M. Bileia, D. Ciangottinia,b, L. Fanòa,b, P. Laricciaa,b, R.
Leonardia,b, E. Manonia, G. Mantovania,b, V. Mariania,b, M. Menichellia, A.
Rossia,b, A. Santocchiaa,b, D. Spigaa INFN Sezione di Pisa a, Università di
Pisa b, Scuola Normale Superiore di Pisa c, Pisa, Italy
K. Androsova, P. Azzurria, G. Bagliesia, V. Bertacchia,c, L. Bianchinia, T.
Boccalia, R. Castaldia, M.A. Cioccia,b, R. Dell’Orsoa, S. Donatoa, G. Fedia,
L. Gianninia,c, A. Giassia, M.T. Grippoa, F. Ligabuea,c, E. Mancaa,c, G.
Mandorlia,c, A. Messineoa,b, F. Pallaa, A. Rizzia,b, G. Rolandi35, S. Roy
Chowdhury, A. Scribanoa, P. Spagnoloa, R. Tenchinia, G. Tonellia,b, N.
Turinia, A. Venturia, P.G. Verdinia INFN Sezione di Roma a, Sapienza
Università di Roma b, Rome, Italy
F. Cavallaria, M. Cipriania,b, D. Del Rea,b, E. Di Marcoa, M. Diemoza, E.
Longoa,b, P. Meridiania, G. Organtinia,b, F. Pandolfia, R. Paramattia,b, C.
Quarantaa,b, S. Rahatloua,b, C. Rovellia, F. Santanastasioa,b, L. Soffia,b
INFN Sezione di Torino a, Università di Torino b, Torino, Italy, Università
del Piemonte Orientale c, Novara, Italy
N. Amapanea,b, R. Arcidiaconoa,c, S. Argiroa,b, M. Arneodoa,c, N. Bartosika,
R. Bellana,b, A. Bellora, C. Biinoa, A. Cappatia,b, N. Cartigliaa, S.
Comettia, M. Costaa,b, R. Covarellia,b, N. Demariaa, B. Kiania,b, F. Legger,
C. Mariottia, S. Masellia, E. Migliorea,b, V. Monacoa,b, E. Monteila,b, M.
Montenoa, M.M. Obertinoa,b, G. Ortonaa,b, L. Pachera,b, N. Pastronea, M.
Pelliccionia, G.L. Pinna Angionia,b, A. Romeroa,b, M. Ruspaa,c, R.
Salvaticoa,b, V. Solaa, A. Solanoa,b, D. Soldia,b, A. Staianoa, D. Trocinoa,b
INFN Sezione di Trieste a, Università di Trieste b, Trieste, Italy
S. Belfortea, V. Candelisea,b, M. Casarsaa, F. Cossuttia, A. Da Rolda,b, G.
Della Riccaa,b, F. Vazzolera,b, A. Zanettia Kyungpook National University,
Daegu, Korea
B. Kim, D.H. Kim, G.N. Kim, J. Lee, S.W. Lee, C.S. Moon, Y.D. Oh, S.I. Pak, S.
Sekmen, D.C. Son, Y.C. Yang Chonnam National University, Institute for
Universe and Elementary Particles, Kwangju, Korea
H. Kim, D.H. Moon, G. Oh Hanyang University, Seoul, Korea
B. Francois, T.J. Kim, J. Park Korea University, Seoul, Korea
S. Cho, S. Choi, Y. Go, S. Ha, B. Hong, K. Lee, K.S. Lee, J. Lim, J. Park,
S.K. Park, Y. Roh, J. Yoo Kyung Hee University, Department of Physics
J. Goh Sejong University, Seoul, Korea
H.S. Kim Seoul National University, Seoul, Korea
J. Almond, J.H. Bhyun, J. Choi, S. Jeon, J. Kim, J.S. Kim, H. Lee, K. Lee, S.
Lee, K. Nam, M. Oh, S.B. Oh, B.C. Radburn-Smith, U.K. Yang, H.D. Yoo, I. Yoon
University of Seoul, Seoul, Korea
D. Jeon, J.H. Kim, J.S.H. Lee, I.C. Park, I.J Watson Sungkyunkwan University,
Suwon, Korea
Y. Choi, C. Hwang, Y. Jeong, J. Lee, Y. Lee, I. Yu Riga Technical University,
Riga, Latvia
V. Veckalns36 Vilnius University, Vilnius, Lithuania
V. Dudenas, A. Juodagalvis, A. Rinkevicius, G. Tamulaitis, J. Vaitkus National
Centre for Particle Physics, Universiti Malaya, Kuala Lumpur, Malaysia
Z.A. Ibrahim, F. Mohamad Idris37, W.A.T. Wan Abdullah, M.N. Yusli, Z. Zolkapli
Universidad de Sonora (UNISON), Hermosillo, Mexico
J.F. Benitez, A. Castaneda Hernandez, J.A. Murillo Quijada, L. Valencia Palomo
Centro de Investigacion y de Estudios Avanzados del IPN, Mexico City, Mexico
H. Castilla-Valdez, E. De La Cruz-Burelo, I. Heredia-De La Cruz38, R. Lopez-
Fernandez, A. Sanchez-Hernandez Universidad Iberoamericana, Mexico City,
Mexico
S. Carrillo Moreno, C. Oropeza Barrera, M. Ramirez-Garcia, F. Vazquez Valencia
Benemerita Universidad Autonoma de Puebla, Puebla, Mexico
J. Eysermans, I. Pedraza, H.A. Salazar Ibarguen, C. Uribe Estrada Universidad
Autónoma de San Luis Potosí, San Luis Potosí, Mexico
A. Morelos Pineda University of Montenegro, Podgorica, Montenegro
J. Mijuskovic2, N. Raicevic University of Auckland, Auckland, New Zealand
D. Krofcheck University of Canterbury, Christchurch, New Zealand
S. Bheesette, P.H. Butler National Centre for Physics, Quaid-I-Azam
University, Islamabad, Pakistan
A. Ahmad, M. Ahmad, Q. Hassan, H.R. Hoorani, W.A. Khan, M.A. Shah, M. Shoaib,
M. Waqas AGH University of Science and Technology Faculty of Computer Science,
Electronics and Telecommunications, Krakow, Poland
V. Avati, L. Grzanka, M. Malawski National Centre for Nuclear Research,
Swierk, Poland
H. Bialkowska, M. Bluj, B. Boimska, M. Górski, M. Kazana, M. Szleper, P.
Zalewski Institute of Experimental Physics, Faculty of Physics, University of
Warsaw, Warsaw, Poland
K. Bunkowski, A. Byszuk39, K. Doroba, A. Kalinowski, M. Konecki, J.
Krolikowski, M. Olszewski, M. Walczak Laboratório de Instrumentação e Física
Experimental de Partículas, Lisboa, Portugal
M. Araujo, P. Bargassa, D. Bastos, A. Di Francesco, P. Faccioli, B. Galinhas,
M. Gallinaro, J. Hollar, N. Leonardo, T. Niknejad, J. Seixas, K. Shchelina, G.
Strong, O. Toldaiev, J. Varela Joint Institute for Nuclear Research, Dubna,
Russia
S. Afanasiev, P. Bunin, M. Gavrilenko, I. Golutvin, I. Gorbunov, A. Kamenev,
V. Karjavine, A. Lanev, A. Malakhov, V. Matveev40,41, P. Moisenz, V. Palichik,
V. Perelygin, M. Savina, S. Shmatov, S. Shulha, N. Skatchkov, V. Smirnov, N.
Voytishin, A. Zarubin Petersburg Nuclear Physics Institute, Gatchina (St.
Petersburg), Russia
L. Chtchipounov, V. Golovtcov, Y. Ivanov, V. Kim42, E. Kuznetsova43, P.
Levchenko, V. Murzin, V. Oreshkin, I. Smirnov, D. Sosnov, V. Sulimov, L.
Uvarov, A. Vorobyev Institute for Nuclear Research, Moscow, Russia
Yu. Andreev, A. Dermenev, S. Gninenko, N. Golubev, A. Karneyeu, M. Kirsanov,
N. Krasnikov, A. Pashenkov, D. Tlisov, A. Toropin Institute for Theoretical
and Experimental Physics named by A.I. Alikhanov of NRC ‘Kurchatov Institute’,
Moscow, Russia
V. Epshteyn, V. Gavrilov, N. Lychkovskaya, A. Nikitenko44, V. Popov, I.
Pozdnyakov, G. Safronov, A. Spiridonov, A. Stepennov, M. Toms, E. Vlasov, A.
Zhokin Moscow Institute of Physics and Technology, Moscow, Russia
T. Aushev National Research Nuclear University ’Moscow Engineering Physics
Institute’ (MEPhI), Moscow, Russia
M. Chadeeva45, P. Parygin, D. Philippov, E. Popova, V. Rusinov P.N. Lebedev
Physical Institute, Moscow, Russia
V. Andreev, M. Azarkin, I. Dremin, M. Kirakosyan, A. Terkulov Skobeltsyn
Institute of Nuclear Physics, Lomonosov Moscow State University, Moscow,
Russia
A. Belyaev, E. Boos, M. Dubinin46, L. Dudko, A. Ershov, A. Gribushin, V.
Klyukhin, O. Kodolova, I. Lokhtin, S. Obraztsov, S. Petrushanko, V. Savrin, A.
Snigirev Novosibirsk State University (NSU), Novosibirsk, Russia
A. Barnyakov47, V. Blinov47, T. Dimova47, L. Kardapoltsev47, Y. Skovpen47
Institute for High Energy Physics of National Research Centre ‘Kurchatov
Institute’, Protvino, Russia
I. Azhgirey, I. Bayshev, S. Bitioukov, V. Kachanov, D. Konstantinov, P.
Mandrik, V. Petrov, R. Ryutin, S. Slabospitskii, A. Sobol, S. Troshin, N.
Tyurin, A. Uzunian, A. Volkov National Research Tomsk Polytechnic University,
Tomsk, Russia
A. Babaev, A. Iuzhakov, V. Okhotnikov Tomsk State University, Tomsk, Russia
V. Borchsh, V. Ivanchenko, E. Tcherniaev University of Belgrade: Faculty of
Physics and VINCA Institute of Nuclear Sciences
P. Adzic48, P. Cirkovic, M. Dordevic, P. Milenovic, J. Milosevic, M.
Stojanovic Centro de Investigaciones Energéticas Medioambientales y
Tecnológicas (CIEMAT), Madrid, Spain
M. Aguilar-Benitez, J. Alcaraz Maestre, A. Álvarez Fernández, I. Bachiller, M.
Barrio Luna, CristinaF. Bedoya, J.A. Brochero Cifuentes, C.A. Carrillo
Montoya, M. Cepeda, M. Cerrada, N. Colino, B. De La Cruz, A. Delgado Peris,
J.P. Fernández Ramos, J. Flix, M.C. Fouz, O. Gonzalez Lopez, S. Goy Lopez,
J.M. Hernandez, M.I. Josa, D. Moran, Á. Navarro Tobar, A. Pérez-Calero
Yzquierdo, J. Puerta Pelayo, I. Redondo, L. Romero, S. Sánchez Navas, M.S.
Soares, A. Triossi, C. Willmott Universidad Autónoma de Madrid, Madrid, Spain
C. Albajar, J.F. de Trocóniz, R. Reyes-Almanza Universidad de Oviedo,
Instituto Universitario de Ciencias y Tecnologías Espaciales de Asturias
(ICTEA), Oviedo, Spain
B. Alvarez Gonzalez, J. Cuevas, C. Erice, J. Fernandez Menendez, S. Folgueras,
I. Gonzalez Caballero, J.R. González Fernández, E. Palencia Cortezon, V.
Rodríguez Bouza, S. Sanchez Cruz Instituto de Física de Cantabria (IFCA),
CSIC-Universidad de Cantabria, Santander, Spain
I.J. Cabrillo, A. Calderon, B. Chazin Quero, J. Duarte Campderros, M.
Fernandez, P.J. Fernández Manteca, A. García Alonso, G. Gomez, C. Martinez
Rivero, P. Martinez Ruiz del Arbol, F. Matorras, J. Piedra Gomez, C. Prieels,
T. Rodrigo, A. Ruiz-Jimeno, L. Russo49, L. Scodellaro, I. Vila, J.M. Vizan
Garcia University of Colombo, Colombo, Sri Lanka
D.U.J. Sonnadara University of Ruhuna, Department of Physics, Matara, Sri
Lanka
W.G.D. Dharmaratna, N. Wickramage CERN, European Organization for Nuclear
Research, Geneva, Switzerland
D. Abbaneo, B. Akgun, E. Auffray, G. Auzinger, J. Baechler, P. Baillon, A.H.
Ball, D. Barney, J. Bendavid, M. Bianco, A. Bocci, P. Bortignon, E. Bossini,
E. Brondolin, T. Camporesi, A. Caratelli, G. Cerminara, E. Chapon, G.
Cucciati, D. d’Enterria, A. Dabrowski, N. Daci, V. Daponte, A. David, O.
Davignon, A. De Roeck, M. Deile, M. Dobson, M. Dünser, N. Dupont, A. Elliott-
Peisert, N. Emriskova, F. Fallavollita50, D. Fasanella, S. Fiorendi, G.
Franzoni, J. Fulcher, W. Funk, S. Giani, D. Gigi, K. Gill, F. Glege, L.
Gouskos, M. Gruchala, M. Guilbaud, D. Gulhan, J. Hegeman, C. Heidegger, Y.
Iiyama, V. Innocente, T. James, P. Janot, O. Karacheban21, J. Kaspar, J.
Kieseler, M. Krammer1, N. Kratochwil, C. Lange, P. Lecoq, C. Lourenço, L.
Malgeri, M. Mannelli, A. Massironi, F. Meijers, S. Mersi, E. Meschi, F.
Moortgat, M. Mulders, J. Ngadiuba, J. Niedziela, S. Nourbakhsh, S. Orfanelli,
L. Orsini, F. Pantaleo18, L. Pape, E. Perez, M. Peruzzi, A. Petrilli, G.
Petrucciani, A. Pfeiffer, M. Pierini, F.M. Pitters, D. Rabady, A. Racz, M.
Rieger, M. Rovere, H. Sakulin, J. Salfeld-Nebgen, C. Schäfer, C. Schwick, M.
Selvaggi, A. Sharma, P. Silva, W. Snoeys, P. Sphicas51, J. Steggemann, S.
Summers, V.R. Tavolaro, D. Treille, A. Tsirou, G.P. Van Onsem, A. Vartak, M.
Verzetti, W.D. Zeuner Paul Scherrer Institut, Villigen, Switzerland
L. Caminada52, K. Deiters, W. Erdmann, R. Horisberger, Q. Ingram, H.C.
Kaestli, D. Kotlinski, U. Langenegger, T. Rohe, S.A. Wiederkehr ETH Zurich -
Institute for Particle Physics and Astrophysics (IPA), Zurich, Switzerland
M. Backhaus, P. Berger, N. Chernyavskaya, G. Dissertori, M. Dittmar, M.
Donegà, C. Dorfer, T.A. Gómez Espinosa, C. Grab, D. Hits, W. Lustermann, R.A.
Manzoni, M.T. Meinhard, F. Micheli, P. Musella, F. Nessi-Tedaldi, F. Pauss, G.
Perrin, L. Perrozzi, S. Pigazzini, M.G. Ratti, M. Reichmann, C. Reissel, T.
Reitenspiess, B. Ristic, D. Ruini, D.A. Sanz Becerra, M. Schönenberger, L.
Shchutska, M.L. Vesterbacka Olsson, R. Wallny, D.H. Zhu Universität Zürich,
Zurich, Switzerland
T.K. Aarrestad, C. Amsler53, C. Botta, D. Brzhechko, M.F. Canelli, A. De Cosa,
R. Del Burgo, B. Kilminster, S. Leontsinis, V.M. Mikuni, I. Neutelings, G.
Rauco, P. Robmann, K. Schweiger, C. Seitz, Y. Takahashi, S. Wertz, A.
Zucchetta National Central University, Chung-Li, Taiwan
T.H. Doan, C.M. Kuo, W. Lin, A. Roy, S.S. Yu National Taiwan University (NTU),
Taipei, Taiwan
P. Chang, Y. Chao, K.F. Chen, P.H. Chen, W.-S. Hou, Y.y. Li, R.-S. Lu, E.
Paganis, A. Psallidas, A. Steen Chulalongkorn University, Faculty of Science,
Department of Physics, Bangkok, Thailand
B. Asavapibhop, C. Asawatangtrakuldee, N. Srimanobhas, N. Suwonjandee Çukurova
University, Physics Department, Science and Art Faculty, Adana, Turkey
A. Bat, F. Boran, A. Celik54, S. Damarseckin55, Z.S. Demiroglu, F. Dolek, C.
Dozen56, I. Dumanoglu, G. Gokbulut, EmineGurpinar Guler57, Y. Guler, I. Hos58,
C. Isik, E.E. Kangal59, O. Kara, A. Kayis Topaksu, U. Kiminsu, G. Onengut, K.
Ozdemir60, S. Ozturk61, A.E. Simsek, U.G. Tok, S. Turkcapar, I.S. Zorbakir, C.
Zorbilmez Middle East Technical University, Physics Department, Ankara, Turkey
B. Isildak62, G. Karapinar63, M. Yalvac Bogazici University, Istanbul, Turkey
I.O. Atakisi, E. Gülmez, M. Kaya64, O. Kaya65, Ö. Özçelik, S. Tekten, E.A.
Yetkin66 Istanbul Technical University, Istanbul, Turkey
A. Cakir, K. Cankocak67, Y. Komurcu, S. Sen68 Istanbul University, Istanbul,
Turkey
S. Cerci69, B. Kaynak, S. Ozkorucuklu, D. Sunar Cerci69 Institute for
Scintillation Materials of National Academy of Science of Ukraine, Kharkov,
Ukraine
B. Grynyov National Scientific Center, Kharkov Institute of Physics and
Technology, Kharkov, Ukraine
L. Levchuk University of Bristol, Bristol, United Kingdom
E. Bhal, S. Bologna, J.J. Brooke, D. Burns70, E. Clement, D. Cussans, H.
Flacher, J. Goldstein, G.P. Heath, H.F. Heath, L. Kreczko, B. Krikler, S.
Paramesvaran, B. Penning, T. Sakuma, S. Seif El Nasr-Storey, V.J. Smith, J.
Taylor, A. Titterton Rutherford Appleton Laboratory, Didcot, United Kingdom
K.W. Bell, A. Belyaev71, C. Brew, R.M. Brown, D.J.A. Cockerill, J.A. Coughlan,
K. Harder, S. Harper, J. Linacre, K. Manolopoulos, D.M. Newbold, E. Olaiya, D.
Petyt, T. Reis, T. Schuh, C.H. Shepherd-Themistocleous, A. Thea, I.R. Tomalin,
T. Williams Imperial College, London, United Kingdom
R. Bainbridge, P. Bloch, J. Borg, S. Breeze, O. Buchmuller, A. Bundock,
GurpreetSingh CHAHAL72, D. Colling, P. Dauncey, G. Davies, M. Della Negra, R.
Di Maria, P. Everaerts, G. Hall, G. Iles, M. Komm, L. Lyons, A.-M. Magnan, S.
Malik, A. Martelli, V. Milosevic, A. Morton, J. Nash73, V. Palladino, M.
Pesaresi, D.M. Raymond, A. Richards, A. Rose, E. Scott, C. Seez, A.
Shtipliyski, M. Stoye, T. Strebler, A. Tapper, K. Uchida, T. Virdee18, N.
Wardle, D. Winterbottom, A.G. Zecchinelli, S.C. Zenz Brunel University,
Uxbridge, United Kingdom
J.E. Cole, P.R. Hobson, A. Khan, P. Kyberd, C.K. Mackay, I.D. Reid, L.
Teodorescu, S. Zahid Baylor University, Waco, USA
A. Brinkerhoff, K. Call, B. Caraway, J. Dittmann, K. Hatakeyama, C. Madrid, B.
McMaster, N. Pastika, C. Smith Catholic University of America, Washington, DC,
USA
R. Bartek, A. Dominguez, R. Uniyal, A.M. Vargas Hernandez The University of
Alabama, Tuscaloosa, USA
A. Buccilli, S.I. Cooper, C. Henderson, P. Rumerio, C. West Boston University,
Boston, USA
A. Albert, D. Arcaro, Z. Demiragli, D. Gastler, C. Richardson, J. Rohlf, D.
Sperka, D. Spitzbart, I. Suarez, L. Sulak, D. Zou Brown University,
Providence, USA
G. Benelli, B. Burkle, X. Coubez19, D. Cutts, Y.t. Duh, M. Hadley, U. Heintz,
J.M. Hogan74, K.H.M. Kwok, E. Laird, G. Landsberg, K.T. Lau, J. Lee, M.
Narain, S. Sagir75, R. Syarif, E. Usai, W.Y. Wong, D. Yu, W. Zhang University
of California, Davis, Davis, USA
R. Band, C. Brainerd, R. Breedon, M. Calderon De La Barca Sanchez, M. Chertok,
J. Conway, R. Conway, P.T. Cox, R. Erbacher, C. Flores, G. Funk, F. Jensen, W.
Ko${}^{\textrm{\textdagger}}$, O. Kukral, R. Lander, M. Mulhearn, D. Pellett,
J. Pilot, M. Shi, D. Taylor, K. Tos, M. Tripathi, Z. Wang, F. Zhang University
of California, Los Angeles, USA
M. Bachtis, C. Bravo, R. Cousins, A. Dasgupta, A. Florent, J. Hauser, M.
Ignatenko, N. Mccoll, W.A. Nash, S. Regnard, D. Saltzberg, C. Schnaible, B.
Stone, V. Valuev University of California, Riverside, Riverside, USA
K. Burt, Y. Chen, R. Clare, J.W. Gary, S.M.A. Ghiasi Shirazi, G. Hanson, G.
Karapostoli, O.R. Long, M. Olmedo Negrete, M.I. Paneva, W. Si, L. Wang, S.
Wimpenny, B.R. Yates, Y. Zhang University of California, San Diego, La Jolla,
USA
J.G. Branson, P. Chang, S. Cittolin, S. Cooperstein, N. Deelen, M. Derdzinski,
R. Gerosa, D. Gilbert, B. Hashemi, D. Klein, V. Krutelyov, J. Letts, M.
Masciovecchio, S. May, S. Padhi, M. Pieri, V. Sharma, M. Tadel, F. Würthwein,
A. Yagil, G. Zevi Della Porta University of California, Santa Barbara -
Department of Physics, Santa Barbara, USA
N. Amin, R. Bhandari, C. Campagnari, M. Citron, V. Dutta, M. Franco Sevilla,
J. Incandela, B. Marsh, H. Mei, A. Ovcharova, H. Qu, J. Richman, U. Sarica, D.
Stuart, S. Wang California Institute of Technology, Pasadena, USA
D. Anderson, A. Bornheim, O. Cerri, I. Dutta, J.M. Lawhorn, N. Lu, J. Mao,
H.B. Newman, T.Q. Nguyen, J. Pata, M. Spiropulu, J.R. Vlimant, S. Xie, Z.
Zhang, R.Y. Zhu Carnegie Mellon University, Pittsburgh, USA
M.B. Andrews, T. Ferguson, T. Mudholkar, M. Paulini, M. Sun, I. Vorobiev, M.
Weinberg University of Colorado Boulder, Boulder, USA
J.P. Cumalat, W.T. Ford, E. MacDonald, T. Mulholland, R. Patel, A. Perloff, K.
Stenson, K.A. Ulmer, S.R. Wagner Cornell University, Ithaca, USA
J. Alexander, Y. Cheng, J. Chu, A. Datta, A. Frankenthal, K. Mcdermott, J.R.
Patterson, D. Quach, A. Ryd, S.M. Tan, Z. Tao, J. Thom, P. Wittich, M. Zientek
Fermi National Accelerator Laboratory, Batavia, USA
S. Abdullin, M. Albrow, M. Alyari, G. Apollinari, A. Apresyan, A. Apyan, S.
Banerjee, L.A.T. Bauerdick, A. Beretvas, D. Berry, J. Berryhill, P.C. Bhat, K.
Burkett, J.N. Butler, A. Canepa, G.B. Cerati, H.W.K. Cheung, F. Chlebana, M.
Cremonesi, J. Duarte, V.D. Elvira, J. Freeman, Z. Gecse, E. Gottschalk, L.
Gray, D. Green, S. Grünendahl, O. Gutsche, J. Hanlon, R.M. Harris, S.
Hasegawa, R. Heller, J. Hirschauer, B. Jayatilaka, S. Jindariani, M. Johnson,
U. Joshi, T. Klijnsma, B. Klima, M.J. Kortelainen, B. Kreis, S. Lammel, J.
Lewis, D. Lincoln, R. Lipton, M. Liu, T. Liu, J. Lykken, K. Maeshima, J.M.
Marraffino, D. Mason, P. McBride, P. Merkel, S. Mrenna, S. Nahn, V. O’Dell, V.
Papadimitriou, K. Pedro, C. Pena, G. Rakness, F. Ravera, A. Reinsvold Hall, L.
Ristori, B. Schneider, E. Sexton-Kennedy, N. Smith, A. Soha, W.J. Spalding, L.
Spiegel, S. Stoynev, J. Strait, N. Strobbe, L. Taylor, S. Tkaczyk, N.V. Tran,
L. Uplegger, E.W. Vaandering, C. Vernieri, R. Vidal, M. Wang, H.A. Weber
University of Florida, Gainesville, USA
D. Acosta, P. Avery, D. Bourilkov, L. Cadamuro, V. Cherepanov, F. Errico, R.D.
Field, S.V. Gleyzer, D. Guerrero, B.M. Joshi, M. Kim, J. Konigsberg, A.
Korytov, K.H. Lo, K. Matchev, N. Menendez, G. Mitselmakher, D. Rosenzweig, K.
Shi, J. Wang, S. Wang, X. Zuo Florida International University, Miami, USA
Y.R. Joshi Florida State University, Tallahassee, USA
T. Adams, A. Askew, S. Hagopian, V. Hagopian, K.F. Johnson, R. Khurana, T.
Kolberg, G. Martinez, T. Perry, H. Prosper, C. Schiber, R. Yohay, J. Zhang
Florida Institute of Technology, Melbourne, USA
M.M. Baarmand, M. Hohlmann, D. Noonan, M. Rahmani, M. Saunders, F. Yumiceva
University of Illinois at Chicago (UIC), Chicago, USA
M.R. Adams, L. Apanasevich, R.R. Betts, R. Cavanaugh, X. Chen, S. Dittmer, O.
Evdokimov, C.E. Gerber, D.A. Hangal, D.J. Hofman, C. Mills, T. Roy, M.B.
Tonjes, N. Varelas, J. Viinikainen, H. Wang, X. Wang, Z. Wu The University of
Iowa, Iowa City, USA
M. Alhusseini, B. Bilki57, K. Dilsiz76, S. Durgut, R.P. Gandrajula, M.
Haytmyradov, V. Khristenko, O.K. Köseyan, J.-P. Merlo, A. Mestvirishvili77, A.
Moeller, J. Nachtman, H. Ogul78, Y. Onel, F. Ozok79, A. Penzo, C. Snyder, E.
Tiras, J. Wetzel Johns Hopkins University, Baltimore, USA
B. Blumenfeld, A. Cocoros, N. Eminizer, A.V. Gritsan, W.T. Hung, S. Kyriacou,
P. Maksimovic, J. Roskes, M. Swartz The University of Kansas, Lawrence, USA
C. Baldenegro Barrera, P. Baringer, A. Bean, S. Boren, J. Bowen, A. Bylinkin,
T. Isidori, S. Khalil, J. King, G. Krintiras, A. Kropivnitskaya, C. Lindsey,
D. Majumder, W. Mcbrayer, N. Minafra, M. Murray, C. Rogan, C. Royon, S.
Sanders, E. Schmitz, J.D. Tapia Takaki, Q. Wang, J. Williams, G. Wilson Kansas
State University, Manhattan, USA
S. Duric, A. Ivanov, K. Kaadze, D. Kim, Y. Maravin, D.R. Mendis, T. Mitchell,
A. Modak, A. Mohammadi Lawrence Livermore National Laboratory, Livermore, USA
F. Rebassoo, D. Wright University of Maryland, College Park, USA
A. Baden, O. Baron, A. Belloni, S.C. Eno, Y. Feng, N.J. Hadley, S. Jabeen,
G.Y. Jeng, R.G. Kellogg, A.C. Mignerey, S. Nabili, F. Ricci-Tam, M. Seidel,
Y.H. Shin, A. Skuja, S.C. Tonwar, K. Wong Massachusetts Institute of
Technology, Cambridge, USA
D. Abercrombie, B. Allen, R. Bi, S. Brandt, W. Busza, I.A. Cali, M. D’Alfonso,
G. Gomez Ceballos, M. Goncharov, P. Harris, D. Hsu, M. Hu, M. Klute, D.
Kovalskyi, Y.-J. Lee, P.D. Luckey, B. Maier, A.C. Marini, C. Mcginn, C.
Mironov, S. Narayanan, X. Niu, C. Paus, D. Rankin, C. Roland, G. Roland, Z.
Shi, G.S.F. Stephans, K. Sumorok, K. Tatar, D. Velicanu, J. Wang, T.W. Wang,
B. Wyslouch University of Minnesota, Minneapolis, USA
R.M. Chatterjee, A. Evans, S. Guts${}^{\textrm{\textdagger}}$, P. Hansen, J.
Hiltbrand, Sh. Jain, Y. Kubota, Z. Lesko, J. Mans, M. Revering, R. Rusack, R.
Saradhy, N. Schroeder, M.A. Wadud University of Mississippi, Oxford, USA
J.G. Acosta, S. Oliveros University of Nebraska-Lincoln, Lincoln, USA
K. Bloom, S. Chauhan, D.R. Claes, C. Fangmeier, L. Finco, F. Golf, R.
Kamalieddin, I. Kravchenko, J.E. Siado, G.R. Snow${}^{\textrm{\textdagger}}$,
B. Stieger, W. Tabb State University of New York at Buffalo, Buffalo, USA
G. Agarwal, C. Harrington, I. Iashvili, A. Kharchilava, C. McLean, D. Nguyen,
A. Parker, J. Pekkanen, S. Rappoccio, B. Roozbahani Northeastern University,
Boston, USA
G. Alverson, E. Barberis, C. Freer, Y. Haddad, A. Hortiangtham, G. Madigan, B.
Marzocchi, D.M. Morse, T. Orimoto, L. Skinnari, A. Tishelman-Charny, T.
Wamorkar, B. Wang, A. Wisecarver, D. Wood Northwestern University, Evanston,
USA
S. Bhattacharya, J. Bueghly, A. Gilbert, T. Gunter, K.A. Hahn, N. Odell, M.H.
Schmitt, K. Sung, M. Trovato, M. Velasco University of Notre Dame, Notre Dame,
USA
R. Bucci, N. Dev, R. Goldouzian, M. Hildreth, K. Hurtado Anampa, C. Jessop,
D.J. Karmgard, K. Lannon, W. Li, N. Loukas, N. Marinelli, I. Mcalister, F.
Meng, Y. Musienko40, R. Ruchti, P. Siddireddy, G. Smith, S. Taroni, M. Wayne,
A. Wightman, M. Wolf, A. Woodard The Ohio State University, Columbus, USA
J. Alimena, B. Bylsma, L.S. Durkin, B. Francis, C. Hill, W. Ji, A. Lefeld,
T.Y. Ling, B.L. Winer Princeton University, Princeton, USA
G. Dezoort, P. Elmer, J. Hardenbrook, N. Haubrich, S. Higginbotham, A.
Kalogeropoulos, S. Kwan, D. Lange, M.T. Lucchini, J. Luo, D. Marlow, K. Mei,
I. Ojalvo, J. Olsen, C. Palmer, P. Piroué, D. Stickland, C. Tully University
of Puerto Rico, Mayaguez, USA
S. Malik, S. Norberg Purdue University, West Lafayette, USA
A. Barker, V.E. Barnes, R. Chawla, S. Das, L. Gutay, M. Jones, A.W. Jung, A.
Khatiwada, B. Mahakud, D.H. Miller, G. Negro, N. Neumeister, C.C. Peng, S.
Piperov, H. Qiu, J.F. Schulte, N. Trevisani, F. Wang, R. Xiao, W. Xie Purdue
University Northwest, Hammond, USA
T. Cheng, J. Dolen, N. Parashar Rice University, Houston, USA
A. Baty, U. Behrens, S. Dildick, K.M. Ecklund, S. Freed, F.J.M. Geurts, M.
Kilpatrick, Arun Kumar, W. Li, B.P. Padley, R. Redjimi, J. Roberts, J. Rorie,
W. Shi, A.G. Stahl Leiton, Z. Tu, A. Zhang University of Rochester, Rochester,
USA
A. Bodek, P. de Barbaro, R. Demina, J.L. Dulemba, C. Fallon, T. Ferbel, M.
Galanti, A. Garcia-Bellido, O. Hindrichs, A. Khukhunaishvili, E. Ranken, R.
Taus Rutgers, The State University of New Jersey, Piscataway, USA
B. Chiarito, J.P. Chou, A. Gandrakota, Y. Gershtein, E. Halkiadakis, A. Hart,
M. Heindl, E. Hughes, S. Kaplan, I. Laflotte, A. Lath, R. Montalvo, K. Nash,
M. Osherson, H. Saka, S. Salur, S. Schnetzer, S. Somalwar, R. Stone, S. Thomas
University of Tennessee, Knoxville, USA
H. Acharya, A.G. Delannoy, S. Spanier Texas A&M University, College Station,
USA
O. Bouhali80, M. Dalchenko, M. De Mattia, A. Delgado, R. Eusebi, J. Gilmore,
T. Huang, T. Kamon81, H. Kim, S. Luo, S. Malhotra, D. Marley, R. Mueller, D.
Overton, L. Perniè, D. Rathjens, A. Safonov Texas Tech University, Lubbock,
USA
N. Akchurin, J. Damgov, F. De Guio, V. Hegde, S. Kunori, K. Lamichhane, S.W.
Lee, T. Mengke, S. Muthumuni, T. Peltola, S. Undleeb, I. Volobouev, Z. Wang,
A. Whitbeck Vanderbilt University, Nashville, USA
S. Greene, A. Gurrola, R. Janjam, W. Johns, C. Maguire, A. Melo, H. Ni, K.
Padeken, F. Romeo, P. Sheldon, S. Tuo, J. Velkovska, M. Verweij University of
Virginia, Charlottesville, USA
M.W. Arenton, P. Barria, B. Cox, G. Cummings, J. Hakala, R. Hirosky, M. Joyce,
A. Ledovskoy, C. Neu, B. Tannenwald, Y. Wang, E. Wolfe, F. Xia Wayne State
University, Detroit, USA
R. Harr, P.E. Karchin, N. Poudyal, J. Sturdy, P. Thapa University of Wisconsin
- Madison, Madison, WI, USA
T. Bose, J. Buchanan, C. Caillol, D. Carlsmith, S. Dasu, I. De Bruyn, L. Dodd,
C. Galloni, H. He, M. Herndon, A. Hervé, U. Hussain, A. Lanaro, A. Loeliger,
K. Long, R. Loveless, J. Madhusudanan Sreekala, A. Mallampalli, D. Pinna, T.
Ruggles, A. Savin, V. Sharma, W.H. Smith, D. Teague, S. Trembath-reichert †:
Deceased
1: Also at Vienna University of Technology, Vienna, Austria
2: Also at IRFU, CEA, Université Paris-Saclay, Gif-sur-Yvette, France
3: Also at Universidade Estadual de Campinas, Campinas, Brazil
4: Also at Federal University of Rio Grande do Sul, Porto Alegre, Brazil
5: Also at UFMS, Nova Andradina, Brazil
6: Also at Universidade Federal de Pelotas, Pelotas, Brazil
7: Also at Université Libre de Bruxelles, Bruxelles, Belgium
8: Also at University of Chinese Academy of Sciences, Beijing, China
9: Also at Institute for Theoretical and Experimental Physics named by A.I.
Alikhanov of NRC ‘Kurchatov Institute’, Moscow, Russia
10: Also at Joint Institute for Nuclear Research, Dubna, Russia
11: Also at Suez University, Suez, Egypt
12: Now at British University in Egypt, Cairo, Egypt
13: Also at Zewail City of Science and Technology, Zewail, Egypt
14: Also at Purdue University, West Lafayette, USA
15: Also at Université de Haute Alsace, Mulhouse, France
16: Also at Tbilisi State University, Tbilisi, Georgia
17: Also at Erzincan Binali Yildirim University, Erzincan, Turkey
18: Also at CERN, European Organization for Nuclear Research, Geneva,
Switzerland
19: Also at RWTH Aachen University, III. Physikalisches Institut A, Aachen,
Germany
20: Also at University of Hamburg, Hamburg, Germany
21: Also at Brandenburg University of Technology, Cottbus, Germany
22: Also at Institute of Physics, University of Debrecen, Debrecen, Hungary,
Debrecen, Hungary
23: Also at Institute of Nuclear Research ATOMKI, Debrecen, Hungary
24: Also at MTA-ELTE Lendület CMS Particle and Nuclear Physics Group, Eötvös
Loránd University, Budapest, Hungary, Budapest, Hungary
25: Also at IIT Bhubaneswar, Bhubaneswar, India, Bhubaneswar, India
26: Also at Institute of Physics, Bhubaneswar, India
27: Also at G.H.G. Khalsa College, Punjab, India
28: Also at Shoolini University, Solan, India
29: Also at University of Hyderabad, Hyderabad, India
30: Also at University of Visva-Bharati, Santiniketan, India
31: Also at Isfahan University of Technology, Isfahan, Iran
32: Now at INFN Sezione di Bari a, Università di Bari b, Politecnico di Bari
c, Bari, Italy
33: Also at Italian National Agency for New Technologies, Energy and
Sustainable Economic Development, Bologna, Italy
34: Also at Centro Siciliano di Fisica Nucleare e di Struttura Della Materia,
Catania, Italy
35: Also at Scuola Normale e Sezione dell’INFN, Pisa, Italy
36: Also at Riga Technical University, Riga, Latvia, Riga, Latvia
37: Also at Malaysian Nuclear Agency, MOSTI, Kajang, Malaysia
38: Also at Consejo Nacional de Ciencia y Tecnología, Mexico City, Mexico
39: Also at Warsaw University of Technology, Institute of Electronic Systems,
Warsaw, Poland
40: Also at Institute for Nuclear Research, Moscow, Russia
41: Now at National Research Nuclear University ’Moscow Engineering Physics
Institute’ (MEPhI), Moscow, Russia
42: Also at St. Petersburg State Polytechnical University, St. Petersburg,
Russia
43: Also at University of Florida, Gainesville, USA
44: Also at Imperial College, London, United Kingdom
45: Also at P.N. Lebedev Physical Institute, Moscow, Russia
46: Also at California Institute of Technology, Pasadena, USA
47: Also at Budker Institute of Nuclear Physics, Novosibirsk, Russia
48: Also at Faculty of Physics, University of Belgrade, Belgrade, Serbia
49: Also at Università degli Studi di Siena, Siena, Italy
50: Also at INFN Sezione di Pavia a, Università di Pavia b, Pavia, Italy
51: Also at National and Kapodistrian University of Athens, Athens, Greece
52: Also at Universität Zürich, Zurich, Switzerland
53: Also at Stefan Meyer Institute for Subatomic Physics, Vienna, Austria
54: Also at Burdur Mehmet Akif Ersoy University, BURDUR, Turkey
55: Also at Şırnak University, Sirnak, Turkey
56: Also at Department of Physics, Tsinghua University, Beijing, China
57: Also at Beykent University, Istanbul, Turkey
58: Also at Istanbul Aydin University, Application and Research Center for
Advanced Studies (App. & Res. Cent. for Advanced Studies), Istanbul, Turkey
59: Also at Mersin University, Mersin, Turkey
60: Also at Piri Reis University, Istanbul, Turkey
61: Also at Gaziosmanpasa University, Tokat, Turkey
62: Also at Ozyegin University, Istanbul, Turkey
63: Also at Izmir Institute of Technology, Izmir, Turkey
64: Also at Marmara University, Istanbul, Turkey
65: Also at Kafkas University, Kars, Turkey
66: Also at Istanbul Bilgi University, Istanbul, Turkey
67: Also at Near East University, Research Center of Experimental Health
Science, Nicosia, Turkey
68: Also at Hacettepe University, Ankara, Turkey
69: Also at Adiyaman University, Adiyaman, Turkey
70: Also at Vrije Universiteit Brussel, Brussel, Belgium
71: Also at School of Physics and Astronomy, University of Southampton,
Southampton, United Kingdom
72: Also at IPPP Durham University, Durham, United Kingdom
73: Also at Monash University, Faculty of Science, Clayton, Australia
74: Also at Bethel University, St. Paul, Minneapolis, USA
75: Also at Karamanoğlu Mehmetbey University, Karaman, Turkey
76: Also at Bingol University, Bingol, Turkey
77: Also at Georgian Technical University, Tbilisi, Georgia
78: Also at Sinop University, Sinop, Turkey
79: Also at Mimar Sinan University, Istanbul, Turkey
80: Also at Texas A&M University at Qatar, Doha, Qatar
81: Also at Kyungpook National University, Daegu, Korea
# Neutrinos in curved space-time: particle mixing and flavor oscillations
A. Capolupo <EMAIL_ADDRESS>
G. Lambiase <EMAIL_ADDRESS>
A. Quaranta <EMAIL_ADDRESS>
Dipartimento di Fisica “E.R. Caianiello”, Università di Salerno, and INFN – Gruppo Collegato di Salerno, Via Giovanni Paolo II, 132, 84084 Fisciano (SA), Italy
###### Abstract
We present a quantum field theoretical approach to the vacuum neutrino
oscillations in curved space, we analyze the non–trivial interplay between
quantum field mixing and field quantization in curved space and derive new
oscillation formulae. We compute the formulae explicitly in the spatially flat
FLRW metrics for universes dominated by a cosmological constant and by
radiation. We evaluate the transition probabilities in the Schwarzschild black
hole metric, and we show that the Hawking radiation affects the oscillations
of neutrinos. We show that our results are consistent with those of previous
analyses when the quantum mechanical limit is considered.
## I Introduction
Since they were theoretically proposed by Pauli Pauli , neutrinos have proven
to be among the most enigmatic particles in the universe. Until the discovery
of flavor oscillations NeutrinoOscillations1 ; NeutrinoOscillations2 , whose
theory was pioneered by Pontecorvo Pontecorvo ; Bilenky , neutrinos were
believed to be massless. Today it is accepted that neutrinos are massive
particles, and that they oscillate among three flavors
$\nu_{e},\nu_{\mu},\nu_{\tau}$ corresponding to the companion charged leptons
$e,\mu,\tau$. This peculiarity renders neutrinos unique among the known
elementary particles and puts them beyond the scope of the standard model of
particles Mohapatra . In many respects, neutrinos are forerunners of a new
physics, as several issues, including the origin of their mass NeutrinoMass
and their fundamental nature NeutrinoNature , are still open to the present
day.
On the other hand, the relevance of neutrinos in astrophysical and
cosmological contexts has grown dramatically during the last years. They
figure as a valuable source of information, along with gravitational waves and
electromagnetic radiation, in the ever–growing field of multi–messenger
astronomy Franckowiak . The study of neutrinos of astrophysical origin can
indeed provide fundamental insights on the source that produced them. In
addition, neutrinos are expected to play an important role in the first phases
of the universe Buchmuller ; CosmologicalNeutrinos , and the detection of the
cosmic neutrino background, pursued in experiments as PTOLEMY PTOLEMY , could
represent an essential test for the standard cosmological model
CosmologicalNeutrinos . Mass varying neutrinos have also been proposed as a
possible explanation for Dark Energy Mavans .
This state of affairs requires a careful investigation of neutrino
oscillations on a curved spacetime. The topic has been discussed in several
works, where it was found that gravitational fields may alter both the
oscillations in vacuum and in matter Grossman ; Piriz ; Cardall .
Here we wish to go beyond the heuristic treatment of ref. Cardall , and
present a quantum field theoretical approach, based on the field quantization
in curved space-time, to evaluate the effects of gravitational fields on
neutrino oscillations. We derive general oscillation formulae for flavor
fields in curved space-time, which represent our main result. We discuss the
particle interpretation of the fields in the presence of gravity and study how the
mixing changes when moving from a mass field representation to another. We
demonstrate the invariance of local observables, which are represented by
expectation values on flavor states of local operators constructed from the
flavor fields. We show that the oscillation probabilities, on the other hand,
do in general depend on the representation of the mass fields, since they are
not a local observable and involve the comparison between particles in
different spacetime regions. We establish the conditions which have to be
satisfied in order that the resulting transition probabilities are invariant
under changes of mass field representation.
We also compute explicitly the oscillation formulae for two examples of
spatially flat Friedmann–Lemaitre–Robertson–Walker spacetimes, corresponding
to a cosmological constant–dominated and a radiation–dominated universe
respectively. In these cases, exact analytical solutions to the Dirac equation
are available, and the formalism here introduced can be applied directly.
Moreover, we give an estimation of the oscillation formulae for neutrinos
propagating from the past infinity to the future infinity in a stationary
Schwarzschild spacetime. We introduce a method to extract the oscillation
formulae on spacetimes with asymptotically flat regions without resorting to
the exact solutions of the Dirac equation. We then employ this strategy to
compute the formulae on the Schwarzschild black hole spacetime, for neutrinos
propagating from the past infinity to the future infinity. We show how the
Hawking radiation is naturally embedded in the resulting transition
probabilities.
Our results generalize those of the previous treatments Cardall , and are
consistent with the latter when the suitable limits are considered. In our
computation, for simplicity, we limit our analysis to the vacuum oscillations,
therefore considering the sole effect of gravity.
The paper is organized as follows: in section II we provide the setting for
the description of the mass fields in curved space; in section III we develop
field mixing and find the oscillation probabilities in curved spacetime, with
a thorough analysis of their features; in section IV we apply the formalism to
some spacetimes of interest, including the spatially flat FLRW metric for a
radiation-dominated universe and for a cosmological constant-dominated
universe, and the Schwarzschild black hole metric, where we show the impact of
the Hawking effect on neutrino oscillations; finally in section V we draw our
conclusions.
## II Mass neutrino fields in curved space
To evaluate the oscillation formulae for neutrinos on a curved spacetime, it
is necessary to consider both the effects of curvature and mixing on the
(free) mass fields. Let $M$ be a globally hyperbolic spacetime, and let
$\tau\in\mathbb{R}$ label a foliation of $M$ by Cauchy surfaces. Consider the
tetrad fields $e^{\mu}_{a}(x)$ satisfying
$\eta^{ab}e^{\mu}_{a}(x)e^{\nu}_{b}(x)=g^{\mu\nu}(x)$. Here $\eta^{ab}\equiv
diag(1,-1,-1,-1)$ is the Minkowski metric tensor, while $g^{\mu\nu}(x)$ is the
contravariant metric $g^{\mu\nu}(x)g_{\nu\rho}(x)=\delta^{\mu}_{\rho}$ on $M$
in a given coordinate system. The massive neutrino fields satisfy the Dirac
equations:
$(i\gamma^{\mu}(x)D_{\mu}-m_{i})\psi_{i}=0$ (1)
where $\gamma^{\mu}(x)=e^{\mu}_{a}(x)\gamma^{a}$, $\gamma^{a}$ being the usual
flat space Dirac matrices, and
$D_{\mu}=\partial_{\mu}-\frac{i}{4}\omega_{\mu}^{ab}\sigma_{ab}$. The spin
connection is defined as
$\omega_{\mu}^{ab}=e^{a}_{\nu}\Gamma^{\nu}_{\rho\mu}e^{\rho
b}+e^{a}_{\nu}\partial_{\mu}e^{\nu b}$, whereas $\sigma_{ab}$ are the
commutators of flat Dirac matrices
$\sigma_{ab}=\frac{i}{2}[\gamma^{a},\gamma^{b}]$. In equation (1), the index
$i=1,2,...,N$ ranges over the number of neutrino species $N$. For the sake of
simplicity we focus on the case $N=2$, though the generalization to $N=3$ is
straightforward. In general equation (1) cannot be solved exactly. Even if one
is able to find exact solutions, these do not play the same prominent role as
their flat spacetime counterpart. It is well–known, indeed, that the positive
frequency solutions of equation (1) cannot be defined univocally, and that,
consequently, there is no natural (nor unique) particle interpretation for the
corresponding Quantum Field Theory Birrell ; Wald . Nevertheless, the
canonical quantization of the Dirac field proceeds along the same lines as in
Minkowski spacetime.
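As a concrete illustration of these definitions (a standard textbook case, not specific to this paper, but the one relevant for section IV), consider the spatially flat FLRW metric $ds^{2}=dt^{2}-a^{2}(t)d\boldsymbol{x}^{2}$. A convenient tetrad is $e^{\mu}_{0}=\delta^{\mu}_{0}$, $e^{\mu}_{j}=a^{-1}(t)\delta^{\mu}_{j}$, for which the spin connection produces the familiar comoving-volume term and (1) reduces to
$\left[i\gamma^{0}\left(\partial_{t}+\frac{3\dot{a}}{2a}\right)+\frac{i}{a}\gamma^{j}\partial_{j}-m_{i}\right]\psi_{i}=0\ ,$
the equation whose exact solutions, for the scale factors of sections IV.2 and IV.3, are quoted from Barut .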
To perform a field expansion, one must find a set of positive $\zeta_{k,i}$
and negative $\xi_{k,i}$ frequency solutions for each of the equations (1). In
general the bipartition of the solutions to eq. (1) makes sense only locally,
while there is no natural global definition of positive and negative frequency
modes. In any case, one is free to choose a set of modes Note1 $\\{\zeta_{k,i},\xi_{k,i}\\}$, deemed to be positive/negative frequency modes
according to some specified observer, and expand the field with respect to
them, provided that they form a complete (and orthonormal) set of solutions
under the inner product
$(a_{i},b_{i})=\int_{\Sigma(\tau)}\sqrt{-g}\,d\Sigma_{\mu}(\tau)\,\bar{a}_{i}\gamma^{\mu}(x)b_{i}$ (2)
with $a_{i},b_{i}$ any solution to equation (1) with mass $m_{i}$ and
$\bar{b}_{i}=b^{\dagger}_{i}\gamma^{0}(x)$. Here
$d\Sigma^{\mu}(\tau)=n^{\mu}(\tau)dV_{\tau}$ denotes the volume element on the
surface $\tau$ with unit timelike normal $n^{\mu}(\tau)$. This has to hold
separately for each $i=1,2$. As it is easy to prove, for $a_{i},b_{i}$
solutions of the (same) Dirac equation, the inner product (2) does not depend
on the hypersurface chosen for the integration. In particular, it is
independent on the foliation by Cauchy hypersurfaces employed. The fields can
then be expanded as
$\psi_{i}(x)=\sum_{k,s}\left(\gamma_{k,s;i}\zeta_{k,s;i}(x)+\epsilon_{k,s;i}^{\dagger}\xi_{k,s;i}(x)\right)$
(3)
with the operator coefficients $\gamma_{k,s,i},\epsilon_{k,s,i}$ satisfying
the usual canonical anticommutation relations, $k$ momentum index and $s$
helicity index. The annihilators are also required to anticommute for $i\neq
j$, and, in particular
$\\{\gamma_{k,s,i},\gamma^{\dagger}_{k,s,j}\\}=\delta_{ij}$,
$\\{\epsilon_{k,s,i},\epsilon^{\dagger}_{k,s,j}\\}=\delta_{ij}$. In equation
(3) we prefer to keep any space-time dependence within the modes, for ease of
treatment with a general metric. The expansions (3) define the mass Hilbert
space $\mathcal{H}_{m}=H_{1}\otimes H_{2}$, which is constructed out of the
vacuum $|0_{m}\rangle=|0_{1}\rangle\otimes|0_{2}\rangle$. Here $\ket{0_{i}}$
is defined, as usual, by
$\gamma_{k,s,i}\ket{0_{i}}=0=\epsilon_{k,s,i}\ket{0_{i}}$ for each $k,s,i$.
As hinted above, the field expansions (3) are somewhat arbitrary, as opposed
to the flat spacetime case, where there is no ambiguity in the definition of
positive and negative frequency modes. Any other basis
$\\{\tilde{\zeta}_{k,s,i},\tilde{\xi}_{k,s,i}\\}$ can be used to expand the
fields
$\psi_{i}=\sum_{k,s}(\tilde{\gamma}_{k,s,i}\tilde{\zeta}_{k,s,i}(x)+\tilde{\epsilon}_{k,s,i}^{\dagger}\tilde{\xi}_{k,s,i}(x))$.
Since both the sets $\\{\zeta_{k,s,i},\xi_{k,s,i}\\}$ and
$\\{\tilde{\zeta}_{k,s,i},\tilde{\xi}_{k,s,i}\\}$ form a basis for the space
of solutions of eqs. (1), one can write the modes of a set in terms of the
other, for each $i$:
$\displaystyle\tilde{\zeta}_{k^{\prime},s^{\prime},i}$ $\displaystyle=$
$\displaystyle\sum_{k,s}\left(\Gamma_{k^{\prime},s^{\prime};k,s;i}^{*}\zeta_{k,s,i}+\Sigma_{k^{\prime},s^{\prime};k,s;i}^{*}\xi_{k,s,i}\right)$
$\displaystyle\tilde{\xi}_{k^{\prime},s^{\prime},i}$ $\displaystyle=$
$\displaystyle\sum_{k,s}\left(\Gamma_{k^{\prime},s^{\prime};k,s;i}\xi_{k,s,i}-\Sigma_{k^{\prime},s^{\prime};k,s;i}\zeta_{k,s,i}\right)$
(4)
where
$\Gamma_{k^{\prime},s^{\prime};ks;i}=(\tilde{\zeta}_{k^{\prime},s^{\prime},i},\zeta_{k,s,i})=(\xi_{k,s,i},\tilde{\xi}_{k^{\prime},s^{\prime},i})$
and
$\Sigma_{k^{\prime},s^{\prime};ks;i}=(\tilde{\zeta}_{k^{\prime},s^{\prime},i},\xi_{k,s,i})=-(\zeta_{k,s,i},\tilde{\xi}_{k^{\prime}s^{\prime},i})$.
This is a fermionic Bogoliubov transformation, for which
$\sum_{q,r}\left(\Gamma_{k,s;q,r;i}^{*}\Gamma_{k^{\prime},s^{\prime};q,r;i}+\Sigma_{k,s;q,r;i}^{*}\Sigma_{k^{\prime},s^{\prime};q,r;i}\right)=\delta_{k,k^{\prime}}\delta_{s,s^{\prime}}$
for each $i$. The corresponding relation between the two sets of annihilators
is given by
$\displaystyle\tilde{\gamma}_{k,s,i}$ $\displaystyle=$
$\displaystyle\sum_{k^{\prime},s^{\prime}}\left(\Gamma_{k,s;k^{\prime}s^{\prime};i}\gamma_{k^{\prime},s^{\prime},i}+\Sigma_{k,s;k^{\prime},s^{\prime};i}\epsilon^{\dagger}_{k^{\prime},s^{\prime},i}\right)$
$\displaystyle\tilde{\epsilon}_{k,s,i}$ $\displaystyle=$
$\displaystyle\sum_{k^{\prime},s^{\prime}}\left(\Gamma_{k,s;k^{\prime},s^{\prime};i}\epsilon_{k^{\prime},s^{\prime},i}-\Sigma_{k,s;k^{\prime},s^{\prime};i}\gamma^{\dagger}_{k^{\prime},s^{\prime},i}\right)\
.$ (5)
It is often the case that the Bogoliubov coefficients
$\Gamma_{k,s;k^{\prime},s^{\prime};i},\Sigma_{k,s;k^{\prime},s^{\prime};i}$
can be written as
$\Gamma_{k,s;k^{\prime},s^{\prime};i}=\delta_{k,k^{\prime}}\delta_{s,s^{\prime}}\Gamma_{k,i}$,
$\Sigma_{k,s;k^{\prime},s^{\prime};i}=\delta_{k,k^{\prime}}\delta_{s,s^{\prime}}\Sigma_{k,i}$,
with $\Gamma_{k,i}$ and $\Sigma_{k,i}$ depending on $k$ alone. In this
occurrence, they admit the parametrization
$\Gamma_{k,i}=e^{i\eta_{k,i}}\cos(\theta_{k,i})$
,$\Sigma_{k,i}=e^{i\phi_{k,i}}\sin(\theta_{k,i})$, with
$\eta_{k,i},\phi_{k,i},\theta_{k,i}$ real functions of $k$. We remark that the
Bogoliubov transformations (5) can be recast in terms of the generators
$J_{i}=e^{\sum_{k,k^{\prime},s,s^{\prime}}\left[(\lambda_{k,k^{\prime},s,s^{\prime},i}^{*}\gamma_{k,s,i}^{\dagger}\epsilon_{k^{\prime}s^{\prime},i}^{\dagger}-\lambda_{k,k^{\prime},s,s^{\prime},i}\epsilon_{k,s,i}\gamma_{k^{\prime},s^{\prime},i})\right]}$,
with
$\lambda_{k,k^{\prime},s,s^{\prime},i}=Arctan(\frac{\Sigma_{k,s;k^{\prime},s^{\prime};i}}{\Gamma_{k,s;k^{\prime},s^{\prime};i}})$,
as
$\tilde{\gamma}_{k,s,i}=J_{i}^{-1}\gamma_{k,s,i}J_{i}\;,\qquad\qquad\ \
\tilde{\epsilon}_{k,s,i}=J_{i}^{-1}\epsilon_{k,s,i}J_{i}\ .$ (6)
The maps $J_{i}:\tilde{\mathcal{H}}_{i}\rightarrow\mathcal{H}_{i}$ interpolate
between the Fock spaces $\mathcal{H}_{i}$ built from the
$\gamma_{k,s,i},\epsilon_{k,s,i}$ and the Fock space $\tilde{\mathcal{H}}_{i}$
built from the $\tilde{\gamma}_{k,s,i},\tilde{\epsilon}_{k,s,i}$. In
particular, one has for the vacuum states
$|\tilde{0}_{i}\rangle=J_{i}^{-1}|0_{i}\rangle$. As for the untilded
representation, the mass Hilbert space in the tilded representation is the
tensor product
$\tilde{\mathcal{H}}_{m}=\tilde{\mathcal{H}}_{1}\otimes\tilde{\mathcal{H}}_{2}$.
It is convenient to define a unique generator of Bogoliubov transformations
$J:\tilde{\mathcal{H}}_{m}\longrightarrow\mathcal{H}_{m}$ on
$\tilde{\mathcal{H}}_{m}$ as the tensor product $J=J_{1}\otimes J_{2}$. Then
$\tilde{\gamma}_{k,s,i}=J^{-1}\gamma_{k,s,i}J\;,\qquad\qquad\ \
\tilde{\epsilon}_{k,s,i}=J^{-1}\epsilon_{k,s,i}J$ (7)
for $i=1,2$. The expansions of the two fields $\psi_{1}$ and $\psi_{2}$ must
be compatible with each other, i.e., each of the modes
$\zeta_{k,s,2},\xi_{k,s,2}$ must be obtainable from the corresponding modes
$\zeta_{k,s,1},\xi_{k,s,1}$ by the substitution $m_{1}\leftrightarrow m_{2}$,
and vice–versa. In the context of mixing, this ensures that the same kind of
particle, described by the same set of quantum numbers, is being mixed. This,
of course, does not undermine the arbitrariness in the choice of the modes;
these can be any complete set of solutions to the Dirac equation, provided
that the same choice is made for the two fields.
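Since these maps will be used repeatedly, a small numerical illustration may be useful. The sketch below (Python with NumPy; the parameter values are illustrative and not taken from the paper) builds a diagonal Bogoliubov pair with the parametrization $\Gamma_{k,i}=e^{i\eta_{k,i}}\cos(\theta_{k,i})$, $\Sigma_{k,i}=e^{i\phi_{k,i}}\sin(\theta_{k,i})$ introduced above, and checks that the induced map on the doublet $(\gamma_{k,s,i},\epsilon^{\dagger}_{k,s,i})$ is unitary, which is the matrix counterpart of the preservation of the canonical anticommutation relations by (5).

```python
import numpy as np

# Diagonal fermionic Bogoliubov coefficients, parametrized as in the text:
# Gamma = e^{i eta} cos(theta), Sigma = e^{i phi} sin(theta) (illustrative values).
eta, phi, theta = 0.3, -1.1, 0.7
Gamma = np.exp(1j * eta) * np.cos(theta)
Sigma = np.exp(1j * phi) * np.sin(theta)

# Fermionic normalization condition: |Gamma|^2 + |Sigma|^2 = 1.
print(abs(Gamma)**2 + abs(Sigma)**2)                # -> 1.0

# In the basis (gamma, epsilon^dagger), eqs. (5) act as the 2x2 matrix B:
#   gamma~          =  Gamma * gamma  + Sigma  * epsilon^dagger
#   epsilon~^dagger = -Sigma^* gamma  + Gamma^* epsilon^dagger
B = np.array([[Gamma, Sigma],
              [-np.conj(Sigma), np.conj(Gamma)]])

# Unitarity of B <=> the tilded operators satisfy the same anticommutators.
print(np.allclose(B @ B.conj().T, np.eye(2)))       # -> True
```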
## III Neutrino mixing and oscillation formulae in curved space-time
In this Section, we show new oscillation formulae for flavor fields in curved
space-time and we present general considerations on the infinitely many
unitarily inequivalent representations of the canonical anticommutation
relations which characterize the quantization of mixed fields and of fields in
curved space.
### III.1 Oscillation Formulae
As discussed above, the QFT of free Dirac fields in curved space is
characterized by infinitely many unitarily inequivalent representations of the
canonical anticommutation relations. The phenomenon of mixing, even in
Minkowski space, suffers from an analogous ambiguity, in that the flavor and
the mass representations are unitarily inequivalent Capolupo1 . The effects of
such inequivalence have been analyzed in flat space-time Capolupo:2006et , and the possibility to reveal them in experimental setups has recently been proposed Capolupo:2019gmn . Let us start by fixing the mass field expansions
(3) and describe the mixing in a given representation of the mass fields. The
flavor fields are defined as
$\psi_{e}=\cos(\theta)\psi_{1}+\sin(\theta)\psi_{2}$ and
$\psi_{\mu}=\cos(\theta)\psi_{2}-\sin(\theta)\psi_{1}$ with $\theta$ the
(2-flavor) mixing angle. Just like the Bogoliubov transformations (7), the
rotation to flavor fields can be cast in terms of a generator
$\mathcal{I}_{\theta}(\tau)$. This is given by
$\mathcal{I}_{\theta}(\tau)=e^{\theta[(\psi_{1},\psi_{2})_{\tau}-(\psi_{2},\psi_{1})_{\tau}]}\
.$ (8)
where the scalar products $(\psi_{i},\psi_{j})$ _do_ depend on the
hypersurface chosen for the integration, since they are solutions to different
Dirac equations. Then, by definition, the flavor fields are expressed as
$\psi_{e}=\mathcal{I}_{\theta}^{-1}(\tau)\psi_{1}\mathcal{I}_{\theta}(\tau)\;,\qquad\qquad\
\
\psi_{\mu}=\mathcal{I}_{\theta}^{-1}(\tau)\psi_{2}\mathcal{I}_{\theta}(\tau).$
(9)
If we let the generator (8) act on the mass annihilators, we obtain the flavor
annihilators for curved space
$\gamma_{k,s,e}(\tau)=\mathcal{I}_{\theta}^{-1}(\tau)\gamma_{k,s,1}\mathcal{I}_{\theta}(\tau)=\cos(\theta)\gamma_{k,s,1}+\sin(\theta)\sum_{q,r}\bigg{[}\Lambda^{*}_{q,r;k,s}(\tau)\gamma_{q,r,2}+\Xi_{q,r;k,s}(\tau)\epsilon_{q,r,2}^{\dagger}\bigg{]}\,.$
(10)
Similar expressions hold for $\gamma_{k,s,\mu}(\tau),\epsilon_{k,s,e}(\tau),\epsilon_{k,s,\mu}(\tau)$. The
Bogoliubov coefficients are provided by the inner products of the solutions to
the _curved space_ Dirac equation with mass $m_{1}$ and $m_{2}$, that is,
$\Lambda_{q,r;k,s}(\tau)=(\zeta_{q,r,2},\zeta_{k,s,1})_{\tau}=(\xi_{k,s,1},\xi_{q,r,2})_{\tau}$
and
$\Xi_{q,r;k,s}(\tau)=(\zeta_{k,s,1},\xi_{q,r,2})_{\tau}=-(\zeta_{q,r,2},\xi_{k,s,1})_{\tau}$.
The mixing coefficients always satisfy
$\sum_{q,r}\left(\Lambda_{k,s;q,r}^{*}(\tau)\Lambda_{k^{\prime},s^{\prime};q,r}(\tau)+\Xi_{k,s;q,r}^{*}(\tau)\Xi_{k^{\prime},s^{\prime};q,r}(\tau)\right)\\!=\\!\delta_{k,k^{\prime}}\delta_{s,s^{\prime}}\
.$ (11)
Since the mass expansions are compatible, the mixing coefficients are often
diagonal, namely of the form
$\displaystyle\Lambda_{q,r;k,s}(\tau)$ $\displaystyle=$
$\displaystyle\delta_{q,k}\delta_{r,s}\Lambda_{k,s}(\tau)$
$\displaystyle\Xi_{q,r;k,s}(\tau)$ $\displaystyle=$
$\displaystyle\delta_{q,k}\delta_{r,s}\Xi_{k,s}(\tau)$ (12)
with $\Lambda_{k,s}(\tau)$, $\Xi_{k,s}(\tau)$ depending on $k$ and $s$ alone
Note3 . Exceptions to this arise when we consider expansions of the mass
fields in terms of modes labelled by the energy. In such a case, the mixing
coefficients are non–diagonal and different from zero
$\Lambda_{\omega,\omega^{\prime}}\neq 0$, $\Xi_{\omega,\omega^{\prime}}\neq 0$, once $\omega$ is fixed, only for a specific value of $\omega^{\prime}$. In the diagonal and in the energy–labelled case, the normalization (11) reduces to
$|\Lambda_{k,s}(\tau)|^{2}+|\Xi_{k,s}(\tau)|^{2}=1$ (13)
for each $k,s,\tau$, and
$|\Lambda_{\omega;\omega^{\prime}}(\tau)|^{2}+|\Xi_{\omega;\omega^{\prime}}(\tau)|^{2}=1$ (14)
respectively. The mass and the flavor representations are unitarily
inequivalent. For each $\tau$ one has a distinct flavor Fock space
$\mathcal{H}_{f}(\tau)$ defined by
$\gamma_{e,\mu}(\tau),\epsilon_{e,\mu}(\tau)$. The flavor vacuum
$\ket{0_{f}(\tau)}=\mathcal{I}_{\theta}^{-1}(\tau)\ket{0_{m}}$ is a condensate
of $\psi_{1},\psi_{2}$ particle-antiparticle pairs.
In order to define the transition probabilities, we observe that the total
Lagrangian is invariant under $U(1)$ gauge transformations. Therefore the
total charge $Q=Q_{1}+Q_{2}=Q_{e}+Q_{\mu}$ is conserved Note4 , where
$Q_{i}=\sum_{k,s}\left(\gamma_{k,s,i}^{\dagger}\gamma_{k,s,i}-\epsilon_{k,s,i}^{\dagger}\epsilon_{k,s,i}\right)$
for $i=1,2,e,\mu$. It is then meaningful to define the transition
probabilities as
$P^{\rho\rightarrow\sigma}_{k,s}(\tau)=\sum_{q,r}\bigg{(}\langle\nu_{\rho,k,s}(\tau_{0})|Q^{q,r}_{\sigma}(\tau)|\nu_{\rho,k,s}(\tau_{0})\rangle-\langle
0_{f}(\tau_{0})|Q^{q,r}_{\sigma}(\tau)|0_{f}(\tau_{0})\rangle\bigg{)}.$ (15)
Here $\rho,\sigma=e,\mu$, the state
$|\nu_{\rho,k,s}(\tau_{0})\rangle=\gamma^{\dagger}_{k,s,\rho}(\tau_{0})\ket{0_{f}(\tau_{0})}$
is the state with a single neutrino of flavor $\rho$, momentum $k$ and
helicity $s$ on the reference hypersurface $\tau=\tau_{0}$ . The second term
on the rhs of (15) is just the implementation of the normal ordering with
respect to $\ket{0_{f}(\tau_{0})}$. By construction $P^{e\rightarrow
e}_{k,s}(\tau)+P^{e\rightarrow\mu}_{k,s}(\tau)=1$ and $P^{\mu\rightarrow
e}_{k,s}(\tau)+P^{\mu\rightarrow\mu}_{k,s}(\tau)=1$ for each $\tau$. A
straightforward calculation yields, in the general case (accounting for both
diagonal and non–diagonal mixing coefficients) the result
$P^{e\rightarrow\mu}_{k,s}(\tau)=2\cos^{2}(\theta)\sin^{2}(\theta)\times\bigg{[}1-\sum_{q,r}\Re\bigg{(}\Lambda_{k,s;q,r}^{*}(\tau_{0})\Lambda_{k,s;q,r}(\tau)+\Xi_{k,s;q,r}^{*}(\tau_{0})\Xi_{k,s;q,r}(\tau)\bigg{)}\bigg{]}\
.$ (16)
Equation (16) is the central result of the paper. When equations (12) hold,
this reduces to
$P^{e\rightarrow\mu}_{k,s}(\tau)=2\cos^{2}(\theta)\sin^{2}(\theta)\times\bigg{[}1-\Re\bigg{(}\Lambda_{k,s}^{*}(\tau_{0})\Lambda_{k,s}(\tau)+\Xi_{k,s}^{*}(\tau_{0})\Xi_{k,s}(\tau)\bigg{)}\bigg{]}\
.$ (17)
In both cases one has
$\displaystyle P^{e\rightarrow
e}_{k,s}(\tau)=1-P^{e\rightarrow\mu}_{k,s}(\tau).$ (18)
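To make the structure of (17) concrete, the following minimal sketch (Python/NumPy; all inputs are placeholders that a concrete choice of modes must supply) implements the diagonal-case probability, and checks it against the flat-space coefficients quoted below in eqs. (35)–(36), for which the result reduces to (33):

```python
import numpy as np

def transition_probability(theta, Lam_0, Xi_0, Lam_t, Xi_t):
    """Eq. (17): P_{e->mu}(tau) from diagonal mixing coefficients.

    Lam_0, Xi_0: Lambda_{k,s}(tau_0), Xi_{k,s}(tau_0); Lam_t, Xi_t: same at tau.
    """
    overlap = np.conj(Lam_0) * Lam_t + np.conj(Xi_0) * Xi_t
    return 2 * np.cos(theta)**2 * np.sin(theta)**2 * (1 - overlap.real)

# Flat-space check: Lambda(t) = |U_k| e^{i(w2-w1)t}, Xi(t) = |V_k| e^{i(w2+w1)t}.
theta, t = 0.6, 2.5            # illustrative mixing angle and time
k, m1, m2 = 1.0, 0.1, 0.3      # illustrative momentum and masses
w1, w2 = np.hypot(k, m1), np.hypot(k, m2)
N = np.sqrt((w1 + m1) * (w2 + m2) / (4 * w1 * w2))
U = N * (1 + k**2 / ((w1 + m1) * (w2 + m2)))        # eq. (35) at k^2 = w^2 - m^2
V = N * (k / (w2 + m2) - k / (w1 + m1))             # eq. (36)
print(U**2 + V**2)                                  # -> 1.0, i.e. eq. (13)

P = transition_probability(theta, U, V,
                           U * np.exp(1j * (w2 - w1) * t),
                           V * np.exp(1j * (w2 + w1) * t))
print(P)   # eq. (33) substituted in (17); Pontecorvo formula as V -> 0
```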
### III.2 Mixing on a curved background and gravity-induced ambiguity in the
particle interpretation
Up to now, we have worked within a fixed, but arbitrary, representation of the
mass fields. The question arises about the other possible representations, and
how the mixing changes when moving from a representation to another. For the
definition (15) to make sense, we must determine if and how the probabilities
vary when the mass representation is changed. We take as a guideline the
principle of covariance, so that the _local_ physical observables should be
independent of the underlying representation. In moving from a given
representation $\\{\gamma_{1},\epsilon_{1}\\},\\{\gamma_{2},\epsilon_{2}\\}$
to another
$\\{\tilde{\gamma}_{1},\tilde{\epsilon}_{1}\\},\\{\tilde{\gamma}_{2},\tilde{\epsilon}_{2}\\}$,
we know how to connect the mass Fock spaces, namely via the generator (7)
$J^{-1}:\mathcal{H}_{m}\rightarrow\tilde{\mathcal{H}}_{m}$. For each mass
representation, we can proceed as we did above and build the corresponding
flavor annihilators and flavor spaces
$\mathcal{H}_{f}(\tau),\tilde{\mathcal{H}}_{f}(\tau)$, together with the
mixing generators
$\mathcal{I}_{\theta}(\tau):\mathcal{H}_{f}(\tau)\rightarrow\mathcal{H}_{m}$,
$\tilde{\mathcal{I}}_{\theta}(\tau):\tilde{\mathcal{H}}_{f}(\tau)\rightarrow\tilde{\mathcal{H}}_{m}$.
It is useful, at this point, to determine the relations among the mixing
coefficients $\Lambda(\tau),\Xi(\tau)$ and
$\tilde{\Lambda}(\tau),\tilde{\Xi}(\tau)$ that appear in the explicit form of
the two generators $\mathcal{I}_{\theta}(\tau)$ and
$\tilde{\mathcal{I}}_{\theta}(\tau)$. By definition we have
$\tilde{\Lambda}_{q,r;k,s}(\tau)=(\tilde{\zeta}_{q,r;2},\tilde{\zeta}_{k,s;1})_{\tau}=\sum_{q^{\prime},k^{\prime},r^{\prime},s^{\prime}}\bigg{(}\bigg{[}\Gamma_{q,r;q^{\prime},r^{\prime};2}^{*}\zeta_{q^{\prime},r^{\prime};2}+\Sigma_{q,r;q^{\prime},r^{\prime};2}^{*}\xi_{q^{\prime},r^{\prime};2}\bigg{]},\bigg{[}\Gamma_{k,s;k^{\prime},s^{\prime};1}^{*}\zeta_{k^{\prime},s^{\prime};1}+\Sigma_{k,s;k^{\prime},s^{\prime};1}^{*}\xi_{k^{\prime},s^{\prime};1}\bigg{]}\bigg{)}_{\tau}\,.$
(19)
Here the first equality is just the definition of $\tilde{\Lambda}(\tau)$, the
second follows from the Bogoliubov transformations (4). By using the
properties of the inner product (2), in the general case (again, accounting
for both the diagonal and non–diagonal mixing coefficients), we obtain
$\displaystyle\tilde{\Lambda}_{q,r;k,s}(\tau)$ $\displaystyle=$
$\displaystyle\sum_{q^{\prime},k^{\prime},r^{\prime},s^{\prime}}\bigg{[}\Gamma_{q,r;q^{\prime}r^{\prime};2}\Gamma_{k,s;k^{\prime},s^{\prime};1}^{*}(\zeta_{q^{\prime},r^{\prime};2},\zeta_{k^{\prime},s^{\prime};1})_{\tau}+\
\
\Gamma_{q,r;q^{\prime},r^{\prime};2}\Sigma_{k,s;k^{\prime},s^{\prime};1}^{*}(\zeta_{q^{\prime},r^{\prime};2},\xi_{k^{\prime},s^{\prime};1})_{\tau}$
(20) $\displaystyle+$ $\displaystyle\ \
\Sigma_{q,r;q^{\prime}r^{\prime};2}\Gamma_{k,s;k^{\prime}s^{\prime};1}^{*}(\xi_{q^{\prime},r^{\prime};2},\zeta_{k^{\prime},s^{\prime};1})_{\tau}+\
\
\Sigma_{q,r;q^{\prime},r^{\prime};2}\Sigma_{k,s;k^{\prime},s^{\prime};1}^{*}(\xi_{q^{\prime},r^{\prime};2},\xi_{k^{\prime},s^{\prime};1})_{\tau}\bigg{]}\,,$
and, finally, from the definition of $\Lambda(\tau)$ and $\Xi(\tau)$, we have
$\displaystyle\tilde{\Lambda}_{q,r;k,s}(\tau)=\sum_{q^{\prime},k^{\prime},r^{\prime},s^{\prime}}\bigg{[}\Gamma_{q,r,q^{\prime}r^{\prime};2}\left(\Gamma_{k,s;k^{\prime},s^{\prime};1}^{*}\Lambda_{q^{\prime},r^{\prime};k^{\prime},s^{\prime}}(\tau)-\Sigma_{k,s;k^{\prime},s^{\prime};1}^{*}\Xi_{q^{\prime},r^{\prime};k^{\prime},s^{\prime}}(\tau)\right)$
$\displaystyle+\
\Sigma_{q,r;q^{\prime},r^{\prime};2}\left(\Gamma_{k,s;k^{\prime},s^{\prime};1}^{*}\Xi_{q^{\prime},r^{\prime};k^{\prime}s^{\prime}}^{*}(\tau)+\Sigma_{k,s;k^{\prime},s^{\prime};1}^{*}\Lambda_{q^{\prime},r^{\prime};k^{\prime},s^{\prime}}^{*}(\tau)\right)\bigg{]}\
.$ (21)
Similarly we have
$\displaystyle\tilde{\Xi}_{q,r;k,s}(\tau)=\sum_{q^{\prime},k^{\prime};r^{\prime},s^{\prime}}\bigg{[}\Gamma_{q,r;q^{\prime},r^{\prime};2}\left(\Gamma_{k,s;k^{\prime},s^{\prime};1}\Xi_{q^{\prime},r^{\prime};k^{\prime},s^{\prime}}(\tau)+\Sigma_{k,s;k^{\prime},s^{\prime};1}\Lambda_{q^{\prime},r^{\prime};k^{\prime},s^{\prime}}(\tau)\right)$
$\displaystyle-\
\Sigma_{q,r;q^{\prime},r^{\prime};2}\left(\Gamma_{k,s;k^{\prime},s^{\prime};1}\Lambda_{q^{\prime},r^{\prime};k^{\prime},s^{\prime}}^{*}(\tau)-\
\
\Sigma_{k,s;k^{\prime},s^{\prime};1}\Xi_{q^{\prime},r^{\prime};k^{\prime},s^{\prime}}^{*}(\tau)\right)\bigg{]}\
.$ (22)
When equations (12) hold for both the representations $\\{q,r\\},\\{q^{\prime},r^{\prime}\\}$, the equations reduce to
$\displaystyle\tilde{\Lambda}_{q,r}(\tau)$ $\displaystyle=$
$\displaystyle\sum_{q^{\prime},r^{\prime}}\bigg{[}\Gamma_{q,r,q^{\prime}r^{\prime};2}\left(\Gamma_{q,r;q^{\prime},r^{\prime};1}^{*}\Lambda_{q^{\prime},r^{\prime}}(\tau)-\Sigma_{q,r;q^{\prime},r^{\prime};1}^{*}\Xi_{q^{\prime},r^{\prime}}(\tau)\right)$
(23) $\displaystyle+$ $\displaystyle\
\Sigma_{q,r;q^{\prime},r^{\prime};2}\left(\Gamma_{q,r;q^{\prime},r^{\prime};1}^{*}\Xi_{q^{\prime},r^{\prime}}^{*}(\tau)+\Sigma_{q,r;q^{\prime},r^{\prime};1}^{*}\Lambda_{q^{\prime},r^{\prime}}^{*}(\tau)\right)\bigg{]}.$
and
$\displaystyle\tilde{\Xi}_{q,r}(\tau)$ $\displaystyle=$
$\displaystyle\sum_{q^{\prime},r^{\prime}}\bigg{[}\Gamma_{q,r;q^{\prime},r^{\prime};2}\left(\Gamma_{q,r;q^{\prime},r^{\prime};1}\Xi_{q^{\prime},r^{\prime}}(\tau)+\Sigma_{q,r;q^{\prime},r^{\prime};1}\Lambda_{q^{\prime},r^{\prime}}(\tau)\right)$
(24) $\displaystyle-$ $\displaystyle\
\Sigma_{q,r;q^{\prime},r^{\prime};2}\left(\Gamma_{q,r;q^{\prime},r^{\prime};1}\Lambda_{q^{\prime},r^{\prime}}^{*}(\tau)-\Sigma_{q,r;q^{\prime},r^{\prime};1}\Xi_{q^{\prime},r^{\prime}}^{*}(\tau)\right)\bigg{]}\
.$
The equations (21)–(24) provide an explicit relation between $\mathcal{I}_{\theta}(\tau)$ and $\tilde{\mathcal{I}}_{\theta}(\tau)$, and show how the mixing coefficients change, in moving from one mass representation to another, in order to ensure covariance. In particular, the tilded coefficients turn out to be a linear combination of the untilded coefficients weighted by the coefficients of the Bogoliubov transformations between the two mass representations. A slightly modified version of eqs. (21) and (22) will be expedient in the calculation of the transition probabilities in a number of interesting cases.
It remains to establish how the flavor operators
$\gamma_{\rho}(\tau),\epsilon_{\rho}(\tau)$ and the flavor vacuum
$|0_{f}(\tau)\rangle$ transform under a change of mass representation. We
focus on the vacuum state $|0_{f}(\tau)\rangle\in\mathcal{H}_{f}(\tau)$. First
we employ the generator
$\mathcal{I}_{\theta}(\tau):\mathcal{H}_{f}(\tau)\rightarrow\mathcal{H}_{m}$
to get the mass vacuum $|0_{m}\rangle$. Then we apply the generator of
Bogoliubov transformations (7)
$J^{-1}:\mathcal{H}_{m}\rightarrow\tilde{\mathcal{H}}_{m}$ to obtain
$|\tilde{0}_{m}\rangle$. Finally, the generator of mixing in the tilde
representation
$\tilde{\mathcal{I}}_{\theta}^{-1}(\tau):\tilde{\mathcal{H}}_{m}\rightarrow\tilde{\mathcal{H}}_{f}(\tau)$
is employed to get $|\tilde{0}_{f}(\tau)\rangle$. We conclude that the two
flavor vacua are related by the transformation
$|\tilde{0}_{f}(\tau)\rangle=J_{f}^{-1}(\tau)|0_{f}(\tau)\rangle\doteq\tilde{\mathcal{I}}_{\theta}^{-1}(\tau)J^{-1}\mathcal{I}_{\theta}(\tau)|0_{f}(\tau)\rangle\
$ (25)
where we have defined the inverse $J_{f}^{-1}(\tau)$ for convenience. The
flavor operators must then transform as
$\displaystyle\gamma_{k,s,\rho}(\tau)$ $\displaystyle\rightarrow$
$\displaystyle J_{f}^{-1}(\tau)\gamma_{k,s,\rho}(\tau)J_{f}(\tau)$
$\displaystyle\epsilon_{k,s,\rho}(\tau)$ $\displaystyle\rightarrow$
$\displaystyle J_{f}^{-1}(\tau)\epsilon_{k,s,\rho}(\tau)J_{f}(\tau)$ (26)
and similarly for the creation operators. Equations (25) and (26) ensure
the invariance of local observables in the form of expectation values
$\bra{\psi_{f}(\tau)}F(\psi_{e}(\tau),\psi_{\mu}(\tau))\ket{\psi_{f}(\tau)}$
with $\ket{\psi_{f}(\tau)}\in\mathcal{H}_{f}(\tau)$ and
$F(\psi_{e}(\tau),\psi_{\mu}(\tau))$ any operator constructed from the fields
$\psi_{e}(\tau),\psi_{\mu}(\tau)$.
### III.3 Transition probabilities and the mass representation
The oscillation probabilities are not a local observable, since they involve
the comparison between particles at different values of $\tau$, as it is
evident from the definition (15). In general these quantities _do_ depend on
the representation of the mass fields, and this is because distinct
representations might assign a different meaning to the quantum numbers $k,s$.
For example, one might consider two expansions of the mass fields, one in
terms of plane waves, labelled by the three–momenta $\\{\boldsymbol{k}\\}$,
and one in terms of localized wave packets, labelled by a suitable set of
quantum numbers $\\{q\\}$. It is clear that the two expansions describe
particles with different physical properties; the first describes particles
with definite momentum, the second describes particles for which momentum and
position are definite to some extent. Therefore the probabilities
$P^{\rho\rightarrow\sigma}_{k,s}$ and $P^{\rho\rightarrow\sigma}_{q,s}$ refer
to the oscillations of different particles, and have a different
interpretation. It would make no sense, in such a case, to require the
equivalence of the two. This, of course, would be true even in flat space.
What is meaningful to require is that the transition probabilities
$P^{\rho\rightarrow\sigma}_{k,s}$ be the same for each compatible
representation, i.e., for each representation that refers to the same kind of
particle, and therefore agrees on the meaning of the quantum numbers $k,s$. In
mathematical terms, any two such representations shall be connected by
diagonal Bogoliubov transformations
$\displaystyle\tilde{\zeta}_{k,s,i}$ $\displaystyle=$
$\displaystyle\Gamma_{k,s;i}^{*}\zeta_{k,s,i}+\Sigma_{k,s;i}^{*}\xi_{k,s,i}$
$\displaystyle\tilde{\xi}_{k,s,i}$ $\displaystyle=$
$\displaystyle\Gamma_{k,s;i}\xi_{k,s,i}-\Sigma_{k,s;i}\zeta_{k,s,i}\ $ (27)
where it is understood that $\Gamma_{k,s;q,r;i}=\delta_{k,q}\delta_{s,r}\Gamma_{k,s;i}$ and $\Sigma_{k,s;q,r;i}=\delta_{k,q}\delta_{s,r}\Sigma_{k,s;i}$. In this case the
transition probabilities $P^{\rho\rightarrow\sigma}_{k,s}$ are indeed the
same, and this can be proven explicitly, by writing out equation (17) for the
two representations
$\displaystyle
P^{e\rightarrow\mu}_{k,s}(\tau)=2\cos^{2}(\theta)\sin^{2}(\theta)\bigg{[}1-\sum_{q,r}\Re\bigg{(}\Lambda_{k,s;q,r}^{*}(\tau_{0})\Lambda_{k,s;q,r}(\tau)+$
$\displaystyle\Xi_{k,s;q,r}^{*}(\tau_{0})\Xi_{k,s;q,r}(\tau)\bigg{)}\bigg{]}$
(28)
and
$\displaystyle\tilde{P}^{e\rightarrow\mu}_{k,s}(\tau)=2\cos^{2}(\theta)\sin^{2}(\theta)\times\bigg{[}1-\sum_{q,r}\Re\bigg{(}\tilde{\Lambda}_{k,s;q,r}^{*}(\tau_{0})\tilde{\Lambda}_{k,s;q,r}(\tau)+\tilde{\Xi}_{k,s;q,r}^{*}(\tau_{0})\tilde{\Xi}_{k,s;q,r}(\tau)\bigg{)}\bigg{]}$
(29)
With the aid of equations (23), (24) and (27) we find
$\displaystyle\tilde{\Lambda}_{k,s;q,r}^{*}(\tau_{0})\tilde{\Lambda}_{k,s;q,r}(\tau)+\tilde{\Xi}_{k,s;q,r}^{*}(\tau_{0})\tilde{\Xi}_{k,s;q,r}(\tau)=$
$\displaystyle+\left(\Lambda_{k,s;q,r}^{*}(\tau_{0})\Lambda_{k,s;q,r}(\tau)+\Xi_{k,s;q,r}^{*}(\tau_{0})\Xi_{k,s;q,r}(\tau)\right)\left[|\Gamma_{k,s,2}|^{2}|\Gamma_{q,r,1}|^{2}+|\Gamma_{k,s,2}|^{2}|\Sigma_{q,r,1}|^{2}\right]$
$\displaystyle+\left(\Lambda_{k,s;q,r}(\tau_{0})\Lambda_{k,s;q,r}^{*}(\tau)+\Xi_{k,s;q,r}(\tau_{0})\Xi_{k,s;q,r}^{*}(\tau)\right)\left[|\Sigma_{k,s,2}|^{2}|\Gamma_{q,r,1}|^{2}+|\Sigma_{k,s,2}|^{2}|\Sigma_{q,r,1}|^{2}\right]\
.$ (30)
Each of the terms in the square brackets is real. Considering that
$\displaystyle\Lambda_{k,s;q,r}(\tau_{0})\Lambda_{k,s;q,r}^{*}(\tau)=\left(\Lambda_{k,s;q,r}^{*}(\tau_{0})\Lambda_{k,s;q,r}(\tau)\right)^{*}\,,\qquad\Xi_{k,s;q,r}(\tau_{0})\Xi_{k,s;q,r}^{*}(\tau)=\left(\Xi_{k,s;q,r}^{*}(\tau_{0})\Xi_{k,s;q,r}(\tau)\right)^{*}\,,$
(31)
and that the Bogoliubov coefficients satisfy
$|\Gamma_{k,s,i}|^{2}+|\Sigma_{k,s,i}|^{2}=1$ for each $k,s,i$, we finally get
$\displaystyle\Re[\tilde{\Lambda}_{k,s;q,r}^{*}(\tau_{0})\tilde{\Lambda}_{k,s;q,r}(\tau)+\tilde{\Xi}_{k,s;q,r}^{*}(\tau_{0})\tilde{\Xi}_{k,s;q,r}(\tau)]=\Re[\Lambda_{k,s;q,r}^{*}(\tau_{0})\Lambda_{k,s;q,r}(\tau)+\Xi_{k,s;q,r}^{*}(\tau_{0})\Xi_{k,s;q,r}(\tau)]\,,$
(32)
which proves the invariance of (28).
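The cancellation behind (32) is also easy to verify numerically. A minimal sketch (Python/NumPy; the data are random and purely illustrative): draw unit-normalized diagonal Bogoliubov pairs $(\Gamma_{i},\Sigma_{i})$, arbitrary complex pairs $(\Lambda,\Xi)$ at $\tau_{0}$ and $\tau$, transform them with (23)–(24), and compare the real parts entering (28) and (29).

```python
import numpy as np

rng = np.random.default_rng(0)

def bogoliubov_pair():
    # Gamma = e^{i eta} cos(th), Sigma = e^{i phi} sin(th): |G|^2 + |S|^2 = 1
    eta, phi, th = rng.uniform(-np.pi, np.pi, 3)
    return np.exp(1j * eta) * np.cos(th), np.exp(1j * phi) * np.sin(th)

def tilde(G1, S1, G2, S2, Lam, Xi):
    # Diagonal case of eqs. (23)-(24)
    Lam_t = (G2 * (np.conj(G1) * Lam - np.conj(S1) * Xi)
             + S2 * (np.conj(G1) * np.conj(Xi) + np.conj(S1) * np.conj(Lam)))
    Xi_t = (G2 * (G1 * Xi + S1 * Lam)
            - S2 * (G1 * np.conj(Lam) - S1 * np.conj(Xi)))
    return Lam_t, Xi_t

G1, S1 = bogoliubov_pair()          # mass-field 1 Bogoliubov data
G2, S2 = bogoliubov_pair()          # mass-field 2 Bogoliubov data
L0, X0, Lt, Xt = rng.normal(size=4) + 1j * rng.normal(size=4)

tL0, tX0 = tilde(G1, S1, G2, S2, L0, X0)   # tilded coefficients at tau_0
tLt, tXt = tilde(G1, S1, G2, S2, Lt, Xt)   # tilded coefficients at tau

lhs = (np.conj(tL0) * tLt + np.conj(tX0) * tXt).real   # enters eq. (29)
rhs = (np.conj(L0) * Lt + np.conj(X0) * Xt).real       # enters eq. (28)
print(np.isclose(lhs, rhs))   # -> True: same transition probability
```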
In the most general case, as the quantum numbers $k,s$ and
$k^{\prime},s^{\prime}$ have a different physical meaning, the probabilities
$P^{\rho\rightarrow\sigma}_{k,s}$ and
$\tilde{P}^{\rho\rightarrow\sigma}_{k^{\prime},s^{\prime}}$ have different
interpretations. Different representations of the mass fields do indeed assign
a different meaning to such indices, so that the probabilities (15) have no
invariant meaning. In order to make sense of the probabilities in eqs. (15) in
the most general case, a representation of the mass fields must be fixed on
the grounds of physical relevance. When the underlying spacetime $M$ possesses non-trivial symmetries, such as time translational invariance or spherical symmetry, the representation should clearly be fixed so as to take them into account. In these cases “good quantum numbers” are suggested by the symmetries themselves (for instance, the energy $\omega$ for stationary metrics, the angular momentum $l,m$ for spherically symmetric spacetimes). In any case, the probabilities in a given mass representation can always be related to the probabilities in any other mass representation with the aid of equations (23) and (24). As a final remark, we stress that the issue discussed
here has nothing to do with the diffeomorphism covariance of the theory. All
the probabilities (17) are (generally covariant) scalars, as is evident
from the definitions.
## IV Neutrino oscillation formulae in FLRW metrics and in presence of a
Schwarzschild black hole
In this section we apply the formalism developed above to some cases of
interest. After an analysis of the flat space limit, we consider two
cosmologically relevant FLRW metrics, corresponding to a cosmological
constant–dominated and a radiation–dominated universe respectively. In these
cases, exact analytical solutions to the Dirac equation are available, and it
is possible to employ equation (16) directly. We then introduce a method to
extract the oscillation formulae on spacetimes with asymptotically flat
regions without resorting to the exact solutions of the Dirac equation. We
employ this strategy to compute the formulae on the Schwarzschild black hole
spacetime, for neutrinos propagating from the past infinity to the future
infinity. We show how the Hawking radiation is naturally embedded in the
resulting transition probabilities.
### IV.1 Flat spacetime limit
As a first, trivial, application of the formulae (17), let us check the flat
spacetime limit. We can see at once that the equations (17) reduce to the
ordinary oscillation formulae. Indeed, in this case, we can choose the Cauchy
hypersurfaces to be the $t=constant$ surfaces in a given Minkowskian
coordinate system, while the modes
$\\{\zeta_{\boldsymbol{k},s,i}(x),\xi_{\boldsymbol{k},s,i}(x)\\}$ are just the
plane wave solutions to the flat Dirac equation with definite momentum, so
that $\Lambda_{q,r;k,s}(t)\rightarrow U_{q,r;k,s}(t)$ and
$\Xi_{q,r;k,s}(t)\rightarrow V_{q,r;k,s}(t)$, where $U_{q,r;k,s}$ and
$V_{q,r;k,s}$ are the usual mixing coefficients in flat space Capolupo1 .
Assuming, without loss of generality, $k$ along the $z$ direction, the
helicity indices decouple
$U_{q,r;k,s}=\delta^{3}(\boldsymbol{k}-\boldsymbol{q})\delta_{r,s}U_{\boldsymbol{k}}$,
$V_{q,r;k,s}=\delta^{3}(\boldsymbol{k}-\boldsymbol{q})\delta_{r,s}(-1)^{s}V_{\boldsymbol{k}}$.
Since
$U_{\boldsymbol{k}}(t)=U_{\boldsymbol{k}}(0)e^{i(\omega_{k,2}-\omega_{k,1})t}$
and
$V_{\boldsymbol{k}}(t)=V_{\boldsymbol{k}}(0)e^{i(\omega_{k,2}+\omega_{k,1})t}$,
we get
$\displaystyle\sum_{k^{\prime},r^{\prime},q,r}\Re[U^{*}_{k,s;k^{\prime},r^{\prime}}(0)U_{k^{\prime},r^{\prime};q,r}(t)$
$\displaystyle+$ $\displaystyle
V^{*}_{k,s;k^{\prime},r^{\prime}}(0)V_{k^{\prime},r^{\prime};q,r}(t)]\;=\;\Re[|U_{\boldsymbol{k}}(0)|^{2}e^{i(\omega_{k,2}-\omega_{k,1})t}+|V_{\boldsymbol{k}}(0)|^{2}e^{i(\omega_{k,2}+\omega_{k,1})t}]$
(33) $\displaystyle=$
$\displaystyle|U_{\boldsymbol{k}}(0)|^{2}\cos[(\omega_{k,2}-\omega_{k,1})t]+|V_{\boldsymbol{k}}(0)|^{2}\cos[(\omega_{k,2}+\omega_{k,1})t].$
Substituting this result in eqs. (17) yields the flat space oscillation
formulae, which further reduce to the Pontecorvo oscillation formulae in the
quantum mechanical limit ($V_{\boldsymbol{k}}=0$). Flat space also offers the
possibility to illustrate some points discussed above in the simplest possible
context. One might well expand the mass fields in terms of modes with definite
energy and angular momentum $\zeta_{\omega,\kappa_{j},m_{j};i}$,
$\xi_{\omega,\kappa_{j},m_{j};i}$ instead of considering modes with definite
cartesian three–momentum $\boldsymbol{k}$. The former shall be suitable
combinations of spherical spinors AngularMom . An interesting aspect is that
in such a representation, the mixing coefficients are no longer diagonal.
Indeed one has
$\Lambda_{\omega^{\prime}\kappa^{\prime}_{j}m^{\prime}_{j};\omega\kappa_{j}m_{j}}(t)=\delta_{\omega^{\prime},\sqrt{\omega^{2}+\Delta
m^{2}}}\delta_{\kappa^{\prime}_{j},\kappa_{j}}\delta_{m^{\prime}_{j},m_{j}}|U_{\omega,\omega^{\prime}}|e^{i(\omega^{\prime}-\omega)t}$
(34)
with $\Delta m^{2}=m_{2}^{2}-m_{1}^{2}$ and
$|U_{\omega,\omega^{\prime}}|=\sqrt{\frac{\omega^{\prime}+m_{2}}{2\omega^{\prime}}}\sqrt{\frac{\omega+m_{1}}{2\omega}}\left(1+\sqrt{\frac{(\omega^{\prime}-m_{2})(\omega-
m_{1})}{(\omega+m_{1})(\omega^{\prime}+m_{2})}}\right)\ ,$ (35)
and similar for $\Xi$, where the exponential is
$e^{i(\omega^{\prime}+\omega)t}$ and $|U_{\omega,\omega^{\prime}}|$ is
replaced by
$|V_{\omega,\omega^{\prime}}|=\sqrt{\frac{\omega^{\prime}+m_{2}}{2\omega^{\prime}}}\sqrt{\frac{\omega+m_{1}}{2\omega}}\left(\sqrt{\frac{\omega^{\prime}-m_{2}}{\omega^{\prime}+m_{2}}}-\sqrt{\frac{\omega-
m_{1}}{\omega+m_{1}}}\right)\ .$ (36)
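As a quick numerical sanity check of (35)–(36) (a sketch with illustrative masses and energy; Python/NumPy), one can verify that once $\omega^{\prime}=\sqrt{\omega^{2}+\Delta m^{2}}$ is imposed, as dictated by the delta in (34), the energy-labelled coefficients satisfy the normalization (14):

```python
import numpy as np

m1, m2, w = 0.1, 0.5, 2.0            # illustrative masses and energy (w >= m1)
wp = np.sqrt(w**2 + m2**2 - m1**2)   # omega' fixed by the delta in eq. (34)

pref = np.sqrt((wp + m2) / (2 * wp)) * np.sqrt((w + m1) / (2 * w))
U = pref * (1 + np.sqrt((wp - m2) * (w - m1) / ((w + m1) * (wp + m2))))     # eq. (35)
V = pref * (np.sqrt((wp - m2) / (wp + m2)) - np.sqrt((w - m1) / (w + m1)))  # eq. (36)

# Normalization (14); U, V also agree with the usual flat-space |U_k|, |V_k|
# at k^2 = w^2 - m1^2 = wp^2 - m2^2.
print(U**2 + V**2)   # -> 1.0
```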
Here the quantum number $\kappa_{j}$ refers to a relativistic generalization
of the spin–orbit operator, which enters the Dirac equation in spherical
coordinates AngularMom ; Thaller , and takes into account both the orbital and
spin angular momentum. The index $m_{j}$ refers to one component of the total
angular momentum $\boldsymbol{J}$, and is not to be confused with the masses
$m_{i}$. Without delving into the details of the calculation, the result of
equation (34) can be understood as follows. The modes, apart from a
normalization constant, are given by
$\zeta_{\omega,\kappa_{j},m_{j};i}=e^{-i\omega
t}\sqrt{\frac{\omega+m_{i}}{2\omega
r^{2}}}\begin{pmatrix}P_{\kappa_{j}}(\lambda_{i}r)\Omega_{\kappa_{j},m_{j}}(\theta,\phi)\\\
\sqrt{\frac{\omega-
m_{i}}{\omega+m_{i}}}P_{\kappa_{j}}(\lambda_{i}r)\Omega_{-\kappa_{j},m_{j}}(\theta,\phi)\end{pmatrix}$
(37)
$\xi_{\omega,\kappa_{j},m_{j};i}=e^{i\omega
t}\sqrt{\frac{\omega+m_{i}}{2\omega r^{2}}}\begin{pmatrix}-\sqrt{\frac{\omega-
m_{i}}{\omega+m_{i}}}P_{\kappa_{j}}(-\lambda_{i}r)\Omega_{\kappa_{j},m_{j}}(\theta,\phi)\\\
P_{\kappa_{j}}(-\lambda_{i}r)\Omega_{-\kappa_{j},m_{j}}(\theta,\phi)\end{pmatrix}$
(38)
where $\Omega_{\kappa_{j},m_{j}}(\theta,\phi)$ are spherical spinors, and
$P_{\kappa_{j}}(\lambda_{i}r)$ are radial functions of the product
$\lambda_{i}r$, with the radial momentum
$\lambda_{i}=\sqrt{\omega^{2}-m_{i}^{2}}$. These are solutions to the radial
part of the Dirac equation, which turns out to be a Riccati–Bessel equation Abramowitz . The functions $P_{\kappa_{j}}(\lambda_{i}r)$ are combinations of spherical Bessel functions $j_{n}$ of the form $P_{\kappa_{j}}(\lambda_{i}r)=rj_{\kappa_{j}}(\lambda_{i}r)$. In computing the
inner products, the radial integration $\int
drP^{*}_{\kappa_{j};2}(\lambda_{2}r)P_{\kappa_{j};1}(\lambda_{1}r)$ will
produce a factor $\delta_{\lambda_{2},\pm\lambda_{1}}$, because of the closure
relation satisfied by the spherical Bessel functions. Since
$\lambda_{2}=\sqrt{\omega^{{}^{\prime}2}-m_{2}^{2}}$ and
$\lambda_{1}=\sqrt{\omega^{2}-m_{1}^{2}}$, this will give rise to the delta
factor appearing in (34). Notice that
$|U_{\omega,\omega^{\prime}}|,|V_{\omega,\omega^{\prime}}|$ are numerically
the same as the usual flat space coefficients
$|U_{\boldsymbol{k}}|,|V_{\boldsymbol{k}}|$, when
$k^{2}=\omega^{2}-m_{1}^{2}=\omega^{{}^{\prime}2}-m_{2}^{2}$. This shows why
the mixing coefficients $\Lambda$, $\Xi$ are not generally diagonal, and illustrates the flexibility of the formalism we have employed. Indeed, the non–diagonal
coefficients automatically ensure that the flavor operators
$\gamma_{\omega,\rho}$, $\epsilon_{\omega,\rho}$ take into account the mass
difference, involving operators with distinct energies
$\omega,\omega^{\prime}$ for the fields $\psi_{1}$ and $\psi_{2}$ Note5 . In
flat space, the shift between the two representations is actually of no use.
However, in a non–trivial framework, the versatility of the formalism is
essential, as there are instances in which the Cartesian components of the momentum $k_{x},k_{y},k_{z}$ are useless, while the “spherical” quantum numbers $\omega,l,m$ are well defined.
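Before moving to curved backgrounds, it is worth recording the explicit flat-space limit as a one-line check. Substituting (33) into (17) with $\tau_{0}=0$ gives
$P^{e\rightarrow\mu}_{k,s}(t)=2\cos^{2}(\theta)\sin^{2}(\theta)\left[1-|U_{\boldsymbol{k}}(0)|^{2}\cos[(\omega_{k,2}-\omega_{k,1})t]-|V_{\boldsymbol{k}}(0)|^{2}\cos[(\omega_{k,2}+\omega_{k,1})t]\right]\ ,$
which collapses to the Pontecorvo formula $\sin^{2}(2\theta)\sin^{2}\left[\frac{(\omega_{k,2}-\omega_{k,1})t}{2}\right]$ in the quantum mechanical limit $|V_{\boldsymbol{k}}|\rightarrow 0$, $|U_{\boldsymbol{k}}|\rightarrow 1$.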
### IV.2 Expanding universe with exponential growth of the scale factor
The simplest non–trivial application is to spatially flat
Friedmann–Lemaitre–Robertson–Walker (FLRW) spacetimes. Consider the metric
$ds^{2}=dt^{2}-a^{2}(t)(dx^{2}+dy^{2}+dz^{2})$ with an exponential expansion
$a(t)=e^{Ht}$, $H=constant$. This is well–suited to describe a homogeneous,
spatially flat and isotropic universe dominated by a cosmological constant.
The normalized solutions to the Dirac equation for this metric were derived in
Barut . Assuming, without loss of generality, the momentum $k$ to be along the
$z$ direction, the helicities decouple as in flat space. Choosing the Cauchy
surfaces as the surfaces with $t=constant$, or equivalently with
$a(t)=constant$, the mixing coefficients read
$\displaystyle\Lambda_{k,s;q,r}$
$\displaystyle(t)=\delta_{s,r}\delta^{3}(\boldsymbol{k}-\boldsymbol{q})\frac{\pi
ke^{-Ht}}{2H\sqrt{\cos(\frac{i\pi m_{2}}{H})\cos(\frac{i\pi
m_{1}}{H})}}\times\left[\\!J_{v_{1}}^{*}\\!\\!\left(\\!\\!\frac{k}{H}e^{-Ht}\\!\\!\right)\\!\\!J_{v_{2}}\\!\\!\left(\\!\\!\frac{k}{H}e^{-Ht}\\!\\!\right)\\!+\\!J_{v_{1}-1}^{*}\\!\\!\left(\\!\\!\frac{k}{H}e^{-Ht}\\!\\!\right)\\!\\!J_{v_{2}-1}\\!\\!\left(\\!\\!\frac{k}{H}e^{-Ht}\\!\\!\right)\\!\right]$
(39) $\displaystyle\Xi_{k,s;q,r}$
$\displaystyle(t)=\delta_{s,r}(-1)^{s}\delta^{3}(\boldsymbol{k}-\boldsymbol{q})\frac{\pi
ke^{-Ht}}{2H\sqrt{\cos(\frac{i\pi m_{2}}{H})\cos(\frac{i\pi
m_{1}}{H})}}\left[\\!J_{v_{1}}^{*}\\!\\!\left(\\!\\!\frac{k}{H}e^{-Ht}\\!\\!\right)\\!\\!J_{-v_{2}}\\!\\!\left(\\!\\!\frac{k}{H}e^{-Ht}\\!\\!\right)\\!+\\!J_{v_{1}-1}^{*}\\!\\!\left(\\!\\!\frac{k}{H}e^{-Ht}\\!\\!\right)\\!\\!J_{1-v_{2}}\\!\\!\left(\\!\\!\frac{k}{H}e^{-Ht}\\!\\!\right)\\!\right]$
(40)
where $J_{\alpha}$ denotes the $\alpha$ Bessel function and
$v_{j}=\frac{1}{2}\left(1+\frac{2im_{j}}{H}\right)$ for $j=1,2$. Plugging
these expressions in equations (17), we obtain
$\displaystyle
P^{e\rightarrow\mu}_{k,s}(t)=2\cos^{2}(\theta)\sin^{2}(\theta)\bigg{\\{}1-\frac{\pi^{2}k^{2}e^{-H(t+t_{0})}}{4H^{2}\cos(\frac{i\pi
m_{2}}{H})\cos(\frac{i\pi m_{1}}{H})}$ (41)
$\displaystyle\times\Re\bigg{[}\left[\\!J_{v_{1}}\\!\\!\left(\\!\\!\frac{k}{H}e^{-Ht_{0}}\\!\\!\right)\\!\\!J_{v_{2}}^{*}\\!\\!\left(\\!\\!\frac{k}{H}e^{-Ht_{0}}\\!\\!\right)\\!+\\!J_{v_{1}-1}\\!\\!\left(\\!\\!\frac{k}{H}e^{-Ht_{0}}\\!\\!\right)\\!\\!J_{v_{2}-1}^{*}\\!\\!\left(\\!\\!\frac{k}{H}e^{-Ht_{0}}\\!\\!\right)\\!\right]\left[\\!J_{v_{1}}^{*}\\!\\!\left(\\!\\!\frac{k}{H}e^{-Ht}\\!\\!\right)\\!\\!J_{v_{2}}\\!\\!\left(\\!\\!\frac{k}{H}e^{-Ht}\\!\\!\right)\\!+\\!J_{v_{1}-1}^{*}\\!\\!\left(\\!\\!\frac{k}{H}e^{-Ht}\\!\\!\right)\\!\\!J_{v_{2}-1}\\!\\!\left(\\!\\!\frac{k}{H}e^{-Ht}\\!\\!\right)\\!\right]$
$\displaystyle+\left[\\!J_{v_{1}}\\!\\!\left(\\!\\!\frac{k}{H}e^{-Ht_{0}}\\!\\!\right)\\!\\!J_{-v_{2}}^{*}\\!\\!\left(\\!\\!\frac{k}{H}e^{-Ht_{0}}\\!\\!\right)\\!+\\!J_{v_{1}-1}\\!\\!\left(\\!\\!\frac{k}{H}e^{-Ht_{0}}\\!\\!\right)\\!\\!J_{1-v_{2}}^{*}\\!\\!\left(\\!\\!\frac{k}{H}e^{-Ht_{0}}\\!\\!\right)\\!\right]\left[\\!J_{v_{1}}^{*}\\!\\!\left(\\!\\!\frac{k}{H}e^{-Ht}\\!\\!\right)\\!\\!J_{-v_{2}}\\!\\!\left(\\!\\!\frac{k}{H}e^{-Ht}\\!\\!\right)\\!+\\!J_{v_{1}-1}^{*}\\!\\!\left(\\!\\!\frac{k}{H}e^{-Ht}\\!\\!\right)\\!\\!J_{1-v_{2}}\\!\\!\left(\\!\\!\frac{k}{H}e^{-Ht}\\!\\!\right)\\!\right]\bigg{]}\bigg{\\}}$
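Numerically, the Bessel functions in (39)–(41) have complex order $v_{j}$, which rules out real-order routines but is handled by mpmath. A small evaluation sketch (Python; all parameter values are illustrative, and only the scalar parts of the coefficients are kept, since the $(-1)^{s}$ factor of (40) cancels in (41)):

```python
from mpmath import mp, besselj, cosh, cos, sin, exp, conj, re, pi

mp.dps = 30                         # high precision for complex-order Bessel functions
H, k, m1, m2 = 1.0, 5.0, 0.1, 0.3   # illustrative Hubble rate, momentum, masses
v1 = 0.5 * (1 + 2j * m1 / H)
v2 = 0.5 * (1 + 2j * m2 / H)

def lam_xi(t):
    """Scalar parts of eqs. (39)-(40); note cos(i pi m/H) = cosh(pi m/H)."""
    x = (k / H) * exp(-H * t)
    pref = pi * x / (2 * (cosh(pi * m1 / H) * cosh(pi * m2 / H))**0.5)
    lam = pref * (conj(besselj(v1, x)) * besselj(v2, x)
                  + conj(besselj(v1 - 1, x)) * besselj(v2 - 1, x))
    xi = pref * (conj(besselj(v1, x)) * besselj(-v2, x)
                 + conj(besselj(v1 - 1, x)) * besselj(1 - v2, x))
    return lam, xi

theta, t0, t = 0.6, 0.0, 2.0
L0, X0 = lam_xi(t0)
Lt, Xt = lam_xi(t)
P = 2 * cos(theta)**2 * sin(theta)**2 * (1 - re(conj(L0) * Lt + conj(X0) * Xt))
print(P)   # the transition probability (41) at the illustrative parameters
```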
### IV.3 Expanding universe dominated by radiation
Here we consider the FLRW metric for a radiation dominated universe
$a(t)=a_{0}t^{\frac{1}{2}}$. Notice that since $a(t)$ has to be dimensionless,
$a_{0}$ has dimension $[t]^{-\frac{1}{2}}=[m]^{\frac{1}{2}}$. As before,
without loss of generality, we assume the neutrino momentum $k$ along the $z$
direction to decouple the helicities, and consider a foliation by the
$t=constant$ hypersurfaces. The solutions to the Dirac equation for this
metric are again found in Barut , and yield the mixing coefficients
$\displaystyle\Lambda_{k,s;q,r}(t)=\delta_{s,r}\delta^{3}(\boldsymbol{k}-\boldsymbol{q})\frac{1}{\sqrt[4]{4m_{1}m_{2}t^{2}}}e^{-\frac{\pi
k^{2}(m_{1}+m_{2})}{4m_{1}m_{2}a_{0}^{2}}}\bigg{\\{}W^{*}_{\kappa_{2},\frac{1}{4}}(-2im_{2}t)W_{\kappa_{1},\frac{1}{4}}(-2im_{1}t)$
(42)
$\displaystyle+\frac{4}{m_{1}m_{2}a_{0}^{2}t}\bigg{[}W^{*}_{\kappa_{2},\frac{1}{4}}(-2im_{2}t)-\frac{1}{8}\left(1-\frac{ik^{2}}{m_{2}a_{0}^{2}}\right)W^{*}_{\kappa_{2}-1,\frac{1}{4}}(-2im_{2}t)\bigg{]}\bigg{[}W_{\kappa_{1},\frac{1}{4}}(-2im_{1}t)-\frac{1}{8}\left(1-\frac{ik^{2}}{m_{1}a_{0}^{2}}\right)W_{\kappa_{1}-1,\frac{1}{4}}(-2im_{1}t)\bigg{]}\bigg{\\}}$
$\displaystyle\Xi_{k,s;q,r}(t)=\delta_{s,r}\delta^{3}(\boldsymbol{k}-\boldsymbol{q})\frac{k}{\sqrt[4]{2m_{1}(2m_{2})^{3}a_{0}^{2}t}}e^{-\frac{\pi
k^{2}(m_{1}+m_{2})}{4m_{1}m_{2}a_{0}^{2}}}\bigg{\\{}W^{*}_{\kappa_{1},\frac{1}{4}}(-2im_{1}t)W_{-\kappa_{2},\frac{1}{4}}(-2im_{2}t)$
(43)
$\displaystyle+\frac{1}{m_{1}m_{2}a_{0}^{2}t}\bigg{[}W^{*}_{\kappa_{1},\frac{1}{4}}(-2im_{1}t)-\frac{1}{8}\left(1-\frac{ik^{2}}{m_{1}a_{0}^{2}}\right)W^{*}_{\kappa_{1}-1,\frac{1}{4}}(-2im_{1}t)\bigg{]}\bigg{[}W_{-\kappa_{2},\frac{1}{4}}(2im_{2}t)+\frac{2im_{2}a_{0}^{2}}{k^{2}}W_{-\kappa_{2}+1,\frac{1}{4}}(2im_{2}t)\bigg{]}\bigg{\\}}$
where $W_{\kappa,\mu}(z)$ are the Whittaker functions Abramowitz and
$\kappa_{j}=\frac{1}{4}\left(1+\frac{2ik^{2}}{a_{0}^{2}m_{j}}\right)$ for
$j=1,2$. Insertion in eqs. (17) gives the transition probabilities
($t_{0},t>0$)
$\displaystyle
P^{e\rightarrow\mu}_{k,s}(t)=2\cos^{2}(\theta)\sin^{2}(\theta)\bigg{\\{}1+\Re\bigg{[}\frac{1}{\sqrt[2]{4m_{1}m_{2}t_{0}t}}e^{-\frac{\pi
k^{2}(m_{1}+m_{2})}{2m_{1}m_{2}a_{0}^{2}}}\bigg{\\{}W_{\kappa_{2},\frac{1}{4}}(-2im_{2}t_{0})W^{*}_{\kappa_{1},\frac{1}{4}}(-2im_{1}t_{0})$
$\displaystyle+\frac{4}{m_{1}m_{2}a_{0}^{2}t_{0}}\left(W_{\kappa_{2},\frac{1}{4}}(-2im_{2}t_{0})-\frac{1}{8}\left(1+\frac{ik^{2}}{m_{2}a_{0}^{2}}\right)W_{\kappa_{2}-1,\frac{1}{4}}(-2im_{2}t_{0})\right)\left(W^{*}_{\kappa_{1},\frac{1}{4}}(-2im_{1}t_{0})-\frac{1}{8}\left(1+\frac{ik^{2}}{m_{1}a_{0}^{2}}\right)W^{*}_{\kappa_{1}-1,\frac{1}{4}}(-2im_{1}t_{0})\right)\bigg{\\}}$
$\displaystyle\times\bigg{\\{}W^{*}_{\kappa_{2},\frac{1}{4}}(-2im_{2}t)W_{\kappa_{1},\frac{1}{4}}(-2im_{1}t)$
$\displaystyle+\frac{4}{m_{1}m_{2}a_{0}^{2}t}\left(W^{*}_{\kappa_{2},\frac{1}{4}}(-2im_{2}t)-\frac{1}{8}\left(1-\frac{ik^{2}}{m_{2}a_{0}^{2}}\right)W^{*}_{\kappa_{2}-1,\frac{1}{4}}(-2im_{2}t)\right)\left(W_{\kappa_{1},\frac{1}{4}}(-2im_{1}t)-\frac{1}{8}\left(1-\frac{ik^{2}}{m_{1}a_{0}^{2}}\right)W_{\kappa_{1}-1,\frac{1}{4}}(-2im_{1}t)\right)\bigg{\\}}$
$\displaystyle+\frac{k^{2}}{\sqrt[2]{2m_{1}(2m_{2})^{3}a_{0}^{4}t_{0}t}}e^{-\frac{\pi
k^{2}(m_{1}+m_{2})}{2m_{1}m_{2}a_{0}^{2}}}\bigg{\\{}W_{\kappa_{1},\frac{1}{4}}(-2im_{1}t_{0})W^{*}_{-\kappa_{2},\frac{1}{4}}(-2im_{2}t_{0})$
$\displaystyle+\frac{1}{m_{1}m_{2}a_{0}^{2}t_{0}}\left(W_{\kappa_{1},\frac{1}{4}}(-2im_{1}t_{0})-\frac{1}{8}\left(1+\frac{ik^{2}}{m_{1}a_{0}^{2}}\right)W_{\kappa_{1}-1,\frac{1}{4}}(-2im_{1}t_{0})\right)\left(W^{*}_{-\kappa_{2},\frac{1}{4}}(2im_{2}t_{0})-\frac{2im_{2}a_{0}^{2}}{k^{2}}W^{*}_{-\kappa_{2}+1,\frac{1}{4}}(2im_{2}t_{0})\right)\bigg{\\}}$
$\displaystyle\times\bigg{\\{}W^{*}_{\kappa_{1},\frac{1}{4}}(-2im_{1}t)W_{-\kappa_{2},\frac{1}{4}}(-2im_{2}t)$
$\displaystyle+\frac{1}{m_{1}m_{2}a_{0}^{2}t}\left(W^{*}_{\kappa_{1},\frac{1}{4}}(-2im_{1}t)-\frac{1}{8}\left(1-\frac{ik^{2}}{m_{1}a_{0}^{2}}\right)W^{*}_{\kappa_{1}-1,\frac{1}{4}}(-2im_{1}t)\right)\left(W_{-\kappa_{2},\frac{1}{4}}(2im_{2}t)+\frac{2im_{2}a_{0}^{2}}{k^{2}}W_{-\kappa_{2}+1,\frac{1}{4}}(2im_{2}t)\right)\bigg{\\}}\bigg{]}\bigg{\\}}$
(44)
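The Whittaker functions entering (42)–(44) are likewise available in mpmath as whitw. A minimal evaluation sketch for the $\Lambda$ coefficient of (42) (Python; illustrative parameters, with the $\Xi$ piece of (43) following the same pattern):

```python
from mpmath import mp, whitw, exp, conj, pi

mp.dps = 30
a0, k, m1, m2 = 1.0, 0.5, 0.1, 0.3           # illustrative: a(t) = a0 * sqrt(t)
k1 = 0.25 * (1 + 2j * k**2 / (a0**2 * m1))   # kappa_1, kappa_2 defined after (43)
k2 = 0.25 * (1 + 2j * k**2 / (a0**2 * m2))

def lam(t):
    """Scalar part of eq. (42) at time t > 0."""
    pref = (4 * m1 * m2 * t**2)**(-0.25) * exp(-pi * k**2 * (m1 + m2) / (4 * m1 * m2 * a0**2))
    W1 = whitw(k1, 0.25, -2j * m1 * t)
    W2 = whitw(k2, 0.25, -2j * m2 * t)
    B2 = conj(W2) - 0.125 * (1 - 1j * k**2 / (m2 * a0**2)) * conj(whitw(k2 - 1, 0.25, -2j * m2 * t))
    B1 = W1 - 0.125 * (1 - 1j * k**2 / (m1 * a0**2)) * whitw(k1 - 1, 0.25, -2j * m1 * t)
    return pref * (conj(W2) * W1 + 4 / (m1 * m2 * a0**2 * t) * B2 * B1)

# Lambda enters eq. (44) through Re[Lambda*(t0) Lambda(t) + ...]:
print(lam(1.0), lam(3.0))
```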
### IV.4 Spacetimes with asymptotically flat regions
The FLRW spacetimes considered above are among the few non–trivial metrics for
which the Dirac equation (1) can be solved analytically. More often one does
not have an exact solution at his disposal, and the implementation of eqs.
(17) is a complicated task. In many cases of interest, however, the spacetime
$M$ admits asymptotically flat regions $\Omega_{A},\Omega_{B}\subset M$,
usually in the far past and in the far future. $\Omega_{A}$ and $\Omega_{B}$
are separated by a region with non-trivial curvature, where the Dirac equation
is usually unsolvable analytically. In $\Omega_{A}$ and $\Omega_{B}$ the
solutions to the Dirac equation are the flat space modes
$\\{u^{I}_{k,s,i}(x),v^{I}_{k,s,i}\\}$ with $I=A,B$, and one has a natural
choice for the positive frequency modes. Because of the non-trivial curvature,
the two sets of solutions $A,B$ are distinct. When limited to one of the two
regions, eqs. (17) reduce to the ordinary flat space formulae. When the
intermediate curvature region is involved, a direct application of eqs. (17)
is prohibitive. Nevertheless, if one is able to provide the relation between
the two sets $A,B$, in the form of a Bogoliubov transformation, it is possible
to derive oscillation formulae for the propagation from $\Omega_{A}$ to
$\Omega_{B}$. Assume that $\Omega_{A}=\bigcup_{\tau\leq\tau_{A}}\Sigma_{\tau}$
and $\Omega_{B}=\bigcup_{\tau\geq\tau_{B}}\Sigma_{\tau}$ for some values
$\tau_{B}>\tau_{A}$, and consider $\tau_{0}\leq\tau_{A}$ , $\tau\geq\tau_{B}$.
Suppose also that the $B$ modes are given in terms of the $A$ modes as
$\displaystyle u^{B}_{k^{\prime},s^{\prime},i}$ $\displaystyle=$
$\displaystyle\sum_{k,s}\left(\Gamma_{k^{\prime},s^{\prime};k,s;i}^{*}u^{A}_{k,s,i}+\Sigma_{k^{\prime},s^{\prime};k,s;i}^{*}v^{A}_{k,s,i}\right)$
(45) $\displaystyle v^{B}_{k^{\prime}s^{\prime},i}$ $\displaystyle=$
$\displaystyle\sum_{k,s}\left(\Gamma_{k^{\prime},s^{\prime};k,s;i}v^{A}_{k,s,i}-\Sigma_{k^{\prime},s^{\prime};k,s;i}u^{A}_{k,s,i}\right)\
.$ (46)
Here the Bogoliubov coefficients
$\Gamma_{k^{\prime},s^{\prime};k,s;i},\Sigma_{k^{\prime},s^{\prime};k,s;i}$
are again provided by the inner products
$(u^{A}_{i},u^{B}_{i}),(u^{A}_{i},v^{B}_{i})$, yet their significance is slightly different from those in (4). While equation (4) describes a general and arbitrary change of basis, the transformation of equation (45) is dictated by the circumstance of having a natural choice for the modes in $\Omega_{A}$
and $\Omega_{B}$, with a well–defined physical meaning. Indeed, the Bogoliubov
coefficients take into account the effect of the curvature in the intermediate
region. Then we can specialize equations (21) and (22) to get
$\displaystyle\Lambda^{B}_{q,r;k,s}(\tau)=\sum_{q^{\prime},k^{\prime},r^{\prime},s^{\prime}}\bigg{[}\Gamma_{q,r;q^{\prime},r^{\prime};2}\left(\Gamma_{k,s;k^{\prime},s^{\prime};1}^{*}\Lambda_{q^{\prime},r^{\prime};k^{\prime},s^{\prime}}^{A}(\tau)-\Sigma_{k,s;k^{\prime},s^{\prime};1}^{*}\Xi_{q^{\prime},r^{\prime};k^{\prime},s^{\prime}}^{A}(\tau)\right)$
$\displaystyle+\Sigma_{q,r;q^{\prime},r^{\prime};2}\left(\Gamma_{k,s;k^{\prime},s^{\prime};1}^{*}\Xi_{q^{\prime},r^{\prime};k^{\prime},s^{\prime}}^{A*}(\tau)+\Sigma_{k,s;k^{\prime},s^{\prime};1}^{*}\Lambda_{q^{\prime},r^{\prime};k^{\prime},s^{\prime}}^{A*}(\tau)\right)\bigg{]}\ .$ (47)
$\displaystyle\Xi^{B}_{q,r;k,s}(\tau)=\sum_{q^{\prime},k^{\prime},r^{\prime},s^{\prime}}\bigg{[}\Gamma_{q,r;q^{\prime},r^{\prime};2}\left(\Gamma_{k,s;k^{\prime},s^{\prime};1}\Xi^{A}_{q^{\prime},r^{\prime};k^{\prime},s^{\prime}}(\tau)+\Sigma_{k,s;k^{\prime},s^{\prime};1}\Lambda^{A}_{q^{\prime},r^{\prime};k^{\prime},s^{\prime}}(\tau)\right)$
$\displaystyle-\Sigma_{q,r;q^{\prime},r^{\prime};2}\left(\Gamma_{k,s;k^{\prime},s^{\prime};1}\Lambda_{q^{\prime},r^{\prime};k^{\prime},s^{\prime}}^{A*}(\tau)-\Sigma_{k,s;k^{\prime},s^{\prime};1}\Xi_{q^{\prime},r^{\prime};k^{\prime},s^{\prime}}^{A*}(\tau)\right)\bigg{]}\ .$ (48)
For $\tau<\tau_{A}$, $\Lambda^{A}(\tau),\Xi^{A}(\tau)$ are trivial and, vice
versa, for $\tau>\tau_{B}$, $\Lambda^{B}(\tau),\Xi^{B}(\tau)$ are trivial.
Choosing, for instance, the $B$ representation, one would have a trivial
expression for $\Lambda^{B}(\tau),\Xi^{B}(\tau)$ and one can make use of eqs.
(IV.4), (48) to obtain $\Lambda^{B}(\tau_{0}),\Xi^{B}(\tau_{0})$ in terms of
$\Lambda^{A}(\tau_{0}),\Xi^{A}(\tau_{0})$, which are also trivial. Then one
can plug $\Lambda^{B}(\tau,\tau_{0})$ and $\Xi^{B}(\tau,\tau_{0})$ in equation
(17) to obtain $P^{e\rightarrow\mu}_{k,s\rightarrow q,r}(\tau)$ for
$\tau\geq\tau_{B}$ and the reference hypersurface $\tau_{0}\leq\tau_{A}$.
#### IV.4.1 Schwarzschild black hole
As a realization of this scheme, consider the (static) Schwarzschild metric
$\displaystyle
ds^{2}=\left(1-\frac{2GM}{r}\right)dt^{2}-\left(1-\frac{2GM}{r}\right)^{-1}dr^{2}-r^{2}d\Omega,$
(49)
and two sequences of spacelike hypersurfaces [30],
$\\{\Sigma_{n}^{+}\\}_{n\in\mathbb{N}}$ and
$\\{\Sigma_{n}^{-}\\}_{n\in\mathbb{N}}$, approaching, respectively, the future
null infinity $\mathcal{I}^{+}$ and the past null infinity $\mathcal{I}^{-}$ as
$n\rightarrow\infty$. We require that, for each $n$, $\Sigma_{n}^{+}$ and
$\Sigma_{n}^{-}$ be Cauchy surfaces for the causal past
$J^{-}(\mathcal{I}^{+})$ of $\mathcal{I}^{+}$ and the causal future
$J^{+}(\mathcal{I}^{-})$ of $\mathcal{I}^{-}$, respectively. For $n$ large
enough, these surfaces span an
approximately flat portion of the Schwarzschild spacetime. On the surfaces
$\Sigma_{n}^{-}$, as $n$ approaches infinity, we expand the massive fields in
terms of the incoming solutions, with frequency defined with respect to the
Schwarzschild time $t$ and with definite angular momentum,
$\displaystyle\zeta^{IN}_{\omega,\kappa_{j},m_{j};i}(t,r,\theta,\phi)\propto
e^{-i\omega
t}\,,\qquad\qquad\xi^{IN}_{\omega,\kappa_{j},m_{j};i}(t,r,\theta,\phi)\propto
e^{i\omega t}.$ (50)
These modes reduce to the flat space solutions (37) and (38) as
$\mathcal{I}^{-}$ is approached. Omitting the irrelevant angular and spin
quantum numbers, as $n\rightarrow\infty$, we get
$\displaystyle\Lambda^{IN}_{\omega;\omega^{\prime}}(\Sigma^{-}_{n})\rightarrow\delta_{\omega^{\prime},\sqrt{\omega^{2}+\Delta
m^{2}}}|U_{\omega,\omega^{\prime}}|e^{i\varphi^{-}(\omega,n)}$ (51)
$\displaystyle\Xi^{IN}_{\omega;\omega^{\prime}}(\Sigma^{-}_{n})\rightarrow\delta_{\omega^{\prime},\sqrt{\omega^{2}+\Delta m^{2}}}|V_{\omega,\omega^{\prime}}|e^{i\rho^{-}(\omega,n)}$ (52)
with $|U_{\omega,\omega^{\prime}}|$ and $|V_{\omega,\omega^{\prime}}|$ flat
space spherical mixing coefficients as defined in (35) and (36), and
$\varphi^{-}(\omega,n),\rho^{-}(\omega,n)$ phase factors depending on $\omega$
and $n$. A similar reasoning can be carried out for the _outgoing_ modes
emerging at $\mathcal{I}^{+}$,
$\zeta^{OUT}_{\omega,\kappa_{j},m_{j};i}(t,r,\theta,\phi)$ ,
$\xi^{OUT}_{\omega,\kappa_{j},m_{j};i}(t,r,\theta,\phi)$, so to yield as
$n\rightarrow\infty$
$\displaystyle\Lambda^{OUT}_{\omega;\omega^{\prime}}(\Sigma^{+}_{n})\rightarrow\delta_{\omega^{\prime},\sqrt{\omega^{2}+\Delta
m^{2}}}|U_{\omega,\omega^{\prime}}|e^{i\varphi^{+}(\omega,n)}$ (53)
$\displaystyle\Xi^{OUT}_{\omega;\omega^{\prime}}(\Sigma^{+}_{n})\rightarrow\delta_{\omega^{\prime},\sqrt{\omega^{2}+\Delta
m^{2}}}|V_{\omega,\omega^{\prime}}|e^{i\rho^{+}(\omega,n)}\,.$ (54)
Because of the black hole, the $IN$ and $OUT$ modes do not coincide. Fermion
creation by the Schwarzschild black hole has been studied, via the tunneling
method, in [31]. There it has been shown that the Hawking temperature
$T_{H}=\frac{1}{8\pi GM}$ is recovered for the emission of spin-$\frac{1}{2}$
particles. We then infer that the $IN$ and $OUT$ modes are related by a
thermal Bogoliubov transformation at the Hawking temperature $T_{H}$, with
coefficients of the Fermi-Dirac form appropriate for fermions:
$\displaystyle\zeta^{OUT}_{\omega,\kappa_{j},m_{j};i}=\sqrt{\frac{e^{\frac{\omega}{k_{B}T_{H}}}}{e^{\frac{\omega}{k_{B}T_{H}}}+1}}\zeta^{IN}_{\omega,\kappa_{j},m_{j};i}+\sqrt{\frac{1}{e^{\frac{\omega}{k_{B}T_{H}}}+1}}\xi^{IN}_{\omega,\kappa_{j},m_{j};i}$
(55)
$\displaystyle\xi^{OUT}_{\omega,\kappa_{j},m_{j};i}=\sqrt{\frac{e^{\frac{\omega}{k_{B}T_{H}}}}{e^{\frac{\omega}{k_{B}T_{H}}}+1}}\xi^{IN}_{\omega,\kappa_{j},m_{j};i}-\sqrt{\frac{1}{e^{\frac{\omega}{k_{B}T_{H}}}+1}}\zeta^{IN}_{\omega,\kappa_{j},m_{j};i}\
.$ (56)
It is understood that these equations hold as long as the spacetime is
stationary (eternal black hole). For a body that collapses to a Schwarzschild
black hole, as considered in the original paper by Hawking [32], we expect
a slightly modified version of the Bogoliubov transformations, comprising
non–diagonal thermal coefficients $\Gamma_{\omega,\omega^{\prime};i}$ and
$\Sigma_{\omega,\omega^{\prime};i}$ with $\omega\neq\omega^{\prime}$. We pick
the ingoing representation to calculate the probabilities (of course, the
outgoing representation yields the same result, as the Bogoliubov
transformations are diagonal), and employ eqs. (IV.4, 48) to obtain, with
$F_{H}(\omega)=\frac{1}{e^{\frac{\omega}{k_{B}T_{H}}}+1}$,
$\displaystyle\Lambda^{IN}_{\omega;\omega^{\prime}}(\Sigma^{+}_{n})$
$\displaystyle=$
$\displaystyle\bigg{\\{}\sqrt{\left[1-F_{H}(\omega)\right]\left[1-F_{H}(\omega^{\prime})\right]}\Lambda^{OUT}_{\omega;\omega^{\prime}}(\Sigma^{+}_{n})-\sqrt{F_{H}(\omega)\left[1-F_{H}(\omega^{\prime})\right]}\Xi^{OUT}_{\omega;\omega^{\prime}}(\Sigma^{+}_{n})$
(57) $\displaystyle+$
$\displaystyle\sqrt{F_{H}(\omega^{\prime})\left[1-F_{H}(\omega)\right]}\left(\Xi^{OUT}_{\omega;\omega^{\prime}}\right)^{*}(\Sigma^{+}_{n})+\sqrt{F_{H}(\omega)F_{H}(\omega^{\prime})}\left(\Lambda^{OUT}_{\omega;\omega^{\prime}}\right)^{*}(\Sigma^{+}_{n})\bigg{\\}}\,.$
Given that
$\Lambda^{OUT}_{\omega;\omega^{\prime}}(\Sigma^{+}_{n})\rightarrow\delta_{\omega^{\prime},\sqrt{\omega^{2}+\Delta
m^{2}}}|U_{\omega,\omega^{\prime}}|e^{i\varphi^{+}(\omega,n)}$ and
$\Xi^{OUT}_{\omega;\omega^{\prime}}(\Sigma^{+}_{n})\rightarrow\delta_{\omega^{\prime},\sqrt{\omega^{2}+\Delta
m^{2}}}|V_{\omega,\omega^{\prime}}|e^{i\rho^{+}(\omega,n)}$ as
$n\rightarrow\infty$, we obtain
$\displaystyle\Lambda^{IN}_{\omega;\omega^{\prime}}(\Sigma^{+}_{n})$
$\displaystyle=$
$\displaystyle\bigg{\\{}\sqrt{\left[1-F_{H}(\omega)\right]\left[1-F_{H}(\omega^{\prime})\right]}|U_{\omega;\omega^{\prime}}|e^{i\varphi^{+}(\omega,n)}-\sqrt{F_{H}(\omega)\left[1-F_{H}(\omega^{\prime})\right]}|V_{\omega;\omega^{\prime}}|e^{i\rho^{+}(\omega,n)}$
(58) $\displaystyle+$
$\displaystyle\sqrt{F_{H}(\omega^{\prime})\left[1-F_{H}(\omega)\right]}|V_{\omega;\omega^{\prime}}|e^{-i\rho^{+}(\omega,n)}+\sqrt{F_{H}(\omega)F_{H}(\omega^{\prime})}|U_{\omega,\omega^{\prime}}|e^{-i\varphi^{+}(\omega,n)}\bigg{\\}}\delta_{\omega^{\prime},\sqrt{\omega^{2}+\Delta
m^{2}}}$
and
$\displaystyle\Xi^{IN}_{\omega;\omega^{\prime}}(\Sigma^{+}_{n})$
$\displaystyle=$
$\displaystyle\bigg{\\{}\sqrt{\left[1-F_{H}(\omega)\right]\left[1-F_{H}(\omega^{\prime})\right]}|V_{\omega,\omega^{\prime}}|e^{i\rho^{+}(\omega,n)}+\sqrt{F_{H}(\omega)\left[1-F_{H}(\omega^{\prime})\right]}|U_{\omega,\omega^{\prime}}|e^{i\varphi^{+}(\omega,n)}$
(59)
$\displaystyle-\sqrt{F_{H}(\omega^{\prime})\left[1-F_{H}(\omega)\right]}|U_{\omega,\omega^{\prime}}|e^{-i\varphi^{+}(\omega,n)}+\sqrt{F_{H}(\omega)F_{H}(\omega^{\prime})}|V_{\omega,\omega^{\prime}}|e^{-i\rho^{+}(\omega,n)}\bigg{\\}}\delta_{\omega^{\prime},\sqrt{\omega^{2}+\Delta
m^{2}}}\ .$
Choosing as reference hypersurface $\Sigma^{-}_{m}$ for large $m$, we can now
compute the probabilities (17) for a neutrino propagating from
$\Sigma^{-}_{m}$ to $\Sigma^{+}_{n}$, i.e. from $\mathcal{I}^{-}$ to
$\mathcal{I}^{+}$ in the limit $m,n\rightarrow\infty$. We find, for
$m,n\rightarrow\infty$
$\displaystyle P^{e\rightarrow\mu}_{\omega}(m,n)$ $\displaystyle\approx$
$\displaystyle 2\cos^{2}\theta\sin^{2}\theta\
\bigg{(}1-\sqrt{\left[1-F_{H}(\omega)\right]\left[1-F_{H}(\omega^{\prime})\right]}\left[|U_{\omega;\omega^{\prime}}|^{2}\cos(\Delta^{-}_{\omega;m,n})+|V_{\omega;\omega^{\prime}}|^{2}\cos(\Phi^{-}_{\omega;m,n})\right]$
(60) $\displaystyle+$
$\displaystyle\sqrt{F_{H}(\omega)\left[1-F_{H}(\omega^{\prime})\right]}|U_{\omega;\omega^{\prime}}||V_{\omega;\omega^{\prime}}|\left[\cos(\Theta^{-}_{\omega;m,n})-\cos(\Psi^{-}_{\omega;m,n})\right]$
$\displaystyle+$
$\displaystyle\sqrt{F_{H}(\omega^{\prime})\left[1-F_{H}(\omega)\right]}|U_{\omega;\omega^{\prime}}||V_{\omega;\omega^{\prime}}|\left[\cos(\Psi^{+}_{\omega;m,n})-\cos(\Theta^{+}_{\omega;m,n})\right]$
$\displaystyle-$
$\displaystyle\sqrt{F_{H}(\omega)F_{H}(\omega^{\prime})}\left[|U_{\omega;\omega^{\prime}}|^{2}\cos(\Delta^{+}_{\omega;m,n})+|V_{\omega;\omega^{\prime}}|^{2}\cos(\Phi^{+}_{\omega;m,n})\right]\bigg{)}\
.$
with $\Delta^{\pm}_{\omega;m,n}=\varphi^{+}(\omega,n)\pm\varphi^{-}(\omega,m)$
, $\Phi^{\pm}_{\omega;m,n}=\rho^{+}(\omega,n)\pm\rho^{-}(\omega,m)$,
$\Psi^{\pm}_{\omega;m,n}=\rho^{+}(\omega,n)\pm\varphi^{-}(\omega,m)$ and
$\Theta^{\pm}_{\omega;m,n}=\varphi^{+}(\omega,n)\pm\rho^{-}(\omega,m)$. In
particular, for large energies of the mass fields $\omega,\omega^{\prime}$,
$|V_{\omega,\omega^{\prime}}|\rightarrow 0$ and
$|U_{\omega,\omega^{\prime}}|\rightarrow 1$, thus
$\displaystyle P^{e\rightarrow\mu}_{\omega}(m,n)\approx 2\cos^{2}\theta\sin^{2}\theta\ \bigg{(}1-\sqrt{\left[1-F_{H}(\omega)\right]\left[1-F_{H}(\omega^{\prime})\right]}\cos(\Delta^{-}_{\omega;m,n})-\sqrt{F_{H}(\omega)F_{H}(\omega^{\prime})}\cos(\Delta^{+}_{\omega;m,n})\bigg{)}\ ,$ (61)
where it is understood that $\omega^{\prime 2}=\omega^{2}+\Delta m^{2}$.
If the limit $T_{H}\rightarrow 0$ is taken in equation (61), one obtains,
apart from a phase factor, the usual Pontecorvo oscillation formula.
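Explicitly, since $F_{H}(\omega)=\left(e^{\omega/k_{B}T_{H}}+1\right)^{-1}\rightarrow 0$ as $T_{H}\rightarrow 0$ for any $\omega>0$, only the first square root in equation (61) survives, and
$\displaystyle P^{e\rightarrow\mu}_{\omega}(m,n)\approx 2\cos^{2}\theta\sin^{2}\theta\left(1-\cos(\Delta^{-}_{\omega;m,n})\right)\ ,$
which has the Pontecorvo form with the flat space phase replaced by $\Delta^{-}_{\omega;m,n}$.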
To compute the oscillation formulae on a Schwarzschild background for an
arbitrary propagation, since exact analytical solutions are unavailable, one
has to resort to approximate solutions of the Dirac equation. In all the
applications considered, the oscillation formulae do not depend on the
helicity $s$ of the neutrinos. However, when additional complications due to
frame-dragging and the non-conservation of angular momentum arise (see, e.g.,
[33, 34]), the formulae for left-handed and right-handed neutrinos can
differ.
### IV.5 Quantum mechanical limit
It is a general feature of equations (17) that when all the quantum field
theoretical effects are negligible, the oscillation formulae are modified only
by a phase factor. Indeed, when one can neglect $\Xi_{k,s}$ in equation (17),
one immediately has $|\Lambda^{*}_{k,s}(\tau)|=1$,
($|\Lambda_{\omega,\omega^{\prime}}(\tau)|=1$ respectively, in the
non–diagonal case) for each $\tau$ from equation (III.1). Then the product
$\Lambda_{k,s}(\tau_{0})^{*}\Lambda_{k,s}(\tau)$,
($\Lambda_{\omega,\omega^{\prime}}(\tau_{0})^{*}\Lambda_{\omega,\omega^{\prime}}(\tau)$
respectively) is just a phase $e^{i\varphi(\tau_{0},\tau)}$ and the net effect
is a phase shift with respect to the Pontecorvo oscillation formulae,
consistent with previous results obtained in a heuristic treatment [16].
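Spelled out: with $\Xi_{k,s}\approx 0$, the condition $|\Lambda_{k,s}(\tau)|=1$ allows one to write $\Lambda_{k,s}(\tau)=e^{i\chi_{k,s}(\tau)}$, so that
$\Lambda^{*}_{k,s}(\tau_{0})\Lambda_{k,s}(\tau)=e^{i\left[\chi_{k,s}(\tau)-\chi_{k,s}(\tau_{0})\right]}\equiv e^{i\varphi(\tau_{0},\tau)}\ ,$
and the only imprint of the gravitational background on the oscillation formulae is the phase $\varphi(\tau_{0},\tau)$.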
Of course, the explicit value of the phase $\varphi(\tau_{0},\tau)$ depends on
the metric and the surfaces $\tau$ considered, as well as on the mode
expansion chosen for the mass fields. When the gravitational fields are weak
enough, the phase can be computed by means of geometrical optics
considerations [16, 33].
## V Conclusions
We have developed a quantum field theoretical approach to the vacuum neutrino
oscillations in curved space, discussing the transition probabilities, and
their behaviour under changes of mass representation. We have analyzed the
non–trivial interplay between quantum field mixing and field quantization in
curved space, and have found that the former has a remarkably richer structure
when compared to its flat space counterpart. In particular, the formalism has
to be versatile enough to deal with the existence of infinitely many unitarily
inequivalent representations of the mass fields, which is to say that no
preferred notion of particle generally exists. In the spirit of general
covariance, we have determined the effect on flavor fields of a shift in the
expansion of the mass fields, and established under which conditions the
resulting transition probabilities are the same. We have then computed the
oscillation formulae in three example metrics, including two FLRW spacetimes
and the static Schwarzschild black hole. In the latter we have found that the
Hawking radiation affects, although very slightly, the oscillations for
neutrinos propagating from the asymptotic past infinity to the asymptotic
future infinity. As a general result, it is found that when all the quantum
field theoretical effects on neutrino mixing can be neglected, the
gravitational background only affects the phase of the oscillations,
consistently with previous analyses.
## Acknowledgements
Partial financial support from MIUR and INFN is acknowledged. A.C. and G.L.
also acknowledge the COST Action CA1511 Cosmology and Astrophysics Network for
Theoretical Advances and Training Actions (CANTATA).
## References
* (1) L. M. Brown, Physics Today 31, 9, 23 (1978).
* (2) Q.R. Ahmad et al. (SNO Collaboration), Phys. Rev. Lett. 87 (7): 071301 (2001).
* (3) Y. Fukuda et al. (Super-Kamiokande Collaboration), Phys. Rev. Lett. 81 (8), 1562 - 1567 (1998).
* (4) S. M. Bilenky and B. Pontecorvo, Phys. Rep. 41, 225 (1978).
* (5) S. M. Bilenky and S. T. Petcov, Rev. Mod. Phys. 59, 671 (1987).
* (6) R. N. Mohapatra and P. B. Pal, Massive neutrinos in physics and astrophysics, Lecture Notes in Physics (3rd ed.) 72, (2007).
* (7) Y. Cai et al. From the Trees to the Forest: A Review of Radiative Neutrino Mass Models, Front. Phys. 5, 10.3389/fphy.2017.00063 (2017).
* (8) S. T. Petcov, Adv. High Energy Phys., 852987 (2013).
* (9) A. Franckowiak, J. Phys.: Conf. Ser. 888, 012009 (2017).
* (10) W. Buchmüller, Neutrinos, Grand Unification and Leptogenesis, arXiv:hep-ph/0204288v2 (2002).
* (11) M. Tanabashi et al. (Particle Data Group), Neutrinos in Cosmology, Phys. Rev. D 98, 030001 (2018).
* (12) M.G. Betti et al. (PTOLEMY collaboration), JCAP 07, 047 (2019).
* (13) D. B. Kaplan, A. E. Nelson, N. Weiner, Phys. Rev. Lett. 93, 091801 (2004); R. Fardon, A. E. Nelson, N. Weiner, JCAP 10, 005 (2004).
* (14) Y. Grossman and H. J. Lipkin, Phys. Rev. D 55, 2760 (1997).
* (15) D. Piriz, M. Roy and J. Wudka, Phys. Rev. D 54, 1587 (1996).
* (16) C. Y. Cardall and G. M. Fuller, Phys. Rev. D 55, 7960 (1997).
* (17) N. Birrell and P. Davies, Quantum Fields in Curved Space (Cambridge Monographs on Mathematical Physics)., Cambridge: Cambridge University Press, doi:10.1017/CBO9780511622632 (1982).
* (18) R. Wald, Quantum Field Theory in Curved Spacetime and Black Hole Thermodynamics (Chicago Lectures in Physics), The University of Chicago Press, ISBN: 9780226870274 (1994).
* (19) Here the index $k$ refers to a generic set of quantum numbers labelling the modes. The sums $\sum_{k}$ are understood as integrals if $k$ is a continuous index.
* (20) M. Blasone, A. Capolupo, G. Vitiello, Phys. Rev. D 66, 025033 (2002) and references therein; A. Capolupo, I. De Martino, G. Lambiase and A. Stabile, Phys. Lett. B 790, 427 (2019); K. Fujii, C. Habe and T. Yabuki, Phys. Rev. D 59, 113003 (1999); Phys. Rev. D 64, 013011 (2001); K.C. Hannabuss and D.C. Latimer, J. Phys. A 33, 1369 (2000); J. Phys. A 36, L69 (2003); C.R Ji and Y. Mishchenko, Phys. Rev. D 65, 096015 (2002); Ann. Phys. 315, 488 (2005); M. Blasone, A. Capolupo, O. Romei and G. Vitiello, Phys. Rev. D 63, 125015 (2001); A. Capolupo, C. R. Ji , Y. Mishchenko and G. Vitiello, Phys. Lett. B 594, 135 (2004); M. Blasone, A. Capolupo, F. Terranova, G. Vitiello, Phys. Rev. D 72, 013003 (2005).
* (21) A. Capolupo, Adv. High Energy Phys. 2016, 8089142, 10 (2016); Adv. High Energy Phys. 2018, 9840351 (2018); A. Capolupo, S. Capozziello and G. Vitiello, Phys. Lett. A 373, 601 (2009); Phys. Lett. A 363, 53 (2007); Int. J. Mod. Phys. A 23, 4979 (2008); M. Blasone, A. Capolupo, S. Capozziello, G. Vitiello, Nucl. Instrum. Meth. A 588, 272 (2008); M. Blasone, A. Capolupo, G. Vitiello, Prog. Part. Nucl. Phys. 64, 451 (2010); M. Blasone, A. Capolupo, S. Capozziello, S. Carloni and G. Vitiello, Phys. Lett. A 323, 182 (2004); A. Capolupo, M. Di Mauro, A. Iorio, Phys. Lett. A 375, 3415 (2011).
* (22) A. Capolupo, S. M. Giampaolo, G. Lambiase and A. Quaranta, arXiv:1912.03332 [hep-ph].
* (23) The helicity index can always be decoupled by choosing eigenstates of the helicity operator.
* (24) This does _not_ imply that $Q_{e}$ and $Q_{\mu}$ are conserved separately.
* (25) L. Biedenharn, J. Louck, P. Carruthers, Angular Momentum in Quantum Physics: Theory and Application (Encyclopedia of Mathematics and its Applications), Cambridge: Cambridge University Press., doi:10.1017/CBO9780511759888 (1984).
* (26) B. Thaller, Advanced Visual Quantum Mechanics, Springer–Verlag New York, doi:10.1007/b138654 (2005).
* (27) M. Abramowitz and I. A. Stegun, Handbook of Mathematical Functions, Dover Publications Inc. New York, ISBN: 9780486612720 (1965).
* (28) In this case the label $\omega$ refers to the energies of the reference mass field, that is $\psi_{1}$ for $\rho=e$ and $\psi_{2}$ for $\rho=\mu$. This _must not_ be interpreted as the energy of neutrinos with flavor $\rho$, since by definition they have no definite energy. Indeed, the non–diagonality of (34) shows that it is impossible to classify flavor neutrinos with a single energy $\omega$.
* (29) A. O. Barut, H. Duru, Phys. Rev. D 36, 3705 (1987).
* (30) We use here two discrete families of surfaces, but also continuous families would do the trick, as long as the same limiting process is involved ($\Sigma^{\pm}\rightarrow\mathcal{I}^{\pm}$).
* (31) R. Kerner, R. B. Mann, Class.Quant.Grav. 25, 095014 (2008).
* (32) S. W. Hawking, Particle creation by black holes, Comm. Math. Phys. 43 , no. 3, 199–220 (1975).
* (33) G. Lambiase, G. Papini, Raffaele Punzi, G. Scarpetta, Phys.Rev. D 71, 073011 (2005).
* (34) G. Lambiase, Mon.Not.Roy.Astron.Soc. 362, 867-871 (2005).
# High Performance Code Generation in MLIR: An Early Case Study with GEMM
Uday Bondhugula
Dept of CSA, Indian Institute of Science & PolyMage Labs
Bengaluru, Karnataka, India.
###### Abstract
This article is primarily meant to present an early case study on using MLIR
[1], a new compiler intermediate representation infrastructure, for high-
performance code generation. Aspects of MLIR covered in particular include
memrefs, the affine dialect, and polyhedral utilities and pass infrastructure
surrounding those. This article is also aimed at showing the role compiler
infrastructure could play in generating code that is competitive with highly
tuned manually developed libraries, albeit in a more modular, reusable, and
automatable way.
_Keywords:_ MLIR $\cdot$ GEMM $\cdot$ polyhedral $\cdot$ code generation
## 1 Introduction, Motivation, and Setup
It is well-known that the computationally demanding routines driving the
state-of-the-art in domains such as dense linear algebra and deep learning are
all based on carefully hand-optimized and tuned libraries developed with
significant engineering effort and insights over time. The techniques and the
level of attention necessary to optimize for near-peak performance on a target
architecture makes the process often inaccessible to those without a deep
knowledge of code optimization and low-level interactions with hardware. In
many cases, the best performing code comes from the hardware vendors
themselves. Yet, fortunately, there are published works such as those of Goto
and Van de Geijn [2] that have described in great detail how such close to
peak performance could be obtained. Subsequent works [3] made the process more
modular, reusable, and accessible to a wider audience, having translated to an
open-source project FLAME/BLIS.
This article alludes to the fact that this process could potentially be made
even more modular, automatable and systematic — by hosting it on top of a real
compiler IR that has the necessary abstractions and infrastructure. This
completely avoids the need to write any code by hand — be it C or inline
assembly. The IR infrastructure that will be used here is of course, MLIR [1,
4] and we will try to recreate the OpenBLAS/BLIS’ optimization approach in a
compiler-oriented manner using MLIR. MLIR is a new intermediate representation
designed to provide a unified, modular, and extensible infrastructure to
progressively lower dataflow compute graphs, through loop nests potentially,
to high-performance target-specific code. While MLIR shares similarities with
traditional CFG-based three-address SSA representations (including LLVM IR and
Swift intermediate language), it also introduces notions from the polyhedral
compiler framework as first class concepts to allow powerful analysis and
transformation in the presence of loop nests and multi-dimensional arrays. It
is thus a useful infrastructure suitable for developing new compilers, whether
general-purpose or domain-specific, and whether targeting general-purpose
multicores or specialized accelerator chips.
### 1.1 Setup
We are going to be using an Intel Skylake-based high-end desktop/workstation
processor for all experimentation. The processor is an Intel(R) Core(TM)
i7-8700K CPU @ 3.70GHz, which is actually based on Coffee Lake, a process
refinement of Skylake, and thus with the same core/performance
characteristics. Note that although this is based on the Skylake
microarchitecture, it is not a Skylake-X: as such, its vectors are not AVX-512
but just AVX2 (256-bit). It has a 32 KiB L1 data cache and a 256 KiB L2
unified cache per core, and a 12 MiB shared L3 cache (2 MiB per core).
Before we start, we are going to compute the machine peak on this processor.
This CPU supports AVX2 and has two FMA units that can operate on 256-bit wide
vectors. As such, one can perform $2\times 4\times 2$ double-precision
floating-point multiplies and adds per cycle, which at the max turbo boost
frequency of 4.7 GHz translates to 75.2 GFLOPS per core. (Frequency scaling
can impact how this figure is calculated, but we will ignore that for now.)
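Spelling that arithmetic out:
$2\ \textrm{FMA units}\times 4\ \textrm{f64 lanes per 256-bit vector}\times 2\ \textrm{flops per FMA}=16\ \textrm{flops/cycle},$ and $16\times 4.7\times 10^{9}\ \textrm{cycles/s}=75.2\ \textrm{GFLOPS}.$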
### 1.2 Running example: DGEMM
Matrix-matrix multiplication is often an excellent routine for a tutorial or
exercise in code optimization because a number of standard practices could be
demonstrated using it. It is also one of the most important ones for many
domains and thus often the first thing one builds an optimized implementation
for when rolling out a new architecture.
### 1.3 Starters
Let us take a 2088x2048 double precision matrix-matrix multiplication, and
benchmark it in various ways. We choose 2088 here since it is divisible by
both 4 and 3: this makes it easier to read some of the snippets given that we
will be exploring register tiling by sizes that are multiples either of 2, 3,
or 4.
#### 1.3.1 Compilers with -O3, Pluto
Let us see what current compilers do if this is written as a naive loop nest
in C. These results can be reproduced with the setup under examples/matmul/ in
Pluto. (We use -ffast-math along with -O3 to be fair, given the other things
we are going to compare this with.)
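For reference, the kernel in question is the textbook triple loop over $i,j,k$, along these lines (a sketch; the exact source is matmul.c under Pluto's examples/matmul/, which may differ in details):
⬇
// Naive matmul in C: C += A * B (ijk loop order).
for (int i = 0; i < N; i++)
  for (int j = 0; j < N; j++)
    for (int k = 0; k < N; k++)
      C[i][j] += A[i][k] * B[k][j];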
⬇
$ clang -v
clang version 8.0.0 (Fedora 8.0.0-3.fc30)
Target: x86_64-unknown-linux-gnu
$ clang -O3 -ffast-math -DTIME matmul.c -o matmul.clang -lm
$ ./matmul.clang
36.484855s
0.47 GFLOPS
$ gcc --version
gcc (GCC) 9.2.1 20190827 (Red Hat 9.2.1-1)
$ gcc -O3 -ffast-math -DTIME matmul.c -o matmul.gcc -lm
$ ./matmul.gcc
3.788421s
4.53 GFLOPS
Disappointingly, clang and GCC are at 0.6% and 6% of the machine peak
respectively! But general-purpose compilers are not expected or meant to get
anywhere near the peak; programmers instead typically use highly tuned
libraries. We are going to get to that shortly, but a quick note on why Clang
performs poorly here. Clang only performs innermost-loop vectorization, which
is really a bad choice here (due to poor locality) when that is the only thing
being done. Even a textbook loop interchange from $ijk\rightarrow ikj$
improves Clang performance by about 10x, because the right loop then sits at
the innermost level for both locality and auto-vectorization; a sketch of the
interchanged version follows. Clang does not perform any unroll-and-jam here
either.
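For concreteness, the interchanged $ikj$ version looks roughly like this (a sketch, not Pluto's exact code): the scalar A[i][k] is invariant in the inner loop, while B and C are accessed with unit stride, making the inner loop both cache-friendly and trivially vectorizable.
⬇
// ikj loop order: j innermost.
for (int i = 0; i < N; i++)
  for (int k = 0; k < N; k++) {
    double a = A[i][k];          // invariant in the inner loop
    for (int j = 0; j < N; j++)
      C[i][j] += a * B[k][j];    // unit-stride accesses on B and C
  }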
While we are on this, let us also see what Pluto, a polyhedral
source-to-source translator, does on this. The run below shows that it is at
about 25% of the machine peak, which is better but still pretty unsatisfying.
⬇
$ make tiled
$ ./tiled
0.889177s
19.32 GFLOPS
#### 1.3.2 Libraries
Now, let us see what the fastest libraries in the world do on matmul. Again,
the setup in Pluto allows experimenting with OpenBLAS, BLIS, or MKL quickly.
⬇
$ make mkl blis openblas
$ ./blis
63.29 GFLOPS
$ ./openblas
0.259527s
67.49 GFLOPS
$ ./mkl
0.273492s
67.70 GFLOPS
MKL and OpenBLAS are nearly at 92% of the peak (68/75.2), while BLIS is close
at 85% of the peak. We will keep these in mind as we try out how to build
high-performance versions with MLIR. Note: we haven’t been very rigorous with
the above experimentation in terms of performing warmup runs, taking an
average across repetitions etc., but our goal is to just get a reasonably
accurate picture for now.
## 2 Using MLIR to Optimize GEMM
There is currently no C/C++ or other language/programming-model frontend that
emits MLIR. The way to get something written in MLIR running on CPUs is
through mlir-cpu-runner, which takes MLIR as input and JIT-compiles and
executes it. As part of this process, optimized MLIR is converted to LLVM IR,
which then goes through LLVM's optimization pipeline to generate target code,
all of this within mlir-cpu-runner.
### 2.1 Affine Ops
Here is how a simple canonical matrix-matrix multiplication looks in MLIR; the
structure is close to its form in C.
⬇
// C += A * B.
func @matmul(%A: memref<2048x2048xf64>, %B: memref<2048x2048xf64>, %C:
memref<2048x2048xf64>) {
affine.for %arg3 = 0 to 2048 {
affine.for %arg4 = 0 to 2048 {
affine.for %arg5 = 0 to 2048 {
%a = affine.load %A[%arg3, %arg5] : memref<2048x2048xf64>
%b = affine.load %B[%arg5, %arg4] : memref<2048x2048xf64>
%ci = affine.load %C[%arg3, %arg4] : memref<2048x2048xf64>
%p = mulf %a, %b : f64
%co = addf %ci, %p : f64
affine.store %co, %C[%arg3, %arg4] : memref<2048x2048xf64>
}
}
}
return
}
func @main() {
%A = alloc() : memref<2048x2048xf64>
%B = alloc() : memref<2048x2048xf64>
%C = alloc() : memref<2048x2048xf64>
%cf1 = constant 1.00000e+00 : f64
linalg.fill(%A, %cf1) : memref<2048x2048xf64>, f64
linalg.fill(%B, %cf1) : memref<2048x2048xf64>, f64
linalg.fill(%C, %cf1) : memref<2048x2048xf64>, f64
call @matmul(%A, %B, %C) : (memref<2048x2048xf64>, memref<2048x2048xf64>,
memref<2048x2048xf64>) -> ()
call @print_memref_2d_f64(%C): (memref<2048x2048xf64>) -> ()
return
}
func @print_memref_2d_f64(memref<2048x2048xf64>)
affine.for and affine.load/store are ops from the Affine dialect [5] used
above. The IR above also uses a helper op from the LinAlg dialect to
initialize matrices. The rest like (addf, mulf, alloc) are all ops from MLIR’s
standard dialect.
### 2.2 MemRefs
A memref [6] is the in-memory representation of a tensor in MLIR. Depending on
its layout map, it could refer to a potentially non-contiguous sub-region of
the underlying buffer. All memrefs in the above snippet have the default
identity layout map $(d0,d1)\rightarrow(d0,d1)$, which corresponds to a
row-major layout in memory. 2048 x 2048 is the shape of the memref, where each
of its elements is an f64 (double). There is other information, such as an
affine layout map (a mapping from its logical coordinate space to physical
memory), that is elided when it is the identity map (the default). We will
look at an example of this a little later (Section 2.9). The only way to
access the elements of a memref is through load and store operations.
### 2.3 A high-level domain-specific op that can expand
MLIR is extensible in several ways, and one of them is the ease of
adding/creating new ops at the level of abstraction we desire. One way of
classifying ops, or IR in general, in MLIR is into high-level, mid-level, and
low-level, although there is no clear-cut distinction: lowering can be
progressive, and one can have a mix. High-level ops operate on tensor and
memref types themselves, i.e., their inputs (operands) and results are of
those types. affine.for and affine.if are mid-level ops: these have nested
blocks (which are themselves lists of ops) and a particular structure to them.
Low-level ops are at the same level as instructions in LLVM, and typically
correspond to either one instruction or a flat sequence of instructions
matching a target (three-address-code style, in other words).
If one is building a DSL and knows that they need or have a matmul, one could
just define an op that does the matmul taking memrefs as operands. For the
purpose of this article, we will create a hop.matmul op. An operation in MLIR
has inputs, results, and attributes (ones with regions have region arguments
as well).
⬇
hop.matmul %A, %B, %C {some_attr = 96, another_attr = 4}
: (memref<2088x2048xf64>, memref<2048x2048xf64>, memref<2088x2048xf64>)
hop.matmul above is an operation that operates on three memrefs, %A, %B, and
%C, which are the LHS, the RHS, and the output corresponding to a matrix-
matrix multiplication. This op has zero results: since these are memrefs, the
underlying memory can be updated, but the memref itself is an SSA value and
thus can be defined only once. In this case, %C will be both read and written
by this heavy op. some_attr is an attribute of the op, which like with LLVM
metadata is meant to be a constant of one of the numerous types available
including some polyhedral types like affine maps and integer sets. Such
polyhedral attributes could be used to encode powerful information and we will
see an example of this later.
For the purpose of this article, we create an -hopt pass that is able to
expand out hop.matmul into the naive 3-d loop nest we showed further above.
Let us just execute this with MLIR without any optimization to see where we
are starting our journey from.
⬇
// This is where we start. No optimizations performed in MLIR besides
canonicalization.
$ mlir-opt -hopt -convert-linalg-to-loops -lower-affine -convert-std-to-llvm dgemm.mlir |
mlir-cpu-runner -O3 -e main -entry-point-result=void -shared-
libs=lib/libmlir_runner_utils.so
Compilation time: 0.015995s
0.558177 GFLOPS
This is pretty close to the execution time of ‘clang -O3’ that we saw above,
which makes sense, but this is equally slow (less than 1% of the machine
peak)! But we have not really made use of or run any of the optimization
infrastructure in MLIR. The -hopt here just lowers this op into the canonical
3-d loop nest we showed further above, which is then lowered to LLVM and goes
through the same LLVM -O3 pipeline that clang went through.
### 2.4 Tiling for caches: OpenBLAS/BLIS tiling as a polyhedral schedule
Tiling and blocking have been studied for several decades, and it is now known
how matrix-matrix multiplication should be tiled for high performance on
widely used architectures. We are going to use the same cache tiling strategy
used in OpenBLAS and BLIS. [This
scheme](https://dl.acm.org/citation.cfm?id=1356053) is a very clever one that
carefully tiles for multiple levels of the cache in a way that exploits reuse
for each of the matrices involved at one level or another: the objective is to
make sure the vector FMA pipeline remains full and does not wait on loads. It
does not blindly tile the nest multiple times to exploit reuse for all
matrices at L1, L2, and L3, as most polyhedral tools would.
There are various ways of describing OpenBLAS’s/BLIS’ tiling scheme and they
are explained in excellent detail with illustration in the papers by Van Zee
and van de Geijn on BLIS [3], and later with analysis and modeling by Low et
al. 2016 [7]. But here, we will describe the tiling transformation compactly
via polyhedral schedules. For an introduction to polyhedral schedules, see
here [8] or the material on polyhedral.info. In particular, the tiling and
corresponding intra-tile loop orders used in the OpenBLAS/BLIS approach can be
expressed as the following polyhedral schedule (a multi-dimensional affine map
/ transformation):
$(i,j,k)\rightarrow(j/N_{C},k/K_{C},i/M_{C},j/N_{R},i/M_{R},k\ \textrm{ mod }\
K_{C},j\ \textrm{ mod }\ N_{R},i\textrm{ mod }M_{R}),$
where ‘/’ is a floor division. The innermost two loops after applying the
above schedule are meant to be fully unrolled leading to an $M_{R}$ x $N_{R}$
loop body. Ignoring the last level of tiling (for the L3 cache), this becomes:
$(i,j,k)\rightarrow(k/K_{C},i/M_{C},j/N_{R},i/M_{R},k\textrm{ mod
}K_{C},j\textrm{ mod }N_{R},i\textrm{ mod }M_{R}).$
Figure 1 from the BLIS works is a great illustration of its optimization
strategy:
Figure 1: Tiling strategy of OpenBLAS/BLIS. Figure courtesy: Field Van Zee
and Robert van de Geijn
The parameters ($M_{C}$, $K_{C}$) are chosen such that a tile of the LHS
matrix (%A) of size $M_{C}$ x $K_{C}$ is reused in the L2 cache, an RHS matrix
tile of size $K_{C}$ x $N_{R}$ is reused in the L1 cache, while a register
tile of the output matrix $M_{R}$ x $N_{R}$ is reused in just the registers.
There are several other considerations in choosing these parameters if one is
interested in analytical modeling; those are described in the work of Low et
al. [7]. The last three dimensions of the schedule are multiplying a panel of
%A of size $M_{R}$ x $K_{C}$ with a panel of %B of size $K_{C}$ x $N_{R}$.
Note that %C is being both read and written, and intuitively, it makes sense
to exploit reuse for it in the registers while running a large enough k intra-
tile loop at the innermost level. In the schedule above, the innermost two
dimensions (j, i) are fully unrolled, i.e., you get an unrolled body of size
$M_{R}$ x $N_{R}$ times the original one. For readers seeking more detail in
general on this and on programming for high performance, please see course
notes [9] from the BLIS team members.
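As a reading aid, here is the loop structure this schedule corresponds to, written as C-like pseudocode (a sketch: the problem sizes are assumed to be multiples of the tile sizes, and clean-up code is elided):
⬇
// Loop nest implied by the schedule
// (k/K_C, i/M_C, j/N_R, i/M_R, k mod K_C, j mod N_R, i mod M_R).
for (int kc = 0; kc < K; kc += K_C)                  // k floordiv K_C
  for (int ic = 0; ic < M; ic += M_C)                // i floordiv M_C: M_C x K_C tile of A in L2
    for (int jr = 0; jr < N; jr += N_R)              // j floordiv N_R: K_C x N_R panel of B in L1
      for (int ir = ic; ir < ic + M_C; ir += M_R)    // i floordiv M_R
        for (int k = kc; k < kc + K_C; k++)          // k mod K_C
          for (int j = jr; j < jr + N_R; j++)        // j mod N_R: fully unrolled
            for (int i = ir; i < ir + M_R; i++)      // i mod M_R: fully unrolled M_R x N_R body
              C[i][j] += A[i][k] * B[k][j];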
### 2.5 Tiling in MLIR
There are two ways of achieving this tiling in MLIR: one is by calling
[mlir::tile] (which is also what the loop tiling pass in MLIR uses) and then
performing the desired loop interchange via mlir::interchangeLoops. The other
is by implementing a higher-order polyhedral (HOP) approach based on domains
and schedules. We use the latter approach here since MLIR's affine analysis
machinery does not yet have the necessary simplification to get rid of certain
redundant bounds resulting from tiled code generation in advanced cases. The
HOP approach depends on an external library,
[ISL](http://isl.gforge.inria.fr), and we implement it as part of the -hopt
pass. We express the tiling schedule as an MLIR affine map (in fact, any
affine schedule could be used), perform the code generation via ISL, and
convert the ISL AST back to MLIR. We will now use $M_{C}$, $K_{C}$, $M_{R}$,
$N_{R}$ as attributes on the hop.matmul op.
⬇
hop.matmul %A, %B, %C { M_C = 64 : i32, K_C = 256 : i32, M_R = 4 : i32, N_R =
8 : i32}
: (memref<2088x2048xf64>, memref<2048x2048xf64>, memref<2048x2048xf64>)
We have used $K_{C}$ = 256, $M_{C}$ = 64, $M_{R}$ = 4, $N_{R}$ = 8 as a
starting point: these can be analyzed to be good values given the cache size
constraints described earlier. This is functionally equivalent to using the
following schedule with HOP and fully unrolling the last two dimensions (d1
and d0, which will have trip counts 8 and 4 respectively):
⬇
hop.matmul %A, %B, %C {
schedule = (d0, d1, d2) -> (d2 floordiv 256, d0 floordiv 64, d1 floordiv 8, d0
floordiv 4, d2, d1, d0)
} : (memref<2088x2048xf64>, memref<2048x2048xf64>, memref<2088x2048xf64>)
In order to evaluate just cache tiling for now, we will use the following
schedule (dropping the tiling for registers):
⬇
hop.matmul %A, %B, %C {
schedule = (d0, d1, d2) -> (d2 floordiv 256, d0 floordiv 64, d1, d0, d2)
} : (memref<2088x2048xf64>, memref<2048x2048xf64>, memref<2088x2048xf64>)
Now, with the above tiling, we get this loop nest:
⬇
…
#map8 = (d0) -> (d0 * 16)
#map9 = (d0) -> (522, d0 * 16 + 16)
…
func @matmul(%A: memref<2088x2048xf64>, %B: memref<2048x2048xf64>, %C:
memref<2088x2048xf64>) {
%c0 = constant 0 : index
affine.for %arg3 = 0 to 8 {
affine.for %arg4 = 0 to 33 {
affine.for %arg5 = 0 to 2048 {
affine.for %arg6 = #map7(%arg4) to min #map8(%arg4) {
%0 = alloca() : memref<1xf64>
%1 = affine.load %C[%arg6, %arg5] : memref<2088x2048xf64>
affine.store %1, %0[%c0] : memref<1xf64>
affine.for %arg7 = 0 to 256 {
%3 = affine.load %A[%arg6, %arg3 * 256 + %arg7] : memref<2088x2048xf64>
%4 = affine.load %B[%arg3 * 256 + %arg7, %arg5] : memref<2048x2048xf64>
%5 = affine.load %0[0] : memref<1xf64>
%6 = mulf %3, %4 : f64
%7 = addf %5, %6 : f64
affine.store %7, %0[0] : memref<1xf64>
}
%2 = affine.load %0[%c0] : memref<1xf64>
affine.store %2, %C[%arg6, %arg5] : memref<2088x2048xf64>
}
}
}
}
return
}
Note that the invariant load on %B has been hoisted out. When we execute this:
⬇
// With tiling.
$ mlir-opt -hopt -convert-linalg-to-loops -lower-affine -convert-std-to-llvm dgemm.mlir |
mlir-cpu-runner -O3 -e main -entry-point-result=void -shared-
libs=lib/libmlir_runner_utils.so
Compilation time: 0.0443082s
1.6012 GFLOPS
This is nearly a 3x improvement in performance, but still nowhere close to the
machine peak or the performance of MKL, OpenBLAS or BLIS!
### 2.6 Explicit Copying or Packing
Whenever a code with multi-dimensional arrays exhibits reuse, explicit copying
or packing is a commonly used technique in HPC: it first copies or packs the
accessed data into contiguous buffers, and the computation then indexes those
buffers. Such copying reduces or nearly eliminates conflict misses and TLB
misses, and improves hardware prefetching performance (since the accesses
lead to fewer prefetch streams). More importantly, in conjunction with tiling,
it provides the intended benefits of tiling: tiling is performed so that reuse
is exploited in multiple directions when the data accessed fits in a higher
level of the memory hierarchy, but the data accessed for a tile is no longer
contiguous in the original matrix/tensor, leading to conflict misses, TLB
misses, and more prefetch streams, which take away much of the gain even when
there is high reuse. There is also a choice as to the depth at which one would
like to introduce the copies, but this is quite clear for the tiling strategy
we are using.
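As a minimal illustration of what this means for the LHS tile here (the names Abuf, lda, ic, and kc are hypothetical), an $M_{C}$ x $K_{C}$ tile of A is copied once into a contiguous buffer, and the compute loops then index Abuf instead of A:
⬇
// Pack the M_C x K_C tile of A starting at (ic, kc) into a contiguous buffer.
for (int i = 0; i < M_C; i++)
  for (int k = 0; k < K_C; k++)
    Abuf[i * K_C + k] = A[(ic + i) * lda + (kc + k)];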
The packing optimization is something that one would really wish a compiler
performed automatically. MLIR has a pass -affine-data-copy-generate and a
utility mlir::affineDataCopyGenerate that can perform explicit copying or
packing. The utility can also be called on a specific memref to perform the
packing at a specific loop depth, and with a number of other options
(including DMA support).
With the tiling approach we are using here, packing is performed for the LHS
matrix %A right under the second loop (the $i\textrm{ floordiv }M_{C}$
dimension in the schedule). For %B (the RHS matrix), the copying is performed
right under $j\textrm{ floordiv }N_{R}$ (the third dimension). Figure 2
illustrates the packing needed.
Figure 2: The one in green is the LHS to be packed into a buffer that will
reside in L2, and the RHS in blue will reside in L1. (Figure courtesy: Field
Van Zee and Robert van de Geijn).
To perform the packing automatically with MLIR, instead of calling the
-affine-data-copy-generate pass, we will just use the underlying utility
[mlir::affineDataCopyGenerate] to place copies at the right depths. For the
purpose of this article, we enabled this under the -hopt-copy option. With
-hopt-copy, now the nest looks like:
⬇
affine.for %arg3 = 0 to 8 {
affine.for %arg4 = 0 to 32 {
%0 = alloc() : memref<64x256xf64>
// Packing %A into a 64x256 buffer.
affine.for %arg5 = #map6(%arg4) to #map7(%arg4) {
affine.for %arg6 = #map4(%arg3) to #map5(%arg3) {
%1 = affine.load %arg0[%arg5, %arg6] : memref<2048x2048xf64>
affine.store %1, %0[%arg4 * -64 + %arg5, %arg3 * -256 + %arg6] :
memref<64x256xf64>
}
}
affine.for %arg5 = 0 to 256 {
%1 = alloc() : memref<256x8xf64>
// Packing %B into a 256x8 buffer.
affine.for %arg6 = #map4(%arg3) to #map5(%arg3) {
affine.for %arg7 = #map9(%arg5) to #map10(%arg5) {
%2 = affine.load %arg1[%arg6, %arg7] : memref<2048x2048xf64>
affine.store %2, %1[%arg3 * -256 + %arg6, %arg5 * -8 + %arg7] :
memref<256x8xf64>
}
}
affine.for %arg6 = #map16(%arg4) to #map17(%arg4) {
// This is multiplying a packed 64x256 LHS panel with a packed 256x8 RHS
panel.
affine.for %arg7 = 0 to 256 {
affine.for %arg8 = 0 to 8 {
affine.for %arg9 = 0 to 4 {
%2 = affine.load %0[%arg4 * -64 + %arg6 * 4 + %arg9, %arg7] :
memref<64x256xf64>
%3 = affine.load %1[%arg7, %arg8] : memref<256x8xf64>
%4 = affine.load %arg2[%arg6 * 4 + %arg9, %arg5 * 8 + %arg8] :
memref<2048x2048xf64>
%5 = mulf %2, %3 : f64
%6 = addf %4, %5 : f64
affine.store %6, %arg2[%arg6 * 4 + %arg9, %arg5 * 8 + %arg8] :
memref<2048x2048xf64>
}
}
}
}
dealloc %1 : memref<256x8xf64>
}
dealloc %0 : memref<64x256xf64>
}
}
⬇
// With tiling and packing.
$ mlir-opt -hopt -hopt-copy -convert-linalg-to-loops -lower-affine -convert-
std-to-llvm
dgemm.mlir | mlir-cpu-runner -O3 -e main -entry-point-result=void
-shared-libs=lib/libmlir_runner_utils.so
Compilation time: 0.0506358s
2.38624 GFLOPS
This is nearly a 1.5x improvement in performance, but in absolute terms still
pretty bad. On a general note, the benefit of any optimization often depends
on how well optimized the code is in other respects before the optimization is
applied, even more so in this case. It is important to be mindful of this in
order not to draw incorrect conclusions about the relative benefits of each
individual transformation at this point. For example, the benefit of cache
tiling on a code where unroll-and-jam or register tiling has already been
performed is significantly lower than on a version where no optimizations were
performed. We will look at an example a little later.
### 2.7 Unroll and Jam
We will now use the complete schedule that has the $M_{R}$ and $N_{R}$
register tiling, so that an unroll-and-jam of the innermost two loops by
$N_{R}$ and $M_{R}$ respectively can be achieved; we had not actually enabled
that yet. We use the option -hopt-unroll to turn this on and off. We also run
scalar replacement in MLIR post unroll-and-jam. Besides turning reduced memref
locations into scalars (single-element memrefs) and hoisting them out, it also
eliminates redundant loads and hoists invariant loads out of loops.
While we are at unroll-and-jam, we will also add one more parameter to the
list, $K_{U}$, which will be the unroll factor for the $K_{C}$ loop (the
intra-tile loop corresponding to the k reduction dimension). We use $K_{U}$ =
4 to unroll the k loop (which will end up as the innermost loop after all
register tiling). So, we have:
⬇
hop.matmul %A, %B, %C { M_C = 64 : i32, K_C = 256 : i32, M_R = 4 : i32, N_R =
8 : i32, K_U = 4 : i32}
: (memref<2088x2048xf64>, memref<2048x2048xf64>, memref<2088x2048xf64>)
⬇
// With tiling, packing, and unroll-and-jam/unroll.
$ mlir-opt -hopt -hopt-copy -hopt-unroll -hopt-scalrep -convert-linalg-to-
loops
-lower-affine -convert-std-to-llvm dgemm.mlir | mlir-cpu-runner -O3 -e main
-entry-point-result=void -shared-libs=lib/libmlir_runner_utils.so
Compilation time: 0.11306s
23.2737 GFLOPS
A code is only as fast as its weakest link. While the cache tiling really
optimized reuse for the LHS and RHS, the strategy used by OpenBLAS and BLIS
exploits reuse for the output matrix only in registers (not in L1 nor L2). As
such, without unroll-and-jam aka register tiling, brakes had been applied. So,
this last step opened the floodgates here yielding a 10x improvement and
getting us to 23 GFLOPS.
Let us go back a step and check what would happen if we disabled packing while
performing unroll-and-jam.
⬇
// With tiling, unroll-and-jam/unroll, but no packing.
$ mlir-opt -hopt -hopt-copy=false -hopt-unroll -hopt-scalrep -convert-linalg-
to-loops
-lower-affine -convert-std-to-llvm dgemm.mlir | mlir-cpu-runner -O3 -e main
-entry-point-result=void -shared-libs=lib/libmlir_runner_utils.so
Compilation time: 0.0568249s
11.5595 GFLOPS
We lose over 2x of the performance if we do not perform packing here! And if
we had not performed the innermost loop unrolling (i.e., set $K_{U}$ = 1), we
get:
⬇
// With tiling, packing, and unroll-and-jam/unroll, but K_U = 1.
$ mlir-opt -hopt -hopt-copy -hopt-unroll -hopt-scalrep -convert-linalg-to-
loops -lower-affine
-convert-std-to-llvm dgemm.mlir | mlir-cpu-runner -O3 -e main -entry-point-result=void
-shared-libs=lib/libmlir_runner_utils.so
Compilation time: 0.0881901s
20.249 GFLOPS
All of these improvements are significant from where we started, but they are
still not nearly enough on an absolute scale: we are at about 34% of the peak
even after having reproduced a good part of the OpenBLAS/BLIS recipe! Clearly,
there is something orthogonal that all of this is missing. It looks like we
completely missed vectorization! The highly tuned library kernels that we have
set as a target use a careful selection and schedule of vector instructions in
inline assembly.
Let us see whether our code is even being vectorized reasonably under the hood
by LLVM. It’s pretty straightforward to check this. Instead of looking at the
pass output remarks, we will just look at the generated assembly. The ’-dump-
object-file -object-filename’ options of mlir-cpu-runner are useful for this
purpose.
⬇
$ mlir-opt -hopt -hopt-copy -hopt-unroll -hopt-scalrep -convert-linalg-to-
loops
-lower-affine -convert-std-to-llvm dgemm.mlir | mlir-cpu-runner -O3 -e main
-entry-point-result=void -shared-libs=lib/libmlir_runner_utils.so
-dump-object-file -object-filename=hopt.o
$ llvm-objdump -d hopt.o
…
14e5: c4 e2 a9 b9 e2 vfmadd231sd %xmm2, %xmm10, %xmm4
1856: c4 42 e5 a8 f8 vfmadd213pd %ymm8, %ymm3, %ymm15
185b: c4 e3 79 05 a9 30 ff ff ff 01 vpermilpd $1, -208(%rcx), %xmm5
1865: c4 41 7d 28 e6 vmovapd %ymm14, %ymm12
186a: c4 63 fd 01 f5 55 vpermpd $85, %ymm5, %ymm14
1870: c4 62 7d 19 c5 vbroadcastsd %xmm5, %ymm8
1875: c4 62 e5 a8 c2 vfmadd213pd %ymm2, %ymm3, %ymm8
187a: c4 42 e5 a8 f5 vfmadd213pd %ymm13, %ymm3, %ymm14
187f: c5 fd 10 b9 40 ff ff ff vmovupd -192(%rcx), %ymm7
1887: c4 e2 7d 19 b1 40 ff ff ff vbroadcastsd -192(%rcx), %ymm6
1890: c4 e3 fd 01 d7 ff vpermpd $255, %ymm7, %ymm2
1896: c4 e2 e5 a8 d0 vfmadd213pd %ymm0, %ymm3, %ymm2
189b: c5 fd 29 93 00 01 00 00 vmovapd %ymm2, 256(%rbx)
18a3: c4 e3 fd 01 ef 55 vpermpd $85, %ymm7, %ymm5
18a9: c4 63 fd 01 ef aa vpermpd $170, %ymm7, %ymm13
18af: c4 62 e5 a8 ab 80 03 00 00 vfmadd213pd 896(%rbx), %ymm3, %ymm13
18b8: c4 e2 e5 a8 e9 vfmadd213pd %ymm1, %ymm3, %ymm5
18bd: c4 c2 e5 a8 f4 vfmadd213pd %ymm12, %ymm3, %ymm6
18c2: c5 fb 10 99 60 ff ff ff vmovsd -160(%rcx), %xmm3
18ca: c5 f9 28 93 20 01 00 00 vmovapd 288(%rbx), %xmm2
18d2: c4 62 e9 b9 cb vfmadd231sd %xmm3, %xmm2, %xmm9
18d7: c4 c1 7b 10 bc f7 f0 ef ff ff vmovsd -4112(%r15,%rsi,8), %xmm7
18e1: c4 62 e1 b9 df vfmadd231sd %xmm7, %xmm3, %xmm11
18e6: c4 c1 7b 10 84 f7 f0 f7 ff ff vmovsd -2064(%r15,%rsi,8), %xmm0
18f0: c4 62 e1 b9 d0 vfmadd231sd %xmm0, %xmm3, %xmm10
18f5: c4 c1 7b 10 4c f7 f0 vmovsd -16(%r15,%rsi,8), %xmm1
18fc: c4 e2 f1 a9 dc vfmadd213sd %xmm4, %xmm1, %xmm3
1901: c5 c1 14 e2 vunpcklpd %xmm2, %xmm7, %xmm4
1905: c5 f1 14 c0 vunpcklpd %xmm0, %xmm1, %xmm0
1909: c4 e3 7d 18 cc 01 vinsertf128 $1, %xmm4, %ymm0, %ymm1
190f: c4 62 7d 19 a1 68 ff ff ff vbroadcastsd -152(%rcx), %ymm12
1918: c4 42 f5 a8 e7 vfmadd213pd %ymm15, %ymm1, %ymm12
191d: c4 e3 79 05 a1 70 ff ff ff 01 vpermilpd $1, -144(%rcx), %xmm4
1927: c4 e3 fd 01 c4 55 vpermpd $85, %ymm4, %ymm0
192d: c4 e2 7d 19 fc vbroadcastsd %xmm4, %ymm7
1932: c4 c2 f5 a8 c6 vfmadd213pd %ymm14, %ymm1, %ymm0
1937: c4 c2 f5 a8 f8 vfmadd213pd %ymm8, %ymm1, %ymm7
…
This is really not the code we would like to see in the innermost loop!
Although the code is vectorized, it is using XMM registers (128-bit) for a
good part instead of YMM ones (256-bit). More importantly, the FMAs are
regularly accompanied by loads around them and neither should this have been
necessary nor is it any good for performance! A good inner loop here is
expected to only have one load of the LHS and RHS every $N_{R}$ and $M_{R}$
FMA ops, roughly speaking, because that’s the amount of reuse we should be
getting. In addition, there should be no loads or stores for the output matrix
in the innermost loop, which happens to be the case here. And finally, it all
should have been %ymm.
It is also important to note that vectorization is often nearly useless unless
the code has been optimized for locality (i.e., we should at least be getting
data from caches), and it is in fact still not as useful unless we are reading
from registers most of the time. We will not dig any further here to figure
out what went wrong with clang/LLVM and why (this is actually due to a
sub-optimal vectorization choice)! Instead, we will see how we could get the
vectorization done in MLIR itself, where we have all the information and
opportunity.
### 2.8 Vectorization
MLIR supports vector types, including as element types for memrefs. Unlike in
LLVM, vector types in MLIR can be multi-dimensional; however, we only need 1-d
vector types here. Loop vectorization in MLIR just leads to IR that turns
memrefs of f64 into memrefs of vectors of f64, besides transforming loops/loop
bodies. A casting op that changes the elemental type of a memref and the splat
op are also needed here. In more complex cases, the view/subview ops are
needed as well. But for this benchmark, the vectorization needed is quite
straightforward: a simple outer-loop vectorization along the j loop.
The existing “super vectorizer” in MLIR is really not functional or complete
for the auto-vectorization we need. For this article, we build and use a new
loop vectorizer (enabled by -hopt-vect or via -affine-vectorize when running
separately). We introduce a new memref_shape_cast op, which is needed to
change the elemental type of a memref. Here is how the vectorized MLIR looks
if we start with the naive matmul nest.
⬇
// mlir-opt -hopt-vect
func @matmul(%A: memref<2048x2048xf64>, %B: memref<2048x2048xf64>, %C:
memref<2048x2048xf64>) {
%0 = memref_shape_cast %B : memref<2048x2048xf64> to
memref<2048x512xvector<4xf64>>
%1 = memref_shape_cast %C : memref<2048x2048xf64> to
memref<2048x512xvector<4xf64>>
affine.for %arg3 = 0 to 2048 {
affine.for %arg4 = 0 to 512 {
affine.for %arg5 = 0 to 2048 {
%2 = affine.load %A[%arg3, %arg5] : memref<2048x2048xf64>
%3 = splat %2 : vector<4xf64>
%4 = affine.load %0[%arg5, %arg4] : memref<2048x512xvector<4xf64>>
%5 = mulf %3, %4 : vector<4xf64>
%6 = affine.load %1[%arg3, %arg4] : memref<2048x512xvector<4xf64>>
%7 = addf %5, %6 : vector<4xf64>
affine.store %7, %1[%arg3, %arg4] : memref<2048x512xvector<4xf64>>
}
}
}
return
}
The above IR is generated by vectorizing along the $j$ loop, which is a
data-parallel loop (as opposed to a reduction dimension). The
memref_shape_cast op casts a memref to one of another shape, as long as the
product of the dimension
sizes all the way up to any aggregate elemental types remain the same.
memref_shape_cast actually does not exist in MLIR trunk. Also, we do not
discuss the case of trip counts not being a multiple of vector size here since
some of the infrastructure around it in MLIR is still being built. Let us look
at the performance with vectorization and with nothing else enabled.
⬇
// Just vectorization.
Compilation time: 0.016561s
1.53043 GFLOPS
#### 2.8.1 Rewind and measure with vectorization
Let us rewind and see how the vectorized code performs step by step, with
tiling, with packing, and with register tiling / unroll-and-jam.
⬇
// Just vectorization.
Compilation time: 0.016561s
1.53043 GFLOPS
# Vectorization with tiling
Compilation time: 0.0209861s
7.05307 GFLOPS
As mentioned earlier, simply vectorizing without worrying about memory
bandwidth gets us nowhere. We have now broken that barrier, and can go further
here.
⬇
// Vectorization, tiling, and packing/copying
$ mlir-opt -hopt -hopt-vect -hopt-copy -convert-linalg-to-loops -lower-affine
-convert-std-to-llvm dgemm.mlir | mlir-cpu-runner -O3 -e main -entry-point-result=void -reps=5
-shared-libs=lib/libmlir_runner_utils.so,/usr/local/lib/libblis.so
Compilation time: 0.0409529s
11.2309 GFLOPS
// Vectorization, tiling, packing, and unroll-and-jam, unrolling with MLIR
scalar replacement.
$ mlir-opt -hopt -hopt-vect -hopt-copy -hopt-unroll -hopt-scalrep -convert-
linalg-to-loops
-lower-affine -convert-std-to-llvm dgemm.mlir | mlir-cpu-runner -O3 -e main -reps=5
-entry-point-result=void -shared-libs=lib/libmlir_runner_utils.so
Compilation time: 0.0383081s
49.8336 GFLOPS
As observed earlier, once again, the last step has opened the floodgates here
yielding a 4.5x improvement and getting us to 49 GFLOPS – only 23% away from
BLIS and about 27% away from MKL or OpenBLAS.
Let us see how the MLIR looks right after vectorization, tiling, copying, and
unroll-and-jam (we will disable the $K_{C}$ loop unroll, i.e., set $K_{U}$ =
1, to make it easier to read; the unroll-and-jam of $i,j$ is still shown).
⬇
affine.for %arg3 = 0 to 8 {
affine.for %arg4 = 0 to 33 {
%2 = alloc() : memref<64x256xf64>
affine.for %arg5 = #map6(%arg4) to min #map7(%arg4) {
affine.for %arg6 = #map4(%arg3) to #map5(%arg3) {
%3 = affine.load %arg0[%arg5, %arg6] : memref<2088x2048xf64>
affine.store %3, %2[%arg4 * -64 + %arg5, %arg3 * -256 + %arg6] :
memref<64x256xf64>
}
}
affine.for %arg5 = 0 to 256 {
%3 = alloc() : memref<256x2xvector<4xf64>>
affine.for %arg6 = #map4(%arg3) to #map5(%arg3) {
affine.for %arg7 = #map9(%arg5) to #map10(%arg5) {
%4 = affine.load %0[%arg6, %arg7] : memref<2048x512xvector<4xf64>>
affine.store %4, %3[%arg3 * -256 + %arg6, %arg5 * -2 + %arg7] :
memref<256x2xvector<4xf64>>
}
}
affine.for %arg6 = #map26(%arg4) to min #map27(%arg4) {
%4 = alloca() : memref<1xvector<4xf64>>
%5 = affine.load %1[%arg6 * 4, %arg5 * 2] : memref<2088x512xvector<4xf64>>
affine.store %5, %4[0] : memref<1xvector<4xf64>>
%6 = alloca() : memref<1xvector<4xf64>>
%7 = affine.load %1[%arg6 * 4 + 1, %arg5 * 2] : memref<2088x512xvector<4xf64>>
affine.store %7, %6[0] : memref<1xvector<4xf64>>
%8 = alloca() : memref<1xvector<4xf64>>
%9 = affine.load %1[%arg6 * 4 + 2, %arg5 * 2] : memref<2088x512xvector<4xf64>>
affine.store %9, %8[0] : memref<1xvector<4xf64>>
%10 = alloca() : memref<1xvector<4xf64>>
%11 = affine.load %1[%arg6 * 4 + 3, %arg5 * 2] :
memref<2088x512xvector<4xf64>>
affine.store %11, %10[0] : memref<1xvector<4xf64>>
%12 = alloca() : memref<1xvector<4xf64>>
%13 = affine.load %1[%arg6 * 4, %arg5 * 2 + 1] :
memref<2088x512xvector<4xf64>>
affine.store %13, %12[0] : memref<1xvector<4xf64>>
%14 = alloca() : memref<1xvector<4xf64>>
%15 = affine.load %1[%arg6 * 4 + 1, %arg5 * 2 + 1] :
memref<2088x512xvector<4xf64>>
affine.store %15, %14[0] : memref<1xvector<4xf64>>
%16 = alloca() : memref<1xvector<4xf64>>
%17 = affine.load %1[%arg6 * 4 + 2, %arg5 * 2 + 1] :
memref<2088x512xvector<4xf64>>
affine.store %17, %16[0] : memref<1xvector<4xf64>>
%18 = alloca() : memref<1xvector<4xf64>>
%19 = affine.load %1[%arg6 * 4 + 3, %arg5 * 2 + 1] :
memref<2088x512xvector<4xf64>>
affine.store %19, %18[0] : memref<1xvector<4xf64>>
affine.for %arg7 = 0 to 256 {
// Unroll-and-jammed body (M_R = 4, N_R = 8 (but vectorized and hence 8/4
effectively).
%28 = affine.load %2[%arg4 * -64 + %arg6 * 4, %arg7] : memref<64x256xf64>
%29 = splat %28 : vector<4xf64>
%30 = affine.load %3[%arg7, 0] : memref<256x2xvector<4xf64>>
%31 = mulf %29, %30 : vector<4xf64>
%32 = affine.load %4[0] : memref<1xvector<4xf64>>
%33 = addf %31, %32 : vector<4xf64>
affine.store %33, %4[0] : memref<1xvector<4xf64>>
%34 = affine.load %2[%arg4 * -64 + %arg6 * 4 + 1, %arg7] : memref<64x256xf64>
%35 = splat %34 : vector<4xf64>
%36 = mulf %35, %30 : vector<4xf64>
%37 = affine.load %6[0] : memref<1xvector<4xf64>>
%38 = addf %36, %37 : vector<4xf64>
affine.store %38, %6[0] : memref<1xvector<4xf64>>
%39 = affine.load %2[%arg4 * -64 + %arg6 * 4 + 2, %arg7] : memref<64x256xf64>
%40 = splat %39 : vector<4xf64>
%41 = mulf %40, %30 : vector<4xf64>
%42 = affine.load %8[0] : memref<1xvector<4xf64>>
%43 = addf %41, %42 : vector<4xf64>
affine.store %43, %8[0] : memref<1xvector<4xf64>>
%44 = affine.load %2[%arg4 * -64 + %arg6 * 4 + 3, %arg7] : memref<64x256xf64>
%45 = splat %44 : vector<4xf64>
%46 = mulf %45, %30 : vector<4xf64>
%47 = affine.load %10[0] : memref<1xvector<4xf64>>
%48 = addf %46, %47 : vector<4xf64>
affine.store %48, %10[0] : memref<1xvector<4xf64>>
%49 = affine.load %3[%arg7, 1] : memref<256x2xvector<4xf64>>
%50 = mulf %29, %49 : vector<4xf64>
%51 = affine.load %12[0] : memref<1xvector<4xf64>>
%52 = addf %50, %51 : vector<4xf64>
affine.store %52, %12[0] : memref<1xvector<4xf64>>
%53 = mulf %35, %49 : vector<4xf64>
%54 = affine.load %14[0] : memref<1xvector<4xf64>>
%55 = addf %53, %54 : vector<4xf64>
affine.store %55, %14[0] : memref<1xvector<4xf64>>
%56 = mulf %40, %49 : vector<4xf64>
%57 = affine.load %16[0] : memref<1xvector<4xf64>>
%58 = addf %56, %57 : vector<4xf64>
affine.store %58, %16[0] : memref<1xvector<4xf64>>
%59 = mulf %45, %49 : vector<4xf64>
%60 = affine.load %18[0] : memref<1xvector<4xf64>>
%61 = addf %59, %60 : vector<4xf64>
affine.store %61, %18[0] : memref<1xvector<4xf64>>
}
%20 = affine.load %18[0] : memref<1xvector<4xf64>>
affine.store %20, %1[%arg6 * 4 + 3, %arg5 * 2 + 1] :
memref<2088x512xvector<4xf64>>
%21 = affine.load %16[0] : memref<1xvector<4xf64>>
affine.store %21, %1[%arg6 * 4 + 2, %arg5 * 2 + 1] :
memref<2088x512xvector<4xf64>>
%22 = affine.load %14[0] : memref<1xvector<4xf64>>
affine.store %22, %1[%arg6 * 4 + 1, %arg5 * 2 + 1] :
memref<2088x512xvector<4xf64>>
%23 = affine.load %12[0] : memref<1xvector<4xf64>>
affine.store %23, %1[%arg6 * 4, %arg5 * 2 + 1] :
memref<2088x512xvector<4xf64>>
%24 = affine.load %10[0] : memref<1xvector<4xf64>>
affine.store %24, %1[%arg6 * 4 + 3, %arg5 * 2] :
memref<2088x512xvector<4xf64>>
%25 = affine.load %8[0] : memref<1xvector<4xf64>>
affine.store %25, %1[%arg6 * 4 + 2, %arg5 * 2] :
memref<2088x512xvector<4xf64>>
%26 = affine.load %6[0] : memref<1xvector<4xf64>>
affine.store %26, %1[%arg6 * 4 + 1, %arg5 * 2] :
memref<2088x512xvector<4xf64>>
%27 = affine.load %4[0] : memref<1xvector<4xf64>>
affine.store %27, %1[%arg6 * 4, %arg5 * 2] : memref<2088x512xvector<4xf64>>
}
dealloc %3 : memref<256x2xvector<4xf64>>
}
dealloc %2 : memref<64x256xf64>
}
}
The single element memrefs (the memref<1xvector<4xf64>> values) that we see above are a
result of the scalar replacement pass (-affine-scalrep) that runs after the
unroll-and-jam. We see eight of these, together holding 32 elements (eight
vectors of four f64 each) of the output matrix, and these are held in registers
while the innermost loop (k) is iterating. LLVM's passes later turn these into
virtual registers.
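To make the rewrite concrete, here is a simplified, hand-distilled sketch (not verbatim pass output) of what scalar replacement does to a single output element of the register tile; %i, %j, %a, and %b stand in for values defined by the surrounding loops:
⬇
// Before: the C element is re-loaded and re-stored on every k iteration.
affine.for %k = 0 to 256 {
  %c = affine.load %C[%i, %j] : memref<2088x512xvector<4xf64>>
  %p = mulf %a, %b : vector<4xf64>
  %s = addf %c, %p : vector<4xf64>
  affine.store %s, %C[%i, %j] : memref<2088x512xvector<4xf64>>
}
// After: the element lives in a single-element memref; the load is hoisted
// above the loop and the store sunk below it, so the k loop touches only
// %buf, which LLVM later promotes to a register.
%buf = alloca() : memref<1xvector<4xf64>>
%c0 = affine.load %C[%i, %j] : memref<2088x512xvector<4xf64>>
affine.store %c0, %buf[0] : memref<1xvector<4xf64>>
affine.for %k = 0 to 256 {
  %c = affine.load %buf[0] : memref<1xvector<4xf64>>
  %p = mulf %a, %b : vector<4xf64>
  %s = addf %c, %p : vector<4xf64>
  affine.store %s, %buf[0] : memref<1xvector<4xf64>>
}
%cf = affine.load %buf[0] : memref<1xvector<4xf64>>
affine.store %cf, %C[%i, %j] : memref<2088x512xvector<4xf64>>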
Both BLIS and OpenBLAS use inline assembly microkernels and other hand written
components that are composed in a modular/reusable way. On the other hand, we
have been trying to generate all our code all the way down, including using
LLVM’s instruction selection, scheduling, and register allocation. Note that
we have got here through the hop.matmul with the values of $M_{C}$, $K_{C}$,
$M_{R}$, $N_{R}$, $K_{U}$ indicated below:
⬇
hop.matmul %A, %B, %C {
M_C = 64 : i32, K_C = 256 : i32, M_R = 4, N_R = 8 : i32, K_U = 4 : i32
} : (memref<2088x2048xf64>, memref<2048x2048xf64>, memref<2088x2048xf64>)
Before we look at adjusting these for maximum mileage, we will take a detour
to memref layouts.
### 2.9 A Quick Detour into Affine Map Layouts
We are going to take a quick detour here to understand how a memref’s layout
map [6] works. A memref type has three pieces of information: its shape, an
affine layout map, and a memory space.
⬇
memref<64x256xf64, (d0, d1) -> (d0, d1), 0>
This is a memref of shape 64x256 with the identity map (d0, d1) -> (d0, d1),
which corresponds to a row-major layout in memory: 64 rows and 256 columns,
each element an f64. An element %A[%i, %j] will map to element ($256*d0+d1$) in
the buffer – in this case all elements are contiguous. '0' is the memory space
the memref lives in. On architectures that have different kinds of explicitly
addressable memory, a different memory space id is assigned to each. Whenever
the layout map is the identity, its printing is elided; the same holds for
memory space 0.
Now, let us look at the memref corresponding to the buffer for the LHS matrix
(%A), which is of type memref<64x256xf64>.
Figure 3: LHS buffer layout.
This block is being multiplied with the RHS panel of type
memref<256x2xvector<4xf64>>. The figure above shows how the elements of the A
block get traversed: along columns of height $M_{R}$ all the way up to $K_{C}$
columns, and then turn around for the next panel, until one has processed all
$M_{C}$/$M_{R}$ panels. Note that the elements are not being accessed
contiguously. But since the block of A is L2 cache resident and is just
streamed through L1, we still get both spatial reuse for it in L1 as well as
L2, i.e., it is just that the reuse in space for A does not happen immediately
but once you’ve traversed the $M_{R}$ height. And we know per this whole
approach that it still stays in cache. However, there is no harm (and it can
only potentially get better) if we ensure that A is accessed contiguously. For
example, it can only help with prefetching into L1. Hence, we would like to pick a
layout different from the default row-major for memref<64x256xf64>, and MLIR’s
affine layout maps make it convenient to express the desired layout here in a
compact and powerful way. If we simply change the memref type to:
memref<64x256xf64, (d0, d1) $\rightarrow$ (d0 floordiv $M_{R}$, d1, d0 mod
$M_{R}$)>
it yields the desired layout. The rest of the IR does not need a single
change! The mapping specifies that a $(d0,d1)$ in the logical access space
should be mapped to (d0 floordiv $M_{R}$, d1, d0 mod $M_{R}$) in a physical
(contiguous) buffer of size 16x256x4. When mlir::normalizeMemRef runs, it will
turn this memref into:
memref<16x256x4xf64>
And an access Abuf[%i, %j] is remapped to Abuf[%i floordiv 4, %j, %i mod 4].
This is an example of a tiled layout, akin to how iteration space tiling is
performed – stripmine and interchange – but on the data space.
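Schematically, with %i and %j standing in for the surrounding loop induction variables, the normalization rewrites:
⬇
// Before normalization: the tiled layout is carried by the memref type.
%v = affine.load %Abuf[%i, %j]
    : memref<64x256xf64, (d0, d1) -> (d0 floordiv 4, d1, d0 mod 4)>
// After mlir::normalizeMemRef: identity layout, remapped subscripts.
%v = affine.load %Abuf[%i floordiv 4, %j, %i mod 4] : memref<16x256x4xf64>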
As another example, here is a memref that accesses a non-contiguous sub-space
of its underlying buffer: memref<64x256xf64, (d0, d1) -> (d0, d1 floordiv 6,
d1 mod 6)>. Note that we would have a "padding" worth 2 elements (since
43 * 6 - 256 = 2) at the end of each row that can never be accessed via this
memref while staying in bounds. Another way to write this is: memref<64x256xf64,
$(d0,d1)\rightarrow(d0*258+d1)$>.
A memref with access strides say 128, 2 (for major, minor respectively) in a
larger underlying buffer can be expressed for example as: memref<126x100xf32,
$(d0,d1)\rightarrow(d0*128+d1*2)$>.
More examples can be found in the MLIR memref documentation [6].
The copy options supplied to mlir::affineDataCopyGenerate allow one to choose
a custom data layout for the buffer (being copied into/from). One can choose
any layout map as long as it is injective: any injective layout will lead to
semantically correct code.
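As a quick illustration of that requirement (our own example, not from the pass documentation): the tiled map above is injective, whereas a map that sums the dimensions is not, and would silently alias distinct elements in the copy buffer.
⬇
(d0, d1) -> (d0 floordiv 4, d1, d0 mod 4) // injective: distinct (d0, d1) map to distinct locations
(d0, d1) -> (d0 + d1)                     // not injective: (0, 1) and (1, 0) collide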
We will see that this layout is only needed for an experiment later (Section
2.12). We will now get back to adjusting parameters to go full throttle.
### 2.10 Tweaking $M_{C}$, $K_{C}$, $M_{R}$, $N_{R}$ to maximize reuse
Using a different setting of these parameters is as simple as changing the
attributes on the hop.matmul op. Note that although one can tune these
parameters on BLIS, its ASM kernel has to be among those that shipped — these
are long hand-written / hand-unrolled ones. However, with MLIR we are in a
position to also play around with the register tile sizes just as easily. Let
us pick a different set of values of $M_{C}$, $K_{C}$, $M_{R}$, and $N_{R}$
for better utilization of registers and L2 cache capacity as far %A’s buffer
goes. We’ll set $M_{R}$ to 3 and $N_{R}$ to 16 since this uses 12 vector
registers for %C instead of the under utilization of 8 earlier (there are a
total of 16 %ymm registers on x86-64).
⬇
hop.matmul %A, %B, %C {
M_C = 180 : i32, K_C = 480 : i32, M_R = 3, N_R = 16 : i32, K_U = 4 : i32
} : (memref<2088x2048xf64>, memref<2048x2048xf64>, memref<2088x2048xf64>)
When one chooses $M_{R}$ = 3 and $N_{R}$ = 16, it leads to 3 * 16 /4 = 12
256-bit vector registers for the output values, 3 values for the LHS, and 1
for the RHS, which amounts to 12 + 3 + 1 = 16 registers — exactly the number
of VEX encodable registers for AVX-2. The BLIS provided kernels for Haswell
otherwise include $6*8$, $8*6$, $4*12$, $12*4$ – they too use ($6*8/4$ =) 12
registers for the output, but these options differ in how much register reuse
is relatively exploited along the two dimensions and how big the L1 resident
buffer for the RHS is. Let us execute the $M_{R}$ = 3, $N_{R}$ = 16
configuration.
⬇
// M_C = 180 : i32, K_C = 480 : i32, M_R = 3, N_R = 16 : i32, K_U = 4 : i32
$ mlir-opt -hopt -hopt-vect -hopt-copy -hopt-unroll -hopt-scalrep -convert-
linalg-to-loops
-lower-affine -convert-std-to-llvm dgemm.mlir | mlir-cpu-runner -O3 -e main -reps=3
-entry-point-result=void -shared-libs=lib/libmlir_runner_utils.so
Compilation time: 0.039474s
61.939 GFLOPS
We immediately see a 24% boost in performance! A big strength of a pure
compiler approach here is that one automatically generates everything – all
the way down to the innermost loop body. So, we have the opportunity to
explore more than what’s possible with a fixed set of manually pre-optimized
kernels. This is all under the assumption that the compiler generated code is
competitive or could be made competitive. Let us now explore the impact of
just the $M_{R}$, $N_{R}$ values around the best configuration we have found.
⬇
// Let us see the impact of just the M_R, N_R values.
// M_C = 72 : i32, K_C = 256 : i32, M_R = 4, N_R = 8 : i32, K_U = 4 : i32
$ mlir-opt -hopt -hopt-vect -hopt-copy -hopt-unroll -hopt-scalrep -convert-
linalg-to-loops
-lower-affine -convert-std-to-llvm dgemm.mlir | mlir-cpu-runner -O3 -e main -reps=3
-entry-point-result=void -shared-libs=lib/libmlir_runner_utils.so
Compilation time: 0.04356s
49.7282 GFLOPS
// M_R = 6, N_R = 8 (this is what BLIS’ micro kernel uses for Haswell).
$ mlir-opt -hopt -hopt-vect -hopt-copy -hopt-unroll -hopt-scalrep -convert-
linalg-to-loops
-lower-affine -convert-std-to-llvm dgemm.mlir | mlir-cpu-runner -O3 -e main -reps=3
-entry-point-result=void -shared-libs=lib/libmlir_runner_utils.so
Compilation time: 0.0479391s
40.4135 GFLOPS
// The best conf so far.
// M_C = 180 : i32, K_C = 480 : i32, M_R = 3, N_R = 16 : i32, K_U = 4 : i32.
$ mlir-opt -hopt -hopt-vect -hopt-copy -hopt-unroll -hopt-scalrep -convert-
linalg-to-loops
-lower-affine -convert-std-to-llvm dgemm.mlir | mlir-cpu-runner -O3 -e main -reps=3
-entry-point-result=void -shared-libs=lib/libmlir_runner_utils.so
Compilation time: 0.039474s
61.843 GFLOPS
Let us now peek at the generated assembly of the innermost loop to see how
good it is. A necessary condition is that the innermost block should have no
loads/stores on the output (everything should be in registers), and we should
have a continuous sequence of VFMA instructions on 256-bit AVX registers,
which are named %ymm[0-15].
⬇
# => This Inner Loop Header: Depth=5
vbroadcastsd -7704(%rbx), %ymm12
vmovapd -480(%r13), %ymm14
vfmadd231pd %ymm12, %ymm14, %ymm0 # ymm0 = (ymm14 * ymm12) + ymm0
vbroadcastsd -3864(%rbx), %ymm13
vfmadd231pd %ymm13, %ymm14, %ymm1 # ymm1 = (ymm14 * ymm13) + ymm1
vbroadcastsd -24(%rbx), %ymm15
vfmadd213pd %ymm10, %ymm15, %ymm14 # ymm14 = (ymm15 * ymm14) + ymm10
vmovapd -448(%r13), %ymm10
vfmadd231pd %ymm10, %ymm12, %ymm2 # ymm2 = (ymm12 * ymm10) + ymm2
vfmadd231pd %ymm10, %ymm13, %ymm3 # ymm3 = (ymm13 * ymm10) + ymm3
vfmadd231pd %ymm10, %ymm15, %ymm4 # ymm4 = (ymm15 * ymm10) + ymm4
vmovapd -416(%r13), %ymm10
vfmadd231pd %ymm10, %ymm12, %ymm5 # ymm5 = (ymm12 * ymm10) + ymm5
vfmadd231pd %ymm10, %ymm13, %ymm6 # ymm6 = (ymm13 * ymm10) + ymm6
vfmadd231pd %ymm10, %ymm15, %ymm7 # ymm7 = (ymm15 * ymm10) + ymm7
vmovapd -384(%r13), %ymm10
vfmadd213pd %ymm9, %ymm10, %ymm12 # ymm12 = (ymm10 * ymm12) + ymm9
vfmadd213pd %ymm11, %ymm10, %ymm13 # ymm13 = (ymm10 * ymm13) + ymm11
vfmadd231pd %ymm10, %ymm15, %ymm8 # ymm8 = (ymm15 * ymm10) + ymm8
vbroadcastsd -7696(%rbx), %ymm9
vmovapd -352(%r13), %ymm11
vfmadd231pd %ymm9, %ymm11, %ymm0 # ymm0 = (ymm11 * ymm9) + ymm0
vbroadcastsd -3856(%rbx), %ymm10
vfmadd231pd %ymm10, %ymm11, %ymm1 # ymm1 = (ymm11 * ymm10) + ymm1
vbroadcastsd -16(%rbx), %ymm15
vfmadd213pd %ymm14, %ymm15, %ymm11 # ymm11 = (ymm15 * ymm11) + ymm14
vmovapd -320(%r13), %ymm14
vfmadd231pd %ymm14, %ymm9, %ymm2 # ymm2 = (ymm9 * ymm14) + ymm2
vfmadd231pd %ymm14, %ymm10, %ymm3 # ymm3 = (ymm10 * ymm14) + ymm3
vfmadd231pd %ymm14, %ymm15, %ymm4 # ymm4 = (ymm15 * ymm14) + ymm4
vmovapd -288(%r13), %ymm14
vfmadd231pd %ymm14, %ymm9, %ymm5 # ymm5 = (ymm9 * ymm14) + ymm5
vfmadd231pd %ymm14, %ymm10, %ymm6 # ymm6 = (ymm10 * ymm14) + ymm6
vfmadd231pd %ymm14, %ymm15, %ymm7 # ymm7 = (ymm15 * ymm14) + ymm7
vmovapd -256(%r13), %ymm14
vfmadd213pd %ymm12, %ymm14, %ymm9 # ymm9 = (ymm14 * ymm9) + ymm12
vfmadd213pd %ymm13, %ymm14, %ymm10 # ymm10 = (ymm14 * ymm10) + ymm13
vfmadd231pd %ymm14, %ymm15, %ymm8 # ymm8 = (ymm15 * ymm14) + ymm8
vbroadcastsd -7688(%rbx), %ymm12
vmovapd -224(%r13), %ymm14
…
This looks as good as we may expect! The output matrix values are always in
registers (no spilling). LHS and RHS values have the intended amount of reuse
in registers. The LHS is loaded in once via broadcast/splat (vbroadcastsd),
and reused along the $j$ register tile dimension, while the RHS is moved in
via aligned vector loads and reused along the $i$ register tile dimension.
⬇
// Let us look at the benefit of packing in isolation on the best code we
have.
$ mlir-opt -hopt -hopt-vect -hopt-copy -hopt-unroll -hopt-scalrep -convert-
linalg-to-loops
-lower-affine -convert-std-to-llvm dgemm.mlir | mlir-cpu-runner -O3 -e main -reps=3
-entry-point-result=void -shared-libs=lib/libmlir_runner_utils.so
Compilation time: 0.039474s
61.843 GFLOPS
⬇
// Without packing (but with everything else).
$ mlir-opt -hopt -hopt-vect -hopt-copy=false -hopt-unroll -hopt-scalrep
-convert-linalg-to-loops
-lower-affine -convert-std-to-llvm dgemm.mlir | mlir-cpu-runner -O3 -e main
-entry-point-result=void -shared-libs=lib/libmlir_runner_utils.so,/usr/local/lib/libblis.so
Compilation time: 0.030535s
22.5257 GFLOPS
// A 3x drop in performance just due to the lack of packing, while it was just
// 1.5x on a code that was not fully optimized.
⬇
// Without unroll-and-jam (but with packing and everything else)
$ mlir-opt -hopt -hopt-vect -hopt-copy -hopt-unroll=false -hopt-scalrep
-linalg-lower-to-loops
-linalg-convert-to-llvm -convert-linalg-to-loops -lower-affine -convert-std-to-llvm
dgemm.mlir | mlir-cpu-runner -O3 -reps=5 -e main -entry-point-result=void
-shared-libs=lib/libmlir_runner_utils.so,/usr/local/lib/libblis.so
Compilation time: 0.0424139s
15.5651 GFLOPS
⬇
// Without packing and without unroll-and-jam (but with everything else)
$ mlir-opt -hopt -hopt-vect -hopt-copy=false -hopt-unroll=false -hopt-scalrep
-linalg-lower-to-loops -linalg-convert-to-llvm -convert-linalg-to-loops -lower-affine
-convert-std-to-llvm dgemm.mlir | mlir-cpu-runner -O3 -reps=5 -e main
-entry-point-result=void
-shared-libs=lib/libmlir_runner_utils.so,/usr/local/lib/libblis.so
Compilation time: 0.0228369s
10.785 GFLOPS
As for $M_{C}$, $K_{C}$, note that we are using large enough values that the
$M_{C}$ x $K_{C}$ tile (675 KB for $M_{C}$ = 180, $K_{C}$ = 480) of the LHS
matrix (A) actually fits in the L3 cache, as opposed to in the L2 cache, which
was the plan with the BLIS/OpenBLAS tiling approach. Since we haven’t really
done the additional level of tiling ($N_{C}$) per the OpenBLAS/BLIS strategy,
we notice that we get more mileage here out of using a larger tile for the LHS
in L3 instead of a smaller tile in L2 (the L3 was sort of unoccupied for us).
Note that a larger $M_{C}$ in particular amortizes the cost of the RHS’s
transfer into L1 and exploits more L1 reuse on that tile. The additional level
of tiling is conceptually straightforward given where we are now, and this
could be evaluated along with better values for $M_{C}$, $K_{C}$ for improved
L2 and L3 reuse. We skip that for this article.
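As a quick check on where that figure comes from: $M_{C}\times K_{C}\times 8~\textrm{bytes}=180\times 480\times 8=691{,}200~\textrm{bytes}\approx 675$ KB.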
Let us now look back at our journey up until this best performing result, and
the role each optimization played.
### 2.11 Within 9% of OpenBLAS and MKL
Let us plot the overall picture so far (see Figure 5).
We have now nearly matched BLIS performance (61 vs 63 GFLOPS), and are only
9% away from MKL and OpenBLAS's performance! Note that BLIS' performance could
also likely be improved by tuning and playing around with $M_{C}$ and $K_{C}$
as opposed to keeping its preset configuration ($M_{C}$ = 72, $K_{C}$ = 256),
but as we observed here, it is the $M_{R}$, $N_{R}$ values that have the
greater impact as far as the MLIR + LLVM approach goes. Secondly, further
tuning of BLIS is likely to still land around the OpenBLAS performance (based
on published results).
#### 2.11.1 Vectorization with copying, unroll-and-jam, unrolling but no
scalar replacement
On this note, let us look at what happens if we disable MLIR's scalar
replacement but enable unroll-and-jam, sort of leaving it to LLVM to perform
the replacement.
⬇
$ mlir-opt -hopt -hopt-vect -hopt-scalrep=false -convert-linalg-to-loops
-lower-affine
-convert-std-to-llvm dgemm.mlir | mlir-cpu-runner -O3 -e main -entry-point-result=void -reps=5
-shared-libs=lib/libmlir_runner_utils.so,/usr/local/lib/libblis.so
Compilation time: 0.038455s
10.7969 GFLOPS
Relying on LLVM to do the scalar replacement does not work! We will not go
into why again here. Although the passes in LLVM could be improved, this is an
optimization that MLIR is supposed to perform well since it has to do with
multidimensional subscripts and loops.
### 2.12 Map to BLIS micro-kernel: “Outer MLIR, Inner BLIS”
We might wonder at this point what would happen if the MLIR approach
just borrowed the BLIS micro-kernel for its inner three loops (the ones
running with trip counts $K_{C}$, $M_{R}$, $N_{R}$). How much better
performance would it lead to for the best MLIR version? Given that we were
able to get close to BLIS using MLIR + LLVM all the way, no difference in
performance would be great news for those who have worked on the x86/x86-64
LLVM backends and most of the low level infrastructure surrounding it.
We will do a simple experiment here. One could imagine employing BLIS’ micro-
kernel (multiplication of two panels: $M_{R}$ x $K_{C}$ and $K_{C}$ x $N_{R}$)
within MLIR itself, i.e., the outer loops, tiling, explicit copying etc. are
all automated with MLIR the way we described above, but the carefully crafted
inline assembly kernel of BLIS is called for the inner part. This is also an
approach of high interest to several research groups working on HPC compilers
today because many believe that (1) the performance of the hand-written inner
kernel cannot be attained through compiler generated code, (2) one shouldn’t
be subject to the vagaries of a compiler when generating such code where even
the last ounce of performance should be extracted. Both points are debatable,
and the issues involved are addressable.
Over here, our experiment is also interesting because it tells us whether the
remaining difference in performance is coming from the carefully tailored
instruction selection and schedule used in BLIS’ micro-kernel, which the
compiler (having been tasked to deal with the entire world of programs) is
unable to nail down optimally. Seeing a big difference when we swap out our
inner loops with the BLIS kernel could be disappointing because it would mean
we would now have to get into LLVM to see how we could improve things as one
option. On the other hand, the lack of any significant improvement would be
great news for us because we could then focus on the "macro kernel", code and
loops that we have all possible control over, to go closer to the peak. It
would also be a tribute to all those who have worked on LLVM backends and
surrounding infrastructure.
Figure 4: The BLIS micro-kernel (figure courtesy: Field Van Zee and Robert van
de Geijn).
The BLIS micro-kernel being used here is bli_dgemm_haswell_asm_6x8; this
corresponds to $M_{R}$ = 6 and $N_{R}$ = 8. We’ll thus run a pure MLIR code
generated version with $M_{C}$ = 174, $K_{C}$ = 512, $M_{R}$ = 6, $N_{R}$ = 8,
and then the same one with the innermost 3-d nest mapped to BLIS’ microkernel.
This keeps the comparison meaningful. (We have adjusted 180 and 480 slightly
to 174 and 512 respectively to ensure that all cache tiles are full tiles to
keep things more readable).
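As a sanity check, everything now divides evenly: $2088/M_{C}=2088/174=12$, $2048/K_{C}=2048/512=4$, $2048/N_{R}=2048/8=256$, and $M_{C}/M_{R}=174/6=29$ — matching the loop bounds (0 to 12, 0 to 4, 0 to 256) and the 29x512x6 LHS buffer shape in the listings below.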
We also need to change the layout of the LHS buffer to the packed layout shown
in green - this was discussed in detail earlier when we described how memref
layouts work (Section 2.9).
Let us look at the structure of MLIR here with tiling and packing, but without
vectorization or any unroll-jamming enabled since the inner portion is going
to be swapped out.
⬇
affine.for %arg1 = 0 to 4 {
affine.for %arg2 = 0 to 12 {
%9 = alloc() : memref<29x512x6xf64>
affine.for %arg3 = #map4(%arg2) to #map5(%arg2) {
affine.for %arg4 = #map2(%arg1) to #map3(%arg1) {
%10 = affine.load %0[%arg3, %arg4] : memref<2088x2048xf64>
affine.store %10, %9[%arg2 * -29 + %arg3 floordiv 6, %arg1 * -512 + %arg4,
%arg3 mod 6]
: memref<29x512x6xf64>
}
}
affine.for %arg3 = 0 to 256 {
%10 = alloc() : memref<512x8xf64>
affine.for %arg4 = #map2(%arg1) to #map3(%arg1) {
affine.for %arg5 = #map7(%arg3) to #map8(%arg3) {
%11 = affine.load %1[%arg4, %arg5] : memref<2048x2048xf64>
affine.store %11, %10[%arg1 * -512 + %arg4, %arg3 * -8 + %arg5] :
memref<512x8xf64>
}
}
affine.for %arg4 = #map15(%arg2) to #map16(%arg2) {
affine.for %arg5 = 0 to 512 {
affine.for %arg6 = 0 to 8 {
%11 = affine.load %10[%arg5, %arg6] : memref<512x8xf64>
affine.for %arg7 = 0 to 6 {
%12 = affine.load %9[%arg2 * -29 + %arg4 + %arg7 floordiv 6, %arg5, %arg7 mod
6]
: memref<29x512x6xf64>
%13 = affine.load %2[%arg4 * 6 + %arg7, %arg3 * 8 + %arg6] :
memref<2088x2048xf64>
%14 = mulf %12, %11 : f64
%15 = addf %14, %13 : f64
affine.store %15, %2[%arg4 * 6 + %arg7, %arg3 * 8 + %arg6] :
memref<2088x2048xf64>
}
}
}
}
dealloc %10 : memref<512x8xf64>
}
dealloc %9 : memref<29x512x6xf64>
}
}
Note that the buffer of shape <174x512xf64> after being assigned the tiled
layout and post memref normalization has been converted to the shape
<29x512x6xf64>, and it has elements exactly in the order the BLIS micro-kernel
wants. One can see the blocks and panels corresponding to the buffers being
multiplied here (LHS buffer: <29x512x6xf64>, RHS buffer: <512x8xf64>; the
output stays as the full 2088x2048xf64 since we will reuse it in registers). For the
purpose of this article, we developed a -hopt-blis pass option that actually
maps the right part to the BLIS micro-kernel. For a 2088x2048 matrix, this is
the generated IR:
⬇
affine.for %arg1 = 0 to 4 {
affine.for %arg2 = 0 to 12 {
%9 = alloc() {alignment = 32 : i32} : memref<29x512x6xf64>
affine.for %arg3 = #map4(%arg2) to #map5(%arg2) {
affine.for %arg4 = #map2(%arg1) to #map3(%arg1) {
%10 = affine.load %0[%arg3, %arg4] : memref<2088x2048xf64>
affine.store %10, %9[%arg2 * -29 + %arg3 floordiv 6, %arg1 * -512 + %arg4,
%arg3 mod 6]
: memref<29x512x6xf64>
}
}
affine.for %arg3 = 0 to 256 {
%10 = alloc() {alignment = 32 : i32} : memref<512x8xf64>
affine.for %arg4 = #map2(%arg1) to #map3(%arg1) {
affine.for %arg5 = #map7(%arg3) to #map8(%arg3) {
%11 = affine.load %1[%arg4, %arg5] : memref<2048x2048xf64>
affine.store %11, %10[%arg1 * -512 + %arg4, %arg3 * -8 + %arg5] :
memref<512x8xf64>
}
}
call @hopt_dgemm_blis_kernel(%9, %10, %2, %c512, %arg2, %arg3)
: (memref<29x512x6xf64>, memref<512x8xf64>, memref<2088x2048xf64>, index,
index, index) -> ()
dealloc %10 : memref<512x8xf64>
}
dealloc %9 : memref<29x512x6xf64>
}
}
The hopt_dgemm_blis_kernel function is added to mlir_runtime_utils, and just
wraps around bli_dgemm_haswell_asm_6x8.
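For reference, here is a rough C sketch of what such a wrapper has to do — our own illustration rather than the actual mlir_runtime_utils code, glossing over MLIR's memref-descriptor calling convention, and assuming the standard BLIS gemm micro-kernel interface (k, alpha, a, b, beta, c, rs_c, cs_c, auxinfo, cntx):
⬇
#include <blis/blis.h>

// Multiply one packed LHS block (29 micro-panels of 512x6) with one packed
// RHS panel (512x8), accumulating into C (alpha = beta = 1). i_tile and
// j_tile select the M_C x N_R block of C being updated; ldc = 2048.
void dgemm_blis_kernel_sketch(double *a, double *b, double *c,
                              long k_c, long i_tile, long j_tile, long ldc) {
  double one = 1.0;
  auxinfo_t aux = {0};  // prefetch hints left unset in this sketch
  for (long p = 0; p < 29; ++p) {  // M_C / M_R = 174 / 6 = 29
    double *a_panel = a + p * k_c * 6;  // contiguous 512x6 micro-panel
    double *c_tile = c + (i_tile * 174 + p * 6) * ldc + j_tile * 8;
    bli_dgemm_haswell_asm_6x8(k_c, &one, a_panel, b, &one,
                              c_tile, ldc, /*cs_c=*/1, &aux, /*cntx=*/NULL);
  }
}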
The allocs above have alignments since the BLIS kernel uses 256-bit aligned
loads on these buffers. So, this experimentation was done with additional
support in MLIR's std to llvm dialect conversion to use aligned_alloc. We
have these alignments set even for the pure MLIR generated code since the
loads and stores on vector types will have the vector size as the default ABI
alignment, leading to aligned load/stores during LLVM code generation; so the
alloc’s need to be aligned to vector size boundaries.
⬇
// MLIR with BLIS micro-kernel: M_R = 6, N_R = 8\.
$ mlir-opt -hopt -hopt-blis -convert-linalg-to-loops -lower-affine -convert-
std-to-llvm dgemm.mlir
| mlir-cpu-runner -O3 -e main -entry-point-result=void
-shared-libs=lib/libmlir_runner_utils.so,/usr/local/lib/libblis.so -dump-object-file
-object-filename=hopt.o
Compilation time: 0.0281591s
61.421 GFLOPS
⬇
// MLIR with pure codegen M_R = 6, N_R = 8\.
$ mlir-opt -hopt -convert-linalg-to-loops -lower-affine -convert-std-to-llvm dgemm.mlir |
mlir-cpu-runner -O3 -e main -reps=5 -entry-point-result=void
-shared-libs=lib/libmlir_runner_utils.so
Compilation time: 0.0475061s
40.2426 GFLOPS
⬇
// MLIR with pure codegen M_R = 4, N_R = 8\.
$ mlir-opt -hopt -convert-linalg-to-loops -lower-affine -convert-std-to-llvm dgemm.mlir |
mlir-cpu-runner -O3 -e main -reps=5 -entry-point-result=void
-shared-libs=lib/libmlir_runner_utils.so
Compilation time: 0.0415299s
50.7075 GFLOPS
⬇
// Recall with M_R = 3, N_R = 16\.
$ mlir-opt -hopt -hopt-copy -hopt-unroll -hopt-scalrep -convert-linalg-to-
loops -lower-affine
-convert-std-to-llvm dgemm.mlir | mlir-cpu-runner -O3 -e main -reps=3 -entry-point-result=void
-shared-libs=lib/libmlir_runner_utils.so
Compilation time: 0.039474s
61.939 GFLOPS
There is nearly no difference when comparing the hybrid MLIR and BLIS micro-
kernel version with the best MLIR version we had. For a given $M_{R}$ x $N_{R}$
register tile with the schedule we have been using, the register requirement
(assuming the instruction scheduler is not doing any non-trivial reordering)
would be $M_{R}*N_{R}/4+M_{R}+1$ (the division by four is for the vector
width). With $M_{R}=3$, $N_{R}=16$, this requirement would be exactly 16,
which is the number of %ymm registers we have! Using $M_{R}$ = 6, $N_{R}$ = 8
with our code (as opposed to with the BLIS micro-kernel), the requirement goes
up to 6*8/4 + 6 + 1 = 19 registers, and leads to a spill. The assembly dump
confirms this (notice the vmovupd stores in between the FMAs, which did not
exist earlier). Also, the intuitively sub-optimal sizes of $M_{R}$ = 4,
$N_{R}$ = 8 provide better performance than the tighter $M_{R}$ = 6, $N_{R}$ =
8, indicating the cliff associated with spilling.
⬇
c4 62 95 b8 f0 vfmadd231pd %ymm0, %ymm13, %ymm14
1360: c4 a2 7d 19 84 ef e8 c3 ff ff vbroadcastsd -15384(%rdi,%r13,8), %ymm0
136a: c5 fd 11 84 24 80 01 00 00 vmovupd %ymm0, 384(%rsp)
1373: c4 e2 95 b8 c8 vfmadd231pd %ymm0, %ymm13, %ymm1
1378: c4 a2 7d 19 84 ef e8 d2 ff ff vbroadcastsd -11544(%rdi,%r13,8), %ymm0
1382: c5 fd 11 84 24 a0 01 00 00 vmovupd %ymm0, 416(%rsp)
138b: c4 e2 95 b8 d0 vfmadd231pd %ymm0, %ymm13, %ymm2
1390: c4 a2 7d 19 84 ef e8 e1 ff ff vbroadcastsd -7704(%rdi,%r13,8), %ymm0
139a: c4 e2 95 b8 d8 vfmadd231pd %ymm0, %ymm13, %ymm3
139f: c4 22 7d 19 a4 ef e8 f0 ff ff vbroadcastsd -3864(%rdi,%r13,8), %ymm12
13a9: c5 7d 11 a4 24 e0 01 00 00 vmovupd %ymm12, 480(%rsp)
13b2: c4 c2 95 b8 e4 vfmadd231pd %ymm12, %ymm13, %ymm4
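Plugging the register-count formula from above into the three register tiles makes the cliff visible: $3\times 16/4+3+1=16$ fits exactly, $4\times 8/4+4+1=13$ fits with room to spare, while $6\times 8/4+6+1=19$ exceeds the 16 available %ymm registers and spills.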
However, $M_{R}$ = 6, $N_{R}$ = 8 could be made to fit in the register budget
by using a different permutation of instructions for the innermost loop body,
i.e., with a register tile whose intra-tile loops have been permuted before
the unrolling was performed. In such a case, the register requirement would
be: $M_{R}$ * $N_{R}$/4 + $N_{R}$/4 + 1 = 15! The schedule to be used to
realize that would be:
$(d0,d1,d2)\rightarrow(d2\textrm{ floordiv }480,d0\textrm{ floordiv
}330,d1\textrm{ floordiv }8,d0\textrm{ floordiv }6,d2,d0,d1).$
Notice the flip $(\dots,d0,d1)$ instead of $(\dots,d1,d0)$. This also means
that the LLVM backend is not going to automatically do a complex permutation
on the innermost body that changes live ranges, reducing register requirements
to fit within the budget and thereby avoiding a spill. Using the above
schedule leads to no spill, and the code performs about 5% slower than
$M_{R}$ = 3, $N_{R}$ = 16 though. It is thus also important to get the right
permutation for the unrolled register tile (at the time of selecting the loop
transformation) — so that the matrix values with longer reuse distances for
that intra register tile permutation (LHS values in the case of (d1, d0) and
RHS values in the case of (d0, d1)) do not cause a spill. Alternatively, this
problem could be viewed as one that performs the unroll-and-jam for $i$, $j$
in the right order. The register requirement in the context of this "known"
computation and its interaction with the intra-tile loop permutation and
register tile sizes ($M_{R}$, $N_{R}$) are pretty easy to capture at a high
level to make sure even a simple register allocator does the right thing.
Overall, we conclude here that the kind of code we could get automatically is
clearly on par with or close to what was achieved with expert-written assembly. The
best performing $M_{R}$x$N_{R}$ of 3x16 is on par with the MLIR-BLIS
configuration we explored. The 6x8 version, if made to work without spilling,
will likely be on par with 3x16. This basically means the 4-9% difference
between the pure MLIR one and the BLIS/OpenBLAS ones likely stems from things
around and outside the micro-kernel. This part requires more experimentation
before any firm conclusions can be made.
On a separate note, one can see that MLIR does make it easier to map to
external hand-optimized kernels, and combine the best of both worlds if that’s
necessary for peak performance. Also, it is straightforward to perform a
systematic exploration around interesting values by generating ops with
different parameter attributes. However, each choice requires a compile and
execute cycle since we are not generating any target code parametric in the
identified parameters.
Figure 5: DGEMM performance.
### 2.13 SGEMM performance
We can similarly now benchmark SGEMM performance quickly. All that we need to
change on the op’s operands is an s/f64/f32! In addition, we will just double
the register tile size $N_{R}$ from 16 to 32 since two times the number of f32
elements can be held in a vector register (8 x f32 instead of 4 x f64). We
would thus still be using 3x4 = 12 vector registers for C. In addition, we
will also double $M_{C}$ to 348 for the same reason. We thus use this op to
generate SGEMM.
⬇
hop.matmul %A, %B, %C {
M_C = 348 : i32, K_C = 512 : i32, M_R = 3, N_R = 32 : i32, K_U = 4 : i32
} : (memref<2088x2048xf32>, memref<2048x2048xf32>, memref<2088x2048xf32>)
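The register-budget arithmetic from the DGEMM case carries over, now with eight f32 lanes per register: $M_{R}\times N_{R}/8+M_{R}+1=3\times 32/8+3+1=16$, again exactly the 16 available %ymm registers.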
Evaluating this, we find that the performance of purely MLIR generated code is
within 2% of MKL performance here, and in fact marginally better than
OpenBLAS and BLIS! The outer MLIR + inner BLIS version delivers the
expected performance, nearly on par with pure MLIR here.
Figure 6: SGEMM performance.
### 2.14 What about the remaining 9%?
At this point, we would like to see what stands in the way of bridging the
remaining 9% gap in getting to MKL/OpenBLAS’s performance. In particular, the
hand-written kernels in OpenBLAS and BLIS also use prefetching – we are not yet
generating any of these instructions.
In summary, we have been able to generate all code starting from just this op:
⬇
hop.matmul %A, %B, %C {
M_C = 180 : i32, K_C = 480 : i32, M_R = 3, N_R = 16 : i32, K_U = 4 : i32
} : (memref<2088x2048xf64>, memref<2048x2048xf64>, memref<2088x2048xf64>)
### 2.15 Other questions
There may be a few questions here.
1.
While the code was automatically generated, the transformation sequence was
specified from within an MLIR pass. What does the sequence look like - just
C++ IR building and calls to utilities/passes? How productive is it and the
subsequent exploration?
2.
What about problem sizes that are not a perfect multiple of register tile
sizes ($M_{R}$, $N_{R}$)? While MLIR’s unroll-and-jam works with cleanup code
generated, there are unresolved issues in interactions downstream (for
performance). There is literature in the polyhedral body of works on
separation of full and partial tiles, and this is largely an implementation
issue. Note that cache tile sizes need not perfectly divide problem sizes -
this already works in MLIR in conjunction with packing and other
transformations presented here. For example, the best $M_{C}$ size we used
(180) does not divide 2048.
3.
Why the affine/polyhedral passes rather than LinAlg? We have primarily used the polyhedral passes in MLIR here since the generated
code at each stage always stays affine. The LinAlg dialect does not have the
utilities to automatically analyze and generate packing code for example.
Nearly all passes and utilities used here such as unroll-and-jam, scalar
replacement, and memref normalization work on affine dialect ops.
4.
What about all the other options around input matrices such as strides and
transpose? These are all cleanly expressed via memrefs' affine layout maps
discussed above (Section 2.9). We haven't considered the $\alpha$ and $\beta$
arguments for DGEMM though (it is really $C=\alpha AB+\beta C$) - we have
actually assumed $\alpha$ = $\beta$ = 1. In fact, MLIR's pattern rewrites
can fold and optimize away code/nests when $\alpha$ or $\beta$ are 0 or 1.
Overall, these aspects are something to keep in mind before making a
complete/exact comparison.
5.
How do we determine good or the optimal parameter values? The work of Low et
al. is on analytical modeling to derive good parameter values (the derived
formulae are pretty straightforward to just plug values from hardware
resources into). Besides such analytical modeling, there is a lot of room to
play here depending on how powerful the generative infrastructure is, and to
what extent we would like to generalize this approach beyond the domain
considered here.
## 3 Related Work
Most of the related work on OpenBLAS [2] and BLIS [3] was already described
inline. The work by Gareev et al. [10] is the only approach I am aware of that
tried the BLIS approach from inside a compiler (Polly/LLVM there). From their
results, they appeared to have reached about 70% of MKL/OpenBLAS
performance. With a low-level IR such as LLVM, and with Polly and LLVM
infrastructure decoupled to some extent, one’s expressive power and mileage is
expected to vary. There has been a large amount of work on empirical search
driven library generation for GEMM [11] (ATLAS), and works like that of Yotov
et al. [12] have plugged in analytical models to drive such optimization with
comparable performance. The work of Low et al. [7] makes a similar attempt
with BLIS. However, all of these frameworks were centered around generation of
C code along with inner inline assembly kernels.
## 4 Reproducing these Results
##### Software setup:
Fedora Linux 30 running kernel 5.3.6-200.fc30.x86_64, BLIS version
0.6.0-40-gf4f5170f, MKL version 2019.4.243, OpenBLAS 0.3.7-1.fc30.x86_64, and
Pluto git 0.11.4-903-g7f21ab57. cpupower was used to set the frequency
governor to ‘performance’.
A good part of the experiments presented in this article can be reproduced
with MLIR trunk. There are major features though that are pending upstream
integration (memref_shape_cast op, alloca op, scalar replacement, a new
vectorization pass/utility, and support for a few packing options), but these
are available in the hop branch, which is the recommended way of reproducing
the results presented herein as is. Please see the README in that branch to
run most of the experiments.
The hop branch however is not intended to be continuously kept in sync with
changing upstream MLIR development — so that the examples, IR and results
presented herein remain valid. On the other hand, the MLIRX [13] repository
provides all of the features presented here while being maintained to be in
sync with upstream MLIR development. If the reader’s objective is to build
upon the augmented IR infrastructure as opposed to just trying to reproduce
things, MLIRX is recommended.
## References
* [1] Chris Lattner, Mehdi Amini, Uday Bondhugula, Albert Cohen, Andy Davis, Jacques Pienaar, River Riddle, Tatiana Shpeisman, Nicolas Vasilache, and Oleksandr Zinenko. MLIR: A compiler infrastructure for the end of Moore’s law, 2020.
* [2] Kazushige Goto and Robert A. van de Geijn. Anatomy of high-performance matrix multiplication. ACM Trans. Math. Softw., 34(3), 2008.
* [3] Field G. Van Zee and Robert A. van de Geijn. BLIS: A framework for rapidly instantiating BLAS functionality. ACM Trans. Math. Softw., 41(3), June 2015.
* [4] MLIR: A multi-level intermediate representation. https://mlir.llvm.org.
* [5] Affine dialect in MLIR. https://github.com/llvm/llvm-project/tree/master/mlir/docs/Dialects/Affine.md.
* [6] MemRefs in MLIR. https://github.com/llvm/llvm-project/tree/master/mlir/docs/LangRef.md#memref-type.
* [7] Tze Meng Low, Francisco D. Igual, Tyler M. Smith, and Enrique S. Quintana-Orti. Analytical modeling is enough for high-performance BLIS. ACM Trans. Math. Softw., 43(2), August 2016.
* [8] Uday Bondhugula. An introduction to the polyhedral framework, 2016. https://www.csa.iisc.ac.in/~udayb/slides/uday-polyhedral-opt.pdf.
* [9] Robert van de Geijn, Margaret Myers, and Devangi Parikh. LAFF-On programming for high performance: ulaff.net, 2019. http://www.cs.utexas.edu/users/flame/laff/pfhp/.
* [10] Roman Gareev, Tobias Grosser, and Michael Kruse. High-performance generalized tensor operations: A compiler-oriented approach. ACM Trans. Archit. Code Optim., 15(3), 2018.
* [11] R. Clint Whaley and Jack Dongarra. Automatically tuned linear algebra software. In SuperComputing 1998: High Performance Networking and Computing, 1998. http://math-atlas.sourceforge.net/.
* [12] Kamen Yotov, Xiaoming Li, Gang Ren, María Jesús Garzarán, David A. Padua, Keshav Pingali, and Paul Stodghill. Is search really necessary to generate high-performance blas? Proceedings of the IEEE, 93(2):358–386, 2005.
* [13] PolyMage Labs. MLIRX, 2020. https://github.com/polymage-labs/mlirx.
# Into the UV: A precise transmission spectrum of HAT-P-41b using
Hubble’s WFC3/UVIS G280 grism
H.R. Wakeford School of Physics, University of Bristol, HH Wills Physics
Laboratory, Tyndall Avenue, Bristol BS8 1TL, UK Space Telescope Science
Institute, 3700 San Martin Drive, Baltimore, MD 21218, USA D.K. Sing
Department of Earth & Planetary Sciences, Johns Hopkins University, Baltimore,
MD, USA Department of Physics & Astronomy, Johns Hopkins University,
Baltimore, MD, USA K.B. Stevenson JHU Applied Physics Laboratory, 11100 Johns
Hopkins Rd, Laurel, MD 20723, USA N.K. Lewis Department of Astronomy and Carl
Sagan Institute, Cornell University, 122 Sciences Drive, Ithaca, NY 14853, USA
N. Pirzkal Space Telescope Science Institute, 3700 San Martin Drive,
Baltimore, MD 21218, USA T.J. Wilson University of Exeter, Physics Building,
Stocker Road, Exeter, Devon, Ex4 4QL, UK Space Telescope Science Institute,
3700 San Martin Drive, Baltimore, MD 21218, USA J. Goyal Department of
Astronomy and Carl Sagan Institute, Cornell University, 122 Sciences Drive,
Ithaca, NY 14853, USA T. Kataria NASA Jet Propulsion Laboratory, 4800 Oak
Grove Drive, Pasadena, CA 91109, USA T. Mikal-Evans Kavli Institute for
Astrophysics and Space Research, Massachusetts Institute of Technology, 77
Massachusetts Avenue, 37-241, Cambridge, MA 02139, USA N. Nikolov Department
of Physics & Astronomy, Johns Hopkins University, Baltimore, MD, USA J. Spake
Division of Geological and Planetary Sciences, California Institute of
Technology, Pasadena, CA 91125, USA
(Accepted to AJ February 29, 2020)
###### Abstract
The ultraviolet-visible wavelength range holds critical spectral diagnostics
for the chemistry and physics at work in planetary atmospheres. To date,
exoplanet time-series atmospheric characterization studies have relied on
several combinations of modes on Hubble’s STIS/COS instruments to access this
wavelength regime. Here for the first time, we apply the Hubble WFC3/UVIS G280
grism mode to obtain exoplanet spectroscopy from 200-800 nm in a single
observation. We test the G280 grism mode on the hot Jupiter HAT-P-41b over two
consecutive transits to determine its viability for exoplanet atmospheric
characterization. We obtain a broadband transit depth precision of 29–33 ppm
and a precision of on average 200 ppm in 10 nm spectroscopic bins. Spectral
information from the G280 grism can be extracted from both the positive and
negative first order spectra, resulting in a 60% increase in the measurable
flux. Additionally, the first HST orbit can be fully utilized in the time-
series analysis. We present detailed extraction and reduction methods for use
by future investigations with this mode, testing multiple techniques. We find
the results fully consistent with STIS measurements of HAT-P-41b from 310–800
nm, with the G280 results representing a more observationally efficient and
precise spectrum. HAT-P-41b’s transmission spectrum is best fit with a model
with Teq=2091 K, high metallicity, and significant scattering and cloud
opacity. With these first of their kind observations, we demonstrate that
WFC3/UVIS G280 is a powerful new tool to obtain UV-optical spectra of
exoplanet atmospheres, adding to the UV legacy of Hubble and complementing
future observations with the James Webb Space Telescope.
Software: POET pipeline (Stevenson et al., 2012b; Cubillos et al., 2013),
IRAF (Tody, 1986, 1993), IDL Astronomy user’s library (Landsman, 1995), NumPy
(Oliphant, 2006–), SciPy (Virtanen et al., 2019), MatPlotLib (Caswell et al.,
2019), AstroPy (Astropy Collaboration et al., 2018), Photutils (Bradley et
al., 2019)
## 1 Introduction
The characterization of planetary atmospheres in the solar system and beyond
has long leveraged the ultra-violet (UV) to near-infrared (NIR) spectroscopic
capabilities of the Hubble Space Telescope (HST). Observations with HST have
been critical in the exploration of the chemical composition, climate, and
aerosol properties of exoplanet atmospheres (see Kreidberg et al., 2018, and
references therein). With the help of HST we now know that clouds and hazes
are likely present in all types of exoplanetary atmospheres (e.g. Marley et
al., 2013; Helling, 2019; Wakeford et al., 2019), but we currently lack
information related to their abundances, physical properties and extent
throughout the atmosphere. We also know that exoplanets exhibit extended upper
atmospheres with evidence for atmospheric escape (e.g. Ehrenreich et al.,
2014; Bourrier et al., 2018; Sing et al., 2019), but struggle to connect
physical processes in the lower and upper portions of exoplanet atmospheres.
The UV through optical (200–800 nm) spectra of planets hold rich information
about the chemistry and physics at work across a broad range of atmospheric
pressures. In the solar system, UV and near-UV spectroscopy has been critical
in identifying and measuring the abundances of a variety of hydrocarbon and
sulfur-bearing species, produced via photochemical mechanisms, as well as
oxygen and ozone and more. For exoplanets, UV to near-UV spectroscopy has been
especially useful for constraining aerosol properties and exploring
atmospheric chemistry in hot ($>$1000 K) atmospheres (e.g. Sing et al., 2016;
Evans et al., 2018). To date, only a handful of exoplanets have been probed in
the critical 200–400 nm wavelength range that crosses the optical to UV
boundary. Results from these studies have been mixed, limited by the
wavelength coverage and sensitivity of the workhorse instrument for such
studies, HST’s Space Telescope Imaging Spectrograph (STIS) G430L and E230M
gratings.
It is important to remember that none of HST’s instruments or modes were
specifically designed to support exoplanet observations. It has only been
through the development of new observational strategies, such as spatial
scanning (McCullough & MacKenty, 2012; McCullough et al., 2014), and data
reduction techniques that the potential for HST to probe exoplanet atmospheres
has been achieved. In general, slitless spectroscopic observing modes have
been preferred for high-precision time-series observations of exoplanets that
transit, pass in front of, their host stars because they typically offer more
throughput and temporal stability. The slitless spectroscopy capabilities of
HST's Wide Field Camera 3 (WFC3) have been heavily used by the exoplanet
community at infrared wavelengths (750–1800 nm) with the G102 and G141 grisms.
However, HST’s WFC3 UV/Visible (UVIS) channel also offers slitless
spectroscopy in the UV through visible (200–800 nm) wavelength range that has
yet to be leveraged for exoplanet observations. In fact, this mode has only
been employed in a handful of scientific investigations; it was first used as
part of the HST WFC3 early release science programs in cycle 16 (2006), but
none of the G280 work from those programs was published.
Here we detail for the first time the observations, spectral extraction, and
analysis processes taken to apply Hubble’s WFC3/UVIS G280 spectroscopic grism
to transiting exoplanet observations. We first introduce the challenges in
using the UVIS G280 grism in §2. In §3 we detail the observations and spectral
extraction procedures used. We then detail the broadband time-series analysis
using two systematic reduction techniques in §4. We use Spitzer transit
observations to refine system parameters and update the orbital ephemeris in
§4.1 and 4.2. We outline the spectroscopic analysis in §5 and discuss the
results in §6 including searching for evidence of atmospheric escape and
comparisons to STIS data. We then conclude with a summary of our results and
the potential of WFC3/UVIS G280 for future exoplanet investigations.
## 2 Introduction to the UVIS G280 grism
The WFC3 instrument on HST is fitted with two channels, UVIS and IR. Across
these two channels are three slitless spectroscopic grisms: G280 in UVIS and
G102 and G141 in the IR channel. The IR grisms have been extensively applied
to exoplanet atmospheric studies with increasing success at the advent of
spatial scanning (McCullough & MacKenty, 2012), where HST slews in the cross-
dispersion direction to spread the target light over a column of pixels (e.g.,
Deming et al. 2013; Kreidberg et al. 2014; Wakeford et al. 2013, 2016; de Wit
et al. 2016). However, the UVIS G280 grism has not had such usage despite
large throughput in the near-UV (NUV) and wide coverage from 200–800 nm. More
commonly, studies that cover 300–900 nm are conducted with multiple
observations using HST’s STIS G430L and G750L low resolution gratings from
300–550 nm and 500–900 nm respectively (e.g., Nikolov et al. 2014; Sing et al.
2016; Lothringer et al. 2018) despite their comparatively low throughput (Fig.
1).
The UVIS grism, however, comes with several quirks that make it difficult to
observe with and challenging to analyze. A number of these challenges will
also affect observations with James Webb Space Telescope’s (JWST)
spectroscopic instrument modes. Therefore, WFC3/UVIS G280 is a current working
example of the challenges that will be faced with JWST. Here we detail each of
the challenges associated with WFC3’s UVIS grism and also the advantages it
has over other instrument modes in the NUV to optical wavelengths.
### 2.1 Challenges
We detail some of the challenges encountered with this dataset and those
expected in the general use of this instrument mode for exoplanet time-series
characterisation.
\- Curved spectral trace: The trace of each spectral order of the G280 grism is
strongly curved at shorter wavelengths. The trace is best fit with a 6th order
polynomial function, as detailed by Pirzkal et al. (2017) and in Section 3. This
curvature causes it to be offset in the cross-dispersion direction from the
0th order position, meaning subarray sizes need to be carefully chosen. Unlike
the IR grisms, the spectra should not be spatially scanned as this would
result in overlapping wavelength regions along the scan direction.
The curved spectral trace also introduces a non-linear wavelength solution,
meaning each pixel has a slightly different wavelength range than the
surrounding pixels in that column. The wavelength solution is therefore
extracted relative to the fitted trace position on the detector with a 6th
order polynomial.
\- Multiple overlapping orders: Additional spectral orders, both positive and
negative, overlap with the first order spectra at wavelengths greater than 550
nm. In many cases these additional orders will be much dimmer than the first
order spectrum and not impact the observations. However, for stars bright in
the NUV, such as O, B, and A stars, the additional spectral orders may impact the
spectral extraction.
In the presented case, the second order spectrum is $\approx$ 65$\times$
dimmer than the primary spectrum in both positive and negative orders. This
would negligibly contribute to the measured transit depths, causing the
measured planetary radius ($R_{m}$) to be $\approx$ 99.24% of the true
planetary radius ($R$) following,
$\frac{R_{m}}{R}=\sqrt{\frac{1}{1+\frac{1}{65}}}=0.9924.$ (1)
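More generally, for contaminating flux that is a fraction $f$ of the first order's, the measured-to-true radius ratio is $R_{m}/R=\sqrt{1/(1+f)}$; Equation 1 is the case $f=1/65$.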
\- Geometric distortion: Using the grism filters causes the spectra to be
offset spatially in the detector relative to their direct image X and Y
coordinates. For the UVIS array the offset varies as a function of the
position due to geometric distortion (Rothberg et al., 2011). The relationship
between the coordinates in x and y pixel position also needs to be taken into
account when planning the observations in X and Y arcsecond coordinates (see
the WFC3 Data Handbook, Appendix C.2, for conversion functions).
\- Orientation constraints: The spectral trace of the positive and negative
orders extend across over 500 pixels each depending on the brightness of the
target. In a crowded field or where a target is part of a visual binary
system, tight orient constraints need to be placed on the observations to
prevent contamination from nearby stars. This is often mitigated in WFC3/IR
grism observations using spatial scans where the spectra can be extracted by
differencing the individual non-destructive reads within the final science
frame. However, as WFC3/UVIS grism observations can only be conducted in stare
mode, up-the-ramp sampling cannot be used to recover overlapping spectra.
\- Cosmic rays: The large wavelength coverage that extends significantly into
the blue wavelengths increases the number of detected cosmic rays compared to
the IR detectors.
\- JWST challenges: For JWST a number of the instrument modes that will be
utilized for exoplanet timeseries data exhibit curved spectral traces,
overlapping spectral orders, and contamination constraints from additional
sources on the sky. NIRISS SOSS mode is most similar to the G280 grism with
both strongly curved spectral traces and overlapping spectral orders. It is
also expected that NIRSpec Bright Object Time Series observations will also
have a slightly curved trace. For all slitless modes on JWST used for
exoplanet time series observations contamination overlap will need to be
carefully considered and orientation constrained carefully sampled.
### 2.2 Advantages
While we have detailed many challenges, there are also significant advantages
to this instrument over other modes in the NUV and optical. We detail these
here.
\- Wide wavelength coverage: observations are conducted over the whole
wavelength range 200–800 nm in a single exposure. Low resolution spectra
across this wide wavelength range can address the two main exoplanet science
points revealed by HST observations: cloud opacities and atmospheric escape.
The G280 grism can measure both the lower atmosphere sensitive to aerosol
scattering, while large atmospheric escape signatures can be detectable in
narrow bands around strong Fe and Mg signatures at $<$300 nm.
Figure 1: Throughput curves for HST instruments and modes commonly used for
exoplanet time-series observations. Solid lines are the WFC3-UVIS G280 grism
+1 and -1 orders; the dark dot-dashed line is the combined transmission of
both orders. Dashed lines are STIS G430L and G750L gratings. -..- line shows
the COS G230L. Dotted lines are WFC3-IR G102 and G141 grisms.
\- Multiple spectral orders: both the positive and negative orders are
measured in each exposure. The UVIS CCD is split into two chips (1 & 2), with
each chip of 2051$\times$2048 pixels easily encompassing both spectral orders
which each cover $\approx$500 pixels in the dispersion direction. In the
presented observations we use chip 2 as it has been shown to be more stable
than chip 1. We therefore also recommend the use of chip 2 for future studies.
\- Throughput: WFC3/UVIS has the highest throughput amongst all HST
instruments in the wavelength range from its lower cut off at 200 nm to the
upper end at $\sim$800 nm. The throughput of UVIS G280 in the NUV is on
average 25 times that of STIS E230M between 200–400 nm, and roughly four times
that of STIS G430L at 350 nm. UVIS G280 also has the advantage of being able
to measure both positive and negative spectral orders that have a combined
throughput greater than STIS G430L across the whole wavelength range (see Fig.
1).
\- New calibration program: Prior to these observations there have been three
instrument science reports (Kuntschner et al., 2009; Rothberg et al., 2011;
Pirzkal et al., 2017) and no scientific papers using this grism. As demand for
time-series observations with this grism have increased there are now new
calibration programs being implemented to better characterize the detector and
improve the trace fitting for all spectral orders. Calibration of the
instrument and mode are important to understand the structure of the CCD, on-
sky changes in the PSF, and wavelength dispersion across the detector -
especially under the requirements of long term stability for exoplanet
investigations that span multiple HST orbits.
Overall the WFC3/UVIS G280 grism has many challenges that are difficult but
not impossible to overcome, and a significant advantage over other instrument
modes in this wavelength range. In the following sections we detail the
observations taken and the measurements made with the tools to overcome these
challenges.
## 3 UVIS G280 Observations
We used HST’s WFC3/UVIS channel with the G280 spectroscopic slitless grism to
observe the spectrum of the transiting exoplanet host star HAT-P-41 from
200–800 nm (GO-15288, PIs D.K. Sing & N.K. Lewis). Unlike the WFC3/IR G102 and
G141 grisms, the UVIS G280 grism produces a spectrum that is strongly curved,
with overlapping spectral orders at longer wavelengths, and a dimmer
($\sim$60%) -1 order spectrum compared to the +1 order. We designed an
observation strategy that would cover both +1 and -1 orders simultaneously to
examine this difference in flux and test the usability of the G280 grism for
time-series exoplanet studies.
We observed the target HAT-P-41, in the constellation of Altair, over two
visits, each consisting of five HST orbits, to measure the transit of the hot
Jupiter exoplanet HAT-P-41b. The two visits were separated by a single
planetary orbital period (visit 1: 2018 August 1st; visit 2: 2018 August 4th,
period = 2.694047 days), significantly reducing the potential impact of any
stellar variations on the transits of this quiet F6 star.
Each visit consists of 54 exposures, over 5 HST orbits, with exposure times of
190 seconds. We used a 2100 $\times$ 800 sub-array, with a POS TARG Y offset
of -50” to center the spectrum on chip 2. The sub-array is cut out of the full
2051 $\times$ 4096 pixel CCD which contains chip 1 and 2, where chip 1 and
chip 2 are separated by 1.2”. Our target star, HAT-P-41, has a nearby
companion separated by 3.615”, equivalent to $\approx$ 91.5 pixels on the
detector. The nearby companion resulted in a number of tight orientation
constraints on the observation. However, our sub-array is large enough to
capture both full +1 and -1 spectral orders around the 0th order trace. The
maximum flux obtained in a single pixel in the spectral trace is $\approx$
36,000 e-, keeping it well within the saturation and non-linearity limit of
the detector, which is approximately 67,000–72,000 e- (Gilliland et al.,
2010).
### 3.1 Spectral Extraction
The spectral traces for both visits and both +1 and -1 orders were extracted
using calibration files provided by the WFC3 team. A complete extraction and
reduction of the provided data requires the following steps: a) cosmic ray
removal, b) background subtraction, c) aperture determination, and d) trace
fitting. We then use the WFC3 UVIS calibration files to compute the wavelength
solution for each spectral order. We also performed spectral extraction with
IRAF and custom IDL routines as a second check on the extraction procedure as
this is the first published analysis of G280 grism data for time-series
spectroscopy (see 3.1.1 for details).
##### Cosmic ray removal
We use the “flt” files from the Calwfc3 pipeline to analyze each exposure.
Cosmic rays were then rejected by examining the time series for each pixel,
and flagging and replacing 3.5-$\sigma$ outliers in an iterative process. We
also applied a further spatial cosmic ray cleaning step by rejecting cosmic
rays through Laplacian Edge Detection (van Dokkum, 2001). We do a final cosmic
ray cleaning on the extracted 1D stellar spectra by comparing them to the
median combined spectra and replacing outliers which deviated by more than 3.5
$\sigma$. Where cosmic rays are flagged temporally, we replace the pixel value
with the median of that pixel's time series; where cosmic rays are spatially
flagged, the median of the surrounding pixels in the same frame is used.
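As a concrete illustration, a minimal sketch of the temporal sigma-clipping step in Python; the function name, threshold handling, and iteration cap are our own choices, not the published pipeline:

```python
import numpy as np

def clean_cosmic_rays(cube, nsigma=3.5, max_iter=5):
    """Iteratively flag and replace temporal outliers in an image cube.

    cube : ndarray of shape (n_exposures, ny, nx).
    Flagged pixels are replaced by the median of that pixel's time
    series, mirroring the temporal cleaning described in the text.
    """
    cleaned = cube.copy()
    for _ in range(max_iter):
        med = np.median(cleaned, axis=0)
        std = np.std(cleaned, axis=0)
        mask = np.abs(cleaned - med) > nsigma * std
        if not mask.any():
            break
        # Replace each flagged value with the temporal median of its pixel
        idx = np.where(mask)
        cleaned[idx] = med[idx[1], idx[2]]
    return cleaned
```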
Figure 2: Modal background count for each exposure in visit 1 (solid) and
visit 2 (dashed) across the whole sub-array. The dotted vertical lines
indicate the start of a new HST orbit. The background is higher at the start
of each HST orbit with a bi-modal distribution, perhaps due to stray
Earthshine or orbital effects on the telescope.
##### Background subtraction
We use a local background estimation similar to WISE (see Section 4.4 c of
Cutri et al. 2012, the WISE All-Sky Release Explanatory Supplement), by
computing the pixel mode for each image and subtracting that from each
pixel. The mode, or most common binned histogram value, tends to be robust
against the bright tail of the distribution that is caused by other stars and
cosmic-ray (or hot) pixels in the exposure. We compared this to the mean and
median sigma clipped pixel values and found good agreement, giving weight to
the mode being resistant to outliers. In each visit the first exposure of each
orbit has much higher background than the other exposures with a slightly bi-
modal distribution around the peak of the histogram (see Fig. 2), perhaps due
to stray Earthshine or orbital effects on the telescope. We remove the first
exposure of each orbit in both visits in the lightcurve analysis.
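A minimal sketch of the modal background estimate, assuming a simple binned histogram whose bin width and percentile clipping are our own choices:

```python
import numpy as np

def modal_background(image, bin_width=0.5):
    """Estimate the sky level as the mode of a binned pixel histogram.

    The mode (peak of the histogram) is robust against the bright tail
    from other stars and cosmic-ray hits, as discussed in the text.
    """
    finite = image[np.isfinite(image)]
    lo, hi = np.percentile(finite, [0.5, 99.5])  # clip extreme outliers
    counts, edges = np.histogram(finite, bins=np.arange(lo, hi + bin_width, bin_width))
    peak = np.argmax(counts)
    return 0.5 * (edges[peak] + edges[peak + 1])

# Per-exposure subtraction: image -= modal_background(image)
```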
Figure 3 shows the visual difference between the original “flt” images and a
cleaned-background-subtracted exposure. We save the cleaned and background
subtracted images as fits files to be used for the trace fitting routines.
Figure 3: HST WFC3 UVIS/G280 spectral image. Top: “flt” file processed and
available on the MAST archive. Bottom: cleaned file with cosmic rays and hot
pixels removed, and flat fielding applied. In this comparison you can clearly
see the difference between the original and cleaned data, demonstrating the
requirement for accurate and precise treatment of detector artifacts and
cosmic ray hits.
##### Trace fitting
To extract the target spectrum using the provided calibration files for UVIS
G280 (the G280 UVIS grism files), the sub-array image needs to be re-embedded
into the full frame (Rothberg et al., 2011). This can be done using the
embedsub routine in wfc3tools (https://github.com/spacetelescope/wfc3tools). This
routine also requires the “spt” files be downloaded from the MAST database and
contained within the same folder as the cleaned fits files generated from the
previous steps.
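For reference, the re-embedding might look like the sketch below; the exact call signature and output naming should be checked against the installed wfc3tools version, and the directory layout here is hypothetical:

```python
# Sketch of re-embedding sub-array exposures into full frames with
# wfc3tools; the matching *_spt.fits files must sit in the same folder.
from glob import glob
from wfc3tools import embedsub

flt_files = sorted(glob('cleaned/*_flt.fits'))  # hypothetical directory
embedsub(flt_files)  # writes full-frame versions alongside the inputs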
Figure 4: HST WFC3 UVIS/G280 spectral image. Top: visit 1 +1 spectral order
(left) and -1 spectral order (right). Bottom: visit 2 +1 spectral order (left)
and -1 spectral order (right). All images are background subtracted and cosmic
rays have been removed. The dotted black line shows the calculated trace
center, with the extent of the $\pm$12 pixel aperture shown in orange dashed
lines. At lower flux values the spectral trace does not fit quite as well but
the full flux is captured inside the selected aperture. Color shows flux,
truncated at 25 e$^-$ s$^{-1}$.
Direct images of the target were taken with the F300X filter at the start of
each visit to provide an accurate location of our target on the detector.
Visits 1 and 2 were positioned on the detector within 1 pixel of each other
with x, y centroid positions of [2040.8756, 1063.8825] and [2041.0399,
1062.9073] respectively.
Using the description of the spectral trace of the G280 UVIS grism from
Pirzkal et al. (2017), we computed the expected location of the trace in each
exposure of our G280 datasets. In summary, Pirzkal et al. (2017) compute the
trace location as a function of the x-pixel on the detector and a high order
2D polynomial is fit across the trace. The best fit trace is defined by a 6th
order polynomial function with a linear 2D field dependence. The reference
column for the polynomial fit is chosen to be close to the inflection point of
the trace to ensure the best fit to both the highly curved spectrum at short
wavelengths and the near-linear trace at longer wavelengths. The polynomial
function reproduces the position of both the +1 and -1 spectral orders to
within a fraction of a pixel from 200–800 nm. Figure 4 shows the central trace
position for both visits and computed for the +1 and -1 spectral orders. The
trace fits are currently best calibrated to the +1 order, however, the authors
note that there is a new WFC3/UVIS G280 calibration program that will fully
characterize the -1 and additional spectral orders. At longer wavelengths,
toward the tail end of the spectral trace, fringing effects come into play
that divert the spectra from the fit polynomial trace (see Fig. 5).
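To make the trace model concrete, a sketch of evaluating such a polynomial is shown below; the coefficients and their field dependence come from the calibration files of Pirzkal et al. (2017) and are not reproduced here, so the function signature is our own:

```python
import numpy as np

def trace_y(x, x_ref, y_ref, coeffs):
    """Evaluate a 6th-order polynomial trace about a reference column.

    x      : detector x-pixels at which to evaluate the trace
    x_ref  : reference column near the trace inflection point
    y_ref  : y position of the trace at x_ref (from the direct image)
    coeffs : [c1, ..., c6]; in the Pirzkal et al. (2017) calibration
             these carry a linear dependence on the source's field
             position, which is folded into the values supplied here.
    """
    dx = np.asarray(x, dtype=float) - x_ref
    dy = sum(c * dx**(k + 1) for k, c in enumerate(coeffs))
    return y_ref + dy
```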
A simple extraction of the spectrum contained in each dataset was created by
adding up the observed count rates in pixels above and below the computed
location of the trace. We tested apertures ranging from $\pm$5 pixels around
the central trace to $\pm$50 pixels. To determine the best aperture we
minimized the standard deviation of the residuals for out-of-transit counts.
We find that the optimal aperture is $\pm$12 pixels (see Fig. 4), to account
for the slightly extended wings of the trace (Kuntschner et al., 2009). Both
the +1 and -1 spectral orders were processed in this manner.
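A minimal sketch of this fixed-aperture extraction, assuming the trace centers have already been computed for every detector column:

```python
import numpy as np

def extract_spectrum(image, trace_y, half_width=12):
    """Sum counts in a fixed aperture around the trace, column by column.

    image   : 2-D background-subtracted exposure of shape (ny, nx)
    trace_y : trace center in y for each detector column, length nx
    Returns the 1-D extracted counts per column.
    """
    ny, nx = image.shape
    spectrum = np.zeros(nx)
    for col in range(nx):
        yc = int(round(trace_y[col]))
        lo = max(yc - half_width, 0)
        hi = min(yc + half_width + 1, ny)
        spectrum[col] = image[lo:hi, col].sum()
    return spectrum
```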
The overlapping spectral orders are expected to impact the spectrum in the
long wavelengths approximately beyond 400 nm. However, these observations were
not ideal to show the impact of overlapping spectral orders, as the flux of
the star at the shorter wavelengths is too low, $\approx$65$\times$ dimmer
than the first order trace. We discuss potential corrections to this in more
detail in § 5.
##### Wavelength solution
The wavelength solution is calculated from the trace position using the
equation detailed in Pirzkal et al. (2017) which is calibrated from 190 to 800
nm. The extracted wavelength solution is good to $\pm$0.7 nm, which is roughly
half of a UVIS resolution element. We measure the mean spectral dispersion in
the first order which varies from $\sim$1.1–1.6 nm per pixel over the full
spectral range 200–800 nm.
We plot the stellar spectra for both visits and first order spectra in Fig. 5,
showing the 16-84 percentile range of each spectrum with remarkable agreement
between visits, demonstrating the stability of the instrument. Beyond 800 nm
the target spectrum shows extreme fringing effects and is not calibrated, thus
we remove it from this analysis. It is also clear to see that the -1 order is
significantly dimmer across the whole wavelength range with a large impact on
the short wavelengths, short of 250 nm, where the flux drops to near-zero.
Figure 5: The 16–84 percentile range of each visit and orders spectral trace.
The +1 orders and -1 orders overlap closely, making it difficult to tell the
two visits apart and demonstrating the stability of the instrument and the
star. The -1 orders are $\sim$50% dimmer than the +1 orders, with little to no
flux short of 250 nm. Above 800 nm fringing patterns can clearly be seen in
the stellar spectra and we do not use these wavelengths for the lightcurve
analysis.
#### 3.1.1 IRAF APALL Spectral Extraction
We also performed spectral extraction with IRAF and custom IDL routines. The
images were first background subtracted and cosmic rays were removed in the
same way as detailed above. We then used IRAF’s APALL routine to extract the
spectra for each image in the time series, finding an 8th order legendre
polynomial was optimal for the spectral trace extraction as measured by the
trace root mean square residuals. We note that with IRAF, the fixed aperture
center varies smoothly to follow changes in the position of the spectrum
across the dispersion axis and partial pixels are used at the ends. We
extracted the spectra with a wide range of aperture sizes, finding a 24 pixel
aperture was optimal. Similar to the UVIS calibration pipeline routines, the
extracted spectra still exhibited a few cosmic rays not cleaned in previous
processes, so we then also performed the 1D stellar spectra cosmic ray removal
step. Using IRAF APALL we were unable to replicate the wavelength solution
calculation and therefore used the one calculated following Pirzkal et al.
(2017) that required the trace fitting following the UVIS calibration
pipeline.
Both spectral extraction techniques produce near identical stellar spectra and
transmission spectra. However, in the following sections we adopt and present
the analysis based on the spectra extracted using the UVIS calibration
pipeline as it is a widely accessible, publicly available extraction method that
does not rely on proprietary custom routines, and has a fully consistent
wavelength solution.
## 4 Broadband white-light analysis
Prior to measuring the transmission spectrum of HAT-P-41b, we first analyze
the broadband white lightcurve from 200–800 nm. In this section we detail the
analysis of the broadband whitelight transit depth measured in the UVIS G280
transits for each visit and spectral order based on two different systematic
treatment methods - instrument systematic marginalization (Wakeford et al.,
2016) and jitter decorrelation (Sing et al., 2019).
##### Instrument systematic marginalization
uses a pseudo-stochastic grid of corrective systematic models to measure the
desired lightcurve parameters, namely the transit depth, via an evidence-based
weight assigned by the data to each potential systematic model. We run a grid
of 50 systematic models in the form of an extended polynomial:
$S(\mathbf{x})=t_{1}\phi_{t}\times\sum^{n}_{i=1}p_{i}\phi^{i}_{HST}\times\sum^{n}_{j=1}l_{j}\delta^{j}_{\lambda}+1$
(2)
where $\phi_{t}$ is the planetary phase representing a linear slope over the
whole visit, $\phi_{HST}$ is the HST orbital phase accounting for “HST thermal
breathing” effects, and $\delta_{\lambda}$ is the positional shift in the
wavelength direction on the detector over the visit. Each of these parameters
have scaling factors with the linear slope defined by $t_{1}$, and “HST
breathing” and positional shifts fit up to a 4th order polynomial function
defined by $p_{1-n}$ and $l_{1-n}$, respectively. Each of the scaling
parameters are then either fit as free parameters to activate the systematic
model or fixed to zero. The whole grid of 50 systematic models used in this
analysis can be found in Table 2 of Wakeford et al. (2016); note the table is
0-indexed.
We approximate the evidence (marginal likelihood) of each systematic model fit
to the data using the Akaike Information Criterion (AIC). We then calculate
the evidence-based weight ($W_{q}$) across all 50 systematic models and use
the information from all models to marginalize over the desired parameter
($\alpha_{q}$).
$\alpha_{m}=\sum^{N_{q}}_{q=0}(W_{q}\times\alpha_{q})$ (3)
This is Equation (15) of Wakeford et al. (2016), where $N_{q}$ is the number of models
fit, and $\alpha_{m}$ is the resulting marginalized parameter. The uncertainty
is then calculated in a similar way based on the weights (see Equation (16) of
Wakeford et al. 2016).
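A compact sketch of this marginalization step; the AIC-to-weight conversion and the variance expression below follow our reading of Wakeford et al. (2016) and should be checked against their Equations (15) and (16):

```python
import numpy as np

def marginalize(aic, alpha, sigma):
    """Combine per-model fits via AIC-based evidence weights.

    aic, alpha, sigma : arrays over the N_q systematic models, holding
    each model's AIC, fitted parameter (e.g. transit depth), and its
    1-sigma uncertainty.
    """
    aic = np.asarray(aic, dtype=float)
    alpha = np.asarray(alpha, dtype=float)
    sigma = np.asarray(sigma, dtype=float)
    w = np.exp(-0.5 * (aic - aic.min()))   # evidence approximated from AIC
    w /= w.sum()                           # normalized weights W_q
    alpha_m = np.sum(w * alpha)            # Equation (3)
    # Law-of-total-variance form for the marginalized uncertainty
    var_m = np.sum(w * (sigma**2 + (alpha - alpha_m)**2))
    return alpha_m, np.sqrt(var_m), w
```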
Figure 6: Spectral position changes over the course of each visit, measured by
cross-correlating to a template spectrum (points, Cc), and by fitting
background sources on the full exposure image (shaded regions, stars). Each
are shown relative to the final exposure for comparison. The spectral shifts
are accounted for in the systematic treatment of each lightcurve.
Figure 7: Mean flux in Spitzer’s 3.6 $\mu$m channel. Plotting on a logarithmic scale
reveals HAT-P-41’s faint, nearby companion at pixel position (12,15). We limit
our photometry aperture size to 2.25 pixels to minimize contamination from the
companion. Bad pixels are masked in white.
##### Jitter decorrelation
uses HST’s Pointing Control System to detrend photometric time-series data.
Based on the results of (Sing et al., 2019), we include optical state vectors
traditionally used for STIS (Sing et al., 2011) as well as several jitter
vectors. The full systematics model, $S(\mathbf{x})$, used to detrend the
lightcurve is written as,
$S(\mathbf{x})=p_{1}\phi_{t}+\sum^{4}_{i=1}p_{i+1}\phi^{i}_{HST}+p_{6}\delta_{\lambda}+p_{7}X_{psf}+p_{8}Y_{psf}+p_{9}RA+p_{10}DEC+p_{11}V2_{roll}+p_{12}V3_{roll}+1,$ (4)
where $\phi_{t}$ is a linear baseline time trend, $\phi_{HST}$ is the 96
minute HST orbital phase, $X_{psf}$ and $Y_{psf}$ are the detector positions
of the PSF as measured by the spectral trace, $\delta_{\lambda}$ is the
wavelength shift of each spectrum as measured by cross-correlation, $V2_{roll}$
and $V3_{roll}$ are the roll of the telescope along the V2 and V3 axes, $RA$ and
$DEC$ are the right ascension and declination of the aperture reference, and
$p_{1..12}$ are the fit systematic parameter coefficients. The first portion
of this function was found to be the best functional form of the additional
systematic features and corresponds to one of the models used in the
marginalization grid. This function is then fit for all transit lightcurves in
this form and is not marginalized over to determine the optimal functional
form in each lightcurve. The full jitter decorrelation set results in up to
twelve total terms used to describe the instrument systematics of the dataset
in question. However, in practice not all of these parameters are needed. For
each visit and each of the two orders, we used the AIC and measured red noise,
$\sigma_{\rm r}$, to determine the optimal optical state vectors to include
from the full set without over-fitting the data and minimizing the red noise.
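As an illustration, the jitter model of Equation (4) can be cast as a linear least-squares problem; the vector names below are our own, and for simplicity this sketch fits the additive constant rather than fixing it to 1 as in Equation (4):

```python
import numpy as np

def jitter_systematics_fit(flux, vectors):
    """Linear least-squares fit of the jitter-decorrelation model.

    flux    : lightcurve (transit-divided) as a 1-D array of length N
    vectors : dict of optical state vectors as arrays of length N, e.g.
              {'phi_t', 'phi_hst', 'dlam', 'xpsf', 'ypsf', 'ra', 'dec',
               'v2roll', 'v3roll'} (key names are ours).
    Builds a design matrix with a linear time trend, a 4th-order
    polynomial in HST orbital phase, and linear terms in the rest.
    """
    phi_hst = vectors['phi_hst']
    cols = [vectors['phi_t']]
    cols += [phi_hst**i for i in range(1, 5)]
    cols += [vectors[k] for k in ('dlam', 'xpsf', 'ypsf', 'ra', 'dec',
                                  'v2roll', 'v3roll')]
    A = np.column_stack(cols + [np.ones_like(flux)])  # fitted constant
    p, *_ = np.linalg.lstsq(A, flux, rcond=None)
    return p, A @ p   # coefficients and the systematics model S(x)
```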
Both systematic marginalization and jitter decorrelation require a measurement
of the spectral positional changes on the detector across the duration of the
observation ($\delta_{\lambda}$). To calculate the shift, we cross-correlate
the 1D stellar spectra to a template spectrum and measure the displacement
across the whole wavelength range. To demonstrate that this accurately
represents the physical shift on the detector, we measured the position for
three background sources distributed across the exposure image. We selected
the most Gaussian-like sources from the full image and used a 2D-Gaussian fit
to their 0th order spectrum in each exposure of each visit. In this case we
cannot use the 0th order of the target or its stellar companion to measure
this shift as they are both saturated on the detector. Figure 6 shows
$\delta_{\lambda}$ for visits 1 and 2 measured using the cross-correlation
method (Cc) and the range of positional values measured from the three
background sources (stars). The form of the positional shifts are very similar
with the vertical breaks showing where the telescope is reset after each HST
orbit. The magnitude of the positional shifts is on the sub-pixel scale and is
easily accounted for with either of the systematic treatments detailed. Using
the 2D-Gaussian fit to the background sources, we find that positional shifts
in the y-direction are negligible and do not improve the fit to the data.
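A minimal sketch of the cross-correlation measurement, using a parabolic refinement of the correlation peak to reach sub-pixel precision; the refinement scheme is our choice and the published analysis may differ:

```python
import numpy as np

def spectral_shift(spec, template):
    """Sub-pixel shift of a 1-D spectrum relative to a template, in pixels.

    Cross-correlates the mean-subtracted spectra and refines the peak
    with a parabola through the three points around the maximum.
    """
    s = np.asarray(spec, float) - np.mean(spec)
    t = np.asarray(template, float) - np.mean(template)
    cc = np.correlate(s, t, mode='full')
    k = int(np.argmax(cc))
    peak = float(k)
    if 0 < k < len(cc) - 1:
        denom = cc[k - 1] - 2.0 * cc[k] + cc[k + 1]
        if denom != 0:
            peak = k + 0.5 * (cc[k - 1] - cc[k + 1]) / denom
    return peak - (len(t) - 1)   # zero lag sits at index len(t) - 1
```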
Due to the phase coverage of HST observations, resulting from Earth
occultation events, we are unable to accurately fit for the inclination, a/R∗,
and orbital period of the system. Unfortunately, HAT-P-41b was not observed by
the Transiting Exoplanet Survey Satellite (TESS) which would have allowed us
to easily constrain the system parameters. To fit for these vital parameters
we instead use two transit observations from the Spitzer Space Telescope IRAC
instrument to obtain accurate system parameters for the inclination and a/R∗
of HAT-P-41b, detailed in §4.1. In §4.2 we present the measured center of
transit times for these and previous transit observations of HAT-P-41b to
determine the period of the planet, and in §4.3 we present the results of the
UVIS G280 broadband lightcurves for the two visits and for each spectroscopic
order using both systematic treatments.
Figure 8: Transit light curves of HAT-P-41b using Spitzer’s 3.6 $\mu$m (left)
and 4.5 $\mu$m (right) channels. We bin the data for plotting purposes only.
The 3.6 $\mu$m residuals demonstrate a small amount of correlated noise at
timescales shorter than the transit duration.
### 4.1 Spitzer Data Analysis
Spitzer program 13044 (PI: Deming) acquired transit observations of HAT-P-41b
at 3.6 and 4.5 $\mu$m on 2017 January 18 and 2017 February 3, respectively.
The IRAC instrument (Fazio et al., 2004) acquired 32$\times$32 pixel subarray
frames at 2 second intervals in batches of 64. Each observation acquired a
total of 21,632 frames over a span of $\sim$12 hours.
Using the POET pipeline (Stevenson et al., 2012a; Cubillos et al., 2013), we
apply a double-iteration, $4\sigma$ outlier rejection routine, 2D Gaussian
centroiding, and $5\times$ interpolated aperture photometry over a range of
aperture sizes. We convert times to BJDTDB using the JPL Horizons interface.
We find that the best aperture size (as defined by the lowest standard
deviation of the normalized residuals) is 3.0 pixels; however, at this size
there is noticeable contamination from the nearby binary companion. This is
evidenced by the correlation between aperture size and transit depth
(significant at $3.3\sigma$). HAT-P-41’s stellar companion is located $\sim 3$
pixels away, in the wings of the primary star’s point response function. This
is shown in Figure 7, where we depict the mean flux at 3.6 $\mu$m on a
logarithmic scale. We find that the impact of the stellar companion on the
measured transit depth is minimal ($<1\sigma$) for apertures $\leq 2.25$
pixels and, thus, adopt this value for our final analyses. We note that the
transit time, inclination, and semi-major axis parameters do not vary with our
choice of aperture size.
To derive our best-fit values (see Tables 1 and 2), we fit both Spitzer
channels simultaneously using the transit model described by Mandel & Agol
(2002), a linear trend in time, and a BLISS map (Stevenson et al., 2012a) to
account for intrapixel sensitivity variations. We estimate uncertainties using
the Differential-Evolution Markov Chain Monte Carlo technique (ter Braak &
Vrugt, 2008) and test for convergence using the Gelman-Rubin statistic (Gelman
& Rubin, 1992) by ensuring that the potential scale reduction factor is within
1% of unity. Figure 8 shows Spitzer’s normalized light curves and residuals.
The best-fit 3.6 and 4.5 µm transit depths are $0.992\,{\pm}\,0.008$ % and
$1.028\,{\pm}\,0.013$ %, respectively.
Figure 9: Observed minus calculated (O-C) diagram of measured HAT-P-41b transit times. The dashed line shows the 1-sigma uncertainty.
Table 1: Star and planet parameters used in the lightcurve fitting process for this analysis.
Parameter | Value | Reference
---|---|---
Star | |
V (mag) | 11.087 | Hartman et al. (2012)
M∗ (M⊙) | 1.418 | Hartman et al. (2012)
R∗ (R⊙) | 1.786 | Morrell & Naylor (2019)
Teff (K) | 6340 | Morrell & Naylor (2019)
[Fe/H] (dex) | 0.21 | Hartman et al. (2012)
log(g) | 4.14 | Hartman et al. (2012)
Planet | |
Mp (MJ) | 0.795 | Bonomo et al. (2017)
Rp (RJ) | 1.685 | Hartman et al. (2012)
Period (days) | 2.69404861 $\pm$0.00000092 | This work
T0 (days) | 2456600.29325$\pm$0.00050 | This work
inclination (∘) | 89.17 $\pm$ 0.62 | This work
a/R∗ | 5.55 $\pm$ 0.04 | This work
ecc | 0.0 | Bonomo et al. (2017)
### 4.2 Updated Orbital Ephemeris
We used previous and current data to calculate an up-to-date orbital period
for HAT-P-41b, including the ephemeris from the discovery (Hartman et al.,
2012), as well as HST and Spitzer transit data (see Table 2). The HST data
includes the WFC3/UVIS transits where the +1 and -1 orders were treated
independently (see §4.3), as well as WFC3/IR and STIS transits from the Hubble
PanCET program (GO-14767, PIs D.K. Sing & M. Lopez-Morales, Sheppard 2020 in
prep - private communication). We converted all of the available transit times
to BJDTDB using the tools from Eastman et al. (2010). These times were fit
with a linear function of the period $P$ and transit epoch $E$,
$T(E)=T_{0}+EP.$ (5)
The resulting ephemeris is given in Table 2, with the linear function giving a
reasonable fit to the data (see Fig. 9), with a $\chi^{2}$ value of 14.47 for
9 degrees of freedom (DOF).
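For illustration, the weighted linear fit of Equation (5) can be written in a few lines:

```python
import numpy as np

def fit_ephemeris(epochs, times, errs):
    """Weighted linear fit T(E) = T0 + E * P to measured transit times.

    epochs : integer transit epochs E
    times  : transit centers in BJD_TDB
    errs   : 1-sigma timing uncertainties
    Returns (T0, sigma_T0) and (P, sigma_P) from the covariance matrix.
    """
    e = np.asarray(epochs, float)
    t = np.asarray(times, float)
    w = 1.0 / np.asarray(errs, float) ** 2
    A = np.column_stack([np.ones_like(e), e])
    cov = np.linalg.inv(A.T @ (A * w[:, None]))
    t0, p = cov @ (A.T @ (w * t))
    return (t0, np.sqrt(cov[0, 0])), (p, np.sqrt(cov[1, 1]))
```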
Table 2: Center of transit times used in Fig. 9 to calculate the period of the planetary orbit, as well as the resulting best-fit orbital ephemeris. All times have been converted to BJDTDB.
Instrument | Mode | Epoch | Note
---|---|---|---
| | (BJDTDB) (days) |
| | 2454983.86247 $\pm$ 0.00107 | Hartman et al. (2012)
HST WFC3-IR | G141 | 2457677.912139 $\pm$ 0.0008 |
Spitzer IRAC | CH1 | 2457772.20477 $\pm$ 0.00021 |
Spitzer IRAC | CH2 | 2457788.36879 $\pm$ 0.00027 |
HST STIS | G430L | 2458001.197547 $\pm$ 0.001151 | visit 1
HST STIS | G430L | 2458246.357040 $\pm$ 0.000339 | visit 2
HST STIS | G750L | 2458281.379682 $\pm$ 0.000363 |
HST WFC3-UVIS | G280 | 2458332.566558 $\pm$ 0.000656 | Visit 1, +1 order
HST WFC3-UVIS | G280 | 2458332.564321 $\pm$ 0.001366 | Visit 1, -1 order
HST WFC3-UVIS | G280 | 2458335.260623 $\pm$ 0.000303 | Visit 2, +1 order
HST WFC3-UVIS | G280 | 2458335.259912 $\pm$ 0.000290 | Visit 2, -1 order
Period $P$ (days) | | $T_{0}$ (BJDTDB) (days) |
2.69404861$\pm$0.000000918 | | 2456600.293253 $\pm$ 0.000504 |
### 4.3 UVIS G280 Broadband Lightcurve Results
Figure 10: Top: broadband lightcurves. We show the raw extracted lightcurve
for the visit 1 +1 spectral order to demonstrate the stability of the 1st HST
orbit in the time series (light grey). The systematic corrected and normalized
white lightcurves for each visit and spectroscopic order (colored labeled
points) with the best fit transit model. Each point represents a single
exposure. Each lightcurve is offset for clarity. Middle: residuals from each
lightcurve fit using the systematic marginalization method. Bottom: residuals
for each lightcurve fit using the jitter decorrelation method. We measure the
combined transit depth of HAT-P-41b to be $(R_{p}/R_{*})^{2}$ = 1.0406 $\pm$
0.0029 % (SDNR = 221 ppm) and 1.0330 $\pm$ 0.0033 % (SDNR = 281 ppm), for each
method respectively.
Figure 11: The evidence-based weight for each systematic
model used in instrument systematic marginalization for each visit and order
for the broadband lightcurve analysis. The table of systematic models relating
to each number can be found in Wakeford et al. (2016).
We measure the broadband transit depth for UVIS G280 by summing the flux from
200–800 nm and correcting for systematics via systematic marginalization and
jitter decorrelation independently for both visits and both spectral orders.
We measure a combined transit depth of all four transit timeseries
measurements of $(R_{p}/R_{*})^{2}$ = 1.0406 $\pm$ 0.0029 % and 1.0330 $\pm$
0.0033 %, with an average standard deviation on the residuals of 221 ppm and
281 ppm, using the systematic marginalization and jitter decorrelation methods
respectively. There is a 1.7$\sigma$ difference between the two methods,
likely due to the small differences between the uncertainties on each exposure
for each analysis method that can be seen by comparing the bottom two panels
of Fig. 10. In each analysis we use the same extracted stellar spectra, the
same limb-darkening coefficients derived using the 3D stellar models presented
in Magic et al. (2015), and the same system parameters shown in Table 1.
We show the four transit lightcurves (2 visits + 2 orders) corrected in Fig.
10. The lightcurves shown have been corrected using the most favored model
applied in systematic marginalization, with the underlying models derived from
the same most-likely systematic model. For both data analysis methods,
systematic marginalization and jitter decorrelation, the transit model is fit
iteratively with the systematic model to measure the transit depth. We note
that the lightcurves in Fig. 10 only represent a portion of the information
obtained through marginalization as all the information from corrected data
using other weighted systematic models also go into the final marginalized
transit depth measurement (contribution weights can be seen in Fig. 11). Using
jitter decorrelation, we derive a single solution for the lightcurve
corrections and transit depth for each visit and spectral order. The
individual lightcurves from jitter decorrelation are indistinguishable by eye
compared to the systematic marginalization ones presented here. For a more
direct comparison we show the residuals of both systematic analyses at the
bottom of Fig. 10 with their related uncertainties, both achieving near photon
noise precision.
Figure 12: Intensity plot of the spectroscopic lightcurve residuals for each
wavelength bin using the systematic marginalization method. The color bar
shows the residuals amplitude for all intensity plots. For the -1 orders we do
not compute the transmission below 250 nm as the flux is too low to produce
convergent results in the systematic analysis.
While jitter decorrelation uses a fixed systematic model plus the jitter files
directly from the telescope as the main decorrelation factor, systematic
marginalization derives its information from evidence obtained from an array
of independent systematic models. Systematic marginalization therefore
accounts for the unknown factors affecting the lightcurves by weighting them
according to the reduced data rather than the telescope's fine guidance
sensors. Using systematic marginalization we find that each transit and
spectral order favors slightly different combinations of systematic
corrections. For visit 1 both orders predominantly favor models with a
quadratic correction to $\delta_{\lambda}$, while both orders of visit 2 favor
a 3rd order $\phi_{HST}$ correction with additional correction for
$\delta_{\lambda}$. Given the similarity in the $\delta_{\lambda}$ trend for
each visit and spectral order, as shown in Fig. 6, the more favored correction
of the HST breathing in visit 2 suggests that this movement on the detector is
likely connected with the thermal effects of the telescope and thus the
corrections themselves are interchangeable in this specific case where the
structure of the systematic is similar. For each lightcurve there is a
marginal preference to correct for a linear trend in time across the whole
visit; however, it is slightly more significant in visit 1. This linear trend
across the whole visit has been noted in several other HST timeseries
observations (e.g., Deming et al., 2013; Sing et al., 2015; Kreidberg et al.,
2014; Wakeford et al., 2016), and is thus likely related to the observatory as
a whole rather than a specific instrument. For each visit and order we show
the weighting assigned to each systematic model in the systematic
marginalization reduction for the broadband analysis in Fig.11, these model
weights are later applied to the spectroscopic lightcurves. The weights shown
correspond to the systematic models shown in Table 2 of Wakeford et al.
(2016). The structure of this grid is such that it first loops through
polynomials correcting for $\delta_{\lambda}$, followed by added corrections
for $\phi_{HST}$ with the second half of the grid (25-49) adding in
corrections for $\phi_{t}$. The overall structure of the computed weights
shows that the corrections for $\delta_{\lambda}$ are the dominant factor,
producing the repeating pattern in the weights every four models.
Figure 13: The individual and combined transmission spectra using both
systematic marginalization and jitter decorrelation. The two visits and +1/-1
spectral orders are shown as colored shaded regions representing the range of
the uncertainties for each spectrum. The final transmission spectrum combining
the results of all four are shown as joined black points with errorbars.
## 5 Spectroscopic Analysis
To measure the transit depth as a function of wavelength and produce an
atmospheric transmission spectrum for HAT-P-41b, we divide the stellar flux
into 10 nm bins ($\sim$5 detector resolution elements) from 200 – 800 nm. We
note that it is possible to sample the transmission spectrum at a higher
resolution ($>$2 resolution elements) in the most optimal portions of the
spectrum where the flux is high; however, we use uniform bins across the whole
wavelength range for consistency and accurate comparison.
We analyze each individual spectroscopic lightcurve in the same way, as
described in §4 for the broadband lightcurve, using both systematic
marginalization and jitter decorrelation methods. In jitter decorrelation, the
systematic correction model is unchanged between wavelength bins, thus
assuming all systematics are wavelength independent. Using systematic
marginalization, we account for any wavelength dependent systematics by
running the full grid of systematic models in each spectroscopic lightcurve.
We then use the evidence based weights for each of those models measured in
the broadband lightcurve (see Fig 11) to marginalize over the measured values
for each model in each lightcurve. By fixing the systematic model weighting to
those derived from the broad-band analysis, the uncertainty is then more
representative of the dominant wavelength independent systematics while
incorporating the scatter measured across wavelength dependent systematics
being fit to the data.
Figure 14: Direct comparison of the final combined transmission spectrum for
each systematic treatment: jitter decorrelation (dark squares) and systematic
marginalization (light circles). The horizontal dashed lines show the measured
broadband depth using each method.
Figure 15: The standard deviation between
the four individual transmission spectra in each wavelength bin for systematic
marginalization (pink) and jitter decorrelation (purple).
Each visit and +1/-1 spectral orders were analyzed separately using the
parameters detailed in Table 1 fixing the period, inclination, and a/R∗, and
using the center of transit times listed in Table 2. Using both jitter
decorrelation and systematic marginalization independently, we find consistent
results across both visits and spectral orders. Both methods reach photon
noise precision in each of the channels determined by calculating the white
and red noise associated with the fits (see Pont et al. 2006), and finding a
beta value of 1 consistent with no correlated noise. We show the residuals
from each of the spectroscopic lightcurves for the systematic marginalization
analysis in Fig. 12 as an intensity residual map to show any global structure
in the fit. From the residuals it is clear that the -1 order lightcurves are
noisier than the +1 orders. There is also an increase in the scatter at the
edges of the wavelength regime, with shorter wavelengths dominating the
overall noise range associated with the pure count rates measured from the
stellar spectrum in each of the bins (see Fig. 5).
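A sketch of the binning test used to compute the beta value, following the technique of Pont et al. (2006); the set of bin sizes and the averaging over them are our own choices:

```python
import numpy as np

def beta_factor(residuals, bin_sizes=range(2, 17)):
    """Red-noise beta via residual binning (Pont et al. 2006).

    For each bin size N, compare the measured scatter of the binned
    residuals with the white-noise expectation sigma_1 / sqrt(N);
    beta ~ 1 indicates no significant correlated noise.
    """
    r = np.asarray(residuals, float)
    sigma1 = np.std(r)
    betas = []
    for n in bin_sizes:
        m = len(r) // n                      # number of full bins
        if m < 2:
            continue
        binned = r[:m * n].reshape(m, n).mean(axis=1)
        expected = sigma1 / np.sqrt(n) * np.sqrt(m / (m - 1.0))
        betas.append(np.std(binned) / expected)
    return float(np.mean(betas))
```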
In Fig. 13, we present the transmission spectrum measured using both methods
for each visit and each +/- first order spectrum with the combined
transmission spectrum overlaid. We show a direct comparison between the
combined transmission spectrum measured using the two systematic treatments in
Fig. 14, with 90% of the points overlapping at the 1-$\sigma$ uncertainty
level. A direct comparison between the two methods is best demonstrated by
looking at the standard deviation and uncertainty in the transit depth
measured across the four transits analyzed (see Fig. 15). It is again evident
in the standard deviations and uncertainties that the lower counts measured in
the near-UV wavelengths ($<$300 nm) introduce larger scatter and uncertainty
to the transit depths. The standard deviation in the short wavelengths
indicates that the derived transit depths in each lightcurve are more similar
within the uncertainties using systematic marginalization compared to the
jitter decorrelation method. However, there is added scatter with the
marginalization method at longer wavelengths. Both methods have similar
uncertainty profiles indicating the ability to analyse these data with
multiple methods. The unique contribution of the UV points to the transmission
spectrum of an exoplanet atmosphere in combination with the optical from a
single observation with this low-resolution grism cannot be overstated.
## 6 Discussion
We present HST’s WFC3/UVIS G280 grism as a reliable observational mode to
measure the transmission spectrum of exoplanet atmospheres from 200–800 nm,
critically reaching down to near-UV and optical wavelengths not accessible to
JWST. This wavelength range is important to understand and measure cloud
opacity sources and their scattering profiles that are defined by the particle
sizes (e.g., Lecavelier des Etangs et al. 2008; Wakeford & Sing 2015; Wakeford
et al. 2017), escaping atmospheres (e.g., Ehrenreich et al. 2014; Sing et al.
2019), and absorption from Na.
To test this new mode, we measured the atmosphere of the hot Jupiter HAT-P-41b
over the course of two consecutive transits with the WFC3/UVIS G280 grism. We
obtained the positive and negative first order spectra of the target star in
each observation and extracted the stellar flux following the methods outlined
by the UVIS calibration pipelines (Kuntschner et al., 2009; Rothberg et al.,
2011; Pirzkal et al., 2017). We analysed the transit data for each visit and
spectral order using two well established techniques, instrument systematic
marginalization (Wakeford et al., 2016) and jitter decorrelation (Sing et al.,
2019). Both analysis techniques produced statistically similar transmission
spectra for the atmosphere of HAT-P-41b. We obtain a precision of 29–33 ppm on
the broadband transit depth from 200–800 nm, and an average precision of
$\approx$200 ppm in 10 nm spectroscopic bins.
##### Comparison to STIS Observations
Figure 16: Transmission spectrum of HAT-P-41b measured with WFC3/UVIS G280
grism using systematic marginalization combining two HST observations (pink),
compared to STIS G430L combined spectra from two HST observations (dark green)
and one observation with the HST STIS G750L grating (Sheppard, 2020 in prep -
private communication). The WFC3/UVIS G280 grism is able to efficiently
measure the atmosphere of a transiting exoplanet from 200–800 nm to high
precision, matching and exceeding that of STIS.
Figure 17: Transmission model
fit using the planetary specific grid with rainout condensation by Goyal et
al. (2018). Both the jitter decorrelated and systematic marginalization G280
spectra were fit independently with the Spitzer data to the full grid of
HAT-P-41b models. Both datasets found the same best fit model with Teq = 2091
K, [M/H] = +2.0, C/O = 0.7, 1100$\times$scattering, cloud = 0.2.
We compare the transmission spectrum measured of HAT-P-41b with WFC3/UVIS G280
grism to that measured using STIS G430L and G750L gratings. We find that the
combination of the two HST observations in the G280 UVIS grism results in
resolution and precision exceeding that of STIS, which required the
combination of three HST observations to cover the whole wavelength range
compared to two for UVIS. Figure 16 shows the transmission spectrum derived
using systematic marginalization from two transits with UVIS G280 compared to
the transmission spectrum from three transits with STIS G430L and G750L
presented by Sheppard (2020 in prep - private communication).
Assessing the overall use of UVIS G280 over the STIS gratings, there are a
number of trade offs to consider. As G280 cannot be scanned and the throughput
is much higher, it will likely be more difficult to observe bright (Vmag $<$ 7)
targets, especially considering the impact of overlapping spectral orders that
will make it difficult to extract individual spectral bins at this resolution.
Therefore, bright targets will be more efficiently observed with STIS/G430L in
particular. Additionally, although UVIS G280 can efficiently measure a wide
wavelength range in a single observation it does not extend to wavelengths
spanning the potassium absorption line that can only be accurately captured
with the STIS G750L grating. However, the extended wavelength coverage into
the UV compared to the G430L grism and the comparable resolution means that a
potential Na line can be resolved just as easily with UVIS as with STIS but
with potentially higher precisions in UVIS. The measured UVIS spectrum far
exceeds the resolution and precision that can be achieved by STIS/G750L over
the comparable wavelengths (see Fig. 16).
This direct comparison for the same planet demonstrates that the UVIS G280
grism can easily exceed the precision and resolution of STIS in an equivalent
number of observations, while being more efficient and requiring less
observing time. UVIS G280 also has the advantage of spanning the whole
wavelength range in one shot, dramatically reducing the potential impact of
stellar activity and systematics which can cause offsets between datasets from
different instrument modes. In summary, for targets with Vmag $\geqslant$ 7
the UVIS G280 grism shows reduced systematics, higher resolution, precision,
and wavelength coverage with more efficient observing compared to STIS G430L
and G750L gratings.
##### Searching for Evidence of Atmospheric Escape
The UVIS G280 grism has ideal wavelength coverage to search for signatures of
atmospheric escape of the Fe II at 240 nm and 260 nm, and the prominent Mg II
doublet at 279.63 nm. A single resolution element for the G280 grism is
$\sim$2 nm which encompasses the whole Mg II doublet absorption line, thus
limiting us to strong, low resolution detections. At a single resolution
element of the detector, the scatter becomes large and we were unable to
converge on a solution to fit the lightcurve systematics. We therefore
conducted an analysis of the HAT-P-41b transit data in 4 nm bins (2 resolution
elements) across the 230–290 nm range, with individual moving analyses in 10 nm
steps to search for excess absorption from escaping ions. In this analysis, we
find little significant evidence for additional absorption by Fe II and Mg II
in the atmosphere. In a single 4 nm bin centred at 280 nm we measure
additional 0.2% absorption compared to the average transit depth which could
potentially correspond to Mg II. However, this absorption is not seen in bins
centered 10 nm either side of 280 nm that encompass the peak of the
absorption. The scatter is on the order of 0.3% across the whole sampled
range.
We conducted our search predominantly using the positive spectral orders for
each visit as the throughput and flux levels are high enough for the precision
needed at these wavelengths. However, for strong signatures such as those seen
in WASP-121b (Sing et al., 2019) or KELT-9b (Hoeijmakers et al., 2018), which
also orbit bright stars, the absorption signature will likely also be
measurable in the negative order spectra as well. We conclude that there is no
evidence of significant Fe II and Mg II escaping from the atmosphere of
HAT-P-41b based on the precision of these measurements. However, we cannot
currently conclude where this places HAT-P-41b in the comparative phase space
as more measurements with this mode or similar to that shown in Sing et al.
(2019) will be required over a wide temperature phase space to examine the
likelihood of detection.
##### Planetary Specific Model Comparison
We ran each of the transmission spectra including the measured Spitzer transit
depths through the planetary specific forward model grid for HAT-P-41b using
rainout condensation presented by Goyal et al. (2018, 2019). In each case, the
model fits have the same number of degrees of freedom with the only additional
fitting parameter being the absolute altitude of the model. For each UVIS G280
spectrum, we trim the first and last two data points that are likely most
affected by low flux and fringing, respectively, and append on the Spitzer
transit depths. Each transmission spectrum independently favors the same
atmospheric model that has: Teq = 2091 K, atmospheric metallicity [M/H] =
+2.0, C/O = 0.7, 1100$\times$scattering profile, and uniform cloud opacity =
0.2 (see Fig. 17). We find a $\chi^{2}_{\nu}$ = 1.45 and 1.72 when fitting the
most favored model to the jitter decorrelated and marginalized transmission
spectrum, respectively.
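For illustration, a sketch of fitting such a grid with a single free altitude offset; the grid container and parameter names are hypothetical, but the analytic offset for a weighted chi-squared is standard:

```python
import numpy as np

def best_grid_model(depth, err, models):
    """Fit a forward-model grid with one free parameter: absolute altitude.

    depth, err : measured transit depths and uncertainties (arrays)
    models     : dict mapping grid parameters, e.g. (Teq, [M/H], C/O,
                 haze, cloud), to model depths sampled at the same
                 wavelengths (names/shapes are ours).
    """
    depth = np.asarray(depth, float)
    w = 1.0 / np.asarray(err, float) ** 2
    best = None
    for params, mdl in models.items():
        mdl = np.asarray(mdl, float)
        offset = np.sum(w * (depth - mdl)) / np.sum(w)  # optimal shift
        chi2 = np.sum(w * (depth - mdl - offset) ** 2)
        if best is None or chi2 < best[0]:
            best = (chi2, params, offset)
    return best   # (chi2, best-fit grid parameters, altitude offset)
```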
The model shows prominent TiO/VO features in the near-UV fitting the UVIS G280
data well in the optical with a wavelength dependent slope associated with a
scattering opacity source composed of small sub-micron particles. This model
predicts a muted H2O feature in the near-IR that would be detectable with
WFC3’s G102 and G141 grisms. The Spitzer IR is dominated by CO2 that would add
additional constraints on the atmospheric metallicity (Moses et al., 2011) and
can be validated by JWST NIRSpec observations.
## 7 Conclusions
We present HST’s WFC3/UVIS G280 grism as a new and ideal instrument mode for
exoplanet time-series characterisation. This is the first time scientific
analysis of any observation with this instrument mode has been published. As
such, we provide a detailed breakdown of the challenges and advantages of the
instrument, detailed instructions on the spectral extraction with reference to
data files and programs provided through UVIS calibration files, and a
comparative study of two well established systematic reduction methods.
To test the UVIS G280 grism for time-series data, we observed the transit of
the hot Jupiter HAT-P-41b over two consecutive transit events. This allowed us
to measure the overall stability of the instrument, the precision, and
resolution without additional concerns associated with potential stellar
activity. We obtained both positive and negative first order spectra from each
observation, providing four different datasets from 200–800 nm. We analysed
each dataset separately before combining the information to produce the final
atmospheric transmission spectrum of HAT-P-41b. We applied two different
extraction and systematic analysis techniques to the data and find them to be
statistically similar across the whole transmission spectrum demonstrating the
robust and consistent nature of the instrument critical for accurate exoplanet
transmission spectral studies.
We measure the complete transmission spectrum of the hot Jupiter HAT-P-41b
from 200–800 nm in 10 nm bins and at 3.6 and 4.5 $\mu$m with Spitzer’s IRAC
instrument. In the broadband UVIS lightcurves, we reach a precision of 29-33
ppm, with an average of $\approx$200 ppm in 10 nm wide spectroscopic channels.
The transmission spectrum shows evidence of TiO/VO in the near-UV to optical
with significant absorption from CO2 in the Spitzer 4.5 $\mu$m channel. We fit
a grid of forward models specifically derived for HAT-P-41b to the
transmission spectrum from multiple reduction pipelines and find consistent
results with a Teq = 2091 K, [M/H] = +2.0, C/O = 0.7, scattering $\times$1100,
and cloud opacity = 0.2 for rainout condensation (see Goyal et al. 2018,
2019). Additional measurements in the near-IR will further aid the
interpretation of this planet's atmospheric transmission and will be detailed
in future publications.
We demonstrate that Hubble’s WFC3 UVIS G280 grism is superior to the
combination of STIS G430L and G750L gratings for time-series observations in
terms of efficiency, precision, and resolution from 300–800 nm for exoplanet
time-series observations. Notably the UVIS G280 grism also allows access to
wavelengths as short as 200 nm with the potential to measure the escaping
atmosphere of giant exoplanets via Fe II and Mg II absorption lines and a
broad range of other atmospheric processes. The wavelength coverage offered
by the UVIS G280 grism (200–800 nm) provides a perfect complement to the
spectroscopic capabilities of the James Webb Space Telescope (600–14000 nm),
which together can probe the full extent of atmospheric processes in
exoplanets that closely orbit their host star.
## Acknowledgements
We thank D. Deming for use of data from his Spitzer program 13044 that
provided the two transits used to obtain accurate system parameters. Thanks to
S. Morrell for discussions on the stellar parameters with updates from Gaia.
We acknowledge private communication from N. Nikolov and K. Sheppard for the
STIS data analysis from the Hubble PanCET program (Sheppard, 2020 in prep -
private communication).
This work is based on observations made with the NASA/ESA Hubble Space
Telescope, https://archive.stsci.edu/hst/search.php (catalog HST-GO-15288),
that were obtained at the Space Telescope Science Institute, which is operated
by the Association of Universities for Research in Astronomy, Inc. A portion
of this work is based on observations made with the Spitzer Space Telescope,
which is operated by the Jet Propulsion Laboratory, California Institute of
Technology under a contract with NASA. H.R. Wakeford acknowledges support from
the Giacconi Prize Fellowship at the Space Telescope Science Institute, which
is operated by the Association of Universities for Research in Astronomy, Inc.
Author contributions: H.R. Wakeford led the UVIS data analysis with detailed
comparisons provided by D.K. Sing. N. Pirzkal provided the UVIS calibration
pipeline and knowledge of the instrument. K.B. Stevenson analysed the Spitzer
data and helped with discussions on the UVIS analysis. N.K. Lewis aided with
the interpretation of the results and providing context for the observations.
T.J. Wilson aided in the statistical analysis. All authors provided text and
comments for the manuscript.
## References
* Astropy Collaboration et al. (2018) Astropy Collaboration, Price-Whelan, A. M., Sipőcz, B. M., et al. 2018, AJ, 156, 123, doi: 10.3847/1538-3881/aabc4f
* Bonomo et al. (2017) Bonomo, A. S., Desidera, S., Benatti, S., et al. 2017, A&A, 602, A107, doi: 10.1051/0004-6361/201629882
* Bourrier et al. (2018) Bourrier, V., Lecavelier des Etangs, A., Ehrenreich, D., et al. 2018, arXiv e-prints, arXiv:1812.05119. https://arxiv.org/abs/1812.05119
* Bradley et al. (2019) Bradley, L., Sipőcz, B., Robitaille, T., et al. 2019, astropy/photutils: v0.7.2, v0.7.2, Zenodo, doi: 10.5281/zenodo.3568287
* Caswell et al. (2019) Caswell, T. A., Droettboom, M., Hunter, J., et al. 2019, matplotlib/matplotlib v3.1.0, v3.1.0, Zenodo, doi: 10.5281/zenodo.2893252
* Cubillos et al. (2013) Cubillos, P., Harrington, J., Madhusudhan, N., et al. 2013, ApJ, 768, 42, doi: 10.1088/0004-637X/768/1/42
* Cutri et al. (2012) Cutri, R. M., Wright, E. L., Conrow, T., et al. 2012, Explanatory Supplement to the WISE All-Sky Data Release Products, Tech. rep.
* de Wit et al. (2016) de Wit, J., Wakeford, H. R., Gillon, M., et al. 2016, Nature, 537, 69, doi: 10.1038/nature18641
* Deming et al. (2013) Deming, D., Wilkins, A., McCullough, P., et al. 2013, ApJ, 774, 95
* Eastman et al. (2010) Eastman, J., Siverd, R., & Gaudi, B. S. 2010, PASP, 122, 935, doi: 10.1086/655938
* Ehrenreich et al. (2014) Ehrenreich, D., Bonfils, X., Lovis, C., et al. 2014, A&A, 570, A89, doi: 10.1051/0004-6361/201423809
* Evans et al. (2018) Evans, T. M., Sing, D. K., Goyal, J. M., et al. 2018, AJ, 156, 283, doi: 10.3847/1538-3881/aaebff
* Fazio et al. (2004) Fazio, G. G., Hora, J. L., Allen, L. E., et al. 2004, Astrophy. J. Suppl. Ser., 154, 10, doi: 10.1086/422843
* Gelman & Rubin (1992) Gelman, A., & Rubin, D. 1992, Statistical Science, 7, 457
* Gilliland et al. (2010) Gilliland, R. L., Rajan, A., & Deustua, S. 2010, WFC3 UVIS Full Well Depths, and Linearity Near and Beyond Saturation, Tech. rep.
* Goyal et al. (2018) Goyal, J. M., Mayne, N., Sing, D. K., et al. 2018, MNRAS, 474, 5158, doi: 10.1093/mnras/stx3015
* Goyal et al. (2019) —. 2019, MNRAS, 486, 783, doi: 10.1093/mnras/stz755
* Hartman et al. (2012) Hartman, J. D., Bakos, G. Á., Béky, B., et al. 2012, AJ, 144, 139, doi: 10.1088/0004-6256/144/5/139
* Helling (2019) Helling, C. 2019, Annual Review of Earth and Planetary Sciences, 47, 583, doi: 10.1146/annurev-earth-053018-060401
* Hoeijmakers et al. (2018) Hoeijmakers, H. J., Ehrenreich, D., Heng, K., et al. 2018, Nature, 560, 453, doi: 10.1038/s41586-018-0401-y
* Kreidberg et al. (2014) Kreidberg, L., Bean, J. L., Désert, J.-M., et al. 2014, Nature, 505, 69, doi: 10.1038/nature12888
* Kreidberg et al. (2018) Kreidberg, L., Line, M. R., Parmentier, V., et al. 2018, ArXiv e-prints. https://arxiv.org/abs/1805.00029
* Kuntschner et al. (2009) Kuntschner, H., Bushouse, H., Kümmel, M., & Walsh, J. R. 2009, The ground calibrations of the WFC3/UVIS G280 grism, Tech. rep.
* Landsman (1995) Landsman, W. B. 1995, Astronomical Society of the Pacific Conference Series, Vol. 77, The IDL Astronomy User’s Library, ed. R. A. Shaw, H. E. Payne, & J. J. E. Hayes, 437
* Lecavelier des Etangs et al. (2008) Lecavelier des Etangs, A., Pont, F., Vidal-Madjar, A., & Sing, D. 2008, A&A, 481, L83, doi: 10.1051/0004-6361:200809388
* Lothringer et al. (2018) Lothringer, J. D., Benneke, B., Crossfield, I. J. M., et al. 2018, AJ, 155, 66, doi: 10.3847/1538-3881/aaa008
* Magic et al. (2015) Magic, Z., Chiavassa, A., Collet, R., & Asplund, M. 2015, A&A, 573, A90, doi: 10.1051/0004-6361/201423804
* Mandel & Agol (2002) Mandel, K., & Agol, E. 2002, ApJ, 580, L171, doi: 10.1086/345520
* Marley et al. (2013) Marley, M. S., Ackerman, A. S., Cuzzi, J. N., & Kitzmann, D. 2013, Clouds and Hazes in Exoplanet Atmospheres, ed. S. J. Mackwell, A. A. Simon-Miller, J. W. Harder, & M. A. Bullock, 367–391, doi: 10.2458/azu_uapress_9780816530595-ch15
* McCullough & MacKenty (2012) McCullough, P., & MacKenty, J. 2012, STScI Instrument Science Report WFC3, 8, 2012
* McCullough et al. (2014) McCullough, P. R., Crouzet, N., Deming, D., & Madhusudhan, N. 2014, ApJ, 791, 55, doi: 10.1088/0004-637X/791/1/55
* Morrell & Naylor (2019) Morrell, S., & Naylor, T. 2019, MNRAS, 489, 2615, doi: 10.1093/mnras/stz2242
* Moses et al. (2011) Moses, J. I., Visscher, C., Fortney, J. J., et al. 2011, ApJ, 737, 15, doi: 10.1088/0004-637X/737/1/15
* Nikolov et al. (2014) Nikolov, N., Sing, D. K., Pont, F., et al. 2014, MNRAS, 437, 46, doi: 10.1093/mnras/stt1859
* Oliphant (2006–) Oliphant, T. 2006–, NumPy: A guide to NumPy, USA: Trelgol Publishing. http://www.numpy.org/
* Pirzkal et al. (2017) Pirzkal, N., Hilbert, B., & Rothberg, B. 2017, Trace and Wavelength Calibrations of the UVIS G280 +1/-1 Grism Orders, Tech. rep.
* Pont et al. (2006) Pont, F., Zucker, S., & Queloz, D. 2006, MNRAS, 373, 231, doi: 10.1111/j.1365-2966.2006.11012.x
* Rothberg et al. (2011) Rothberg, B., Pirzkal, N., & Baggett, S. 2011, First Results from Contamination Monitoring with the WFC3 UVIS G280 Grism, Tech. rep.
* Sheppard (2020 in prep - private communication) Sheppard, K. 2020 in prep - private communication, AJ
* Sing et al. (2011) Sing, D. K., Pont, F., Aigrain, S., et al. 2011, MNRAS, 416, 1443, doi: 10.1111/j.1365-2966.2011.19142.x
* Sing et al. (2015) Sing, D. K., Wakeford, H. R., Showman, A. P., et al. 2015, MNRAS, 446, 2428, doi: 10.1093/mnras/stu2279
* Sing et al. (2016) Sing, D. K., Fortney, J. J., Nikolov, N., et al. 2016, Nature, 529, 59. http://dx.doi.org/10.1038/nature16068
* Sing et al. (2019) Sing, D. K., Lavvas, P., Ballester, G. E., et al. 2019, AJ, 158, 91, doi: 10.3847/1538-3881/ab2986
* Stevenson et al. (2012a) Stevenson, K. B., Harrington, J., Fortney, J. J., Loredo, T. J., et al. 2012a, ApJ, 754, 136, doi: 10.1088/0004-637X/754/2/136
* Stevenson et al. (2012b) Stevenson, K. B., Harrington, J., Fortney, J. J., et al. 2012b, ApJ, 754, 136, doi: 10.1088/0004-637X/754/2/136
* ter Braak & Vrugt (2008) ter Braak, C., & Vrugt, J. 2008, Statistics and Computing, 18, 435, doi: 10.1007/s11222-008-9104-9
* Tody (1986) Tody, D. 1986, Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 627, The IRAF Data Reduction and Analysis System, ed. D. L. Crawford, 733, doi: 10.1117/12.968154
* Tody (1993) —. 1993, Astronomical Society of the Pacific Conference Series, Vol. 52, IRAF in the Nineties, ed. R. J. Hanisch, R. J. V. Brissenden, & J. Barnes, 173
* van Dokkum (2001) van Dokkum, P. G. 2001, PASP, 113, 1420, doi: 10.1086/323894
* Virtanen et al. (2019) Virtanen, P., Gommers, R., Burovski, E., et al. 2019, scipy/scipy: SciPy 1.2.1, v1.2.1, Zenodo, doi: 10.5281/zenodo.2560881
* Wakeford et al. (2013) Wakeford, H., Sing, D., Deming, D., et al. 2013, MNRAS, 435, 3481
* Wakeford & Sing (2015) Wakeford, H. R., & Sing, D. K. 2015, A&A, 573, A122, doi: 10.1051/0004-6361/201424207
* Wakeford et al. (2016) Wakeford, H. R., Sing, D. K., Evans, T., Deming, D., & Mandell, A. 2016, The Astrophysical Journal, 819, 10. http://stacks.iop.org/0004-637X/819/i=1/a=10
* Wakeford et al. (2017) Wakeford, H. R., Visscher, C., Lewis, N. K., et al. 2017, MNRAS, 464, 4247, doi: 10.1093/mnras/stw2639
* Wakeford et al. (2019) Wakeford, H. R., Wilson, T. J., Stevenson, K. B., & Lewis, N. K. 2019, Research Notes of the American Astronomical Society, 3, 7, doi: 10.3847/2515-5172/aafc63
Table 3: Transmission spectrum of HAT-P-41b based on the combined spectrum of the +1 and -1 spectral orders over two transit events, for both systematic marginalization and jitter decorrelation.
| Systematic marginalization | Jitter decorrelation | Limb-darkening coefficients∗
---|---|---|---
Wavelength | Transit depth | Uncertainty | Transit depth | Uncertainty | c1 | c2 | c3 | c4
(nm) | (%) | (%) | (%) | (%) | | | |
200-800 | 1.04056 | 0.00293 | 1.03303 | 0.00331 | | | |
205 | 1.07731 | 0.14700 | 0.85089 | 0.11217 | 0.27642 | -0.27861 | 0.17437 | 0.81199
215 | 1.04749 | 0.09769 | 1.04495 | 0.06266 | 0.44951 | -1.31326 | 2.39714 | -0.53880
225 | 1.11641 | 0.06163 | 0.97186 | 0.04573 | 0.26022 | -0.34716 | 0.79494 | 0.26854
235 | 0.95137 | 0.08046 | 1.02503 | 0.05374 | 0.44189 | -0.57725 | 1.16157 | -0.07411
245 | 1.09086 | 0.03881 | 0.94739 | 0.04950 | 0.54045 | -1.11394 | 2.37372 | -0.82917
255 | 0.93706 | 0.08189 | 0.98717 | 0.05082 | 0.48574 | -0.52757 | 1.52725 | -0.54102
265 | 1.05554 | 0.07504 | 1.01539 | 0.03792 | 0.33614 | -0.12161 | 1.23718 | -0.49873
275 | 0.92522 | 0.06059 | 1.01556 | 0.04516 | 0.51787 | -0.22060 | 0.71649 | -0.07533
285 | 0.95923 | 0.04902 | 0.97523 | 0.03401 | 0.47085 | -0.29824 | 1.30007 | -0.53323
295 | 1.00307 | 0.02041 | 1.00242 | 0.02578 | 0.43969 | -0.17729 | 1.45827 | -0.79410
305 | 1.06577 | 0.01350 | 1.03449 | 0.02395 | 0.40399 | 0.12345 | 0.91689 | -0.53110
315 | 0.99512 | 0.02188 | 0.97180 | 0.02567 | 0.34789 | 0.38909 | 0.43002 | -0.26221
325 | 1.05841 | 0.01795 | 1.01076 | 0.01979 | 0.41584 | 0.42379 | 0.33271 | -0.27752
335 | 0.98227 | 0.01837 | 1.00338 | 0.01746 | 0.32808 | 0.83687 | -0.47180 | 0.19469
345 | 0.99562 | 0.01689 | 0.98657 | 0.02122 | 0.50090 | 0.47375 | -0.04764 | -0.04352
355 | 1.03245 | 0.01882 | 0.99715 | 0.02232 | 0.49485 | 0.49152 | -0.08977 | -0.02466
365 | 1.00155 | 0.01805 | 1.00064 | 0.01927 | 0.49130 | 0.55355 | -0.23093 | 0.04401
375 | 1.00250 | 0.01921 | 1.01414 | 0.02044 | 0.51901 | 0.65211 | -0.42396 | 0.11740
385 | 1.04525 | 0.01217 | 0.99727 | 0.01692 | 0.44683 | 0.40462 | 0.26623 | -0.24303
395 | 0.93250 | 0.03636 | 1.03279 | 0.02120 | 0.48246 | 0.16780 | 0.44794 | -0.21622
405 | 1.03092 | 0.01341 | 1.02334 | 0.01455 | 0.48142 | 0.45253 | 0.03247 | -0.08316
415 | 1.01568 | 0.01890 | 1.00675 | 0.01593 | 0.43310 | 0.43124 | 0.12149 | -0.10687
425 | 1.03996 | 0.01103 | 1.01964 | 0.01239 | 0.47865 | 0.31563 | 0.19400 | -0.11728
435 | 1.02160 | 0.00993 | 1.03207 | 0.01504 | 0.50686 | 0.42439 | -0.02042 | -0.06239
445 | 1.02758 | 0.01111 | 1.01087 | 0.01519 | 0.62723 | 0.00768 | 0.49756 | -0.26869
455 | 1.03755 | 0.01292 | 1.03748 | 0.01523 | 0.66796 | -0.03296 | 0.39110 | -0.16936
465 | 1.03632 | 0.00862 | 1.04188 | 0.01242 | 0.64089 | 0.10690 | 0.17586 | -0.07282
475 | 1.03320 | 0.01313 | 1.01325 | 0.01207 | 0.68519 | 0.04928 | 0.14144 | -0.03468
485 | 1.02089 | 0.01256 | 1.02408 | 0.01507 | 0.79901 | -0.19234 | 0.36598 | -0.16162
495 | 1.04370 | 0.01210 | 1.02236 | 0.01338 | 0.77433 | -0.20489 | 0.40845 | -0.15622
505 | 1.03363 | 0.01274 | 1.00958 | 0.01437 | 0.71140 | -0.05044 | 0.22921 | -0.08213
515 | 1.06672 | 0.01122 | 1.03961 | 0.01399 | 0.64107 | -0.00460 | 0.31643 | -0.15507
525 | 1.02409 | 0.01215 | 1.01568 | 0.01591 | 0.73764 | -0.16794 | 0.39393 | -0.17123
535 | 1.03226 | 0.00823 | 1.03075 | 0.01383 | 0.79821 | -0.32921 | 0.54626 | -0.23226
545 | 1.05347 | 0.01105 | 1.04898 | 0.01551 | 0.82715 | -0.38564 | 0.54019 | -0.20221
555 | 1.04829 | 0.01132 | 1.06219 | 0.01434 | 0.81141 | -0.35516 | 0.49708 | -0.18347
565 | 1.04683 | 0.01666 | 1.02657 | 0.01682 | 0.82890 | -0.41148 | 0.54895 | -0.20500
575 | 1.03509 | 0.01478 | 0.98730 | 0.01769 | 0.84535 | -0.43068 | 0.52234 | -0.18103
585 | 1.07381 | 0.01539 | 1.05872 | 0.01698 | 0.83910 | -0.44512 | 0.56000 | -0.20974
595 | 1.04651 | 0.01203 | 1.04453 | 0.01652 | 0.88137 | -0.54670 | 0.63637 | -0.22889
605 | 1.04134 | 0.01189 | 1.03169 | 0.01478 | 0.88844 | -0.57180 | 0.65069 | -0.23386
615 | 1.03707 | 0.01348 | 1.00632 | 0.01922 | 0.81257 | -0.38094 | 0.42040 | -0.13042
625 | 1.01444 | 0.01252 | 0.98434 | 0.01837 | 0.87418 | -0.59117 | 0.70278 | -0.27158
635 | 1.01662 | 0.01658 | 1.02845 | 0.01979 | 0.87714 | -0.58646 | 0.66610 | -0.24723
645 | 1.02919 | 0.01724 | 1.00875 | 0.01949 | 0.86284 | -0.56322 | 0.65497 | -0.25552
655 | 1.06727 | 0.01847 | 1.05593 | 0.02030 | 0.95153 | -0.74127 | 0.72558 | -0.26379
665 | 1.00227 | 0.01846 | 1.02700 | 0.02128 | 0.90248 | -0.67572 | 0.74030 | -0.27779
675 | 1.03065 | 0.01782 | 1.03617 | 0.02138 | 0.87733 | -0.62328 | 0.65601 | -0.22774
685 | 1.01072 | 0.01601 | 0.94646 | 0.02119 | 0.88769 | -0.66447 | 0.70310 | -0.25098
695 | 1.01402 | 0.02043 | 1.00378 | 0.02471 | 0.89369 | -0.70395 | 0.75795 | -0.27823
705 | 1.07414 | 0.01838 | 1.07324 | 0.02316 | 0.88686 | -0.68657 | 0.72303 | -0.26083
715 | 1.03237 | 0.02539 | 0.99668 | 0.02136 | 0.87414 | -0.65906 | 0.67614 | -0.23359
725 | 1.04279 | 0.02215 | 1.03922 | 0.02722 | 0.89617 | -0.72378 | 0.74812 | -0.26705
735 | 1.02738 | 0.01973 | 1.03978 | 0.02539 | 0.89467 | -0.73795 | 0.77170 | -0.28323
745 | 1.04238 | 0.01927 | 1.01427 | 0.03020 | 0.86509 | -0.64972 | 0.65973 | -0.23664
755 | 1.04771 | 0.02398 | 1.02146 | 0.02712 | 0.89132 | -0.75184 | 0.79302 | -0.29984
765 | 1.06149 | 0.01834 | 0.98611 | 0.03130 | 0.88774 | -0.75638 | 0.78905 | -0.29437
775 | 1.05751 | 0.02014 | 1.09123 | 0.02938 | 0.88279 | -0.73219 | 0.73173 | -0.25886
785 | 1.10846 | 0.02307 | 1.10165 | 0.03805 | 0.88973 | -0.77932 | 0.81639 | -0.31113
* Based on using the +1 order throughput curve and 3D stellar models from Magic et al. (2015).
2024-09-04T02:54:56.081725 | 2020-03-01T18:30:57 | 2003.00544 | {
"authors": "Jeevan Manavalan, Prabhakar Ray, Matthew Howard",
"full_text_license": null,
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"provenance": "arxiv-papers-0000.json.gz:25973",
"submitter": "Jeevan Manavalan",
"url": "https://arxiv.org/abs/2003.00544"
} | arxiv-papers | # Exploiting Ergonomic Priors in Human-to-Robot Task Transfer
Jeevan Manavalan, Prabhakar Ray & Matthew Howard
Jeevan Manavalan, Prabhakar Ray and Matthew J. Howard are with the Centre for Robotics Research, Department of Engineering, King’s College London, London, UK. <EMAIL_ADDRESS>
###### Abstract
In recent years, there has been a rapid shift in the development of versatile, autonomous robots, driven by the introduction of means to intuitively teach robots task-oriented behaviour by demonstration. In this paper, a method based on
programming by demonstration is proposed to learn null space policies from
constrained motion data. The main advantage to using this is generalisation of
a task by retargeting a systems redundancy as well as the capability to fully
replace an entire system with another of varying link number and lengths while
still accurately repeating a task subject to the same constraints. The
effectiveness of the method has been demonstrated in a 3-link simulation and a
real world experiment using a human subject as the demonstrator and is
verified through task reproduction on a 7DoF physical robot. In simulation, the method works accurately with as few as five data points, producing errors below $\mathbf{10^{-14}}$. The approach is shown to outperform the
current state-of-the-art approach in a simulated 3DoF robot manipulator
control problem where motions are reproduced using learnt constraints.
Retargeting of a system's null space component is also demonstrated in a task
where controlling how redundancy is resolved allows for obstacle avoidance.
Finally, the approach is verified in a real world experiment using
demonstrations from a human subject where the learnt task space trajectory is
transferred onto a 7DoF physical robot of a different embodiment.
## I Introduction
In recent years, there has been a rapid shift in the development of versatile, autonomous robots capable of performing increasingly complex tasks.
Such robots are expected to enhance the capabilities of ordinary people to
introduce automation into their lives by means of intuitively teaching robots
task-oriented behaviour by demonstration [1, 2].
When humans perform task-oriented movement, it is often the case that there is
a high level of redundancy, with the number of degrees of freedom (DoF)
available to execute the task usually much higher than those required [3]. For
instance, in the task of opening a drawer (see Figure 1), the primary
objective is to _manipulate the drawer from the closed to the open position_ ; however, there is redundancy in the several possible ways this can be
achieved. For example, a human performing this task can adopt different elbow
postures, such as flaring it out at various degrees, while still managing to
move the drawer (Figure 1(a)). Having this flexibility is beneficial as it not
only allows for multiple ways of achieving the task, but can also enhance efficiency or robustness, thereby improving overall performance.
Humans tend to take advantage of this flexibility in predictable, stereotypical ways, commonly seeking to minimise discomfort or energy expenditure [4], [5]. For instance, in human drawer opening, despite the variety of postures that can be taken, it is typical to keep one's wrist straight, to avoid uncomfortable joint flexion, and one's elbow down, to avoid
working unnecessarily against gravity. Indeed, such features are codified in
human ergonomics literature, to the point that they shape work environments
and policy on safe working practice [6, 7, 8].
Figure 1: (a) Redundancy in elbow postures when opening a drawer. In absence
of other constraints, humans tend to avoid using non-ergonomic postures (shown
in blue). (b) A comfortable pose for a human is different to one that maximises manipulability in a robot with different joint limits.
Similarly, a multi-DoF robotic system imitating the human can adopt different
joint configurations that are consistent with maintaining the end-effector on
the drawer handle, including those that closely match the human’s posture.
Therefore, the simplest way to have a robot learn is to match it to the
human’s posture as closely as possible when executing the task. However, such
an approach neglects the differences in embodiment between human and robot
that may lead to sub-optimal performance of the robot [9, 10]. For instance,
maintaining a posture in which one's wrist joint is kept straight (Figure
1(b)), while comfortable for the human, may represent a singular posture for
the robot that can lead to dangerous unstable movements. Moreover, for a robot
with geared, non-backdriveable joints, it may cost little energy to maintain
the elbow in a flared posture, whereas moving the arm to a more human-like,
elbow-down posture may actually expend energy unnecessarily. Such cases
suggest a more nuanced approach to human-to-robot behaviour transfer is
required, that takes explicit account of the stereotypical features of human
movement, and the desirability, or otherwise, to reproduce them in a robotic
imitator.
To this end, this paper investigates how stereotypical features of
demonstrators’ posture control can be used to decompose observed behaviour
into task-oriented and redundant components of motion. Specifically, it
presents a new method for programming by demonstration whereby explicit use of
the underlying null space control policy—as determined by the stereotypical or
ergonomic features—is used to learn the task and null spaces involved in the
behaviour and their underlying constraints [11, 12]. The latter allows the
original behaviour to be retargeted to an imitator robot that has a different
kinematic embodiment, by optimising movement according to the robot’s own
structure without causing any interference with the task goal [13]. Numerical and physical evaluations are reported in which the proposed approach is applied to (i) a toy experiment where constraints are learnt and performance under varying data lengths and noise levels is evaluated, (ii) a simulated 3DoF experiment where the approach is compared against the state of the art, (iii) a demonstration of the benefits of retargeting the system to resolve redundancy for obstacle avoidance, (iv) task reproduction from a demonstrator system to an imitator of a different embodiment, and (v) a real world experiment where demonstrations from a human are used to learn and reproduce task space motions on a Sawyer, a 7DoF physical robot. The results indicate that learning in this way outperforms several state-of-the-art methods [14, 15, 16, 17] in its ability to accurately learn the decomposition from relatively little data (in a comparative study, the constraint is learnt from a single trajectory of length $2\,s$, i.e., 100 data points), with minimal assumptions made on the form of the data.
## II Background & Related Work
### II-A Stereotypical Movement
In the natural motion of people, it can generally be assumed that the observed
movement will be optimised for efficiency according to their embodiment. In
the context of humans teaching robots task-oriented movements, this means that
postures adopted by a person in the course of a demonstration are likely to
not only meet the requirements of the task, but also show stereotypical traits
that reflect the demonstrator’s embodiment. For instance, human demonstrators
will typically adopt postures that avoid working against gravity, or limb
flexion or extension away from the resting posture of the limb. The study of
postural preferences and stereotypical movement features in humans has long
been studied in the field of _human movement science_ and, is particularly
well documented in the domain of _human ergonomics_ [6, 7].
Note that these stereotypical features are _secondary to the task_ —that is, they will tend to be promoted to seek comfort and minimise fatigue in movement—but are _subject to any applicable task constraints_. This means that they can be inhibited if the task demands it. For example, in drawer opening, the default rest posture of the shoulder is not maintained, since the hand must be lifted to the drawer handle for the task. Furthermore, if maintaining the elbow-down posture during opening _conflicts with the task_ (e.g., would result in a collision with an obstacle), then the task space extends to the elbow elevation and overrides the default behaviour.
This flexibility is a hallmark of human behaviour that is not currently
captured by existing imitation learning approaches and poses an ongoing
challenge. Traditional imitation learning approaches tend to treat behaviours
as monolithic control policies, and so do not lend themselves well to task-prioritised behaviours [18, 1, 19].
### II-B Task Prioritised Behaviour
To better capture this task-prioritised view of behaviour, several studies
have recently focused on modelling demonstrations hierarchically, whereby
movement is decomposed into the task space—the DoF required for the primary
task—and a null space (i.e., the remaining DoF). This draws on several well-
established hierarchical control schemes, such as Liégeois’ redundant kinematic control scheme [20], or Khatib’s Operational Space Formulation [21].
In this view, actions $\mathbf{u}\in\mathbb{R}^{\mathcal{{Q}}}$ are assumed to
take the form
$\mathbf{u}(\mathbf{x})=\underbrace{\mathbf{A}^{\dagger}(\mathbf{x})\mathbf{b}(\mathbf{x})}_{\mathbf{v}}+\underbrace{\mathbf{N}(\mathbf{x})\boldsymbol{\pi}(\mathbf{x})}_{\mathbf{w}}$
(1)
where $\mathbf{x}\in\mathbb{R}^{\mathcal{{P}}}$ represents state (usually
represented either in end-effector or joint space) and
$\mathbf{A}(\mathbf{x})\in\mathbb{R}^{\mathcal{{S}}\times\mathcal{{Q}}}$ is a
matrix describing a system of $\mathcal{{S}}$-dimensional constraints
$\mathbf{A}(\mathbf{x})\mathbf{u}(\mathbf{x})=\mathbf{b}(\mathbf{x})$ (2)
and
$\mathbf{N}(\mathbf{x}):=\mathbf{I}-\mathbf{A}(\mathbf{x})^{\dagger}\mathbf{A}(\mathbf{x})\in\mathbb{R}^{\mathcal{{Q}}\times\mathcal{{Q}}}$
(3)
is the null space projection matrix that projects the policy
$\boldsymbol{\pi}(\mathbf{x})$ onto the null space of $\mathbf{A}$. Here,
$\mathbf{I}\in\mathbb{R}^{\mathcal{{Q}}\times\mathcal{{Q}}}$ denotes the
identity matrix and
$\mathbf{A}^{\dagger}=\mathbf{A}^{\top}(\mathbf{A}\mathbf{A}^{\top})^{-1}$ is
the Moore-Penrose pseudo-inverse of $\mathbf{A}$.
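For concreteness, the following is a minimal numpy sketch (not from the paper) of computing the projection (3) from a given constraint matrix; the example 1-D constraint is an assumption for illustration.

```python
import numpy as np

def null_space_projection(A):
    """N = I - pinv(A) @ A, cf. (3); A is an S x Q constraint matrix."""
    return np.eye(A.shape[1]) - np.linalg.pinv(A) @ A

# Example: a 1-D constraint in a 2-D action space (a unit-vector row).
A = np.array([[np.cos(0.3), np.sin(0.3)]])
N = null_space_projection(A)
print(A @ N)  # ~0: N annihilates the constrained direction
```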
In this view, $\mathbf{b}(\mathbf{x})\in\mathbb{R}^{\mathcal{{S}}}$ represents
the _task space policy_ describing the primary task to be accomplished, and
the lumped term
$\mathbf{v}=\mathbf{A}^{\dagger}(\mathbf{x})\mathbf{b}(\mathbf{x})$ represents
that policy projected into the configuration space.
$\boldsymbol{\pi}(\mathbf{x})$ represents the _null space policy_ , that
encapsulates any actions in the configuration space secondary to the task.
Note that, it is typically the case that $\mathbf{b}$ is unknown (since this
is the task that should be learnt by demonstration), and $\mathbf{A}$ (and
therefore $\mathbf{N}$) is also not explicitly known (since this describes the
space in which the unknown task is defined).
The key insight of this paper is that in many cases, _prior knowledge of
$\boldsymbol{\pi}$ may be assumed_, since it commonly represents the
_stereotypical features of secondary movements_. Furthermore, as shown in
§III, knowledge of $\boldsymbol{\pi}$ enables efficient estimation of the
other quantities in (1) ($\mathbf{v}$ and $\mathbf{N}$) that can in turn be
used to separate out the task-oriented part of the demonstrations, and thereby
replace the secondary components with a control policy tailored to the
imitator’s embodiment.
### II-C Learning the Decomposition
Several prior studies have examined the possibility of robot learning by
demonstration using the representation (1)-(3), and in particular the
possibility of learning $\mathbf{v}$ or $\mathbf{A}$ under the assumption that
only $\mathbf{x}$ and $\mathbf{u}$ are observable. However, as noted in §II-A,
in many cases such an assumption is _overly stringent_ and can result in
degraded estimation performance.
Depending on the assumptions made on its representation (see §III-C), one of
several learning methods can be used to estimate $\mathbf{A}$ [14, 15, 17].
However, all of these methods, rely on the ability to separate the lumped task
space term $\mathbf{v}$ (or, equivalently, the null space term $\mathbf{w}$)
from the demonstrations, and learning performance is highly dependent on the
quality of the separation. These studies rely on the same approach, first
proposed by Towell et al. [12], that uses variations in the task space policy
$\mathbf{b}$ and consistency in the null space policy $\boldsymbol{\pi}$ to
form an estimate of the separation. This has several limitations in practice.
First, for the separation to work, it can be difficult to ensure that the data
is ‘rich’ enough in terms of the variations seen in the task space policy $\mathbf{b}$. Second, if working with data which contains several distinct task spaces, it is important to separate the data into subgroups and learn within each subgroup individually. However, this diminishes the learning
quality as it tends to make less data available within each subgroup. These
requirements can hamper the methods’ efficacy as increasingly complex systems
and constraints are considered. The approach proposed here does not have such
prerequisites—instead, it exploits prior knowledge of the control policy
$\boldsymbol{\pi}$, a component that can often be estimated through an
understanding of stereotypical behaviour or consideration of human ergonomics.
## III Method
In this section, a new method is defined for estimating the null space
projection matrix $\mathbf{N}$ in redundant systems where some prior knowledge
of the redundancy resolution strategy is assumed available.
### III-A Data
The proposed method works on data given as $\mathcal{{N}}$ pairs of observed
states $\mathbf{x}_{n}$ and actions $\mathbf{u}_{n}$ collected from task-
oriented movement demonstrations. It is assumed that (i) observations are in
the form presented in (1), (ii) $\mathbf{A}$, $\mathbf{N}$ and $\mathbf{b}$
are not explicitly known for any given observation and (iii) $\boldsymbol{\pi}$ is _known_ (or a good estimate is available).
As noted in §II, assumption (iii) is reasonable depending on several factors,
including the task at hand and the environment. In most circumstances, healthy
human demonstrators will tend to perform tasks in a way that _promotes
comfort_. In the experiments reported here, this tendency is captured by
assuming that the secondary control policy is a point attractor
$\boldsymbol{\pi}(\mathbf{x})=\beta(\mathbf{x}^{*}-\mathbf{x})$ (4)
where $\beta$ is a gain matrix and the point of attraction $\mathbf{x}^{*}$ is
a posture that scores highly according to standard ergonomic assessment procedures such as the Rapid Upper Limb Assessment (RULA) [8]. While many ergonomic measures are applicable depending on the experiment, RULA is selected as it is quick and simple to apply to static postures, focuses on the upper body (in line with the drawer handling experiments), and provides a constant joint posture range for optimal scoring [22], [23].
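As an illustration of this assumption, a minimal sketch of the attractor (4) follows; the gain value is illustrative only, and the target posture mirrors the high-scoring RULA posture used later in §IV-C.

```python
import numpy as np

def pi_ergonomic(x, x_star, beta=1.0):
    """Point attractor pi(x) = beta * (x* - x), cf. (4)."""
    return beta * (np.asarray(x_star) - np.asarray(x))

# Example: attract towards a posture scoring highly under RULA (cf. Sec. IV-C).
x_star = np.deg2rad([-90.0, 90.0, 0.0])
print(pi_ergonomic(np.deg2rad([-80.0, 70.0, 10.0]), x_star))
```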
### III-B Learning the Null Space Projection Matrix
The proposed method works by exploiting the orthogonality between the task and
null space parts in (1). Specifically, noting that
$\mathbf{v}^{\top}\mathbf{w}=\mathbf{w}^{\top}\mathbf{v}=0$, (1) can be
written
$\mathbf{w}^{\top}\mathbf{u}=\mathbf{w}^{\top}\mathbf{v}+\mathbf{w}^{\top}\mathbf{w}=\mathbf{w}^{\top}\mathbf{w}$
(5)
yielding the identity
$\mathbf{w}^{\top}(\mathbf{u}-\mathbf{w})=0.$ (6)
An estimate of $\mathbf{N}$, and therefore the null space component
$\mathbf{w}$, can be formed by choosing
$\tilde{\mathbf{w}}=\tilde{\mathbf{N}}\boldsymbol{\pi}$ consistent with this identity by minimising
$E[\tilde{\mathbf{N}}]=\sum^{\mathcal{{N}}}_{n=1}||\boldsymbol{\pi}_{n}^{\top}\tilde{\mathbf{N}}_{n}(\mathbf{u}_{n}-\boldsymbol{\pi}_{n})||.$
(7)
(For brevity, here and throughout the paper, the notation $\mathbf{a}_{n}$ is used to denote the quantity $\mathbf{a}$ evaluated on the $n$th sample; for example, if $\mathbf{a}$ is a vector quantity computed from the state $\mathbf{x}$, then $\mathbf{a}_{n}=\mathbf{a}(\mathbf{x}_{n})$.)
This minimisation problem can be solved using various nonlinear optimisation tools; in this case, Matlab’s fmincon is used with the interior-point algorithm, a nonlinear optimisation solver for constrained multivariable functions.
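The paper itself reports using Matlab's fmincon; the following is a hedged Python sketch of the same minimisation for a state-independent 1-D constraint in a 2-D action space, using scipy as a stand-in. The synthetic data, the linear null space policy, and the optimiser choice are all assumptions for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def objective(theta, U, Pi):
    a = np.array([[np.cos(theta[0]), np.sin(theta[0])]])  # constraint row, cf. (8)
    N = np.eye(2) - np.linalg.pinv(a) @ a                 # projection, cf. (3)
    # E = sum_n | pi_n^T N (u_n - pi_n) |, cf. (7)
    return sum(abs(Pi[n] @ N @ (U[n] - Pi[n])) for n in range(len(U)))

# Synthetic observations u_n = A^+ b_n + N pi_n with a known ground truth.
rng = np.random.default_rng(0)
t_true = 0.7
a = np.array([[np.cos(t_true), np.sin(t_true)]])
N = np.eye(2) - np.linalg.pinv(a) @ a
X = rng.uniform(-1, 1, (150, 2))
Pi = -X                                   # an assumed linear null space policy
B = rng.uniform(-2, 2, 150)               # random 1-D task space commands
U = (np.linalg.pinv(a) * B).T + Pi @ N    # cf. (1); N is symmetric
res = minimize(objective, x0=[0.1], args=(U, Pi), method="Nelder-Mead")
print(t_true, res.x[0] % np.pi)           # recovered constraint angle (mod pi)
```

Note that the constraint direction is only identifiable up to sign, hence the comparison modulo $\pi$.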
### III-C Representation of the Constraints
In order to efficiently learn $\tilde{\mathbf{N}}$, a suitable representation
needs to be selected. The approach chosen here follows that first proposed by
Lin et al. [11, 17] that represents $\tilde{\mathbf{N}}$ in terms of an
underlying constraint matrix $\tilde{\mathbf{A}}$ according to (3). This has
been shown to be effective both for unstructured problems (i.e., where the
form of the constraint matrix $\mathbf{A}$ is completely unknown) and for
situations where some features (i.e., candidate rows) of $\mathbf{A}$ are
available.
#### III-C1 Unit Vector Representation of $\mathbf{A}$
Following [17], if the form of $\mathbf{A}$ is completely unknown, it can be
represented using a (potentially, state-dependent) set of $\mathcal{{S}}$
orthonormal vectors
$\tilde{\mathbf{A}}=\left[\boldsymbol{\alpha}_{1}^{\top}\,\boldsymbol{\alpha}_{2}^{\top}\,\cdots\,\boldsymbol{\alpha}_{\mathcal{{S}}}^{\top}\right]^{\top}$
(8)
where $\boldsymbol{\alpha}_{s}=(a_{s,1},a_{s,2},...,a_{s,\mathcal{{Q}}})$
corresponds to the $s$th constraint in the observations. The latter can be
constructed iteratively by selecting vectors orthonormal to one another where
the $s$th vector has the form
$a_{s,1}=\cos\theta_{1}$, $a_{s,2}=\sin\theta_{1}\cos\theta_{2}$, $a_{s,3}=\sin\theta_{1}\sin\theta_{2}\cos\theta_{3}$, $\ldots$, $a_{s,\mathcal{{Q}}-1}=\prod_{q=1}^{\mathcal{{Q}}-2}\sin\theta_{q}\cos\theta_{\mathcal{{Q}}-1}$, $a_{s,\mathcal{{Q}}}=\prod_{q=1}^{\mathcal{{Q}}-1}\sin\theta_{q}$. (9)
The resultant matrix is represented by a total of
$\mathcal{{P}}=\mathcal{{S}}(2\mathcal{{Q}}-\mathcal{{S}}-1)/2$ parameters
$\boldsymbol{\theta}=(\theta_{1},\theta_{2},\cdots,\theta_{\mathcal{{P}}})^{\top}$.
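A small sketch (assumed code) of the construction (9) follows; it maps $\mathcal{Q}-1$ angles to one row $\boldsymbol{\alpha}_{s}$, which is a unit vector by construction.

```python
import numpy as np

def unit_vector(thetas):
    """Map Q-1 angles to a unit vector in R^Q via (9)."""
    Q = len(thetas) + 1
    a, sin_prod = np.empty(Q), 1.0
    for q, t in enumerate(thetas):
        a[q] = sin_prod * np.cos(t)   # running product of sines times a cosine
        sin_prod *= np.sin(t)
    a[-1] = sin_prod                  # a_Q = product of all the sines
    return a

v = unit_vector([0.4, 1.1])
print(v, np.linalg.norm(v))           # norm is 1 by construction
```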
#### III-C2 Representation of $\mathbf{A}$ with Candidate Rows
In the case that prior information about the form of the constraint matrix is available, this can be incorporated into the estimate using the approach first
proposed by [11]. Here, a suitable representation of the constraint matrix is
$\tilde{\mathbf{A}}=\tilde{\mathbf{\Lambda}}\mathbf{\Phi}$ (10)
where
$\tilde{\mathbf{\Lambda}}\in\mathbb{R}^{\mathcal{{S}}\times\mathcal{{P}}}$ is
a _selection matrix_ (to be estimated during learning) and
$\mathbf{\Phi}\in\mathbb{R}^{\mathcal{{P}}\times\mathcal{{Q}}}$ is a
(possibly, state-dependent) feature matrix. The rows of the latter can take
generic forms such as a series of polynomials, or can contain candidate
constraints if there is prior knowledge of those potentially affecting the
system. For instance, one may choose $\mathbf{\Phi}=\mathbf{J}(\mathbf{x})$,
the Jacobian of the manipulator, so that
$\tilde{\mathbf{A}}=\tilde{\mathbf{\Lambda}}\mathbf{J}(\mathbf{x})$ encodes
constraints on the motion of specific degrees of freedom in the end-effector
space.
### III-D Estimating the Components of the Behaviour
Once $\tilde{\mathbf{N}}$ is estimated, the decomposition of the behaviour
into task-oriented and null space parts is straightforward. The null space component is given as
$\tilde{\mathbf{w}}=\tilde{\mathbf{N}}\boldsymbol{\pi}$ (11)
and the task space part is computed as
$\tilde{\mathbf{v}}=\mathbf{u}-\tilde{\mathbf{w}}.$ (12)
Note that, given the estimate $\tilde{\mathbf{A}}$, $\tilde{\mathbf{N}}$ can
be computed using (3), and an estimate of the task space policy
$\tilde{\mathbf{b}}$ can be obtained using (2).
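For illustration, a minimal numpy sketch (assumed names) of the decomposition (11)-(12) and the recovery of $\tilde{\mathbf{b}}$ via (2):

```python
import numpy as np

def decompose(u, pi, A_est):
    """Split an observed action into task/null components, cf. (11)-(12)."""
    N_est = np.eye(len(u)) - np.linalg.pinv(A_est) @ A_est  # cf. (3)
    w = N_est @ pi                                          # cf. (11)
    v = u - w                                               # cf. (12)
    b = A_est @ v                                           # cf. (2): A v = b
    return v, w, b
```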
### III-E Substituting the Non-task oriented Behaviour
As noted in §I, in many cases the redundancy resolution strategy seen in
demonstrators’ task-oriented behaviour may be ill-suited to the robot
imitator. The proposed method provides a simple means of _retargeting the
behaviour to the robotic system_ , while maintaining the task-oriented parts.
Specifically, this is achieved by replacing the controls (1) with
$\mathbf{u}=\tilde{\mathbf{A}}^{\dagger}\tilde{\mathbf{b}}+\tilde{\mathbf{N}}\boldsymbol{\psi}$
(13)
where $\boldsymbol{\psi}$ is a (possibly, state-dependent) redundancy
resolution policy for the robot. For instance, $\boldsymbol{\psi}$ could be
chosen so as to avoid robot-specific joint limits or singularities [13].
Alternatively, if the task space trajectory is predictable, $\tilde{\mathbf{b}}$ can be used in combination with global optimisation in the null space [24].
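A hedged sketch of the retargeting controller (13) follows; $\boldsymbol{\psi}$, $\tilde{\mathbf{b}}$ and all names are illustrative stand-ins for the learnt quantities.

```python
import numpy as np

def retargeted_control(x, A_est, b_est, psi):
    """u = A~^+ b~(x) + N~ psi(x), cf. (13); A_est may be state-dependent."""
    N_est = np.eye(A_est.shape[1]) - np.linalg.pinv(A_est) @ A_est
    return np.linalg.pinv(A_est) @ b_est(x) + N_est @ psi(x)
```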
## IV Evaluation
In this section, the proposed approach is first examined through a toy
experiment, then a more complex 3-link planar system, before evaluating its
performance in the context of programming by demonstration with a human demonstrator and a 7DoF physical robot. (The data supporting this research are openly available from King’s College London at http://doi.org/[link to be made available on acceptance]. Further information about the data and conditions of access can be found by emailing <EMAIL_ADDRESS>.)
### IV-A Toy Problem
The aim of the first evaluation is to test the robustness of the proposed
method using data from a simple, two-dimensional system with a one-dimensional
task space. The set up (based on [11]) is as follows.
Constrained motion data is gathered from a two-dimensional system with a one-
dimensional constraint $\mathbf{A}=\boldsymbol{\alpha}\in\mathbb{R}^{1\times
2}$. Movement in the task space is defined by the constraint matrix and occurs
in the direction of the unit vector
$\boldsymbol{\hat{\alpha}}=(\cos{\theta},\sin{\theta})$. This direction is
selected from a uniform-random distribution $\theta\sim
U[0^{\circ},180^{\circ}]$ at the start of each trial. The task space policy is
a linear point attractor $b(\mathbf{x})_{i}=r^{*}-r,i\in\\{1\\}$, where $r$ is
the position in the task space and $r^{*}$ is the target point. To simulate
varying tasks, the task space targets are selected randomly $r^{*}\sim
U[-2,2]$ for each trial.
In the below, learning performance is reported for three different secondary
control policies $\boldsymbol{\pi}$, namely,
1. A linear policy: $\boldsymbol{\pi}(\mathbf{x})=-\mathbf{L}\bar{\mathbf{x}}$ where $\bar{\mathbf{x}}\coloneqq(\mathbf{x}^{\top},1)^{\top}$ and $\mathbf{L}=((2,4,0)^{\top},(1,3,-1)^{\top})^{\top}$.
2. A limit cycle: $\dot{\rho}=\rho(\rho_{0}-\rho^{2})$ with radius $\rho_{0}=0.75\,m$ and angular velocity $\dot{\phi}=1\,rad/s$, where $\rho$ and $\phi$ are the polar representation of the state, i.e., $\mathbf{x}=(\rho\cos\phi,\rho\sin\phi)^{\top}$.
3. A non-linear (sinusoidal) policy: $\boldsymbol{\pi}=(\cos z_{1}\cos z_{2},-\sin z_{1}\sin z_{2})^{\top}$ where $z_{1}=\pi x_{1}$ and $z_{2}=\pi(x_{2}+\frac{1}{2})$ (all three policies are implemented in the sketch below).
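The following is a hedged sketch (assumed code) of these three policies; the limit cycle velocity is recovered from the polar dynamics via the chain rule.

```python
import numpy as np

def pi_linear(x):
    L = np.array([[2.0, 4.0, 0.0], [1.0, 3.0, -1.0]])
    return -L @ np.append(x, 1.0)            # pi = -L [x; 1]

def pi_limit_cycle(x, rho0=0.75, phi_dot=1.0):
    rho, phi = np.hypot(x[0], x[1]), np.arctan2(x[1], x[0])
    rho_dot = rho * (rho0 - rho**2)
    # xdot from (rho_dot, phi_dot) by the chain rule
    return np.array([rho_dot * np.cos(phi) - rho * phi_dot * np.sin(phi),
                     rho_dot * np.sin(phi) + rho * phi_dot * np.cos(phi)])

def pi_sinusoidal(x):
    z1, z2 = np.pi * x[0], np.pi * (x[1] + 0.5)
    return np.array([np.cos(z1) * np.cos(z2), -np.sin(z1) * np.sin(z2)])
```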
The training data consists of $150$ data points, drawn uniform randomly across
the space $(\mathbf{x})_{i}\sim U[-1,1],i\in\\{1,2\\}$. For testing, a further
150 data points are used, generated through the same procedure as above. The
constraint is learnt by finding a $\boldsymbol{\theta}$ which minimises (7).
In each trial performance is measured using two metrics. First, the normalised
mean squared error (NMSE) in the estimated null space component is
evaluated as
$E_{\tilde{\mathbf{w}}}=\frac{1}{\mathcal{{N}}}\sum_{n=1}^{\mathcal{{N}}}||(\mathbf{w}_{n}-\tilde{\mathbf{w}}_{n})\oslash\boldsymbol{\sigma}_{\mathbf{u}}||^{2}$
(14)
where $\boldsymbol{\sigma}_{\mathbf{u}}\in\mathbb{R}^{\mathcal{{Q}}}$ is a vector containing the element-wise standard deviation of the observations $\mathbf{u}$, and $\oslash$ denotes _Hadamard_ (element-wise) _division_, i.e., $(\mathbf{A}\oslash\mathbf{B})_{ij}=(\mathbf{A})_{ij}/(\mathbf{B})_{ij}$. Note that, as
$\mathbf{w}_{n}=\mathbf{N}_{n}\boldsymbol{\pi}_{n}$ and
$\tilde{\mathbf{w}}_{n}=\tilde{\mathbf{N}}_{n}\boldsymbol{\pi}_{n}$, this
measure is equal to the normalised projected policy error [17].
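For reference, a small sketch (assumed code) of the metric (14), with the Hadamard division realised as element-wise division by the standard deviation of the observed actions:

```python
import numpy as np

def nmse_w(W_true, W_est, U):
    """E_w of (14); rows of W_true, W_est and U are per-sample vectors."""
    sigma_u = U.std(axis=0)            # element-wise std of the actions
    diff = (W_true - W_est) / sigma_u  # Hadamard division by sigma_u
    return np.mean(np.sum(diff**2, axis=1))
```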
Second, the normalised error measure (7) is evaluated
$E_{\tilde{\mathbf{N}}}=\frac{1}{\mathcal{{N}}||\sigma_{\mathbf{u}}||^{2}}\sum^{\mathcal{{N}}}_{n=1}||\boldsymbol{\pi}_{n}^{\top}\tilde{\mathbf{N}}_{n}(\mathbf{u}_{n}-\boldsymbol{\pi}_{n})||.$
(15)
It indicates the performance of the minimisation function using only information known from the given data, and is therefore usable in practical applications. (To evaluate the fitness of $\tilde{\mathbf{v}}$, note that $\mathbf{v}+\mathbf{w}=\tilde{\mathbf{v}}+\tilde{\mathbf{w}}$ can be rearranged into the identity $\mathbf{v}-\tilde{\mathbf{v}}=-(\mathbf{w}-\tilde{\mathbf{w}})$, i.e., the errors in the two components are equal and opposite. Thus, results for the fitness of $\tilde{\mathbf{w}}$ also hold for $\tilde{\mathbf{v}}$, and $\tilde{\mathbf{v}}$ is therefore omitted.) The experiment is repeated $50$ times.
TABLE I: Test data NMSE in $\tilde{\mathbf{w}}$ (mean$\pm$s.d.)$\times
10^{-15}$ and $E_{\tilde{\mathbf{N}}}$ (mean$\pm$s.d.)$\times 10^{-8}$ over
$50$ trials for different $\boldsymbol{\pi}$.
$\boldsymbol{\pi}$ | $E_{\tilde{\mathbf{w}}}$ | $E_{\tilde{\mathbf{N}}}$
---|---|---
Linear | 1.2147 $\pm$ 2.6458 | 0.4263 $\pm$ 1.3396
Limit-cycle | 0.5462 $\pm$ 0.7043 | 1.1616 $\pm$ 1.1682
Sinusoidal | 0.3020 $\pm$ 0.5741 | 1.6685 $\pm$ 1.9316
The NMSE in $\tilde{\mathbf{w}}$ and $E_{\tilde{\mathbf{N}}}$ are presented in
TABLE I. As can be seen, $\tilde{\mathbf{N}}$ is successfully estimated with
errors in $\tilde{\mathbf{w}}$ less than $10^{-14}$ and $\tilde{\mathbf{N}}$
with less than $10^{-7}$ for all of the policies considered. These low errors show that the constraint matrix can be estimated with very high precision if knowledge of $\boldsymbol{\pi}$ is available. The estimate $\tilde{\mathbf{w}}$ is seen to be, very roughly, around twice as accurate as $E_{\tilde{\mathbf{N}}}$, which shows that the task and null space components can generally be reproduced with greater accuracy than indicated by evaluating $E_{\tilde{\mathbf{N}}}$ alone.
Figure 2:
NMSE in $\tilde{\mathbf{w}}$ and $E_{\tilde{\mathbf{N}}}$ (mean$\pm$s.d. over
$50$ trials) for (a) increasing number of data points for
$E_{\tilde{\mathbf{w}}}$, (b) increasing number of data points for $E_{\tilde{\mathbf{N}}}$, (c) increasing noise levels in $\mathbf{u}$ and (d) increasing noise levels in $\boldsymbol{\pi}$. Mean results are plotted as thick lines and their respective standard deviations are shown as shaded areas of a lighter tone.
To further characterise the performance of the proposed approach, the
experiment is repeated with (i) data sets of varying sizes
($5<\mathcal{{N}}<250$), (ii) varying levels of noise in the training data
$\mathbf{u}_{n}$ represented as $N(0,\epsilon\sigma_{\mathbf{u}}^{2})$
additive white Gaussian noise where $0<\epsilon<0.14$ and (iii) varying
levels of noise in the estimated $\boldsymbol{\pi}_{n}$ where
$0<\epsilon<0.15$ for $50$ trials using the limit cycle policy. The latter
case simulates error in the assumed $\boldsymbol{\pi}$ and thereby allows
evaluation of the proposed approach in face of an inaccurate estimate of the
true underlying redundancy resolution strategy. The results are plotted in
Figure 2.
As shown in Figure 2(a), the NMSE in $\tilde{\mathbf{w}}$ is less than
$10^{-14}$ for both mean and standard deviation with as few as five data
points. As the number of data points increases, so does the accuracy of $\tilde{\mathbf{w}}$, with the performance of the method appearing to plateau after around 25 data points. This shows that the approach can learn constraints with as few as five data points, with optimal performance requiring at least around 25 data points. Figure 2(b) shows errors of less than $10^{-7}$ for
both mean and standard deviation in $\tilde{\mathbf{N}}$. It follows a similar
trend to the previous evaluation with respect to accurate learning with as few
as five data points and optimal performance after at least around 25 data
points. It can also be observed that the learning performance is, very roughly, half that of $E_{\tilde{\mathbf{w}}}$, as was also observed in TABLE I.
Looking at Figure 2(c), there is a clear trend with a degrading mean accuracy
and greater standard deviation as the noise in $\mathbf{u}$ increases. The
mean error in $\tilde{\mathbf{w}}$ stays below 0.1 when $\epsilon\leq 0.14$, and for the mean error in $\tilde{\mathbf{N}}$ when $\epsilon\leq 0.13$. It can also be seen that the error in $\tilde{\mathbf{N}}$ is greater compared to $\tilde{\mathbf{w}}$ in most cases, which is in agreement with prior experiments.
Looking at Figure 2(d), the accuracy decreases, with greater standard
deviation, as the error in the assumed $\boldsymbol{\pi}$ increases. The mean
error in $\tilde{\mathbf{w}}$ stays below 0.1 when $\epsilon<0.05$; for $\epsilon\leq 0.1$, only two mean values produce an error above 0.1. The mean $E_{\tilde{\mathbf{N}}}$ stays below 0.1 when $\epsilon\leq 0.08$ and has only a single instance where the mean value is above 0.1 for $\epsilon\leq 0.1$. Comparing $E_{\tilde{\mathbf{w}}}$ and
$E_{\tilde{\mathbf{N}}}$, while both have similar mean performance, the
standard deviation of $E_{\tilde{\mathbf{N}}}$ is noticeably smaller. This is
expected, as $E_{\tilde{\mathbf{N}}}$ relies on knowledge of the noisy
estimate of $\boldsymbol{\pi}$ to obtain $\tilde{\mathbf{N}}$, whereas
$E_{\tilde{\mathbf{w}}}$ compares this to the true $\boldsymbol{\pi}$ to
present the error within the estimated null space component.
### IV-B Simulated Three Link Planar Arm
The aim of the next evaluation is to test the performance of the proposed
method on a more complex system with non-linear constraints, which more accurately simulates a real world system. The set up is as follows.
Constrained motion data is gathered from a kinematic simulation of a three-
link planar robot with uniform links of length $10\,cm$. The state and action
space refer to the joint angle position and velocities, respectively, i.e.,
$\mathbf{x}:=\mathbf{q}\in\mathbb{R}^{3}$ and
$\mathbf{u}:=\dot{\mathbf{q}}\in\mathbb{R}^{3}$. The task space is described
by the coordinates $\mathbf{r}=(r_{x},r_{y},r_{\theta})^{\top}$ referring to the end-effector position and orientation, respectively. The simulation runs
at a rate of $50\,Hz$.
Joint space motion of the system is recorded as it performs tasks under
different constraints in the end-effector space. Specifically, a task
constraint at state $\mathbf{x}$ is described through
$\mathbf{A}(\mathbf{x})=\mathbf{\Lambda}\mathbf{J}(\mathbf{x})$ (16)
where $\mathbf{J}\in\mathbb{R}^{3\times 3}$ is the manipulator Jacobian, and
$\mathbf{\Lambda}\in\mathbb{R}^{3\times 3}$ is a diagonal selection matrix,
with elements
$\boldsymbol{\lambda}=(\lambda_{x},\lambda_{y},\lambda_{\theta})^{\top}$ along
the diagonal, indicating which coordinates should be included in
($\lambda_{i}=1$) or excluded from ($\lambda_{i}=0$) the task space. In the
results reported below, the following six selection matrices are considered:
(i) $\mathbf{\Lambda}_{x}$ where $\boldsymbol{\lambda}=(1,0,0)^{\top}$, (ii) $\mathbf{\Lambda}_{y}$ where $\boldsymbol{\lambda}=(0,1,0)^{\top}$, (iii) $\mathbf{\Lambda}_{\theta}$ where $\boldsymbol{\lambda}=(0,0,1)^{\top}$, (iv) $\mathbf{\Lambda}_{x,y}$ where $\boldsymbol{\lambda}=(1,1,0)^{\top}$, (v) $\mathbf{\Lambda}_{x,\theta}$ where $\boldsymbol{\lambda}=(1,0,1)^{\top}$, and (vi) $\mathbf{\Lambda}_{y,\theta}$ where $\boldsymbol{\lambda}=(0,1,1)^{\top}$.
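To illustrate (16), the following sketch (assumed code) builds the constraint matrix from a standard planar 3-link Jacobian and a diagonal selection matrix; the link lengths are the $10\,cm$ values used in the simulation.

```python
import numpy as np

def jacobian_planar_3link(q, l=(0.1, 0.1, 0.1)):
    """3x3 Jacobian mapping qdot to (xdot, ydot, thetadot); lengths in m."""
    s = np.cumsum(q)  # cumulative angles: q1, q1+q2, q1+q2+q3
    Jx = [-sum(l[j] * np.sin(s[j]) for j in range(i, 3)) for i in range(3)]
    Jy = [sum(l[j] * np.cos(s[j]) for j in range(i, 3)) for i in range(3)]
    return np.array([Jx, Jy, [1.0, 1.0, 1.0]])

Lam_xy = np.diag([1.0, 1.0, 0.0])        # case (iv): constrain x and y
q = np.deg2rad([5.0, 95.0, 5.0])
A = Lam_xy @ jacobian_planar_3link(q)    # constraint matrix, cf. (16)
```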
To simulate demonstrations of reaching behaviour, the robot starts from a joint configuration chosen uniform-randomly, $q_{1}\sim U[0^{\circ},10^{\circ}]$, $q_{2}\sim U[90^{\circ},100^{\circ}]$, $q_{3}\sim U[0^{\circ},10^{\circ}]$, and its end-effector moves to a task space target $\mathbf{r}^{*}$ following a linear point attractor policy
$\mathbf{b}(\mathbf{x})=\mathbf{r}^{*}-\mathbf{r}$ (17)
where $\mathbf{r}^{*}$ is drawn uniformly from $r^{*}_{x}\sim U[-1,1]$,
$r^{*}_{y}\sim U[0,2]$, $r^{*}_{\theta}\sim U[0^{\circ},180^{\circ}]$. As the
secondary control policy, a simple point attractor of the form
$\boldsymbol{\pi}(\mathbf{x})=\beta(\mathbf{x}^{*}-\mathbf{x})$ (18)
is used, where $\mathbf{x}^{*}$ is arbitrarily chosen as $x_{1}=10^{\circ}$,
$x_{2}=-10^{\circ}$, $x_{3}=10^{\circ}$ and $\beta=1$. For each of the cases
(i)–(vi) above, $100$ trajectories are generated each containing $50$ data
points, with $50$% of the samples provided for learning and the remainder
reserved as unseen testing data. Finally, this whole experiment is repeated
$50$ times.
TABLE II: Mean $\pm$ s.d. $E_{\tilde{\mathbf{w}}}$ and $E_{\tilde{\mathbf{N}}}$ on testing data for the different task constraints over $50$ trials. The figures for $E_{\tilde{\mathbf{w}}}$ are (mean$\pm$s.d.)$\times 10^{-11}$ and for $E_{\tilde{\mathbf{N}}}$ are (mean$\pm$s.d.)$\times 10^{-7}$.
$\mathbf{\Lambda}$ | $E_{\tilde{\mathbf{w}}}$ | $E_{\tilde{\mathbf{N}}}$
---|---|---
$\mathbf{\Lambda}_{x}$ | 0.0741 $\pm$ 0.0291 | 1.1527 $\pm$ 2.3654
$\mathbf{\Lambda}_{y}$ | 12.1467 $\pm$ 18.6578 | 11.1195 $\pm$ 9.1619
$\mathbf{\Lambda}_{\theta}$ | 0.1496 $\pm$ 0.3732 | 0.2445 $\pm$ 0.1700
$\mathbf{\Lambda}_{x,y}$ | 5.6114 $\pm$ 10.3401 | 5.1969 $\pm$ 6.5849
$\mathbf{\Lambda}_{x,\theta}$ | 0.0139 $\pm$ 0.0320 | 0.8522 $\pm$ 1.0982
$\mathbf{\Lambda}_{y,\theta}$ | 0.0476 $\pm$ 0.1357 | 0.6719 $\pm$ 0.6424
The NMSE in $\tilde{\mathbf{w}}$ and $E_{\tilde{\mathbf{N}}}$ on the testing
data are presented in TABLE II. As shown, the constraints are successfully
learnt with $E_{\tilde{\mathbf{w}}}$ less than $10^{-9}$ and
$E_{\tilde{\mathbf{N}}}$ less than $10^{-5}$ in all cases. The
$E_{\tilde{\mathbf{w}}}$ is roughly half of the $E_{\tilde{\mathbf{N}}}$ which
is in agreement with previous experiments. Overall, the constraint matrix can
be accurately estimated using data from the observed demonstrations and
knowledge of the control policy, without having to explicitly know how the
constraints affect the system’s motions.
To further evaluate the performance of the proposed method, it is compared to
the current state-of-the-art. As discussed in §II-C, while there are many
applicable methods to learn the constraint matrix with acceptable performance
including [11], they all rely on the method proposed in [12] for separation of
the observed actions. Following [12], Figure 3 shows an example of using a
learnt constraint to generate a new trajectory. In this experiment, the new
trajectory is reproduced using $\tilde{\mathbf{A}}$ which is learnt from a
separate training data set, $\mathbf{u}$, $\mathbf{x}$ and $\boldsymbol{\pi}$,
where the latter three are given. Firstly, training data consists of one
trajectory of length $2\,s$ (100 data points) with a random constraint in
$x,y$. Using the state-of-the-art approach in [12], $\mathbf{u}$ is separated
into the task and null space components. Now that the null space component is
learnt, it is used with $\mathbf{u}$, $\mathbf{x}$ and the approach in [11] to
obtain $\tilde{\mathbf{A}}$. On the other hand, the novel approach uses
$\mathbf{u}$, $\mathbf{x}$ and $\boldsymbol{\pi}$ to directly obtain
$\tilde{\mathbf{A}}$. Now that both approaches have resulted in a learnt
constraint, a ground-truth test trajectory is produced subject to the same
true constraints in $x,y$ of the training data. Its start pose is
$q_{1}=90^{\circ},q_{2}=45^{\circ},q_{3}=-20^{\circ}$ and the $x,y$ position
of the end-effector moves towards the target point $(15,10)^{\top}$ which
reaches convergence in 4 seconds. To compare the novel and literature
approach, both use $\mathbf{x}$ of this ground-truth data to start at the same
position. Both produce $\tilde{\mathbf{v}}$ with their respectively learnt
$\tilde{\mathbf{A}}$ and use this with $\mathbf{u}$ to estimate
$\tilde{\mathbf{b}}$ following (2). The literature approach already has
$\tilde{\mathbf{w}}$ which was obtained using [12]. The novel approach uses
(3) to obtain $\tilde{\mathbf{N}}$ and uses the known $\boldsymbol{\pi}$ to
produce $\tilde{\mathbf{w}}$. Both approaches use this information to
iteratively reproduce the ground-truth data and similarly run for 4 seconds
which is shown in Figure 3(a) and Figure 3(b). This experiment is repeated for
a constraint in $\theta$ shown in Figure 3(c) and Figure 3(d).
As can be seen in Figure 3(a) and Figure 3(b), the proposed approach is shown
alongside the true policy as well as the current state-of-the-art [12] for
comparison. The new method follows the true joint trajectories accurately. The state-of-the-art approach, on the other hand, takes a different route in both the end-effector trajectory and the joint space, leading to a different target. In Figure 3(c) and Figure 3(d), the task space target is set as the
orientation of the end-effector which moves towards the target angle of
$45^{\circ}$. The novel approach accurately reproduces the movements under
this 1DoF constraint unlike the state-of-the-art method in [12] and [11].
Figure 3:
Reproducing the ground-truth movement (dotted-black) in both learnt task and
null space using the proposed method (solid-red) and the state-of-the-art
method [12] (dashed-blue) to learn the null space component and constraint
$\mathbf{A}$ to obtain $\mathbf{b}$ [11]. (a) Arm visualisation for example
task under constraint space $\mathbf{r}=(x,y)$, (b) Joint angle positions
during example movement in task space $\mathbf{r}=(x,y)$, (c) Arm
visualisation for example task under constraint space $\mathbf{r}=\theta$ and
(d) Joint angle positions during example movement in task space
$\mathbf{r}=\theta$.
Figure 4: Retargeting behaviour with an imitator robot (a) with the same
embodiment as the demonstrator but located near to an obstacle, and (b) with
a different kinematic structure. The demonstrated movement is illustrated in
red, while that of the imitators is in blue. The yellow region indicates an
obstacle.
Figure 5: Sample flow of obtaining natural demonstration data from
user. Video of the subject is recorded during the action of repeatedly opening
and closing drawers of various heights to different lengths. After collection,
the video feed is overlaid with a skeleton using Openpose [25] to obtain the
positions of body parts such that joint angles and trajectories can be
extracted into Matlab. The shoulder, elbow and wrist joint angles are used to
learn the constraints contained within the set of demonstrations.
As mentioned in §III-E, the proposed approach allows _retargeting_ of task-
oriented behaviour by substituting the demonstrator’s redundancy resolution
strategy with one better-suited to the robot. More concretely, consider the
scenario where it is desired to reproduce a demonstrated reaching movement
(i) with a robot with identical embodiment to the demonstrator, but located right next to an obstacle (such that there is a risk of collision, see Figure 4(a)), and (ii) with a robot that has a different kinematic structure to the demonstrator (here, different numbers and lengths of links). In the
following, the feasibility of retargeting to these scenarios is assessed. As
the reaching movement to be reproduced, a typical trajectory is taken from the
training data (given in the absence of any obstacles) described above.
Specifically, the example chosen uses
$\mathbf{\Lambda}=\mathbf{\Lambda}_{x,y}$, $\mathbf{w}$ derived from the
policy (18), $\mathbf{r}^{*}=(-9.12,3.89)^{\top}$ and
$\mathbf{q}=(8.67^{\circ},94.18^{\circ},-2.32^{\circ})^{\top}$. This movement
is _retargeted_ by using the learnt $\tilde{\mathbf{A}}$ to derive
$\tilde{\mathbf{b}}$, and then applying (13) with a replacement null space
control policy.
In the case of the robot located next to an obstacle, retargeting simply
consists of selecting an appropriate null-space control policy
$\boldsymbol{\psi}$. Here,
$\boldsymbol{\psi}(\mathbf{x})=\beta_{r}(\mathbf{x}^{*}_{r}-\mathbf{x})$ is
used, where $\beta_{r}=5$ and
$\mathbf{x}^{*}_{r}=(-320^{\circ},100^{\circ},50^{\circ})^{\top}$. The
resultant movements _with_ and _without_ retargeting are shown in Figure 4(a)
(blue and red figures, respectively). As can be seen, were the arm to directly imitate the demonstration (red figure), a collision would occur (the second and third links overlap with the yellow region). This could not only jeopardise
the success of the task but also potentially cause damage to the system. In
contrast, starting at the same start point, the retargeted controller (blue
figure) successfully completes the task (converges to the same target point in
end-effector space), however, by resolving its redundancy differently it
avoids the collision.
In the case of the robot with the different kinematic structure, retargeting
is achieved as follows. As noted above, the constraint in this system is
represented in the form (10) where the feature matrix is selected as the
Jacobian of the demonstrator’s embodiment (i.e., $\mathbf{\Phi}=\mathbf{J}$)
and the selection matrix $\tilde{\mathbf{\Lambda}}$ is learnt. Since the rows
of $\mathbf{\Phi}$ represent meaningful quantities (here, the relationship
between the joint space and the end-effector position and orientation) a
correspondence is drawn between these and the equivalent quantities for the
new arm (e.g., if the first row of $\mathbf{J}$ relates to the Jacobian for
the $r_{x}$ coordinate, the corresponding row of the Jacobian $\mathbf{J}_{r}$
for the imitator is selected), and $\mathbf{A}_{r}$ is constructed
accordingly. Substituting into (13) gives the controller
$\mathbf{u}=\mathbf{A}_{r}^{\dagger}\tilde{\mathbf{b}}+(\mathbf{I}-{\mathbf{A}_{r}}^{\dagger}\mathbf{A}_{r})\boldsymbol{\psi}.$
(19)
In this evaluation, the imitator robot is taken to be a $7$-DoF arm with link
lengths $10$, $5$, $5$, $5$, $5$, $5$ and $10\,cm$, and
$\boldsymbol{\psi}=\beta_{r}(\mathbf{x}^{*}_{r}-\mathbf{x})$, where
$\beta_{r}=1$ and
$\mathbf{x}^{*}_{r}=(-10^{\circ},-10^{\circ},-10^{\circ},-10^{\circ},-10^{\circ},-10^{\circ},-10^{\circ})^{\top}$.
The start posture is chosen such that the initial end-effector position
matches that of the demonstration (i.e.,
$\mathbf{q}=(0^{\circ},90^{\circ},-90^{\circ},85^{\circ},90^{\circ},-1^{\circ},-81.5^{\circ})^{\top}$).
The resultant movement is shown in Figure 4(b). As can be seen, despite the
significant difference in embodiment, the task-oriented part of the movement
is effectively reproduced, while the imitator-specific null space controller
appropriately handles the added redundancy.
### IV-C Real World Human Arm
The aim of this final experiment is to test the performance of the proposed
approach in the real world using data from a human demonstrator. The set up is
as follows.
The task chosen for this experiment is to teach a robotic system the skill of
opening and closing a three-tiered set of drawers (see Figure 5). To collect
data, the demonstrator stands in front of the drawers at approximately an
arm’s length distance. The starting state of each drawer is randomised
(varying from anywhere between fully closed to completely open). Starting with the top drawer, it is moved a random distance in the opening or closing direction. This is repeated for each of the drawers, producing a total of three trajectories which are used for learning a model. A side view of this is
recorded from a single $12$MP phone camera (with a sensor of $1.4\mu$m pixels
and aperture of f/$1.7$) placed roughly $1\,m$ away from the demonstrator. The
video data is then post-processed to overlay a skeleton using Openpose [25] to
estimate the joint lengths and extract the joint angles and velocities during
movement. Constrained motion data of movement in the sagittal plane is
gathered from three joints of the demonstrator’s arm, by observing flexion and
extension of the shoulder, elbow and wrist (as well as abduction and adduction
of the wrist depending on the forearms pronation/supination. This yields an
average of $11$ frames per trajectory which translates into 11 data points.
Now that the data is collected, to set it up for learning, the joint angles of
the demonstrator are treated as the state
$\mathbf{x}:=\mathbf{q}\in\mathbb{R}^{3}$ and the joint velocities as the
action $\mathbf{u}:=\dot{\mathbf{q}}\in\mathbb{R}^{3}$. The task space is
described by the end-effector coordinates
$\mathbf{r}=(r_{x},r_{y},r_{\theta})^{\top}$ referring to the hand position
and orientation, respectively. The task constraints are described in the form
(16), where $\mathbf{J}\in\mathbb{R}^{3\times 3}$ is the manipulator Jacobian
of the demonstrator. To construct the Jacobian which simulates the
demonstrator as a system, the link lengths are calculated from the skeleton in
Openpose for each frame. As these can vary from frame to frame, depending on occlusions in the demonstrator’s pose and imprecision of Openpose, the mean of the link lengths over every frame of the 3 trajectories is used. Moreover, when translating recorded movements from pixels to the $x$ and $y$ axes in Matlab, the link lengths are extremely large, with the upper arm measuring around 30 metres. Therefore, these link lengths are proportionally reduced to around $10\,cm$ by dividing each by 300.
$\mathbf{\Lambda}\in\mathbb{R}^{3\times 3}$ is the selection matrix specifying
the coordinates to be constrained. In this experiment,
$\mathbf{\Lambda}_{x,y}$ represents the ground truth, since the end-effector
(demonstrator’s hand) must be maintained at the height $y$ of the drawer
handle ($y$ changes in each demonstration depending on which drawer is being manipulated), and $x$ is the task space in which the opening or closing
action occurs. It is assumed that the control policy in $\mathbf{w}$ is
resolved by the subject with comfort in mind and moves towards a target joint
pose following (18) with
$\mathbf{x}^{*}=(-90^{\circ},90^{\circ},0^{\circ})^{\top}$. This posture is
chosen as it lies in the middle of each joint's optimal range following RULA,
resulting in a high score according to RULA’s standard ergonomic assessment
procedure [8] (see §II).
Since the human demonstrations are modelled as a system with its respective
Jacobian matrix, $\mathbf{u}$, $\mathbf{x}$ and $\boldsymbol{\pi}$, it can be
evaluated like any other robotic system presented so far [26], [27]. The
experiment is repeated $45$ times yielding a total of $135$ trajectories ($45$
repetitions for each of the 3 drawers). This is done to verify that the
performance of learning the constraints is consistent.
The true decomposition of the behaviour, (i.e., $\mathbf{v}$ and $\mathbf{w}$)
are not known, however they can be estimated using $\mathbf{\Lambda}_{x,y}$.
The initial aspect that can be evaluated is $E_{\tilde{\mathbf{w}}}$; however, the variance in $\mathbf{u}$ is quite small and thus $E[\tilde{\mathbf{N}}]$ from (7) is reported instead, which is $6.3329\pm 5.7832$ (mean$\pm$s.d.). Looking at the learnt $\mathbf{\Lambda}$, the correct
constraints are consistently learnt using the novel approach. To demonstrate
this, the learnt constraint is used to produce $\tilde{\mathbf{b}}$ and the
task-oriented trajectory is reproduced on the Sawyer, a 7DoF revolute physical
robotic system with a maximum reach of $1260\,mm$ and precision of $\pm 0.1\,mm$. A drawer-closing trajectory is selected from the human demonstration data.
$\mathbf{J}\in\mathbb{R}^{3\times 7}$ is the Sawyer's manipulator Jacobian, where a correspondence is drawn between its rows and those of the human arm's decomposed Jacobian.
The start pose is
$\mathbf{q}=(-3.89^{\circ},42.82^{\circ},25.48^{\circ},-76.96^{\circ},-8.23^{\circ},32.82^{\circ},88.93^{\circ})^{\top}$.
$\boldsymbol{\psi}=\beta_{r}(\mathbf{x}^{*}_{r}-\mathbf{x})$, where
$\beta_{r}=1$ and
$\mathbf{x}^{*}_{r}=(-70.29^{\circ},-32.47^{\circ},-15.92^{\circ},53.24^{\circ},-32.55^{\circ},-17.48^{\circ},-68.23^{\circ})^{\top}$. The resulting trajectory is presented in Figure 6. As
shown, the Sawyer is able to reproduce the task space component of closing the
drawer using its own embodiment and a different $\boldsymbol{\pi}$ to resolve
redundancy subject to the same task constraints.
Figure 6: 3DoF Human to 7DoF Sawyer Robot task transfer
## V Conclusion
In this paper, a method based on programming by demonstration is proposed to
learn null space policies from constrained motion data. The main advantage of this is the retargeting of not only the system's redundancy resolution, but also of the entire system itself, which can be replaced with another of a different embodiment that repeats the task accurately while being subject to the same constraints. Additionally, the proposed approach can be used to learn directly from observed actions without the need to first decompose motion data into task and null space components.
The effectiveness of the method has been demonstrated in a simulated toy
experiment, a 3-link simulation and a real world experiment using data
collected from a human demonstrator, validated through task-oriented
reproduction on a 7DoF physical robot. All experiments are in agreement that
the constraints can be learnt from demonstration. In addition, the evaluations
show that the method can (i) learn with very little data
($E_{\tilde{\mathbf{w}}}$ below $10^{-14}$ with just five data points) and (ii) handle noise ($E_{\tilde{\mathbf{w}}}$ below $10^{-1}$ with normalised additive white Gaussian noise below 0.15). In a comparative experiment, the
approach is shown to outperform the current state-of-the-art approach in a
simulated 3DoF robot manipulator control problem where motions are reproduced
using the learnt constraints. It is also used to demonstrate retargeting of a system's null space component to resolve redundancy such that an obstacle can
be avoided. Moreover, retargeting through the learnt constraints from the
simulated 3DoF demonstrator to a 7DoF robot imitator of different embodiment
is shown. Finally, the approach is verified in a real world experiment where
demonstrations from a human subject are used to consistently learn the
constraint matrix, which allows for accurate decomposition of the demonstrated
task space and task-oriented reproduction on the Sawyer, a 7DoF physical robot
with a different embodiment.
Future work will look at conducting a study with naïve subjects, and at a more in-depth treatment of estimating control policies to resolve redundancy.
## References
* [1] B. D. Argall, S. Chernova, M. Veloso, and B. Browning, “A survey of robot learning from demonstration,” _Robotics and Autonomous Systems_ , vol. 57, no. 5, pp. 469–483, 2009.
* [2] A. Sena and M. Howard, “Quantifying teaching behaviour in robot learning from demonstration,” _Int. J. Robotics Res._ , 2020.
* [3] H. Cruse and M. Brüwer, “The human arm as a redundant manipulator: The control of path and joint angles,” _Biological cybernetics_ , vol. 57, pp. 137–44, 02 1987.
* [4] R. Ranganathan, A. Adewuyi, and F. A. Mussa-Ivaldi, “Learning to be lazy: Exploiting redundancy in a novel task to minimize movement-related effort,” _Journal of Neuroscience_ , vol. 33, no. 7, pp. 2754–2760, 2013.
* [5] J. Soechting, C. Buneo, U. Herrmann, and M. Flanders, “Moving effortlessly in three dimensions: does donders’ law apply to arm movement?” _Journal of Neuroscience_ , vol. 15, no. 9, pp. 6271–6280, 1995.
* [6] A. Shafti, A. Ataka, B. U. Lazpita, A. Shiva, H. A. Wurdemann, and K. Althoefer, “Real-time robot-assisted ergonomics,” in _ICRA_ , 2019.
* [7] M. Robertson, B. Amick, K. DeRango, T. Rooney, L. Bazzani, R. Harrist, and A. Moore, “The effects of an office ergonomics training and chair intervention on worker knowledge, behavior and musculoskeletal risk,” _Applied ergonomics_ , vol. 40, pp. 124–35, 04 2008.
* [8] M. Middlesworth, “A step-by-step guide to the rula assessment tool,” 2019. [Online]. Available: https://ergo-plus.com/rula-assessment-tool-guide/
* [9] Y. Zhao, A. Sena, F. Wu, and M. Howard, “A framework for teaching impedance behaviours by combining human and robot ‘best practice’,” in _2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)_. IEEE, 10 2018, pp. 3010–3015.
* [10] M. Howard, D. Braun, and S. Vijayakumar, “Transferring human impedance behavior to heterogeneous variable impedance actuators,” _IEEE Transactions on Robotics_ , vol. 29, no. 4, pp. 847–862, 2013.
* [11] H. C. Lin, P. Ray, and M. Howard, “Learning task constraints in operational space formulation,” in _ICRA_ , 2017, pp. 309–315.
* [12] C. Towell, M. Howard, and S. Vijayakumar, “Learning nullspace policies,” in _IROS_ , 2010, pp. 241–248.
* [13] J. Manavalan and M. Howard, “Learning singularity avoidance,” in _IEEE/RSJ International Conference on Intelligent Robots and Systems_ , 2019\.
* [14] L. Armesto, J. Bosga, V. Ivan, and S. Vijayakumar, “Efficient learning of constraints and generic null space policies,” in _ICRA_ , 2017, pp. 1520–1526.
* [15] H. C. Lin, S. Rathod, and M. Howard, “Learning state dependent constraints,” in _IEEE T-Ro_ , 2016.
* [16] J. Manavalan and M. Howard, “Learning null space projections fast,” in _ESANN_ , 2017.
* [17] H.-C. Lin, M. Howard, and S. Vijayakumar, “Learning null space projections,” in _ICRA_ , 2015, pp. 2613–2619.
* [18] A. Hussein, M. Gaber, E. Elyan, and C. Jayne, “Imitation learning: A survey of learning methods,” _ACM Computing Surveys_ , vol. 50, 04 2017.
* [19] S. Schaal, A. Ijspeert, and A. Billard, “Computational approaches to motor learning by imitation.” _Philos Trans R Soc Lond B Biol Sci_ , vol. 358, no. 1431, pp. 537–547, March 2003.
* [20] A. Liégeois, “Automatic supervisory control of the configuration and behavior of multibody mechanisms,” _IEEE Transactions on Systems, Man, and Cybernetics_ , vol. 7, pp. 868–871, 1977.
* [21] O. Khatib, “A unified approach for motion and force control of robot manipulators: the operational space formulation,” _IEEE Journal of Robotics and Automation_ , vol. RA-3, no. 1, pp. 43–53, December 1987.
* [22] L. Mcatamney and E. Nigel Corlett, “Rula: A survey method for the investigation of work-related upper limb disorders,” _Applied ergonomics_ , vol. 24, pp. 91–9, 05 1993.
* [23] L. McAtamney and E. N. Corlett, “Rula: a survey method for the investigation of work-related upper limb disorders,” _Applied Ergonomics_ , vol. 24, no. 2, pp. 91 – 99, 1993.
* [24] Y. Nakamura, _Advanced Robotics: Redundancy and Optimization_. Addison-Wesley Publishing Company, 1991.
* [25] Z. Cao, G. Hidalgo, T. Simon, S.-E. Wei, and Y. Sheikh, “OpenPose: realtime multi-person 2D pose estimation using Part Affinity Fields,” in _arXiv preprint arXiv:1812.08008_ , 2018.
* [26] B. Lee, “A mouse with two optical sensors that eliminates coordinate disturbance during skilled strokes,” _Human-Computer Interaction_ , vol. 30, 03 2014.
* [27] A. Gams and J. Lenarcic, “Humanoid arm kinematic modeling and trajectory generation,” in _IEEE/RAS-EMBS International Conference on Biomedical Robotics and Biomechatronics_ , 2006, pp. 301 – 305.
# Proportional Power Sharing Control of Distributed Generators in Microgrids
Farzad Aalipour, Tuhin Das
## Abstract
This research addresses distributed proportional power sharing of inverter-
based Distributed Generators (DGs) in microgrids under variations in maximum
power capacity of DGs. A microgrid can include renewable energy resources such
as wind turbines, solar panels, fuel cells, etc. The intermittent nature of
such energy resources causes variations in their maximum power capacities.
Since DGs in microgrids can be regarded as Multi-Agent-Systems (MASs), a
consensus algorithm is designed to have the DGs generate their output power in
proportion to their maximum capacities under capacity fluctuations. A change
in power capacity of a DG triggers the consensus algorithm which uses a
communication map at the cyber layer to estimate the corresponding change.
During the transient time of reaching a consensus, the delivered power may not
match the load power demand. To eliminate this mismatch, a control law is
augmented that consists of a finite-time consensus algorithm embedded within
the overarching power sharing consensus algorithm. The effectiveness of the
distributed controller is assessed through simulation of a microgrid
consisting of a realistic model of inverter-based DGs.
## Keywords
Consensus, Eigenvalue Perturbation, Distributed Control, Finite-Time
Consensus, Inverter-based Microgrid, Proportional Power Sharing, Renewable
Energy, Transient Control.
## 1 Introduction
Environmentally sustainable electrical energy production depends on renewable
energy resources. In this regard, a significant amount of research has been
undertaken within the past few decades [1, 2, 3]. Conventionally, control of
electric power systems and the main power grid was accomplished through a few
central controllers. Through emerging renewable energy plants, intelligent
loads located in the demand side and computational advances, distributed
energy production and management have become viable. DGs as distributed energy
production units, together with local loads, constitute a microgrid, which is
distinct from the main power grid. Microgrids operate in two different operational
modes called grid-connected and islanding. A microgrid is said to work in
grid-connected mode when it is connected to the main grid via a tie line at
the point of common coupling (PCC) where there exists bidirectional power flow
from or into the main grid [4]. In contrast, a microgrid in islanding mode
generates power for local loads [5]. To deploy small-scale DGs including
photovoltaic (PV) cells, wind turbines, fuel cells and energy storage systems
(ESSs) in microgrids, power electronics inverters are vital interfaces which
connect DGs to the power buses [6].
There have been extensive studies conducted on control of inverter-based
microgrids during the past decades [7, 8]. The control strategies can be
classified in different categories including frequency and voltage control or
power control [9]. Applying these methods also depends on the microgrid’s mode
of operation. For instance, in the grid-connected mode the frequency and
voltage are imposed by the main grid. However, voltage and frequency control
are vital in islanding mode.
In this work, power sharing control of microgrids in grid-connected mode is
studied [11]. The problem of power sharing has been studied from the aspect of
equal power sharing in [12, 13]. Since DGs possess different capacities, the
DGs with higher capacities can share more power than the DGs with lower
capacities. The power sharing problem becomes challenging under intermittent
nature of power resources. Intermittency causes fluctuations in maximum
capacity of DGs, which leads to changes in their output power. Thereby, the
total power fluctuates, and the load power may not always be maintained. These
fluctuations can be addressed by deploying electrical energy storage (EES) or
managing the DGs to flexibly address the variations in their capacities.
Other approaches proposed to address the power sharing problem in microgrids
can be categorized either as proportional power sharing [9, 10, 14], or
economic dispatch problem (EDP) [15]. The studies [14] and [16] have proposed
techniques for proportional power sharing. Here, proportional power sharing is
defined as sharing the load among DGs such that each individual DG shares a
fraction of the load in proportion to its maximum capacity. A distributed
droop control scheme based on nonlinear state feedback proposed in [9]
guarantees that DGs share reactive power proportionally. In [32], through a
distributed voltage control, the active and reactive power are shared
proportionally, for a microgrid with inductive impedance loads. In addition,
[33] formulates the proportional power sharing as a tracking problem and
solves it for grid-connected spatially concentrated microgrids. However, [9]
and [32] study islanding mode, and none of [9, 32, 33] have covered the power
mismatch during transient time of their proposed strategies.
On the other hand, EDP is a method to control the power flow among different
DGs optimally, where optimality implies minimizing a quadratic performance
index assigned to each DG as the cost of their generated power. EDP has been
studied through different techniques including the population dynamic method
[17], and the lambda iteration [18]. While these methods have been formulated
within a centralized control framework, distributed version of EDP can be
found in [15, 19].
Motivated by systems with cyber-physical layers, the power sharing control in
this study is devised in two layers. The physical layer that consists of DGs,
loads, measurement units, etc., is where the power control loop of each DG is
established to track the input power command issued from the cyber layer. DGs
have their corresponding agents in the cyber layer. Thus, the ideas of MASs
can be utilized to establish the DGs’ controllers and their interactions. The
agents communicate through a communication network in the cyber layer.
The agents can choose different strategies to control the DGs including
centralized, decentralized or distributed formats. When the DGs are located in
a small region, it is viable to apply centralized controllers. As the number
of DGs increases, while geographically scattered in a wide area, applying
centralized controllers faces deficiencies for several reasons. Firstly, a
centralized controller is not reliable because all DGs depend on a single
controller, whose malfunction deteriorates the performance of the microgrid
or may result in instability. Besides, in centralized coordinated
control, transferring data to a control center and issuing control signals
back to DGs require high bandwidth communication, which is not economically
efficient, or technically secure, and is prone to failure [20]. On the
contrary, distributed control techniques require considerably lower bandwidth
which makes the communications among the DGs economically viable.
Decentralized controllers are applicable locally, however it does not exploit
cooperation of DGs [21]. Therefore, they may not perform efficiently where the
global information and cooperation are required. In contrast, a distributed
control scheme encompasses the plug-and-play feature, which makes it more
flexible compared to centralized and decentralized controllers [22]. The
centralized control scheme depends on global information while DGs in
distributed control exchange information exclusively with the DGs in their
neighborhood. In this study, the well-known consensus algorithm is utilized to
design a distributed controller for the power sharing control problem.
Considering a large number of DGs scattered in a wide area, it takes agents
time to transfer all the required signals. Therefore, it is inevitable to have
communication delays in the distributed controllers [23]. The ranges of these
delays are from tens to hundreds of milliseconds [24]. The delays may result
in prolonging the convergence time of consensus algorithm and potentially lead
to microgrid instability [16]. The delays can be reduced through increasing
the convergence rate of consensus algorithms utilizing approaches including
multiplying the weights of the communication graph with a large constant, or
through an optimization of the weights [25].
This study considers a microgrid operating in grid-connected mode using
proportional power sharing. Proportional power sharing makes the microgrid
adaptable to intermittency of power sources. The grid-connected operation
enables the microgrid to transmit excess power to the main grid, while relying
on it for frequency and voltage control. The contributions of this paper are:
1) A distributed consensus algorithm is designed, by which the DGs are able to
estimate the microgrid’s power capacity under perturbations in the power
capacity of individual DGs. Convergence rate of the algorithm is studied and
bounds on allowable perturbations are derived based on practical constraints.
2) Multiple proportional power sharing strategies are proposed, to meet the
demanded power as consensus is reached. The strategies are executed in a
distributed manner. They ensure that the microgrid satisfies load power
variations dynamically and allow excess power to be transmitted to the main
grid.
3) Items (1) and (2) enable proportional power sharing. However, during
convergence of the consensus algorithm, a power mismatch occurs between the
generated and demanded power. Although the rate of convergence can be
increased, the transient power mismatch remains inevitable. To eliminate this
mismatch, a fully distributed finite-time consensus algorithm, based on [27],
is additionally augmented.
4) A realistic simulation is conducted, using MATLAB’s Simscape toolbox, with
a clear primary controller scheme and corresponding parameters. Grid and DG
parameters used in simulations are also given. The simulations and the results
can be reproduced by readers, allowing further enhancements in future
research.
The rest of this paper is organized as follows. The preliminary definitions of
technical terms are explained in section two. Then, proportional power sharing
is defined in the third section. In the fourth section, the consensus
algorithm is developed through which the DGs are able to update their
information about the total microgrid power capacity following a change in a
DG’s capacity. The overarching consensus algorithm and the embedded transient
controller are proposed and elaborated in the same section. The fifth section
discusses the cyber and physical layers which control the output power of DGs.
Next, simulation results are provided in section six to illustrate the
effectiveness of the proposed control plan in response to different variations
in capacity of a DG. Finally, concluding remarks are provided and references
are listed.
## 2 Preliminary Definitions
We define the graph $\mathcal{G}$ as the set pair
$(\mathcal{\nu},\mathcal{\varepsilon})$ having vertices set $\mathcal{\nu}$
and edge set $\mathcal{\varepsilon}$. Let the number of vertices in
$\mathcal{G}$ be $N$, and let the set $\mathcal{\varepsilon}$ consist of the
vertices pairs $(i,j)$ for which there exists an edge that connects $j$ to
$i$, with $i,j=1,2,\cdots,N$ and $i\neq j$. The intended graph in this study
is undirected or bidirectional graph, where the signals flow along edges in
both directions, i.e. if $(i,j)\in\mathcal{\varepsilon}$, then
$(j,i)\in\mathcal{\varepsilon}$. The adjacency matrix associated with the
graph is $A=[a_{ij}]\in R^{N\times N}$ where each element $a_{ij}>0$ if
$(i,j)\in\mathcal{\varepsilon}$, otherwise $a_{ij}=0$. As stated above, in the
bidirectional graph $\mathcal{G}$, if $(i,j)\in\mathcal{\varepsilon}$, then
$(j,i)\in\mathcal{\varepsilon}$, and $a_{ij}=a_{ji}$. Then $A$ is symmetric,
i.e. $A=A^{T}$. We define the degree matrix $D=[d_{ii}]\in R^{N\times N}$ as a
diagonal matrix as such
$d_{ii}=\sum_{j=1}^{N}a_{ij}$ (1)
The matrix $L=[l_{ij}]=D-A$ is denoted as the Laplacian Matrix of
$\mathcal{G}$. As mentioned above, $A=A^{T}$, and considering $D$ is a
diagonal matrix, it follows that $L=L^{T}$. The neighbor set corresponding to
each vertex $i$ is defined as
$\mathcal{N}_{\;\;i}=\\{j|(i,j)\in\mathcal{\varepsilon}\\}$. Additionally,
$\mathcal{N}_{\;\;j}^{+}$ denotes the set of outgoing neighbors of node $j$,
i.e., the set of nodes receiving signals from node $j$, and
$\mathcal{N}_{\;\;j}^{-}$ is the set of nodes which sends signals to the node
$j$. For the bidirectional graph $\mathcal{G}$,
$\mathcal{N}_{\;\;j}^{+}=\mathcal{N}_{\;\;j}^{-}$. A graph is connected if
there exists a path between any two distinct vertices [26]. We assume that
$\mathcal{G}$ is connected.
Next, consider Fig. 1 which shows a sample localized microgrid with four DGs,
denoted by DGi, $i=1,2,3,4$. In this figure, the dashed lines show signaling
between the cyber layer and physical layer, i.e. the communications between
the DGs and their corresponding agents in the cyber layer. The lines with
bidirectional arrows represent communications among the corresponding agents
of DGi located in the cyber layer. The solid lines are electrical connections.
Based on the weights shown in Fig. 1 and the explanations above, the adjacency
and degree matrices are defined as,
$A=\left[\begin{array}{cccc}0&0&a_{13}&0\\ 0&0&a_{23}&a_{24}\\ a_{31}&a_{32}&0&0\\ 0&a_{42}&0&0\end{array}\right],\quad D=\left[\begin{array}{cccc}a_{13}&0&0&0\\ 0&a_{23}+a_{24}&0&0\\ 0&0&a_{31}+a_{32}&0\\ 0&0&0&a_{42}\end{array}\right]$ (2)
Based on the definition of Laplacian matrix, the corresponding Laplacian
matrix to the adjacency and diagonal matrices defined in (2) is
$L=\left[\begin{array}{cccc}a_{13}&0&-a_{13}&0\\ 0&a_{23}+a_{24}&-a_{23}&-a_{24}\\ -a_{31}&-a_{32}&a_{31}+a_{32}&0\\ 0&-a_{42}&0&a_{42}\end{array}\right]$ (3)
## 3 Problem Definition
We consider a microgrid in the grid-connected mode, where the microgrid’s
voltage and frequency are imposed by the main grid, i.e. the microgrid’s
frequency and voltage are fixed. Hence, the goal in this mode is to control
the output power of the DGs. The cyber-physical systems considered in this
paper is similar to the one shown in Fig. 1. The proposed control emerges from
consensus control of Multi-Agent Systems (MAS). The control objective is
sharing load power in proportion to the maximum power capacity of the DGs,
under variations in maximum capacities.
Figure 1: Schematic of a microgrid comprising cyber and physical layers
We assume that there exists $N$ DGs in a microgrid which are labeled as DGi
where $i=1,2,\cdots,N$. The maximum power capacity and instantaneous output
power of each DGi are defined as $P_{i,max}$ and $P_{i}$, respectively. Let
$P_{L}$ be the load power, which is proportionately shared among the DGs, i.e.
$P_{L}=\sum_{i=1}^{N}P_{i},\quad\mbox{s.t.}\quad r=\frac{P_{1}}{P_{1,max}}=\frac{P_{2}}{P_{2,max}}=\cdots=\frac{P_{N}}{P_{N,max}}$ (4)
where $r$ is the proportional power share ratio. Thus, the output power of DGi
is $P_{i}=rP_{i,max}$. Let $P_{T}$ be the total power capacity of the
microgrid defined as the accumulation of the maximum power capacity of all the
DGs in the microgrid. Then, one can conclude that
$r=\frac{\sum_{i=1}^{N}P_{i}}{\sum_{i=1}^{N}P_{i,max}}=\frac{P_{L}}{P_{T}}$ (5)
and the output power of DGi is $P_{i}=(P_{L}/P_{T})\,P_{i,max}$. Note that a
fluctuation in the maximum capacity of a DG $P_{i,max}$ or a change in $P_{L}$
will cause a change in $r$. The proposed power sharing control will, in
response, manage the output power of the DGs flexibly. Throughout this study,
the goal is to manage the variations of $P_{i}$s while meeting the requirement
$P_{L}=\sum_{i=1}^{N}P_{i}$. It is assumed that all DGs have a knowledge of
$P_{L}$ at all times. The power demand $P_{L}$ can vary with time.
We next explain two scenarios for which different controllers are designed. At
the core of these controllers is a consensus algorithm which is inherently
distributed. Recall that an underlying assumption is that the communication
graph among the DGs is connected. Before any change happens to the renewable
energy resources, we assume all DGs have the knowledge of $P_{T}$ by which
they are able to compute $r$ from (5) and thereby generate their appropriate
proportional power share $P_{i}=rP_{i,max}$, $i=1,2,\cdots,N$.
In the first scenario, assume the maximum capacity of DGk which is
$P_{k,max}$, changes. Then $P_{T}$ changes accordingly and all DGs are
required to update their value of $P_{T}$ to be able to recalculate the new
$r$ based on (5). The only DG that can generate accurate power immediately
after a fluctuation happens is the DGk since it is aware of the change in
$P_{k,max}$. Let $\delta$ be the change such that
$\tilde{P}_{k,max}=P_{k,max}+\delta$, where $\tilde{P}_{k,max}$, is the
updated value of $P_{k,max}$. Thus, DGk can compute the updated capacity of
the microgrid as $\tilde{P}_{T}$ where $\tilde{P}_{T}=P_{T}+\delta$ and
recalculate $r$ and the delivered power $P_{k}$. A consensus algorithm is
devised to have other DGs compute the $\tilde{P}_{T}$ and thereby reach the
new value of $r$, distributively.
In the second scenario, we address the mismatch between load and supplied
power before consensus is reached. As was discussed in scenario (1), only DGk
can generate an accurate amount of power instantaneously after a fluctuation
in DGk. Although the other DGs are able to update $P_{i}$ following a change
in $P_{k,max}$, the consensus algorithm takes time to converge, and hence
during the transient time $\sum_{i=1}^{N}P_{i}$ would not necessarily be equal
to $P_{L}$. The reason is that the other DGs do not have the correct value of
$\tilde{P}_{T}$ instantaneously. However, since instantaneous matching of load
power is a priority, a control law is augmented with the consensus algorithm
to practically remove power mismatch during transients.
## 4 Distributed Microgrid Control
### 4.1 Consensus on Total Power Capacity under Perturbation
We consider a scenario where the individual DGs know the power ratio $r$ and
generate accurate $P_{i}$ based on proportional power sharing, as shown in
(4). Hence, each DG has correct knowledge of $P_{T}$, as per (5). Next,
consider a change in $P_{k,max}$ to $\tilde{P}_{k,max}=P_{k,max}+\delta$.
Following this change, all agents are required to compute
$\tilde{P}_{T}=P_{T}+\delta$, the updated value of $P_{T}$. We define
$s_{i}(t)$ as the estimate of $\tilde{P}_{T}$ by DGi. The vector of estimate
variables is then,
$\mathbf{S}(t)=\left[\begin{array}[]{ccccc}s_{1}&s_{2}&\cdots&s_{N}\end{array}\right]^{T}$,
where $N$ is the number of the DGs in the microgrid. As mentioned above, all
the DGs know $P_{T}$ before any change happens. Therefore, the initial value
of $\mathbf{S}$ is, $\mathbf{S}(0)=P_{T}\textbf{1}$ where
$\textbf{1}^{N}=\left[\begin{array}[]{ccccc}1&1&\cdots&1\end{array}\right]^{T}$.
Thereafter, we propose the following consensus dynamics in the cyber layer,
through which all DGs update their value of $P_{T}$ and converge to
$\tilde{P}_{T}$.
$\displaystyle\dot{s}_{k}(t)=-h\left(s_{k}(t)-\tilde{P}_{T}\right)-\sum_{j\in\mathcal{N}_{\;\;k}}a_{kj}\left(s_{k}(t)-s_{j}(t)\right)$ (6)
$\displaystyle\dot{s}_{i}(t)=-\sum_{j\in\mathcal{N}_{\;\;i}}a_{ij}\left(s_{i}(t)-s_{j}(t)\right),\quad i=1,2,\cdots,N,\ i\neq k$
where $s_{k}(0)=P_{T}$ and $s_{i}(0)=P_{T}$. In (6), $a_{ij}>0$ denotes the
weight of the communication link between agents $i$ and $j$, where
$i,j=1,2,\cdots,N$, $i\neq j$, and $h>0$ is a parameter chosen by the $k^{th}$
agent. The parameter $h$ represents a measure of the convergence rate of
$s_{i}(t)$ towards $\tilde{P}_{T}$. Since the communication graph is
bidirectional, therefore $a_{ij}=a_{ji}$, and this implies that the Laplacian
matrix is symmetric, i.e. $L=L^{T}$ (see the example in (3)). From (6), the following matrix equation is
obtained
$\dot{\textbf{S}}=-(L+\Delta)\mathbf{S}+hd_{k}\tilde{P}_{T}$ (7)
where
$d_{k}=\left[\begin{array}{c}\sigma_{1,k}\\ \sigma_{2,k}\\ \vdots\\ \sigma_{k,k}\\ \vdots\\ \sigma_{N,k}\end{array}\right]=\left[\begin{array}{c}0\\ 0\\ \vdots\\ 1\\ \vdots\\ 0\end{array}\right]\in R^{N\times 1},\quad\Delta=hd_{k}d^{T}_{k}\in R^{N\times N}$ (8)
In (8), $\sigma_{i,k}$ is the Kronecker Delta function, implying the $k^{th}$
element of $d_{k}$ is one and rest are zero. Also, $\Delta_{k,k}=h$ and all
other elements are zero. We now propose and prove the following Lemma.
###### Lemma 1.
The linear dynamic system defined in (7) and (8) is input-to-state stable
(ISS), and $\mathbf{S}\rightarrow\tilde{P}_{T}\textbf{1}$, given that the graph of
communication among the agents is connected.
###### Proof.
The linear system of (7) and (8) is ISS if $-(L+\Delta)$ is Hurwitz [28]. The
input is $\tilde{P}_{nom}$ which is constant and bounded, thus, if
$-(L+\Delta)$ is Hurwitz the proof is complete. Since $L=L^{T}$, and by
definition $\Delta=\Delta^{T}$, the matrix $-(L+\Delta)$ is symmetric. Hence,
it is Hurwitz if $-(L+\Delta)<0$, i.e. negative definite. To prove this, it is
required to show that for any vector $u\in R^{N}$, $u^{T}[-(L+\Delta)]u$ is
strictly less than zero unless $u=0$. From (8),
$u^{T}[-(L+\Delta)]u=-u^{T}Lu-u^{T}\Delta u=-u^{T}Lu-h{u_{k}}^{2}$ (9)
where $h=\Delta_{kk}>0$, and $u_{k}$ is the $k^{th}$ element of the vector
$u$. As the communication graph is connected, $L$ is positive semi-definite
with a single zero eigenvalue [29, 26], and it is diagonalizable [30], with
all real eigenvalues. Let $\lambda_{i}$, $i=1,2,\cdots,N$ be the eigenvalues
of $L$ in descending order,
$\lambda_{1}\geq\lambda_{2}\geq\cdots>\lambda_{N-1}>\lambda_{N}=0$. The
canonical form of $L$ is $L=V\Lambda V^{T}$, where $\Lambda$ is a diagonal
matrix consisting of the eigenvalues of $L$, and $V$ is the right eigenvector-
matrix,
$\Lambda=\left[\begin{array}{cccc}\lambda_{1}&0&&0\\ 0&\lambda_{2}&&\\ &&\ddots&\\ 0&&&\lambda_{N}\end{array}\right],\quad V=\left[\begin{array}{cccc}v_{1}&v_{2}&\cdots&v_{N}\end{array}\right]$ (10)
Since $v_{N}$ is the eigenvector corresponding to $\lambda_{N}=0$, following
the definition of $L$, $v_{N}=c\textbf{1}^{N}$ where $c\neq 0$ is a real
value. Substituting $L=V\Lambda V^{T}$ into (9) and taking (10) into account,
the following holds:
$-u^{T}Lu-h{u_{k}}^{2}=-z^{T}\left[\begin{array}{cccc}\lambda_{1}&0&&0\\ 0&\lambda_{2}&&\\ &&\ddots&\\ 0&&&\lambda_{N}\end{array}\right]z-h{u_{k}}^{2}=-\sum_{i=1}^{N}\lambda_{i}z_{i}^{2}-h{u_{k}}^{2}$ (11)
where $z=V^{T}u$. Since $V^{T}$ is a nonsingular matrix it is invertible, and
its inverse matrix is $V$ and $u=Vz$. For any
$z=\left[\begin{array}[]{ccccc}z_{1}&z_{2}&\cdots&z_{N}\end{array}\right]^{T}$
and $u_{k}$, the right hand side of (11) is negative, except
$z_{i}=0\;\forall\;i=1,2,\cdots,N-1$ and $u_{k}=0$. The remaining condition is
$z=\left[\begin{array}[]{ccccc}0&\cdots&0&z_{N}\end{array}\right]^{T}$. As $V$
is not singular, $u=Vz\neq 0$ while $z_{N}\neq 0$. According to (11) and since
$v_{N}=c\textbf{1}^{N}$,
$u=Vz=\left[\begin{array}{cccc}v_{1}&v_{2}&\cdots&c\textbf{1}\end{array}\right]\left[\begin{array}{cccc}0&\cdots&0&z_{N}\end{array}\right]^{T}=cz_{N}\textbf{1}$ (12)
Now that $u=cz_{N}\mathbf{1}$ and $c,z_{N}\in R$ are non-zero,
$u_{k}=cz_{N}\neq 0$. However, this is in contradiction with the assumption
made before which is $u_{k}=0$. Thus, (11) is negative for any vector $z$ and
since $u=Vz$, (9) is negative for any $u\neq 0$ which proves $-(L+\Delta)$ is
negative definite. We define $y=\mathbf{S}-\tilde{P}_{T}\textbf{1}$,
therefore, $\mathbf{S}=y+\tilde{P}_{T}\textbf{1}$ and
$\dot{\mathbf{S}}=\dot{y}$. By substituting $y$ and $\dot{y}$ into (7), we
obtain
$\dot{y}=-(L+\Delta)y,\quad y=\mathbf{S}-\tilde{P}_{T}\textbf{1},\quad y(t_{0})=P_{T}\textbf{1}-\tilde{P}_{T}\textbf{1}=-\delta\textbf{1}$ (13)
As $-(L+\Delta)$ is Hurwitz the dynamics of (13) is exponentially stable. It
means $y\rightarrow 0$ and therefore
$\mathbf{S}\rightarrow\tilde{P}_{T}\textbf{1}$. This completes the proof. ∎
We assume that at any time, only one DG can change its power capacity.
Thereafter, any subsequent change in power capacity can be done once the
consensus algorithm described above has converged.
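A minimal numerical sketch of the consensus dynamics (7)-(8) is given below, using forward-Euler integration; the graph, weights, $h$, $P_{T}$ and $\delta$ are illustrative assumptions, not values taken from the analysis above.

```python
import numpy as np

# Illustrative 4-agent example; graph, h, P_T and delta are assumed values.
edges = [(0, 2), (1, 2), (1, 3)]
N, h, k = 4, 10.0, 0            # k: index of the DG whose capacity changed
P_T, delta = 2000.0, 150.0      # kW
P_T_new = P_T + delta

A = np.zeros((N, N))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
L = np.diag(A.sum(axis=1)) - A
d_k = np.zeros(N); d_k[k] = 1.0
Delta = h * np.outer(d_k, d_k)          # Delta = h d_k d_k^T, as in (8)

S, dt = np.full(N, P_T), 1e-3           # S(0) = P_T * 1
for _ in range(100_000):
    S = S + dt * (-(L + Delta) @ S + h * d_k * P_T_new)   # Euler step of (7)

print(S)   # every s_i approaches P_T + delta = 2150, as Lemma 1 predicts
```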
### 4.2 Observations on Rate of Convergence
From Weyl’s theorem on eigenvalue inequalities for sum of two Hermitian
matrices [30], the properties of Laplacian matrices in a connected network
[29], and Lemma 1, we have
$\lambda_{N-1}(-L)\leq\lambda_{N}(-(L+\Delta))<\lambda_{N}(-L)=0$
assuming the eigenvalues of $-L$ are ordered as
$\lambda_{1}(-L)\leq\lambda_{2}(-L)\leq\cdots\leq\lambda_{N-1}(-L)\leq\lambda_{N}(-L)=0$,
and those of $-(L+\Delta)$ are ordered as
$\lambda_{1}(-(L+\Delta))\leq\lambda_{2}(-(L+\Delta))\leq\cdots\leq\lambda_{N-1}(-(L+\Delta))\leq\lambda_{N}(-(L+\Delta))<0$
for all $h>0$. The eigenvalue $\lambda_{N}(-(L+\Delta))$ is the dominant
eigenvalue of $-(L+\Delta)$ and hence it determines the rate of convergence of
consensus. To explain the effect of $h$ on convergence rate, we provide the
following lemma.
###### Lemma 2.
The rate of convergence of the consensus algorithm, given by (7) and (8),
increases with increasing values of $h>0$.
###### Proof.
The characteristic equation of $-L$ can be expressed as $p(\lambda)=0$. The
roots of $p(\lambda)$, i.e.
$\lambda_{1}(-L)\leq\lambda_{2}(-L)\leq\cdots\leq\lambda_{N-1}(-L)\leq\lambda_{N}(-L)=0$,
are all real. It can be verified that the characteristic equation of
$-(L+\Delta)$ takes the form $p(\lambda)+hq(\lambda)=0$. Further, $p(\lambda)$
and $q(\lambda)$ are both monic polynomials, with $p(\lambda)$ and
$q(\lambda)$ being polynomials of order $N$ and $(N-1)$ respectively. From
Lemma 1, we know that the roots of $p(\lambda)+hq(\lambda)$ satisfy
$\lambda_{1}(-(L+\Delta))\leq\lambda_{2}(-(L+\Delta))\leq\cdots\leq\lambda_{N-1}(-(L+\Delta))\leq\lambda_{N}(-(L+\Delta))<0$,
$\forall\;h>0$. Considering the characteristic equation of $-(L+\Delta)$ in
root-locus form, i.e.
$1+h\frac{q(\lambda)}{p(\lambda)}=0,\quad h>0,$
we deduce that all roots of $q(\lambda)$ are real and negative. Otherwise,
increasing $h$ will eventually cause some eigenvalues of $-(L+\Delta)$ to
become complex conjugates and/or unstable, thereby contradicting Lemma 1. This
observation is in accordance with the rules of root locus. We further deduce
that the largest root of $q(\lambda)$, namely $\lambda_{max}(q(\lambda))$,
must satisfy
$\lambda_{N-1}(-L)\leq\lambda_{max}(q(\lambda))<\lambda_{N}(-L)=0$
Violating the above condition would also contradict Lemma 1. Since
$\lambda_{N}(-L)=0$ and $\lambda_{max}(q(\lambda))<0$ are the largest roots of
$p(\lambda)$ and $q(\lambda)$ respectively, there is one branch of root locus
that originates from $\lambda_{N}(-L)$ and ends at $\lambda_{max}(q(\lambda))$
for $0\leq h<\infty$. This branch is also the closest to the origin of all $N$
branches of the above root locus (which are all strictly along the negative
real axis), and hence contains the locus of the dominant eigenvalue of
$-(L+\Delta)$. With increasing $h$, this eigenvalue moves further to the left
of the origin, thereby increasing the rate of convergence to the consensus
proposed in Lemma 1. This completes the proof. ∎
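Lemma 2 can also be checked numerically: the sketch below (same assumed four-node graph as before) computes the dominant eigenvalue $\lambda_{N}(-(L+\Delta))$ for increasing $h$; it moves monotonically away from the origin while remaining bounded below by $\lambda_{N-1}(-L)$.

```python
import numpy as np

edges = [(0, 2), (1, 2), (1, 3)]        # assumed example graph
N, k = 4, 0
A = np.zeros((N, N))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
L = np.diag(A.sum(axis=1)) - A

for h in [0.1, 1.0, 10.0, 100.0, 1000.0]:
    Delta = np.zeros((N, N)); Delta[k, k] = h
    lam = np.linalg.eigvalsh(-(L + Delta))  # real, since -(L+Delta) is symmetric
    print(f"h = {h:7.1f}   dominant eigenvalue = {lam.max():.4f}")
# The dominant eigenvalue moves left (more negative) as h grows, but remains
# bounded below by lambda_{N-1}(-L): the convergence rate saturates for large h.
```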
### 4.3 Proportional Power Sharing Strategies
The consensus algorithm of Section 4.1 enables the DGs to compute the updated
capacity of the microgrid under perturbation. In this section, we propose
methods by which individual agents command power to the physical layer based
on consensus. Subsequent to a capacity variation such as $\delta$ in Section
4.1, three slightly different strategies are proposed through which the DGs
meet the load demand $P_{L}$. The first and third strategies are discussed in
detail. The second strategy is similar to the first and hence its details are
omitted. Assuming at $t=t_{0}$, $P_{k,max}$ changes to
$\tilde{P}_{k,max}=P_{k,max}+\delta$, the first strategy to generate $P_{i}$s
is
$\mbox{Strategy 1:}\left\{\begin{aligned} &P_{k}=\frac{P_{L}}{s_{k}}\tilde{P}_{k,max}\\ &P_{i}=\frac{P_{L}}{s_{i}}P_{i,max}\quad\mbox{for}\ i=1,2,\cdots,N,\ i\neq k\end{aligned}\right.$ (14)
where $s_{i}$, $i=1,2,\cdots,N$, are the estimates of $\tilde{P}_{T}$, as
discussed in Section 4.1, and $\lim_{t\to\infty}s_{i}=\tilde{P}_{T}$ according
to Lemma 1. A potential issue may arise when $s_{i}(t)$ crosses or approaches
zero for some $t>t_{0}$ such that $P_{i}$ diverges. In this regard, we state
and prove the following Lemma:
###### Lemma 3.
Considering the LTI system defined in (7), if $|\delta|<\theta P_{T}/(1+\sqrt{N})$ with $0<\theta<1-(P_{L}/P_{T})$, the following holds
$(1-\theta)P_{T}\leq s_{i}(t)\leq(1+\theta)P_{T},\quad P_{i}(t)<P_{i,max},\quad\forall\,t>t_{0}$ (15)
The proof of Lemma 3 is given in the Appendix. From Lemma 3, it may appear
that as the number of DGs, $N$, increases, there will be a bigger restriction
on $\delta$, since $|\delta|<\theta P_{T}/(1+\sqrt{N})$. However, it can
be shown that the above inequality is not restrictive, mainly because as $N$
increases, $P_{T}$ also increases. An analysis of this aspect is given in the
Appendix. From Lemma 3, it may also appear that the constraint on $\delta$ is
restrictive as $\theta\to 0$, which happens as $P_{L}\to P_{T}$. This
restriction is however justified, since $P_{L}\approx P_{T}$ practically
implies that the grid is already close to maximum capacity. Hence, further
perturbation in DGs may prevent it from meeting the load demand. So far, it is
proved that strategy 1 is valid provided changes in $\delta$ satisfy the
conditions in Lemma 3. Defining the total instantaneous output power of the
microgrid as $P_{O}(t)$, from (14),
$P_{O}(t)=\frac{P_{L}}{s_{k}}\tilde{P}_{k,max}+\sum_{i=1,i\neq
k}^{N}\frac{P_{L}}{s_{i}}P_{i,max}$ (16)
Therefore,
$P_{O}(t)=\frac{P_{L}}{s_{k}}\delta+\sum_{i=1}^{N}\frac{P_{L}}{s_{i}}P_{i,max}$
(17)
Thus, defining the instantaneous error $E(t)=P_{O}(t)-P_{L}$, we have,
$E(t)=P_{O}(t)-P_{L}=\frac{P_{L}}{s_{k}}\delta+\sum_{i=1}^{N}\frac{P_{L}}{s_{i}}P_{i,max}-P_{L}$
(18)
At $t=t_{0}$, $s_{i}=P_{T}$ for $i=1,2,\cdots,N$. Thus,
$E(t_{0})=P_{L}\Bigl{[}\frac{\delta}{P_{T}}+\sum_{i=1}^{N}\frac{P_{i,max}}{P_{T}}-1\Bigr{]}$
(19)
Since $\sum_{i=1}^{N}\frac{P_{i,max}}{P_{T}}=1$, it follows that
$E(t_{0})=P_{L}\frac{\delta}{P_{T}}$ (20)
Equation (20) shows that $E(t_{0})\neq 0$, and since $E(t)$ is continuous, it
implies that a perturbation $\delta$ causes a transient mismatch between the
delivered power $P_{O}(t)$ and the load $P_{L}$. The error $E(t)\to 0$ at
steady-state, as proven in Lemma 1. Therefore, Strategy 1 given in (14),
causes a temporary mismatch of power following a perturbation. This issue is
addressed in Section 4.4.
The second strategy, which is slightly different from the first one, is as
follows:
$\mbox{Strategy 2:}\left\{\begin{aligned} &P_{k}=\frac{P_{L}}{\tilde{P}_{T}}\tilde{P}_{k,max}\\ &P_{i}=\frac{P_{L}}{s_{i}}P_{i,max}\quad\mbox{for}\ i=1,2,\cdots,N,\ i\neq k\end{aligned}\right.$ (21)
As before, the total instantaneous output power of the microgrid $P_{O}(t)$ is
$P_{O}(t)=\frac{P_{L}}{P_{T}+\delta}(\tilde{P}_{k,max})+\sum_{i=1,i\neq
k}^{N}\frac{P_{L}}{s_{i}}P_{i,max}$ (22)
We again evaluate the error $E(t)=P_{O}(t)-P_{L}$ for $t\geq t_{0}$, yielding
$E(t)=\frac{P_{L}}{P_{T}+\delta}(P_{k,max}+\delta)+\sum_{i=1,i\neq
k}^{N}\frac{P_{L}}{s_{i}}P_{i,max}-P_{L}$ (23)
Upon simplifying, we obtain
$E(t)=P_{L}\Bigl{[}\frac{(P_{k,max}+\delta)}{P_{T}+\delta}+\sum_{i=1,i\neq
k}^{N}\frac{P_{i,max}}{s_{i}}-1\Bigr{]}$
Since at $t=t_{0}$, $s_{i}=P_{T}$ for $i=1,2,\cdots,N$, and $\sum_{i=1,i\neq
k}^{N}P_{i,max}=P_{T}-P_{k,max}$,
$E(t_{0})=P_{L}\frac{\delta(P_{T}-P_{k,max})}{P_{T}(P_{T}+\delta)}$ (24)
Equation (24) shows that $E(t_{0})\neq 0$, and since $E(t)$ is continuous, it
implies that similar to Strategy 1, a perturbation $\delta$ causes a transient
mismatch between the delivered power $P_{O}(t)$ and the load $P_{L}$ in
Strategy 2. The error $E(t)\to 0$ at steady-state, as proven in Lemma 1.
The last candidate strategy is proposed as
$\mbox{Strategy 3:}\left\{\begin{aligned} &P_{k}=\frac{P_{L}}{s_{k}}({P_{k,max}}+s_{k}-P_{T})\\ &P_{i}=\frac{P_{L}}{s_{i}}P_{i,max}\quad\mbox{for}\ i=1,2,\cdots,N,\ i\neq k\end{aligned}\right.$ (25)
Strategy 3 allows DGs to update their output power more smoothly compared
to the first two strategies. In this case,
$P_{O}(t)=\frac{P_{L}}{s_{k}}(P_{k,max}+s_{k}-P_{T})+\sum_{i=1,i\neq
k}^{N}\frac{P_{L}}{s_{i}}P_{i,max}$ (26)
Therefore,
$E(t)=P_{L}\Bigl{[}-\frac{P_{T}}{s_{k}}+\sum_{i=1}^{N}\frac{P_{i,max}}{s_{i}}\Bigl{]}$
(27)
Since $s_{i}=P_{T}$ for all $i=1,2,\cdots,N$, at $t=t_{0}$, $E(t_{0})=0$.
However, $E(t)$ still undergoes transient fluctuations. Based on (27),
$\frac{\dot{E}(t)}{P_{L}}=\frac{P_{T}\dot{s}_{k}(t)}{s_{k}^{2}(t)}-\sum_{i=1}^{N}\frac{P_{i,max}\dot{s}_{i}(t)}{s_{i}^{2}(t)}$
(28)
Equation (28) can be further simplified using (7) as follows,
$\frac{\dot{E}(t)}{P_{L}}=\frac{P_{T}\dot{s}_{k}}{s_{k}^{2}}-\left[\begin{array}{cccc}\frac{P_{1,max}}{s_{1}^{2}}&\frac{P_{2,max}}{s_{2}^{2}}&\cdots&\frac{P_{N,max}}{s_{N}^{2}}\end{array}\right]\left[-(L+\Delta)\mathbf{S}+hd_{k}(P_{T}+\delta)\right]$ (29)
Since at $t=t_{0}$, $s_{i}=P_{T}$ for all $i=1,2,\cdots,N$,
$\mathbf{S}(t_{0})=P_{T}\mathbf{1}$ and (29) becomes
$\frac{\dot{E}(t_{0})}{P_{L}}=\frac{P_{T}\dot{s}_{k}(t_{0})}{P_{T}^{2}}-\left[\begin{array}{cccc}\frac{P_{1,max}}{P_{T}^{2}}&\frac{P_{2,max}}{P_{T}^{2}}&\cdots&\frac{P_{N,max}}{P_{T}^{2}}\end{array}\right]\left[-L\mathbf{1}P_{T}-hd_{k}P_{T}+hd_{k}P_{T}+hd_{k}\delta\right]$ (30)
Simplifying (30) yields
$\frac{\dot{E}(t_{0})}{P_{L}}=\frac{P_{T}\dot{s}_{k}(t_{0})}{P_{T}^{2}}-\frac{hP_{k,max}\delta}{P_{T}^{2}}$
(31)
Referring to (7), at $t=t_{0}$, the term $\dot{s}_{k}(t_{0})$ in (31) is
$\dot{s}_{k}(t_{0})=h(P_{T}+\delta)-hP_{T}=h\delta$ (32)
Therefore, from (31) and (32),
$\frac{\dot{E}(t_{0})}{P_{L}}=h\delta\frac{P_{T}-P_{k,max}}{P_{T}^{2}}$ (33)
where, from (6), $h$ is a positive scalar chosen by the $k^{th}$ agent. Thus, although $E(t_{0})=0$, $\dot{E}(t_{0})\neq 0$.
Therefore, as $E(t)$ is continuous, similar to Strategies 1 and 2, a change
$\delta$ results in a transient mismatch between $P_{O}$ and $P_{L}$. It is
shown that the three strategies proposed above match the load power $P_{L}$ at
steady-state while producing transient deviations. This transient issue is
resolved in the next section, where a strategy is proposed to practically
maintain $P_{O}=P_{L}$ at any time.
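The transient errors of the three strategies can be compared in simulation. The sketch below (illustrative 4-DG numbers, forward-Euler integration of (7)) evaluates $E(t)$ for Strategies 1-3 along the consensus transient; at $t_{0}$ it reproduces (20), (24) and $E(t_{0})=0$ from (27), and all three errors vanish as consensus is reached.

```python
import numpy as np

# Illustrative 4-DG example (all numbers assumed); DG k = 0 changes capacity.
edges = [(0, 2), (1, 2), (1, 3)]
N, h, k = 4, 10.0, 0
P_max = np.array([600.0, 450.0, 300.0, 650.0])   # assumed capacities (kW)
P_T = P_max.sum(); P_L = 1200.0; delta = 100.0
P_T_new, Pk_new = P_T + delta, P_max[k] + delta

A = np.zeros((N, N))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
M = np.diag(A.sum(axis=1)) - A                   # Laplacian L
d_k = np.zeros(N); d_k[k] = 1.0
M += h * np.outer(d_k, d_k)                      # L + Delta, as in (8)

def errors(S):
    others = sum(P_L / S[i] * P_max[i] for i in range(N) if i != k)
    E1 = P_L / S[k] * Pk_new + others - P_L                     # Strategy 1, (14)
    E2 = P_L / P_T_new * Pk_new + others - P_L                  # Strategy 2, (21)
    E3 = P_L / S[k] * (P_max[k] + S[k] - P_T) + others - P_L    # Strategy 3, (25)
    return E1, E2, E3

S, dt = np.full(N, P_T), 1e-3
print(errors(S))  # at t0: E1 = P_L*delta/P_T (20), E2 matches (24), E3 = 0 (27)
for _ in range(100_000):
    S = S + dt * (-M @ S + h * d_k * P_T_new)    # consensus dynamics (7)
print(errors(S))  # all transient errors decay to (approximately) zero
```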
### 4.4 Proportional Power Sharing with Transient Power Match
Upon a perturbation in $P_{k,max}$, which results in a change in $P_{T}$, the
agents estimate $\tilde{P}_{T}=P_{T}+\delta$ through (7). Among the DGs, only
DGk has a knowledge of $\tilde{P}_{T}=P_{T}+\delta$. The other DGs in the
microgrid converge to $\tilde{P}_{T}$ through consensus only at steady-state.
This leads to the transient power mismatch discussed in Section 4.3. To remove
this transient mismatch, we propose a strategy where DGk modulates its power
delivery as follows, while the other DGs maintain the same strategy as in
Section 4.3:
$\displaystyle P_{k}=\frac{P_{L}}{s_{k}}P^{\prime}_{k,max}$ (34)
$\displaystyle P_{i}=\frac{P_{L}}{s_{i}}P_{i,max}\qquad\quad\mbox{for}\quad
i=1,2,\cdots,N\quad i\neq k$
where $P^{\prime}_{k,max}$ is an auxiliary dynamic variable required to
modulate the instantaneous power of DGk. Hence, at $t=t_{0}$,
$P^{\prime}_{k,max}(t_{0})=P_{k,max}(t_{0})$, and it is required to converge
to $(P_{k,max}+\delta)$ while $s_{i}$ converges via consensus. With the goal
of maintaining $P_{O}(t)=P_{L}$ for all $t>t_{0}$, we must have
$P_{L}=P_{O}(t)=\frac{P_{L}}{s_{k}}P^{\prime}_{k,max}+\sum_{i=1,i\neq
k}^{N}\frac{P_{L}}{s_{i}}P_{i,max}$ (35)
Thus,
$\frac{P^{\prime}_{k,max}}{s_{k}}+\sum_{i=1,i\neq
k}^{N}\frac{P_{i,max}}{s_{i}}-1=0$ (36)
Therefore, $P^{\prime}_{k,max}$ is
$P^{\prime}_{k,max}(t)=s_{k}(t)\Bigl{[}1-\sum_{i=1,i\neq
k}^{N}\frac{P_{i,max}}{s_{i}(t)}\Bigr{]}$ (37)
The algorithm for updating $P^{\prime}_{k,max}$ and $s_{i}$ for
$i=1,2,\cdots,N$ in (34) is as follows:
$\displaystyle P^{\prime}_{k,max}(t)=s_{k}(t)\Bigl{[}1-\sum_{i=1,i\neq
k}^{N}\frac{P_{i,max}}{s_{i}(t)}\Bigr{]}$ (38a)
$\displaystyle\dot{\textbf{S}}=-(L+\Delta)\mathbf{S}+hd_{k}\tilde{P}_{T}\quad\,\,\,\mbox{where}\,\,\,\,\,\,\,\mathbf{S}(t_{0})=P_{T}\mathbf{1}$
(38b)
Based on (34) and (38), we state and prove the following lemma:
###### Lemma 4.
The dynamic system of (38) is stable, i.e. the terms $P^{\prime}_{k,max}$,
$\frac{P_{i,max}}{s_{i}}$ and S remain bounded if $|\delta|<{\theta
P_{T}}/{(1+\sqrt{N}})$, where $0<\theta<1-(P_{L}/P_{T})$. Furthermore,
$P^{\prime}_{k,max}\rightarrow\tilde{P}_{k,max}$ and
$\mathbf{S}\rightarrow\tilde{P}_{T}\mathbf{1}$, while the instantaneous
delivered power satisfies (35) for all $t\geq t_{0}$.
###### Proof.
Since (38b) is equivalent to (7), per Lemma 1, the dynamic system of (38b) is
ISS. Therefore $\mathbf{S}$ is bounded. Additionally, as (38b) and (7) have the same
initial conditions, i.e. $\mathbf{S}(t_{0})=P_{T}\mathbf{1}$, we have
$\mathbf{S}\rightarrow\tilde{P}_{T}\mathbf{1}$. Given $|\delta|<\theta P_{T}/(1+\sqrt{N})$, from Lemma 3 we have $(1-\theta)P_{T}\leq
s_{i}(t)\leq(1+\theta)P_{T}$ with $0<\theta<1-(P_{L}/P_{T})$. Thus,
$\frac{P_{i,max}}{(1+\theta)P_{T}}\leq\frac{P_{i,max}}{s_{i}}\leq\frac{P_{i,max}}{(1-\theta)P_{T}}$
(39)
Therefore, $\frac{P_{i,max}}{s_{i}}$ is bounded for all $i=1,2,\cdots,N$. It
demonstrates that (38a) represents a viable way to update
$P^{\prime}_{k,max}$. By plugging $P^{\prime}_{k,max}$ from (38a) into (34),
$P_{O}(t)$ simplifies to
$P_{O}(t)=\frac{P_{L}}{s_{k}}s_{k}\Bigl[1-\sum_{i=1,i\neq k}^{N}\frac{P_{i,max}}{s_{i}}\Bigr]+\sum_{i=1,i\neq k}^{N}\frac{P_{L}}{s_{i}}P_{i,max}=P_{L}$ (40)
for all $t\geq t_{0}$. Since $\mathbf{S}$ converges to
$\tilde{P}_{T}\mathbf{1}$, from (38a) we therefore deduce
$P^{\prime}_{k,max}(t)\rightarrow\tilde{P}_{T}\Bigl{[}1-\sum_{i=1,i\neq
k}^{N}\frac{P_{i,max}}{\tilde{P}_{T}}\Bigr{]}=\tilde{P}_{k,max}$ (41)
This completes the proof. ∎
The controller designed in (38a) and (38b) maintains $P_{O}(t)=P_{L}$
following a variation in the power capacity of a DG, namely $P_{k,max}$.
However, to compute the term
$\sum_{i=1,i\neq k}^{N}\frac{P_{i,max}}{s_{i}}$ (42)
in $P^{\prime}_{k,max}$, as given in (38a), the $k^{th}$ agent requires
additional information. The following approach is proposed to enable the
$k^{th}$ agent to attain this information distributively. This approach is
based on the distributed finite-time average consensus studied in [27].
According to [27], each agent $i$ shares $\frac{P_{i,max}}{s_{i}(t)}$ with its
outgoing neighbors $\mathcal{N}_{\;\;i}^{+}$, where, following Section 2,
$\mathcal{N}_{\;\;i}^{+}$ stands for the set of nodes which receive signals
from node $i$. Accordingly, based on what follows, the agents are able to
distributively compute the instantaneous average of all
$\frac{P_{i,max}}{s_{i}(t)}$ where $i=1,2,\cdots,N$, i.e.
$C_{a}(t)=\frac{\sum_{i=1}^{N}\frac{P_{i,max}}{s_{i}(t)}}{N}$ (43)
Then, the $k^{th}$ agent can compute (42) via
$\sum_{i=1,i\neq
k}^{N}\frac{P_{i,max}}{s_{i}}=N\,C_{a}(t)-\frac{P_{k,max}}{s_{k}}$ (44)
One example of applying this distributed finite-time average consensus is
presented in [31]. Similar to [31], the steps of the finite-time algorithm are as follows:
$\displaystyle\overline{g}_{i}(m+1)$
$\displaystyle=p_{ii}\overline{g}_{i}(m)+\sum_{j\in\mathcal{N}_{\;\;i}^{-}}p_{ij}\overline{g}_{j}(m)$
(45) $\displaystyle g_{i}(m+1)$
$\displaystyle=p_{ii}g_{i}(m)+\sum_{j\in\mathcal{N}_{\;\;i}^{-}}p_{ij}g_{j}(m)$
where $\overline{g}_{i}(0)=\frac{P_{i,max}}{s_{i}}$ and $g_{i}(0)=1$ for
$i=1,2,\cdots,N$. Additionally, $p_{ij}=\frac{1}{1+|\mathcal{N}_{\;\;j}^{+}|}$
for $i\in\mathcal{N}_{\;\;j}^{+}\cup\{j\}$, and $p_{ij}=0$ otherwise. Let us define
the vectors
the vectors
$\overline{g}_{i,2m}^{T}=[\overline{g}_{i}(1)-\overline{g}_{i}(0),\,\overline{g}_{i}(2)-\overline{g}_{i}(1),\,\cdots,\,\overline{g}_{i}(2m+1)-\overline{g}_{i}(2m)]$ (46)
$g_{i,2m}^{T}=[g_{i}(1)-g_{i}(0),\,g_{i}(2)-g_{i}(1),\,\cdots,\,g_{i}(2m+1)-g_{i}(2m)]$
and the following Hankel matrices
$\Gamma\{\overline{g}_{i,2m}^{T}\}\triangleq\begin{bmatrix}\overline{g}_{i,2m}(1)&\cdots&\overline{g}_{i,2m}(m+1)\\ \overline{g}_{i,2m}(2)&\cdots&\overline{g}_{i,2m}(m+2)\\ \vdots&\ddots&\vdots\\ \overline{g}_{i,2m}(m+1)&\cdots&\overline{g}_{i,2m}(2m+1)\end{bmatrix}$ (47)
and
$\Gamma\{g_{i,2m}^{T}\}\triangleq\begin{bmatrix}g_{i,2m}(1)&\cdots&g_{i,2m}(m+1)\\ g_{i,2m}(2)&\cdots&g_{i,2m}(m+2)\\ \vdots&\ddots&\vdots\\ g_{i,2m}(m+1)&\cdots&g_{i,2m}(2m+1)\end{bmatrix}$ (48)
Each agent $i$ runs the steps in (45) $2N+1$ times and keeps the values
$\overline{g}_{i}(m)$ and $g_{i}(m)$ for $m=1,2,\cdots,2N+1$. Having
$\overline{g}_{i}(m)$ stored for these $2N+1$ iterations, each agent $i$ establishes the
vectors $\overline{g}_{i,2m}^{T}$ and $g_{i,2m}^{T}$ defined in (46), starting
from $m=0$. At the same time, all individual agents construct their Hankel
matrices $\Gamma\{\overline{g}_{i,2m}^{T}\}$ and $\Gamma\{g_{i,2m}^{T}\}$
defined in (47) and (48), respectively. Additionally, they calculate the ranks
of the Hankel matrices for each $m$ and repeat the same procedure for
$m+1$ until, for a specific $m$, either $\Gamma\{\overline{g}_{i,2m}^{T}\}$ or
$\Gamma\{g_{i,2m}^{T}\}$ becomes a defective matrix. Assume
$\Gamma\{\overline{g}_{i,2M_{i}}^{T}\}$ or $\Gamma\{g_{i,2M_{i}}^{T}\}$ is
the first matrix which loses its full rank, where
$\beta_{i}=[\beta_{i,0},\cdots,\beta_{i,M_{i}-1},1]^{T}$ is its corresponding
kernel. Having the kernel $\beta_{i}$, the $i^{th}$ agent computes the average
of all $\overline{g}_{i}(0)=\frac{P_{i,max}}{s_{i}}$ for $i=1,2,\cdots,N$,
defined as $C_{a}$ in (43), through the following
$C_{a}(t)=\frac{1}{N}\sum_{i=1}^{N}\overline{g_{i}}(0)=\frac{\left[\overline{g}_{i}(0),\overline{g}_{i}(1),\cdots,\overline{g}_{i}(M_{i})\right]\beta_{i}}{[g_{i}(0),g_{i}(1),\cdots,g_{i}(M_{i})]\beta_{i}}$
(49)
Thereby, the $k^{th}$ agent obtains $C_{a}(t)$ distributively. At this step,
the $k^{th}$ agent obtains the term in (42) via (44). By plugging (42) back into
(37), the $k^{th}$ agent is able to compute $P^{\prime}_{k,max}$.
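A minimal sketch of the finite-time average consensus (45)-(49) follows. For brevity the iterations of all nodes are simulated centrally, but the Hankel/kernel computation uses only the local history of one node, as in the distributed scheme; the graph and initial values are assumptions for illustration, and numerical rank tests stand in for the exact defectiveness check.

```python
import numpy as np

def finite_time_average(P, x0):
    """Return the asymptotic average via the Hankel/kernel method (45)-(49),
    using only the history observed at one node (here node i = 0)."""
    N = len(x0)
    X, Y = [np.asarray(x0, float)], [np.ones(N)]   # gbar_i(m) and g_i(m)
    for _ in range(2 * N + 1):                     # run (45) for 2N+1 steps
        X.append(P @ X[-1]); Y.append(P @ Y[-1])
    X, Y = np.array(X), np.array(Y)
    i = 0
    dx, dy = np.diff(X[:, i]), np.diff(Y[:, i])    # difference vectors (46)
    for M in range(1, N + 1):
        Hx = np.array([[dx[r + c] for c in range(M + 1)] for r in range(M + 1)])
        Hy = np.array([[dy[r + c] for c in range(M + 1)] for r in range(M + 1)])
        H = Hx if np.linalg.matrix_rank(Hx) <= M else \
            (Hy if np.linalg.matrix_rank(Hy) <= M else None)
        if H is not None:                          # first defective Hankel matrix
            beta = np.linalg.svd(H)[2][-1]         # kernel of H
            beta = beta / beta[-1]                 # normalize last entry to 1
            return (X[: M + 1, i] @ beta) / (Y[: M + 1, i] @ beta)   # (49)
    raise RuntimeError("no defective Hankel matrix found")

# Column-stochastic weights p_ij = 1/(1+|N_j^+|) on an assumed 4-node graph
edges = [(0, 2), (1, 2), (1, 3)]
N = 4
deg = np.zeros(N)
for i, j in edges:
    deg[i] += 1; deg[j] += 1
P = np.diag(1.0 / (1.0 + deg))
for i, j in edges:
    P[i, j] = 1.0 / (1.0 + deg[j]); P[j, i] = 1.0 / (1.0 + deg[i])

vals = np.array([3.0, 1.0, 4.0, 1.5])    # stands in for P_i,max / s_i(t)
print(finite_time_average(P, vals), vals.mean())   # both give 2.375
```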
To implement the proposed strategy in practice, the procedures above must first
be discretized, since in practice the signals and
algorithms are updated digitally. We define an index $w$, starting from $w=0$,
that represents the discrete instants at which the overall consensus algorithm
is executed. At $w=0$, the $i^{th}$ agent, $i=1,2,\cdots,N$, has the value of
$s_{i}(w)=P_{T}$. Therefore, they can compute $P_{i,max}/s_{i}(w)$,
individually. Following the finite-time algorithm, they implement the
procedure in (45)-(49). Once all the agents, including the $k^{th}$ agent,
obtain $C_{a}(w)$ from (49), the $k^{th}$ agent computes
$P^{\prime}_{k,max}(w)=s_{k}(w)\Bigl[1-\sum_{i=1,i\neq k}^{N}\frac{P_{i,max}}{s_{i}(w)}\Bigr]$ (50)
The command signals to DGs are as follows
$\displaystyle P_{k}(w)=\frac{P_{L}(w)}{s_{k}(w)}P^{\prime}_{k,max}(w)$ (51)
$\displaystyle
P_{i}(w)=\frac{P_{L}(w)}{s_{i}(w)}P_{i,max}\qquad\quad\mbox{for}\quad
i=1,2,\cdots,N\quad i\neq k$
Thereafter, the agents compute $s_{i}(w+1)$ through
$\textbf{S}(w+1)=\textbf{S}(w)+dt[-(L+\Delta)\mathbf{S}(w)+hd_{k}\tilde{P}_{T}]$
(52)
Set $w=w+1$, and repeat the above strategy with the updated value of the
$s_{i}(w)$ defined in (52).
Figure 2: Physical layer control scheme (panels a-c).
Figure 3: Power circuit diagram of a DG.
Figure 4: Simulated microgrid bus system.
We end this section with the following two observations:
###### Remark 1.
The proposed consensus algorithms are independent of the load demand $P_{L}$
and its changes (see Sections 4.1 and 4.4).
###### Remark 2.
All agents in the cyber layer have access to the instantaneous value of
$P_{L}$. The power generation commands for Strategies 1, 2 and 3 introduced in
Section 4.3 and the control scheme in Section 4.4 incorporate $P_{L}$ and $s_{i}(t)$ in the
commanded power. Therefore, during consensus the power demand is met,
irrespective of whether $P_{L}$ changes or not.
## 5 Controller Layout of Physical Layer
The power control methods for DGs introduced in this study must be
implemented on both the cyber and physical layers of the microgrid. The
physical layer, which includes the DGs, is where controllers are designed to control
the output power of DGs. In this study, the problem of proportional power
sharing is addressed in the grid-connected mode, hence the frequency and
voltage of DGs are imposed by the main grid. Therefore, frequency and voltage
control methods, such as droop control, are not considered in this study.
Furthermore, the reactive power control in the grid-connected mode is not
studied for the practical reason of availability of reactive power in the main
grid. Therefore, the required reactive power of the microgrid can be
drawn from the main grid.
The desired active power command of each DGi, i.e. $P_{i}^{*}$ for
$i=1,2,\cdots,N$, is calculated by its corresponding agent, i.e. the $i^{th}$ agent
in the cyber layer. Then, this signal of $P_{i}^{*}$ is sent to the power
control block of DGi located in the physical layer. The power control block of
DGs is represented in Fig. 2a. This block receives the voltage $V_{abc}$ and
current $I_{abc}$ from the voltage and current measurement units installed on
the output of each DG, as shown in Fig. 3. Figure 3 also shows that each DG is
connected to the main grid via a dedicated transformer to match the voltage
between the DG and the main grid, as the output voltage of the main grid is
significantly higher than the output voltage of DGs. To control the generated
power of a DGi, i.e, $P_{i}$, it is required to control its output current
since $V_{abc}$ and the frequency of the microgrid are fixed by the main grid.
To achieve this, the desired active power command $P_{i}^{*}$ issued from
$i^{th}$ agent is also considered as the other input in Fig. 2a. Using the
Phase-Locked-Loop (PLL) block, the signals in Fig. 2a are converted to their
equivalent values in the $dq0$ reference frame, i.e. $V_{dq}$ and $I_{dq}$.
Next, $V_{dq}$ and $I_{dq}$ are fed as inputs to Fig. 2b.
The parameters $C_{1}$, $C_{2}$ and the coefficients of the $PI$ controllers, together
with the upper and lower bounds of the saturation blocks in Fig. 2b, are all
defined in Section 6. The outputs of Fig. 2b, regarded as the imaginary
and real parts of a complex number, are the inputs of Fig. 2c. These
inputs are converted to the amplitude and phase angle of the same complex
value. The amplitude and the phase signals, together with the voltage angle
$\omega\,t$, obtained from the PLL in the Fig. 2a, constitute the three phase
signal fed to the PWM in Fig. 2c. Finally, each PWM sends the switching
signals to the three level inverter of its corresponding DG which is
illustrated in Fig. 3.
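The exact block structure of Fig. 2b is not reproduced here; the following minimal sketch only illustrates the role of the two identical PI controllers with the gains $k_{P}=0.3$, $k_{I}=30$ and the $\pm 1.5$ saturation limits given in Section 6. The error signals fed to the controllers are assumptions for illustration.

```python
# Minimal sketch of a dq-frame PI loop as in Fig. 2b (assumed structure:
# one PI acts on a d-axis error, the other on a q-axis error; gains and
# saturation limits are taken from Section 6).
def make_pi(kp=0.3, ki=30.0, lim=1.5):
    state = {"integral": 0.0}
    def pi(error, dt):
        state["integral"] += error * dt          # integral action
        out = kp * error + ki * state["integral"]
        return max(-lim, min(lim, out))          # saturation block of Fig. 2b
    return pi

pi_d, pi_q = make_pi(), make_pi()
# One control step with hypothetical per-unit error signals:
u_d = pi_d(0.10, 1e-4)    # d-axis error
u_q = pi_q(-0.02, 1e-4)   # q-axis error
print(u_d, u_q)
```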
Figure 5: Communication graph of the simulated DGs.
Figure 6: Output powers $P_{i}$, $i=1,2,\cdots,6$, under a variation in $P_{1,max}$ (panels a-f, one per DG).
Figure 7: Microgrid total output power $P_{O}$ obtained from the proposed control method of Section 4.4.
Figure 8: (a) Consensus trajectories of agents on $\tilde{P}_{T}$ from (38b); (b) trajectories of $r_{i}$, $i=1,2,\cdots,6$, from (34).
Figure 9: Individual ${P_{i,max}}/{s_{i}(t)}$ and the average of ${P_{i,max}}/{s_{i}(t)}$ at each time step, as defined in (42) and (43).
## 6 Simulations
In this section, the performance of the proposed control methods explained in
Section 4.4 is evaluated through the simulation of a microgrid consisting of
six inverter-based DGs shown in Fig. 4. The model layout in Fig. 4 is inspired
by the MATLAB-based example available in [35] and the studies in [32, 34]. Then,
the performances of Strategies 1 and 3, provided in (14) and (25)
respectively, are juxtaposed with the performance of the controller in Section
4.4. The simulations are accomplished using the Simscape toolbox of MATLAB.
The simulated DGs are numbered from 1 to 6 and are connected to the main grid
in parallel as depicted in Fig. 4. Each DG has a corresponding agent in the
cyber layer where the updated value of the desired output power is computed by
the agents using the information obtained through their bidirectional
communication structure, as shown in Fig. 5. Note that the communication graph
of the DGs in Fig. 5 is connected per its definition in Section 2.
As the communication graph in Fig. 5 is a bidirectional graph, per Section 2,
the adjacency matrix of the graph is symmetric. The adjacency and degree
matrices are chosen as
$A=\left[\begin{array}{cccccc}0&6&0&6&6&0\\ 6&0&0&6&0&0\\ 0&0&0&6&6&0\\ 6&6&6&0&6&6\\ 6&0&6&6&0&6\\ 0&0&0&6&6&0\end{array}\right],\quad D=\left[\begin{array}{cccccc}18&0&0&0&0&0\\ 0&12&0&0&0&0\\ 0&0&12&0&0&0\\ 0&0&0&30&0&0\\ 0&0&0&0&24&0\\ 0&0&0&0&0&12\end{array}\right]$ (53)
From Section 2, the corresponding Laplacian matrix $L=D-A$ is
$L=\left[\begin{array}[]{ccccccc}18&-6&0&-6&-6&0\\\ -6&12&0&-6&0&0\\\
0&0&12&-6&-6&0\\\ -6&-6&-6&30&-6&-6\\\ -6&0&-6&-6&24&-6\\\
0&0&0&-6&-6&12\end{array}\right]$ (54)
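Connectivity of the graph in Fig. 5 can be verified directly from (54): the graph is connected if and only if the second-smallest eigenvalue of $L$ (the Fiedler value) is strictly positive. A quick numerical check:

```python
import numpy as np

# Laplacian (54) of the six-DG communication graph in Fig. 5
L = np.array([[18, -6,  0, -6, -6,  0],
              [-6, 12,  0, -6,  0,  0],
              [ 0,  0, 12, -6, -6,  0],
              [-6, -6, -6, 30, -6, -6],
              [-6,  0, -6, -6, 24, -6],
              [ 0,  0,  0, -6, -6, 12]], dtype=float)

eig = np.sort(np.linalg.eigvalsh(L))
print(eig)            # smallest eigenvalue is 0 (single zero eigenvalue)
assert eig[1] > 1e-9  # Fiedler value > 0  =>  the graph is connected
```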
For the simulations, we set $h=10$ in (7), (38b) and (52). Let the maximum
capacity of the DGs $P_{i,max}$ for $i=1,2\cdots,6$ be
$\begin{split}P_{1,max}=600\,kW\;\;P_{2,max}=450\,kW\;\;P_{3,max}=300\,kW\\ P_{4,max}=150\,kW\;\;P_{5,max}=750\,kW\;\;P_{6,max}=150\,kW\end{split}$ (55)
Thus, the maximum capacity of the whole microgrid is
$P_{T}=\sum_{i=1}^{6}P_{i,max}=2400\,kW$, and the load demand is assumed to be
$P_{L}=1600\,kW$. According to Section 3, the agents have knowledge of $P_{L}$
at all times and $P_{T}$ at initial time. Therefore, each agent is able to
compute the proportional power share ratio $r=\frac{P_{L}}{P_{T}}$ defined in
(5), independently, which is $\frac{2}{3}$, initially. Therefore, the output
power of each DGi for $i=1,2,\cdots,6$, based on the proportional power
sharing, must be,
$\displaystyle P_{1}=400\,kW\quad P_{2}=300\,kW\quad P_{3}=200\,kW$ (56)
$\displaystyle P_{4}=100\,kW\quad P_{5}=500\,kW\quad P_{6}=100\,kW$
During $t=[0,3]\,sec$, the power capacity of each DG remains unchanged and
hence each DG generates its active power share as calculated in (56).
At $t=3\,sec$, the power capacity of DG1 undergoes a step change, so that $P_{T}$
increases by $300\,kW$, and at $t=9\,sec$ we introduce a decrease of
$600\,kW$ in the capacity of DG1.
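The steady-state power shares implied by the proportional rule $P_{i}=r\,P_{i,max}$ with $r=P_{L}/P_{T}$ can be reproduced with the short sketch below (added for illustration; the list of capacities follows (55), with the DG1 entry adjusted for each interval).

```python
P_max = [600, 450, 300, 150, 750, 150]  # kW, maximum DG capacities from (55)
P_L = 1600                               # kW, load demand

# P_T over the intervals [0,3), [3,9) and [9,18] sec: 2400, 2700, 2100 kW
for P_T in (2400, 2700, 2100):
    caps = [P_T - sum(P_max[1:])] + P_max[1:]  # DG1 capacity absorbs the step changes
    r = P_L / P_T                              # proportional power share ratio from (5)
    print(P_T, [round(r * p, 2) for p in caps])
# 2400 -> [400.0, 300.0, 200.0, 100.0, 500.0, 100.0], matching (56)
# 2700 -> P_1 = 533.33 kW; 2100 -> P_1 = 228.57 kW, matching the values reported below
```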
The simulated microgrid consists of several components such as inverters,
output filters of inverters, transformers, PWM, PI controllers, line
impedance, loads, DC resources, measurement units, PLL and abc/dq0 converters.
To emulate the main grid, a dispatchable generator is included in the
simulation, as shown in Fig. 4. The parameters of the transformer that connects
the main grid to the distribution system and those of the transformers which
connect the DGs to the distribution system are given in Table 1. The
parameters of the distribution system are provided in Table 2. The PI
controllers depicted in Fig. 2b are identical. The PI controllers of all DGs
are also identical, meaning they all have the same $P$ and $I$ gains, chosen
as $k_{P}=0.3$ and $k_{I}=30$, respectively.
The energy resource of each DG is simulated as a DC power source; the DC
current is then converted to AC current by an inverter, as shown in
Fig. 3. In the same figure, to remove the harmonics from the output power of
the inverter, an output filter is applied before the connection to the main grid
via a transformer. The output filter consists of RL
and RC branches. The resistive and inductive elements of each RL component are
set as $R_{1}=5.4946\times 10^{-4}\,\Omega$ and $L=1.4575\times 10^{-4}\,H$,
respectively. The RC components of the output filter of each individual DGi,
arranged in the delta format, have active power $P_{i}\,(W)$ and reactive power $Q_{i}\,(kVar)$
as given in Table 3. In Fig. 2b, $C_{1}=0.0039$, $C_{2}=0.21$, and the upper
and lower limit of the saturation blocks are $+1.5$ and $-1.5$, respectively.
The power control of each DG established in the physical layer is depicted in
Fig. 2.
Table 1: Parameters of the Transformers
Transformer Connected to DGs
---
Parameter | Value
Nominal Power (kVA) | 100
Nominal Frequency (Hz) | 60
Winding 1 | $V_{1,rms,ph-ph}$ (kV) | 25
 | $R_{1}$ (pu) | 0.0012
 | $L_{1}$ (pu) | 0.03
Winding 2 | $V_{2,rms,ph-ph}$ (V) | 270
 | $R_{2}$ (pu) | 0.0012
 | $L_{2}$ (pu) | 0.03
Magnetization resistance $R_{m}$ (pu) | 200
Magnetization inductance $L_{m}$ (pu) | 200
Transformer Connected to Dispatchable Generator
---
Parameter | Value
Nominal Power (kVA) | 47000
Nominal Frequency (Hz) | 60
Winding 1 | $V_{1,rms,ph-ph}$ (kV) | 1200
 | $R_{1}$ (pu) | 0.0026
 | $L_{1}$ (pu) | 0.08
Winding 2 | $V_{2,rms,ph-ph}$ (kV) | 25
 | $R_{2}$ (pu) | 0.0026
 | $L_{2}$ (pu) | 0.08
Magnetization resistance $R_{m}$ (pu) | 500
Magnetization inductance $L_{m}$ (pu) | 500
Table 2: Parameters of the Grid
Parameter | Value
---|---
Load 1 Nominal Voltage ($kV_{ph-ph}$) | 25
Load 1 Active Power P (kW) | 250
Load 2 Active Power P (kW) | 2000
Load 3 Power S (kVA) | 30000+j2000
Line 1 $Z_{1}$ Positive and Zero Sequence | Length (km) | 8
$R(\Omega/km)$ | [0.1153 0.413]
$L(H/km)$ | [1.05e-3 3.32e-3]
$C(F/km)$ | [11.33e-9 5.01e-9]
Line 2 $Z_{2}$ Positive and Zero Sequence | Length (km) | 14
$R(\Omega/km)$ | [0.1153 0.413]
$L(H/km)$ | [1.05e-3 3.32e-3]
$C(F/km)$ | [11.33e-9 5.01e-9]
Nominal Frequency (Hz) | 60
Table 3: Active and reactive powers of RC components of each output filter
DG Number | Active Power (W) | Reactive Power (kVar)
---|---|---
1 | 400 | 20
2 | 200 | 10
3 | 600 | 30
4 | 500 | 25
5 | 300 | 15
6 | 400 | 20
(a)
(b)
Figure 10: (a) Output power $P_{1}$ according to Strategies 1, 3 and the
proposed controller of Section 4.4, and (b) their corresponding microgrid total
power $P_{O}$
Starting from $t=3\,sec$, $P_{1,max}$ increases from $600\,kW$ to $900\,kW$.
Therefore, the microgrid maximum power capacity increases from $2400\,kW$ to
$2700\,kW$. Based on the approach explained in Section 4.4, the finite-time
algorithm in (45)-(49) is embedded in the consensus algorithm (38b) so that the
DGs apply the proposed control law in (34) and (38) in a distributed manner. The
results of the simulations are shown in Fig. 6, where $P_{1}$ increases and
$P_{i}$, $i=2,3,4,5,6$, decrease. During $t=[3,9]\,sec$, the microgrid output
power $P_{O}$ remains almost equal to $P_{L}=1600\,kW$, as shown in Fig. 7.
The slight difference between $P_{O}$ and $P_{L}$ is due to resistive losses
caused by the resistor elements of the DGs shown in Fig. 3. Considering (38b),
the six estimation variables $s_{i}$, $i=1,2,\cdots,6$, are updated
through the information exchange until they reach a consensus on
$\tilde{P}_{T}=2700\,kW$, as demonstrated in Fig. 8a. The ratios
$r_{i}=\frac{P_{L}}{s_{i,max}}$, for $i=1,2,\cdots,6$, are shown in Fig. 8b;
before reaching a consensus, during $t=(3,8)\,sec$, these ratios
differ, but they become almost identical during $t=[8,9]\,sec$. Figure
6 illustrates that after the consensus algorithm converges, the steady-state
values of the output powers of the DGs are $P_{1}=533.33\,kW$, $P_{2}=266.66\,kW$,
$P_{3}=177.77\,kW$, $P_{4}=88.88\,kW$, $P_{5}=444.44\,kW$ and
$P_{6}=88.88\,kW$. Recall that the finite-time average consensus algorithm is
embedded in the consensus algorithm: at each time step of the consensus
algorithm's evolution, the finite-time algorithm is applied. Through this
approach, the agents compute the average of $\frac{P_{i,max}}{s_{i}(t)}$ for
$i=1,2,\cdots,6$ in a distributed way; the corresponding result is
illustrated in Fig. 9.
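For illustration only (the paper's own finite-time algorithm in (45)-(49) is not reproduced here), a standard discrete-time average-consensus iteration over the Laplacian $L$ of (54) shows how such an average can be computed using neighbor information alone; the local values below are hypothetical placeholders for $P_{i,max}/s_{i}(t)$.

```python
import numpy as np

L = np.array([
    [18, -6,  0, -6, -6,  0],
    [-6, 12,  0, -6,  0,  0],
    [ 0,  0, 12, -6, -6,  0],
    [-6, -6, -6, 30, -6, -6],
    [-6,  0, -6, -6, 24, -6],
    [ 0,  0,  0, -6, -6, 12],
])  # Laplacian from (54)

x = np.array([0.9, 1.1, 0.8, 1.2, 1.0, 0.7])  # hypothetical local values P_i,max/s_i
alpha = 0.02  # step size; needs alpha < 2/lambda_max(L) for convergence
for _ in range(500):
    x = x - alpha * (L @ x)  # each agent combines only its neighbors' values

print(x)         # all entries approach the initial average (0.95 here)
print(x.mean())  # the average is invariant at every step, since 1^T L = 0
```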
The next variation of the microgrid maximum power capacity occurs at $t=9\,sec$,
where $P_{1,max}$ decreases by $600\,kW$. Therefore, starting from $t=9\,sec$,
the current capacity of the microgrid changes from $P_{T}=2700\,kW$ to
$\tilde{P}_{T}=2100\,kW$. Then, following the same procedure adopted in
reaction to a change in the microgrid capacity, the control method in (34) and
(38) is triggered. Hence, during $t=[9,18]\,sec$, according to Fig. 8a, the
agents reach another consensus on the maximum power capacity of the
microgrid, which is $2100\,kW$. Figure 6a demonstrates that $P_{1}$ becomes
$228.57\,kW$ after the convergence during $[9,18]\,sec$. Figure 6 also shows
that the output powers of the other DGs have increased due to the microgrid
capacity reduction. The power sharing ratios $r_{i}$ for $i=1,2,\cdots,6$ are
shown in Fig. 8b. The figure illustrates that, during the transient duration
of $(9,15)\,sec$, the $r_{i}$ ratios are not equal; in contrast, they converge
to steady-state conditions in $[14,18]\,sec$. Furthermore, from Fig. 7,
$P_{O}$ remains practically equal to $P_{L}=1600\,kW$. In Fig. 10a, the
results of the proposed control algorithm of Section 4.4 are compared with the
results of the strategies defined in (14) and (25) in Section 4.3. From this
figure, it is clear that the output power $P_{1}$ obtained from the
proposed control algorithm of Section 4.4 differs from the other two during
the transient duration. However, after the transient durations of $(3,8)\,sec$
and $(9,15)\,sec$, the output power $P_{1}$ from all three methods is the
same. Figure 10b demonstrates that the approaches of (14) and (25) are
ineffective at meeting the load demand: they produce a significant deviation of
$P_{O}$ from $P_{L}$ during transients. On the other hand, upon applying the
method of Section 4.4, the deviation drastically reduces, both for an increase
and a decrease in the maximum power capacity of DG1.
## 7 Conclusion
In this research, the problem of distributed proportional power sharing is
studied for microgrids that operate in the grid-connected mode. Firstly, a
consensus algorithm is designed through which, under a variation in the
maximum power capacity of a DG, all DGs in the microgrid estimate the updated
microgrid capacity. Utilizing the estimations, they generate their output
powers in a distributed manner. Stability and convergence of the consensus
algorithm are proven. While the consensus algorithm operates in the cyber
layer, power commands are sent to the DGs at the physical layer using multiple
strategies discussed in the research. In this regard, practical issues,
such as ensuring that power commands are within acceptable bounds during the transient
time of the consensus method, are addressed. However, the consensus algorithm
along with the aforementioned strategies does not guarantee maintaining the load
power during the transient time. Therefore, a modified strategy is proposed to
guarantee a match between demanded and delivered power during transients, while
the DGs reach a new consensus following a perturbation in grid capacity. The
distributed controller is tested in a simulated microgrid. The microgrid is
modeled in Matlab/Simulink using the Simscape toolbox. A complete description
of the model, along with the parameter values used for simulation, is given.
Simulation results confirm the effectiveness of the proposed strategy.
## Appendix
Proof of Lemma 3
###### Proof.
From (13), we note that $y_{i}=s_{i}-\tilde{P}_{T}$. Since Lemma 1 shows that
$-(L+\Delta)$ is Hurwitz, from (13) we have
$y(t)=e^{-(L+\Delta)t}y(t_{0})\;\Rightarrow\;\|y(t)\|\leq\|e^{-(L+\Delta)t}\|\,\|y(t_{0})\|$
(57)
As explained in Lemma 1, $-(L+\Delta)$ is diagonalizable and all of its
eigenvalues are negative and real. Assuming $\lambda_{1}<0$ is the largest
eigenvalue of $-(L+\Delta)$ and since $y(t_{0})=-\delta\mathbf{1}$, we have,
$\|y(t)\|\leq
e^{\lambda_{1}t}\|y(t_{0})\|=e^{\lambda_{1}t}\sqrt{N}|\delta|\leq\sqrt{N}|\delta|$
(58)
Hence, $\|y\|$ is bounded. Since $|y_{i}|\leq\|y(t)\|$, it follows that
$|y_{i}(t)|\leq\sqrt{N}|\delta|\quad\forall\;i=1,2,\cdots,N$ (59)
If $|\delta|<{\theta P_{T}}/{(1+\sqrt{N})}$, then it follows that
$|y_{i}(t)|\leq\sqrt{N}|\delta|\leq\sqrt{N}{\theta P_{T}}/{(1+\sqrt{N})}$ (60)
and since $y_{i}=s_{i}-\tilde{P}_{T}$, we have
$\tilde{P}_{T}-\sqrt{N}{\theta P_{T}}/{(1+\sqrt{N})}\leq s_{i}(t)\leq\tilde{P}_{T}+\sqrt{N}{\theta P_{T}}/{(1+\sqrt{N})}$ (61)
Since $\tilde{P}_{T}=P_{T}+\delta$, and from the assumption $|\delta|<{\theta P_{T}}/{(1+\sqrt{N})}$, we have
$P_{T}-{\theta P_{T}}/{(1+\sqrt{N})}-\sqrt{N}{\theta P_{T}}/{(1+\sqrt{N})}\leq s_{i}(t)\leq P_{T}+{\theta P_{T}}/{(1+\sqrt{N})}+\sqrt{N}{\theta P_{T}}/{(1+\sqrt{N})}$ (62)
Thus,
$(1-\theta)P_{T}\leq s_{i}(t)\leq(1+\theta)P_{T}$ (63)
Since for all $t>t_{0}$, the output power of each DGi should satisfy
$P_{i}(t)=\frac{P_{L}}{s_{i}(t)}P_{i,max}<P_{i,max}$, it is required that
$s_{i}(t)>P_{L}$ for all $t>t_{0}$. For guaranteeing $s_{i}(t)>P_{L}$, from
(63), we can impose $(1-\theta)P_{T}>P_{L}$. Therefore, under the dynamics of
$\mathbf{S}$ in (7), $P_{T}>P_{L}/(1-\theta)$ or $1-(P_{L}/P_{T})>\theta$
ensures that $P_{i}(t)<P_{i,max}$. This completes the proof. ∎
Observation on $\delta$: Lemma 3 gives the condition $|\delta|<{\theta
P_{T}}/{(1+\sqrt{N})}$ to prevent unfavorable transients in $s_{i}(t)$. To
demonstrate that this condition is not restrictive as $N$ increases, we
consider a change in $N$ to $N+1$ and a corresponding change from $P_{T}$ to
$P_{T}+P_{N+1,max}$. Further, we impose
$\frac{\theta(P_{T}+P_{N+1,max})}{1+\sqrt{N+1}}>\frac{\theta
P_{T}}{1+\sqrt{N}}$ (64)
to derive the condition under which $|\delta|$ will increase as we increase
$N$ to $N+1$. From (64), we have,
$P_{N+1,max}>\Bigl{[}\frac{\sqrt{N+1}-\sqrt{N}}{1+\sqrt{N}}\Bigr{]}P_{T}$ (65)
From (65), it can be observed that $P_{N+1,max}$ need only be a small fraction
of $P_{T}$ for $|\delta|$ to increase rather than decrease. For instance,
if $N=3$, then $P_{N+1,max}>0.098P_{T}$, and if $N=8$, then
$P_{N+1,max}>0.045P_{T}$, which are small fractions of $P_{T}$. In addition,
comparing the right-hand side of (65) with the average capacity
$P_{T,avg}=P_{T}/N$, we obtain the minimum ratio $P_{N+1,max}/P_{T,avg}$ as
$N\Big{[}\frac{\sqrt{N+1}-\sqrt{N}}{1+\sqrt{N}}\Big{]}$ (66)
Expression (66) is strictly less than $\frac{1}{2}$ and converges to
$\frac{1}{2}$ for large values of $N$. This shows that, even in the worst case,
$P_{N+1,max}$ is only required to exceed $(1/2)P_{T,avg}$ to satisfy the
condition on $|\delta|$. Therefore, the condition on $|\delta|$ is not
restrictive.
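The numeric values quoted above are easy to reproduce; the following minimal sketch (added for verification) evaluates the capacity fraction on the right-hand side of (65) and the ratio in (66).

```python
from math import sqrt

def capacity_fraction(N):
    """Right-hand side of (65), as a fraction of P_T."""
    return (sqrt(N + 1) - sqrt(N)) / (1 + sqrt(N))

for N in (3, 8):
    print(N, round(capacity_fraction(N), 3))  # 3 -> 0.098, 8 -> 0.045

# Ratio (66): minimum P_{N+1,max}/P_{T,avg}; stays below 1/2 and tends to 1/2
for N in (3, 8, 100, 10000):
    print(N, N * capacity_fraction(N))
```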
## References
* [1] J. Rocabert, A. Luna, F. Blaabjerg, and P. Rodriguez, “Control of power converters in ac microgrids,” IEEE transactions on power electronics, vol. 27, no. 11, pp. 4734–4749, 2012.
* [2] J. He, Y. Pan, B. Liang, and C. Wang, “A simple decentralized islanding microgrid power sharing method without using droop control,” IEEE Transactions on Smart Grid, vol. 9, no. 6, pp. 6128–6139, 2017.
* [3] Y. Du, X. Lu, J. Wang, and S. Lukic, “Distributed secondary control strategy for microgrid operation with dynamic boundaries,” IEEE Transactions on Smart Grid, 2018.
* [4] Y. Zhang, H. J. Jia, and L. Guo, “Energy management strategy of islanded microgrid based on power flow control,” in 2012 IEEE PES Innovative Smart Grid Technologies (ISGT), pp. 1–8, IEEE, 2012.
* [5] Y. Han, H. Li, P. Shen, E. A. A. Coelho, and J. M. Guerrero, “Review of active and reactive power sharing strategies in hierarchical controlled microgrids,” IEEE Transactions on Power Electronics, vol. 32, no. 3, pp. 2427–2451, 2017.
* [6] E. Barklund, N. Pogaku, M. Prodanovic, C. Hernandez-Aramburo, and T. C. Green, “Energy management in autonomous microgrid using stability-constrained droop control of inverters,” IEEE Transactions on Power Electronics, vol. 23, no. 5, pp. 2346–2352, 2008.
* [7] Y. Deng, Y. Tao, G. Chen, G. Li, and X. He, “Enhanced power flow control for grid-connected droop-controlled inverters with improved stability,” IEEE Transactions on Industrial Electronics, vol. 64, no. 7, pp. 5919–5929, 2016.
* [8] Y. Gui, C. Kim, C. C. Chung, J. M. Guerrero, Y. Guan, and J. C. Vasquez, “Improved direct power control for grid-connected voltage source converters,” IEEE Transactions on Industrial Electronics, vol. 65, no. 10, pp. 8041–8051, 2018.
* [9] Y. Fan, G. Hu, and M. Egerstedt, “Distributed reactive power sharing control for microgrids with event-triggered communication,” IEEE Transactions on Control Systems Technology, vol. 25, no. 1, pp. 118–128, 2016.
* [10] D. He, D. Shi, and R. Sharma, “Consensus-based distributed cooperative control for microgrid voltage regulation and reactive power sharing,” in IEEE PES Innovative Smart Grid Technologies, Europe, pp. 1–6, IEEE, 2014.
* [11] M. Mahmud, M. Hossain, H. Pota, and A. Oo, “Robust nonlinear distributed controller design for active and reactive power sharing in islanded microgrids,” IEEE Transactions on Energy Conversion, vol. 29, no. 4, pp. 893–903, 2014.
* [12] J. M. Guerrero, L. G. De Vicuna, J. Matas, M. Castilla, and J. Miret, “Output impedance design of parallel-connected ups inverters with wireless load-sharing control,” IEEE Transactions on industrial electronics, vol. 52, no. 4, pp. 1126–1135, 2005.
* [13] J. M. Guerrero, J. Matas, L. G. de Vicuna, M. Castilla, and J. Miret, “Decentralized control for parallel operation of distributed generation inverters using resistive output impedance,” IEEE Transactions on industrial electronics, vol. 54, no. 2, pp. 994–1004, 2007.
* [14] F. Aalipour and T. Das, “Proportional power sharing consensus in distributed generators,” in ASME 2018 Dynamic Systems and Control Conference, pp. V002T17A002–V002T17A002, American Society of Mechanical Engineers, 2018.
* [15] G. Chen, F. L. Lewis, E. N. Feng, and Y. Song, “Distributed optimal active power control of multiple generation systems,” IEEE Transactions on Industrial Electronics, vol. 62, no. 11, pp. 7079–7090, 2015.
* [16] Q.-C. Zhong, “Robust droop controller for accurate proportional load sharing among inverters operated in parallel,” IEEE Transactions on Industrial Electronics, vol. 60, no. 4, pp. 1281–1290, 2011.
* [17] A. Pantoja and N. Quijano, “A population dynamics approach for the dispatch of distributed generators,” IEEE Transactions on Industrial Electronics, vol. 58, no. 10, pp. 4559–4567, 2011.
* [18] C.-E. Lin and G. Viviani, “Hierarchical economic dispatch for piecewise quadratic cost functions,” IEEE transactions on power apparatus and systems, no. 6, pp. 1170–1175, 1984.
* [19] S. Kar and G. Hug, “Distributed robust economic dispatch in power systems: A consensus+ innovations approach,” in 2012 IEEE Power and Energy Society General Meeting, pp. 1–8, IEEE, 2012.
* [20] G. Lou, W. Gu, Y. Xu, M. Cheng, and W. Liu, “Distributed mpc-based secondary voltage control scheme for autonomous droop-controlled microgrids,” IEEE transactions on sustainable energy, vol. 8, no. 2, pp. 792–804, 2016.
* [21] A. Vaccaro, G. Velotto, and A. F. Zobaa, “A decentralized and cooperative architecture for optimal voltage regulation in smart grids,” IEEE Transactions on Industrial Electronics, vol. 58, no. 10, pp. 4593–4602, 2011.
* [22] W. Liu, W. Gu, J. Wang, W. Yu, and X. Xi, “Game theoretic non-cooperative distributed coordination control for multi-microgrids,” IEEE Transactions on Smart Grid, vol. 9, no. 6, pp. 6986–6997, 2018.
* [23] B. Chaudhuri, R. Majumder, and B. C. Pal, “Wide-area measurement-based stabilizing control of power system considering signal transmission delay,” IEEE Transactions on Power Systems, vol. 19, no. 4, pp. 1971–1979, 2004.
* [24] H. Wu, K. S. Tsakalis, and G. T. Heydt, “Evaluation of time delay effects to wide-area power system stabilizer design,” IEEE Transactions on Power Systems, vol. 19, no. 4, pp. 1935–1941, 2004.
* [25] L. Xiao and S. Boyd, “Fast linear iterations for distributed averaging,” Systems & Control Letters, vol. 53, no. 1, pp. 65–78, 2004.
* [26] F. Dörfler, J. W. Simpson-Porco, and F. Bullo, “Electrical networks and algebraic graph theory: Models, properties, and applications,” Proceedings of the IEEE, vol. 106, no. 5, pp. 977–1005, 2018.
* [27] T. Charalambous, Y. Yuan, T. Yang, W. Pan, C. N. Hadjicostis, and M. Johansson, “Distributed finite-time average consensus in digraphs in the presence of time delays,” IEEE Transactions on Control of Network Systems, vol. 2, no. 4, pp. 370–381, 2015.
* [28] H. K. Khalil, Nonlinear systems, vol. 3. Prentice hall Upper Saddle River, NJ, 2002.
* [29] M. Mesbahi and M. Egerstedt, Graph theoretic methods in multiagent networks. Princeton University Press, 2010.
* [30] R. A. Horn and C. R. Johnson, Matrix analysis, vol. 2. Cambridge University Press, 2013.
* [31] F. Aalipour, A. Gusrialdi, and Z. Qu, “Distributed optimal output feedback control of heterogeneous multi-agent systems under a directed graph,” IFAC-PapersOnLine, vol. 50, no. 1, pp. 5097–5102, 2017.
* [32] J. Schiffer, T. Seel, J. Raisch, and T. Sezi, “Voltage stability and reactive power sharing in inverter-based microgrids with consensus-based distributed voltage control,” IEEE Transactions on Control Systems Technology, vol. 24, no. 1, pp. 96–109, 2015.
* [33] H. Cai and G. Hu, “Distributed robust hierarchical power sharing control of grid-connected spatially concentrated ac microgrid,” IEEE Transactions on Control Systems Technology, vol. 27, no. 3, pp. 1012–1022, 2018.
* [34] V. Nasirian, Q. Shafiee, J. M. Guerrero, F. L. Lewis, and A. Davoudi, “Droop-free distributed control for ac microgrids,” IEEE Transactions on Power Electronics, vol. 31, no. 2, pp. 1600–1617, 2016.
* [35] “250-kw grid-connected pv array.” https://www.mathworks.com/help/physmod/sps/examples/250-kw-grid-connected-pv-array.html.
# Application of the ASBM-SA closure in a turbulent flow over a hump in the
presence of separation control.
C. F. Panagiotou, F. S. Stylianou, E. Gravanis, E. Akylas, S. C. Kassinos
Laboratory Of Environmental Engineering (GAIA), Nireas-International Water
Research Centre, University of Cyprus, Nicosia, Cyprus Department of Civil &
Environmental Engineering, University of Cyprus, Nicosia, Cyprus
Computational Sciences Laboratory (UCY-CompSci), Nireas-International Water
Research Centre, University of Cyprus, Nicosia, Cyprus Department of
Mechanical & Manufacturing Engineering, University of Cyprus, Nicosia, Cyprus
Department of Civil Engineering & Geomatics, Cyprus University of Technology,
Limassol, Cyprus Eratosthenes Centre of Excellence, Cyprus University of
Technology, Limassol, Cyprus
###### Abstract
We demonstrate the coupling between the Algebraic Structure-Based Model (ASBM)
and the one-equation Spalart–Allmaras (SA) model, which provides an easy route
to bringing structure information in engineering turbulence closures. The
predictive ability of the hybrid model was tested for a flow over a hump model
with no-flow control and with steady suction. The ASBM-SA model produced satisfactory
predictions for the streamwise Reynolds stress component, while a qualitative
agreement with the experiments was achieved for the transverse component.
Regarding the shear stress component, the ASBM-SA closure provides improved
predictions compared to SA in the entire domain.
## 1 Introduction
The class of RANS models most often used in engineering applications is that
of Eddy Viscosity Models (EVM). One of the most popular EVM is the
Spalart–Allmaras (SA) one-equation model [1]. The SA model is often favored by
practicing engineers because it exhibits superior robustness, low CPU time
requirements and substantially lower sensitivity to grid resolution compared
to two-equation models. On the other hand, one has to recognize that, despite
its computational and implementational attractiveness, the eddy viscosity
assumption is also the source of some of the most important performance
limitations. For example, like other EVM, the SA model fails to capture
important flow features, such as turbulence anisotropy or the effects of mean
or system rotation. A common feature of the classical closure approaches
described so far is the assumption that all key information about the
turbulence is contained in the scales of the turbulence and in the turbulence
stress tensor. However, one should consider that the turbulent stresses
contain information only about the componentality of the turbulence, i.e.
about the directions in which the turbulence fluctuations associated with
large-scale eddies are most energetic. Thus, traditional closures do not take
into account the morphology of the energy-containing eddies. Yet, eddies tend
to organize spatially the fluctuating motion in their vicinity. In doing so,
they eliminate gradients of fluctuation fields in some directions (those in
which the spatial extent of the structure is significant) and enhance
gradients in other directions (those in which the extent of the structure is
small). Thus, associated with each eddy are local axes of dependence and
independence that determine the dimensionality of the local turbulence
structure. This structure dimensionality information is complementary to the
componentality information contained in the Reynolds stresses, and as Kassinos
& Reynolds [2] and Kassinos et al. [3] have shown, it is dynamically
important. A detailed description of the complete set of the turbulence
structure tensors is given in several works [3, 4, 5].
The significant effect that the morphology of the large-scale structures has
on the evolution of turbulent statistics [6] has motivated the development of
structure-based models. These models can be classified into two categories:
differential and algebraic. The first category involves solving a set of
transport equations to evaluate structure tensors. Simplified models have been
proposed that are applicable at the homogeneous limit [7, 8, 9], while a more
sophisticated differential model was proposed by Poroseva et al. [10] that was
tested in a turbulent flow passing through a cylindrical pipe that rotates
around its longitudinal axis. Furthermore, structure-based models were
recently constructed to evaluate scalar transport, ranging from simple passive
scalars [5, 11] to strongly stably stratified flows [12].
The second category refers to algebraic approaches, which are based on
assumptions that lead to constitutive equations. The Algebraic Structure-Based
Model (ASBM) [13, 14] is an engineering structure-based turbulence model that
follows this second approach. It is a fully realizable two-equation structure-
aware model that provides the full Reynolds stress tensor. Panagiotou &
Kassinos [15] presented a successful coupling between the ASBM with the one-
equation SA model. Their intention was to combine the numerical robustness and
stability of the SA model along with the deeper physical content of the ASBM.
The performance of the hybrid model, called ASBM-SA, was evaluated in several
standard benchmark cases, ranging from simple fully-developed channel flows to
a flow over a steep hill, achieving an overall good agreement between model
and experimental predictions. Hence, the aim of this study is to evaluate the
performance of the ASBM-SA closure to a more complex case, in particular the
case of turbulent flow over a two-dimensional hill with and without separation
control.
## 2 Coupling between ASBM and the SA model
The main limitation of SA (and any other one-equation model) is that it does
not provide a complete set of turbulence scales. On the other hand, the
ASBM closure relies on the availability of suitable turbulence scales, and this
was the key stumbling block in trying to couple the ASBM and SA closures. The
Bradshaw hypothesis [16] has been used as the starting point for transforming
two-equation $\kappa$-$\epsilon$ closures, where $\kappa$ and $\epsilon$
are the turbulent kinetic energy and the energy dissipation rate respectively, into
one-equation models. Here, our objective is to use the same phenomenology in
order to extract these scales from the SA one-equation closure. A complete
description of this formulation, and of how the coupling between the two
closures has been achieved, can be found in Panagiotou & Kassinos [15].
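To convey the flavor of the idea, the following is a minimal sketch, under standard assumptions and not the exact formulation of [15], of how $\kappa$ and $\epsilon$ can be recovered from a one-equation eddy viscosity by combining the Bradshaw relation $-\overline{uv}\approx a_{1}\kappa$ (with $a_{1}=\sqrt{C_{\mu}}$) with the eddy-viscosity relation $-\overline{uv}=\nu_{t}S$.

```python
import math

C_MU = 0.09           # standard eddy-viscosity constant (assumed value)
A1 = math.sqrt(C_MU)  # Bradshaw structure parameter, roughly 0.3

def scales_from_nut(nu_t, S):
    """Illustrative recovery of (k, eps) from an eddy viscosity nu_t and a
    mean strain-rate magnitude S:
      -uv = nu_t * S and -uv = a1 * k   ->  k   = nu_t * S / a1
      nu_t = C_mu * k**2 / eps          ->  eps = C_mu * k**2 / nu_t
    """
    k = nu_t * S / A1
    eps = C_MU * k * k / nu_t
    return k, eps

# hypothetical inputs: nu_t in m^2/s from the SA closure, S in 1/s
print(scales_from_nut(nu_t=1.5e-3, S=250.0))
```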
## 3 Results and discussion
### 3.1 Outline
In this study we have considered the case of no-flow control, as well as the
case where active-control is applied via steady-suction. During the CFDVAL2004
Workshop [17], these two cases were investigated in depth to assess the
performance of popular engineering turbulence models in strongly separated
flows subjected to favorable/adverse pressure gradients. Experiments have been
conducted in the NASA Langley Transonic Cryogenic Tunnel by Greenblatt et al.
[18]. The shape of the hump is that of a “Modified Glauert-Goldschmied” hill,
similar to the one used by Seifert and Pack [19]. The experiments are
nominally two-dimensional (2D), despite the presence of three dimensional (3D)
effects near the side end-plates. The scenarios involved both uncontrolled and
controlled flow (steady suction) for Reynolds numbers ($Re$) ranging from 0.37
up to 1.1 million, corresponding to Mach numbers ($M$) ranging from 0.04 up to
0.12. One no-flow control case and one active-control case were selected for
the extraction of detailed experimental measurements. Figure 1 shows the
geometry of the whole domain, including a detailed view of the flow control
slot. The chord length of the hump is denoted as $c$, the height of the domain
$H$ is $90\%$ the chord length, while the maximum height of the hump is
approximately $0.13\ c$. The slot is located near $x/c\approx 0.65$, where the
slot width $h$ is $0.00187\ c$. Detailed information regarding the geometry,
computational grids and the relevant experiments can be found in [20]. One of
the conclusions reached during the CFDVAL2004 Workshop is that blockage
effects stemming from the presence of side plates need to be accounted for in
simulations, otherwise the computed pressure coefficients levels exhibit
significant discrepancy relative to the experiments, especially over the hump.
Thus, the top tunnel surface around the hump location is modified so as to reflect
the change in the tunnel cross-sectional area due to the presence of the side-
plates, as described in [20].
Figure 1: Sketch of the geometry, with a modification along the top-surface
such as to account for the side-plate effects, as described in [20].
### 3.2 Validation cases
#### 3.2.1 Turbulent boundary layer
As a first step, we performed steady computations of a spatially developing
boundary layer flow over a flat plate for flow conditions that correspond to
the experiments of Greenblatt et al. [18]. Profiles of the converged solution
at a specific streamwise location were extracted and then used as inlet
boundary conditions for the cases involving the 2D hump. The desired Reynolds
number is $Re_{\delta}\approx 68200$ based on the freestream velocity
$U_{\infty}$, where subscript $\infty$ denotes freestream values, and the
boundary layer thickness $\delta\approx 0.074\,c$. At the inlet, Dirichlet
boundary conditions are imposed for the mean streamwise velocity $U_{x}$ and
the pseudo-viscosity $\tilde{\nu}$, such that
${\tilde{\nu}_{\infty}}/{\nu}\approx 3$ and $U_{\infty}$ corresponds to a Mach
number $M=0.1$, yielding a freestream Reynolds number $Re_{\infty}=929,000$
based on the chord length $c$ of the hump, the air viscosity $\nu$ and the
freestream velocity $U_{\infty}$.
the outlet, a penalty condition is imposed to prevent the occurrence of
reflectional effects while ensuring mass conservation. A slip condition was
imposed at the top surface, a no-slip condition at the bottom wall surface and
periodic conditions along the spanwise direction. In order to obtain grid-
independent solutions, three different meshes of increasing resolution were
considered. For each mesh, geometric functions were used to define the normal
distribution of the nodes, while uniform spacing has been adopted along the
streamwise direction. Grid 1 contains a non-uniform mesh of size 120 x 90 x 1
along the streamwise, wall-normal and spanwise directions respectively. The
corresponding size for Grid 2 is 130 x 120 x 1 and for Grid 3 is 140 x 150 x
1. The finest grid yields a value of $y^{+}$ around 0.5 for the wall-adjacent
cell at the location of the extracted data. Figures 2a-b show predictions
using the SA closure for the streamwise mean velocity and pseudo-viscosity
respectively. In Figure 3 we show a comparison between the predictions of the
SA model using Grid 3 as the baseline grid, and the experimental data for the
streamwise mean velocity $U_{x}$, yielding a good agreement.
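Note that the two Reynolds numbers quoted above are mutually consistent, since $Re_{\delta}=Re_{\infty}\,(\delta/c)$; the one-liner below (added for verification) makes the check explicit.

```python
Re_inf = 929_000      # freestream Reynolds number based on the chord c
delta_over_c = 0.074  # boundary layer thickness over chord
print(Re_inf * delta_over_c)  # ~68,700, consistent with Re_delta ~ 68,200 given rounding
```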
(a)
(b)
Figure 2: Grid-convergence analysis for a spatially developing turbulent
boundary layer at $Re_{\delta}\approx 68,200$. SA model predictions for (a)
the streamwise mean velocity and (b) the pseudo-viscosity. Comparison is made
among three different grids: Grid 1 (solid line); Grid 2 (dashed line); Grid 3
(dash-dot-dot line).
Figure 3: SA model predictions (lines) for the streamwise mean velocity.
Comparison is made to the experiments (symbols) of Greenblatt et al. [18].
#### 3.2.2 No-flow control
The case of flow over a hump having the shape of “Modified Glauert” hill is
considered next. This case was originally conceived for testing the ability of
active control to reduce the size of the existing recirculation bubble.
However, from a turbulence modeling perspective, even the uncontrolled case is
interesting due to the presence of strong separation, which proves to be
challenging to turbulence engineering models. Thus, in our numerical
experiments, we considered first the uncontrolled case of flow. Simulations
have been performed using SA and ASBM-SA models, which are compared to the
experimental work of Greenblatt et al. [18] At the inlet surface, profiles for
the variables are obtained from the SA solution for the turbulent boundary
layer corresponding to $Re_{\delta}\approx 68200$ as described in the previous
subsection. At the floor surface, as well as at the wall surfaces inside the
cavity, solid wall (no-slip) boundary conditions were applied. A penalty
condition is imposed at the outlet surface to ensure that mass flow exits the
domain properly, while slip conditions are used at the top surface and
periodic conditions for the spanwise direction. Two grids were considered to
conduct a grid-sensitivity analysis. The coarser grid contains approximately
103,000 grid points, whereas the finer grid possesses approximately 160,000 grid
points. Mesh details are shown in Figure 4, together with a zoomed view of the
cavity region.
(a)
(b)
(c)
Figure 4: Unstructured computational grid with details of the slot region.
Figures 5a-b show SA model predictions for the wall-static pressure
coefficient $c_{p}=(p-p_{0})/(\tfrac{1}{2}\rho U^{2}_{\infty})$ and skin-friction
coefficient $c_{f}=\tau_{w}/(\tfrac{1}{2}\rho U^{2}_{\infty})$, respectively.
Based on these results, the coarser grid is shown to be fine enough for the
current case.
(a)
(b)
Figure 5: Effect of grid on SA model predictions for the uncontrolled case,
for (a) the wall-static pressure coefficient and (b) the skin-friction
coefficient. Two grids are shown: coarse grid (solid line); fine grid
(dashed line).
Due to the algebraic nature of the ASBM closure, numerical difficulties are
encountered in cases where strongly separated flows are considered, such as
the present ones. In previous works, a filtering scheme was applied in
order to smoothen the profiles, thereby improving the numerical stability of
the solution. However, use of this scheme in the current case led to a
mismatch between SA and ASBM-SA predictions for the skin-friction coefficients
in regions where good agreement was expected, such as upstream of the
leading edge of the hill and downstream of the recirculation region. As a
result, we separated the domain into two zones, one upstream of the leading edge
where the filtering scheme is not active, and one downstream of the
leading edge where the scheme is switched on (Figure 6). Figure
7 shows a comparison between SA and ASBM-SA predictions using
both approaches for the filtering scheme, revealing the significant role that
the filtering details play on the skin-friction distribution all along the
bottom surface. In contrast, the pressure coefficient remains unaffected by
the choice of filtering scheme.
Figure 6: Separation zones showing where the filtering scheme is active (on)
or not (off).
Figure 8 displays the evolution of the maximum mean streamwise velocity
residual. The residual is divided by its initial value, denoted by subscript
0, and is defined by
$\text{Residual}=\max\bigg{[}\frac{V\times\Delta U_{x}/\Delta
t}{(V\times\Delta U_{x}/\Delta t)_{0}}\bigg{]}\,,\hskip 8.5359pt\Delta
U_{x}=U^{n+1}_{x}-U^{n}_{x}\,,$ (1)
where $n$ refers to the $n$-th iteration, $\Delta t$ to the time step and $V$
to the volume of the corresponding cell. For both SA and ASBM-SA models, a
drop of at least 5 orders of magnitude for the residuals compared to the
initial field is achieved, which is believed to be sufficient to provide time-
converged solutions.
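A minimal sketch of how this residual can be evaluated at each iteration is given below (illustrative only; the arrays for cell volumes and velocity fields are hypothetical placeholders).

```python
import numpy as np

def normalized_residual(Ux_new, Ux_old, V, dt, initial_value):
    """Normalized residual of Eq. (1): the max over cells of V * dUx / dt,
    divided by the same quantity at the first iteration."""
    local = np.abs(V * (Ux_new - Ux_old) / dt)
    return local.max() / initial_value

# usage: store the raw maximum once at the start of the run, then track the ratio
# initial_value = np.abs(V * (Ux1 - Ux0) / dt).max()
# res = normalized_residual(Ux_next, Ux_curr, V, dt, initial_value)
```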
Figure 8: Time history of the streamwise mean velocity residual for the
uncontrolled case. The sudden jump in the residual levels indicates the point
where the ASBM coupling is switched on.
In order to both accelerate our simulations and overcome some stability issues
related to the ASBM-SA computations inside the cavity in the no-suction case,
additional computations using similar meshes but without the cavity
were conducted. For these cases, we considered a solid-wall
condition along the slot exit. Figures 9a-b show SA predictions for the mean
velocity streamlines in the vicinity of the hump in the presence and absence
of the cavity respectively, demonstrating the negligible discrepancies between
the two approaches. Figure 10 shows the corresponding comparison for the wall-
static pressure coefficient, which again reveals the negligible effect of the
cavity absence on the results when the flow control is inactive (no suction).
(a)
(b)
Figure 9: SA model predictions for the streamlines of the mean flow
approaching the hump (a) in the presence of the cavity and (b) in the absence
of the cavity. Figure 10: SA model predictions in the presence (solid line) or
absence (dashed line) of the cavity for the wall-static pressure coefficient.
In the following figures, all turbulent and mean quantities (except $c_{f}$,
$c_{p}$) are normalized based on the chord length $c$ and the reference inlet
freestream velocity $U_{\infty}$. Setting the leading edge of the hump as the
origin of the streamwise distance ($x/c=0$), profiles are extracted at three
different stations inside the recirculation bubble as measured by the
experiments ($x/c=0.66,\ 0.8,\ 1.0$) and one station at the recovery region
($x/c=1.2$), denoted as stations $A,\ B,\ C,\ D$ respectively (Figure 11).
Figure 11: Geometrical and mesh details in the absence of cavity. Data is
extracted at four stations, denoted as A, B, C and D. The leading-edge point
(LE) and the re-attachment point (R) are also shown.
Figure 12 shows the predictions of the ASBM-SA and SA closures for the
variation of the pressure ($c_{p}$) and skin friction ($c_{f}$) coefficients
along the wall surface. Comparison is made to experimental measurements. The
ASBM-SA model captures accurately the peak magnitude of the pressure
coefficient around $x/c\approx 0.57$. For $x/c$ ranging from -1 to 1.1, that
is from the inlet up to about the re-attachment point, the ASBM-SA provides
slightly improved $c_{p}$ predictions when compared to SA. Right after re-
attachment though, the ASBM-SA predicts a slightly delayed recovery of $c_{p}$
as compared to the experiments (and the SA closure). The two models produce
comparable agreement with the experiments for the skin-friction coefficient.
ASBM-SA provides an improvement right after the upstream edge of the hill
($x/c\approx 0.2$), while SA predicts correctly the magnitude near the sharp
geometry change that occurs around $x/c\approx 0.65$. According to Table 1,
both models overpredict the recirculation bubble, with ASBM-SA tending
to delay the re-attachment of the flow further downstream, an
observation that is in agreement with previous cases in which separated flows
over 2D hills were considered, such as the “Witch of Agnesi” hump, as
described in detail in [15]. Figure 13 shows results for the streamwise mean
velocity $U_{x}$ at the four stations. As shown in Figures 13a-b, at the first
two stations ASBM-SA provides slightly improved predictions relative to the SA
model, while SA is in better agreement with the experimental data at the next
two stations, mostly due to the greater delay of re-attachment in the ASBM-SA
predictions.
(a)
(b)
Figure 12: SA (solid line) and ASBM-SA (dashed line) model predictions for the no-
flow control case for (a) the wall static-pressure coefficient and (b) the
wall skin-friction coefficient. Comparison is made to experimental values
(symbols) of Greenblatt et al. [18].
(a) station A, $x/c=0.66$
(b) station B, $x/c=0.8$
(c) station C, $x/c=1.0$
(d) station D, $x/c=1.2$
Figure 13: Turbulent flow over the “Glauert-Goldschmied” 2D hill for the no-
flow control case. Model predictions for the streamwise mean velocity $U_{x}$
at various $x$-stations for SA (solid line) and ASBM-SA (dashed line) closures.
Comparison is made to experimental values of Greenblatt et al. [18].
Next, we consider the performance of the ASBM-SA closure for the turbulent
intensities and the fluctuating shear stress with respect to experimental
results. The SA model is included only in the comparison for the fluctuating
shear stress component, since it cannot provide predictions for the turbulent
intensities. Figure 14 displays the streamwise Reynolds stress component
$R_{xx}$. At station A, ASBM-SA yields reasonable predictions, being able to
capture the near-wall peak magnitude. At the remaining three stations
($x/c=0.8,1.0,1.2$), ASBM-SA is able to capture satisfactorily the peak
magnitude. We note that the wiggles near $y/c=0.2$ at station A (Figure 14a)
originate from the algebraic expressions for the estimation of the turbulent
kinetic energy (not shown here). Following term by term the algebraic
procedure for the calculation of kinetic energy, we found that this issue is
most likely related to the local mean velocity gradients. These wiggles are
also present in previous works [15], for which similar findings were deduced.
We also point out that the location $y/c=0.2$ at which the wiggles appear is
close to the interface between two grid blocks. Overall, we believe that this
is a localized effect that does not affect the quality of the solution.
(a) station A, $x/c=0.66$
(b) station B, $x/c=0.8$
(c) station C, $x/c=1.0$
(d) station D, $x/c=1.2$
Figure 14: Turbulent flow over the “Glauert-Goldschmied” 2D hill for the no-
flow control case. ASBM-SA model predictions (lines) for the streamwise
Reynolds stress component $R_{xx}$ at various $x$-stations are shown.
Comparison is made to experimental values (symbols) of Greenblatt et al. [18].
Figure 15 depicts the corresponding predictions for the
transverse Reynolds stress component $R_{yy}$. The ASBM-SA closure strongly
overpredicts the near-wall magnitude at station A, while a fair agreement with
the experiments is achieved at the remaining stations.
Figure 16 shows SA and ASBM-SA predictions for the fluctuating
shear stress component. At the first station, ASBM-SA exhibits a similar
behavior as for the transverse Reynolds stress component. At the remaining
three stations, ASBM-SA provides a noticeable improvement relative to the SA
model, in both the near-wall and freestream regions. This improvement is
evident in the whole range of the recirculation bubble, suggesting a
satisfactory response of the hybrid model to the strong anisotropic effects
that characterize this region.
#### 3.2.3 Control via steady suction
As a first step, we define the steady mass transfer momentum coefficient
$c_{\mu}=\frac{\rho h U_{jet}^{2}}{\tfrac{1}{2}c\rho U_{o}^{2}}\,,$ (2)
where $U_{jet}$ denotes the jet velocity. For the current case, $c_{\mu}$ is
set equal to $0.241\,\%$, corresponding to a constant mass flow rate of
$\dot{m}=0.01518\ kg/s$ being sucked through the slot, in order to match the
experimental conditions of Greenblatt et al. [18]. Figure 17 shows ASBM-SA
model predictions for the wall-normal spacing along the floor surface for the
uncontrolled and controlled cases, needed to ensure that our solutions are
obtained in sufficiently resolved grids. As expected, a sharp increase in
$y^{+}$ levels occurs at the location of the slot exit where no wall is
present.
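From (2) with $h/c=0.00187$, the quoted momentum coefficient fixes the suction velocity ratio; the short sketch below (added for illustration; $U_{o}$ is the reference freestream velocity appearing in (2)) solves for $U_{jet}/U_{o}$.

```python
from math import sqrt

c_mu = 0.241e-2     # steady mass transfer momentum coefficient, 0.241%
h_over_c = 0.00187  # slot width over chord length

# From (2), with the density cancelling: c_mu = 2 * (h/c) * (U_jet/U_o)**2
u_ratio = sqrt(c_mu / (2.0 * h_over_c))
print(u_ratio)  # ~0.80, i.e. U_jet is roughly 80% of the reference velocity
```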
Figure 17: ASBM-SA model predictions for the streamwise variation at the
bottom surface of the normal spacing at the wall, normalized in wall units.
Results are shown for both the no-flow control (solid line) and the steady-
suction (dashed line) cases.
Results for the wall-static pressure coefficient are shown in Figure 18, where
the predictions of the SA and ASBM-SA models are compared to the experimental
data of Greenblatt et al. [18]. The ASBM-SA closure manages to capture
accurately the magnitude and the location of the first sharp change right
after the slot, located around $x/c\approx 0.67$, followed by a small recovery
delay, as compared to the SA model, until the trailing edge of the hill
($x/c=1$) where the two models coincide again.
Figure 18: SA (solid line) and ASBM-SA (dashed line) model predictions for
the steady-suction case for the wall static-pressure coefficient. Comparison
is made to experimental values of Greenblatt et al. [18].
Figure 19 displays results for the streamwise mean velocity $U_{x}$ at the
four stations. As shown, SA provides slightly better agreement with the
experiments than the ASBM-SA model. Figure 20 shows the corresponding
comparison for the transverse mean velocity $U_{y}$. In general, SA
predictions are in better agreement with the experimental data than those of
the ASBM-SA model.
(a) station A, $x/c=0.66$
(b) station B, $x/c=0.8$
(c) station C, $x/c=1.0$
(d) station D, $x/c=1.2$
Figure 19: Turbulent flow over the “Glauert-Goldschmied” hill for the steady-
suction case. Model predictions for the streamwise mean velocity $U_{x}$ at
various $x$-stations for SA and ASBM-SA closures. Comparison is made to
experimental values of Greenblatt et al. [18].
(a) station A, $x/c=0.66$
(b) station B, $x/c=0.8$
(c) station C, $x/c=1.0$
(d) station D, $x/c=1.2$
Figure 20: Turbulent flow over the “Glauert-Goldschmied” hill for the steady-
suction case. Model predictions for the transverse mean velocity $U_{y}$ at
various $x$-stations for SA and ASBM-SA closures. Comparison is made to
experimental values of Greenblatt et al. [18].
Table 1 shows details regarding the recirculation region. SA predicts the
re-attachment point more accurately for both cases, providing an indication of
why the SA closure obtains better results than the hybrid model for the mean
statistics.
Case | Model | sep. loc. (experiment) | sep. loc. (CFD) | Error ($\%$) | reatt. loc. (experiment) | reatt. loc. (CFD) | Error ($\%$)
---|---|---|---|---|---|---|---
no-flow control | SA | $\approx 0.67$ | 0.663 | 1.0 | $1.11\pm 0.003$ | 1.235 | 11.3
no-flow control | ASBM-SA | $\approx 0.67$ | 0.656 | 2.1 | $1.11\pm 0.003$ | 1.330 | 19.8
steady suction | SA | $\approx 0.68$ | 0.676 | 0.6 | $0.94\pm 0.005$ | 1.113 | 18.4
steady suction | ASBM-SA | $\approx 0.68$ | 0.665 | 2.2 | $0.94\pm 0.005$ | 1.180 | 25.5
Table 1: Details of SA and ASBM-SA model predictions regarding the
recirculation bubble for each case. Comparison is made to the experimental
work of Greenblatt et al. [18].
Figure 21 shows the corresponding comparison for the streamwise Reynolds
stress component $R_{xx}$. At three of the four stations, ASBM-SA correctly
predicts the near-wall peak magnitude and the freestream values, yielding a
fair agreement with the experiments.
(a) station A, $x/c=0.66$
(b) station B, $x/c=0.8$
(c) station C, $x/c=1.0$
(d) station D, $x/c=1.2$
Figure 21: Turbulent flow over the “Glauert-Goldschmied” hill for the steady-
suction case. Model predictions for the streamwise Reynolds stress component
$R_{xx}$ at various $x$-stations for SA and ASBM-SA closures. Comparison is
made to experimental values of Greenblatt et al. [18].
In Figure 22, the agreement between the ASBM-SA predictions and the
experimental measurements for the transverse Reynolds stress component
$R_{yy}$ is qualitative. Combining these results with the analogous
ones for $R_{xx}$ reveals the sensitivity of the algebraic model to the
anisotropic nature of the flow.
Results for the fluctuating shear stress component $R_{xy}$ are shown in
Figure 23. As shown, the hybrid ASBM-SA model is able to
provide significantly improved predictions compared to the SA closure in the
whole range of the recirculation region. Overall, ASBM-SA provides a
satisfactory agreement with experiments.
The active control effect on the recirculation bubble is visualized in Figure
24. ASBM-SA model predictions for the streamlines of the mean velocity for
both cases are shown, revealing a noticeable reduction of the bubble size.
(a)
(b)
Figure 24: ASBM-SA model predictions for the streamlines of the mean velocity
for (a) the uncontrolled case and (b) the controlled case.
## 4 Summary and Conclusions
The ASBM-SA closure has been tested for the case of a flow over a two-
dimensional smooth hill in the shape of a “Modified Glauert-Goldschmied” hump
both in the presence and absence of separation control. For both cases
considered, the ASBM-SA model produced satisfactory predictions for the
streamwise Reynolds stress component $R_{xx}$, while a qualitative agreement
with the experiments was achieved for the transverse component $R_{yy}$. ASBM-
SA closure provided improved predictions compared to SA for the shear stress
at all stations. Regarding the mean quantities, the predictions of both
closures are comparable, providing fair agreement with the experiments.
Overall, the hybrid model managed to capture satisfactorily the traits of these
highly anisotropic flows, while maintaining the high robustness of the SA
model at a good convergence rate. Use of separate zones for the activation
of the filtering scheme resulted in smoother mean velocity profiles in the
recirculation region, and in more meaningful skin-friction profiles upstream and
downstream of the separated region.
As part of future work we intend to assess the performance of the hybrid
model on challenging two-dimensional flows, such as turbulent flows around a
wall-mounted cube and plane wall jets, while an extension to three-dimensional
smooth hills will be attempted. Regarding the refinement issues encountered in
the current and previous works, further consideration is needed to understand
why ASBM-SA delays re-attachment further than SA in all validation
cases considered until now, even though it gives better predictions for the
shear stress over the entire recirculation region. We have already
started developing more advanced filtering schemes, suitable for highly
deformed meshes, since we believe that the choice of filtering scheme plays a
role in the delay of the flow's re-attachment and contributes to yielding larger
recirculation bubbles.
## References
* [1] P. Spalart and S. Allmaras. A one-equation turbulence model for aerodynamic flows. Recherche Aerospatiale, 1:5–21, 1994. doi: 10.2514/6.1992-439.
* [2] S.C. Kassinos and W.C. Reynolds. A structure-based model for the rapid distortion of homogeneous turbulence. PhD Thesis, Thermosciences Division Department of Mechanical Engineering Stanford, California 94305, 1994.
* [3] S.C. Kassinos, W.C. Reynolds, and M.M. Rogers. One-point turbulence structure tensors. J. Fluid Mech., 428:213–248, 2001. doi: 10.1017/S0022112000002615.
* [4] F.S. Stylianou, R. Pecnik, and S.C. Kassinos. A general framework for computing the turbulence structure tensors. Comput. Fluids, 106:54–66, 2015. doi: 10.1016/j.compfluid.2014.09.042.
* [5] C.F. Panagiotou and S.C. Kassinos. A structure-based model for the transport of passive scalars in homogeneous turbulent flows. Int. J. Heat Fluid Fl., 57:109–129, 2016. doi: 10.1016/j.ijheatfluidflow.2015.11.008.
* [6] B. Shraiman and E. Sigga. Scalar turbulence. Nature, 405:639–646, 2000. doi: 10.1038/35015000.
* [7] S.C. Kassinos and W.C. Reynolds. A particle representation model for the deformation of homogeneous turbulence. pages 31–51. Annual Research Briefs, Center for Turbulence Research, 1996.
* [8] S.C. Kassinos and E. Akylas. Advances in particle representation modeling of homogeneous turbulence. From the linear PRM version to the interacting viscoelastic IPRM, volume 18. ERCOFTAC Series, 2012. doi: 10.1007/978-94-007-2506-5_6.
* [9] C.F. Panagiotou and S.C. Kassinos. A differential structure-based model based on stochastic evolution equations for the scalar turbulence parameters. European Conference on Computational Fluid Dynamics (ECFD 6), Barcelona, Spain, 2014.
* [10] S.V. Poroseva, S.C. Kassinos, C.A. Langer, and W.C. Reynolds. Structure-based turbulence model: Application to a rotating pipe flow. Phys. Fluids, 14(4):1523–1532, 2002. doi: 10.1063/1.1458008.
* [11] C.F. Panagiotou, F.S. Stylianou, and S.C. Kassinos. Structure-based transient models for scalar dissipation rate in homogeneous turbulence. Int. J. Heat Fluid Fl., 82:108557, 2020.
* [12] C.F. Panagiotou and S.C. Kassinos. A structure-based model for transport in stably stratified homogeneous turbulent flows. Int. J. Heat Fluid Fl., 65:309–322, 2017. doi: 10.1016/j.ijheatfluidflow.2016.12.005.
* [13] S.C. Kassinos, C.A. Langer, G. Kalitzin, and G. Iaccarino. A simplified structure-based model using standard turbulence scale equations: Computation of rotating wall-bounded flows. Int. J. Heat Fluid Fl., 27:653–660, 2006. doi: 10.1016/j.ijheatfluidflow.2006.02.018.
* [14] C.A. Langer and W.C. Reynolds. A new algebraic structure-based turbulence model for rotating wall-bounded flows. PhD Thesis, Thermosciences Division Department of Mechanical Engineering Stanford, California 94305, 2003.
* [15] C.F. Panagiotou and S.C. Kassinos. The ASBM-SA turbulence closure: Taking advantage of structure-based modeling in current engineering CFD codes. Int. J. Heat Fluid Fl., 52:111–128, 2015.
* [16] P. Bradshaw, D.H. Ferriss, and N.P. Atwell. Calculation of boundary layer development using the turbulent energy equation. J. Fluid Mech., 28(3):593–616, 1967. doi: 10.1017/S0022112067002319.
* [17] C. Rumsey, T.B. Gatski, L. Sellers, V. Vatsa, and S. Viken. Summary of the 2004 CFD validation workshop on synthetic jets and turbulent separation control. In AIAA 2004-2217, 2004. doi: 10.2514/6.2004-2217.
* [18] D. Greenblatt, K.B. Paschal, N.W. Schaeffler, A.E. Washburn, J. Harris, and C. Yao. A separation control CFD validation test case. Part 1: Baseline and steady suction. In 2nd AIAA Flow Control Turbulence, 2004. doi: 10.2514/6.2004-2220.
* [19] A. Seifert and L.G. Pack. Active flow separation control on wall-mounted hump at high Reynolds numbers. AIAA J., 40(7):1363–1372, 2002. doi: 10.2514/2.1796.
* [20] T.B. Gatski and C. Rumsey. Langley Research Center Workshop: CFD validation of synthetic jets and turbulent separation control, (Cited 1 March 2020).
# The Use of Change Point Detection to Identify Software Performance
Regressions in a Continuous Integration System
David Daly (0000-0001-9678-3721), MongoDB Inc.; William Brown, Columbia
University; Henrik Ingo (0000-0003-1571-5108), MongoDB Inc.; Jim O’Leary
(0000-0002-3923-5742), MongoDB Inc.; David Bradford (0000-0003-2282-6535),
MongoDB Inc.
(2020)
###### Abstract.
We describe our process for automatic detection of performance changes for a
software product in the presence of noise. A large collection of tests run
periodically as changes to our software product are committed to our source
repository, and we would like to identify the commits responsible for
performance regressions. Previously, we relied on manual inspection of time
series graphs to identify significant changes. That was later replaced with a
threshold-based detection system, but neither system was sufficient for
finding changes in performance in a timely manner. This work describes our
recent implementation of a change point detection system built upon the
E-Divisive means (Matteson and James, 2014) algorithm. The algorithm produces
a list of change points representing significant changes from a given history
of performance results. A human reviews the list of change points for
actionable changes, which are then triaged for further inspection. Using
change point detection has had a dramatic impact on our ability to detect
performance changes. Quantitatively, it has dramatically dropped our false
positive rate for performance changes, while qualitatively it has made the
entire performance evaluation process easier, more productive (ex. catching
smaller regressions), and more timely.
Keywords: change points, performance, testing, continuous integration
Conference: Proceedings of the 2020 ACM/SPEC International Conference on
Performance Engineering; April 20–24, 2020; Edmonton, AB, Canada. DOI:
10.1145/3358960.3375791
CCS Concepts: Software and its engineering → Software performance;
Information systems → Database performance evaluation; Mathematics of
computing → Time series analysis
## 1\. Introduction
We work in a software and services company and need to understand the
performance of the software we develop and sell. Our continuous integration
system runs thousands of benchmarks periodically (most commonly every 2 hours
or every 24 hours), which each produce one or more scalar values as a result.
It is a challenge to analyze all of those results and historical data to
determine whether the test should be considered passed or failed. It is
inherent to this type of benchmarking that the results contain some level of
noise. In most of our tests the worst case run to run variation level is
currently less than $10\%$, but some sensitive tests can fluctuate as much as
$20\%$ or more over the course of a small number of runs. In this paper, we
detail the results of our experience in deploying automated change point
detection software for identifying performance changes in an evolving code
base.
### 1.1. Existing performance results and analysis
Our performance testing infrastructure is built upon our continuous
integration (CI) system: Evergreen (Erf, 2016; Eve, [n.d.]). Evergreen tracks
every commit to our source repository, compiles the software on a number of
different build targets, and runs correctness tests.
Evergreen also runs our performance tests. The performance tests take longer
to run than the correctness tests, so we run them less frequently, but
otherwise the performance and correctness tests are the same from the
perspective of Evergreen.
With any performance test, we must
* •
Setup a system under test
* •
Run a workload against the system under test
* •
Report the results from the test
* •
Decide (and alert) if the performance changed
* •
Visualize the results
This paper focuses on the last two bullets (previous work focused on the first two bullets and on getting reproducible results (Ingo and Daly, 2019)). When we first set up our performance tests in our CI system, we had a visualization of the performance over time, but no automated alerting. We had a person (called the “Build Baron”) looking through the graphs for regressions, and opening Jira tickets (Atlassian’s Jira is our ticketing system) for any changes requiring further investigation. The Build Baron would be looking at
performance trend graphs like those shown in Figure 1 and Figure 2. Figure 1
shows a test with some basic noise, but no real changes in performance, while
Figure 2 has two very clear changes in performance. The cases in Figures 1 and
2 should be easy for the Build Baron to triage, but many other cases are not,
such as the two graphs in Figure 3 and the graph in Figure 4, which have
smaller changes and larger run to run variation. We quickly realized that
having humans triaging performance changes solely on the trend graphs was not
a tenable solution: humans looking through all the graphs lose focus quickly
and the problem would get worse as we added more tests and system
configurations. Therefore we added an automated detection system.
Figure 1. Performance trend graph with noise and no discernible changes.
Figure 2. Example of a clear drop in performance on August 8th and a subsequent performance recovery on August 20th.
Figure 3. Performance trend graphs with noise. The second graph (insert_ttl) has a change in performance on June 13th smaller than the noise threshold of the first graph. The amount of noise also changes over time for the insert_ttl graph.
Figure 4. Performance trend graph with a small drop in performance on August 5th. It would be easy for a person to miss that regression when looking through many graphs.
The automated detection system was designed to catch
* •
Sudden drops in performance from a specific software change
* •
Multiple small changes over time causing a drift down in performance
It did this in a very simple manner by doing three comparisons:
1. (1)
Compare to the immediately preceding run. Did it change more than $X\%$? (Here $X$ is a parameter of the system; we used $10\%$ by default, and a human then looked at flagged failures.)
2. (2)
Compare to a run from a week ago. Did it change more than $X\%$?
3. (3)
Compare to a run for the last stable release of the software. Did it change
more than $X\%$?
This system was a vast improvement over just staring at graphs. It was
automated and computers do not get bored or lose focus. However, we soon
realized we had a large number of false positives with which to deal (up to
99% as discussed in Section 5 depending on how you count). All the tests have
some inherent noise in them. However, different tests and different test
configurations have different levels and types of noise. Using a static
threshold (e.g., $10\%$) led to a lot of alerts for changes on noisy tests. At
the same time it would either miss real changes that were less than the
threshold, or detect the smaller changes at some later time (when the change
plus the noise crossed the threshold). For example, in Figure 3 the insert_ttl
test shows a clear, but very small change around June 13th, while the
map_reduce workload is steady with run to run variation (noise). There is no
way we could adjust a common threshold for those two tests that would both
catch the change in insert_ttl and not constantly flag the map_reduce test, or
constantly flag the insert_ttl 1 thread line after July 19th.
While the system was noisy and painful, it was still useful for us. Over time
we added fixes and band-aids to the system to reduce the number of false
positives (e.g., adjust the thresholds separately for different tests). Even
with all those fixes to the previous system, the system:
* •
Produced a large number of false positive alerts that had to be processed
* •
Missed smaller regressions (false negatives)
* •
Flagged failures on commits unrelated to the regression, requiring humans to
trace back to find when a regression actually happened
* •
Only flagged regressions, not improvements
* •
Consumed a huge amount of time from the team to process all the results
We wanted to improve the process to something fundamentally better. We started
by trying to clearly state the problem.
## 2\. Problem Definition
The software we are testing is updated in discrete commits to our source
repository. Our problem is to:
###### Problem 1.
Detect which commits change the performance of the software (as measured by
our performance tests) in the presence of the noise from the testing
infrastructure.
When the problem is phrased like that, it is clear that this is a signal
processing problem. The noise of the system makes the performance results a
stochastic process. As such, we reviewed the literature on signal
processing444See Section 7 for references on signal processing and change
point detection. In particular, we decided our problem was a change point
detection problem. From Matteson and James (Matteson and James, 2014): “Change
point analysis is the process of detecting distributional changes within time-
ordered observations.” Our system can be represented as the time series:
(1) $S_{t}=P_{t}+N_{t}$
where $S_{t}$ is the measured signal, $P_{t}$ is a constant value of performance, and $N_{t}$ is a random variable representing the noise. It is not clear a priori whether the noise component is independent and identically distributed (IID). In fact, we have examples in which the noise is
correlated with time.
The noise is really composed of at least three components:
(2) $N=N_{p}+N_{s}+N_{w}$
* •
$N_{p}$: The noise from the test platform, including CPU, network, OS, etc.
* •
$N_{s}$: The noise from the software itself. This can be from the compiler as
well as other non-determinism or randomness in the software.
* •
$N_{w}$: The noise from the workload generator. The workload generator is also
software and may vary in the load it applies to the system.
Reducing $N_{p}$ and $N_{w}$ was the focus of previous work (Ingo and Daly,
2019). However, as computers and computer networks are inherently very
complicated, non-deterministic machines, including things like caches and
predictors, we can only hope to reduce those noise components, not remove
them. As such, we have to learn to live with noise.
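To make this concrete, the toy sketch below (all parameter values are illustrative) generates a series $S_{t}=P_{t}+N_{t}$ with a single step change; synthetic series like this are useful for exercising a change point detector before pointing it at real data.

```python
import numpy as np

def synthetic_series(t=200, change_at=120, p_before=100.0, p_after=90.0,
                     noise_sd=2.0, seed=0):
    """Generate S_t = P_t + N_t with one step change in performance.

    The noise here is drawn IID Gaussian for simplicity; real N_t mixes
    platform, software, and workload noise and can be correlated in time.
    """
    rng = np.random.default_rng(seed)
    p = np.where(np.arange(t) < change_at, p_before, p_after)  # P_t
    n = rng.normal(0.0, noise_sd, size=t)                      # N_t
    return p + n
```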
### 2.1. E-Divisive Means
After reviewing the literature on change point detection, we decided to use
the E-Divisive means algorithm (Matteson and James, 2014). E-Divisive means
has the following useful properties for us:
* •
Does not require any distributional assumptions other than a finite mean
* •
Can be recursively applied to identify multiple change points
* •
Does not require training data and works out of the box on a time series
* •
Optimal sets of change points can be efficiently computed via dynamic
programming and permutation sampling
Additionally, the main focus of our work is on retroactive identification of
change points, not prediction. Many time series algorithms are intended for
forecasting, which is not relevant in the context of commits being pushed by
developers.
The algorithm works by hierarchically selecting distributional change points
that divide the time series into clusters. For a given cluster, a candidate
change point is chosen by computing a statistic $\widehat{Q}$ at every point
within the cluster and selecting the point with the largest value. This
statistic $\widehat{Q}$ is the discrete analog of the divergence between
continuous multivariate distributions. After the first change point is
determined, the algorithm is then reapplied to each cluster created by the
existing change point. This process repeats until the largest candidate
$\widehat{Q}$ value is below a threshold of significance.
#### 2.1.1. $\widehat{Q}$ Statistic Computation
To compute $\widehat{Q}$ for a series of $n+m$ points bisected at the $n$th
point, let:
(3) $\mathbf{X}_{n}=\\{X_{i}:i=1,...,n\\}$ (4)
$\mathbf{Y}_{m}=\\{Y_{j}:j=1,...,m\\}$
represent the first $n$ points and the last $m$ points, respectively. An empirical
divergence metric can be computed as follows:
(5)
$\begin{split}\widehat{\mathcal{E}}(\mathbf{X}_{n},\mathbf{Y}_{m};\alpha)=&\frac{2}{mn}\sum_{i=1}^{n}\sum_{j=1}^{m}{|X_{i}-Y_{j}|}^{\alpha}\\\
&-\binom{n}{2}^{-1}\sum_{1\leq i<k\leq n}{|X_{i}-X_{k}|}^{\alpha}\\\
&-\binom{m}{2}^{-1}\sum_{1\leq j<k\leq m}{|Y_{j}-Y_{k}|}^{\alpha}\end{split}$
The $\alpha$ parameter can be any value between 0 and 2. We use 1 for simplicity (Matteson and James (2014) observed similar results with $\alpha$ of 0.5 and 1.5, and presented their simulation studies with $\alpha$ of 1). We then attain $\widehat{Q}$ by weighting the previous result by the
size of our clusters:
(6)
$\widehat{Q}(\mathbf{X}_{n},\mathbf{Y}_{m};\alpha)=\frac{mn}{m+n}\widehat{\mathcal{E}}(\mathbf{X}_{n},\mathbf{Y}_{m};\alpha)$
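As a concrete reading of Equations (5) and (6), the sketch below computes $\widehat{\mathcal{E}}$ and $\widehat{Q}$ directly for one candidate bisection. It is deliberately naive; the optimizations we actually rely on are described in Section 4.

```python
from itertools import combinations

def e_divisive_stat(xs, ys, alpha=1.0):
    """Direct computation of E-hat (Eq. 5) and Q-hat (Eq. 6) for one split.

    xs holds the first n observations and ys the remaining m. A naive
    illustrative sketch, not the optimized production code.
    """
    n, m = len(xs), len(ys)
    cross = sum(abs(x - y) ** alpha for x in xs for y in ys)
    within_x = sum(abs(a - b) ** alpha for a, b in combinations(xs, 2))
    within_y = sum(abs(a - b) ** alpha for a, b in combinations(ys, 2))
    e_hat = (2.0 / (m * n)) * cross
    if n > 1:
        e_hat -= within_x / (n * (n - 1) / 2.0)
    if m > 1:
        e_hat -= within_y / (m * (m - 1) / 2.0)
    q_hat = (m * n / (m + n)) * e_hat
    return e_hat, q_hat
```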
#### 2.1.2. Termination
The procedure for termination in the paper involves numerous random
permutations of the values in each cluster to determine the statistical
significance. Intuitively, if rearranging the order of data in a cluster does
not significantly affect the maximum $\widehat{Q}$, there is likely no
meaningful change point.
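A simplified sketch of such a permutation test follows; it reuses `e_divisive_stat` from the sketch above, and the number of permutations, minimum cluster size, and significance level are illustrative choices rather than the paper's exact procedure.

```python
import numpy as np

def best_qhat(series, min_size=3):
    """Largest Q-hat over all admissible splits of a cluster."""
    return max(
        e_divisive_stat(series[:k], series[k:])[1]
        for k in range(min_size, len(series) - min_size + 1)
    )

def is_significant(series, q_obs, n_perm=199, sig_level=0.05, seed=0):
    """Keep a candidate change point only if its Q-hat is extreme relative
    to the best Q-hat of randomly permuted copies of the cluster."""
    rng = np.random.default_rng(seed)
    exceed = sum(
        best_qhat(rng.permutation(series)) >= q_obs for _ in range(n_perm)
    )
    p_value = (exceed + 1) / (n_perm + 1)
    return p_value <= sig_level
```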
## 3\. Implementation
Selecting an algorithm for detecting change points was just the beginning. We
then needed to:
* •
Implement (or find an implementation of) the algorithm
* •
Automate processing our data with the algorithm
* •
Present the data after processing
* •
Enable users to act on the presented data
The first step of implementing the algorithm was straightforward as there is
an R package (James and Matteson, 2015) for the algorithm. We also wrote a
Python implementation (available at https://github.com/mongodb/signal-processing-algorithms) to go with our existing Python code base. Both implementations will iteratively find
the most significant change point in the series, and then analyze the clusters
on either side of the change point, until reaching the stopping criterion.
Even though we used our Python implementation, we benefited from the existence
of the R implementation. We were able to compare the results from the two
implementations to gain confidence in the correctness of our implementation.
It also gave us a reference point for the performance of the algorithm.
It took a little more work to transform our existing data into a form that
could be consumed by the algorithm. We store results in a MongoDB database,
with all the results for a given task build stored together. We unbundled the
data so we could easily generate a vector of results for each individual
combination of (system variant, task, test, test configurations). We then
saved the computed change points in the database in a separate collection.
Each change point has the same identifying fields as the results data, plus
data from the algorithm itself, such as the $\widehat{Q}$ and the order of the
change point (was it the first or $N^{th}$ change point for the series when
recursively identifying change points). After identifying the change points
for a series, each series is partitioned into clusters in which the results
are unchanging. We call those clusters “stable regions”. We calculate common
statistics for each stable region, and include that in the entry for each
change point. Statistics include the min, max, median, mean, variance for the
given measure, as well as how many data points are in the stable region.
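For illustration, a stored change point document might look like the following; the field names and values are hypothetical, not our exact schema.

```python
# A hypothetical change point document (all field names are illustrative).
change_point = {
    "variant": "linux-standalone",
    "task": "industry_benchmarks",
    "test": "ycsb_load",
    "test_config": {"thread_level": 8},
    "git_hash": "9f2b...",      # suspect revision
    "q_hat": 912.4,             # statistic from the algorithm
    "order": 1,                 # first change point found for this series
    "stable_region": {          # statistics of the preceding stable region
        "n": 42,
        "min": 88.1, "max": 93.7, "median": 91.0,
        "mean": 90.9, "variance": 1.8,
    },
}
```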
The algorithm is run at the completion of every set of performance tests. The
result data is uploaded and transformed before the algorithm is run. We
replace any existing change points with the newly generated change points.
Rerunning the analysis allows the change points to be updated as more data
becomes available. The data is statistical in nature, so having more data
allows better analysis. As such, it is expected that on occasion a previously
computed change point will “move” to a new commit or even completely go away
as more data becomes available. Additionally, the change point algorithm will
never find the newest result to be a change point. If a regression was
introduced, the algorithm will only detect a change point after several more
test results are collected. A three to five day delay is common for our tests.
This contrasts with processing results for correctness tests in which the
pass/fail result is determined at run time and will not change at a later
time.
### 3.1. Display
We then displayed the data in two distinct ways. First, we updated the trend
graphs discussed in Section 1.1. We added annotations to show the change
points as well as any Jira tickets that match the test and revision. Figures 5
and 6 show two trend graphs annotated with both change points and Jira
tickets. The green diamonds are the Jira tickets, and the highlighted segments
are change points. You can clearly see two tickets matching two green
highlighted sections in Figure 5, while there is a small drop with change
point and ticket highlighted in Figure 6. That figure is particularly
interesting as the change in mean is on the same scale as the level of the
noise before and after the change, and a version of that graph without
annotations appeared earlier in Figure 4. As we will discuss in Section 3.2,
it is a goal of our process that each change point is matched to a Jira ticket
or hidden from view if it is clear noise.
Figure 5. Trend graph for the index build background test. Notice two change points (green highlights) and two Jira tickets (diamonds).
Figure 6. Trend graph for the wildcard index test with annotations for Jira tickets and change points. This figure can be compared to Figure 4, which does not have the annotations.
We added the necessary metadata fields to our Jira tickets so that they can be
mapped to their originating Evergreen project, task, test and git revisions.
We track both “First failing revision” and “Fix revision”. Additionally, we
needed to query this data every time we drew the displays. For performance
reasons, we set up a regular batch job to query Jira and load the data into our
database.
Note that we do not run the performance tests on every commit to the source
repository. That adds complexity to the visualization above. A change point is
identified for commits that were built and run. The algorithm cannot determine
which commit caused a performance change if the commits have not run. Instead,
we can identify the range of commits between the change point and the previous
test result as suspect commits. When displaying these results and annotations,
we must be able to match up change point, Jira ticket, and builds that have
actually run, even if they point to close but different commits.
Those annotations are great for examining how one test performs over time, and
identifying the location of significant changes. They are not useful, though, for
tracking all tests, triaging the most important regressions, and grouping
tests that change at the same time. For that, we built a new page for viewing
the change points. This new page shows all the change points on one page,
grouped by commit and is shown in Figures 7 and 8. Change points are initially
“unprocessed” and through triage become “processed”.
Figure 7. Triage view of unprocessed change points.
Figure 8. View of processed change points (our Best Buy tests are based on the open data set from Best Buy (bes, [n.d.])).
Figure 7 shows the list of software revisions (identified by their git hashes)
associated with “unprocessed” change points. All the change points for a
single git hash are grouped together. The display has the git hash, test name,
task name, date of the build, and the hazard level of the change. Hazard level
is computed as the log of the ratio of mean before to the mean after the
change point. The hazard level has proven extremely useful in focusing triage
on the major regressions and it has the property that two changes that cancel
out (e.g., $33\%$ drop followed by a $50\%$ rise) have the same magnitude. The
display has the ability to filter or sort on any of the fields. There are two
buttons along the top to acknowledge or hide the change points. Hiding or
acknowledging makes a copy of the change point with the addition of a field to
indicate that the change point has been processed. The process of
acknowledging or hiding a change point also removes it from the unprocessed
change point page. It will also be removed from this page if there is already
a Jira ticket for the same test at that commit. The list is short in the
figure because the remaining change points had already been “processed”, with
only two less interesting change points remaining.
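The hazard level computation itself is a one-liner; the snippet below (a sketch) also checks the symmetry property mentioned above.

```python
import math

def hazard_level(mean_before, mean_after):
    """Log of the ratio of the mean before to the mean after a change point."""
    return math.log(mean_before / mean_after)

# A 33% drop and the 50% rise that undoes it have equal magnitude:
# hazard_level(100.0, 66.7) ~= +0.405 and hazard_level(66.7, 100.0) ~= -0.405
```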
Once a commit is acknowledged or hidden, it shows up on the processed tab as
shown in Figure 8. There we have shown one change point with a significant
improvement in performance. We will isolate that change to a single commit,
and open a Jira ticket to document it. Once that ticket is opened, the change
point will also be removed from this view.
### 3.2. Triage
We have a rotating role of a dedicated person to check for performance
changes. We call this person the “Build Baron”. The Build Baron uses all the
views of the change points shown in Figures 7 and 8 along with the trend
graphs as shown in Figures 5 and 6. Ideally, the Build Baron reviews every
change point, deciding if it is a real change worthy of investigation, if it
is insignificant, or if it is just noise.
The Build Baron starts with the list of the unprocessed change points as shown
in Figure 7. By default we have the list set to filter out some things (lower priority tests, canary tests, or results that are merely informational). Canary tests are run to detect changes to the test system itself; changes in performance for canary tests indicate a problem with the test itself (see (Ingo and Daly, 2019) for more on our use of canary tests). The Build Baron
goes down this list investigating each git hash in the list. Each git hash can
have multiple change points (multiple tests, variants, etc.). The test column
includes links to the task page with the matching trend graph for the Build
Baron to decide if the change point is worth further investigation.
They may determine from inspection that there is a clear change. If so, they
will try to isolate the offending commit. Recall that we do not run every
commit, so once a performance change is detected, we need to run the tests on
the missing commits to determine the precise commit that introduced the
performance change. The Build Baron will schedule the builds appropriately,
and “acknowledge” the change point by pressing the “acknowledge” button on the
change point page. The Build Baron will need to wait for the tests to finish.
When the tests finish, the system will rerun the change point detection
algorithm in order to update the change point to the correct commit. If the
change point represents a new issue, the Build Baron will open a Jira ticket
for the change and assign it to the appropriate team to investigate. If there
is an existing Jira ticket (e.g., this is the fix to a previous regression)
the Build Baron will update the existing Jira ticket. If the performance
change is a significant enough regression, we may also revert the commit.
Not all change points lead to Jira tickets. Occasionally we will have spurious
drops in performance, with the performance changing for one build and then
returning to previous performance. Drops such as these may go away if we rerun
the task. If the performance change is large enough it can lead to a change
point being created. We start by checking these against our canary workloads.
As part of previous noise reduction work we have a number of canary workloads.
The canary workloads tell us nothing about the software we are testing, but a
lot about the test system. The results should be constant. If we detect a
statistically significant change for a canary or otherwise suspect a spurious
drop, we treat all the results from that test run as suspect and rerun the
test.
Alternatively, the change point may be due to noise. The algorithm is very
good at accounting for IID noise, but not for correlated noise. Unfortunately,
we have some correlated noise in our system – some from time-varying behavior on our cloud providers, others due to things as finicky as code alignment issues in compilation and their impact on processor caches. Either case usually
can be confirmed visually, and the Build Baron will “Hide” the change point by
using the Hide button. That will remove the change point from the change point
page and change the color of the change point on the trend graph to blue. If
we have a sustained change in performance due to cloud changes, we open a Jira
ticket for that in order to document it. That way, anyone looking at the trend
graphs will be able to see why the performance changed.
All change points that are processed (hidden or acknowledged) and all change
points that match a Jira ticket are removed from the change point list. The
list is meant for triage and those points have already been triaged. We have
the history of the points however and can use it to compile effectiveness data
about our system.
Since we group change points by git hash in the display, we may have the case
in which we want to acknowledge some of the change points in the group and
hide others. This is fully supported.
## 4\. Optimizations
The computational complexity of a naive implementation of E-Divisive means is
$\mathcal{O}(\kappa T^{3})$, where $\kappa$ is the number of change points and
$T$ the number of observations in the series. Matteson and James (Matteson and
James, 2014) point out that it is possible to compute the series $\widehat{Q}$
in $\mathcal{O}(T^{2})$ (per change point) as each value in the series can be
derived from the previous one in linear time. We implemented this optimization
in our POC.
The major generations of the implementation were:
* •
Naive $\mathcal{O}(\kappa T^{3})$ implementation
* •
$\mathcal{O}(\kappa T^{2})$ implementation (row/column differences)
* •
Use NumPy arrays
* •
Use native C implementation (Python module)
We also tested compiling the first 2 versions with Cython, but this yielded
only about $10\%$ improvements and was discarded as an option.
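To illustrate the row/column-difference idea, the sketch below computes $\widehat{Q}$ at every candidate split in $\mathcal{O}(T^{2})$ total by updating the three pairwise-distance sums in $\mathcal{O}(T)$ as the split point moves, rather than recomputing them from scratch. It is a simplified NumPy illustration (with $\mathcal{O}(T^{2})$ memory for the distance matrix), not the native module.

```python
import numpy as np

def qhat_series(series, alpha=1.0, min_size=3):
    """Q-hat at every candidate split of a 1-d series, O(T^2) overall.

    s_xx, s_yy, and s_xy are the pairwise sums within X = x[:n], within
    Y = x[n:], and across the split; each loop step moves one element
    from Y into X and patches the sums using one row of the matrix.
    """
    x = np.asarray(series, dtype=float)
    t = len(x)
    d = np.abs(x[:, None] - x[None, :]) ** alpha  # |x_i - x_j|^alpha

    qhat = np.zeros(t)
    s_xx = 0.0
    s_yy = np.triu(d, k=1).sum()
    s_xy = 0.0
    for n in range(1, t):           # after this step, X = x[:n], Y = x[n:]
        row = d[n - 1]
        s_xx += row[: n - 1].sum()  # new within-X pairs (i, n-1)
        s_yy -= row[n:].sum()       # pairs (n-1, k) no longer within Y
        s_xy += row[n:].sum() - row[: n - 1].sum()
        m = t - n
        if n < min_size or m < min_size:
            continue
        e_hat = (2.0 * s_xy / (m * n)
                 - s_xx / (n * (n - 1) / 2.0)
                 - s_yy / (m * (m - 1) / 2.0))
        qhat[n] = (m * n / (m + n)) * e_hat
    return qhat  # the argmax is the candidate change point for this cluster
```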
With a test data series of 173 values, using an Intel(R) Core(TM) i7-6560U CPU
@ 2.20GHz, single threaded executions yielded the following results:
Implementation | Time (s) | Ratio
---|---|---
Naive | 0.622551 | 6103.44
Row/column differences | 0.011567 | 113.40
NumPy | 0.001282 | 12.57
Native | 0.000102 | 1.00
At this modest problem size, the native C implementation is over 6,000x faster
than our original naive implementation, 113x faster than a non-naive
implementation, and 13x faster than our best Python only implementation.
At the end of a CI task there are 20-60 new results. The historical time
series for those is fetched and the algorithm is used to re-compute change
points. The computation is easy to parallelize as there are multiple series to
be processed. Using the native C implementation and parallelizing the
computation, we are able to re-compute all the change points in under 10
seconds. The majority of that time is spent waiting for network communication
to the database, with the E-Divisive mean computation taking only
milliseconds. The computation performance is a combination of the performance
enhancements discussed above, and a conscious decision to limit our analysis
to results from N recent months, by using up to 500 data points (this is approximate: if more than 500 data points are available, we will select data points going back to an already computed change point beyond that range, so the computation can use more than 500 points). We do this since we are
interested in catching regressions in new commits and new releases. Due to
this, our time series typically have 100 to 500 values and 1 to 10 change
points, and the computation time is minimal.
In our original project plan we were concerned about the $\mathcal{O}(\kappa
T^{3})$ complexity and had envisioned running change point detection in a
separate asynchronous job so as to not impact test execution time. Due to the
above optimizations we are able to run the change point detection immediately
after each test execution in a simpler process. When the Build Baron is
running additional tests to isolate the exact commit that changed performance,
having the change point detection immediately refreshed reduces the time to
isolate the change.
Even though the computation time is minimal in production, we still care about
the execution time. Occasionally we need (or want) to recompute all the change
points in our system (e.g., we updated something related to the computation or
we fixed a bug). Based on the current performance of the algorithm, we believe
we could recompute all the change points across our system in under an hour.
Previously we had a need to regenerate change points for a section of the data
and we tried to regenerate the points over the weekend. The job was still
running Monday morning when we checked on it. When we tried to regenerate the
data more recently for the same subset, it took approximately 30 minutes.
Finally, we may want to extend the length of history we use in the computation
in the future, for example if we add a project that runs much more frequently.
## 5\. Impact
The point of this work, as discussed in Section 2, was to more efficiently
detect which commits change the performance of our software. It has succeeded.
Triaging performance changes has gone from being a full time job that one
person could not completely keep up with, to one that the Build Baron can
process all of the change points and have time left over. There is a morale
improvement, as the Build Baron is working on real performance changes that
make a difference to the company. Additionally, we are able to detect smaller
changes in performance than we could previously.
We are also tracking performance improvements now in addition to performance
regressions. This has had a few impacts:
* •
We have caught correctness bugs (e.g., an important check suddenly being skipped).
* •
We have been able to provide positive feedback to developers for making things
faster. Someone notices the impact of their work now.
* •
We can share improvements with marketing (Walker-Morgan, 2019).
While we clearly and strongly know that we have qualitatively improved our
system, we have also quantitatively improved it. We have two sets of data:
First, we performed a proof of concept (POC) before fully implementing the
system. During the POC we looked at false negative and false positive rates,
as well as general feasibility related issues. Since then we have implemented
the system in full, including collecting the labeling data and the information
in Jira tickets. The combination of that data allows us also to measure the
effectiveness of the system in production.
### 5.1. Learnings from the POC
For the POC we implemented the calculation of change points as a batch job. We
then compared the change points to the results from our existing system and in Jira tickets. We focused on a 5 month period of results. At that time our automated system used two levels of Jira tickets. The first level was
automatically generated by the analysis code. The Build Baron then would go
through that list. The Build Baron would move representative failures to the
second Jira project, and link the rest. For the 5 month period we had 2393
tickets in the first project, and 160 in the second. Of the 160 tickets in the
second project, 24 of them were “useful” (we considered a ticket useful if it reflected a change in performance of the software). So, on
average 100 tickets in the first project reduced down to 1 useful ticket in
the second project, with a human going through all of those. In other words,
the old system generated a huge number of Jira tickets, of which the vast
majority were false positives, while also suffering from false negatives.
The change point algorithm found all the regressions we were tracking in Jira
tickets. The change point algorithm may have missed performance changes, but
if it did those were performance changes we were already missing. From a false
negative rate, the new system was strictly better than the old system.
We also looked at the change points that did not match existing tickets. We
started by sorting on the $\widehat{Q}$ value. The first three that we
investigated led to two new Jira tickets. The first element was one ticket,
while the second and third items shared a ticket. The second item corresponded
to a drop in performance, and the third item was a later fix for the
earlier drop.
Since we did not have complete reference data for all the performance data, we
instead sampled the change points to estimate a false positive rate. We
considered only the change points that did not match existing Jira tickets, as
those are the potential false positives. We looked at 10 of those change
points. Of those:
* •
2 were clear changes in performance we had not caught previously.
* •
4 were clearly noise.
* •
4 were not clear cut. They included
* –
One clear change in distribution (the test got noisier)
* –
Two cases in which the distribution changed slightly. We think this was likely
correlated noise in our system.
* –
One large outlier value got the system to create a change point.
Of the cases that were noise, we used the order of the change point statistic to search for more significant regressions. In all cases there was a point at which change points went from noise or possible regressions to clear cut changes.
Depending on how you count the four “not clear cut” cases, we had a false
positive rate between 40% and 80%, with the possibility to tune the system.
Given the previous system required looking at 100 things for each useful one,
this was a huge improvement and we knew we wanted to move forward.
### 5.2. Learnings from Production
After the POC we put the change point detection algorithm into production and
have been collecting data. Looking at a 3 month period from summer of 2019 for
our primary performance project we have:
* •
12,321 distinct change points
* •
Corresponding to 178 unique git hash values
* •
126 of those 178 correspond to a Jira ticket
* •
122 of those 178 were acknowledged or hidden
* •
There were 79 unique Jira tickets for those change points
Therefore, the Build Baron had 178 things to look at during this period, and
more than 2/3 of all of them were issues we felt were worth tracking and
possibly working on. We went from 100 notifications leading to 1 tracked
issue, to 9 notifications leading to 6 worth follow-up and 4 tracked issues.
Given that we want the process to be slightly biased toward false positives,
this result is close to optimal.
The process still is not perfect though. We also have:
* •
Acknowledged change points that do not match a raw change point
* •
Acknowledged change points without a matching Jira ticket
* •
Hidden change points associated with Jira tickets
The two cases of acknowledged change points not matching a Jira ticket or a
raw change point appear to come from a few cases related to triaging:
aggressive triaging and mistaken triaging. The change points are grouped by
git hash. It is easy to check some of the change points for the git hash and
observe a distinct performance change, and aggressively acknowledge all the
points. Alternatively, sometimes someone hits the wrong button and
acknowledges when they probably should have hidden a point. We observed both situations when inspecting these points. We are adding more
auditing of how we label change points.
Hidden change points that do not match a raw change point mainly come from marginal change points that were aggressively triaged and that the change point algorithm is able to exclude once it has additional data.
## 6\. Open Problems
This work is part of a larger effort to understand the performance of our
software, and to know when the fundamental performance changes. This builds on
work to
* •
Build tools for making performance workloads
* •
Automate running of performance tests
* •
Include external benchmarks in our system
* •
Reduce the noise in our test bed
There are a number of areas we continue to work on in this space:
* •
Better workload generation tools
* •
More flexible result types
* •
Better signaling of performance changes
* •
Determine if a “patch” has changed performance
* •
Identify, exclude, and rerun test runs in which “canaries” have failed
* •
Use the data from processing change points to make the algorithm more
effective
Better workload generation tooling is about making it easier for our engineers to craft interesting workloads, while more flexible result types deal with handling latencies, percentiles, throughputs, etc.
The next three items show the limits of the change point detection algorithm.
E-Divisive means is great at detecting changes in distribution, but it needs
multiple data points to compute that distribution. It also has a bias towards
the center of the longest clusters. As such, it cannot tell you anything about
the most recent test run, and it will not detect two distinct changes if they
are too close together, such as in Figure 9. In Figure 9 a performance
impacting code change went in and was reverted soon afterwards. There are not
enough commits for the algorithm to detect that two distinct changes happened.
Figure 9. Example of a drop in performance followed quickly by a fix. The
regression and fix are too close together for the E-Divisive means algorithm
to identify both changes.
The bias towards the center of clusters is a result of the coefficient
$mn/(m+n)$ in Equation 6. The coefficient was an integral assumption in the
consistency proof in (Matteson and James, 2014). Since $m+n=T$, where $T$ is
the number of observations in the series, we get $mn/(m+n)=m(T-m)/T=m-m^{2}/T$.
The first and last values of this coefficient are therefore $1-1/T$ when
either $m$ or $n$ equals $1$. All other values of this coefficient are greater
than one (e.g. $2-4/T$ for $m=2$) and the coefficient achieves the maximum
value of $T/4$ when $m=n=T/2$. Even though it might invalidate the mathematical proof behind E-Divisive, in practice it is tempting to explore whether these limitations could be removed.
The algorithm has a parameter for the minimum cluster size of observations: N.
The tests used later in (Matteson and James, 2014) used minimum cluster sizes
N=30 and N=10, while we have operated with N=3 to have faster notification.
With N=3 we require at least 3 data points after the change (including the
change) to detect the change. The combined effect of these limitations is that
in our primary performance project we typically get alerts of a new regression
5-6 days after it was committed.
Ultimately change point detection is not suitable for determining whether a
single new test result is a regression or not. We can use outlier detection to
answer that question. The change point detection algorithm can identify a
stable region, and then we can use outlier detection to determine if the patch
build is statistically the same as the stable region or not. This will allow
us to react early to very clear regressions, while change point detection can
catch smaller regressions after a delay. We also have a use case in which
developers can test (patch test) their code changes (patches) before pushing
them to the source repository. We can use the same outlier detection to detect
performance changes in patch tests. Finally, with our canary tests the goal is
reversed: Rather than ignoring outliers, we want to be alerted about outliers
so that we can rerun the tests or quarantine the results.
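One simple realization of that check is a z-score of the new result against the stable region statistics the system already stores; the sketch below uses an illustrative threshold.

```python
def is_outlier(value, region_mean, region_variance, z_threshold=3.0):
    """Flag a single new result that is statistically far from its stable
    region (a plain z-score sketch; the threshold is illustrative)."""
    sd = max(region_variance ** 0.5, 1e-12)  # guard against zero variance
    return abs(value - region_mean) / sd > z_threshold
```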
As mentioned in Section 3.2, we run a number of canary tests to detect changes
to the test system itself. If we have a sudden change in the canary workloads
we would like to flag the results and possibly rerun the tests. The detection
of performance changes in the canary workloads, flagging the results, and
rerunning suspect executions are all amenable to automation.
Finally, we are collecting a large corpus of training data related to the
change point algorithm, through the acknowledged and hidden change points.
This will allow us to compare the existing algorithm to other change point
detection algorithms. Additionally, we expect we can use this data to either
manually adjust the algorithm, or train more advanced machine learning
algorithms.
## 7\. Related Work
Previous work has focused on reducing the noise in our test environment (Ingo
and Daly, 2019). Our test environment is built on the Evergreen (Erf, 2016) CI
system. There are many other continuous integration systems, such as Jenkins
(Smart, 2011), BuildBot (Ablett et al., 2007), Travis CI (Tra, [n.d.]), Gitlab
CI (Git, [n.d.]), and Circle CI (Cir, [n.d.]). The other CI systems have
various levels of support for graphing performance results. For instance, Jenkins has a
performance plugin (Jen, [n.d.]) to support running common benchmarking tools,
reporting results, and plotting trend graphs.
The Chromium project has implemented its own CI system called LUCI (luc,
[n.d.]) and implemented regression detection (Chr, [n.d.]) within it. Each
test requires setting explicit threshold values for alerting a regression. As
such, it is very similar in concept to our previous analysis system.
Google has a patent filing (Vallone, 2018) for an algorithm called “Window
deviation analyzer” (WDA). It describes identifying windows of data in a time
series, computing median and mean for each window, and estimating if the most
recent sample differs more than some threshold from those values. Like our
system, Google’s WDA will fail a test some number of commits after the commit
that introduced a regression. Unlike our system, WDA requires an explicit
threshold and it does not identify a specific offending commit (it
identifies a window of commits within which a regression has occurred).
For algorithms for dealing with noise in time series data, we found the
NIST/Sematech e-handbook on Statistical Methods (NIST/SEMATECH, 2012) a good
overview, particularly chapter 6 “Process and Product Monitoring and Control”.
We looked into using techniques such as Moving Averages or Box Jenkins (Box et
al., 2015) for the time series data. That would involve building a model of
the process and detecting when samples diverge from the existing model.
Additionally, see (Aminikhanghahi and Cook, 2017) for an extensive survey on
change point detection techniques in time series data, covering supervised
methods such as multi-class, binary, and virtual classifiers, and unsupervised
methods such as likelihood ratio, subspace models, and probabilistic models.
There are frequentist/Bayesian and parametric/non-parametric algorithms for both the supervised and unsupervised classes.
## 8\. Conclusion
In this paper we describe how we detect performance changes in our software
using our continuous integration system and change point detection. We run a
large collection of tests periodically as changes to our software product are
committed to our source repository. We would like to know when the performance
changes for those tests. Originally we used humans looking at graphs, which was later replaced with a threshold-based automatic detection system, but
neither system was sufficient for finding changes in performance in a timely
manner. This paper describes our move to using change point detection
(specifically E-Divisive means) to detect and notify when the performance
changes.
In addition to implementing the change point algorithm, we also had to
integrate it into our existing performance testing system and visualize the
results so that engineers could triage and use the information. We describe
what was required to get change point detection working in our system and how
we triage and process the results. We also describe our efforts to speed up
the algorithm so that we could use it synchronously within our workflow.
The net impact of this work was large for us: Quantitatively, it dramatically
dropped our false positive rate for performance changes, while qualitatively
it made the entire process easier, more productive (e.g., catching smaller
regressions), and more timely.
There is more work to be done to continue to improve our ability to detect
when the performance of our software changes. We discuss a number of open
problems for performance testing in general and for using change point
detection in particular.
## References
* (1)
* bes ([n.d.]) [n.d.]. Best Buy APIs open data set. https://bestbuyapis.github.io/api-documentation/#overview
* Chr ([n.d.]) [n.d.]. (Chromium) Regression Detection for Performance Tests. wiki. https://www.chromium.org/chromium-os/testing/perf-regression-detection
* Cir ([n.d.]) [n.d.]. Circle CI. https://circleci.com/docs/
* Eve ([n.d.]) [n.d.]. Evergreen Continuous Integration. https://github.com/evergreen-ci/evergreen/wiki
* Git ([n.d.]) [n.d.]. Gitlab CI/CD. https://docs.gitlab.com/ee/ci/
* Jen ([n.d.]) [n.d.]. Jenkins Performance Plugin. wiki. https://wiki.jenkins.io/display/JENKINS/Performance+Plugin
* luc ([n.d.]) [n.d.]. (LUCI) A Tour of Continuous Integration UI. https://chromium.googlesource.com/chromium/src.git/+/master/docs/tour_of_luci_ui.md
* Tra ([n.d.]) [n.d.]. Travis CI. https://docs.travis-ci.com
* Ablett et al. (2007) Ruth Ablett, Ehud Sharlin, Frank Maurer, Jorg Denzinger, and Craig Schock. 2007. Buildbot: Robotic monitoring of agile software development teams. In _RO-MAN 2007-The 16th IEEE International Symposium on Robot and Human Interactive Communication_. IEEE, 931–936.
* Aminikhanghahi and Cook (2017) Samaneh Aminikhanghahi and Diane J Cook. 2017. A survey of methods for time series change point detection. _Knowledge and information systems_ 51, 2 (2017), 339–367.
* Box et al. (2015) George EP Box, Gwilym M Jenkins, Gregory C Reinsel, and Greta M Ljung. 2015. _Time series analysis: forecasting and control_. John Wiley & Sons.
* Erf (2016) Kyle Erf. 2016. Evergreen Continuous Integration: Why We Reinvented The Wheel. Blog Post. https://engineering.mongodb.com/post/evergreen-continuous-integration-why-we-reinvented-the-wheel
* Ingo and Daly (2019) Henrik Ingo and David Daly. 2019. Reducing variability in performance tests on EC2: Setup and Key Results. Blog Post. https://engineering.mongodb.com/post/reducing-variability-in-performance-tests-on-ec2-setup-and-key-results
* James and Matteson (2015) Nicholas James and David Matteson. 2015. ecp: An R Package for Nonparametric Multiple Change Point Analysis of Multivariate Data. _Journal of Statistical Software, Articles_ 62, 7 (2015), 1–25. https://doi.org/10.18637/jss.v062.i07
* Matteson and James (2014) David S. Matteson and Nicholas A. James. 2014. A Nonparametric Approach for Multiple Change Point Analysis of Multivariate Data. _J. Amer. Statist. Assoc._ 109, 505 (2014), 334–345. https://doi.org/10.1080/01621459.2013.849605 arXiv:https://doi.org/10.1080/01621459.2013.849605
* NIST/SEMATECH (2012) NIST/SEMATECH. 2012\. e-Handbook of Statistical Methods. http://www.itl.nist.gov/div898/handbook/
* Smart (2011) John Ferguson Smart. 2011\. _Jenkins: The Definitive Guide: Continuous Integration for the Masses_. O’Reilly Media, Inc.
* Vallone (2018) Anthony Vallone. 2018\. Window Deviation Analyzer. Patent Publication 2018/0150373 A1, Filed Nov. 28, 2016, Published May 31, 2018.
* Walker-Morgan (2019) DJ Walker-Morgan. 2019\. MongoDB 4.2 Performance Boosts. Blog. https://www.mongodb.com/blog/post/mongodb-42-performance-boosts
|
2024-09-04T02:54:56.141510 | 2020-03-02T04:32:38 | 2003.00653 | {
"authors": "Wei Jin, Yaxin Li, Han Xu, Yiqi Wang, Shuiwang Ji, Charu Aggarwal and\n Jiliang Tang",
"full_text_license": null,
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"provenance": "arxiv-papers-0000.json.gz:25977",
"submitter": "Wei Jin",
"url": "https://arxiv.org/abs/2003.00653"
} | arxiv-papers | # Adversarial Attacks and Defenses on Graphs:
A Review, A Tool and Empirical Studies
Wei Jin† Yaxin Li† Han Xu† Yiqi Wang† Shuiwang Ji‡ Charu Aggarwal§ Jiliang
Tang†
†Data Science and Engineering Lab, Michigan State University
‡Texas A&M University
§IBM T. J. Watson Research Center
<EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS>
###### Abstract
Deep neural networks (DNNs) have achieved significant performance in various
tasks. However, recent studies have shown that DNNs can be easily fooled by
small perturbation on the input, called adversarial attacks. As the extensions
of DNNs to graphs, Graph Neural Networks (GNNs) have been demonstrated to
inherit this vulnerability. Adversary can mislead GNNs to give wrong
predictions by modifying the graph structure such as manipulating a few edges.
This vulnerability has raised tremendous concerns about adopting GNNs in safety-
critical applications and has attracted increasing research attention in
recent years. Thus, it is necessary and timely to provide a comprehensive
overview of existing graph adversarial attacks and the countermeasures. In
this survey, we categorize existing attacks and defenses, and review the
corresponding state-of-the-art methods. Furthermore, we have developed a
repository with representative algorithms (https://github.com/DSE-MSU/DeepRobust). The repository enables us to conduct empirical studies to deepen our understanding of attacks and defenses on graphs.
## 1 Introduction
Graphs can be used as the representation of a large number of systems across
various areas such as social science (social networks), natural science
(physical systems, and protein-protein interaction networks), and knowledge
graphs [2, 26]. With their prevalence, it is of great significance to learn
effective representations for graphs that can facilitate various downstream
tasks such as node classification, graph classification, link prediction and
recommendation [2, 26]. Graph Neural Networks (GNNs), which generalize
traditional deep neural networks (DNNs) to graphs, pave one effective way to
learn representations for graphs [26, 73, 3, 84]. The power of GNNs relies on
their capability of capturing the graph structure simultaneously with the node
features. Instead of only considering the instances (nodes with their
features) independently, GNNs also leverage the relationships between them.
Specifically, GNNs follow a message-passing scheme [24], where the nodes
aggregate and transform the information from their neighbors in each layer. By
stacking multiple GNN layers, information can be propagated further through
the graph structure and we can embed nodes into low-dimensional
representations. The obtained node representations can then be fed into any
differentiable prediction layer so that the whole model can be trained in an
end-to-end fashion. Due to their strong representation learning capability,
GNNs have gained practical significance in various applications ranging from
data mining [34, 63], natural language processing [48], and computer vision
[37] to healthcare and biology [45].
As new generalizations of traditional DNNs to graphs, GNNs inherit both
advantages and disadvantages of traditional DNNs. Similar to traditional DNNs,
GNNs are also powerful in learning representations of graphs and have
permeated numerous areas of science and technology. However, traditional DNNs are
easily fooled by adversarial attacks [25, 75, 40, 39]. In other words, the
adversary can insert slight perturbation during either the training or test
phases, and the DNN models will totally fail. It is evident that GNNs also
inherit this drawback [89, 17, 91]. The attacker can generate graph
adversarial perturbations by manipulating the graph structure or node features
to fool the GNN models. As illustrated in Figure 1, originally node $7$ was
classified by the GNN model as a green node; after node $7$ creates a new
connection with node $3$ and modifies its own features, the GNN model
misclassifies it as a blue node. Such vulnerability of GNNs has raised tremendous concerns about applying them in safety-critical applications such as
financial system and risk management. For example, in a credit scoring system,
fraudsters can fake connections with several high-credit customers to evade
the fraud detection models; and spammers can easily create fake followers to
increase the chance of fake news being recommended and spread. Therefore,
there is an urgent need to investigate graph adversarial attacks and their
countermeasures.
Figure 1: An example of adversarial attack on graph data. The goal of the GNN
is to predict the color of the nodes. Here node 7 is the target node. The attacker aims to change the GNN's prediction on node 7 by modifying the edges and
features.
Advancing this research has great potential to facilitate the successful adoption of GNNs in a broader range of fields, which has encouraged increasing attention to graph adversarial attacks and defenses in recent years. Thus, it
is necessary and timely to provide a comprehensive and systematic overview on
existing algorithms. Meanwhile, it is of great importance to deepen our understanding of graph adversarial attacks via empirical studies. Such understanding can not only provide knowledge about the behaviors of attacks but also offer insights for designing defense strategies. These motivate
this survey with the following key purposes:
* •
We categorize existing attack methods from various perspectives in Section 3
and review representative algorithms in Section 4.
* •
We classify existing countermeasures according to their defense strategies and
give a review on representative algorithms for each category in Section 5.
* •
We perform empirical studies based on the repository we developed that provide a comprehensive understanding of graph attacks and defenses in Section 6.
* •
We discuss some promising future research directions in Section 7.
## 2 Preliminaries and Definitions
Before presenting the review and empirical studies, we first introduce
concepts, notations and definitions in this section.
### 2.1 Learning on Graph Data
In this survey, we use $G=(V,E)$ to denote the structure of a graph where
$V=\\{v_{1},\dots,v_{N}\\}$ is the set of $N$ nodes and
$E=\\{e_{1},\dots,e_{K}\\}$ is the edge set. We use matrix ${\bf
A}\in\\{0,1\\}^{{N}\times{N}}$ to denote the adjacency matrix of $G$, where
each entry ${\bf A}_{ij}=1$ means nodes $v_{i}$ and $v_{j}$ are connected in
$G$. Furthermore, we use ${\bf X}\in\mathbb{R}^{{N}\times{D}}$ to denote the
node attribute matrix where $D$ is the dimension of the node feature vectors.
Thus, graph data can be denoted as $G=({\bf A},{\bf X})$. There are a lot of
learning tasks on graphs and in this work, we focus on the classification
problems on graphs. Furthermore, we use $f_{\theta}$ with parameters $\theta$
to denote the learning models in this survey.
Node-Level Classification. For node-level classification, each node in the
graph $G$ belongs to a class in the label set $Y$. The graph model aims to
learn a neural network, based on labeled nodes (training nodes), denoted as
$V_{L}$, to predict the class of unlabeled nodes (test nodes). The training
objective function can be formulated as:
$\min_{\theta}~{}\mathcal{L}_{\text{train}}(f_{\theta}(G))=\sum_{v_{i}\in{V}_{L}}\ell\left(f_{\theta}({\bf
X},{\bf A})_{i},y_{i}\right),$ (1)
where $f_{\theta}({\bf X},{\bf A})_{i}$ and $y_{i}$ are the predicted and the
true label of node $v_{i}$ and $\ell(\cdot,\cdot)$ is a loss function such as
cross entropy.
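As a sketch of this objective (PyTorch here; the `model` interface, mapping $({\bf X},{\bf A})$ to per-node logits, is an assumption for illustration):

```python
import torch.nn.functional as F

def node_classification_loss(model, x, adj, labels, train_idx):
    """Eq. (1): cross-entropy summed over the labeled (training) nodes only.

    `model(x, adj)` is assumed to return an [N, num_classes] logit matrix;
    `train_idx` indexes the labeled node set V_L.
    """
    logits = model(x, adj)
    return F.cross_entropy(logits[train_idx], labels[train_idx],
                           reduction="sum")
```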
Graph-Level Classification. For graph-level classification, each individual
graph has a class in the label set $Y$. We use $\mathcal{G}$ to denote a set
of graphs, and $\mathcal{G}_{L}$ is the labeled set (training set) of
$\mathcal{G}$. The goal of graph-level classification is to learn a mapping
function $f_{\theta}:\mathcal{G}\rightarrow{Y}$ to predict the labels of
unlabeled graphs. Similar to node-level classification, the objective function
can be formulated as
$\min_{\theta}~{}\mathcal{L}_{\text{train}}(\mathcal{G})=\sum_{G_{i}\in\mathcal{G}_{L}}\ell\left(f_{\theta}(G_{i}),y_{i}\right),$
(2)
where $G_{i}$ is the labeled graph with ground truth $y_{i}$ and
$f_{\theta}(G_{i})$ is the prediction of the graph $G_{i}$.
### 2.2 A General Form of Graph Adversarial Attack
Based on the objectives in Section 2.1, we can define a general form of the
objective for adversarial attacks, which aims to misguide the victim model by
minimizing some attack loss. Thus, the problem of node-level graph adversarial
attacks can be stated as:
Given $G=({\bf A},{\bf X})$ and a victim node subset $V_{t}\subseteq{V}$, let $y_{u}$ denote the class for node $u$ (predicted or using ground truth). The goal of the attacker is to find a perturbed graph $\hat{G}=(\hat{\bf A},\hat{\bf X})$ that minimizes the following attack objective
$\mathcal{L}_{\text{atk}}$,
$\displaystyle{\min~{}\mathcal{L}_{\text{atk}}(f_{\theta}({\hat{G}}))=\sum_{u\in
V_{t}}\ell_{\text{atk}}\left(f_{\theta^{*}}(\hat{G})_{u},y_{u}\right)}$ (3)
$\displaystyle
s.t.,\quad{\theta^{*}=\arg\min_{\theta}\mathcal{L}_{\text{train}}\left(f_{\theta}\left({G^{\prime}}\right)\right)},$
where $\ell_{\text{atk}}$ is a loss function for attacking and one option is
to choose $\ell_{\text{atk}}=-\ell$ and $G^{\prime}$ can either be $G$ or
$\hat{G}$. Note that $\hat{G}$ is chosen from a constrained domain $\Phi(G)$.
Given a fixed perturbation budget $\Delta$, a typical $\Phi(G)$ can be
implemented as,
$\|\hat{\bf A}-{\bf A}\|_{0}+\|\hat{\bf X}-{\bf X}\|_{0}\leq\Delta.$ (4)
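A direct reading of this budget constraint in code (a sketch assuming dense NumPy arrays):

```python
import numpy as np

def within_budget(adj, adj_hat, feat, feat_hat, delta):
    """Check Eq. (4): the number of modified adjacency entries plus the
    number of modified feature entries must not exceed the budget delta."""
    changed = (np.count_nonzero(adj_hat != adj)
               + np.count_nonzero(feat_hat != feat))
    return changed <= delta
```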
Table 1: Commonly used notations
Notations | Description | Notations | Description
---|---|---|---
$G$ | Graph | $u$ | Target node
$\hat{G}$ | Perturbed graph | $y_{u}$ | Label of node $u$
$V$ | The set of nodes | $f_{\theta}$ | Neural network model
$V_{L}$ | The set of labeled nodes | $\mathcal{L}$ | Loss function
$E$ | The set of edges | $l(\cdot,\cdot)$ | Pair-wise loss function
${\bf A}$ | Adjacency matrix | $\|\cdot\|_{0}$ | $\ell_{0}$ norm
$\hat{\bf A}$ | Perturbed adjacency matrix | $\Delta$ | Perturbation budget
${\bf X}$ | Node attribute matrix | ${\bf Z}$ | Predicted probability
$\hat{\bf X}$ | Perturbed node attribute matrix | ${\bf h}_{u}$ | Hidden representation of node $u$
$D$ | Dimension of node features | $e_{ij}$ | Edge between node $v_{i}$ and $v_{j}$
We omit the definition of graph-level adversarial attacks since (1) the graph-
level adversarial attacks can be defined similarly and (2) the majority of the
adversarial attacks and defenses focus on node-level. Though adversarial
attacks have been extensively studied in the image domain, we still need
dedicated efforts for graphs due to unique challenges – (1) graph structure is
discrete; (2) the nodes in the graph are not independent; and (3) it is
difficult to measure whether the perturbation on the graph is imperceptible or
not.
### 2.3 Notations
With the aforementioned definitions, we list all the notations which will be
used in the following sections in Table 1.
## 3 Taxonomy of Graph Adversarial Attacks
In this section, we briefly introduce the main taxonomy of adversarial attacks
on graph structured data. Attack algorithms can be categorized into different
types based on different goals, resources, knowledge and capacity of
attackers. We try to give a clear overview on the main components of graph
adversarial attacks.
### 3.1 Attacker’s Capacity
Adversarial attacks can happen in two phases, i.e., model training and model
testing, depending on the attacker’s capacity to insert adversarial
perturbations:
* •
Evasion Attack: Attacking happens after the GNN model is trained or in the
test phase. The model is fixed, and the attacker cannot change the model
parameter or structure. The attacker performs evasion attack when
$G^{\prime}=G$ in Eq. (3).
* •
Poisoning Attack: Attacking happens before the GNN model is trained. The
attacker can add “poisons” into the model training data, causing the trained
model to malfunction. It is the case when $G^{\prime}=\hat{G}$ in Eq. (3).
### 3.2 Perturbation Type
The attacker can insert adversarial perturbations in different ways. The
perturbations can be categorized as modifying node features, adding/deleting
edges, and injecting fake nodes. Attackers should also keep the perturbation
unnoticeable; otherwise it would be easily detected.
* •
Modifying Feature: Attackers can slightly change the node features while
maintaining the graph structure.
* •
Adding or Deleting Edges: Attackers can add or delete edges among the existing
nodes under a certain budget of total actions.
* •
Injecting Nodes: Attackers can insert fake nodes into the graph and link them
with some benign nodes in the graph.
### 3.3 Attacker’s Goal
According to the goals of attacks, we can divide the attacks for node-level
classification into the following two categories:
* •
Targeted Attack: There is a small set of test nodes. The attacker aims to let
the trained model misclassify these test samples. It is the case when
$V_{t}\subset{V}$ in Eq. (3). We can further divide targeted attacks into (1)
direct attack where the attacker directly modifies the features or edges of
the target nodes and (2) influencer attack where the attacker can only
manipulate other nodes to influence the targets.
* •
Untargeted Attack: The attacker aims to insert poisons to let the trained
model have bad overall performance on all test data. It is the case when
$V_{t}={V}$ in Eq. (3).
Note that for graph-level classification, there also exist targeted and
untargeted attacks. A targeted attack aims to induce the model to give a
specific label to a given graph sample, while an untargeted attack only aims
to make the model perform incorrectly.
### 3.4 Attacker’s Knowledge
Attacker’s knowledge means how much information an attacker knows about the
model that he aims to attack. Usually, there are three settings:
* •
White-box Attack: All information about the model parameters, training input
(e.g., adjacency matrix and attribute matrix) and labels is given to the
attacker.
* •
Gray-box Attack: The attacker only has limited knowledge about the victim
model. For example, the attacker cannot access the model parameters but can
access the training labels. It can then utilize the training data to train
surrogate models to estimate the information of the victim model.
* •
Black-box Attack: The attacker does not have access to the model’s parameters
or training labels. It can access the adjacency matrix and attribute matrix,
and do black-box query for output scores or labels.
### 3.5 Victim Models
In this part we are going to summarize the victim models that have been proven
to be susceptible to adversarial examples.
Graph Neural Networks. Graph neural networks are powerful tools for learning
representations of graphs [56, 73]. One of the most successful GNN variants is
Graph Convolutional Networks (GCN) [34]. GCN learns the representation for
each node by keeping aggregating and transforming the information from its
neighbor nodes. Though GNNs can achieve high performance in various tasks,
studies have demonstrated that GNNs including GCN are vulnerable to
adversarial attacks [89, 56]. Besides, it is evident from recent works [89,
87, 44] that other graph neural networks including column network (CLN) [52],
graph attention network (GAT) [63] and JKNet [77] have the same issue.
Other Graph Learning Algorithms. In addition to graph neural networks,
adversaries may attack other important graph algorithms, such as network
embeddings including LINE [59] and DeepWalk [51], graph-based semi-supervised
learning (G-SSL) [88], and knowledge graph embedding [8, 42].
Table 2: Categorization of representative attack methods

Attack Methods | Attack Knowledge | Targeted or Untargeted | Evasion or Poisoning | Perturbation Type | Application | Victim Model
---|---|---|---|---|---|---
PGD, Min-max [76] | White-box | Untargeted | Both | Add/Delete edges | Node Classification | GNN
IG-FGSM [72], IG-JSMA [72] | White-box | Both | Evasion | Add/Delete edges, Modify features | Node Classification | GNN
Wang et al. [64] | White-box, Gray-box | Targeted | Poisoning | Add/Delete edges | Node Classification | GNN
Nettack [89] | Gray-box | Targeted | Both | Add/Delete edges, Modify features | Node Classification | GNN
Metattack [91] | Gray-box | Untargeted | Poisoning | Add/Delete edges | Node Classification | GNN
NIPA [57] | Gray-box | Untargeted | Poisoning | Inject nodes | Node Classification | GNN
RL-S2V [17] | Black-box | Targeted | Evasion | Add/Delete edges | Graph Classification, Node Classification | GNN
ReWatt [46] | Black-box | Untargeted | Evasion | Add/Delete edges | Graph Classification | GNN
Liu et al. [43] | White-box, Gray-box | Untargeted | Poisoning | Flip label, Modify features | Classification, Regression | G-SSL
FGA [13] | White-box | Targeted | Both | Add/Delete edges | Node Classification, Community Detection | Network Embedding
GF-Attack [9] | Black-box | Targeted | Evasion | Add/Delete edges | Node Classification | Network Embedding
Bojchevski et al. [5] | Black-box | Both | Poisoning | Add/Delete edges | Node Classification, Community Detection | Network Embedding
Zhang et al. [81] | White-box | Targeted | Poisoning | Add/Delete facts | Plausibility Prediction | Knowledge Graph Embedding
CD-Attack [38] | Black-box | Targeted | Poisoning | Add/Delete edges | Community Detection | Community Detection Algorithm
## 4 Graph Adversarial Attacks
In this section, we review representative algorithms for graph adversarial
attacks. Following the categorizations in the previous section, we first
divide these algorithms into white-box, gray-box and black-box and then for
algorithms in each category, we further group them into targeted and
untargeted attacks. Note that unless otherwise specified, the following attack
methods focus on node-level classification. An overall categorization of
representative attack methods is shown in Table 2. In addition, some open
source implementations of representative algorithms are listed in Table 9.
### 4.1 White-box Attacks
In white-box attack setting, the adversary has access to any information about
the victim model such as model parameters, training data, labels, and
predictions. Although in most of the real world cases we do not have the
access to such information, we can still assess the vulnerability of the
victim models under the worst situation. Typically, white-box attacks use the
gradient information from the victim model to guide the generation of attacks
[13, 76, 72, 10].
#### 4.1.1 Targeted Attack
Targeted attack aims to mislead the victim model to make wrong predictions on
some target samples. A lot of studies follow the white-box targeted attack
setting with a wide range of real-world applications. FGA [13] extracts the
link gradient information from GCN, and then greedily selects the pair of
nodes with maximum absolute gradient to modify the graph iteratively. Genetic
algorithm based Q-Attack is proposed to attack a number of community detection
algorithms [10]. Iterative gradient attack (IGA), which is based on the
gradient information of a trained graph auto-encoder, is introduced to attack
link prediction [11]. Furthermore, the vulnerability of knowledge graph
embedding is investigated in [81] and the plausibility of arbitrary facts in
knowledge graph can be effectively manipulated by the attacker. Recommender
systems based on GNNs are also vulnerable to adversarial attacks, which is
shown in [86]. In addition, there are great efforts on attacking node-level
classification. Traditional attacks in the image domain always use models’
gradients to find adversarial examples. However, due to the discrete property
of graph data, directly calculating gradients of models could fail. To solve
this issue, the work [72] suggests using integrated gradients [58] to better
search for adversarial edges and feature perturbations. During the attacking
process, the attacker iteratively chooses the edge or feature which has the
strongest effect on the adversarial objective. In this way, it can cause the
victim model to misclassify target nodes with a higher success rate. The
work [79] assumes there is a set of “bad actor” nodes in a graph. When they
flip the edges with any target nodes in a graph, it will cause the GNN model
to have a wrong prediction on the target node. These “bad actor” nodes are
critical to the safety of GNN models. For example, Wikipedia has hoax articles
which have few and random connections to real articles. Manipulating the
connections of these hoax articles will cause the system to make wrong
prediction of the categories of real articles.
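As an illustration of this gradient-based greedy scheme, the following is a minimal sketch in the spirit of FGA [13], assuming a differentiable victim model over a dense adjacency matrix; it is a simplification, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def fga_step(model, adj, x, target, label):
    """One greedy step: take the gradient of the target node's loss w.r.t.
    a dense adjacency matrix and flip the entry whose flip direction most
    increases that loss."""
    adj = adj.clone().detach().requires_grad_(True)
    logits = model(adj, x)
    loss = F.cross_entropy(logits[target].unsqueeze(0), label.view(1))
    grad = torch.autograd.grad(loss, adj)[0]
    # Flipping a non-edge (0 -> 1) changes the loss by ~ +grad, while
    # flipping an existing edge (1 -> 0) changes it by ~ -grad.
    flip_gain = grad * (1 - 2 * adj.detach())
    i, j = divmod(flip_gain.flatten().argmax().item(), adj.shape[1])
    new_adj = adj.detach().clone()
    new_adj[i, j] = new_adj[j, i] = 1 - new_adj[i, j]   # keep symmetry
    return new_adj
```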
#### 4.1.2 Untargeted Attack
Currently there are not many studies on untargeted white-box attack, and
topology attack [76] is one representative algorithm. It first constructs a
binary symmetric perturbation matrix ${\bf S}\in\{0,1\}^{n\times n}$, where ${\bf
S}_{ij}=1$ indicates flipping the edge between nodes $i$ and $j$ and ${\bf S}_{ij}=0$
means no modification of ${\bf A}_{ij}$. Thus, the goal of the attacker is to
find ${\bf S}$ that minimizes the predefined attack loss given a finite budget
of edge perturbations $\Delta$, i.e., $\|{\bf S}\|_{0}\leq\Delta$. It
considers two different attack scenarios: attacking pre-trained GNN with fixed
parameters $\theta$ and attacking a re-trainable GNN $f_{\theta}$. For
attacking a fixed $f_{\theta}$, the problem can be formulated as,
$\displaystyle\min\limits_{{\bf S}\in\{0,1\}^{n\times n}}~{}\mathcal{L}_{\text{atk}}(f_{\theta}({\bf S},{\bf A},{\bf X}))$
$\displaystyle\quad\text{s.t.}~{}\|{\bf S}\|_{0}\leq{\Delta}.$ (5)
It utilizes the Projected Gradient Descent (PGD) algorithm in [47] to search
the optimal $\bf S$. Note that the work [47] is also one popular attack
algorithm in the image domain. For the re-trainable GNNs, parameter $\theta$
will be retrained after adversarial manipulation. Thus the attack problem is
formulated as a min-max form where the inner maximization is to update
$\theta$ by maximizing the attack loss and can be solved by gradient ascent
while the outer minimization can be solved by PGD. Another work on untargeted
white-box attack aims to attack multi-network mining by perturbing the
networks (graphs) [85]. Specifically, in each iteration it measures the
influence of network elements (edges and node attributes) on the mining
results and attacks the network element with the highest influence value.
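A minimal sketch of this PGD scheme for the fixed-parameter case of Eq. (5) is given below; it relaxes ${\bf S}$ to $[0,1]$, uses a crude rescaling in place of the exact budget projection of [76], and ignores the symmetry of ${\bf S}$ for brevity.

```python
import torch
import torch.nn.functional as F

def pgd_topology_attack(model, adj, x, labels, budget, steps=100, lr=0.1):
    """Relax the binary perturbation matrix S to [0, 1]^{n x n}, ascend the
    training loss (i.e., descend L_atk = -ell), and project back onto the
    edge budget with a crude rescaling."""
    s = torch.zeros_like(adj, requires_grad=True)      # assumes float, dense
    for _ in range(steps):
        adj_hat = adj + (1 - 2 * adj) * s              # entries with s=1 are flipped
        loss = F.cross_entropy(model(adj_hat, x), labels)
        grad = torch.autograd.grad(loss, s)[0]
        with torch.no_grad():
            s += lr * grad                             # ascend the loss
            s.clamp_(0, 1)
            if s.sum() > budget:
                s *= budget / s.sum()                  # crude budget projection
    flip = (s.detach() > 0.5).float()                  # round back to binary
    return adj + (1 - 2 * adj) * flip
```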
### 4.2 Gray-box Attacks
White-box attacks assume that attackers can calculate gradient through model
parameters, which is not always practical in real-world scenarios. Gray-box
attacks are proposed to generate attacks with limited knowledge on the victim
model [89, 91, 57]. Usually they first train a surrogate model with the
labeled training data to approximate the information of the victim model and
then generate perturbations to attack the surrogate model. Note that these
methods need access to the labels of the training data; thus they are not
black-box attacks, which will be introduced in the following subsection.
#### 4.2.1 Targeted Attack
The early work on targeted gray-box attacks is for graph clustering [15]. It
demonstrates that injecting noise to a domain name system (DNS) query graph
can degrade the performance of graph embedding models. Different from [15],
the work [89] proposes an attack method called Nettack to generate structure
and feature attacks, aiming at solving Eq. (3). Besides, they argue that only
limiting the perturbation budgets cannot always make the perturbation
“unnoticeable”. They suggest the perturbed graphs should also maintain
important graph properties, including degree distribution and feature co-
occurrence. Therefore, Nettack first selects possible perturbation candidates
not violating degree distribution and feature co-occurrence of the original
graph. Then it greedily chooses the perturbation that has the largest score to
modify the graph, where the score is defined as,
$\max_{i\neq y}\ln\left({\bf
Z}_{u,i}\left(G^{\prime}\right)\right)-\ln\left({\bf
Z}_{u,y}\left(G^{\prime}\right)\right),$ (6)
where ${\bf Z}_{u,i}$ is the probability of node $u$ to be the class $i$
predicted by the surrogate model. Thus, the goal of the attacker is to
maximize the difference in the log-probabilities of the target node $u$. By
doing this repeatedly until reaching the perturbation budget $\Delta$, it
obtains the final modified graph. Furthermore, it suggests that such a graph attack
can also transfer from model to model, just as the attacks in the image domain
[25]. The authors also conduct influencer attacks where they can only
manipulate the nodes except the target. It turns out that influencer attacks
lead to a lower decrease in performance compared with directly modifying
target node given the same perturbation budget.
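The per-candidate score of Eq. (6) is easy to compute from the surrogate's logits; a minimal sketch (an illustrative helper, not Nettack's implementation):

```python
import torch

def nettack_score(surrogate_logits, u, y):
    """Score of Eq. (6) for a candidate perturbed graph G': the log-probability
    margin between the best wrong class and the true class y of target node u
    (larger = better for the attacker)."""
    log_probs = torch.log_softmax(surrogate_logits, dim=1)   # ln Z
    wrong = log_probs[u].clone()
    wrong[y] = float("-inf")                                  # exclude class y
    return (wrong.max() - log_probs[u, y]).item()
```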
However, Nettack cannot handle large-scale graphs due to its high time
complexity. The work [90] employs statistical analysis to investigate the
patterns exhibited by Nettack perturbations. According to the observations on
perturbation patterns, a scalable attack method Fasttack is proposed. It first
ranks the possible perturbations by their impact on the victim model and
chooses the top perturbations to generate attacks. In [67], AFGSM is proposed
to derive an approximate closed-form solution with a lower time cost for the
attack model and sequentially inject fake nodes into the graph.
Another line of work focuses on generating backdoor attacks for graphs [83,
74]. A backdoor attack aims to find some “trigger” (or hidden pattern) that
would produce misclassified results when added to an input [66, 14]. To
generate backdoor attacks for graphs, the attacker injects triggers (e.g.
carefully-crafted subgraphs) in the training samples, and aims to fool GNNs
into misclassifying those samples with triggers while maintaining the
performance on other testing samples. Note that graph backdoor attacks can be
applied in both node-level and graph-level classification [74].
#### 4.2.2 Untargeted Attack
Although following the same way of training a surrogate model as Nettack,
Metattack [91] is a kind of untargeted poisoning attack. It tackles the bi-
level problem in Eq. (3) by using meta-gradient. Basically, it treats the
graph structure matrix as a hyper-parameter and the gradient of the attacker
loss with respect to it can be obtained by:
$\displaystyle\nabla_{G}^{\text{meta }}=\nabla_{G}\mathcal{L}_{\text{atk
}}\left(f_{\theta^{*}}(G)\right).$ (7)
Note that $\nabla_{G}\mathcal{L}_{\text{atk }}\left(f_{\theta^{*}}(G)\right)$
is actually a function with respect to both $G$ and $\theta$. If $\theta^{*}$
is obtained by some differential operations, we can compute
$\nabla_{G}^{\text{meta }}$ as follows,
$\nabla_{G}^{\text{meta }}=\nabla_{f}\mathcal{L}_{\text{atk
}}\left(f_{\theta^{*}}(G)\right)\cdot\left[\nabla_{G}f_{\theta^{*}}(G)+\nabla_{\theta^{*}}f_{\theta^{*}}(G)\cdot\nabla_{G}\theta^{*}\right]$
(8)
where $\theta^{*}$ is often obtained by gradient descent in fixed iterations
$T$. At iteration $t+1$, the gradient of $\theta_{t+1}$ with respect to $G$
can be formulated as,
$\nabla_{G}\theta_{t+1}=\nabla_{G}\theta_{t}-\alpha\nabla_{G}\nabla_{\theta_{t}}\mathcal{L}_{\text{train
}}\left(f_{\theta_{t}}(G)\right),$ (9)
where $\alpha$ denotes learning rate of the gradient descent operation. By
unrolling the training procedure from $\theta_{T}$ back to $\theta_{0}$, we
can get $\nabla_{G}\theta_{T}$ and then $\nabla_{G}^{\text{meta }}$. A greedy
approach is applied to select the perturbation based on the meta gradient.
Specifically, given the meta-gradient for a node pair $(u,v)$, it defines a
score $S(u,v)=\nabla_{{\bf
A}_{uv}}^{\operatorname{meta}}\cdot\left(-2\cdot{\bf A}_{uv}+1\right)$ and
greedily picks the perturbation which has the highest score but satisfies the
unnoticeable constraints as in Nettack.
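A minimal sketch of the meta-gradient computation of Eqs. (7)-(9) is shown below; `model_fn(params, adj, x)` is a hypothetical functional forward pass, and the attack loss is simplified to the negated training loss (Metattack itself uses a self-training variant).

```python
import torch
import torch.nn.functional as F

def meta_gradient(model_fn, init_params, adj, x, labels, idx_train,
                  T=10, alpha=0.1):
    """Unroll T inner training steps with create_graph=True so that theta_T
    stays differentiable w.r.t. the adjacency matrix, then differentiate the
    attack loss at theta_T back to the graph (Eq. (7))."""
    adj = adj.clone().detach().requires_grad_(True)
    params = [p.clone().detach().requires_grad_(True) for p in init_params]
    for _ in range(T):                                  # inner loop, Eq. (9)
        loss = F.cross_entropy(model_fn(params, adj, x)[idx_train],
                               labels[idx_train])
        grads = torch.autograd.grad(loss, params, create_graph=True)
        params = [p - alpha * g for p, g in zip(params, grads)]
    # Attack loss at theta*: with ell_atk = -ell this is the negated training loss.
    atk_loss = -F.cross_entropy(model_fn(params, adj, x)[idx_train],
                                labels[idx_train])
    return torch.autograd.grad(atk_loss, adj)[0]        # nabla_G^meta, Eq. (7)
```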
Instead of modifying the connectivity of existing nodes, a novel reinforcement
learning method for node injection poisoning attacks (NIPA) [57] is proposed
to inject fake nodes into graph data. Specifically, NIPA first injects
$n$ singleton nodes into the original graph. Then, in each action $a_{t}$, the
attacker first chooses an injected node to connect with another node in the
graph and then assigns a label to the injected node. By doing this
sequentially, the final graph is statistically similar to the original graph
but can degrade the overall model performance.
### 4.3 Black-box Attacks
Different from gray-box attacks, black-box attacks [17, 46, 57, 5, 9, 44] are
more challenging since the attacker can only access the adjacency matrix,
attribute matrix and the output of the victim model. Access to the parameters,
labels and predicted probabilities is prohibited.
#### 4.3.1 Targeted Attack
As mentioned earlier, training a surrogate model requires access to the labels
of the training data, which is not always practical. We hope to find a way in
which we only need to make black-box queries to the victim model [17] or
attack the victim in an unsupervised fashion [5, 9].
To make such black-box queries, reinforcement learning is
introduced. RL-S2V [17] is the first work to employ reinforcement learning
technique to generate adversarial attacks on graph data under the black-box
setting. They model the attack procedure as a Markov Decision Process (MDP)
and the attacker is allowed to modify $m$ edges to change the predicted label
of the target node $u$. They study both node-level (targeted) and graph-level
(untargeted) attacks. For node-level attack, they define the MDP as follows,
* •
State. The state $s_{t}$ is represented by the tuple $\left(G^{(t)},u\right)$
where $G^{(t)}$ is the modified graph at time step $t$.
* •
Action. A single action at time step $t$ is denoted as $a_{t}$. For each
action $a_{t}$, the attacker can choose to add or remove an edge from the
graph. Furthermore, a hierarchical structure is applied to decompose the
action space.
* •
Reward. Since the goal of the attacker is to change the classification result
of the target node $u$, RL-S2V gives non-zero reward $r$ to the attacker at
the end of the MDP:
$r\left(s_{m},a_{m}\right)=\left\{\begin{array}{ll}1&\text{if }f_{\theta}\left(G^{(m)},u\right)\neq y,\\ -1&\text{if }f_{\theta}\left(G^{(m)},u\right)=y.\end{array}\right.$ (10)
In the intermediate steps, the attacker receives no reward, i.e., $\forall
t=1,2,\dots,m-1,r\left(s_{t},a_{t}\right)=0$.
* •
Termination. The process terminates when the attacker finishes modifying $m$
edges.
Since they define the MDP of the graph-level attack in a similar way, we omit
the details. Further, the Q-learning algorithm [50] is adopted to solve the
MDP and guide the attacker to modify the graph.
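The terminal reward of Eq. (10) is straightforward; a minimal sketch, assuming a model that returns per-node logits:

```python
def rl_s2v_reward(model, g_final, u, y, t, m):
    """Reward of Eq. (10): +1 if target node u is misclassified after the
    m-th edge modification, -1 otherwise; all intermediate steps (t < m)
    receive zero reward."""
    if t < m:
        return 0.0
    pred = model(g_final).argmax(dim=1)[u].item()
    return 1.0 if pred != y else -1.0
```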
Instead of attacking node-level classification, the work [5] shows a way to
attack the family of node embedding models in the black-box setting. Inspired
by the observation that DeepWalk can be formulated in matrix factorization
form [53], they maximize the unsupervised DeepWalk loss with matrix
perturbation theory by performing $\Delta$ edge flips. It is further
demonstrated that the perturbed structure is transferable to other models like
GCN and Label Propagation. However, this method only considers the structure
information. GF-Attack [9] is proposed to incorporate the feature information
into the attack model. Specifically, they formulate the connection between the
graph embedding method and general graph signal process with graph filter and
construct the attacker based on the graph filter and attribute matrix. GF-
Attack can also be transferred to other network embedding models and achieves
better performance than the method in [5].
#### 4.3.2 Untargeted Attack
It is argued that constraining only the number of modified edges may not make
the perturbation sufficiently unnoticeable. A novel framework ReWatt [46] is
proposed to solve this problem and perform untargeted graph-level attack.
Employing a reinforcement learning framework, ReWatt adopts the rewiring
operation instead of simply adding/deleting an edge in one single modification
to make perturbation more unnoticeable. One rewiring operation involves three
nodes $v_{1},v_{2}$ and $v_{3}$, where ReWatt removes the existing edge
between $v_{1}$ and $v_{2}$ and connects $v_{1}$ and $v_{3}$. ReWatt also
constrains $v_{3}$ to be the 2-hop neighbor of $v_{1}$ to make perturbation
smaller. Such a rewiring operation does not change the number of nodes and
edges in the graph, and it is further proved to affect algebraic connectivity
and effective graph resistance, two important graph properties based on the
graph Laplacian, less than adding or deleting edges does.
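The rewiring operation itself is simple to state in code; the sketch below (using networkx) shows only the graph manipulation, with the triple $(v_{1},v_{2},v_{3})$ chosen at random rather than by ReWatt's learned policy.

```python
import random
import networkx as nx

def rewire(G):
    """One ReWatt-style rewiring operation [46]: remove an existing edge
    (v1, v2) and reconnect v1 to a node v3 in its 2-hop neighborhood,
    leaving the numbers of nodes and edges unchanged."""
    v1, v2 = random.choice(list(G.edges()))
    two_hop = set(nx.single_source_shortest_path_length(G, v1, cutoff=2))
    candidates = two_hop - set(G.neighbors(v1)) - {v1, v2}
    if not candidates:
        return G                       # no feasible rewiring from this edge
    v3 = random.choice(sorted(candidates))
    G.remove_edge(v1, v2)
    G.add_edge(v1, v3)
    return G
```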
To perform graph adversarial attacks in a more realistic way, the work [44]
proposes a more restricted black-box setting where the attackers only have
access to a subset of nodes and can only attack a small number of them. Under
this setting, the attacker is required to generate attacks in two steps: 1)
selecting a small subset of nodes to attack under the limits of node access;
2) modifying node features or edges under the perturbation budget. By
exploiting the structural inductive biases of the GNN models [77, 36] as the
information source for attacking, it proposes a practical greedy method of
adversarial attacks for node-level classification tasks and effectively
degrades the performance of GNNs.
## 5 Countermeasures Against Graph Adversarial Attacks
In previous sections, we have shown that graph neural networks can be easily
fooled by unnoticeable perturbation on graph data. The vulnerability of graph
neural networks poses great challenges to apply them in safety-critical
applications. In order to defend the graph neural networks against these
attacks, different countermeasure strategies have been proposed. The existing
methods can be categorized into the following types: (1) adversarial training,
(2) adversarial perturbation detection, (3) certifiable robustness, (4) graph
purification, and (5) attention mechanism.
### 5.1 Adversarial Training
Adversarial training is a widely used countermeasure for adversarial attacks
in image data [25]. The main idea of adversarial training is to inject
adversarial examples into the training set such that the trained model can
correctly classify future adversarial examples. Similarly, we can adopt this
strategy to defend against graph adversarial attacks as follows,
$\min_{\theta}\max_{\delta_{\bf A}\in\mathcal{P}_{\bf A}\atop\delta_{\bf
X}\in\mathcal{P}_{\bf X}}\mathcal{L}_{\text{train}}\left(f_{\theta}({\bf
A}+\delta_{\bf A},{\bf X}+\delta_{\bf X})\right),$ (11)
where $\delta_{\bf A}$, $\delta_{\bf X}$ denote the perturbation on ${\bf
A},{\bf X}$, respectively; $\mathcal{P}_{\bf A}$ and $\mathcal{P}_{\bf X}$
stand for the domains of imperceptible perturbation. The min-max optimization
problem in Eq (11) indicates that adversarial training involves two processes:
(1) generating perturbations that maximize the prediction loss and (2)
updating model parameters that minimize the prediction loss. By alternating
the above two processes iteratively, we can train a robust model against
adversarial attacks. Since there are two inputs, i.e., the adjacency matrix ${\bf
A}$ and attribute matrix ${\bf X}$, adversarial training can be done on them
separately. To generate perturbations on the adjacency matrix, it is proposed
to randomly drop edges during adversarial training [17]. Though such a simple
strategy only yields a modest improvement in classification accuracy (about a
1% increase), it shows that even cheap adversarial training has some effect.
Furthermore, projected gradient descent is used to
generate perturbations on the discrete input structure, instead of randomly
dropping edges [76]. On the other hand, an adversarial training strategy with
dynamic regularization is proposed to perturb the input features [22].
Specifically, it includes the divergence between the prediction of the target
example and its connected examples into the objective of adversarial training,
aiming to attack and reconstruct graph smoothness. Furthermore, batch virtual
adversarial training [19] is proposed to promote the smoothness of GNNs and
make GNNs more robust against adversarial perturbations. Several other
variants of adversarial training on the input layer are introduced in [12, 18,
69, 20].
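A minimal sketch of one min-max iteration of Eq. (11) is given below, perturbing only the continuous feature matrix for simplicity; the inner ascent is FGSM-style and all names are illustrative.

```python
import torch
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, adj, x, labels, idx_train,
                              eps=0.01, inner_steps=5):
    """One min-max step of Eq. (11), with delta_A fixed to zero: the inner
    loop ascends the training loss to craft delta_X, the outer step descends
    it to update theta."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(inner_steps):                       # inner maximization
        loss = F.cross_entropy(model(adj, x + delta)[idx_train],
                               labels[idx_train])
        grad = torch.autograd.grad(loss, delta)[0]
        with torch.no_grad():
            delta += eps * grad.sign()                 # FGSM-style ascent
            delta.clamp_(-eps * inner_steps, eps * inner_steps)
    optimizer.zero_grad()                              # outer minimization
    loss = F.cross_entropy(model(adj, x + delta.detach())[idx_train],
                           labels[idx_train])
    loss.backward()
    optimizer.step()
    return loss.item()
```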
The aforementioned adversarial training strategies face two main shortcomings:
(1) they generate perturbations on ${\bf A}$ and ${\bf X}$ separately; and (2)
it is not easy to perturb the graph structure due to its discreteness. To
overcome the shortcomings, instead of generating perturbation on the input, a
latent adversarial training method injects perturbations on the first hidden
layer [32]:
$\min_{\theta}\max_{\delta\in\mathcal{P}}\mathcal{L}_{\text{train}}\left(f_{\theta}(G;{\bf
H}^{(1)}+\delta)\right),$ (12)
where ${\bf H}^{(1)}$ denotes the representation matrix of the first hidden
layer and $\delta\in\mathcal{P}$ is some perturbation on ${\bf H}$. It is
noted that the hidden representation is continuous and it incorporates the
information from both graph structure and node attributes.
### 5.2 Adversarial Perturbation Detection
To resist graph adversarial attacks during the test phase, there is one main
strategy called adversary detection. These detection models protect the GNN
models by exploring the intrinsic difference between adversarial edges/nodes
and the clean edges/nodes [78, 28]. The work [78] is the first work to propose
detection approaches to find adversarial examples on graph data. It introduces
four methods to distinguish adversarial edges or nodes from the clean ones
including (1) link prediction, (2) sub-graph link prediction, (3) graph
generation models and (4) outlier detection. These methods have shown some
ability to correctly detect adversarial perturbations. The work [28] introduces a
method to randomly draw subsets of nodes, and relies on graph-aware criteria
to judiciously filter out contaminated nodes and edges before employing a
semi-supervised learning (SSL) module. The proposed model can be used to
detect different anomaly generation models, as well as adversarial attacks.
### 5.3 Certifiable Robustness
Previously introduced adversarial training strategies are heuristic and only
show experimental benefits; we still do not know whether adversarial examples
exist even when current attacks fail. Therefore, several works [92, 6, 30, 7,
93, 61, 65] reason rigorously about the safety of graph neural networks by
trying to certify their robustness. As we know,
GNN’s prediction on one node $v_{t}$ always depends on its neighbor nodes. In
[92], they ask the question: which nodes in a graph are safe under any
admissible perturbation of their neighboring nodes’ attributes? To answer
this question, for each node $v$ with label $y_{v}$, they try
to exactly calculate an upper bound $U(v)$ of the maximized margin loss:
$U(v)\geq\max_{G^{\prime}\in\mathcal{G}}\Big{(}\max_{i\neq y}{\bf
Z}_{v,i}\left(G^{\prime}\right)-{\bf Z}_{v,y}\left(G^{\prime}\right)\Big{)},$
(13)
where $\mathcal{G}$ denotes the set of all allowed graph perturbations (in
[92] only attribute perturbation). This upper bound $U$ is called the
Certificate of node $v$. During certification, if $U(v)\leq 0$ for node $v$,
no perturbation can cause the model to give a larger score to a wrong
class $(i\neq y_{v})$ than to the class $y_{v}$, so there is no adversarial
attack in $\mathcal{G}$ that can change the model’s prediction. During the
test phase, they calculate the certificate for all test nodes, so they know
how many nodes in a graph are absolutely safe under attribute perturbations.
Moreover, this certificate is trainable: directly minimizing the
certificates helps more nodes become safe. However, the work [92] only
considers the perturbations on node attributes. Analyzing certifiable
robustness from a different perspective, in [6], it deals with the case when
the attacker only manipulates the graph structure. It derives the robustness
certificates (similar to Eq. (13)) as a linear function of personalized
PageRank [29], which makes the optimization tractable. In [93], it also tries
to certify robustness of graph neural networks under graph structural
perturbations. It successfully solves the certification problem using a
jointly constrained bilinear programming method. In [65], it borrows the idea
from randomized smoothing [16] to achieve certifiable robustness for graphs
under structural perturbation. Meanwhile, the sparsity of graph data during
certification is considered in [7]. It improves the efficiency and accuracy of
the certifications against attacks on both graph features and structures.
Immunization methods are introduced to improve graphs’ certifiable robustness
[61]. Besides, there are also works studying certifiable robustness on GNN’s
other applications such as community detection [30].
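Schematically, once the certificates $U(v)$ of Eq. (13) have been computed (the hard part, handled by the referenced works), checking and training with them is simple; a sketch under that assumption:

```python
import torch

def certified_mask(upper_bounds: torch.Tensor) -> torch.Tensor:
    """A node v is provably robust whenever its certificate U(v) <= 0,
    i.e., no admissible perturbation can raise a wrong class above y_v."""
    return upper_bounds <= 0

def robust_training_term(upper_bounds: torch.Tensor) -> torch.Tensor:
    """Schematic trainable certificate: a hinge penalty on positive U(v)
    pushes more nodes into the certified region, in the spirit of [92]
    (which derives the exact differentiable bound)."""
    return torch.relu(upper_bounds).sum()
```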
### 5.4 Graph Purification
Both adversarial training and certifiable defense methods only target evasion
attacks, where the attack happens at test time. However, graph purification
defense methods mainly focus on defending
poisoning attacks. Since the poisoning attacks insert poisons into the
training graph, purification methods aim to purify the poisoned graph and
learn robust graph neural network models based on it. There are two approaches
to realize graph purification: pre-processing [72, 21] and graph learning [33,
35].
#### 5.4.1 Pre-processing
Pre-processing methods first purify the perturbed graph data and then train
the GNN model on the purified graph. In this way, the GNN model is trained on
a clean graph. The work [72] proposes a purification method based on two
empirical observations of the attack methods: (1) Attackers usually prefer
adding edges over removing edges or modifying features and (2) Attackers tend
to connect dissimilar nodes. As a result, they propose a defense method by
eliminating the edges whose two end nodes have small Jaccard Similarity [54].
Because these two nodes are different and it is not likely they are connected
in reality, the edge between them may be adversarial. The experimental results
demonstrate the effectiveness and efficiency of the proposed defense method.
However, this method can only work when the node features are available. In
[21], it is observed that Nettack [89] generates the perturbations which
mainly changes the small singular values of the graph adjacency matrix. Thus
it proposes to purify the perturbed adjacency matrix by using truncated SVD to
get its low-rank approximation. It further shows that keeping only the top
$10$ singular values of the adjacency matrix is able to defend against Nettack
and improve the performance of GNNs.
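Both pre-processing defenses reduce to a few lines once stated; the sketch below gives simplified versions of the Jaccard-based edge filtering of [72] and the rank-$k$ truncation of [21], assuming dense binary features and an unweighted adjacency matrix.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import svds

def jaccard_purify(adj, features, threshold=0.01):
    """GCN-Jaccard-style pre-processing [72]: drop every edge whose two
    endpoints have Jaccard similarity below `threshold`."""
    adj = sp.lil_matrix(adj)
    rows, cols = adj.nonzero()               # both (i,j) and (j,i) appear
    for i, j in zip(rows, cols):
        fi, fj = features[i].astype(bool), features[j].astype(bool)
        inter, union = (fi & fj).sum(), (fi | fj).sum()
        if union > 0 and inter / union < threshold:
            adj[i, j] = 0
    return sp.csr_matrix(adj)

def svd_purify(adj, k=10):
    """GCN-SVD-style pre-processing [21]: replace the adjacency matrix by
    its rank-k approximation, discarding the small singular values that
    Nettack-style perturbations tend to affect."""
    u, s, vt = svds(sp.csr_matrix(adj).astype(float), k=k)
    return u @ np.diag(s) @ vt
```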
#### 5.4.2 Graph Learning
Pre-processing might not be an ideal choice for purifying the graph, as it is
independent of GNN training and could mistakenly remove normal
edges. An alternative purification strategy is graph learning, which aims
to remove adversarial patterns and obtain a clean graph structure.
Traditional graph learning methods [23, 31] do not directly deal with
adversarial attacks. Hence, to make GNNs more robust, it is of great
importance to leverage the characteristics of adversarial attacks to guide the
graph learning process. In [33], it is demonstrated that adversarial attacks
essentially violate some important graph properties, i.e., low rank,
sparsity and feature smoothness. It then exploits these graph properties to
design robust graph neural networks, Pro-GNN, which jointly learns clean graph
structure and robust GNN parameters. Specifically, Pro-GNN alternatively
updates graph structure by preserving the aforementioned properties and trains
GNN parameters on the updated graph structure. In [80], variational graph
autoencoder [35] is employed to reconstruct graph structure, which can also
reduce the effects of adversarial perturbations.
### 5.5 Attention Mechanism
Different from the purification methods which try to exclude adversarial
perturbations, attention-based defense methods aim to train a robust GNN model
by penalizing model’s weights on adversarial edges or nodes [87, 60, 82].
Basically, these methods learn an attention mechanism to distinguish
adversarial edges and nodes from the clean ones, and then make the adversarial
perturbations contribute less to the aggregation process of the GNN training.
The work [87] assumes that adversarial nodes may have high prediction
uncertainty, since the adversary tends to connect a node with nodes from other
communities. In order to penalize the influence from these uncertain nodes,
they propose a defense method named RGCN to model the $l$-th layer hidden
representation $\boldsymbol{h}_{i}^{(l)}$ of nodes as Gaussian distribution
with mean value $\boldsymbol{\mu}_{\mathrm{i}}^{(l)}$ and variance
$\boldsymbol{\sigma}_{i}^{(l)}$,
$\boldsymbol{h}_{i}^{(l)}\sim
N\left(\boldsymbol{\mu}_{\mathrm{i}}^{(l)},\operatorname{diag}\left(\boldsymbol{\sigma}_{i}^{(l)}\right)\right),$
(14)
where the uncertainty can be reflected in the variance
$\boldsymbol{\sigma}_{i}^{(l)}$. When aggregating the information from
neighbor nodes, it applies an attention mechanism to penalize the nodes with
high variance,
$\boldsymbol{\alpha}_{i}^{(l)}=\exp\left(-\gamma\boldsymbol{\sigma}_{i}^{(l)}\right),$
(15)
where $\boldsymbol{\alpha}^{(l)}_{i}$ is the attention score assigned to node
$i$ and $\gamma$ is a hyper-parameter. Furthermore, it is verified that the
attacked nodes do have higher variances than normal nodes and the proposed
attention mechanism does help mitigate the impact brought by adversarial
attacks. From a different perspective, GNNGuard is proposed to employ the
theory of network homophily [49] to assign higher scores to edges connecting
similar nodes while pruning edges between unrelated nodes [82].
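A schematic version of the variance-penalized aggregation of Eqs. (14)-(15) is given below; RGCN [87] additionally propagates Gaussian means and variances through every layer, which this sketch omits.

```python
import torch

def variance_attention(sigma, gamma=1.0):
    """Attention scores of Eq. (15): alpha = exp(-gamma * sigma), so neighbors
    whose hidden representations have high variance (high uncertainty,
    possibly adversarial) are down-weighted in aggregation."""
    return torch.exp(-gamma * sigma)

def penalized_aggregate(mu, sigma, neighbors, gamma=1.0):
    """Aggregate the neighbor means, re-weighted by the variance-based
    attention; a schematic stand-in for RGCN's Gaussian aggregation."""
    alpha = variance_attention(sigma[neighbors], gamma)       # (k, d)
    return (alpha * mu[neighbors]).sum(dim=0) / alpha.sum(dim=0).clamp(min=1e-8)
```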
In [60], it suggests that to improve the robustness of one target GNN model,
it is beneficial to include the information from other clean graphs, which
share the similar topological distributions and node attributes with the
target graph. For example, Facebook and Twitter have social network graph data
that share similar domains; Yelp and Foursquare have similar co-review graph
data. Thus, it first generates adversarial edges $E_{P}$ on the clean graphs,
which serve as the supervision of known perturbation. With this supervision
knowledge, it further designs the following loss function to reduce the
attention scores of adversarial edges:
$\mathcal{L}_{dist}=-\min\left(\eta,\underset{e_{ij}\in{E}\backslash{E_{P}}}{\mathbb{E}}\boldsymbol{\alpha}_{ij}^{(l)}-\underset{e_{ij}\in{E_{P}}}{\mathbb{E}}\boldsymbol{\alpha}_{ij}^{(l)}\right),$
(16)
where $\mathbb{E}$ denotes the expectation, $E\backslash{E_{P}}$ represents
normal edges in the graph, $\boldsymbol{\alpha}^{(l)}_{ij}$ is the attention
score assigned to edge $e_{ij}$ and $\eta$ is a hyper-parameter controlling
the margin between the expectation of two distributions. It then adopts meta-
optimization to train a model initialization and fine-tunes it on the target
poisoned graph to get a robust GNN model.
## 6 A Repository for Graph Attacks and Defenses
In this section, we give a brief introduction to the repository we have
developed for adversarial attacks and defenses,
DeepRobust (https://github.com/DSE-MSU/DeepRobust) [41]. To facilitate
research on adversarial attacks and defenses, DeepRobust includes the majority
of representative attack and defense algorithms for both graph data and image
data. The repository enables researchers to deepen their understanding of
attacks and defenses via empirical studies. Specifically, for graph
adversarial learning, DeepRobust provides 7 datasets: four citation graphs
including Cora [55], Cora-ML [4], Citeseer [4] and Pubmed [55], one co-author
graph ACM [68], one blog graph Polblogs [1] and one social graph BlogCatalog
[27]. On top of that, DeepRobust covers the following attacks and defenses:
(1) 5 targeted attack algorithms, i.e., FGA [13], Nettack [89], RL-S2V [17],
integrated gradient attack [72] and random attack [89]; (2) 3 untargeted
algorithms, i.e., Metattack [91], Topology attack [76] and DICE [70]; (3) one
victim model, i.e. GCN [34]; and (4) 5 defense algorithms, i.e., adversarial
training [25], GCN-Jaccard [72], GCN-SVD [21], RGCN [87] and Pro-GNN [33].
DeepRobust is an easy-to-use platform for researchers who are working on
adversarial attacks and defenses. With DeepRobust, users can generate graph
adversarial attacks by training an attack model through the attacking APIs or
loading the pre-attacked graphs provided by the repository. Robust models can
be trained on the perturbed graph with the defense APIs. Once we obtain the
perturbed graph and the trained defense model, they can be fed into the
evaluation API to compete against each other, and the model performance under
the corresponding attack will be reported. In the next section, we use
DeepRobust for empirical studies on graph adversarial attacks.
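The sketch below follows the usage pattern documented in the DeepRobust repository (load a dataset, poison it with Metattack, retrain a victim GCN); exact class and argument names may vary across versions, so treat it as illustrative.

```python
import numpy as np
from deeprobust.graph.data import Dataset
from deeprobust.graph.defense import GCN
from deeprobust.graph.global_attack import Metattack

# Load a clean graph together with its train/val/test splits.
data = Dataset(root='/tmp/', name='cora')
adj, features, labels = data.adj, data.features, data.labels
idx_train, idx_val, idx_test = data.idx_train, data.idx_val, data.idx_test
idx_unlabeled = np.union1d(idx_val, idx_test)

# Train a surrogate GCN and use it to poison the graph with Metattack.
surrogate = GCN(nfeat=features.shape[1], nhid=16,
                nclass=labels.max() + 1, with_relu=False, device='cpu')
surrogate.fit(features, adj, labels, idx_train)
attacker = Metattack(model=surrogate, nnodes=adj.shape[0],
                     feature_shape=features.shape, device='cpu')
attacker.attack(features, adj, labels, idx_train, idx_unlabeled,
                n_perturbations=250)
modified_adj = attacker.modified_adj

# Retrain a victim GCN on the poisoned graph and evaluate it.
victim = GCN(nfeat=features.shape[1], nhid=16,
             nclass=labels.max() + 1, device='cpu')
victim.fit(features, modified_adj, labels, idx_train)
victim.test(idx_test)
```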
## 7 Empirical Studies
With the help of DeepRobust, we are able to conduct empirical studies on graph
adversarial attacks and discover their important patterns. Next we first
introduce the experimental settings and then present the empirical results and
findings.
### 7.1 Experimental Setup
Different attack and defense methods have been designed under different
settings. We perform the experiments with one of the most popular settings –
the untargeted poisoning setting. Correspondingly we choose representative
attack and defense methods that have been designed for this setting. Three
representative attack methods are adopted to generate perturbations including
DICE [70], Metattack [91] and Topology attack [76]. It is noted that DICE is a
white-box attack which randomly connects nodes with different labels or drops
edges between nodes sharing the same label. To evaluate the performance of
different defense methods under adversarial attacks, we compare the robustness
of the natural trained GCN [34] and four defense methods on those attacked
graphs, i.e., GCN [34], GCN-Jaccard [72], GCN-SVD [21], RGCN [87] and GAT
[62]. Following [91], we use three datasets: Cora, Citeseer and Polblogs. For
each dataset, we randomly choose 10% of nodes for training, 10% of nodes for
validation and the remaining 80% for test. We repeat each experiment for 5
times and report the average performance. On Cora and Citeseer datasets, the
most destructive variant CE-min-max [76] is adopted to implement Topology
attack. Since CE-min-max does not converge on the Polblogs dataset, we adopt
another variant called CE-PGD [76] on that dataset.
### 7.2 Analysis on Attacked Graph
One way to understand the behaviors of attacking methods is to compare the
properties of the clean graph and the attacked graph. In this subsection, we
perform this analysis from both global and local perspectives.

Global Measure. We have collected five global properties from both clean
graphs and perturbed graphs generated by the three attacks on the three
datasets. These properties include the number of added edges, the number of
deleted edges, the total number of edges, the rank of the adjacency matrix,
and the clustering coefficient. We only show the results of Metattack in
Table 3. Results for Topology attack and DICE can be found in Appendix A.1.
Note that we vary the perturbation rate from $0$ to $25\%$ with a step of
$5\%$, where $0\%$ perturbation denotes the original clean graph. The
following can be observed from the table:
* •
Attackers favor adding edges over deleting edges.
* •
Attacks are likely to increase the rank of the adjacency matrix.
* •
Attacks are likely to reduce the connectivity of a graph. The clustering
coefficients of a perturbed graph decrease with the increase of the
perturbation rate.
Table 3: Properties of attacked graphs under Metattack. Note that $r$ denotes
the perturbation rate and $0\%$ perturbation indicates the original clean graph.

Dataset | $r$ (%) | edge+ | edge- | edges | rank | clustering coefficient
---|---|---|---|---|---|---
Cora | 0 | 0 | 0 | 5069 | 2192 | 0.2376
 | 5 | 226 | 27 | 5268 | 2263 | 0.2228
 | 10 | 408 | 98 | 5380 | 2278 | 0.2132
 | 15 | 604 | 156 | 5518 | 2300 | 0.2071
 | 20 | 788 | 245 | 5633 | 2305 | 0.1983
 | 25 | 981 | 287 | 5763 | 2321 | 0.1943
Citeseer | 0 | 0 | 0 | 3668 | 1778 | 0.1711
 | 5 | 181 | 2 | 3847 | 1850 | 0.1616
 | 10 | 341 | 25 | 3985 | 1874 | 0.1565
 | 15 | 485 | 65 | 4089 | 1890 | 0.1523
 | 20 | 614 | 119 | 4164 | 1902 | 0.1483
 | 25 | 743 | 174 | 4236 | 1888 | 0.1467
Polblogs | 0 | 0 | 0 | 16714 | 1060 | 0.3203
 | 5 | 732 | 103 | 17343 | 1133 | 0.2719
 | 10 | 1347 | 324 | 17737 | 1170 | 0.2825
 | 15 | 1915 | 592 | 18038 | 1193 | 0.2851
 | 20 | 2304 | 1038 | 17980 | 1193 | 0.2877
 | 25 | 2500 | 1678 | 17536 | 1197 | 0.2723
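The global measures in Table 3 can be recomputed directly from the clean and perturbed adjacency matrices; a minimal sketch, assuming dense symmetric numpy arrays:

```python
import networkx as nx
import numpy as np

def global_properties(clean_adj, perturbed_adj):
    """Recompute the global measures of Table 3: edges added and deleted
    relative to the clean graph, total edge count, adjacency-matrix rank,
    and average clustering coefficient of the perturbed graph."""
    diff = perturbed_adj - clean_adj
    added = int((diff > 0).sum()) // 2     # each undirected edge counted twice
    deleted = int((diff < 0).sum()) // 2
    edges = int(perturbed_adj.sum()) // 2
    rank = int(np.linalg.matrix_rank(perturbed_adj))
    clustering = nx.average_clustering(nx.from_numpy_array(perturbed_adj))
    return added, deleted, edges, rank, clustering
```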
Local Measure. We have also studied two local properties including the feature
similarity and label equality between two nodes connected by three kinds of
edges: the newly added edges, the deleted edges and the normal edges which
have not been changed by the attack methods. Since features are binary in our
datasets, we use the Jaccard similarity as the measure of feature similarity.
For label equality, we report the fractions of node pairs that share the same
label and that have different labels. The feature similarity and label
equality results are
demonstrated in Figures 2 and 3, respectively. We show the results for
Metattack with $5\%$ perturbations. Results for Topology attack and DICE can
be found in Appendix A.2. Note that we do not have feature similarity results
on Polblogs since this dataset does not have node features. We can make the
following observations from the figures.
* •
Attackers tend to connect nodes with dissimilar features and different labels.
As shown in Figure 2 and Figure 3, most of the added edges connect nodes with
very dissimilar features and different labels.
* •
Attackers tend to remove edges between nodes which share similar features and
the same label. As shown in Figure 2 and Figure 3, most of the deleted edges
originally connect nodes sharing similar features and the same label.
Figure 2: Node feature similarity for Metattack. Panels: (a) Cora, (b) Citeseer.
Figure 3: Label equality for Metattack. Panels: (a) Cora, (b) Citeseer, (c) Polblogs.
### 7.3 Attack and Defense Performance
In this subsection, we study how the attack methods perform and whether the
defense methods can help resist attacks. Similarly, we vary the
perturbation rates from $0$ to $25\%$ with a step of $5\%$. The results are
demonstrated in Table 4. We show the performance for Metattack. Results for
Topology attack and DICE are shown in Appendix A.3. Note that we do not report
the performance of the Jaccard defense model on Polblogs since this model
requires node features and Polblogs does not provide them. According to the
results, we have the following observations:
* •
With the increase of the perturbations, the performance of GCN dramatically
decreases. This result suggests that Metattack can lead to a significant
reduction in the accuracy of the GCN model.
* •
When the perturbations are small, we observe only a small performance
reduction for the defense methods, which suggests their effectiveness.
However, when the graphs are heavily poisoned, their performance also drops
significantly, which indicates that further efforts are needed to defend
against heavy poisoning attacks.
Table 4: Performance (accuracy) under Metattack with perturbation rate $r$ (%)

Dataset | Method | 0 | 5 | 10 | 15 | 20 | 25
---|---|---|---|---|---|---|---
Cora | GCN | 83.10 | 76.69 | 65.58 | 54.88 | 48.66 | 38.44
 | Jaccard$^{1}$ | 82.39 | 81.02 | 77.28 | 72.74 | 69.16 | 64.56
 | SVD$^{2}$ | 77.97 | 75.67 | 70.51 | 64.34 | 55.89 | 45.92
 | RGCN | 84.81 | 81.32 | 72.12 | 60.25 | 49.75 | 37.76
 | GAT | 81.69 | 74.75 | 61.69 | 52.56 | 45.30 | 38.52
Citeseer | GCN | 74.53 | 72.59 | 63.96 | 61.66 | 50.58 | 44.32
 | Jaccard$^{1}$ | 74.82 | 73.60 | 73.50 | 72.80 | 72.97 | 72.53
 | SVD$^{2}$ | 70.32 | 71.30 | 67.58 | 63.86 | 56.91 | 45.28
 | RGCN | 74.41 | 72.68 | 71.15 | 69.38 | 67.93 | 67.24
 | GAT | 74.23 | 72.01 | 67.12 | 57.70 | 47.97 | 38.70
Polblogs | GCN | 95.80 | 73.93 | 72.07 | 67.69 | 62.29 | 52.97
 | SVD$^{2}$ | 94.99 | 82.64 | 71.27 | 66.09 | 61.37 | 52.82
 | RGCN | 95.60 | 72.01 | 67.12 | 57.70 | 47.97 | 38.70
 | GAT | 95.40 | 84.83 | 77.03 | 69.94 | 53.62 | 53.76

* 1
Jaccard: GCN-Jaccard defense model.
* 2
SVD: GCN-SVD defense model.
## 8 Conclusion & Future Directions
In this survey, we give a comprehensive overview of an emerging research
field, adversarial attacks and defenses on graph data. We investigate the
taxonomy of graph adversarial attacks, and review representative adversarial
attacks and the corresponding countermeasures. Furthermore, we conduct an
empirical study to show how different defense methods behave under different
attacks, as well as the changes the attacks cause in important graph
properties. Through this comprehensive study, we have gained a deep
understanding of this area, which enables us to discuss some promising
research directions.
* •
Imperceptible perturbation measure. Different from image data, humans cannot
easily tell whether a perturbation on a graph is imperceptible or not. The
$\ell_{0}$ norm constraint on perturbations is definitely not enough.
Currently only a few existing works study this problem; thus finding concise
perturbation evaluation measures is of great urgency.
* •
Different graph data. Existing works mainly focus on static graphs with node
attributes. Complex graphs such as graphs with edge attributes and dynamic
graphs are not well-studied yet.
* •
Existence and transferability of graph adversarial examples. There are only a
few works discussing the existence and transferability of graph adversarial
examples. Studying this topic is important for understanding graph learning
algorithms and thus helps us build robust models.
* •
Scalability. The high complexity of attack methods has hindered their use in
practical applications. However, there are only a few works developing attack
methods that are efficient in terms of time complexity. Furthermore, given
that most attack algorithms are gradient-based, how to reduce their memory complexity
also remains a challenge.
## Acknowledgments
This research is supported by the National Science Foundation (NSF) under
grant numbers CNS1815636, IIS1928278, IIS1714741, IIS1845081, IIS1907704, and
IIS1955285.
## References
* [1] L. A. Adamic and N. Glance. The political blogosphere and the 2004 us election: divided they blog. In Proceedings of the 3rd international workshop on Link discovery, pages 36–43, 2005.
* [2] C. C. Aggarwal, H. Wang, et al. Managing and mining graph data, volume 40. Springer, 2010.
* [3] P. W. Battaglia, J. B. Hamrick, V. Bapst, A. Sanchez-Gonzalez, V. Zambaldi, M. Malinowski, A. Tacchetti, D. Raposo, A. Santoro, R. Faulkner, et al. Relational inductive biases, deep learning, and graph networks. arXiv preprint arXiv:1806.01261, 2018.
* [4] A. Bojchevski and S. Günnemann. Deep gaussian embedding of graphs: Unsupervised inductive learning via ranking. arXiv preprint arXiv:1707.03815, 2017.
* [5] A. Bojchevski and S. Günnemann. Adversarial attacks on node embeddings via graph poisoning. arXiv preprint arXiv:1809.01093, 2018.
* [6] A. Bojchevski and S. Günnemann. Certifiable robustness to graph perturbations. In Advances in Neural Information Processing Systems, pages 8317–8328, 2019.
* [7] A. Bojchevski, J. Klicpera, and S. Günnemann. Efficient robustness certificates for discrete data: Sparsity-aware randomized smoothing for graphs, images and more. arXiv preprint arXiv:2008.12952, 2020.
* [8] A. Bordes, N. Usunier, A. Garcia-Duran, J. Weston, and O. Yakhnenko. Translating embeddings for modeling multi-relational data. In Advances in neural information processing systems, pages 2787–2795, 2013.
* [9] H. Chang, Y. Rong, T. Xu, W. Huang, H. Zhang, P. Cui, W. Zhu, and J. Huang. The general black-box attack method for graph neural networks. arXiv preprint arXiv:1908.01297, 2019.
* [10] J. Chen, L. Chen, Y. Chen, M. Zhao, S. Yu, Q. Xuan, and X. Yang. Ga-based q-attack on community detection. IEEE Transactions on Computational Social Systems, 6(3):491–503, 2019.
* [11] J. Chen, Z. Shi, Y. Wu, X. Xu, and H. Zheng. Link prediction adversarial attack. arXiv preprint arXiv:1810.01110, 2018.
* [12] J. Chen, Y. Wu, X. Lin, and Q. Xuan. Can adversarial network attack be defended? arXiv preprint arXiv:1903.05994, 2019.
* [13] J. Chen, Y. Wu, X. Xu, Y. Chen, H. Zheng, and Q. Xuan. Fast gradient attack on network embedding. arXiv preprint arXiv:1809.02797, 2018.
* [14] X. Chen, C. Liu, B. Li, K. Lu, and D. Song. Targeted backdoor attacks on deep learning systems using data poisoning. arXiv preprint arXiv:1712.05526, 2017.
* [15] Y. Chen, Y. Nadji, A. Kountouras, F. Monrose, R. Perdisci, M. Antonakakis, and N. Vasiloglou. Practical attacks against graph-based clustering. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, pages 1125–1142, 2017.
* [16] J. M. Cohen, E. Rosenfeld, and J. Z. Kolter. Certified adversarial robustness via randomized smoothing. arXiv preprint arXiv:1902.02918, 2019.
* [17] H. Dai, H. Li, T. Tian, X. Huang, L. Wang, J. Zhu, and L. Song. Adversarial attack on graph structured data. arXiv preprint arXiv:1806.02371, 2018.
* [18] Q. Dai, X. Shen, L. Zhang, Q. Li, and D. Wang. Adversarial training methods for network embedding. In The World Wide Web Conference, pages 329–339, 2019.
* [19] Z. Deng, Y. Dong, and J. Zhu. Batch virtual adversarial training for graph convolutional networks. arXiv preprint arXiv:1902.09192, 2019.
* [20] Y. Dou, G. Ma, P. S. Yu, and S. Xie. Robust spammer detection by nash reinforcement learning. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 924–933, 2020.
* [21] N. Entezari, S. A. Al-Sayouri, A. Darvishzadeh, and E. E. Papalexakis. All you need is low (rank) defending against adversarial attacks on graphs. In Proceedings of the 13th International Conference on Web Search and Data Mining, pages 169–177, 2020.
* [22] F. Feng, X. He, J. Tang, and T.-S. Chua. Graph adversarial training: Dynamically regularizing based on graph structure. IEEE Transactions on Knowledge and Data Engineering, 2019.
* [23] L. Franceschi, M. Niepert, M. Pontil, and X. He. Learning discrete structures for graph neural networks. arXiv preprint arXiv:1903.11960, 2019.
* [24] J. Gilmer, S. S. Schoenholz, P. F. Riley, O. Vinyals, and G. E. Dahl. Neural message passing for quantum chemistry. arXiv preprint arXiv:1704.01212, 2017.
* [25] I. J. Goodfellow, J. Shlens, and C. Szegedy. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014.
* [26] W. L. Hamilton, R. Ying, and J. Leskovec. Representation learning on graphs: Methods and applications. arXiv preprint arXiv:1709.05584, 2017.
* [27] X. Huang, J. Li, and X. Hu. Label informed attributed network embedding. In Proceedings of the Tenth ACM International Conference on Web Search and Data Mining, pages 731–739, 2017.
* [28] V. N. Ioannidis, D. Berberidis, and G. B. Giannakis. Graphsac: Detecting anomalies in large-scale graphs. arXiv preprint arXiv:1910.09589, 2019.
* [29] G. Jeh and J. Widom. Scaling personalized web search. In Proceedings of the 12th international conference on World Wide Web, pages 271–279, 2003.
* [30] J. Jia, B. Wang, X. Cao, and N. Z. Gong. Certified robustness of community detection against adversarial structural perturbation via randomized smoothing. arXiv preprint arXiv:2002.03421, 2020.
* [31] B. Jiang, Z. Zhang, D. Lin, J. Tang, and B. Luo. Semi-supervised learning with graph learning-convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 11313–11320, 2019.
* [32] H. Jin and X. Zhang. Latent adversarial training of graph convolution networks. In ICML Workshop on Learning and Reasoning with Graph-Structured Representations, 2019.
* [33] W. Jin, Y. Ma, X. Liu, X. Tang, S. Wang, and J. Tang. Graph structure learning for robust graph neural networks. arXiv preprint arXiv:2005.10203, 2020.
* [34] T. N. Kipf and M. Welling. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907, 2016.
* [35] T. N. Kipf and M. Welling. Variational graph auto-encoders. arXiv preprint arXiv:1611.07308, 2016.
* [36] J. Klicpera, A. Bojchevski, and S. Günnemann. Predict then propagate: Graph neural networks meet personalized pagerank. arXiv preprint arXiv:1810.05997, 2018.
* [37] L. Landrieu and M. Simonovsky. Large-scale point cloud semantic segmentation with superpoint graphs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4558–4567, 2018.
* [38] J. Li, H. Zhang, Z. Han, Y. Rong, H. Cheng, and J. Huang. Adversarial attack on community detection by hiding individuals, 2020.
* [39] Y. Li, S. Bai, C. Xie, Z. Liao, X. Shen, and A. Yuille. Regional homogeneity: Towards learning transferable universal adversarial perturbations against defenses. arXiv preprint arXiv:1904.00979, 2019.
* [40] Y. Li, S. Bai, Y. Zhou, C. Xie, Z. Zhang, and A. L. Yuille. Learning transferable adversarial examples via ghost networks. In AAAI, pages 11458–11465, 2020.
* [41] Y. Li, W. Jin, H. Xu, and J. Tang. Deeprobust: A pytorch library for adversarial attacks and defenses. arXiv preprint arXiv:2005.06149, 2020.
* [42] Y. Lin, Z. Liu, M. Sun, Y. Liu, and X. Zhu. Learning entity and relation embeddings for knowledge graph completion. In Twenty-ninth AAAI conference on artificial intelligence, 2015\.
* [43] X. Liu, S. Si, X. Zhu, Y. Li, and C.-J. Hsieh. A unified framework for data poisoning attack to graph-based semi-supervised learning. arXiv preprint arXiv:1910.14147, 2019.
* [44] J. Ma, S. Ding, and Q. Mei. Black-box adversarial attacks on graph neural networks with limited node access. arXiv preprint arXiv:2006.05057, 2020.
* [45] T. Ma, C. Xiao, J. Zhou, and F. Wang. Drug similarity integration through attentive multi-view graph auto-encoders. arXiv preprint arXiv:1804.10850, 2018.
* [46] Y. Ma, S. Wang, L. Wu, and J. Tang. Attacking graph convolutional networks via rewiring. arXiv preprint arXiv:1906.03750, 2019.
* [47] A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu. Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083, 2017.
* [48] D. Marcheggiani and I. Titov. Encoding sentences with graph convolutional networks for semantic role labeling. arXiv preprint arXiv:1703.04826, 2017.
* [49] M. McPherson, L. Smith-Lovin, and J. M. Cook. Birds of a feather: Homophily in social networks. Annual review of sociology, 27(1):415–444, 2001.
* [50] V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, and M. Riedmiller. Playing atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602, 2013.
* [51] B. Perozzi, R. Al-Rfou, and S. Skiena. Deepwalk: Online learning of social representations. In Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 701–710, 2014.
* [52] T. Pham, T. Tran, D. Phung, and S. Venkatesh. Column networks for collective classification. arXiv preprint arXiv:1609.04508, 2016.
* [53] J. Qiu, Y. Dong, H. Ma, J. Li, K. Wang, and J. Tang. Network embedding as matrix factorization: Unifying deepwalk, line, pte, and node2vec. In Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining, pages 459–467, 2018.
* [54] A. Said, E. W. De Luca, and S. Albayrak. How social relationships affect user similarities.
* [55] P. Sen, G. Namata, M. Bilgic, L. Getoor, B. Galligher, and T. Eliassi-Rad. Collective classification in network data. AI magazine, 29(3):93–93, 2008.
* [56] L. Sun, J. Wang, P. S. Yu, and B. Li. Adversarial attack and defense on graph data: A survey. arXiv preprint arXiv:1812.10528, 2018.
* [57] Y. Sun, S. Wang, X. Tang, T.-Y. Hsieh, and V. Honavar. Node injection attacks on graphs via reinforcement learning. arXiv preprint arXiv:1909.06543, 2019.
* [58] M. Sundararajan, A. Taly, and Q. Yan. Axiomatic attribution for deep networks. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 3319–3328. JMLR. org, 2017.
* [59] J. Tang, M. Qu, M. Wang, M. Zhang, J. Yan, and Q. Mei. Line: Large-scale information network embedding. In Proceedings of the 24th international conference on world wide web, pages 1067–1077, 2015.
* [60] X. Tang, Y. Li, Y. Sun, H. Yao, P. Mitra, and S. Wang. Transferring robustness for graph neural network against poisoning attacks. In Proceedings of the 13th International Conference on Web Search and Data Mining, pages 600–608, 2020.
* [61] S. Tao, H. Shen, Q. Cao, L. Hou, and X. Cheng. Adversarial immunization for improving certifiable robustness on graphs. arXiv preprint arXiv:2007.09647, 2020.
* [62] P. Veličković, G. Cucurull, A. Casanova, A. Romero, P. Lio, and Y. Bengio. Graph attention networks. arXiv preprint arXiv:1710.10903, 2017.
* [63] P. Veličković, G. Cucurull, A. Casanova, A. Romero, P. Lio, and Y. Bengio. Graph attention networks. In ICLR, 2018.
* [64] B. Wang and N. Z. Gong. Attacking graph-based classification via manipulating the graph structure. In Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security, pages 2023–2040, 2019.
* [65] B. Wang, J. Jia, X. Cao, and N. Z. Gong. Certified robustness of graph neural networks against adversarial structural perturbation. arXiv preprint arXiv:2008.10715, 2020.
* [66] B. Wang, Y. Yao, S. Shan, H. Li, B. Viswanath, H. Zheng, and B. Y. Zhao. Neural cleanse: Identifying and mitigating backdoor attacks in neural networks. In 2019 IEEE Symposium on Security and Privacy (SP), pages 707–723. IEEE, 2019.
* [67] J. Wang, M. Luo, F. Suya, J. Li, Z. Yang, and Q. Zheng. Scalable attack on graph data by injecting vicious nodes. arXiv preprint arXiv:2004.13825, 2020.
* [68] X. Wang, H. Ji, C. Shi, B. Wang, Y. Ye, P. Cui, and P. S. Yu. Heterogeneous graph attention network. In The World Wide Web Conference, pages 2022–2032, 2019.
* [69] X. Wang, X. Liu, and C.-J. Hsieh. Graphdefense: Towards robust graph convolutional networks, 2019.
* [70] M. Waniek, T. P. Michalak, M. J. Wooldridge, and T. Rahwan. Hiding individuals and communities in a social network. Nature Human Behaviour, 2(2):139–147, Jan 2018.
* [71] M. Waniek, T. P. Michalak, M. J. Wooldridge, and T. Rahwan. Hiding individuals and communities in a social network. Nature Human Behaviour, 2(2):139–147, 2018.
* [72] H. Wu, C. Wang, Y. Tyshetskiy, A. Docherty, K. Lu, and L. Zhu. Adversarial examples for graph data: deep insights into attack and defense. In Proceedings of the 28th International Joint Conference on Artificial Intelligence, pages 4816–4823. AAAI Press, 2019.
* [73] Z. Wu, S. Pan, F. Chen, G. Long, C. Zhang, and P. S. Yu. A comprehensive survey on graph neural networks. arXiv preprint arXiv:1901.00596, 2019.
* [74] Z. Xi, R. Pang, S. Ji, and T. Wang. Graph backdoor. arXiv preprint arXiv:2006.11890, 2020.
* [75] H. Xu, Y. Ma, H. Liu, D. Deb, H. Liu, J. Tang, and A. Jain. Adversarial attacks and defenses in images, graphs and text: A review. arXiv preprint arXiv:1909.08072, 2019.
* [76] K. Xu, H. Chen, S. Liu, P.-Y. Chen, T.-W. Weng, M. Hong, and X. Lin. Topology attack and defense for graph neural networks: An optimization perspective. arXiv preprint arXiv:1906.04214, 2019.
* [77] K. Xu, C. Li, Y. Tian, T. Sonobe, K.-i. Kawarabayashi, and S. Jegelka. Representation learning on graphs with jumping knowledge networks. arXiv preprint arXiv:1806.03536, 2018.
* [78] X. Xu, Y. Yu, B. Li, L. Song, C. Liu, and C. Gunter. Characterizing malicious edges targeting on graph neural networks. 2018.
* [79] X. Zang, Y. Xie, J. Chen, and B. Yuan. Graph universal adversarial attacks: A few bad actors ruin graph learning models, 2020.
* [80] A. Zhang and J. Ma. Defensevgae: Defending against adversarial attacks on graph data via a variational graph autoencoder. arXiv preprint arXiv:2006.08900, 2020.
* [81] H. Zhang, T. Zheng, J. Gao, C. Miao, L. Su, Y. Li, and K. Ren. Towards data poisoning attack against knowledge graph embedding. arXiv preprint arXiv:1904.12052, 2019.
* [82] X. Zhang and M. Zitnik. Gnnguard: Defending graph neural networks against adversarial attacks. arXiv preprint arXiv:2006.08149, 2020.
* [83] Z. Zhang, J. Jia, B. Wang, and N. Z. Gong. Backdoor attacks to graph neural networks. arXiv preprint arXiv:2006.11165, 2020.
* [84] J. Zhou, G. Cui, Z. Zhang, C. Yang, Z. Liu, and M. Sun. Graph neural networks: A review of methods and applications. arXiv preprint arXiv:1812.08434, 2018.
* [85] Q. Zhou, L. Li, N. Cao, L. Ying, and H. Tong. Admiring: Adversarial multi-network mining. In 2019 IEEE International Conference on Data Mining (ICDM), pages 1522–1527. IEEE, 2019.
* [86] Q. Zhou, Y. Ren, T. Xia, L. Yuan, and L. Chen. Data poisoning attacks on graph convolutional matrix completion. In Algorithms and Architectures for Parallel Processing, 2020.
* [87] D. Zhu, Z. Zhang, P. Cui, and W. Zhu. Robust graph convolutional networks against adversarial attacks. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 1399–1407, 2019.
* [88] X. Zhu and Z. Ghahramani. Learning from labeled and unlabeled data with label propagation. 2002.
* [89] D. Zügner, A. Akbarnejad, and S. Günnemann. Adversarial attacks on neural networks for graph data. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 2847–2856, 2018.
* [90] D. Zügner, O. Borchert, A. Akbarnejad, and S. Guennemann. Adversarial attacks on graph neural networks: Perturbations and their patterns. ACM Transactions on Knowledge Discovery from Data (TKDD), 14(5):1–31, 2020.
* [91] D. Zügner and S. Günnemann. Adversarial attacks on graph neural networks via meta learning. arXiv preprint arXiv:1902.08412, 2019.
* [92] D. Zügner and S. Günnemann. Certifiable robustness and robust training for graph convolutional networks. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 246–256, 2019.
* [93] D. Zügner and S. Günnemann. Certifiable robustness of graph convolutional networks under structure perturbations. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 1656–1665, 2020.
## Appendix A Additional Results
### A.1 Global Measures for Topology Attack and DICE
The global measures for Topology attack are shown in Table 5; the global
measures for DICE are shown in Table 6. A minimal sketch for computing these
measures is given after Table 6.
Table 5: Properties of attacked graphs under Topology attack.
Dataset | $r$(%) | edges+ | edges- | edges | ranks | clustering coefficient
---|---|---|---|---|---|---
Cora | 0 | 0 | 0 | 5069 | 2192 | 0.2376
| 5 | 255 | 0 | 5324 | 2292 | 0.2308
| 10 | 508 | 0 | 5577 | 2369 | 0.2185
| 15 | 762 | 0 | 5831 | 2417 | 0.2029
| 20 | 1015 | 0 | 6084 | 2442 | 0.1875
| 25 | 1269 | 0 | 6338 | 2456 | 0.1736
Citeseer | 0 | 0 | 0 | 3668 | 1778 | 0.1711
| 5 | 185 | 0 | 3853 | 1914 | 0.1666
| 10 | 368 | 0 | 4036 | 2003 | 0.1568
| 15 | 552 | 0 | 4220 | 2058 | 0.1429
| 20 | 735 | 0 | 4403 | 2077 | 0.1306
| 25 | 918 | 0 | 4586 | 2087 | 0.1188
Polblogs | 0 | 0 | 0 | 16714 | 1060 | 0.3203
| 5 | 716 | 96 | 17334 | 1213 | 0.2659
| 10 | 1532 | 128 | 18118 | 1220 | 0.2513
| 15 | 2320 | 146 | 18887 | 1221 | 0.2408
| 20 | 3149 | 155 | 19708 | 1221 | 0.2317
| 25 | 3958 | 163 | 20509 | 1221 | 0.2238
Table 6: Properties of attacked graphs under DICE attack.
Dataset | $r$(%) | edges+ | edges- | edges | ranks | clustering coefficient
---|---|---|---|---|---|---
Cora | 0 | 0 | 0 | 5069 | 2192 | 0.2376
| 5 | 125 | 128 | 5066 | 2208 | 0.2165
| 10 | 251 | 255 | 5065 | 2241 | 0.1963
| 15 | 377 | 383 | 5063 | 2256 | 0.1768
| 20 | 504 | 509 | 5063 | 2262 | 0.1588
| 25 | 625 | 642 | 5053 | 2271 | 0.1436
Citeseer | 0 | 0 | 0 | 3668 | 1798 | 0.1711
| 5 | 91 | 92 | 3667 | 1829 | 0.1574
| 10 | 183 | 183 | 3668 | 1843 | 0.1404
| 15 | 276 | 274 | 3670 | 1858 | 0.1279
| 20 | 368 | 365 | 3672 | 1872 | 0.1170
| 25 | 462 | 455 | 3675 | 1871 | 0.1068
Polblogs | 0 | 0 | 0 | 16714 | 1060 | 0.3203
| 5 | 420 | 415 | 16719 | 1151 | 0.2742
| 10 | 846 | 825 | 16736 | 1191 | 0.2341
| 15 | 1273 | 1234 | 16752 | 1206 | 0.2077
| 20 | 1690 | 1652 | 16752 | 1214 | 0.1862
| 25 | 2114 | 2064 | 16765 | 1216 | 0.1675
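The tables above report three global graph statistics. A minimal sketch for
computing them follows; it is our illustration (not the authors' measurement
code), using networkx and numpy, and the random graph at the end is only a
stand-in for Cora/Citeseer/Polblogs.

```python
# Minimal sketch: the global measures of Tables 5-6 for a (possibly
# attacked) graph: edge count, adjacency-matrix rank, average clustering.
import networkx as nx
import numpy as np

def global_measures(G: nx.Graph) -> dict:
    A = nx.to_numpy_array(G)                     # dense adjacency matrix
    return {
        "edges": G.number_of_edges(),            # |E| after the attack
        "rank": int(np.linalg.matrix_rank(A)),   # rank of the adjacency matrix
        "clustering": nx.average_clustering(G),  # mean local clustering coeff.
    }

# Stand-in graph; in the paper these measures are taken on the attacked
# versions of Cora, Citeseer and Polblogs.
G = nx.erdos_renyi_graph(n=100, p=0.05, seed=0)
print(global_measures(G))
```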
### A.2 Local Measures for Topology Attack and DICE
The node feature similarity and label equality for Topology attack are shown
in Figure 4 and Figure 6, respectively; the node feature similarity and label
equality for DICE are shown in Figure 5 and Figure 7, respectively.
Figure 4: Node feature similarity for Topology attack. (a) Cora; (b) Citeseer.
Figure 5: Node feature similarity for DICE attack. (a) Cora; (b) Citeseer.
Figure 6: Label equality for Topology attack. (a) Cora; (b) Citeseer; (c) Polblogs.
Figure 7: Label equality for DICE attack. (a) Cora; (b) Citeseer; (c) Polblogs.
### A.3 Attack and Defense Performance for Topology Attack and DICE
The attack and defense performance for Topology attack is shown in Table 7;
the performance for DICE is shown in Table 8.
Table 7: Performance (accuracy) under Topology attack; columns give the perturbation rate $r$ (%).
Dataset | Model | 0 | 5 | 10 | 15 | 20 | 25
---|---|---|---|---|---|---|---
Cora | GCN | 83.10 | 71.82 | 68.96 | 66.77 | 64.21 | 62.52
| Jaccard¹ | 82.39 | 73.05 | 72.62 | 71.84 | 71.41 | 70.85
| SVD² | 77.97 | 78.17 | 75.92 | 73.69 | 72.03 | 70.11
| RGCN | 84.81 | 72.68 | 71.15 | 69.38 | 67.92 | 67.23
| GAT | 81.69 | 71.03 | 68.80 | 65.66 | 64.29 | 62.58
Citeseer | GCN | 74.53 | 79.29 | 75.47 | 72.89 | 70.12 | 68.49
| Jaccard¹ | 74.82 | 79.07 | 76.76 | 74.29 | 71.87 | 69.55
| SVD² | 70.32 | 78.17 | 75.92 | 73.69 | 72.03 | 70.11
| RGCN | 74.41 | 78.13 | 75.93 | 73.93 | 72.32 | 70.60
| GAT | 74.23 | 77.52 | 74.09 | 71.90 | 69.62 | 66.99
Polblogs | GCN | 95.80 | 72.04 | 65.87 | 63.35 | 61.06 | 58.49
| SVD² | 94.99 | 71.90 | 65.42 | 63.01 | 60.74 | 58.26
| RGCN | 95.60 | 71.27 | 65.30 | 62.76 | 60.25 | 57.89
| GAT | 95.40 | 72.56 | 65.97 | 63.35 | 60.94 | 58.77
¹ Jaccard: GCN-Jaccard defense model.
² SVD: GCN-SVD defense model.
Table 8: Performance (accuracy) under DICE attack; columns give the perturbation rate $r$ (%).
Dataset | Model | 0 | 5 | 10 | 15 | 20 | 25
---|---|---|---|---|---|---|---
Cora | GCN | 83.10 | 81.56 | 80.28 | 78.64 | 77.10 | 75.17
| Jaccard¹ | 82.39 | 80.91 | 79.80 | 79.23 | 77.81 | 76.35
| SVD² | 77.97 | 75.24 | 73.33 | 70.92 | 69.47 | 67.29
| RGCN | 84.81 | 83.53 | 82.56 | 80.70 | 79.30 | 77.89
| GAT | 81.69 | 78.86 | 77.50 | 74.56 | 70.54 | 69.25
Citeseer | GCN | 74.53 | 74.41 | 73.61 | 71.98 | 71.47 | 69.92
| Jaccard¹ | 74.82 | 74.73 | 74.11 | 72.88 | 72.36 | 71.32
| SVD² | 70.32 | 70.68 | 69.89 | 68.59 | 67.09 | 66.65
| RGCN | 74.41 | 74.46 | 73.93 | 72.37 | 71.61 | 70.85
| GAT | 74.23 | 73.70 | 72.71 | 70.52 | 69.27 | 67.78
Polblogs | GCN | 95.80 | 89.90 | 85.87 | 82.41 | 79.47 | 78.02
| SVD² | 94.99 | 90.88 | 87.79 | 85.09 | 83.64 | 81.25
| RGCN | 95.60 | 89.86 | 86.11 | 82.25 | 78.81 | 77.18
| GAT | 95.40 | 90.74 | 86.22 | 82.41 | 78.83 | 76.77
¹ Jaccard: GCN-Jaccard defense model.
² SVD: GCN-SVD defense model.
## Appendix B Open Source Code
We list some open source implementations of representative algorithms in Table
9.
Table 9: A summary of open-source implementations.
Type | Methods | Framework | Github Link
---|---|---|---
Attack | PGD, Min-max [76] | tensorflow, pytorch | https://github.com/KaidiXu/GCN_ADV_Train ; https://github.com/DSE-MSU/DeepRobust
| DICE [71] | python | https://github.com/DSE-MSU/DeepRobust
| FGA [13] | pytorch | https://github.com/DSE-MSU/DeepRobust
| IG-FGSM [72] | pytorch | https://github.com/DSE-MSU/DeepRobust
| Nettack [89] | tensorflow | https://github.com/danielzuegner/nettack
| Metattack [91] | tensorflow, pytorch | https://github.com/danielzuegner/gnn-meta-attack ; https://github.com/ChandlerBang/pytorch-gnn-meta-attack
| RL-S2V [17] | pytorch | https://github.com/Hanjun-Dai/graph_adversarial_attack
| Bojchevski et al. [5] | tensorflow | https://github.com/abojchevski/node_embedding_attack
| GF-Attack [9] | tensorflow | https://github.com/SwiftieH/GFAttack
Defense | RGCN [87] | tensorflow, pytorch | https://github.com/thumanlab/nrlweb/blob/master/static/assets/download/RGCN.zip ; https://github.com/DSE-MSU/DeepRobust
| GCN-Jaccard [72] | pytorch | https://github.com/DSE-MSU/DeepRobust
| GCN-SVD [21] | pytorch | https://github.com/DSE-MSU/DeepRobust
| Pro-GNN [33] | pytorch | https://github.com/ChandlerBang/Pro-GNN
| Adversarial Training [76] | tensorflow, pytorch | https://github.com/KaidiXu/GCN_ADV_Train ; https://github.com/DSE-MSU/DeepRobust
| PA-GNN [60] | tensorflow | https://github.com/tangxianfeng/PA-GNN
| Graph-Cert [6] | python | https://github.com/abojchevski/graph_cert
| DefenseVGAE [80] | pytorch | https://github.com/zhangao520/defense-vgae
| Bojchevski et al. [7] | pytorch | https://github.com/abojchevski/sparse_smoothing
| Zügner et al. [93] | python | https://github.com/danielzuegner/robust-gcn-structure
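To complement Table 9, here is a hedged sketch of the typical workflow these
repositories support, using DeepRobust [41] as the example. The class and
method names follow that library's README and should be treated as
assumptions that may drift across versions.

```python
# Hedged sketch (assumed API, per the DeepRobust README [41]; verify against
# the installed version): load a clean graph and train a GCN victim model.
from deeprobust.graph.data import Dataset
from deeprobust.graph.defense import GCN

data = Dataset(root='/tmp/', name='cora')        # Cora with a standard split
adj, features, labels = data.adj, data.features, data.labels
idx_train, idx_val, idx_test = data.idx_train, data.idx_val, data.idx_test

model = GCN(nfeat=features.shape[1], nhid=16,    # hidden width is illustrative
            nclass=int(labels.max()) + 1)
model.fit(features, adj, labels, idx_train, idx_val)  # train on the clean graph
model.test(idx_test)                              # accuracy as in Tables 7-8
```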
# $5$-rank of ambiguous class groups of
quintic Kummer extensions
Fouad ELMOUHIB, Mohamed TALBI, Abdelmalek AZIZI
###### Abstract
Let $k\,=\,\mathbb{Q}(\sqrt[5]{n},\zeta_{5})$, where $n$ is a positive integer,
$5^{th}$ power-free, whose $5$-class group, denoted $C_{k,5}$, is isomorphic to
$\mathbb{Z}/5\mathbb{Z}\times\mathbb{Z}/5\mathbb{Z}$. Let
$k_{0}\,=\,\mathbb{Q}(\zeta_{5})$ be the cyclotomic field containing a
primitive $5^{th}$ root of unity $\zeta_{5}$, and let $C_{k,5}^{(\sigma)}$ be
the group of ambiguous classes under the action of $Gal(k/k_{0})$ =
$\langle\sigma\rangle$. The aim of this paper is to determine all integers $n$
such that the group of ambiguous classes $C_{k,5}^{(\sigma)}$ has rank $1$ or
$2$.
## 1 Introduction
One of the most important problems in number theory is the determination of
the structure of the class group of a number field, and particularly of its
rank. In the case of quadratic fields, Gauss's genus theory determines the
rank of the $2$-class group. In a series of papers (see [References],
[References], [References]), Frank Gerth III proved several results on the
$3$-class groups of pure cubic extensions of $\mathbb{Q}$ and cyclic cubic
extensions of $\mathbb{Q}$. Recently, S. Aouissi, M. C. Ismaili, M. Talbi and
A. Azizi [References] classified all fields $\mathbb{Q}(\sqrt[3]{n},\zeta_{3})$
whose $3$-class group is of type $(9,3)$.
Let $k\,=\,\mathbb{Q}(\sqrt[5]{n},\zeta_{5})$; a number of researchers have
studied the $5$-class group $C_{k,5}$. M. Kulkarni, D. Majumdar and B. Sury
[References] proved results that can be seen as a generalisation of Gerth's
work to the case of an arbitrary odd prime, with particular attention to the
$5$-class group of $k$. In [References], C. Parry gives a formula relating the
class numbers of a pure quintic field $\mathbb{Q}(\sqrt[5]{n})$ and of its
normal closure $k$. In [References], H. Kobayashi proved that if the radicand
$n$ has a prime factor $p$ congruent to $-1$ modulo $5$, then the class number
of the pure quintic field $\Gamma\,=\,\mathbb{Q}(\sqrt[5]{n})$ is a multiple
of $5$, and the class number of $k$ is a multiple of $25$.
Let $n>1$ be a $5^{th}$ power-free integer and
$k\,=\,\mathbb{Q}(\sqrt[5]{n},\zeta_{5})$ be a quintic Kummer extension of the
cyclotomic field $k_{0}\,=\,\mathbb{Q}(\zeta_{5})$. By $C_{k,5}^{(\sigma)}$ we
denote the $5$-group of ambiguous ideal classes under the action of
$Gal(k/k_{0})$ = $\langle\sigma\rangle$, i.e
$C_{k,5}^{(\sigma)}\,=\,\\{\mathcal{A}\,\in\,C_{k,5}|\,\mathcal{A}^{\sigma}\,=\,\mathcal{A}\\}$.
Let $k^{\ast}\,=\,(k/k_{0})^{\ast}$ be the maximal abelian extension of
$k_{0}$ contained in the Hilbert $5$-class field $k_{5}^{(1)}$ of $k$, which
is called the relative $5$-genus field of $k/k_{0}$.
We consider the problem of finding the radicands $n$ of all pure quintic
fields $\Gamma\,=\,\mathbb{Q}(\sqrt[5]{n})$ for which the Galois group
$\operatorname{Gal}(k^{\ast}/k)$ is non-trivial. The present work gives a
complete solution of this problem by characterizing all quintic Kummer
extensions $k/k_{0}$ whose $5$-group of ambiguous ideal classes
$C_{k,5}^{(\sigma)}$ has order $5$ or $25$. This paper can be viewed as a
continuation of the work of M. Kulkarni, D. Majumdar and B. Sury [References].
In fact, we shall prove the following Main Theorem:
###### Theorem 1.1.
Let $\Gamma\,=\,\mathbb{Q}(\sqrt[5]{n})$ be a pure quintic field, where $n>1$
is a $5^{th}$ power-free integer, and let
$k\,=\,\mathbb{Q}(\sqrt[5]{n},\zeta_{5})$ be its normal closure. We assume
that the $5$-class group $C_{k,5}$ is of type $(5,5)$.
(1) If rank $(C_{k,5}^{(\sigma)})\,=\,1$, then the integer $n$ can be written
in one of the following forms:

$n\,=\,\left\{\begin{array}{ll}5^{e}q_{1}^{2}q_{2}\not\equiv\,\pm 1,\pm 7\,(\mathrm{mod}\,25)&\text{ with }q_{1}\text{ or }q_{2}\not\equiv\,\pm 7\,(\mathrm{mod}\,25)\\ 5^{e}p\not\equiv\,\pm 1,\pm 7\,(\mathrm{mod}\,25)&\text{ with }p\,\not\equiv\,-1\,(\mathrm{mod}\,25)\\ 5^{e}q_{1}\not\equiv\,\pm 1,\pm 7\,(\mathrm{mod}\,25)&\text{ with }q_{1}\,\equiv\,\pm 7\,(\mathrm{mod}\,25)\\ p^{e}q_{1}\,\equiv\,\pm 1,\pm 7\,(\mathrm{mod}\,25)&\text{ with }p\,\not\equiv\,-1\,(\mathrm{mod}\,25),\ q_{1}\,\not\equiv\,\pm 7\,(\mathrm{mod}\,25)\\ p^{e}\,\equiv\,\pm 1,\pm 7\,(\mathrm{mod}\,25)&\text{ with }p\,\equiv\,-1\,(\mathrm{mod}\,25)\\ q_{1}^{e_{1}}q_{2}\,\equiv\,\pm 1,\pm 7\,(\mathrm{mod}\,25)&\text{ with }q_{i}\,\equiv\,\pm 7\,(\mathrm{mod}\,25)\end{array}\right.$ (1)

where $p\,\equiv\,-1\,(\mathrm{mod}\,5)$ and $q_{1},q_{2}\,\equiv\,\pm
2\,(\mathrm{mod}\,5)$ are primes and $e,e_{1}$ are integers in
$\{1,2,3,4\}$.
(2) If rank $(C_{k,5}^{(\sigma)})\,=\,2$, then the integer $n$ can be written
in one of the following forms:

$n\,=\,\left\{\begin{array}{ll}5^{e}l\not\equiv\,\pm 1,\pm 7\,(\mathrm{mod}\,25)&\text{ with }l\,\not\equiv\,1\,(\mathrm{mod}\,25)\\ l^{e_{1}}q_{1}\,\equiv\,\pm 1,\pm 7\,(\mathrm{mod}\,25)&\text{ with }l\,\equiv\,1\,(\mathrm{mod}\,5),\ q_{1}\,\equiv\,\pm 2,\pm 3,\pm 7\,(\mathrm{mod}\,25)\\ l^{e_{1}}\,\equiv\,\pm 1,\pm 7\,(\mathrm{mod}\,25)&\text{ with }l\,\equiv\,1\,(\mathrm{mod}\,25)\end{array}\right.$ (2)

where $l\,\equiv\,1\,(\mathrm{mod}\,5)$ and $q_{1}\,\equiv\,\pm
2\,(\mathrm{mod}\,5)$ are primes and $e,e_{1}$ are integers in
$\{1,2,3,4\}$.
This result is underpinned by numerical examples obtained with the
computational number theory system PARI/GP [References] in Section 3.
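To make the congruence conditions of Theorem 1.1 concrete, here is a minimal
Python sketch (our helper, not the paper's PARI/GP script) computing the two
ingredients used throughout the classification.

```python
# Minimal sketch: the two ingredients of Theorem 1.1 for a radicand n,
# namely the residues mod 5 of the prime factors of n and the test
# n = +-1, +-7 (mod 25). It does NOT verify that C_{k,5} has type (5,5);
# that still requires a class-group computation as in Section 3.
from sympy import factorint

def prime_residues_mod5(n: int) -> dict:
    """Map each prime factor of n to its residue mod 5."""
    return {p: p % 5 for p in factorint(n)}

def is_pm1_pm7_mod25(n: int) -> bool:
    return n % 25 in (1, 24, 7, 18)       # +-1, +-7 (mod 25)

# Example: n = 5 * 19 = 95 has the shape 5^e p with p = 19 = -1 (mod 5),
# p != -1 (mod 25), and 95 != +-1, +-7 (mod 25): a rank-1 form of (1).
print(prime_residues_mod5(95), is_pm1_pm7_mod25(95))  # {5: 0, 19: 4} False
```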
Notations.
Throughout this paper, we use the following notations:
* •
The lower case letters $p$, $q$ and $l$ will denote prime numbers such that
$p\,\equiv\,-1\,(\mathrm{mod}\,5)$, $q\,\equiv\,\pm 2\,(\mathrm{mod}\,5)$ and
$l\,\equiv\,1\,(\mathrm{mod}\,5)$.
* •
$\Gamma\,=\,\mathbb{Q}(\sqrt[5]{n})$: a pure quintic field, where $n\neq 1$ is
a $5^{th}$ power-free positive integer.
* •
$k_{0}\,=\,\mathbb{Q}(\zeta_{5})$, the cyclotomic field, where
$\zeta_{5}\,=\,e^{\frac{2i\pi}{5}}$ a primitive $5^{th}$ root of unity.
* •
$k\,=\,\mathbb{Q}(\sqrt[5]{n},\zeta_{5})$: the normal closure of $\Gamma$, a
quintic Kummer extension of $k_{0}$.
* •
$\Gamma^{{}^{\prime}},\,\Gamma^{{}^{\prime\prime}},\,\Gamma^{{}^{\prime\prime\prime}},\,\Gamma^{{}^{\prime\prime\prime\prime}}$:
the four conjugate quintic fields of $\Gamma$ contained in $k$.
* •
$\langle\tau\rangle\,=\,\operatorname{Gal}(k/\Gamma)$ such that
$\tau^{4}\,=\,id,\,\tau^{3}(\zeta_{5})\,=\,\zeta_{5}^{3},\,\tau^{2}(\zeta_{5})\,=\,\zeta_{5}^{4},\,\tau(\zeta_{5})\,=\,\zeta_{5}^{2}$
and $\tau(\sqrt[5]{n})\,=\,\sqrt[5]{n}$.
* •
$\langle\sigma\rangle\,=\,\operatorname{Gal}(k/k_{0})$ such that
$\sigma^{5}\,=\,id$, $\sigma(\zeta_{5})\,=\,\zeta_{5}$ and
$\sigma(\sqrt[5]{n})\,=\,\zeta_{5}\sqrt[5]{n}$,
$\sigma^{2}(\sqrt[5]{n})\,=\,\zeta_{5}^{2}\sqrt[5]{n}$,
$\sigma^{3}(\sqrt[5]{n})\,=\,\zeta_{5}^{3}\sqrt[5]{n}$,
$\sigma^{4}(\sqrt[5]{n})\,=\,\zeta_{5}^{4}\sqrt[5]{n}$.
* •
$\lambda\,=\,1-\zeta_{5}$ is prime element above $5$ of $k_{0}$.
* •
$q^{\ast}\,=\,0,\,1$ or $2$ according to whether $\zeta_{5}$ and $1+\zeta_{5}$
are norms of an element of $k^{*}\,=\,k\setminus\{0\}$ (the precise
definition is recalled in Section 2.1).
* •
$d$: the number of prime ideals of $k_{0}$ ramified in $k$.
* •
For a number field $L$, denote by:
* –
$\mathcal{O}_{L}$: the ring of integers of $L$;
* –
$E_{L}$: the group of units of $L$;
* –
$C_{L}$, $h_{L}$, $C_{L,5}$: the class group, class number, and $5$-class
group of $L$.
* –
$L_{5}^{(1)},L^{\ast}$: the Hilbert $5$-class field of $L$, and the absolute
genus field of $L$.
Figure 1: The subfield lattice of $k/\mathbb{Q}$: the normal closure
$\mathbf{k}$ lies above the five conjugate fields
$\mathbf{\Gamma},\mathbf{\Gamma^{\prime}},\mathbf{\Gamma^{\prime\prime}},\mathbf{\Gamma^{\prime\prime\prime}},\mathbf{\Gamma^{\prime\prime\prime\prime}}$
and the cyclotomic field $\mathbf{k_{0}}$, all containing $\mathbb{Q}$.
## 2 Proof of Main Theorem
###### Theorem 2.1.
(Decomposition in cyclotomic fields)
Let $m$ be a positive integer and $p$ a prime number not dividing $m$, and let
$f$ be the smallest positive integer such that
$p^{f}\,\equiv\,1\,(\mathrm{mod}\,m)$. Then $p$ splits into $\phi(m)/f$
distinct primes in $\mathbb{Q}(\zeta_{m})$, each of which has residue class
degree $f$. In particular, $p$ splits completely if and only if
$p\,\equiv\,1\,(\mathrm{mod}\,m)$.
###### Proof.
see [References] page 14. ∎
###### Corollary 2.1.
Let $p$ be a prime integer. We have:
If $p\,=\,5$, then $\lambda\,=\,1-\zeta_{5}$ is the unique prime over $5$ in
$\mathbb{Q}(\zeta_{5})$.
If $l\,\equiv\,1\,(\mathrm{mod}\,5)$, then $l$ splits completely in
$\mathbb{Q}(\zeta_{5})$ as $l\,=\,\pi_{1}\pi_{2}\pi_{3}\pi_{4}$, where the
$\pi_{i}$ are primes of $\mathbb{Q}(\zeta_{5})$.
If $q\,\equiv\,\pm 2\,(\mathrm{mod}\,5)$, then $q$ is inert in
$\mathbb{Q}(\zeta_{5})$.
If $p\,\equiv\,-1\,(\mathrm{mod}\,5)$, then $p$ splits in
$\mathbb{Q}(\zeta_{5})$ as $p\,=\,\pi_{1}\pi_{2}$, where $\pi_{1},\pi_{2}$ are
primes of $\mathbb{Q}(\zeta_{5})$.
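As an illustration of Theorem 2.1 and Corollary 2.1, the following minimal
sketch (ours, not from the paper) computes the residue degree $f$ and the
number of primes of $\mathbb{Q}(\zeta_{5})$ above a rational prime $p\neq 5$.

```python
# Minimal sketch illustrating Theorem 2.1 / Corollary 2.1 for m = 5:
# f is the least integer with p^f = 1 (mod m), and p splits into
# phi(m)/f distinct primes of Q(zeta_m), each of residue degree f.
from sympy.ntheory import n_order, totient

def splitting_type(p: int, m: int = 5):
    f = n_order(p, m)                 # multiplicative order of p mod m
    return f, totient(m) // f         # (residue degree, number of primes)

print(splitting_type(11))   # (1, 4): 11 = 1 (mod 5) splits completely
print(splitting_type(19))   # (2, 2): 19 = -1 (mod 5) splits as pi_1 pi_2
print(splitting_type(2))    # (4, 1): 2 = +-2 (mod 5) is inert
```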
Before proving the main theorem, we prove the existence of a unique prime
$l\,\equiv\,1\,(\mathrm{mod}\,5)$ dividing the radicand $n$ in the case
rank $(C_{k,5}^{(\sigma)})\,=\,2$.
###### Theorem 2.2.
If rank $(C_{k,5}^{(\sigma)})=2$, then $C_{k,5}\,=\,C_{k,5}^{(\sigma)}$, and
there is a unique prime $l\,\equiv\,1\,(\mathrm{mod}\,5)$ dividing the
radicand $n$. Furthermore, we have $(k/k_{0})^{*}\,=\,k_{5}^{(1)}$.
###### Proof.
If rank $(C_{k,5}^{(\sigma)})=2$, then the order of $C_{k,5}^{(\sigma)}$ is at
least $25$. Since $C_{k,5}^{(\sigma)}\subseteq C_{k,5}$ and $|C_{k,5}|\,=\,25$
because $C_{k,5}$ is of type $(5,5)$, we have $C_{k,5}\,=\,C_{k,5}^{(\sigma)}$,
which means that all ideal classes are ambiguous.
By class field theory, $C_{k,5}^{1-\sigma}$ corresponds to $(k/k_{0})^{*}$ and
$Gal(k_{5}^{(1)}/k)\cong C_{k,5}$. Since $C_{k,5}^{(\sigma)}\,=\,C_{k,5}$, we
get $C_{k,5}^{1-\sigma}\,=\,\{1\}$, hence $(k/k_{0})^{*}\,=\,k_{5}^{(1)}$, and
by [References, Proposition 5.8] we know the Hilbert $5$-class field of $k$
explicitly in this case.
Assume now that no prime $l\,\equiv\,1\,(\mathrm{mod}\,5)$ divides $n$.
We can then write $n$ as
$n\,=\,5^{e}q_{1}^{f_{1}}...q_{r}^{f_{r}}p_{1}^{g_{1}}....p_{s}^{g_{s}}$ with
$q_{i}\,\equiv\,\pm 2\,(\mathrm{mod}\,5)$ and
$p_{j}\,\equiv\,-1\,(\mathrm{mod}\,5)$, where $f_{i}\in\{1,2,3,4\}$ for
$1\leq i\leq r$, $g_{j}\in\{1,2,3,4\}$ for $1\leq j\leq s$, and
$e\in\{0,1,2,3,4\}$. By Corollary 2.1 each $q_{i}$ is inert in $k_{0}$ and
ramified in $\Gamma\,=\,\mathbb{Q}(\sqrt[5]{n})$; likewise each $p_{j}$ splits
in $k_{0}$ as $p_{j}\,=\,\pi_{1}\pi_{2}$, where $\pi_{1},\pi_{2}$ are primes
of $k_{0}$, and $p_{j}$ is ramified in $\Gamma$. Hence the prime ideals
ramified in $k/k_{0}$ are those above the $q_{i}$ and the $\pi_{j}$, together
with the ideal above $\lambda\,=\,1-\zeta_{5}$ if $5$ is ramified in $\Gamma$.
If $\lambda$ is ramified in $k/k_{0}$, we denote by $\mathcal{I}$ the prime
ideal of $k$ above $\lambda$; for $1\leq i\leq r$ we denote by
$\mathcal{Q}_{i}$ the prime ideal of $k$ above $q_{i}$, and for $1\leq j\leq
s$ by $\mathcal{P}_{j}$ the prime ideal of $k$ above $\pi_{j}$, where
$\pi_{j}$ is a prime of $k_{0}$ above $p_{j}$. We have evidently
$\mathcal{I}^{5}\,=\,(\lambda)$, $\mathcal{Q}_{i}^{5}\,=\,(q_{i})$ and
$\mathcal{P}_{j}^{5}\,=\,(\pi_{j})$ in $k$.
We denote by $C_{k,s}^{(\sigma)}$ the group of strong ambiguous ideal classes.
We have to treat two cases:
(i) $C_{k,s}^{(\sigma)}\,\neq\,C_{k,5}^{(\sigma)}\,=\,C_{k,5}$:
Let
$C_{k,5}^{+}\,=\,\{\mathcal{A}\in\,C_{k,5}|\mathcal{A}^{\tau^{2}}\,=\,\mathcal{A}\}$
and
$C_{k,5}^{-}\,=\,\{\mathcal{A}\in\,C_{k,5}|\mathcal{A}^{\tau^{2}}\,=\,\mathcal{A}^{-1}\}$
be the corresponding nontrivial subgroups of $C_{k,5}$. We have
$(C_{k,5}^{(\sigma)})^{+}\,=\,C_{k,5}^{+}$, and by [References, Lemma 6.2]
$C_{k,5}^{+}\,\simeq\,C_{\Gamma,5}$, i.e. $C_{k,5}^{+}$ is generated by
$5$-classes coming from $\Gamma$ (so $|C_{k,5}^{+}|\,=\,5$). The strong
ambiguous classes are those of the primes ramified in $k/k_{0}$, namely
$[\mathcal{Q}_{i}]$ for $1\leq i\leq r$, $[\mathcal{P}_{j}]$ for $1\leq j\leq
s$, and $[\mathcal{I}]$ if $\lambda$ is ramified in $k/k_{0}$. It is easy to
see that
$[\mathcal{Q}_{i}]^{\tau^{2}}\,=\,[\mathcal{Q}_{i}]$,
$[\mathcal{P}_{j}]^{\tau^{2}}\,=\,[\mathcal{P}_{j}]$ and
$[\mathcal{I}]^{\tau^{2}}\,=\,[\mathcal{I}]$. We know that
$C_{k,5}/C_{k,s}^{(\sigma)}$ is generated by the images in
$C_{k,5}/C_{k,s}^{(\sigma)}$ of elements of $C_{k,5}^{+}$. Since
$C_{k,s}^{(\sigma)}$ is generated by the $[\mathcal{Q}_{i}]$, the
$[\mathcal{P}_{j}]$ and, when $\lambda$ ramifies, $[\mathcal{I}]$, every
element of $C_{k,5}$ is fixed by $\tau^{2}$, in particular every element of
$C_{k,5}^{-}$. Therefore for all $\mathcal{A}\in C_{k,5}^{-}$ we get
$\mathcal{A}^{\tau^{2}}\,=\,\mathcal{A}^{-1}\,=\,\mathcal{A}$, i.e.
$\mathcal{A}^{2}\,=\,1$; since $\mathcal{A}$ is a $5$-class this forces
$\mathcal{A}\,=\,1$. Hence $C_{k,5}^{-}\,=\,\{1\}$, which contradicts the fact
that $|C_{k,5}^{-}|=5$.
(ii) $C_{k,s}^{(\sigma)}\,=\,C_{k,5}^{(\sigma)}\,=\,C_{k,5}$:
In this case $C_{k,5}$ is generated by the $[\mathcal{Q}_{i}]$, the
$[\mathcal{P}_{j}]$ and, when $\lambda$ is ramified in $k/k_{0}$,
$[\mathcal{I}]$; as in (i) all the classes are then fixed by $\tau^{2}$, which
gives the same contradiction. Thus we have proved the existence of a prime
$l\,\equiv\,1\,(\mathrm{mod}\,5)$ dividing $n$.
According to [References, Section 5.1], we have rank
$(C_{k,5}^{(\sigma)})\,=\,d-3+q^{*}$, where $d$ is the number of primes
ramified in $k/k_{0}$ and $q^{*}\in\{0,1,2\}$. If two primes $l_{1}$ and
$l_{2}$ with $l_{i}\,\equiv\,1\,(\mathrm{mod}\,5)$, $(i=1,2)$, divided $n$,
then $d\geq 8$ and rank $(C_{k,5}^{(\sigma)})$ would be at least $5$, which is
impossible. Thus if rank $(C_{k,5}^{(\sigma)})\,=\,2$, there exists a unique
prime $l\,\equiv\,1\,(\mathrm{mod}\,5)$ dividing $n$. ∎
### 2.1 Proof of Theoreme 1.1
Let $\Gamma\,=\,\mathbb{Q}(\sqrt[5]{n})$ be a pure quintic field, where $n\geq
2$ is a $5^{th}$ power-free integer, let $k\,=\,\Gamma(\zeta_{5})$ be its
normal closure, and let $C_{k,5}$ be the $5$-class group of $k$. Let
$C_{k,5}^{(\sigma)}$ be the group of ambiguous ideal classes under the action
of $Gal(k/k_{0})\,=\,\langle\sigma\rangle$. Since $k_{0}$ has class number
$1$, $C_{k,5}^{(\sigma)}$ is an elementary abelian $5$-group, so rank
$(C_{k,5}^{(\sigma)})\,=\,1\,\mathrm{or}\,2$.
According to [References,section 5.1], the rank of $C_{k,5}^{(\sigma)}$ is
given as follows:
rank $(C_{k,5}^{(\sigma)})\,=\,d-3+q^{*}$
where $d$ is the number of prime ideals of $k_{0}$ ramified in $k$, and
$q^{\ast}\in\{0,1,2\}$ is defined by:

$q^{*}\,=\,\begin{cases}2&\text{if }\,\zeta,1+\zeta\in N_{k/k_{0}}(k^{*}),\\ 1&\text{if }\,\zeta^{i}(1+\zeta)^{j}\in N_{k/k_{0}}(k^{*})\,\text{ for some }i\text{ and }j,\\ 0&\text{if }\,\zeta^{i}(1+\zeta)^{j}\notin N_{k/k_{0}}(k^{*})\,\text{ for }0\leq i,j\leq 4\text{ and }i+j\neq 0.\end{cases}$
We can write $n$ as $n\,=\,\mu\lambda^{e}\pi_{1}^{e_{1}}....\pi_{g}^{e_{g}}$,
where $\mu$ is a unit of $\mathcal{O}_{k_{0}}$, $\lambda=1-\zeta_{5}$, the
$\pi_{i}$ are primes of $k_{0}$, $e\in\{0,1,2,3,4\}$ and
$e_{i}\in\{1,2,3,4\}$ for $1\leq i\leq g$. According to [References, Lemma
5.1] we have $\zeta_{5}\,\in
N_{k/k_{0}}(k^{*})\,\Longleftrightarrow\,N_{k_{0}/\mathbb{Q}}((\pi_{i}))\,\equiv\,1\,(\mathrm{mod}\,25)$
for all $i$, and by [References, Proposition 8.2], if $\pi$ is a prime of
$\mathcal{O}_{k_{0}}$ over a prime $p\,\in\,\mathbb{Z}$, then
$N_{k_{0}/\mathbb{Q}}((\pi))\,=\,p^{f}$, where $f$ is the least positive
integer such that $p^{f}\,\equiv\,1\,(\mathrm{mod}\,5)$. Hence $\zeta_{5}$ is
a norm of an element of $k^{*}\,=\,k\setminus\{0\}$ if and only if
$p^{f}\,\equiv\,1\,(\mathrm{mod}\,25)$ for every prime $p\,\neq\,5$ dividing
$n$ (a computational check of this criterion is sketched after the list
below):
1. $-$
If $l\,\equiv\,1\,(\mathrm{mod}\,5)$, then by Corollary 2.1
$l\,=\,\pi_{1}\pi_{2}\pi_{3}\pi_{4}$ and
$N_{k_{0}/\mathbb{Q}}((\pi_{i}))\,=\,l$; so to have
$N_{k_{0}/\mathbb{Q}}((\pi_{i}))\,\equiv\,1\,(\mathrm{mod}\,25)$ the prime $l$
must satisfy $l\,\equiv\,1\,(\mathrm{mod}\,25)$.
2. $-$
If $q\,\equiv\,\pm 2\,(\mathrm{mod}\,5)$, then $q$ is inert in $k_{0}$, so
$N_{k_{0}/\mathbb{Q}}((q))\,=\,q^{4}$; to have
$N_{k_{0}/\mathbb{Q}}((q))\,\equiv\,1\,(\mathrm{mod}\,25)$ the prime $q$ must
satisfy $q\,\equiv\,\pm 7\,(\mathrm{mod}\,25)$.
3. $-$
If $p\,\equiv\,-1\,(\mathrm{mod}\,5)$, then by Corollary 2.1
$p\,=\,\pi_{1}\pi_{2}$ and $N_{k_{0}/\mathbb{Q}}((\pi))\,=\,p^{2}$; to have
$N_{k_{0}/\mathbb{Q}}((\pi))\,\equiv\,1\,(\mathrm{mod}\,25)$ the prime $p$
must satisfy $p\,\equiv\,-1\,(\mathrm{mod}\,25)$.
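The following minimal sketch (an assumption-labelled helper of ours, not from
the paper) tests this criterion for $\zeta_{5}$ being a norm, directly from
the three congruence conditions above.

```python
# Sketch: zeta_5 is a norm from k iff every prime p != 5 dividing n
# satisfies l = 1, q = +-7, or p = -1 (mod 25), per the list above.
from sympy import factorint

def zeta5_is_norm(n: int) -> bool:
    for p in factorint(n):
        if p == 5:
            continue
        r5, r25 = p % 5, p % 25
        if r5 == 1 and r25 != 1:                  # l = 1 (mod 5) case
            return False
        if r5 in (2, 3) and r25 not in (7, 18):   # q = +-2 (mod 5) case
            return False
        if r5 == 4 and r25 != 24:                 # p = -1 (mod 5) case
            return False
    return True

print(zeta5_is_norm(7 * 43))   # True: 7 = 7 and 43 = -7 (mod 25)
print(zeta5_is_norm(19))       # False: 19 = 19 (mod 25), not -1
```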
(1) If rank $(C_{k,5}^{(\sigma)})\,=\,1$, we get that $d+q^{*}\,=\,4$, so
there are three possible cases as follows:
* •
Case 1: $q^{*}=0\,\,\mathrm{and}\,\,d=4$,
* •
Case 2: $q^{*}=1\,\,\mathrm{and}\,\,d=3$,
* •
Case 3: $q^{*}=2\,\,\mathrm{and}\,\,d=2$,
We will successively treat the three cases to prove the first point of the
main theorem.
* •
Case 1: we have $q^{*}=0$ and $d=4$, so the number of prime ideals ramified in
$k/k_{0}$ must be $4$. According to the proof of Theorem 2.2, if $n$ is
divisible by a prime $l\,\equiv\,1\,(\mathrm{mod}\,5)$, then this prime $l$ is
unique.
\- If $l\,\equiv\,1\,(\mathrm{mod}\,5)$ divides $n$, then by Corollary 2.1,
$l\,=\,\pi_{1}\pi_{2}\pi_{3}\pi_{4}$ where the $\pi_{i}$ are primes of
$k_{0}$. The prime $l$ is ramified in $\Gamma$, because
disc$(\Gamma/\mathbb{Q})\,=\,5^{5}n^{4}$ and $l$ divides this discriminant;
hence $\pi_{1},\pi_{2},\pi_{3}$ and $\pi_{4}$ are ramified in $k$ and $d=4$.
Moreover $l$ is then the unique prime dividing $n$: if $n$ were divisible by
another prime we would get $d>4$, which is impossible in all three cases. So
$n\,=\,l^{e_{1}}$ with $l\,\equiv\,1\,(\mathrm{mod}\,5)$ and
$e_{1}\in\{1,2,3,4\}$. According to [References, Lemma 5.1], $(\lambda)$
ramifies in $k/k_{0}\,\Longleftrightarrow\,n\not\equiv\,\pm 1,\pm
7\,(\mathrm{mod}\,25)$, where $\lambda\,=\,1-\zeta_{5}$; so for
$n\,=\,l^{e_{1}}$ we must have $n\,\equiv\,\pm 1,\pm 7\,(\mathrm{mod}\,25)$,
and the only primes $l$ with $n\,=\,l^{e_{1}}\,\equiv\,\pm 1,\pm
7\,(\mathrm{mod}\,25)$ are those with $l\,\equiv\,1\,(\mathrm{mod}\,25)$. For
such a prime $\zeta_{5}$ is a norm, so $q^{*}\,\geq\,1$, which is impossible
in this case.
We note that if $n\,=\,l^{e_{1}}$ with $l\,\equiv\,1\,(\mathrm{mod}\,25)$ and
$q^{*}\,=\,2$, there is no field $k$ of type $(5,5)$, because then
rank$(C_{k,5}^{(\sigma)})\,=\,3$; and if $q^{*}\,=\,1$ we have
rank$(C_{k,5}^{(\sigma)})\,=\,2$, which will be treated in the second point of
the proof.
\- If no prime $l\,\equiv\,1\,(\mathrm{mod}\,5)$ divides $n$, we have two forms of $n$:
1. $(i)$
$n\,=\,5^{e}p^{e_{1}}q_{1}^{e_{2}}\not\equiv\,\pm 1,\pm 7\,(\mathrm{mod}\,25)$
with $p\,\equiv\,-1\,(\mathrm{mod}\,5)$, $q_{1}\,\equiv\,\pm
2\,(\mathrm{mod}\,5)$ and $e,e_{1},e_{2}\in\{1,2,3,4\}$. By Corollary 2.1,
$p\,=\,\pi_{1}\pi_{2}$, where $\pi_{1},\pi_{2}$ are primes of $k_{0}$, and
$q_{1}$ is inert in $k_{0}$; the prime $p$ is ramified in $\Gamma$, so
$\pi_{1},\pi_{2}$ are ramified in $k$, and $q_{1}$ is ramified in $\Gamma$.
Since $n\,\not\equiv\,\pm 1,\pm 7\,(\mathrm{mod}\,25)$,
$\lambda\,=\,1-\zeta_{5}$ is ramified in $k$, so we get $d=4$. Since
$\mathbb{Q}(\sqrt[5]{ab^{2}c^{3}d^{4}})\,=\,\mathbb{Q}(\sqrt[5]{a^{2}b^{4}cd^{3}})\,=\,\mathbb{Q}(\sqrt[5]{a^{3}bc^{4}d^{2}})\,=\,\mathbb{Q}(\sqrt[5]{a^{4}b^{3}c^{2}d})$,
we may choose $e_{1}\,=\,2$ and $e_{2}\,=\,1$, so $n\,=\,5^{e}p^{2}q_{1}$ with
$e\in\{1,2,3,4\}$. On the one hand, if $e\,=\,2,3,4$ we have
$n\,\equiv\,0\,(\mathrm{mod}\,25)$; on the other hand, if $n\,=\,5p^{2}q_{1}$
we have $p^{2}q_{1}\,=\,5\alpha+2$ or $p^{2}q_{1}\,=\,5\alpha^{\prime}+3$ with
$\alpha,\alpha^{\prime}\in\mathbb{Z}$, so
$5p^{2}q_{1}\,\equiv\,10\,(\mathrm{mod}\,25)$ or
$5p^{2}q_{1}\,\equiv\,15\,(\mathrm{mod}\,25)$; we conclude that
$n\,\not\equiv\,\pm 1,\pm 7\,(\mathrm{mod}\,25)$. According to [References,
Theorem 5.18], if $p\,\equiv\,-1\,(\mathrm{mod}\,25)$ and $q_{1}\,\equiv\,\pm
7\,(\mathrm{mod}\,25)$, then rank $(C_{k,5})$ is at least $3$, which
contradicts the fact that $C_{k,5}$ is of type $(5,5)$; and by the proofs of
[References, Theorem 5.13, Theorem 5.15], in the other congruence cases of $p$
and $q_{1}$ we have $q^{*}\,=\,1$, which is impossible in this case.
2. $(ii)$
$n\,=\,p^{e}q_{1}^{e_{1}}q_{2}^{e_{2}}\,\equiv\,\pm 1,\pm
7\,(\mathrm{mod}\,25)$ with $p\,\equiv\,-1\,(\mathrm{mod}\,5)$,
$q_{1}\,\equiv\,2\,(\mathrm{mod}\,5)$, $q_{2}\,\equiv\,3\,(\mathrm{mod}\,5)$
and $e,e_{1},e_{2}\in\{1,2,3,4\}$. As in case $(i)$ we may choose
$e_{1}\,=\,2$, $e_{2}\,=\,1$, i.e. $n\,=\,p^{e}q_{1}^{2}q_{2}$ with
$e\in\{1,2,3,4\}$. By Corollary 2.1, $p\,=\,\pi_{1}\pi_{2}$ where
$\pi_{1},\pi_{2}$ are primes of $k_{0}$, and $q_{1},q_{2}$ are inert in
$k_{0}$. Since $p$, $q_{1}$ and $q_{2}$ are ramified in $\Gamma$, the primes
$\pi_{1},\pi_{2},q_{1}$ and $q_{2}$ are ramified in $k$, so $d=4$. The
condition $n\,=\,p^{e}q_{1}^{2}q_{2}\,\equiv\,\pm 1,\pm 7\,(\mathrm{mod}\,25)$
is not satisfied for all $p\,\equiv\,-1\,(\mathrm{mod}\,5)$,
$q_{1}\,\equiv\,2\,(\mathrm{mod}\,5)$, $q_{2}\,\equiv\,3\,(\mathrm{mod}\,5)$;
combining all the congruence cases we obtain
$p\,\equiv\,-1\,(\mathrm{mod}\,25)$, $q_{1}\,\equiv\,12\,(\mathrm{mod}\,25)$,
$q_{2}\,\equiv\,3\,(\mathrm{mod}\,25)$. By [References, Lemma 5.1], since
$N_{k_{0}/\mathbb{Q}}(\pi_{i})\,\equiv\,1\,(\mathrm{mod}\,25)$,
$N_{k_{0}/\mathbb{Q}}(q_{1})\,\not\equiv\,1\,(\mathrm{mod}\,25)$ and
$N_{k_{0}/\mathbb{Q}}(q_{2})\,\not\equiv\,1\,(\mathrm{mod}\,25)$, we have
$q^{*}\,=\,1$, which is impossible in this case.
We deduce that in Case 1 there is no radicand $n$ for which
rank$(C_{k,5}^{(\sigma)})\,=\,1$.
* •
Case 2: we have $q^{*}=1$ and $d=3$, so the number of prime ideals ramified in
$k/k_{0}$ must be $3$. As in Case 1, $n$ is not divisible by any prime
$l\,\equiv\,1\,(\mathrm{mod}\,5)$ here.
We can write $n$ as $n\,=\,\mu\lambda^{e}\pi_{1}^{e_{1}}....\pi_{g}^{e_{g}}$,
where $\mu$ is a unit of $\mathcal{O}_{k_{0}}$, $\lambda=1-\zeta_{5}$, the
$\pi_{i}$ are primes of $k_{0}$, $e\in\{0,1,2,3,4\}$ and
$e_{i}\in\{1,2,3,4\}$ for $1\leq i\leq g$.
By [References, Proposition 5.2], $d\,=\,g$ or $g+1$ according to whether
$n\,\equiv\,\pm 1,\pm 7\,(\mathrm{mod}\,25)$ or $n\not\equiv\,\pm 1,\pm
7\,(\mathrm{mod}\,25)$. To obtain $d=3$, $n$ must be written in
$\mathcal{O}_{k_{0}}$ as $n\,=\,\pi_{1}^{e_{1}}\pi_{2}^{e_{2}}\pi_{3}^{e_{3}}$
or $n\,=\,\lambda^{e}\pi_{1}^{e_{1}}\pi_{2}^{e_{2}}$; therefore we have three
forms of $n$:
1. $(i)$
$n\,=\,5^{e}p^{e_{1}}\not\equiv\,\pm 1,\pm 7\,(\mathrm{mod}\,25)$ with
$p\,\equiv\,-1\,(\mathrm{mod}\,5)$ and $e,e_{1}\in\{1,2,3,4\}$. By the same
radical identities as in Case 1 we may choose $e_{1}\,=\,1$, i.e.
$n\,=\,5^{e}p$ with $e\in\{1,2,3,4\}$. By Corollary 2.1,
$p\,=\,\pi_{1}\pi_{2}$ with $\pi_{1},\pi_{2}$ primes of $k_{0}$; $p$ is
ramified in $\Gamma$, so $\pi_{1},\pi_{2}$ are ramified in $k$, and since
$n\,\not\equiv\,\pm 1,\pm 7\,(\mathrm{mod}\,25)$, by [References, Lemma 5.1]
$\lambda\,=\,1-\zeta_{5}$ is ramified in $k$; we obtain $d=3$. The condition
$n\,\not\equiv\,\pm 1,\pm 7\,(\mathrm{mod}\,25)$ holds for all
$p\,\equiv\,-1\,(\mathrm{mod}\,5)$: if $e\,=\,2,3,4$ we have
$n\,=\,5^{e}p\,\equiv\,0\,(\mathrm{mod}\,25)$, and if $e=1$, i.e. $n\,=\,5p$,
then $p\,=\,5\alpha+4$ implies $5p\,=\,25\alpha+20$ with
$\alpha\in\mathbb{Z}$, so $n\,=\,5p\,\equiv\,20\,(\mathrm{mod}\,25)$.
According to the proof of [References, Theorem 5.15], if
$p\,\equiv\,-1\,(\mathrm{mod}\,25)$ we have $q^{*}\,=\,2$, and if
$p\,\not\equiv\,-1\,(\mathrm{mod}\,25)$ we have $q^{*}\,=\,1$; we conclude
that $n\,=\,5^{e}p\not\equiv\,\pm 1,\pm 7\,(\mathrm{mod}\,25)$ with
$p\,\not\equiv\,-1\,(\mathrm{mod}\,25)$.
We note that the computational number theory system PARI/GP [References]
shows that for $n\,=\,5^{e}p\not\equiv\,\pm 1,\pm 7\,(\mathrm{mod}\,25)$ with
$p\,\not\equiv\,-1\,(\mathrm{mod}\,25)$, the field $k$ is not always of type
$(5,5)$.
2. $(ii)$
$n\,=\,5^{e}q_{1}^{e_{1}}q_{2}^{e_{2}}\not\equiv\,\pm 1,\pm
7\,(\mathrm{mod}\,25)$ with $q_{1}\,\equiv\,2\,(\mathrm{mod}\,5)$,
$q_{2}\,\equiv\,3\,(\mathrm{mod}\,5)$ and $e,e_{1},e_{2}\in\{1,2,3,4\}$. As in
case $(i)$ we may choose $e_{1}\,=\,2$ and $e_{2}\,=\,1$, i.e.
$n\,=\,5^{e}q_{1}^{2}q_{2}$ with $e\in\{1,2,3,4\}$. By Corollary 2.1, $q_{1}$
and $q_{2}$ are inert in $k_{0}$ and ramified in $\Gamma$, so $q_{1},q_{2}$
are ramified in $k$. Since $n\,\not\equiv\,\pm 1,\pm 7\,(\mathrm{mod}\,25)$,
$\lambda\,=\,1-\zeta_{5}$ is ramified in $k$, so we get $d=3$. The condition
$n\,\not\equiv\,\pm 1,\pm 7\,(\mathrm{mod}\,25)$ holds for all
$q_{1}\,\equiv\,2\,(\mathrm{mod}\,5)$, $q_{2}\,\equiv\,3\,(\mathrm{mod}\,5)$:
if $e\,=\,2,3,4$ we have
$n\,=\,5^{e}q_{1}^{2}q_{2}\,\equiv\,0\,(\mathrm{mod}\,25)$, and if
$n\,=\,5q_{1}^{2}q_{2}$ we have $q_{1}^{2}q_{2}\,=\,5\alpha+2$ with
$\alpha\in\mathbb{Z}$, so $5q_{1}^{2}q_{2}\,\equiv\,10\,(\mathrm{mod}\,25)$.
If $q_{1}\,\equiv\,7\,(\mathrm{mod}\,25)$ and
$q_{2}\,\equiv\,-7\,(\mathrm{mod}\,25)$ we have $q^{*}\,=\,2$, and if
$q_{1}\,\not\equiv\,7\,(\mathrm{mod}\,25)$ or
$q_{2}\,\not\equiv\,-7\,(\mathrm{mod}\,25)$, by the proof of [References,
Theorem 5.13] we have $q^{*}\,=\,1$; but for this form of the radicand $n$,
the computational number theory system PARI/GP [References] shows that
$C_{k,5}\,\simeq\,\mathbb{Z}/5\mathbb{Z}$.
3. $(iii)$
$n\,=\,p^{e}q_{1}^{e_{1}}\,\equiv\,\pm 1,\pm 7\,(\mathrm{mod}\,25)$ with
$p\,\equiv\,-1\,(\mathrm{mod}\,5)$, $q_{1}\,\equiv\,\pm 2\,(\mathrm{mod}\,5)$
and $e,e_{1}\in\{1,2,3,4\}$. As in case $(i)$ we may choose $e_{1}\,=\,1$,
i.e. $n\,=\,p^{e}q_{1}$ with $e\in\{1,2,3,4\}$. By Corollary 2.1,
$p\,=\,\pi_{1}\pi_{2}$ where $\pi_{1},\pi_{2}$ are primes of $k_{0}$, and
$q_{1}$ is inert in $k_{0}$. Since $p$ is ramified in $\Gamma$, the primes
$\pi_{1},\pi_{2}$ are ramified in $k$, and $q_{1}$ is ramified in $\Gamma$
too; we obtain $d=3$. The condition $n\,=\,p^{e}q_{1}\,\equiv\,\pm 1,\pm
7\,(\mathrm{mod}\,25)$ is not satisfied for all
$p\,\equiv\,-1\,(\mathrm{mod}\,5)$, $q_{1}\,\equiv\,\pm 2\,(\mathrm{mod}\,5)$;
combining all the congruence cases we obtain
$p\,\not\equiv\,-1\,(\mathrm{mod}\,25)$ and $q_{1}\,\not\equiv\,\pm
7\,(\mathrm{mod}\,25)$. According to [References, Theorem 5.18], if
$p\,\equiv\,-1\,(\mathrm{mod}\,25)$ and $q_{1}\,\equiv\,\pm
7\,(\mathrm{mod}\,25)$, then rank $C_{k,5}\geq 3$, which is impossible in our
setting; and if $p\,\not\equiv\,-1\,(\mathrm{mod}\,25)$ and
$q_{1}\,\not\equiv\,\pm 7\,(\mathrm{mod}\,25)$, by [References, Theorem 5.13]
we have $q^{*}\,=\,1$. Using PARI/GP [References], for $n\,=\,p^{e}q_{1}$ with
$p\,\not\equiv\,-1\,(\mathrm{mod}\,25)$ and $q_{1}\,\not\equiv\,\pm
7\,(\mathrm{mod}\,25)$, the field $k$ is not always of type $(5,5)$.
We summarize all forms of the integer $n$ in Case 2 for which $k$ is of type
$(5,5)$ and rank $(C_{k,5}^{(\sigma)})\,=\,1$ as follows:

$n=\left\{\begin{array}{ll}5^{e}p\not\equiv\,\pm 1,\pm 7\,(\mathrm{mod}\,25)&\text{ with }p\,\not\equiv\,-1\,(\mathrm{mod}\,25),\\ p^{e}q_{1}\,\equiv\,\pm 1,\pm 7\,(\mathrm{mod}\,25)&\text{ with }p\,\not\equiv\,-1\,(\mathrm{mod}\,25)\text{ and }q_{1}\,\not\equiv\,\pm 7\,(\mathrm{mod}\,25).\end{array}\right.$ (3)
* •
Case 3: we have $q^{*}=2$ and $d=2$, so the number of prime ideals ramified in
$k/k_{0}$ must be $2$. Let
$n\,=\,\mu\lambda^{e}\pi_{1}^{e_{1}}....\pi_{g}^{e_{g}}$ as above, where $\mu$
is a unit of $\mathcal{O}_{k_{0}}$, $\lambda=1-\zeta_{5}$, the $\pi_{i}$ are
primes of $k_{0}$, $e\in\{0,1,2,3,4\}$ and $e_{i}\in\{1,2,3,4\}$ for $1\leq
i\leq g$.
Since $d\,=\,g$ or $g+1$ according to whether $n\,\equiv\,\pm 1,\pm
7\,(\mathrm{mod}\,25)$ or $n\not\equiv\,\pm 1,\pm 7\,(\mathrm{mod}\,25)$, to
obtain $d=2$ the integer $n$ must be written in $\mathcal{O}_{k_{0}}$ as
$n\,=\,\pi_{1}^{e_{1}}\pi_{2}^{e_{2}}$ or $n\,=\,\lambda^{e}\pi_{1}^{e_{1}}$;
therefore we have three forms of $n$:
1. $(i)$
$n\,=\,5^{e}q_{1}^{e_{1}}\not\equiv\,\pm 1,\pm 7\,(\mathrm{mod}\,25)$ with
$q_{1}\,\equiv\,\pm 2\,(\mathrm{mod}\,5)$ and $e,e_{1}\in\{1,2,3,4\}$. As
before we may choose $e_{1}\,=\,1$, i.e. $n\,=\,5^{e}q_{1}$ with
$e\in\{1,2,3,4\}$. Since $q^{*}=2$ we must have $q_{1}\,\equiv\,\pm
7\,(\mathrm{mod}\,25)$. By Corollary 2.1, $q_{1}$ is inert in $k_{0}$; it is
ramified in $\Gamma$, hence also in $k$, and since $n\,\not\equiv\,\pm 1,\pm
7\,(\mathrm{mod}\,25)$, by [References, Lemma 5.1] $\lambda\,=\,1-\zeta_{5}$
is ramified in $k$; we obtain $d=2$. The condition $n\,\not\equiv\,\pm 1,\pm
7\,(\mathrm{mod}\,25)$ holds for all $q_{1}\,\equiv\,\pm
7\,(\mathrm{mod}\,25)$: if $e\,=\,2,3,4$ we have
$n\,=\,5^{e}q_{1}\,\equiv\,0\,(\mathrm{mod}\,25)$, and if $e=1$, i.e.
$n\,=\,5q_{1}$, we have $n\,=\,5q_{1}\,\equiv\,\pm 10\,(\mathrm{mod}\,25)$. We
conclude that $n\,=\,5^{e}q_{1}\not\equiv\,\pm 1,\pm 7\,(\mathrm{mod}\,25)$
with $q_{1}\,\equiv\,\pm 7\,(\mathrm{mod}\,25)$; but for this form of the
radicand $n$, PARI/GP [References] shows that
$C_{k,5}\,\simeq\,\mathbb{Z}/5\mathbb{Z}$.
2. $(ii)$
$n\,=\,q_{1}^{e_{1}}q_{2}^{e_{2}}\,\equiv\,\pm 1,\pm 7\,(\mathrm{mod}\,25)$
with $q_{1}\,\equiv\,3\,(\mathrm{mod}\,5)$,
$q_{2}\,\equiv\,2\,(\mathrm{mod}\,5)$ and $e_{1},e_{2}\in\{1,2,3,4\}$. As
before we may choose $e_{2}\,=\,1$, i.e. $n\,=\,q_{1}^{e_{1}}q_{2}$ with
$e_{1}\in\{1,2,3,4\}$. Since $q^{*}=2$ we have
$\zeta_{5}\,\in\,N_{k/k_{0}}(k^{*})$, hence
$q_{1}\,\equiv\,-7\,(\mathrm{mod}\,25)$ and
$q_{2}\,\equiv\,7\,(\mathrm{mod}\,25)$. By Corollary 2.1, $q_{1}$ and $q_{2}$
are inert in $k_{0}$ and ramified in $\Gamma$, hence ramified in $k$, so we
obtain $d=2$. The condition $n\,=\,q_{1}^{e_{1}}q_{2}\,\equiv\,\pm 1,\pm
7\,(\mathrm{mod}\,25)$ is then satisfied, since
$n\,=\,q_{1}^{e_{1}}q_{2}\,\equiv\,\pm 1$ or $\pm 7\,(\mathrm{mod}\,25)$ for
all $e_{1}$. We conclude that $n\,=\,q_{1}^{e_{1}}q_{2}\,\equiv\,\pm 1,\pm
7\,(\mathrm{mod}\,25)$ with $q_{1}\,\equiv\,-7\,(\mathrm{mod}\,25)$,
$q_{2}\,\equiv\,7\,(\mathrm{mod}\,25)$; but for this form of the radicand $n$,
PARI/GP [References] shows that $C_{k,5}\,\simeq\,\mathbb{Z}/5\mathbb{Z}$.
3. $(iii)$
$n\,=\,p^{e}\,\equiv\,\pm 1,\pm 7\,(\mathrm{mod}\,25)$ with
$p\,\equiv\,-1\,(\mathrm{mod}\,5)$ and $e\in\{1,2,3,4\}$. Since $q^{*}=2$ we
have $\zeta_{5}\,\in\,N_{k/k_{0}}(k^{*})$, hence
$p\,\equiv\,-1\,(\mathrm{mod}\,25)$. By Corollary 2.1,
$p\,=\,\pi_{1}\pi_{2}$ where $\pi_{1},\pi_{2}$ are primes of $k_{0}$; $p$ is
ramified in $\Gamma$, so $\pi_{1},\pi_{2}$ are ramified in $k$, hence $d=2$.
The condition $n\,=\,p^{e}\,\equiv\,\pm 1,\pm 7\,(\mathrm{mod}\,25)$ holds for
all $p\,\equiv\,-1\,(\mathrm{mod}\,25)$, so we conclude that
$n\,=\,p^{e}\,\equiv\,\pm 1,\pm 7\,(\mathrm{mod}\,25)$ with
$p\,\equiv\,-1\,(\mathrm{mod}\,25)$. Using PARI/GP [References], for this form
of $n$ the field $k$ is not always of type $(5,5)$.
We deduce that in Case 3 there is one form of $n$ for which the field $k$ is
of type $(5,5)$ and rank$(C_{k,5}^{(\sigma)})\,=\,1$, namely:
$n\,=\,p^{e}\,\equiv\,\pm 1,\pm 7\,(\mathrm{mod}\,25)\text{ with
}p\,\equiv\,-1\,(\mathrm{mod}\,25)$
(2) If rank $(C_{k,5}^{(\sigma)})\,=\,2$, then $C_{k,5}\,=\,C_{k,5}^{(\sigma)}$.
According to [References, Section 5.1], the rank of $C_{k,5}^{(\sigma)}$ is
given by
rank $(C_{k,5}^{(\sigma)})\,=\,d-3+q^{*}$
where $d$ and $q^{*}$ are defined as above. Since rank
$(C_{k,5}^{(\sigma)})\,=\,2$ and $q^{*}\,=\,0,1\,\mathrm{or}\,2$, there are
three possible cases as follows:
* •
Case 1: $q^{*}=0\,\,\mathrm{and}\,\,d=5$,
* •
Case 2: $q^{*}=1\,\,\mathrm{and}\,\,d=4$,
* •
Case 3: $q^{*}=2\,\,\mathrm{and}\,\,d=3$,
We treat the three cases to determine the possible forms of the radicand $n$.
By Theorem 2.2, $n$ must be divisible by one prime
$l\,\equiv\,1\,(\mathrm{mod}\,5)$ in all cases, and since rank
$(C_{k,5}^{(\sigma)})\,=\,2$, the invariant $q^{*}$ must be $0$ or $1$: if
$q^{*}\,=\,2$ and $l\,\equiv\,1\,(\mathrm{mod}\,5)$ divides $n$, then the
invariant $d$ is at least $4$, so rank $(C_{k,5}^{(\sigma)})$ would be at
least $3$.
* •
Case 1: we have $q^{*}=0$ and $d=5$, so the number of prime ideals ramified in
$k/k_{0}$ must be $5$, and the radicand $n$ must be divisible by one prime
$l\,\equiv\,1\,(\mathrm{mod}\,5)$.
We can write $n\,\in\,\mathcal{O}_{k_{0}}$ as
$n\,=\,\mu\lambda^{e}\pi_{1}^{e_{1}}....\pi_{g}^{e_{g}}$, where $\mu$ is a
unit of $\mathcal{O}_{k_{0}}$, $\lambda=1-\zeta_{5}$, the $\pi_{i}$ are primes
of $k_{0}$, $e\in\{0,1,2,3,4\}$ and $e_{i}\in\{1,2,3,4\}$ for $1\leq i\leq g$.
Since $d\,=\,g$ or $g+1$ according to whether $n\,\equiv\,\pm 1,\pm
7\,(\mathrm{mod}\,25)$ or $n\not\equiv\,\pm 1,\pm 7\,(\mathrm{mod}\,25)$, to
obtain $d=5$ the integer $n$ must be written in $\mathcal{O}_{k_{0}}$ as
$n\,=\,\pi_{1}^{e_{1}}\pi_{2}^{e_{2}}\pi_{3}^{e_{3}}\pi_{4}^{e_{4}}\pi_{5}^{e_{5}}$
or
$n\,=\,\lambda^{e}\pi_{1}^{e_{1}}\pi_{2}^{e_{2}}\pi_{3}^{e_{3}}\pi_{4}^{e_{4}}$;
therefore we have two forms of $n$:
1. $(i)$
$n\,=\,5^{e}l^{e_{1}}\not\equiv\,\pm 1,\pm 7\,(\mathrm{mod}\,25)$ with
$l\,\equiv\,1\,(\mathrm{mod}\,5)$ and $e,e_{1}\in\{1,2,3,4\}$. As before we
may choose $e_{1}\,=\,1$, i.e. $n\,=\,5^{e}l$ with $e\in\{1,2,3,4\}$. By
Corollary 2.1, $l\,=\,\pi_{1}\pi_{2}\pi_{3}\pi_{4}$ with the $\pi_{i}$ primes
of $k_{0}$; $l$ is ramified in $\Gamma$, so $\pi_{1},\pi_{2},\pi_{3}$ and
$\pi_{4}$ are ramified in $k$, and since $n\,\not\equiv\,\pm 1,\pm
7\,(\mathrm{mod}\,25)$, by [References, Lemma 5.1] $\lambda\,=\,1-\zeta_{5}$
is ramified in $k$; we obtain $d=5$. The condition $n\,\not\equiv\,\pm 1,\pm
7\,(\mathrm{mod}\,25)$ holds for all $l\,\equiv\,1\,(\mathrm{mod}\,5)$: if
$e\,=\,2,3,4$ we have $n\,=\,5^{e}l\,\equiv\,0\,(\mathrm{mod}\,25)$, and if
$e=1$, i.e. $n\,=\,5l$, then $l\,=\,5\alpha+1$ implies $5l\,=\,25\alpha+5$
with $\alpha\in\mathbb{Z}$, so $n\,=\,5l\,\equiv\,5\,(\mathrm{mod}\,25)$. If
$l\,\equiv\,1\,(\mathrm{mod}\,25)$ we have
$\zeta_{5}\,\in\,N_{k/k_{0}}(k^{*})$, so $q^{*}\,\geq\,1$, which is impossible
in this case. We conclude that $n\,=\,5^{e}l\not\equiv\,\pm 1,\pm
7\,(\mathrm{mod}\,25)$ with $l\,\not\equiv\,1\,(\mathrm{mod}\,25)$. Using
PARI/GP [References], for this form of $n$ the field $k$ is not always of type
$(5,5)$.
2. $(ii)$
$n\,=\,l^{e}q_{1}^{e_{1}}\,\equiv\,\pm 1,\pm 7\,(\mathrm{mod}\,25)$ with
$l\,\equiv\,1\,(\mathrm{mod}\,5)$, $q_{1}\,\equiv\,\pm 2\,(\mathrm{mod}\,5)$
and $e,e_{1}\in\{1,2,3,4\}$. As before we may choose $e_{1}\,=\,1$, i.e.
$n\,=\,l^{e}q_{1}$ with $e\in\{1,2,3,4\}$. By Corollary 2.1,
$l\,=\,\pi_{1}\pi_{2}\pi_{3}\pi_{4}$ where the $\pi_{i}$ are primes of
$k_{0}$, and $q_{1}$ is inert in $k_{0}$. Since $l$ is ramified in $\Gamma$,
the primes $\pi_{1},\pi_{2},\pi_{3},\pi_{4}$ are ramified in $k$, and $q_{1}$
is ramified in $\Gamma$ too; we obtain $d=5$. The condition
$n\,=\,l^{e}q_{1}\,\equiv\,\pm 1,\pm 7\,(\mathrm{mod}\,25)$ is not satisfied
for all $l\,\equiv\,1\,(\mathrm{mod}\,5)$, $q_{1}\,\equiv\,\pm
2\,(\mathrm{mod}\,5)$; combining all the congruence cases we obtain
$l\,\equiv\,1\,(\mathrm{mod}\,5)$ and $q_{1}\,\equiv\,\pm 2,\pm 3,\pm
7\,(\mathrm{mod}\,25)$. Using PARI/GP [References], for
$n\,=\,l^{e}q_{1}\,\equiv\,\pm 1,\pm 7\,(\mathrm{mod}\,25)$ with
$l\,\equiv\,1\,(\mathrm{mod}\,5)$ and $q_{1}\,\equiv\,\pm 2,\pm 3,\pm
7\,(\mathrm{mod}\,25)$, the field $k$ is not always of type $(5,5)$.
We summarize all forms of the integer $n$ in this case as follows:

$n=\left\{\begin{array}{ll}5^{e}l\not\equiv\,\pm 1,\pm 7\,(\mathrm{mod}\,25)&\text{ with }l\,\not\equiv\,1\,(\mathrm{mod}\,25),\\ l^{e}q_{1}\,\equiv\,\pm 1,\pm 7\,(\mathrm{mod}\,25)&\text{ with }l\,\equiv\,1\,(\mathrm{mod}\,5),\ q_{1}\,\equiv\,\pm 2,\pm 3,\pm 7\,(\mathrm{mod}\,25).\end{array}\right.$ (4)
* •
Case 2: We have $q^{*}\,=\,1$ and $d=4$, so the number of prime ideals
ramified in $k/k_{0}$ must be $4$. The radicand $n$ must be divisible by one
prime $l\,\equiv\,1\,(\mathrm{mod}\,5)$, and by Corollary 2.1 $l$ splits in
$k_{0}$ as $l\,=\,\pi_{1}\pi_{2}\pi_{3}\pi_{4}$ with the $\pi_{i}$ primes of
$k_{0}$. Since $l$ is ramified in $\Gamma$, the primes
$\pi_{1},\pi_{2},\pi_{3},\pi_{4}$ are ramified in $k$; hence if $n$ were
divisible by a prime other than $l$, the number of primes ramified in
$k/k_{0}$ would exceed $4$. Therefore $n$ has a unique form in this case,
namely $n\,=\,l^{e}\,\equiv\,\pm 1,\pm 7\,(\mathrm{mod}\,25)$ with
$l\,\equiv\,1\,(\mathrm{mod}\,5)$ and $e\in\{1,2,3,4\}$. The condition
$n\,\equiv\,\pm 1,\pm 7\,(\mathrm{mod}\,25)$ holds only for
$l\,\equiv\,1\,(\mathrm{mod}\,25)$, and then $q^{*}\,=\,1$. In conclusion,
$n\,=\,l^{e}$ with $l\,\equiv\,1\,(\mathrm{mod}\,25)$. Using PARI/GP
[References], for $n\,=\,l^{e}\,\equiv\,\pm 1,\pm 7\,(\mathrm{mod}\,25)$ with
$l\,\equiv\,1\,(\mathrm{mod}\,25)$, the field $k$ is not always of type
$(5,5)$.
* •
Case 3: We have $q^{*}\,=\,2$ and $d=3$, so the number of prime ideals
ramified in $k/k_{0}$ should be $3$. The radicand $n$ must be divisible by one
prime $l\,\equiv\,1\,(\mathrm{mod}\,5)$, and by Corollary 2.1
$l\,=\,\pi_{1}\pi_{2}\pi_{3}\pi_{4}$ with the $\pi_{i}$ primes of $k_{0}$.
Since $l$ is ramified in $\Gamma$, the primes
$\pi_{1},\pi_{2},\pi_{3},\pi_{4}$ are ramified in $k$, so the number of primes
ramified in $k/k_{0}$ is at least $4$; hence Case 3 cannot occur.
## 3 Numerical examples
Let $\Gamma\,=\,\mathbb{Q}(\sqrt[5]{n})$ be a pure quintic field, where $n$ is
a $5^{th}$ power-free positive integer, and let
$k\,=\,\mathbb{Q}(\sqrt[5]{n},\zeta_{5})$ be its normal closure. We assume
that $C_{k,5}$ is of type $(5,5)$. Using the system PARI/GP [References], we
illustrate our main result, Theorem 1.1.
### 3.1 rank $(C_{k,5}^{(\sigma)})\,=\,1$
Table 1: $n\,=\,p^{e}\,\equiv\,\pm 1,\pm 7\,(\mathrm{mod}\,25)$ with
$p\,\equiv\,-1\,(\mathrm{mod}\,25)$
$p$ | $n\,=\,p^{e}$ | $p\,(\mathrm{mod}\,5)$ | $p\,(\mathrm{mod}\,25)$ | $h_{k,5}$ | $C_{k,5}$ | rank $(C_{k,5}^{(\sigma)})$
---|---|---|---|---|---|---
149 | 22201 = $149^{2}$ | -1 | -1 | 25 | $(5,5)$ | 1
199 | 7880599 = $199^{3}$ | -1 | -1 | 25 | $(5,5)$ | 1
349 | 42508549 = $349^{3}$ | -1 | -1 | 25 | $(5,5)$ | 1
449 | $449$ | -1 | -1 | 25 | $(5,5)$ | 1
559 | $559$ | -1 | -1 | 25 | $(5,5)$ | 1
1249 | $1249$ | -1 | -1 | 25 | $(5,5)$ | 1
1499 | $1499$ | -1 | -1 | 25 | $(5,5)$ | 1
1949 | $1949$ | -1 | -1 | 25 | $(5,5)$ | 1
1999 | $1999$ | -1 | -1 | 25 | $(5,5)$ | 1
2099 | $2099$ | -1 | -1 | 25 | $(5,5)$ | 1
Table 2: $n\,=\,q_{1}^{e_{1}}q_{2}\,\equiv\,\pm 1,\pm 7\,(\mathrm{mod}\,25)$
with $q_{i}\,\equiv\,\pm 7\,(\mathrm{mod}\,25)$
$q_{1}$ | $q_{1}\,(\mathrm{mod}\,5)$ | $q_{1}\,(\mathrm{mod}\,25)$ | $q_{2}$ | $q_{2}\,(\mathrm{mod}\,5)$ | $q_{2}\,(\mathrm{mod}\,25)$ | $n\,=\,q_{1}^{e_{1}}q_{2}$ | $h_{k,5}$ | $C_{k,5}$ | rank $(C_{k,5}^{(\sigma)})$
---|---|---|---|---|---|---|---|---|---
7 | 2 | 7 | 43 | 3 | -7 | 2107 = $7^{2}\times 43$ | 5 | $\mathbb{Z}/5\mathbb{Z}$ | 1
7 | 2 | 7 | 193 | 3 | -7 | 1351 = $7\times 193$ | 5 | $\mathbb{Z}/5\mathbb{Z}$ | 1
7 | 2 | 7 | 293 | 3 | -7 | 2051 = $7\times 293$ | 5 | $\mathbb{Z}/5\mathbb{Z}$ | 1
107 | 2 | 7 | 43 | 3 | -7 | 492307 = $107^{2}\times 43$ | 5 | $\mathbb{Z}/5\mathbb{Z}$ | 1
107 | 2 | 7 | 193 | 3 | -7 | 20651 = $107\times 193$ | 5 | $\mathbb{Z}/5\mathbb{Z}$ | 1
107 | 2 | 7 | 293 | 3 | -7 | 31351 = $107\times 293$ | 5 | $\mathbb{Z}/5\mathbb{Z}$ | 1
107 | 2 | 7 | 443 | 3 | -7 | 47401 = $107\times 443$ | 5 | $\mathbb{Z}/5\mathbb{Z}$ | 1
157 | 2 | 7 | 43 | 3 | -7 | 6751 = $157\times 43$ | 5 | $\mathbb{Z}/5\mathbb{Z}$ | 1
157 | 2 | 7 | 193 | 3 | -7 | 30301 = $157\times 193$ | 5 | $\mathbb{Z}/5\mathbb{Z}$ | 1
157 | 2 | 7 | 443 | 3 | -7 | 69551 = $157\times 443$ | 5 | $\mathbb{Z}/5\mathbb{Z}$ | 1
257 | 2 | 7 | 193 | 3 | -7 | 49601 = $257\times 193$ | 5 | $\mathbb{Z}/5\mathbb{Z}$ | 1
257 | 2 | 7 | 293 | 3 | -7 | 75301 = $257\times 293$ | 5 | $\mathbb{Z}/5\mathbb{Z}$ | 1
307 | 2 | 7 | 193 | 3 | -7 | 59251 = $307\times 193$ | 5 | $\mathbb{Z}/5\mathbb{Z}$ | 1
457 | 2 | 7 | 43 | 3 | -7 | 19651 = $457\times 43$ | 5 | $\mathbb{Z}/5\mathbb{Z}$ | 1
457 | 2 | 7 | 443 | 3 | -7 | 202451 = $457\times 443$ | 5 | $\mathbb{Z}/5\mathbb{Z}$ | 1
557 | 2 | 7 | 43 | 3 | -7 | 23251 = $557\times 43$ | 5 | $\mathbb{Z}/5\mathbb{Z}$ | 1
607 | 2 | 7 | 43 | 3 | -7 | 26101 = $607\times 43$ | 5 | $\mathbb{Z}/5\mathbb{Z}$ | 1
Table 3: $n\,=\,5^{e}q_{1}\not\equiv\,\pm 1,\pm 7\,(\mathrm{mod}\,25)$ with
$q_{1}\,\equiv\,\pm 7\,(\mathrm{mod}\,25)$
$q_{1}$ | $n\,=\,5^{e}q_{1}$ | $q_{1}\,(\mathrm{mod}\,5)$ | $q_{1}\,(\mathrm{mod}\,25)$ | $h_{k,5}$ | $C_{k,5}$ | rank $(C_{k,5}^{(\sigma)})$
---|---|---|---|---|---|---
7 | 175 = $5^{2}\times 7$ | 2 | 7 | 5 | $\mathbb{Z}/5\mathbb{Z}$ | 1
107 | 535 = $5\times 107$ | 2 | 7 | 5 | $\mathbb{Z}/5\mathbb{Z}$ | 1
157 | 19625 = $5^{3}\times 157$ | 2 | 7 | 5 | $\mathbb{Z}/5\mathbb{Z}$ | 1
257 | 6425 = $5^{2}\times 257$ | 2 | 7 | 5 | $\mathbb{Z}/5\mathbb{Z}$ | 1
307 | 38375 = $5^{3}\times 307$ | 2 | 7 | 5 | $\mathbb{Z}/5\mathbb{Z}$ | 1
457 | 2285 = $5\times 457$ | 2 | 7 | 5 | $\mathbb{Z}/5\mathbb{Z}$ | 1
557 | 2785 = $5\times 557$ | 2 | 7 | 5 | $\mathbb{Z}/5\mathbb{Z}$ | 1
607 | 3035 = $5\times 607$ | 2 | 7 | 5 | $\mathbb{Z}/5\mathbb{Z}$ | 1
757 | 3785 = $5\times 757$ | 2 | 7 | 5 | $\mathbb{Z}/5\mathbb{Z}$ | 1
857 | 4285 = $5\times 857$ | 2 | 7 | 5 | $\mathbb{Z}/5\mathbb{Z}$ | 1
907 | 4535 = $5\times 907$ | 2 | 7 | 5 | $\mathbb{Z}/5\mathbb{Z}$ | 1
43 | 1075 = $5^{2}\times 43$ | 3 | -7 | 5 | $\mathbb{Z}/5\mathbb{Z}$ | 1
193 | 120625 = $5^{4}\times 193$ | 3 | -7 | 5 | $\mathbb{Z}/5\mathbb{Z}$ | 1
293 | 183125 = $5^{4}\times 293$ | 3 | -7 | 5 | $\mathbb{Z}/5\mathbb{Z}$ | 1
443 | 11075 = $5^{2}\times 443$ | 3 | -7 | 5 | $\mathbb{Z}/5\mathbb{Z}$ | 1
643 | 3215 = $5\times 643$ | 3 | -7 | 5 | $\mathbb{Z}/5\mathbb{Z}$ | 1
Table 4: $n\,=\,5^{e}q_{1}^{2}q_{2}\not\equiv\,\pm 1,\pm 7\,(\mathrm{mod}\,25)$
with $q_{1}$ or $q_{2}$ $\not\equiv\,\pm 7\,(\mathrm{mod}\,25)$
$q_{1}$ | $q_{1}\,(\mathrm{mod}\,5)$ | $q_{1}\,(\mathrm{mod}\,25)$ | $q_{2}$ | $q_{2}\,(\mathrm{mod}\,5)$ | $q_{2}\,(\mathrm{mod}\,25)$ | $n\,=\,5^{e}q_{1}^{2}q_{2}$ | $h_{k,5}$ | $C_{k,5}$ | rank $(C_{k,5}^{(\sigma)})$
---|---|---|---|---|---|---|---|---|---
2 | 2 | 2 | 3 | 3 | 3 | 60 = $5\times 2^{2}\times 3$ | 5 | $\mathbb{Z}/5\mathbb{Z}$ | 1
2 | 2 | 2 | 13 | 3 | 13 | 260 = $5\times 2^{2}\times 13$ | 5 | $\mathbb{Z}/5\mathbb{Z}$ | 1
2 | 2 | 2 | 53 | 3 | 3 | 1060 = $5\times 2^{2}\times 53$ | 5 | $\mathbb{Z}/5\mathbb{Z}$ | 1
2 | 2 | 2 | 23 | 3 | -2 | 460 = $5\times 2^{2}\times 23$ | 5 | $\mathbb{Z}/5\mathbb{Z}$ | 1
7 | 2 | 7 | 3 | 3 | 3 | 735 = $5\times 7^{2}\times 3$ | 5 | $\mathbb{Z}/5\mathbb{Z}$ | 1
17 | 2 | 17 | 3 | 3 | 3 | 108375 = $5^{3}\times 17^{2}\times 3$ | 5 | $\mathbb{Z}/5\mathbb{Z}$ | 1
17 | 2 | 17 | 23 | 3 | -2 | 33235 = $5\times 17^{2}\times 23$ | 5 | $\mathbb{Z}/5\mathbb{Z}$ | 1
37 | 2 | 12 | 3 | 3 | 3 | 20535 = $5\times 37^{2}\times 3$ | 5 | $\mathbb{Z}/5\mathbb{Z}$ | 1
37 | 2 | 12 | 13 | 3 | 13 | 88985 = $5\times 37^{2}\times 13$ | 5 | $\mathbb{Z}/5\mathbb{Z}$ | 1
47 | 2 | -3 | 3 | 3 | 3 | 33135 = $5\times 47^{2}\times 3$ | 5 | $\mathbb{Z}/5\mathbb{Z}$ | 1
47 | 2 | -3 | 13 | 3 | 13 | 143585 = $5\times 47^{2}\times 13$ | 5 | $\mathbb{Z}/5\mathbb{Z}$ | 1
47 | 2 | -3 | 23 | 3 | -2 | 254035 = $5\times 47^{2}\times 23$ | 5 | $\mathbb{Z}/5\mathbb{Z}$ | 1
47 | 2 | -3 | 43 | 3 | -7 | 474935 = $5\times 47^{2}\times 43$ | 5 | $\mathbb{Z}/5\mathbb{Z}$ | 1
107 | 2 | 7 | 23 | 3 | -2 | 1316635 = $5\times 107^{2}\times 23$ | 5 | $\mathbb{Z}/5\mathbb{Z}$ | 1
67 | 2 | 17 | 3 | 3 | 3 | 67335 = $5\times 67^{2}\times 3$ | 5 | $\mathbb{Z}/5\mathbb{Z}$ | 1
67 | 2 | 17 | 53 | 3 | 3 | 1189585 = $5\times 67^{2}\times 53$ | 5 | $\mathbb{Z}/5\mathbb{Z}$ | 1
97 | 2 | -3 | 43 | 3 | -7 | 2022935 = $5\times 97^{2}\times 43$ | 5 | $\mathbb{Z}/5\mathbb{Z}$ | 1
Table 5: $n\,=\,p^{e}q_{1}\not\equiv\,\pm 1\pm 7\,(\mathrm{mod}\,25)$ with
$p\,\not\equiv\,-1\,(\mathrm{mod}\,25)$, $q_{1}\,\not\equiv\,\pm 7\,(\mathrm{mod}\,25)$
$p$ | $p\,(\mathrm{mod}\,5)$ | $p\,(\mathrm{mod}\,25)$ | $q_{1}$ | $q_{1}\,(\mathrm{mod}\,5)$ | $q_{1}\,(\mathrm{mod}\,25)$ | $n\,=\,p^{e}q_{1}$ | $h_{k,5}$ | $C_{k,5}$ | rank $(C_{k,5}^{(\sigma)})$
---|---|---|---|---|---|---|---|---|---
59 | -1 | 9 | 2 | 2 | 2 | 118 = $59\times 2$ | 25 | $(5,5)$ | 1
19 | -1 | 19 | 3 | 3 | 3 | 57 = $19\times 3$ | 25 | $(5,5)$ | 1
59 | -1 | 9 | 23 | 3 | -2 | 1357 = $59\times 23$ | 25 | $(5,5)$ | 1
359 | -1 | 9 | 2 | 2 | 2 | 718 = $359\times 2$ | 25 | $(5,5)$ | 1
409 | -1 | 9 | 2 | 2 | 2 | 818 = $409\times 2$ | 25 | $(5,5)$ | 1
59 | -1 | 9 | 127 | 2 | 2 | 7493 = $59\times 127$ | 25 | $(5,5)$ | 1
109 | -1 | 9 | 23 | 3 | -2 | 2507 = $109\times 23$ | 25 | $(5,5)$ | 1
509 | -1 | 9 | 2 | 2 | 2 | 1018 = $509\times 2$ | 25 | $(5,5)$ | 1
709 | -1 | 9 | 2 | 2 | 2 | 1418 = $709\times 2$ | 25 | $(5,5)$ | 1
19 | -1 | 19 | 53 | 3 | 3 | 1007 = $19\times 53$ | 25 | $(5,5)$ | 1
Table 6: $n\,=\,5^{e}p\not\equiv\,\pm 1\pm 7\,(\mathrm{mod}\,25)$ with
$p\,\not\equiv\,-1(\mathrm{mod}\,25)$
$p$ | $n\,=\,5^{e}p$ | $p\,(\mathrm{mod}\,5)$ | $p\,(\mathrm{mod}\,25)$ | $h_{k,5}$ | $C_{k,5}$ | rank $(C_{k,5}^{(\sigma)})$
---|---|---|---|---|---|---
19 | 475 = $5^{2}\times 19$ | -1 | 19 | 25 | (5,5) | 1
29 | 145 = $5\times 29$ | -1 | 4 | 25 | (5,5) | 1
59 | 7375 = $5^{3}\times 59$ | -1 | 9 | 25 | (5,5) | 1
89 | 55625 = $5^{4}\times 89$ | -1 | 14 | 25 | (5,5) | 1
109 | 2725 = $5^{2}\times 109$ | -1 | 9 | 25 | (5,5) | 1
229 | 28625 = $5^{3}\times 229$ | -1 | 4 | 25 | (5,5) | 1
239 | 1195 = $5\times 239$ | -1 | 14 | 25 | (5,5) | 1
269 | 6725 = $5^{2}\times 269$ | -1 | 19 | 25 | (5,5) | 1
379 | 236875 = $5^{4}\times 379$ | -1 | 4 | 25 | (5,5) | 1
389 | 1945 = $5\times 389$ | -1 | 14 | 25 | (5,5) | 1
### 3.2 rank $(C_{k,5}^{(\sigma)})\,=\,2$
Table 1: $n\,=\,5^{e}l\not\equiv\,\pm 1\pm 7\,(\mathrm{mod}\,25)$ with
$l\,\not\equiv\,1(\mathrm{mod}\,25)$
$l$ | $n\,=\,5^{e}l$ | $l\,(\mathrm{mod}\,5)$ | $l\,(\mathrm{mod}\,25)$ | $h_{k,5}$ | $C_{k,5}$ | rank $(C_{k,5}^{(\sigma)})$
---|---|---|---|---|---|---
11 | 55 = $5\times 11$ | 1 | 11 | 25 | (5,5) | 2
41 | 5125 = $5^{3}\times 41$ | 1 | -9 | 25 | (5,5) | 2
61 | 38125 = $5^{4}\times 61$ | 1 | 11 | 25 | (5,5) | 2
71 | 1775 = $5^{2}\times 71$ | 1 | -4 | 25 | (5,5) | 2
131 | 655 = $5\times 131$ | 1 | 6 | 25 | (5,5) | 2
181 | 113125 = $5^{4}\times 181$ | 1 | 6 | 25 | (5,5) | 2
241 | 30125 = $5^{3}\times 241$ | 1 | -9 | 25 | (5,5) | 2
311 | 1555 = $5\times 311$ | 1 | 11 | 25 | (5,5) | 2
331 | 8275 = $5^{2}\times 331$ | 1 | 6 | 25 | (5,5) | 2
431 | 2155 = $5\times 431$ | 1 | 6 | 25 | (5,5) | 2
Table 2: $n\,=\,l^{e}q_{1}\,\equiv\,\pm 1\pm 7\,(\mathrm{mod}\,25)$ with
$l\,\equiv\,1\,(\mathrm{mod}\,5)$ and $q_{1}\,\equiv\,\pm 2,\pm 3,\pm 7\,(\mathrm{mod}\,25)$
$l$ | $l\,(\mathrm{mod}\,5)$ | $l\,(\mathrm{mod}\,25)$ | $q_{1}$ | $q_{1}\,(\mathrm{mod}\,5)$ | $q_{1}\,(\mathrm{mod}\,25)$ | $n\,=\,l^{e}q_{1}$ | $h_{k,5}$ | $C_{k,5}$ | rank $(C_{k,5}^{(\sigma)})$
---|---|---|---|---|---|---|---|---|---
31 | 1 | 6 | 2 | 2 | 2 | $31\times 2$ | 25 | $(5,5)$ | 2
131 | 1 | 6 | 23 | 3 | -2 | $131^{3}\times 23$ | 25 | $(5,5)$ | 2
181 | 1 | 6 | 47 | 2 | -3 | $181\times 47$ | 25 | $(5,5)$ | 2
11 | 1 | 11 | 3 | 3 | 3 | $11\times 3$ | 25 | $(5,5)$ | 2
41 | 1 | 16 | 23 | 3 | -2 | $41\times 23$ | 25 | $(5,5)$ | 2
191 | 1 | 16 | 2 | 2 | 2 | $191\times 2$ | 25 | $(5,5)$ | 2
41 | 1 | 16 | 47 | 2 | -3 | $41^{2}\times 47$ | 25 | $(5,5)$ | 2
311 | 1 | 11 | 2 | 2 | 2 | $311^{4}\times 2$ | 25 | $(5,5)$ | 2
Table 3: $n\,=\,l^{e}\equiv\,\pm 1\pm 7\,(\mathrm{mod}\,25)$ with
$l\,\equiv\,1(\mathrm{mod}\,25)$
$l$ | $n\,=\,l^{e}$ | $l\,(\mathrm{mod}\,5)$ | $l\,(\mathrm{mod}\,25)$ | $h_{k,5}$ | $C_{k,5}$ | rank $(C_{k,5}^{(\sigma)})$
---|---|---|---|---|---|---
151 | $151$ | 1 | 1 | 25 | (5,5) | 2
251 | $251^{2}$ | 1 | 1 | 25 | (5,5) | 2
601 | $601^{3}$ | 1 | 1 | 25 | (5,5) | 2
1051 | $1051^{4}$ | 1 | 1 | 25 | (5,5) | 2
1301 | $1301$ | 1 | 1 | 25 | (5,5) | 2
1451 | $1451^{2}$ | 1 | 1 | 25 | (5,5) | 2
1801 | $1801^{3}$ | 1 | 1 | 25 | (5,5) | 2
1901 | $1901^{4}$ | 1 | 1 | 25 | (5,5) | 2
2111 | $2111$ | 1 | 1 | 25 | (5,5) | 2
2131 | $2131^{2}$ | 1 | 1 | 25 | (5,5) | 2
## 4 Conjecture
In this article, we have classified some pure quintic fields
$\mathbb{Q}(\sqrt[5]{n})$; more precisely, we focused on the ones whose normal
closure $\mathbb{Q}(\sqrt[5]{n},\zeta_{5})$ possesses a $5$-class group of
type $(5,5)$, by treating the rank of the ambiguous classes, which can be
characterized by the radicand $n$.
To provide numerical examples, we used the system PARI/GP [16]. The
calculations carried out for certain forms of $n$ show that the $5$-class
group $C_{k,5}$ of the field $k$ is isomorphic to
$\mathbb{Z}/5\mathbb{Z}$, which leads us to state the following conjecture:
###### Conjecture 4.1.
Let $\Gamma\,=\,\mathbb{Q}(\sqrt[5]{n})$ be a pure quintic field, where $n$ is
a positive integer, $5^{th}$ power-free. Let $k\,=\Gamma(\zeta_{5})$ be the
normal closure of $\Gamma$. Denote by $C_{k,5}$ the 5-class group of $k$, let
$q_{1},q_{2}\,\equiv\,\pm 2\,(\mathrm{mod}\,5)$ be primes, and let
$e\in\\{1,2,3,4\\}$.
If the radicand $n$ takes one of the following forms:
$n\,=\,\begin{cases}q_{1}^{e_{1}}q_{2}\,\equiv\,\pm 1\pm
7\,(\mathrm{mod}\,25)&\text{ with }\quad q_{i}\,\equiv\,\pm
7\,(\mathrm{mod}\,25)\\\ 5^{e}q_{1}\not\equiv\,\pm 1\pm
7\,(\mathrm{mod}\,25)&\text{ with }\quad q_{1}\,\equiv\,\pm
7\,(\mathrm{mod}\,25)\\\ 5^{e}q_{1}^{2}q_{2}\not\equiv\,\pm 1\pm
7\,(\mathrm{mod}\,25)&\text{ with }\quad q_{1}\,\text{ or
}\,q_{2}\not\equiv\,\pm 7\,(\mathrm{mod}\,25)\end{cases}$ (5)
Then $C_{k,5}$ is a cyclic group of order $5$.
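The case analysis in Eq. (5) can be checked mechanically. Below is a minimal Python sketch of our own (assuming SymPy's `factorint`; all names are ours) that tests which of the three forms a given $5^{th}$ power-free radicand $n$ matches. Here we read the congruence class $\pm 1\pm 7\,(\mathrm{mod}\,25)$ as $n\bmod 25\in\\{1,7,18,24\\}$, i.e., $n\equiv\pm 1$ or $\pm 7\,(\mathrm{mod}\,25)$; this reading is our assumption.

```python
from sympy import factorint

PM17 = {1, 7, 18, 24}   # our reading of "+-1 +-7 (mod 25)": {+-1, +-7} mod 25
PM7 = {7, 18}           # residues of q = +-7 (mod 25)

def radicand_form(n):
    """Return which form of Eq. (5) the 5th-power-free n matches, else None."""
    f = factorint(n)                       # {prime: exponent}
    if any(e >= 5 for e in f.values()):
        return None                        # n is not 5th-power-free
    e5 = f.pop(5, 0)                       # exponent of 5 in n
    primes = sorted(f)
    in_classes = n % 25 in PM17
    # Form 1: n = q1^e1 * q2 with q1, q2 = +-2 (mod 5) and both = +-7 (mod 25)
    if e5 == 0 and len(primes) == 2 and in_classes:
        q1, q2 = primes
        if (f[q1] == 1 or f[q2] == 1) and all(
                q % 5 in (2, 3) and q % 25 in PM7 for q in primes):
            return "q1^e1 * q2"
    # Form 2: n = 5^e * q1 with q1 = +-7 (mod 25)
    if e5 >= 1 and len(primes) == 1 and not in_classes:
        q1 = primes[0]
        if f[q1] == 1 and q1 % 5 in (2, 3) and q1 % 25 in PM7:
            return "5^e * q1"
    # Form 3: n = 5^e * q1^2 * q2 with q1 or q2 != +-7 (mod 25)
    if e5 >= 1 and len(primes) == 2 and not in_classes:
        q1, q2 = primes
        if {f[q1], f[q2]} == {1, 2}:
            sq = q1 if f[q1] == 2 else q2  # the prime carrying the square
            other = q2 if sq == q1 else q1
            if sq % 25 not in PM7 or other % 25 not in PM7:
                return "5^e * q1^2 * q2"
    return None

# Radicands drawn from the tables above (30 matches no form):
for n in (2107, 175, 1075, 60, 735, 30):
    print(n, "->", radicand_form(n))
```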
## References
* [1] S. Aouissi, M. Talbi, M. C. Ismaili and A. Azizi. _Fields $\mathbb{Q}(\sqrt[3]{d},\zeta_{3})$ whose $3$-class group is of type $(9,3)$_, Int. J. Number Theory (2019).
* [2] F. Gerth III, _On $3$-class groups of cyclic cubic extensions of certain number fields_, J. Number Theory 8 (1976), No. 1, 84–98.
* [3] F. Gerth III, _On $3$-class groups of pure cubic fields,_ J. Reine Angew. Math. 278/279 (1975), 52–62.
* [4] F. Gerth III, _On $3$-class groups of certain pure cubic fields_, Bull. Austral. Math. Soc. 72 (2005), 471–476.
* [5] David Grant. _A proof of quintic reciprocity using the arithmetic of $y^{2}=x^{5}+\frac{1}{4}$_. ACTA ARITHMETICA (1996).
* [6] G. Gras, _Sur les $l$-classes d’idéaux dans les extensions cycliques relatives de degré premier impair $l$_, _Annales de l’institut Fourier_ (1973).
* [7] E. Hecke, _Algebraic Number Theory_, GTM 77, Springer-Verlag (1981).
* [8] M. Ishida, _The Genus Fields of Algebraic Number Fields_, Lecture Notes in Mathematics Vol. 555, Springer-Verlag (1976).
* [9] K. Iwasawa, _A note on the group of units of an algebraic number field_, J. Math. Pures Appl. (9) 35 (1956), 189–192.
* [10] K. Ireland and M. Rosen, _A Classical Introduction to Modern Number Theory_, Graduate Texts in Mathematics 84, Springer-Verlag (1982).
* [11] G. J. Janusz, _Algebraic Number Fields_, Academic Press, New York-London (1973).
* [12] M. Kulkarni, D. Majumdar, B. Sury, _$l$-class groups of cyclic extensions of prime degree $l$_, J. Ramanujan Math. Soc. 30, No. 4 (2015), 413–454.
* [13] H. Kobayashi, _Class numbers of pure quintic fields_ , Journal of Number Theory 160 (2016) 463-477.
* [14] C. Parry, _Class number relations in pure quintic fields_, Symposia Mathematica. 15 (1975), 475-485.
* [15] Lawrence C. Washington, _Introduction to Cyclotomic Fields_ , Springer-Verlag New York Inc (1982).
* [16] The PARI Group, PARI/GP, Version 2.4.9, Bordeaux, 2017, http://pari.math.u-bordeaux.fr.
Fouad ELMOUHIB
Department of Mathematics and Computer Sciences,
Mohammed 1st University,
Oujda - Morocco,
<EMAIL_ADDRESS>
Mohamed TALBI
Regional Center of Professions of Education and Training in the Oriental,
Oujda - Morocco,
<EMAIL_ADDRESS>
Abdelmalek AZIZI
Department of Mathematics and Computer Sciences,
Mohammed 1st University,
Oujda - Morocco,
<EMAIL_ADDRESS>
|
2024-09-04T02:54:56.183203 | 2020-03-02T11:10:22 | 2003.00764 | {
"authors": "Reggie C. Pantig, Emmanuel T. Rodulfo",
"full_text_license": null,
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"provenance": "arxiv-papers-0000.json.gz:25979",
"submitter": "Reggie Pantig",
"url": "https://arxiv.org/abs/2003.00764"
} | arxiv-papers | # Weak deflection angle of a dirty black hole
Reggie C. Pantig<EMAIL_ADDRESS>Physics Department, De La Salle
University-Manila, 2401 Taft Ave., 1004 Manila Philippines Emmanuel T.
Rodulfo<EMAIL_ADDRESS>
###### Abstract
In this paper, we present the weak deflection angle in a Schwarzschild black
hole of mass $m$ surrounded by dark matter of mass $M$ and thickness
$\Delta r_{s}$. The Gauss-Bonnet theorem, formulated for asymptotic
spacetimes, is found to be ill-behaved in the third-order of $1/\Delta r_{s}$
for very large $\Delta r_{s}$. Using finite distances for the radial
locations of the source and the receiver, we derived the expression for the
weak deflection angle up to the third order of $1/\Delta r_{s}$ using the
Ishihara et al. method. The result showed that the required dark matter thickness is
$\sim 2\sqrt{3mM}$ for the deviations in the weak deflection angle to occur.
Such a thickness requirement is better by a factor of 2 compared to that for
deviations in the shadow radius ($\sim\sqrt{3mM}$). This implies that using
the weak deflection angle to detect dark matter effects in one’s galaxy is
better than using any deviations in the shadow radius.
## I Introduction
One of the fruitful achievements of the human mind is the general theory of
relativity by Albert Einstein, which relates the phenomena of gravitation to
the geometry of spacetime. One consequence of the theory is the existence of
black holes that have long remained to be a theoretical construct, until the
release of the first image of the black hole shadow at the center of the M87
galaxy on April 10, 2019 Akiyama _et al._ (2019).
At present, dark matter is another entity that remains so mysterious and
elusive. The $\Lambda$CDM model of cosmology suggests that the content of our
universe is made up of 27$\%$ dark matter, which constitutes 85$\%$ of the
total mass Jarosik _et al._ (2011). Using Earth-based laboratories, several
scientists attempted to detect dark matter by direct means, but they gave
results inconsistent with each other (see Bernabei _et al._ (2010) and
Aprile _et al._ (2010)). Even long before these experiments, indirect search
through dark matter annihilation also revealed null results Desai _et al._
(2004); Peirani _et al._ (2004). This suggests that dark matter detection is
even more difficult than gravitational wave detection Abbott _et al._ (2016).
Emerging theoretical efforts have shown alternative means of possible dark
matter detection using the changes in the silhouette of a black hole. The
spacetimes of pure dark matter and a black hole were combined rigorously by Xu
in Ref. Xu _et al._ (2018), and this formalism of incorporating dark matter
density profiles into black hole spacetimes was applied immediately to our own and the M87
galaxy Hou _et al._ (2018a); Jusufi _et al._ (2019). Several studies are
also present that analyze the black hole shadow and deflection angle under the
influence of dark matter modeled as a perfect fluid Hou _et al._ (2018b);
Haroon _et al._ (2019). However, Konoplya in Ref. Konoplya (2019) considered
these black hole metrics as highly model-dependent. Using a less model-
dependent and agnostic view on dark matter, he estimated the condition for
dark matter to have some notable effect on the shadow radius.
Not only has the study of shadows from various black hole models gained much
attention from many researchers, but so has the study of gravitational lensing
(GL). GL has proved useful in probing many astrophysical objects. It has long
been used to probe the coevolution of supermassive black holes (SMBH) and
galaxies Peng _et al._ (2006). It is also used to probe exotic entities like
dark matter Trimble (1987); Metcalf and Madau (2001); Metcalf and Zhao (2002)
that permeates a whole galaxy, and even galaxy clusters Hoekstra and Jain
(2008); Ellis (2010). Astrophysical black holes, which can also be
approximately described by the Schwarzschild metric for useful estimates, are
studied extensively in terms of lensing and the relativistic images that it
produces Virbhadra and Ellis (2000); Virbhadra (2009).
Perhaps the most popular way to obtain the weak deflection angle is by using
the Gauss-Bonnet theorem (GBT) introduced in Ref. Gibbons and Werner (2008).
Since then, the GBT has been proven very useful in calculating the weak
deflection angle of various black hole models that demonstrate asymptotic
behavior Övgün _et al._ (2018a, b); Övgün (2018); Övgün _et al._ (2019a);
Övgün (2019a); Jusufi and Övgün (2018); Javed _et al._ (2019). Other notable
studies also existed that includes dark matter, phantom matter, and
quintessential energy, Övgün (2019b); Övgün _et al._ (2019b); Haroon _et
al._ (2019); De Leon and Vega (2019) on their analysis of the weak deflection
angle. Gravitational lensing by exotic matter or energy, which possibly
deviate from some well-known equation of state, is also explored in Kitamura
_et al._ (2014); Nakajima _et al._ (2014); Izumi _et al._ (2013).
Calculation of the weak deflection angle for non-asymptotic spacetimes seems
problematic using the GBT as far as its asymptotic form is concerned.
Recently, an improved analysis in Ref. Ishihara _et al._ (2016) made the
calculation possible by considering the finite distances of the source and
receiver. However, the correspondence between the GBT and finite-distance
involves the brute force of evaluating integrals, which contain the orbit
equation. This method, whether the spacetime is asymptotic or not, axially-
symmetric or not Ono _et al._ (2019); Li and Övgün (2020); Zhang _et al._
(2019), is also useful in strong gravitational lensing regime Ishihara _et
al._ (2017); Azreg-Aïnou _et al._ (2017).
It is interesting to investigate the effect of dark matter configuration used
in Ref. Konoplya (2019) on a different black hole phenomenon: weak
gravitational lensing. In particular, this paper seeks to derive the
expression for the weak deflection angle caused by the main property of dark
matter - its mass. Although the deviation in the shadow radius already
gives us the idea that null geodesics are affected, it is interesting to find
out whether the deviation in the black hole’s weak deflection angle offers
a better condition for dark matter detection.
We organize the paper as follows: Sect. II introduces the dirty Schwarzschild
metric alongside the description of the simple dark matter model in
consideration. In Sect. III, we present three possible cases that show the
different positions of the source and receiver relative to the dark matter
distribution and utilize the GBT. In Sect. IV, we proceed to calculate the
weak deflection angle by assuming finite distances of the source and the
receiver, using the Ishihara et al. method. Lastly, in Sect. V, we summarize
the results and indicate some possible future research direction. The metric
signature in this study is +2, and we use $G=c=1$.
## II Dirty Schwarzschild black hole
The term “dirty” black hole has its roots in Ref. Visser (1992, 1993); Macedo
_et al._ (2016); Leung _et al._ (1997); Krauss _et al._ (1996) which
describes a black hole surrounded by some astrophysical environment. These
dirty black holes can be categorized as follows: (a) those that came from a
specific Lagrangian, or a certain field theory which results to a metric as
the Einstein field equation is solved (for examples, see Refs. Shapere _et
al._ (1991); Dowker _et al._ (1992); Gibbons and ichi Maeda (1988); Allen
_et al._ (1990); Galt’sov and Ershov (1989); Lahiri (1992)), (b) generic black
hole metrics with sufficient generality Nielsen and Birnholz (2019) that came
from some hypothetical configuration (or derived from empirical data) of
astrophysical environment, and (c) dirty black holes which came from solutions
to a non-Einsteinian theory such as pseudo-complex general relativity Moffat
(1979); Hess and Greiner (2009); Mann and Moffat (1982).
Here, we give a brief overview and describe the dirty Schwarzschild black hole
used in Ref. Konoplya (2019), where the author studied dark matter effects on
the photonsphere and black hole shadow. The astrophysical environment in
consideration is a spherical shell of dark matter described only by its mass
M, inner radius $r_{s}$, and thickness $\Delta r_{s}$ (the subscript $s$
denotes shell). Further, the dark matter mass $M$ is treated as an additional
effective mass to the black hole while maintaining its non-interaction with
the electromagnetic field. Since the physical parameters of the dark matter
shell are hypothetical, such a dirty black hole falls to the second category
described above. The generic metric produced still has sufficient generality
because the black hole itself came from the vacuum solution of the Einstein
equation.
One can then assume a piecewise function to impose three domains Konoplya
(2019):
$\mathcal{M}(r)=\begin{cases}m,&r<r_{s};\\\ m+MG(r),&r_{s}\leq r\leq
r_{s}+\Delta r_{s};\\\ m+M,&r>r_{s}+\Delta r_{s}\end{cases}$ (1)
where
$G(r)=\left(3-2\frac{r-r_{s}}{\Delta r_{s}}\right)\left(\frac{r-r_{s}}{\Delta
r_{s}}\right)^{2}.$ (2)
The expression for $G(r)$ is chosen so that $\mathcal{M}(r)$ and
$\mathcal{M}^{\prime}(r)$ are continuous (see Fig. 1). Note also the
possibility of $M<0$.
Figure 1: Example of a choice for mass function. Here, $M=10m$, $r_{s}=2m$,
$\Delta r_{s}=20m$, $m=1$.
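As a quick sanity check on Eqs. (1)-(2), the following Python sketch (our own, not part of the original source) evaluates the mass function with the parameters of Fig. 1 and confirms numerically that $\mathcal{M}(r)$ and $\mathcal{M}^{\prime}(r)$ are continuous across the shell boundaries:

```python
import numpy as np

def G(r, rs, drs):
    """Interpolation profile of Eq. (2): G(r_s) = 0 and G(r_s + dr_s) = 1."""
    x = (r - rs) / drs
    return (3.0 - 2.0 * x) * x**2

def mass(r, m, M, rs, drs):
    """Piecewise effective mass of Eq. (1)."""
    return np.where(r < rs, m,
                    np.where(r <= rs + drs, m + M * G(r, rs, drs), m + M))

# Parameters of Fig. 1: M = 10 m, r_s = 2 m, Delta r_s = 20 m, with m = 1
m, M, rs, drs = 1.0, 10.0, 2.0, 20.0
eps = 1e-6
for r0 in (rs, rs + drs):                  # the two shell boundaries
    left = mass(r0 - eps, m, M, rs, drs)
    right = mass(r0 + eps, m, M, rs, drs)
    slope = (right - left) / (2.0 * eps)   # central-difference derivative
    print(f"r = {r0:5.1f}: jump = {right - left:.2e}, slope = {slope:.2e}")
```

Both the jump and the slope across each boundary come out numerically negligible, as expected from the choice of $G(r)$.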
The Schwarzschild metric is one of the famous, yet simplest, vacuum solution
of the Einstein field equations. Consider surrounding it with a spherical
shell of dark matter whose parameters are described by Eq. (2). Then we have
$d{s}^{2}=-f(r)dt^{2}+f(r)^{-1}dr^{2}+r^{2}\left(d\theta^{2}+\sin^{2}\theta
d\phi^{2}\right)$ (3)
where the metric function $f(r)$ is now given by
$f(r)=1-\frac{2\mathcal{M}(r)}{r}.$ (4)
The general scenario is depicted in Fig. 2 where $r_{s}\neq r_{h}$, which
makes the piecewise function in Eq. (1) very clear. Considering the first
domain, if $r$ is between $r_{h}$ and $r_{s}$, the dark matter outside $r$
has no bearing on any black hole phenomena that we want to analyze. The
situation is completely equivalent to one with no dark matter surrounding
the black hole. Suppose that $r$ describes the photonsphere radius $r_{ph}$,
then we simply have the trivial expression $r_{ph}=3m$.
Figure 2: A representation of a black hole surrounded by a thin shell of dark
matter where $r_{s}\neq r_{h}$.
The third domain of Eq. (1) seems an innocent-looking condition in which the
parameters $r_{s}$ and $\Delta r_{s}$ do not appear. The domain can be
interpreted in two ways: (1) an isolated black hole surrounded by some dark
matter shell, and (2) a black hole at the center of a galaxy (combined mass is
$m$) where both are surrounded by dark matter halo. Taking the example of
deriving the photonsphere radius, the former case gives $r_{ph}=3(m+M)$ as the
null geodesic has no way to cross the outer radius of the dark matter shell.
The study of the deflection angle is more applicable to the latter case, but
the third domain’s applicability rests on the assumption that the null
geodesic never crosses the region of dark matter halo. For this to happen, the
location of the source and the receiver must be tremendously remote from the
lensing galaxy. It is also easy to see why the first and third domains should
satisfy the Einstein equation.
For the second domain, the spacetime where the $r$-coordinate is located is
now different due to the introduction of dark matter mass $M$ and its physical
dimensions. Since the configuration is hypothetical, we don’t know the
specific field theory that would lead to the stress-energy tensor producing
such an expression of the metric when the Einstein equation is solved. The
same is true for black hole spacetimes fused with dark matter models whose
density profiles came from observational data Xu _et al._ (2018); Hou _et
al._ (2018a); Jusufi _et al._ (2019), hence they can be considered as dirty
black holes. It is possible, however, to satisfy the Einstein equation. The
key is to express the Einstein tensor in terms of appropriately chosen
orthogonal bases along with the stress-energy tensor Azreg-Aïnou (2014).
Assuming that the new spacetime expressed through the second domain of Eq. (1)
satisfies the Einstein equation, one can proceed with the analysis of black
hole properties. It is found out that it gives a non-trivial expression for
the photonsphere radius inside the dark matter shell, and is also true even
with the approximation $\Delta r_{s}>>r_{s}$ Konoplya (2019). Nevertheless, it
gave relevant insights to the effect of dark matter on the photonsphere
radius, and as a consequence, to the deviations in the shadow radius. Note
that being in the piecewise relation, the equation for the second domain
cannot be applied for values of $r$ beyond its lower and upper bounds. An
observer inside the dark matter shell has a different reality condition
compared to being outside it, or as the first domain applies.
Looking at Eq. (2) again, the function allows $r_{s}$ to have values smaller
than $r=2m$. For simplicity, one can set $r_{s}=2m$ and assume that the dark
matter shell is static. Being static means that the dark matter is not
affected by the radial pull of the black hole. Nevertheless, it amplifies the
change in the black hole’s geometry, which then manifests to the dynamics of
null and time-like particles.
The idea of an astrophysical environment that surrounds a black hole is common
in several studies Konoplya _et al._ (2019); Cardoso and Pani (2017). The
reasons include (1) verify whether the deviation observed is due to the new
physics happening near the horizon, or due to some effect of the astrophysical
environment; and (2) directly examine and study the influence of astrophysical
environment to the geometry of black hole. Only recently that the mass
function in Eq. (1) has been used to make some estimates of dark matter
effects to the radius of the black hole shadow Konoplya (2019).
## III Deflection angle in a dirty black hole using the Gauss-Bonnet theorem
Consider $D$ as a freely orientable two-dimensional curved surface described
by the Gaussian curvature $K$, and $dS$ is its infinitesimal area element (See
Fig. 3). It contains several boundaries that are differentiable curves,
denoted by $\partial D_{a}$ ($a=1,2,...,N$), with geodesic curvature
$\kappa_{g}$. Such boundary also contains the line element $\ell$ where its
sign remains consistent with the surface orientation. Let $\theta_{a}$
represents the exterior or jump angles along the vertices of the surface. Then
the Gauss-Bonnet theorem states that Ishihara _et al._ (2016, 2017); Do Carmo
(2016); Klingenberg (2013)
$\iint_{D}KdS+\sum\limits_{a=1}^{N}\int_{\partial
D_{a}}\kappa_{g}d\ell+\sum\limits_{a=1}^{N}\theta_{a}=2\pi.$ (5)
Figure 3: Schematic diagram of a curved surface for Gauss-Bonnet theorem. The
interior angles are denoted by $\varepsilon$ while the exterior (or ”jump”)
angles are $\theta$.
In the weak lensing regime, the Gauss-Bonnet theorem proves to be a powerful
tool in calculating the weak deflection angle $\hat{\alpha}$ as long as the
spacetime is asymptotically flat. If the spacetime is static, and spherically
symmetric (SSS), $\kappa_{g}=0$ along the boundary curves and the second term
in Eq. (5) vanishes. Assuming asymptotic flatness of the spacetime being
considered, the weak deflection angle can be derived as Gibbons and Werner
(2008); Javed _et al._ (2019); Ishihara _et al._ (2016, 2017); Werner
(2012); Ono _et al._ (2017)
$\hat{\alpha}=-\iint_{{}_{R}^{\infty}\square_{S}^{\infty}}KdS$ (6)
where $K$ is the Gaussian optical curvature that is integrated over the
quadrilateral ${}_{R}^{\infty}\square_{S}^{\infty}$, $dS$ is the surface area
element. These quantities that are important in GBT, are defined as follows:
$K=\frac{R_{r\phi r\phi}}{\gamma},$ (7) $dS=\sqrt{\gamma}drd\phi$ (8)
where $\gamma$ denotes the determinant of the spatial metric $\gamma_{ij}$ as
$i$ and $j$ run from 1 to 3. The spatial metric for an SSS spacetime can be
found by considering the null condition $ds^{2}=0$, and solving for $dt$
yields Övgün _et al._ (2018a)
$dt=\sqrt{\gamma_{ij}dx^{i}dx^{j}}.$ (9)
Rewriting Eq. (3) as
$d{s}^{2}=A(r)dt^{2}+B(r)dr^{2}+C(r)d\theta^{2}+D(r,\theta)d\phi^{2},$ (10)
the definition of $\gamma_{ij}$ is given by
$\gamma_{ij}dx^{i}dx^{j}=\frac{1}{A(r)}\left(B(r)dr^{2}+C(r)d\theta^{2}+D(r,\theta)d\phi^{2}\right).$
(11)
Dealing only with equatorial orbits, the Gaussian curvature $K$, being related
to the two-dimensional Riemann tensor, can be written in terms of affine
connection as Haroon _et al._ (2019); Övgün _et al._ (2019a)
$K=\frac{1}{\sqrt{\gamma}}\left[\frac{\partial}{\partial\phi}\left(\frac{\sqrt{\gamma}}{\gamma_{rr}}\Gamma_{rr}^{\phi}\right)-\frac{\partial}{\partial
r}\left(\frac{\sqrt{\gamma}}{\gamma_{rr}}\Gamma_{r\phi}^{\phi}\right)\right]$
(12)
and using Eq. (10), $K$ can be written as Ono _et al._ (2017)
$K=-\sqrt{\frac{A(r)^{2}}{B(r)D(r)}}\frac{\partial}{\partial
r}\left[\frac{1}{2}\sqrt{\frac{A(r)^{2}}{B(r)D(r)}}\frac{\partial}{\partial
r}\left(\frac{D(r)}{A(r)}\right)\right].$ (13)
With the area surface element $dS$ in Eq. (8), we can write Eq. (6) as
$\hat{\alpha}=\int_{0}^{\pi}\int_{r_{o}}^{\infty}\mathcal{K}drd\phi,$ (14)
where the Gaussian curvature term De Leon and Vega (2019) reads
$\mathcal{K}=-\frac{\partial}{\partial
r}\left[\frac{1}{2}\sqrt{\frac{A(r)^{2}}{B(r)D(r)}}\frac{\partial}{\partial
r}\left(\frac{D(r)}{A(r)}\right)\right].$ (15)
Also, $r_{o}$ denotes the radial distance of the photon’s closest approach to
the lensing black hole, and can be obtained through the orbit equation. For an
SSS metric such as the Schwarzschild metric, the photon’s orbit equation in
the equatorial plane reads
$\left(\frac{dr}{d\phi}\right)^{2}=\frac{D(r)(D(r)-A(r)b^{2})}{A(r)B(r)b^{2}}$
(16)
where $b$ is the impact parameter. For convenience and as usually done in
celestial mechanics, we can set $u=\frac{1}{r}$. Thus, the above can be
expressed as
$\left(\frac{du}{d\phi}\right)^{2}\equiv
F(u)=\frac{u^{4}D(u)(D(u)-A(u)b^{2})}{A(u)B(u)b^{2}}.$ (17)
The closest approach $u_{o}$ can be found by iteratively solving $u$ in Eq.
(17) and imposing the boundary condition
$\frac{du}{d\phi}\big{|}_{\phi=\frac{\pi}{2}}=0$. For example, for the
Schwarzschild metric, $u$ is given by
$u=\frac{\sin\phi}{b}+\frac{m}{b^{2}}(1+\cos^{2}\phi).$ (18)
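As an illustrative check (a SymPy sketch of our own), one can verify that Eq. (18) satisfies the orbit equation: for the Schwarzschild metric, Eq. (17) reduces to $F(u)=1/b^{2}-u^{2}+2mu^{3}$, and the residual of Eq. (18) vanishes through first order in $m$:

```python
import sympy as sp

phi, b, m, u = sp.symbols('phi b m u', positive=True)

# Schwarzschild case of Eq. (17): (du/dphi)^2 = 1/b^2 - u^2 + 2*m*u^3
F = 1/b**2 - u**2 + 2*m*u**3

# Candidate orbit, Eq. (18)
u_orbit = sp.sin(phi)/b + m*(1 + sp.cos(phi)**2)/b**2

residual = sp.diff(u_orbit, phi)**2 - F.subs(u, u_orbit)
# The O(m^0) and O(m^1) parts cancel identically:
print(sp.simplify(sp.expand(residual.series(m, 0, 2).removeO())))   # -> 0
```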
With $u_{o}$, we can now recast Eq. (14) as
$\hat{\alpha}=\int_{0}^{\pi}\int_{0}^{u_{o}}-\frac{\mathcal{K}}{u^{2}}dud\phi.$
(19)
### III.1 Case 1: $\mathcal{M}(r)=m$
Let $u_{S}$ and $u_{R}$ be the reciprocal of the source’s and receiver’s
radial distance from the lensing object. The use of Eq. (14) assumes that
these radial distances are so far away that the spacetime becomes
approximately Minkowskian. For the first domain in Eq. (1) to hold, the inner
radius $r_{s}$ of the dark matter shell should always be larger than the source’s
and receiver’s radial distance (See Fig. 4). With the metric function taking
the form
$f(r)=1-\frac{2m}{r},$ (20)
the Gaussian curvature and the area element in the equatorial plane reads
$\displaystyle K$ $\displaystyle=-\frac{2m}{r^{3}}+\frac{3m^{2}}{r^{4}}$
$\displaystyle dS$ $\displaystyle=r+3m+\mathcal{O}(m^{2})$ (21)
respectively. Using Eq. (19), we find
$\displaystyle\hat{\alpha}$
$\displaystyle=\int_{0}^{\pi}\int_{0}^{\frac{\sin(\phi)}{b}+\frac{m}{b^{2}}(1+\cos^{2}\phi)}-\frac{\mathcal{K}}{u^{2}}dud\phi$
$\displaystyle=\int_{0}^{\pi}\int_{0}^{\frac{\sin(\phi)}{b}+\frac{m}{b^{2}}(1+\cos^{2}\phi)}2m+3m^{2}u+\mathcal{O}\left(m^{3}\right)dud\phi$
$\displaystyle=\frac{4m}{b}+\frac{15\pi
m^{2}}{4b^{2}}+\mathcal{O}\left(m^{3}\right)$ (22)
which is the weak deflection angle of a Schwarzschild black hole up to the
second-order in $m$.
Figure 4: Quadrilateral embedded in curved spacetime while enclosing dark
matter shell.
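The computation leading to Eq. (22) can be reproduced symbolically. The sketch below (our own, assuming SymPy) builds the curvature term of Eq. (15) from $f(r)=1-2m/r$, recovers the integrand $2m+3m^{2}u$, and evaluates the double integral of Eq. (19):

```python
import sympy as sp

r, u, phi, b, m = sp.symbols('r u phi b m', positive=True)

f = 1 - 2*m/r                          # Eq. (20)
A, B, D = f, 1/f, r**2                 # equatorial metric functions, Eq. (10)

# Curvature term of Eq. (15)
Ks = -sp.diff(sp.Rational(1, 2)*sp.sqrt(A**2/(B*D))*sp.diff(D/A, r), r)

# Integrand of Eq. (19), expanded to second order in m
integrand = sp.series((-Ks/u**2).subs(r, 1/u), m, 0, 3).removeO()
print(sp.expand(integrand))            # -> 2*m + 3*m**2*u, as in Eq. (22)

u0 = sp.sin(phi)/b + m*(1 + sp.cos(phi)**2)/b**2      # Eq. (18)
inner = sp.integrate(integrand, (u, 0, u0))
alpha = sp.integrate(sp.expand(inner), (phi, 0, sp.pi))
print(sp.series(alpha, m, 0, 3))       # -> 4*m/b + 15*pi*m**2/(4*b**2) + O(m**3)
```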
### III.2 Case 2: $\mathcal{M}(r)=m+M$
Following the discussion in Sect. II about the possible interpretation of this
case, Fig. 5 shows the use of the third domain of Eq. (1). As we deal only
with a non-rotating black hole, the metric function is trivial:
$f(r)=1-\frac{2(m+M)}{r}.$ (23)
Using Eq. (14), we find
$\displaystyle\hat{\alpha}$
$\displaystyle=\int_{0}^{\pi}\int_{0}^{\frac{\sin(\phi)}{b}+\frac{m+M}{b^{2}}(1+\cos^{2}\phi)}-\frac{\mathcal{K}}{u^{2}}dud\phi$
$\displaystyle=\int_{0}^{\pi}\int_{0}^{\frac{\sin(\phi)}{b}+\frac{m+M}{b^{2}}(1+\cos^{2}\phi)}2(m+M)$
$\displaystyle+3(m+M)^{2}u+\mathcal{O}\left[(m+M)^{3}\right]dud\phi$
$\displaystyle=\frac{4(m+M)}{b}+\frac{15\pi(m+M)^{2}}{4b^{2}}+\mathcal{O}\left[(m+M)^{3}\right]$
(24)
The result in Eq. (III.2) shows clearly how the dark matter mass acts as an
additional effective mass to the black hole - it increases the value of the
weak deflection angle. It clearly shows how $M$ adds up to the spacetime
distortion that affects the path of the null geodesic. It also means that the
photon’s path never enters or leaves the dark matter shell as it travels from
the source to the receiver. Suppose that $M=0$ and we increase $m$; the null
geodesic would only shift to its new orbital equilibrium while being outside
the vicinity of the event horizon.
Figure 5: Geometrical configuration of the quadrilateral when S and R are
beyond the outermost radius of dark matter shell.
### III.3 Case 3: $\mathcal{M}(r)=m+MG(r)$
In this scenario, the null geodesic should remain inside the dark matter shell
for the second domain in Eq. (1) to hold. Such restriction is to determine how
the weak deflection angle should behave, knowing that the dark matter beneath
or above the null geodesic can affect it. This scenario is the same as the
restriction imposed in Ref. Konoplya (2019) for the photonsphere radius.
Suppose that $\Delta r_{s}$ is fixed where its thickness is not yet comparable
to dark matter halos and accommodates both the source and the receiver inside
the shell. In calculating the weak deflection angle using Eq. (19), the
asymptotic condition must apply, but the problem is that the source and
receiver locations then exceed the shell’s outer radius. Hence, given Eq.
(1), such a situation invalidates the use of Eq. (2). Therefore, $\Delta
r_{s}$ must be assumed to be very large.
Such approximation directs the analysis to the scenario involving the source
and the receiver inside a very large dark matter halo. This means there is an
assumption that $\Delta r_{s}>>\frac{1}{u_{S}}$ (and $u_{R}$). See Fig. 6.
Figure 6: Far approximation of S and R (i.e. $u_{S}<<1$ and $u_{R}<<1$) always
implies the approximation that $\Delta r_{s}>>\frac{1}{u_{S}}$ (or $u_{R}$)
for the 2nd domain in Eq. (1) to hold.
The orbit equation in this case is
$\displaystyle F(u)$ $\displaystyle=\frac{1}{b^{2}}+u^{2}(2mu-1)$
$\displaystyle+\frac{2M}{\Delta
r_{s}^{2}}(r_{s}u-1)^{2}\left(\frac{2r_{s}u}{\Delta r_{s}}-\frac{2}{\Delta
r_{s}}+3u\right).$ (25)
Note that there is no coupling between the black hole mass $m$ and dark matter
mass $M$ in Eq. (III.3), unlike the coupling between the spin parameter $a$
and $m$ in Kerr spacetime. The iterative solution of Eq. (III.3) to obtain
$u_{o}$ is then
$u=\frac{\sin\phi}{b}+\frac{m}{b^{2}}(1+\cos^{2}\phi)+\frac{3Mr_{s}^{2}}{b^{2}\Delta
r_{s}^{2}}.$ (26)
Hence, the integrand of Eq. (19), up to the third-order of $1/\Delta r_{s}$,
is expressed by
$\displaystyle\hat{\alpha}$
$\displaystyle=\int_{0}^{\pi}\int_{0}^{u_{o}}-\frac{\mathcal{K}}{u^{2}}dud\phi=\int_{0}^{\pi}\int_{0}^{u_{o}}\bigg{[}2m+\frac{6Mr_{s}^{2}}{\Delta
r_{s}^{2}}$ $\displaystyle-\frac{6mMr_{s}(2-3r_{s}u)}{\Delta
r_{s}^{2}}-\frac{4M}{\Delta r_{s}^{3}}\left(\frac{1}{u^{3}}-r_{s}^{3}\right)$
$\displaystyle-\frac{12mMr_{s}^{2}(1-r_{s}u)}{\Delta
r_{s}^{3}}+\mathcal{O}\left(m^{2},\frac{1}{\Delta
r_{s}^{4}}\right)\bigg{]}dud\phi.$ (27)
The result shows divergence in the fourth term if one attempts to evaluate the
integral. It means that, in the third order of $1/\Delta r_{s}$, if one insists
on being more precise in the calculation, the spacetime is not asymptotically
flat and the GBT cannot be used. The same situation also occurs in other
non-asymptotic spacetimes, such as those involving the cosmological constant,
e.g., the Kottler spacetime Kottler (1918). However, the cosmological constant
has no adjustable parameter that would make such an approximation possible.
The condition $\Delta r_{s}>>\frac{1}{u}$ gives the approximate expression if
$\Delta r_{s}$ is very large, at least comparable to the known size of dark
matter halos. Therefore, the manifestation of dark matter effects on the weak
deflection angle can be seen to begin at the second order of $1/\Delta
r_{s}$. Fortunately, there is asymptotic flatness and proceeding with the GBT
calculation, we find
$\displaystyle\hat{\alpha}$
$\displaystyle=\int_{0}^{\pi}\int_{0}^{u_{o}}-\frac{\mathcal{K}}{u^{2}}dud\phi=\int_{0}^{\pi}\int_{0}^{u_{o}}\bigg{[}2m+\frac{6Mr_{s}^{2}}{\Delta
r_{s}^{2}}$ $\displaystyle-\frac{6mMr_{s}(2-3r_{s}u)}{\Delta
r_{s}^{2}}\bigg{]}dud\phi=\frac{4m}{b}+\frac{12Mr_{s}^{2}}{b\Delta r_{s}^{2}}$
$\displaystyle-\frac{24mMr_{s}}{b\Delta r_{s}^{2}}+\frac{15\pi
m^{2}}{4b^{2}}+\frac{39\pi mMr_{s}^{2}}{2b^{2}\Delta r_{s}^{2}}.$ (28)
showing higher-order terms. It is now a matter of question whether the third
term in Eq. (III.3) can also be neglected due to the nature of the dark matter
model used in this study. To find out, we will use the Ishihara et al. method
Ishihara _et al._ (2016) to purposely show that we can compute the weak
deflection angle up to the third-order in $1/\Delta r_{s}$, and compare the
result to Eq. (III.3) at least in second-order of $1/\Delta r_{s}$. The
formalism developed is very useful in dealing with Kottler spacetime and
Schwarzschild-like solutions in the Weyl conformal gravity Mannheim and
Kazanas (1989) because it assumes that $u_{S}$ and $u_{R}$ are at finite
distance from the black hole. The computation using this method will be the
subject of the next section.
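Before proceeding, it is instructive to gauge the relative sizes of the terms in Eq. (28). The short numerical sketch below uses illustrative parameter values of our own choosing (geometric units, with $r_{s}\gg m$ so that the $mM$ cross terms are visibly subleading):

```python
import math

# Illustrative values (geometric units, our own choice), with r_s >> m:
m, M, rs, drs, b = 1.0, 10.0, 20.0, 1.0e4, 1.0e3

terms = {
    "4m/b":                           4*m/b,
    "12 M rs^2/(b drs^2)":            12*M*rs**2/(b*drs**2),
    "24 m M rs/(b drs^2)":            24*m*M*rs/(b*drs**2),
    "15 pi m^2/(4 b^2)":              15*math.pi*m**2/(4*b**2),
    "39 pi m M rs^2/(2 b^2 drs^2)":   39*math.pi*m*M*rs**2/(2*b**2*drs**2),
}
for name, value in terms.items():
    print(f"{name:30s} {value:.3e}")
```

For these values the $mM$ terms are an order of magnitude (or more) below the pure $m$ and $M$ terms, consistent with treating them as higher-order contributions.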
## IV Deflection angle using finite distance
In this section, we use the method in Ref. Ishihara _et al._ (2016) to
calculate the weak deflection angle of a black hole under the influence of
dark matter, as demonstrated in Fig. 6. From the GBT, the generalized
correspondence between the deflection angle and the surface integral of the
Gaussian curvature reads
$\hat{\alpha}=\phi_{RS}+\Psi_{R}-\Psi_{S}$
$=\int_{u_{R}}^{u_{o}}\frac{1}{\sqrt{F(u)}}du+\int_{u_{S}}^{u_{o}}\frac{1}{\sqrt{F(u)}}du+\Psi_{R}-\Psi_{S}$
(29)
where $F(u)$ and $u_{o}$ are given by Eq. (III.3) and Eq. (26) respectively.
The angles $\Psi_{S}$ and $\Psi_{R}$ in Eq. (29) are determined through the
inner product of the unit basis vector in the spacetime considered, and the
unit vector with respect to the lensing object. The unit basis vector $e^{i}$,
along the equatorial plane, is given by
$e^{i}=\left(\frac{dr}{dt},0,\frac{d\phi}{dt}\right)=\frac{d\phi}{dt}\left(\frac{dr}{d\phi},0,1\right)$
(30)
while the unit radial vector, which is along the radial direction from the
lens is
$R^{i}=\left(\frac{1}{\sqrt{\gamma_{rr}}},0,0\right).$ (31)
Hence, the inner product suggests the definition
$\cos\Psi\equiv\gamma_{ij}e^{i}R^{j}$
$\cos\Psi=\sqrt{\gamma_{rr}}\frac{A(r)b}{D(r)}\frac{dr}{d\phi}.$ (32)
Using $F(u)$ in Eq. (17), it is easy to see that
$\sin\Psi=\sqrt{\frac{A(r)}{D(r)}}b$ (33)
where it is clear that using $\sin\Psi$ is preferable to using $\cos\Psi$. A
Taylor expansion for very large $\Delta r_{s}$ then results in an angle
$\Psi$ calculated as
$\Psi=\arcsin(bu\sqrt{1-2mu})$ $-\frac{3bM}{\Delta
r_{s}^{2}}\frac{(r_{s}u-1)^{2}}{\sqrt{1-2mu}\sqrt{b^{2}u^{2}(2mu-1)+1}}$
$-\frac{2bM}{\Delta
r_{s}^{3}}\frac{(r_{s}u-1)^{3}}{u\sqrt{1-2mu}\sqrt{b^{2}u^{2}(2mu-1)+1}}+\mathcal{O}\left(\frac{1}{\Delta
r_{s}^{4}}\right).$ (34)
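A numerical spot-check of the truncation in Eq. (34) can be done directly, since inside the shell $\sin\Psi=bu\sqrt{f}$ is exact by Eq. (33). The Python sketch below (our own; the parameter values are illustrative) compares the exact angle with the series truncated at $1/\Delta r_{s}^{3}$ and shows the residual shrinking rapidly as $\Delta r_{s}$ grows:

```python
import math

def psi_exact(u, b, m, M, rs, drs):
    """Exact Psi = arcsin(b*u*sqrt(f)) inside the shell, from Eq. (33)."""
    r = 1.0/u
    x = (r - rs)/drs
    f = 1.0 - 2.0*(m + M*(3.0 - 2.0*x)*x*x)/r   # second domain of Eq. (1)
    return math.asin(b*u*math.sqrt(f))

def psi_series(u, b, m, M, rs, drs):
    """Truncation of Eq. (34) at order 1/drs^3."""
    f0 = 1.0 - 2.0*m*u
    root = math.sqrt(b*b*u*u*(2.0*m*u - 1.0) + 1.0)
    out = math.asin(b*u*math.sqrt(f0))
    out -= 3.0*b*M*(rs*u - 1.0)**2/(drs**2*math.sqrt(f0)*root)
    out -= 2.0*b*M*(rs*u - 1.0)**3/(drs**3*u*math.sqrt(f0)*root)
    return out

b, m, M, rs, u = 100.0, 1.0, 5.0, 2.0, 1.0e-3   # r = 1/u = 1000, in the shell
for drs in (1e4, 2e4, 4e4):
    err = abs(psi_exact(u, b, m, M, rs, drs) - psi_series(u, b, m, M, rs, drs))
    print(f"drs = {drs:.0e}: |exact - truncated| = {err:.3e}")
```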
Continuing, we find that
$\Psi_{R}-\Psi_{S}=(\Psi_{R}^{\text{Schw}}-\Psi_{S}^{\text{Schw}})-\frac{3bM}{\Delta
r_{s}^{2}}\left[\frac{(r_{s}u_{R}-1)^{2}}{\sqrt{1-b^{2}u_{R}^{2}}}+\frac{(r_{s}u_{S}-1)^{2}}{\sqrt{1-b^{2}u_{S}^{2}}}\right]$
$-\frac{2bM}{\Delta
r_{s}^{3}}\left[\frac{(r_{s}u_{S}-1)^{3}}{u_{R}\sqrt{1-b^{2}u_{R}^{2}}}+\frac{(r_{s}u_{S}-1)^{3}}{u_{S}\sqrt{1-b^{2}u_{S}^{2}}}\right]$
$+\frac{3bmM}{\Delta
r_{s}^{2}}\left[\frac{u_{R}\left(2b^{2}u_{R}^{2}-1\right)(r_{s}u_{R}-1)^{2}}{\left(1-b^{2}u_{R}^{2}\right)^{3/2}}+\frac{u_{S}\left(2b^{2}u_{S}^{2}-1\right)(r_{s}u_{S}-1)^{2}}{\left(1-b^{2}u_{S}^{2}\right)^{3/2}}\right]$
$+\frac{2bmM}{\Delta
r_{s}^{3}}\left[\frac{\left(2b^{2}u_{R}^{2}-1\right)(r_{s}u_{R}-1)^{3}}{\left(1-b^{2}u_{R}^{2}\right)^{3/2}}+\frac{\left(2b^{2}u_{S}^{2}-1\right)(r_{s}u_{S}-1)^{3}}{\left(1-b^{2}u_{S}^{2}\right)^{3/2}}\right]+\mathcal{O}\left(\frac{1}{\Delta
r_{s}^{4}}\right)$ (35)
where
$(\Psi_{R}^{\text{Schw}}-\Psi_{S}^{\text{Schw}})=[\arcsin(u_{R}b)+\arcsin(u_{S}b)-\pi]$
$-bm\left[\frac{u_{R}^{2}}{\sqrt{1-b^{2}u_{R}^{2}}}+\frac{u_{S}^{2}}{\sqrt{1-b^{2}u_{S}^{2}}}\right].$
(36)
Note the above expression contains a divergent term in the third-order of
$1/\Delta r_{s}$. Hence, the spacetime caused by the combination of black hole
and dark matter with very large $\Delta r_{s}$ is non-asymptotic and the limit
$u_{S}\rightarrow 0$ and $u_{R}\rightarrow 0$ is not allowed.
We now proceed to calculate the $\phi_{RS}$ part by evaluating the integrals
in Eq. (29). The function $F(u)$ in Eq. (III.3) results in an integrand of the
form
$\frac{1}{\sqrt{F(u)}}=\frac{b}{\sqrt{1-b^{2}u^{2}}}-\frac{b^{3}u^{3}m}{\left(1-b^{2}u^{2}\right)^{3/2}}$
$-\frac{3Mb^{3}}{\Delta
r_{s}^{2}}\frac{u(r_{s}u-1)^{2}}{\left(1-b^{2}u^{2}\right)^{3/2}}+\frac{9b^{5}mMu^{4}}{\Delta
r_{s}^{2}}\frac{(r_{s}u-1)^{2}}{\left(1-b^{2}u^{2}\right)^{5/2}}$
$-\frac{2Mb^{3}}{\Delta
r_{s}^{3}}\frac{(r_{s}u-1)^{3}}{\left(1-b^{2}u^{2}\right)^{3/2}}+\frac{6b^{5}mMu^{3}}{\Delta
r_{s}^{3}}\frac{(r_{s}u-1)^{3}}{\left(1-b^{2}u^{2}\right)^{5/2}}+\mathcal{O}\left(\frac{1}{\Delta
r_{s}^{4}}\right).$ (37)
The evaluation of the integral gives a cumbersome expression that reveals no
divergence, even in the third order of $1/\Delta r_{s}$; hence, its full form
can be safely omitted for brevity. Due to Eq. (29), the expressions for the
source and the receiver are essentially the same, namely
$\int\frac{1}{\sqrt{F(u)}}du=\arcsin
bu+\frac{m}{b}\frac{\left(b^{2}u^{2}-2\right)}{\sqrt{1-b^{2}u^{2}}}$
$+\frac{9M}{\Delta
r_{s}^{2}}\left[\left(m-\frac{2r_{s}}{3}\right)+\frac{5mr_{s}^{2}}{2b^{2}}\right]\arcsin(bu)-\frac{3M}{b\Delta
r_{s}^{2}}\frac{\left[b^{2}\left(-r_{s}^{2}u^{2}-2r_{s}u+1\right)+2r_{s}^{2}\right]}{\sqrt{1-b^{2}u^{2}}}$
$-\frac{3mMr_{s}}{2b\Delta
r_{s}^{2}}\frac{\left(48r_{s}b^{2}u^{2}+6b^{2}u+15r_{s}^{2}u-32r_{s}\right)}{\left(1-b^{2}u^{2}\right)^{3/2}}+\mathcal{O}\left(\frac{1}{\Delta
r_{s}^{3}}\right)+C$ (38)
where C is a constant. The expression for $\phi_{RS}$ includes the sum of two
evaluated integrals:
$\phi_{RS}=\phi_{RS}^{\text{Schw}}-\frac{9M}{\Delta
r_{s}^{2}}\left[\left(m-\frac{2r_{s}}{3}\right)+\frac{5mr_{s}^{2}}{2b^{2}}\right]\left[\arcsin(bu_{R})+\arcsin(bu_{S})\right]$
$+\frac{3M}{b\Delta
r_{s}^{2}}\left\\{\frac{\left[b^{2}\left(-r_{s}^{2}u_{R}^{2}-2r_{s}u_{R}+1\right)+2r_{s}^{2}\right]}{\sqrt{1-b^{2}u_{R}^{2}}}+\frac{\left[b^{2}\left(-r_{s}^{2}u_{S}^{2}-2r_{s}u_{S}+1\right)+2r_{s}^{2}\right]}{\sqrt{1-b^{2}u_{S}^{2}}}\right\\}$
$\displaystyle+\frac{3mMr_{s}}{2b\Delta
r_{s}^{2}}\bigl{[}\frac{\left(48r_{s}b^{2}u_{R}^{2}+6b^{2}u_{R}+15r_{s}^{2}u_{R}-32r_{s}\right)}{\left(1-b^{2}u_{R}^{2}\right)^{3/2}}$
$\displaystyle+\frac{\left(48r_{s}b^{2}u_{S}^{2}+6b^{2}u_{S}+15r_{s}^{2}u_{S}-32r_{s}\right)}{\left(1-b^{2}u_{S}^{2}\right)^{3/2}}\bigr{]}+\mathcal{O}\left(\frac{1}{\Delta
r_{s}^{3}}\right)$ (39)
where we introduced
$\phi_{RS}^{\text{Schw}}=\pi-\arcsin(u_{R}b)-\arcsin(u_{S}b)-\frac{m}{b}\left[\frac{\left(b^{2}u_{R}^{2}-2\right)}{\sqrt{1-b^{2}u_{R}^{2}}}+\frac{\left(b^{2}u_{S}^{2}-2\right)}{\sqrt{1-b^{2}u_{S}^{2}}}\right].$
(40)
We can finally calculate the deflection angle $\hat{\alpha}$ using Eq. (29).
Combining Eq. (35) and Eq. (IV), we find
$\hat{\alpha}\approx\frac{2m}{b}\left[\sqrt{1-b^{2}u_{R}^{2}}+\sqrt{1-b^{2}u_{S}^{2}}\right]$
$-\frac{9M}{\Delta
r_{s}^{2}}\left[\left(m-\frac{2r_{s}}{3}\right)+\frac{5mr_{s}^{2}}{2b^{2}}\right]\left[\arcsin(bu_{R})+\arcsin(bu_{S})\right]$
$+\frac{6Mr_{s}^{2}}{b\Delta
r_{s}^{2}}\left[\sqrt{1-b^{2}u_{R}^{2}}+\sqrt{1-b^{2}u_{S}^{2}}\right]-\frac{2bM}{\Delta
r_{s}^{3}}\left[\frac{(r_{s}u_{R}-1)^{3}}{u_{R}\sqrt{1-b^{2}u_{R}^{2}}}+\frac{(r_{s}u_{S}-1)^{3}}{u_{S}\sqrt{1-b^{2}u_{S}^{2}}}\right]-$
$\frac{3mM}{2\Delta
r_{s}^{2}}\left\\{\frac{\left[4b^{2}u_{R}+r_{s}(52b^{2}u_{R}^{2}+15r_{s}u_{R}-32)\right]}{b\left(1-b^{2}u_{R}^{2}\right)^{3/2}}+\frac{\left[4b^{2}u_{S}+r_{s}(52b^{2}u_{S}^{2}+15r_{s}u_{S}-32)\right]}{b\left(1-b^{2}u_{S}^{2}\right)^{3/2}}\right\\}$
$+\frac{2bmM}{\Delta
r_{s}^{3}}\left[\frac{\left(2b^{2}u_{R}^{2}-1\right)(r_{s}u_{R}-1)^{3}}{\left(1-b^{2}u_{R}^{2}\right)^{3/2}}+\frac{\left(2b^{2}u_{S}^{2}-1\right)(r_{s}u_{S}-1)^{3}}{\left(1-b^{2}u_{S}^{2}\right)^{3/2}}\right]+\mathcal{O}\left(\frac{1}{\Delta
r_{s}^{4}}\right).$ (41)
The next procedure is to impose the limit $u_{S}\rightarrow 0$ and
$u_{R}\rightarrow 0$ in Eq. (41), which is not problematic in asymptotic
spacetimes. However, this limit is not applicable or allowed in Eq. (41) since
$\hat{\alpha}$ will apparently diverge. Hence, the reciprocal of $u_{S}$ and
$u_{R}$ can be interpreted as finite distances but with very small values. For
the far source and receiver, it is then safe to impose $u_{S}<<1$ and
$u_{R}<<1$ Ishihara _et al._ (2016); Haroon _et al._ (2019). Eq. (41) then
becomes
$\hat{\alpha}\approx\frac{4m}{b}+\frac{12Mr_{s}^{2}}{b\Delta
r_{s}^{2}}-\frac{96mMr_{s}}{b\Delta r_{s}^{2}}$ $+\frac{2Mb}{\Delta
r_{s}^{3}}\left[\left(\frac{1}{u_{R}}+\frac{1}{u_{S}}\right)+2m\right]+\mathcal{O}\left(\frac{1}{\Delta
r_{s}^{4}}\right).$ (42)
Comparing this result to Eq. (III.3), we see that the first two terms are
identical; the third term differs, and there are additional terms in the third
order of $1/\Delta r_{s}$ which cannot be derived from the GBT given by Eq.
(14). Since the dark matter mass acts as an additional effective mass, the
$mM$ term can be interpreted as a higher-order term in mass and hence can
safely be neglected. The significant contribution to the weak deflection angle only
arises from the first two terms. Moreover, the source and the receiver are
inside the dark matter halo, hence $\Delta r_{s}>>1/u_{S}$ and $\Delta
r_{s}>>1/u_{R}$ is possible. Hence, we are left with
$\hat{\alpha}\approx\frac{4m}{b}+\frac{12Mr_{s}^{2}}{b\Delta r_{s}^{2}}$ (43)
and we see that dark matter has the effect of increasing the value of the weak
deflection angle; see Fig. 7. Moreover, an anomalous increase in the weak
deflection angle due to a very small $\Delta r_{s}$ might constrain the value
of the dark matter mass $M$. The weak deflection angle is asymptotic to
the Schwarzschild case as $\Delta r_{s}$ increases, as expected. Note that if
the mass $M$ is negative, which might represent an exotic matter with negative
kinetic term, the weak deflection angle decreases.
Figure 7: Weak deflection angle as $\Delta r_{s}$ varies. Here, $r_{s}=2m$
which coincides with the event horizon, and $b=1000m$.
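Fig. 7 and the threshold discussed next can be reproduced from Eq. (43) with a few lines of Python (a sketch of our own; the dark matter mass $M=10m$ is an illustrative choice). Setting the dark matter term equal to the leading term $4m/b$ with $r_{s}=2m$ gives $12M(2m)^{2}/(b\Delta r_{s}^{2})=4m/b$, i.e., $\Delta r_{s}=2\sqrt{3mM}$:

```python
import numpy as np

m, b = 1.0, 1000.0                 # geometric units; b = 1000 m as in Fig. 7
rs = 2.0*m                         # shell starting at the horizon
M = 10.0*m                         # illustrative dark matter mass

alpha_schw = 4.0*m/b
for drs in np.logspace(1, 4, 7):   # thicknesses from 10 m to 10^4 m
    alpha = 4.0*m/b + 12.0*M*rs**2/(b*drs**2)      # Eq. (43)
    print(f"drs = {drs:9.1f}  alpha = {alpha:.6e}"
          f"  (Schwarzschild {alpha_schw:.6e})")

# Thickness at which the dark matter term equals the leading 4m/b term:
print("deviation threshold 2*sqrt(3 m M) =", 2.0*np.sqrt(3.0*m*M))
```

The printed angles approach the Schwarzschild value as $\Delta r_{s}$ grows, mirroring the trend of Fig. 7.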
We recall the estimate in Ref. Konoplya (2019) that for notable dark matter
effects to occur in the shadow radius, the effective radius of the dark matter
distribution must be in the order of $\Delta r_{s}\sim\sqrt{3mM}$. This result
is found by the same analysis using Fig. 6. For a given value of $M$, it
revealed a very small value of $\Delta r_{s}$, which implies that dark matter
must be concentrated near the black hole to change the shadow radius
considerably. Further analysis also revealed that when the dark matter density
is very low ($\Delta r_{s}$ is very large), the effect of dark matter outside
the photon’s orbit can be safely neglected. Hence, the dark matter mass
beneath the photon’s orbit is the main contributor to any deviations. Such a
previous conclusion synchronizes with the results in Eq. (III.3) and Eq. (42).
Since dark matter acts as an additional effective mass to the black hole, we
can interpret the $mM$ term as a higher-order term in mass, and their coupling
can be ignored.
Surprisingly, Eq. (43) reveals that dark matter effects will occur in the weak
deflection angle (as deviation) when $\Delta r_{s}=2\sqrt{3mM}$. Since there
is an increase in the thickness requirement, dark matter detection via
deviations in the weak deflection angle of a black hole is better than using
deviations in the shadow radius. Unfortunately, however, such an increase is
still insufficient, at least for the technological capabilities we have today.
Consider an estimate for the dark matter mass in our galaxy, which is
$M\approx 1.0\times 10^{12}M_{\odot}$ Battaglia _et al._ (2005) while the
mass of the central black hole is around $m\approx 4.3\times 10^{6}M_{\odot}$.
It gives a required dark matter thickness of $\Delta r_{s}\approx 6.13\times
10^{12}$ m $\approx 2\times 10^{-4}$ pc to see any changes in the weak
deflection angle. Such a value is many orders of magnitude smaller than even
the core radius of the dark matter halo present in our galaxy
($r_{o}\approx 15.7-17.46$ kpc) de Oliveira _et al._ (2015).
## V Conclusion
In this paper, we present an analytic formula for the weak deflection angle
using a simple dark matter model that only incorporates its basic features,
such as mass and physical parameters. It is shown that the GBT, with its form
given by Eq. (19), can be used only to the second-order of $1/\Delta r_{s}$,
but is ill-behaved in the third order of $1/\Delta r_{s}$. When higher
precision in the calculation is required, the resulting apparent divergence is
exhibited and handled using the Ishihara et al. method. The expression
containing the second order in $1/\Delta r_{s}$ represents the approximate
condition under which the initial manifestation of the dark matter effect
occurs as a deviation in the weak deflection angle. We found that the required
thickness is twice that for shadow-radius deviations.
Interestingly, if one seeks to detect dark matter using the central black hole
in one’s galaxy, using the weak deflection angle is better than observing
deviations in the shadow radius. Although better, the deviation is still too
small to be detected by current technology.
Extensions of the current study to non-spherical or non-static dark matter
distributions, surrounding more complicated black hole metrics, are left for
future work.
## References
* Akiyama _et al._ (2019) Akiyama _et al._ , Astrophys. J. 875, L1 (2019).
* Jarosik _et al._ (2011) N. Jarosik _et al._ , Astrophys. Journal, Suppl. Ser. 192, 1 (2011), arXiv:1001.4744 .
* Bernabei _et al._ (2010) R. Bernabei _et al._ , Astro-Ph (2010), arXiv:1007.0595v1 .
* Aprile _et al._ (2010) E. Aprile _et al._ , Phys. Rev. Lett. 105, 9 (2010), arXiv:1005.0838 .
* Desai _et al._ (2004) S. Desai _et al._ , Phys. Rev. D - Part. Fields, Gravit. Cosmol. 70 (2004), 10.1103/PhysRevD.70.083523, arXiv:0404025 [hep-ex] .
* Peirani _et al._ (2004) S. Peirani _et al._ , Phys. Rev. D - Part. Fields, Gravit. Cosmol. 70 (2004), 10.1103/PhysRevD.70.043503, arXiv:0401378 [astro-ph] .
* Abbott _et al._ (2016) B. P. Abbott _et al._ , Phys. Rev. Lett. 116, 1 (2016), arXiv:1602.03837 .
* Xu _et al._ (2018) Z. Xu, X. Hou, X. Gong, and J. Wang, Journal of Cosmology and Astroparticle Physics 2018, 038 (2018).
* Hou _et al._ (2018a) X. Hou, Z. Xu, M. Zhou, and J. Wang, Journal of Cosmology and Astroparticle Physics 2018, 015 (2018a).
* Jusufi _et al._ (2019) K. Jusufi, M. Jamil, P. Salucci, T. Zhu, and S. Haroon, Phys. Rev. D 100, 1 (2019), arXiv:1905.11803 .
* Hou _et al._ (2018b) X. Hou, Z. Xu, and J. Wang, Journal of Cosmology and Astroparticle Physics 2018, 040 (2018b).
* Haroon _et al._ (2019) S. Haroon, M. Jamil, K. Jusufi, K. Lin, and R. B. Mann, Physical Review D 99, 044015 (2019).
* Konoplya (2019) R. A. Konoplya, Physics Letters, Section B: Nuclear, Elementary Particle and High-Energy Physics 795, 1 (2019), arXiv:1905.00064v2 .
* Peng _et al._ (2006) C. Y. Peng, C. D. Impey, H.-w. Rix, C. S. Kochanek, C. R. Keeton, E. E. Falco, J. Lehar, and B. A. McLeod, Astrophys. J. 649, 616 (2006), arXiv:0603248 [astro-ph] .
* Trimble (1987) V. Trimble, Annu. Rev. Astron. Astrophys. 25, 425 (1987).
* Metcalf and Madau (2001) R. B. Metcalf and P. Madau, Astrophys. J. 563, 9 (2001).
* Metcalf and Zhao (2002) R. B. Metcalf and H. Zhao, Astrophys. J. 567, L5 (2002).
* Hoekstra and Jain (2008) H. Hoekstra and B. Jain, Annu. Rev. Nucl. Part. Sci. 58, 99 (2008), arXiv:0805.0139 .
* Ellis (2010) R. S. Ellis, Philos. Trans. R. Soc. A Math. Phys. Eng. Sci. 368, 967 (2010).
* Virbhadra and Ellis (2000) K. Virbhadra and G. F. Ellis, Phys. Rev. D 62, 084003 (2000), arXiv:astro-ph/9904193 .
* Virbhadra (2009) K. Virbhadra, Phys. Rev. D 79, 083004 (2009), arXiv:0810.2109 [gr-qc] .
* Gibbons and Werner (2008) G. W. Gibbons and M. C. Werner, Class. Quantum Gravity 25, 1 (2008), arXiv:0807.0854 .
* Övgün _et al._ (2018a) A. Övgün, I. Sakalli, and J. Saavedra, J. Cosmol. Astropart. Phys. 2018 (2018a), 10.1088/1475-7516/2018/10/041, 1807.00388 .
* Övgün _et al._ (2018b) A. Övgün, K. Jusufi, and I. Sakalli, Ann. Phys. (N. Y). 399, 193 (2018b), arXiv:1805.09431 .
* Övgün (2018) A. Övgün, Phys. Rev. D 98, 34 (2018), arXiv:1805.06296 .
* Övgün _et al._ (2019a) A. Övgün, I. Sakalli, and J. Saavedra, Ann. Phys. (N. Y). 411, 167978 (2019a), arXiv:1806.06453 .
* Övgün (2019a) A. Övgün, Phys. Rev. D 99, 1 (2019a), arXiv:1902.04411 .
* Jusufi and Övgün (2018) K. Jusufi and A. Övgün, Phys. Rev. D 97, 24042 (2018).
* Javed _et al._ (2019) W. Javed, R. Babar, and A. Övgün, Phys. Rev. D 99, 084012 (2019), arXiv:1903.11657 .
* Övgün (2019b) A. Övgün, Universe 5 (2019b), 10.3390/universe5050115.
* Övgün _et al._ (2019b) A. Övgün, G. Gyulchev, and K. Jusufi, Ann. Phys. (N. Y). 406, 152 (2019b), arXiv:1806.03719 .
* De Leon and Vega (2019) K. De Leon and I. Vega, Phys. Rev. D 99, 1 (2019), arXiv:1903.06951 .
* Kitamura _et al._ (2014) T. Kitamura, K. Izumi, K. Nakajima, C. Hagiwara, and H. Asada, Phys. Rev. D - Part. Fields, Gravit. Cosmol. 89, 1 (2014), arXiv:1307.6637 .
* Nakajima _et al._ (2014) K. Nakajima, K. Izumi, and H. Asada, Phys. Rev. D - Part. Fields, Gravit. Cosmol. 90 (2014), 10.1103/PhysRevD.90.084026, arXiv:1404.2720 .
* Izumi _et al._ (2013) K. Izumi, C. Hagiwara, K. Nakajima, T. Kitamura, and H. Asada, Phys. Rev. D - Part. Fields, Gravit. Cosmol. 88, 1 (2013), arXiv:1305.5037 .
* Ishihara _et al._ (2016) A. Ishihara, Y. Suzuki, T. Ono, T. Kitamura, and H. Asada, Phys. Rev. D 94, 1 (2016), arXiv:1604.08308 .
* Ono _et al._ (2019) T. Ono, A. Ishihara, and H. Asada, Phys. Rev. D 99, 1 (2019), arXiv:1811.01739 .
* Li and Övgün (2020) Z. Li and A. Övgün, Phys. Rev. D 101, 1 (2020), arXiv:2001.02074 .
* Zhang _et al._ (2019) H.-X. Zhang, C. Li, P.-Z. He, Q.-Q. Fan, and J.-B. Deng, Eur. Phys. J. C 123 (2019), 10.1140/epjc/s10052-020-8022-7, arXiv:1912.07068 .
* Ishihara _et al._ (2017) A. Ishihara, Y. Suzuki, T. Ono, and H. Asada, Phys. Rev. D 95, 1 (2017), arXiv:1612.04044 .
* Azreg-Aïnou _et al._ (2017) M. Azreg-Aïnou, S. Bahamonde, and M. Jamil, Eur. Phys. J. C 77, 1 (2017), arXiv:1701.02239 .
* Visser (1992) M. Visser, Phys. Rev. D 46, 2445 (1992), arXiv:9203057 [hep-th] .
* Visser (1993) M. Visser, Phys. Rev. D 48, 5697 (1993), arXiv:9307194 [hep-th] .
* Macedo _et al._ (2016) C. F. Macedo, L. C. Leite, and L. C. Crispino, Phys. Rev. D 93, 1 (2016), arXiv:1511.08781 .
* Leung _et al._ (1997) P. T. Leung, Y. T. Liu, W. M. Suen, C. Y. Tam, and K. Young, Phys. Rev. Lett. 78, 2894 (1997), arXiv:9903031 [gr-qc] .
* Krauss _et al._ (1996) L. M. Krauss, H. Liu, and J. Heo, Phys. Rev. Lett. 77, 5164 (1996), arXiv:9610135 [hep-th] .
* Shapere _et al._ (1991) A. Shapere, S. Trivedi, and F. Wilczek, Mod Phys Lett A 06, 2677 (1991), https://doi.org/10.1142/S0217732391003122 .
* Dowker _et al._ (1992) F. Dowker, R. Gregory, and J. Traschen, Phys. Rev. D 45, 2762 (1992).
* Gibbons and ichi Maeda (1988) G. W. Gibbons and K. ichi Maeda, Nucl. Physics, Sect. B 298, 741 (1988).
* Allen _et al._ (1990) T. J. Allen, M. J. Bowick, and A. Lahiri, Phys. Lett. B 237, 47 (1990).
* Galt’sov and Ershov (1989) D. V. Galt’sov and A. A. Ershov, Phys. Lett. A 138, 160 (1989).
* Lahiri (1992) A. Lahiri, Phys. Lett. B 297, 248 (1992).
* Nielsen and Birnholz (2019) A. B. Nielsen and O. Birnholz, Astron. Nachrichten 340, 116 (2019).
* Moffat (1979) J. W. Moffat, Phys. Rev. D 19, 3554 (1979).
* Hess and Greiner (2009) P. O. Hess and W. Greiner, Int. J. Mod. Phys. E. 18, 51 (2009), https://doi.org/10.1142/S0218301309012045 .
* Mann and Moffat (1982) R. B. Mann and J. W. Moffat, Phys. Rev. D 26, 1858 (1982).
* Azreg-Aïnou (2014) M. Azreg-Aïnou, Phys. Rev. D - Part. Fields, Gravit. Cosmol. 90 (2014), 10.1103/PhysRevD.90.064041, arXiv:1405.2569 .
* Konoplya _et al._ (2019) R. A. Konoplya, Z. Stuchlík, and A. Zhidenko, Phys. Rev. D 99, 1 (2019), arXiv:1810.01295 .
* Cardoso and Pani (2017) V. Cardoso and P. Pani, Nat. Astron. 1, 586 (2017), arXiv:1709.01525 .
* Do Carmo (2016) M. P. Do Carmo, _Differential geometry of curves and surfaces: revised and updated second edition_ (Courier Dover Publications, 2016).
* Klingenberg (2013) W. Klingenberg, _A course in differential geometry_ , Vol. 51 (Springer Science & Business Media, 2013).
* Werner (2012) M. C. Werner, Gen. Relativ. Gravit. 44, 3047 (2012).
* Ono _et al._ (2017) T. Ono, A. Ishihara, and H. Asada, Phys. Rev. D 96 (2017), 10.1103/PhysRevD.96.104037, arXiv:1704.05615 .
* Kottler (1918) F. Kottler, Annalen der Physik 361, 401 (1918), https://onlinelibrary.wiley.com/doi/pdf/10.1002/andp.19183611402 .
* Mannheim and Kazanas (1989) P. D. Mannheim and D. Kazanas, Astrophys. J. 342, 635 (1989).
* Battaglia _et al._ (2005) G. Battaglia, A. Helmi, H. Morrison, P. Harding, E. W. Olszewski, M. Mateo, K. C. Freeman, J. Norris, and S. A. Shectman, Mon. Not. R. Astron. Soc. 364, 433 (2005), arXiv:0506102 [astro-ph] .
* de Oliveira _et al._ (2015) P. L. de Oliveira, J. A. de Freitas Pacheco, and G. Reinisch, Gen. Relativ. Gravit. 47 (2015), 10.1007/s10714-014-1849-1, arXiv:1501.01008 .
|
2024-09-04T02:54:56.195532 | 2020-03-02T11:18:35 | 2003.00768 | {
"authors": "Mehmet Yamac, Mete Ahishali, Serkan Kiranyaz, Moncef Gabbouj",
"full_text_license": null,
"license": "Creative Commons Zero - Public Domain - https://creativecommons.org/publicdomain/zero/1.0/",
"provenance": "arxiv-papers-0000.json.gz:25980",
"submitter": "Mehmet Yamac",
"url": "https://arxiv.org/abs/2003.00768"
} | arxiv-papers | # Convolutional Sparse Support Estimator Network (CSEN)
From energy efficient support estimation to learning-aided Compressive Sensing
Mehmet Yamaç Tampere University, Faculty of Information Technology and
Communication Sciences, Tampere, Finland Mete Ahishali Tampere University,
Faculty of Information Technology and Communication Sciences, Tampere, Finland
Serkan Kiranyaz Department of Electrical Engineering, Qatar University, Qatar
Moncef Gabbouj Tampere University, Faculty of Information Technology and
Communication Sciences, Tampere, Finland
###### Abstract
Support estimation (SE) of a sparse signal refers to finding the location
indices of the non-zero elements in a sparse representation. Most of the
traditional approaches dealing with the SE problem are iterative algorithms based
on greedy methods or optimization techniques. Indeed, a vast majority of them
use sparse signal recovery techniques to obtain support sets instead of
directly mapping the non-zero locations from denser measurements (e.g.,
Compressively Sensed Measurements). This study proposes a novel approach for
learning such a mapping from a training set. To accomplish this objective, the
Convolutional Support Estimator Networks (CSENs), each with a compact
configuration, are designed. The proposed CSEN can be a crucial tool for the
following scenarios: (i) Real-time and low-cost support estimation can be
applied in any mobile and low-power edge device for anomaly localization,
simultaneous face recognition, etc. (ii) CSEN’s output can directly be used as
“prior information” which improves the performance of sparse signal recovery
algorithms. The results over the benchmark datasets show that state-of-the-art
performance levels can be achieved by the proposed approach with a
significantly reduced computational complexity.
###### Index Terms:
Support Recovery, Sparse Signal Representation, Learned Compressive Sensing.
## I Introduction
Sparse Representation or Sparse Coding (SC) denotes representing a signal as a
linear combination of only a small subset of a pre-defined set of waveforms.
Compressive Sensing (CS) [1, 2] can be seen as a special form of SC in which a signal $\mathbf{s}\in\mathbb{R}^{d}$, which has a sparse representation $\mathbf{x}\in\mathbb{R}^{n}$ in a dictionary or basis $\mathbf{\Phi}\in\mathbb{R}^{d\times n}$, can be acquired in a compressed manner using a linear dimensionality-reduction matrix $\mathbf{A}\in\mathbb{R}^{m\times d}$. Therefore, this signal can also be represented in a sparse manner in the dictionary $\mathbf{D}\in\mathbb{R}^{m\times n}$ (called the equivalent dictionary [3], where $m\ll n$, and typically assumed to be full-row rank), which is the matrix multiplication of the measurement matrix $\mathbf{A}$ and the pre-defined dictionary $\mathbf{\Phi}$, i.e., $\mathbf{D}=\mathbf{A}\mathbf{\Phi}$. In the SC literature, signal synthesis
refers to producing a signal, $\mathbf{y}=\mathbf{Dx}\in\mathbb{R}^{m}$, using
a sparse code, $\mathbf{x}\in\mathbb{R}^{n}$ and a pre-specified dictionary,
$\mathbf{D}$. On the other hand, signal analysis deals with finding the sparse
codes, $\mathbf{x}$ from the given measurements, $\mathbf{y}$, with respect to
the dictionary $\mathbf{D}$ [4]. Sparse Support Estimation or simply Support
Estimation (SE) [5, 6, 7], refers to finding the location indices of non-zero
elements in SCs. In other words, it is the localization of the smallest subset
of the atoms, which are the basis waveforms in the dictionary, whose linear
combination represents the given signal well enough. On the other hand, sparse
Signal Recovery (SR) refers to finding the values of these non-zero elements
of SCs. SE and SR are intimately linked: once the SE of a sparse signal is performed, SR becomes trivial via ordinary Least-Squares optimization. Actually, this is the main principle of most greedy algorithms [8, 9].
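To make this principle concrete, the following is a minimal NumPy sketch of OMP-style greedy recovery, assuming the target sparsity $k$ is known in advance (an assumption of this illustration, not a requirement of all greedy methods): each iteration picks the atom most correlated with the residual (a support estimation step) and then refits the selected atoms by ordinary Least Squares.

```python
import numpy as np

def omp(D, y, k):
    """Greedy OMP sketch: pick the atom most correlated with the residual,
    then refit all selected atoms by least squares."""
    support, r = [], y.astype(float).copy()
    x_s = np.zeros(0)
    for _ in range(k):
        i = int(np.argmax(np.abs(D.T @ r)))      # support estimation step
        if i not in support:
            support.append(i)
        # Least-squares refit on the current support.
        x_s, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        r = y - D[:, support] @ x_s              # residual update
    x = np.zeros(D.shape[1])
    x[support] = x_s
    return x, sorted(support)
```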
Figure 1: The proposed CSEN with two potential applications: a) (bottom-left)
Sparse Support Estimation b) (top-middle) Learning-aided CS sparse signal
reconstruction with CSEN vs. (top-right) traditional recovery methods; i) OMP
[8] and ii) $\ell_{1}$-minimization.
The literature that purely targets SE is relatively short compared to
extensive studies on sparse signal recovery [10]. Many existing works first apply a coarse SR using existing SR methods, from which SE can then easily be performed if SE is the main objective. Indeed, there are many applications
where computing the support set is more important than computing the
magnitudes of SCs. For instance, in an SR based classification (SRC) [11],
such as face recognition [12], the training samples are stacked in the
dictionary in such a way that a subset of the columns consists of the samples
of a specific class. As another example, in cognitive radio systems, only a
small ratio of all spectrum is occupied for a given time interval. Therefore,
finding the occupied spectrum (i.e., the support set) is the primary concern
[13, 14]. Similarly, in a ground-penetrating radar imaging system, finding the
location of the target is more important than predicting the actual signal
magnitudes [15].
In this study, a novel Convolutional Support Estimator Network (CSEN) is
proposed with two primary objectives as in Figure 1. First, this approach
enables learning-based non-iterative support estimation with minimal
computational complexity. To accomplish this, we use two compact Convolutional
Neural Network (CNN) configurations, both of which are designed without the
dense layers [16]. The proposed CSENs are trained to optimize the support
estimations. To our knowledge, this is the first study that proposes a
learning-based approach for non-iterative support estimation. Hence, in order
to perform comparative evaluations, we train the following state-of-the-art CS
signal reconstruction deep neural networks as the support estimators: 1)
ReconNet [17] that originally works on the spatial domain and, 2) the Learned
AMP (LAMP) [18], that is the deep version of AMP [19], which is the state-of-
the-art optimization scheme working in the sparse domain. An extensive set of
experiments over three benchmark datasets has demonstrated that the proposed
CSEN approach outperforms both deep counterparts, especially when dealing with structural sparse signals. In the first experimental setup, we simulate a CS
system performing data acquisition from the MNIST data set at different
measurement rates. Moreover, the proposed SE system is shown to improve the SE
performance when compared to its deep counterparts, especially at low measurement rates and under imperfect sparsity (i.e., in the case of CS of approximately sparse signals or noisy environments). Furthermore, CSEN is tested on a well-
known support recovery problem, where face recognition is performed based on
sparse codes [11]. We use two benchmark datasets, Yale-B [20], and CelebA
[21], in our experiments. Comparative evaluations performed against the two
state-of-the-art dictionary-based (representation-based) face recognition
methods in the literature, SR based face recognition [11], and collaborative
learning [22], have demonstrated that the proposed CSEN approach outperformed
both methods.
As for the second objective, we focus on an alternative usage of CSENs.
Instead of using them as support estimators, which naturally requires the
hard-thresholding of the network outputs, these outputs can be directly used
as prior information about the sparse signals. It is a well-known fact that
having prior information about the non-zero locations such as the probability
map, $p(x)$ (or simply $\mathbf{p}$), on the support set, could improve the
conventional signal recovery algorithms [23]. However, in many cases, it is
not clear how to obtain such prior information in advance. The most common
usage of such a system appears in dynamical sparse recovery [24], where
previous support estimations can be used as priors for the next estimation. In
this study, we have demonstrated that CSEN outputs can be a better alternative
for the prior information of the non-zero locations. Therefore, CSEN is now
used as a learning-aided CS reconstruction scheme, where the prior information
comes directly from the CSEN outputs. A wide range of experiments shows that
this approach has great potential to improve SR performance of traditional
approaches for sparse signal recovery problems. As mentioned above, we used CS
imaging simulation, but this time signal reconstruction error is compared with
state-of-the-art conventional SR approaches. Figure 1 illustrates a
representative graph of two different applications of CSENs; a) performing SE
from CS measurement vector, $\mathbf{y}$, and b) the output of CSEN is used as
the side information, $\mathbf{p}$, which gives the estimated probability of
being non-zero for each index. In this simple illustration we assume that the
handwritten digit ’$2$’ is sparse in the spatial domain such that
$\mathbf{\Phi}=\mathbf{I}$; therefore,
$\mathbf{D}=\mathbf{A}\mathbf{I}=\mathbf{A}$ and $\mathbf{B}$ is a denoiser
matrix such as $\mathbf{D}^{T}$, or
$\left(\mathbf{D}^{T}\mathbf{D}+\lambda\mathbf{I}\right)^{-1}\mathbf{D}^{T}$
where $\lambda$ is the regularization parameter.
The rest of the paper is organized as follows. In Section II, we start by
giving mathematical notation that is used in this article. A brief overview of
sparse representation and compressive sensing theory, with emphasis on state-of-the-art sparse signal recovery and support estimation techniques, will be given in Section III. In the same section, we also introduce the case studies of support estimation that are chosen for this work. Then we discuss the limitations of existing support estimator techniques. In Section IV, we present the proposed learning-based SE scheme and the two compact CSEN models. Experimental evaluations of the study are given in Section V, which can be divided into three main categories according to the case studies: (i) basic support estimation performance evaluation on the MNIST dataset, performed to compare CSENs with the aforementioned state-of-the-art deep networks; (ii) SE-based face recognition performance evaluation of the proposed SE, with an emphasis on how CSEN-based SE can improve the classical representation-based approaches; (iii) performance comparison of classical compressive sensing reconstruction techniques and the proposed learning-aided SR in terms of both speed and reconstruction accuracy. Following the theoretical and experimental analyses, in Section VI we present a more detailed discussion on how the proposed scheme differs from the state-of-the-art SR and SE techniques, its pros and cons, and possible usage scenarios, with an emphasis on the flexibility of the proposed CSEN in different scenarios. Finally, the conclusions are drawn in Section VII.
## II Notations
In this work, we define the $\ell_{p}$ norm of any vector $\mathbf{x}\in\mathbb{R}^{n}$ as $\left\|\mathbf{x}\right\|_{\ell_{p}^{n}}=\left(\sum_{i=1}^{n}\left|x_{i}\right|^{p}\right)^{1/p}$ for $p\geq 1$. The $\ell_{0}$-norm of the vector $\mathbf{x}\in\mathbb{R}^{n}$ is given as $\left\|\mathbf{x}\right\|_{\ell_{0}^{n}}=\lim_{p\to 0}\sum_{i=1}^{n}\left|x_{i}\right|^{p}=\#\{j:x_{j}\neq 0\}$, and the $\ell_{\infty}$-norm is defined as $\left\|\mathbf{x}\right\|_{\ell_{\infty}^{n}}=\max_{i=1,...,n}\left(\left|x_{i}\right|\right)$. A signal $\mathbf{s}$ can be defined as strictly $k$-sparse if it can be represented with fewer than $k+1$ non-zero coefficients in a proper basis $\mathbf{\Phi}$, i.e., $\left\|\mathbf{x}\right\|_{0}\leq k$, where $\mathbf{s}=\mathbf{\Phi}\mathbf{x}$. We also define a sparse support set, or simply support set, $\Lambda\subset\{1,2,3,...,n\}$, as the set of indices of the non-zero coefficients, i.e., $\Lambda:=\{i:x_{i}\neq 0\}$. The complement of the support set $\Lambda$ with respect to $\{1,2,3,...,n\}$ is given as $\Lambda^{c}=\{1,2,3,...,n\}\setminus\Lambda$. In this manner, $\mathbf{x}_{\Lambda}\in\mathbb{R}^{\left|\Lambda\right|}$ is the vector consisting of the non-zero elements of $\mathbf{x}\in\mathbb{R}^{n}$, where $\left|\Lambda\right|$ refers to the number of non-zero coefficients. Similarly, $\mathbf{M}_{\Lambda}\in\mathbb{R}^{m\times\left|\Lambda\right|}$ denotes the matrix consisting of the columns of a matrix $\mathbf{M}\in\mathbb{R}^{m\times n}$ indexed by the support ${\Lambda}$.
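As a quick illustration of this notation, the following NumPy snippet computes the $\ell_{0}$, $\ell_{1}$ and $\ell_{\infty}$ norms and the support set of a small sparse vector; the vector values are arbitrary examples.

```python
import numpy as np

x = np.array([0.0, 1.3, 0.0, 0.0, -0.7, 0.0, 2.1])

l0 = np.count_nonzero(x)        # ||x||_0 = 3 (number of non-zeros)
l1 = np.sum(np.abs(x))          # ||x||_1
linf = np.max(np.abs(x))        # ||x||_inf
support = np.flatnonzero(x)     # Lambda = {1, 4, 6} (0-based indices)

print(l0, l1, linf, support)
```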
## III Related Work
CS theory claims that a signal $\mathbf{s}$ can be sensed using far fewer
linear measurements $m$ than traditional Nyquist/Shannon-based methods use,
$d$, i.e.,
$\mathbf{y}=\mathbf{A}\mathbf{s}=\mathbf{A}\mathbf{\Phi}\mathbf{x}=\mathbf{D}\mathbf{x},$
(1)
where $\mathbf{A}\in\mathbb{R}^{m\times d}$ is the measurement matrix and
$\mathbf{D}\in\mathbb{R}^{m\times n}$ is called the equivalent dictionary. It
can be demonstrated that the sparse representation,
$\min_{\mathbf{x}}\left\|\mathbf{x}\right\|_{0}~\text{subject to}~\mathbf{D}\mathbf{x}=\mathbf{y}$ (2)
is unique if $m\geq 2k$ [25] and $\left\|\mathbf{x}\right\|_{0}\leq k$. In
brief, the uniqueness of the sparse representation in Eq. (2) shows that any
$k$-sparse signal pair can still be distinguished in the equivalent
dictionary, $\mathbf{D}$. However, the problem in Eq. (2) is non-convex and known to be NP-hard. The most common approach is the
relaxation of the $\ell_{0}$ norm to the closest convex norm, which is
$\ell_{1}$-norm,
$\min_{\mathbf{x}}\left\|\mathbf{x}\right\|_{1}~\text{s.t.}~\mathbf{x}\in\mho\left(\mathbf{y}\right)$ (3)
where $\mho\left(\mathbf{y}\right)=\left\{\mathbf{x}:\mathbf{D}\mathbf{x}=\mathbf{y}\right\}$, which is known as Basis Pursuit [26]. The surprising result of CS theory is that, even when exact recovery of the signal $\mathbf{s}$ is not possible using the minimum-norm solution, a tractable solution is possible using (3) when $\mathbf{D}$ satisfies certain properties, such as the Restricted Isometry Property [27], and $m>k\log(n/k)$.
However, the signal of interest, $\mathbf{x}$, is not perfectly $k$-sparse but only approximately sparse in most cases. In addition, CS measurements are most probably corrupted by additive noise during data acquisition, quantization, etc. As a result, we observe $\mathbf{y}=\mathbf{D}\mathbf{x}+\mathbf{z}$, where $\mathbf{z}$ is additive noise. In this case, the constraint can be relaxed by setting $\mho\left(\mathbf{y}\right)=\left\{\mathbf{x}:\left\|\mathbf{D}\mathbf{x}-\mathbf{y}\right\|_{2}\leq\epsilon\right\}$, which is known as Basis Pursuit Denoising [28], or by setting $\mho\left(\mathbf{y}\right)=\left\{\mathbf{x}:\left\|\mathbf{D}^{T}\left(\mathbf{y}-\mathbf{D}\mathbf{x}\right)\right\|_{\infty}\leq\lambda\right\}$, which is known as the Dantzig Selector [29]. In the noisy case, even though exact recovery of the sparse signal is not possible, stable recovery is well studied in the literature for Basis Pursuit Denoising [30] and the Dantzig Selector [31, 32]. By stable recovery, we mean that a stable solution $\hat{\mathbf{x}}$ obeys $\left\|\mathbf{x}-\hat{\mathbf{x}}\right\|\leq\kappa\left\|\mathbf{z}\right\|$, where $\kappa$ is a small constant. Another related formulation is
$\min_{\mathbf{x}}\left\{\left\|\mathbf{D}\mathbf{x}-\mathbf{y}\right\|_{2}^{2}+\lambda\left\|\mathbf{x}\right\|_{1}\right\}$ (4)
which is known as the Lasso [33] formulation and is also known to produce a stable solution in the noisy case and an exact solution in the noise-free case [34].
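For concreteness, the following is a minimal sketch of solving the Lasso formulation in Eq. (4) with iterative soft-thresholding (ISTA, discussed further in Section VI); the objective is written as $0.5\left\|\mathbf{Dx}-\mathbf{y}\right\|_{2}^{2}+\lambda\left\|\mathbf{x}\right\|_{1}$, i.e., Eq. (4) up to a rescaling of $\lambda$, and the step size and iteration count are illustrative choices.

```python
import numpy as np

def soft_threshold(z, t):
    """Element-wise soft-thresholding: the proximal operator of t*||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_ista(D, y, lam, n_iter=500):
    """Minimize 0.5*||Dx - y||_2^2 + lam*||x||_1 with ISTA."""
    step = 1.0 / np.linalg.norm(D, 2) ** 2   # 1/L, L: Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - step * (D.T @ (D @ x - y)), step * lam)
    return x
```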
Figure 2: Most common model for a practical support estimator.
### III-A Generic Sparse Support Estimation (SE)
In many application scenarios, detecting the indices of the non-zero
coefficients’ location, $\Lambda$, is more important than computing these
coefficients. To list a few: in a sparse anomaly detection problem [36] (either from CS [35] or uniformly sampled measurements), where a group of users initiates a flooding attack on a communication network (specifically a VoIP network), detecting the malicious user group (a sub-set of all users) is more critical. Other examples include CS-based active user detection in the downlink of a CDMA system [37] and in the uplink of a NOMA system [38, 39]. Such systems are
believed to play an important role in 5G communication technology. As
discussed in Section I, other examples may be listed as sparse representation
based classifications [11, 12] and radar imaging [15, 40].
Mathematically speaking, for the linear measurement model given in Eq. (1) and
with additive noise, $\mathbf{y}=\mathbf{Dx}+\mathbf{z}$, we define the
following support estimator $\mathcal{E}(.,.)$,
$\hat{\Lambda}=\mathcal{E}\left(\mathbf{y},\mathbf{D}\right)$ (5)
where $\hat{\Lambda}$ is the estimated support. In the noise-free case, where $\mathbf{x}$ is exactly $k$-sparse, the exact $\Lambda$ recovery performance of an algorithm coincides with the sparse signal recovery performance. This is an expected outcome since unique representation is satisfied when $m>2k$. In
the noisy case, even if the exact signal recovery is not possible, it is still
possible to recover the support set exactly. In the literature, several studies have provided information-theoretic guarantee conditions (i.e., on the performance of the optimal decoder, $\mathcal{E}$) for exact [41, 5, 42, 10] and partial [7, 43, 10] support estimation. However, in most of the
practical applications, a tractable signal recovery method is applied first to
find an estimation $\hat{\mathbf{x}}$ of the sparse signal $\mathbf{x}$, then
a component-wise thresholding is applied to $\hat{\mathbf{x}}$ to compute the
estimated support as illustrated in Figure 2.
A common approach is to follow an iterative sparse signal recovery method from
the CS literature. For instance, it is proven in [44] that if
$\min_{i\in\Lambda}\left|x_{i}\right|>8\sigma\sqrt{2\log(n)}$, then one can recover the support set exactly using Lasso with $\lambda=2\sqrt{2\log(n)}$, where $\sigma^{2}$ is the variance of the measurement noise. This theorem is valid
in the case that the equivalent dictionary satisfies the mutual coherence
property defined in [44]. One may clearly deduce from their results that
accurate support estimation is possible via Lasso if the non-zero
coefficients’ magnitudes are above a certain level determined by the noise.
Similarly, the conditions of exact support recovery under noise using OMP are
given in [45], and partial support recovery performance bounds of AMP are in
[46]. Along with these SR algorithms in CS literature, which are iterative
methods, traditional linear decoders such as Maximum Correlation (MC) [47],
$\hat{\mathbf{x}}^{MC}=\mathbf{D}^{T}\mathbf{y}$, and LMMSE [46],
$\hat{\mathbf{x}}^{LMMSE}=\left(\mathbf{D}^{T}\mathbf{D}+\sigma_{z}^{2}\mathbf{I}_{n\times
n}\right)^{-1}\mathbf{D}^{T}\mathbf{y}$ are also used in many applications.
The theoretical performance bounds of these methods are also given in [46].
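Both linear decoders are one-line computations; the sketch below transcribes them into NumPy, with `sigma2` standing for the noise variance $\sigma_{z}^{2}$ in the LMMSE formula.

```python
import numpy as np

def proxy_mc(D, y):
    """Maximum Correlation (MC) decoder: x_hat = D^T y."""
    return D.T @ y

def proxy_lmmse(D, y, sigma2):
    """LMMSE decoder: x_hat = (D^T D + sigma_z^2 I)^{-1} D^T y."""
    n = D.shape[1]
    return np.linalg.solve(D.T @ D + sigma2 * np.eye(n), D.T @ y)
```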
### III-B Case study of SE: Representation based classification
Consider the case where an image from a particular class is queried: the estimated SCs, $\hat{\mathbf{x}}$, can be expected to have significant (non-zero) entries concentrated at specific locations, such that the corresponding columns of the dictionary matrix, $\mathbf{D}$, are the samples from the actual class of the image. This problem is also known as representation based
classification, which is a typical example where the support set location is
the main information we are seeking.
In [11], $\ell_{1}$-minimization is used to obtain such a sparse code to
determine the identity of face images. However, in reality such an ideal
decomposition is not accomplished in general because face images show a high
correlation among different classes. This is why, instead of using the
estimated sparse codes, $\hat{\mathbf{x}}$ obtained by an SR technique such as
Eq. (4), the authors propose a four-step solution: (i) Normalization: normalize all the atoms in $\mathbf{D}$ and $\mathbf{y}$ to have unit $\ell_{2}$-norm; (ii) SR: $\hat{\mathbf{x}}=\arg\min_{\mathbf{x}}\left\|\mathbf{x}\right\|_{1}~\text{s.t.}~\left\|\mathbf{y}-\mathbf{D}\mathbf{x}\right\|_{2}\leq\epsilon$; (iii) Residual finding: $\mathbf{e_{i}}=\left\|\mathbf{y}-\mathbf{D_{i}}\mathbf{\hat{x}_{i}}\right\|_{2}$, where $\mathbf{\hat{x}_{i}}$ are the estimated coefficients corresponding to class $i$; (iv) Class determination: $\text{Class}\left(\mathbf{y}\right)=\arg\min_{i}\left(\mathbf{e_{i}}\right)$.
This technique and its variants have been reported to perform well not only in face recognition but also in many other classification problems [48, 49].
Later, the authors of [22] propose to change the second step, from
$\ell_{1}$-minimization to the classical $\ell_{2}$-minimization: $\mathbf{\hat{x}}=\arg\min_{\mathbf{x}}\left\{\left\|\mathbf{y}-\mathbf{D}\mathbf{x}\right\|_{2}^{2}+\lambda\left\|\mathbf{x}\right\|_{2}^{2}\right\}$, which has the closed-form solution $\hat{\mathbf{x}}=\left(\mathbf{D}^{T}\mathbf{D}+\lambda\mathbf{I}_{n\times n}\right)^{-1}\mathbf{D}^{T}\mathbf{y}$. This collaborative representation based classification (CRC) was
reported to achieve a comparable classification performance for different
classification problems. For face recognition problems, in particular, the
authors reported that high classification accuracies were obtained especially
for high measurement rates (MRs).
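The following sketch implements the four-step pipeline with the CRC coding step (closed form) in place of an iterative $\ell_{1}$ solver; the `labels` array, mapping each dictionary atom to its class, and the value of `lam` are assumptions of this demo.

```python
import numpy as np

def crc_classify(D, y, labels, lam=0.1):
    """Classify query y against dictionary D whose columns carry class labels."""
    # (i) Normalization: unit l2-norm atoms and query.
    D = D / np.linalg.norm(D, axis=0, keepdims=True)
    y = y / np.linalg.norm(y)
    # (ii) Coding via the CRC closed form: (D^T D + lam*I)^{-1} D^T y.
    n = D.shape[1]
    x_hat = np.linalg.solve(D.T @ D + lam * np.eye(n), D.T @ y)
    # (iii)-(iv) Per-class residual, then pick the smallest.
    residuals = {}
    for c in np.unique(labels):
        mask = labels == c
        residuals[c] = np.linalg.norm(y - D[:, mask] @ x_hat[mask])
    return min(residuals, key=residuals.get)
```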
Figure 3: Proposed model for an efficient support estimator.
### III-C Sparse signal reconstruction with side information of support set
Consider the case where SE is not the main concern but SR is. In case side
information is available about the support set, an improvement to
$\ell_{1}$-minimization can be achieved in sparse signal recovery as follows:
$\min_{\mathbf{x}}\left\{\left\|\mathbf{D}\mathbf{x}-\mathbf{y}\right\|_{2}^{2}+\lambda\left\|\mathbf{w}\odot\mathbf{x}\right\|_{1}\right\}$ (6)
where $\odot$ is the element-wise multiplication operator and $\mathbf{w}$ is a predefined cost vector that imposes the prior information about each element’s value. In the modified CS [50] and CS with prior information literature, the cost function $\mathbf{w}$ generally appears in the form $w_{i}=\frac{1}{p_{i}+\epsilon}$, where $\epsilon>0$ is a predefined constant and $p_{i}$ is the $i^{th}$ element of the vector $\mathbf{p}$, a measure such as the prior likelihood [23] of the support set, which could represent the probability of the $i^{th}$ element being non-zero.
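A minimal sketch of the weighted recovery in Eq. (6) is given below, again using iterative soft-thresholding: the proximal step simply uses a per-index threshold $\lambda w_{i}$ with $w_{i}=1/(p_{i}+\epsilon)$; all parameter values are illustrative.

```python
import numpy as np

def weighted_l1_ista(D, y, p, lam=0.01, eps=1e-2, n_iter=500):
    """Minimize 0.5*||Dx - y||_2^2 + lam*||w .* x||_1 with w = 1/(p + eps)."""
    w = 1.0 / (p + eps)                       # prior map p -> weights
    step = 1.0 / np.linalg.norm(D, 2) ** 2    # step size from spectral norm
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        z = x - step * (D.T @ (D @ x - y))    # gradient step on data fidelity
        thresh = step * lam * w               # per-index soft threshold
        x = np.sign(z) * np.maximum(np.abs(z) - thresh, 0.0)
    return x
```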
### III-D Limitations of existing Support Estimators
Both SE and SR algorithms are guaranteed to perform well if the equivalent
dictionary $\mathbf{D}$ satisfies certain properties such as mutual
incoherence [51]. However, in many practical scenarios, $\mathbf{D}$ fails to
satisfy these properties, e.g., in face recognition problem, the atoms of
$\mathbf{D}$, vectorized faces, are highly correlated. The second limitation
of traditional sparse recovery algorithms is that they are iterative methods
and computationally costly. Therefore, the support estimators relying on these
sparse recovery algorithms may not be feasible, especially in real-time
applications. The third limitation of state-of-the-art SR techniques such as
$\ell_{1}$-minimization is that there is a lower limit for MR (see phase
transition [52]); below this limit, the SR algorithms start to fail
completely. This limit generally depends on the quality of $\mathbf{D}$ (defined by properties such as mutual incoherence [51]). Therefore, SE
techniques that build upon an SR algorithm tend to fail if $\mathbf{D}$ does
not satisfy the required properties, e.g., if the atoms of $\mathbf{D}$ are
highly correlated.
On the other hand, when it comes to SR techniques leveraging SE as prior information, although a good improvement can be achieved using such priors, most works assume that the information is available in advance without mentioning how to obtain such a $\mathbf{p}$.
## IV Convolutional Support Estimator Network
Recent advances in deep neural networks [53, 18] enable a non-iterative
solution for the sparse signal recovery. It is often reported that they
produce a solution $\mathbf{\hat{x}}$, which is closer to $\mathbf{x}$ than
the ones obtained by an iterative approach. They can still work under those
measurement rates where classical CS recovery algorithms fail. Nevertheless, a
crucial disadvantage is that they require a large number of training samples
to achieve a high generalization capability. Second, their complex
configuration with millions of parameters causes certain computational
complexity issues such as speed and memory problems, especially when they are
used in edge devices with limited power, speed and memory.
If one wishes to find only the support $\Lambda$ instead of the signs and amplitudes of $\mathbf{x}$, a traditional Machine Learning approach would be
sufficient. In this study, we propose a support estimator, $\mathcal{E}(.)$,
that can be performed by a compact CSEN network. Another crucial objective is
to have the ability to learn from a minimal training set with a limited number
of labeled data. A typical application where this approach can benefit from is
face recognition via sparse representations, where only a few samples of each
identity are available.
Let us define a binary mask $\mathbf{v}\in\left\{0,1\right\}^{n}$ as follows:
$v_{i}=\begin{cases}1&\text{if }i\in\Lambda\\ 0&\text{else}\end{cases}$ (7)
Consequently, the problem of finding an estimation $\hat{\mathbf{v}}$ of this binary mask is equivalent to producing a support estimation $\hat{\Lambda}$, i.e., $\hat{\Lambda}=\left\{i\in\left\{1,2,...,n\right\}:\hat{v}_{i}=1\right\}$.
To accomplish this objective, the CSEN network, $\mathcal{P}\left(\mathbf{y},\mathbf{D}\right):\mathbb{R}^{n}\mapsto\left[0,1\right]^{n}$, first produces a vector $\mathbf{p}$ that gives the probability of each index being in the support set, such that $p_{i}\in\left[0,1\right]$. Then, the final support estimator, $\mathcal{E}(\mathbf{y},\mathbf{D})$, produces a support estimation, $\hat{\Lambda}=\left\{i\in\left\{1,2,...,n\right\}:p_{i}>\tau\right\}$, by thresholding $\mathbf{p}$ with a fixed threshold $\tau$.
As shown in Figure 3, the proposed SE approach is different from the
conventional SR based methods, which directly threshold $\hat{\mathbf{x}}$ for
support estimation. Moreover, the input-output pair is different. The proposed
CSEN learns over $\left(y^{train},v^{train}\right)$ to compute $\mathbf{p}$
while the conventional SR methods work with $\left(y^{train},x^{train}\right)$
to first make the sparse signal estimation, and then compute support
estimation by thresholding it. As evident in Figure 1, the application of
direct signal recovery may cause noisy estimation of the support codes while
the proposed CSEN has the advantage of learning the pattern of the support
codes and, therefore, can predict their most-likely location with proper
training.
Figure 4: Type-I Convolutional Support Estimator Network (CSEN1). Figure 5:
Type-II Convolutional Support Estimator Network (CSEN2).
In this study, the proposed CSEN models consist of only convolutional layers
in the type of fully convolutional networks [16] that are trained by
optimizing the support estimations. Since the SE problem involves one-to-one
mapping, other network types such as Multi-Layer Perceptrons (MLPs) can also
be used as in [18]. However, this brings two limitations compared to CSENs:
high computational complexity and over-fitting due to the limited training
data and number of parameters in the network. In Section V, it will be shown
that such an approach yields a poor generalization and is not robust to noise.
When a CSEN is trained, it learns the following transformation:
$\hat{\mathbf{v}}\leftarrow \mathcal{P}\left(\mathbf{\tilde{x}}\right)$, where
$\hat{\mathbf{v}}$ is the estimation of binary mask representing the estimated
support for the signal $\mathbf{x}$, and the proxy $\mathbf{\tilde{x}=By}$
with $\mathbf{B=D^{T}}$, or
$\left(\mathbf{D}^{T}\mathbf{D}+\lambda\mathbf{I}\right)^{-1}\mathbf{D}^{T}$,
i.e., the Maximum Correlation and LMMSE formula in [46]; and hence,
$\mathbf{x},\mathbf{\tilde{x}}\in\mathbb{R}^{n}$. First, the proxy
$\mathbf{\tilde{x}}$ is reshaped to 2-D plane (e.g., original size of the
image or pre-defined search grid). Correspondingly, the proxy
$\mathbf{\tilde{x}}$ is convolved with $\mathbf{W_{1}}$, the weight kernels
connecting the input layer to the next layer with $N$ filters to form the
input of the next layer with the summation of weight biases $\mathbf{b_{1}}$
as follows:
$\mathbf{F_{1}}=\left\{S(\text{ReLu}(b_{1}^{i}+\mathbf{w}_{1}^{i}*\tilde{\mathbf{x}}))\right\}_{i=1}^{N},$ (8)
where $S(.)$ is the down- or up-sampling operation and $ReLu(x)=max(0,x)$. In
more general form, the $k^{th}$ feature map of layer $l$ can be expressed as,
$\mathbf{f_{l}^{k}}=\textsc{S}(\textsc{ReLu}(b_{l}^{k}+\sum_{i=1}^{N_{l-1}}\textsc{conv2D}(\mathbf{w}_{l}^{ik},\mathbf{f}_{l-1}^{i},^{\prime}\textsc{ZeroPad}^{\prime}))).$
(9)
The trainable parameters of the network would be:
$\mathbf{\Theta_{CSEN}}=\big\{\{\mathbf{w}_{1}^{i},b_{1}^{i}\}_{i=1}^{N_{1}},\{\mathbf{w}_{2}^{i},b_{2}^{i}\}_{i=1}^{N_{2}},...,\{\mathbf{w}_{L}^{i},b_{L}^{i}\}_{i=1}^{N_{L}}\big\}$
for an $L$-layer CSEN.
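The following tf.keras sketch mirrors the CSEN1 configuration described below in Section V (three $3\times 3$ convolutional layers with $48$ and $24$ filters); the single-channel sigmoid output head, used here to keep the probability map in $[0,1]$, is an assumption of this sketch rather than a detail taken from the text.

```python
import tensorflow as tf

def build_csen1(height, width):
    """Map a reshaped 2-D proxy x_tilde to a same-size probability map p."""
    inp = tf.keras.Input(shape=(height, width, 1))
    h = tf.keras.layers.Conv2D(48, 3, padding="same", activation="relu")(inp)
    h = tf.keras.layers.Conv2D(24, 3, padding="same", activation="relu")(h)
    # Assumed output head: one sigmoid channel so that p is in [0, 1].
    out = tf.keras.layers.Conv2D(1, 3, padding="same", activation="sigmoid")(h)
    return tf.keras.Model(inp, out)

model = build_csen1(28, 28)   # e.g., MNIST-sized proxies
```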
In the proposed approach, the Mean-Square Error (MSE) is computed between the binary mask, $\mathbf{v}$, and CSEN’s actual output, $\mathcal{P}_{\Theta}\left(\mathbf{x}\right)_{p}$, as follows:
$E(\mathbf{x})=\sum_{p}\left(\mathcal{P}_{\Theta}\left(\mathbf{x}\right)_{p}-v_{p}\right)^{2}$ (10)
where $v_{p}$ is the $p^{th}$ pixel of $\mathbf{v}$. The CSEN network is trained using samples in the train data, $D_{train}=\big\{(\tilde{\mathbf{x}}^{(1)},\mathbf{v}^{(1)}),(\tilde{\mathbf{x}}^{(2)},\mathbf{v}^{(2)}),...,(\tilde{\mathbf{x}}^{(s)},\mathbf{v}^{(s)})\big\}$. Please note that, even though we use MSE as the loss function in the original CSEN design, depending on the application, any other regularization function (e.g., $\ell_{1}$-norm, mixed norm, etc.) can be added to this cost function. As an example, we present a strategy to approximate a loss function that adds a group $\ell_{1}$-norm to the MSE.
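Continuing the sketch above, a training loop for the MSE objective in Eq. (10) could look as follows; the optimizer settings mirror the ADAM parameters reported in Section V, while the batch size and the random placeholder data standing in for $D_{train}$ are assumptions of this illustration.

```python
import numpy as np
import tensorflow as tf

# Rebuild the CSEN1-style model from the previous sketch for self-containment.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28, 1)),
    tf.keras.layers.Conv2D(48, 3, padding="same", activation="relu"),
    tf.keras.layers.Conv2D(24, 3, padding="same", activation="relu"),
    tf.keras.layers.Conv2D(1, 3, padding="same", activation="sigmoid"),
])

# Placeholder (proxy, binary-mask) pairs; in practice these come from D_train.
x_proxy = np.random.rand(256, 28, 28, 1).astype("float32")
v_mask = (np.random.rand(256, 28, 28, 1) > 0.8).astype("float32")

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001), loss="mse")
model.fit(x_proxy, v_mask, epochs=10, batch_size=32, verbose=0)

# Final support estimate: threshold the predicted probability map at tau.
tau = 0.5
p = model.predict(x_proxy[:1], verbose=0).reshape(-1)
support = np.flatnonzero(p > tau)
```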
## V Results
In order to evaluate the effect of different network configurations, in this
study, we use two different CSEN configurations and perform a comprehensive
analysis over each of them. Generally, each convolutional layer has a
dimension reduction capability with pooling functions. However, the first
proposed network architecture consists of only convolutional layers with ReLu
activation functions to preserve the sparse signal (e.g., image) dimensions at
the output layer. In this configuration (CSEN1), we propose to use three
convolutional layers with $48$ and $24$ hidden neurons and $3\times 3$ filter
size as given in Figure 4. CSEN2 is a slight modification of CSEN1
configuration, as shown in Figure 5 by using up- and down-sampling layers.
Although this modification increases the number of parameters, in return, it
yields a substantial performance improvement over MNIST. While the SE performance analysis over MNIST was carried out using both CSEN1 and CSEN2, only CSEN1 results are reported for face recognition since CSEN2 produces similar recognition rates ($\sim$ 0.001 difference). In any case, both network
configurations are compact compared to the deep CNNs that have been proposed
recently. For example, the study in [17] proposes ReconNet for SR, which
consists of six convolutional layers with $32$ neurons or more in each layer.
Since there is no competing method for SE that is similar to the proposed
method, we use the ReconNet [17] in this study on the SE problem by directly
giving $\mathbf{\tilde{x}}$ as the input, and removing the denoiser block at
the end for comparative evaluations. Finally, we apply thresholding over the
output of ReconNet to generate the SE, i.e., $\hat{\Lambda}_{\text{R}}=\left\{i\in\left\{1,2,...,n\right\}:\mathcal{P}_{\text{R}}\left(\mathbf{\tilde{x}}\right)_{i}>\tau\right\}$, where $\mathcal{P}_{\text{R}}(.)$ is ReconNet with fully
convolutional layers. ReconNet is originally a CS recovery algorithm working
directly on spatial domain, i.e.,
$\mathbf{\hat{s}}\leftarrow\mathcal{P}\left(\mathbf{y}\right)$ instead of
solving them in the sparsifying dictionary, i.e.,
$\mathbf{\hat{s}}=\mathbf{\Phi}\mathbf{\hat{x}}$ where
$\mathbf{\hat{x}}\leftarrow\mathcal{P}\left(\mathbf{y}\right)$. Therefore,
ReconNet serves as a deep CSEN approach against which the performance of the
two compact CSENs will be compared. Moreover, we also train the state-of-the-
art deep SR solution, LAMP, in order to use it on the SE problem. For the LAMP method, it is possible to predefine the number of layers in advance. For a fair comparison, we have tested the algorithm for three different setups: $2$-, $3$-, and $4$-layer designs using their provided implementation. Next, in the
experiments of face recognition based on SR, we consider both speed and
recognition accuracy of the algorithms, as is done for the $\ell_{1}$-minimization toolbox in [54]. Thus, in order to perform comparative
evaluations, the proposed CSEN approach is evaluated against most of the
conventional state-of-the-art SR techniques along with ReconNet. Finally,
CSEN2 is applied as a pre-processing step for the CS-recovery to obtain
$\mathbf{w}$ in the cost function as illustrated in Figure 1.
Figure 6: Histogram of $\rho_{i}$’s obtained from the 10k samples (test set).
The vectorized gray-scale images $\mathbf{x}_{i}$ in the MNIST dataset are already sparse in the spatial domain (in the canonical basis, i.e., $\Phi=I$) with $\left\|\mathbf{x}_{i}\right\|_{0}\leq k_{i}$.
The experiments in this study have been carried out on a workstation that has
four Nvidia® TITAN-X GPU cards and Intel ® Xeon(R) CPU E5-2637 v4 at 3.50GHz
with 128 GB memory. Tensorflow library [55] is used with Python. ADAM
optimizer [56] is utilized during the training with the proposed default
values of the learning parameters: learning rate, $\alpha=0.001$, and moment
updates $\beta_{1}=0.9$, $\beta_{2}=0.999$ with only 100 and 30 Back-
Propagation iterations for MNIST and face recognition experiments,
respectively.
### V-A Experiment I: Support Estimation from CS measurements
For the experiments in this section, MNIST dataset is used. This dataset
contains 70000 samples (50K/10K/10K as the sizes of the train/validation/test
sets) of the handwritten digits (0 to 9). Each image in the dataset is a
$28\times 28$ pixel resolution with intensity values ranging from 0 (black,
background) to 1 (white, foreground). Since the background covers more area
than the foreground, each image can be considered as a sparse signal.
Mathematically speaking, we may assume that the $i^{th}$ vectorized image, $\mathbf{x}_{i}\in\mathbb{R}^{n=784}$, can be considered a $k_{i}$-sparse signal. The sparsity rate of each sample is calculated as
$\rho_{i}=\frac{k_{i}}{n}$, and its histogram is given in Figure 6. We have
designed an experimental setup where these sparse signals (sparse in canonical
basis) $\mathbf{x}_{i}$’s are compressively sensed,
$\mathbf{y_{i}}=\mathbf{A}\mathbf{{x_{i}}}=\mathbf{D}\mathbf{x_{i}}$ (11)
where $\mathbf{D}=\mathbf{A}\in\mathbb{R}^{m\times n}$ since
$\mathbf{\Phi}=\mathbf{I}$. We calculate the measurement rate as
$\mathbf{MR}=\frac{m}{n}$. Therefore, the problem is SE from each CS
measurement, i.e., finding $\hat{\Lambda}_{i}$ from each $\mathbf{y_{i}}$ in
the test dataset.
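A sketch of this data-acquisition setup is given below; the Gaussian measurement matrix with i.i.d. $\mathcal{N}(0,1/m)$ entries matches the choice described in the next paragraph, while the random stand-in for a vectorized digit is a placeholder of this illustration.

```python
import numpy as np

n = 28 * 28                    # vectorized image dimension
MR = 0.25                      # measurement rate m/n
m = int(MR * n)
rng = np.random.default_rng(42)

# Gaussian measurement matrix with i.i.d. N(0, 1/m) entries; D = A since Phi = I.
A = rng.normal(0.0, 1.0 / np.sqrt(m), size=(m, n))

# Stand-in for a vectorized MNIST digit: roughly 20% of pixels are non-zero.
x = np.where(rng.random(n) > 0.8, rng.random(n), 0.0)
y = A @ x                      # CS measurements, as in Eq. (11)
rho = np.count_nonzero(x) / n  # sparsity rate k_i / n
```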
Figure 7: F1 Measure graph of CSEN and LAMP configurations at different noise levels at MR = 0.25.

TABLE I: Support recovery performance of the algorithms from noise-free measurements.

| | F1 Measure | | | Precision | | | Recall | | | CE | | |
| MR | 0.25 | 0.1 | 0.05 | 0.25 | 0.1 | 0.05 | 0.25 | 0.1 | 0.05 | 0.25 | 0.1 | 0.05 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| CSEN | 0.91 | 0.85 | 0.80 | 0.90 | 0.84 | 0.77 | 0.92 | 0.87 | 0.84 | 0.03 | 0.06 | 0.08 |
| CSEN2 | 0.94 | 0.89 | 0.84 | 0.93 | 0.88 | 0.82 | 0.94 | 0.90 | 0.87 | 0.02 | 0.04 | 0.06 |
| ReconNet | 0.90 | 0.85 | 0.79 | 0.89 | 0.82 | 0.76 | 0.90 | 0.87 | 0.83 | 0.05 | 0.06 | 0.09 |
| LAMP (2) | 0.92 | 0.89 | 0.82 | 0.94 | 0.90 | 0.82 | 0.89 | 0.87 | 0.83 | 0.05 | 0.05 | 0.08 |
| LAMP (3) | 0.93 | 0.89 | 0.82 | 0.95 | 0.90 | 0.82 | 0.91 | 0.88 | 0.82 | 0.03 | 0.05 | 0.08 |
| LAMP (4) | 0.93 | 0.90 | 0.83 | 0.95 | 0.92 | 0.82 | 0.92 | 0.89 | 0.83 | 0.03 | 0.04 | 0.08 |
For this dataset, the measurement rate (MR) is varied from $0.05$ to $0.25$ in order to investigate the effect of MR on the SE performance. The measurement matrix is chosen as the “Gaussian” matrix, whose elements $A_{i,j}$ are drawn i.i.d. from $\mathcal{N}\left(0,\frac{1}{m}\right)$. It is worth mentioning that the approximate message passing (AMP) algorithm is a well-optimized method for the Gaussian measurement matrix, and LAMP is a learned version of this algorithm. Therefore, they are reported to be state-of-the-art if the measurement matrix is Gaussian, but they do not even guarantee convergence for other types of measurement matrices. On the other
hand, the comparative performance evaluations against LAMP and deep CS-sparse
methods are presented in Table I, and the results clearly indicate that the
proposed method achieves the best SE performance in terms of F1 measure for MR
= 0.25 and 0.05, and a comparable one for MR = 0.1. The results presented in Table I also indicate that the compact CSENs achieve superior performance levels compared to ReconNet, despite the latter’s deeper and more complex configuration.
TABLE II: Support recovery performance of the algorithms under 10 dB measurement noise.

| | F1 Measure | | | Precision | | | Recall | | | CE | | |
| MR | 0.25 | 0.1 | 0.05 | 0.25 | 0.1 | 0.05 | 0.25 | 0.1 | 0.05 | 0.25 | 0.1 | 0.05 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| CSEN | 0.89 | 0.82 | 0.77 | 0.89 | 0.82 | 0.75 | 0.89 | 0.82 | 0.79 | 0.04 | 0.07 | 0.09 |
| CSEN2 | 0.92 | 0.86 | 0.80 | 0.92 | 0.86 | 0.80 | 0.92 | 0.86 | 0.82 | 0.03 | 0.06 | 0.08 |
| ReconNet | 0.89 | 0.83 | 0.78 | 0.89 | 0.81 | 0.74 | 0.89 | 0.85 | 0.81 | 0.04 | 0.07 | 0.09 |
| LAMP (2) | 0.87 | 0.85 | 0.79 | 0.90 | 0.86 | 0.78 | 0.84 | 0.83 | 0.80 | 0.08 | 0.08 | 0.10 |
| LAMP (3) | 0.87 | 0.84 | 0.77 | 0.91 | 0.87 | 0.78 | 0.84 | 0.81 | 0.77 | 0.06 | 0.08 | 0.12 |
| LAMP (4) | 0.86 | 0.85 | 0.77 | 0.87 | 0.87 | 0.78 | 0.85 | 0.82 | 0.77 | 0.08 | 0.07 | 0.12 |
Furthermore, comparative evaluations are performed when the measurements are
exposed to noise in the test set, i.e.,
$\mathbf{y}_{i}=\mathbf{D}\mathbf{x}_{i}+\mathbf{z}_{i}$, where
$\mathbf{z}_{i}$ is an additive white Gaussian noise. The results presented in
Figure 7 show that SE performances of the LAMP method are adversely affected
by increased measurement noise. Their performance gets even worse when the
number of layers is increased (i.e., see results for LAMP (2) to LAMP (4)).
CSEN2, on the other hand, achieves the highest F1 measure for all noise
levels.
### V-B Experiment II: Face Recognition based on Sparse Representation
Figure 8: Recognition accuracy vs. processing time comparison of the algorithms on the Yale-B database.
As explained in Section III-B, the dictionary-based (representation based)
classification could be seen as an SE problem. Therefore, CSEN presents an
alternative and better approach to both CRC and SRC solutions. In this manner,
the proposed CSEN approach is evaluated against both CRC and the state-of-the-
art SRC techniques recently proposed. The algorithms are chosen by considering
both their speed and performance on the SR problem, since the speed-accuracy
performance of SRC directly depends on the performance of the sparse signal
recovery algorithm [54], and there is no unique winner to achieve the top
performance level for all databases. The proposed method is, of course, not
limited to face recognition but can be applied in any other representation
based classification problem. For the sake of brevity, in this study, we focus
only on the face recognition problem.
Figure 9: Recognition accuracy vs. processing time comparison of the algorithms on the CelebA database.
In dictionary-based classification designs, the samples of a specific class
are stacked in the dictionary as atoms with pre-defined indices, e.g., the
atoms belonging to a particular class can be located in concatenate manner.
Consequently, in sparse representation based classification, instead of using
$\ell_{1}$-minimization in Eq. (4), group $\ell_{1}$-minimization can be
introduced as follows,
$\min_{\mathbf{x}}\left\{\left\|\mathbf{D}\mathbf{x}-\mathbf{y}\right\|_{2}^{2}+\lambda\sum_{i=1}^{c}\left\|\mathbf{x_{Gi}}\right\|_{2}\right\}.$
(12)
where $\mathbf{x_{Gi}}$ is the group of coefficients corresponding to class $i$.
Hence, the MSE cost function in Eq. (10) can be modified accordingly:
$E(\mathbf{x})=\sum_{p}(\mathcal{P}_{\Theta}\left(\mathbf{x}\right)_{p}-v_{p})^{2}+\lambda\sum_{i=1}^{c}\left\|\mathcal{P}_{\Theta}\left(\mathbf{x}\right)_{Gi}\right\|_{2}.$
(13)
To approximate this, a simple average pooling can be applied after the last
layer of CSEN, which is then followed by a SoftMax function to produce class
probabilities. Therefore, the modified cost function with Cross-Entropy loss
at the output would be:
$E(\mathbf{x})=-\sum_{i=1}^{C}t_{i}\log\left(\mathcal{P}_{\Theta}(\mathbf{x})_{i}\right)$
where $t_{i}$ and $\mathcal{P}_{\Theta}(\mathbf{x})_{i}$ are the true and the predicted values, respectively, for class $i$. In this way, the modified network can directly produce the predicted class labels as the output.
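An illustrative sketch of such a classification head is given below, using the Yale-B grid dimensions of Table III; the assumption that each class occupies a contiguous $16\times 2$ block of the sparse-code plane (so that one pooling window covers one class) is ours and is only meant to convey the idea.

```python
import tensorflow as tf

# Yale-B-style grid from Table III: 16 x 76 sparse-code plane, 38 identities,
# each class assumed to occupy a contiguous 16 x 2 block (32 atoms per class).
rows, cols, n_classes = 16, 76, 38

inp = tf.keras.Input(shape=(rows, cols, 1))
h = tf.keras.layers.Conv2D(48, 3, padding="same", activation="relu")(inp)
h = tf.keras.layers.Conv2D(24, 3, padding="same", activation="relu")(h)
h = tf.keras.layers.Conv2D(1, 3, padding="same", activation="relu")(h)
# One average-pooling window per class block, then SoftMax over the classes.
pooled = tf.keras.layers.AveragePooling2D(pool_size=(rows, 2))(h)  # -> (1, 38, 1)
logits = tf.keras.layers.Flatten()(pooled)
out = tf.keras.layers.Softmax()(logits)

clf = tf.keras.Model(inp, out)
clf.compile(optimizer="adam", loss="categorical_crossentropy")
```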
In the experiments, we have used the Yale-B [20] and CelebA [21] databases. In the Yale-B dataset, there are $2414$ face images of 38 identities, and a sub-set of CelebA is chosen with 5600 images and 200 identities. The face recognition
experiments are repeated 5 times with samples randomly selected to build the
dictionary, train, and test sets with 32, 16, 16, and 8, 12, 8 samples each
for Yale-B and CelebA, respectively, and 25% of training data is separated as
validation set. The selected sub-set of the CelebA dataset is also different in each repeated run. For the Yale-B database, we use vectorized images in the dictionary. Earlier studies reported that both SRC and CRC techniques
achieve a high recognition accuracy, such as 97 - 98%, especially for high-MR scenarios ($m/d>0.25$ for $\mathbf{A}\in\mathbb{R}^{m\times d}$). On the other hand, for CelebA, both CRC and SRC solutions tend to fail when we use raw atoms in the dictionary without extracting descriptive features. This is why
in this study, we propose to use a more representative dictionary. Instead of
using raw images, the atoms consist of more descriptive features extracted by
a neural network-based face feature extractor in the library [57]. The
proposed method is compared against CRC and SRC techniques with the following state-of-the-art SR solvers: ADMM [58], DALM [54], OMP [54], Homotopy [59], GPSR [60], L1LS [61], $\ell_{1}$-magic [62], and PALM [54].
Overall, when we perform experiments on the two facial image databases, Yale-B and CelebA, for different MRs, the CSEN based classification proves to be very stable: at all MRs, it gives the highest recognition accuracy, or one comparable to the highest, for all experiments, as presented in Figures 8 and 9. Furthermore, it is significantly superior in terms of computational speed when compared with the SRC solutions.
To be able to use the same CSEN designs introduced in Section IV, we re-order the positions of the atoms so that the corresponding non-zero coefficients of the representative sparse codes remain next to each other in the 2-D plane. A simplified illustration comparing the conventional dictionary design and the proposed design for sparse representation based classification is shown in Figure 10. The defined sparse code sizes and their representations in the 2-D grid for the Yale-B and CelebA datasets are also given in Table III.
Figure 10: The graphical representation of the proposed dictionary design vs. the conventional design for the face recognition problem.

TABLE III: Utilized face recognition benchmark datasets with their corresponding mask size and number of samples per class in the dictionary, training, and test sets.

| Dataset | Dictionary Samples | Train Samples | Test Samples | SC Size in 2D-Plane |
|---|---|---|---|---|
| Yale-B | 32 | 16 | 16 | 16 x 76 |
| CelebA | 8 | 12 | 8 | 8 x 200 |
### V-C Experiment III: Learning-aided Compressive Sensing
Figure 11: Examples from MNIST that are compressively sensed, and then
reconstructed at MR=$0.25$.
As the experimental setup, we randomly choose sparse signals $\mathbf{x}$ in
the MNIST database and use the Gaussian measurement matrix, $\mathbf{A}$ to
simulate the CS, i.e., $\mathbf{y}=\mathbf{A}\mathbf{x}$. Then, we recover the
sparse signal from $\mathbf{y}$ by using the aforementioned state-of-the-art
SR tools and the proposed weighted $\ell_{1}$-minimization Eq. (6), where the
weights $\mathbf{w}$ are obtained using CSEN output such that
$\mathbf{w}=\frac{1}{\mathbf{p}+\epsilon}$. The two examples where signals are
compressively sensed with $MR=0.25$ and their estimated versions by different
SR methods are shown in Figure 11. It is clear that the proposed approach
recovers the sparse signal with the best quality while the other state-of-the-art SR techniques perform poorly. Figure 12 illustrates how the proposed compressive sensing reconstruction scheme differs from the traditional compressive sensing recovery setup. Using the output of CSEN as prior information not only provides more accurate signal recovery but also faster convergence of iterative sparse signal recovery methods such as $\ell_{1}$-minimization.
Figure 12: (Top) Proposed Compressive Sensing Reconstruction. (Bottom)
Traditional $\ell_{1}$-minimization based CS-recovery.
Furthermore, we draw the estimated phase transition of the algorithms in
Figure 13 using an experimental setup whose procedure is explained in [19].
Briefly summarizing the procedure, a grid of (MR, $\rho$) points is generated for each algorithm, with 20 independent realizations of the problem: according to their sparsity ratios $\rho$, randomly chosen sparse signals $\mathbf{x}$ among the $10000$ MNIST test images are compressively sensed with independent realizations of the measurement matrices. Then, they are recovered using the
competing algorithms, and each realization is considered a success for the
specific algorithm if
$\frac{\left\|\mathbf{x}-\mathbf{{\hat{x}}}\right\|_{2}}{\left\|\mathbf{x}\right\|}\leq\text{tol}$
(14)
where tol is a predefined parameter; we choose $\text{tol}=10^{-1}$ in our experiments. For a specific algorithm, we draw the phase transition at the border where a $50\%$ success rate is achieved. The procedure is similar to
[19], with the exception that they repeated the experiment only once, while we
repeat it 100 times for each method, except L1LS due to its infeasibly high
computational cost (it took almost two weeks with an ordinary computer). With
an accurate SR algorithm, we expect the transition border to be close to the
left-top corner in the phase transition graph because it is a good indicator
that the algorithm performs well at low MRs and with high sparsity ratios,
$\rho$. From the Figure, one can easily deduce that the proposed CS-
reconstruction approach clearly outperforms all competing state-of-the-art SR
reconstruction methods.
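For reference, the success criterion in Eq. (14) amounts to a few lines of NumPy, sketched below with the text's choice $\text{tol}=10^{-1}$ as the default.

```python
import numpy as np

def is_success(x_true, x_hat, tol=1e-1):
    """Eq. (14): relative l2 recovery error below tol counts as a success."""
    return np.linalg.norm(x_true - x_hat) / np.linalg.norm(x_true) <= tol

def success_rate(pairs, tol=1e-1):
    """Fraction of successful trials for one (MR, rho) cell of the grid;
    `pairs` is an iterable of (x_true, x_hat) array pairs."""
    flags = [is_success(xt, xh, tol) for xt, xh in pairs]
    return float(np.mean(flags))
```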
Figure 13: Phase Transition of the Algorithms.
## VI Discussion
### VI-A Sparse Modelling vs Structural Sparse Modelling
The first-generation CS-recovery or sparse representation methods only use the information that the signals we encounter in real life are sparse in a proper domain or dictionary. These models do not utilize any further assumptions about the sparse signal, $\mathbf{x}$, in signal recovery or support estimation. Therefore, they only impose sparsity on the signal, allowing a support set with elements at arbitrary locations, i.e., $\min\left\|\mathbf{x}\right\|_{0}~\text{s.t.}~\mathbf{Dx}=\mathbf{y}$.
However, most sparse signals we face in practical applications exhibit a kind
of structure. In second-generation sparse representation models, researchers realized that, in addition to arbitrary sparsity, any prior information about the sparse code can be used to model more advanced recovery schemes [63, 64]. For instance, the indices of the non-zero wavelet coefficients of an image mostly exhibit a grouping effect [65]. This kind of group sparsity pattern can be imposed by designing optimization problems involving mixed-norm minimization [66] instead of the simple $\ell_{1}$-norm. On the other hand, more complex sparsity structures require more complex model designs.
This work proposes an alternative to hand-crafted, model-based sparse signal recovery approaches: learning the pattern inside the sparse code (or structural sparse signal), $\mathbf{x}$, by a Machine Learning technique. This proof-of-concept work, in which the performance is tested over 3 real datasets (MNIST, Yale-B, and CelebA), validates the possibility of such learning and deserves further investigation in different sparse representation problems.
### VI-B Unrolling Deep Models vs CSEN
The most common approaches to reconstruct sparse signals, $\mathbf{x}$ from
the given measurements, $\mathbf{y}$ with a fixed dictionary $\mathbf{D}$ can
be listed as follows: i) convex relaxation (or $\ell_{1}$-minimization) such as Basis Pursuit [26]: $\min_{\mathbf{x}}\left\|\mathbf{x}\right\|_{1}~\text{s.t.}~\mathbf{y}=\mathbf{Dx}$, or Basis Pursuit Denoising (BPDN) [26]: $\min_{\mathbf{x}}\left\|\mathbf{x}\right\|_{1}~\text{s.t.}~\left\|\mathbf{y}-\mathbf{Dx}\right\|_{2}\leq\epsilon$, where $\epsilon$ is a small constant; ii) greedy algorithms such as Matching Pursuit (MP) [67], Orthogonal Matching Pursuit (OMP) [8], and Compressive Sampling Matching Pursuit (CoSaMP) [9]; iii) the Bayesian framework [68], etc. These conventional algorithms dealing with sparse inverse problems work in an iterative manner; for instance, most convex relaxation techniques, such as BPDN, minimize the data-fidelity term (e.g., $\ell_{2}$-norm) and the sparsifying term (e.g., $\ell_{1}$-norm) in an alternating manner in each iteration. Therefore, these schemes suffer from computational complexity and are not suitable for real-time applications.
Along with the traditional approaches listed above, Deep Learning methods used
in this domain have recently become very popular:
$\mathbf{\hat{x}}\leftarrow\mathcal{P}\left(\mathbf{y}\right)$, where
$\mathcal{P}$ is a learned mapping from the $m$-dimensional compressed domain to the $n$-dimensional sparse domain. These techniques are built on the idea that the
performance of existing convex relaxation can further be improved by reducing
the number of iterations and enhancing the reconstruction accuracy. The key
idea is that both the possible denoiser matrices, $\mathbf{B}$ (responsible
from dealing with data fidelity term) such as $\mathbf{D}^{T}$, or
$\left(\mathbf{D}^{T}\mathbf{D}+\lambda\mathbf{I}\right)^{-1}\mathbf{D}^{T}$
where $\lambda$ is the regularization parameter, and the thresholding values
(responsible from sparsifying) can be learned from the training data using a
deep network generally with dense layers. For instance, the first example of
this type is Learned-ISTA (LISTA) [53] which is built upon iterative soft-
thresholding algorithm (ISTA) [69]. This category of methods, also called unrolled deep models, which design networks in an iterative manner, provides powerful tools for sparse signal recovery.
However, in many practical applications, we may either not need to estimate the sparse signal itself or may not have a large amount of training data for deep unrolling networks. In that manner, CSEN provides a third approach by directly estimating the support set via a compact design, which requires less computational power, memory, and training data. It exhibits very good performance, especially in problems that involve sparse representations with sparse codes having structural patterns. The other advantage of the compact design with convolutional layers is that it is more stable against noise compared to unrolled deep models that include dense layers.
### VI-C Proxy signal vs measurement vector as input to CSEN
The proposed support estimation scheme utilizes the proxy $\mathbf{\tilde{x}}=\mathbf{By}$ as the input to the convolutional layers. Making inferences directly on the proxy using an ML approach has recently been reported in several studies. For example, the studies in [70, 71] proposed to perform reconstruction-free image classification on the proxy, and the study in [72] performed signal reconstruction using the proxy as an input to a deep fully convolutional network. Furthermore, the proxy $\mathbf{\tilde{x}}$ can be learnt by fully-connected dense layers, as presented in [70]. However, this brings additional complexity, and training the network may cause over-fitting with a limited amount of training data. Indeed, in [70] the authors had to adapt by first training the fully-connected layers or by freezing the other layers during training.
On the other hand, choosing the denoiser matrix, $\mathbf{B}$, is another design problem. For example, in [70, 71] the authors use $\mathbf{B=D^{T}}$ as the denoiser to obtain the proxy. We report the results in this paper for the denoiser matrix $\mathbf{B}=\left(\mathbf{D}^{T}\mathbf{D}+\lambda\mathbf{I}\right)^{-1}\mathbf{D}^{T}$, because it gives a slightly more stable performance than $\mathbf{B}=\mathbf{D}^{T}$.
## VII Conclusions
Sparse support estimators that work based on traditional sparse signal
recovery techniques suffer from computational complexity and noise. Moreover,
they tend to fail at low MRs completely. The proposed CSENs can be considered
as reconstruction-free and non-iterative support estimators. Of course, despite their high computational complexity, recent state-of-the-art deep signal reconstruction algorithms may be a remedy for the drawbacks of sparse recovery methods. However, they are still redundant if SR is not the main concern. In addition, such deep networks often require a large amount of training data, which is not available in many practical applications. To address these drawbacks and limitations, in this study, we introduce novel learning-based support estimators with compact network designs. The highlights of the proposed
system are as follows: i) signal reconstruction-free support estimation, where support estimation can be done in a feed-forward manner, non-iteratively, at a low cost; ii) compact network designs enabling efficient learning even from a small-size training set; iii) the proposed solution is generic; it could be used in any support estimation task, such as SE based classification.
# SOL-KiT - fully implicit code for kinetic simulation of parallel electron transport in the tokamak Scrape-Off Layer
S. Mijin, A. Antony, F. Militello, R.J. Kingham

Blackett Lab., Plasma Physics Group, Imperial College, London SW7 2AZ, UK

CCFE, Culham Science Centre, Abingdon, Oxon OX14 3DB, UK
###### Abstract
Here we present a new code for modelling electron kinetics in the tokamak
Scrape-Off Layer (SOL). SOL-KiT (Scrape-Off Layer Kinetic Transport) is a
fully implicit 1D code with kinetic (or fluid) electrons, fluid (or
stationary) ions, and diffusive neutrals. The code is designed for fundamental
exploration of non-local physics in the SOL and utilizes an arbitrary degree
Legendre polynomial decomposition of the electron distribution function,
treating both electron-ion and electron-atom collisions. We present a novel
method for ensuring particle and energy conservation in inelastic and
superelastic collisions, as well as the first full treatment of the logical
boundary condition in the Legendre polynomial formalism. To our knowledge,
SOL-KiT is the first fully implicit arbitrary degree harmonic kinetic code,
offering a conservative and self-consistent approach to fluid-kinetic
comparison with its integrated fluid electron mode. In this paper we give the
model equations and their discretizations, as well as showing the results of a
number of verification/benchmarking simulations.
###### keywords:
kinetic , non-local , electron , atomic , SOL , implicit
Journal: Computer Physics Communications
PROGRAM SUMMARY
Program Title: SOL-KiT
Licensing provisions: GNU GPLv3
Programming language: Fortran 90
Nature of problem: Fluid models of parallel transport in the Scrape-Off Layer
(SOL) fail to account for the fact that the electron distribution function is
often far from a Maxwellian, and kinetic effects have been linked to
discrepancies between experiment and fluid modelling[1]. A kinetic treatment
of electrons in the SOL requires detailed accounting of collisional processes,
especially those with neutral particles, as well as a proper implementation of
the logical boundary condition at the material surface[2]. Furthermore, the
ability to identify differences between fluid and kinetic modelling using
self-consistent comparison is desirable.
Solution method: Electrons are modelled either as a fluid, or kinetically,
maintaining self-consistency between models. All equations are solved using
finite difference and the implicit Euler method, with fixed-point iteration.
The kinetic approach is based on solving the Vlasov-Fokker-Planck-Boltzmann
equation, decomposed in Legendre polynomials. Equations for the harmonics (or
electron fluid equations) are solved alongside fluid equations for the ions,
Ampère-Maxwell’s law for the electric field, as well as a diffusive-reactive
Collisional-Radiative model for the evolution of hydrogenic atomic states.
Each individual operator is built into a matrix, combined with all other
operators, and the matrix equation arising from the Euler method is solved
using the PETSc library, with MPI parallelization.
Additional comments: This article presents the physical and numerical outline
of the code, and is accompanied by html documentation, a small test suite
based on benchmarking runs presented below, as well as instructions and means
for compiling and executing SOL-KiT. Special focus in the article is given to
the novel numerical and model aspects in the greater context of the developed
software.
## References
* [1] A. Chankin, D. Coster, On the locality of parallel transport of heat carrying electrons in the SOL, J. Nucl. Mater. 463 (2015)
* [2] R. J. Procassini, C. K. Birdsall, B. I. Cohen, Particle simulations of collisional transport in a high recycling, diverted tokamak scrape-off layer, Nucl. Fusion 30 (11) (1990)
## 1 Introduction
The heat flow onto the plasma facing components of both present day and future
magnetically confined fusion (MCF) devices is of considerable importance [1,
2], as it will greatly affect the lifetime of the material. This is true in
both steady state operation and during transients (such as ELMs - Edge
Localized Modes). Understanding the heat flux in the Scrape-Off Layer (SOL)
is thus of key importance for the design and operation of future fusion
devices.
Classic fluid modelling of the parallel (to the magnetic field lines) energy
transport in the edge region of MCF devices has relied on the fluid closure of
Braginskii [3], or otherwise on various flux limiter approaches [4]. However,
it is now well known that there exist discrepancies between experiments and
the widely used fluid codes. These discrepancies can be, at least partly,
attributed to the effect of non-local transport in the SOL [5]. In this paper,
the term “non-local” is used to describe behaviour that stems from strong
departure of the electron distribution function from a Maxwellian, in
particular due to the fact that electron-ion mean-free paths in situations of
interest are comparable to or greater than the temperature gradient scale
lengths.
In the following text we focus mainly on aspects of the divertor SOL. The main
feature of the divertor configuration is that the location of the primary
plasma-surface interaction is moved relatively far away from the hot core [6],
onto specifically designed target plates. We distinguish between the “upstream”,
closer to the core, and the “downstream”, where the plasma near the divertor
targets is considerably cooler, and the ionization degree can be well below
100%, rendering plasma-neutral interaction important. As such, a large
temperature gradient is present, and plasma collisionality (measured with the
electron-ion collision mean free path $\lambda_{ei}$) varies greatly along the
magnetic field lines in the SOL. Of critical importance is the ratio of the
mean free path to the temperature gradient scale length $L_{\nabla
T}=(\nabla_{||}T/T)^{-1}$. Once the ratio $\lambda_{ei}/L_{\nabla T}$ is no
longer much less than unity, the classical transport results are no longer
valid [4]. However, another important concept in the understanding of energy
transport in the SOL is that of the high energy heat-carrying electrons (HCE),
which become marginally collisionless before the bulk of the distribution [7,
8], and will remain so in most SOL situations. In other words, even if the
ratio $\lambda_{ei}/L_{\nabla T}$ might still imply the correctness of
classical transport coefficients, the HCE could be collisionless, and thus
modify the transport by producing non-Maxwellian distribution functions. A
further complication in the understanding of the SOL, as mentioned above, is
the importance of electron-neutral interactions. This is especially true
during detachment[9], when the ionization degree drops considerably, and a
neutral cloud is formed between the divertor targets and the upstream plasma.
As this regime of operation offers better protection to the divertor plate
materials, it becomes important to understand the interplay of kinetic/non-
local effects already present in the SOL with detachment.
### 1.1 SOL kinetic modelling
In order to properly capture non-local effects in the SOL it is necessary to
treat the plasma using a kinetic approach. Broadly speaking, the two main
approaches in the kinetic modelling of the SOL are the Particle-in-Cell (PIC)
and finite difference methods solving the Vlasov-Fokker-Planck equation
sometimes combined with the Boltzmann collision integral.
A representative PIC code for SOL simulations is BIT1[10, 11], used in a
variety of simulation scenarios corresponding to present day machine
conditions. The PIC method is highly parallelizable, and naturally
accommodates the addition of many different collision types (e.g. tungsten
impurities[12, 13]). While detailed simulations covering many aspects of SOL
transport are possible, two issues make PIC codes complicated to operate.
These are the usually long run times (compared to finite difference methods
and fluid codes), and the fact that the number of particles simulated in a PIC
code can never approach reality, requiring smoothing techniques[14], and
potentially not resolving the high energy tails of distributions with enough
accuracy.
Finite difference methods do not suffer from the noise problems of PIC codes,
as they solve for the distribution function directly. A number of finite
difference codes with different approaches have been utilized in the modeling
of the SOL, with a few examples mentioned here. An early example of a
completely kinetic code (treating every species kinetically) was the ALLA
code[15]. Another code, utilizing a similar method to what is presented in
this paper, albeit with an explicit algorithm, is the FPI code[16, 17, 18],
where electrons are treated kinetically while others species are stationary.
More recently, the code KIPP[8, 19, 20], with kinetic electrons, has been
coupled with the 2D fluid code SOLPS, providing the latter with kinetically
calculated transport coefficients.
### 1.2 Motivation to develop SOL-KiT
Due to the great mass difference between electrons and other species within a
hydrogen plasma, electrons mainly suffer pitch-angle scattering collisions
when colliding with those heavier particles. Eigenfunctions of such collision
operators are spherical harmonics, and an expansion of the electron
distribution function in spherical harmonics becomes natural[21]. This
approach has been used in the modeling of Scrape-Off-Layer transport to a
limited extent[16, 17, 18], but has been used both in codes dealing with
laser-plasma interactions (KALOS[22]/OSHUN[23], and IMPACT[24]), as well as
electron swarm transport models[25, 26]. The expansion has been used to
efficiently model both Coulomb and electron-neutral collisions, and has proven
itself to be a powerful tool in treating plasmas of various collisionality.
With this in mind, a marriage of the Vlasov-Fokker-Planck (VFP) approach in
laser plasmas and the Boltzmann approach of electron swarms in neutral gases
is potentially a highly applicable model for the Scrape-Off Layer
plasma, where collisionality varies and neutral-plasma interactions carry a
great deal of importance.
As is typical with solutions of differential equations, the boundary
conditions tend to define the system behaviour and dictate the approach in the
numerical solution. In modeling the SOL, it becomes necessary to incorporate
the effect of the plasma sheath formed at the boundary, i.e. at the divertor
target. While the traditional approach of Procassini et al.[27] has been used
in many kinetic codes, when utilizing the spherical harmonic expansion it
becomes necessary to formulate the well known boundary condition in terms of
the expansion basis. We present this formulation, and give its implementation
in SOL-KiT.
The divertor target plate acts as a sink of particles, and as such generates
flows towards the target. Since the divertor boundary condition is formulated
in the lab frame, it is then necessary to treat the ion flow in the lab frame
as well. As a consequence, the electron-ion collision operator must be
extended to account for the moving ions. This is a different strategy for
incorporating ion motion in the electron VFP equation than used elsewhere,
where the Vlasov terms are instead transformed into the local rest frame of
the ions[28]. We present a simplified treatment of this lab frame operator in
the next section, together with fluid ion equations. In order to be able to
resolve the low velocity cells for higher harmonics (avoiding the CFL
condition) it then becomes necessary to treat the electron kinetic equation
implicitly.
Another consequence of the divertor boundary conditions is that it also acts
as a source of neutral particles. The inclusion of electron-neutral collisions
on a nonuniform velocity grid poses particle and energy conservation problems.
We present a method of mapping inelastic collisions on a nonuniform velocity
grid which conserves both energy and particles, as well as obeying a
numerically consistent detailed balance condition when calculating
superelastic collision cross-sections. In order to self-consistently model the
interaction of atomic states and the electrons, we include a diffusive-
reactive Collisional Radiative model[29] for the evolution of hydrogenic
atomic states.
Finally, in order to provide a modular, self-consistent one-to-one comparison
of kinetic and fluid modelling, the code also includes fluid equations for the
electrons, which can be solved instead of the kinetic model, while making sure
that all of the physical processes are the same, and that the atomic data is
used consistently between the two models.
To the authors’ knowledge, this is the first fully implicit arbitrary Legendre
polynomial/Spherical Harmonic code that has an inbuilt sheath boundary
condition, inelastic electron-neutral collisions, and a self-consistent fluid
mode for clean comparisons. In the following sections the equations of SOL-KiT
and their numerical implementation will be presented, starting with the
analytical aspects of the model in section 2, before moving on to the model’s
numerical implementation in section 3. Finally, details of performed
benchmarking runs will be given in section 4. We discuss the various aspects
of the code in section 5.
## 2 Physical model
In this section we will introduce the equations being solved in the SOL-KiT
model, giving a condensed overview of the physics before expanding on
individual operators.
While the code is capable of handling both fixed and periodic boundary
conditions, since the most involved cases utilize the Scrape-Off Layer domain,
we start with describing it. The domain is 1D, and is assumed to be along a
(straightened) field line. The field line is taken as the $x$-axis, around
which the domain is symmetric. The point $x=0$ is taken to be the symmetry
plane, representing the “upstream” of the SOL. A sketch of the simplified SOL
domain is given in Figure 1.
Figure 1: The SOL simulation domain (not to scale): the $x$-axis is the
principle axis of the system, with $x=0$ being the upstream symmetry plane;
the right boundary of the system is at the divertor target, which acts as a
sink for the plasma, as well as a source of neutrals via the recycling flux
$\Gamma_{REC}$
The equations solved by SOL-KiT are the following:
* 1.
Electron equations - either fluid (density, parallel velocity, temperature) or
kinetic
* 2.
Ion fluid equations - density and parallel velocity (assuming either $T_{i}=0$
or $T_{i}=T_{e}$)
* 3.
Diffusive-reactive Collisional Radiative Model for atomic states
* 4.
Ampère-Maxwell law for the evolution of the electric field
For the electrons we either solve the kinetic equation, which is the main mode
of the code, or we can solve local fluid equations, obtained by taking moments
of the kinetic model, ensuring maximum correspondence between the kinetic and
fluid modes. This in turn allows for easy comparison between the fluid and
kinetic model, further highlighting kinetic effects. The 1D kinetic equation
solved for the electrons is the Vlasov-Fokker-Planck-Boltzmann equation, given
by
$\frac{\partial f(x,\vec{v},t)}{\partial t}+v_{x}\frac{\partial
f(x,\vec{v},t)}{\partial x}-\frac{e}{m_{e}}E\frac{\partial
f(x,\vec{v},t)}{\partial v_{x}}=C[f,...],$ (1)
where $E$ is the electric field (assuming azimuthal symmetry and straightening
out the magnetic field we ignore magnetic effects). The RHS contains all of
the collision and source operators. Details on collision operators and the
Legendre polynomial decomposition of the electron distribution function are
given in Section 2.1. In the electron fluid mode, the continuity, momentum,
and temperature equations are solved instead.
The continuity equation is given by
$\frac{\partial n_{e}}{\partial t}+\frac{\partial(n_{e}u_{e})}{\partial x}=S,$
(2)
while the momentum equation is
$\frac{\partial u_{e}}{\partial t}=-u_{e}\frac{\partial u_{e}}{\partial
x}-\frac{e}{m_{e}}E+\frac{R_{ei}+R_{en}}{m_{e}n_{e}}-\frac{S}{n_{e}}u_{e}-\frac{1}{m_{e}n_{e}}\frac{\partial(n_{e}kT_{e})}{\partial
x},$ (3)
where $n_{e}$, $u_{e}$, and $T_{e}$ are the electron density, flow velocity,
and temperature, respectively. $S=S_{ion}+S_{rec}$ is the particle source
including ionization and recombination, and $R_{ei}=R_{T}+R_{u}$ is the
classical Braginskii[3] friction
$R_{u}=-\frac{m_{e}n_{e}}{\tau_{e}}0.51(u_{e}-u_{i}),$ (4)
$R_{T}=-0.71n_{e}\frac{\partial(kT_{e})}{\partial x},$ (5)
where the $\tau_{e}$ is the Braginskii collision time[3]. $R_{en}$ is the
total friction from all electron-neutral collisions, assuming a slowly
(compared to electron thermal speed) drifting Maxwellian distribution for
electrons.
The electron temperature equation is
$\frac{\partial kT_{e}}{\partial t}=-u_{e}\frac{\partial kT_{e}}{\partial
x}+\frac{2}{3}\left[\frac{Q}{n_{e}}-kT_{e}\frac{\partial u_{e}}{\partial
x}-\frac{1}{n_{e}}\frac{\partial q_{e}}{\partial
x}-\frac{S}{n_{e}}\left(\frac{3}{2}kT_{e}-\frac{m_{e}u_{e}^{2}}{2}\right)-\frac{u_{e}(R_{ei}+R_{en})}{m_{e}n_{e}}\right],$
(6)
where $q_{e}=q_{T}+q_{u}$ is again the classical Braginskii[3] heat flux
$q_{T}=-\kappa_{e}\frac{\partial(kT_{e})}{\partial x},$ (7)
$q_{u}=0.71n_{e}kT_{e}(u_{e}-u_{i}),$ (8)
where $\kappa$ is taken to be either the Spitzer-Härm result or the Lorentz
result [30, 31], depending on whether we treat electron-electron momentum
transfer collisions. $Q=Q_{ext}+Q_{en}$ is the combination of the external
heating energy source (see 2.1.4.), as well as any inelastic collision energy
transfer between the electrons and the atoms.
The ion equations are analogous, with a few differences. Firstly, the ion
continuity equation can be solved, or quasi-neutrality can be enforced
artificially by setting $Zn_{i}=n_{e}$. Secondly, there is no ion temperature
equation in the current version of the code. Instead, ion temperature is set
to either zero, or to the electron temperature. The equations are
$\frac{\partial n_{i}}{\partial t}+\frac{\partial(n_{i}u_{i})}{\partial x}=S,$
(9)
$\frac{\partial u_{i}}{\partial t}=-u_{i}\frac{\partial u_{i}}{\partial
x}+\frac{Ze}{m_{i}}E+\frac{R_{ie}+R_{CX}}{m_{i}n_{i}}-\frac{S}{n_{i}}u_{i}-\frac{1}{m_{i}n_{i}}\frac{\partial(n_{i}kT_{i})}{\partial
x},$ (10)
where $R_{ie}$ is obtained by requiring total momentum in electron-ion
collisions to be conserved, ie. $R_{ie}=-R_{ei}$. $R_{CX}$ is a simple charge
exchange friction term, given by
$R_{CX}=-n_{i}m_{i}u_{i}|u_{i}|\sum_{b}n_{b}\sigma_{CX,b},$ (11)
where the sum is over neutral atomic states, and both the ions and neutrals
are approximated as cold, with the simplified constant hydrogenic charge
exchange cross sections given by approximate low energy values obtained from
Janev[32]
$\sigma_{CX,1}=3\times 10^{-19}m^{2},\quad\sigma_{CX,2}=2^{4}\times
10^{-19}m^{2},\quad\sigma_{CX,3}=3^{4}\times 7\times
10^{-20}m^{2},\quad\sigma_{CX,b\geq 4}=b^{4}\times 6\times 10^{-20}m^{2}.$
To calculate the electric field, we solve Ampère-Maxwell’s law, which contains
only the displacement current
$\frac{\partial E}{\partial t}=-\frac{1}{\epsilon_{0}}(j_{e}+Zen_{i}u_{i}),$
(12)
where $j_{e}$ is either given as a moment of the electron distribution
function, or simply as $j_{e}=-en_{e}u_{e}$ in the electron fluid case.
Finally, the atomic state distribution of the neutrals must be tracked. This
is done using a diffusive-reactive Collisional Radiative model (CRM) to obtain
the evolution of the neutral state densities $n_{b}$ (where $b$ here denotes
the principal quantum number of the state)
$\displaystyle\frac{\partial n_{b}}{\partial t}$
$\displaystyle=\frac{\partial}{\partial x}\left(D_{b}\frac{\partial
n_{b}}{\partial x}\right)+\sum_{b^{\prime}<b}\left[K_{b^{\prime}\rightarrow
b}^{e}n_{b^{\prime}}-A_{b\rightarrow b^{\prime}}n_{b}-K_{b\rightarrow
b^{\prime}}^{e}n_{b}\right]$
$\displaystyle+\sum_{b^{\prime}>b}\left[K_{b^{\prime}\rightarrow
b}^{e}n_{b^{\prime}}+A_{b^{\prime}\rightarrow b}n_{b^{\prime}}-K_{b\rightarrow
b^{\prime}}^{e}n_{b}\right]-K_{b}^{ion}n_{b}+\alpha_{b}n_{e}^{2}n_{i}+\beta_{b}n_{e}n_{i},$
(13)
where the ionization and (de-)excitation rates $K$, as well as three-body
recombination rates $\alpha$ are calculated using moments of the distribution
function (see 2.1.3.). The inelastic cross-sections and radiative de-
excitation/recombination rates $A$ and $\beta$ are all taken from Janev[32]
and NIST[33]. Radiative de-excitation is included only up to state number
$b=20$, due to lack of available data. Since higher excited states are
primarily collisionally coupled in most situations of interest this should not
cause significant discrepancies.
The classical 1D diffusion coefficient is simply
$D_{b}=\frac{v_{tn}}{2[(n_{i}+n_{1})\sigma_{el}+\sigma_{CX,b}n_{i}]}$ (14)
where $v_{tn}$ is the thermal speed of neutrals, $\sigma_{el}$ is the elastic
collisions cross-section (see electron-neutral elastic collision operator in
2.1.3.), and $n_{1}$ is the ground state density. When charge exchange is
used, it is the dominant term in the diffusion coefficient, but the elastic
collision diffusion is included for cases when charge exchange is turned off.
Since gas temperature and elastic cross-section are free parameters in SOL-
KiT, this operator can be tuned. Ideally, however, a self-consistent neutral
dynamics model should be implemented, and this is a planned extension of the
model.
The boundary condition at the divertor target is the logical boundary
condition [27] when electrons are treated kinetically, while the ions are
always assumed to reach sound speed (as per the Bohm criterion). Details of
the kinetic boundary condition are given below. In both the fluid and kinetic
model, the flow into the sheath is ambipolar $\Gamma_{e}=\Gamma_{i}$. When
electrons are treated as a fluid the sheath heat transmission coefficient[6]
in $q_{sh}=\gamma kT_{e}\Gamma_{e}$ is taken to be
$\gamma_{e}=2-0.5\ln(2\pi(1+T_{i}/T_{e})m_{e}/m_{i})$. Finally, the atomic
neutrals are recycled with a recycling flux $\Gamma_{REC}=-R\Gamma_{i}$, where
$R\leq 1$ is the recycling coefficient. This is simply imposed by setting
$D_{1}\partial n_{1}/\partial x$ in (13) at the boundary to $\Gamma_{REC}$,
whereas it would otherwise be zero.
### 2.1 Electron kinetic equation in Legendre formalism
Spherical harmonics are an orthonormal basis set in the solid angle space
$\left(\theta,\varphi\right)\in\left[0,\pi\right)\times\left[0,2\pi\right)$,
and can be written as a suitably normalized product of associated Legendre
polynomials $P_{l}^{m}(\cos\theta)$ and the complex phase $e^{im\varphi}$. The
spherical harmonic convention used in SOL-KiT is same as the ones in
KALOS[22]/OSHUN[23]. As such, the traditional Cartesian coordinate system for
velocity $(v_{x},v_{y},v_{z})$ will be rotated so that the angles $\theta$ and
$\varphi$ are defined as part of a spherical coordinate system:
$v=\sqrt{v_{x}^{2}+v_{y}^{2}+v_{z}^{2}},\quad\theta=\arccos(v_{x}/v),\quad\varphi=\arctan(v_{z}/v_{y}).$
(15)
If we then decompose the distribution function $f(v,\theta,\varphi)$ in spherical
harmonics we get:
$f(v,\theta,\varphi)=\sum_{l=0}^{\infty}{\sum_{m=-l}^{l}{f_{l}^{m}(v)P_{l}^{|m|}(\cos\theta)e^{im\varphi}}}$
(16)
where the complex expansion coefficients $f_{l}^{m}$ satisfy
$(f_{l}^{m})^{*}=f_{l}^{-m}$. This allows writing the kinetic equation for
electrons as a set of equations for the amplitudes $f_{l}^{m}$, whose physical
significance becomes obvious when moments of the distribution function are
expressed using the expansion. If $\phi$ is a scalar function of $v$ then
$\int\phi f(\vec{v})d\vec{v}=4\pi\int_{0}^{\infty}\phi f_{0}^{0}(v)v^{2}dv$
(17)
while if $\vec{a}$ is a vector function of $v$
$\int\vec{a}f(\vec{v})d\vec{v}=\frac{4\pi}{3}\int_{0}^{\infty}||a||\begin{pmatrix}f_{1}^{0}\\\
2Re(f_{1}^{1})\\\ -2Im(f_{1}^{1})\end{pmatrix}v^{2}dv$ (18)
and similarly for higher order tensors[21, 23]. As can be seen from this, not
only does the spherical harmonic expansion have useful properties in relation
to collisions, it also provides a physically meaningful decomposition for
evaluating transport quantities.
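For concreteness, the scalar and vector moments (17)-(18) can be sketched on a discrete velocity grid as follows (illustrative Python only, not SOL-KiT code):

```python
import numpy as np

# Moments of the l = 0 and l = 1 harmonics, per equations (17)-(18).
def density(v, f0):
    """n = 4*pi * int f0(v) v^2 dv."""
    return 4.0 * np.pi * np.trapz(f0 * v**2, v)

def particle_flux(v, f1):
    """Gamma_x = (4*pi/3) * int f1(v) v^3 dv; with f1 = -u df0/dv for a
    slowly drifting Maxwellian this recovers Gamma_x = n*u."""
    return 4.0 * np.pi / 3.0 * np.trapz(f1 * v**3, v)
```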
Since the current model is 1D, and azimuthal symmetry around the $x$-axis is
assumed, we do not treat magnetic field effects, and the spherical harmonic
decomposition reduces to a Legendre polynomial decomposition ($m=0$ always).
Accordingly, in the rest of this paper, harmonics will be labeled only by
their $l$-number, i.e. $f_{l}(v)$.
Following the Legendre decomposition of the distribution function as outlined
above, the 1D kinetic equation can be written as a set of equations
$\frac{\partial f_{l}(x,v,t)}{\partial t}=A_{l}+E_{l}+C_{l},$ (19)
where now all of the operators are moved to the RHS and are functions of $l$
in addition to whatever arguments they naturally have. These will be examined
in detail below.
#### 2.1.1 Vlasov terms
The terms on the LHS of equation (1) are usually referred to as the Vlasov
terms. The two Vlasov terms in equation (19) are the spatial advection term
$A_{l}$ (corresponding to second LHS term in (1)), and the velocity space
advection term due to the electric field in the $x$-direction $E_{l}$
(corresponding to third LHS term in (1)).
Firstly, the spatial advection term (advection in the $x$-direction), for a
given harmonic $l$ is
$A_{l}=-\frac{l}{2l-1}v\frac{\partial f_{l-1}}{\partial
x}-\frac{l+1}{2l+3}v\frac{\partial f_{l+1}}{\partial x}.$ (20)
Spatial advection couples harmonics with different $l$ numbers. The physical
significance of this coupling is most easily seen in the coupling between
$f_{0}$ and $f_{1}$. The moments of $f_{0}$ are the density and total energy,
while $f_{1}$ is associated with flows (of particles and energy). Thus
gradients in $f_{0}$ (density and temperature) drive advection of $f_{1}$
(flows), and vice-versa.
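An illustrative sketch of the advection term (20) on a 1D grid is given below (Python is used purely for exposition here; the actual SOL-KiT discretization is described in section 3):

```python
import numpy as np

# Spatial advection (20) of harmonic l; f_lm1 = f_{l-1}(x, v) and
# f_lp1 = f_{l+1}(x, v) are arrays of shape (Nx, Nv). For l = 0 the
# first coefficient vanishes, so f_lm1 may be passed as zeros.
def advection_term(l, v, x, f_lm1, f_lp1):
    df_lm1 = np.gradient(f_lm1, x, axis=0)
    df_lp1 = np.gradient(f_lp1, x, axis=0)
    return (-l / (2 * l - 1) * v * df_lm1
            - (l + 1) / (2 * l + 3) * v * df_lp1)
```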
The velocity space advection term due to the electric field couples harmonics
as well, albeit through velocity space gradients in $f_{l}$. As only the $x$
component of the electric field is treated, in the remainder of the paper it
will simply be written as $E$ for brevity. The velocity space advection
operator is given by[22]
$E_{l}=\frac{e}{m}E\left[\frac{l}{2l-1}G_{l-1}+\frac{l+1}{2l+3}H_{l+1}\right]$
(21)
where
$G_{l}(v)=v^{l}\frac{\partial v^{-l}f_{l}}{\partial v},$ (22)
$H_{l}(v)=\frac{1}{v^{l+1}}\frac{\partial v^{l+1}f_{l}}{\partial v}.$ (23)
As is evident from equation (21), the electric field couples harmonics through
the $G_{l}$ and $H_{l}$ functions, which contain velocity space gradients of
the coupled harmonics.
Thus, Vlasov terms provide coupling of harmonics through either spatial
gradients or the electric field.
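A companion sketch for the velocity space advection term (21)-(23), under the same illustrative assumptions:

```python
import numpy as np

# G_l and H_l of equations (22)-(23), and the E-field term (21).
def G(l, v, f_l):
    return v**l * np.gradient(v**(-l) * f_l, v)

def H(l, v, f_l):
    return v**(-(l + 1)) * np.gradient(v**(l + 1) * f_l, v)

def e_field_term(l, E_norm, v, f_lm1, f_lp1):
    """E_norm = eE/m_e; for l = 0 the first term carries a zero
    coefficient, since there is no f_{-1}."""
    lo = l / (2 * l - 1) * G(l - 1, v, f_lm1) if l > 0 else 0.0
    hi = (l + 1) / (2 * l + 3) * H(l + 1, v, f_lp1)
    return E_norm * (lo + hi)
```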
#### 2.1.2 Coulomb collision terms
Let us consider the effect of Coulomb collisions on the distribution function
$f$ of particles of mass $m$ and charge $q=ze$ colliding with particles of
mass $M=\mu m$ and charge $Q=Ze$ with distribution $F$. We follow the
formalism of Shkarofsky et al.[21], starting from the Rosenbluth coefficient
formulation of the Fokker-Planck collision operator for Coulomb collisions
$\frac{1}{\Gamma_{zZ}}\frac{\delta f}{\delta
t}=\frac{4\pi}{\mu}Ff+\frac{\mu-1}{\mu+1}\nabla\mathcal{H}(F)\cdot\nabla
f+\frac{\nabla\nabla\mathcal{G}(F):\nabla\nabla f}{2},$ (24)
where $\nabla=\partial/\partial\vec{v}$ and
$\Gamma_{zZ}=(zZe^{2})^{2}\ln\Lambda/(4\pi(m\epsilon_{0})^{2})$, with
$\ln\Lambda$ denoting the Coulomb logarithm. The Rosenbluth drag and diffusion
coefficients are respectively $\mathcal{H}$ and $\mathcal{G}$. We separate the
distribution functions into their isotropic and anisotropic components
$F=F_{0}+F_{a}$, $f=f_{0}+f_{a}$. The key assumption going forward is that the
anisotropic component is small compared to the isotropic one, so that it
becomes possible to linearize equation (24). Expanding the distribution
function and the Rosenbluth coefficients in harmonics and using the
integrals[21]
$I_{j}(F_{l})=\frac{4\pi}{v^{j}}\int_{0}^{v}F_{l}(u)u^{j+2}du,\quad
J_{j}(F_{l})=\frac{4\pi}{v^{j}}\int_{v}^{\infty}F_{l}(u)u^{j+2}du,$ (25)
one can derive the expressions for the harmonic components of the Fokker-
Planck collision integral for all $l$. For $l=0$ this is
$\frac{1}{\Gamma_{zZ}}\frac{\delta f_{0}}{\delta
t}=\frac{1}{3v^{2}}\frac{\partial}{\partial
v}\left[\frac{3}{\mu}f_{0}I_{0}(F_{0})+v\left(I_{2}(F_{0})+J_{-1}(F_{0})\right)\frac{\partial
f_{0}}{\partial v}\right].$ (26)
For $l>0$ the following is obtained[23]
$\displaystyle\frac{1}{\Gamma_{zZ}}\frac{\partial f_{l}}{\partial t}$
$\displaystyle=\frac{4\pi}{\mu}\left[F_{0}f_{l}+f_{0}F_{l}\right]$
$\displaystyle-\frac{(\mu-1)}{\mu v^{2}}\left\\{\frac{\partial f_{0}}{\partial
v}\left[\frac{l+1}{2l+1}I_{l}(F_{l})-\frac{l}{2l+1}J_{-1-l}(F_{l})\right]+I_{0}(F_{0})\frac{\partial
f_{l}}{\partial v}\right\\}$
$\displaystyle+\frac{I_{2}(F_{0})+J_{-1}(F_{0})}{3v}\frac{\partial^{2}f_{l}}{\partial
v^{2}}+\frac{-I_{2}(F_{0})+2J_{-1}(F_{0})+3I_{0}(F_{0})}{3v^{2}}\frac{\partial
f_{l}}{\partial v}$
$\displaystyle-\frac{l(l+1)}{2}\times\frac{-I_{2}(F_{0})+2J_{-1}(F_{0})+3I_{0}(F_{0})}{3v^{3}}f_{l}$
$\displaystyle+\frac{1}{2v}\frac{\partial^{2}f_{0}}{\partial
v^{2}}\left[C_{1}I_{l+2}(F_{l})+C_{1}J_{-1-l}(F_{l})+C_{2}I_{l}(F_{l})+C_{2}J_{1-l}(F_{l})\right]$
$\displaystyle+\frac{1}{v^{2}}\frac{\partial f_{0}}{\partial
v}\left[C_{3}I_{l+2}(F_{l})+C_{4}J_{-1-l}(F_{l})+C_{5}I_{l}(F_{l})+C_{6}J_{1-l}(F_{l})\right],$
(27)
where the $C$ coefficients are functions of $l$
$\displaystyle C_{1}$ $\displaystyle=\frac{(l+1)(l+2)}{(2l+1)(2l+3)},\quad
C_{2}=-\frac{(l-1)l}{(2l+1)(2l-1)},\quad
C_{3}=-\frac{(l+1)l/2+l+1}{(2l+1)(2l+3)},$ $\displaystyle C_{4}$
$\displaystyle=\frac{-(l+1)l/2+l+2}{(2l+1)(2l+3)},\quad
C_{5}=\frac{(l+1)l/2+l-1}{(2l+1)(2l-1)},\quad
C_{6}=-\frac{(l+1)l/2-l}{(2l+1)(2l-1)}.$
For electron-electron collisions $\mu=1$. The effect of e-e collisions on the
isotropic part of the distribution function is given by
$\frac{1}{\Gamma_{ee}}\left(\frac{\delta f_{0}}{\delta
t}\right)_{e-e}=\frac{1}{v^{2}}\frac{\partial}{\partial
v}\left[C(f_{0})f_{0}+D(f_{0})\frac{\partial f_{0}}{\partial v}\right],$ (28)
where the drag and diffusion coefficients are defined as
$C(f_{0})=4\pi\int_{0}^{v}f_{0}(u)u^{2}du,$ (29)
$D(f_{0})=4\pi\int_{0}^{v}u^{2}\left[\int_{u}^{\infty}f_{0}(u^{\prime})u^{\prime}du^{\prime}\right]du.$
(30)
Note that $D$ is not in the form one would expect from equation (26). Instead,
it is written in an analytically equivalent form (see section 3.5.). The
electron-electron collision operator for $l=0$ is important for the proper
relaxation of the electron distribution function to a Maxwellian.
For higher harmonics, from (27) we get
$\begin{split}\frac{1}{\Gamma_{ee}}\left(\frac{\delta f_{l}}{\delta
t}\right)_{e-e}&=8\pi
f_{0}f_{l}+\frac{I_{2}(f_{0})+J_{-1}(f_{0})}{3v}\frac{\partial^{2}f_{l}}{\partial
v^{2}}\\\
&+\frac{-I_{2}(f_{0})+2J_{-1}(f_{0})+3I_{0}(f_{0})}{3v^{2}}\frac{\partial
f_{l}}{\partial v}\\\
&-\frac{l(l+1)}{2}\times\frac{-I_{2}(f_{0})+2J_{-1}(f_{0})+3I_{0}(f_{0})}{3v^{3}}f_{l}\\\
&+\frac{1}{2v}\frac{\partial^{2}f_{0}}{\partial
v^{2}}\left[C_{1}I_{l+2}(f_{l})+C_{1}J_{-l-1}(f_{l})+C_{2}I_{l}(f_{l})+C_{2}J_{1-l}(f_{l})\right]\\\
&+\frac{1}{v^{2}}\frac{\partial f_{0}}{\partial
v}\left[C_{3}I_{l+2}(f_{l})+C_{4}J_{-l-1}(f_{l})+C_{5}I_{l}(f_{l})+C_{6}J_{1-l}(f_{l})\right].\end{split}$
(31)
As the ions are either assumed cold or with the same temperature as the
electrons, in the current version of SOL-KiT the effect of electron-ion
collisions on $f_{0}$ is not included.
For higher harmonics we distinguish two cases of electron-ion collisions. The
first is the classical stationary ion case, where $F_{0}=n_{i}\delta(v)/(4\pi
v^{2})$. It can easily be shown that equation (27) reduces to the following
eigenfunction form
$\left(\frac{\delta f_{l}}{\delta
t}\right)_{e-i}=-\frac{l(l+1)}{2}\frac{\Gamma_{ei}n_{i}}{v^{3}}f_{l}.$ (32)
Here it can be seen that this is purely angular scattering, which dampens
higher harmonics, helping us truncate the expansion.
However, when ions are not stationary, but are moving at some velocity much
smaller than the electron thermal velocity (the case we expect in the SOL), it
is necessary to modify the electron-ion collision operator. This is done by
first getting rid of all the terms proportional to the inverse mass ratio in
equation (27). To calculate the $I$ and $J$ integrals for a cold ion stream we
let the ion distribution function be
$F(\vec{v})=n_{i}\delta(\vec{v}-\vec{u_{i}}),$ (33)
Recasting the Dirac delta into spherical coordinates assuming azimuthal
symmetry and expanding in Legendre polynomials gives us
$F_{l}(v)=\frac{n_{i}(2l+1)}{4\pi v^{2}}\delta(v-u_{i}).$ (34)
Substituting these harmonics into the equations for the $I$ and $J$ integrals
gives us
$I_{j}(F_{l})=(2l+1)n_{i}\frac{u_{i}^{j}}{v^{j}}\Theta(v-u_{i}),$ (35)
$J_{j}(F_{l})=(2l+1)n_{i}\frac{u_{i}^{j}}{v^{j}}\Theta(u_{i}-v),$ (36)
where $\Theta$ denotes the Heaviside step function. It is now trivial to see
that all but the $I_{0}$ integrals vanish when $u_{i}=0$, recovering the
stationary ion collision integral. It can also be easily shown that for small
enough $u_{i}$, the collision integral for $f_{1}$ reduces to[21]
$\frac{1}{\Gamma_{zZ}}\frac{\partial f_{1}}{\partial
t}=-\frac{n_{i}}{v^{3}}\left(f_{1}+u_{i}\frac{\partial f_{0}}{\partial
v}\right).$ (37)
If $f_{0}$ is taken to be Maxwellian, this gives a stationary solution to the
$f_{1}$ collision operator to be that $f_{1}$ which yields a slowly drifting
Maxwellian with drift velocity $u_{i}$.
Taking a closer look at the obtained collision integral, we see that electrons
slower than the ions are being carried around by them, as one would expect.
This modifies all harmonics for small $v$, so we lose the convenient property
of clean truncation of the distribution function by electron-ion collisions.
However, in realistic simulation cases, this happens only for a small handful
of velocity space cells, while higher harmonics are normally dampened in the
rest of velocity space. This is to be expected, as our electrons see an ion
Dirac delta and are collisionally driven towards it.
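The integrals (35)-(36) are straightforward to evaluate; an illustrative sketch (assuming $u_{i}>0$ and a strictly positive velocity grid) reads:

```python
import numpy as np

# Cold-ion-stream integrals (35)-(36); the boolean factors play the
# role of the Heaviside functions. In the limit u_i -> 0 only I_0
# survives, recovering the stationary-ion pitch-angle operator (32).
def I_stream(j, l, v, n_i, u_i):
    return (2 * l + 1) * n_i * (u_i / v)**j * (v > u_i)

def J_stream(j, l, v, n_i, u_i):
    return (2 * l + 1) * n_i * (u_i / v)**j * (v < u_i)
```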
#### 2.1.3 Boltzmann collision terms
We start from the general form of the Boltzmann collision integral for the
effect of collisions on the distribution function of species $s$ colliding
with species $s^{\prime}$
$C[f_{s},f_{s^{\prime}}](v)=\int
d\vec{v_{2}}d\Omega|\vec{v}-\vec{v_{2}}|\sigma(|\vec{v}-\vec{v_{2}}|,\Omega)[f_{s}(\vec{v^{\prime}})f_{s^{\prime}}(\vec{v_{2}^{\prime}})-f_{s}(\vec{v})f_{s^{\prime}}(\vec{v_{2}})],$
(38)
where primed velocities denote values before a collision, and $\sigma$ is the
appropriate differential cross-section. The following results are all derived
under the assumption of a small mass ratio and stationary (slow compared to
the electrons) neutral particles (atoms)[21, 25, 26]. Furthermore, all
differential cross-sections are assumed to be azimuthally symmetric, i.e. are
only a function of energy/velocity of the impacting particle (electron), and
the deflection angle (here $\chi$). This just means that the cross-sections do
not depend on the orientation of the neutral particle.
##### Elastic electron-neutral collisions
The first term in the small mass expansion of the electron-neutral elastic
collision integral cancels for $l=0$. It is therefore necessary to include
higher order effects of collisional energy transfer. This way, one can allow
for long term relaxation of the electron distribution function to a Maxwellian
with temperature equal to the gas temperature $T_{g}$. If $M$ is the neutral
mass, and $n_{b}$ the gas density, the collision integral takes the form
$\left(\frac{\delta f_{0}}{\delta
t}\right)_{e-n_{b}}=\frac{m_{e}}{M+m_{e}}\frac{1}{v^{2}}\frac{\partial}{\partial
v}\left[n_{b}v^{4}\left(\int
d\Omega(1-\cos\chi)\sigma_{b}^{el}(\chi,v)\right)\left(f_{0}+\frac{kT_{g}}{m_{e}v}\frac{\partial
f_{0}}{\partial v}\right)\right]$ (39)
As is the case with electron-ion Coulomb collisions, due to a great mass
difference, the $l>0$ integral is considerably simplified, and can be written
as
$\left(\frac{\delta f_{l>0}}{\delta t}\right)_{e-n_{b}}=-n_{b}v\left[\int
d\Omega(1-P_{l}(\cos(\chi)))\sigma_{b}^{el}(\chi,v)\right]f_{l}.$ (40)
where $P_{l}$ is simply the $l$-th Legendre polynomial.
While implemented and tested in the current version of SOL-KiT, these two
processes are rarely used, because of the lack of proper elastic collision
cross-section data. Currently the cross-section for elastic collisions of an
electron with a hydrogen atom in a given state is obtained using the classical
expression for orbit size. Namely, this gives for the integral elastic
collision cross-section of electrons with hydrogen atoms in the $b$-th state
$\sigma_{b}^{el,TOT}=\pi a_{0}^{2}b^{4}$ (41)
where $a_{0}$ is the Bohr radius.
##### Inelastic electron-neutral collisions
We start with inelastic collisions where the total number of particles of each
species is conserved (e.g. excitation). A standard procedure exists[21, 25]
for an inelastic collision for which the pre-collision and post-collision
velocities are related as
$\frac{m_{e}v^{\prime 2}}{2}=\frac{m_{e}v^{2}}{2}+\epsilon$ (42)
where $\epsilon$ is the inelastic energy loss. Defining
$\alpha=v^{\prime}/v=(1+2\epsilon/mv^{2})^{1/2}$, one can write the collision
integral as
$\left(\frac{\delta f_{l}}{\delta t}\right)^{ex}_{b\rightarrow
b^{\prime}}=-n_{b}v\left[\sigma^{TOT}_{b\rightarrow
b^{\prime}}(v)f_{l}(v)-f_{l}(\alpha
v)\alpha^{2}\left(\sigma^{TOT}_{b\rightarrow b^{\prime}}(\alpha
v)-\sigma^{(l)}_{b\rightarrow b^{\prime}}(\alpha v)\right)\right].$ (43)
Here $\sigma^{TOT}=\int d\Omega\sigma(\chi,v)$ is the integral cross section,
while
$\sigma^{(l)}(v)=\int d\Omega(1-P_{l}(\cos\chi))\sigma(\chi,v).$
Using equation (43) for $l=0$ one can easily show that particle number is
conserved.
On the other hand, collisional processes such as ionization do not conserve
the total number of particles. The main difficulty in treating ionization is
the fact that it is, ultimately, a 3-body process, and ideally one would like
to know the triply differential cross section for such a process. However,
such an approach is not only complicated, but cross-section data (to the
author’s knowledge) are not systematically available. Because of this, the
approach taken here will be the simplest possible[15], where all electrons
produced in ionization are put in the lowest velocity cell, while the original
electron experiences a standard energy loss/deflection as in the case of
excitation. The collisional operator for ionization takes the following form
$\left(\frac{\delta f_{l}}{\delta t}\right)_{b}^{ion}=\left(\frac{\delta
f_{l}}{\delta
t}\right)^{ex}(\sigma_{b}^{ion})+n_{b}K^{ion}_{b}\frac{\delta(v)}{4\pi
v^{2}}\delta_{l,0}$ (44)
where $\left(\frac{\delta f_{l}}{\delta t}\right)^{ex}(\sigma_{b}^{ion})$ is
equation 43, but with $\sigma^{ex}$ replaced with $\sigma^{ion}$, and with the
collisional ionization rate coefficient defined (unconventionally) as
$K^{ion}_{b}=4\pi\int dvv^{3}f_{0}(v)\sigma^{TOT,ion}_{b}(v).$ (45)
Particle sources are then computed using $S_{ion}=\sum_{b}K^{ion}_{b}n_{b}$,
and similarly for recombination, while the inelastic collision contribution to
$Q$ is similarly calculated by taking the product of each transition rate and
the associated transition energy.
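A minimal sketch of the rate coefficient (45) and the resulting particle source, with illustrative Python names:

```python
import numpy as np

# Ionization rate coefficient (45) on a discrete velocity grid, and
# the source S_ion = sum_b K_b^ion n_b.
def K_ion(v, f0, sigma_ion):
    return 4.0 * np.pi * np.trapz(v**3 * f0 * sigma_ion, v)

def ionization_source(v, f0, sigma_ion_by_state, n_b):
    return sum(K_ion(v, f0, sig) * n
               for sig, n in zip(sigma_ion_by_state, n_b))
```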
Of course, one operator of the above kinds exists for each possible process of
the given kind, i.e. one operator for each excitation process, and one for
each ionization process, taking in the appropriate neutral state densities and
cross-sections for the given process. The rate coefficients are the same ones
used in equation (13).
Inverse processes (deexcitation and 3-body recombination) are treated using
the principle of detailed balance [34, 35] to obtain cross-sections. For
deexcitation, this gives
$\sigma_{deex}(i,j,v^{\prime})=\frac{g_{j}}{g_{i}}\frac{v^{2}}{v^{\prime
2}}\sigma_{ex}(j,i,v)$ (46)
where $i$ and $j$ are atomic states ($j<i$), and $g_{i}$ and $g_{j}$ their
statistical weights. For hydrogen these are simply $g_{n}=2n^{2}$. Equation
(42) defines the velocities, but we use a negative $\epsilon$.
For 3-body recombination, using the statistical weights of a free electron gas
we get for the cross-section
$\sigma_{3b-recomb}(i,v^{\prime})\frac{1}{n_{e}}=\frac{g_{i}}{2g_{1}^{+}}\left(\frac{h^{2}}{2\pi
m_{e}kT_{e}}\right)^{3/2}\times\frac{v^{2}}{v^{\prime 2}}\sigma_{ion}(i,v),$
(47)
where $h$ is the Planck constant, $n_{e}$ and $T_{e}$ are the electron density
and temperature, respectively, and $g_{1}^{+}$ is the ion ground state
statistical weight (for hydrogen simply $g_{1}^{+}=1$).
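These relations can be sketched directly (hydrogen statistical weights $g_{n}=2n^{2}$, $g_{1}^{+}=1$, $kT_{e}$ in Joules; illustrative Python only):

```python
import numpy as np

def g(n):
    return 2 * n**2

# De-excitation i -> j (j < i) from the excitation cross-section
# sigma_ex(j -> i) at the faster velocity v, equation (46).
def sigma_deex(i, j, v_prime, v, sigma_ex_ji):
    return g(j) / g(i) * v**2 / v_prime**2 * sigma_ex_ji

# Three-body recombination into state i, equation (47).
def sigma_3b_recomb(i, v_prime, v, sigma_ion_i, n_e, kT_e,
                    m_e=9.10938e-31, h=6.62607e-34):
    lam3 = (h**2 / (2.0 * np.pi * m_e * kT_e))**1.5
    return n_e * g(i) / 2.0 * lam3 * v**2 / v_prime**2 * sigma_ion_i
```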
To calculate the electron fluid mode $R_{en}$ in equation (3) we use terms of
the form
$R_{en}^{ion}=\sum_{b}\frac{4\pi}{3}\int_{0}^{\infty}\left(\frac{\delta
f_{1}}{\delta t}\right)_{b}^{ion}v^{3}dv,$
where $f_{1}(v)=-u_{e}\partial f_{0}/\partial v$ (with Maxwellian $f_{0}$),
and similarly for other neutral processes.
#### 2.1.4 Electron heating operator
The implemented diffusive heating operator has the form
$\left(\frac{\partial f_{0}}{\partial
t}\right)_{heating}=\Theta(L_{h}-x)D(x,t)\frac{1}{3v^{2}}\frac{\partial}{\partial
v}v^{2}\frac{\partial f_{0}}{\partial v},$ (48)
where $\Theta(L_{h}-x)$ is the step function designating the heating region.
It is easy to check that this operator conserves particle number if $\partial
f/\partial v=0$ on the system boundaries. If we assume a spatially uniform
heating, it is easy to show that
$D(t)=\frac{W_{h}(t)}{m_{e}\int_{0}^{L_{h}}n_{e}(x,t)dx},$ (49)
where $W_{h}(t)$ is the heat flux entering the SOL over length $L_{h}$. This
is related to the fluid model heating $Q_{ext}$ via $Q_{ext}=W_{h}/L_{h}$.
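For instance, equation (49) on a discrete spatial grid could be sketched as (assuming the heating region spans several cells; names are illustrative):

```python
import numpy as np

# Heating diffusion coefficient (49): W_h is the heat flux entering
# the SOL, deposited over 0 <= x <= L_h.
def heating_coefficient(W_h, x, n_e, L_h, m_e=9.10938e-31):
    mask = x <= L_h                # the region selected by Theta(L_h - x)
    return W_h / (m_e * np.trapz(n_e[mask], x[mask]))
```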
#### 2.1.5 Particle source operator
In order to treat upstream density perturbations, the following electron and
ion particle sources (in kinetic mode only) are implemented
$\left(\frac{\partial f_{0}}{\partial
t}\right)_{source}=\Theta(L_{s}-x)F_{R}(x,t)\left(\frac{m_{e}}{2\pi
kT_{source}}\right)^{3/2}e^{\frac{-mv^{2}}{2kT_{source}}},$ (50)
where $F_{R}$ is the source rate coefficient
$F_{R}=\frac{\Gamma_{in}}{L_{s}},$ (51)
with $\Gamma_{in}$ being the effective upstream flux. The particles are
injected over a length of $L_{s}$ and with temperature $T_{source}$, which can
be the background temperature.
The ion particle source is simply
$\left(\frac{\partial n_{i}}{\partial t}\right)_{source}=F_{R}.$ (52)
#### 2.1.6 Divertor target boundary condition with Legendre polynomials
The boundary condition at the divertor target is calculated using the standard
logical boundary condition [27], setting the ion and electron fluxes to be
equal at the sheath entrance. The logical boundary condition assumes that all
electrons with a parallel velocity above some $v_{c}$ moving towards the
target are lost, while all others are reflected. This translates to having a
sharp cut-off in the electron distribution function. The challenge when
formulating this condition in a Legendre polynomial formalism is the extreme
anisotropy that results from it, which would require a high number of
harmonics to resolve to a satisfactory level. Fortunately, this number is
usually not prohibitively high (see 4.4.). The harmonic content of the “cut-
off” distribution $f_{cl}$ can be written as a linear combination of known
harmonics
$f_{cl}(v)=\sum_{l^{\prime}}P_{ll^{\prime}}f_{l^{\prime}}(v).$ (53)
For details on the calculation of the transformation matrix $P_{ll^{\prime}}$
see Appendix A. Knowing the form of the distribution function, one can solve
the ambipolarity condition
$\frac{4\pi}{3}\int_{0}^{\infty}v^{3}f_{c1}dv=n_{i,sh}u_{i,sh},$ (54)
where $n_{i,sh}$ is the extrapolated density at the sheath boundary (see 3.7.
below), and $u_{i,sh}$ is the ion velocity at the boundary, given by the Bohm
condition
$u_{i}\geq c_{s}=\sqrt{\frac{k(T_{e}+T_{i})}{m_{i}}},$ (55)
where $T_{e}$ is the electron temperature in the last simulation cell, and
$T_{i}$ is the ion temperature. Solving the ambipolarity condition gives the
value of $v_{c}$, and with it the value of the sheath potential drop
$\Delta\Phi=m_{e}v_{c}^{2}/(2e)$. The electron distribution harmonics with the
correct cut-off velocity can then be used as the dynamically updated boundary
condition.
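Since raising the cut-off reflects more electrons, the residual of the ambipolarity condition (54) is monotone in $v_{c}$ and can be found by bisection. A sketch is given below, with electron_flux standing in for the full cut-off reconstruction via $P_{ll^{\prime}}$ (Appendix A); this illustrates the idea only and is not the SOL-KiT routine:

```python
# Bisection for v_c such that electron_flux(v_c) equals the target
# (Bohm) ion flux; electron_flux is assumed monotonically decreasing.
def solve_cutoff(electron_flux, target_flux, v_lo, v_hi, tol=1e-12):
    for _ in range(200):
        v_mid = 0.5 * (v_lo + v_hi)
        if electron_flux(v_mid) > target_flux:
            v_lo = v_mid          # flux still too large: raise cut-off
        else:
            v_hi = v_mid
        if v_hi - v_lo < tol:
            break
    return 0.5 * (v_lo + v_hi)

def sheath_potential_drop(v_c, m_e=9.10938e-31, e=1.60218e-19):
    """Delta Phi = m_e v_c^2 / (2 e)."""
    return m_e * v_c**2 / (2.0 * e)
```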
## 3 Numerical Methods
In this section we present numerical details of the SOL-KiT algorithm,
starting with the definitions of the normalization scheme and the grids used.
An overview of discretization schemes for the various operators in the code
follows.
### 3.1 Normalization
The temperature is normalized to some reference value (in eV), while the
reference density is assumed to be given in $m^{-3}$. These normalization
constants will be refered to as $T_{0}$ and $n_{0}$, respectively. The
velocity is normalized to the electron thermal speed
$v_{th}=({2T_{0}[J]}/{m_{e}})^{1/2}$, time is normalized to the $90^{\circ}$
electron-ion collision time
$t_{0}={v_{th}^{3}}/{(\Gamma_{ei}^{0}n_{0}\ln\Lambda_{ei}(T_{0},n_{0})/Z)}$,
and the length to the thermal electron-ion collision mean free path
$x_{0}=v_{th}t_{0}$. Here $T_{0}[J]$ denotes the normalization temperature
converted from eV to Joules, with
$\Gamma_{ei}^{0}=Z^{2}\Gamma_{ee}^{0}=Z^{2}\frac{e^{4}}{4\pi(m_{e}\epsilon_{0})^{2}},$
and where $\ln\Lambda_{ei}(T_{0},n_{0})$ is the Coulomb logarithm for
electron-ion collisions calculated for the normalization temperature and
density (taken from [36]). All normalized quantities are
$\displaystyle\tilde{v}$
$\displaystyle=\frac{v}{v_{th}},\quad\tilde{t}=\frac{t}{t_{0}},\quad\tilde{x}=\frac{x}{x_{0}},$
$\displaystyle\tilde{f_{l}}$
$\displaystyle=\frac{f_{l}}{n_{0}v_{th}^{-3}},\quad\tilde{E}=\frac{Eet_{0}}{m_{e}v_{th}},\quad\tilde{q}=\frac{q}{m_{e}n_{0}v_{th}^{3}},$
$\displaystyle\tilde{T}_{e,i,g}$
$\displaystyle=\frac{T_{e,i,g}}{T_{0}},\quad\tilde{n}_{e,i,b}=\frac{n_{e,i,b}}{n_{0}},\quad\tilde{u}_{e,i}=\frac{u_{e,i}}{v_{th}},$
$\displaystyle\tilde{\epsilon}$
$\displaystyle=\frac{\epsilon}{T_{0}},\quad\tilde{\sigma}=\frac{\sigma}{\sigma_{0}},$
where $\sigma_{0}=a_{0}^{2}\pi$ ($a_{0}$ being the Bohr radius). In the
following sections the normalized quantities will be written without the tilde
in order to lighten the notation.
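For illustration, the reference constants can be evaluated with the following minimal Python sketch. The Coulomb logarithm is replaced by a fixed placeholder value rather than the formulary expression of [36], so the output is indicative only.

```python
import math

# Physical constants (SI)
e = 1.602176634e-19      # elementary charge [C]
m_e = 9.1093837015e-31   # electron mass [kg]
eps0 = 8.8541878128e-12  # vacuum permittivity [F/m]

def normalization(T0_eV, n0, Z=1, coulomb_log=10.0):
    """Reference speed, time, and length for T0 [eV] and n0 [m^-3];
    coulomb_log is a placeholder for ln Lambda_ei(T0, n0)."""
    T0_J = T0_eV * e                                   # T0 converted to Joules
    v_th = math.sqrt(2.0 * T0_J / m_e)                 # electron thermal speed
    gamma_ei0 = Z**2 * e**4 / (4.0 * math.pi * (m_e * eps0) ** 2)
    t0 = v_th**3 / (gamma_ei0 * n0 * coulomb_log / Z)  # 90-degree e-i time
    return v_th, t0, v_th * t0                         # (v_th, t0, x0)

print(normalization(10.0, 2.5e19))  # reference plasma used in several runs below
```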
### 3.2 Grids
The velocity grid is a uniform or geometric grid of cells with (starting)
width $\Delta v_{1}$ and width multiplier $c_{v}$, with $N_{v}$ cell centres
distributed as
$\displaystyle v_{1}$ $\displaystyle=\frac{\Delta v_{1}}{2},\quad\Delta
v_{n}=c_{v}\Delta v_{n-1},\quad v_{n}=v_{n-1}+\frac{1}{2}\left(\Delta
v_{n}+\Delta v_{n-1}\right),$
while the spatial grid is staggered, i.e. consists of cell centres and
boundaries. $N_{c}$ is the number of cells (cell centres), while $N_{x}$ is
used to denote the total number of spatial points (cells and boundaries),
which depends on the boundary conditions of the grid (while $N_{c}$ is an
input parameter).
In the following text, spatial points with an odd index ($x_{1}$, $x_{3}$,
etc.) will denote cell centres and those with an even index will denote cell
boundaries. The values of these points are determined by
$\displaystyle x_{1}=0,\quad x_{k}=x_{k-1}+\frac{\Delta x_{m}^{c}}{2},$
where $m=k$ if $k$ is odd, or $m=k-1$ if $k$ is even. $\Delta x_{m}^{c}$
denotes the cell width of the cell whose centre is at $m$. For a uniform grid,
this is constant, while for a “logarithmic” grid it is an exponential function
that starts at a prescribed width for the first cell $dx$, and drops to a
prescribed width of the last cell $\Delta x_{L}$ (with a fixed number of cells
$N_{c}$).
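For concreteness, the velocity grid recursion above can be sketched in Python as follows (a width multiplier $c_{v}=1$ recovers the uniform grid):

```python
def velocity_grid(dv1, cv, Nv):
    """Cell widths dv_n = cv*dv_{n-1} and centres v_1 = dv_1/2,
    v_n = v_{n-1} + (dv_n + dv_{n-1})/2, as in Section 3.2."""
    widths = [dv1 * cv**n for n in range((Nv))]
    centres = [widths[0] / 2.0]
    for n in range(1, Nv):
        centres.append(centres[-1] + 0.5 * (widths[n] + widths[n - 1]))
    return centres, widths

# Example: the geometric grid used in Run 8 below (dv1 = 0.01, cv = 1.025)
centres, widths = velocity_grid(0.01, 1.025, 120)
```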
Variables are positioned on the staggered grid in the following manner:
1. In cell centres:
   (a) $f_{l}$ for even $l$
   (b) Number densities: $n_{e}$, $n_{i}$, and $n_{b}$
   (c) Electron fluid temperature $T_{e}$
2. On cell boundaries:
   (a) $f_{l}$ for odd $l$
   (b) $E$-field
   (c) Ion and electron fluid velocities, $u_{i}$ and $u_{e}$
Variables not evolved in cell centres are linearly interpolated from
neighbouring cell boundaries, and vice versa.
### 3.3 Timesteps, nonlinear iteration, and vectorization of variables
SOL-KiT uses either uniform timesteps of prescribed length $\Delta t$, or
rescales $\Delta t$ with $\min(T_{e}(x)^{3/2}/n_{TOT}(x))$ (where $n_{TOT}$ is
the total density of heavy particles). This is a conservative estimate for the
shortest Coulomb collision time. In the following text the timestep will be
assumed uniform, but generalization to adaptive timesteps is straightforward.
Timestepping is done using a first order implicit backwards Euler method. Let
$F$ be the vector containing all the evolved quantities, and $M(F)$ the
evolution matrix. Then
$\frac{F^{i+1}-F^{i}}{\Delta t}=M(F^{i^{*}})F^{i+1},$ (56)
or
$F^{i+1}=(I-\Delta tM(F^{i^{*}}))^{-1}F^{i}.$ (57)
Within a given timestep, we use fixed point iteration to solve the non-linear
system (57), with $M(F^{i^{*}})$ being evaluated with the solution at the
previous iteration. For the first iteration, $F^{i^{*}}=F^{i}$. Equation (57)
is iterated until convergence is established within a prescribed tolerance.
Implicit variables (those that appear at time $i+1$ on the RHS of equation
(56)) in the most important nonlinear terms are given in Tables 1-3.
Table 1: Fluid continuity and velocity equation implicit variables in
nonlinear terms (including fluid contribution to Ampère-Maxwell law)
Term | $\frac{\partial(nu)}{\partial x}$ | $S_{ion}{\textsuperscript{a}}$ | $S_{rec}{\textsuperscript{a}}$ | $-u\frac{\partial F}{\partial x}$ | $\frac{\partial(nkT)}{\partial x}$ | $-\frac{u}{n}S$ | $R_{u}$ | $R_{en}$ | $\frac{\partial E}{\partial t}$
---|---|---|---|---|---|---|---|---|---
Implicit variable | $u$ | $n_{b}$ | $n_{e}$b | $F$ | $n$ | $u$ | $u$ | $n_{b}$ or $n_{e}$ | $u$
* a
Collisional-radiative model uses the same implicit variable as in sources.
* b
If using kinetic model this is replaced by $4\pi\int v^{2}f_{0}dv$.
Table 2: Fluid temperature equation implicit variables in nonlinear terms; $S$ in fifth term refers to the same variable as in $S_{ion}$ and $S_{rec}$ of Table 1
Term | $-u\frac{\partial T}{\partial x}$ | $-T\frac{\partial u}{\partial x}$ | $q_{T}$ | $q_{u}$ | $-\frac{S}{n}\left[\frac{3}{2}kT-u^{2}\right]$ | $\frac{u}{n}R_{T}$ | $\frac{u}{n}R_{u}$ | $\frac{u}{n}R_{en}$
---|---|---|---|---|---|---|---|---
Implicit variable | $T$ | $u$ | $T$ | $u$ | $S$ | $T$ | $u$ | $n_{b}$ or $n_{e}$
Table 3: Electron kinetic equation implicit variables in nonlinear terms; Coulomb collision terms for $f_{0}$ written out in more detail to avoid confusion (see below)
Term | $E\frac{\partial f}{\partial v}$ | $\left(\frac{\delta f_{0}}{\delta t}\right)_{e-e}$ | $\left(\frac{\delta f_{l>0}}{\delta t}\right)_{e-i}^{stationary}$ | $\left(\frac{\delta f_{l>0}}{\delta t}\right)_{e-i}^{moving}$ | $\left(\frac{\delta f_{l}}{\delta t}\right)_{e-n}$
---|---|---|---|---|---
Implicit variable | $E$ | $C^{i^{*}},D^{i^{*}},f_{0}^{i+1},\partial f_{0}^{i+1}/\partial v$ | $f_{l}$ | $f_{l}$, and $n_{i}$ in $I(F_{l})$,$J(F_{l})$ | $f_{l}$
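To make the iteration of equations (56)-(57) concrete, here is a minimal dense-algebra Python sketch. The code itself solves the system with the MPI/PETSc iterative solvers described below, so this is purely illustrative, and `M_of` is a hypothetical callback returning the evolution matrix evaluated at the lagged state.

```python
import numpy as np

def implicit_step(F, M_of, dt, tol=1e-12, max_iter=50):
    """One backward Euler step F^{i+1} = (I - dt*M(F*))^{-1} F^i,
    iterating on the lagged argument F* until fixed-point convergence."""
    I = np.eye(len(F))
    F_star = F.copy()                       # first iteration: F* = F^i
    for _ in range(max_iter):
        F_new = np.linalg.solve(I - dt * M_of(F_star), F)
        if np.linalg.norm(F_new - F_star) <= tol * np.linalg.norm(F_new):
            return F_new
        F_star = F_new                      # lag the nonlinear evaluation
    return F_star
```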
The structure of the variable vector (with all variables present) is the
following:
$F=\begin{pmatrix}F_{loc}(x_{1})\\\ F_{loc}(x_{2})\\\ \vdots\\\
F_{loc}(x_{N_{x}})\end{pmatrix},$ (58)
where $F_{loc}(x_{k})$ is the (spatially) local subvector for quantities at
point $x_{k}$ given as
$F_{loc}(x_{k})=\begin{pmatrix}f_{l_{max}}(x_{k})\\\ f_{l_{max-1}}(x_{k})\\\
\vdots\\\ f_{0}(x_{k})\\\ n_{1}(x_{k})\\\ n_{2}(x_{k})\\\ \vdots\\\
n_{N_{n}}(x_{k})\\\ E(x_{k})\\\ n_{e}(x_{k})\\\ u_{e}(x_{k})\\\
T_{e}(x_{k})\\\ n_{i}(x_{k})\\\ u_{i}(x_{k})\end{pmatrix},$ (59)
where $N_{n}$ is the total number of neutral states tracked, $l_{max}$ is the
highest resolved harmonic, and $f_{l}(x_{k})$ is the $l$-th harmonics
subvector
$f_{l}(x_{k})=\begin{pmatrix}f_{l}(x_{k},v_{1})\\\ f_{l}(x_{k},v_{2})\\\
\vdots\\\ f_{l}(x_{k},v_{N_{v}})\end{pmatrix}.$ (60)
Note that when running in kinetic mode, the vector does not contain electron
fluid quantities, while when the code is running with fluid electrons the
distribution function harmonics are not evolved, but are updated after each
timestep to be a slowly drifting Maxwellian with current temperature, density,
and electron fluid velocity.
Given the input parameters $N_{c}$ (and the derived parameter $N_{x}$, with
$2N_{c}-1\leq N_{x}\leq 2(N_{c}+2)-1$), $l_{max}$, $N_{n}$, and $N_{v}$, we
can calculate the total vector length (for a kinetic run) as
$N_{total}=N_{x}\left((l_{max}+1)N_{v}+N_{n}+3\right).$ (61)
For a representative system with $l_{max}=5$, $N_{v}=80$, $N_{n}=30$, and
$N_{x}=128$ this gives $N_{total}=55424$.
To solve the above matrix system, we use the MPI and PETSc libraries [37, 38,
39]. Domain decomposition is done in the spatial dimension, with as close to
an even distribution of grid points between processors as possible.
### 3.4 Velocity and spatial derivative discretization
The velocity space derivatives appearing in the various kinetic operators are
all implemented using a central difference scheme:
$\displaystyle\frac{\partial^{2}F}{\partial v^{2}}(x_{k},v_{n})$
$\displaystyle=\frac{1}{\Delta
v_{n}}\left[\frac{F(x_{k},v_{n+1})-F(x_{k},v_{n})}{v_{n+1}-v_{n}}-\frac{F(x_{k},v_{n})-F(x_{k},v_{n-1})}{v_{n}-v_{n-1}}\right],$
$\displaystyle\frac{\partial F}{\partial v}(x_{k},v_{n})$
$\displaystyle=\frac{F(x_{k},v_{n+1})-F(x_{k},v_{n-1})}{v_{n+1}-v_{n-1}},$
where $F$ is any velocity space function.
Spatial derivatives are mostly discretized using an analogous central
difference scheme to the above velocity space one. The only exceptions are the
advection terms in the momentum and temperature equations (eq. (3),(6),(10)),
where an upwind scheme can be used instead of the central difference. The
upwind scheme used is simply
$\frac{\partial F}{\partial t}(x_{k})=-u(x_{k})\frac{\partial F}{\partial
x}(x_{k}),$ (62)
where if $u(x_{k})\geq 0$
$\frac{\partial F}{\partial
x}(x_{k})=\frac{F(x_{k})-F(x_{k-1})}{x_{k}-x_{k-1}},$ (63)
and if $u(x_{k})<0$
$\frac{\partial F}{\partial
x}(x_{k})=\frac{F(x_{k+1})-F(x_{k})}{x_{k+1}-x_{k}},$ (64)
where if $F$ is not evolved on $x_{k\pm 1}$ we use $x_{k\pm 2}$ instead
(this is the case for temperature advection in (6)).
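As a minimal illustration, the upwind selection logic of eqs. (63)-(64) can be sketched as follows (the fall-back to $x_{k\pm 2}$ for staggered quantities is omitted for brevity):

```python
def upwind_derivative(F, x, k, u_k):
    """First-order upwind dF/dx at grid point k, following eqs. (63)-(64)."""
    if u_k >= 0.0:
        return (F[k] - F[k - 1]) / (x[k] - x[k - 1])   # backward difference
    return (F[k + 1] - F[k]) / (x[k + 1] - x[k])       # forward difference
```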
Finally, the diffusion type derivatives are given by
$\displaystyle\frac{\partial}{\partial v}\left(A\frac{\partial F}{\partial
v}\right)(x_{k},v_{n})$ $\displaystyle=\frac{1}{\Delta
v_{n}}[A(x_{k},v_{n+1/2})\frac{F(x_{k},v_{n+1})-F(x_{k},v_{n})}{v_{n+1}-v_{n}}$
$\displaystyle-A(x_{k},v_{n-1/2})\frac{F(x_{k},v_{n})-F(x_{k},v_{n-1})}{v_{n}-v_{n-1}}],$
$\displaystyle\frac{\partial}{\partial x}\left(A\frac{\partial F}{\partial
x}\right)(x_{k})$
$\displaystyle=\frac{1}{x_{k+2}-x_{k-2}}[A(x_{k+1})\frac{F(x_{k+2})-F(x_{k})}{x_{k+2}-x_{k}}$
$\displaystyle-A(x_{k-1})\frac{F(x_{k})-F(x_{k-2})}{x_{k}-x_{k-2}}],$
where the velocity grid boundaries $v_{n+1/2}$ are given as
$v_{n+1/2}=\sum_{l=1}^{n}\Delta v_{l}$.
The only simple derivatives that do not obey the above are in the velocity
advection terms, where in equations (22) and (23) we use the conservative form
$\frac{\partial F}{\partial
v}(x_{k},v_{n})=\frac{F(x_{k},v_{n+1/2})-F(x_{k},v_{n-1/2})}{\Delta v_{n}},$
where $F(x_{k},v_{n+1/2})$ is obtained through linear interpolation.
The following velocity space boundary conditions are assumed for the
distribution function harmonics
$f_{0}(x_{k},0)=\frac{f_{0}(x_{k},v_{1})-f_{0}(x_{k},v_{2})v_{1}^{2}/v_{2}^{2}}{1-v_{1}^{2}/v_{2}^{2}},$
(65)
$f_{l>0}(x_{k},0)=0,$ (66)
i.e. $f_{0}$ at $v=0$ is quadratically extrapolated (this is used whenever
$f_{0}$ at $v=0$ is required) and higher harmonics are set to $0$. At the
velocity space boundary after $v_{N_{v}}$ all $f_{l}$ and their derivatives
are assumed to be zero.
### 3.5 Coulomb collision operators
Here, Coulomb collision operator discretization will briefly be presented to
supplement the material in previous sections. The method used for the
electron-electron collisions for $l=0$ is the Chang-Cooper-Langdon[40, 41, 42]
method. The implementation in SOL-KiT is very similar to that in IMPACT[24],
and as such we will leave out some of the details. We write the collision
operator as a divergence of fluxes $F$
$C_{ee0}^{i+1}(x_{k},v_{n})=\frac{A_{ee}^{0}\ln\Lambda_{ee}(T_{e}^{i}(x_{k}),n_{e}^{i}(x_{k}))}{v_{n}^{2}}\frac{F^{i+1}(x_{k},v_{n+1/2})-F^{i+1}(x_{k},v_{n-1/2})}{\Delta
v_{n}},$ (67)
where
$A_{ee}^{0}=\Gamma_{ee}^{0}n_{0}t_{0}/v_{t}^{3}=1/(Z\ln\Lambda_{ei}(T_{0},n_{0}))$,
and the flux is given by
$\displaystyle F^{i+1}(x_{k},v_{n+1/2})$
$\displaystyle=C^{i^{*}}(x_{k},v_{n+1/2})f_{0}^{i+1}(x_{k},v_{n+1/2})$
$\displaystyle+D^{i^{*}}(x_{k},v_{n+1/2})\frac{f_{0}^{i+1}(x_{k},v_{n+1})-f_{0}^{i+1}(x_{k},v_{n})}{v_{n+1}-v_{n}}.$
(68)
$f_{0}^{i+1}(x_{k},v_{n+1/2})$ is then calculated using a special weighted
interpolation, which ensures relaxation to a Maxwellian
$\displaystyle f_{0}^{i+1}(x_{k},v_{n+1/2})$
$\displaystyle=(1-\delta^{i^{*}}(x_{k},v_{n+1/2}))f_{0}^{i+1}(x_{k},v_{n+1})$
$\displaystyle+\delta^{i^{*}}(x_{k},v_{n+1/2})f_{0}^{i+1}(x_{k},v_{n}),$ (69)
$\displaystyle\delta^{i^{*}}(x_{k},v_{n+1/2})$
$\displaystyle=\frac{1}{W^{i^{*}}(x_{k},v_{n+1/2})}-\frac{1}{\exp[W^{i^{*}}(x_{k},v_{n+1/2})]-1},$
(70) $\displaystyle W^{i^{*}}(x_{k},v_{n+1/2})$
$\displaystyle=(v_{n+1}-v_{n})\frac{C^{i^{*}}(x_{k},v_{n+1/2})}{D^{i^{*}}(x_{k},v_{n+1/2})}.$
(71)
The friction coefficient is
$C^{i^{*}}(x_{k},v_{n+1/2})=4\pi\sum_{l=1}^{n}f_{0}^{i^{*}}(x_{k},v_{l})v_{l}^{2}\Delta
v_{l}.$ (72)
As previously noted in Section 2.1, we chose to write the diffusion
coefficient in a way different from what is usual in the literature. This
allows a discretization that conserves energy as well:
$D^{i^{*}}(x_{k},v_{n+1/2})=\frac{4\pi}{v^{*}_{n+1/2}}\sum_{l=1}^{n}v_{l}^{2}\left[\sum_{m=l}^{N_{v}-1}f_{0}^{i^{*}}(x_{k},v_{m+1/2})v_{m+1/2}(v_{m+1}-v_{m})\right]\Delta
v_{l},$ (73)
where $v^{*}_{n+1/2}=(v_{n}+v_{n+1})/2$. Boundary conditions for the $C$ and
$D$ coefficients are taken in such a way to conserve particle density (see
[24]). The resulting submatrix from this operator is tridiagonal. Thus we have
an iterative method which conserves particles and energy (up to nonlinear
iteration tolerance), and which relaxes the distribution to a Maxwellian in
the absence of other driving forces.
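The weighted interpolation of eqs. (69)-(71) is the key ingredient here; a small sketch of the weight calculation, including the small-argument series limit needed in practice, is given below.

```python
import math

def chang_cooper_delta(W):
    """Chang-Cooper interpolation weight delta(W) = 1/W - 1/(exp(W)-1),
    cf. eq. (70); delta -> 1/2 as W -> 0 (central interpolation), and the
    weighting shifts upwind for large |W|."""
    if abs(W) < 1e-8:
        return 0.5 - W / 12.0            # series limit, avoids 0/0
    return 1.0 / W - 1.0 / math.expm1(W)
```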
The electron-electron collision terms for higher $l$ (equation (31)) have the
same normalization constant as that for $l=0$. Here we use a discretization
method similar to that in OSHUN [23], and hence, for the sake of brevity,
again cover only the most important elements. The first three terms in
(31) produce a tridiagonal matrix when discretized, while the remaining terms
produce an upper and a lower triangular submatrix due to the $I$ and $J$
integrals, which are discretized as
$I_{j}[f_{l}](x_{k},v_{n})=4\pi\begin{pmatrix}\left(\frac{v_{1}}{v_{n}}\right)^{j}v_{1}^{2}\Delta
v_{1}\\\ \left(\frac{v_{2}}{v_{n}}\right)^{j}v_{2}^{2}\Delta v_{2}\\\
\vdots\\\ \left(\frac{v_{n-1}}{v_{n}}\right)^{j}v_{n-1}^{2}\Delta v_{n-1}\\\
\frac{1}{2}v_{n}^{2}\Delta v_{n}\\\ 0\\\ \vdots\\\
0\end{pmatrix}^{T}\cdot\begin{pmatrix}f_{l}(x_{k},v_{1})\\\ \vdots\\\
f_{l}(x_{k},v_{N})\end{pmatrix},$ (74)
$J_{j}[f_{l}](x_{k},v_{n})=4\pi\begin{pmatrix}0\\\ \vdots\\\ 0\\\
\frac{1}{2}v_{n}^{2}\Delta v_{n}\\\
\left(\frac{v_{n+1}}{v_{n}}\right)^{j}v_{n+1}^{2}\Delta v_{n+1}\\\ \vdots\\\
\left(\frac{v_{N_{v}-1}}{v_{n}}\right)^{j}v_{N_{v}-1}^{2}\Delta v_{N_{v}-1}\\\
\left(\frac{v_{N_{v}}}{v_{n}}\right)^{j}v_{N_{v}}^{2}\Delta
v_{N_{v}}\end{pmatrix}^{T}\cdot\begin{pmatrix}f_{l}(x_{k},v_{1})\\\ \vdots\\\
f_{l}(x_{k},v_{N})\end{pmatrix}.$ (75)
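In plain scalar form, the two discrete integrals above amount to the following (a sketch with 0-based indexing; the half-weight at the cell $n$ itself mirrors the $\frac{1}{2}v_{n}^{2}\Delta v_{n}$ entry in eqs. (74)-(75)):

```python
import math

def I_j(f, v, dv, j, n):
    """Discrete I_j[f](v_n) of eq. (74): full weight for cells below n,
    half weight for cell n itself, zero above."""
    total = sum((v[m] / v[n]) ** j * v[m] ** 2 * dv[m] * f[m] for m in range(n))
    total += 0.5 * v[n] ** 2 * dv[n] * f[n]
    return 4.0 * math.pi * total

def J_j(f, v, dv, j, n):
    """Discrete J_j[f](v_n) of eq. (75): half weight at n, full weight above."""
    total = 0.5 * v[n] ** 2 * dv[n] * f[n]
    total += sum((v[m] / v[n]) ** j * v[m] ** 2 * dv[m] * f[m]
                 for m in range(n + 1, len(v)))
    return 4.0 * math.pi * total
```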
For $l=1$ the discretized e-e collision operator unfortunately does not
numerically conserve momentum, as the discretized form loses the partial
integration properties that would analytically conserve momentum. However, the
numerically lost momentum is transferred to the ions, and total momentum is
thus conserved.
The electron-ion operator for stationary cold ions is trivial, and is
discretized straightforwardly, while the moving ion operator is discretized
similarly to the e-e operator for higher $l$, with the exception of having
mainly a tridiagonal component (with terms containing $F_{0}$), and a part
(terms containing $F_{l}$) where ion density is the implicit variable (if
working on a cell boundary, it is implicitly interpolated). For the second
part we need the previously presented $I(F_{l})$ and $J(F_{l})$ integrals for
cold ions ((35) and (36)). For a given lagged ion velocity
$u_{i}^{i^{*}}(x_{k})$ and density $n_{i}^{i+1}(x_{k})$ these integrals in
discrete form are
$I_{j}^{i+1}[F_{l}](x_{k},v_{n})=(2l+1)n_{i}^{i+1}(x_{k})\frac{u_{i}^{i^{*}}(x_{k})^{j}}{v_{n}^{j}}\Theta(v_{n}-u_{i}^{i^{*}}(x_{k})),$
(76)
$J_{j}^{i+1}[F_{l}](x_{k},v_{n})=(2l+1)n_{i}^{i+1}(x_{k})\frac{u_{i}^{i^{*}}(x_{k})^{j}}{v_{n}^{j}}\Theta(u_{i}^{i^{*}}(x_{k})-v_{n}).$
(77)
### 3.6 Electron-neutral collision numerics
In this section we briefly go over the basic properties of the elastic
electron-neutral collision operator before moving on to the important
conservative discretization of inelastic electron-neutral collisions.
##### Elastic collisions
The discretization of elastic collisions borrows greatly from the Coulomb
collision operators. Equation (39) is discretized in a way similar to the
Chang-Cooper-Langdon scheme. However, since the integral is linear in $f_{0}$,
there is no need for special interpolation ($f_{0}$ and $\sigma$ are simply
linearly interpolated), and just the flux formalism was used, with the $C$ and
$D$ coefficients (here left unnormalized) being
$C(v_{n+1/2})=n_{b}v_{n+1/2}^{4}\sigma^{el}(v_{n+1/2}),$ (78)
$D(v_{n+1/2})=n_{b}v_{n+1/2}^{3}\sigma^{el}(v_{n+1/2})\frac{kT_{g}}{m_{e}},$
(79)
where
$\sigma^{el}(v_{n})=\int d\Omega(1-\cos\chi)\sigma^{el}(\chi,v_{n}).$ (80)
Since no differential cross section data is available, this is simply set to
the constant cross-section (41).
The above discretization produces an operator that conserves particle number
and relaxes the electron $f_{0}$ to a Maxwellian with temperature $T_{g}$
(within finite grid effects). For higher $l$, equation (40) is discretized in
a straightforward way.
#### 3.6.1 Conservative discretization scheme for inelastic collisions
In order to streamline the arguments in the next sections, let us introduce
the following notation. If we label inelastic processes with transition energy
$\epsilon$ as $\Pi$, we can label the corresponding superelastic (inverse)
processes as $\Pi^{-1}$ (with transition energy $-\epsilon$). Electron energy
and particle conservation are governed by the $l=0$ harmonic, which we will
denote simply as $f$ in the following derivation. Then the value of $f$ in the
$n$-th velocity cell centre can be written as $f_{n}$, while the total cross-
section notation for any given process will be labeled as $\sigma_{n}$.
Equation (43) contains two terms, a loss/emission term and a gain/absorption
term. While the first term is defined on the used velocity grid, the second
term requires evaluation of the distribution function at (most likely) non-
existent points $\alpha(v)v$ (where $\alpha=(1\pm 2\epsilon/mv^{2})^{1/2}$). A
straightforward interpolation here fails to produce a discretization that
conserves particles and energy. In order to develop such a scheme we start by
writing the collision operator in the emission/absorption form
$C_{n}=-E_{n}+A_{n},$ (81)
where
$E_{n}=n_{neut}v_{n}f_{n}\sigma_{n},$ (82)
and
$A_{n}=\sum_{m}W_{nm}E_{m},$ (83)
where $W_{nm}$ are weights determining the contribution of the emission from
cell $m$ to the absorption in cell $n$. For particle conservation, we want the
number of particles emitted by a single cell $m$, i.e. $4\pi
E_{m}v_{m}^{2}\Delta v_{m}$, to be absorbed exactly by some other cells $n$,
i.e. $4\pi W_{nm}E_{m}v_{n}^{2}\Delta v_{n}$. After tidying up, the resulting
particle conservation condition is
$v_{m}^{2}\Delta v_{m}=\sum_{n}W_{nm}v_{n}^{2}\Delta v_{n}.$ (84)
For energy conservation, we note that all energy being emitted via collisions
from one cell needs to either be lost to the energy of internal atomic states
or be absorbed by some cells $n$. In a similar way to the above, one can
cancel distribution functions and cross-sections to obtain the energy
conservation condition
$v_{m}^{2}\Delta
v_{m}\left(v_{m}^{2}\mp\epsilon\right)=\sum_{n}W_{nm}v_{n}^{4}\Delta v_{n}.$
(85)
Finally, it is useful to write down the numerical version of detailed balance
for the collisional cross-section of the inverse process. Here we show the
result for de-excitation ($\Pi=ex$, $\Pi^{-1}=deex$), with the same procedure
applicable to recombination. Detailed balance implies that for a Maxwellian
distribution of electrons and for a Boltzmann distribution of excited states
the rates of excitation and de-excitation become equal. This can be written as
$n_{l}\sum_{m}f_{m}\sigma_{m}^{ex}v_{m}^{3}\Delta
v_{m}=n_{u}\sum_{n}f_{n}\sigma_{n}^{deex}v_{n}^{3}\Delta v_{n},$ (86)
where $n_{l}$ and $n_{u}$ denote the densities of the lower and upper excited
states, respectively. Using a Maxwellian for $f$, relating $n_{u}$ and $n_{l}$
via the Boltzmann distribution, and utilizing equation (84) we obtain
$\sigma_{n}^{deex}=\frac{g_{l}}{g_{u}}e^{\epsilon/T}\sum_{m}W_{nm}\sigma_{m}^{ex}\frac{v_{m}}{v_{n}}e^{-\left(v_{m}^{2}-v_{n}^{2}\right)/T}.$
(87)
Note that this depends on the temperature and the excitation weights, contrary
to the analytical version from equation (46). This is due to the discrete
nature of the grid, where energy differences between absorbing and emitting
cells do not have to be equal to the analytical value $\epsilon$.
#### 3.6.2 Two-absorber mapping
Note that in the above conservation condition we haven’t specified the
summation range for $n$. The simplest way to do this is the following. For
each emitter $m$ we choose exactly two (consecutive) absorber cells, $n_{1}$
and $n_{2}$ such that $\sqrt{v_{m}^{2}\mp\epsilon}$ lies between $v_{n_{1}}$
and $v_{n_{2}}$. We will refer to these two points as the ideal absorber pair.
This way we do not need any further instructions on partitioning the emitted
particles and energy, and can proceed to calculate weights. Since for any
given $m$ there are only two absorbing cells, we can denote the weights simply
as $W_{1}^{m}$ and $W_{2}^{m}$, and solving the conservation conditions we get
$W_{1}^{m}=\frac{v_{m}^{2}\Delta v_{m}}{v_{n_{1}(m)}^{2}\Delta
v_{n_{1}(m)}}\left(1-\frac{v_{m}^{2}-v_{n_{1}(m)}^{2}\mp\epsilon}{v_{n_{2}(m)}^{2}-v_{n_{1}(m)}^{2}}\right),$
(88)
$W_{2}^{m}=\frac{v_{m}^{2}\Delta v_{m}}{v_{n_{2}(m)}^{2}\Delta
v_{n_{2}(m)}}\frac{v_{m}^{2}-v_{n_{1}(m)}^{2}\mp\epsilon}{v_{n_{2}(m)}^{2}-v_{n_{1}(m)}^{2}}.$
(89)
Figure 2 shows an illustration of this mapping for an excitation collision
with both particle and energy transfer obeying conservation conditions.
Figure 2: Two-absorber mapping for an excitation collision; particles and
energy emitted by higher energy cell distributed among the two absorbers,
while a portion of energy is lost to internal energy states of collision
target (atom) - red line. Blue and green lines denote emission to first and
second cell of absorber pair, respectively. The black star shows the location
of the analytical absorption point.
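As a concrete check, the following Python sketch evaluates the weights of eqs. (88)-(89) and asserts the conservation conditions (84)-(85). The single signed transition energy `eps` (positive for excitation) and the absolute assertion tolerance (appropriate for normalized speeds of order unity) are assumptions made for compactness.

```python
def two_absorber_weights(vm, dvm, v1, dv1, v2, dv2, eps):
    """Weights of eqs. (88)-(89) for emitter cell (vm, dvm) and its ideal
    absorber pair (v1, dv1), (v2, dv2); eps is a signed transition energy
    (positive for excitation) so the absorbed speed is sqrt(vm^2 - eps)."""
    frac = (vm**2 - v1**2 - eps) / (v2**2 - v1**2)
    W1 = vm**2 * dvm / (v1**2 * dv1) * (1.0 - frac)
    W2 = vm**2 * dvm / (v2**2 * dv2) * frac
    # Particle conservation, eq. (84):
    assert abs(W1 * v1**2 * dv1 + W2 * v2**2 * dv2 - vm**2 * dvm) < 1e-12
    # Energy conservation, eq. (85):
    assert abs(W1 * v1**4 * dv1 + W2 * v2**4 * dv2
               - vm**2 * dvm * (vm**2 - eps)) < 1e-12
    return W1, W2
```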
To establish that this scheme can reduce to the analytical result, we note two
properties we expect in the analytic limit. Firstly $n_{1}\rightarrow
n_{2}=n$, i.e. each emitting point has one and only one absorbing point. This
can be done by letting one of the weights tend to $0$. For the sake of this
illustrative argument, let that be $W_{1}^{m}$, from which we see that the
second fraction in (89) tends to unity. Then, we note that the first fraction
will tend to $\alpha(v_{n})^{2}\Delta v_{m}/\Delta v_{n}$ (since
$v_{m}=\alpha(v_{n})v_{n}$ in analytical limit). Taking the differential of
(42) points us toward a grid that satisfies
$\Delta v_{m}/\Delta v_{n}\rightarrow 1/\alpha(v_{n})$
which finally yields the limits
$W_{1}^{m}\rightarrow 0,\quad W_{2}^{m}\rightarrow\alpha(v_{n}).$
Noting that we have chosen to write the gain term as a sum of weighted loss
terms, another $\alpha$ factor can be extracted from the single emitter cell
velocity in the absorption term, and the analytical $\alpha^{2}$ form from
(43) is recovered.
At first glance this mapping looks good: we have found pairs of absorbers and
weighted their absorption terms to conserve particles and energy. However,
after a closer look at eq. (87) a potential problem reveals itself. Suppose
that for some $n$ and every $m$ the weights $W_{nm}=0$ in an excitation
process. In this case $\sigma_{n}^{deex}=0$ for the corresponding
de-excitation process. In other words, if cell $n$ does not absorb in process
$\Pi$, it will not emit in $\Pi^{-1}$, and gaps are left on our velocity grid
where cells that should emit do not. This is not physical, and the mapping
needs to be refined.
#### 3.6.3 Two-absorber mapping with pair partition
In the previous section we have associated every cell (emitter) $m$ with an
ideal absorber pair $(n_{1}(m),n_{2}(m))$. We define the absorber list
$\mathcal{A}^{\Pi}_{m}$ as a list of absorber pairs of point $m$ in process
$\Pi$. The two-absorber mapping implies
$\mathcal{A}^{\Pi}_{m}=\left\\{(n_{1}(m),n_{2}(m))\right\\}$, where the only
pair is the ideal absorber pair. As noted above, this produces gaps in the
velocity grid where cells do not emit in the inverse process $\Pi^{-1}$.
In order to fill out the aforementioned gaps, we must potentially include
absorbers other than those in the ideal pair. This can be done by looking at
all cells $n$ that aren’t in the ideal absorber pair, but whose
$\sqrt{v_{n}^{2}\pm\epsilon}$ (note the change in sign!) falls into cell $m$.
We then refer to $m$ as the ideal emitter for cells $n$.
We then move to generate absorber pairs that would include the non-ideal
absorbers $n$, while satisfying the original constraint, namely that
$\sqrt{v_{m}^{2}\mp\epsilon}$ is between the points of each new pair. In SOL-
KiT this is done by always pairing non-ideal absorbers with one of the ideal
absorbers. If we denote $P_{m}\geq 1$ as the number of (both ideal and non-
ideal) absorber pairs of cell $m$, the absorber list becomes
$\mathcal{A}^{\Pi}_{m}=\left\\{(n_{1}^{p}(m),n_{2}^{p}(m)),p=1,...,P_{m}\right\\}$,
where we label the first and second cell in pair $p$ as $n_{1}^{p}(m)$ and
$n_{2}^{p}(m)$, respectively.
This allows us to use the previous solution, but we require a way to partition
the emitted energy/particles among pairs. We do this by defining a total
energy width of the entire absorber list and normalizing the energy widths of
each cell within the list to the total energy width of the list:
$\beta_{TOT}^{m}=\sum_{n\in\mathcal{A}^{\Pi}_{m}}2v_{n}\Delta
v_{n},\quad\beta_{n}=2v_{n}\Delta v_{n}/\beta_{TOT}^{m}.$
Then we define $\delta_{n}$ as the number of pairs in $\mathcal{A}^{\Pi}_{m}$
which contain point $n$. We can then define
$\gamma^{p}=\frac{\beta_{n_{1}^{p}}}{\delta_{n_{1}^{p}}}+\frac{\beta_{n_{2}^{p}}}{\delta_{n_{2}^{p}}}$
(90)
to be the fraction of the emission given to pair $p$. It is trivial to check
that the sum of all $\gamma^{p}$ for a given absorber list is equal to unity,
as one would expect. An example with an absorber list with three distinct
cells is given in Figure 3.
Figure 3: Pair partition calculation example for an excitation collision;
emitter cell has a list of three absorbers, grouped into two pairs, with
emitted energy and particles partitioned according to factors $\gamma^{1}$ and
$\gamma^{2}$
Then we can go through the same process as the one for the two-absorber case,
except the final results for $W_{1}^{m}$ and $W_{2}^{m}$ will now be functions
of the pair $p$ for which they have been calculated, and will have an extra
$\gamma^{p}$ factor multiplying the previous simple results (see below). To
then calculate the total absorption term for a given cell $n$, it should be
summed over each pair the cell belongs to, i.e.
$A_{n}=\sum_{m}\sum_{p}W_{nm}^{p}E_{m}.$ (91)
This way we have both ensured particle and energy conservation, as well as a
reasonably physical numerical detailed balance condition. The weights in this
case are given by
$W_{n_{1}^{p}(m)m}=\gamma_{m}^{p}\frac{v_{m}^{2}\Delta
v_{m}}{v_{n_{1}^{p}(m)}^{2}\Delta
v_{n_{1}^{p}(m)}}\left(1-\frac{v_{m}^{2}-v_{n_{1}^{p}(m)}^{2}\mp\epsilon}{v_{n_{2}^{p}(m)}^{2}-v_{n_{1}^{p}(m)}^{2}}\right),$
(92)
$W_{n_{2}^{p}(m)m}=\gamma_{m}^{p}\frac{v_{m}^{2}\Delta
v_{m}}{v_{n_{2}^{p}(m)}^{2}\Delta
v_{n_{2}^{p}(m)}}\frac{v_{m}^{2}-v_{n_{1}^{p}(m)}^{2}\mp\epsilon}{v_{n_{2}^{p}(m)}^{2}-v_{n_{1}^{p}(m)}^{2}}.$
(93)
Using the above method for every process produces both a transition mapping
and weights for each one. These depend solely on the grid and the inelastic
processes being considered, and as such do not change during a simulation. The
final discretized (unnormalized) form of the inelastic collision integral for
particle conserving collisions is then
$\displaystyle\left(\frac{\delta f_{l}}{\delta
t}\right)^{inel,i+1}_{b\rightarrow b^{\prime}}(x_{k},v_{n})$
$\displaystyle=-v_{n}n_{b}^{i^{*}}(x_{k})[\sigma^{TOT}_{b\rightarrow
b^{\prime}}(v_{n})f_{l}^{i+1}(x_{k},v_{n})$
$\displaystyle-\sum_{m}\sum_{p}W_{nm}^{p}(\sigma^{TOT}_{b\rightarrow
b^{\prime}}(v_{m})-\sigma^{(l)}_{b\rightarrow
b^{\prime}}(v_{m}))f_{l}^{i+1}(x_{k},v_{m})].$ (94)
### 3.7 Divertor target boundary condition discretization
In order to implement the boundary condition at the divertor target (not
explicitly present on the spatial grid), as was implied in the Section 2.1.6.,
we require knowledge of the forward going electron distribution function. We
reconstruct it by extrapolating harmonics from cells leading up to the
boundary
$f_{\text{odd
}l}^{\text{forward,target}}=\frac{n_{e}^{2}(x_{N_{x}})}{n_{e}^{2}(x_{N_{x}-1})}f_{\text{odd
}l}^{i^{*}}(x_{N_{x}-1}),\quad f_{\text{even
}l}^{\text{forward,target}}=\frac{n_{e}(x_{N_{x}})}{n_{e}(x_{N_{x}-1})}f_{\text{even
}l}^{i^{*}}(x_{N_{x}}),$ (95)
where we scale the harmonics by the ratio of the electron densities at spatial
cells $N_{x}$ and $N_{x}-1$. We choose this form of extrapolation, rather than
a linear one, because the large density gradients often present at the
divertor target can otherwise produce negative extrapolated densities.
Knowing the forward going distribution allows us to calculate the cut-off
distribution using equation (53), while equation (54) imposes $v_{c}$.
However, due to the discretization of velocity space, this will not be one of
the resolved cell centres, and we are required to interpolate in order to
calculate the precise cut-off. A diagram of the interpolation region is given
in Figure 4.
Figure 4: The interpolation region on the velocity grid; cell $K$ contains the
cut-off, and cells $K$ and $K-1$ are replaced with interpolated cells with
centres at $v_{i1}$,$v_{i2}$ and widths $\Delta v_{1}$,$\Delta v_{2}$
The interpolation replaces cell $K$ containing the cut-off and its preceding
cell $K-1$ with two new cells with centres
$v_{i1}=\frac{v_{K+1/2}+v_{c}}{2},\quad v_{i2}=\frac{v_{K-3/2}+v_{c}}{2},$
and with widths
$\Delta v_{1}=v_{K+1/2}-v_{c},\quad\Delta v_{2}=v_{c}-v_{K-3/2}.$
The electron flux to the boundary (given in equation (54)) is then calculated
using this updated grid, while linearly interpolating the $f_{1}$ component of
the cut-off distribution in the two new cells. The electron flux can then be
matched to the ion flux using a bisection or false position method (the latter
being implemented in the current version of SOL-KiT) to find $v_{c}$ with a
desired accuracy.
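For reference, the false position (regula falsi) iteration used for this root find can be sketched as follows; here `g` would be the mismatch between the cut-off electron flux of eq. (54) and the ion flux, evaluated on the interpolated grid described above.

```python
def false_position(g, a, b, tol=1e-10, max_iter=100):
    """Regula falsi root find for g(v_c) = 0 on a bracketing interval [a, b]."""
    ga, gb = g(a), g(b)
    assert ga * gb <= 0.0, "root must be bracketed"
    c = a
    for _ in range(max_iter):
        c = (a * gb - b * ga) / (gb - ga)   # secant through the bracket
        gc = g(c)
        if abs(gc) < tol:
            break
        if ga * gc < 0.0:                   # root lies in [a, c]
            b, gb = c, gc
        else:                               # root lies in [c, b]
            a, ga = c, gc
    return c
```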
## 4 Benchmarking SOL-KiT operators
A number of verification tests have been performed using SOL-KiT, aimed at
checking the properties of various implemented operators. The test details are
presented in the following sections, while an overview of the tests and their
scope is given in Table 4. In all tests we consider hydrogenic species, i.e.
$Z=1$.
Table 4: List of performed test runs and sets of runs with their target
operators.
Run/Set | Targeted operators
---|---
Runs 1-4 | e-i and e-e collisions, Vlasov, Maxwell
Runs 5-6 | fluid ion and electron operators
Set 1 | e-i and e-e collisions, Vlasov, Maxwell, high $l$
Runs 7-8 | kinetic e-n inel. collisions, CRM
Run 9 | fluid e-n inel. collisions, CRM, detailed balance
Run 10 | e-i and e-e collisions, Vlasov, Maxwell, kinetic e-n inel. collisions, CRM
Run 11 | fluid ion operators - advection
Set 2 | fluid ion operators - charge-exchange friction
Set 3 | divertor boundary condition - analytic limit
Set 4 | divertor boundary condition, high $l$
### 4.1 Heat conduction tests
In order to test a number of operators, we performed both local tests with
different scenarios and non-local perturbation tests.
##### Local heat conduction tests
Two types of tests were used to verify the local limit of heat conduction in
SOL-KiT. The first type is a small perturbation test on a periodic grid, with
the initial temperature given by
$T(x)=T_{0}+T_{1}\sin\left(2\pi\frac{x}{L}\right),$ (96)
where $L$ is the total length of the system, given by $L=N_{c}\Delta x$. The
density was set to $n_{0}=10^{19}m^{-3}$, and the rest of the simulation
parameters are
$T_{0}=100\text{eV},\quad T_{1}=0.05\text{eV},$ $N_{c}=64,\quad\Delta x=150\
x_{0},\quad N_{v}=120,$
with $\Delta v$,$\Delta t$, and the total number of timesteps being able to
vary between runs. The total time simulated was always set to $30\ t_{0}$,
i.e. to 30 electron-ion collision times. In all runs the full set of Coulomb
collision operators was active, and the highest harmonic number was kept at
$l_{max}=1$. The calculated conduction heat flux was compared to the Spitzer-
Härm (SH) heat flux $q_{SH}$ (equation (7)), with the heat conductivity for
$Z=1$ in SOL-KiT normalized units being $\kappa=0.6\sqrt{\pi}.$
The ratio of the calculated heat flux to the reference SH flux at the end of
each run was averaged along the simulation domain, and these are the
results presented below. Three pure kinetic runs (with initial $f_{1}$ set to
0, and no fluid ion operators active) were performed. The reference run (Run
1) with $\Delta t=0.1\ t_{0}$, $\Delta v=0.1\ v_{th}$, $N_{t}=300$ produces an
average heat flux of $q=(0.988296\pm 1\times 10^{-6})q_{SH}$, i.e. reproduces
the reference results with less than 1.2% error. Run 2 used a smaller timestep
$\Delta t=0.05\ t_{0}$, $N_{t}=600$, but the ratio obtained was the same as in
Run 1. Run 3, on the other hand, tested the velocity resolution dependence by
setting $\Delta v=0.05\ v_{th}$, and the obtained heat flux was
$q=(0.997488\pm 1\times 10^{-6})q_{SH}$, reducing the relative error below
0.3%. At this point we believe the relative error is smaller than the
precision of the reference SH flux value, and should thus be taken with a
grain of salt.
The next set of small perturbation tests aimed to compare the results of
kinetic simulations to those performed using the fluid mode of SOL-KiT. For
this purpose, since the fluid model assumes the reference SH heat flux
described above from the very start of the simulation, we initialized $f_{1}$
in the kinetic simulation (Run 4) to a local solution which would give the
same heat flux as the one in the fluid run (Run 5). Other than this, the
parameters of Run 4 were identical to those of Run 1, and the final heat flux
ratio was the same as the one there. Run 5 was performed with the same
parameters as Run 4, but instead of solving the kinetic equation for the
electrons, fluid equations for both electrons and ions were solved. The total
changes in the temperature value at its maximum for both runs were compared.
It was found that the relative difference in the changes was less than 0.3%,
showing good agreement between the kinetic and fluid model in the small
perturbation test.
The final heat conduction test run (Run 6) was done for the fluid model, where
the plasma was initialized on a logarithmic spatial grid ($\Delta x=8.5\
x_{0}$ and $\Delta x_{L}=0.5\ x_{0}$) with the Two-Point Model[6] profile
$T(x)=\left(T_{u}^{7/2}-\frac{x}{L}\left(T_{u}^{7/2}-T_{d}^{7/2}\right)\right)^{2/7},\quad
n(x)=n_{u}T_{u}/T(x),$
where $n_{u}=0.368\times 10^{19}m^{-3}$ and $T_{u}=18\text{eV}$ are the
upstream density and temperature, while $T_{d}=5\text{eV}$ is the downstream
temperature. Boundary conditions were fixed. We expect that this profile
should not evolve, and after 30000 collision times (for a reference plasma of
$n_{0}=2.5\times 10^{19}m^{-3}$ and $T_{0}=10\text{eV}$), the largest
deviation from the initial profile was 0.12 %, which is due to linear
interpolation close to the downstream boundary, with most points having less
than 0.01% deviation.
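A minimal sketch of this initial profile is given below; it assumes the sign convention of the corrected expression above, with the temperature falling from $T_{u}$ at $x=0$ to $T_{d}$ at $x=L$.

```python
def two_point_profile(x, L, Tu, Td, nu):
    """Initial Two-Point Model profile of Run 6: conduction-limited
    temperature and a constant-pressure density n(x) = nu*Tu/T(x)."""
    T = (Tu**3.5 - (x / L) * (Tu**3.5 - Td**3.5)) ** (2.0 / 7.0)
    return T, nu * Tu / T
```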
##### Non-local heat conduction tests
In order to test how SOL-KiT handles a more non-local regime, we have
performed a set of runs (Set 1) with a setup similar to Run 1 and related
runs, featuring a sinusoidal temperature perturbation. For completeness, we
give the common run parameters for Set 1
$T_{0}=100\text{eV},\quad T_{1}=0.1\text{eV},\quad n_{0}=10^{19}m^{-3},$
$N_{c}=63,\quad N_{v}=120,\quad\Delta v_{1}=0.0307,\quad c_{v}=1.01.$
The various runs in the set were performed with different system lengths and
with varying number of resolved harmonics $l_{max}=1,3,5,7$. In Figure 5 we
plot the $\kappa/\kappa^{(B)}=q/q_{SH}$ for each of the runs, comparing it
with values obtained with the code KIPP[8, 19, 20] and reported by Brodrick et
al.[43]. The ratio $q/q_{SH}$ was obtained by running each run in the set
until the ratio equilibriated. The different values in Figure 5 are plotted as
a function of $k\lambda_{ei}^{(B)}$, where
$\lambda_{ei}^{(B)}=3(\pi/2)^{1/2}x_{0}/4$ is the Braginskii electron-ion
collisional mean freepath, as used by Brodrick et al., and $k=2\pi/L$ is the
perturbation wavenumber. KIPP results were fitted using the following function
based on equation (25) in [43]
$\frac{\kappa}{\kappa^{(B)}}=\left[1+\left(\frac{1}{a(k\lambda_{ei}^{(B)})^{2}}+\frac{1}{bk\lambda_{ei}^{(B)}}\right)^{-1}\right]^{-1},$
(97)
where the values obtained for the parameters $a$ and $b$ are $51.409789$ and
$4.4682314$, respectively. We show that the increase in number of harmonics
leads to an increase in agreement between SOL-KiT and KIPP. Already at $l=5$
SOL-KiT results appear to be very close to the fit values, with the increase
to $l=7$ having a negligible effect on the result. We also note that the
diffusive approximation ($l=1$) appears to break down around
$k\lambda_{ei}^{(B)}=0.075$, while $l=3$ seems to hold up further into the
non-local regime.
Figure 5: Comparison of SOL-KiT and KIPP results for the non-local value of
heat conductivity for different number of resolved harmonics. KIPP results
have been fitted with a version of the fitting function presented in [43];
see equation (97).
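For completeness, eq. (97) with the fitted parameters quoted above evaluates directly as:

```python
def kappa_ratio(k_lambda, a=51.409789, b=4.4682314):
    """Nonlocal conductivity reduction kappa/kappa^(B) of eq. (97), with
    a and b set to the values fitted to the KIPP data quoted in the text."""
    return 1.0 / (1.0 + 1.0 / (1.0 / (a * k_lambda**2) + 1.0 / (b * k_lambda)))
```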
### 4.2 Collisional-Radiative model and inelastic collision tests
(a) Uniform grid - Run 7
(b) Geometric grid - Run 8
Figure 6: Evolution of total density in the two inelastic collision
discretization test runs.
The discretization scheme for inelastic collisions presented in this paper is
designed with three properties in mind - particle and energy conservation, as
well as numerically consistent detailed balance. In this section we present
several tests aimed at confirming the desired properties.
##### Conservation property tests
We start with two runs differing only in the utilized velocity space grid. The
common run parameters are
$n_{0}=10^{19}m^{-3},\quad n_{e}=n_{0},\quad n_{1}=0.1n_{0},\quad N_{n}=30,$
$\Delta t=0.5t_{0},\quad N_{t}=30000,\quad N_{v}=120,\quad
T_{0}=5\text{eV},\quad T_{e}=T_{0},$
where $N_{n}$ is the total number of resolved hydrogen states. Both runs
included only the inelastic electron-atom processes, and were performed in
quasi-0D (low number of spatial cells). The first of the two runs (Run 7) used
a uniform grid with $\Delta v=0.05v_{th}$, while the second (Run 8) used a
geometric grid with $\Delta v_{1}=0.01$ and $c_{v}=1.025$.
Figure 6 shows the average heavy particle density ($n_{i}+\sum n_{b}$) in the
two runs. As can be seen the total error is of the order of $10^{-14}$, which
is consistent with round-off errors. Similar results are obtained for the
relative deviation of the total energy density
$E_{TOT}=3n_{e}kT_{e}/2+n_{e}\epsilon_{ion}+\sum n_{b}\epsilon_{b}$ from
initial value, where $\epsilon_{b}$ is the $1\rightarrow b$ transition energy.
This result is shown in Figure 7, where we see that the geometric grid (Run 8)
is performing somewhat better in the later part of the simulation, likely due
to a finer velocity grid near the origin.
Figure 7: Relative change in total energy density in both electron motion and
atomic states for the two inelastic collision discretization test runs (Runs
7-8).
##### Detailed balance test
In order to test the numerical detailed balance condition for the inverse
processes we treat the electrons as a fluid, thus forcing the distribution
function to a Maxwellian. This simulation (Run 9) was performed with the same
grid, normalization, and initial conditions as the uniform grid run discussed
above. The only difference was the timestep parameters being $N_{t}=1000$ and
$\Delta t=100t_{0}$. By the end of the simulation the atomic states settled
close to a Saha-Boltzmann equilibrium, as would be expected from the detailed
balance condition. The ionization degree relative error computed against the
analytical solution for hydrogen was $\delta X=1.134\times 10^{-8}$, while the
total density and energy errors at the end of the simulation were $\approx
6\times 10^{-14}$ and $\approx 5\times 10^{-14}$, respectively.
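The analytical reference here is the Saha-Boltzmann equilibrium. As a hedged illustration, the ground-state-only Saha ionization balance for hydrogen can be sketched as below; this is a simplification of the full multi-state equilibrium the code relaxes to, so it only indicates the kind of reference value used.

```python
import math

def saha_ionization_degree(T_eV, n_tot):
    """Ground-state-only Saha ionization degree x = n_e/n_tot for hydrogen;
    solves x^2/(1-x) = S(T)/n_tot for x."""
    h, m_e, e = 6.62607015e-34, 9.1093837015e-31, 1.602176634e-19
    S = (2.0 * math.pi * m_e * T_eV * e / h**2) ** 1.5 * math.exp(-13.6 / T_eV)
    r = S / n_tot
    return (-r + math.sqrt(r * r + 4.0 * r)) / 2.0

print(saha_ionization_degree(5.0, 1e19))  # conditions of Runs 7-9
```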
(a) Electron temperature
(b) Electron density
Figure 8: Evolution of temperature and density in reproduction of results from
Allais et al. [18] (Run 10) - corresponds to Figures 1 and 2 from original
paper.
Note that in all three runs discussed above we disabled all radiative
processes, as those would mask the energy conservation properties in the first
two runs, and would not allow for equilibrium in the detailed balance test.
##### Integrated test
Finally, an integrated test (Run 10) was performed in order to test the full
interplay of electron and neutral processes. We have attempted to replicate as
closely as possible the simulation performed by Allais et al.[18] using the
code FPI. Parameters used in this run were
$T_{0}=10\text{eV},\quad n_{0}=2.5\times 10^{19}m^{-3},\quad N_{n}=30,\quad
n_{TOT}=n_{0},$ $N_{v}=80,\quad\Delta v_{1}=0.05v_{th},\quad c_{v}=1.025,\quad
l_{max}=1$ $N_{c}=64,\quad\Delta x=5.536x_{0},\quad N_{t}=16600,\quad\Delta
t=0.1t_{0},$
amounting to a domain of length $L=20.31\text{m}$ and total simulation time
$t_{TOT}=49.96\mu\text{s}$. At $x=0$ the temperature is initialized to 25 eV,
and the plasma is 100% ionized. From $x=2.5$m to $x=4.06$m the temperature
drops exponentially to 1eV, while the ionization degree drops to 10%. All
inelastic collisions were enabled, as well as radiative processes, while ions
and neutrals were left stationary as in the original paper. Boundary
conditions were fixed to initial values. Figure 8 shows the evolution of
electron temperature and density corresponding to Fig. 1. and Fig. 2. in [18].
Qualitative behaviour is recovered, while discrepancies of less than 10% are
most likely caused by potential differences in the initialization, as well as
likely different spatial and velocity space resolutions (not reported in
[18]). Another potential cause of discrepancy is the use of different
databases in SOL-KiT compared to FPI.
### 4.3 Ion flow tests
##### Acoustic wave test
To test the ion advection, we performed the following isothermal ion acoustic
wave test (Run 11). Parameters of the simulation were
$T_{0}=100\text{eV},\quad n_{0}=10^{19}m^{-3},\quad N_{t}=6000,\quad
dt=0.01t_{0},\quad l_{max}=1$ $N_{v}=120,\quad\Delta v=0.1v_{th},\quad
N_{c}=128,\quad dx=0.0078125x_{0},$
with the density and ion velocity initialized on a periodic grid of length
$L=x_{0}$ as
$n(x)=n_{0}+0.01n_{0}\sin\left(2\pi\frac{x}{L}\right),\quad
u_{i}(x)=\frac{v_{th}}{6000}\sin\left(2\pi\frac{x}{L}\right).$
Electron-electron collisions for $l=0$ and electron-ion collisions were turned
on in the simulation. The density and ion velocity are presented at several
points during the evolution of the acoustic wave in Figure 9. The sound speed
was evaluated by fitting a sine function to the ion velocity profiles at
different times, giving the value $c_{s}=(1.001\pm 0.013)c_{s}^{analytical}$.
Estimation of the error was performed conservatively, by taking the greatest
deviation from the computed mean sound speed. Nonetheless, the obtained
agreement is satisfactory even with the conservative error estimate.
(a) Electron/ion density
(b) Ion velocity
Figure 9: Evolution of density and ion velocity during the acoustic wave
propagation test from Run 11.
##### Charge-exchange friction test
To test the charge-exchange cross-section implementation, a set of runs (Set
2) with the following parameters was performed
$T_{0}=10\text{eV},\quad n_{0}=2.5\times 10^{20}m^{-3},\quad N_{n}=1,\quad
n_{TOT}=n_{0},\quad n_{e,i}=n_{0},$
with the electrons and ions decoupled by turning the $E$-field and electron-
ion collisions off. The only operator being tested was the charge-exchange
friction term in the ion equation. The total simulation time was kept at 500
e-i reference collision times. The ion velocity was initialized at
$u_{i}=v_{th}$, and its value was compared to the analytical solution
$u(t)=u(0)/(1+u(0)n_{i}m_{i}n_{1}\sigma_{CX,1}t)$ as the ions were slowed down
by friction. The relative error of the evolved ion velocity is plotted in
Figure 10 for different timestep lengths. It should be noted that the initial
value of the ion velocity is unphysically high, and was used only for stress
testing the operator, with velocities towards the end of the simulation being
more in line with values observed in the SOL.
Figure 10: Relative difference between analytical and computed values during
charge-exchange friction test for various timestep lengths - Set 2.
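The analytical reference used in this comparison is a simple slowing-down law; a minimal sketch is given below, where `rate` lumps together the density and cross-section factors multiplying $u^{2}$ in the friction term.

```python
def cx_velocity(t, u0, rate):
    """Analytical Set 2 reference solution u(t) = u(0)/(1 + u(0)*rate*t)
    for an ion beam decelerated by charge-exchange friction."""
    return u0 / (1.0 + u0 * rate * t)
```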
### 4.4 Divertor boundary condition tests
##### Analytic limit convergence test
The first condition a cut-off logical boundary condition would need to satisfy
is the analytical Maxwellian limit for the sheath properties. In order to test
this aspect of the operator, the cut-off procedure and the calculation of the
sheath heat transmission factor and potential drop were performed without
evolving the initially Maxwellian distribution. A set of single-step $\Delta
t=0$ simulations (Set 3) with different velocity space resolutions (constant
$v_{max}=6v_{th}$) was used to evaluate convergence of the cut-off
calculation. The results of this test are shown in
Figure 11, where we see that both the sheath heat transmission coefficient and
the potential are within a few percent of the analytical values[6] (
$\gamma_{e}=2-0.5\ln(2\pi(1+T_{i}/T_{e})m_{e}/m_{i})$ and
$\Delta\Phi=\gamma_{e}-2$) even for the coarsest grid used.
Figure 11: Relative error of computed sheath heat transmission coefficient and
potential drop as a function of velocity grid resolution - Set 3. Distribution
was set to Maxwellian and wasn’t evolved in order to compare to the analytical
results (see text).
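The analytical reference values quoted above evaluate directly; for example:

```python
import math

def sheath_analytic(Ti_over_Te=1.0, mi_over_me=1836.15267343):
    """Analytical Maxwellian-limit sheath heat transmission coefficient
    gamma_e and potential drop (in units of T_e/e), hydrogen by default."""
    gamma_e = 2.0 - 0.5 * math.log(2.0 * math.pi * (1.0 + Ti_over_Te) / mi_over_me)
    return gamma_e, gamma_e - 2.0

print(sheath_analytic())  # approximately (4.49, 2.49) for Ti = Te
```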
##### High harmonic convergence test
In order to test convergence of the sheath properties in a driven system when
the number of harmonics is varied a set of runs (Set 4) was performed with the
following parameters
$T_{0}=10\text{eV},\quad n_{0}=2.5\times 10^{19}m^{-3},$ $N_{v}=80,\quad\Delta
v_{1}=0.05v_{th},\quad c_{v}=1.01,$
with a short system $L=1x_{0}$ and $N_{c}=2$. Electron-electron collisions
were included only for $l=0$ while electron-ion collisions were on for all
$l>0$. A Maxwellian with $T_{e}=T_{0}$ and $n_{e}=n_{0}$ was imposed as a
fixed boundary condition at $x=0$. This provided a scenario where the boundary
condition operator was sufficiently stressed without entering a strictly
collisionless regime. Simulations were run until the sheath properties
equilibrated, and the results of the test are shown in Figure 12. Simulations
were performed up to $l=25$, and the relative changes in the sheath potential
drop and heat transmission coefficient were tracked. Very quickly the change
drops below 1%, before nonmonotonic behaviour is observed around $l=9$. While
convergence appears slow, this is to be expected with such a sharply cut-off
distribution being expanded in Legendre polynomials. Fortunately, even in this
highly stressed scenario a relatively low number of harmonics $l\leq 5$ seems
to be enough to capture the physically important behaviour.
Figure 12: Relative change in sheath properties when increasing the number of
harmonics in the simulation. Non-monotonic behaviour was observed around
$l=9$, causing the drop in relative change - Set 4.
## 5 Discussion
In the previous section we have presented tests that show SOL-KiT agreeing
with established analytical results for the phenomena of interest, namely
heat conduction and the logical sheath boundary condition. The Epperlein-Short
test (Set 1) presented in Section 4.1, as well as Set 4 in Section 4.4, shows
that the code is capable of handling non-local behaviour and demonstrates
that suitable convergence in harmonic number $l$ can be achieved.
The novel discretization scheme for the Boltzmann collision integral for
inelastic collisions on a velocity grid has been shown to conserve particles
and energy up to round-off error. This is to be expected, as with $N_{n}=30$
hydrogen states, the number of transitions treated, not including radiative
transitions, is 930. Given the stiff nature of the Collisional-Radiative model
matrix, as well as the corresponding terms in the electron kinetic equations
and the large number of transitions, we expected relative errors of order
$\approx 10^{-13}$. Runs 7-8 in the previous section agree with this estimate,
with the geometric grid notably having better conservation properties. This is
likely due to the fact that the high excited state transitions are better
resolved with a finer grid near the origin. Detailed balance has also been
shown to be observed within a high accuracy, owing to the numerically adapted
detailed balance cross-sections derived in Section 3.6.1.
At the sheath boundary, we implement the logical boundary condition [27].
While this is a standard boundary condition in SOL kinetic codes, this is the
first time it has been implemented in a harmonic decomposition code. A
reasonable concern in this case would be that the sheath properties would be
hard to pinpoint with accuracy, given that the sharp cut-off is poorly
approximated with Legendre polynomials. While we do notice the effects of the
Gibbs phenomenon in the reconstructed distribution function when high $l$ low
collisionality systems are treated, we show in Set 4 that a relatively low
number of harmonics is sufficient to provide an estimate of the relevant
sheath properties. Furthermore, the Maxwellian analytic limit of the sheath
properties requires only resolving up to $l=1$. This further supports the
notion that, while important in general, high harmonic approximations of the
cut-off are not necessary for the transport calculations at the sheath
boundary.
SOL-KiT is a fully implicit code, and is thus not limited by the CFL
condition. It is, therefore, able to resolve the lowest velocity cells for
higher harmonics, unlike explicit codes [23]. This is one of the key features
allowing the implementation of both lab frame ion flow and inelastic
electron-neutral collisions, with the former being the main beneficiary of the
implicit scheme. The reason for this is that the effect of flowing ions in the
lab frame is visible primarily in the low velocity cells, where electrons are
highly collisional and are dragged with the ions. A scheme that cannot resolve
those cells must resort to calculations in the ion centre of mass frame[44,
28]. Given the complicated sheath boundary condition necessary in the
simulation of the SOL, the lab frame is the natural choice. The effect of this
choice is that the complexity normally arising in various LHS operators (in
the Vlasov part) of the electron kinetic equation is transferred into the
electron-ion collision operator. In this work the ions are treated as a cold
beam for the sake of this collisional computation. While the addition of ion
temperature effects would in theory increase the fidelity of the operator, it
would strongly depend on the velocity grid resolution around the ion flow
velocity. It is likely that the effect would be negligible as the ion thermal
velocity is much smaller than the electron thermal velocity, and that the main
collisional effect of the ion thermal population is well captured by the delta
function of a cold beam. However, refinement of the electron-ion collision
operator is a potential path in the development of the code.
Momentum is not explicitly conserved in the electron-electron collision
operator for $l>0$ as implemented in the code. As suggested by Tzoufras et al.
[23], we ensure total momentum conservation by transferring the momentum lost
in electron-electron collisions to the ions. This produces a spurious electric
field in the regions of high flow speeds. However, this electric field is
negligible compared to the physical electric fields occurring in realistic SOL
simulations, as the region of non-zero ion flow tends to be in front of the
divertor target, where the pre-sheath electric field is generated through
pressure gradients.
The way we calculate the electric field also ensures that quasineutrality is
observed up to displacement current effects. This allows SOL-KiT to treat
collisionless phenomena on the Debye length scale. However, as the logical
sheath boundary condition does not require resolving the Debye sheath, this
regime has not been fully explored.
Performance of the code was tracked in several of the presented runs.
Specifications of the used workstation include an Intel® Xeon(R) CPU E5-2630
v3 @ 2.40GHz with a maximum of 16 threads, and 15.6GB of RAM. In all runs
performed so far it appears that the CPU requirement always outweighs the
memory needs of the code. Execution times on the workstation during tests
varied from under 6 minutes for the local heat conductivity runs, to 18.5h for
the reproduction in Run 10. Most runs were performed with the full 16 threads,
except for the several quasi-0D tests. These runs (such as Runs 7-8) took
just under 7h to complete on a single core. Run 9, with both electrons and
ions treated as a fluid, took slightly over 3h. Since fluid runs have both a
simpler matrix and no requirement to resolve the collisional times, they tend
to run much faster and for longer physical simulation times. The two
main bottlenecks in the current version of the code have been identified as
the electron-electron collisions for $l>0$ as well as the electron-neutral
collision operators. As was noted above, the e-e collisions for $l>0$ produce
dense submatrices for each harmonic, both increasing the matrix computation
times as well as increasing the difficulty of solving the matrix system. The
e-n collisions similarly produce almost dense matrices, but the main
bottleneck there is the large number of transitions, translating into hundreds
of effective collision operator computations per timestep. In order to speed
this up, the detailed balance cross-sections can be updated only once per
timestep, as opposed to during every nonlinear iteration. Another way of
speeding the neutral physics up would be grouping the higher excited states
into effective states, thus reducing the total number of collision integral
matrices computed every timestep.
While SOL-KiT is implicit, in practice the timestep is limited by the
capabilities of the solver. We currently use the Bi-CGSTAB iterative matrix
solver with Block Jacobi preconditioning in the PETSc package. This solver
tends to struggle when the timestep is many ($\approx 50$) times the electron-
ion collisional time, as well as when higher harmonics are included, owing to
the added stiffness of the matrix caused by the fast evolution of $f_{l}$ at
high $l$. However, in most situations we are aiming to resolve the collisional
phenomena, and the timestep-limiting effect of higher harmonics becomes
evident at a number of harmonics already high enough where the dense matrix
effects already cause the majority of the slowdown.
SOL-KiT is currently parallelized using MPI, with domain decomposition along
the $x$-axis. Basic optimization was performed on the code; however, many
avenues of potential improvement remain. Besides the already mentioned
grouping of excited neutral states, parallelization in $l$ number is also
being considered, though the degree of speedup attainable would depend heavily
on the inter-processor communication. Different preconditioners and solvers
could also be employed depending on the problem, though this would require
more in-depth tests of the code’s performance.
One of the code’s design goals was to enable consistent and convenient one-to-
one comparisons between a fluid and kinetic model of electron transport. This
was accomplished by ensuring that the physics implemented in the kinetic and
fluid modes uses the same data (e.g. cross-sections etc.). A possible extension
of this approach would be the inclusion of various non-local fluid models as
comparison options. The easiest to implement would possibly be the flux
limiter method [4], and more complicated models like the SNB model and
others[43] could be implemented self-consistently with the current framework.
The elastic electron-neutral collision operator currently implemented assumes
a simple cross-section (see Section 2.1.3). While tested, this operator is not
in regular use, as we believe the cross-section could be improved with a more
detailed model based on experimental data. Improvements of the neutral model are
being planned, including potential additions of molecules, the implementation
of a fluid as opposed to a diffusive neutral model, and the option to include
high $Z$ impurities. To accompany the improved neutral model, the next step in
SOL-KiT’s development will be the addition of an ion temperature equation, as
well as an electron-ion collision operator for $l=0$. This will allow us to
probe more varied SOL regimes, where the ion and electron temperatures are
decoupled.
Compared to PIC codes, the finite difference approach taken with SOL-KiT
provides a speed-up in computation, as well as avoiding the noise issues
present in PIC codes. The closest comparison to SOL-KiT, however, is the code
FPI [16, 17, 18]. While a form of the logical boundary condition appears to
have been implemented in FPI [16, 17], to our knowledge no in-depth discussion
of its form in the Legendre formalism is available. Similarly, electron-neutral
inelastic collisions have been implemented and reported in both ALLA [15] and
FPI, but the conservative properties of the operators were not explored in
detail. In this paper we report in detail both of these numerical aspects in
the context of a Legendre decomposition model. Furthermore, the combination of
a fully implicit algorithm together with the lab frame ion treatment and the
aforementioned numerical facets has been realized for the first time in SOL-
KiT. The addition of a self-consistent fluid mode for the electrons adds
another aspect to the code, with work on coupling the two modes under way.
## 6 Conclusion
We have presented the model and the numerics behind the newly developed fully
implicit arbitrary harmonic electron kinetic code SOL-KiT. The code is
designed for the exploration of kinetic effects in the Scrape-Off Layer, and
includes a self-consistent fluid electron mode for one-to-one comparisons.
Novel conservative implementation of the electron-neutral collision
operators for inelastic collisions and the Legendre polynomial version of the
logical sheath boundary condition are presented and tested, showing good
agreement with expected properties. We have shown that the code can resolve
highly non-local effects utilizing high harmonics, and have demonstrated that
some of the more demanding operators converge well with the increase in the
number of harmonics. The next steps in SOL-KiT development have been laid out,
focusing on both improving performance, as well as adding new physics to
extend the applicability of the code.
## Acknowledgements
This work was supported through the Imperial President’s PhD Scholarship
scheme. This work was partially funded by the RCUK Energy Programme [grant
number: EP/P012450/1]. Some of the simulation results were obtained with the
use of the Imperial College Research Computing Service [45].
## Appendix A Divertor target boundary condition transformation matrix
derivation
As stated above, we label the cut-off velocity $v_{c}$ and assume that all
electrons with parallel velocity greater than $v_{c}$ are lost to the sheath,
while all other electrons are reflected. This informs the form of the
distribution function at the sheath entrance in the following way. If we know
the distribution function for positive $v_{x}$ (moving towards the divertor
target) and can expand it into Legendre polynomials (assuming azimuthal
symmetry, of course)
$f(v_{x}>0,v_{\perp})=\sum_{l}f_{l}(v)P_{l}(\cos\theta),\quad\cos\theta>0.$
Now, if we were to reflect all electrons back, the total (reflection included)
distribution function would be
$f_{R}(v,\theta)=\sum_{l}f_{l}(v)P_{l}(|\cos\theta|).$
Finally, we take away all of the electrons that would have been lost to the
sheath and thus should not be in the distribution function. This is simply all
electrons with $v\cos\theta=v_{x}<-v_{c}$. Using a Heaviside step function,
this can be expressed as
$f_{c}(v,\theta)=f_{R}(v,\theta)\Theta(v\cos\theta+v_{c}).$ (98)
From here we can use Legendre polynomial orthogonality to extract the $l$-th
harmonic of the “cut-off” distribution
$f_{cl}(v)=\frac{2l+1}{2}\int_{-1}^{1}f_{R}(v,\theta)\Theta(v\cos\theta+v_{c})P_{l}(\cos\theta)d(\cos\theta).$
(99)
Using the decomposition of $f_{R}$ we write equation (99) as
$f_{cl}(v)=\frac{2l+1}{2}\sum_{l^{\prime}}f_{l^{\prime}}(v)\int_{-1}^{1}P_{l^{\prime}}(|\cos\theta|)P_{l}(\cos\theta)\Theta(v\cos\theta+v_{c})d(\cos\theta).$
(100)
Here it is helpful to visualize the two-dimensional velocity space of the cut-
off. This is shown in Fig. 13. As can be seen, the integral in (100) has
different limits depending on the value of the velocity $v$. This can be
written as (taking $x=\cos\theta$)
$f_{cl}(v)=\frac{2l+1}{2}\sum_{l^{\prime}}f_{l^{\prime}}(v)\int_{x_{min}}^{1}P_{l^{\prime}}(|x|)P_{l}(x)dx,$
(101)
where $x_{min}=\max(-1,\cos\theta_{max}(v))$, with
$\cos\theta_{max}=-v_{c}/v$. Introducing the transformation matrix
$P_{ll^{\prime}}$ ($v$ dependence implied)
$P_{ll^{\prime}}=\frac{2l+1}{2}\int_{x_{min}}^{1}P_{l^{\prime}}(|x|)P_{l}(x)dx,$
(102)
we get a compact expression for the sheath edge electron distribution
harmonics
$f_{cl}(v)=\sum_{l^{\prime}}P_{ll^{\prime}}f_{l^{\prime}}(v).$ (103)
Figure 13: The distribution function (anisotropy exaggerated) after applying
reflection and cut-off. Highlighted is the integration limit
$\theta_{max}(v).$
Now the problem is reduced to computing the matrix $P_{ll^{\prime}}$. First,
let us dispose of the absolute value in the argument of $P_{l^{\prime}}$ by
separating the integral into positive and negative $x$ intervals
$P_{ll^{\prime}}=\frac{2l+1}{2}\left((-1)^{l^{\prime}}\int_{x_{min}}^{0}P_{l^{\prime}}(x)P_{l}(x)dx+\int_{0}^{1}P_{l^{\prime}}(x)P_{l}(x)dx\right),$
(104)
where we have used the parity property of Legendre polynomials
$P_{l}(-x)=(-1)^{l}P_{l}(x)$. In order to reduce the integrals in (104) to
forms available in the literature, we expand the integration range in the first
integral up to $1$ and, after grouping terms, we get
$P_{ll^{\prime}}=P^{x}_{ll^{\prime}}+P^{0}_{ll^{\prime}},$ (105)
where
$P^{x}_{ll^{\prime}}=\frac{2l+1}{2}(-1)^{l^{\prime}}\int_{x_{min}}^{1}P_{l^{\prime}}(x)P_{l}(x)dx,$
(106)
$P^{0}_{ll^{\prime}}=\frac{2l+1}{2}(1-(-1)^{l^{\prime}})\int_{0}^{1}P_{l^{\prime}}(x)P_{l}(x)dx.$
(107)
The integral in (106) can be found in the literature (see for example [46] for
general case) and well known recurrence formulae for the derivatives of
Legendre polynomials can be used to get
$P^{x}_{ll^{\prime}}=\begin{cases}(-1)^{l^{\prime}}\delta_{ll^{\prime}}\quad\text{if
$x_{min}=-1$}\\\ F_{ll^{\prime}}\quad\text{if $x_{min}>-1$ and $l\neq
l^{\prime}$}\\\ F_{ll}\quad\text{if $x_{min}>-1$ and
$l=l^{\prime}$}\end{cases}$
where
$\begin{split}F_{ll^{\prime}}&=(-1)^{l^{\prime}}\frac{2l+1}{2}\Big{[}-\frac{x_{min}}{l+l^{\prime}+1}P_{l}(x_{min})P_{l^{\prime}}(x_{min})\\\
&+\frac{1}{(l-l^{\prime})(l+l^{\prime}+1)}\left(lP_{l-1}(x_{min})P_{l^{\prime}}(x_{min})-l^{\prime}P_{l}(x_{min})P_{l^{\prime}-1}(x_{min})\right)\Big{]},\end{split}$
and
$F_{ll}=(-1)^{l}\left[\frac{1}{2}-\left(\frac{x_{min}}{2}[P_{l}(x_{min})]^{2}+\sum_{k=1}^{l-1}P_{k}(x_{min})(x_{min}P_{k}(x_{min})-P_{k+1}(x_{min}))\right)\right].$
For $P^{0}_{ll^{\prime}}$ we get
$P^{0}_{ll^{\prime}}=\begin{cases}0\quad\text{if $l^{\prime}$ even}\\\
\delta_{ll^{\prime}}\quad\text{if $l^{\prime},l$ odd and $l=l^{\prime}$}\\\
0\quad\text{if $l^{\prime},l$ odd and $l\neq l^{\prime}$}\\\
\frac{2l+1}{(l-l^{\prime})(l+l^{\prime}+1)}\left(lP_{l-1}(0)P_{l^{\prime}}(0)-l^{\prime}P_{l}(0)P_{l^{\prime}-1}(0)\right)\quad\text{if
$l^{\prime}$ odd and $l$ even}\end{cases}$
It should be noted that for $v<v_{c}$ the above equations give
$P_{ll^{\prime}}=0$ for all odd $l$, which is a consequence of reflection
symmetry; from this one can also see that there is no effective flux
contribution for $v<v_{c}$.
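The closed forms above can be cross-checked numerically against direct quadrature of (102). The following standalone sketch (not part of SOL-KiT) evaluates (102) with Gauss-Legendre quadrature; for $v<v_{c}$ (i.e. $x_{min}=-1$) the odd-$l$ rows vanish by the reflection symmetry noted above.

```python
# Standalone numerical check of equation (102); not SOL-KiT source.
import numpy as np
from numpy.polynomial.legendre import leggauss
from scipy.special import eval_legendre

def P_ll(l, lp, x_min, n_quad=400):
    """Quadrature of (2l+1)/2 * int_{x_min}^{1} P_{l'}(|x|) P_l(x) dx."""
    nodes, weights = leggauss(n_quad)  # n_quad large: |x| has a kink at 0
    xs = 0.5*(1.0 - x_min)*nodes + 0.5*(1.0 + x_min)  # map [-1,1] -> [x_min,1]
    ws = 0.5*(1.0 - x_min)*weights
    vals = eval_legendre(lp, np.abs(xs))*eval_legendre(l, xs)
    return 0.5*(2*l + 1)*np.sum(ws*vals)

# For v < v_c we have x_min = -1: odd-l rows are zero (reflection symmetry).
for l in range(4):
    print(l, [round(P_ll(l, lp, x_min=-1.0), 6) for lp in range(4)])
```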
## References
* [1] B. Lipschultz, et al., Plasma-surface interaction, scrape-off layer and divertor physics: implications for ITER, Nucl. Fusion 47 (9) (2007) 1185–1205.
* [2] R. Pitts, et al., Phys. basis and design for ITER plasma-facing components, J. Nucl. Mater. 415 (1 SUPPL) (2011) 957–964.
* [3] S. I. Braginskii, Transport processes in a plasma, Rev. Plasma Phys. 1 (1965) 205–311.
* [4] W. Fundamenski, Parallel heat flux limits in the tokamak scrape-off layer, Plasma Phys. Controll. Fusion 47 (11) (2005) R163–R208. doi:10.1088/0741-3335/47/11/R01.
* [5] A. V. Chankin, D. P. Coster, Comparison of 2D models for the plasma edge with experimental measurements and assessment of deficiencies, J. Nucl. Mater. 390-391 (1) (2009) 319–324. doi:10.1016/j.jnucmat.2009.01.307.
* [6] P. Stangeby, The Plasma Boundary of Magnetic Fusion Devices, CRC Press, 2000.
* [7] R. Chodura, Non-local Heat Conduction along a Scrape-Off Layer with Strong Recycling, Contrib. Plasma Phys. 30 (1) (1990) 153–156.
* [8] A. Chankin, D. Coster, On the locality of parallel transport of heat carrying electrons in the SOL, J. Nucl. Mater. 463 (2015) 498–501. doi:10.1016/j.jnucmat.2014.10.057.
* [9] S. I. Krasheninnikov, A. S. Kukushkin, Phys. of ultimate detachment of a tokamak divertor plasma, J. Plasma Phys. 83 (05) (2017) 155830501. doi:10.1017/S0022377817000654.
* [10] D. Tskhakaya, F. Subba, X. Bonnin, D. P. Coster, W. Fundamenski, R. A. Pitts, On kinetic effects during parallel transport in the SOL, Contrib. Plasma Phys. 48 (1-3) (2008) 89–93. doi:10.1002/ctpp.200810015.
* [11] D. Tskhakaya, On Recent Massively Parallelized PIC Simulations of the SOL, Contrib. Plasma Phys. 52 (5-6) (2012) 490–499. doi:10.1002/ctpp.201210038.
* [12] D. Tskhakaya, M. Groth, 1D kinetic modelling of the JET SOL with tungsten divertor plates, J. Nucl. Mater. 438 (2013) S522–S525. doi:10.1016/j.jnucmat.2013.01.108.
* [13] A. Kirschner, D. Tskhakaya, S. Brezinsek, D. Borodin, J. Romazanov, R. Ding, A. Eksaeva, C. Linsmeier, Modelling of plasma-wall interaction and impurity transport in fusion devices and prompt deposition of tungsten as application, Plasma Phys. Controll. Fusion 60. doi:10.1088/1361-6587/aa8dce.
* [14] D. Tskhakaya, A. Soba, R. Schneider, M. Borchardt, E. Yurtesen, J. Westerholm, PIC/MC code BIT1 for plasma simulations on HPC, Proceedings of the 18th Euromicro Conference on Parallel, Distributed and Network-Based Processing, PDP 2010 (2010) 476–481doi:10.1109/PDP.2010.47.
* [15] O. Batishchev, M. M. Shoucri, A. A. Batishcheva, I. Shkarofsky, Fully kinetic simulation of coupled plasma and neutral particles in scrape-off layer plasmas of fusion devices, J. Plasma Phys. 61 (2) (1999) 347–364. doi:10.1017/S0022377898007375.
* [16] Z. Abou-Assaleh, J. P. Matte, T. W. Johnston, R. Marchand, Fokker-Planck Modelling of Edge Plasma Near the Neutralizer Plate in a Tokamak, Contrib. Plasma Phys. 32 (3/4) (1992) 268–272. doi:10.1002/ctpp.2150320315.
* [17] Z. Abou‐Assaleh, M. Petravic, R. Vesey, J. P. Matte, T. W. Johnston, Non‐Local Transport in a Tokamak Plasma Divertor with Recycling, Contrib. Plasma Phys. 34 (2/3) (1994) 175–179. doi:10.1002/ctpp.2150340213.
* [18] F. Allais, J. P. Matte, F. Alouani-Bibi, C. G. Kim, D. P. Stotler, T. D. Rognlien, Modification of atomic physics rates due to nonlocal electron parallel heat transport in divertor plasmas, J. Nucl. Mater. 337-339 (1-3 SPEC. ISS.) (2005) 246–250. doi:10.1016/j.jnucmat.2004.10.089.
* [19] M. Zhao, A. V. Chankin, D. P. Coster, Kinetic simulations of electron heat flux in the scrape-off layer, Nucl. Mater. Energy 12 (2017) 819–824. doi:10.1016/j.nme.2017.01.025.
* [20] A. V. Chankin, G. Corrigan, A. E. Jaervinen, Assessment of the strength of kinetic effects of parallel electron transport in the SOL and divertor of JET high radiative H-mode plasmas using EDGE2D-EIRENE and KIPP codes, Plasma Phys. and Controll. Fusion 60.
* [21] I. P. Shkarofsky, T. W. Johnston, M. P. Bachynski, The Particle Kinetics of Plasmas, Addison Wesley, 1966.
* [22] A. R. Bell, A. P. L. Robinson, M. Sherlock, R. J. Kingham, W. Rozmus, Fast electron transport in laser-produced plasmas and the KALOS code for solution of the Vlasov-Fokker-Planck equation, Plasma Phys. Controll. Fusion 48 (2006) R37.
* [23] M. Tzoufras, A. R. Bell, P. A. Norreys, F. S. Tsung, A Vlasov-Fokker-Planck code for high energy density physics, J. Comp. Phys. 230 (17) (2011) 6475–6494. doi:10.1016/j.jcp.2011.04.034.
* [24] R. J. Kingham, A. R. Bell, An implicit Vlasov-Fokker-Planck code to model non-local electron transport in 2-D with magnetic fields, J. Comp. Phys. 194 (1) (2004) 1–34. doi:10.1016/j.jcp.2003.08.017.
* [25] T. Makabe, Z. Petrovic, Plasma Electronics, 2nd Edition, CRC Press, 2016.
* [26] K. Kumar, H. R. Skullerud, R. E. Robson, Kinetic Theory of Charged Particle Swarms in Neutral Gases, Aust. J. Phys. 33 (2) (1980) 343. doi:10.1071/PH800343b.
* [27] R. J. Procassini, C. K. Birdsall, B. I. Cohen, Particle simulations of collisional transport in a high recycling, diverted tokamak scrape-off layer, Nucl. Fusion 30 (11) (1990) 2329–2348. doi:10.1088/0029-5515/30/11/010.
* [28] E. M. Epperlein, G. J. Rickard, A. R. Bell, A code for the solution of the Vlasov-Fokker-Planck equation in 1-D or 2-D, Computer Phys. Comm. 52 (1) (1988) 7–13. doi:10.1016/0010-4655(88)90165-8.
* [29] M. Capitelli, R. Celiberto, G. Colonna, F. Esposito, C. Gorse, K. Hassouni, A. Laricchiuta, S. Longo, Collisional-Radiative Models for Atomic Hydrogen Plasmas, Springer New York, New York, NY, 2016, pp. 143–173. doi:10.1007/978-1-4419-8185-1_6.
* [30] E. M. Epperlein, The accuracy of Braginskii’s transport coefficients for a Lorentz plasma, J. Phys. D 17 (9) (1984) 1823–1827. doi:10.1088/0022-3727/17/9/007.
* [31] E. M. Epperlein, M. G. Haines, Plasma transport coefficients in a magnetic field by direct numerical solution of the Fokker-Planck equation, Phys. Fluids 4 (29) (1986) 1029.
* [32] R. K. Janev, D. Reiter, U. Samm, Collision Processes in Low-Temperature Hydrogen Plasmas, Sciences-New York (2003) 190.
* [33] A. Kramida, Y. Ralchenko, J. Reader, NIST ASD Team (2018), NIST Atomic Spectra Database, https://physics.nist.gov/asd, accessed: 8th Dec 2017. doi:10.18434/T4W30F.
* [34] A. Bogaerts, R. Gijbels, J. Vlcek, Collisional-radiative model for an argon glow discharge, J. Applied Phys. 84 (1) (1998) 121–136. doi:10.1063/1.368009.
* [35] G. Colonna, L. D. Pietanza, M. Capitelli, Coupled solution of a time-dependent collisional-radiative model and Boltzmann equation for atomic hydrogen plasmas: Possible implications with LIBS plasmas, Spectrochim. Acta B 56 (6) (2001) 587–598. doi:10.1016/S0584-8547(01)00223-3.
* [36] J. D. Huba, NRL Plasma Formulary, Naval Research Laboratory, Washington DC 20375.
* [37] S. Balay, S. Abhyankar, M. F. Adams, J. Brown, P. Brune, K. Buschelman, L. Dalcin, A. Dener, V. Eijkhout, W. D. Gropp, D. Karpeyev, D. Kaushik, M. G. Knepley, D. A. May, L. C. McInnes, R. T. Mills, T. Munson, K. Rupp, P. Sanan, B. F. Smith, S. Zampini, H. Zhang, H. Zhang, PETSc Web page, https://www.mcs.anl.gov/petsc (2019).
URL https://www.mcs.anl.gov/petsc
* [38] S. Balay, S. Abhyankar, M. F. Adams, J. Brown, P. Brune, K. Buschelman, L. Dalcin, A. Dener, V. Eijkhout, W. D. Gropp, D. Karpeyev, D. Kaushik, M. G. Knepley, D. A. May, L. C. McInnes, R. T. Mills, T. Munson, K. Rupp, P. Sanan, B. F. Smith, S. Zampini, H. Zhang, H. Zhang, PETSc users manual, Tech. Rep. ANL-95/11 - Revision 3.12, Argonne National Laboratory (2019).
URL https://www.mcs.anl.gov/petsc
* [39] S. Balay, W. D. Gropp, L. C. McInnes, B. F. Smith, Efficient management of parallelism in object oriented numerical software libraries, in: E. Arge, A. M. Bruaset, H. P. Langtangen (Eds.), Modern Software Tools in Scientific Computing, Birkhäuser Press, 1997, pp. 163–202.
* [40] J. S. Chang, G. Cooper, A practical difference scheme for Fokker-Planck equations, J. Comp. Phys. 6 (1) (1970) 1–16. doi:10.1016/0021-9991(70)90001-X.
* [41] E. M. Epperlein, Implicit and Conservative Diference Scheme for the Fokker-Planck Equation, J. Comp. Phys. 112 (1994) 291–297.
* [42] A. B. Langdon, Conservative Differencing of the Electron Fokker-Planck Transport Equation, CECAM Report of Workshop on The Flux Limiter and Heat Flow Instabilities in Laser-Fusion Plasmas, Universite Paris Sud, France (1981) 69.
* [43] J. P. Brodrick, R. J. Kingham, M. M. Marinak, M. V. Patel, A. V. Chankin, J. T. Omotani, M. V. Umansky, D. Del Sorbo, B. Dudson, J. T. Parker, G. D. Kerbel, M. Sherlock, C. P. Ridgers, Testing nonlocal models of electron thermal conduction for magnetic and inertial confinement fusion applications, Phys. Plasmas 24 (9) (2017) 092309. doi:10.1063/1.5001079.
* [44] C. Ridgers, Magnetic Fields and Non-Local Transport in Laser Plasmas, Ph.D. thesis (2008).
* [45] Imperial College Research Computing Service, doi:10.14469/hpc/2232.
* [46] R. Szmytkowski, On the derivative of the Legendre function of the first kind with respect to its degree, J. Phys. A 39 (49) (2006) 15147–15172. doi:10.1088/0305-4470/39/49/006.
|
2024-09-04T02:54:56.230711 | 2020-03-02T12:00:52 | 2003.00786 | {
"authors": "V. Venkatesha, H. Aruna Kumara and Devaraja Mallesha Naik",
"full_text_license": null,
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"provenance": "arxiv-papers-0000.json.gz:25982",
"submitter": "Venkatesha Venkatesha",
"url": "https://arxiv.org/abs/2003.00786"
} | arxiv-papers | # Riemann Solitons and Almost Riemann Solitons on Almost Kenmotsu Manifolds
V. Venkatesha, H. Aruna Kumara and Devaraja Mallesha Naik Department of
Mathematics
Kuvempu University
Shankaraghatta
Karnataka 577 451
India<EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS>
###### Abstract.
The aim of this article is to study Riemann solitons and gradient almost
Riemann solitons on a certain class of almost Kenmotsu manifolds. Suitable
examples of Kenmotsu and $(\kappa,\mu)^{\prime}$-almost Kenmotsu manifolds are
also constructed to justify our results.
###### Key words and phrases:
Riemann soliton, Ricci soliton, almost Kenmotsu manifold, Einstein manifold.
###### 1991 Mathematics Subject Classification:
53C25, 53C15, 53D15
## 1\. Introduction
Hamilton [9] introduced the concept of Ricci flow. The idea of Ricci flow was
later generalized to the concept of Riemann flow (see [16, 17]). As an analog
of the Ricci soliton, Hiric̆a and Udrişte [10] introduced and studied the
Riemann soliton, which appears as a self-similar solution of the Riemann flow [16]:
$\displaystyle\frac{\partial}{\partial t}G(t)=-2R(g(t)),\quad t\in[0,I],$
where $G=\frac{1}{2}g\mathbin{\bigcirc\mspace{-15.0mu}\wedge\mspace{3.0mu}}g$,
$R$ is the Riemann curvature tensor associated to the metric $g$ and
$\mathbin{\bigcirc\mspace{-15.0mu}\wedge\mspace{3.0mu}}$ is the Kulkarni-Nomizu
product. These extensions are natural, since some results for the Riemann flow
resemble the case of the Ricci flow (for details, see [17]). For instance, the
Riemann flow satisfies short time existence and uniqueness [11]. Further, the
authors in [14] characterized the Riemann soliton in terms of infinitesimal
harmonic transformations. For $(0,2)$-tensors $A$ and $B$, their Kulkarni-
Nomizu product $A\mathbin{\bigcirc\mspace{-15.0mu}\wedge\mspace{3.0mu}}B$ is
given by
$\displaystyle(A\mathbin{\bigcirc\mspace{-15.0mu}\wedge\mspace{3.0mu}}B)(X_{1},X_{2},X_{3},$
$\displaystyle
X_{4})=A(X_{1},X_{3})B(X_{2},X_{4})+A(X_{2},X_{4})B(X_{1},X_{3})$
$\displaystyle-A(X_{1},X_{4})B(X_{2},X_{3})-A(X_{2},X_{3})B(X_{1},X_{4}).$
(1.1)
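For instance, evaluating (1.1) with $A=B=g$ on the quadruple $(X,Y,X,Y)$ gives
$\displaystyle(g\mathbin{\bigcirc\mspace{-15.0mu}\wedge\mspace{3.0mu}}g)(X,Y,X,Y)=2\\{g(X,X)g(Y,Y)-g(X,Y)^{2}\\},$
which equals $2$ whenever $X$ and $Y$ are orthonormal; this is the sectional-curvature-type quantity underlying the constant curvature condition recalled below.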
A Riemann soliton is a smooth manifold $M$ together with a Riemannian metric $g$
that satisfies
$\displaystyle
2R+\lambda(g\mathbin{\bigcirc\mspace{-15.0mu}\wedge\mspace{3.0mu}}g)+(g\mathbin{\bigcirc\mspace{-15.0mu}\wedge\mspace{3.0mu}}\pounds_{V}g)=0,$
(1.2)
where $V$ is a vector field called the potential vector field, $\pounds_{V}$
denotes the Lie derivative and $\lambda$ is a constant. A Riemann soliton
also corresponds to a fixed point of the Riemann flow, and such solitons can
be viewed as a dynamical system on the space of Riemannian metrics modulo
diffeomorphisms. Note that the notion of a Riemann soliton generalizes spaces
of constant sectional curvature (i.e.,
$R=kg\mathbin{\bigcirc\mspace{-15.0mu}\wedge\mspace{3.0mu}}g$, for some
constant $k$). A Riemann soliton is called shrinking when $\lambda<0$, steady
when $\lambda=0$ and expanding when $\lambda>0$. If the vector field $V$ is the
gradient of a potential function $u$, then we get the notion of a gradient
Riemann soliton. In such a case, equation (1.2) transforms into
$\displaystyle
R+\frac{1}{2}\lambda(g\mathbin{\bigcirc\mspace{-15.0mu}\wedge\mspace{3.0mu}}g)+(g\mathbin{\bigcirc\mspace{-15.0mu}\wedge\mspace{3.0mu}}Hess~{}u)=0,$
(1.3)
where $Hess~{}u$ denotes the Hessian of the smooth function $u$. In [10], it
is proved that a Riemann soliton on a compact Riemannian manifold is of
gradient type. A Riemann soliton is called trivial when the potential vector
field $V$ vanishes identically; in this case the manifold is of constant
sectional curvature. If the $\lambda$ appearing in equations (1.2) and (1.3) is
a smooth function, then $g$ is called an almost Riemann soliton or an almost
gradient Riemann soliton, respectively.
In [10], Hiric̆a and Udrişte studied Riemann solitons and gradient Riemann
solitons within the framework of Sasakian manifolds. They proved that if a
Sasakian manifold admits a Riemann soliton whose soliton vector field $V$ is
pointwise collinear with $\xi$ (or a gradient Riemann soliton whose potential
function is harmonic), then it is a Sasakian space form. In the same study,
the authors posed an open problem: to classify gradient Riemann solitons with
arbitrary potential function on Sasakian manifolds. Toward this
classification, the present authors in [4] considered Riemann solitons on
contact manifolds and obtained several interesting results. These results on
contact geometry intrigued us to study Riemann solitons on other almost
contact metric manifolds. In this paper, we classify certain classes of almost
Kenmotsu manifolds which admit a Riemann soliton or an almost gradient Riemann
soliton.
## 2\. Preliminaries
Let $(M,g)$ be a $(2n+1)$-dimensional smooth Riemannian manifold. If on this
manifold there exist a $(1,1)$-type tensor field $\varphi$, a global vector
field $\xi$ and a 1-form $\eta$ such that
$\displaystyle\varphi^{2}X=-X+\eta(X)\xi,\quad\eta(\xi)=1,$ (2.1)
$\displaystyle g(\varphi X,\varphi Y)=g(X,Y)-\eta(X)\eta(Y),$ (2.2)
for any vector fields $X,Y$ on $M$, then we say that $(\varphi,\xi,\eta,g)$ is
an almost contact metric structure and $M$ is an almost contact metric
manifold [1]. Generally, $\xi$ and $\eta$ are called the Reeb (or
characteristic) vector field and the almost contact 1-form, respectively. We
remark that an almost contact metric structure on a Riemannian manifold $M$
may be regarded as a reduction of the structure group of $M$ to $U(n)\times 1$.
For such a manifold, we define the fundamental 2-form $\Phi$ by
$\Phi(X,Y)=g(X,\varphi Y)$. It is
well known that the normality of an almost contact structure is expressed by
vanishing of the tensor $N_{\varphi}=[\varphi,\varphi]+2d\eta\otimes\xi$,
where $[\varphi,\varphi]$ is the Nijenhuis tensor of $\varphi$ (see [1]).
An almost Kenmotsu manifold [8] is an almost contact metric manifold such that
$d\eta=0$ and $d\Phi=2\eta\wedge\Phi$. A normal almost Kenmotsu manifold is
called a Kenmotsu manifold, and this normality condition is expressed as
$\displaystyle(\nabla_{X}\varphi)Y=g(\varphi X,Y)\xi-\eta(Y)\varphi X,$
for any vector fields $X,Y$ on $M$, where $\nabla$ denotes the Levi-Civita
connection of the Riemannian metric $g$. On an almost Kenmotsu manifold, we
set $\ell=R(\cdot,\xi)\xi$, $2h=\pounds_{\xi}\varphi$ and
$h^{\prime}=h\circ\varphi$, where $R$ denotes the curvature tensor of $M$ and
$\pounds$ is the Lie derivative. These tensor fields play an important role in
almost Kenmotsu geometry; it is easily seen that both are symmetric and
satisfy the following equations
$\displaystyle\nabla_{X}\xi=X-\eta(X)\xi+h^{\prime}X,$ (2.3) $\displaystyle
h\xi=\ell\xi=0,\quad trh=trh^{\prime}=0,\quad h\varphi+\varphi h=0,$ (2.4)
$\displaystyle tr\ell=S(\xi,\xi)=g(Q\xi,\xi)=-2n-trh^{2},$ (2.5)
where $S$ is the Ricci curvature tensor, $Q$ is the Ricci operator with
respect to $g$ and $tr$ is the trace operator.
###### Definition 2.1.
On an almost contact metric manifold $M$, a vector field $V$ is said to be an
infinitesimal contact transformation if $\pounds_{V}\eta=\sigma\eta$ for some
function $\sigma$. In particular, $V$ is called a strict infinitesimal
contact transformation if $\pounds_{V}\eta=0$.
Now, we recall some formulas which are helpful in proving our main results.
Using the symmetry of $\pounds_{V}\nabla$ in the commutation formula (see Yano
[22]):
$\displaystyle(\pounds_{V}$
$\displaystyle\nabla_{X}g-\nabla_{X}\pounds_{V}g-\nabla_{[V,X]}g)(Y,Z)$
$\displaystyle=-g((\pounds_{V}\nabla)(X,Y),Z)-g((\pounds_{V}\nabla)(X,Z),Y),$
we derive
$\displaystyle
2g((\pounds_{V}\nabla)(X,Y),Z)=(\nabla_{X}\pounds_{V}g)(Y,Z)+(\nabla_{Y}\pounds_{V}g)(Z,X)-(\nabla_{Z}\pounds_{V}g)(X,Y).$
(2.6)
The following well-known commutation formulas are also due to Yano
[22]:
$\displaystyle(\pounds_{V}R)(X,Y)Z=(\nabla_{X}\pounds_{V}\nabla)(Y,Z)-(\nabla_{Y}\pounds_{V}\nabla)(X,Z),$
(2.7)
$\displaystyle\pounds_{V}\nabla_{X}Y-\nabla_{X}\pounds_{V}Y-\nabla_{[V,X]}Y=(\pounds_{V}\nabla)(X,Y).$
(2.8)
## 3\. Riemann soliton and almost gradient Riemann soliton on Kenmotsu
manifolds
Dileo and Pastore [2] proved that the almost contact metric structure of an
almost Kenmotsu manifold is normal if and only if the foliation of the
distribution $\mathcal{D}$ is Kählerian and the $(1,1)$-type tensor field $h$
vanishes. In particular, an almost Kenmotsu manifold is a Kenmotsu manifold if
and only if $h=0$. The following formulae hold for a Kenmotsu manifold [12]
$\displaystyle\nabla_{X}\xi=X-\eta(X)\xi,$ (3.1) $\displaystyle
R(X,Y)\xi=\eta(X)Y-\eta(Y)X,$ (3.2) $\displaystyle Q\xi=-2n\xi.$ (3.3)
Differentiating (3.3) along an arbitrary vector field $X$ and recalling (3.1)
we deduce
$\displaystyle(\nabla_{X}Q)\xi=-QX-2nX.$ (3.4)
In this section, we aim to investigate the geometry of Riemann solitons and
almost gradient Riemann solitons on Kenmotsu manifolds. First, we prove the
following result.
###### Lemma 3.1.
Let $M$ be a $(2n+1)$-dimensional Kenmotsu manifold. If $(g,V)$ is a Riemann
soliton whose soliton vector field $V$ has constant divergence, then the
potential vector field and the Ricci tensor satisfy
$\displaystyle divV=2n(1-\lambda),$ (3.5)
$\displaystyle(\pounds_{V}S)(Y,\xi)=\frac{1}{2n-1}\\{-(Yr)+(\xi r)\eta(Y)\\},$
(3.6)
where $div$ denotes the divergence operator.
###### Proof.
As a result of (1.1), the Riemann soliton equation (1.2) can be expressed as
$\displaystyle 2R(X,Y,$ $\displaystyle
Z,W)+2\lambda\\{g(X,W)g(Y,Z)-g(X,Z)g(Y,W)\\}$
$\displaystyle+\\{g(X,W)(\pounds_{V}g)(Y,Z)+g(Y,Z)(\pounds_{V}g)(X,W)$
$\displaystyle-g(X,Z)(\pounds_{V}g)(Y,W)-g(Y,W)(\pounds_{V}g)(X,Z)\\}=0.$
(3.7)
Contracting (3.7) over $X$ and $W$, we obtain
$\displaystyle(\pounds_{V}g)(Y,Z)+\frac{2}{2n-1}S(Y,Z)+\frac{2}{2n-1}(2n\lambda+divV)g(Y,Z)=0.$
(3.8)
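In more detail, this contraction is taken over a local orthonormal frame $\\{e_{i}\\}_{i=1}^{2n+1}$ with $X=W=e_{i}$; using $\sum_{i}(\pounds_{V}g)(e_{i},e_{i})=2\,divV$, the four terms of (3.7) contribute
$\displaystyle 2S(Y,Z)+4n\lambda g(Y,Z)+(2n-1)(\pounds_{V}g)(Y,Z)+2(divV)g(Y,Z)=0,$
and dividing by $2n-1$ yields (3.8).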
Taking the covariant derivative of (3.8) gives
$\displaystyle(\nabla_{X}\pounds_{V}g)(Y,Z)+\frac{2}{2n-1}(\nabla_{X}S)(Y,Z)=0,$
(3.9)
where we used that $V$ has constant divergence. Inserting the foregoing
equation into (2.6), we obtain
$\displaystyle
g((\pounds_{V}\nabla)(X,Y),Z)=\frac{1}{2n-1}\\{(\nabla_{Z}S)(X,Y)-(\nabla_{X}S)(Y,Z)-(\nabla_{Y}S)(Z,X)\\}.$
(3.10)
The present authors in [18] and Ghosh in [7] independently gave complete
proofs of the following formula, which is valid for any Kenmotsu manifold
$\displaystyle(\nabla_{\xi}Q)X=-2QX-4nX.$ (3.11)
Replacing $Y$ by $\xi$ in (3.10), applying (3.4) and (3.11) we get
$\displaystyle(\pounds_{V}\nabla)(X,\xi)=\frac{2}{2n-1}QX+\frac{4n}{2n-1}X.$
(3.12)
Differentiating (3.12) covariantly along $Y$ and utilizing (3.1), we find
$\displaystyle(\nabla_{Y}\pounds_{V}\nabla)(X,\xi)+(\pounds_{V}\nabla)(X,Y)-\frac{2}{2n-1}\eta(Y)\\{QX+2nX\\}=\frac{2}{2n-1}(\nabla_{Y}Q)X.$
Making use of this in the well-known expression (2.7), we have
$\displaystyle(\pounds_{V}R)(X,Y)\xi=$
$\displaystyle\frac{2}{2n-1}\\{\eta(X)QY-\eta(Y)QX+(\nabla_{X}Q)Y-(\nabla_{Y}Q)X\\}$
$\displaystyle+\frac{4n}{2n-1}\\{\eta(X)Y-\eta(Y)X\\}.$ (3.13)
Inserting $Y=\xi$ in the above relation and utilizing (3.4) and (3.11), we
obtain $(\pounds_{V}R)(X,\xi)\xi=0$. On the other hand, applying
$\pounds_{V}$ to the formula $R(X,\xi)\xi=-X+\eta(X)\xi$ (which follows from (3.2))
yields
$\displaystyle(\pounds_{V}R)(X,\xi)\xi+g(X,\pounds_{V}\xi)\xi-2\eta(\pounds_{V}\xi)X=\\{(\pounds_{V}\eta)X\\}\xi,$
which by virtue of $(\pounds_{V}R)(X,\xi)\xi=0$ becomes
$\displaystyle
g(X,\pounds_{V}\xi)\xi-2\eta(\pounds_{V}\xi)X=\\{(\pounds_{V}\eta)X\\}\xi.$
(3.14)
With the aid of (3.3), the equation (3.8) takes the form
$\displaystyle(\pounds_{V}g)(X,\xi)=-\frac{2}{2n-1}\\{2n\lambda+divV-2n\\}\eta(X).$
(3.15)
Now, Lie-differentiating $\eta(X)=g(X,\xi)$ and $g(\xi,\xi)=1$ along $V$ and
taking into account (3.15) provides
$\displaystyle(\pounds_{V}\eta)X-g(X,\pounds_{V}\xi)+\frac{2}{2n-1}\\{2n\lambda+divV-2n\\}\eta(X)=0,$
$\displaystyle\eta(\pounds_{V}\xi)=\frac{1}{2n-1}\\{2n\lambda+divV-2n\\}.$
Utilizing these equations in (3.14) yields
$(2n\lambda+divV-2n)\\{X-\eta(X)\xi\\}=0$. Tracing this provides (3.5). Now
contracting the equation (3.13) provides
$\displaystyle(\pounds_{V}S)(Y,\xi)=\frac{1}{2n-1}\\{-(Yr)-2(r+2n(2n+1))\eta(Y)\\},$
(3.16)
where we applied the well-known formulas $divQ=\frac{1}{2}grad\,\,r$ and
$tr\nabla Q=grad\,\,r$. Taking the trace of (3.11) provides $(\xi
r)=-2(r+2n(2n+1))$. By virtue of this, (3.16) gives (3.6). This completes
the proof. ∎
###### Theorem 3.2.
Let $M$ be an $\eta$-Einstein Kenmotsu manifold of dimension higher than $3$.
If $(g,V)$ is a Riemann soliton with $divV=constant$, then $M$ is Einstein.
###### Proof.
We know that a Kenmotsu manifold $M$ is $\eta$-Einstein if and only if
$\displaystyle
S(X,Y)=\left(\frac{r}{2n}+1\right)g(X,Y)-\left(\frac{r}{2n}+2n+1\right)\eta(X)\eta(Y).$
(3.17)
With the aid of the above equation, it has been proved by the present authors
in ([18], Lemma 3.4) that on an $\eta$-Einstein Kenmotsu manifold of $dim>3$
there holds
$\displaystyle Dr=(\xi r)\xi.$
Making use of this in (3.6), we have $(\pounds_{V}S)(X,\xi)=0$. Now, applying
$\pounds_{V}$ to (3.3), recalling (3.17), (3.15), (3.5) and
$\eta(\pounds_{V}\xi)=0$, we obtain
$\displaystyle(r+2n(2n+1))\pounds_{V}\xi=0.$ (3.18)
Suppose that $r\neq-2n(2n+1)$ in some open set $\mathcal{O}$ of $M$. Then on
$\mathcal{O}$, $\pounds_{V}\xi=0=\pounds_{V}\eta$. Replacing $Y$ by $\xi$ in
(2.8) and using $\pounds_{V}\xi=0=\pounds_{V}\eta$, we have
$\displaystyle(\pounds_{V}\nabla)(X,\xi)=$
$\displaystyle\pounds_{V}X-\eta(X)\pounds_{V}\xi-\\{(\pounds_{V}\eta)X\\}\xi$
$\displaystyle-\eta(\pounds_{V}X)\xi-\pounds_{V}X+\eta(\pounds_{V}X)\xi=0.$
By virtue of this in (3.12), we have $QX=-2nX$. Taking the trace of this gives
$r=-2n(2n+1)$ on $\mathcal{O}$, which is a contradiction.
Thus, equation (3.18) gives $r=-2n(2n+1)$ and therefore we can conclude from
$\eta$-Einstein condition (3.17) that $M$ is Einstein. ∎
It is known that any $3$-dimensional Kenmotsu manifold is $\eta$-Einstein. In
view of Theorem 3.2, it is therefore interesting to study Riemann solitons on
Kenmotsu $3$-manifolds, and here we prove the following result.
###### Theorem 3.3.
Let $M$ be a 3-dimensional Kenmotsu manifold. If $(g,V)$ represents a Riemann
soliton whose soliton vector field $V$ has constant divergence, then $M$ is of
constant negative curvature $-1$.
###### Proof.
It is well known that the Riemannian curvature tensor of a $3$-dimensional
Riemannian manifold is given by
$\displaystyle R(X,Y)Z=$ $\displaystyle g(Y,Z)QX-g(X,Z)QY+g(QY,Z)X-g(QX,Z)Y$
$\displaystyle-\frac{1}{2}\\{g(Y,Z)X-g(X,Z)Y\\}.$ (3.19)
Substituting $\xi$ for $Y$ and $Z$ in the above equation and employing (3.2) and
(3.3) gives
$\displaystyle
QX=\left(\frac{r}{2}+1\right)X-\left(\frac{r}{2}+3\right)\eta(X)\xi,$
which is equivalent to
$\displaystyle
S(X,Y)=\left(\frac{r}{2}+1\right)g(X,Y)-\left(\frac{r}{2}+3\right)\eta(X)\eta(Y).$
(3.20)
Applying $\pounds_{V}$ to (3.3) and recalling (3.20) yields
$\displaystyle(\pounds_{V}S)(Y,\xi)+\left(\frac{r}{2}+1\right)g(Y,\pounds_{V}\xi)-\left(\frac{r}{2}+3\right)\eta(Y)\eta(\pounds_{V}\xi)=-2(\pounds_{V}\eta)(Y).$
This together with (3.6) provides
$\displaystyle-Y(r)+\xi(r)\eta(Y)+\left(\frac{r}{2}+1\right)g(Y,\pounds_{V}\xi)$
$\displaystyle-\left(\frac{r}{2}+3\right)\eta(Y)\eta(\pounds_{V}\xi)+2(\pounds_{V}\eta)(Y)=0.$
(3.21)
By virtue of (3.5) and (3.15), it follows from (3.21) that
$\displaystyle(r+6)g(Y,\pounds_{V}\xi)=2\\{Y(r)-\xi(r)\eta(Y)\\},$ (3.22)
where we used Theorem 4.1 of Wang [20]. Now suppose that $r=-6$. Then from
(3.20) we have $QX=-2X$, and substituting this in (3.19) one gets
$\displaystyle R(X,Y)Z=-\\{g(Y,Z)X-g(X,Z)Y\\},$
which means $M$ is of constant curvature $-1$.
Now we suppose that $r\neq-6$ in some open set $\mathcal{O}$ of $M$. Then on
$\mathcal{O}$, the relation (3.22) can be written as
$\displaystyle\pounds_{V}\xi=f\\{Dr-\xi(r)\xi\\},$ (3.23)
where $f=\frac{2}{r+6}$. Setting $Y=\xi$ in (2.8) and using (3.1), (3.23)
and (3.12), it follows that
$\displaystyle f\\{$ $\displaystyle
X(r)\eta(Y)+Y(r)\eta(X)-2\xi(r)\eta(X)\eta(Y)+g(\nabla_{X}Dr,Y)-X(\xi(r))\eta(Y)\\}$
$\displaystyle+X(f)\\{Y(r)-\xi(r)\eta(Y)\\}+\left\\{f~{}\xi(r)-\frac{1}{2}\xi(r)\right\\}g(\varphi
X,\varphi Y)=0.$
Antisymmetrizing the foregoing equation and keeping in mind the Poincaré
lemma: $g(\nabla_{X}Dr,Y)=g(\nabla_{Y}Dr,X)$, we find
$\displaystyle
f\\{Y(\xi(r))\eta(X)-X(\xi(r))\eta(Y)\\}+X(f)\\{Y(r)-\xi(r)\eta(Y)\\}$
$\displaystyle-Y(f)\\{X(r)-\xi(r)\eta(X)\\}=0.$
Putting $Y=\xi$ in the above equation and using Theorem 4.1 of Wang [20], one
gets
$\displaystyle(2f-\xi(f))\\{X(r)-\xi(r)\eta(X)\\}=0,$
which implies
$\displaystyle(2f-\xi(f))\\{Dr-\xi(r)\xi\\}=0.$ (3.24)
In the first case below we show that $Dr=\xi(r)\xi$ gives
$g(Dr,\xi)=-2(r+6)$; since $f=\frac{2}{r+6}$, we then obtain $\xi(f)=2f$. In
the second case we show that $\xi(f)=2f$ implies $Dr=\xi(r)\xi$. Thus, for any
point $p\in\mathcal{O}$, we have $(Dr)_{p}=\xi_{p}(r)\xi_{p}$ if and only if
$\xi_{p}(f)=2f$. Hence, from (3.24), we have either $\xi(f)=2f$ or
$Dr=\xi(r)\xi$.
Case 1: First we assume
$\displaystyle Dr=\xi(r)\xi,$ (3.25)
and so (3.23) becomes $\pounds_{V}\xi=0$ on the open set $\mathcal{O}$ of $M$.
Setting $Y=\xi$ in (2.8) and using $\pounds_{V}\xi=0$, we have
$(\pounds_{V}\nabla)(X,\xi)=0$. This together with (3.12) provides $QX=-2X$.
Contracting this with respect to $X$ gives $r=-6$, which yields a
contradiction.
Case 2: Suppose that $\xi(f)=2f$; then, using $f=\frac{2}{r+6}$, we obtain
$\xi(r)=-2(r+6)$, which means
$\displaystyle g(Dr,\xi)=-2(r+6).$
As $r\neq-6$ on $\mathcal{O}$, the above equation implies that $Dr$ is
proportional to $\xi$; in fact, $Dr=\xi(r)\xi$, and so, following Case 1, we
arrive at a contradiction. This completes the proof. ∎
Now we consider a Kenmotsu metric which is a gradient almost Riemann soliton;
first, we need the following result to prove our main theorem.
###### Lemma 3.4.
For a gradient almost Riemann soliton, the following formula is valid
$\displaystyle R(X,Y)Du=$
$\displaystyle\frac{1}{2n-1}\\{(\nabla_{Y}Q)X-(\nabla_{X}Q)Y\\}$
$\displaystyle+\frac{1}{2n-1}\\{Y(2n\lambda+\Delta u)X-X(2n\lambda+\Delta
u)Y\\},$ (3.26)
where $\Delta u=divDu$ and $\Delta$ is the Laplacian operator.
###### Proof.
Contracting the gradient almost Riemann soliton equation (1.3), we get
$\displaystyle Hess~{}u+\frac{1}{2n-1}S+\frac{1}{2n-1}(2n\lambda+\Delta
u)g=0.$
Note that the above equation may be exhibited as
$\displaystyle\nabla_{Y}Du=-\frac{1}{2n-1}QY-\frac{1}{2n-1}(2n\lambda+\Delta
u)Y.$ (3.27)
By a straightforward computation, using the well-known expression of the
curvature tensor
$\displaystyle
R(X,Y)=\nabla_{X}\nabla_{Y}-\nabla_{Y}\nabla_{X}-\nabla_{[X,Y]},$
and repeated use of equation (3.27), one gets the desired result. ∎
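For completeness, a sketch of that computation: from (3.27),
$\displaystyle\nabla_{X}\nabla_{Y}Du=-\frac{1}{2n-1}\\{(\nabla_{X}Q)Y+Q\nabla_{X}Y+X(2n\lambda+\Delta u)Y+(2n\lambda+\Delta u)\nabla_{X}Y\\},$
and in the combination $\nabla_{X}\nabla_{Y}Du-\nabla_{Y}\nabla_{X}Du-\nabla_{[X,Y]}Du$ the terms involving $Q\nabla_{X}Y-Q\nabla_{Y}X$ and $(2n\lambda+\Delta u)(\nabla_{X}Y-\nabla_{Y}X)$ cancel against the $\nabla_{[X,Y]}Du$ term by (3.27), leaving exactly (3.26).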
###### Theorem 3.5.
If the metric of a Kenmotsu manifold $M$ of dimension $(2n+1)$ represents a
gradient almost Riemann soliton, then either $M$ is Einstein or the soliton
vector field $V$ is pointwise collinear with the characteristic vector field
$\xi$ on an open set $\mathcal{O}$ of $M$. In the first case, if $M$ is
complete, then $M$ is locally isometric to the hyperbolic space
$\mathbb{H}^{2n+1}$, and the function $2n\lambda+\Delta u$, up to an additive
constant, can be expressed as a linear combination of $cosh\,t$ and $sinh\,t$.
###### Proof.
Replacing $Y$ by $\xi$ in (3.26) and employing (3.4) and (3.11), we find
$\displaystyle
R(X,\xi)Du=-\frac{1}{2n-1}\\{QX+2nX\\}+\frac{1}{2n-1}\\{\xi(2n\lambda+\Delta
u)X-X(2n\lambda+\Delta u)\xi\\}.$ (3.28)
From (3.2), we have $R(X,\xi)Y=g(X,Y)\xi-\eta(Y)X$. Employing this into (3.28)
provides
$\displaystyle g(X,Du+\frac{1}{2n-1}D(2n\lambda+\Delta u))$
$\displaystyle\xi-\\{\xi(u)+\frac{1}{2n-1}\xi(2n\lambda+\Delta u)\\}X$
$\displaystyle=-\frac{1}{2n-1}\\{QX+2nX\\}.$ (3.29)
Taking the inner product of the previous equation with $\xi$ and using (3.3) provides
$X(u+\frac{1}{2n-1}(2n\lambda+\Delta u))=\xi(u+\frac{1}{2n-1}(2n\lambda+\Delta
u))\eta(X)$, from which we have
$\displaystyle d(u+\frac{1}{2n-1}(2n\lambda+\Delta
u))=\xi(u+\frac{1}{2n-1}(2n\lambda+\Delta u))\eta,$ (3.30)
where $d$ is the exterior derivative. This means that
$u+\frac{1}{2n-1}(2n\lambda+\Delta u)$ is invariant along the distribution
$\mathcal{D}$, i.e., $X(u+\frac{1}{2n-1}(2n\lambda+\Delta u))=0$ for any
vector field $X\in\mathcal{D}$. Utilizing (3.30) in (3.29), we obtain
$\displaystyle\\{\xi(u)+\frac{1}{2n-1}\xi(2n\lambda+\Delta u)\\}\eta(X)$
$\displaystyle\xi-\\{\xi(u)+\frac{1}{2n-1}\xi(2n\lambda+\Delta u)\\}X$
$\displaystyle=-\frac{1}{2n-1}\\{QX+2nX\\}.$ (3.31)
Contracting the above equation, one immediately obtains
$\displaystyle\xi(u+\frac{1}{2n-1}(2n\lambda+\Delta
u))=\frac{1}{2n-1}\\{\frac{r}{2n}+2n+1\\}.$ (3.32)
Making use of the last equation in (3.31), one can obtain the $\eta$-Einstein
condition (3.17). Now, we contract equation (3.26) over $X$ to deduce
$\displaystyle S(Y,Du)=\frac{1}{2(2n-1)}Y(r)+\frac{2n}{2n-1}Y(2n\lambda+\Delta
u).$
Comparing the above equation with (3.17), we obtain
$\displaystyle(r+2n)Y(u)-($ $\displaystyle
r+2n(2n+1))\xi(u)\eta(Y)-\frac{1}{2(2n-1)}Y(r)$
$\displaystyle-\frac{4n^{2}}{2n-1}Y(2n\lambda+\Delta u)=0.$ (3.33)
Inserting $Y=\xi$ in (3.33) and recalling (3.32), one gets
$\xi(r)=-2\\{r+2n(2n+1)\\}$. Applying $d$ to (3.30), we get $dr\wedge\eta=0$,
where we used $d^{2}=0$ and $d\eta=0$. Hence, by virtue of
$\xi(r)=-2\\{r+2n(2n+1)\\}$, we have
$\displaystyle Dr=-2\\{r+2n(2n+1)\\}\xi.$ (3.34)
Suppose that $Y$ in (3.33) is orthogonal to $\xi$. Taking into account that
$u+\frac{1}{2n-1}(2n\lambda+\Delta u)$ is constant along $\mathcal{D}$
and utilizing (3.30) and (3.34), we get $(r+2n(2n+1))Y(u)=0$ for any
$Y\in\mathcal{D}$. This implies that
$\displaystyle(r+2n(2n+1))(Du-\xi(u)\xi)=0.$ (3.35)
If $r=-2n(2n+1)$, then the equation (3.17) shows that $QX=-2nX$, and hence $M$
is Einstein. Since $r=-2n(2n+1)$, it follows from (3.32) that
$\xi(u)=-\frac{1}{2n-1}\xi(2n\lambda+\Delta u)$ and therefore
$Du=-\frac{1}{2n-1}D(2n\lambda+\Delta u)$. Thus, equation (3.27) can be
exhibited as
$\displaystyle\nabla_{X}D(2n\lambda+\Delta u)=((2n\lambda+\Delta u)-2n)X.$
(3.36)
According to Theorem 2 of Tashiro [15], when $M$ is complete we obtain that
$M$ is locally isometric to the hyperbolic space $\mathbb{H}^{2n+1}$. Since
$\nabla_{\xi}\xi=0$ and $g(\xi,D(2n\lambda+\Delta u))=\xi(2n\lambda+\Delta
u)$, we deduce from (3.36) that $\xi(\xi(2n\lambda+\Delta
u))=(2n\lambda+\Delta u)-2n$. But, it is known [12] that a
$(2n+1)$-dimensional Kenmotsu manifold is locally isometric to the warped
product $I\times_{ce^{t}}N^{2n}$, where $N$ is a Kähler manifold and $I$ is an
open interval. Employing $\xi=\frac{\partial}{\partial t}$ (where $t$ denotes
the coordinate on $I$) into (3.36) we obtain
$\displaystyle\frac{d^{2}(2n\lambda+\Delta u)}{dt^{2}}=(2n\lambda+\Delta u)-2n.$
The solution of this can be exhibited as $(2n\lambda+\Delta
u)=A\,cosh\,\,t+B\,sinh\,\,t+2n$, where $A$ and $B$ are constants.
Now, suppose that $r\neq-2n(2n+1)$ in some open set $\mathcal{O}$ of $M$; then
from (3.35) we have $Du=\xi(u)\xi$, i.e., the soliton vector field is pointwise
collinear with $\xi$, and this completes the proof. ∎
###### Remark 3.6.
Theorem 1 in [6], Theorem 1 in [5] and Theorem 3 in [7] of Ghosh are direct
corollaries of Theorem 3.2, Theorem 3.3 and Theorem 3.5, respectively.
Now we construct an example of a Riemann soliton on a 3-dimensional Kenmotsu
manifold and verify our results.
###### Example.
Let us indicate the canonical coordinates on $\mathbb{R}^{3}$ by $(x,y,z)$,
and take the 3-dimensional manifold $M\subset\mathbb{R}^{3}$ defined by
$\displaystyle M=\\{(x,y,z)\in\mathbb{R}^{3}|z\neq 0\\}.$
We may easily verify that putting
$\displaystyle\varphi\left(\frac{\partial}{\partial
x}\right)=\frac{\partial}{\partial
y},\quad\varphi\left(\frac{\partial}{\partial
y}\right)=-\frac{\partial}{\partial
x},\quad\varphi\left(\frac{\partial}{\partial z}\right)=0,$
$\displaystyle\xi=\frac{\partial}{\partial z},\quad\eta=dz,\quad
g=\frac{1}{2}exp(2z)(dx^{2}+dy^{2})+dz^{2},$
$(\varphi,\xi,\eta,g)$ is an almost contact metric structure on $M$. The
matrix representation of $g$ with respect to $\frac{\partial}{\partial x}$,
$\frac{\partial}{\partial y}$ and $\frac{\partial}{\partial z}$ is
$\displaystyle(g_{ij})=\begin{pmatrix}\frac{1}{2}exp(2z)&0&0\\\
0&\frac{1}{2}exp(2z)&0\\\ 0&0&1\end{pmatrix}.$
Using the Koszul formula we have
$\displaystyle\begin{gathered}\nabla_{\frac{\partial}{\partial
x}}\frac{\partial}{\partial x}=-\frac{1}{2}exp(2z)\frac{\partial}{\partial
z},\quad\nabla_{\frac{\partial}{\partial x}}\frac{\partial}{\partial
y}=0,\quad\nabla_{\frac{\partial}{\partial x}}\frac{\partial}{\partial
z}=\frac{\partial}{\partial x},\\\ \nabla_{\frac{\partial}{\partial
y}}\frac{\partial}{\partial x}=0,\quad\nabla_{\frac{\partial}{\partial
y}}\frac{\partial}{\partial y}=-\frac{1}{2}exp(2z)\frac{\partial}{\partial
z},\quad\nabla_{\frac{\partial}{\partial y}}\frac{\partial}{\partial
z}=\frac{\partial}{\partial y},\\\ \nabla_{\frac{\partial}{\partial
z}}\frac{\partial}{\partial x}=\frac{\partial}{\partial
x},\quad\nabla_{\frac{\partial}{\partial z}}\frac{\partial}{\partial
y}=\frac{\partial}{\partial y},\quad\nabla_{\frac{\partial}{\partial
z}}\frac{\partial}{\partial z}=0,\end{gathered}$ (3.40)
where $\nabla$ denotes the Levi-Civita connection of the Riemannian metric
$g$. It is not hard to verify that the condition (3.1) for a Kenmotsu manifold
is satisfied. Hence, the manifold under consideration is a Kenmotsu manifold.
By a straightforward calculation we have
$\displaystyle R\left(\frac{\partial}{\partial x},\frac{\partial}{\partial
y}\right)\frac{\partial}{\partial z}=0,\quad R\left(\frac{\partial}{\partial
x},\frac{\partial}{\partial z}\right)\frac{\partial}{\partial y}=0,\quad
R\left(\frac{\partial}{\partial x},\frac{\partial}{\partial
z}\right)\frac{\partial}{\partial z}=-\frac{\partial}{\partial x},$
$\displaystyle R\left(\frac{\partial}{\partial x},\frac{\partial}{\partial
y}\right)\frac{\partial}{\partial
x}=\frac{1}{2}exp(2z)\frac{\partial}{\partial y},\quad
R\left(\frac{\partial}{\partial x},\frac{\partial}{\partial
y}\right)\frac{\partial}{\partial
y}=-\frac{1}{2}exp(2z)\frac{\partial}{\partial x},$ $\displaystyle
R\left(\frac{\partial}{\partial x},\frac{\partial}{\partial
z}\right)\frac{\partial}{\partial
x}=\frac{1}{2}exp(2z)\frac{\partial}{\partial z},\quad
R\left(\frac{\partial}{\partial y},\frac{\partial}{\partial
z}\right)\frac{\partial}{\partial x}=0,$ $\displaystyle
R\left(\frac{\partial}{\partial y},\frac{\partial}{\partial
z}\right)\frac{\partial}{\partial
y}=\frac{1}{2}exp(2z)\frac{\partial}{\partial z},\quad
R\left(\frac{\partial}{\partial y},\frac{\partial}{\partial
z}\right)\frac{\partial}{\partial z}=-\frac{\partial}{\partial y},$ (3.41)
and applying these, we obtain
$\displaystyle S\left(\frac{\partial}{\partial x},\frac{\partial}{\partial
x}\right)=-exp(2z),\quad S\left(\frac{\partial}{\partial
y},\frac{\partial}{\partial y}\right)=-exp(2z),\quad
S\left(\frac{\partial}{\partial z},\frac{\partial}{\partial z}\right)=-2.$
Thus, the Ricci tensor satisfies $S(X,Y)=-2ng(X,Y)$ for any $X,Y\in\Gamma(TM)$,
that is, $M$ is Einstein. From (3.41), one can easily show that
$\displaystyle R(X,Y)Z=-\\{g(Y,Z)X-g(X,Z)Y\\},$ (3.42)
for any $X,Y,Z\in\Gamma(TM)$. Next consider a vector field
$\displaystyle V=a\left(y\frac{\partial}{\partial x}-x\frac{\partial}{\partial
y}\right),$ (3.43)
where $a\neq 0$ is a constant. One can easily verify that $V$ has constant
divergence. As a result of (3.40), one can also verify that
$\displaystyle(\pounds_{V}g)(X,Y)=0,$ (3.44)
for any $X,Y\in\Gamma(TM)$. Unifying (3.44) and (3.42), we obtain that $g$ is
a Riemann soliton, that is, (1.2) holds true with $V$ as in (3.43) and
$\lambda=1$. Further, (3.42) shows that $M$ is of constant negative curvature
$-1$, and this verifies Theorem 3.3.
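The two facts used in this example, that $g$ has constant sectional curvature $-1$ and that $\pounds_{V}g=0$, can also be verified symbolically. The following is a small sympy sketch written for this purpose (an illustration, not part of the original computation):

```python
# Symbolic check: g = (1/2)e^{2z}(dx^2+dy^2)+dz^2 has curvature -1, and
# V = a(y d/dx - x d/dy) is Killing. Illustrative sketch using sympy.
import sympy as sp

x, y, z, a = sp.symbols('x y z a')
X = [x, y, z]
g = sp.diag(sp.exp(2*z)/2, sp.exp(2*z)/2, 1)
gi = g.inv()

# Christoffel symbols Gamma^k_{ij} of the Levi-Civita connection of g.
Gam = [[[sp.simplify(sum(gi[k, l]*(sp.diff(g[l, i], X[j])
         + sp.diff(g[l, j], X[i]) - sp.diff(g[i, j], X[l]))
         for l in range(3))/2) for j in range(3)] for i in range(3)]
       for k in range(3)]

# Curvature components R^l_{kij}, with R(d_i, d_j)d_k = R^l_{kij} d_l.
def Riem(l, k, i, j):
    e = sp.diff(Gam[l][j][k], X[i]) - sp.diff(Gam[l][i][k], X[j])
    e += sum(Gam[l][i][m]*Gam[m][j][k] - Gam[l][j][m]*Gam[m][i][k]
             for m in range(3))
    return sp.simplify(e)

# Constant curvature -1: R_{lkij} = -(g_{jk} g_{li} - g_{ik} g_{lj}).
const_curv = all(sp.simplify(sum(g[l, m]*Riem(m, k, i, j) for m in range(3))
                 + g[j, k]*g[l, i] - g[i, k]*g[l, j]) == 0
                 for l in range(3) for k in range(3)
                 for i in range(3) for j in range(3))

# Killing check: (Lie_V g)_{ij} = V^k d_k g_{ij} + g_{kj} d_i V^k + g_{ik} d_j V^k.
V = [a*y, -a*x, 0]
lie = sp.Matrix(3, 3, lambda i, j: sp.simplify(
    sum(V[k]*sp.diff(g[i, j], X[k]) + g[k, j]*sp.diff(V[k], X[i])
        + g[i, k]*sp.diff(V[k], X[j]) for k in range(3))))
print(const_curv, lie == sp.zeros(3, 3))  # expected: True True
```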
## 4\. Riemann solitons and almost gradient Riemann solitons on
$(\kappa,\mu)^{\prime}$-almost Kenmotsu manifolds with $\kappa<-1$
If the Reeb (or characteristic) vector field $\xi$ of an almost Kenmotsu
manifold $M$ belongs to the $(\kappa,\mu)^{\prime}$-nullity distribution (see
[3]), that is,
$\displaystyle
R(X,Y)\xi=\kappa\\{\eta(Y)X-\eta(X)Y\\}+\mu\\{\eta(Y)h^{\prime}X-\eta(X)h^{\prime}Y\\},$
(4.1)
for some constants $\kappa$ and $\mu$, then $M$ is called a
$(\kappa,\mu)^{\prime}$-almost Kenmotsu manifold. The classification of almost
Kenmotsu manifolds with $\xi$ belonging to the $(\kappa,\mu)^{\prime}$-nullity
distribution has been carried out by many geometers; for more details, we
refer to [3, 13, 19, 20]. On a $(\kappa,\mu)^{\prime}$-almost Kenmotsu
manifold, the following relations hold
$\displaystyle h^{\prime 2}=(\kappa+1)\varphi^{2},$
$\displaystyle\quad\text{or}\quad h^{2}=(\kappa+1)\varphi^{2},$ (4.2)
$\displaystyle Q\xi=2n\kappa\xi.$ (4.3)
Let $X\in Ker\,\eta$ be an eigenvector field of $h^{\prime}$ orthogonal to $\xi$
with corresponding eigenvalue $\theta$. As a result of (4.2), we get
$\theta^{2}=-(\kappa+1)$ and hence $\kappa\leq-1$. From (4.2) we
remark that the tensor field $h^{\prime}=0$ (equivalently, $h=0$) if and only
if $\kappa=-1$, and $\kappa<-1$ if and only if $h^{\prime}\neq 0$. As stated
in Proposition 4.1 of Dileo and Pastore [3], on a
$(\kappa,\mu)^{\prime}$-almost Kenmotsu manifold with $\kappa<-1$ we have
$\mu=-2$. In this section, we plan to investigate the geometry of Riemann
solitons and gradient almost Riemann solitons on non-Kenmotsu
$(\kappa,\mu)^{\prime}$-almost Kenmotsu manifolds. First, we recall the
following result for later use.
###### Lemma 4.1.
In a $(\kappa,\mu)^{\prime}$-almost Kenmotsu manifold $M$ with $\kappa<-1$,
the Ricci operator satisfies
$\displaystyle QX=-2nX+2n(\kappa+1)\eta(X)\xi-2nh^{\prime}X,$ (4.4)
where $h^{\prime}\neq 0$. Moreover, the scalar curvature $r$ of $M$ is
$2n(\kappa-2n)$.
Now, we prove the following result.
###### Theorem 4.2.
Let $M$ be a non-Kenmotsu $(\kappa,\mu)^{\prime}$-almost Kenmotsu manifold. If
the metric of $M$ is a Riemann soliton with $divV=constant$, then either the
manifold is locally isometric to the Riemannian product
$\mathbb{H}^{n+1}(-4)\times\mathbb{R}^{n}$ or the potential vector field $V$
is a strict infinitesimal contact transformation.
###### Proof.
Contracting the equation (1.2) over $X$ and $W$, one can get
$\displaystyle(\pounds_{V}g)(Y,Z)+\frac{2}{2n-1}S(Y,Z)+\frac{2}{2n-1}(2n\lambda+divV)g(Y,Z)=0.$
As a result of Lemma 4.1, the above equation transforms into
$\displaystyle(\pounds_{V}g)(Y,$ $\displaystyle
Z)+\frac{2}{2n-1}\\{(2n\lambda+divV-2n)g(Y,Z)$
$\displaystyle+2n(\kappa+1)\eta(Y)\eta(Z)-2ng(h^{\prime}Y,Z)\\}=0.$ (4.5)
Differentiating (4.5) covariantly along $X$, recalling (2.3) and using that
$V$ has constant divergence, we have
$\displaystyle(\nabla_{X}\pounds_{V}g)(Y,Z)=-$
$\displaystyle\frac{4n}{2n-1}(\kappa+1)\\{g(X+h^{\prime}X,Y)\eta(Z)+g(X+h^{\prime}X,Z)\eta(Y)$
$\displaystyle-2\eta(X)\eta(Y)\eta(Z)\\}+\frac{4n}{2n-1}g((\nabla_{X}h^{\prime})Y,Z).$
(4.6)
In [3] Dileo and Pastore proved that on $M$ there holds
$\displaystyle g((\nabla_{X}h^{\prime})Y,Z)=g$
$\displaystyle((\kappa+1)X-h^{\prime}X,Y)\eta(Z)+\eta(Y)g((\kappa+1)X-h^{\prime}X,Z)$
$\displaystyle-2(\kappa+1)\eta(X)\eta(Y)\eta(Z).$ (4.7)
Making use of (4.6) in (2.6), we obtain
$\displaystyle(\pounds_{V}\nabla)(X,Y)=-\frac{4n}{2n-1}(\kappa+2)g(h^{\prime}X,Y)\xi,$
(4.8)
where we applied (4.7). Taking the covariant derivative of (4.8), we get
$\displaystyle(\nabla_{Y}\pounds_{V}\nabla)(X,Z)=$
$\displaystyle-\frac{4n}{2n-1}(\kappa+2)g((\nabla_{Y}h^{\prime})X,Z)\xi$
$\displaystyle-\frac{4n}{2n-1}(\kappa+2)g(h^{\prime}X,Z)(Y-\eta(Y)\xi+h^{\prime}Y).$
Putting the foregoing equation into (2.7) yields
$\displaystyle(\pounds_{V}$ $\displaystyle
R)(X,Y)Z=\frac{4n}{2n-1}(\kappa+2)\\{g((\nabla_{Y}h^{\prime})X,Z)\xi-g((\nabla_{X}h^{\prime})Y,Z)\xi$
R)(X,Y)Z=\frac{4n}{2n-1}\\{g((\nabla_{Y}h^{\prime})X,Z)\xi-g((\nabla_{X}h^{\prime})Y,Z)\xi$
$\displaystyle+g(h^{\prime}X,Z)(Y-\eta(Y)\xi+h^{\prime}Y)-g(h^{\prime}Y,Z)(X-\eta(X)\xi+h^{\prime}X)\\}.$
(4.9)
Contracting the equation (4.9) with respect to $X$ and recalling (4.7) we
easily obtain
$\displaystyle(\pounds_{V}S)(Y,Z)=-\frac{8n^{2}}{2n-1}(\kappa+2)g(h^{\prime}Y,Z).$
(4.10)
From (4.4), the Ricci tensor can be written as
$\displaystyle
S(Y,Z)=-2ng(Y,Z)+2n(\kappa+1)\eta(Y)\eta(Z)-2ng(h^{\prime}Y,Z).$
Taking the Lie derivative of this equation along the potential vector field
$V$ and utilizing (2.3) and (4.5) yields
$\displaystyle(\pounds_{V}S)(Y,Z)$
$\displaystyle=\frac{4n}{2n-1}\\{(2n\lambda+divV+2n\kappa)g(Y,Z)-2(\kappa+1)(2n\lambda+divV$
$\displaystyle+2n\kappa)\eta(Y)\eta(Z)+(2n\lambda+divV-4n)g(h^{\prime}Y,Z)\\}+2n(\kappa+1)$
$\displaystyle\\{\eta(Y)g(\pounds_{V}\xi,Z)+\eta(Z)g(\pounds_{V}\xi,Y)\\}-2ng((\pounds_{V}h^{\prime})Y,Z).$
(4.11)
Putting (4.11) together with (4.10), we get
$\displaystyle g((\pounds_{V}h^{\prime})Y,Z)=$
$\displaystyle\frac{2}{2n-1}\\{(2n\lambda+divV+2n\kappa)g(Y,Z)-2(\kappa+1)(2n\lambda+divV$
$\displaystyle+2n\kappa)\eta(Y)\eta(Z)+(2n(\kappa+2)+2n\lambda+divV-4n)g(h^{\prime}Y,Z)\\}$
$\displaystyle+(\kappa+1)\\{\eta(Y)g(\pounds_{V}\xi,Z)+\eta(Z)g(\pounds_{V}\xi,Y)\\}.$
(4.12)
Note that by setting $Y=Z=\xi$ in (4.5) we obtain
$\eta(\pounds_{V}\xi)-\frac{1}{2n-1}(2n\lambda+divV+2n\kappa)=0$. Applying
this equation and substituting $Y=Z=\xi$ in (4.12), one gets
$(2n\lambda+divV+2n\kappa)=0$, since $\kappa<-1$, and eventually we have
$\displaystyle(\pounds_{V}h^{\prime})Y=(\kappa+1)\\{\eta(Y)\pounds_{V}\xi+g(\pounds_{V}\xi,Y)\xi\\}.$
Inserting $Y=\xi$ in the foregoing equation and utilizing
$(2n\lambda+divV+2n\kappa)=0$ gives
$\displaystyle h^{\prime}\pounds_{V}\xi=-(\kappa+1)\pounds_{V}\xi.$ (4.13)
In view of (4.2), $(2n\lambda+divV+2n\kappa)=0$ and $\kappa<-1$, the action of
$h^{\prime}$ on the above equation gives
$h^{\prime}\pounds_{V}\xi=\pounds_{V}\xi$. This together with (4.13) yields
$\displaystyle(\kappa+2)\pounds_{V}\xi=0.$
Suppose $\kappa=-2$, then it follows from Proposition 4.1 and Corollary 4.2 of
Dileo and Pastore [3] that a non-Kenmotsu $(\kappa,\mu)^{\prime}$-almost
Kenmotsu manifold is locally isometric to the Riemannian product
$\mathbb{H}^{n+1}(-4)\times\mathbb{R}^{n}$. If $\kappa\neq-2$, then we have
$\pounds_{V}\xi=0$. As a result of $2n\lambda+divV+2n\kappa=0$ and
$(\pounds_{V}g)(X,\xi)=0$ we obtain
$\displaystyle(\pounds_{V}\eta)X=(\pounds_{V}g)(X,\xi)+g(X,\pounds_{V}\xi)=0.$
This means the potential vector field $V$ is a strict infinitesimal contact
transformation. This finishes the proof. ∎
As a result of Lemma 4.1 and the relations (3.27) and (3.26), we obtain the
following result.
###### Theorem 4.3.
Let $M$ be a non-Kenmotsu $(\kappa,\mu)^{\prime}$-almost Kenmotsu manifold
which admits a gradient almost Riemann soliton. Then, the soliton is expanding
with $\lambda=\frac{6n-2}{2n-1}$ and $M$ is locally isometric to the
Riemannian product $\mathbb{H}^{n+1}(-4)\times\mathbb{R}^{n}$. Moreover, the
potential vector field is tangential to the Euclidean factor $\mathbb{R}^{n}$.
###### Proof.
With the help of (4.4) we have
$\displaystyle(\nabla_{Y}Q)X-(\nabla_{X}Q)Y=$
$\displaystyle-2n((\nabla_{Y}h^{\prime})X-(\nabla_{X}h^{\prime})Y)$
$\displaystyle-2n(\kappa+1)(\eta(Y)(X+h^{\prime}X)-\eta(X)(Y+h^{\prime}Y)).$
Utilization of the above equation in (3.26) gives
$\displaystyle R(X,Y)Du=$
$\displaystyle-\frac{2n}{2n-1}\\{((\nabla_{Y}h^{\prime})X-(\nabla_{X}h^{\prime})Y)$
$\displaystyle+(\kappa+1)(\eta(Y)(X+h^{\prime}X)-\eta(X)(Y+h^{\prime}Y))\\}$
$\displaystyle+\frac{1}{2n-1}\\{Y(2n\lambda+\Delta u)X-X(2n\lambda+\Delta
u)Y\\}$ (4.14)
Using $\xi$ in place of $X$ in (4.14) and taking the inner product of the
resulting equation with $\xi$ gives
$\displaystyle g(R(\xi,Y)Du,\xi)=\frac{1}{2n-1}\\{Y(2n\lambda+\Delta
u)-\xi(2n\lambda+\Delta u)\eta(Y)\\}.$ (4.15)
As a result of (4.1), we get
$\displaystyle
g(R(\xi,Y)\xi,Du)=\kappa\\{\xi(u)\eta(Y)-Y(u)\\}+2g(h^{\prime}Du,Y),$ (4.16)
where we used $\mu=-2$. Comparing (4.15) with (4.16) yields that
$\displaystyle\frac{1}{2n-1}D(2n\lambda+\Delta u)=\kappa
Du-\kappa\xi(u)\xi-2h^{\prime}Du+\frac{1}{2n-1}\xi(2n\lambda+\Delta u)\xi.$
(4.17)
According to Corollary 4.1 of Dileo and Pastore [3] and Lemma 3.4 of Wang and
Liu [21], we have that $tr(\nabla_{X}h^{\prime})=0$ and
$(divh^{\prime})X=2n(\kappa+1)\eta(X)$. Applying these equations and
contracting (4.14) over $X$ gives
$\displaystyle S(Y,Du)=\frac{2n}{2n-1}Y(2n\lambda+\Delta u).$
As a result of (4.4), the above equation transforms into
$\displaystyle\frac{1}{2n-1}D(2n\lambda+\Delta
u)=-Du+(\kappa+1)\xi(u)\xi-h^{\prime}Du.$ (4.18)
By virtue of (4.17) and (4.18) we obtain
$\displaystyle(\kappa+1)Du-(2\kappa+1)\xi(u)\xi-h^{\prime}Du+\frac{1}{2n-1}\xi(2n\lambda+\Delta
u)\xi=0.$
Remembering the assumption $\kappa<-1$ and equation (4.2), the action of
$h^{\prime}$ on the above equation gives
$\displaystyle h^{\prime}Du=-Du+\xi(u)\xi.$ (4.19)
Again making use of (4.2), the action of $h^{\prime}$ on (4.19) yields
$(\kappa+1)(Du-\xi(u)\xi)=h^{\prime}Du$. This together with (4.19) gives
$\displaystyle(\kappa+2)(Du-\xi(u)\xi)=0.$ (4.20)
Thus, we have either $\kappa=-2$ or $Du=\xi(u)\xi$. Now, we prove that the
second case cannot occur. In fact, if we assume that $Du=\xi(u)\xi$, then the
covariant derivative of this along $X$ gives
$\displaystyle\nabla_{X}Du=X(\xi(u))\xi+\xi(u)\\{X-\eta(X)\xi+h^{\prime}X\\}.$
With the aid of the above equation, (3.27) becomes
$\displaystyle\frac{1}{2n-1}QX=-(\xi(u)+\frac{1}{2n-1}(2n\lambda+\Delta
u))X+(\xi(u)\eta(X)-X(\xi(u)))\xi-\xi(u)h^{\prime}X.$
Utilization of (4.4) in the foregoing equation furnishes
$\displaystyle\left(\xi(u)+\frac{1}{2n-1}(2n\lambda+\Delta
u-2n)\right)X+\left(\xi(u)-\frac{2n}{2n-1}\right)h^{\prime}X$
$\displaystyle+\left(\frac{2n}{2n-1}(\kappa+1)\eta(X)+X(\xi(u))-\xi(u)\eta(X)\right)\xi=0.$
(4.21)
Contracting (4.21) with respect to $X$ and recalling (2.4) we obtain
$\displaystyle 2n(\kappa+1)\left(\xi(u)-\frac{2n}{2n-1}\right)=0,$
and this shows that $\xi(u)=\frac{2n}{2n-1}$, since $\kappa<-1$.
Putting $\xi(u)=\frac{2n}{2n-1}$ into (4.21) we obtain
$\displaystyle(2n\lambda+\Delta u)X+2n\kappa\eta(X)\xi=0.$ (4.22)
Taking $X$ in (4.22) orthogonal to $\xi$, we have $2n\lambda+\Delta
u=0$. Thus, (4.22) becomes $2n\kappa\eta(X)\xi=0$. It follows that $\kappa=0$,
which contradicts our assumption $\kappa<-1$. Therefore, we obtain from (4.20)
that $\kappa=-2$. In view of $\kappa=-2=\mu$, on the basis of results of Dileo
and Pastore [3] we obtain that $M$ is locally isometric to the Riemannian
product $\mathbb{H}^{n+1}(-4)\times\mathbb{R}^{n}$. Employing $\kappa=-2$ and
(4.19) in (4.17) we obtain
$\displaystyle D(2n\lambda+\Delta u)=\xi(2n\lambda+\Delta u)\xi.$ (4.23)
Utilizing (4.23) and (4.19) in (4.18) provides
$\displaystyle\xi\left(\frac{1}{2n-1}(2n\lambda+\Delta u)+2u\right)=0.$ (4.24)
In the framework of (4.4), (3.27) and (4.24) we acquire that
$\displaystyle 2n\lambda+\Delta u=4n+\frac{1}{2}\xi(\xi(2n\lambda+\Delta u)).$
(4.25)
From (4.2) we have $h^{\prime 2}=-\varphi^{2}$. Now, we shall indicate by
$[1]^{\prime}$ and $[-1]^{\prime}$ the distributions of the eigenvectors of
$h^{\prime}$ orthogonal to $\xi$ with eigenvalues $1$ and $-1$ respectively,
and we may consider a local orthonormal $\varphi$-frame $\\{E_{i},\varphi
E_{i},\xi\\}_{i=1}^{n}$ with $E_{i}\in[1]^{\prime}$ and $\varphi
E_{i}\in[-1]^{\prime}$. As a result of (4.19), we easily find that $Du$ has no
components on the distribution $[1]^{\prime}$. Thus, we write
$Du=\sum_{i=1}^{n}\gamma_{i}\varphi E_{i}+\xi(u)\xi$, where
$\\{\gamma_{i}\\}_{i=1}^{n}$ are smooth functions on $M$. Applying this and
(4.24) in (3.27) we obtain that
$\displaystyle\frac{1}{2n-1}QX=$
$\displaystyle\frac{1}{2n-1}\left(\frac{1}{2}\xi(2n\lambda+\Delta
u)-(2n\lambda+\Delta u)\right)X-\sum_{i=1}^{n}X(\gamma_{i})\varphi E_{i}$
$\displaystyle-\sum_{i=1}^{n}\gamma_{i}\nabla_{X}\varphi
E_{i}+\frac{1}{2(2n-1)}\xi(2n\lambda+\Delta u)h^{\prime}X$
$\displaystyle+\frac{1}{2(2n-1)}\left(X(\xi(2n\lambda+\Delta
u))-\xi(2n\lambda+\Delta u)\eta(X)\right)\xi.$
By virtue of (4.4), the above equation transforms into
$\displaystyle\frac{1}{2n-1}$
$\displaystyle\left(\frac{1}{2}\xi(2n\lambda+\Delta u)-(2n\lambda+\Delta
u)+2n\right)X-\sum_{i=1}^{n}X(\gamma_{i})\varphi
E_{i}-\sum_{i=1}^{n}\gamma_{i}\nabla_{X}\varphi E_{i}$
$\displaystyle+\frac{1}{2n-1}\left(\frac{1}{2}\xi(2n\lambda+\Delta
u)+2n\right)h^{\prime}X+\frac{1}{2(2n-1)}\\{X(\xi(2n\lambda+\Delta u))$
$\displaystyle-\xi(2n\lambda+\Delta u)\eta(X)+4n\eta(X)\\}\xi=0.$ (4.26)
According to the proof of Proposition 4.1 of [3], we have that
$\nabla_{E_{j}}\varphi E_{i}\in[-1]^{\prime}$ for any $E_{i}\in[1]^{\prime}$,
$1\leq j\leq n$. Consequently, using $E_{j}\in[1]^{\prime}$ in place of $X$ in
(4.26) provides
$\displaystyle(2n\lambda+\Delta u)=\xi(2n\lambda+\Delta u)+4n.$ (4.27)
Combining (4.25) with (4.27) yields $2n\lambda+\Delta u=4n$. Now,
contracting (3.27) over $X$ gives
$\displaystyle\Delta u=-\frac{4n^{2}}{2n-1},$
where we applied $\kappa=-2$ and $r=2n(\kappa-2n)$. Combining the above
equation with $2n\lambda+\Delta u=4n$ yields
$\lambda=\frac{6n-2}{2n-1}>0$, which means that the gradient almost Riemann
soliton is expanding. Finally, using $2n\lambda+\Delta u=4n$ in
(4.24) gives $\xi(u)=0$, and thus from (4.19) we have $h^{\prime}Du=-Du$.
According to Theorem 4.2 of [3], the factor $\mathbb{R}^{n}$ in the product
space $\mathbb{H}^{n+1}(-4)\times\mathbb{R}^{n}$ is the integral submanifold
of the distribution $[-1]^{\prime}$. Therefore, $h^{\prime}Du=-Du$ implies
that the gradient of the potential function $Du$ is tangential to the
Euclidean factor $\mathbb{R}^{n}$. ∎
###### Remark 4.4.
Theorem 3.1 of Wang [20] is a direct corollary of the above Theorem 4.3.
Before closing this section, we present an example of a 3-dimensional
$(\kappa,\mu)^{\prime}$-almost Kenmotsu manifold admitting a Riemann soliton.
###### Example.
We consider a 3-dimensional manifold
$\displaystyle M=\\{(x,y,z)\in\mathbb{R}^{3},z\neq 0\\},$
and the linearly independent vector fields
$\displaystyle e_{1}=-\frac{\partial}{\partial x}+2y\frac{\partial}{\partial
y}-\frac{\partial}{\partial z},\quad e_{2}=\frac{\partial}{\partial y},\quad
e_{3}=\frac{\partial}{\partial z}.$
One can easily check that
$\displaystyle[e_{1},e_{2}]=-2e_{2},\quad[e_{2},e_{3}]=0,\quad[e_{1},e_{3}]=0.$
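For instance, the first bracket can be checked directly from the coordinate
expressions of $e_{1}$ and $e_{2}$: for any smooth function $f$,
$\displaystyle[e_{1},e_{2}]f=e_{1}(e_{2}f)-e_{2}(e_{1}f)=\left(-f_{yx}+2yf_{yy}-f_{yz}\right)-\left(-f_{xy}+2f_{y}+2yf_{yy}-f_{zy}\right)=-2f_{y},$
so that $[e_{1},e_{2}]=-2e_{2}$; the remaining two brackets are verified in
the same way.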
On $M$ we define a (1,1)-tensor field $\varphi$ by $\varphi(e_{1})=0$,
$\varphi(e_{2})=e_{3}$ and $\varphi(e_{3})=-e_{2}$, and we define a Riemannian
metric $g$ such that $g(e_{i},e_{j})=\delta_{ij}$, $1\leq i,j\leq 3$. We
denote by $\xi=e_{1}$ and $\eta$ its dual 1-form with respect to the metric
$g$.
It is not hard to verify that $(\varphi,\xi,\eta,g)$ is an almost Kenmotsu
structure on $M$, which is not Kenmotsu since the operator $h^{\prime}$ does
not vanish. In fact, $h^{\prime}$ is given by
$\displaystyle h^{\prime}(e_{1})=0,\quad h^{\prime}(e_{2})=e_{2},\quad
h^{\prime}(e_{3})=-e_{3}.$
The Koszul formula gives
$\displaystyle\begin{gathered}\nabla_{e_{1}}e_{1}=0,\quad\nabla_{e_{1}}e_{2}=0,\quad\nabla_{e_{1}}e_{3}=0,\\\
\nabla_{e_{2}}e_{1}=2e_{2},\quad\nabla_{e_{2}}e_{2}=-2e_{1},\quad\nabla_{e_{2}}e_{3}=0,\\\
\nabla_{e_{3}}e_{1}=0,\quad\nabla_{e_{3}}e_{2}=0,\quad\nabla_{e_{3}}e_{3}=0,\end{gathered}$
(4.31)
where $\nabla$ denotes the Levi-Civita connection of the Riemannian metric
$g$. By a straightforward calculation we have
$\displaystyle\begin{gathered}R(e_{1},e_{2})e_{1}=4e_{2},\quad
R(e_{1},e_{2})e_{2}=-4e_{1},\quad R(e_{1},e_{2})e_{3}=0,\\\
R(e_{1},e_{3})e_{1}=0,\quad R(e_{1},e_{3})e_{2}=0,\quad
R(e_{1},e_{3})e_{3}=0,\\\ R(e_{2},e_{3})e_{1}=0,\quad
R(e_{2},e_{3})e_{2}=0,\quad R(e_{2},e_{3})e_{3}=0.\end{gathered}$ (4.35)
With the help of the expressions of the curvature tensor, we conclude that the
characteristic vector field $\xi$ belongs to the
$(\kappa,\mu)^{\prime}$-nullity distribution with $\kappa=-2$ and $\mu=-2$.
Let us consider a vector field
$\displaystyle V=e^{-2x}\frac{\partial}{\partial
y}+4(x-z)\frac{\partial}{\partial z}.$ (4.36)
One can easily check that $divV=-4$, a constant. As a result of (4.31), one
can get
$\displaystyle\begin{gathered}(\pounds_{V}g)(e_{1},e_{1})=0,\quad(\pounds_{V}g)(e_{2},e_{2})=0,\quad(\pounds_{V}g)(e_{3},e_{3})=-8,\\\
(\pounds_{V}g)(e_{1},e_{2})=0,\quad(\pounds_{V}g)(e_{1},e_{3})=0,\quad(\pounds_{V}g)(e_{2},e_{3})=0\end{gathered}$
(4.39)
In view of (4.39) and (4.35), one can easily verify that
$\displaystyle
2R(e_{i},e_{j},e_{k},e_{l})+4(g\mathbin{\bigcirc\mspace{-15.0mu}\wedge\mspace{3.0mu}}g)(e_{i},e_{j},e_{k},e_{l})+(g\mathbin{\bigcirc\mspace{-15.0mu}\wedge\mspace{3.0mu}}\pounds_{V}g)(e_{i},e_{j},e_{k},e_{l})=0,$
for $1\leq i,j,k,l\leq 3$. Thus $g$ is a Riemann soliton with the soliton
vector field $V$ as given in (4.36) and $\lambda=4$. According to Dileo and
Pastore [3], the $(-2,-2)^{\prime}$-almost Kenmotsu manifold $M$ is
locally isometric to the product $\mathbb{H}^{2}(-4)\times\mathbb{R}$. This
verifies Theorem 4.2.
## References
* [1] D.E. Blair, Riemannian geometry of contact and symplectic manifolds. In: Progress in Mathematics, 203. Birkhäuser, Boston (2010).
* [2] G. Dileo, A. M. Pastore, Almost Kenmotsu manifolds and local symmetry, Bull. Belg. Math. Soc. Simon Stevin. 14(2) (2007), 343-354.
* [3] G. Dileo, A. M. Pastore, Almost Kenmotsu manifolds and nullity distributions, J. Geom. 93(1-2) (2009), 46-61.
* [4] M.N. Devaraja, H.A. Kumara, V. Venkatesha, Riemann soliton within the framework of contact geometry, Quaestiones Mathematicae, (2020) DOI: 10.2989/16073606.2020.1732495
* [5] A. Ghosh, Kenmotsu 3-metric as a Ricci soliton, Chaos Solitons & Fractals, 44 (2011), 647-650.
* [6] A. Ghosh, An $\eta$-Einstein Kenmotsu metric as a Ricci soliton, Publ. Math. Debrecen, 82 (2013), 591-598.
* [7] A. Ghosh, Ricci soliton and Ricci almost soliton within the framework of Kenmotsu manifold, Carpathian Math. Publ. 11(1) (2019), 59-69.
* [8] D. Janssens, L. Vanhecke, Almost contact structures and curvature tensors, Kodai Math. J. 4(1) (1981), 1-27.
* [9] R.S. Hamilton, The Ricci flow on surfaces. Mathematics and general relativity. Contemp. Math., Amer. Math. Soc. 71 (1988), 237-262.
* [10] I.E. Hirică, C. Udrişte, Ricci and Riemann solitons, Balkan J. Geom. Appl. 21(2) (2016), 35-44.
* [11] I.E. Hirică, C. Udrişte, Basic evolution PDEs in Riemannian geometry, Balkan J. Geom. Appl. 17(1) (2012), 30-40.
* [12] K. Kenmotsu, A class of almost contact Riemannian manifolds, Tôhoku Math. J. 24(1) (1972), 93-103.
* [13] D.G. Prakasha, P. Veeresha, Venkatesha, The Fischer–Marsden conjecture on non-Kenmotsu $(\kappa,\mu)^{\prime}$-almost Kenmotsu manifolds, J. Geom. 110 (1) (2019), https://doi.org/10.1007/s00022-018-0457-8.
* [14] S.E. Stepanov, I.I. Tsyganok, The theory of infinitesimal harmonic transformations and its applications to the global geometry of Riemann solitons, Balk. J. Geom. Appl. 24 (2019), 113–121.
* [15] Y. Tashiro, Complete Riemannian manifolds and some vector fields, Trans. Amer. Math. Soc. 117 (1965), 251-275.
* [16] C. Udrişte, Riemann flow and Riemann wave, Ann. Univ. Vest, Timisoara. Ser. Mat.-Inf. 48(1-2) (2010), 265-274.
* [17] C. Udrişte, Riemann flow and Riemann wave via bialternate product Riemannian metric. preprint, arXiv.org/math.DG/1112.4279v4 (2012).
* [18] Venkatesha, D. M. Naik, H. A. Kumara, $*$-Ricci soliton and gradient almost $*$-Ricci soliton on Kenmotsu manifolds, Mathematica Slovaca (accepted).
* [19] V. Venkatesha, H. A. Kumara, Gradient $\rho$-Einstein soliton on almost Kenmotsu manifolds, Ann. Univ. Ferrara. 65(2) (2019), 375-388.
* [20] Y. Wang, Gradient almost Ricci soliton on two classes of almost Kenmotsu manifolds, J. Korean Math. Soc. 53(5) (2016), 1101-1114.
* [21] Y. Wang, X. Liu, Locally symmetric CR-integrable almost Kenmotsu manifolds, Mediterr. J. Math. 12(1) (2015), 159-171.
* [22] K. Yano, Integral formulas in Riemannian geometry, Marcel Dekker, New York, 1970.
|
2024-09-04T02:54:56.253122 | 2020-02-10T14:38:16 | 2003.00809 | {
"authors": "Indigo J. D. Orton",
"full_text_license": null,
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"provenance": "arxiv-papers-0000.json.gz:25983",
"submitter": "Indigo Orton",
"url": "https://arxiv.org/abs/2003.00809"
} | arxiv-papers | # Vision based body gesture meta features for Affective Computing
Indigo Jay Dennis Orton
(June, 2019)
###### Abstract
##### Title: Vision based body gesture meta features for Affective Computing
Early detection of psychological distress is key to effective treatment.
Automatic detection of distress, such as depression, is an active area of
research.
Current approaches utilise vocal, facial, and bodily modalities. Of these, the
bodily modality is the least investigated, partially due to the difficulty in
extracting bodily representations from videos, and partially due to the lack
of viable datasets. Existing body modality approaches use automatic
categorization of expressions to represent body language as a series of
specific expressions, much like words within natural language.
In this dissertation I present a new type of feature, within the body
modality, that represents meta information of gestures, such as speed, and use
it to predict a non-clinical depression label. This differs from existing work
by representing overall behaviour as a small set of aggregated meta features
derived from a person’s movement. In my method I extract pose estimation from
videos, detect gestures within body parts, extract meta information from
individual gestures, and finally aggregate these features to generate a small
feature vector for use in prediction tasks.
No existing, publicly available, dataset for distress analysis contains source
videos of participants or pose data extracted from the source videos. As such,
I introduce a new dataset of 65 video recordings of interviews with self-
evaluated distress, personality, and demographic labels. This dataset enables
the development of features utilising the whole body in distress detection
tasks.
I evaluate my newly introduced meta-features for predicting depression,
anxiety, perceived stress, somatic stress, five standard personality measures,
and gender. A linear regression based classifier using these features achieves
an 82.70% F1 score for predicting depression within my novel dataset.
My results suggest this feature type has value for distress prediction and,
more broadly, has potential within the affective computing domain, as they
appear to be useful aggregate representations of human behaviour.
Word count: 14,586
###### keywords:
computer-vision affective-computing
Hughes Hall Computer Science
###### Acknowledgements.
First and foremost I thank my supervisor Dr. Marwa Mahmoud, her guidance has
been invaluable and her collaboration in the development of the dataset has
been integral. Thank you to my peers who have contributed immensely to my
learning through countless rambling and broad ranging discussions. Their
breadth of interests and expertise is always engaging. Finally, thank you to
my family for their endless support and encouragement.
## Chapter 1 Introduction
Mental health is an increasingly prominent area of discussion and research in
society; it is a main contributor to overall disease burden globally [41].
Awareness of its effects and symptoms, destigmatization of its results,
consideration of its causes, and approaches to its treatment and diagnosis, in
the case of illness, are all rapidly evolving. According to the UK Mental
Health Foundation [41], depression is the “predominant mental health problem
worldwide”. Within the UK in 2014, 19.7% of people over the age of 16 showed
symptoms of depression or anxiety. Early detection is paramount to long term
health: 10% of children aged 5–16 years have a clinically diagnosable mental
health issue, and of the children who have experienced mental health problems,
70% do not have their issues detected or addressed early enough [41].
#### Automatic Distress Detection - Why and How
Automatic distress detection enables large scale early screening. Early
screening enables mitigation of distress at an earlier stage than it might
otherwise be identified, and also enables prioritization of services. For
example, Crisis Text Line [12] (CTL) provides a free mental health counseling
service via text messages. The communication medium means consumers can
message without a counselor being exclusively available. To better support
those consumers most in need (e.g. those most at risk of suicide), CTL
analyses text messages in real time to triage consumers for counselor
attention [15].
While the CTL example is a specific use case with a single modality, text, the
broader area of distress detection uses many modalities including facial, eye,
head, vocal, bodily, and speech. Existing work has investigated usage of
psychology coding systems such as FACS, levels of activity in eye and head
movement [1, 55, 16], orientation of eye gaze and head [46], fundamental
frequencies and biomarkers of vocal recordings [43, 2], automatic
categorization of body expressions [36, 34, 51], and natural language
artifacts from speech [13]. Moreover, many methods incorporate multiple
modalities and achieve significantly better results.
One of the difficulties in this area is the lack of available and relevant
datasets. Distress focused datasets are not readily shared given the sensitive
nature of the subject. Those that are shared rarely include source data, but
rather provide processed outputs, again due to the sensitive of the subject.
Finally, each modality has different source data requirements and datasets are
generally designed with a specific method in mind. All of this creates a
barrier to the development of novel features and thus research within this
area is concentrated on the modalities available within the few public
datasets.
#### Body Modality - A Research Gap
The body modality includes the whole body from the neck down; sometimes the
head is also included. It is the least researched of the modalities. This is,
partially, due to the dataset barrier. No public distress dataset that I am
aware of provides body modality data, or the source video recordings to
extract such data. The majority of relevant body modality based automatic
distress detection work is based on private clinical datasets.
##### Existing Approaches
Within these body modality methods there are two primary approaches: body
expression categorization and hand-crafted distress behaviour descriptors. The
first approach uses unsupervised learning to cluster spatio-temporal
descriptors of movement to define expression codebooks and then uses these
codebooks to predict depression severity [36, 34, 51]. The second approach
defines hand-crafted descriptors of behaviours that psychology literature has
shown to correlate with distress [23], such as self-adaptors and fidgeting
[46, 40], and models the intensity of such behaviours to indicate distress severity.
##### Thesis Contributions
My core research question is: is meta information from gestures predictive of
psychological distress?
In this dissertation I present novel generic body gesture meta features for
automatic distress detection. These features are based on the movement,
frequency, and duration of body gestures. Per-gesture these features are cheap
to compute and relatively simplistic, aggregated across all gestures they aim
to represent general behavioural characteristics of a subject.
I evaluate the use of these features for detecting depression (non-clinical)
and find that they provide useful predictive information within linear models.
I then evaluate the generalisability of these features by applying them to
classification tasks for other distress and personality labels.
As I noted earlier, a core barrier to the development of new features within
the body modality is the lack of available distress datasets that expose the
required source data. As such I introduce a novel audio-visual dataset
gathered for this dissertation in collaboration with Dr. Marwa Mahmoud. The
dataset contains audio-visual recordings of semi-structured interviews and
labels based on established psychology self-evaluation questionnaires for
distress and personality measures.
### Dissertation structure
Chapter 2 contains a literature review covering related distress detection
research and relevant technology for working with the body modality
specifically. Chapter 3 introduces a novel dataset for depression detection. I
present my method in Chapter 4. I evaluate my features’ validity in Chapter 5.
Finally, I conclude and outline potential future work in Chapter 6.
## Chapter 2 Literature Review
This review is structured in two parts: automatic depression detection and
body modality technology. In the first component I review existing approaches
to automatic depression detection and related fields, and the variety of
modalities used by these approaches. This defines the research space this
dissertation occupies. The second component outlines the current methods for
working with the body modality and the technology involved in these methods,
such as pose estimation. This provides the basis for the core data I use
within my method.
### 2.1 Automatic Detection of Distress
Distress is expressed through all modalities. Many approaches have been
developed to automatically detect distress using behavioural cues; these
include both mono-modal and multi-modal approaches. For example, face analysis
methods are among the most common, and powerful, techniques enabled by the
development of tools for extracting accurate positional information, such as
facial landmarks, and tools for automatic interpretation based on existing
psychology approaches such as FACS [21].
I review uses of the primary modalities, cross-modal considerations, multi-
modal approaches, and the expanding use of deep learning within automatic
distress detection. The face, head, and body modalities are the most relevant,
though I briefly provide examples of text and vocal modal usage.
#### Text Modality
Dang et al. [13] use linguistic attributes, auxiliary speech behaviour, and
word affect features to predict depression severity and emotional labels
within the DAIC dataset [26]. Linguistic attribute features include total
number of words, unique words, pronouns, lexical proficiency, among others.
Auxiliary speech behaviour cover parallel actions, such as laughing or
sighing, and meta information such as word repeats and average phrase length.
Word features are determined by a collection of annotated corpora that assign
$n$-grams categorizations or ratings relating to their affective semantic. For
example, assigning an emotion type (anger, disgust, joy, etc) to a word or
rating words from 0 to 10 on affective attributes such as arousal, valence,
dominance, and pleasure. These three feature types are used to predict
depression measures.
#### Audio Modality
The audio modality can be a strong predictive source as non-verbal features of
speech can be predictive of distress irrespective of the content of a person’s
speech [43].
Features from audio data commonly include prosody, jitter, intensity,
loudness, fundamental frequency, energy, Harmonic-to-Noise-Ratio (HNR), among
others. Ozdas et al. [43] present a method for analysing fluctuations in the
fundamental frequency of a person’s voice to assess their risk of suicide.
Alghowinem et al. [2] explore the use of a broad selection of vocal features,
extracted using the “openSMILE” toolkit [22] to detect depression. Dibeklioglu
et al. [16] use vocal prosody to detect depression.
In their investigation of psychomotor retardation caused by depressive states,
Syed et al. [55] use low-level descriptors to model turbulence in subject
speech patterns. By profiling the turbulence of depressed and non-depressed
participants with a depression dataset they develop a model for predicting
depression severity based on the level of turbulence.
#### Facial Modality
Joshi et al. [36] present a “bag of facial dynamics” depression detection
method based on the same expression clustering as their “bag of body dynamics”
method described in more depth below. For this facial method the space-time
interest points (STIPs) are generated for face aligned versions of the source
videos.
While Joshi et al. use a categorization approach, Dibeklioglu et al. [16] use
generic features representing facial movement dynamics for depression
detection. This method involves generating statistical derivations from
movement features such as velocity, acceleration, and facial displacement over
a time period and then modeling their effect.
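As an illustration of this style of feature, the following is a minimal sketch
of such statistical movement functionals, assuming landmark trajectories are
available as a NumPy array; the exact statistics and normalisations used by
Dibeklioglu et al. may differ.

```python
import numpy as np

def movement_dynamics_features(landmarks, fps):
    """Statistical functionals of facial landmark movement over a video.

    landmarks: array of shape (n_frames, n_landmarks, 2) holding the
    (x, y) position of each landmark per frame.
    """
    # Frame-to-frame displacement of each landmark, scaled to units/second.
    velocity = np.linalg.norm(np.diff(landmarks, axis=0), axis=2) * fps
    # Change of velocity over time (acceleration).
    acceleration = np.diff(velocity, axis=0) * fps
    # Aggregate each signal over all frames and landmarks.
    feats = []
    for signal in (velocity, acceleration):
        feats.extend([signal.mean(), signal.std(),
                      np.median(signal), signal.max()])
    return np.array(feats)
```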
Whilst Dibeklioglu et al. take a generic approach to facial movement, Syed et
al. [55] attempt to model behaviour discussed in psychology literature, using
features representing psychomotor retardation to predict depression severity.
Psychomotor retardation has been shown to be linked to depression [50]. In
particular, Syed et al. generate features to capture craniofacial movements
that represent psychomotor retardation, and thus indicate depression. To
capture the target movements they design features that represent muscular
tightening, a depressed subject is expected to have impaired muscle movements.
They model three types of movement: head movement, mouth movement, and eyelid
movement. These movements are represented by temporal deltas to define the
amount of movement in the region. From these localised movement deltas the
authors aim to represent specific actions such as blinking or contorting of
the mouth. Relatively simplistic features derived from these actions, such as
blink rate, can be indicative of depression [3, 19]. The nature of human
behaviour is that these kinds of simplistic features can contribute useful
information to distress detection models. Moreover, modeling specific facial
actions has been examined as well. For example, Scherer et al. [46] use smile
features such as intensity and duration, along with other modalities, to
detect depression.
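To make this concrete, a blink-rate estimate of the kind mentioned above could
be derived from an eyelid aperture signal as in the sketch below; the
threshold is an arbitrary placeholder, not a value from the cited papers.

```python
import numpy as np

def blink_rate(eyelid_aperture, fps, threshold=0.5):
    """Blinks per minute from a per-frame eyelid aperture signal,
    e.g. the vertical eyelid distance normalised by eye width.
    The 0.5 threshold is an arbitrary placeholder."""
    closed = np.asarray(eyelid_aperture) < threshold
    # A blink onset is a frame where the eye transitions open -> closed.
    onsets = np.flatnonzero(closed[1:] & ~closed[:-1])
    minutes = len(closed) / fps / 60.0
    return len(onsets) / minutes if minutes else 0.0
```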
Yang et al. [58] present a novel facial descriptor, a “Histogram of
Displacement Range (HDR)”, which describes the amount of movement of facial
landmarks. The histogram counts the number of occurrences of a displacement
within a certain range of movement. Where Syed et al. represented the amount
of movement of certain facial features to measure psychomotor retardation,
Yang et al. represent the number of times the face is distorted, so to speak,
by landmarks moving a certain amount.
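A sketch of a displacement-range histogram in this spirit is given below; the
bin edges are illustrative placeholders rather than the ones used by Yang et
al.

```python
import numpy as np

def histogram_of_displacement_range(landmarks,
                                    bins=(0, 1, 2, 4, 8, 16, np.inf)):
    """Count how often frame-to-frame landmark displacements fall in
    each range of movement, normalised by the total count.

    landmarks: (n_frames, n_landmarks, 2) array of positions.
    """
    displacement = np.linalg.norm(np.diff(landmarks, axis=0), axis=2)
    hist, _ = np.histogram(displacement, bins=bins)
    return hist / max(hist.sum(), 1)  # comparable across video lengths
```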
###### Eye Sub-Modality
While Syed et al. [55] explored the use of eye lid features, eye gaze features
have also been shown to be effective in predicting distress. This modality has
become viable as eye tracking technology has progressed sufficiently to enable
accurate processing of eye features.
Alghowinem et al. [1] use eye gaze/activity to perform binary classification
of depression in cross-cultural datasets. They track iris and eyelid
movements to extract features such as blink rate, duration of closed eyes, and
statistical “functionals” (i.e. simple derivations) of the amount of activity.
However, activity is not the only indicator, Scherer et al. [46] use average
eye gaze vertical orientation (among other features), in the span $[-60,60]$
degrees.
#### Head Modality
Joshi et al. [36] present a “histogram of head movements” depression detection
method that models movement of a person’s head over time. They use three
facial landmarks, the corner of each eye and the tip of the nose, to compute
the orientation of the subject’s head. The histogram uses orientation bins of
width 10 within the range $[-90,90]$ degrees for windows of time within a
video. These windowed histograms are then averaged over the full length of the
video. The resulting average histogram is a descriptor of the amount of
movement within the video by representing the variety of angles the head
orients to. This method achieves comparable performance to their “bag of
facial dynamics” method.
A number of the methods using eye activity features also incorporate head
activity in their models. Alghowinem et al. [1] model head activity similarly
to their modeling of eye activity. As with eye activity, they extract
statistical derivations of movement and angular shift. They also include the
duration of the head at different orientations, the rate of change of
orientation, and the total number of orientation changes. Scherer et al. [46]
use head vertical orientation, similar to their eye gaze orientation feature,
as a feature for predicting depression. They use the average pitch of the head
within a 3D head orientation model. Dibeklioglu et al. [16] also model head
movement dynamics in a similar fashion to their facial movement dynamics
features. Similar to Alghowinem et al. they extract statistical derivations of
movement velocity, amplitude, and acceleration as head movement features.
The head movement statistical derivation features presented by Alghowinem et
al. and Dibeklioglu et al. are similar to the features I introduce in this
dissertation, in that they represent meta movement information of the
modality, rather than categorizing movement. Though, of course, Alghowinem et
al. also incorporate categorized movement via their orientation features.
#### Body Modality
The body modality is the least researched of the modalities reviewed. This is
due to a number of factors including the difficulty of dataset creation and
the available technology for extracting raw body data. Contrast this with the
relative ease of working with the other modalities and it is no surprise they
received more attention. However, much of the relevant body modality based
automatic distress detection research appeared in the early 2010s as some
private clinical datasets were gathered and the parallel technological
ecosystem expanded to support generation of body modality features.
##### The Case for Further Investigation
De Gelder [14] presents the case for further research of bodily expressions
within affective neuroscience. Though a parallel field, the core argument is
much the same for affective computing’s investigation of bodily expressions.
Specifically, at the time of writing (2009) de Gelder asserts that 95% of
“social and affective neuroscience” focuses on faces and that the remaining 5%
is mostly split between vocal, musical, and environmental modalities with a
very few number of papers investigating the body modality. This was similar to
the state of automatic distress detection in the early 2010s, though the vocal
modality has a more prominent position in the literature and is more evenly
balanced with the facial modality.
###### Affectation control and robust automatic detection
Non-verbal features provide discriminative information regardless of a
person’s conscious communication, this is particularly important for automatic
distress detection. Different modalities can be consciously controlled to
varying degrees. For example, facial expressions are more easily controlled
than bodily expressions [14]. By including more modalities, and
representations of those modalities, automatic distress detection could become
more robust to conscious modification of affectations.
Further to robustness, one of the advantages of the face and body modalities
is their ability to detect micro-expressions. Micro-expressions are
instinctual reactions to some stimulus that can be predictive of emotional and
distress state [20, 28]. They are significantly harder to control than general
expressions.
##### Expression Categorization
Much of the body modality research has approached the problem as a transfer of
the methods from the facial modality by modeling body movements as expressions
in much the same way facial expressions are [33, 36, 34]. Differences in these
methods have been centred around: what tracklets form the basis of the body
data [36], the use of deep learning vs manual descriptor definition [46], and
process for generating categories of expressions [51].
Joshi et al. [36] demonstrate the discriminative power of bodily expressions
for predicting clinically based depression measures. They use STIPs from
recordings of a participant’s upper body and generate a “Bag of Body Dynamics
(BoB)” based on codebooks of expression representations. STIPs are generated
for a video, then Histograms of Gradient (HoG) and Optical Flow (HoF) are
computed spatio-temporally around the STIPs, these histograms are then
clustered within a sample, the cluster centres form a representation of the
movements occurring in the video. The cluster centres from all videos in a
training set are clustered again to generate the codebook of expressions. A
BoB feature vector is generated for each sample by counting the number of
cluster centres within the sample that fit within each codebook expression.
Finally, these BoB feature vectors are used by an SVM to detect depression.
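A condensed sketch of this two-stage clustering is shown below, using
scikit-learn's k-means in place of whichever clustering variant the authors
used; STIP detection and HoG/HoF extraction are assumed to happen elsewhere,
and the cluster counts are placeholders.

```python
import numpy as np
from sklearn.cluster import KMeans

def learn_codebook(per_video_descriptors, k_video=20, k_codebook=50):
    """Two-stage clustering: summarise each training video by k-means
    centres of its HoG/HoF descriptors, then cluster all centres into
    a shared codebook of body 'expressions'."""
    centres = [KMeans(n_clusters=k_video, n_init=10).fit(d).cluster_centers_
               for d in per_video_descriptors]
    return KMeans(n_clusters=k_codebook, n_init=10).fit(np.vstack(centres))

def bob_vector(video_centres, codebook):
    """Bag of Body Dynamics: count how many of a sample's cluster
    centres fall into each codebook expression, then normalise."""
    words = codebook.predict(video_centres)
    counts = np.bincount(words, minlength=codebook.n_clusters)
    return counts / counts.sum()
```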
Joshi et al. [34] extend on this approach by combining a “holistic body
analysis”, similar to the BoB method, this method uses STIPs for whole body
motion analysis and adds relative body part features. These features represent
the movement of the head and limbs relative to the participant’s trunk,
represented as polar histograms.
Applying the same essential method as Joshi et al., Song et al. [51] present a
method for learning a codebook of facial and bodily micro-expressions. They
identify micro-expressions by extracting STIPs over very short time intervals
(e.g. a few hundred milliseconds), then, as Joshi et al. do, they compute
local spatio-temporal features around the STIPs and learn a codebook of
expressions based on these local features. Finally, for each sample they
generate a Bag-of-Words style feature vector based on the codified micro-
expressions present in the sample.
##### Distress Behaviour Descriptors
Psychology literature describes specific behaviours that are correlated with
psychological distress and disorders. For example, Fairbanks et al. [23] find
self-adaptors and fidgeting behaviour to be correlated to psychological
disorders. Based on this work, Scherer et al. [46] evaluate the use of these
behaviours for automatic distress detection. Specifically, they manually
annotate their dataset for hand self-adaptor behaviours and fidgets, including
hand tapping, stroking, grooming, playing with hands or hair, and similar
behaviours. To identify whether regions are relevant to these behaviours they
also annotate these behaviours with categories such as head, hands, arms, and
torso, and then extract statistical information such as the average duration
of self-adaptors in each region. They also annotate leg fidgeting behaviour
such as leg shaking and foot tapping, again they use statistical derivations
of these behaviours as features for detection.
Whilst Scherer et al. manually annotated self-adaptors and fidgets, Mahmoud et
al. [40] present an automatic detector of fidgeting, and similar behaviours,
based on a novel rhythmic motion descriptor. They extract SURF interest point
tracklets from colour and depth data and then apply their novel rhythmic
measure to check similarity among cyclic motion across tracklets. Rhythmic
motion is then localised based on Kinect skeletal regions and classified as
one of four classes: Non-Rhythmic, Hands, Legs, and Rocking.
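Mahmoud et al.'s rhythmic measure itself is more involved, but a crude
periodicity score for a single tracklet coordinate, in a similar spirit, could
be based on autocorrelation, as in this illustrative sketch (assuming the
signal is longer than the maximum lag):

```python
import numpy as np

def rhythmicity_score(signal, min_lag=5, max_lag=60):
    """Crude periodicity measure for one tracklet coordinate: the peak
    of the normalised autocorrelation within a plausible lag range
    (in frames). Assumes len(signal) > max_lag."""
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]  # lags 0..n-1
    ac = ac / ac[0] if ac[0] else ac
    return float(ac[min_lag:max_lag].max())
```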
#### Multi-Modal Fusion
Combining modalities for prediction has proven effective when combining a
variety of modalities [58, 35, 16]. There are four primary types of fusion:
feature fusion such as feature vector concatenation, decision fusion such as
majority vote, hybrid fusion which uses both, and deep learning fusion which
merges inner representations of features within a deep learning architecture.
The deep learning fusion method differs from feature fusion as the features
are provided to separate input layers and only merged after inner layers, but
before decision layers.
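The first two fusion types can be sketched in a few lines; the classifiers
here are placeholders, and real systems would typically add feature selection
and weighting.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def feature_fusion_predict(train_feats, y, test_feats):
    """Early fusion: concatenate per-modality feature vectors and
    train a single classifier on the combined representation."""
    X_train = np.hstack(train_feats)  # each element: (n_samples, d_m)
    X_test = np.hstack(test_feats)
    clf = LogisticRegression(max_iter=1000).fit(X_train, y)
    return clf.predict(X_test)

def decision_fusion_predict(train_feats, y, test_feats):
    """Late fusion: train one classifier per modality and combine
    their predictions by majority vote (assumes 0/1 labels; ties
    are broken towards the positive class)."""
    votes = [LogisticRegression(max_iter=1000).fit(tr, y).predict(te)
             for tr, te in zip(train_feats, test_feats)]
    return (np.mean(votes, axis=0) >= 0.5).astype(int)
```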
Song et al. [51] combine micro-expressions from facial and bodily modalities
with sample-level audio features. They evaluate three methods, early fusion by
concatenating audio features to features from each visual frame, early fusion
using a CCA [30] kernel, and late fusion based on voting, where the per-frame
predictions from the visual modalities are averaged over the sample and then
combined with the audio prediction. Dibeklioglu et al. [16] fuse facial, head,
and vocal modalities using feature concatenation. They extend on this by
performing feature selection on the concatenated vector, rather than the
source vectors, using the Min-Redundancy Max-Relevance algorithm [45].
Alghowinem et al. [1] perform hybrid modality fusion, both combining feature
vectors from modalities and performing a majority vote on individual modality
classification predictions. The vote fusion is based on three classifiers, two
mono-modal classifiers and one feature fusion classifier.
Huang et al. [31] train long short-term memory (LSTM) models on facial, vocal,
and text modalities and then use a decision level fusion, via a SVR, to
predict the final regression values. This paper differs from many deep
learning approaches as it uses the decision level fusion, rather than having
the deep learning models find patterns across feature types.
###### Temporal contextualisation
Gong & Poellabauer [25] present another approach to usage of multiple
modalities where the text modality provides contextualisation for features
from the audio-visual modalities. They apply a topic modeling method for
depression detection using vocal and facial modalities, where features are
grouped based on the topic being responded to within an interview. The authors
suggest that without topic modeling the features are averaged over too large a
time period such that all temporal information is lost. By segmenting the
samples they aim to retain some of the temporal information. Arbitrary
segmentation would not necessarily be useful, thus their use of logical
segmentation based on topic.
#### Deep Learning
Much of the recent work in distress detection leverages advances in deep
learning, especially advances related to recurrent architectures, such as
LSTMs, which can model sequence data well. In distress detection most
modalities provide sequence data, audio-visual streams, natural language, or
sequence descriptors of the data (such as FACS AUs).
Chen et al. [9] present a method utilizing text, vocal, and facial modalities
for emotion recognition. They explore the use of existing vocal features such
as fundamental frequency analysis, auto-learnt features based on pre-trained
CNNs to extract vocal and facial features, and word embedding features for the
text. The auto-learnt facial features are derived from existing facial
appearance data already extracted from the raw videos (i.e. they do not have
their CNNs process raw frames to extract features). They then experiment with
SVR and LSTM models to evaluate the temporal value of the LSTM model. They
find that fused auto-learnt features from all modalities in combination with
the LSTM model provides the greatest performance.
Yang et al. [59] present an interesting multi-level fusion method that
incorporates text, vocal, and facial modalities. They design a Deep
Convolutional Neural Network (DCNN) to Deep Neural Network (DNN) regression
model that is trained, separately, for audio and video modalities. They also
trained their regression models separately for depressed and non-depressed
participants, resulting in four separate DCNN - DNN models. They use the
openSMILE toolkit for their audio features, this is common among many of the
vocal modality methods (e.g. Alghowinem et al. from above), and FACS AUs for
their visual features. They derive a temporally relevant feature vector from
the set of all AUs by calculating the change in AUs over time. Their text
modality model uses Paragraph Vectors in combination with SVMs and random
forests and performs classification rather than regression.
Finally, they fuse, using DNNs, the audio and visual model predictions per
training set, i.e. the depressed participant trained models are fused and the
non-depressed models are fused. They then use another DNN to fuse the two
fused regression predictions (i.e. depressed and non-depressed) and the
classification prediction from the text modality. While they use DNNs to fuse
decisions at multiple levels, this is a decision-level fusion method, not a
deep learning fusion method, as they fuse the regression predictions from each
model rather than the inner layer outputs.
Yang et al. [58] present a second paper which utilises the same structure of
DCNNs and DNNs, with two significant changes: firstly, the video features are
changed to a new global descriptor they present, the “Histogram of
Displacement Range (HDR)” which describes the amount of movement of facial
landmarks, and secondly, the text modality now uses the same DCNN - DNN
architecture to perform regression based on text data. Having changed the
output of the text model the final fusion is a regression fusion using the
same method as the audio visual models were fused in the first paper.
Finally, no method that I am aware of uses deep learning end-to-end for
automatic distress detection such that features are learnt from raw data for
predicting distress. All methods apply deep learning on top of audio-visual
descriptors and hand-crafted features. One of the core difficulties is the
relative sparsity of existing data and thus the restricted ability to learn
interesting features. Therefore, continued development of features and
approaches to behavioural representation is valuable and able to contribute to
methods that use deep learning.
### 2.2 Body Gestures
Body gestures present a number of challenges including deriving features
directly from pixels, extracting positional data, detecting gestures within
the data, and designing representative features based on gestures. This
section focuses on previous work on body gesture and pose representation,
which forms the basis of my feature definition and generation.
There are three primary approaches to body gesture representation within the
affective computing literature: traditional computer vision feature detection
algorithms such as STIPs [36, 34], pose estimation [8] from standard video
recordings, and use of specialised 3D capture equipment such as Kinects [40].
#### Recording Type - 2D vs 3D
The first two approaches use standard 2D video recordings to extract the base
data for calculating features. The third approach uses 3D video recordings to
enable more detailed analysis of body movement via depth. The most common 3D
capture equipment used within related work is Kinects. Kinects have an added
benefit that they provide skeletal key points (i.e. joint locations) within a
recording, thus enabling more accurate representation of body data.
#### Extracting Body Representations
Given a video recording, 2D or 3D, with or without skeletal information, the
next phase is feature extraction. Feature extraction approaches fall into two
primary categories, which apply to both 2D and 3D videos: generic video
feature representations and body modality specific representations.
##### Generic Interest Points
The first approach does not target specific body areas or movement, instead it
assumes that the only subject within the video is the subject the model is
concerned with, i.e. the participant, and extracts generic video features from
the recording to represent the body and gestures. Examples of these generic
features are Space-Time Interest Points (STIPs), SURF, Histogram of Gradients
(HOGs) and Optical Flow (HOFs), among others. These features can be extracted
from colour and depth recordings such that they are applicable to both 2D and
3D recordings.
While some approaches apply these generic features (or rather, derivations of
these features) directly to prediction tasks [36, 51] (examples are discussed
in Section 2.1), others aim to incorporate heuristics and information based on
body parts. For example, interest points can be segmented based on location
within the frame to heuristically distinguish body parts (e.g. head and arms)
[34].
Another example is Mahmoud et al. [40] who use SURF keypoints from colour and
depth images across a video to define tracklets for their rhythmic motion
measure. Their dataset is based on Kinect captured videos to provide depth,
this also provides skeletal regions such as feet and head. They use these
Kinect skeletal regions to localise their keypoints’ motion.
##### Body Modality Specific
The second approach extracts body modality specific interest points (e.g.
skeletal models) to calculate features from. There are two primary methods for
extracting these interest points: joint estimations from specialised capture
equipment such as Kinects and pose estimation from existing video recordings.
Such skeletal models have gained popularity in the past few years for action
recognition tasks [54, 17, 10, 48, 56, 18].
In this dissertation I use pose estimation to extract body interest points
(e.g. joints) from each frame in a two-dimensional video. In the past three to
four years there has been substantial work on these pose estimation systems
[57, 7, 49, 60, 39]. The current state-of-the-art is OpenPose by Cao et al.
[8]. OpenPose uses skeletal hierarchy and part affinity fields to estimate
pose interest points (i.e. joints) relative to each other and determine both
part and orientation based on the direction of the proposed limb. The state-
of-the-art model, at the time of writing, generates 25 interest points
identifying the location of all major joints and more detailed information
regarding head and feet. OpenPose can also be used to extract facial landmarks
and detailed hand models.
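For reference, a minimal loader for OpenPose's per-frame JSON keypoint output
might look as follows; this assumes the BODY_25 model and OpenPose's standard
output format, where each keypoint is stored as an (x, y, confidence) triple.

```python
import json
import numpy as np

def load_openpose_frame(path, n_points=25):
    """Read one OpenPose per-frame JSON file into an (n_points, 3)
    array of (x, y, confidence); NaNs if no person was detected."""
    with open(path) as f:
        data = json.load(f)
    if not data.get("people"):
        return np.full((n_points, 3), np.nan)
    # Keypoints of the first detected person, stored as a flat list
    # x0, y0, c0, x1, y1, c1, ... (25 triples for the BODY_25 model).
    flat = data["people"][0]["pose_keypoints_2d"]
    return np.asarray(flat, dtype=float).reshape(n_points, 3)
```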
### 2.3 Summary
I have reviewed methods for automatic depression detection and the variety of
modalities and models used within the field. I have also discussed, in broad
terms, methods used for working with body data within related fields, from
variation in capture equipment to difference in core interest points.
The first component of the review outlined the gap within the existing use of
the body modality that my proposed methodology addresses. While the second
component outlined methods for working with the body modality, and
specifically technology that I use to implement my methodology.
## Chapter 3 Dataset
To develop whole body features I need a dataset containing video recordings
and distress labels. However, due to the sensitive nature of this data no
existing dataset makes both the source video and distress labels publicly
available. Though some datasets make video derived data available, such as
facial landmarks, none include derived data for the participant’s whole body.
Given this I introduce a new dataset containing audio-visual recordings of
interviews and labeled with distress and personality measures.
This is not a clinical dataset, the interviews are performed by a computer
science researcher and the labels are based on established psychology self-
evaluation questionnaires.
#### Collaborators
Dr. Marwa Mahmoud from the Department of Computer Science at the University of
Cambridge was the interviewer. Dr. Gabriela Pavarini from the Department of
Psychiatry at the University of Oxford provided advice regarding the interview
procedure for collection.
### 3.1 Existing Related Datasets
There are a number of relevant existing datasets, however none satisfy the
requirements of my research. Namely that they: are available, include
psychological distress labels, and include source videos.
I provide an overview of four existing datasets, two clinical datasets, a
distress focused dataset using facial, auditory, and speech modalities, and
one affective computing body gesture dataset. Each of these datasets, and
their related work, provide useful insight for my data collection and model
development research.
#### 3.1.1 Clinical
Joshi et al. describe a clinical dataset [33] for depression analysis that
contains recordings of participants’ upper bodies and faces during a
structured interview. It includes clinical assessment labels for each
participant, including depression severity. This dataset was collected at the
Black Dog Institute [6], a clinical research institute in Australia.
Joshi et al. [34] describe a clinical dataset collected at the University of
Pittsburgh containing full body recordings of participants in interviews, it
uses Major Depressive Disorder (MDD) [4] diagnosis and Hamilton Rating Scale
of Depression (HRSD) [29] as labels. The authors validate the use of full body
features for depression prediction within this dataset.
Given the clinical nature of these datasets they are not publicly available.
#### 3.1.2 Audio-Visual Distress
Gratch et al. introduce the Distress Analysis Interview Corpus (DAIC) dataset
[26] containing audio-visual recordings of interviews with participants with
varying levels of distress. Participants are assessed for depression, post
traumatic stress disorder (PTSD), and anxiety based on self-evaluation
questionnaires. The dataset provides audio data (and derivations such as
transcripts) and extracted facial landmark data, though they do not provide
source video data nor pose data. Source video recordings are rarely shared in
distress focused datasets due to the sensitivity of the data.
The dataset contains four participant interview structures: face-to-face
interviews with a human, teleconference interviews with a human interviewer,
“Wizard-of-Oz” interviews with a virtual agent controlled by an unseen
interviewer, and automated interviews with a fully autonomous virtual agent.
This dataset informs my dataset collection via the labels it contains and
their interview structures. The DAIC method has participants complete a set of
questionnaires, then participants are interviewed and recorded, and finally
the participant completes another set of questionnaires. Within the interview
the interviewer asks a few neutral questions to build rapport, then asks
questions related to the participant’s distress symptoms, and finally asks
some neutral questions so the participant can relax before the interview
finishes.
#### 3.1.3 Body Gestures
Palazzi et al. introduce an audio-visual dataset of dynamic conversations
between different ethnicities annotated with prejudice scores. Videos in this
dataset contain two participants interacting in an empty confined space.
Participants move around the space throughout the session, enabling analysis
of body language affected during conversation.
This dataset highlights, and focuses on, the effect attributes of a
counterpart, such as race, gender, and age, have on a person’s behaviour. In
this dataset counterparts are explicitly non-uniform. Whereas, in my dataset
the interviewer (i.e. counterpart for a participant) is the same for all
interviewees. This is useful in controlling for the counterpart variable in
human behaviour, supporting the isolation of correlations between distress and
behaviour.
Datasets such as this one are not useful for my current research question,
however, as they provide source videos, they present opportunities for future
work investigating my features’ generalisability to other domains.
### 3.2 Method
#### 3.2.1 Design
This dataset is designed to enable investigation of the body modality for use
in automatic detection of distress for early screening. This is a non-clinical
dataset.
Its source data is audio-visual recordings of conversational interviews. The
recordings capture the whole body of participants to enable features based on
a whole body modality. These interviews involve questions related to distress
to elicit emotive responses from participants, however the responses to these
questions are irrelevant to the core data. The interviews use a conversational
style to best enable naturalistic gestures from participants.
Labels are scored results from established self-evaluation questionnaires for
assessing distress and personality traits, as well as demographic labels such
as gender. The distress questionnaires are: the PHQ-8 [38, 37] for depression,
GAD-7 [52] for anxiety, SSS-8 [24] for somatic symptoms, and the PSS [11] for
perceived stress. Personality traits are measured using the Big Five Inventory
[32].
#### 3.2.2 Participants
##### Recruitment
I advertised for participants via University of Cambridge email lists, student
social media groups, classified sections of websites, such as Gumtree [27],
specific to the Cambridge area, and paper fliers posted around the University
of Cambridge. Participants were directed to a website (helpresearch.org)
describing the study along with a participant registration form.
The registration form captured demographic data and two self-evaluation
psychological distress questionnaires. Demographic data captured includes
gender, age, ethnicity, and nationality. Gender and age were required while
ethnicity and nationality were not. The two psychological distress
questionnaires were the PHQ-8 [38, 37] and GAD-7 [52].
##### Selection
In total 106 people registered to participate and 35 were invited to the face
to face session. The participant population is balanced with regards to
distress levels and gender (non-binary/other was given as an option in the
registration form, and a number of people registered with this option;
however, none of them met the distress level criteria and were thus not
selected for an interview). Distress level balancing aims to include
participants at the extents of the distress spectrum such that there is a
distinct difference between the high and low distress populations. Participant
distress level is selected based on PHQ-8 and GAD-7 questionnaire responses
such that participants are balanced between high (i.e. major or severe) and
low (i.e. mild) distress. Of the invited participants, there are 18 with high
distress and 17 with low distress.
##### Compensation
Participants are compensated for their time with a £15 voucher.
#### 3.2.3 Face to Face Session
During the face to face session participants sign a research consent form
outlining the interview process, complete a battery of five established
psychology questionnaires evaluating distress levels and personality traits,
are interviewed by a researcher, and finally sign a debrief research consent
form that outlines the full purpose of the study. Participants are not aware
of the focus of the research (i.e. body modality analysis) before the
interview, such that their affectations are natural.
To achieve the conversational interview dynamic the interviewer asks general
questions regarding the participant’s life and further encourages the
participant to elaborate. For example, the interviewer asks “can you tell me
about one time in your life you were particularly happy?” and then asks follow
up questions regarding the example the participant provides. The interview
style and structure is inspired by the DAIC dataset. In developing the
interview structure and questions I also drew on training documents provided
by Peer2Peer Cambridge [44], an organisation dedicated to peer support for
mental health which trains students in general counseling.
So as to avoid influencing participants’ behaviour the interviewer remains as
neutral as possible during the interview, while still responding naturally
such that the participant is comfortable in engaging in the questions.
Furthermore, to ensure neutrality the interviewer is explicitly not aware of
the distress level of participants before the interview and has no prior
relationships with any participant.
##### Technical Faults
16 interviews are interrupted due to technical faults (the camera
disconnected). Recordings that are interrupted are treated as multiple
individual samples within the dataset (though they remain connected by their
participant ID).
### 3.3 Preliminary Analysis
The dataset contains a total of 35 interviewed participants with a total video
duration of 7 hours 50 minutes and 8 seconds. Each participant provides
responses to 5 questionnaires, including 2 responses to both the PHQ-8 and
GAD-7 questionnaires as participants completed both during registration and
the face-to-face session.
Though significantly more people registered for participation, I include only
those interviewed in this analysis.
#### 3.3.1 Validation Criteria
There are three primary criteria the dataset should satisfy with regards to
label results:
1. The psychological measures statistically match previous work and published norms (i.e. the distribution within the participant population is similar to that of the general population). Published norms are the standard values for questionnaire results as defined by the psychology literature; they aim to be representative of the general population and thus provide a benchmark against which other work (generally within psychology) can check smaller populations’ results.
2. There are no confounding correlations. For example, gender correlating highly to depression would indicate a poorly balanced dataset and would be confounding for depression analysis.
3. The labels are well balanced to enable machine learning.
As the common measure of similar distress detection research is depression, I
focus primarily on it for this validation.
General statistics regarding the questionnaire and demographic results within
the dataset are provided in Table 3.1. Covariance is presented as normalized
covariance values, also known as the correlation coefficient.
Label | Possible range | Max | Min | Mean | Median | Std. | Depression covariance
---|---|---|---|---|---|---|---
Distress | | | | | | |
Depression | 0–24 | 19 | 0 | 7.43 | 8 | 5.87 | -
Anxiety | 0–21 | 19 | 0 | 7.00 | 8 | 5.53 | 86.15%
Perceived stress | 0–40 | 30 | 1 | 18.17 | 18 | 8.03 | 84.00%
Somatic symptoms | 0–32 | 27 | 1 | 9.06 | 7 | 6.94 | 74.16%
Personality | | | | | | |
Extraversion | 0–32 | 31 | 3 | 16.37 | 17 | 6.33 | -30.49%
Agreeableness | 0–36 | 34 | 12 | 25.67 | 26 | 5.60 | -42.21%
Openness | 0–40 | 39 | 7 | 27.29 | 28 | 6.77 | 4.29%
Neuroticism | 0–32 | 31 | 1 | 16.86 | 18 | 8.60 | 80.00%
Conscientiousness | 0–36 | 36 | 10 | 21.46 | 21 | 6.87 | -46.41%
Demographic | | | | | | |
Gender | - | - | - | - | - | - | 9.47%
Age | - | 52 | 18 | 25.40 | 22 | 9.1 | -11.09%
Table 3.1: General statistics regarding questionnaire and demographic results
within the dataset. The “Depression covariance” column is most important as it
demonstrates the independence of variables with regards to depression (for
example, it shows that age and gender are not confounding of depression).
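For clarity, the normalized covariance reported throughout this chapter is the
ordinary Pearson correlation coefficient, e.g.:

```python
import numpy as np

def normalized_covariance(a, b):
    """Pearson correlation coefficient between two label vectors,
    i.e. covariance divided by the product of standard deviations."""
    # e.g. normalized_covariance(depression_scores, anxiety_scores)
    return np.corrcoef(a, b)[0, 1]
```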
##### Published Norms
A comparison of the mean values for distress and personality measures between
my dataset and the published norms is presented in Table 3.2. While there are
differences, the measures are generally in line with the published norms. The
dataset has slightly higher mean distress scores, though a substantially
higher mean perceived stress score. Depression, extraversion, and neuroticism
measures are particularly close to their published norms, while the dataset
means for agreeableness and openness are substantially greater than the
published norms (by over 10% of the technical range for those measures).
Label | Dataset mean | Norm mean | Source
---|---|---|---
Distress | | |
Depression | 7.43 | 6.63 | Ory et al. [42]
Anxiety | 7.00 | 5.57 | Spitzer et al. [52]
Perceived stress | 18.17 | 12.76 | Cohen et al. [11]
Somatic symptoms | 9.06 | 12.92 | Gierk et al. [24]
Personality | | |
Extraversion | 16.37 | 16.36 | Srivastava et al. [53]
Agreeableness | 25.67 | 18.64 | Srivastava et al. [53]
Openness | 27.29 | 19.61 | Srivastava et al. [53]
Neuroticism | 16.86 | 16.08 | Srivastava et al. [53]
Conscientiousness | 21.46 | 18.14 | Srivastava et al. [53]
Table 3.2: Comparison of the mean questionnaire values within my dataset to
the published norms. This shows that the population distribution, with regards
to these distress and personality measures, is generally in line with the
broader population.
##### Confounding Correlations
While the other distress measures (anxiety, perceived stress, and somatic
stress) are strongly correlated with depression, the personality measures have
below 50% covariance with the exception of neuroticism which has an 80%
covariance. Furthermore, the demographic measures, gender and age, are
negligibly correlated, with 9.47% and -11.09% covariance, respectively. This
suggests that the labels are not confounding of each other.
##### Label Balance
There are 17 participants below the mean depression result (7.43) and 18
participants above. The mean depression score of the group below the overall
mean is 2.18 while the score for those above is 12.39. Ideally for machine
learning the dataset’s distribution would include more participants at the
severe depression end of the spectrum, though the present distribution still
places the below group firmly in the “mild” category and the above group in
the “major depression” category.
There are 18 male and 17 female participants. As the gender covariance shows,
the split on the depression measure and the split on gender are not the same
participants (gender is balanced across the distress spectrum).
#### 3.3.2 Difference from Registration
Participants complete the PHQ-8 and GAD-7 questionnaires during registration
and during the interview process. These questionnaires are temporal; specifically, they relate to the participant's mental state in the past two weeks. Given this, some difference between registration and interview results
is expected.
With the exception of a small number of outliers, participants were generally
consistent in self-evaluation between registration and interview. PHQ-8
responses have a mean difference of 0.89 while GAD-7 responses have a mean
difference of 0.63. This supports the selection of participants based on
temporal self-evaluation questionnaire results.
#### 3.3.3 Interview Meta Statistics
There is a total of 7 hours 50 minutes and 8 seconds of participant interview
recordings, with a mean interview duration of 13 minutes and 25 seconds. The
standard deviation of interview duration is 3 minutes and 20 seconds and the
median interview duration is 13 minutes and 8 seconds. Depression score and
interview duration are not correlated, with a covariance of 6.95%.
Furthermore, interview duration is not correlated with any questionnaire
result (i.e. distress or personality measure), all absolute covariance values
are below 25%, which provides confidence in the reliability of the data.
### 3.4 Summary
I have introduced a new audio-visual dataset containing recordings of
conversational interviews between participants and a researcher, and annotated
with established psychology self-evaluation questionnaires for depression,
anxiety, somatic symptoms, perceived stress, and personality traits. This
dataset involves 35 participants and 65 recordings (due to recording
interruptions) with a total video duration of 7 hours 50 minutes and 8
seconds.
There are a number of relevant existing datasets including clinical datasets
which contain body gestures but are inaccessible beyond their home institute,
distress datasets that contain facial expressions and speech modalities but no
body gestures or source videos, and video datasets containing body gestures
but lacking distress labels. While these datasets inform my dataset design and
collection, no dataset I am aware of satisfies the criteria for research on
body gesture modalities for predicting distress.
An analysis of the questionnaire results in the dataset shows that they are aligned with published norms from the psychology literature, are not confounded by factors such as gender or age, and have a useful and balanced distribution across the distress spectrum.
## Chapter 4 Method
Figure 4.1: High level pipeline description of feature extraction process.
Having collected the dataset, I now describe my methodology for extracting
gesture meta features and applying them to automatic distress detection. To
this end I define four core stages of my methodology, from video to prediction
(as shown in Figure 4.1):
1. Pose estimation - extract per-frame pose estimation data from the video recordings.
2. Data preparation - clean and smooth the extracted pose data; pose data extracted from 2D video frames has a not-insignificant level of noise.
3. Feature extraction - detect gestures, extract per-gesture features, and aggregate features across all gestures. I define gestures as sustained movement of some body part over a minimum period of time (this is elaborated on in Section 4.3.2).
4. Classifier training - train a classifier to predict a target label (e.g. depression) based on these aggregate features. Classifier choice is explained in the evaluation, Chapter 5.
### 4.1 Pose Estimation
Figure 4.2: Example of OpenPose estimation output of the participant
interview position. Subject is not a study participant. Pose points are
indicated by the green dots.
Pose estimation extracts per-frame approximations of skeletal joint locations.
This enables more accurate gesture analysis than direct pixel based approaches
(such as STIPs per Joshi et al.’s method [36]).
I process each video to extract per-frame skeletal pose data using OpenPose [8] by Cao et al. (OpenPose is discussed in more detail in Section 2.2) as it is the current state-of-the-art in pose estimation. I use the BODY_25 pose model provided with OpenPose (models available at https://github.com/CMU-Perceptual-Computing-Lab/openpose). As the name suggests, this model generates 25 pose points corresponding to a subject's joints. OpenPose also extracts more detailed hand data, providing a position estimate for each joint in the hand. Figure 4.2 presents an example of extracted pose points.
### 4.2 Data Preparation
I perform three data preparation steps: filtering, recovery, and smoothing. Filtering reduces dataset-level sample noise, recovery fixes outlier noise (where outliers are detection failures for specific pose points, but not the whole body), and smoothing reduces detection noise.
Extracted pose estimation data has two primary forms of noise: individual
frames, or short segments of frames, where detection of a person, or part of a
person, is lost, and detection points moving around slightly even if the
person is static. Manual review of a selection of detection loss cases shows
no consistent cause (for both complete detection loss and partial loss). It
appears to be the deep learning model failing inexplicably on some frames.
#### 4.2.1 Filtering
The absence of, or low representation of, gestures is relevant information for
prediction tasks (simply put, if more activity is relevant then less activity
must be too). However, the absence of gestures can also be due to a lack of
opportunity to express gestures, such as when a sample is too short. These
short samples lead to dataset-level sample noise that can hinder predictive
models.
Samples shorter than 1 minute are excluded, as less than 1 minute of video provides little opportunity for gesture dynamics to emerge. 12 of the 65 samples within the dataset are shorter than 1 minute.
#### 4.2.2 Detection Recovery
Pose estimation within my dataset contains frames where certain pose points
(e.g. an arm or a leg) are not detected. In these cases OpenPose returns $0$
for each pose point not detected. Manual review of a number of instances shows
that the joint does not move much, if at all, during the lost frames.
However, the pose point “moving” to position $0$ causes significant noise in
feature calculation.
Therefore, I perform detection “recovery” to infer the position of the pose
point in the missing frames, thus providing a smoother pose point movement. I
recover the position by linearly interpolating the pose point’s position
between the two closest detected frames temporally surrounding the lost
frame(s).
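To make this step concrete, the following is a minimal sketch of the recovery interpolation, assuming pose tracks stored as NumPy arrays of shape (frames, 2); the function name and array layout are illustrative rather than taken from my implementation, and edge gaps (losses at the very start or end of a track) are clamped to the nearest detection, a case the description above does not prescribe.

```python
import numpy as np

def recover_pose_point(track: np.ndarray) -> np.ndarray:
    """Linearly interpolate frames where a pose point was not detected.

    track: array of shape (n_frames, 2) holding (x, y) per frame,
    with undetected frames marked as (0, 0) by OpenPose.
    """
    track = track.copy()
    detected = ~np.all(track == 0, axis=1)
    if not detected.any() or detected.all():
        return track  # nothing to recover from, or nothing to recover
    frames = np.arange(len(track))
    for dim in range(track.shape[1]):
        # Fill each lost frame from the closest detected frames on
        # either side of it (np.interp clamps at the track edges).
        track[~detected, dim] = np.interp(
            frames[~detected], frames[detected], track[detected, dim]
        )
    return track
```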
It is worth noting that this pose point detection failure is different to full
detection failure where the whole participant is not detected within a frame.
I do not attempt to recover such full failure cases as the failure is more
serious and its cause is ambiguous. I do not want to introduce stray data by
“recovering” significantly incorrect data. Partial failures suggest a simple
failure of OpenPose to extend through its body hierarchy. Since other pose
points are still detected I am more confident that it is not a “legitimate”
failure to do with participant position. Instead, full failure cases are
treated as “separators” within a sample. Gesture detection occurs on either
side but not across such separators.
#### 4.2.3 Detection Smoothing
To extract more accurate and relevant features I smooth the pose data by removing high-frequency movement within pose points. Such high-frequency movement is caused by OpenPose's detection noise (i.e. the exact pose point might move back and forth by a few pixels each frame while its target joint is static). Thus the aim is not to smooth human movement, but to smooth pose extraction noise. To smooth the data I apply a Fourier-transform filter to each dimension of each pose point. The smoothing steps are as follows (a minimal sketch follows the list):
1. Separate the data into $x$ and $y$ position sequences and smooth each independently (i.e. apply the following steps to each).
2. Convert the position sequence data using a Fourier transform with a window length of $64$.
3. Set the medium and high frequency values (all frequencies above the first five) to $0$.
4. Invert the Fourier transform on the updated Fourier values to reconstruct the smoothed pose data.
5. Concatenate the smoothed windows.
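The following is a minimal sketch of this filter for a single coordinate sequence of one pose point. The use of NumPy's real FFT and the handling of the final, possibly shorter, window are my assumptions; the steps above are otherwise followed directly.

```python
import numpy as np

def smooth_sequence(seq: np.ndarray, window: int = 64, keep: int = 5) -> np.ndarray:
    """Low-pass filter one coordinate sequence (all x or all y values).

    Each window of `window` frames is transformed, every Fourier
    coefficient above the first `keep` frequencies is zeroed, and the
    smoothed windows are reconstructed and concatenated.
    """
    out = []
    for start in range(0, len(seq), window):
        chunk = seq[start:start + window]
        spectrum = np.fft.rfft(chunk)
        spectrum[keep:] = 0  # drop medium and high frequencies
        out.append(np.fft.irfft(spectrum, n=len(chunk)))
    return np.concatenate(out)
```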
### 4.3 Feature Extraction
I extract two types of features: aggregate features across the whole body and
localised features for specific body parts, which I term “body localisations”.
The localisations are: head, hands, legs, and feet. As the participants are seated, the body trunk does not move substantially enough for a gesture to be detected. The whole-body features include aggregations of the body localisation features as well as features incorporating the whole body.
By including localised and non-localised features I can model the information
provided by individual body parts and also the overall behaviour.
##### Gesture definition
I define a gesture as a period of sustained movement within a body
localisation. Multiple body localisations moving at the same time are treated
as multiple individual, overlapping, gestures.
A gesture is represented as a range of frames in which the target body
localisation has sustained movement.
For example, if a participant waves their hand while talking it would be a
hand gesture. If they were to cross their legs it would register as a gesture
in both the leg and feet localisations.
##### Whole Body Features
* Average frame movement - the per-frame average movement of every tracked pose point (i.e. the whole body). This is the only feature that is not based on detected gestures.
* Proportion of total movement occurring during a gesture - the proportion of total movement (i.e. of the whole body) that occurred while some body localisation was affecting a gesture.
* Average gesture surprise - gesture surprise is calculated per-gesture as the elapsed proportional time since the previous gesture in the same localisation, or since the start of the sample (proportional to the length of the sample) for the first gesture. This overall feature averages the surprise value calculated for every gesture across all tracked localisations (a minimal sketch follows this list). I use the term "surprise" as the feature targets the effect on a gesture level basis, rather than the sample level. It is not a measure of how much of a sample has no gesture occurring, as it is normalised by both the sample length and the number of gestures. To illustrate: if 2 gestures occurred within a sample such that 80% of the sample duration had no gesture occurring, the average gesture surprise would be $\frac{80\%}{2}=40\%$; whereas with 100 gestures, still with 80% of the sample gesture-free, the average surprise would be 0.8%, even though both samples have the same proportion without any gesture occurring. This matches the intuition that each of 100 evenly spaced gestures would be unsurprising, as they occur regularly, whereas the 2 evenly spaced gestures would be surprising because nothing was happening in between.
* Average gesture movement standard deviation - the standard deviation of per-frame movement within a gesture, averaged across all detected gestures. This is intended to indicate the consistency of movement intensity through a gesture.
* Number of gestures - the total number of detected gestures across all tracked localisations.
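As gesture surprise is the least standard of these features, here is a minimal sketch of its computation for one localisation, assuming gestures are given as (start, end) frame ranges sorted by start frame; whether the gap is measured from the previous gesture's end (as here) or its start is an assumption, and all names are illustrative.

```python
def average_gesture_surprise(gestures, n_frames):
    """Average per-gesture surprise within one localisation.

    gestures: sorted list of (start, end) frame ranges.
    Each gesture's surprise is the gap since the previous gesture
    (or since the sample start), as a proportion of sample length.
    """
    if not gestures:
        return -1  # absent-feature marker, see "Absent Features" below
    prev_end = 0
    surprises = []
    for start, end in gestures:
        surprises.append((start - prev_end) / n_frames)
        prev_end = end
    return sum(surprises) / len(surprises)
```

With 2 gestures and 80% of the sample gesture-free this returns 0.4, and with 100 such gestures 0.008, matching the illustration above.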
##### Localised Features
Whole body and localised features are concatenated in the same feature vector. Each of the following localised features is computed separately for every localisation included in the vector.
* Average length of gestures - the average number of frames per gesture.
* Number of gestures - the total number of gestures, irrespective of gesture length.
* Average per-frame gesture movement - the average movement across all gestures.
* Total movement in gestures - the total amount of movement affected by the detected gestures.
* Average gesture surprise - the average surprise across all gestures.
##### Normalisation
All features are normalised such that the length of the sample does not affect
the results.
I normalise sum-based features (e.g. gesture length, gesture count, total movement) by the total number of frames in the sample, and gesture-average values by the total number of gestures. For example, gesture surprise is normalised against the total number of frames and normalised a second time against the total number of gestures.
##### Absent Features
If a feature has no inputs (such as when no gesture was detected within a body
localisation) its value is set to $-1$ to enable models to incorporate the
absence of movement in their predictions.
#### 4.3.1 Body Localisation
Gestures in different body localisations provide distinct information. Aggregating gestures from different localisations provides a general representation of this information; however, having features localised to specific body localisations provides further information without significantly increasing the dimensionality.
I define four localisations, hands, head, legs, and feet, based on specific
pose estimation points.
###### Hands
I use the finger tip points (including the thumb) detected by OpenPose as the gesture detection points. This means wrist-based gestures (e.g. rolling of a hand) are also detected. Each hand is processed separately; that is, I detect gestures and calculate per-gesture features independently in each hand, and these gestures are then aggregated into a single body localisation feature vector. This makes the final features side-agnostic, ensuring that differences in dominant hand between participants do not affect the result.
###### Head
While OpenPose includes face detection, providing detailed facial landmarks, I
use the general head position points provided by the standard pose detection
component.
###### Legs
I represent legs using the knee pose points from OpenPose. As with hands, I
process gestures in each leg independently and then aggregate to a single
feature vector.
###### Feet
Each foot comprises four pose points within OpenPose. Aggregation is the same as for hands and legs.
###### Trunk
I do not include a “trunk” localisation as there is minimal movement in the trunk, given the seated nature of the dataset interviews. Though some participants may lean forwards and backwards, these movements are not well represented within my data: the camera faces the participants directly, such that forwards and backwards leaning would be towards the camera, thus requiring depth perception, which is not available in my data. Side-to-side leaning is restricted by the arms of the participant's chair. As such, the localisations that are relatively free to move, i.e. those other than the trunk, are intuitively the most likely areas to provide predictive information.
#### 4.3.2 Gestures
##### Gesture Detection
To detect gestures within a set of pose points (e.g. finger tips, knees, feet, head, etc.) I scan the activity of the target points for sustained periods of movement. The gesture detection step takes cleaned per-frame pose estimations and outputs a collection of non-overlapping frame ranges that contain gestures within the localisation.
First, the per-frame absolute movement delta is calculated for each pose
point. The movement is then averaged across all localisation pose points.
Movement deltas are $L^{2}$ distances. Formally,
$\begin{split}M_{p,t}&=\lVert P_{p,t}-P_{p,t-1}\rVert_{2}\\\ F_{t}&=\dfrac{1}{|P|}\sum_{p\in P}M_{p,t}\end{split}$ (4.1)
where $M_{p,t}$ is the amount of movement for pose point $p$ at time $t$, $P_{p,t}$ is the position of pose point $p$ at time $t$, $P$ is the localisation's set of pose points, and $F_{t}$ is the averaged per-frame movement across all of its points.
Second, I average the movement of each frame within a window such that a small
number of frames do not have a disproportionate effect on the detection. That
is,
$\begin{split}W_{i}=\dfrac{1}{l}\sum_{t=i\times l}^{(i+1)\times l-1}F_{t}\end{split}$ (4.2)
where $W_{i}$ is the windowed average at window index $i$, $l$ is the length of the window, and $F_{t}$ is the average movement at frame $t$, from Equation 4.1. In this dissertation I use $l=10$, i.e. a second of movement is represented by 3 windows; this value was chosen experimentally.
Third, the detector iterates through the averaged windows until a window with
an average movement above a threshold is found. The first frame of this
initial window is considered the beginning of the gesture. The gesture
continues until $n$ consecutive windows (I use 3, i.e. 30 frames, as an approximation of a second) are found below the movement threshold. The last
frame of the final window above the movement threshold is considered the end
of the gesture. This is provided more formally in Algorithm 1.
Algorithm 1 Gesture detection
1:$n\leftarrow$ number of consecutive windows below threshold to end a gesture
2:$m\leftarrow$ threshold of window movement
3:$l\leftarrow$ minimum number of windows for a gesture
4:$gestures\leftarrow EmptyList$
5:$start\leftarrow NULL$
6:$belowThresholdCount\leftarrow 0$
7:
8:for each window movement $W$ at index $i$ do
9: if $W\geq m$ then
10: // Start the gesture on the first window that exceeds the movement
threshold.
11: if $start\equiv NULL$ then
12: $start\leftarrow i$
13: $belowThresholdCount\leftarrow 0$
14:
15: else if $start\neq NULL$ then
16: $belowThresholdCount\leftarrow belowThresholdCount+1$
17: // A gesture is completed after $n$ consecutive windows below the movement
threshold.
18: if $belowThresholdCount\equiv n$ then
19: // The end of a gesture is the final window that exceeds the threshold.
20: $end\leftarrow i-n$
21: if $end-start\geq l$ then
22: $gestures$ append $[start,\ end]$
23: // Reset to find the next gesture.
24: $start\leftarrow NULL$
25: $belowThresholdCount\leftarrow 0$
26:
27:
28:// Close the final gesture.
29:if $start\neq NULL$ then
30: $end\leftarrow finalIndex$
31: if $end-start\geq l$ then
32: $gestures$ append $[start,\ end]$
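For readers who prefer runnable code, the following is a direct Python transcription of Algorithm 1, taking the windowed movement averages of Equation 4.2 as input; variable names follow the pseudocode.

```python
def detect_gestures(windows, m, n, l):
    """Detect gestures from averaged window movements (Algorithm 1).

    windows: per-window average movements (Equation 4.2).
    m: movement threshold; n: consecutive below-threshold windows
    that end a gesture; l: minimum gesture length in windows.
    Returns a list of [start, end] window-index ranges.
    """
    gestures = []
    start = None
    below = 0
    for i, w in enumerate(windows):
        if w >= m:
            if start is None:
                start = i  # first window above the threshold
            below = 0
        elif start is not None:
            below += 1
            if below == n:
                end = i - n  # final window above the threshold
                if end - start >= l:
                    gestures.append([start, end])
                start = None  # reset to find the next gesture
                below = 0
    # Close the final gesture if the sample ends mid-gesture.
    if start is not None and len(windows) - 1 - start >= l:
        gestures.append([start, len(windows) - 1])
    return gestures
```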
Having detected gestures in each body localisation I extract the features
described above from each gesture and aggregate them to form the final feature
vector.
### 4.4 Feature Space Search
I have described a collection of novel features whose individual and mutual
predictive value is as yet unknown. Some features are potentially unhelpful to
predictive models. Therefore, distinguishing useful features from unhelpful
features is a key operation to enable development of accurate models, and thus
validate these features. To this end, I perform an exhaustive search of the
feature space to identify the combination of features with the best
performance.
Alghowinem et al. [1] demonstrate the benefits of a variable feature set,
achieving accuracy improvements of up to 10%. Their classification method is
based on SVMs and the dimensionality reduction enabled by feature filtering is
significant. They use a statistical T-threshold based approach to feature
selection. However, Dibeklioglu et al. [16] argue that basing selection on
optimization of mutual information will achieve better results than individual
information based selection. I follow this mutual information approach and
thus feature selection is based on the results achievable given a combination
of features, rather than features’ individual relevance.
I define few enough features that a brute-force search of the feature combination space (i.e. testing every combination) is viable. As each feature can be included or excluded, the space has $2^{n}$ combinations, where $n$ is the number of features being searched.
I iterate over every combination and perform three-fold cross-validation using that combination of features; the combination with the greatest average cross-validation F1 score is taken as the best, which enables testing and evaluating the effectiveness of the proposed features. A minimal sketch of this search follows.
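The sketch below assumes features and labels as NumPy arrays; for brevity it uses plain stratified folds, whereas the evaluation in Chapter 5 additionally splits in a participant-independent manner.

```python
from itertools import combinations

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import StratifiedKFold

def search_features(X, y, n_splits=3):
    """Exhaustively search feature subsets, scoring each by the mean
    cross-validation F1 of a linear-regression classifier (threshold 0.5)."""
    best_score, best_cols = -1.0, None
    for k in range(1, X.shape[1] + 1):
        for subset in combinations(range(X.shape[1]), k):
            cols, scores = list(subset), []
            for tr, te in StratifiedKFold(n_splits=n_splits).split(X, y):
                model = LinearRegression().fit(X[tr][:, cols], y[tr])
                pred = (model.predict(X[te][:, cols]) >= 0.5).astype(int)
                scores.append(f1_score(y[te], pred))
            if np.mean(scores) > best_score:
                best_score, best_cols = float(np.mean(scores)), cols
    return best_cols, best_score
```

The loop visits all $2^{n}-1$ non-empty subsets, so it is only viable for the small feature counts used here.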
### 4.5 Summary
In this chapter I have described my methodology for extracting gesture meta
features from videos. The four core stages are: pose estimation, data
preparation, feature extraction, and finally classifier training. I use
OpenPose to extract pose data per-frame as it is the state-of-the-art in pose
estimation. I then filter samples and perform two operations to reduce noise
within the remaining samples: recovery of partial detection failures and
smoothing of high frequency detection noise. Features are extracted by
detecting individual gestures, calculating per-gesture features such as speed,
and aggregating per-gesture features within their localisations and over the
whole body. Classifier choice is an evaluation detail and discussed in the
next chapter.
## Chapter 5 Evaluation
Having described my gesture meta features, I now evaluate their predictive
capability on the dataset introduced in Chapter 3. Section 5.1 describes
implementation details of my evaluations. Section 5.2 defines a simple
baseline using a single feature. In Section 5.3 I evaluate the features’
potential using the exhaustive feature search described in Section 4.4.
Section 5.4 investigates the influence of different body localisations on the
features’ effectiveness. I then demonstrate the features’ generalisability
beyond the primary depression detection task in Section 5.5. In Section 5.6 I
compare the features’ automatic depression detection performance with an
existing body-modality method and a straightforward face-modality method. In
Section 5.7 I experiment with a multi-modal classifier that combines the
gesture meta features with face-modality features. Finally, I summarize the
results in Section 5.8.
### 5.1 Implementation details
Before presenting evaluation results, I outline the details of the evaluation
setup.
#### Evaluation
I perform evaluations using three-fold cross-validation. The training and test samples are split in a participant-independent manner and use stratified folding to balance them with regards to labels (a minimal sketch of this splitting follows below). Cross-validating with more folds leads to fewer test samples per fold; given the small size of the dataset, this can lead to more erratic performance (i.e. more extremes in cross-validation results). I assess results based on the average cross-validation F1 score and the standard deviation between fold F1 results. The average F1 provides an objective measure of model quality, whilst the standard deviation provides an indicator of the consistency of the model.
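As a minimal sketch of this splitting scheme: assuming scikit-learn ≥ 1.0 (which provides `StratifiedGroupKFold`) and an array mapping each sample to its participant, folds can be generated as below; the original implementation may differ in detail.

```python
import numpy as np
from sklearn.model_selection import StratifiedGroupKFold

def participant_folds(y, participant_ids, n_splits=3):
    """Yield (train, test) index arrays that are label-stratified and
    participant-independent (no participant on both sides of a fold)."""
    X_placeholder = np.zeros((len(y), 1))  # split() only needs row count
    cv = StratifiedGroupKFold(n_splits=n_splits)
    yield from cv.split(X_placeholder, y, groups=participant_ids)
```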
#### Classifier Models
I evaluate four types of classifiers on all tasks (a sketch of their instantiation follows the list):
* A linear regression based classifier (denoted lin) using a classification threshold of 0.5.
* A logistic regression based classifier (denoted log) using the L-BFGS solver.
* A linear kernel SVM (denoted svm) with balanced class weighting (without balancing the class weightings the classifier consistently predicts a single class on every fold).
* A random forest (denoted rf) with 40 trees, feature bootstrapping, a minimum of 3 samples per leaf, a maximum depth of 5, balanced class weighting, and 80% of features exposed per node. These parameters were chosen experimentally.
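A minimal sketch of these four classifiers, assuming a scikit-learn implementation; mapping "feature bootstrapping" onto the `bootstrap` and `max_features` parameters is my interpretation of the description above.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.svm import SVC

# lin: linear regression, thresholded at 0.5 at prediction time
lin = LinearRegression()
# log: logistic regression with the L-BFGS solver
log = LogisticRegression(solver="lbfgs")
# svm: linear kernel with balanced class weighting
svm = SVC(kernel="linear", class_weight="balanced")
# rf: 40 trees, min 3 samples per leaf, max depth 5, balanced class
# weighting, and 80% of features exposed per node
rf = RandomForestClassifier(
    n_estimators=40,
    bootstrap=True,
    min_samples_leaf=3,
    max_depth=5,
    class_weight="balanced",
    max_features=0.8,
)
```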
#### Labels
I only evaluate binary classification tasks (i.e. high vs. low depression
score) within this dissertation. However, the dataset contains continuous
values for distress and personality measures, thus a constant threshold is
required for each label (similar to the participation selection criteria
discussed in Section 3.2.2). These thresholds are chosen such that the
resulting binary classes are as balanced as possible. Given the small size of
the dataset, balancing the classes is important to classifier training. Per-
label thresholds are reported in Table 5.1.
Label | Threshold | # Participants above | # Participants below
---|---|---|---
Distress | | |
Depression | 7 | 18 | 17
Anxiety | 7 | 18 | 17
Perceived stress | 17 | 18 | 17
Somatic stress | 6 | 19 | 16
Personality | | |
Neuroticism | 17 | 18 | 17
Extraversion | 16 | 18 | 17
Agreeableness | 25 | 20 | 15
Conscientiousness | 20 | 20 | 15
Openness | 27 | 19 | 16
Table 5.1: Binary classification thresholds for distress and personality
labels.
A larger dataset, or an expansion of this dataset, would enable regression
models. Regression models would also naturally provide a multi-class
classification solution as the original questionnaire scoring categories could
be applied to the regressed predictions.
Though I evaluate the features’ predictive capability against multiple labels,
my primary focus is on depression detection, as this is the area that most of
the related work discusses. An evaluation of the features' capability on other labels is presented in Section 5.5.
#### Localisations
Though I evaluate four localisation types (head, hands, legs, and feet), the feet localisation has a negative effect on performance, as shown in Section 5.4. As such, all other sections use the three useful localisations, head, hands, and legs, in their evaluations.
#### Feature Reporting Notation
To concisely report features used by different classification models I
describe a brief notation for enumerating features. The notation defines
tokens for localisations, feature type, and how the lin model interprets the
feature. The structure is “[localisation]-[feature type][linear polarity]”.
Localisation and feature type token mappings are provided in Table 5.2.
Localisation | Token
---|---
Overall | O
Hands | Hn
Head | He
Legs | L
Feet | F
Group | Feature | Token
---|---|---
Overall | Average frame movement | FM
Overall | Proportion of total movement occurring during a gesture | GM
Overall | Average gesture surprise | GS
Overall | Average gesture movement standard deviation | GD
Overall | Number of gestures | GC
Localised | Average length of gesture | GL
Localised | Average per-frame gesture movement | GA
Localised | Total movement in gestures | GT
Localised | Average gesture surprise | GS
Localised | Number of gestures | GC
Table 5.2: Feature notation tokens.
I define an indicator of how a model interprets a feature based on its
contribution to a positive classification. A greater value (e.g. more
activity) contributing to a positive classification is denoted by “+”.
Conversely a greater value contributing to a negative classification is
denoted by “$\neg$”. A value which has a negligible effect (defined as a
linear model applying a near-zero coefficient) is denoted by “/”. Finally, if
the lin classifiers for each cross-validation fold are inconsistent in usage
of the feature the “?” indicator is used.
For example, F-GS$\neg$ would denote that a greater amount of surprise in feet
gestures indicates a negative classification.
### 5.2 Baseline
As a baseline I evaluate the use of the only non-gesture based feature,
average per frame movement (O-FM). Results are given in Table 5.3.
Model | F1 avg | F1 std
---|---|---
lin | 34.43% | 11.45%
log | 34.43% | 11.45%
svm | 33.82% | 10.73%
rf | 64.29% | 5.83%
Table 5.3: F1 aggregate scores for models using the baseline feature on the depression detection task. The rf model achieves the best performance. Only one feature is used in this evaluation, which means the feature is available to every decision node in the random forest, enabling strong overfitting.
While the rf model achieves the best results, it is inconsistent across its folds; its 3 fold F1 scores are 70.06%, 60.00%, and 47.06%. However, it does suggest that the movement feature alone is valuable for prediction. Meanwhile, the worse-than-random results achieved by the other models suggest that movement is not a linear indicator. It is possible, indeed likely, that the rf model's result reflects overfitting of the dataset.
### 5.3 Feature Search
I evaluate the effect of the feature space search to identify the best feature
combination.
#### All Features
Table 5.4 presents results for each model when provided with the full feature
vector unmodified. Given all features the lin, log, and svm models all improve
on their baselines, while the rf model is worse than its baseline. The
reduction in performance by the rf model can be attributed to the overfitting
ability of random forests. This ability is especially prevalent with single
feature problems as the feature must be made available to every node within
the forest, thus enabling direct fitting of multiple decision trees to the
dataset. The lin model achieves significantly better-than-random performance,
indicating that the features have some linear predictive capability.
Model | F1 avg | F1 std
---|---|---
lin | 66.81% | 8.89%
log | 56.55% | 5.12%
svm | 62.70% | 9.19%
rf | 41.88% | 6.04%
Table 5.4: Classifier F1 performance for detecting depression within my
dataset given all features. There are two primary interest points in this
table: 1) the lin classifier performs best given all features, suggesting that
the features provide some linear information, and 2) the rf classifier
performs significantly worse than the one feature baseline in Table 5.3,
further suggesting the rf classifier is overfitting the one feature baseline.
#### Searched Features
I perform an exhaustive feature space search to identify the best combination
of features (determined by the average cross-validation F1 score). This
provides two important outcomes: the best accuracy possible for a given model
using these features within my dataset and the most relevant features to
predict depression within my dataset. A good accuracy from the first outcome
validates the features I have developed. The second outcome then provides a
basis for further analysis of these features and their relation to depression
within my dataset.
This search requires a model fast enough to train that the full space can be
searched in a viable amount of time and, preferably, a model that is
interpretable. For these reasons I use the lin model to search the feature
space, and then evaluate the best feature combination with the other
classifier types.
Figure 5.1: Comparison of baseline results to feature searched results. This shows that performing a full feature search enables the lin classifier to outperform the other classifiers and feature combinations. Also of interest is the reduction in performance by the rf model when provided with more features, suggesting that it may be overfitting when provided with the single baseline feature. The specific feature combination resulting from the search is discussed below in Chosen Features.
Model | F1 avg | F1 std
---|---|---
lin | 82.70% | 8.95%
log | 54.17% | 5.89%
svm | 56.55% | 5.12%
rf | 53.18% | 14.96%
Table 5.5: Performance of the best feature combination as determined by an
exhaustive feature space search using the lin classifier to detect depression.
This demonstrates the power, and the linearity, of the gesture meta features
as the lin classifier is able to achieve a high F1 score.
##### Feature Search Improves Best Performance
The best performance when applying the feature search, again from the lin
classifier, is significantly better, 82.70%, than the classifier’s baseline of
34.43% and all features baseline of 66.81%. This demonstrates the importance
of reducing dimensionality and feature confusion, especially when using
relatively direct methods such as linear regression. Results are provided in
Table 5.5, a comparison of these results to the baseline results is presented
in Figure 5.1.
##### Chosen Features
The full feature combination is: {O-FM?, O-GM+, O-GC?, Hn-GC?, Hn-GT$\neg$,
Hn-GS?, He-GL?, He-GC?, He-GA+, He-GT$\neg$, He-GS?, L-GL$\neg$, L-GC?,
L-GT+}. Analysing this feature set, we can derive some intuition as to
information indicative of depression. The overall (O-*) features suggest that
the number of gestures (O-GC) and the amount of movement within those gestures
(O-GM) is relevant to depression. The O-GM+ token suggests that more movement
within gestures relative to all other movement is indicative of depression.
The localised features suggest that the length of gestures (*-GL) has a
correlation with depression, however, this correlation differs between
localisations. The head localisation is ambiguous as to whether shorter or
longer gestures (He-GL?) is indicative of depression. Whilst longer gestures
in the legs localisation (L-GL$\neg$) is indicative of less depression. Within
this model, less total movement of the hands (Hn-GT$\neg$) is indicative of
distress.
##### Negative Performance Impact on Other Models and Overfitting
The identified feature set is chosen using the lin model, so it is
unsurprising it has a greater improvement than any other model. While the log
classifier’s performance does not change much, the svm and rf classifiers have
reduced performance compared to their all features and one feature baselines,
respectively. There are two aspects to consider here: the value of pre-model
filtering to each model and the potential for each model to overfit. Focusing
on the rf classifier as it has a more distinct reduction in performance;
random forests have inbuilt feature discretion, so the pre-model filtering of
features does not have as great a complexity reducing effect as it does on the
other models. My hyper-parameters give the rf model a relatively large
selection of features (80%) per node, thus it should generally filter, to some
extent, those naturally unhelpful features. Random forests, as decision tree
ensemble methods, have a propensity for overfitting data. By reducing the
number of features available I reduce the available surface for overfitting.
Moreover, when only using one feature, as in the baseline, every decision node
in the random forest has access to the feature, thus enabling particularly
strong overfitting of a small dataset.
#### Dimensionality Compression
I do not perform any dimensionality compression operations in my presented evaluations. However, I have experimented with Principal Component Analysis (PCA), both independently and in combination with a feature search. Neither approach achieved especially interesting results; all were worse than when not applying PCA. Given this, and the already relatively low number of dimensions, I do not see it as a critical path of investigation for this dissertation.
### 5.4 Body Localisations
Not all localisations are necessarily beneficial. Identifying which
localisations are helpful is made more difficult by the localisations’
interactions within a classifier and the effect they have on the overall
features. Though a localisation may generate features that are chosen using
feature search, they may reduce overall accuracy by obfuscating predictive
information in the overall features.
Given this, I experiment with localisations included individually and in
varying combinations. I also provide an example of a localisation, feet, that
negatively effects performance when included, even when all other
localisations are also included (and are thus providing the same predictive
information). A comparison of the best F1 scores for localisation combinations
is presented in Figure 5.2.
Figure 5.2: Comparison of the best F1 average scores from localisation
combinations using the lin classifier using all features. The most interesting
results are the bottom four (vertically) localisation results: head - legs,
hands - head - legs, feet - head - legs, and hands - feet - head - legs.
Specifically, the feet localisation impairs the performance of the
localisation combinations when included. This trend is also seen in feet -
legs and feet - head.
##### Localisation Inclusion
Clearly not all of the features generated per localisation provide useful information. Inclusion of more localisations, and thus a larger feature space, does not guarantee that a better optimal result is available within the space. As the overall features (those aggregated across all localisations) are affected by each localisation, the quality of the feature space can be degraded by the inclusion of localisations. For example, this occurs regularly in Figure 5.2 when including the feet localisation. In particular, the best base configuration, head - legs using the lin classifier, achieves a 70.88% F1 score; when the feet localisation is included this drops to 49.84%. I have not identified any psychology literature, or clear intuition, as to why the feet localisation hinders performance. I see three probable explanations: 1) some literature does support this and I have simply not identified it, 2) this accurately represents that feet movement meta information does not distinctively change with distress, but no literature has explicitly investigated this, and 3) this is simply an attribute of the dataset that is not reflective of any broader trend, either due to the dataset size or the nature of the interview dynamic.
##### Best Base Performance Configuration
Though the head - legs configuration achieves the best performance when all
features are used, it does not achieve the best performance when features are
chosen based on an exhaustive search. While my primary configuration, hands -
head - legs, achieves 82.70% F1 average, the head - legs configuration
achieves 80.53%, results are presented in Table 5.6.
Model | F1 avg | F1 std
---|---|---
lin | 80.53% | 4.04%
log | 59.52% | 8.91%
svm | 65.40% | 8.40%
rf | 51.32% | 5.76%
Table 5.6: Performance of models when using features chosen via exhaustive
search with source features from the head - legs configuration. This
configuration achieves close to the best performance of the standard
configuration (Table 5.5). The lin classifier also has more consistent
performance across cross validation folds than it does on the standard
configuration, with a standard deviation of 4.04% compared to 8.95%. However,
these results do not clearly define which configuration is generally better as
the differences are quite minor.
### 5.5 Generalisability
I have demonstrated the gesture meta features’ predictive value with regards
to depression detection. I now evaluate their predictive capability for other
labels including participants’ gender, personality measures, anxiety,
perceived stress, and somatic stress.
I apply the best feature combination identified in Section 5.3 to each of the
labels, presented in Table 5.7. I also perform a feature space search for each
label, using the same process as Section 5.3, to provide greater insight into
the effect of features and their reliability across labels, presented in Table
5.8. A comparison is shown in Figure 5.3. Consistent identification of
features across uncorrelated labels reinforces the hypothesis that they
provide predictive information beyond a specific label and dataset (i.e. are
less likely to be overfitting).
Figure 5.3: Comparison of F1 average scores when using the optimal feature combination for the depression label vs. the optimal feature combination for each label. Each label has a significant performance improvement when using its optimal feature combination.
Label | lin | log | svm | rf
---|---|---|---|---
| F1 avg | F1 std | F1 avg | F1 std | F1 avg | F1 std | F1 avg | F1 std
Distress | | | | | | | |
Depression | 82.70% | 8.95% | 54.17% | 5.89% | 56.55% | 5.12% | 53.18% | 14.96%
Anxiety | 47.18% | 23.21% | 38.33% | 19.20% | 30.18% | 4.02% | 53.26% | 16.58%
Perceived stress | 47.18% | 23.21% | 38.33% | 19.20% | 30.18% | 4.02% | 53.26% | 16.58%
Somatic stress | 42.74% | 30.29% | 51.68% | 4.01% | 44.44% | 6.29% | 54.89% | 8.37%
Personality | | | | | | | |
Neuroticism | 62.08% | 0.76% | 31.71% | 10.89% | 33.36% | 12.59% | 38.30% | 8.30%
Extraversion | 51.04% | 14.06% | 67.95% | 17.33% | 65.14% | 12.98% | 52.94% | 20.83%
Agreeableness | 69.48% | 5.97% | 73.61% | 2.13% | 67.98% | 3.92% | 56.96% | 20.79%
Conscientiousness | 71.28% | 6.53% | 72.95% | 6.93% | 78.77% | 6.24% | 79.19% | 6.11%
Openness | 49.64% | 14.94% | 65.08% | 5.94% | 64.46% | 5.31% | 61.47% | 8.64%
Demographic | | | | | | | |
Gender | 34.26% | 18.18% | 52.96% | 2.28% | 40.63% | 18.42% | 63.14% | 5.29%
Table 5.7: Performance of models using the feature combination identified for the depression label (Section 5.3) for predicting a variety of labels. The best results per-label are bolded. These results suggest that the depression chosen feature combination does not generalise particularly well. These results are also surprising as the distress labels (anxiety, perceived stress, and somatic stress), which are correlated to the depression label, perform poorly, whilst uncorrelated labels (such as agreeableness, openness, and gender) perform better. This may be due to overfitting of feature profiles to labels (i.e. the optimal features for the label) or truly distinct feature profiles between the correlated distress labels, though the former appears more probable.
Label | lin | log | svm | rf
---|---|---|---|---
| F1 avg | F1 std | F1 avg | F1 std | F1 avg | F1 std | F1 avg | F1 std
Distress | | | | | | | |
Depression | 82.70% | 8.95% | 54.17% | 5.89% | 56.55% | 5.12% | 53.18% | 14.96%
Anxiety | 88.46% | 9.53% | 46.14% | 9.16% | 54.33% | 12.32% | 52.94% | 12.85%
Perceived stress | 88.46% | 9.53% | 46.14% | 9.16% | 54.33% | 12.32% | 52.94% | 12.85%
Somatic stress | 86.37% | 6.45% | 58.44% | 8.85% | 68.08% | 16.97% | 49.48% | 11.38%
Personality | | | | | | | |
Neuroticism | 76.39% | 8.56% | 38.85% | 5.15% | 48.25% | 4.47% | 40.78% | 13.59%
Extraversion | 84.04% | 10.24% | 74.61% | 10.34% | 73.39% | 12.75% | 58.59% | 31.33%
Agreeableness | 82.70% | 5.77% | 69.14% | 6.67% | 50.43% | 14.85% | 67.97% | 1.85%
Conscientiousness | 88.72% | 4.25% | 73.14% | 3.13% | 69.90% | 7.84% | 78.91% | 2.80%
Openness | 85.01% | 6.62% | 64.32% | 5.06% | 68.77% | 8.84% | 60.32% | 7.36%
Demographic | | | | | | | |
Gender | 81.69% | 5.32% | 68.11% | 10.22% | 76.71% | 5.10% | 64.81% | 11.42%
Table 5.8: Performance of models for predicting a variety of labels when
using features identified via a feature space search specific to the label.
This demonstrates the performance improvement, compared to Table 5.7, achieved
via label specific feature combinations. The best results for each label are
bolded.
#### Results
The depression feature combination does not generalise well to the other
distress measures for the lin model, though it achieves 62–71% F1 scores for
neuroticism, agreeableness, and conscientiousness. Interestingly, the other
labels’ best results using the depression combination are above 60% F1, with
the exception of the other distress measures, which only achieve a best of
53–54% F1. This is surprising as the distress labels are strongly correlated
to the depression label.
However, when features are chosen on a per-label basis the results improve significantly. All labels achieve 76%+ average F1 scores (all but one are above 80%) with their best classifier. (Though perceived stress and anxiety measures have a covariance of 82.98% within my dataset, once reduced to binary classifications they are equivalent, i.e. 100% correlation; thus results for both labels are the same.) The best classifier for all labels is the lin classifier, which is to be expected given it is the search classifier and previous evaluation has shown the features provide useful linear information.
##### Fitting features to labels
There are two potential explanations for the substantial improvement in
performance when using label specific feature combinations: each label could
legitimately have a unique feature profile through which its distinctive
attributes are expressed or the labels could be experiencing a level of
overfitting due to the small size of the dataset. While it is likely to be a
combination of the two reasons, labels that are relatively uncorrelated with
depression are still able to achieve good performance using the depression
chosen features, suggesting it is not just overfitting. For example,
agreeableness, extraversion, and conscientiousness all have covariances of
less than 50% with depression, yet they achieve 73.61%, 67.95%, and 79.19% average F1 scores, respectively, with the depression features. Openness, which has a 4.29% covariance with depression, achieves 65.08% with the depression-chosen features. Furthermore, as Table 5.9 shows, the openness feature
combination shares only 6 out of its 9 features with the depression
combination, whilst it also excludes 8 out of 14 of the depression
combination’s features. Therefore, it can be reasonably suggested that the
features do generalise, acknowledging that fitting the features directly to a
label achieves the best performance, as would be expected with all prediction
tasks.
##### Cross classifier generalisability
Within the depression feature combination results, the lin, log, and rf models
all achieve the best results on at least 2 labels each. The svm classifier
performs close to the best on many labels, such as conscientiousness where it
achieves 78.77% compared to the best result of 79.19%.
The svm model performs better when classifying conscientiousness,
extraversion, and agreeableness, using the depression feature combination,
than it does when predicting the depression label. It is also better at
transferring the depression feature combination to other labels than the lin
model that the feature combination was chosen with. Its performance improves
on most labels when using the label targeted feature sets, such as on gender
where it achieves 76.71% and somatic stress with 68.08% F1, whilst with the
depression feature combination its F1 result was less than 50% for both.
Interestingly, whilst the svm model performed particularly well on
agreeableness using the depression feature combination, it performed worse
when using the label targeted feature combination.
##### Personality
Within the results for the depression feature combination the
conscientiousness label achieves the best average F1 score across all
classifiers, 75.55%, with an average standard deviation of 6.45%.
Unlike the distress evaluation questionnaires, the BFI (personality)
questionnaire is designed such that responses should remain relatively
consistent over time. Future work could investigate whether the features are
able to consistently predict personality measures over time for a participant.
Do the features remain predictive of personality as a person’s temporal
distress measures change?
##### Chosen Features
Table 5.9 presents the feature sets resulting from the per-label feature
searches. There are some consistencies and intuitive inversions within the
feature sets. For example, the head localised average gesture movement (He-GA)
is relevant for many of the labels, though in some it is inconsistent whether
greater or lesser is indicative of the label (instances of inconsistencies are
denoted as He-GA? within Table 5.9). The feature usage is inverted between
neuroticism, where a faster average speed is indicative of a positive
classification, and extraversion and agreeableness, where a slower average
speed is indicative of positive classification.
Furthermore, as stated above, features that are chosen by uncorrelated labels
support the hypothesis that these features are providing useful information.
For example, of the 7 features the gender label chooses, 5 are also chosen by
the anxiety label, including hand total gesture movement (Hn-GT), hand gesture
surprise (Hn-GS), and leg gesture surprise (L-GS), whilst gender and anxiety have a covariance of only 7.23% within the dataset.
Label | # Feat. | Localizations
---|---|---
| | Overall | Hands | Head | Legs
Distress | | | | |
Depression | 14 | FM?, GM+, GC? | GC?, GT$\neg$, GS? | GL?, GC?, GA+, GT$\neg$, GS? | GL$\neg$, GC?, GT+
Anxiety | 13 | FM+, GM+, GD$\neg$, GC$\neg$ | GT?, GS+ | GL+, GA?, GT$\neg$ | GL$\neg$, GA+, GT+, GS+
Perceived stress | 13 | FM+, GM+, GD$\neg$, GC$\neg$ | GT?, GS+ | GL+, GA?, GT$\neg$ | GL$\neg$, GA+, GT+, GS+
Somatic stress | 9 | GD$\neg$ | GL?, GT?, GS+ | GC$\neg$, GA+ | GL$\neg$, GC$\neg$, GT+
Personality | | | | |
Neuroticism | 14 | GM+, GS?, GC$\neg$ | GC+, GA$\neg$ | GL?, GC+, GA+, GT$\neg$ | GL$\neg$, GC+, GA?, GT+, GS+
Extraversion | 10 | FM+, GC+ | GL$\neg$, GC+, GA+ | GC?, GA$\neg$ | GL+, GT$\neg$, GS$\neg$
Agreeableness | 9 | GS+, GC+ | GC$\neg$, GA$\neg$ | GA$\neg$, GS+ | GC$\neg$, GA?, GT?
Conscientiousness | 8 | FM?, GM$\neg$, GS? | GL+, GS? | GC+ | GL?, GC?
Openness | 9 | GM+, GD+ | GL$\neg$, GC? | GC$\neg$, GA?, GS? | GA?, GT?
Demographic | | | | |
Gender | 7 | GS+, GD$\neg$ | GC+, GT+, GS$\neg$ | GA+ | GS$\neg$
Table 5.9: Features chosen for each label when features are searched
specifically for the label. “Positive classification” for the gender label is
female (i.e. He-GA+ indicates a higher average speed of head gestures is
indicative of the participant being female). Refer to Table 5.2 for the full
notation mapping.
##### Movement
I represent four measures of movement: average movement per-frame (i.e.
speed), standard deviation of movement (i.e. consistency of speed), total
movement both over an entire sample and over individual gestures, and
proportion of total movement that occurs during gestures (i.e. how much or little movement there is outside of gestures). The amount and speed of
intuitively correlated to distress and personality. This intuition is
validated as these movement representations are included in a variety of
feature sets resulting from feature space searches, across multiple labels.
Indeed, Table 5.9 shows that the speed of the head during a gesture (i.e. He-GA) is correlated with positive classifications for labels including neuroticism, depression, and somatic stress, while it is inversely correlated with other classifications including extraversion and agreeableness.
##### Surprise
One of the less initially intuitive features is “gesture surprise”. This
feature represents the distance between a gesture and the previous gesture (or the beginning of the sample) as a proportion of the total length of the sample. This per-gesture value is then averaged across all gestures; this means it is not a measure of the proportion of the sample in which no gesture occurs (as discussed in Section 4.3). The intuition is to represent whether the participant's gestures occur regularly or after a period of stillness (i.e. how "surprising" the gesture is). Gesture surprise is included in every
feature combination resulting from a feature search of a label, shown in Table
5.9, suggesting it provides useful information.
##### Linearity
The features are, in general, linearly correlated with different labels, as
shown by the effectiveness of the lin classifier, which has been the focus of
the feature development and evaluation. Using this classifier provides
interpretability of the features and integrity that the features are providing
real, useful, information, not simply a platform for overfitting.
However, not all features are consistently linear with regards to certain
labels. Some features are learnt as correlated and inversely correlated on the
same label in different cross-validation rounds. This inconsistency is denoted
with the usage indicator “?”. For example, while the hands localised gesture
surprise feature (Hn-GS) is linearly correlated with somatic stress, anxiety,
perceived stress, and gender, it is inconsistently correlated with
conscientiousness and depression. These inconsistencies within feature
correlation are the exception, not the rule. Indeed, all of the features that
experience inconsistency on a label are shown to be linearly correlated to
another label, or linearly correlated within another localisation.
### 5.6 Comparison with Related Work
I evaluate two comparison methods: a linear SVM model based on Bag-of-Words
features of FACS [21] Action Units (AUs), and the Bag-of-Body Dynamics (BoB)
method presented by Joshi et al. [36]. The first method provides a comparison
of modality (facial-modality) within my dataset and a cross-dataset benchmark
with the closest related dataset, DAIC. The second method provides a
comparison to an existing body-modality method that uses an automatic
expression categorization approach to predict depression.
#### AU SVM
This method uses basic facial expression analysis to predict a binary
depression label. I implement a simple linear kernel SVM based on a Bag-of-
Words (BoW) feature vector of FACS AUs to predict the binary depression label
in my dataset and, separately, a binary depression label within DAIC. As with
my dataset, I apply a threshold to DAIC’s PHQ-8 score labels such that the
dataset is as balanced as possible. The resulting threshold is 5 (i.e. 6 and
above is considered “depressed”), this results in 68 of 139 samples classed as
“depressed”. This is a lower threshold than used in my dataset (which is 7).
Within the BoW feature vector, binary AUs are counted for each frame they are
detected in, whilst intensity measured AUs (i.e. continuous value measures)
are summed across all frames. The sums and counts are then normalised against
the number of frames in which the face was successfully detected.
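A minimal sketch of this feature construction, assuming per-frame AU matrices and a face-detection mask as NumPy arrays; names and array layouts are illustrative.

```python
import numpy as np

def au_bow_vector(binary_aus, intensity_aus, face_detected):
    """Bag-of-Words feature vector over FACS Action Units.

    binary_aus: (n_frames, n_binary) 0/1 activations per frame.
    intensity_aus: (n_frames, n_intensity) continuous values per frame.
    face_detected: (n_frames,) boolean mask of successful detections.
    """
    n = max(int(face_detected.sum()), 1)  # frames with a detected face
    counts = binary_aus[face_detected].sum(axis=0) / n   # per-AU counts
    sums = intensity_aus[face_detected].sum(axis=0) / n  # per-AU sums
    return np.concatenate([counts, sums])
```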
The DAIC dataset provides per-frame predictions for 20 AUs. I use OpenFace [5]
to extract per-frame AUs from my dataset. OpenFace predicts 35 AUs per-frame,
substantially more than provided by DAIC. While this method does not use the
gesture features, I still only evaluate it on the samples that pass the
gesture filtering step (Section 4.2.1) so that the evaluation is consistent
with previous results.
#### Bag-of-Body Dynamics
Joshi et al. [36] use a BoW approach to predict depression in clinically assessed participants from their BlackDog dataset. The BoW feature vector comprises categorized body expressions based on K-Means clustering of histograms surrounding STIPs within a video of the subject. Their method is (a sketch of the clustering steps is given below):
1. Extract Space-Time Interest Points (STIPs).
2. Calculate histograms of gradients (HoG) and optic flow (HoF) around the STIPs.
3. Cluster the histograms per-sample using K-Means; the resulting cluster centres define the sample's "key interest points" (KIPs).
4. Cluster KIPs across all samples to define a BoB codebook, also using K-Means.
5. Generate a BoB feature vector for each sample by fitting its KIPs to the codebook.
6. Finally, apply a non-linear SVM to the resulting feature vectors to predict depression.
They use a radial basis function kernel (RBF) for their SVM.
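A minimal sketch of steps 3-5 of this pipeline using scikit-learn's K-Means, assuming each sample's STIP histograms are stacked into one array; the histogram normalisation of the final BoB vector is my assumption.

```python
import numpy as np
from sklearn.cluster import KMeans

def bob_vectors(sample_histograms, n_kips=1000, codebook_size=500):
    """Bag-of-Body-Dynamics vectors from per-sample HoG/HoF histograms.

    sample_histograms: one (n_stips_i, dim) array per sample; samples
    with fewer than n_kips STIPs must be excluded beforehand.
    """
    # Per-sample key interest points (KIPs): K-Means cluster centres.
    kips = [KMeans(n_clusters=n_kips, n_init=1).fit(h).cluster_centers_
            for h in sample_histograms]
    # Codebook: cluster all KIPs across all samples.
    codebook = KMeans(n_clusters=codebook_size, n_init=1).fit(np.vstack(kips))
    # BoB vector: histogram of each sample's KIPs over codebook entries.
    vectors = []
    for k in kips:
        words = codebook.predict(k)
        vectors.append(np.bincount(words, minlength=codebook_size) / len(words))
    return np.array(vectors)
```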
###### Differences in Method
In the original paper Joshi et al. experiment with multiple KIP counts and codebook sizes, and perform an extensive grid search for their SVM parameters. I use a single KIP count (1,000) and codebook size (500), which they also experiment with. While I perform a grid search on the RBF parameters, it may not be as extensive as their search.
I also exclude samples that generate fewer STIPs than the KIP count. This
results in 45 out of 65 samples being included.
#### Results
The results for each model are presented in Table 5.10; a comparison of
methods applied to my dataset is shown in Figure 5.4. I compare the best
depression detection model from my previous evaluations (i.e. the feature
combination identified in Section 5.3 with the lin classifier), and a lin
classifier using all gesture meta features, with two BoB based SVMs and a FACS
AUs based linear SVM for predicting depression. The lin model performs best,
achieving an 82.70% F1 average with the optimal feature combination and 66.81%
with all features, compared to the next best at 63.71% from the FACS AUs SVM
model and 61.10% from the BoB RBF SVM model. Results for all models are based
on the same three fold cross-validation mean F1 as previously.
Figure 5.4: Comparison of method F1 scores for predicting the depression
label in my dataset. This shows the lin classifier using the optimal
depression feature combination identified in Section 5.3 (lin (opt.))
outperforming the comparison methods. The lin classifier using all of the
gesture meta features (lin (all)) also outperforms the comparison methods,
though not by as much. There are two caveats to this comparison: the features
for the lin (opt.) classifier have been fitted to the label and the BoB
results are for a reduced dataset (45 samples compared to 53 for the lin and
AUs models) as not all of the samples produced enough STIPs for the methods to
operate appropriately.
The FACS AUs SVM model performs better on my dataset than on the DAIC dataset.
This is almost certainly due to the difference in quantity and quality of
available FACS AU features. The DAIC dataset provides 20 AU predictions per
frame while my dataset provides 35 AU predictions.
Model | Variant | F1 avg | F1 std
---|---|---|---
lin | Optimal feature combination | 82.70% | 8.95%
lin | All features baseline | 66.81% | 8.89%
BoB | RBF kernel | 61.10% | 6.09%
BoB | Linear kernel | 60.13% | 9.24%
FACS AUs | My dataset | 63.71% | 8.02%
FACS AUs | DAIC dataset | 56.53% | 3.10%
Table 5.10: Comparison of my lin model, FACS AUs based SVM models, and the
BoB SVM models for predicting depression as defined by the PHQ-8
questionnaire, on my dataset. The FACS AUs DAIC model is the exception, its
results are for the DAIC dataset. The lin model using the optimal feature
combination identified in Section 5.3 achieves the best results. The lin model
using all features (i.e. the all feature baseline from Section 5.3) also beats
the comparison methods.
### 5.7 Multi-Modal
Given the success of the FACS AUs SVM classifier in Section 5.6, I also
evaluate a multi-modal method that fuses my gesture meta features and FACS
AUs.
I perform multiple multi-modal experiments:
1. Feature vector fusion including all features, evaluated across all four classifier types.
2. Feature vector fusion with a feature search on my gesture meta features, though all AU features are retained in every search iteration:
    (a) Search using the lin classifier.
    (b) Search using the svm classifier, as this was used successfully with the AUs features alone.
    (c) Search using a radial-basis function kernel SVM classifier, which achieves comparable results to the linear kernel SVM classifier on the AUs features alone.
3. Hybrid fusion inspired by Alghowinem et al. [1]. This involved feature fusion and decision-level fusion via a majority vote of three classifiers: one classifier with only meta features, one with only AUs features, and one with the fused feature vector (a sketch of this vote is given after this list). I experimented with all meta features and with the feature-searched meta features. I used the best classifier type for each individual classifier, i.e. lin for gesture meta features, svm for AUs features, and also svm for the fused features.
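A minimal sketch of the majority-vote hybrid fusion in (3) follows; logistic regression and a linear SVM are assumed as stand-ins for the lin and svm classifiers, and the fused view is simple feature concatenation:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

def hybrid_fusion_predict(X_meta, X_aus, y_train, X_meta_t, X_aus_t):
    """Majority vote of three classifiers over different feature views."""
    X_fused = np.hstack([X_meta, X_aus])
    X_fused_t = np.hstack([X_meta_t, X_aus_t])
    preds = np.array([
        LogisticRegression(max_iter=1000).fit(X_meta, y_train).predict(X_meta_t),
        SVC(kernel="linear").fit(X_aus, y_train).predict(X_aus_t),
        SVC(kernel="linear").fit(X_fused, y_train).predict(X_fused_t),
    ])
    return (preds.sum(axis=0) >= 2).astype(int)   # majority of three binary votes

rng = np.random.default_rng(0)                    # toy data, for illustration only
Xm, Xa, y = rng.random((40, 10)), rng.random((40, 35)), rng.integers(0, 2, 40)
print(hybrid_fusion_predict(Xm, Xa, y, Xm[:5], Xa[:5]))
```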
#### Results
The best result was achieved by the feature search method with an svm classifier (2.b),
reaching an 81.94% F1 average. The feature search with the lin classifier (2.a) was
second best with an 80.93% F1 average. However, both are worse than the best
depression detection score of 82.70% obtained when using the gesture meta features
alone with the lin model. Moreover, the hybrid fusion approaches achieved F1s
in the mid-70s; rather than correcting the meta features’ errors, the vote
averaged the success rates of the meta classifier and the AUs classifier,
resulting in a worse F1.
#### Potential Future Fusion Approaches
These approaches to fusion are somewhat simplistic. More sophisticated
approaches, such as deep learning fusion, may achieve better results than the
meta features alone. An interesting route for future work is to use recurrent
deep learning to fuse temporally aligned meta features with AUs, rather than
fusing them post-aggregation.
### 5.8 Summary
In this chapter I have evaluated the introduced gesture meta features and
found they provide linear predictive information. To achieve the best
performance I perform an exhaustive search of the feature space to identify
the best feature combination. This demonstrates the potential of the features,
however, it also introduces a risk of overfitting. This risk is mitigated by
two factors: the best performance is achieved by a linear regression based
classifier (i.e. a classifier not prone to overfitting) and many features are
present in optimal feature combinations for multiple uncorrelated labels.
I compared my method to a basic facial-modality method and an existing body-
modality method based on STIPs (i.e. generic analysis of video data). My
method outperforms both when using the lin classifier, both when using all
possible features and when using the optimal feature combination for the
depression label.
Finally, I perform a multi-modal experiment utilising the facial-modality
comparison method and my novel gesture meta features. Despite testing multiple
approaches to fusion, no method I evaluated beat the mono-modal gesture meta
features classifiers. However, all related work that I am aware of achieves
better performance when incorporating multiple modalities; as such, it is likely
that further investigation of multi-modal approaches incorporating the gesture
meta features will identify an approach that does improve upon my
mono-modal results.
## Chapter 6 Conclusion
I have presented a novel set of generic body modality features based on meta
information of gestures. To develop and evaluate these features I also
introduced a novel non-clinical audio-visual dataset containing recordings of
semi-structured interviews along with distress and personality labels.
I evaluated these features for detecting depression as a binary classification
task based on PHQ-8 scores. A linear regression based classifier achieved an
82.70% average F1 score, suggesting these features are both useful for
depression detection and provide linear information. I further evaluated the
features’ generalisability via binary classification tasks for other distress
labels, such as anxiety, personality measures, and gender. All generalisation
labels achieved better than 80% average F1 scores, with the exception of one
label, personality neuroticism, which achieved 76.39%.
These are novel features as previous works within the body modality for
automatic distress detection are based on either unsupervised definitions of
expressions or hand-crafted descriptors of distress indicative behaviour.
These features are similar to some of the work within the eye and head
modalities which are based on modeling generic activity levels within these
modalities, among other features.
Finally, these features exist within a broader ecosystem of affect based
features. The best results for automatic distress detection, and similar
tasks, are achieved by integrating multiple modalities and multiple
representations of modalities. Future work may apply these features to new
tasks, extend the feature type by extracting similar meta features, and
develop methods for combining them with other modalities to amplify the
information they provide.
### Future Work
### Dataset
Future work could expand the introduced dataset and increase its quality via
manual annotations. For example, annotating time frames based on the topic of
conversation could enable sample segmentation methods such as that of Gong &
Poellabauer [25], which could improve the feature modeling and classifier
stages. Another route of future work is gesture definition and detection: for
example, manual annotation of gestures would enable auto-learnt gesture
detectors.
### Regression Tasks
All labels I evaluate, with the exception of gender, are scales based on self-
evaluation questionnaires relating to distress or personality. The distress
scales define multiple levels of severity while the personality scales do not.
Both types of labels are well suited to regression prediction tasks. Moreover,
distress classification predictions could be derived from the defined severity
levels once a regression has been performed. However, this requires a larger
dataset (initial experiments with regression on the dataset support this
assertion).
### Features
Applying the presented features within larger datasets could further
illuminate the properties of certain features, either confirming their
inconsistency, or solidifying the linearity in one direction. Such properties
should then be tested with regards to the psychology literature.
#### Trunk
I do not evaluate a “trunk” localisation (i.e. hips to shoulders), though it
may prove useful in future work. In my dataset participants are seated during
the whole interview, thus the two-dimensional range of motion in the trunk
(i.e. up/down and side to side) is small. As I do not include depth recording
the greatest range of motion, forward and backward, is difficult to
incorporate. Future work that either includes depth sensing or has
participants in a wider variety of scenarios, where they may be standing or
more physically engaged, may find some use in a trunk localisation.
#### Cross Localisation Co-occurrence
Co-occurrence of gestures across localisations is not considered in this
dissertation, though it is an area for future work. Applying deep learning to
co-occurrence modeling to identify relevant patterns would be particularly
interesting.
#### Improved Surprise
In future work this feature could be extended to be more indicative of
“surprise”, rather than simply an average of distance since last gesture. For
example, a gesture that occurs within a regular pattern of gestures,
regardless of their distance, might be considered to have low surprise, while
a gesture interrupting that pattern by occurring in the middle of the
pattern’s silent period would be very surprising. This more sophisticated
measure of surprise could account for naturalistic behaviour such as rhythmic
movement/gestures. These types of extensions of features are exciting for two
reasons: firstly, they are interpretable, they do not come from a black box
model, and thus can be supported by psychology literature. Secondly, they are
immediately measurable and their design is based on traditional statistical
techniques such as repetition modeling and can therefore be iterated on more
readily than deep learning based features.
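As a rough illustration of the surprise measure discussed above, the baseline (gap since the preceding gesture) and one possible pattern-aware variant could be sketched as follows; the onset times and the z-score heuristic are purely illustrative assumptions:

```python
import numpy as np

def surprise_scores(gesture_onsets):
    """Baseline 'surprise' (gap since the preceding gesture) plus a rough
    pattern-aware variant: each gap's z-score against all gaps, so that
    rhythmic gestures with regular gaps receive low surprise."""
    gaps = np.diff(gesture_onsets)
    z = (gaps - gaps.mean()) / (gaps.std() + 1e-9)
    return gaps, z

gaps, z = surprise_scores(np.array([1.0, 3.0, 5.0, 7.0, 15.0]))
print(gaps, z)   # the 8-second gap stands out as the surprising one
```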
#### Generalisation
Future work, some of which has been discussed here, could extend these
features to achieve better results on more diverse datasets. A greater variety
of scenarios in which the participant is more constantly physically engaged,
e.g. walking or a standing conversation, would challenge the design of the
presented gesture meta features. Indeed, the current design would struggle in
such settings, as it relies on the default state being a lack of movement.
#### Deep Learning for Feature Generation
As I discussed in Chapter 2, deep learning has been applied to the depression
detection task, but still relies on hand-crafted features and descriptors. As
with many fields it is likely that deep learning methods will, in the next few
years, achieve state-of-the-art results, far exceeding the potential of more
traditional approaches. However, to reach this point the deep learning models
need large enough datasets to train on.
Assuming no especially large dataset is developed specifically for automatic
depression detection, one potential for future work is the use of transfer
learning (as Chen et al. [9] explore for vocal and facial features) for body
expressions. For example, CNNs could be trained on general body expression
datasets to learn inner representations of body language. The output of inner
layers could then be used in depression detection tasks (Razavian et al. [47]
present a transfer learning approach to image recognition tasks and achieve
very good results on niche tasks using a generic pre-trained image CNN).
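A minimal sketch of that idea follows, assuming PyTorch with torchvision (version 0.13 or later for the weights API); the choice of ResNet-18 as the generic pre-trained CNN is illustrative:

```python
import torch
from torchvision import models

# Load a generic pre-trained image CNN and expose its penultimate layer.
resnet = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
resnet.fc = torch.nn.Identity()        # drop the ImageNet classification head
resnet.eval()

with torch.no_grad():
    frames = torch.rand(8, 3, 224, 224)    # toy batch of video frames
    features = resnet(frames)              # (8, 512) generic inner-layer features
# `features` could then feed a small depression classifier downstream.
```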
### Classifiers and Deep Learning
In this dissertation I have used two traditional statistical models, linear
and logistic regression, and two more advanced machine learning methods,
support vector machines and random forests. However, I have purposefully
avoided more complex machine learning methods such as neural networks, and
especially deep learning. These more sophisticated methods have achieved
significant improvements on the state-of-the-art in many domains, however,
they suffer from a lack of interpretability and a propensity to overfit data.
As my primary focus has been validating gesture meta features I define
interpretability as a core requirement and, given the size of my dataset,
avoiding methods that overfit is important.
However, having validated these features, deep learning presents opportunities
with regards to learning more sophisticated aggregation functions (this is
slightly different to the feature generating deep learning discussed above). I
aggregate three core features of gestures: their movement, duration, and
“surprise”. I perform aggregation via averaging and standard deviations.
However, given a larger dataset, a deep learning model could foreseeably learn
both more sophisticated aggregation functions and core meta-features. It is
important to emphasize here the burden on dataset size that exists when
attempting to learn valuable features using deep learning.
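For clarity, the current mean/std aggregation amounts to something like the following minimal sketch; the per-gesture rows for movement, duration and surprise are illustrative:

```python
import numpy as np

def aggregate_gestures(gestures):
    """Aggregate per-gesture (movement, duration, surprise) rows into one
    fixed-length sample vector via averages and standard deviations."""
    g = np.asarray(gestures)                      # shape (n_gestures, 3)
    return np.concatenate([g.mean(axis=0), g.std(axis=0)])

print(aggregate_gestures([[0.4, 1.2, 0.1], [0.9, 0.6, 0.7], [0.5, 1.0, 0.2]]))
```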
#### Multi-Modal Approaches with Deep Learning
Deep learning can learn useful modality integration functions. While I
experimented with a multi-modal approach in Section 5.7 it did not achieve
better results than my mono-modal method. I believe this is due to the rather
direct approaches to multi-modal integration I experimented with. Applying
deep learning to modal integration thus presents an opportunity for further
work.
Furthermore, integrating a greater diversity of modalities and modality
representations would be interesting. For example, applying the Bag-of-Body
Dynamics features along with my gesture meta features and FACS AUs. With
regards to the Bag-of-Body Dynamics method, future work could apply the core
clustering concept to pose estimation data rather than STIPs data and may
achieve better results from this.
## References
* Alghowinem et al. [2015] Sharifa Alghowinem, Roland Goecke, Jeffrey F Cohn, Michael Wagner, Gordon Parker, and Michael Breakspear. Cross-cultural detection of depression from nonverbal behaviour. _FG_ , pages 1–8, 2015.
* Alghowinem et al. [2016] Sharifa Alghowinem, Roland Goecke, Julien Epps, Michael Wagner, and Jeffrey F Cohn. Cross-Cultural Depression Recognition from Vocal Biomarkers. In _Interspeech 2016_ , pages 1943–1947. ISCA, September 2016.
* Alghowinem et al. [2018] Sharifa Alghowinem, Roland Goecke, Michael Wagner, Julien Epps, Matthew Hyett, Gordon Parker, and Michael Breakspear. Multimodal Depression Detection: Fusion Analysis of Paralinguistic, Head Pose and Eye Gaze Behaviors. _IEEE Transactions on Affective Computing_ , 9(4):478–490, 2018.
* Association [1994] American Psychiatric Association. Diagnostic and Statistical Manual of Mental Disorders. _Washington, Am Psychiatr Assoc_ , pages 143–146, 1994.
* Baltrušaitis et al. [2018] Tadas Baltrušaitis, Amir Zadeh, Yao Chong Lim, and Louis-Philippe Morency. OpenFace 2.0: Facial Behavior Analysis Toolkit. In _2018 13th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2018)_, pages 59–66. IEEE, 2018.
* [6] Black Dog Institute. https://www.blackdoginstitute.org.au/, 2019. Accessed 2019/06/01.
* Cao et al. [2016] Zhe Cao, Tomas Simon, Shih-En Wei, and Yaser Sheikh. Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields. _CoRR_ , cs.CV, 2016.
* Cao et al. [2018] Zhe Cao, Gines Hidalgo, Tomas Simon, Shih-En Wei, and Yaser Sheikh. OpenPose: Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields. _arXiv.org_ , December 2018.
* Chen et al. [2017a] Shizhe Chen, Qin Jin, Jinming Zhao, and Shuai Wang. Multimodal Multi-task Learning for Dimensional and Continuous Emotion Recognition. _AVEC@ACM Multimedia_ , pages 19–26, 2017a.
* Chen et al. [2017b] Xinghao Chen, Hengkai Guo, Guijin Wang, and Li Zhang. Motion feature augmented recurrent neural network for skeleton-based dynamic hand gesture recognition. In _2017 IEEE International Conference on Image Processing (ICIP)_ , pages 2881–2885. IEEE, 2017b.
* Cohen et al. [1983] Sheldon Cohen, Tom Kamarck, and Robin Mermelstein. Perceived Stress Scale, 1983.
* [12] Crisis Text Line. https://www.crisistextline.org/, 2019. Accessed 2018/12/03.
* Dang et al. [2017] Ting Dang, Brian Stasak, Zhaocheng Huang, Sadari Jayawardena, Mia Atcheson, Munawar Hayat, Phu Ngoc Le, Vidhyasaharan Sethu, Roland Goecke, and Julien Epps. Investigating Word Affect Features and Fusion of Probabilistic Predictions Incorporating Uncertainty in AVEC 2017. _AVEC@ACM Multimedia_ , pages 27–35, 2017.
* de Gelder [2009] B de Gelder. Why bodies? Twelve reasons for including bodily expressions in affective neuroscience. _Philosophical Transactions of the Royal Society B: Biological Sciences_ , 364(1535):3475–3484, November 2009.
* [15] Detecting Crisis: An AI Solution. https://www.crisistextline.org/blog/ava2, 2018. Accessed 2018/11/20.
* Dibeklioglu et al. [2015] Hamdi Dibeklioglu, Zakia Hammal, Ying Yang, and Jeffrey F Cohn. Multimodal Detection of Depression in Clinical Interviews. _ICMI_ , pages 307–310, 2015.
* Du et al. [2015a] Yong Du, Yun Fu, and Liang Wang. Skeleton based action recognition with convolutional neural network. In _2015 3rd IAPR Asian Conference on Pattern Recognition (ACPR)_ , pages 579–583. IEEE, 2015a.
* Du et al. [2015b] Yong Du, Wei Wang, and Liang Wang. Hierarchical recurrent neural network for skeleton based action recognition. _CVPR_ , pages 1110–1118, 2015b.
* Ebert [1996] D Ebert. Eye-blink rates and depression: Is the antidepressant effect of sleep deprivation mediated by the dopamine system? _Neuropsychopharmacology_ , 15(4):332–339, October 1996.
* Ekman [2009] P Ekman. Telling lies: Clues to deceit in the marketplace, politics, and marriage (revised edition), 2009.
* Ekman and Friesen [1978] P Ekman and W V Friesen. Facial coding action system (FACS): A technique for the measurement of facial actions, 1978.
* Eyben et al. [2010] Florian Eyben, Martin Wöllmer, and Björn Schuller. openSMILE: the Munich versatile and fast open-source audio feature extractor. In _Proceedings of the ACM International Conference on Multimedia_. ACM, New York, NY, USA, October 2010.
* Fairbanks et al. [1982] Lynn A Fairbanks, Michael T McGuire, and Candace J Harris. Nonverbal interaction of patients and therapists during psychiatric interviews. _Journal of Abnormal Psychology_ , 91(2):109–119, 1982.
* Gierk et al. [2014] Benjamin Gierk, Sebastian Kohlmann, Kurt Kroenke, Lena Spangenberg, Markus Zenger, Elmar Brähler, and Bernd Löwe. The Somatic Symptom Scale–8 (SSS-8). _JAMA Internal Medicine_ , 174(3):399–407, March 2014.
* Gong and Poellabauer [2017] Yuan Gong and Christian Poellabauer. Topic Modeling Based Multi-modal Depression Detection. _AVEC@ACM Multimedia_ , pages 69–76, 2017.
* Gratch et al. [2014] Jonathan Gratch, Ron Artstein, Gale M Lucas, Giota Stratou, Stefan Scherer, Angela Nazarian, Rachel Wood, Jill Boberg, David DeVault, Stacy Marsella, David R Traum, Skip Rizzo, and Louis-Philippe Morency. The Distress Analysis Interview Corpus of human and computer interviews. _LREC_ , 2014.
* [27] Gumtree. https://www.gumtree.com/, 2019. Accessed 2019/06/01.
* Haggard and Isaacs [1966] Ernest A Haggard and Kenneth S Isaacs. Micromomentary facial expressions as indicators of ego mechanisms in psychotherapy. In _Methods of Research in Psychotherapy_, pages 154–165. Springer, Boston, MA, 1966.
* Hamilton [1967] Max Hamilton. Development of a Rating Scale for Primary Depressive Illness. _British Journal of Social and Clinical Psychology_ , 6(4):278–296, December 1967.
* Hardoon et al. [2006] David R Hardoon, Sandor Szedmak, and John Shawe-Taylor. Canonical Correlation Analysis: An Overview with Application to Learning Methods. _Neural Computation_, 16(12):2639–2664, 2004.
* Huang et al. [2017] Jian Huang, Ya Li, Jianhua Tao, Zheng Lian, Zhengqi Wen, Minghao Yang, and Jiangyan Yi. Continuous Multimodal Emotion Prediction Based on Long Short Term Memory Recurrent Neural Network. _AVEC@ACM Multimedia_ , pages 11–18, 2017.
* John and Srivastava [1999] Oliver P John and Sanjay Srivastava. The Big Five Trait Taxonomy: History, Measurement, and Theoretical Perspectives. In _Handbook of personality Theory and research_ , pages 102–138. t.personality-project.org, 1999.
* Joshi et al. [2012] Jyoti Joshi, Abhinav Dhall, Roland Goecke, Michael Breakspear, and Gordon Parker. Neural-Net Classification For Spatio-Temporal Descriptor Based Depression Analysis. In _Proceedings of the 21st International Conference on Pattern Recognition (ICPR)_. IEEE, 2012.
* Joshi et al. [2013a] Jyoti Joshi, Abhinav Dhall, Roland Goecke, and Jeffrey F Cohn. Relative Body Parts Movement for Automatic Depression Analysis. _ACII_ , pages 492–497, 2013a.
* Joshi et al. [2013b] Jyoti Joshi, Roland Goecke, Sharifa Alghowinem, Abhinav Dhall, Michael Wagner, Julien Epps, Gordon Parker, and Michael Breakspear. Multimodal assistive technologies for depression diagnosis and monitoring. _J. Multimodal User Interfaces_ , 7(3):217–228, 2013b.
* Joshi et al. [2013c] Jyoti Joshi, Roland Goecke, Gordon Parker, and Michael Breakspear. Can body expressions contribute to automatic depression analysis? In _2013 10th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2013)_, pages 1–7. IEEE, 2013c.
* Kroenke et al. [2001] Kurt Kroenke, Robert L Spitzer, and Janet B W Williams. The PHQ-9: Validity of a Brief Depression Severity Measure. _Journal of General Internal Medicine_ , 16(9):606–613, 2001.
* Kroenke et al. [2009] Kurt Kroenke, Tara W Strine, Robert L Spitzer, Janet B W Williams, Joyce T Berry, and Ali H Mokdad. The PHQ-8 as a measure of current depression in the general population. _Journal of Affective Disorders_ , 114(1-3):163–173, April 2009.
* Liu et al. [2015] Zhao Liu, Jianke Zhu, Jiajun Bu, and Chun Chen. A survey of human pose estimation: The body parts parsing based methods. _Journal of Visual Communication and Image Representation_ , 32:10–19, July 2015.
* Mahmoud et al. [2013] Marwa Mahmoud, Louis-Philippe Morency, and Peter Robinson. _Automatic multimodal descriptors of rhythmic body movement_. ACM, December 2013.
* [41] Mental Health Foundation. https://www.mentalhealth.org.uk/, 2019. Accessed 2019/06/04.
* Ory et al. [2013] Marcia G Ory, SangNam Ahn, Luohua Jiang, Kate Lorig, Phillip Ritter, Diana D Laurent, Nancy Whitelaw, and Matthew Lee Smith. National Study of Chronic Disease Self-Management: Six-Month Outcome Findings. _Journal of Aging and Health_ , 25(7):1258–1274, September 2013.
* Ozdas et al. [2000] A Ozdas, R G Shiavi, S E Silverman, M K Silverman, and D M Wilkes. Analysis of fundamental frequency for near term suicidal risk assessment. In _IEEE International Conference on Systems, Man, and Cybernetics_ , pages 1853–1858. IEEE, 2000.
* [44] Peer2Peer Cambridge. https://www.peer2peer-cambridge.org/, 2019. Accessed 2019/06/01.
* Peng et al. [2005] Hanchuan Peng, Fuhui Long, and C Ding. Feature selection based on mutual information criteria of max-dependency, max-relevance, and min-redundancy. _IEEE Transactions on Pattern Analysis and Machine Intelligence_ , 27(8):1226–1238, August 2005.
* Scherer et al. [2014] Stefan Scherer, Giota Stratou, Gale Lucas, Marwa Mahmoud, Jill Boberg, Jonathan Gratch, Albert Skip Rizzo, and Louis-Philippe Morency. Automatic audiovisual behavior descriptors for psychological disorder analysis. _Image and Vision Computing_ , 32(10):648–658, October 2014.
* Sharif Razavian et al. [2014] Ali Sharif Razavian, Hossein Azizpour, Josephine Sullivan, and Stefan Carlsson. CNN Features Off-the-Shelf: An Astounding Baseline for Recognition. In _IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)_, pages 806–813, 2014.
* Shukla et al. [2017] Parul Shukla, Kanad K Biswas, and Prem K Kalra. Recurrent Neural Network Based Action Recognition from 3D Skeleton Data. In _2017 13th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS)_, pages 339–345. IEEE, 2017.
* Simon et al. [2017] Tomas Simon, Hanbyul Joo, Iain A Matthews, and Yaser Sheikh. Hand Keypoint Detection in Single Images Using Multiview Bootstrapping. _CVPR_ , cs.CV, 2017.
* Sobin and Sackeim [1997] Christina Sobin and Harold A Sackeim. Psychomotor symptoms of depression. _American Journal of Psychiatry_, 154(1):4–17, 1997.
* Song et al. [2013] Yale Song, Louis-Philippe Morency, and Randall Davis. Learning a sparse codebook of facial and body microexpressions for emotion recognition. In _Proceedings of the 15th ACM International Conference on Multimodal Interaction_, pages 237–244, New York, NY, USA, 2013. ACM Press.
* Spitzer et al. [2006] Robert L Spitzer, Kurt Kroenke, Janet B W Williams, and Bernd Löwe. A Brief Measure for Assessing Generalized Anxiety Disorder. _Archives of Internal Medicine_ , 166(10):1092–1097, May 2006.
* Srivastava et al. [2003] Sanjay Srivastava, Oliver P John, Samuel D Gosling, and Jeff Potter. Development of personality in early and middle adulthood: Set like plaster or persistent change? _Journal of Personality and Social Psychology_ , 84(5):1041–1053, 2003.
* Su et al. [2017] Benyue Su, Huang Wu, and Min Sheng. Human action recognition method based on hierarchical framework via Kinect skeleton data. In _2017 International Conference on Machine Learning and Cybernetics (ICMLC)_ , pages 83–90. IEEE, 2017.
* Syed et al. [2017] Zafi Sherhan Syed, Kirill A Sidorov, and A David Marshall. Depression Severity Prediction Based on Biomarkers of Psychomotor Retardation. _AVEC@ACM Multimedia_ , pages 37–43, 2017.
* Wei et al. [2017] Shenghua Wei, Yonghong Song, and Yuanlin Zhang. Human skeleton tree recurrent neural network with joint relative motion feature for skeleton based action recognition. In _2017 IEEE International Conference on Image Processing (ICIP)_ , pages 91–95. IEEE, 2017.
* Wei et al. [2016] Shih-En Wei, Varun Ramakrishna, Takeo Kanade, and Yaser Sheikh. Convolutional Pose Machines. In _2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_ , pages 4724–4732. IEEE, 2016.
* Yang et al. [2017a] Le Yang, Dongmei Jiang, Xiaohan Xia, Ercheng Pei, Meshia Cédric Oveneke, and Hichem Sahli. Multimodal Measurement of Depression Using Deep Learning Models. _AVEC@ACM Multimedia_ , pages 53–59, 2017a.
* Yang et al. [2017b] Le Yang, Hichem Sahli, Xiaohan Xia, Ercheng Pei, Meshia Cédric Oveneke, and Dongmei Jiang. Hybrid Depression Classification and Estimation from Audio Video and Text Information. _AVEC@ACM Multimedia_ , pages 45–51, 2017b.
* Zhang et al. [2016] Hong Bo Zhang, Qing Lei, Bi Neng Zhong, Ji Xiang Du, and Jia Lin Peng. A Survey on Human Pose Estimation. _Intelligent Automation and Soft Computing_ , 22(3):483–489, July 2016.
# On Parameter Tuning in Meta-learning for Computer Vision
Farid Ghareh Mohammadi1, M. Hadi Amini2, and Hamid R. Arabnia1
1: Department of Computer Science, Franklin College of Arts and Sciences, University of Georgia, Athens, GA 30602
2: School of Computing and Information Sciences, College of Engineering and Computing, Sustainability, Optimization and Learning for InterDependent networks laboratory (solid lab), Florida International University, Miami, FL 33199
Emails: <EMAIL_ADDRESS>, <EMAIL_ADDRESS>, <EMAIL_ADDRESS>
###### Abstract
Learning to learn plays a pivotal role in meta-learning (MTL) for obtaining an
optimal learning model. In this paper, we investigate image recognition for
unseen categories of a given dataset with limited training information. We
deploy a zero-shot learning (ZSL) algorithm to achieve this goal. We also
explore the effect of parameter tuning on the performance of the semantic
auto-encoder (SAE). We further address the parameter tuning problem for
meta-learning, focusing especially on zero-shot learning. By combining
different embedded parameters, we improve the accuracy of the tuned SAE.
Advantages and disadvantages of parameter tuning and its application to image
classification are also explored.
Keywords: Advanced Machine Learning, Data Science, Meta-Learning, Zero-Shot
Learning, Few-Shot Learning, Optimized Learning, Parameter Tuning
## 1 Introduction
Motivation: Computer vision algorithms are essential to enable modern
functionalities in future smart cities [1] [2], including face recognition [3]
and automatic license plate recognition. Image classification stands as a
primary challenging problem in computer vision [4] [5] [6] [7]. Computer
vision also provides important applications such as visual and infrared sensor
data-based obstacle detection for the visually impaired [8]. Within computer
vision, deep learning is an advanced learning tool that uses a training
dataset to obtain high performance on a testing dataset [9] [10].
Furthermore, auto-encoders have recently become an active research topic in
computer vision [11]; we therefore focus on the semantic auto-encoder in this
paper. The ability to learn new tasks efficiently by leveraging prior
experience from related tasks plays a central role in artificial intelligence,
especially in learning. Meta-learning (MTL) is the most advanced machine
learning approach because it acquires the ability to learn as efficiently as
possible from prior information. MTL was first presented as early as 1987 by
Schmidhuber [12], and recent state-of-the-art studies [13] [14] [15]
illustrate how well MTL learns from a limited training dataset. In this
paper, we investigate an image recognition task: classifying images of unseen
categories in the testing dataset for which we lack examples in the training
dataset. We choose zero-shot learning (ZSL) for unseen image recognition [16]
to overcome this limitation properly.
Zero-shot learning is also known as attribute-based learning, defined as the
process of learning to recognise unseen objects. ZSL emphasizes learning the
distribution of new, unseen classes, given a meta-description of those
categories, and seeks correlations with existing seen categories in a training
dataset. This means that ZSL does not need any samples of the unseen classes
before its performance in predicting them is evaluated.
In recent years, ZSL [17] [18] [15] [13] [19] has been an active and
challenging research topic in advanced computer vision, machine learning, and
medical data analysis [20]. Drug discovery [20], image classification [18]
[13] and meta-sense [21] are examples of such research studies. Furthermore,
ZSL has penetrated other domains such as human action recognition and networks
[18]. In [18], researchers provided comprehensive information about ZSL
applications in computer and mobile networks.
ZSL is a promising MTL algorithm that behaves like human cognitive activity.
Here, we discuss emerging problems with ZSL. First, semantic space
representations, which include attributes (’A’) and word vectors (’W’), are
critical for understanding and learning from the seen categories in order to
apply that knowledge to unseen categories, yet achieving high performance with
these representations is challenging [17]. Humans can understand visual
objects and attributes based only on image explanations when recognising new
images. However, such explanations do not yield distinguishable results [22],
and even traditional machine learning algorithms will not perform promisingly
[23]. Second, ZSL requires selecting a mapping model to work with the unseen
classes. The most important feature of ZSL is learning a compatible
visual-semantic or attribute-based function that can semantically represent
objects. All in all, the more complex the function, the higher the risk of
over-fitting, which yields poor results on the unseen classes; whereas a
simple linear function yields poor classification accuracy on the seen classes
and does not adapt accurately to the unseen classes.
Contribution: In this paper, we address the two aforementioned emerging
problems in ZSL and present one promising solution. The main contribution of
this paper is to provide an optimal linear function, using tuned parameters,
that reaches the most promising classification results and outperforms the
state of the art. We illustrate the detailed procedure of this meta-mapping,
which is inspired by the semantic auto-encoder (SAE), in Algorithm 1. In the
given meta-mapping, we extend the work presented by Kodirov _et al_ [24].
The semantic auto-encoder (SAE) [24] is an advanced linear-mapping
auto-encoder that aims to find an optimal mapping function ($\mathscr{W}$) to
recognise and classify unseen classes. Algorithm 1 illustrates the
comprehensive procedure of SAE. In this paper, we first implement SAE
properly, and in a second step we optimise the algorithm by tuning a few of
its embedded parameters. Note that, in addition to meta-learning for computer
vision applications, parameter tuning plays a pivotal role in ensuring the
convergence of several algorithms [25, 26].
Algorithm 1 Implementation of optimized zero-shot learning (SAE) [24]
1: Input: a batch set of training inputs $(\mathscr{X},\mathscr{Y})$ and $training_{size}$
2: Output: the best mapping matrix $\mathscr{W}$ for zero-shot learning
3: Tune embedded parameters  // our contribution
4: Begin Training
5: for t = 0 $\cdots$ $training_{size}$ do
6:  Learn $\mathscr{W}$: $\mathcal{Y}\Leftarrow\mathscr{W}\mathscr{X}$
7:  $Err_{dst.}=\mid\mid\mathscr{X}-\mathscr{W}\mathscr{W}^{\prime}\mathscr{X}\mid\mid_{F}$  // learn $\mathscr{W}$ until $Err_{dst.}\leadsto 0$
8:  Optimize $\mathscr{W}$: $\mathscr{W}=\mathscr{C}/(\mathscr{A}+\mathscr{B})$  // solved as a Sylvester equation
9:  Return $\mathscr{W}$ and $\mathscr{W}^{T}$
10: end for
11: End Training
12: Begin Testing  // compute accuracy on unseen classes
13: for t = 0 $\cdots$ $test_{size}$ do
14:  $\Delta=\mid\mid Pred-Ground_{T}\mid\mid$
15:  Minimise $\Delta$  // we test unseen classes and maximise the performance of SAE
16:  if $\Delta$ is minimum then
17:   Performance += $1/test_{size}$
18:  end if
19: end for
20: End Testing
Organization: The rest of this paper is organized as follows. First, we review
related works regarding the use of ZSL for the recognition of unseen classes.
In Section 3, we state preliminary knowledge of meta-learning before
presenting the suggested meta-mapping (ZSL) algorithm. We then present
experimental results and discussion, followed by conclusions.
## 2 Related Works
In this section, we study related works on the use of zero-shot learning and
review prior examinations of evaluating unseen classes. While the number of
new zero-shot learning methods increases yearly, the criteria used to examine
the methods are not the same [19]. This makes evaluating all of the methods
particularly difficult.
We present meta-learning to overcome a limitation of traditional machine
learning, namely that a learned model fails to predict testing data whose
class was not trained during the training phase. Meta-learning, especially
zero-shot learning, overcomes this limitation by recognizing instances of
unseen classes, which were not observed during training. In this section, we
discuss related ZSL research studies.
Table 1: Comparing related datasets (SS-D refers to the dimension of the semantic space)

Features \ Datasets | AwA [16] | CUB [27] | ImNet-2 [28]
---|---|---|---
images | 30,475 | 11,788 | 218,000
total classes | 50 | 200 | 1360
seen classes | 40 | 150 | 1000
unseen classes | 10 | 50 | 360
SS-D | 85 | 312 | 1000
Lampert _et al_ [16] proposed an attribute-based prediction method for ZSL
towards object recognition. They introduced the direct attribute prediction
(DAP) method, which performs classification by learning from seen classes (y)
and being tested on unseen classes ($y^{\prime}$). The authors stated that
leveraging attributes enables highly accurate and cost-effective knowledge
transfer between seen classes in a training dataset and unseen classes in a
testing dataset. Romera-Paredes and Torr [29] presented embarrassingly simple
zero-shot learning (ESZSL) and evaluated it against DAP, a baseline algorithm.
They then improved the compatibility of the learning algorithm by adding a
regularization term.
Zhang and Saligrama [30] [31] introduced new ZSL methods based on semantic
similarity embedding (SSE) and joint latent similarity embedding (JLSE),
respectively. In [31], they formulated zero-shot recognition (ZSR) as a binary
classification problem and studied a framework leveraging dictionary learning
to learn the model parameters entirely, finally leading to a class-independent
classifier.
Akata _et al_ [32] presented an image classification technique called
structured joint embedding (SJE). Meanwhile, Changpinyo _et al_ [33]
introduced the synthesized classifier (SYNC) method, which uses combined
semantic descriptions $(A+W)$ to provide higher image recognition performance
compared with DAP. Bucher _et al_ [34] proposed a method that adds visual
features into the attribute space and then learns a metric to minimize
inconsistency by maximizing the adaptability of the semantic embedding.
Shigeto _et al_ [35] found that a least-squares regularised mapping function
does not handle the hubness problem well. Thus, they proposed regression- and
CCA-based approaches to ZSL that compute a reverse regression, i.e. embedding
class prototypes into the visual feature space. Kodirov _et al_ [24] proposed
SAE, a semantic auto-encoder that regularizes the learned model by mapping the
image features to the semantic space. Although Xian _et al_ [36] proposed
f-VAEGAN-D2, a feature-generating framework for $any$-$shot$ learning, there
is still room for improvement. Much work has been done for small datasets
[16] [30] [31] [29] [33] [32] [34] [35] [24]; however, only a few methods have
been proposed for large datasets [37] [28].
Hybrid embedding system: Norouzi _et al_ [37] proposed a hybrid image
embedding system, referred to as convex combination of semantic embeddings
(ConSE), to deal with n-way image classification; it lies between independent
classifier learning and compatibility learning frameworks. ConSE maps images
into the semantic embedding space using a convex combination of the class
label embedding vectors. Furthermore, Fu and Sigal [28] presented a method
called semi-supervised vocabulary-informed learning (SS-Voc).
## 3 Meta-learning for Computer Vision: Preliminaries and Algorithms
### 3.1 Preliminaries
In order to cover all available classes, machine learning and evolutionary
algorithms try to solve the high-dimensionality problems of large datasets,
such as the curse of dimensionality (CoD) [5] [7]. However, machine learning
still cannot learn all classes when there are only a few instances per class.
Meta-learning alleviates this problem by providing an advanced learning
process. Meta-learning holds three important promises for computer vision
problems, specifically image classification and image recognition: 1) few-shot
learning (FSL), 2) one-shot learning (OSL), and 3) zero-shot learning (ZSL).
The crux of FSL is to learn a meta-learner that understands category
boundaries from no more than a few examples of each category. FSL, or k-shot
learning, takes k samples for each category in the training phase. Algorithm
2, which covers both FSL and OSL, presents semantic pseudocode of
model-agnostic meta-learning (MAML), proposed by Finn _et al_ [13]. The
pseudocode highlights the added meta-step, which illustrates how MTL can
overcome the traditional machine learning limitation by using a meta-learner
to optimize the learner with gradient descent optimization.
Algorithm 2 Implementation of Meta-learning (MAML) [13]
1: Input: a batch set of input targets (I, Y) and $Batch_{size}$
2: Output: the best meta-learner $F(\theta^{\prime})$ for few-shot learning and an optimal mapping matrix $\mathscr{W}$ for zero-shot learning
3: Begin
4: while work is not done do
5:  for t = 0 $\cdots$ $Batch_{size}$ do
6:   Learn from the training batch set  // we learn the learner
7:   Learn new $\theta$
8:   Update $F(\theta)$
9:  end for
10:  $\theta^{\prime}=\theta^{\prime}+\nabla\mathscr{L}(F(\theta))$  // we optimize the meta-learner until $\nabla\mathscr{L}(F(\theta))\leadsto 0$
11:  Update $F(\theta^{\prime})\backsim\theta^{\prime}$
12: end while
13: End
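To make the update in Algorithm 2 concrete, the following is a minimal PyTorch sketch of one MAML inner/outer step for a single linear layer; the toy regression tasks and step sizes are illustrative assumptions, not part of the original formulation:

```python
import torch
import torch.nn.functional as F

def maml_step(model, tasks, alpha=0.01, beta=0.001):
    """One MAML meta-update: adapt per task with an inner gradient step,
    then update the meta-parameters from the post-adaptation losses."""
    theta = list(model.parameters())               # [weight, bias]
    meta_loss = 0.0
    for X_tr, y_tr, X_val, y_val in tasks:
        loss = F.mse_loss(model(X_tr), y_tr)
        grads = torch.autograd.grad(loss, theta, create_graph=True)
        adapted = [p - alpha * g for p, g in zip(theta, grads)]
        pred = F.linear(X_val, adapted[0], adapted[1])   # adapted forward pass
        meta_loss = meta_loss + F.mse_loss(pred, y_val)
    for p, g in zip(theta, torch.autograd.grad(meta_loss, theta)):
        p.data -= beta * g                         # outer (meta) gradient step

model = torch.nn.Linear(1, 1)
tasks = []
for slope in (1.0, 2.0):                           # two toy regression tasks
    X = torch.rand(10, 1)
    tasks.append((X, slope * X, X, slope * X))
maml_step(model, tasks)
```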
### 3.2 Preliminaries for Zero Shot Learning
Notations: Suppose $\mathscr{D}=(\mathscr{X},\mathscr{Y})$ denotes a training
dataset with seen classes $E=\{1,2,\cdots,n\}$. Considering $\mathscr{X}$ as a
$d$-dimensional feature space, we map $\mathscr{X}$ into a $k$-dimensional
latent space with a mapping matrix $\mathscr{W}$; we name this latent
representation $S$. Then, we map this latent representation back to the
feature space $\mathscr{\hat{X}}$ using the transpose of $\mathscr{W}$, which
is $\mathscr{W}^{T}$. In the Optimize step of Algorithm 1, the authors use the
well-known Sylvester equation to calculate an optimum mapping matrix from
$\mathscr{A}$, $\mathscr{B}$ and $\mathscr{C}$, which stand for $SS^{T}$,
$\lambda XX^{T}$ and $(1+\lambda)SX^{T}$, respectively. We use this
$\mathscr{W}$ to recognise unseen classes in a testing dataset
$D^{t}=(X^{t},Y^{t})$ with unseen classes $Z=\{n+1,n+2,\cdots,m\}$.
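Using these notations, a minimal sketch of the closed-form solve follows, using SciPy's Sylvester solver; the toy dimensions, the $\lambda$ value, and the random data are illustrative assumptions:

```python
import numpy as np
from scipy.linalg import solve_sylvester

def sae_mapping(X, S, lam=0.2):
    """Closed-form SAE projection: solve A W + W B = C with A = S S^T,
    B = lam * X X^T and C = (1 + lam) * S X^T, for features X (d x n)
    and semantic representations S (k x n)."""
    A = S @ S.T
    B = lam * (X @ X.T)
    C = (1 + lam) * (S @ X.T)
    return solve_sylvester(A, B, C)   # W maps feature space to semantic space

rng = np.random.default_rng(0)
X, S = rng.random((30, 100)), rng.random((8, 100))   # toy d=30, k=8, n=100
W = sae_mapping(X, S)
S_hat = W @ X        # encode: project (unseen) features into the semantic space
X_hat = W.T @ S_hat  # decode back to the feature space with the transpose
```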
## 4 Experimental Result
We investigate extensively the benefits of a few factors in the SAE algorithm
[24]. SAE has only one explicit parameter, $\lambda$, which is set differently
for each dataset. However, SAE also has a few embedded parameters, including
HITK, Dist and the sorting mode, which need to be tuned optimally. We tune the
embedded parameters over different ranges and values as follows: HITK ranges
between 1 and the total number of unseen classes per dataset; Dist selects a
kernel to calculate the similarity between mapped unseen-class instances and
learned seen-class instances; and the sorting mode is either ascending or
descending. It is worth mentioning that we mostly give HITK the values
1 $\cdots$ 7 and 10. Examining all possible combinations is quite expensive;
therefore, we present a set of results per HITK value for comparison. We find
that only HITK, one of the embedded parameters, has a direct, noticeable and
positive effect on the performance of the tuned SAE.
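Assuming a hit@k reading of HITK (a test sample counts as correct if its true class is among the k nearest class prototypes), the effect of increasing HITK can be sketched as follows; the distance matrix here is a toy stand-in:

```python
import numpy as np

def hitk_accuracy(distances, true_labels, k=1):
    """A sample counts as correct if its true class is among the k nearest
    class prototypes (ascending sorting mode assumed).

    distances: (n_samples, n_classes) distances from mapped test embeddings
    to the class prototypes."""
    topk = np.argsort(distances, axis=1)[:, :k]
    hits = (topk == np.asarray(true_labels)[:, None]).any(axis=1)
    return hits.mean()

rng = np.random.default_rng(0)
D = rng.random((200, 10))                 # toy distances: 200 samples, 10 classes
y = rng.integers(0, 10, 200)
print(hitk_accuracy(D, y, k=1), hitk_accuracy(D, y, k=5))  # rises with k
```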
### 4.1 Semantic Space
To accurately calculate the performance of the tuned SAE, we describe the
semantic space (SS) and the dimension of the semantic space (SS-D). We put
particular emphasis on the semantic space since the idea behind zero-shot
learning relies on it. The way the inputs are described becomes important in
the training phase of ZSL, especially for SAE. In previous research studies,
scientists applied two different types of semantic space: attributes ($A$) and
word vectors ($W$). The works presented in Tables 2 and 3 mostly used
attributes (A), with two exceptions. SJE [32] worked with a combined semantic
space (A+W), which yielded a result of 76.3, better than the basic
attribute-based method (DAP). Moreover, SS-Voc [28] leveraged both (A/W), but
not at the same time, and its results of 78.3/68.9 illustrate that the
approach was well designed. However, SAE [24] only used an attribute-based
semantic space to calculate the performance of recognising unseen classes. It
is noteworthy that only word vectors (’W’) have been used as SS for the large
datasets in the compared works shown in Table 3 [24].
### 4.2 Pre-defined Parameters
In [24], Kodirov _et al_ compared their proposed method, SAE, with more than
10 highly qualified methods on small datasets, including AwA [16], CUB [27],
$aP\&Y$ [38] and SUN [39]. The authors improved the accuracy of recognising
unseen images by at least 6% compared with SS-Voc and by at most 20% compared
with the basic attribute-based learning method, DAP. Furthermore, the
researchers used two large datasets: $ImNet-1$ [28] and $ImNet-2$ [28]. Their
image recognition errors for the large datasets are beyond 60%.
### 4.3 Effect of Parameter Tuning on Accuracy of Meta-learning for Computer
Vision
To illustrate in detail the effect of parameter tuning on accuracy of meta-
learning for computer vision, first, we discuss the used datasets, provide an
ablation study, define an evaluation metric and present the state-of-the-art
works and provide comparative evaluation in the following sections.
### 4.4 Dataset
We choose two small but popular benchmark datasets and one large benchmark
dataset for ZSL in this study: AwA (Animals with Attributes) [16] consists of
more than 30,000 images covering 50 different classes of animals;
CUB-200-2011 Birds (CUB) [27] consists of 11,788 images of 200 bird classes,
150 of which are seen and 50 unseen; ImNet-2 [28] provides 1000 seen classes
and 360 unseen classes, where the seen and unseen classes are extracted from
ILSVRC2012 and ILSVRC2010, respectively. Table 1 presents detailed information
on these datasets for both training and testing.
Table 2: Comparing the related methods with our contribution on small datasets

Method | AwA | CUB
---|---|---
DAP [16] | 60.1 | -
ESZSL [29] | 75.3 | 48.7
SSE [30] | 76.3 | 30.4
JLSE [31] | 80.5 | 41.8
SJE [32] (A+W) | 76.3 | 50.1
SynC [33] | 72.9 | 54.4
MLZSC [34] | 72.9 | 54.4
PRZSL [35] | 80.4 | 52.4
f-VAEGAN-D2 (IND) [36] | 71.1 | 61.0
f-VAEGAN-D2 (TRAN) [36] | 89.8 | 71.1
SS-Voc [28] (A/W) | 78.3/68.9 | -
SAE (W) [24] | 84.7 | 61.4
SAE ($W^{T}$) [24] | 84.0 | 60.9
Our contribution | |
SAE (W)-1 | 84.7 | 61.4
SAE ($W^{T}$)-1 | 84.0 | 60.9
SAE (W)-2 | 74.6 | 78.0
SAE ($W^{T}$)-2 | 88.9 | 97.4
SAE (W)-3 | 83.7 | 85.13
SAE ($W^{T}$)-3 | 94.4 | 97.4
SAE (W)-4 | 91.0 | 89.9
SAE ($W^{T}$)-4 | 97.5 | 97.4
SAE (W)-5 | 96.4 | 92.4
SAE ($W^{T}$)-5 | 99.1 | 97.4
SAE (W)-6 | 99.7 | 94.1
SAE ($W^{T}$)-6 | 99.7 | 97.4
SAE (W)-7 | 99.5 | 95.3
SAE ($W^{T}$)-7 | 99.8 | 97.4
SAE (W)-10 | 100 | 97.4
SAE ($W^{T}$)-10 | 100 | 97.4
Table 3: Comparing the related methods with our contribution on a large dataset

Method | ImNet-2 ($\lambda$=5) | ImNet-2 ($\lambda$=6)
---|---|---
ConSE [37] | 15.5 | 15.5
SS-Voc [28] | 16.8 | 16.8
SAE (W) [24] | 26.3 | 26.3
SAE ($W^{T}$) [24] | 27.2 | 27.2
Our solution with parameter tuning | |
SAE (W)-1 | 12.2 | 12.2
SAE ($W^{T}$)-1 | 12.9 | 13.0
SAE (W)-2 | 17.6 | 17.6
SAE ($W^{T}$)-2 | 18.3 | 18.4
SAE (W)-3 | 21.2 | 21.1
SAE ($W^{T}$)-3 | 22.1 | 22.1
SAE (W)-4 | 24.0 | 23.9
SAE ($W^{T}$)-4 | 24.9 | 24.9
SAE (W)-5 | 26.3 | 26.3
SAE ($W^{T}$)-5 | 27.2 | 27.3
SAE (W)-6 | 28.3 | 28.3
SAE ($W^{T}$)-6 | 29.2 | 29.3
SAE (W)-7 | 30.1 | 30.1
SAE ($W^{T}$)-7 | 31.1 | 31.2
SAE (W)-10 | 34.8 | 34.7
SAE ($W^{T}$)-10 | 35.6 | 35.7
### 4.5 Ablation Study
In this paper, we work with the semantic auto-encoder (SAE), an advanced
supervised clustering algorithm for the specific purpose of zero-shot
learning. The main strength of this paper is the tuning of SAE, which comes
from examining the output while updating the parameters. ZSL usually uses a
complex projection function, but SAE leverages a linear projection, as shown
in Algorithm 1.
### 4.6 Evaluation Metric
We compute the performance of the tuned SAE based on the loss function
$\mid\mid Pred-Ground_{T}\mid\mid$, which is also presented in [40] for a
metric learning function and supervised clustering.
### 4.7 Competitors
We compare our method, the tuned SAE, with the state-of-the-art method [36]
and the other works compared in [24]. All compared research studies used
either zero-shot learning (supervised learning) [31] [16] or semi-supervised
learning [28].
### 4.8 Comparative Evaluation
We make the following observations based on the results in Tables 2 and 3:
(1) Our tuned SAE model obtained the best results on both small and large
datasets. (2) On the small datasets, the gap between the tuned SAE’s results
and the strongest competitors varies due to the different results of SAE. Note
that our tuned model is a linear projection function, while most of the
compared models use complex nonlinear projection functions, and some of them
use more than one semantic space, such as SJE [32] and SS-Voc [28]. (3)
Although the tuned SAE performed well on the large-scale dataset ($ImNet-2$),
our model did not improve the performance by more than 7% compared with SAE;
nevertheless, the tuned SAE yields a promising result in comparison with the
other methods. (4) The performance of the tuned SAE on the small datasets is
far better than on the large dataset. (5) Last but not least, increasing the
HITK value from 2 to 10 directly increases the performance.
## 5 Discussion
Tuning plays a major role in all algorithms, especially machine learning
algorithms. It provides a comprehensive setting in which an algorithm learns
from a training dataset, so that the learned model yields a high performance
on a testing dataset. However, machine learning algorithms may not do well on
unseen classes, even with optimised parameters. In this paper, we address this
problem, study the semantic auto-encoder (SAE) and develop a tuned SAE in
order to obtain a better performance on the clustering that underlies
zero-shot learning. The results in Table 2 show that the tuned SAE leads to a
better performance than SAE and the other related methods. Comparing Table 2
with Table 3, we find that the tuned SAE performs far better for small
datasets than for large datasets. This paper shows that tuning is a big
advantage in zero-shot learning for image recognition; however, it does not
work as well for large datasets. Table 3 shows that with different $\lambda$
values we obtain different results for $SAE(W)$ and $SAE(W^{T})$, where
$SAE(W^{T})$ acts as a decoder and maps data from the semantic space back to
the feature space to compute the performance.
## 6 Conclusion
Having an optimal learning model is key in the world of machine learning.
However, learning unseen classes is a critical issue for traditional machine
learning. In this paper, we address this problem and investigate advanced
learning processes that enable learning from seen classes to predict unseen
classes accurately. We focus on SAE, a semantic auto-encoder that enables us
to find an optimal mapping function between the semantic space and the feature
space, such that it also works for unseen semantic descriptions and classes.
We tune SAE’s embedded parameters in a way that yields better results than the
original parameters presented in [24]. The new results outperform the original
results as well as state-of-the-art algorithms.
## References
* [1] M. H. Amini, H. Arasteh, and P. Siano, “Sustainable smart cities through the lens of complex interdependent infrastructures: Panorama and state-of-the-art,” in _Sustainable Interdependent Networks II_. Springer, 2019, pp. 45–68.
* [2] R. Louzada Campos, R. Jafri, S. A. Ali, O. Correa, and H. R. Arabnia, “Evaluation of the google tango tablet development kit: A case study of a localization and navigation system,” in _2016 International Conference on Computational Science and Computational Intelligence (CSCI)_. IEEE, 2016, pp. 829–835.
* [3] E. Parcham, N. Mandami, A. N. Washington, and H. R. Arabnia, “Facial expression recognition based on fuzzy networks,” pp. 829–835, 2016.
* [4] F. G. Mohammadi and M. S. Abadeh, “A survey of data mining techniques for steganalysis,” _Recent Advances in Steganography_ , pp. 1–25, 2012.
* [5] ——, “A new metaheuristic feature subset selection approach for image steganalysis,” _Journal of Intelligent & Fuzzy Systems_, vol. 27, no. 3, pp. 1445–1455, 2014.
* [6] F. G. Mohammadi and H. Sajedi, “Region based image steganalysis using artificial bee colony,” _Journal of Visual Communication and Image Representation_ , vol. 44, pp. 214–226, 2017.
* [7] F. G. Mohammadi and M. S. Abadeh, “Image steganalysis using a bee colony based feature selection algorithm,” _Engineering Applications of Artificial Intelligence_ , vol. 31, pp. 35–43, 2014.
* [8] R. Jafri, R. L. Campos, S. A. Ali, and H. R. Arabnia, “Visual and infrared sensor data-based obstacle detection for the visually impaired using the google project tango tablet development kit and the unity engine,” _IEEE Access_ , vol. 6, pp. 443–454, 2017.
* [9] S. Amirian, Z. Wang, T. R. Taha, and H. R. Arabnia, “Dissection of deep learning with applications in image recognition,” in _Proceedings of International Conference on Computational Science and Computational Intelligence (CSCI 2018: December 2018, USA); "Artificial Intelligence" Research Track (CSCI-ISAI)_ , 2018, pp. 1132–1138.
* [10] S. Voghoei, N. Hashemi Tonekaboni, J. Wallace, and H. R. Arabnia, “Deep learning at the edge,” in _Proceedings of International Conference on Computational Science and Computational Intelligence CSCI, Internet of Things" Research Track_ , 2018, pp. 895–901.
* [11] Z. Wang, F. Li, T. R. Taha, and H. R. Arabnia, “Improved automating seismic facies analysis using deep dilated attention autoencoders,” in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops_ , 2019, pp. 0–0.
* [12] J. Schmidhuber, “Evolutionary principles in self-referential learning,” 1987.
* [13] C. Finn, P. Abbeel, and S. Levine, “Model-agnostic meta-learning for fast adaptation of deep networks,” in _Proceedings of the 34th International Conference on Machine Learning-Volume 70_. JMLR. org, 2017, pp. 1126–1135.
* [14] A. Rajeswaran, C. Finn, S. Kakade, and S. Levine, “Meta-learning with implicit gradients,” _arXiv preprint arXiv:1909.04630_ , 2019.
* [15] Y. Chen, A. L. Friesen, F. Behbahani, D. Budden, M. W. Hoffman, A. Doucet, and N. de Freitas, “Modular meta-learning with shrinkage,” 2019.
* [16] C. H. Lampert, H. Nickisch, and S. Harmeling, “Attribute-based classification for zero-shot visual object categorization,” _IEEE Transactions on Pattern Analysis and Machine Intelligence_ , vol. 36, no. 3, pp. 453–465, 2013.
* [17] Z. Ding, H. Zhao, and Y. Fu, _Zero-Shot Learning_. Springer International Publishing, 2019, pp. 127–144.
* [18] C. Zhang, P. Patras, and H. Haddadi, “Deep learning in mobile and wireless networking: A survey,” _IEEE Communications Surveys & Tutorials_, 2019.
* [19] Y. Xian, B. Schiele, and Z. Akata, “Zero-shot learning-the good, the bad and the ugly,” in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , 2017, pp. 4582–4591.
* [20] H. Larochelle, D. Erhan, and Y. Bengio, “Zero-data learning of new tasks.” in _AAAI_ , vol. 1, no. 2, 2008, p. 3.
* [21] F. G. Mohammadi and M. H. Amini, “Promises of meta-learning for device-free human sensing: Learn to sense,” in _Proceedings of the 1st ACM International Workshop on Device-Free Human Sensing_ , ser. DFHS’19. New York, NY, USA: ACM, 2019, pp. 44–47. [Online]. Available: http://doi.acm.org/10.1145/3360773.3360884
* [22] D. Parikh and K. Grauman, “Interactively building a discriminative vocabulary of nameable attributes,” in _CVPR 2011_. IEEE, 2011, pp. 1681–1688.
* [23] K. Duan, D. Parikh, D. Crandall, and K. Grauman, “Discovering localized attributes for fine-grained recognition.” IEEE, 2012, pp. 3474–3481.
* [24] E. Kodirov, T. Xiang, and S. Gong, “Semantic autoencoder for zero-shot learning,” in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , 2017, pp. 3174–3183.
* [25] M. H. Amini, “Distributed computational methods for control and optimization of power distribution networks,” Ph.D. dissertation, Carnegie Mellon University, 2019.
* [26] M. H. Amini, J. Mohammadi, and S. Kar, “Distributed holistic framework for smart city infrastructures: Tale of interdependent electrified transportation network and power grid,” _IEEE Access_ , vol. 4, 2019.
* [27] C. Wah, S. Branson, P. Welinder, P. Perona, and S. Belongie, “The caltech-ucsd birds-200-2011 dataset,” 2011.
* [28] Y. Fu and L. Sigal, “Semi-supervised vocabulary-informed learning,” in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , 2016, pp. 5337–5346.
* [29] B. Romera-Paredes and P. Torr, “An embarrassingly simple approach to zero-shot learning,” in _International Conference on Machine Learning_ , 2015, pp. 2152–2161.
* [30] Z. Zhang and V. Saligrama, “Zero-shot learning via semantic similarity embedding,” in _Proceedings of the IEEE international conference on computer vision_ , 2015, pp. 4166–4174.
* [31] ——, “Zero-shot learning via joint latent similarity embedding,” 2016, pp. 6034–6042.
* [32] Z. Akata, S. Reed, D. Walter, H. Lee, and B. Schiele, “Evaluation of output embeddings for fine-grained image classification,” 2015, pp. 2927–2936.
* [33] S. Changpinyo, W.-L. Chao, B. Gong, and F. Sha, “Synthesized classifiers for zero-shot learning,” in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , 2016, pp. 5327–5336.
* [34] M. Bucher, S. Herbin, and F. Jurie, “Improving semantic embedding consistency by metric learning for zero-shot classiffication,” in _European Conference on Computer Vision_. Springer, 2016, pp. 730–746.
* [35] Y. Shigeto, I. Suzuki, K. Hara, M. Shimbo, and Y. Matsumoto, “Ridge regression, hubness, and zero-shot learning,” in _Joint European Conference on Machine Learning and Knowledge Discovery in Databases_. Springer, 2015, pp. 135–151.
* [36] Y. Xian, S. Sharma, B. Schiele, and Z. Akata, “F-vaegan-d2: A feature generating framework for any-shot learning,” in _The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_ , June 2019.
* [37] M. Norouzi, T. Mikolov, S. Bengio, Y. Singer, J. Shlens, A. Frome, G. S. Corrado, and J. Dean, “Zero-shot learning by convex combination of semantic embeddings,” _arXiv preprint arXiv:1312.5650_ , 2013.
* [38] A. Farhadi, I. Endres, D. Hoiem, and D. Forsyth, “Describing objects by their attributes,” in _2009 IEEE Conference on Computer Vision and Pattern Recognition_. IEEE, 2009, pp. 1778–1785.
* [39] G. Patterson, C. Xu, H. Su, and J. Hays, “The sun attribute database: Beyond categories for deeper scene understanding,” _International Journal of Computer Vision_ , vol. 108, no. 1-2, pp. 59–81, 2014.
* [40] M. T. Law, Y. Yu, M. Cord, and E. P. Xing, “Closed-form training of mahalanobis distance for supervised clustering,” in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , 2016, pp. 3909–3917.
|
2024-09-04T02:54:56.310899 | 2020-02-26T20:05:04 | 2003.00853 | {
"authors": "Ivan Zorin, Jakob Kilgus, Kristina Duswald, Bernhard Lendl, Bettina\n Heise, and Markus Brandstetter",
"full_text_license": null,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "arxiv-papers-0000.json.gz:25985",
"submitter": "Ivan Zorin",
"url": "https://arxiv.org/abs/2003.00853"
} | arxiv-papers | # Sensitivity-Enhanced Fourier Transform Mid-Infrared Spectroscopy Using a
Supercontinuum Laser Source
Ivan Zorin
Research Center for Non-Destructive Testing
Science Park 2, Altenberger Str.69
4040 Linz, Austria
<EMAIL_ADDRESS>
&Jakob Kilgus
Research Center for Non-Destructive Testing
Science Park 2, Altenberger Str.69
4040 Linz, Austria
&Kristina Duswald
Research Center for Non-Destructive Testing
Science Park 2, Altenberger Str.69
4040 Linz, Austria
&Bernhard Lendl
Institute for Chemical Technologies and Analytics
TU Wien, Getreidemarkt 9
1060 Vienna, Austria
&Bettina Heise
Research Center for Non-Destructive Testing
Science Park 2, Altenberger Str.69
4040 Linz, Austria
&Markus Brandstetter
Research Center for Non-Destructive Testing
Science Park 2, Altenberger Str.69
4040 Linz, Austria
<EMAIL_ADDRESS>
Corresponding author: <EMAIL_ADDRESS>
###### Abstract
Fourier transform infrared (FTIR) spectrometers have been the dominant
technology in the field of mid-infrared (MIR) spectroscopy for decades.
Supercontinuum laser sources operating in the MIR spectral region now offer
the potential to enrich the field of FTIR spectroscopy due to their
distinctive properties, such as high-brightness, broadband spectral coverage
and enhanced stability. In our contribution, we introduce this advanced light
source as a replacement for conventional thermal emitters. Furthermore, an
approach to efficient coupling of pulsed MIR supercontinuum sources to FTIR
spectrometers is proposed and considered in detail. The experimental part is
devoted to pulse-to-pulse energy fluctuations of the applied supercontinuum
laser, performance of the system, as well as the noise and long-term
stability. Comparative measurements performed with a conventional FTIR
instrument equipped with a thermal emitter illustrate that similar noise
levels can be achieved with the supercontinuum-based system. The analytical
performance of the supercontinuum-based FTIR spectrometer was tested for a
concentration series of aqueous formaldehyde solutions in a liquid flow cell
(500 µm path length) and compared with the conventional FTIR (130 µm path
length). The results show a four-times-enhanced detection limit due to the
extended path length enabled by the high brightness of the laser. In
conclusion, FTIR spectrometers equipped with novel broadband MIR
supercontinuum lasers could outperform traditional systems providing superior
performance, e.g., interaction path lengths formerly unattainable, while
maintaining low noise levels known from highly stable thermal emitters.
_Keywords:_ Mid-infrared spectroscopy $\cdot$ MIR $\cdot$ supercontinuum laser
source $\cdot$ Fourier transform infrared spectroscopy $\cdot$ FTIR
## 1 Introduction
Fourier-transform infrared (FTIR) spectroscopy has been a well established and
widely used tool for chemical characterization in various application
scenarios for decades. FTIR spectrometers are still the gold standard in the
mid-infrared (MIR) spectral range exhibiting reasonable acquisition times,
high sensitivity and spectral resolution[1]. In their typical configuration,
these spectrometers employ thermal light sources emitting black-body radiation
perfectly fitted for many applications. Nevertheless, thermal emitters impose
several limitations caused by their inherent properties, such as spatial
incoherence, low power and omnidirectionality. Recently, a highly interesting
new type of broadband source has been emerging into the MIR spectral region,
namely the supercontinuum laser source. MIR supercontinuum lasers are nowadays
operating in the same or broader wavelength range as thermal sources[2, 3]. In
contrast to the latter, they exhibit drastically higher brightness, spatial
coherence, and stability [4, 5] making them a promising tool for spectroscopy.
Furthermore, supercontinuum sources are an attractive alternative to the
already established MIR quantum cascade lasers (QCL)[6, 7].
The noise characteristics of supercontinuum generation, which had been a
deterrent in their early days, were significantly improved since then[8, 9,
5]. Successful demonstrations of MIR supercontinuum source-based setups in a
wide variety of noise sensitive applications [10, 11, 12, 13, 14, 15] prove
their practical suitability. Furthermore, it is expected that the trend of
noise reduction will continue e.g. by implementing the supercontinuum
generation in an all-normal-dispersion regime, which is insensitive to the
input pump noise and delivers enhanced shot-to-shot spectral coherence[16,
17, 18, 19, 20]. Meanwhile, with respect to the achieved average power,
output powers of up to 21.8 W were recently demonstrated in the wavelength range of 1.9
µm - 3.8 µm[21]. The broad spectral range covered by these sources is
continuously extending [22, 23, 24, 25, 26, 27] even beyond the fingerprint
region revealing attractive potentials for spectroscopic measurements. This
ultra-broadband coverage, including both near-infrared (NIR) and also the MIR
range, is unique and only offered by supercontinuum lasers, while QCLs
inherently offer a narrow-band wavelength tunability.
Due to the strict requirements on precise control of the mirror position, FTIR
spectrometers are designed to operate with continuous-wave (CW) sources as the
modulated signal is sampled at a defined frequency. This scheme limits the
operation of FTIR spectrometers with pulsed sources unless the sampling and
pulse repetition rates are carefully matched. For this reason, the
latter is usually set much higher to go into quasi-CW mode. This approach has
been evaluated by e.g. the combination of an FTIR and a NIR supercontinuum
source radiating at repetition rates of 80 MHz and 100 MHz already resulting
in an improved signal-to-noise ratio (SNR) and detection limits as well as
extended interaction path lengths[28, 29, 30]. In the much more attractive MIR
spectral range, currently available supercontinuum sources typically operate
at repetition rates from tens of kHz and up to several MHz with a pulse
duration in the sub-nanosecond range. Due to the asynchronization of the
emission and sampling performed by FTIR spectrometers, additional noise and
spectral distortions are introduced resulting in insufficient stability of the
signal for FTIR measurements[31, 32]. CW supercontinuum sources may be a key to
solving the problem; however, their technical complexity and disadvantages
(fibers of several tens of kilometers in length, high power requirements, poor broadening)
[2, 33] make them challenging especially in the MIR spectral range. From an
analytical point of view, CW operation of such bright sources would introduce
substantial thermal load, which may cause damage of the sample under
investigation.
In this contribution, we prove the principle suitability of pulsed MIR
supercontinuum lasers as a light source in an FTIR instrument. We define and
evaluate practical problems for coupling with an FTIR spectrometer that is
highly sensitive to intensity variations and report a successful method to
overcome the sampling problem. In the experimental part, we examine pulse-to-
pulse intensity fluctuations over the supercontinuum emission spectrum, the
noise of the experimental setup and its stability. Finally, the analytical
performance of the proposed solution is investigated and compared to
conventional state-of-the-art instrumentation.
## 2 Materials and methods
### 2.1 Mid-infrared supercontinuum laser source
In our study, a novel and commercially available supercontinuum source (NKT
Photonics, SuperK MIR) with a spectral coverage from 1.1 µm to 4.4 µm was
applied. The ultra-broadband spectrum is generated due to a sequence of non-
linear processes initiated in a ZBLAN (ZrF4-BaF2-LaF3-AlF3-NaF) fiber[2],
pumped in the spectral region of around 2 µm by a pre-broadened and amplified
seed laser (1.55 µm).
The repetition rate of the source is adjustable in the range of 2 MHz to 3 MHz
(2.5 MHz default). The supercontinuum pulse width is shorter than 1 ns
yielding a duty cycle of less than 0.5%, which causes spectral distortion and
irreproducibility of FTIR measurements when the source is directly applied without
additional measures[31]. The output beam of the supercontinuum source is a
single Gaussian mode with a beam quality parameter $\mathrm{M^{2}\leq 1.1}$
and a diameter of around 4 mm with a beam divergence of $2$ mrad. The total
average power of the source was measured as 475 mW, of which 200 mW are
radiated in the MIR spectral range.
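For orientation, the duty cycle and a rough peak-power estimate follow directly from these pulse parameters. The short Python sketch below (our illustration, with variable names of our own choosing) reproduces the quoted duty cycle and gives an order-of-magnitude peak power under the simplifying assumption of rectangular pulses.

```python
# Rough pulse-parameter estimates from the quoted source specifications.
# Assumes rectangular sub-nanosecond pulses; illustrative only.
pulse_width_s = 1e-9     # upper bound on the pulse width (< 1 ns)
rep_rate_hz = 2.5e6      # default repetition rate
avg_power_w = 0.475      # total measured average power

duty_cycle = pulse_width_s * rep_rate_hz   # 0.25%, below the quoted 0.5% bound
peak_power_w = avg_power_w / duty_cycle    # ~190 W, order of magnitude only
print(f"duty cycle ~ {duty_cycle:.2%}, peak power ~ {peak_power_w:.0f} W")
```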
Characterization and verification of the pulse-to-pulse energy fluctuations
were performed using a monochromator (Gilden Photonics, GM500) and a high-
speed Mercury Cadmium Telluride (MCT) detector (PVM series, Vigo, rise time
$\leq 0.7$ ns and 230 MHz bandwidth).
### 2.2 Experimental setup
The experimental setup consisted of a commercial FTIR instrument (Bruker
Optics, Vertex 70) and the pulsed supercontinuum laser as depicted in Fig. 1.
Figure 1: Scheme
of the experimental setup: FTIR spectrometer (Bruker Vertex 70, simplified
scheme), SC - supercontinuum laser source, BPF - band-pass spectral filter,
LFC - liquid flow cell, ROE - spectrometer built-in read-out electronics and
mirror control, M1 and M2 - interferometer mirrors and BS - beam splitter; RD
- detector used to produce reference interferogram.
In order to effectively exploit the dynamic range of the detector and avoid
oversaturation by NIR spectral components (e.g. strong and high power seed
laser line at 1.55 µm), a suitable bandpass filter (BPF, Thorlabs FB3500-500,
3100 cm-1 - 2650 cm-1) was employed, thereby determining the investigated
spectral range; the average power of the transmitted band (i.e. incident on
the sample) was measured equal to 17 mW (power meter, Coherent, LM-10 HTD),
which did not lead to observable heating of the sample. It should be noted
that the measurements over the entire MIR part of the emission spectrum are
restricted by the dynamic range of the detector, however, they are realizable
applying edgepass (1.65 µm cut-on wavelength) and neutral density filters or
introducing smaller diameters of the aperture.
The collimated radiation was transmitted through the sample inserted in a
liquid flow cell (PIKE Technologies, 4 mm CaF2 windows) before it entered the
FTIR spectrometer via the external optical input.
The FTIR spectrometer with a typical configuration based on a Michelson
configuration (BS - beam splitter, M1 and M2 - movable and fixed mirrors,
respectively) is schematically shown in Fig. 1. The interferogram was recorded
with a Mercury Cadmium Telluride detector (variable gap Vigo PCI-4TE-12,
detectivity within the spectral range D$\geq~{}2.0\times~{}10^{9}$
$\mathrm{cm\cdot\sqrt{Hz}\cdot W^{-1}}$).
The built-in spectrometer control and read-out unit (ROE) utilized for the
signal acquisition was used to control mirror scan velocity (i.e. sampling
frequency) and size of the output aperture. In order to achieve the
equidistant time sampling, the reference monochromatic laser is used to
produce a quasi-sine interferogram [34] (e.g. see Fig. 2(c)) to be recorded by
the reference detector (RD).
Any asynchronization of the signal acquisition and pulse generation would
result in strong noise and low intensity of the recorded interferogram (Fig.
3), since the sampling would frequently be performed in the absence of any
pulse, thereby being equivalent to the measurement of the detector noise.
Figure 2 shows a zoomed-in part of a simulated interferogram and serves purely
for illustration purposes, defining the sampling problem and a possible
solution by using an external integrator. The pulses presented in Fig. 2(a)
were chosen according to the parameters of the supercontinuum radiation
exploited in our study. A reference signal (150 kHz, Fig. 2(c)) that defines
the sampling frequency was simulated in order to temporally scale the graph
and show the infeasibility of the traditional approach to sample the signal
produced by any low-duty cycle source.
Figure 2: Problem of the signal sampling for
low-duty cycle sources applied in the FTIR spectrometer and approach to
overcome the asynchronization using an external integrator: (a) Interferogram
for the pulsed light source, (b) Integrated CW signal of the detector, i.e.
lock-in amplifier (LIA) output, (c) Reference interferogram produced by the
monochromatic source, which defines the sampling frequency (red).
In the system proposed in Fig. 1, the lock-in amplifier (LIA, Stanford
Research Systems, SR844) was used as an integrator to suppress the pulsed
nature of the signal. Therefore, the output of the MCT detector was
transformed into a signal similar to that obtained in the traditional
configuration with a CW source (i.e. to the one shown in Fig. 2(c)). According to the
Nyquist-Kotelnikov-Shannon theorem, the measurement approach is feasible only
for integration times of the LIA shorter than half of the period of sampling,
otherwise the interferogram is distorted or undetectable. Nevertheless, very
short time constants that are achievable with modern LIAs (e.g. 1 µs) result
in a higher spectral noise due to the averaging of fewer pulses.
The LIA that was used in this work has a fixed and discrete integration time
range (100 µs, 300 µs, 1 ms and longer). The sampling rate, i.e. the mirror
velocity of the FTIR spectrometer, could be adjusted within the range from 1
kHz to 60 kHz. Therefore, the requirements defined by the sampling theorem
were satisfied by selecting the proper parameters of the LIA and the FTIR
spectrometer. With respect to the trade-off between the speed and sensitivity
(100 µs and 300 µs time constants correspond to 5 kHz and 1.7 kHz sampling
rates), the following measurements were executed at the longer integration
time, i.e. 300 µs, and at 1 kHz sampling frequency. The external function
generator was utilized to trigger the supercontinuum source at the highest
available frequency (3 MHz, according to the maximum repetition rate) and to
synchronize the laser with the LIA. Consequently, around 900 pulses were
integrated for each sampling point.
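To make the parameter choice concrete, the following minimal Python sketch (our own illustration; the function name and arguments are assumptions, not part of any instrument API) checks the sampling-theorem condition and reproduces the pulse count per sampling point from the values used above.

```python
def lia_coupling_check(rep_rate_hz, lia_tau_s, sampling_rate_hz):
    """Feasibility of the LIA-based coupling of a pulsed source to an FTIR."""
    # Sampling-theorem condition: LIA integration time shorter than
    # half of the sampling period, otherwise the interferogram distorts.
    feasible = lia_tau_s < 0.5 / sampling_rate_hz
    # Number of laser pulses averaged into each sampling point.
    pulses_per_sample = lia_tau_s * rep_rate_hz
    return feasible, pulses_per_sample

ok, n = lia_coupling_check(rep_rate_hz=3e6, lia_tau_s=300e-6, sampling_rate_hz=1e3)
print(ok, round(n))  # True, 900 -> the configuration used in this work
```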
Figure 3: Raw interferograms
and reconstructed emission spectra of the supercontinuum source (band-pass
filter inserted) obtained using the traditional sampling approach without
external integrator (a,b; SNR = 3.74) and applying the lock-in amplifier (c,d; SNR = 7320). The
intensity values are not normalized and correspond to the amplitude of the raw
signals.
Figure 3 illustrates raw signals and spectra obtained by the supercontinuum-
based FTIR spectrometer in two configurations: a direct coupling and a
coupling using the LIA locked at the repetition rate of the source. The
optical parameters and conditions were set the same for both measurements
(aperture 8 mm, 500 µm path length liquid flow cell, distilled water). SNR
values were traditionally derived as the ratio between maximum signal levels
and calculated standard deviations of the zero absorbance lines (also known as
100% lines). It should be noted that the peak intensity when using the LIA-
based integration scheme showed a 250 times higher magnitude, which scales
with the inverse of the pulse sampling probability, i.e. $1/\mathrm{D_{c}}$, where
Dc is the duty cycle.
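As a sketch of how these figures can be reproduced, the snippet below (our illustration; `spectrum` and `zero_line` are assumed 1-D arrays holding a single-beam spectrum and a measured 100% line) implements the SNR metric and the expected inverse-duty-cycle amplitude gain.

```python
import numpy as np

def snr_from_zero_line(spectrum, zero_line):
    """SNR = maximum signal level / std of the 100% (zero absorbance) line."""
    return np.max(spectrum) / np.std(zero_line)

# Expected raw-amplitude gain of the LIA scheme over direct sampling,
# proportional to the inverse duty cycle 1/Dc.
duty_cycle = 1e-9 * 3e6   # <1 ns pulses at the 3 MHz trigger rate
print(1 / duty_cycle)     # ~333, the same order as the observed factor of 250
```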
### 2.3 Samples
Formaldehyde (CH2O or methanal) is a pollutant that is widely distributed in
the environment. Due to its high water solubility [35] and a wide variety of
natural sources, e.g. oxidation of organic matter[36], and industrial
effluents, formaldehyde and formaldehyde monohydrate (methanediol) can e.g. be
frequently found in food, natural- and drinking water[37].
The evidence of mutagenicity and risks of carcinogenicity of formaldehyde are
well investigated and verified[38]. Histopathological short- and long-term
effects in the mucosa of rats are documented at the concentration threshold of
260 ppm[39], while a tolerable concentration of 2.6 ppm with an uncertainty
factor of 100 is defined by the World Health Organization (WHO)[37]. During
the metabolism following oral exposure, formaldehyde is oxidized to formic
acid that also introduces specific toxic effects on human health[40, 41].
The detection of low concentrations of formaldehyde dissolved in water is a
practically interesting problem for cost-efficient spectroscopy since it is
usually done by sophisticated and expensive chromatographic analysis[42]. In
our study, a concentration series of formaldehyde aqueous solutions has been
prepared for spectroscopic analysis by dilution from a 37% stock solution (Carl
Roth GmbH).
## 3 Results and discussions
The measured emission spectra of the thermal and supercontinuum sources are
depicted in Fig. 4(a); an edgepass filter (1.65 µm cut-on wavelength) and a neutral
density filter were used to avoid oversaturation of the detector.
(a) Emission spectra of the sources
(b) Absorption spectra of formaldehyde and water
Figure 4: (a) Emission spectra of the supercontinuum laser source and the
thermal source with respect to (b) absorption of water (recorded with the
thermal emitter, 90 µm liquid flow cell) and formaldehyde (spectrum taken from
database[43]). The supercontinuum spectrum was recorded using a neutral-
density filter (optical density of 1), and the aperture size was set to 0.25 mm;
the emission spectrum of the thermal source was measured with an aperture size
of 8 mm in the absence of any filters; both recorded emission spectra reveal
strong absorption of CO2 around 2350 cm-1.
Intensity values are not normalized, since different optical parameters were
used to avoid oversaturation of the MCT detector: the aperture was set to 8 mm
diameter for the thermal emitter, while for the high brightness supercontinuum
laser the aperture was set to the minimum available aperture of 0.25 mm with
an additionally inserted neutral density filter (10% transmission).
In order to determine the analytical performance (i.e. limit of detection) of
the supercontinuum-based FTIR, an aqueous dilution series was measured.
Formaldehyde, i.e. more accurately its mixture with the hydrated form
methanediol (CH2(OH)2)[44], was chosen as the target analyte. Formaldehyde
exhibits strong absorption due to the stretching vibration of C-H, in a
spectral range where water absorption shows a relative minimum (see Fig. 4(b),
spectrum of formaldehyde taken from a database[43]). Hence, the bandpass
filter with a quasi-Gaussian profile was used to select the spectral range
(3100 cm-1 - 2650 cm-1); the default spectral resolution was set to 4 cm-1
for the following experimental verification and noise analysis.
### 3.1 Noise and long-term stability
In order to characterize and compare the noise of the supercontinuum-based
system (Fig. 5) and conventional FTIR with the CW thermal emitter, zero
absorbance lines were obtained within the spectral range of interest (3100
cm-1 - 2650 cm-1).
(a) 100% zero absorbance lines, 130 µm liquid flow cell
(b) Allan-Werle variance
(panel legend: SC, RMS = 0.00078; TE, RMS = 0.00067)
Figure 5: (a) Noise performance of the experimental setup using a 130 µm
liquid flow cell filled with blank water: the supercontinuum source (marked as
SC) in comparison to the thermal emitter (TE) expressed in the form of 100%
(zero absorbance) lines and (b) long-term stability of the LIA based setup
illustrated by the Allan-Werle variance.
Due to the different intensity levels of the thermal and supercontinuum
sources and the limited dynamic range of the detector[45], a direct comparison
of the same sample at the same optical path length was not feasible.
Therefore, a neutral density filter (optical density of 1) was used
to reduce the output power of the laser. Figure 5(a) depicts the recorded 100%
lines measured through a 130 µm liquid flow cell filled with blank water. Four
consecutive spectra were averaged for the calculation of each zero absorption
line. Root-mean-square (RMS) values for both sources exhibit comparable noise
levels ($0.67\times 10^{-3}$ and $0.78\times 10^{-3}$ standard errors for thermal and
supercontinuum sources, respectively), while the raw signal level of the quasi-
Gaussian peak transmitted through the BPF shows a factor of 25 higher magnitude in
the case of the supercontinuum laser. The spectral region beyond 2800 cm-1 is
not used in the calculations due to the total attenuation imposed by water and
thus insufficient light transmitted in this range.
Additionally, long-term measurements using the same 130 µm liquid flow cell
(blank water) were carried out over around 28 hours (10,000 spectra) to
specify the stability of the system, the prevalent types of noise sources and
to estimate detection limits that could be achieved by averaging. The series
of error values calculated for zero absorbance lines (within the same spectral
range of 2800 cm-1 - 2650 cm-1) were used to derive the Allan-Werle
variance[46] as depicted in Fig. 5(b). The dependence of the signal deviation
on the integration time illustrated by the Allan-Werle variance demonstrates
the stability of the supercontinuum-based FTIR in a long-term perspective.
Analysing the plot, white noise can be observed as a dominant noise floor in
the system, as indicated by the decreasing tendency with a constant slope.
This noise is common and most likely inherited by the nonlinear processes
(initiated by amplification of the pump shot noise) [8, 9] that occurred
during the spectral broadening. A weak long-term drift was observed only after
$4\times 10^{4}$ seconds and could be caused by temperature changes.
The evolution of the variance reveals that the SNR of the system and the
corresponding limit of detection could be enhanced by 4 orders of magnitude
applying the signal averaging. However, such long averaging is not expedient
so that the spectroscopic measurements in the following sections are performed
with an averaging over 10 spectra only (around 70 seconds).
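For reference, a minimal Allan-variance estimator of the kind behind such stability plots can be sketched as follows (our illustration; `y` stands for a series of consecutive noise values, e.g. 100%-line errors, recorded at a fixed interval, and the white-noise series below is only a stand-in for real data).

```python
import numpy as np

def allan_variance(y, m):
    """Non-overlapping Allan variance of series y at cluster size m."""
    n = len(y) // m                                # number of complete clusters
    means = y[: n * m].reshape(n, m).mean(axis=1)  # cluster averages
    return 0.5 * np.mean(np.diff(means) ** 2)

# Pure white noise appears as a straight line of constant negative slope
# on a log-log Allan-Werle plot, as observed for the present setup.
y = np.random.default_rng(0).normal(size=10_000)
for m in (1, 10, 100, 1000):
    print(m, allan_variance(y, m))
```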
### 3.2 Pulse-to-pulse energy fluctuations
To specify and complete the noise characterization of the supercontinuum
source, the pulse-to-pulse behaviour over the emission spectrum has been
investigated (Fig. 6).
Figure 6: Normalized pulse-to-pulse energy fluctuations over the emission
spectrum of the supercontinuum laser source (indicated for reference).
A Czerny-Turner monochromator was used to access the pulse-to-pulse energy
fluctuations within a narrow spectral band (3 nm resolution, equidistant in
the wavelength space). The high-speed MCT detector and oscilloscope (400 MHz,
5 GS/s) were used to record pulse waveforms. The measurements were performed
by sweeping over the MIR part of the spectrum with 6 nm step size, while for
each position 1000 pulses were analysed. The normalized pulse energy
fluctuations, depicted in Fig. 6, were calculated as a standard deviation of
the pulse areas (time-integrals, representing the pulse energy) normalized by their
mean.
An average pulse-to-pulse energy fluctuation of around 6.4% could be observed
within the spectral range of interest. For the Gaussian noise, the standard
error for the averaged measurement is proportional to $1/\sqrt{N}$, where $N$
is the number of samples. Thereby, the evaluated noise is verified, since the
measurements coincide quite well with the results demonstrated in the previous
section: the obtained dependence yields the RMS of around 0.0010, derived for
$N=3600$, while the measurements of the 100% lines, where 4 spectra were
averaged (900 pulses integrated for each spectrum), give a RMS of 0.00078.
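The consistency check described above takes only a few lines to reproduce (our sketch; `waveforms` is an assumed pulses-by-samples array of digitized pulse traces from the fast MCT detector).

```python
import numpy as np

def normalized_fluctuation(waveforms, dt):
    """Std of the pulse energies (time-integrals of the traces) over their mean."""
    energies = waveforms.sum(axis=1) * dt
    return energies.std() / energies.mean()

sigma = 0.064              # measured average pulse-to-pulse fluctuation (~6.4%)
n = 4 * 900                # 4 averaged spectra x ~900 integrated pulses per point
print(sigma / np.sqrt(n))  # ~0.0011, consistent with the 100%-line RMS of 0.00078
```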
### 3.3 Quantitative measurements
The performance and analytical usability of the supercontinuum-based FTIR
spectrometer were verified for the quantification of aqueous formaldehyde
solutions (calibration curves shown in Fig. 7).
(a) Thermal source (130 µm liquid flow cell)
(b) Supercontinuum source (500 µm liquid flow cell)
Figure 7: Calibration curves of a formaldehyde (methanediol) dilution series
obtained by integrating the absorbance within the spectral range of 2720 cm-1
- 2620 cm-1, covering parts of the C-H stretching band: (a) standard FTIR
instrument with CW thermal emitter and 130 µm optical path length flow cell;
(b) supercontinuum-based FTIR and 500 µm flow cell.
The spectroscopic measurements were performed using both the CW thermal
emitter (conventional system without external integrator) and the
supercontinuum laser (experimental setup). The spacers of the liquid flow cell
(130 µm and 500 µm) were selected accordingly with respect to the
available intensity levels, while the aperture size was set to 8 mm for both
configurations. Despite the different path lengths, the raw signal recorded by
FTIR in the case of the supercontinuum source exhibited a 50 times higher
magnitude due to the distinctive properties of the radiation and the applied
pulse integration approach; for each measurement 10 spectra were averaged.
The calculated absorbance spectra of the standard solutions were integrated
within the spectral range of 2720 cm-1 - 2620 cm-1, where sufficient
intensity is obtained for both light sources. Figure 7 depicts the obtained
linear calibration curves. The inset in Fig. 7(b) presents absorbance spectra
for the lowest concentrations of the dilution series. It should be noted that
the spectral band, within which the spectroscopic analysis is performed,
covers the range near the maximum of the absorption band, where it coincides
with relatively weak water absorption.
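A minimal sketch of this evaluation step is given below (our illustration; `nu`, `i_sample`, and `i_blank` are assumed arrays of wavenumbers and single-beam intensities, with `nu` in ascending order).

```python
import numpy as np

def integrated_absorbance(nu, i_sample, i_blank, band=(2620.0, 2720.0)):
    """Band-integrated absorbance A = -log10(I/I0) over the given range in cm-1."""
    a = -np.log10(i_sample / i_blank)        # absorbance spectrum
    mask = (nu >= band[0]) & (nu <= band[1])
    return np.trapz(a[mask], nu[mask])       # area used for the calibration curve
```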
The linear fitting model delivered $R^{2}$ values of 0.9992 for the thermal
emitter and 0.9989 for the supercontinuum source, respectively. The
corresponding slopes of 0.035 (thermal source) and 0.162 (supercontinuum) and
standard errors (SE) yield the limits of detection (LOD) for both
configurations calculated according to the IUPAC definition:[47] a LOD of 580
ppm was achieved using conventional FTIR, while a superior LOD of 140 ppm has
been demonstrated by the experimental system applying the supercontinuum
source.
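One common reading of this LOD estimate can be sketched as follows (our illustration with assumed array names; since the exact SE definition is not spelled out in the text, the residual standard error of the linear fit is used here).

```python
import numpy as np

def lod_iupac(conc_ppm, band_area):
    """LOD = 3 * SE / slope from a linear calibration (IUPAC-style estimate)."""
    slope, intercept = np.polyfit(conc_ppm, band_area, 1)
    residuals = band_area - (slope * conc_ppm + intercept)
    se = residuals.std(ddof=2)    # residual standard error of the fit
    return 3.0 * se / slope

# With the reported slopes (0.035 thermal, 0.162 supercontinuum), the fitted
# SEs lead to the quoted LODs of about 580 ppm and 140 ppm, respectively.
```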
## 4 Conclusions and Outlook
Novel commercially available noise reduced mid-infrared (MIR) supercontinuum
sources have recently evolved to a state where they can reasonably be applied
for various spectroscopic analytical tasks[10, 11, 12, 13, 14]. However, they
are widely underestimated among spectroscopists, which is, for instance,
illustrated by the fact that they are still basically unnoticed at the
relevant conferences. The main reason might be the poor specifications, i.e. high
noise and instability, of their first versions. Therefore, one of the main
goals of this contribution was to demonstrate that in fact MIR supercontinuum
laser sources have achieved a level of maturity to be competitive with the
state-of-the-art equipment. In order to support this idea, we demonstrated
their applicability for the gold standard in MIR spectroscopy, Fourier-
transform infrared spectroscopy (FTIR), which is strongly sensitive to intensity
fluctuations. The basic idea here was to replace the broadband but weak and
spatially incoherent thermal emitters by a broadband and high brightness
spatially coherent source.
In the experimental part, a simple solution to overcome the sampling problem
for the direct coupling of pulsed supercontinuum and FTIR has been proposed
and realized. Utilizing the proposed experimental system, we achieved a
factor of 200 greater amplitude of the raw interferogram and enhanced signal-
to-noise ratio. The setup and the supercontinuum laser source were
characterized with respect to noise and stability of the measurements. A
satisfying long-term stability over around 28 hours and a noise level, which
is comparable to the conventional thermal source, were demonstrated. The
standard errors of the 100% zero absorbance lines ($0.67\times 10^{-3}$ and
$0.78\times 10^{-3}$ for thermal and supercontinuum sources, respectively) were
calculated for the measurements of blank water within a 130 µm liquid flow
cell.
The superior brightness of the supercontinuum source that is provided by the
directionality of the radiation and the high output power allowed us to
increase the interaction path length in a transmission measurement and to
demonstrate the enhanced performance of the supercontinuum-based system. A
spectroscopic analysis of an aqueous formaldehyde dilution series has been
performed. The absorbance spectra (measured for 500 µm liquid flow cell)
yielded a limit-of-detection (LOD) of 140 ppm. In total, a 4-times enhanced
detectivity over a conventional FTIR system with the thermal source was
demonstrated, while the investigation of path lengths over 130 µm was not
expedient with the thermal emitter due to the almost total absorption by
water.
The next generation of supercontinuum sources being developed is based on
chalcogenide fibers and has already reached a milestone in terms of spectral
coverage, starting in the NIR spectral region and spanning up to 14-16 µm
emission wavelength[3, 48, 49, 27, 26]. Thus, they cover almost the entire MIR
spectral range, which makes them a highly promising technology to
significantly push the field of MIR spectroscopy in the near future.
Considering the gained results and current developments[50, 19], we believe
that the proposed solution offers new potentials for enhancing the currently
applied methods in this field. The demonstrated approach to coupling pulsed
supercontinuum sources appears to be universal and could be applied to any
low-duty-cycle source. Meanwhile, a purpose-oriented adaptation of the
presented signal acquisition scheme or investigation of other types of
integration devices, e.g. boxcar, could be considered in order to assemble a
fully-integrated FTIR system, since a price reduction of supercontinuum
sources is expected.
## 5 Funding
Financial support was provided by the Marie Sklodowska-Curie Action SUPUVIR of
the European Union’s H2020-MSCA-ITN-2016 Programme under REA grant agreement
no. 722380; and the strategic economic and research program ”Innovative Upper
Austria 2020” of the province of Upper Austria.
## References
* [1] H. Günzler and H.U. Gremlich. IR Spectroscopy: An Introduction. Wiley, 2002.
* [2] John Dudley and Roy Taylor. Supercontinuum Generation in Optical Fibers. Cambridge University Press, 2010.
* [3] Shixun Dai, Yingying Wang, Xuefeng Peng, Peiqing Zhang, Xunsi Wang, and Yinsheng Xu. A review of mid-infrared supercontinuum generation in chalcogenide glass fibers. Appl. Sci., 8(5):707, May 2018.
* [4] Christian R. Petersen, Peter M. Moselund, Laurent Huot, Lucy Hooper, and Ole Bang. Towards a table-top synchrotron based on supercontinuum generation. Infrared Phys. Technol, 91:182 – 186, 2018.
* [5] Peter M. Moselund, Christian Petersen, Lasse Leick, Jeppe Seidelin Dam, Peter Tidemand-Lichtenberg, and Christian Pedersen. Highly stable, all-fiber, high power zblan supercontinuum source reaching 4.75 µm used for nanosecond mid-ir spectroscopy. In Advanced Solid-State Lasers Congress, page JTh5A.9. Optical Society of America, 2013.
* [6] Andreas Schwaighofer, Markus Brandstetter, and Bernhard Lendl. Quantum cascade lasers (qcls) in biomedical spectroscopy. Chem. Soc. Rev, 46:5903–5924, 2017.
* [7] Andreas Schwaighofer, Milagros Montemurro, Stephan Freitag, Christian Kristament, María J. Culzoni, and Bernhard Lendl. Beyond fourier transform infrared spectroscopy: External cavity quantum cascade laser-based mid-infrared transmission spectroscopy of proteins in the amide i and amide ii region. Anal. Chem, 90(11):7072–7079, 2018. PMID: 29762006.
* [8] N. R. Newbury, B. R. Washburn, K. L. Corwin, and R. S. Windeler. Noise amplification during supercontinuum generation in microstructure fiber. Opt. Lett., 28(11):944–946, Jun 2003.
* [9] J. M. Dudley, K. L. Corwin, N. R. Newbury, B. R. Washburn, S. A. Diddams, and R. S. Windeler. Fundamental noise limitations on supercontinuum generation in microstructure fiber. In 2003 European Quantum Electronics Conference. EQEC 2003 (IEEE Cat No.03TH8665), pages 203–, June 2003.
* [10] Jakob Kilgus, Kristina Duswald, Gregor Langer, and Markus Brandstetter. Mid-infrared standoff spectroscopy using a supercontinuum laser with compact fabry-pérot filter spectrometers. Appl. Spectrosc., 72(4):634–642, Apr 2018.
* [11] Jakob Kilgus, Gregor Langer, Kristina Duswald, Robert Zimmerleiter, Ivan Zorin, Thomas Berer, and Markus Brandstetter. Diffraction limited mid-infrared reflectance microspectroscopy with a supercontinuum laser. Opt. Express., 26(23):30644–30654, Nov 2018.
* [12] F. Borondics, M. Jossent, C. Sandt, L. Lavoute, D. Gaponov, A. Hideur, P. Dumas, and S. Février. Supercontinuum-based fourier transform infrared spectromicroscopy. Optica, 5(4):378–381, Apr 2018.
* [13] Caroline Amiot, Antti Aalto, Piotr Ryczkowski, Juha Toivonen, and Goëry Genty. Cavity enhanced absorption spectroscopy in the mid-infrared using a supercontinuum source. Appl. Phys. Lett., 111(6):061103, 2017.
* [14] Christoph Gasser, Jakob Kilgus, Michael Harasek, Bernhard Lendl, and Markus Brandstetter. Enhanced mid-infrared multi-bounce atr spectroscopy for online detection of hydrogen peroxide using a supercontinuum laser. Opt. Express., 26(9):12169–12179, Apr 2018.
* [15] Ivan Zorin, Rong Su, Andrii Prylepa, Jakob Kilgus, Markus Brandstetter, and Bettina Heise. Mid-infrared fourier-domain optical coherence tomography with a pyroelectric linear array. Opt. Express., 26(25):33428–33439, Dec 2018.
* [16] Mariusz Klimczak, Grzegorz Soboń, Rafal Kasztelanic, Krzysztof M. Abramski, and Ryszard Buczyński. Direct comparison of shot-to-shot noise performance of all normal dispersion and anomalous dispersion supercontinuum pumped with sub-picosecond pulse fiber-based laser. Sci. Rep., 6:19284, Jan 2016.
* [17] Mariusz Klimczak, Bartłomiej Siwicki, Piotr Skibiński, Dariusz Pysz, Ryszard Stępień, Alexander Heidt, Czesław Radzewicz, and Ryszard Buczyński. Coherent supercontinuum generation up to 2.3 µm in all-solid soft-glass photonic crystal fibers with flat all-normal dispersion. Opt. Express., 22(15):18824–18832, Jul 2014.
* [18] S Dupont, Z Qu, S-S Kiwanuka, L E Hooper, J C Knight, S R Keiding, and C F Kaminski. Ultra-high repetition rate absorption spectroscopy with low noise supercontinuum radiation generated in an all-normal dispersion fibre. Laser Phys. Lett., 11(7):075601, may 2014.
* [19] Kai Jiao, Jinmei Yao, Zheming Zhao, Xiange Wang, Nian Si, Xunsi Wang, Peng Chen, Zugang Xue, Youmei Tian, Bin Zhang, Peiqing Zhang, Shixun Dai, Qiuhua Nie, and Rongping Wang. Mid-infrared flattened supercontinuum generation in all-normal dispersion tellurium chalcogenide fiber. Opt. Express., 27(3):2036–2043, Feb 2019.
* [20] Etienne Genier, Patrick Bowen, Thibaut Sylvestre, John M. Dudley, Peter Moselund, and Ole Bang. Amplitude noise and coherence degradation of femtosecond supercontinuum generation in all-normal-dispersion fibers. J. Opt. Soc. Am. B., 36(2):A161–A167, Feb 2019.
* [21] Kun Liu, Jiang Liu, Hongxing Shi, Fangzhou Tan, and Pu Wang. High power mid-infrared supercontinuum generation in a single-mode zblan fiber with up to 21.8 w average output power. Opt. Express., 22(20):24384–24391, Oct 2014.
* [22] Yingying Wang, Shixun Dai, Guangtao Li, Dong Xu, Chenyang You, Xin Han, Peiqing Zhang, Xunsi Wang, and Peipeng Xu. 1.4-7.2 µm broadband supercontinuum generation in an as-s chalcogenide tapered fiber pumped in the normal dispersion regime. Opt. Lett., 42(17):3458–3461, Sep 2017.
* [23] Irnis Kubat, Christian S. Agger, Uffe Møller, Angela B. Seddon, Zhuoqi Tang, Slawomir Sujecki, Trevor M. Benson, David Furniss, Samir Lamrini, Karsten Scholle, Peter Fuhrberg, Bruce Napier, Mark Farries, Jon Ward, Peter M. Moselund, and Ole Bang. Mid-infrared supercontinuum generation to 12.5$\mu$m in large na chalcogenide step-index fibres pumped at 4.5$\mu$m. Opt. Express., 22(16):19169–19182, Aug 2014.
* [24] Neetesh Singh, Darren D. Hudson, Yi Yu, Christian Grillet, Stuart D. Jackson, Alvaro Casas-Bedoya, Andrew Read, Petar Atanackovic, Steven G. Duvall, Stefano Palomba, Barry Luther-Davies, Stephen Madden, David J. Moss, and Benjamin J. Eggleton. Midinfrared supercontinuum generation from 2 to 6µm in a silicon nanowire. Optica, 2(9):797–802, Sep 2015.
* [25] Tonglei Cheng, Kenshiro Nagasaka, Tong Hoang Tuan, Xiaojie Xue, Morio Matsumoto, Hiroshige Tezuka, Takenobu Suzuki, and Yasutake Ohishi. Mid-infrared supercontinuum generation spanning 2.0 to 15.1 µm in a chalcogenide step-index fiber. Opt. Lett., 41(9):2117–2120, May 2016.
* [26] Zheming Zhao, Bo Wu, Xunsi Wang, Zhanghao Pan, Zijun Liu, Peiqing Zhang, Xiang Shen, Qiuhua Nie, Shixun Dai, and Rongping Wang. Mid-infrared supercontinuum covering 2.0-16 µm in a low-loss telluride single-mode fiber. Laser Photonics Rev, 11(2):1700005, 2017.
* [27] Christian R. Petersen, Uffe Møller, Irnis Kubat, Binbin Zhou, Sune Dupont, Jacob Ramsay, Trevor Benson, Slawomir Sujecki, Nabil Abdel-Moneim, Zhuoqi Tang, David Furniss, Angela Seddon, and Ole Bang. Mid-infrared supercontinuum covering the 1.4-13.3 µm molecular fingerprint region using ultra-high na chalcogenide step-index fibre. Nat. Photonics, 8:830–834, Sep 2014.
* [28] Chris A. Michaels, Tony Masiello, and Pamela M. Chu. Fourier transform spectrometry with a near infrared supercontinuum source. In Conference on Lasers and Electro-Optics/International Quantum Electronics Conference, page CMDD6. Optical Society of America, 2009.
* [29] Vasily V. Goncharov and Gregory E. Hall. Supercontinuum fourier transform spectrometry with balanced detection on a single photodiode. J. Chem. Phys, 145(8):084201, 2016.
* [30] Julien Mandon, Evgeni Sorokin, Irina T. Sorokina, Guy Guelachvili, and Nathalie Picqué. Supercontinua for high-resolution absorption multiplex infrared spectroscopy. Opt. Lett., 33(3):285–287, Feb 2008.
* [31] Ahmed M. Othman, Hussein E. Kotb, Yasser Sabry, and Diaa Khalil. MEMS-based Fourier transform spectrometer using pulsed infrared light source. In Wibool Piyawattanametha, Yong-Hwa Park, and Hans Zappe, editors, MOEMS and Miniaturized Systems XVII, volume 10545, pages 220 – 227. International Society for Optics and Photonics, SPIE, 2018.
* [32] Peter M. Moselund, Laurent Huot, and Chris D. Brooks. All-fiber mid-IR supercontinuum: a powerful new tool for IR-spectroscopy. In Robert R. Alfano and Stavros G. Demos, editors, Optical Biopsy XIV: Toward Real-Time Spectroscopic Imaging and Diagnosis, volume 9703, pages 64 – 69. International Society for Optics and Photonics, SPIE, 2016.
* [33] J.W. Nicholson, A.K. Abeeluck, C. Headley, M.F. Yan, and C.G. Jørgensen. Pulsed and continuous-wave supercontinuum generation in highly nonlinear, dispersion-shifted fibers. Appl. Phys. B: Lasers Opt., 77(2):211–218, Sep 2003.
* [34] P.R. Griffiths, J.A. De Haseth, and J.D. Winefordner. Fourier Transform Infrared Spectrometry. Chemical Analysis: A Series of Monographs on Analytical Chemistry and Its Applications. Wiley, 2007.
* [35] Lyassine Allou, Lahcen El Maimouni, and Stéphane Le Calvé. Henry’s law constant measurements for formaldehyde and benzaldehyde as a function of temperature and water composition. Atmos. Environ., 45(17):2991 – 2998, 2011.
* [36] William H. Glaze, Minoru Koga, and Devon Cancilla. Ozonation byproducts. 2. improvement of an aqueous-phase derivatization method for the detection of formaldehyde and other carbonyl compounds formed by the ozonation of drinking water. Environ. Sci. Technol, 23(7):838–847, 1989.
* [37] World Health Organization. Formaldehyde in Drinking-water. World Health Organization, 2005.
* [38] James A. Swenberg, Benjamin C. Moeller, Kun Lu, Julia E. Rager, Rebecca C. Fry, and Thomas B. Starr. Formaldehyde carcinogenicity research: 30 years and counting for mode of action, epidemiology, and cancer risk assessment. Toxicol. Pathol., 41(2):181–189, 2013. PMID: 23160431.
* [39] H.P. Til, R.A. Woutersen, V.J. Feron, V.H.M. Hollanders, H.E. Falke, and J.J. Clary. Two-year drinking-water study of formaldehyde in rats. Food Chem. Toxicol., 27(2):77 – 87, 1989.
* [40] Liesivuori Jyrki and Savolainen Heikki. Methanol and formic acid toxicity: Biochemical mechanisms. Pharmacol. Toxicol., 69(3):157–163, 1991.
* [41] A A Sadun. Mitochondrial optic neuropathies. J. Neurol., Neurosurg. Psychiatry, 72(4):423–425, 2002.
* [42] Nael G. Yasri, Hasan Seddik, and Maha A. Mosallb. Spectrophotometric determination of formaldehyde based on the telomerization reaction of tryptamine. Arabian J. Chem, 8(4):487 – 494, 2015.
* [43] Peter J. Linstrom and W.G Mallard. "Evaluated Infrared Reference Spectra" by Coblentz Society, Inc., NIST Chemistry WebBook, NIST Standard Reference Database Number 69. National Institute of Standards and Technology, 2019.
* [44] Nobuyuki Matubayasi, Saiko Morooka, Masaru Nakahara, and Hideaki Takahashi. Chemical equilibrium of formaldehyde and methanediol in hot water: Free-energy analysis of the solvent effect. J. Mol. Liq., 134(1):58 – 63, 2007. EMLG/JMLG 2005 Special Issue.
* [45] M. Brandstetter, L. Volgger, A. Genner, C. Jungbauer, and B. Lendl. Direct determination of glucose, lactate and triglycerides in blood serum by a tunable quantum cascade laser-based mid-ir sensor. Appl. Phys. B: Lasers Opt., 110(2):233–239, Feb 2013.
* [46] P. Werle, R. Mücke, and F. Slemr. The limits of signal averaging in atmospheric trace-gas monitoring by tunable diode-laser absorption spectroscopy (tdlas). Appl. Phys. B: Lasers Opt., 57(2):131–139, Aug 1993.
* [47] Gary L. Long and J. D. Winefordner. Limit of detection a closer look at the iupac definition. Anal. Chem, 55(07):712A–724A, 1983. PMID: 22857433.
* [48] Ramon A. Martinez, Genevieve Plant, Kaiwen Guo, Brian Janiszewski, Michael J. Freeman, Robert L. Maynard, Mohammed N. Islam, Fred L. Terry, Oseas Alvarez, Francois Chenard, Robert Bedford, Ricky Gibson, and Agustin I. Ifarraguerri. Mid-infrared supercontinuum generation from 1.6 to >11µm using concatenated step-index fluoride and chalcogenide fibers. Opt. Lett., 43(2):296–299, Jan 2018.
* [49] Hongya Ou, Shixun Dai, Peiqing Zhang, Zijun Liu, Xunsi Wang, Feifei Chen, Hang Xu, Baohua Luo, Yicong Huang, and Rongping Wang. Ultrabroad supercontinuum generated from a highly nonlinear ge–sb–se fiber. Opt. Lett., 41(14):3201–3204, Jul 2016.
* [50] Davide Grassani, Eirini Tagkoudi, Hairun Guo, Clemens Herkommer, Fan Yang, Tobias J. Kippenberg, and Camille-Sophie Brés. Mid infrared gas spectroscopy using efficient fiber laser driven photonic chip-based supercontinuum. Nat. Commun., 10(1):1553, Apr 2019.
|
2024-09-04T02:54:56.332938 | 2020-02-24T23:38:07 | 2003.00866 | {
"authors": "Dinh C. Nguyen, Peng Cheng, Ming Ding, David Lopez-Perez, Pubudu N.\n Pathirana, Jun Li, Aruna Seneviratne, Yonghui Li, H. Vincent Poor",
"full_text_license": null,
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"provenance": "arxiv-papers-0000.json.gz:25986",
"submitter": "Dinh Nguyen",
"url": "https://arxiv.org/abs/2003.00866"
} | arxiv-papers | # Wireless AI: Enabling an AI-Governed
Data Life Cycle
Dinh C. Nguyen, Peng Cheng, Ming Ding, David Lopez-Perez, Pubudu N. Pathirana,
Jun Li, Aruna Seneviratne, Yonghui Li, and H. Vincent Poor. Dinh C. Nguyen and
Pubudu N. Pathirana are with School of Engineering, Deakin University,
Australia (e-mails: {cdnguyen, pubudu.pathirana}@deakin.edu.au). Peng Cheng and
Yonghui Li are with the School of Electrical and Information Engineering, the
University of Sydney, Australia (e-mails: {peng.cheng,
yonghui.li}@sydney.edu.au). Ming Ding is with Data61, CSIRO, Australia (email:
[email protected]). David Lopez-Perez is with Nokia Bell Labs (e-mail:
[email protected]).Jun Li is with School of Electrical and
Optical Engineering, Nanjing University of Science and Technology, Nanjing
210094, China (e-mail: [email protected]). Aruna Seneviratne is with School
of Electrical Engineering and Telecommunications, University of New South
Wales (UNSW), NSW, Australia (email: [email protected]). H. Vincent
Poor is with the Department of Electrical Engineering, Princeton University,
Princeton, NJ 08544 USA (e-mail: [email protected]).
###### Abstract
Recent years have seen rapid deployment of mobile computing and Internet of
Things (IoT) networks, which can be mostly attributed to the increasing
communication and sensing capabilities of wireless systems. Big data analysis,
pervasive computing, and eventually artificial intelligence (AI) are envisaged
to be deployed on top of IoT and create a new world featured by data-driven
AI. In this context, a novel paradigm of merging AI and wireless
communications, called Wireless AI, which pushes AI frontiers to the network
edge, is widely regarded as a key enabler for future intelligent network
evolution. To this end, we present a comprehensive survey of the latest
studies in wireless AI from the data-driven perspective. Specifically, we
first propose a novel Wireless AI architecture that covers five key data-
driven AI themes in wireless networks, including Sensing AI, Network Device
AI, Access AI, User Device AI and Data-provenance AI. Then, for each data-
driven AI theme, we present an overview of the use of AI approaches to solve
the emerging data-related problems and show how AI can empower wireless
network functionalities. Particularly, compared to the other related survey
papers, we provide an in-depth discussion on the Wireless AI applications in
various data-driven domains wherein AI proves extremely useful for wireless
network design and optimization. Finally, research challenges and future
visions are also discussed to spur further research in this promising area.
###### Index Terms:
Wireless Networks, Artificial Intelligence, Deep Learning, Machine Learning,
Data-driven AI.
## I Introduction
Digitization and automation have been widely recognized as the next wave of
technological revolution, which is envisaged to create a connected world and
fundamentally transform industry, business, public services, and our daily
life. In this context, recent years have seen rapid advancements in the
deployment of Internet of Things (IoT) networks, which can be mostly
attributed to the increasing communication and sensing capabilities combined
with the falling prices of IoT devices [1], [2]. According to the Cisco
forecast, by 2030 there will be more than 500 billion IoT devices connected to
the Internet [3]. In the meanwhile, a vast array of multimedia services are
blooming rapidly, while the data traffic is growing at an astonishing pace.
Annual global data traffic is predicted to increase threefold over the next 5
years, reaching 4.8 ZB per year by 2022 [4]. Fueled by soaring mobile data and
increasing device communication, wireless communication systems are evolving
toward next generation (5G) wireless networks, by incorporating massive
networking, communication, and computing resources. Especially, future
wireless networks should provide many innovative vertical services to satisfy
the diverse service requirements, ranging from residence, work, to social
communications. The key requirements include the support of up to 1000 times
higher data volumes with ultra-low latency (1ms or less than 1ms), and massive
device connectivity supporting 10-100x number of connected devices with ultra-
high reliability of 99.999% [5], [6]. Such stringent service requirements
associated with the increasing complexity of future wireless communications
make traditional methods to network design, deployment, operation, and
optimization no longer adequate. Past and present wireless communications,
regulated by mathematical models, are mostly derived from extant conventional
communication theories. Communication system design significantly depends on
initial network conditions and/or theoretical assumptions to characterize real
environments. However, these techniques are unlikely to handle complex
scenarios with many imperfections and network nonlinearities [7]. The future
wireless communication networks, where the quality of service (QoS) requirements are far
beyond the capabilities and applicability of current modeling and design
approaches, will require robust and intelligent solutions to adapt to the high
network dynamicity for different services in different scenarios [8].
Figure 1: An illustration of the Wireless AI paradigm.
Among the existing technologies, artificial intelligence (AI) is one of the
most promising drivers for wireless communications to meet various technical
challenges. AI techniques such as Machine Learning (ML), Deep Learning (DL)
and Deep Reinforcement Learning (DRL), well known from computer science
disciplines, are beginning to emerge in wireless communications with recent
successes in complicated decision making, wireless network management,
resource optimization, and in-depth knowledge discovery for complex wireless
networking environments [9]. It has been proved that AI techniques can achieve
outstanding performances for wireless communication applications thanks to
their online learning and optimization capabilities in various complex and
dynamic network settings. AI also lowers the complexity of algorithm
computations enabled by data learning and interactions with the environment,
and hence accelerates the convergence in finding sub-optimal solutions
compared to conventional techniques [10]. By bringing the huge volumes of data
to AI, the wireless network ecosystem will create many novel application
scenarios and fuel the continuous booming of AI. Therefore, the integration of
AI techniques into the design of intelligent network architectures becomes a
promising trend for effectively addressing technical challenges in wireless
communication systems.
Indeed, the marriage of wireless communications and AI opens the door to an
entirely new research area, namely "Wireless AI" [11], [12]. Instead of
heavily relying on datacentres, i.e. cloud servers, to run AI applications,
Wireless AI takes advantage of resources at the network edge to gain AI
insights in wireless networks. More specifically, AI functions can be deployed
over every corner of the wireless IoT networks, including in data generators
(i.e. IoT devices), data receivers (i.e. mobile users, edge devices) and
during the wireless data transmissions (i.e. wireless links). This would
make the wireless networks fully self-automated in all operations, control and
maintenance functions with limited human involvement. Wireless AI thus solves
effectively the heterogeneity and dynamicity of future wireless networks by
leveraging its intelligent, self-adaptive, and auto-configurable capabilities.
Notably, Wireless AI has attracted much attention from both the academia and
industry. For example, according to Ericsson, wireless AI will have a
significant role in reshaping next-generation wireless cellular networks, from
intelligent service deployment to intelligent policy control, intelligent
resource management, intelligent monitoring, and intelligent prediction [13].
Major enterprises, such as Apple and Google, have been designing and
developing many wireless AI-empowered applications for mobile users (e.g. Face
ID and Siri as Apple products [14] or the Google Assistant tool from Google
[15]). These efforts have boosted a wide spectrum of wireless AI applications,
from data collection, data processing, massive IoT communication to smart
mobile services. Therefore, Wireless AI is potentially a game changer not only
for wireless communication networks, but also for emerging intelligent
services, which will impact virtually every facet of our lives in the coming
years.
### I-A Wireless AI Architecture
In this paper, we propose a novel Wireless AI architecture for wireless
networks as shown in Fig. 1. This new paradigm is positioned to create
cognition, intelligence, automation, and agility for future wireless
communication system design. As shown in Fig. 1, there are mainly four
entities involved in this architecture.
* •
Firstly, our environment, society, and people are monitored and sensed by a
variety of data generators, e.g. sensors and wearable devices to transform the
observations and measurements of our physical world into digital form via smart IoT
devices.
* •
Secondly, such data will be transferred via wireless or wire-line links to
wireless data receivers or directly to the Internet. Typical wireless data
receivers include 4G/5G networks, Wi-Fi, LoRa, etc.
* •
Thirdly, such data will be collected by data custodians via the Internet using
cloud services, private cloud or enterprise cloud. A data custodian usually
maintains a platform that organizes, stores and provides private and secure
data access to data consumers. Note that a data custodian can also assume the
role of a data consumer.
* •
Fourthly, data consumers such as government, individuals, analysts, etc., will
access the data so that they can conduct big data mining and harvest useful
information to analyze and predict our physical world. Besides, wireless
service operators such as base station controllers can also exploit such data
to optimize their network for seamless service delivery across data utilities
in wireless networks.
It is important to note that in Fig. 1 there are two shortcuts originating from
the physical world, as follows.
* •
The first shortcut goes from the physical world to the data receivers,
skipping the data generators. This framework is also known as wireless sensing
because the data receivers directly see the physical world using radio
frequency spectrum. Typical applications include people counting, human
activity detection and recognition, etc.
* •
The second shortcut goes from the physical world to the data custodians,
skipping both the data generators and the data receivers. This framework
allows the data custodians to directly collect data using surveys, reports, and
cross-organization data sharing, etc. Typical applications include national
population censuses, tax return lodging, authorized medical record sharing,
etc.
Conventionally, AI functions sit in the cloud, or in powerful data centers
owned by data custodians, or in data consumers. In some cases, AI functions
may even exist in an edge cloud, close to data receivers, performing edge
computing tasks. Generally speaking, those AI functions are mainly designed
for big data analysis and most of them follow a traditional paradigm of
centralized AI, i.e. bringing data to computation functions for processing.
Recently, many concerns have arisen regarding this centralized AI paradigm,
such as user privacy, power consumption, latency, cost, availability of
wireless links, etc. Among them is the user data privacy bottleneck caused by
the centralized network architecture, where IoT devices mainly rely on a third
party, i.e. a cloud service provider. Although this model can provide
convenient computing and processing services, the third party can illegally
obtain personal information, leading to serious information leakages and
network security issues. For example, since the Facebook data privacy scandal
in 2018, privacy has become an obstacle for many applications involving
personal data, such as living habits, healthcare data, travel routes, etc.
Hence, enabling useful data collection and analysis without revealing
individuals' sensitive information has become an urgent task for the research
community.
Moreover, the traditional centralized AI paradigm shows critical limitations
in optimizing wireless networks [19], [20]. In the 5G era with ubiquitous
mobile devices and multi-channel communications, the complexity of wireless
networks is extremely high. Relying on a single centralized AI controller to
manage the whole network is clearly inefficient for optimizing network
operations and ensuring high-quality service delivery.
In this context, pushing AI to the network edge, i.e. bringing computation
functions to the data, close to data generators and receivers, has been
recognized as a promising solution to protect privacy, reduce latency, save
power and cost, and increase reliability through reduced communication. In
this paper, we
identify four categories of wireless AI at the network edge and discuss their
technologies, recent developments and future trends. More specifically, the
four categories of wireless AI are shown in Fig. 1 and briefly explained as
follows.
* •
Sensing AI at the Data Receivers: AI effectively supports sensing tasks such
as people and object detection over wireless networks. By analyzing Wi-Fi CSI
characteristics, AI-based people counting and sensing systems can be designed
to train on the CSI values and generate models so that people detection can be
implemented. For example, a convolutional neural network (CNN) is particularly
useful in learning human images and recognizing humans among the crowd at
different scales [73], [75]. Besides, AI can be used for human motion and
activity recognition using wireless signal properties [90]. In fact, the
relation between the CSI value and the dynamicity of human motion has been
exploited to build the learning model using AI algorithms [100].
* •
User and Network Device AI at the Data Generators and Receivers: At the data
generators such as sensors and mobile devices, AI functions have been embedded
into end devices to create a new paradigm: on-device AI. This would open up
new opportunities for simplifying AI implementations in wireless networks by
eliminating the need for external servers, i.e. clouds. For example, deep
neural network (DNN) inference now can be run on mobile devices such as
Android phones for local client applications such as image classification
[197], object recognition [199]. At the data receivers like edge servers, AI
has been applied in the network edge to learn and detect signals, by embedding
an AI-based intelligent algorithm on the base stations or access points.
In particular, content caching at the network edge is now much easier thanks to
the prediction capability of AI. Moreover, AI also has the potential in edge
computing management such as edge resource scheduling [111] and orchestration
management [116].
* •
Access AI in the Data Transfer Process: AI can be applied to facilitate the
data transmission from three main layers, including physical (PHY) layer,
media access control (MAC) layer, and network layer. For example, in PHY
layer, AI has been applied to facilitate CSI acquisition and estimation [125],
which are vitally important for improving the data transfer process in terms
of channel allocation and traffic congestion prediction. Further, AI also
supports channel allocation, power control and interference mitigation
functions in the MAC layer. In fact, the learning capability of AI is
particularly useful in estimating traffic channel distributions to realize
flexible channel allocation [146] and channel power management [152], aiming
at minimizing channel interference in wireless networks. In particular, AI is
able to empower spectrum sharing, data offloading, multi-connectivity, and
resource management services from the network layer perspective. For example,
AI, especially deep learning, has the potential to dynamically estimate a
spectrum resource management policy via online adaptive learning, achieving
reliable and efficient spectrum sharing among network users [177], [178].
* •
Data-provenance AI in the Data Generation Process: In the data generation
process, AI can provide various useful services, such as AI data compression,
data privacy, and data security. As an example, AI has been applied for
supporting sensor data aggregation and fusion, by classifying useful
information from the collected data samples [224]. Also, the classification
capability of AI helps monitor the data aggregation process, aiming to analyze
the characteristics of all data and detect data attacks [234]. In addition, AI
has been an ideal option to distinguish legitimate users from attackers by
training on channel data and performing classification [275]. Meanwhile, DRL
has proved extremely useful in building attack detection mechanisms [244],
[245] that learn from user behaviours and detect abnormal events so that
appropriate security actions can be taken.
### I-B Comparisons to Other Related Works and Our Contributions
TABLE I: Existing surveys on AI and wireless networks. Related Surveys | Topic | Key contributions
---|---|---
[16] | Edge computing for AI | This work mainly focused on the survey on the use of edge computing for supporting AI, including edge computing for AI training, edge computing for AI model inference.
[17] | Edge computing for ML | This work focused on the analysis on the edge ML architectures and presented how edge computing can support ML algorithms.
[18] | AI for heterogeneous networks | This study provided a very brief review on the use of AI techniques for heterogeneous networks. It analyzed each AI approach and showed its general potential in HetNets, but no clear use cases were given.
[19] | DL for edge computing | This work discusses the roles of DL in edge computing-based applications and analyses the benefits of edge computing in supporting DL deployment.
[20] | DL for mobile networks | This survey focused on the discussion on the integration of DL and mobile networks. The authors focused on the survey related to the following aspects: DL for Mobile Data Analysis, DL for User Mobility Analysis, DL for User Localization, DL for Wireless Sensor Networks, and DL for Network Security.
[21] | DL for IoT data analytics | This survey focused on the discussion on the integration of DL and IoT data analytics. The main topics includes the analysis on the use of DL for IoT devices, and DL for fog/cloud computing.
[22] | DL for network traffic control | This survey only focused on the discussion on the use of DL for network traffic control.
[23] | DL for wireless networks | This survey paid attention to performing in-depth quantitative analysis of several applications of DL for the design of wireless networks.
[24] | DRL for communication networks | This survey focused on the integration of DRL and communication networks, reviewing the benefits of DRL in solving communication issues, including network access and control, content caching and offloading, network security and privacy, resource sharing, and data collection.
[25] | ML for IoT wireless communications | This work presents a survey of ML applications to address some key problems in IoT wireless communications such as interference management, spectrum sensing and the future use on the IoT beyond communication.
[26] | ML for wireless networks | The authors mainly concentrated on the use of ML for some domains in wireless networks, including resource management, networking, mobility management, and localization.
[27] | ML for wireless networks | The authors mainly concentrated on the analysis on the roles of AI in wireless networks from the physical layer and the application layer.
[28] | ML for optical networks | The authors mainly concentrated on the use of ML in optical networks.
Our paper | Data-driven AI for wireless networks | We propose a novel Wireless AI architecture for wireless communication networks with five key data-driven AI domains, including Sensing AI, Network Device AI, Access AI, User Device AI and Data-provenance AI. Then, we focus on analyzing the role of AI in each domain from the data perspective. In particular, we provide an in-depth survey on the Wireless AI applications with various related use cases.
Driven by the recent advances of AI in wireless networks, some efforts have
been made to review related works and provide useful guidelines. Specifically,
the work in [16] presented a survey on the basics of AI and conducted a
discussion on the use of edge computing for supporting AI, including edge
computing for AI training, and edge computing for AI model inference.
Similarly, the authors in [17] analyzed the edge ML architectures and
presented how edge computing can support ML algorithms. Furthermore, the
potential of AI in heterogeneous networks (HetNets) was investigated in [18]
with a focus on reviewing AI approaches for wireless networks, but without
attention to AI-based applications. Meanwhile, DL, an active research area in
AI, has been integrated into wireless systems for various purposes. For
example, the authors in [19] discussed the roles of DL in edge computing-based
applications and analyzed the benefits of edge computing in supporting DL
deployment. The study in [20] presented a discussion on the integration of DL
and mobile networks. The authors surveyed various aspects, including DL for
mobile data analysis, user mobility analysis, user localization, wireless
sensor networks, and network security.
The authors in [21], [22] studied the potentials of DL in enabling IoT data
analytics (including surveys on the use of DL for IoT devices, and the
performance of DL-based algorithms for fog/cloud computing) and network
traffic control, while the authors in [23] mainly concentrated on the
quantitative analysis of DL applications for the design of wireless networks.
DRL, an AI technique that has emerged in recent years, has been studied for
wireless networks in [24]. That survey focused on the integration of DRL and
communication networks, with a holistic review of the benefits of DRL in
solving communication issues, including network access and control, content
caching and offloading, network security and privacy, resource sharing, and
data collection. Meanwhile, the benefits and potentials of ML in wireless
communications and networks were surveyed in recent works [25], [26], [27],
[28] in various use cases, such as resource management, IoT data networks,
mobility management and optical networking. The main contributions of related
AI survey works are summarized in TABLE I.
Despite the significant progress of surveying wireless AI applications, some
limitations have remained to be considered in current works.
* •
Most existing works mainly focused on each of the specific AI techniques
(ML/DL or DRL) for wireless networks.
* •
Most existing surveys on ML/DL only focused on certain network scenarios in
wireless communication, from either the data transmitter [16], [17] or data
receiver [18], [19] perspective.
* •
There is no existing survey on the use of AI for wireless communication
networks from the data-driven perspective, i.e. data sensing AI, data access
AI, data device AI, and data-provenance AI.
Considering these limitations and the ongoing research activities, we provide
a more comprehensive survey, which incorporates the recent achievements in the
applications of diverse AI techniques in a wide range of network scenarios. To
this end, our survey provides the following contributions:
1. 1.
We provide an overview of the state-of-the-art AI techniques used in wireless
communication networks.
2. 2.
We propose a novel Wireless AI architecture that covers five key data-driven
AI themes in wireless networks, including Sensing AI, Network Device AI,
Access AI, User Device AI and Data-provenance AI.
3. 3.
For each data-driven AI theme, we present an overview on the use of AI
approaches to solve the emerging data-related problems and show how AI can
empower wireless network functionalities. Particularly, we provide an in-depth
discussion on the Wireless AI applications in various data-driven use cases.
The key lessons learned from the survey of each domain are also summarized.
4. 4.
We provide an elaborative discussion on research challenges in wireless AI and
point out some potential future directions related to AI applications in
wireless networking problems.
### I-C Organization of the Survey Paper
The structure of this survey is organized as follows. Section II presents an
overview of state-of-the-art AI techniques including ML, DL, and DRL. Based on
the proposed wireless AI architecture outlined in the introduction section, we
start to analyze the wireless AI applications in communication networks with
five key domains: Sensing AI, Network Device AI, Access AI, User Device AI and
Data-provenance AI. More specifically, we present a state-of-the-art review on
the existing literature works in the Sensing AI area in Section III. We
classify the literature into three main domains, namely people and object
detection, localization, and motion and activity recognition. Section IV
presents a state-of-the-art survey on the Network Device AI domain,
highlighting the development of AI for supporting wireless services on edge
servers (i.e. base stations). Next, Section V provides an extensive review for
the Access AI domains, with a focus on the discussion on the roles of AI in
PHY layer, MAC layer, and network layer. We then analyze the recent
developments of the User Device AI domain in Section VI, by highlighting the
development of AI for supporting wireless services on user devices. Meanwhile,
Section VII presents a discussion on the recent use of Data-provenance AI
applications in the data generation process in various domains, namely AI data
compression, data clustering, data privacy, and data security. Lastly, we point
out the potential research challenges and future research directions in
Section VIII. Finally, Section IX concludes the paper. A list of key acronyms
used throughout the paper is presented in TABLE II.
TABLE II: List of key acronyms. Acronyms | Definitions
---|---
AI | Artificial Intelligence
ML | Machine Learning
DL | Deep Learning
DRL | Deep Reinforcement Learning
SVM | Support vector machine
SVR | Support vector regression
k-NN | k-nearest neighbors
DNN | Deep Neural Network
CNN | Convolutional Neural Network
LSTM | Long-short Term Memory
RNN | Recurrent Neural Networks
DQN | Deep Q-network
FL | Federated learning
PCA | Principal component analysis
MDP | Markov decision process
MAC | Media Access Control
IoT | Internet of Things
MEC | Mobile edge computing
UAV | Unmanned aerial vehicle
MIMO | Multiple-input and multiple-output
NOMA | Non-orthogonal multiple access
OFDM | Orthogonal Frequency-Division Multiplexing
CSI | Channel state information
V2V | Vehicle-to-vehicle
D2D | Device-to-Device
M2M | Machine-to-Machine
MTD | Machine type device
BS | Base station
MCS | Mobile Crowd Sensing
QoS | Quality of Service
SINR | Signal-to-interference-plus-noise ratio
## II AI technology: State of the Art
Reviewing the state-of-the-art studies related to the wireless AI topic, we
find that there are three key AI techniques used for intelligent wireless
communication applications, including ML, DL and DRL. The literature on AI is
very extensive, so a detailed AI survey is beyond the scope of this section.
Readers are referred to a number of fundamental books and tutorials [29],
[30], [31], [32], [33]. Here, we focus on reviewing the recent AI developments
with discussions on some important AI approaches that have been used in the
wireless domains.
### II-A Machine Learning
This sub-section covers some of the ML algorithms most researched and applied
in wireless communications, spanning supervised learning, semi-supervised
learning, unsupervised learning, and reinforcement learning.
#### II-A1 Supervised learning
Supervised learning, as the name implies, aims to learn a mapping function
representing the relation between the input and the output, guided by a
supervisor in the form of a labelled data set. In this way, a data model can
be constructed so that each new data input can be processed by the trained
model for making decisions or predictions [34]. Supervised learning is a very
broad ML domain. Here we pay attention to techniques that have been employed
in wireless AI systems, including support vector machines, k-nearest
neighbors, and decision trees.
\- Support vector machine (SVM): An SVM utilizes a subset of the dataset as
support vectors, which can be considered as training samples distributed close
to the decision surface. The main idea behind SVM is to use a linear
classifier after mapping the input dataset into a higher dimensional space.
The aim of this mapping is to separate the different classes by maximizing the
distance between them. In SVMs, a convex optimization problem is formulated to
determine the model parameters. It is also worth noting that SVMs leverage
kernel functions, i.e. polynomial or Gaussian kernels, for feature mapping.
Such functions are able to separate data in the input space by measuring the
similarity between two data points. In this way, the inner product of the
input points can be mapped into a higher dimensional space so that the data
can be separated [35].
\- K-nearest neighbors (k-NN): Another supervised learning technique is k-NN,
a lazy learning algorithm, meaning that the model is only computed at
classification time. In practical applications, k-NN is used mainly for
regression and classification given an unknown data distribution. It utilizes
the local neighbourhood to classify a new sample, with the parameter K set as
the number of nearest neighbours. The distance between the test point and each
training point is then computed using distance metrics such as the Euclidean
or Chebyshev distance [36].
\- Decision trees: Decision trees aim to build a training model that can
predict the value of target variables through decision rules. Here, a tree-
based architecture is used, where each leaf node represents a class and the
training set is placed at the root. From the dataset samples, the learning
process determines the outcome at every leaf so that the best class partition
can be obtained [37].
#### II-A2 Unsupervised Learning
Unsupervised learning is a ML technique that learns a model to estimate the
output without the need for a labelled dataset. One of the most important
unsupervised learning algorithms used in wireless networks is K-means, which
aims to find clusters in a set of unlabelled data. The algorithm
implementation is simple, with two required parameters: the original data set
and the target number of clusters. Another popular unsupervised learning
algorithm is Expectation-Maximization (EM), which is used to estimate the
values of latent variables (those that cannot be observed directly) under a
known probability distribution. The importance of EM algorithms is shown via
use cases such as parameter estimation in Hidden Markov Models, recovering
missing data in the data sample space, or supporting clustering tasks [38].
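As a minimal illustration of the two unsupervised methods above, the following
Python sketch (assuming scikit-learn and NumPy are available; the data is
synthetic) runs K-means and EM, the latter via a Gaussian mixture whose
fitting procedure is the EM algorithm:

```python
# Minimal sketch: K-means clustering and EM (via a Gaussian mixture) on
# unlabelled data, mirroring the two unsupervised methods discussed above.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Illustrative unlabelled data: two synthetic clusters.
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(5, 1, (100, 2))])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print("K-means cluster sizes:", np.bincount(kmeans.labels_))

# GaussianMixture.fit runs the EM algorithm to estimate the latent
# component memberships and the Gaussian parameters.
gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
print("EM-estimated means:\n", gmm.means_)
```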
#### II-A3 Semi-supervised learning
As an intermediate technique between supervised and unsupervised learning,
semi-supervised learning uses both labelled and unlabelled samples. The key
objective is to learn a better prediction rule than one based solely on
labelled data, since data labelling in supervised learning is time-consuming
and prone to biasing the model. Semi-supervised learning is mainly used for
classification tasks in computer vision and image processing [39].
Figure 2: Future AI functions in wireless networks.
#### II-A4 Reinforcement Learning
In Reinforcement Learning (RL), there is no need to define the target
outcomes. This method is based on the concept of environment interaction: an
agent directly interacts with the surrounding environment, senses the states,
and selects appropriate actions [40]. The agent then receives a reward after
an action is taken. The aim of the agent is to accumulate as much long-term
reward as possible, which corresponds to an objective in wireless networks
such as system throughput optimization. The most popular RL technique is
Q-learning, which uses a Q function to adjust the policy and seek an optimal
solution via trial and error, without the need for prior environment
knowledge. This concept can be applied to wireless scenarios where the system
model is hard to obtain due to the lack of a mathematical formulation, such as
real-time network optimization [41] and mobile computation offloading [42].
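The trial-and-error update at the heart of Q-learning can be sketched in a few
lines of Python. The toy environment below is purely hypothetical, standing in
for a simple network state that the agent tries to drive toward a rewarded
state:

```python
# Minimal sketch of tabular Q-learning: an agent learns action values by
# trial and error, with no prior model of the environment. The toy
# environment below is purely illustrative.
import random

n_states, n_actions = 5, 2          # e.g., congestion levels x power levels
Q = [[0.0] * n_actions for _ in range(n_states)]
alpha, gamma, eps = 0.1, 0.9, 0.1   # learning rate, discount, exploration

def step(s, a):
    """Hypothetical environment: returns (next_state, reward)."""
    s_next = min(n_states - 1, max(0, s + (1 if a == 1 else -1)))
    reward = 1.0 if s_next == n_states - 1 else 0.0
    return s_next, reward

s = 0
for _ in range(10000):
    a = random.randrange(n_actions) if random.random() < eps \
        else max(range(n_actions), key=lambda x: Q[s][x])
    s_next, r = step(s, a)
    # Q-learning update toward the bootstrapped target.
    Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])
    s = s_next
print(Q)
```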
### II-B Deep Learning
Deep Learning (DL) includes supervised or unsupervised learning techniques
that use multiple neural network layers in a deep architecture [43]. The
neural network consists of neurons connected via weighted connections among
layers. A basic deep neural network (DNN) contains an input layer for
receiving data samples, one or more hidden layers for training, and an output
layer for generating training outcomes. Here, the number of hidden layers
reflects the depth of the DNN architecture. To generate the desired output,
supervised or unsupervised learning techniques are used with labeled or
unlabeled data samples, associated with the adjustment of the weight values
among perceptrons. DL is known for AI applications such as speech recognition,
computer vision, image processing, and object detection [44]. In the
literature on wireless network topics, some of the most popular DL
architectures are adopted, including Convolutional Neural Networks (CNN),
Recurrent Neural Networks (RNN), and Long-short Term Memory (LSTM) networks.
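A minimal PyTorch sketch of such a basic DNN follows; the layer sizes, class
count, and synthetic batch are illustrative only:

```python
# Minimal sketch of the basic DNN described above: an input layer, hidden
# layers, and an output layer, trained with supervised labels.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(64, 128), nn.ReLU(),   # input layer -> first hidden layer
    nn.Linear(128, 128), nn.ReLU(),  # depth = number of hidden layers
    nn.Linear(128, 10),              # output layer (10 classes)
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(32, 64)              # a batch of synthetic samples
y = torch.randint(0, 10, (32,))      # synthetic labels
for _ in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()                  # adjust weights via backpropagation
    optimizer.step()
```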
#### II-B1 Convolutional Neural Network (CNN)
CNN is a type of neural network used mainly for image processing and
recognition with large pixel datasets [45]. Basically, the CNN architecture is
similar to a deep neural network, but adding two new convolutional and pooling
layers in between hidden layers. The convolution layer is regarded as the core
component of a CNN, responsible to process the input samples based on a
convolution operation to extract the features during the sample compression
thanks to a set of filters. Another block of a CNN is the pooling layers that
aim to make the spatial size of the representation smaller for reducing the
computing overheads and mitigating overfitting. In this fashion, the output of
previous layer can be combined with a perceptron of the next layer. Lastly,
the extracted features are further processed in the final layer until reaching
the output layer. Deep CNNs have been proved with many successes in wireless
communication applications, such as massive MIMO [46], modulation
classification in wireless networks [47].
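The convolution-pooling structure described above can be sketched minimally in
PyTorch; the input shape and class count below are illustrative (e.g., small
radio "images"):

```python
# Minimal sketch of a CNN with the convolution and pooling layers described
# above; sizes are illustrative, not tied to any specific dataset.
import torch
import torch.nn as nn

cnn = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1),  # feature extraction
    nn.ReLU(),
    nn.MaxPool2d(2),                             # shrink spatial size
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 8 * 8, 10),                   # final classification layer
)
x = torch.randn(4, 1, 32, 32)                    # batch of 32x32 inputs
print(cnn(x).shape)                              # -> torch.Size([4, 10])
```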
#### II-B2 Recurrent Neural Network (RNN)
RNNs can be regarded as a deep generative architecture since they can be
viewed as several non-linear neuron layers between the input layer and the
output layer in an unfolded fashion [48]. The depth of an RNN is highly
correlated with the length of the data sequence, and thus RNNs are well suited
to modelling data sequences such as text or audio time series. The RNN
training procedure is performed through the interactions between multilayer
perceptrons in a feedback loop: each hidden layer is chained with the others
via a loop to create a chained training structure. This operation is based on
the memory capability of each neuron to store information computed from
previous data inputs. RNN-based wireless applications include fault detection
in wireless sensor networks [49] and wireless channel modelling [50].
#### II-B3 Long Short Term Memory (LSTM)
LSTM is an extended version of the RNN. Unlike the conventional RNN, LSTM
operates based on the gate concept [51]. It keeps information of the recurrent
network in the neuron or gated cell, which has different types of gates such
as read, write and erase gates. Different from digital logic gates in
computers, these gates are analog and formulated using element-wise
multiplication by sigmoids ranging from 0 to 1. Analog gates are
differentiable, which makes them well suited to backpropagation. These gates
operate similarly to the neurons of neural networks. The cells perform data
learning via an iterative process, adjusting the weights via gradient descent.
LSTM has been applied in several wireless scenarios such as orthogonal
frequency division multiplexing (OFDM) [52] and IoT data analysis [53].
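A minimal PyTorch sketch of an LSTM-based sequence classifier follows; the
trace length, feature count, and class count are illustrative (e.g., a
CSI-like time series):

```python
# Minimal sketch: an LSTM classifying a time series (e.g., a signal trace);
# sequence length and feature sizes are illustrative.
import torch
import torch.nn as nn

class SeqClassifier(nn.Module):
    def __init__(self, n_features=30, hidden=64, n_classes=4):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                 # x: (batch, time, features)
        out, _ = self.lstm(x)             # gated cells carry state over time
        return self.head(out[:, -1, :])   # classify from the last time step

model = SeqClassifier()
x = torch.randn(8, 100, 30)               # 8 traces of 100 time steps
print(model(x).shape)                      # -> torch.Size([8, 4])
```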
### II-C Deep Reinforcement Learning
Reinforcement Learning is a strong AI technique that allows a learning agent
to adjust its policy and seek an optimal solution via trial and error, without
the need for prior environment knowledge. However, when the dimensions of the
state and action spaces explode, RL-based solutions become inefficient. Deep
Reinforcement Learning (DRL) methods such as the deep Q-network (DQN) have
been introduced and have demonstrated their superiority in solving complex
problems [54]. In particular, instead of storing values in a table as in
tabular RL, DRL leverages a deep neural network as an approximator over high-
dimensional state and action spaces. DRL is mainly used for model
optimization, classification, and estimation, where a decision making problem
is formulated using an agent and an interactive environment. At present, DRL
in wireless is a very active research area, where popular DRL techniques such
as deep Q-learning, double DRL, and duelling DRL have been used widely for
specific applications such as data offloading [55], [56] and channel
allocation [57].
### II-D Visions of Wireless AI
The aforementioned AI techniques have great potential for supporting wireless
networks from two perspectives: signal processing and data processing. Here,
signal processing aims to increase the data throughput, while data processing
focuses on producing data utility for specific applications. In fact, AI
functions are particularly useful for signal processing tasks through signal
learning and estimation. In general, signal processing takes advantage of
signal parameters to recognize and classify types of signals in the involved
applications. Some AI functions for signal processing have been recently
explored, such as channel interference signal detection, frequency and
bandwidth estimation, and modulation recognition [20], [23]. AI is able to
facilitate feature selection, an important domain in signal processing, for
extracting instantaneous time features, statistical features or transform
features. In practice, feature selection from the received signals is
particularly challenging due to the lack of prior information on signal
parameters and the uncertainty of signal time series. Feature engineering can
be bypassed by using AI architectures to effectively recognize important
signal features such as amplitude, phase, and frequency, which matter for many
signal-related domains, i.e. spectrum sharing [127], channel coding [81] or
edge signal detection [195].
Moreover, AI functions are also particularly useful for data processing. In
the era of IoT, data exhibits unique characteristics, such as large-scale
streaming, heterogeneity, time and space correlation. How to obtain hidden
information and knowledge from big IoT data is a challenge. AI can come as a
promising tool for recognizing and extracting meaningful patterns from
enormous raw input data, aiming to create a core utility for big data
processing in wireless applications such as object detection, big data
analysis or security. For example, big data collected from ubiquitous sensors
can be learned using DL for human/object recognition [259]. In such
scenarios, AI is able to extract the features that best represent the target
object, which are then classified using DL architectures, i.e. CNN or DNN. The
classification capability of AI is also beneficial to data processing tasks
for network security enhancement. Recent works [284], [285] show that a DL
network is able to detect intrusions and threats in IoT datasets, so that
modified or unauthorized data can be recognized and eliminated for better
security of data-driven applications.
The migration of computation power from the core to the edge has already
begun. Wireless AI will further lead such an exodus beyond the edge,
illuminating a future with pervasive and end-to-end AI. In the next sections,
we conduct a detailed survey on AI functions in wireless networks following a
migration trail of computation functions from the edge to the source data.
Such AI functions include Sensing AI, Network Device AI, Access AI, User
Device AI and Data-provenance AI as shown in Fig. 2.
## III Sensing AI
As shown in the Wireless AI architecture in Fig. 2, the first Wireless AI
function of the data life cycle is Sensing AI, which is responsible for
sensing and collecting data from the physical environment using sensors and
wearable devices, followed by smart data analytics using AI techniques. In
this section, we present a state-of-the-art review of the existing literature
on the Sensing AI function for wireless networks, as shown in Fig. 3. Here,
Sensing AI focuses on three main domains, namely people and object detection,
localization, and motion and activity recognition.
### III-A People and object detection
#### III-A1 People detection
Here we focus on discussing the use of AI in two people detection domains,
including people counting and mobile crowd sensing.
Figure 3: Sensing AI function.
1.1) People counting:
Problem formulation: People counting provides essential information for
various pervasive applications and services, e.g., crowd control for big
events, smart guiding in events, indoor monitoring. It is important to support
crowd analytics for human crowd size estimation and public space design.
Drawbacks of conventional methods: Most of the existing work uses computer
vision, infrared sensors, and devices attached to people such as RFID tags and
smartphones. However, infrared sensors deployed at the doorway count entering
or exiting people under the assumption that people move through one by one
within a certain time interval. Vision-based approaches count people by image
object detection, but their performance can be weakened by object overlapping,
poor lighting conditions and dead zones [58].
Unique advantages of using wireless AI techniques: The use of AI enables
smart people counting with lightweight and more accurate human information
analytics [59].
Recent works: We here summarize some representative literature studies on AI
techniques for people counting, mainly using linear regression and SVR
methods, AI-based crowdsourcing methods, and DL methods.
* •
Linear regression and SVR methods: In order to improve the accuracy of
people counting, ML methods such as linear regression and SVR have been
proposed for estimating the number of people using the existing Wi-Fi access
points in indoor environments [60]. The experimental results indicate that SVR
yields better accuracy in estimating people numbers. SVR has also been applied
to pedestrian counting from captured video images [61]. To improve the
efficiency of feature extraction, a Histogram of Oriented Gradients technique
is designed that can capture more meaningful information from the raw images
for better human movement recognition. SVR is also used to learn an estimation
function modelled from the extracted features for people counting tasks [62].
The BumbleBee radar sensor is employed to detect human movement images, and
features extracted from the images are classified into three domains, i.e.
time, frequency, and joint time-frequency. Here, an experiment is set up with
60 sessions of data collected in room settings with 43 hired participants.
K-fold cross validation is used to evaluate the feasibility of the proposed
scheme, showing a high correlation and low error compared with baseline
methods. A minimal code sketch of SVR-based counting is given after this list.
* •
Crowdsourcing methods: Another AI-based approach for people counting is
proposed in [63], which implements a people counting model called Wi-Fi-
counter using smartphones and Wi-Fi. The Wi-Fi-counter scheme employs a
crowdsourcing technique to gather the information of people and Wi-Fi signals.
However, the useful relation between them that is necessary to estimate the
people count may be hidden. Then a five-layer neural network architecture is
built, including two main phases: the offline phase for computing the
eigenvalues from Wi-Fi signals at the access point, and the online phase for
collecting Wi-Fi data that is then used for training the model. The
performance of the Wi-Fi-counter model is mainly evaluated via counting
accuracy and system power usage overheads. Meanwhile, the people counting
project in [64] investigates the efficiency of the Random Forest technique.
The number of people at train stations is estimated from images captured from
CCTV videos. Instead of using randomly extracted features, which is time-
consuming, each captured image is divided into sub-windows for fast human
recognition during the learning.
* •
DL methods: In general, ML has been applied and achieved certain successes
in people counting over wireless networks. However, the performance in terms
of accuracy, efficient large-scale training, and the ability to solve complex
counting tasks with a variety of data samples still needs to be improved [65].
DL can be an answer for facilitating future people counting applications. For
example, the authors in [66] develop a new crowd counting model, called
Wicount, using Wi-Fi devices. The dynamicity of human motions makes CSI values
change over time. Based on this concept, the CSI characteristics are captured,
including amplitude and phase information. A critical challenge is to find a
mathematical function which represents the relationship between CSI values and
the people count, due to the varying number of people. To solve this issue, a
DL solution is proposed to train on the CSI values and generate a model so
that people counting can be implemented. A cross-scene crowd counting
framework is presented in [67]. The key focus is on mapping the images
captured by camera sensors to people counts. To solve the issue of people
counting in surveillance crowd scenes unseen in the training, a CNN
architecture is proposed to train the samples iteratively, and a data-driven
technique is used to fine-tune the trained CNN model.
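As referenced in the first bullet above, the following minimal Python sketch
illustrates the SVR-based counting idea in the spirit of [60], [62]; the six
"signal features" and the count labels are entirely synthetic stand-ins:

```python
# Minimal sketch: SVR mapping hand-crafted Wi-Fi features to a people
# count, in the spirit of [60], [62]. The features (e.g., per-window mean
# and variance of signal amplitude) and data here are hypothetical.
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))            # 6 hypothetical signal features
y = np.clip(3 + X[:, 0] * 2 + rng.normal(size=200), 0, None)  # counts

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
model.fit(X[:150], y[:150])
print("Predicted counts:", model.predict(X[150:155]).round())
```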
1.2) Mobile Crowd Sensing:
Problem formulation: Mobile Crowd Sensing (MCS) is a large-scale sensing
paradigm empowered by the sensing capability of mobile IoT devices, such as
smartphones, wearable devices. MCS enables mobile devices to sense and collect
data from sensors and share with cloud/edge servers via communication
protocols for further processing. The high mobility of mobile devices makes
MCS flexible to provide different ubiquitous services, such as environment
monitoring, human crowd monitoring, traffic planning [68].
Drawbacks of conventional methods: Existing approaches like dynamic
programming have tended to treat MCS as a static model, where mobile devices
are assumed to follow predefined trajectories in a static environment [68]. In
fact, MCS tasks should always be considered under time-varying environments
and the uncertainty of mobile devices, which makes current MCS solutions
inefficient.
Unique advantages of using wireless AI techniques: AI helps to realize the
potential of MCS and overcome challenges in crowd sensing applications in
terms of high MCS system dynamicity, sensing network complexity and mobile
data diversity.
Recent works: Several ML and DL approaches have been used to facilitate MCS
applications.
* •
MCS using ML approaches: For example, the work in [69] considers a robust
crowd sensing framework with mobile devices. The key objective is to resolve
the uncertainty of sensing data from the mobile user participants and optimize
the sensing revenue under a constrained system budget. In practical scenarios,
a sensing task owner often selects the member who can gather the most
information among all participants for maximizing the profits. However, this
may be infeasible in contexts where data quality is uncertain, which makes
crowdsensing challenging. To solve this uncertainty issue, an online learning
approach is applied to obtain the information from sensing data via a
selection step, without knowing the prior statistics of the system. The
authors in [70] suggest using ML for feature selection from similar metrics in
mobile sensing data (i.e. Wi-Fi signal strength and acceleration data). To
detect human movement patterns, two techniques, individual following and group
leadership, are proposed that use an underlying ML algorithm for identifying
features that can classify human groups. Meanwhile, unsupervised ML is
investigated in [71] to estimate the number of people from audio data on
mobile devices. An application called Crowd++ is implemented on Android phones
to collect over 1200 minutes of audio from 120 participants. To estimate the
number of target objects, namely active speakers in the crowd, an ML process
is performed with three steps: speech detection, feature extraction and
counting.
* •
MCS using DL approaches: Recently, DL has been investigated to facilitate
MCS ecosystems [72]. Achieving high-quality data collection, in terms of
optimizing the collected data with the lowest energy consumption, is important
for MCS applications in smart cities, where mobile terminals are equipped with
sensors to gather ubiquitous data. Therefore, a DRL approach is proposed to
create a movement and sensing policy for mobile terminals so that the
efficiency of data collection is maximized. Here, each mobile terminal acts as
an agent that interacts with the MCS environment to find the best trajectory
for data sensing and collection. Another DL approach is introduced in [73],
where a joint data validation and edge computing scheme is proposed for robust
MCS. The scheme stems from the fact that current MCS models suffer from
several limitations: a lack of incentive mechanisms for attracting more
members to data sensing, a lack of validation of the collected data, and
network congestion due to high transmitted data volumes. Thus, DL based on a
CNN network for pattern recognition is adopted to detect unimportant
information and extract useful features for lightweight data sensing. Further,
edge computing is necessary to provide low-latency data processing for MCS,
associated with an incentive mechanism to motivate more users to collect data.
DRL is also applied in [74] for mobile crowd sensing in smart cities based on
unmanned aerial vehicles (UAVs). To maximize the data collection rate, a joint
adaptive energy allocation and task routing scheme is considered, with
charging stations providing a solution for replenishing the energy of UAVs. In
this context, how to balance the efficiency of data collection and energy
preservation for UAVs is crucial. Unlike traditional approaches that formulate
MCS as a constrained optimization problem, here DRL is leveraged for UAV
trajectory modelling through observation, training and action decision making
in an iterative manner.
#### III-A2 Object detection
In this subsection, we study the roles of AI in object detection.
Problem formulation: As one of the key computer vision areas, object
detection plays an important role in realizing the knowledge and understanding
of images and videos, and is used in various applications, such as image/video
classification, object analytics or object motion tracking.
Drawbacks of conventional methods: Traditional object detection approaches
mostly use a three-step model, with the adoption of informative region
selection, feature selection, and classification using mathematical models
[75]. For example, a multi-scale sliding window is often used to scan the
image to detect target objects, but it is computationally expensive and time-
consuming. Further, due to the high similarity of visual features, current
solutions may fail to represent the detection models exactly and may mis-
classify objects.
Unique advantages of using wireless AI techniques: In recent years, many
research efforts have attempted to use AI for object detection, thanks to its
higher prediction accuracy and low computational complexity.
Recent works: Object detection (i.e. salient object detection) employs deep
neural networks to recognize the distinctive regions in an image. The work in
[76] uses a fully convolutional CNN architecture for salient object detection.
A new encoder-decoder network is created, where a global perception module is
constructed to produce a low-resolution saliency map of the object. Another
work in [77] presents a recurrent fully convolutional network for more
accurate saliency detection. This is enabled by an iterative correction
process during learning, which refines the saliency maps and brings better
performance in terms of higher prediction accuracy. The authors in [78] focus
on designing a cascaded partial decoder mechanism which helps to remove
shallower features and refine the features of deeper layers in the CNN network
for obtaining precise saliency maps. Based on that, the proposed scheme can
achieve higher prediction accuracy with a lower algorithm running time. To
reduce the training complexity and improve the evaluation speed of traditional
neural networks in salient object detection, a simplified CNN network is
introduced in [79]. Instead of minimizing the boundary length as existing
Mumford-Shah approaches have done, the focus of this work is to minimize the
intersection over union among the pixels of the images. This non-reliance on
super-pixels makes the method fully convolutional and thus enhances the
estimation speed. Besides, a hybrid contrast-oriented CNN architecture is
investigated in [80] to solve the issues of computation latency and redundancy
in current CNN networks. Fully convolutional networks are devised in the first
stream for fast learning, mapping the image accurately to a dense saliency
prediction. Also, a complementary segment-level spatial pooling (SP) stream is
integrated for sparse saliency inference and for modelling visual contrast
among super-pixels.
### III-B Localization
Figure 4: DL for CSI-based human activity classification.
Problem formulation: Localization aims to determine the geographic coordinates
of network nodes and objects [81]. It is important to various applications,
such as object tracking, human recognition or network resource optimization.
Drawbacks of conventional methods: In many scenarios, it is infeasible to use
global positioning system (GPS) hardware to specify where a node is located,
especially in indoor environments. Distances between nodes can be calculated
using mathematical techniques such as RSSI, TOA, and TDOA [82]. However, the
locations of nodes may change over time due to movement.
Unique advantages of using wireless AI techniques: AI comes as a promising
solution to effectively estimate the locations of objects/nodes, due to
several benefits. First, AI can approximate the relative locations of nodes to
absolute locations without the need for physical measurements. Second, in
complex dynamic networks, AI can help to divide the network into sub-classes
using classifiers, which reduces the complexity of the localization problem
for better node estimation. Here, we summarize some recent research results on
the use of AI in localization tasks.
Recent works: Some important works that exploit AI for localization tasks are
highlighted below, grouped into ML and DL approaches.
* •
ML-based localization: The work in [83] presents a sensor node localization
scheme using ML classifiers. Time-stamped data is collected from four
capacitive sensors in a realistic room setting. A variety of ML classifiers,
such as Bayes networks, random forest, and SVM, are used to evaluate the
performance of indoor localization, showing that random forest performs best
overall. The authors in [84] propose a non-linear semi-supervised noise
minimization algorithm for sensor node localization via iterative learning.
The collected labelled and unlabelled data are expressed as a weighted graph,
so the localization of wireless sensor nodes can be considered as semi-
supervised learning. Another work in [85] implements a neural network on
Android phones for indoor localization. The system is a two-layer model, where
the first layer is a neural network coupled with a random forest, while the
second one is a pedometer for accelerating the data sampling process.
Different from [85], the work in [86] focuses on using ML for supporting floor
localization in a multi-access-point floor setting. To eliminate redundant
information during the data collection, principal component analysis (PCA) is
employed in the pre-processing step. Then, an extreme learning machine
approach is taken for model learning to build a classification function which
is used for estimating the floor location. Recently, the use of ML for coarse
localization in human activity recognition tasks has also been investigated
[87]. In this work, data in the form of micro-Doppler signatures collected
from radar sensors is classified using SVM and k-nearest neighbors (kNN)
classifiers. To further improve the accuracy of classification, a CNN
architecture is designed with respect to angles of object movement and numbers
of radars. A minimal fingerprinting sketch is given after this list.
* •
DL-based localization: The potential of DL in supporting localization
applications has been proven via recent successes. The work in [88] explores a
solution for device-free wireless localization in wireless networks.
Discriminative features are first extracted from wireless signals and learned
by a three-layer deep neural network. Using the extracted features from the DL
model, an ML-based softmax regression algorithm is then employed to estimate
the node location in indoor lab and apartment settings. Meanwhile, a deep
extreme learning machine technique is presented in [89] for fingerprint
localization based on the signal strength of wireless sensor networks. Instead
of using random weight generation, a neural-network-based autoencoder is used
for sample training in an encoding-decoding manner. This allows better
features to be extracted and the node to be localized more accurately. DL is
also considered in [90] to support the estimation of target locations in IoT
networks. More specifically, a CNN model is developed that learns the
amplitude of the channel frequency response from the Wi-Fi CSI signals
collected by a gateway. The model is tested in real room environments and
shows high localization precision. CSI data is also exploited in [91] to
estimate the object location. From the CSI packets, random forest is used for
selecting key features, including the variance, median, maximum, and minimum
amplitudes of each frequency band. These features are fed to a CNN model built
with Keras to train the model for location estimation. Recently, DRL has been
applied to facilitate localization tasks [92]. To formulate the localization
problem as an MDP, which is the key structure of DRL approaches, a continuous
wireless localization process is derived. The localization task is divided
into time slots, where the agent inputs the location information of the
previous time slot and outputs the new location in the next slot. Therefore,
the computation of the location at each step only depends on the location at
the previous step, which makes it appropriate to formulate an MDP problem.
Then, a deep Q-learning algorithm is built to solve the proposed problem,
aiming to minimize the error of the localization estimates.
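As referenced above, the following minimal Python sketch illustrates
fingerprint-style localization in the spirit of the ML-based studies [83],
[91]: a random forest maps synthetic received-signal-strength fingerprints
(generated here with a hypothetical log-distance model) to coordinates:

```python
# Minimal sketch of fingerprint-based localization: a random forest maps
# received-signal fingerprints to (x, y) coordinates. Fingerprints and
# positions here are synthetic stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
positions = rng.uniform(0, 10, size=(300, 2))          # ground-truth (x, y)
# Hypothetical RSSI from 4 access points, decaying with distance.
aps = np.array([[0, 0], [0, 10], [10, 0], [10, 10]])
dists = np.linalg.norm(positions[:, None, :] - aps[None, :, :], axis=2)
rssi = -30 - 20 * np.log10(dists + 0.1) + rng.normal(0, 2, dists.shape)

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(rssi[:250], positions[:250])
err = np.linalg.norm(model.predict(rssi[250:]) - positions[250:], axis=1)
print("Mean localization error (m):", round(float(err.mean()), 2))
```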
### III-C Motion and Activity Recognition
Problem formulation: In social networks, human motion data may be very large
and activity patterns are changeable over time. How to deal with large-scale
datasets for motion and activity recognition tasks is a challenge.
Drawbacks of conventional methods: Many traditional techniques assume prior
knowledge of the CSI or environment information for motion recognition [93],
[94], but this is hard to achieve in practice. Moreover, the samples collected
from human motion activities are often very large and come with the complexity
of multiple motion patterns, which makes current approaches such as dynamic
programming inefficient at modelling the motion estimation task exactly so
that the problem can be solved effectively.
Unique advantages of using wireless AI techniques: Thanks to high
classification accuracy and efficient online learning on scalable big
datasets, AI has been employed for human motion and activity recognition based
on wireless signal properties, with improved recognition accuracy.
Recent works: In [93], the authors study the CSI retrieved from a host
computer to estimate and classify four body motions, including normal gait and
abnormal gait (ataxia) [95]. The main idea is to analyze the relation between
the CSI values and the changes in human motion. Then, a system setting with an
access point and a network interface for CSI detection is considered, so that
gait abnormality and hand tremors can be estimated using an ML algorithm. The
authors in [96] develop a motion detection system using Wi-Fi CSI data. The
system relies on the variance of CSI to detect human motion, empowered by a
moving filter in a supervised ML algorithm. Compared to traditional motion
recognition approaches using wearable sensors or radars, which require
hardware setup and management efforts, motion detection based on Wi-Fi signals
is much more flexible and cheaper since it uses the available infrastructure.
Moreover, a multiple access-point-based CSI aggregation architecture is
introduced in [97], aiming to estimate human motion. Large-scale Wi-Fi CSI
datasets from the data collection process are fed to a multi-layer CNN model
for training and for analyzing the relations between CSI strengths and levels
of human motion. In comparison to SVM-based training, the CNN provides better
accuracy in activity recognition. As an investigation of the influence of
human motions on Wi-Fi signals, the work in [98] suggests a DL network for
image processing. From the measurements on multiple channels, CSI values are
transformed into a radio image, from which features are generated via a DNN
learning process and trained for motion classification with ML. A DL model for
CSI-based activity recognition can be seen in Fig. 4. As a further effort to
improve feature selection from CSI datasets in human activity recognition, an
enhanced learning model is suggested in [99]. In fact, the data collected from
Wi-Fi signals contains redundant information that may not be related to human
motion, and thus needs to be removed for better estimation. Motivated by this,
a background reduction module is designed to filter motion-unrelated
information from the original CSI data. Then, correlation features are
extracted from the filtered CSI data. To this end, a DL-based RNN is employed
to extract the deeper features via offline training. In particular, LSTM is
integrated with the RNN to achieve better recognition accuracy with lower
computational complexity. Moreover, a Wi-Fi CSI collection framework based on
IoT devices is introduced in [100] for recognizing human activity. To learn
effectively the features extracted from CSI datasets, a joint DL model of an
autoencoder, CNN and LSTM is designed. Here, the autoencoder aims to eliminate
the noise from CSI data, the CNN extracts the important features from the
output of the autoencoder, and the LSTM extracts inherent dependencies in the
features. Compared to existing feature selection techniques, the proposed
scheme can achieve much lower algorithm running latency with better
recognition accuracy.
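A minimal PyTorch sketch of such a joint convolutional-recurrent pipeline
follows, in the spirit of (but much simpler than) the model of [100]; the
autoencoder denoising stage is omitted, and the subcarrier count, window
sizes, and classes are illustrative:

```python
# Minimal sketch of a joint CNN + LSTM pipeline for CSI-based activity
# recognition: a 1-D convolution extracts local features per time window,
# and an LSTM models their temporal dependencies.
import torch
import torch.nn as nn

class CSIActivityNet(nn.Module):
    def __init__(self, n_subcarriers=30, hidden=64, n_classes=6):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_subcarriers, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.lstm = nn.LSTM(32, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                      # x: (batch, time, subcarriers)
        z = self.conv(x.transpose(1, 2))       # convolve over time
        out, _ = self.lstm(z.transpose(1, 2))  # temporal dependencies
        return self.head(out[:, -1, :])

model = CSIActivityNet()
x = torch.randn(4, 200, 30)                    # 4 CSI traces, 200 time steps
print(model(x).shape)                          # -> torch.Size([4, 6])
```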
### III-D Lessons learned
The main lessons acquired from the survey on the Sensing AI domain are
highlighted as the following.
* •
AI has been applied and has achieved certain successes in people and object
detection over wireless networks. By analyzing Wi-Fi CSI characteristics, AI-
based people counting and sensing systems can be built to train on the CSI
values and generate models so that people detection can be implemented. We
have also observed that CNN is particularly useful in learning human images
and recognizing humans in crowds at different scales. Besides, many recent
research efforts have attempted to use DL for object detection, with a focus
on two domains: salient object detection and text detection. We also find that
most studies leverage deep CNNs as the classifier for more accurate object
recognition thanks to their fully convolutional networks [79], [101], [102].
* •
Moreover, AI helps to effectively estimate the locations of objects/nodes,
due to several benefits. First, AI can approximate the relative locations of
nodes to absolute locations without the need for physical measurements.
Second, in complex dynamic networks, AI can help to divide the network into
sub-classes using classifiers, which reduces the complexity of the
localization problem for better node estimation. Many research efforts have
been made to apply both ML and DL in supporting localization applications in
mobile IoT networks [89], [92].
* •
Lastly, the use of AI for human motion and activity recognition using
wireless signal properties has been realized. The relation between CSI values
and changes in human motion has been exploited to build learning models using
AI algorithms. Especially, to learn effectively the features extracted from
CSI datasets for human detection tasks, a joint DL model of an autoencoder,
CNN and LSTM has been designed [100]. Here, the autoencoder aims to eliminate
the noise from CSI data, the CNN extracts the important features from the
output of the autoencoder, and the LSTM extracts inherent dependencies in the
features. This triple AI combination is promising for improving recognition
accuracy and reducing training latency, which is important for wireless AI
deployments in realistic scenarios.
In summary, we list Sensing AI use cases in the taxonomy TABLE III to
summarize the key contributions of each reference work.
TABLE III: Taxonomy of Sensing AI use cases. Category | Ref. | Use case | AI techniques applied | Main contributions
---|---|---|---|---
People and object detection | [60] | People counting | SVR | A scheme for estimating the number of people using the existing Wi-Fi access points.
[62] | People counting | SVR | An estimation function for people counting using extracted features.
[63] | People counting | Five-layer neural networks | A people counting model, called Wi-Fi-counter using smartphone and Wi-Fi.
[69] | Crowd sensing | Online ML | A robust crowd sensing framework with mobile devices.
[73] | Crowd sensing | CNN | A pattern recognition model for detecting unimportant information and extract useful features for lightweight crowd sensing.
[77] | Salient object detection | Recurrent convolutional network | A recurrent fully convolutional network for more accurate saliency detection.
[79] | Salient object detection | CNN | A simplified CNN network for salient object detection.
Localization | [83] | Sensor node localization | Bayes network, random forest, and SVM | A sensor node localization scheme using ML classifiers.
[84] | Sensor node localization | Semi supervised learning | A non-linear semi supervised noise minimization algorithm for sensor node localization via iterative learning.
[85] | Indoor localization | Random forest | A neural network on Android phones for indoor localization.
[88] | Wireless localization | Three-layer DNNs | A solution for device-free wireless localization in wireless networks.
Motion and activity recognition | [93] | Human body motion classification | Online ML | A study of CSI retrieved from the host computer to estimate and classify four body motions.
[96] | Motion detection | Supervised ML | A motion detection system using Wi-Fi CSI data.
[97] | Human motion detection | Multi-layer CNN | A multiple access point-based CSI aggregation architecture, aiming to estimate human motion.
[98] | Human motion detection | DNNs | An investigation on the influence of human motions on Wi-Fi signals.
[100] | Human activity recognition | CNN | A Wi-Fi CSI collection framework based on IoT devices for recognizing human activity.
## IV Network Device AI
The next Wireless AI function of the data life cycle in Fig. 2 is Network
Device AI, where AI is applied at the network edge to learn and detect
signals by embedding an AI-based intelligent algorithm on the base stations
or access points. Here, we focus on two main applications of the Network
Device AI function: content caching and edge computing management, as shown
in Fig. 5.
Figure 5: Network Device AI function.
### IV-A Content caching
Problem formulation: Caching contents at base stations/access points at the
edge of the network has emerged as a promising approach to reduce traffic and
enhance the quality of experience (QoE) [103].
Drawbacks of conventional methods: Most existing studies assume that the
distribution of content popularity is known, but in practice it is complex
and non-stationary because of high content dynamics [103].
Unique advantages of using wireless AI techniques: Considering the complexity
and dynamics of contents, recent studies have investigated the integration of
AI at BSs to facilitate content caching policies.
Recent works: Many solutions have been proposed to support content caching,
using ML and DL approaches.
* •
ML-based edge content caching: For example, the authors in [104] consider
optimization of caching efficiency at small cell base stations. They focus on
minimizing the backhaul load, formulate it as an optimization problem and
leverage a K-means clustering algorithm to identify spatiotemporal patterns
from the content requests at the BS. As a result, the similar content
preferences are grouped into different classes so that the BS can cache the
most important content with high accuracy and low caching latency. Another
work that uses ML for edge caching, aiming to estimate content popularity
and design a caching scheme, is proposed in [105]. Caching entities with
cabled or wireless backhaul are deployed at the BSs, i.e. macro BSs or small
BSs, to enable a flexible and low-latency method of content distribution
compared with caching on remote cloud servers. Because the computation
resource at the BS can be insufficient for processing cached data and running
the learning process, a power-efficient learning approach is developed using
unsupervised learning (i.e. K-means clustering) combined with an improved DNN
network. The simulation results verify
that compared to conventional optimization algorithms, the learning-based
scheme can achieve better cached data prediction rate with higher accuracy and
lower energy consumption. RL has recently been applied to edge-based content
caching architectures. With an assumption that content popularity and user
preference are not available at MEC servers, a multi-agent RL is an
appropriate option for learning and training the caching decisions [106]. Each
MEC server as an agent learns on how to coordinate its individual caching
actions to maximize the optimal caching reward (i.e. the downloading latency
metric). Because the conventional Q-table may not be enough for action
exploration in the large state space, a combinatorial upper confidence bound
solution is integrated to mitigate the complexity of the RL algorithm.
* •
DL-based edge content caching: Unlike the aforementioned solutions, the work
in [107] suggests a DRL approach for content caching at the edge nodes, i.e.
at the base stations, to solve the policy control problem. Indeed, content
delivery is complex, with a huge number of contents and different cache sizes
in realistic scenarios, which poses critical challenges in estimating the
content popularity distribution and determining which contents to store in
caches. DRL, which is well known for solving complex control problems, is
suitable for these caching decisions at the BS. User requests to the BS are
considered as the system state and caching decisions (cache or not) are action
variables in the RL formulation, while cache hit rate is chosen as the reward
to represent the goal of the proposed scheme. Through the simulation, DRL has
proven its efficiency with better cache hit rate in various system settings
(i.e. edge cache capability). For content caching in fog computing-empowered
IoT networks, an actor–critic deep RL architecture is provided in [108] and
deployed at the edge router with the objective of reducing caching latency.
Instead of deriving a mathematical formulation which is hard to obtain in
practice, the authors propose a model-free learning scheme, where the agent
continuously interacts with the fog caching environment to sense the states
(i.e. numbers of contents and user requests). Since the edge router prefers to
cache those contents with better popularity, the content transmission delay is
formulated as a reward function. In this way, content with higher popularity
is more likely to have a lower time cost, which motivates the agent to take
the caching action for long-term reward (a minimal sketch of such a caching
agent is given after this list).
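To illustrate the idea, the following is a minimal tabular Q-learning sketch of an edge caching agent: the state is the requested content, the action is whether to cache it, and the reward is a cache-hit proxy. The Zipf-like popularity, cache size, reward shaping and eviction rule are toy assumptions, not the designs of [107] or [108].

```python
# Hypothetical tabular Q-learning sketch for edge caching decisions:
# state = requested content id, action = {0: don't cache, 1: cache},
# reward = a cache-hit-rate proxy. Popularity follows an assumed Zipf law.
import numpy as np

rng = np.random.default_rng(0)
n_contents, cache_size = 50, 5
popularity = 1.0 / np.arange(1, n_contents + 1)      # Zipf-like law
popularity /= popularity.sum()

Q = np.zeros((n_contents, 2))        # Q-table over (content, action)
cache, hits, n_steps = set(), 0, 20000
alpha, gamma, eps = 0.1, 0.9, 0.1

for t in range(n_steps):
    c = rng.choice(n_contents, p=popularity)         # incoming request
    if c in cache:
        hits += 1
    # epsilon-greedy caching decision for the requested content
    a = rng.integers(2) if rng.random() < eps else int(Q[c].argmax())
    if a == 1 and c not in cache:
        if len(cache) >= cache_size:                 # evict the lowest-Q item
            cache.remove(min(cache, key=lambda x: Q[x, 1]))
        cache.add(c)
    reward = popularity[c] if c in cache else -0.01  # hit-rate proxy
    Q[c, a] += alpha * (reward + gamma * Q[c].max() - Q[c, a])

print(f"cache hit rate: {hits / n_steps:.2f}, cached: {sorted(cache)}")
```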
### IV-B Edge Computing Management
Problem formulation: With the assistance of mobile edge computing, mobile
devices can rely on edge services for their computation and processing tasks.
Due to the constrained resources of edge servers and the unprecedented mobile
data traffic, resource scheduling and orchestration management are required to
achieve robust edge management and ensure long-term service delivery for
ubiquitous mobile users.
Drawbacks of conventional methods: Most current strategies rely on mixed
integer nonlinear programming or dynamic programming to find edge computing
management policies, but these are difficult to apply to complex wireless
edge networks with unpredictable data traffic and user demands [16], [17].
Further, as the dimension of the network increases with the number of mobile
users, traditional techniques cannot cope with the growing computational
burden and thus scale poorly to meet the QoS of all users.
Unique advantages of using wireless AI techniques: AI would be a natural
option to achieve scalable and low-complexity edge computing management thanks
to its DL and large-scale data optimization capabilities. In this sub-section, we
investigate the use of AI for edge computing management with some
representative recent works in two use case domains: edge resource scheduling
and orchestration management.
Recent works: Many studies have leveraged AI techniques to support edge
computing management via two important services: edge resource scheduling and
orchestration management.
* •
Edge resource scheduling: AI has been applied widely to facilitate edge
resource scheduling. The authors in [109] present a DL-based resource
scheduling scheme for hybrid MEC networks, including base stations, vehicles
and UAVs connecting with edge cloud. A DNN network is proposed with a
scheduling layer aimed to perform edge computing resource allocation and user
association. To provide an adaptive policy for resource allocation at the MEC
nodes in mobile edge networks with multi-users, a deep Q-learning scheme is
introduced in [110]. In the multi-user MEC network, it is necessary to have a
proper policy for the MEC server so that it can adjust part of edge resource
and allocate fairly to all users to avoid interference and reduce latency of
users queuing to be served. Therefore, a smart resource allocation scheme
using deep Q-learning is developed and deployed directly on the MEC server so
that it can effectively allocate resources to the offloaded tasks of network
users under different data task arrival rates (a minimal DQN sketch is given
after this list). Moreover, a multi-edge resource
scheduling scheme is provided in [111]. An edge computing use case for a
protest crowd incident management model is considered where ubiquitous data
including images and videos are collected and offloaded to the nearby MEC
server for execution. Here, how to allocate edge resource to the offloaded
tasks and adjust edge computation with varying user demands to avoid network
congestion is a critical challenge. Motivated by this, an edge computing
prediction solution using ML is proposed, based on real data (transmission and
delay records) in wireless network experiments. The ML algorithm can estimate
network costs (i.e. user data offloading cost) so that efficient edge resource
scheduling can be achieved. To further improve the efficiency of online edge
computing scheduling, another solution in [112] suggests a DRL-based scheme.
The authors focus on the mobility-aware service migration and energy control
problem in mobile edge computing. The model includes three main entities: a
network hypervisor for aggregating the states of input information such as
server workloads, bandwidth consumption, spectrum allocation; a RL-based
controller for producing the control actions based on the input information,
and an action executor for obtaining the control decisions from the RL controller
to execute corresponding actions. As a case study, the authors investigate the
proposed DRL scheme with Deep Q-learning for an edge service migration
example, showing that DRL can achieve superior performance, compared to the
greedy scheme in terms of lower system cost [113].
* •
Edge orchestration management: In addition to edge resource scheduling, the
potential of AI has been investigated in edge orchestration management. As an
example, the work in [114] considers an orchestration model of networking,
caching and computing resources with MEC. The authors concentrate on the
resource allocation problem that is formulated as an optimization problem with
respect to the gains of networking, caching and computing. This triple model
is highly complex and hard to solve with traditional optimization algorithms.
Therefore, a deep Q-learning approach is proposed to achieve high convergence
performance with the best system utility. Meanwhile, in
[115], a joint scheme of virtual edge orchestration with virtualized network
functions and data flow scheduling is considered. Motivated by the fact that
real-world networks may be hard to model due to their complex and dynamic
nature, a model-free DRL algorithm using deep deterministic policy gradients
is proposed, aiming to minimize system delay and operation costs (i.e.
execution time). Notably, an edge-Service Function
Chain (SFC) orchestration scheme based on blockchain [116] is introduced in
[117] for hybrid cloud-edge resource scheduling. Blockchain is adopted to
create secure transmission between users and edge service providers, while the
SFC orchestration model is formulated as a time-slotted chain to adapt to the
dynamics of IoT. Based on that, a joint problem of orchestration control and
service migration is derived and solved by a DRL approach using deep
Q-learning. The numerical simulation results show that the learning-based
method can achieve high performance with low orchestration latency and system
cost savings.
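As a concrete illustration of the deep Q-learning schemes above, the following is a minimal PyTorch sketch of a DQN agent assigning offloaded tasks to edge servers. The state and action spaces, the load dynamics and the balance-based reward are toy assumptions rather than the designs of [110] or [112].

```python
# Hypothetical DQN sketch for edge resource allocation: the state is a
# vector of server loads, the action assigns an offloaded task to one of
# the servers, and the reward penalizes load imbalance. All toy values.
import torch
import torch.nn as nn
import random

n_servers = 4
qnet = nn.Sequential(nn.Linear(n_servers, 32), nn.ReLU(),
                     nn.Linear(32, n_servers))
opt = torch.optim.Adam(qnet.parameters(), lr=1e-3)
gamma, eps = 0.9, 0.1
state = torch.zeros(n_servers)                 # current server loads

for step in range(2000):
    if random.random() < eps:
        action = random.randrange(n_servers)   # explore
    else:
        action = int(qnet(state).argmax())     # exploit
    next_state = state.clone()
    next_state[action] += random.uniform(0.5, 1.5)   # task added
    next_state *= 0.9                                # tasks drain over time
    reward = -float(next_state.max() - next_state.mean())  # balance reward
    # one-step temporal-difference update
    with torch.no_grad():
        target = reward + gamma * qnet(next_state).max()
    loss = (qnet(state)[action] - target) ** 2
    opt.zero_grad(); loss.backward(); opt.step()
    state = next_state.detach()

print("learned Q-values for an idle system:", qnet(torch.zeros(n_servers)))
```

In a full implementation, a replay buffer and a separate target network would typically be added to stabilize training.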
### IV-C Lessons learned
The deployment of AI functions at edge servers (i.e. base stations) instead of
remote clouds has emerged as a new direction for flexile network designs with
low latency and privacy protection. AI has been applied at the network edge
to learn and detect signals, by embedding an AI-based intelligent algorithm on
the base stations or access points [118]. In particular, content caching at
the network edge is now much easier thanks to the prediction capability of AI.
Due to the huge number of contents and different cache sizes in realistic
network scenarios, accurately estimating content popularity to achieve a high
cache hit rate at the edge is highly challenging. AI comes as a natural choice
to provide smart caching policies for implementing content caching with
respect to latency and content loss constraints [105], [119]. Moreover, the
potential of AI in edge computing management has been also investigated in
mainly two use-case domains: edge resource scheduling [109] and orchestration
management [114]. We list Network Device AI use cases in the taxonomy TABLE IV
to summarize the contributions of each reference work.
TABLE IV: Taxonomy of Network Device AI use cases.

Category | Ref. | Use case | AI techniques applied | Main contributions
---|---|---|---|---
Network Device AI | [104] | Caching optimization | K-means | A scheme for optimization of caching efficiency at small cell base stations.
 | [106] | Caching decisions | Multi-agent RL | A scheme for coordinating individual caching actions to maximize the optimal caching reward in MEC.
 | [110] | Edge resource allocation | Deep Q-learning | A scheme for resource allocation at the MEC nodes in mobile edge networks with multi-users.
 | [111] | Multi-edge resource scheduling | Online ML | A protest crowd incident management model for multi-edge resource scheduling.
 | [112] | Edge computing management | DRL | A DRL-enabled edge computing management framework with MEC.
 | [114] | Edge orchestration management | Deep Q-learning | An orchestration model of networking, caching and computing resources with MEC.
 | [115] | Virtual edge orchestration | Model-free DRL | A joint scheme for virtual edge orchestration with virtualized network functions and data flow scheduling.
## V Access AI
The third Wireless AI function of the data life cycle in Fig. 2 is Access AI
where AI can be applied to facilitate the data transmission from three main
layers, including the PHY layer, MAC layer, and network layer. Here, we
discuss the roles of AI in these layers, as shown in Fig. 6.
Figure 6: Access AI function.
### V-A PHY Layer AI
AI has been applied in physical layer communications and shown impressive
performances in recent years [120], [121], [122]. Here, we focus on discussing
the roles of AI in three important domains, namely CSI acquisition, precoding,
and coding/decoding.
#### V-A1 CSI acquisition
In this sub-section, we analyze the application of AI in CSI acquisition.
Problem formulation: The massive multiple-input multiple-output (MIMO) system
is widely regarded as a major technology for fifth-generation wireless
communication systems. By equipping a base station (BS) with hundreds or even
thousands of antennas in a centralized or distributed manner, such a system
can substantially reduce multiuser interference and provide a multifold
increase in cell throughput. This potential benefit is mainly obtained by
exploiting CSI at BSs. In current frequency division duplex (FDD) MIMO
systems (e.g., long-term evolution Release-8), the downlink CSI is acquired at
the user equipment (UE) during the training period and returned to the BS
through feedback links.
Drawbacks of conventional methods: Vector quantization or codebook-based
approaches are usually adopted to reduce feedback overhead. However, the
feedback quantities resulting from these approaches scale linearly with the
number of transmit antennas and become prohibitive in the massive MIMO
regime.
Unique advantages of using wireless AI techniques: Recently, AI techniques
such as ML and DL have been adopted as a promising tool to facilitate CSI
acquisition in wireless communications. For example, AI potentially lowers the
complexity and latency for CSI acquisition tasks such as CSI feedback and CSI
sensing. Particularly, it is shown that a deep neural network with online
learning is able to facilitate the parameter tuning and expedite the
convergence rate, which improves the estimation of CSI without occupying
uplink bandwidth resources [121].
Recent works: There are a number of studies working on ML and DL techniques
for CSI acquisition.
* •
ML-based CSI acquisition: The work in [123] introduces a ML-based CSI
acquisition architecture for massive MIMO with frequency division duplex
(FDD). The learning algorithm is divided into three steps: offline regression
model training, online CSI estimation, and online CSI prediction by using
linear regression (LR) and support vector regression (SVR). The proposed ML
solution enables each user to estimate the downlink channel separately and
then feed back the estimated low dimension channel to the base station based
on the simple codebook method, which reduces the overhead of uplink feedback.
Another work in [124] is also related to CSI estimation. It provides a ML-
based time division duplex (TDD) scheme in which CSI can be estimated via a
ML-based predictor instead of a conventional pilot-based channel estimator.
Here, two ML-based structures are built to improve the CSI prediction, namely,
a CNN combined with an autoregressive (AR) predictor, and an autoregressive
network with exogenous inputs, which improve the prediction accuracy and
alleviate the overhead of channel estimation.
* •
DL-based CSI acquisition: DL has been also widely used in CSI acquisition
tasks. As an example, the authors in [125] introduce CsiNet, a novel DL-
enabled CSI sensing and recovery mechanism that learns to effectively use
channel structure from training samples. In CsiNet, the recent and popular
CNNs are explored for the encoder (CSI sensing) and decoder (recovery network)
so that they can exploit spatial local correlation by enforcing a local
connectivity pattern among the neurons of adjacent layers. To compress CSI in
massive MIMO systems, a multiple-rate compressive sensing neural network
framework is proposed in [126]. The key focus is on investigating the CSI
reconstruction accuracy and feasibility at the user equipment (UE) for the
CSI feedback problem by using a DL-based network architecture, called CsiNet+.
This framework not only enhances reconstruction accuracy but also decreases
storage space at the UE, thus enhancing the system feasibility. The authors in
[127] also propose a convolutional DL architecture, called DeepCMC, for
efficient compression of the channel gain matrix to alleviate the significant
CSI feedback load in massive MIMO systems. The proposed scheme is composed of
fully convolutional layers followed by quantization and entropy coding blocks
for better CSI estimation under low feedback rates.
In a similar direction, the study in [128] presents a new DL architecture for
CSI feedback compression, which also takes advantage of the memory
characteristic of RNNs for feature extraction, compression and decompression.
To reduce the size of the model, depthwise separable convolutions are adopted
in feature recovery for better recovery quality at different compression
ratios (CRs). To reduce the CSI feedback overhead caused by the significant
number of BS antennas in massive MIMO systems, DL is used in [129] together
with superimposed coding. In particular, a multi-task neural network with a
subnet-by-subnet training method is employed to facilitate parameter tuning
and expedite the convergence rate, which improves the estimation of CSI
without occupying uplink bandwidth resources. A minimal sketch of the
encoder-decoder idea behind these CSI feedback schemes is given below.
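To make the encoder-decoder idea concrete, here is a minimal PyTorch sketch of CSI feedback compression in the spirit of CsiNet, using plain fully connected layers. The antenna and subcarrier counts, the codeword dimension and the random surrogate channel samples are illustrative assumptions, not the actual architecture of [125].

```python
# Hypothetical encoder-decoder sketch of DL-based CSI feedback compression
# (CsiNet-style in spirit only): the UE encodes the channel matrix into a
# short codeword, the BS decodes it back. Dimensions are illustrative.
import torch
import torch.nn as nn

n_tx, n_sub, code_dim = 32, 32, 64        # antennas, subcarriers, codeword

class CsiAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        dim = 2 * n_tx * n_sub             # real + imaginary parts
        self.encoder = nn.Sequential(      # runs at the UE
            nn.Flatten(), nn.Linear(dim, 256), nn.ReLU(),
            nn.Linear(256, code_dim))
        self.decoder = nn.Sequential(      # runs at the BS
            nn.Linear(code_dim, 256), nn.ReLU(),
            nn.Linear(256, dim))

    def forward(self, h):                  # h: (batch, 2, n_tx, n_sub)
        code = self.encoder(h)             # low-dimensional feedback
        return self.decoder(code).view_as(h), code

model = CsiAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
h = torch.randn(128, 2, n_tx, n_sub)       # surrogate channel samples
for epoch in range(50):
    recon, _ = model(h)
    loss = nn.functional.mse_loss(recon, h)
    opt.zero_grad(); loss.backward(); opt.step()
print(f"reconstruction MSE: {loss.item():.4f}, "
      f"compression ratio: {2 * n_tx * n_sub // code_dim}x")
```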
#### V-A2 Precoding
In this sub-section, we discuss the roles of AI in supporting precoding.
Problem formulation: Hybrid precoding is a significant issue in millimeter
wave (mmWave) massive MIMO systems. To cope with a large number of antennas,
hybrid precoding is required, where signals are processed by both analog and
digital precoders. Typically, the estimation of MIMO channels is necessary to
establish the hybrid precoding matrices, where AI has emerged as a promising
tool to support precoding.
Drawbacks of conventional methods: Due to the large-scale antennas as well as
the high resolution analog-to-digital converters (ADCs)/digital-to-analog
converters (DACs), current digital precoding frameworks [130], [131] suffer
from high power consumption and hardware costs. Besides, the number of radio
frequency (RF) chains is considerable, which may make traditional solutions
such as one-bit ADC processing [132] inefficient for achieving a desired
throughput rate.
Unique advantages of using wireless AI techniques: Recently, AI has attracted
much attention in precoding system designs thanks to its low complexity,
online learning for better estimation of hybrid precoding matrices, and its
adaptability to time-varying mmWave channels.
Recent works: There are many AI techniques used to facilitate precoding tasks
in the literature.
* •
ML approaches: For example, ML has been adopted in [133] to build a hybrid
precoding architecture with two main steps. First, an optimal hybrid precoder
is designed with a hybrid precoding algorithm enabled by an enhanced cross-
entropy approach in ML. The second step is to design the optimal hybrid
combiner with an approximate optimization method, aiming to obtain the optimal
precoding matrix and combining matrix to maximize the spectrum efficiency of
mmWave massive MIMO system with low computation complexity [134]. To support
hybrid precoding in mmWave MIMO-OFDM systems, a ML approach using cluster
analysis is introduced in [135]. The key purpose is to group dynamic subarrays
with the help of PCA, which is used to extract a frequency-flat RF precoder
from the principal components of the optimal frequency-selective precoders.
Similarly, the study in [136] incorporates an unadjusted probability and a
smoothing constant into a cross-entropy method in ML to implement hybrid
precoding that addresses the issue of energy loss in mmWave massive MIMO
systems with multiple RF chains.
* •
DL approaches: Meanwhile, the authors in [137] investigate the use of DL,
namely neural networks called an auto-precoder, to design the hybrid precoding
metrics by using the prior observations of the channel. Based on a learning
procedure, the auto-precoder can learn to optimize the channel sensing
vectors, which are then used to predict the hybrid beamforming vectors with
negligible training overhead. The work in [138] applies DNNs to obtain a
decentralized robust precoding framework in multi-user MIMO settings, aiming
to cope with the continuous decision space and the required fine granularity
of the precoding. To this end, the authors develop a decentralized DNNs
architecture, where each TX is responsible to approximate a decision function
using a DNN, and then all the DNNs are jointly trained to enforce a common
behaviour for all cooperative TXs. To reduce the large computational
complexity and local-minimum problems faced by the traditional design of
hybrid precoders for multi-user MIMO scenarios, the work in [139] considers a
DL scheme using a CNN for a new mmWave hybrid precoding architecture. More
specifically, the precoder performs an offline training process by using the
channel matrix of users as the input to train the CNN, which generates the
output labelled as the hybrid precoder weights. After that, based on the
trained model, the CNN-MIMO can predict the hybrid precoders by feeding the
network with the user channel matrix. A significant advantage of the CNN
architecture is its capability to handle imperfections in the input data and
the dynamicity of the network, which contributes to stable and better
performance in terms of high throughput rates [140] (a minimal sketch of such
a CNN-based precoder predictor is given after this list).
* •
DRL approaches: Recently, DRL has been appeared as a new research direction to
further improve the performance of current ML/DL approaches in precoding
design. The work in [141] also focuses on precoding issues, but it uses DRL
for hybrid precoding in MIMO under an uncertain environment and unknown
interference. The benefit of DRL is the ability to learn from the environment
using an agent to force a policy for optimal actions and then obtain rewards
without a specific model. Using this theory, the authors develop a game
theoretic DRL-based algorithm to learn and update intelligently a dynamic
codebook process for digital precoding. The simulation results show that DRL
can effectively maximize multi-hop signal coverage and data rate for MIMO
systems. Similarly, the study in [142] also analyzes a mmWave hybrid precoding
model. This work introduces a mmWave point-to-point massive MIMO system where
the hybrid beamforming design is considered using DRL. In this scheme, channel
information is considered as the system state, precoder matrix elements are
actions, while spectral efficiency and bit error rate are regarded jointly as
a reward function. Based on that, the mmWave hybrid precoding problem is
formulated as a Markov decision process (MDP) that is solved by a deep
Q-learning algorithm. The authors conclude that the adoption of DRL can
achieve high performance in terms of high spectral efficiency and minimal
error rate, compared to its algorithmic counterparts.
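The following is a minimal PyTorch sketch of the supervised CNN-to-precoder mapping discussed above. The network shape, the antenna/user/RF-chain counts, and the random surrogate labels (standing in for precoders computed offline by a near-optimal algorithm) are illustrative assumptions, not the CNN-MIMO design of [139].

```python
# Hypothetical CNN sketch mapping a user channel matrix to hybrid precoder
# weights; the architecture, dimensions, and synthetic training targets
# are all illustrative.
import torch
import torch.nn as nn

n_tx, n_users, n_rf = 16, 4, 4            # antennas, users, RF chains

class PrecoderCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(     # input: (batch, 2, n_users, n_tx)
            nn.Conv2d(2, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Flatten())
        # output: real+imag entries of the digital precoder (n_rf x n_users)
        self.head = nn.Linear(16 * n_users * n_tx, 2 * n_rf * n_users)

    def forward(self, h):
        return self.head(self.features(h))

model = PrecoderCNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
h = torch.randn(256, 2, n_users, n_tx)     # surrogate channel realizations
# Surrogate labels: in practice these would come from a (near-)optimal
# precoding algorithm run offline; here random targets stand in for them.
w_opt = torch.randn(256, 2 * n_rf * n_users)
for epoch in range(100):
    loss = nn.functional.mse_loss(model(h), w_opt)
    opt.zero_grad(); loss.backward(); opt.step()
print(f"training MSE against offline precoder labels: {loss.item():.4f}")
```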
#### V-A3 Coding/decoding
In this sub-section, we present a discussion on the use of AI in supporting
coding/decoding in wireless networks.
Problem formulation: Channel coding and decoding are significant components in
modern wireless communications. Error correcting codes for channel coding are
utilized to achieve reliable communications at rates near the Shannon channel
capacity. For instance, low-density parity-check (LDPC) codes are able to
exhibit near-Shannon-capacity performance with decoding algorithms such as
belief propagation (BP) [143].
Drawbacks of conventional methods: Many coding and decoding solutions have
been proposed in the literature, but they still have some limitations. For
example, LDPC codes may not achieve the desired performance under channel
fading and coloured noise, which also introduce high decoding complexity
[143]. Another
approach is whitening, which aims to transform coloured noise into white
noise, but it requires matrix multiplication, which is known to be highly
complex for long codes.
Unique advantages of using wireless AI techniques: The advances of AI open up
new opportunities to address coding/decoding issues. Instead of relying on a
pre-defined channel model, AI is able to learn the channel through training,
with low computation complexity, better coding accuracy and improved decoding
performance.
Recent works: The work in [144] uses a CNN to develop a smart receiver
architecture for channel decoding. Stemming from the difficulty of accurately
estimating channel noise in the presence of decoding errors, neural layers
are exploited to estimate the channel noise. In this process, training is
particularly important to eliminate estimation errors and capture useful
features for the BP decoder. The continuous interaction between the BP decoder
and CNN would improve decoding SNR with much lower complexity and better
decoding accuracy, as confirmed by simulation results. The authors in [145]
are concerned with the stability of channel coding in terms of block lengths
and the number of information bits. This is achieved by using neural networks
as a replacement for an existing polar decoder. In this case, the large polar
encoding graph is separated into sub-graphs so that the training over codeword
lengths is improved for better decoding performance with low computation
latency. An improvement of BP algorithms for decoding block codes using AI is
shown in [146]. An RNN decoder is designed to replace the conventional BP
decoder, aiming to enhance the decoding efficiency on parity check matrices.
The ability of AI to assist coding and decoding is also demonstrated in
[147], which employs a parameterized RNN decoder for decoding. The main
objective is to train the weights of the decoder using a dataset constructed
from noisy collections of a codeword. This training process is performed
using stochastic gradient descent, showing that the bit error rate is reduced
significantly without adding computational complexity (a minimal sketch of a
learned decoder for a short block code is given below).
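As an illustration of the learned-decoder idea, the following sketch trains a small MLP with stochastic gradient descent to decode a (7,4) Hamming code from noisy BPSK symbols. The code choice, network shape, noise level and training setup are illustrative assumptions, not the decoders of [146] or [147].

```python
# Hypothetical sketch of a learned channel decoder: an MLP is trained with
# SGD to map noisy (7,4) Hamming codewords back to their 4 message bits.
import numpy as np
import torch
import torch.nn as nn

G = np.array([[1,0,0,0,1,1,0],            # generator matrix of (7,4) code
              [0,1,0,0,1,0,1],
              [0,0,1,0,0,1,1],
              [0,0,0,1,1,1,1]])

def batch(n, sigma=0.5):
    msgs = np.random.randint(0, 2, (n, 4))
    codewords = msgs @ G % 2
    bpsk = 1.0 - 2.0 * codewords           # BPSK modulation
    noisy = bpsk + sigma * np.random.randn(n, 7)   # AWGN channel
    return (torch.tensor(noisy, dtype=torch.float32),
            torch.tensor(msgs, dtype=torch.float32))

decoder = nn.Sequential(nn.Linear(7, 64), nn.ReLU(),
                        nn.Linear(64, 64), nn.ReLU(),
                        nn.Linear(64, 4))
opt = torch.optim.SGD(decoder.parameters(), lr=0.1)
for step in range(3000):
    x, y = batch(256)
    loss = nn.functional.binary_cross_entropy_with_logits(decoder(x), y)
    opt.zero_grad(); loss.backward(); opt.step()

with torch.no_grad():                      # evaluate the trained decoder
    x, y = batch(10000)
    ber = ((decoder(x) > 0).float() != y).float().mean()
print(f"bit error rate: {ber.item():.4f}")
```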
### V-B MAC layer AI
In this sub-section, we survey recent AI-based solutions for the MAC layer in
three main domains: channel allocation, power control and interference
mitigation.
#### V-B1 Channel allocation
In this sub-section, we present a discussion on the use of AI in channel
allocation.
Problem formulation: Channel allocation is a major concern in densely deployed
wireless networks, wherein there are a large number of base stations/access
points but a limited number of available channels. Proper channel allocation
can lower network congestion, improve network throughput and enhance user
experience.
Drawbacks of conventional methods: In existing studies [148], [149], the
major challenge lies in accurately modelling the channel characteristics to
make effective channel decisions for long-term system performance
maximization under complex network settings.
Unique advantages of using wireless AI techniques: AI, with its learning and
prediction capabilities, shows great promise in solving wireless channel
allocation, i.e. better handling high-dimensional state spaces and improving
channel learning for optimal channel allocation.
Recent works: Some ML and DL approaches are applied to empower channel
allocation applications.
* •
ML approaches: An AI-based solution is proposed in [148] for channel
allocation in Wireless Networked Control Systems (WNCSs) where each subsystem
uses a timer to access the channel. To formulate a channel allocation for
subsystems, an online learning is considered so that the subsystems can learn
the channel parameters (i.e. Gilbert-Elliott (GE) channel) to adjust their
timers according to channel variation for gaining optimal channel resource. To
support channel assignment for multi-channel communication in IoT networks,
the work in [149] introduces a ML-based algorithm running on IoT devices.
Through the learning process, each IoT device estimates the channel with
higher resources (i.e. bandwidth) to make decisions for selecting the
appropriate channel in the dynamic environment. The authors in [150] also
employ ML to implement a channel allocation strategy for Wi-Fi devices in the
context of channel interference. The learning allows each device to estimate
the channel states and detect the behaviour of neighbour devices for
minimizing channel usage overlapping. Another solution for channel allocation
is [151], where an RL-based approach is designed for optimizing channel
allocation in wireless sensor networks. The RL agent interacts with the
environment (i.e. the sensor network) and selects suitable channels for the
sensor nodes, aiming to reduce the error rate during transmission.
* •
DL approaches: Another approach in [152] solves the channel allocation issue
by taking advantage of pilot symbols prior known by both transmitters and
receivers in OFDM systems. In fact, the estimation of sub-channels relies on
the number of pilots and pilot positions. Motivated by this, the authors
propose an autoencoder-based scheme with a DNN to find the pilot positions.
The autoencoder is first trained to encode the input (i.e. the true channel)
into a representation code and reconstruct it. Then, the pilots are allocated
at the nonzero indices of the sparse vector so that channel allocation can be
done only at the indices corresponding to the pilots. Besides, a conjunction
of DRL and Graph Convolutional Networks is considered in [153] for a channel
allocation scheme in wireless local area networks (WLANs). The high density
makes DRL suitable for handling the high-dimensional state space arising from
the large number of mobile users and coordinators (i.e. access points).
For channel allocation in multi-beam satellite (MBS) systems, the study in
[154] suggests a DRL-based solution to achieve long-term optimal channel
resource allocation under the complex interactions between satellites and
user terminals (a minimal channel selection sketch is given after this list).
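To illustrate the online channel learning discussed above, the following is a minimal epsilon-greedy sketch in which an agent learns, purely from transmission outcomes, which channel is most reliable. The hidden channel success probabilities are toy assumptions.

```python
# Hypothetical epsilon-greedy sketch of learned channel selection: an agent
# repeatedly picks one of several channels whose (unknown) success
# probabilities it must estimate online.
import numpy as np

rng = np.random.default_rng(1)
p_success = np.array([0.2, 0.5, 0.9, 0.4])   # hidden channel qualities
n_channels = len(p_success)
value = np.zeros(n_channels)                 # estimated success rates
count = np.zeros(n_channels)
eps, successes = 0.1, 0

for t in range(1, 5001):
    if rng.random() < eps:
        ch = rng.integers(n_channels)        # explore a random channel
    else:
        ch = int(value.argmax())             # exploit the best estimate
    reward = float(rng.random() < p_success[ch])   # transmission outcome
    count[ch] += 1
    value[ch] += (reward - value[ch]) / count[ch]  # incremental mean
    successes += reward

print(f"success rate: {successes / 5000:.2f}, "
      f"estimated qualities: {np.round(value, 2)}")
```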
#### V-B2 Power control
In this sub-section, we present a discussion on the use of AI in power
control.
Problem formulation: Energy-efficient power control is one of the most
important issues for the sustainable growth of future wireless communication
networks. How to ensure smooth communications among devices while efficiently
controlling power on the channels is a critical question. This topic is of
paramount importance for practical communication scenarios such as MIMO
systems, OFDM networks and D2D communications [155].
Drawbacks of conventional methods: Most traditional approaches follow a
predefined channel of wireless communication to formulate the power
control/allocation problem [156], [157]. However, these schemes only work
well in static wireless networks with predictable channel conditions (i.e.
user demands, transmit data sizes), and are hard to apply in networks without
a specific channel model.
Unique advantages of using wireless AI techniques: Recently, AI has been
investigated to solve power control issues in complex and dynamic wireless
communications, offering low computation complexity and better throughput for
power transmission and power allocation.
Recent works:
* •
ML approaches: The work in [156] presents a power control scheme using ML for
D2D networks. The major objective is to dynamically allocate power to the
network of D2D users and cellular users for avoiding channel interference.
This can be done by using a Q-learning algorithm which learns a power
allocation policy such that the successful data transmission of each user is
ensured with a certain level of power. In the complex wireless communication
networks with nonstationary wireless channels, the control of transmit power
for transmitters is a challenging task. The authors in [157] propose a
learning-based scheme with the objective of optimizing transmit power over the
wireless channel under a certain power budget. The distribution of power
allocation to channels is estimated through a learning process so that the
use of power for serving data transmissions is efficiently predicted and
optimized in the long run.
* •
DL approaches: Meanwhile, the study in [158] investigates the potential of DL
in addressing power control issues in wireless networks. As an experimental
implementation, the authors employ a DNN architecture to perform a power
control in large wireless systems without the need of a complex channel
estimation procedure. The DNN network is trained using CSI values on all
network links as inputs to capture the interference pattern over the different
links. With the aid of learning, the proposed scheme can adapt well when the
size of the network increases. Similar to this work, a solution for power
allocation is proposed in [159] that uses ANNs to estimate power allocation in
wireless interference networks. Here, the distribution of wireless channels is
used as the input for the deep ANNs architecture to perform training, which
then generates an optimization model to specify how much power should be
allocated to each channel.
In [160], an LSTM, which is a DL-based RNN architecture, is recommended for
building a channel prediction mechanism in wireless body area networks. Based
on the constructed model, a predictor is derived that estimates the power
consumption on each channel in order to adjust the power level over the
channels. [161] also leverages DL to propose a dynamic power control
scheme for NOMA in wireless caching systems. They concentrate on minimizing
the transmission delay by formulating it as an optimization problem with
respect to transmit deadline between base station and users as well as total
power constraint. Then, a DNN architecture is built to learn in an iterative
manner, evaluating and finding an optimal solution such that the transmission
time is minimized and optimal power allocation is achieved under a power
budget (a minimal power allocation sketch is given after this list).
* •
DRL approaches: DRL also shows the high potential in solving power control
problems. A multi-agent DRL-based transmit power control scheme is introduced
in [162] for wireless communications. Different from existing power control
frameworks, which require accurate channel gain information of all users, the
proposed model only needs information on certain channels with high power
values. As a result, the complexity of the learning algorithm is low
regardless of the size of the network, which shows the good scalability of the
proposed scheme. DRL is also employed in [163] to develop a joint scheme of
resource allocation and power control for mobile edge computing (MEC)-based
D2D networks. A model-free reinforcement learning algorithm is designed to
update the parameters and rules of resource allocation and power control via a
policy gradient method. In the training, each D2D transmitter acts as an agent
to interact with the environment and take two appropriate actions: channel
selection and power selection, aiming to find a near-optimal solution to
maximize these two objective values. Another possible solution for the power
control problem is [164] that leverages DRL to estimate optimal power
allocation policies in wireless interference channels. The state space
includes the number of channels and the levels of power over the channels,
while the action space is defined as a conjunction of channel selection and
power allocation policies. The learning process is conducted in an iterative
manner to optimize rewards based on the transmission rate, the sum-rate of
all users, and fairness among users.
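The following is a minimal "learning to optimize" sketch in the spirit of the DNN-based power control above: a network maps channel gains to transmit powers and is trained, without labels, to maximize the sum-rate of interfering links. The channel model, network shape and power budget are toy assumptions, not the setups of [158] or [159].

```python
# Hypothetical "learning to optimize" sketch for power control: a DNN maps
# channel gains to transmit powers and is trained, without labels, to
# maximize the sum-rate of K interfering links. All quantities are toy.
import torch
import torch.nn as nn

K, p_max, noise = 4, 1.0, 0.1              # links, power budget, noise power

net = nn.Sequential(nn.Linear(K * K, 64), nn.ReLU(),
                    nn.Linear(64, K), nn.Sigmoid())   # powers in [0, p_max]
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def sum_rate(g, p):
    """g: (batch, K, K) channel gains, p: (batch, K) transmit powers."""
    signal = torch.diagonal(g, dim1=1, dim2=2) * p
    interference = (g * p.unsqueeze(1)).sum(dim=2) - signal
    sinr = signal / (interference + noise)
    return torch.log2(1.0 + sinr).sum(dim=1).mean()

for step in range(2000):
    g = torch.rand(128, K, K)               # surrogate channel realizations
    p = p_max * net(g.reshape(128, -1))
    loss = -sum_rate(g, p)                  # maximize rate = minimize -rate
    opt.zero_grad(); loss.backward(); opt.step()

g = torch.rand(1, K, K)
print("allocated powers:", net(g.reshape(1, -1)).detach().numpy().round(2))
```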
#### V-B3 Interference management
In this sub-section, we present a discussion on the use of AI in interference
management.
Problem formulation: In ultra-dense networks, interference stems from the
usage of the same frequency resource among neighbouring users or network
cells. In OFDMA networks, for example, inter-cell interference can be caused
by two or more neighbour cells using the same frequency band, which can
degrade the overall performance of wireless networks.
Drawbacks of conventional methods: Usually, different frequency reuse factors
(FRF) or partial-frequency reuse approaches are employed to coordinate and
manage interference among users. However, these strategies may be
inefficient in future wireless networks with variable network traffic and
dynamic user patterns, leading to under-usage of, or a lack of, spectrum in
user/cell networks.
Unique advantages of using wireless AI techniques: By using AI, it is
possible to devise an adaptive and online approach that minimizes network
interference for efficient cellular deployments in terms of spectral
allocation efficiency as well as QoS fulfilment.
Recent works: Many AI-based studies have been done to solve interference
management issues.
* •
ML approaches: In [165], the authors consider uplink interference management
in multi-user LTE cellular networks with the aid of data-driven ML. A strategy
for power control is necessary to adjust the transmit power on the
transmission channels for interference balance among users under the settings
of cell-specific power control parameters in LTE networks. Then, a stochastic
learning-enabled gradient algorithm is developed to determine optimal power
control parameters such that interference among user channels is kept at an
acceptable level to improve the data rate in LTE networks. The work
in [166] solves the issue of inter-cell interference which is caused by two or
more neighbour cells using the same frequency band in OFDMA networks through a
decentralized RL approach. Different from the previous study, the authors
focus on the spectrum assignment for interference control among cells. To
formulate a learning problem, each cell is regarded as an agent which
determines optimally an amount of spectrum resource based on obtained context
information (i.e. average signal to interference ratio (SIR)/chunk in the cell
and user throughput). Concretely, the proposed scheme demonstrates its high
performance in cellular deployments in terms of spectral allocation efficiency
and SINR enhancement as well as QoS fulfilment.
* •
DL approaches: To investigate the potential of DL in interference management
for wireless communications, the authors in [167] suggest a DNN architecture
to develop a real-time optimization scheme for time sensitive signal
processing over interference-limited channels in multi-user networks. By
introducing a learning process, the relationship between the input and the
output of conventional signal processing algorithms can be approximated and
then the DNN-based signal processing algorithm can be implemented effectively
in terms of low computation complexity and better system throughput (high
spectrum efficiency and low user interference levels).
Moreover, a joint DL-enabled scheme of interference coordination and beam
management (BM-IC) is presented in [168] for dense mmWave networks. The
ultimate objective is to minimize the interference and enhance the network
sum-rate with respect to beam directions, beamwidths, and transmit power
resource allocations. To do so, a beamforming training is required to
establish directional links in IEEE 802.11ay. Then, the BM-IC is formulated as
a joint optimization problem that is approximated by a DNN network through an
iterative learning procedure. The authors in [169] also consider an
interference classification approach in satellite systems, where DL is
employed to provide an interference classifier for several cellular
standards, including LTE, UMTS and GSM. DRL is also investigated for
interference management [170], aiming to minimize a joint metric of wireless
transmit latency and interference levels in UAV-enabled cellular networks. To
realize this, a cooperative game is derived in which UAVs are players that
learn their trajectories and power allocation in the context of insufficient
information about network settings, such as user service demands and UAV
locations. To deal with this dynamic game, DRL comes as an ideal candidate to
allow each UAV to estimate its path and adjust its power allocation so that
the interference among UAVs is minimized.
* •
DRL approaches: The interference management problem is also considered in
[171], in the context of vehicle-to-vehicle (V2V) communications. In fact, in
complex vehicular networks, a large number of
V2V links results in a high interference level, which degrades the overall
performance of the involved network, such as high latency and less
transmission reliability. By introducing an RL concept where each V2V link
acts as an agent and spectrum usage and power allocation are determined using
CSI and mutual information among channels, a DRL architecture is built to
provide a unified network observation model for adjusting resource allocation
among vehicles to avoid high interference on V2V links. Lastly, another
solution for interference management is presented in [172] from the QoS
perspective. To evaluate the QoS of massive MIMO cognitive radio networks in
terms of transmission rate and interference, a user selection approach using
deep Q-learning is proposed. Two key cognitive radio scenarios are taken into
consideration, including radio networks with available and unavailable CSI
knowledge at the secondary base station. While the former case can be
considered with a conventional programming tool to formulate and solve the
optimization problem of power allocation, the latter case is handled by a
deep Q-learning scheme. The base station is regarded as an agent which
senses the cognitive radio environment and makes decisions on how much power
resource should be allocated to each secondary user so that interference in
the network is minimal [173].
### V-C Network layer AI
In this section, we discuss the roles of AI in supporting spectrum sharing,
data offloading, multi-connectivity, and network resource management.
#### V-C1 Spectrum sharing
In this sub-section, we discuss the roles of AI in supporting spectrum
sharing.
Problem formulation: Ultra-dense networks in the emerging 5G era promise to
support one million devices per $km^{2}$. However, with IoT in full swing and
reaching every corner, the anticipated device density could even surpass this
target, particularly in large-scale cellular networks. To
address this problem, consensus has been reached that spectrum sharing by
various users and devices will play a key role, monitoring frequency-band use
and dynamically determining spectrum sharing. With the available spectrum
identified, secondary users (SUs) will share the opportunities efficiently and
fairly, through an appropriate spectrum sharing strategy. Essentially, a
distributed self-organized multi-agent system is highly desirable, where each
SU acts in a self-organised manner without any information exchange with its
peers. Since the complete state space of the network is typically very large,
and not available online to SUs, the direct computation of optimal policies
becomes intractable.
Drawbacks of conventional methods: Most existing studies on the multi-agent
case use game theory [174], matching theory [175], and graph colouring [176]
to obtain structured solutions. But these model-dependent solutions make many
impractical assumptions, such as the knowledge of signal-to-interference-plus-
noise ratio (SINR), transmission power, and price from base stations.
Moreover, when the number of users is larger than that of channels, model-
dependent methods (graph colouring in particular) will become infeasible.
Therefore, a fundamentally new approach to spectrum sharing needs to be
developed, handling the large state space and enabling autonomous operation
among a large number of SUs.
Unique advantages of using wireless AI techniques: AI has emerged as a highly
efficient solution to overcome these challenges in spectrum sharing. Here, we
focus on analyzing the applications of AI in spectrum sharing with recent
successful works. Many AI-empowered solutions with ML, DL and DRL techniques
have been proposed to spectrum sharing issues in wireless communications.
Recent works: The work in [177] presents an inter-operator spectrum sharing
(IOSS) model that takes advantage of the benefits of spectral proximity
between users and BSs. By executing a Q-learning scheme at each BS, the
network operators can achieve efficient spectrum sharing among users with
high quality of experience and spectral resource utilization. To be clear,
the network operators can share their licensed spectrum with others via an
orthogonal spectrum sharing based business model. The BSs can make decisions
to utilize the IOSS mode based on the amount of spectral resources requested
from the neighbours, while interference among users is taken into account to
formulate the
action (i.e. spectrum sharing parameters) to satisfy user demands in different
load scenarios. To this end, a Q-learning algorithm is derived so that the BS
can adjust dynamically the spectrum amount based on the states and actions for
sharing among different types of users, including users with high service
demands and users with less resource requirements.
Database-based spectrum sharing is one of the important solutions for
wireless IoT communications [178]. In fact, the spectrum sharing policies can
be aggregated in a database that the SUs can consult to avoid spectrum
interference with primary users, especially in the primary exclusive region
(PER). Therefore, updating the PER once spectrum interference occurs is
necessary to ensure smooth spectrum exchange. To do that, a supervised
learning algorithm is derived to update the PER and learn a sharing policy
over time for allocating spectrum optimally over the network, aiming to
increase the efficiency of spectrum usage among users.
To support spectrum sharing in vehicular networks, the work in [179] considers
a multi-agent RL scheme where multiple vehicle-to-vehicle (V2V) links reuse
the frequency spectrum occupied by vehicle-to-infrastructure (V2I) links.
Here, the V2V links interconnect neighbouring vehicles, while the V2I network
links each vehicle to the nearby BS. The process of spectrum sharing among V2V
links and V2I connections is formulated as a multi-agent problem with respect
to spectrum sub-band selection. The key objective is to optimize the
capability of V2I links for high bandwidth content delivery, which is enabled
by a reward design and centralized learning procedure. Then, a multi-agent
deep RL is proposed where each V2V link acts as an agent learning from the
vehicular environment to gain high spectrum usage. The implementation
results indicate the advantage of the learning approach in terms of better
system capacity of V2I links and enhanced load delivery rate of V2V links
with spectrum savings (a minimal multi-agent sketch of this idea is given
below).
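To give a flavour of such multi-agent spectrum learning, the following sketch has several agents independently learn, through trial and reward, to pick non-colliding sub-bands. The agent count, collision penalty and exploration schedule are toy assumptions, not the design of [179].

```python
# Hypothetical multi-agent sketch of spectrum sub-band selection: several
# V2V agents independently learn, via Q-values, to pick sub-bands that
# avoid collisions with each other.
import numpy as np

rng = np.random.default_rng(2)
n_agents, n_bands = 3, 4
Q = np.zeros((n_agents, n_bands))           # one Q-row per agent
alpha, eps = 0.1, 0.2

for t in range(5000):
    # each agent picks a sub-band epsilon-greedily
    choices = [rng.integers(n_bands) if rng.random() < eps
               else int(Q[a].argmax()) for a in range(n_agents)]
    counts = np.bincount(choices, minlength=n_bands)
    for a, b in enumerate(choices):
        reward = 1.0 if counts[b] == 1 else -1.0   # collision penalty
        Q[a, b] += alpha * (reward - Q[a, b])      # stateless update
    eps = max(0.01, eps * 0.999)                   # decay exploration

print("final sub-band choices:", [int(Q[a].argmax()) for a in range(n_agents)])
```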
#### V-C2 Data offloading
In this sub-section, we discuss the roles of AI in supporting data offloading.
Problem formulation: Data offloading refers to the mechanism whereby
resource-limited nodes offload part or all of their data to resourceful
nodes (i.e. edge/cloud servers). Data offloading potentially mitigates the
burden on resource-constrained nodes in terms of computation and storage
savings. Worthy of mention, this solution would benefit both end users for
better service processing and service providers (i.e. edge/cloud services) for
network congestion control and better user service delivery. For example, the
advances of mobile cloud/edge technologies open up opportunities to solve the
computation challenges on mobile devices by providing offloading services. In
this way, mobile users can offload their computation-extensive tasks (i.e.
data and programming codes) to resourceful cloud/edge servers for efficient
execution. This solution potentially reduces the computation pressure on
devices and enhances service quality to satisfy the ever-increasing
computation demands of modern mobile devices.
Drawbacks of conventional methods: Some literature works listed in [180],
[181] show that the traditional offloading strategies mainly leverage Lyapunov
or convex optimization methods to deal with the data offloading problem.
However, such traditional optimization algorithms are only suitable for
low-complexity tasks and often need information about system statistics that
may not be available in practice. Besides, how to adapt the offloading model
to varying environments (i.e. varying user demands, channel availability) is
a critical challenge to be solved.
Unique advantages of using wireless AI techniques: In the data offloading
process, AI plays an important role in supporting computation services,
channel estimations and device power control for optimal offloading.
Recent works: Many AI-based studies have been done to solve data offloading
issues.
* •
ML approaches: For example, the authors in [182] apply ML associated with big
data to traffic offloading in heterogeneous ultra-dense networks. They
investigate the potential of ML in overcoming challenges such as latency and
energy consumption in offloading applications such as D2D offloading and V2V
offloading. An AI-enhanced offloading scheme is proposed in [180] for
industrial IoT where mobile devices can offload their data tasks to nearby
edge servers or remote cloud servers for computation. The main performance
indicator considered in offloading is service accuracy (i.e. accuracy of
computation results). A compressed neural network is deployed on edge servers
to learn model parameters from data across edge servers, instead of relying on
a cloud server for model updates. Then, the predictive model is trained based
on a specific requirement of service delay and accuracy of computation tasks,
congestion status of the edge-IoT network, and available computing edge
resource. The learning-based offloading framework is able to estimate how much
data should be offloaded for optimal service accuracy in real-time network
settings. A hierarchical ML approach is also proposed in [181] for an
industrial IoT network with mobile edge computing (MEC), where each user
device makes decisions on offloading to MEC servers with respect to transmission
error rate, QoS level, and communication resource constraints for data
processing delay minimization.
* •
DL approaches: A DL-based solution for supporting offloading in wireless
networks is evaluated in [183]. To be clear, a vehicular traffic offloading
scheme is considered to deal with the issue of computation burden on resource-
limited vehicles by using MEC services. Transmission mode selection and MEC
server selection are taken into consideration for building an optimization
problem, which is then solved effectively by a deep Q-learning algorithm.
Moreover, a cooperative offloading architecture for multi-access MEC is given
in [184]. In the complex environment with multiple MEC servers and varying
user offloading requirements, how to offload tasks to support mobile users
while preserving network resources and improving data caching on MEC servers
is challenging. Inspired by this, a joint offloading scheme of MEC and D2D
offloading is designed, and the offloading decisions are formulated as a
multi-label classification problem. To solve the proposed problem, a deep
supervised learning-based algorithm is then derived, which aims to minimize
the overhead of data offloading. Considering the limitation of battery and computation
capability of IoT devices, the study in [185] suggests an offloading scheme
which enables IoT devices to submit their heavy data tasks to nearby MEC
servers or cloud servers. A challenge here is how to choose which computation
platform (MEC or cloud) to serve IoT users for the best offloading latency
efficiency. To tackle this problem, a DL mechanism is proposed to learn the
offloading behaviours of users and estimate offloading traffic to reduce
computing latency with respect to task size, data volume and MEC capability.
* •
DRL approaches: Moreover, to learn a computation offloading process for
wireless MEC networks, a DRL approach is described in [186] as shown in Fig.
7. The key problem to be solved is how to optimize offloading decisions (i.e.
offloading tasks to MEC servers or locally executing at mobile devices) and
achieve network power resource preservation with respect to network channel
dynamicity. Motivated by this, a joint optimization problem is derived and
then solved effectively by a deep Q-learning algorithm empowered by a DNN,
showing that DRL not only provides flexible offloading decision policies but
also adjusts network resources adaptively to satisfy the computation demands
of all network users. DRL is also useful in vehicular data offloading for
Internet of Vehicles networks [187]. To achieve an optimal offloading
decision for tasks with data dependence, an RL agent is needed to sense the
vehicular network and collect the necessary information, including user
mobility and resource demands, and then train offline on the collected data
at the MEC
nodes. Besides, thanks to the online training process, the vehicular service
transactions can adapt well to the changes of vehicular environments for
efficient offloading. In addition, to assist computation offloading in 5G
vehicular IoT applications, an intelligent offloading scheme using DRL is
proposed in [188]. The main focus is on optimizing offloading costs (i.e.
latency, power usage) by taking spectrum allocation into consideration.
Unlike the traditional DRL approach, this work considers a multi-agent DRL
architecture where each vehicular user acts as an agent that cooperatively
makes decisions based on data task information and CSI feedback (a minimal
offloading decision sketch is given after Fig. 7).
Figure 7: Deep reinforcement learning for data offloading.
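In the spirit of Fig. 7, the following is a minimal tabular Q-learning sketch of the offload-or-not decision: the state is a discretized (task size, channel quality) pair and the reward is the negative completion latency. The latency model and CPU-speed constants are toy assumptions rather than any of the surveyed designs.

```python
# Hypothetical sketch of a learned offloading decision: state = discretized
# (task size, channel quality), action = {0: execute locally, 1: offload to
# MEC}, reward = negative completion latency under a toy latency model.
import numpy as np

rng = np.random.default_rng(3)
n_sizes, n_channels = 5, 5
Q = np.zeros((n_sizes, n_channels, 2))
alpha, eps = 0.1, 0.1
f_local, f_mec = 1.0, 5.0                  # relative CPU speeds

for t in range(20000):
    s, c = rng.integers(n_sizes), rng.integers(n_channels)
    a = rng.integers(2) if rng.random() < eps else int(Q[s, c].argmax())
    size = s + 1.0                          # task size in toy units
    rate = c + 1.0                          # uplink rate in toy units
    if a == 0:
        latency = size / f_local            # local execution only
    else:
        latency = size / rate + size / f_mec   # transmit, then execute
    Q[s, c, a] += alpha * (-latency - Q[s, c, a])

# best action per state: offload (1) pays off when the channel is good,
# local execution (0) otherwise
print(Q.argmax(axis=2))
```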
#### V-C3 Multi-connectivity
Due to the increasing number of base stations and mobile users in 5G
ultra-dense networks, the demand for multi-connectivity with high reliability
and spectrum efficiency is growing. In such a context, AI has emerged as a
promising solution to cope with the enormous state space caused by high
network complexity and to support interconnection among users [189].
Communication techniques such as dual connectivity can realize the data
transmission between base stations and mobile users such that each station
can select a codeword from a codebook to establish an analog beam for
assisting downlink communication with users. ML approaches such as SVM can
support classification for codeword selection through a learning process
where the inputs include user transmit power and CSI data. By using the
trained model, users can effectively select channels for uplink transmission
and base stations can select codewords for downlink transmission with low
complexity, which achieves
fast and reliable ultra-dense network communications. To solve the mobility
management issue caused by frequent handovers in ultra-dense small cells, an
LSTM-based DL algorithm is proposed in [190]. This aims to learn the
historical mobility patterns of users to estimate future user movement, which
is important for handover estimation to regulate user connectivity with base
stations (a minimal LSTM sketch is given at the end of this sub-section).
Further, to improve the scalability of wireless communications in 5G
vehicular networks, a traffic workload prediction scheme using neural networks
is considered in [191]. The model is able to learn connectivity behaviour
based on data rate, the number of users, and QoS requirements in order to make
decisions for adjusting scalability for efficient vehicular communications in
terms of low latency and high service request rates.
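To illustrate the LSTM-based mobility prediction idea of [190], the sketch below trains a next-cell predictor on synthetic trajectories; the cell-ID encoding, network sizes and toy mobility pattern are assumptions for illustration, not the design of [190].

```python
import torch
import torch.nn as nn

N_CELLS, EMB, HID = 16, 8, 32      # illustrative sizes, not those of [190]

class MobilityLSTM(nn.Module):
    """Predict the next cell a user attaches to from its recent cell sequence."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(N_CELLS, EMB)
        self.lstm = nn.LSTM(EMB, HID, batch_first=True)
        self.head = nn.Linear(HID, N_CELLS)

    def forward(self, seq):                # seq: (batch, seq_len) of cell IDs
        h, _ = self.lstm(self.emb(seq))
        return self.head(h[:, -1])         # logits over the next cell

# Synthetic trajectories with a toy pattern: users drift to the adjacent cell.
seqs = torch.randint(0, N_CELLS, (256, 10))
targets = (seqs[:, -1] + 1) % N_CELLS

model = MobilityLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for epoch in range(50):
    opt.zero_grad()
    loss = nn.CrossEntropyLoss()(model(seqs), targets)
    loss.backward()
    opt.step()

# The predicted next cell can then trigger a proactive handover decision.
print(model(seqs[:1]).argmax(dim=-1).item())
```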
#### V-C4 Network resource management
In this sub-section, we present how AI can enable smart network resource
management.
Problem formulation: Current cellular technology and wireless networks may
not keep pace with rapidly growing wireless service demands. Resource management
becomes more complex to maintain the QoE and achieve the expected system
utility. The development of advanced technologies in 5G such as software
defined networking (SDN) [192] and network slicing makes network resource
management much more challenging, which thus urgently requires new, innovative
and intelligent solutions.
Drawbacks of conventional methods: Many solutions have been proposed to solve
network resource management issues, such as genetic algorithms or dynamic
programming approaches, but they have high complexity in terms of computation
and optimization [25]. Further, these conventional approaches have been
demonstrated to be inefficient in the presence of the dynamicity of network
resources and user demands.
Unique advantages of using wireless AI techniques: AI can achieve smart and
reliable network resource management thanks to its online learning and
optimization capability. An important feature of AI is its smart
decision-making ability (i.e. RL approaches) to adaptively allocate network
resources and intelligently control resources so that network stability is
guaranteed.
Recent works: The work in [193] presents an AI-based network resource
management scheme for SDN-empowered 5G vehicular networks. LSTM and CNN are
used to classify and detect the resource demands at the control plane. To
mimic the resource usage behaviour, learning is necessary to train the
virtualization manager placed on the SDN controller. It is able to adjust the
resource (i.e. bandwidth) based on the user demand and QoS requirements so
that network fairness among users can be achieved. In the same line of discussion,
DL has been used in [194] to manage SDN resources in vehicular networks. To do
that, a flow management scheme is designed and integrated in the SDN
controller so that resource utilization is optimized. Due to the different
priority levels and data requests of different vehicular users, a dynamic
resource allocation algorithm is vitally important to avoid resource over-use
as well as resource scarcity at any user. CNN is deployed on the
network controller to manage the flow control for multiple users, aiming to
optimize resource usage. Based on network information such as bandwidth,
available resources, and user requests, the network controller can learn new
resource usage patterns for better allocation. The authors in [195] focus on
the use of DRL for resource management in network slicing. Here, a deep
Q-network controller seeks a bandwidth sharing solution to achieve fair and
optimal bandwidth allocation with respect to available resources and
dynamic user demands, without knowing the specific network model. Another work
in [196] also considers DRL as a solution for resource management in network
slices. In the presence of uncertain network slice demands, designing an
autonomous controller with the ability to interact and explore the environment
to make decisions for resource allocation is desired. A deep Q-network can
fulfil this requirement by using an agent that estimates the state-action
distribution, where slice demands are regarded as states while allocated
resources are seen as actions. The implementation results show that the
combination of a reward-clipping scheme and a dueling deep network improves
bandwidth allocation under different slicing scenarios.
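The dueling architecture mentioned above can be sketched as follows; the state (per-slice demands) and the discrete action space of bandwidth splits are illustrative assumptions, and only the network head is shown, not the full training loop of [196].

```python
import torch
import torch.nn as nn

class DuelingQNet(nn.Module):
    """Dueling head: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)."""
    def __init__(self, state_dim=4, n_actions=8):  # 4 slice demands, 8 splits (toy sizes)
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU())
        self.value = nn.Linear(64, 1)              # state-value stream V(s)
        self.advantage = nn.Linear(64, n_actions)  # per-action advantage stream A(s, a)

    def forward(self, s):
        h = self.trunk(s)
        v, a = self.value(h), self.advantage(h)
        return v + a - a.mean(dim=-1, keepdim=True)

q_net = DuelingQNet()
state = torch.rand(1, 4)                 # current per-slice demands (illustrative)
action = q_net(state).argmax(dim=-1)     # bandwidth split with the highest Q-value
# Reward clipping, as used in [196], would bound the TD target during training,
# e.g. reward = reward.clamp(-1.0, 1.0).
```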
### V-D Lessons learned
The main lessons acquired from the survey on the Access AI domain are
highlighted as the following.
* •
AI well supports the design of three key network layers: the PHY layer, the
MAC layer, and the network layer. For the PHY layer, AI has been applied to facilitate CSI
acquisition tasks in wireless communications such as CSI estimation [124],
learning for CSI sensing and recovery [125], and CSI feedback compression
[128]. AI provides promising solutions by its learning and prediction
capabilities to simplify the precoding design under uncertain environments and
unknown network traffic patterns [141]. Further, AI also facilitates channel
coding/decoding for wireless systems [146].
* •
Many AI-based solutions have been proposed for the MAC layer, mainly in three
domains, namely channel allocation, power control and interference mitigation.
In the multi-channel communications with less CSI and high channel variation,
AI especially DL is able to estimate dynamically the channel allocation by
establishing the adaptive channel assignment policy with low computational
complexity. AI has also been investigated to solve power control issues in
complex and dynamic wireless communications, with recent successes [156],
[157], [161]. Specifically, some emerging research results demonstrate that AI is
particularly useful for interference management in small cell networks or
mmWave networks [168].
* •
AI has also gained enormous interest in network layer designs in four fundamental
domains: spectrum sharing, data offloading, multi-connectivity, and network
resource management. For instance, many AI-based approaches have been
introduced to solve the critical spectrum sharing issues in wireless
communications, ranging from wireless spectrum sharing [177] to IoT spectrum
exchange [178]. Meanwhile, the learning capability of AI is also helpful to
estimate data traffic and channel distribution to realize flexible data
offloading [180], multi-connectivity [190], and resource management [193] in
wireless networks.
In summary, we list Access AI use cases in the taxonomy TABLE V to summarize
the contributions of each reference work.
TABLE V: Taxonomy of Access AI use cases. Category | Ref. | Use case | AI techniques applied | Main contributions
---|---|---|---|---
PHY layer AI | [123] | CSI acquisition | Support vector regression-SVR | A ML-based CSI acquisition architecture for massive multiple input multiple-output (MIMO) with frequency division duplex (FDD).
[124] | CSI estimation | CNN | A ML-based time division duplex (TDD) scheme in which CSI can be estimated via a ML-based predictor.
[126] | CSI compression | DL (CsiNet+) | A multiple-rate compressive sensing neural network framework for CSI compression in MIMO.
[127] | CSI compression | Convolutional DL | A convolutional DL architecture, called DeepCMC, for efficient compression of the channel gain matrix.
[137] | Hybrid channel precoding | Neural networks | Designing the hybrid precoding metrics by using the prior observations of the channel with neural networks.
[138] | Channel precoding | DNNs | A decentralized robust precoding framework in multi-user MIMO settings.
[139] | mmWaves hybrid precoding | CNN | A DL scheme using a convolutional neural network (CNN) for a new mmWaves hybrid precoding architecture.
MAC layer AI | [148] | Distributed channel access | Online ML | A distributed channel access scheme for Wireless Networked Control Systems.
[149] | Channel assignment | Online ML | A DL-based scheme for channel assignment for multi-channel communication in IoT networks.
[151] | Channel allocation | RL | A RL based ML approach is designed for optimizing channel allocation in wireless sensor networks.
[156] | Power control | Q-learning | A power control scheme using ML for D2D networks.
[157] | Transmit power optimization | ML | A learning-based scheme for optimizing transmit power over the wireless channel under a certain power budget.
[165] | Interference management | Stochastic gradient leaning | A scheme for interference management in multi-user LTE cellular networks with the aid of data-driven ML.
[166] | Intercell interference control | Distributed RL | A scheme for intercell interference in OFDMA networks.
Network layer AI | [177] | Spectrum sharing | Q-learning | An inter-operator spectrum sharing (IOSS) model that takes advantage of the benefits of spectral proximity between users and BSs.
[178] | Spectrum sharing | Supervised learning | A Database-based spectrum sharing model for PU-SU networks.
[179] | Spectrum sharing | RL | A scheme for spectrum sharing in vehicular networks with V2V and V2I links.
[180] | Data offloading | Compressed neural networks | An AI-enhanced offloading scheme for industrial IoT.
[183] | Vehicular traffic offloading | Deep Q-learning | A vehicular traffic offloading scheme for vehicles.
[186] | Computation offloading | DRL | A computation offloading scheme for wireless MEC networks.
[190] | Multi-connectivity | LSTM-based DL | A DL scheme for mobility management caused by frequent handovers in ultra-dense small cells.
[191] | Multi-connectivity | Neural networks | A model for supporting connectivity in 5G vehicular networks.
[193] | Network resource management | LSTM and CNN | A DL-based resource management scheme for SDN-empowered 5G vehicular networks.
## VI User Device AI
The next Wireless AI function of the data life cycle in Fig. 2 is User Device
AI, where AI has been embedded into end devices to create a new on-device AI
paradigm. In this section, we study the User Device AI function for wireless
networks by analyzing the integration of AI in mobile user devices as shown in
Fig. 8.
### VI-A Embedded AI functions
Figure 8: User Device AI function.
In this sub-section, we present how AI is integrated into mobile devices to
develop on-device AI functions.
Problem formulation: With the rapid growth of smart mobile devices and
improved embedded hardware, there is interest in implementing AI on the device
[197], for both ML and DL functions. In future networks with higher
requirements in terms of low-latency data processing and reliable
intelligence, pushing AI functions to mobile devices with privacy enhancement
would be a notable choice for wireless network design.
Drawbacks of conventional methods: Traditionally, AI functions are placed on
remote cloud servers, which results in long latency for AI processing due to
long-distance transmissions. Further, the reliance on a central server faces a
serious scalability problem when processing data from distributed mobile devices.
The use of external servers to run AI functions is also vulnerable to security
and data privacy issues due to the third parties.
Unique advantages of using wireless AI techniques: AI can be run directly on
mobile devices to provide smart and online learning functions for mobile
networks at the edge. The key benefits of embedded AI functions are
low-latency learning and improved intelligence of mobile devices, which can
open up new interesting on-device applications, such as mobile smart object
detection and mobile human recognition.
Recent works: Some important works are highlighted with the integration of AI
on mobile devices, including embedded ML functions and embedded DL functions.
* •
Embedded ML functions: The work in [198] introduces nnstreamer, an AI software
system running on mobile devices or edge devices. This software is able to
directly execute real-time ML functions, such as neural networks, on complex
mobile data streams, simplifying ML implementation on mobile devices
without relying on cloud servers. The feasibility of nnstreamer has been
investigated via real-world applications, such as event detection through
neural networks with sensor stream samples as the input data for training and
classification. In [199], a GPU-based binary neural network (BNN) engine for
Android devices is designed that optimizes both software and hardware to be
appropriate for running on resource-constrained devices, i.e. Android phones.
More specifically, the authors exploit the computation capability of BNN on mobile
devices, coupled with parallel optimizations using the OpenCL library toolkit,
for enabling real-time and reliable neural network deployments on Android
devices. Another research effort in building AI on devices is in [200], which
applies ML to Android malware detection on Android smart devices. They build
an Application Program Interface (API) call function and integrate a ML
classification algorithm to detect abnormal mobile operations or malicious
attacks on the device. As a trial, some typical ML classifiers are considered,
including SVM, decision tree, and bagging, to characterize the designed API
scheme. The results show that ML can assist in recognizing malware, with the
ML-based bagging classifier achieving the best performance in terms of the
highest malicious application detection rate. Moreover, an ML-based API module is
designed in [201] to recognize malware on Android phones. There are 412
samples, including 205 benign apps and 207 malware apps, used in the test for
classification by integrating SVM and random forest classifiers. The
trial results show that ML classifiers can achieve high malware detection
rates, with over 90% for cross-validation examination. Similarly, an ML-based
malware detection scheme called Talos is also presented in [202] on Android
operating systems. Talos only relies on on-device ML to identify and detect
malicious apps on Android devices without the need of external computing
services for running the ML algorithm.
* •
Embedded DL functions: Software and hardware support for performing DL on
mobile devices is now evolving extremely fast. In [203], the authors run DNNs
on over 10,000 mobile devices and more than 50 different mobile system-on-
chips. The most complex task that can be run with the help of TensorFlow
Mobile or TensorFlow Lite is image classification with MobileNet or Inception
CNNs at the moment. The work in [204] proposes Minerva as a highly automated
co-design approach across the algorithm, architecture, and circuit levels to
optimize DNN hardware accelerators. Minerva enables highly accurate, ultra-low
power DNN accelerators (in the range of tens of milliwatts), making it
feasible to deploy DNNs in power-constrained IoT and mobile devices. Another
example of mobile AI function is in [205] that develops a MobileNet app
running lightweight CNN on mobile devices for mobile localization
applications. Images captured by phone cameras can be recognized by CNN to
specify the centroid of the object (i.e. the human hand), which is useful to
various applications in human motion detection. CNN is also applied to build
classification functions for preserving user privacy on embedded devices [206].
With the popularity of Android apps on the market, malicious apps can attack
mobile operating systems to exploit personal information, which breaches
mobile data over the Internet. By using a CNN-based training model, the mobile
processor can recognize attack activities by integrating the trained model
into the malicious apps. Therefore, all behaviour and data streams on mobile
websites can be identified and detected.
CNN inference is becoming necessary for improving user experience with recent
applications, such as information extraction from mobile data streams, virtual
reality. The authors in [207] present a hardware design that integrates CNN
inference into embedded ARM devices. They implement a kernel-level scheme
which partitions the four-layer CNN across heterogeneous cores.
Different cores would process different layers based on the consecutive images
in a stream. The improvement of overall throughput of the CNN architecture
stems from the computation resources of multi-cores and on-chip data memory.
In addition to CNN, DNN has been explored for mobile learning [208]. In this
work, an on-chip DL model is designed for edge AI devices. Due to the overhead
of computation on mobile CPUs, an improved DNN inference engine is considered
with half precision support and optimized neural network layers to be suitable
for resource-limited devices. This model potentially enhances the GPU
throughput by 20 times and saves 80% memory usage. As a different AI platform,
a collaborative AI scheme is proposed in [209] where cloud servers are used to
support AI training, while AI model is executed on the mobile devices. All
features of an AI model (i.e. DNN) are offloaded to cloud for processing. A
loss function is created that calculates the compressibility of intermediate
features in the model, and it is then integrated with the overall loss
function. This solution would accelerate the learning process for multi-tasks,
and mitigate the workload offloaded to cloud through compression.
In particular, CNN can be integrated in edge devices to create an on-device
CNN architecture [210]. CNN layers can learn the features collected from the
data samples on wearable sensor devices. The edge board such as SparkFun
Apollo3 chip is used to run the DL algorithm. The novelty of this work is to
use batch normalization for recognition tasks with CNN on mobile devices. To
optimize the performance of on-chip DL-based system, a memory footprint
optimization technique is also involved and compared with conventional
statistical ML technique, showing a lower execution delay for running mobile
DL models. Recently, an on-board AI model is designed for mobile devices for
sensing and prediction tasks [211]. To do this, a CNN architecture is
considered, with two convolutional blocks and two linear blocks ending in a
sigmoid block, to perform learning and classification on embedded devices. A
use case using CNN for detecting seed germination dynamics in agriculture is
taken, with seed images as training data inputs, showing high accuracy in seed
recognition with power savings for running the DL algorithm.
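One plausible reading of that description is sketched below; all channel, kernel and input sizes are illustrative assumptions, as the description above does not fix them.

```python
import torch
import torch.nn as nn

class TinySensingCNN(nn.Module):
    """Two conv blocks, two linear blocks and a sigmoid output, one plausible
    reading of the architecture in [211]; all sizes here are assumptions."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # conv block 1
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # conv block 2
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 16 * 16, 32), nn.ReLU(),  # linear block 1 (assumes 64x64 input)
            nn.Linear(32, 1), nn.Sigmoid(),          # linear block 2 + sigmoid score
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = TinySensingCNN()
batch = torch.rand(4, 3, 64, 64)   # e.g. seed images captured on the device
print(model(batch).shape)          # torch.Size([4, 1]) germination scores
```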
Figure 9: Federated learning for mobile wireless networks.
### VI-B Federated learning at user side
In this sub-section, we present the recent development of federated learning
(FL) in wireless networks. Conceptually, FL requires a cross-layer
implementation involving user devices, network devices, and the access
procedure. However, since almost all of the AI-related computations are
carried out by user devices in FL, we put FL under the category of this
section, "User Device AI".
Problem formulation: In large-scale mobile networks, it is inefficient
to send the AI learning model to remote cloud or centralized edge servers for
training due to high communication costs and AI data privacy issues. With
widely distributed and heterogeneous user data located outside the cloud
nowadays, centralized AI is no longer a good option to run AI in
wireless networks. FL can preserve privacy while training an AI model based on
data locally hosted by multiple clients. Rather than sending the raw data to a
centralized data centre for training, FL allows for training a shared model on
the server by aggregating locally computed updates [212].
Drawbacks of conventional methods: Traditional AI models rely on a cloud
or edge server to run AI functions, which puts high pressure on such a
centralized data centre in terms of data storage and processing demands.
Further, offloading to remote servers to run AI functions may result in data
loss due to users' reluctance to provide sensitive data, which in turn
degrades data privacy.
Unique advantages of using wireless AI techniques: FL allows AI models to be
trained in a distributed manner where local data can be trained at mobile
devices for low-latency learning and data privacy protection. It also reduces
the flow of data traffic to avoid network congestion.
Recent works: In [213], the authors integrate DRL techniques and the FL
framework with mobile edge systems for optimizing caching in mobile edge
computing. The work in [214] presents a collaborative FL network for smart IoT
users. The FL includes two steps, including model training on the smartphones
using data collected from IoT devices and model sharing over the blockchain
network [215]. To encourage more users to join the FL, an incentive mechanism
is designed to award coins to users, which in return enhances the robustness
of the mobile FL network. To support low-latency vehicle-to-vehicle (V2V)
communications, the authors in [216] employ FL to interconnect
vehicular users in a distributed manner. In this context, each user transmits
its local model information, such as vehicle queue lengths, to the roadside unit
(RSU), where a centralized learning module is available to perform learning
for maximum likelihood estimation.
FL is also useful to DRL training in the offloading process of the MEC-based
IoT networks [217]. In the offloading scheme, each IoT device takes actions to
make offloading decisions based on system states, i.e. device transmit power,
CSI conditions. In such a context, FL allows each IoT user to use its data to
train the DRL agent locally without exposing data to the edge server, which
preserves the privacy of personal information. Each model trained on an
individual IoT user is then aggregated to build a common DRL model. By
using FL, the IoT data is trained locally, which eliminates the channel
congestion due to a large amount of data transmitted over the network. In
addition to training support, FL is able to preserve the privacy of DL
schemes, i.e. in the neural network training process [218]. Here, the training
model consists of centralized learning on the edge server and FL at the users.
Specifically, in FL, both users and servers collaborate to train a common neural
network. Each user keeps local training parameters and synchronizes with the
cloud server to achieve optimal training performance. By implementing
FL, user privacy can be ensured due to the local training. Moreover, an FL
scheme is introduced in [219] to support training a collaborative model over
a network of agents such as users and service providers. These entities can
cooperate to predict the information sharing using a privacy-preserved
centralized model. Here, the centralized model is stored in the cloud server,
but learning parameter settings or updates are implemented by each local
agent. The cloud server then averages the updates to produce the shared model.
All training data is kept on the local user agent, and no updates are stored
on the cloud server for privacy protection.
Another solution introduced in [220] uses FL for building a self-learning
distributed architecture that aims to monitor security of IoT devices (i.e.
detecting compromised devices). In this case, FL is used to aggregate
detection profiles of the IoT network where the mobile gateways can train data
locally without sharing with remote servers. This would not only mitigate the
latency overhead of the training process, but also ensure IoT data privacy by
using available resource for model training. Besides, to support low-latency
communication in mobile wireless networks, a federated edge computing scheme
is proposed in [221] as depicted in Fig. 9. User data can be trained locally
using local datasets and model updates can be transmitted over the multi-
access channel to the edge server for aggregation of local models. The process
is repeated until the global model converges. Further, an on-device FL
model for wireless networks is suggested in [222]. The main focus is on a
design for communication latency mitigation using a Newton method and a scheme
for reducing error during the FL model update based on difference-of-convex
functions programming. Different from these works, the authors in [223] pay
attention to privacy preservation for mobile FL. The FL model includes key
generation centre, cloud server and participants (users). Each user has
private dataset and performs training locally, then extracts the model updates
(gradient parameters). These updates would be exchanged with the cloud server
to achieve a global training. Here, to preserve privacy for FL, a distributed
Gaussian scheme is integrated that allows to encrypt local data to achieve a
secure aggregation protocol.
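To make the aggregation step of FL concrete, the following is a minimal FedAvg-style sketch in the spirit of [212]; it assumes equally weighted clients and a toy regression model, whereas practical schemes weight updates by local dataset size and secure the exchange.

```python
import copy
import torch
import torch.nn as nn

def local_update(model, data, target, lr=0.1, epochs=1):
    """One client's local training on its private data (kept on-device)."""
    local = copy.deepcopy(model)
    opt = torch.optim.SGD(local.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        nn.functional.mse_loss(local(data), target).backward()
        opt.step()
    return local.state_dict()

def fed_avg(states):
    """Server-side aggregation: average parameters across client updates."""
    avg = copy.deepcopy(states[0])
    for key in avg:
        avg[key] = torch.stack([s[key] for s in states]).mean(dim=0)
    return avg

global_model = nn.Linear(4, 1)             # toy shared model
clients = [(torch.rand(8, 4), torch.rand(8, 1)) for _ in range(3)]  # private datasets

for round_ in range(5):                    # communication rounds
    updates = [local_update(global_model, x, y) for x, y in clients]
    global_model.load_state_dict(fed_avg(updates))
# Only model parameters cross the network; raw client data never leaves a device.
```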
### VI-C Lessons learned
With the rapid growth of smart mobile devices and improved hardware
technology, AI functions have been embedded into mobile devices to create a
new paradigm: on-device AI. This would open up new opportunities in
simplifying AI implementation in wireless networks by eliminating the need of
external servers, i.e. clouds. In fact, the feasibility of on-device AI model
has been demonstrated via recent successes with designs of mobile ML functions
[198] and mobile DL functions [203]. For example, DNN inference now can be run
on mobile devices such as Android phones for local client applications such as
image classification [205], object recognition [210]. Developing deep AI
models on mobile devices is predicted to revolutionize the way future
intelligent networks operate and provide user services, especially in the
booming era of the Internet of Things. Moreover, AI can be implemented by the
cooperation of distributed mobile devices, which gives birth to FL paradigms.
Rather than sending the raw data to a centralized data-center for training, FL
allows for training a shared model on the server by aggregating locally
computed updates [212]. FL can support training a collaborative model over a
network of agents such as users and service providers, with low training
latency, traffic congestion mitigation and privacy preservation [217]. We list
User Device AI use cases in the taxonomy TABLE VI to summarize the
contributions of each reference work.
TABLE VI: Taxonomy of User Device AI use cases. Category | Ref. | Use case | AI techniques applied | Main contributions
---|---|---|---|---
User Device AI | [198] | Embedded ML functions | Neural networks | An AI software system running on mobile devices or edge devices.
[199] | Mobile ML engine | Binary neural networks | A GPU-based binary neural network (BNN) engine for Android devices.
[200] | Malware detection | SVM, decision tree | An Android malware detection on Android smart devices.
[201] | Malware detection | SVM, tree random forest | A ML-based API module to recognize malware on Android phones.
[203] | Image classification | DNNs | A scheme for image classification using deep neural networks (DNNs) on mobile devices.
[205] | Mobile localization | CNN | A MobileNet app running a lightweight CNN on mobile devices for mobile localization applications.
[206] | Mobile privacy protection | CNN | A scheme for building classification functions for preserving user privacy on embedded devices.
[208] | DNN inference engine | DNN | An on-chip DL model for edge AI devices.
[210] | Mobile activity recognition | CNN | An on-device CNN architecture for supporting human activity recognition.
[211] | Mobile sensing and prediction | CNN | An on-board AI model for mobile devices for sensing and prediction tasks.
[214] | Collaborative federated learning | FL | A collaborative federated learning network for smart IoT users.
[216] | Vehicular communications | FL | A FL model for supporting low-latency vehicle-to-vehicle (V2V) communications.
[219] | Collaborative learning | FL | A FL scheme for training a collaborative model over a network of agents such as users and service providers.
[220] | Security monitoring | FL | A self-learning distributed architecture to monitor security of IoT devices.
[221] | Low-latency communication | FL | A federated edge computing for supporting low-latency communication in mobile wireless networks.
[222] | Communication latency mitigation | FL | An on-device FL model for wireless networks.
## VII Data-provenance AI
The final Wireless AI function of the data life cycle in Fig. 2 is Data-
provenance AI, where AI can provide various useful services, such as data
compression, data privacy, and data security. We focus on discussing the
roles of AI for these important services, as shown in Fig. 10.
Figure 10: Data-provenance AI function.
### VII-A AI Data compression
We here analyze how AI can help data compression tasks in wireless networks.
Problem formulation: In the era of big data, i.e. big sensor data, sensor data
needs to be compressed to mitigate network traffic and save resources of
wireless networks. There are two important data compression tasks in wireless
networks, including data aggregation and data fusion. Here, data aggregation
is a process that aims to minimize the data packets from different sources
(i.e. sensor nodes) to alleviate the transmit workload. Due to the resource
constraints of wireless sensor devices, data aggregation is of paramount
importance for achieving efficient data collection while preserving energy
resources on wireless networks. Meanwhile, in the data fusion, data from
ubiquitous sensors can be fused to analyze and create a global view of the
target subject. For example, in a smart home setting, sensors in each
room collect data that represent location information.
Drawbacks of conventional methods: Many data compression algorithms have been
proposed in [23], [25], but some issues remain unsolved. For example, a
wireless sensor network often consists of heterogeneous sensor nodes with
multi-media data associated with different statistical properties, which makes
traditional data compression approaches using mathematical formulations
inefficient. Further, in future networks, the number of sensor nodes will
increase rapidly with higher data complexity and volume; how to
compress data with high scalability and robustness without loss of valuable
data is a challenge.
Unique advantages of using wireless AI techniques: The major benefits of AI
for data compression tasks in wireless networks are intelligence and
scalability. Thanks to the online learning and prediction, AI can extract the
useful information to compress the ubiquitous sensor data and identify the
most important features that are necessary for later data processing.
Moreover, AI can provide scalable data learning, data processing and data
computation for a large dataset. The use of DNN would overcome the limitations
of current ML approaches to map and learn the big data collected from sensors.
Recent works: Some important works are highlighted on the exploitation of AI
for data compression tasks, including sensor data aggregation and sensor data
fusion.
* •
Sensor data aggregation: The work in [224] presents a sensory data aggregation
scheme for wireless sensor clusters. The main focus is on classifying the
collected data using a supervised intrusion detection system. Here a hybrid ML
system including a misuse detection and an anomaly detection system are
designed to monitor the data aggregation process and detect data attacks. To
investigate the feasibility of the proposed approach, a knowledge discovery in
data mining dataset [225] is employed as the dataset for training and testing,
showing that ML can help achieve above 99% detection rate and 99.8% accuracy.
A new tree-based data aggregation framework is introduced in [226] for sensor
networks. Sensory data can be collected by an aggregator and then a part of
data is transmitted to the sink associated with a function that represents the
characteristics of all data, while the remaining data is processed in the sink
for transmit energy savings. The classification of the aggregated data is
implemented by a Bayesian belief network, which can achieve 91% accuracy via
numerical simulations.
The efficiency of data aggregation in wireless networks can be further
improved with DL approaches [227]. In this study, the focus is on privacy
preservation for sensor data on the link between gateways and servers. Here, a
DL autoencoder is used to secure the sensor communication channel by
reconstructing obfuscated data from the original data, which can hide data
from unauthorized parties; a minimal autoencoder sketch is given at the end of
this sub-section. In some practical scenarios, aggregators may
not be willing to collect data due to the lack of motivation (i.e. less
aggregation revenues). To solve this issue, the work in [228] implements a
payment-privacy preservation scheme that enables aggregators to collect
data securely while receiving payments (i.e. rewards). This process can be
formulated as an MDP problem, and since the transition probabilities of
payments are unknown to the aggregators, a model-free DRL approach using deep Q-learning is
proposed to learn the model via trial and error with an agent. The reward here
correlates with the optimal payment so that the agent explores the data
aggregation network to find an optimal policy for maximizing revenues.
* •
Sensor data fusion: The study in [229] presents a multi-sensor data fusion
that serves human movement classification based on ML. A network of
heterogeneous sensors comprising a tri-axial accelerometer, a micro-Doppler radar and
a camera is used to collect data that is then fused and classified by an ML
algorithm with SVM and KNN. In [230], a sensor fusion architecture is proposed
for object recognition. Parameters or information measured from sensors can be
fused with those collected from other sensors to create a global view of the
target subject, which helps evaluate it more comprehensively. To facilitate
the fusion process, a processing model based on unsupervised ML is integrated
that helps classify the features from the fused datasets. Experiments use
camera images as the input data for training and classification, showing that
ML can recognize the object well in different parameter settings. Another
solution for sensor fusion in wireless sensor networks is presented in [231].
In particular, a system setting with various rooms is considered, where
sensors in each room collect data that represents the location information.
Then, data collected from different sensors is fused and a ML-based K-Spectral
Centroid algorithm is adopted to cluster the data from different rooms
together.
In practice, heterogeneous sensory data fusion may be challenging due to
insufficient information about collected data sources and the difficulty of
representing multimodal data, which can make data estimation infeasible. To
solve this issue, a new multimodal data fusion scheme for sensor networks is proposed
in [232]. More specifically, a deep multimodal encoder based on neural networks is
designed to perform learning from data collected from sensors to capture the
correlated features, which addresses the missing data issue. Importantly, an
unsupervised learning algorithm is derived to compress data by learning joint
features among the set of features in the raw dataset. Meanwhile, a Bayesian
network is employed in [233] for sensor data fusion. The complex non-Gaussian
sample dynamics in sensor signal patterns are fused and classified based on an
offline Bayesian non-parametric Dirichlet Process model, which achieves a
higher accuracy performance compared with traditional Bayes network and SVM
classifiers.
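As a minimal sketch of the autoencoder-based compression/obfuscation idea of [227] referenced above, the following trains a small autoencoder so that only a low-dimensional code needs to travel over the gateway-server link; the feature and code dimensions and the synthetic readings are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SensorAutoencoder(nn.Module):
    """Compress (and thereby obfuscate) sensor readings into a low-dimensional
    code, in the spirit of [227]; feature/code sizes are illustrative."""
    def __init__(self, n_features=32, code_dim=4):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 16), nn.ReLU(),
                                     nn.Linear(16, code_dim))
        self.decoder = nn.Sequential(nn.Linear(code_dim, 16), nn.ReLU(),
                                     nn.Linear(16, n_features))

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = SensorAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
readings = torch.rand(256, 32)              # synthetic aggregated sensor samples
for epoch in range(200):                    # gateway-side training
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(readings), readings)
    loss.backward()
    opt.step()

code = model.encoder(readings)              # 8x smaller payload crosses the link
restored = model.decoder(code)              # server-side reconstruction
```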
### VII-B AI Data Privacy
In this sub-section, we present how AI can help improve data privacy in
wireless networks.
Problem formulation: At the network edge, data privacy protection is the most
important service that should be provided. The dynamicity of data
collection, data transmission and data distribution in future networks
puts users' information at risk of leakage. For example, data collection
can expose sensors' information to third parties, which makes privacy (i.e.
privacy of user address, user name) vulnerable. Therefore, protecting data
ensuring data privacy, a wireless network can appeal to a larger number of
users which increases the utility of such a system.
Drawbacks of conventional methods: Many existing approaches have been
proposed to solve data privacy issues, mainly using encryption protocols [23],
[26]. Although such techniques can keep the content of users' information
private from others and third parties, they are hard to apply to scalable
wireless networks with large data. Further, they often apply a specific
encryption protocol to a group of multimedia data collected from sensors and
devices, which is infeasible for future large-scale networks.
Unique advantages of using wireless AI techniques: AI potentially provides
privacy functions for sensed data via classification and detection to
recognize potential data threats. AI is also useful for user location privacy:
it can build decision-making policies for users to adjust their location
information and select privacy location preferences.
Recent works: From the perspective of data generation process, we consider two
AI data privacy domains, including data sensing privacy and user location
privacy.
* •
Data sensing privacy: The work in [234] concerns about privacy preservation
for mobile data sensing. Differential privacy is used to protect the
individual information during the data collection process. To authenticate the
user identity based on their own data, a ML-based classifier approach using
two-layer neural networks is designed. It classifies the privacy sensitivity
levels (0-9) and evaluates the privacy vulnerability with leakage possibility.
To overcome the limitations of existing encryption solutions in terms of
computation complexity and the inability to handle large-scale domains,
a multi-functional data sensing scheme with awareness of differential privacy
is proposed in [235]. The key objective is to provide flexible sensing functions
and aggregation properties to serve different demands from different users.
Besides, it is important to preserve the collected sensor data against
adversarial attacks for better user information protection. In such contexts,
the aggregation queries from sensors are collected as the dataset for training
that is implemented by an ML algorithm, while the multifunctional aggregation
can be performed by a Laplace differential privacy technique; a minimal
Laplace-mechanism sketch is given after Fig. 11. Then, learning
helps to estimate the aggregation results without revealing user privacy. As
an example, a distributed privacy-preserved ML scheme for crowd sensing is
presented in [236] that combines sensing, learning and privacy protection via
a classifier design on mobile devices. The underlying ML algorithm is a stochastic
gradient descent model that is implemented on each local device. This solution
not only distributes learning workload over the network of devices, but also
keeps data private due to no need to transfer data over the network. Meanwhile,
the work in [237] solves the data collection issue with privacy awareness
using K-means clustering. Indeed, to ensure data privacy, a technique of adding
Laplace noise to the data compression process is employed. Also, aggregated data is
distributed over the mobile device network, where each device holds a
differentially-private sketch of the network datasets.
In the multimedia IoT networks, the sensor nodes can sense data from the
environment and transmit collected data to cloud servers for further
processing. A critical issue here is that data sensing and transmission can
reveal private data, i.e. location information. It is thus important to
protect data privacy during the data sensing process. The work in [238]
proposes a cluster approach that divides sensor devices into different groups.
Each cluster controller ensures the privacy of its sensor group, associated
with a neural network that classifies the collected data and extracts useful
features for data learning. Such data processing can be helped by using MEC
services that not only provide low-latency computation but also enhance
privacy for physical sensor devices [16]. A DL scheme for privacy protection
and computation services for physical sensors can be seen in Fig. 11.
* •
User location privacy: An example in [239] introduces a privacy
preservation scheme to help users manage their location information. The
key objective is to enable users to obtain privacy preferences based on
choice factors such as QoS and trust levels. Based on that, a k-learning
model is built that can make decisions for users in preserving location
privacy. The authors in [240] are interested in privacy in mobile social
networks. They first perform a real experiment to collect user location
information in a campus network and investigate the possibility of location
privacy leakage. Then, a privacy management scheme is designed to provide
privacy control for mobile users. This model is empowered by a ML-based
decision tree algorithm that learns the user privacy preference and classifies
location contexts in the social ecosystem.
At present, the location of mobile devices is mostly identified by GPS
technology, but this platform has inherent limitations in terms of energy
overheads and reliance on third parties (i.e. service providers), which
puts user information at privacy risk. A new method presented in [241] can
achieve a user privacy location guarantee with energy savings. By using the
features of the current location that represent the privacy location preference,
an ML approach is applied to learn the privacy model, aiming to classify
privacy preferences depending on whether or not the user wants to share
their location information based on their activities. It is worth noting that
capturing features relies only on sensors available on smartphones without
using GPS, which saves power and eliminates the need for a third party. To
preserve user location privacy in crowd sensing, a learning scheme is
presented in [242] to model the correlations between sensitive and non-
sensitive scenarios according to the relations with private locations. The
main aim is to find a strategy so that the system utility is maximized while
user location is protected against malicious attacks. The work in [243]
jointly takes context factors, i.e. occasion and time, and individual factors
into considerations to establish a privacy location preservation protocol in
social networks. A ML-based regression model is involved to evaluate the
privacy levels based on such features, showing that the occasion factor
dominates in determining the privacy degree during the location sharing.
Figure 11: DL for MEC privacy preservation.
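The Laplace mechanism referenced above can be sketched in a few lines; the bounded readings and the choice of sensitivity are illustrative assumptions, not the exact aggregation functions of [235] or [237].

```python
import numpy as np

def laplace_aggregate(values, epsilon, sensitivity):
    """Differentially private sum: add Laplace(sensitivity / epsilon) noise,
    the standard mechanism behind schemes such as [235] and [237]."""
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return np.sum(values) + noise

# Each reading is bounded in [0, 1], so one user changes the sum by at most 1.
readings = np.random.rand(1000)
for eps in (0.1, 1.0, 10.0):   # smaller epsilon = stronger privacy, noisier answer
    print(f"eps={eps}: private sum = {laplace_aggregate(readings, eps, 1.0):.2f}")
print(f"true sum = {readings.sum():.2f}")
```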
### VII-C AI Data Security
In addition to privacy protection, AI also has the great potential for
enhancing security for wireless networks. We here present how AI can help to
protect data security.
Problem formulation: Data security is the top research concern in wireless
network design. A wireless network system is unable to work well and robustly
in the presence of security threats and attacks. The openness and
dynamicity of wireless networks make them prone to external data threats,
while unauthorized users can access the resources of the network.
Drawbacks of conventional methods: Wireless networks have gained increasing
influence on modern life, which makes cyber security a significant research
area. Despite installed defensive security layers, wireless networks
inherit cyber attacks from IT environments. Many security models have been
proposed, mainly using middleboxes such as firewalls, antivirus software and
intrusion detection systems (IDS) [244]. These solutions aim to control the traffic
flow to a network based on predefined security rules, or scan the system for
detecting malicious activities. However, many traditional security mechanisms
still suffer from a high false alarm rate, ignoring actual serious threats
while generating alerts for non-threatening situations. Another issue of the
existing security systems is the impossibility of detecting unknown attacks.
In the context of increasingly complex mobile networks, the human-device
environments change over time and sophisticated attacks emerge constantly. How to
develop a flexible, smart and reliable security solution to cope with various
attacks and threats for future wireless networks is of paramount importance
for network operators.
Unique advantages of using wireless AI techniques: AI has emerged as a
very promising tool for constructing intelligent security models. Thanks to the
ability of learning from massive datasets, AI can recognize and detect attack
variants and security threats, including unknown ones in an automatic and
dynamic fashion without relying on domain knowledge [245]. Also, by
integrating with on-device learning algorithms, it is possible for AI to
recognize the unique features of a certain user (i.e. through AI classifiers),
aiming for the user classification against threats.
Recent works: We here introduce some important security services that AI can
offer, including user authentication and attack detection.
* •
AI for user authentication: Some recent works are interested in exploiting AI
for user authentication in wireless networks. An example in [246] introduces a
user authentication model using millimetre-wave radar sensors. Each radar sensor is
equipped with four receive antennas and two transmit antennas. Signals that are
reflected from parts of the human body are analyzed and extracted to produce a
feature dataset that is then fed to an ML-based random forest classifier.
Through training and learning, the classifier can recognize the biometric
properties of any individuals. In [247], a user authentication scheme is
proposed based on CSI measurements working for both stationary and mobile
users. For the former case, the authors build a user profile with
considerations of spoofing attacks, while for the later case, the correlations
of CSI measures are taken to implement user authentication checking.
Specifically, an authenticator is created that runs an ML algorithm to learn from
the CSI profiles and determine whether the CSI value measured in the incoming
packet matches the one created in the profile storage. If they match,
authentication succeeds; otherwise, the user is denied further
processing.
Wi-Fi signals can be exploited to build learning datasets for user
authentication. The authors in [248] present, for the first time, a user
authentication system using human gait biometrics from Wi-Fi signals. Here,
CSI value samples are collected from Wi-Fi devices. The core component of the
scheme is a 23-layer DNN that is modelled to extract the important
salient gait features collected from gait biometrics in the CSI measurement
dataset. DNN would act as a classifier to extract unique features of a user
and identify that user among the mobile user set. A user authentication scheme
for Wi-Fi networks is also suggested in [249] which can be installed on mobile
devices to detect malicious access and prevent spoofing attacks. By
integrating with on-device learning algorithms, it is possible to recognize
the unique features of a certain user (i.e. through ML classifiers) where
visible light can be an indicator input for the user classification. As a
different viewpoint of user authentication, the work in [250] analyzes CSI
data for user authentication in wireless networks. First, a CNN classifier is
built to learn the extracted features from CSI dataset such that pooling
layers are utilized to simplify the computation process for faster learning.
To analyze CSI features, a RNN network is then designed, including some
recurrent layers and a fully connected layer where user recognition is
implemented. The combination of CNN and RNN models leads to a new scheme,
called CRNN, aiming to use the capability of information representation of CNN
and information modelling of RNN. Thus, the accuracy for user authentication
is improved that is also confirmed through computer simulations. It is noted
that in wireless networks, the authentication between two users on the
physical network layer based on traditional approaches is challenging due to
the time variance of channels and mobility of users.
* •
AI for attack detection: The work in [251] introduces a scalable intrusion
detection system (IDS) for classifying and recognizing unknown and
unpredictable cyberattacks. The IDS dataset, collected from large-scale
network-level and host-level events, is processed and learned by a DNN
architecture that can classify the network behaviours and intrusions. Compared
to commercial IDSs that mainly rely on physical measurements or defined
thresholds on feature sets to realize the intrusions, learning can assist
flexible IDS without any further system configurations, and work well under
varying network conditions such as network traffic, user behaviour, data
patterns. Another solution in [252] formulates an intrusion detection system
using RNN. The main focus is on evaluating the detection rate and false
positive rate. The intrusion detection can be expressed equivalently to a
multi-class classification problem for recognizing whether the traffic
behaviour in the network is normal or not. The key motivation here is to
enhance the accuracy of the classifiers so that intrusive network behaviours
can be detected and classified efficiently. Motivated by this, a RNN-based
classifier is leveraged to extract abnormal traffic events from the network
features using the well-known NSL-KDD dataset [253]. Specifically, the attacks
can be divided into four main types: DoS (Denial of Service
attacks), R2L (Root to Local attacks), U2R (User to Root attacks), and Probe
(Probing attacks). The results of deep training and learning show a much
better classification accuracy of RNN in comparison with ML techniques.
In addition to that, wireless networks also have to address spoofing attacks,
where an attacker claims to be a legitimate node to gain malicious access or
perform denial of service attacks. Conventional security solutions, such as
digital signatures, cannot fully address the spoofing issues, especially in
the dynamic environment with unknown channel parameters. Recently, AI gains
interest in spoofing detection tasks. As an example, a solution proposed in
[254] uses ML to detect and identify spoofing attacks in massive MIMO 5G
networks. By collecting the virtual path gain of massive MIMO channel, a
virtual channel space is created to detect the anomaly among the network
users. In particular, a learning-based algorithm is built to estimate the
anomalies and recognize spoofing attacks without knowing the knowledge of
channel/signal information. Further, the work in [255] also leverages the
potential of ML to detect spoofing attacks in wireless sensor networks. By
analyzing the correlation of signal strength on the mobile radio channel, the
learning algorithm receives signal strength samples as the input data for
abnormal behaviour analysis. To enhance the detection accuracy, the collected
samples can be categorized into small windows that may include attacking
nodes and legitimate nodes. Then, a collaborative learning scheme with k-NN
and k-means is applied to identify spoofing attacks in dense networks; a
minimal sketch of this two-stage idea is given at the end of this sub-section.
Meanwhile, the authors in [256] use RL to implement a spoofing detection model
from the PHY layer perspective for wireless networks. The key concept is to
exploit the CSI of radio packets to detect spoofing vulnerabilities. The
spoofing authentication process is formulated as a zero-sum game between a
receiver and spoofers. Here, the receiver specifies the test threshold in PHY
spoofing detection, while spoofers select the attack frequency to optimize their
utility with respect to Bayesian risks. Specifically, the threshold selection is
achieved via a trial-and-error process using Q-learning without knowing CSI such
as channel variations.
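The two-stage k-means/k-NN idea of [255] can be sketched as follows; the synthetic RSS distributions and window size are illustrative assumptions, not the measurement setup of [255].

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
# Synthetic received-signal-strength (RSS) windows: a legitimate node and a spoofer
# transmit from different locations (illustrative geometry, not the setup of [255]).
legit = rng.normal(-60, 2, size=(200, 5))   # 5 RSS samples per window
spoof = rng.normal(-72, 2, size=(200, 5))
X = np.vstack([legit, spoof])
y = np.array([0] * 200 + [1] * 200)         # 0 = legitimate, 1 = spoofing

# Unsupervised stage: k-means separates the windows into two position clusters.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("cluster/label agreement:", max((clusters == y).mean(), (clusters != y).mean()))

# Supervised stage: a k-NN classifier trained on labelled windows flags new ones.
knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)
new_window = rng.normal(-71, 2, size=(1, 5))
print("spoofing detected" if knn.predict(new_window)[0] == 1 else "legitimate")
```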
### VII-D Lessons learned
The main lessons acquired from the survey on the Data-provenance AI domain are
highlighted as the following.
* •
Recent studies also raise the discussion on AI adoption for data-driven
applications during the data generation process. They mainly concentrate on
four specific domains, namely AI data compression, data clustering, data
privacy, and data security. As an example, AI has been applied to support
sensor data aggregation and fusion, by classifying useful information from the
collected data samples [224]. Also, the classification of AI helps monitor the
data aggregation process, aiming to analyze the characteristics of all data
and detect data attacks [226].
* •
At the network edge, usage privacy and data security protection are the most
important services that should be provided. From the perspective of data
generation process, the applications of AI in data privacy are reflected in
two key domains, including data sensing privacy and user location privacy.
Currently, AI-based data classification algorithms can be implemented on each
local device. This solution not only distributes learning workload over the
network of devices, but also keeps data private due to no need to transfer
data on the network [236]. Interestingly, recent studies in [239], [240] show
that ML can help learn the user privacy preference and classify location
contexts in the social ecosystem.
* •
In addition to privacy protection, AI also has great potential for
enhancing the security of wireless networks. In the literature, AI mainly provides
some important security services, including user authentication and access
control. In practical wireless networks, authentication between two users
on the physical network layer based on traditional approaches is challenging
due to the time variance of channels and mobility of users. AI may be an ideal
option to recognize legitimate users from attackers by training the channel
data and perform classification. Meanwhile, DRL has proved extremely useful in
building access control mechanisms [257], [258] that can learn from the user
behaviours and estimate the access events for appropriate security actions.
In summary, we list Data-provenance AI use cases in the taxonomy TABLE VII to
summarize the contributions of each reference work.
TABLE VII: Taxonomy of Data-provenance AI use cases. Category | Ref. | Use case | AI techniques applied | Main contributions
---|---|---|---|---
AI Data compression | [224] | Sensory data aggregation | Supervised ML | A sensory data aggregation scheme for wireless sensor clusters.
[226] | Sensory data aggregation | Bayesian belief network | A new tree-based data aggregation framework for sensor networks.
[227] | Data aggregation | DL autoencoder | A scheme for data aggregation tasks with a focus on privacy preservation.
[229] | Multi-sensor data fusion | SVM and KNN | A multi-sensor data fusion that serves human movement classification based on ML.
AI Data Privacy | [234] | Data privacy preservation | Neural networks | A ML scheme for privacy preservation in mobile data sensing.
[235] | Differential privacy | Neural networks | A multi-functional data sensing scheme with awareness of differential privacy.
[237] | Privacy awareness | K-means clustering | A model for the data collection issue with privacy awareness using K-means clustering.
[238] | Data privacy | Neural networks | A scheme for protecting data privacy during the data sensing process.
[239] | User location privacy | k-learning | A privacy preservation scheme that helps users to manage their location information.
[243] | Privacy location preservation | Regression learning | A privacy location preservation protocol in social networks.
AI Data Security | [246] | User authentication | Random forest | A user authentication model using millimetre-wave radar sensors.
[248] | User authentication | Multi-layer DNN | A new user authentication system using human gait biometrics from Wi-Fi signals.
[250] | User authentication | CNN | A scheme for analyzing CSI data for user authentication in wireless networks.
[251] | Intrusion detection | DNN | A scalable intrusion detection system (IDS) for classifying and recognizing unknown and unpredictable cyberattacks.
[252] | Intrusion detection | RNN | An intrusion detection system for evaluating the detection rate and false positive rate.
[254] | Spoofing detection | Supervised learning | A ML approach to detect and identify spoofing attacks in massive MIMO 5G networks.
[255] | Spoofing detection | k-NN and k-means | A learning model for detecting spoofing attacks in wireless sensor networks.
[256] | Spoofing detection | RL | A spoofing detection model from the PHY layer perspective for wireless networks.
## VIII Research Challenges and Future Directions
Different approaches surveyed in this paper clearly show that AI plays a key
role in the evolution of wireless communication networks with various
applications and services. However, the thorough review of the use of AI in
wireless networks also reveals several critical research challenges and open
issues that should be considered carefully during system design. Further,
some future directions are also highlighted.
### VIII-A Research challenges
We analyze some important challenges in wireless AI from three main aspects:
security threats, limitations of AI, and the complexity of AI wireless
communication networks.
#### VIII-A1 Security threats
Security is a major concern in wireless AI systems. Data collected from
sensors and IoT devices are used to train the model that takes actions for
the involved wireless applications, such as decision making for channel
allocation, user selection, or spectrum sharing. Adversaries can inject false
data or adversarial sample inputs that invalidate AI learning. Attackers can
also attempt to manipulate the collected data, tamper with the model, and
modify the outputs [259]. For instance, in reinforcement learning, the
learning environment may be tampered with by an attacker so that the agent
senses incorrect input data. Another concern stems from manipulation of the
hardware implementation by modifying hardware settings or re-configuring
system learning parameters. The work in [260] investigated multi-factor
adversarial attack issues in DNNs, with input perturbation and DNN
model-reshaping risks that lead to incorrect data classification and damage
the reliability of the DNN system. Evaluating security threats on AI models
and providing creative solutions to prevent attack risks and protect AI
operations is vitally important to AI-based wireless applications.
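As a concrete illustration of input perturbation, the sketch below applies a fast-gradient-sign-style attack to a toy neural classifier. The model, feature vector, and attack budget are illustrative assumptions, not the setup studied in [260]; depending on the random initialization, the perturbation may or may not flip the toy model's decision, but the mechanism is the point.

```python
# Minimal sketch: crafting an adversarial input perturbation (FGSM-style)
# against a small neural classifier. All dimensions and values are toy
# placeholders for illustration.
import tensorflow as tf

tf.random.set_seed(0)
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(8,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(2),                  # logits for two classes
])
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)

x = tf.random.normal((1, 8))                   # e.g., a sensed feature vector
y = tf.constant([0])                           # true class label

# The gradient of the loss w.r.t. the *input* gives the attack direction.
with tf.GradientTape() as tape:
    tape.watch(x)
    loss = loss_fn(y, model(x))
grad = tape.gradient(loss, x)

epsilon = 0.5                                  # attack budget (assumed)
x_adv = x + epsilon * tf.sign(grad)            # fast gradient sign perturbation

print("clean prediction:      ", int(tf.argmax(model(x), axis=1)[0]))
print("adversarial prediction:", int(tf.argmax(model(x_adv), axis=1)[0]))
```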
#### VIII-A2 Limitations of AI
While AI shows impressive results in wireless communication applications, its
models still suffer from performance limitations. For example, imperfections
in the DNN training process can be exploited by adversarial attacks that
tamper with input data and lead to misclassification [261]. Further, an
investigation in [262] reveals the false confidence of DNNs in estimating
images: by generating fooling samples (i.e., images) in a way imperceptible
to humans, an attacker can make a DNN mislabel the image and thus recognize
the object wrongly. Another limitation is the high cost of training AI
models. Indeed, it is not uncommon for AI training to take several days on
mobile devices or even edge devices with limited computational resources.
A further challenge is the scarcity of real datasets available for wireless
communications that are necessary for AI training, e.g., in supervised
learning [263]. In many wireless scenarios, only a small dataset is
available, which is insufficient for training and degrades the performance of
the AI model in terms of low classification or prediction accuracy and high
model overfitting; thus, the confidence of the AI model may not be
guaranteed. This drawback is an obstacle to the deployment of AI in
real-world wireless applications, since the feasibility of AI should be
demonstrated in testbeds or experiments in real environments.
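One simple way to cope with such small datasets, sketched below under assumed shapes and noise levels, is noise-based data augmentation, which enlarges the training set without collecting new measurements.

```python
# Minimal sketch: enlarging a small labelled wireless dataset by adding
# Gaussian jitter to each sample, a basic remedy against overfitting.
# Shapes and the noise level are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
X_small = rng.normal(size=(50, 128))           # e.g., 50 labelled feature vectors
y_small = rng.integers(0, 2, size=50)

def augment(X, y, copies=4, sigma=0.05):
    """Replicate each sample with small Gaussian jitter (labels unchanged)."""
    X_aug = [X] + [X + rng.normal(0.0, sigma, X.shape) for _ in range(copies)]
    return np.vstack(X_aug), np.tile(y, copies + 1)

X_train, y_train = augment(X_small, y_small)
print(X_train.shape)                           # (250, 128): five-fold larger set
```

Augmentation only stretches the information already present in the data, so it mitigates but does not replace the need for larger real-world datasets.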
#### VIII-A3 Complexity of AI wireless communication networks
The complexity may stem from wireless data and the network architecture. In
fact, in AI wireless communication networks, data is produced from both
application traffic and information of devices and objects. Considering the
complex wireless network with various data resources, it is challenging to
extract the data needed for AI training of a specific task. In some cognitive
network scenarios, how to label the massive learning data for training AI
models is also highly challenging. From the network architecture perspective,
the traditional protocol of wireless communication is based on a multi-layer
network architecture. However, it is currently difficult to determine which
and how many layers should be configured to run AI functions and deploy AI
technologies in devices. Therefore, it is essential to consider intelligent
protocol architectures with simplified and adaptive capabilities according to
AI functions and requirements [264].
### VIII-B Future directions
We discuss some of the future research directions on wireless AI motivated by
the recent successes in the field.
#### VIII-B1 Specialized AI architecture for wireless networks
The AI architecture has significant impacts on the overall performance of
wireless AI systems, while current AI model design is still simple. Many AI
architectures proposed in [21], [22], [24], [26] mostly rely on conventional
algorithms with learning iterations, data compression, and decoding
techniques. The performance of wireless AI algorithms can be improved thanks
to recent AI advances with sub-block partition solutions for neural networks
[146] or data augmentation for DL [265]. However, these AI approaches suffer
from dimensionality problems when applied to wireless applications due to the
high complexity of wireless systems, and data training becomes inefficient
when the network dimension grows with the increase of mobile devices and
traffic volumes. Therefore, developing adaptive AI architectures specialized
for wireless communication scenarios is the key to empowering future
intelligent applications. As suggested by [266], by adapting to the
dynamicity of wireless communications, AI can effectively model the wireless
systems and discover the knowledge of data generated from network traffic and
distributed devices. Besides, motivated by [24], [26], AI functions should be
implemented and deployed in the wireless network in a distributed manner.
This architecture not only influences the wireless data transmission
infrastructure, but also modernizes the way wireless communications are
organized and managed [267].
Furthermore, with the ambition of deploying AI on mobile/edge devices to
extend network intelligence to local clients, developing specialized
hardware/chipsets for AI is vitally important. Recent advances in hardware
designs for high-bandwidth CPUs and specialized AI accelerators open up new
opportunities for the development of AI devices at the edge of the network
[268], [269]. Motivated by this, Facebook has recently brought ML inference
to mobile devices and edge platforms [270] thanks to specialized AI functions
with high data training speed and low energy costs. The work in [271] also
introduces a DL inference runtime model running on embedded mobile devices.
This scheme is able to support data training and MIMO modelling using a
hardware controller empowered by mobile CPUs, with low latency, low power
consumption, and bandwidth savings.
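As a minimal sketch of such on-device inference, the example below converts a placeholder Keras model to the TensorFlow Lite format commonly used on mobile and edge devices and runs it locally with the TFLite interpreter; the model and input are illustrative stand-ins for a trained network.

```python
# Minimal sketch: on-device DL inference with TensorFlow Lite. The tiny
# Keras model is a placeholder for a trained network; conversion and
# inference happen entirely locally, with no cloud round-trip.
import numpy as np
import tensorflow as tf

# Placeholder model; in practice this would be trained before conversion.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(8,)),
    tf.keras.layers.Dense(4, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),
])

# Convert to the compact TFLite format deployed on mobile/edge hardware.
tflite_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()

interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

x = np.random.rand(1, 8).astype(np.float32)    # one input sample
interpreter.set_tensor(inp["index"], x)
interpreter.invoke()
print(interpreter.get_tensor(out["index"]))    # local prediction
```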
#### VIII-B2 From simulation to implementation
Reviewing the literature, most of the AI algorithms conceived for wireless
communication networks are in their simulation stages. Thus, researchers and
practitioners need to further improve AI functions before they are realized
in real-world wireless networks. We here propose some solutions that can be
considered in practical wireless AI design. First, real-world datasets
collected from physical communication systems must be made available for
ready AI function implementation. The sets of AI data generated from
simulation environments such as the DeepMIMO dataset [272] can contain the
important network features needed to perform AI functions, but they cannot
exactly reflect the characteristics of a real wireless system. Second, as
discussed in the previous section, designing specialized hardware/chipsets
for AI is extremely important to deploy AI applications on edge/mobile
devices. These new hardware architectures should provide functions
specialized for AI to support on-device training, modelling, and
classification at the edge of the network. Third, multidimensional assessment
of wireless AI applications in various scenarios is also vitally necessary to
verify the feasibility of AI adoption in practice. By comparing with
simulation performances, designers can adjust the model parameters and
architecture settings so that AI can well fit the real channel statistical
characteristics.
#### VIII-B3 AI for 5G and beyond
The heterogeneous nature of 5G and beyond wireless networks features
multi-access ultra-dense networks with high data transmission capability and
service provisions. AI has been regarded as the key enabler to drive 5G
networks by providing a wide range of services, such as intelligent resource
allocation, network traffic prediction, and smart system management. For
example, the integration of AI enables network operators to realize
intelligent services, such as autonomous data collection across heterogeneous
access networks, dynamic network slicing to provide ubiquitous local
applications for mobile users, and service provisioning management [272].
Wireless AI also acts as AI-as-a-service that can realize the potential of
massive MIMO, a key 5G technology. For example, AI can be used to predict the
user distribution in MIMO 5G cells, provide MIMO channel estimation, and
optimize massive MIMO beamforming [273]. With the rapid development of edge
5G services, AI would potentially transform the way edge devices perform 5G
functionalities. In fact, the network management and computation optimization
of mobile edge computing (MEC) would be simplified by using intelligent AI
software that learns from historical data. Lightweight AI engines at 5G
devices can perform real-time mobile computing and decision making without
reliance on remote cloud servers.
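As a hedged sketch of the channel-estimation use case, the example below trains a small fully connected network to map received pilot symbols to channel coefficients. The dimensions, random training pairs, and architecture are illustrative assumptions rather than a validated 5G design; with real data, the pilots and channels would come from measurements or a channel simulator.

```python
# Minimal sketch: a DNN regressor mapping received pilots to channel
# coefficients. The data is random noise standing in for real pilot/channel
# pairs, so only the pipeline (not the accuracy) is meaningful here.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
n, n_pilot, n_sub = 5000, 16, 64               # samples, pilot length, subcarriers

pilots = rng.normal(size=(n, 2 * n_pilot)).astype(np.float32)    # real/imag stacked
channels = rng.normal(size=(n, 2 * n_sub)).astype(np.float32)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(2 * n_pilot,)),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(2 * n_sub),          # regression output
])
model.compile(optimizer="adam", loss="mse")
model.fit(pilots, channels, epochs=5, batch_size=128, verbose=0)
print("training MSE:", model.evaluate(pilots, channels, verbose=0))
```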
Beyond-fifth-generation (B5G) networks, or so-called 6G, will emerge to
provide superior performance over 5G and meet the increasing service
requirements of future wireless networks in the 2030s. The 6G networks are
envisioned to provide new human-centric value with transformative services,
such as quantum communications, wearable computing, autonomous driving,
virtual reality, space-air-ground networks [274], [275], and the Internet of
Nano-Things [276]. AI technologies are expected to play an indispensable role
in facilitating the end-to-end design and optimization of data processing in
6G wireless networks [277]. Future cellular networks will be much more
complex and dynamic, and the number of parameters to be controlled will
increase rapidly. In such scenarios, AI can provide smart solutions through
its self-learning and online optimization capabilities to support advanced
data sensing, data communication, and signal processing [278]. These benefits
would accelerate the deployment of future 6G networks and enable newly
innovative wireless services.
The application of AI in wireless networks is still in its infancy and will
mature quickly in the coming years to provide smarter wireless network
services. The network topology model and wireless communication architecture
will become more complex with the dynamicity of data traffic and mobile
devices. AI is expected to play a key role in realizing the intelligence of
future wireless services and to allow for a shift from human-based network
management to automatic and autonomous network operations. Therefore, the
integration of AI in wireless networks is reshaping and transforming current
service provisions toward high performance. This detailed survey is expected
to pave the way for innovative research and solutions for empowering the next
generation of wireless AI.
## IX Conclusions
In this paper, we have presented a state-of-the-art survey on the emerging
Wireless AI in communication networks. We have first proposed a novel
Wireless AI architecture from the data-driven perspective with a focus on
five main themes in wireless networks, namely Sensing AI, Network Device AI,
Access AI, User Device AI, and Data-provenance AI. Then, we have focused on
analyzing the use of AI approaches in each data-driven AI theme by surveying
the recent research efforts in the literature. For each domain, we have
performed a holistic discussion on the Wireless AI applications in different
network scenarios, and the key lessons learned from the survey of each domain
have been derived. Based on the holistic survey, we have pointed out the
important research challenges to be considered carefully for further
investigation of AI applications in wireless networks. Finally, we have
outlined potential future research directions toward AI-enabled 5G wireless
networks and beyond. We hope that this article stimulates more interest in
this promising area and encourages more research efforts toward the full
realization of Wireless AI.
## References
* [1] A. Al-Fuqaha, M. Guizani, M. Mohammadi, M. Aledhari, and M. Ayyash, “Internet of things: A survey on enabling technologies, protocols, and applications,” _IEEE Communications Surveys & Tutorials_, vol. 17, no. 4, pp. 2347–2376, 2015.
* [2] J. Lin, W. Yu, N. Zhang, X. Yang, H. Zhang, and W. Zhao, “A survey on internet of things: Architecture, enabling technologies, security and privacy, and applications,” _IEEE Internet of Things Journal_ , vol. 4, no. 5, pp. 1125–1142, 2017.
* [3] Internet of Things 2016 [Online]. Available: https://www.cisco.com/c/dam/en/us/products/collateral/se/internetof-things/at-a-glance-c45-731471.pdf, accessed October 2019.
* [4] Virtual networking [Online]. Available: https://www.cisco.com/c/en/us/solutions/collateral/service-provider/visual-networking-index-vni/white-paper-c11-741490.html, accessed October 2019.
* [5] M. Agiwal, A. Roy, and N. Saxena, “Next generation 5G wireless networks: A comprehensive survey,” _IEEE Communications Surveys & Tutorials_, vol. 18, no. 3, pp. 1617–1655, 2016.
* [6] A. Gupta and R. K. Jha, “A survey of 5G network: Architecture and emerging technologies,” _IEEE Access_ , vol. 3, pp. 1206–1232, 2015.
* [7] Q. Chen, W. Wang, F. Wu, S. De, R. Wang, B. Zhang, and X. Huang, “A survey on an emerging area: Deep learning for smart city data,” _IEEE Transactions on Emerging Topics in Computational Intelligence_ , 2019.
* [8] P. Cheng, Z. Chen, M. Ding, Y. Li, B. Vucetic, and D. Niyato, “Spectrum intelligent radio: Technology, development, and future trends,” _arXiv preprint arXiv:2001.00207_ , 2020.
* [9] M. Mohammadi and A. Al-Fuqaha, “Enabling cognitive smart cities using big data and machine learning: Approaches and challenges,” _IEEE Communications Magazine_ , vol. 56, no. 2, pp. 94–101, 2018.
* [10] Z. Zhang, K. Long, J. Wang, and F. Dressler, “On swarm intelligence inspired self-organized networking: its bionic mechanisms, designing principles and optimization approaches,” _IEEE Communications Surveys & Tutorials_, vol. 16, no. 1, pp. 513–537, 2013.
* [11] X. Ge, J. Thompson, Y. Li, X. Liu, W. Zhang, and T. Chen, “Applications of artificial intelligence in wireless communications,” _IEEE Communications Magazine_ , vol. 57, no. 3, pp. 12–13, 2019.
* [12] M. E. M. Cayamcela and W. Lim, “Artificial intelligence in 5G technology: A survey,” in _Proc. 2018 International Conference on Information and Communication Technology Convergence (ICTC)_ , 2018, pp. 860–865.
* [13] Machine learning, AI aids network incident prediction, prevention [Online]. Available: https://www.rcrwireless.com/20191105/5G/machine-learning-ai-network-incident-prediction-prevention, accessed October 2019.
* [14] Face ID advanced technology [Online]. Available: https://support.apple.com/en-us/HT208108, accessed October 2019.
* [15] Phones with the Google Assistant [Online]. Available: https://assistant.google.com/platforms/phones/, accessed October 2019.
* [16] Z. Zhou, X. Chen, E. Li, L. Zeng, K. Luo, and J. Zhang, “Edge intelligence: Paving the last mile of artificial intelligence with edge computing,” _arXiv preprint arXiv:1905.10083_ , 2019.
* [17] J. Park, S. Samarakoon, M. Bennis, and M. Debbah, “Wireless network intelligence at the edge,” _Proceedings of the IEEE_ , vol. 107, no. 11, pp. 2204–2239, 2019.
* [18] X. Wang, X. Li, and V. C. Leung, “Artificial intelligence-based techniques for emerging heterogeneous network: State of the arts, opportunities, and challenges,” _IEEE Access_ , vol. 3, pp. 1379–1391, 2015.
* [19] X. Wang, Y. Han, V. C. Leung, D. Niyato, X. Yan, and X. Chen, “Convergence of edge computing and deep learning: A comprehensive survey,” _IEEE Communications Surveys & Tutorials_, 2020.
* [20] C. Zhang, P. Patras, and H. Haddadi, “Deep learning in mobile and wireless networking: A survey,” _IEEE Communications Surveys & Tutorials_, 2019.
* [21] M. Mohammadi, A. Al-Fuqaha, S. Sorour, and M. Guizani, “Deep learning for IoT big data and streaming analytics: A survey,” _IEEE Communications Surveys & Tutorials_, vol. 20, no. 4, pp. 2923–2960, 2018.
* [22] Z. M. Fadlullah, F. Tang, B. Mao, N. Kato, O. Akashi, T. Inoue, and K. Mizutani, “State-of-the-art deep learning: Evolving machine intelligence toward tomorrow’s intelligent network traffic control systems,” _IEEE Communications Surveys & Tutorials_, vol. 19, no. 4, pp. 2432–2455, 2017.
* [23] A. Zappone, M. Di Renzo, and M. Debbah, “Wireless networks design in the era of deep learning: Model-based, AI-based, or both?” _arXiv preprint arXiv:1902.02647_ , 2019.
* [24] N. C. Luong, D. T. Hoang, S. Gong, D. Niyato, P. Wang, Y.-C. Liang, and D. I. Kim, “Applications of deep reinforcement learning in communications and networking: A survey,” _IEEE Communications Surveys & Tutorials_, 2019.
* [25] J. Jagannath, N. Polosky, A. Jagannath, F. Restuccia, and T. Melodia, “Machine learning for wireless communications in the internet of things: a comprehensive survey,” _Ad Hoc Networks_ , p. 101913, 2019.
* [26] Y. Sun, M. Peng, Y. Zhou, Y. Huang, and S. Mao, “Application of machine learning in wireless networks: Key techniques and open issues,” _IEEE Communications Surveys & Tutorials_, vol. 21, no. 4, pp. 3072–3108, 2019.
* [27] J. Wang, C. Jiang, H. Zhang, Y. Ren, K.-C. Chen, and L. Hanzo, “Thirty years of machine learning: The road to pareto-optimal wireless networks,” _IEEE Communications Surveys & Tutorials_, 2020.
* [28] F. Musumeci, C. Rottondi, A. Nag, I. Macaluso, D. Zibar, M. Ruffini, and M. Tornatore, “An overview on application of machine learning techniques in optical networks,” _IEEE Communications Surveys & Tutorials_, vol. 21, no. 2, pp. 1383–1408, 2018.
* [29] C. M. Bishop, _Pattern recognition and machine learning_. Springer, 2006.
* [30] M. M. Najafabadi, F. Villanustre, T. M. Khoshgoftaar, N. Seliya, R. Wald, and E. Muharemagic, “Deep learning applications and challenges in big data analytics,” _Journal of Big Data_ , vol. 2, no. 1, p. 1, 2015.
* [31] W. G. Hatcher and W. Yu, “A survey of deep learning: platforms, applications and emerging research trends,” _IEEE Access_ , vol. 6, pp. 24 411–24 432, 2018.
* [32] K. Arulkumaran, M. P. Deisenroth, M. Brundage, and A. A. Bharath, “A brief survey of deep reinforcement learning,” _arXiv preprint arXiv:1708.05866_ , 2017.
* [33] Y. Li, “Deep reinforcement learning: An overview,” _arXiv preprint arXiv:1701.07274_ , 2017.
* [34] S. B. Kotsiantis, I. Zaharakis, and P. Pintelas, “Supervised machine learning: A review of classification techniques,” _Emerging Artificial Intelligence Applications in Computer Engineering_ , vol. 160, pp. 3–24, 2007.
* [35] J. Shawe-Taylor and N. Cristianini, “Support vector machines,” _An Introduction to Support Vector Machines and Other Kernel-based Learning Methods_ , pp. 93–112, 2000.
* [36] L. Jiang, Z. Cai, D. Wang, and S. Jiang, “Survey of improving k-nearest-neighbor for classification,” in _Proc. Fourth International Conference on Fuzzy Systems and Knowledge Discovery (FSKD 2007)_ , vol. 1, 2007, pp. 679–683.
* [37] M. Hanumanthappa, T. S. Kumar, and D. T. S. Kumar, “Intrusion detection system using decision tree algorithm,” in _Communication Technology (ICCT), 2012 IEEE 14th International Conference on. IEEE_ , 2012.
* [38] T. Kanungo, D. M. Mount, N. S. Netanyahu, C. D. Piatko, R. Silverman, and A. Y. Wu, “An efficient k-means clustering algorithm: Analysis and implementation,” _IEEE Transactions on Pattern Analysis & Machine Intelligence_, no. 7, pp. 881–892, 2002.
* [39] X. J. Zhu, “Semi-supervised learning literature survey,” University of Wisconsin-Madison Department of Computer Sciences, Tech. Rep., 2005.
* [40] R. S. Sutton and A. G. Barto, “Reinforcement learning: An introduction,” 2011\.
* [41] N. Jiang, Y. Deng, A. Nallanathan, and J. A. Chambers, “Reinforcement learning for real-time optimization in nb-IoT networks,” _IEEE Journal on Selected Areas in Communications_ , vol. 37, no. 6, pp. 1424–1440, 2019.
* [42] X. Wan, G. Sheng, Y. Li, L. Xiao, and X. Du, “Reinforcement learning based mobile offloading for cloud-based malware detection,” in _GLOBECOM 2017-2017 IEEE Global Communications Conference_ , 2017, pp. 1–6.
* [43] S. Pouyanfar, S. Sadiq, Y. Yan, H. Tian, Y. Tao, M. P. Reyes, M.-L. Shyu, S.-C. Chen, and S. Iyengar, “A survey on deep learning: Algorithms, techniques, and applications,” _ACM Computing Surveys (CSUR)_ , vol. 51, no. 5, p. 92, 2018.
* [44] A. Shrestha and A. Mahmood, “Review of deep learning algorithms and architectures,” _IEEE Access_ , vol. 7, pp. 53 040–53 065, 2019.
* [45] D. C. Ciresan, U. Meier, J. Masci, L. M. Gambardella, and J. Schmidhuber, “Flexible, high performance convolutional neural networks for image classification,” in _Twenty-Second International Joint Conference on Artificial Intelligence_ , 2011.
* [46] M. Elwekeil, S. Jiang, T. Wang, and S. Zhang, “Deep convolutional neural networks for link adaptations in mimo-ofdm wireless systems,” _IEEE Wireless Communications Letters_ , 2018.
* [47] S. Peng, H. Jiang, H. Wang, H. Alwageed, and Y.-D. Yao, “Modulation classification using convolutional neural network based deep learning model,” in _2017 26th Wireless and Optical Communication Conference (WOCC)_ , 2017, pp. 1–5.
* [48] I. Sutskever, J. Martens, and G. E. Hinton, “Generating text with recurrent neural networks,” in _Proceedings of the 28th International Conference on Machine Learning (ICML-11)_ , 2011, pp. 1017–1024.
* [49] A. I. Moustapha and R. R. Selmic, “Wireless sensor network modeling using modified recurrent neural networks: Application to fault detection,” _IEEE Transactions on Instrumentation and Measurement_ , vol. 57, no. 5, pp. 981–988, 2008.
* [50] M. F. Caetano, M. R. Makiuchi, S. S. Fernandes, M. V. Lamar, J. L. Bordim, and P. S. Barreto, “A recurrent neural network mac protocol towards to opportunistic communication in wireless networks,” in _2019 16th International Symposium on Wireless Communication Systems (ISWCS)_ , 2019, pp. 63–68.
* [51] J. Kim, J. Kim, H. L. T. Thu, and H. Kim, “Long short term memory recurrent neural network classifier for intrusion detection,” in _2016 International Conference on Platform Technology and Service (PlatCon)_ , 2016, pp. 1–5.
* [52] G. B. Tarekegn, H.-P. Lin, A. B. Adege, Y. Y. Munaye, and S.-S. Jeng, “Applying long short-term memory (lstm) mechanisms for fingerprinting outdoor positioning in hybrid networks,” in _2019 IEEE 90th Vehicular Technology Conference (VTC2019-Fall)_ , 2019, pp. 1–5.
* [53] W. Zhang, W. Guo, X. Liu, Y. Liu, J. Zhou, B. Li, Q. Lu, and S. Yang, “Lstm-based analysis of industrial IoT equipment,” _IEEE Access_ , vol. 6, pp. 23 551–23 560, 2018.
* [54] V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, and M. Riedmiller, “Playing atari with deep reinforcement learning,” _arXiv preprint arXiv:1312.5602_ , 2013.
* [55] D. C. Nguyen, P. N. Pathirana, M. Ding, and A. Seneviratne, “Secure computation offloading in blockchain based IoT networks with deep reinforcement learning,” _arXiv preprint arXiv:1908.07466_ , 2019.
* [56] D. C. Nguyen, P. Pathirana, M. Ding, and A. Seneviratne, “Privacy-preserved task offloading in mobile blockchain with deep reinforcement learning,” _arXiv preprint arXiv:1908.07467_ , 2019.
* [57] C. He, Y. Hu, Y. Chen, and B. Zeng, “Joint power allocation and channel assignment for noma with deep reinforcement learning,” _IEEE Journal on Selected Areas in Communications_ , vol. 37, no. 10, pp. 2200–2210, 2019.
* [58] Y. Yang, J. Cao, X. Liu, and X. Liu, “Wi-count: Passing people counting with cots wifi devices,” in _2018 27th International Conference on Computer Communication and Networks (ICCCN)_ , 2018, pp. 1–9.
* [59] J. He and A. Arora, “A regression-based radar-mote system for people counting,” in _2014 IEEE International Conference on Pervasive Computing and Communications (PerCom)_ , 2014, pp. 95–102.
* [60] T. Yoshida and Y. Taniguchi, “Estimating the number of people using existing wifi access point based on support vector regression,” _International Information Institute (Tokyo). Information_ , vol. 19, no. 7A, p. 2661, 2016.
* [61] Y. Wang, H. Lian, P. Chen, and Z. Lu, “Counting people with support vector regression,” in _2014 10th International Conference on Natural Computation (ICNC)_ , 2014, pp. 139–143.
* [62] J. He and A. Arora, “A regression-based radar-mote system for people counting,” in _2014 IEEE International Conference on Pervasive Computing and Communications (PerCom)_ , 2014, pp. 95–102.
* [63] H. Li, E. C. Chan, X. Guo, J. Xiao, K. Wu, and L. M. Ni, “Wi-counter: smartphone-based people counter using crowdsourced wi-fi signal data,” _IEEE Transactions on Human-Machine Systems_ , vol. 45, no. 4, pp. 442–452, 2015.
* [64] H. Farhood, X. He, W. Jia, M. Blumenstein, and H. Li, “Counting people based on linear, weighted, and local random forests,” in _2017 International Conference on Digital Image Computing: Techniques and Applications (DICTA)_ , 2017, pp. 1–7.
* [65] I. Sobron, J. Del Ser, I. Eizmendi, and M. Vélez, “Device-free people counting in IoT environments: New insights, results, and open challenges,” _IEEE Internet of Things Journal_ , vol. 5, no. 6, pp. 4396–4408, 2018.
* [66] S. Liu, Y. Zhao, and B. Chen, “Wicount: A deep learning approach for crowd counting using wifi signals,” in _2017 IEEE International Symposium on Parallel and Distributed Processing with Applications and 2017 IEEE International Conference on Ubiquitous Computing and Communications (ISPA/IUCC)_ , 2017, pp. 967–974.
* [67] C. Zhang, H. Li, X. Wang, and X. Yang, “Cross-scene crowd counting via deep convolutional neural networks,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 2015, pp. 833–841.
* [68] A. Capponi, C. Fiandrino, B. Kantarci, L. Foschini, D. Kliazovich, and P. Bouvry, “A survey on mobile crowdsensing systems: Challenges, solutions and opportunities,” _IEEE Communications Surveys & Tutorials_, 2019.
* [69] K. Han, C. Zhang, and J. Luo, “Taming the uncertainty: Budget limited robust crowdsensing through online learning,” _IEEE/ACM Transactions on Networking_ , vol. 24, no. 3, pp. 1462–1475, 2015.
* [70] M. B. Kjærgaard, H. Blunck, M. Wüstenberg, K. Grønbæk, M. Wirz, D. Roggen, and G. Tröster, “Time-lag method for detecting following and leadership behavior of pedestrians from mobile sensing data,” in _2013 IEEE International Conference on Pervasive Computing and Communications (PerCom)_ , 2013, pp. 56–64.
* [71] C. Xu, S. Li, Y. Zhang, E. Miluzzo, and Y.-f. Chen, “Crowdsensing the speaker count in the wild: Implications and applications,” _IEEE Communications Magazine_ , vol. 52, no. 10, pp. 92–99, 2014.
* [72] C. H. Liu, Z. Chen, and Y. Zhan, “Energy-efficient distributed mobile crowd sensing: A deep learning approach,” _IEEE Journal on Selected Areas in Communications_ , vol. 37, no. 6, pp. 1262–1276, 2019.
* [73] Z. Zhou, H. Liao, B. Gu, K. M. S. Huq, S. Mumtaz, and J. Rodriguez, “Robust mobile crowd sensing: When deep learning meets edge computing,” _IEEE Network_ , vol. 32, no. 4, pp. 54–60, 2018.
* [74] C. H. Liu, Z. Dai, Y. Zhao, J. Crowcroft, D. O. Wu, and K. Leung, “Distributed and energy-efficient mobile crowdsensing with charging stations by deep reinforcement learning,” _IEEE Transactions on Mobile Computing_ , 2019.
* [75] Z.-Q. Zhao, P. Zheng, S.-t. Xu, and X. Wu, “Object detection with deep learning: A review,” _IEEE Transactions on Neural Networks and Learning Systems_ , vol. 30, no. 11, pp. 3212–3232, 2019.
* [76] M. Feng, H. Lu, and E. Ding, “Attentive feedback network for boundary-aware salient object detection,” in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , 2019, pp. 1623–1632.
* [77] L. Wang, L. Wang, H. Lu, P. Zhang, and X. Ruan, “Salient object detection with recurrent fully convolutional networks,” _IEEE transactions on pattern analysis and machine intelligence_ , vol. 41, no. 7, pp. 1734–1746, 2018.
* [78] Z. Wu, L. Su, and Q. Huang, “Cascaded partial decoder for fast and accurate salient object detection,” in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , 2019, pp. 3907–3916.
* [79] Z. Luo, A. Mishra, A. Achkar, J. Eichel, S. Li, and P.-M. Jodoin, “Non-local deep features for salient object detection,” in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , 2017, pp. 6609–6617.
* [80] G. Li and Y. Yu, “Contrast-oriented deep neural networks for salient object detection,” _IEEE Transactions on Neural Networks and Learning Systems_ , vol. 29, no. 12, pp. 6038–6051, 2018.
* [81] P. N. Pathirana, N. Bulusu, A. V. Savkin, and S. Jha, “Node localization using mobile robots in delay-tolerant sensor networks,” _IEEE transactions on Mobile Computing_ , vol. 4, no. 3, pp. 285–296, 2005.
* [82] M. A. Alsheikh, S. Lin, D. Niyato, and H.-P. Tan, “Machine learning in wireless sensor networks: Algorithms, strategies, and applications,” _IEEE Communications Surveys & Tutorials_, vol. 16, no. 4, pp. 1996–2018, 2014.
* [83] O. B. Tariq, M. T. Lazarescu, J. Iqbal, and L. Lavagno, “Performance of machine learning classifiers for indoor person localization with capacitive sensors,” _IEEE Access_ , vol. 5, pp. 12 913–12 926, 2017.
* [84] A. Singh and S. Verma, “Graph laplacian regularization with procrustes analysis for sensor node localization,” _IEEE Sensors Journal_ , vol. 17, no. 16, pp. 5367–5376, 2017.
* [85] D. N. Phuong and T. P. Chi, “A hybrid indoor localization system running ensemble machine learning,” in _2018 IEEE Intl Conf on Parallel & Distributed Processing with Applications, Ubiquitous Computing & Communications, Big Data & Cloud Computing, Social Computing & Networking, Sustainable Computing & Communications (ISPA/IUCC/BDCloud/SocialCom/SustainCom)_, 2018, pp. 1071–1078.
* [86] G. Qi, Y. Jin, and J. Yan, “Rssi-based floor localization using principal component analysis and ensemble extreme learning machine technique,” in _2018 IEEE 23rd International Conference on Digital Signal Processing (DSP)_ , 2018, pp. 1–5.
* [87] F. Luo, S. Poslad, and E. Bodanese, “Human activity detection and coarse localization outdoors using micro-doppler signatures,” _IEEE Sensors Journal_ , 2019.
* [88] J. Wang, X. Zhang, Q. Gao, H. Yue, and H. Wang, “Device-free wireless localization and activity recognition: A deep learning approach,” _IEEE Transactions on Vehicular Technology_ , vol. 66, no. 7, pp. 6258–6267, 2016.
* [89] Z. E. Khatab, A. Hajihoseini, and S. A. Ghorashi, “A fingerprint method for indoor localization using autoencoder based deep extreme learning machine,” _IEEE sensors letters_ , vol. 2, no. 1, pp. 1–4, 2017.
* [90] B. Berruet, O. Baala, A. Caminada, and V. Guillet, “Delfin: A deep learning based csi fingerprinting indoor localization in IoT context,” in _2018 International Conference on Indoor Positioning and Indoor Navigation (IPIN)_ , 2018, pp. 1–8.
* [91] S. A. Samadh, Q. Liu, X. Liu, N. Ghourchian, and M. Allegue, “Indoor localization based on channel state information,” in _2019 IEEE Topical Conference on Wireless Sensors and Sensor Networks (WiSNet)_ , 2019, pp. 1–4.
* [92] Y. Li, X. Hu, Y. Zhuang, Z. Gao, P. Zhang, and N. El-Sheimy, “Deep reinforcement learning (drl): Another perspective for unsupervised wireless localization,” _IEEE Internet of Things Journal_ , 2019.
* [93] X. Yang, S. A. Shah, A. Ren, D. Fan, N. Zhao, S. Zheng, W. Zhao, W. Wang, P. J. Soh, and Q. H. Abbasi, “S-band sensing-based motion assessment framework for cerebellar dysfunction patients,” _IEEE Sensors Journal_ , 2018.
* [94] D. C. Nguyen, K. D. Nguyen, and P. N. Pathirana, “A mobile cloud based IoMT framework for automated health assessment and management,” in _2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC)_. IEEE, 2019, pp. 6517–6520.
* [95] B. Kashyap, M. Horne, P. N. Pathirana, L. Power, and D. Szmulewicz, “Automated topographic prominence based quantitative assessment of speech timing in cerebellar ataxia,” _Biomedical Signal Processing and Control_ , vol. 57, p. 101759, 2020.
* [96] S. Lolla and A. Zhao, “Wifi motion detection: A study into efficacy and classification,” in _2019 IEEE Integrated STEM Education Conference (ISEC)_ , 2019, pp. 375–378.
* [97] H. Li, K. Ota, M. Dong, and M. Guo, “Learning human activities through wi-fi channel state information with multiple access points,” _IEEE Communications Magazine_ , vol. 56, no. 5, pp. 124–129, 2018.
* [98] Q. Gao, J. Wang, X. Ma, X. Feng, and H. Wang, “Csi-based device-free wireless localization and activity recognition using radio image features,” _IEEE Transactions on Vehicular Technology_ , vol. 66, no. 11, pp. 10 346–10 356, 2017.
* [99] Z. Shi, J. A. Zhang, R. Xu, and G. Fang, “Human activity recognition using deep learning networks with enhanced channel state information,” in _2018 IEEE Globecom Workshops (GC Wkshps)_ , 2018, pp. 1–6.
* [100] H. Zou, Y. Zhou, J. Yang, H. Jiang, L. Xie, and C. J. Spanos, “Deepsense: Device-free human activity recognition via autoencoder long-term recurrent convolutional network,” in _2018 IEEE International Conference on Communications (ICC)_ , 2018, pp. 1–6.
* [101] T. He, W. Huang, Y. Qiao, and J. Yao, “Accurate text localization in natural image with cascaded convolutional text network,” _arXiv preprint arXiv:1603.09423_ , 2016.
* [102] W. Lu, H. Sun, J. Chu, X. Huang, and J. Yu, “A novel approach for video text detection and recognition based on a corner response feature map and transferred deep convolutional neural network,” _IEEE Access_ , vol. 6, pp. 40 198–40 211, 2018.
* [103] L. Li, G. Zhao, and R. S. Blum, “A survey of caching techniques in cellular networks: Research issues and challenges in content placement and delivery strategies,” _IEEE Communications Surveys & Tutorials_, vol. 20, no. 3, pp. 1710–1732, 2018.
* [104] G. Shen, L. Pei, P. Zhiwen, L. Nan, and Y. Xiaohu, “Machine learning based small cell cache strategy for ultra dense networks,” in _2017 9th International Conference on Wireless Communications and Signal Processing (WCSP)_ , 2017, pp. 1–6.
* [105] Z. Chang, L. Lei, Z. Zhou, S. Mao, and T. Ristaniemi, “Learn to cache: Machine learning for network edge caching in the big data era,” _IEEE Wireless Communications_ , vol. 25, no. 3, pp. 28–35, 2018.
* [106] W. Jiang, G. Feng, S. Qin, and Y.-C. Liang, “Learning-based cooperative content caching policy for mobile edge computing,” in _ICC 2019-2019 IEEE International Conference on Communications (ICC)_ , 2019, pp. 1–6.
* [107] C. Zhong, M. C. Gursoy, and S. Velipasalar, “A deep reinforcement learning-based framework for content caching,” in _2018 52nd Annual Conference on Information Sciences and Systems (CISS)_ , 2018, pp. 1–6.
* [108] Y. Wei, F. R. Yu, M. Song, and Z. Han, “Joint optimization of caching, computing, and radio resources for fog-enabled IoT using natural actor–critic deep reinforcement learning,” _IEEE Internet of Things Journal_ , vol. 6, no. 2, pp. 2061–2073, 2018.
* [109] F. Jiang, K. Wang, L. Dong, C. Pan, W. Xu, and K. Yang, “Deep learning based joint resource scheduling algorithms for hybrid mec networks,” _IEEE Internet of Things Journal_ , 2019.
* [110] T. Yang, Y. Hu, M. C. Gursoy, A. Schmeink, and R. Mathar, “Deep reinforcement learning based resource allocation in low latency edge computing networks,” in _2018 15th International Symposium on Wireless Communication Systems (ISWCS)_ , 2018, pp. 1–5.
* [111] J. Patman, P. Lovett, A. Banning, A. Barnert, D. Chemodanov, and P. Calvam, “Data-driven edge computing resource scheduling for protest crowds incident management,” in _2018 IEEE 17th International Symposium on Network Computing and Applications (NCA)_ , 2018, pp. 1–8.
* [112] D. Zeng, L. Gu, S. Pan, J. Cai, and S. Guo, “Resource management at the network edge: A deep reinforcement learning approach,” _IEEE Network_ , vol. 33, no. 3, pp. 26–33, 2019.
* [113] C.-H. Hong and B. Varghese, “Resource management in fog/edge computing: A survey,” _arXiv preprint arXiv:1810.00305_ , 2018.
* [114] Y. He, C. Liang, Z. Zhang, F. R. Yu, N. Zhao, H. Yin, and Y. Zhang, “Resource allocation in software-defined and information-centric vehicular networks with mobile edge computing,” in _2017 IEEE 86th Vehicular Technology Conference (VTC-Fall)_ , 2017, pp. 1–5.
* [115] L. Gu, D. Zeng, W. Li, S. Guo, A. Zomaya, and H. Jin, “Deep reinforcement learning based vnf management in geo-distributed edge computing,” in _2019 IEEE 39th International Conference on Distributed Computing Systems (ICDCS)_ , 2019, pp. 934–943.
* [116] D. C. Nguyen, P. N. Pathirana, M. Ding, and A. Seneviratne, “Integration of blockchain and cloud of things: Architecture, applications and challenges,” _arXiv preprint arXiv:1908.09058_ , 2019.
* [117] S. Guo, Y. Dai, S. Xu, X. Qiu, and F. Qi, “Trusted cloud-edge network resource management: Drl-driven service function chain orchestration for IoT,” _IEEE Internet of Things Journal_ , 2019.
* [118] K. Satyanarayana, M. El-Hajjar, A. A. Mourad, and L. Hanzo, “Multi-user hybrid beamforming relying on learning-aided link-adaptation for mmwave systems,” _IEEE Access_ , vol. 7, pp. 23 197–23 209, 2019.
* [119] L. Hou, L. Lei, K. Zheng, and X. Wang, “A q-learning based proactive caching strategy for non-safety related services in vehicular networks,” _IEEE Internet of Things Journal_ , 2018.
* [120] Z. Qin, H. Ye, G. Y. Li, and B.-H. F. Juang, “Deep learning in physical layer communications,” _IEEE Wireless Communications_ , vol. 26, no. 2, pp. 93–99, 2019.
* [121] T. O’Shea and J. Hoydis, “An introduction to deep learning for the physical layer,” _IEEE Transactions on Cognitive Communications and Networking_ , vol. 3, no. 4, pp. 563–575, 2017.
* [122] H. He, S. Jin, C.-K. Wen, F. Gao, G. Y. Li, and Z. Xu, “Model-driven deep learning for physical layer communications,” _IEEE Wireless Communications_ , 2019.
* [123] P. Dong, H. Zhang, and G. Y. Li, “Machine learning prediction based csi acquisition for fdd massive mimo downlink,” in _2018 IEEE Global Communications Conference (GLOBECOM)_ , 2018, pp. 1–6.
* [124] J. Yuan, H. Q. Ngo, and M. Matthaiou, “Machine learning-based channel estimation in massive mimo with channel aging,” in _2019 IEEE 20th International Workshop on Signal Processing Advances in Wireless Communications (SPAWC)_ , 2019, pp. 1–5.
* [125] C.-K. Wen, W.-T. Shih, and S. Jin, “Deep learning for massive mimo csi feedback,” _IEEE Wireless Communications Letters_ , vol. 7, no. 5, pp. 748–751, 2018.
* [126] J. Guo, C.-K. Wen, S. Jin, and G. Y. Li, “Convolutional neural network based multiple-rate compressive sensing for massive mimo csi feedback: Design, simulation, and analysis,” _arXiv preprint arXiv:1906.06007_ , 2019.
* [127] Q. Yang, M. B. Mashhadi, and D. Gündüz, “Deep convolutional compression for massive mimo csi feedback,” in _2019 IEEE 29th International Workshop on Machine Learning for Signal Processing (MLSP)_ , 2019, pp. 1–6.
* [128] X. Li and H. Wu, “Spatio-temporal representation with deep neural recurrent network in mimo csi feedback,” _arXiv preprint arXiv:1908.07934_ , 2019.
* [129] C. Qing, B. Cai, Q. Yang, J. Wang, and C. Huang, “Deep learning for csi feedback based on superimposed coding,” _IEEE Access_ , vol. 7, pp. 93 723–93 733, 2019.
* [130] M. Gharavi-Alkhansari and A. B. Gershman, “Fast antenna subset selection in mimo systems,” _IEEE transactions on signal processing_ , vol. 52, no. 2, pp. 339–347, 2004.
* [131] C. Rusu, R. Mendez-Rial, N. González-Prelcic, and R. W. Heath, “Low complexity hybrid precoding strategies for millimeter wave communication systems,” _IEEE Transactions on Wireless Communications_ , vol. 15, no. 12, pp. 8380–8393, 2016.
* [132] Y. Dong and L. Qiu, “Spectral efficiency of massive mimo systems with low-resolution adcs and mmse receiver,” _IEEE Communications Letters_ , vol. 21, no. 8, pp. 1771–1774, 2017.
* [133] L. Li, H. Ren, X. Li, W. Chen, and Z. Han, “Machine learning-based spectrum efficiency hybrid precoding with lens array and low-resolution adcs,” _IEEE Access_ , vol. 7, pp. 117 986–117 996, 2019.
* [134] X. Gao, L. Dai, Y. Sun, S. Han, and I. Chih-Lin, “Machine learning inspired energy-efficient hybrid precoding for mmwave massive mimo systems,” in _2017 IEEE International Conference on Communications (ICC)_ , 2017, pp. 1–6.
* [135] Y. Sun, Z. Gao, H. Wang, and D. Wu, “Machine learning based hybrid precoding for mmwave mimo-ofdm with dynamic subarray,” _arXiv preprint arXiv:1809.03378_ , 2018.
* [136] T. Ding, Y. Zhao, L. Li, D. Hu, and L. Zhang, “Hybrid precoding for beamspace mimo systems with sub-connected switches: A machine learning approach,” _IEEE Access_ , vol. 7, pp. 143 273–143 281, 2019.
* [137] X. Li and A. Alkhateeb, “Deep learning for direct hybrid precoding in millimeter wave massive mimo systems,” _arXiv preprint arXiv:1905.13212_ , 2019.
* [138] P. de Kerret and D. Gesbert, “Robust decentralized joint precoding using team deep neural network,” in _2018 15th International Symposium on Wireless Communication Systems (ISWCS)_ , 2018, pp. 1–5.
* [139] A. M. Elbir and A. Papazafeiropoulos, “Hybrid precoding for multi-user millimeter wave massive mimo systems: A deep learning approach,” _IEEE Transactions on Vehicular Technology_ , 2019.
* [140] J.-M. Kang, I.-M. Kim, and C.-J. Chun, “Deep learning-based mimo-noma with imperfect sic decoding,” _IEEE Systems Journal_ , 2019.
* [141] M. Feng and H. Xu, “Game theoretic based intelligent multi-user millimeter-wave mimo systems under uncertain environment and unknown interference,” in _2019 International Conference on Computing, Networking and Communications (ICNC)_ , 2019, pp. 687–691.
* [142] Q. Wang and K. Feng, “Precodernet: Hybrid beamforming for millimeter wave systems using deep reinforcement learning,” _arXiv preprint arXiv:1907.13266_ , 2019.
* [143] T. J. Richardson and R. L. Urbanke, “The capacity of low-density parity-check codes under message-passing decoding,” _IEEE Transactions on information theory_ , vol. 47, no. 2, pp. 599–618, 2001.
* [144] F. Liang, C. Shen, and F. Wu, “An iterative bp-cnn architecture for channel decoding,” _IEEE Journal of Selected Topics in Signal Processing_ , vol. 12, no. 1, pp. 144–159, 2018.
* [145] S. Cammerer, T. Gruber, J. Hoydis, and S. Ten Brink, “Scaling deep learning-based decoding of polar codes via partitioning,” in _GLOBECOM 2017-2017 IEEE Global Communications Conference_ , 2017, pp. 1–6.
* [146] E. Nachmani, Y. Be’ery, and D. Burshtein, “Learning to decode linear codes using deep learning,” in _2016 54th Annual Allerton Conference on Communication, Control, and Computing (Allerton)_ , 2016, pp. 341–346.
* [147] E. Nachmani, E. Marciano, D. Burshtein, and Y. Be’ery, “Rnn decoding of linear block codes,” _arXiv preprint arXiv:1702.07560_ , 2017.
* [148] T. Farjam, T. Charalambous, and H. Wymeersch, “Timer-based distributed channel access in networked control systems over known and unknown Gilbert-Elliott channels,” in _2019 18th European Control Conference (ECC)_ , 2019, pp. 2983–2989.
* [149] J. Ma, T. Nagatsuma, S.-J. Kim, and M. Hasegawa, “A machine-learning-based channel assignment algorithm for IoT,” in _2019 International Conference on Artificial Intelligence in Information and Communication (ICAIIC)_ , 2019, pp. 1–6.
* [150] O. Jeunen, P. Bosch, M. Van Herwegen, K. Van Doorselaer, N. Godman, and S. Latré, “A machine learning approach for ieee 802.11 channel allocation,” in _2018 14th International Conference on Network and Service Management (CNSM)_ , 2018, pp. 28–36.
* [151] T. Ahmed, F. Ahmed, and Y. Le Moullec, “Optimization of channel allocation in wireless body area networks by means of reinforcement learning,” in _2016 IEEE Asia Pacific Conference on Wireless and Mobile (APWiMob)_ , 2016, pp. 120–123.
* [152] S. Lee, H. Ju, and B. Shim, “Pilot assignment and channel estimation via deep neural network,” in _2018 24th Asia-Pacific Conference on Communications (APCC)_ , 2018, pp. 454–458.
* [153] K. Nakashima, S. Kamiya, K. Ohtsu, K. Yamamoto, T. Nishio, and M. Morikura, “Deep reinforcement learning-based channel allocation for wireless lans with graph convolutional networks,” _arXiv preprint arXiv:1905.07144_ , 2019.
* [154] X. Hu, S. Liu, R. Chen, W. Wang, and C. Wang, “A deep reinforcement learning-based framework for dynamic resource allocation in multibeam satellite systems,” _IEEE Communications Letters_ , vol. 22, no. 8, pp. 1612–1615, 2018.
* [155] M. Chiang, P. Hande, T. Lan, C. W. Tan _et al._ , “Power control in wireless cellular networks,” _Foundations and Trends in Networking_ , vol. 2, no. 4, pp. 381–533, 2008.
* [156] S. Toumi, M. Hamdi, and M. Zaied, “An adaptive q-learning approach to power control for d2d communications,” in _2018 International Conference on Advanced Systems and Electric Technologies (IC_ASET)_ , 2018, pp. 206–209.
* [157] M. Eisen, K. Gatsis, G. J. Pappas, and A. Ribeiro, “Learning in wireless control systems over nonstationary channels,” _IEEE Transactions on Signal Processing_ , vol. 67, no. 5, pp. 1123–1137, 2018.
* [158] T. Zhang and S. Mao, “Energy-efficient power control in wireless networks with spatial deep neural networks,” _IEEE Transactions on Cognitive Communications and Networking_ , 2019.
* [159] A. Zappone, M. Debbah, and Z. Altman, “Online energy-efficient power control in wireless networks by deep neural networks,” in _2018 IEEE 19th International Workshop on Signal Processing Advances in Wireless Communications (SPAWC)_ , 2018, pp. 1–5.
* [160] Y. Yang, D. B. Smith, and S. Seneviratne, “Deep learning channel prediction for transmit power control in wireless body area networks,” in _ICC 2019-2019 IEEE International Conference on Communications (ICC)_ , 2019, pp. 1–6.
* [161] Y. Fu, W. Wen, Z. Zhao, T. Q. Quek, S. Jin, and F.-C. Zheng, “Dynamic power control for noma transmissions in wireless caching networks,” _IEEE Wireless Communications Letters_ , vol. 8, no. 5, pp. 1485–1488, 2019.
* [162] Y. S. Nasir and D. Guo, “Multi-agent deep reinforcement learning for dynamic power allocation in wireless networks,” _IEEE Journal on Selected Areas in Communications_ , vol. 37, no. 10, pp. 2239–2250, 2019.
* [163] D. Wang, H. Qin, B. Song, X. Du, and M. Guizani, “Resource allocation in information-centric wireless networking with d2d-enabled mec: A deep reinforcement learning approach,” _IEEE Access_ , vol. 7, pp. 114 935–114 944, 2019.
* [164] Z. Lu and M. C. Gursoy, “Dynamic channel access and power control via deep reinforcement learning,” in _2019 IEEE 90th Vehicular Technology Conference (VTC2019-Fall)_ , 2019, pp. 1–5.
* [165] S. Deb and P. Monogioudis, “Learning-based uplink interference management in 4g lte cellular systems,” _IEEE/ACM Transactions on Networking_ , vol. 23, no. 2, pp. 398–411, 2014.
* [166] F. Bernardo, R. Agusti, J. Pérez-Romero, and O. Sallent, “Intercell interference management in ofdma networks: A decentralized approach based on reinforcement learning,” _IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews)_ , vol. 41, no. 6, pp. 968–976, 2011.
* [167] H. Sun, X. Chen, Q. Shi, M. Hong, X. Fu, and N. D. Sidiropoulos, “Learning to optimize: Training deep neural networks for interference management,” _IEEE Transactions on Signal Processing_ , vol. 66, no. 20, pp. 5438–5453, 2018.
* [168] P. Zhou, X. Fang, X. Wang, Y. Long, R. He, and X. Han, “Deep learning-based beam management and interference coordination in dense mmwave networks,” _IEEE Transactions on Vehicular Technology_ , vol. 68, no. 1, pp. 592–603, 2018.
* [169] P. Henarejos, M. Á. Vázquez, and A. I. Pérez-Neira, “Deep learning for experimental hybrid terrestrial and satellite interference management,” _arXiv preprint arXiv:1906.03012_ , 2019.
* [170] U. Challita, W. Saad, and C. Bettstetter, “Interference management for cellular-connected UAVs: A deep reinforcement learning approach,” _IEEE Transactions on Wireless Communications_ , vol. 18, no. 4, pp. 2125–2140, 2019.
* [171] H. Ye, G. Y. Li, and B.-H. F. Juang, “Deep reinforcement learning based resource allocation for v2v communications,” _IEEE Transactions on Vehicular Technology_ , vol. 68, no. 4, pp. 3163–3173, 2019.
* [172] Z. Shi, X. Xie, and H. Lu, “Deep reinforcement learning based intelligent user selection in massive mimo underlay cognitive radios,” _IEEE Access_ , vol. 7, pp. 110 884–110 894, 2019.
* [173] L. Zhang, J. Tan, Y.-C. Liang, G. Feng, and D. Niyato, “Deep reinforcement learning for modulation and coding scheme selection in cognitive hetnets,” in _ICC 2019-2019 IEEE International Conference on Communications (ICC)_ , 2019, pp. 1–6.
* [174] D. Niyato and E. Hossain, “A game-theoretic approach to competitive spectrum sharing in cognitive radio networks,” in _2007 IEEE Wireless Communications and Networking Conference_ , 2007, pp. 16–20.
* [175] N. Namvar and F. Afghah, “Spectrum sharing in cooperative cognitive radio networks: A matching game framework,” in _2015 49th Annual Conference on Information Sciences and Systems (CISS)_ , 2015, pp. 1–5.
* [176] T. Yang, R. Zhang, X. Cheng, and L. Yang, “Graph coloring based resource sharing (gcrs) scheme for d2d communications underlaying full-duplex cellular networks,” _IEEE Transactions on Vehicular Technology_ , vol. 66, no. 8, pp. 7506–7517, 2017.
* [177] M. Srinivasan, V. J. Kotagi, and C. S. R. Murthy, “A q-learning framework for user qoe enhanced self-organizing spectrally efficient network using a novel inter-operator proximal spectrum sharing,” _IEEE Journal on Selected Areas in Communications_ , vol. 34, no. 11, pp. 2887–2901, 2016.
* [178] A. Yamada, T. Nishio, M. Morikura, and K. Yamamoto, “Machine learning-based primary exclusive region update for database-driven spectrum sharing,” in _2017 IEEE 85th Vehicular Technology Conference (VTC Spring)_ , 2017, pp. 1–5.
* [179] L. Liang, H. Ye, and G. Y. Li, “Spectrum sharing in vehicular networks based on multi-agent reinforcement learning,” _arXiv preprint arXiv:1905.02910_ , 2019.
* [180] W. Sun, J. Liu, and Y. Yue, “Ai-enhanced offloading in edge computing: when machine learning meets industrial IoT,” _IEEE Network_ , vol. 33, no. 5, pp. 68–74, 2019.
* [181] B. Yang, X. Cao, X. Li, Q. Zhang, and L. Qian, “Mobile edge computing based hierarchical machine learning tasks distribution for iIoT,” _IEEE Internet of Things Journal_ , 2019.
* [182] L. Wang, C. Yang, and R. Q. Hu, “Autonomous traffic offloading in heterogeneous ultra-dense networks using machine learning,” _IEEE Wireless Communications_ , vol. 26, no. 4, pp. 102–109, 2019.
* [183] K. Zhang, Y. Zhu, S. Leng, Y. He, S. Maharjan, and Y. Zhang, “Deep learning empowered task offloading for mobile edge computing in urban informatics,” _IEEE Internet of Things Journal_ , 2019.
* [184] S. Yu and R. Langar, “Collaborative computation offloading for multi-access edge computing,” in _2019 IFIP/IEEE Symposium on Integrated Network and Service Management (IM)_ , 2019, pp. 689–694.
* [185] X. Xu, D. Li, Z. Dai, S. Li, and X. Chen, “A heuristic offloading method for deep learning edge services in 5G networks,” _IEEE Access_ , 2019.
* [186] L. Huang, S. Bi, and Y. J. Zhang, “Deep reinforcement learning for online computation offloading in wireless powered mobile-edge computing networks,” _IEEE Transactions on Mobile Computing_ , 2019.
* [187] Q. Qi, J. Wang, Z. Ma, H. Sun, Y. Cao, L. Zhang, and J. Liao, “Knowledge-driven service offloading decision for vehicular edge computing: A deep reinforcement learning approach,” _IEEE Transactions on Vehicular Technology_ , vol. 68, no. 5, pp. 4192–4203, 2019.
* [188] Z. Ning, P. Dong, X. Wang, M. S. Obaidat, X. Hu, L. Guo, Y. Guo, J. Huang, B. Hu, and Y. Li, “When deep reinforcement learning meets 5G vehicular networks: A distributed offloading framework for traffic big data,” _IEEE Transactions on Industrial Informatics_ , 2019.
* [189] Y. Yang, X. Deng, D. He, Y. You, and R. Song, “Machine learning inspired codeword selection for dual connectivity in 5G user-centric ultra-dense networks,” _IEEE Transactions on Vehicular Technology_ , vol. 68, no. 8, pp. 8284–8288, 2019.
* [190] C. Wang, Z. Zhao, Q. Sun, and H. Zhang, “Deep learning-based intelligent dual connectivity for mobility management in dense network,” in _2018 IEEE 88th Vehicular Technology Conference (VTC-Fall)_ , 2018, pp. 1–5.
* [191] I. Alawe, A. Ksentini, Y. Hadjadj-Aoul, and P. Bertin, “Improving traffic forecasting for 5G core network scalability: A machine learning approach,” _IEEE Network_ , vol. 32, no. 6, pp. 42–49, 2018.
* [192] D. C. Nguyen, P. N. Pathirana, M. Ding, and A. Seneviratne, “Blockchain for 5G and beyond networks: A state of the art survey,” _arXiv preprint arXiv:1912.05062_ , 2019.
* [193] S. K. Tayyaba, H. A. Khattak, A. Almogren, M. A. Shah, I. U. Din, I. Alkhalifa, and M. Guizani, “5G vehicular network resource management for improving radio access through machine learning,” _IEEE Access_ , vol. 8, pp. 6792–6800, 2020.
|
2024-09-04T02:54:56.365775 | 2020-02-20T11:53:56 | 2003.00886 | {
"authors": "Indrajit Saha and Veeraruna Kavitha",
"full_text_license": null,
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"provenance": "arxiv-papers-0000.json.gz:25987",
"submitter": "Indrajit Saha",
"url": "https://arxiv.org/abs/2003.00886"
} | arxiv-papers | IEOR, IIT Bombay, India
# Financial replicator dynamics: emergence of systemic-risk-averting
strategies
Indrajit Saha and Veeraruna Kavitha
###### Abstract
We consider a random financial network with a large number of agents. The
agents connect through credit instruments borrowed from each other or through
direct lending, and these connections create the liabilities. The settlement
of the debts of the various agents at the end of the contract period can be
expressed as the solution of a random fixed point equation. Our first step is
to derive this solution (asymptotically), using a recent result on random
fixed point equations. We consider a large population in which the agents
adopt one of the two available strategies, risky or risk-free investments,
with an aim to maximize their expected returns (or surplus). We aim to study
the emerging strategies when different types of replicator dynamics capture
the inter-agent interactions. We theoretically reduce the analysis of the
complex system to that of an appropriate ordinary differential equation (ODE),
and prove that the equilibrium strategies converge almost surely to an
attractor of the ODE. We also derive the conditions under which a mixed
evolutionary stable strategy (ESS) emerges; in these scenarios the replicator
dynamics converges to an equilibrium at which the expected returns of the two
populations are equal. Further, the average dynamics (choices based on a large
observation sample) always averts systemic risk events (events with a large
fraction of defaults). We verify through Monte Carlo simulations that the
equilibrium suggested by the ODE method indeed represents the limit of the
dynamics.
###### Keywords:
Evolutionary stable strategy (ESS), Replicator dynamics, Ordinary differential
equation, Random graph, Systemic risk, Financial network.
## 1 Introduction
We consider a financial network with a large number of agents. These agents
are interconnected through financial commitments (e.g., borrowing-lending,
etc.). In addition, they invest in either risk-free (risk-neutral) or risky
derivatives. In such a system the agents not only face random economic shocks
(received via significantly smaller returns of their risky investments), but
are also affected by the percolation of the shocks faced by their neighbours
(creditors), the neighbours of their neighbours, and so on. From $2007$-$2008$
onwards, there has been a surge of activity to study the financial and
systemic-level risks caused by such a percolation of shocks ([6, 5, 4, 1]).
Systemic risk is the study of the risks related to financial networks, where
individual or entity-level shocks can trigger severe instability at the system
level and collapse the entire economy (e.g., [6, 5, 4]). This set of papers
studies the kinds of topologies (or graph structures) that are more stable
towards the percolation of shocks in a financial network, where stability is
measured in terms of the total number of defaults in the network.
In contrast to many existing studies in the systemic-risk literature, we
consider heterogeneous agents within an evolutionary framework. In our
setting, two groups of agents exist simultaneously in the network; one group
invests in risk-free instruments, while the other group considers risky
investments. The second group borrows money from the other members of the
network to gather more funds towards the risky investments (with much higher
expected returns). These investments are subjected to large (but rare)
economic shocks, which can potentially percolate throughout the network and
can even affect the ‘risk-free’ agents; the extent of percolation depends upon
the relative sizes of the two groups. We consider that new agents join such a
network after each round of investment; they choose their investment type
(risky or risk-free) based on their observations of the returns (the surplus
of the agents after paying back their liabilities) of a random sample of
agents that invested in the previous round. The relative sizes of the two
groups change, the network structure changes, which influences the
(shock-affected) returns of the agents in the next round, which in turn
influences the decisions of the new agents for the round after. Thus the
system evolves after each round. We study this evolution process using
well-known evolutionary game-theoretic tools.
To the best of our knowledge, this type of work is new from a financial
network perspective. We found a few papers that consider an evolutionary
approach in other aspects related to finance: in [8], the authors study the
financial safety net (a series of arrangements among firms to maintain
financial stability) and analyze the evolution of the bank strategies (to take
insurance or not); recently, in [7], the authors consider an evolutionary
game-theoretic model with three types of players, i) momentum traders, ii)
contrarian traders, and iii) fundamentalists, and study the evolution of the
relative populations. As already mentioned, these papers relate to very
different aspects in comparison with our work.
Evolutionary stable strategies: Traditionally, evolutionary game models have
been studied in the literature to analyze animal behaviour. The key
ingredients of evolutionary game models are a) a large number of players, b)
the dynamics, and c) the pay-off function (e.g., see the pioneering work
[13]). Replicator dynamics deals with the evolution of strategies via
reward-based learning in dynamic evolutionary games. Typically it is shown
that these dynamics converge to a stable equilibrium point called an
Evolutionary Stable Strategy (ESS), which can be seen as a refinement of a
strict Nash Equilibrium ([13]); a strategy prevailing in a large population is
called evolutionary stable if any small fraction of mutants playing a
different strategy gets wiped out eventually. Formally, in a 2-player
symmetric game, a pure strategy $\hat{s}$ is said to be evolutionary stable if
1. $(\hat{s},\hat{s})$ is a Nash Equilibrium, i.e., $u(\hat{s},\hat{s})\geq
u(s',\hat{s})$ for all $s'$; and
2. if $(\hat{s},\hat{s})$ is not a strict NE (i.e., there exists some
$s'\neq\hat{s}$ such that $u(\hat{s},\hat{s})=u(s',\hat{s})$), then
$u(\hat{s},s')>u(s',s')$.
We study the possible emergence of evolutionary stable strategies when agents
choose either a risky or a risk-free strategy; the main difference here is
that the returns of either group are influenced by the percolation of shocks.
The returns of the portfolios depend further upon the percolation of shocks
due to the layered structure of financial connections, and not just on the
returns of the investments, i.e., not just on the economic shocks. Our main
conclusions are twofold: a) when agents consider a large sample of data for
observation and learning, the replicator dynamics can settle to a mixed ESS,
at which the expected returns of the two groups are balanced; b) in many other
scenarios, through theoretical as well as simulation-based study, we observe
that the replicator dynamics converges to one of the two strategies, i.e., to
a pure ESS (after completely wiping out the other group).
The analysis of these complex networks (in each round) necessitates the study
of random fixed point equations (defined sample-path-wise in high-dimensional
spaces), which represent the clearing vectors of all the agents ([1, 6, 5, 4],
etc.). The study is made possible by the recent result in [1], which provides
an asymptotically accurate one-dimensional equivalent solution.
## 2 Large Population Finance Network
We consider random graphs, where the edges represent the financial connections
between pairs of nodes. Any two nodes are connected with probability
$p_{ss}>0$, independently of the others, but the weights on the edges depend
on (the number of) neighbors. This graph represents a large financial network,
where borrowing and lending are represented by the edges and the weights over
them. The modeller may not have access to the exact connections of the
network, but a random graph model is a good approach to analyse such a complex
system. In particular, we consider graphs that satisfy the assumptions of [1].
The agents repeatedly invest in financial projects. In each round of
investment, the agents borrow/lend to/from some random subset of the agents of
the network. Some of them may invest the remainder in a risk-free investment
(which has a constant rate of interest $r_{s}$), while the others invest the
rest of their money in risky investments, which have random returns; we
consider a binomial model in which the returns are high (rate $u$) with high
probability $\delta$ and suffer large shocks (rate $d$) with small probability
$(1-\delta)$; it is clear that $d<r_{s}<u$. We thus have two types of agents:
we call the group that invests in risk-free projects the ‘risk-free’ group
($G_{1}$), while the rest are referred to as the ‘risky’ group ($G_{2}$).
New agents join the network in each round of investment. They choose their
investment type, either risk-free or risky, for the first time based on the
previous experience of the network, and continue with the same choice for all
future rounds of investment. The new agents learn from the network experience
(the returns of the agents in the previous round of investments) and choose a
suitable investment type, one that can potentially give them good returns. The
new agents either learn from the experience of a random sample of the network
(the returns of two random agents) or learn from a large number of agents. In
the former case, their choice of investment type depends upon the returns of
the random sample in the previous round, while in the latter case the decision
can also depend on the average utility of each group of agents, obtained after
observing a large number of samples.
Two strategies: As mentioned before, there are two strategies available in the
financial market. Risk-free agents of $G_{1}$ use strategy 1; these agents
lend some amount of their initial wealth to other agents (of $G_{2}$) that are
willing to borrow, while the rest is invested in a government security, for
example, bonds, government projects, etc. Risky agents of $G_{2}$ adopt
strategy $2$, wherein they borrow funds from the other agents and invest in
risky securities, for example, derivative markets, stocks, corporate loans,
etc. These agents also lend to other agents of $G_{2}.$ Let $\epsilon_{t}$ be
the fraction of agents in the $G_{1}$ group and let $n(t)$ be the total number
of agents in round $t$. Thus the total number of agents (during round $t$) in
group 1 equals $n_{1}(t):=|G_{1}|=n(t)\epsilon_{t}$ and
$n_{2}(t):=|G_{2}|=n(t)(1-\epsilon_{t})$.
We consider that one new agent is added in each round (this approach can
easily be generalized to several other types of dynamics, and we briefly
discuss a few of them towards the end), and thus the size of the graph/network
increases. The agents are homogeneous, i.e., they reserve the same wealth
$w>0$ for the investments (at the initial investment period) of each round.
Each round is composed of two time periods: the agents invest during the
initial investment period, and they obtain their returns after a given time
gap. This two-time-period model is borrowed from [5, 4, 1], etc. The new
agents make their choice for the next (and future) round(s) based on their
observations of these returns of the previous round.
Initial investment phases: During the initial investment phase (of any round
$t$), any agent $i\in G_{1}$ lends to any agent $j\in G_{2}$ with probability
$p_{ss}$, and it lends the (same) amount $w/(n(t)p_{ss})$ to each of the
agents that approached it for a loan (this normalization, after choosing the
required parameters such as $w$ appropriately, is done to derive simpler final
expressions); let $I_{ij}$ be the indicator of this lending event. Note that
for large $n(t)$, the number of approachers from $G_{2}$ approximately equals
$n(t)(1-\epsilon_{t})p_{ss}$, and thus any agent of $G_{1}$ lends
approximately the amount $w(1-\epsilon_{t})$ (i.e., the fraction
$(1-\epsilon_{t})$ of its wealth) to agents of $G_{2}$. The agents of $G_{1}$
invest the rest, $w\epsilon_{t}$, in the risk-free investment (which returns
at the fixed rate of interest $r_{s}$).
Let $\tilde{w}$ be the accumulated wealth of any agent of $G_{2}$ (these
amounts could be random and differ from agent to agent, but in large networks,
by the law of large numbers, one can approximate them by constants), out of
which a positive fraction $\alpha$ is lent to the other agents of $G_{2}$ and
the remaining $(1-\alpha)$ portion is invested in the risky security. Thus the
accumulated wealth of a typical $G_{2}$ agent is governed by the following
equation,
$\tilde{w}=\underbrace{w+w\epsilon}_{\text{Initial wealth + Borrowed from
$G_{1}$}}+\underbrace{\tilde{w}\alpha}_{\text{Lend/borrow $G_{2}$}}\mbox{ and
thus }\tilde{w}=\frac{w(1+\epsilon)}{(1-\alpha)}.$ (1)
Thus the total investment towards the risky venture equals
$\tilde{w}(1-\alpha)=w(1+\epsilon)$. The $G_{2}$ agents have to settle their
liabilities at the end of the return/contract period (in each round), and this
would depend upon their returns from the risky investments. Thus the total
liability of any agent of $G_{2}$ is $y=(w\epsilon+\tilde{w}\alpha)(1+r_{b})$,
where $r_{b}$ is the borrowing rate (for simplicity of explanation we
represent all these quantities by constant terms; in reality they would be
i.i.d. quantities, further independent across rounds, and the asymptotic
analysis would go through as in [1]); simplifying,
$y=\frac{w(\epsilon+\alpha)(1+r_{b})}{(1-\alpha)}.$
Similarly, any agent of $G_{2}$ lends the following amount to each of its
approachers (of $G_{2}$):
$\frac{\alpha{\tilde{w}}}{n(1-\epsilon_{t})p_{ss}}=\frac{\alpha
w(1+\epsilon)}{n(1-\epsilon_{t})p_{ss}(1-\alpha)}.$ (2)
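For concreteness, the per-round bookkeeping of equations (1)-(2) can be
computed directly. The following is a minimal Python sketch; the parameter
values loosely mirror the configurations of Section 5, while $n$ and $p_{ss}$
are purely illustrative assumptions (the text does not fix $p_{ss}$):

```python
# Sketch of the per-round wealth/liability quantities of eqs. (1)-(2).
# All numeric values are illustrative assumptions.
w, eps, alpha, r_b = 100.0, 0.5, 0.1, 0.19
n, p_ss = 2000, 0.2

w_tilde = w * (1 + eps) / (1 - alpha)            # accumulated wealth of a G2 agent, eq. (1)
risky_investment = w_tilde * (1 - alpha)         # equals w*(1+eps)
y = w * (eps + alpha) * (1 + r_b) / (1 - alpha)  # total liability of a G2 agent

lend_g1_per_edge = w / (n * p_ss)                # a G1 agent, per G2 borrower
lend_g2_per_edge = alpha * w * (1 + eps) / (n * (1 - eps) * p_ss * (1 - alpha))  # eq. (2)

print(w_tilde, risky_investment, y, lend_g1_per_edge, lend_g2_per_edge)
```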
Figure 1: Apportioning of $G_{1}$
Figure 2: Apportioning of $G_{2}$
Return and Settling phases, Clearing Vectors: We fix the round $t$ and drop
the notation $t$ for simplicity. The agents of $G_{2}$ have to clear their
liabilities during this phase in every round. Recall that the agents of
$G_{2}$ invested the amount $w(1+\epsilon)$ in risky investments, and the
corresponding random returns (after economic shocks) are:
$K_{i}=\begin{cases}w(1+\epsilon)(1+u)=:k_{u},&\text{w.p. (with probability)
}\ \delta\\\ w(1+\epsilon)(1+d)=:k_{d},&\text{otherwise}\end{cases}$ (3)
This is the well-known binomial model, in which the upward movement occurs
with probability $\delta$ and the downward movement with probability
$(1-\delta).$ The agents have to return the amount $y$ (which includes the
interest at rate $r_{b}$) to their creditors, but may not be able to do so
because of the above economic shocks. In case of default, the agents return
the maximum possible; let $X_{i}$ be the amount cleared by the $i^{th}$ agent
of group $G_{2}$. Here we consider a standard bankruptcy rule, limited
liability and pro-rata based repayment of the debt contract (see [5, 4]),
where the amounts returned are proportional to the liability ratios. Thus node
$j$ of $G_{2}$ pays back $X_{j}L_{ji}(1+r_{b})/y$ towards node $i$, where
$L_{ji}$, the amount borrowed (liability) during the initial investment phase,
equals (see the details of the previous subsection and equation (2)):
$L_{ji}=\begin{cases}I_{ji}\frac{w}{np_{ss}},&\text{if}\ i\in G_{1}\\\
I_{ji}\frac{\alpha w(1+\epsilon)}{np_{ss}(1-\alpha)(1-\epsilon)},&\text{if}\
i\in G_{2}.\end{cases}$ (4)
Thus the maximum amount cleared by any agent $j\in G_{2}$, $X_{j}$, is given
by the following fixed point equation in terms of the clearing vector
$\{X_{i}\}_{i\in G_{2}}$, composed of the clearing values of all the agents
(see [5, 4], etc.):
$X_{i}=\min\left\\{\bigg{(}K_{i}+\sum_{j\in
G_{2}}X_{j}\frac{L_{ji}}{y}-v\bigg{)}^{+},\ y\right\\},$ (5)
with the following details: the term $K_{i}$ is the return of the risky
investment, the term $\sum_{j\in G_{2}}X_{j}\ L_{ji}/y$ equals the claims from
the other agents (those that borrowed from agent $i$), and $v$ denotes the
taxes to be paid. In other words, agent $i$ will pay back the (maximum
possible) amount $K_{i}+\sum_{j\in G_{2}}X_{j}\frac{L_{ji}}{y}-v$ in case of a
default, and in the other event will exactly pay back the liability amount $y$.
The surplus of any agent is defined as the amount obtained from its various
investments after clearing all its liabilities; this represents the utility of
the agent in the given round. The surplus of an agent $i\in G_{2}$ is:
$R^{2}_{i}=\left(K_{i}+\sum_{j\in G_{2}}X_{j}\frac{L_{ji}}{y}-v-y\right)^{+},$
(6)
while that of an agent $i\in G_{1}$ is given by:
$R^{1}_{i}=\left(w\epsilon(1+r_{s})+\sum_{j\in
G_{2}}X_{j}\frac{L_{ji}}{y}-v\right)^{+}.$ (7)
In the above, the first term is the return from the risk-free investment, the
second term equals the returns or claims from the $G_{2}$ agents (to whom they
lent), and $v$ denotes the amount of taxes.
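For a finite network, the clearing vector of equation (5) can be computed by
iterating the monotone map on its right-hand side, in the spirit of the
Eisenberg-Noe framework [4]. The sketch below samples one network and one
shock realization and then evaluates the surpluses (6)-(7); all parameter
values, and $p_{ss}$ in particular, are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# assumed parameters (loosely following the configurations of Section 5)
n, eps, p_ss = 2000, 0.5, 0.2
w, alpha, r_s, r_b, v = 100.0, 0.1, 0.17, 0.19, 40.0
u, d, delta = 0.2, -0.1, 0.95

n1 = int(eps * n); n2 = n - n1                    # sizes of G1 and G2
y = w * (eps + alpha) * (1 + r_b) / (1 - alpha)   # nominal liability of a G2 agent

# liabilities L_{ji} of each G2 borrower j toward each agent i, eq. (4)
I1 = rng.random((n2, n1)) < p_ss                  # G2 -> G1 borrowing indicators
I2 = rng.random((n2, n2)) < p_ss                  # G2 -> G2 borrowing indicators
np.fill_diagonal(I2, False)
L1 = I1 * (w / (n * p_ss))
L2 = I2 * (alpha * w * (1 + eps) / (n * p_ss * (1 - alpha) * (1 - eps)))

# random returns K_i of the risky investment, eq. (3)
K = w * (1 + eps) * np.where(rng.random(n2) < delta, 1 + u, 1 + d)

# Picard iteration for the clearing vector of eq. (5), started from full
# payment; the iterates decrease monotonically to the (largest) fixed point
X = np.full(n2, y)
for _ in range(500):
    X_new = np.minimum(np.maximum(K + L2.T @ X / y - v, 0.0), y)
    if np.max(np.abs(X_new - X)) < 1e-9:
        break
    X = X_new

# surpluses of the two groups, eqs. (6)-(7), and the empirical default fraction
R2 = np.maximum(K + L2.T @ X / y - v - y, 0.0)
R1 = np.maximum(w * eps * (1 + r_s) + L1.T @ X / y - v, 0.0)
print(R1.mean(), R2.mean(), np.mean(X < y - 1e-9))
```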
## 3 Asymptotic approximation of the large networks
We thus have dynamic graphs whose size increases with each round. In this
section, we obtain an appropriate asymptotic analysis of these graphs/systems,
with the aim of deriving the pay-off of each group after each round. Towards
this, we derive (approximate) closed-form expressions for equations (6) and
(7), which are nothing but the per-agent returns after the settlement of the
liabilities.
The returns of the agents depend upon how other agents settle their
liabilities to their connections/creditors. Thus our first step is to derive
the solution of the clearing vector fixed point equations (5). Observe that
the clearing vector $\\{X_{j}\\}_{j\in G_{2}}$ is the solution of the vector-
valued random fixed point equations (5) in $n$-dimensional space (where $n$ is
the size of the network), defined sample-path wise.
Clearing vectors using the results of [1]: Our financial framework can be
analysed using the results of [1], as the details of the model match the
assumptions of that paper (observe that
$\alpha(1+\epsilon)/(\alpha+\epsilon)<1$). By [1, Theorem 1], the aggregate
claims converge almost surely to constant values (as the network size
increases to infinity):
$\mbox{(claims of agents of $G_{1}$)}\qquad\sum_{j\in
G_{2}}X_{j}\frac{L_{ji}}{y}\ \to\
\frac{(1-\alpha)(1-\epsilon)}{\alpha+\epsilon}\,{\bar{x}}^{\infty}\mbox{
a.s., and}$
$\mbox{(claims of agents of $G_{2}$)}\qquad\sum_{j\in
G_{2}}X_{j}\frac{L_{ji}}{y}\ \to\
\frac{\alpha(1+\epsilon)}{\alpha+\epsilon}\,{\bar{x}}^{\infty}\mbox{ a.s.,}$
where the common expected clearing value ${\bar{x}}^{\infty}$ satisfies the
following one-dimensional fixed point equation:
${\bar{x}}^{\infty}=E\bigg{(}\min\left\\{K_{i}+\frac{\alpha(1+\epsilon)}{\alpha+\epsilon}{\bar{x}}^{\infty}-v,y\right\\}\bigg{)}^{+}.$
(8)
Further, by the same theorem, the clearing values converge almost surely to
(asymptotically independent) random variables:
$X_{i}\to\bigg{(}\min\left\\{K_{i}+\frac{\alpha(1+\epsilon)}{\alpha+\epsilon}{\bar{x}}^{\infty}-v,y\right\\}\bigg{)}^{+}\mbox{,
for each }i\in G_{2}.$ (9)
By virtue of the above results, the random returns given by equations (6) and
(7), converge almost surely:
$R^{1}_{i}\ \to\
\left(w\epsilon(1+r_{s})+\frac{(1-\alpha)(1-\epsilon)}{\alpha+\epsilon}{\bar{x}}^{\infty}-v\right)^{+},\mbox{
for each }i\in G_{1},$ (10)
$R^{2}_{i}\ \to\
\left(K_{i}+\frac{\alpha(1+\epsilon)}{\alpha+\epsilon}{\bar{x}}^{\infty}-v-y\right)^{+},\mbox{
for each }i\in G_{2}.$ (11)
Probability of Default: This is defined as the fraction of agents of $G_{2}$
that fail to pay back their full liability, i.e., $P_{d}:=P(X_{i}<y)$. For
large networks (when the initial network size $n_{0}$ itself is sufficiently
large), one can use the above approximate expressions; using them, we obtain
the default probability and the aggregate clearing value in the following
lemma (proof in the Appendix).
###### Lemma 1
The asymptotic average clearing value and the default probability of $G_{2}$
are given by:
$({\bar{x}}^{\infty},P_{d})=\begin{cases}\left(y,\ 0\right)&\text{if }\
c_{\epsilon}>\frac{y-\underline{w}}{y}\\ \left(\frac{\delta
y+(1-\delta)\underline{w}}{1-(1-\delta)c_{\epsilon}},\ 1-\delta\right)&\text{if
}\
\frac{y-\overline{w}}{y-(1-\delta)(\overline{w}-\underline{w})}<c_{\epsilon}<\frac{y-\underline{w}}{y}\\
\left(\frac{k_{d}(1-\delta)+k_{u}\delta}{1-c_{\epsilon}},\ 1\right)&\text{if }\
c_{\epsilon}<\frac{y-\overline{w}}{y-(1-\delta)(\overline{w}-\underline{w})}\end{cases}$
(12)
where $c_{\epsilon}=\frac{\alpha+\alpha\epsilon}{\alpha+\epsilon}$,
$E(W)=\delta k_{u}+(1-\delta)k_{d}$, $\underline{w}=k_{d}-v$ and
$\overline{w}=k_{u}-v$. $\blacksquare$
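Lemma 1 reduces the whole settlement analysis to a piecewise closed form,
which is straightforward to implement. The following minimal sketch
transcribes (12) as printed (the function name and argument bundling are our
own):

```python
def clearing_and_default(eps, w, alpha, r_b, v, u, d, delta):
    """Asymptotic clearing value x_bar and default probability P_d, eq. (12)."""
    y = w * (eps + alpha) * (1 + r_b) / (1 - alpha)
    k_u = w * (1 + eps) * (1 + u)
    k_d = w * (1 + eps) * (1 + d)
    w_lo, w_hi = k_d - v, k_u - v                 # underline-w and overline-w
    c = alpha * (1 + eps) / (alpha + eps)         # c_eps
    if c > (y - w_lo) / y:                        # no agent defaults
        return y, 0.0
    if c > (y - w_hi) / (y - (1 - delta) * (w_hi - w_lo)):
        # only the shocked agents default
        return (delta * y + (1 - delta) * w_lo) / (1 - (1 - delta) * c), 1 - delta
    # all agents default; third case transcribed exactly as printed in (12)
    return (k_d * (1 - delta) + k_u * delta) / (1 - c), 1.0
```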
Expected Surplus: By virtue of [1, Theorem 1] we have a significantly
simplified limit system, whose performance is derived in the above lemma. We
observe that this approximation is sufficiently close (the numerical
simulations illustrate good approximations), and assume the following as the
pay-offs of each group after each round of investments:
$\phi_{1}(\epsilon):=E(R^{1}_{i})=\bigg{(}w\epsilon(1+r_{s})+\frac{(1-\alpha)(1-\epsilon)}{\alpha+\epsilon}{\bar{x}}^{\infty}-v\bigg{)}^{+},\mbox{
for any agent of }G_{1},$
$\phi_{2}(\epsilon):=E(R^{2}_{i})=E\bigg{(}K_{i}+\frac{(1+\epsilon)\alpha}{\alpha+\epsilon}{\bar{x}}^{\infty}-v-y\bigg{)}^{+}$
(13)
$=\bigg{(}k_{u}+\frac{\alpha(1+\epsilon)}{\alpha+\epsilon}{\bar{x}}^{\infty}-v-y\bigg{)}^{+}\delta+\bigg{(}k_{d}+\frac{\alpha(1+\epsilon)}{\alpha+\epsilon}{\bar{x}}^{\infty}-v-y\bigg{)}^{+}(1-\delta),$
for any agent of $G_{2}$. Observe here that the aggregate limits are
almost-sure constants; hence the expected surpluses of all the agents of the
same group are equal, while the random returns within the same group are
i.i.d. (independent and identically distributed).
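A direct transcription of (13), reusing the `clearing_and_default` helper
sketched after Lemma 1, gives the per-round expected pay-offs as functions of
$\epsilon$ (again, the naming is ours, for illustration):

```python
def expected_surplus(eps, w, alpha, r_s, r_b, v, u, d, delta):
    """Expected per-round returns (phi_1, phi_2) of the two groups, eq. (13)."""
    y = w * (eps + alpha) * (1 + r_b) / (1 - alpha)
    k_u = w * (1 + eps) * (1 + u)
    k_d = w * (1 + eps) * (1 + d)
    x_bar, _ = clearing_and_default(eps, w, alpha, r_b, v, u, d, delta)
    c = alpha * (1 + eps) / (alpha + eps)
    phi1 = max(w * eps * (1 + r_s)
               + (1 - alpha) * (1 - eps) / (alpha + eps) * x_bar - v, 0.0)
    phi2 = (delta * max(k_u + c * x_bar - v - y, 0.0)
            + (1 - delta) * max(k_d + c * x_bar - v - y, 0.0))
    return phi1, phi2
```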
## 4 Analysis of Replicator dynamics
In every round of investments we have a new network, representing the
liability structure of all the agents of that round formed by their investment
choices, and in the previous two sections we computed the (asymptotically
approximate) expected returns/utilities of each agent of the network. As
already mentioned in Section 2, new agents join the network in each round and
choose their strategies depending upon their observations of these expected
returns of the previous round.
This kind of dynamics is well described in the literature under the name
replicator dynamics (e.g., [12, 7, 2]). The main purpose of such a study is to
derive an asymptotic analysis and answer some or all of the following
questions: will the dynamics converge, i.e., will the relative fractions of
the various populations settle as the number of rounds increases? will some of
the strategies disappear eventually? if more than one population type
survives, what are the asymptotic fractions? Such analyses are common for
other types of networks (e.g., wireless networks [12], biological networks
[2]), but are relatively less studied in the context of financial networks
(e.g., [7]). We are interested in the asymptotic outcome of this kind of
dynamics (if one exists) and in the influence of various network parameters on
the outcome. We begin with a precise description of the two types of dynamics
considered in this paper.
### 4.1 Average Dynamics
The new agent contacts two random (uniformly sampled) agents of the previous
round. If both the contacted agents belong to the same group, the new agent
adopts the strategy of that group. When it contacts agents from both groups,
it investigates further before making a choice; the new agent observes a
significant portion of the network, in that it obtains a good estimate of the
average utility of the agents belonging to both groups. It then adopts the
strategy of the group with the maximum (estimated) average utility.
Say it observes the average of each group with an error that is normally
distributed, with mean equal to the expected return of the group and variance
inversely proportional to the size of the group, i.e., it observes (here
${\cal N}(0,\sigma^{2})$ is a zero-mean Gaussian random variable with variance
$\sigma^{2}$)
${\hat{\phi}}_{i}(\epsilon)=\phi_{i}(\epsilon)+{\cal N}_{i}\mbox{ with }{\cal
N}_{1}\sim{\cal N}\left(0,\frac{1}{{\bar{c}}\epsilon}\right)\mbox{ and }{\cal
N}_{2}\sim{\cal N}\left(0,\frac{1}{{\bar{c}}(1-\epsilon)}\right),$
for some ${\bar{c}}$ large. Observe by this modeling that the expected values
of the observations are given by $(\phi_{1}(\epsilon),\ \phi_{2}(\epsilon))$
and are determined by the relative proportions of the two populations, while
the variance of any group reduces as its proportion increases to 1 and
increases as the proportion reduces to zero. We also assume that the
estimation errors $\\{{\cal N}_{1},{\cal N}_{2}\\}$ (conditioned on the
relative fraction, $\epsilon$) corresponding to the two groups are
independent. Then the probability that the new agent chooses strategy 1 is
given by
$Prob({\hat{\phi}}_{1}(\epsilon)-{\hat{\phi}}_{2}(\epsilon)>0)=Prob({\cal
N}_{2}-{\cal N}_{1}\leq\phi_{1}(\epsilon)-\phi_{2}(\epsilon)),$
which, by the (conditional) independence of the Gaussian random variables,
equals (using
$\frac{1}{\epsilon}+\frac{1}{1-\epsilon}=\frac{1}{\epsilon(1-\epsilon)}$):
$g(\epsilon):=\int_{-\infty}^{\left(\phi_{1}(\epsilon)-\phi_{2}(\epsilon)\right)\sqrt{{\bar{c}}\epsilon(1-\epsilon)}}e^{-x^{2}/2}\frac{dx}{\sqrt{2\pi}}.$
(14)
Let $(n_{1}(t),n_{2}(t))$ respectively represent the sizes of the $G_{1}$ and
$G_{2}$ populations after round $t$, and note that
$\epsilon_{t}=\frac{n_{1}(t)}{n_{1}(t)+n_{2}(t)}$. Then the system dynamics is
given by the following ($g(\cdot)$ is given by (14)):
$(n_{1}(t+1),\ n_{2}(t+1))=\begin{cases}(n_{1}(t)+1,\ n_{2}(t))&\text{w.p. }\
\epsilon_{t}^{2}+2\epsilon_{t}(1-\epsilon_{t})g(\epsilon_{t})\\ (n_{1}(t),\
n_{2}(t)+1)&\text{w.p. }\
(1-\epsilon_{t})^{2}+2\epsilon_{t}(1-\epsilon_{t})(1-g(\epsilon_{t})).\end{cases}$
(18)
It is clear that (with $\epsilon_{0}$ and $n_{0}$ representing the initial
quantities)
$\epsilon_{t+1}=\frac{n_{1}(t+1)}{t+n_{0}+1}=\frac{(t+n_{0})\epsilon_{t}+Y_{t+1}}{t+n_{0}+1}=\epsilon_{t}+\frac{1}{t+n_{0}+1}\left(Y_{t+1}-\epsilon_{t}\right),\
\mbox{ where}$
$Y_{t+1}=\begin{cases}1&\text{w.p. }\
\epsilon_{t}^{2}+2\epsilon_{t}(1-\epsilon_{t})g(\epsilon_{t})\\
0&\text{w.p. }\
(1-\epsilon_{t})^{2}+2\epsilon_{t}(1-\epsilon_{t})(1-g(\epsilon_{t})),\ \mbox{
for all }t\geq 1.\end{cases}$
One can rewrite the update equations as
$\epsilon_{t+1}=\epsilon_{t}+\frac{1}{t+n_{0}+1}\left(h(\epsilon_{t})+M_{t+1}\right),\
\mbox{ with }\ M_{t+1}:=Y_{t+1}-\epsilon_{t}-h(\epsilon_{t}),\ \mbox{ where}$
$h(\epsilon):=E\big{[}Y_{t+1}-\epsilon_{t}\,|\,\epsilon_{t}=\epsilon\big{]}=\epsilon(1-\epsilon)\left(2g(\epsilon)-1\right)\
\mbox{ for any }0\leq\epsilon\leq 1,$
and observe that (with ${\cal F}_{t}$ the natural filtration of the process
till $t$)
$E[M_{t+1}|{\cal F}_{t}]=E[M_{t+1}|\epsilon_{t}]=0\ \mbox{ and }\
E[M_{t+1}^{2}|{\cal F}_{t}]\leq C\ \mbox{ for some constant }C<\infty.$
Further observe that $0\leq\epsilon_{t}\leq 1$ for all $t$ and all sample
paths.
Thus our algorithm satisfies assumptions A.1 to A.4 of [11] (the assumptions
require that the process be defined on the entire real line; one can easily
achieve this by letting $h(\epsilon)=0$ for all $\epsilon\notin[0,1]$, which
still ensures the required Lipschitz continuity, and by extending $M_{t+1}=0$
for all $\epsilon_{t}\notin[0,1]$), and hence, using [11, Theorem 2], we have:
###### Theorem 1
The sequence $\{\epsilon_{t}\}$ generated by the average dynamics (18)
converges almost surely (a.s.) to a (possibly sample path dependent) compact
connected internally chain transitive invariant set of the ODE:
$\dot{\epsilon}_{t}=h(\epsilon_{t}).$ (19) $\blacksquare$
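The convergence claim can be checked numerically: the stochastic recursion
(18) and an Euler integration of the ODE (19) should approach the same
attractor. The sketch below reuses the `expected_surplus` helper from Section
3; the observation-noise scale $\bar{c}$, the seeds and the iteration counts
are illustrative assumptions:

```python
import math
import numpy as np

def g(eps, params, c_bar=1.0):
    """Probability (14) that the new agent adopts the risk-free strategy."""
    if eps <= 0.0 or eps >= 1.0:
        return 0.5          # value at the boundary is immaterial: h(0)=h(1)=0
    phi1, phi2 = expected_surplus(eps, **params)
    z = (phi1 - phi2) * math.sqrt(c_bar * eps * (1 - eps))
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # standard normal CDF

params = dict(w=100.0, alpha=0.1, r_s=0.17, r_b=0.19,
              v=40.0, u=0.2, d=-0.1, delta=0.95)       # a Table-2-like instance

# stochastic recursion (18)
rng = np.random.default_rng(1)
eps_t, n0 = 0.5, 2000
for t in range(100000):
    p1 = eps_t**2 + 2 * eps_t * (1 - eps_t) * g(eps_t, params)
    Y = 1.0 if rng.random() < p1 else 0.0
    eps_t += (Y - eps_t) / (t + n0 + 1)

# Euler integration of the limiting ODE (19)
eps_ode, dt = 0.5, 0.01
for _ in range(100000):
    eps_ode += dt * eps_ode * (1 - eps_ode) * (2 * g(eps_ode, params) - 1)

print(eps_t, eps_ode)       # both should be near the same attractor
```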
The dynamics start with an initial condition $\epsilon_{0}\in(0,1)$ and
clearly remain inside the interval $[0,1]$, i.e., $\epsilon_{t}\in[0,1]$ for
all $t$ (and almost surely). Thus we consider the invariant sets of the ODE
(19) within this interval for some interesting case studies in the following
(proof in the Appendix).
###### Corollary 1
Define ${\bar{r}}_{r}:=u\delta+d(1-\delta)$, assume $w(1+d)>v$, and observe
that $u>r_{b}\geq r_{s}>d$. Assume $\epsilon_{0}\in(0,1)$. Given the rest of
the parameters of the problem, there exists a ${\bar{\delta}}<1$ (depending
upon the instance of the problem) such that the following statements are valid
for all $\delta\geq{\bar{\delta}}$:
(a) If ${\bar{r}}_{r}>r_{b}>r_{s}$, then
$\phi_{2}(\epsilon)>\phi_{1}(\epsilon)$ for all $\epsilon$, and
$\epsilon_{t}\to 0$ almost surely.
(b) If $\phi_{1}(\epsilon)>\phi_{2}(\epsilon)$ for all $\epsilon$, then
$\epsilon_{t}\to 1$ almost surely.
(c) When $r_{b}>{\bar{r}}_{r}>r_{s}$ and case (b) is negated, there exists a
unique zero $\epsilon^{*}$ of the equation
$\phi_{1}(\epsilon)-\phi_{2}(\epsilon)=0$, and
$\epsilon_{t}\to\epsilon^{*}\mbox{ almost surely; further, for }\delta\approx
1,\ \ \epsilon^{*}\approx\frac{r_{b}-{\bar{r}}_{r}}{{\bar{r}}_{r}-r_{s}}.\ \
\blacksquare$
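In case (c), the mixed ESS can be located numerically by bisection on
$\phi_{1}(\epsilon)-\phi_{2}(\epsilon)$. A minimal sketch (assuming a single
sign change on $(0,1)$, as guaranteed by the corollary, and reusing
`expected_surplus`) follows, together with the $\delta\approx 1$
approximation; for the Table 2 configuration ($d=-0.1$, $\delta=0.95$), both
give $\epsilon^{*}\approx 0.33$:

```python
def mixed_ess(params, lo=1e-4, hi=1 - 1e-4, tol=1e-10):
    """Bisection for the unique zero of phi_1 - phi_2 on (0,1), Corollary 1(c).
    Returns None when there is no sign change (a pure-ESS case)."""
    def f(e):
        p1, p2 = expected_surplus(e, **params)
        return p1 - p2
    if f(lo) * f(hi) > 0:
        return None
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

r_bar = params['delta'] * params['u'] + (1 - params['delta']) * params['d']
print(mixed_ess(params))                                  # numerical eps*
print((params['r_b'] - r_bar) / (r_bar - params['r_s']))  # delta ~ 1 approximation
```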
From (13) and Lemma 1, it is easy to verify that all the limit points are
evolutionary stable strategies (ESS). Thus the replicator dynamics either
settles to a pure-strategy ESS or to a mixed ESS (in part (c) of the
corollary), depending upon the parameters of the network; after a large number
of rounds, either the fraction of agents following one of the strategies
converges to one or zero, or the system reaches a mixed ESS that balances the
expected returns of the two groups.
In many scenarios, the expected rate of return of the risky investments is
much higher than the rate of interest related to lending/borrowing, i.e.,
${\bar{r}}_{r}>r_{b}$. Further, the assumptions of the corollary are satisfied
in more or less all scenarios (due to standard no-arbitrage assumptions), and
because the shocks are usually rare (i.e., $\delta$ is close to 1). Hence, by
the above corollary, in the majority of scenarios the average dynamics
converges to a pure strategy with all ‘risky’ agents (i.e., $\epsilon_{t}\to
0$). The group $G_{1}$ gets wiped out and almost all agents invest in risky
ventures, as the expected rate of return is higher even in spite of large
economic shocks. One can observe the converse, or a mixed ESS, when the
magnitude of the shocks is large ($d$ too small) or when the shocks are
frequent enough to make ${\bar{r}}_{r}<r_{b}$.
### 4.2 Random Dynamics
When the new agent contacts two random agents of different groups, its choice
depends directly upon the returns of the two contacted agents; the rest of the
details remain the same as in the average dynamics. In other words, the new
agents observe less: their investment choice is solely based on the
(previous-round) returns of the two contacted agents. In this case the
dynamics are governed by the following (see (10)-(11)):
$(n_{1}(t+1),\ n_{2}(t+1))=\begin{cases}(n_{1}(t)+1,\ n_{2}(t))&\text{w.p. }\
\epsilon_{t}^{2}\\ (n_{1}(t),\ n_{2}(t)+1)&\text{w.p. }\
(1-\epsilon_{t})^{2}\\ (n_{1}(t)+G(\epsilon_{t}),\
n_{2}(t)+1-G(\epsilon_{t}))&\text{else, with}\end{cases}$
$G(\epsilon_{t})=1_{\{R^{1}\geq
R^{2}\}}=1_{\left\{\left(w\epsilon(1+r_{s})+\frac{(1-\alpha)(1-\epsilon)}{\alpha+\epsilon}{\bar{x}}^{\infty}-v\right)^{+}\geq\left(K_{i}+\frac{\alpha(1+\epsilon)}{\alpha+\epsilon}{\bar{x}}^{\infty}-v-y\right)^{+}\right\}},$
where ${\bar{x}}^{\infty}={\bar{x}}^{\infty}(\epsilon_{t})$ is given by Lemma
1. Here we assume that agents prefer the risk-free strategy under equality;
one can easily consider the other variants. Once again, this can be rewritten
as
$\epsilon_{t+1}=\epsilon_{t}+\frac{Z_{t+1}-\epsilon_{t}}{t+n_{0}+1}\ \mbox{
with }\ Z_{t+1}=\begin{cases}1&\text{w.p. }\ \epsilon_{t}^{2}\\ 0&\text{w.p.
}\ (1-\epsilon_{t})^{2}\\ G(\epsilon_{t})&\text{else}.\end{cases}$ (21)
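The random dynamics can be simulated in the same stochastic-approximation
form; the sketch below draws the shock of the contacted $G_{2}$ agent
explicitly and compares the realized returns (10)-(11), reusing
`clearing_and_default` and the `params` instance from the earlier sketches
(the seed and iteration count are assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
w, alpha, r_s, r_b = params['w'], params['alpha'], params['r_s'], params['r_b']
v, u, d, delta = params['v'], params['u'], params['d'], params['delta']

eps_t, n0 = 0.5, 2000
for t in range(100000):
    coin = rng.random()
    if coin < eps_t**2:                      # both contacts in G1
        Z = 1.0
    elif coin < eps_t**2 + (1 - eps_t)**2:   # both contacts in G2
        Z = 0.0
    else:                                    # one contact from each group
        x_bar, _ = clearing_and_default(eps_t, w, alpha, r_b, v, u, d, delta)
        y = w * (eps_t + alpha) * (1 + r_b) / (1 - alpha)
        K = w * (1 + eps_t) * (1 + (u if rng.random() < delta else d))
        c = alpha * (1 + eps_t) / (alpha + eps_t)
        R1 = max(w * eps_t * (1 + r_s)
                 + (1 - alpha) * (1 - eps_t) / (alpha + eps_t) * x_bar - v, 0.0)
        R2 = max(K + c * x_bar - v - y, 0.0)
        Z = 1.0 if R1 >= R2 else 0.0         # G(eps_t) of the recursion (21)
    eps_t += (Z - eps_t) / (t + n0 + 1)

print(eps_t)   # for this instance, Corollary 2(c) predicts eps_t -> 0
```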
As in the previous section, the above algorithm satisfies assumptions A.1 to
A.4 of [11] (in the current paper we consider scenarios in which
$h_{R}(\cdot)$ is Lipschitz continuous, basically under the conditions of
Corollary 2), and once again using [11, Theorem 2], we have:
###### Theorem 2
The sequence $\{\epsilon_{t}\}$ generated by the random dynamics (21)
converges almost surely (a.s.) to a (possibly sample path dependent) compact
connected internally chain transitive invariant set of the ODE:
$\dot{\epsilon}(t)=h_{R}(\epsilon(t)),\ \
h_{R}(\epsilon):=E_{\epsilon}\big{[}Z_{t+1}-\epsilon_{t}|\epsilon_{t}=\epsilon\big{]}=\epsilon(1-\epsilon)(2E[G(\epsilon)]-1).\
\ \blacksquare$ (22)
One can derive the analysis of this dynamics in a similar way as for the
average dynamics; however, there is an important difference between the two:
the random dynamics can never converge to an intermediate attractor like the
one in part (c) of Corollary 1 (the unique $\epsilon^{*}$ satisfying
$\phi_{1}=\phi_{2}$). This is because
$E_{\epsilon}[G]=P(R^{1}(\epsilon)\geq R^{2}(\epsilon))$ equals $0$,
$1-\delta$ or $1$, and never $1/2$ (unless $\delta=1/2$, which is not a
realistic case). Nevertheless, we consider the invariant sets (corresponding
to pure ESS) within $[0,1]$ for some cases (proof in the Appendix):
###### Corollary 2
Assume $\epsilon_{0}\in(0,1)$. Given the rest of the parameters of the
problem, there exists a $1/2<\delta<1$ (depending upon the instance of the
problem) such that the following statements are valid:
(a) If $E_{\epsilon}[G]=0$ for all $\epsilon$, or $1-\delta$ for all
$\epsilon$, then $\epsilon_{t}\to 0$ almost surely.
(b) If $E_{\epsilon}[G]=1$ for all $\epsilon$, then $\epsilon_{t}\to 1$ almost
surely.
(c) When $w(1+d)>v$ and $u>r_{b}\geq r_{s}>d$, there exists a
${\bar{\delta}}<1$ such that for all $\delta\geq{\bar{\delta}}$, the default
probability satisfies $P_{d}\leq(1-\delta)$ and $E[G]=1-\delta$, and this is
true for all $\epsilon$. Hence, by part (a), $\epsilon_{t}\to 0$ almost
surely. $\blacksquare$
Remarks: Thus, from part (c), under the conditions of Corollary 1 the random
dynamics always converges to all ‘risky’ agents (a pure ESS), while the
average dynamics, as given by Corollary 1, converges either to a pure or to a
mixed ESS, depending further on the system parameters (mainly the various
rates of return).
From this partial analysis (the corollaries are for large enough $\delta$), it
appears that one can never have a mixed ESS with the random dynamics, in sharp
contrast to the average dynamics: when agents observe sparsely, the network
eventually settles to one of the two strategies, while if they observe more
samples, a mixed ESS that balances the two returns can emerge. We observe
similar behaviour even for $\delta$ as small as $0.8$ in numerical simulations
(Table 4). We are keen to understand this aspect in more detail as part of
future work.
To summarize, we have a financial network that grows with new additions, in
which the new agents adopt one of the two available strategies based on the
returns of the agents they observed/interacted with. The asymptotic analysis
of [1] was instrumental in deriving these results. This is just an initial
study of the topic. One can think of other varieties of dynamics, some of
which could be a part of our future work: the existing agents may change their
strategies depending upon their returns and observations; the agents might
leave the network if they face reduced returns repeatedly; the network may
adjust itself without new additions; etc.
## 5 Numerical observations
We performed numerical simulations to validate our theory. We included Monte
Carlo (MC) simulation-based dynamics, in which the clearing vectors are
computed by directly solving the fixed point equations for any given sample
path of shocks. Our theoretical observations match the MC-based limits well.
In Tables 1, 2 and 3 we tabulate the limits of the average dynamics for
various scenarios, and the observations match the results of Corollary 1. The
configuration used for Table 1 is:
$n_{0}=2000,\epsilon_{0}=0.75,r_{s}=0.18,r_{b}=0.19,w=100,v=46,\alpha=0.1$,
while that used for Table 2 is:
$n_{0}=2000,\epsilon_{0}=0.5,r_{s}=0.17,r_{b}=0.19,w=100,v=40,\alpha=0.1$. For
both these tables the risky expected rate of return ${\bar{r}}_{r}$ is smaller
than $r_{b}$, and the dynamics converges either to the ‘all risk-free’ agents
configuration or to a mixed ESS. In Table 3, the risky expected rate of return
is ${\bar{r}}_{r}=0.1250$, which is greater than $r_{b}$ and $r_{s}$; thus the
dynamics converges to all risky agents, as indicated by Corollary 1.
$u$ | $d$ | $\delta$ | $\phi_{1}$ | $\phi_{2}$ | $\epsilon^{*}$
---|---|---|---|---|---
0.2 | -0.05 | 0.8 | 72 | 0 | 1
0.2 | -0.1 | 0.8 | 72 | 0 | 1
0.2 | -0.15 | 0.8 | 72 | 0 | 1
0.2 | -0.2 | 0.8 | 72 | 0 | 1
0.2 | -0.25 | 0.8 | 72 | 0 | 1
Table 1: When the shocks are too large along with larger taxes ($v=46$), the
average dynamics converges to a configuration with all ‘risk-free agents’!
$u$ | $d$ | $\delta$ | $\phi_{1}$ | $\phi_{2}$ | $\epsilon^{*}$
---|---|---|---|---|---
0.2 | -0.1 | 0.95 | 78.33 | 78.27 | 0.3326
0.2 | -0.11 | 0.95 | 78.24 | 78.31 | 0.3791
0.2 | -0.12 | 0.95 | 78.14 | 78.14 | 0.4288
0.2 | -0.13 | 0.95 | 78.04 | 78.04 | 0.4820
0.2 | -0.14 | 0.95 | 77.92 | 77.92 | 0.5385
Table 2: Average dynamics converges to a mixed ESS, at which both populations
survive with $\phi_{1}=\phi_{2}$.
$u$ | $d$ | $\delta$ | $\phi_{1}$ | $\phi_{2}$ | $\epsilon^{*}$
---|---|---|---|---|---
0.15 | -0.1 | 0.9 | 0 | 82.12 | 0
0.16 | -0.1 | 0.9 | 0 | 83.24 | 0
0.17 | -0.1 | 0.9 | 0 | 84.29 | 0
0.18 | -0.1 | 0.9 | 0 | 85.19 | 0
Table 3: Average Dynamics converges to all ‘risky-agents’; Configuration:
$n_{0}=2000,\epsilon_{0}=.5,r_{s}=0.10,r_{b}=0.12,w=100,v=30,\alpha=0.5$
Config ($d,\delta,v$) | Avg $\epsilon^{*}$ (Theory) | Rndm $\epsilon^{*}$ (Theory) | Avg $\epsilon^{*}$ (Monte Carlo) | Rndm $\epsilon^{*}$ (Monte Carlo)
---|---|---|---|---
0.10, 0.95, 40 | 0 | 0 | .0016 | 0.0011
-0.10, 0.95, 40 | 0.33 | 0 | .3214 | 0.0004
-0.15, 0.95, 40 | 0.6 | 0 | .5988 | 0.0014
0.10, 0.80, 46 | 1 | 0 | .9896 | 0.0065
Table 4: Average and random dynamics; comparison of MC results with theory.
Configuration: $n_{0}=2000,u=0.2,r_{s}=0.17,r_{b}=0.19,w=100,\alpha=0.1$
In Table 4 we consider the random dynamics as well as the average dynamics,
and in addition we provide the Monte Carlo based estimates. There is a good
match between the MC estimates and the theory. Further, we have the following
observations: a) the random dynamics always converges to a configuration with
all ‘risky’ agents, as given by Corollary 2; b) when ${\bar{r}}_{r}>r_{b}$,
the average dynamics also converges to $\epsilon^{*}=0$, as suggested by
Corollary 1; and c) when ${\bar{r}}_{r}<r_{b}$, the average dynamics converges
to a mixed ESS or to a configuration with all ‘risk-free’ agents, again as
given by Corollary 1.
As the ‘risk’ increases, i.e., as the amount of taxes increases and/or as the
expected rate of return ${\bar{r}}_{r}$ of the risky investments decreases,
one can observe that the average dynamics converges to all ‘risk-free’ agents
(last row of Table 4), thus averting the systemic risk event (a large number
of defaults, i.e., $P_{d}$ large), while the random dynamics fails to do the
same. As predicted by theory (configurations satisfying part (b) of Corollary
2), the random dynamics might also succeed in averting the systemic risk event
when the default probability is one for all $\epsilon>0$; it is trivial to
verify that a configuration with $w(1+u)<v$ is one such example. Thus, the
average dynamics is much more robust towards averting systemic risk events.
## 6 Conclusions
We consider a financial network with a large number of agents. The agents are
interconnected via liability graphs. There are two types of agents: one group
lends to others and invests the rest in risk-free projects, while the second
group borrows/lends and invests the rest in risky ventures. Our study focuses
on analysing the emergence of these groups when the new agents choose their
strategies for the next investment round based on the returns of the previous
round. We consider two types of dynamics; in the average dynamics the new
agents observe a large sample of data before deciding their strategy, while in
the random dynamics the decision is based on a small random sample.
We have the following important observations: a) when the expected rate of
return of the risky investments is higher than the risk-free rate (either when
the shocks are rare or when they are not too large), the ‘risk-free’ group is
eventually wiped out and almost all agents opt for risky ventures; this is
true for both types of dynamics; b) when the expected rate of return of the
risky investments is smaller, a mixed ESS can emerge under the average
dynamics, while the random dynamics still converges to all risky agents; at
the mixed ESS the expected returns of the two groups are equal; more
interestingly, when the risky expected rate is too small, the average dynamics
converges to a configuration with all risk-free agents.
In other words, in scenarios with the possibility of a systemic risk event,
i.e., when there is a possibility of a complete system collapse (all agents
default), the average dynamics manages to wipe out the risky agents
completely, whereas the random dynamics can fail to do the same. Thus, when
agents make their choices rationally, after observing a sufficient sample of
the returns of the previous round of investments, there is a possibility of
avoiding systemic risk events. These are some initial results, and we would
like to investigate further in the future in order to make more affirmative
statements in this direction.
## References
* [1] Veeraruna Kavitha, Indrajit Saha, and Sandeep Juneja. ”Random Fixed Points, Limits and Systemic risk.” In 2018 IEEE Conference on Decision and Control (CDC), pp. 5813-5819. IEEE, 2018.
* [2] Miekisz, Jacek. ”Evolutionary game theory and population dynamics.” Multiscale Problems in the Life Sciences. Springer, Berlin, Heidelberg, 2008. 269-316.
* [3] Benveniste, Albert, Michel Métivier, and Pierre Priouret. Adaptive algorithms and stochastic approximations. Vol. 22. Springer Science & Business Media, 2012.
* [4] Larry Eisenberg and Thomas H Noe. Systemic risk in financial systems. Management Science, 2001.
* [5] Daron Acemoglu, Asuman Ozdaglar, and Alireza Tahbaz-Salehi. Systemic risk and stability in financial networks. The american economic review, 2015.
* [6] Franklin Allen and Douglas Gale. Financial contagion. Journal of political economy, 2000.
* [7] Li, Honggang, Chensheng Wu, and Mingyu Yuan. ”An evolutionary game model of financial markets with heterogeneous players.” Procedia Computer Science 17 (2013): 958-964.
* [8] Yang, Ke, Kun Yue, Hong Wu, Jin Li, and Weiyi Liu. ”Evolutionary Analysis and Computing of the Financial Safety Net.” In International Workshop on Multi-disciplinary Trends in Artificial Intelligence, pp. 255-267. Springer, Cham, 2016.
* [9] Brock, William A., and Cars H. Hommes. ”Heterogeneous beliefs and routes to chaos in a simple asset pricing model.” Journal of Economic dynamics and Control 22, no. 8-9 (1998): 1235-1274.
* [10] Friedman, Daniel. ”Towards evolutionary game models of financial markets.” (2001): 177-185.
* [11] Borkar, Vivek S. Stochastic approximation: a dynamical systems viewpoint. Vol. 48. Springer, 2009.
* [12] Tembine, Hamidou, Eitan Altman, Rachid El-Azouzi, and Yezekael Hayel. ”Evolutionary games in wireless networks.” IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics) 40, no. 3 (2009): 634-646.
* [13] Smith, J. Maynard, and George R. Price. ”The logic of animal conflict.” Nature 246, no. 5427 (1973): 15-18.
## Appendix
Proof of Lemma 1: We consider the following cases.
Case 1: First consider the case in which the downward shock can be absorbed;
in this case the clearing vector is
${\bar{x}}^{\infty}=y\delta+y(1-\delta)=y$ and the default probability is
$P_{d}=0$. This regime holds if the following condition is met, i.e., if
$K_{d}-v+yc_{\epsilon}>y\implies c_{\epsilon}>\frac{y-\underline{w}}{y}.$
Case 2: Consider the case in which the banks that receive the shock default;
the corresponding average clearing vector satisfies
${\bar{x}}^{\infty}=y\delta+(\underline{w}+c_{\epsilon}{\bar{x}}^{\infty})(1-\delta),$
which simplifies to
${\bar{x}}^{\infty}=\frac{y\delta+\underline{w}(1-\delta)}{1-c_{\epsilon}(1-\delta)}.$
This regime holds if the following conditions are true:
$K_{d}-v+c_{\epsilon}{\bar{x}}^{\infty}<y\quad\mbox{and}\quad K_{u}-v+c_{\epsilon}{\bar{x}}^{\infty}>y.$
Substituting
${\bar{x}}^{\infty}=\frac{y\delta+\underline{w}(1-\delta)}{1-c_{\epsilon}(1-\delta)}$,
we have
$\frac{y-\overline{w}}{y-(1-\delta)(\overline{w}-\underline{w})}<c_{\epsilon}<\frac{y-\underline{w}}{y}.$
Case 3: In this case we first calculate ${\bar{x}}^{\infty}$, which is
obtained by solving the following fixed-point equation:
${\bar{x}}^{\infty}=(K_{d}-v+c_{\epsilon}{\bar{x}}^{\infty})(1-\delta)+(K_{u}-v+c_{\epsilon}{\bar{x}}^{\infty})\delta\implies{\bar{x}}^{\infty}=\frac{EW}{1-c_{\epsilon}}.$
In this case the default probability is $P_{d}=1$. This regime holds if
$K_{u}-v+c_{\epsilon}\frac{EW}{1-c_{\epsilon}}<y\implies c_{\epsilon}<\frac{y-\overline{w}}{y-(1-\delta)(\overline{w}-\underline{w})}.\qquad\blacksquare$
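As a quick numerical sanity check (ours, not part of the original analysis), the average clearing vector can also be obtained by iterating the one-dimensional map whose fixed points are derived above. The sketch below uses illustrative parameter values chosen to land in the Case 2 regime; all names and values are assumptions for illustration only.

```python
# Minimal sketch of the average clearing-vector fixed point (illustrative
# parameters, not the configurations used in the paper's experiments).
y, v, delta, c_eps = 110.0, 30.0, 0.9, 0.4
K_u, K_d = 150.0, 90.0           # wealth after an up/down shock
w_lo = K_d - v                   # \underline{w}

def step(x):
    """One iteration of the map: each bank pays min(wealth, liability y),
    averaged over non-shocked (w.p. delta) and shocked (w.p. 1-delta) banks."""
    up = min(K_u - v + c_eps * x, y)
    dn = min(K_d - v + c_eps * x, y)
    return delta * up + (1 - delta) * dn

x = y
for _ in range(200):             # the map is a contraction, so this converges
    x = step(x)

# Case 2 closed form (only the shocked banks default):
x_case2 = (y * delta + w_lo * (1 - delta)) / (1 - c_eps * (1 - delta))
print(x, x_case2)                # both equal 109.375 for these parameters
```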
Proof of Corollary 1: First consider the system with $\delta=1$, i.e., the
system without shocks. From Lemma 1, $P_{D}\leq(1-\delta)$ for all $\epsilon$
because (with $\delta=1$)
$y\left(c_{\epsilon}-\frac{y-{\bar{w}}}{y}\right)=w(1+\epsilon)(1+u)-v-w\epsilon(1+r_{b})=w(1+u)-v+w\epsilon(u-r_{b})\geq w(1+u)-v$
for all $\epsilon$ (the lower bound is independent of $\epsilon$). Under these
assumptions, by continuity of the involved functions there exists
${\bar{\delta}}<1$ such that
$y\left(c_{\epsilon}-\frac{y-{\bar{w}}}{y-(1-\delta)({\bar{w}}-{\underline{w}})}\right)>0\mbox{ for all }\delta\geq{\bar{\delta}}\mbox{ and for all }\epsilon.$
Thus, from Lemma 1, ${\bar{x}}^{\infty}=y$ or ${\bar{x}}^{\infty}=\frac{\delta
y+(1-\delta)\underline{w}}{1-(1-\delta)c_{\epsilon}}$ for all such
$\delta\geq{\bar{\delta}}$. We will repeat a similar trick again below, so
assume initially that ${\bar{x}}^{\infty}=y$ for all $\epsilon$ and consider
$\delta\geq{\bar{\delta}}$. With this assumption we have:
$\displaystyle R^{1}(\epsilon)=\left(w\epsilon(1+r_{s})+\frac{(1-\alpha)(1-\epsilon)}{\alpha+\epsilon}y-v\right)^{+}=\left(w\epsilon(1+r_{s})+w(1-\epsilon)(1+r_{b})-v\right)^{+}=w(1+r_{b})-v+w\epsilon(r_{s}-r_{b})$, under the given hypothesis, and
$\displaystyle R^{2}(\epsilon)=\left(K_{i}+\frac{\alpha(1+\epsilon)}{\alpha+\epsilon}y-v-y\right)^{+}=\left(K_{i}-w\epsilon(1+r_{b})-v\right)^{+}$ (24)
$\displaystyle\phantom{R^{2}(\epsilon)}=\left\\{\begin{array}[]{ll}R^{2}_{u}&\mbox{w.p. }\delta,\mbox{ where }R^{2}_{u}:=w(1+u)-v+w\epsilon(u-r_{b}),\\\ \left(R^{2}_{d}\right)^{+}&\mbox{w.p. }1-\delta,\mbox{ where }R^{2}_{d}:=w(1+d)-v+w\epsilon(d-r_{b}).\end{array}\right.$ (27)
Note that $R^{2}_{u}\geq w(1+u)-v>0$ (for any $\epsilon$) under the given
hypothesis.
Proof of part (a): When ${\bar{r}}_{r}>r_{b}$, from (13) it is clear that
(with inequality only when $R^{2}_{d}$ is negative)
$\displaystyle\phi_{2}(\epsilon)-\phi_{1}(\epsilon)\geq R^{2}_{u}\delta+R^{2}_{d}(1-\delta)-\phi_{1}(\epsilon)=w({\bar{r}}_{r}-r_{b})+w\epsilon({\bar{r}}_{r}-r_{s})>0.$
Thus in this case $\phi_{2}>\phi_{1}$ for all $\epsilon$, and hence
$g(\epsilon)<1/2\mbox{ and }2g(\epsilon)-1<0\mbox{ for all $0<\epsilon<1$.}$
Therefore, with the Lyapunov function $V_{0}(\epsilon)=\epsilon/(1-\epsilon)$
defined on the neighbourhood $[0,1)$ of $0$ (in the relative topology on
$[0,1]$), we observe that
$\frac{dV_{0}}{d\epsilon}h(\epsilon)=\frac{\epsilon}{1-\epsilon}(2g(\epsilon)-1)<0\mbox{ for all }0<\epsilon<1\mbox{ and equals }0\mbox{ for }\epsilon=0.$
Further, $V_{0}(\epsilon)\to\infty$ as $\epsilon\to 1$, the boundary point of
$[0,1)$. Thus $\epsilon^{*}=0$ is the asymptotically stable attractor of the
ODE (19) (see [11, Appendix, p. 148]) and hence the result follows by Theorem 2.
For all $\delta\geq{\bar{\delta}}$, from Lemma 1, we have the following:
$\displaystyle\sup_{\epsilon}|y-{\bar{x}}^{\infty}|=\sup_{\epsilon}(1-\delta)\bigg{|}\frac{y(1-c_{\epsilon})-\underline{w}}{1-(1-\delta)c_{\epsilon}}\bigg{|}<\frac{1-\delta}{\delta}\eta$
(28)
for some $\eta>0$; the bound decreases to 0 as $\delta\to 1$. (The last
inequality holds because $c_{\epsilon}<1$, after taking the supremum over
$\epsilon$.) By continuity of the above upper bound with respect to $\delta$,
and of the subsequent functions considered in the above parts of the proof,
there exists a ${\bar{\delta}}<1$ (enlarged if required) such that all the
above arguments hold for all $\delta>{\bar{\delta}}$.
Proof of part (b): The proof follows in a similar way, now using the Lyapunov
function $V_{1}(\epsilon)=(1-\epsilon)/\epsilon$ on the neighbourhood $(0,1]$
of $1$, and by observing that $g(\epsilon)>1/2$ for all $\epsilon<1$, and hence
$\frac{dV_{1}}{d\epsilon}h(\epsilon)=-\frac{1-\epsilon}{\epsilon}(2g(\epsilon)-1)<0\mbox{ for all }0<\epsilon<1\mbox{ and equals }0\mbox{ for }\epsilon=1.$
Proof of part (c): It is clear that $\phi_{1}(\epsilon)=R^{1}(\epsilon)$
decreases linearly as $\epsilon$ increases:
$\phi_{1}(\epsilon)=w(1+r_{b})-v+w\epsilon(r_{s}-r_{b}).$
For $\epsilon$ in the neighbourhood of $0$, $\phi_{2}(\epsilon)>0$ and is
decreasing linearly with slope ${\bar{r}}_{r}-r_{b}$, because
$R^{2}_{d}(0)=w(1+d)-v>0$ and thus for such $\epsilon$
$\phi_{2}(\epsilon)=w(1+{\bar{r}}_{r})-v+w\epsilon({\bar{r}}_{r}-r_{b}).$
From (24), $R^{2}_{d}({\epsilon})$ decreases as $\epsilon$ increases. There is
a possibility of an ${\bar{\epsilon}}$ that satisfies
$R^{2}_{d}({\bar{\epsilon}})=0$, in which case $\phi_{2}$ increases linearly
with slope $\delta w(u-r_{b})$, i.e.,
$\phi_{2}(\epsilon)=\delta\left[w(1+u)-v+w\epsilon(u-r_{b})\right]\mbox{ for all }\epsilon\geq{\bar{\epsilon}}.$
When ${\bar{r}}_{r}<r_{b}$ we have
$\phi_{1}(0)=w(1+r_{b})-v>w(1+{\bar{r}}_{r})-v=\phi_{2}(0).$
By hypothesis $\phi_{1}(\epsilon)<\phi_{2}(\epsilon)$ for some $\epsilon$;
hence, by the intermediate value theorem, there exists at least one
$\epsilon^{*}$ that satisfies $\phi_{1}(\epsilon^{*})=\phi_{2}(\epsilon^{*}).$
Further, the zero is unique because $\phi_{2}$ is either linear or piecewise
linear (with different slopes), while $\phi_{1}$ is linear.
Consider the Lyapunov function
$V_{*}(\epsilon):=(\epsilon-\epsilon^{*})^{2}/(\epsilon(1-\epsilon))$ on the
neighbourhood $(0,1)$ of $\epsilon^{*}$; note that $V_{*}(\epsilon)\to\infty$
as $\epsilon\to 0$ or $\epsilon\to 1$, and observe that by the (piecewise)
linearity of the functions we have
$\displaystyle\phi_{1}(\epsilon)>\phi_{2}(\epsilon)\mbox{ and thus }(2g(\epsilon)-1)>0\mbox{ for all }0<\epsilon<\epsilon^{*},\mbox{ and}$
$\displaystyle\phi_{2}(\epsilon)>\phi_{1}(\epsilon)\mbox{ and thus }(2g(\epsilon)-1)<0\mbox{ for all }\epsilon^{*}<\epsilon<1.$
Thus we have
$\frac{dV_{*}}{d\epsilon}=2\frac{\epsilon-\epsilon^{*}}{\epsilon(1-\epsilon)}+\frac{(\epsilon-\epsilon^{*})^{2}(2\epsilon-1)}{\epsilon^{2}(1-\epsilon)^{2}},\mbox{ and hence }$
$\frac{dV_{*}}{d\epsilon}h(\epsilon)=(\epsilon-\epsilon^{*})\left(2+\frac{(\epsilon-\epsilon^{*})(2\epsilon-1)}{\epsilon(1-\epsilon)}\right)(2g(\epsilon)-1)<0\mbox{ for all }\epsilon\notin\\{0,1,\epsilon^{*}\\}.$
(The middle factor is positive: if $\epsilon<\epsilon^{*}$ and
$\epsilon\leq 1/2$, then $(\epsilon-\epsilon^{*})(2\epsilon-1)\geq 0$; if
$\epsilon<\epsilon^{*}$ and $\epsilon>1/2$, then $(2\epsilon-1)/\epsilon<1$
and $(\epsilon^{*}-\epsilon)/(1-\epsilon)<1$, so that
$2+\frac{(\epsilon-\epsilon^{*})(2\epsilon-1)}{\epsilon(1-\epsilon)}>1>0$;
the case $\epsilon>\epsilon^{*}$ is handled in the same way.)
Thus $\epsilon^{*}$ is the asymptotically stable attractor of the ODE (19) and
hence the result follows by Theorem 2. The result can be extended to
$\delta<1$ as in case (a), and the rest of the details follow by direct
verification (at $\delta=1$), i.e., by showing that
$\phi_{1}(\epsilon^{*})=\phi_{2}(\epsilon^{*})$ at $\delta=1$ and that the
equality is satisfied approximately in the neighbourhood of $\delta=1$.
$\blacksquare$
Proof of Corollary 2: For part (a),
$h_{R}(\epsilon)=-c_{G}\epsilon(1-\epsilon)$, where the constant is $c_{G}=1$
(respectively $c_{G}=2\delta-1$). Using the Lyapunov function of part (a) of
Corollary 1, the proof follows along exactly the same lines.
For part (b), $h_{R}(\epsilon)=\epsilon(1-\epsilon)$, and the proof follows as
in part (b) of Corollary 1. For part (c), first observe (using the displayed
equations around (24) in the proof of Corollary 1) that
$\displaystyle R^{2}_{u}(\epsilon)-R^{1}(\epsilon)\geq w(1+u)+w\epsilon(u-r_{s})-y+{\bar{x}}^{\infty}\bigg{(}\frac{2\alpha+\epsilon-1}{\alpha+\epsilon}\bigg{)}=w(1+u)+w\epsilon(u-r_{s})+({\bar{x}}^{\infty}-y)-{\bar{x}}^{\infty}\bigg{(}\frac{1-\alpha}{\alpha+\epsilon}\bigg{)}=w(u-r_{b})+w\epsilon(u-r_{s})+({\bar{x}}^{\infty}-y)\bigg{(}1-\frac{1-\alpha}{\alpha+\epsilon}\bigg{)}>0.$
The last inequality is trivially true for $\delta=1$ (and so
${\bar{x}}^{\infty}=y$) under the given hypothesis; then, by continuity as in
the proof of Corollary 1, one can consider $\bar{\delta}<1$ such that for all
$\delta\geq\bar{\delta}$ the term
$({\bar{x}}^{\infty}-y)\big{(}1-\frac{1-\alpha}{\alpha+\epsilon}\big{)}$
can be made arbitrarily small (uniformly over $\epsilon$). When $P_{d}=0$,
i.e., ${\bar{x}}^{\infty}=y$ for some $\epsilon$, then
$R^{2}_{d}(\epsilon)-R^{1}(\epsilon)=w(d-r_{b})+w\epsilon(d-r_{s})<0$ for all
such $\epsilon$. When $P_{d}\neq 0$, then $R^{2}_{d}=0\leq R^{1}$. Thus in
either case $R^{2}_{d}(\epsilon)\leq R^{1}(\epsilon)$ for all $\epsilon$.
By virtue of the above arguments we have $P_{D}\leq(1-\delta)$ and
$E[G]=1-\delta$, and this holds for all $\epsilon$ and for all
$\delta\geq{\bar{\delta}}$. The rest of the proof follows from part (a).
$\blacksquare$
|
2024-09-04T02:54:56.377861 | 2020-02-28T10:02:58 | 2003.00899 | {
"authors": "Kate Wilkinson, George Cevora",
"full_text_license": null,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "arxiv-papers-0000.json.gz:25988",
"submitter": "George Cevora",
"url": "https://arxiv.org/abs/2003.00899"
} | arxiv-papers | # Demonstrating Rosa: the fairness solution for any Data Analytic pipeline
Kate Wilkinson
illumr Ltd., London, United Kingdom George Čevora
illumr Ltd., London, United Kingdom
<EMAIL_ADDRESS>
###### Abstract
Most datasets of interest to the analytics industry are impacted by various
forms of human bias. The outcomes of Data Analytics [DA] or Machine Learning
[ML] on such data are therefore prone to replicating the bias. As a result, a
large number of biased decision-making systems based on DA/ML have recently
attracted attention.
In this paper we introduce Rosa, a free, web-based tool to easily de-bias
datasets with respect to a chosen characteristic. Rosa is based on the
principles of Fair Adversarial Networks, developed by illumr Ltd., and can
therefore remove interactive, non-linear, and non-binary bias. Rosa is a
stand-alone pre-processing step/API, meaning it can easily be used with any
DA/ML pipeline.
We test the efficacy of Rosa in removing bias from data-driven decision making
systems by performing standard DA tasks on five real-world datasets, selected
for their relevance to current DA problems, and also their high potential for
bias. We use simple ML models to model a characteristic of analytical
interest, and compare the level of bias in the model output both with and
without Rosa as a pre-processing step.
We find that in all cases there is a substantial decrease in bias of the data-
driven decision making systems when the data is pre-processed with Rosa.
A topic that has recently received much attention both in the media and in
academia is the bias in decisions made based on the output of Machine Learning
[ML] algorithms [22][16][10]. This bias is especially problematic when the
result is unfair discrimination between groups on the basis of a protected
characteristic. In UK law, the protected characteristics are defined as age,
disability, gender reassignment, marriage and civil partnership, pregnancy and
maternity, race, religion or belief, sex, and sexual orientation. The Equality
Act 2010 [3] made it illegal to discriminate based on any of these
characteristics.
Discrimination in the output of ML algorithms most often stems from biased
human judgements made in the past. These biased judgements can make up the
dataset which an algorithm is trained on, or they can shape the composition of
the dataset so that it does not represent the true population. If an ML
algorithm is trained to predict biased human judgements, then the bias will be
replicated in decisions made based on the output of the algorithm.
One widely publicised example occurred in 2015, when Amazon revealed that its
experimental recruitment algorithm was heavily biased against women [9]. The
dataset used to train the algorithm comprised successful resumes submitted to
the company over the last 10 years. The technology industry is historically
and currently male-dominated, and therefore the proportion of female resumes
in the dataset was far less than the true proportion of female applicants that
would be suitable for the job today. It is understandable that an algorithm
trained to assess the suitability of applicants would find a link between
successful applicants and gender. If decisions are made based on the
assessments of this algorithm, then an unfair proportion of female applicants
will be hired, propagating the bias further.
Almost every dataset has some degree of bias, and not using biased data at all
would mean making decisions without any data, which is arguably worse. This calls for
methods of debiasing data with respect to a given characteristic, so that we
can still use potentially biased datasets without fear of creating algorithms
that discriminate unfairly.
Unfortunately, removing bias from a dataset or algorithm is not
straightforward. Simply removing the column that contains information about
the protected characteristic does not necessarily remove information about
that characteristic from the dataset. Datasets often contain variables which
are proxies for other variables. For example, postcode may be a proxy for
race, or education may be a proxy for gender. As discussed in Čevora 2019 [6],
the most common methods of bias removal do not work particularly well, are
based on subjective notions of fairness and/or are very difficult to
implement.
Fair Adversarial Networks [FANs] are an alternative technique for removing
multivariate, non-binary and/or non-linear bias from a dataset, whilst being
trivial to implement [6]. The FAN methodology re-encodes a given biased dataset
so that it can subsequently be used with a range of analytical tools, free of bias.
FANs therefore directly tackle the problem of biased datasets in ML.
In December 2019, illumr Ltd. released a tool called Rosa, which uses the
principles of FANs to debias novel datasets using a web-based interface. A
free, demo version of Rosa is available online (at illumr’s website), where
any user can upload a small dataset and debias it with respect to a chosen
characteristic. In particular, Rosa aims to be accessible to data analysts who
may not have much experience with data preparation or advanced Machine
Learning techniques.
In this paper, we demonstrate the effect of Rosa on five real-world datasets.
The datasets are selected for their high potential to be used in real-world
decision making, and also for their high potential for inherent bias.
To test the effectiveness of Rosa, we perform a simple data analytical task
using each dataset, and examine the degree of bias in the output. We then
repeat the task using datasets which have been processed by Rosa, and
investigate whether the bias has decreased. We find significant or total
reductions in bias on all datasets after processing by Rosa. The models chosen
for the analytical tasks are basic, out-of-the-box models, in order to imitate
the scenario in which an inexperienced analyst may wish to use Rosa.
## 1 Data Analytic Examples
### 1.1 Criminal Recidivism
Criminal sanctions in most countries do not depend solely on the criminal act
that has been committed, but also on personal circumstances (e.g. having
children) and the level of threat the individual poses to society. Of
particular interest to us is the estimation of the likelihood of recidivism of
a criminal defendant, as it is becoming increasingly common to delegate this
task to automated decision-making systems. These systems can have a huge
impact on people’s lives, as individuals deemed unlikely to recidivate may be
released on bail or assigned lower sentences than those deemed otherwise.
Correctional Offender Management Profiling for Alternative Sanctions [COMPAS]
is a technology used by the Department of Justice of the United States [DoJ]
to assess the likelihood of recidivism of criminal defendants and guide bail
decisions. Larson and colleagues [12] found significant racial bias in these
assessments, with African-Americans being assigned significantly higher
recidivism risk scores, independent of whether they recidivated or not. This
means COMPAS is much more likely to wrongly label African-Americans as
potential recidivists than White Americans. Conversely White Americans are
much more likely to be wrongly labelled as non-recidivists.
The DoJ unfortunately defends this pattern of discrimination as being fair
[11]. Their argument relies on the fact that more African-Americans recidivate
according to DoJ statistics. Shockingly, Flores and colleagues, writing in
defense of the DoJ, fail to acknowledge the significant evidence for racial
bias within the criminal justice system itself, with Black populations
suffering from higher rates of stop, search, and arrest, despite research
suggesting that their crime rate is relatively similar to that of other
populations [17, 7, 5, 8, 18, 19]. As a result, crime datasets often imply a
much higher rate of criminality for Black populations than is likely to be the case.
#### 1.1.1 Data and Methods
In this section we use a dataset compiled by ProPublica that includes the
criminal profiles of 11,757 defendants from Broward County, Florida, along
with an indication of whether they recidivated during a two-year follow-up
study. The dataset is freely available to download at Github. Before starting
the analysis we discarded non-categorical and non-numerical data, as well as
data relating to the COMPAS risk-assessment scores allocated to the offenders.
We kept data from only two race categories, _Caucasian_ and _African-American_.
To replicate a recidivism model such as COMPAS, we used a basic logistic
regression model from Python’s sklearn package to predict the likelihood of
recidivism (excluding the race variable from the data), and looked for bias in
the predictions of our model. We then processed the data with Rosa and
repeated exactly the same modelling, checking whether the bias had been reduced.
#### 1.1.2 Results
As shown in Figures 1 and 2, the distribution of model estimates is clearly
discriminatory against African-Americans when Rosa is not used. This bias
essentially disappeared when the data was pre-processed using Rosa.
Figure 1: Without data pre-processing using Rosa (left-hand plots), African-Americans are labelled as having a higher probability of recidivism than their Caucasian counterparts by our model, independent of whether they recidivated or not. Using Rosa as a pre-processing step (right-hand plots), this difference has been almost entirely removed. Note that without considering the estimates for recidivists and non-recidivists separately, any difference in the score distribution for African-Americans and Caucasians could be explained by a different rate of recidivism between the racial categories, as opposed to an unfair pattern of misclassification.

 | Original data | Rosa pre-processed
---|---|---
 | non-recid | recid | non-recid | recid
$\mu$ African-American | 0.37 | 0.46 | 0.35 | 0.37
$\mu$ Caucasian | 0.27 | 0.33 | 0.35 | 0.36
$\mu$ Diff | 0.10 | 0.13 | 0.00 | 0.01
$\sigma$ African-American | 0.13 | 0.14 | 0.06 | 0.06
$\sigma$ Caucasian | 0.11 | 0.13 | 0.05 | 0.06
$\sigma$ Average | 0.12 | 0.14 | 0.06 | 0.06
$\mu$ Diff / $\sigma$ Average | 0.85 | 0.92 | 0.00 | 0.17
Figure 2: Caucasian individuals receive significantly lower recidivism
likelihood estimates than their African-American counterparts from our
recidivism model when Rosa is not used, irrespective of whether they later
recidivate or not. The amount of bias in the model has been quantified by
dividing the difference in mean estimate for African-Americans and Caucasians
by the average standard deviation of the two distributions. The higher the
value, the greater the bias. Using Rosa has significantly decreased the level
of bias in the estimates both for recidivists and non-recidivists.
The accuracy of the regression model prior to debiasing by Rosa was 67%, and
post-debiasing it was 63%. This drop, while significant, is likely due to the
bias inherent in the dataset, which was also used to evaluate accuracy.
Unfortunately, the real-world nature of the examples presented in this paper
does not allow an unbiased evaluation.
#### 1.1.3 Discussion
We have performed a simple DA task to model the chance of recidivism of a
criminal defendant. Similar systems are used across the USA to determine a
defendant’s suitability for alternative sanctions and can therefore have a
significant impact on the individual’s life. Without correcting for bias our
model discriminated against African-Americans, who were always considered to
be of higher recidivism risk, whether they recidivated or not. These results are in
line with an analysis of COMPAS - a system developed by Equivant and used by
DoJ that has been demonstrated to perform similar discrimination [12].
Replicating the exact same DA pipeline that resulted in a discriminatory
model, but applying Rosa as a data pre-processing step, resulted in a model that
did not discriminate.
It is hardly surprising that the initial model was biased against African-
Americans. The dataset was likely affected by the issues discussed in Section
1.1, such as a heightened rate of stop and search for African-Americans
compared to Caucasians, leading to an over-representation of African-Americans
(and black people in general) in criminal datasets relative to their rate of
criminality.
For example, arrests for marijuana possession are 3.7 times higher for the
black population in the US than the white population, despite similar reported
levels of usage [5]. For the COMPAS dataset, ‘Marijuana’ was one of the most
frequently occurring words in the text describing the charge for each
defendant. Dealing with such systematic bias in criminal datasets is essential
as data-driven decision making becomes more widespread. Here we have
demonstrated that Rosa is a suitable tool for such a task.
### 1.2 Absenteeism at work
DA has an ever-increasing role in Human Resource Management [HR], with
employers aiming to hire and retain only the highest-performing employees.
Absenteeism is an important aspect of HR for many organisations, and its
estimation is therefore an interesting feature that can be used to screen
applicants during the hiring process. However, while employers may wish to
take likely absenteeism into account, absenteeism is known to be related to
age [14], which is a protected characteristic. Achieving lower absenteeism in
a workforce by age-skewed hiring would therefore be illegal.
#### 1.2.1 Data and Methods
We have performed a simple analysis to model absenteeism in the _Absenteeism
at work_ dataset available from the UCI Machine Learning Repository. This
dataset holds personal information on a set of employees from a courier
company in Brazil, along with their total absenteeism time over three years.
Before commencing with the modelling exercise we removed data on the
type/season of absence, and replaced the ‘Absenteeism Time in Hours’ variable
with a derived feature: ‘Absenteeism in Upper Quartile’, which indicates
whether an employee has absence in the upper quartile of all employee
absences. We also replaced the age (in years) with a categorical variable,
indicating whether employees were under 35, 35 to 45 or over 45. This
simplified the identification of bias as we could compare the model estimates
of absenteeism directly between the three age groups, rather than in a
continuous fashion.
We used the Logistic Regression model from Python’s sklearn package to predict
whether employees had absenteeism in the upper quartile of all employees in
the dataset (excluding the age variable from the data), and analysed the
output for bias with respect to age. We then used the same model to predict
absenteeism with data that had been debiased by Rosa, and checked whether the
bias had been reduced.
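The two derived features are straightforward to build with pandas; a sketch assuming the UCI column names and, for brevity, skipping any per-employee aggregation of absence hours:

```python
import pandas as pd

df = pd.read_csv("Absenteeism_at_work.csv", sep=";")   # UCI file; name/separator assumed

# binary target: absence in the upper quartile of all employees
hours = df["Absenteeism time in hours"]
df["Absenteeism in Upper Quartile"] = (hours > hours.quantile(0.75)).astype(int)

# categorical age variable replacing age in years
df["Age group"] = pd.cut(df["Age"], bins=[0, 34, 45, 200],
                         labels=["under 35", "35 to 45", "over 45"])
df = df.drop(columns=["Absenteeism time in hours", "Age"])
```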
#### 1.2.2 Results
Figure 3: For both the upper-quartile and lower three-quartile absentees, those under 35 received lower estimates of absenteeism than those over 45. The right-hand plots show the model estimates using data pre-processed by Rosa, where most of the difference in model estimates for the over-45 and under-35 groups has been removed. In particular, before Rosa was applied to the dataset, the older lower three-quartile absentees received higher estimates of absenteeism than the younger upper-quartile absentees, indicating a high degree of bias with respect to age. This pattern disappeared when Rosa was used to pre-process the data.

 | Original data | Rosa pre-processed
---|---|---
 | upper quart | lower quarts | upper quart | lower quarts
$\mu$ Under 35 | 0.33 | 0.29 | 0.40 | 0.37
$\mu$ Over 45 | 0.52 | 0.37 | 0.41 | 0.35
$\mu$ Diff | 0.19 | 0.08 | 0.01 | 0.02
$\sigma$ Under 35 | 0.05 | 0.10 | 0.17 | 0.15
$\sigma$ Over 45 | 0.16 | 0.14 | 0.11 | 0.08
$\sigma$ Average | 0.11 | 0.12 | 0.12 | 0.12
$\mu$ Diff / $\sigma$ Average | 1.81 | 0.69 | 0.13 | 0.20
Figure 4: Younger individuals receive significantly lower estimates of
absenteeism than their older counterparts when we investigate the
upper-quartile and lower three-quartile absentees separately, in a model
without any compensation for age-related bias. The amount of bias in the model
has been quantified by dividing the difference in mean estimate for those over
45 and those under 35 by the average standard deviation of the two
distributions. The higher the value, the greater the bias. Using Rosa has
significantly decreased the level of bias in the estimates, both for those
with upper-quartile absenteeism and those without.
There was a decrease in the prediction accuracy of the model from 72% to 65%,
although the prediction accuracy of the unbiased model cannot be directly
compared with that of the biased model: predicting outcomes with and without
bias are fundamentally different tasks, and success on these separate tasks
cannot be compared directly.
#### 1.2.3 Discussion
We demonstrated that a simple model to predict employee absenteeism shows
significant bias with respect to age. This is concerning as it is becoming
increasingly common for automated assessments to form a part of hiring
processes, and predicting absenteeism is of much interest to employers. While
it is acceptable to discriminate based on likely absenteeism alone, we have
seen that this correlates with age in a simple model, which would likely
result in age-based discrimination in hiring, which is illegal.
A particular pattern of discrimination appeared in the analysis of the
original dataset: the older lower three-quartile absentees received higher
estimates of absenteeism than the younger upper-quartile absentees, indicating
a large degree of discrimination with respect to age. This pattern disappeared
when Rosa was used to pre-process the data.
Using the exact same DA pipeline to predict absenteeism after pre-processing
by Rosa, the model estimates showed near zero bias, meaning that the dataset
would be safe to use in an automated hiring procedure without risk of age
discrimination.
### 1.3 Heart Disease
Coronary heart disease [CHD] is the leading cause of death in women [2]. At
certain ages, CHD mortality rates are up to 5x higher in men than women [4].
However, this risk is highly age-dependent, meaning that over a lifetime men
and women have a relatively similar risk. Despite this, the higher risk for
men during middle-age leads to the common misconception that CHD is a ‘men’s
disease’, in turn leading to a lack of research and data collected on CHD in
women.
Although the major risk factors for CHD in healthy women are the same as those
identified for healthy men in epidemiological studies, the relative strength
of certain risk factors may depend on gender [13][20]. This means that a model
which is trained to predict the risk of heart disease using data mostly from
men may perform less well on women. This can also lead to the misdiagnosis of
heart disease in women, as symptoms that indicate the presence of heart
disease in women are often considered atypical [15].
Misdiagnosis is a serious issue, as the longer a patient goes without
appropriate treatment, the greater the risk of mortality. A study by the
University of Leeds found that women who had a final diagnosis of a STEMI-type
heart attack had a 59 per cent greater chance of a misdiagnosis compared with
men [21].
#### 1.3.1 Data and Methods
We used a simple DA model to predict the presence of heart disease from a
dataset of various blood measurements from patients in Cleveland, Ohio. The
_Heart Disease Dataset_ is available at Kaggle. The original dataset has a
scale for type of heart disease, with 0 indicating the absence of heart
disease and 1 to 3 indicating some degree of heart disease. To simplify
modelling, we converted these categories into a binary variable indicating
whether a patient has any degree of heart disease or not.
We used the Logistic Regression model from Python’s sklearn package to predict
the presence of heart disease for individuals in the dataset (excluding the
gender variable from the data), and checked the model estimates for bias with
respect to gender. We then debiased the data with respect to gender and
repeated exactly the same analytical process to see whether the bias had
decreased.
#### 1.3.2 Results
As shown in Figure 5 and 6, the model estimates of heart disease for men are
clearly higher than for women without data pre-processing with Rosa,
regardless of whether the patient had heart disease or not. Using data that
has been pre-processed by Rosa, this bias is significantly reduced.
Figure 5: Higher model estimates of heart disease were assigned to men compared to women, both for patients with and without heart disease, before the data was pre-processed by Rosa. Using the same model with data pre-processed by Rosa, the model assigns equal estimates to men and women, with and without heart disease.

 | Original data | Rosa pre-processed
---|---|---
 | Healthy | Heart Disease | Healthy | Heart Disease
$\mu$ Women | 0.37 | 0.64 | 0.44 | 0.54
$\mu$ Men | 0.44 | 0.67 | 0.44 | 0.54
$\mu$ Diff | 0.07 | 0.03 | 0.00 | 0.00
$\sigma$ Women | 0.24 | 0.18 | 0.11 | 0.12
$\sigma$ Men | 0.20 | 0.16 | 0.11 | 0.11
$\sigma$ Average | 0.22 | 0.16 | 0.11 | 0.11
$\mu$ Diff / $\sigma$ Average | 0.30 | 0.20 | 0.04 | 0.05
Figure 6: Women received lower estimates of heart disease from our regression
model compared to men, independent of whether they had heart disease or not.
This means that women are more likely to be incorrectly given a negative
diagnosis when they actually have heart disease, compared to men. The amount
of bias in the model has been quantified by dividing the difference in mean
estimate for men and women with and without heart disease by the average
standard deviation of the two distributions. The higher the value, the greater
the bias. Using Rosa has significantly decreased the level of bias in the
estimates both for those with heart disease and those without.
The accuracy of the model in diagnosing heart disease using biased data was
0.74, and post-Rosa it was 0.67, although it is not reasonable to directly
compare the accuracy of a biased model with that of an unbiased model:
predicting outcomes with and without bias are fundamentally different tasks.
#### 1.3.3 Discussion
We found that a simple model to predict heart disease was biased with respect
to gender. This fits well with research on the under-/mis-diagnosis of heart
disease in women compared to men [21]. The dataset may suffer from two
problems which could lead to biased predictions: 1) the dataset is likely to
contain an over-representation of men compared to the true population of heart
disease sufferers, as women are less likely to be diagnosed correctly; and 2)
signals in the data which may lead to good predictions for men do not
necessarily apply to women.
After de-biasing with Rosa, the predictions made using the heart disease
dataset had significantly reduced bias. This has significant implications, as
it means that existing data collected from clinical trials on men can be used
to predict risk of heart disease without disadvantaging women.
### 1.4 Predicting the Economic Needs Index of Schools
As the financial situation of a school can have a large impact on the success
of its students, it is important to identify and direct resources to
financially disadvantaged schools. In New York City, elite schools admit
students based on the results of a single test: the Specialized High Schools
Admissions Test, or SHSAT. Unfortunately, due to the lack of resources
available to schools in deprived areas, and also the link between race and
socio-economic status, there is little diversity in those admitted to these
elite high schools. This is a problem that is replicated across many cities
and many countries.
In order to counter this problem, more resources must be directed to under-
performing schools, allowing students from deprived areas to catch up with
their less deprived counterparts.
A key variable indicating whether students at a particular school are likely
to benefit from additional resources is the economic needs index of the
school. The economic needs index of the school is the average economic needs
index of its pupils, and is based on factors such as whether the student is
eligible for assistance from the state and whether they have lived in
temporary housing [1].
It is important that the economic needs index can be estimated accurately,
however, as race correlates with socio-economic status, there is opportunity
for inadvertent racial discrimination in estimates (as we will demonstrate).
#### 1.4.1 Data and Methods
PASSNYC is a not-for-profit organisation that uses public data to identify
promising students in New York’s under-performing schools. They direct
resources to these schools in order to increase the diversity of students
taking SHSAT.
We used a dataset on schools in New York City, compiled by PASSNYC, to predict
the economic needs index of students. The _PASSNYC dataset_ is available at
Kaggle.
We converted the ‘Percent Black / Hispanic’ column to a binary variable which
indicated whether the school had a majority of black students or not, in order
to ease the identification of bias.
We used the Ridge regression model from Python’s sklearn package to predict
the economic needs index of each school (excluding the race variable from the
data), and looked for racial bias in the predictions. We then debiased the
data with respect to race and made another set of predictions using the
debiased data, to see whether the bias had decreased when the data was
pre-processed using Rosa.
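A minimal sketch of this regression step, with assumed column names from the Kaggle release:

```python
import pandas as pd
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score

df = pd.read_csv("passnyc.csv")                          # column names assumed
majority_black = df["Percent Black / Hispanic"] > 0.5    # binary flag for the bias audit
y = df["Economic Need Index"]
X = (df.drop(columns=["Economic Need Index", "Percent Black / Hispanic"])
       .select_dtypes("number"))
pred = Ridge(alpha=1.0).fit(X, y).predict(X)
print(r2_score(y, pred))                                 # cf. the R^2 values reported below
```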
#### 1.4.2 Results
Figure 7: Before data pre-processing by Rosa, schools with a majority of black students receive much higher estimates of economic needs index than schools with a majority of white students, with very little overlap. Using the same model with data pre-processed by Rosa, the estimates are far less polarised.

 | True Values | Pre-Rosa | Post-Rosa
---|---|---|---
$\mu$ Majority Black | 0.76 | 0.75 | 0.69
$\mu$ Majority White | 0.42 | 0.44 | 0.57
$\mu$ Diff | 0.34 | 0.31 | 0.12
$\sigma$ Majority Black | 0.13 | 0.08 | 0.11
$\sigma$ Majority White | 0.19 | 0.06 | 0.08
$\sigma$ Average | 0.16 | 0.07 | 0.095
$\mu$ Diff / $\sigma$ Average | 1.93 | 4.43 | 1.26
Figure 8: Prior to data de-biasing using Rosa, the model overestimates the
true difference between the economic needs index of majority black schools and
majority white schools. The amount of bias in the model has been quantified by
dividing the difference in mean estimate for majority black and majority white
schools by the average standard deviation of the two distributions. The higher
the value, the greater the bias. Using Rosa has significantly decreased the
level of bias in the estimates for both majority black and majority white
schools: the pre-Rosa model clearly overestimates the difference compared to
the true value, whereas after data pre-processing with Rosa the difference is
far closer to the true value.
The $R^{2}$ for predictions prior to debiasing was 0.57, and post-debiasing it
was 0.35, although we cannot properly assess model accuracy on biased data.
#### 1.4.3 Discussion
We found that without using Rosa our model overestimated the economic needs
index of majority black schools, and underestimated the economic needs of
majority white schools. This is problematic as it means that white students
from disadvantaged backgrounds may not receive the same level of support as
similarly disadvantaged black students.
After debiasing the dataset with Rosa, the difference in mean economic needs
index estimate assigned by our model was much closer to the true value than
before de-biasing. This means that organisations like PASSNYC can better
allocate resources by predicting the economic need of students without racial
bias.
### 1.5 Communities and Crime
Being able to predict the rate of crime in different communities is of
interest to law enforcement bodies, as it can allow them to better distribute
resources such as police officers.
However, there is a significant body of research to suggest that Black people
are unfairly represented in criminal datasets due to the racial bias of those
responsible for enacting the law [8, 18, 19]. In certain circumstances, it has
even been found that the true rate of crime for Black persons is likely equal
to that of White persons, despite large differences in the data collected by
law enforcement bodies [5, 17].
This is highly problematic, as any model used to predict crime rate based on
such datasets is likely to overestimate the strength of the true relationship
between race and crime, leading to biased predictions. In this example it
might lead to an unnecessary direction of police resources to communities with
a large Black population. In turn, this is likely to lead to a greater rate of
arrest of Black individuals (as there will be more police officers in areas
with a large Black population), further strengthening the bias in crime
datasets.
#### 1.5.1 Data and Methods
We used a dataset on communities and crime within the United States (combining
data from the 1990 US Census, law enforcement data from the 1990 US LEMAS
survey, and crime data from the 1995 FBI UCR) to predict the rate of violent
crime in different communities.
The _Communities and Crime dataset_ is available from the UCI Machine Learning
Repository. We discarded 22 of the most sparsely populated columns, and
converted the ‘RacePercentBlack’ column to a binary label indicating whether
the black proportion in a given community is in the upper quartile across all
communities in the dataset. We removed all other columns that contained
information about the racial profile of each community.
We used the Linear Regression model in Python’s sklearn package to predict the
rate of violent crime in each community (excluding the race variable from the
data), and looked for bias with respect to race. We then debiased the data
with respect to race and made another set of predictions using exactly the
same DA pipeline, to see whether the bias had truly been removed.
#### 1.5.2 Results
Figure 9: Before the data was pre-processed by Rosa, our model assigns much higher estimates of violent crime rate to majority Black communities compared to majority White communities. After the data has been pre-processed by Rosa, there is less discrepancy between model estimates for majority Black and majority White communities.

 | True Values | Pre-Rosa | Post-Rosa
---|---|---|---
$\mu$ Black Upper Quart | 0.49 | 0.51 | 0.39
$\mu$ Black Lower Quarts | 0.16 | 0.15 | 0.20
$\mu$ Diff | 0.33 | 0.36 | 0.19
$\sigma$ Black Upper Quart | 0.27 | 0.17 | 0.29
$\sigma$ Black Lower Quarts | 0.16 | 0.13 | 0.17
$\sigma$ Average | 0.215 | 0.15 | 0.23
$\mu$ Diff / $\sigma$ Average | 1.53 | 2.40 | 0.83
Figure 10: There is a large difference in the rate of violent crime between
majority Black and majority White communities in the _Communities and Crime
dataset_, despite evidence to suggest that the true difference should be
minimal. This is reflected in the estimates from the model before data pre-
processing with Rosa. The amount of bias in the model has been quantified by
dividing the difference in mean estimate for upper-quartile black and lower
three-quartile black communities by the average standard deviation of the two
distributions. The higher the value, the greater the bias. The pre-Rosa model
has even greater bias than the original dataset. After data pre-processing
with Rosa, our model estimates have less bias than the original dataset.
The $R^{2}$ for predictions prior to debiasing was 0.69, and post-debiasing it
was 0.52, although we cannot properly assess model accuracy on biased data.
#### 1.5.3 Discussion
There is evidence to suggest that there is only a small difference in the true
rate of crime between Black and White populations; however, most criminal
datasets currently contain a large over-representation of Black persons due to
the well-documented biases of those enacting the law. This is problematic, as
police resources may be incorrectly distributed based on this data, further
strengthening the existing bias.
We found that our simple model overestimated violent crime rate in communities
with a high proportion of Black residents. After pre-processing the data using
Rosa, there was much less difference in the model estimates for majority Black
and majority White communities.
Although the difference in model estimates for communities with a large
proportion of Black residents and those with a large proportion of White
residents after pre-processing with Rosa was smaller than the true difference
in the dataset, extensive research suggests the dataset itself is biased, and
therefore it cannot be used as a benchmark. We must instead aim for a level of
bias below the true dataset bias in order to prevent the further propagation
of racial bias through the criminal justice system. This was achieved by using
Rosa.
## 2 Discussion
Rosa is a free, web-based tool, which uses the principles of Fair Adversarial
Networks [FANs] to remove bias from a dataset with respect to a certain
characteristic or characteristics. In this paper we demonstrated the bias
removing capabilities of Rosa on five datasets which were related to real and
current issues in the world of Data Analytics [DA] and Machine Learning [ML].
These datasets contained racial, gender or age bias that made the results of
our analysis clearly biased as well; however, the bias was successfully
removed or significantly decreased each time we used Rosa as a pre-processing
step in our analysis.
The main advantage of Rosa is its wide applicability and simplicity for the
end user. Rosa is stand-alone and does not require any integration into the ML
pipeline that a dataset is intended for. It is therefore compatible with a
wide range of ML and data analysis techniques. Less technically minded users
may choose to use a graphical user interface such as the one available at
rosa.illumr.com to pre-process their data for further use in software such as
MS Excel or IBM SPSS. A data scientist comfortable with scripting, on the
other hand, can use the Rosa API directly. In such cases debiasing data
becomes a single line of code.
It should be noted that although we have demonstrated the efficacy of Rosa at
removing bias with respect to protected characteristics [3], Rosa can remove
any type of bias. For example, it might be desirable to remove regional bias
when evaluating performance of an enterprise’s regional offices. In such cases
performance of employees in different parts of the country may not be
comparable without removing the bias of the region.
There were slight decreases in prediction accuracy across all models after
data debiasing. However, because the datasets themselves were all biased in
some respect, we cannot directly compare model accuracy before and after
debiasing: the model trained on the biased dataset was only tested on biased
data, and we do not know what its performance would be on fair data. Tests on
synthetic datasets with artificially injected bias indicate that accuracy
increases when using Rosa (running the DA pipeline on biased data but
evaluating performance on the data before artificial biasing). This result,
however, relies on assumptions about how human biases work and therefore does
not provide definitive proof of an increase in accuracy when using Rosa. The
nature of the problem that Rosa addresses makes it impossible to prove an
increase in accuracy in the real world.
The stochastic nature of both the simple DA models and the FANs behind Rosa
means that the results of similar experiments will vary from run to run, and
for a thorough analysis the experiments should be repeated multiple times. In
our analyses for this paper, we present a randomly selected run of the DA
pipeline to keep the matter simple. We have observed only very marginal
variance in the output over repeated runs.
As demonstrated in this paper, Rosa is capable of removing many different
types of bias, with no change in approach taken by the end user other than
selecting the right characteristic. This ease-of-use is unrivalled by
alternative bias removal methods.
## References
* [1] Student economic need index. Available at: https://data.cccnewyork.org/data/bar/1371/student-economic-need-index#1371/a/1/1622/62, 2019.
* [2] II Abubakar, T Tillmann, and A Banerjee. Global, regional, and national age-sex specific all-cause and cause-specific mortality for 240 causes of death, 1990-2013: a systematic analysis for the global burden of disease study 2013. Lancet, 385(9963):117–171, 2015.
* [3] Equality Act. c. 15. Retrieved from the UK National Archives website: http://www.legislation.gov.uk/ukpga/2010/15/contents, 2010.
* [4] Sophie H Bots, Sanne A E Peters, and Mark Woodward. Sex differences in coronary heart disease and stroke mortality: a global assessment of the effect of ageing between 1980 and 2010. BMJ Global Health, 2(2), 2017.
* [5] WC Bunting, Lynda Garcia, and Ezekiel Edwards. The war on marijuana in black and white. American Civil Liberties Union, June, 2013.
* [6] George Cevora. Fair adversarial networks. 2020.
* [7] Dean A Dabney, Laura Dugan, Volkan Topalli, and Richard C Hollinger. The impact of implicit stereotyping on offender profiling: Unexpected results from an observational study of shoplifting. Criminal Justice and Behavior, 33(5):646–674, 2006.
* [8] Dean A Dabney, Richard C Hollinger, and Laura Dugan. Who actually steals? a study of covertly observed shoplifters. Justice Quarterly, 21(4):693–728, 2004.
* [9] Jeffrey Dastin. Amazon scraps secret ai recruiting tool that showed bias against women. Reuters, 2018.
* [10] Virginia Eubanks. Automating inequality: How high-tech tools profile, police, and punish the poor. St. Martin’s Press, 2018.
* [11] Anthony W Flores, Kristin Bechtel, and Christopher T Lowenkamp. False positives, false negatives, and false analyses: A rejoinder to machine bias: There’s software used across the country to predict future criminals. and it’s biased against blacks. Fed. Probation, 80:38, 2016.
* [12] Jeff Larson, Surya Mattu, Lauren Kirchner, and Julia Angwin. How we analyzed the compas recidivism algorithm. ProPublica (5 2016), 9, 2016.
* [13] JoAnn E Manson, Heather Tosteson, Paul M Ridker, Suzanne Satterfield, Patricia Hebert, Gerald T O’Connor, Julie E Buring, and Charles H Hennekens. The primary prevention of myocardial infarction. New England journal of medicine, 326(21):1406–1416, 1992.
* [14] Joseph J Martocchio. Age-related differences in employee absenteeism: a meta-analysis. Psychology and Aging, 4(4):409, 1989.
* [15] Jean C McSweeney, Leanne L Lefler, and Beth F Crowder. What’s wrong with me? women’s coronary heart disease diagnostic experiences. Progress in Cardiovascular Nursing, 20(2):48–57, 2005.
* [16] Cathy O’Neil. Weapons of math destruction: How big data increases inequality and threatens democracy. Broadway Books, 2016.
* [17] Alex R Piquero and Robert W Brame. Assessing the race–crime and ethnicity–crime relationship in a sample of serious adolescent delinquents. Crime & Delinquency, 54(3):390–422, 2008.
* [18] George E Schreer, Saundra Smith, and Kirsten Thomas. “shopping while black”: Examining racial discrimination in a retail setting 1. Journal of Applied Social Psychology, 39(6):1432–1444, 2009.
* [19] Andy Shallice and Paul Gordon. Black People, White Justice?: Race and the Criminal Justice System. Runnymede Trust London, 1990.
* [20] Eric Vittinghoff, Michael G Shlipak, Paul D Varosy, Curt D Furberg, Christine C Ireland, Steven S Khan, Roger Blumenthal, Elizabeth Barrett-Connor, Stephen Hulley, et al. Risk factors and secondary prevention in women with heart disease: the heart and estrogen/progestin replacement study. Annals of internal medicine, 138(2):81–89, 2003.
* [21] Jianhua Wu, Chris P Gale, Marlous Hall, Tatendashe B Dondo, Elizabeth Metcalfe, Ged Oliver, Phil D Batin, Harry Hemingway, Adam Timmis, and Robert M West. Editor’s choice-impact of initial hospital diagnosis on mortality for acute myocardial infarction: A national cohort study. European Heart Journal: Acute Cardiovascular Care, 7(2):139–148, 2018.
* [22] James Zou and Londa Schiebinger. Ai can be sexist and racist—it’s time to make it fair, 2018.
|
2024-09-04T02:54:56.388535 | 2020-03-02T13:41:04 | 2003.00913 | {
"authors": "Geoffroy J. Aubry, Luis S. Froufe-P\\'erez, Ulrich Kuhl, Olivier\n Legrand, Frank Scheffold and Fabrice Mortessagne",
"full_text_license": null,
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"provenance": "arxiv-papers-0000.json.gz:25989",
"submitter": "Geoffroy J. Aubry",
"url": "https://arxiv.org/abs/2003.00913"
} | arxiv-papers | # Experimental Tuning of Transport Regimes
in Hyperuniform Disordered Photonic Materials
Geoffroy J. Aubry<EMAIL_ADDRESS>Département de Physique, Université
de Fribourg, Switzerland Institut de Physique de Nice, Université Côte
d’Azur/CNRS, France Luis S. Froufe-Pérez Département de Physique, Université
de Fribourg, Switzerland Ulrich Kuhl Institut de Physique de Nice,
Université Côte d’Azur/CNRS, France Olivier Legrand Institut de Physique de
Nice, Université Côte d’Azur/CNRS, France Frank Scheffold Département de
Physique, Université de Fribourg, Switzerland Fabrice Mortessagne Institut
de Physique de Nice, Université Côte d’Azur/CNRS, France
(August 27, 2024)
###### Abstract
We present wave transport experiments in hyperuniform disordered arrays of
cylinders with high dielectric permittivity. Using microwaves, we show that
the same material can display transparency, photon diffusion, Anderson
localization, or a full band gap, depending on the frequency $\nu$ of the
electromagnetic wave. Interestingly, we find a second weaker band gap, which
appears to be related to the second peak of the structure factor. Our results
emphasize the importance of spatial correlations on different length scales
for the formation of photonic band gaps.
In analogy to electronic semiconductors, dielectric materials in a periodic
[1, 2, 3, 4], quasiperiodic [5], or amorphous configuration [6, 7, 8, 9, 10]
can all display full band gaps. For the latter materials, due to the absence
of long range order, the band gap has been associated with local resonances of
the scatterers or correlated scattering clusters, which is reminiscent of the
tight-binding model in electronic semiconductors [11]. In contrast to
electrons, however, there exist no bound photon states making this analogy
questionable. Other proposals have linked the opening of a gap directly to the
suppression of density fluctuations on large length scales, known as stealthy
hyperuniformity (SHU) [7]. While the precise origin of a band gap in an
amorphous dielectric material is yet unknown, the transport properties inside
the gap are well understood [3, 12, 9, 10]. In both periodic and nonperiodic
band gap materials, an incident light wave enters by a finite distance
$L_{\mathrm{B}}$, called the Bragg length, and is then totally reflected. For
a slab of thickness $L$, the wave can tunnel through the material with a
probability $T\sim e^{-L/L_{\mathrm{B}}}$. However, outside the gap, the
transport properties differ strongly. Photonic crystals either reflect,
diffract into Bragg peaks, or they are transparent, which is a direct
consequence of long-range order and the corresponding sharp Bragg maxima in
the structure factor $S(\vec{k})$. The situation is entirely different for
amorphous materials, which scatter light strongly over a broad range of
$\vec{k}$. Recent numerical work has revealed that this leads to a rich
transport phase diagram for amorphous band gap materials—with regions of
transparency, Anderson localization, and light diffusion—not present in
ordered materials [10]. In contrast to disordered photonic crystals, discussed
for example in the celebrated article by Sajeev John in 1987 [2], the diffuse
scattering and localization observed outside the gap is not a consequence of
imperfections, but an inherent feature of the amorphous material [9].
Introduced in 2004, stealthy hyperuniformity provides an elegant way to
construct such idealized disordered materials with finely tunable correlations
encoded by the degree of stealthiness $\chi$, ranging from $0\to 0.5$ before
the onset of crystallization [13, *Uche2004].
Thirty years after John’s seminal work on the interplay between photonic band
gap formation and strong localization in disordered dielectric lattices [2], a
controlled experimental study of the optical transport properties in between
ordered and disordered states of matter is still lacking [15]. Here, we
present experimental results obtained for a 2D system composed of high index
dielectric cylinders in air [16] placed according to SHU point patterns [7].
To probe the different transport regimes experimentally, we conduct
measurements in the microwave regime since the frequency span in this regime
is much larger than in the optical one. Furthermore, our microwave setup
provides a more versatile platform compared to optics. Our samples consist of
about $N\simeq 200$ cylindrical scatterers (dielectric permittivity
$\varepsilon\simeq 36$, radius $r=3$ mm, height $h=5$ mm; the Mie scattering
efficiency of such a cylinder is shown in the Supplemental Material, Fig. S1)
placed in an aluminum 2D cavity ($50\times 50\times 0.5$ cm³) on a SHU point pattern (on a square of approximately $25\times 25$ cm²) generated by
simulating an annealing relaxation scheme [9] (see Fig. 1(a)).
Figure 1: (a) Setup for 2D microwave scattering and transport experiments. The
dielectric cylinders are placed in between two conducting aluminum plates. To
reveal the interior of the sample the top plate has been removed. We place
absorbing foam (LS-14 from Emerson&Cuming) around the sample. A fixed antenna
(1, black arrow) is positioned at the center of the cavity, $(x,y)=(0,0)$. The
mobile antenna (2, red arrow) enters the cavity through small holes arranged
on a $(x,y)$ grid in the top plate. (b) Transmitted power
$\left|S_{12}(\nu)\right|^{2}$ for different configurations ($\chi$) and for a
given distance $d=\sqrt{x^{2}+y^{2}}$ between (1) and (2).
We perform measurements on five different configurations:
$\chi=0.15,0.25,0.30,0.40$, and a triangular lattice. For all the samples
studied, we kept the number density constant ($\rho\simeq 0.32$ cm⁻²). The
point patterns and the structure factors of the samples are shown in the
Supplemental Material Fig. S2. The cavity can be considered as two dimensional
for the microwave frequencies $\nu<10$ GHz studied. Under this condition, only
the first transverse magnetic mode, TM0, exists in air: the electric field is
perpendicular to the plane, and the field amplitude is uniform over the cavity
height [17]. We mimic an infinite 2D system by placing absorbing carbon loaded
polyurethane foam between the sample and the metallic walls of the cavity. We
raster the cavity with a mobile antenna that is inserted by a robotic arm
through holes with a diameter of 2 mm drilled into the upper plate, on a $5\times 5$ mm² grid unit cell. Considering the sample size, and the fact that we cannot penetrate the cavity at the holes above the scatterers, we end up with about 2700 measured positions.
At each grid point $(x,y)$, we measure the complex transmission spectrum
$S_{12}(\nu)$ between a fixed antenna (1) placed at the center of the cavity
and the mobile antenna (2) using a vector network analyzer. Figure 1(b) shows
examples of measured spectra $|S_{12}(\nu)|^{2}$ between the central position
$1$ and probe position $2$ for different $\chi$ values and for a given
distance $d$ between the antennas. The small transmission values of order
$10^{-6}$ or less arise because the receiving antenna is weakly coupled to the
cavity. The measured spectra consist of a superposition of peaks which are
associated with the resonances of the system. We extract their frequency, complex amplitude and width using harmonic inversion as described in Refs. [18,
19]. We then cluster the resonances measured on all the lattice points in
order to reveal all the eigenmodes present in the system without being spoiled
by false resonances induced by noise (see Supplemental Material § III.1 [20, 21,
22]).
Figure 2: Experimental density of states (DOS). Histogram of states per $0.15$
GHz frequency interval for different configurations: $\chi$ between 0.15 and
0.40, and for the triangular lattice. The hatched areas are a guide to the eye
to illustrate the measured band gap widths as a function of $\chi$.
In Fig. 2, we plot a histogram of the frequencies of the eigenmodes, which is
directly proportional to the density of states (DOS). We compare the results
for SHU point patterns with different values of $\chi$, to the results
obtained for a triangular lattice. As shown in earlier numerical work, the
triangular lattice is the champion photonic crystal structure in 2D, with a
gap slightly larger than disordered hyperuniform structures [9]. Our
experimental data confirms the two first TM photonic band gaps predicted for
the triangular lattice [3]. We also find frequency windows without states for
the SHU disordered systems. Surprisingly, not only the first but also the
second band gap is present in the $\chi=0.4$ sample. To our knowledge, second
and higher order band gaps have so far neither been predicted nor observed in
disordered systems. This finding is in contradiction to previous claims about
the origin of band gaps in disordered photonic materials [6, 23, 24]. To
gather additional evidence for this interesting observation, we performed band structure calculations using the same parameters as in the experiment (see Supplemental Material § IV [25]). These numerical data confirm the
existence of a second-order band gap for $\chi\geq 0.4$. Both the first and
the second gap approximately match the maxima of $S(k)$ of the triangular
lattice and of the SHU structures, supporting earlier proposals that short-
range spatial correlations play a key role in the opening of band gaps in
amorphous photonic materials [9]. Experimentally, we observe a narrow photonic
band gap even for our most disordered sample ($\chi=0.15$). Our numerical data
for a large ensemble of system realizations, however, suggest that the band
gap closes for $\chi\lesssim 0.3$ and reduces to a pseudogap with a small but
finite density of states. Naturally, variations between different realizations
of hyperuniform materials become more pronounced for smaller values of $\chi$
(see Supplemental Material Fig. S5) and moreover the number of states per
frequency bin is small for a finite sized system. This can lead to the
situation that the central frequency and width of the band gaps depend on the
precise realization of the point pattern, which is a distinct feature of
disordered materials not found in crystals. For larger values of $\chi$ these
variations are suppressed, and the gap becomes more robust against statistical
fluctuations.
We now consider the optical properties of our material outside the gap [10].
The amplitude of the peaks observed in Fig. 1(b), and clustered to reveal the
eigenmodes, differs from one position to the other and from this we obtain an
electric field amplitude map $E_{\nu}(x,y)$ of an eigenmode [26] (see
Supplemental Material § III [20, 21, 22, 27]). These eigenmode maps, shown in
the first line of Fig. 3, reveal the striking variations in optical transport
properties across the spectral range covered by our experiment.
Figure 3: Electromagnetic field distribution of the eigenmodes and wave
transport in the time domain for a sample with $\chi=0.30$
($\nu_{\mathrm{c}}=2.88$GHz). (a-e): Signed amplitudes of selected eigenmodes
at different characteristic frequencies. (a) cavity mode, (b) diffusive mode,
(c) dielectric localized mode, (d) air localized mode and (e) diffusive mode.
(f-j): Maps of the electric field for wave transport at different times
$t_{1},t_{2},t_{3}$ and for different central frequencies $f_{0}$. The wave—a
Gaussian pulse centered at $f_{0}$ and having a width of 0.5 GHz in the
frequency domain—is emitted at the center of the maps, and its temporal
representation is shown in the last line
($\Re[\tilde{F}_{f_{0},\Delta\nu}(t)]$ is the real part of the Fourier
transform of the Gaussian band pass filter). The colored vertical lines
indicate the time of each frame shown $t_{1},t_{2},t_{3}$. Entire videos are
included in the Supplemental Material, Videos S6. The color scale is adjusted
for each individual panel.
At low frequencies, we observe simple square cavity modes as if the medium were
homogeneous, which is a remarkable result given the fact that at $\nu\sim
2$GHz, the system size $L=25$ cm is almost two orders of magnitude larger than
the Boltzmann mean free path $\ell_{\mathrm{s}}(\nu)$ of the cylinder ensemble
(see Supplemental Material Fig. S1), with
$\ell_{\mathrm{s}}(\nu)=[\sigma_{\mathrm{s}}(\nu)\rho]^{-1}$ given by the
total scattering cross section $\sigma_{\mathrm{s}}(\nu)$ and the number
density $\rho$. An alternative way to study wave propagation in the SHU
material is to monitor the wave emitted by the central antenna as it
propagates through the medium in the time domain. By calculating the real part
of the Fourier-transform of $S_{12}(\nu)\times F_{f_{0},\Delta\nu}(\nu)$ (with
$F_{f_{0},\Delta\nu}$ a band pass filter of bandwidth $\Delta\nu$ centered
around $f_{0}$) at all points on the lattice, we reconstruct movies of the
propagating electromagnetic fields as a function of time for the selected
bandwidth $\Delta\nu$. Individual frames of the movies are shown in Figs.
3(f-j) (details on the numerical procedure and the entire movies are included
in the Supplemental Material § V). Figure 3(f) shows that at low frequencies a
circular wave propagates from the central antenna into the medium again
signaling transparency. Note that the disordered pattern observed at $t_{3}$
in Fig. 3(f) is due to the nonperfectly absorbing foams placed around the
sample which reflect part of the signal (for more details, see Supplemental
Material Videos S6-1 and S6-2). From the velocity of the circular wave in the
medium we can derive the effective refractive index of the samples and find
$n_{\mathrm{eff}}\sim 1.8$. Equally, counting the nodal lines of the modes
(Fig. 3(a)) and relating them to their frequencies, we obtain values of the
effective refractive index of the metamaterial in the range
$n_{\mathrm{eff}}=1.7\pm 0.3$. The uncertainty is due to the fact that, for
disordered systems, the cavity size is not well defined and moreover, we
observe a slight increase of $n_{\mathrm{eff}}$ from $\nu=1\to 3$ GHz. For
comparison, the Maxwell-Garnett effective refractive index, which in 2D
corresponds to the square root of the surface averaged permittivity, is
$n_{\text{MG}}=2.05$.
Torquato and coworkers named their designer materials “stealthy” hyperuniform
because they predicted them to be fully transparent below a threshold
frequency $\nu<\nu_{\mathrm{c}}$ [28]. The latter is equivalent to saying that
$L/\ell^{\star}\to 0$ (with $\ell^{\star}$ the transport mean free path),
while $L/\ell_{\mathrm{s}}$ remains finite and can even be larger than one. In
this first-order or single-scattering approximation
$\nu_{\mathrm{c}}=\frac{c}{n_{\mathrm{eff}}}\sqrt{\frac{\rho\chi}{\pi}}$ [10].
For our system parameters, the theoretical $\nu_{\mathrm{c}}$ range from
$\simeq 2.2$ GHz ($\chi=0.15$) to $\simeq 3.0$ GHz ($\chi=0.4$) based on an
effective refractive index of $n_{\mathrm{eff}}\sim 1.8$. Leseur _et al._
[29] demonstrated recently that stealthy transparency is also robust against
recurrent multiple scattering. They establish a stricter criterion for
transparency, $L/\ell_{\mathrm{s}}\ll k\ell_{\mathrm{s}}$, in a dense SHU
disordered material composed of dipolar point scatterers. While transparency
is retained under this condition it also implies that the transition at
$\nu_{\mathrm{c}}$ is not sharp but system size dependent. From a theoretical
evaluation of $\sigma_{\mathrm{s}}(\nu)$ for our $\varepsilon=36$ cylinders in
air, however, we find that only for $\nu<1$ GHz the condition
$L/\ell_{\mathrm{s}}<k\ell_{\mathrm{s}}$ is met (see Supplemental Material
Fig. S1 [30]). Our experimental results suggest instead that the condition set by Leseur _et al._ [29] is too restrictive and that transparency remains a
robust feature for $\nu<\nu_{\mathrm{c}}$ in our dense, high index SHU
materials, even for $k\ell_{\mathrm{s}}\lesssim 1$ (see also Supplemental
Material Fig. S7).
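As an aside, the quoted $\nu_{\mathrm{c}}$ values can be checked with a few lines of Python. The minimal sketch below evaluates the single-scattering estimate with a fixed $n_{\mathrm{eff}}=1.8$ and the experimental number density; it yields roughly $2.1$–$3.4$ GHz for $\chi=0.15$–$0.40$, and the small offset from the quoted $2.2$–$3.0$ GHz range is consistent with the slight frequency dependence of $n_{\mathrm{eff}}$ noted above.

```python
import numpy as np

c = 299_792_458.0   # speed of light in vacuum [m/s]
n_eff = 1.8         # effective refractive index (from the text)
rho = 0.32e4        # number density: 0.32 cm^-2 = 3.2e3 m^-2

# single-scattering estimate nu_c = (c / n_eff) * sqrt(rho * chi / pi)
for chi in (0.15, 0.25, 0.30, 0.40):
    nu_c = (c / n_eff) * np.sqrt(rho * chi / np.pi)
    print(f"chi = {chi:.2f}:  nu_c ~ {nu_c / 1e9:.2f} GHz")
```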
For frequencies $\nu>\nu_{\mathrm{c}}$ transparency is clearly lost and we
observe scattering and wave diffusion. The modes become disordered, Fig. 3(b),
and the propagating wavefronts in the time domain are highly distorted
signaling mean free paths smaller than the system size, Fig. 3(g). A closer
inspection of the propagating wave fronts, Supplemental Material Fig. S7,
illustrates how the onset of scattering and wave diffusion is shifted to
higher frequencies $\nu_{\mathrm{c}}(\chi)\propto\sqrt{\chi}$ as the system
becomes more and more stealthy. At frequencies close to the first band gap, we
observe spatially localized modes as shown in Figs. 3(c) and (d) [16, 31, 32].
In the time domain, we find that, at longer times, the wave stays localized
near the central antenna, as shown in the panels framed red in Figs. 3(h,i)
and in the corresponding Supplemental Material videos S6-4 and S6-6.
Figure 4: Thouless conductance for different degrees of stealthy
hyperuniformity $\chi$ between 0.15 and 0.40. The curves are shifted by a
factor 10 for clarity. The hatched areas show the width of the experimentally
observed band gaps for each value of $\chi$ using the same colors.
We note that the modes below the band gap are localized on the dielectric
cylinders, Fig. 3(c), and the modes above the band gap are localized in air,
Fig. 3(d). For frequencies in between the first and the second band gap we
again observe diffusive modes, Fig. 3(e), as well as extended waves at later
times, Fig. 3(j). For frequencies in the band gaps we find no modes, all
positions are phase coherent and there is no propagation.
Next, we calculate the Thouless conductance
$g_{\mathrm{Th}}=\delta\nu/\Delta\nu$, which is a fundamental localization
parameter [33, 34, 35]. Thouless argued that in the Anderson localization regime, this dimensionless ratio falls below unity. In this case, the spectral widths $\delta\nu$ of the modes are
smaller than their spacing $\Delta\nu$, and the modes are isolated [33]. In
the opposite limit, for $g_{\mathrm{Th}}\geq 1$ modes overlap and waves can
propagate. By calculating the average width of the modes in each frequency
bin, Fig. 2, we extract the mean Thouless conductance for each frequency bin
as shown in Fig. 4. We have marked the data points directly at the band edges
by open circles in Fig. 4. Note that, due to the discretization, their values
can be affected by the zeroes of the DOS in the gap. Inside the band gap there
are no modes and $\left<g_{\mathrm{Th}}\right>$ is not defined. We find values
of $\left<g_{\mathrm{Th}}\right>\sim 1$ everywhere except in the vicinity of
the gap where $\left<g_{\mathrm{Th}}\right>$ drops by up to two orders of
magnitude, signaling localization. This result is consistent with both the
finite spatial extension of the modes we observe experimentally, see Figs.
3(c,d), and the localization of the propagating wave in the same frequency
domain, Fig. 3(h,i). In the low-frequency regime, the Thouless conductance is
close to one, and wave transport expands over the whole system size.
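The per-bin averaging can be summarized by a short sketch; the estimator below (mean mode width over mean level spacing per bin) is one plausible discretization, as the exact estimator is not spelled out in the text.

```python
import numpy as np

def mean_thouless_conductance(freqs, widths, bin_edges):
    """Mean Thouless conductance <g_Th> = <delta_nu>/<Delta_nu> per frequency bin.

    freqs, widths : mode centre frequencies and spectral widths (1D arrays)
    bin_edges     : frequency bin edges (e.g. 0.15 GHz wide bins, as in Fig. 2)
    """
    order = np.argsort(freqs)
    freqs, widths = np.asarray(freqs)[order], np.asarray(widths)[order]
    g = np.full(len(bin_edges) - 1, np.nan)
    for i, (lo, hi) in enumerate(zip(bin_edges[:-1], bin_edges[1:])):
        sel = (freqs >= lo) & (freqs < hi)
        if sel.sum() < 2:
            continue  # inside a gap <g_Th> is undefined
        spacing = np.diff(freqs[sel]).mean()   # mean level spacing Delta_nu
        g[i] = widths[sel].mean() / spacing    # mean width delta_nu over spacing
    return g
```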
In conclusion, we show experimentally that disordered dielectric structures
display different characteristic transport regimes such as transparency,
photon diffusion, Anderson localization, as well as first and even second
order band gaps. We rationalize our findings by analyzing the mode structure
and the propagation of waves in the time domain. We find evidence that
transparency is robust against recurrent multiple scattering, and that the
stealthy materials we study retain their low-frequency transparency even for
the unusually strong refractive index mismatch between our scatterers and air
$\sqrt{\varepsilon/\varepsilon_{\text{air}}}=6$. Our results lend support to
recent numerical predictions and shed new light on the interplay between
disorder and correlations [10]. We believe this will have significant
consequences for the design of photonic materials, such as two-dimensional
nanostructured materials for light harvesting in solar cells [36] or light
guiding in all-optical circuit applications [37].
### Acknowledgments
G.A., L.S.F., and F.S. acknowledge funding by the Swiss National Science
Foundation through Project No. 169074 and No. 188494, and through the National
Center of Competence in Research Bio-Inspired Materials. We would like to
thank Paul Chaikin and Juanjo Saenz for discussions.
## References
* Yablonovitch [1987] E. Yablonovitch, Inhibited spontaneous emission in solid-state physics and electronics, Phys. Rev. Lett. 58, 2059 (1987).
* John [1987] S. John, Strong localization of photons in certain disordered dielectric superlattices, Phys. Rev. Lett. 58, 2486 (1987).
* Joannopoulos _et al._ [2008] J. Joannopoulos, S. Johnson, J. Winn, and R. Meade, _Photonic Crystals: Molding the Flow of Light_, 2nd ed. (Princeton University Press, Princeton (New Jersey), 2008).
* Vynck _et al._ [2009] K. Vynck, D. Felbacq, E. Centeno, A. I. Căbuz, D. Cassagne, and B. Guizal, All-dielectric rod-type metamaterials at optical frequencies, Phys. Rev. Lett. 102, 133901 (2009).
* Zoorob _et al._ [2000] M. E. Zoorob, M. D. B. Charlton, G. J. Parker, J. J. Baumberg, and M. C. Netti, Complete photonic bandgaps in 12-fold symmetric quasicrystals, Nature 404, 740 (2000).
* Jin _et al._ [2001] C. Jin, X. Meng, B. Cheng, Z. Li, and D. Zhang, Photonic gap in amorphous photonic materials, Phys. Rev. B 63, 195107 (2001).
* Florescu _et al._ [2009] M. Florescu, S. Torquato, and P. J. Steinhardt, Designer disordered materials with large, complete photonic band gaps, Proceedings of the National Academy of Sciences 106, 20658 (2009).
* Liew _et al._ [2011] S. F. Liew, J.-K. Yang, H. Noh, C. F. Schreck, E. R. Dufresne, C. S. O’Hern, and H. Cao, Photonic band gaps in three-dimensional network structures with short-range order, Phys. Rev. A 84, 063818 (2011).
* Froufe-Pérez _et al._ [2016] L. S. Froufe-Pérez, M. Engel, P. F. Damasceno, N. Muller, J. Haberko, S. C. Glotzer, and F. Scheffold, Role of short-range order and hyperuniformity in the formation of band gaps in disordered photonic materials, Phys. Rev. Lett. 117, 053902 (2016).
* Froufe-Pérez _et al._ [2017] L. S. Froufe-Pérez, M. Engel, J. J. Sáenz, and F. Scheffold, Band gap formation and Anderson localization in disordered photonic materials with structural correlations, Proceedings of the National Academy of Sciences 114, 9570 (2017).
* Yang _et al._ [2010] J.-K. Yang, C. Schreck, H. Noh, S.-F. Liew, M. I. Guy, C. S. O’Hern, and H. Cao, Photonic-band-gap effects in two-dimensional polycrystalline and amorphous structures, Phys. Rev. A 82, 053838 (2010).
* Marichy _et al._ [2016] C. Marichy, N. Muller, L. S. Froufe-Pérez, and F. Scheffold, High-quality photonic crystals with a nearly complete band gap obtained by direct inversion of woodpile templates with titanium dioxide, Scientific Reports 6, 21818 (2016).
* Torquato and Stillinger [2003] S. Torquato and F. H. Stillinger, Local density fluctuations, hyperuniformity, and order metrics, Phys. Rev. E 68, 041113 (2003).
* Uche _et al._ [2004] O. U. Uche, F. H. Stillinger, and S. Torquato, Constraints on collective density variables: Two dimensions, Phys. Rev. E 70, 046122 (2004).
* Sperling _et al._ [2016] T. Sperling, L. Schertel, M. Ackermann, G. J. Aubry, C. Aegerter, and G. Maret, Can 3D light localization be reached in ‘white paint’?, New Journal of Physics 18, 013039 (2016), 1510.08092 .
* Laurent _et al._ [2007] D. Laurent, O. Legrand, P. Sebbah, C. Vanneste, and F. Mortessagne, Localized Modes in a Finite-Size Open Disordered Microwave Cavity, Phys. Rev. Lett. 99, 253902 (2007).
* Jackson [1998] J. D. Jackson, _Classical Electrodynamics_ , 3rd ed. (John Wiley & Sons, Inc., New York, 1998).
* Main _et al._ [2000] J. Main, P. A. Dando, D. Belkic, and H. S. Taylor, Decimation and harmonic inversion of periodic orbit signals, Journal of Physics A: Mathematical and General 33, 1247 (2000).
* Wiersig and Main [2008] J. Wiersig and J. Main, Fractal Weyl law for chaotic microcavities: Fresnel’s laws imply multifractal scattering, Phys. Rev. E 77, 036205 (2008).
* Maier and Slater [1952] L. C. Maier and J. C. Slater, Field strength measurements in resonant cavities, Journal of Applied Physics 23, 68 (1952).
* Pourrajabi _et al._ [2014] M. Pourrajabi, D. Moulavi, R. J. G. B. Campello, A. Zimek, J. Sander, and R. Goebel, Model selection for semi-supervised clustering, in _Advances in Database Technology – EDBT 2014_, edited by S. Amer-Yahia, V. Christophides, A. Kementsietsidis, M. Garofalakis, S. Idreos, and V. Leroy (OpenProceedings.org, Konstanz, 2014) pp. 331–342.
* Ruiz _et al._ [2007] C. Ruiz, M. Spiliopoulou, and E. Menasalvas, C-DBSCAN: Density-based clustering with constraints, in _Rough Sets, Fuzzy Sets, Data Mining and Granular Computing_, edited by A. An, J. Stefanowski, S. Ramanna, C. J. Butz, W. Pedrycz, and G. Wang (Springer Berlin Heidelberg, Berlin, Heidelberg, 2007) pp. 216–223.
* Miyazaki _et al._ [2003] H. Miyazaki, M. Hase, H. T. Miyazaki, Y. Kurokawa, and N. Shinya, Photonic material for designing arbitrarily shaped waveguides in two dimensions, Phys. Rev. B 67, 235109 (2003).
* Rockstuhl _et al._ [2006] C. Rockstuhl, U. Peschel, and F. Lederer, Correlation between single-cylinder properties and bandgap formation in photonic structures, Opt. Lett. 31, 1741 (2006).
* Johnson and Joannopoulos [2001] S. G. Johnson and J. D. Joannopoulos, Block-iterative frequency-domain methods for Maxwell’s equations in a planewave basis, Opt. Express 8, 173 (2001).
* Stein _et al._ [1995] J. Stein, H.-J. Stöckmann, and U. Stoffregen, Microwave studies of billiard green functions and propagators, Phys. Rev. Lett. 75, 53 (1995).
* Xeridat _et al._ [2009] O. Xeridat, C. Poli, O. Legrand, F. Mortessagne, and P. Sebbah, Quasimodes of a chaotic elastic cavity with increasing local losses, Phys. Rev. E 80, 035201(R) (2009).
* Batten _et al._ [2008] R. D. Batten, F. H. Stillinger, and S. Torquato, Classical disordered ground states: Super-ideal gases and stealth and equi-luminous materials, Journal of Applied Physics 104, 033504 (2008).
* Leseur _et al._ [2016] O. Leseur, R. Pierrat, and R. Carminati, High-density hyperuniform materials can be transparent, Optica 3, 763 (2016).
* Bohren and Huffman [1998] C. F. Bohren and D. R. Huffman, _Absorption and Scattering of Light by Small Particles_ (Wiley, New York, 1998).
* Le Thomas _et al._ [2009] N. Le Thomas, R. Houdré, D. M. Beggs, and T. F. Krauss, Fourier space imaging of light localization at a photonic band-edge located below the light cone, Phys. Rev. B 79, 033305 (2009).
* García _et al._ [2012] P. D. García, S. Stobbe, I. Söllner, and P. Lodahl, Nonuniversal intensity correlations in a two-dimensional Anderson-localizing random medium, Phys. Rev. Lett. 109, 253902 (2012).
* Thouless [1977] D. J. Thouless, Maximum metallic resistance in thin wires, Phys. Rev. Lett. 39, 1167 (1977).
* Wang and Genack [2011] J. Wang and A. Z. Genack, Transport through modes in random media, Nature 471, 345 (2011).
* Mondal _et al._ [2019] S. Mondal, R. Kumar, M. Kamp, and S. Mujumdar, Optical Thouless conductance and level-spacing statistics in two-dimensional Anderson localizing systems, Phys. Rev. B 100, 060201(R) (2019).
* Vynck _et al._ [2012] K. Vynck, M. Burresi, F. Riboli, and D. S. Wiersma, Photon management in two-dimensional disordered media, Nature Materials 11, 1017 (2012).
* Milošević _et al._ [2019] M. M. Milošević, W. Man, G. Nahal, P. J. Steinhardt, S. Torquato, P. M. Chaikin, T. Amoah, B. Yu, R. A. Mullen, and M. Florescu, Hyperuniform disordered waveguides and devices for near infrared silicon photonics, Scientific Reports 9, 20338 (2019).
## Supplementary Material
This document contains the scattering properties of a single rod, details on
the structures of the point patterns, the band structure calculation, details
on the time domain propagation videos and all the technical information on the
data analysis. The seven videos (permanently stored on the Zenodo repository:
https://doi.org/10.5281/zenodo.3978032) show how the electromagnetic wave
propagates in the cavity for different frequency ranges (see Supplemental
Material Fig. S6 for the description of the videos).
## I Boltzmann scattering mean free path.
In Fig. S1 we show the scattering efficiency $Q$ of an individual cylinder in
TM polarization calculated using Mie theory [30] (upper panel). In the lower
panel, we show how the corresponding Boltzmann scattering mean free path
$\ell_{\mathrm{sca}}(\nu)=[\sigma_{\mathrm{sca}}(\nu)\rho]^{-1}$ (with
$\sigma_{\mathrm{sca}}(\nu)=2rQ_{\mathrm{sca}}$ the total scattering cross
section) compares with $L$, the size of the system, and $\lambda_{0}$, the
wavelength in vacuum of the wave.
Figure S1: Upper panel: Scattering efficiency $Q_{\mathrm{sca}}$ of individual
cylinders in TM polarization (solid blue line), and the three first terms in
the Mie expansion (dashed lines). Lower panel: optical density
$L/\ell_{\mathrm{s}}$ in the independent scattering approximation using the
Boltzmann scattering mean free path and the sample size $L$. Also shown is
$k_{0}\ell_{\mathrm{s}}$ with $k_{0}=2\pi/\lambda_{0}$ and the wavelength in vacuum $\lambda_{0}$.
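A minimal sketch of the quantities plotted in Fig. S1, assuming the TM scattering efficiency $Q_{\mathrm{sca}}$ of a single cylinder is supplied by a separate Mie calculation (e.g. following ref. [30]), which is not reproduced here:

```python
import numpy as np

r = 3e-3      # cylinder radius [m]
rho = 0.32e4  # number density [m^-2]
L = 0.25      # sample size [m]

def transport_numbers(nu, Q_sca):
    """Fig. S1 quantities from the scattering efficiency Q_sca at frequency nu [Hz]."""
    sigma_s = 2.0 * r * Q_sca              # total scattering cross section (2D) [m]
    ell_s = 1.0 / (sigma_s * rho)          # Boltzmann scattering mean free path [m]
    k0 = 2.0 * np.pi * nu / 299_792_458.0  # vacuum wavenumber k0 = 2*pi/lambda_0
    return ell_s, L / ell_s, k0 * ell_s    # mean free path, optical density, k0*ell_s
```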
## II Point patterns and their structure factors.
Figure S2(a) shows the point patterns of the samples used in this study, and Fig. S2(b) the corresponding structure factors
$\displaystyle S(\mathbf{k})=\frac{1}{N}\sum_{j=1}^{N}\sum_{l=1}^{N}\mathrm{e}^{-i\mathbf{k}\cdot(\mathbf{R}_{j}-\mathbf{R}_{l})},\qquad\text{(S1)}$
averaged over 1000 samples generated in the same way as the ones used in this study, where $\mathbf{R}_{j}$ are the positions of the $N$ points and $\mathbf{k}$ is the wavevector.
Figure S2: (a) Point patterns of the studied samples. (b) Radially averaged
structure factors $S(k)$ of the studied samples as a function of $ka$, where
$a=1/\sqrt{\rho}$ and $\rho$ denotes the number density of scatterers. The
structure factors are averaged over 1000 different realizations of about 200
points. The grey vertical lines indicate the peaks of the radially averaged
triangular lattice structure factor (Bragg peaks).
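Eq. (S1) translates directly into numpy; the sketch below also performs the radial (angular) average shown in Fig. S2(b), while the average over the 1000 pattern realizations is left to the caller.

```python
import numpy as np

def radial_structure_factor(points, k_max, n_k=200, n_theta=64):
    """Radially averaged structure factor S(k) of Eq. (S1) for a 2D point pattern.

    points : (N, 2) array of scatterer positions R_j
    Returns k values and S(k) averaged over n_theta wavevector directions.
    """
    k = np.linspace(k_max / n_k, k_max, n_k)
    theta = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    kvec = k[:, None, None] * np.stack([np.cos(theta), np.sin(theta)], axis=-1)
    # collective density variable rho(k) = sum_j exp(-i k . R_j);
    # Eq. (S1) is equivalent to S(k) = |rho(k)|^2 / N
    rho_k = np.exp(-1j * np.einsum('ktd,nd->ktn', kvec, points)).sum(axis=-1)
    return k, (np.abs(rho_k) ** 2 / len(points)).mean(axis=1)
```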
## III Visualization of the eigenmodes of the disordered cavity
### III.1 Clustering of the resonances into modes
The measured spectra consist of a superposition of peaks (see Main Text Fig.
1(b)), which are associated with the resonances of the system. We determine the frequencies $\nu^{i}$, widths $\gamma^{i}$ and complex amplitudes $A^{i}$ of each resonance $i=1,\dots,N$ using the harmonic inversion method described in refs. [18, 19]. Ideally, resonances belonging to the same mode should all have
the same frequency. In practice, the presence of the mobile antenna at every
point $(x,y)$ shifts the resonant frequency by a small amount depending on the
intensity of the electromagnetic field at the specific mobile antenna position
[20], see Fig. S3.
Figure S3: For each position $(x,y)$, a spectrum is measured and the
frequencies are extracted using harmonic inversion: these are the points
plotted in this figure for two different frequency ranges. The points are then
clusterized: each color corresponds to a cluster found by the algorithm. The
upper panel corresponds to a typical situation in the stealth regime where the
intensity is almost uniform over the sample (small frequency shifts). The
lower panel corresponds to the case of localized modes with large intensities
corresponding to large frequency shifts.
Note that we minimize this perturbation due to the mobile antenna by having it
extend into the cavity by only 1 mm, whereas the height of the cavity is 5
mm. This has the consequence that it is weakly coupled to the field, and
explains the low transmission values as seen in Main Text Fig. 1(b). We
identify all data points belonging to a certain cluster by using a density-
based clustering algorithm [22] fulfilling the condition that two points
having the same coordinate $(x,y)$ cannot be in the same cluster. To associate
each resonant signal at position $(x,y)$ to a specific mode, we apply a semi-
supervised clustering algorithm. This allows us to identify every single mode
of the disordered cavity, associated with discrete resonance frequencies, as
long as the mode amplitude is large enough to be detected by the vector
network analyzer [21, 22].
More precisely, we use a slightly modified version of the C-DBSCAN algorithm
published in Ref. [22]. In our version, step 2 of the algorithm [22] either
labels the points in the KD-tree leaf as noise ratio (if the density is too
small), or we create a local cluster for each point in the leaf. Depending on
the frequency range, we run our modified version of C-DBSCAN either in the
$(x,y,\nu)$, $(x,y,\nu,\gamma)$ or $(x,y,\nu,\gamma,\ln A)$ space to reach the
best clustering results. An example of the result is shown in Fig. S3 where
the different clusters, or modes, found by the algorithm are plotted using
different colors.
### III.2 Electric field amplitude maps
In the first line of Main Text Fig. 3, we plot the signed amplitude
$E_{\nu}^{\pm}(x,y)=\operatorname{sgn}\left(\textrm{Re}[\tilde{S}_{12}]\right)|\tilde{S}_{12}|$,
where $\tilde{S}_{12}$ is the transmission deduced from $S_{12}$ after the ad
hoc rotation of the global phase making the real and imaginary parts
statistically independent [27]. This allows us to represent both the real and
imaginary parts of the eigenmodes on the same map.
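A common recipe for this rotation (assumed here; the exact procedure of ref. [27] may differ in detail) chooses the global phase $\varphi=\frac{1}{2}\arg\sum_{k}S_{12,k}^{2}$, which makes the real and imaginary parts of the rotated field uncorrelated:

```python
import numpy as np

def signed_amplitude(S12_mode):
    """Signed amplitude map E+/- of one mode from its complex transmission values,
    sampled at all antenna positions (flattened 1D complex array)."""
    z = np.asarray(S12_mode, dtype=complex)
    phi = 0.5 * np.angle(np.sum(z ** 2))   # global phase decorrelating Re and Im
    zt = z * np.exp(-1j * phi)             # rotated transmission S12~
    return np.sign(zt.real) * np.abs(zt)   # sgn(Re[S12~]) * |S12~|
```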
## IV Numerical simulations of the DOS
Figure S4 shows the normalized density of states (nDOS) of the stealthy
hyperuniform samples obtained numerically for a large statistical ensemble of
point patterns and using periodic boundary conditions.
Figure S4: Normalized density of states (nDOS) obtained by taking the average
over the band structure calculated numerically for 500 system realizations at
each value of $\chi$.
The properties of the dielectric cylinders and their density are identical to
those of the system studied in the experiment. The nDOS was calculated using
the MIT Photonic Bands [25] software using the supercell method [3] as
described earlier in ref. [9]. This dataset was obtained by calculating 500
different samples for each $\chi$-value (between 0.1 and 0.5, every 0.05).
Figure S5 shows the average and the standard deviation of the gap central
frequency and width found for the samples used in Fig. S4.
Figure S5: Spread of the first gap central frequency and width found in the
numerical results used to obtain Fig. S4. The error bars correspond to the
standard deviations, the scattered points to the 500 individual systems per
$\chi$-value used to compute the statistics. The dashed lines correspond to
the results obtained for the triangular lattice. The right panel shows the
histograms for the $\chi=0.30$ samples.
The statistical variations are large at low and intermediate $\chi$-values
(between 0.10 and 0.35). At large $\chi$-values ($\geq 0.4$), the standard
deviation vanishes: the gap central frequencies and widths are similar from
sample to sample.
## V Time domain propagation videos
We obtain time domain propagation signals from the real part of the Fourier
transform of the complex transmission spectra multiplied by a chosen bandpass
filter centered at $f_{0}$ with a standard deviation $\Delta\nu$. We use a
Gaussian bandpass filter to avoid window effects in the Fourier transform. The
excitation in the time domain is therefore a Gaussian pulse with a temporal
spread inversely proportional to $1/\Delta\nu$ of the Gaussian bandpass
filter.
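For one antenna position, the reconstruction amounts to the few lines sketched below; the Fourier convention and normalization used for the videos are not specified in the text, so those choices are assumptions. Evaluating the result at every grid position for a fixed time yields one video frame.

```python
import numpy as np

def time_domain_signal(nu, S12, f0, delta_nu):
    """Pulse response at one position: Re[FT(S12 * Gaussian band-pass filter)].

    nu           : uniformly spaced measured frequency axis [Hz]
    S12          : complex transmission spectrum at this (x, y)
    f0, delta_nu : centre and width of the Gaussian filter [Hz]
    """
    F = np.exp(-0.5 * ((nu - f0) / delta_nu) ** 2)  # Gaussian band-pass filter
    field = np.fft.ifft(S12 * F)                    # frequency -> time domain
    t = np.fft.fftfreq(len(nu), d=nu[1] - nu[0])    # conjugate time axis [s]
    return t, field.real
```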
Videos S6-1, 2 and 3 show the propagation of the wave in the low frequency
regime (well below the gap frequency $\nu_{\mathrm{G}}\simeq 5$ GHz).
1. Stealth regime (Gaussian bandpass filter, $f_{0}=1.75$ GHz, $\Delta\nu=0.25$ GHz)
2. Stealth regime (Gaussian bandpass filter, $f_{0}=2.25$ GHz, $\Delta\nu=0.25$ GHz)
3. Wave diffusion (Gaussian bandpass filter, $f_{0}=3.5$ GHz, $\Delta\nu=0.25$ GHz)
4. Dielectric Anderson localized modes just below the band gap (Gaussian bandpass filter, $\Delta\nu=0.25$ GHz)
5. Square filter in the band gaps
6. Air Anderson localized modes just above the band gap (Gaussian bandpass filter, $\Delta\nu=0.25$ GHz)
7. Wave diffusion (Gaussian bandpass filter, $f_{0}=6.5$ GHz, $\Delta\nu=0.25$ GHz)
Figure S6: Videos description. The videos are permanently stored on the Zenodo
repository: https://doi.org/10.5281/zenodo.3978032.
We observe that for frequencies $\nu<\nu_{\mathrm{c}}$ and at early times, the
circular wave structure is well preserved, indicating the absence of
scattering. This boundary between the stealth regime and the diffusive regime
is also shown in more detail in Fig. S7.
Figure S7: Maps of the electric field amplitude for the propagation of a pulse
of spectral width $\Delta\nu=0.125$ GHz at different central frequencies
$f_{0}$ (for details see text and Main Text Fig. 3), and first half of the
Gaussian pulse used for the excitation. The frames shown in the figure are
taken at the time marked by the blue vertical line. The panels in the green
polygon indicate frequencies below $\nu_{\mathrm{c}}(\chi)$. The radius of the
dashed circles indicates where a wave emitted at the time marked by the red vertical line should be located at the time marked by the blue vertical line, for a homogeneous medium with $n_{\mathrm{eff}}=1.8$. The color scale is adjusted for each individual panel.
The panels in the green shaded polygon indicate that the Gaussian pulse
central frequency $f_{0}$ is below the critical stealth frequency
$\nu_{\mathrm{c}}=\frac{c}{n_{\mathrm{eff}}}\sqrt{\frac{\rho\chi}{\pi}}$, and
above $\nu_{\mathrm{c}}$ elsewhere. By eye, we see a clear correlation between
the wave front smoothness and the transition from the stealth regime to the
diffusive regime for frequencies $\nu>\nu_{\mathrm{c}}$. Since
$\nu_{\mathrm{c}}\propto\sqrt{\chi}$ the transition is shifted to higher
frequencies when increasing the degree of stealthiness $\chi$. Note that the
wave distortion at later times (in the videos) is explained by reflections of
the wave on the non-ideal absorbing foam walls.
Video S6-4 (respectively S6-6) shows the electromagnetic field for a Gaussian
pulse centered 0.25 GHz below (resp. above) the band gap and having a width
$\Delta\nu=0.25$ GHz. Video S6-7 shows the propagation of the wave in the high
frequency regime, well above the first band gap. As in the low frequency
regime for frequencies above $\nu_{\mathrm{c}}$, we observe strong scattering and wave diffusion.
Finally, video S6-5 shows the electromagnetic field in the band gap. For this
video, the bandpass filter was chosen to be a square filter fitting exactly
the band gaps as extracted from Main Text Fig. 2. This explains the windowing
effect seen in the input signal.
|
2024-09-04T02:54:56.399821 | 2020-03-02T14:10:53 | 2003.00924 | {
"authors": "Lukas Schaupp, Patrick Pfreundschuh, Mathias Buerki, Cesar Cadena,\n Roland Siegwart, Juan Nieto",
"full_text_license": null,
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"provenance": "arxiv-papers-0000.json.gz:25990",
"submitter": "Lukas Schaupp",
"url": "https://arxiv.org/abs/2003.00924"
} | arxiv-papers | # MOZARD: Multi-Modal Localization for Autonomous Vehicles in Urban Outdoor
Environments
Lukas Schaupp1, Patrick Pfreundschuh3, Mathias Bürki1,2, Cesar Cadena1,
Roland Siegwart1, and Juan Nieto1
1Autonomous Systems Lab, ETH Zürich. 2Sevensense Robotics AG. 3ETH Zurich.
###### Abstract
Visually poor scenarios are one of the main sources of failure in visual
localization systems in outdoor environments. To address this challenge, we
present MOZARD, a multi-modal localization system for urban outdoor
environments using vision and LiDAR. By extending our preexisting key-point
based visual multi-session local localization approach with the use of
semantic data, an improved localization recall can be achieved across vastly
different appearance conditions. In particular we focus on the use of
curbstone information because of their broad distribution and reliability
within urban environments. We present thorough experimental evaluations on
several driving kilometers in challenging urban outdoor environments, analyze
the recall and accuracy of our localization system and demonstrate in a case
study possible failure cases of each subsystem. We demonstrate that MOZARD is
able to bridge scenarios where our previous work VIZARD fails, hence yielding
an increased recall performance, while a similar localization accuracy of
0.2$m$ is achieved.
## I Introduction
Due to increasing traffic in urban environments and changing customer demands,
self-driving vehicles are one of the most discussed and promising technologies
in the car and robotics industry. Still, no system has been presented yet that localizes robustly under all light, weather and environmental
conditions. However, precise localization is a vital feature for each
autonomous driving task, since a wrong pose estimate may lead to accidents.
Especially in urban environments, safety margins on the position of the car
are small due to crowded traffic and other traffic participants (e.g.
pedestrians, cyclists). Because of multi-path effects or satellite blockage,
GPS sensors cannot be used reliably under those urban conditions. Thus, other
sensors have to be used for localization. For this purpose, mainly LiDARs and
cameras have been used in the last years. Appearance changes in urban
environments challenge visual localization approaches. However, such driving
scenarios contain persistent structures even under those appearance changes.
Curbstones are one such feature. Curbstones are used to protect pedestrians
from cars and to separate the sidewalk from the street. As they delimit the
street, they also offer information about the area in which the car is allowed to drive. Detecting their position relative to the car thus allows localizing within the lane. In contrast to other geometrical shapes such as
poles and road markings, curbstone measurements are found more frequently in
urban environments and yield a reliable, continuous lateral constraint for
pose refinement. Due to their shape and their contrasting color with respect
to the pavement in many cases, they can be detected both in camera images as
well as in LiDAR pointclouds.
Figure 1: We aim at accurately localizing the UP-Drive vehicle in a map of
features extracted from vision and LiDAR data depicted on the right side. Our
proposed algorithm can be separated into two distinct steps. We extract
keypoint-based features from camera and additional 3D geometrical curbstone
information from a semantic vision-LiDAR pipeline. The features extracted from
images of the surround-view camera system (top-left corner) are matched
against $3D$ landmarks in the map while our raw curbstone measurements
(bottom-left corner) are matched to their corresponding landmarks. Inlier
matches, centered on the estimated $6DoF$ pose of the vehicle in the map, are
illustrated as dark yellow lines on the right side. Purple indicates our pre-
generated curbstone map data represented by splines. During runtime we
downsample the nearest splines spatially to match with the current raw
curbstone measurements indicated by the red and green color.
Therefore, our pipeline, named MOZARD, extends our previous visual localization system VIZARD [1] with additional geometrical features from LiDAR data, for the self-driving cars used in the UP-Drive project (a research endeavor funded by the European Commission, aiming at advancing research and development towards fully autonomous cars in urban environments; see www.up-drive.eu). In a thorough evaluation of our proposed
localization system using our long-term outdoor dataset collection, we
investigate key performance metrics such as localization accuracy and recall
and demonstrate in a case study possible failure scenarios.
We see the following aspects as the main contributions of this paper:
* •
A semantic extension of our key-point-based localization pipeline based upon
the extraction of curbstone information is presented, which allows bridging scenarios with sparse key-point features in visual localization.
* •
In a thorough evaluation on the long-term dataset collection UP-Drive, we
demonstrate a reliable localization performance across different appearance
conditions in urban outdoor environments. We compare our results to our
vision-based localization pipeline and demonstrate significant performance
increases.
* •
A computational performance analysis showing that our proposed algorithm
exhibits real-time capabilities and better scalability.
## II related work
Since our localization system is a multi-modal semantic extension of our
previous work we will concentrate the related work on frameworks that exploit
semantic features using either one modality or fusing multiple modalities in
different ways. These works can be subdivided by their specific sensor setup. For general related work on our prior visual key-point
based system we refer to our previous work [1].
#### Vision-only
A recent example of using semantic features for the purpose of localization
and mapping is Lu et al. [2], who use a monocular camera for road-mark detection, whereas other studies use traffic signs [3], line segments from multi-view cameras [4] or poles [5, 6] for feature matching. On the detection of
curbstones, traditional image based curbstone detection mostly uses the
vanishing point and color distribution to detect the corresponding pixels [7],
[8]. Recent works such as Enzweiler et al. [9] and Panev et al. [10] also demonstrate learning-based approaches to detect curbs in images. In contrast
to the vision based approaches, we concentrate on the multi-modal aspect as
provided by Goga et al. [11].
#### LiDAR-only
LiDAR based methods use assumptions about the shape of the semantic features
like curbs, poles and planes by evaluating the difference in elevation [12,
13], slope [12] or curvature [14, 15]. Authors such as Schaefer et al. [16]
detect and extract 3D poles from the scenery which are then being used for map
tracking. Regarding the usage of curbstones, most applications use
geographical map data [17, 18] or road networks [19] as a reference to
localize with detected curbstones. Unlike these works, our approach does not
rely on external pre-generated data for curbstone map construction.
#### Vision and LiDAR
Recent work from Kampker et al. [20] uses a camera to extract pole-like landmarks and a LiDAR for cylinder shapes for the task of self-localization.
Kummerle et al. [21] demonstrate that basic geometric primitives can be
extracted using vision and LiDAR to obtain road markings, poles and facades, which can then be used for localization and mapping under various weather conditions. While these approaches use both
modalities for mapping and localization separately, there has been recent
research into cross-modality. Xiao et al. [22] uses a LiDAR to build an HD map
and extract 3D semantic features from the map. Then a monocular camera is used
with a deep learning based approach to match these semantic features with the
ones from the camera. In contrast to the graph-based SLAM formulation used in
our approach, the mentioned approaches are filter-based, with Extended Kalman
Filters [23] or Monte Carlo Localization [18, 20], and they are evaluated at
low speed and/or on short maps of a few hundred meters [22]. In addition they
do not use raw curbstone measurements as a feature for localization and
mapping [21, 22, 16, 17, 18]. Our work is evaluated on a long-term map with a
length of over 5$km$ at urban driving speeds of around 50$km/h$.
## III Methodology
Figure 2: The key-point based map-tracking module extracts 2D features from
current camera images, and matches them with 3D map landmarks locally in image
space using a pose prior $\hat{T}^{t}_{MB}$ while our semantic map-tracking
module matches point-cloud based curbstone measurements between our map and
immediate input. The state estimation module fuses the visual 2D-3D and
geometrical 3D-3D matches with the current wheel-odometry measurement to
obtain a current vehicle pose estimate $T^{t}_{MB}$.
A schematic overview of MOZARD can be found in Figure 2. Since our work
extends the VIZARD framework, we refer to the general methodology from Buerki
et al. [1]. We assume that our visual localization pipeline already created a
map by tracking and triangulating local 2D features extracted along a
trajectory.
### III-A Curbstone Detection
For our curbstone detection we employ the work from Goga et al. [11]. Goga et
al. fuse a vision-based segmentation CNN with LiDAR data. In a post-processing step, they extract, refine and filter semantic curb ROIs to obtain new curb measurements. In the following, we use their curbstone detection as input to our pipeline.
### III-B Map Extension
The curbstones are added to a map that was built using the VIZARD pipeline
[1]. This map is called the base map in the following. Curbstone points
detected in a specific LiDAR pointcloud frame will be called curbstone
observation. The detection pipeline finds curbstone pointclouds in the vehicle
frame $\mathcal{F}_{B}$. We find the closest vertex in time within the base
map and allocate the curbstone pointcloud. This is performed for all curbstone
detections along the trajectory. From the base map, the respective
transformations from the map coordinate frame $\mathcal{F}_{M}$ to the body
frame at time $t$ can be looked up. Using $T_{MB}^{t}$ at each vertex in the
base map that contains a curbstone observation, a curbstone map in
$\mathcal{F}_{M}$ can then be created.
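A minimal numpy sketch of this step: each curbstone observation, given in the body frame $\mathcal{F}_{B}$, is mapped into $\mathcal{F}_{M}$ with the vertex pose $T_{MB}^{t}$ looked up from the base map.

```python
import numpy as np

def curbstones_to_map_frame(T_MB, points_B):
    """Transform curbstone points from body frame F_B into map frame F_M.

    T_MB     : 4x4 homogeneous pose of the body in the map at time t
    points_B : (N, 3) curbstone observation in the body frame
    """
    hom = np.hstack([points_B, np.ones((len(points_B), 1))])  # homogeneous coordinates
    return (hom @ T_MB.T)[:, :3]                              # p_M = T_MB * p_B
```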
### III-C Curbstone Map-Tracking
Figure 3: For our curbstone parameterization pipeline we first obtain raw
curbstone measurements. After a voxelization processing step, we cluster the
curbstones into segments. Finally, a cubic b-spline is fitted to each segment.
In case the fitting algorithm fails, we store the raw points.
The curbstone tracking module is the core component of the curbstone
localization pipeline. It performs an alignment between the map curbstones and
the input curbstones to estimate the vehicle pose. A sanity check is performed
to detect wrong alignments. If it is fulfilled, a pose constraint is added to
the graph. The integration into the VIZARD system is shown in Figure 2. The
single steps are explained in detail in the following section.
#### III-C1 Reference Curbstone Retrieval
To retrieve the map curbstones, a prior estimate $\hat{T}^{t}_{MB}$ is used.
In a fixed radius $r_{lookup}$ around the estimated position $\hat{P}^{t}$, we
then search for the closest vertex in Euclidean distance in the base map that
contains curbstones. Furthermore, a criterion on the maximum yaw angle between
the prior pose estimate and the base map vertex pose is used to prevent wrong
associations.
#### III-C2 Pointcloud Registration
Given the map and input curbstones, a pointcloud registration is performed. As registration algorithm, the Normal Distributions Transform (NDT) [24] from the implementation in the Point Cloud Library [25] is used. Additionally, outlier points are removed using a fixed ratio, since some artifacts might only be included in one of the two pointclouds, due to occlusion or dissimilar detections.
The pointcloud registration estimates a transformation $T_{align}$ that aligns
the input cloud to the map cloud.
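The sketch below illustrates the data flow of this step. The paper uses PCL's NDT implementation (C++); since PCL offers no standard Python API, Open3D's point-to-point ICP is substituted here purely as an illustrative stand-in, and the correspondence distance is a hypothetical value.

```python
import numpy as np
import open3d as o3d

def align_curbstones(input_pts, map_pts, max_corr=1.0):
    """Estimate T_align that maps the input curbstone cloud onto the map cloud.

    input_pts, map_pts : (N, 3) numpy arrays; the input cloud is already
    expressed via the pose prior, so the initial guess is the identity.
    """
    src, dst = o3d.geometry.PointCloud(), o3d.geometry.PointCloud()
    src.points = o3d.utility.Vector3dVector(input_pts)
    dst.points = o3d.utility.Vector3dVector(map_pts)
    reg = o3d.pipelines.registration.registration_icp(
        src, dst, max_corr, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return reg.transformation, reg.fitness  # 4x4 T_align and a match score
```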
#### III-C3 Sanity Check
In some regions, the input or map pointcloud can consist of very few points.
Matching in those scenarios can be ambiguous and lead to wrong associations.
Thus, matching is only performed if both pointclouds exceed a minimum amount
of points. Since urban street scenarios change frequently, e.g due to
constructions or parked cars, the input and map pointcloud can diverge
heavily. In those cases, pointcloud registration might fail, ending up in
wrong alignments and thereby wrong pose estimates. Therefore, a sanity check
has to be performed, to detect wrong pose estimates. To do so, a matching
score can be calculated, that can be used as an indicator if the alignment was
successful. Magnusson et al. [24] proposed a matching score for NDT. It
corresponds to the likelihood that the aligned input points lie on the
reference scan. A more detailed explanation can be found in his work [24]. The
alignment is considered valid, if the mean likelihood over all input points is
higher than a threshold $P_{min}$.
#### III-C4 Pose Constraint
The new pose estimate is calculated as:
$T^{t}_{MB_{estimate}}=\hat{T}^{t}_{MB}\cdot T_{align}$
If the sanity check is successful, the pose constraint is added to the pose
graph using a fixed covariance. For our experiments, the covariance was
determined empirically.
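Condensed into code, the sanity gate and the constraint look as follows; the score threshold and the fixed covariance below are hypothetical placeholders, since the paper states only that they were determined empirically.

```python
import numpy as np

P_MIN = 0.55  # hypothetical NDT matching-score threshold (value not given in the paper)
FIXED_COV = np.diag([0.3, 0.3, 0.3, 0.05, 0.05, 0.05]) ** 2  # hypothetical 6-DoF covariance

def curbstone_pose_constraint(T_MB_prior, T_align, ndt_score):
    """Return the (pose, covariance) constraint for the pose graph,
    or None if the sanity check on the mean NDT matching score fails."""
    if ndt_score < P_MIN:
        return None
    T_MB_estimate = T_MB_prior @ T_align  # T_MB_estimate = T_MB_prior * T_align
    return T_MB_estimate, FIXED_COV
```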
### III-D Curbstone Parameterization
Since curbstone maps can grow quickly given the multitude of possibly redundant observations, a memory overhead is induced. To reduce this memory
footprint, we perform a curbstone parameterization. Since the map contains
several artifacts like intersections or roundabouts, a curve parameterization
was preferred over a polyline. Curbstones are not continuous throughout the
whole map, as they often end at intersections. Thus, it naturally makes sense
to split the map into single connected regions. In a first step, the raw
curbstone pointcloud is subsampled. A clustering is then performed on the
subsampled points, to find connected segments of a maximum length. The length-
to-width ratio of each segment is then calculated. If a certain threshold is
fulfilled, a Cubic B-Spline is fitted to the segment. By doing so, only the
control points of the spline have to be saved, instead of all raw curbstone
points. If the threshold is not fulfilled, the raw points are saved. The steps
are explained in detail in the following and shown in Figure 3.
#### III-D1 Subsampling
The high point density of the raw pointcloud can result in high runtimes of
the clustering as well as in overfitting of the spline to noise in the points.
Thus, a spatial subsampling using a voxel grid with a leaf size of $30cm$ is
performed. A pointcloud of the means of the points inside each voxel is then
used for clustering.
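A minimal numpy sketch of this voxel-grid averaging (30 cm leaf size, as stated above):

```python
import numpy as np

def voxel_subsample(points, leaf=0.30):
    """Replace all points that fall into one voxel by their mean (voxel grid filter)."""
    voxel_idx = np.floor(points / leaf).astype(np.int64)  # voxel index of each point
    _, inverse = np.unique(voxel_idx, axis=0, return_inverse=True)
    sums = np.zeros((inverse.max() + 1, points.shape[1]))
    np.add.at(sums, inverse, points)                      # sum points per voxel
    return sums / np.bincount(inverse)[:, None]           # mean per voxel
```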
#### III-D2 Clustering
The clustering is performed in a two-step fashion. First, a Euclidean
clustering using a tolerance of more than $2m$ is performed to find large
segments. Since the curvature can vary along long segments, fitting a single
spline to it can be problematic, as different levels of detail are needed
along the segment. An example is a curb going around a corner: While low
curvature is desired in the straight sections, high curvature is needed in the
area of the corner to properly describe the curb. Thus, the coarse cluster is
split into smaller sub-clusters with a maximum expansion of 20$m$ before
validating each sub-cluster using an SVD decomposition.
#### III-D3 Cubic B-Spline Fitting
Spline fitting usually refers to fitting a spline that goes through each
single input point. However, due to the noisy nature of the curbstone segment,
a best fit given a fixed number of control points is preferred in this case
instead of fitting every single point. To achieve this, the approach proposed
by Wang et al. [26] to fit an open cubic B-Spline is used. The number of
control points is calculated proportionally to the approximate length of the
segment, using $0.25\ points/m$, but a minimum of $4$. For segments with a
large width (indicating an intersection, road curve or round-about) a fixed
amount of $20$ points is used to allow for a proper representation. To
validate our fitted spline, we define our goodness score $GS$ as follows:
$GS=\frac{\\#Spline\ Inliers}{\\#Spline\ Points}*\frac{\\#Point\
Inliers}{\\#Points}$
where $Spline\ Inliers$ is the number of sampled spline points close to a raw
point and $PointInliers$ the number of raw points close to points sampled from
a spline. Naively using all sub-segment points for the spline fitting can lead
to an overfitting of the curve. Thus, the best set of points is found in a
RANSAC-like manner. In each iteration, one third of the sub-segment points is
sampled randomly. The spline is then fitted to the sampled points. Eventually,
the spline with the highest score is chosen.
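The fit-and-score step for one 2D sub-segment can be sketched as below. scipy's smoothing spline `splprep` stands in for the fixed-control-point least-squares fit of Wang et al. [26], and the inlier distance `tol` is a hypothetical threshold; the RANSAC-like selection described above would call this repeatedly on random one-third subsets and keep the result with the highest GS.

```python
import numpy as np
from scipy.interpolate import splprep, splev
from scipy.spatial import cKDTree

def fit_and_score(segment, n_samples=200, tol=0.3):
    """Fit a parametric cubic B-spline to an (N, 2) curb segment and compute GS,
    GS = (#spline inliers / #spline points) * (#point inliers / #points)."""
    tck, _ = splprep(segment.T, k=3, s=len(segment) * tol ** 2)
    sampled = np.column_stack(splev(np.linspace(0.0, 1.0, n_samples), tck))
    d_spline = cKDTree(segment).query(sampled)[0]  # spline sample -> nearest raw point
    d_point = cKDTree(sampled).query(segment)[0]   # raw point -> nearest spline sample
    gs = (d_spline < tol).mean() * (d_point < tol).mean()
    return tck, gs
```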
#### III-D4 Spline Sampling
To be able to perform matching with the input cloud, points are spatially
uniformly sampled from the splines during runtime. Those sampled points are
then used as the map pointcloud.
## IV Evaluation
In the following section, the performance of the proposed pipeline is
evaluated and compared against the VIZARD pipeline as a benchmark. Long-term
experiments in an urban scenario are performed on varying weather and
appearance conditions. A special focus is set on how curbstone map tracking
influences localization accuracy and recall. Example cases are presented,
where localization gaps in the VIZARD pipeline could be bridged using
curbstone localization. The sensor set-up of the UP-Drive vehicle and the
datasets used in the experiments are described in the next section.
### IV-A The UP-Drive Platform
For the collection of the datasets, the UP-Drive vehicle was used. Its sensor
setup consists of four fish-eye cameras, resulting in a surround view of the
car. Gray-scale images with a resolution of $640x400px$ are recorded at
$30Hz$. Five Velodyne LiDARs are mounted on top of the car. Curbstones are
obtained from the approach as described by Goga et al. [11]. Additionally, a
low-cost IMU and wheel tick encoders are used to provide odometry
measurements. A consumer-grade GPS sensor is used to gain an initial position
estimate and near-by map poses are used to generate an initial orientation
estimate.
### IV-B UP-Drive Dataset Collection
The UP-Drive dataset collection was recorded between December 2017 and
November 2019 in Wolfsburg, Germany, at the Volkswagen factory and its
surrounding area and aggregate a total driving distance of multiple $100$
kilometers. The environment is urban, with common artifacts such as busy
streets, buses, zebra crossings and pedestrians. Since the data was collected
over several months, seasonal appearance changes as well as multiple weather
and day-time conditions are present. For this work, our dataset selection is
dependent on the availability of curbstone measurements, which result from the
curbstone detection pipeline from Goga et al. [11]. Since the curbstone
detection module was only enabled in some of our datasets, our evaluation
dataset collection consists of $5$ sessions, totaling $10$ drives, from
August 2019 to November 2019. Each session contains two partially overlapping
routes in opposite directions and consists of the same amount of sunny and
cloudy/rainy conditions captured throughout a day. Recordings in rainy
conditions are categorized as Cloudy, since there is little difference in
performance between rainy and dry conditions.
### IV-C Metrics
#### IV-C1 Localization Recall
The fraction of the total travelled distance in which a successful
localization was achieved is calculated as the localization recall $r$[%].
While using only the visual pipeline, a localization attempt at time $t$ is
accepted as successful if a minimum of $10$ inlier landmark observations is
present after pose optimization. When using the combined pipeline, a
localization is counted as successful, if the condition above is fulfilled or
if a viable curbstone alignment (see section III-C) could be performed.
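As a small illustration of how such a distance-based recall can be aggregated
(an assumption about the bookkeeping, for illustration only):

```python
import numpy as np

def localization_recall(distances, success):
    # distances[i]: metres travelled between pose i-1 and pose i
    # success[i]: True if a successful localization was available at pose i
    distances = np.asarray(distances, dtype=float)
    success = np.asarray(success, dtype=bool)
    return 100.0 * distances[success].sum() / distances.sum()
```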
#### IV-C2 Localization Accuracy
As no ground-truth for the described dataset exists, the poses estimated by an
RTK GPS sensor are used instead as a reference. RTK GPS altitude estimates are
not reliable, so the error in $z$ cannot be calculated meaningfully. Therefore,
we focus on the planar $\mathbf{p^{e}_{xy}}$ and lateral translation error
$\mathbf{p^{e}_{y}}$ as well as on the orientation error
$\mathbf{\theta^{e}_{xyz}}$.
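The sketch below shows how such errors can be computed from an estimated and a
reference pose; the frame conventions (lateral error as the $y$ component of
the translation error in the reference vehicle frame, orientation error as the
angle of the relative rotation) are illustrative assumptions:

```python
import numpy as np

def pose_errors(p_est, R_est, p_ref, R_ref):
    # p_*: positions (x, y, ...); R_*: 3x3 rotation matrices.
    # Translation error expressed in the reference vehicle frame,
    # so its y component is the lateral error.
    yaw = np.arctan2(R_ref[1, 0], R_ref[0, 0])
    d = np.asarray(p_est[:2], float) - np.asarray(p_ref[:2], float)
    c, s = np.cos(-yaw), np.sin(-yaw)
    e_xy = float(np.hypot(d[0], d[1]))                 # planar error p^e_xy
    e_y = abs(s * d[0] + c * d[1])                     # lateral error p^e_y
    # Orientation error: rotation angle of R_ref^T R_est, in degrees.
    cos_t = (np.trace(R_ref.T @ R_est) - 1.0) / 2.0
    e_theta = float(np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0))))
    return e_xy, e_y, e_theta
```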
| $\mathbf{r}_{mt}$[%] | | $\mathbf{\bar{p}^{e}_{xy}}$, $\mathbf{\bar{p}^{e}_{y}}$ | | $\mathbf{\bar{\theta}^{e}_{xyz}}$
---|---|---|---|---|---
| $10$-$08$ | $10$-$25$ | $11$-$08$ | $11$-$20$ | | $10$-$08$ | $10$-$25$ | $11$-$08$ | $11$-$20$ | | $10$-$08$ | $10$-$25$ | $11$-$08$ | $11$-$20$
$MOZARD$ - $Map$ | | | | | | | | | | | | | |
$($08-21$)$ | 100.0 | 99.94 | 99.06 | 99.82 | | 0.08 [0.34],0.04 [0.21] | 0.07 [0.26], 0.03 [0.13] | 0.09 [0.37], 0.04 [0.2] | 0.13 [0.37], 0.05 [0.2] | | 0.64 [0.75] | 0.74 [0.76] | 1.04 [1.33] | 1.09 [1.4]
$($08-21; 10-08$)$ | - | 100.0 | 100.0 | 100.0 | | - | 0.06 [0.15], 0.03 [0.08] | 0.07 [0.24], 0.03 [0.13] | 0.1 [0.29], 0.04 [0.14] | | - | 0.66 [0.65] | 1.15 [1.4] | 1.2 [1.4]
$($08-21; 10-08; 10-25$)$ | - | - | 100.0 | 100.0 | | - | - | 0.07 [0.22], 0.03[0.11] | 0.09 [0.27], 0.03 [0.12] | | - | - | 1.21 [1.43] | 1.23 [1.42]
$($08-21; 10-08; 10-25; 11-08$)$ | - | - | - | 100.0 | | - | - | - | 0.07 [0.18], 0.02 [0.1] | | - | - | - | 1.13 [1.38]
$VIZARD$ - $Map$ | | | | | | | | | | | | | |
$($08-21$)$ | 100.0 | 98.2 | 97.94 | 91.76 | | 0.08 [0.28], 0.04 [0.16] | 0.07 [0.26], 0.03 [0.13] | 0.09 [0.29], 0.04 [0.17] | 0.13 [0.37], 0.05 [0.21] | | 0.64 [0.76] | 0.74 [0.76] | 1.1 [0.16] | 1.07 [1.28]
$($08-21; 10-08$)$ | - | 100.0 | 99.9 | 97.89 | | - | 0.06 [0.13], 0.02 [0.07] | 0.07 [0.24], 0.03 [0.13] | 0.1 [0.29], 0.04 [0.14] | | - | 0.55 [0.67] | 1.15 [1.4] | 1.19 [1.37]
$($08-21; 10-08; 10-25$)$ | - | - | 100.0 | 99.18 | | - | - | 0.07 [0.22], 0.03 [0.11] | 0.09 [0.25], 0.03 [0.13] | | - | - | 1.21 [1.43] | 1.23 [1.41]
$($08-21; 10-08; 10-25; 11-08$)$ | - | - | - | 99.7 | | - | - | - | 0.07 [0.18], 0.02 [0.1] | | - | - | - | 1.13 [1.38]
TABLE I: The localization performance on the UP-Drive dataset, showing
localization recall, and the median planar $\mathbf{\bar{p}^{e}_{xy}}$,
lateral $\mathbf{\bar{p}^{e}_{y}}$ and orientation
($\mathbf{\bar{\theta}^{e}_{xyz}}$) accuracy. The $90$th percentile is shown in
square brackets. The dates in round brackets indicate the sessions used for
mapping, e.g. (08-21) represents August 21st.
### IV-D Localization Accuracy and Recall
In order to fully rely on MOZARD to control the car in the UP-Drive project, a
high localization recall with an accuracy below 0.5$m$ is paramount, as only
short driving segments with no localization may be bridged with wheel-odometry
before the car deviates from its designated lane. Curbstones are not
available for the whole trajectory, but for around 89% of the distance of the
map. We compare the localization recall and accuracy of our localization system
to our prior work on visual localization, VIZARD [1]. Note, however, that our
prior work relies on cameras only, whereas MOZARD is able to use both LiDAR
and vision. To demonstrate
that curbstones provide useful additional information, we construct and expand
a map iteratively using multiple datasets. Our first map is constructed from
two datasets (one session) from August 2019. We then evaluate this map against
multiple sessions from different months and add these sessions to our (multi-
session) map in an iterative fashion. We present the resulting key evaluation
metrics (localization recall $\mathbf{r}_{mt}$[%] and localization accuracy)
in Table I over all sessions. By including this comparison, we aim to
highlight the gain in localization recall attainable by using MOZARD while
keeping a consistent median translation and orientation error. As shown in
Table I, MOZARD attains close to $100\%$ recall on all $4$ sessions of the
UP-Drive dataset, whereas VIZARD reaches comparable recall only as sessions
are added to its base map, owing to the change in visual appearance. We
further note that in both cases the median planar localization error is below
15$cm$, while the median lateral error is below 10$cm$. The median orientation
errors are on average less than $1$ degree. For MOZARD, the 90th percentile
shows an increase, which is likely due to the higher uncertainty of curbstone
measurements.
### IV-E Runtime
On our live car platform, Goga et al. [11] demonstrated that their curbstone
detection pipeline, deployed on two Nvidia GTX 1080 GPUs, takes around 20$ms$
for the CNN image segmentation to complete on all $4$ cameras. An additional
32$ms$ is needed for the fusion of the 5 LiDARs, running on an Intel i7-3770K CPU. Our
curbstone alignment module takes an average of approximately 25$ms$, while the
map tracking module (with vision) can take from 27$ms$ with a single session
map up to 48$ms$ on our largest multi-session map (see Table I) and has been
evaluated on an Intel Xeon E3-1505M CPU. This would allow MOZARD to run at
around 10$Hz$ on a single machine with a single-session map. Table II summarizes
our findings.
Module | Average Runtime [ms]
---|---
Curbstone Detection | 52
Curbstone Tracking | 25
Map Tracking (VIZARD) | 27-48
Total | 104-125
TABLE II: Runtime of each component of MOZARD. Curbstone Detection and
Curbstone Tracking show the average runtime over all evaluated datasets on a
single-session map, while Map Tracking shows the average runtime range from a
single-session map to the largest multi-session map.
### IV-F Case Study
We provide further insights into our pipeline by showing specific failure
examples for each component. Sample images of a section where the visual
localization fails on the evaluated datasets are depicted in Figure 4. Due to
occlusion and the absence of surrounding building structures, barely any
stable visual cues are found in this section, preventing the visual
localization system from matching a sufficient amount of landmarks from the
map. This example demonstrates the current limitations of VIZARD, while our
MOZARD pipeline is able to handle these sparse keypoint-based scenarios.
Unfortunately, there are also scenarios where keypoints and curbstones are
both lacking, or where our curbstone alignment fails - conditions under which
both pipelines are likely to fail, as depicted in the right image of Figure 4.
A further extension of our current framework to other geometric shapes, such
as poles or road markings, could provide additional useful information that
would allow us to further increase the localization performance. Note that we used a
single session map for the evaluation of this case study and VIZARD is able to
bridge some of these scenarios if enough datasets are provided during the
mapping process.
Figure 4: On the left, a sample image of a trajectory segment that fails to
localize due to occlusion. A lack of keypoints renders it unfeasible to match
a sufficient number of map landmarks. In the middle, the projected curbstone
information is depicted in the camera frame in red, enabling continued
localization although visual localization failed. On the right, a sample image
is depicted where both our curbstone and vision pipelines fail. In this case,
curbstones are actually detected, but alignment fails due to our constraints.
## V Conclusions
We presented MOZARD, a geometric extension to our visual localization system
for urban outdoor environments. Through our evaluation on $8$ datasets,
including several kilometers of real-world driving conditions, we demonstrated
the benefits of using curbstone information for localization and mapping. Our
datasets used in the experiments contain challenging appearance conditions
such as seasonal changes, wet road surfaces and sun reflections. A comparison
with our prior work demonstrated that we can achieve a higher recall
performance while using fewer datasets during the mapping process, since the
purely visual pipeline fails in sparse-keypoint scenarios. Our run-time
analysis shows that our approach has real-time capability. Although the
curbstone detection stack of MOZARD takes on average more computing time than
VIZARD, note that an object segmentation/detection algorithm has to be
deployed on a self-driving car for environmental perception regardless of
whether localization takes place. Even taking the total computational time
into account, our approach still runs at $10Hz$ while needing up to four times
less mapping data to achieve the same localization performance. We also showed
specific cases where both of our pipelines fail due to occlusions and/or
curbstone misalignment, suggesting future work such as the extension of our
approach to poles and road markings. Our findings showed that by extending a
keypoint-based visual localization approach with geometric features -
curbstones in our case - an improvement in robustness with consistently high
localization accuracy is obtained.
## ACKNOWLEDGMENT
This project has received funding from the EU H2020 research project under
grant agreement No 688652 and from the Swiss State Secretariat for Education,
Research and Innovation (SERI) under contract number 15.0284.
## References
* [1] M. Bürki, L. Schaupp, M. Dymczyk, R. Dubé, C. Cadena, R. Siegwart, and J. Nieto, “Vizard: Reliable visual localization for autonomous vehicles in urban outdoor environments,” 2019.
* [2] Y. Lu, J. Huang, Y.-T. Chen, and B. Heisele, “Monocular localization in urban environments using road markings,” 06 2017, pp. 468–474.
* [3] L. D'Orazio, N. Conci, and F. Stoffella, “Exploitation of road signalling for localization refinement of autonomous vehicles,” 07 2018, pp. 1–6.
* [4] K. Hara and H. Saito, “Vehicle localization based on the detection of line segments from multi-camera images,” vol. 27, pp. 617–626, 01 2015.
* [5] L. Weng, M. Yang, L. Guo, B. Wang, and C. Wang, “Pole-based real-time localization for autonomous driving in congested urban scenarios,” in _2018 IEEE International Conference on Real-time Computing and Robotics (RCAR)_ , Aug 2018, pp. 96–101.
* [6] R. Spangenberg, D. Goehring, and R. Rojas, “Pole-based localization for autonomous vehicles in urban scenarios,” in _2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)_ , Oct 2016, pp. 2161–2166.
* [7] N. John, B. Anusha, and K. Kutty, “A Reliable Method for Detecting Road Regions from a Single Image Based on Color Distribution and Vanishing Point Location,” _Procedia Computer Science_ , vol. 58, pp. 2–9, jan 2015.
* [8] S. Hosseinyalamdary and M. Peter, “Lane level localization : using images and hd maps to mitgate the lateral error,” in _Proceedings of ISPRS Hannover Workshop : HRIGI 17 – CMRT 17 – ISA 17 – EuroCOW 17, 6–9 June 2017, Hannover, Germany_ , ser. ISPRS Archives, C. Heipke, Ed., vol. XLII-1/W1. International Society for Photogrammetry and Remote Sensing (ISPRS), 2017, pp. 129–134.
* [9] M. Enzweiler, P. Greiner, C. Knoppel, and U. Franke, “Towards multi-cue urban curb recognition,” in _2013 IEEE Intelligent Vehicles Symposium (IV)_. IEEE, jun 2013, pp. 902–907.
* [10] S. Panev, F. Vicente, F. De la Torre, and V. Prinet, “Road curb detection and localization with monocular forward-view vehicle camera,” _IEEE Transactions on Intelligent Transportation Systems_ , vol. 20, no. 9, pp. 3568–3584, Sep. 2019.
* [11] S. E. C. Goga and S. Nedevschi, “Fusing semantic labeled camera images and 3D LiDAR data for the detection of urban curbs,” in _Proceedings - 2018 IEEE 14th International Conference on Intelligent Computer Communication and Processing, ICCP 2018_ , 2018.
* [12] Z. Liu, J. Wang, and D. Liu, “A new curb detection method for unmanned ground vehicles using 2d sequential laser data,” 2013.
* [13] A. Miraliakbari, M. Hahn, and S. Sok, “Automatic extraction of road surface and curbstone edges from mobile laser scanning data,” 2015.
* [14] C. F. Lopez, D. F. Llorca, C. Stiller, and M. Á. Sotelo, “Curvature-based curb detection method in urban environments using stereo and laser,” _2015 IEEE Intelligent Vehicles Symposium (IV)_ , pp. 579–584, 2015.
* [15] C. F. Lopez, R. Izquierdo, D. F. Llorca, and M. Á. Sotelo, “Road curb and lanes detection for autonomous driving on urban scenarios,” _17th International IEEE Conference on Intelligent Transportation Systems (ITSC)_ , pp. 1964–1969, 2014.
* [16] A. Schaefer, D. Buscher, J. Vertens, L. Luft, and W. Burgard, “Long-term urban vehicle localization using pole landmarks extracted from 3-d lidar scans,” 09 2019, pp. 1–7.
* [17] P. Bonnifait, M. Jabbour, and V. Cherfaoui, “Autonomous navigation in urban areas using GIS-managed information,” _International Journal of Vehicle Autonomous Systems_ , vol. 6, no. 1/2, p. 83, 2008.
* [18] B. Qin, Z. J. Chong, T. Bandyopadhyay, M. H. Ang, E. Frazzoli, and D. Rus, “Curb-Intersection Feature Based Monte Carlo Localization on Urban Roads,” Tech. Rep.
* [19] J. Stueckler, H. Schulz, and S. Behnke, “In-lane localization in road networks using curbs detected in omnidirectional height images,” in _Proceedings of Robotik 2008_ , 2008.
* [20] A. Kampker, J. Hatzenbuehler, L. Klein, M. Sefati, K. Kreisköther, and D. Gert, _Concept Study for Vehicle Self-Localization Using Neural Networks for Detection of Pole-Like Landmarks: Proceedings of the 15th International Conference IAS-15_ , 01 2019, pp. 689–705.
* [21] J. Kummerle, M. Sons, F. Poggenhans, T. Kuhner, M. Lauer, and C. Stiller, “Accurate and efficient self-localization on roads using basic geometric primitives,” 05 2019, pp. 5965–5971.
* [22] Z. Xiao, K. Jiang, S. Xie, T. Wen, C. Yu, and D. Yang, “Monocular vehicle self-localization method based on compact semantic map,” 05 2018.
* [23] H. Lee, J. Park, and W. Chung, “Localization of Outdoor Mobile Robots Using Curb Features in Urban Road Environments,” _Mathematical Problems in Engineering_ , vol. 2014, pp. 1–12, apr 2014.
* [24] M. Magnusson, “The Three-Dimensional Normal-Distributions Transform —an Efficient Representation for Registration, Surface Analysis, and Loop Detection,” 2009.
* [25] R. B. Rusu and S. Cousins, “3D is here: Point Cloud Library (PCL),” in _IEEE International Conference on Robotics and Automation (ICRA)_ , Shanghai, China, May 9-13 2011.
* [26] W. Wang, H. Pottmann, and Y. Liu, “Fitting b-spline curves to point clouds by curvature-based squared distance minimization,” _ACM Transactions on Graphics_ , vol. 25, pp. 214–238, 05 2006.
# Unmixedness of some weighted oriented graphs
Lourdes Cruz, Yuriko Pitones, Enrique Reyes
Departamento de Matemáticas, Centro de Investigación y de Estudios Avanzados
del Instituto Politécnico Nacional, Apartado Postal 14–740, Ciudad de México
07000, México
###### Abstract
Let $D=(G,\mathcal{O},w)$ be a weighted oriented graph whose edge ideal is
$I(D)$. In this paper, we characterize the unmixed property of $I(D)$ for each
one of the following cases: $G$ is an $SCQ$ graph; $G$ is a chordal graph; $G$
is a simplicial graph; $G$ is a perfect graph; $G$ has no $4$- or $5$-cycles;
$G$ is a graph without $3$- and $5$-cycles; and ${\rm girth}(G)\geqslant 5$.
Keywords: Weighted oriented graphs, edge ideal, unmixed ideal, $SCQ$ graph,
perfect graph.
## 1 Introduction
A weighted oriented graph $D$ is a triplet $(G,\mathcal{O},w)$, where $G$ is a
simple graph whose vertex set is $V(G)$; $\mathcal{O}$ is an edge orientation
of $G$ (an assignment of a direction to each edge of $G$); and $w$ is a
function $w:V(G)\to\mathbb{N}$. In this case, $G$ is called the underlying
graph of $D$. The vertex set of $D$ is $V(D):=V(G)$, the edge set of $D$,
denoted by $E(D)$, is the set of oriented edges of the oriented graph
$(G,\mathcal{O})$. The _weight_ of $x\in V(D)$ is $w(x)$ and we denote the set
$\\{x\in V(D)\mid w(x)>1\\}$ by $V^{+}$. If $R=K[x_{1},\ldots,x_{n}]$ is a
polynomial ring over a field $K$, then the edge ideal of $D$ is
$I(D)=(x_{i}x_{j}^{w(x_{j})}\mid(x_{i},x_{j})\in E(D))$ where
$V(D)=\\{x_{1},\ldots,x_{n}\\}$. These ideals (introduced in [10]) generalize
the usual edge ideals of graphs (see [14]), since if $w(x)=1$ for each $x\in
V(D)$, then $I(D)$ is the edge ideal of $G$, i.e. $I(D)=I(G)$. One interest in
$I(D)$ comes from coding theory, in studies of Reed–Muller-type codes
(see [1, 9]). Furthermore, algebraic and combinatorial invariants and
properties of $I(D)$ have been studied in several papers [7, 8, 10, 11, 15]. In
particular, a characterization of the irredundant irreducible decomposition of
$I(D)$ is given in [10]. This characterization permits studying when $I(D)$ is
unmixed, using the strong vertex covers (Definition 2.10 and Theorem 2.12).
The unmixed property of $I(D)$ has been studied when $G$ is one of the
following graphs: cycles in [10]; graphs with whiskers in [7, 10]; bipartite
graphs in [8, 10]; graphs without $3$-, $5$- and $7$-cycles and König graphs
in [11].
In this paper, we study the unmixed property of $I(D)$ for some families of
weighted oriented graphs. In Section 2, we give the known results and
definitions that we will use in the following sections. In Section 3 (in
Theorem 3.10), we characterize when a subset of $V(D)$ is contained in a
strong vertex cover. Using this result, we characterize the unmixed property
of $I(D)$, when $G$ is a perfect graph (see Theorem 3.11). In Section 4 (in
Theorem 4.8), we characterize the unmixed property of $I(D)$ when $G$ is an
$SCQ$ graph (see Definition 2.27). These graphs generalize the graph defined
in [13] and, in the context of this paper, they are important because if $G$ is
well–covered and $G$ is simplicial, $G$ is chordal, or $G$ has no small cycles
of certain lengths, then $G$ is an $SCQ$ graph (see Remark 2.29 and Theorem 2.30).
In [3], using the $SCQ$ graphs the authors characterize the vertex
decomposable property of $G$ when each $5$-cycle of $G$ has at least $4$
chords. Also, in Section 4, we characterize the unmixed property of $I(D)$
when $G$ is König, $G$ is simplicial or $G$ is chordal (see Corollaries 4.2
and 4.9). In Section 5, we characterize the unmixed property of $I(D)$, when
$G$ has no $3$- or $5$-cycles; or $G$ has no $4$- or $5$-cycles; or $G$ has
girth greater than 4 (see Theorems 5.4, 5.10 and 5.13). Finally, in Section 6,
we give some examples. Our results generalize the results about the unmixed
property of $I(D)$ given in [7, 8, 10, 11], since if $G$ is well-covered and
$G$ is one of the following graphs: cycles, graphs with whiskers, bipartite
graphs, König graphs, or graphs without $3$-, $5$- and $7$-cycles, then $G$
is an $SCQ$ graph.
## 2 Preliminaries
In this Section, we give some definitions and well-known results that we will
use in the following sections. Let $D=(G,\mathcal{O},w)$ be a weighted
oriented graph, recall that $V^{+}=\\{x\in V(D)\mid w(x)>1\\}$ and
$I(D)=\big{(}x_{i}x_{j}^{w(x_{j})}\mid(x_{i},x_{j})\in E(D)\big{)}$.
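For instance (a small illustration of the notation, not taken from the cited
references): if $V(D)=\\{x_{1},x_{2},x_{3}\\}$,
$E(D)=\\{(x_{1},x_{2}),(x_{2},x_{3})\\}$, $w(x_{1})=1$, $w(x_{2})=2$ and
$w(x_{3})=3$, then $V^{+}=\\{x_{2},x_{3}\\}$ and
$I(D)=(x_{1}x_{2}^{2},x_{2}x_{3}^{3})$.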
###### Definition 2.1
Let $x$ be a vertex of $D$, the sets
$N_{D}^{+}(x):=\\{y\mid(x,y)\in E(D)\\}\quad{\rm and}\quad
N_{D}^{-}(x):=\\{y\mid(y,x)\in E(D)\\}$
are called the out-neighbourhood and the in-neighbourhood of $x$,
respectively. The neighbourhood of $x$ is the set $N_{D}(x):=N_{D}^{+}(x)\cup
N_{D}^{-}(x)$. Furthermore, $N_{D}[x]:=N_{D}(x)\cup\\{x\\}$. Also, if
$A\subseteq V(D)$ then $N_{D}(A):=\\{b\in V(D)\mid b\in N_{D}(a)\ {\rm for\
some}\ a\in A\\}$.
###### Definition 2.2
Let $x$ be a vertex of $D$. If $N_{D}^{+}(x)=\emptyset$, then $x$ is called a
sink. On the other hand, $x$ is a source if $N_{D}^{-}(x)=\emptyset$.
###### Remark 2.3
Consider the weighted oriented graph $\tilde{D}=(G,\mathcal{O},\tilde{w})$
with $\tilde{w}(x)=1$ if $x$ is a source and $\tilde{w}(x)=w(x)$ if $x$ is not
a source. Hence, $I(\tilde{D})=I(D)$. Therefore, in this paper, we assume that
if $x$ is a source, then $w(x)=1$.
###### Definition 2.4
The degree of $x\in V(D)$ is $deg_{G}(x):=|N_{D}(x)|$ and
$N_{G}(x):=N_{D}(x)$.
###### Definition 2.5
A vertex cover $\mathcal{C}$ of $D$ (resp. of $G$) is a subset of $V(D)$
(resp. of $V(G)$), such that if $(x,y)\in E(D)$ (resp. $\\{x,y\\}\in E(G)$),
then $x\in\mathcal{C}$ or $y\in\mathcal{C}$. A vertex cover $\mathcal{C}$ of
$D$ is minimal if each proper subset of $\mathcal{C}$ is not a vertex cover of
$D$.
###### Remark 2.6
Let $\mathcal{C}$ be a vertex cover of $D$ and $e\in E(G)$. Then,
$\mathcal{C}\cap e\neq\emptyset$. Furthermore, if $a\notin e$ then
$e\cap(\mathcal{C}\setminus a)\neq\emptyset$, and if $e=\\{a,b\\}$ then $b\in
N_{D}(a)$. Hence, $(\mathcal{C}\setminus a)\cup N_{D}(a)$ is a vertex cover of $D$.
###### Definition 2.7
Let $\mathcal{C}$ be a vertex cover of $D$, we define the following three
sets:
* •
$L_{1}(\mathcal{C}):=\\{x\in\mathcal{C}\mid
N_{D}^{+}(x)\cap\mathcal{C}^{c}\neq\emptyset\\}$ where
$\mathcal{C}^{c}=V(D)\setminus\mathcal{C}$,
* •
$L_{2}(\mathcal{C}):=\\{x\in\mathcal{C}\mid\mbox{$x\notin L_{1}(\mathcal{C})$
and $N^{-}_{D}(x)\cap\mathcal{C}^{c}\neq\emptyset$}\\}$,
* •
$L_{3}(\mathcal{C}):=\mathcal{C}\setminus(L_{1}(\mathcal{C})\cup
L_{2}(\mathcal{C}))$.
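For illustration (a small example of our own): let $D$ be the oriented path
with $E(D)=\\{(x_{1},x_{2}),(x_{2},x_{3})\\}$ and consider the vertex cover
$\mathcal{C}=\\{x_{2},x_{3}\\}$, so $\mathcal{C}^{c}=\\{x_{1}\\}$. Since
$N_{D}^{+}(x_{2})=\\{x_{3}\\}\subseteq\mathcal{C}$ and
$N_{D}^{+}(x_{3})=\emptyset$, we have $L_{1}(\mathcal{C})=\emptyset$; since
$N_{D}^{-}(x_{2})=\\{x_{1}\\}\subseteq\mathcal{C}^{c}$, we have
$L_{2}(\mathcal{C})=\\{x_{2}\\}$; hence $L_{3}(\mathcal{C})=\\{x_{3}\\}$.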
###### Remark 2.8
If $\mathcal{C}$ is a vertex cover of $G$, $x\in V(G)\setminus\mathcal{C}$ and
$y\in N_{G}(x)$, then $e:=\\{x,y\\}\in E(G)$ and
$e\cap\mathcal{C}\neq\emptyset$. So, $y\in\mathcal{C}$, since
$x\notin\mathcal{C}$. Hence, $N_{G}(x)\subseteq\mathcal{C}$.
###### Remark 2.9
Let $\mathcal{C}$ be a vertex cover of $D$, then $x\in L_{3}(\mathcal{C})$ if
and only if $N_{D}[x]\subseteq\mathcal{C}$. Hence,
$L_{3}(\mathcal{C})=\emptyset$ if and only if $\mathcal{C}$ is minimal.
###### Definition 2.10
A vertex cover $\mathcal{C}$ of $D$ is strong if for each $x\in
L_{3}(\mathcal{C})$ there is $(y,x)\in E(D)$ such that $y\in
L_{2}(\mathcal{C})\cup L_{3}(\mathcal{C})=\mathcal{C}\setminus
L_{1}(\mathcal{C})$ with $y\in V^{+}$ (i.e. $w(y)>1$).
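In the small example given after Definition 2.7, the only vertex of
$L_{3}(\mathcal{C})$ is $x_{3}$ and its only in-neighbour is $x_{2}\in
L_{2}(\mathcal{C})$; thus $\mathcal{C}=\\{x_{2},x_{3}\\}$ is a strong vertex
cover if and only if $w(x_{2})>1$.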
###### Definition 2.11
An ideal $I$ of a ring $R$ is unmixed if each one of its associated primes has
the same height.
###### Theorem 2.12
[10, Theorem 31] The following conditions are equivalent:
1. (1)
$I(D)$ is unmixed.
2. (2)
Each strong vertex cover of $D$ has the same cardinality.
3. (3)
$I(G)$ is unmixed and $L_{3}(\mathcal{C})=\emptyset$ for each strong vertex
cover $\mathcal{C}$ of $D$.
###### Definition 2.13
The cover number of $G$ is $\tau(G):={\rm min\ }\\{|\mathcal{C}|\
\mid\mathcal{C}\ {\rm\ is\ a\ vertex\ cover\ of\ }G\\}$. Furthermore, a
$\tau$-reduction of $G$ is a collection of pairwise disjoint induced subgraphs
$H_{1},\ldots,H_{s}$ of $G$ such that $V(G)=\cup_{i=1}^{s}V(H_{i})$ and
$\tau(G)=\sum_{i=1}^{s}\tau(H_{i})$.
###### Remark 2.14
We have $\tau(G)=|\mathcal{C}_{1}|$, for some vertex cover $\mathcal{C}_{1}$.
So, $\mathcal{C}_{1}$ is minimal. Thus, by Remark 2.9,
$L_{3}(\mathcal{C}_{1})=\emptyset$. Hence, $\mathcal{C}_{1}$ is strong. Now,
if $I(D)$ is unmixed, then by (2) in Theorem 2.12,
$|\mathcal{C}|=|\mathcal{C}_{1}|=\tau(G)$ for each strong vertex cover
$\mathcal{C}$ of $D$.
###### Definition 2.15
A stable set of $G$ is a subset of $V(G)$ containing no edge of $G$. The
stable number of $G$, denoted by $\beta(G)$, is $\beta(G):={\rm max\ }\\{|S|\
\mid S{\rm\ is\ a\ stable\ set\ of\ }G\\}$. Furthermore $G$ is well–covered if
$|S|=\beta(G)$ for each maximal stable set $S$ of $G$.
###### Remark 2.16
$S$ is a stable set of $G$ if and only if $V(G)\setminus S$ is a vertex cover.
Hence, $\tau(G)=|V(G)|-\beta(G)$.
###### Remark 2.17
[11, Remark 2.12] $G$ is well-covered if and only if $I(G)$ is unmixed.
###### Definition 2.18
A collection of pairwise disjoint edges of $G$ is called a matching. A perfect
matching is a matching whose union is $V(G)$. On the other hand, $G$ is a
König graph if $\tau(G)=\nu(G)$ where $\nu(G)$ is the maximum cardinality of a
matching of $G$.
###### Definition 2.19
Let $e=\\{b,b^{\prime}\\}$ be an edge of $G$. If $\\{a,a^{\prime}\\}\in E(G)$
for each pair of edges $\\{a,b\\},\\{a^{\prime},b^{\prime}\\}\in E(G)$, then we
say that $e$ has the property (P). On the
other hand, we say that a matching $P$ of $G$ has the property (P) if each
edge of $P$ has the property (P).
###### Theorem 2.20
[2, Proposition 15] If $G$ is a König graph without isolated vertices, then
$G$ is well–covered if and only if $G$ has a perfect matching with the
property (P).
###### Definition 2.21
$\mathcal{P}=(x_{1},\ldots,x_{n})$ is a walk (resp. an oriented walk) if
$\\{x_{i},x_{i+1}\\}\in E(G)$ for $i=1,\ldots,n-1$. In this case,
$\mathcal{P}$ is a path (resp. an oriented path) if $x_{1},\ldots,x_{n}$ are
different. On the other hand, a walk (resp. an oriented walk),
$C=(z_{1},z_{2},\ldots,z_{n},z_{1})$ is an $n$-cycle (resp. an oriented
$n$-cycle) if $(z_{1},\ldots,z_{n})$ is a path (resp. is an oriented path).
###### Definition 2.22
Let $A$ be a subset of $V(G)$, then the graph induced by $A$, denoted by
$G[A]$, is the subgraph $G_{1}$ of $G$ with $V(G_{1})=A$ and $E(G_{1})=\\{e\in
E(G)\mid e\subseteq A\\}$. On the other hand, a subgraph $H$ of $G$ is induced
if there is $B\subseteq V(G)$ such that $H=G[B]$.
A cycle $C$ of $G$ is induced if $C$ is an induced subgraph of $G$.
###### Definition 2.23
A weighted oriented graph
$D^{\prime}=(G^{\prime},\mathcal{O}^{\prime},w^{\prime})$ is a weighted
oriented subgraph of $D=(G,\mathcal{O},w)$, if
$(G^{\prime},\mathcal{O}^{\prime})$ is an oriented subgraph of
$(G,\mathcal{O})$ and $w^{\prime}(x)=w(x)$ for each $x\in V(G^{\prime})$.
Furthermore, $D^{\prime}$ is an induced weighted oriented subgraph of $D$ if
$G^{\prime}$ is an induced subgraph of $G$.
###### Definition 2.24
A vertex $v$ is called simplicial if the induced subgraph $H=G[N_{G}[v]]$ is a
complete graph; in this case, taking $k=|V(H)|-1$, $H$ is called a $k$-simplex (or
simplex). The set of simplexes of $G$ is denoted by $S_{G}$. $G$ is a
simplicial graph if every vertex of $G$ is a simplicial vertex of $G$ or is
adjacent to a simplicial vertex of $G$.
###### Definition 2.25
The minimum length of a cycle (contained) in a graph $G$, is called the girth
of $G$. On the other hand, $G$ is a chordal graph if the induced cycles are
$3$-cycles.
###### Theorem 2.26
[12, Theorems 1 and 2] If $G$ is a chordal or simplicial graph, then $G$ is
well-covered if and only if every vertex of $G$ belongs to exactly one simplex
of $G$.
###### Definition 2.27
An induced $5$-cycle $C$ of $G$ is called basic if $C$ does not contain two
adjacent vertices of degree three or more in $G$. $G$ is an $SCQ$ graph (or
$G\in SCQ$) if $G$ satisfies the following conditions:
1. $(i)$
There is $Q_{G}$ such that $Q_{G}=\emptyset$ or $Q_{G}$ is a matching of $G$
with the property (P).
2. $(ii)$
$\\{V(H)\mid H\in S_{G}\cup C_{G}\cup Q_{G}\\}$ is a partition of $V(G)$,
where $C_{G}$ is the set of basic $5$-cycles.
In the following three results, we use the graphs of Figure 1.
Figure 1: The graphs $C_{7}$, $P_{10}$, $P_{13}$, $T_{10}$, $P_{14}$ and $Q_{13}$.
###### Theorem 2.28
[6, Theorem 1.1] If $G$ is connected without $4$- and $5$-cycles, then $G$ is
well-covered if and only if $G\in\\{C_{7},T_{10}\\}$ or $\\{V(H)\mid H\in
S_{G}\\}$ is a partition of $V(G)$.
###### Remark 2.29
Suppose $G$ is well-covered and $G$ is simplicial, chordal, or a graph without
$4$- and $5$-cycles with $G\notin\\{C_{7},T_{10}\\}$. Then, by
Theorems 2.26 and 2.28, $\\{V(H)\mid H\in S_{G}\\}$ is a partition of $V(G)$.
Therefore, $G$ is an $SCQ$ graph with $C_{G}=Q_{G}=\emptyset$.
###### Theorem 2.30
[5, Theorem 2 and Theorem 3] If $G$ is a connected graph without $3$- and
$4$-cycles, then $G$ is well-covered if and only if
$G\in\\{K_{1},C_{7},P_{10},P_{13},P_{14},Q_{13}\\}$ or $\\{V(H)\mid H\in
S_{G}\cup C_{G}\\}$ is a partition of $V(G)$.
###### Definition 2.31
The complement of $G$, denoted by $\overline{G}$, is the graph with
$V(\overline{G})=V(G)$ such that for each pair of different vertices $x$ and
$y$ of $G$, we have that $\\{x,y\\}\in E(\overline{G})$ if and only if
$\\{x,y\\}\notin E(G)$.
###### Definition 2.32
A $k$-colouring of $G$ is a function $c:V(G)\rightarrow\\{1,2,\ldots,k\\}$
such that $c(u)\neq c(v)$ if $\\{u,v\\}\in E(G)$. The smallest integer $k$
such that $G$ has a $k$-colouring is called the chromatic number of $G$ and it
is denoted by $\chi(G)$. On the other hand, the clique number, denoted by
$\omega(G)$ is the size of the largest complete subgraph of $G$. Finally, $G$
is perfect if $\chi(H)=\omega(H)$ for every induced subgraph $H$ of $G$.
###### Remark 2.33
Let $A$ be a subset of $V(G)$, then $A$ is a stable set of $G$ if and only if
$\overline{G}[A]$ is a complete subgraph of $\overline{G}$. Hence,
$\beta(G)=\omega(\overline{G})$.
###### Theorem 2.34
[4, Theorem 5.5.3] $G$ is perfect if and only if $\overline{G}$ is perfect.
## 3 Strong vertex cover and $\star$-semi-forest
Let $D=(G,\mathcal{O},w)$ be a weighted oriented graph. In this Section, we
introduce the unicycle oriented subgraphs (Definition 3.2), the root oriented
trees (Definition 3.3), and the $\star$-semi-forests of $D$ (Definition 3.4).
With these definitions, we characterize when a subset of $V(G)$ is contained in
a strong vertex cover (see Theorem 3.10). Using this result, we characterize
when $I(D)$ is unmixed if $G$ is a perfect graph (see Definition 2.32 and
Theorem 3.11).
###### Proposition 3.1
If $\mathcal{C}$ is a vertex cover of $D$ such that
$N_{D}^{+}(A)\subseteq\mathcal{C}$ and $A\subseteq V^{+}$, then there is a
strong vertex cover $\mathcal{C}^{\prime}$ of $D$, such that
$N_{D}^{+}(A)\subseteq\mathcal{C}^{\prime}\subseteq\mathcal{C}$.
Proof. First, we prove that there is a vertex cover $\mathcal{C}^{\prime}$
such that $L_{3}(\mathcal{C}^{\prime})\subseteq
N_{D}^{+}(A)\subseteq\mathcal{C}^{\prime}\subseteq\mathcal{C}$. We take
$L:=N_{D}^{+}(A)$. If $L_{3}(\mathcal{C})\subseteq L$, then we take
$\mathcal{C}^{\prime}=\mathcal{C}$. Now, we suppose there is $a_{1}\in
L_{3}(\mathcal{C})\setminus L$, then by Remark 2.9,
$N_{D}[a_{1}]\subseteq\mathcal{C}$. Thus,
$\mathcal{C}_{1}=\mathcal{C}\setminus\\{a_{1}\\}$ is a vertex cover and
$L\subseteq\mathcal{C}_{1}$, since $L\subseteq\mathcal{C}$ and $a_{1}\notin
L$. Now, we suppose that there are vertex covers
$\mathcal{C}_{0},\ldots,\mathcal{C}_{k}$, such that
$L\subseteq\mathcal{C}_{i}=\mathcal{C}_{i-1}\setminus\\{a_{i}\\}$ and
$a_{i}\in L_{3}(\mathcal{C}_{i-1})\setminus L$ for $i=1,\ldots,k$ where
$\mathcal{C}_{0}=\mathcal{C}$ and we give the following recursive process:
If $L_{3}(\mathcal{C}_{k})\subseteq L$, then we take
$\mathcal{C}^{\prime}=\mathcal{C}_{k}$. Now, if there is $a_{k+1}\in
L_{3}(\mathcal{C}_{k})\setminus L$, then by Remark 2.9,
$N_{D}[a_{k+1}]\subseteq\mathcal{C}_{k}$. Consequently,
$\mathcal{C}_{k+1}:=\mathcal{C}_{k}\setminus\\{a_{k+1}\\}$ is a vertex cover.
Also, $L\subseteq\mathcal{C}_{k+1}$, since $L\subseteq\mathcal{C}_{k}$ and
$a_{k+1}\not\in L$. This process is finite, since $|V(D)|$ is finite. Hence,
there is $m$ such that $L_{3}(\mathcal{C}_{m})\subseteq
L\subseteq\mathcal{C}_{m}\subseteq\mathcal{C}$. Therefore, we take
$\mathcal{C}^{\prime}=\mathcal{C}_{m}$.
Now, we prove that $\mathcal{C}^{\prime}$ is strong. We take $x\in
L_{3}(\mathcal{C}^{\prime})$, then $x\in L=N_{D}^{+}(A)$, since
$L_{3}(\mathcal{C}^{\prime})\subseteq L$. Thus, $(y,x)\in E(D)$ for some $y\in
A\subseteq V^{+}$. Hence, $y\in\mathcal{C}^{\prime}$, since $x\in
L_{3}(\mathcal{C}^{\prime})$. Also, $y\not\in L_{1}(\mathcal{C}^{\prime})$,
since $N_{D}^{+}(y)\subseteq N_{D}^{+}(A)\subseteq\mathcal{C}^{\prime}$.
Hence, $y\in\big{(}\mathcal{C}^{\prime}\setminus
L_{1}(\mathcal{C}^{\prime})\big{)}\cap V^{+}$. Therefore,
$\mathcal{C}^{\prime}$ is strong. $\Box$
###### Definition 3.2
If $B$ is a weighted oriented subgraph of $D$ with exactly one cycle $C$, then
$B$ is called unicycle oriented graph when $B$ satisfies the following
conditions:
1. $(i)$
$C$ is an oriented cycle in $B$ and there is an oriented path from $C$ to $y$
in $B$, for each $y\in V(B)\setminus V(C)$.
2. $(ii)$
If $x\in V(B)$ with $w(x)=1$, then $deg_{B}(x)=1$.
###### Definition 3.3
A weighted oriented subgraph $T$ of $D$ without cycles, is a root oriented
tree (ROT) with parent $v\in V(T)$ when $T$ satisfies the following
properties:
1. $(i)$
If $x\in V(T)\setminus\\{v\\}$, there is an oriented path $\mathcal{P}$ in $T$
from $v$ to $x$.
2. $(ii)$
If $x\in V(T)$ with $w(x)=1$, then $deg_{T}(x)=1$ and $x\neq v$ or
$V(T)=\\{v\\}$ and $x=v$.
###### Definition 3.4
A weighted oriented subgraph $H$ of $D$ is a $\star$-semi-forest if there are
root oriented trees $T_{1},\ldots,T_{r}$ whose parents are
$v_{1},\ldots,v_{r}$ and unicycle oriented subgraphs $B_{1},\ldots,B_{s}$ such
that $H=\big{(}\cup_{i=1}^{r}\ T_{i}\big{)}\cup\big{(}\cup_{j=1}^{s}\
B_{j}\big{)}$ with the following conditions:
1. $(i)$
$V(T_{1}),\ldots,V(T_{r}),V(B_{1}),\ldots,V(B_{s})$ is a partition of $V(H)$.
2. $(ii)$
There is $W=\\{w_{1},\ldots,w_{r}\\}\subseteq V(D)\setminus V(H)$ such that
$w_{i}\in N_{D}(v_{i})$ for $i=1,\ldots,r$ (it is possible that $w_{i}=w_{j}$
for some $1\leqslant i<j\leqslant r$).
3. $(iii)$
There is a partition $W_{1}$, $W_{2}$ of $W$ such that $W_{1}$ is a stable set
of $D$, $W_{2}\subseteq V^{+}$ and $(w_{i},v_{i})\in E(D)$ if $w_{i}\in
W_{2}$. Also, $N_{D}^{+}(W_{2}\cup\tilde{H})\cap W_{1}=\emptyset$, where
$\tilde{H}=\\{x\in V(H)\mid deg_{H}(x)\geqslant 2\\}\cup\\{v_{i}\mid
deg_{H}(v_{i})=1\\}.$
###### Remark 3.5
By Definition 3.2 and Definition 3.3, we have $\tilde{H}\subseteq V^{+}$.
Furthermore, if $v_{i}$ is a parent vertex of $T_{i}$, with
$deg_{H}(v_{i})\geqslant 1$, then $v_{i}\in\tilde{H}$.
###### Lemma 3.6
If $H$ is a $\star$-semi-forest of $D$, then
$V(H)\subseteq N_{D}(W_{1})\cup N_{D}^{+}(W_{2}\cup\tilde{H})$.
Proof. We take $x\in V(H)$. Since $H=\big{(}\cup_{i=1}^{r}\
T_{i}\big{)}\cup\big{(}\cup_{j=1}^{s}\ B_{j}\big{)}$, we have two cases:
Case 1) $x\in V(B_{j})$ for some $1\leqslant j\leqslant s$. Let $C$ be the
oriented cycle of $B_{j}$. If $x\in V(C)$, then there is $y_{1}\in V(C)$ such
that $(y_{1},x)\in E(C)$. Furthermore, $deg_{H}(y_{1})\geqslant
deg_{C}(y_{1})=2$, then $y_{1}\in\tilde{H}$. Hence, $x\in
N_{D}^{+}(y_{1})\subseteq N_{D}^{+}(\tilde{H})$. Now, if $x\in
V(B_{j})\setminus V(C)$, then there is an oriented path $\mathcal{P}$ in
$B_{j}$ from $C$ to $x$. Thus, there is $y_{2}\in V(\mathcal{P})$ such that
$(y_{2},x)\in E(\mathcal{P})$. If $|V(\mathcal{P})|>2$, then
$deg_{H}(y_{2})\geqslant deg_{\mathcal{P}}(y_{2})=2$. If $|V(\mathcal{P})|=2$,
then $y_{2}\in V(C)$ and $deg_{H}(y_{2})>deg_{C}(y_{2})=2$. Therefore,
$y_{2}\in\tilde{H}$ and $x\in N_{D}^{+}(\tilde{H})$.
Case 2) $x\in V(T_{i})$ for some $1\leqslant i\leqslant r$. First, assume
$x=v_{i}$, then there is $w_{i}\in W$ such that $x\in N_{D}(w_{i})$.
Consequently, $x\in N_{D}(W_{1})$ if $w_{i}\in W_{1}$ and, by $(iii)$ of
Definition 3.4, $x\in N_{D}^{+}(w_{i})\subseteq N_{D}^{+}(W_{2})$ if $w_{i}\in
W_{2}$. Now, we suppose $x\neq v_{i}$, then there is an oriented path
$\mathcal{L}$, from $v_{i}$ to $x$. Consequently, there is $y_{3}\in
V(\mathcal{L})$ such that $(y_{3},x)\in E(D)$. If $y_{3}\neq v_{i}$, then
$deg_{H}(y_{3})\geqslant deg_{\mathcal{L}}(y_{3})=2$. Thus,
$y_{3}\in\tilde{H}$ and $x\in N_{D}^{+}(\tilde{H})$. Finally, if
$y_{3}=v_{i}$, then $deg_{H}(y_{3})\geqslant 1$. Hence, by Remark 3.5,
$y_{3}\in\tilde{H}$ and $x\in N_{D}^{+}(\tilde{H})$. $\Box$
###### Remark 3.7
Sometimes to stress the relation between $W$ and $H$ in Definition 3.4, $W$ is
denoted by $W^{H}$. Similarly, $W_{1}^{H}$ and $W_{2}^{H}$. If
$\\{T_{1},\ldots,T_{r}\\}=\emptyset$, then
$W^{H}=W_{1}^{H}=W_{2}^{H}=\emptyset$.
###### Lemma 3.8
Let $K$ be a weighted oriented subgraph of $D$. If $H$ is a maximal ROT in $K$
with parent $v$, or $H$ is a maximal unicycle oriented subgraph in $K$ whose
cycle is $C$, then there is no $(y,x)\in E(K)$ with $x\in V(K)\setminus V(H)$
and $y\in V^{+}\cap V(H)$.
Proof. By contradiction suppose there is $(y,x)\in E(K)$ with $x\in
V(K)\setminus V(H)$ and $y\in V^{+}\cap V(H)$. Thus, $H\subsetneq
H_{1}:=H\cup\\{(y,x)\\}\subseteq K$. If $H$ is a unicycle oriented subgraph
with cycle $C$ (resp. $H$ is a ROT), then there is an oriented path
$\mathcal{P}$ from $C$ (resp. from $v$) to $y$. Consequently,
$\mathcal{P}\cup\\{(y,x)\\}$ is an oriented path from $C$ (resp. from $v$) to
$x$ in $H_{1}$. Furthermore, $H_{1}$ has exactly one cycle (resp. has no
cycles), since $deg_{H_{1}}(x)=1$ and $V(H_{1})\setminus V(H)=\\{x\\}$.
Now, we take $z\in V(H_{1})$ with $w(z)=1$, then $z=x$ or $z\in V(H)$. We
prove $deg_{H_{1}}(z)=1$. If $z=x$, then $deg_{H_{1}}(x)=1$. Now, if $z\in
V(H)$, then $z\neq y$, since $y\in V^{+}$. So, $deg_{H_{1}}(z)=deg_{H}(z)$,
since $N_{H_{1}}(x)=\\{y\\}$. If $H$ is a ROT with $V(H)=\\{v\\}$, then
$y=z=v$. A contradiction, since $w(z)=1$ and $y\in V^{+}$. Consequently, by
$(ii)$ in Definitions 3.2 and 3.3, $deg_{H_{1}}(z)=deg_{H}(z)=1$. Hence,
$H_{1}$ is a unicycle oriented subgraph with cycle $C$ (resp. is a ROT with
parent $v$) of $K$. This is a contradiction, since $H\subsetneq H_{1}\subseteq
K$ and $H$ is maximal. $\Box$
###### Definition 3.9
Let $K$ be a weighted oriented subgraph of $D$ and $H$ a $\star$-semi-forest
of $D$. We say $H$ is a generating $\star$-semi-forest of $K$ if $V(K)=V(H)$.
###### Theorem 3.10
Let $K$ be an induced weighted oriented subgraph of $D$. Hence, the following
conditions are equivalent:
1. (1)
There is a strong vertex cover $\mathcal{C}$ of $D$, such that
$V(K)\subseteq\mathcal{C}$.
2. (2)
There is a generating $\star$-semi-forest $H$ of $K$.
Proof. ${\rm(2)}\Rightarrow{\rm(1)}$ Let $\mathcal{C}_{1}$ be a minimal vertex
cover of $D$. By (2), $K$ has a generating $\star$-semi-forest $H$. Now, using
the notation of Definition 3.4, we take
$\mathcal{C}_{2}=\big{(}\mathcal{C}_{1}\setminus W_{1}\big{)}\cup
N_{D}(W_{1})\cup N_{D}^{+}(W_{2}\cup\tilde{H})$. By Remark 2.6,
$\mathcal{C}_{2}$ is a vertex cover of $D$. Since $W_{1}$ is a stable set,
$N_{D}(W_{1})\cap W_{1}=\emptyset$. Then, $\mathcal{C}_{2}\cap
W_{1}=\emptyset$, since $N_{D}^{+}(W_{2}\cup\tilde{H})\cap W_{1}=\emptyset$.
By Remark 3.5 and $(iii)$ in Definition 3.4, $\tilde{H}\cup W_{2}\subseteq
V^{+}$. So, by Proposition 3.1, there is a strong vertex cover $\mathcal{C}$
of $D$ such that
$N_{D}^{+}(W_{2}\cup\tilde{H})\subseteq\mathcal{C}\subseteq\mathcal{C}_{2}$.
Consequently, $\mathcal{C}\cap W_{1}=\emptyset$, since $\mathcal{C}_{2}\cap
W_{1}=\emptyset$. Thus, $N_{D}(W_{1})\subseteq\mathcal{C}$, since
$\mathcal{C}$ is a vertex cover. Then, by Lemma 3.6, $V(H)\subseteq
N_{D}(W_{1})\cup N_{D}^{+}(W_{2}\cup\tilde{H})\subseteq\mathcal{C}$. Hence,
$V(K)\subseteq\mathcal{C}$, since $H$ is a generating $\star$-semi-forest of
$K$.
${\rm(1)}\Rightarrow{\rm(2)}$ We have, $\mathcal{C}$ is a strong vertex cover
such that $V(K)\subseteq\mathcal{C}$. If $A:=L_{1}(\mathcal{C})\cap
V(K)=\\{v_{1},\ldots,v_{s}\\}$, then there is $w_{i}\in
V(D)\setminus\mathcal{C}\subseteq V(D)\setminus V(K)$ such that
$(v_{i},w_{i})\in E(D)$. We take the ROT’s
$M_{1}=\\{v_{1}\\},\ldots,M_{s}=\\{v_{s}\\}$ and sets $W_{1}^{i}=\\{w_{i}\\}$
and $W_{2}^{i}=\emptyset$ for $i=1,\ldots,s$.
Now, we will give a recursive process to obtain a generating $\star$-semi-
forest of $K$. For this purpose, suppose we have connected $\star$-semi-
forests $M_{s+1},\ldots,M_{l}$ of $K\setminus A$ with subsets
$W_{1}^{s+1},\ldots,W_{1}^{l},W_{2}^{s+1},\ldots,W_{2}^{l}\subseteq
V(D)\setminus V(K)$ and $V^{s+1},\ldots,V^{l}\subseteq V(K)$ such that for
each $s<j\leqslant l$, they satisfy the following conditions:
* (a)
$V^{j}=\\{v_{j}\\}$ if $M_{j}$ is a ROT with parent $v_{j}$ or $V^{j}$ is the
cycle of $M_{j}$ if $M_{j}$ is a unicycle oriented subgraph,
* (b)
$M_{j}$ is a maximal ROT in $K^{j}:=K\setminus\cup_{i=1}^{j-1}V(M_{i})$ with
parent in $V^{j}$ or $M_{j}$ is a maximal unicycle oriented subgraph in
$K^{j}$ with cycle $V^{j}$.
* (c)
$W_{1}^{j}\cap\mathcal{C}=\emptyset$ and
$W_{2}^{j}\subseteq\big{(}\mathcal{C}\setminus(L_{1}(\mathcal{C})\cup
V(K))\big{)}\cap V^{+}$.
Hence, we take $K^{l+1}:=K\setminus\big{(}\cup_{i=1}^{l}V(M_{i})\big{)}$. This
process starts with $l=s$; in this case,
$K^{s+1}:=K\setminus\big{(}\cup_{i=1}^{s}V(M_{i})\big{)}=K\setminus A$;
furthermore, if $A=\emptyset$, then $K^{1}=K$. Continuing with the recursive
process, if $K^{l+1}=\emptyset$, then $V(K)=\cup_{i=1}^{l}V(M_{i})$ and we
stop the process. Now, if $K^{l+1}\neq\emptyset$, then we will construct a
connected $\star$-semi-forest $M_{l+1}$ of $K^{l+1}$ in the following way:
Case (1) $L_{2}(\mathcal{C})\cap V(K^{l+1})\neq\emptyset$. Then, there is
$z\in L_{2}(\mathcal{C})\cap V(K^{l+1})$. Thus, there is $(z^{\prime},z)\in
E(D)$ with $z^{\prime}\notin\mathcal{C}$. We take a maximal ROT $M_{l+1}$ in
$K^{l+1}$, whose parent is $z$. Also, we take $V^{l+1}=\\{v_{l+1}\\}=\\{z\\}$,
$W_{1}^{l+1}=\\{w_{l+1}\\}=\\{z^{\prime}\\}$ and $W_{2}^{l+1}=\emptyset$.
Hence, $M_{l+1}$ satisfies (a), (b) and (c), since
$z^{\prime}\notin\mathcal{C}$ and $W_{2}^{l+1}=\emptyset$.
Case (2) $L_{2}(\mathcal{C})\cap V(K^{l+1})=\emptyset$. Then,
$V(K^{l+1})\subseteq L_{3}(\mathcal{C})$, since $K^{l+1}\subseteq K\setminus
A\subseteq\mathcal{C}\setminus L_{1}(\mathcal{C})$. We take $x\in V(K^{l+1})$,
then there is $x_{1}\in\big{(}\mathcal{C}\setminus
L_{1}(\mathcal{C})\big{)}\cap V^{+}$ such that $(x_{1},x)\in E(D)$, since
$\mathcal{C}$ is strong. If $x_{1}\in V(K^{l+1})$, then there is
$x_{2}\in\big{(}\mathcal{C}\setminus L_{1}(\mathcal{C})\big{)}\cap V^{+}$ such
that $(x_{2},x_{1})\in E(D)$, since $\mathcal{C}$ is strong. Continuing with
this process we obtain a maximal path
$\mathcal{P}=(x_{r},x_{r-1},\ldots,x_{1},x)$ such that
$x_{r-1},\ldots,x_{1},x$ are different in $V(K^{l+1})$ and
$x_{1},\ldots,x_{r}\in\big{(}\mathcal{C}\setminus
L_{1}(\mathcal{C})\big{)}\cap V^{+}$. Thus,
$x_{r}\notin\cup_{j=1}^{s}V(M_{j})$, since $x_{r}\notin L_{1}(\mathcal{C})$.
Now, suppose $x_{r}\in V(M_{j})$ for some $s<j\leqslant l$. So,
$(x_{r},x_{r-1})\in E(K^{j})$, $x_{r-1}\in V(K^{j})\setminus V(M_{j})$ and
$x_{r}\in V^{+}\cap V(M_{j})$. Furthermore, $N_{D}^{+}(x_{r})\cap
W_{1}^{j}=\emptyset$, since $x_{r}\in\mathcal{C}\setminus L_{1}(\mathcal{C})$
and $\mathcal{C}\cap W_{1}^{j}=\emptyset$. A contradiction, by Lemma 3.8,
since $W^{j}\cap V(K^{j})=\emptyset$. Hence,
$x_{r}\notin\cup_{j=1}^{l}V(M_{j})$. Consequently, $x_{r}\notin V(K)$ or
$x_{r}\in V(K^{l+1})$.
Case (2.a) $x_{r}\notin V(K)$. Then, take a maximal ROT $M_{l+1}$ in $K^{l+1}$
whose parent is $x_{r-1}$. Also, we take
$V^{l+1}=\\{v_{l+1}\\}=\\{x_{r-1}\\}$, $W_{1}^{l+1}=\emptyset$; and
$W_{2}^{l+1}=\\{w_{l+1}\\}=\\{x_{r}\\}$. Thus, $M_{l+1}$ satisfies (a), (b)
and (c), since $W_{1}^{l+1}=\emptyset$, $x_{r}\in\big{(}\mathcal{C}\setminus
L_{1}(\mathcal{C})\big{)}\cap V^{+}$ and $x_{r}\notin V(K)$.
Case (2.b) $x_{r}\in V(K^{l+1})$. Then, $x_{r}\in L_{3}(\mathcal{C})$, since
$V(K^{l+1})\subseteq L_{3}(\mathcal{C})$. Hence, there is
$x_{r+1}\in\big{(}\mathcal{C}\setminus L_{1}(\mathcal{C})\big{)}\cap V^{+}$
such that $(x_{r+1},x_{r})\in E(D)$. Then
$\tilde{\mathcal{P}}=(x_{r+1},x_{r},\ldots,x_{1},x)$ is an oriented walk. By
the maximality of $\mathcal{P}$, we have that
$x_{r}\in\\{x_{r-1},\ldots,x_{1},x\\}$. Thus,
$\mathcal{P}=(x_{r},\ldots,x_{1},x)$ contains an oriented cycle $C$. We take a
maximal unicycle oriented subgraph $M_{l+1}$ of $K^{l+1}$ with cycle $C$,
$V^{l+1}=C$ and $W_{1}^{l+1}=W_{2}^{l+1}=\emptyset$. Then, $M_{l+1}$ satisfies
(a), (b) and (c).
Since $K$ is finite, with this procedure we obtain
$M_{1},\ldots,M_{t}\subseteq K$ such that $V(K)=\cup_{i=1}^{t}\ V(M_{i})$,
$W_{1}^{i}\cap\mathcal{C}=\emptyset$ and
$W_{2}^{i}\subseteq\big{(}\mathcal{C}\setminus L_{1}(\mathcal{C})\big{)}\cap
V^{+}$ for $i=1,\ldots,t$. We take $H:=\cup_{i=1}^{t}\ M_{i}$ with
$W_{j}=\cup_{i=1}^{t}\ W_{j}^{i}$ for $j=1,2$. So, $V(H)=V(K)$. Also,
$W_{1}\cap\mathcal{C}=\emptyset$, then $W_{1}$ is a stable set, since
$\mathcal{C}$ is a vertex cover. Furthermore, $W_{2}\subseteq V^{+}$ and
$W_{2}\subseteq\mathcal{C}\setminus L_{1}(\mathcal{C})$, then
$N_{D}^{+}(W_{2})\subseteq\mathcal{C}$. Then, $N_{D}^{+}(W_{2})\cap
W_{1}=\emptyset$, since $\mathcal{C}\cap W_{1}=\emptyset$. If $x\in
L_{1}(\mathcal{C})\cap V(K)$, then there is $1\leqslant i\leqslant s$ such
that $x=v_{i}$ and $M_{i}=\\{v_{i}\\}$. Consequently,
$deg_{H}(x)=deg_{M_{i}}(v_{i})=0$. Thus, $\tilde{H}\cap
L_{1}(\mathcal{C})=\emptyset$ implying
$N_{D}^{+}(\tilde{H})\subseteq\mathcal{C}$, since $V(H)\subseteq\mathcal{C}$.
Hence, $N_{D}^{+}(\tilde{H})\cap W_{1}=\emptyset$, since
$W_{1}\cap\mathcal{C}=\emptyset$. Therefore, $H$ is a generating $\star$-semi-
forest of $K$. $\Box$
###### Theorem 3.11
Let $D=(G,\mathcal{O},w)$ be a weighted oriented graph where $G$ is a perfect
graph, then $G$ has a $\tau$-reduction $H_{1},\ldots,H_{s}$ in complete
subgraphs. Furthermore, $I(D)$ is unmixed if and only if each $H_{i}$ has no
generating $\star$-semi-forests.
Proof. First, we prove $G$ has a $\tau$-reduction in complete graphs. By
Theorem 2.34, $\overline{G}$ is perfect. Thus,
$s:=\omega(\overline{G})=\chi(\overline{G})$. So, there is an $s$-colouring
$c:V(\overline{G})\rightarrow\\{1,\ldots,s\\}$. We take $V_{i}:=c^{-1}(i)$ for
$i=1,\ldots,s$. Then, $V_{i}$ is a stable set in $\overline{G}$, since $c$ is
an $s$-colouring. Hence, by Remark 2.33, $H_{i}:=G[V_{i}]$ is a complete graph
in $G$ and $s=\omega(\overline{G})=\beta(G)$. Furthermore,
$V_{1},\ldots,V_{s}$ is a partition of $V(\overline{G})=V(G)$, since $c$ is a
function. Consequently,
$\sum\limits_{i=1}^{s}\tau(H_{i})=\sum\limits_{i=1}^{s}\big{(}|V_{i}|-1\big{)}=\Big{(}\sum\limits_{i=1}^{s}|V_{i}|\Big{)}-s=|V(G)|-\beta(G)=\tau(G)$.
Finally, by Remark 2.16, $|V(G)|-\beta(G)=\tau(G)$, then, $H_{1},\ldots,H_{s}$
is a $\tau$-reduction of $G$.
Now, we prove that $I(D)$ is unmixed if and only if each $H_{i}$ has no
generating $\star$-semi-forests.
$\Rightarrow)$ By contradiction, assume $H_{j}$ has a generating $\star$-semi-
forest, then by Theorem 3.10 there is a strong vertex $\mathcal{C}$ such that
$V_{j}\subseteq\mathcal{C}$. Furthermore, $\mathcal{C}\cap V_{i}$ is a vertex
cover of $H_{i}$, then $|\mathcal{C}\cap V_{i}|\geqslant\tau(H_{i})=|V_{i}|-1$
for $i\neq j$. Thus, $|\mathcal{C}|=\sum_{i=1}^{s}|\mathcal{C}\cap
V_{i}|\geqslant|V_{j}|+\sum_{\begin{subarray}{c}i=1\\\ i\neq
j\end{subarray}}^{s}(|V_{i}|-1)$, since $V_{1},\ldots,V_{s}$ is a partition of
$V(G)$. Hence, by Remark 2.16, $|\mathcal{C}|>|V(G)|-s=\tau(G)$, since
$s=\beta(G)$. A contradiction, by Remark 2.14, since $I(D)$ is unmixed.
$\Leftarrow)$ Let $\mathcal{C}$ be a strong vertex cover, then
$\mathcal{C}\cap V_{i}$ is a vertex cover of $H_{i}$. So, $|\mathcal{C}\cap
V_{i}|\geqslant\tau(H_{i})=|V_{i}|-1$ for $i=1,\ldots,s$. Furthermore, by
Theorem 3.10, $V_{i}\not\subseteq\mathcal{C}$. Consequently, $|\mathcal{C}\cap
V_{i}|=|V_{i}|-1$. Thus,
$|\mathcal{C}|=\sum_{i=1}^{s}\big{(}|V_{i}|-1\big{)}$, since
$V_{1},\ldots,V_{s}$ is a partition of $V(G)$. Therefore, by (2) in Theorem
2.12, $I(D)$ is unmixed. $\Box$
## 4 Unmixedness of weighted oriented $SCQ$ graphs
Let $D=(G,\mathcal{O},w)$ be a weighted oriented graph. If $P$ is a perfect
matching of $G$ with the property (P), then in Proposition 4.1, we
characterize when $|\mathcal{C}\cap e|=1$, for each strong vertex cover
$\mathcal{C}$ of $D$ and each $e\in P$. Using Proposition 4.1 in Corollary
4.2, we characterize when $I(D)$ is unmixed if $G$ is König. In Proposition
4.6, we characterize the basic $5$-cycles $C$ such that $|\mathcal{C}\cap
V(C)|=3$ for each strong vertex cover $\mathcal{C}$ of $D$. Furthermore, in
Theorem 4.8, we characterize when $I(D)$ is unmixed if $G$ is an $SCQ$ graph
(see Definition 2.27). Finally, using this result we characterize the unmixed
property of $I(D)$, when $G$ is simplicial or $G$ is chordal (see Corollary
4.9).
###### Proposition 4.1
Let $e$ be an edge of $G$. Hence, the following conditions are equivalent:
1. (1)
$|\mathcal{C}\cap e|=1$ for each strong vertex cover $\mathcal{C}$ of $D$.
2. (2)
$e$ has the property (P) and $N_{D}(b)\subseteq N_{D}^{+}(a)$ if
$(a,b^{\prime})\in E(D)$ with $a\in V^{+}$ and $e=\\{b,b^{\prime}\\}$.
Proof. ${\rm(1)}\Rightarrow{\rm(2)}$ First, we show $e$ has the property (P).
By contradiction, suppose there are $\\{a,b\\},\\{a^{\prime},b^{\prime}\\}\in
E(G)$ such that $\\{a,a^{\prime}\\}\notin E(G)$. This implies, there is a
maximal stable set $S$ such that $\\{a,a^{\prime}\\}\subseteq S$. So,
$\tilde{\mathcal{C}}=V(G)\setminus S$ is a minimal vertex cover. Consequently,
$\tilde{\mathcal{C}}$ is strong. Furthermore,
$a,a^{\prime}\notin\tilde{\mathcal{C}}$, then
$b,b^{\prime}\in\tilde{\mathcal{C}}$, since
$\\{a,b\\},\\{a^{\prime},b^{\prime}\\}\in E(G)$. A contradiction by (1). Now,
assume $(a,b^{\prime})\in E(D)$ with $a\in V^{+}$ and $e=\\{b,b^{\prime}\\}$,
then we will prove that $N_{D}(b)\subseteq N_{D}^{+}(a)$. By contradiction,
suppose there is $c\in N_{D}(b)\setminus N_{D}^{+}(a)$. We take a maximal
stable set $S$ such that $b\in S$. Thus, $\mathcal{C}_{1}=V(G)\setminus S$ is
a minimal vertex cover such that $b\notin\mathcal{C}_{1}$. By Remark 2.6,
$\mathcal{C}=\big{(}\mathcal{C}_{1}\setminus\\{c\\}\big{)}\cup N_{D}(c)\cup
N_{D}^{+}(a)$ is a vertex cover. Furthermore, $c\notin\mathcal{C}$, since
$c\notin N_{D}^{+}(a)$. By Proposition 3.1, there is a strong vertex cover
$\mathcal{C}^{\prime}$ such that
$N_{D}^{+}(a)\subseteq\mathcal{C}^{\prime}\subseteq\mathcal{C}$, since $a\in
V^{+}$. Also, $b^{\prime}\in N_{D}^{+}(a)\subseteq\mathcal{C}^{\prime}$ and
$c\notin\mathcal{C}^{\prime}$, since $(a,b^{\prime})\in E(D)$ and
$c\notin\mathcal{C}$. Then, $b\in N_{D}(c)\subseteq\mathcal{C}^{\prime}$.
Hence, $\\{b,b^{\prime}\\}\subseteq\mathcal{C}^{\prime}$. This is a
contradiction, by (1).
${\rm(2)}\Rightarrow{\rm(1)}$ By contradiction, assume there is a strong
vertex cover $\mathcal{C}$ of $D$ such that $|\mathcal{C}\cap e|\neq 1$. So,
$|\mathcal{C}\cap e|=2$, since $\mathcal{C}$ is a vertex cover. Hence, by
Theorem 3.10, there is a generating $\star$-semi-forest $H$ of $e$. We set
$e=\\{z,z^{\prime}\\}$. First, assume $H$ is not connected. Then, using the
Definition 3.4, we have $H=M_{1}\cup M_{2}$ where $M_{1}=\\{v_{1}\\}$,
$M_{2}=\\{v_{2}\\}$ and $w_{1},w_{2}\in W$ such that $w_{i}\in N_{D}(v_{i})$
for $i=1,2$. Thus, $\\{z,z^{\prime}\\}=\\{v_{1},v_{2}\\}$ and
$\\{w_{1},w_{2}\\}\in E(G)$, since $e$ satisfies the property (P). This
implies $|W_{1}\cap\\{w_{1},w_{2}\\}|\leqslant 1$, since $W_{1}$ is a stable
set. Hence, we can suppose $w_{2}\in W_{2}$, then $w_{2}\in V^{+}$ and
$(w_{2},z^{\prime})\in E(D)$. Consequently, by (2), $w_{1}\in
N_{D}(z)\subseteq N_{D}^{+}(w_{2})$, then $(w_{2},w_{1})\in E(D)$.
Furthermore, by $(iii)$ in Definition 3.4, $N_{D}^{+}(W_{2})\cap
W_{1}=\emptyset$, then $w_{1}\in W_{2}$. So, $w_{1}\in V^{+}$ and
$(w_{1},z)\in E(D)$. By (2) with $a=w_{1}$, we have $(w_{1},w_{2})\in E(D)$. A
contradiction, then $H$ is connected. Thus, $H$ is a ROT with
$V(H)=\\{z,z^{\prime}\\}$. We can suppose $v_{1}=z$ and $W^{H}=\\{w_{1}\\}$,
then $(z,z^{\prime})\in E(D)$, $w_{1}\in N_{D}(z)$ and $z=v_{1}\in\tilde{H}$,
since $deg_{H}(v_{1})=1$. If $w_{1}\in N_{D}^{+}(z)$, then $w_{1}\in W_{1}$,
since $z=v_{1}$. A contradiction, since $N_{D}^{+}(\tilde{H})\cap
W_{1}=\emptyset$. Then, $w_{1}\notin N_{D}^{+}(z)$. By Remark 3.5,
$z=v_{1}\in\tilde{H}\subseteq V^{+}$. Therefore, by (2) (taking $a=b=z$ and
$b^{\prime}=z^{\prime}$), we have $N_{D}(z)\subseteq N_{D}^{+}(z)$, since
$e=\\{z,z^{\prime}\\}$ and $z^{\prime}\in N_{D}^{+}(z)$. A contradiction,
since $w_{1}\in N_{D}(z)\setminus N_{D}^{+}(z)$. $\Box$
###### Corollary 4.2
[11, Theorem 3.4] Let $D=(G,\mathcal{O},w)$ be a weighted oriented graph,
where $G$ is König without isolated vertices. Hence, $I(D)$ is unmixed if and
only if $D$ satisfies the following two conditions:
1. (a)
$G$ has a perfect matching $P$ with the property (P).
2. (b)
$N_{D}(b)\subseteq N_{D}^{+}(a)$, when $a\in V^{+}$, $\\{b,b^{\prime}\\}\in P$
and $b^{\prime}\in N_{D}^{+}(a)$.
Proof. $\Rightarrow)$ By Theorem 2.12, $I(G)$ is unmixed. Thus, by Remark 2.17
and Theorem 2.20, $G$ has a perfect matching $P$ with the property (P).
Consequently, $\nu(G)=|P|$. Also, $\tau(G)=\nu(G)$, since $G$ is König. So,
$\tau(G)=|P|$. Now, we take a strong vertex cover $\mathcal{C}$ of $D$ and
$e\in P$. Then, $|\mathcal{C}\cap e|\geqslant 1$. Furthermore, by Remark 2.14,
$|\mathcal{C}|=\tau(G)=|P|$. Hence, $|\mathcal{C}\cap e|=1$, since
$\mathcal{C}=\cup_{\tilde{e}\in P}\ \mathcal{C}\cap\tilde{e}$. Therefore, by
Proposition 4.1, $D$ satisfies (b).
$\Leftarrow)$ We take a strong vertex cover $\mathcal{C}$ of $D$. By
Proposition 4.1, $|\mathcal{C}\cap e|=1$ for each $e\in P$, since $D$
satisfies (a) and (b). This implies $|\mathcal{C}|=|P|$, since $P$ is a
perfect matching. Therefore, by $(2)$ in Theorem 2.12, $I(D)$ is unmixed.
$\Box$
###### Lemma 4.3
If there is a basic $5$-cycle $C=(z_{1},z_{2},z_{3},z_{4},z_{5},z_{1})$ with
$(z_{1},z_{2})$, $(z_{2},z_{3})\in E(D)$, $z_{2}\in V^{+}$ and $C$ satisfies
one of the following conditions:
1. (a)
$(z_{3},z_{4})\in E(D)$ with $z_{3}\in V^{+}$.
2. (b)
$(z_{1},z_{5})$, $(z_{5},z_{4})\in E(D)$ with $z_{5}\in V^{+}$.
then there is a strong vertex cover $\tilde{\mathcal{C}}$ such that
$|\tilde{\mathcal{C}}\cap V(C)|=4$.
Proof. We take $\mathcal{C}=\big{(}\mathcal{C}_{0}\setminus V(C)\big{)}\cup
N_{D}(z_{1})\cup N_{D}^{+}(z_{2},x)$ where $\mathcal{C}_{0}$ is a vertex cover
and $x=z_{3}$ if $C$ satisfies (a) or $x=z_{5}$ if $C$ satisfies (b). Thus,
$x\in V^{+}$. Furthermore, $z_{2},z_{3},z_{5}\in N_{D}(z_{1})\cup
N_{D}^{+}(z_{2})$ and $z_{4}\in N_{D}^{+}(z_{3})$ if $C$ satisfies (a) or
$z_{4}\in N_{D}^{+}(z_{5})$ if $C$ satisfies (b). Hence,
$\\{z_{2},z_{3},z_{4},z_{5}\\}\subseteq N_{D}(z_{1})\cup N_{D}^{+}(z_{2},x)$.
Consequently, $\\{z_{2},z_{3},z_{4},z_{5}\\}\subseteq\mathcal{C}$, implying
$\mathcal{C}$ is a vertex cover, since $\mathcal{C}_{0}$ is vertex cover and
$N_{D}(z_{1})\subseteq\mathcal{C}$. Also, $z_{1}\notin\mathcal{C}$, since
$z_{1}\notin N_{D}(z_{1})\cup N_{D}^{+}(z_{2},z_{3})$ and $z_{1}\notin
N_{D}^{+}(z_{5})$ if $C$ satisfies (b). By Proposition 3.1, there is a strong
vertex cover $\mathcal{C}^{\prime}$ such that
$N_{D}^{+}(z_{2},x)\subseteq\mathcal{C}^{\prime}\subseteq\mathcal{C}$, since
$\\{z_{2},x\\}\subseteq V^{+}$. So, $z_{1}\notin\mathcal{C}^{\prime}$, since
$z_{1}\notin\mathcal{C}$. Then, by Remark 2.8,
$N_{D}(z_{1})\subseteq\mathcal{C}^{\prime}$. Hence,
$\\{z_{2},z_{3},z_{4},z_{5}\\}\subseteq N_{D}(z_{1})\cup
N_{D}^{+}(z_{2},x)\subseteq\mathcal{C}^{\prime}$. Therefore,
$|\mathcal{C}^{\prime}\cap V(C)|=4$, since $z_{1}\notin\mathcal{C}^{\prime}$.
$\Box$
###### Definition 4.4
Let $C$ be an induced $5$-cycle. We say that $C$ has the $\star$-property if, for each $(a,b)\in E(C)$ with $a\in V^{+}$, writing $C=(a^{\prime},a,b,b^{\prime},c,a^{\prime})$, the following properties hold:
1. $(\star.1)$
$(a^{\prime},a)\in E(D)$ and $w(a^{\prime})=1$.
2. $(\star.2)$
$N_{D}^{-}(a)\subseteq N_{D}(c)$ and $N_{D}^{-}(a)\cap V^{+}\subseteq
N_{D}^{-}(c)$.
3. $(\star.3)$
$N_{D}(b^{\prime})\subseteq N_{D}(a^{\prime})\cup N_{D}^{+}(a)$ and
$N_{D}^{-}(b^{\prime})\cap V^{+}\subseteq N_{D}^{-}(a^{\prime})$.
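The $\star$-property is a purely local condition and can be verified mechanically. A checker sketch in plain Python (the encoding of $D$ by successor sets and all names below are our own conventions, not the paper's):

```python
def has_star_property(cycle, succ, w):
    """Check Definition 4.4 on an induced 5-cycle given as an ordered
    list cycle = [v0, ..., v4], with vi adjacent to v(i+1 mod 5).
    succ[v] encodes N_D^+(v); V^+ = {v : w[v] > 1}."""
    pred = {v: set() for v in succ}
    for u, outs in succ.items():
        for v in outs:
            pred[v].add(u)
    nbr = {v: succ[v] | pred[v] for v in succ}      # N_D(v)
    vplus = {v for v in succ if w[v] > 1}
    for i in range(5):
        j = (i + 1) % 5
        for ai, bi in ((i, j), (j, i)):             # both edge directions
            a, b = cycle[ai], cycle[bi]
            if w[a] <= 1 or b not in succ[a]:
                continue                            # need (a,b) in E(D), a in V^+
            ap = cycle[(2 * ai - bi) % 5]           # a's other cycle neighbour
            bp = cycle[(2 * bi - ai) % 5]           # b's other cycle neighbour
            c = next(v for v in cycle if v not in (a, b, ap, bp))
            star1 = a in succ[ap] and w[ap] == 1
            star2 = pred[a] <= nbr[c] and (pred[a] & vplus) <= pred[c]
            star3 = (nbr[bp] <= nbr[ap] | succ[a]
                     and (pred[bp] & vplus) <= pred[ap])
            if not (star1 and star2 and star3):
                return False
    return True
```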
###### Lemma 4.5
Let $C=(a_{1}^{\prime},a_{1},b_{1},b_{1}^{\prime},c_{1},a_{1}^{\prime})$ be a
basic $5$-cycle of $D$, such that $(a_{1}^{\prime},a_{1})\in E(D)$,
$deg_{D}(a_{1})\geqslant 3$, $deg_{D}(c_{1})\geqslant 3$ and $w(b_{1})=1$. If
there is a strong vertex cover $\mathcal{C}$ of $D$, such that
$V(C)\subseteq\mathcal{C}$, then $C$ does not have the $\star$-property.
Proof. By contradiction, suppose $C$ has the $\star$-property and there is a
strong vertex cover $\mathcal{C}$, such that $V(C)\subseteq\mathcal{C}$. Then,
$deg_{D}(a_{1}^{\prime})=deg_{D}(b_{1}^{\prime})=2$, since $C$ is a basic
cycle, $deg_{D}(a_{1})\geqslant 3$ and $deg_{D}(c_{1})\geqslant 3$. Hence,
$a_{1}^{\prime},b_{1}^{\prime}\in L_{3}(\mathcal{C})$, since
$V(C)\subseteq\mathcal{C}$. Thus, $(c_{1},a_{1}^{\prime})\in E(D)$ and
$w(c_{1})\neq 1$, since $a_{1}^{\prime}\in L_{3}(\mathcal{C})$,
$deg_{D}(a_{1}^{\prime})=2$, $(a_{1}^{\prime},a_{1})\in E(D)$ and
$\mathcal{C}$ is strong. By ($\star.1$) with $(a,b)=(c_{1},a_{1}^{\prime})$,
we have that $(b_{1}^{\prime},c_{1})\in E(D)$. Hence,
$N_{D}^{-}(b_{1}^{\prime})\subseteq\\{b_{1}\\}$, since
$deg_{D}(b_{1}^{\prime})=2$. This is a contradiction, since $b_{1}^{\prime}\in
L_{3}(\mathcal{C})$ and $w(b_{1})=1$. $\Box$
###### Proposition 4.6
Let $C$ be a basic $5$-cycle, then $C$ has the $\star$-property if and only if
$|\mathcal{C}\cap V(C)|=3$ for each strong vertex cover $\mathcal{C}$ of $D$.
Proof. $\Rightarrow)$ By contradiction, we suppose there is a strong vertex
cover $\mathcal{C}$ such that $|\mathcal{C}\cap V(C)|\geqslant 4$. Thus, there
is a path $L=(d_{1},d_{2},d_{3},d_{4})\subseteq C$ such that
$V(L)\subseteq\mathcal{C}$. Then, $deg_{D}(d_{2})=2$ or $deg_{D}(d_{3})=2$,
since $C$ is basic. We can suppose $deg_{D}(d_{2})=2$, then
$N_{D}(d_{2})\subseteq\mathcal{C}$. This implies $b_{1}:=d_{2}\in
L_{3}(\mathcal{C})$. So, there is $(a_{1},b_{1})\in E(D)$ with
$a_{1}\in\big{(}\mathcal{C}\setminus L_{1}(\mathcal{C})\big{)}\cap V^{+}$,
since $\mathcal{C}$ is strong. Since $N_{D}(b_{1})\subseteq V(C)$, we can set
$C=(a_{1}^{\prime},a_{1},b_{1},b_{1}^{\prime},c_{1},a_{1}^{\prime})$.
Consequently,
$\\{a_{1},b_{1}^{\prime}\\}=N_{D}(b_{1})=N_{D}(d_{2})=\\{d_{1},d_{3}\\}\subseteq\mathcal{C}$.
By ($\star.1$), $(a_{1}^{\prime},a_{1})\in E(D)$ and $w(a_{1}^{\prime})=1$. If
$b_{1}\in V^{+}$, then by Remark 2.3, $b_{1}$ is not a sink. This implies,
$(b_{1},b_{1}^{\prime})\in E(D)$. Then, by ($\star.1$) with
$(a,b)=(b_{1},b_{1}^{\prime})$, $w(a_{1})=1$. A contradiction, since $a_{1}\in
V^{+}$. Hence, $w(b_{1})=1$.
We prove $a_{1}^{\prime}\in\mathcal{C}$. By contradiction assume
$a_{1}^{\prime}\not\in\mathcal{C}$, then
$\\{b_{1},a_{1},c_{1},b_{1}^{\prime}\\}\subseteq\mathcal{C}$, since
$|\mathcal{C}\cap V(C)|\geqslant 4$. Suppose $b_{1}^{\prime}\in
L_{3}(\mathcal{C})$, then there is $y\in\big{(}N_{D}^{-}(b_{1}^{\prime})\cap
V^{+}\big{)}\setminus L_{1}(\mathcal{C})$. Then, by ($\star.3$) with
$(a,b)=(a_{1},b_{1})$, $y\in N_{D}^{-}(a_{1}^{\prime})$, i.e.
$(y,a_{1}^{\prime})\in E(D)$. Consequently, $y\in L_{1}(\mathcal{C})$, since
$a_{1}^{\prime}\notin\mathcal{C}$. This is a contradiction. Hence,
$b_{1}^{\prime}\notin L_{3}(\mathcal{C})$, i.e. there is $y^{\prime}\in
N_{D}(b_{1}^{\prime})\setminus\mathcal{C}$, since
$b_{1}^{\prime}\in\mathcal{C}$. By ($\star.3$), $y^{\prime}\in
N_{D}(a_{1}^{\prime})\cup N_{D}^{+}(a_{1})$. Furthermore,
$a_{1}^{\prime}\notin\mathcal{C}$, then
$N_{D}(a_{1}^{\prime})\subseteq\mathcal{C}$ and $y^{\prime}\notin
N_{D}(a_{1}^{\prime})$, since $\mathcal{C}$ is a vertex cover and
$y^{\prime}\notin\mathcal{C}$. This implies $y^{\prime}\in N_{D}^{+}(a_{1})$,
then $a_{1}\in L_{1}(\mathcal{C})$, since $a_{1}\in\mathcal{C}$ and
$y^{\prime}\notin\mathcal{C}$. A contradiction, since $a_{1}\notin
L_{1}(\mathcal{C})$. Therefore, $a_{1}^{\prime}\in\mathcal{C}$.
Thus, $\\{b_{1},a_{1},a_{1}^{\prime},b_{1}^{\prime}\\}\subseteq\mathcal{C}$.
Now, we prove $c_{1}\in\mathcal{C}$, $deg_{D}(a_{1})\geqslant 3$ and
$deg_{D}(c_{1})\geqslant 3$.
Case (1) $a_{1}\in L_{3}(\mathcal{C})$. Consequently, there is $z\in
N_{D}^{-}(a_{1})\cap V^{+}$ such that $z\in\mathcal{C}\setminus
L_{1}(\mathcal{C})$. Then, $z\notin V(C)$, since $N_{D}^{-}(a_{1})\cap
V(C)=\\{a_{1}^{\prime}\\}$ and $w(a_{1}^{\prime})=1$. By ($\star.2$), $z\in
N_{D}^{-}(c_{1})$. Thus, $(z,c_{1})\in E(D)$. Consequently,
$c_{1}\in\mathcal{C}$, $deg_{D}(a_{1})\geqslant 3$ and
$deg_{D}(c_{1})\geqslant 3$, since $z\in\mathcal{C}\setminus
L_{1}(\mathcal{C})$ and $z\in N_{D}(a_{1})\cap N_{D}(c_{1})$.
Case (2) $a_{1}\notin L_{3}(\mathcal{C})$. This implies, there is
$z^{\prime}\in N_{D}(a_{1})$ such that $z^{\prime}\notin\mathcal{C}$. Then,
$z^{\prime}\notin V(C)$, since $N_{D}(a_{1})\cap
V(C)=\\{a_{1}^{\prime},b_{1}\\}\subseteq\mathcal{C}$. Consequently,
$z^{\prime}\in N_{D}^{-}(a_{1})$, since $a_{1}\in\mathcal{C}\setminus
L_{1}(\mathcal{C})$. By ($\star.2$), we have $z^{\prime}\in
N_{D}^{-}(a_{1})\subseteq N_{D}(c_{1})$. Hence, $c_{1}\in\mathcal{C}$,
$deg_{D}(a_{1})\geqslant 3$ and $deg_{D}(c_{1})\geqslant 3$, since
$z^{\prime}\notin\mathcal{C}$ and $z^{\prime}\in N_{D}(a_{1})\cap
N_{D}(c_{1})$.
This implies, $V(C)\subseteq\mathcal{C}$. A contradiction, by Lemma 4.5, since
$C$ has the $\star$-property.
$\Leftarrow)$ Assume $C=(a^{\prime},a,b,b^{\prime},c,a^{\prime})$ with
$(a,b)\in E(C)$ such that $w(a)\neq 1$. We take a minimal vertex cover
$\mathcal{C}$ of $D$. We will prove ($\star.1$), ($\star.2$) and ($\star.3$).
$\mathbf{(\star.1)}$ First we will prove $(a^{\prime},a)\in E(D)$. By
contradiction, suppose $(a,a^{\prime})\in E(D)$. By Remark 2.3, there is $y\in
N_{D}^{-}(a)$, since $a\in V^{+}$. Thus, $y\notin V(C)$ and $deg_{D}(a)\geq
3$. Consequently, $deg_{D}(a^{\prime})=deg_{D}(b)=2$, since $C$ is basic.
Also, $deg_{D}(b^{\prime})=2$ or $deg_{D}(c)=2$, since $C$ is basic. We can
assume $deg_{D}(c)=2$, then $N_{D}(c)=\\{a^{\prime},b^{\prime}\\}$. So, by
Remark 2.6, $\mathcal{C}_{1}=\big{(}\mathcal{C}\setminus\\{y,c\\}\big{)}\cup
N_{D}(y,b)\cup N_{D}^{+}(a)$ is a vertex cover, since $\mathcal{C}$ is a
vertex cover, $\\{a^{\prime},b^{\prime}\\}\subseteq N_{D}(b)\cup
N_{D}^{+}(a)\subseteq\mathcal{C}_{1}$. Since $deg_{D}(c)=2$, we have $c\not\in
N_{D}(y)$. Furthermore, $c\notin N_{D}(b)\cup N_{D}^{+}(a)$, since $C$ is
induced. Then, $c\not\in\mathcal{C}_{1}$. Also, $N_{D}(b)=\\{b^{\prime},a\\}$,
implies $y\not\in\mathcal{C}_{1}$, since $y\not\in N_{D}^{+}(a)$. By
Proposition 3.1 there is a strong vertex cover $\mathcal{C}_{1}^{\prime}$ such
that $N_{D}^{+}(a)\subseteq\mathcal{C}_{1}^{\prime}\subseteq\mathcal{C}_{1}$,
since $a\in V^{+}$. Thus, $c,y\notin\mathcal{C}_{1}^{\prime}$, since
$c,y\notin\mathcal{C}_{1}$. By Remark 2.8, $a^{\prime},b^{\prime},a\in
N_{D}(c)\cup N_{D}(y)\subseteq\mathcal{C}_{1}^{\prime}$. Furthermore, $b\in
N_{D}^{+}(a)\subseteq\mathcal{C}_{1}^{\prime}$. Hence,
$|\mathcal{C}_{1}^{\prime}\cap V(C)|=4$. A contradiction.
Now, we prove $w(a^{\prime})=1$. By contradiction, assume $w(a^{\prime})\neq
1$. By the last argument, $(c,a^{\prime})\in E(D)$, since $(a^{\prime},a)\in
E(D)$ and $a\in V^{+}$. A contradiction, by (a) in Lemma 4.3.
$\mathbf{(\star.2)}$ We will prove $N_{D}^{-}(a)\subseteq N_{D}(c)$. By
contradiction, suppose there is $y\in N_{D}^{-}(a)\setminus N_{D}(c)$. Also,
$N_{D}^{-}(a)\cap V(C)\subseteq\\{a^{\prime}\\}\subseteq N_{D}(c)$, since
$b\in N_{D}^{+}(a)$. Hence, $y\notin V(C)$. By Remark 2.6,
$\mathcal{C}_{2}=\big{(}\mathcal{C}\setminus\\{y,c\\}\big{)}\cup
N_{D}(y,c)\cup N_{D}^{+}(a)$ is a vertex cover. Furthermore,
$y,c\notin\mathcal{C}_{2}$, since $y\in N_{D}^{-}(a)\setminus N_{D}(c)$ and
$c\notin N_{D}(a,y)$. By Proposition 3.1, there is a strong vertex cover
$\mathcal{C}_{2}^{\prime}$ such that
$N_{D}^{+}(a)\subseteq\mathcal{C}_{2}^{\prime}\subseteq\mathcal{C}_{2}$, since
$a\in V^{+}$. Thus, $y,c\notin\mathcal{C}_{2}^{\prime}$ since
$y,c\notin\mathcal{C}_{2}$. By Remark 2.8, $a,a^{\prime},b^{\prime}\in
N_{D}(y,c)\subseteq\mathcal{C}_{2}^{\prime}$. Hence,
$|\mathcal{C}_{2}^{\prime}\cap V(C)|=4$, since $b\in
N_{D}^{+}(a)\subseteq\mathcal{C}_{2}^{\prime}$. A contradiction.
Now, we prove $N_{D}^{-}(a)\cap V^{+}\subseteq N_{D}^{-}(c)$. By
contradiction, suppose there is $y\in N_{D}^{-}(a)\cap V^{+}\setminus
N_{D}^{-}(c)$. By Remark 2.6,
$\mathcal{C}_{3}=(\mathcal{C}\setminus\\{c\\})\cup N_{D}(c)\cup
N_{D}^{+}(a,y)$ is a vertex cover. Furthermore, $c\notin N_{D}^{+}(a,y)$, then
$c\notin\mathcal{C}_{3}$. By Proposition 3.1, there is a strong vertex cover
$\mathcal{C}_{3}^{\prime}$ such that
$N_{D}^{+}(a,y)\subseteq\mathcal{C}_{3}^{\prime}\subseteq\mathcal{C}_{3}$
since $\\{a,y\\}\subseteq V^{+}$. So, $c\notin\mathcal{C}_{3}^{\prime}$, since
$c\notin\mathcal{C}_{3}$. Thus, by Remark 2.8 $a^{\prime},b^{\prime}\in
N_{D}(c)\subseteq\mathcal{C}_{3}^{\prime}$. Also, $a,b\in
N_{D}^{+}(a,y)\subseteq\mathcal{C}_{3}^{\prime}$. Hence,
$|\mathcal{C}^{\prime}_{3}\cap V(C)|=4$, a contradiction.
$\mathbf{(\star.3)}$ We prove $N_{D}(b^{\prime})\subseteq N_{D}(a^{\prime})\cup N_{D}^{+}(a)$. By contradiction, we suppose there is $y\in N_{D}(b^{\prime})\setminus\big{(}N_{D}(a^{\prime})\cup N_{D}^{+}(a)\big{)}$. Thus, $y\notin V(C)$, since $N_{D}(b^{\prime})\cap V(C)=\\{c,b\\}\subseteq N_{D}(a^{\prime})\cup N_{D}^{+}(a)$. By Remark 2.6, $\mathcal{C}_{4}=\big{(}\mathcal{C}\setminus\\{y,a^{\prime}\\}\big{)}\cup N_{D}(y,a^{\prime})\cup N_{D}^{+}(a)$ is a vertex cover. Furthermore, $y\notin\mathcal{C}_{4}$,
since $y\notin N_{D}(a^{\prime})\cup N_{D}^{+}(a)$. By ($\star.1$),
$(a^{\prime},a)\in E(D)$, then $a^{\prime}\notin\mathcal{C}_{4}$, since
$a^{\prime}\notin N_{D}(y)\cup N_{D}^{+}(a)$. By Proposition 3.1, there is a
strong vertex cover $\mathcal{C}_{4}^{\prime}$ such that
$N_{D}^{+}(a)\subseteq\mathcal{C}_{4}^{\prime}\subseteq\mathcal{C}_{4}$, since
$a\in V^{+}$. So, $y,a^{\prime}\notin\mathcal{C}_{4}^{\prime}$, since
$y,a^{\prime}\notin\mathcal{C}_{4}$. Thus, by Remark 2.8 $b^{\prime},a,c\in
N_{D}(y)\cup N_{D}(a^{\prime})\subseteq\mathcal{C}_{4}^{\prime}$. Also, $b\in
N_{D}^{+}(a)\subseteq\mathcal{C}_{4}^{\prime}$. Hence,
$|\mathcal{C}_{4}^{\prime}\cap V(C)|=4$, a contradiction.
Finally, we prove $N_{D}^{-}(b^{\prime})\cap V^{+}\subseteq
N_{D}^{-}(a^{\prime})$. By contradiction, we suppose there is
$y\in\big{(}N_{D}^{-}(b^{\prime})\cap V^{+}\big{)}\setminus
N_{D}^{-}(a^{\prime})$. By $(\star.1)$, $a^{\prime}\in N_{D}^{-}(a)$.
Furthermore, by (a) in Lemma 4.3, $y\neq b$, since $y\in V^{+}$. If $y=c$,
then $(c,b^{\prime})\in E(D)$. Thus, by $(\star.1)$, with the edge
$(a^{\prime},c)\in E(D)$. A contradiction by (b) in Lemma 4.3, since $c=y\in
V^{+}$. Hence, $y\notin V(C)$. By Remark 2.6,
$\mathcal{C}_{5}=\big{(}\mathcal{C}\setminus\\{a^{\prime}\\}\big{)}\cup
N_{D}(a^{\prime})\cup N_{D}^{+}(y,a)$ is a vertex cover. By Remark 2.6,
$a^{\prime}\notin N_{D}^{+}(y,a)$, since $(a^{\prime},a)\in E(D)$ and $y\notin
N_{D}^{-}(a^{\prime})$. Consequently, $a^{\prime}\notin\mathcal{C}_{5}$. By
Proposition 3.1, there is a strong vertex cover $\mathcal{C}_{5}^{\prime}$
such that
$N_{D}^{+}(a,y)\subseteq\mathcal{C}_{5}^{\prime}\subseteq\mathcal{C}_{5}$,
since $\\{a,y\\}\subseteq V^{+}$. So,
$a^{\prime}\notin\mathcal{C}_{5}^{\prime}$, since
$a^{\prime}\notin\mathcal{C}_{5}$. Then, by Remark 2.8 $a,c\in
N_{D}(a^{\prime})\subseteq\mathcal{C}_{5}^{\prime}$. Furthermore,
$b,b^{\prime}\in N_{D}^{+}(a,y)\subseteq\mathcal{C}_{5}^{\prime}$. Hence,
$|\mathcal{C}_{5}^{\prime}\cap V(C)|=4$, a contradiction. $\Box$
###### Lemma 4.7
Let $\mathcal{C}$ be a vertex cover of $D$, where $G$ is an $SCQ$ graph. Then,
$|\mathcal{C}|=\tau(G)$ if and only if $|\mathcal{C}\cap V(K)|=|V(K)|-1$,
$|\mathcal{C}\cap V(C)|=3$ and $|\mathcal{C}\cap e|=1$ for each $K\in S_{G}$,
$C\in C_{G}$ and $e\in Q_{G}$, respectively.
Proof. We set $\mathcal{C}$ a vertex cover of $D$, $K\in S_{G}$, $C\in C_{G}$
and $e\in Q_{G}$. Then, there are $y\in V(G)$ and $a,a^{\prime}\in V(C)$ such
that $K=G[N_{G}[y]]$, $deg_{G}(a)=deg_{G}(a^{\prime})=2$ and
$\\{a,a^{\prime}\\}\notin E(G)$. We set $A_{K}:=V(K)\setminus\\{y\\}$ and
$B_{C}:=V(C)\setminus\\{a,a^{\prime}\\}$. Also, $\mathcal{C}\cap V(K)$ is a
vertex cover of $K$, so $|\mathcal{C}\cap V(K)|\geqslant\tau(K)=|V(K)|-1$.
Similarly, $|\mathcal{C}\cap V(C)|\geqslant\tau(C)=3$ and $|\mathcal{C}\cap
e|\geqslant\tau(e)=1$. Thus,
$|\mathcal{C}|=\sum\limits_{K\in S_{G}}|\mathcal{C}\cap
V(K)|+\sum\limits_{C\in C_{G}}|\mathcal{C}\cap V(C)|+\sum\limits_{e\in
Q_{G}}|\mathcal{C}\cap e|\geqslant\sum\limits_{K\in
S_{G}}\big{(}|V(K)|-1\big{)}+3|C_{G}|+|Q_{G}|,$ (4.1)
since $\mathcal{H}=\\{V(H)\mid H\in S_{G}\cup C_{G}\cup Q_{G}\\}$ is a
partition of $V(G)$. Now, we take a maximal stable set $S$ contained in
$V(Q_{G}):=\\{x\in V(G)\mid x\in e\ {\rm and}\ e\in Q_{G}\\}$. Then, $|S\cap
e|\leqslant 1$ for each $e\in Q_{G}$, since $S$ is stable. If $S\cap
e=\emptyset$ for some $e=\\{x_{1},x_{2}\\}\in Q_{G}$, then there are
$y_{1},y_{2}\in S$ such that $\\{x_{1},y_{1}\\}$, $\\{x_{2},y_{2}\\}\in E(G)$,
since $S$ is maximal. But $Q_{G}$ satisfies the property (P), then
$\\{y_{1},y_{2}\\}\in E(G)$. A contradiction, since $S$ is stable. Hence,
$|S\cap e|=1$ for each $e\in Q_{G}$. Consequently, $|S|=|Q_{G}|$ and
$|S^{\prime}|=|Q_{G}|$, where $S^{\prime}=V(Q_{G})\setminus S$. Now, we take
$\mathcal{C}(S^{\prime})=\Big{(}\bigcup\limits_{K\in
S_{G}}A_{K}\Big{)}\bigcup\Big{(}\bigcup\limits_{C\in C_{G}}B_{C}\Big{)}\bigcup
S^{\prime}$.
We prove $\mathcal{C}(S^{\prime})$ is a vertex cover of $D$. By contradiction,
suppose there is $\hat{e}\in E(G)$ such that
$\hat{e}\cap\mathcal{C}(S^{\prime})=\emptyset$. We set $z\in\hat{e}$, then
$\hat{e}=\\{z,z^{\prime}\\}$. If $z\in V(\tilde{K})$ for some $\tilde{K}\in
S_{G}$, then $\tilde{K}=G[N_{G}[z]]$, since
$A_{\tilde{K}}\subseteq\mathcal{C}(S^{\prime})$ and
$z\notin\mathcal{C}(S^{\prime})$. So, $z^{\prime}\in
N_{G}(z)\subseteq\tilde{K}\setminus\\{z\\}=A_{\tilde{K}}\subseteq\mathcal{C}(S^{\prime})$.
A contradiction, since $\hat{e}\cap\mathcal{C}(S^{\prime})=\emptyset$. Now, if
$z\in V(\tilde{C})$ for some $\tilde{C}\in C_{G}$, then $z\notin
B_{\tilde{C}}$. Thus, $deg_{G}(z)=2$ implying $z^{\prime}\in
B_{\tilde{C}}\subseteq\mathcal{C}(S^{\prime})$, since $\\{z,z^{\prime}\\}\in
E(G)$. A contradiction. Then, $\hat{e}\subseteq V(Q_{G})$, since $\mathcal{H}$
is a partition of $V(G)$. Also, $\hat{e}\cap S^{\prime}=\emptyset$, this
implies $\hat{e}\subseteq V(Q_{G})\setminus S^{\prime}=S$. But $S$ is stable.
This is a contradiction. Hence, $\mathcal{C}(S^{\prime})$ is a vertex cover of
$D$. Furthermore,
$|\mathcal{C}(S^{\prime})|=\sum\limits_{K\in S_{G}}|A_{K}|+\sum\limits_{C\in
C_{G}}|B_{C}|+|S^{\prime}|=\sum\limits_{K\in
S_{G}}\big{(}|V(K)|-1\big{)}+3|C_{G}|+|Q_{G}|$.
Thus, $\tau(G)=\sum_{K\in S_{G}}\big{(}|V(K)|-1\big{)}+3|C_{G}|+|Q_{G}|$.
Therefore, by (4.1), $|\mathcal{C}|=\tau(G)$ if and only if $|\mathcal{C}\cap
V(K)|=|K|-1$, $|\mathcal{C}\cap V(C)|=3$ and $|\mathcal{C}\cap e|=1$ for each
$K\in S_{G}$, $C\in C_{G}$ and $e\in Q_{G}$, respectively. $\Box$
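The count obtained at the end of this proof, $\tau(G)=\sum_{K\in S_{G}}\big{(}|V(K)|-1\big{)}+3|C_{G}|+|Q_{G}|$, is immediate to evaluate once an SCQ decomposition is known; a minimal sketch (names ours):

```python
def tau_scq(simplex_sizes, num_basic_5_cycles, num_matching_edges):
    """tau(G) for an SCQ graph, via the formula in the proof of Lemma 4.7."""
    return (sum(k - 1 for k in simplex_sizes)
            + 3 * num_basic_5_cycles
            + num_matching_edges)

# e.g. simplexes on 4 and 3 vertices, one basic 5-cycle, two Q_G edges:
print(tau_scq([4, 3], 1, 2))  # (4-1) + (3-1) + 3 + 2 = 10
```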
###### Theorem 4.8
Let $D=(G,\mathcal{O},w)$ be a weighted oriented graph where $G$ is an $SCQ$
graph. Then, $I(D)$ is unmixed if and only if $D$ satisfies the following
conditions:
1. (a)
Each basic $5$-cycle of $G$ has the $\star$-property.
2. (b)
Each simplex of $D$ has no generating $\star$-semi-forests.
3. (c)
$N_{D}(b)\subseteq N_{D}^{+}(a)$ when $a\in V^{+}$, $\\{b,b^{\prime}\\}\in
Q_{G}$ and $b^{\prime}\in N_{D}^{+}(a)$.
Proof. $\Rightarrow)$ We take a strong vertex cover $\mathcal{C}$ of $D$, then
by Remark 2.14, $|\mathcal{C}|=\tau(G)$. Consequently, by Lemma 4.7,
$|\mathcal{C}\cap V(K)|=|V(K)|-1$, $|\mathcal{C}\cap V(C)|=3$ and
$|\mathcal{C}\cap e|=1$ for each $K\in S_{G}$, $C\in C_{G}$ and $e\in Q_{G}$.
Thus, $V(K)\not\subseteq\mathcal{C}$. Consequently, by Theorem 3.10, $D$
satisfies (b). Furthermore, by Propositions 4.6 and 4.1, $D$ satisfies (a) and
(c).
$\Leftarrow)$ Let $\mathcal{C}$ be a strong vertex cover of $D$. By (a) and
Proposition 4.6, we have $|\mathcal{C}\cap V(C)|=3$ for each $C\in C_{G}$.
Furthermore, by (b) and Theorem 3.10, $V(K)\not\subseteq\mathcal{C}$ for each
$K\in S_{G}$. Consequently, $|V(K)|>|\mathcal{C}\cap
V(K)|\geqslant\tau(K)=|V(K)|-1$. So, $|\mathcal{C}\cap V(K)|=|V(K)|-1$. Now,
if $e\in Q_{G}$, then $e$ has the property (P), since $Q_{G}$ has the property
(P). Thus, by (c) and Proposition 4.1, $|\mathcal{C}\cap e|=1$. Hence, by
Lemma 4.7, $|\mathcal{C}|=\tau(G)$. Therefore $I(D)$ is unmixed, by $(2)$ in
Theorem 2.12. $\Box$
###### Corollary 4.9
Let $D=(G,\mathcal{O},w)$ be a weighted oriented graph where $G$ is a
simplicial or chordal graph. Then, $I(D)$ is unmixed if and only if $D$
satisfies the following conditions:
1. (a)
Each vertex is in exactly one simplex of $D$.
2. (b)
Each simplex of $D$ has no generating $\star$-semi-forest.
Proof. $\Rightarrow)$ By $(3)$ in Theorem 2.12 and Remark 2.17, $G$ is well-
covered. Thus, by Theorem 2.26, $G$ satisfies (a). Furthermore, by Remark
2.29, $G$ is an $SCQ$ graph with $C_{G}=Q_{G}=\emptyset$. Hence, by Theorem
4.8, $D$ satisfies (b).
$\Leftarrow)$ By (a), $\\{V(H)\mid H\in S_{G}\\}$ is a partition of $V(G)$.
Hence, $G$ is an $SCQ$ graph with $C_{G}=\emptyset$ and $Q_{G}=\emptyset$.
Therefore, by (b) and Theorem 4.8, $I(D)$ is unmixed. $\Box$
## 5 Unmixedness of weighted oriented graphs without some small cycles
Let $D=(G,\mathcal{O},w)$ be a weighted oriented graph. In this Section, we
study and characterize the unmixed property of $I(D)$ when $G$ has no $3$\- or
$5$\- cycles (Theorem 5.4), or $G$ is a graph without $4$\- or $5$-cycles
(Theorem 5.10), or $girth(G)\geqslant 5$ (Theorem 5.13). In other words, in
this Section, we characterize the unmixed property of $I(D)$ when $G$ has at
most one of the following types of cycles: $3$-cycles, $4$-cycles and
$5$-cycles.
###### Proposition 5.1
If for each $(y,x)\in E(D)$ with $y\in V^{+}$, we have that
$N_{D}(y^{\prime})\subseteq N_{D}^{+}(y)$ for some $y^{\prime}\in
N_{D}(x)\setminus y$, then $L_{3}(\mathcal{C})=\emptyset$ for each strong
vertex cover $\mathcal{C}$ of $D$.
Proof. By contradiction, suppose there is a strong vertex cover $\mathcal{C}$
of $D$ and $x\in L_{3}(\mathcal{C})$. Hence, there is
$y\in\big{(}\mathcal{C}\setminus L_{1}(\mathcal{C})\big{)}\cap V^{+}$ with
$(y,x)\in E(D)$. Then, $N_{D}(x)\subseteq\mathcal{C}$ and
$N_{D}^{+}(y)\subseteq\mathcal{C}$, since $x\in L_{3}(\mathcal{C})$ and
$y\in\mathcal{C}\setminus L_{1}(\mathcal{C})$. By hypothesis, there is a
vertex $y^{\prime}\in N_{D}(x)\setminus y\subseteq\mathcal{C}$ such that
$N_{D}(y^{\prime})\subseteq N_{D}^{+}(y)\subseteq\mathcal{C}$. Thus,
$y^{\prime}\in L_{3}(\mathcal{C})$. Since $\mathcal{C}$ is strong, there is
$(y_{1},y^{\prime})\in E(D)$ with $y_{1}\in V^{+}$. So, $y_{1}\in
N_{D}(y^{\prime})\subseteq N_{D}^{+}(y)$. On the other hand, $(y_{1},x_{1})\in
E(D)$ where $x_{1}:=y^{\prime}$ and $y_{1}\in V^{+}$, then by hypothesis,
there is $y_{1}^{\prime}\in N_{D}(x_{1})\setminus y_{1}$ such that
$N_{D}(y_{1}^{\prime})\subseteq N_{D}^{+}(y_{1})$. Hence, $y_{1}^{\prime}\in
N_{D}(x_{1})=N_{D}(y^{\prime})\subseteq N_{D}^{+}(y)$. Consequently, $y\in
N_{D}(y_{1}^{\prime})\subseteq N_{D}^{+}(y_{1})$. A contradiction, since
$y_{1}\in N_{D}^{+}(y)$. $\Box$
###### Corollary 5.2
If $G$ is well-covered and $V^{+}$ is a subset of sinks, then $I(D)$ is
unmixed.
Proof. If $y\in V^{+}$, then $y$ is a sink. Thus, $(y,x)\notin E(D)$ for each
$x\in V(D)$. Hence, by Proposition 5.1, $L_{3}(\mathcal{C})=\emptyset$, for
each strong vertex cover $\mathcal{C}$ of $D$. Furthermore, by Remark 2.17,
$I(G)$ is unmixed. Therefore $I(D)$ is unmixed, by $(3)$ in Theorem 2.12.
$\Box$
###### Lemma 5.3
Let $(z,y),(y,x)$ be edges of $D$ with $y\in V^{+}$ and
$N_{D}(x)=\\{y,x_{1},\ldots,x_{s}\\}$. If there are $z_{i}\in
N_{D}(x_{i})\setminus N_{D}^{+}(y)$ such that $\\{z,x,z_{1},\ldots,z_{s}\\}$
is a stable set, then $I(D)$ is mixed.
Proof. We take $A:=\\{z,z_{1},\ldots,z_{s}\\}$, then $A\cup\\{x\\}$ is a
stable set. We can take a maximal stable set $S$ of $V(G)$, such that
$A\cup\\{x\\}\subseteq S$. So, $\tilde{\mathcal{C}}=V(G)\setminus S$ is a
minimal vertex cover of $D$. Hence, $\mathcal{C}=\tilde{\mathcal{C}}\cup
N_{D}^{+}(y)$ is a vertex cover of $D$. Also $A\cap\mathcal{C}=\emptyset$,
since $A\subseteq S$, $z\in N_{D}^{-}(y)$ and $z_{i}\notin N_{D}^{+}(y)$. By
Proposition 3.1, there is a strong vertex cover $\mathcal{C}^{\prime}$ of $D$
such that $N_{D}^{+}(y)\subseteq\mathcal{C}^{\prime}\subseteq\mathcal{C}$,
since $y\in V^{+}$. Thus, $A\cap\mathcal{C}^{\prime}=\emptyset$, since
$A\cap\mathcal{C}=\emptyset$. Then, by Remark 2.8,
$N_{D}(A)\subseteq\mathcal{C}^{\prime}$. Furthermore
$N_{D}(x)=\\{y,x_{1},\ldots,x_{s}\\}\subseteq N_{D}(A)$. Consequently,
$N_{D}(x)\subseteq\mathcal{C}^{\prime}$. Hence, $x\in
L_{3}(\mathcal{C}^{\prime})$, since $x\in
N_{D}^{+}(y)\subseteq\mathcal{C}^{\prime}$. Therefore, by (3) in Theorem 2.12,
$I(D)$ is mixed. $\Box$
###### Theorem 5.4
Let $D=(G,\mathcal{O},w)$ be a weighted oriented graph such that $G$ has no
$3$\- or $5$-cycles. Then, $I(D)$ is unmixed if and only if $D$ satisfies the
following conditions:
1. (a)
$G$ is well-covered.
2. (b)
If $(y,x)\in E(D)$ with $y\in V^{+}$, then $N_{D}(y^{\prime})\subseteq
N_{D}^{+}(y)$ for some $y^{\prime}\in N_{D}(x)\setminus y$.
Proof. $\Leftarrow)$ By Proposition 5.1 and (b), we have that
$L_{3}(\mathcal{C})=\emptyset$ for each strong vertex cover $\mathcal{C}$ of
$D$. Furthermore, by (a) and Remark 2.17, $I(G)$ is unmixed. Therefore, by
$(3)$ in Theorem 2.12, $I(D)$ is unmixed.
$\Rightarrow)$ By (3) in Theorem 2.12 and Remark 2.17, $D$ satisfies (a). Now,
we take $(y,x)\in E(D)$ with $y\in V^{+}$. Then, by Remark 2.3, there is $z\in
N_{D}^{-}(y)$. Furthermore $z\notin N_{D}(x)$, since $G$ has no $3$-cycles. We
set $N_{D}(x)\setminus y=\\{x_{1},\ldots,x_{s}\\}$. We will prove (b). By
contradiction, suppose there is $z_{i}\in N_{D}(x_{i})\setminus N_{D}^{+}(y)$
for each $i=1,\ldots,s$. If $\\{z_{i},z_{j}\\}\in E(G)$ for some $1\leqslant
i<j\leqslant s$, then $(x,x_{i},z_{i},z_{j},x_{j},x)$ is a $5$-cycle. But $G$
has no $5$-cycles, then $\\{z_{1},\ldots,z_{s}\\}$ is a stable set. Now, if
$\\{x,z_{k}\\}\in E(G)$ or $\\{z,z_{k}\\}\in E(G)$ for some
$k\in\\{1,\ldots,s\\}$, then $(x,x_{k},z_{k},x)$ is a $3$-cycle or
$(z,y,x,x_{k},z_{k},z)$ is a $5$-cycle. Hence, $\\{x,z,z_{1},\ldots,z_{s}\\}$
is a stable set. A contradiction, by Lemma 5.3, since $I(D)$ is unmixed.
$\Box$
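Condition (b) of Theorem 5.4 is again a local, finitely checkable property of $D$. A sketch of a checker (plain Python, names ours; well-coveredness of $G$, i.e. condition (a), must be tested separately):

```python
def condition_b_thm54(succ, w):
    """Condition (b) of Theorem 5.4: for every (y, x) in E(D) with
    w(y) > 1, some y2 in N_D(x) other than y has N_D(y2) ⊆ N_D^+(y)."""
    pred = {v: set() for v in succ}
    for u, outs in succ.items():
        for v in outs:
            pred[v].add(u)
    nbr = {v: succ[v] | pred[v] for v in succ}
    return all(
        any(nbr[yp] <= succ[y] for yp in nbr[x] - {y})
        for y in succ if w[y] > 1
        for x in succ[y]
    )
```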
In the following results, we use the notation of Figure 1.
###### Remark 5.5
Let $G$ be a graph in $\\{C_{7},T_{10},P_{10},P_{13},P_{14},Q_{13}\\}$. Then:
1. (a)
$G$ does not contain $4$-cycles. Furthermore, if $G$ has a $3$-cycle, then
$G=T_{10}$.
2. (b)
If $deg_{G}(x)=2$, then $x$ is not in a $3$-cycle of $G$.
3. (c)
If $G\neq C_{7}$ and $\tilde{e}=\\{v,u\\}\in E(G)$ with
$deg_{D}(v)=deg_{D}(u)=2$, then
$\tilde{e}\in\\{\tilde{e}_{1},\tilde{e}_{2},\tilde{e}_{3}\\}$. Also, if
$\tilde{e}$ is in a $5$-cycle $C$, then $G\in\\{P_{10},P_{13}\\}$,
$\tilde{e}\in\\{\tilde{e}_{1},\tilde{e}_{2}\\}$ and $C\in\\{C_{1},C_{2}\\}$ or
$G=Q_{13}$, $\tilde{e}=\tilde{e}_{1}$ and $C=C_{1}$.
4. (d)
If $P=(y_{1},y_{2},y_{3})$ is a path in $G$ with $deg_{G}(y_{i})=2$ for
$i=1,2,3$, then $G=C_{7}$.
Proof. (a) By Theorems 2.28 and 2.30, $G$ has no $4$-cycles. Now, if $G$ has a
$3$-cycle then, by Theorem 2.30, $G=T_{10}$.
(b) By (a), the unique $3$-cycle is $(c_{1},c_{2},c_{3},c_{1})$ in $T_{10}$
and $deg_{T_{10}}(c_{i})=3$ for $i=1,2,3$.
(c) By Figure 1, $\tilde{e}\in\\{\tilde{e}_{1},\tilde{e}_{2},\tilde{e}_{3}\\}$
and $G\in\\{T_{10},P_{10},P_{13},Q_{13}\\}$, since $G\neq C_{7}$. Now, assume
$\tilde{e}$ is in a $5$-cycle $C$. By Theorem 2.28, $G\neq T_{10}$. If
$G=P_{10}$, then $\tilde{e}_{3}$ is not in a $5$-cycle. Thus,
$\tilde{e}\in\\{\tilde{e}_{1},\tilde{e}_{2}\\}$ and $C\in\\{C_{1},C_{2}\\}$.
Now, if $G=P_{13}$, then $\tilde{e}\in\\{\tilde{e}_{1},\tilde{e}_{2}\\}$ and
$C\in\\{C_{1},C_{2}\\}$. Finally, if $G=Q_{13}$, then
$\tilde{e}_{2},\tilde{e}_{3}$ are not in a $5$-cycle. Hence,
$\tilde{e}=\tilde{e}_{1}$ and $C=C_{1}$.
(d) By contradiction, suppose $G\neq C_{7}$. If $e\in E(P)$, then by (c),
$e\in\\{\tilde{e}_{1},\tilde{e}_{2},\tilde{e}_{3}\\}$. But $\tilde{e}_{i}\cap\tilde{e}_{j}=\emptyset$ for $i\neq j$.
A contradiction, since $P$ is a path. $\Box$
###### Lemma 5.6
Let $G$ be a graph in $\\{C_{7},T_{10},P_{10},P_{13},P_{14},Q_{13}\\}$ with
$I(D)$ unmixed. If $(z,y),(y,x)\in E(D)$ with $y\in V^{+}$ and
$N_{D}(x)\setminus\\{y\\}=\\{x_{1}\\}$, then $deg_{D}(x_{1})=2$.
Proof. By contradiction, suppose $deg_{D}(x_{1})\geqslant 3$. Hence, there are
$z_{1},z_{1}^{\prime}\in N_{D}(x_{1})\setminus\\{x\\}$. By hypothesis,
$deg_{D}(x)=2$. Then, by (b) in Remark 5.5, $x$ is not in a $3$-cycle. So,
$x_{1}\neq z$. Furthermore, by (a) in Remark 5.5, $G$ has no $4$-cycles. Thus,
$z\notin\\{z_{1}^{\prime},z_{1}\\}$ and $z_{1},z_{1}^{\prime}\notin N_{D}(y)$.
If $z_{1}^{\prime},z_{1}\in N_{D}(z)$, then
$(x_{1},z_{1}^{\prime},z,z_{1},x_{1})$ is a $4$-cycle. A contradiction, then
we can assume $z_{1}\notin N_{D}(z)$. Consequently, $\\{x,z,z_{1}\\}$ is a
stable set, since $x$ is not in a $3$-cycle. A contradiction, by Lemma 5.3,
since $I(D)$ is unmixed. $\Box$
###### Lemma 5.7
If $I(D)$ is unmixed, $G\in\\{C_{7},T_{10},P_{10},P_{13},P_{14},Q_{13}\\}$ and
$\tilde{e}=(y,x)\in E(D)$ with $deg_{D}(x)=2$ and $y\in V^{+}$, then
$G=P_{10}$ and $\tilde{e}=(d_{i},b_{j})$ with $\\{i,j\\}=\\{1,2\\}$.
Proof. By Remark 2.3, there is $z\in N_{D}^{-}(y)$. We set
$N_{D}(x)=\\{y,x_{1}\\}$, then by (b) in Remark 5.5, $z\neq x_{1}$. Thus, by
Lemma 5.6, $deg_{D}(x_{1})=2$. Now, we set $N_{D}(x_{1})=\\{x,z_{1}\\}$. By
(b) in Remark 5.5, $x$ is not in a $3$-cycle. So, $z,z_{1}\notin N_{D}(x)$ and
$z_{1}\neq y$. Also, by (a) in Remark 5.5, $G$ has no $4$-cycles. Then,
$z_{1}\neq z$ and $z_{1}\notin N_{D}(y)$. If $z\notin N_{D}(z_{1})$, then
$\\{x,z,z_{1}\\}$ is a stable set. A contradiction, by Lemma 5.3, since $I(D)$
is unmixed. Hence, $\\{z_{1},z\\}\in E(G)$ and $C:=(z,y,x,x_{1},z_{1},z)$ is a
$5$-cycle. Suppose $y^{\prime}\in N_{D}^{-}(y)\setminus\\{z\\}$, then
$y^{\prime}\notin N_{D}(z_{1})$, since otherwise $(z_{1},z,y,y^{\prime},z_{1})$ would be a $4$-cycle in $G$. Consequently, $\\{y^{\prime},x,z_{1}\\}$ is a stable
set, since $deg_{D}(x)=2$. A contradiction, by Lemma 5.3, since
$(y^{\prime},y),(y,x)\in E(D)$, $y\in V^{+}$, $N_{D}(x)=\\{y,x_{1}\\}$ and
$z_{1}\in N_{D}(x_{1})\setminus N_{D}^{+}(y)$. Hence, $N_{D}^{-}(y)=\\{z\\}$.
Now, by (c) in Remark 5.5 and by symmetry of $P_{10}$ and $P_{13}$, we can
assume $\\{x,x_{1}\\}=\tilde{e}_{1}$, $C=C_{1}$ and
$G\in\\{P_{10},P_{13},Q_{13}\\}$.
First, assume $G=P_{13}$. By symmetry and notation of Figure 1, we can suppose
$x_{1}=a_{1}$ and $x=a_{2}$, since $\tilde{e}_{1}=\\{x,x_{1}\\}$. Then,
$y=b_{2}$, $z=c_{1}$ and $z_{1}=b_{1}$. Thus, $(b_{2},d_{2})\in E(D)$, since
$y=b_{2}$ and $N_{D}^{-}(y)=\\{z\\}=\\{c_{1}\\}$. By Figure 1,
$(c_{1},b_{2})=(z,y)\in E(D)$, $(b_{2},d_{2})\in E(D)$, $b_{2}=y\in V^{+}$ and
$N_{D}(d_{2})=\\{b_{2},b_{4},v\\}$. Also, $a_{4}\in N_{D}(b_{4})$, $d_{1}\in
N_{D}(v)\setminus N_{D}(b_{2})$ and $\\{c_{1},d_{2},a_{4},d_{1}\\}$ is a
stable set. A contradiction, by Lemma 5.3, since $I(D)$ is unmixed.
Now, suppose $G=Q_{13}$. By the symmetry of $Q_{13}$, we can suppose $x=a_{2}$ and
$x_{1}=a_{1}$, then $d_{2}=y\in V^{+}$, $z=h$ and $(h,d_{2})=(z,y)\in E(D)$.
So, $(d_{2},c_{2})\in E(D)$, since
$N_{D}^{-}(d_{2})=N_{D}^{-}(y)=\\{z\\}=\\{h\\}$. A contradiction, by Lemma
5.3, since $(h,d_{2}),(d_{2},c_{2})\in E(D)$, $d_{2}\in V^{+}$,
$N_{D}(c_{2})=\\{d_{2},b_{1}\\}$, $g_{1}\in N_{D}(b_{1})\setminus
N_{D}^{+}(d_{2})$ and $\\{h,c_{2},g_{1}\\}$ is a stable set.
Hence, $G=P_{10}$ and $\\{x,x_{1}\\}=\tilde{e}_{1}=\\{a_{1},b_{1}\\}$. If
$x=a_{1}$ and $x_{1}=b_{1}$, then $g_{1}=y\in V^{+}$, $(d_{1},g_{1})=(z,y)\in
E(D)$, since $C=C_{1}$. Furthermore, $(g_{1},c_{1})\in E(D)$, since
$N_{D}^{-}(g_{1})=N_{D}^{-}(y)=\\{z\\}=\\{d_{1}\\}$. A contradiction by Lemma
5.3, since $(d_{1},g_{1}),(g_{1},c_{1})\in E(D)$, $g_{1}\in V^{+}$,
$N_{D}(c_{1})=\\{g_{1},c_{2}\\}$, $g_{2}\in N_{D}(c_{2})$ and
$\\{d_{1},c_{1},g_{2}\\}$ is stable. Therefore, $x=b_{1}$ and $x_{1}=a_{1}$,
implying $y=d_{2}$ and $(y,x)=(d_{2},b_{1})$, since $C=C_{1}$. $\Box$
###### Remark 5.8
Assume $I(D)$ is unmixed, $G\in\\{C_{7},T_{10},Q_{13},P_{13},P_{14}\\}$,
$\mathcal{C}$ is a strong vertex cover of $D$ and $y\in\mathcal{C}\cap V^{+}$
such that $N_{G}(y)\setminus\mathcal{C}\subseteq V_{2}:=\\{a\in V(G)\mid
deg_{G}(a)=2\\}$. We take $b\in N_{G}(y)\setminus\mathcal{C}$. If $(y,b)\in
E(D)$, then by Lemma 5.7, $G=P_{10}$. A contradiction, then $(b,y)\in E(D)$.
Consequently, $N_{D}(y)\setminus\mathcal{C}\subseteq N_{D}^{-}(y)$, i.e.
$N_{D}^{+}(y)\subseteq\mathcal{C}$. Hence, $y\in\mathcal{C}\setminus
L_{1}(\mathcal{C})$.
###### Proposition 5.9
If $I(D)$ is unmixed, with $G\in\\{C_{7},T_{10},Q_{13},P_{13},P_{14}\\}$, then
the vertices of $V^{+}$ are sinks.
Proof. By contradiction, suppose there is $(y,x)\in E(D)$ with $y\in V^{+}$.
Then, by Lemma 5.7, $deg_{D}(x)\geqslant 3$. Thus, $G\neq C_{7}$. By Remark
2.3, $y$ is not a source. So, there is $(z,y)\in E(D)$. We set $V_{2}:=\\{a\in
V(G)\mid deg_{G}(a)=2\\}$. By Theorem 2.12,
$L_{3}(\tilde{\mathcal{C}})=\emptyset$ for each strong vertex cover
$\tilde{\mathcal{C}}$ of $D$, since $I(D)$ is unmixed. Hence, to obtain a
contradiction, we will give a vertex cover $\mathcal{C}$ of $D$ such that
$L_{3}(\mathcal{C})=\\{x\\}$ and $y\in\mathcal{C}\setminus
L_{1}(\mathcal{C})$, since with these conditions $\mathcal{C}$ is strong. We
will use the notation of Figure 1.
Case (1) If $D=T_{10}$, then $x\in\\{c_{1},c_{2},c_{3},v\\}$, since
$deg_{G}(x)\geqslant 3$.
Case (1.a) $x\in\\{c_{1},c_{2},c_{3}\\}$. By symmetry of $T_{10}$, we can
assume $x=c_{1}$ and $y\in\\{b_{1},c_{2}\\}$. Thus,
$\mathcal{C}_{1}=\\{v,a_{2},a_{3},b_{1},c_{1},c_{2},c_{3}\\}$ is a vertex
cover with $L_{3}(\mathcal{C}_{1})=\\{x\\}$. If $y=b_{1}$, then $y\in\mathcal{C}_{1}$
and $z=a_{1}$, since $N_{D}(b_{1})=\\{a_{1},c_{1}\\}$. So,
$N_{D}^{+}(y)=\\{c_{1}\\}\subseteq\mathcal{C}_{1}$. Now, if $y=c_{2}$, then
$y\in\mathcal{C}_{1}$. Furthermore, by Lemma 5.7, $(b_{2},c_{2})\in E(D)$,
since $c_{2}=y\in V^{+}$, $b_{2}\in V_{2}$ and $I(D)$ is unmixed.
Consequently,
$N_{D}^{+}(y)\subseteq\\{c_{1},c_{3}\\}\subseteq\mathcal{C}_{1}$. Hence,
$y\in\mathcal{C}_{1}\setminus L_{1}(\mathcal{C}_{1})$ and we take
$\mathcal{C}=\mathcal{C}_{1}$.
Case (1.b) $x=v$. Then,
$\mathcal{C}_{2}=\\{v,a_{1},a_{2},a_{3},c_{1},c_{2},c_{3}\\}$ is a vertex
cover with $L_{3}(\mathcal{C}_{2})=\\{x\\}$. By symmetry of $T_{10}$, we can
suppose $y=a_{1}$. Consequently, $z=b_{1}$ and
$N_{D}^{+}(y)=\\{v\\}\subseteq\mathcal{C}_{2}$, since $a_{1}\in V_{2}$. Hence,
$y\in\mathcal{C}_{2}\setminus L_{1}(\mathcal{C}_{2})$ and we take
$\mathcal{C}=\mathcal{C}_{2}$.
Case (2) If $D=P_{14}$, then, by symmetry, we can assume $y=a_{1}$ and
$x\in\\{a_{2},b_{1}\\}$.
Case (2.a) $x=a_{2}$. Thus, $z\in\\{a_{7},b_{1}\\}$. We take
$\mathcal{C}_{3}=\\{a_{1},a_{2},a_{3},a_{4},a_{6},b_{1},b_{2},b_{5},b_{6},b_{7}\\}$
if $z=a_{7}$ or
$\mathcal{C}_{3}=\\{a_{1},a_{2},a_{3},a_{5},a_{7},b_{2},b_{3},b_{4},b_{5},b_{6}\\}$
if $z=b_{1}$. So, $\mathcal{C}_{3}$ is a vertex cover of $D$,
$y\in\mathcal{C}_{3}$ and $L_{3}(\mathcal{C}_{3})=\\{x\\}$. Furthermore,
$N_{D}(y)\setminus\mathcal{C}_{3}=\\{z\\}$ and $z\in N_{D}^{-}(y)$, then
$y\in\mathcal{C}_{3}\setminus L_{1}(\mathcal{C}_{3})$ and we take
$\mathcal{C}=\mathcal{C}_{3}$.
Case (2.b) $x=b_{1}$. By symmetry of $P_{14}$, we can suppose $z=a_{2}$. Then,
$\mathcal{C}_{4}=\\{a_{1},a_{3},a_{4},a_{5},a_{7},b_{1},b_{2},b_{3},b_{6},b_{7}\\}$
is a vertex cover of $D$ with $L_{3}(\mathcal{C}_{4})=\\{x\\}$. Also,
$N_{D}(y)\setminus\mathcal{C}_{4}=\\{a_{2}\\}$ and $a_{2}=z\in N_{D}^{-}(y)$.
Hence, $y\in\mathcal{C}_{4}\setminus L_{1}(\mathcal{C}_{4})$ and we take
$\mathcal{C}=\mathcal{C}_{4}$.
Case (3) If $D=P_{13}$, then we can assume $x\in\\{b_{1},c_{2},d_{1}\\}$,
since $deg_{G}(x)\geqslant 3$.
Case (3.a) $x=c_{2}$. Then, $y\in N_{D}(c_{2})=\\{b_{3},b_{4},c_{1}\\}$.
Without loss of generality, we can suppose $y\in\\{b_{3},c_{1}\\}$. If
$y=c_{1}$, then $z\in\\{b_{1},b_{2}\\}$. By symmetry, we can assume $z=b_{1}$
so $N_{D}^{+}(y)=N_{D}^{+}(c_{1})\subseteq\\{b_{2},c_{2}\\}$. We take
$\mathcal{C}_{5}=\\{a_{1},a_{4},b_{2},b_{3},b_{4},c_{1},c_{2},d_{1},v\\}$.
Thus, $\mathcal{C}_{5}$ is a vertex cover of $D$ with
$L_{3}(\mathcal{C}_{5})=\\{x\\}$, $y\in\mathcal{C}_{5}$ and
$N_{D}^{+}(y)\subseteq\\{b_{2},c_{2}\\}\subseteq\mathcal{C}_{5}$. Now, if
$y=b_{3}$, then $b_{3}\in V^{+}\cap\mathcal{C}_{5}$. Hence, by Lemma 5.7,
$(a_{3},b_{3})\in E(D)$, since $a_{3}\in V_{2}$ and $I(D)$ is unmixed.
Consequently, $N_{D}^{+}(y)\subseteq\\{c_{2},d_{1}\\}\subset\mathcal{C}_{5}$.
Therefore, $y\in\mathcal{C}_{5}\setminus L_{1}(\mathcal{C}_{5})$ and we take
$\mathcal{C}=\mathcal{C}_{5}$.
Case (3.b) $x=b_{1}$. Hence,
$\mathcal{C}_{6}=\\{a_{1},a_{3},b_{1},b_{2},b_{3},b_{4},c_{1},d_{1},d_{2}\\}$
is a vertex cover of $D$ with $L_{3}(\mathcal{C}_{6})=\\{x\\}$ and $y\in
N_{D}(b_{1})=\\{a_{1},c_{1},d_{1}\\}$. Also, $y\in\mathcal{C}_{6}$. If
$y\in\\{a_{1},d_{1}\\}$, then
$N_{D}(y)\setminus\mathcal{C}_{6}\subseteq\\{a_{2},v\\}\subseteq V_{2}$.
Consequently, by Lemma 5.7, $(b,y)\in E(D)$ for each $b\in
N_{D}(y)\setminus\mathcal{C}_{6}$, since $y\in V^{+}$. So,
$N_{D}^{+}(y)\subseteq\mathcal{C}_{6}$, implying $y\in\mathcal{C}_{6}\setminus L_{1}(\mathcal{C}_{6})$. Now, suppose $y=c_{1}$. We can assume $(c_{2},c_{1})\in
E(D)$, since in another case we have the case (3.a) with $x=c_{2}$ and
$y=c_{1}$. Thus, $y=c_{1}\in\mathcal{C}_{6}\setminus L_{1}(\mathcal{C}_{6})$,
since $N_{D}^{+}(c_{1})\subseteq\\{b_{1},b_{2}\\}\subset\mathcal{C}_{6}$.
Therefore, we take $\mathcal{C}=\mathcal{C}_{6}$.
Case (3.c) $x=d_{1}$. Then, $y\in N_{G}(d_{1})=\\{b_{1},b_{3},v\\}$. By
symmetry, we can assume $y\in\\{b_{1},v\\}$. Furthermore,
$\mathcal{C}_{7}=\\{a_{2},a_{4},b_{1},b_{2},b_{3},b_{4},c_{1},d_{1},v\\}$ is a
vertex cover of $D$ with $L_{3}(\mathcal{C}_{7})=\\{x\\}$ and
$y\in\mathcal{C}_{7}$. If $y=b_{1}$, then
$N_{D}(y)\setminus\mathcal{C}_{7}\subseteq\\{a_{1}\\}$. Also, by Lemma 5.7,
$N_{D}(y)\setminus\mathcal{C}_{7}\subseteq N_{D}^{-}(y)$, since $a_{1}\in
V_{2}$ and $y\in V^{+}$. Thus, $y\in\mathcal{C}_{7}\setminus
L_{1}(\mathcal{C}_{7})$. Now, if $y=v$, then $z=d_{2}$, since
$N_{D}(v)=\\{d_{1},d_{2}\\}$. This implies
$N_{D}^{+}(v)=\\{d_{1}\\}\subseteq\mathcal{C}_{7}$. So,
$y\in\mathcal{C}_{7}\setminus L_{1}(\mathcal{C}_{7})$ and we take
$\mathcal{C}=\mathcal{C}_{7}$.
Case (4) $D=Q_{13}$. Hence, $x\in\\{d_{1},d_{2},g_{1},g_{2},h,h^{\prime}\\}$,
since $deg_{D}(x)\geqslant 3$. By symmetry, we can suppose
$x\in\\{d_{2},g_{2},h,h^{\prime}\\}$.
Case (4.a) $x\in\\{d_{2},g_{2}\\}$. We take
$\mathcal{C}_{8}=\\{a_{2},c_{1},c_{2},d_{1},d_{2},g_{1},g_{2},h,h^{\prime}\\}$
if $x=d_{2}$ or
$\mathcal{C}_{8}=\\{a_{1},b_{2},c_{2},d_{1},d_{2},g_{1},g_{2},h,h^{\prime}\\}$
if $x=g_{2}$. Thus, $\mathcal{C}_{8}$ is a vertex cover of $D$ with
$L_{3}(\mathcal{C}_{8})=\\{x\\}$ and $V(G)\setminus\mathcal{C}_{8}=\\{a_{1},b_{1},b_{2},v\\}$ or $\\{a_{2},b_{1},c_{1},v\\}$, respectively; in both cases $V(G)\setminus\mathcal{C}_{8}\subseteq V_{2}$. Consequently, by Lemma 5.7, $N_{D}(y)\setminus\mathcal{C}_{8}\subseteq
N_{D}^{-}(y)$, since $y\in V^{+}$. This implies
$N_{D}^{+}(y)\subseteq\mathcal{C}_{8}$. Therefore
$y\in\mathcal{C}_{8}\setminus L_{1}(\mathcal{C}_{8})$ and we take
$\mathcal{C}=\mathcal{C}_{8}$.
Case (4.b) $x\in\\{h,h^{\prime}\\}$. We take
$\mathcal{C}_{9}=\\{a_{2},b_{1},b_{2},d_{1},d_{2},g_{1},g_{2},h,v\\}$ if $x=h$
or
$\mathcal{C}_{9}=\\{a_{2},c_{1},c_{2},d_{1},d_{2},g_{1},g_{2},h^{\prime},v\\}$
if $x=h^{\prime}$. Thus, $\mathcal{C}_{9}$ is a vertex cover of $D$ with
$L_{3}(\mathcal{C}_{9})=\\{x\\}$. Also, $y\in
N_{G}(x)\subseteq\\{v,d_{1},d_{2},g_{1},g_{2}\\}$. If $y=v$, then
$\\{x,z\\}\subseteq N_{D}(y)=\\{h,h^{\prime}\\}$. Hence,
$N_{D}^{+}(y)=\\{x\\}\subseteq\mathcal{C}_{9}$, then
$y\in\mathcal{C}_{9}\setminus L_{1}(\mathcal{C}_{9})$. Now, if $y\neq v$, then
$y\in\\{d_{1},d_{2}\\}$ when $x=h$ or $y\in\\{g_{1},g_{2}\\}$ when
$x=h^{\prime}$. Then,
$N_{D}(y)\setminus\mathcal{C}_{9}\subseteq\\{a_{1},c_{1},c_{2}\\}\subseteq
V_{2}$ if $x=h$ or
$N_{D}(y)\setminus\mathcal{C}_{9}\subseteq\\{b_{1},b_{2}\\}\subseteq V_{2}$ if
$x=h^{\prime}$. Consequently, by Lemma 5.7,
$N_{D}(y)\setminus\mathcal{C}_{9}\subseteq N_{D}^{-}(y)$, since $y\in V^{+}$.
Thus, $N_{D}^{+}(y)\subseteq\mathcal{C}_{9}$ and $y\in\mathcal{C}_{9}\setminus
L_{1}(\mathcal{C}_{9})$. Therefore, we take $\mathcal{C}=\mathcal{C}_{9}$.
$\Box$
###### Theorem 5.10
Let $D=(G,\mathcal{O},w)$ be a connected weighted oriented graph without $4$\-
and $5$-cycles. Then, $I(D)$ is unmixed if and only if $D$ satisfies one of
the following conditions:
1. (a)
$G\in\\{C_{7},T_{10}\\}$ and the vertices of $V^{+}$ are sinks.
2. (b)
Each vertex is in exactly one simplex of $D$ and each simplex of $D$ has no
generating $\star$-semi-forests.
Proof. $\Rightarrow)$ By (3) in Theorem 2.12 and Remark 2.17, $G$ is well-
covered. Thus, by Theorem 2.28, $G\in\\{C_{7},T_{10}\\}$ or $\\{V(H)\mid H\in
S_{G}\\}$ is a partition of $V(G)$. If $G\in\\{C_{7},T_{10}\\}$, then by
Proposition 5.9, $D$ satisfies (a). Now, if $\\{V(H)\mid H\in S_{G}\\}$ is a
partition of $V(G)$, then $G$ is an $SCQ$ graph with $C_{G}=Q_{G}=\emptyset$.
Hence, by Theorem 4.8, $D$ satisfies (b).
$\Leftarrow)$ If $D$ satisfies (a), then by Theorem 2.28, $G$ is well-covered.
Consequently, by Corollary 5.2, $I(D)$ is unmixed. Now, if $D$ satisfies (b),
then $G$ is an $SCQ$ graph, with $C_{G}=Q_{G}=\emptyset$. Therefore, by (b)
and Theorem 4.8, $I(D)$ is unmixed. $\Box$
###### Corollary 5.11
Let $D$ be a weighted oriented graph without isolated vertices and
$girth(G)\geqslant 6$. Then, $I(D)$ is unmixed if and only if $D$ satisfies one of the following properties:
1. (a)
$G=C_{7}$ and the vertices of $V^{+}$ are sinks.
2. (b)
$G$ has a perfect matching
$e_{1}=\\{x_{1},x_{1}^{\prime}\\},\ldots,e_{r}=\\{x_{r},x_{r}^{\prime}\\}$
where $deg_{D}(x_{1}^{\prime})=\cdots=deg_{D}(x_{r}^{\prime})=1$ and
$(x_{j}^{\prime},x_{j})\in E(D)$ if $x_{j}\in V^{+}$.
Proof. $\Leftarrow)$ If $D$ satisfies (a), then $I(D)$ is unmixed by (a) in
Theorem 5.10. Now, assume $D$ satisfies (b), and suppose $b^{\prime}\in
N_{D}^{+}(a)$ with $a\in V^{+}$ such that
$\\{b,b^{\prime}\\}=e_{j}=\\{x_{j},x_{j}^{\prime}\\}$ for some $1\leqslant
j\leqslant r$. If $b^{\prime}=x_{j}^{\prime}$, then $x_{j}=a\in V^{+}$, since
$deg_{D}(x_{j}^{\prime})=1$. So, $x_{j}^{\prime}=b^{\prime}\in
N_{D}^{+}(a)=N_{D}^{+}(x_{j})$. But by hypothesis, $(x_{j}^{\prime},x_{j})\in
E(D)$, since $x_{j}\in V^{+}$. A contradiction, then $b^{\prime}=x_{j}$. Thus,
$b=x_{j}^{\prime}$ implies $N_{D}(b)=\\{x_{j}\\}=\\{b^{\prime}\\}\subseteq
N_{D}^{+}(a)$. Hence, by Proposition 4.1, $|\mathcal{C}\cap e_{j}|=1$ where
$\mathcal{C}$ is a strong vertex cover. So, $|\mathcal{C}|=r$. Therefore, by
(2) in Theorem 2.12, $I(D)$ is unmixed.
$\Rightarrow)$ $G\neq T_{10}$, since $T_{10}$ has $3$-cycles. Hence, by
Theorem 5.10, $D$ satisfies (a) or $\\{V(H)\mid H\in S_{G}\\}$ is a partition
of $V(G)$. Furthermore, $S_{G}\subseteq E(G)$, since $G$ has no isolated
vertices and $girth(G)\geqslant 6$. Thus, we can assume
$S_{G}=\\{e_{1},\ldots,e_{r}\\}$ where $e_{i}=\\{x_{i},x_{i}^{\prime}\\}$ and
$deg_{D}(x_{i}^{\prime})=1$ for $i=1,\ldots,r$. Also, by Theorem 5.10, each
$e_{i}$ has no generating $\star$-semi-forests. So, by Theorem 3.10,
$e_{i}\not\subseteq\mathcal{C}$ for each strong vertex cover $\mathcal{C}$.
Consequently, $|e_{i}\cap\mathcal{C}|=1$, since $\mathcal{C}$ is a vertex
cover. Then, $e_{i}$ satisfies (2) of Proposition 4.1. Now, suppose
$(x_{j},x_{j}^{\prime})\in E(D)$ and $x_{j}\in V^{+}$. We take $a=b=x_{j}$ and
$b^{\prime}=x_{j}^{\prime}$, then by (2) of Proposition 4.1,
$N_{D}(x_{j})=N_{D}(b)\subseteq N_{D}^{+}(a)=N_{D}^{+}(x_{j})$. Hence, $x_{j}$
is a source. A contradiction, by Remark 2.3, since $x_{j}\in V^{+}$.
Therefore, $D$ satisfies (b). $\Box$
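Condition (b) of Corollary 5.11 is also mechanical to test, because the matching is forced: every vertex of degree one must be matched to its unique neighbour. The sketch below (plain Python, names ours) assumes for simplicity that no connected component of $G$ is a single edge:

```python
def condition_b_cor511(succ, w):
    """Corollary 5.11 (b): the pendant edges form a perfect matching and
    each pendant edge points into its endpoint x whenever w(x) > 1."""
    pred = {v: set() for v in succ}
    for u, outs in succ.items():
        for v in outs:
            pred[v].add(u)
    nbr = {v: succ[v] | pred[v] for v in succ}
    matched = set()
    for xp in nbr:
        if len(nbr[xp]) != 1:
            continue                       # xp is not a pendant vertex
        x = next(iter(nbr[xp]))
        if x in matched:
            return False                   # two pendants compete for x
        matched |= {x, xp}
        if w[x] > 1 and x not in succ[xp]:
            return False                   # need (x', x) in E(D) for x in V^+
    return matched == set(nbr)             # the matching must be perfect

# A path x' - x - y - y' with pendant edges oriented inward and w(x) = 2:
succ = {"x'": {"x"}, "x": {"y"}, "y": set(), "y'": {"y"}}
w = {"x'": 1, "x": 2, "y": 1, "y'": 1}
print(condition_b_cor511(succ, w))  # True
```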
###### Proposition 5.12
If $G=P_{10}$, then the following properties are equivalent:
1. (1)
$I(D)$ is unmixed.
2. (2)
If $y\in V^{+}$ and $y$ is not a sink, then $y=d_{1}$ with
$N_{D}^{+}(y)=\\{g_{1},b_{2}\\}$ or $y=d_{2}$ with
$N_{D}^{+}(y)=\\{g_{2},b_{1}\\}$.
Proof. ${\rm(2)}\Rightarrow{\rm(1)}$ Let $\mathcal{C}$ be a strong vertex
cover of $D$. Suppose $x\in L_{3}(\mathcal{C})$. Then, there is
$y\in\big{(}\mathcal{C}\setminus L_{1}(\mathcal{C})\big{)}\cap V^{+}$ such
that $x\in N_{D}^{+}(y)$. Thus, by (2), $y\in\\{d_{1},d_{2}\\}$ and $x\in
N_{D}^{+}(y)\subseteq\\{b_{1},b_{2},g_{1},g_{2}\\}$. Hence,
$L_{3}(\mathcal{C})\subseteq\\{b_{1},b_{2},g_{1},g_{2}\\}$. By symmetry of
$P_{10}$, we can assume $y=d_{1}$. Then by (2), $x\in
N_{D}^{+}(y)=\\{g_{1},b_{2}\\}$. Also,
$\\{g_{1},d_{1},b_{2}\\}=N_{D}^{+}[y]\subseteq\mathcal{C}$, since
$y\in\mathcal{C}\setminus L_{1}(\mathcal{C})$. But $y=d_{1}\notin
L_{3}(\mathcal{C})$, then $N_{D}(y)\not\subseteq\mathcal{C}$. Thus,
$d_{2}\notin\mathcal{C}$. So, by Remark 2.8,
$\\{b_{1},d_{1},g_{2}\\}=N_{D}(d_{2})\subseteq\mathcal{C}$. Furthermore,
$\\{a_{1},a_{2}\\}\cap L_{3}(\mathcal{C})=\emptyset$, since
$L_{3}(\mathcal{C})\subseteq\\{b_{1},b_{2},g_{1},g_{2}\\}$. Consequently,
$\\{a_{1},a_{2}\\}\cap\mathcal{C}=\emptyset$, since
$N_{D}(a_{1},a_{2})=\\{g_{1},b_{1},g_{2},b_{2}\\}\subseteq\mathcal{C}$. But,
$x\in\\{g_{1},b_{2}\\}\cap L_{3}(\mathcal{C})$, then $a_{1}\in
N_{D}(g_{1})\subseteq\mathcal{C}$ or $a_{2}\in
N_{D}(b_{2})\subseteq\mathcal{C}$. Hence,
$\\{a_{1},a_{2}\\}\cap\mathcal{C}\neq\emptyset$. A contradiction, then
$L_{3}(\mathcal{C})=\emptyset$. Also, by Theorem 2.30 and Remark 2.17, $I(G)$
is unmixed. Therefore, by (3) in Theorem 2.12, $I(D)$ is unmixed.
${\rm(1)}\Rightarrow{\rm(2)}$ We take $V_{2}:=\\{a\in V(G)\mid
deg_{G}(a)=2\\}$ and $y\in V^{+}$, such that $y$ is not a sink, then there is
$(y,x)\in E(D)$. Also, by Remark 2.3, there is $z\in N_{D}^{-}(y)$. We will
prove $y\in\\{d_{1},d_{2}\\}$. If $deg_{D}(x)=2$, then by Lemma 5.7,
$y\in\\{d_{1},d_{2}\\}$. Now, we assume $deg_{D}(x)\geqslant 3$, then by
symmetry of $P_{10}$, we can suppose $x\in\\{g_{1},d_{1}\\}$.
First suppose $x=g_{1}$. Thus, $y\in N_{D}(g_{1})=\\{a_{1},c_{1},d_{1}\\}$. If
$y\in\\{a_{1},c_{1}\\}$, then $deg_{D}(y)=2$ and $y\in\mathcal{C}_{1}:=\\{a_{1},a_{2},c_{1},d_{1},d_{2},g_{1},g_{2}\\}$, which is a vertex cover of $D$ with $L_{3}(\mathcal{C}_{1})=\\{x\\}$. Hence,
$N_{D}(y)=\\{x,z\\}$ implies
$N_{D}^{+}(y)=\\{x\\}=\\{g_{1}\\}\subseteq\mathcal{C}_{1}$. Consequently,
$y\in\mathcal{C}_{1}\setminus L_{1}(\mathcal{C}_{1})$. Hence,
$\mathcal{C}_{1}$ is strong, since $L_{3}(\mathcal{C}_{1})=\\{x\\}$. A
contradiction, by Theorem 2.12, then $y=d_{1}$.
Now, suppose $x=d_{1}$. Then,
$\mathcal{C}_{2}=\\{a_{1},b_{2},c_{2},d_{1},d_{2},g_{1},g_{2}\\}$ is a vertex
cover of $D$ with $L_{3}(\mathcal{C}_{2})=\\{x\\}$. Also, $y\in
N_{D}(x)=\\{g_{1},b_{2},d_{2}\\}$. If $y\in\\{g_{1},b_{2}\\}$, then
$N_{D}(y)\setminus\\{d_{1}\\}\subseteq
N_{D}(g_{1},b_{2})\setminus\\{d_{1}\\}=\\{a_{1},a_{2},c_{1}\\}\subseteq
V_{2}$. Hence, by Lemma 5.7, $N_{D}(y)\setminus\\{d_{1}\\}\subseteq
N_{D}^{-}(y)$, since $y\notin\\{d_{1},d_{2}\\}$. Thus,
$N_{D}^{+}(y)=\\{d_{1}\\}=\\{x\\}\subseteq\mathcal{C}_{2}$ implying
$y\in\mathcal{C}_{2}\setminus L_{1}(\mathcal{C}_{2})$. Consequently,
$\mathcal{C}_{2}$ is strong with $L_{3}(\mathcal{C}_{2})\neq\emptyset$. A
contradiction, by Theorem 2.12, then $y=d_{2}$.
Therefore $y\in\\{d_{1},d_{2}\\}$. By symmetry of $P_{10}$, we can assume
$y=d_{1}$. Now, we will prove $N_{D}^{+}(y)=\\{g_{1},b_{2}\\}$. By
contradiction, in each one of the following cases, we give a strong vertex
cover $\mathcal{C}^{\prime}$ with $L_{3}(\mathcal{C}^{\prime})\neq\emptyset$,
since $I(D)$ is unmixed and $z\in N_{D}^{-}(y)$.
Case (1) $g_{1}\notin N_{D}^{+}(d_{1})$ and $b_{2}\in N_{D}^{+}(d_{1})$. So,
$N_{D}^{+}(d_{1})\subseteq\\{b_{2},d_{2}\\}$ and $\mathcal{C}_{1}^{\prime}=\\{a_{1},a_{2},b_{2},c_{1},c_{2},d_{1},d_{2}\\}$ is a vertex cover of $D$
with $L_{3}(\mathcal{C}_{1}^{\prime})=\\{b_{2}\\}$. Also, $(d_{1},b_{2})\in
E(D)$ and $y=d_{1}\in\big{(}\mathcal{C}_{1}^{\prime}\setminus
L_{1}(\mathcal{C}_{1}^{\prime})\big{)}\cap V^{+}$, since $b_{2}\in
N_{D}^{+}[d_{1}]\subseteq\\{d_{1},b_{2},d_{2}\\}\subset\mathcal{C}_{1}^{\prime}$.
Hence, $\mathcal{C}_{1}^{\prime}$ is a strong vertex cover of $D$.
Case (2) $b_{2}\notin N_{D}^{+}(d_{1})$ and $g_{1}\in N_{D}^{+}(d_{1})$. Then,
$N_{D}^{+}(d_{1})\subseteq\\{g_{1},d_{2}\\}$ and $\mathcal{C}_{2}^{\prime}=\\{a_{1},a_{2},c_{1},d_{1},d_{2},g_{1},g_{2}\\}$ is a vertex cover of $D$
with $L_{3}(\mathcal{C}_{2}^{\prime})=\\{g_{1}\\}$. Furthermore
$(d_{1},g_{1})\in E(D)$ and
$d_{1}=y\in\big{(}\mathcal{C}_{2}^{\prime}\setminus
L_{1}(\mathcal{C}_{2}^{\prime})\big{)}\cap V^{+}$, since $g_{1}\in
N_{D}^{+}[d_{1}]\subseteq\\{d_{1},g_{1},d_{2}\\}\subset\mathcal{C}_{2}^{\prime}$.
Hence, $\mathcal{C}_{2}^{\prime}$ is a strong vertex cover of $D$.
Case (3) $b_{2},g_{1}\notin N_{D}^{+}(d_{1})$. Thus, $x=d_{2}$, $N_{D}^{+}(d_{1})=\\{d_{2}\\}$ and $\mathcal{C}_{3}^{\prime}=\\{a_{2},b_{1},c_{1},d_{1},d_{2},g_{1},g_{2}\\}$ is a vertex cover of $D$
with $L_{3}(\mathcal{C}_{3}^{\prime})=\\{d_{2}\\}$. Also, $(d_{1},d_{2})\in
E(D)$ and $d_{1}=y\in\big{(}\mathcal{C}_{3}^{\prime}\setminus
L_{1}(\mathcal{C}_{3}^{\prime})\big{)}\cap V^{+}$, since
$N_{D}^{+}[d_{1}]=\\{d_{1},d_{2}\\}\subset\mathcal{C}_{3}^{\prime}$. Hence,
$\mathcal{C}_{3}^{\prime}$ is a strong vertex cover of $D$. $\Box$
###### Theorem 5.13
Let $D=(G,\mathcal{O},w)$ be a connected weighted oriented graph, with ${\rm
girth}(G)\geqslant 5$. Then, $I(D)$ is unmixed if and only if $D$ satisfies
one of the following properties:
1. (a)
$G\in\\{K_{1},C_{7},Q_{13},P_{13},P_{14}\\}$ and the vertices of $V^{+}$ are
sinks.
2. (b)
$G=P_{10}$, furthermore if $y$ is not a sink in $V^{+}$, then $y=d_{1}$ with
$N_{D}^{+}(y)=\\{g_{1},b_{2}\\}$ or $y=d_{2}$ with
$N_{D}^{+}(y)=\\{g_{2},b_{1}\\}$.
3. (c)
Each vertex is in exactly one simplex of $G$ or in exactly one basic $5$-cycle
of $G$. Furthermore, each simplex of $D$ has no generating $\star$-semi-
forest and each basic $5$-cycle of $D$ has the $\star$-property.
Proof. $\Rightarrow)$ By (3) in Theorem 2.12 and Remark 2.17, $G$ is well-
covered. By Theorem 2.30, $G\in\\{K_{1},C_{7},P_{10},P_{13},P_{14},Q_{13}\\}$
or $\\{V(H)\mid H\in S_{G}\cup C_{G}\\}$ is a partition of $V(G)$. If
$\\{V(H)\mid H\in S_{G}\cup C_{G}\\}$ is a partition of $V(G)$, then $G$ is an
$SCQ$ graph with $Q_{G}=\emptyset$. Hence, by Theorem 4.8, $D$ satisfies (c).
Now, if $G=P_{10}$, then by Proposition 5.12, $D$ satisfies (b). Furthermore,
if $G\in\\{C_{7},Q_{13},P_{13},P_{14}\\}$, then by Proposition 5.9, $D$
satisfies (a). Finally, if $G=K_{1}$, then by Remark 2.3, $V^{+}=\emptyset$.
$\Leftarrow)$ If $D$ satisfies (b), then by Proposition 5.12, $I(D)$ is
unmixed. Now, if $D$ satisfies (c), then $G$ is an $SCQ$ graph with
$Q_{G}=\emptyset$. Consequently, by Theorem 4.8, $I(D)$ is unmixed. Finally,
if $D$ satisfies (a), then by Theorem 2.30, $G$ is well-covered. Therefore, by
Corollary 5.2, $I(D)$ is unmixed. $\Box$
###### Remark 5.14
A graph is well-covered if and only if each connected component is well-
covered. Hence, when $D$ is not connected in Theorem 5.10 (resp. 5.13), $I(D)$
is unmixed if and only if each connected component of $D$ satisfies (a) or (b)
(resp. (a), (b) or (c)).
## 6 Examples
###### Example 6.1
Let $D_{1}$ be the following weighted oriented graph.
[Figure: the weighted oriented graph $D_{1}$ with its vertex weights and, alongside it, the subgraphs $T_{1},\ldots,T_{5}$ (with parents $v_{1},\ldots,v_{5}$) and $B_{1}$ used in this example.]
We take the weighted oriented subgraph $K$ of $D_{1}$ induced by
$V(D_{1})\setminus\\{w_{1},w_{2},w_{4},w_{5}\\}$. In the figure above,
$T_{1},\ldots,T_{5}$ are the ROT’s and $B_{1}$ is the unicycle oriented
subgraph such that the parent of $T_{i}$ is $v_{i}$ and
$W^{T_{i}}=\\{w_{i}\\}$ for $i=1,\ldots,5$. Furthermore, $H=\cup_{i=1}^{5}\
T_{i}\ \cup B_{1}$ is a $\star$-semi-forest with $W_{1}^{H}=\\{w_{1},w_{5}\\}$,
$W_{2}^{H}=\\{w_{2},w_{4}\\}$ and $V(H)=V(K)$. Therefore, $H$ is a generating
$\star$-semi-forest of $K$.
###### Example 6.2
Let $D_{2}$ be the weighted oriented graph in the figure below. Let $K^{4}=D_{2}[x_{1},x_{2},x_{3}]$ be the induced weighted oriented subgraph of $D_{2}$ shown there, and let $H=K^{4}$ be a generating $\star$-semi-forest of $K^{4}$ with $\tilde{H}=V(H)=\\{x_{1},x_{2},x_{3}\\}\subseteq V^{+}$ and $W_{1}=W_{2}=\emptyset$. Then, by Theorem 3.10, there is a strong vertex cover
$\mathcal{C}$ of $D_{2}$, such that $V(K^{4})\subseteq\mathcal{C}$. For this
purpose, first we take a minimal vertex cover $\mathcal{C}_{i}$ of $D_{2}$ and
define $\mathcal{C}_{i}^{\prime}=(\mathcal{C}_{i}\setminus W_{1})\cup
N_{D}(W_{1})\cup N_{D}^{+}(W_{2}\cup\tilde{H})$. Finally, we use the algorithm
in the proof of Proposition 3.1.
In this example, $\mathcal{C}_{i}^{\prime}=\mathcal{C}_{i}\cup
N_{D}^{+}(\tilde{H})=\mathcal{C}_{i}\cup
N_{D}^{+}(\\{x_{1},x_{2},x_{3}\\})=\mathcal{C}_{i}\cup\\{x_{1},x_{2},x_{3},y_{1},y_{2}\\}$.
[Figure: the weighted oriented graph $D_{2}$, with $H=D_{2}[x_{1},x_{2},x_{3}]$, $w(x_{i})>1$, and the remaining vertices $y_{1},y_{2},y_{3},z_{1},z_{2},z_{3}$.]
* •
If we take $\mathcal{C}_{1}=\\{x_{1},x_{2},y_{1},y_{2},y_{3}\\}$ as a minimal
vertex cover of $D_{2}$, then
$\mathcal{C}_{1}^{\prime}=\\{x_{1},x_{2},x_{3},y_{1},y_{2},y_{3}\\}$ and
$L_{3}(\mathcal{C}_{1}^{\prime})\setminus N_{D}^{+}(\tilde{H})=\emptyset$.
Thus, by Proposition 3.1,
$\mathcal{C}=\mathcal{C}_{1}^{\prime}=\\{x_{1},x_{2},x_{3},y_{1},y_{2},y_{3}\\}$
is a strong vertex cover such that $V(K^{4})\subseteq\mathcal{C}$, and it is not
minimal. Furthermore, in this case it is enough to know the orientation of
$E(K^{4})$, since $L_{3}(\mathcal{C})=\\{x_{1},x_{2},x_{3}\\}$ and
$N_{D_{2}}(x_{i})\subseteq\mathcal{C}$ for $i=1,2,3$.
* •
Now, if we take $\mathcal{C}_{2}=\\{x_{1},x_{2},x_{3},z_{1},z_{2},z_{3}\\}$ as
a minimal vertex cover of $D_{2}$, then
$\mathcal{C}_{2}^{\prime}=V(D_{2})\setminus\\{y_{3}\\}$ and
$L_{3}(\mathcal{C}_{2}^{\prime})\setminus
N_{D}^{+}(\tilde{H})=\\{z_{1},z_{2}\\}$. Thus, by the algorithm in the proof
of Proposition 3.1,
$\mathcal{C}^{\prime}=\mathcal{C}_{4}^{\prime}=\\{x_{1},x_{2},x_{3},y_{1},y_{2},z_{3}\\}$
is a strong vertex cover such that $V(K^{4})\subseteq\mathcal{C}^{\prime}$.
Since $\mathcal{C}$ and $\mathcal{C}^{\prime}$ are not minimal, with $L_{3}(\mathcal{C})=\\{x_{1},x_{2},x_{3}\\}$ and $L_{3}(\mathcal{C}^{\prime})=\\{x_{1},x_{2}\\}$, $I(D_{2})$ is mixed.
Furthermore, $V(D_{2})$ has a partition into complete graphs: $K^{i}=D_{2}[y_{i},z_{i}]$ for $i=1,2,3$ and $K^{4}=D_{2}[x_{1},x_{2},x_{3}]$.
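The construction of $\mathcal{C}_{i}^{\prime}$ used in this example is short enough to state as code. A sketch (plain Python, names ours; the orientations of the edges $y_{i}z_{i}$ are not determined by the figure, so the directions below are an assumption that does not affect the computed set):

```python
def enlarged_cover(cover, succ, W1, W2, H_tilde):
    """The set C' = (C minus W_1) ∪ N_D(W_1) ∪ N_D^+(W_2 ∪ H~) used above
    as the starting point for the Proposition 3.1 algorithm."""
    pred = {v: set() for v in succ}
    for u, outs in succ.items():
        for v in outs:
            pred[v].add(u)
    nd_w1 = {u for v in W1 for u in succ[v] | pred[v]}       # N_D(W_1)
    nd_plus = {u for v in (W2 | H_tilde) for u in succ[v]}   # N_D^+(W_2 ∪ H~)
    return (cover - W1) | nd_w1 | nd_plus

# Orientation consistent with the computation in the text (y3 -> x3; the
# y_i -> z_i directions are our assumption and do not matter here):
succ = {"x1": {"x2", "y1"}, "x2": {"x3", "y2"}, "x3": {"x1"},
        "y1": {"z1"}, "y2": {"z2"}, "y3": {"x3", "z3"},
        "z1": set(), "z2": set(), "z3": set()}
C1 = {"x1", "x2", "y1", "y2", "y3"}          # the first minimal cover
print(sorted(enlarged_cover(C1, succ, set(), set(), {"x1", "x2", "x3"})))
# ['x1', 'x2', 'x3', 'y1', 'y2', 'y3'], i.e. C_1' from the first bullet
```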
###### Example 6.3
Let $D_{3}=(G,\mathcal{O},w)$ be the weighted oriented graph shown below. Then:
[Figure: the weighted oriented graph $D_{3}$ on the vertices $a,a^{\prime},b,b^{\prime},c,d_{1},d_{1}^{\prime},d_{2},d_{2}^{\prime}$, with the indicated weights ($w>1$ or $1$).]
* •
$G$ has no $3$\- and $4$-cycles.
* •
The basic $5$-cycle $C=(a^{\prime},a,b,b^{\prime},c,a^{\prime})$ satisfies the
$\star$-property.
* •
$D_{3}[\\{d_{1},d_{1}^{\prime}\\}]$ has no generating $\star$-semi-forest.
But $T_{1}=\tilde{e}=(d_{2},d_{2}^{\prime})$ is a ROT with parent
$v_{1}=d_{2}$ and $W=W_{2}=\\{w_{1}=a\\}$. Furthermore, $H=T_{1}$ is a
$\star$-semi-forest with $V(H)=V(K)$, where
$K:=D_{3}[\\{d_{2},d_{2}^{\prime}\\}]$. Therefore, $H$ is a generating
$\star$-semi-forest of $K$ and $I(D_{3})$ is mixed.
###### Example 6.4
Let $D_{4}=(G,\mathcal{O},w)$ be the weighted oriented graph shown below. Then:
[Figure: the weighted oriented graph $D_{4}$, a copy of $P_{10}$ in which only the vertex $d$ has weight $w>1$.]
* •
$G=P_{10}$, then by Theorem 2.30 and Remark 2.17, $G$ is well-covered and
$I(G)$ is unmixed.
* •
$d$ is not a sink and $d\in V^{+}$.
* •
By Proposition 5.12, $I(D_{4})$ is unmixed.
## References
* [1] C. Carvalho, V. G. Neumann and H. H. López. Projective nested cartesian codes. Bull. Braz. Math. Soc. (N.S.), 48, (2017), no. 2, 283–302.
* [2] I. D. Castrillón, R. Cruz, and E. Reyes. On well-covered, vertex decomposable and Cohen–Macaulay graphs. Electron. J. Combin., 23, (2016), no. 2, Paper 2.39, 17pp.
* [3] I. Castrillón and E. Reyes, Pure vertex decomposable simplicial complex associated to graphs whose $5$-cycles are chorded, Bol. Soc. Mat. Mex. 23 (2017), 399–412.
* [4] R. Diestel, Graph Theory, Second Edition, Graduate Texts in Mathematics, 173, Springer-Verlag, New York, 2000.
* [5] A. Finbow, B. Hartnell and R. J. Nowakowski, A characterization of well-covered graphs of girth 5 or greater, J. of Combinatorial Theory B 57 (1993) 44-68.
* [6] A. Finbow, B. Hartnell and R. J. Nowakowski, A characterization of well-covered graphs that contain neither 4- nor 5-cycles, J. Graph Theory 18 (1994) 713-721.
* [7] P. Gimenez, J. Martínez-Bernal, A. Simis, R. H. Villarreal, and C. E. Vivares, Symbolic powers of monomial ideals and Cohen–Macaulay vertex-weighted digraphs, Singularities, Algebraic Geometry, Commutative Algebra, and Related Topics, Springer, Cham, (2018), 491–510.
* [8] H. T. Há, K. N. Lin, S. Morey, E. Reyes and R. H. Villarreal, Edge ideals of oriented graphs, Internat. J. Algebra Comput., 29 (2019), no. 3, 535–559.
* [9] J. Martínez-Bernal, Y. Pitones and R. H. Villarreal, Minimum distance functions of graded ideals and Reed-Muller-type codes. J. Pure Appl. Algebra 221 (2017), no. 2, 251–275.
* [10] Y. Pitones, E. Reyes and J. Toledo, Monomial ideals of weighted oriented graphs, Electronic. J. Combin., 26 (2019), no. 3, Paper 3.44, 18 pp.
* [11] Y. Pitones, E. Reyes and R. H. Villarreal, Unmixed and Cohen–Macaulay weighted oriented König graphs, (2019), preprint, arxiv:1909.13295.
* [12] E. Prisner, J. Topp and P. D. Vestergaard, Well Covered Simplicial, Chordal, and Circular Arc Graphs, J. Graph Theory, 21 (1996) 113-119.
* [13] B. Randerath and L. Volkmann, A characterization of well-covered block-cactus graphs, Australas. J. Combin., 9 (1994) 307–314.
* [14] R. H. Villarreal, Monomial Algebras, Second Edition, Monographs and Research Notes in Mathematics, Chapman and Hall/CRC, 2015.
* [15] G. Zhu, L. Xu, H. Wang and Z. Tang, Projective dimensions and regularity of edge ideal of some weighted oriented graphs, Rocky Mountain J. Math., 49 (2019), no. 4, 1391–1406.
|
2024-09-04T02:54:56.435743 | 2020-03-02T15:21:20 | 2003.00964 | {
"authors": "M. Usaid Awan, Marco Morucci, Vittorio Orlandi, Sudeepa Roy, Cynthia\n Rudin, Alexander Volfovsky",
"full_text_license": null,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "arxiv-papers-0000.json.gz:25992",
"submitter": "Marco Morucci",
"url": "https://arxiv.org/abs/2003.00964"
} | arxiv-papers | # Almost-Matching-Exactly for Treatment Effect Estimation under Network
Interference
M. Usaid Awan Department of Economics, Duke University Marco Morucci
Department of Political Science, Duke University Vittorio Orlandi Department
of Statistical Science, Duke University Sudeepa Roy Department of Computer
Science, Duke University Cynthia Rudin Department of Statistical Science,
Duke University Department of Computer Science, Duke University Department
of Electrical and Computer Engineering, Duke University Alexander Volfovsky
Department of Statistical Science, Duke University
###### Abstract
We propose a matching method that recovers direct treatment effects from
randomized experiments where units are connected in an observed network, and
units that share edges can potentially influence each others’ outcomes.
Traditional treatment effect estimators for randomized experiments are biased
and error prone in this setting. Our method matches units almost exactly on
counts of unique subgraphs within their neighborhood graphs. The matches that
we construct are interpretable and high-quality. Our method can be extended
easily to accommodate additional unit-level covariate information. We show
empirically that our method performs better than other existing methodologies
for this problem, while producing meaningful, interpretable results.
### 1 INTRODUCTION
Randomized experiments are considered to be the gold standard for estimating
causal effects of a treatment on an outcome. Typically, in these experiments,
the outcome of a unit is assumed to be only affected by the unit’s own
treatment status, and not by the treatment assignment of other units (Cox,
1958; Rubin, 1980). However, in many applications – such as measuring
effectiveness of an advertisement campaign or a teacher training program –
units interact, and ignoring these interactions results in poor causal
estimates (Halloran and Struchiner, 1995; Sobel, 2006). We propose a method
that leverages the observed network structure of interactions between units to
account for treatment interference among them.
We study a setting in which a treatment has been uniformly randomized over a
set of units connected in a network, and where treatments of connected units
can influence each others’ outcomes. The development of methods for this
setting is a relatively new field in causal inference methodology, and only
few approaches for it have been proposed (e.g., van der Laan, 2014; Aronow et
al., 2017; Sussman and Airoldi, 2017).
In this paper, we propose a method that leverages matching (Rosenbaum and
Rubin, 1983) to recover direct treatment effects from experiments with
interference. Our method makes several key contributions to the study of this
setting: First, our method explicitly leverages information about the network
structure of the experimental sample to adjust for possible interference while
estimating direct treatment effects. Second, unlike other methods, matching
allows us to nonparametrically estimate treatment effects, without the need to
specify parametric models for interference or outcomes. Third, matching
produces highly interpretable results, informing analysts as to which features
of the input data were used to produce estimates. More specifically, we match
on features of graphs that are easy to interpret and visualize.
In our setting, units experience interference according to their neighborhood
graphs – the graphs defined by the units they are directly connected to. Units
with similar neighborhood graphs will experience similar interference. For
example, the educational outcome of a student randomly assigned to an extra
class depends not just on how many of her friends are also assigned to that
class, but on which ones: the specific structure of the student's friendship
circle will influence whether or not study groups are formed, how information
is shared, how much attention the treated student will devote to the class,
and so on. All of this will impact the overall educational outcomes of
interest.
Because of this, matching units with similar neighborhood graphs together will
enable us to recover direct treatment effects even under interference. We
match units’ neighborhood graphs on counts of subgraphs within them, as graphs
with similar counts of the same unique subgraphs are naturally likely to be
similar. From there, we construct matches on individuals with similar sets of
important subgraphs; here, the set of important subgraphs is learned from a
training set. We generalize the Almost-Matching-Exactly (AME) framework (Dieng
et al., 2019; Wang et al., 2019) to match units on subgraphs in experimental
settings. We do this by constructing graph-based features that can explain
both the interference pattern in the experiment and predict the underlying
social network. We demonstrate that our method performs better than other
methods for the same problem in many settings, while generating interpretable
matches.
The paper will proceed as follows: In Section 2, we make explicit the
assumptions underpinning our framework, and outline our matching approach to
estimating direct treatment effects. In Sections 3 and 4, we evaluate the
effectiveness of our method on simulated and real-world data. Theoretical
evaluation of our approach is available in the appendix.
#### 1.1 Related Work
Work on estimating causal effects under interference between units has three
broad themes. First, there has been a growing body of work on the design of
novel randomization schemes to perform causal inference under interference
(Liu and Hudgens, 2014; Sinclair et al., 2012; Duflo and Saez, 2003; Basse and
Airoldi, 2018). Some of this work makes explicit use of observed network
structure to randomly assign treatment so as to reduce interference (Ugander
et al., 2013; Toulis and Kao, 2013; Eckles et al., 2016, 2017; Jagadeesan et
al., 2019). These methodologies are inapplicable to our setting as they
require non-uniform treatment assignment, whereas in our setting we wish to
correct for interference after randomization. Second, there is work on
estimating direct treatment effects in experiments under interference, and
after conventional treatment randomization, similar to our setting. Some
existing work aims to characterize the behavior of existing estimators under
interference (Manski, 2013; Sävje et al., 2017). Other approaches lay out
methods based on randomization inference to test a variety of hypotheses under
interference and treatment randomization (Rosenbaum, 2007; Aronow, 2012; Athey
et al., 2018). Some of these approaches mix randomization inference and
outcome models (Bowers et al., 2013). For the explicit problem of recovery of
treatment effects under interference, Aronow et al. (2017) provide a general
framework to translate different assumptions about interference into inverse-
probability estimators, and Sussman and Airoldi (2017) give linearly unbiased,
minimum integrated-variance estimators under a series of assumptions about
interference. These methods either ignore explicit network structure, or
require probabilities under multiple complex sampling designs to be estimated
explicitly. Finally, there have been studies of observational inference under
network interference (van der Laan, 2014; Liu et al., 2016; Ogburn et al.,
2017; Forastiere et al., 2016). However, recovering causal estimates using
observational data when units are expected to influence each other requires a
structural model of both the nature of interference and contagion among units.
### 2 METHODOLOGY
We discuss our problem and approach in this section.
#### 2.1 Problem Statement
We have a set of $n$ experimental units indexed by $i$. These units are
connected in a known graph $G=(V,E)$, where $V(G)=\{1,\dots,n\}$ is the set
of vertices of $G$, and $E(G)$ is the set of edges of $G$. We disallow self-
loops in our graph. We say that $H$ is a subgraph of $G$ if $V(H)\subseteq
V(G)$ and $E(H)\subseteq E(G)$. Let $t_{i}\in\{0,1\}$ represent the
treatment indicator for unit $i$, $\mathbf{t}$ represent the vector of
treatment indicators for the entire sample, and $\mathbf{t}_{-i}$ represent
the treatment indicators for all units except $i$. Given a treatment vector
$\mathbf{t}$ on the entire sample (i.e., for all vertices in $G$), we use
$G^{\mathbf{t}}$ to denote the labeled graph, where each vertex $i\in V(G)$
has been labeled with its treatment indicator $t_{i}$. In addition, we use
$G_{P}$ to denote a graph induced by the set of vertices $P\subseteq V(G)$ on
$G$, such that $V(G_{P})=P$ and $E(G_{P})=\{(e_{1},e_{2})\in E(G):\;e_{1}\in
P,e_{2}\in P\}$. We use the notation $\mathcal{N}_{i}=\{j:(i,j)\in E(G)\}$
to represent the neighborhood of vertex $i$. The labeled neighborhood graph of
a unit $i$, $G_{\mathcal{N}_{i}}^{\mathbf{t}}$, is defined as the graph
induced by the neighbors of $i$, and labeled according to $\mathbf{t}$. We
also define $\mathbf{t}_{\mathcal{N}_{i}}$ to be the vector of treatment
indicators corresponding to unit $i$’s neighborhood graph. A unit’s response
to the treatment is represented by its random potential outcomes
$Y_{i}(\mathbf{t})$ = $Y_{i}(t_{i},\mathbf{t}_{-i})$. Unlike other commonly
studied causal inference settings, unit $i$’s potential outcomes are now a
function of both the treatment assigned to $i$, and of all other units’
treatments. Observed treatments for unit $i$ and the whole sample are
represented by the random variables $T_{i}$ and $\mathbf{T}$ respectively. We
assume that the number of treated units is always $n^{(1)}$, i.e.,
$\sum_{i=1}^{n}T_{i}=n^{(1)}$.
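To make this notation concrete, here is a minimal sketch (in Python with networkx; the helper name `labeled_neighborhood_graph` is ours) that builds the labeled neighborhood graph $G_{\mathcal{N}_{i}}^{\mathbf{t}}$ of a unit:

```python
import networkx as nx

def labeled_neighborhood_graph(G, t, i):
    """Return G_{N_i}^t: the subgraph induced by i's neighbors (the ego i
    itself is excluded), with each vertex labeled by its treatment t[j]."""
    neighbors = list(G.neighbors(i))          # N_i = {j : (i, j) in E(G)}
    H = G.subgraph(neighbors).copy()          # induced subgraph G_{N_i}
    nx.set_node_attributes(H, {j: t[j] for j in neighbors}, "treated")
    return H

# toy example
G = nx.erdos_renyi_graph(50, 0.05, seed=0)
t = [j % 2 for j in G.nodes()]                # arbitrary treatment vector
H = labeled_neighborhood_graph(G, t, 0)
```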
A0: Ignorability of Treatment Assignment. We make the canonical assumption
that treatments are administered independently of potential outcomes, that is:
$Y_{i}(t_{i},\mathbf{t}_{-i})\perp\!\!\!\perp\mathbf{T}$,
and $0<\Pr(T_{i}=1)<1$ for all units. In practice, we assume that treatment is
assigned uniformly at random to units, which is possible only in experimental
settings. As stated before, we do not make the canonical Stable Unit Treatment
Value Assumption (SUTVA) (Rubin, 1980), which, among other requirements,
states that units are exclusively affected by the treatment assigned to them.
We do not make this assumption because our units are connected in a network:
it could be possible for treatments to spread along the edges of the network
and to affect connected units’ outcomes. We do maintain the assumption of
comparable treatments across units, which is commonly included in SUTVA.
Our causal quantity of interest will be the Average Direct Effect (ADE), which
is defined as follows:
$ADE=\frac{1}{n}\sum_{i=1}^{n}\mathbb{E}[Y_{i}(1,\mathbf{0})-Y_{i}(0,\mathbf{0})],$ (1)
where $\mathbf{t}_{-i}=\mathbf{0}$ represents the treatment assignment in
which no unit other than $i$ is treated. The summand represents the treatment
effect on unit $i$ when no other unit is treated, and, therefore, no
interference occurs (Halloran and Struchiner, 1995).
#### 2.2 Framework
We outline the requirements of our framework for direct effect estimation
under interference. We denote interference effects on a unit $i$ with the
function $f_{i}(\mathbf{t}):\{0,1\}^{n}\mapsto\mathbb{R}$, a function that
maps each possible treatment allocation for the $n$ units to the amount of
interference on unit $i$. We will use several assumptions to restrict the
domain of $f$ to a much smaller set (and overload the notation $f_{i}$
accordingly). To characterize $f$, we rely on the typology of interference
assumptions introduced by Sussman and Airoldi (2017). The first three
assumptions (A0-A2) needed in our framework are common in the interference
literature (e.g., Manski, 2013; Toulis and Kao, 2013; Eckles et al., 2016;
Athey et al., 2018):
A1: Additivity of Main Effects. First, we assume that main treatment effects
are additive, i.e., that there is no interaction between units’ treatment
indicators. This allows us to write:
$\displaystyle
Y_{i}(t,\mathbf{t}_{-i})=t\tau_{i}+f_{i}(\mathbf{t}_{-i})+\epsilon_{i}$ (2)
where $\tau_{i}$ is the direct treatment effect on unit $i$, and
$\epsilon_{i}$ is some baseline effect.
A2: Neighborhood Interference. We focus on a specific form of the interference
function $f_{i}$ by assuming that the interference experienced by unit $i$
depends only on treatment of its neighbors. That is, if for two treatment
allocations $\mathbf{t},\mathbf{t}^{\prime}$ we have
$\mathbf{t}_{\mathcal{N}_{i}}=\mathbf{t}_{\mathcal{N}_{i}}^{\prime}$ then
$f_{i}(\mathbf{t})=f_{i}(\mathbf{t}^{\prime})$. To make explicit this
dependence on the neighborhood subgraph, we will write
$f_{i}(\mathbf{t}_{\mathcal{N}_{i}})\equiv f_{i}(\mathbf{t})$.
A3: Isomorphic Graph Interference We assume that, if two units $i$ and $j$
have _isomorphic labeled neighborhood graphs_ , then they receive the same
amount of interference, denoting isomorphism by $\simeq$,
$G_{\mathcal{N}_{i}}^{\mathbf{t}}\simeq G_{\mathcal{N}_{j}}^{\mathbf{t}}\implies f_{i}(\mathbf{t}_{\mathcal{N}_{i}})=f_{j}(\mathbf{t}_{\mathcal{N}_{j}})\equiv f(G_{\mathcal{N}_{i}}^{\mathbf{t}})=f(G_{\mathcal{N}_{j}}^{\mathbf{t}})$.
While Assumptions A1 and A2 are standard, A3 is new. This assumption allows us
to study interference in a setting where units with similar neighborhood
subgraphs experience similar amounts of interference.
All our assumptions together induce a specific form for the potential
outcomes, namely that they depend on neighborhood structure
$G_{\mathcal{N}_{i}}^{\mathbf{t}}$, but not exactly who the neighbors are
(information contained in $\mathcal{N}_{i}$) nor treatment assignments for
those outside the neighborhood (information contained in
$\mathbf{t}_{\mathcal{N}_{i}}$). Namely:
###### Proposition 1.
Under assumptions A0-A3, potential outcomes in (2) for all units $i$ can be
written as:
$Y_{i}(t,\mathbf{t}_{-i})=t\tau_{i}+f(G_{\mathcal{N}_{i}}^{\mathbf{t}})+\epsilon_{i},$ (3)
where $\tau_{i}$ is the direct treatment effect on unit $i$, and
$\epsilon_{i}$ is some baseline response.
In addition, suppose that baseline responses for all units are equal to each
other in expectation, i.e., for all $i$, $\mathbb{E}[\epsilon_{i}]=\alpha$.
Then under assumptions A0-A3, for neighborhood graph structures $g_{i}$ of
unit $i$ and treatment vectors $\mathbf{t}$, the ADE is identified as:
$ADE=\frac{1}{n^{(1)}}\sum_{i=1}^{n}\mathbb{E}\bigl[T_{i}\times\bigl(\mathbb{E}[Y_{i}\mid G_{\mathcal{N}_{i}}^{\mathbf{T}}\simeq g_{i}^{\mathbf{t}},T_{i}=1]-\mathbb{E}[Y_{i}\mid G_{\mathcal{N}_{i}}^{\mathbf{T}}\simeq g_{i}^{\mathbf{t}},T_{i}=0]\bigr)\bigr],$
where $G_{\mathcal{N}_{i}}^{\mathbf{T}}$ is the neighborhood graph of $i$
labelled according to the treatment assignment $\mathbf{T}$.
The proposition (whose proof is in the appendix) states that the interference
received by a unit is a function of each unit’s neighborhood graph. Further,
the outcomes can be decomposed additively into this function and the direct
treatment effect on $i$. The proposition implies that the ADE is identified by
matching each treated unit to one or more control units with an isomorphic
neighborhood graph, and computing the direct effect on the treated using these
matches. This effect is, in expectation over individual treatment assignments,
equal to the ADE.
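The identification strategy implied by Proposition 1 can be sketched as follows (an oracle version for intuition only; `labeled_neighborhood_graph` and `same_interference_class` are the hypothetical helpers above, and Section 2.3 explains why exact isomorphic matching is infeasible in practice):

```python
import numpy as np

def oracle_ade(G, t, y, neighborhood, isomorphic):
    """Match each treated unit to every control with an isomorphic labeled
    neighborhood graph, and average the treated-minus-control differences."""
    t, y = np.asarray(t), np.asarray(y)
    diffs = []
    for i in np.where(t == 1)[0]:
        Hi = neighborhood(G, t, i)
        controls = [j for j in np.where(t == 0)[0]
                    if isomorphic(Hi, neighborhood(G, t, j))]
        if controls:                          # exact matches rarely exist
            diffs.append(y[i] - y[controls].mean())
    return np.mean(diffs) if diffs else np.nan
```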
#### 2.3 Subgraph Selection via Almost-Matching-Exactly
Given Proposition 1 and the framework established in the previous section, we
would ideally like to match treated and control units that have isomorphic
neighborhood graphs. This would allow us to better estimate the ADE without
suffering interference bias: for a treated unit $i$, if a control unit $j$ can
be found such that $G_{\mathcal{N}_{i}}^{\mathbf{t}}\simeq
G_{\mathcal{N}_{j}}^{\mathbf{t}}$, then $j$’s outcome will be identical in
expectation to $i$’s counterfactual outcome and can be used as a proxy.
Unfortunately, the number of non-isomorphic (canonically unique) graphs with a
given number of nodes and edges grows incredibly quickly (Harary, 1994) and
finding such matches is infeasible for large graphs. We therefore resort to
counting all subgraphs that appear in a unit’s neighborhood graph and matching
units based on the counts of those subgraphs. However, instead of exactly
matching on the counts of those subgraphs, we match treated and control units
if they have _similar_ counts, since matching exactly on all subgraph counts
implies isomorphic neighborhoods and is also infeasible. Further, absolutely
exact matches may not exist in real networks.
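To make the counting step concrete, here is a brute-force sketch (names ours). It enumerates connected induced subgraphs, which is feasible only because neighborhood graphs are small, and uses the Weisfeiler-Lehman hash as an approximate canonical key for labeled shapes (a perfect canonical labeling would be exact but slower):

```python
from itertools import combinations
from collections import Counter
import networkx as nx

def subgraph_counts(H, max_size=4):
    """Count connected induced subgraphs of a labeled neighborhood graph H,
    keyed by a canonical form of the labeled shape."""
    counts = Counter()
    for k in range(1, max_size + 1):
        for nodes in combinations(H.nodes(), k):
            S = H.subgraph(nodes)
            if nx.is_connected(S):
                key = nx.weisfeiler_lehman_graph_hash(S, node_attr="treated")
                counts[key] += 1
    return counts
```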
Constructing inexact matches, in turn, requires a measure of relative graph
importance. In Figure 1, for example, there are two control units that the
treated unit may be matched to; if triangles contribute more to the
interference function, it should be matched to the right; otherwise, if degree
and/or two-stars are more important, it should be matched to the left. Of
course, these relative importance measures might depend on the problem and we
would like to learn them.
Figure 1: Inexact matching presupposes an ordering of feature importance;
should the treated ego (black) be matched to a control whose neighborhood
graph has the same number of units (left), or same number of triangles
(right)?
It might be tempting to match directly on $f$, as that would lead to unbiased
inference. However, we abstain from doing so for two reasons. Firstly, in
practice, the true interference is unknown and we could only match on
estimated values of $f$; this suffers from all the problems that afflict
matching on estimated propensity scores without appropriate adjustments
(Abadie and Imbens, 2016) or parametric approximations (Rubin and Thomas,
1996). Such corrections or approximations do not currently exist for estimated
interference functions and their development is an active area of research.
Secondly, interpretability is a key component of our framework that would be
lost matching on $f$-values; these values are scalar summaries of interference
that depends on entire graphs. Estimating $f$ well would also likely require
complex and uninterpretable nonparametric methods. In Section K of the
appendix, we empirically compare matching units on $f$-values to our subgraph
matching method via simulation. The loss of interpretability associated with
matching on $f$ does not yield substantial gains in performance, even when
using _true_ values of $f$ for matching, which is impossible in practice.
Almost-Matching-Exactly (AME) (Wang et al., 2019; Dieng et al., 2019; Awan et
al., 2019) provides a framework for the above problem that is explicitly
geared towards building interpretable, high-quality matches on discrete
covariates, which in our setting are the counts of the treated subgraphs in
the neighborhood. AME performs inexact matching while learning importance
weights for each covariate from a training set, prioritizing matches on more
important covariates. In this way, it neatly addresses the challenge of
inexact matching by learning a metric specific to discrete covariates (namely,
a weighted Hamming distance). Formally, AME matches units so as to optimize a
flexible measure of match quality. For each treated unit $i$, solving the AME
problem is equivalent to finding:
$\boldsymbol{\theta}^{i^{*}}\in\operatorname*{arg\,max}_{\boldsymbol{\theta}\in\{0,1\}^{p}}\boldsymbol{\theta}^{T}\mathbf{w}\;\text{ such that }\exists j:t_{j}=0\text{ and }\mathbf{x}_{j}\circ\boldsymbol{\theta}=\mathbf{x}_{i}\circ\boldsymbol{\theta}$ (4)
where $\circ$ denotes the Hadamard product, $\mathbf{w}$ is a vector of
weights and $\boldsymbol{x}_{i},\boldsymbol{x}_{j}$ are vectors of binary
covariates for units $i$ and $j$ that we might like to match on. In our
network interference setting, these are vectors of subgraph counts. The vector
$\mathbf{w}$ denotes the importance of each subgraph in causing interference.
We will leverage both information on outcomes and networks to construct an
estimate for it.
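For intuition, problem (4) can be solved by brute force when the number of covariates $p$ is tiny; FLAME, described below, approximates this search greedily. A sketch (names ours):

```python
from itertools import product
import numpy as np

def ame_match(x_i, X_control, w):
    """Solve problem (4) for one treated unit with covariates x_i: enumerate
    theta in {0,1}^p, maximizing theta'w subject to at least one control row
    of X_control agreeing with x_i on every covariate selected by theta."""
    best_theta, best_val = None, -np.inf
    for bits in product([0, 1], repeat=len(w)):
        theta = np.array(bits)
        # a control is feasible if it matches x_i wherever theta selects
        feasible = np.any(np.all((X_control == x_i) | (theta == 0), axis=1))
        if feasible and theta @ w > best_val:
            best_theta, best_val = theta, theta @ w
    return best_theta
```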
We start by enumerating (up to isomorphism) all the $p$ subgraphs
$g_{1},\dots,g_{p}$ that appear in any of the
$G_{\mathcal{N}_{i}}^{\mathbf{t}},i\in 1,\dots,n$. The covariates for unit $i$
are then given by
$S(G_{\mathcal{N}_{i}}^{\mathbf{t}})=(S_{1}(G_{\mathcal{N}_{i}}^{\mathbf{t}}),\dots,S_{p}(G_{\mathcal{N}_{i}}^{\mathbf{t}}))$
where $S_{k}(G_{\mathcal{N}_{i}}^{\mathbf{t}})$ denotes the number of times
subgraph $g_{k}$ appears in the subgraphs of
$G_{\mathcal{N}_{i}}^{\mathbf{t}}$. These counts are then converted into
binary indicators that are one if the count of subgraph $g_{k}$ in each unit’s
neighborhood is exactly $x$, for all $x$ observed in the data. Thus, units
will be matched exactly if they have identical subgraph counts. We then
approximately solve the problem in Equation (4) to find the optimally
important set of subgraphs upon which to exactly match each treated unit, such
that there is at least one control unit that matches exactly with the treated
unit on the chosen subgraph counts. The key idea behind this approach is that
we want to match units exactly on subgraph counts that contribute
significantly to the interference function, trading off exactly-matching on
these important subgraphs with potential mismatches on subgraphs that
contribute less to interference.
In practice, our implementation enumerates all subgraphs in each unit’s
neighborhood and stores the count of each pattern – this is computationally
challenging. There is a growing body of work on efficient counting algorithms
for pre-specified small patterns (up to 4-5 nodes) but there is little
research on fast methods to both enumerate and count all motifs in a graph
(e.g., Pinar et al., 2017; Marcus and Shavitt, 2010; Hu et al., 2013).
Empirically, we see that this enumeration takes less than 30 seconds for 50
units.
###### The FLAME Algorithm for AME.
The Fast Large Almost Matching Exactly (FLAME) algorithm (Wang et al., 2019)
approximates the solution to the AME problem. The procedure starts by exactly
matching all possible units on all covariates. It then drops one covariate at
a time, choosing the drop maximizing the match quality $\mathtt{MQ}$ at that
iteration, defined:
$\displaystyle\mathtt{MQ}=C\cdot\mathtt{BF}-\widehat{\mathtt{PE}}_{Y}.$ (5)
The match quality is the sum of a balancing factor $\mathtt{BF}$ and a
predictive error $\widehat{\mathtt{PE}}_{Y}$, with relative weights determined
by the hyper-parameter $C$. The balancing factor is defined as the proportion
of treated units plus the proportion of control units matched at that
iteration. Introducing the balancing factor into the objective has the
advantage of encouraging more units to be matched, thereby minimizing variance
of estimators (see Wang et al., 2019). In our setting, the second component of
the match quality, predictive error, takes the form:
$\widehat{\mathtt{PE}}_{Y}=\min_{h\in\mathcal{F}_{1}}\sum_{i=1}^{n}(Y_{i}-h(S(G_{\mathcal{N}_{i}}^{\mathbf{t}})\circ\boldsymbol{\theta},T_{i}))^{2}$ (6)
for some class of functions $\mathcal{F}_{1}$. It is computed using a holdout
training set and discourages dropping covariates that are useful for
predicting the outcome. In this way, FLAME strikes a balance between matching
many units and ensuring these matches are of high-quality. By using a holdout
set to determine how useful a set of variables is for out-of-sample
prediction, FLAME learns a measure of covariate importance via a weighted
Hamming distance. Specifically, it learns a vector of importance weights
$\mathbf{w}$ for the different subgraph counts that minimizes
$\mathbf{w}^{T}\mathbb{I}[S(G_{\mathcal{N}_{i}}^{\mathbf{t}})\neq
S(G_{\mathcal{N}_{j}}^{\mathbf{t}})]$, where
$\mathbb{I}[S(G_{\mathcal{N}_{i}}^{\mathbf{t}})\neq
S(G_{\mathcal{N}_{j}}^{\mathbf{t}})]$ is a vector whose $k^{\textrm{th}}$
entry is 0 if the labeled neighborhood graphs of $i$ and $j$ have the same
count of subgraph $k$, and 1 otherwise.
To this match quality term, we add a network fit term to give subgraphs more
weight that are highly predictive of overall network structure. We fit a
logistic regression model in which the edges $(i,j)$ between units $i,j$ are
independent given $\mathcal{N}_{i},\mathcal{N}_{j}$, and dependent on the
subgraph counts of units $i$ and $j$:
$(i,j)\stackrel{iid}{\sim}\textrm{Bern}(\textrm{logit}^{-1}(\beta_{1}^{T}S(G_{\mathcal{N}_{i}}^{\mathbf{t}})+\beta_{2}^{T}S(G_{\mathcal{N}_{j}}^{\mathbf{t}})))$
To the match quality in the original formulation, we then subtract
$\widehat{\mathtt{PE}}_{G}$, defined to be the _AIC_ (Akaike, 1974) of this
fitted model, weighted by a hyperparameter $D$, so that maximizing
$\mathtt{MQ}$ penalizes poor network fit. Therefore, at each iteration:
$\mathtt{MQ}=C\cdot\mathtt{BF}-\widehat{\mathtt{PE}}_{Y}-D\cdot\widehat{\mathtt{PE}}_{G}.$
Thus, we penalize not only subgraph drops that impede predictive performance
or making matches, but also those that make the observed network unlikely.
$\widehat{\mathtt{PE}}_{G}$ represents the empirical prediction error of the
chosen set of statistics for the observed graph: if
$\widehat{\mathtt{PE}}_{G}$ is low, then the chosen subgraphs do a good job of
predicting the observed graph. This error is also evaluated at a minimum over
another class of prediction functions, $\mathcal{F}_{2}$. This term in the AME
objective is justified by Assumption A3: units that have isomorphic labeled
neighborhood graphs should experience the same amount of interference, and
subgraph counts should be predictive of neighborhood graph structure.
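A sketch of how the network-fit term might be computed (assuming statsmodels; each unordered pair of units contributes one Bernoulli observation, and the fit may need regularization on very sparse graphs):

```python
import numpy as np
import statsmodels.api as sm

def predictive_error_g(G, S):
    """PE_G: AIC of the logistic edge model, where S is the (n, p) array of
    selected subgraph counts and edge (i, j) occurs with probability
    logistic(b0 + b1'S_i + b2'S_j)."""
    X, y = [], []
    n = S.shape[0]
    for i in range(n):
        for j in range(i + 1, n):
            X.append(np.concatenate([S[i], S[j]]))
            y.append(1.0 if G.has_edge(i, j) else 0.0)
    fit = sm.Logit(np.array(y), sm.add_constant(np.array(X))).fit(disp=0)
    return fit.aic
```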
Our approach to estimating the ADE is therefore as follows. (1) For each unit
$i$, count and label all types of subgraphs in
$G_{\mathcal{N}_{i}}^{\mathbf{t}}$. (2) Run FLAME, encouraging large numbers
of matches on subgraph counts, while using the covariates that are most
important for predicting the outcome and the network. (3) Estimate ADE as
$\widehat{ADE}$, by computing the difference in means for each matched group
and then averaging across matched groups, weighted by their size. Since our
approach is based on FLAME, we call it _FLAME-Networks_.
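Step (3) amounts to a size-weighted difference in means across matched groups; a minimal sketch (names ours):

```python
import numpy as np

def ade_from_matches(matched_groups, y, t):
    """Within each matched group (a list of unit indices), take the treated-
    minus-control difference in mean outcomes, then average across groups
    weighted by group size."""
    y, t = np.asarray(y), np.asarray(t)
    diffs, sizes = [], []
    for group in matched_groups:
        g = np.asarray(group)
        y1, y0 = y[g[t[g] == 1]], y[g[t[g] == 0]]
        if len(y1) and len(y0):
            diffs.append(y1.mean() - y0.mean())
            sizes.append(len(g))
    return np.average(diffs, weights=sizes)
```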
Extensions. FLAME-Networks immediately extends to handling unit-level
covariate information for baseline adjustments; we simply concatenate subgraph
information and covariate information in the same dataset and then make
almost-exact matches. FLAME-Networks will automatically learn the weights of
both subgraphs and baseline covariates to make matches that take both into
account. Another straightforward extension considers interference not just in
the immediate neighborhood of each unit, but up to an arbitrary number of hops
away. To extend FLAME-Networks in this way, it is sufficient to enumerate
subgraphs in the induced neighborhood graph of each unit $i$ where the
vertices considered are those with a path to $i$ that is at most $k$ steps
long. Given these counts, our method proceeds exactly as before.
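The $k$-hop extension only changes how the neighborhood graph is built; for example, with networkx:

```python
import networkx as nx

def k_hop_neighborhood(G, i, k=2):
    """Induced graph on all vertices with a path of length <= k to i,
    excluding the ego itself; k = 1 recovers G_{N_i}."""
    return nx.ego_graph(G, i, radius=k, center=False)
```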
### 3 EXPERIMENTS
We begin by evaluating the performance of our estimator in a variety of
simulated settings, in which we vary the form of the interference function. We
find that our approach performs well in many distinct settings. In Section M
of the appendix, we also assess the quality of the matches constructed.
We simulate graphs from an Erdős-Rényi model:
$G\sim\textrm{Erd\H{o}s-R\'{e}nyi}(n,q)$, under which every possible edge
between the $n$ units is created independently, with probability $q$. In
Sections H and I of the appendix, we also perform experiments on cluster-
randomized and real-world networks. Treatments for the whole sample are
generated with $\Pr(\mathbf{T}=\mathbf{t})=\binom{n}{n^{(1)}}^{-1}$, where
$n^{(1)}$ is the number of treated units. Outcomes are generated according to
$Y_{i}(t,\mathbf{t}_{-i})=t\tau_{i}+f(G_{\mathcal{N}_{i}}^{\mathbf{t}})+\epsilon_{i}$,
where $\epsilon\sim N(\mathbf{0},I_{n})$ represents a baseline outcome;
$\tau\sim N(\mathbf{5},I_{n})$ represents the direct treatment effects, and
$f$ is the interference function. In Section J of the appendix, we consider a
setting in which the errors are heteroscedastic. For the interference
function, we use additive combinations of the subgraph components in Table 1
and define $m_{ip},p=1,\dots,7$ to be the counts of feature $p$ in
$G_{\mathcal{N}_{i}\cup\{i\}}^{\mathbf{t}}$. Lastly, the counts of each
component are normalized to have mean 0 and standard deviation 1.
Component | Definition
---|---
$d_{i}$ | The treated degree of unit $i$.
$\Delta_{i}$ | The number of triangles in $G_{\mathcal{N}_{i}\cup\{i\}}^{\mathbf{t}}$ with at least one treated unit.
$\bigstar_{i}^{k}$ | The number of $k$-stars in $G_{\mathcal{N}_{i}\cup\{i\}}^{\mathbf{t}}$ with at least one treated unit.
$\dagger_{i}^{k}$ | The number of units in $G_{\mathcal{N}_{i}\cup\{i\}}^{\mathbf{t}}$ with degree $\geq k$ and at least one treated unit among their neighbors.
$B_{i}$ | The vertex betweenness of unit $i$.
$C_{i}$ | The closeness centrality of unit $i$.
Table 1: Interference components used in experiments; see the appendix for
more details.
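As a sketch of this data-generating process (our own simplification: only the degree and triangle components, weighted as in Setting 2 of Experiment 1 below, with $\Delta_{i}$ restricted to triangles through $i$):

```python
from itertools import combinations
import networkx as nx
import numpy as np

rng = np.random.default_rng(0)
n, n1 = 50, 25
G = nx.erdos_renyi_graph(n, 0.05, seed=0)
t = np.zeros(n, dtype=int)
t[rng.choice(n, n1, replace=False)] = 1           # uniform randomization

# treated degree d_i, and triangles through i with >= 1 treated vertex
d = np.array([sum(t[j] for j in G.neighbors(i)) for i in G.nodes()])
tri = np.array([sum(t[[i, j, k]].any()
                    for j, k in combinations(list(G.neighbors(i)), 2)
                    if G.has_edge(j, k))
                for i in G.nodes()])

def z(v):                                         # normalize each component
    return (v - v.mean()) / (v.std() + 1e-12)

tau = rng.normal(5.0, 1.0, n)                     # direct effects
y = t * tau + 10 * z(d) + 10 * z(tri) + rng.normal(size=n)
```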
We compare our approach with different methods to estimate the ADE under
interference:
Naïve. The simple difference in means between treatment and control groups
assuming no interference.
All Eigenvectors. Eigenvectors for the entire adjacency matrix are computed
with every treated unit matched to the control unit minimizing the Mahalanobis
distance between the eigenvectors, weighing the $k$’th eigenvector by $1/k$.
The idea behind this estimator is that the eigendecomposition of the adjacency
matrix encodes important information about the network and how interference
might spread within it.
First Eigenvector. Same as All Eigenvectors except units are matched only on
their values of the largest-eigenvalue eigenvector.
Stratified Naïve. The stratified naïve estimator as discussed by Sussman and
Airoldi (2017). A weighted difference-in-means estimator where units are
divided into strata defined by their treated degree (number of treated
vertices they are connected to), and assigned weight equal to the number of
units within the stratum in the final difference of weighted averages between
treated and control groups (a code sketch of this estimator appears after the list of methods).
SANIA MIVLUE. The minimum integrated variance, linear unbiased estimator under
assumptions of symmetrically received interference and additivity of main
effects, when the priors on the baseline outcome and direct treatment effect
have no correlation between units; proposed by Sussman and Airoldi (2017).
FLAME-Networks. Our proposed method. In all simulations, the two components of
the PE function are weighted equally, and a ridge regression is used to
compute outcome prediction error.
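As an example of these baselines' mechanics, the following is a minimal sketch (names ours) of the stratified naïve estimator:

```python
import numpy as np

def stratified_naive(y, t, treated_degree):
    """Difference in means within each treated-degree stratum, with strata
    weighted by their size in the final average."""
    y, t, td = np.asarray(y), np.asarray(t), np.asarray(treated_degree)
    diffs, sizes = [], []
    for s in np.unique(td):
        idx = td == s
        y1, y0 = y[idx & (t == 1)], y[idx & (t == 0)]
        if len(y1) and len(y0):
            diffs.append(y1.mean() - y0.mean())
            sizes.append(idx.sum())
    return np.average(diffs, weights=sizes)
```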
#### 3.1 Experiment 1: Additive Interference
First we study a setting in which interference is an additive function of the
components in Table 1. Outcomes in this experiment have the form:
$Y_{i}=\gamma_{1}d_{i}+\gamma_{2}\Delta_{i}+\gamma_{3}\bigstar^{2}_{i}+\gamma_{4}\bigstar^{4}_{i}+\gamma_{5}\dagger_{i}^{3}+\gamma_{6}B_{i}+\gamma_{7}C_{i}+\epsilon_{i}$,
with $\epsilon_{i}\sim N(0,1)$. We simulate 50 datasets for each setting, in
which the units are in an $ER(50,0.05)$ graph. Table 2 shows values for the
$\gamma_{i}$ in each of our experimental settings.
Feature | $d_{i}$ | $\Delta_{i}$ | $\bigstar^{2}_{i}$ | $\bigstar^{4}_{i}$ | $\dagger^{3}_{i}$ | $B_{i}$ | $C_{i}$
---|---|---|---|---|---|---|---
Weight | $\gamma_{1}$ | $\gamma_{2}$ | $\gamma_{3}$ | $\gamma_{4}$ | $\gamma_{5}$ | $\gamma_{6}$ | $\gamma_{7}$
Setting 1 | 0 | 10 | 0 | 0 | 0 | 0 | 0
Setting 2 | 10 | 10 | 0 | 0 | 0 | 0 | 0
Setting 3 | 0 | 10 | 1 | 1 | 1 | 1 | -1
Setting 4 | 5 | 1 | 10 | 1 | 1 | 1 | -1
Table 2: Settings for Experiment 1.
Results for Experiment 1 are reported in Figure 2. FLAME-Networks outperforms
all other methods both in terms of average error, and standard deviation over
the simulations. This is likely because FLAME-Networks learns weights for the
subgraphs that are proportionate to those we use at each setting, and matches
units on subgraphs with larger weights. When the interference function is
multiplicative instead of additive, FLAME-Networks performs similarly; results
are in the appendix.
Figure 2: Results from Experiment 1. Each violin plot represents the
distribution over simulations of absolute estimation error. The panels are
numbered according to the parameter settings of the simulations. Violin plots
are blue if the method had mean error lower than or equal to FLAME-Networks’
and red otherwise. The black line inside each violin is the median error. The
dashed line is FLAME-Networks’ mean error.
#### 3.2 Experiment 2: Covariate Adjustment
A strength of FLAME-Networks is its ability to natively account for covariate
information. We analyze a setting in which baseline effects are dependent on
an additional discrete-valued covariate, $x$, that is observed alongside the
network. Outcomes take the form
$Y_{i}=t\tau_{i}+f(G_{\mathcal{N}_{i}}^{\mathbf{t}})+\beta x_{i}+\epsilon_{i}$, where $x_{i}$ is chosen uniformly at random from $\{1,2,3\}$ for each unit, and $\beta$ is fixed at 15. This means that our
sample is divided into 3 strata defined by the covariate values. We ran FLAME-
Networks with a dataset consisting of subgraph counts, plus observed
covariates for each unit. For comparison with the other methods, we first
regress $Y$ on $x$, and then use the residuals of that regression as outcomes
for the other methods. This way, the initial regression will account for the
baseline effects of $x$, and the residuals contain only potential
interference. The interference function takes the form
$f(G_{\mathcal{N}_{i}}^{\mathbf{t}})=d_{i}+\Delta_{i}+B_{i}$, which is what we
are trying to learn with the other methods. We simulate the sample network
from $ER(50,0.05)$.
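The residualization step used for the baselines might look as follows (a sketch assuming scikit-learn; FLAME-Networks instead receives $x$ directly as an additional matching column):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def residualize(y, x):
    """Regress Y on the covariate x and return the residuals, which the
    baselines then use in place of the raw outcomes."""
    x = np.asarray(x, dtype=float).reshape(-1, 1)
    y = np.asarray(y, dtype=float)
    return y - LinearRegression().fit(x, y).predict(x)
```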
Results are displayed in Table 3. FLAME-Networks performs, on average, better
than all the other methods. Results in Section L of the supplement show that
when $\beta$ is increased, none of the methods suffer in performance. While
regression adjustment prior to estimation seems to have a positive impact on
the performance of other methods in the presence of additional covariates,
FLAME-Networks performs best. This is because FLAME-Networks is built to
easily handle the inclusion of covariates in its estimation procedure.
Method | Median | 25th q | 75th q
---|---|---|---
FLAME-Networks | 0.39 | 0.21 | 0.59
First Eigenvector | 0.47 | 0.40 | 0.83
All Eigenvectors | 0.55 | 0.29 | 0.79
Naive | 0.53 | 0.36 | 0.92
SANIA | 1.93 | 1.75 | 2.25
Stratified | 4.49 | 4.45 | 4.53
Table 3: Results from Experiment 2 with $\beta=5$. Median and 25th and 75th
percentile of absolute error over 40 simulated datasets.
#### 3.3 Experiment 3: Misspecified Interference
We now study the robustness of our estimator in a setting in which one of our
key assumptions – A3 – is violated. Specifically, we now allow for treated and
control units to receive different amounts of interference, even if they have
the same labelled neighborhood graphs. We do this by temporarily eliminating
all control-control edges in the network and then counting the features in
Table 1 used to assign interference. That is, consider a unit $i$ with a
single, untreated neighbor $j$. In our new setting, if degree is a feature of
the interference function, then $i$ being treated implies $i$ receives
interference from $j$. But if $i$ is untreated, then $i$ would receive no
interference from $j$, because its neighbor is also untreated. This crucially
implies that FLAME-Networks will be matching–and estimating the ADE
from–individuals that do not necessarily receive similar amounts of
interference.
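This mechanism can be expressed in a few lines (a sketch assuming networkx):

```python
import networkx as nx

def misspecified_feature_graph(G, t):
    """Experiment 3: drop all control-control edges before computing the
    interference features, so two units with identical labeled neighborhood
    graphs in G can receive different interference (violating A3)."""
    H = G.copy()
    H.remove_edges_from([(i, j) for i, j in G.edges()
                         if t[i] == 0 and t[j] == 0])
    return H
```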
In this setting, we generate interference according to:
$f_{i}=(5-\gamma)d_{i}+\gamma\Delta_{i}$ for $\gamma\in[0,5]$ and assess the
performance of FLAME-Networks against that of the SANIA and stratified
estimators. Results are shown in Figure 3. We see that, when degree is the
only component with weight in the interference function, FLAME-Networks
performs better than the stratified estimator, but worse than the SANIA
estimator, which leverages aspects of the graph related to degree. However,
our performance improves as $\gamma$ increases and the true interference
depends more on triangle counts since the triangle counts available to FLAME-
Networks represent the actual interference pattern more frequently than the
degree counts did. Thus, we see that although violation of our method’s
assumptions harms its performance, it still manages at times to outperform
estimators that rely too heavily on degree.
Figure 3: Results from Experiment 3. These simulations were run on an ER(75,
0.07) graph. The bands around the lines represent 25th and 75th quantiles of
the 50 simulations, for each value of $\gamma$.
### 4 APPLICATION
In this section, we demonstrate the practical utility of our method. We use
data collected by Banerjee et al. (2013) on social networks for 75 villages in
Karnataka, India. They are a median distance of 46 km from one another,
motivating the assumption that network interference is experienced solely
between individuals from the same village. For each village, we study the
effect of 1. lack of education on election participation; 2. lack of education
on Self-Help-Group (SHG) participation; and 3. being male on SHG
participation. We proxy election participation by ownership of an election
card. We compare our estimates – which account for network interference – to
naive estimates – which assume no network interference. Data pre-processing is
summarized in the appendix.
For ADE estimates, we assume the treatment is randomly assigned. We find that
lack of education is associated with higher SHG participation, and that males
are less likely to participate in SHGs than females (see Figure 4). These
results make sense in the context of developing countries where SHGs are
mainly utilized by females in low-income families, which generally have lower
education levels. We observe that education does not impact election
participation; in developing countries, an individual’s decision to
participate in an election may be driven by factors such as caste, religion,
influence of local leaders and closeness of race (Shachar and Nalebuff, 1999;
Gleason, 2001). FLAME-Networks matches units in each village by subgraph
counts and covariate values to estimate the ADE. Looking at the matched
groups, we discover that subgraphs such as 2-stars and triangles were
important for matching, implying that second-order connections could be
affecting interference in this setting. Further details of the matched groups
are in Section F.
Figure 4 plots naive and FLAME-Networks ADE estimates. We find a significant
difference between our estimates and the naive estimates when estimating the
effect of being male on participation in SHGs. The naive estimator
overestimates the treatment effect, which is possible when ignoring network
interference. It is plausible, in this setting, that interference would
heighten these effects for all outcomes. This is because individuals from
similar social backgrounds or gender tend to interact more together, and,
therefore, are more likely to influence each other’s participation decision,
both in elections and in SHGs.
Figure 4: Naive and FLAME-Networks ADE estimates and their difference. Red,
blue, and green respectively correspond to (treatment, outcome) pairs: (no
education, election participation), (no education, SHG participation), and
(gender, SHG participation).
### 5 DISCUSSION
Conventional estimators for treatment effects in randomized experiments will
be biased when there is interference between units. We have introduced FLAME-
Networks – a method to recover direct treatment effects in such settings. Our
method is based on matching units with similar neighborhood graphs in an
almost-exact way, thus producing interpretable, high-quality results. We have
shown that FLAME-Networks performs better than existing methods on simulated
data, and we have used real-world data to show how it can be applied
in a real setting. Our method extends easily to settings with additional
covariate information for units and to taking into account larger
neighborhoods for interference. In future work, our method can be extended to
learning a variety of types of distance metrics between graphs.
##### Acknowledgements
This work was supported in part by NIH award 1R01EB025021-01, NSF awards
IIS-1552538 and IIS1703431, a DARPA award under the L2M program, and a Duke
University Energy Initiative ERSF grant.
### References
* Abadie and Imbens (2016) Alberto Abadie and Guido Imbens. Matching on the estimated propensity score. _Econometrica_ , pages 781–807, 2016.
* Akaike (1974) H. Akaike. A new look at the statistical model identification. _IEEE Transactions on Automatic Control_ , 19(6):716–723, December 1974. doi: 10.1109/TAC.1974.1100705.
* Aronow (2012) Peter M Aronow. A general method for detecting interference between units in randomized experiments. _Sociological Methods & Research_, 41(1):3–16, 2012.
* Aronow et al. (2017) Peter M Aronow, Cyrus Samii, et al. Estimating average causal effects under general interference, with application to a social network experiment. _The Annals of Applied Statistics_ , 11(4):1912–1947, 2017.
* Athey et al. (2018) Susan Athey, Dean Eckles, and Guido W Imbens. Exact p-values for network interference. _Journal of the American Statistical Association_ , 113(521):230–240, 2018.
* Awan et al. (2019) M Awan, Yameng Liu, Marco Morucci, Sudeepa Roy, Cynthia Rudin, and Alexander Volfovsky. Interpretable almost matching exactly with instrumental variables. In _Conference on Uncertainty in Artificial Intelligence (UAI)_ , 2019.
* Banerjee et al. (2013) Abhijit Banerjee, Arun G Chandrasekhar, Esther Duflo, and Matthew O Jackson. The diffusion of microfinance. _Science_ , 341(6144):1236498, 2013.
* Basse and Airoldi (2018) Guillaume W Basse and Edoardo M Airoldi. Model-assisted design of experiments in the presence of network-correlated outcomes. _Biometrika_ , 105(4):849–858, 2018.
* Bowers et al. (2013) Jake Bowers, Mark M Fredrickson, and Costas Panagopoulos. Reasoning about interference between units: A general framework. _Political Analysis_ , 21(1):97–124, 2013.
* Cox (1958) David Roxbee Cox. _Planning of Experiments_. Wiley, 1958.
* Dieng et al. (2019) Awa Dieng, Yameng Liu, Sudeepa Roy, Cynthia Rudin, and Alexander Volfovsky. Interpretable almost-exact matching for causal inference. In _Proceedings of Artificial Intelligence and Statistics (AISTATS)_ , pages 2445–2453, 2019.
* Duflo and Saez (2003) Esther Duflo and Emmanuel Saez. The role of information and social interactions in retirement plan decisions: Evidence from a randomized experiment. _The Quarterly Journal of Economics_ , 118(3):815–842, 2003.
* Eckles et al. (2016) Dean Eckles, René F Kizilcec, and Eytan Bakshy. Estimating peer effects in networks with peer encouragement designs. _Proceedings of the National Academy of Sciences_ , 113(27):7316–7322, 2016.
* Eckles et al. (2017) Dean Eckles, Brian Karrer, and Johan Ugander. Design and analysis of experiments in networks: Reducing bias from interference. _Journal of Causal Inference_ , 5(1), 2017.
* Forastiere et al. (2016) Laura Forastiere, Edoardo M Airoldi, and Fabrizia Mealli. Identification and estimation of treatment and interference effects in observational studies on networks. _arXiv preprint arXiv:1609.06245_ , 2016.
* Gleason (2001) Suzanne Gleason. Female political participation and health in india. _The ANNALS of the American Academy of Political and Social Science_ , 573(1):105–126, 2001.
* Halloran and Struchiner (1995) M Elizabeth Halloran and Claudio J Struchiner. Causal inference in infectious diseases. _Epidemiology (Cambridge, Mass.)_ , 6(2):142–151, 1995.
* Harary (1994) Frank Harary. _Graph Theory_. Addison-Wesley, 1994.
* Harris (2013) Kathleen Harris. The add health study: design and accomplishments. Technical report, Carolina Population Center: University of North Carolina at Chapel Hill, 2013.
* Hu et al. (2013) Xiaocheng Hu, Yufei Tao, and Chin-Wan Chung. Massive graph triangulation. In _Proceedings of the 2013 ACM SIGMOD international conference on Management of data_ , pages 325–336, 2013.
* Jagadeesan et al. (2019) Ravi Jagadeesan, Natesh Pillai, and Alexander Volfovsky. Designs for estimating the treatment effect in networks with interference. _Annals of Statistics_ , 2019.
* Liu and Hudgens (2014) Lan Liu and Michael G Hudgens. Large sample randomization inference of causal effects in the presence of interference. _Journal of the American Statistical Association_ , 109(505):288–301, 2014.
* Liu et al. (2016) Lan Liu, Michael G Hudgens, and Sylvia Becker-Dreps. On inverse probability-weighted estimators in the presence of interference. _Biometrika_ , 103(4):829–842, 2016.
* Manski (2013) Charles F Manski. Identification of treatment response with social interactions. _The Econometrics Journal_ , 16(1):S1–S23, 2013.
* Marcus and Shavitt (2010) Dror Marcus and Yuval Shavitt. Efficient counting of network motifs. In _2010 IEEE 30th International Conference on Distributed Computing Systems Workshops_ , pages 92–98. IEEE, 2010.
* Ogburn et al. (2017) Elizabeth L Ogburn, Tyler J VanderWeele, et al. Vaccines, contagion, and social networks. _The Annals of Applied Statistics_ , 11(2):919–948, 2017.
* Pinar et al. (2017) Ali Pinar, C Seshadhri, and Vaidyanathan Vishal. Escape: Efficiently counting all 5-vertex subgraphs. In _Proceedings of the 26th International Conference on World Wide Web_ , pages 1431–1440, 2017.
* Rosenbaum (2007) Paul R Rosenbaum. Interference between units in randomized experiments. _Journal of the American Statistical Association_ , 102(477):191–200, 2007.
* Rosenbaum and Rubin (1983) Paul R Rosenbaum and Donald B Rubin. The central role of the propensity score in observational studies for causal effects. _Biometrika_ , 70(1):41–55, 1983.
* Rubin and Thomas (1996) Donald Rubin and Neal Thomas. Matching using estimated propensity scores: relating theory to practice. _Biometrics_ , 52(1):249–264, 1996.
* Rubin (1980) Donald B Rubin. Randomization analysis of experimental data: The fisher randomization test comment. _Journal of the American Statistical Association_ , 75(371):591–593, 1980.
* Sävje et al. (2017) Fredrik Sävje, Peter M Aronow, and Michael G Hudgens. Average treatment effects in the presence of unknown interference. _arXiv preprint arXiv:1711.06399_ , 2017.
* Shachar and Nalebuff (1999) Ron Shachar and Barry Nalebuff. Follow the leader: Theory and evidence on political participation. _American Economic Review_ , 89(3):525–547, 1999.
* Sinclair et al. (2012) Betsy Sinclair, Margaret McConnell, and Donald P Green. Detecting spillover effects: Design and analysis of multilevel experiments. _American Journal of Political Science_ , 56(4):1055–1069, 2012.
* Sobel (2006) Michael E Sobel. What do randomized studies of housing mobility demonstrate? causal inference in the face of interference. _Journal of the American Statistical Association_ , 101(476):1398–1407, 2006.
* Sussman and Airoldi (2017) Daniel L Sussman and Edoardo M Airoldi. Elements of estimation theory for causal effects in the presence of network interference. _arXiv preprint arXiv:1702.03578_ , 2017.
* Toulis and Kao (2013) Panos Toulis and Edward Kao. Estimation of causal peer influence effects. In _International Conference on Machine Learning (ICML)_ , pages 1489–1497, 2013.
* Ugander et al. (2013) Johan Ugander, Brian Karrer, Lars Backstrom, and Jon Kleinberg. Graph cluster randomization: Network exposure to multiple universes. In _Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining_ , pages 329–337. ACM, 2013.
* van der Laan (2014) Mark J van der Laan. Causal inference for a population of causally connected units. _Journal of Causal Inference_ , 2(1):13–74, 2014.
* Wang et al. (2019) Tianyu Wang, Marco Morucci, M Awan, Yameng Liu, Sudeepa Roy, Cynthia Rudin, and Alexander Volfovsky. FLAME: A fast large-scale almost matching exactly approach to causal inference. _arXiv preprint arXiv:1707.06315_ , 2019.
## Appendix
### A Proofs from the Paper
#### A.1 Proof of Proposition 1
###### Proof.
We start by writing potential outcomes for an arbitrary unit $i$ from (2) as
$Y_{i}(t,\mathbf{t}_{-i})=f_{i}(t,\mathbf{t}_{-i})+\epsilon_{i}$, where
$\epsilon_{i}$ is some pre-treatment value of the potential outcome, and
$f_{i}(t,\mathbf{t}_{-i})$ is the treatment response function, dependent both
on unit $i$’s treatment and on everyone else’s. Using A0-A3 we can write (2)
as:
$$\begin{aligned}
Y_{i}(t,\mathbf{t}_{-i})&=t\tau_{i}+f_{i}(\mathbf{t}_{-i})+\epsilon_{i} &&\text{(by A1)}\\
&=t\tau_{i}+f_{i}(\mathbf{t}_{\mathcal{N}_{i}})+\epsilon_{i} &&\text{(by A2)}\\
&=t\tau_{i}+f(G_{\mathcal{N}_{i}}^{\mathbf{t}})+\epsilon_{i} &&\text{(by A3)}. \qquad (7)
\end{aligned}$$
This proves Equation (3). The first line agrees with the additivity
assumption, which permits a contribution $f_{i}(\mathbf{t}_{-i})$ to
$Y_{i}(t,\mathbf{t}_{-i})$ arising from anywhere within the rest of the graph.
The second line states that this contribution is limited to $i$’s
neighborhood. The third line states that the contribution depends only on the
neighborhood structure and not anything else about the neighbors.
To prove identification for the ADE, we must show that the individual
treatment effect is identified for a treated unit $i$. We need to show that:
$\mathbb{E}[Y_{i}(1,\mathbf{0})-Y_{i}(0,\mathbf{0})]=\mathbb{E}[Y_{i}|T_{i}=1,G_{\mathcal{N}_{i}}^{\mathbf{T}}\simeq g_{i}^{\mathbf{t}}]-\mathbb{E}[Y_{i}|T_{i}=0,G_{\mathcal{N}_{i}}^{\mathbf{T}}\simeq g_{i}^{\mathbf{t}}]$.
We have first:
$\mathbb{E}[Y_{i}(1,\mathbf{0})-Y_{i}(0,\mathbf{0})]=\mathbb{E}[\tau_{i}+f(G_{\mathcal{N}_{i}}^{\mathbf{0}})+\epsilon_{i}-f(G_{\mathcal{N}_{i}}^{\mathbf{0}})-\epsilon_{i}]=\tau_{i}.$ (8)
Second, we have:
$$\begin{aligned}
&\mathbb{E}[Y_{i}\mid G_{\mathcal{N}_{i}}^{\mathbf{T}}\simeq g_{i}^{\mathbf{t}},T_{i}=1]-\mathbb{E}[Y_{i}\mid G_{\mathcal{N}_{i}}^{\mathbf{T}}\simeq g_{i}^{\mathbf{t}},T_{i}=0]\\
&=\mathbb{E}[Y_{i}(1,\mathbf{T}_{-i})T_{i}+Y_{i}(0,\mathbf{T}_{-i})(1-T_{i})\mid G_{\mathcal{N}_{i}}^{\mathbf{T}}\simeq g_{i}^{\mathbf{t}},T_{i}=1]\\
&\quad-\mathbb{E}[Y_{i}(1,\mathbf{T}_{-i})T_{i}+Y_{i}(0,\mathbf{T}_{-i})(1-T_{i})\mid G_{\mathcal{N}_{i}}^{\mathbf{T}}\simeq g_{i}^{\mathbf{t}},T_{i}=0]\\
&=\mathbb{E}[\tau_{i}+f(g_{i}^{\mathbf{t}})+\epsilon_{i}\mid G_{\mathcal{N}_{i}}^{\mathbf{T}}\simeq g_{i}^{\mathbf{t}},T_{i}=1]-\mathbb{E}[f(g_{i}^{\mathbf{t}})+\epsilon_{i}\mid G_{\mathcal{N}_{i}}^{\mathbf{T}}\simeq g_{i}^{\mathbf{t}},T_{i}=0]\\
&=\alpha+f(g_{i}^{\mathbf{t}})+\tau_{i}-\alpha-f(g_{i}^{\mathbf{t}})=\tau_{i}. \qquad (9)
\end{aligned}$$
The first equality follows from the definition of $Y_{i}$, the second equality
from the result in (3) and A3, and the third equality follows from
independence of $T$ and $Y$ given in A0:
$\mathbb{E}[\epsilon_{i}|T_{i}]=\mathbb{E}[\epsilon_{i}]=\alpha$ for all $i$.
Finally, we can use both of the results above to obtain the ADE:
$$\begin{aligned}
&\frac{1}{n^{(1)}}\sum_{i=1}^{n}\mathbb{E}\bigl[T_{i}\times\bigl(\mathbb{E}[Y_{i}\mid G_{\mathcal{N}_{i}}^{\mathbf{T}}\simeq g_{i}^{\mathbf{t}},T_{i}=1]-\mathbb{E}[Y_{i}\mid G_{\mathcal{N}_{i}}^{\mathbf{T}}\simeq g_{i}^{\mathbf{t}},T_{i}=0]\bigr)\bigr]\\
&=\frac{1}{n^{(1)}}\sum_{i=1}^{n}\mathbb{E}[T_{i}\tau_{i}] &&\text{(by (9))}\\
&=\frac{1}{n^{(1)}}\sum_{i=1}^{n}\mathbb{E}\bigl[T_{i}\times\mathbb{E}[Y_{i}(1,\mathbf{0})-Y_{i}(0,\mathbf{0})]\bigr] &&\text{(by (8))}\\
&=\frac{1}{n^{(1)}}\sum_{i=1}^{n}\Pr(T_{i}=1)\,\mathbb{E}[Y_{i}(1,\mathbf{0})-Y_{i}(0,\mathbf{0})]\\
&=\frac{1}{n^{(1)}}\sum_{i=1}^{n}\frac{n^{(1)}}{n}\,\mathbb{E}[Y_{i}(1,\mathbf{0})-Y_{i}(0,\mathbf{0})]\\
&=\frac{1}{n}\sum_{i=1}^{n}\mathbb{E}[Y_{i}(1,\mathbf{0})-Y_{i}(0,\mathbf{0})],
\end{aligned}$$
where $\Pr(T_{i}=1)=\frac{n^{(1)}}{n}$ by assumption of complete
randomization. ∎
### B Additional Theoretical Results
We study the expected error for one AME match on subgraphs under two
assumptions: that the true weights for the AME objective (the weighted Hamming
distance) are known, and that the candidate units for matching all have
independently generated neighborhoods, and none of the units in these
neighborhoods are being matched. Additional information on this setting is
available in the proof.
###### Proposition 2.
(Oracle AME Error With Independent Neighborhoods) Suppose that there are $N$
independently generated graphs, each with $n$ vertices, and all i.i.d. from
the same distribution over graphs:
$\Pr(G_{1}=g_{1},\dots,G_{N}=g_{N})=\prod_{i=1}^{N}p(g_{i})$. Assume matches
are only allowed between units in different graphs. Suppose additionally that
$n^{(1)}$ randomly chosen units within each graph are assigned treatment, so
that $\Pr(\mathbf{T}_{i}=\mathbf{t}_{i})=\binom{n}{n^{(1)}}^{-1}$. Assume
further that interference functions obey the following: $|f(g)-f(h)|\leq
K\mathbf{w}^{T}\mathbb{I}[S(g)\neq S(h)]$, where $\mathbf{w}$ is a vector of
positive, real-valued, importance weights for the subgraphs counts, such that
$\|\mathbf{w}\|_{1}=M_{n}$ for some constant $0<M_{n}<\infty$, and such that
the condition above is satisfied for $\mathbf{w}$, and $\mathbb{I}[S(g)\neq
S(h)]$ is the Hamming distance: a vector of the same length as $\mathbf{w}$
that is 0 at position $k$ if graphs $g$ and $h$ have the same count of
subgraph $k$, and 1 otherwise. Assume that baseline responses have variance
$Var(\epsilon_{i})=\sigma^{2}\;\forall i$. Then, for a treated unit $i$, if
$j$ solves the AME problem, i.e.,
$j\in\operatorname*{arg\,min}_{k=1,\dots,n;\;T_{k}=0}\mathbf{w}^{T}\mathbb{I}[S(h_{\mathcal{N}_{i}}^{\mathbf{t}})\neq S(G_{\mathcal{N}_{k}}^{\mathbf{T}})]$, then under A0-A3:
$$\begin{aligned}
\mathbb{E}\left[|Y_{i}-Y_{j}-\tau_{i}|\,\big|\,T_{i}=1,G_{\mathcal{N}_{i}}^{\mathbf{t}}=h_{\mathcal{N}_{i}}^{\mathbf{t}}\right]\leq\sqrt{2}\sigma
&+K\binom{n-1}{n^{(1)}}^{-1}\sum_{g\in\mathcal{G}_{n}}\sum_{\substack{\mathbf{t}\in\mathcal{T}\\ t_{j}=0}}\mathbf{w}^{T}\mathbb{I}[S(h_{\mathcal{N}_{i}}^{\mathbf{t}})\neq S(g_{\mathcal{N}_{j}}^{\mathbf{t}})]\,p(g)\\
&\times\left(\frac{n^{(1)}}{n}+\frac{n-n^{(1)}}{n}C(g_{\mathcal{N}_{j}}^{\mathbf{t}})\right)^{N-2},
\end{aligned}$$
where $\mathcal{G}_{n}$ is the set of all graphs with $n$ units, and
$C(g_{\mathcal{N}_{j}}^{\mathbf{t}})\leq 1$ for all $g$ and $\mathbf{t}$.
A proof is available in the following section. The first term on the right-hand
side of the inequality is $\sqrt{2}$ times the standard deviation of the
baseline responses. One summation is over all possible graphs with $n$ units, and the
other summation is over possible treatment assignments. The expression inside
the summation is the product of three terms. First, the weighted Hamming
distance between a graph and the target graph we are trying to match. Second,
the probability of observing that graph. Third, an upper bound on the
probability that unit $j$ is among the minimizers of the weighted Hamming
distance. Note that
$\mathbf{w}^{T}\mathbb{I}[S(h_{\mathcal{N}_{i}}^{\mathbf{t}})\neq
S(g_{\mathcal{N}_{j}}^{\mathbf{t}})]p(g)$ is bounded for fixed $n$ for all $g$
and $\mathbf{t}$. This implies that the bound converges to $\sqrt{2}\sigma$ as
$N\rightarrow\infty$, as long as the size of neighborhood graphs is held
fixed, because perfect matching is possible with large amounts of data in this
regime.
#### B.1 Proof of Proposition 2
We briefly review some notation and assumptions to be used in this proof. For
the purposes of theory, we study a simplified setting, in which we have to
AME-match a unit $i$ to one unit in a set of candidate units of size $N$ such
that: a) all the candidate units belong to disconnected graphs, which we refer
to as candidate graphs; b) within each candidate graph there is only one
pre-determined candidate unit; c) candidate units have neighborhood graphs
denoted by $G_{\mathcal{N}_{j}}$; and d) all the candidate graphs are drawn independently
from the same distribution over graphs:
$\Pr(G_{\mathcal{N}_{1}}=g_{1},\dots,G_{\mathcal{N}_{N}}=g_{N})=\prod_{i=1}^{N}p(g_{i})$.
The support of $p$ will be $\mathcal{G}_{n}$: the set of all graphs with
exactly $n$ units. We use $g_{\mathcal{N}_{i}}$ to denote the subgraph induced
over $g$ by the units in the set of neighbors of unit $i$,
$\mathcal{N}_{i}\subseteq V(g)$, i.e., $g_{\mathcal{N}_{i}}$ is the graph
consisting only of the vertices that share an edge with $i$, in $g$, and of
the edges in $g$ that are between these vertices. The ego $i$ is not included
in $g_{\mathcal{N}_{i}}$.
Assigned treatments are denoted by $\mathbf{T}$, where
$\mathbf{T}\in\{0,1\}^{n}$, but in this setting treatment assignment is
assumed to be independent within the $N$ candidate graphs. Formally, the
assumption we make is that
$\Pr(\mathbf{T}_{1}=\mathbf{t}_{1},\dots,\mathbf{T}_{N}=\mathbf{t}_{N})=\prod_{i=1}^{N}\binom{n}{n^{(1)}}^{-1}$,
i.e., $n^{(1)}$ units are always treated uniformly at random within each of
the $N$ candidate graphs.
The direct treatment effect for any unit $i$ is given by $\tau_{i}$. We use
$\mathbb{I}[S(g)\neq S(h)]$ to indicate the Hamming distance between subgraph
counts of graphs $g$ and $h$. This means that $\mathbb{I}[S(g)\neq S(h)]$ is a vector of size $|\mathcal{G}_{n}|$ whose $\ell^{th}$ entry is 1 if $g$ and $h$ do not have the same number of occurrences of graph $g_{\ell}$ among their subgraphs. Note that this distance is coloring sensitive: two subgraphs
that are isomorphic in shape but not labels will belong to different entries
in this distance. The matched group of a treated unit $i$, denoted
$\mathtt{MG}_{i}$, is the set of all control units that match with $i$. In our
setting $j\in\mathtt{MG}_{i}$ if it solves the AME problem, that is
$j\in\operatorname*{arg\,min}\limits_{\begin{subarray}{c}k=1\dots,n\\\
T_{k}=0\end{subarray}}\mathbf{w}^{T}\mathbb{I}[S(G_{\mathcal{N}_{k}}^{\mathbf{T}})\neq S(h_{\mathcal{N}_{i}}^{\mathbf{t}})]$. Finally, we assume that both the graph
for the unit we want to match and the treatment assignment for that unit’s
graph are fixed: $\mathbf{t}_{i}$ is the treatment assignment in the graph of
$i$, and $h_{\mathcal{N}_{i}}^{\mathbf{t}_{i}}$ is the neighborhood graph of
$i$, where $h$ denotes unit $i$’s graph. All other notation is as in the main
paper.
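As a minimal illustrative sketch of this matching objective (the count vectors and weights below are hypothetical placeholders, not our actual implementation), the weighted Hamming distance $\mathbf{w}^{T}\mathbb{I}[S(g)\neq S(h)]$ and the resulting AME match can be computed as follows:

```python
import numpy as np

def weighted_hamming(w, s_g, s_h):
    # w^T I[S(g) != S(h)]: sum the weights of the coordinates where the
    # subgraph-count vectors disagree.
    return float(np.dot(w, (s_g != s_h).astype(float)))

# Hypothetical count vectors indexed over a small catalogue of labelled
# subgraphs (one coordinate per element of G_n) and matching weights w.
w = np.array([2.0, 1.0, 0.5, 0.25])
s_i = np.array([1, 0, 3, 2])                 # S(h) for the treated unit i
controls = {"j": np.array([1, 0, 3, 1]),     # S(G_k) for candidate controls
            "k": np.array([0, 2, 3, 2])}

# AME match: the control unit minimising the weighted Hamming distance.
match = min(controls, key=lambda u: weighted_hamming(w, s_i, controls[u]))
print(match)  # "j" here, with distance 0.25
```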
###### Proof.
We start by upper-bounding our quantity of interest as follows:
$\displaystyle\mathbb{E}[|Y_{i}-Y_{j}-\tau_{i}|\big{|}j\in\mathtt{MG}_{i}]$
$\displaystyle=\mathbb{E}[|Y_{i}(1,\mathbf{t}_{i-i})-Y_{j}(0,\mathbf{T}_{j-j})-\tau_{i}|\big{|}j\in\mathtt{MG}_{i}]$
$\displaystyle=\mathbb{E}[|\tau_{i}+f(h_{\mathcal{N}_{i}}^{\mathbf{t}_{i}})+\epsilon_{i}-f(G_{\mathcal{N}_{j}}^{\mathbf{T}_{j}})-\epsilon_{j}-\tau_{i}|\big{|}j\in\mathtt{MG}_{i}]$
$\displaystyle\leq\mathbb{E}[|f(h_{\mathcal{N}_{i}}^{\mathbf{t}_{i}})-f(G_{\mathcal{N}_{j}}^{\mathbf{T}_{j}})|\big{|}j\in\mathtt{MG}_{i}]$
$\displaystyle\quad+\mathbb{E}[|\epsilon_{i}-\epsilon_{j}|\big{|}j\in\mathtt{MG}_{i}]$
$\displaystyle\leq
K\mathbb{E}[\mathbf{w}^{T}\mathbb{I}[S(h_{\mathcal{N}_{i}}^{\mathbf{t}_{i}})\neq
S(G_{\mathcal{N}_{j}}^{\mathbf{T}_{j}})]\big{|}j\in\mathtt{MG}_{i}]$
$\displaystyle\quad+\mathbb{E}[|\epsilon_{i}-\epsilon_{j}|\big{|}j\in\mathtt{MG}_{i}],$
(10)
where the notation $\mathbf{T}_{j-j}$ denotes the treatment indicator for
candidate graph $j$ for all units except $j$. The first equality follows from
A0 since the event $j\in\mathtt{MG}_{i}$ implies that $T_{j}=0$, as only
control units are allowed in the matched groups. The second equality follows
from Proposition 1. The first inequality is an application of the triangle
inequality. The last line follows from the condition on the interference
functions. Consider the second term. We can use the Cauchy-Schwarz inequality
to construct a simple upper bound on it:
$\displaystyle\mathbb{E}[|\epsilon_{i}-\epsilon_{j}|\big{|}j\in\mathtt{MG}_{i}]=\mathbb{E}[|\epsilon_{i}-\epsilon_{j}|]$
$\displaystyle\leq\sqrt{\mathbb{E}[(\epsilon_{i}-\epsilon_{j})^{2}]}$
$\displaystyle=\sqrt{\mathbb{E}[\epsilon_{i}^{2}]+\mathbb{E}[\epsilon_{j}^{2}]-2\mathbb{E}[\epsilon_{i}]\mathbb{E}[\epsilon_{j}]}=\sqrt{2}\sigma$
where the last equality follows from the fact that the $\epsilon_{i}$ have mean
0 and are independent, with $Var(\epsilon_{i})=\sigma^{2}$ for all $i$.
Consider now the term
$\mathbb{E}[\mathbf{w}^{T}\mathbb{I}[S(h_{\mathcal{N}_{i}}^{\mathbf{t}_{i}})\neq
S(G_{\mathcal{N}_{j}}^{\mathbf{T}_{j}})]|j\in\mathtt{MG}_{i}]$. To upper-bound
this, we write it out as follows using the definition of expectation:
$\displaystyle\mathbb{E}[\mathbf{w}^{T}\mathbb{I}[S(h_{\mathcal{N}_{i}}^{\mathbf{t}_{i}})\neq
S(G_{\mathcal{N}_{j}}^{\mathbf{T}_{j}})]|j\in\mathtt{MG}_{i}]$
$\displaystyle=\sum_{g\in\mathcal{G}_{n}}\sum_{\mathbf{t}\in\mathcal{T},t_{j}=0}\mathbf{w}^{T}\mathbb{I}[S(h_{\mathcal{N}_{i}}^{\mathbf{t}_{i}})\neq
S(g_{\mathcal{N}_{j}}^{\mathbf{t}})]$
$\displaystyle\qquad\qquad\times\Pr(G_{\mathcal{N}_{j}}^{\mathbf{T}_{j}}=g_{\mathcal{N}_{j}}^{\mathbf{t}},\mathbf{T}_{j}=\mathbf{t}|j\in\mathtt{MG}_{i}).$
We want to find an upper bound on
$\Pr(G_{\mathcal{N}_{j}}^{\mathbf{T}_{j}}=g_{\mathcal{N}_{j}}^{\mathbf{t}},\mathbf{T}_{j}=\mathbf{t}|j\in\mathtt{MG}_{i})$.
We start by writing this quantity out as a product of two probabilities:
$\displaystyle\Pr(G_{\mathcal{N}_{j}}^{\mathbf{T}_{j}}=g_{\mathcal{N}_{j}}^{\mathbf{t}},\mathbf{T}_{j}=\mathbf{t}|j\in\mathtt{MG}_{i})$
$\displaystyle=\Pr(G_{\mathcal{N}_{j}}^{\mathbf{T}_{j}}=g_{\mathcal{N}_{j}}^{\mathbf{t}}|j\in\mathtt{MG}_{i},\mathbf{T}_{j}=\mathbf{t})$
$\displaystyle\times\Pr(\mathbf{T}_{j}=\mathbf{t}|j\in\mathtt{MG}_{i})$
$\displaystyle=\Pr(G_{\mathcal{N}_{j}}^{\mathbf{T}_{j}}=g_{\mathcal{N}_{j}}^{\mathbf{t}}|j\in\mathtt{MG}_{i},\mathbf{T}_{j}=\mathbf{t})\binom{n-1}{n^{(1)}}^{-1}.$
Note that
$\Pr(\mathbf{T}_{j}=\mathbf{t}|j\in\mathtt{MG}_{i})=\binom{n-1}{n^{(1)}}^{-1}$
because treatment is uniformly randomized with $n^{(1)}$ units always treated
in each candidate graph, but $T_{j}=0$ conditionally on $j\in\mathtt{MG}_{i}$.
We use Bayes’ rule to write out the first term in the final product as
$\Pr(G_{\mathcal{N}_{j}}^{\mathbf{T}}=g_{\mathcal{N}_{j}}^{\mathbf{t}}|j\in\mathtt{MG}_{i},\mathbf{T}_{j}=\mathbf{t})=\frac{\Pr(j\in\mathtt{MG}_{i}|\mathbf{T}_{j}=\mathbf{t},G_{\mathcal{N}_{j}}^{\mathbf{T}_{j}}=g_{\mathcal{N}_{j}}^{\mathbf{t}})\Pr(G_{\mathcal{N}_{j}}^{\mathbf{T}_{j}}=g_{\mathcal{N}_{j}}^{\mathbf{t}}|\mathbf{T}_{j}=\mathbf{t})}{\Pr(j\in\mathtt{MG}_{i}|\mathbf{T}_{j}=\mathbf{t})}$.
By assumption, if all neighborhood graphs are empty, all units are used for all matched groups, and we are restricting ourselves to assignments in which $T_{j}=0$; therefore, $\Pr(j\in\mathtt{MG}_{i}|\mathbf{T}_{j}=\mathbf{t})=1$.
Second, by assumption
$\Pr(G_{\mathcal{N}_{j}}^{\mathbf{T}_{j}}=g_{\mathcal{N}_{j}}^{\mathbf{t}}|\mathbf{T}_{j}=\mathbf{t})=p(g)$.
This is because treatment assignment is independent of the graph. We are left with finding an expression for the likelihood, which can be written as:
$\displaystyle\Pr(j\in\mathtt{MG}_{i}|\mathbf{T}_{j}=\mathbf{t},G_{\mathcal{N}_{j}}^{\mathbf{T}_{j}}=g_{\mathcal{N}_{j}}^{\mathbf{t}})$
$\displaystyle=\Pr(j\in\operatorname*{arg\,min}_{\begin{subarray}{c}k=1,\dots,N,\\\
k\neq
i\end{subarray}}\mathbf{w}^{T}\mathbb{I}[S(h_{\mathcal{N}_{i}}^{\mathbf{t}_{i}})\neq
S(G_{\mathcal{N}_{k}}^{\mathbf{T}_{k}})]$
$\displaystyle\qquad\qquad|\mathbf{T}_{j}=\mathbf{t},G_{\mathcal{N}_{j}}^{\mathbf{T}}=g_{\mathcal{N}_{j}}^{\mathbf{t}})$
$\displaystyle=\prod_{\begin{subarray}{c}k=1\\\ k\neq
i,j\end{subarray}}^{N}\bigg{[}\Pr(\mathbf{w}^{T}\mathbb{I}[S(h_{\mathcal{N}_{i}}^{\mathbf{t}_{i}})\neq
S(G_{\mathcal{N}_{k}}^{\mathbf{T}_{k}})]\geq$
$\displaystyle\qquad\qquad\>\quad\mathbf{w}^{T}\mathbb{I}[S(h_{\mathcal{N}_{i}}^{\mathbf{t}_{i}})\neq
S(g_{\mathcal{N}_{j}}^{\mathbf{t}})]|T_{k}=0)\Pr(T_{k}=0)$
$\displaystyle\qquad\quad+\Pr(T_{k}=1)\bigg{]}$
$\displaystyle=\prod_{\begin{subarray}{c}k=1\\\ k\neq
i,j\end{subarray}}^{N}\bigg{[}\Pr\big{(}\mathbf{w}^{T}\mathbb{I}[S(h_{\mathcal{N}_{i}}^{\mathbf{t}_{i}})\neq
S(G_{\mathcal{N}_{k}}^{\mathbf{T}_{k}})]$ $\displaystyle\qquad\quad\ \
\,\geq\mathbf{w}^{T}\mathbb{I}[S(h_{\mathcal{N}_{i}}^{\mathbf{t}_{i}})\neq
S(g_{\mathcal{N}_{j}}^{\mathbf{t}})]|T_{k}=0\big{)}\frac{n-n^{(1)}}{n}$
$\displaystyle\qquad\quad+\frac{n^{(1)}}{n}\biggl{]}.$
The second equality follows because $k$ can never be in the matched group of unit $i$ if $T_{k}=1$, and, if $T_{k}=0$, then $k$’s weighted Hamming distance between neighborhood subgraph counts must be at least as large as $j$’s for $j$ to remain a minimizer. The probability factorizes into a product because the candidate graphs are independent. For an arbitrary unit, $k$, we define the following
compact notation for the probability that $k$’s weighted Hamming distance from
$i$ is larger than the weighted Hamming distance from $j$ to $i$:
$\displaystyle\Pr(\mathbf{w}^{T}\mathbb{I}[S(h_{\mathcal{N}_{i}}^{\mathbf{t}_{i}})\neq
S(G_{\mathcal{N}_{k}}^{\mathbf{T}_{k}})]$
$\displaystyle\qquad\geq\mathbf{w}^{T}\mathbb{I}[S(h_{\mathcal{N}_{i}}^{\mathbf{t}_{i}})\neq
S(g_{\mathcal{N}_{j}}^{\mathbf{t}})]|T_{k}=0)$
$\displaystyle=:C_{k}(g_{\mathcal{N}_{j}}^{\mathbf{t}})\leq 1.$
Note that the last inequality follows from the fact that the expression above
is a probability. Since the graphs and treatment assignments $G_{k}$ and $\mathbf{T}_{k}$ are the only random variables in the probability denoted by $C_{k}(g_{\mathcal{N}_{j}}^{\mathbf{t}})$, and since they are all independent and identically distributed, we can say that
$C_{1}(g_{\mathcal{N}_{j}}^{\mathbf{t}})=C_{2}(g_{\mathcal{N}_{j}}^{\mathbf{t}})=\dots=C_{N}(g_{\mathcal{N}_{j}}^{\mathbf{t}})=C(g_{\mathcal{N}_{j}}^{\mathbf{t}})$.
Because of this we have:
$\displaystyle\Pr(j\in\mathtt{MG}_{i}|G_{\mathcal{N}_{j}}^{\mathbf{T}_{j}}=g_{\mathcal{N}_{j}}^{\mathbf{t}},\mathbf{T}_{j}=\mathbf{t})$
$\displaystyle=\left(\frac{n^{(1)}}{n}+\frac{n-n^{(1)}}{n}C(g_{\mathcal{N}_{j}}^{\mathbf{t}})\right)^{N-2}.$
Putting all the elements we have together we get the expression for the first
term in the bound:
$\displaystyle\mathbb{E}[\mathbf{w}^{T}\mathbb{I}[S(h_{\mathcal{N}_{i}}^{\mathbf{t}_{i}})\neq
S(G_{\mathcal{N}_{j}}^{\mathbf{T}_{j}})]|j\in\mathtt{MG}_{i}]$
$\displaystyle=\sum_{g\in\mathcal{G}_{n}}\sum_{\mathbf{t}\in\mathcal{T}:t_{j}=0}\mathbf{w}^{T}\mathbb{I}[S(g_{\mathcal{N}_{j}}^{\mathbf{t}})\neq
S(h_{\mathcal{N}_{i}}^{\mathbf{t}_{i}})]$
$\displaystyle\qquad\qquad\times\Pr(G_{\mathcal{N}_{j}}^{\mathbf{T}_{j}}=g_{\mathcal{N}_{j}}^{\mathbf{t}},\mathbf{T}_{j}=\mathbf{t}|j\in\mathtt{MG}_{i})$
$\displaystyle=\sum_{g\in\mathcal{G}_{n}}\sum_{\mathbf{t}\in\mathcal{T}:t_{j}=0}\mathbf{w}^{T}\mathbb{I}[S(g_{\mathcal{N}_{j}}^{\mathbf{t}})\neq
S(h_{\mathcal{N}_{i}}^{\mathbf{t}_{i}})]$
$\displaystyle\times\binom{n-1}{n^{(1)}}^{-1}p(g)\left(\frac{n^{(1)}}{n}+\frac{n-n^{(1)}}{n}C(g_{\mathcal{N}_{j}}^{\mathbf{t}})\right)^{N-2}.$
∎
#### B.2 Asymptotic Behavior
Here we expand on the asymptotic consequences of Proposition 2. Note first that, by assumption, $\|\mathbf{w}\|_{1}=M_{n}$, and therefore $\mathbf{w}^{T}\mathbb{I}[S(g)\neq S(h)]\leq M_{n}$ for any graphs
$g,h\in\mathcal{G}_{n}$. That is to say, the weighted Hamming distance between
any two graphs with $n$ units will be upper-bounded by the sum of the weights.
Recall also that $C(g_{\mathcal{N}_{j}}^{\mathbf{t}})\leq 1$ for all $g$ and
$\mathbf{t}$ as this quantity is a probability, and let
$C_{max}=\max\limits_{\begin{subarray}{c}g\in\mathcal{G}_{n}\\\
\mathbf{t}\in\mathcal{T}\end{subarray}}C(g_{\mathcal{N}_{j}}^{\mathbf{t}})$.
We can combine all these bounds with the upper bound in Proposition 2 to
write:
$\displaystyle\mathbb{E}[\mathbf{w}^{T}\mathbb{I}[S(h_{\mathcal{N}_{i}}^{\mathbf{t}_{i}})\neq
S(G_{\mathcal{N}_{j}}^{\mathbf{T}_{j}})]|j\in\mathtt{MG}_{i}]$
$\displaystyle=\sum_{g\in\mathcal{G}_{n}}\sum_{\mathbf{t}\in\mathcal{T},t_{j}=0}\mathbf{w}^{T}\mathbb{I}[S(g_{\mathcal{N}_{j}}^{\mathbf{t}})\neq
S(h_{\mathcal{N}_{i}}^{\mathbf{t}_{i}})]$
$\displaystyle\times\binom{n-1}{n^{(1)}}^{-1}p(g)\left(\frac{n^{(1)}}{n}+\frac{n-n^{(1)}}{n}C(g_{\mathcal{N}_{j}}^{\mathbf{t}})\right)^{N-2}$
$\displaystyle\leq
M_{n}\left(\frac{n^{(1)}}{n}+\frac{n-n^{(1)}}{n}C_{max}\right)^{N-2}$
$\displaystyle\times\sum_{g\in\mathcal{G}_{n}}\sum_{\mathbf{t}\in\mathcal{T},t_{j}=0}\binom{n-1}{n^{(1)}}^{-1}p(g)$
$\displaystyle=M_{n}\left(\frac{n^{(1)}}{n}+\frac{n-n^{(1)}}{n}C_{max}\right)^{N-2}.$
The first equality follows from Proposition 2, the first inequality from the
bounds previously discussed, and the second equality follows from the fact
that the sum in the second to last line is a sum of probability distributions
over their entire domain, and therefore is equal to 1. Under the condition
that $n$, the number of units in each unit’s candidate graph, stays fixed, and
that $C_{max}<1$, then, as $N\rightarrow\infty$, we have
$M_{n}\left(\frac{n^{(1)}}{n}+\frac{n-n^{(1)}}{n}C_{max}\right)^{N-2}\rightarrow
0$, because the quantity inside the parentheses is always less than 1. This
makes sense, because asymptotically, matches can be made exactly; i.e., units
matched in the way described in our theoretical setting have isomorphic
neighborhood subgraphs asymptotically. A further consequence is that the bound in Proposition 2 converges to $\sqrt{2}\sigma$ asymptotically in $N$. This quantity is proportional to the standard deviation of the baseline errors and can be lowered by matching the same unit with multiple others. As noted before, for this argument to apply,
candidate graphs must remain of fixed size $n$ as they grow in number, so that
the quantity $M_{n}$ remains constant: this setting is common in cluster-
randomized trials where a growing number of units is clustered into fixed-size
clusters of size at most $n$. The asymptotic behavior of our proposed
methodology is less clear in settings in which the analyst cannot perform such
clustering before randomization and $n$ is allowed to grow with $N$, and is an
avenue for potential future research.
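A quick numerical sketch of this geometric decay (the values of $M_{n}$, $n$, $n^{(1)}$ and $C_{max}$ below are illustrative, not taken from our experiments):

```python
# Illustrative constants: M_n the total weight, n the candidate-graph size,
# n1 the number of treated units per graph, C_max < 1 the worst-case probability.
M_n, n, n1, C_max = 10.0, 20, 10, 0.9

base = n1 / n + (n - n1) / n * C_max   # strictly less than 1 when C_max < 1
for N in (10, 100, 1000):
    print(N, M_n * base ** (N - 2))    # decays geometrically to 0 in N
```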
#### B.3 Heteroskedasticity in The Baseline Effects
In a network setting such as the one we study, it is possible that baseline
effects of units do not have equal variance. Here we discuss how this setting
affects our result in Proposition 2. We maintain that $\mathbb{E}[\epsilon_{i}]=\alpha$ for all $i$, but we now assume that $Var(\epsilon_{i})=\sigma_{i}^{2}$ and that $Cov(\epsilon_{i},\epsilon_{j})=0$ for $i\neq j$. Starting from the upper bound on the estimation error given in (10), we can see that the baseline effects only enter through the term $\mathbb{E}[|\epsilon_{i}-\epsilon_{j}||j\in\mathtt{MG}_{i}]$. We therefore focus our attention on this term, as the rest of the bound does not change when the variance of the baseline effects changes. Note first that
$\mathbb{E}[|\epsilon_{i}-\epsilon_{j}||j\in\mathtt{MG}_{i}]=\mathbb{E}[|\epsilon_{i}-\epsilon_{j}|]$
as the event $j\in\mathtt{MG}_{i}$ is independent of the baseline effects. We
can now apply the Cauchy-Schwarz inequality, in the same way as we do in the
proof of Proposition 2, to obtain:
$\displaystyle\mathbb{E}[|\epsilon_{i}-\epsilon_{j}|]$
$\displaystyle\leq\sqrt{\mathbb{E}[(\epsilon_{i}-\epsilon_{j})^{2}]}$
$\displaystyle=\sqrt{\mathbb{E}[\epsilon_{i}^{2}]+\mathbb{E}[\epsilon_{j}^{2}]-2\mathbb{E}[\epsilon_{i}]\mathbb{E}[\epsilon_{j}]}$
$\displaystyle=\sqrt{\sigma^{2}_{i}+\alpha^{2}+\sigma_{j}^{2}+\alpha^{2}-2\alpha^{2}}$
$\displaystyle=\sqrt{\sigma_{i}^{2}+\sigma_{j}^{2}}.$
Clearly, this is not too different from the homoskedastic setting we study in
the proposition: as long as neither of the unit variances is too large for
inference, results in the heteroskedastic setting will suffer from similar
bias as they would under independent baseline effects with equal variance.
Simulations, shown in Section J, also support the above rationale of
comparable performance in the heteroskedastic case and demonstrate that FLAME-
Networks still outperforms competing methods.
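The bound $\sqrt{\sigma_{i}^{2}+\sigma_{j}^{2}}$ is easy to check by simulation; the sketch below uses Gaussian baseline errors with a common mean $\alpha$ and arbitrary illustrative variances:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, sigma_i, sigma_j = 0.5, 1.0, 2.0        # illustrative values

eps_i = rng.normal(alpha, sigma_i, size=1_000_000)
eps_j = rng.normal(alpha, sigma_j, size=1_000_000)

print(np.mean(np.abs(eps_i - eps_j)))          # Monte Carlo E|eps_i - eps_j|
print(np.sqrt(sigma_i**2 + sigma_j**2))        # Cauchy-Schwarz upper bound
```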
### C Derivation of SANIA MIVLUE Used in Simulations
Theorem 6.2 of Sussman and Airoldi (2017):
Suppose potential outcomes satisfy SANIA and that the prior on the parameters
(baseline outcome and direct treatment effect) has no correlation between
units. If unbiased estimators exist, the MIVLUE weights are:
$w_{i}(\mathbf{z})=\frac{C_{i,d_{i}^{\mathbf{z}}}}{\sum_{d=0}^{n-1}C_{i,d}}\cdot\frac{2z_{i}-1}{n\mathrm{P}(\mathbf{z}_{i}^{\text{obs}}=z_{i},d_{i}^{\mathbf{z}^{\text{obs}}}=d_{i}^{\mathbf{z}})}$
where
$\displaystyle C_{i,d}$ $\displaystyle=$
$\displaystyle\left(\sum_{\mathbf{z}}\mathrm{P}(\mathbf{z})\mathbf{1}\\{d_{i}^{\mathbf{z}}=d\\}\cdot\frac{\Sigma(\mathbf{z})_{ii}}{\mathrm{P}(z_{i}^{\text{obs}}=z_{i},d_{i}^{\mathbf{z}^{\text{obs}}}=d)^{2}}\right)^{-1}$
and $C_{i,d}$ is defined to be 0 if the probability in its denominator is 0.
#### C.1 Setup and Notation
We assume that there is a constant probability $p$ of each unit being treated
and that units are independently treated. Let unit $i$ have $d_{i}$ neighbors
and a treated degree of $d_{i}^{\mathbf{z}}$. In our setting,
$\Sigma(\mathbf{z})_{ii}=\Sigma_{\alpha,ii}+z_{i}\Sigma_{\beta,ii}$ where
$\Sigma_{\alpha}$ and $\Sigma_{\beta}$ are the covariance matrices on priors
placed on the baseline outcome and the direct treatment effect, respectively.
Additionally in our setting, their diagonals are constant and so we let
$\sigma^{2}_{\alpha}:=\Sigma_{\alpha,ii}$ and
$\sigma^{2}_{\beta}:=\Sigma_{\beta,ii}$.
#### C.2 Find
$\mathrm{P}(z_{i}^{\text{obs}}=z_{i},d_{i}^{\mathbf{z}^{\text{obs}}}=d_{i}^{\mathbf{z}})$
By the setup, the constituent probabilities are independent and the
probability of treatment is constant across units and so:
$\mathrm{P}(z_{i}^{\text{obs}}=z_{i},d_{i}^{\mathbf{z}^{\text{obs}}}=d_{i}^{\mathbf{z}})=[z_{i}p+(1-z_{i})(1-p)]{d_{i}\choose
d_{i}^{\mathbf{z}}}p^{d_{i}^{\mathbf{z}}}(1-p)^{d_{i}-d_{i}^{\mathbf{z}}}$
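A direct transcription of this probability (assuming, as in the setup, independent Bernoulli($p$) treatment of each unit):

```python
from math import comb

def prob_z_d(z_i, d_treated, d_i, p):
    # P(z_i^obs = z_i, d_i^obs = d_treated): the unit's own assignment
    # probability times a Binomial(d_i, p) term for its d_i neighbors.
    own = p if z_i == 1 else 1 - p
    return own * comb(d_i, d_treated) * p**d_treated * (1 - p)**(d_i - d_treated)

print(prob_z_d(1, 2, 5, 0.3))  # treated unit with 2 of its 5 neighbors treated
```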
#### C.3 Find $C_{i,d}$
Below, we will consider $\mathbf{z}_{\text{neighbor}(i)}$, the treatment
assignment of the neighbors of $i$ (excluding $i$), $z_{i}$, the treatment
assignment of unit $i$, and $\mathbf{z}_{\text{rest}(i)}$, the treatment
assignment of the remaining units.
$\displaystyle C_{i,d}$
$\displaystyle=\left(\sum_{\mathbf{z}}\frac{\mathrm{P}(\mathbf{z})\mathbf{1}\\{d_{i}^{\mathbf{z}}=d\\}\Sigma(\mathbf{z})_{ii}}{\mathrm{P}(z_{i}^{\text{obs}}=z_{i},d_{i}^{\mathbf{z}^{\text{obs}}}=d)^{2}}\right)^{-1}$
$\displaystyle=\left(\sum_{\mathbf{z}:d_{i}^{\mathbf{z}}=d}\frac{\mathrm{P}(\mathbf{z}_{\text{neighbor}(i)})\mathrm{P}(z_{i})\mathrm{P}(\mathbf{z}_{\text{rest}(i)})\Sigma(\mathbf{z})_{ii}}{\mathrm{P}(z_{i}^{\text{obs}}=z_{i},d_{i}^{\mathbf{z}^{\text{obs}}}=d)^{2}}\right)^{-1}$
$\displaystyle=\left(\frac{\mathrm{P}(\mathbf{z}_{\text{neighbor}(i)})}{\mathrm{P}(z_{i}^{\text{obs}}=z_{i},d_{i}^{\mathbf{z}^{\text{obs}}}=d)^{2}}\right)^{-1}$
$\displaystyle\quad\times\left(\sum_{\mathbf{z}:d_{i}^{\mathbf{z}}=d}\Sigma(\mathbf{z})_{ii}\mathrm{P}(z_{i})\mathrm{P}(\mathbf{z}_{\text{rest}(i)})\right)^{-1}$
$\displaystyle=\left(\frac{\mathrm{P}(\mathbf{z}_{\text{neighbor}(i)})}{\mathrm{P}(z_{i}^{\text{obs}}=z_{i},d_{i}^{\mathbf{z}^{\text{obs}}}=d)^{2}}\right)^{-1}$
$\displaystyle\quad\times\left(\sum_{\mathbf{z}:d_{i}^{\mathbf{z}}=d}(\sigma^{2}_{\alpha}+z_{i}\sigma^{2}_{\beta})\mathrm{P}(z_{i})\mathrm{P}(\mathbf{z}_{\text{rest}(i)})\right)^{-1}$
$\displaystyle=\left(\frac{\mathrm{P}(\mathbf{z}_{\text{neighbor}(i)})}{\mathrm{P}(z_{i}^{\text{obs}}=z_{i},d_{i}^{\mathbf{z}^{\text{obs}}}=d)^{2}}\right)^{-1}$
$\displaystyle\quad\times\left(\sigma^{2}_{\alpha}+\sum_{\mathbf{z}:d_{i}^{\mathbf{z}}=d}(z_{i}\sigma^{2}_{\beta})\mathrm{P}(z_{i})\mathrm{P}(\mathbf{z}_{\text{rest}(i)})\right)^{-1}$
$\displaystyle=\left(\frac{\mathrm{P}(\mathbf{z}_{\text{neighbor}(i)})}{\mathrm{P}(z_{i}^{\text{obs}}=z_{i},d_{i}^{\mathbf{z}^{\text{obs}}}=d)^{2}}\right)^{-1}$
$\displaystyle\quad\times\left(\sigma^{2}_{\alpha}+\sum_{\begin{subarray}{c}\mathbf{z}:d_{i}^{\mathbf{z}}=d\\\
z_{i}=1\end{subarray}}(\sigma^{2}_{\beta})p\mathrm{P}(\mathbf{z}_{\text{rest}(i)})\right)^{-1}$
$\displaystyle=\left(\frac{\mathrm{P}(\mathbf{z}_{\text{neighbor}(i)})\left(\sigma^{2}_{\alpha}+\sigma^{2}_{\beta}\right)}{\mathrm{P}(z_{i}^{\text{obs}}=z_{i},d_{i}^{\mathbf{z}^{\text{obs}}}=d)^{2}}\right)^{-1}$
$\displaystyle=\frac{\left([z_{i}p+(1-z_{i})(1-p)]{d_{i}\choose
d}p^{d}(1-p)^{d_{i}-d}\right)^{2}}{(\sigma^{2}_{\alpha}+\sigma^{2}_{\beta})p^{d}(1-p)^{d_{i}-d}}$
$\displaystyle=\frac{[z_{i}p+(1-z_{i})(1-p)]^{2}{d_{i}\choose
d}^{2}p^{d}(1-p)^{d_{i}-d}}{\sigma^{2}_{\alpha}+\sigma^{2}_{\beta}}$
#### C.4 Find $w_{i}$
Plugging in the expressions we’ve found:
$\displaystyle w_{i}(\mathbf{z})=$
$\displaystyle\frac{C_{i,d_{i}^{\mathbf{z}}}}{\sum_{d=0}^{n-1}C_{i,d}}\cdot\frac{2z_{i}-1}{n\mathrm{P}(\mathbf{z}_{i}^{\text{obs}}=z_{i},d_{i}^{\mathbf{z}^{\text{obs}}}=d_{i}^{\mathbf{z}})}$
$\displaystyle=\frac{\frac{[z_{i}p+(1-z_{i})(1-p)]^{2}{d_{i}\choose
d_{i}^{\mathbf{z}}}^{2}p^{d_{i}^{\mathbf{z}}}(1-p)^{d_{i}-d_{i}^{\mathbf{z}}}}{(\sigma^{2}_{\alpha}+\sigma^{2}_{\beta})}}{\frac{\sum_{d=0}^{n-1}{d_{i}\choose
d}^{2}p^{d}(1-p)^{d_{i}-d}(z_{i}p+(1-z_{i})(1-p))^{2}}{(\sigma^{2}_{\alpha}+\sigma^{2}_{\beta})}}$
$\displaystyle\quad\times\frac{2z_{i}-1}{n[z_{i}p+(1-z_{i})(1-p)]{d_{i}\choose
d_{i}^{\mathbf{z}}}p^{d_{i}^{\mathbf{z}}}(1-p)^{d_{i}-d_{i}^{\mathbf{z}}}}$
$\displaystyle=\frac{(z_{i}p+(1-z_{i})(1-p))(2z_{i}-1)}{n\sum_{d=0}^{n-1}{d_{i}\choose
d}^{2}p^{d}(1-p)^{d_{i}-d}(z_{i}p+(1-z_{i})(1-p))^{2}}$
$\displaystyle=\frac{(z_{i}p+(1-z_{i})(1-p))(2z_{i}-1)}{n(z_{i}p+(1-z_{i})(1-p))^{2}}$
$\displaystyle\quad\times\frac{1}{\sum_{d=0}^{n-1}{d_{i}\choose
d}^{2}p^{d}(1-p)^{d_{i}-d}}$
Note in the first fraction that $(z_{i}p+(1-z_{i})(1-p))(2z_{i}-1)$ equals $p$
when $z_{i}=1$ and $p-1$ when $z_{i}=0$. Also, $(z_{i}p+(1-z_{i})(1-p))^{2}$
equals $p^{2}$ when $z_{i}=1$ and $(1-p)^{2}$ when $z_{i}=0$. Thus, the first
term is $1/np$ when $z_{i}=1$ and $-1/n(1-p)$ when $z_{i}=0$ and so the
overall expression for the weights is:
$w_{i}(\mathbf{z})=\frac{z_{i}/np-(1-z_{i})/(n(1-p))}{\sum_{d=0}^{n-1}{d_{i}\choose
d}^{2}p^{d}(1-p)^{d_{i}-d}}$
and the MIVLUE is given by $\sum_{i=1}^{n}w_{i}Y_{i}$.
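A minimal sketch of the resulting estimator, transcribing the closed-form weights above (the treatment vector, degrees and outcomes are illustrative placeholders):

```python
from math import comb

def mivlue_weight(z_i, d_i, n, p):
    # w_i(z): the sign/scale term divided by the normalising sum over
    # the possible treated degrees d = 0, ..., n-1.
    num = z_i / (n * p) - (1 - z_i) / (n * (1 - p))
    den = sum(comb(d_i, d) ** 2 * p ** d * (1 - p) ** (d_i - d)
              for d in range(n))
    return num / den

z, deg, y, p = [1, 0, 1, 0], [2, 3, 1, 2], [1.3, 0.2, 0.9, 0.4], 0.5
n = len(z)
print(sum(mivlue_weight(z[i], deg[i], n, p) * y[i] for i in range(n)))
```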
### D Subgraph Descriptions
Here, for graphs without self-loops, we define the interference components used in our simulations (a computational sketch follows the list):
* •
Degree: the degree of a node is the number of edges it is a part of.
* •
Triangle: A graph with three mutually connected nodes (see Figure 5).
* •
Square: A graph with 4 nodes and 4 edges, such that each node is a part of
exactly two distinct edges (see Figure 5).
* •
$k$-Star: A graph with $k+1$ nodes, the first $k$ of which are all connected
to the $(k+1)$st node and no others (see Figure 5).
* •
Vertex Betweenness: The vertex betweenness of a vertex $v$ is defined as:
$\sum_{i\neq v\neq j}\frac{\sigma_{ij}(v)}{\sigma_{ij}}$
where $\sigma_{ij}$ is the number of shortest paths between vertices $i$ and
$j$ and $\sigma_{ij}(v)$ is the number of shortest paths between $i$ and $j$
that go through $v$. We use the normalized vertex betweenness which scales the
above expression by $2/(n^{2}-3n+2)$ where $n$ is the number of nodes in the
graph.
* •
Closeness Centrality: We use the normalized closeness centrality of a vertex
$v$, defined as:
$\frac{n-1}{\sum_{i=1}^{n}d(\sigma_{vi})}$
where $d(\sigma_{vi})$ is the length of the shortest path between $v$ and $i$
and $n$ is the number of nodes in the graph.
Figure 5: Four types of treated neighborhood subgraphs the colored unit might
be a part of: triangle (top right), square (bottom right), 2-star (top left),
4-star (bottom left).
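As a computational sketch of these components (using networkx; the graph and node are illustrative, and the snippet computes them on the node's full neighborhood, whereas one would first restrict the graph to the treated neighbors):

```python
import math
import networkx as nx

G = nx.erdos_renyi_graph(50, 0.07, seed=0)       # illustrative graph
v = 0

degree = G.degree[v]
triangles = nx.triangles(G, v)                    # triangles containing v
four_stars = math.comb(degree, 4)                 # 4-stars centered at v: choose 4 neighbors
betweenness = nx.betweenness_centrality(G)[v]     # normalized by default
closeness = nx.closeness_centrality(G, v)         # (n-1) / sum of shortest-path lengths

print(degree, triangles, four_stars, betweenness, closeness)
```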
### E Data Pre-processing
We estimate the DTE using data from Banerjee et al. (2013) on social
networks for the 75 villages in India. A unit $i$ is defined to be socially
connected with unit $j$ if they are connected across at least 3 of the
following four types of social connections: (1) $i$ visits $j$’s house, (2)
$j$ visits $i$’s house, (3) $i$ and $j$ socialize and are relatives, (4) $i$
and $j$ socialize and are not related to each other. Along with subgraph
counts, we also use age and whether or not individual $i$ speaks Telugu as
additional covariates to match on. We drop units with a degree of connection greater than 15, a cut-off selected for computational efficiency.
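A sketch of this edge rule (the adjacency data here are random placeholders, and the encoding of the four relationship types is our assumption, not the actual layout of the Banerjee et al. (2013) data):

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
# A[r]: symmetric 0/1 adjacency matrix for relationship type r (4 types).
A = []
for _ in range(4):
    upper = np.triu(rng.integers(0, 2, size=(20, 20)), 1)
    A.append(upper + upper.T)

# Edge iff units are connected in at least 3 of the 4 relationship types.
combined = (sum(a > 0 for a in A) >= 3).astype(int)
G = nx.from_numpy_array(combined)

# Drop units with degree of connection greater than 15.
G.remove_nodes_from([v for v, d in dict(G.degree()).items() if d > 15])
```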
### F Matched Groups
We provide sample matched groups in Table 4. These matched groups were
produced by applying FLAME-Networks on the data discussed in Section 4. We
report all the covariates used for matching. The first group is comprised of
40-year-old units who do not speak Telugu, and have 2 or 3 triangles, 3
2-stars, and 3 edges in their treated neighborhood graph. These units (given
the binning) are matched exactly. The second group is comprised of units who
speak Telugu with no triangles, 3 2-stars, 3 edges in their treated
neighborhood graph. Note that units in this group are matched approximately,
since they are not matched exactly on age.
Units | Triangles | 2-Stars | Edges | Telugu | Age | Treated | Outcome
---|---|---|---|---|---|---|---
Matched Group 1 | | | | | | |
1 | 2 or 3 | 3 | 3 | 0 | 40 | 0 | 1
2 | 2 or 3 | 3 | 3 | 0 | 40 | 1 | 0
3 | 2 or 3 | 3 | 3 | 0 | 40 | 1 | 0
4 | 2 or 3 | 3 | 3 | 0 | 40 | 1 | 0
Matched Group 2 | | | | | | |
1 | 0 | 3 | 3 | 1 | 30 | 0 | 0
2 | 0 | 3 | 3 | 1 | 34 | 0 | 0
3 | 0 | 3 | 3 | 1 | 25 | 0 | 1
4 | 0 | 3 | 3 | 1 | 20 | 1 | 0
Table 4: Sample Match Groups. Two sample matched groups generated by FLAME-
Networks using data discussed in Section 4. The columns are the covariates
used for matching, along with treatment status and outcome. The counts of
subgraphs were coarsened into 10 bins defined by deciles of the empirical
distribution of counts. The two groups have relatively good match quality
overall. Note that the first group matches units exactly (given the binning).
However, Matched Group 2 matches units approximately, with exact matches on
subgraph counts and whether or not individuals speak Telugu, but inexact
matches on age.
### G Additional Experiment: Multiplicative Interference
We explore settings in which interference is a nonlinear function of the
interference components and their weights. Since matching is nonparametric, it
is particularly appropriate for handling non-linearities in interference
functions. Outcomes in this experiment have the form
$Y_{i}(t,\mathbf{t}_{-i})=t\tau_{i}+\alpha\prod_{p=1}^{P}m_{ip}^{\mathbb{I}[p\text{
is included}]}+\epsilon_{i}$. Table 5 shows which components are included in
the outcome function for each setting. We use a small number of parameters in
each setting, as their multiplicative interaction suffices to define a complex
interference function. The simulations are run on an $ER(50,0.05)$ graph.
Setting | $d_{i}$ | $\Delta_{i}$ | $\bigstar_{i}^{4}$ | $B_{i}$
---|---|---|---|---
1 | x | x | |
2 | x | | | x
3 | | x | | x
4 | | x | x |
Table 5: Parameters included in the interference function for each setting of this experiment. The marked components were the only ones included in each setting.
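A sketch of this outcome model (the component values $m_{ip}$, the inclusion pattern, and the constants $\alpha$ and $\tau_{i}$ are illustrative):

```python
import numpy as np

def outcome(t_i, m_i, included, tau_i=1.0, alpha=0.5, rng=None):
    # Y_i = t * tau_i + alpha * prod_p m_ip^{I[p included]} + eps_i,
    # with the indicator exponent implemented by replacing excluded
    # components with 1 before taking the product.
    rng = rng or np.random.default_rng()
    interference = np.prod(np.where(included, m_i, 1.0))
    return t_i * tau_i + alpha * interference + rng.normal(0.0, 1.0)

m_i = np.array([3.0, 2.0, 5.0, 0.5])              # e.g. d_i, triangles, 4-star, B_i
included = np.array([True, True, False, False])   # Setting 1 in Table 5
print(outcome(1, m_i, included))
```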
Results for this experiment are presented in Figure 6. FLAME-Networks
performed better than all baseline methods in this setting, both in terms of
mean absolute error and, in most cases, in terms of standard deviation over
simulations. The stratified and SANIA estimators perform especially poorly,
because they cannot handle nonlinear interference settings, unlike FLAME-
Networks.
Figure 6: Results from experiments with a multiplicative interference
function. Each violin plot represents the distribution over simulations of
absolute estimation error for each method. The panels are numbered according to the parameter setting the simulations were run with. Violin plots
are color-coded blue if the method had mean error either equal to or better
than FLAME-Networks and red otherwise. The black line inside each violin is
the median error. The dashed line is FLAME-Networks’ mean error.
### H Additional Experiment: Graph Cluster Randomization
We also explored the performance of FLAME-Networks in settings in which
treatment is randomized within multiple clusters, which have few connections
between them. More specifically, we simulate a network according to a
stochastic block model with 5 clusters. In each cluster, there are 10 units, 5
of which are treated. The probability of edges within clusters is 0.3 and
between clusters is 0.05. This results in graphs with few edges between
clusters. We then evaluate our method as previously described, simulating the
outcome with additive interference and homoskedastic errors. The results in
Figure 7 demonstrate that FLAME-Networks outperforms competing methods in this
setting as well.
Figure 7: Results from experiments with additive interference on graphs in
which treatment is randomly assigned within multiple clusters with few edges
between them. Each violin plot represents the distribution over simulations of
absolute estimation error for each method. The panels are numbered according to the parameter setting the simulations were run with. Violin plots
are color-coded blue if the method had mean error either equal to or better
than FLAME-Networks and red otherwise. The black line inside each violin is
the median error. The dashed line is FLAME-Networks’ mean error.
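The design can be reproduced in outline as follows (a sketch; the seed and the use of networkx’s stochastic block model generator are our choices):

```python
import random
import networkx as nx

sizes = [10] * 5                                            # 5 clusters of 10 units
probs = [[0.3 if r == c else 0.05 for c in range(5)] for r in range(5)]
G = nx.stochastic_block_model(sizes, probs, seed=0)

# Treat 5 of the 10 units uniformly at random within each cluster.
treated = set()
for c in range(5):
    block = [v for v in G if G.nodes[v]["block"] == c]
    treated.update(random.sample(block, 5))
```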
### I Additional Experiment: Real Network
To ensure that FLAME-Networks also performs well on real networks, we consider
an AddHealth network (Harris, 2013). Specifically, we use the addhealthc3
dataset from the amen R package, with all edges treated as undirected. There
are 32 nodes and on every simulation, 16 are randomly selected to be treated.
Outcome and additive interference are simulated as previously described.
Errors are homoskedastic. The results in Figure 8 demonstrate that FLAME-
Networks still outperforms competing methods.
Figure 8: Results from experiments on a real, AddHealth network with additive
interference. Each violin plot represents the distribution over simulations of
absolute estimation error for each method. The panels are numbered according to the parameter setting the simulations were run with. Violin plots
are color-coded blue if the method had mean error either equal to or better
than FLAME-Networks and red otherwise. The black line inside each violin is
the median error. The dashed line is FLAME-Networks’ mean error.
### J Additional Experiment: Heteroscedastic Errors
We also explored the performance of FLAME-Networks in settings in which the
variance of the outcomes is not constant. We simulate a single ER(50, 0.07)
graph and randomly treat 25 units. We consider additive interference, as in
the body of the text, and all other simulation parameters are the same, except that now each unit $i$ has baseline outcome
$\alpha_{i}\stackrel{{\scriptstyle ind}}{{\sim}}N(0,v_{i})$ with
$v_{i}\stackrel{{\scriptstyle ind}}{{\sim}}U(0,1)$. We see in Figure 9 that
FLAME-Networks outperforms competitors; the fact that it is nonparametric
allows it to handle more flexible baseline outcomes and variances.
Figure 9: Results from experiments with an additive interference function
involving heteroskedasticity in the baseline effects across units. Each violin plot represents the distribution over simulations of absolute estimation error for each method. The panels are numbered according to the parameter setting the simulations were run with. Violin plots are color-coded blue if
the method had mean error either equal to or better than FLAME-Networks and
red otherwise. The black line inside each violin is the median error. The
dashed line is FLAME-Networks’ mean error.
### K Additional Experiment: Matching on True Interference
Here, we compare FLAME-Networks to approaches that match directly on units’
interference values. FLAME-Networks already has the advantage of interpretably
matching on neighborhood graphs that can be visualized and compared, as
opposed to uninterpretable scalar values of an interference function.
Additionally, to perform well in practice, one would typically need to use
equally uninterpretable machine learning methods to estimate units’
interference values well. But even when comparing FLAME-Networks to an approach that matches on the true (typically unknown) interference values, we see that our method fares well, because it learns and matches on baseline effects as well as (approximate) interference values.
0.07) graph with 25 units randomly treated, an additive interference function,
and homoskedastic errors – as previously described – are shown in Figure 10.
### L Additional Experiment: Covariate Weight
In this section, we show that increasing the influence that covariates have on
the outcome function harms neither FLAME-Networks nor the competing methods.
As in the results shown in the main text, however, the performance of FLAME-
Networks is still superior, given that it naturally handles covariate data.
The experimental setup is the same as in Experiment 2 in the main text, and results are shown in Tables 6 and 7.
Method | Median | 25th q | 75th q
---|---|---|---
FLAME-Networks | 0.34 | 0.15 | 0.52
First Eigenvector | 0.41 | 0.24 | 0.49
All Eigenvectors | 0.36 | 0.32 | 0.74
Naive | 0.61 | 0.19 | 0.85
SANIA | 2.31 | 1.78 | 2.75
Stratified | 4.56 | 4.55 | 4.63
Table 6: Additional results from the experimental setup of Experiment 2, but
with $\beta=20$. Median and 25th and 75th percentile of absolute error over 10
simulations.
Method | Median | 25th q | 75th q
---|---|---|---
FLAME-Networks | 0.25 | 0.08 | 0.51
First Eigenvector | 0.52 | 0.32 | 0.85
All Eigenvectors | 0.53 | 0.29 | 0.83
Naive | 0.78 | 0.32 | 1.16
SANIA | 1.86 | 1.68 | 2.11
Stratified | 4.78 | 4.74 | 4.80
Table 7: Additional results from the experimental setup of Experiment 2, but
with $\beta=25$. Median and 25th and 75th percentile of absolute error over 10
simulations.
Figure 10: Results from experiments comparing FLAME-Networks to matching units
on their true interference values. Each violin plot represents the
distribution over simulations of absolute estimation error for each method. The panels are numbered according to the parameter setting the simulations were run with. Violin plots are color-coded blue if the method had
mean error either equal to or better than FLAME-Networks and red otherwise.
The black line inside each violin is the median error. The dashed line is
FLAME-Networks’ mean error.
### M Match Quality
Here, we assess the quality of matches generated by FLAME-Networks versus
matching on true interference, and the All Eigenvectors approach. To do so,
for FLAME-Networks: for each (control) treated unit, we take the minimal Frobenius norm of the difference between that unit’s neighborhood adjacency matrix and that of all the (treated) control units in its matched group (the Frobenius norm of the difference of the adjacency matrices, up to reordering of the vertices), and average across all units. This gives an average graph distance for a single simulation. For the true interference matching and All Eigenvectors approaches: for every (control) treated unit, we
take the closest (treated) control unit, find the graph distance (as above)
between their neighborhood subgraphs, and average across units. This gives an
average graph distance for a single simulation. Results from 50 simulations performed on an ER(50, 0.07) graph with additive interference and homoskedastic errors, as described previously, are shown in Figure 11; they show that FLAME-Networks produces matches between units with more similar neighborhood subgraphs than matching on the true interference or the All Eigenvectors method.
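The graph distance used here, i.e. the minimal Frobenius norm of the difference of adjacency matrices over vertex reorderings, can be brute-forced for the small neighborhood subgraphs involved (a sketch; the enumeration is exponential in the number of vertices and assumes equal-size graphs):

```python
from itertools import permutations
import numpy as np

def graph_distance(A, B):
    # min over vertex permutations pi of ||A - P B P^T||_F, where P is the
    # permutation matrix of pi; feasible only for a handful of vertices.
    k = A.shape[0]
    best = np.inf
    for perm in permutations(range(k)):
        P = np.eye(k)[list(perm)]
        best = min(best, np.linalg.norm(A - P @ B @ P.T))
    return best

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])   # path graph 0-1-2
B = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]])   # the same path, relabelled
print(graph_distance(A, B))                        # 0.0: isomorphic
```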
Figure 11: Results from experiments comparing the average distance between the
neighborhood subgraphs of the units matched by different methods. Each violin
plot represents the distribution over simulations of graph distance for each
method. The panels are numbered according to the parameter setting the
simulations were run with. Violin plots are color-coded blue if the method had
mean graph distance either equal to or better than FLAME-Networks and red
otherwise. The black line inside each violin is the median graph distance. The
dashed line is FLAME-Networks’ mean graph distance.
|
2024-09-04T02:54:56.452309 | 2020-03-02T15:44:14 | 2003.00973 | {
"authors": "Ashish Dandekar, Debabrota Basu, Stephane Bressan",
"full_text_license": null,
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"provenance": "arxiv-papers-0000.json.gz:25993",
"submitter": "Debabrota Basu",
"url": "https://arxiv.org/abs/2003.00973"
} | arxiv-papers | # Differential Privacy at Risk: Bridging Randomness and Privacy Budget
Ashish Dandekar
Département d’informatique, École Normale Supérieure
Paris, France
Debabrota Basu
Data Science and AI Division, Department of Computer Science and Engineering
Chalmers University of Technology, Göteborg, Sweden
Stéphane Bressan
School of Computing, National University of Singapore
Singapore, Singapore
###### Abstract
The calibration of noise for a privacy-preserving mechanism depends on the
sensitivity of the query and the prescribed privacy level. A data steward must
make the non-trivial choice of a privacy level that balances the requirements
of users and the monetary constraints of the business entity.
We analyse the roles of the two sources of randomness involved in the design of a privacy-preserving mechanism, namely the explicit randomness induced by the noise distribution and the implicit randomness induced by the data-generation distribution. This finer analysis enables us to provide stronger
privacy guarantees with quantifiable risks. Thus, we propose privacy at risk
that is a probabilistic calibration of privacy-preserving mechanisms. We
provide a composition theorem that leverages privacy at risk. We instantiate
the probabilistic calibration for the Laplace mechanism by providing
analytical results.
We also propose a cost model that bridges the gap between the privacy level
and the compensation budget estimated by a GDPR compliant business entity. The
convexity of the proposed cost model leads to a unique fine-tuning of privacy
level that minimises the compensation budget. We show its effectiveness with a realistic scenario in which using privacy at risk for the Laplace mechanism avoids overestimation of the compensation budget. We
quantitatively show that composition using the cost optimal privacy at risk
provides stronger privacy guarantee than the classical advanced composition.
## 1 Introduction
Dwork et al. quantify the privacy level $\varepsilon$ in _$\varepsilon$
-differential privacy_ as an upper bound on the worst-case privacy loss
incurred by a privacy-preserving mechanism. Generally, a privacy-preserving
mechanism perturbs the results by adding the calibrated amount of random noise
to them. The calibration of noise depends on the sensitivity of the query and
the specified privacy level. In a real-world setting, a data steward must
specify a privacy level that balances the requirements of the users and
monetary constraints of the business entity. Garfinkel et al. report the
issues in deploying differential privacy as the privacy definition by the US
census bureau. They highlight the lack of analytical methods to choose the
privacy level. They also report empirical studies that show the loss in
utility due to the application of privacy-preserving mechanisms.
We address the dilemma of a data steward in two ways. Firstly, we propose a
probabilistic quantification of privacy levels. This quantification provides a data steward with a way to take quantified risks under the desired utility of the data. We refer to the probabilistic quantification
as _privacy at risk_. We also derive a composition theorem that leverages
privacy at risk. Secondly, we propose a cost model that links the privacy
level to a monetary budget. This cost model helps the data steward to choose
the privacy level constrained by the estimated budget and vice versa.
Convexity of the proposed cost model ensures the existence of a unique privacy
at risk that would minimise the budget. We show that the composition with an
optimal privacy at risk provides stronger privacy guarantees than the
traditional advanced composition (Dwork et al., 2014). In the end, we
illustrate a realistic scenario that exemplifies how the data steward can
avoid overestimation of the budget by using the proposed cost model together with privacy at risk.
The probabilistic quantification of privacy levels depends on two sources of
randomness: the _explicit randomness_ induced by the noise distribution and
the _implicit randomness_ induced by the data-generation distribution. Often,
these two sources are coupled with each other. We require analytical forms of
both sources of randomness as well as an analytical representation of the
query to derive a privacy guarantee. Computing the probabilistic
quantification is generally a challenging task. Although we find multiple
probabilistic privacy definitions in the literature (Machanavajjhala et al.,
2008; Hall et al., 2012), we are missing analytical quantification bridging
the randomness and privacy level of a privacy-preserving mechanism. To the
best of our knowledge, we are the first to analytically derive such a
probabilistic quantification, namely privacy at risk, for the widely used
Laplace mechanism (Dwork et al., 2006b). We also derive a composition theorem
with privacy at risk. It is a special case of the advanced composition theorem
(Dwork et al., 2014) that deals with a sequential and adaptive use of privacy-
preserving mechanisms. We work with the simpler model of independent evaluations used in the basic composition theorem (Dwork et al., 2014).
The privacy level proposed by the differential privacy framework is too
abstract a quantity to be integrated in a business setting. We propose a cost
model that maps the privacy level to a monetary budget. The corresponding cost
model for the probabilistic quantification of privacy levels is a convex
function of the privacy level. Hence, it leads to a unique probabilistic
privacy level that minimises the cost. We illustrate a realistic scenario in a
GDPR compliant business entity that needs an estimation of the compensation
budget that it needs to pay to stakeholders in the unfortunate event of a
personal data breach. The illustration shows that the use of probabilistic
privacy levels avoids overestimation of the compensation budget without
sacrificing utility.
In this work, we comparatively evaluate the privacy guarantees obtained using privacy at risk for the Laplace mechanism. We quantitatively compare composition under the optimal privacy at risk, which is estimated using the cost model, with the traditional composition mechanisms - the basic composition and the advanced composition (Dwork et al., 2014). We observe that it gives stronger privacy guarantees than the advanced composition without sacrificing the utility of the mechanism.
In conclusion, the benefits of the probabilistic quantification, i.e. privacy at risk, are twofold. It not only quantifies the privacy level for a given
privacy-preserving mechanism but also facilitates decision-making in problems
that focus on the privacy-utility trade-off and the compensation budget
minimisation.
## 2 Background
We consider a universe of datasets $\mathcal{D}$. We explicitly mention when
we consider that the datasets are sampled from a data-generation distribution
$\mathcal{G}$ with support $\mathcal{D}$. Two datasets of equal cardinality
$x$ and $y$ are said to be neighbouring datasets if they differ in one data
point. A pair of neighbouring datasets is denoted by $x\sim y$. In this work,
we focus on a specific class of queries called numeric queries. A numeric
query $f$ is a function that maps a dataset into a real-valued vector, i.e.
$f:\mathcal{D}\rightarrow\mathbb{R}^{k}$. For instance, a sum query returns
the sum of the values in a dataset.
In order to achieve a privacy guarantee, a _privacy-preserving mechanism_, or mechanism in short, is a randomised algorithm that adds noise, sampled from a given family of distributions, to the query output. Thus, a privacy-preserving mechanism of
a given family, $\mathcal{M}(f,\Theta)$, for the query $f$ and the set of
parameters $\Theta$ of the given noise distribution, is a function that maps a
dataset into a real vector, i.e.
$\mathcal{M}(f,\Theta):\mathcal{D}\rightarrow\mathbb{R}^{k}$. We denote a
privacy-preserving mechanism as $\mathcal{M}$, when the query and the
parameters are clear from the context.
###### Definition 1
[Differential Privacy (Dwork et al., 2014).] A privacy-preserving mechanism
$\mathcal{M}$, equipped with a query $f$ and with parameters $\Theta$, is
$(\varepsilon,\delta)$-differentially private if for all
$Z\subseteq\textrm{Range}(\mathcal{M})$ and $x,y\in\mathcal{D}$ such that
$x\sim y$:
$\mathbb{P}(\mathcal{M}(f,\Theta)(x)\in
Z)\leq\text{e}^{\varepsilon}\mathbb{P}(\mathcal{M}(f,\Theta)(y)\in Z)+\delta$
An $(\varepsilon,0)$-differentially private mechanism is commonly called $\varepsilon$-differentially private.
A privacy-preserving mechanism provides perfect privacy if it yields
indistinguishable outputs for all neighbouring input datasets. The privacy
level $\varepsilon$ quantifies the privacy guarantee provided by
$\varepsilon$-differential privacy. For a given query, the smaller the value of $\varepsilon$, the stronger the privacy guarantee. A randomised algorithm that is $\varepsilon$-differentially private is also $\varepsilon^{\prime}$-differentially private for any
$\varepsilon^{\prime}>\varepsilon$.
In order to satisfy $\varepsilon$-differential privacy, the parameters of a privacy-preserving mechanism require careful calibration. The amount of
noise required to achieve a specified privacy level depends on the query. If
the output of the query does not change drastically between two neighbouring datasets, then only a small amount of noise is required to achieve a given privacy
level. The measure of such fluctuations is called the _sensitivity_ of the
query. The parameters of a privacy-preserving mechanism are calibrated using
the sensitivity of the query that quantifies the smoothness of a numeric
query.
###### Definition 2
[Sensitivity.] The sensitivity of a query
$f:\mathcal{D}\rightarrow\mathbb{R}^{k}$ is defined as
$\Delta_{f}\triangleq\max_{\begin{subarray}{c}x,y\in\mathcal{D}\\\ x\sim
y\end{subarray}}~{}\lVert f(x)-f(y)\rVert_{1}.$
The Laplace mechanism is a privacy-preserving mechanism that adds scaled noise
sampled from a calibrated Laplace distribution to the numeric query.
###### Definition 3
[(Papoulis and Pillai, 2002)] The Laplace distribution with mean zero and
scale $b>0$ is a probability distribution with probability density function
$\mathrm{Lap}(b)\triangleq\frac{1}{2b}\exp{(-\frac{|x|}{b})},$
where $x\in\mathbb{R}$. We write $\mathrm{Lap}(b)$ to denote a random variable $X\sim\mathrm{Lap}(b)$.
###### Definition 4
[Laplace Mechanism (Dwork et al., 2006b).] Given any function
$f:\mathcal{D}\rightarrow\mathbb{R}^{k}$ and any $x\in\mathcal{D}$, the
Laplace Mechanism is defined as
$\mathcal{L}^{\Delta_{f}}_{\varepsilon}(x)\triangleq\mathcal{M}\left(f,~{}\frac{\Delta_{f}}{\varepsilon}\right)(x)=f(x)+(L_{1},...,L_{k}),$
where $L_{i}$ is drawn from
$\mathrm{Lap}\left(\frac{\Delta_{f}}{\varepsilon}\right)$ and added to the
$i^{\mathrm{th}}$ component of $f(x)$.
###### Theorem 5
(Dwork et al., 2006b) The Laplace mechanism,
$\mathcal{L}^{\Delta_{f}}_{\varepsilon_{0}}$, is
$\varepsilon_{0}$-differentially private.
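For concreteness, a minimal sketch of the Laplace mechanism (using numpy; the counting query and dataset below are illustrative):

```python
import numpy as np

def laplace_mechanism(f, x, sensitivity, epsilon, rng=None):
    # Return f(x) perturbed coordinate-wise with Laplace noise of scale
    # sensitivity / epsilon, as in Definition 4.
    rng = rng or np.random.default_rng()
    value = np.atleast_1d(np.asarray(f(x), dtype=float))
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon, size=value.shape)
    return value + noise

# A counting query has sensitivity 1.
x = [0, 1, 1, 0, 1]
print(laplace_mechanism(lambda d: sum(d), x, sensitivity=1.0, epsilon=0.5))
```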
## 3 Privacy at Risk: A Probabilistic Quantification of Randomness
The parameters of a privacy-preserving mechanism are calibrated using the
privacy level and the sensitivity of the query. A data steward needs to choose an appropriate privacy level for a practical implementation. Lee and Clifton show
that the choice of an actual privacy level by a data steward in regard to her
business requirements is a non-trivial task. Recall that the privacy level in
the definition of differential privacy corresponds to the worst case privacy
loss. Business users are however used to taking and managing risks, if the
risks can be quantified. For instance, Jorion defines Value at Risk, which is used by risk analysts to quantify the loss in investments for a given portfolio and an acceptable confidence bound. Motivated by the formulation of Value at Risk, we propose the use of a probabilistic privacy level. It provides a finer tuning of an $\varepsilon_{0}$-differentially private privacy-preserving mechanism for a specified risk $\gamma$.
###### Definition 6
[Privacy at Risk.] For a given data generating distribution $\mathcal{G}$, a
privacy-preserving mechanism $\mathcal{M}$, equipped with a query $f$ and with
parameters $\Theta$, satisfies $\varepsilon$-differential privacy with a
_privacy at risk_ $0\leq\gamma\leq 1$, if for all
$Z\subseteq\textrm{Range}(\mathcal{M})$ and $x,y$ sampled from $\mathcal{G}$
such that $x\sim y$:
$\mathbb{P}\left[\left|\ln{\frac{\mathbb{P}(\mathcal{M}(f,\Theta)(x)\in
Z)}{\mathbb{P}(\mathcal{M}(f,\Theta)(y)\in
Z)}}\right|>\varepsilon\right]\leq\gamma,$ (1)
where the outer probability is calculated with respect to the probability
space $\mathrm{Range}(\mathcal{M}\circ\mathcal{G})$ obtained by applying the
privacy-preserving mechanism $\mathcal{M}$ on the data-generation distribution
$\mathcal{G}$.
If a privacy-preserving mechanism is $\varepsilon_{0}$-differentially private for a given query $f$ and parameters $\Theta$, then for any privacy level $\varepsilon\geq\varepsilon_{0}$ the privacy at risk is $0$. Our interest is to
quantify the risk $\gamma$ with which $\varepsilon_{0}$-differentially private
privacy-preserving mechanism also satisfies a stronger
$\varepsilon$-differential privacy, i.e. $\varepsilon<\varepsilon_{0}$.
Unifying Probabilistic and Random Differential Privacy. As a natural
consequence, Equation 1 unifies the notions of probabilistic differential
privacy and random differential privacy by accounting for both sources of
randomness in a privacy-preserving mechanism. Machanavajjhala et al. define
probabilistic differential privacy that incorporates the explicit randomness
of the noise distribution of the privacy-preserving mechanism whereas Hall et
al. define random differential privacy that incorporates the implicit
randomness of the data-generation distribution. In probabilistic differential
privacy, the outer probability is computed over the sample space of
$\mathrm{Range}(\mathcal{M})$ and all datasets are equally probable.
### 3.1 Composition theorem
Application of $\varepsilon$-differential privacy to many real-world problems suffers from the degradation of the privacy guarantee, i.e. the privacy level, over composition. The basic composition theorem (Dwork et al., 2014) dictates that the privacy guarantee degrades linearly in the number of evaluations of the mechanism. The advanced composition theorem (Dwork et al., 2014) provides a finer analysis of the privacy loss over multiple evaluations and yields a square root dependence on the number of evaluations. In this section, we provide
the composition theorem for privacy at risk.
###### Definition 7
[Privacy loss random variable.] For a privacy-preserving mechanism
$\mathcal{M}:\mathcal{D}\rightarrow R$ and two neighbouring datasets
$x,y\in\mathcal{D}$, the privacy loss random variable $\mathcal{C}$ takes, for an output $r\in R$, the value
$\mathcal{C}\triangleq\ln{\frac{\mathbb{P}(\mathcal{M}(x)=r)}{\mathbb{P}(\mathcal{M}(y)=r)}}$
###### Lemma 8
If a privacy-preserving mechanism $\mathcal{M}$ satisfies $\varepsilon_{0}$
differential privacy, then
$\mathbb{P}[|\mathcal{C}|\leq\varepsilon_{0}]=1$
###### Theorem 9
For all $\varepsilon_{0},\varepsilon,\gamma,\delta>0$, the class of
$\varepsilon_{0}$-differentially private mechanisms, which satisfy
$(\varepsilon,\gamma)$-privacy at risk, are $(\varepsilon^{\prime},\delta)$-differentially private under $n$-fold
composition where
$\varepsilon^{\prime}=\varepsilon_{0}\sqrt{2n\ln{\frac{1}{\delta}}}+n\mu$
where,
$\mu=[\gamma\varepsilon(e^{\varepsilon}-1)+(1-\gamma)\varepsilon_{0}(e^{\varepsilon_{0}}-1)]$
Proof Let $\mathcal{M}^{1...n}:\mathcal{D}\rightarrow R^{1}\times
R^{2}\times...\times R^{n}$ denote the $n$-fold composition of privacy-
preserving mechanisms $\\{\mathcal{M}^{i}:\mathcal{D}\rightarrow
R^{i}\\}_{i=1}^{n}$. Each $\varepsilon_{0}$-differentially private
$\mathcal{M}^{i}$ also satisfies $(\varepsilon,\gamma)$-privacy at risk for
some $\varepsilon\leq\varepsilon_{0}$ and an appropriately computed $\gamma$.
Consider any two neighbouring datasets $x,y\in\mathcal{D}$. Let,
$B=\\{(r_{1},...,r_{n})|\bigwedge_{i=1}^{n}\frac{\mathbb{P}(\mathcal{M}^{i}(x)=r_{i})}{\mathbb{P}(\mathcal{M}^{i}(y)=r_{i})}>\text{e}^{\varepsilon}\\}$
Using the technique in (Dwork et al., 2014, Theorem 3.20), it suffices to show
that $\mathbb{P}(\mathcal{M}^{1...n}(x)\in B)\leq\delta$.
Consider,
$\displaystyle\ln{\frac{\mathbb{P}(\mathcal{M}^{1...n}(x)=(r_{1},...,r_{n}))}{\mathbb{P}(\mathcal{M}^{1...n}(y)=(r_{1},...,r_{n}))}}$
$\displaystyle=$
$\displaystyle\ln{\prod_{i=1}^{n}}\frac{\mathbb{P}(\mathcal{M}^{i}(x)=r_{i})}{\mathbb{P}(\mathcal{M}^{i}(y)=r_{i})}$
$\displaystyle=$
$\displaystyle\sum_{i=1}^{n}\ln{\frac{\mathbb{P}(\mathcal{M}^{i}(x)=r_{i})}{\mathbb{P}(\mathcal{M}^{i}(y)=r_{i})}}~{}=~{}\sum_{i=1}^{n}\mathcal{C}^{i}$
(2)
where $\mathcal{C}^{i}$ in the last line denotes privacy loss random variable
related $\mathcal{M}^{i}$.
Consider an $\varepsilon$-differentially private mechanism
$\mathcal{M}_{\varepsilon}$ and $\varepsilon_{0}$-differentially private
mechanism $\mathcal{M}_{\varepsilon_{0}}$. Let $\mathcal{M}_{\varepsilon_{0}}$
satisfy $(\varepsilon,\gamma)$-privacy at risk for
$\varepsilon\leq\varepsilon_{0}$ and appropriately computed $\gamma$. Each
$\mathcal{M}^{i}$ can be simulated as the mechanism
$\mathcal{M}_{\varepsilon}$ with probability $\gamma$ and the mechanism
$\mathcal{M}_{\varepsilon_{0}}$ otherwise. Therefore, the privacy loss random variable for each mechanism $\mathcal{M}^{i}$ can be written as
$\mathcal{C}^{i}=\gamma\mathcal{C}_{\varepsilon}^{i}+(1-\gamma)\mathcal{C}_{\varepsilon_{0}}^{i}$
where, $\mathcal{C}_{\varepsilon}^{i}$ denotes the privacy loss random
variable associated with the mechanism $\mathcal{M}_{\varepsilon}$ and
$\mathcal{C}_{\varepsilon_{0}}^{i}$ denotes the privacy loss random variable
associated with the mechanism $\mathcal{M}_{\varepsilon_{0}}$. Using (Dwork et
al., 2014, Lemma $3.18$), we can bound the mean of every privacy loss random
variable as,
$\mu\triangleq\mathbb{E}[\mathcal{C}^{i}]\leq[\gamma\varepsilon(e^{\varepsilon}-1)+(1-\gamma)\varepsilon_{0}(e^{\varepsilon_{0}}-1)]$
We have a collection of $n$ independent privacy loss random variables $\mathcal{C}^{i}$ such that
$\mathbb{P}\left[|\mathcal{C}^{i}|\leq\varepsilon_{0}\right]=1$. Using
Hoeffding’s bound (Hoeffding, 1994) on the sample mean for any $\beta>0$,
$\mathbb{P}\left[\frac{1}{n}\sum_{i}C^{i}\geq\mathbb{E}[\mathcal{C}^{i}]+\beta\right]\leq\exp{\left(-\frac{n\beta^{2}}{2\varepsilon_{0}^{2}}\right)}$
Rearranging the inequality by renaming the upper bound on the probability as
$\delta$, we get,
$\mathbb{P}\left[\sum_{i}C^{i}\geq
n\mu+\varepsilon_{0}\sqrt{2n\ln{\frac{1}{\delta}}}\right]\leq\delta$
Theorem 9 is an analogue, in the privacy at risk setting, of the advanced
composition of differential privacy (Dwork et al., 2014, Theorem 3.20) under a
constraint of independent evaluations. Note that, if one takes $\gamma=0$,
then we obtain the exact same formula as in (Dwork et al., 2014, Theorem
3.20). It provides a sanity check for the consistency of composition using
privacy at risk.
In fact, if we consider both sources of randomness, the expected value of the privacy loss random variable must be computed using the law of total expectation:
$\mathbb{E}[\mathcal{C}]=\mathbb{E}_{x,y\sim\mathcal{G}}\left[\mathbb{E}[\mathcal{C}\,|\,x,y]\right]$
Therefore, the exact computation of privacy guarantees after the composition
requires access to the data-generation distribution. We assume a uniform data-generation distribution while proving Theorem 9. Finer privacy guarantees can be obtained by accounting for the data-generation distribution, which we leave as future work.
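A sketch comparing the privacy level after $n$-fold composition under Theorem 9 with the basic composition ($n\varepsilon_{0}$) and the advanced composition (parameter values are illustrative):

```python
import math

def advanced_composition(eps0, n, delta):
    # Advanced composition (Dwork et al., 2014, Theorem 3.20).
    return eps0 * math.sqrt(2 * n * math.log(1 / delta)) \
        + n * eps0 * (math.exp(eps0) - 1)

def par_composition(eps0, eps, gamma, n, delta):
    # epsilon' from Theorem 9; reduces to the advanced composition at gamma = 0.
    mu = gamma * eps * (math.exp(eps) - 1) \
        + (1 - gamma) * eps0 * (math.exp(eps0) - 1)
    return eps0 * math.sqrt(2 * n * math.log(1 / delta)) + n * mu

eps0, eps, gamma, n, delta = 0.5, 0.1, 0.8, 50, 1e-5
print(n * eps0)                                      # basic composition
print(advanced_composition(eps0, n, delta))
print(par_composition(eps0, eps, gamma, n, delta))   # smaller when gamma > 0
```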
### 3.2 Convexity and Post-processing
Since privacy at risk provides a probabilistic privacy guarantee for an
$\varepsilon_{0}$-differentially private mechanism, it does not alter the
basic properties of differential privacy - convexity and post-processing. We
now show that privacy at risk equally adheres to both of these properties.
###### Lemma 10 (Convexity)
For a given $\varepsilon_{0}$-differentially private privacy-preserving
mechanism, privacy at risk satisfies the convexity property.
Proof Let $\mathcal{M}$ be a mechanism that satisfies
$\varepsilon_{0}$-differential privacy. By the definition of the privacy at
risk, it also satisfies $(\varepsilon_{1},\gamma_{1})$-privacy at risk as well
as $(\varepsilon_{2},\gamma_{2})$-privacy at risk for some
$\varepsilon_{1},\varepsilon_{2}\leq\varepsilon_{0}$ and appropriately
computed values of $\gamma_{1}$ and $\gamma_{2}$. Let $\mathcal{M}^{1}$ and
$\mathcal{M}^{2}$ denote the hypothetical mechanisms that satisfy
$(\varepsilon_{1},\gamma_{1})$-privacy at risk and
$(\varepsilon_{2},\gamma_{2})$-privacy at risk respectively. We can write
privacy loss random variables as follows:
$\displaystyle\mathcal{C}^{1}$
$\displaystyle\leq\gamma_{1}\varepsilon_{1}+(1-\gamma_{1})\varepsilon_{0}$
$\displaystyle\mathcal{C}^{2}$
$\displaystyle\leq\gamma_{2}\varepsilon_{2}+(1-\gamma_{2})\varepsilon_{0}$
where $\mathcal{C}^{1}$ and $\mathcal{C}^{2}$ denote privacy loss random
variables for $\mathcal{M}^{1}$ and $\mathcal{M}^{2}$.
Let us consider a privacy-preserving mechanism $\mathcal{M}$ that uses
$\mathcal{M}^{1}$ with a probability $p$ and $\mathcal{M}^{2}$ with a
probability $(1-p)$ for some $p\in[0,1]$. By using the techniques in the proof
of Theorem 9, the privacy loss random variable $\mathcal{C}$ for $\mathcal{M}$
can be written as:
$\displaystyle\mathcal{C}$
$\displaystyle=p\mathcal{C}^{1}+(1-p)\mathcal{C}^{2}$
$\displaystyle\leq\gamma^{\prime}\varepsilon^{\prime}+(1-\gamma^{\prime})\varepsilon_{0}$
where
$\displaystyle\varepsilon^{\prime}$
$\displaystyle=\frac{p\gamma_{1}\varepsilon_{1}+(1-p)\gamma_{2}\varepsilon_{2}}{p\gamma_{1}+(1-p)\gamma_{2}}$
$\displaystyle\gamma^{\prime}$ $\displaystyle=(1-p\gamma_{1}-(1-p)\gamma_{2})$
Thus, $\mathcal{M}$ satisfies $(\varepsilon^{\prime},\gamma^{\prime})$-privacy
at risk. This proves that privacy at risk satisfies convexity (Kifer and Lin,
2012, Axiom 2.1.2).
###### Lemma 11 (Post-processing)
For a given $\varepsilon_{0}$-differentially private privacy-preserving
mechanism, privacy at risk satisfies post-processing property.
Proof
(Diagram: a deterministic post-processing $\varphi$ maps an $(\varepsilon_{0},0)$-DP mechanism to an $(\varepsilon_{0},0)$-DP mechanism, and an $(\varepsilon,\gamma)$-PaR mechanism to an $(\varepsilon,\gamma)$-PaR mechanism.)
Privacy at risk analyses sources of randomness involved in a privacy-
preserving mechanism to provide probabilistic guarantees over the privacy
level of differential privacy. Application of a deterministic function as a
post-processing step does not affect either the data-generation distribution or
the noise distribution. Thus, privacy at risk simply inherits the behaviour of
the underlying privacy level under post-processing.
under post-processing (Dwork et al., 2014, Proposition 2.1), so is the privacy
at risk.
Lemma 11 deserves a closer look. Privacy at risk is preserved under post-
processing only if the underlying mechanism satisfies a stronger privacy
guarantee such as differential privacy. If the mechanism satisfies only a
variant such as probabilistic differential privacy, which does not satisfy the
post-processing property (Machanavajjhala et al., 2008), then privacy at risk
is not preserved under post-processing.
## 4 Privacy at Risk for Laplace Mechanism
In this section, we instantiate privacy at risk for the Laplace mechanism in
three cases: two cases involving one source of randomness each, and a third
case involving their coupled effect. The three cases correspond to three
different interpretations of the confidence level, represented by the
parameter $\gamma$, i.e., to three interpretations of the support of the outer
probability in Definition 6. In order to highlight this nuance, we denote the
confidence levels corresponding to the three cases and their sources of
randomness as $\gamma_{1}$, $\gamma_{2}$ and $\gamma_{3}$, respectively.
### 4.1 The Case of Explicit Randomness
In this section, we study the effect of the explicit randomness induced by the
noise sampled from the Laplace distribution. We provide a probabilistic
quantification for fine-tuning the Laplace mechanism: we fine-tune the privacy
level for a specified risk, assuming that the sensitivity of the query is
known a priori.
For a Laplace mechanism $\mathcal{L}_{\varepsilon_{0}}^{\Delta_{f}}$
calibrated with sensitivity $\Delta_{f}$ and privacy level $\varepsilon_{0}$,
we present the analytical formula relating privacy level $\varepsilon$ and the
risk $\gamma_{1}$ in Theorem 12. The proof is available in Appendix A.
###### Theorem 12
The risk $\gamma_{1}\in[0,1]$ with which a Laplace Mechanism
$\mathcal{L}^{\Delta_{f}}_{\varepsilon_{0}}$, for a numeric query
$f:\mathcal{D}\rightarrow\mathbb{R}^{k}$ satisfies a privacy level
$\varepsilon\geq 0$ is given by
$\gamma_{1}=\frac{\mathbb{P}(T\leq\varepsilon)}{\mathbb{P}(T\leq\varepsilon_{0})},$
(3)
where $T$ is a random variable that follows a distribution with the following
density function.
$P_{T}(t)=\frac{2^{1-k}t^{k-\frac{1}{2}}K_{k-\frac{1}{2}}(t)\varepsilon_{0}}{\sqrt{2\pi}\Gamma(k)\Delta_{f}}$
where $K_{k-\frac{1}{2}}$ is the modified Bessel function of the second kind.
Figure 1 shows the plot of the privacy level against risk for different values
of $k$ and for a Laplace mechanism $\mathcal{L}^{1.0}_{1.0}$. As the value of
$k$ increases, the amount of noise added to the output of the numeric query
increases. Therefore, for a specified risk $\gamma_{1}$, the privacy at risk
level increases with the value of $k$.
The map from $\varepsilon$ to $\gamma_{1}$ given by the analytical formula of
Theorem 12 is bijective, so it can be inverted to obtain the privacy level
$\varepsilon$ for a given privacy at risk $\gamma_{1}$. However, the inverse
does not admit an explicit closed form. We therefore use a numerical approach
to compute the privacy level for a given privacy at risk from the
analytical formula of Theorem 12.
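One possible numerical route, sketched below in Python, estimates the distribution of $T$ by Monte Carlo simulation. Since $T=|X_{1}-X_{2}|/\theta$ with $X_{1},X_{2}$ i.i.d. $\mathrm{Gamma}(k,\theta)$, it reduces to $|G_{1}-G_{2}|$ with $G_{1},G_{2}$ i.i.d. $\mathrm{Gamma}(k,1)$, so the scale $\Delta_{f}/\varepsilon_{0}$ cancels in the ratio of Theorem 12. The helper names, sample sizes and tolerances are ours, not the paper's exact procedure:

```python
import numpy as np

def gamma1_curve(eps0, k, n_samples=10**6, seed=0):
    """Monte Carlo estimate of the map eps -> gamma_1 of Theorem 12."""
    rng = np.random.default_rng(seed)
    # T reduces to |G1 - G2| with G1, G2 i.i.d. Gamma(k, 1).
    t = np.abs(rng.gamma(k, 1.0, n_samples) - rng.gamma(k, 1.0, n_samples))
    denom = np.mean(t <= eps0)
    return lambda eps: np.mean(t <= eps) / denom

def eps_for_risk(gamma1_target, eps0, k, tol=1e-5):
    """Invert the monotone map eps -> gamma_1 by bisection on [0, eps0]."""
    g = gamma1_curve(eps0, k)
    lo, hi = 0.0, eps0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) < gamma1_target else (lo, mid)
    return 0.5 * (lo + hi)

# Cross-check against the k = 1 closed form of Equation 4 below.
print(eps_for_risk(0.6, eps0=1.0, k=1))       # Monte Carlo: ~0.477
print(-np.log(1 - 0.6 * (1 - np.exp(-1.0))))  # exact:        0.4768...
```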
Figures 1–3: Privacy level $\varepsilon$ for varying privacy at risk $\gamma_{1}$
for the Laplace mechanism $\mathcal{L}_{\varepsilon_{0}}^{1.0}$. In Figure 1, we
use $\varepsilon_{0}=1.0$ and different values of $k$; in Figure 2, $k=1$
and different values of $\varepsilon_{0}$.
Figure 4: Number of samples $n$ for varying privacy at risk $\gamma_{2}$ for
different error parameter $\rho$.
Result for a Real-valued Query. For the case $k=1$, the analytical derivation
is fairly straightforward. In this case, we obtain an invertible closed form
relating the privacy level to the specified risk, presented in Equation 4.
$\varepsilon=\ln{\left(\frac{1}{1-\gamma_{1}(1-e^{-\varepsilon_{0}})}\right)}$
(4)
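A minimal sketch of Equation 4 and its inverse (the helper names are ours):

```python
import math

def eps_of_gamma(gamma1, eps0):
    """Equation 4: privacy level achieved with risk gamma1 (case k = 1)."""
    return math.log(1 / (1 - gamma1 * (1 - math.exp(-eps0))))

def gamma_of_eps(eps, eps0):
    """Inverse of Equation 4: risk at which level eps is achieved (k = 1)."""
    return (1 - math.exp(-eps)) / (1 - math.exp(-eps0))

# Round trip: the two functions are inverses of each other.
assert abs(gamma_of_eps(eps_of_gamma(0.6, 1.0), 1.0) - 0.6) < 1e-12
```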
Remarks on $\varepsilon_{0}$. For $k=1$, Figure 2 shows the plot of privacy at
risk level $\varepsilon$ versus privacy at risk $\gamma_{1}$ for the Laplace
mechanism $\mathcal{L}^{1.0}_{\varepsilon_{0}}$. As the value of
$\varepsilon_{0}$ increases, the probability of Laplace mechanism generating
higher value of noise reduces. Therefore, for a fixed privacy level, privacy
at risk increases with the value of $\varepsilon_{0}$. The same observation is
made for $k>1$.
### 4.2 The Case of Implicit Randomness
In this section, we study the effect of the implicit randomness induced by the
data-generation distribution in order to fine-tune the Laplace mechanism. We
fine-tune the risk for a specified privacy level without assuming that the
sensitivity of the query is known a priori.
If one takes into account randomness induced by the data-generation
distribution, all pairs of neighbouring datasets are not equally probable.
This leads to estimation of sensitivity of a query for a specified data-
generation distribution. If we have access to an analytical form of the data-
generation distribution and to the query, we could analytically derive the
sensitivity distribution for the query. In general, we have access to the
datasets, but not the data-generation distribution that generates them. We,
therefore, statistically estimate sensitivity by constructing an empirical
distribution. We call the sensitivity value obtained for a specified risk from
the empirical cumulative distribution of sensitivity the sampled sensitivity
(Definition 14). However, the value of sampled sensitivity is simply an
estimate of the sensitivity for a specified risk. In order to capture this
additional uncertainty introduced by the estimation from the empirical
sensitivity distribution rather than the true unknown distribution, we compute
a lower bound on the accuracy of this estimation. This lower bound yields a
probabilistic lower bound on the specified risk. We refer to it as empirical
risk. For a specified absolute risk $\gamma_{2}$, we denote by
$\hat{\gamma_{2}}$ the corresponding empirical risk.
For the Laplace mechanism $\mathcal{L}_{\varepsilon}^{\Delta_{S_{f}}}$
calibrated with sampled sensitivity $\Delta_{S_{f}}$ and privacy level
$\varepsilon$, we evaluate the empirical risk $\hat{\gamma_{2}}$. We present
the result in Theorem 13. The proof is available in Appendix B.
###### Theorem 13
Analytical bound on the empirical risk, $\hat{\gamma_{2}}$, for Laplace
mechanism $\mathcal{L}_{\varepsilon}^{\Delta_{S_{f}}}$ with privacy level
$\varepsilon$ and sampled sensitivity $\Delta_{S_{f}}$ for a query
$f:\mathcal{D}\rightarrow\mathbb{R}^{k}$ is
$\hat{\gamma_{2}}\geq\gamma_{2}(1-2e^{-2\rho^{2}n})$ (5)
where $n$ is the number of samples used for estimation of the sampled
sensitivity and $\rho$ is the accuracy parameter. $\gamma_{2}$ denotes the
specified absolute risk.
The error parameter $\rho$ controls the closeness between the empirical
cumulative distribution of the sensitivity and the true cumulative distribution
of the sensitivity. The lower the error, the closer the empirical cumulative
distribution is to the true one. Figure 4 shows the plot of the number of
samples as a function of the privacy at risk and the error parameter.
Naturally, a lower error rate requires a larger number of samples. The required
number of samples decreases as the privacy at risk increases: a lower risk
demands more precision in the estimated sampled sensitivity, which in turn
requires a larger number of samples.
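A minimal sketch of this sample-size computation, obtained by rearranging the factor $(1-2e^{-2\rho^{2}n})$ from Theorem 13 (the helper name is ours):

```python
import math

def n_required(rho, alpha):
    """Smallest n with 1 - 2*exp(-2*rho**2*n) >= alpha (cf. Theorem 13)."""
    return math.ceil(math.log(2 / (1 - alpha)) / (2 * rho ** 2))

print(n_required(rho=0.01, alpha=0.90))  # ~14979 samples
```

For $\rho=0.01$ and a target factor of $0.9$, this gives roughly $15,000$ samples, consistent with Figure 4.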
Let $\mathcal{G}$ denote the data-generation distribution, either known
a priori or constructed by subsampling the available data. We adopt the
procedure of Rubinstein and Aldà (2017) to sample two neighbouring datasets
with $p$ data points each: we sample $p-1$ data points from $\mathcal{G}$ that
are common to both datasets, then sample two more data points and allot one of
them to each of the two datasets.
Let $S_{f}=\lVert f(x)-f(y)\rVert_{1}$ denote the sensitivity random
variable for a given query $f$, where $x$ and $y$ are two neighbouring
datasets sampled from $\mathcal{G}$. Using $n$ pairs of neighbouring datasets
sampled from $\mathcal{G}$, we construct the empirical cumulative
distribution, $F_{n}$, of the sensitivity random variable.
###### Definition 14
For a given query $f$ and for a specified risk $\gamma_{2}$, sampled
sensitivity, $\Delta_{S_{f}}$, is defined as the value of sensitivity random
variable that is estimated using its empirical cumulative distribution
function, $F_{n}$, constructed using $n$ pairs of neighbouring datasets
sampled from the data-generation distribution $\mathcal{G}$.
$\Delta_{S_{f}}\triangleq F_{n}^{-1}(\gamma_{2})$
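A sketch of the estimation behind Definition 14, assuming a user-supplied query `f` and a callable `sample_record` standing in for one draw from $\mathcal{G}$ (both are placeholders, not artefacts of the paper):

```python
import numpy as np

def sampled_sensitivity(f, sample_record, n_pairs, n_common, gamma2):
    """Build the empirical CDF of S_f = ||f(x) - f(y)||_1 over n_pairs of
    neighbouring datasets and return its gamma2-quantile (Definition 14)."""
    sens = np.empty(n_pairs)
    for i in range(n_pairs):
        common = [sample_record() for _ in range(n_common)]  # shared records
        x = common + [sample_record()]                       # neighbouring
        y = common + [sample_record()]                       # datasets
        sens[i] = np.abs(np.asarray(f(x)) - np.asarray(f(y))).sum()
    return np.quantile(sens, gamma2)  # F_n^{-1}(gamma2)
```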
If we knew the analytical form of the data-generation distribution, we could
analytically derive the cumulative distribution function of the sensitivity,
$F$, and find the sensitivity of the query as $\Delta_{f}=F^{-1}(1)$.
Therefore, for the sampled sensitivity to be close to the sensitivity of the
query, we require the empirical cumulative distribution to be close to the
cumulative distribution of the sensitivity. We use this insight to derive the
analytical bound in Theorem 13.
Figures 5–7: Dependence of the error and the number of samples on the privacy
at risk for the Laplace mechanism $\mathcal{L}_{1.0}^{\Delta_{S_{f}}}$. In
Figure 5, we fix the number of samples to $10000$; in Figure 6, we fix the
error parameter to $0.01$.
### 4.3 The Case of Explicit and Implicit Randomness
In this section, we study the combined effect of both explicit randomness
induced by the noise distribution and implicit randomness in the data-
generation distribution respectively. We do not assume the knowledge of the
sensitivity of the query.
We estimate sensitivity using the empirical cumulative distribution of
sensitivity. We construct the empirical distribution over the sensitivities
using the sampling technique presented in the earlier case. Since we use the
sampled sensitivity (Definition 14) to calibrate the Laplace mechanism, we
estimate the _empirical risk_ $\hat{\gamma_{3}}$.
For Laplace mechanism $\mathcal{L}_{\varepsilon_{0}}^{\Delta_{S_{f}}}$
calibrated with sampled sensitivity $\Delta_{S_{f}}$ and privacy level
$\varepsilon_{0}$, we present the analytical bound on the empirical risk
$\hat{\gamma_{3}}$ in Theorem 15, with proof in Appendix C.
###### Theorem 15
Analytical bound on the empirical risk $\hat{\gamma_{3}}\in[0,1]$ to achieve a
privacy level $\varepsilon>0$ for Laplace mechanism
$\mathcal{L}_{\varepsilon_{0}}^{\Delta_{S_{f}}}$ with sampled sensitivity
$\Delta_{S_{f}}$ of a query $f:\mathcal{D}\rightarrow\mathbb{R}^{k}$ is
$\hat{\gamma_{3}}\geq\gamma_{3}(1-2e^{-2\rho^{2}n})$ (6)
where $n$ is the number of samples used for estimating the sensitivity, $\rho$
is the accuracy parameter, and $\gamma_{3}$ denotes the specified absolute
risk, defined (with the coupling coefficient $\eta$ introduced below in
Equation 7) as:
$\gamma_{3}=\frac{\mathbb{P}(T\leq\varepsilon)}{\mathbb{P}(T\leq\eta\varepsilon_{0})}\cdot\gamma_{2}$
The error parameter $\rho$ controls the closeness between the empirical
cumulative distribution of the sensitivity and the true cumulative distribution
of the sensitivity. Figure 7 shows the dependence of the error parameter on
the number of samples. In Figure 5, we observe that, for a fixed number of
samples and a fixed privacy level, the privacy at risk decreases with the value
of the error parameter. For a fixed number of samples, smaller values of the
error parameter reduce the probability of similarity between the empirical
cumulative distribution of the sensitivity and the true cumulative
distribution. Therefore, we observe a reduction in the risk for a fixed privacy
level. In Figure 6, we observe that, for a fixed value of the error parameter
and a fixed privacy level, the risk increases with the number of samples. For a
fixed value of the error parameter, larger sample sizes increase the
probability of similarity between the empirical cumulative distribution of the
sensitivity and the true cumulative distribution. Therefore, we observe an
increase in the risk for a fixed privacy level.
The effect of considering both implicit and explicit randomness is evident in
the analytical expression for $\gamma_{3}$ in Equation 7, whose proof is
available in Appendix C. The privacy at risk is composed of two factors: the
second factor, $\gamma_{2}$, is a privacy at risk that accounts for the
implicit randomness of the data-generation distribution, while the first
factor takes into account the explicit randomness of the Laplace distribution
along with a coupling coefficient $\eta$. We define $\eta$ as the ratio of the
true sensitivity of the query to its sampled sensitivity.
$\gamma_{3}\triangleq\frac{\mathbb{P}(T\leq\varepsilon)}{\mathbb{P}(T\leq\eta\varepsilon_{0})}\cdot\gamma_{2}$
(7)
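A direct Monte Carlo evaluation of Equation 7, reusing the simulation of $T$ from Section 4.1 and assuming a given value of $\eta$ (which is unknown in practice), might look as follows:

```python
import numpy as np

def gamma3(eps, eps0, eta, gamma2, k, n_samples=10**6, seed=0):
    """Monte Carlo evaluation of Equation 7; eta (ratio of true to sampled
    sensitivity) is assumed known here for illustration only."""
    rng = np.random.default_rng(seed)
    t = np.abs(rng.gamma(k, 1.0, n_samples) - rng.gamma(k, 1.0, n_samples))
    return np.mean(t <= eps) / np.mean(t <= eta * eps0) * gamma2
```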
## 5 Minimising Compensation Budget for Privacy at Risk
Many service providers collect users’ data to enhance user experience. In
order to avoid misuse of this data, we require a legal framework that not only
limits the use of the collected data but also proposes reparative measures in
case of a data leak. General Data Protection Regulation
(GDPR)111https://eugdpr.org/ is such a legal framework.
Article 82 of the GDPR states that any person who suffers from material or non-
material damage as a result of a personal data breach has the right to demand
compensation from the data processor. Therefore, every GDPR-compliant business
entity that either holds or processes personal data needs to secure a certain
budget for the worst-case scenario of a personal data breach.
reduce the risk of such an unfortunate event, the business entity may use
privacy-preserving mechanisms that provide provable privacy guarantees while
publishing their results. In order to calculate the compensation budget for a
business entity, we devise a cost model that maps the privacy guarantees
provided by differential privacy and privacy at risk to monetary costs. The
discussions demonstrate the usefulness of probabilistic quantification of
differential privacy in a business setting.
### 5.1 Cost Model for Differential Privacy
Let $E$ be the compensation budget that a business entity has to pay to every
stakeholder in case of a personal data breach when the data is processed
without any provable privacy guarantees. Let $E_{\varepsilon}^{dp}$ be the
compensation budget that a business entity has to pay to every stakeholder in
case of a personal data breach when the data is processed with privacy
guarantees in terms of $\varepsilon$-differential privacy.
Privacy level, $\varepsilon$, in $\varepsilon$-differential privacy is the
quantifier of indistinguishability of the outputs of a privacy-preserving
mechanism when two neighbouring datasets are provided as inputs. When the
privacy level is zero, the privacy-preserving mechanism outputs all results
with equal probability. The indistinguishability reduces with increase in the
privacy level. Thus, privacy level of zero bears the lowest risk of personal
data breach and the risk increases with the privacy level.
$E_{\varepsilon}^{dp}$ needs to be commensurate with such a risk and,
therefore, needs to satisfy the following constraints.
1. For all $\varepsilon\in\mathbb{R}^{\geq 0}$, $E_{\varepsilon}^{dp}\leq E$.
2. $E_{\varepsilon}^{dp}$ is a monotonically increasing function of $\varepsilon$.
3. As $\varepsilon\rightarrow 0$, $E_{\varepsilon}^{dp}\rightarrow E_{min}$, where $E_{min}$ is the unavoidable cost that the business entity might need to pay in case of a personal data breach even after the privacy measures are employed.
4. As $\varepsilon\rightarrow\infty$, $E_{\varepsilon}^{dp}\rightarrow E$.
There are various functions that satisfy these constraints. For instantiation,
we choose to work with a cost model that is convex in $\varepsilon$; the cost
model $E_{\varepsilon}^{dp}$ is defined in Equation 8.
$E_{\varepsilon}^{dp}\triangleq E_{min}+Ee^{-\frac{c}{\varepsilon}}$ (8)
$E_{\varepsilon}^{dp}$ has two parameters, namely $c>0$ and $E_{min}\geq 0$.
$c$ controls the rate of change in the cost as the privacy level changes and
$E_{min}$ is a privacy level independent bias. For this study, we use a
simplified model with $c=1$ and $E_{min}=0$.
### 5.2 Cost Model for Privacy at Risk
Let $E_{\varepsilon_{0}}^{par}(\varepsilon,\gamma)$ be the compensation that a
business entity has to pay to every stakeholder in case of a personal data
breach when the data is processed with an $\varepsilon_{0}$-differentially
private privacy-preserving mechanism along with a probabilistic quantification
of the privacy level. Such a quantification allows us to provide a stronger
privacy guarantee, viz. $\varepsilon<\varepsilon_{0}$, for a specified privacy
at risk of at most $\gamma$. Thus, we calculate
$E_{\varepsilon_{0}}^{par}$ using Equation 9.
$E_{\varepsilon_{0}}^{par}(\varepsilon,\gamma)\triangleq\gamma
E_{\varepsilon}^{dp}+(1-\gamma)E_{\varepsilon_{0}}^{dp}$ (9)
#### 5.2.1 Existence of Minimum Compensation Budget
We want to find the privacy level, say $\varepsilon_{min}$, that yields the
lowest compensation budget. We do that by minimising Equation 9 with respect
to $\varepsilon$.
###### Lemma 16
$E_{\varepsilon_{0}}^{par}(\varepsilon,\gamma)$ is a convex function of
$\varepsilon$ if $E_{\varepsilon}^{dp}$ is defined by Equation (8).
This result also generalises to any other convex cost model satisfying the
four conditions. By Lemma 16, there exists a unique $\varepsilon_{min}$ that
minimises the compensation budget for a specified parametrisation, say
$\varepsilon_{0}$. Since the risk $\gamma$ in Equation 9 is itself a function
of privacy level $\varepsilon$, analytical calculation of $\varepsilon_{min}$
is not possible in the most general case. When the output of the query is a
real number, we derive the analytic form (Equation 4) to compute the risk
under the consideration of explicit randomness. In such a case,
$\varepsilon_{min}$ is calculated by differentiating Equation 9 with respect
to $\varepsilon$ and equating it to zero. It gives us Equation 10 that we
solve using any root finding technique such as Newton-Raphson method (Press,
2007) to compute $\varepsilon_{min}$.
$\frac{1}{\varepsilon}-\ln{\left(1-\frac{1-e^{\varepsilon}}{\varepsilon^{2}}\right)}=\frac{1}{\varepsilon_{0}}$
(10)
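A sketch of this computation using a bracketing root finder (SciPy's `brentq` is our choice here; the paper mentions the Newton-Raphson method):

```python
import math
from scipy.optimize import brentq

def eps_min(eps0):
    """Root of Equation 10: the privacy level minimising the budget
    (real-valued query, c = 1, E_min = 0)."""
    g = lambda e: 1 / e - math.log(1 - (1 - math.exp(e)) / e ** 2) - 1 / eps0
    return brentq(g, 1e-6, eps0)  # g changes sign on (0, eps0)

print(round(eps_min(0.5), 3))  # ~0.274, as used in Section 5.3
```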
#### 5.2.2 Fine-tuning Privacy at Risk
For a fixed budget, say $B$, re-arrangement of Equation 9 gives us an upper
bound on the privacy level $\varepsilon$. We use the cost model with $c=1$ and
$E_{min}=0$ to derive the upper bound. If we have a maximum permissible
expected mean absolute error $T$, we use Equation 12 to obtain a lower bound
on the privacy at risk level. Equation 11 illustrates the upper and lower
bounds that dictate the permissible range of $\varepsilon$ that a data
publisher can promise depending on the budget and the permissible error
constraints.
$\frac{1}{T}\leq\varepsilon\leq\left[\ln{\left(\frac{\gamma
E}{B-(1-\gamma)E_{\varepsilon_{0}}^{dp}}\right)}\right]^{-1}$ (11)
Thus, the privacy level is constrained by the effectiveness requirement from
below and by the monetary budget from above. Hsu et al. (2014) calculate
upper and lower bounds on the privacy level in differential privacy. They
use a different cost model, owing to their scenario of a research study that
compensates participants for their data and releases the results in a
differentially private manner; their cost model is thus different from our
GDPR-inspired modelling.
### 5.3 Illustration
Suppose that the health centre in a university that complies to GDPR publishes
statistics of its staff health checkup, such as obesity statistics, twice in a
year. In January 2018, the health centre publishes that 34 out of 99 faculty
members suffer from obesity. In July 2018, the health centre publishes that 35
out of 100 faculty members suffer from obesity. An intruder, perhaps an
analyst working for an insurance company, checks the staff listings in January
2018 and July 2018, which are publicly available on the website of the university.
The intruder does not find any change other than the recruitment of John Doe
in April 2018. Thus, with high probability, the intruder deduces that John Doe
suffers from obesity. In order to avoid such a privacy breach, the health
centre decides to publish the results using the Laplace mechanism. In this
case, the Laplace mechanism operates on the count query.
Figure 8: Variation in the budget for Laplace mechanism
$\mathcal{L}_{\varepsilon_{0}}^{1}$ under privacy at risk considering explicit
randomness in the Laplace mechanism for the illustration in Section 5.3.
In order to control the amount of noise, the health centre needs to
appropriately set the privacy level. Suppose that the health centre decides to
use the expected mean absolute error, defined in Equation 12, as the measure
of effectiveness for the Laplace mechanism.
$\mathbb{E}\left[\lvert\mathcal{L}^{1}_{\varepsilon}(x)-f(x)\rvert\right]=\frac{1}{\varepsilon}$
(12)
Equation 12 makes use of the fact that the sensitivity of the count query is
one. Suppose that the health centre requires the expected mean absolute error
of at most two in order to maintain the quality of the published statistics.
In this case, the privacy level has to be at least $0.5$.
In order to compute the budget, the health centre requires an estimate of $E$.
Moriarty et al. (2012) show that the incremental cost of premiums for the
health insurance with morbid obesity ranges between $\$5467$ to $\$5530$. With
reference to this research, the health centre takes $\$5500$ as an estimate of
$E$. For the staff size of $100$ and the privacy level $0.5$, the health
centre uses Equation 8 in its simplified setting to compute the total budget
of $\$74434.40$.
Is it possible to reduce this budget without degrading the effectiveness of
the Laplace mechanism? We show that it is possible by fine-tuning the Laplace
mechanism. Under the consideration of the explicit randomness introduced by
the Laplace noise distribution, we show that $\varepsilon_{0}$-differentially
private Laplace mechanism also satisfies $\varepsilon$-differential privacy
with risk $\gamma$, which is computed using the formula in Theorem 12. Fine-
tuning allows us to get a stronger privacy guarantee,
$\varepsilon<\varepsilon_{0}$ that requires a smaller budget. In Figure 8, we
plot the budget for various privacy levels. We observe that the privacy level
$0.274$, which is same as $\varepsilon_{min}$ computed by solving Equation 10,
yields the lowest compensation budget of $\$37805.86$. Thus, by using privacy
at risk, the health centre saves $\$36628.54$ without sacrificing
the quality of the published results.
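The figures in this illustration follow from Equations 4, 8 and 9; a minimal sketch with the constants of this section:

```python
import math

E, STAFF, eps0 = 5500.0, 100, 0.5   # cost estimate, staff size, privacy level

cost_dp = lambda e: E * math.exp(-1 / e)                       # Equation 8 (c=1, E_min=0)
gamma1 = lambda e: (1 - math.exp(-e)) / (1 - math.exp(-eps0))  # Equation 4 inverted
cost_par = lambda e: gamma1(e) * cost_dp(e) + (1 - gamma1(e)) * cost_dp(eps0)  # Equation 9

print(STAFF * cost_dp(eps0))     # ~74434.40: plain differential privacy budget
print(STAFF * cost_par(0.274))   # ~37806: budget at the optimal privacy at risk
```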
Figure 9: $\mathcal{L}_{0.1}^{1}$ satisfies $(0.08,0.80)$-privacy at risk.
Figure 10: $\mathcal{L}_{0.5}^{1}$ satisfies $(0.27,0.61)$-privacy at risk.
Figure 11: $\mathcal{L}_{1.0}^{1}$ satisfies $(0.42,0.54)$-privacy at risk.
Figure 12: Comparing the privacy guarantee obtained by basic composition and
advanced composition (Dwork et al., 2014) with the composition obtained using
optimal privacy at risk that minimises the cost of Laplace mechanism
$\mathcal{L}_{\varepsilon_{0}}^{1}$. For the evaluation, we set
$\delta=10^{-5}$.
### 5.4 Cost Model and the Composition of Laplace Mechanisms
Convexity of the proposed cost function enables us to estimate the optimal
value of the privacy at risk level. We use the optimal privacy value to
provide tighter bounds on the composition of Laplace mechanism. In Figure 12,
we compare the privacy guarantees obtained by using basic composition theorem
(Dwork et al., 2014), advanced composition theorem (Dwork et al., 2014) and
the composition theorem for privacy at risk. We comparatively evaluate them
for composition of Laplace mechanisms with privacy levels $0.1,0.5$ and $1.0$.
We compute the privacy level after composition by setting $\delta$ to
$10^{-5}$.
We observe that the use of the optimal privacy at risk provides significantly
stronger privacy guarantees than the conventional composition theorems. The
advanced composition theorem is known to provide stronger privacy guarantees
for mechanisms with smaller $\varepsilon$s. As we observe in Figures 10 and 11,
the privacy at risk composition provides strictly stronger privacy guarantees
than basic composition even in cases where the advanced composition fails.
## 6 Balancing Utility and Privacy
In this section, we empirically illustrate and discuss the steps that a data
steward needs to take and the issues that she needs to consider in order to
realise a required privacy at risk level $\varepsilon$ for a confidence level
$\gamma$ when seeking to disclose the result of a query.
We consider a query that returns the parameter of a ridge regression (Murphy,
2012) for an input dataset. It is a basic and widely used statistical analysis
tool. We use the privacy-preserving mechanism presented by Ligett et al. for
ridge regression. It is a Laplace mechanism that induces noise in the output
parameters of the ridge regression. The authors provide a theoretical upper
bound on the sensitivity of the ridge regression, which we refer to as the
sensitivity in the experiments.
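As a rough sketch of such an output-perturbation mechanism (using scikit-learn's `Ridge` as a stand-in for the exact training procedure of Ligett et al., and assuming the sensitivity bound is supplied externally):

```python
import numpy as np
from sklearn.linear_model import Ridge

def private_ridge_params(X, y, eps, sensitivity, alpha=0.01, seed=0):
    """Fit a ridge regression and add Laplace noise of scale sensitivity/eps
    to each output parameter; `sensitivity` is assumed precomputed (e.g. the
    bound of Ligett et al., 2017). A sketch, not the paper's exact code."""
    rng = np.random.default_rng(seed)
    model = Ridge(alpha=alpha).fit(X, y)
    theta = np.concatenate([model.coef_.ravel(), [model.intercept_]])
    return theta + rng.laplace(0.0, sensitivity / eps, size=theta.shape)
```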
### 6.1 Dataset and Experimental Setup
We conduct experiments on a subset of the 2000 US census dataset provided by
Minnesota Population Center in its Integrated Public Use Microdata Series
(Ruggles et al., 2015). The census dataset consists of 1% sample of the
original census data. It spans over 1.23 million households with records of
2.8 million people. The value of several attributes is not necessarily
available for every household. We have therefore selected $212,605$ records,
corresponding to the household heads, and $6$ attributes, namely, Age, Gender,
Race, Marital Status, Education, Income, whose values are available for the
$212,605$ records.
In order to satisfy the constraint in the derivation of the sensitivity of
ridge regression (Ligett et al., 2017), we, without loss of generality,
normalise the dataset in the following way. We normalise Income attribute such
that the values lie in $[0,1]$. We normalise other attributes such that
$l_{2}$ norm of each data point is unity.
All experiments are run on a Linux machine with a 12-core 3.60GHz Intel® Core™
i7 processor and 64GB of memory. Python 2.7.6 is used as the scripting
language.
### 6.2 Result Analysis
We train a ridge regression model to predict Income using the other attributes
as predictors. We split the dataset into a training set ($80\%$) and a testing
set ($20\%$). We compute the root mean squared error (RMSE), on the testing
set, of the ridge regression trained on the training data with the
regularisation parameter set to $0.01$. We use it as the metric of utility
loss: the smaller the RMSE, the smaller the loss in utility. For a given
privacy at risk level, we run the differentially private ridge regression
experiment $50$ times and report means over the $50$ runs.
Let us now provide illustrative experiments under the three different cases.
In every scenario, the data steward is given a privacy at risk level
$\varepsilon$ and the confidence level $\gamma$ and wants to disclose the
parameters of a ridge regression model that she trains on the census dataset.
She needs to calibrate the Laplace mechanism to achieve the privacy at risk
required for the ridge regression query.
The Case of Explicit Randomness (cf. Section 4.1). In this scenario, the data
steward knows the sensitivity for the ridge regression. She needs to compute
the privacy level, $\varepsilon_{0}$, to calibrate the Laplace mechanism. She
uses Equation 3 that links the desired privacy at risk level $\varepsilon$,
the confidence level $\gamma_{1}$ and the privacy level of noise
$\varepsilon_{0}$. Specifically, for given $\varepsilon$ and $\gamma_{1}$, she
computes $\varepsilon_{0}$ by solving the equation:
$\gamma_{1}\mathbb{P}(T\leq\varepsilon_{0})-\mathbb{P}(T\leq\varepsilon)=0.$
Since the equation does not give an analytical formula for $\varepsilon_{0}$,
the data steward uses a root finding algorithm such as Newton-Raphson method
(Press, 2007) to solve the above equation. For instance, if she needs to
achieve a privacy at risk level $\varepsilon=0.4$ with confidence level
$\gamma_{1}=0.6$, she can substitute these values in the above equation and
solve the equation to get the privacy level of noise $\varepsilon_{0}=0.8$.
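For $k=1$, this root finding reduces to inverting Equation 4 in closed form; a minimal sketch (the helper name is ours) that reproduces the numbers of this example:

```python
import math

def eps0_for(eps, gamma1):
    """Invert Equation 4 for the noise privacy level eps0, given the target
    privacy at risk level eps and confidence gamma1 (case k = 1)."""
    return -math.log(1 - (1 - math.exp(-eps)) / gamma1)

print(round(eps0_for(0.4, 0.6), 2))  # ~0.8, as in the example above
```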
Figure 13: Utility, measured by RMSE (right y-axis), and privacy at risk level
$\varepsilon$ for Laplace mechanism (left y-axis) for varying confidence
levels $\gamma_{1}$.
Figure 14: Empirical cumulative distribution of the sensitivities of ridge
regression queries constructed using $15000$ samples of neighboring datasets.
Figure 13 shows the variation of the privacy at risk level $\varepsilon$ with
the confidence level $\gamma_{1}$. It also depicts the variation of the
utility loss for different privacy at risk levels.
Returning to the data steward’s problem: if she needs to achieve a privacy
at risk level $\varepsilon=0.4$ with confidence level $\gamma_{1}=0.6$, she
obtains the privacy level of noise to be $\varepsilon_{0}=0.8$. Additionally,
we observe that the choice of privacy level $0.8$ instead of $0.4$ to
calibrate the Laplace mechanism gives lower utility loss for the data steward.
This is the benefit drawn from the risk taken under the control of privacy at
risk.
Thus, she uses the privacy level $\varepsilon_{0}$ and the sensitivity of the
function to calibrate Laplace mechanism.
The Case of Implicit Randomness (cf. Section 4.2). In this scenario, the data
steward does not know the sensitivity of ridge regression. She assesses that
she can afford to sample at most $n$ times from the population dataset. She
understands the effect of the uncertainty introduced by the statistical
estimation of the sensitivity. Therefore, she uses the confidence level for
empirical privacy at risk $\hat{\gamma_{2}}$.
Given the value of $n$, she chooses the value of the accuracy parameter using
Figure 4. For instance, if the number of samples that she can draw is
$10^{4}$, she chooses the value of the accuracy parameter $\rho=0.01$. Next,
she uses Equation 13 to determine the value of probabilistic tolerance,
$\alpha$, for the sample size $n$. For instance, if the data steward is not
allowed to access more than $15,000$ samples, for the accuracy of $0.01$ the
probabilistic tolerance is $0.9$.
$\alpha=1-2e^{(-2\rho^{2}n)}$ (13)
She constructs an empirical cumulative distribution over the sensitivities as
described in Section 4.2. Such an empirical cumulative distribution is shown
in Figure 14. Using the computed probabilistic tolerance and desired
confidence level $\hat{\gamma_{2}}$, she uses equation in Theorem 13 to
determine $\gamma_{2}$. She computes the sampled sensitivity using the
empirical distribution function and the confidence level for privacy
$\Delta_{S_{f}}$ at risk $\gamma_{2}$. For instance, using the empirical
cumulative distribution in Figure 14 she calculates the value of the sampled
sensitivity to be approximately $0.001$ for $\gamma_{2}=0.4$ and approximately
$0.01$ for $\gamma_{2}=0.85$
Thus, she uses privacy level $\varepsilon$, sets the number of samples to be
$n$ and computes the sampled sensitivity $\Delta_{S_{f}}$ to calibrate the
Laplace mechanism.
The Case of Explicit and Implicit Randomness (cf. Section 4.3). In this
scenario, the data steward does not know the sensitivity of ridge regression.
She is not allowed to sample more than $n$ times from a population dataset.
For a given confidence level $\gamma_{2}$ and privacy at risk level
$\varepsilon$, she calibrates the Laplace mechanism using the illustration for
Section 4.3. The privacy level in this calibration yields a utility loss that
exceeds her requirement. Therefore, she wants to re-calibrate the Laplace
mechanism in order to reduce the utility loss.
For the re-calibration, the data steward uses the privacy level of the pre-
calibrated Laplace mechanism, i.e. $\varepsilon$, as the privacy at risk
level, and provides a new confidence level for the empirical privacy at risk
$\hat{\gamma_{3}}$. Using Equation 26 and Equation 24, she calculates:
$\hat{\gamma_{3}}\mathbb{P}(T\leq\eta\varepsilon_{0})-\alpha\gamma_{2}~{}\mathbb{P}(T\leq\varepsilon)=0$
She solves such an equation for $\varepsilon_{0}$ using the root finding
technique such as Newton-Raphson method (Press, 2007). For instance, if she
needs to achieve a privacy at risk level $\varepsilon=0.4$ with confidence
levels $\hat{\gamma_{3}}=0.9$ and $\gamma_{2}=0.9$, she can substitute these
values and the values of tolerance parameter and sampled sensitivity, as used
in the previous experiments, in the above equation. Then, solving the equation
leads to the privacy level of noise $\varepsilon_{0}=0.8$.
Thus, she re-calibrates the Laplace mechanism with privacy level
$\varepsilon_{0}$, sets the number of samples to be $n$ and sampled
sensitivity $\Delta_{S_{f}}$.
## 7 Related Work
Calibration of mechanisms. Researchers have proposed different privacy-
preserving mechanisms to make different queries differentially private. These
mechanisms can be broadly classified into two categories. In one category, the
mechanisms explicitly add calibrated noise, such as Laplace noise in the work
of (Dwork et al., 2006c) or Gaussian noise in the work of (Dwork et al.,
2014), to the outputs of the query. In the other category, (Chaudhuri et al.,
2011; Zhang et al., 2012; Acs et al., 2012; Hall et al., 2013) propose
mechanisms that alter the query function so that the modified function
satisfies differentially privacy. Privacy-preserving mechanisms in both of
these categories perturb the original output of the query and make it
difficult for a malicious data analyst to recover the original output of the
query. These mechanisms induce randomness using the explicit noise
distribution. Calibration of these mechanisms requires knowledge of the
sensitivity of the query. Nissim et al. consider the implicit randomness in
the data-generation distribution to compute an estimate of the sensitivity.
The authors propose the smooth sensitivity function that is an envelope over
the local sensitivities for all individual datasets. Local sensitivity of a
dataset is the maximum change in the value of the query over all of its
neighboring datasets. In general, it is not easy to analytically estimate the
smooth sensitivity function for a general query. Rubinstein and Aldà also
study the inherent randomness in the data-generation algorithm. They do not
use the local sensitivity. We adopt their approach of sampling the sensitivity
from the empirical distribution of the sensitivity. They use order statistics
to choose a particular value of the sensitivity. We instead use the risk on
the sensitivity distribution to estimate the sensitivity; the risk provides a
mediation tool for business entities to assess their actual business risks.
Refinements of differential privacy. In order to account for both sources of
randomness, refinements of $\varepsilon$-differential privacy are proposed in
order to bound the probability of occurrence of worst case scenarios.
Machanavajjhala et al. propose probabilistic differential privacy that
considers upper bounds of the worst case privacy loss for corresponding
confidence levels on the noise distribution. Definition of probabilistic
differential privacy incorporates the explicit randomness induced by the noise
distribution and bounds the probability over the space of noisy outputs to
satisfy the $\varepsilon$-differential privacy definition. Dwork and Rothblum
propose concentrated differential privacy, which considers the expected value
of the privacy loss random variable. The definition of concentrated
differential privacy incorporates the explicit randomness induced by the noise
distribution, but considering only the expected value of the privacy loss,
rather than confidence levels, limits its scope.
Hall et al. propose random differential privacy that considers the privacy
loss for corresponding confidence levels on the implicit randomness in the
data-generation distribution. Definition of random differential privacy
incorporates the implicit randomness induced by the data-generation
distribution and bounds the probability over the space of datasets generated
from the given distribution to satisfy the $\varepsilon$-differential privacy
definition. Dwork et al. define approximate differential privacy by adding a
constant bias to the privacy guarantee provided by the differential privacy.
It is not a probabilistic refinement of the differential privacy.
Around the same time as our work, Triastcyn and Faltings independently propose
Bayesian differential privacy that takes into account both of the sources of
randomness. Despite this similarity, our works differ in multiple dimensions.
Firstly, they have shown the reduction of their definition to a variant of
Renyi differential privacy that depends on the data-generation distribution.
Secondly, they rely on the moment accountant for the composition of the
mechanisms. Lastly, they do not provide a finer case-by-case analysis of the
source of randomness, which leads to analytical solutions for the privacy
guarantee.
Kifer and Machanavajjhala define the Pufferfish privacy framework, with a
variant by Bassily et al., which considers randomness due to the
data-generation distribution as well as the noise distribution. Despite the
generality of this approach, the framework relies on a domain expert to define
the set of _secrets_ that they want to protect.
Composition theorem. The recently proposed moment accountant technique
(Abadi et al., 2016) has become the state of the art for composing mechanisms
in the area of privacy-preserving machine learning. Abadi et al. show that the
moment accountant provides much stronger privacy guarantees than the
conventional composition mechanisms. It works by keeping track of various
moments of the privacy loss random variable and using bounds on them to provide
privacy guarantees. The moment accountant requires access to the data-generation
distribution to compute the bounds on the moments. Hence, the privacy
guarantees are specific to the dataset.
Cost models. (Ghosh and Roth, 2015; Chen et al., 2016) propose game theoretic
methods that provide the means to evaluate the monetary cost of differential
privacy. Our approach is inspired by the work of Hsu et al., who model the
cost under the scenario of a research study wherein the participants are
reimbursed for their participation. Our cost modelling is driven by the
scenario of securing a compensation budget in compliance with GDPR. Our
requirements differ from those of their scenario: in our case, there is no
monetary incentive for participants to share their data.
## 8 Conclusion and Future Works
In this paper, we provide a means to fine-tune the privacy level of a privacy-
preserving mechanism by analysing the various sources of randomness. Such
fine-tuning leads to a probabilistic quantification of privacy levels with
quantified risks, which we call privacy at risk. We also provide a composition
theorem that leverages privacy at risk. We analytically calculate the privacy
at risk for the Laplace mechanism. We propose a cost model that bridges the
gap between the privacy level and the compensation budget estimated by a
GDPR-compliant business entity. Convexity of the cost function ensures the
existence of a unique privacy at risk level that minimises the compensation
budget. The cost model not only reinforces the ease of application in a
business setting but also yields stronger privacy guarantees on the
composition of mechanisms.
Privacy at risk may be fully analytically computed in cases where the data-
generation or sensitivity distribution, the noise distribution and the query
are analytically known and take convenient forms. We are now looking at such
convenient but realistic cases.
## Acknowledgements
We want to convey special thanks to Pierre Senellart at DI, École Normale
Supérieure, Paris, for his careful reading of our drafts and his thoughtful
interventions.
## References
* Abadi et al. (2016) Martin Abadi, Andy Chu, Ian Goodfellow, H Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang. Deep learning with differential privacy. In _Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security_ , pages 308–318, 2016.
* Acs et al. (2012) Gergely Acs, Claude Castelluccia, and Rui Chen. Differentially private histogram publishing through lossy compression. In _Data Mining (ICDM), 2012 IEEE 12th International Conference on_ , pages 1–10. IEEE, 2012.
* Askey and Daalhuis (2010) RA Askey and AB Olde Daalhuis. Generalized hypergeometric functions and meijer g-function. _NIST handbook of mathematical functions_ , pages 403–418, 2010.
* Bassily et al. (2013) Raef Bassily, Adam Groce, Jonathan Katz, and Adam Smith. Coupled-worlds privacy: Exploiting adversarial uncertainty in statistical data privacy. In _2013 IEEE 54th Annual Symposium on Foundations of Computer Science_ , pages 439–448. IEEE, 2013.
* Chaudhuri et al. (2011) Kamalika Chaudhuri, Claire Monteleoni, and Anand D Sarwate. Differentially private empirical risk minimization. _Journal of Machine Learning Research_ , 12(Mar):1069–1109, 2011.
* Chen et al. (2016) Yiling Chen, Stephen Chong, Ian A Kash, Tal Moran, and Salil Vadhan. Truthful mechanisms for agents that value privacy. _ACM Transactions on Economics and Computation (TEAC)_ , 4(3):13, 2016.
* Dwork and Rothblum (2016) Cynthia Dwork and Guy N Rothblum. Concentrated differential privacy. _arXiv preprint arXiv:1603.01887_ , 2016.
* Dwork et al. (2006a) Cynthia Dwork, Krishnaram Kenthapadi, Frank McSherry, Ilya Mironov, and Moni Naor. Our data, ourselves: Privacy via distributed noise generation. In _Eurocrypt_ , volume 4004, pages 486–503. Springer, 2006a.
* Dwork et al. (2006b) Cynthia Dwork, Frank McSherry, Kobbi Nissim, and Adam Smith. Calibrating noise to sensitivity in private data analysis. In _Theory of Cryptography Conference_ , pages 265–284. Springer, 2006b.
* Dwork et al. (2006c) Cynthia Dwork, Frank McSherry, Kobbi Nissim, and Adam Smith. _Calibrating Noise to Sensitivity in Private Data Analysis_ , pages 265–284. Springer Berlin Heidelberg, 2006c.
* Dwork et al. (2014) Cynthia Dwork, Aaron Roth, et al. The algorithmic foundations of differential privacy. _Foundations and Trends® in Theoretical Computer Science_ , 9(3–4):211–407, 2014.
* Garfinkel et al. (2018) Simson L Garfinkel, John M Abowd, and Sarah Powazek. Issues encountered deploying differential privacy. _arXiv preprint arXiv:1809.02201_ , 2018.
* Ghosh and Roth (2015) Arpita Ghosh and Aaron Roth. Selling privacy at auction. _Games and Economic Behavior_ , 91:334–346, 2015.
* Hall et al. (2012) Rob Hall, Alessandro Rinaldo, and Larry Wasserman. Random differential privacy. _Journal of Privacy and Confidentiality_ , 4(2):43–59, 2012.
* Hall et al. (2013) Rob Hall, Alessandro Rinaldo, and Larry Wasserman. Differential privacy for functions and functional data. _Journal of Machine Learning Research_ , 14(Feb):703–727, 2013.
* Hoeffding (1994) Wassily Hoeffding. Probability inequalities for sums of bounded random variables. In _The Collected Works of Wassily Hoeffding_ , pages 409–426. Springer, 1994.
* Hsu et al. (2014) Justin Hsu, Marco Gaboardi, Andreas Haeberlen, Sanjeev Khanna, Arjun Narayan, Benjamin C Pierce, and Aaron Roth. Differential privacy: An economic method for choosing epsilon. In _Computer Security Foundations Symposium (CSF), 2014 IEEE 27th_ , pages 398–410. IEEE, 2014.
* Inc. (2014) Wolfram Research, Inc. Mathematica, Version 10, 2014. Champaign, IL.
* Jorion (2000) Philippe Jorion. _Value at Risk: The New Benchmark for Managing Financial Risk_. McGraw-Hill, 2000.
* Kifer and Lin (2012) Daniel Kifer and Bing-Rong Lin. An axiomatic view of statistical privacy and utility. _Journal of Privacy and Confidentiality_ , 4(1), 2012.
* Kifer and Machanavajjhala (2012) Daniel Kifer and Ashwin Machanavajjhala. A rigorous and customizable framework for privacy. In _Proceedings of the 31st ACM SIGMOD-SIGACT-SIGAI symposium on Principles of Database Systems_ , pages 77–88. ACM, 2012.
* Lee and Clifton (2011) Jaewoo Lee and Chris Clifton. How much is enough? choosing $\varepsilon$ for differential privacy. In _International Conference on Information Security_ , pages 325–340. Springer, 2011.
* Ligett et al. (2017) Katrina Ligett, Seth Neel, Aaron Roth, Bo Waggoner, and Steven Z Wu. Accuracy first: Selecting a differential privacy level for accuracy constrained erm. In _Advances in Neural Information Processing Systems_ , pages 2563–2573, 2017.
* Machanavajjhala et al. (2008) Ashwin Machanavajjhala, Daniel Kifer, John Abowd, Johannes Gehrke, and Lars Vilhuber. Privacy: Theory meets practice on the map. In _Data Engineering, 2008. ICDE 2008. IEEE 24th International Conference on_ , pages 277–286. IEEE, 2008.
* Massart et al. (1990) Pascal Massart et al. The tight constant in the dvoretzky-kiefer-wolfowitz inequality. _The annals of Probability_ , 18(3):1269–1283, 1990.
* Moriarty et al. (2012) James P Moriarty, Megan E Branda, Kerry D Olsen, Nilay D Shah, Bijan J Borah, Amy E Wagie, Jason S Egginton, and James M Naessens. The effects of incremental costs of smoking and obesity on health care costs among adults: a 7-year longitudinal study. _Journal of Occupational and Environmental Medicine_ , 54(3):286–291, 2012.
* Murphy (2012) Kevin P. Murphy. _Machine Learning: A Probabilistic Perspective_. The MIT Press, 2012. ISBN 0262018020, 9780262018029.
* Nissim et al. (2007) Kobbi Nissim, Sofya Raskhodnikova, and Adam Smith. Smooth sensitivity and sampling in private data analysis. In _Proceedings of the thirty-ninth annual ACM symposium on Theory of computing_ , pages 75–84. ACM, 2007.
* Papoulis and Pillai (2002) Athanasios Papoulis and S Unnikrishna Pillai. _Probability, random variables, and stochastic processes_. Tata McGraw-Hill Education, 2002.
* Press (2007) William H Press. _Numerical recipes 3rd edition: The art of scientific computing_. Cambridge university press, 2007.
* Rubinstein and Aldà (2017) Benjamin IP Rubinstein and Francesco Aldà. Pain-free random differential privacy with sensitivity sampling. In _International Conference on Machine Learning_ , pages 2950–2959, 2017.
* Ruggles et al. (2015) Steven Ruggles, Katie Genadek, Ronald Goeken, Josiah Grover, and Matthew Sobek. Integrated public use microdata series: Version 6.0 [dataset], 2015. URL http://doi.org/10.18128/D010.V6.0.
* Triastcyn and Faltings (2019) Aleksei Triastcyn and Boi Faltings. Federated learning with bayesian differential privacy. _arXiv preprint arXiv:1911.10071_ , 2019.
* Zhang et al. (2012) Jun Zhang, Zhenjie Zhang, Xiaokui Xiao, Yin Yang, and Marianne Winslett. Functional mechanism: regression analysis under differential privacy. _Proceedings of the VLDB Endowment_ , 5(11):1364–1375, 2012.
## A Proof of Theorem 12 (Section 4.1)
Although a Laplace mechanism $\mathcal{L}^{\Delta_{f}}_{\varepsilon}$ induces
a higher amount of noise on average than a Laplace mechanism
$\mathcal{L}^{\Delta_{f}}_{\varepsilon_{0}}$ for
$\varepsilon<\varepsilon_{0}$, there is a non-zero probability that
$\mathcal{L}^{\Delta_{f}}_{\varepsilon}$ induces noise commensurate with
$\mathcal{L}^{\Delta_{f}}_{\varepsilon_{0}}$. This non-zero probability guides
us in calculating the privacy at risk $\gamma_{1}$ for the privacy at risk
level $\varepsilon$. In order to build intuition, we illustrate the
calculation of the overlap between two Laplace distributions as an estimator
of the similarity between the two distributions.
###### Definition 17
[Overlap of Distributions, (Papoulis and Pillai, 2002)] The overlap, $O$,
between two probability distributions $P_{1},P_{2}$ with support $\mathcal{X}$
is defined as
$O=\int_{\mathcal{X}}\min[P_{1}(x),P_{2}(x)]~{}{dx}.$
###### Lemma 18
The overlap $O$ between two probability distributions,
$\mathrm{Lap}(\frac{\Delta_{f}}{\varepsilon_{1}})$ and
$\mathrm{Lap}(\frac{\Delta_{f}}{\varepsilon_{2}})$, such that
$\varepsilon_{2}\leq\varepsilon_{1}$, is given by
$O=1-(\exp{(-\mu\varepsilon_{2}/\Delta_{f})}-\exp{(-\mu\varepsilon_{1}/\Delta_{f})}),$
where
$\mu=\frac{\Delta_{f}\ln{(\varepsilon_{1}/\varepsilon_{2})}}{\varepsilon_{1}-\varepsilon_{2}}$.
Using the result in Lemma 18, we note that the overlap between the two
distributions with $\varepsilon_{0}=1$ and $\varepsilon=0.6$ is $0.81$. Thus,
$\mathcal{L}^{\Delta_{f}}_{0.6}$ induces noise that is more than $80\%$
similar to the noise induced by $\mathcal{L}^{\Delta_{f}}_{1.0}$. Therefore,
we can loosely say that at least $80\%$ of the time a Laplace Mechanism
$\mathcal{L}^{\Delta_{f}}_{1.0}$ will provide the same privacy as a Laplace
Mechanism $\mathcal{L}^{\Delta_{f}}_{0.6}$.
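The overlap of Lemma 18 is straightforward to evaluate numerically; a minimal sketch (the helper name is ours) reproducing the $0.81$ figure:

```python
import math

def laplace_overlap(eps1, eps2, delta_f=1.0):
    """Overlap of Lemma 18 between Lap(delta_f/eps1) and Lap(delta_f/eps2),
    assuming eps2 <= eps1."""
    mu = delta_f * math.log(eps1 / eps2) / (eps1 - eps2)
    return 1 - (math.exp(-mu * eps2 / delta_f) - math.exp(-mu * eps1 / delta_f))

print(round(laplace_overlap(1.0, 0.6), 2))  # 0.81, as noted above
```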
Although the overlap between Laplace distributions with different scales
offers an insight into the relationship between different privacy levels, it
does not capture the constraint induced by the sensitivity. For a given query
$f$, the amount of noise required to satisfy differential privacy is
commensurate to the sensitivity of the query. This calibration puts a
constraint on the noise that is required to be induced on a pair of
neighbouring datasets. We state this constraint in Lemma 19, which we further
use to prove that the Laplace Mechanism
$\mathcal{L}^{\Delta_{f}}_{\varepsilon_{0}}$ satisfies
$(\varepsilon,\gamma_{1})$-privacy at risk.
###### Lemma 19
For a Laplace Mechanism $\mathcal{L}^{\Delta_{f}}_{\varepsilon_{0}}$, the
difference in the absolute values of noise induced on a pair of neighbouring
datasets is upper bounded by the sensitivity of the query.
Proof Suppose that two neighbouring datasets $x$ and $y$ are given input to a
numeric query $f:\mathcal{D}\rightarrow\mathbb{R}^{k}$. For any output
$z\in\mathbb{R}^{k}$ of the Laplace Mechanism
$\mathcal{L}^{\Delta_{f}}_{\varepsilon_{0}}$,
$\displaystyle\sum_{i=1}^{k}\left(|f(y_{i})-z_{i}|-|f(x_{i})-z_{i}|\right)$
$\displaystyle\leq\sum_{i=1}^{k}\left(|f(x_{i})-f(y_{i})|\right)$
$\displaystyle\leq\Delta_{f}.$
We use the triangle inequality in the first step and Definition 2 of the
sensitivity in the second step.
We write $\mathrm{Exp}(b)$ to denote a random variable sampled from an
_exponential distribution_ with scale $b>0$. We write
$\mathrm{Gamma}(k,\theta)$ to denote a random variable sampled from a _gamma
distribution_ with shape $k>0$ and scale $\theta>0$.
###### Lemma 20
[(Papoulis and Pillai, 2002)] If a random variable $X$ follows Laplace
Distribution with mean zero and scale $b$, $|X|\sim\mathrm{Exp}(b)$.
###### Lemma 21
[(Papoulis and Pillai, 2002)] If $X_{1},...,X_{n}$ are $n$ i.i.d. random
variables each following the Exponential Distribution with scale $b$,
$\sum_{i=1}^{n}X_{i}\sim\textrm{Gamma}(n,b)$.
###### Lemma 22
If $X_{1}$ and $X_{2}$ are two i.i.d. Gamma$(n,\theta)$ random variables, the
probability density function for the random variable $T=|X_{1}-X_{2}|/\theta$
is given by
$P_{T}(t;n,\theta)=\frac{2^{1-n}t^{n-\frac{1}{2}}K_{n-\frac{1}{2}}(t)}{\sqrt{2\pi}\Gamma(n)\theta}$
where $K_{n-\frac{1}{2}}$ is the modified Bessel function of second kind.
Proof Let $X_{1}$ and $X_{2}$ be two i.i.d. $\mathrm{Gamma}(n,\theta)$ random
variables. Characteristic function of a Gamma random variable is given as
$\phi_{X_{1}}(z)=\phi_{X_{2}}(z)=(1-\mathrm{i}z\theta)^{-n}.$
Therefore,
$\phi_{X_{1}-X_{2}}(z)=\phi_{X_{1}}(z)\phi_{X_{2}}^{*}(z)=\frac{1}{(1+(z\theta)^{2})^{n}}$
Probability density function for the random variable $X_{1}-X_{2}$ is given
by,
$\displaystyle P_{X_{1}-X_{2}}(x)$
$\displaystyle=\frac{1}{2\pi}\int_{-\infty}^{\infty}e^{-\mathrm{i}zx}\phi_{X_{1}-X_{2}}(z)dz$
$\displaystyle=\frac{2^{1-n}{|\frac{x}{\theta}|}^{n-\frac{1}{2}}K_{n-\frac{1}{2}}(|\frac{x}{\theta}|)}{\sqrt{2\pi}\Gamma(n)\theta}$
where $K_{n-\frac{1}{2}}$ is the modified Bessel function of the second kind.
Let $T=\left|\frac{X_{1}-X_{2}}{\theta}\right|$. Therefore,
$P_{T}(t;n,\theta)=\frac{2^{1-n}t^{n-\frac{1}{2}}K_{n-\frac{1}{2}}(t)}{\sqrt{2\pi}\Gamma(n)\theta}$
We use Mathematica (Inc., 2014) to solve the above integral.
###### Lemma 23
If $X_{1}$ and $X_{2}$ are two i.i.d. Gamma$(n,\theta)$ random variables and
$|X_{1}-X_{2}|\leq M$, then $T^{\prime}=|X_{1}-X_{2}|/\theta$ follows the
distribution with probability density function:
$P_{T^{\prime}}(t;n,\theta,M)=\frac{P_{T}(t;n,\theta)}{\mathbb{P}(T\leq M)},\qquad t\leq M,$
where $P_{T}$ is the probability density function defined in Lemma 22.
###### Lemma 24
For Laplace Mechanism $\mathcal{L}^{\Delta_{f}}_{\varepsilon_{0}}$ with query
$f:\mathcal{D}\rightarrow\mathbb{R}^{k}$ and for any output $Z\subseteq
Range(\mathcal{L}^{\Delta_{f}}_{\varepsilon_{0}})$,
$\varepsilon\leq\varepsilon_{0}$,
$\gamma_{1}\triangleq\mathbb{P}\left[\ln{\left|\frac{\mathbb{P}(\mathcal{L}^{\Delta_{f}}_{\varepsilon_{0}}(x)\in
Z)}{\mathbb{P}(\mathcal{L}^{\Delta_{f}}_{\varepsilon_{0}}(y)\in
Z)}\right|}\leq\varepsilon\right]=\frac{\mathbb{P}(T\leq\varepsilon)}{\mathbb{P}(T\leq\varepsilon_{0})},$
where $T$ follows the distribution in Lemma 22,
$P_{T}(t;k,\frac{\Delta_{f}}{\varepsilon_{0}})$.
Proof Let $x\in\mathcal{D}$ and $y\in\mathcal{D}$ be two datasets such that
$x\sim y$. Let $f:\mathcal{D}\rightarrow\mathbb{R}^{k}$ be some numeric query.
Let $\mathbb{P}_{x}(z)$ and $\mathbb{P}_{y}(z)$ denote the probabilities of
getting the output $z$ for Laplace mechanisms
$\mathcal{L}^{\Delta_{f}}_{\varepsilon_{0}}(x)$ and
$\mathcal{L}^{\Delta_{f}}_{\varepsilon_{0}}(y)$ respectively. For any point
$z\in\mathbb{R}^{k}$ and $\varepsilon\neq 0$,
$\displaystyle~{}\frac{\mathbb{P}_{x}(z)}{\mathbb{P}_{y}(z)}$
$\displaystyle=\prod_{i=1}^{k}\frac{\exp{\left(\frac{-\varepsilon_{0}|f(x_{i})-z_{i}|}{\Delta_{f}}\right)}}{\exp{\left(\frac{-\varepsilon_{0}|f(y_{i})-z_{i}|}{\Delta_{f}}\right)}}$
$\displaystyle=\prod_{i=1}^{k}\exp{\left(\frac{\varepsilon_{0}(|f(y_{i})-z_{i}|-|f(x_{i})-z_{i}|)}{\Delta_{f}}\right)}$
$\displaystyle=\exp{\left(\varepsilon\left[\frac{\varepsilon_{0}\sum_{i=1}^{k}(|f(y_{i})-z_{i}|-|f(x_{i})-z_{i}|)}{\varepsilon\Delta_{f}}\right]\right)}.$
(14)
By Definition 4,
$(f(x)-z),(f(y)-z)\sim\textrm{Lap}(\Delta_{f}/\varepsilon_{0}).$ (15)
Application of Lemma 20 and Lemma 21 yields,
$\sum_{i=1}^{k}\left(|f(x_{i})-z_{i}|\right)\sim\textrm{Gamma}(k,\Delta_{f}/\varepsilon_{0}).$
(16)
Using Equations 15 and 16, and Lemmas 19 and 23, we get
$\left(\frac{\varepsilon_{0}}{\Delta_{f}}\sum_{i=1}^{k}\left|\,|f(y_{i})-z_{i}|-|f(x_{i})-z_{i}|\,\right|\right)\sim P_{T^{\prime}}(t;k,\Delta_{f}/\varepsilon_{0},\Delta_{f}),$ (17)
since $\sum_{i=1}^{k}\left|\,|f(y_{i})-z_{i}|-|f(x_{i})-z_{i}|\,\right|\leq\Delta_{f}$.
Therefore,
$\mathbb{P}\left(\left[\frac{\varepsilon_{0}}{\Delta_{f}}\sum_{i=1}^{k}\left|\,|f(y_{i})-z_{i}|-|f(x_{i})-z_{i}|\,\right|\right]\leq\varepsilon\right)=\frac{\mathbb{P}(T\leq\varepsilon)}{\mathbb{P}(T\leq\varepsilon_{0})},$
(18)
where $T$ follows the distribution in Lemma 22. We use Mathematica (Inc.,
2014) to analytically compute,
$\mathbb{P}(T\leq x)\propto\left({}_{1}F_{2}\left(\frac{1}{2};\frac{3}{2}-k,\frac{3}{2};\frac{x^{2}}{4}\right)\sqrt{\pi}\,4^{k}x\right)-\left(2\,{}_{1}F_{2}\left(k;\frac{1}{2}+k,k+1;\frac{x^{2}}{4}\right)x^{2k}\Gamma(k)\right)$
where ${}_{1}F_{2}$ is the regularised generalised hypergeometric function as
defined in (Askey and Daalhuis, 2010). From Equation A and 18,
$\mathbb{P}\left[\ln{\left|\frac{\mathbb{P}(\mathcal{L}_{\varepsilon_{0}}^{\Delta_{f}}(x)\in
S)}{\mathbb{P}(\mathcal{L}_{\varepsilon_{0}}^{\Delta_{f}}(y)\in
S)}\right|}\leq\varepsilon\right]=\frac{\mathbb{P}(T\leq\varepsilon)}{\mathbb{P}(T\leq\varepsilon_{0})}.$
This completes the proof of Theorem 12.
###### Corollary 25
Laplace Mechanism $\mathcal{L}^{\Delta_{f}}_{\varepsilon_{0}}$ with
$f:\mathcal{D}\rightarrow\mathbb{R}^{k}$ is
$(\varepsilon,\delta)$-probabilistically differentially private where
$\delta=\begin{cases}1-\frac{\mathbb{P}(T\leq\varepsilon)}{\mathbb{P}(T\leq\varepsilon_{0})}&\quad\varepsilon\leq\varepsilon_{0}\\\
0&\quad\varepsilon>\varepsilon_{0}\end{cases}$
and $T$ follows $\mathrm{BesselK}(k,\Delta_{f}/\varepsilon_{0})$.
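To make Corollary 25 concrete, the following is a minimal Monte Carlo sketch (our own illustration, not code from the paper) that estimates $\delta$ by sampling $T=|X_{1}-X_{2}|/\theta$ with $X_{1},X_{2}\sim\mathrm{Gamma}(k,\theta)$ and $\theta=\Delta_{f}/\varepsilon_{0}$; all names are illustrative.

```python
import numpy as np

def delta_at_risk(eps, eps0, k, delta_f, n_samples=1_000_000, seed=0):
    """Estimate delta = 1 - P(T <= eps) / P(T <= eps0) for eps <= eps0."""
    if eps > eps0:
        return 0.0
    rng = np.random.default_rng(seed)
    theta = delta_f / eps0
    x1 = rng.gamma(shape=k, scale=theta, size=n_samples)
    x2 = rng.gamma(shape=k, scale=theta, size=n_samples)
    t = np.abs(x1 - x2) / theta              # T ~ BesselK(k, Delta_f / eps0)
    return 1.0 - np.mean(t <= eps) / np.mean(t <= eps0)

# Example: a 10-dimensional query with sensitivity 1 and eps0 = 1.0
print(delta_at_risk(eps=0.5, eps0=1.0, k=10, delta_f=1.0))
```

At $\varepsilon=\varepsilon_{0}$ the estimate returns $\delta=0$, recovering pure $\varepsilon_{0}$-differential privacy.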
## B Proof of Theorem 13 (Section 4.2)
Proof Let $x$ and $y$ be any two neighbouring datasets sampled from the data generating distribution $\mathcal{G}$. Let $\Delta_{S_{f}}$ be the sampled sensitivity for the query $f:\mathcal{D}\rightarrow\mathbb{R}^{k}$. Let
$\mathbb{P}_{x}(z)$ and $\mathbb{P}_{y}(z)$ denote the probabilities of
getting the output $z$ for Laplace mechanisms
$\mathcal{L}_{\varepsilon}^{\Delta_{S_{f}}}(x)$ and
$\mathcal{L}_{\varepsilon}^{\Delta_{S_{f}}}(y)$ respectively. For any point
$z\in\mathbb{R}^{k}$ and $\varepsilon\neq 0$,
$\displaystyle\frac{\mathbb{P}_{x}(z)}{\mathbb{P}_{y}(z)}$
$\displaystyle=\prod_{i=1}^{k}\frac{\exp{\left(\frac{-\varepsilon|f(x_{i})-z_{i}|}{\Delta_{S_{f}}}\right)}}{\exp{\left(\frac{-\varepsilon|f(y_{i})-z_{i}|}{\Delta_{S_{f}}}\right)}}$
$\displaystyle=\exp{\left(\frac{\varepsilon\sum_{i=1}^{k}(|f(y_{i})-z_{i}|-|f(x_{i})-z_{i}|)}{\Delta_{S_{f}}}\right)}$
$\displaystyle\leq\exp{\left(\frac{\varepsilon\sum_{i=1}^{k}|f(y_{i})-f(x_{i})|}{\Delta_{S_{f}}}\right)}$
$\displaystyle=\exp{\left(\frac{\varepsilon\lVert
f(y)-f(x)\rVert_{1}}{\Delta_{S_{f}}}\right)}$ (19)
We used the triangle inequality in the penultimate step.
Using the trick in the work of (Rubinstein and Aldà, 2017), we define the following events. Let $B^{\Delta_{S_{f}}}$ denote the set of pairs of neighbouring datasets sampled from $\mathcal{G}$ for which the sensitivity random variable is upper bounded by $\Delta_{S_{f}}$. Let $C_{\rho}^{\Delta_{S_{f}}}$ denote the event that the empirical distribution $F_{n}$ deviates from the unknown cumulative distribution $F$ of the sensitivity $S$ by at most the accuracy value $\rho$. These events are defined in Equation 20.
$\displaystyle B^{\Delta_{S_{f}}}$ $\displaystyle\triangleq\{x,y\sim\mathcal{G}~\text{such that}~\lVert f(y)-f(x)\rVert_{1}\leq\Delta_{S_{f}}\}$
$\displaystyle C_{\rho}^{\Delta_{S_{f}}}$ $\displaystyle\triangleq\left\{\sup_{\Delta}|F_{n}(\Delta)-F(\Delta)|\leq\rho\right\}$ (20)
$\displaystyle\mathbb{P}(B^{\Delta_{S_{f}}})$ $\displaystyle=\mathbb{P}(B^{\Delta_{S_{f}}}|C_{\rho}^{\Delta_{S_{f}}})\mathbb{P}(C_{\rho}^{\Delta_{S_{f}}})+\mathbb{P}(B^{\Delta_{S_{f}}}|\overline{C_{\rho}^{\Delta_{S_{f}}}})\mathbb{P}(\overline{C_{\rho}^{\Delta_{S_{f}}}})$ (21)
$\displaystyle\geq\mathbb{P}(B^{\Delta_{S_{f}}}|C_{\rho}^{\Delta_{S_{f}}})\mathbb{P}(C_{\rho}^{\Delta_{S_{f}}})$
$\displaystyle=F_{n}(\Delta_{S_{f}})\mathbb{P}(C_{\rho}^{\Delta_{S_{f}}})$
$\displaystyle\geq\gamma_{2}\cdot(1-2e^{-2\rho^{2}n})$ (22)
In the last step, we use the definition of the sampled sensitivity to get the value of the first term. The last term is obtained using the DKW inequality, as stated in (Massart et al., 1990), where $n$ denotes the number of samples used to build the empirical distribution of the sensitivity, $F_{n}$.
From Equation 19, we see that if $\lVert f(y)-f(x)\rVert_{1}$ is less than or equal to the sampled sensitivity, then the Laplace mechanism $\mathcal{L}_{\varepsilon}^{\Delta_{S_{f}}}$ satisfies $\varepsilon$-differential privacy. Equation 22 provides the lower bound on the probability of the event $\lVert f(y)-f(x)\rVert_{1}\leq\Delta_{S_{f}}$. Thus, combining Equation 19 and Equation 22 completes the proof.
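The following hedged sketch (an assumption about one possible implementation, not the paper's code) shows how the sampled sensitivity could be estimated empirically as the $\gamma_{2}$-quantile of the sensitivity samples, together with the DKW-type confidence term $1-2e^{-2\rho^{2}n}$ used above; `f` and `sample_neighbours` are hypothetical placeholders for the query and for drawing neighbouring datasets from $\mathcal{G}$.

```python
import numpy as np

def sampled_sensitivity(f, sample_neighbours, gamma2, rho, n):
    s = np.empty(n)
    for i in range(n):
        x, y = sample_neighbours()             # x ~ G, y a neighbour of x
        s[i] = np.sum(np.abs(f(x) - f(y)))     # one L1 sensitivity sample
    delta_sf = np.quantile(s, gamma2)          # empirical F_n^{-1}(gamma_2)
    confidence = 1.0 - 2.0 * np.exp(-2.0 * rho**2 * n)  # DKW lower bound
    return delta_sf, confidence
```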
## C Proof of Theorem 15 (Section 4.3)
Proof of Theorem 15 builds upon the ideas from the proofs of the previous two cases. In addition to the events defined in Equation 20, we define an
additional event $A_{\varepsilon_{0}}^{\Delta_{S_{f}}}$, defined in Equation
23, as a set of outputs of Laplace mechanism
$\mathcal{L}_{\varepsilon_{0}}^{\Delta_{S_{f}}}$ that satisfy the constraint
of $\varepsilon$-differential privacy for a specified privacy at risk level
$\varepsilon$.
$A_{\varepsilon_{0}}^{\Delta_{S_{f}}}\triangleq\left\{z\sim\mathcal{L}_{\varepsilon_{0}}^{\Delta_{S_{f}}}~:~\ln{\left|\frac{\mathcal{L}_{\varepsilon_{0}}^{\Delta_{S_{f}}}(x)}{\mathcal{L}_{\varepsilon_{0}}^{\Delta_{S_{f}}}(y)}\right|}\leq\varepsilon,\;x,y\sim\mathcal{G}\right\}$
(23)
###### Corollary 26
$\mathbb{P}(A_{\varepsilon_{0}}^{\Delta_{S_{f}}}|B^{\Delta_{S_{f}}})=\frac{\mathbb{P}(T\leq\varepsilon)}{\mathbb{P}(T\leq\eta\varepsilon_{0})}$
where $T$ follows the distribution $P_{T}(t;k,\Delta_{S_{f}}/\varepsilon_{0})$
in Lemma 22 and $\eta=\frac{\Delta_{f}}{\Delta_{S_{f}}}$.
Proof We provide a sketch of the proof, which follows the proof of Lemma 24. For a Laplace mechanism calibrated with the sampled sensitivity $\Delta_{S_{f}}$ and privacy level $\varepsilon_{0}$, Equation 17 translates to
$\left(\frac{\varepsilon_{0}}{\Delta_{S_{f}}}\sum_{i=1}^{k}\left|\,|f(y_{i})-z_{i}|-|f(x_{i})-z_{i}|\,\right|\right)\sim P_{T^{\prime}}(t;k,\Delta_{S_{f}}/\varepsilon_{0},\Delta_{f}),$
since $\sum_{i=1}^{k}\left|\,|f(y_{i})-z_{i}|-|f(x_{i})-z_{i}|\,\right|\leq\Delta_{f}$. Using Lemma 23 and Equation 18,
$\mathbb{P}(A_{\varepsilon_{0}}^{\Delta_{S_{f}}})=\frac{\mathbb{P}(T\leq\varepsilon)}{\mathbb{P}(T\leq\eta\varepsilon_{0})},$
where $T$ follows the distribution $P_{T}(t;k,\Delta_{S_{f}}/\varepsilon_{0})$ and $\eta=\frac{\Delta_{f}}{\Delta_{S_{f}}}$.
For this case, we do not assume the knowledge of the sensitivity of the query.
Using the empirical estimation presented in Section 4.2, if we choose the
sampled sensitivity for privacy at risk $\gamma_{2}=1$, we obtain an
approximation for $\eta$.
###### Lemma 27
For a given value of accuracy parameter $\rho$,
$\frac{\Delta_{f}}{\Delta_{S_{f}}^{*}}=1+\mathcal{O}\left(\frac{\rho}{\Delta_{S_{f}}^{*}}\right)$
where $\Delta_{S_{f}}^{*}=F_{n}^{-1}(1)$.
$\mathcal{O}\left(\frac{\rho}{\Delta_{S_{f}}^{*}}\right)$ denotes order of
$\frac{\rho}{\Delta_{S_{f}}^{*}}$, i.e.,
$\mathcal{O}\left(\frac{\rho}{\Delta_{S_{f}}^{*}}\right)=k\frac{\rho}{\Delta_{S_{f}}^{*}}$
for some $k\geq 1$.
Proof For a given value of accuracy parameter $\rho$ and any $\Delta>0$,
$F_{n}(\Delta)-F(\Delta)\leq\rho$
Since above inequality is true for any value of $\Delta$, let
$\Delta=F^{-1}(1)$. Therefore,
$\displaystyle F_{n}(F^{-1}(1))-F(F^{-1}(1))$ $\displaystyle\leq\rho$
$\displaystyle F_{n}(F^{-1}(1))$ $\displaystyle\leq 1+\rho$ (24)
Since a cumulative distribution function is $1$-Lipschitz (Papoulis and Pillai, 2002),
$\displaystyle|F_{n}(F_{n}^{-1}(1))-F_{n}(F^{-1}(1))|$ $\displaystyle\leq|F_{n}^{-1}(1)-F^{-1}(1)|$
$\displaystyle|F_{n}(F_{n}^{-1}(1))-F_{n}(F^{-1}(1))|$ $\displaystyle\leq|\Delta_{S_{f}}^{*}-\Delta_{f}|$
$\displaystyle\rho$ $\displaystyle\leq\Delta_{f}-\Delta_{S_{f}}^{*}$
$\displaystyle 1+\frac{\rho}{\Delta_{S_{f}}^{*}}$ $\displaystyle\leq\frac{\Delta_{f}}{\Delta_{S_{f}}^{*}}$
where we used the result from Equation 24 in the third step. Introducing $\mathcal{O}\left(\frac{\rho}{\Delta_{S_{f}}^{*}}\right)$ completes the proof.
###### Lemma 28
For Laplace Mechanism $\mathcal{L}_{\varepsilon_{0}}^{\Delta_{S_{f}}}$ with
sampled sensitivity $\Delta_{S_{f}}$ of a query
$f:\mathcal{D}\rightarrow\mathbb{R}^{k}$ and for any $Z\subseteq
Range(\mathcal{L}_{\varepsilon}^{\Delta_{S_{f}}})$,
$\mathbb{P}\left[\ln{\left|\frac{\mathbb{P}(\mathcal{L}_{\varepsilon_{0}}^{\Delta_{S_{f}}}(x)\in Z)}{\mathbb{P}(\mathcal{L}_{\varepsilon_{0}}^{\Delta_{S_{f}}}(y)\in Z)}\right|}\leq\varepsilon\right]\geq\frac{\mathbb{P}(T\leq\varepsilon)}{\mathbb{P}(T\leq\eta\varepsilon_{0})}\gamma_{2}(1-2e^{-2\rho^{2}n})$
where $n$ is the number of samples used to find the sampled sensitivity, $\rho\in[0,1]$ is an accuracy parameter and $\eta=\frac{\Delta_{f}}{\Delta_{S_{f}}}$. The outer probability is calculated with respect to the support of the data-generating distribution $\mathcal{G}$.
Proof The proof follows from the proof of Lemma 24 and Corollary 26. Consider,
$\displaystyle\mathbb{P}(A_{\varepsilon_{0}}^{\Delta_{S_{f}}})$
$\displaystyle\geq\mathbb{P}(A_{\varepsilon_{0}}^{\Delta_{S_{f}}}|B^{\Delta_{S_{f}}})\mathbb{P}(B^{\Delta_{S_{f}}}|C_{\rho}^{\Delta_{S_{f}}})\mathbb{P}(C_{\rho}^{\Delta_{S_{f}}})$
$\displaystyle\geq\frac{\mathbb{P}(T\leq\varepsilon)}{\mathbb{P}(T\leq\eta\varepsilon_{0})}\cdot\gamma_{2}\cdot(1-2e^{-2\rho^{2}n})$
(25)
The first term in the final step of Equation 25 follows from the result in
Corollary 26 where $T$ follows
$\mathrm{BesselK}(k,\frac{\Delta_{S_{f}}}{\varepsilon_{0}})$. It is the
probability with which the Laplace mechanism
$\mathcal{L}_{\varepsilon_{0}}^{\Delta_{S_{f}}}$ satisfies
$\varepsilon$-differential privacy for a given value of sampled sensitivity.
Probability of occurrence of event $A_{\varepsilon_{0}}^{\Delta_{S_{f}}}$
calculated by accounting for both explicit and implicit sources of randomness
gives the risk for privacy level $\varepsilon$. Thus, the proof of Lemma 28
completes the proof for Theorem 15.
Comparing the equations in Theorem 15 and Lemma 28, we observe that
$\gamma_{3}\triangleq\frac{\mathbb{P}(T\leq\varepsilon)}{\mathbb{P}(T\leq\eta\varepsilon_{0})}\cdot\gamma_{2}.$ (26)
The privacy at risk, as defined in Equation 26, is free from the term that accounts for the accuracy of the sampled estimate. If we know the cumulative distribution of the sensitivity, we do not suffer from the uncertainty introduced by sampling from the empirical distribution.
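As a hedged numerical companion to Equation 26 (our own sketch, not code from the paper), $\gamma_{3}$ can be estimated with the same Gamma-difference sampling used for Corollary 25, with the denominator threshold scaled by $\eta\varepsilon_{0}$; all names are illustrative.

```python
import numpy as np

def gamma3(eps, eps0, k, delta_sf, eta, gamma2, n_samples=1_000_000, seed=0):
    rng = np.random.default_rng(seed)
    theta = delta_sf / eps0
    # T ~ BesselK(k, Delta_{S_f} / eps0) via a difference of Gamma samples
    t = np.abs(rng.gamma(k, theta, n_samples)
               - rng.gamma(k, theta, n_samples)) / theta
    return gamma2 * np.mean(t <= eps) / np.mean(t <= eta * eps0)
```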
# Dynamic Queue-Jump Lane for Emergency Vehicles under Partially Connected
Settings: A Multi-Agent Deep Reinforcement Learning Approach
Haoran Su, Kejian Shi, Joseph Y.J. Chow, Li Jin
Tandon School of Engineering, New York University, Brooklyn, NY 11201
###### Abstract
Emergency vehicle (EMV) service is a key function of cities and is exceedingly
challenging due to urban traffic congestion. A main reason behind EMV service
delay is the lack of communication and cooperation between vehicles blocking
EMVs. In this paper, we study the improvement of EMV service under V2X
connectivity. We consider the establishment of dynamic queue jump lanes
(DQJLs) based on real-time coordination of connected vehicles in the presence
of non-connected human driven vehicles. We develop a novel Markov decision
process formulation for the DQJL coordination strategies, which explicitly
accounts for the uncertainty of drivers’ yielding pattern to approaching EMVs.
Based on pairs of neural networks representing actors and critics for agent vehicles, we develop a multi-agent actor-critic deep reinforcement learning algorithm which handles a varying number of vehicles and a random proportion of connected vehicles in the traffic. Approaching the optimal coordination strategies via indirect and direct reinforcement learning, we present two schemata to address multi-agent reinforcement learning for this connected vehicle application. Both approaches are validated, on the micro-simulation test bed SUMO, to establish a DQJL quickly and safely. Validation results reveal that DQJL coordination strategies save up to 30% of the time for EMVs to pass a link-level intelligent urban roadway compared with the baseline scenario.
###### keywords:
Connected Vehicles, Multi-agent System , Reinforcement Learning, Emergency
Vehicles, V2X
††journal: Transportation Research Part C
## 1 Introduction
Increasing population and urbanization have made it exceedingly challenging to
operate urban emergency services efficiently. For example, historical data
from New York City, USA [1] shows that the number of annual emergency vehicle
(EMV) incidents has grown from 1,114,693 in 2004 to 1,352,766 in 2014, with
corresponding average response time of 7:53 min and 9:23 min, respectively
[2]. This means an approximately 20% increase in response time in ten years.
In the case of cardiac arrest, every minute until defibrillation reduces
survival chances by 7% to 10%, and after 8 minutes there is little chance of
survival [3]. Cities are less resilient with worsening response time from EMVs
(ambulances, fire trucks, police cars), mainly due to traffic congestion.
The performance of these EMV service systems in congested traffic can be
improved with technology. As a core of modern ITSs, wireless vehicle-to-everything (V2X) connectivity, such as recent 5G cellular networks, provides significant opportunities for improving urban emergency response. On the one
hand, wireless connectivity provides EMVs the traffic conditions on possible
routes between the station (hospital, fire station, police station, etc.) and
the call, which enables more efficient dispatch and routing. Through V2I
communication with the EMVs on duty, traffic managers can broadcast the
planned route of EMVs to non-EMVs that may be affected, and non-EMVs can
cooperate to form alternative queue-jump lanes for approaching EMVs.
In this study, we introduce a novel concept of dynamic queue-jump lanes
(DQJL). A queue jump lane (QJL) [4, 5] is a lane type used for bus operation
so that buses can bypass long queues prior to intersections. In the context of
EMVs, we define a DQJL to be a dynamic lane that is formed while an EMV is en-
route to clear the downstream space. This is achieved by using V2X technology
to inform downstream vehicles of the temporary change in the lane type so that
vehicles would shift out of the lane. Doing so should significantly reduce the
response time for EMVs on the urban roadway. By monitoring and managing the
connected components of the roadway, we are able to produce real-time
coordination strategies to clear a path for the approaching EMVs.
DQJL involves several uncertainties that impact its implementation. One is
how far downstream should the V2X set the lane type? Setting the DQJL too far
downstream would negatively impact the background traffic. Setting it too
short might not give vehicles enough time to comply. Secondly, a DQJL strategy
can be implemented with an allocation recommendation to inform downstream
vehicles of which open spaces to pull over to. Open spaces are finite and
limited, and their usage depends on drivers’ compliance. Poorly coordinated
allocations can lead to low compliance due to difficult maneuvering
requirements. The latter decision is called “DQJL with coordination”, while “DQJL without coordination” indicates that no allocation recommendation is given.
Considering the stochastic driving behaviors and other uncertainties in the road environment, we formulate the problem as a partially observable Markov decision process (MDP). Aiming for real-time decision making when coordinating, we adopt multi-agent actor-critic deep reinforcement learning to approach optimal coordination strategies. The methodology considers partially connected settings, meaning that our frameworks are flexible with respect to varying numbers of vehicles and connected-vehicle penetration rates.
The main contributions of the presented study are as follows: we introduce the concept of DQJL and solve the objectives of DQJL applications under a stochastic road environment; we develop a multi-agent actor-critic deep reinforcement learning algorithm which copes with varying numbers of agent vehicles and penetration rates; and, addressing the real-time optimal coordination strategies via indirect and direct approaches, we validate that the DQJL-coordinated system performs significantly better than the baseline scenario without any coordination.
The rest of the paper is organized in the following sections. In Sec. 2, we
overview related studies on queue-jump lanes for EMVs and actor-critic
reinforcement learning methods in similar ITS applications. In Sec. 3, we
elaborate the DQJL problem formulation as a Markov decision process. In Sec.
4, we demonstrate our methodology, including the indirect and direct multi-
agent reinforcement learning approaches. We elaborate our experimental design
and synthesize a simulation dataset for the training in Sec. 5. We validate
the proposed methodology and provide insights on the results in Sec. 6. At
last, conclusions are made in Sec. 7.
Figure 1: DQJL Coordination for an emergency vehicle.
## 2 Literature Review
In this section, we discuss literature available on Queue jump lane in Subsec.
2.1, and existing multi-agent deep reinforcement learning techniques on
similar connected vehicle applications in Subsec. 2.2.
### 2.1 Queue jump lane for emergency vehicles
Queue jump lane (QJL) has never been used for EMV deployment, and applying this technology to EMV deployment with the aid of connected vehicle technologies is considered a novel operational strategy. Although the QJL is relatively new, the literature has already demonstrated its positive effects in reducing travel time variability, especially when used in conjunction with transit signal priority (TSP). However, existing studies are all based on moving-
bottleneck models for buses [4, 5]; we are borrowing this bus operation
strategy for EMV deployment in our setting, since EMVs typically move faster
than non-EMVs and since EMVs can “preempt” non-EMV traffic because of their
priority.
At the same time, with only siren technologies, other vehicles often do not
get enough warning time from the EMVs. Even then, there is a lack of clarity
in the direction of the route to avoid. The confusion, particularly under
highly congested scenarios, leads to increased delays as mentioned above, and
also to 4 to 17 times higher accident rates [6] and increased severity of
collisions [7], which can further lead to increased response time. A study by
Savolainen et al. [8] using data from Detroit, Michigan, verified this
sensitivity of driver behaviors to different ITS communication methods for EMV
route information.
Hannoun et al. [9] use mixed integer linear programming to develop a static path clearance control strategy for approaching EMVs. QJLs have not been studied as a dynamic control strategy, and there is an urgent need to capture uncertainties in realistic traffic conditions, especially in events under a non-deterministic setting such as yielding to an approaching EMV. To apply a dynamic control strategy for vehicles' motion planning in QJL establishment, we
introduce the concept of dynamic queue jump lane (DQJL), see Fig. 1. During
the process of clearing a QJL for the passing EMV, non-EMVs are constantly
monitored and instructed with actions so that DQJLs are established quickly
and safely. In the presented work, we also investigate the DQJL applications
under a partially connected scenario, in which some vehicles are equipped with communication devices that establish complete communication channels between themselves and the infrastructure, while others are not.
### 2.2 Actor-critic method in ITS applications
Many state-of-the-art ITS applications are adopting deep reinforcement
learning techniques to realize connected vehicle applications. Among
reinforcement learning schemes, the actor critic method, which combines the
Q-learning and policy gradient, has been favored by extensive connected
vehicle applications on cooperative traffic signal controls [10, 11, 12],
connected vehicle platooning [13], or trajectory planning [14, 15].
Extending the actor-critic algorithm from a single “super-agent” perspective to the multi-agent setting allows us to concentrate on microscopic traffic management problems under connected settings. Under the multi-agent setting, it is assumed, by the essential definition of multi-agent systems, that each vehicle does not have full knowledge of the road environment but can partially observe its surroundings. The local observation contains information obtained through physical surroundings, such as neighboring vehicles’ positions, as well as messages through communication channels among connected vehicles and the infrastructure. Different actor-critic architectures are used by these connected vehicle applications for various purposes.
In the fully centralized setting, all information is processed by a centralized control system. Namely, the policy and value networks for all vehicles, connected or autonomous, are stored in the centralized controller. All vehicles report their own observation spaces to the controller and, as a result, the controller has full knowledge of the road environment. Decisions for each connected vehicle are then made by the centralized controller. The fully centralized setting based on actor-critic methods appears in vehicle communication establishment [16], coordination through intersections [17], and a previous study on DQJL coordination under a fully connected scenario [18]. However, the fully centralized setting does not accommodate this application due to its limited usage in a partially connected road environment, which restricts the application’s practicality in the short to medium term. The centralized setting also raises concerns about high latency and synchronization costs, making real-time decision making among vehicles extremely difficult.
At the same time, fully decentralized settings are broadly adopted in ITS applications. Each vehicle has its own policy network and value network deployed on board and interacts with the environment independently. Methodologies addressing multi-agent vehicle applications under this setting originate from the independent actor-critic (IAC) [19]. Plenty of robotic motion planning vehicle applications [20, 21] are based on the fully decentralized setting. The centralized controller, under this scheme, is no longer needed, and communications among vehicles are significantly reduced. However, treating other vehicles as part of the road environment carries a substantial risk of non-convergence, as the environment becomes non-stationary, and vehicles have difficulty identifying their roles in cooperative tasks. Each individual vehicle is also required to be equipped with advanced sensors, such as Lidar and cameras, to fully observe the road environment, significantly increasing the establishment cost of the system.
Proposed by [22, 23], the centralized training with decentralized execution
framework overcomes challenges in previous settings. Under this framework,
each vehicle has its own policy network deployed on board, while the
corresponding value network is stored in the centralized controller. During
the training stage, the centralized controller updates critic through TD
learning with all agents’ observation and their actions. The policy network
then outputs the optimal action based on the vehicle’s local observation.
After the training stage, the value networks are completely abandoned and
vehicles are able to operate based on their own observation spaces of the road
environment and their policy networks, eliminating the concerns on
synchronization. Since the centralized controller trains the value networks together, the road environment becomes stationary because the joint action is known, even though the policy networks may change during the training stage.
The comparisons between these frameworks are summarized in Table. 1. Denoting
the local observation and action for a vehicle as $o^{i}_{t}$ and $a^{i}_{t}$
at step $t$, the joint observation is then denoted as $\mathbf{o}_{t}$ and
joint action as $\mathbf{a}_{t}$. The actor and critic columns indicate whether the input information is local or global. The latency column indicates whether low latency and synchronization costs are strictly required during the execution stage, while the sensor column indicates whether vehicles need to be equipped with Lidar/cameras under the corresponding framework.
Settings | Actor | Critic | Latency | Sensors
---|---|---|---|---
Centralized | $\pi(a^{i}_{t}|\mathbf{o}_{t};\theta_{i})$ | $Q(\mathbf{o}_{t},\mathbf{a}_{t};w_{i})$ | + | -
Decentralized | $\pi(a^{i}_{t}|o^{i}_{t};\theta_{i})$ | $Q(o^{i}_{t},a^{i}_{t};w_{i})$ | - | +
CTDE | $\pi(a^{i}_{t}|o^{i}_{t};\theta_{i})$ | $Q(\mathbf{o}_{t},\mathbf{a}_{t};w_{i})$ | - | -
Table 1: Summary of actor-critic based frameworks for connected vehicles.
Multi-agent actor critic algorithms based on CTDE architecture, such as
MADDPG, are applied in vehicle motion planning through signalized intersection
application [24], traffic flow optimization [25], and vehicle network
resources allocation [26]. The existing literature views vehicles as agents for cooperative or competitive traffic tasks. However, these studies neither handle the heterogeneity of vehicle groups under a partially connected setting nor accommodate a varying number of agents, i.e., vehicles. They also do not explicitly elaborate how to cope with a discrete action space using multi-agent actor-critic algorithms.
## 3 Model Formulation
In this section, we elaborate how we model the DQJL application as a
partially-observed Markov decision process. In Subsec. 3.1, we reveal the
structure of the DQJL coordination framework. The partially connected setting
is explained in Subsec. 3.2, and components of the proposed MDP are
illustrated in Subsecs. 3.3, 3.4, 3.5 and 3.6, respectively. The objective function for the
proposed MDP is presented in Subsec. 3.7.
### 3.1 DQJL coordination framework
In order to model the establishment of DQJL for an emergency vehicle (EMV),
let us take a look at a typical urban road segment. An urban road segment
consists of two lanes facing the same direction. When an EMV on duty
approaches this road segment, the centralized control system based on cellular
network will send out real time yielding coordination instructions to
connected vehicles.
With the cellular-assisted network prevailing in V2X communication [27, 28,
29] nowadays, connected vehicles’ information, including position, velocity, acceleration/deceleration and vehicle-specific attributes such as vehicle length and most comfortable deceleration, is communicated directly among connected
vehicles and the infrastructure.
Figure 2: DQJL coordination framework under cellular-assisted vehicle
networks.
Approaching EMVs remain equipped with siren technologies to provide acoustic
and visual warnings to vehicles, bikers and pedestrians. At the same time,
EMVs’ VANET-based [7] onboard warning systems are able to broadcast emergency
vehicles’ kinematic information to connected vehicles on the roadway. Sensors
and detectors are deployed along the roadway to capture necessary features of
non-connected vehicles. Vehicles’ kinematic information including real time
velocities and positions on the roadway are collected via technologies such as
inductive and magnetic loop sensors [30, 31].
Along the roadway, one or more roadside units (RSU) monitor and identify
whether the approaching vehicles are connected or not. They also serve to
synchronize data and coordinate when dealing with emergency cases. Information
from the connected vehicles via the cellular-assisted vehicle network and data
collected from sensors are processed together in RSUs. The RSUs then send the
processed data to the centralized controller and wait for real-time
coordination strategies as feedback. The RSUs then broadcast coordination
instructions to the corresponding vehicles. The overall coordination framework
is summarized in Fig. 2. At the same time, the communication framework has a
strict demand for low latency and synchronization cost in communication to
ensure the compatibility of real-time coordination.
To ensure the approaching EMV can pass the road segment as fast as possible,
it is assumed the EMV does not perform lane changes while passing, so its average velocity can be maximized. The lane on which the EMV is operating is referred to as the passing lane for convenience. All vehicles originally on the passing lane will pull over onto the other lane, which is referred to as the neighboring lane.
Stochastic road dynamics originate from stochastic driving behaviors. For
example, the uncertainties in drivers’ perception-reaction time result in
different reaction distances, as shown in Fig. 3. Distinct braking abilities of different vehicles bring significantly different braking distances even when they start to decelerate at the same velocity. Stochastic driving behaviors also include randomness in lane-changing patterns when pulling over, etc.
Figure 3: Braking process of a non-EMV after receiving a yielding instruction.
Due to the uncertainties mentioned above, it is beneficial to picture the
controlled system in a Markov decision process (MDP) framework so that the
randomness can be addressed properly. Table. 2 provides notations for
variables used for describing the controlled system.
Table 2: Roadway Environment Notations Notation | Meaning
---|---
$x$ | front bump position of the vehicle
$v$ | velocity of the vehicle
$l$ | length of the vehicle
$b^{*}$ | baseline acceleration/deceleration of the vehicle
$y$ | lane position of the vehicle
$\xi$ | the yielding status of the vehicle
$\phi$ | vehicle category: {CV, HV or trivial vehicle}
$L$ | length of the road segment
$d$ | minimum safety gap between two vehicle
### 3.2 The partially connected mixed traffic
According to the description of the controlled system above, the stream of
vehicles has two categories of vehicles: the first one is the traditional
human-driven vehicles (HV) with no advanced communication equipment and
automation technologies; The second type is the connected vehicles (CV), which
are highly connected within the framework and are able to share real-time
information to the infrastructure or other CVs. For this particular
application, vehicular automated technologies that operate the wheeling or
acceleration of the vehicle are not considered, which means all vehicles are
essentially operated by human drivers. Inspired by control strategies for
partially automated traffic [32, 33], the study develops a set of control
strategies for the partially connected traffic.
On the one hand, HVs and CVs both contribute to the stochastic road dynamics
due to the fact that they are operated by human drivers. All vehicles are
unique with respect to the vehicles length and their deceleration rate as a
result of their distinct braking abilities. All drivers are unique with
respect to their perception reaction time, lane-changing behavior and other
driving styles. These uncertainties constitute the stochastic traffic dynamics
in this link-level segment.
Moreover, CVs with explicit communication channels can be monitored and
“managed” collectively to achieve cooperative tasks, even in the presence of HVs. The cooperation scheme is supposed to handle the stochastic road
dynamics mentioned above. At the same time, although HVs are not spontaneously
sharing information with CVs or the intelligent infrastructure, their state
information is constantly monitored by the infrastructure and observed by
their surrounding CVs.
### 3.3 Agents and state
In this MDP, the agents have two categories: HVs and CVs. The CVs are referred
as active agents since their behaviors are controlled by the learning process,
and HVs as passive agents since their behaviors are not controlled by the
learning process. At step $t$, the state representation of a non-EMV, CV
or HV, is expressed by the following features:
$s^{i}_{t}=[x^{i}_{t},y^{i}_{t},v^{i}_{t},\xi^{i}_{t},l^{i},b^{*}_{i},\phi^{i}],$
where $\xi^{i}_{t}$ indicates whether or not a non-EMV is in the yielding
process for the approaching EMV, and $\phi^{i}$ helps identify whether or not
the $i$th non-EMV is a CV, HV or trivial vehicle.
Therefore, the ground truth state of the whole controlled system at step $t$
is characterized as
$\mathbf{s}_{t}=[s^{0}_{t},s^{1}_{t},s^{2}_{t},\dots,s^{n}_{t}],$
where $s^{0}_{t}$ is the state representation of the EMV and there are $n$
non-EMVs on the roadway when coordination process begins.
Notice that all non-EMVs and their drivers are unique and non-exchangeable.
Non-EMVs are differentiated by their vehicle length $l$ and their baseline
acceleration/deceleration $b^{*}$ due to their unique braking ability. The
drivers are also differentiable due to nature of everyone’s unique driving
style and behavior. Thus, the agents are heterogeneous in this MDP framework.
### 3.4 Observation space
Each non-EMV only observes a part of the road segment. To define the local observation of a non-EMV, the four closest adjacent non-EMVs are selected, which are the
ego vehicle’s leading vehicle, its following vehicle and the leading vehicle
and the following vehicles on the neighboring lane. At the same time, the
approaching EMV can broadcast its real time kinematic information to all CVs
on this road segment, and any non-EMV would include the EMV’s information in
its own local observation. The adjacent vehicles and the approaching EMV impose significant impacts on the ego vehicle’s motion planning decisions.
Fig. 4 demonstrates the local observation captured by any non-EMV on this road
segment.
Figure 4: Example of the local observation captured by any non-EMV on the road
segment.
For the $i$th non-EMV on this road segment, its local observation is described
as
$o^{i}_{t}=[s^{0}_{t},s^{i}_{t},s^{ij}_{t},s^{ik}_{t},s^{il}_{t},s^{im}_{t}],$
where $ij,ik,il,im$ represents $i$th vehicle’s adjacent vehicles at step $t$
respectively, and $s^{0}_{t}$ stands for the state representation of the EMV
at step $t$.
The dimension of any non-EMV’s observation space should be fixed. If a non-EMV
does not have a complete observation space, such as the leading vehicle of a
lane, trivial vehicles are introduced to fill up the observation space. A
trivial vehicle contains insignificant vehicle features and it is labeled
during the learning process, which helps the controller to ignore its
interaction with other vehicles.
### 3.5 Action
For HVs, the actions are primarily determined by the distance between the EMV and the vehicle itself when the driver hears the siren or has visual confirmation of the EMV. This distance-based action rule at step $t$ is described as
$a^{i}_{t}=\begin{cases}0,&\text{if }x^{i}_{t}-x^{0}_{t}\geq L_{HV},\\ 1,&\text{otherwise},\end{cases}$ (1)
where $L_{HV}$ is the distance threshold within which a human driver reacts to the approaching EMV.
For CVs, the agents are human drivers operating the connected vehicles, so the
action instructed to each human driver is to yield or not. Thus, the discrete
action space for agent $i$ at step $t$ is represented as
$a^{i}_{t}\in\{0,1\}$.
Notice that both the yielding and the moving forward instructions are high-level motion planning representations; they do not determine the vehicles’ absolute velocities or accelerations. Given a yielding or moving forward instruction, human drivers drive according to the road environment dynamics in addition to their unique and stochastic driving styles. Defining the action in such a way circumvents the difficulty of explicitly specifying the kinematic attributes as well as the stochastic driving behaviors associated with them.
The joint action of $n$ non-EMVs at step $t$ is then represented as
$\mathbf{a}_{t}=[a^{1}_{t},a^{2}_{t},\dots,a^{n}_{t}].$
### 3.6 Reward
For the DQJL application, three reward terms are defined to achieve the purpose of establishing a DQJL safely and as fast as possible. First of all, to avoid vehicle collisions, a significant negative reward $P_{collision}$ is imposed whenever the safety gap between two same-lane neighboring vehicles is violated:
$r^{collision}_{t}=\begin{cases}P_{collision},&\text{if }x^{i}_{t}+d>x^{j}_{t}-l^{j}\text{ for some }i,j\text{ with }y^{i}_{t}=y^{j}_{t},\\ 0,&\text{otherwise},\end{cases}$
where $j$ denotes the leading vehicle of vehicle $i$ on the same lane.
To motivate non-EMVs to clear the path, the second reward function penalizes
every step elapsed,
$r^{elapsed}_{t}=\begin{cases}-1,&\text{if }x^{0}_{t}-l^{0}<L,\\ 0,&\text{otherwise},\end{cases}$
where $l^{0}$ represents the vehicle length of the approaching EMV. The third
reward function emphasizes the urgency of yielding for each non-EMV in the way
of the EMV. Namely,
$r^{i}_{t}=\begin{cases}-\frac{P_{priority}}{x^{i}_{t}-x^{0}_{t}},&\text{if }x^{i}_{t}\leq L\text{ and }y^{i}_{t}=0,\\ 0,&\text{otherwise},\end{cases}$
where $P_{priority}$ is a penalty constant. This definition prioritizes the yielding of
non-EMVs which are closer to the EMV that are downstream from it. To
summarize, the immediate step reward at step $t$ is
$R_{t}=r^{collision}_{t}+r^{elapsed}_{t}+\sum_{i=1}^{n}r^{i}_{t}(1-y^{i}_{t}).$
Since the application is cooperative, the reward received by each agent is
identical to the team reward, which is
$R^{1}_{t}=R^{2}_{t}=\dots=R^{n}_{t}=R_{t}.$
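A hedged sketch of the three reward terms above (our own illustration, not the paper's code): `P_COLLISION` and `P_PRIORITY` are assumed penalty constants, `states` is a list of `VehicleState`-like objects with `states[0]` being the EMV, and the `c.x > emv.x` guard is added to avoid division by zero.

```python
def step_reward(states, L, d, P_COLLISION=-1000.0, P_PRIORITY=10.0):
    emv, cars = states[0], states[1:]
    r = 0.0
    # Collision penalty: safety gap d violated between same-lane neighbors,
    # where j (ahead) leads i (behind).
    for i in cars:
        for j in cars:
            if i is not j and i.y == j.y and j.x > i.x and j.x - j.l - i.x < d:
                r += P_COLLISION
    # Time penalty while the EMV's rear bump is still inside the segment.
    if emv.x - emv.l < L:
        r -= 1.0
    # Urgency: prioritize yielding of passing-lane vehicles closest to the EMV.
    for c in cars:
        if c.y == 0 and c.x <= L and c.x > emv.x:
            r -= P_PRIORITY / (c.x - emv.x)
    return r  # identical team reward shared by all agents
```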
### 3.7 MDP objective function
Defining the long-term return for the team as $U_{t}$ at step $t$, the
objective function for the DQJL coordination can be represented as
$\max U_{0}=\sum^{\infty}_{k=0}\gamma^{k}R_{k}.$ (2)
Solving (2) not only returns the optimal coordination strategy but also
reveals the minimum amount of time for DQJL establishment. Notice that since
the immediate step rewards are affected by the stochastic road environment,
the expected return reflects the uncertainties during the DQJL coordination
process.
## 4 Multi-Agent Actor-critic Method for Coordination Strategies
In this section, we introduce our approach for DQJL coordination
strategies. Starting with Subsec. 4.1 and 4.2, we revisit the preliminaries of
RL and the actor critic method. In Subsec. 4.3, we show the step-by-step
derivation of the multi-agent actor-critic method for DQJL coordination. In
Subsec. 4.5, we elaborate how to construct a road dynamic model reflecting the
realistic traffic condition, and then employ the multi-agent actor critic
algorithms under centralized training with decentralized execution framework
as the indirect approach. In Subsec. 4.6, we demonstrate the pipeline for the
embedding of the MARL algorithm on Simulation of Urban Mobility (SUMO) as the
direct approach. The deep neural network structures supporting the presented
algorithm as function approximators are presented in Subsec. 4.4.
### 4.1 Bellman optimality
In a fully observable MDP, an agent has full knowledge of the environment state $s\in S$ at step $t$ and selects an action $a$ according to its policy $\pi(a|s)$. The state transition occurs as $s_{t}\rightarrow s_{t+1}$ and an immediate step reward is received in the form of
$R_{t}=R(s_{t},a_{t},s_{t+1})$.
If the MDP has an infinite horizon, the sampled expected return under some policy is calculated as $R^{\pi}_{t}=\sum_{\tau=t}^{\infty}\gamma^{\tau-t}R_{\tau}$, where $\gamma\in[0,1)$ is the discount factor. From there, the Q-function under some policy is defined as
$Q^{\pi}(s,a)=\operatorname{\mathbb{E}}{}[R^{\pi}_{t}|s_{t}=s,a_{t}=a].$ (3)
Hence, the optimal Q-function is obtained from $Q^{*}=\max_{\pi}Q^{\pi}$,
yielding the optimal policy $\pi^{*}$. Under $\pi^{*}$,
$a\in\arg\max_{a_{t+1}}Q^{*}(s,a_{t+1})$.
According to the Bellman equation [34], the optimal Q-function satisfies
$Q^{*}(s,a)=R_{t}+\gamma\sum_{s_{t+1}\in S}p(s_{t+1}|s_{t},a_{t})\max_{a_{t+1}\in A}Q^{*}(s_{t+1},a_{t+1}).$ (4)
Since $R_{t}$ and $p(s_{t+1}|s_{t},a_{t})$ are unknown to the agent, the agent
has to approach the optimality by purely data-driven dynamic programming based
on the collected experience $(s_{t},a_{t},R_{t},s_{t+1})$.
### 4.2 Single-agent actor-critic method
Actor-critic method, introduced by [35], combines the policy gradient
algorithm with value-based learning method. State-of-the-art actor-critic
based methods, such as DDPG [36] and soft actor-critic method [37], exhibit
outstanding performance in a variety of control problems in stochastic
processes, especially in the motion planning control of connected and
autonomous vehicles. Consider an intelligent vehicle as the agent with a
discrete action space, e.g. driving forward and backing up. The vehicle’s
state-value function under some policy $\pi$ is calculated as
$V_{\pi}(s)=\sum_{a\in A}\pi(a|s)Q_{\pi}(a,s),$
where $\pi(a|s)$ represents the policy function and $Q_{\pi}(a,s)$ refers to
the state-action function. $V_{\pi}(s)$ estimates the vehicle’s standing in
the application. With the approximation power of neural networks, the policy
function is parameterized with a set of trainable parameters as
$\pi(a|s;\bm{\theta})$, and the state-action function is parameterized as
$Q_{\pi}(a,s;\mathbf{w})$.
The state-value function is then parameterized as
$V_{\pi}(s;\bm{\theta},\mathbf{w})$. By sampling an action through the
vehicle’s current policy $a\sim\pi(\cdot|s;\bm{\theta})$, the corresponding
next state $s_{t+1}$ and immediate step reward $R_{t}$ are observed.
The value network, i.e the critic, is trained via the temporal difference (TD)
learning. First, $Q(s,a;\mathbf{w})$ and $Q(s_{t+1},a_{t+1};\mathbf{w})$ are
computed and a TD target is set as
$y_{t}=R_{t}+\gamma Q(s_{t+1},a_{t+1};\mathbf{w}).$ (5)
The purpose of setting up (5) is to minimize the difference between the
$y_{t}$ and $Q(s_{t},a_{t};\mathbf{w})$. Mathematically,
$\min L(\mathbf{w})=\frac{1}{2}[y_{t}-Q(s_{t},a_{t};\mathbf{w})]^{2},$
and $\mathbf{w}$ is updated by gradient descent as
$\mathbf{w}_{t+1}=\mathbf{w}_{t}-\alpha\frac{\partial L(\mathbf{w})}{\partial\mathbf{w}}\Big|_{\mathbf{w}=\mathbf{w}_{t}},$ (6)
in which $\alpha$ is the learning rate for the value network.
The policy network, i.e. the actor, is updated through the traditional policy
gradient algorithm. Denoting the policy gradient of an action $a_{t}$ as
$\mathbf{g}(a_{t},\bm{\theta})=\frac{\partial\log\pi(a|s;\bm{\theta}))}{\partial\bm{\theta}}Q(s_{t},a;\mathbf{w})$,
it is easy to derive the policy gradient with respect to the state-value
function as
$\frac{\partial
V(s;\bm{\theta},\mathbf{w}_{t})}{\partial\bm{\theta}}=\operatorname{\mathbb{E}}_{A}[\mathbf{g}(A,\bm{\theta})].$
(7)
By randomly sampling an action via Monte Carlo simulation, $a_{t}\sim\pi(\cdot|s_{t};\bm{\theta}_{t})$, $\mathbf{g}(a_{t},\bm{\theta})$ is guaranteed to be unbiased. $\bm{\theta}$ is then updated through stochastic gradient ascent as
$\bm{\theta}_{t+1}=\bm{\theta}_{t}+\beta\mathbf{g}(a_{t},\bm{\theta}_{t}),$ (8)
where $\beta$ is the learning rate of the policy network. By setting up the
applicable reward functions, a vehicle’s critic will improve its rating
ability with regard to the ground truth estimation. The actor, on the other
hand, will make the optimal decision by the critic’s standard, guiding the
vehicle to achieve the application purpose.
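A generic PyTorch sketch of one actor-critic step implementing Equations (5)-(8) follows (our own illustration; `actor` is assumed to return action probabilities and `critic` to accept a state and an action-index tensor, both assumptions rather than the paper's code).

```python
import torch

def actor_critic_step(actor, critic, opt_a, opt_c, s, a, r, s_next, gamma):
    # a: (batch, 1) int64 indices of the actions actually taken
    with torch.no_grad():
        a_next = torch.multinomial(actor(s_next), 1)     # a_{t+1} ~ pi
        y = r + gamma * critic(s_next, a_next)           # TD target, Eq. (5)
    td_loss = 0.5 * (y - critic(s, a)).pow(2).mean()     # critic objective
    opt_c.zero_grad(); td_loss.backward(); opt_c.step()  # descent step, Eq. (6)

    log_pi = torch.log(actor(s).gather(1, a))            # log pi(a|s; theta)
    pg_loss = -(log_pi * critic(s, a).detach()).mean()   # Eqs. (7)-(8)
    opt_a.zero_grad(); pg_loss.backward(); opt_a.step()
```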
### 4.3 Multi-agent actor-critic method in discrete action space
Under the CTDE framework, each vehicle in the system, HV or CV, has a local policy network, and the corresponding value network is stored in the centralized controller.
The focus of the learning process is on the policy networks of the CVs since
HVs are not controlled by the learning process and their actors are fixed. To
maintain the training consistency, HVs are viewed as agents. The main idea
behind CTDE is that the road environment is considered stationary after
knowing the joint actions of all non-EMVs even when some actors are updated in
the training. Assuming there are $J$ CVs and $N$ non-EMVs in total in the controlled system, the CTDE architecture is summarized in Fig. 5.
Figure 5: The CTDE architecture on the DQJL application.
The controlled system is described as a road environment of fixed length $L$ with $n$ non-EMVs. These non-EMVs have parameterized policies with $\bm{\theta}=\{\theta_{1},\theta_{2},\dots,\theta_{n}\}$, and their policies are summarized as $\bm{\pi}=\{\pi_{1},\pi_{2},\dots,\pi_{n}\}$. The $\theta_{i}$ for HVs are constant and are not updated through training. Another set of parameters $\mathbf{w}=\{w_{1},w_{2},\dots,w_{n}\}$ denotes the value networks stored in the centralized controller.
#### 4.3.1 Experience replay initialization
Suppose at some step $t$, the joint observation of all vehicles is $\mathbf{s}_{t}$, a joint action $\mathbf{a}_{t}$ is taken, and the joint next state $\mathbf{s}_{t+1}$ is observed. An experience replay bank $D=[\mathbf{s}_{t},\mathbf{s}_{t+1},\mathbf{a}_{t},R^{1}_{t},R^{2}_{t},\dots,R^{n}_{t}]$ is maintained to store the joint state, joint next state, joint action and joint rewards so that the centralized training stage can make more efficient use of previous experience.
#### 4.3.2 Centralized critics update
Similar to the TD learning shown in (5) for single-agent RL, a TD target for the multi-agent scenario is defined as
$y_{t}=R_{t}+\gamma
Q^{\bm{\theta}^{\prime}}_{i}(\mathbf{s}_{t+1},\mathbf{a}_{t+1};\mathbf{w}_{i})|_{a^{j}_{t+1}=\theta_{j}^{\prime}(o^{j}_{t+1})},$
(9)
and the target is to minimize the error between $y_{t}$ and
$Q^{\bm{\theta}}_{i}(\mathbf{s}_{t},\mathbf{a}_{t})$ as
$\min
L(\mathbf{w}_{i})=\frac{1}{2}\operatorname{\mathbb{E}}_{(\mathbf{s}_{t},a_{t},R_{t},\mathbf{s}_{t+1})\in
D}[(Q^{\bm{\theta}}_{i}(\mathbf{s}_{t},\mathbf{a}_{t})-y_{t})^{2}].$ (10)
In (9),
$\bm{\theta}^{\prime}=\\{\theta_{1}^{\prime},\theta_{2}^{\prime},\dots,\theta_{n}^{\prime}\\}$
represents the target policy set with delayed parameters. Through gradient descent as in (6), the value network, which accounts for all agents’ states and actions, is updated.
#### 4.3.3 Decentralized actors update
Actor deployed on each agent vehicle is updated through policy gradient. The
gradient of expected return for $i$th non-EMV in the discrete action space is
described as
$\nabla_{\theta_{i}}J(\theta_{i})=\operatorname{\mathbb{E}}_{\mathbf{s},a_{i}\sim\pi_{i}}[\nabla_{\theta_{i}}\pi_{i}(a^{i}_{t}|o^{i}_{t})\nabla_{a^{i}_{t}}Q^{\bm{\pi}}_{i}(\mathbf{s}_{t},\mathbf{a}_{t})],$
(11)
where $Q^{\bm{\pi}}_{i}(\mathbf{s}_{t},\mathbf{a}_{t})$ refers to the action-
value function stored in the centralized critic shown in (10). The centralized
critic value considers all actions from all agents, making the environment
stationary in training. Since the action space is discrete, a Gumbel-softmax
function [38] is employed to produce differentiable action samples from the
policy when performing the Monte Carlo simulation. The differentiable sampled
actions help to train the policy without using the log derivative trick as
used in DDPG [22].
Notice in (10) that even though the experience replay does not explicitly memorize the joint next action, the centralized critic utilizes all agents’ target actors to predict the joint next action, improving the stability of learning during the update. The training of the centralized critics reinforces the core idea of CTDE: knowing the joint action, the road environment is stationary even though some policies change.
Following the idea of the multi-agent actor-critic method in a discrete action space, we summarize the proposed MARL algorithm in Algo. 1 for the optimal DQJL coordination strategy with a varying number of non-EMVs in the presence of HVs; a sketch of the core updates follows the listing.
Algorithm 1 Multi-Agent Actor Critic for DQJL Coordination
1: Initialize $M$ pairs of policy networks and value networks
2: Initialize the experience history $D$ with mini-batch size
3: for each coordination training episode do
4: reset road environment for the initial state $\mathbf{s}$
5: insert trivial vehicles until there are $M$ non-EMVs
6: for each coordination training step do
7: sample an action $a^{i}_{t}$ for each CV as
$a^{i}_{t}=\pi(\cdot|o^{i}_{t})$
8: sample an action $a^{i}_{t}$ for each HV based on (1)
9: execute $\mathbf{a}_{t}$ and observe $\mathbf{s}_{t+1}$ and $R_{t}$
10: store $(\mathbf{s}_{t},\mathbf{a}_{t},R_{t},\mathbf{s}_{t+1})$ into $D$
11: update state $\mathbf{s}_{t}\xleftarrow{}\mathbf{s}_{t+1}$
12: for non-EMV = $1$ to $M$ do
13: if the vehicle is non-trivial then
14: draw a mini-batch sample
$(\mathbf{s}_{t},\mathbf{a}_{t},R_{t},\mathbf{s}_{t+1})$ from $D$
15: set $y_{i}$ and update the critic according to (10)
16: if the vehicle is a CV then
17: update the actor according to (11)
18: end if
19: end if
20: end for
21: update target networks by
$\theta_{i}^{\prime}\xleftarrow{}(1-\tau)\theta_{i}^{\prime}+\tau\theta_{i}$
22: end for
23: end for
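Below is a hedged PyTorch-style sketch of the core updates in Algorithm 1 for one CV $i$: the centralized critic TD update (Equations 9-10) and the decentralized actor update through differentiable Gumbel-softmax samples (Equation 11). All module interfaces and the batch structure (lists of per-agent tensors) are assumptions for illustration, and the actors are assumed to output logits.

```python
import torch
import torch.nn.functional as F

def update_agent_i(i, actors, target_actors, critic_i, target_critic_i,
                   opt_actor_i, opt_critic_i, batch, gamma):
    obs, acts, rew, obs_next = batch   # lists of per-agent tensors from D
    with torch.no_grad():
        # Joint next action predicted by all target actors (Eq. 9)
        acts_next = [F.gumbel_softmax(pi(o), hard=True)
                     for pi, o in zip(target_actors, obs_next)]
        y = rew + gamma * target_critic_i(obs_next, acts_next)
    critic_loss = 0.5 * F.mse_loss(critic_i(obs, acts), y)       # Eq. (10)
    opt_critic_i.zero_grad(); critic_loss.backward(); opt_critic_i.step()

    # Replace agent i's stored action by a differentiable policy sample
    acts_pg = list(acts)
    acts_pg[i] = F.gumbel_softmax(actors[i](obs[i]), hard=True)
    actor_loss = -critic_i(obs, acts_pg).mean()                   # Eq. (11)
    opt_actor_i.zero_grad(); actor_loss.backward(); opt_actor_i.step()
```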
### 4.4 Deep neural network architecture
Vehicle trajectories are temporally sensitive data, and vehicles maintain action consistency while yielding to an approaching EMV, i.e., a yielding vehicle decelerates and changes lane until it pulls over onto the neighboring lane. Utilizing a long short-term memory (LSTM) layer as the output layer of the actor further reduces non-stationarity. The action space for this application is binary, so the LSTM layer is followed by a Gumbel-softmax function to output the probabilities of yielding or driving forward, respectively.
As introduced in Algo. 1, the proposed methodology can handle a varying number of vehicles in the system by fixing the dimension of the state representations. Adopting a technique similar to a previous study [18], trivial vehicles are added into the controlled system until the number of non-EMVs matches the $M$ pairs of networks. Although some networks are idle through the process, this technique circumvents the challenge in many connected vehicle applications where the number of agents varies, and ensures the proposed methodology’s engineering practicality.
See Fig. 6 for the detailed neural network architectures and dimensions of the policy network and value network. The policy network takes as input the vehicle’s local observation, processed through a double-layer multi-layer perceptron (MLP). Each layer of the MLP consists of a normalization layer, a linear layer and a ReLU function. An LSTM layer is then attached, and two probabilities result from the Gumbel-softmax function, standing for the vehicle’s probabilities of yielding or not at this step, respectively.
The value network takes as input the road environment state representation and the joint action at this step. After concatenating them, the network feeds the processed vector through a three-layer MLP. The result from the value network is a scalar, representing the centralized state-action value for this particular agent. The Adam optimizer is selected to optimize the losses for both networks.
For the policy network, the input is the flattened local observation with dimension $6\times 7$; linear layer 1 has dimension $42\times 64$ and linear layer 2 has dimension $64\times 128$. The LSTM layer takes an input of dimension 128 and outputs dimension 128. Linear layer 3 has dimension $128\times 2$. For the value network, the concatenation layer has dimension $(7+1)M\times 256$, linear layer 4 has dimension $256\times 256$, and linear layer 5 has dimension $256\times 512$. Linear layer 6 is a fully connected layer which outputs the state-action value.
Since the agents are heterogeneous in this application, they are trained in separate networks and no parameter sharing is involved, as all agents are non-exchangeable.
Figure 6: Deep neural network architectures for a pair of policy network and
value network.
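A minimal PyTorch sketch of the policy network with the dimensions stated above follows (our own reading of Fig. 6; any layer choice beyond the stated dimensions is an assumption).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PolicyNetwork(nn.Module):
    def __init__(self):
        super().__init__()
        # Double-layer MLP: normalization + linear + ReLU per layer
        self.mlp = nn.Sequential(
            nn.LayerNorm(42), nn.Linear(42, 64), nn.ReLU(),
            nn.LayerNorm(64), nn.Linear(64, 128), nn.ReLU(),
        )
        self.lstm = nn.LSTM(input_size=128, hidden_size=128, batch_first=True)
        self.head = nn.Linear(128, 2)   # logits: {drive forward, yield}

    def forward(self, obs, hidden=None):
        # obs: (batch, 6, 7) local observation -> flattened to (batch, 42)
        z = self.mlp(obs.flatten(start_dim=1)).unsqueeze(1)
        z, hidden = self.lstm(z, hidden)
        logits = self.head(z.squeeze(1))
        # Relaxed one-hot sample over the two actions via Gumbel-softmax
        action_probs = F.gumbel_softmax(logits, hard=False)
        return action_probs, hidden
```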
### 4.5 Indirect MARL approach
The transition probabilities between states need to be provided to complete the MDP setting. For the DQJL application, the state transition probabilities are the collective result of the longitudinal and latitudinal dynamics, stochastic driving behavior and the coordination strategy.
The real road dynamics for partially connected settings are usually complicated and difficult to interpret. Hence, some fundamental models are incorporated to picture the road dynamics, which can be categorized into the longitudinal and latitudinal directions, taking into account stochastic driving behavior as well as DQJL coordination.
#### 4.5.1 Longitudinal dynamics
Along the direction of the traffic flow, it is natural to employ a car-
following model to capture the longitudinal dynamics. In this study, we modify
and adopt the discrete version of the intelligent driver model (IDM) [39] so
that vehicles’ positions, velocities and accelerations can be computed across
the discrete time horizon, i.e., MDP steps. To avoid notational confusion, $u$
is used to represent acceleration.
For a non-EMV not in the yielding process, the discrete IDM model states for
the next step
$\displaystyle v^{i}_{t+1}$ $\displaystyle=v^{i}_{t}+u^{i}_{t}\Delta t,$
$\displaystyle x^{i}_{t+1}$ $\displaystyle=x^{i}_{t}+v^{i}_{t}\Delta t+\frac{u^{i}_{t}\Delta t^{2}}{2},$
in which the acceleration of the vehicle at step $t$ can be determined by the
ego vehicle $i$ and its leading vehicle $j$ as
$u^{i}_{t}=u_{0}[1-(\frac{v^{i}_{t}}{v_{0}})^{4}-(\frac{s^{*}(v^{i}_{t},v^{j}_{t}-v^{i}_{t})}{x^{j}_{t}-x^{i}_{t}-l^{j}})^{2}],$
where $s^{*}(v^{i}_{t},v^{j}_{t}-v^{i}_{t})$ stands for the desired dynamic
distance of two neighboring vehicles and can be approximated by
$s^{*}(v^{i}_{t},v^{j}_{t}-v^{i}_{t})=d+v^{i}_{t}T+\frac{v^{i}_{t}(v^{j}_{t}-v^{i}_{t})}{2\sqrt{u_{0}b_{0}}}.$
In the equations above, $u_{0},b_{0}$ represents the IDM acceleration and
deceleration baseline correspondingly, $T$ stands for the safety headway and
$d$ represents the minimum safety gap between two neighboring vehicles.
For a non-EMV in the yielding process, its longitudinal dynamics involves
mainly deceleration. As illustrated in Fig. 3, there are two longitudinal
uncertainties in a vehicle’s braking motion. The first one represents drivers’
perception-reaction time, $t_{r}$, which stands for a delay between a human driver’s perception and execution. Following [40] on drivers’ perception abilities, a perception-reaction time with the normal distribution $t_{r}\sim\mathcal{N}(2.25\,\mathrm{s},(0.5\,\mathrm{s})^{2})$ is selected for this study. Decision time is also included in the perception-reaction time in this application, as most decisions are made by the centralized controller. To realize the perception-reaction time, an additional data structure is used to count the MDP steps until the driver starts to decelerate.
The second type of uncertainty in the longitudinal direction is the
deceleration of vehicles when they are yielding. Each vehicle has a unique baseline deceleration rate, and each human driver has a unique braking behavior. The deceleration of a vehicle is captured as white noise added to the baseline deceleration rate $b^{*}$. Mathematically speaking, the deceleration, not accounting for the car-following model, of a vehicle after receiving a yielding instruction at time $T_{i}$ is:
$b^{i}_{t}=\begin{cases}0,&t\in[T_{i},T_{i}+t_{r}),\\ b^{*}_{i}+\epsilon_{decel},&t\geq T_{i}+t_{r}.\end{cases}$
Thus, the overall longitudinal dynamics for a non-EMV is described as a function of the vehicle’s velocity and position, as well as its leading vehicle’s velocity and position:
$v^{i}_{t+1}=f(v^{i}_{t},x^{i}_{t},v^{ij}_{t},x^{ij}_{t})=\begin{cases}v^{i}_{t}+u^{i}_{t}\Delta t,&\text{if }\xi^{i}_{t}=0,\\ v^{i}_{t}-b^{i}_{t}\Delta t,&\text{otherwise},\end{cases}$
where $ij$ denotes the leading vehicle of vehicle $i$.
Notice that the EMV strictly follows the longitudinal dynamics. Its front bump position, velocity, and acceleration are determined by the distance between itself and its leading vehicle as well as the difference between their speeds. Since the maximum allowable speed of the EMV is much larger than that of a non-EMV, the EMV is expected to travel faster when it has no leading vehicle, i.e., once the DQJL is established.
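A hedged sketch of the longitudinal update follows (our own illustration of the formulas above; all numeric constants are illustrative, the sign convention in $s^{*}$ follows the paper's formula, and the vehicle objects reuse the `VehicleState`-style attributes).

```python
import math, random

def idm_accel(v, x, v_lead, x_lead, l_lead, v0, u0, b0, T_hw, d):
    # Desired dynamic distance s*, as printed in Subsec. 4.5.1
    s_star = d + v * T_hw + v * (v_lead - v) / (2.0 * math.sqrt(u0 * b0))
    gap = x_lead - x - l_lead
    return u0 * (1.0 - (v / v0) ** 4 - (s_star / gap) ** 2)

def longitudinal_step(veh, lead, dt, yielding, t_since_yield, t_r, sigma=0.3):
    if not yielding:                     # car-following via discrete IDM
        u = idm_accel(veh.v, veh.x, lead.v, lead.x, lead.l,
                      v0=15.0, u0=2.0, b0=3.0, T_hw=1.5, d=2.0)
        v_next = max(0.0, veh.v + u * dt)
    elif t_since_yield < t_r:            # perception-reaction delay
        v_next = veh.v
    else:                                # noisy baseline deceleration
        b = veh.b_star + random.gauss(0.0, sigma)
        v_next = max(0.0, veh.v - b * dt)
    x_next = veh.x + 0.5 * (veh.v + v_next) * dt
    return x_next, v_next
```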
#### 4.5.2 Latitudinal dynamics
Inspired by the benchmark lane-changing model [41], the dynamic lane positions of vehicles in a DQJL process are viewed as the result of stochastic driving behaviors and coordination strategies. If an EMV is approaching on the passing lane of this road segment, the centralized controller aims to form a vehicle platoon on the neighboring lane. Once a non-EMV on the passing lane starts to yield, it no longer follows the car-following model and begins to decelerate until it pulls over onto the neighboring lane, at which step this non-EMV stops yielding.
Meanwhile, a vehicle on the neighboring lane will also decelerate temporarily. The yielding process for this decelerating non-EMV is considered completed when it has a new leading vehicle in the platoon. Vehicles do not necessarily brake to a full stop. Thus, the yielding status $\xi^{i}$ can be tracked as
$\xi^{i}_{t+1}=g(\xi^{i}_{t},a^{i}_{t})=\begin{cases}1,&\text{if }a^{i}_{t}=1\text{ or }\xi^{i}_{t}=1,\\ 0,&\text{otherwise}.\end{cases}$
After the perception-reaction time, the yielding non-EMV attempts to pull over onto the neighboring lane. The probability of a successful lane change per step, not accounting for collisions, is captured by a geometric distribution with parameter $p$. If the average time for a successful lane change is $t_{lc}$ and the temporal step length is $\Delta t$, it is easy to derive that
$p=\frac{1}{t_{lc}/\Delta t}=\frac{\Delta t}{t_{lc}},$
and the lane position $y$ of a pulling-over vehicle at the next step is determined as
$y^{i}_{t+1}=h(y^{i}_{t},\xi^{i}_{t})=\begin{cases}0,&t\in[T_{i},T_{i}+t_{r}),\\\ Y\sim G(\frac{\Delta t}{t_{lc}}),&t\geq T_{i}+t_{r}.\\\ \end{cases}$
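As a sanity check of the geometric model, the per-step pull-over attempt can be simulated with a Bernoulli trial of success probability $p=\Delta t/t_{lc}$; the sketch below uses an assumed average lane-change time $t_{lc}=2\,s$, since its numerical value is not fixed in the text.

```python
import numpy as np

DT, T_LC = 0.5, 2.0   # step length (Table 4) and assumed mean lane-change time

def lateral_step(y, yielding, past_reaction, rng):
    """Lane index update: 0 = passing lane, 1 = neighboring lane. A yielding
    vehicle past its reaction delay succeeds with probability p = DT / T_LC,
    so the number of steps until success is geometric with parameter p."""
    if not yielding or not past_reaction or y == 1:
        return y
    return 1 if rng.random() < DT / T_LC else 0

rng = np.random.default_rng(1)
y, steps = 0, 0
while y == 0:
    y = lateral_step(y, True, True, rng)
    steps += 1
print(steps)   # averages to T_LC / DT = 4 over many runs
```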
#### 4.5.3 State transition for indirect MARL
Adopting both longitudinal and latitudinal dynamics with stochastic driving behavior, the next state $\mathbf{s}_{t+1}$ reached from $\mathbf{s}_{t}$ under a joint action $\mathbf{a}$ can be written as
$\mathbf{s}_{t+1}=\begin{bmatrix}x^{1}_{t}+v^{1}_{t}\Delta t&h(y^{1}_{t},\xi^{1}_{t})&f(v^{1}_{t},x^{1}_{t},v^{1j}_{t},x^{1j}_{t})&g(\xi^{1}_{t},a^{1}_{t})&l^{1}&b^{*}_{1}&\phi^{1}\\\ x^{2}_{t}+v^{2}_{t}\Delta t&h(y^{2}_{t},\xi^{2}_{t})&f(v^{2}_{t},x^{2}_{t},v^{2j}_{t},x^{2j}_{t})&g(\xi^{2}_{t},a^{2}_{t})&l^{2}&b^{*}_{2}&\phi^{2}\\\ \vdots&&&&&&\vdots\\\ x^{n}_{t}+v^{n}_{t}\Delta t&h(y^{n}_{t},\xi^{n}_{t})&f(v^{n}_{t},x^{n}_{t},v^{nj}_{t},x^{nj}_{t})&g(\xi^{n}_{t},a^{n}_{t})&l^{n}&b^{*}_{n}&\phi^{n}\\\ \end{bmatrix},$ (12)
where $ij$ denotes the $i$th vehicle’s leading vehicle.
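For concreteness, a sketch of how one MDP step assembles Eq. (12) row by row is given below; it would call the longitudinal and lateral helpers sketched above for the velocity and lane columns (only the position and yielding-status columns are spelled out here, the remaining columns being static vehicle features).

```python
import numpy as np

DT = 0.5

def transition(state, actions):
    """One step s_t -> s_{t+1} for the n x 7 state [x, y, v, xi, l, b*, phi].
    Velocity and lane updates would call longitudinal_step / lateral_step."""
    nxt = state.copy()
    for i in range(state.shape[0]):
        x, _, v, xi = state[i, :4]
        nxt[i, 0] = x + v * DT                                    # position
        nxt[i, 3] = 1.0 if (actions[i] == 1 or xi == 1) else 0.0  # g(xi, a)
    return nxt

s0 = np.zeros((3, 7)); s0[:, 2] = 4.5      # three vehicles at 4.5 m/s
print(transition(s0, actions=[1, 0, 0]))
```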
### 4.6 Direct MARL approach
Real traffic conditions are, however, much more complicated than the longitudinal and latitudinal dynamics described above. Human drivers obey car-following models, lane-changing models, and other explicit or implicit driving dynamics with uncertainties. It is inefficient to capture all potential dynamics and develop kinematic models to process through the proposed MARL framework. Thus, the proposed methodology should not assume a differentiable model of the road environment dynamics. Data-driven reinforcement learning is able to detect and capture the comprehensive road environment dynamics and the noise distributions, which represent the stochastic driving behavior of human drivers.
Furthermore, the proposed framework is compatible with various types of communication structures among the vehicles and the infrastructure. The CTDE framework can not only be adopted to realize cooperative connected vehicle applications such as establishing DQJLs for EMVs, but can also be utilized for other connected vehicle applications, cooperative or competitive, particularly those with stochastic dynamics incurred by human drivers.
## 5 Simulation Test Bed Experimental Design
Since it is impossible to perform in-field vehicle coordination training due to safety concerns and expense, we employ Simulation of Urban Mobility (SUMO) [42] as the test bed. In this section, we describe the experimental design: generating simulation data in Subsec. 5.1 and designing and implementing the simulation in Subsec. 5.2. Lastly, the model parameters and training hyperparameters are given in Subsec. 5.3.
Figure 7: EMV is entering the road section from the left.
### 5.1 Data synthesis
Providing realistic numerical values to simulate realistic traffic conditions on a two-lane urban roadway is crucial for the reproducibility of the application in the field. Therefore, a dataset containing randomly generated starting conditions, covering different numbers of vehicles and different penetration ratios, is prepared to serve as the training set for both the indirect and direct MARL approaches. To better compare the quantitative results under different penetration ratios, different numbers of non-EMVs are selected to further generate a grouped test set reflecting the differences under the chosen road environment configurations.111The dataset is available at shorturl.at/PRT45. As shown in Table 3, we generate the vehicle features based on the listed literature sources. The statistics of some vehicle features are presented in Fig. 8.
Table 3: Randomly generated vehicle features

Vehicle Feature | Value | Source
---|---|---
Vehicle Length $l$ | $\mathcal{N}(4.5m,(1m)^{2})$ | [43]
Vehicle Baseline Deceleration $b^{*}$ | $\mathcal{N}(-2m/s^{2},(1m/s^{2})^{2})$ | [44, 45]
Standard Deviation of Deceleration $\epsilon_{decel}$ | $0.5m/s^{2}$ | [44, 45]
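A minimal generator reproducing the sampling in Table 3 might look as follows; the uniform spread of starting positions and the field names are our assumptions, as the paper does not spell out the generator.

```python
import numpy as np

def synthesize_vehicles(n_vehicles, cv_ratio, road_len=200.0, rng=None):
    """Draw one random starting condition following Table 3 (a sketch)."""
    rng = rng or np.random.default_rng()
    return {
        "x0": np.sort(rng.uniform(0.0, road_len, n_vehicles)),  # positions
        "length": rng.normal(4.5, 1.0, n_vehicles),             # l  [43]
        "b_star": rng.normal(-2.0, 1.0, n_vehicles),            # b* [44, 45]
        "is_cv": rng.random(n_vehicles) < cv_ratio,             # connectivity
    }

# A grouped set: several densities crossed with several penetration ratios.
dataset = [synthesize_vehicles(n, r)
           for n in (8, 10, 12) for r in (0.0, 0.25, 0.5, 1.0)
           for _ in range(100)]
```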
The road segment studied is an arterial class III with LOS category C according to the Arterial LOS standards [46]. The nominal values for the road environment configuration applied to both indirect and direct MARL training are listed in Table 4.
Table 4: System configuration parameters

System Configuration Parameters | Value
---|---
Temporal Step Length $\Delta t$ | 0.5s
Length of the Road Segment $L$ | 200m
Minimum Safety Gap $d$ | 0.5m
Length of Reaction Distance $L_{HV}$ | 75m
Non-EMV Starting Velocity $v_{0}^{1}$ | 4.5m/s
EMV Starting Velocity $v_{0}^{0}$ | 8m/s
EMV Maximum Allowable Velocity $v_{max}^{0}$ | 12m/s
(a) Generated starting positions
(b) Generated vehicle lengths
Figure 8: The generated vehicle starting positions and average vehicle
lengths.
### 5.2 RL-SUMO embedding
For the indirect MARL approach, the customized IDM parameters selected for the training process are listed in Table 5. The trained coordination strategy is then validated on the test set in SUMO.
Table 5: Indirect MARL IDM model parameters

Modified IDM Model Parameters | Value | Source
---|---|---
Baseline Acceleration $u_{0}$ | $3m/s^{2}$ | [47]
Baseline Deceleration $b_{0}$ | $-2m/s^{2}$ | [47]
Desired Traffic Speed $v^{*}$ | $10m/s$ | [47, 48]
Desired Headway $T_{0}$ | $1.5s$ | [47, 48]
For the direct MARL approach, the state transitions are completely determined by the SUMO dynamics. Instead of developing an accurate model of the road dynamics when yielding to approaching EMVs, this approach does not assume a differentiable model and is able to handle all potential state transitions.
Inspired by Flow [49], the embedding on SUMO is achieved via socket
programming and the TraCI package in SUMO. The complete multi-agent
reinforcement learning pipeline on SUMO is presented in Fig. 9.
Figure 9: SUMO-based multi-agent reinforcement learning training pipeline for
DQJL problem. The pipeline combines the multi-agent road environment with the
SUMO test bed. The proposed pipeline processes and manages the observations,
actions, and rewards as input or output to the SUMO platform. Coordinated by
TraCI, the pipeline can initialize, load and reset experiment road
configurations.
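A condensed sketch of one episode of this pipeline is shown below; `agent.act` is a hypothetical interface to the trained actor networks and the observation fields are illustrative, while the TraCI calls themselves are standard.

```python
import traci  # SUMO's Python API; requires a SUMO installation

def run_episode(agent, sumo_cfg="dqjl.sumocfg", max_steps=200):
    """One training/validation episode of the Fig. 9 pipeline (a sketch)."""
    traci.start(["sumo", "-c", sumo_cfg])
    try:
        for _ in range(max_steps):
            obs = {v: (traci.vehicle.getLanePosition(v),
                       traci.vehicle.getLaneIndex(v),
                       traci.vehicle.getSpeed(v))
                   for v in traci.vehicle.getIDList()}
            actions = agent.act(obs)            # hypothetical agent interface
            for v, a in actions.items():
                if a == 1:                      # yielding instruction
                    traci.vehicle.changeLane(v, 1, 2.0)
            traci.simulationStep()              # advance SUMO by Delta t
    finally:
        traci.close()
```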
Instead of adopting the blue-light device in the SUMO sub-lane module, which provides a virtual passage between lanes [50, 51], we initialize another vehicle starting upstream to serve as the EMV in the simulation. The EMV’s initial velocity and maximum allowable velocity are set according to Table 4 for a fast passage through the segment. The other EMV vehicle features, such as vehicle length, are immaterial as they do not influence the learning process. In the direct MARL approach, the built-in vehicle lane-changing models are overwritten so that non-EMVs only change lane when instructed, or when forced if they are HVs; see Table 6.
Table 6: Configuration parameters for SUMO vehicle lane-changing

Lane Changing Parameters | Value
---|---
collision.mingap-factor | 0
speed mode | 0
lane change mode | 0
lcStrategic | 0
lcCooperative | 0.5
Mirroring the noise perturbation adopted in the indirect MARL approach, identical distributions of noise reflecting stochastic driving behaviors are added to each vehicle during the training stage of the direct MARL. Through TraCI, additional data structures are initialized to track and execute the randomness. For instance, if a driver’s perception-reaction time is sampled as $2.5s$, a step counter is employed to delay the driver’s response by $\frac{2.5s}{\Delta t}=5$ steps, with $\Delta t$ from Table 4.
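The bookkeeping just described can be as simple as the following counter (a sketch of the TraCI-side data structure; the class name is ours):

```python
import math

DT = 0.5  # from Table 4

class ReactionDelay:
    """Withholds each sampled instruction for ceil(t_r / DT) steps."""
    def __init__(self):
        self.pending = {}                  # vehicle id -> steps left

    def order(self, vid, t_r):
        self.pending[vid] = math.ceil(t_r / DT)   # t_r = 2.5 s -> 5 steps

    def tick(self):
        """Call once per simulation step; returns vehicles acting now."""
        ready = [v for v, k in self.pending.items() if k <= 1]
        self.pending = {v: k - 1 for v, k in self.pending.items() if k > 1}
        return ready

delay = ReactionDelay()
delay.order("veh0", 2.5)
acting = [delay.tick() for _ in range(5)]   # "veh0" appears on the 5th tick
```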
### 5.3 Training hyperparameters
To maintain consistency in training under the indirect and direct MARL approaches, the training hyperparameters presented in Table 7 are selected for the MARL training stage.
Table 7: RL training hyperparameters

Model Parameters and Hyperparameters | Value
---|---
Priority Penalty Constant $P_{priority}$ | 0.5
Collision Penalty Constant $P_{collision}$ | -1000
Discount Factor $\gamma$ | 0.99
Minibatch Size | 64
Actor Learning Rate $\alpha$ | $10^{-4}$
Critic Learning Rate $\beta$ | $10^{-3}$
Replay Memory Size $D$ | 10000
Initial Epsilon $\epsilon$ | 0.99
Epsilon Decay | $10^{-3}$
Loss Optimizer | Adam
## 6 Validation Results and Discussion
In this section, we analyze the test bed validation results from the designed experiments. The training performance of both the indirect and direct MARL approaches with the proposed neural network structure is compared in Subsec. 6.1. The validated EMV passing times resulting from both approaches are evaluated in Subsec. 6.2, and the corresponding training times are assessed in Subsec. 6.3. The real-time coordination compatibility is validated in Subsec. 6.4.
### 6.1 Training performance comparison
The learning performance over multiple independent runs of the indirect and direct reinforcement learning frameworks is presented in Fig. 10.
(a) Indirect MARL results after 15000 episodes.
(b) Direct MARL results after 15000 episodes.
Figure 10: The training performance for Fig. 10(a)-indirect MARL and Fig.
10(b)-direct MARL.
The dark lines highlight the mean value of these runs and the shaded areas correspond to one standard deviation.
According to the learning behaviors of the indirect and direct MARL approaches, the indirect MARL approach converges much faster with respect to the number of episodes. The indirect MARL approach reaches its optimal average reward within approximately 3000 episodes, while the direct MARL approach converges to an approximately equivalent level within around 8000 episodes. The fast convergence of the indirect MARL approach is due to the significant reduction in exploration time, since the road environment dynamics are established beforehand. Both learning curves remain stable onward, with fluctuations reflecting the randomness in road environment initialization and the stochastic road dynamics.
### 6.2 Emergency vehicle passing time validation
To assess the DQJL coordination strategies from the proposed approaches, different groups of experiments are set up to compare the EMV passing time on a 200-meter road segment. The results under the indirect reinforcement learning framework are shown in Fig. 11, grouped by connectivity penetration rate. Within the same group of experiments, i.e., the same number of non-EMVs, four scenarios with different penetration rates are compared while all other vehicle features, including starting positions, are kept consistent.222An EMV passing demo is available at shorturl.at/djsx4 for qualitative analysis.
According to Fig. 11, the EMV passing time exhibits a monotonically decreasing pattern as the proportion of connected vehicles increases. The all-HV scenario is referred to as the baseline scenario without any DQJL coordination. When the road is more crowded, it is evident from the data that the gap in EMV passing time between the all-CV scenario and the other scenarios widens. The reason is that the approaching EMV may be blocked by one more HV in the coordination process, significantly reducing its velocity in the meantime. As a result, the system performance of the DQJL coordination may be bottlenecked in the presence of HVs.
Comparing the grouped results across the numbers of non-EMVs in the system, the EMV passing time increases with the number of vehicles in the system. The growth patterns differ, however, across penetration rates. With all HVs in the traffic, the EMV passing time grows at an increasing rate as the road becomes more congested, whereas the EMV passing time with a 100% penetration rate grows only marginally. It is then not difficult to infer that the difference will scale as the target road segment becomes increasingly congested. Considering the results, with all connected vehicles, the proposed approach saves approximately 30% of the EMV passing time compared with the baseline scenario. The validation process is based on an Intel Core i9 CPU at 3.6 GHz with an NVIDIA GeForce 2080 Ti GPU, with an average decision time of 5.3 ms.
Figure 11: EMV passing time results from the indirect reinforcement learning approach.

Figure 12: Comparison of EMV passing time between the two approaches.
By correlating the EMV passing times with the coordination strategies from the indirect and direct MARL approaches, the two strategies can be quantitatively assessed. As shown in Fig. 12, which compares the EMV passing time under coordination strategies from both approaches, the strategy from the direct MARL approach is slightly better than that from the indirect MARL approach, as it results in a shorter DQJL establishment time. This can be explained by the fact that the direct MARL approach captures the real test bed state transitions better. Since the difference in EMV passing time between the two approaches is insignificant across densities, both MARL approaches are considered to yield near-optimal DQJL coordination strategies, especially as their learning curves converge at a comparable level.
### 6.3 Training time comparison
The training time reflects the cost of the proposed reinforcement learning approaches. Although the proposed methodology is capable of handling various numbers of vehicles in the controlled system, a set of densities, i.e., a group of numbers of non-EMVs, is selected to examine the training time of both approaches. Instead of letting both approaches converge to the optimal reward level, the training time for 5000 episodes is recorded.
Figure 13: MARL training time for 5,000 episodes against different road
densities.
Notice that the proposed methodology can handle various quantities of non-EMVs in the system; these densities are chosen only to examine the scalability of the proposed methodologies. All training runs are conducted on an Intel Core i9 CPU at 3.6 GHz with an NVIDIA GeForce 2080 Ti GPU.
It is evident from the training time results shown in Fig. 13 that the direct reinforcement learning approach requires approximately 4 times the training time to finish 5000 episodes of training. Equivalently, the pipeline presented in Fig. 9, which allocates, initializes, and exchanges information, adds roughly 3 times the training time on top of the numerical computations. It is also noticeable that the training time increases at an accelerating pace, which reflects the “curse of dimensionality” of the proposed algorithm: when the number of non-EMVs in the controlled system increases, the training time to reach optimal coordination tends to grow exponentially.
### 6.4 Real-time compatibility validation
The average decision time under the validation configuration is 5.3 ms. According to the 3GPP vehicle-to-everything communication standards [52], the minimum requirement for a collision-free message sending interval is 10 ms. Thus, the proposed framework is compatible with generating real-time coordination strategies, leaving additional time to support functional expansions.
## 7 Conclusion and Future Directions
In this study, we propose a multi-agent actor-critic based deep reinforcement learning algorithm to address fast and safe DQJL establishment under a partially connected setting. By utilizing the centralized training with decentralized execution framework, the proposed methodology is scalable and extensible for multi-agent vehicular motion planning control problems in connected vehicle applications. Two ways of approaching the optimal coordination strategies are presented: an indirect MARL approach based on a modified car-following model and stochastic driving behavior, and a direct MARL approach based on the traffic simulation software. Both approaches are validated on the SUMO test bed and compared against the baseline scenario, in which the traffic consists of all HVs and no coordination strategy is provided. The validation results suggest that, with real-time coordination instructions informing the connected component of the traffic, the EMV passing time can be reduced by approximately 30%, providing a promising direction for facilitating emergency service vehicles in intelligent urban road environments. Finally, we evaluate the training cost of both approaches and validate the real-time compatibility of the proposed connectivity schema. To summarize the contributions of the presented study:
1. 1.
we introduced the concept of the dynamic queue-jump lane (DQJL) to reduce EMV passing time on intelligent urban roadways;
2. 2.
we developed a multi-agent actor-critic based deep reinforcement learning algorithm which addresses varying numbers of agent vehicles with a discrete action space for DQJL applications;
3. 3.
we proposed a connected vehicle application framework which deals with
partially connected traffic and stochastic driving behaviors;
4. 4.
we presented two approaches, direct and indirect, for applying multi-agent reinforcement learning to connected vehicle applications, and we validated that the proposed coordination significantly improves emergency vehicle performance on connected urban roadways.
The study can be extended in a few directions toward the realization of DQJL coordination in the field. First, regarding the privacy of a connected vehicle’s policy in vehicular communications, policy inference networks can additionally be deployed on each agent vehicle. A policy inference network allows each agent to learn other vehicles’ policies without explicitly asking through the communication channel. Second, the problem setting can be extended to signalized or unsignalized traffic intersections to recognize multi-agent coordination patterns in more complicated road environments. By realizing intersection coordination and link-level coordination based on the presented work, network-level DQJL coordination can be achieved, which will further improve emergency vehicles’ performance.
## 8 Acknowledgment
This research was supported by the C2SMART University Transportation Center at New York University and the Dwight David Eisenhower Transportation Fellowship Program (DDETFP) from the United States Department of Transportation. The authors thank Dr. Shuseng Wang for inspiration and advice on the methodology. The authors also thank Hengfeng Lu for preparing the graphs.
## 9 Declaration of Competing Interest
The authors declare that they have no competing financial interests or
personal relationships, to the best of their knowledge, that could have
appeared to influence the work reported in this paper.
## References
* Government [2020a] N. Y. C. Government, New York End-To-End Response Times, 2019 (accessed November 22, 2020)a. URL: https://www1.nyc.gov/site/911reporting/reports/end-to-end-repsonse-time.page.
* Government [2020b] N. Y. C. Government, Emergency Response Incidents, 2014 (accessed February 28, 2020)b. URL: https://data.cityofnewyork.us/Public-Safety/Emergency-Response-Incidents/pasr-j7fb.
* AHA [2020] AHA, Heart Disease and Stroke Statistics, 2013 (accessed February 28, 2020). URL: https://cpr.heart.org/AHAECC/CPRAndECC/ResuscitationScience/UCM_477263_AHA-Cardiac-Arrest-20Statistics.jsp5BR=301,L,NC5D.
* Zhou and Gan [2005] G. Zhou, A. Gan, Performance of transit signal priority with queue jumper lanes, Transportation Research Record 1925 (2005) 265–271. doi:10.1177/0361198105192500127.
* Cesme et al. [2015] B. Cesme, S. Z. Altun, B. Lane, Queue jump lane, transit signal priority, and stop location evaluation of transit preferential treatments using microsimulation, Transportation Research Record 2533 (2015) 39–49. doi:10.3141/2533-05.
* Buchenscheit et al. [2009] A. Buchenscheit, F. Schaub, F. Kargl, M. Weber, A vanet-based emergency vehicle warning system, 2009 IEEE Vehicular Networking Conference (VNC) (2009) 1–8.
* Yasmin et al. [2012] S. Yasmin, S. Anowar, R. Tay, Effects of drivers’ actions on severity of emergency vehicle collisions, Transportation Research Record 2318 (2012) 90–97. doi:10.3141/2318-11.
* Savolainen et al. [2010] P. T. Savolainen, T. K. Datta, I. Ghosh, T. J. Gates, Effects of dynamically activated emergency vehicle warning sign on driver behavior at urban intersections, Transportation Research Record 2149 (2010) 77–83. doi:10.3141/2149-09.
* Hannoun et al. [2019] G. J. Hannoun, P. Murray-Tuite, K. Heaslip, T. Chantem, Facilitating emergency response vehicles’ movement through a road segment in a connected vehicle environment, IEEE Transactions on Intelligent Transportation Systems 20 (2019) 3546–3557. doi:10.1109/TITS.2018.2877758.
* Aslani et al. [2019] M. Aslani, M. S. Mesgari, S. Seipel, M. Wiering, Developing adaptive traffic signal control by actor–critic and direct exploration methods, in: Proceedings of the Institution of Civil Engineers-Transport, volume 172, Thomas Telford Ltd, 2019, pp. 289–298.
* Chu et al. [2020] T. Chu, J. Wang, L. Codecà, Z. Li, Multi-agent deep reinforcement learning for large-scale traffic signal control, IEEE Transactions on Intelligent Transportation Systems 21 (2020) 1086–1095. doi:10.1109/TITS.2019.2901791.
* Guo et al. [2019] Q. Guo, L. Li, X. (Jeff) Ban, Urban traffic signal control with connected and automated vehicles: A survey, Transportation Research Part C: Emerging Technologies 101 (2019) 313 – 334. URL: http://www.sciencedirect.com/science/article/pii/S0968090X18311641. doi:https://doi.org/10.1016/j.trc.2019.01.026.
* Wei et al. [2018] S. Wei, Y. Zou, T. Zhang, X. Zhang, W. Wang, Design and experimental validation of a cooperative adaptive cruise control system based on supervised reinforcement learning, Applied sciences 8 (2018) 1014.
* Yu et al. [2018] L. Yu, X. Shao, Y. Wei, K. Zhou, Intelligent land-vehicle model transfer trajectory planning method based on deep reinforcement learning, Sensors 18 (2018) 2905.
* Yang et al. [2017] C. Y. D. Yang, K. Ozbay, X. J. Ban, Developments in connected and automated vehicles, Journal of Intelligent Transportation Systems 21 (2017) 251–254. URL: https://doi.org/10.1080/15472450.2017.1337974. doi:10.1080/15472450.2017.1337974. arXiv:https://doi.org/10.1080/15472450.2017.1337974.
* Şahin et al. [2018] T. Şahin, R. Khalili, M. Boban, A. Wolisz, Reinforcement learning scheduler for vehicle-to-vehicle communications outside coverage, in: 2018 IEEE Vehicular Networking Conference (VNC), 2018, pp. 1–8. doi:10.1109/VNC.2018.8628366.
* Guan et al. [2020] Y. Guan, Y. Ren, S. E. Li, Q. Sun, L. Luo, K. Li, Centralized cooperation for connected and automated vehicles at intersections by proximal policy optimization, IEEE Transactions on Vehicular Technology 69 (2020) 12597–12608.
* Su et al. [2020] H. Su, K. Shi, L. Jin, J. Y. J. Chow, V2i connectivity-based dynamic queue-jump lane for emergency vehicles: A deep reinforcement learning approach, 2020. arXiv:2008.00335.
* Tan [1993] M. Tan, Multi-agent reinforcement learning: Independent vs. cooperative agents, in: In Proceedings of the Tenth International Conference on Machine Learning, Morgan Kaufmann, 1993, pp. 330–337.
* Puccetti et al. [2019] L. Puccetti, C. Rathgeber, S. Hohmann, Actor-critic reinforcement learning for linear longitudinal output control of a road vehicle, in: 2019 IEEE Intelligent Transportation Systems Conference (ITSC), IEEE, 2019, pp. 2907–2913.
* Huang et al. [2017] Z. Huang, X. Xu, H. He, J. Tan, Z. Sun, Parameterized batch reinforcement learning for longitudinal control of autonomous land vehicles, IEEE Transactions on Systems, Man, and Cybernetics: Systems 49 (2017) 730–741.
* Lowe et al. [2017] R. Lowe, Y. Wu, A. Tamar, J. Harb, P. Abbeel, I. Mordatch, Multi-agent actor-critic for mixed cooperative-competitive environments, 2017. arXiv:1706.02275.
* Foerster et al. [2017] J. Foerster, G. Farquhar, T. Afouras, N. Nardelli, S. Whiteson, Counterfactual multi-agent policy gradients, 2017. arXiv:1705.08926.
* Wu et al. [2020] T. Wu, M. Jiang, L. Zhang, Cooperative multiagent deep deterministic policy gradient (comaddpg) for intelligent connected transportation with unsignalized intersection, Mathematical Problems in Engineering 2020 (2020).
* Cao et al. [2020] J. Cao, S. Leng, K. Zhang, Multi-agent learning empowered collaborative decision for autonomous driving vehicles, in: 2020 International Conference on UK-China Emerging Technologies (UCET), 2020, pp. 1–4. doi:10.1109/UCET51115.2020.9205416.
* Kwon and Kim [2019] D. Kwon, J. Kim, Multi-agent deep reinforcement learning for cooperative connected vehicles, in: 2019 IEEE Global Communications Conference (GLOBECOM), IEEE, 2019, pp. 1–6.
* Santa et al. [2008] J. Santa, A. F. Gómez-Skarmeta, M. Sánchez-Artigas, Architecture and evaluation of a unified v2v and v2i communication system based on cellular networks, Computer Communications 31 (2008) 2850 – 2861. doi:https://doi.org/10.1016/j.comcom.2007.12.008, mobility Protocols for ITS/VANET.
* Milanés et al. [2010] V. Milanés, J. Godoy, J. Pérez, B. Vinagre, C. González, E. Onieva, J. Alonso, V2i-based architecture for information exchange among vehicles, IFAC Proceedings Volumes 43 (2010) 85 – 90. doi:https://doi.org/10.3182/20100906-3-IT-2019.00017, 7th IFAC Symposium on Intelligent Autonomous Vehicles.
* Muhammad and Safdar [2018] M. Muhammad, G. A. Safdar, Survey on existing authentication issues for cellular-assisted v2x communication, Vehicular Communications 12 (2018) 50 – 65. doi:https://doi.org/10.1016/j.vehcom.2018.01.008.
* Scarzello et al. [2001] J. F. Scarzello, D. S. Lenko, A. C. Feaga, Vehicle presence, speed and length detecting system and roadway installed detector therefor, 2001. US Patent 6,208,268.
* Cheung et al. [2005] S. Y. Cheung, S. Coleri, B. Dundar, S. Ganesh, C.-W. Tan, P. Varaiya, Traffic measurement and vehicle classification with single magnetic sensor, Transportation Research Record 1917 (2005) 173–181.
* Lazar et al. [2019] D. A. Lazar, E. Bıyık, D. Sadigh, R. Pedarsani, Learning how to dynamically route autonomous vehicles on shared roads, 2019. arXiv:1909.03664.
* Bıyık et al. [2019] E. Bıyık, D. A. Lazar, D. Sadigh, R. Pedarsani, The green choice: Learning and influencing human decisions on shared roads, in: 2019 IEEE 58th Conference on Decision and Control (CDC), 2019, pp. 347–354. doi:10.1109/CDC40024.2019.9030169.
* Bellman [1954] R. Bellman, The theory of dynamic programming, Bull. Amer. Math. Soc. 60 (1954) 503–515.
* Konda and Tsitsiklis [2000] V. R. Konda, J. N. Tsitsiklis, Actor-critic algorithms, in: Advances in neural information processing systems, 2000, pp. 1008–1014.
* Lillicrap et al. [2019] T. P. Lillicrap, J. J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, D. Wierstra, Continuous control with deep reinforcement learning, 2019. arXiv:1509.02971.
* Haarnoja et al. [2019] T. Haarnoja, A. Zhou, K. Hartikainen, G. Tucker, S. Ha, J. Tan, V. Kumar, H. Zhu, A. Gupta, P. Abbeel, S. Levine, Soft actor-critic algorithms and applications, 2019. arXiv:1812.05905.
* Jang et al. [2017] E. Jang, S. Gu, B. Poole, Categorical reparameterization with gumbel-softmax, 2017. arXiv:1611.01144.
* Gipps [1981] P. G. Gipps, A behavioural car-following model for computer simulation, Transportation Research Part B: Methodological 15 (1981) 105–111.
* McGehee et al. [2000] D. V. McGehee, E. N. Mazzae, G. S. Baldwin, Driver reaction time in crash avoidance research: Validation of a driving simulator study on a test track, Proceedings of the Human Factors and Ergonomics Society Annual Meeting 44 (2000) 3–320–3–323. doi:10.1177/154193120004402026. arXiv:https://doi.org/10.1177/154193120004402026.
* Kesting et al. [2007] A. Kesting, M. Treiber, D. Helbing, General lane-changing model mobil for car-following models, Transportation Research Record 1999 (2007) 86–94. doi:10.3141/1999-10. arXiv:https://doi.org/10.3141/1999-10.
* Lopez et al. [2018] P. A. Lopez, M. Behrisch, L. Bieker-Walz, J. Erdmann, Y.-P. Flötteröd, R. Hilbrich, L. Lücken, J. Rummel, P. Wagner, E. Wießner, Microscopic traffic simulation using sumo, in: The 21st IEEE International Conference on Intelligent Transportation Systems, IEEE, 2018, p. 1.
* Coifman [2015] B. Coifman, Empirical flow-density and speed-spacing relationships: Evidence of vehicle length dependency, Transportation Research Part B: Methodological 78 (2015) 54 – 65. doi:https://doi.org/10.1016/j.trb.2015.04.006.
* Bokare and Maurya [2017] P. Bokare, A. Maurya, Acceleration-deceleration behaviour of various vehicle types, Transportation Research Procedia 25 (2017) 4733 – 4749. doi:https://doi.org/10.1016/j.trpro.2017.05.486, world Conference on Transport Research - WCTR 2016 Shanghai. 10-15 July 2016.
* Maurya and Bokare [2012] A. K. Maurya, P. S. Bokare, Study of deceleration behaviour of different vehicle types., International Journal for Traffic & Transport Engineering 2 (2012).
* Roess et al. [2004] R. P. Roess, E. S. Prassas, W. R. McShane, Traffic engineering, Pearson/Prentice Hall, 2004.
* Treiber et al. [2000] M. Treiber, A. Hennecke, D. Helbing, Congested traffic states in empirical observations and microscopic simulations, Physical Review E 62 (2000) 1805–1824. doi:10.1103/physreve.62.1805.
* Kesting et al. [2010] A. Kesting, M. Treiber, D. Helbing, Enhanced intelligent driver model to access the impact of driving strategies on traffic capacity, Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 368 (2010) 4585–4605. URL: http://dx.doi.org/10.1098/rsta.2010.0084. doi:10.1098/rsta.2010.0084.
* Kheterpal et al. [2018] N. Kheterpal, K. Parvate, C. Wu, A. Kreidieh, E. Vinitsky, A. Bayen, Flow: Deep reinforcement learning for control in sumo, in: SUMO 2018- Simulating Autonomous and Intermodal Transport Systems, volume 2 of EPiC Series in Engineering, EasyChair, 2018, pp. 134–151. doi:10.29007/dkzb.
* Bieker-Walz et al. [2018] L. Bieker-Walz, M. Behrisch, M. Junghans, Analysis of the traffic behavior of emergency vehicles in a microscopic traffic simulation, EPiC Series in Engineering 2 (2018) 1–13.
* Obrusnik [2019] V. Obrusnik, Simulating the impact of prioritization of emergency vehicles at traffic light controlled junctions on the other traffic, CTU in PRAGUE Master Thesis (2019).
* Wang et al. [2017] X. Wang, S. Mao, M. Gong, An overview of 3gpp cellular vehicle-to-everything standards, GetMobile: Mobile Computing and Communications 21 (2017) 19–25. doi:10.1145/3161587.3161593.
# A universal scheme for robust self-testing in the prepare-and-measure scenario
Nikolai Miklin <EMAIL_ADDRESS>
Institute of Theoretical Physics and Astrophysics, National Quantum Information Center, Faculty of Mathematics, Physics and Informatics, University of Gdansk, 80-306 Gdańsk, Poland
International Centre for Theory of Quantum Technologies (ICTQT), University of Gdansk, 80-308 Gdańsk, Poland
Michał Oszmaniec <EMAIL_ADDRESS>
Center for Theoretical Physics, Polish Academy of Sciences, Al. Lotników 32/46, 02-668 Warszawa, Poland
###### Abstract
We consider the problem of certification of arbitrary ensembles of pure states
and projective measurements solely from the experimental statistics in the
prepare-and-measure scenario assuming the upper bound on the dimension of the
Hilbert space. To this aim we propose a universal and intuitive scheme based
on establishing perfect correlations between target states and suitably-chosen
projective measurements. The method works in all finite dimensions and allows
for robust certification of the overlaps between arbitrary preparation states
and between the corresponding measurement operators. Finally, we prove that
for qubits our technique can be used to robustly self-test arbitrary
configurations of pure quantum states and projective measurements. These
results pave the way towards practical application of the prepare-and-measure
paradigm to certification of quantum devices.
## 1 Introduction
Quantum devices are becoming more and more complex, and the possibilities for their precise control and manipulation keep increasing. The recently reported demonstration of quantum computational advantage by Google [1] is only an intermediate milestone, and quantum technologies have potential real-life applications in fields such as quantum sensing [2], simulation of quantum systems [3], efficient computation [4], and machine learning [5, 6].
With the increasing complexity of quantum systems, there is a growing need for certification and verification of their performance. This task is usually realized via a combination of quantum tomography and various benchmarking schemes (see [7] for a recent review). However, these methods, despite being powerful and universally applicable, depend on assumptions about the inner workings of quantum systems, such as perfect measurements or uncorrelated and independent errors. In contrast to these approaches, _self-testing_ is a method that aims at proving the uniqueness of the implemented states or measurements based solely on the observed statistics and under minimal physical assumptions.
The paradigm of self-testing was first introduced in the context of quantum cryptography [8], with the aim to establish trust in cryptographic devices (see [9] for a recent review). Initially, it was applied to correlations observed in the Bell scenario [10] (see e.g., [8, 11, 12, 13]). The best-known result in this area is the certification of the singlet state from the maximal violation of the Clauser-Horne-Shimony-Holt Bell inequality [14].
With the growing number of results on self-testing, more and more attention is being drawn to prepare-and-measure scenarios, which are experimentally more appealing than the Bell scenario (see e.g., [15, 16, 17, 18]). Therein, one does not need to ensure space-like separation of the measurement events of two parties; instead, one party, Alice, communicates some of her states to Bob, who measures them in a basis of his choice. In order to obtain meaningful certification results, further assumptions are needed. In the most commonly studied _semi-device-independent_ (SDI) scenario [19], one assumes that the dimension of the quantum system used for transmitting information is bounded from above. There exist, however, alternative approaches based on other constraints, such as a minimal overlap [16], a mean energy constraint [20], or an entropy constraint [21].
Let us briefly recap what has been done so far in the area of SDI self-testing. First, self-testing results were proven for mutually unbiased bases (MUBs) in dimension $2$, for both state ensembles and measurements [22]. This was further generalised to SDI certification of mutual unbiasedness of pairs of bases in arbitrary dimension in Ref. [23]. Methods for self-testing of extremal qubit positive operator-valued measures (POVMs) were proposed in [24, 25] and further extended to symmetric informationally complete (SIC) POVMs [26]. Importantly, all of the above results either rely on numerical approaches, for general state preparations and POVMs, or work only for special scenarios that exhibit many symmetries.
In this work we propose a simple analytical method that allows one to certify the overlaps between preparations of arbitrary pure states and arbitrary projective measurements in _qudit_ systems. The scheme relies on establishing
perfect correlations between preparation states and outcomes of suitably-
chosen projective measurements. The method is universally applicable and
robust to experimental noise. We prove that for qubits our SDI certification
method can be used to obtain a robust self-testing result for arbitrary
preparations of pure qubit states and corresponding projective measurements.
While for higher dimensions we do not show self-testing, our scheme allows for
SDI certification of numerous inherently quantum properties. The examples
include, but are not limited to: _arbitrary_ overlaps between any number of
measurement bases, MUB conditions, information-completeness of measurements,
and SIC relations among measurement effects.
We believe that our findings greatly extend the applicability of the paradigm
of self-testing in the SDI setting. They will likely find application in the certification of near-term quantum computers [7], especially since our scheme is not sensitive to state-preparation and measurement errors, which are among the major problems in the certification of quantum devices [27]. We also expect possible cryptographic applications, as our setup is very similar to that of textbook quantum key distribution schemes [28, 29]. The perfect correlations can be utilized for generation of the secret key, while the rest can be used to estimate the security. Thus, our methods can be directly applied for
certification of quantum devices implementing protocols such as BB84 [28],
which is normally achieved by introducing additional preparation states or
measurement bases [30].
Before we proceed, let us first fix some of the notation used in this paper. Let $X$ be a linear operator acting on a finite-dimensional Hilbert space $\mathcal{H}$. Throughout the paper we use $\left\|X\right\|$ and $\left\|X\right\|_{F}$ to denote the operator norm and the Frobenius norm of $X$, respectively. We will also use $|\mathbf{n}|$ to denote the Euclidean norm of $\mathbf{n}\in\mathbb{R}^{3}$, and $[n]$ to denote the $n$-element set $\{1,2,\dots,n\}$.
## 2 Description of the scenario
We consider the prepare-and-measure scenario in which in each run of the
experiment Alice prepares a quantum $d$-level system in one of the states from
a finite set of preparations $\varrho_{a}^{x}$, for which we use two indexes
$x\in[n]$ and $a\in[d]$. Subsequently, Bob performs a measurement on this
state with a finite choice of measurement settings $y\in[n]$ having the
possible outcomes $b\in[d]$. We assume that the measurement process performed by Bob is described by quantum mechanics and, furthermore, that the parties do not communicate in any other way and do not have access to any entangled states or shared randomness [31] (the role of this assumption is discussed in detail later in the text). This implies that the observed statistics $p(b|a,x,y)$ are
given by the Born rule i.e.,
$p(b|a,x,y)={\mathrm{tr}}(\varrho^{x}_{a}M^{y}_{b})$, where
$\mathbf{M}^{y}=(M^{y}_{1},M^{y}_{2},\dots,M^{y}_{d})$ is a quantum
measurement (POVM) performed by Bob upon the choice of the setting $y$. The
goal of SDI certification is then to identify the preparation states and the
measurements, or their properties, based solely on the observed statistics
$p(b|a,x,y)$ _assuming_ the upper bound on the dimension $d$ and the validity
of Born’s rule. We say that certain states $\varrho^{x}_{a}$ and measurements
$\mathbf{M}^{y}$ can be _self-tested_ if the observed statistics specify these
objects uniquely up to a unitary transformation and, perhaps, a global
transposition.
Figure 1: The idea of the certification scheme for the overlaps between
states. Alice chooses her inputs $a,x$ and Bob his input $y$. Alice sends
states $\varrho^{x}_{a}$ to Bob, who produces his outcome $b$. After
establishing perfect correlations between preparation states and measurements
for $y=x$, the rest of the statistics can be used to compute the overlaps
between the preparation states. This can be used to directly certify the presence of coherences between Alice’s input states and between the effects of the measurements implemented by Bob.
## 3 Certification of overlaps
We start with a presentation of our scheme for certifying pairwise overlaps
between _pure qudit_ preparation states and between the corresponding
_projective_ measurements. By “corresponding" we mean that in our scheme we set these states and measurements to be “equal", in the sense that
$\varrho^{x}_{a}=(\varrho^{x}_{a})^{2}=M^{x}_{a}$ for all $a\in[d]$ and
$x\in[n]$. In what follows we will refer to these objects as target pure states and target measurements, respectively. Their experimental counterparts are denoted by $\tilde{\varrho}_{a}^{x}$ and $\mathbf{\tilde{M}}^{y}$, respectively. By “experimental" we mean any states and measurements defined over the same Hilbert space as the target ones that reproduce the _observed statistics_, i.e., $\tilde{p}(b|a,x,y)={\mathrm{tr}}(\tilde{\varrho}^{x}_{a}\tilde{M}^{y}_{b})$. Clearly, we _do not_ assume that the experimental states and measurements have to be “equal".
The idea of our certification scheme is very intuitive, yet powerful (see Fig.
1). Assume that Alice and Bob prepared their devices in a way that
$\tilde{p}(b|a,x,y)=1$, whenever $y=x$ and $b=a$. In other words, outcomes of
Bob’s measurement are _perfectly correlated_ with the preparations of Alice
(whenever $x=y$). Since the quantum dimension is upper bounded by $d$, we can
easily conclude that $\tilde{\varrho}^{x}_{a}=\tilde{M}^{x}_{a}$, for all
$a\in[d]$ and $x\in[n]$. Clearly, after these perfect correlations are
established, the “cross-terms", can be used to compute the overlaps between
the preparation states and between measurement operators:
$\tilde{p}(b|a,x,y)={\mathrm{tr}}(\tilde{\varrho}^{x}_{a}\tilde{M}^{y}_{b})={\mathrm{tr}}(\tilde{\varrho}^{x}_{a}\tilde{\varrho}^{y}_{b})={\mathrm{tr}}(\tilde{M}^{x}_{a}\tilde{M}^{y}_{b})$.
Therefore, if the experimental statistics $\tilde{p}(b|a,x,y)$ match the
target statistics $p(b|a,x,y)$, we can certify that the overlaps between the experimental states match those of the target states. The same holds for the
corresponding measurement operators.
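The whole mechanism is easy to verify numerically. The following sketch (our own illustration, not part of the protocol) takes two qubit MUBs as targets, confirms the perfect correlations, and reads the overlaps off the cross-terms:

```python
import numpy as np

# Target states/effects: computational and Hadamard bases, rho^x_a = M^x_a.
kets = {(1, 1): np.array([1.0, 0.0]), (1, 2): np.array([0.0, 1.0]),
        (2, 1): np.array([1.0, 1.0]) / np.sqrt(2),
        (2, 2): np.array([1.0, -1.0]) / np.sqrt(2)}
proj = {k: np.outer(v, v.conj()) for k, v in kets.items()}

def p(b, a, x, y):
    """Born-rule statistics p(b|a,x,y) = tr(rho^x_a M^y_b)."""
    return np.trace(proj[(x, a)] @ proj[(y, b)]).real

# Perfect correlations on the diagonal settings (y = x, b = a) ...
assert all(np.isclose(p(a, a, x, x), 1.0) for x in (1, 2) for a in (1, 2))
# ... and the cross-terms equal the overlaps: 1/2 for a pair of MUBs.
print(p(1, 1, 1, 2))   # 0.5
```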
Our method can also be applied when the experimental statistics do not match the target ones exactly.
###### Theorem 1 (Robust SDI certification of overlaps).
Consider pure target qudit preparation states $\varrho^{x}_{a}$ and target
projective measurements $\mathbf{M}^{y}$, where $a\in[d]$ and $x,y\in[n]$.
Assume that $\varrho^{x}_{a}=M^{x}_{a}$ for all $a,x$ and furthermore that
experimental states $\tilde{\varrho}^{x}_{a}$ and measurements
$\mathbf{\tilde{M}}^{y}$ act on a Hilbert space of dimension at most $d$ and
generate statistics
$\tilde{p}(b|a,x,y)={\mathrm{tr}}(\tilde{\varrho}^{x}_{a}\tilde{M}^{y}_{b})$
such that
$|\tilde{p}(b|a,x,y)-{\mathrm{tr}}(\varrho^{x}_{a}M^{y}_{b})|\leq\varepsilon$,
for all $a,b,x,y$. Then, the input states $\tilde{\varrho}^{x}_{a}$ are almost pure and the measurements $\mathbf{\tilde{M}}^{y}$ are almost projective in the
sense that
$\displaystyle\text{for all }x:\ \ \sum_{a=1}^{d}\left\|\tilde{\varrho}^{x}_{a}\right\|\geq d(1-2\varepsilon)\ ,$ (1)
$\displaystyle\text{for all }y:\ \ \sum_{b=1}^{d}\left\|\tilde{M}^{y}_{b}\right\|\geq d(1-\varepsilon)\ .$ (2)
Moreover, for all $x\neq x^{\prime}$, $a\neq a^{\prime}$, $y\neq y^{\prime}$,
and $b\neq b^{\prime}$, we have
$\displaystyle|{\mathrm{tr}}(\tilde{\varrho}^{x}_{a}\tilde{\varrho}^{x^{\prime}}_{a^{\prime}})-{\mathrm{tr}}(\varrho^{x}_{a}\varrho^{x^{\prime}}_{a^{\prime}})|\leq\varepsilon+\sqrt{2\varepsilon+d^{2}\varepsilon^{2}},$
(3)
$\displaystyle|{\mathrm{tr}}(\tilde{M}^{y}_{b}\tilde{M}^{y^{\prime}}_{b^{\prime}})-{\mathrm{tr}}(M^{y}_{b}M^{y^{\prime}}_{b^{\prime}})|\leq\varepsilon+(1+d\varepsilon)\sqrt{2\varepsilon+d^{2}\varepsilon^{2}}\
.$
###### Proof.
We start with a straightforward proof of Eq. (2). Since $\left\|\tilde{M}^{x}_{a}\right\|\geq{\mathrm{tr}}(\tilde{M}^{x}_{a}\varrho)$ for any state $\varrho$, the relation ${\mathrm{tr}}(\tilde{\varrho}^{x}_{a}\tilde{M}^{x}_{a})\geq 1-\varepsilon$, valid for all $a,x$, implies $\left\|\tilde{M}^{x}_{a}\right\|\geq 1-\varepsilon$ for all $a,x$, and hence for all $x$ we have $\sum_{a=1}^{d}\left\|\tilde{M}^{x}_{a}\right\|\geq d-d\varepsilon$.
To prove the bounds on the norms of the experimental states in Eq. (1), we fix a setting $x$ and use the decomposition:
$\tilde{M}^{x}_{a}=\lambda_{a}|\phi_{a}\rangle\langle\phi_{a}|+\text{Rest}_{a}\ ,$ (4)
where $\lambda_{a}$ is the largest eigenvalue of $\tilde{M}^{x}_{a}$, $|\phi_{a}\rangle\langle\phi_{a}|$ is the corresponding eigenprojector, and $\text{Rest}_{a}$ is a positive semi-definite operator satisfying $\langle\phi_{a}|\text{Rest}_{a}|\phi_{a}\rangle=0$ (to keep the derivations legible, we omit the index $x$). Using the fact that the observed statistics are $\varepsilon$-close to the target ones we get:
$\displaystyle 1-\varepsilon\leq{\mathrm{tr}}(\tilde{\varrho}^{x}_{a}\tilde{M}^{x}_{a})=\lambda_{a}\langle\phi_{a}|\tilde{\varrho}^{x}_{a}|\phi_{a}\rangle+{\mathrm{tr}}(\tilde{\varrho}^{x}_{a}\text{Rest}_{a})\leq\left\|\tilde{\varrho}^{x}_{a}\right\|+{\mathrm{tr}}(\text{Rest}_{a})\ .$ (5)
By taking the sum over $a$ we obtain $\sum_{a=1}^{d}\left\|\tilde{\varrho}^{x}_{a}\right\|\geq d(1-\varepsilon)-\sum_{a=1}^{d}{\mathrm{tr}}(\text{Rest}_{a})$. We can now use the identity $d=\sum_{a=1}^{d}{\mathrm{tr}}(\tilde{M}^{x}_{a})$, which follows from the fact that the operators $\tilde{M}^{x}_{a}$ form a POVM in $\mathbb{C}^{d}$. Since $\lambda_{a}=\left\|\tilde{M}^{x}_{a}\right\|\geq 1-\varepsilon$, this identity yields the bound:
$d=\sum_{a=1}^{d}\lambda_{a}+\sum_{a=1}^{d}{\mathrm{tr}}(\text{Rest}_{a})\geq d(1-\varepsilon)+\sum_{a=1}^{d}{\mathrm{tr}}(\text{Rest}_{a})\ ,$ (6)
which is equivalent to $\sum_{a=1}^{d}{\mathrm{tr}}(\text{Rest}_{a})\leq d\varepsilon$. Inserting this into $\sum_{a=1}^{d}\left\|\tilde{\varrho}^{x}_{a}\right\|\geq d(1-\varepsilon)-\sum_{a=1}^{d}{\mathrm{tr}}(\text{Rest}_{a})$ we obtain Eq. (1).
We now proceed to the proof of Eqs. (3). We start with the one for the overlaps between the preparation states. The proof is given by the following sequence of inequalities, which hold for every $a\neq a^{\prime}$, $x\neq x^{\prime}$:
$\displaystyle|{\mathrm{tr}}(\tilde{\varrho}^{x}_{a}\tilde{\varrho}^{x^{\prime}}_{a^{\prime}})-{\mathrm{tr}}(\varrho^{x}_{a}\varrho^{x^{\prime}}_{a^{\prime}})|\leq|{\mathrm{tr}}(\tilde{\varrho}^{x}_{a}\tilde{\varrho}^{x^{\prime}}_{a^{\prime}})-{\mathrm{tr}}(\tilde{M}^{x}_{a}\tilde{\varrho}^{x^{\prime}}_{a^{\prime}})|+|{\mathrm{tr}}(\tilde{M}^{x}_{a}\tilde{\varrho}^{x^{\prime}}_{a^{\prime}})-{\mathrm{tr}}(M^{x}_{a}\varrho^{x^{\prime}}_{a^{\prime}})|$
$\displaystyle\leq\left\|\tilde{\varrho}^{x}_{a}-\tilde{M}^{x}_{a}\right\|+\varepsilon\leq\left\|\tilde{\varrho}^{x}_{a}-\tilde{M}^{x}_{a}\right\|_{F}+\varepsilon=\sqrt{{\mathrm{tr}}((\tilde{\varrho}^{x}_{a})^{2})+{\mathrm{tr}}((\tilde{M}^{x}_{a})^{2})-2{\mathrm{tr}}(\tilde{\varrho}^{x}_{a}\tilde{M}^{x}_{a})}+\varepsilon$
$\displaystyle\leq\sqrt{1+(1+d^{2}\varepsilon^{2})-2(1-\varepsilon)}+\varepsilon=\varepsilon+\sqrt{2\varepsilon+d^{2}\varepsilon^{2}}.$
All of the above inequalities are straightforward except for ${\mathrm{tr}}((\tilde{M}^{x}_{a})^{2})\leq 1+d^{2}\varepsilon^{2}$, which we prove below. For this we again use the partial spectral decomposition from Eq. (4) (omitting the superscript $x$ as above), which implies:
$\displaystyle{\mathrm{tr}}((\tilde{M}^{x}_{a})^{2})=\lambda_{a}^{2}+{\mathrm{tr}}(\text{Rest}_{a}^{2})\leq 1+({\mathrm{tr}}(\text{Rest}_{a}))^{2}\leq 1+d^{2}\varepsilon^{2}.$ (7)
The proof for the overlaps of the POVM effects is very similar to the one for the states, with the only difference being the inequality
$\displaystyle|{\mathrm{tr}}(\tilde{M}^{x}_{a}\tilde{M}^{x^{\prime}}_{a^{\prime}})-{\mathrm{tr}}(\tilde{M}^{x}_{a}\tilde{\varrho}^{x^{\prime}}_{a^{\prime}})|\leq(1+d\varepsilon)\left\|\tilde{\varrho}^{x^{\prime}}_{a^{\prime}}-\tilde{M}^{x^{\prime}}_{a^{\prime}}\right\|,$ (8)
which is used in the second step of the chain in Eq. (3). One can easily verify the validity of this inequality by writing once more the decomposition $\tilde{M}^{x^{\prime}}_{a^{\prime}}=\lambda_{a^{\prime}}|\phi_{a^{\prime}}\rangle\langle\phi_{a^{\prime}}|+\text{Rest}_{a^{\prime}}$ and remembering that ${\mathrm{tr}}(\text{Rest}_{a^{\prime}})\leq d\varepsilon$. ∎
The above result states that if the statistics observed in our certification scheme deviate only slightly from the target ones, then the overlaps of the experimental states are also close to the overlaps of the target states (and analogously for measurements). To the best of our knowledge, analogous results have previously been known only for very special symmetric target states and measurements forming MUBs [22, 23].
In Appendix B (Lemma 1) we improve the above bounds for the case of qubit
systems. Moreover, in Appendix A we prove, by giving explicit examples, that
the bounds in Eq. (3) are tight in the first orders in $\sqrt{\varepsilon}$
and $d$.
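The bound of Eq. (3) can also be probed numerically. The sketch below (an illustration under a depolarizing noise model of our choosing) perturbs two target qubit states, extracts the worst-case statistics deviation $\varepsilon$, and checks the overlap deviation against the bound:

```python
import numpy as np

d, eta, theta = 2, 0.05, np.pi / 8
P = lambda v: np.outer(v, v.conj())
rho = [P(np.array([1.0, 0.0])),
       P(np.array([np.cos(theta), np.sin(theta)]))]        # biased targets
til = [(1 - eta) * r + eta * np.eye(d) / d for r in rho]    # noisy states

# Worst-case deviation of the observed statistics, taking the experimental
# measurements equal to the target projectors for simplicity.
eps = max(abs(np.trace(t @ m).real - np.trace(r @ m).real)
          for t, r in zip(til, rho) for m in rho)
lhs = abs(np.trace(til[0] @ til[1]).real - np.trace(rho[0] @ rho[1]).real)
rhs = eps + np.sqrt(2 * eps + d**2 * eps**2)
assert lhs <= rhs      # Eq. (3) holds for this noise model
print(eps, lhs, rhs)   # approx. 0.025, 0.035, 0.254
```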
## 4 Self-testing of qubits
We now show that certification of overlaps allows one to prove a robust self-testing result for _arbitrary_ pure qubit preparations and projective measurements appearing in our certification scheme.
###### Theorem 2 (Robust self-testing of qubit systems).
Consider target pure qubit states $\varrho^{x}_{a}$ and projective
measurements $\mathbf{M}^{y}$, where $a=1,2$, $x,y\in[n]$, and
$\varrho^{x}_{a}=M^{x}_{a}$ for all $a,x$. Assume that experimental qubit
states $\tilde{\varrho}^{x}_{a}$ and measurements $\mathbf{\tilde{M}}^{y}$
generate statistics
$\tilde{p}(b|a,x,y)=\mathrm{tr}(\tilde{\varrho}^{x}_{a}\tilde{M}^{y}_{b})$
such that
$|\tilde{p}(b|a,x,y)-{\mathrm{tr}}(\varrho^{x}_{a}M^{y}_{b})|\leq\varepsilon$,
for all $a,b=1,2$ and $x,y\in[n]$. Then, there exists $\varepsilon_{0}>0$ such that for $\varepsilon\leq\varepsilon_{0}$ there exists a qubit unitary $U$ such
that
$\displaystyle\frac{1}{2n}\sum_{a,x}{\mathrm{tr}}(U(\tilde{\varrho}^{x}_{a})^{(T)}U^{\dagger}\varrho^{x}_{a})\geq
1-f(\varepsilon)\ ,$
$\displaystyle\frac{1}{2n}\sum_{b,y}{\mathrm{tr}}(U(\tilde{M}^{y}_{b})^{(T)}U^{\dagger}M^{y}_{b})\geq
1-g(\varepsilon)\ ,$ (9)
where $(\cdot)^{(T)}$ is the transposition with respect to a fixed basis in
$\mathbb{C}^{2}$ that may have to be applied to all experimental states and
measurements at the same time. Moreover, functions
$f,g:[0,\varepsilon_{0})\rightarrow\mathbb{R}_{+}$ depend solely on the target
states and measurements and, for small $\varepsilon$, have the asymptotics
$f(\varepsilon)\propto\varepsilon,\ g(\varepsilon)\propto\varepsilon$.
In the above we used the fidelity $F(\varrho,\sigma)={\mathrm{tr}}(\varrho\sigma)$ to quantify the closeness between the _rotated_ experimental states and the target pure states (and analogously for measurements), following the existing literature. The case $\varepsilon=0$ corresponds to an ideal reconstruction of the target states and measurements after applying a suitable isometry. We remark that we allow only unitary operations (and a possible transposition), as opposed to general channels [22, 25], to be applied to the experimental states in order to approximate the target states _as well as possible_.
In Appendix B we give a formal version of the above result and its proof.
Moreover, we present there robustness bounds expressed in terms of the trace
distance and its analogue for measurements [32]. We remark that the functions
$f,g$ become unbounded once the matrix of Bloch vectors of the target qubit states becomes singular, i.e., once the target states are close to being aligned in a space of smaller dimension.
###### Remark.
Certification of overlaps between pure states in general does not allow for
their self-testing in higher-dimensional systems. This is due, e.g., to the existence of unitarily inequivalent sets of SIC-POVMs for $d=3$ [33] and MUBs
for $d=4$ [34] (even if we allow for complex conjugation).
###### Sketch of the proof.
We give a full proof for the ideal case ($\varepsilon=0$) below. From Theorem
1 it follows that for all $x,x^{\prime},a,a^{\prime}$ we have
$\tilde{\varrho}^{x}_{a}=\tilde{M}^{x}_{a}$ and
${\mathrm{tr}}(\tilde{\varrho}^{x}_{a}\tilde{\varrho}^{x^{\prime}}_{a^{\prime}})={\mathrm{tr}}(\varrho^{x}_{a}\varrho^{x^{\prime}}_{a^{\prime}})$.
Using the Bloch representation,
$\varrho^{x}_{a}=\frac{1}{2}(\mathbbm{1}+\mathbf{n}^{x}_{a}\cdot\bm{\sigma})$,
we can conclude that also
$\mathbf{\tilde{n}}^{x}_{a}\cdot\mathbf{\tilde{n}}^{x^{\prime}}_{a^{\prime}}=\mathbf{n}^{x}_{a}\cdot\mathbf{n}^{x^{\prime}}_{a^{\prime}}$,
where $\mathbf{n}^{x}_{a}$ and $\tilde{\mathbf{n}}^{x}_{a}$ are Bloch vectors
of $\varrho^{x}_{a}$ and $\tilde{\varrho}^{x}_{a}$ respectively.
Assume now that the vectors $\mathbf{n}^{1}_{1},\mathbf{n}^{2}_{1},\mathbf{n}^{3}_{1}$ are linearly independent. Let $O$ be the linear transformation defined by $O\mathbf{n}^{x}_{1}=\mathbf{\tilde{n}}^{x}_{1}$, $x=1,2,3$, and let $L$ be the matrix whose rows are the vectors $\mathbf{n}^{x}_{1}$, $x=1,2,3$. Then, we have $LO^{T}OL^{T}=LL^{T}$, and consequently, since $L$ is invertible by construction, $O^{T}O=\mathbbm{1}_{3}$, i.e., $O$ is an orthogonal transformation in $\mathbb{R}^{3}$. It is well-known [35] that if $\det(O)=1$, there exists a unitary matrix $U$ such that ${\mathrm{tr}}(U(\tilde{\varrho}^{x}_{a})U^{\dagger}\varrho^{x}_{a})=1$ for $x=1,2,3$ and $a=1$. By our assumption all the remaining states $\varrho^{x}_{a}$ can be decomposed in the basis $\{\mathbbm{1},\varrho^{1}_{1},\varrho^{2}_{1},\varrho^{3}_{1}\}$, with coefficients depending solely on the overlaps ${\mathrm{tr}}(\varrho^{x}_{a}\varrho^{x^{\prime}}_{a^{\prime}})$. Then, the same $U$ that maps $\tilde{\varrho}^{x}_{1}$ to $\varrho^{x}_{1}$, for $x=1,2,3$, also connects the remaining pairs of states. Finally, if $\det(O)=-1$, the transformation $O$ corresponds to the application of the transposition in the standard basis of $\mathbb{C}^{2}$ followed by a unitary operation $U$, determined by $O$ [36].
For the general case $\varepsilon>0$ we consider Cholesky factorisations of the Gram matrices, denoted by $\Gamma$ and $\tilde{\Gamma}$, of the Bloch vectors $\mathbf{n}^{1}_{1},\mathbf{n}^{2}_{1},\mathbf{n}^{3}_{1}$ and of their experimental counterparts, respectively. Theorem 1 is then used to bound $\left\|\Gamma-\tilde{\Gamma}\right\|_{F}$, and we utilize the results of Ref. [37] to gauge how much the Cholesky decompositions of $\Gamma$ and $\tilde{\Gamma}$ differ in the Frobenius norm. The latter can be connected to the average fidelity between the selected target and experimental states. The robustness for the remaining states follows from the fact that they can be decomposed (as operators) using the three initially chosen target states and the identity.
If the vectors from the set $\{\mathbf{n}^{x}_{a}\}$ span a two-dimensional space, then the same arguments for both $\varepsilon=0$ and $\varepsilon>0$ can be repeated within the considered subspace. Importantly, in this case the additional transposition is not necessary. ∎
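The $\varepsilon=0$ reconstruction step is compact enough to state in code. The sketch below (our illustration) converts target overlaps into the Gram matrix of Bloch vectors via $\mathbf{n}_{x}\cdot\mathbf{n}_{x^{\prime}}=2\,{\mathrm{tr}}(\varrho_{x}\varrho_{x^{\prime}})-1$ and recovers a representative set of Bloch vectors by a Cholesky factorisation:

```python
import numpy as np

def bloch_gram(overlaps):
    """Gram matrix of Bloch vectors from pure-qubit overlaps:
    n_x . n_x' = 2 tr(rho_x rho_x') - 1."""
    return 2.0 * np.asarray(overlaps) - 1.0

# Overlaps tr(rho^x_1 rho^x'_1) for three target states (3 qubit MUBs here).
overlaps = [[1.0, 0.5, 0.5],
            [0.5, 1.0, 0.5],
            [0.5, 0.5, 1.0]]
L = np.linalg.cholesky(bloch_gram(overlaps))   # rows = Bloch vectors

# Any experimental Bloch vectors with the same Gram matrix equal L @ O^T for
# an orthogonal O: a rotation (det O = 1, hence a unitary on states) or a
# rotation composed with a reflection (det O = -1, unitary plus transposition).
print(L)   # for 3 MUBs: three orthonormal Bloch vectors
```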
Figure 2: Lower bounds for the average fidelity between the experimental and
target states. Part (a) presents results for $n=2,3$ qubit MUBs. Part (b)
presents results for two qubit bases with different degrees of bias $\alpha$. The value $\alpha=0$ corresponds to two MUBs, while $\alpha=1$ gives two identical bases.
## 5 Examples
We now apply the quantitative formulation of Theorem 2 to lower-bound the average fidelities for different configurations of target states as a function of the allowed error $\varepsilon$. In Figure 2 we present results for eigenstates of $n=2$ and $n=3$ Pauli matrices (i.e., states forming $n=2$ and $n=3$ qubit MUBs), and for states belonging to two biased projective measurements satisfying ${\mathrm{tr}}(\varrho^{1}_{a}\varrho^{2}_{a})=\frac{1+\alpha}{2}$, $a=1,2$, where we take $\alpha\in[0,1]$.
For $n=2$ MUBs we compare our results with Ref. [22], which aimed at self-testing of qubit MUBs. The results of Ref. [22] give an upper bound on the average fidelity equal to $0.75$ for a deviation of $\simeq 0.1$ in the figure of merit. In our scheme this happens for $\varepsilon\simeq 0.033$, as shown in Fig. 2. To test the versatility of our scheme, we also applied it to $n=3$ MUBs, and to trine and tetrahedral configurations of qubit states. Quantitative results concerning these examples are listed in Table 1, while detailed derivations are given in Appendix D. For the trine and tetrahedral configurations the robustness is obtained via the so-called Procrustes problem, described in Appendix C, based on Ref. [38]. For cases other than two MUBs we cannot make any comparison with the existing literature since, to the best of our knowledge, these cases have not been studied previously.
Configuration | $\varepsilon_{0}$ | $C$
---|---|---
2 MUBs | $\approx 0.062$ | $\frac{7}{2}+\sqrt{2}$
3 MUBs | $\approx 0.030$ | $6$
2 biased bases | $\lesssim\frac{4-3\alpha-\sqrt{7-6\alpha}}{18}$ | $2+\left(1+\frac{\sqrt{1+\alpha}}{\sqrt{2}(1-\alpha)}\right)^{2}$
Trine | $\approx 0.058$ | $\frac{19}{3}$
Tetrahedron | $\approx 0.037$ | $10$
Table 1: Results of quantitative variant of Theorem 2 applied to different
configurations of target quantum states. The threshold $\varepsilon_{0}$ sets
the maximal noise level which is tolerated by our scheme. The constant $C$ is
defined via the relation $f(\varepsilon)\stackrel{\varepsilon\rightarrow
0}{\approx}C\varepsilon$, where $1-f$ is the lower bound on the average
fidelity from Eq. (2).
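As an illustration, the closed-form entries of the biased-bases row of Table 1 can be evaluated directly; the short sketch below only reproduces those table entries (not the derivation of Appendix D) and checks that $\alpha=0$ recovers the 2-MUB constant $C=\frac{7}{2}+\sqrt{2}$.

```python
import numpy as np

def eps0(alpha):
    # upper estimate on the tolerated noise level for two biased bases
    return (4 - 3 * alpha - np.sqrt(7 - 6 * alpha)) / 18

def C(alpha):
    # small-eps constant: the average-fidelity lower bound behaves
    # as 1 - C(alpha) * eps near eps = 0
    return 2 + (1 + np.sqrt(1 + alpha) / (np.sqrt(2) * (1 - alpha))) ** 2

assert np.isclose(C(0.0), 7 / 2 + np.sqrt(2))   # alpha = 0: two MUBs
for alpha in (0.0, 0.25, 0.5, 0.75):
    print(f"alpha = {alpha}: eps_0 <~ {eps0(alpha):.4f}, C = {C(alpha):.3f}")
```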
## 6 Shared randomness
Throughout the article we assumed that preparation and measurement devices are
uncorrelated, which is often true in practice. However, one may also consider
a situation in which Alice and Bob share a random variable $\lambda$, that
they can use to coordinate their actions in a given round of the experiment.
The most general statistics that can be generated in such a scenario can be
expressed as $p(b|a,x,y)=\int d\lambda
p(\lambda){\mathrm{tr}}(\varrho^{x}_{a}(\lambda)M^{y}_{b}(\lambda))$, where
$p(\lambda)$ denotes the probability distribution of $\lambda$.
Interestingly, the presence of shared randomness makes our main results, as
stated by Theorems 1 and 2, inapplicable. The easiest way to see it is to
consider the following simple example of $n=d=2$, one bit of shared randomness
$\lambda\in\\{1,2\\}$, and $\tilde{\varrho}^{1,2}_{a}=|a\rangle\langle a|$,
$\tilde{M}^{1,2}_{b}=|b\rangle\langle b|$. This clearly satisfies the
requirement that $\tilde{p}(a|a,x,x)=1$. Now, Alice’s and Bob’s devices can
decide to “flip” their preparations and measurement operators whenever $x=2$,
$y=2$ and $\lambda=2$ (note that $\lambda$ can be distributed in an arbitrary
way). This procedure does not affect the correlations $\tilde{p}(a|a,x,y=x)$,
but it can be used to set $\tilde{p}(a|a,1,2)$ to an arbitrary value.
Additionally, we believe that in the presence of shared randomness, the usual
notion of self-testing in the SDI setting has to be reconsidered.
Specifically, for _any_ quantum realisation giving the statistics
$p(b|a,x,y)={\mathrm{tr}}(\varrho^{x}_{a}M^{y}_{b})$, one can always consider
a strategy in which with probability $p(\lambda)$ Alice prepares states
$\varrho^{x}_{a}(\lambda)=U_{\lambda}\varrho^{x}_{a}U^{\dagger}_{\lambda}$ and
Bob implements measurements
$M^{y}_{b}(\lambda)=U_{\lambda}M^{y}_{b}U^{\dagger}_{\lambda}$, where
$U_{\lambda}$ is some arbitrary unitary transformation depending on $\lambda$.
Clearly, such a strategy reproduces the original statistics, and makes it
impossible to find a single unitary that connects the target and experimental
states (or measurements).
While we do not consider the assumption of no shared randomness to be a strong
one, below we propose a modification of our SDI certification scheme that
allows one to certify overlaps between arbitrary pure states even in the presence of shared
randomness. The idea is to introduce, for every pair of non-orthogonal states,
a suitable _intermediate state_ [39], that enforces fixed overlaps between
experimental states in every round of the experiment.
Recall that in our certification scheme we consider pure target qudit states
$\varrho^{x}_{a}$ and target projective measurements $\mathbf{M}^{y}$, where
$a\in[d]$ and $x,y\in[n]$. We extend this scheme by introducing an additional
intermediate target state $\varrho_{z}$ for every pair of Alice’s inputs
$(x,a)$, $(x^{\prime},a^{\prime})$, where $a\neq a^{\prime}$ and $x\neq
x^{\prime}$. The state $\varrho_{z}$ is chosen as the unique state satisfying:
${\mathrm{tr}}\left(\varrho_{z}(\varrho^{x}_{a}+\varrho^{x^{\prime}}_{a^{\prime}})\right)=1+{\mathrm{tr}}(\varrho^{x}_{a}\varrho^{x^{\prime}}_{a^{\prime}})^{\frac{1}{2}}\
.$ (10)
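For states supported on a qubit subspace, the state $\varrho_{z}$ of Eq. (10) can be computed as the top eigenvector of $\varrho^{x}_{a}+\varrho^{x^{\prime}}_{a^{\prime}}$, whose largest eigenvalue equals $1+\sqrt{{\mathrm{tr}}(\varrho^{x}_{a}\varrho^{x^{\prime}}_{a^{\prime}})}$; below is a minimal sketch with the hypothetical input pair $|0\rangle$, $|+\rangle$.

```python
import numpy as np

def intermediate_state(rho1, rho2):
    """Unique maximiser of tr(rho_z (rho1 + rho2)) over all states: Eq. (10)."""
    vals, vecs = np.linalg.eigh(rho1 + rho2)
    v = vecs[:, -1]                      # eigenvector of the largest eigenvalue
    return np.outer(v, v.conj())

ket0 = np.array([1.0, 0.0], dtype=complex)
plus = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2)
rho1, rho2 = np.outer(ket0, ket0.conj()), np.outer(plus, plus.conj())

rho_z = intermediate_state(rho1, rho2)
lhs = np.trace(rho_z @ (rho1 + rho2)).real
rhs = 1 + np.sqrt(np.trace(rho1 @ rho2).real)
assert np.isclose(lhs, rhs)              # Eq. (10) is satisfied
```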
Let $\tilde{p}(b|z,y)$ denote the experimental statistics, corresponding to
inputs $z,y$ and outcome $b$. In the ideal scenario, in which experimental
statistics satisfy assumptions of Theorem 1 with $\varepsilon=0$, assume
additionally that
$\tilde{p}(a|z,x)+\tilde{p}(a^{\prime}|z,x^{\prime})=1+\sqrt{\tilde{p}(a^{\prime}|x,a,x^{\prime})}\
.$ Below we prove that under the above assumptions for all $\lambda$ we have
${\mathrm{tr}}(\tilde{\varrho}^{x}_{a}(\lambda)\tilde{\varrho}^{x^{\prime}}_{a^{\prime}}(\lambda))={\mathrm{tr}}(\varrho^{x}_{a}\varrho^{x^{\prime}}_{a^{\prime}})$.
Since in the ideal case overlaps between preparation states _do not depend_ on
$\lambda$ and match the target value, our modified certification scheme works
also in the presence of shared randomness. In the noisy scenario one can
generally expect to be able to upper-bound the probability (over the choice of
$\lambda$) of the deviations from perfect correlations between states and
measurements, as a function of the error $\varepsilon$. We postpone the
discussion of this interesting point to the future work.
###### Proof of soundness of certification in the presence of shared
randomness.
Consider a general physical realisation of experimental statistics
$\tilde{p}(b|x,a,y)$, $\tilde{p}(b|z,y)$ via classically correlated
preparations and measurements on $\mathbb{C}^{d}$:
$\displaystyle\tilde{p}(b|x,a,y)$ $\displaystyle=\int d\lambda
p(\lambda){\mathrm{tr}}(\tilde{\varrho}^{x}_{a}(\lambda)\tilde{M}^{y}_{b}(\lambda))\
,$ (11) $\displaystyle\tilde{p}(b|z,y)$ $\displaystyle=\int d\lambda
p(\lambda){\mathrm{tr}}(\tilde{\varrho}_{z}(\lambda)\tilde{M}^{y}_{b}(\lambda))\
,$ (12)
where $x,y\in[n]$, $a,b\in[d]$ and the variable (input) $z$ labels elements in
the set of unordered pairs $\\{{(x,a),(x^{\prime},a^{\prime})\\}}$, where
$x\neq x^{\prime}$ and $a\neq a^{\prime}$. Assume now that the above
statistics match _exactly_ the ones required by our scheme and are compatible
with Eq. (10) in the sense that:
$\displaystyle\tilde{p}(b|x,a,y)$ $\displaystyle=$
$\displaystyle{\mathrm{tr}}(\varrho^{x}_{a}\varrho^{y}_{b})\ ,$ (13)
$\displaystyle\tilde{p}(a|z,x)+\tilde{p}(a^{\prime}|z,x^{\prime})$
$\displaystyle=$ $\displaystyle 1+\sqrt{\tilde{p}(a^{\prime}|x,a,x^{\prime})}\
.$ (14)
In what follows we prove that if the above constraints are satisfied, then for
all $\lambda$ (technically speaking, this equality holds for _almost all_ $\lambda$):
$\displaystyle\tilde{\varrho}^{x}_{a}(\lambda)$ $\displaystyle=$
$\displaystyle\tilde{M}^{x}_{a}(\lambda)\ ,$ (15)
$\displaystyle{\mathrm{tr}}(\varrho^{x}_{a}\tilde{\varrho}^{x^{\prime}}_{a^{\prime}})$
$\displaystyle=$
$\displaystyle{\mathrm{tr}}(\varrho^{x}_{a}(\lambda)\tilde{\varrho}^{x^{\prime}}_{a^{\prime}}(\lambda))\
,$ (16)
where $x,x^{\prime}\in[n]$, $a,a^{\prime}\in[d]$. The above equation means
that the overlaps between preparation states (and measurements) do not depend
on the value of the shared random variable $\lambda$.
The proof of Eq. (15) is straightforward. Namely, from Eq. (13) it follows
that:
$\int d\lambda
p(\lambda){\mathrm{tr}}\left(\tilde{\varrho}^{x}_{a}(\lambda)\tilde{M}^{x}_{a}(\lambda)\right)=1\
.$ (17)
Since
${\mathrm{tr}}(\tilde{\varrho}^{x}_{a}(\lambda)\tilde{M}^{x}_{a}(\lambda))\leq
1$, we get that for all $\lambda$ we have
${\mathrm{tr}}(\tilde{\varrho}^{x}_{a}(\lambda)\tilde{M}^{x}_{a}(\lambda))=1$.
Since this reasoning can be repeated for all $a\in[d]$ (for a fixed value of
$x$), and since the operators $\tilde{M}^{x}_{a}$ form a POVM on
$\mathbb{C}^{d}$, we finally get Eq. (15).
The proof of Eq. (16) is more involved and relies on both Eqs. (13) and (14).
Specifically, from Eq. (13) and the already established identity
$\tilde{\varrho}^{x}_{a}=\tilde{M}^{x}_{a}$, it follows that Eq. (14) is
equivalent to:
$\int d\lambda
p(\lambda){\mathrm{tr}}\left(\tilde{\varrho}_{z}(\lambda)[\tilde{\varrho}^{x}_{a}(\lambda)+\tilde{\varrho}^{x^{\prime}}_{a^{\prime}}(\lambda)]\right)=1+\sqrt{{\mathrm{tr}}(\varrho^{x}_{a}\varrho^{x^{\prime}}_{a^{\prime}})}\
,$ (18)
where moreover:
$\int d\lambda
p(\lambda){\mathrm{tr}}\left(\tilde{\varrho}^{x}_{a}(\lambda)\tilde{\varrho}^{x^{\prime}}_{a^{\prime}}(\lambda)\right)={\mathrm{tr}}(\varrho^{x}_{a}\varrho^{x^{\prime}}_{a^{\prime}})\
.$ (19)
Using Bloch representation of qubit states (states
$\tilde{\varrho}^{x}_{a}(\lambda),\tilde{\varrho}^{x^{\prime}}_{a^{\prime}}(\lambda)$
are pure and hence span a two dimensional subspace of $\mathbb{C}^{d}$) it is
straightforward to obtain the bound:
${\mathrm{tr}}\left(\tilde{\varrho}_{z}(\lambda)[\tilde{\varrho}^{x}_{a}(\lambda)+\tilde{\varrho}^{x^{\prime}}_{a^{\prime}}(\lambda)]\right)\leq
1+\sqrt{{\mathrm{tr}}(\tilde{\varrho}^{x}_{a}(\lambda)\tilde{\varrho}^{x^{\prime}}_{a^{\prime}}(\lambda))}\
.$ (20)
After setting
$g(\lambda)=\sqrt{{\mathrm{tr}}(\tilde{\varrho}^{x}_{a}(\lambda)\tilde{\varrho}^{x^{\prime}}_{a^{\prime}}(\lambda))}$
and using Eq. (19) we obtain:
$\int d\lambda p(\lambda)g(\lambda)\geq\sqrt{\int d\lambda
p(\lambda)g(\lambda)^{2}}\ .$ (21)
Using the Cauchy-Schwarz inequality for the left-hand side of the above
inequality we finally get:
$\int d\lambda p(\lambda)g(\lambda)=\sqrt{\int d\lambda
p(\lambda)g(\lambda)^{2}}\ ,$ (22)
which is equivalent to saying that the variance of the random variable
$g(\lambda)$ vanishes. Therefore, $g(\lambda)=\alpha$, where $\alpha$ is some
numerical constant. We conclude the proof by noting that from Eq. (19) it
follows that
$\alpha^{2}={\mathrm{tr}}(\varrho^{x}_{a}\varrho^{x^{\prime}}_{a^{\prime}})$
which implies Eq. (16). ∎
## 7 Extension of the scheme to SDI characterization of rank-1 non-projective
measurements
In our work we focus on self-testing of arbitrary configurations of
preparation states and projective measurements. In this section we discuss a
possible extension of our results to the certification of properties of general
rank-1 measurements in arbitrary dimension $d$. To that end, we introduce
additional states coming from suitably chosen orthonormal bases, one for every
effect of the measurement we want to certify. With the help of these extra
states (and corresponding projective measurements) we can e.g., certify the
following properties of POVMs in arbitrary dimension $d$: (i) information
completeness, (ii) extremality of rank-1 POVMs, and (iii) SIC-property.
The main idea of the extension is the following. Suppose we would like to
self-test an $m$-outcome POVM $\mathbf{N}=(N_{1},\dots,N_{m})$ acting on a
Hilbert space of dimension $d<m$ with each effect $N_{b}$ being of rank $1$
i.e., $N_{b}=\alpha_{b}\sigma_{b}$, for some pure states $\sigma_{b}$ and
positive scalars $\alpha_{b}$. We consider $m$ sets of preparation states
$\\{\varrho^{x}_{a}\\}$, $a\in[d]$, ${x\in[m]}$ and the corresponding
projective measurements $\mathbf{M}^{y}$, $y\in[m]$. Assume now that in the
idealized case we observe statistics satisfying:
$\displaystyle{\mathrm{tr}}(\varrho^{b}_{a}N_{b})=0,\quad\forall
b\in[m],\,a\neq 1,$
$\displaystyle{\mathrm{tr}}(\varrho^{x}_{a}M^{x}_{b})=\delta_{a,b},\quad\forall
x\in[m]\ .$ (23)
From the second set of conditions we conclude that $\\{\varrho^{x}_{a}\\}_{a}$
forms, as before, a basis of pure states for each $x$. From the first
condition in Eq. (23) we get that $\mathbf{N}$ has to be a rank-1 POVM and,
moreover, that $\varrho^{b}_{1}=\sigma_{b}$ ($N_{b}$ is orthogonal to all of
the states except the one with $a=1$). Consequently, we have
${\mathrm{tr}}(\varrho^{b}_{1}N_{b})=\alpha_{b}$ for all $b\in[m]$. This fact
together with the measured overlaps
${\mathrm{tr}}(\varrho^{b}_{1}\varrho^{b^{\prime}}_{1})={\mathrm{tr}}(\varrho^{b}_{1}M^{b^{\prime}}_{1})$
gives the information about the overlaps ${\mathrm{tr}}(N_{b}N_{b^{\prime}})$
between effects of the POVM $\mathbf{N}$.
The knowledge of all overlaps ${\mathrm{tr}}(N_{b}N_{b^{\prime}})$ (including
$b^{\prime}=b$) and the fact that $\mathbf{N}$ is a rank-1 POVM allow one to
certify many interesting properties of $\mathbf{N}$. First, as long as $m\leq
d^{2}$ we can certify extremality of $\mathbf{N}$. This follows from the fact
that rank-1 POVMs are extremal if and only if their effects are linearly
independent (see e.g., Ref. [40]). Linear independence can be directly
inferred from the experimentally accessible moment matrix
$\Gamma_{b,b^{\prime}}={\mathrm{tr}}(N_{b}N_{b^{\prime}})$. Specifically,
$\Gamma$ is non-singular if and only if the operators $\\{N_{b}\\}_{b}$ are
linearly independent. Now, if $m=d^{2}$ we can use the same reasoning to infer
information completeness of $\mathbf{N}$. Finally, from the moment matrix
$\Gamma$ we can directly verify the SIC property, i.e., the condition
${\mathrm{tr}}(N_{b}N_{b^{\prime}})=\frac{d\,\delta_{b,b^{\prime}}+1}{d^{2}(d+1)}$
for all $b,b^{\prime}$.
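Below is a minimal numerical sketch of these moment-matrix checks, for the hypothetical example of the qubit tetrahedral SIC-POVM ($d=2$, $m=4$).

```python
import numpy as np

# Pauli matrices and tetrahedral Bloch vectors (a qubit SIC-POVM, d = 2, m = 4).
sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]], dtype=complex),
       np.array([[1, 0], [0, -1]], dtype=complex)]
t = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)

d, m = 2, 4
# Rank-1 effects N_b = (1/2)|phi_b><phi_b| = (1/4)(id + t_b . sigma).
N = [(np.eye(2, dtype=complex)
      + sum(tb[i] * sig[i] for i in range(3))) / 4 for tb in t]
assert np.allclose(sum(N), np.eye(2))                  # a valid POVM

Gamma = np.array([[np.trace(Nb @ Nc).real for Nc in N] for Nb in N])
# Non-singular Gamma: the effects are linearly independent, hence the POVM
# is extremal and, since m = d^2, informationally complete.
assert np.linalg.matrix_rank(Gamma) == m
# SIC property of the overlaps.
assert np.allclose(Gamma, (d * np.eye(m) + 1) / (d**2 * (d + 1)))
```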
## 8 Discussion
We have presented a systematic analytical scheme for noise-resilient
certification of overlaps between arbitrary configurations of pure quantum
states and rank-$1$ projective measurements in the prepare-and-measure
scenario. For qubits our scheme can be used to robustly self-test general
ensembles of pure quantum states and the corresponding projective measurements.
We believe that these findings pave the way towards systematic certification
of general quantum systems in the semi-device-independent paradigm. This is
supported by the concrete qubit results from Table 1 and by the universality
of our protocol for certifying overlaps of quantum states in arbitrary
dimension.
There are many exciting research directions that stem from our work. One of
them is a systematic SDI characterization of generalized measurements beyond
qubits. Another research direction is the robustness analysis of our scheme in
the presence of shared randomness. An important contribution to the field of
self-testing would be a generalization of our self-testing results for qubits
to higher-dimensional systems. Finally, it is interesting to combine our
methods with recent findings relating state discrimination games and resource
theories of measurements and channels [41, 42, 43].
We would also like to draw reader’s attention to a related work [44].
## Acknowledgements
We thank Marcin Pawłowski, Jedrzej Kaniewski and Tanmoy Biswas for interesting
discussions and comments. NM acknowledges the financial support by First TEAM
Grant No. 2016-1/5. We acknowledge partial support by the Foundation for
Polish Science (IRAP project, ICTQT, contract no. 2018/MAB/5, co-financed by
EU within Smart Growth Operational Programme). We acknowledge the support from
International Research Agendas Programme of the Foundation for Polish Science
funded by the European Regional Development Fund. MO acknowledges the
financial support by TEAM-NET project (contract no.
POIR.04.04.00-00-17C1/18-00).
## Appendix
In this Appendix we provide technical details that were omitted in the main
text. First, in Appendix A we prove statements regarding saturation of the
bounds in Theorem 1. Second, in Appendix B we formulate and prove a
quantitative version of Theorem 2, which concerns robustness of our self-
testing protocol expressed in terms of average fidelities. We also prove there
Theorem 3 that gives analogous results expressed in terms of trace distance.
In Appendix C we provide alternative robustness analysis based on orthogonal
Procrustes problem. In Appendix D we provide technical details about the
considered examples in Table 1. Finally, in Appendix E we give proofs of two
auxiliary Lemmas needed in the proof of the technical version of Theorem 2.
## Appendix A Explicit form of states and measurements saturating the bounds
in Theorem 1
We start by giving an example of four states and two qubit measurements
($n=2$, $d=2$), for which the bound in Eq. (3) on the deviation of the
overlaps scales like $\sqrt{\varepsilon}$. Their explicit form is given below:
$\displaystyle\varrho^{1}_{1}=|\tilde{0}\rangle\langle\tilde{0}|,\quad|\tilde{0}\rangle=\sqrt{1-\varepsilon}|0\rangle+\sqrt{\varepsilon}|1\rangle,\quad\varrho^{2}_{1}=|+\rangle\langle+|,$
(24)
where $|+\rangle=\frac{1}{\sqrt{2}}(|0\rangle+|1\rangle)$, and as required by
our scheme $\varrho^{x}_{2}=\mathbbm{1}-\varrho^{x}_{1},\;x=1,2$ and
$M^{x}_{a}=\varrho^{x}_{a}$, for all $x,a$. Let the experimental states and
measurements be the following:
$\displaystyle\tilde{\varrho}^{1}_{1}=|0\rangle\langle
0|,\quad\tilde{\varrho}^{1}_{2}=|1\rangle\langle
1|,\quad\tilde{\varrho}^{2}_{1}=|+\rangle\langle+|,\quad\tilde{\varrho}^{2}_{2}=|-\rangle\langle-|,$
$\displaystyle\tilde{M}^{1}_{1}=|\tilde{0}\rangle\langle\tilde{0}|,\quad\tilde{M}^{2}_{1}=|\tilde{+}\rangle\langle\tilde{+}|,\quad|\tilde{+}\rangle=\sqrt{1-\varepsilon}|+\rangle+\sqrt{\varepsilon}|-\rangle\
.$ (25)
First, we need to show that the experimental statistics deviate from the
target ones by at most $\varepsilon$, i.e.,
$|{\mathrm{tr}}(\tilde{\varrho}^{x}_{a}\tilde{M}^{y}_{b})-{\mathrm{tr}}(\varrho^{x}_{a}\varrho^{y}_{b})|\leq\varepsilon,\forall
a,b,x,y$. Since $\tilde{\varrho}^{x}_{1}+\tilde{\varrho}^{x}_{2}=\mathbbm{1}$
a,b,x,y$. Since $\tilde{\varrho}^{x}_{1}+\tilde{\varrho}^{x}_{2}=\mathbbm{1}$
for both $x=1,2$, it is sufficient to consider the cases of $a=1,b=1$.
Calculating the statistics we can see that
${\mathrm{tr}}(\tilde{\varrho}^{1}_{1}\tilde{M}^{1}_{1})={\mathrm{tr}}(\tilde{\varrho}^{2}_{1}\tilde{M}^{2}_{1})=1-\varepsilon$,
and
${\mathrm{tr}}(\tilde{\varrho}^{1}_{1}\tilde{M}^{2}_{1})={\mathrm{tr}}(\tilde{\varrho}^{2}_{1}\tilde{M}^{1}_{1})=\frac{1}{2}+\sqrt{\varepsilon-\varepsilon^{2}}$
and the latter is just the same as the target probability,
${\mathrm{tr}}(\varrho^{1}_{1}\varrho^{2}_{1})$. Finally, calculation of the
overlap between the states yields:
$\displaystyle|{\mathrm{tr}}(\varrho^{1}_{1}\varrho^{2}_{1})-{\mathrm{tr}}(\tilde{\varrho}^{1}_{1}\tilde{\varrho}^{2}_{1})|=\sqrt{\varepsilon-\varepsilon^{2}},$
(26)
which confirms the leading-order $\sqrt{\varepsilon}$ scaling.
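This example is easy to verify numerically; below is a minimal sketch for one hypothetical value of $\varepsilon$.

```python
import numpy as np

# Verify Eqs. (24)-(26): the statistics deviate by at most eps while the
# state overlap deviates by sqrt(eps - eps^2).
eps = 1e-4
ket0 = np.array([1.0, 0.0], dtype=complex)
ket1 = np.array([0.0, 1.0], dtype=complex)
plus, minus = (ket0 + ket1) / np.sqrt(2), (ket0 - ket1) / np.sqrt(2)
t0 = np.sqrt(1 - eps) * ket0 + np.sqrt(eps) * ket1      # |0~> of Eq. (24)
tplus = np.sqrt(1 - eps) * plus + np.sqrt(eps) * minus  # |+~> of Eq. (25)

proj = lambda v: np.outer(v, v.conj())
rho = [proj(t0), proj(plus)]                            # target states (24)
rho_t = [proj(ket0), proj(plus)]                        # experimental states
M_t = [proj(t0), proj(tplus)]                           # experimental effects

for x in range(2):                                      # a = b = 1 suffices
    for y in range(2):
        dev = abs(np.trace(rho_t[x] @ M_t[y]).real
                  - np.trace(rho[x] @ rho[y]).real)
        assert dev <= eps + 1e-12
gap = abs(np.trace(rho[0] @ rho[1]).real - np.trace(rho_t[0] @ rho_t[1]).real)
assert np.isclose(gap, np.sqrt(eps - eps**2))           # Eq. (26)
```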
Now, let us show that the bounds from Eq. (3) are also tight in the first
order in $d$. We still consider the case of $n=2$, but now $d$ is arbitrary.
Let us consider the following POVM:
$M_{1}=|1\rangle\langle 1|+\varepsilon(d-1)|+\rangle\langle+|\ ,\quad
M_{a}=(|a\rangle-\delta|+\rangle)(\langle a|-\delta\langle+|)\ ,\quad a=2,3,\dots,d\ ,$ (27)
where $\delta=\frac{1}{\sqrt{d-1}}+\sqrt{\frac{1}{d-1}-\varepsilon}$,
$\\{|a\rangle\\}_{a=1}^{d}$ is the computational basis of $\mathbb{C}^{d}$ and
$|+\rangle=\frac{1}{\sqrt{d-1}}\sum_{a=2}^{d}|a\rangle$ is the "maximally
coherent" state in the subspace spanned by $\\{|i\rangle\\}_{i=2}^{d}$. In the
proof of Theorem 1 the quantity
$|{\mathrm{tr}}(\tilde{\varrho}^{x}_{a}\tilde{\varrho}^{x^{\prime}}_{a^{\prime}})-{\mathrm{tr}}(\varrho^{x}_{a}\varrho^{x^{\prime}}_{a^{\prime}})|$
is upper-bounded by $\varepsilon+\left|\left|\hskip
1.0pt\tilde{\varrho}^{x}_{a}-\tilde{M}^{x}_{a}\hskip 1.0pt\right|\right|$, for
which there exists a state $\tilde{\varrho}^{x^{\prime}}_{a^{\prime}}$ and an
effect $\tilde{M}^{x^{\prime}}_{a^{\prime}}$ reaching the bound. Now if we
take $\mathbf{\tilde{M}}^{x}$ to be the POVM we just introduced, and
$\tilde{\varrho}^{x}_{1}=|1\rangle\langle 1|$,
$\tilde{\varrho}^{x}_{a}=\frac{M_{a}}{{\mathrm{tr}}(M_{a})}$, $a=2,3,\dots,d$,
the conditions of Theorem 1 will be satisfied and the resulting bound will
be $d\varepsilon$.
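One can also check numerically that Eq. (27) defines a valid POVM; a minimal sketch with hypothetical values of $d$ and $\varepsilon$ (the basis $\\{|a\rangle\\}_{a=1}^{d}$ is indexed from $0$ in the code):

```python
import numpy as np

d, eps = 5, 0.01
assert eps <= 1 / (d - 1)                       # delta must be real
basis = np.eye(d, dtype=complex)
plus = basis[1:].sum(axis=0) / np.sqrt(d - 1)   # |+> on span{|2>, ..., |d>}
delta = 1 / np.sqrt(d - 1) + np.sqrt(1 / (d - 1) - eps)

M = [np.outer(basis[0], basis[0].conj())
     + eps * (d - 1) * np.outer(plus, plus.conj())]          # M_1 of Eq. (27)
for a in range(1, d):
    v = basis[a] - delta * plus                              # M_a, a = 2, ..., d
    M.append(np.outer(v, v.conj()))

assert np.allclose(sum(M), np.eye(d))                        # completeness
assert all(np.linalg.eigvalsh(Ma).min() > -1e-12 for Ma in M)  # positivity
```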
## Appendix B Quantitative statement of the robust self-testing
In this part we first formulate Theorem 3, in which we present the robustness
of our self-testing scheme in terms of the trace distance for states and the
operator norm for measurements. (Recall that the trace distance is defined via
$\mathrm{d}_{\mathrm{tr}}(\sigma,\varrho)=\frac{1}{2}\left|\left|\hskip
1.0pt\sigma-\varrho\hskip
1.0pt\right|\right|_{1}=\frac{1}{2}{\mathrm{tr}}\sqrt{(\sigma-\varrho)^{\dagger}(\sigma-\varrho)}$
and that it has a neat operational interpretation in terms of the optimal
success probability $p$ of distinguishing $\varrho$ and $\sigma$ via the most
general quantum measurements:
$p=\frac{1}{2}(1+\mathrm{d}_{\mathrm{tr}}(\sigma,\varrho))$.) Then, we give a
quantitative statement of Theorem 2, which concerns the robustness of our
self-testing scheme in terms of average fidelities. We then proceed with the
proofs of both results.
In what follows we will need the following definition.
###### Definition 1.
Let $\\{\mathbf{n}_{i}\\}_{i\in[n]}$ be a set of $n$ vectors in
$\mathbb{R}^{l}$. Matrix $\Gamma\in\mathbb{R}^{n\times n}$ is called the Gram
matrix of the set $\\{\mathbf{n}_{i}\\}_{i\in[n]}$, if its elements are given
by $\Gamma_{i,j}=\mathbf{n}_{i}\cdot\mathbf{n}_{j}$, $i,j\in[n]$.
###### Theorem 3 (Robust self-testing for qubits via trace distance and
operator norm).
Consider pure target qubit preparation states $\varrho^{x}_{a}$ and target
projective measurements $\mathbf{M}^{y}$, where $a=1,2$ and $x,y\in[n]$.
Assume that $\varrho^{x}_{a}=M^{x}_{a}$ for all $a,x$ and, furthermore, that
experimental states $\tilde{\varrho}^{x}_{a}$ and measurements
$\mathbf{\tilde{M}}^{y}$ act on Hilbert space of dimension at most $d$ and
generate statistics
$\tilde{p}(b|a,x,y)={\mathrm{tr}}(\tilde{\varrho}^{x}_{a}\tilde{M}^{y}_{b})$
such that
$|\tilde{p}(b|a,x,y)-{\mathrm{tr}}(\varrho^{x}_{a}M^{y}_{b})|\leq\varepsilon$,
for all $a,b,x,y$.
Let $k\in\\{{2,3\\}}$ be the cardinality of the maximal set of linearly
independent Bloch vectors of states $\varrho^{x}_{1}$. Fix a set $S\subset[n]$
of $k$ linearly independent vectors $\\{\mathbf{n}^{x}_{1}\\}_{x\in S}$,
construct their Gram matrix $\Gamma_{S}$, and let $L_{S}$ be a Cholesky factor
of $\Gamma_{S}$ (i.e., $\Gamma_{S}=L_{S}L^{T}_{S}$ and $L_{S}$ is lower
triangular). For every $x\in[n]\setminus S$ and both $a=1,2$ let
$\mathbf{c}^{x,a}_{S}$ denote the coefficients of the decomposition of
$\mathbf{n}^{x}_{a}$ as a linear combination of
$\\{\mathbf{n}^{x}_{1}\\}_{x\in S}$. Finally, let us define three auxiliary
functions
$\displaystyle
F_{k}(\varepsilon)\coloneqq\sqrt{\varepsilon}\sqrt{4k(k-1)}\sqrt{1+2\sqrt{\varepsilon}+\frac{k+3}{k-1}\varepsilon}\
,$ $\displaystyle O_{k}(\varepsilon)\coloneqq
2((k-1)\sqrt{\varepsilon}+(k+1)\varepsilon)\ ,$ (28) $\displaystyle
E_{S,k}(\varepsilon)\coloneqq\frac{1}{2\sqrt{2}}\frac{\left|\left|\hskip
1.0pt\Gamma_{S}^{-1}\hskip
1.0pt\right|\right|F_{k}(\varepsilon)}{\sqrt{1-\left|\left|\hskip
1.0pt\Gamma_{S}^{-1}\hskip
1.0pt\right|\right|O_{k}(\varepsilon)}}\min\left[\frac{\left|\left|\hskip
1.0pt\Gamma_{S}\hskip 1.0pt\right|\right|}{\left|\left|\hskip
1.0pt\Gamma_{S}\hskip 1.0pt\right|\right|_{F}},\frac{\left|\left|\hskip
1.0ptL_{S}\hskip 1.0pt\right|\right|}{\sqrt{k}}\right]\ .$
Then, there is a region of $\varepsilon\in[0,\varepsilon_{0})$ determined by
$\displaystyle\left|\left|\hskip 1.0pt\Gamma_{S}^{-1}\hskip
1.0pt\right|\right|O_{k}(\varepsilon)\leq 1,$ (29)
for which there exists a qubit unitary matrix $U$ such that
$\displaystyle\frac{1}{k}\sum_{x\in
S}\mathrm{d}_{\mathrm{tr}}(U(\tilde{\varrho}^{x}_{1})^{(T)}U^{\dagger},\varrho^{x}_{1})\leq
E_{S,k}(\varepsilon)\ ,$ (30a) $\displaystyle\frac{1}{k}\sum_{x\in
S}\mathrm{d}_{\mathrm{tr}}(U(\tilde{\varrho}^{x}_{2})^{(T)}U^{\dagger},\varrho^{x}_{2})\leq
E_{S,k}(\varepsilon)+2\sqrt{\varepsilon}\ ,$ (30b)
$\displaystyle\mathrm{d}_{\mathrm{tr}}(U(\tilde{\varrho}^{x}_{a})^{(T)}U^{\dagger},\varrho^{x}_{a})\leq\sqrt{k}{|\mathbf{c}^{a,x}_{S}|E_{S,k}(\varepsilon)}+\frac{\sqrt{k}}{2}\left(|\mathbf{c}^{a,x}_{S}|+\frac{\sqrt{k}}{k-1}\right)\frac{\left|\left|\hskip
1.0pt\Gamma_{S}^{-1}\hskip
1.0pt\right|\right|O_{k}(\varepsilon)}{1-\left|\left|\hskip
1.0pt\Gamma_{S}^{-1}\hskip 1.0pt\right|\right|O_{k}(\varepsilon)},$ (30c)
$\displaystyle\text{for}\;x\notin S,\ a=1,2\ ,$
$\displaystyle\frac{1}{k}\sum_{y\in S}\left|\left|\hskip
1.0ptU(\tilde{M}^{y}_{1})^{(T)}U^{\dagger}-M^{y}_{1}\hskip
1.0pt\right|\right|\leq E_{S,k}(\varepsilon)+\sqrt{\varepsilon}\ ,$ (30d)
$\displaystyle\left|\left|\hskip
1.0ptU(\tilde{M}^{y}_{1})^{(T)}U^{\dagger}-M^{y}_{1}\hskip
1.0pt\right|\right|\leq\mathrm{d}_{\mathrm{tr}}(U(\tilde{\varrho}^{y}_{1})^{(T)}U^{\dagger},\varrho^{y}_{1})+\sqrt{\varepsilon},\quad\text{for}\;y\notin
S\ ,$ (30e)
where $(\cdot)^{(T)}$ is the transposition (with respect to a fixed basis in
$\mathbb{C}^{2}$) that may have to be applied to all experimental states and
measurements simultaneously.
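To get a feeling for the strength of these bounds, the sketch below evaluates $F_{k}$, $O_{k}$ and $E_{S,k}$, i.e., the right-hand side of Eq. (30a), for the hypothetical choice of $k=3$ MUBs, for which $\Gamma_{S}=\mathbbm{1}_{3}$.

```python
import numpy as np

def F(k, e):
    return np.sqrt(e) * np.sqrt(4 * k * (k - 1)) \
        * np.sqrt(1 + 2 * np.sqrt(e) + (k + 3) / (k - 1) * e)

def O(k, e):
    return 2 * ((k - 1) * np.sqrt(e) + (k + 1) * e)

def E(Gamma_S, e):
    k = Gamma_S.shape[0]
    L = np.linalg.cholesky(Gamma_S)
    inv_norm = np.linalg.norm(np.linalg.inv(Gamma_S), 2)   # operator norm
    if inv_norm * O(k, e) >= 1:
        return np.inf                                      # condition (29) fails
    pre = inv_norm * F(k, e) / (2 * np.sqrt(2)
                                * np.sqrt(1 - inv_norm * O(k, e)))
    return pre * min(np.linalg.norm(Gamma_S, 2) / np.linalg.norm(Gamma_S, 'fro'),
                     np.linalg.norm(L, 2) / np.sqrt(k))

Gamma_S = np.eye(3)                  # 3 MUBs: orthonormal Bloch vectors
for e in (1e-4, 1e-3, 1e-2):
    print(f"eps = {e}: average trace-distance bound <= {E(Gamma_S, e):.4f}")
```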
The reason for such a formulation of the theorem lies in the proof techniques
that were used to derive it. Namely, we use results on the stability of the
Cholesky factorization. Since the result employed by us (Ref. [37]) is valid
for positive-definite matrices we cannot apply it to the Gram matrix of all of
the vectors $\mathbf{n}^{x}_{a}$ due to their linear dependence. As a result,
our bounds depend on the particular choice of the set $S$ of states whose
Bloch vectors are linearly independent. In what follows, without the loss of
generality we take $S=\\{1,2\\}$ for $k=2$ and $S=\\{1,2,3\\}$ for $k=3$ in
the proof. In practice, however, one should consider all subsets $S$ of states
that give non-singular Gram matrices having in mind a specific application.
_Outline of the proof.–_ In what follows we will consider a particular subset
$\\{\mathbf{n}^{x}_{1}\\}_{x\in S}$ of $k$ linearly independent Bloch vectors.
For simplicity we will omit the subscript $S$ whenever possible. The main idea
of the proof is to use the Cholesky factors of the $k\times k$ Gram matrices
$\Gamma$ and $\tilde{\Gamma}$ of target ($\\{\mathbf{n}^{x}_{1}\\}_{x\in S}$)
and experimental ($\\{\mathbf{\tilde{n}}^{x}_{1}\\}_{x\in S}$) Bloch vectors
respectively. We then make use of the result of Ref. [37] on the stability of
Cholesky factorization which, in simple terms, states that if $\Gamma$ and
$\tilde{\Gamma}$ are close to each other, so are their Cholesky factors $L$
and $\tilde{L}$. Specifically, this result, which we quote below (see Theorem
[Sun 1991]), sets an upper bound on the Frobenius norm of $\Delta
L=\tilde{L}-L$ in terms of the Frobenius norm of the perturbation
$\Delta\Gamma=\tilde{\Gamma}-\Gamma$. The Frobenius norm of the perturbation
$\Delta L$ can be connected to the trace distance between states
$\varrho^{x}_{1}$ and the rotated states $\tilde{\varrho}^{x}_{1}$ in the
selected subset $S$. On the other hand, the bound on $\left|\left|\hskip
1.0pt\Delta\Gamma\hskip 1.0pt\right|\right|_{F}$ can be estimated from our
assumption: $|p(b|a,x,y)-\tilde{p}(b|a,x,y)|\leq\varepsilon$. This bound
follows directly from the results of Theorem 1, but here we use improved
qubit stability bounds, which are given by Lemma 1. This, in turn, leads to
stronger estimates for the norms of $\Delta\Gamma$ in Lemma 2 below.
Combining all these results produces bounds in Eq. (30a).
In the next part of the proof we determine the trace distance between states
$\varrho^{x}_{2}$ and $\tilde{\varrho}^{x}_{2}$ for $x\in S$ based solely on
the fact that $\varrho^{x}_{2}=\mathbbm{1}-\varrho^{x}_{1}$ and
$\tilde{\varrho}^{x}_{2}$ are _close to_
$\mathbbm{1}-\tilde{\varrho}^{x}_{1}$. This gives the bounds in Eq. (30b). For
states $x\notin S$ we use the fact that they can be decomposed in the basis of
the states in $S$. Since this linear decomposition can be different for target
and experimental states, we use the result on stability of linear systems (see
Theorem [Higham 2002], which we also quote below). The bounds for the states
$x\notin S$ are given by Eq. (30c). Finally, we connect the bounds for the
distance between the target and experimental measurements and the distance
between the corresponding states resulting in Eq. (30d) and (30e).
###### Theorem 2 (Quantitative formulation of robust self-testing for
qubits).
Consider pure target qubit preparation states $\varrho^{x}_{a}$ and target
projective measurements $\mathbf{M}^{y}$, where $a=1,2$ and $x,y\in[n]$.
Assume that $\varrho^{x}_{a}=M^{x}_{a}$ for all $a,x$ and, furthermore, that
experimental states $\tilde{\varrho}^{x}_{a}$ and measurements
$\mathbf{\tilde{M}}^{y}$ act on Hilbert space of dimension at most $d$ and
generate statistics
$\tilde{p}(b|a,x,y)={\mathrm{tr}}(\tilde{\varrho}^{x}_{a}\tilde{M}^{y}_{b})$
such that
$|\tilde{p}(b|a,x,y)-{\mathrm{tr}}(\varrho^{x}_{a}M^{y}_{b})|\leq\varepsilon$,
for all $a,b,x,y$.
Let $k\in\\{{2,3\\}}$ be the cardinality of the maximal set of linearly
independent Bloch vectors of states $\varrho^{x}_{1}$. Fix a set $S\subset[n]$
of $k$ linearly independent vectors $\\{\mathbf{n}^{x}_{1}\\}_{x\in S}$,
construct their Gram matrix $\Gamma_{S}$, and let $L_{S}$ be a Cholesky factor
of $\Gamma_{S}$ (i.e., $\Gamma_{S}=L_{S}L^{T}_{S}$ and $L_{S}$ is lower
triangular). For every $x\in[n]\setminus S$ and both $a=1,2$ let
$\mathbf{c}^{x,a}_{S}$ denote the coefficients of the decomposition of
$\mathbf{n}^{x}_{a}$ as a linear combination of
$\\{\mathbf{n}^{x}_{1}\\}_{x\in S}$. Finally, let us define three auxiliary
functions
$\displaystyle
F_{k}(\varepsilon)\coloneqq\sqrt{\varepsilon}\sqrt{4k(k-1)}\sqrt{1+2\sqrt{\varepsilon}+\frac{k+3}{k-1}\varepsilon}\
,$ $\displaystyle O_{k}(\varepsilon)\coloneqq
2((k-1)\sqrt{\varepsilon}+(k+1)\varepsilon)\ ,$ $\displaystyle
E_{S,k}(\varepsilon)\coloneqq\frac{1}{2\sqrt{2}}\frac{\left|\left|\hskip
1.0pt\Gamma_{S}^{-1}\hskip
1.0pt\right|\right|F_{k}(\varepsilon)}{\sqrt{1-\left|\left|\hskip
1.0pt\Gamma_{S}^{-1}\hskip
1.0pt\right|\right|O_{k}(\varepsilon)}}\min\left[\frac{\left|\left|\hskip
1.0pt\Gamma_{S}\hskip 1.0pt\right|\right|}{\left|\left|\hskip
1.0pt\Gamma_{S}\hskip 1.0pt\right|\right|_{F}},\frac{\left|\left|\hskip
1.0ptL_{S}\hskip 1.0pt\right|\right|}{\sqrt{k}}\right]\ .$ (31)
Then, there is a region of $\varepsilon\in[0,\varepsilon_{0})$ determined by
$\displaystyle\left|\left|\hskip 1.0pt\Gamma_{S}^{-1}\hskip
1.0pt\right|\right|O_{k}(\varepsilon)\leq 1,$ (32)
for which there exists a qubit unitary matrix $U$ such that
$\displaystyle\frac{1}{k}\sum_{x\in
S}{\mathrm{tr}}(U(\tilde{\varrho}^{x}_{1})^{(T)}U^{\dagger}\varrho^{x}_{1})\geq
1-\frac{\varepsilon(1-2\varepsilon)}{(1-\varepsilon)^{2}}-E_{S,k}(\varepsilon)^{2}\
,$ (33a) $\displaystyle\frac{1}{k}\sum_{x\in
S}{\mathrm{tr}}(U(\tilde{\varrho}^{x}_{2})^{(T)}U^{\dagger}\varrho^{x}_{2})\geq
1-\frac{\varepsilon(1-2\varepsilon)}{(1-\varepsilon)^{2}}-(E_{S,k}(\varepsilon)+2\sqrt{\varepsilon})^{2}\
,$ (33b)
$\displaystyle{\mathrm{tr}}(U(\tilde{\varrho}^{x}_{a})^{(T)}U^{\dagger}\varrho^{x}_{a})\geq
1-\frac{\varepsilon(1-2\varepsilon)}{(1-\varepsilon)^{2}}-\left(\mathrm{d}_{\mathrm{tr}}(U(\tilde{\varrho}^{x}_{a})^{(T)}U^{\dagger},\varrho^{x}_{a})\right)^{2},\quad\text{for}\;x\notin
S,\ a=1,2\ ,$ (33c) $\displaystyle\frac{1}{k}\sum_{y\in
S}{\mathrm{tr}}(U(\tilde{M}^{y}_{1})^{(T)}U^{\dagger}M^{y}_{1})\geq
1-\frac{5}{2}\varepsilon-\left(\sqrt{2}E_{S,k}(\varepsilon)+\sqrt{\varepsilon}\right)^{2}\
,$ (33d)
$\displaystyle{\mathrm{tr}}(U(\tilde{M}^{y}_{1})^{(T)}U^{\dagger}M^{y}_{1})\geq
1-\varepsilon-\left(\mathrm{d}_{\mathrm{tr}}(U(\tilde{\varrho}^{y}_{1})^{(T)}U^{\dagger},\varrho^{y}_{1})+\sqrt{\varepsilon}\right)^{2},\quad\text{for}\;y\notin
S\ ,$ (33e)
where $(\cdot)^{(T)}$ is the transposition (with respect to a fixed basis in
$\mathbb{C}^{2}$) that may have to be applied to all experimental states and
measurements simultaneously. Note that in formulas (33c) and (33e) we have
used the trace distance
$\mathrm{d}_{\mathrm{tr}}(U(\tilde{\varrho}^{x}_{a})^{(T)}U^{\dagger},\varrho^{x}_{a})$
in order to simplify the resulting formulas. The bound on this quantity is
given in Eq. (30c).
###### Remark.
Results of Theorem 2 are obtained using a similar reasoning to that given in
the outline of the proof of Theorem 3. However, the proof steps are
supplemented by bounds connecting fidelity to the trace distance (for states)
and operator norm (for measurement operators). The proof of Theorem 2 is given
at the end of this part of the Appendix.
Before we proceed we state two theorems from the literature that are needed
for our proof. First, we repeat the statement of the Theorem 1.4 from Ref.
[37] for real-valued matrices. Second, we state a result concerning the
stability of systems of linear equations, which we borrow from Ref. [45]
(Theorem 7.2). We have changed the notation used in these papers in accordance
with ours.
###### Theorem (Sun 1991 - Stability of Cholesky factorisation).
Let $\Gamma$ be a $k\times k$ positive definite matrix and $\Gamma=LL^{T}$
its Cholesky factorization. If $\Delta\Gamma$ is a $k\times k$ symmetric
matrix satisfying
$\displaystyle\left|\left|\hskip 1.0pt\Gamma^{-1}\hskip
1.0pt\right|\right|\left|\left|\hskip 1.0pt\Delta\Gamma\hskip
1.0pt\right|\right|\leq 1,$ (34)
then there is a unique Cholesky factorization
$\displaystyle\Gamma+\Delta\Gamma=(L+\Delta L)(L+\Delta L)^{T},$
and
$\displaystyle\left|\left|\hskip 1.0pt\Delta L\hskip
1.0pt\right|\right|_{F}\leq\frac{\left|\left|\hskip 1.0pt\Gamma^{-1}\hskip
1.0pt\right|\right|\left|\left|\hskip 1.0pt\Delta\Gamma\hskip
1.0pt\right|\right|_{F}}{\sqrt{2(1-\left|\left|\hskip 1.0pt\Gamma^{-1}\hskip
1.0pt\right|\right|\left|\left|\hskip 1.0pt\Delta\Gamma\hskip
1.0pt\right|\right|})}\min\left[\frac{\left|\left|\hskip 1.0ptL\hskip
1.0pt\right|\right|_{F}\left|\left|\hskip 1.0pt\Gamma\hskip
1.0pt\right|\right|}{\left|\left|\hskip 1.0pt\Gamma\hskip
1.0pt\right|\right|_{F}},\left|\left|\hskip 1.0ptL\hskip
1.0pt\right|\right|\right].$ (35)
###### Remark.
This theorem dates back to 1991 and, of course, there have been attempts to
improve this result. However, the recent review [46] on this topic suggests
that the bound given in the theorem above is the most appropriate one in our
case (Remark 3.2, Ref. [46]).
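A quick numerical illustration of the bound (35), with a hypothetical positive-definite $\Gamma$ and a small random symmetric perturbation:

```python
import numpy as np

rng = np.random.default_rng(0)
Gamma = np.array([[1.0, 0.5, 0.2],
                  [0.5, 1.0, 0.3],
                  [0.2, 0.3, 1.0]])          # positive definite
L = np.linalg.cholesky(Gamma)

D = 1e-3 * rng.normal(size=(3, 3))
dGamma = (D + D.T) / 2                       # symmetric perturbation
L_t = np.linalg.cholesky(Gamma + dGamma)

sn = lambda A: np.linalg.norm(A, 2)          # operator (spectral) norm
fn = lambda A: np.linalg.norm(A, 'fro')      # Frobenius norm
assert sn(np.linalg.inv(Gamma)) * sn(dGamma) <= 1        # condition (34)

rhs = (sn(np.linalg.inv(Gamma)) * fn(dGamma)
       / np.sqrt(2 * (1 - sn(np.linalg.inv(Gamma)) * sn(dGamma)))
       * min(fn(L) * sn(Gamma) / fn(Gamma), sn(L)))
assert fn(L_t - L) <= rhs                    # the bound (35) holds
```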
###### Theorem (Higham 2002 - Stability of systems of linear equations).
Let $\mathbf{c}$ be a solution of a system of linear equations
$\Gamma\mathbf{c}=\mathbf{g}$, where $\mathbf{c},\mathbf{g}\in\mathbb{R}^{k}$
and $\Gamma\in\mathrm{Mat}_{k\times k}(\mathbb{R})$. Let now
$\mathbf{\tilde{c}}$ be a solution to
$(\Gamma+\Delta\Gamma)\mathbf{\tilde{c}}=\mathbf{g}+\mathbf{\Delta g}$, where
$\Delta\Gamma\in\mathrm{Mat}_{k\times k}(\mathbb{R})$, $\mathbf{\Delta
g}\in\mathbb{R}^{k}$. Assume that there exists $\delta^{\prime}>0$ such that
$\left|\left|\hskip 1.0pt\Delta\Gamma\hskip
1.0pt\right|\right|\leq\delta^{\prime}\left|\left|\hskip 1.0ptE\hskip
1.0pt\right|\right|$ and $|\mathbf{\Delta g}|\leq\delta^{\prime}|\mathbf{f}|$
(for some $E\in\mathrm{Mat}_{k\times k}(\mathbb{R})$,
$\mathbf{f}\in\mathbb{R}^{k}$), and that $\delta^{\prime}\left|\left|\hskip
1.0pt\Gamma^{-1}\hskip 1.0pt\right|\right|\left|\left|\hskip 1.0ptE\hskip
1.0pt\right|\right|\leq 1$. Then, we have
$\displaystyle\frac{|\mathbf{c}-\mathbf{\tilde{c}}|}{|\mathbf{c}\,|}\leq\frac{\delta^{\prime}}{1-\delta^{\prime}\left|\left|\hskip
1.0pt\Gamma^{-1}\hskip 1.0pt\right|\right|\left|\left|\hskip 1.0ptE\hskip
1.0pt\right|\right|}\left(\frac{\left|\left|\hskip 1.0pt\Gamma^{-1}\hskip
1.0pt\right|\right||\mathbf{f}|}{|\mathbf{c}\,|}+\left|\left|\hskip
1.0pt\Gamma^{-1}\hskip 1.0pt\right|\right|\left|\left|\hskip 1.0ptE\hskip
1.0pt\right|\right|\right)\ .$ (36)
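Similarly, the bound of Eq. (36) can be illustrated numerically with the simple hypothetical choice $E=\mathbbm{1}$ and $\mathbf{f}=\mathbf{g}$:

```python
import numpy as np

Gamma = np.array([[1.0, 0.5], [0.5, 1.0]])
g = np.array([0.3, -0.2])
c = np.linalg.solve(Gamma, g)

dGamma = np.array([[0.0, 1e-3], [1e-3, 0.0]])
dg = np.array([1e-3, -1e-3])
c_t = np.linalg.solve(Gamma + dGamma, g + dg)

# Smallest delta' with ||dGamma|| <= delta' ||E|| and |dg| <= delta' |f|.
delta = max(np.linalg.norm(dGamma, 2), np.linalg.norm(dg) / np.linalg.norm(g))
kappa = np.linalg.norm(np.linalg.inv(Gamma), 2)          # ||Gamma^{-1}||, E = 1
assert delta * kappa <= 1

rhs = delta / (1 - delta * kappa) * (kappa * np.linalg.norm(g)
                                     / np.linalg.norm(c) + kappa)
assert np.linalg.norm(c - c_t) / np.linalg.norm(c) <= rhs    # bound (36)
```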
###### Lemma 1.
Under the conditions of Theorem 1, the qubit states and measurements
$\\{\varrho^{x}_{a}\\}_{a,x}$, $\\{\mathbf{M}^{y}\\}_{y}$ and
$\\{\tilde{\varrho}^{x}_{a}\\}_{a,x}$, $\\{\mathbf{\tilde{M}}^{y}\\}_{y}$
satisfy
$\displaystyle\left|\left|\hskip 1.0pt\tilde{\varrho}^{x}_{a}\hskip
1.0pt\right|\right|\geq\frac{1-2\varepsilon}{1-\varepsilon},\;\forall a,x,$
$\displaystyle|{\mathrm{tr}}(\tilde{\varrho}^{x}_{a}\tilde{\varrho}^{x^{\prime}}_{a^{\prime}})-{\mathrm{tr}}(\varrho^{x}_{a}\varrho^{x^{\prime}}_{a^{\prime}})|\leq\varepsilon+\sqrt{\varepsilon},$
$\displaystyle|{\mathrm{tr}}(\tilde{M}^{y}_{b}\tilde{M}^{y^{\prime}}_{b^{\prime}})-{\mathrm{tr}}(M^{y}_{b}M^{y^{\prime}}_{b^{\prime}})|\leq\varepsilon+(1+\varepsilon)\sqrt{\varepsilon},$
$\forall x\neq x^{\prime},a\neq a^{\prime}$ and $\forall y\neq
y^{\prime},b\neq b^{\prime}$ respectively, and $\varepsilon\leq\frac{1}{3}$.
###### Lemma 2.
Let $\Gamma$ be the Gram matrix of Bloch vectors of target states
$\\{\varrho^{x}_{1}\\}_{x\in S}$. Likewise, let $\tilde{\Gamma}$ be the Gram
matrix of Bloch vectors of experimental states
$\\{\tilde{\varrho}^{x}_{1}\\}_{x\in S}$. Let
$|p(b|a,x,y)-\tilde{p}(b|a,x,y)|\leq\varepsilon$ for all $b,a,x,y$ and let
$k=|S|$ be the cardinality of the set $S$. Then, we have the following bounds
on norms of $\Delta\Gamma=\tilde{\Gamma}-\Gamma$
$\displaystyle\left|\left|\hskip 1.0pt\Delta\Gamma\hskip
1.0pt\right|\right|_{F}\leq F_{k}(\varepsilon)\ ,$
$\displaystyle\left|\left|\hskip 1.0pt\Delta\Gamma\hskip
1.0pt\right|\right|\leq O_{k}(\varepsilon)\ ,$ (37)
where
$\displaystyle
F_{k}(\varepsilon)=\sqrt{\varepsilon}\sqrt{4k(k-1)}\sqrt{1+2\sqrt{\varepsilon}+\frac{k+3}{k-1}\varepsilon}\
,$ $\displaystyle
O_{k}(\varepsilon)=2((k-1)\sqrt{\varepsilon}+(k+1)\varepsilon)\ .$
###### Proof of Theorem 3.
Let $\tilde{\Gamma}$ be the Gram matrix of Bloch vectors
$\\{\mathbf{\tilde{n}}^{x}_{1}\\}_{x\in S}$ of the experimental states for
$x\in S$. Let us for now assume that the Cholesky factorization
$\tilde{\Gamma}=\tilde{L}\tilde{L}^{T}$ exists and let $\Delta L=\tilde{L}-L$.
We start the _first part of the proof_, regarding the states $x\in S$, $a=1$,
by connecting the square of the norm $\left|\left|\hskip 1.0pt\Delta L\hskip
1.0pt\right|\right|_{F}$ with the trace distance between the states
$\\{\tilde{\varrho}^{x}_{1}\\}_{x\in S}$ and $\\{\varrho^{x}_{1}\\}_{x\in S}$.
From the outline of the proof of Theorem 3 it follows that the Bloch vectors
$\\{\mathbf{n}^{x}_{1}\\}_{x\in S}$ of the target states and the vectors
$\\{\mathbf{l}^{x}\\}_{x\in S}$, whose transposes are the rows of $L$, are
connected via an orthogonal transformation, which we denote as $O$, i.e.,
$\mathbf{l}^{x}=O\mathbf{n}^{x}_{1}$, $x\in S$. Similarly, for the Bloch
vectors $\\{\mathbf{\tilde{n}}^{x}_{1}\\}_{x\in S}$ of the experimental states
and the vectors $\\{\mathbf{\tilde{l}}^{x}\\}_{x\in S}$ that form $\tilde{L}$
there exists an orthogonal transformation $\tilde{O}$, such that
$\mathbf{\tilde{l}}^{x}=\tilde{O}\mathbf{\tilde{n}}^{x}_{1}$, $x\in S$. Let us
define states
$\tau^{x}=\frac{1}{2}(\mathbbm{1}+\mathbf{l}^{x}\cdot\bm{\sigma})$, and
$\tilde{\tau}^{x}=\frac{1}{2}(\mathbbm{1}+\mathbf{\tilde{l}}^{x}\cdot\bm{\sigma})$,
$x\in S$. We know that there exist unitary transformations $V$ and $\tilde{V}$
such that: $\tau^{x}=V(\varrho^{x}_{1})^{(T)}V^{\dagger}$, and
$\tilde{\tau}^{x}=\tilde{V}(\tilde{\varrho}^{x}_{1})^{(T)}\tilde{V}^{\dagger}$
for $x\in S$ (the optional transposition $(\cdot)^{(T)}$ is reserved for the
cases $\det(O)=-1$ and $\det(\tilde{O})=-1$). We have the following sequence
of equalities valid for $x\in S$:
$\displaystyle\mathrm{d}_{\mathrm{tr}}(\tilde{\tau}^{x},\tau^{x})$
$\displaystyle=$
$\displaystyle\mathrm{d}_{\mathrm{tr}}(\tilde{V}(\tilde{\varrho}^{x}_{1})^{(T)}\tilde{V}^{\dagger},V(\varrho^{x}_{1})^{(T)}V^{\dagger})=\mathrm{d}_{\mathrm{tr}}(V^{\dagger}\tilde{V}(\tilde{\varrho}^{x}_{1})^{(T)}\tilde{V}^{\dagger}V,(\varrho^{x}_{1})^{(T)})$
(38) $\displaystyle=$
$\displaystyle\mathrm{d}_{\mathrm{tr}}(U(\tilde{\varrho}^{x}_{1})^{(T)}U^{\dagger},\varrho^{x}_{1})\
,$
where we used the invariance of the trace distance under unitary evolution and
transposition, and also denoted the resulting unitary $V^{\dagger}\tilde{V}$
as $U$. It is clear that one transposition in the final formula in Eq. (38) is
enough as the case $\det(\tilde{O})=\det(O)=-1$ is equivalent to having
$\det(\tilde{O})=\det(O)=1$ and changing $U$ to $U^{T}$.
We now need to show that this unitary $U$ satisfies the claim of Theorem 3 for
the states $x\in S$. For qubits, the trace distance can be expressed directly
via the Euclidean distance between Bloch vectors:
$\displaystyle\mathrm{d}_{\mathrm{tr}}(\tilde{\tau}^{x},\tau^{x})=\frac{1}{2}|\mathbf{\tilde{l}}^{x}-\mathbf{l}^{x}|\
.$ (39)
Using this fact and Eq. (38), it is straightforward to derive the following
upper bound:
$\displaystyle\frac{1}{k}\sum_{x=1}^{k}\mathrm{d}_{\mathrm{tr}}(U(\tilde{\varrho}^{x}_{1})^{(T)}U^{\dagger},\varrho^{x}_{1})\leq\frac{1}{2\sqrt{k}}\left|\left|\hskip
1.0pt\Delta L\hskip 1.0pt\right|\right|_{F}\ .$ (40)
We now use Theorem [Sun 1991] (Ref. [37]) to upper-bound $\left|\left|\hskip
1.0pt\Delta L\hskip 1.0pt\right|\right|_{F}$. Specifically, we apply it to the
Gram matrix of the Bloch vectors of states $\\{\varrho^{x}_{1}\\}_{x\in S}$.
We have $\left|\left|\hskip 1.0ptL\hskip 1.0pt\right|\right|_{F}=\sqrt{k}$,
since the target states are assumed to be pure. The direct substitution of the
bound in Eq. (35) into Eq. (40) gives the bound in Eq. (30a), and also the
condition in Eq. (29) under which Theorem 3 (and also Theorem 2) applies.
In the beginning of the proof we assumed that the Cholesky factorization
$\tilde{\Gamma}=\tilde{L}\tilde{L}^{T}$ exists. The condition in Eq. (29)
gives a sufficient condition for this to hold. We conclude this part of the
proof by noting that a similar statement in terms of fidelity (Theorem 2, Eq.
(33a)) follows from Eq. (40) by connecting fidelity to the trace distance as
described in Eq. (54).
In _the second part of the proof_ we derive upper bounds on the trace
distances between the states corresponding to $x\in S$ and $a=2$. As we will
see, those can be connected to the bounds for the states with $x\in S$ and
$a=1$. Indeed, we can write the following for every $x\in S$:
$\displaystyle
2\mathrm{d}_{\mathrm{tr}}(U\tilde{\varrho}^{x}_{2}U^{\dagger},\varrho^{x}_{2})$
$\displaystyle=\left|\left|\hskip
1.0pt\varrho^{x}_{2}-U\tilde{\varrho}^{x}_{2}U^{\dagger}\hskip
1.0pt\right|\right|_{1}\leq\left|\left|\hskip
1.0pt\mathbbm{1}-\varrho^{x}_{1}-U(\mathbbm{1}-\tilde{\varrho}^{x}_{1})U^{\dagger}\hskip
1.0pt\right|\right|_{1}+\left|\left|\hskip
1.0ptU(\mathbbm{1}-\tilde{\varrho}^{x}_{1}-\tilde{\varrho}^{x}_{2})U^{\dagger}\hskip
1.0pt\right|\right|_{1}$ $\displaystyle\leq\left|\left|\hskip
1.0pt\varrho^{x}_{1}-U\tilde{\varrho}^{x}_{1}U^{\dagger}\hskip
1.0pt\right|\right|_{1}+\left|\left|\hskip
1.0pt\mathbbm{1}-\tilde{\varrho}^{x}_{1}-\tilde{M}^{x}_{2}\hskip
1.0pt\right|\right|_{1}+\left|\left|\hskip
1.0pt\tilde{M}^{x}_{2}-\tilde{\varrho}^{x}_{2}\hskip 1.0pt\right|\right|_{1}$
$\displaystyle=2\mathrm{d}_{\mathrm{tr}}(U\tilde{\varrho}^{x}_{1}U^{\dagger},\varrho^{x}_{1})+\sum_{a=1,2}\left|\left|\hskip
1.0pt\tilde{\varrho}^{x}_{a}-\tilde{M}^{x}_{a}\hskip 1.0pt\right|\right|_{1},$
where in the last step we used $\tilde{M}^{x}_{1}+\tilde{M}^{x}_{2}=\mathbbm{1}$,
so that $\mathbbm{1}-\tilde{\varrho}^{x}_{1}-\tilde{M}^{x}_{2}=\tilde{M}^{x}_{1}-\tilde{\varrho}^{x}_{1}$.
Exactly the same reasoning can be applied to upper bound the trace distance
$\mathrm{d}_{\mathrm{tr}}(U(\tilde{\varrho}^{x}_{2})^{T}U^{\dagger},\varrho^{x}_{2})$,
in which case the transposition will propagate to
$\mathrm{d}_{\mathrm{tr}}(U(\tilde{\varrho}^{x}_{1})^{T}U^{\dagger},\varrho^{x}_{1})$
but would not affect the term $\sum_{a=1,2}\left|\left|\hskip
1.0pt\tilde{\varrho}^{x}_{a}-\tilde{M}^{x}_{a}\hskip 1.0pt\right|\right|_{1}$
(as we can take $(\tilde{M}^{x}_{a})^{T}$).
To finish the calculations we need to upper-bound the second summand in Eq.
(B). We do it by writing $\left|\left|\hskip
1.0pt\tilde{\varrho}^{x}_{a}-\tilde{M}^{x}_{a}\hskip
1.0pt\right|\right|_{1}\leq 2\left|\left|\hskip
1.0pt\tilde{\varrho}^{x}_{a}-\tilde{M}^{x}_{a}\hskip 1.0pt\right|\right|\leq
2\sqrt{\varepsilon}$, $\forall a,x$, where the last inequality is proven in
Lemma 1 (see Eq. (77)). From here, it is easy to obtain the bound in Eq.
(30b). The reason why we do not simply apply the same reasoning to the states
corresponding to $a=2$ and $x\in S$ as for the states with $a=1$ is that we
want the isometry $U$ to be the same for all of the states.
In _the third part of the proof_ we derive the bounds for the preparation
states corresponding to $x\notin S$ and both $a=1,2$. We start by reminding
ourselves that the set of states $\\{\varrho^{x}_{1}\\}_{x\in S}$ is assumed
to be tomographically complete. This is equivalent to the assumption that the
vectors $\\{\mathbf{l}^{x}\\}_{x\in S}$, defined above, are linearly
independent and span $\mathbb{R}^{3}$ for $k=3$, or the considered subspace
for $k=2$. The condition in Eq. (29) of Theorem 3 (also stated in Eq.
(34)) ensures that the same holds for the vectors
$\\{\mathbf{\tilde{l}}^{x}\\}_{x\in S}$. If so, let us expand the Bloch
$\\{\mathbf{\tilde{l}}^{x}\\}_{x\in S}$. If so, let us expand the Bloch
vectors of the states $\varrho^{x^{\prime}}_{a^{\prime}}$ and
$U\tilde{\varrho}^{x^{\prime}}_{a^{\prime}}U^{\dagger}$, $x^{\prime}\notin S$
in terms of $\\{\mathbf{l}^{x}\\}_{x\in S}$ and
$\\{\mathbf{\tilde{l}}^{x}\\}_{x\in S}$ respectively. Let us denote the
coefficients of these linear expansions as
$\\{c^{a^{\prime},x^{\prime}}_{x}\\}_{x\in S}$ and
$\\{\tilde{c}^{a^{\prime},x^{\prime}}_{x}\\}_{x\in S}$ respectively. These
coefficients will, of course, depend on $x^{\prime}$ and $a^{\prime}$ as well
as on the choice of the set $S$. However, for simplicity of the derivations we
are going to omit the subscripts $x^{\prime}$ and $a^{\prime}$ until we
present the final result. We will also assume, without loss of generality, that
$S=\\{1,2,3\\}$ (or $S=\\{1,2\\}$ for $k=2$).
It is clear that the coefficients $\\{c_{x}\\}_{x\in S}$ and
$\\{\tilde{c}_{x}\\}_{x\in S}$ satisfy the following respective systems of
linear equations:
$\displaystyle\Gamma\mathbf{c}=\mathbf{g},\quad\tilde{\Gamma}\mathbf{\tilde{c}}=\mathbf{\tilde{g}},$
(42)
where $\mathbf{c}=(c_{1},c_{2},c_{3})$,
$\mathbf{g}=(2{\mathrm{tr}}(\varrho^{x^{\prime}}_{a^{\prime}}\varrho^{1}_{1})-1,2{\mathrm{tr}}(\varrho^{x^{\prime}}_{a^{\prime}}\varrho^{2}_{1})-1,2{\mathrm{tr}}(\varrho^{x^{\prime}}_{a^{\prime}}\varrho^{3}_{1})-1)$
and analogously
$\mathbf{\tilde{c}}=(\tilde{c}_{1},\tilde{c}_{2},\tilde{c}_{3})$,
$\mathbf{\tilde{g}}=(2{\mathrm{tr}}(\tilde{\varrho}^{x^{\prime}}_{a^{\prime}}\tilde{\varrho}^{1}_{1})-1,2{\mathrm{tr}}(\tilde{\varrho}^{x^{\prime}}_{a^{\prime}}\tilde{\varrho}^{2}_{1})-1,2{\mathrm{tr}}(\tilde{\varrho}^{x^{\prime}}_{a^{\prime}}\tilde{\varrho}^{3}_{1})-1)$
whenever the cardinality $k$ of the set $S$ is $3$. For $k=2$ and
$\mathbf{c},\mathbf{\tilde{c}},\mathbf{g},\mathbf{\tilde{g}}\in\mathbb{R}^{2}$
their definition is analogous. We again omitted the subscripts
$a^{\prime},x^{\prime}$ for the vectors $\mathbf{g}$ and $\mathbf{\tilde{g}}$
for simplicity. The matrices $\Gamma$ and $\tilde{\Gamma}$ are still the
Gram matrices of Bloch vectors for the states $x\in S$, $a=1$.
In complete analogy to Eq. (39) we can write:
$\displaystyle\mathrm{d}_{\mathrm{tr}}(U(\tilde{\varrho}^{x^{\prime}}_{a^{\prime}})^{(T)}U^{\dagger},\varrho^{x^{\prime}}_{a^{\prime}})=\frac{1}{2}\Big{|}\sum_{x\in
S}(\tilde{c}_{x}\mathbf{\tilde{l}}^{x}-c_{x}\mathbf{l}^{x})\Big{|}.$ (43)
We then can upper-bound the latter norm as follows:
$\displaystyle\Big{|}\sum_{x\in
S}(\tilde{c}_{x}\mathbf{\tilde{l}}^{x}-c_{x}\mathbf{l}^{x})\Big{|}\leq\Big{|}\sum_{x\in
S}c_{x}(\mathbf{\tilde{l}}^{x}-\mathbf{l}^{x})\Big{|}+\Big{|}\sum_{x\in
S}(\tilde{c}_{x}-c_{x})\mathbf{\tilde{l}}^{x}\Big{|}\leq|\mathbf{c}\,|\left|\left|\hskip
1.0pt\Delta L\hskip 1.0pt\right|\right|_{F}+\sum_{x\in
S}|\tilde{c}_{x}-c_{x}||\mathbf{\tilde{l}}^{x}|.$
In the equation above we used the following relation:
$\displaystyle|\sum_{x\in
S}c_{x}(\mathbf{\tilde{l}}^{x}-\mathbf{l}^{x})|=\sqrt{\sum_{i=1}^{3}\left(\sum_{x\in
S}c_{x}(\tilde{l}^{x}_{i}-l^{x}_{i})\right)^{2}}\leq\sqrt{\sum_{x^{\prime}\in
S}c^{2}_{x^{\prime}}\sum_{i=1}^{3}\sum_{x\in
S}(\tilde{l}^{x}_{i}-l^{x}_{i})^{2}}=|\mathbf{c}\,|\left|\left|\hskip
1.0pt\Delta L\hskip 1.0pt\right|\right|_{F}.$ (45)
Also, since the norms $|\mathbf{\tilde{l}}^{x}|$ can be upper-bounded by $1$
for all $x$, the resulting upper bound can be simplified further to be
$|\mathbf{c}\,|\left|\left|\hskip 1.0pt\Delta L\hskip
1.0pt\right|\right|_{F}+\sum_{x\in S}|\tilde{c}_{x}-c_{x}|$.
To estimate the deviation $|\tilde{c}_{x}-c_{x}|$ we apply Theorem [Higham
2002] by taking $\Delta\Gamma=\tilde{\Gamma}-\Gamma$ and $\mathbf{\Delta
g}=\mathbf{\tilde{g}}-\mathbf{g}$. The bound on the operator norm
$\left|\left|\hskip 1.0pt\Delta\Gamma\hskip 1.0pt\right|\right|$ is given by
Lemma 2. At the same time $|\mathbf{\Delta g}|=2\sqrt{\sum_{x\in
S}({\mathrm{tr}}(\varrho^{x^{\prime}}_{a^{\prime}}\varrho^{x}_{1})-{\mathrm{tr}}(\tilde{\varrho}^{x^{\prime}}_{a^{\prime}}\tilde{\varrho}^{x}_{1}))^{2}}\leq
2\sqrt{k}(\sqrt{\varepsilon}+\varepsilon)$. If we take
$\delta^{\prime}=2((k-1)\sqrt{\varepsilon}+(k+1)\varepsilon)$, some matrix $E$
with $\left|\left|\hskip 1.0ptE\hskip 1.0pt\right|\right|=1$, vector
$\mathbf{f}$ such that $|\mathbf{f}|=\frac{\sqrt{k}}{k-1}$, we satisfy the
conditions of the Theorem [Higham 2002] and get the following bound:
$\displaystyle|\mathbf{c}-\mathbf{\tilde{c}}|\leq\left(|\mathbf{c}|+\frac{\sqrt{k}}{k-1}\right)\frac{\left|\left|\hskip
1.0pt\Gamma^{-1}\hskip 1.0pt\right|\right|\left|\left|\hskip
1.0pt\Delta\Gamma\hskip 1.0pt\right|\right|}{1-\left|\left|\hskip
1.0pt\Gamma^{-1}\hskip 1.0pt\right|\right|\left|\left|\hskip
1.0pt\Delta\Gamma\hskip 1.0pt\right|\right|}\ .$ (46)
As the final step we need to connect the bound in Eq. (B) with the one in Eq.
(46) by the relation between $1$-norm and the Euclidean norm in
$\mathbb{R}^{k}$, which effectively adds a factor of $\sqrt{k}$ to the bound
in Eq. (46). Combining everything together we obtain the following:
$\displaystyle\mathrm{d}_{\mathrm{tr}}(U(\tilde{\varrho}^{x^{\prime}}_{a^{\prime}})^{(T)}U^{\dagger},\varrho^{x^{\prime}}_{a^{\prime}})\leq\frac{1}{2}|\mathbf{c}\,|\left|\left|\hskip
1.0pt\Delta L\hskip
1.0pt\right|\right|_{F}+\frac{\sqrt{k}}{2}\left(|\mathbf{c}|+\frac{\sqrt{k}}{k-1}\right)\frac{\left|\left|\hskip
1.0pt\Gamma^{-1}\hskip 1.0pt\right|\right|\left|\left|\hskip
1.0pt\Delta\Gamma\hskip 1.0pt\right|\right|}{1-\left|\left|\hskip
1.0pt\Gamma^{-1}\hskip 1.0pt\right|\right|\left|\left|\hskip
1.0pt\Delta\Gamma\hskip 1.0pt\right|\right|},$ $\displaystyle x^{\prime}\notin
S,a^{\prime}=1,2.$ (47)
The above bound is given in Eq. (30c) in terms of the quantity
$E_{S,k}(\varepsilon)$ (Eq. (28)), where we took $\left|\left|\hskip
1.0pt\Delta L_{S}\hskip 1.0pt\right|\right|_{F}=2\sqrt{k}E_{S,k}(\varepsilon)$
and brought back all the necessary subscripts. Again, using the relation in
Eq. (55) we can derive bounds on the fidelity between the states
$\varrho^{x}_{a}$ and $\tilde{\varrho}^{x}_{a}$ for $x\notin S$.
In _the fourth, final, part of the proof_ we derive the bounds for the
measurements. We do it by connecting the distance between the experimental
measurements and the distance between the corresponding states. Indeed, we can
write the following:
$\displaystyle\left|\left|\hskip
1.0ptU(\tilde{M}^{y}_{1})^{(T)}U^{\dagger}-M^{y}_{1}\hskip
1.0pt\right|\right|\leq\left|\left|\hskip
1.0ptU(\tilde{\varrho}^{y}_{1})^{(T)}U^{\dagger}-\varrho^{y}_{1}\hskip
1.0pt\right|\right|+\left|\left|\hskip
1.0pt\tilde{\varrho}^{y}_{1}-\tilde{M}^{y}_{1}\hskip
1.0pt\right|\right|,\quad\forall y,$ (48)
where we used the triangle inequality and the invariance of the norm
$\left|\left|\hskip 1.0pt\tilde{\varrho}^{y}_{1}-\tilde{M}^{y}_{1}\hskip
1.0pt\right|\right|$ under unitary transformations. Remembering that
$\left|\left|\hskip 1.0pt\tilde{\varrho}^{y}_{1}-\tilde{M}^{y}_{1}\hskip
1.0pt\right|\right|\leq\sqrt{\varepsilon}$ (Lemma 1, Eq. (77)) and
$\left|\left|\hskip
1.0ptU(\tilde{\varrho}^{y}_{1})^{(T)}U^{\dagger}-\varrho^{y}_{1}\hskip
1.0pt\right|\right|=\mathrm{d}_{\mathrm{tr}}(U(\tilde{\varrho}^{y}_{1})^{(T)}U^{\dagger},\varrho^{y}_{1})$,
we produce the bounds in Eqs. (30d,30e). ∎
Finally, we proceed to the proof of Theorem 2, which gives a quantitative
robustness analysis expressed in terms of average fidelity.
###### Proof of Theorem 2.
Let us first state a useful identity between the Frobenius distance and the
fidelity, valid for an arbitrary pure state $\varrho$ and a Hermitian
operator $X$:
$\left|\left|\hskip 1.0ptX-\varrho\hskip
1.0pt\right|\right|^{2}_{F}=1+{\mathrm{tr}}(X^{2})-2{\mathrm{tr}}(X\varrho)\
.$ (49)
We follow exactly the same steps that were given in the proof of Theorem 3. We
assume that the reader is familiar with the notation introduced there. First,
in order to derive the bound in Eq. (33a) we repeat the reasoning preceding
Eq. (38). Analogously to the trace distance discussed there, we use the
invariance of the fidelity under unitary transformations and transposition
which gives us for all $x\in S=[k]$:
${\mathrm{tr}}(\tilde{\tau}^{x}\tau^{x})={\mathrm{tr}}(U(\tilde{\varrho}^{x}_{1})^{(T)}U^{\dagger}\varrho^{x}_{1})\
,$ (50)
where $U=V^{\dagger}\tilde{V}$. Using the above formula and standard algebra
involving Pauli matrices we obtain:
$\frac{1}{k}\sum_{x=1}^{k}{\mathrm{tr}}(U(\tilde{\varrho}^{x}_{1})^{(T)}U^{\dagger}\varrho^{x}_{1})=\frac{1}{2}+\frac{1}{2k}\sum_{x=1}^{k}\mathbf{l}^{x}\cdot\mathbf{\tilde{l}}^{x}\
.$ (51)
On the other hand, we have the following identity:
$\displaystyle\left|\left|\hskip 1.0pt\Delta L\hskip
1.0pt\right|\right|^{2}_{F}=\sum_{x=1}^{k}|\mathbf{l}^{x}|^{2}+\sum_{x=1}^{k}|\mathbf{\tilde{l}}^{x}|^{2}-2\sum_{x=1}^{k}\mathbf{l}^{x}\cdot\mathbf{\tilde{l}}^{x}\
,$ (52)
where $|\cdot|$ is a standard Euclidean norm in $\mathbb{R}^{3}$. Using the
identity
$|\mathbf{\tilde{l}}^{x}|^{2}=2{\mathrm{tr}}((\tilde{\varrho}^{x}_{1})^{2})-1$
and the fact that target states are pure (which implies $|\mathbf{l}^{x}|=1$)
we obtain:
$\displaystyle\left|\left|\hskip 1.0pt\Delta L\hskip
1.0pt\right|\right|^{2}_{F}=2\sum_{x=1}^{k}{\mathrm{tr}}((\tilde{\varrho}^{x}_{1})^{2})-2\sum_{x=1}^{k}\mathbf{l}^{x}\cdot\mathbf{\tilde{l}}^{x}\
.$ (53)
Inserting the above into Eq. (51) and using the fact that
${\mathrm{tr}}{(\tilde{\varrho}^{x}_{1})^{2}}\geq
1-\frac{2\varepsilon(1-2\varepsilon)}{(1-\varepsilon)^{2}}$ (this follows
straightforwardly from Lemma 1), we finally obtain:
$\frac{1}{k}\sum_{x=1}^{k}{\mathrm{tr}}(U(\tilde{\varrho}^{x}_{1})^{(T)}U^{\dagger}\varrho^{x}_{1})\geq
1-\frac{\varepsilon(1-2\varepsilon)}{(1-\varepsilon)^{2}}-\frac{\left|\left|\hskip
1.0pt\Delta L\hskip 1.0pt\right|\right|^{2}_{F}}{4k}\ .$ (54)
We complete the proof of Eq. (33a) by again employing, exactly as before,
Theorem [Sun 1991] (Ref. [37]) in order to upper-bound $\left|\left|\hskip
1.0pt\Delta L\hskip 1.0pt\right|\right|_{F}$.
Now, to derive Eq. (33b), which gives the bounds for the fidelity of states
for $x\in[k]$ and $a=2$, we can use the following inequality which can be
derived from Eq. (49) and from the relation
${\mathrm{tr}}{(\tilde{\varrho}^{x}_{1})^{2}}\geq
1-\frac{2\varepsilon(1-2\varepsilon)}{(1-\varepsilon)^{2}}$:
$\displaystyle{\mathrm{tr}}(U(\tilde{\varrho}^{x}_{a})^{(T)}U^{\dagger}\varrho^{x}_{a})\geq
1-\frac{\varepsilon(1-2\varepsilon)}{(1-\varepsilon)^{2}}-(\mathrm{d}_{\mathrm{tr}}(U(\tilde{\varrho}^{x}_{a})^{(T)}U^{\dagger},\varrho^{x}_{a}))^{2}.$
(55)
From Eqs. (40,B) it follows that:
$\displaystyle\frac{1}{k}\sum_{x=1}^{k}\left(\mathrm{d}_{\mathrm{tr}}(U(\tilde{\varrho}^{x}_{2})^{(T)}U^{\dagger},\varrho^{x}_{2})\right)^{2}\leq\frac{1}{k}\sum_{x=1}^{k}\left(\mathrm{d}_{\mathrm{tr}}(U(\tilde{\varrho}^{x}_{1})^{(T)}U^{\dagger},\varrho^{x}_{1})+2\sqrt{\varepsilon}\right)^{2}$
(56) $\displaystyle\leq\frac{\left|\left|\hskip 1.0pt\Delta L\hskip
1.0pt\right|\right|^{2}_{F}}{4k}+\frac{2\left|\left|\hskip 1.0pt\Delta L\hskip
1.0pt\right|\right|_{F}}{\sqrt{k}}\sqrt{\varepsilon}+4\varepsilon=\left(\frac{\left|\left|\hskip
1.0pt\Delta L\hskip
1.0pt\right|\right|_{F}}{2\sqrt{k}}+2\sqrt{\varepsilon}\right)^{2},$
which gives the desired bound of Eq. (33b):
$\displaystyle\frac{1}{k}\sum_{x=1}^{k}{\mathrm{tr}}(U(\tilde{\varrho}^{x}_{2})^{(T)}U^{\dagger}\varrho^{x}_{2})\geq
1-\frac{\varepsilon(1-2\varepsilon)}{(1-\varepsilon)^{2}}-\left(\frac{\left|\left|\hskip
1.0pt\Delta L\hskip
1.0pt\right|\right|_{F}}{2\sqrt{k}}+2\sqrt{\varepsilon}\right)^{2}\ .$ (57)
The proof of the remaining bounds in Eqs. (33c,33d,33e) is straightforward and
follows directly from the formula in Eq. (49). In particular, in order to
prove Eq. (33c) we set $X=\tilde{\varrho}^{x}_{a}$ and
$\varrho=\varrho^{x}_{a}$ in Eq. (49), use the inequality
${\mathrm{tr}}{(\tilde{\varrho}^{x}_{a})^{2}}\geq
1-\frac{2\varepsilon(1-2\varepsilon)}{(1-\varepsilon)^{2}}$ and the relation
$\left|\left|\hskip 1.0pt\sigma-\varrho\hskip
1.0pt\right|\right|^{2}_{F}=2\mathrm{d}_{\mathrm{tr}}(\varrho,\sigma)^{2}$,
with the latter being true for arbitrary qubit states $\varrho$ and $\sigma$.
Derivation of Eq. (33d) is completely analogous to the one of Eq. (33b).
Finally, for Eq. (33e), the reasoning is again analogous to Eq. (33c), where,
after setting $X=\tilde{M}^{y}_{1}$ and $\varrho=\varrho^{y}_{1}$, it is
additionally necessary to use the simple lower bound
${\mathrm{tr}}((\tilde{M}^{y}_{1})^{2})\geq(1-\varepsilon)^{2}$, which follows
from the conditions of Theorem 2. ∎
## Appendix C Alternative bounds from Procrustes
###### Lemma 3 (Robust self-testing for qubits from Procrustes).
Consider pure target qubit preparation states $\varrho^{x}_{a}$ and target
projective measurements $\mathbf{M}^{y}$, where $a=1,2$ and $x,y\in[n]$.
Assume that $\varrho^{x}_{a}=M^{x}_{a}$ for all $a,x$ and, furthermore, that
experimental states $\tilde{\varrho}^{x}_{a}$ and measurements
$\mathbf{\tilde{M}}^{y}$ act on Hilbert space of dimension at most $d$ and
generate statistics
$\tilde{p}(b|a,x,y)={\mathrm{tr}}(\tilde{\varrho}^{x}_{a}\tilde{M}^{y}_{b})$
such that
$|\tilde{p}(b|a,x,y)-{\mathrm{tr}}(\varrho^{x}_{a}M^{y}_{b})|\leq\varepsilon$,
for all $a,b,x,y$.
Let $\\{\varrho_{i}\\}_{i=1}^{m}$ be a subset of $m$ considered states among
$\varrho^{x}_{a}$. Let $L$ be a matrix whose rows are the Bloch vectors of
states $\varrho_{i}$, $i\in[m]$, and let $k\in\\{2,3\\}$ be its rank ($m\geq
k$). Assume, without loss of generality, that for $k=2$ the third component of
the Bloch vectors is $0$. In that case, truncate $L$ to the first two columns.
Let us define two auxiliary functions:
$\displaystyle P_{m}(\varepsilon,L)=\begin{cases}\|L^{\ddagger}\|F_{m}(\varepsilon)+\min\left[\frac{\|L^{\ddagger}\|F_{m}(\varepsilon)}{\sqrt{1-\|L^{\ddagger}\|^{2}F_{m}(\varepsilon)}},\sqrt{\sqrt{k}F_{m}(\varepsilon)}\right],&\text{if }\|L^{\ddagger}\|\sqrt{F_{m}(\varepsilon)}<1\\\ \|L^{\ddagger}\|F_{m}(\varepsilon)+\sqrt{\sqrt{k}F_{m}(\varepsilon)}&\text{otherwise,}\end{cases}$
(58)
and
$\displaystyle
F_{m}(\varepsilon)=\sqrt{4m(m-1)\varepsilon\left(1+2\sqrt{\varepsilon}+\frac{m+3}{m-1}\varepsilon\right)},$
(59)
where $L^{\ddagger}=(L^{T}L)^{-1}L^{T}$. There exists a unitary matrix $U$ such
that:
$\displaystyle\frac{1}{m}\sum_{i=1}^{m}{\mathrm{tr}}(\varrho_{i}U\tilde{\varrho}_{i}U^{\dagger})\geq
1-\frac{\varepsilon(1-2\varepsilon)}{(1-\varepsilon)^{2}}-\frac{1}{4m}P^{2}_{m}(\varepsilon,L).$
(60)
In this section we derive alternative bounds for the fidelity between
preparation states that follow from bounds on the so-called orthogonal
Procrustes problem [47]. The problem itself can be formulated as follows.
Given two sets of vectors $\mathbf{x}_{1},\mathbf{x}_{2},\dots,\mathbf{x}_{m}$
and $\mathbf{y}_{1},\mathbf{y}_{2},\dots,\mathbf{y}_{m}$ in $\mathbb{R}^{d}$,
find an orthogonal transformation $O\in\mathrm{O}(d)$ in $\mathbb{R}^{d}$ that
minimizes $\sum_{i=1}^{m}|\mathbf{x}_{i}-O\mathbf{y}_{i}|^{2}$. This problem has a
clear relevance to our task. Indeed, if we take $\mathbf{x}_{i}$ to be the
Bloch vectors of the target qubit preparation states and $\mathbf{y}_{i}$ the
Bloch vectors of the experimental states, then minimization over
$\mathrm{O}(3)$ is the same as the problem of finding a unitary transformation
that connects those qubit states. In Ref. [38], bounds on the Procrustes
problem were derived. We give the formulation of Theorem 1 from Ref. [38] below,
where we change the notation according to our problem.
###### Theorem (Arias-Castro et al. 2020 - A perturbation bound for
Procrustes).
Let $L$ and $\tilde{L}$ be two tall matrices of the same size, with $L$ having
full rank, and set $\delta^{2}=\|\tilde{L}\tilde{L}^{T}-LL^{T}\|_{F}$. Then we
have
$\displaystyle\min_{O\in\mathrm{O}(k)}\|L-O\tilde{L}\|_{F}\leq\begin{cases}\|L^{\ddagger}\|\delta^{2}+\min\left[\frac{\|L^{\ddagger}\|\delta^{2}}{\sqrt{1-\|L^{\ddagger}\|^{2}\delta^{2}}},k^{\frac{1}{4}}\delta\right],&\text{if }\|L^{\ddagger}\|\delta<1\\\ \|L^{\ddagger}\|\delta^{2}+k^{\frac{1}{4}}\delta&\text{otherwise.}\end{cases}$
(61)
In the formulation of the above theorem, $L^{\ddagger}$ stands for the
Moore–Penrose inverse, which can be defined as
$L^{\ddagger}=(L^{T}L)^{-1}L^{T}$ for tall matrices of full rank.
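For concreteness, the orthogonal Procrustes problem admits a classical closed-form solution via the singular value decomposition. The following minimal numpy sketch (our own illustration, not part of Ref. [38]; the test vectors, rotation angle, and noise level are arbitrary) aligns a set of experimental Bloch vectors to target ones:

```python
import numpy as np

def procrustes_align(X, Y):
    """Return the orthogonal O minimizing ||X - O Y||_F.

    X, Y: (d, m) arrays whose columns are target and experimental
    Bloch vectors. Classical SVD solution: if Y X^T = U S V^T,
    then the optimal O is V U^T."""
    U, _, Vt = np.linalg.svd(Y @ X.T)
    return Vt.T @ U.T

rng = np.random.default_rng(0)
X = rng.normal(size=(3, 4))
X /= np.linalg.norm(X, axis=0)           # four unit Bloch vectors
theta = 0.3                              # hidden rotation angle
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
Y = R.T @ X + 1e-3 * rng.normal(size=(3, 4))

O = procrustes_align(X, Y)
print(np.linalg.norm(X - O @ Y, 'fro'))     # small residual
print(np.allclose(O @ O.T, np.eye(3)))      # True: O is orthogonal
```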
###### Proof of Lemma 3.
Utilizing the results of the Theorem [Arias-Castro et al. 2020] is rather
straightforward. Let $L$ be a matrix whose rows are the Bloch vectors of all
target preparation states $\\{\varrho^{x}_{a}\\}_{a,x}$. Let $\tilde{L}$ in
turn be the matrix of Bloch vectors of the experimental states
$\\{\tilde{\varrho}^{x}_{a}\\}_{a,x}$. The matrices $LL^{T}$ and
$\tilde{L}\tilde{L}^{T}$ are then, of course, the full Gram matrices $\Gamma$
and $\tilde{\Gamma}$. By “full” we mean that now we do not select a subset $S$
of linearly independent vectors among the Bloch vectors of $\varrho^{x}_{a}$.
Instead, $\Gamma$ and $\tilde{\Gamma}$ are formed by all the considered
states, which can still be a subset of the $2n$ states ($x\in[n]$, $a\in[2]$).
We will be using $m$ to denote the number of the considered states, and a
simple one-indexed set $\\{\varrho_{i}\\}_{i}$ to denote the states
themselves. Estimating $\delta^{2}$ from the formulation of the Theorem
[Arias-Castro et al. 2020] is a direct application of the bound on
$\|\Delta\Gamma\|_{F}$ from Lemma 2, where now instead of $k$ one should put
the number $m$ of the considered preparation states.
As for the left-hand side of Eq. (61), we can write the following:
$\displaystyle\|L-O\tilde{L}\|_{F}^{2}={\mathrm{tr}}(LL^{T})+{\mathrm{tr}}(\tilde{L}\tilde{L}^{T})-2{\mathrm{tr}}(L^{T}O\tilde{L})$
$\displaystyle\geq m+m-m\frac{4\varepsilon(1-2\varepsilon)}{(1-\varepsilon)^{2}}-4\sum_{i=1}^{m}{\mathrm{tr}}(\varrho_{i}U\tilde{\varrho}_{i}U^{\dagger})+2m$
(62)
where we use the following identity:
$\displaystyle{\mathrm{tr}}(L^{T}O\tilde{L})=\sum_{i=1}^{m}\mathbf{n}_{i}\cdot
O\mathbf{\tilde{n}}_{i}=2\sum_{i=1}^{m}{\mathrm{tr}}(\varrho_{i}U\tilde{\varrho}_{i}U^{\dagger})-m,$
(63)
and $U$ is the unitary transformation in $\mathrm{SU}(2)$ corresponding to the
orthogonal transformation $O$ of the Bloch vectors. The bound on the average
fidelity between $m$ target preparation states and the corresponding
experimental states is then simply:
$\displaystyle\frac{1}{m}\sum_{i=1}^{m}{\mathrm{tr}}(\varrho_{i}U\tilde{\varrho}_{i}U^{\dagger})\geq
1-\frac{\varepsilon(1-2\varepsilon)}{(1-\varepsilon)^{2}}-\frac{1}{4m}P^{2}_{m}(\varepsilon,L)$
(64)
where
$\displaystyle P_{m}(\varepsilon,L)=\begin{cases}\|L^{\ddagger}\|F_{m}(\varepsilon)+\min\left[\frac{\|L^{\ddagger}\|F_{m}(\varepsilon)}{\sqrt{1-\|L^{\ddagger}\|^{2}F_{m}(\varepsilon)}},\sqrt{\sqrt{k}F_{m}(\varepsilon)}\right],&\text{if }\|L^{\ddagger}\|\sqrt{F_{m}(\varepsilon)}<1\\\ \|L^{\ddagger}\|F_{m}(\varepsilon)+\sqrt{\sqrt{k}F_{m}(\varepsilon)}&\text{otherwise,}\end{cases}$
(65)
and
$\displaystyle
F_{m}(\varepsilon)=\sqrt{4m(m-1)\varepsilon\left(1+2\sqrt{\varepsilon}+\frac{m+3}{m-1}\varepsilon\right)}.$
(66)
∎
## Appendix D Examples
Below we provide detailed derivations of the results presented in Table 1.
The first example concerns $n=2,3$ MUBs in $d=2$. Since for MUBs
${\mathrm{tr}}(\varrho^{x}_{a}\varrho^{x^{\prime}}_{a^{\prime}})=\frac{1}{2}$,
for $x\neq x^{\prime},\forall a,a^{\prime}$, it follows that $\Gamma$ is an
identity matrix in $\mathbb{R}^{n}$ ($n\in\\{2,3\\}$). Hence, in Theorem 2
(see Appendix B) we should take $\|\Gamma_{S}\|=\|\Gamma^{-1}_{S}\|=\|L_{S}\|=1$,
and $\|\Gamma\|_{F}=\sqrt{n}$. The resulting bound is the average between
expressions in Eq. (30a) and Eq. (30b) with the function
$E_{S,k}(\varepsilon)$ being simply:
$\displaystyle
E_{S,k}(\varepsilon)=\frac{1}{2\sqrt{2n}}\frac{F_{k}(\varepsilon)}{\sqrt{1-O_{k}(\varepsilon)}}.$
(67)
The leading linear term is given in Table 1 for both $n=2,3$.
The second example is a little less straightforward. From the condition
${\mathrm{tr}}(\varrho^{1}_{1}\varrho^{2}_{1})=\frac{1+\alpha}{2}$,
$\alpha\in(-1,1)$ we obtain that $\Gamma=\left(\begin{smallmatrix}1&\alpha\\\
\alpha&1\end{smallmatrix}\right)$, and hence $\left|\left|\hskip
1.0pt\Gamma\hskip 1.0pt\right|\right|=1+|\alpha|$, $\left|\left|\hskip
1.0pt\Gamma^{-1}\hskip 1.0pt\right|\right|=\frac{1}{1-|\alpha|}$, and
$\left|\left|\hskip 1.0pt\Gamma\hskip
1.0pt\right|\right|_{F}=\sqrt{2+2\alpha^{2}}$. The output $L$ of the Cholesky
factorization is $L=\left(\begin{smallmatrix}1&0\\\
\alpha&\sqrt{1-\alpha^{2}}\end{smallmatrix}\right)$, which leads to
$\left|\left|\hskip 1.0ptL\hskip 1.0pt\right|\right|=\sqrt{1+|a|}$. This also
determines the minimum in Eq. (28) to be $\frac{\sqrt{1+|a|}}{\sqrt{2}}$.
Plugging this values in Eq. (28) gives:
$\displaystyle
E_{\\{1,2\\},2}(\varepsilon)=\frac{1}{4}\frac{\sqrt{1+|\alpha|}F_{k}(\varepsilon)}{\sqrt{1-|\alpha|-O_{k}(\varepsilon)}}.$
(68)
The final bound is again the average of the bounds in Eq. (30a) and Eq. (30b).
The first order in $\varepsilon$ for this bound is given in Table 1. The
applicability of the above bound is determined by the inequality
$1-|\alpha|-O_{2}(\varepsilon)\geq 0$. Importantly, the latter condition gives
nonempty region $\varepsilon\in[0,\varepsilon_{0})$ whenever $|\alpha|>0$.
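These closed-form quantities are easy to confirm numerically. Below is a minimal numpy check (our own addition; $\alpha=0.4$ is an arbitrary test value in $(-1,1)$):

```python
import numpy as np

alpha = 0.4  # arbitrary test value in (-1, 1)
Gamma = np.array([[1.0, alpha], [alpha, 1.0]])

# Spectral norm, inverse norm, and Frobenius norm against the
# closed forms 1 + |alpha|, 1 / (1 - |alpha|), sqrt(2 + 2 alpha^2).
print(np.linalg.norm(Gamma, 2), 1 + abs(alpha))
print(np.linalg.norm(np.linalg.inv(Gamma), 2), 1 / (1 - abs(alpha)))
print(np.linalg.norm(Gamma, 'fro'), np.sqrt(2 + 2 * alpha**2))

# Lower-triangular Cholesky factor; its spectral norm is sqrt(1 + |alpha|).
L = np.linalg.cholesky(Gamma)
print(L)  # [[1, 0], [alpha, sqrt(1 - alpha^2)]]
print(np.linalg.norm(L, 2), np.sqrt(1 + abs(alpha)))
```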
The third example is a trine ensemble of states
$(\varrho^{1}_{1},\varrho^{2}_{2},\varrho^{3}_{1})$, with
$\varrho^{x}_{1}=\frac{\mathbbm{1}}{2}+\frac{1}{2}\mathbf{n}_{x}\cdot\bm{\sigma}$,
$x=1,2,3$, and where $\mathbf{n}_{1}=(1,0,0)$,
$\mathbf{n}_{2}=\left(-\frac{1}{2},\frac{\sqrt{3}}{2},0\right)$, and
$\mathbf{n}_{3}=\left(-\frac{1}{2},-\frac{\sqrt{3}}{2},0\right)$. For this
configuration of the preparation states the alternative robustness analysis
via Procrustes (see Appendix C) gives better bounds. Given the vectors
$\mathbf{n}_{i}$, $i=1,2,3$ we can directly compute that
$\|L^{\ddagger}\|=\sqrt{\frac{2}{3}}$. Inserting this value into Lemma 3
produces the results given in Table 1.
The fourth example is the tetrahedron, with $\mathbf{n}_{1}=(0,0,1)$,
$\mathbf{n}_{2}=\left(\sqrt{\frac{8}{9}},0,-\frac{1}{3}\right)$,
$\mathbf{n}_{3}=\left(-\sqrt{\frac{2}{9}},\sqrt{\frac{2}{3}},-\frac{1}{3}\right)$,
$\mathbf{n}_{4}=\left(-\sqrt{\frac{2}{9}},-\sqrt{\frac{2}{3}},-\frac{1}{3}\right)$,
and
$\varrho^{x}_{1}=\frac{\mathbbm{1}}{2}+\frac{1}{2}\mathbf{n}_{x}\cdot\bm{\sigma}$,
$x=1,2,3,4$ as before. In this case, we also employ the bounds from
Procrustes. For the above configuration of states we have that
$\|L^{\ddagger}\|=\frac{\sqrt{3}}{2}$, which leads to the results in Table 1.
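Both values of $\|L^{\ddagger}\|$ quoted in these last two examples can be confirmed in a few lines of numpy (our own sanity check; $L$ is assembled row-wise from the stated Bloch vectors, truncated to two columns in the planar trine case):

```python
import numpy as np

# Trine: three coplanar unit vectors (third Bloch component dropped).
trine = np.array([[ 1.0,  0.0],
                  [-0.5,  np.sqrt(3) / 2],
                  [-0.5, -np.sqrt(3) / 2]])
print(np.linalg.norm(np.linalg.pinv(trine), 2), np.sqrt(2 / 3))

# Tetrahedron: four unit vectors summing to zero.
tetra = np.array([[ 0.0,             0.0,             1.0],
                  [ np.sqrt(8 / 9),  0.0,            -1 / 3],
                  [-np.sqrt(2 / 9),  np.sqrt(2 / 3), -1 / 3],
                  [-np.sqrt(2 / 9), -np.sqrt(2 / 3), -1 / 3]])
print(np.linalg.norm(np.linalg.pinv(tetra), 2), np.sqrt(3) / 2)
```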
## Appendix E Proofs of auxiliary results
###### Proof of Lemma 1.
First of all, we derive a bound on the trace of each of the experimental
measurement effects. From $\|\tilde{M}^{y}_{b}\|\geq 1-\varepsilon$, for $d=2$
it follows immediately that the second (second largest) eigenvalue of each of
the effects $\tilde{M}^{y}_{b}$ cannot exceed $\varepsilon$. Hence, we can
conclude that ${\mathrm{tr}}(\tilde{M}^{y}_{b})\leq 1+\varepsilon,\forall y,b$.
Secondly, we can improve the bound on the norm of each of the experimental
states. For that let us write the spectral decomposition of each experimental
state and POVM effect:
$\displaystyle\tilde{\varrho}^{x}_{a}$
$\displaystyle=\eta(\mathbbm{1}-|\psi\rangle\langle\psi|)+(1-\eta)|\psi\rangle\langle\psi|=\eta\mathbbm{1}+(1-2\eta)|\psi\rangle\langle\psi|$
(69) $\displaystyle\tilde{M}^{x}_{a}$
$\displaystyle=\lambda_{0}|\phi\rangle\langle\phi|+\lambda_{1}(\mathbbm{1}-|\phi\rangle\langle\phi|),$
where we assume that $\lambda_{0}\geq\lambda_{1}$. We omitted the indices
$x,a$ for $\eta$, $\lambda_{0},\lambda_{1}$ and $\psi$,$\phi$ for simplicity.
We can assume, without loss of generality, that $\eta\geq\frac{1}{2}$, and let
us also assume for now that $|\langle\phi|\psi\rangle|^{2}\leq\frac{1}{2}$.
From the condition
${\mathrm{tr}}(\tilde{\varrho}^{x}_{a}\tilde{M}^{x}_{a})\geq 1-\varepsilon$,
it then follows that:
$\displaystyle\eta(\lambda_{0}+\lambda_{1})+(1-2\eta)(\lambda_{0}|\langle\phi|\psi\rangle|^{2}+\lambda_{1}(1-|\langle\phi|\psi\rangle|^{2}))\geq
1-\varepsilon,$ (70)
from which we can obtain a lower bound on $\eta$, namely:
$\displaystyle\eta\geq\frac{1}{1-2|\langle\phi|\psi\rangle|^{2}}\left(\frac{1-\varepsilon-\lambda_{1}}{\lambda_{0}-\lambda_{1}}-\frac{1}{2}\right)+\frac{1}{2}.$
(71)
The expression on the right-hand side of the above inequality is minimal when
$|\langle\phi|\psi\rangle|^{2}=0$ and $\lambda_{0}=1$,
$\lambda_{1}=\varepsilon$, which returns the bound
$\|\tilde{\varrho}^{x}_{a}\|\geq\frac{1-2\varepsilon}{1-\varepsilon}$.
Now let us return to our assumption
$|\langle\phi|\psi\rangle|^{2}\leq\frac{1}{2}$, for which the above bound is
valid. We can upper-bound $\eta$ by $1$ in Eq. (70), which returns a nontrivial
upper bound on $|\langle\phi|\psi\rangle|^{2}$ that happens to be
$1-\frac{1-2\varepsilon}{1-\varepsilon}$. This function is below $\frac{1}{2}$
for $\varepsilon\leq\frac{1}{3}$, i.e., for $\varepsilon\leq\frac{1}{3}$ our
newly-derived bound on $\|\tilde{\varrho}^{x}_{a}\|$ is valid. The region
$\varepsilon\in[0,\frac{1}{3}]$ is significantly larger than the resulting
region in which our self-testing arguments are valid, so this assumption does
not affect our final results.
Let us now try to improve the bounds for the overlaps. In the proof of Theorem
1 (see Eq. (3)) we have already established that:
$\displaystyle|{\mathrm{tr}}(\tilde{\varrho}^{x}_{a}\tilde{\varrho}^{x^{\prime}}_{a^{\prime}})-{\mathrm{tr}}(\varrho^{x}_{a}\varrho^{x^{\prime}}_{a^{\prime}})|\leq\varepsilon+\|\tilde{\varrho}^{x}_{a}-\tilde{M}^{x}_{a}\|,\quad\forall x\neq x^{\prime},\forall a,a^{\prime}.$ (72)
Let us now refine the bound on $\|\tilde{\varrho}^{x}_{a}-\tilde{M}^{x}_{a}\|$. For
simplicity, we present this result below for a pair of operators
$\varrho=(1-\eta)\mathbbm{1}+(2\eta-1)|\psi\rangle\langle\psi|$ and
$M=\lambda_{1}\mathbbm{1}+(\lambda_{0}-\lambda_{1})|\phi\rangle\langle\phi|$,
that satisfy the conditions ${\mathrm{tr}}(\varrho M)\geq 1-\varepsilon$, and
$1-\varepsilon\leq\lambda_{0}\leq 1$, $0\leq\lambda_{1}\leq\varepsilon$. To
compute the norm, we look for the eigenvector $|\xi\rangle$ of the operator
$(\varrho-M)$, i.e., $(\varrho-M)|\xi\rangle=\Lambda|\xi\rangle$,
corresponding to the maximal eigenvalue. We have the following quadratic
equation for the eigenvalue $\Lambda$:
$\displaystyle\Lambda^{2}-\Lambda(1-\lambda_{0}-\lambda_{1})+\eta(1-\eta)-\lambda_{1}+\lambda_{0}\lambda_{1}-\eta(\lambda_{0}-\lambda_{1})+(2\eta-1)(\lambda_{0}-\lambda_{1})|\langle\phi|\psi\rangle|^{2}=0.$ (73)
The sum of the roots of this equation is equal to $1-\lambda_{0}-\lambda_{1}$.
Since we know that $|1-\lambda_{0}-\lambda_{1}|\leq\varepsilon$, then either
both roots are of the same sign, in which case the largest eigenvalue could
only be $\varepsilon$, or they are of the opposite sign. In the latter case,
it is evident that the absolute values of both roots are maximal whenever the
free term in Eq. (73) is minimal. We can then upper-bound the solutions to Eq.
(73) by lower-bounding the free term using the condition ${\mathrm{tr}}(\varrho
M)\geq 1-\varepsilon$. Indeed, from ${\mathrm{tr}}(\varrho M)\geq
1-\varepsilon$ we infer immediately that:
$\displaystyle{\mathrm{tr}}(\varrho
M)=\lambda_{0}-\eta(\lambda_{0}-\lambda_{1})+(2\eta-1)(\lambda_{0}-\lambda_{1})|\langle\phi|\psi\rangle|^{2}\geq
1-\varepsilon,$ (74)
and thus we reduce Eq. (73) to:
$\displaystyle\Lambda^{2}-\Lambda(1-\lambda_{0}-\lambda_{1})+\eta(1-\eta)-\lambda_{0}-\lambda_{1}+\lambda_{0}\lambda_{1}+1-\varepsilon=0.$
(75)
Using the same argument we can set $\eta=1$, because $\eta(1-\eta)\geq 0$. The
positive root of Eq. (75) is equal to:
$\displaystyle\Lambda=\frac{1-\lambda_{0}-\lambda_{1}}{2}+\sqrt{\left(\frac{1-\lambda_{0}-\lambda_{1}}{2}\right)^{2}-(1-\lambda_{0})(1-\lambda_{1})+\varepsilon}.$
(76)
It is easy to check that the above expression does not have any local maxima
w.r.t. $\lambda_{0}$,$\lambda_{1}$ on the domain
$1-\varepsilon\leq\lambda_{0}\leq 1$, $0\leq\lambda_{1}\leq\varepsilon$,
whenever $\varepsilon<\frac{1}{2}$, which we assume to be the case. Thus, we
conclude that the maximal value of $\Lambda$ corresponds to the boundary of
the region of $(\lambda_{0},\lambda_{1})$. By considering this boundary we
find that this maximal value corresponds to the case of $\lambda_{0}=1$ and
$\lambda_{1}=0$ which yields $\Lambda=\sqrt{\varepsilon}$. From the above
argument we finally conclude that:
$\displaystyle\|\tilde{\varrho}^{x}_{a}-\tilde{M}^{x}_{a}\|\leq\sqrt{\varepsilon},\quad\forall a,$ (77)
which completes the proof for the state overlaps. From Eq. (77) and the fact
that ${\mathrm{tr}}(\tilde{M}^{y}_{b})\leq 1+\varepsilon$, it is easy to
obtain the improved bound on the overlaps between measurement effects. ∎
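As a numerical cross-check of this argument (our own addition), one can scan the admissible box of $(\lambda_{0},\lambda_{1})$ and verify that the positive root of Eq. (75), written out in Eq. (76), never exceeds $\sqrt{\varepsilon}$:

```python
import numpy as np

eps = 0.01
lam0, lam1 = np.meshgrid(np.linspace(1 - eps, 1.0, 201),
                         np.linspace(0.0, eps, 201))

# Positive root of Eq. (75), as written in Eq. (76), with eta set to 1.
half_b = (1 - lam0 - lam1) / 2
disc = half_b**2 - (1 - lam0) * (1 - lam1) + eps
Lam = half_b + np.sqrt(np.maximum(disc, 0.0))

print(Lam.max(), np.sqrt(eps))  # the maximum sits at lambda_0 = 1, lambda_1 = 0
```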
###### Proof of Lemma 2.
Let us start by deriving the bound on the Frobenius norm $\|\Delta\Gamma\|_{F}$:
$\displaystyle\|\Delta\Gamma\|^{2}_{F}=\sum_{x=1}^{k}(|\Gamma_{x,x}-\tilde{\Gamma}_{x,x}|^{2}+\sum_{x^{\prime}\neq x}|\Gamma_{x,x^{\prime}}-\tilde{\Gamma}_{x,x^{\prime}}|^{2})$
$\displaystyle=4\sum_{x=1}^{k}\Big{(}(1-{\mathrm{tr}}(\tilde{\varrho}^{x}_{1})^{2})^{2}+\sum_{x^{\prime}\neq x}|{\mathrm{tr}}(\varrho^{x}_{1}\varrho^{x^{\prime}}_{1})-{\mathrm{tr}}(\tilde{\varrho}^{x}_{1}\tilde{\varrho}^{x^{\prime}}_{1})|^{2}\Big{)}.$ (78)
We have already established the bound on
$|{\mathrm{tr}}(\varrho^{x}_{1}\varrho^{x^{\prime}}_{1})-{\mathrm{tr}}(\tilde{\varrho}^{x}_{1}\tilde{\varrho}^{x^{\prime}}_{1})|$
in Lemma 1. The bound on $(1-{\mathrm{tr}}(\tilde{\varrho}^{x}_{1})^{2})^{2}$
can be obtained from the bound on the norm of each $\tilde{\varrho}^{x}_{1}$.
Namely, from the condition $\|\tilde{\varrho}^{x}_{a}\|\geq\frac{1-2\varepsilon}{1-\varepsilon}$ we can
immediately conclude that:
$\displaystyle
1-{\mathrm{tr}}(\tilde{\varrho}^{x}_{1})^{2}\leq\frac{2\varepsilon(1-2\varepsilon)}{(1-\varepsilon)^{2}}.$
(79)
From here, it is easy to get to the final bound on $\|\Delta\Gamma\|^{2}_{F}$, which reads:
$\displaystyle\|\Delta\Gamma\|^{2}_{F}\leq 4k(k-1)(\varepsilon+\sqrt{\varepsilon})^{2}+\frac{16k\varepsilon^{2}(1-2\varepsilon)^{2}}{(1-\varepsilon)^{4}}\leq 4k(k-1)\varepsilon\left(1+2\sqrt{\varepsilon}+\frac{k+3}{k-1}\varepsilon\right),$
(80)
where we made some approximations to simplify the result.
Now, let us derive the bound on $\|\Delta\Gamma\|$. In principle, we know that
$\|\Delta\Gamma\|\leq\|\Delta\Gamma\|_{F}$, but we can derive a better
bound based on the fact that the diagonal entries of $\Delta\Gamma$ are much
less than the off-diagonal entries.
First of all, due to the triangle inequality, we can write
$\|\Delta\Gamma\|\leq\|\mathrm{diag}(\Delta\Gamma)\|+\|\mathrm{offdiag}(\Delta\Gamma)\|$,
where we split $\Delta\Gamma$ into its diagonal and off-diagonal parts. The
first term, $\|\mathrm{diag}(\Delta\Gamma)\|$, can be easily bounded as
follows:
$\displaystyle\|\mathrm{diag}(\Delta\Gamma)\|=2\max_{x}(1-{\mathrm{tr}}(\tilde{\varrho}^{x}_{1})^{2})\leq\frac{4\varepsilon(1-2\varepsilon)}{(1-\varepsilon)^{2}}\leq 4\varepsilon.$ (81)
As for the off-diagonal part, we give the proof for the two cases $k=2$ and
$k=3$ separately. For $k=2$,
$\|\mathrm{offdiag}(\Delta\Gamma)\|=2|{\mathrm{tr}}(\varrho^{1}_{1}\varrho^{2}_{1})-{\mathrm{tr}}(\tilde{\varrho}^{1}_{1}\tilde{\varrho}^{2}_{1})|\leq 2\sqrt{\varepsilon}+2\varepsilon$,
where we used the results of Lemma 1. As for $k=3$, we will need an
intermediate result, namely the following relation:
$\displaystyle\|A\|\leq\sqrt{\frac{k-1}{k}}\|A\|_{F},$ (82)
where $k$ is the size of the matrix $A$ with ${\mathrm{tr}}(A)=0$. We give a
proof of this below.
Let us assume that $\\{\lambda_{i}\\}_{i=1}^{k}$ are the eigenvalues of the
matrix $A$; hence we know that $\sum_{i=1}^{k}\lambda_{i}=0.$ Let
$\lambda_{1}$ be the largest eigenvalue, i.e., the operator norm of $A$, if
$\lambda_{1}\geq 0$. For fixed $\lambda_{1}$, the Frobenius norm of $A$
satisfies the following lower bound:
$\displaystyle\|A\|_{F}^{2}=\lambda_{1}^{2}+\sum_{i=2}^{k}\lambda_{i}^{2}\geq\lambda_{1}^{2}+\frac{1}{k-1}\left(\sum_{i=2}^{k}|\lambda_{i}|\right)^{2}\geq\lambda_{1}^{2}+\frac{1}{k-1}\left(\sum_{i=2}^{k}\lambda_{i}\right)^{2}=\lambda_{1}^{2}\frac{k}{k-1},$ (83)
which proves the bound in Eq. (82). Using the above result, we obtain the
following bound:
$\displaystyle\|\mathrm{offdiag}(\Delta\Gamma)\|\leq\sqrt{\frac{2}{3}}\|\mathrm{offdiag}(\Delta\Gamma)\|_{F}=2\sqrt{\frac{2}{3}}\sqrt{\sum_{x\neq x^{\prime}}|{\mathrm{tr}}(\varrho^{x}_{1}\varrho^{x^{\prime}}_{1})-{\mathrm{tr}}(\tilde{\varrho}^{x}_{1}\tilde{\varrho}^{x^{\prime}}_{1})|^{2}}\leq 4(\sqrt{\varepsilon}+\varepsilon),$
which completes our proof. ∎
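The two purely algebraic ingredients of this proof, namely the simplification in Eq. (80) and the trace-free norm inequality in Eq. (82), can also be checked numerically. Below is a minimal sketch (our own addition; Eq. (82) is tested on random traceless symmetric matrices, matching the Hermitian setting of $\Delta\Gamma$):

```python
import numpy as np

rng = np.random.default_rng(1)

# Second inequality of Eq. (80), on a grid of (k, eps).
for k in (2, 3):
    for eps in np.linspace(1e-4, 0.3, 50):
        lhs = (4 * k * (k - 1) * (eps + np.sqrt(eps))**2
               + 16 * k * eps**2 * (1 - 2 * eps)**2 / (1 - eps)**4)
        rhs = 4 * k * (k - 1) * eps * (1 + 2 * np.sqrt(eps)
                                       + (k + 3) / (k - 1) * eps)
        assert lhs <= rhs + 1e-12

# Eq. (82) on random traceless symmetric matrices.
for k in (2, 3, 5):
    for _ in range(1000):
        A = rng.normal(size=(k, k))
        A = (A + A.T) / 2
        A -= (np.trace(A) / k) * np.eye(k)   # enforce tr(A) = 0
        assert (np.linalg.norm(A, 2) <=
                np.sqrt((k - 1) / k) * np.linalg.norm(A, 'fro') + 1e-12)

print("all checks passed")
```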
## References
* Arute et al. [2019] Frank Arute, Kunal Arya, Ryan Babbush, Dave Bacon, Joseph C. Bardin, Rami Barends, Rupak Biswas, Sergio Boixo, Fernando G. S. L. Brandao, David A. Buell, Brian Burkett, Yu Chen, Zijun Chen, Ben Chiaro, Roberto Collins, William Courtney, Andrew Dunsworth, Edward Farhi, Brooks Foxen, Austin Fowler, Craig Gidney, Marissa Giustina, Rob Graff, Keith Guerin, Steve Habegger, Matthew P. Harrigan, Michael J. Hartmann, Alan Ho, Markus Hoffmann, Trent Huang, Travis S. Humble, Sergei V. Isakov, Evan Jeffrey, Zhang Jiang, Dvir Kafri, Kostyantyn Kechedzhi, Julian Kelly, Paul V. Klimov, Sergey Knysh, Alexander Korotkov, Fedor Kostritsa, David Landhuis, Mike Lindmark, Erik Lucero, Dmitry Lyakh, Salvatore Mandrà, Jarrod R. McClean, Matthew McEwen, Anthony Megrant, Xiao Mi, Kristel Michielsen, Masoud Mohseni, Josh Mutus, Ofer Naaman, Matthew Neeley, Charles Neill, Murphy Yuezhen Niu, Eric Ostby, Andre Petukhov, John C. Platt, Chris Quintana, Eleanor G. Rieffel, Pedram Roushan, Nicholas C. Rubin, Daniel Sank, Kevin J. Satzinger, Vadim Smelyanskiy, Kevin J. Sung, Matthew D. Trevithick, Amit Vainsencher, Benjamin Villalonga, Theodore White, Z. Jamie Yao, Ping Yeh, Adam Zalcman, Hartmut Neven, and John M. Martinis. Quantum supremacy using a programmable superconducting processor. _Nature_ , 574(7779):505–510, Oct 2019. doi: 10.1038/s41586-019-1666-5.
* Degen et al. [2017] C. L. Degen, F. Reinhard, and P. Cappellaro. Quantum sensing. _Rev. Mod. Phys._ , 89:035002, Jul 2017. doi: 10.1103/RevModPhys.89.035002. URL https://link.aps.org/doi/10.1103/RevModPhys.89.035002.
* Georgescu et al. [2014] I. M. Georgescu, S. Ashhab, and Franco Nori. Quantum simulation. _Rev. Mod. Phys._ , 86:153–185, Mar 2014. doi: 10.1103/RevModPhys.86.153. URL https://link.aps.org/doi/10.1103/RevModPhys.86.153.
* Montanaro [2016] Ashley Montanaro. Quantum algorithms: an overview. _npj Quantum Information_ , 2:15023, Jan 2016. doi: 10.1038/npjqi.2015.23.
* Biamonte et al. [2017] Jacob Biamonte, Peter Wittek, Nicola Pancotti, Patrick Rebentrost, Nathan Wiebe, and Seth Lloyd. Quantum machine learning. _Nature_ , 549(7671):195–202, Sep 2017. doi: 10.1038/nature23474.
* Dunjko and Briegel [2018] Vedran Dunjko and Hans J Briegel. Machine learning & artificial intelligence in the quantum domain: a review of recent progress. _Reports on Progress in Physics_ , 81(7):074001, jun 2018. doi: 10.1088/1361-6633/aab406. URL https://doi.org/10.1088%2F1361-6633%2Faab406.
* Eisert et al. [2019] J. Eisert, D. Hangleiter, N. Walk, I. Roth, D. Markham, R. Parekh, U. Chabaud, and E. Kashefi. Quantum certification and benchmarking. _arXiv e-prints_ , art. arXiv:1910.06343, Oct 2019.
* Mayers and Yao [2003] Dominic Mayers and Andrew Yao. Self testing quantum apparatus. _arXiv preprint quant-ph/0307205_ , 2003.
* Šupić and Bowles [2019] Ivan Šupić and Joseph Bowles. Self-testing of quantum systems: a review. _arXiv preprint arXiv:1904.10042_ , 2019.
* Bell [1964] John S Bell. On the einstein podolsky rosen paradox. _Physics Physique Fizika_ , 1(3):195, 1964.
* Chen et al. [2016] Shin-Liang Chen, Costantino Budroni, Yeong-Cherng Liang, and Yueh-Nan Chen. Natural framework for device-independent quantification of quantum steerability, measurement incompatibility, and self-testing. _Physical Review Letters_ , 116(24):240401, 2016\.
* Coladangelo et al. [2017] Andrea Coladangelo, Koon Tong Goh, and Valerio Scarani. All pure bipartite entangled states can be self-tested. _Nature communications_ , 8:15485, 2017.
* Bowles et al. [2018] Joseph Bowles, Ivan Šupić, Daniel Cavalcanti, and Antonio Acín. Device-independent entanglement certification of all entangled states. _Physical Review Letters_ , 121(18):180503, 2018\.
* Popescu and Rohrlich [1992] Sandu Popescu and Daniel Rohrlich. Which states violate Bell’s inequality maximally? _Physics Letters A_ , 169(6):411–414, 1992.
* Ahrens et al. [2014] Johan Ahrens, Piotr Badziąg, Marcin Pawłowski, Marek Żukowski, and Mohamed Bourennane. Experimental tests of classical and quantum dimensionality. _Phys. Rev. Lett._ , 112:140401, Apr 2014. doi: 10.1103/PhysRevLett.112.140401. URL https://link.aps.org/doi/10.1103/PhysRevLett.112.140401.
* Brask et al. [2017] Jonatan Bohr Brask, Anthony Martin, William Esposito, Raphael Houlmann, Joseph Bowles, Hugo Zbinden, and Nicolas Brunner. Megahertz-rate semi-device-independent quantum random number generators based on unambiguous state discrimination. _Phys. Rev. Applied_ , 7:054018, May 2017. doi: 10.1103/PhysRevApplied.7.054018. URL https://link.aps.org/doi/10.1103/PhysRevApplied.7.054018.
* Aguilar et al. [2018] Edgar A. Aguilar, Máté Farkas, Daniel Martínez, Matías Alvarado, Jaime Cariñe, Guilherme B. Xavier, Johanna F. Barra, Gustavo Cañas, Marcin Pawłowski, and Gustavo Lima. Certifying an irreducible 1024-dimensional photonic state using refined dimension witnesses. _Phys. Rev. Lett._ , 120:230503, Jun 2018. doi: 10.1103/PhysRevLett.120.230503. URL https://link.aps.org/doi/10.1103/PhysRevLett.120.230503.
* Anwer et al. [2020] Hammad Anwer, Sadiq Muhammad, Walid Cherifi, Nikolai Miklin, Armin Tavakoli, and Mohamed Bourennane. Experimental characterization of unsharp qubit observables and sequential measurement incompatibility via quantum random access codes. _Phys. Rev. Lett._ , 125:080403, Aug 2020. URL https://link.aps.org/doi/10.1103/PhysRevLett.125.080403.
* Pawłowski and Brunner [2011] Marcin Pawłowski and Nicolas Brunner. Semi-device-independent security of one-way quantum key distribution. _Phys. Rev. A_ , 84:010302, Jul 2011. doi: 10.1103/PhysRevA.84.010302. URL https://link.aps.org/doi/10.1103/PhysRevA.84.010302.
* Van Himbeeck et al. [2017] Thomas Van Himbeeck, Erik Woodhead, Nicolas J. Cerf, Raúl García-Patrón, and Stefano Pironio. Semi-device-independent framework based on natural physical assumptions. _Quantum_ , 1:33, November 2017. ISSN 2521-327X. doi: 10.22331/q-2017-11-18-33. URL https://doi.org/10.22331/q-2017-11-18-33.
* Chaves et al. [2015] Rafael Chaves, Jonatan Bohr Brask, and Nicolas Brunner. Device-independent tests of entropy. _Phys. Rev. Lett._ , 115:110501, Sep 2015. doi: 10.1103/PhysRevLett.115.110501. URL https://link.aps.org/doi/10.1103/PhysRevLett.115.110501.
* Tavakoli et al. [2018a] Armin Tavakoli, Jędrzej Kaniewski, Tamás Vértesi, Denis Rosset, and Nicolas Brunner. Self-testing quantum states and measurements in the prepare-and-measure scenario. _Physical Review A_ , 98(6):062307, 2018a.
* Farkas and Kaniewski [2019] Máté Farkas and Jędrzej Kaniewski. Self-testing mutually unbiased bases in the prepare-and-measure scenario. _Physical Review A_ , 99(3):032316, 2019.
* Mironowicz and Pawłowski [2019] Piotr Mironowicz and Marcin Pawłowski. Experimentally feasible semi-device-independent certification of four-outcome positive-operator-valued measurements. _Phys. Rev. A_ , 100:030301, Sep 2019. doi: 10.1103/PhysRevA.100.030301. URL https://link.aps.org/doi/10.1103/PhysRevA.100.030301.
* Tavakoli et al. [2018b] Armin Tavakoli, Massimiliano Smania, Tamás Vértesi, Nicolas Brunner, and Mohamed Bourennane. Self-testing non-projective quantum measurements. _arXiv preprint arXiv:1811.12712_ , 2018b.
* Tavakoli et al. [2019] Armin Tavakoli, Denis Rosset, and Marc-Olivier Renou. Enabling computation of correlation bounds for finite-dimensional quantum systems via symmetrization. _Phys. Rev. Lett._ , 122:070501, Feb 2019. doi: 10.1103/PhysRevLett.122.070501. URL https://link.aps.org/doi/10.1103/PhysRevLett.122.070501.
* Blume-Kohout et al. [2017] Robin Blume-Kohout, John King Gamble, Erik Nielsen, Kenneth Rudinger, Jonathan Mizrahi, Kevin Fortier, and Peter Maunz. Demonstration of qubit operations below a rigorous fault tolerance threshold with gate set tomography. _Nature Communications_ , 8, Feb 2017. doi: 10.1038/ncomms14485.
* Bennett and Brassard [1984] Charles H Bennett and Gilles Brassard. Quantum cryptography: Public-key distribution and coin tossing. In _Proceedings of IEEE International Conference on Computers, Systems and Signal Processing_ , pages 175–179. Bangalore, India, IEEE Press, 1984\.
* Bennett [1992] Charles H Bennett. Quantum cryptography using any two nonorthogonal states. _Phys. Rev. Lett._ , 68:3121–3124, May 1992. doi: 10.1103/PhysRevLett.68.3121. URL https://link.aps.org/doi/10.1103/PhysRevLett.68.3121.
* Woodhead et al. [2013] Erik Woodhead, Charles Ci Wen Lim, and Stefano Pironio. Semi-device-independent QKD based on BB84 and a CHSH-type estimation. In _Theory of Quantum Computation, Communication, and Cryptography_ , pages 107–115, Berlin, Heidelberg, 2013. Springer Berlin Heidelberg. ISBN 978-3-642-35656-8.
* de Vicente [2017] Julio I. de Vicente. Shared randomness and device-independent dimension witnessing. _Phys. Rev. A_ , 95:012340, Jan 2017. doi: 10.1103/PhysRevA.95.012340. URL https://link.aps.org/doi/10.1103/PhysRevA.95.012340.
* Navascués and Popescu [2014] Miguel Navascués and Sandu Popescu. How energy conservation limits our measurements. _Phys. Rev. Lett._ , 112:140502, Apr 2014. doi: 10.1103/PhysRevLett.112.140502. URL https://link.aps.org/doi/10.1103/PhysRevLett.112.140502.
* Renes et al. [2004] Joseph M. Renes, Robin Blume-Kohout, A. J. Scott, and Carlton M. Caves. Symmetric informationally complete quantum measurements. _Journal of Mathematical Physics_ , 45(6):2171–2180, 2004. doi: 10.1063/1.1737053. URL https://doi.org/10.1063/1.1737053.
* Brierley et al. [2009] Stephen Brierley, Stefan Weigert, and Ingemar Bengtsson. All mutually unbiased bases in dimensions two to five, 2009.
* Nielsen and Chuang [2011] Michael A. Nielsen and Isaac L. Chuang. _Quantum Computation and Quantum Information: 10th Anniversary Edition_. Cambridge University Press, USA, 10th edition, 2011. ISBN 1107002176.
* Bargmann [1964] V. Bargmann. Note on Wigner’s theorem on symmetry operations. _Journal of Mathematical Physics_ , 5(7):862–868, 1964. doi: 10.1063/1.1704188. URL https://doi.org/10.1063/1.1704188.
* Sun [1991] Ji-Guang Sun. Perturbation bounds for the Cholesky and QR factorizations. _BIT Numerical Mathematics_ , 31(2):341–352, Jun 1991. ISSN 1572-9125. doi: 10.1007/BF01931293. URL https://doi.org/10.1007/BF01931293.
* Arias-Castro et al. [2020] Ery Arias-Castro, Adel Javanmard, and Bruno Pelletier. Perturbation bounds for Procrustes, classical scaling, and trilateration, with applications to manifold learning. _Journal of Machine Learning Research_ , 21:15–1, 2020\.
* Bechmann-Pasquinucci and Gisin [2003] H. Bechmann-Pasquinucci and N. Gisin. Intermediate states in quantum cryptography and Bell inequalities. _Phys. Rev. A_ , 67:062310, Jun 2003. doi: 10.1103/PhysRevA.67.062310. URL https://link.aps.org/doi/10.1103/PhysRevA.67.062310.
* D’Ariano et al. [2005] Giacomo Mauro D’Ariano, Paoloplacido Lo Presti, and Paolo Perinotti. Classical randomness in quantum measurements. _Journal of Physics A: Mathematical and General_ , 38(26):5979, 2005.
* Oszmaniec and Biswas [2019] Michał Oszmaniec and Tanmoy Biswas. Operational relevance of resource theories of quantum measurements. _Quantum_ , 3:133, April 2019. ISSN 2521-327X. doi: 10.22331/q-2019-04-26-133. URL https://doi.org/10.22331/q-2019-04-26-133.
* Takagi and Regula [2019] Ryuji Takagi and Bartosz Regula. General resource theories in quantum mechanics and beyond: Operational characterization via discrimination tasks. _Phys. Rev. X_ , 9:031053, Sep 2019. doi: 10.1103/PhysRevX.9.031053. URL https://link.aps.org/doi/10.1103/PhysRevX.9.031053.
* Uola et al. [2019] Roope Uola, Tristan Kraft, Jiangwei Shang, Xiao-Dong Yu, and Otfried Gühne. Quantifying quantum resources with conic programming. _Phys. Rev. Lett._ , 122:130404, Apr 2019. doi: 10.1103/PhysRevLett.122.130404. URL https://link.aps.org/doi/10.1103/PhysRevLett.122.130404.
* Tavakoli [2020] Armin Tavakoli. Semi-device-independent certification of independent quantum state and measurement devices. _arXiv preprint arXiv:2003.03859_ , 2020.
* Higham [2002] Nicholas J. Higham. _Accuracy and Stability of Numerical Algorithms_. Society for Industrial and Applied Mathematics, second edition, 2002. doi: 10.1137/1.9780898718027. URL https://epubs.siam.org/doi/abs/10.1137/1.9780898718027.
* Chang and Stehlé [2010] X.-W. Chang and D. Stehlé. Rigorous perturbation bounds of some matrix factorizations. _SIAM Journal on Matrix Analysis and Applications_ , 31(5):2841–2859, 2010. doi: 10.1137/090778535. URL https://doi.org/10.1137/090778535.
* Hurley and Cattell [1962] John R Hurley and Raymond B Cattell. The Procrustes program: Producing direct rotation to test a hypothesized factor structure. _Behavioral science_ , 7(2):258–262, 1962.
# One or Two Frequencies? The Scattering Transform Answers
This work is supported by the NSF award 1633259 (BIRDVOX).
Vincent Lostanlen, Music and Audio Research Lab, New York University, New York, NY, USA
Alice Cohen-Hadria, IRIT, University of Toulouse — CNRS, Toulouse, France
Juan Pablo Bello, Music and Audio Research Lab, New York University, New York, NY, USA
###### Abstract
With the aim of constructing a biologically plausible model of machine
listening, we study the representation of a multicomponent stationary signal
by a wavelet scattering network. First, we show that renormalizing second-
order nodes by their first-order parents gives a simple numerical criterion to
assess whether two neighboring components will interfere psychoacoustically.
Secondly, we run a manifold learning algorithm (Isomap) on scattering
coefficients to visualize the similarity space underlying parametric additive
synthesis. Thirdly, we generalize the “one or two components” framework to
three sine waves or more, and prove that the effective scattering depth of a
Fourier series grows in logarithmic proportion to its bandwidth.
###### Index Terms:
Audio systems, Amplitude modulation, Continuous wavelet transform, Fourier
series, Multi-layer neural network.
## I Introduction
In the mammalian auditory system, cochlear hair cells operate like band-pass
filters whose equivalent rectangular bandwidth (ERB) grows in proportion to
their center frequency. Given two sine waves
$t\mapsto\bm{y_{1}}(t)=a_{1}\cos(f_{1}t+\varphi_{1})$ and
$t\mapsto\bm{y_{2}}(t)=a_{2}\cos(f_{2}t+\varphi_{2})$ of respective
frequencies $f_{1}>0$ and $f_{2}>0$, we perceive their mixture as a musical
chord insofar as $\bm{y_{1}}$ and $\bm{y_{2}}$ belong to disjoint critical
bands. However, if $a_{2}\ll a_{1}$ or $f_{2}\approx f_{1}$, then the tone
$\bm{y_{2}}$ is said to be _masked_ by $\bm{y_{1}}$. In lieu of two pure
tones, we hear a “beating tone”: i.e., a locally sinusoidal wave whose carrier
frequency is $\frac{1}{2}(f_{1}+f_{2})$ and whose modulation frequency is
$\frac{1}{2}|f_{1}-f_{2}|$. In humans, the resolution of beating tones
involves physiological processes beyond the cochlea, i.e., in the primary
auditory cortex.
The scattering transform ($\mathbf{S}$) is a deep convolutional operator which
alternates constant-$Q$ wavelet decompositions and the application of
pointwise complex modulus, up to some time scale $T$. Broadly speaking, its
first two layers ($\mathbf{S_{1}}$ and $\mathbf{S_{2}}$) resemble the
functioning of the cochlea and the primary auditory cortex, respectively. In
the context of audio classification, scattering transforms have been
succesfully employed to represent speech [2], environmental sounds [13], urban
sounds [20], musical instruments [10], rhythms [8], and playing techniques
[24]. Therefore, the scattering transform simultaneously enjoys a diverse
range of practical motivations, a firm rooting in wavelet theory, and a
plausible correspondence with neurophysiology.
This article discusses the response of the scattering transform operator to a
complex tone input $\bm{y}:t\mapsto\bm{y_{1}}(t)+\bm{y_{2}}(t)$, depending on
the sinusoidal parameters of $\bm{y_{1}}$ and $\bm{y_{2}}$. In this respect,
we follow a well-established methodology in nonstationary signal processing,
colloquially known as: “One or two frequencies? The X Answers”, where X is the
nonlinear operator of interest. The key idea is to identify transitional
regimes in the response of X with respect to variations in relative amplitude
($\frac{a_{2}}{a_{1}}$), relative frequency ($\frac{f_{2}}{f_{1}}$), and
relative phase ($\varphi_{2}-\varphi_{1}$). Prior publications have done so
for X being the empirical mode decomposition [19], the synchrosqueezing
transform [25], and the singular spectrum analysis operator [9]. We extend
this line of research to the case where X is the scattering transform in
dimension one.
## II Wavelet-based recursive interferometry
Let $\bm{\psi}\in\mathbf{L}^{2}(\mathbb{R},\mathbb{C})$ be a Hilbert-analytic
filter with null average, unit center frequency, and an ERB equal to $1/Q$. We
define a constant-$Q$ wavelet filterbank as the family
$\bm{\psi}_{\lambda}:t\mapsto\lambda\bm{\psi}(\lambda t)$. Each wavelet
$\bm{\psi}_{\lambda}$ has a center frequency of $\lambda$, an ERB of
$\lambda/Q$, and an effective receptive field of $(2\pi Q/\lambda)$ in the
time domain. In practice, the frequency variable $\lambda$ gets discretized
according to a geometric progression of common ratio $2^{\frac{1}{Q}}$.
Consequently, every continuous signal $\bm{y}$ that is bandlimited to
$[f_{\min},f_{\max}]$ activates at most
$Q\log_{2}({\frac{f_{\max}}{f_{\min}}})$ wavelets $\bm{\psi_{\lambda}}$.
We define the scalogram of $\bm{y}$ as the squared complex modulus of its
constant-$Q$ transform (CQT):
$\mathbf{U_{1}}\bm{y}:(t,\lambda_{1})\longmapsto\left|\int_{-\infty}^{+\infty}\bm{y}(t^{\prime})\bm{\psi_{\lambda_{1}}}(t-t^{\prime})\;\mathrm{d}t^{\prime}\right|^{2}.$
(1)
Likewise, we define a second layer of nonlinear transformation for $\bm{y}$ as
the “scalogram of its scalogram”:
$\mathbf{U_{2}}\bm{y}:(t,\lambda_{1},\lambda_{2})\longmapsto\Big{|}\big{|}\bm{y}\ast\bm{\psi_{\lambda_{1}}}\big{|}^{2}\ast\bm{\psi_{\lambda_{2}}}\Big{|}^{2}(t),$
(2)
where the asterisk denotes a convolution product. This construct may be
iterated for every integer $m$ by “scattering” the multivariate signal
$\mathbf{U_{m}}\bm{y}$ into all wavelet subbands $\lambda_{m}<\lambda_{m-1}$:
$\mathbf{U_{m+1}}\bm{y}:(t,\lambda_{1}\ldots\lambda_{m+1})\longmapsto\\\
\Big{|}\mathbf{U_{m}}\bm{y}(t,\lambda_{1}\ldots\lambda_{m})\ast\bm{\psi}_{\lambda_{m}}\Big{|}^{2}(t,\lambda_{1}\ldots\lambda_{m}).$
(3)
Note that the original definition of the scattering transform adopts the
complex modulus ($|z|=\sqrt{z\bar{z}}$) rather than its square ($|z|^{2}=z\bar{z}$)
as its activation function. This is to ensure that $\mathbf{U_{m}}$ is a non-
expansive map in terms of Lipschitz regularity. However, to simplify our
calculation and spare an intermediate stage of linearization of the square
root, we choose to employ a measure of power rather than amplitude. This idea
was initially proposed by [6] in the context of marine bioacoustics.
Every layer $m$ in this deep convolutional network composes an invariant
linear system (namely, the CQT) and a pointwise operation (the squared complex
modulus). Thus, by recurrence over the depth variable $m$, every tensor
$\mathbf{U_{m}}\bm{y}$ is equivariant to the action of delay operators. In
order to replace this equivariance property by an invariance property, we
integrate each $\mathbf{U_{m}}$ over some predefined time scale $T$, yielding
the invariant scattering transform:
$\displaystyle\mathbf{S_{m}}\bm{y}:(t,p)\longmapsto\int_{-\infty}^{+\infty}\mathbf{U_{m}}(t^{\prime},p)\bm{\phi}_{T}(t-t^{\prime})\;\mathrm{d}t^{\prime},$
(4)
where the $m$-tuple $p=(\lambda_{1}\ldots\lambda_{m})$ is a scattering path
and the signal $\bm{\phi}_{T}$ is a real-valued low-pass filter of time scale
$T$.
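Equations (1)-(4) translate almost literally into FFT-based numpy. The sketch below is a simplified illustration (Gaussian analytic band-pass filters stand in for the Gammatone and Shannon wavelets used in the experiments; the grids and test frequencies are arbitrary choices of ours):

```python
import numpy as np

def analytic_bank(n, lambdas, Q):
    """Frequency responses of Gaussian analytic band-pass filters:
    one bump of center lambda and bandwidth lambda / Q per row,
    zeroed on f <= 0 to enforce analyticity and null average."""
    freqs = np.fft.fftfreq(n)  # in cycles per sample
    bank = np.exp(-0.5 * ((freqs[None, :] - lambdas[:, None])
                          / (lambdas[:, None] / Q))**2)
    bank[:, freqs <= 0] = 0.0
    return bank

def scalogram(x, bank):
    """Eqs. (1) and (3): squared complex modulus of the filtered signal."""
    return np.abs(np.fft.ifft(np.fft.fft(x)[None, :] * bank))**2

n, Q1, Q2 = 4096, 4, 1
lambdas1 = 0.4 * 2.0 ** (-np.arange(24) / Q1)   # first-order CQT grid
lambdas2 = 0.02 * 2.0 ** (-np.arange(6) / Q2)   # slower, second-order grid

t = np.arange(n)
x = np.cos(0.20 * t) + np.cos(0.22 * t)  # two components, one critical band

U1 = scalogram(x, analytic_bank(n, lambdas1, Q1))
# Eq. (3): scatter every first-order subband into the second filterbank.
U2 = np.stack([scalogram(u, analytic_bank(n, lambdas2, Q2)) for u in U1])

# Eq. (4): here, phi_T is plain averaging over the whole window (T = n).
S1, S2 = U1.mean(axis=-1), U2.mean(axis=-1)
print(S1.shape, S2.shape)  # (24,), (24, 6)
```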
## III Auditory masking in a scattering network
Figure 1: Superimposed heatmaps of second-order masking coefficients
$\mathbf{\widetilde{S}_{2}}\bm{y}$ after a scattering transform of two sine
waves $\bm{y_{1}}$ and $\bm{y_{2}}$, measured around the frequency $f_{1}$, as
a function of relative amplitude $\frac{a_{2}}{a_{1}}$ and relative frequency
difference $\frac{|f_{2}-f_{1}|}{f_{1}}$. The color of each blot denotes the
resolution $\lambda_{2}$ at the second layer. Wavelets have an asymmetric
profile (Gammatone wavelets) and a quality factor $Q=4$. The second layer
covers an interval of nine octaves below $f_{1}$. For the sake of clarity, we
only display one interference pattern per octave.
Given $n\in\\{1,2\\}$, the convolution between every sine wave $\bm{y_{n}}$
and every wavelet $\bm{\psi_{\lambda_{1}}}$ writes as a multiplication in the
Fourier domain. Because $\bm{\psi_{\lambda_{1}}}$ is Hilbert-analytic, only
the analytic part
$\bm{y_{n}^{\mathrm{a}}}=\bm{y_{n}}+\mathrm{i}\mathcal{H}\\{\mathbf{y_{n}}\\}=a_{n}\exp(\mathrm{i}(f_{n}t+\varphi_{n}))$
of the real signal $\bm{y_{n}}$ is preserved in the CQT:
$\left(\bm{y_{n}}\ast\bm{\psi_{\lambda_{1}}}\right)(t)=\dfrac{1}{2}\bm{\widehat{\psi}_{\lambda_{1}}}(f_{n})\bm{y_{n}^{\mathrm{a}}}(t).$
(5)
By linearity of the CQT, we expand the interference between $\bm{y_{1}}$ and
$\bm{y_{2}}$ by heterodyning:
$\displaystyle\Big{|}(\bm{y_{1}}+\bm{y_{2}})\ast\bm{\psi_{\lambda_{1}}}\Big{|}^{2}(t)=\frac{1}{2}\Big{|}\bm{\widehat{\psi}}\big{(}\frac{f_{1}}{\lambda_{1}}\big{)}\Big{|}^{2}a_{1}^{2}+\frac{1}{2}\Big{|}\bm{\widehat{\psi}}\big{(}\frac{f_{2}}{\lambda_{1}}\big{)}\Big{|}^{2}a_{2}^{2}$
$\displaystyle+\mathfrak{R}\Bigg{(}\\!\bm{\widehat{\psi}}\Big{(}\frac{f_{1}}{\lambda_{1}}\Big{)}\bm{\widehat{\psi}^{\ast}}\Big{(}\frac{f_{2}}{\lambda_{1}}\Big{)}\\!\Bigg{)}a_{1}a_{2}\cos\big{(}(f_{2}-f_{1})t+(\varphi_{2}-\varphi_{1})\big{)}.$
(6)
Because the wavelet $\bm{\psi}$ has a null average, the two constant terms in
the equation above are absorbed by the first layer of the scattering network,
and disappear at deeper layers. However, the cross term, proportional to
$a_{1}a_{2}$, is a “difference tone” of fundamental frequency
$\Delta\\!f=|f_{2}-f_{1}|$.
The authors of a previous publication [3] have remarked that this difference
tone elicits a peak in second-order scattering coefficients for the path
$p=(\lambda_{1},\lambda_{2})=(f_{1},|f_{2}-f_{1}|)$. In the following, we
generalize their study to include the effect of the relative amplitude
$\frac{a_{2}}{a_{1}}$, the wavelet shape $\bm{\psi}$, the quality factor $Q$,
and the time scale of local stationarity $T$.
Equation 6 illustrates how the scalogram operator $\mathbf{U_{1}}$ converts a
complex tone (two frequencies $f_{1}$ and $f_{2}$) into a simple tone (one
frequency $|f_{2}-f_{1}|$). For this simple tone to carry a nonnegligible
amplitude in $\mathbf{U_{2}}$, three conditions must be satisfied. First, the
rectangular term $a_{1}a_{2}$ must be nonnegligible in comparison to the
square terms $a_{1}^{2}$ and $a_{2}^{2}$. Secondly, there must exist a wavelet
$\bm{\psi}_{\lambda_{1}}$ whose spectrum encompasses both frequencies $f_{1}$
and $f_{2}$. Said otherwise, $\lambda_{1}$ must satisfy the inequalities
$|\frac{f_{n}}{\lambda_{1}}-1|\ll\frac{1}{Q}$, both for $f_{n}=f_{1}$ and for
$f_{n}=f_{2}$. Thirdly, the frequency difference $|f_{2}-f_{1}|$ must belong
to the passband of some second-order wavelet $\bm{\psi_{\lambda_{2}}}$. Yet,
in practice, to guarantee the temporal localization of scattering coefficients
and restrict the filterbank to a finite number of octaves, the scaling factor
of every $\bm{\psi_{\lambda_{m}}}$ is upper-bounded by the temporal constant
$T$. Therefore, the period $\frac{2\pi}{|f_{2}-f_{1}|}$ of the difference tone
should be under the pseudo-period of the wavelet with support $T$; i.e., a
pseudo-period of $T/Q$. Hence the third condition: $|f_{2}-f_{1}|\gg\frac{2\pi Q}{T}$.
One simple way of quantifying the amount of mutual interference between
signals $\bm{y_{1}}$ and $\bm{y_{2}}$ is to renormalize second-order
coefficients by their first-order “parent” coefficients:
$\mathbf{\widetilde{S}_{2}}\bm{y}(t,\lambda_{1},\lambda_{2})=\dfrac{\mathbf{S_{2}}\bm{y}(t,\lambda_{1},\lambda_{2})}{\mathbf{S_{1}}\bm{y}(t,\lambda_{1})}$
(7)
This operation, initially proposed by [2], is conceptually analogous to
classical methods in adaptive gain control, notably per-channel energy
normalization (PCEN) [14].
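For reference, here is a hedged sketch of this renormalization on top of the Kymatio package (meta-data conventions may vary across Kymatio versions; the parent-matching through the first index of each path key, as well as the stabilizing epsilon, are our own illustration):

```python
import numpy as np
from kymatio.numpy import Scattering1D

T, J, Q = 2**13, 8, 4
scattering = Scattering1D(J=J, shape=T, Q=Q)
meta = scattering.meta()  # order and wavelet indices of every path

t = np.arange(T)
x = np.cos(0.20 * t) + np.cos(0.22 * t)  # two interfering components
Sx = scattering(x)                       # shape: (n_paths, time)

order1 = np.where(meta['order'] == 1)[0]
order2 = np.where(meta['order'] == 2)[0]

# Eq. (7): divide each second-order path (lambda_1, lambda_2) by its
# first-order parent (lambda_1,), matched through the path key.
S2_tilde = np.empty((len(order2), Sx.shape[-1]))
for i, p in enumerate(order2):
    parent = next(q for q in order1
                  if meta['key'][q][0] == meta['key'][p][0])
    S2_tilde[i] = Sx[p] / (Sx[parent] + 1e-12)  # epsilon avoids division by 0
```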
In accordance with the “one or two frequencies” methodology, Figure 1
illustrates the value of this ratio of energies in the subband
$\lambda_{1}=f_{1}$, for different values of relative amplitude
$\frac{a_{2}}{a_{1}}$ and relative frequency difference
$\frac{|f_{2}-f_{1}|}{f_{1}}$. We fixed $f_{2}<f_{1}$ without loss of
generality. As expected, we observe that, for $a_{2}\approx a_{1}$ and a
relative frequency difference between $\frac{Q}{f_{1}T}$ and $\frac{1}{Q}$,
second-layer wavelets $\bm{\psi_{\lambda_{2}}}$ resonate with the difference
tone as a result of the interference between signals $\bm{y_{1}}$ and
$\bm{y_{2}}$.
## IV Application to manifold learning
To demonstrate the ability of the scattering transform to characterize
auditory masking, we build a dataset of complex tones according to the
following additive synthesis model:
$\bm{y}_{\alpha,r}(t)=\sum_{n=1}^{N}\dfrac{1+(-1)^{n}r}{n^{\alpha}}\cos(nf_{1}t)\bm{\phi}_{T}(t),$
(8)
where $\bm{\phi}_{T}$ is a Hann window of duration $T$. This additive
synthesis model depends upon two parameters: the Fourier decay $\alpha$ and
the relative odd-to-even amplitude difference $r$. Figure 2 displays the CQT
log-magnitude spectrum of $\bm{y}_{\alpha,r}$ for different values of $\alpha$
and $r$. In practice, we set $T$ to $1024$ samples, $N$ to $32$ harmonics, and
$f_{1}$ between $12$ and $24$ cycles.
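A minimal numpy implementation of the generator in Eq. (8) (our own sketch; $f_{1}$ is expressed in cycles over the $T$-sample support):

```python
import numpy as np

def synth(alpha, r, f1, T=1024, N=32):
    """Additive synthesis of Eq. (8) under a Hann window phi_T."""
    t = np.arange(T)
    y = np.zeros(T)
    for n in range(1, N + 1):
        amplitude = (1 + (-1)**n * r) / n**alpha
        y += amplitude * np.cos(2 * np.pi * n * f1 * t / T)
    return y * np.hanning(T)

y = synth(alpha=1.0, r=0.5, f1=16)  # one tone of the 2500-signal grid
```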
Figure 2: Log-magnitudes of synthetic musical tones as a function of wavelet
log-frequency ($\log\lambda$). Ticks of the vertical (resp. horizontal) axis
denote relative amplitude (resp. frequency) intervals of 10 dB (resp. one
octave). Parameters $\alpha$ and $r$ denote the Fourier decay exponent and the
relative odd-to-even amplitude difference $r$ respectively. See Equation 8 for
details.
Our synthetic dataset comprises $2500$ audio signals in total, corresponding
to $50$ values of $\alpha$ between $0$ and $2$ and $50$ values of $r$ between
$0$ and $1$, while $f_{1}$ is an integer chosen uniformly at random between
$12$ and $24$. We extract the scattering transform of each signal
$\bm{y}_{\alpha,r}$ up to order $M=2$, with $Q=1$ and $J=8$, by means of the
Kymatio Python package [4]. Concatenating $QJ$ first-order coefficients with
$\frac{1}{2}Q^{2}J(J-1)$ second-order coefficients yields a representation in
dimension $37$.
For visualization purposes, we bring the $37$-dimensional space of scattering
coefficients down to dimension three by means of the Isomap algorithm for
unsupervised manifold learning [22]. The appeal behind Isomap is that pairwise
Euclidean distances in the 3-D point cloud approximate the corresponding
geodesic distances over the $K$-nearest neighbor graph associated to the
dataset. Throughout this paper, we set the number of neighbors to $K=100$ and
measure neighboring relationships by comparing high-dimensional $\ell^{2}$
distances. Crucially, in the case of the scattering transform, these
$\ell^{2}$ distances are provably stable (i.e., Lipschitz-continuous) to the
action of diffeomorphisms [16, Theorem 2.12].
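The corresponding visualization step is a one-liner in scikit-learn (a sketch under our parameter assumptions; the feature matrix and its file name are hypothetical placeholders):

```python
import numpy as np
from sklearn.manifold import Isomap

# Hypothetical (2500, 37) matrix of scattering coefficients,
# one row per synthetic tone of Eq. (8).
features = np.load("scattering_features.npy")

embedding = Isomap(n_neighbors=100, n_components=3)
points3d = embedding.fit_transform(features)  # (2500, 3) point cloud
```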
Figure 3: Isomap embedding of synthetic musical notes, as described by their
scattering transform coefficients (top); their Open-L3 coefficients (center);
and their mel-frequency cepstral coefficients (MFCC, bottom). The color of a
dot, ranging from red to blue via white, denotes the fundamental frequency
$f_{1}$ (left), the Fourier decay exponent $\alpha$ (center), and the relative
odd-to-even amplitude difference $r$ (right) respectively. Note that all
methods are unsupervised: triplets ($f_{1}$, $\alpha$, $r$) are not directly
supplied to the models, but only serve for color grading post hoc. See Section
IV for details.
Figure 3 (top) illustrates our findings. We observe that, after scattering
transform and Isomap dimensionality reduction, the dataset appears as a 3-D
Cartesian mesh whose principal components align with $f_{1}$, $\alpha$, and
$r$ respectively. This result demonstrates that the scattering transform is
capable of disentangling and linearizing multiple factors of variability in
the spectral envelope of periodic signals, even if those factors are not
directly amenable to diffeomorphisms.
As a point of comparison, Figure 3 presents the outcome of Isomap on
alternative feature representations: Open-L3 embedding (center) and mel-
frequency cepstral coefficients (MFCC, bottom). The former results from
training a deep convolutional network (convnet) on a self-supervised task of
audiovisual correspondence, and yields $6177$ coefficients [7]. The latter
results from a log-mel-spectrogram representation, followed by a discrete
cosine transform (DCT) over the mel-frequency axis, and yields $12$
coefficients. We compute MFCC with librosa v0.7 [18] default parameters.
We observe that Open-L3 embeddings correctly disentangle boundary conditions
($r$) from fundamental frequency ($f_{1}$), but fail to disentangle Fourier
decay ($\alpha$) from $f_{1}$. Instead, correlations between $\alpha$ and $f_{1}$
are positive for low-pitched sounds ($12$ to $16$ cycles) and negative for
high-pitched sounds ($16$ to $24$ cycles). Although this failure deserves a
more formal inquiry, we hypothesize that it stems from the small
convolutional receptive field of the $L^{3}$-Net: $24$ mel subbands, i.e.,
roughly half an octave around $1$ kHz.
Figure 4: Energy decay as a function of wavelet scattering depth $m$, for
mixtures of $N$ components with equal amplitudes, equal phases, and evenly
spaced frequencies. The color of each line plot denotes the integer part of
$\log_{2}N$. In this experiment, wavelets have a sine cardinal profile
(Shannon wavelets) and a quality factor equal to $Q=1$. Each filterbank covers
seven octaves.
Moreover, in the case of MFCC, we find that the variability in fundamental
frequency ($f_{1}$) dominates the variability in spectral shape parameters
($\alpha$ and $r$), thus yielding a rectilinear embedding (bottom). This
observation is in line with a previous publication [11], which showed
statistically that MFCCs are overly sensitive to frequency transposition in
complex tones.
From this qualitative benchmark, it appears that the scattering transform is a
more interpretable representation of periodic signals than Open-L3, while
incurring a smaller computational cost. However, in the presence of aperiodic
signals such as environmental sounds, Open-L3 outperforms the scattering
transform in terms of classification accuracy with linear support vector
machines [5]. To remain competitive, the scattering transform must not only
capture heterodyne interference, but also joint spectrotemporal modulations
[1]. In this context, future work will strive to combine insights from
multiresolution analysis and deep self-supervised learning.
## V Beyond pairwise interference:
full-depth scattering networks
In speech and music processing, pitched sounds are rarely approximable as a
mixture of merely two components. More often than not, they contain ten
components or more, and span across multiple octaves in the Fourier domain.
Thus, computing the masking coefficient at the second layer only provides a
crude description of the timbral content within each critical band. Indeed,
$\mathbf{S_{2}}$ encodes pairwise interference between sinusoidal components
but fails to characterize more intricate structures in the spectral envelope
of $\bm{y}$.
To address this issue, we propose to study the scattering transform beyond
order two, thus encompassing heterodyne structures of greater multiplicity.
For the sake of mathematical tractability, we consider the following mother
wavelet, hereafter called “complex Shannon wavelet” after [15, Section 7.2.2]:
$\bm{\psi}:t\longmapsto\dfrac{\exp(2\mathrm{i}t)-\exp(\mathrm{i}t)}{2\pi\mathrm{i}t}$
(9)
The definition of a scattering transform with complex Shannon wavelets
requires to resort to the theory of tempered distributions. We refer to [21]
for further mathematical details.
The following theorem, proven in the Appendix, describes the response of a
deep scattering network in the important particular case of a periodic signal
with finite bandwidth.
###### Theorem V.1.
Let $\bm{y}\in\mathcal{C}^{\infty}(\mathbb{R})$ be a periodic signal of
fundamental frequency $f_{1}$. Let $\bm{\psi}$ be the complex Shannon wavelet as
in Equation 9 and $\mathbf{U_{1}}$ its associated scalogram operator as in
Equation 1. If $\bm{y}$ has a finite bandwidth of $M$ octaves, then its
scattering coefficients $\mathbf{U_{m}}\bm{y}$ are zero for any $m>M$.
This result is in agreement with the theorem of exponential decay of
scattering coefficients [23]. Note, however, that [23] expresses an upper
bound on the energy at fixed depth for integrable signals, while we express an
upper bound on the depth at fixed bandwidth for periodic signals.
We apply the theorem above to the case of a signal containing $N$ components
of equal amplitudes, equal phases, and evenly spaced frequencies:
$\bm{y}:t\mapsto\sum_{n=1}^{N}a_{1}\cos(nf_{1}t+\varphi_{1})$. Figure 4
illustrates the decay of scattered energy as a function of depth. The
conceptual analogy between depth and scale was originally proposed by [17] in
a theoretical effort to clarify the role of hierarchical symmetries in
convnets.
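The experiment of Figure 4 can be reproduced in spirit with ideal octave-band masks in the Fourier domain (a simplified sketch of Shannon-wavelet scattering under our own discretization; the constraint $\lambda_{m}<\lambda_{m-1}$ of Eq. (3) becomes a strictly increasing octave index):

```python
import numpy as np

def scatter_energies(y, n_octaves=7, max_depth=5):
    """Energy of U_m under a dyadic (Q = 1) Shannon scalogram, per Eq. (3)."""
    n, freqs = len(y), np.fft.fftfreq(len(y))
    nodes = [(-1, y.astype(float))]  # pairs (last octave index, signal)
    energies = []
    for _ in range(max_depth):
        nxt = []
        for j_prev, u in nodes:
            spec = np.fft.fft(u)
            for j in range(j_prev + 1, n_octaves):
                lo, hi = 0.5 * 2.0**(-j - 1), 0.5 * 2.0**(-j)
                mask = (freqs > lo) & (freqs <= hi)  # analytic octave band
                nxt.append((j, np.abs(np.fft.ifft(spec * mask))**2))
        nodes = nxt
        energies.append(sum(np.sum(u**2) for _, u in nodes))
    return energies

n = 4096
t = np.arange(n)
f0 = 140 / n                      # fundamental, in cycles per sample
y = sum(np.cos(2 * np.pi * k * f0 * t) for k in range(1, 9))  # M = 3 octaves
print(scatter_energies(y))        # the energies vanish beyond depth m = 3
```

In this toy run, the eight harmonics span three octaves, so the printed energies at depths four and five are numerically zero, in line with Theorem V.1.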
Although our findings support this analogy, we note that computing a
scattering transform with $M=\log_{2}T$ layers is often impractical. However,
if the Fourier series in $\bm{y}$ satisfies a self-similarity assumption, it
is possible to match the representational capacity of a full-depth scattering
network while keeping the depth to $M=2$. Indeed, spiral scattering performs
wavelet convolutions over time, over log-frequency, and across octaves,
thereby capturing the spectrotemporal periodicity of Shepard tones and
Shepard-Risset glissandos [12]. Further research is needed to integrate
broadband demodulation into deep convolutional architectures for machine
listening.
## VI Conclusion
In this article, we have studied the role of every layer in a scattering
network by means of a well-established methodology, colloquially known as “one
or two components” [19]. We have come up with a numerical criterion of
psychoacoustic masking; demonstrated that the scattering transform
disentangles multiple factors of variability in the spectral envelope; and
proven that the effective scattered depth of Fourier series is bounded by the
logarithm of its bandwidth, thus emphasizing the importance of capturing
geometric regularity across temporal scales.
## References
* [1] Joakim Andén, Vincent Lostanlen and Stéphane Mallat “Joint time–frequency scattering” In _IEEE Transactions on Signal Processing_ 67.14 IEEE, 2019, pp. 3704–3718
* [2] Joakim Andén and Stéphane Mallat “Deep scattering spectrum” In _IEEE Trans. Signal Process._ 62.16 IEEE, 2014, pp. 4114–4128
* [3] Joakim Andén and Stéphane Mallat “Scattering representation of modulated sounds” In _Proc. DAFx_ , 2012
* [4] Mathieu Andreux et al. “Kymatio: Scattering transforms in Python” In _JMLR_ 21.60, 2020, pp. 1–6
* [5] Relja Arandjelovic and Andrew Zisserman “Look, listen and learn” In _Proceedings of the IEEE International Conference on Computer Vision_ , 2017, pp. 609–617
* [6] Randall Balestriero and Hervé Glotin “Linear time complexity deep Fourier scattering network and extension to nonlinear invariants” In _arXiv_ 1707.05841, 2017
* [7] Jason Cramer, Ho-Hsiang Wu, Justin Salamon and Juan Pablo Bello “Look, listen, and learn more: Design choices for deep audio embeddings” In _Proc. ICASSP_ , 2019, pp. 3852–3856 IEEE
* [8] Daniel Haider and Peter Balazs “Extraction of Rhythmical Features with the Gabor Scattering Transform” In _Proc. CMMR_ , 2019
* [9] Jinane Harmouche, Dominique Fourer, Pierre Auger and Patrick Flandrin “Une ou deux composantes: la réponse de l’analyse spectrale singulière” In _Actes du colloque GRETSI_ , 2015
* [10] Vincent Lostanlen, Joakim Andén and Mathieu Lagrange “Extended playing techniques: The next milestone in musical instrument recognition” In _Proc. DLfM_ , 2018
* [11] Vincent Lostanlen and Carmine-Emanuele Cella “Deep convolutional networks on the pitch spiral for musical instrument recognition” In _Proc. ISMIR_ , 2016
* [12] Vincent Lostanlen and Stéphane Mallat “Wavelet scattering on the pitch spiral” In _Proc. DAFX_ , 2016
* [13] Vincent Lostanlen, Grégoire Lafay, Joakim Andén and Mathieu Lagrange “Relevance-based quantization of scattering features for unsupervised mining of environmental audio” In _EURASIP J. Audio Speech Mus. Process._ 2018.1 Springer, 2018, pp. 15
* [14] V. Lostanlen et al. “Per-Channel Energy Normalization: Why and How” In _IEEE Signal Proc. Let._ 26.1, 2019, pp. 39–43
* [15] Stéphane Mallat “A wavelet tour of signal processing: The sparse way” Associated Press, 2008
* [16] Stéphane Mallat “Group invariant scattering” In _Comm. Pure Appl. Math._ 65.10 Wiley Online Library, 2012, pp. 1331–1398
* [17] Stéphane Mallat “Understanding deep convolutional networks” In _Phil. Trans. R. Soc._ 374.2065 The Royal Society Publishing, 2016, pp. 20150203
* [18] Brian McFee et al. “librosa: 0.7.2”, 2020 DOI: 10.5281/zenodo.3606573
* [19] Gabriel Rilling and Patrick Flandrin “One or two frequencies? The empirical mode decomposition answers” In _IEEE Trans. Signal Process._ 56.1 IEEE, 2008, pp. 85–95
* [20] Justin Salamon and Juan Pablo Bello “Feature learning with deep scattering for urban sound analysis” In _Proc. EUSIPCO_ , 2015, pp. 724–728 IEEE
* [21] Robert S Strichartz “A guide to distribution theory and Fourier transforms” World Scientific Publishing Company, 2003
* [22] Joshua B Tenenbaum, Vin De Silva and John C Langford “A global geometric framework for nonlinear dimensionality reduction” In _science_ 290.5500 American Association for the Advancement of Science, 2000, pp. 2319–2323
* [23] Irène Waldspurger “Exponential decay of scattering coefficients” In _Proc. SampTA_ , 2017, pp. 143–146 IEEE
* [24] Changhong Wang, Emmanouil Benetos, Vincent Lostanlen and Elaine Chew “Adaptive time–frequency scattering for periodic modulation recognition in music signals” In _Proc. ISMIR_ , 2019
* [25] Hau-Tieng Wu, Patrick Flandrin and Ingrid Daubechies “One or two frequencies? The synchrosqueezing answers” In _Adv. Adapt. Data Anal._ 3.01–02 World Scientific, 2011, pp. 29–39
## Appendix: proof of Theorem V.1
###### Proof.
We reason by induction over the depth variable $M$. The base case ($M=1$)
leads to $\mathbf{U_{1}}\bm{y}(t,\lambda)=1$ if $\lambda<f_{1}\leq 2\lambda$
and zero otherwise. Because $\bm{\psi}$ has one vanishing moment, it follows
that $\mathbf{U_{2}}\bm{y}$ is zero, and likewise at deeper layers. To prove the induction step at depth $M$, we decompose $\bm{y}$ into a low-pass
approximation $(\bm{y}\ast\bm{g_{M}})$ spanning the subband $[0;2^{M}f_{1}[$
and a high-pass detail $(\bm{y}\ast\bm{h_{M}})$ spanning the subband
$[2^{M}f_{1};2^{(M+1)}f_{1}[$. Denoting by $c_{n}$ the complex-valued Fourier
coefficients of $\bm{y}$, we have at every time $t\in\mathbb{R}$:
$\bm{y}(t)=(\bm{y}\ast\bm{g_{M}})(t)+(\bm{y}\ast\bm{h_{M}})(t)=\sum_{|n|\leq 2^{M}}c_{n}\exp(\mathrm{i}nf_{1}t)+\sum_{|n|>2^{M}}c_{n}\exp(\mathrm{i}nf_{1}t)$
(10)
On one hand, the coarse term $(\bm{y}\ast\bm{g_{M}})$ has a bandwidth of $M$
octaves. Therefore, by the induction hypothesis, we have
$\mathbf{U_{m}}(\bm{y}\ast\bm{g_{M}})=0$ for $m>M$, and _a fortiori_ for
$m>(M+1)$. On the other hand, we consider the complex Shannon scalogram of
$(\bm{y}\ast\bm{h_{M}})$ in some subband $\lambda>0$:
$\big|\bm{y}\ast\bm{h_{M}}\ast\bm{\psi}_{\lambda}\big|^{2}(t)\leq\big|\bm{y}\ast\bm{h_{M}}\ast\bm{\psi}_{2^{M}}\big|^{2}(t)=\sum_{n=(1+2^{M})}^{2^{M+1}}\sum_{k=(1+2^{M})}^{2^{M+1}}c_{n}c_{k}^{\ast}\exp\big(\mathrm{i}(n-k)f_{1}t\big)$
(11)
In the double sum above, all integer differences of the form $(n-k)$ range
between $-(2^{M}-1)$ and $(2^{M}-1)$. Thus,
$\big{|}\bm{y}\ast\bm{h_{M}}\ast\bm{\psi}_{2^{M}}\big{|}^{2}$ is a periodic
signal of fundamental frequency $f_{1}$ spanning $M$ octaves. Furthermore,
because $\bm{h_{M}}=\bm{\psi}_{2^{M}}$,
$\big{|}\bm{y}\ast\bm{h_{M}}\ast\bm{\psi}_{\lambda}\big{|}^{2}$ has a smaller
bandwidth than $\big{|}\bm{y}\ast\bm{h_{M}}\ast\bm{\psi}_{2^{M}}\big{|}^{2}$;
i.e., $M$ octaves or less. By the induction hypothesis, we have:
$\forall\lambda,\;\mathbf{U_{M+1}}\big(|\bm{y}\ast\bm{h_{M}}\ast\bm{\psi}_{\lambda}|^{2}\big)(\lambda_{1},\ldots,\lambda_{M+1})=0.$
(12)
In the equation above, we recognize the scattering path
$p=(\lambda,\lambda_{1},\ldots,\lambda_{M+1})$ of $\mathbf{U_{M+2}}$. Finally,
because the scattering transform is a nonexpansive operator [16, Prop. 2.5],
we have the inequality:
$\big{\|}\mathbf{U_{M+2}}(\bm{y})\big{\|}\leq\big{\|}\mathbf{U_{M+2}}(\bm{y}\ast\bm{g_{M}})\big{\|}+\big{\|}\mathbf{U_{M+2}}(\bm{y}\ast\bm{h_{M}})\big{\|}=0,$
(13)
which implies $\mathbf{U_{M+2}}\bm{y}=0$, and likewise at deeper layers. We
conclude by induction that the theorem holds for any $M$. ∎
# Scaling Up Multiagent Reinforcement Learning for Robotic Systems: Learn an Adaptive Sparse Communication Graph
Chuangchuang Sun, Macheng Shen, and Jonathan P. How ∗Laboratory for
Information and Decision Systems, Massachusetts Institute of Technology, 77
Massachusetts Ave, Cambridge, MA 02139.
Emails: {ccsun1, macshen<EMAIL_ADDRESS>
###### Abstract
The complexity of multiagent reinforcement learning (MARL) in multiagent systems increases exponentially with respect to the number of agents. This scalability issue prevents MARL from being applied in large-scale multiagent
systems. However, one critical feature in MARL that is often neglected is that
the interactions between agents are quite sparse. Without exploiting this
sparsity structure, existing works aggregate information from all of the
agents and thus have a high sample complexity. To address this issue, we
propose an adaptive sparse attention mechanism by generalizing a sparsity-
inducing activation function. Then a sparse communication graph in MARL is
learned by graph neural networks based on this new attention mechanism.
Through this sparsity structure, the agents can communicate both effectively and efficiently by selectively attending only to the agents that matter most, so the scale of the MARL problem is reduced with little optimality compromised. Comparative results show that our algorithm learns an interpretable sparse structure and outperforms previous works by a significant margin on applications involving large-scale multiagent systems.
## I INTRODUCTION
Reinforcement Learning (RL) has achieved enormous successes in robotics [1]
and gaming [2] in both single and multiagent settings. For example, deep
reinforcement learning (DRL) achieved super-human performance in the two-
player game Go, which has a very high-dimensional state-action space [3, 4].
However, in multiagent scenarios, the sizes of the state space, joint action
space, and joint observation space grow exponentially with the number of
agents. As a result of this high dimensionality, existing multiagent
reinforcement learning (MARL) algorithms require significant computational
resources to learn an optimal policy, which impedes the application of MARL to
systems such as swarm robotics [5]. Thus, improving the scalability of MARL is
a necessary step towards building large-scale multiagent learning systems for
real-world applications.
In MARL, the complexity of finding an optimal joint policy grows with the number of agents as a result of the coupled interactions between agents [6]. However, in many multiagent scenarios, the interactions between
agents are quite sparse. For example, in a soccer game, an agent typically
only needs to pay attention to other nearby agents when dribbling because
agents far away are not able to intercept. The existence of such sparsity
structures of the state transition dynamics (or the state-action-reward
relationships) suggests that an agent may only need to attend to information
from a small subset of the agents for near-optimal decision-making. Note that
the other players that require attention might not be nearby, such as the
receiver of a long pass in soccer. In such cases, the agent only needs to
selectively attend to agents that “matter the most”. As a result, the agent
can spatially and temporally reduce the scale of the planning problem.
In large-scale MARL, sample complexity is a bottleneck of scalability [7]. To
reduce the sample complexity, another feature we can exploit is the
interchangeability of homogeneous agents: switching two agents’ states/actions
will not make any difference to the environment. This interchangeability
implies permutation-invariance of the multiagent state-action value function
(a.k.a. the centralized $Q$-function) as well as interchangeability of agent
policies. However, many MARL algorithms such as MADDPG [8], VDN [9], QMIX [10]
do not exploit this symmetry and thus have to learn this interchangeability
from experience, which increases the sample complexity unnecessarily.
Graph neural network (GNN) is a specific neural network architecture in which
permutation-invariance features can be embedded via graph pooling operations,
so this approach has been applied in MARL [11, 12, 13] to exploit the
interchangeability. Because MARL is a non-structural scenario in which the links/connections between nodes/agents are ambiguous, a graph has to be created in advance to apply GNNs to MARL. Refs. [11, 12, 13] apply ad-hoc methods, such as $k$-nearest neighbors, hard thresholding, and random dropout, to obtain a graph structure. However, these methods require
handcrafted metrics to measure the closeness between agents, which are
scenario-specific and thus not general/principled. Inappropriately selecting
neighbors based on a poorly designed closeness metric could lead to the
failure of learning a useful policy.
While attention mechanisms [14] could be applied to learn the strength of the
connections between a pair of agents (i.e., closeness metric) in a general and
principled way, such strengths are often dense, leading to a nearly-complete
computation graph that does not benefit scalability. The attention is dense because the softmax activation function, applied to the raw attention logits, generates a probability distribution with full support.
One solution to enforce a sparse graph is top $k$ thresholding [15], which
keeps the $k$-largest attention scores and truncates the rest to zero.
However, this truncation is a non-differentiable operation that may cause
problems for gradient-based optimization algorithms, such as those used in
end-to-end training. Therefore, a sparse attention mechanism that preserves
the gradient flow necessary for gradient-based training is required.
To address the non-differentiability issue in sparse attention mechanisms, we
generalize sparsemax [16] and obtain a sparsity mechanism whose pattern is
adaptive to the environment states. This sparsity mechanism can reduce the
complexity of both the forward pass and the back-propagation of the policy and
value networks, as well as preserving the end-to-end trainability in contrast
to hard thresholding. With the introduction of GNN and generalized sparsemax,
which can preserve permutation invariance and promote sparsity respectively,
the scalability of MARL is improved.
The discussion so far was restricted to homogeneous agents and thus
permutation-invariance is desirable. However, in heterogeneous multiagent
systems or competitive environments, permutation invariance and
interchangeability are no longer valid. For example, in soccer, switching
positions of two players from different sides can make a difference to the
game. To address this heterogeneity, GNN-based MARL must distinguish the
different semantic meanings of the connections between different agent pairs
(e.g., a friend/friend relationship versus a friend/foe relationship). We address this requirement with a multi-relational graph convolutional network [17], passing messages through different graph convolution layers on edge connections with different semantic meanings.
To summarize, we propose to learn an adaptive sparse communication graph
within the GNN-based framework to improve the scalability of MARL, which
applies to both homogeneous and heterogeneous multiagent systems in mixed
cooperative-competitive scenarios.
### I-A Related Work
One of the existing works exploiting the structure in MARL is the mean-field
reinforcement learning (MFRL) [18] algorithm, which takes as input the
observation and the mean action of neighboring agents to make the decision,
and neglects the actions of all the other agents. This simplification leads to
good scalability. However, the mean action cannot distinguish the difference
among neighboring agents and the locality approximations fail to capture
information from a far but important agent for optimal decision-making, which
leads to sub-optimal policies. Multi-Actor-Attention-Critic (MAAC) is proposed
in [19] to aggregate information using attention mechanism from all the other
agents. Similarly, [11, 13, 20] also employ the attention mechanism to learn a
representation for the action-value function. However, the communication
graphs used there are either dense or ad-hoc ($k$ nearest neighbors), which
makes the learning difficult.
Sparse attention mechanisms were first studied by the natural language
processing community in [16], where sparsemax was proposed as a sparse
alternative to the activation function softmax. The basic idea is to project
the attention logits onto the probability simplex, which can generate zero
entries once the projection hits the boundary of the simplex. While
generalized sparse attention mechanisms were further studied in [21, 22, 23],
they are not adaptive to the state in the context of MARL, in terms of the
sparsity pattern.
Given this state of the art, the contributions of this paper are twofold.
First, we propose a new adaptive sparse attention mechanism in MARL to learn a
sparse communication graph, which improves the scalability of MARL by lowering
the sample complexity. Second, we extend our GNN-based MARL to heterogeneous
systems in mixed cooperative-competitive settings using multi-relational GNN.
The evaluations show that our algorithm significantly outperforms previous
approaches on applications involving a large number of agents. This technique
can be applied to empower large-scale autonomous systems such as swarm
robotics.
## II PRELIMINARIES
### II-A Multiagent Reinforcement Learning
As a multiagent extension of Markov decision processes (MDPs), a Markov game is defined as a tuple $\langle N,S,\\{O_{i}\\}_{i\in N},\\{A_{i}\\}_{i\in N},\\{r_{i}\\}_{i\in N},\gamma\rangle$, where $N=[1,\ldots,n]$ is a set of agent indices, $S$ is the set of states, and $\\{O_{i}\\}_{i\in N}$ and $\\{A_{i}\\}_{i\in N}$ are the joint observation and joint action sets, respectively. The $i$th agent chooses actions via a stochastic policy $\pi_{\theta_{i}}:O_{i}\times A_{i}\to[0,1]$, which leads to the next state according to the state transition function $\mathcal{T}:S\times A_{1}\times\ldots\times A_{n}\to S$. The $i$th agent also obtains a reward as a function of the state and the agents' actions, $r_{i}:S\times\\{A_{i}\\}_{i\in N}\to\mathbb{R}$, and receives a private observation correlated with the state, $o_{i}:S\times\\{A_{i}\\}_{i\in N}\to O_{i}$. The initial states are determined by a distribution $\rho:S\to[0,1]$. The $i$th agent aims to maximize its own total expected return $R_{i}=\sum_{t=1}^{T}\gamma^{t}r_{i}^{t}$, with discount factor $\gamma$ and time horizon $T$.
### II-B Multi-head attention
The scaled dot-product attention mechanism was first proposed in [14] for natural language processing. An attention function maps a query and a set of key-value pairs to an output, which is a weighted sum of the values. The weight assigned to each value is calculated via a compatibility function of the query and the corresponding key. In the context of MARL, let $h_{i},i\in N$ be the representations of the agents. The key, query, and value of agent $i$ are defined as $K_{i}=W_{K}h_{i}\in\mathbb{R}^{d_{K}}$, $Q_{i}=W_{Q}h_{i}$, and $V_{i}=W_{V}h_{i}$, respectively, where $W_{K},W_{Q}$, and $W_{V}$ are parameter matrices. The output for agent $i$ is then
$\text{Att}_{i}(h)=\sum_{j}w_{ij}V_{j},$ (1)
where $w_{i\bullet}\in\mathbb{R}^{n}$, the $i$-th row of the weight matrix
$w$, is defined as
$w_{i\bullet}=\sigma_{a}\Big{(}\frac{(K_{i})^{T}Q}{\sqrt{d_{K}}}\Big{)}$ (2)
with $\sigma_{a}$ being the softmax function in previous works on GNN-based MARL. The weight $w_{i\bullet}$ is dense, as $\operatorname*{softmax}_{i}(z)\neq 0$ for any vector $z$ and any $i$.
To increase expressiveness, multi-head attention is applied here by simply concatenating the outputs of several single-head attention functions [14].
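As a reference point for the sparse variants introduced later, the following NumPy sketch implements the attention of Equations (1)-(2), following the usual convention of matching each agent's query against all keys; function names are illustrative and $\sigma_{a}$ is passed in as a parameter.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attention(h, W_K, W_Q, W_V, sigma_a=softmax):
    # h: (n, d) stacked agent representations. Implements Eqs. (1)-(2):
    # row i of `w` holds agent i's normalized attention over all agents.
    K, Q, V = h @ W_K.T, h @ W_Q.T, h @ W_V.T
    d_K = K.shape[-1]
    w = sigma_a(Q @ K.T / np.sqrt(d_K), axis=-1)
    return w @ V                               # (n, d_V) per-agent outputs
```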
### II-C Relational GNN
In heterogeneous multiagent systems, different agent pairs can have different relations, such as friend or foe in a two-party zero-sum game. As a result, information aggregation from agents with different relations should use different parameters. Work in [17] proposed the relational graph convolutional network to model multi-relational data. The forward-pass update of agent $i$ in a multi-relational graph is as follows:
$h_{i}^{(l+1)}=\sigma\Big{(}\sum_{r\in\mathcal{R}}\sum_{j\in\mathcal{N}_{i}^{r}}\frac{1}{c_{i,r}}W_{r}^{(l)}h_{j}^{(l)}+W_{0}^{(l)}h_{i}^{(l)}\Big{)},$
(3)
where $\mathcal{N}_{i}^{r}$ denotes the set of neighbor indices of agent $i$
under relation $r\in\mathcal{R}$ and $c_{i,r}$ is a normalization constant. To
distinguish the heterogeneity in MARL, similar to this convolution-based
multi-relational GNN, we apply different attention heads on agent pairs with
different relations.
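A minimal sketch of one forward pass of Equation (3) is given below; we assume $c_{i,r}=|\mathcal{N}_{i}^{r}|$, a common normalization choice, and an arbitrary nonlinearity $\sigma$, since neither is fixed by the equation.

```python
import numpy as np

def relational_gnn_layer(h, neighbors, W, W0, sigma=np.tanh):
    # One forward pass of Eq. (3). `neighbors[r][i]` lists agent i's
    # neighbors under relation r; c_{i,r} is taken as |N_i^r| here.
    n = h.shape[0]
    out = np.zeros((n, W0.shape[0]))
    for i in range(n):
        acc = W0 @ h[i]                        # self-connection term
        for r, nbr_lists in neighbors.items():
            if nbr_lists[i]:
                msgs = sum(W[r] @ h[j] for j in nbr_lists[i])
                acc += msgs / len(nbr_lists[i])
        out[i] = sigma(acc)
    return out
```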
## III APPROACH
In this section, we present our approach to exploiting the sparsity in MARL by generalizing the dense softmax attention to an adaptive sparse attention. Moreover, our approach of applying a multi-relational attention mechanism to heterogeneous games involving competitive agents is also introduced.
### III-A Learning a communication graph via adaptive sparse attention
The scaled dot-product attention is applied to learn the communication graph
in MARL. If an attention weight between a pair of agents is zero, then there
is no communication/message passing between them. Thus, the normalization
function $\sigma_{a}(\bullet)$ in (2) is critical to learn a communication
graph. As usually used in the attention mechanism [14] or classifications,
$\sigma_{a}(\bullet)$ is usually set to be softmax, which cannot induce
sparsity. We propose an adaptive sparse activation function as an alternative
to softmax.
Let $x\in\mathbb{R}^{d}$ be the raw attention logits and $y$ be the normalized attention strengths in the ($d-1$)-dimensional probability simplex defined as $\Delta^{d}:=\\{y\in\mathbb{R}^{d}\,|\,y\geq 0,\textbf{1}^{T}y=1\\}$. We are
interested in the mapping from $x\in\mathbb{R}^{d}$ to $y\in\Delta^{d}$. In
other words, such a mapping can transform real weights to a probability
distribution, i.e., the normalized attention strength between a pair of
agents. The classical softmax, used in most attention mechanisms, is defined
component-wisely as
$y_{i}=\operatorname*{softmax}_{i}(x)=\frac{e^{x_{i}}}{\sum_{i=1}^{d}e^{x_{i}}}.$
(4)
A limitation of the softmax transformation is that the resulting probability
distribution always has full support, which makes the communication graph
dense, resulting in high complexity. In order to reduce the complexity, our
idea is to replace the softmax activation function with a generalized
activation function, which could adaptively be dense or sparse based on the
state. To investigate alternative activation functions to softmax, consider
the max operator defined as
$\max(x):=\max_{i\in[d]}(x_{i})=\sup_{y\in\Delta^{d}}y^{T}x,$ (5)
where $[d]=\\{1,\ldots,d\\}$. The second equality comes from the fact that the supremum of a linear form over a simplex is always achieved at a vertex, i.e., one of the standard basis vectors $\\{e_{i}\\}_{i\in[d]}$. As a result, the max
operator puts all the probability mass onto a single element, or in other
words, only one entry of $y$ is nonzero corresponding to the largest entry of
$x$. For example, with $x=[0,t]\in\mbox{$\mathbb{R}$}^{2}$, the probability
distribution w.r.t. the logit $t$, i.e.,
$(\operatorname*{arg\,sup}_{y\in\Delta^{d}}y^{T}x)_{2}$, is a step function,
as $(\operatorname*{arg\,sup}_{y\in\Delta^{d}}y^{T}x)_{2}$ equals 1 if $t>0$
and $0$ otherwise. This discontinuity at $t=0$ of the step function is not
amenable to gradient-based optimization algorithms for training deep neural
networks. One solution to the discontinuity issue encountered in (5) is to add a regularizer $\Omega(y)$ to the max operator as
$\Pi_{\Omega}(x)=\operatorname*{arg\,max}_{y\in\Delta^{d}}y^{T}x+\gamma\Omega(y)$
(6)
Different regularizers $\Omega(y)$ produce different mappings with distinct
properties (see summary in Table I). Note that with $\Omega(y)$ as the Shannon
entropy, $\Pi_{\Omega}(x)$ recovers softmax. With the states/observations
evolving, the ideal profile of $\Pi_{\Omega}(x)$ should be able to adapt the
sparsity extent (controlled via $\gamma$) and the pattern (controlled via the
selection of $\Omega(y)$) accordingly.
TABLE I: List of different regularizers and their corresponding mappings $y=\Pi_{\Omega}(x)$, where $x$ is the raw attention logits and $y$ is the probability distribution in $\Delta^{d}$. Entropy | $\Omega(y)$ | $\Pi_{\Omega}(x)$ | Ref.
---|---|---|---
Shannon | $\sum_{i}y_{i}\log(y_{i})$ | $\operatorname*{softmax}_{i}(x)=\frac{e^{x_{i}}}{\sum_{i=1}^{d}e^{x_{i}}}$ | [22]
$l_{2}$ norm | $-\frac{1}{2}\sum_{i}y_{i}^{2}$ | $\arg\min_{y\in\Delta^{d}}\|y-x\|^{2}$ | [16]
Tsallis | $\left\\{\begin{array}[]{ll}\frac{\sum_{i}(y_{i}-y_{i}^{\alpha})}{\alpha(\alpha-1)},&\alpha\neq 1\\\ \sum_{i}y_{i}\log(y_{i}),&\alpha=1\end{array}\right.$ | No closed-form | [24]
Generalized | $\displaystyle\frac{1}{q}\sum_{i}(y_{i}-\frac{\displaystyle e^{qy_{i}}-1}{\displaystyle e^{q}-1})$ | No closed-form | [25]
TABLE II: List of different $G(x)$ and their resulting mappings $\Pi_{\Omega}(x)$ $\gamma G_{i}(x)$ | $\frac{e^{x_{i}}}{\sum_{i}e^{x_{i}}}$ | $\frac{{x_{i}^{2}}}{\sum_{i}{x_{i}^{2}}}$ | $x_{i}$
---|---|---|---
$\Pi_{\Omega}(x)$ | softmax | softmax | sparsemax
Property | Translation invariance $\Pi_{\Omega}(x)=\Pi_{\Omega}(x+c\textbf{1})$ | Scaling invariance $\Pi_{\Omega}(x)=\Pi_{\Omega}(cx)$ | Translation invariance $\Pi_{\Omega}(x)=\Pi_{\Omega}(x+c\textbf{1})$
Example | $\Pi_{\Omega}([100,101])=\Pi_{\Omega}([0,1])$ | $\Pi_{\Omega}([1,2])=\Pi_{\Omega}([1,2]\times 10^{-3})$ | $\Pi_{\Omega}([100,101])=\Pi_{\Omega}([0,1])$
Note that the Tsallis entropy and the generalized entropy in Table I do not lead to closed-form mappings [22], which increases the computational burden since iterative numerical algorithms have to be employed. Sparsemax has a closed-form solution and can induce sparsity, but it is not adaptive and lacks flexibility, as it is unable to switch from one sparsity pattern to another when necessary. We aim to combine the advantages and avoid the disadvantages using the new formulation
$\Pi_{\Omega}(x)=\operatorname*{arg\,min}_{y\in\Delta^{d}}||y-\gamma
G(x)||^{2},$ (7)
with $G(x):\mathbb{R}^{d}\to\mathbb{R}^{d}$ and $\gamma$ being a learnable neural network and a scalar, respectively. By choosing different $G(x)$, $\Pi_{\Omega}(x)$ can exhibit different sparsity patterns, including softmax and sparsemax. With $G(x)$ fixed, the parameter $\gamma$ controls how sparse the output can be, similar to the temperature parameter in softmax. The summary in Table II shows that (7) leads to a general mapping and can combine properties such as translation and scaling invariance
adaptively. Work in [23] proposed sparse-hourglass, which can adjust the trade-off between translation and scaling invariance via tunable parameters. However, it is unclear under which circumstances one property is more desirable than the other, so there is little to no prior knowledge on how to tune such parameters. In contrast, our formulation in (7) can balance this trade-off by learning $G(x)$ and $\gamma$, while the work in [23] is based on a fixed form of $G(x)$ with tunable parameters.
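Because the objective in (7) is a Euclidean projection, it inherits the closed-form solution of sparsemax. The sketch below (our own illustration) computes (7) by projecting $\gamma G(x)$ onto the simplex; with $G=\mathrm{id}$ and $\gamma=1$ it reduces exactly to sparsemax.

```python
import numpy as np

def project_to_simplex(z):
    # Euclidean projection onto Delta^d: the closed-form computation
    # behind sparsemax (Martins & Astudillo [16]).
    z = np.asarray(z, dtype=float)
    u = np.sort(z)[::-1]                      # sort descending
    css = np.cumsum(u)
    k = np.arange(1, z.size + 1)
    rho = k[u - (css - 1.0) / k > 0][-1]      # support size
    tau = (css[rho - 1] - 1.0) / rho
    return np.maximum(z - tau, 0.0)

def generalized_sparse_attention(x, G, gamma):
    # Eq. (7): the minimizer of ||y - gamma*G(x)||^2 over the simplex
    # is exactly the simplex projection of gamma * G(x).
    return project_to_simplex(gamma * np.asarray(G(x)))
```

For instance, `generalized_sparse_attention(np.array([3.0, 1.0]), G=lambda x: x, gamma=1.0)` returns `[1.0, 0.0]`, a fully sparse distribution, while smaller logit gaps yield dense outputs.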
While we could let the neural network learn $G(x):\mathbb{R}^{d}\to\mathbb{R}^{d}$ without any restrictions, there is prior knowledge that we can exploit, e.g., monotonicity. It is desirable to keep $\Pi_{\Omega}(x)$ monotonic, i.e., $\forall x_{i}>x_{j},(\Pi_{\Omega}(x))_{i}>(\Pi_{\Omega}(x))_{j}$, as a larger attention logit should be mapped to a larger attention strength. As sparsemax is monotonic, this requires that $\forall x_{i}>x_{j},G_{i}(x)>G_{j}(x)$, or in other words, that the order of the entries of the input of $G(x)$ coincides with that of the output. To keep this property, $G(x)$ is designed component-wise as $G_{i}(x)=\psi(\phi_{1}(x_{i}),\sum_{j}\phi_{2}(x_{j}))$, where $\psi:\mathbb{R}^{2}\to\mathbb{R}$ and $\phi_{1},\phi_{2}:\mathbb{R}\to\mathbb{R}$ are neural networks with hidden layers. Note that $G_{i}(x)$ should be coupled with all of the entries of $x$ instead of being a univariate function depending only on $x_{i}$, as demonstrated in Table II. As the second argument of $\psi$ (i.e., $\sum_{j}\phi_{2}(x_{j})$) is the same for all $G_{i}(x),i\in[d]$, the order preservation of $G(x):\mathbb{R}^{d}\to\mathbb{R}^{d}$ is equivalent to the monotonicity of $\psi(\bullet)$ and $\phi_{1}(\bullet)$. To guarantee this monotonicity, we enforce all the weights of the networks $\psi$ and $\phi_{1}$ to be positive [26] by applying an absolute value function to the weights. This architecture can accelerate the learning process with extra prior knowledge, as it is monotonic by design.
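The following PyTorch sketch illustrates this monotonicity-preserving design; hidden sizes follow Section IV-B, while the initialization and the choice of ReLU are our own assumptions.

```python
import torch
import torch.nn as nn

class MonotonicLinear(nn.Module):
    # Linear layer with effective weights |W|: combined with non-decreasing
    # activations, the resulting network is monotonic in each input [26].
    def __init__(self, d_in, d_out):
        super().__init__()
        self.weight = nn.Parameter(0.1 * torch.randn(d_out, d_in))
        self.bias = nn.Parameter(torch.zeros(d_out))

    def forward(self, x):
        return x @ self.weight.abs().t() + self.bias

class MonotoneG(nn.Module):
    # G_i(x) = psi(phi1(x_i), sum_j phi2(x_j)); psi and phi1 use positive
    # weights so the order of G(x) matches the order of x, while phi2 is
    # unconstrained since its pooled output is shared by all i.
    def __init__(self, hidden=16, hidden_psi=64):
        super().__init__()
        self.phi1 = nn.Sequential(MonotonicLinear(1, hidden), nn.ReLU(),
                                  MonotonicLinear(hidden, 1))
        self.phi2 = nn.Sequential(nn.Linear(1, hidden), nn.ReLU(),
                                  nn.Linear(hidden, 1))
        self.psi = nn.Sequential(MonotonicLinear(2, hidden_psi), nn.ReLU(),
                                 MonotonicLinear(hidden_psi, 1))

    def forward(self, x):          # x: (d,) raw attention logits
        xi = x.unsqueeze(-1)       # (d, 1)
        pooled = self.phi2(xi).sum(dim=0, keepdim=True).expand(x.size(0), 1)
        return self.psi(torch.cat([self.phi1(xi), pooled], dim=-1)).squeeze(-1)
```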
### III-B Message passing in MARL via GNN
We now present how information is aggregated via a graph neural network to learn a representation for the per-agent value/policy networks. The scaled dot-product attention mechanism (Section II-B) with our generalized sparsemax as the activation function, denoted sparse-Att, is applied to learn a communication graph and to pass messages through the connections in the graph.
We start with homogeneous multiagent systems, where the relation between any agent pair is identical. A graph is defined as $\mathcal{G}:=(\mathcal{V},\mathcal{E})$, where each $v_{i}\in\mathcal{V}$ represents an agent and $|\mathcal{V}|$ denotes the cardinality of $\mathcal{V}$. Moreover, $e_{ij}\in\mathcal{E}$ is $1$ if agents $i$ and $j$ can communicate directly (or agent $j$ is observable to agent $i$), and $0$ otherwise. This is a restriction on the communication graph, and $\mathcal{E}$ is the set of all possible edges. Sparse-Att then aims to learn a subset of $\mathcal{E}$ via induced sparsity without compromising much optimality. For agent $i$, let $U_{i}=f_{a}(X_{i})$ and $E_{i}$ be its observation and entity encodings, respectively, where $X_{i},i\in\mathcal{V}$ is the local state and $f_{a}$ is a learnable agent encoder network. The initial observation embedding of agent $i$, denoted $h_{i}^{(1)}$, is then
$h_{i}^{(1)}=f_{mp}(U_{i}\|E_{i}),$ (8)
where $f_{mp}$ is another learnable network and the operator $\|$ denotes
concatenation. Then at hop $l$ ($l$-th round of message passing), agent $i$
aggregates information from its possible neighbors belonging to the set
$\mathcal{N}=\\{j\in\mathcal{V}|e_{ij}=1\\}$ as follows
$h_{i}^{(l+1)}=f_{mp}\Big{(}h_{i}^{(l)}\|\text{sparse-
Att}_{i}^{\mathcal{N}}(h^{(l)})\Big{)}.$ (9)
With $l\geq 2$, multi-hop message passing enables an agent to obtain information from beyond its immediate neighbors. In the message aggregation from all of the agents in $\mathcal{N}$, identical parameters are used in $\text{sparse-Att}_{i}^{\mathcal{N}}$, which enforces permutation-invariance. This property is desirable because homogeneous agents are interchangeable.
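A compact sketch of Equations (8)-(9) is given below; we assume a batched interface in which `sparse_att` accepts an adjacency mask restricting attention to the admissible edge set $\mathcal{E}$, and `f_mp` is any learnable network of matching input width.

```python
import torch

def initial_embedding(U, E, f_mp):
    # Eq. (8): h^(1)_i = f_mp(U_i || E_i), batched over all agents.
    return f_mp(torch.cat([U, E], dim=-1))

def message_passing_round(h, f_mp, sparse_att, mask):
    # Eq. (9): h^(l+1)_i = f_mp(h^(l)_i || sparse-Att_i^N(h^(l))).
    # `mask` encodes the admissible edge set E; we assume `sparse_att`
    # zeroes out attention towards non-edges before normalization.
    agg = sparse_att(h, mask=mask)            # (n, d) aggregated messages
    return f_mp(torch.cat([h, agg], dim=-1))
```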
However, interchangeability is no longer applicable to heterogeneous systems or mixed cooperative-competitive environments. For example, with
$\mathcal{V}_{1},\mathcal{V}_{2}\subseteq\mathcal{V}$ being a two-team
partition of $\mathcal{V}$, agents cooperate with other agents from the same
team but compete against agents from the other team. For agent
$i\in\mathcal{V}_{1}$, its teammate neighborhood and enemy neighborhood are
$\mathcal{N_{+}}=\\{j\in\mathcal{V}_{1}|e_{ij}=1\\}$ and
$\mathcal{N_{-}}=\\{j\in\mathcal{V}_{2}|e_{ij}=1\\}$, respectively. The edges
connecting teammates and enemies are called positive and negative edges. Then
based on multi-relational GNN, agent $i$ aggregates information at hop $l$ in
the following way
$h_{i}^{(l+1)}=f_{mp}\Big{(}h_{i}^{(l)}\|\text{sparse-
Att}_{i}^{\mathcal{N}_{+}}(h^{(l)})\|\text{sparse-
Att}_{i}^{\mathcal{N}_{-}}(h^{(l)})\Big{)},$
where $\text{sparse-Att}_{i}^{\mathcal{N}_{+}}$ and $\text{sparse-
Att}_{i}^{\mathcal{N}_{-}}$ are different attention heads. Additionally,
balance theory [27] suggests that “the teammate of my teammate is my teammate” and “the enemy of my enemy is my teammate.” In a two-team competitive game, any walk (a sequence of nodes and edges of a graph) between an agent pair in the communication graph, comprising both positive and negative edges, leads to the same relation between the agent pair [28]. This property eliminates the ambiguity that information aggregated from the same agent (but along different walks) might carry a different teammate/enemy property.
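Extending the sketch above to the heterogeneous case, the two relation-specific attention heads are simply concatenated before the shared network $f_{mp}$ (interfaces again assumed):

```python
import torch

def hetero_message_passing_round(h, f_mp, att_pos, att_neg, mask_pos, mask_neg):
    # Separate attention heads over teammate (N+) and enemy (N-) edges,
    # concatenated with the agent's own embedding before the shared f_mp.
    agg_pos = att_pos(h, mask=mask_pos)
    agg_neg = att_neg(h, mask=mask_neg)
    return f_mp(torch.cat([h, agg_pos, agg_neg], dim=-1))
```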
Figure 1: Our sparse-Att framework consists of three modules: encoder, multi-
relational sparse attention mechanism, and value/policy network, with
homogeneous agents sharing all parameters. Agents employ different attention
heads to aggregate information alongside connections with different semantic
meanings, followed by a concatenation. $L$ is the number of the message-
passing rounds; see (9). “concat” denotes the concatenation operation. Here
only two classes (shown in red and blue) of heterogeneous agents are shown for
simplicity.
The proposed algorithmic framework is illustrated in Fig. 1. After $L$ rounds
of message passing, each agent has an updated encoding $h_{i}^{(L+1)}$. This
encoding is then fed into the value network and the policy network, which
estimate the state value and a probability distribution over all possible
actions, respectively. As homogeneous agents are interchangeable, they share
all of the parameters, including entity encoding, policy, value and message
passing. Proximal policy gradient (PPO, [29]) is employed to train the model
in an end-to-end manner. As only local information is required, the proposed
approach is decentralized. Moreover, our approach maintains the
transferability of GNN-based approaches as all the network dimensions are
invariant to agent/entity number in the system.
## IV EXPERIMENTS
### IV-A Task description
The proposed algorithm is evaluated in three swarm robotics tasks: Coverage, Formation, and ParticleSoccer [30], the first two of which are cooperative and the third competitive. The tasks are simulated in the Multiagent Particle Environment (MAPE [8], https://github.com/openai/multiagent-particle-envs). The agents in MAPE can move in a 2-dimensional space following a double-integrator dynamic model. The action space of the agents is discretized, with each agent able to accelerate/decelerate in both the $X$ and $Y$ directions. The three tasks are briefly introduced as follows.
Coverage: There are $n_{A}$ agents (light purple) and $n_{L}$ landmarks (black) in the environment (see illustration in Fig. 2(a)). The objective for the agents is to cover the landmarks in the smallest possible number of timesteps. Agents are not assigned to reach a certain landmark, but instead have to figure out the assignment via communication such that the task can be finished optimally.
Formation: There are $n_{A}$ agents (blue) and $1$ landmark (black) in the environment (see illustration in Fig. 2(b)), with $n_{A}$ being an even natural number. The agents need to split into two sub-teams of equal size, each of them forming a regular pentagon. The two regular pentagons have different sizes and are both centered at the landmark.
ParticleSoccer: There are $n_{A}$ agents and 3 landmarks in the environment (see illustration in Fig. 2(c)), with the bigger landmark being a movable ball and the two smaller ones being fixed goals. A team wins the game by pushing the black ball into the opposing team's goal. The goal of the light blue (red, resp.) team is colored blue (red, resp.).
(a) Coverage
(b) Formation
(c) ParticleSoccer
Figure 2: Three different simulation tasks used in this work.
### IV-B Implementation specifications
The agent encoder $f_{a}(\bullet)$ and the entity encoder take as input the $4$-dimensional agent states and the $2$-dimensional entity states, respectively. The queries, keys, and values in all of the sparse attention mechanisms are $128$-dimensional. The number of communication hops is $L=2$. All neural networks are fully connected with the ReLU activation function. In the sparsity-promoting function (7), $\phi_{1}$, $\phi_{2}$, and $\psi$ each have one hidden layer, with dimensions $16$, $16$, and $64$, respectively. The absolute value function is used to keep the weights of the monotonicity-preserving neural networks positive.
Evaluation is performed every $320$ episodes, and the PPO update is executed for $4$ epochs after collecting $4096$ timesteps of experience.
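For reference, these settings can be collected in a single configuration sketch (key names are ours):

```python
# Hyperparameters reported in Section IV-B (key names are illustrative).
CONFIG = {
    "agent_state_dim": 4,      # input to the agent encoder f_a
    "entity_state_dim": 2,     # input to the entity encoder
    "attention_dim": 128,      # queries, keys, and values
    "comm_hops": 2,            # L, rounds of message passing
    "phi1_hidden": 16,         # hidden sizes in the sparsity-promoting G(x)
    "phi2_hidden": 16,
    "psi_hidden": 64,
    "eval_every_episodes": 320,
    "ppo_epochs": 4,
    "rollout_timesteps": 4096,
}
```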
### IV-C Results
In the cooperative scenarios, i.e., Coverage and Formation, two metrics are used to evaluate the algorithms. The first is the average reward per step, and the second is the task success rate. Higher values mean better performance for both metrics.
We compare our algorithms with two baselines: GNN-based MARL with dense
attention mechanism [11] and MAAC [19]. These two algorithms are considered to
be strong baselines as they reported advantageous results against algorithms
including MADDPG [8], COMA [31], VDN [32] and QMIX [10]. Public repositories (https://github.com/sumitsk/matrl.git and https://github.com/shariqiqbal2810/MAAC) are used for comparison. As both
repositories also apply their algorithms on MAPE, the default hyperparameters
are used for comparison.
In simulation, we set $n_{A}=30$ and $n_{A}=20$ for Coverage and Formation, respectively. Fig. 3 and Fig. 4 demonstrate that our algorithm achieves higher rewards than the two baselines with fewer episodes. This validates that sparse-Att can accelerate the learning process by aggregating information from the agents that matter the most. Moreover, in terms of the second metric, i.e., the success rate, our algorithm consistently outperforms the two baselines by a significant margin (with a much smaller variance), as shown in Fig. 5. The evaluations of both metrics on the two scenarios provide strong support for the advantages of our algorithm.
Figure 3: Reward comparison of our algorithm against two baselines for the
Coverage task.
Figure 4: Reward comparison of our algorithm against two baselines for the
Formation task.
(a) Coverage
(b) Formation
Figure 5: Performance comparison of the three algorithms on two scenarios. Multiple policies learned by each algorithm are evaluated, and the mean/standard deviation are plotted.
For the competitive ParticleSoccer task, we set $n_{A}=20$, with both the red team and the blue team of size $\frac{n_{A}}{2}=10$. As this task is competitive, the above two metrics are no longer applicable. Instead, we let the red (blue, resp.) team play against a blue (red, resp.) team trained by another algorithm. Table III presents the results of this inter-algorithm competition. The overall score of each algorithm is the sum of the scores of its red team and its blue team when playing against the blue and red teams, respectively, of the other algorithms. The overall scores in Table III show that our algorithm learns strong policies.
TABLE III: Evaluation of three algorithms in the competitive ParticleSoccer task. Each pair is evaluated for $50$ episodes, and the $(\bullet,\bullet,\bullet)$ in each cell denotes the number of red-team winning episodes, blue-team winning episodes, and draw episodes. A draw means that neither team scores within the given episode length. $\text{win}_{\text{red}}$ and $\text{win}_{\text{blue}}$ are the net winning episodes of the red and blue teams, respectively, when competing against the blue and red teams of the other algorithms; diagonal cells correspond to self-play.

Red \ Blue | sparse-Att (ours) | dense-Att | MAAC | $\text{win}_{\text{red}}$
---|---|---|---|---
sparse-Att (ours) | $(48,0,2)$ | $(15,0,35)$ | $(26,0,24)$ | $41$
dense-Att | $(9,1,40)$ | $(5,0,45)$ | $(3,0,47)$ | $11$
MAAC | $(7,0,43)$ | $(2,0,48)$ | $(3,0,47)$ | $9$
$\text{win}_{\text{blue}}$ | $-15$ | $-17$ | $-29$ | N/A
overall score: $\text{win}_{\text{red}}+\text{win}_{\text{blue}}$ | $\mathbf{26}$ | $-6$ | $-20$ |
### IV-D Interpretability of the sparse communication graph
Let us proceed by considering the inherent sparsity in Formation and ParticleSoccer. As mentioned in the description of the Formation scenario, each pentagon formation involves half of the agents, while the sub-team assignments need to be learned. In the implementation, the reward requires that the $\frac{n_{A}}{2}$ agents closest to the landmark form the inner pentagon and that the remaining $\frac{n_{A}}{2}$ agents form the outer pentagon. As the learning algorithm converges, once a sub-team partition is learned to complete the two sub-tasks, the learned agent indexing of each team should not vary, due to the distance sorting and the fact that the two pentagons are relatively far apart. As a result, the reward for completing each sub-task is only related to the corresponding sub-team, and hence the two sub-teams are decoupled from each other. The adjacency matrix of the learned communication graph shown in Fig. 6(a) validates that the inter-team communication is very sparse. This adjacency matrix is determined up to row/column permutation, as the indexing of each sub-team is learned rather than known a priori. Moreover, within a sub-team, the algorithm learns a communication graph similar to a star graph. This can be understood as each sub-team selecting a leader. As a star graph is a connected graph with nearly the minimum number of edges, this communication protocol is both effective and efficient. Also, the length of the path between any agent pair in a star graph is no greater than $2$, which echoes the two-hop communication ($L=2$) used in the simulation: with two-hop message passing, agents can eventually communicate with agents up to two edges away, which covers all of the agents in a star graph. Note that the sparsity of the diagonal entries of the communication graph does not mean that an agent's own information is neglected, as it is separately concatenated; see (9).
Also, in the ParticleSoccer scenario, from each team’s perspective, agents
need to coordinate tightly within the team to greedily push the ball to the
other team’s goal while attending only to a small number of agents from the
other team. This leads to dense intra-team communication but relatively sparse
inter-team communication. This is validated by the approximately block-
diagonal adjacency matrix of the learned communication graph in Fig. 6(b).
(a) Formation
(b) ParticleSoccer
Figure 6: Sparse communication graphs for two scenarios. For Formation, our sparse-Att learns to split into two sub-teams as desired, and the learned sparse star-like communication graph makes communication both effective and efficient. In ParticleSoccer, sparse-Att learns to pay more attention to teammates and to a necessary subset of enemies.
## V CONCLUSIONS and FUTURE WORK
This paper exploits sparsity to scale up Multi-Agent Reinforcement Learning
(MARL), which is motivated by the fact that interactions are often sparse in
multiagent systems. We propose a new general and adaptive sparsity-inducing
activation function to empower an attention mechanism, which can learn a
sparse communication graph among agents. The sparse communication graph can
make the message-passing both effective and efficient such that the
scalability of MARL is improved without compromising optimality. Our algorithm
outperforms two baselines by a significant margin on three tasks. Moreover,
for scenarios with inherent sparsity, it is shown that the sparsity of the
learned communication graph is interpretable.
Future work will focus on combining evolutionary population curriculum
learning and graph neural network to further improve the scalability. In
addition, robust learning against evolving/learned adversarial attacks is also
of great interest.
## ACKNOWLEDGMENTS
Research is supported by Scientific Systems Company, Inc. under research agreement #SC-1661-04. The authors would like to thank Dong-Ki Kim, Samir Wadhwania, and Michael Everett for many useful discussions, and Amazon Web Services for computation support.
## References
* [1] J. Kober, J. A. Bagnell, and J. Peters, “Reinforcement learning in robotics: A survey,” _The International Journal of Robotics Research_ , vol. 32, no. 11, pp. 1238–1274, 2013.
* [2] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, _et al._ , “Human-level control through deep reinforcement learning,” _Nature_ , vol. 518, no. 7540, pp. 529–533, 2015.
* [3] D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. Van Den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, _et al._ , “Mastering the game of go with deep neural networks and tree search,” _nature_ , vol. 529, no. 7587, p. 484, 2016.
* [4] D. Silver, J. Schrittwieser, K. Simonyan, I. Antonoglou, A. Huang, A. Guez, T. Hubert, L. Baker, M. Lai, A. Bolton, _et al._ , “Mastering the game of go without human knowledge,” _Nature_ , vol. 550, no. 7676, pp. 354–359, 2017.
* [5] M. Hüttenrauch, A. Šošić, and G. Neumann, “Guided deep reinforcement learning for swarm systems,” _arXiv preprint arXiv:1709.06011_ , 2017.
* [6] D. S. Bernstein, R. Givan, N. Immerman, and S. Zilberstein, “The complexity of decentralized control of markov decision processes,” _Mathematics of operations research_ , vol. 27, no. 4, pp. 819–840, 2002.
* [7] L. Bu, R. Babu, B. De Schutter, _et al._ , “A comprehensive survey of multiagent reinforcement learning,” _IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews)_ , vol. 38, no. 2, pp. 156–172, 2008.
* [8] R. Lowe, Y. Wu, A. Tamar, J. Harb, O. P. Abbeel, and I. Mordatch, “Multi-agent actor-critic for mixed cooperative-competitive environments,” in _Advances in Neural Information Processing Systems_ , 2017, pp. 6379–6390.
* [9] P. Sunehag, G. Lever, A. Gruslys, W. M. Czarnecki, V. Zambaldi, M. Jaderberg, M. Lanctot, N. Sonnerat, J. Z. Leibo, K. Tuyls, _et al._ , “Value-decomposition networks for cooperative multi-agent learning based on team reward,” in _Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems_. International Foundation for Autonomous Agents and Multiagent Systems, 2018, pp. 2085–2087.
* [10] T. Rashid, M. Samvelyan, C. S. De Witt, G. Farquhar, J. Foerster, and S. Whiteson, “QMIX: monotonic value function factorisation for deep multi-agent reinforcement learning,” _arXiv preprint arXiv:1803.11485_ , 2018\.
* [11] A. Agarwal, S. Kumar, and K. Sycara, “Learning transferable cooperative behavior in multi-agent teams,” _arXiv preprint arXiv:1906.01202_ , 2019\.
* [12] A. Khan, E. Tolstaya, A. Ribeiro, and V. Kumar, “Graph policy gradients for large scale robot control,” _arXiv preprint arXiv:1907.03822_ , 2019.
* [13] J. Jiang, C. Dun, and Z. Lu, “Graph convolutional reinforcement learning for multi-agent cooperation,” _arXiv preprint arXiv:1810.09202_ , vol. 2, no. 3, 2018.
* [14] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin, “Attention is all you need,” in _Advances in neural information processing systems_ , 2017, pp. 5998–6008.
* [15] J. K. Chorowski, D. Bahdanau, D. Serdyuk, K. Cho, and Y. Bengio, “Attention-based models for speech recognition,” in _Advances in neural information processing systems_ , 2015, pp. 577–585.
* [16] A. Martins and R. Astudillo, “From softmax to sparsemax: A sparse model of attention and multi-label classification,” in _International Conference on Machine Learning_ , 2016, pp. 1614–1623.
* [17] M. Schlichtkrull, T. N. Kipf, P. Bloem, R. Van Den Berg, I. Titov, and M. Welling, “Modeling relational data with graph convolutional networks,” in _European Semantic Web Conference_. Springer, 2018, pp. 593–607.
* [18] Y. Yang, R. Luo, M. Li, M. Zhou, W. Zhang, and J. Wang, “Mean field multi-agent reinforcement learning,” _arXiv preprint arXiv:1802.05438_ , 2018\.
* [19] S. Iqbal and F. Sha, “Actor-attention-critic for multi-agent reinforcement learning,” _arXiv preprint arXiv:1810.02912_ , 2018.
* [20] A. Das, T. Gervet, J. Romoff, D. Batra, D. Parikh, M. Rabbat, and J. Pineau, “Tarmac: Targeted multi-agent communication,” _arXiv preprint arXiv:1810.11187_ , 2018.
* [21] V. Niculae and M. Blondel, “A regularized framework for sparse and structured neural attention,” in _Advances in Neural Information Processing Systems_ , 2017, pp. 3338–3348.
* [22] M. Blondel, A. F. Martins, and V. Niculae, “Learning classifiers with fenchel-young losses: Generalized entropies, margins, and algorithms,” _arXiv preprint arXiv:1805.09717_ , 2018.
* [23] A. Laha, S. A. Chemmengath, P. Agrawal, M. Khapra, K. Sankaranarayanan, and H. G. Ramaswamy, “On controllable sparse alternatives to softmax,” in _Advances in Neural Information Processing Systems_ , 2018, pp. 6422–6432.
* [24] G. M. Correia, V. Niculae, and A. F. Martins, “Adaptively sparse transformers,” _arXiv preprint arXiv:1909.00015_ , 2019.
* [25] A. M. Kowalski, R. D. Rossignoli, and E. M. Curado, _Concepts and recent advances in Generalized Information Measures and Statistics_. Bentham Science Publishers, 2013.
* [26] C. Dugas, Y. Bengio, F. Bélisle, C. Nadeau, and R. Garcia, “Incorporating functional knowledge in neural networks,” _Journal of Machine Learning Research_ , vol. 10, no. Jun, pp. 1239–1262, 2009.
* [27] F. Heider, “Attitudes and cognitive organization,” _The Journal of psychology_ , vol. 21, no. 1, pp. 107–112, 1946.
* [28] D. Easley, J. Kleinberg, _et al._ , “Networks, crowds, and markets: Reasoning about a highly connected world,” _Significance_ , vol. 9, pp. 43–44, 2012.
* [29] J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov, “Proximal policy optimization algorithms,” _arXiv preprint arXiv:1707.06347_ , 2017\.
* [30] E. Şahin, “Swarm robotics: From sources of inspiration to domains of application,” in _International workshop on swarm robotics_. Springer, 2004, pp. 10–20.
* [31] J. N. Foerster, G. Farquhar, T. Afouras, N. Nardelli, and S. Whiteson, “Counterfactual multi-agent policy gradients,” in _Thirty-second AAAI conference on artificial intelligence_ , 2018.
* [32] P. Sunehag, G. Lever, A. Gruslys, W. M. Czarnecki, V. Zambaldi, M. Jaderberg, M. Lanctot, N. Sonnerat, J. Z. Leibo, K. Tuyls, _et al._ , “Value-decomposition networks for cooperative multi-agent learning,” _arXiv preprint arXiv:1706.05296_ , 2017.
|